[![Documentation Status](https://readthedocs.org/projects/csp-lmc/badge/?version=latest)](https://developer.skatelescope.org/projects/csp-lmc/en/latest/?badge=latest)
[![coverage report](https://gitlab.com/ska-telescope/csp-lmc/badges/master/coverage.svg)](https://ska-telescope.gitlab.io/csp-lmc/)
[![pipeline status](https://gitlab.com/ska-telescope/csp-lmc/badges/master/pipeline.svg)](https://gitlab.com/ska-telescope/csp-lmc/pipelines)
## Table of contents
* [Introduction](#introduction)
* [Repository](#repository)
* [CSP.LMC Common Package](#csp-lmc-common)
* [Create the CSP.LMC Common Software python package](#python-package)
* [CSP_Mid LMC Deployment in Kubernetes](#mid-csplmc-kubernetes-deployment-via-helm-charts)
* [CSP_Low LMC](#csp-low-lmc)
* [Run in containers](#how-to-run-in-docker-containers)
* [Known bugs](#known-bugs)
* [Troubleshooting](#troubleshooting)
* [License](#license)
## Introduction
General requirements for the monitor and control functionality are the same for both the SKA MID and LOW telescopes. <br/>
In addition, two of three other CSP Sub-elements, namely the `Pulsar Search` and the `Pulsar Timing`, have the same functionality and use the same design in both telescopes.<br/>
Functionality common to `CSP_Low.LMC` and `CSP_Mid.LMC` includes: communication framework, logging, archiving, alarm generation, sub-arraying, some of the functionality related to handling observing mode changes, `Pulsar Search` and `Pulsar Timing`, and to some extent Very Long Baseline Interferometry (`VLBI`).<br/>
The difference between `CSP_Low.LMC` and `CSP_Mid.LMC` is mostly due to different receivers (dishes vs stations) and
different `CBF` functionality and design.<br/>
To maximize code reuse, the software common to `CSP_Low.LMC` and `CSP_Mid.LMC` is developed by the work
package `CSP_Common.LMC` and provided to work packages `CSP_Low.LMC` and `CSP_Mid.LMC`, to
be used as a base for telescope specific `CSP.LMC` software.
## Repository organization
To simplify access to the whole CSP.LMC software, the `CSP_Common.LMC`, `CSP_Low.LMC` and `CSP_Mid.LMC` software packages are hosted in the same SKA GitLab repository, named `CSP.LMC`.<br/>
The `CSP.LMC` repository is organized in three main folders, `csp-lmc-common`, `csp-low-lmc` and `csp-mid-lmc`, each presenting
the same organization:
* project source: contains the project-specific TANGO Device Class files
* pogo: contains the POGO files of the TANGO Device Classes of the project
* tests: contains the tests
* charts: stores the Helm charts to deploy the Mid CSP.LMC system in a Kubernetes environment
* docker: contains the `docker`, `docker-compose` and `dsconfig` configuration files as well as
the Makefile to generate the Docker image and run the tests

To clone the repository:
```bash
git clone https://gitlab.com/ska-telescope/csp-lmc.git
```
## Prerequisites
* A TANGO development environment properly configured, as described in the [SKA developer portal](https://developer.skatelescope.org/en/latest/tools/tango-devenv-setup.html)
* [SKA Base classes](https://gitlab.com/ska-telescope/lmc-base-classes)
* Access to a K8s/minikube cluster
# CSP_Mid.LMC
The TANGO devices of the CSP_Mid.LMC prototype run in a containerised environment.
Currently only a limited number of CSP_Mid.LMC and CBF_Mid.LMC devices run in Docker containers:
* the MidCspMaster and MID CbfMaster
* the MidCspCapabilityMonitor devices
* three instances of the CSP_Mid and CBF_Mid subarrays
* four instances of the Very Coarse Channelizer (VCC) devices
* four instances of the Frequency Slice Processor (FSP) devices
* two instances of the TM TelState Simulator devices
* one instance of the TANGO database
## Containerised Mid CSP.LMC in Kubernetes
The Mid CSP.LMC containerised TANGO servers are managed via Kubernetes.
The system is set up so that each k8s Pod has only one Docker container, which in turn
runs only one TANGO Device Server application.<br/>
Mid CSP.LMC TANGO servers rely on two different Docker images: `mid-csplmc` and `mid-cbf-mcs`. <br/>
The first one runs the CSP.LMC TANGO devices and the second those of the Mid CBF.LMC prototype.
### Mid CSP.LMC Kubernetes Deployment via Helm Charts
The deployment of the system is handled by the Helm tool via the Helm charts, a set of YAML files describing
how the Kubernetes resources are related.<br/>
The Mid CSP.LMC Helm charts are stored in the `charts` directory, organized in two sub-folders: <br/>
* csp-proto with the Helm chart to deploy only the CSP.LMC devices (MidCspCapabilityMonitor, MidCspMaster and MidCspSubarray)
* mid-csp with the Helm chart to deploy the whole Mid CSP.LMC system, including the TANGO Database and the Mid CBF.LMC devices.

In particular, the `mid-csp` chart depends on the CSP.LMC, CBF.LMC and Tango DB charts, and these dependencies are
dynamically linked by specifying the `dependencies` field in the Chart.yaml.<br/>
The `Makefile` in the csp-lmc-mid root directory provides the targets to deploy the system, stop the running services and run
the tests locally on a k8s/minikube machine.
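The `dependencies` mechanism can be sketched as follows; the chart names are taken from this README, but the versions and repository locations are illustrative assumptions, not the repository's actual Chart.yaml:

```yaml
# Hypothetical Chart.yaml fragment for the mid-csp umbrella chart
# (versions and repository locations are assumptions for this sketch).
dependencies:
  - name: csp-proto
    version: "0.1.0"
    repository: "file://../csp-proto"
  - name: mid-cbf
    version: "0.1.0"
    repository: "https://nexus.engageska-portugal.pt/repository/helm-chart"
  - name: tango-base
    version: "0.1.0"
    repository: "https://nexus.engageska-portugal.pt/repository/helm-chart"
```

With dependencies declared this way, `helm dependency update` fetches the sub-charts before the umbrella chart is installed.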
To deploy the whole Mid CSP.LMC system run:
``` bash
make deploy
```
that installs the mid-csp Helm chart, using `test` as the release name, and assigns it to the `csp-proto` namespace.
Running the command:
```bash
helm list -n csp-proto
```
an output similar to the following is shown:
```
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
test csp-proto 1 2020-09-21 10:07:19.308839059 +0200 CEST deployed mid-csp-0.1.0 0.6.8
```
To list all the pods and services in the csp-proto namespace, issue the command:
```bash
kubectl get all -n csp-proto
```
which provides output like the following:
```
NAME READY STATUS RESTARTS AGE
pod/databaseds-tango-base-test-0 1/1 Running 0 2m48s
pod/mid-cbf-cbf-proto-cbfmaster-test-0 1/1 Running 0 2m50s
pod/mid-cbf-cbf-proto-cbfsubarray01-test-0 1/1 Running 1 2m50s
pod/mid-cbf-cbf-proto-cbfsubarray02-test-0 1/1 Running 1 2m48s
pod/mid-cbf-cbf-proto-cbfsubarray03-test-0 1/1 Running 1 2m48s
pod/mid-cbf-cbf-proto-fsp01-test-0 1/1 Running 0 2m49s
pod/mid-cbf-cbf-proto-fsp02-test-0 1/1 Running 0 2m49s
pod/mid-cbf-cbf-proto-fsp03-test-0 1/1 Running 0 2m50s
pod/mid-cbf-cbf-proto-fsp04-test-0 1/1 Running 0 2m50s
pod/mid-cbf-cbf-proto-tmcspsubarrayleafnodetest-test-0 1/1 Running 0 2m49s
pod/mid-cbf-cbf-proto-tmcspsubarrayleafnodetest2-test-0 1/1 Running 0 2m48s
pod/mid-cbf-cbf-proto-vcc001-test-0 1/1 Running 3 2m47s
pod/mid-cbf-cbf-proto-vcc002-test-0 1/1 Running 3 2m50s
pod/mid-cbf-cbf-proto-vcc003-test-0 1/1 Running 3 2m50s
pod/mid-cbf-cbf-proto-vcc004-test-0 1/1 Running 3 2m49s
pod/mid-cbf-configurator-cbf-proto-test-m6j2p 0/1 Error 0 2m50s
pod/mid-cbf-configurator-cbf-proto-test-qm8xg 0/1 Completed 0 2m15s
pod/midcsplmc-configurator-csp-proto-test-d7hmp 0/1 Completed 0 2m15s
pod/midcsplmc-configurator-csp-proto-test-qnks4 0/1 Error 0 2m50s
pod/midcsplmc-csp-proto-midcapabilitymonitor-test-0 1/1 Running 3 2m48s
pod/midcsplmc-csp-proto-midcspmaster-test-0 1/1 Running 0 2m50s
pod/midcsplmc-csp-proto-midcspsubarray01-test-0 1/1 Running 1 2m50s
pod/midcsplmc-csp-proto-midcspsubarray02-test-0 1/1 Running 1 2m50s
pod/midcsplmc-csp-proto-midcspsubarray03-test-0 1/1 Running 1 2m50s
pod/tango-base-tangodb-0
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/databaseds-tango-base-test NodePort 10.103.37.75 <none> 10000:31664/TCP 2m50s
service/mid-cbf-cbf-proto-cbfmaster-test ClusterIP None <none> 1234/TCP 2m50s
service/mid-cbf-cbf-proto-cbfsubarray01-test ClusterIP None <none> 1234/TCP 2m50s
service/mid-cbf-cbf-proto-cbfsubarray02-test ClusterIP None <none> 1234/TCP 2m50s
service/mid-cbf-cbf-proto-cbfsubarray03-test ClusterIP None <none> 1234/TCP 2m50s
service/mid-cbf-cbf-proto-fsp01-test ClusterIP None <none> 1234/TCP 2m50s
service/mid-cbf-cbf-proto-fsp02-test ClusterIP None <none> 1234/TCP 2m50s
service/mid-cbf-cbf-proto-fsp03-test ClusterIP None <none> 1234/TCP 2m50s
service/mid-cbf-cbf-proto-fsp04-test ClusterIP None <none> 1234/TCP 2m50s
service/mid-cbf-cbf-proto-tmcspsubarrayleafnodetest-test ClusterIP None <none> 1234/TCP 2m50s
service/mid-cbf-cbf-proto-tmcspsubarrayleafnodetest2-test ClusterIP None <none> 1234/TCP 2m50s
service/mid-cbf-cbf-proto-vcc001-test ClusterIP None <none> 1234/TCP 2m50s
service/mid-cbf-cbf-proto-vcc002-test ClusterIP None <none> 1234/TCP 2m50s
service/mid-cbf-cbf-proto-vcc003-test ClusterIP None <none> 1234/TCP 2m50s
service/mid-cbf-cbf-proto-vcc004-test ClusterIP None <none> 1234/TCP 2m50s
service/midcsplmc-csp-proto-midcapabilitymonitor-test ClusterIP None <none> 1234/TCP 2m50s
service/midcsplmc-csp-proto-midcspmaster-test ClusterIP None <none> 1234/TCP 2m50s
service/midcsplmc-csp-proto-midcspsubarray01-test ClusterIP None <none> 1234/TCP 2m50s
service/midcsplmc-csp-proto-midcspsubarray02-test ClusterIP None <none> 1234/TCP 2m50s
service/midcsplmc-csp-proto-midcspsubarray03-test ClusterIP None <none> 1234/TCP 2m50s
service/tango-base-tangodb NodePort 10.102.174.225 <none> 3306:30633/TCP 2m50s
NAME READY AGE
statefulset.apps/databaseds-tango-base-test 1/1 2m50s
statefulset.apps/mid-cbf-cbf-proto-cbfmaster-test 1/1 2m50s
statefulset.apps/mid-cbf-cbf-proto-cbfsubarray01-test 1/1 2m50s
statefulset.apps/mid-cbf-cbf-proto-cbfsubarray02-test 1/1 2m50s
statefulset.apps/mid-cbf-cbf-proto-cbfsubarray03-test 1/1 2m50s
statefulset.apps/mid-cbf-cbf-proto-fsp01-test 1/1 2m50s
statefulset.apps/mid-cbf-cbf-proto-fsp02-test 1/1 2m50s
statefulset.apps/mid-cbf-cbf-proto-fsp03-test 1/1 2m50s
statefulset.apps/mid-cbf-cbf-proto-fsp04-test 1/1 2m50s
statefulset.apps/mid-cbf-cbf-proto-tmcspsubarrayleafnodetest-test 1/1 2m50s
statefulset.apps/mid-cbf-cbf-proto-tmcspsubarrayleafnodetest2-test 1/1 2m50s
statefulset.apps/mid-cbf-cbf-proto-vcc001-test 1/1 2m50s
statefulset.apps/mid-cbf-cbf-proto-vcc002-test 1/1 2m50s
statefulset.apps/mid-cbf-cbf-proto-vcc003-test 1/1 2m50s
statefulset.apps/mid-cbf-cbf-proto-vcc004-test 1/1 2m50s
statefulset.apps/midcsplmc-csp-proto-midcapabilitymonitor-test 1/1 2m50s
statefulset.apps/midcsplmc-csp-proto-midcspmaster-test 1/1 2m50s
statefulset.apps/midcsplmc-csp-proto-midcspsubarray01-test 1/1 2m50s
statefulset.apps/midcsplmc-csp-proto-midcspsubarray02-test 1/1 2m50s
statefulset.apps/midcsplmc-csp-proto-midcspsubarray03-test 1/1 2m50s
statefulset.apps/tango-base-tangodb 1/1 2m50s
NAME COMPLETIONS DURATION AGE
job.batch/mid-cbf-configurator-cbf-proto-test 1/1 61s 2m50s
job.batch/midcsplmc-configurator-csp-proto-test 1/1 59s 2m50s
```
The helm release can be deleted and the application stopped using the command:
```bash
make delete
```
that uninstalls the `mid-csp` chart and deletes the `test` release in the `csp-proto` namespace.
Other Makefile targets, such as `describe` and `logs`, provide some useful information when the system has been properly deployed.
## Run integration tests on a local k8s/minikube cluster
The project includes a set of tests for the `MidCspMaster` and `MidCspSubarray` TANGO devices
that can be found in the project `tests` folder.<br/>
To run the tests on the local k8s cluster, issue the `make test` command from the root project directory.<br/>
This command first deploys the system and then executes the integration tests.
After the tests end, run the `make delete` command to uninstall the Helm charts of the `test` release.
## GitLab continuous integration tests
Continuous integration tests in GitLab rely on the `.gitlab-ci.yml` configuration file that provides all the scripts to build, test and deploy the application. <br/>
This file has been updated to run the tests in the K8s environment, and any reference to the use of `docker-compose` as container manager
has been removed. <br/>
A new job has been added in the pipeline `publish` stage to release the `csp-proto` Helm chart in the SKA Helm charts repository hosted under `nexus`.
## Docker-compose support
Support for `docker-compose` has not been completely removed, even if all the main operations are
performed in the Kubernetes environment.<br/>
Use of `docker-compose` has been maintained only to simplify development on machines that
cannot run minikube in a virtual machine.<br/>
The `docker` folder of the project contains all the files required to run the system via the
docker-compose tool.
From the docker folder of the project, one can still:
* build the image running `make build`
* start the system dockers with docker-compose executing `make up`
* run the tests on the local machine calling `make test`
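For illustration, the kind of service definition these docker-compose files contain can be sketched as below; the image path and command are assumptions for the sketch, not the repository's actual configuration (the container name follows the list further down):

```yaml
# Hypothetical docker-compose fragment (image path and command are assumptions).
version: "2"
services:
  midcspmaster:
    image: nexus.engageska-portugal.pt/csp-lmc/mid-csplmc:latest
    container_name: mid-csp-lmc-midcspmaster
    network_mode: host      # no network isolation: TANGO DB reachable on host port 10000
    command: python MidCspMaster.py master
    depends_on:
      - databaseds
```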
The Docker containers running the CBF_Mid devices are instantiated by pulling the `mid-cbf-mcs:test` project image from the [Nexus repository](https://nexus.engageska-portugal.pt). <br/>
The CSP_Mid.LMC project provides a [Makefile](Makefile) to start the system containers and the tests.<br/>
The containerised environment relies on three YAML configuration files.<br/>
Each file includes the services to run the `CSP_Mid.LMC TANGO DB`, the `CSP_Mid.LMC` devices and the `Mid-CBF.LMC` TANGO devices inside separate Docker containers.<br/>
These YAML files are used by `docker-compose` to run both the CSP_Mid.LMC and CBF.LMC TANGO device
instances, that is, to run the whole `CSP_Mid.LMC` prototype.<br/>
In this way, it is possible to execute some preliminary integration tests, such as the assignment/release of receptors to a `CSP_Mid Subarray` and its configuration to execute a scan in Imaging mode.<br/>
The `CSP_Mid.LMC` and `Mid-CBF.LMC` TANGO devices are registered with the same TANGO DB, whose
configuration is performed via the `dsconfig` TANGO device provided by the [dsconfig project](https://gitlab.com/MaxIV-KitsControls/lib-maxiv-dsconfig). <br/>
To run the `CSP_Mid.LMC` prototype inside Docker containers, issue the `make up` command
from the `docker` folder of the project. At the end of the procedure the command
shows the list of the running containers:
* mid-csp-lmc-tangodb: the MariaDB database with the TANGO database tables
* mid-csp-lmc-databaseds: the TANGO DB device server
* mid-csp-lmc-cbf_dsconfig: the dsconfig container to configure the CBF.LMC devices in the TANGO DB
* mid-csp-lmc-csp_dsconfig: the dsconfig container to configure the CSP.LMC devices in the TANGO DB
* mid-csp-lmc-midcspmaster: the CspMaster TANGO device
* mid-csp-lmc-midcapabilitymonitor: the monitor devices of the CSP_Mid.LMC Capabilities
* mid-csp-lmc-midcspsubarray[01-03]: three instances of the CspSubarray TANGO device
* mid-csp-lmc-rsyslog-csplmc: the rsyslog container for the CSP.LMC devices
* mid-csp-lmc-rsyslog-cbf: the rsyslog container for the CBF.LMC devices
* mid-csp-lmc-cbfmaster: the CbfMaster TANGO device
* mid-csp-lmc-cbfsubarray[01-03]: three instances of the CbfSubarray TANGO device
* mid-csp-lmc-vcc[001-004]: four instances of the Mid-CBF VCC TANGO device
* mid-csp-lmc-fsp[01-04]: four instances of the Mid-CBF FSP TANGO device
* mid-csp-lmc-tmcspsubarrayleafnodetest/2: two instances of the TelState TANGO Device simulator provided by the CBF project to support scan configuration for Subarray1/2
To stop and remove the Docker containers, issue the command
```bash
make down
```
from the prototype root directory.
__NOTE__
>Docker containers are run with the `--network=host` option.
In this case there is no isolation between the host machine and the containers. <br/>
This means that the TANGO DB running in the container is available on port 10000 of the host machine. <br />
Running `jive` on the local host, the `CSP.LMC` and `Mid-CBF.LMC` TANGO devices registered with the TANGO DB (running in a Docker container)
can be visualized and explored.
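As a quick sketch (assuming the containers run on the local machine), a TANGO client such as `jive` can be pointed at the containerised database by setting the `TANGO_HOST` environment variable:

```shell
# The containers use --network=host, so the TANGO DB listens on
# port 10000 of the local host.
export TANGO_HOST=localhost:10000
echo "$TANGO_HOST"
# now launch a client, e.g.:  jive &
```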
## Known bugs
## Troubleshooting