README.txt

Simple communication test that involves 4 docker containers:

- client (container_name: client, files to run: 'vos_rest_client.py' and 'dataArchiver.py')
- server (container_name: transfer_service)
- RabbitMQ (container_name: rabbitmq)
- Redis (container_name: redis)

[...]
These last two containers are employed to test transfers using xrootd python bin[...]

You can start the whole environment with (launch the following command from the 'docker' dir):

$ docker-compose up

Once all the containers are up and running, open another shell and access the 'client' container:

$ docker exec -it client /bin/bash

At this point you can launch 'vos_rest_client.py' within the 'client' container using the following syntax:

$ python vos_rest_client.py QUEUE_NAME

For example:

$ python vos_rest_client.py start_job_queue

The output should be something like this:

client@a89c0bb962f7:~$ python vos_rest_client.py start_job_queue
Sending transfer request:
{
    "transfer": {
[...]

After processing the request, the server launches an internal thread, delayed by 10 seconds, which changes the state of the job from "PENDING" to "EXECUTING".
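The request flow above can be sketched in Python. This is only an illustration, not the actual 'vos_rest_client.py': the payload shape is a guess based on the truncated output above, and publishing assumes the third-party 'pika' AMQP package plus the 'rabbitmq' host from the compose network.

```python
import json

def build_transfer_request():
    # Shape guessed from the README output, which only shows the opening
    # lines of the JSON ('"transfer": {'); the real fields will differ.
    return {"transfer": {}}

def send_request(queue_name, payload):
    # Publish the JSON request to RabbitMQ. 'pika' is imported lazily so
    # that build_transfer_request() stays usable without a broker around.
    import pika
    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host="rabbitmq"))
    channel = connection.channel()
    channel.queue_declare(queue=queue_name)
    channel.basic_publish(exchange="", routing_key=queue_name,
                          body=json.dumps(payload))
    connection.close()

# e.g. send_request("start_job_queue", build_transfer_request())
```

The same send_request() helper would also cover the polling case, with the job ID in the payload instead of the transfer description.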
You can easily verify this change by launching the client again as follows:

$ python vos_rest_client.py QUEUE_NAME JOB_ID

For example, in our case:

$ python vos_rest_client.py poll_job_queue 3ff92acedc9611eabf140242ac1f0007

The output should be something like this:

client@a89c0bb962f7:~$ python vos_rest_client.py poll_job_queue 3ff92acedc9611eabf140242ac1f0007
Sending poll request:
{
    "jobID": "3ff92acedc9611eabf140242ac1f0007"
[...]
    }
}

---------------------------------------------------------------------------------------------------------------

Another thing you can do is to launch the 'dataArchiver.py' client. Launching the client without any argument will show you how to use it:

client@28970a09202d:~$ python3 dataArchiverCli.py

NAME
    dataArchiverCli.py

SYNOPSIS
    python3.x dataArchiverCli.py COMMAND USERNAME

DESCRIPTION
    The purpose of this client application is to notify the VOSpace backend that data is ready to be saved somewhere.

    The client accepts only one command at a time. This command is mandatory. A list of supported commands is shown below:

    cstore    performs a 'cold storage' request, data will be saved on the tape library
    hstore    performs a 'hot storage' request, data will be saved on a standard server

    The client also needs to know the username associated with a storage request process. The username must be the same used for accessing the transfer node.

For example, if we want to perform a 'cold storage' request for the 'transfer_service' user, we do:

client@28970a09202d:~$ python3 dataArchiverCli.py cstore transfer_service
Sending CSTORE request...

WARNING!!! WARNING!!! WARNING!!! WARNING!!! WARNING!!!
If you confirm, all your data on the transfer node will be available in read-only mode for all the time the archiving process is running.

WARNING!!! WARNING!!! WARNING!!! WARNING!!! WARNING!!!

Are you sure to proceed? [yes/no]: yes
JobID: c63697eafbf711eaa44d0242ac1c0008
Store process started successfully!

###############################################################################################################

You can access the rabbitmq web interface via browser:
[...]
   $ docker network inspect vos-ts_backend_net | grep -i -A 3 rabbitmq
2) Open your browser and point it to http://IP_ADDRESS:15672 (user: guest, password: guest)

You can access the redis server from the 'client' container:
1) Use redis-cli to connect to redis:
   $ redis-cli -h redis -n DB_INDEX
   NOTE: DB_INDEX is a non-negative number representing the db to work on:
   - 0: jobs that retrieve data (pullFromVOSpace)
   - 1: jobs that store data (push)
   - 2: scheduling queues (not yet implemented)
2) You can now perform a query based on the job ID, for example show the job object info stored on db = 1:
   get JOB_ID
   (if we consider the last example: "get c63697eafbf711eaa44d0242ac1c0008")
3) This will return all the information regarding the job

You can access the file catalog from the 'client' container:
1) Access the db via psql client (password: postgres):
   $ psql -h file_catalog -U postgres -d vospace_testdb
2) You can now perform a query, for example show all the tuples of the Node table:
[...]
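The redis-cli steps above can also be scripted. A minimal sketch, assuming the third-party 'redis' Python package is available in the container; the db-index mapping is the one listed in the NOTE above:

```python
def db_index(job_kind):
    # Logical redis database per job kind, as documented above.
    mapping = {
        "pull": 0,   # jobs that retrieve data (pullFromVOSpace)
        "push": 1,   # jobs that store data
        "sched": 2,  # scheduling queues (not yet implemented)
    }
    return mapping[job_kind]

def get_job(job_id, job_kind="push"):
    # Equivalent of "redis-cli -h redis -n 1" followed by "get JOB_ID".
    # 'redis' is imported lazily: it is a third-party package and only
    # needed when actually querying the server.
    import redis
    client = redis.Redis(host="redis", db=db_index(job_kind))
    return client.get(job_id)

# e.g. get_job("c63697eafbf711eaa44d0242ac1c0008")
```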
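For reference, the argument handling that the 'dataArchiverCli.py' usage text describes can be sketched like this. It is an illustration of the documented interface (one mandatory command, cstore or hstore, plus a username), not the real implementation:

```python
import sys

# Supported commands, as listed in the usage text shown earlier.
COMMANDS = {
    "cstore": "cold storage: data will be saved on the tape library",
    "hstore": "hot storage: data will be saved on a standard server",
}

def parse_args(argv):
    # Expect exactly one command plus the username used on the transfer
    # node; return None when the usage text should be printed instead.
    if len(argv) != 3 or argv[1] not in COMMANDS:
        return None
    return argv[1], argv[2]

if __name__ == "__main__":
    parsed = parse_args(sys.argv)
    if parsed is None:
        print("usage: dataArchiverCli.py COMMAND USERNAME")
        sys.exit(1)
    command, username = parsed
    print("Sending %s request..." % command.upper())
```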