{"id":3046,"date":"2021-01-26T16:28:28","date_gmt":"2021-01-26T15:28:28","guid":{"rendered":"https:\/\/www.ict.inaf.it\/computing\/?page_id=3046"},"modified":"2021-01-26T17:09:00","modified_gmt":"2021-01-26T16:09:00","slug":"hotcat-user-manual","status":"publish","type":"page","link":"https:\/\/www.ict.inaf.it\/computing\/skadc2\/additional-documentation\/hotcat-user-manual\/","title":{"rendered":"Hotcat user manual"},"content":{"rendered":"<p>[et_pb_section admin_label=&#8221;section&#8221;][et_pb_row admin_label=&#8221;row&#8221;][et_pb_column type=&#8221;1_2&#8243;][et_pb_sidebar admin_label=&#8221;Barra Laterale&#8221; orientation=&#8221;left&#8221; area=&#8221;et_pb_widget_area_7&#8243; background_layout=&#8221;light&#8221; remove_border=&#8221;off&#8221;]<\/p>\n<p>[\/et_pb_sidebar][\/et_pb_column][et_pb_column type=&#8221;1_2&#8243;][et_pb_text admin_label=&#8221;Testo&#8221; background_layout=&#8221;light&#8221; text_orientation=&#8221;left&#8221; use_border_color=&#8221;off&#8221; border_color=&#8221;#ffffff&#8221; border_style=&#8221;solid&#8221;]<\/p>\n<pre class=\"aLF-aPX-K0-aPE\">----+ HOTCAT USER MANUAL +----\r\n\r\n(version 1.0)\r\n\r\nAuthors: giuliano.taffoni@inaf.it\r\n         gianmarco.maggio@inaf.it\r\n\r\nSupport Email: for any support requests please contact help.hotcat@inaf.it\r\n\r\n*******************************************************************************\r\nIMPORTANT: In case of use of the HOTCAT computing infrastructure, on your paper\r\nyou have to cite the following papers:\r\n\r\n [1] Taffoni, Giuliano, Ugo Becciani, Bianca Garilli, Gianmarco Maggio, Fabio\r\n    Pasian, Grazia Umana, Riccardo Smareglia and Fabio Vitello.\r\n    ''CHIPP: INAF pilot project for HTC, HPC and HPDA.''\r\n    ArXiv abs\/2002.01283 (2020)\r\n [2] Bertocco, Sara, David Goz, Laura Tornatore, Antonio Ragagnin, G. Maggio,\r\n     F. 
Gasparo, Claudio Vuerli, Gaia Taffoni and Mateus Molinaro.\r\n     INAF Trieste Astronomical Observatory Information Technology Framework.\r\n     arXiv: Instrumentation and Methods for Astrophysics (2019)\r\n*******************************************************************************\r\n\r\n0. Index\r\n    0.1 Conventions\r\n1. Cluster overview and characteristics (cpu, ram etc.)\r\n    1.1 The SKADC partition\r\n    1.2 Storage and Quotas\r\n2. Access the cluster\r\n3. The queue system: SLURM\r\n    3.1 List of Useful Commands\r\n    3.2 Submit jobs on the cluster with SLURM by examples\r\n4. Software and modules\r\n    4.1 Environmental Modules\r\n    4.2 Containers\r\n      4.2.1 Executing the SKADC container\r\n      4.2.2 Executing your own container\r\n      4.2.3 Submit a container job\r\n    4.3 Compiling Software\r\n    4.4 Python packages and virtual environment\r\n\r\n0.1 Conventions\r\n\r\nHardware:\r\n\r\n - login node: node used to access the cluster\r\n - compute node: node where the code is actually running\r\n - storage node: node where the data is stored\r\n - home storage: the file system directory containing files for a given user on\r\n                the cluster\r\n - scratch storage: large capacity and high performance temporary storage to be\r\n                   used while applications are running on the cluster.\r\n\r\nThe examples:\r\n\r\n- an example is inside a ++++ section;\r\n- lines starting with '$' are commands entered by the user;\r\n- lines starting with '## ' are comments.\r\n\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n$ cat examples.sh\r\n\r\n#!\/bin\/bash\r\n## This is an example\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n\r\n\r\n1. 
Cluster overview and characteristics (cpu, ram etc.)\r\n--------------------------------------------------------\r\nThe HOTCAT cluster is composed of three partitions:\r\na) base;\r\nb) chipp;\r\nc) skadc.\r\n\r\nThe three partitions share the same login node \"amonra.oats.inaf.it\".\r\n\r\n1.1 The SKADC partition\r\nIt is composed of 20 computing nodes, 800 cores in total, and 6 GB RAM per core;\r\nin detail, each compute node has:\r\n\r\n - 4x10 Core (40 cores) Haswell E5-4627v3 @ 2.60GHz\r\n - 256 GB DDR3 1333 MHz\r\n - Network: InfiniBand ConnectX 56 Gb\/s and 1 Gb\/s Ethernet.\r\n\r\nTo reserve some resources for the operating system and the cluster services,\r\nthe user will be able to use 38 cores out of 40 and 240 GB of RAM per node.\r\n\r\nThose nodes are referred to as GEN9 in the rest of this document.\r\n\r\n1.2 Storage and Quotas\r\nAll users have access to a high capacity primary storage.\r\nThis system currently provides 50 TB (terabytes) of storage.\r\nThe integrity of the data is protected by a RAID 6 system, but no backups are\r\nmade, so copy any data you must keep safe out to more secure storage.\r\n\r\nHOTCAT provides each user a home directory space on the primary\r\nstorage that is accessible from all HOTCAT nodes:\r\n\r\n\/u\/username\r\n\r\nYour use of this space is limited by a storage quota that applies to your\r\naccount's usage as a whole.\r\n\r\nHOTCAT provides large capacity (600 TB) and high performance (2 GB\/s I\/O) storage\r\nto be used while applications are running on the supercomputer.\r\nThis is the scratch parallel file system, based on BeeGFS and 4 storage nodes.\r\nThe scratch space is a set-aside area of primary storage, and you can find\r\na scratch space dedicated to the SKA Data Challenge at:\r\n\r\n\/beegfs\/skadc\r\n\r\nThe space is organised as follows:\r\n\r\nskadc -+\r\n       |\r\n       +- data\r\n       +- doc\r\n       +- singularity\r\n       +- skadc04\r\n       +- skadc05\r\n       +- skadc06\r\n       +- skadc07\r\n       +- software\r\n\r\n\r\nThe data directory is 
where you will find SKADC data.\r\nNOTE: Copy the data out into your work directory in \/beegfs.\r\n\r\nThe 'doc' directory contains the documentation and some examples for executing\r\njobs on the cluster and for running singularity.\r\n\r\nThe 'singularity' directory contains the shared singularity images.\r\n\r\n'skadc0X' is the space reserved to each group for computing, private data and\r\nsoftware.\r\n\r\n'software' is the directory for the shared software used through the containers.\r\nSome software is too big to include in a docker\/singularity container (the\r\nimage becomes too large), so we install it locally.\r\n\r\nThe \/beegfs filesystem is a high performance parallel filesystem that must be\r\nused during application runs, but:\r\nTHERE IS NO BACKUP, IT IS INTRINSICALLY SUBJECT TO FAILURES, AND IT IS HIGHLY\r\nRISKY TO LEAVE DATA THERE FOR A LONG TIME. EACH YEAR WE WILL CLEAN IT.\r\n\r\nSUGGESTION: always use the \/beegfs partition for running programs, then\r\ncopy back to your \/u\/username disk only the data necessary to preserve.\r\n\r\n\r\n\r\n2.0 Access the Cluster\r\n----------------------\r\nTo access the cluster the user must ssh to the login node:\r\n\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n$ ssh username@amonra.oats.inaf.it\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n\r\nThe login is done using an SSH key (NO PASSWORD IS REQUIRED) that the user must\r\nprovide to the administrators at registration.\r\n\r\nThe user is assigned to the skadc partition and to a set of resources according to\r\nher capabilities (project, funding, etc.).\r\n\r\nThe user must use the compute nodes to execute her programs; under no\r\ncircumstances is it allowed to run programs on the login node. 
Programs running on the\r\nlogin node will be killed without notice!\r\n\r\nCompute nodes can also be used to compile code or for interactive post\r\nprocessing using interactive jobs (see below).\r\n\r\nSoftware is distributed either using singularity containers or environmental\r\nmodules (see below).\r\n\r\n3.0 The queue system: SLURM\r\n---------------------------\r\nThe Simple Linux Utility for Resource Management (SLURM) is an open source,\r\nfault-tolerant, and highly scalable cluster management and job scheduling system\r\nfor large and small Linux clusters.\r\n\r\n3.1 List of Useful Commands\r\nMan pages exist for all SLURM daemons, commands, and API functions.\r\nThe on-line manual is available at: https:\/\/slurm.schedmd.com\/\r\n\r\n*******************************************************************************\r\nNOTE: For most users it is sufficient to learn the following commands:\r\n      srun, sbatch, squeue and scancel.\r\n*******************************************************************************\r\n\r\n\r\nThe command option --help also provides a brief summary of options.\r\nNote that the command options are case sensitive.\r\n\r\n+------------------------------------------------------------------------------+\r\n| sbatch  | used to submit a job script for later execution. The script will   |\r\n|         | typically contain one or more commands to launch parallel tasks.   |\r\n+------------------------------------------------------------------------------+\r\n| squeue  | reports the state of jobs or job steps. It has a wide variety of   |\r\n|         | filtering, sorting, and formatting options. By default, it reports |\r\n|         | the running jobs in priority order and then the pending jobs in    |\r\n|         | priority order.                                                    
|\r\n+---------+--------------------------------------------------------------------+\r\n| srun    | used to submit a job for execution or initiate job steps in real   |\r\n|         | time. srun has a wide variety of options to specify resource       |\r\n|         | requirements, including: minimum and maximum node count, processor |\r\n|         | count, specific nodes to use or not use, and specific node         |\r\n|         | characteristics (such as memory, disk space, certain required      |\r\n|         | features, etc.). A job can contain multiple job steps executing    |\r\n|         | sequentially or in parallel on independent or shared nodes within  |\r\n|         | the job's node allocation.                                         |\r\n+---------+--------------------------------------------------------------------+\r\n| scancel | cancels a running job.                                             |\r\n+------------------------------------------------------------------------------+\r\n\r\n\r\n3.2 Submit jobs on the cluster with SLURM by examples\r\n\r\nUsers can submit jobs using the sbatch command.\r\n\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n$ sbatch job_script.sh\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n\r\nIn the job script, the sbatch parameters are defined using the #SBATCH\r\ndirectives.\r\nUsers can also start jobs directly with the srun command, but the best way to\r\nsubmit a job is to use sbatch to allocate the required resources with the\r\ndesired walltime, and then call mpirun or srun inside the script.\r\n\r\nHere is a simple example where we execute 2 system commands inside the script,\r\nsleep and hostname.\r\n\r\nThis job is named TestJob and runs on the skadc partition; we allocate 1\r\ncompute node and 128 GB RAM, define the output files,\r\nand request 8 hours of 
walltime.\r\n\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n#!\/bin\/bash\r\n#SBATCH -J TestJob\r\n#SBATCH -p skadc\r\n#SBATCH -N 1\r\n#SBATCH --mem=128G\r\n#SBATCH -o TestJob-%j.out\r\n#SBATCH -e TestJob-%j.err\r\n#SBATCH --time=00-08:00:00\r\nsleep 5\r\nhostname\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n\r\nWe could do the same directly using the srun command (it accepts only one\r\nexecutable as argument):\r\n\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n$ srun -N1 --time=8:00:00 --mem=128G -p skadc hostname\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n\r\n\r\nTo run interactive jobs, users can call srun with some specific arguments.\r\n\r\nFor example:\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n$ srun -N2 --time=01:20:00 -p skadc --pty -u bash -i -l\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n\r\nThis command will return a console on the compute nodes.\r\n\r\nEvery command entered there will be executed on all allocated\r\ncompute nodes.\r\n\r\n\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n## On the login node (amonra.oats.inaf.it)\r\n\r\n[amonra]$ hostname\r\namonra\r\n[amonra]$ srun --mem=4096 --nodes=1 --ntasks-per-node=4 --time=01:00:00 -p skadc \\\r\n          --pty \/bin\/bash\r\n[gen10-09]$ hostname\r\ngen10-09\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n\r\nIn this example we request an interactive console on 1 node, using 4 CPUs (cores)\r\nand a total of 4 GB of RAM for 1 hour.\r\nAnother way to start an interactive job is to call salloc.\r\nUse whichever you prefer.\r\n\r\n\r\n4.0 Software and modules\r\n------------------------\r\n\r\nTo access software on Linux systems, use the module command 
to load the software\r\ninto your environment.\r\n\r\nWe recommend using the default_gnu module unless you know exactly\r\nwhat you are doing. At login and in any job script type:\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n% module load default_gnu\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n\r\nNOTE: do not put this command in your bashrc or bash_profile, as it could conflict\r\nwith your SLURM jobs.\r\n\r\n4.1 Environmental Modules\r\nYou can find other modules available on HOTCAT using the module command:\r\n\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n$ module avail\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n\r\nFor example, if you want to use version 9.3.0 of the gcc compiler, you would\r\ntype in your job script or at the command line:\r\n\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n% module load gnu\/9.3.0\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\nTo find out what module versions are available, use the module avail command\r\nto search by name.\r\n\r\nFor example, to find out what versions of the gnu compiler are available:\r\n\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n% module avail gnu\r\n\r\n-------------------- \/opt\/cluster\/Modules\/3.2.10\/compilers ---------------------\r\ngnu\/4.8.5 gnu\/9.3.0\r\n\r\n% module avail fftw\r\n\r\n-------------------- \/opt\/cluster\/Modules\/3.2.10\/libraries ---------------------\r\nfftw\/2.1.5\/openmpi\/3.1.6\/gnu\/4.8.5 fftw\/2.1.5\/openmpi\/4.0.3\/gnu\/9.3.0\r\nfftw\/2.1.5\/openmpi\/3.1.6\/gnu\/9.3.0 fftw\/3.3.8\/openmpi\/3.1.6\/gnu\/4.8.5\r\nfftw\/2.1.5\/openmpi\/3.1.6\/pgi\/19.10 
fftw\/3.3.8\/openmpi\/3.1.6\/gnu\/9.3.0\r\nfftw\/2.1.5\/openmpi\/4.0.3\/gnu\/4.8.5\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n\r\nTo clean your module environment, use the purge command:\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n% module purge\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n\r\nTo check the loaded modules, use the list command:\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n% module list\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n\r\n4.2 Containers\r\n\r\nWe support the use of singularity containers, which can either be created by the\r\nuser or are already available on the cluster.\r\n\r\nThe \"official\" SKADC container is\r\n\r\nskadc_software_0.0.5.sif\r\n\r\nIt could be updated\/integrated with other containers during the challenge if\r\nrequested\/needed. It is an Ubuntu 18.04 image with the KERN suite software, python3,\r\nastropy, CASA, CARTA, Sofia2 etc. 
\r\nThe list of available software is in \/beegfs\/skadc\/doc\/SOFTWARE_LIST\r\n\r\n4.2.1 Executing the SKADC container\r\n\r\nThe container is fully isolated; to run it properly you can use the\r\nexample script available at \/beegfs\/skadc\/doc\/:\r\n\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n% cat \/beegfs\/skadc\/doc\/run_singularity_local.sh\r\n\r\n#!\/bin\/bash\r\nBASE_SINGULARITY_DIR=\/beegfs\/skadc\/singularity\/\r\nCONTAINER_NAME=skadc_software\r\nCONTAINER_VERSION=0.0.5\r\nif [ 'XXX'$1 = 'XXX' ]; then\r\n    COMMAND=bash\r\nelse\r\n    COMMAND=$1\r\nfi\r\nHOMEDIR=`mktemp -d -t singularity_XXXXXXX`\r\nsingularity run  --pid --no-home --home=\/home\/skauser --workdir ${HOMEDIR}\/tmp -B${HOMEDIR}:\/home\/ --containall --cleanenv -B \/beegfs:\/beegfs ${BASE_SINGULARITY_DIR}${CONTAINER_NAME}_${CONTAINER_VERSION}.sif ${COMMAND}\r\nrm -fr ${HOMEDIR}\r\n\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n\r\nYou can customise this script as you like. 
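Before customising the script, it helps to understand its argument handling: when no argument is given, COMMAND falls back to an interactive bash shell; otherwise the supplied script or executable is run. Here is a minimal, standalone sketch of just that idiom (the pick_command function name is ours, for illustration; it only echoes the choice instead of launching singularity):

```shell
#!/bin/sh
# Sketch of the default-argument idiom used by run_singularity_local.sh.
pick_command() {
    if [ "XXX$1" = "XXX" ]; then
        # no argument given: run an interactive shell in the container
        echo "bash"
    else
        # otherwise run the supplied script or executable
        echo "$1"
    fi
}

pick_command                                 # prints: bash
pick_command /beegfs/skadc/doc/example.sh    # prints the given script path
```

Note the quoting: writing "XXX$1" keeps the test well-formed even when the argument contains spaces, which the unquoted 'XXX'$1 in the original script does not guarantee.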
Remember that the only persistent\r\ndirectory is \/beegfs; for security reasons you cannot share your home\r\ndirectory.\r\n\r\nThe best way to execute your code through a container is to prepare a script\r\n(or a Python script), for example:\r\n\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n% sh run_singularity_local.sh \/beegfs\/skadc\/doc\/example.sh\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n\r\nWhere example.sh is:\r\n\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n% cat \/beegfs\/skadc\/doc\/example.sh\r\n\r\n#!\/bin\/bash\r\ncd \/beegfs\/skadc\/doc\r\necho \"Just an example\"\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n\r\nor\r\n\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n% sh run_singularity_local.sh \/beegfs\/skadc\/doc\/example.py\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n\r\n\r\nSofia2 (https:\/\/github.com\/SoFiA-Admin\/SoFiA-2) and CARTA can be executed\r\ninside the container:\r\n\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n% sh run_singularity_local.sh\r\nsingularity&gt; \/beegfs\/skadc\/software\/SoFiA-2-2.2.0\/sofia\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n\r\nor\r\n\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n% sh run_singularity_local.sh \/beegfs\/skadc\/doc\/Sofia.sh\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n\r\n\r\n4.2.2 Executing your own container\r\nYou can execute your own container on the cluster if it is available on Docker Hub\r\nor any other Docker registry.\r\n\r\nYou can use \/beegfs\/skadc\/doc\/run_singularity.sh as 
example.\r\n\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n% cat run_singularity.sh\r\n\r\n#!\/bin\/bash\r\nexport CONTAINER_NAME=morgan1971\/skadc_software\r\nexport CONTAINER_VERSION=0.0.5\r\nexport BASE_PORT=\r\nif [ 'XXX'$1 = 'XXX' ]; then\r\n    COMMAND=bash\r\nelse\r\n    COMMAND=$1\r\nfi\r\nHOMEDIR=`mktemp -d -t singularity_XXXXXXX`\r\nmkdir $HOMEDIR\/tmp\r\nmkdir $HOMEDIR\/home\r\nsingularity run  --pid --no-home --home=\/home\/skauser --workdir ${HOMEDIR}\/tmp -B${HOMEDIR}:\/home\/ -B\/beegfs:\/beegfs --containall --cleanenv docker:\/\/${CONTAINER_NAME}:${CONTAINER_VERSION} $COMMAND\r\nrm -fr ${HOMEDIR}\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n\r\n\r\n4.2.3 Submit a container job\r\n\r\nTo execute your code on the cluster you must submit a job to the queue system.\r\n\r\nAn example submission script is available at \/beegfs\/skadc\/doc\/submit_example.slurm\r\nTo run on the cluster nodes you must submit the job using the sbatch command:\r\n\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n% sbatch submit_example.slurm\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n\r\n\r\nCheck the status of your job with the squeue command:\r\n\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n% squeue -u username\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n\r\n\r\n\r\n4.3 Compiling Software\r\nIf you want to run a compile, look at interactive job submission.\r\nStart an interactive session and run your compile there.\r\nDo NOT run compiles on the login nodes.\r\nWe suggest requesting 4 cores for your interactive job; here is an example with\r\nthe skadc partition (change it according to your needs):\r\n\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n% srun --mem=4096 --nodes=1 
--ntasks-per-node=4 -p skadc --pty \/bin\/bash\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n\r\nIf you want to compile software inside a container, you need to execute the container\r\nand then work in your \/beegfs directory. For example:\r\n\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n% srun --mem=4096 --nodes=1 --ntasks-per-node=4 -p skadc --pty \/bin\/bash\r\n\r\ngen09-10% cd \/beegfs\/skadc\/skadc04\/\r\ngen09-10% sh \/beegfs\/skadc\/doc\/run_singularity_local.sh\r\nsingularity&gt; cd \/beegfs\/skadc\/skadc04\/your_software_dir\r\nsingularity&gt; .\/configure --prefix=\/beegfs\/skadc\/skadc04\/wherever_you_like\r\nsingularity&gt; make; make install\r\nsingularity&gt; exit\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n\r\nThen you can execute the software from inside the container as\r\n\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n% \/beegfs\/skadc\/doc\/run_singularity_local.sh   \\\r\n                  \/beegfs\/skadc\/skadc04\/wherever_you_like\/mysoftware\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n\r\nor using a script.\r\n\r\n\r\n\r\n4.4 Python packages and virtual environment\r\nThe HOTCAT cluster provides two Python versions:\r\n - python 2.7 (deprecated)\r\n - python 3.6\r\n\r\nWe install some of the most common Python packages, but not all.\r\nA user can customise her Python environment using a virtual environment, which\r\nallows any user to install any Python modules.\r\nA Python virtual environment is a self-contained directory tree that contains a\r\nPython installation for a particular version of Python, plus a number of\r\nadditional packages.\r\n\r\nDifferent applications can then use different virtual environments.\r\n\r\nThe module used to create and manage virtual environments is called venv.\r\nIf you have multiple versions of Python on your system, 
you can select a\r\nspecific Python version by running python3 or whichever version you want.\r\n\r\nTo create a virtual environment, decide upon a directory where you want to place\r\nit, and run the venv module as a script with the directory path:\r\n\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n$ python3 -m venv tutorial-env\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n\r\nThis will create the tutorial-env directory if it doesn\u2019t exist, and also create\r\ndirectories inside it containing a copy of the Python interpreter, the standard\r\nlibrary, and various supporting files.\r\n\r\nOnce you\u2019ve created a virtual environment, you may activate it.\r\n\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n$ source tutorial-env\/bin\/activate\r\n(tutorial-env) $\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n\r\nYou can install, upgrade, and remove packages using a program called pip.\r\n\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n(tutorial-env) $ pip search astronomy\r\nastronomy (0.0.1)             - Astronomy!\r\ncatastropy (0.0dev)           - (cat)astronomy\r\ngastropy (0.0dev)             - (g)astronomy\r\npykepler (1.0.1)              - Algorithms for positional astronomy\r\n...\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n\r\nYou can install the latest version of a package by specifying a package\u2019s name:\r\n\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n(tutorial-env) $ pip install astronomy\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n\r\nIf you do not need the environment anymore you can just remove the tutorial-env\r\ndirectory.\r\n\r\nTo submit a batch job that uses the environment you can 
use:\r\n\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n#!\/bin\/bash\r\n#SBATCH -J TestJob\r\n#SBATCH -p skadc\r\n#SBATCH -N 1\r\n#SBATCH --ntasks-per-node=36\r\n#SBATCH --mem-per-cpu=10000\r\n#SBATCH -o TestJob-%j.out\r\n#SBATCH -e TestJob-%j.err\r\n#SBATCH --time=30\r\nsource tutorial-env\/bin\/activate\r\n\r\npython3 your_python_code.py\r\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n\r\n\r\nMore information is available on the official Python page:\r\nhttps:\/\/docs.python.org\/3\/tutorial\/venv.html<\/pre>\n<p>[\/et_pb_text][\/et_pb_column][\/et_pb_row][\/et_pb_section]<\/p>\n","protected":false},"excerpt":{"rendered":"<p>&#8212;-+ HOTCAT USER MANUAL +&#8212;- (version 1.0) Authors: giuliano.taffoni@inaf.it gianmarco.maggio@inaf.it Support Email: for any support requests please contact help.hotcat@inaf.it ******************************************************************************* IMPORTANT: In case of use of the HOTCAT computing infrastructure, on your paper you have to cite the following papers: [1] Taffoni, Giuliano, Ugo Becciani, Bianca Garilli, Gianmarco Maggio, Fabio Pasian, Grazia Umana, Riccardo Smareglia 
[&hellip;]<\/p>\n","protected":false},"author":260,"featured_media":0,"parent":3059,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_et_pb_use_builder":"on","_et_pb_old_content":"","_et_gb_content_width":""},"_links":{"self":[{"href":"https:\/\/www.ict.inaf.it\/computing\/wp-json\/wp\/v2\/pages\/3046"}],"collection":[{"href":"https:\/\/www.ict.inaf.it\/computing\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/www.ict.inaf.it\/computing\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/www.ict.inaf.it\/computing\/wp-json\/wp\/v2\/users\/260"}],"replies":[{"embeddable":true,"href":"https:\/\/www.ict.inaf.it\/computing\/wp-json\/wp\/v2\/comments?post=3046"}],"version-history":[{"count":15,"href":"https:\/\/www.ict.inaf.it\/computing\/wp-json\/wp\/v2\/pages\/3046\/revisions"}],"predecessor-version":[{"id":3067,"href":"https:\/\/www.ict.inaf.it\/computing\/wp-json\/wp\/v2\/pages\/3046\/revisions\/3067"}],"up":[{"embeddable":true,"href":"https:\/\/www.ict.inaf.it\/computing\/wp-json\/wp\/v2\/pages\/3059"}],"wp:attachment":[{"href":"https:\/\/www.ict.inaf.it\/computing\/wp-json\/wp\/v2\/media?parent=3046"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}