.. _dl2:

DL2 files generation
====================
The script `DL1_to_DL2.py` processes DL1 data and produces DL2 files.

Mandatory arguments are the name of the source (option ``-n``) and a configuration file `config_dl1_to_dl2.yaml` (option ``--config``). The script searches for the configuration file in the ``$CONFIG`` folder specified in the initial settings.
.. code-block::

   usage: DL1_to_DL2.py [-h] [--verbose] --source_name SOURCE_NAME --config CONFIG [--dry]
                        [--outdir OUTDIR] [--tcuname TCUNAME] [--runlist RUNLIST]
                        [--distance DISTANCE] [--ra RA] [--dec DEC] [--submit] [--globber]

   DL1 to DL2 converter

   optional arguments:
     --config CONFIG, -c CONFIG
                           Specify a personal config file for the analysis
     --dec DEC             Dec coordinate of the target. To add if you want to
                           use a custom position
     --distance DISTANCE, -dis DISTANCE
                           Max distance in degrees between the target position
                           and the run pointing position for the run selection.
                           Negative value means no selection using this
                           parameter (default: -1).
     --globber, -g         If True, overwrites existing output file without
                           asking (default: False).
     --outdir OUTDIR, -o OUTDIR
                           Directory to store the output
     --ra RA               RA coordinate of the target. To add if you want to
                           use a custom position
     --runlist RUNLIST, -rl RUNLIST
                           File with a list of runs and the associated nights
                           to be analysed
     --source_name SOURCE_NAME, -n SOURCE_NAME
                           Name of the source
     --submit              Submit the cmd to slurm on site
     --tcuname TCUNAME     Apply run selection based on TCU source name
     --verbose, -v         Increase output verbosity
     -h, --help            show this help message and exit
Example of the configuration file `config_dl1_to_dl2.yaml`:

.. code-block::

   jobmanager: ../jobmanager

   # Database file name
   db: database.csv

   # LST real data path (don't modify it)
   data_folder: /fefs/aswg/data/real

   # Path to the main data folder of the user.
   # Change it according to your working environment
   base_dir: /fefs/aswg/workspace/alice.donini/Analysis/data

   # Path to personal directory where output data will be saved.
   # Uncomment and modify in case you want to use a non standard path.
   # Otherwise files will be saved to {base_dir}/DL2/{source_name}/{night}/{version}/{cleaning}
   #dl2_output_folder: ../DL2/Crab

   # Uncomment and modify in case you want to specify a custom
   # configuration file for the lstchain script
   #lstchain_config: ../lstchain_84.json

   # Uncomment the line below and specify the nights only if the database
   # search is used instead of a custom runlist file
   #night: [20210911, 20210912] # day(s) of observation (more than one is possible)
Edit the configuration file: change the paths according to your working directory and modify the DL1 data information used to search for the files at the IT.

There are two ways to select the data to analyze: through a search in the database (currently outdated) or through a run list given by the user.

If you want to use the generated database (:ref:`db_generation`) to find the runs to analyze, fill in the information about the wanted nights in the configuration file `config_dl1_to_dl2.yaml`.
An extra selection can be done on coordinates (in which case ``--distance``, ``--ra`` and ``--dec`` are mandatory arguments) or on the source name as stored in TCU (argument ``--tcuname``).
If none of these selection methods is given, all the runs available on the dates specified in the configuration file are considered for the analysis.
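The coordinate-based selection amounts to an angular-separation cut between the target position and each run's pointing. A minimal sketch of that logic in plain Python (the run pointings below and the selection loop are illustrative, not the script's actual implementation):

.. code-block::

   import math

   def angular_separation(ra1, dec1, ra2, dec2):
       """Great-circle separation in degrees between two sky positions."""
       ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
       cos_sep = (math.sin(dec1) * math.sin(dec2)
                  + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
       # Clamp to [-1, 1] to avoid domain errors from rounding
       return math.degrees(math.acos(max(-1.0, min(1.0, cos_sep))))

   # Hypothetical run pointings: (run_number, ra_deg, dec_deg)
   runs = [(2909, 83.2, 22.5), (2911, 85.0, 21.8), (3089, 150.0, 2.2)]

   # Keep runs pointing within 2 degrees of the Crab position,
   # mimicking --ra 83.633080 --dec 22.014500 --distance 2
   target_ra, target_dec, max_dist = 83.633080, 22.014500, 2.0
   selected = [run for run, ra, dec in runs
               if angular_separation(ra, dec, target_ra, target_dec) <= max_dist]
   print(selected)  # the two runs pointing near the Crab pass the cut

With ``--distance -1`` (the default) this cut is disabled and no run is discarded on pointing distance.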
.. warning::

   The run search through the database has a caveat. The database is generated from the drive log, so all runs taken after midnight are stored under the following day. This is not the case at the IT, where runs are stored under the date of the starting night. The search can therefore fail for some runs even though the files are there.
   Thus, if you use the database search, always add the date of the following night to the configuration file as well, so that runs taken after midnight are also considered.
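One way to guard against this is to always extend the ``night`` list with the following day before searching. A small helper along these lines (illustrative only, not part of the scripts):

.. code-block::

   from datetime import datetime, timedelta

   def expand_nights(nights):
       """Return the input nights plus the following day for each,
       as YYYYMMDD strings, without duplicates and in order."""
       expanded = []
       for night in nights:
           day = datetime.strptime(str(night), "%Y%m%d")
           for d in (day, day + timedelta(days=1)):
               s = d.strftime("%Y%m%d")
               if s not in expanded:
                   expanded.append(s)
       return expanded

   # Matches the example night list from the configuration file
   print(expand_nights([20210911, 20210912]))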
The search for DL1 files can also be done by providing a file with the list of runs and nights to analyze (option ``--runlist``).
No database file is needed in this case.
The runlist can be either manually created or produced using the `create_run_list.py` script (:ref:`runlist_generation`).
Example of a runlist file:

.. code-block::

   2909 20201117
   2911 20201117
   3089 20201206
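Each line of the runlist simply pairs a run number with the night it belongs to, so reading it amounts to splitting whitespace-separated columns (a sketch; the actual parsing in `DL1_to_DL2.py` may differ):

.. code-block::

   # Inlined here for illustration; in practice read from the runlist file
   runlist_text = """\
   2909 20201117
   2911 20201117
   3089 20201206
   """

   # Parse "run night" pairs, skipping blank lines
   pairs = [line.split() for line in runlist_text.splitlines() if line.strip()]
   runs = [(int(run), night) for run, night in pairs]
   print(runs)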
The argument ``--dry`` performs a dry run: no jobs are submitted and only the verbose output is printed.
This option is useful to check which runs are selected and that the generated command is correct.
Some examples of how to run the script:

.. code-block::

   python DL1_to_DL2.py -c config_dl1_to_dl2.yaml -n Crab --tcuname Crab -v --submit
   python DL1_to_DL2.py -c config_dl1_to_dl2.yaml -n Crab --distance 2 --ra 83.633080 --dec 22.014500 -v --submit
   python DL1_to_DL2.py -c config_dl1_to_dl2.yaml -n Crab --runlist $CONFIG_FOLDER/Crab_Nov2020.txt -v --submit