.. _usage:

Workflow and usage
===================

.. toctree::
    :maxdepth: 1
    :hidden:


    db_generation
    irf_generation
    dl1_production
    dl2_production
    dl3_production
    index_generation

You should have at least a working installation of ``lstchain`` (:ref:`install`) and access to the IT cluster.

Analysis steps
--------------

The analysis proceeds through several data levels; for each level a wrapper script calls the corresponding ``lstchain`` tool. The steps, in order, are:

* Database Generation (:ref:`db_generation`)
* DL1 generation (:ref:`dl1`)
* DL2 generation (:ref:`dl2`)
* DL3 generation (:ref:`dl3`)

.. note::

    For the scripts to work, you always have to source the ``init.sh`` file (:ref:`install`) before starting, so that all the needed environment variables are set.

The scripts are designed around the following directory tree:

.. code-block:: bash
    
    Parent_Folder
    ├── DL1
    │   └── source
    │       └── night
    │           └── version
    │               └── cleaning
    ├── DL2
    │   └── source
    │       └── night
    │           └── version
    │               └── cleaning
    └── DL3
        └── source
            └── night
                └── version
                    └── cleaning

The script ``create_analysis_tree.py`` creates the needed directories, following the scheme above.
Set the default value of ``Parent_Folder`` in the argparse section to your own path.
The ``Parent_Folder`` path is the equivalent of the variable ``base_dir`` in the configuration files used by the scripts in the later analysis steps.
You can modify the script to obtain a different structure, but be aware that the I/O paths in the scripts are based on this structure, so you may need to adapt the scripts too.
Multiple nights can be specified.
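As an illustration of the scheme above, here is a minimal sketch of what ``create_analysis_tree.py`` does; function name, defaults and messages are assumptions for the example, not the actual implementation:

.. code-block:: python

    import tempfile
    from pathlib import Path

    def create_tree(main_dir, source, nights, version="v0.9.2", cleaning="tailcut84"):
        """Create DL1/DL2/DL3 directories for each night (sketch, not the real script)."""
        created = []
        for level in ("DL1", "DL2", "DL3"):
            for night in nights:
                path = Path(main_dir) / level / source / night / version / cleaning
                if path.exists():
                    print(f"Directory  {path}  already exists")
                else:
                    path.mkdir(parents=True)
                    print(f"Directory  {path}  created")
                created.append(path)
        return created

    # Hypothetical usage mirroring the Crab example below, in a throwaway directory:
    base = tempfile.mkdtemp()
    dirs = create_tree(base, "Crab", ["20220304"])

Running ``create_tree`` a second time on the same arguments only prints the "already exists" messages, matching the behaviour shown in the example output below.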

.. code-block:: bash
    
    usage: create_analysis_tree.py [-h] [--main_dir MAIN_DIR] --source SOURCE --night NIGHT [NIGHT ...] [--version VERSION] [--cleaning CLEANING]

    Create a directory structure

    optional arguments:
      -h, --help            show this help message and exit
      --main_dir MAIN_DIR   Path to parent folder
      --source SOURCE       Source name
      --night NIGHT [NIGHT ...]
                            Night date
      --version VERSION     lstchain version (default: v0.9.2)
      --cleaning CLEANING   Cleaning type (default: tailcut84)

Example of use:

.. code-block:: bash
    
    python create_analysis_tree.py --source Crab --night 20220304

Output:

.. code-block:: bash

    Directory  /fefs/aswg/workspace/alice.donini/Analysis/data/DL1/Crab/20220304/v0.9.2/tailcut84  already exists
    Directory  /fefs/aswg/workspace/alice.donini/Analysis/data/DL2/Crab/20220304/v0.9.2/tailcut84  created
    Directory  /fefs/aswg/workspace/alice.donini/Analysis/data/DL3/Crab/20220304/v0.9.2/tailcut84  created
    Directory structure for analysis on Crab was created.

Database generation
~~~~~~~~~~~~~~~~~~~~

Unless you provide a file listing the runs and nights to analyze, a database is needed: the run selection is made through it.

For the generation of the database file refer to :ref:`db_generation`.
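Purely as an illustration of what run selection against such a database amounts to (the column names below are assumptions for the example, not the actual database schema described in :ref:`db_generation`):

.. code-block:: python

    # Toy stand-in for the run database: one entry per run (columns are assumed).
    runs = [
        {"run_id": 7600, "source": "Crab", "night": "20220304"},
        {"run_id": 7601, "source": "Crab", "night": "20220304"},
        {"run_id": 7610, "source": "Mrk421", "night": "20220305"},
    ]

    def select_runs(db, source, nights):
        """Return the run IDs matching a source name and a list of nights."""
        return [r["run_id"] for r in db if r["source"] == source and r["night"] in nights]

    selected = select_runs(runs, "Crab", ["20220304"])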

DL1 generation
~~~~~~~~~~~~~~~~~~~~~

R0 data to DL1 data, i.e. from raw camera waveforms to calibrated and parametrized images:

* Low level camera calibration
* High level camera calibration
* Image cleaning
* Image parameter calculation

Use ``lstchain.scripts.lstchain_data_r0_to_dl1`` for real data and ``lstchain.scripts.lstchain_mc_r0_to_dl1`` for MC.

If you already have a DL1 file containing images and parameters (DL1a and DL1b), you can recalculate the parameters
with a different cleaning using ``lstchain.scripts.lstchain_dl1ab``.

Refer to :ref:`dl1` (yet to be implemented).
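The image-cleaning step above follows a two-threshold ("tailcut") scheme, which the cleaning label ``tailcut84`` refers to (picture threshold 8, boundary threshold 4). The following is a toy sketch of the idea on a rectangular grid, not the actual ``lstchain``/``ctapipe`` implementation, which works on the real camera geometry:

.. code-block:: python

    def tailcut_clean(image, picture_thr=8.0, boundary_thr=4.0):
        """Keep pixels above picture_thr, plus neighbours above boundary_thr
        (toy version on a rectangular grid; real cameras use hexagonal geometry)."""
        rows, cols = len(image), len(image[0])
        picture = [[image[r][c] >= picture_thr for c in range(cols)] for r in range(rows)]
        mask = [[False] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                if picture[r][c]:
                    mask[r][c] = True
                elif image[r][c] >= boundary_thr:
                    # Boundary pixel: keep only if it touches a picture pixel.
                    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < rows and 0 <= cc < cols and picture[rr][cc]:
                            mask[r][c] = True
                            break
        return mask

    # Toy 3x3 image: the central pixel (charge 9) survives as a picture pixel,
    # and its neighbours with charges 5 and 6 survive as boundary pixels.
    image = [
        [1, 5, 1],
        [2, 9, 6],
        [1, 3, 1],
    ]
    mask = tailcut_clean(image)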

DL2 generation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

DL1 to DL2 data, i.e. training and application of machine learning methods to perform:

* Energy estimation
* Arrival direction reconstruction
* Gamma/hadron separation

Use ``lstchain.scripts.lstchain_dl1_to_dl2`` for both real data and MC.

Refer to :ref:`dl2`.

DL3 generation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

DL2 to DL3 data. At this stage gammaness and direction cuts are applied to produce a list of gamma candidates.
To generate DL3 files, an IRF file has to be provided. If one is not available, it has to be produced before the generation of the DL3 files.
This step uses:

* ``lstchain.tools.lstchain_create_irf_files`` (:ref:`irf_generation`)
* ``lstchain.tools.lstchain_create_dl3_file`` (:ref:`dl3`)

To analyze the results with `Gammapy <https://gammapy.org>`_, index files are needed. They are produced using:

* ``lstchain.tools.lstchain_create_dl3_index_files`` (:ref:`index_generation`)

For a quick look at the data and to produce :math:`{\theta}^2/{\alpha}` plots starting from DL2 data, you can also use ``lstchain.scripts.lstchain_post_dl2``.
You will need to pass a toml configuration file; two examples can be found in `lstchain <https://github.com/cta-observatory/cta-lstchain/tree/master/docs/examples/post_dl2_analysis>`_. You need to specify the runs to analyze and the ``data_tag``, which identifies the version of lstchain used to process the data, e.g. v0.7.3. Different cuts can also be specified.
To run the script, simply do:

.. code-block:: bash

    lstchain_post_dl2 -c config_wobble.toml -v
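For reference, the :math:`{\theta}^2` quantity plotted by the script is the squared angular distance between each reconstructed event direction and the source position. A self-contained sketch of that computation (an illustration of the definition, unrelated to the actual ``lstchain`` code; the Crab coordinates are assumed for the example):

.. code-block:: python

    import math

    def theta2_deg2(ra1, dec1, ra2, dec2):
        """Squared angular separation in deg^2 between two sky positions (degrees),
        using the spherical law of cosines."""
        ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
        cos_sep = (math.sin(dec1) * math.sin(dec2)
                   + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
        # Clamp against floating-point drift before taking the arccosine.
        sep = math.degrees(math.acos(min(1.0, max(-1.0, cos_sep))))
        return sep ** 2

    # A well-reconstructed event lands close to the source position:
    t2 = theta2_deg2(83.63, 22.01, 83.70, 22.05)

Signal events pile up at small :math:`{\theta}^2`, while the background is roughly flat in :math:`{\theta}^2`, which is why the plot is made in this variable.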