Merge pull request #180 from ctlearn-project/aitrigger

AI-based trigger system
nietootein authored May 30, 2024
2 parents bd0092f + 87ac31e commit b20f334
Showing 21 changed files with 446 additions and 163 deletions.
51 changes: 38 additions & 13 deletions config/example_config.yml
@@ -11,13 +11,13 @@

# List of reconstruction tasks. For now it's recommended to use single-task learning.
# Valid options:
# - 'particletype'
# - 'type'
# - 'energy'
# - 'direction'
# These options must be consistent with the 'event_info' and
# 'transforms' settings in 'Data'.
# Requirements/recommendations:
# - 'particletype':
# - 'type':
# event_info:
# - 'true_shower_primary_id'
# transforms:
@@ -66,8 +66,9 @@ Data:
# - 'LST_MAGIC_MAGICCam': MAGIC Telescopes
# - 'MST_MST_FlashCam': MST Telescope with FlashCAM Camera
# - 'MST_MST_NectarCam': MST Telescope with NectarCAM Camera
# - 'SST_1M_DigiCam4': SST1M Telescope with DigiCam Camera
# - 'SST_SCT_SCTCam': SCT Telescope and Camera
# - 'SST_ASTRI_ASTRICam': SST-2M ASTRI Telescope and CHECH Camera
# - 'SST_ASTRI_ASTRICam': SST-2M ASTRI Telescope and CHEC Camera
# Camera images from the following telescopes can be read, but writers
# for the data are not yet available:
# - FACT, HESS-I, HESS-II, VERITAS
@@ -90,11 +91,11 @@ Data:
# The recommended way of applying quality cuts to the data.
# Only available for mono analysis at the moment.
# Optional dict.
parameter_selection:
parameter_selection:
- {col_name: "hillas_intensity", min_value: 50.0}
- {col_name: "leakage_intensity_width_2", max_value: 0.2}

# Not the recommended way of applying quality cuts to the data.
# Not the recommended way of applying quality cuts to the data:
# PyTables cut condition to apply selection cuts to events.
# Optional string. Default: don't apply any cuts
# See https://www.pytables.org/usersguide/condition_syntax.html
@@ -151,11 +152,33 @@ Data:
# Optional integer. Default: use default random initialization.
seed: 1234

# Image channels to load.
# Optional list of strings. Default: ['image']
# Valid options are 'image', 'peak_time' 'cleaned_image' and 'clean_peak_time', or whichever columns are in
# the telescope tables of your data files.
image_channels: ['image', 'peak_time']
# Waveform settings to load.
# Optional dict. Default: None
# - Valid options for 'waveform_type' are 'raw' (R0) or 'calibrated' (R1).
# - 'waveform_sequence_length' is an integer corresponding to the length
# of the sequence around the shower maximum.
# - Valid options for 'waveform_format' are 'timechannel_first' or 'timechannel_last'.
# - 'waveform_r0pedsub' is a boolean selecting whether to calculate and subtract the pedestal
# level per pixel outside the sequence around the shower maximum (only selectable for R0).
waveform_settings:
waveform_type: 'raw'
waveform_sequence_length: 5
waveform_format: 'timechannel_last'
waveform_r0pedsub: True

# Image settings to load.
# Optional dict. Default: None
# - Valid options for 'image_channels' are 'image', 'peak_time', 'cleaned_image' and 'clean_peak_time',
# or whichever columns are in the telescope tables of your data files.
image_settings:
image_channels: ['image', 'peak_time']

# Parameter settings to load.
# Optional dict. Default: None
# - Valid options for 'parameter_list' are e.g. 'hillas_intensity' and 'leakage_intensity_width_2',
# or whichever parameter columns are in the telescope tables of your data files.
#parameter_settings:
# parameter_list: ['hillas_intensity', 'leakage_intensity_width_2']

# Settings passed directly as arguments to ImageMapper.
# Optional dictionary. Default: {}
@@ -193,6 +216,7 @@ Data:
'LSTCam': 'bilinear_interpolation'
'FlashCam': 'bilinear_interpolation'
'NectarCam': 'bilinear_interpolation'
'DigiCam': 'bilinear_interpolation'
'SCTCam': 'oversampling'
'CHEC': 'oversampling'
'MAGICCam': 'bilinear_interpolation'
@@ -205,6 +229,7 @@ Data:
'LSTCam': 2
'FlashCam': 1
'NectarCam': 2
'DigiCam': 2
'SCTCam': 0
'CHEC': 0
'MAGICCam': 2
@@ -267,10 +292,10 @@ Input:
# Optional integer. Default: 1
batch_size_per_worker: 32

# Whether to concatenate the stereo images from an event to feed it
# Whether to stack the stereo images from an event to feed it
# to a monoscopic model.
# Optional boolean. Default: false
concat_telescopes: false
stack_telescope_images: false

# Settings for the TensorFlow model. The options in this and the
# Model Parameters section are passed to the Estimator model_fn
@@ -442,7 +467,7 @@ Model Parameters:
# For the classification task, the class names are passed into the dict
# in order of class label (0 to n-1).
standard_head:
particletype: {class_names: ['proton', 'gamma'], fc_head: [512, 256, 2], weight: 1.0}
type: {class_names: ['proton', 'gamma'], fc_head: [512, 256, 2], weight: 1.0}
energy: {fc_head: [512, 256, 1], weight: 1.0}
direction: {fc_head: [512, 256, 2], weight: 1.0}

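The new waveform_settings block above is the main data-loading hook for the AI-based trigger: it selects raw (R0) or calibrated (R1) waveforms, the sequence length around the shower maximum, the time-axis ordering, and optional per-pixel R0 pedestal subtraction. As a rough, self-contained Python sketch of what per-pixel pedestal subtraction outside the selected sequence could look like (illustrative shapes and values only; the actual logic lives in the DL1 data reader, not in this config file):

import numpy as np

# Hypothetical R0 waveform: (num_pixels, num_samples) in ADC counts.
num_pixels, num_samples = 1855, 40
waveform = np.random.normal(400.0, 5.0, size=(num_pixels, num_samples))

waveform_sequence_length = 5
shower_max_sample = 20  # assumed sample index of the shower maximum
start = shower_max_sample - waveform_sequence_length // 2
stop = start + waveform_sequence_length

# Estimate the pedestal per pixel from samples outside the selected sequence,
# then subtract it from the sequence around the shower maximum.
outside = np.concatenate([waveform[:, :start], waveform[:, stop:]], axis=1)
pedestal = outside.mean(axis=1, keepdims=True)
sequence = waveform[:, start:stop] - pedestal
print(sequence.shape)  # (1855, 5)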
7 changes: 2 additions & 5 deletions ctlearn/build_irf.py
@@ -312,11 +312,8 @@ def main():
energy_max=u.Quantity(p["mc_header"]["energy_range_max"].max(), u.TeV),
spectral_index=p["mc_header"]["spectral_index"][0],
max_impact=u.Quantity(p["mc_header"]["max_scatter_range"].max(), u.m),
viewcone=u.Quantity(
p["mc_header"]["max_viewcone_radius"][0]
- p["mc_header"]["min_viewcone_radius"][0],
u.deg,
),
viewcone_min=u.Quantity(np.around(p["mc_header"]["min_viewcone_radius"][0], decimals=2), u.deg),
viewcone_max=u.Quantity(np.around(p["mc_header"]["max_viewcone_radius"][0], decimals=2), u.deg),
)
p["simulation_info"] = simulation_info
p["simulated_spectrum"] = PowerLaw.from_simulation(simulation_info, T_OBS)
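The build_irf.py change above tracks the pyirf API, which replaced the single viewcone width with explicit viewcone_min and viewcone_max bounds. A minimal standalone sketch, assuming pyirf >= 0.10 and using illustrative numbers in place of the values read from the mc_header:

import astropy.units as u
from pyirf.simulations import SimulatedEventsInfo
from pyirf.spectral import PowerLaw

T_OBS = 50 * u.hour  # assumed observation time, standing in for the script's constant

# Illustrative values standing in for the mc_header columns.
simulation_info = SimulatedEventsInfo(
    n_showers=int(1e7),
    energy_min=u.Quantity(0.003, u.TeV),
    energy_max=u.Quantity(330.0, u.TeV),
    spectral_index=-2.0,
    max_impact=u.Quantity(2500.0, u.m),
    viewcone_min=u.Quantity(0.0, u.deg),
    viewcone_max=u.Quantity(10.0, u.deg),
)
simulated_spectrum = PowerLaw.from_simulation(simulation_info, T_OBS)
print(simulated_spectrum)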
42 changes: 27 additions & 15 deletions ctlearn/data_loader.py
@@ -13,7 +13,7 @@ def __init__(
mode="train",
class_names=None,
shuffle=True,
concat_telescopes=False,
stack_telescope_images=False,
):
"Initialization"
self.DL1DataReaderDL1DH = DL1DataReaderDL1DH
@@ -22,13 +22,14 @@ def __init__(
self.mode = mode
self.class_names = class_names
self.shuffle = shuffle
self.concat_telescopes = concat_telescopes
self.stack_telescope_images = stack_telescope_images
self.on_epoch_end()

# Parse the example description
# Features
self.singleimg_shape = None
self.trg_pos, self.trg_shape = None, None
self.trgpatch_pos, self.trgpatch_shape = None, None
self.pon_pos = None
self.pointing = []
self.wvf_pos, self.wvf_shape = None, None
@@ -42,23 +43,24 @@ def __init__(
self.mjd_list, self.milli_list, self.nano_list = [], [], []
# Labels
self.prt_pos, self.enr_pos, self.drc_pos = None, None, None
self.prt_labels, self.enr_labels, self.alt_labels, self.az_labels = (
[],
[],
[],
[],
)
self.prt_labels = []
self.enr_labels = []
self.alt_labels, self.az_labels = [], []
self.trgpatch_labels = []
self.energy_unit = None

for i, desc in enumerate(self.DL1DataReaderDL1DH.example_description):
if "trigger" in desc["name"]:
if "HWtrigger" in desc["name"]:
self.trg_pos = i
self.trg_shape = desc["shape"]
elif "pointing" in desc["name"]:
self.pon_pos = i
elif "waveform" in desc["name"]:
self.wvf_pos = i
self.wvf_shape = desc["shape"]
elif "trigger_patch" in desc["name"]:
self.trgpatch_pos = i
self.trgpatch_shape = desc["shape"]
elif "image" in desc["name"]:
self.img_pos = i
self.img_shape = desc["shape"]
@@ -91,7 +93,7 @@ def __init__(
self.img_shape[3],
)
# Reshape inputs into proper dimensions for the stereo analysis with merged models
if self.concat_telescopes:
if self.stack_telescope_images:
self.img_shape = (
self.img_shape[1],
self.img_shape[2],
@@ -131,7 +133,10 @@ def __data_generation(self, batch_indices):
energy = np.empty((self.batch_size))
if self.drc_pos is not None:
direction = np.empty((self.batch_size, 2))

if self.trgpatch_pos is not None:
trigger_patches_true_image_sum = np.empty(
(self.batch_size, *self.trgpatch_shape)
)
# Generate data
for i, index in enumerate(batch_indices):
event = self.DL1DataReaderDL1DH[index]
@@ -157,15 +162,19 @@ def __data_generation(self, batch_indices):
energy[i] = event[self.enr_pos]
if self.drc_pos is not None:
direction[i] = event[self.drc_pos]
if self.trgpatch_pos is not None:
trigger_patches_true_image_sum[i] = event[self.trgpatch_pos]
else:
# Save all labels for the prediction phase
if self.prt_pos is not None:
self.prt_labels.append(np.float32(event[self.prt_pos]))
if self.enr_pos is not None:
self.enr_labels.append(event[self.enr_pos][0])
self.enr_labels.append(np.float32(event[self.enr_pos][0]))
if self.drc_pos is not None:
self.alt_labels.append(event[self.drc_pos][0])
self.az_labels.append(event[self.drc_pos][1])
self.alt_labels.append(np.float32(event[self.drc_pos][0]))
self.az_labels.append(np.float32(event[self.drc_pos][1]))
if self.trgpatch_pos is not None:
self.trgpatch_labels.append(np.float32(event[self.trgpatch_pos]))
# Save pointing
if self.pon_pos is not None:
self.pointing.append(event[self.pon_pos])
@@ -198,7 +207,7 @@ def __data_generation(self, batch_indices):
labels = {}
if self.mode == "train":
if self.prt_pos is not None:
labels["particletype"] = tf.keras.utils.to_categorical(
labels["type"] = tf.keras.utils.to_categorical(
particletype,
num_classes=self.DL1DataReaderDL1DH.num_classes,
)
@@ -212,6 +221,9 @@ def __data_generation(self, batch_indices):
if self.drc_pos is not None:
labels["direction"] = direction
label = direction
if self.trgpatch_pos is not None and self.DL1DataReaderDL1DH.reco_cherenkov_photons:
labels["cherenkov_photons"] = trigger_patches_true_image_sum
label = trigger_patches_true_image_sum

# Temporary fix until Keras supports class weights for multiple outputs or a custom loss is written
# https://github.com/keras-team/keras/issues/11735
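In training mode the generator above now returns an additional regression target for the AI trigger: the true Cherenkov photon sum per trigger patch, exposed under the 'cherenkov_photons' key, alongside the renamed 'type' classification label. A toy sketch of the resulting labels dictionary for one batch (shapes and values are made up; the real quantities come from the DL1 data reader):

import numpy as np
import tensorflow as tf

batch_size, num_classes = 4, 2

# Stand-ins for the per-event quantities gathered in __data_generation().
particletype = np.array([0, 1, 1, 0])                   # class labels (0 to n-1)
energy = np.random.uniform(-1.0, 2.0, size=batch_size)  # transformed energy values
direction = np.random.normal(size=(batch_size, 2))      # alt/az offsets
trigger_patches_true_image_sum = np.random.poisson(     # Cherenkov photons per patch
    30.0, size=(batch_size, 1)
).astype(np.float32)

labels = {
    "type": tf.keras.utils.to_categorical(particletype, num_classes=num_classes),
    "energy": energy,
    "direction": direction,
    # New in this commit: only attached when the reader provides trigger patches
    # and reconstruction of the Cherenkov photon count is enabled.
    "cherenkov_photons": trigger_patches_true_image_sum,
}
print({name: np.asarray(value).shape for name, value in labels.items()})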
13 changes: 8 additions & 5 deletions ctlearn/default_config_files/CNNRNN.yml
@@ -1,11 +1,13 @@
Data:
mode: 'stereo'
image_channels: ['image', 'peak_time']
image_settings:
image_channels: ['image', 'peak_time']
mapping_settings:
mapping_method:
'LSTCam': 'bilinear_interpolation'
'FlashCam': 'bilinear_interpolation'
'NectarCam': 'bilinear_interpolation'
'DigiCam': 'bilinear_interpolation'
'CHEC': 'oversampling'
'SCTCam': 'oversampling'
'LSTSiPMCam': 'bilinear_interpolation'
@@ -14,17 +16,18 @@ Data:
'LSTCam': 2
'FlashCam': 2
'NectarCam': 2
'DigiCam': 2
'CHEC': 0
'SCTCam': 0
'LSTSiPMCam': 2
'MAGICCam': 2
Input:
batch_size_per_worker: 16
concat_telescopes: false
stack_telescope_images: false
Model:
name: 'CNNRNN'
backbone: {module: 'cnn_rnn', function: 'cnn_rnn_model'}
engine_img: {module: 'basic', function: 'conv_block'}
image_engine: {module: 'basic', function: 'conv_block'}
head: {module: 'head', function: 'standard_head'}
Model Parameters:
attention: {mechanism: 'Squeeze-and-Excitation', ratio: 16}
@@ -39,11 +42,11 @@ Model Parameters:
bottleneck: null
batchnorm: false
standard_head:
particletype: {fc_head: [], weight: 1.0}
type: {fc_head: [], weight: 1.0}
energy: {fc_head: [], weight: 1.0}
direction: {fc_head: [], weight: 1.0}
Training:
validation_split: 0.05
validation_split: 0.1
num_epochs: 5
verbose: 2
workers: 1
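The CNNRNN default config runs in stereo mode, and the Input option renamed here, stack_telescope_images, controls whether the per-telescope images of an event are stacked along the channel axis so a monoscopic backbone can process them. A rough numpy illustration of that stacking (shapes are invented; the actual reshape is done in ctlearn's data loader):

import numpy as np

# Hypothetical stereo event: 4 telescopes, 110x110 camera images,
# 2 channels ('image' and 'peak_time') -> shape (num_tel, height, width, channels).
stereo_event = np.random.rand(4, 110, 110, 2).astype(np.float32)

# Stack the telescope axis into the channel axis:
# (num_tel, h, w, c) -> (h, w, num_tel * c)
num_tel, height, width, channels = stereo_event.shape
stacked = np.transpose(stereo_event, (1, 2, 0, 3)).reshape(height, width, num_tel * channels)
print(stacked.shape)  # (110, 110, 8)

The exact channel ordering in ctlearn may differ; the point is that the telescope dimension disappears into the channels.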
11 changes: 7 additions & 4 deletions ctlearn/default_config_files/SingleCNN.yml
@@ -1,11 +1,13 @@
Data:
mode: 'mono'
image_channels: ['image', 'peak_time']
image_settings:
image_channels: ['image', 'peak_time']
mapping_settings:
mapping_method:
'LSTCam': 'bilinear_interpolation'
'FlashCam': 'bilinear_interpolation'
'NectarCam': 'bilinear_interpolation'
'DigiCam': 'bilinear_interpolation'
'CHEC': 'oversampling'
'SCTCam': 'oversampling'
'LSTSiPMCam': 'bilinear_interpolation'
@@ -14,17 +16,18 @@ Data:
'LSTCam': 2
'FlashCam': 2
'NectarCam': 2
'DigiCam': 2
'CHEC': 0
'SCTCam': 0
'LSTSiPMCam': 2
'MAGICCam': 2
Input:
batch_size_per_worker: 64
concat_telescopes: false
stack_telescope_images: false
Model:
name: 'SingleCNN'
backbone: {module: 'single_cnn', function: 'single_cnn_model'}
engine_img: {module: 'basic', function: 'conv_block'}
image_engine: {module: 'basic', function: 'conv_block'}
head: {module: 'head', function: 'standard_head'}
Model Parameters:
attention: {mechanism: 'Squeeze-and-Excitation', ratio: 16}
@@ -39,7 +42,7 @@ Model Parameters:
bottleneck: null
batchnorm: false
standard_head:
particletype: {fc_head: [512, 256], weight: 1.0}
type: {fc_head: [512, 256], weight: 1.0}
energy: {fc_head: [512, 256], weight: 1.0}
direction: {fc_head: [512, 256], weight: 1.0}
Training:
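The SingleCNN default config shows the same migration as the other files: the flat image_channels key moves under a nested image_settings block. A small, hypothetical helper for reading either layout from a loaded config dict (ctlearn itself may only accept the new nested form; the fallback here is purely illustrative):

import yaml

def get_image_channels(data_cfg):
    # New nested layout introduced in this commit.
    if "image_settings" in data_cfg:
        return data_cfg["image_settings"].get("image_channels", ["image"])
    # Hypothetical fallback for configs written before this change.
    return data_cfg.get("image_channels", ["image"])

cfg = yaml.safe_load("""
Data:
  mode: 'mono'
  image_settings:
    image_channels: ['image', 'peak_time']
""")
print(get_image_channels(cfg["Data"]))  # ['image', 'peak_time']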
11 changes: 7 additions & 4 deletions ctlearn/default_config_files/TRN.yml
@@ -1,11 +1,13 @@
Data:
mode: 'mono'
image_channels: ['image', 'peak_time']
image_settings:
image_channels: ['image', 'peak_time']
mapping_settings:
mapping_method:
'LSTCam': 'bilinear_interpolation'
'FlashCam': 'bilinear_interpolation'
'NectarCam': 'bilinear_interpolation'
'DigiCam': 'bilinear_interpolation'
'CHEC': 'oversampling'
'SCTCam': 'oversampling'
'LSTSiPMCam': 'bilinear_interpolation'
@@ -14,17 +16,18 @@ Data:
'LSTCam': 2
'FlashCam': 2
'NectarCam': 2
'DigiCam': 2
'CHEC': 0
'SCTCam': 0
'LSTSiPMCam': 2
'MAGICCam': 2
Input:
batch_size_per_worker: 64
concat_telescopes: false
stack_telescope_images: false
Model:
name: 'ThinResNet'
backbone: {module: 'single_cnn', function: 'single_cnn_model'}
engine_img: {module: 'resnet', function: 'stacked_res_blocks'}
image_engine: {module: 'resnet', function: 'stacked_res_blocks'}
head: {module: 'head', function: 'standard_head'}
Model Parameters:
attention: {mechanism: 'Squeeze-and-Excitation', ratio: 16}
@@ -37,7 +40,7 @@ Model Parameters:
- {filters: 128, blocks: 3}
- {filters: 256, blocks: 3}
standard_head:
particletype: {fc_head: [512, 256], weight: 1.0}
type: {fc_head: [512, 256], weight: 1.0}
energy: {fc_head: [512, 256], weight: 1.0}
direction: {fc_head: [512, 256], weight: 1.0}
Training:
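Across the default config files the classification head key is renamed from particletype to type. As a rough sketch of what a standard_head entry such as type: {fc_head: [512, 256], weight: 1.0} implies, assuming ctlearn builds fully connected layers from fc_head and appends a task-specific output layer (illustrative only, not the exact ctlearn head code):

import tensorflow as tf

def build_head(features, fc_head, task, num_classes=2, name=None):
    # Toy stand-in for a standard head: hidden Dense layers from fc_head,
    # then a task-dependent output layer.
    x = features
    for units in fc_head:
        x = tf.keras.layers.Dense(units, activation="relu")(x)
    if task == "type":
        return tf.keras.layers.Dense(num_classes, activation="softmax", name=name)(x)
    if task == "energy":
        return tf.keras.layers.Dense(1, name=name)(x)
    if task == "direction":
        return tf.keras.layers.Dense(2, name=name)(x)
    raise ValueError(f"unknown task {task!r}")

backbone_features = tf.keras.Input(shape=(256,))  # assumed backbone feature size
outputs = {
    "type": build_head(backbone_features, [512, 256], "type", name="type"),
    "energy": build_head(backbone_features, [512, 256], "energy", name="energy"),
    "direction": build_head(backbone_features, [512, 256], "direction", name="direction"),
}
model = tf.keras.Model(inputs=backbone_features, outputs=outputs)
model.summary()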