Project ml_suite

Invoke with: import icecube.ml_suite

Python I3Modules

EventFeatureExtractorModule

EventFeatureExtractorModule (Python I3Module)

Builds an EventFeatureExtractor from a config file and applies it to Physics (P) frames.

Param cfg_file:

Default = None, File path to the yaml configuration file from which the EventFeatureExtractor is built.

Param IcePickServiceKey:

Default = '', Key for an IcePick in the context that this module should check before processing physics frames.

Param If:

Default = None, A python function; the module processes the frame only if the function's return value evaluates to True.

Param output_key:

Default = 'ml_suite_features', Frame key to which the extracted features are written.

Param plot_results:

Default = False, If True, plot the results.
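
For orientation, a minimal usage sketch follows. The attribute path ml_suite.EventFeatureExtractorModule, the input file name, and the yaml config path are illustrative assumptions, not values taken from this documentation:

    # Hypothetical sketch: run the feature extractor on P frames of an .i3 file.
    from I3Tray import I3Tray
    from icecube import icetray, dataio  # dataio provides I3Reader
    from icecube import ml_suite

    tray = I3Tray()

    # Read an input file (placeholder path).
    tray.AddModule("I3Reader", "reader", Filename="input.i3.zst")

    # Build the EventFeatureExtractor from an assumed yaml config
    # and write the features to the default output key.
    tray.AddModule(
        ml_suite.EventFeatureExtractorModule,
        "feature_extractor",
        cfg_file="extractor_config.yaml",
        output_key="ml_suite_features",
    )

    tray.Execute()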

ModelWrapper

ModelWrapper (Python I3Module)

General Model Wrapper.

Param batch_size:

Default = 64, The number of events to accumulate and pass through the NN in parallel. A batch size greater than 1 can usually improve reconstruction runtime, but will also increase the memory footprint.

Param data_transformer:

Default = None, Optionally, a data transformer may be provided. This must be a python callable that takes the feature tensor obtained from the event_feature_extractor as input and returns the transformed output (a usage sketch follows this parameter list). The data_transformer may, for instance, be used to transform IceCube data on a hexagonal grid, or to normalize the input data before it is passed to the neural network. The data_transformer may return a single numpy array, or a tuple or list of numpy arrays. There is no constraint on the number of tensors the data_transformer may return; however, the first axis of each tensor must always correspond to the batch dimension.

Param event_feature_extractor:

Default = <Unprintable>, The EventFeatureExtractor object or the file path to a yaml configuration file defining the extractor. The EventFeatureExtractor will be used to compute the input data for the NN.

Param IcePickServiceKey:

Default = '', Key for an IcePick in the context that this module should check before processing physics frames.

Param If:

Default = None, A python function; the module processes the frame only if the function's return value evaluates to True.

Param nn_model:

Default = <Unprintable>, The callable ML model. It can be a function or other callable with signature nn_model(input) -> prediction.

Param output_key:

Default = 'TFModelWrapperOutput', Frame key to which the result will be written.

Param output_names:

Default = None, If provided, the predictions will be named according to the provided list. Otherwise, the names default to prediction_{:04d}.

Param sub_event_stream:

Default = None, If provided, only events from this sub event stream are processed.

Param write_runtime_info:

Default = True, Whether or not to write runtime estimates to the frame.
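
A minimal sketch of a ModelWrapper setup, assuming the module is exposed as ml_suite.ModelWrapper. The data_transformer and nn_model callables below are hypothetical stand-ins that only illustrate the documented signatures (transformer: feature tensor in, transformed tensor with the batch on axis 0 out; model: nn_model(input) -> prediction); all paths and key names are placeholders:

    import numpy as np
    from I3Tray import I3Tray
    from icecube import icetray, dataio  # dataio provides I3Reader
    from icecube import ml_suite

    def scale_features(features):
        # Hypothetical data_transformer: scale the feature tensor to a more
        # NN-friendly range. The output keeps the batch dimension as axis 0.
        return np.asarray(features, dtype=np.float32) / 10.0

    def dummy_model(inputs):
        # Hypothetical nn_model with signature nn_model(input) -> prediction.
        # Returns one value per event in the batch.
        return np.zeros((inputs.shape[0], 1))

    tray = I3Tray()
    tray.AddModule("I3Reader", "reader", Filename="input.i3.zst")  # placeholder path
    tray.AddModule(
        ml_suite.ModelWrapper,
        "model_wrapper",
        event_feature_extractor="extractor_config.yaml",  # assumed yaml path
        data_transformer=scale_features,
        nn_model=dummy_model,
        batch_size=64,
        output_key="MyModelOutput",  # illustrative frame key
    )
    tray.Execute()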

TFModelWrapper

TFModelWrapper (Python I3Module)

TensorFlow Model Wrapper.

Param batch_size:

Default = 64, The number of events to accumulate and pass through the NN in parallel. A batch size greater than 1 can usually improve reconstruction runtime, but will also increase the memory footprint.

Param data_transformer:

Default = None, Optionally, a data transformer may be provided. This must be a python callable that takes the feature tensor obtained from the event_feature_extractor as input and returns the transformed output (a usage sketch follows this parameter list). The data_transformer may, for instance, be used to transform IceCube data on a hexagonal grid, or to normalize the input data before it is passed to the neural network. The data_transformer may return a single numpy array, or a tuple or list of numpy arrays. There is no constraint on the number of tensors the data_transformer may return; however, the first axis of each tensor must always correspond to the batch dimension.

Param event_feature_extractor:

Default = <Unprintable>, The EventFeatureExtractor object or the file path to a yaml configuration file defining the extractor. The EventFeatureExtractor will be used to compute the input data for the NN.

Param IcePickServiceKey:

Default = '', Key for an IcePick in the context that this module should check before processing physics frames.

Param If:

Default = None, A python function; the module processes the frame only if the function's return value evaluates to True.

Param nn_model:

Default = <Unprintable>, The callable ML model. It can be a function or other callable with signature nn_model(input) -> prediction.

Param output_key:

Default = 'TFModelWrapperOutput', Frame key to which the result will be written.

Param output_names:

Default = None, If provided, the predictions will be named according to the provided list. Otherwise, the names default to prediction_{:04d}.

Param sub_event_stream:

Default = None, If provided, only events from this sub event stream are processed.

Param write_runtime_info:

Default = True, Whether or not to write runtime estimates to the frame.
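
A corresponding sketch for TFModelWrapper, assuming it is exposed as ml_suite.TFModelWrapper and that a trained tf.keras model, which is itself callable with nn_model(input) -> prediction, can be passed as nn_model. The model path, output names, and sub event stream are illustrative placeholders:

    import tensorflow as tf
    from I3Tray import I3Tray
    from icecube import icetray, dataio  # dataio provides I3Reader
    from icecube import ml_suite

    # Load a trained Keras model (placeholder path). A tf.keras model is
    # callable, matching the documented nn_model(input) -> prediction signature.
    model = tf.keras.models.load_model("trained_model")

    tray = I3Tray()
    tray.AddModule("I3Reader", "reader", Filename="input.i3.zst")  # placeholder path
    tray.AddModule(
        ml_suite.TFModelWrapper,
        "tf_model_wrapper",
        event_feature_extractor="extractor_config.yaml",  # assumed yaml path
        nn_model=model,
        batch_size=64,
        output_names=["energy", "zenith"],  # illustrative prediction names
        sub_event_stream="InIceSplit",      # illustrative sub event stream
    )
    tray.Execute()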