Project ml_suite
----------------

Invoke with: ``import icecube.ml_suite``

Python I3Modules
^^^^^^^^^^^^^^^^

.. js:data:: EventFeatureExtractorModule

   ``EventFeatureExtractorModule`` *(Python I3Module)*

   Builds an EventFeatureExtractor from a config file and applies it to physics (P) frames.

   :param cfg_file: *Default* = ``None``
   :param IcePickServiceKey: *Default* = ``''``, Key for an IcePick in the context that this module should check before processing physics frames.
   :param If: *Default* = ``None``, A Python function; the module runs only if the function returns a value that evaluates to True.
   :param output_key: *Default* = ``'ml_suite_features'``
   :param plot_results: *Default* = ``False``, Plot results if True.

.. js:data:: ModelWrapper

   ``ModelWrapper`` *(Python I3Module)*

   General model wrapper.

   :param batch_size: *Default* = ``64``, The number of events to accumulate and pass through the NN in parallel. A batch size greater than 1 can usually improve reconstruction runtime, but will also increase the memory footprint.
   :param data_transformer: *Default* = ``None``, Optionally, a data transformer may be provided. This must be a Python callable that takes the feature tensor obtained from the `event_feature_extractor` as input and returns the transformed output. The `data_transformer` may, for instance, be used to transform IceCube data on a hexagonal grid, or to normalize the input data prior to passing it to the neural network. The `data_transformer` may return a single numpy array, or a tuple or list of numpy arrays. There is no constraint on the number of tensors the `data_transformer` may return. However, the first axis of each tensor must always correspond to the batch dimension.
   :param event_feature_extractor: *Default* = ````, The EventFeatureExtractor object or the file path to a YAML configuration file defining the extractor. The EventFeatureExtractor will be used to compute the input data to the NN.
   :param IcePickServiceKey: *Default* = ``''``, Key for an IcePick in the context that this module should check before processing physics frames.
   :param If: *Default* = ``None``, A Python function; the module runs only if the function returns a value that evaluates to True.
   :param nn_model: *Default* = ``None``, The callable ML model. It can be a function or other callable with signature `nn_model(input) -> prediction`.
   :param output_key: *Default* = ``'TFModelWrapperOutput'``, Frame key to which the result will be written.
   :param output_names: *Default* = ``None``, If provided, the predictions will be named according to the provided list. Otherwise names will be: `prediction_{:04d}`.
   :param sub_event_stream: *Default* = ``None``, If provided, only process events from this sub event stream.
   :param write_runtime_info: *Default* = ``True``, Whether or not to write runtime estimates to the frame.

.. js:data:: TFModelWrapper

   ``TFModelWrapper`` *(Python I3Module)*

   TensorFlow model wrapper.

   :param batch_size: *Default* = ``64``, The number of events to accumulate and pass through the NN in parallel. A batch size greater than 1 can usually improve reconstruction runtime, but will also increase the memory footprint.
   :param data_transformer: *Default* = ``None``, Optionally, a data transformer may be provided. This must be a Python callable that takes the feature tensor obtained from the `event_feature_extractor` as input and returns the transformed output. The `data_transformer` may, for instance, be used to transform IceCube data on a hexagonal grid, or to normalize the input data prior to passing it to the neural network. The `data_transformer` may return a single numpy array, or a tuple or list of numpy arrays. There is no constraint on the number of tensors the `data_transformer` may return. However, the first axis of each tensor must always correspond to the batch dimension.
   :param event_feature_extractor: *Default* = ````, The EventFeatureExtractor object or the file path to a YAML configuration file defining the extractor. The EventFeatureExtractor will be used to compute the input data to the NN.
   :param IcePickServiceKey: *Default* = ``''``, Key for an IcePick in the context that this module should check before processing physics frames.
   :param If: *Default* = ``None``, A Python function; the module runs only if the function returns a value that evaluates to True.
   :param nn_model: *Default* = ``None``, The callable ML model. It can be a function or other callable with signature `nn_model(input) -> prediction`.
   :param output_key: *Default* = ``'TFModelWrapperOutput'``, Frame key to which the result will be written.
   :param output_names: *Default* = ``None``, If provided, the predictions will be named according to the provided list. Otherwise names will be: `prediction_{:04d}`.
   :param sub_event_stream: *Default* = ``None``, If provided, only process events from this sub event stream.
   :param write_runtime_info: *Default* = ``True``, Whether or not to write runtime estimates to the frame.

.. js:data:: TritonInferenceWrapper

   ``TritonInferenceWrapper`` *(Python I3Module)*

   Triton inference wrapper.

   :param batch_limit: *Default* = ``1``, Limit on the number of requests batched together.
   :param batch_size: *Default* = ``64``, The number of events to accumulate and pass through the NN in parallel. A batch size greater than 1 can usually improve reconstruction runtime, but will also increase the memory footprint.
   :param data_transformer: *Default* = ``None``, Optionally, a data transformer may be provided. This must be a Python callable that takes the feature tensor obtained from the `event_feature_extractor` as input and returns the transformed output. The `data_transformer` may, for instance, be used to transform IceCube data on a hexagonal grid, or to normalize the input data prior to passing it to the neural network. The `data_transformer` may return a single numpy array, or a tuple or list of numpy arrays. There is no constraint on the number of tensors the `data_transformer` may return. However, the first axis of each tensor must always correspond to the batch dimension.
   :param event_feature_extractor: *Default* = ````, The EventFeatureExtractor object or the file path to a YAML configuration file defining the extractor. The EventFeatureExtractor will be used to compute the input data to the NN.
   :param IcePickServiceKey: *Default* = ``''``, Key for an IcePick in the context that this module should check before processing physics frames.
   :param If: *Default* = ``None``, A Python function; the module runs only if the function returns a value that evaluates to True.
   :param nn_model: *Default* = ``None``, The callable ML model. It can be a function or other callable with signature `nn_model(input) -> prediction`.
   :param nn_model_name: *Default* = ``'tglauch_classifier'``, The name of the ML model on the Triton inference server. Must be a string: `triton("nn_model_name", input) -> prediction`.
   :param output_key: *Default* = ``'TFModelWrapperOutput'``, Frame key to which the result will be written.
   :param output_names: *Default* = ``None``, If provided, the predictions will be named according to the provided list. Otherwise names will be: `prediction_{:04d}`.
   :param request_limit: *Default* = ``4``, Limit on the number of asynchronous HTTP requests.
   :param sub_event_stream: *Default* = ``None``, If provided, only process events from this sub event stream.
   :param triton_server_URI: *Default* = ``'https://localhost:8000'``, URI for the Triton server, including the port number.
   :param write_runtime_info: *Default* = ``True``, Whether or not to write runtime estimates to the frame.

.. js:data:: ONNXInferenceWrapper

   ``ONNXInferenceWrapper`` *(Python I3Module)*

   ONNX inference wrapper.

   :param batch_size: *Default* = ``64``, The number of events to accumulate and pass through the NN in parallel. A batch size greater than 1 can usually improve reconstruction runtime, but will also increase the memory footprint.
   :param data_transformer: *Default* = ``None``, Optionally, a data transformer may be provided. This must be a Python callable that takes the feature tensor obtained from the `event_feature_extractor` as input and returns the transformed output. The `data_transformer` may, for instance, be used to transform IceCube data on a hexagonal grid, or to normalize the input data prior to passing it to the neural network. The `data_transformer` may return a single numpy array, or a tuple or list of numpy arrays. There is no constraint on the number of tensors the `data_transformer` may return. However, the first axis of each tensor must always correspond to the batch dimension.
   :param event_feature_extractor: *Default* = ````, The EventFeatureExtractor object or the file path to a YAML configuration file defining the extractor. The EventFeatureExtractor will be used to compute the input data to the NN.
   :param IcePickServiceKey: *Default* = ``''``, Key for an IcePick in the context that this module should check before processing physics frames.
   :param If: *Default* = ``None``, A Python function; the module runs only if the function returns a value that evaluates to True.
   :param nn_model: *Default* = ``None``, The callable ML model. It can be a function or other callable with signature `nn_model(input) -> prediction`.
   :param ONNX_model_path: *Default* = ``None``, Path to the ONNX model file (usually ends in ``.onnx``).
   :param output_key: *Default* = ``'TFModelWrapperOutput'``, Frame key to which the result will be written.
   :param output_names: *Default* = ``None``, If provided, the predictions will be named according to the provided list. Otherwise names will be: `prediction_{:04d}`.
   :param sub_event_stream: *Default* = ``None``, If provided, only process events from this sub event stream.
   :param write_runtime_info: *Default* = ``True``, Whether or not to write runtime estimates to the frame.
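Taken together, the wrappers above expect two user-supplied callables: an optional `data_transformer` that keeps the first axis as the batch dimension, and an `nn_model` with signature `nn_model(input) -> prediction`. The following is a minimal, illustrative sketch of both, plus the documented `prediction_{:04d}` naming fallback. It assumes numpy only; the function names, weights, and the `prediction_names` helper are hypothetical, and only the call signatures and naming scheme come from the parameter descriptions above.

```python
import numpy as np


def normalize_features(features):
    # Hypothetical data_transformer: z-score each feature column.
    # The first axis of the returned array remains the batch dimension,
    # as the `data_transformer` parameter description requires.
    mean = features.mean(axis=0, keepdims=True)
    std = features.std(axis=0, keepdims=True)
    return (features - mean) / np.where(std > 0.0, std, 1.0)


def toy_nn_model(x):
    # Hypothetical nn_model callable: a single linear layer followed by a
    # sigmoid, applied to a whole batch of shape (batch_size, n_features).
    weights = np.array([[0.5], [-0.25]])  # (n_features, n_outputs)
    bias = 0.1
    return 1.0 / (1.0 + np.exp(-(x @ weights + bias)))


def prediction_names(n_outputs, output_names=None):
    # Sketch of the documented naming fallback: use `output_names` when
    # provided, otherwise generate 'prediction_{:04d}' style names.
    if output_names is not None:
        return list(output_names)
    return ["prediction_{:04d}".format(i) for i in range(n_outputs)]


# Transform a batch of 64 events and run the model on it in one call.
batch = np.random.default_rng(0).normal(size=(64, 2))  # batch_size = 64
prediction = toy_nn_model(normalize_features(batch))   # shape (64, 1)
names = prediction_names(prediction.shape[1])
```

In an actual tray these callables would be passed as the `data_transformer` and `nn_model` parameters of `ModelWrapper` (or one of its variants); the event accumulation itself is handled by the module according to `batch_size`.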