Using the Predictor Block
Getting familiar with the Predictor Block
The Predictor block in Collimator is a powerful tool that allows users to integrate machine learning models into their simulations. It supports two popular model formats: TensorFlow SavedModel and PyTorch TorchScript.
Inputs
To configure the input parameters of the Predictor block, users must specify the name and dtype of each input. Collimator will automatically cast the input data to the specified dtype.
TensorFlow Models
For TensorFlow models saved in the SavedModel format, Collimator currently supports only the default serving signature, "serving_default". If the model was saved with Keras, the input names should be "input_1", "input_2", etc., in the same order as the serving function's signature. If the model was saved with the tf.saved_model.save function (e.g., a tf.Module with a @tf.function decorator), the input names should match the serving function's signature.
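The following is a minimal sketch of how such a model might be exported. The module, the input names "state" and "control", and the save path are hypothetical; the point is that the argument names in the @tf.function input signature determine the input names the Predictor block would need to use.

```python
import tensorflow as tf

class Dynamics(tf.Module):
    """Hypothetical model: a linear map plus an additive control input."""

    def __init__(self):
        super().__init__()
        self.w = tf.Variable(tf.ones((3, 1)))

    # The argument names in the input signature ("state", "control") are the
    # names the Predictor block's inputs would need to match.
    @tf.function(input_signature=[
        tf.TensorSpec(shape=(None, 3), dtype=tf.float32, name="state"),
        tf.TensorSpec(shape=(None, 1), dtype=tf.float32, name="control"),
    ])
    def __call__(self, state, control):
        return tf.matmul(state, self.w) + control

model = Dynamics()

# Export __call__ as the default serving signature ("serving_default").
tf.saved_model.save(
    model,
    "/tmp/dynamics_savedmodel",  # hypothetical path
    signatures={"serving_default": model.__call__},
)
```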
PyTorch Models
For PyTorch models, the input names must match the model's forward function signature.
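As a minimal sketch (assuming a hypothetical module with inputs named "state" and "control"), the argument names of forward are what the Predictor block's input names would need to match:

```python
import torch

class Dynamics(torch.nn.Module):
    """Hypothetical model: the Predictor block's input names would be
    "state" and "control", matching the forward() arguments below."""

    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(3, 1)

    def forward(self, state: torch.Tensor, control: torch.Tensor) -> torch.Tensor:
        return self.linear(state) + control

# Export to TorchScript for use with the Predictor block.
scripted = torch.jit.script(Dynamics())
scripted.save("dynamics.pt")  # hypothetical path
```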
Outputs
To configure the output parameters of the Predictor block, users must specify the name, dtype, and shape of each output. The block supports both single and multiple outputs, which may be returned as tuples or dictionaries.
TensorFlow Models
For TensorFlow models saved in the SavedModel format, the outputs should be named "output_1", "output_2", etc. for a single output or a tuple of outputs, or should match the dictionary keys if the output is a dictionary.
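As an illustration of the dictionary case, here is a hedged sketch of a SavedModel whose serving function returns a dict; the keys "position" and "velocity" are assumed names that the Predictor block's output names would then have to match.

```python
import tensorflow as tf

class Observer(tf.Module):
    # Returning a dictionary means the Predictor block's output names
    # should match its keys ("position" and "velocity" are assumed names).
    @tf.function(input_signature=[
        tf.TensorSpec(shape=(None, 3), dtype=tf.float32, name="state"),
    ])
    def __call__(self, state):
        return {"position": state[:, :1], "velocity": state[:, 1:]}

observer = Observer()
tf.saved_model.save(
    observer,
    "/tmp/observer_savedmodel",  # hypothetical path
    signatures={"serving_default": observer.__call__},
)
```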
PyTorch Models
In the case of a single output or a tuple of outputs, the output names do not matter as long as their number matches the number of outputs of the TorchScript model. In the case of a dictionary output, Collimator will match the output names to the dictionary's keys.
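A minimal sketch of the dictionary case, assuming hypothetical output keys "position" and "velocity":

```python
from typing import Dict

import torch

class Observer(torch.nn.Module):
    # With a dictionary output, Collimator matches the Predictor block's
    # output names to these keys ("position", "velocity" are assumed names).
    def forward(self, state: torch.Tensor) -> Dict[str, torch.Tensor]:
        return {"position": state[:, :1], "velocity": state[:, 1:]}

scripted = torch.jit.script(Observer())
scripted.save("observer.pt")  # hypothetical path
```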