# Serving your pipeline with fastdeploy: an example
- Create a recipe folder with the following structure:

```
recipe_folder/
├── example.py
├── predictor.py
├── requirements.txt (optional)
└── extras.sh (optional)
```
- `example.py` defines the recipe name and example inputs:

```python
name = "your_app_or_model_name"

example = [
    example_object_1,
    example_object_2,
]
```
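As an illustration, an `example.py` for a hypothetical sentiment-analysis recipe might look like this (the name and the example strings are made up for demonstration):

```python
# example.py for a hypothetical sentiment-analysis recipe.
# "name" identifies the recipe; "example" lists representative inputs
# of the same type the predictor will receive at serving time.
name = "sentiment_classifier"

example = [
    "This movie was fantastic!",
    "The service was disappointingly slow.",
]
```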
- `predictor.py` loads your model and makes predictions:

```python
# Whatever code and imports you need to load your model and make predictions.
# The predictor function must be defined exactly as below:
# - batch_size is the optimal batch size for your model
# - len(inputs) may or may not be equal to batch_size
# - len(outputs) must be equal to len(inputs)
def predictor(inputs, batch_size=1):
    return outputs
```
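A minimal concrete sketch of such a `predictor.py`, assuming a trivial "model" that upper-cases strings (a real recipe would load its model once at import time and run batched inference inside the function):

```python
# predictor.py sketch: a trivial predictor that upper-cases its inputs.
# The batching pattern shown here is the important part: len(inputs)
# need not equal batch_size, but len(outputs) must equal len(inputs).

def predictor(inputs, batch_size=4):
    outputs = []
    # Process inputs in chunks of at most batch_size items.
    for i in range(0, len(inputs), batch_size):
        batch = inputs[i : i + batch_size]
        outputs.extend(s.upper() for s in batch)
    return outputs
```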
- `requirements.txt` (optional): all Python dependencies for your pipeline.
- `extras.sh` (optional): any bash commands to run before `requirements.txt` is installed.
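For instance, a recipe whose model needs a system library before its Python dependencies can be installed might ship an `extras.sh` like the following (the package name is purely illustrative):

```bash
# extras.sh — runs before requirements.txt is installed.
# Install a hypothetical system-level dependency of the pipeline.
apt-get update && apt-get install -y libsndfile1
```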
To serve the recipe, start the prediction loop and the REST server:

```bash
fastdeploy --loop --recipe recipes/echo_chained
fastdeploy --rest --recipe recipes/echo_chained
```
## Chained recipe example

- A chained recipe has multiple `predictor_X.py` files which are chained sequentially: `predictor_1.py` is called first, then `predictor_2.py`, and so on.
- Each `predictor_X.py` must define a `predictor` function as described above.
- Each `predictor_X.py` is run separately, i.e. the predictors can live in different virtualenvs.
Start one loop per predictor, then the REST server:

```bash
fastdeploy --loop --recipe recipes/echo_chained --config "predictor_name:predictor_1.py"
fastdeploy --loop --recipe recipes/echo_chained --config "predictor_name:predictor_2.py"
fastdeploy --rest --recipe recipes/echo_chained
```
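Conceptually, chaining means each stage's outputs become the next stage's inputs. The following in-process sketch illustrates that data flow only; the function and variable names are illustrative, not fastdeploy internals, and in fastdeploy each stage actually runs in its own loop process:

```python
# Sketch of sequential chaining: outputs of one predictor feed the next.
# Both stages run in-process here purely for illustration.

def predictor_1(inputs, batch_size=1):
    # Stage 1: normalize text.
    return [s.strip().lower() for s in inputs]

def predictor_2(inputs, batch_size=1):
    # Stage 2: tag each normalized string with its length.
    return [{"text": s, "length": len(s)} for s in inputs]

def run_chain(inputs, stages):
    # Pass the batch through each stage in order; every stage
    # must return exactly one output per input.
    for stage in stages:
        inputs = stage(inputs)
    return inputs

results = run_chain(["  Hello ", "World"], [predictor_1, predictor_2])
```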