Opening this issue to detail my investigation into how to use TorchServe for running inference with the MONAI-built model.
This work is in the context of the PACS-AI project.
Response I got:
As a first step, your model should be compatible with TorchServe or TensorFlow Serving. In PACS-AI, the torchserve microservice loads models packaged as .mar files (https://pytorch.org/serve/use_cases.html). If you can serve the model on your side following the TorchServe guide, integrating it into PACS-AI afterwards will be very easy (we will give detailed steps this summer)!
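For reference, here is a minimal sketch of what "serving the model on your side" could look like with the standard TorchServe tooling. The file names (`model.pt`, `handler.py`), model name (`monai_model`), and input file are placeholders, not the actual PACS-AI artifacts; it also assumes the model is exported as TorchScript, since an eager-mode checkpoint would additionally need `--model-file`:

```bash
pip install torchserve torch-model-archiver

# 1. Package the weights and a custom handler into a .mar archive.
#    Assumes model.pt is a TorchScript export (torch.jit.save).
torch-model-archiver \
  --model-name monai_model \
  --version 1.0 \
  --serialized-file model.pt \
  --handler handler.py \
  --export-path model_store

# 2. Start TorchServe with the archive registered.
torchserve --start --model-store model_store --models monai_model=monai_model.mar

# 3. Smoke-test the inference endpoint.
curl http://localhost:8080/predictions/monai_model -T sample_input.pt
```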
We also plan to support Docker images as an "endpoint" for running inference, since this can be necessary in cases where CLI tools are used! But the idea would remain the same: the input/output will need to follow a certain format (see the handler sketch below).
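The input/output format is typically enforced in a TorchServe custom handler. Below is a minimal sketch of such a handler built on TorchServe's `BaseHandler`; the payload convention (a `torch.save`-serialized tensor) is an assumption for illustration, not the PACS-AI specification, and a real deployment would more likely decode DICOM or NIfTI inputs with MONAI transforms:

```python
# handler.py -- hypothetical sketch; payload format is an assumption,
# not the PACS-AI specification.
import io

import torch
from ts.torch_handler.base_handler import BaseHandler


class MonaiHandler(BaseHandler):
    """Minimal handler: raw bytes in -> tensor -> model -> JSON-serializable out."""

    def preprocess(self, data):
        # TorchServe passes a list of requests; each carries the raw
        # payload under "data" or "body".
        payloads = [row.get("data") or row.get("body") for row in data]
        # Assumption: each client sends a torch.save-serialized tensor.
        tensors = [
            torch.load(io.BytesIO(p), map_location=self.device) for p in payloads
        ]
        return torch.stack(tensors).to(self.device)

    def inference(self, batch):
        # self.model is loaded by BaseHandler.initialize from the .mar archive.
        with torch.no_grad():
            return self.model(batch)

    def postprocess(self, output):
        # Return one JSON-serializable entry per request in the batch,
        # e.g. per-voxel class indices for a segmentation model.
        return output.argmax(dim=1).cpu().tolist()
```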