Is it possible to use InnerEye fully on a local GPU machine (without Azure ML) #842
-
Based on this documentation, model training and testing can be done on a local machine, but the example of inference only applies to a model registered in Azure ML. I wonder if there is a way to do inference on the same local machine, since this would greatly reduce the effort involved in securing the environment when PHI is involved.
-
Hi @furtheraway, thanks for opening this thread! The short answer to your question is yes, absolutely 😄 All of InnerEye is designed to work locally, identically to how it works in AzureML.

When you complete your training locally, the trained model ensemble and all the files you need for running inference will be saved to the `final_model/` folder. In there you will find a file, `score.py`. This file is the entry point for running inference on a single file, and it is used by the `submit_for_inference.py` script provided in the InnerEye-DeepLearning repository. cd-ing into `final_model/` and running the following command will give you the functionality you're looking for:

`score.py --data_folder <path_to_data_folder> --image_files <data_file(s)> --use_dicom <True/False>`

There are other optional arguments that can be used to run this script, but you've highlighted a hole in the documentation here that I will create an issue for and try to address myself over the next couple of days. You should be able to view all the args with the `--help` flag.
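
For completeness, here is a minimal sketch of what that looks like end to end. Only the `--data_folder`, `--image_files`, and `--use_dicom` arguments come from the reply above; the paths and file names below are placeholder assumptions, not part of the repository.

```shell
# Minimal local-inference sketch (paths and file names are placeholders).
# Assumes training has finished and produced the final_model/ folder.
cd final_model/

# Score a single image. Set --use_dicom True if the input is a DICOM series
# rather than a NIfTI file.
python score.py \
    --data_folder /path/to/data_folder \
    --image_files image1.nii.gz \
    --use_dicom False

# List every supported argument:
python score.py --help
```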