This lab uses the Oracle Cloud Infrastructure (OCI) Data Science service to build a machine learning model that predicts employee attrition, and to perform all the steps needed to deploy that model into production. You develop the model, create a Data Flow application, create a model artifact, and store it in the Model Catalog. Using the console, you then deploy the model and call a REST API endpoint to perform inference operations against it.
The notebook connects to other OCI resources using resource principals. If you have not configured your tenancy to use resource principals, you can do so by following the instructions here. Alternatively, you can use API keys, but this requires a few modifications to the notebook. Resource principals are the preferred authentication method.
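The difference between the two authentication paths can be sketched in a few lines. This is illustrative, not the notebook's exact code: `get_resource_principals_signer` is the OCI SDK call used inside a notebook session, and Object Storage is used here only as an example client.

```python
def make_object_storage_client(use_resource_principal=True):
    """Build an OCI client using resource principals (preferred) or API keys.

    Imports are deferred so this sketch can be read without the OCI SDK
    installed; inside the notebook session, `pip install oci` provides it.
    """
    import oci

    if use_resource_principal:
        # Resource principals: no config file is needed inside the
        # notebook session; the signer is obtained from the environment.
        signer = oci.auth.signers.get_resource_principals_signer()
        return oci.object_storage.ObjectStorageClient(config={}, signer=signer)

    # API keys: fall back to the ~/.oci/config file created for your user.
    config = oci.config.from_file()
    return oci.object_storage.ObjectStorageClient(config)
```

With API keys, every client constructed in the notebook needs the config-file change; with resource principals, the notebook works unmodified.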
Your notebook needs internet access.
- Open a Data Science Notebook session (i.e. JupyterLab).
- Open a terminal by clicking File -> New -> Terminal.
- In the terminal, run the following commands:

```
odsc conda install -s mlcpuv1
```

to install the General Machine Learning for CPUs conda environment.

```
mkdir /home/datascience/dataflow
```

to create the /home/datascience/dataflow directory used for the Data Flow application.

```
conda activate /home/datascience/conda/mlcpuv1
```

to activate the conda environment.

```
pip install oci
```

to install the OCI Python SDK.
- Copy the `employee-attrition.ipynb` notebook into the notebook session.
- Open the notebook.
- Change the notebook kernel to `Python [conda env:mlcpuv1]`.
- Read the notebook and execute each cell.
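After switching the kernel, you can sanity-check from a notebook cell that the interpreter belongs to the mlcpuv1 environment (this assumes the conda was installed to the default /home/datascience/conda path shown in the commands above):

```python
import sys

# The interpreter path should point inside the mlcpuv1 conda environment,
# e.g. /home/datascience/conda/mlcpuv1/bin/python. If it does not, the
# notebook kernel was not switched to Python [conda env:mlcpuv1].
print(sys.executable)
```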
Note: Before you execute the "Invoke the Model HTTP Endpoint" section, ensure you have the correct endpoint URI. To obtain it, go to the Data Science service in the console, select the compartment that contains the model deployment, and select the appropriate project. Under the Resources section, select Model Deployments, then copy the URI from the model deployment you created.
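Invoking the endpoint amounts to a signed HTTPS POST. The sketch below uses the resource principal signer as a standard `requests` auth handler; the payload shape and any path suffix on the URI are assumptions you should match to what your deployed model actually expects.

```python
def invoke_model(endpoint_uri, payload):
    """POST an inference request to a Data Science model deployment.

    endpoint_uri: the URI copied from the model deployment page in the
    console. payload: a JSON-serializable object in whatever shape the
    deployed model expects (assumption -- check your model's input format).
    """
    # Imports are deferred so the sketch reads without the SDK installed;
    # in the notebook session, `pip install oci` provides the oci package.
    import oci
    import requests

    # OCI signers plug into requests as auth handlers, so each request
    # is signed automatically with the notebook's resource principal.
    signer = oci.auth.signers.get_resource_principals_signer()
    response = requests.post(endpoint_uri, json=payload, auth=signer)
    response.raise_for_status()
    return response.json()
```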