Deploy to KFServing from Kubeflow Pipelines
These examples illustrate how to use the Kubeflow Pipelines component for KFServing.
- Deploy a custom model
- Deploy a TensorFlow model. There is also a notebook which illustrates this.
- Deploy a sample MNIST model end to end using Kubeflow Pipelines with Tekton. The notebook demonstrates how to compile and execute an end-to-end machine learning workflow that uses Katib, TFJob, KFServing, and a Tekton pipeline. The pipeline contains five steps: it finds the best hyperparameters using Katib, creates a PVC for storing models, processes the hyperparameter results, runs distributed training of the model on TFJob with the best hyperparameters for more iterations, and finally serves the model using KFServing. See this Medium blog for more details on the pipeline.
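Under the hood, the KFServing component's serve step applies an `InferenceService` custom resource to the cluster. As a rough sketch of what that manifest looks like (the service name, namespace, and storage URI below are placeholder assumptions, and the field layout follows the KFServing `v1alpha2` API):

```python
def make_inference_service(name, namespace, storage_uri, framework="tensorflow"):
    """Build a minimal KFServing InferenceService manifest as a plain dict.

    This mirrors the kind of resource the KFServing pipeline component
    creates; it is a sketch, not the component's actual implementation.
    """
    return {
        "apiVersion": "serving.kubeflow.org/v1alpha2",
        "kind": "InferenceService",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {
            "default": {
                "predictor": {
                    # Predictor key selects the model server, e.g. "tensorflow"
                    framework: {"storageUri": storage_uri},
                }
            }
        },
    }


svc = make_inference_service(
    "mnist-sample",                      # hypothetical service name
    "kubeflow",                          # hypothetical target namespace
    "gs://example-bucket/mnist/export",  # hypothetical model location
)
print(svc["metadata"]["name"])
```

In a pipeline, the equivalent parameters (model name, namespace, framework, and storage URI) are passed to the KFServing component rather than assembled by hand.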
To dive into the source behind the KFServing Kubeflow Pipelines component, see the YAML for the KFServing component and the source code.