Kubeflow Training Operator


Overview

Starting with v1.3, this training operator provides Kubernetes custom resources that make it easy to run distributed or non-distributed TensorFlow/PyTorch/Apache MXNet/XGBoost/MPI jobs on Kubernetes.

Note: Before the v1.2 release, the Kubeflow Training Operator only supported TFJob on Kubernetes.
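
For example, a minimal TFJob custom resource for a two-worker distributed run can be submitted with kubectl. This is an illustrative sketch only: the job name, image, and command below are placeholders, not part of this repository.

kubectl apply -f - <<EOF
apiVersion: kubeflow.org/v1
kind: TFJob
metadata:
  name: tfjob-example
  namespace: kubeflow
spec:
  tfReplicaSpecs:
    Worker:
      replicas: 2
      restartPolicy: OnFailure
      template:
        spec:
          containers:
            - name: tensorflow                      # TFJob conventionally uses "tensorflow" as the container name
              image: my-registry/tf-train:latest    # placeholder: your training image
              command: ["python", "/opt/train.py"]  # placeholder entrypoint
EOF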

Prerequisites

  • Kubernetes >= 1.16
  • Kustomize >= 3.x
  • kubectl >= 1.21
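
You can verify the installed client versions with:

kubectl version --client
kustomize version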

Installation

Master Branch

kubectl apply -k "github.com/kubeflow/training-operator/manifests/overlays/standalone"

Stable Release

kubectl apply -k "github.com/kubeflow/training-operator/manifests/overlays/standalone?ref=v1.5.0"
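
To verify the installation, check that the controller is running. As a sketch, assuming the standalone overlay's defaults (namespace kubeflow, deployment name training-operator; adjust if your overlay differs):

kubectl get deployment -n kubeflow training-operator
kubectl get pods -n kubeflow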

TensorFlow Release Only

For users who prefer the original TensorFlow controllers, please check out the v1.2-branch; patches for bug fixes will still be accepted on this branch.

kubectl apply -k "github.com/kubeflow/training-operator/manifests/overlays/standalone?ref=v1.2.0"

Python SDK for Kubeflow Training Operator

The Training Operator provides a Python SDK for the custom resources. More docs are available in the sdk/python folder.

Use the pip install command to install the latest release of the SDK:

pip install kubeflow-training
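
As a minimal sketch of using the SDK (class and method names vary between SDK releases; TFJobClient, get, and get_job_status here are assumptions based on the v1 SDK, so check sdk/python for your release), an existing job can be inspected from Python:

# Assumes a TFJob named "tfjob-example" already exists in the "kubeflow" namespace.
from kubeflow.training import TFJobClient  # assumed client class; may differ by SDK version

tfjob_client = TFJobClient()
tfjob = tfjob_client.get("tfjob-example", namespace="kubeflow")
print(tfjob_client.get_job_status("tfjob-example", namespace="kubeflow"))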

Quick Start

Please refer to quick-start-v1.md and the Kubeflow Training User Guide for more information.
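
After submitting a job such as the TFJob sketched above, its progress can be inspected with standard kubectl commands (the job name and namespace are placeholders):

kubectl get tfjobs -n kubeflow
kubectl describe tfjob tfjob-example -n kubeflow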

API Documentation

Please refer to the following API documentation:

Community

This is a part of Kubeflow, so please see the README in kubeflow/kubeflow to get in touch with the community.

Contributing

Please refer to the DEVELOPMENT guide.

Change Log

Please refer to the CHANGELOG.

Version Matrix

The following table lists the most recent few versions of the operator.

Operator Version       API Version   Kubernetes Version
v1.0.x                 v1            1.16+
v1.1.x                 v1            1.16+
v1.2.x                 v1            1.16+
v1.3.x                 v1            1.18+
latest (master HEAD)   v1            1.18+

Acknowledgement

This project was originally started as a distributed training operator for TensorFlow, and we later merged efforts from other Kubeflow training operators to provide a unified and simplified experience for both users and developers. We are very grateful to all who filed issues or helped resolve them, asked and answered questions, and were part of inspiring discussions. We'd also like to thank everyone who has contributed to and maintained the original operators.