# Pull Request

## Description
<!-- Provide a brief description of the purpose of this pull request -->

## Changes Made
<!-- Describe the changes introduced by this pull request -->

## Related Issues
<!-- Mention any related issues or tickets that are being addressed by this pull request -->

## Screenshots (if applicable)
<!-- Add screenshots or images to visually represent the changes, if applicable -->

## Checklist
<!-- Make sure to check the items below before submitting your pull request -->

- [ ] Code follows the project's style guidelines
- [ ] All tests related to the changes pass successfully
- [ ] Documentation is updated (if necessary)
- [ ] Code is reviewed by at least one other team member
- [ ] Any breaking changes are communicated and documented

## Additional Notes
<!-- Add any additional notes or context that may be helpful for reviewers -->
# MLServer

An open source inference server for your machine learning models.

[![video_play_icon](https://user-images.githubusercontent.com/10466106/151803854-75d17c32-541c-4eee-b589-d45b07ea486d.png)](https://www.youtube.com/watch?v=aZHe3z-8C_w)

## Overview

MLServer aims to provide an easy way to start serving your machine learning
models through a REST and gRPC interface, fully compliant with [KFServing's V2
Dataplane](https://docs.seldon.io/projects/seldon-core/en/latest/reference/apis/v2-protocol.html)
spec. Watch a quick video introducing the project [here](https://www.youtube.com/watch?v=aZHe3z-8C_w).

- Multi-model serving, letting users run multiple models within the same
  process.
- Ability to run [inference in parallel for vertical
  scaling](https://mlserver.readthedocs.io/en/latest/user-guide/parallel-inference.html)
  across multiple models through a pool of inference workers.
- Support for [adaptive
  batching](https://mlserver.readthedocs.io/en/latest/user-guide/adaptive-batching.html),
  to group inference requests together on the fly.
- Scalability with deployment in Kubernetes native frameworks, including
  [Seldon Core](https://docs.seldon.io/projects/seldon-core/en/latest/graph/protocols.html#v2-kfserving-protocol) and
  [KServe (formerly known as KFServing)](https://kserve.github.io/website/modelserving/v1beta1/sklearn/v2/), where
  MLServer is the core Python inference server used to serve machine learning
  models.
- Support for the standard [V2 Inference Protocol](https://docs.seldon.io/projects/seldon-core/en/latest/reference/apis/v2-protocol.html) on
  both the gRPC and REST flavours, which has been standardised and adopted by
  various model serving frameworks.
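As a rough sketch of how the parallel inference and adaptive batching features above are switched on: parallel inference is configured server-wide, while adaptive batching is enabled per model. The field names follow MLServer's settings reference, but the values here are only illustrative, and the `#` lines are annotations (JSON itself has no comments):

```
# settings.json (server-wide): pool of 4 inference workers
{ "parallel_workers": 4 }

# model-settings.json (per model): group up to 8 requests per batch,
# waiting at most 0.25 seconds for a batch to fill
{ "max_batch_size": 8, "max_batch_time": 0.25 }
```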

You can read more about the goals of this project on the [initial design
document](https://docs.google.com/document/d/1C2uf4SaAtwLTlBCciOhvdiKQ2Eay4U72VxAD4bXe7iU/edit?usp=sharing).
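Because MLServer speaks the V2 Inference Protocol, any V2-compliant client can call it. The snippet below is a minimal sketch of a V2 REST request in Python; the model name `my-model` and the default REST port `8080` are assumptions about your particular deployment:

```python
# Hypothetical client for a model named "my-model" served by MLServer on
# localhost:8080 -- adjust the host, port and model name for your deployment.
import json

# A minimal V2 Inference Protocol request body: a single FP32 tensor input,
# with its data in row-major flattened form.
payload = {
    "inputs": [
        {
            "name": "input-0",
            "shape": [1, 4],
            "datatype": "FP32",
            "data": [0.1, 0.2, 0.3, 0.4],
        }
    ]
}

# POST it to the V2 REST endpoint (requires a running server):
#   import requests
#   response = requests.post(
#       "http://localhost:8080/v2/models/my-model/infer",
#       json=payload,
#   )
#   print(response.json()["outputs"])
print(json.dumps(payload))
```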

## Usage

You can install the `mlserver` package by running:

```bash
pip install mlserver
```

Note that to use any of the optional [inference runtimes](#inference-runtimes),
you'll need to install the relevant package.
For example, to serve a `scikit-learn` model, you would need to install the
`mlserver-sklearn` package:

```bash
pip install mlserver-sklearn
```

For further information on how to use MLServer, you can check any of the
[available examples](#examples).
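As an illustration, a pre-packaged runtime is usually wired up through a `model-settings.json` file placed next to your model artifact. The file below is a hypothetical example for the `mlserver-sklearn` runtime; the model name and the `./model.joblib` path are placeholders:

```json
{
  "name": "my-sklearn-model",
  "implementation": "mlserver_sklearn.SKLearnModel",
  "parameters": {
    "uri": "./model.joblib"
  }
}
```

With that file in place, running `mlserver start .` from the same folder should expose the model over both REST and gRPC.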
|
||
## Inference Runtimes | ||
|
||
Inference runtimes allow you to define how your model should be used within | ||
MLServer. | ||
You can think of them as the **backend glue** between MLServer and your machine | ||
learning framework of choice. | ||
You can read more about [inference runtimes in their documentation | ||
page](./docs/runtimes/index.md). | ||
|
||
Out of the box, MLServer comes with a set of pre-packaged runtimes which let | ||
you interact with a subset of common frameworks. | ||
This allows you to start serving models saved in these frameworks straight | ||
away. | ||
However, it's also possible to **[write custom | ||
runtimes](./docs/runtimes/custom.md)**. | ||

Out of the box, MLServer provides support for:

| Framework     | Supported | Documentation                                                    |
| ------------- | --------- | ---------------------------------------------------------------- |
| Scikit-Learn  | ✅        | [MLServer SKLearn](./runtimes/sklearn)                           |
| XGBoost       | ✅        | [MLServer XGBoost](./runtimes/xgboost)                           |
| Spark MLlib   | ✅        | [MLServer MLlib](./runtimes/mllib)                               |
| LightGBM      | ✅        | [MLServer LightGBM](./runtimes/lightgbm)                         |
| CatBoost      | ✅        | [MLServer CatBoost](./runtimes/catboost)                         |
| Tempo         | ✅        | [`github.com/SeldonIO/tempo`](https://github.com/SeldonIO/tempo) |
| MLflow        | ✅        | [MLServer MLflow](./runtimes/mlflow)                             |
| Alibi-Detect  | ✅        | [MLServer Alibi Detect](./runtimes/alibi-detect)                 |
| Alibi-Explain | ✅        | [MLServer Alibi Explain](./runtimes/alibi-explain)               |
| HuggingFace   | ✅        | [MLServer HuggingFace](./runtimes/huggingface)                   |

## Supported Python Versions

🔴 Unsupported

🟠 Deprecated: to be removed in a future version

🟢 Supported

🔵 Untested

| Python Version | Status |
| -------------- | ------ |
| 3.7            | 🔴     |
| 3.8            | 🔴     |
| 3.9            | 🟢     |
| 3.10           | 🟢     |
| 3.11           | 🔵     |
| 3.12           | 🔵     |

## Examples

To see MLServer in action, check out [our full list of
examples](./docs/examples/index.md).
Below are a few selected examples showcasing how you can leverage MLServer to
start serving your machine learning models.

- [Serving a `scikit-learn` model](./docs/examples/sklearn/README.md)
- [Serving an `xgboost` model](./docs/examples/xgboost/README.md)
- [Serving a `lightgbm` model](./docs/examples/lightgbm/README.md)
- [Serving a `catboost` model](./docs/examples/catboost/README.md)
- [Serving a `tempo` pipeline](./docs/examples/tempo/README.md)
- [Serving a custom model](./docs/examples/custom/README.md)
- [Serving an `alibi-detect` model](./docs/examples/alibi-detect/README.md)
- [Serving a `HuggingFace` model](./docs/examples/huggingface/README.md)
- [Multi-Model Serving with multiple frameworks](./docs/examples/mms/README.md)
- [Loading / unloading models from a model repository](./docs/examples/model-repository/README.md)

## Developer Guide

### Versioning

Both the main `mlserver` package and the [inference runtime
packages](./docs/runtimes/index.md) try to follow the same versioning schema.
To bump the version across all of them, you can use the
[`./hack/update-version.sh`](./hack/update-version.sh) script.

We generally keep the version as a placeholder for an upcoming version.

For example:

```bash
./hack/update-version.sh 0.2.0.dev1
```

### Testing

To run all of the tests for MLServer and the runtimes, use:

```bash
make test
```

To run the tests for a single file, use something like:

```bash
tox -e py3 -- tests/batch_processing/test_rest.py
```
* [MLServer](README.md)
* [Getting Started](getting-started.md)
* [User Guide](user-guide/index.md)
  * [Content Types (and Codecs)](user-guide/content-type.md)
  * [OpenAPI Support](user-guide/openapi.md)
  * [Parallel Inference](user-guide/parallel-inference.md)
  * [Adaptive Batching](user-guide/adaptive-batching.md)
  * [Custom Inference Runtimes](user-guide/custom.md)
  * [Metrics](user-guide/metrics.md)
  * [Deployment](user-guide/deployment/README.md)
    * [Seldon Core](user-guide/deployment/seldon-core.md)
    * [KServe](user-guide/deployment/kserve.md)
  * [Streaming](user-guide/streaming.md)
* [Inference Runtimes](runtimes/README.md)
  * [SKLearn](runtimes/sklearn.md)
  * [XGBoost](runtimes/xgboost.md)
  * [MLflow](runtimes/mlflow.md)
  * [Spark MLlib](runtimes/mllib.md)
  * [LightGBM](runtimes/lightgbm.md)
  * [CatBoost](runtimes/catboost.md)
  * [Alibi-Detect](runtimes/alibi-detect.md)
  * [Alibi-Explain](runtimes/alibi-explain.md)
  * [HuggingFace](runtimes/huggingface.md)
  * [Custom](runtimes/custom.md)
* [Reference](reference/README.md)
  * [MLServer Settings](reference/settings.md)
  * [Model Settings](reference/model-settings.md)
  * [MLServer CLI](reference/cli.md)
  * [Python API](reference/python-api/README.md)
    * [MLModel](reference/api/model.md)
    * [Types](reference/api/types.md)
    * [Codecs](reference/api/codecs.md)
    * [Metrics](reference/api/metrics.md)
* [Examples](examples/README.md)
  * [Serving Scikit-Learn models](examples/sklearn/README.md)
  * [Serving XGBoost models](examples/xgboost/README.md)
  * [Serving LightGBM models](examples/lightgbm/README.md)
  * [Serving MLflow models](examples/mlflow/README.md)
  * [Serving a custom model](examples/custom/README.md)
  * [Serving Alibi-Detect models](examples/alibi-detect/README.md)
  * [Serving HuggingFace Transformer Models](examples/huggingface/README.md)
  * [Multi-Model Serving](examples/mms/README.md)
  * [Model Repository API](examples/model-repository/README.md)
  * [Content Type Decoding](examples/content-type/README.md)
  * [Custom Conda environments in MLServer](examples/conda/README.md)
  * [Serving a custom model with JSON serialization](examples/custom-json/README.md)
  * [Serving models through Kafka](examples/kafka/README.md)
  * [Streaming](examples/streaming/README.md)
  * [Deploying a Custom Tensorflow Model with MLServer and Seldon Core](examples/cassava/README.md)
* [Changelog](changelog.md)