ServeIt lets you serve model predictions and supplementary information from a RESTful API using your favorite Python ML library in as little as one line of code:
```python
from serveit.server import ModelServer
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# fit a logistic regression classifier on the Iris data
data = load_iris()
clf = LogisticRegression()
clf.fit(data.data, data.target)

# initialize a server with the model and start serving predictions
ModelServer(clf, clf.predict).serve()
```
Your new API is now accepting POST requests at `localhost:5000/predictions`! Please see the examples directory for detailed examples across domains (e.g., regression, image classification), including live examples.
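Once the server is running, predictions can be requested with any HTTP client. Here is a minimal sketch of such a call using only the standard library; the payload shape (a raw JSON list of feature rows) is an assumption here, so consult the examples directory for the authoritative request schema.

```python
import json
from urllib import request

# One iris observation: sepal length/width, petal length/width.
sample = [[5.1, 3.5, 1.4, 0.2]]

# Assumed payload shape: the JSON body is the feature array itself.
body = json.dumps(sample).encode("utf-8")
req = request.Request(
    "http://localhost:5000/predictions",
    data=body,
    headers={"Content-Type": "application/json"},
)

# Uncomment with the server above running locally:
# response = request.urlopen(req)
# print(json.loads(response.read()))
```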
Current ServeIt features include:
- Model inference serving via RESTful API endpoint
- Extensible library for inference-time data loading, preprocessing, input validation, and postprocessing
- Supplementary information endpoint creation
- Automatic JSON serialization of responses
- Configurable request and response logging (work in progress)
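To illustrate the kind of inference-time validation and postprocessing the library is designed to accommodate, here is a concept sketch that composes both around the classifier into a single callable, which could then be served in place of the bare `clf.predict`. The function names below are illustrative assumptions, not ServeIt API.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

data = load_iris()
clf = LogisticRegression(max_iter=200).fit(data.data, data.target)

def validate(features):
    # input validation: require a 2-D array with the 4 iris features
    arr = np.asarray(features, dtype=float)
    if arr.ndim != 2 or arr.shape[1] != 4:
        raise ValueError("expected shape (n_samples, 4)")
    return arr

def predict_names(features):
    # postprocessing: map integer class predictions to species names
    arr = validate(features)
    return [str(data.target_names[i]) for i in clf.predict(arr)]

print(predict_names([[5.1, 3.5, 1.4, 0.2]]))  # → ['setosa']
```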
The following libraries are currently supported:
- Scikit-Learn
- Keras
- PyTorch
Installation is easy with pip:

```shell
pip install serveit
```

You can also build and install from source:

```shell
python setup.py install
```
Please consider buying me a coffee if you like my work.