This package contains some tools to integrate the Spark computing framework with the popular scikit-learn machine learning library. Among other features, it can:
- train and evaluate multiple scikit-learn models in parallel. It is a distributed analog to the multicore implementation included by default in scikit-learn.
- convert Spark's DataFrames seamlessly into numpy `ndarray`s or sparse matrices (see the sketch below).
- (experimental) distribute SciPy's sparse matrices as a dataset of sparse vectors.
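As a concrete illustration of the conversion idea, here is a minimal sketch that uses plain pyspark, pandas, and scipy rather than this package's own converter API; the local SparkSession and the column names are assumptions for the example:

```python
from scipy.sparse import csr_matrix
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[2]").getOrCreate()

# A small Spark DataFrame of numeric features (hypothetical columns).
df = spark.createDataFrame([(0.0, 1.0), (2.0, 0.0)], ["x1", "x2"])

# Collect to the driver as a pandas DataFrame, then view it as a
# numpy ndarray ...
dense = df.toPandas().values
# ... or wrap the same data in a scipy sparse matrix.
sparse = csr_matrix(dense)
```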
It focuses on problems that have a small amount of data and that can be run in parallel.
- For small datasets, it distributes the search for estimator parameters (`GridSearchCV` in scikit-learn), using Spark.
- For datasets that do not fit in memory, we recommend using the distributed implementation in Spark MLlib.
NOTE: This package distributes simple tasks like grid-search cross-validation. It does not distribute individual learning algorithms (unlike Spark MLlib).
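For contrast, distributing the learning algorithm itself looks like the following in Spark MLlib; a minimal sketch, where the `training` DataFrame (with the conventional `features` and `label` columns) is an assumption:

```python
from pyspark.ml.classification import LogisticRegression

# MLlib parallelizes the fitting of a single model across the cluster;
# spark-sklearn instead fits many independent scikit-learn models in
# parallel, one per parameter combination.
lr = LogisticRegression(maxIter=10)
model = lr.fit(training)  # `training`: assumed DataFrame with `features`/`label`
```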
## Difference with the sparkit-learn project

The sparkit-learn project aims at a comprehensive integration between Spark and scikit-learn. In particular, it adds some primitives to distribute numerical data using Spark, and it reimplements some of the most common algorithms found in scikit-learn.
## License

This package is released under the Apache 2.0 license. See the LICENSE file.
## Installation

This package is available on PyPI:

```bash
pip install spark-sklearn
```

This project is also available as a Spark package.
The developer version has the following requirements:
- a recent release of scikit-learn. Releases 0.18.1 and 0.19.0 have been tested; older versions may work too.
- Spark >= 2.1.1. Spark may be downloaded from the official Spark website. In order to use this package, you need to use the pyspark interpreter or another Spark-compliant Python interpreter. See the Spark guide for more details.
- nose (testing dependency only)
- Pandas, if using the Pandas integration or testing. Pandas==0.18 has been tested.
If you want to use a developer version, you just need to make sure the `python/` subdirectory is on the `PYTHONPATH` when launching the pyspark interpreter:

```bash
PYTHONPATH=$PYTHONPATH:./python $SPARK_HOME/bin/pyspark
```
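Once the interpreter is running, a quick sanity check is to import the package; if this succeeds, the `python/` subdirectory is on the path as intended:

```python
# Run inside the pyspark shell started above.
from spark_sklearn import GridSearchCV
```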
## Running tests

You can directly run the tests:

```bash
cd python && ./run-tests.sh
```

This requires the environment variable `SPARK_HOME` to point to your local copy of Spark.
## Example

Here is a simple example that runs a grid search with Spark. See the Installation section above for how to install the package.

```python
from sklearn import svm, datasets
from spark_sklearn import GridSearchCV

# `sc` is the SparkContext, which is predefined in the pyspark shell.
iris = datasets.load_iris()
parameters = {'kernel': ('linear', 'rbf'), 'C': [1, 10]}
svr = svm.SVC()
clf = GridSearchCV(sc, svr, parameters)
clf.fit(iris.data, iris.target)
```
This classifier can be used as a drop-in replacement for any scikit-learn classifier, with the same API.
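For example, since the API matches scikit-learn's, the standard post-fit attributes of `GridSearchCV` are available on the fitted object:

```python
# Standard scikit-learn GridSearchCV attributes on the fitted estimator.
print(clf.best_params_)  # best parameter combination found, e.g. {'C': 1, 'kernel': 'rbf'}
print(clf.best_score_)   # mean cross-validated score of the best estimator
```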
## Documentation

API documentation is currently hosted on GitHub Pages. To build the docs yourself, see the instructions in docs/README.md.
## Changelog

- 2015-12-10 First public release (0.1)
- 2016-08-16 Minor release (0.2.0):
  - The official Spark target is Spark 2.0.
  - Support for keyed models.
- 2017-09-20 Minor release (0.2.2):
  - The official Spark target is Spark >= 2.1.
- 2017-09-29 Minor release (0.2.3):
  - Fixes the spark-package build of spark-sklearn.