HPAT

Gitter chat: https://gitter.im/IntelLabs/hpat · Travis CI: https://travis-ci.org/IntelLabs/hpat · Coveralls: https://coveralls.io/github/IntelLabs/hpat

A compiler-based framework for big data in Python

High Performance Analytics Toolkit (HPAT) automatically scales analytics/ML code in Python to bare-metal cluster/cloud performance. It compiles a subset of Python (Pandas/NumPy) to efficient parallel binaries with MPI, requiring only minimal code changes. HPAT is orders of magnitude faster than alternatives like Apache Spark.

HPAT's documentation can be found here.
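As an illustration of the Pandas subset, here is a minimal sketch (the column name and sizes are hypothetical, and it assumes DataFrame construction and column reductions fall within HPAT's supported subset):

import hpat
import numpy as np
import pandas as pd

@hpat.jit
def mean_of_column(n):
    # HPAT distributes both the random data and the reduction across MPI ranks.
    df = pd.DataFrame({'A': np.random.ranf(n)})
    return df.A.mean()

print(mean_of_column(10**7))

The same function also runs as ordinary Pandas code if the @hpat.jit decorator is removed.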

Installation

HPAT can be installed easily in an Anaconda environment (Linux/Mac/Windows):

conda create -n HPAT -c ehsantn -c numba -c anaconda -c conda-forge hpat

Windows installation requires Intel MPI to be installed.
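To quickly check the install, activate the newly created environment and import the package (a minimal sanity check):

conda activate HPAT
python -c "import hpat"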

Docker Container

An HPAT docker image is also available for running containers. For example:

docker run -it ehsantn/hpat bash
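To run a script from the host inside the container, the working directory can be mounted (an illustrative command; it assumes mpiexec and an HPAT-enabled python are on the image's default PATH):

docker run -it -v $(pwd):/work ehsantn/hpat bash -c "cd /work && mpiexec -n 4 python pi.py"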

Example

Here is a Pi calculation example in HPAT. It uses Monte Carlo estimation: the fraction of random points in [-1, 1]² that fall inside the unit circle approximates π/4.

import hpat
import numpy as np
import time

@hpat.jit
def calc_pi(n):
    t1 = time.time()
    x = 2 * np.random.ranf(n) - 1   # n random points in [-1, 1]
    y = 2 * np.random.ranf(n) - 1
    pi = 4 * np.sum(x**2 + y**2 < 1) / n   # fraction inside unit circle ~= pi/4
    print("Execution time:", time.time() - t1, "\nresult:", pi)
    return pi

calc_pi(2 * 10**8)

Save this in a file named pi.py and run (on 8 cores):

mpiexec -n 8 python pi.py

This should demonstrate about a 100x speedup compared to the regular Python version without @hpat.jit and mpiexec.
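For reference, the baseline in that comparison is the same computation run as a single-process NumPy script (a sketch, saved e.g. as pi_plain.py, run with plain python pi_plain.py):

import numpy as np
import time

def calc_pi(n):
    t1 = time.time()
    x = 2 * np.random.ranf(n) - 1   # same sampling as above, but serial
    y = 2 * np.random.ranf(n) - 1
    pi = 4 * np.sum(x**2 + y**2 < 1) / n
    print("Execution time:", time.time() - t1, "\nresult:", pi)
    return pi

calc_pi(2 * 10**8)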

References

These academic papers describe the underlying methods in HPAT: