Home
Build and install Minerva and Owl as described in Install Minerva (strongly recommended). In this wiki, we will mainly use the Python interface for demonstration.
Enter `./run_owl_shell.sh` in Minerva's root directory, then type:
>>> x = owl.ones([10, 5])
>>> y = owl.ones([10, 5])
>>> z = x + y
>>> z.to_numpy()
The result will be a 10x5 array filled with the value 2. Minerva supports many ndarray operations; see the API documentation for more information.
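For comparison, the same computation in plain NumPy produces the identical result, which is also what `z.to_numpy()` returns in the session above:

```python
# Plain-NumPy equivalent of the owl session above: elementwise
# addition of two 10x5 arrays of ones yields an array of 2s.
import numpy as np

x = np.ones((10, 5))
y = np.ones((10, 5))
z = x + y
print(z.shape)  # (10, 5)
```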
Before using Minerva in your own applications, you need the following API calls:
- System Initialization: this call must precede any other `owl` API calls.
  - On Python: `owl.initialize(sys.argv)`
  - On C++: `MinervaSystem::Initialize(int argc, char** argv)`
- Device Creation: at least one CPU or GPU device must be created before any ndarray function calls.
  - On Python: `owl.create_cpu_device()`, `owl.create_gpu_device(gpuid)`
  - On C++: `MinervaSystem::Instance().CreateCpuDevice()`, `MinervaSystem::Instance().CreateGpuDevice(int gpuid)`
  - More about devices can be found in the wiki page about multi-GPU training.
A typical Minerva-driven application therefore starts like the following (in Python):
import owl
import sys
owl.initialize(sys.argv)
gpu = owl.create_gpu_device(0)
owl.set_device(gpu)
...
<application logic>
...
Minerva allows you to write your own machine learning code using an ndarray interface just like Matlab or NumPy. You can use C++ or Python, whichever you prefer; the two interfaces are quite similar. With Python, you can load data with NumPy and use it in Minerva, or convert Minerva NArrays into NumPy arrays and plot/print them with the tools NumPy provides.
The NArray interface provided by Minerva is very intuitive. If you are familiar with matrix programming tools such as Matlab or NumPy, it should be easy to get started with Minerva. More detailed documents will be available soon.
Minerva allows you to use multiple GPUs at the same time. Using the `set_device` function, you can specify which device an operation should run on. Once set, all subsequent operations/statements are performed on that device. This simple primitive gives you the flexibility to parallelize across multiple devices (either CPU or GPU).
Minerva uses lazy evaluation, meaning that operations are carried out only when necessary. For example, when you write `c = a + b`, the matrix addition is not performed immediately. Instead, a dependency graph is constructed internally to track the dependency relationships. Once you evaluate the matrix `c`, either by printing some of its elements or by calling `c.WaitForEval()`, Minerva looks up the dependency graph and carries out all computations specified by the graph. In this way, you can "push" multiple operations to different devices and then trigger the evaluation on both devices at the same time. This is how multi-GPU programming is done in Minerva. Please refer to the wiki page for more details.
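The core mechanism can be illustrated with a minimal sketch: expressions build thunks instead of computing, and work happens only when the result is requested. This `LazyArray` class is a toy model for exposition, not Minerva's actual data structure:

```python
# Minimal sketch of lazy evaluation: operations record how to compute
# their result, and computation is deferred until the value is needed.
# Toy model only; Minerva builds a full dependency graph internally.
import numpy as np

class LazyArray:
    def __init__(self, compute):
        self._compute = compute  # thunk that produces this node's value
        self._value = None       # cached result after first evaluation

    @classmethod
    def ones(cls, shape):
        return cls(lambda: np.ones(shape))

    def __add__(self, other):
        # No arithmetic happens here; we only record the dependency.
        return LazyArray(lambda: self.to_numpy() + other.to_numpy())

    def to_numpy(self):
        # Evaluation is triggered on demand, then cached.
        if self._value is None:
            self._value = self._compute()
        return self._value

a = LazyArray.ones((10, 5))
b = LazyArray.ones((10, 5))
c = a + b              # nothing is computed yet
result = c.to_numpy()  # the addition actually runs here
```

In Minerva, the deferred work forms a graph rather than a chain of thunks, which is what lets operations queued on different devices execute concurrently once evaluation is triggered.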
To understand more about Minerva, we recommend:
- Step by Step Walkthrough: MNIST
- Step by Step Walkthrough: AlexNet
- Feature Highlight: Data-flow and lazy evaluation
- Feature Highlight: Multi-GPU training
- Integrate with PS
We also welcome any contributions to Minerva and the FAQ.