[ODK] Exposing to SageMath
MPI is not threads: we cannot fork the program at a certain point and expect it to distribute the following computations.
If the end user wants to do complicated things and then call `det`, I would expect to be able to distribute the `det` only.
But it would require the user to launch the program with `mpirun`, doing the same bunch of stuff on each node, or managing to specify that the first part is only for the master node.
Using threads would limit us to non-distributed computation.
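For illustration, a minimal plain-MPI sketch (not LinBox API) of the constraint described above: every process launched by `mpirun` executes `main()` from the top, so master-only work must be guarded by an explicit rank check.

```cpp
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        // The "first part", intended for the master node only.
        std::printf("master: building the matrix...\n");
    }

    // Distributed part: every rank participates.
    MPI_Barrier(MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}
```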
@ClementPernet: This is the bare minimum we should aim at, and it would be a fallback solution in case we can't get any distributed code to run through Sage (which may happen).
The user could encapsulate their code to mark what should run on the master node, and is required to have launched the program through `mpirun`, otherwise nothing runs.
Code sample:
```cpp
LINBOX_MPI_INIT();

LinBox::Matrix A;  // note: `LinBox::Matrix A();` would declare a function, not a matrix

LINBOX_MPI_IF_MASTER_BEGIN();
// Complicated LinBox code computing A...
LINBOX_MPI_IF_MASTER_END();

// Considers A on the master as the only legit copy, and distributes it internally.
auto d = det(A, ParallelHelper::MPI);

LINBOX_MPI_FINALIZE();
```
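A hypothetical expansion of those macros, assuming they are thin wrappers over standard MPI calls (none of these names exist in LinBox today; this is only a sketch of the idea):

```cpp
#include <mpi.h>

// Hypothetical helper macros; assumed to wrap plain MPI calls.
#define LINBOX_MPI_INIT()     MPI_Init(nullptr, nullptr)
#define LINBOX_MPI_FINALIZE() MPI_Finalize()

// Brackets a region that only rank 0 (the master) executes.
#define LINBOX_MPI_IF_MASTER_BEGIN() \
    { int _lb_rank = 0; MPI_Comm_rank(MPI_COMM_WORLD, &_lb_rank); \
      if (_lb_rank == 0) {
#define LINBOX_MPI_IF_MASTER_END() } }
```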
@ClementPernet: This assumes that there is one monolithic binary being run. Sage is mostly a Python program which runs binaries (David, correct me if I'm wrong), so we may not need this option. Instead we could have the complicated non-distributed code run first, and then have Sage launch an mpirun instance of the distributed code.
We could make a binary of the `det` solution that takes some matrix as input (the filename below) and outputs the result.
The user could simply call:
```cpp
// Complicated LinBox code computing A...
auto d = det(A, ParallelHelper::MPI);
```
The user does not have to worry about anything here.
Internally, this would call the binary through `mpirun /usr/bin/linbox-det tmp.mat`.
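As a sketch, such a `linbox-det` binary could look like the following; `read_matrix` and `distributed_det` are placeholder stubs standing in for real LinBox routines, and the whole thing is only an assumption about how the proposal might be wired up.

```cpp
#include <mpi.h>
#include <cstdio>
#include <vector>

// Stub: a real version would parse the matrix file given on the command line.
std::vector<double> read_matrix(const char* /*path*/, int& n) {
    n = 2;
    return {1.0, 2.0, 3.0, 4.0};
}

// Stub: a real version would distribute the determinant computation across ranks.
double distributed_det(const std::vector<double>& A, int n, MPI_Comm comm) {
    double d = 0.0;
    int rank = 0;
    MPI_Comm_rank(comm, &rank);
    if (rank == 0 && n == 2)
        d = A[0] * A[3] - A[1] * A[2];
    MPI_Bcast(&d, 1, MPI_DOUBLE, 0, comm);
    return d;
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (argc < 2) {
        if (rank == 0) std::fprintf(stderr, "usage: linbox-det <matrix-file>\n");
        MPI_Finalize();
        return 1;
    }

    int n = 0;
    std::vector<double> A = read_matrix(argv[1], n);
    double d = distributed_det(A, n, MPI_COMM_WORLD);

    if (rank == 0)
        std::printf("%.17g\n", d);  // rank 0 prints; the caller reads this back

    MPI_Finalize();
    return 0;
}
```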
We need the user to have configured the classic OpenMPI environment variables, or to have new LinBox config info providing extra arguments that decide which nodes to use.
Meanwhile, is the main program sleeping, waiting for the result?
See: https://stackoverflow.com/a/11185654
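On the caller side, here is a sketch of how `det(A, ParallelHelper::MPI)` could block on the spawned run, assuming POSIX `popen()` (as in the answer linked above); `det_via_mpirun` is a hypothetical helper, and node selection is left to the usual OpenMPI mechanisms (hostfile, environment variables):

```cpp
#include <cstdio>
#include <stdexcept>
#include <string>

double det_via_mpirun(const std::string& matrix_file) {
    // Hypothetical binary path taken from the example above.
    std::string cmd = "mpirun /usr/bin/linbox-det " + matrix_file;

    FILE* pipe = popen(cmd.c_str(), "r");
    if (!pipe)
        throw std::runtime_error("failed to launch mpirun");

    double d = 0.0;
    // fscanf blocks here until linbox-det prints its result, so the main
    // program indeed sleeps while waiting.
    if (std::fscanf(pipe, "%lf", &d) != 1) {
        pclose(pipe);
        throw std::runtime_error("no result from linbox-det");
    }
    pclose(pipe);
    return d;
}
```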
@ClementPernet: This is more or less similar to my version of proposal 2, except that I do not quite see where the border is between the binary executable and the compiled library.
@ClementPernet: We should investigate mpi4py or other Python/MPI packages, to see if they could meet our needs.