ARMCI-MPI is a completely rewritten implementation of the ARMCI one-sided communication interface that uses MPI RMA for one-sided communication.
ARMCI-MPI is nearly feature-complete with respect to ARMCI as provided in Global Arrays 5.0 but, due to recent refactoring, does not work with Global Arrays 5.1. However, when used with Global Arrays 5.0, ARMCI-MPI passes all of the valid ARMCI and Global Arrays tests (some tests are invalid due to incorrect usage or assumptions). As of Global Arrays 5.2, ARMCI-MPI can easily be configured to work with GA, and all of the valid tests pass.
Finally, and of utmost importance to the chemistry community, NWChem (versions 5.1.1, 6.0, and 6.3) has been tested extensively with ARMCI-MPI across a wide variety of systems. It is used extensively on Blue Gene/P and Blue Gene/Q systems at ALCF and on Cray XE6 and XC30 systems at NERSC. On InfiniBand, substantially larger jobs are possible than with the native conduit due to MPI's more careful use of registered pages.
Jim Dinan wrote ARMCI-MPI with assistance from Pavan Balaji and Jeff Hammond.
Jeff Hammond took over development responsibilities in May 2013 and wrote the MPI-3 implementation.
Please see the first ARMCI-MPI paper (preprint) from IPDPS 2012 for a detailed description and performance evaluation of the MPI-2 implementation.
The implementation details of MPI-3 RMA in MPICH can be found in this paper.
This is not an exhaustive list, but here are some scientific achievements enabled by ARMCI-MPI:
- Lucas Koziol and Miguel A. Morales. J. Chem. Phys. 140, 224316 (2014). A Fixed-Node Diffusion Monte Carlo Study of the 1,2,3-Tridehydrobenzene Triradical.
- Yuki Kurashige, Garnet Kin-Lic Chan and Takeshi Yanai. Nature Chemistry 5, 660–666 (2013). Entangled quantum electronic wavefunctions of the Mn4CaO5 cluster in photosystem II.
The ARMCI-MPI repo recently moved to http://git.mpich.org/armci-mpi.git/. You can obtain the source by cloning the repository like this:
```
git clone git://git.mpich.org/armci-mpi.git || git clone http://git.mpich.org/armci-mpi.git
```
We do not provide downloadable tarballs at the moment; you can request one if you need it. In most cases, the challenge of building from the Git repo is out-of-date Autotools. This issue can be solved using the information found here.
Please modify configure options as appropriate...
```
./configure CC=mpicc --enable-g --prefix=$HOME/INSTALL
make
make check
make install
```
See NWChem for complete details on building NWChem against ARMCI-MPI.
If you would like to report a bug or feature request, please see the Trac page.
- Switch to the mpi3rma branch of ARMCI-MPI.
We have tested against the following implementations:
- MPICH 3.0.4 and later on Mac, Linux SMPs and SGI SMPs.
- MVAPICH2 2.0a and later on Linux InfiniBand clusters.
- CrayMPI 6.1.0 and later on Cray XC30.
- SGI MPT 2.09 on SGI SMPs.
- OpenMPI development version on Mac.
Both of these issues were observed with MVAPICH2 2.0; if you are using a different version, you may want to leave these variables unset (i.e., use the defaults) until you encounter a problem. As always, the MVAPICH users (discuss) list is an excellent place for questions related to MVAPICH problems.
While we think the following issues are network-dependent, they may not be. It is merely that we have only observed these issues on systems that have these networks; however, this could be a coincidence.
When a GA code uses a large fraction of the available memory, perhaps half or more, you will likely need the following environment variable:
```
MV2_USE_LAZY_MEM_UNREGISTER=0
```
Use one of the following for now:
```
MV2_USE_SLOT_SHMEM_COLL=0
```
or
```
MV2_SHMEM_COLL_NUM_COMM=$N
```
where N is the maximum number of MPI_Win objects you will use, which might need to be 1000 in the case of GA codes.
For some reason, mpicc does not provide the necessary headers or libraries to run with MPI-3 support. One has to set them manually.
```
~/ARMCI-MPI/git/build-sgi-mpt-2.09> module load mpt && \
  ../configure CC=gcc \
    CFLAGS="-I/sw/sdev/mpt-x86_64/2.09-p11049/include" \
    LDFLAGS="-L/sw/sdev/mpt-x86_64/2.09-p11049/lib -lmpi" \
    --enable-g && \
  make -j16 check MPIEXEC="mpirun -np 4"
```
On shared-memory machines like SGI UV1000, we find that MPICH is faster than MPT. We have no experience with MPT on any other SGI platform.
You have to disable datatypes. This may or may not be done automatically by ARMCI-MPI in the MPI-2 branch (master at the moment).
See https://svn.open-mpi.org/trac/ompi/ticket/4438 for the latest status with OpenMPI. As of 27 March 2014, the MPI-3 version of ARMCI-MPI works when datatypes are avoided. Avoidance of datatypes is set as the default when ARMCI-MPI detects that it is being compiled against OpenMPI, so you should only see a problem if you explicitly request datatypes at runtime.
Users of ARMCI-MPI should subscribe to [email protected].
If you are a developer of ARMCI-MPI, you will be subscribed to the developer list.