Currently `MPI.Isend` is called from `end_ghost_exchange`. This is probably not a huge issue on GPU, as the asynchronous kernel launches mean that `end_ghost_exchange` will be called immediately after the compute kernels are launched. However, on CPU the scheduler will probably run the compute kernels to completion before it gets to running `end_ghost_exchange` (in fact this must be happening, as there is no `yield` between the `Isend` and the blocking `Waitall!` on the receive requests). I suspect this is causing scaling problems for CPU performance.
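For illustration, the pattern looks roughly like this (a sketch only; names such as `send_bufs`, `recv_reqs`, and `neighbor_ranks` are placeholders, not the actual code):

```julia
using MPI

function end_ghost_exchange(send_bufs, recv_reqs, neighbor_ranks, comm)
    # The sends are only posted here, after the compute kernels have been launched.
    send_reqs = [MPI.Isend(buf, rank, 0, comm)
                 for (buf, rank) in zip(send_bufs, neighbor_ranks)]
    # Blocking wait on the receives: nothing yields to the Julia scheduler
    # between the Isend calls and this wait, so any compute tasks on this
    # thread must already have run to completion by the time we block here.
    MPI.Waitall!(recv_reqs)
    MPI.Waitall!(send_reqs)
end
```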
An alternative would be to schedule the `Waitall!` on a different thread (see JuliaParallel/MPI.jl#452).
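A minimal sketch of that alternative, assuming Julia is started with multiple threads (e.g. `julia -t 2`) and the MPI library supports `MPI_THREAD_MULTIPLE` (function name and arguments are again hypothetical):

```julia
using MPI

# MPI must be initialized for multi-threaded use for this to be safe:
MPI.Init_thread(MPI.THREAD_MULTIPLE)

function end_ghost_exchange_threaded(send_bufs, recv_reqs, neighbor_ranks, comm)
    send_reqs = [MPI.Isend(buf, rank, 0, comm)
                 for (buf, rank) in zip(send_bufs, neighbor_ranks)]
    # Hand the blocking wait to another thread, so this thread's scheduler
    # stays free to run the compute tasks concurrently with communication.
    return Threads.@spawn begin
        MPI.Waitall!(recv_reqs)
        MPI.Waitall!(send_reqs)
    end
end

# The caller waits on the returned task only when the halo data is needed:
# wait(end_ghost_exchange_threaded(send_bufs, recv_reqs, neighbor_ranks, comm))
```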
cc: @kpamnany @vchuravy