So far, RegularizedLeastSquares.jl offers support for two types of multi-threading, both of which are transparent to the package.

The highest level of multi-threading occurs when a user simply creates multiple solvers in parallel (potentially with different parameters):
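Such a setup might look roughly like the following sketch (`operators` and `measurements` are placeholder inputs; the solver type and parameters are arbitrary examples):

```julia
using RegularizedLeastSquares

# Sketch: each task builds and runs its own, independent solver.
# `operators` and `measurements` are placeholders for user data.
solutions = Vector{Any}(undef, length(operators))
Threads.@threads for i in eachindex(operators)
    solver = createLinearSolver(CGNR, operators[i]; iterations = 32)
    solutions[i] = solve!(solver, measurements[i])
end
```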
We also support low-level multi-threading within the linear operators given to the package, which is completely transparent to us. MRIReco, for example, uses both of these multi-threading options (with `@floop` instead of the `Threads` version).

This PR takes advantage of the split between a solver and its state to add a middle level of multi-threading. This level applies the same solver (and its parameters) to multiple measurement vectors, or rather a measurement matrix `B`.

Solvers and their state are currently defined as follows:
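In simplified form, the split looks roughly like this (the field names and the `CGNR` example are illustrative, not the exact package definitions):

```julia
# Illustrative sketch of the solver/state split; field names are assumptions.
abstract type AbstractLinearSolver end
abstract type AbstractSolverState{S} end

mutable struct CGNR{M, R} <: AbstractLinearSolver
    A::M                          # system matrix; not mutated during solve!
    reg::R                        # regularization terms
    state::AbstractSolverState    # mutable per-solve state, swappable
end

mutable struct CGNRState{T, Tv <: AbstractVector{T}} <: AbstractSolverState{CGNR}
    x::Tv          # current iterate
    r::Tv          # residual
    iteration::Int
end
```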
While both the solver and its state are mutable structs, the solver struct is intended to be immutable during a `solve!` call. This allows us to simply copy the state, initialize each copy with a slice of `B`, and then perform iterations on each state separately. Since the solver has an `AbstractSolverState` as a field, we can exchange this for a new state holding all the usual states, one per slice. While this is technically a type instability, the iterate method always dispatches on the state type as well, so we are not affected by the type instability in our hot loops.

To make this feature optional and hackable for users, I've introduced a new keyword
`scheduler` for the `solve!` call, which gets passed on to the `init!` call if a user provides a measurement matrix. The scheduler defaults to a simple sequential implementation without multi-threading. Out of the box, we also support multi-threading via `Threads.@threads` with `scheduler = MultiThreadingState`. The scheduler is treated as a callable that gets invoked with a copy of all necessary states.

As an example, I sketch how to implement a custom `@floop` scheduler:

A solver is also not locked into always doing multi-threading once it has been passed a matrix; it seamlessly switches back to normal execution when given a vector.
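The custom `@floop` scheduler mentioned above could be sketched roughly as follows (the type name and the exact integration hooks are assumptions; the built-in `MultiThreadingState` follows the same pattern with `Threads.@threads`):

```julia
using FLoops

# Sketch of a custom scheduler state holding one copied state per slice of B.
# Supertype and hook names are assumptions based on the description above.
mutable struct FloopState{S, ST <: AbstractSolverState{S}} <: AbstractSolverState{S}
    states::Vector{ST}   # one state per column of B
    active::Vector{Bool} # tracks which slices are still iterating
end
FloopState(states::Vector{ST}) where {S, ST <: AbstractSolverState{S}} =
    FloopState{S, ST}(states, fill(true, length(states)))

# Iterate all still-active slice states in parallel with @floop.
function Base.iterate(solver::AbstractLinearSolver, state::FloopState)
    activeIdx = findall(state.active)
    isempty(activeIdx) && return nothing
    @floop for i in activeIdx
        res = iterate(solver, state.states[i])
        isnothing(res) && (state.active[i] = false)
    end
    return state.active, state
end
```

It would then be invoked as `solve!(solver, B; scheduler = FloopState)`, mirroring the built-in `scheduler = MultiThreadingState`.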
Note that switching out the state, as is done here, also allows one to define special variants of a solver, as is done in this PR.
Furthermore, we can now also define special algorithm variants that work directly on `B` rather than on slices of it. An example of this could be an implementation of Kaczmarz as described in this work.