The batching system does a few things which make sense in typical ML use cases but reduce its usefulness as an efficient parameter-sweeping tool:
`VarAccessDuplication::DUPLICATE` variables are currently initialised once and the same value is copied across all batches. We should probably 'allocate' num things * num batches RNG streams and decide which ones to actually use on a variable-by-variable basis (controlled by a flag in `Models::VarInit`, perhaps)
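The per-batch initialisation idea above could look something like the following sketch. All names here (`VarInitSketch`, `perBatchInit`, `initialiseVar`) are hypothetical illustrations of the proposed flag, not GeNN's actual API: one RNG stream is allocated per (element, batch) pair, and the flag selects whether a variable draws from its own batch's stream or reuses batch 0's stream (reproducing the current copy-across-batches behaviour):

```cpp
#include <cstddef>
#include <random>
#include <vector>

// Hypothetical sketch of per-variable batch initialisation; names are
// illustrative, not GeNN's actual API.
struct VarInitSketch {
    bool perBatchInit;  // assumed flag, analogous to one on Models::VarInit
};

std::vector<float> initialiseVar(const VarInitSketch &init, size_t numElements,
                                 size_t numBatches, unsigned int baseSeed)
{
    std::vector<float> values(numElements * numBatches);
    for (size_t b = 0; b < numBatches; b++) {
        for (size_t i = 0; i < numElements; i++) {
            // Conceptually, num things * num batches streams exist; a variable
            // either uses its own batch's stream or always batch 0's stream,
            // in which case every batch receives identical values.
            const size_t stream = (init.perBatchInit ? b : 0) * numElements + i;
            std::mt19937 rng(baseSeed + static_cast<unsigned int>(stream));
            std::uniform_real_distribution<float> dist(0.0f, 1.0f);
            values[(b * numElements) + i] = dist(rng);
        }
    }
    return values;
}
```

In a real implementation the streams would of course be counter-based or pre-seeded device-side generators rather than freshly constructed `std::mt19937` objects, but the stream-selection logic is the point here.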
Currently connectivity (i.e. `ind`, `rowLength` etc.) has the equivalent of `VarAccessDuplication::SHARED`, as this fits with typical ML uses of batching. However, as long as the maximum row length etc. stays the same across all batches, there is no reason not to make this more flexible
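The shared-versus-duplicated distinction amounts to whether an array's index includes a batch stride. The sketch below is illustrative only (the struct and function names are not GeNN's actual data structures): `ind` and `rowLength` are stored once and indexed without a batch term, while a duplicated per-synapse variable adds a stride of one whole array per batch, which only works if `maxRowLength` is identical across batches:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch of batched sparse-connectivity indexing; names are
// illustrative, not GeNN's actual data structures.
struct SparseConnectivity {
    size_t numRows;
    size_t maxRowLength;                 // must be the same for all batches
    std::vector<unsigned int> ind;       // SHARED postsynaptic indices
    std::vector<unsigned int> rowLength; // SHARED row lengths
};

// Index into a SHARED array (ind, rowLength): no batch term.
size_t sharedIndex(const SparseConnectivity &c, size_t row, size_t col)
{
    return (row * c.maxRowLength) + col;
}

// Index into a DUPLICATED per-synapse variable (e.g. a weight): the batch
// contributes a stride of one full (numRows * maxRowLength) array.
size_t duplicatedIndex(const SparseConnectivity &c, size_t batch,
                       size_t row, size_t col)
{
    return (batch * c.numRows * c.maxRowLength) + sharedIndex(c, row, col);
}
```

Because the batch term only ever appears as an outer stride, relaxing `SHARED` connectivity to per-batch connectivity would mainly require duplicating `ind` and `rowLength` with the same stride, provided `maxRowLength` stays fixed.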