The timings below show that for small matrices, using an `MMatrix` as the argument of `exponential!` is faster than using a regular `Matrix`; however, the `MMatrix` version allocates more. Is this the expected behaviour?

`Matrix`:  0.000037 seconds (1 allocation: 80 bytes)
`MMatrix`: 0.000006 seconds (6 allocations: 416 bytes)
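A minimal sketch of this kind of comparison, assuming ExponentialUtilities.jl and StaticArrays.jl with a 4×4 Float64 matrix (the issue's original snippet is not reproduced in this excerpt):

```julia
using ExponentialUtilities, StaticArrays

A  = rand(4, 4)          # regular Matrix
Am = MMatrix{4,4}(A)     # statically sized, mutable copy

exponential!(copy(A)); exponential!(copy(Am))   # warm up / compile both paths

B  = copy(A);  @time exponential!(B)    # Matrix path (BLAS/LAPACK)
Bm = copy(Am); @time exponential!(Bm)   # MMatrix path
```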
```julia
using LinearAlgebra

function ldiv_for_generated!(C, A, B) # C = A \ B. Called from generated code
    F = lu!(A) # This allocation is unavoidable, due to the interface of LinearAlgebra
    ldiv!(F, B) # Result stored in B
    if pointer_from_objref(C) != pointer_from_objref(B) # Aliasing allowed
        copyto!(C, B)
    end
    return C
end
```
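A quick usage sketch with hypothetical inputs (it assumes the definition above; note that both `A` and `B` are overwritten by the call):

```julia
A, B = rand(4, 4), rand(4, 4)
C = similar(B)
A0, B0 = copy(A), copy(B)    # keep originals for checking, since A and B are mutated
ldiv_for_generated!(C, A, B)
@assert C ≈ A0 \ B0          # C now holds the solution of the original system
```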
The array version has an allocation due to BLAS. This can be removed by using LinearSolve.jl, but note that there are sub-allocations there that are not tracked by the GC because they live in the BLAS workspace.
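A hedged sketch of LinearSolve.jl's caching interface, which keeps the factorization workspace alive across solves (exact API details may vary by version):

```julia
using LinearSolve

A = rand(4, 4)
b = rand(4)

cache = init(LinearProblem(A, b))   # factorization workspace allocated once
sol   = solve!(cache)               # computes A \ b

cache.b = rand(4)                   # swap in a new right-hand side
sol = solve!(cache)                 # re-solves without reallocating the workspace
```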
The `MMatrix` version uses GenericSchur.jl. You can see the extra allocations here:

It would be nice to improve the `alloc_mem` interface to allocate those up front, and then extend the GenericSchur.jl `balance!` function so that they can be passed in. If anyone has the time to do that, this is a nice, straightforward issue for squeezing out a bit more performance.
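For reference, a sketch of the existing preallocation hook (following the ExponentialUtilities.jl README); the suggestion above would extend the cache built by `alloc_mem` so that the GenericSchur.jl workspaces used on the `MMatrix` path could be passed in as well:

```julia
using ExponentialUtilities

A      = rand(4, 4)
method = ExpMethodHigham2005()
cache  = ExponentialUtilities.alloc_mem(A, method)

exponential!(A, method, cache)   # reuses the preallocated cache instead of allocating internally
```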