Implement a lazy identity operator #90
Actually, it's not super hard to get a first comparison for performance, if you just throw "lazy" identities into a `LazyTensor`:

```julia
using QuantumOpticsBase
using FillArrays
using BenchmarkTools

dims = [2, 3, 5, 4]
bs = GenericBasis.(dims)
bcomp = ⊗(bs...)
idx = 3
a = randoperator(bs[idx])

# A lazy identity: an Operator whose data is a FillArrays.Eye
function lazy_identity(b::Basis)
    n = length(b)
    return Operator(b, b, Eye{ComplexF64}(n))
end

# A: only the non-trivial factor, addressed by index
A = LazyTensor(bcomp, (idx,), (a,))

# B: the same product, but with explicit lazy identities on every factor
op_list = [i == idx ? a : lazy_identity(b) for (i, b) in enumerate(bs)]
B = LazyTensor(bcomp, Tuple(1:length(dims)), (op_list...,))

ψ = randstate(bcomp)
dψ1 = copy(ψ)
dψ2 = copy(ψ)

# test
QuantumOpticsBase.mul!(dψ1, A, ψ)
QuantumOpticsBase.mul!(dψ2, B, ψ)
dψ1 == dψ2  # true

bench1 = @benchmark QuantumOpticsBase.mul!($dψ1, $A, $ψ)
bench2 = @benchmark QuantumOpticsBase.mul!($dψ2, $B, $ψ)
```

That shows a slowdown of a factor of 2-3. Note that this will get worse with a larger number of Hilbert spaces. Still, it might be worth looking into optimizations here.
Could one give the `identityoperator` its own type? Then `LazyTensor` could dispatch on this and choose not to store identities as part of its `operators` field?
You can just define `const IdentityOperator{BL,BR} = Operator{BL,BR,<:FillArrays.Eye}` and dispatch on that. And yes, filtering them out on construction would keep things as performant as they are now. However, my idea would be to not do that, but rather skip over them in the `mul!` implementation.
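A minimal sketch of this alias-and-dispatch idea (the `isidentity` helper is a hypothetical name for illustration, not part of QuantumOpticsBase):

```julia
using QuantumOpticsBase
using FillArrays

# Alias for operators whose data is a lazy FillArrays.Eye
const IdentityOperator{BL,BR} = Operator{BL,BR,<:Eye}

# Hypothetical trait-style helper that construction or mul! code
# could use to filter out or skip over identity factors.
isidentity(::IdentityOperator) = true
isidentity(::AbstractOperator) = false

b = GenericBasis(4)
id_lazy = Operator(b, b, Eye{ComplexF64}(length(b)))
isidentity(id_lazy)          # true
isidentity(randoperator(b))  # false
```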
I like this approach! Also seems like FillArrays is a pretty lightweight dependency. I think I can add this special case to `LazyTensor`.
Big question for me is whether we (1) provide a new method such as `lazy_identityoperator`, or (2) change `identityoperator` itself to return the lazy version. Option 1 is clearly safer as it does not break any code that relies on `identityoperator` returning a sparse operator. Most code that uses `identityoperator` probably does not depend on the concrete data type, though.
I would argue for option 2, since this makes the contagious `LazyTensor` in #86 more efficient and natural to combine with the standard syntax (of course only if it's trivial to make sure nothing is broken by option 2).
Yeah, I agree that providing minimal surprise when tensoring `LazyTensor`s with identities is nice. As for breakage, I think external code is really the only issue. If it does cause problems, the fix is pretty trivial: convert the lazy identity back to a concrete sparse operator.
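For reference, that escape hatch might look like this (a sketch; it assumes the `sparse(op)` conversion exported by QuantumOpticsBase accepts an operator with lazy `Eye` data):

```julia
using QuantumOpticsBase
using FillArrays
using SparseArrays

b = GenericBasis(3)
# A lazy identity as proposed above
id_lazy = Operator(b, b, Eye{ComplexF64}(length(b)))

# External code that needs a concrete sparse matrix can convert explicitly
id_sparse = sparse(id_lazy)
```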
In favor of option 2 as well. Maybe we can start keeping a human-focused changelog for this type of stuff: https://keepachangelog.com/en/1.0.0/ I started keeping one midway through some of my projects (just marking the first tracked version as "changes not logged before this date"), and it has been a valuable reference.
Yeah, I'll give it a shot.
Implementation in #94 gets speed back up to where it should be.
The `identityoperator` function returns a sparse array. As the action of the identity matrix on states and operators is known, however, we could make this lazy. This may not appear to be a large change, but it could potentially have a large impact on the implementation of `LazyTensor` and its `mul!` methods. I'm not set on actually doing this, it's just an idea that I thought worth discussing.

We could, for example, use FillArrays to implement this. It exports an `Eye` type which lazily represents a matrix that is zero except on the diagonal, with corresponding `getindex` methods and such. We could just use that as the `.data` field when constructing `identityoperator`. Alternatively, if we don't want to add this dependency, we could also implement it ourselves (it's not super complicated), which would also give us more control over e.g. the `mul!` implementations. I'd prefer not duplicating work, however.

The big advantage I see in implementing this is that it might greatly simplify the implementation of `LazyTensor`. If identity operators are efficient and lazy, then we could remove the entire concept of `indices` from it. This would also simplify the underlying `mul!` methods.

The disadvantage I see is that this may have a negative impact on `LazyTensor` performance. Without further dispatches on the `Eye` matrix above, for example, we'd be doing a ton of unnecessary multiplication operations. Ultimately, the question whether this impacts performance negatively can only be answered by doing the implementation first, so I'm not sure if we want to invest in this.

One option I see for now is to just change the implementation of `identityoperator` to return a lazy `Eye` (or add a new method such as `lazy_identityoperator`) and leave the `LazyTensor` implementation untouched.