[BlockSparseArrays] BlockSparseArray functionality #1336
Thanks. This currently works:

```julia
julia> using NDTensors.BlockSparseArrays: Block, BlockSparseArray, blocks

julia> using LinearAlgebra: I

julia> a = BlockSparseArray{Float64}([2, 2], [2, 2])
2×2-blocked 4×4 BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}}, Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}:
 0.0  0.0  │  0.0  0.0
 0.0  0.0  │  0.0  0.0
 ──────────┼──────────
 0.0  0.0  │  0.0  0.0
 0.0  0.0  │  0.0  0.0

julia> a[Block(2, 2)] = I(3)
3×3 Diagonal{Bool, Vector{Bool}}:
 1  ⋅  ⋅
 ⋅  1  ⋅
 ⋅  ⋅  1

julia> a
2×2-blocked 4×4 BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}}, Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}:
 0.0  0.0  │  0.0  0.0
 0.0  0.0  │  0.0  0.0
 ──────────┼──────────
 0.0  0.0  │  1.0  0.0
 0.0  0.0  │  0.0  1.0

julia> using NDTensors.SparseArrayInterface: stored_indices

julia> stored_indices(blocks(a))
1-element Dictionaries.MappedDictionary{CartesianIndex{2}, CartesianIndex{2}, NDTensors.SparseArrayInterface.var"#1#2"{NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}}}, Tuple{Dictionaries.Indices{CartesianIndex{2}}}}
 CartesianIndex(2, 2) │ CartesianIndex(2, 2)
```

though using this alternative syntax is currently broken:

```julia
julia> a = BlockSparseArray{Float64}([2, 2], [2, 2])
2×2-blocked 4×4 BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}}, Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}:
 0.0  0.0  │  0.0  0.0
 0.0  0.0  │  0.0  0.0
 ──────────┼──────────
 0.0  0.0  │  0.0  0.0
 0.0  0.0  │  0.0  0.0

julia> a[Block(2), Block(2)] = I(3)
ERROR: DimensionMismatch: tried to assign (3, 3) array to (2, 2) block
Stacktrace:
 [1] setindex!(::BlockSparseArray{…}, ::Diagonal{…}, ::Block{…}, ::Block{…})
   @ BlockArrays ~/.julia/packages/BlockArrays/L5yjb/src/abstractblockarray.jl:165
 [2] top-level scope
   @ REPL[30]:1
Some type information was truncated. Use `show(err)` to see complete types.
```

I would have to think about whether it makes sense to support
In terms of that, I have a prototype of a QR decomposition of a
Also note that slicing like this should work right now: `a[Block(1, 1)[1:2, 1:2]]`, i.e. you can take slices within a specified block. See BlockArrays.jl for a reference on that slicing notation.
New feature request: I updated the first comment. Edit: FIXED

New issue: Edit: FIXED

New issue: Edit: FIXED
@ogauthe a number of these issues were fixed by #1332, I've updated the list in the first post accordingly. I added regression tests in #1360 for ones that still need to be fixed, and additionally added placeholder tests that I've marked as broken in the BlockSparseArrays tests. Please continue to update this post with new issues you find, and/or make PRs with broken behavior marked with
Feature request: Edit: FIXED

I think ideally

Alternatively,

Good question about whether or not the axes should get dualed if
The solution is to accept any `Axes<:Tuple{Vararg{<:AbstractUnitRange,N}}`. Then one can construct a

```julia
g1 = gradedrange([U1(0) => 1])
m1 = BlockSparseArray{Float64}(dual(g1), g1,)
```

outputs

Edit: FIXED
Thanks for investigating. That seems like the right move to generalize the axes in that way. Hopefully that error is easy enough to circumvent.
I am continuing to explore the effect of

Edit: FIXED
Issue: I cannot write a slice of a block: `a[BlockArrays.Block(1,1)][1:2,1:2] = ones((2,2))` does not write.
Issue:

```julia
a[BlockArrays.Block(1,1)] = ones((2,2))
println(LinearAlgebra.norm(a))                           # 2.0
a[BlockArrays.Block(1,1)][1, 1] = NaN
println(LinearAlgebra.norm(a[BlockArrays.Block(1,1)]))   # NaN
println(LinearAlgebra.norm(a))                           # AssertionError
```

I just checked that replacing
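The `AssertionError` above is consistent with a norm implementation that checks an internal invariant that `NaN` violates. As a sketch (not the NDTensors implementation; the `blocks` Dict and the `blockwise_norm` helper are hypothetical), computing the total norm as the norm of the per-block norms lets `NaN` propagate instead of asserting:

```julia
using LinearAlgebra: norm

# Hypothetical flat Dict-of-blocks storage standing in for a BlockSparseArray.
blocks = Dict((1, 1) => [1.0 1.0; 1.0 1.0])

# Total norm as the norm of the vector of per-block norms: a NaN entry
# propagates to the result instead of triggering an assertion error.
blockwise_norm(bs) = norm([norm(b) for b in values(bs)])

println(blockwise_norm(blocks))  # 2.0
blocks[(1, 1)][1, 1] = NaN
println(blockwise_norm(blocks))  # NaN
```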
Issue: a block can be written with an invalid shape. An error should be raised.

```julia
a = BlockSparseArray{Float64}([2, 3], [2, 3])
println(size(a))     # (5,5)
b = BlockArrays.Block(1,1)
println(size(a[b]))  # (2,2)
a[b] = ones((3,3))
println(size(a))     # (5,5)
println(size(a[b]))  # (3,3)
```

Edit: FIXED
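The fix amounts to validating the assigned array against the block's size before storing it. A minimal sketch with a Dict-backed store (the `SimpleBlockSparse` type and `setblock!` helper are hypothetical, not the NDTensors API):

```julia
# Hypothetical block-sparse container: stored blocks plus per-axis block sizes.
struct SimpleBlockSparse
    blocks::Dict{Tuple{Int,Int},Matrix{Float64}}
    blocksizes::Tuple{Vector{Int},Vector{Int}}
end

function setblock!(a::SimpleBlockSparse, b::Tuple{Int,Int}, v::AbstractMatrix)
    # Compare the incoming array against the size this block must have.
    expected = (a.blocksizes[1][b[1]], a.blocksizes[2][b[2]])
    size(v) == expected || throw(
        DimensionMismatch("tried to assign $(size(v)) array to $expected block"),
    )
    a.blocks[b] = Matrix(v)
    return a
end

a = SimpleBlockSparse(Dict{Tuple{Int,Int},Matrix{Float64}}(), ([2, 3], [2, 3]))
setblock!(a, (1, 1), ones(2, 2))    # ok, matches the (2, 2) block
# setblock!(a, (1, 1), ones(3, 3))  # would throw DimensionMismatch
```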
Thanks to #1467, I can now initialize a

```julia
using NDTensors.GradedAxes: GradedAxes, dual, gradedrange
using NDTensors.Sectors: U1
using NDTensors.BlockSparseArrays: BlockSparseArray

g1 = gradedrange([U1(0) => 1, U1(1) => 2, U1(2) => 3])
g2 = gradedrange([U1(0) => 2, U1(1) => 2, U1(3) => 1])
m1 = BlockSparseArray{Float64}(g1, GradedAxes.dual(g2));  # display crash
m2 = BlockSparseArray{Float64}(g2, GradedAxes.dual(g1));  # display crash
m12 = m1 * m2;  # MethodError
m21 = m2 * m1;  # MethodError
```

Edit: FIXED
When no dual axis is involved,

Edit: FIXED
Issue: display error when writing a block.

```julia
using BlockArrays: BlockArrays
using NDTensors.BlockSparseArrays: BlockSparseArrays
using NDTensors.GradedAxes: GradedAxes
using NDTensors.Sectors: U1

g = GradedAxes.gradedrange([U1(0) => 1])
m = BlockSparseArrays.BlockSparseArray{Float64}(g, g)
m[BlockArrays.Block(1,1)] .= 1
```

1×1 view(::NDTensors.BlockSparseArrays.BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}, BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}}, Tuple{BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}, BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}, BlockSlice(Block(1),1:1), BlockSlice(Block(1),1:1)) with eltype Float64 with indices NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(1, U(1)[0]):NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(1, U(1)[0]):NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(1, U(1)[0])×NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(1, U(1)[0]):NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(1, U(1)[0]):NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(1, U(1)[0]):
Error showing value of type SubArray{Float64, 2, NDTensors.BlockSparseArrays.BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}, BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}}, Tuple{BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}, BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}, Tuple{BlockArrays.BlockSlice{BlockArrays.Block{1, Int64}, UnitRange{Int64}}, BlockArrays.BlockSlice{BlockArrays.Block{1, Int64}, UnitRange{Int64}}}, false}:
ERROR: MethodError: no method matching NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(::Int64)
Closest candidates are:
(::Type{NDTensors.LabelledNumbers.LabelledInteger{Value, Label}} where {Value<:Integer, Label})(::Any, ::Any)
@ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/LabelledNumbers/src/labelledinteger.jl:2
(::Type{T})(::T) where T<:Number
@ Core boot.jl:792
(::Type{IntT})(::NDTensors.Block{1}) where IntT<:Integer
@ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/blocksparse/block.jl:63
...
Stacktrace:
[1] convert(::Type{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}, x::Int64)
@ Base ./number.jl:7
[2] cvt1
@ ./essentials.jl:468 [inlined]
[3] ntuple
@ ./ntuple.jl:49 [inlined]
[4] convert(::Type{Tuple{…}}, x::Tuple{Int64, Int64})
@ Base ./essentials.jl:470
[5] push!(a::Vector{Tuple{…}}, item::Tuple{Int64, Int64})
@ Base ./array.jl:1118
[6] alignment(io::IOContext{…}, X::AbstractVecOrMat, rows::Vector{…}, cols::Vector{…}, cols_if_complete::Int64, cols_otherwise::Int64, sep::Int64, ncols::Int64)
@ Base ./arrayshow.jl:76
[7] _print_matrix(io::IOContext{…}, X::AbstractVecOrMat, pre::String, sep::String, post::String, hdots::String, vdots::String, ddots::String, hmod::Int64, vmod::Int64, rowsA::UnitRange{…}, colsA::UnitRange{…})
@ Base ./arrayshow.jl:207
[8] print_matrix(io::IOContext{…}, X::SubArray{…}, pre::String, sep::String, post::String, hdots::String, vdots::String, ddots::String, hmod::Int64, vmod::Int64)
@ Base ./arrayshow.jl:171
[9] print_matrix
@ ./arrayshow.jl:171 [inlined]
[10] print_array
@ ./arrayshow.jl:358 [inlined]
[11] show(io::IOContext{…}, ::MIME{…}, X::SubArray{…})
@ Base ./arrayshow.jl:399
[12] (::REPL.var"#55#56"{REPL.REPLDisplay{REPL.LineEditREPL}, MIME{Symbol("text/plain")}, Base.RefValue{Any}})(io::Any)
@ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:273
[13] with_repl_linfo(f::Any, repl::REPL.LineEditREPL)
@ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:569
[14] display(d::REPL.REPLDisplay, mime::MIME{Symbol("text/plain")}, x::Any)
@ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:259
[15] display
@ ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:278 [inlined]
[16] display(x::Any)
@ Base.Multimedia ./multimedia.jl:340
[17] #invokelatest#2
@ ./essentials.jl:892 [inlined]
[18] invokelatest
@ ./essentials.jl:889 [inlined]
[19] print_response(errio::IO, response::Any, show_value::Bool, have_color::Bool, specialdisplay::Union{…})
@ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:315
[20] (::REPL.var"#57#58"{REPL.LineEditREPL, Pair{Any, Bool}, Bool, Bool})(io::Any)
@ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:284
[21] with_repl_linfo(f::Any, repl::REPL.LineEditREPL)
@ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:569
[22] print_response(repl::REPL.AbstractREPL, response::Any, show_value::Bool, have_color::Bool)
@ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:282
[23] (::REPL.var"#do_respond#80"{…})(s::REPL.LineEdit.MIState, buf::Any, ok::Bool)
@ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:911
[24] #invokelatest#2
@ ./essentials.jl:892 [inlined]
[25] invokelatest
@ ./essentials.jl:889 [inlined]
[26] run_interface(terminal::REPL.Terminals.TextTerminal, m::REPL.LineEdit.ModalInterface, s::REPL.LineEdit.MIState)
@ REPL.LineEdit ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/LineEdit.jl:2656
[27] run_frontend(repl::REPL.LineEditREPL, backend::REPL.REPLBackendRef)
@ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:1312
[28] (::REPL.var"#62#68"{REPL.LineEditREPL, REPL.REPLBackendRef})()
@ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:386
Some type information was truncated. Use `show(err)` to see complete types.

This looks like the same error as previously triggered by dual axes. Edit: FIXED
Thanks for the report, looks like it is more generally a problem printing views of blocks of BlockSparseArray with GradedUnitRange axes:

```julia
using BlockArrays: Block
using NDTensors.BlockSparseArrays: BlockSparseArray
using NDTensors.GradedAxes: gradedrange
using NDTensors.Sectors: U1

r = gradedrange([U1(0) => 1])
a = BlockSparseArray{Float64}(r, r)
@view a[Block(1, 1)]
```
It would be a really useful feature. For an interface, what about just:

which is very similar to the existing
That's definitely an interface we could consider. I don't think it technically causes an ambiguity issue, though we would not want to overload

I also worry it is a slight abuse of notation, since it has a slightly different meaning from

Interestingly,
Another consideration would be to define a more general macro. So then the syntax would be:

```julia
@! a[i, j]        # getindex!(a, i, j)
@! @view a[i, j]  # view!(a, i, j)
@! view(a, i, j)  # view!(a, i, j)
```

That would be a compelling reason to use `@!`. That would be useful in other places as well, such as when turning

```julia
o = OpSum()
for j in 1:10
  o += "X", j
end
```

into an in-place version:

```julia
o = OpSum()
@! for j in 1:10
  o += "X", j
end
```
I like the idea of that macro for things like OpSum. For the specific case of getting array / dictionary / tensor values, I wonder if the notation becomes too terse: would a reader of the code understand that the "in-place-ness" there means a default will be set if the element/block is missing, versus more general in-place functions that might modify data for other reasons?
This package: https://github.com/davidavdav/InplaceLinalg.jl uses

EDIT: See also:

For now though we could go with
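One way to think about `getindex!`/`view!` semantics is as `get!` for block storage: return the stored block if present, otherwise instantiate a zero block, store it, and return it. A minimal Dict-backed sketch (the `BlockStore` type and this `view!` are hypothetical illustrations, not the NDTensors definitions):

```julia
# Hypothetical block container: stored blocks plus per-axis block sizes.
struct BlockStore
    blocks::Dict{Tuple{Int,Int},Matrix{Float64}}
    blocksizes::Tuple{Vector{Int},Vector{Int}}
end

function view!(a::BlockStore, b::Tuple{Int,Int})
    # `get!` inserts and returns a zero block only when `b` is missing,
    # so repeated calls return the same stored block.
    return get!(a.blocks, b) do
        zeros(a.blocksizes[1][b[1]], a.blocksizes[2][b[2]])
    end
end

a = BlockStore(Dict{Tuple{Int,Int},Matrix{Float64}}(), ([2, 2], [2, 2]))
bl = view!(a, (2, 2))   # instantiates a zero block and stores it
bl .= 1.0               # mutates the stored block in place
```

The point of the design is that mutations through the returned block are visible in the parent store, which is what makes `@! a[i, j] .= 1` style code possible.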
Issue: there are compatibility issues between

I upgraded to v1.1 on the

Edit: FIXED
There is still an issue with views of blocks:

```julia
g = GradedAxes.gradedrange(['a' => 1])
bsa = BlockSparseArrays.BlockSparseArray{Float64}(g, g)
bsa[1, 1] = 1.0
b1 = view(bsa, BlockArrays.Block(1,1))  # block b1 has been initialized before
TensorAlgebra.contract(b1, (1, 2), ones((1, 1)), (2, 3))  # ArgumentError
```

ERROR: ArgumentError: No fallback for applying `zerovector!` to (values of) type `NDTensors.BlockSparseArrays.BlockSparseStorage{NDTensors.BlockSparseArrays.BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{Base.OneTo{Int64}, Base.OneTo{Int64}}}}, Tuple{Base.OneTo{Int64}, Base.OneTo{Int64}}}}` could be determined
Stacktrace:
[1] zerovector!(x::NDTensors.BlockSparseArrays.BlockSparseStorage{NDTensors.BlockSparseArrays.BlockSparseArray{…}})
@ VectorInterface ~/.julia/packages/VectorInterface/1L754/src/fallbacks.jl:46
[2] sparse_zerovector!(a::NDTensors.BlockSparseArrays.BlockSparseArray{…})
@ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/vectorinterface.jl:26
[3] sparse_zero!(a::NDTensors.BlockSparseArrays.BlockSparseArray{…})
@ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/base.jl:64
[4] sparse_map_stored!(f::Function, a_dest::NDTensors.BlockSparseArrays.BlockSparseArray{…}, as::PermutedDimsArray{…})
@ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/map.jl:79
[5] sparse_map!(::Base.Broadcast.DefaultArrayStyle{…}, f::Function, a_dest::NDTensors.BlockSparseArrays.BlockSparseArray{…}, as::PermutedDimsArray{…})
@ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/map.jl:101
[6] sparse_map!(f::Function, a_dest::NDTensors.BlockSparseArrays.BlockSparseArray{…}, as::PermutedDimsArray{…})
@ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/map.jl:93
[7] sparse_copyto!(dest::NDTensors.BlockSparseArrays.BlockSparseArray{…}, src::PermutedDimsArray{…})
@ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/copyto.jl:8
[8] sparse_permutedims!(dest::NDTensors.BlockSparseArrays.BlockSparseArray{…}, src::Base.ReshapedArray{…}, perm::Tuple{…})
@ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/copyto.jl:13
[9] permutedims!(a_dest::NDTensors.BlockSparseArrays.BlockSparseArray{…}, a_src::Base.ReshapedArray{…}, perm::Tuple{…})
@ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/map.jl:97
[10] _permutedims!(a_dest::NDTensors.BlockSparseArrays.BlockSparseArray{…}, a_src::Base.ReshapedArray{…}, perm::Tuple{…})
@ NDTensors.TensorAlgebra.BaseExtensions ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/BaseExtensions/permutedims.jl:6
[11] splitdims!(a_dest::NDTensors.BlockSparseArrays.BlockSparseArray{…}, a::Base.ReshapedArray{…}, blockedperm::NDTensors.TensorAlgebra.BlockedPermutation{…})
@ NDTensors.TensorAlgebra ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/splitdims.jl:66
[12] contract!(alg::NDTensors.BackendSelection.Algorithm{…}, a_dest::NDTensors.BlockSparseArrays.BlockSparseArray{…}, biperm_dest::NDTensors.TensorAlgebra.BlockedPermutation{…}, a1::SubArray{…}, biperm1::NDTensors.TensorAlgebra.BlockedPermutation{…}, a2::Matrix{…}, biperm2::NDTensors.TensorAlgebra.BlockedPermutation{…}, α::Bool, β::Bool)
@ NDTensors.TensorAlgebra ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract_matricize/contract.jl:19
[13] contract(alg::NDTensors.BackendSelection.Algorithm{…}, biperm_dest::NDTensors.TensorAlgebra.BlockedPermutation{…}, a1::SubArray{…}, biperm1::NDTensors.TensorAlgebra.BlockedPermutation{…}, a2::Matrix{…}, biperm2::NDTensors.TensorAlgebra.BlockedPermutation{…}, α::Bool; kwargs::@Kwargs{})
@ NDTensors.TensorAlgebra ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract.jl:118
[14] contract(alg::NDTensors.BackendSelection.Algorithm{…}, biperm_dest::NDTensors.TensorAlgebra.BlockedPermutation{…}, a1::SubArray{…}, biperm1::NDTensors.TensorAlgebra.BlockedPermutation{…}, a2::Matrix{…}, biperm2::NDTensors.TensorAlgebra.BlockedPermutation{…}, α::Bool)
@ NDTensors.TensorAlgebra ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract.jl:107
[15] contract(alg::NDTensors.BackendSelection.Algorithm{…}, labels_dest::Tuple{…}, a1::SubArray{…}, labels1::Tuple{…}, a2::Matrix{…}, labels2::Tuple{…}, α::Bool; kwargs::@Kwargs{})
@ NDTensors.TensorAlgebra ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract.jl:88
[16] contract(alg::NDTensors.BackendSelection.Algorithm{…}, labels_dest::Tuple{…}, a1::SubArray{…}, labels1::Tuple{…}, a2::Matrix{…}, labels2::Tuple{…}, α::Bool)
@ NDTensors.TensorAlgebra ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract.jl:77
[17] contract(alg::NDTensors.BackendSelection.Algorithm{…}, a1::SubArray{…}, labels1::Tuple{…}, a2::Matrix{…}, labels2::Tuple{…}, α::Bool; kwargs::@Kwargs{})
@ NDTensors.TensorAlgebra ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract.jl:45
[18] contract
@ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract.jl:35 [inlined]
[19] #contract#31
@ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract.jl:32 [inlined]
[20] contract
@ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract.jl:23 [inlined]
[21] contract(a1::SubArray{…}, labels1::Tuple{…}, a2::Matrix{…}, labels2::Tuple{…})
@ NDTensors.TensorAlgebra ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract.jl:23
[22] top-level scope
@ REPL[52]:1
Some type information was truncated. Use `show(err)` to see complete types.

Note that this is fine:

```julia
bsa2 = BlockSparseArrays.BlockSparseArray{Float64}(g, g)
b2 = BlockSparseArrays.view!(bsa2, BlockArrays.Block(1, 1))
TensorAlgebra.contract(b2, (1, 2), ones((1, 1)), (2, 3))  # Ok
```

I am surprised that

Edit: FIXED
Yes, I didn't fix the

The latest design of

It would be possible for
I see, thank you for the explanation. I guess I should just use `view!`. I was confused by the error message, but if this is expected to work at some point there is no need to change it.
Right, the goal of
Issue:

```julia
gr = GradedAxes.dual(GradedAxes.gradedrange([U1(1) => 1]))
gc = GradedAxes.gradedrange([U1(0) => 1, U1(1) => 1])
m = BlockSparseArrays.BlockSparseArray{Float64}(gr, gc)
m[1, 2] = 1
existing_blocks = BlockSparseArrays.block_stored_indices(m)
@show existing_blocks  # {CartesianIndex(1, 2) = Block(1, 2)}
col_sectors = GradedAxes.blocklabels(axes(m, 2))
existing_sectors = [col_sectors[it[2]] for it in eachindex(existing_blocks)]  # Ok
mh = adjoint(m)  # display error, due to present issue
existing_blocks = BlockSparseArrays.block_stored_indices(mh)
@show existing_blocks  # {CartesianIndex(1, 2) = Block(2, 1)} THIS IS WRONG
col_sectors = GradedAxes.blocklabels(axes(mh, 2))
existing_sectors = [col_sectors[it[2]] for it in eachindex(existing_blocks)]  # raises BoundsError
```
Overall, I find the interface defined by

```julia
b = BlockArrays.Block(1,1)
inds = Int.(Tuple(b))  # (1,1)
```

using the index from the Block directly,
Issue:

```julia
using LinearAlgebra: LinearAlgebra
using BlockArrays: BlockArrays
using Dictionaries: Dictionaries
using NDTensors.BlockSparseArrays: BlockSparseArrays

sdic = Dictionaries.Dictionary{
  BlockArrays.Block{2,Int},LinearAlgebra.Diagonal{Float64,Vector{Float64}}
}()
Dictionaries.set!(sdic, BlockArrays.Block(1, 1), LinearAlgebra.Diagonal([1.0, 2.0]))
s = BlockSparseArrays.BlockSparseArray(sdic, (1:2, 1:2))
println(typeof(s))           # BlockSparseArrays.BlockSparseArray{Float64, 2, LinearAlgebra.Diagonal...}
println(typeof(similar(s)))  # BlockSparseArrays.BlockSparseArray{Float64, 2, Matrix...}
println(typeof(s * s))       # BlockSparseArrays.BlockSparseArray{Float64, 2, Matrix...}
```

Edit: I guess this is what "Support for blocks that are DiagonalArrays and SparseArrayDOKs" stands for.
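For comparison, in plain LinearAlgebra both `similar` and matrix products preserve the `Diagonal` type, which is the behavior one would hope to see blockwise here instead of widening to `Matrix`:

```julia
using LinearAlgebra: Diagonal

D = Diagonal([1.0, 2.0])
# Base LinearAlgebra keeps the structured type through both operations:
println(typeof(similar(D)))  # a Diagonal, not a Matrix
println(typeof(D * D))       # a Diagonal, not a Matrix
```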
Issue: cannot take a block subslice of a

```julia
g = GradedAxes.gradedrange([U1(1) => 2])
m1 = BlockSparseArrays.BlockSparseArray{Float64}(g, g)
m2 = BlockSparseArrays.BlockSparseArray{Float64}(GradedAxes.dual(g), g)
m1[BlockArrays.Block(1,1)] = ones((2,2))
m2[BlockArrays.Block(1,1)] = ones((2,2))  # need to initialize a block to trigger the error
I = [BlockArrays.Block(1)[1:1]]
m1[I,I]   # Ok
m1[I,:]   # Ok
m1[:, I]  # Ok
m2[I,I]   # first axis lost its label
m2[I, :]  # first axis lost its label
m2[:, I]  # MethodError
```

ERROR: MethodError: no method matching to_blockindexrange(::Base.Slice{NDTensors.GradedAxes.UnitRangeDual{…}}, ::BlockArrays.Block{1, Int64})
Closest candidates are:
to_blockindexrange(::Base.Slice{<:BlockArrays.BlockedOneTo}, ::BlockArrays.Block{1})
@ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/views.jl:194
to_blockindexrange(::NDTensors.BlockSparseArrays.BlockIndices{var"#s50", T} where {var"#s50"<:(BlockArrays.BlockArray{var"#s49", 1, var"#s11", BS} where {var"#s49"<:(BlockArrays.BlockIndex{1, TI, Tα} where {TI<:Tuple{Integer}, Tα<:Tuple{Integer}}), var"#s11"<:(Vector{<:BlockArrays.BlockIndexRange{1, R, I} where {R<:Tuple{AbstractUnitRange{<:Integer}}, I<:Tuple{Integer}}}), BS<:Tuple{AbstractUnitRange{<:Integer}}}), T<:Integer}, ::BlockArrays.Block{1})
@ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/views.jl:186
Stacktrace:
[1] (::NDTensors.BlockSparseArrays.var"#49#50"{SubArray{…}, Tuple{…}})(dim::Int64)
@ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/views.jl:208
[2] ntuple
@ ./ntuple.jl:19 [inlined]
[3] viewblock
@ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/views.jl:208 [inlined]
[4] viewblock
@ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/views.jl:147 [inlined]
[5] view
@ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/views.jl:125 [inlined]
[6] getindex
@ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/blocksparsearrayinterface/blocksparsearrayinterface.jl:249 [inlined]
[7] (::NDTensors.BlockSparseArrays.var"#70#73"{Tuple{SubArray{…}}, Tuple{BlockArrays.BlockIndexRange{…}}})(i::Int64)
@ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/map.jl:78
[8] ntuple
@ ./ntuple.jl:19 [inlined]
[9] sparse_map!(::NDTensors.BlockSparseArrays.BlockSparseArrayStyle{…}, f::NDTensors.BroadcastMapConversion.MapFunction{…}, a_dest::NDTensors.BlockSparseArrays.BlockSparseArray{…}, a_srcs::SubArray{…})
@ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/map.jl:77
[10] sparse_map!
@ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/map.jl:93 [inlined]
[11] copyto!
@ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/blocksparsearrayinterface/broadcast.jl:37 [inlined]
[12] materialize!
@ ./broadcast.jl:914 [inlined]
[13] materialize!
@ ./broadcast.jl:911 [inlined]
[14] sub_materialize
@ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/arraylayouts.jl:21 [inlined]
[15] sub_materialize
@ ~/.julia/packages/ArrayLayouts/3byqH/src/ArrayLayouts.jl:131 [inlined]
[16] sub_materialize
@ ~/.julia/packages/ArrayLayouts/3byqH/src/ArrayLayouts.jl:132 [inlined]
[17] layout_getindex
@ ~/.julia/packages/ArrayLayouts/3byqH/src/ArrayLayouts.jl:138 [inlined]
[18] getindex(A::NDTensors.BlockSparseArrays.BlockSparseArray{…}, kr::Colon, jr::Vector{…})
@ ArrayLayouts ~/.julia/packages/ArrayLayouts/3byqH/src/ArrayLayouts.jl:155
[19] top-level scope
@ REPL[296]:1
Some type information was truncated. Use `show(err)` to see complete types.
Feature request:

```julia
typeof(m1[BlockArrays.Block(1),:])           # BlockSparseArray
typeof(adjoint(m1)[BlockArrays.Block(1),:])  # Matrix
```

EDIT: I wonder if the design of
@ogauthe thanks for the issue reports.

```julia
julia> v = [1, 2, 3];

julia> v' * v
14
```

if
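For intuition from plain stdlib arrays (not `BlockSparseArray`): `adjoint` is a lazy wrapper, so indexing is forwarded through the wrapper with transposed indices, and `copy` materializes it into an ordinary array:

```julia
using LinearAlgebra: Adjoint

A = [1 2; 3 4]
Ah = A'                    # lazy Adjoint wrapper, no data is copied
println(Ah isa Adjoint)    # true
println(Ah[1, 2])          # 3, i.e. A[2, 1] through transposed indices
println(typeof(copy(Ah)))  # copy materializes to a plain Matrix
```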
Thanks for the detailed answer. Concerning
Issue: display error when

```julia
g1 = gradedrange([U1(0) => 1])
m1 = BlockSparseArrays.BlockSparseArray{Float64}(g1, g1)
m2 = BlockSparseArrays.BlockSparseArray{Float64}(g1, dual(g1))
display(m1[:,:])  # Ok
display(m2)       # Ok
display(m2[:,:])  # MethodError
```

ERROR: MethodError: no method matching LabelledInteger{Int64, U1}(::Int64)
Closest candidates are:
(::Type{LabelledInteger{Value, Label}} where {Value<:Integer, Label})(::Any, ::Any)
@ NDTensors ~/Documents/Recherche/itensor/ITensors.jl/NDTensors/src/lib/LabelledNumbers/src/labelledinteger.jl:2
(::Type{T})(::T) where T<:Number
@ Core boot.jl:792
(::Type{IntT})(::NDTensors.Block{1}) where IntT<:Integer
@ NDTensors ~/Documents/Recherche/itensor/ITensors.jl/NDTensors/src/blocksparse/block.jl:63
...
Stacktrace:
[1] convert(::Type{LabelledInteger{Int64, U1}}, x::Int64)
@ Base ./number.jl:7
[2] cvt1
@ ./essentials.jl:468 [inlined]
[3] ntuple
@ ./ntuple.jl:49 [inlined]
[4] convert(::Type{Tuple{Int64, LabelledInteger{Int64, U1}}}, x::Tuple{Int64, Int64})
@ Base ./essentials.jl:470
[5] push!(a::Vector{Tuple{Int64, LabelledInteger{Int64, U1}}}, item::Tuple{Int64, Int64})
@ Base ./array.jl:1118
[6] alignment(io::IOContext{…}, X::AbstractVecOrMat, rows::Vector{…}, cols::Vector{…}, cols_if_complete::Int64, cols_otherwise::Int64, sep::Int64, ncols::Int64)
@ Base ./arrayshow.jl:76
[7] _print_matrix(io::IOContext{…}, X::AbstractVecOrMat, pre::String, sep::String, post::String, hdots::String, vdots::String, ddots::String, hmod::Int64, vmod::Int64, rowsA::UnitRange{…}, colsA::UnitRange{…})
@ Base ./arrayshow.jl:207
[8] print_matrix(io::IOContext{…}, X::NDTensors.BlockSparseArrays.BlockSparseArray{…}, pre::String, sep::String, post::String, hdots::String, vdots::String, ddots::String, hmod::Int64, vmod::Int64)
@ Base ./arrayshow.jl:171
[9] print_matrix
@ ./arrayshow.jl:171 [inlined]
[10] print_array
@ ./arrayshow.jl:358 [inlined]
[11] show(io::IOContext{…}, ::MIME{…}, X::NDTensors.BlockSparseArrays.BlockSparseArray{…})
@ Base ./arrayshow.jl:399
[12] #blocksparse_show#11
@ ~/Documents/Recherche/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/ext/BlockSparseArraysGradedAxesExt/src/BlockSparseArraysGradedAxesExt.jl:120 [inlined]
[13] blocksparse_show
@ ~/Documents/Recherche/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/ext/BlockSparseArraysGradedAxesExt/src/BlockSparseArraysGradedAxesExt.jl:112 [inlined]
[14] #show#12
@ ~/Documents/Recherche/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/ext/BlockSparseArraysGradedAxesExt/src/BlockSparseArraysGradedAxesExt.jl:130 [inlined]
[15] show(io::IOContext{…}, mime::MIME{…}, a::NDTensors.BlockSparseArrays.BlockSparseArray{…})
@ NDTensors.BlockSparseArrays.BlockSparseArraysGradedAxesExt ~/Documents/Recherche/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/ext/BlockSparseArraysGradedAxesExt/src/BlockSparseArraysGradedAxesExt.jl:127
[16] (::OhMyREPL.var"#15#16"{REPL.REPLDisplay{REPL.LineEditREPL}, MIME{Symbol("text/plain")}, Base.RefValue{Any}})(io::IOContext{Base.TTY})
@ OhMyREPL ~/.julia/packages/OhMyREPL/HzW5x/src/output_prompt_overwrite.jl:23
[17] with_repl_linfo(f::Any, repl::REPL.LineEditREPL)
@ REPL ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:569
[18] display
@ ~/.julia/packages/OhMyREPL/HzW5x/src/output_prompt_overwrite.jl:6 [inlined]
[19] display
@ ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:278 [inlined]
[20] display(x::Any)
@ Base.Multimedia ./multimedia.jl:340
[21] top-level scope
@ REPL[30]:1
Some type information was truncated. Use `show(err)` to see complete types.

This is the same error as in #1336 (comment), in a different context. The previous case was fixed and no longer errors. This is another case that should be fixed by refactoring
I realize there are other issues with

Should we change the behavior of

I think
Issue: it is still possible to create a

```julia
r = gradedrange([U1(1) => 2, U1(2) => 2])[1:3]
a = BlockSparseArray{Float64}(r, r)
a[1:2,1:2]  # MethodError
```

ERROR: MethodError: no method matching to_blockindices(::BlockArrays.BlockedUnitRange{…}, ::UnitRange{…})
Closest candidates are:
to_blockindices(::UnitRangeDual, ::UnitRange{<:Integer})
@ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/GradedAxes/src/unitrangedual.jl:54
to_blockindices(::Base.OneTo, ::UnitRange{<:Integer})
@ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/GradedAxes/src/blockedunitrange.jl:186
to_blockindices(::BlockedOneTo, ::UnitRange{<:Integer})
@ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/GradedAxes/src/blockedunitrange.jl:170
Stacktrace:
[1] blocksparse_to_indices(a::BlockSparseArray{…}, inds::Tuple{…}, I::Tuple{…})
@ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/blocksparsearrayinterface/blocksparsearrayinterface.jl:32
[2] to_indices(a::BlockSparseArray{…}, inds::Tuple{…}, I::Tuple{…})
@ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/wrappedabstractblocksparsearray.jl:26
[3] to_indices
@ ./indices.jl:344 [inlined]
[4] view
@ ./subarray.jl:183 [inlined]
[5] layout_getindex
@ ~/.julia/packages/ArrayLayouts/31idh/src/ArrayLayouts.jl:138 [inlined]
[6] getindex(::BlockSparseArray{…}, ::UnitRange{…}, ::UnitRange{…})
@ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/wrappedabstractblocksparsearray.jl:92
[7] top-level scope
@ REPL[57]:1
Some type information was truncated. Use `show(err)` to see complete types. main at |
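For context, the failing `to_blockindices` call is the step that maps a flat unit range onto block-relative indices of a blocked axis. Below is a standalone sketch of that mapping on plain block sizes; `split_range` is a hypothetical name for illustration, not the library's implementation:

```julia
# Hypothetical helper for illustration (not the library's `to_blockindices`):
# split a unit range over consecutive block sizes into
# (block number, within-block range) pairs.
function split_range(blocksizes::Vector{Int}, r::UnitRange{Int})
    out = Tuple{Int,UnitRange{Int}}[]
    offset = 0
    for (b, len) in enumerate(blocksizes)
        lo = max(first(r), offset + 1) - offset
        hi = min(last(r), offset + len) - offset
        lo <= hi && push!(out, (b, lo:hi))
        offset += len
    end
    return out
end

split_range([2, 2], 1:3)  # [(1, 1:2), (2, 1:1)]
```

The error above says that exactly this kind of mapping has no method for the `BlockedUnitRange` produced by slicing a `gradedrange`.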
issue:

```julia
r = gradedrange([U1(0) => 2, U1(1) => 2])
a = BlockSparseArray{Float64}(r, r)
@test isdual.(axes(a)) == (false, false)
@test isdual.(axes(adjoint(a))) == (true, true)
@test_broken isdual.(axes(copy(adjoint(a)))) == (true, true)
```

main at …

EDIT: I got confused with …
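For comparison, with ordinary axes the `adjoint`-then-`copy` round trip is lossless, since `Base.OneTo` axes carry no dual flag to drop; a stdlib-only check of that baseline behavior:

```julia
using LinearAlgebra

m = rand(2, 3)
# `adjoint` is lazy and reports the reversed axes of its parent...
@assert axes(m') == (axes(m, 2), axes(m, 1))
# ...and `copy` materializes it with those same reversed axes.
@assert axes(copy(m')) == axes(m')
```

The `@test_broken` above shows that for graded axes the analogous round trip currently drops the dual flag.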
This issue lists functionalities and feature requests for `BlockSparseArray`.

Issues

- `copy(adjoint)` does not preserve dual axes.
- `a[:, :]` creates an array with ill-behaved axes.
- `block_stored_indices(::LinearAlgebra.Adjoint{T, BlockSparseArray})` does not transpose its indices.
- `LinearAlgebra.Adjoint{T, NDTensors.BlockSparseArrays.BlockSparseArray}` returns a `BlockedArray` when sliced with `Block`.
- `LinearAlgebra.norm(a)` crashes when `a` contains `NaN`.
- `Strided.@strided` fails when called with `view(::BlockSparseArray, ::Block)`. As a workaround it will work if you use `view!`/`@view!` introduced in [BlockSparseArrays] Define in-place view that may instantiate blocks #1498.

Feature requests
- Slicing with unit ranges (e.g. `a[1:2, 1:2]`) to output non-blocked arrays, and define `@blocked a[1:2, 1:2]` to explicitly preserve blocking. See the discussion in Functionality for slicing with unit ranges that preserves block information JuliaArrays/BlockArrays.jl#347.
- `Base.cat` and related functions.
- Matrix factorizations such as `svd`, `qr`, etc. See [BlockSparseArrays] Blockwise matrix factorizations #1515. These are well defined if the block sparse matrix has a block structure (i.e. the sparsity pattern of the sparse array of arrays `blocks(a)`) corresponding to a generalized permutation matrix. Probably they should be called something like `block_svd`, `block_eigen`, `block_qr`, etc. to distinguish that they are meant to be used on block sparse matrices with those structures (and error if they don't have that structure). See [1] for a prototype of a blockwise QR. See also BlockDiagonals.jl for an example in Julia of blockwise factorizations; they use a naming scheme `svd_blockwise`, `eigen_blockwise`, etc. The slicing operation introduced in [BlockSparseArrays] Sub-slices of multiple blocks #1489 will be useful for performing block-wise truncated factorizations.
- Rename `BlockSparseArrayLike` to `AnyBlockSparseArray`, which is the naming convention used in other Julia packages for a similar concept [2].
- Rename `block_nstored` to `block_stored_length` and `nstored` to `stored_length`.
- Support blocks that are more general than `Array`, for example `DiagonalArrays.DiagonalArray`, `SparseArrayDOKs.SparseArrayDOK`, `LinearAlgebra.Diagonal`, etc. `BlockSparseArray` can have blocks that are an `AbstractArray` subtype, however some operations don't preserve those types properly (i.e. implicitly convert to `Array` blocks) or don't work.
- Rename `block_stored_indices` to `stored_blocks`, for outputting a list `Vector{<:Block}` representing which blocks are stored.

Fixed
- `a = BlockSparseArray{Float64}([2, 3], [2, 3]); @view a[Block(1, 1)]` returns a `SubArray` where the last type parameter, which marks whether or not the slice supports faster linear indexing, is `false`, while it should be `true` if that is the case for that block of `a` (this is addressed by [BlockSparseArrays] Redesign block views again #1513: `@view a[Block(1, 1)]` no longer outputs a `SubArray`, but rather either the block data directly or a `BlockView` object if the block doesn't exist yet).
- `TensorAlgebra.contract` fails when called with `view(::BlockSparseArray, ::Block)` or `reshape(view(::BlockSparseArray, ::Block), ...)`. As a workaround it will work if you use `view!`/`@view!` introduced in [BlockSparseArrays] Define in-place view that may instantiate blocks #1498.
- `a = BlockSparseArray{Float64}([2, 3], [2, 3]); b = @view a[Block.(1:2), Block.(1:2)]; b[Block(1, 1)] = randn(2, 2)` doesn't set the block `Block(1, 1)` (it remains uninitialized, i.e. structurally zero). I think the issue is that `@view b[Block(1, 1)]` makes two layers of `SubArray` wrappers instead of flattening down to a single layer, and those two layers are not being dispatched on properly (in general we only catch if something is a `BlockSparseArray` or a `BlockSparseArray` wrapped in a single wrapper layer).
- Update to `BlockArrays` v1.1, see CI for [Sectors] Non-abelian fusion #1363. Fixed by [BlockSparseArrays] Update to BlockArrays v1.1, fix some issues with nested views #1503.
- `r = gradedrange([U1(0) => 1]); a = BlockSparseArray{Float64}(r, r); size(view(a, Block(1, 1))[1:1, 1:1])` returns a tuple of `LabelledInteger` instead of `Int` (see discussion; keep it that way at least for now).
- `r = gradedrange([U1(0) => 1]); a = BlockSparseArray{Float64}(dual(r), r); @view(a[Block(1, 1)])[1:1, 1:1]` and other combinations of `dual` lead to method ambiguity errors.
- Slicing with `Vector{<:BlockIndexRange{1}}`, JuliaArrays/BlockArrays.jl#358.
- `dual` is not preserved when adding/subtracting `BlockSparseArray`s, i.e. `g = gradedrange([U1(0) => 1]); m = BlockSparseArray{Float64}(dual(g), g); isdual(axes(m + m, 1))` should be `true` but is `false`.
- `r = gradedrange([U1(0) => 1]); a = BlockSparseArray{Float64}(r, r); @view a[Block(1, 1)]`.
- `a[2:4, 2:4]`, by using `BlockArrays.BlockSlice`.
- `a[Block(2), Block(2)] = randn(3, 3)`.
- `a[Block(2, 2)] .= 1`.
- `@view(a[Block(1, 1)])[1:1, 1:1] = 1`.
- `a[Block(1, 1)] = b` if `size(a[Block(1, 1)]) != size(b)`.
- `BlockSparseMatrix` involving dual axes.
- `BlockSparseMatrix`, i.e. `a' * a` and `a * a'`, with and without dual axes.
- `adjoint(::BlockSparseMatrix)`. Can be implemented by overloading `axes(::Adjoint{<:Any,<:AbstractBlockSparseMatrix})`.
- `show(::Adjoint{<:Any,<:BlockSparseMatrix})` and `show(::Transpose{<:Any,<:BlockSparseMatrix})` are broken.
- `eachindex(::BlockSparseArray)` involving dual axes.
- `BlockSparseMatrix`, i.e. `a'` (in progress in [4]).
- `Base.similar(a::BlockSparseArray, eltype::Type)` and `Base.similar(a::BlockSparseArray, eltype::Type, size::NTuple{N,AbstractUnitRange})` do not set the `eltype`.
- `copy(::BlockSparseArray)` should copy the blocks.
- `a[1:2, 1:2]` is not implemented yet and needs to be implemented (in progress in [5]).
- `stored_indices(blocks(a))` to get a list of `Block` corresponding to initialized/stored blocks. Ideally there would be shorthands for this like `block_stored_indices(a)` (in progress in [5]).
- `nstored(blocks(a))` to get the number of initialized/stored blocks. Ideally there would be shorthands for this like `block_nstored(a)` (in progress in [5]).
- `.*=` and `./=`, such as `a .*= 2`, are broken (in progress in [^1]).
- `Base.:*(::BlockSparseArray, x::Number)` and `Base.:/(::BlockSparseArray, x::Number)` are not defined.
- `Base.:*(::ComplexF64, ::BlockSparseArray{Float64})` does not change the data type for an empty array and crashes if `a` contains data.

Footnotes

1. https://github.com/ITensor/ITensors.jl/blob/v0.3.57/NDTensors/src/lib/BlockSparseArrays/src/backup/LinearAlgebraExt/qr.jl
2. https://github.com/JuliaGPU/CUDA.jl/blob/v5.4.2/src/array.jl#L396
3. https://github.com/ITensor/ITensors.jl/pull/1452, https://github.com/JuliaArrays/BlockArrays.jl/pull/255
4. [BlockSparseArrays] Fix adjoint and transpose #1470
5. [BlockSparseArrays] More general broadcasting and slicing #1332
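To illustrate the blockwise factorization feature request above: when the sparsity pattern of `blocks(a)` is diagonal, a full SVD reduces to independent dense SVDs of the stored blocks. Below is a stdlib-only sketch, with the block-diagonal matrix represented simply as a `Vector` of its diagonal blocks; the name `svd_blockwise_sketch` and the storage are illustrative assumptions, not the BlockSparseArrays API:

```julia
using LinearAlgebra

# Illustrative sketch: SVD of a block-diagonal matrix, block by block.
function svd_blockwise_sketch(dblocks::Vector{Matrix{Float64}})
    Us = Matrix{Float64}[]
    Ss = Vector{Float64}[]
    Vs = Matrix{Float64}[]
    for b in dblocks
        F = svd(b)  # dense SVD of one stored block
        push!(Us, F.U)
        push!(Ss, F.S)
        push!(Vs, F.V)
    end
    return Us, Ss, Vs
end

dblocks = [randn(2, 2), randn(3, 3)]
Us, Ss, Vs = svd_blockwise_sketch(dblocks)
# Each stored block is recovered from its own factors.
@assert all(dblocks[i] ≈ Us[i] * Diagonal(Ss[i]) * Vs[i]' for i in eachindex(dblocks))
```

A truncated blockwise factorization would additionally rank the concatenated singular values across blocks and slice each block's factors accordingly, which is where the sub-slicing from #1489 would come in.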