[BlockSparseArrays] BlockSparseArray functionality #1336

Open · 31 of 47 tasks
ogauthe opened this issue Feb 16, 2024 · 61 comments

Labels
BlockSparseArrays · enhancement (New feature or request) · NDTensors (Requires changes to the NDTensors.jl library)

Comments

ogauthe (Contributor) commented Feb 16, 2024

This issue lists functionalities and feature requests for BlockSparseArray.

using LinearAlgebra
using NDTensors.BlockSparseArrays: BlockSparseArray

a = BlockSparseArray{Float64}([2, 3], [2, 3]);

Issues

  • copy(adjoint) does not preserve dual.
  • a[:, :] creates an array with ill-behaved axes.
  • Sub-slices of multiple blocks sometimes fail with a dual axis.
  • block_stored_indices(::LinearAlgebra.Adjoint{T, BlockSparseArray}) does not transpose its indices.
  • LinearAlgebra.Adjoint{T, NDTensors.BlockSparseArrays.BlockSparseArray} returns a BlockedArray when sliced with Block.
  • LinearAlgebra.norm(a) crashes when a contains NaN.
  • Strided.@strided fails when called with view(::BlockSparseArray, ::Block). As a workaround it will work if you use view!/@view! introduced in [BlockSparseArrays] Define in-place view that may instantiate blocks #1498.

Feature requests

  • Change the behavior of slicing with non-blocked ranges (such as a[1:2, 1:2]) to output non-blocked arrays, and define @blocked a[1:2, 1:2] to explicitly preserve blocking. See the discussion in Functionality for slicing with unit ranges that preserves block information JuliaArrays/BlockArrays.jl#347.
  • Implement direct sums/concatenations of block sparse arrays that preserve and make use of block sparsity. This could be done by overloading Base.cat and related functions (see the sketch after this list).
  • Blockwise linear algebra operations like svd, qr, etc. See [BlockSparseArrays] Blockwise matrix factorizations #1515. These are well defined if the block sparse matrix has a block structure (i.e. the sparsity pattern of the sparse array of arrays blocks(a)) corresponding to a generalized permutation matrix. They should probably be called something like block_svd, block_eigen, block_qr, etc. to make clear that they are meant to be used on block sparse matrices with those structures (and to error if the structure is absent). See [1] for a prototype of a blockwise QR. See also BlockDiagonals.jl for an example of blockwise factorizations in Julia; it uses the naming scheme svd_blockwise, eigen_blockwise, etc. The slicing operation introduced in [BlockSparseArrays] Sub-slices of multiple blocks #1489 will be useful for performing blockwise truncated factorizations.
  • Rename BlockSparseArrayLike to AnyBlockSparseArray, the naming convention used in other Julia packages for a similar concept [2].
  • Rename block_nstored to block_stored_length and nstored to stored_length.
  • Support for blocks that are on GPU.
  • Better support for blocks that are not Array, for example DiagonalArrays.DiagonalArray, SparseArrayDOKs.SparseArrayDOK, LinearAlgebra.Diagonal, etc. BlockSparseArray can have blocks of any AbstractArray subtype, but some operations don't preserve those types properly (i.e. they implicitly convert to Array blocks) or don't work.
  • Display non-initialized blocks differently from zero-initialized blocks; currently both print as zeros.
  • Maybe rename block_stored_indices to stored_blocks, outputting a Vector{<:Block} that lists which blocks are stored.
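
For the direct-sum item above, here is a minimal sketch in terms of the API shown elsewhere in this thread (block_directsum is a hypothetical name, graded/dual axes are ignored, and real support would presumably go through Base.cat overloads):

using BlockArrays: Block, blocklengths, blocksize
using NDTensors.BlockSparseArrays: BlockSparseArray, blocks
using NDTensors.SparseArrayInterface: stored_indices

# Direct sum of two block sparse matrices: the stored blocks of `a` and `b`
# are placed on the block diagonal of the result, so block sparsity is preserved.
function block_directsum(a, b)
  rows = vcat(blocklengths(axes(a, 1)), blocklengths(axes(b, 1)))
  cols = vcat(blocklengths(axes(a, 2)), blocklengths(axes(b, 2)))
  c = BlockSparseArray{promote_type(eltype(a), eltype(b))}(rows, cols)
  for I in stored_indices(blocks(a))  # copy only the stored blocks of a
    c[Block(Tuple(I)...)] = a[Block(Tuple(I)...)]
  end
  nr, nc = blocksize(a)  # shift b's blocks past the blocks of a
  for I in stored_indices(blocks(b))
    c[Block((Tuple(I) .+ (nr, nc))...)] = b[Block(Tuple(I)...)]
  end
  return c
end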

Fixed

  • Implement slicing syntax for merging blocks, i.e. More convenient syntax for merging blocks JuliaArrays/BlockArrays.jl#359.
  • a = BlockSparseArray{Float64}([2, 3], [2, 3]); @view a[Block(1, 1)] returns a SubArray where the last type parameter which marks whether or not the slice supports faster linear indexing is false, while it should be true if that is the case for that block of a (this is addressed by [BlockSparseArrays] Redesign block views again #1513, @view a[Block(1, 1)] no longer outputs a SubArray, but rather either the block data directly or a BlockView object if the block doesn't exist yet).
  • TensorAlgebra.contract fails when called with view(::BlockSparseArray, ::Block) or reshape(view(::BlockSparseArray, ::Block), ...). As a workaround it will work if you use view!/@view! introduced in [BlockSparseArrays] Define in-place view that may instantiate blocks #1498.
  • Fix tests that are marked as broken in https://github.com/ITensor/ITensors.jl/blob/main/NDTensors/src/lib/BlockSparseArrays/test/test_basics.jl.
  • a = BlockSparseArray{Float64}([2, 3], [2, 3]); b = @view a[Block.(1:2), Block.(1:2)]; b[Block(1, 1)] = randn(2, 2) doesn't set the block Block(1, 1) (it remains uninitialized, i.e. structurally zero). I think the issue is that @view b[Block(1, 1)] makes two layers of SubArray wrappers instead of flattening down to a single layer, and those two layers are not being dispatched on properly (in general we only catch if something is a BlockSparseArray or a BlockSparseArray wrapped in a single wrapper layer).
  • Compatibility issues with BlockArrays v1.1, see CI for [Sectors] Non-abelian fusion #1363. Fixed by [BlockSparseArrays] Update to BlockArrays v1.1, fix some issues with nested views #1503.
  • Rewrite GradedAxes using BlockArrays v1 [3].
  • r = gradedrange([U1(0) => 1]); a = BlockSparseArray{Float64}(r, r); size(view(a, Block(1,1))[1:1,1:1]) returns a tuple of LabelledInteger instead of Int (see discussion, keep it that way at least for now).
  • r = gradedrange([U1(0) => 1]); a = BlockSparseArray{Float64}(dual(r), r); @view(a[Block(1, 1)])[1:1, 1:1] and other combinations of dual lead to method ambiguity errors.
  • Implement slicing syntax for slicing multiple subsets of blocks, i.e. Indexing with Vector{<:BlockIndexRange{1}} JuliaArrays/BlockArrays.jl#358.
  • dual is not preserved when adding/subtracting BlockSparseArrays, i.e. g = gradedrange([U1(0) => 1]); m = BlockSparseArray{Float64}(dual(g), g); isdual(axes(m + m, 1)) should be true but is false.
  • Error printing views of blocks of BlockSparseArray with GradedUnitRange axes, i.e. r = gradedrange([U1(0) => 1]); a = BlockSparseArray{Float64}(r, r); @view a[Block(1, 1)].
  • Change the design of slicing with unit ranges, i.e. a[2:4, 2:4], by using BlockArrays.BlockSlice.
  • Fix a[Block(2), Block(2)] = randn(3, 3).
  • Fix a[Block(2, 2)] .= 1.
  • Fix @view(a[Block(1, 1)])[1:1, 1:1] = 1.
  • Throw error if a[Block(1, 1)] = b if size(a[Block(1, 1)]) != size(b).
  • Fix matrix multiplication of BlockSparseMatrix involving dual axes.
  • Fix matrix multiplication involving adjointed BlockSparseMatrix, i.e. a' * a and a * a', with and without dual axes.
  • Take the dual of the axes in adjoint(::BlockSparseMatrix). Can be implemented by overloading axes(::Adjoint{<:Any,<:AbstractBlockSparseMatrix}).
  • show(::Adjoint{<:Any,<:BlockSparseMatrix}) and show(::Transpose{<:Any,<:BlockSparseMatrix}) are broken.
  • Fix eachindex(::BlockSparseArray) involving dual axes.
  • Fix [Hermitian] transposed BlockSparseMatrix, i.e. a' (in progress in [4]).
  • Base.similar(a::BlockSparseArray, eltype::Type) and Base.similar(a::BlockSparseArray, eltype::Type, size::NTuple{N,AbstractUnitRange}) do not set the eltype.
  • Make copy(::BlockSparseArray) copy the blocks.
  • Slicing with a[1:2, 1:2] is not implemented yet and needs to be implemented (in progress in [5]).
  • Function for accessing a list of initialized blocks. Currently you can use stored_indices(blocks(a)) to get a list of Block corresponding to initialized/stored blocks. Ideally there would be shorthands for this like block_stored_indices(a) (in progress in [5]).
  • Function for accessing the number of initialized blocks. Currently you can use nstored(blocks(a)) to get the number of initialized/stored blocks. Ideally there would be shorthands for this like block_nstored(a) (in progress in [5]).
  • In-place operations .*= and ./=, such as a .*= 2, are broken (in progress in [5]).
  • Base.:*(::BlockSparseArray, x::Number) and Base.:/(::BlockSparseArray, x::Number) are not defined.
  • Base.:*(::ComplexF64, ::BlockSparseArray{Float64}) does not change the data type for an empty array and crashes if a contains data.

Footnotes

  1. https://github.com/ITensor/ITensors.jl/blob/v0.3.57/NDTensors/src/lib/BlockSparseArrays/src/backup/LinearAlgebraExt/qr.jl

  2. https://github.com/JuliaGPU/CUDA.jl/blob/v5.4.2/src/array.jl#L396

  3. https://github.com/ITensor/ITensors.jl/pull/1452, https://github.com/JuliaArrays/BlockArrays.jl/pull/255

  4. [BlockSparseArrays] Fix adjoint and transpose #1470

  5. [BlockSparseArrays] More general broadcasting and slicing #1332

ogauthe added the enhancement (New feature or request) and NDTensors (Requires changes to the NDTensors.jl library) labels Feb 16, 2024
mtfishman (Member) commented Feb 16, 2024

Thanks. This currently works:

julia> using NDTensors.BlockSparseArrays: Block, BlockSparseArray, blocks

julia> using LinearAlgebra: I

julia> a = BlockSparseArray{Float64}([2, 2], [2, 2])
2×2-blocked 4×4 BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}}, Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}:
 0.0  0.0  │  0.0  0.0
 0.0  0.0  │  0.0  0.0
 ──────────┼──────────
 0.0  0.0  │  0.0  0.0
 0.0  0.0  │  0.0  0.0

julia> a[Block(2, 2)] = I(3)
3×3 Diagonal{Bool, Vector{Bool}}:
 1  ⋅  ⋅
 ⋅  1  ⋅
 ⋅  ⋅  1

julia> a
2×2-blocked 4×4 BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}}, Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}:
 0.0  0.0  │  0.0  0.0
 0.0  0.0  │  0.0  0.0
 ──────────┼──────────
 0.0  0.0  │  1.0  0.0
 0.0  0.0  │  0.0  1.0

julia> using NDTensors.SparseArrayInterface: stored_indices

julia> stored_indices(blocks(a))
1-element Dictionaries.MappedDictionary{CartesianIndex{2}, CartesianIndex{2}, NDTensors.SparseArrayInterface.var"#1#2"{NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}}}, Tuple{Dictionaries.Indices{CartesianIndex{2}}}}
 CartesianIndex(2, 2) │ CartesianIndex(2, 2)

though using this alternative syntax is currently broken:

julia> a = BlockSparseArray{Float64}([2, 2], [2, 2])
2×2-blocked 4×4 BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}}, Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}:
 0.0  0.0  │  0.0  0.0
 0.0  0.0  │  0.0  0.0
 ──────────┼──────────
 0.0  0.0  │  0.0  0.0
 0.0  0.0  │  0.0  0.0

julia> a[Block(2), Block(2)] = I(3)
ERROR: DimensionMismatch: tried to assign (3, 3) array to (2, 2) block
Stacktrace:
 [1] setindex!(::BlockSparseArray{…}, ::Diagonal{…}, ::Block{…}, ::Block{…})
   @ BlockArrays ~/.julia/packages/BlockArrays/L5yjb/src/abstractblockarray.jl:165
 [2] top-level scope
   @ REPL[30]:1
Some type information was truncated. Use `show(err)` to see complete types.

I would have to think about whether it makes sense to support a[Block(2, 2)] .= 1.0 or a[Block(2), Block(2)] .= 1.0 if Block(2, 2) isn't initialized yet (probably we should support that).

mtfishman (Member) commented Feb 16, 2024

In terms of svd and qr of a BlockSparseMatrix, it is well defined to perform them block-wise (i.e. map the problem to factorizing the individual non-zero blocks) as long as there is only one block per row or column, i.e. the matrix is diagonal up to block permutations of the rows and columns. So I was thinking we would have specialized block_svd and block_qr functions that check whether the matrix has that structure and perform svd or qr block-wise, and otherwise error (the check can be whether blocks(a) is a generalized permutation matrix).

I have a prototype of a QR decomposition of a BlockSparseMatrix here: https://github.com/ITensor/ITensors.jl/blob/v0.3.57/NDTensors/src/lib/BlockSparseArrays/src/backup/LinearAlgebraExt/qr.jl but it may be outdated since it was written based on an earlier version of BlockSparseArrays.
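
A rough sketch of that structure check and a blockwise QR, in terms of the current API (is_block_permutation_matrix and block_qr are hypothetical names, and the sketch assumes the Q and R factors can reuse the block structure and axes of a; a real implementation would construct the proper axes for the factors):

using LinearAlgebra: qr
using BlockArrays: Block
using NDTensors.BlockSparseArrays: BlockSparseArray, blocks
using NDTensors.SparseArrayInterface: stored_indices

# Generalized permutation structure: at most one stored block
# per block row and per block column.
function is_block_permutation_matrix(a)
  inds = [Tuple(I) for I in stored_indices(blocks(a))]
  return allunique(first.(inds)) && allunique(last.(inds))
end

function block_qr(a)
  is_block_permutation_matrix(a) ||
    error("block_qr requires a generalized block permutation matrix")
  q = BlockSparseArray{eltype(a)}(axes(a)...)
  r = BlockSparseArray{eltype(a)}(axes(a)...)
  for I in stored_indices(blocks(a))
    (i, j) = Tuple(I)
    F = qr(a[Block(i, j)])
    q[Block(i, j)] = Matrix(F.Q)  # Q inherits the block position of a
    r[Block(j, j)] = F.R          # R is block diagonal
  end
  return (q, r)
end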

mtfishman changed the title [NDTensors] [ENHANCEMENT] BlockSparseArray functionalities → [BlockSparseArrays][ENHANCEMENT] BlockSparseArray functionalities Feb 16, 2024
mtfishman (Member) commented Feb 18, 2024

Also note that slicing like this should work right now:

a[Block(1, 1)[1:2, 1:2]]

i.e. you can take slices within a specified block. See BlockArrays.jl for a reference on that slicing notation.

ogauthe (Contributor, Author) commented Feb 26, 2024

new feature request: Base.:*(::BlockSparseArray, x::Number) and Base.:/(::BlockSparseArray, x::Number)

I updated the first comment.

Edit: FIXED

ogauthe (Contributor, Author) commented Feb 26, 2024

new issue: 1im * a does not change the data type of a if a is empty. It crashes if a contains data.

Edit: FIXED

ogauthe (Contributor, Author) commented Mar 1, 2024

new issue: similar(a::BlockSparseArray, eltype::Type) does not set the new eltype.
similar(a::BlockSparseArray, eltype::Type, size) behavior depends on the size type:

  • For size::NTuple{N,Int}, it returns a dense Julia Array with the correct eltype.
  • For size::NTuple{N,AbstractUnitRange}, it returns a BlockSparseArray{Float64} and the eltype is not correct.

edit: FIXED

mtfishman (Member) commented
@ogauthe a number of these issues were fixed by #1332, and I've updated the list in the first post accordingly. I added regression tests in #1360 for ones that still need to be fixed, and additionally added placeholder tests that I've marked as broken in the BlockSparseArrays tests. Please continue to update this post with new issues you find, and/or make PRs with broken behavior marked with @test_broken, or with bug fixes and regression tests. I think with the reorganization I did in #1332 these kinds of issues will be easier to fix going forward, though I admit it may be a bit tough to jump into that code right now since there are many abstraction layers involved; the goal is for that code to be as minimal and general as possible.

ogauthe (Contributor, Author) commented Apr 24, 2024

Feature request: display(::Adjoint). The function Base.adjoint is well defined and returns an Adjoint type, but it throws an error at display.
Closely related, it is not clear to me how GradedAxes.dual and BlockSparseArray should work together. The current behavior is to only accept a GradedUnitRange as an axis and to throw for a UnitRangeDual input. Base.adjoint does not dual the axes. I do not have an opinion yet on whether this should be modified.

Edit: FIXED

mtfishman (Member) commented Apr 24, 2024

I think ideally BlockSparseArray would accept any AbstractUnitRange.

Alternatively, BlockArrays.jl introduced an AbstractBlockedUnitRange in JuliaArrays/BlockArrays.jl#348, we could define a BlockedUnitRangeDual <: AbstractBlockedUnitRange type which wraps another AbstractBlockedUnitRange. I think it will require some experimentation to see which one leads to simpler code.
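
A minimal sketch of the wrapper approach (assuming the AbstractBlockedUnitRange{T,CS} signature from that PR; the dual/nondual naming follows GradedAxes):

using BlockArrays: BlockArrays

# A dual axis that stays inside the blocked-range type tree, so generic
# BlockArrays code that dispatches on AbstractBlockedUnitRange still applies.
struct BlockedUnitRangeDual{T,CS,R<:BlockArrays.AbstractBlockedUnitRange{T,CS}} <:
       BlockArrays.AbstractBlockedUnitRange{T,CS}
  nondual::R
end

dual(r::BlockArrays.AbstractBlockedUnitRange) = BlockedUnitRangeDual(r)
dual(r::BlockedUnitRangeDual) = r.nondual  # the dual of a dual is the original range

# Forward the core interface to the wrapped range.
Base.first(r::BlockedUnitRangeDual) = first(r.nondual)
BlockArrays.blocklasts(r::BlockedUnitRangeDual) = BlockArrays.blocklasts(r.nondual)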

Good question about whether or not the axes should get dualed if Base.adjoint(::BlockSparseMatrix) is called. Probably we should dual the axes. For example, if we want operations like a' * a to work for a::BlockSparseMatrix and GradedUnitRange axes, I think we need to dual the axes.

ogauthe (Contributor, Author) commented May 2, 2024

The solution to accept any AbstractUnitRange is to replace the definition of Axes with

  Axes<:Tuple{Vararg{<:AbstractUnitRange,N}},

Then one can construct a BlockSparseArray with a dual axis. For a reason I do not really understand, the resulting array can be printed but not displayed. The error is not the same as the one raised when one tries to display a transpose or adjoint.

g1 = gradedrange([U1(0) => 1])
m1 = BlockSparseArray{Float64}(dual(g1), g1)

outputs

1×1-blocked 1×1 BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{UnitRangeDual{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}, BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}}, Tuple{UnitRangeDual{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}, BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}:
Error showing value of type BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{UnitRangeDual{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}, BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}}, Tuple{UnitRangeDual{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}, BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}:
ERROR: MethodError: no method matching NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(::Int64)

Closest candidates are:
  (::Type{NDTensors.LabelledNumbers.LabelledInteger{Value, Label}} where {Value<:Integer, Label})(::Any, ::Any)
   @ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/LabelledNumbers/src/labelledinteger.jl:2
  (::Type{T})(::T) where T<:Number
   @ Core boot.jl:792
  (::Type{T})(::BigFloat) where T<:Integer
   @ Base mpfr.jl:378
  ...

Stacktrace:
  [1] convert(::Type{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}, x::Int64)
    @ Base ./number.jl:7
  [2] cvt1
    @ ./essentials.jl:468 [inlined]
  [3] ntuple
    @ ./ntuple.jl:49 [inlined]
  [4] convert(::Type{Tuple{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, Int64}}, x::Tuple{Int64, Int64})
    @ Base ./essentials.jl:470
  [5] push!(a::Vector{Tuple{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, Int64}}, item::Tuple{Int64, Int64})
    @ Base ./array.jl:1118
  [6] alignment(io::IOContext{Base.TTY}, X::AbstractVecOrMat, rows::Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}, cols::Vector{Int64}, cols_if_complete::Int64, cols_otherwise::Int64, sep::Int64, ncols::Int64)
    @ Base ./arrayshow.jl:76
  [7] _print_matrix(io::IOContext{…}, X::AbstractVecOrMat, pre::String, sep::String, post::String, hdots::String, vdots::String, ddots::String, hmod::Int64, vmod::Int64, rowsA::UnitRange{…}, colsA::UnitRange{…})
    @ Base ./arrayshow.jl:207
  [8] print_matrix(io::IOContext{…}, X::BlockSparseArray{…}, pre::String, sep::String, post::String, hdots::String, vdots::String, ddots::String, hmod::Int64, vmod::Int64)
    @ Base ./arrayshow.jl:171
  [9] print_matrix
    @ ./arrayshow.jl:171 [inlined]
 [10] print_array
    @ ./arrayshow.jl:358 [inlined]
 [11] show(io::IOContext{Base.TTY}, ::MIME{Symbol("text/plain")}, X::BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}})
    @ Base ./arrayshow.jl:399
 [12] (::REPL.var"#55#56"{REPL.REPLDisplay{REPL.LineEditREPL}, MIME{Symbol("text/plain")}, Base.RefValue{Any}})(io::Any)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:273
 [13] with_repl_linfo(f::Any, repl::REPL.LineEditREPL)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:569
 [14] display(d::REPL.REPLDisplay, mime::MIME{Symbol("text/plain")}, x::Any)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:259
 [15] display
    @ ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:278 [inlined]
 [16] display(x::Any)
    @ Base.Multimedia ./multimedia.jl:340
 [17] #invokelatest#2
    @ ./essentials.jl:892 [inlined]
 [18] invokelatest
    @ ./essentials.jl:889 [inlined]
 [19] print_response(errio::IO, response::Any, show_value::Bool, have_color::Bool, specialdisplay::Union{Nothing, AbstractDisplay})
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:315
 [20] (::REPL.var"#57#58"{REPL.LineEditREPL, Pair{Any, Bool}, Bool, Bool})(io::Any)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:284
 [21] with_repl_linfo(f::Any, repl::REPL.LineEditREPL)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:569
 [22] print_response(repl::REPL.AbstractREPL, response::Any, show_value::Bool, have_color::Bool)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:282
 [23] (::REPL.var"#do_respond#80"{Bool, Bool, REPL.var"#93#103"{REPL.LineEditREPL, REPL.REPLHistoryProvider}, REPL.LineEditREPL, REPL.LineEdit.Prompt})(s::REPL.LineEdit.MIState, buf::Any, ok::Bool)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:911
 [24] #invokelatest#2
    @ ./essentials.jl:892 [inlined]
 [25] invokelatest
    @ ./essentials.jl:889 [inlined]
 [26] run_interface(terminal::REPL.Terminals.TextTerminal, m::REPL.LineEdit.ModalInterface, s::REPL.LineEdit.MIState)
    @ REPL.LineEdit ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/LineEdit.jl:2656
 [27] run_frontend(repl::REPL.LineEditREPL, backend::REPL.REPLBackendRef)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:1312
 [28] (::REPL.var"#62#68"{REPL.LineEditREPL, REPL.REPLBackendRef})()
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:386
Some type information was truncated. Use `show(err)` to see complete types.

Edit: FIXED

mtfishman (Member) commented
Thanks for investigating. That seems like the right move to generalize the axes in that way. Hopefully that error is easy enough to circumvent.

ogauthe (Contributor, Author) commented May 2, 2024

I am continuing to explore the effects of dual. eachindex is broken on such an array, which also affects ==.

julia> eachindex(m1)
ERROR: MethodError: no method matching AbstractUnitRange{…}(::NDTensors.GradedAxes.UnitRangeDual{…})

Closest candidates are:
  AbstractUnitRange{T}(::AbstractUnitRange{T}) where T
   @ Base range.jl:1308
  AbstractUnitRange{T}(::NDTensors.LabelledNumbers.LabelledUnitRange) where T
   @ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/LabelledNumbers/src/labelledunitrange.jl:17
  AbstractUnitRange{Int64}(::Static.OptionallyStaticUnitRange)
   @ Static ~/.julia/packages/Static/6JeQI/src/ranges.jl:258
  ...

Stacktrace:
 [1] OrdinalRange{…}(r::NDTensors.GradedAxes.UnitRangeDual{…})
   @ Base ./range.jl:1313
 [2] convert(::Type{…}, r::NDTensors.GradedAxes.UnitRangeDual{…})
   @ Base ./range.jl:265
 [3] (::Base.IteratorsMD.var"#3#4")(r::NDTensors.GradedAxes.UnitRangeDual{…})
   @ Base.IteratorsMD ./multidimensional.jl:259
 [4] map
   @ ./tuple.jl:292 [inlined]
 [5] CartesianIndices(inds::Tuple{NDTensors.GradedAxes.UnitRangeDual{…}, BlockArrays.BlockedUnitRange{…}})
   @ Base.IteratorsMD ./multidimensional.jl:259
 [6] eachindex(::IndexCartesian, A::BlockSparseArray{…})
   @ Base.IteratorsMD ./multidimensional.jl:392
 [7] eachindex(A::BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}})
   @ Base ./abstractarray.jl:378
 [8] top-level scope
   @ REPL[37]:1
Some type information was truncated. Use `show(err)` to see complete types.

Edit: FIXED

mtfishman changed the title [BlockSparseArrays][ENHANCEMENT] BlockSparseArray functionalities → [BlockSparseArrays] [ENHANCEMENT] BlockSparseArray functionalities May 10, 2024
ogauthe (Contributor, Author) commented May 10, 2024

issue: I cannot write a slice of a block

a[BlockArrays.Block(1,1)][1:2,1:2] = ones((2,2))

does not write to a. This is probably related to "Fix initializing a block with broadcasting syntax".

mtfishman changed the title [BlockSparseArrays] [ENHANCEMENT] BlockSparseArray functionalities → [BlockSparseArrays] BlockSparseArray functionality May 10, 2024
ogauthe (Contributor, Author) commented May 11, 2024

issue: LinearAlgebra.norm(a) triggers AssertionError when a contains NaN

a[BlockArrays.Block(1,1)] = ones((2,2))
println(LinearAlgebra.norm(a))  # 2.0
a[BlockArrays.Block(1,1)][1, 1] = NaN
println(LinearAlgebra.norm(a[BlockArrays.Block(1,1)]))  # NaN
println(LinearAlgebra.norm(a))  # AssertionError

outputs

ERROR: AssertionError: op(output, (eltype(output))(f_notstored)) == output
Stacktrace:
  [1] #sparse_mapreduce#16
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/map.jl:118 [inlined]
  [2] sparse_mapreduce
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/map.jl:115 [inlined]
  [3] #mapreduce#29
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/map.jl:82 [inlined]
  [4] mapreduce
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/map.jl:81 [inlined]
  [5] generic_normInf
    @ ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/LinearAlgebra/src/generic.jl:453 [inlined]
  [6] normInf
    @ ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/LinearAlgebra/src/generic.jl:527 [inlined]
  [7] generic_norm2(x::BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}})
    @ LinearAlgebra ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/LinearAlgebra/src/generic.jl:463
  [8] norm2
    @ ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/LinearAlgebra/src/generic.jl:529 [inlined]
  [9] norm(itr::BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}}, p::Int64)
    @ LinearAlgebra ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/LinearAlgebra/src/generic.jl:598
 [10] _norm
    @ ~/.julia/packages/ArrayLayouts/i2w0I/src/ArrayLayouts.jl:297 [inlined]
 [11] norm
    @ ~/.julia/packages/ArrayLayouts/i2w0I/src/ArrayLayouts.jl:298 [inlined]
 [12] norm(A::BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}})
    @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/ArrayLayouts.jl:298
 [13] top-level scope
    @ REPL[1220]:1
Some type information was truncated. Use `show(err)` to see complete types.

I checked that replacing the NaN with Inf gives the correct result.

ogauthe (Contributor, Author) commented May 17, 2024

issue: a block can be written with an invalid shape. An error should be raised.

a = BlockSparseArray{Float64}([2, 3], [2, 3])
println(size(a))  # (5,5)
b = BlockArrays.Block(1,1)
println(size(a[b]))  # (2,2)
a[b] = ones((3,3))
println(size(a))  # (5,5)
println(size(a[b]))  # (3,3)

Edit: FIXED

ogauthe (Contributor, Author) commented May 28, 2024

Thanks to #1467, I can now initialize a BlockSparseArray with a dual axis. However, contracting such arrays is broken:

using NDTensors.GradedAxes: GradedAxes, dual, gradedrange
using NDTensors.Sectors: U1
using NDTensors.BlockSparseArrays: BlockSparseArray

g1 = gradedrange([U1(0) => 1, U1(1) => 2, U1(2) => 3])
g2 = gradedrange([U1(0) => 2, U1(1) => 2, U1(3) => 1])

m1 = BlockSparseArray{Float64}(g1, GradedAxes.dual(g2));  # display crash
m2 = BlockSparseArray{Float64}(g2, GradedAxes.dual(g1));  # display crash

m12 = m1 * m2;  # MethodError
m21 = m2 * m1;  # MethodError
ERROR: MethodError: no method matching AbstractUnitRange{…}(::NDTensors.GradedAxes.UnitRangeDual{…})

Closest candidates are:
  AbstractUnitRange{T}(::AbstractUnitRange{T}) where T
   @ Base range.jl:1308
  AbstractUnitRange{T}(::NDTensors.LabelledNumbers.LabelledUnitRange) where T
   @ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/LabelledNumbers/src/labelledunitrange.jl:17
  AbstractUnitRange{Int64}(::Static.OptionallyStaticUnitRange)
   @ Static ~/.julia/packages/Static/6JeQI/src/ranges.jl:258
  ...

Stacktrace:
  [1] OrdinalRange{…}(r::NDTensors.GradedAxes.UnitRangeDual{…})
    @ Base ./range.jl:1313
  [2] convert(::Type{…}, r::NDTensors.GradedAxes.UnitRangeDual{…})
    @ Base ./range.jl:265
  [3] (::Base.IteratorsMD.var"#3#4")(r::NDTensors.GradedAxes.UnitRangeDual{…})
    @ Base.IteratorsMD ./multidimensional.jl:259
  [4] map
    @ ./tuple.jl:292 [inlined]
  [5] CartesianIndices(inds::Tuple{BlockArrays.BlockedUnitRange{…}, NDTensors.GradedAxes.UnitRangeDual{…}})
    @ Base.IteratorsMD ./multidimensional.jl:259
  [6] eachindex(::IndexCartesian, A::BlockSparseArray{…})
    @ Base.IteratorsMD ./multidimensional.jl:392
  [7] eachindex
    @ ./abstractarray.jl:378 [inlined]
  [8] fill!(A::BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}}, x::Float64)
    @ Base ./multidimensional.jl:1113
  [9] zero!(::BlockArrays.BlockLayout{…}, A::BlockSparseArray{…})
    @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/ArrayLayouts.jl:289
 [10] zero!(A::BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}})
    @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/ArrayLayouts.jl:288
 [11] _fill_copyto!(dest::BlockSparseArray{…}, C::FillArrays.Zeros{…})
    @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/muladd.jl:80
 [12] copyto!
    @ ~/.julia/packages/ArrayLayouts/i2w0I/src/muladd.jl:82 [inlined]
 [13] copy(M::ArrayLayouts.MulAdd{…})
    @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/muladd.jl:77
 [14] copy
    @ ~/.julia/packages/ArrayLayouts/i2w0I/src/mul.jl:130 [inlined]
 [15] materialize
    @ ~/.julia/packages/ArrayLayouts/i2w0I/src/mul.jl:127 [inlined]
 [16] mul
    @ ~/.julia/packages/ArrayLayouts/i2w0I/src/mul.jl:128 [inlined]
 [17] *(A::BlockSparseArray{…}, B::BlockSparseArray{…})
    @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/mul.jl:213
 [18] top-level scope
    @ REPL[13]:1
Some type information was truncated. Use `show(err)` to see complete types.

Edit: FIXED

ogauthe (Contributor, Author) commented May 28, 2024

When no dual axis is involved, m' * m and m * m' trigger a StackOverflowError:

Stacktrace:
     [1] muladd!(α::Float64, A::BlockSparseArray{…}, B::LinearAlgebra.Adjoint{…}, β::Float64, C::BlockSparseArray{…}; kw::@Kwargs{})
       @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/muladd.jl:75
     [2] mul!(dest::BlockSparseArray{…}, A::BlockSparseArray{…}, B::LinearAlgebra.Adjoint{…}, α::Float64, β::Float64)
       @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/mul.jl:134
     [3] mul!(dest::BlockSparseArray{…}, A::BlockSparseArray{…}, B::LinearAlgebra.Adjoint{…}, α::Float64, β::Float64)
       @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/mul.jl:271
     [4] materialize!(m::ArrayLayouts.MulAdd{…})
       @ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/blocksparsearrayinterface/arraylayouts.jl:14
--- the last 4 lines are repeated 10900 more times ---
 [43605] muladd!(α::Float64, A::BlockSparseArray{…}, B::LinearAlgebra.Adjoint{…}, β::Float64, C::BlockSparseArray{…}; kw::@Kwargs{…})
       @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/muladd.jl:75
 [43606] copyto!
       @ ~/.julia/packages/ArrayLayouts/i2w0I/src/muladd.jl:82 [inlined]
 [43607] copy(M::ArrayLayouts.MulAdd{…})
       @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/muladd.jl:77
 [43608] copy
       @ ~/.julia/packages/ArrayLayouts/i2w0I/src/mul.jl:130 [inlined]
 [43609] materialize
       @ ~/.julia/packages/ArrayLayouts/i2w0I/src/mul.jl:127 [inlined]
 [43610] mul
       @ ~/.julia/packages/ArrayLayouts/i2w0I/src/mul.jl:128 [inlined]
 [43611] *(A::BlockSparseArray{…}, B::LinearAlgebra.Adjoint{…})
       @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/mul.jl:290
Some type information was truncated. Use `show(err)` to see complete types.

Edit: FIXED

ogauthe (Contributor, Author) commented Jun 4, 2024

issue: display error when writing a block

using BlockArrays: BlockArrays
using NDTensors.BlockSparseArrays: BlockSparseArrays
using NDTensors.GradedAxes: GradedAxes
using NDTensors.Sectors: U1

g = GradedAxes.gradedrange([U1(0) => 1])
m = BlockSparseArrays.BlockSparseArray{Float64}(g, g)
m[BlockArrays.Block(1,1)] .= 1
1×1 view(::NDTensors.BlockSparseArrays.BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}, BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}}, Tuple{BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}, BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}, BlockSlice(Block(1),1:1), BlockSlice(Block(1),1:1)) with eltype Float64 with indices NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(1, U(1)[0]):NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(1, U(1)[0]):NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(1, U(1)[0])×NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(1, U(1)[0]):NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(1, U(1)[0]):NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(1, U(1)[0]):
Error showing value of type SubArray{Float64, 2, NDTensors.BlockSparseArrays.BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}, BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}}, Tuple{BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}, BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}, Tuple{BlockArrays.BlockSlice{BlockArrays.Block{1, Int64}, UnitRange{Int64}}, BlockArrays.BlockSlice{BlockArrays.Block{1, Int64}, UnitRange{Int64}}}, false}:
ERROR: MethodError: no method matching NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(::Int64)

Closest candidates are:
  (::Type{NDTensors.LabelledNumbers.LabelledInteger{Value, Label}} where {Value<:Integer, Label})(::Any, ::Any)
   @ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/LabelledNumbers/src/labelledinteger.jl:2
  (::Type{T})(::T) where T<:Number
   @ Core boot.jl:792
  (::Type{IntT})(::NDTensors.Block{1}) where IntT<:Integer
   @ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/blocksparse/block.jl:63
  ...

Stacktrace:
  [1] convert(::Type{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}, x::Int64)
    @ Base ./number.jl:7
  [2] cvt1
    @ ./essentials.jl:468 [inlined]
  [3] ntuple
    @ ./ntuple.jl:49 [inlined]
  [4] convert(::Type{Tuple{…}}, x::Tuple{Int64, Int64})
    @ Base ./essentials.jl:470
  [5] push!(a::Vector{Tuple{…}}, item::Tuple{Int64, Int64})
    @ Base ./array.jl:1118
  [6] alignment(io::IOContext{…}, X::AbstractVecOrMat, rows::Vector{…}, cols::Vector{…}, cols_if_complete::Int64, cols_otherwise::Int64, sep::Int64, ncols::Int64)
    @ Base ./arrayshow.jl:76
  [7] _print_matrix(io::IOContext{…}, X::AbstractVecOrMat, pre::String, sep::String, post::String, hdots::String, vdots::String, ddots::String, hmod::Int64, vmod::Int64, rowsA::UnitRange{…}, colsA::UnitRange{…})
    @ Base ./arrayshow.jl:207
  [8] print_matrix(io::IOContext{…}, X::SubArray{…}, pre::String, sep::String, post::String, hdots::String, vdots::String, ddots::String, hmod::Int64, vmod::Int64)
    @ Base ./arrayshow.jl:171
  [9] print_matrix
    @ ./arrayshow.jl:171 [inlined]
 [10] print_array
    @ ./arrayshow.jl:358 [inlined]
 [11] show(io::IOContext{…}, ::MIME{…}, X::SubArray{…})
    @ Base ./arrayshow.jl:399
 [12] (::REPL.var"#55#56"{REPL.REPLDisplay{REPL.LineEditREPL}, MIME{Symbol("text/plain")}, Base.RefValue{Any}})(io::Any)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:273
 [13] with_repl_linfo(f::Any, repl::REPL.LineEditREPL)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:569
 [14] display(d::REPL.REPLDisplay, mime::MIME{Symbol("text/plain")}, x::Any)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:259
 [15] display
    @ ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:278 [inlined]
 [16] display(x::Any)
    @ Base.Multimedia ./multimedia.jl:340
 [17] #invokelatest#2
    @ ./essentials.jl:892 [inlined]
 [18] invokelatest
    @ ./essentials.jl:889 [inlined]
 [19] print_response(errio::IO, response::Any, show_value::Bool, have_color::Bool, specialdisplay::Union{…})
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:315
 [20] (::REPL.var"#57#58"{REPL.LineEditREPL, Pair{Any, Bool}, Bool, Bool})(io::Any)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:284
 [21] with_repl_linfo(f::Any, repl::REPL.LineEditREPL)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:569
 [22] print_response(repl::REPL.AbstractREPL, response::Any, show_value::Bool, have_color::Bool)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:282
 [23] (::REPL.var"#do_respond#80"{})(s::REPL.LineEdit.MIState, buf::Any, ok::Bool)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:911
 [24] #invokelatest#2
    @ ./essentials.jl:892 [inlined]
 [25] invokelatest
    @ ./essentials.jl:889 [inlined]
 [26] run_interface(terminal::REPL.Terminals.TextTerminal, m::REPL.LineEdit.ModalInterface, s::REPL.LineEdit.MIState)
    @ REPL.LineEdit ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/LineEdit.jl:2656
 [27] run_frontend(repl::REPL.LineEditREPL, backend::REPL.REPLBackendRef)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:1312
 [28] (::REPL.var"#62#68"{REPL.LineEditREPL, REPL.REPLBackendRef})()
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:386
Some type information was truncated. Use `show(err)` to see complete types.

This looks like the same error as previously triggered by dual axes.

Edit: FIXED

mtfishman (Member) commented
Thanks for the report; it looks like this is more generally a problem with printing views of blocks of a BlockSparseArray with GradedUnitRange axes:

using BlockArrays: Block
using NDTensors.BlockSparseArrays: BlockSparseArray
using NDTensors.GradedAxes: gradedrange
using NDTensors.Sectors: U1
r = gradedrange([U1(0) => 1])
a = BlockSparseArray{Float64}(r, r)
@view a[Block(1, 1)]

emstoudenmire (Collaborator) commented
It would be a really useful feature. For an interface, what about just:

x = get!(a, i)

which is very similar to the existing get! method but lets you opt out of providing a default value, since the default has already been pre-defined. Would that cause any ambiguity issues with the existing get! method?

mtfishman (Member) commented
That's definitely an interface we could consider. I don't think it technically causes an ambiguity issue, though we would not want to overload Base.get!(a::AbstractArray, i) since that would be type piracy so we could only define it on our own types, which maybe makes it a bit less powerful/general.

mtfishman (Member) commented Jun 12, 2024

I also worry it is a slight abuse of notation, since it is a slightly different meaning from Base.get!.

Interestingly, Base.getindex(d::DataStructures.DefaultDict, key) is actually defined as get!(d, key, default_value(d)), i.e. https://github.com/JuliaCollections/DataStructures.jl/blob/v0.18.15/src/default_dict.jl#L63. That seems pretty extreme/opinionated to me: in the case of SparseArrayDOK, a broadcasting or reduction operation that touches all of the elements would then instantiate every element of the array in memory, i.e. make it effectively dense, which would be a big issue for large arrays since you could easily run out of memory by accident.
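
A small demonstration of that footgun with DataStructures.jl (the dictionary grows just from being read):

using DataStructures: DefaultDict

d = DefaultDict{Int,Float64}(0.0)
d[1]       # getindex is defined as get!, so reading a missing key...
length(d)  # ...stores it: the dictionary now holds 1 entry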

mtfishman (Member) commented Jun 12, 2024

Another consideration would be to define a more general macro @! which converts all operations it sees to their in-place versions. That is similar to this package: https://github.com/simonbyrne/InplaceOps.jl.

So then the syntax would be:

@! a[i, j] # getindex!(a, i, j)
@! @view a[i, j] # view!(a, i, j)
@! view(a, i, j) # view!(a, i, j)

That would be a compelling reason to use getindex! since then it is clear from a syntax level that it is the in-place version of getindex, and the macro could even make use of that and just change all functions f(...) that it sees to f!(...).

That would be useful in other places as well, such as when constructing OpSums: we could use @! annotations to turn this out-of-place version:

o = OpSum()
for j in 1:10
  o += "X", j
end

into an in-place version:

o = OpSum()
@! for j in 1:10
  o += "X", j
end
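
A rough sketch of such a rewriting macro (spelled @inplace here to avoid the operator macro name; getindex!/view! are the hypothetical in-place accessors discussed above, and a real version would need rules for which calls to rewrite):

rewrite_bang(x) = x
function rewrite_bang(ex::Expr)
  if ex.head === :macrocall && ex.args[1] === Symbol("@view")
    r = ex.args[end]                       # @view a[i, j] -> view!(a, i, j)
    return Expr(:call, :view!, map(rewrite_bang, r.args)...)
  elseif ex.head === :ref                  # a[i, j] -> getindex!(a, i, j)
    return Expr(:call, :getindex!, map(rewrite_bang, ex.args)...)
  elseif ex.head === :call && ex.args[1] isa Symbol
    # f(args...) -> f!(args...), e.g. view(a, i, j) -> view!(a, i, j)
    return Expr(:call, Symbol(ex.args[1], "!"), map(rewrite_bang, ex.args[2:end])...)
  end
  return Expr(ex.head, map(rewrite_bang, ex.args)...)
end

macro inplace(ex)
  return esc(rewrite_bang(ex))
end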

emstoudenmire (Collaborator) commented
I like the idea of that macro for things like OpSum. Is the name @! already emerging as a standard, or what about @inplace, which would be more self-documenting?

For the specific case of getting array/dictionary/tensor values, I wonder if the notation becomes too terse: would a reader of the code understand that the "in-place-ness" here means a default will be set if the element/block is missing, versus more general in-place functions that modify data for other reasons (e.g. sort!(arr), where it is more obvious what in-place operation happens)? The contrast I'm making is with some other name like @maybe_grow or @get_or_set_default.

mtfishman (Member) commented Jun 12, 2024

This package: https://github.com/davidavdav/InplaceLinalg.jl uses @inplace. I was also considering a name like @maybe_set but I was trying to come up with something shorter and also more general. ! is of course standard notation for in-place operations so @! seems pretty self-explanatory to me but I could see an argument for using a more descriptive name.


mtfishman (Member) commented
For now though we could go with view! and getindex! for the sake of expediency and think about more convenient syntax built on top of that later.
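
A sketch of the intended semantics (not the implementation from #1498; isstored is a hypothetical predicate for "this block is initialized"):

using BlockArrays: Block

function view!(a, b::Block{2})
  if !isstored(a, b)
    # Materialize the structurally zero block by writing an explicit zero block.
    i, j = Tuple(b)
    a[b] = zeros(eltype(a), length(axes(a, 1)[Block(i)]), length(axes(a, 2)[Block(j)]))
  end
  return @view a[b]
end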

ogauthe (Contributor, Author) commented Jun 18, 2024

issue: there are compatibility issues between BlockSparseArrays and BlockArrays v1.1.

I upgraded to v1.1 on the non-abelian_fusion branch; the CI lists the failed tests:
https://github.com/ITensor/ITensors.jl/actions/runs/9568584523/job/26379147331?pr=1363

Edit: FIXED

ogauthe (Contributor, Author) commented Jun 26, 2024

There is still an issue with views of blocks:
main at ab8a59e20e4f8ac009a8234aced2e805b9f253fd

g = GradedAxes.gradedrange(['a' => 1])
bsa = BlockSparseArrays.BlockSparseArray{Float64}(g,  g)
bsa[1, 1] = 1.0
b1 = view(bsa, BlockArrays.Block(1,1))  # block b1 has been initialized before
TensorAlgebra.contract(b1, (1, 2), ones((1, 1)), (2, 3))  # ArgumentError
ERROR: ArgumentError: No fallback for applying `zerovector!` to (values of) type `NDTensors.BlockSparseArrays.BlockSparseStorage{NDTensors.BlockSparseArrays.BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{Base.OneTo{Int64}, Base.OneTo{Int64}}}}, Tuple{Base.OneTo{Int64}, Base.OneTo{Int64}}}}` could be determined
Stacktrace:
  [1] zerovector!(x::NDTensors.BlockSparseArrays.BlockSparseStorage{NDTensors.BlockSparseArrays.BlockSparseArray{…}})
    @ VectorInterface ~/.julia/packages/VectorInterface/1L754/src/fallbacks.jl:46
  [2] sparse_zerovector!(a::NDTensors.BlockSparseArrays.BlockSparseArray{…})
    @ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/vectorinterface.jl:26
  [3] sparse_zero!(a::NDTensors.BlockSparseArrays.BlockSparseArray{…})
    @ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/base.jl:64
  [4] sparse_map_stored!(f::Function, a_dest::NDTensors.BlockSparseArrays.BlockSparseArray{…}, as::PermutedDimsArray{…})
    @ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/map.jl:79
  [5] sparse_map!(::Base.Broadcast.DefaultArrayStyle{…}, f::Function, a_dest::NDTensors.BlockSparseArrays.BlockSparseArray{…}, as::PermutedDimsArray{…})
    @ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/map.jl:101
  [6] sparse_map!(f::Function, a_dest::NDTensors.BlockSparseArrays.BlockSparseArray{…}, as::PermutedDimsArray{…})
    @ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/map.jl:93
  [7] sparse_copyto!(dest::NDTensors.BlockSparseArrays.BlockSparseArray{…}, src::PermutedDimsArray{…})
    @ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/copyto.jl:8
  [8] sparse_permutedims!(dest::NDTensors.BlockSparseArrays.BlockSparseArray{…}, src::Base.ReshapedArray{…}, perm::Tuple{…})
    @ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/copyto.jl:13
  [9] permutedims!(a_dest::NDTensors.BlockSparseArrays.BlockSparseArray{…}, a_src::Base.ReshapedArray{…}, perm::Tuple{…})
    @ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/map.jl:97
 [10] _permutedims!(a_dest::NDTensors.BlockSparseArrays.BlockSparseArray{…}, a_src::Base.ReshapedArray{…}, perm::Tuple{…})
    @ NDTensors.TensorAlgebra.BaseExtensions ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/BaseExtensions/permutedims.jl:6
 [11] splitdims!(a_dest::NDTensors.BlockSparseArrays.BlockSparseArray{…}, a::Base.ReshapedArray{…}, blockedperm::NDTensors.TensorAlgebra.BlockedPermutation{…})
    @ NDTensors.TensorAlgebra ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/splitdims.jl:66
 [12] contract!(alg::NDTensors.BackendSelection.Algorithm{…}, a_dest::NDTensors.BlockSparseArrays.BlockSparseArray{…}, biperm_dest::NDTensors.TensorAlgebra.BlockedPermutation{…}, a1::SubArray{…}, biperm1::NDTensors.TensorAlgebra.BlockedPermutation{…}, a2::Matrix{…}, biperm2::NDTensors.TensorAlgebra.BlockedPermutation{…}, α::Bool, β::Bool)
    @ NDTensors.TensorAlgebra ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract_matricize/contract.jl:19
 [13] contract(alg::NDTensors.BackendSelection.Algorithm{…}, biperm_dest::NDTensors.TensorAlgebra.BlockedPermutation{…}, a1::SubArray{…}, biperm1::NDTensors.TensorAlgebra.BlockedPermutation{…}, a2::Matrix{…}, biperm2::NDTensors.TensorAlgebra.BlockedPermutation{…}, α::Bool; kwargs::@Kwargs{})
    @ NDTensors.TensorAlgebra ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract.jl:118
 [14] contract(alg::NDTensors.BackendSelection.Algorithm{…}, biperm_dest::NDTensors.TensorAlgebra.BlockedPermutation{…}, a1::SubArray{…}, biperm1::NDTensors.TensorAlgebra.BlockedPermutation{…}, a2::Matrix{…}, biperm2::NDTensors.TensorAlgebra.BlockedPermutation{…}, α::Bool)
    @ NDTensors.TensorAlgebra ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract.jl:107
 [15] contract(alg::NDTensors.BackendSelection.Algorithm{…}, labels_dest::Tuple{…}, a1::SubArray{…}, labels1::Tuple{…}, a2::Matrix{…}, labels2::Tuple{…}, α::Bool; kwargs::@Kwargs{})
    @ NDTensors.TensorAlgebra ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract.jl:88
 [16] contract(alg::NDTensors.BackendSelection.Algorithm{…}, labels_dest::Tuple{…}, a1::SubArray{…}, labels1::Tuple{…}, a2::Matrix{…}, labels2::Tuple{…}, α::Bool)
    @ NDTensors.TensorAlgebra ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract.jl:77
 [17] contract(alg::NDTensors.BackendSelection.Algorithm{…}, a1::SubArray{…}, labels1::Tuple{…}, a2::Matrix{…}, labels2::Tuple{…}, α::Bool; kwargs::@Kwargs{})
    @ NDTensors.TensorAlgebra ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract.jl:45
 [18] contract
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract.jl:35 [inlined]
 [19] #contract#31
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract.jl:32 [inlined]
 [20] contract
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract.jl:23 [inlined]
 [21] contract(a1::SubArray{…}, labels1::Tuple{…}, a2::Matrix{…}, labels2::Tuple{…})
    @ NDTensors.TensorAlgebra ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/TensorAlgebra/src/contract/contract.jl:23
 [22] top-level scope
    @ REPL[52]:1
Some type information was truncated. Use `show(err)` to see complete types.

Note that this is fine:

bsa2 = BlockSparseArrays.BlockSparseArray{Float64}(g,  g)
b2 = BlockSparseArrays.view!(bsa2, BlockArrays.Block(1, 1))
TensorAlgebra.contract(b2, (1, 2), ones((1, 1)), (2, 3))  # Ok

I am surprised that view and view! have different output types; I was expecting view! to be "initialize the block if it is missing and return a view of it".

Edit: FIXED

mtfishman (Member) commented Jun 26, 2024

Yes, I didn't fix the view/@view version yet, you can see it is still listed as an issue in the first post.

The latest design of view/@view for taking a view of a Block was implemented in: #1481. As of that PR it creates a SubArray wrapping the block sparse array along with the block being viewed. That design was chosen because it makes it easier to handle the fact that blocks may or may not exist. If the block you are taking a view of doesn't exist, there isn't any data to access/return, so if you want to be able to modify that block it needs to be a wrapper around the block sparse array that points to the location of the non-existent block being viewed in some way or another.

view!/@view! is able to always output the block data directly since it instantiates the block if it doesn't exist, so the output type can be simpler (i.e. it can just be an Array in standard cases).

It would be possible for view/@view to either output the block data directly if the block exists, or output a wrapper around the array pointing at the location of the non-existent block if it doesn't; I'm trying that out now. I didn't do that so far since I didn't like the idea that view/@view might output different types depending on whether the Block is stored or not, which seemed a bit weird to me.

ogauthe (Contributor, Author) commented Jun 26, 2024

I see, thank you for the explanation. I guess I should just use view! everywhere.

I was confused by the error message, but if this is expected to work at some point there is no need to change it.

mtfishman (Member) commented Jun 26, 2024

Right, the goal of view!/@view! was to be a stopgap solution that outputs something that is more guaranteed to work in a variety of functions (at the expense of allocating unallocated blocks that are accessed), while we continue to improve support for the more complicated types that get output by view/@view.

ogauthe (Contributor, Author) commented Jul 3, 2024

issue: BlockSparseArrays.block_stored_indices does not transpose its indices for a LinearAlgebra.Adjoint{T, NDTensors.BlockSparseArrays.BlockSparseArray}.
This is the origin of the display bug of an adjoint; it also triggers issues in my own code.

gr = GradedAxes.dual(GradedAxes.gradedrange([U1(1) => 1]))
gc = GradedAxes.gradedrange([U1(0) => 1, U1(1) => 1])
m = BlockSparseArrays.BlockSparseArray{Float64}(gr, gc)
m[1, 2] = 1

existing_blocks = BlockSparseArrays.block_stored_indices(m)
@show existing_blocks  # {CartesianIndex(1, 2) = Block(1, 2)}
col_sectors = GradedAxes.blocklabels(axes(m, 2))
existing_sectors = [col_sectors[it[2]] for it in eachindex(existing_blocks)]  # Ok

mh = adjoint(m)   # display error, due to present issue
existing_blocks = BlockSparseArrays.block_stored_indices(mh)
@show existing_blocks  # {CartesianIndex(1, 2) = Block(2, 1)}  THIS IS WRONG
col_sectors = GradedAxes.blocklabels(axes(mh, 2))
existing_sectors = [col_sectors[it[2]] for it in eachindex(existing_blocks)]  # raises BoundsError

ogauthe (Contributor, Author) commented Jul 4, 2024

Overall, I find the interface defined by block_stored_indices unsatisfying; the output type is hard to manipulate. As discussed, returning a Vector{Block} would be much more convenient. I finally found out how to recover the indices from the Block itself:

b = BlockArrays.Block(1,1)
inds = Int.(Tuple(b))  # (1,1)

Using the indices from the Block directly, CartesianIndex is no longer needed.
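
A sketch of the Vector{Block}-returning accessor proposed in the first post (stored_blocks is the suggested name; the body is in terms of the current API):

using BlockArrays: Block
using NDTensors.BlockSparseArrays: blocks
using NDTensors.SparseArrayInterface: stored_indices

# Collect the stored (initialized) blocks as a plain Vector{<:Block}.
stored_blocks(a) = [Block(Tuple(I)...) for I in stored_indices(blocks(a))]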

ogauthe (Contributor, Author) commented Jul 8, 2024

issue: LinearAlgebra.Diagonal as a block type is brittle. It is possible to construct such a BlockSparseArray, but any operation on it will cast the blocks back to Matrix:

using LinearAlgebra: LinearAlgebra

using BlockArrays: BlockArrays
using Dictionaries: Dictionaries

using NDTensors.BlockSparseArrays: BlockSparseArrays

sdic = Dictionaries.Dictionary{
  BlockArrays.Block{2,Int},LinearAlgebra.Diagonal{Float64,Vector{Float64}}
}()
Dictionaries.set!(sdic, BlockArrays.Block(1, 1), LinearAlgebra.Diagonal([1.0, 2.0]))
s = BlockSparseArrays.BlockSparseArray(sdic, (1:2, 1:2))
println(typeof(s))  # BlockSparseArrays.BlockSparseArray{Float64, 2, LinearAlgebra.Diagonal...}
println(typeof(similar(s)))  # BlockSparseArrays.BlockSparseArray{Float64, 2, Matrix...}
println(typeof(s * s))  # BlockSparseArrays.BlockSparseArray{Float64, 2, Matrix...}

Edit: I guess this is what "Support for blocks that are DiagonalArrays and SparseArrayDOKs" refers to.

ogauthe (Contributor, Author) commented Jul 8, 2024

issue: cannot take a block subslice of a BlockSparseArray with a dual axis. I guess this can be solved by making UnitRangeDual a subtype of BlockArrays.AbstractBlockedUnitRange.

g = GradedAxes.gradedrange([U1(1) => 2])
m1 = BlockSparseArrays.BlockSparseArray{Float64}(g,g)
m2 = BlockSparseArrays.BlockSparseArray{Float64}(GradedAxes.dual(g), g)

m1[BlockArrays.Block(1,1)] = ones((2,2))
m2[BlockArrays.Block(1,1)] = ones((2,2))   # need to initialize a block to trigger the error

I = [BlockArrays.Block(1)[1:1]]
m1[I,I] # Ok
m1[I,:] # Ok
m1[:, I] # Ok

m2[I,I] #   first axis lost its label
m2[I, :] #  first axis lost its label
m2[:, I] ;  # MethodError
ERROR: MethodError: no method matching to_blockindexrange(::Base.Slice{NDTensors.GradedAxes.UnitRangeDual{…}}, ::BlockArrays.Block{1, Int64})

Closest candidates are:
  to_blockindexrange(::Base.Slice{<:BlockArrays.BlockedOneTo}, ::BlockArrays.Block{1})
   @ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/views.jl:194
  to_blockindexrange(::NDTensors.BlockSparseArrays.BlockIndices{var"#s50", T} where {var"#s50"<:(BlockArrays.BlockArray{var"#s49", 1, var"#s11", BS} where {var"#s49"<:(BlockArrays.BlockIndex{1, TI, Tα} where {TI<:Tuple{Integer}, Tα<:Tuple{Integer}}), var"#s11"<:(Vector{<:BlockArrays.BlockIndexRange{1, R, I} where {R<:Tuple{AbstractUnitRange{<:Integer}}, I<:Tuple{Integer}}}), BS<:Tuple{AbstractUnitRange{<:Integer}}}), T<:Integer}, ::BlockArrays.Block{1})
   @ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/views.jl:186

Stacktrace:
  [1] (::NDTensors.BlockSparseArrays.var"#49#50"{SubArray{}, Tuple{}})(dim::Int64)
    @ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/views.jl:208
  [2] ntuple
    @ ./ntuple.jl:19 [inlined]
  [3] viewblock
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/views.jl:208 [inlined]
  [4] viewblock
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/views.jl:147 [inlined]
  [5] view
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/views.jl:125 [inlined]
  [6] getindex
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/blocksparsearrayinterface/blocksparsearrayinterface.jl:249 [inlined]
  [7] (::NDTensors.BlockSparseArrays.var"#70#73"{Tuple{SubArray{}}, Tuple{BlockArrays.BlockIndexRange{}}})(i::Int64)
    @ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/map.jl:78
  [8] ntuple
    @ ./ntuple.jl:19 [inlined]
  [9] sparse_map!(::NDTensors.BlockSparseArrays.BlockSparseArrayStyle{…}, f::NDTensors.BroadcastMapConversion.MapFunction{…}, a_dest::NDTensors.BlockSparseArrays.BlockSparseArray{…}, a_srcs::SubArray{…})
    @ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/map.jl:77
 [10] sparse_map!
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/map.jl:93 [inlined]
 [11] copyto!
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/blocksparsearrayinterface/broadcast.jl:37 [inlined]
 [12] materialize!
    @ ./broadcast.jl:914 [inlined]
 [13] materialize!
    @ ./broadcast.jl:911 [inlined]
 [14] sub_materialize
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/arraylayouts.jl:21 [inlined]
 [15] sub_materialize
    @ ~/.julia/packages/ArrayLayouts/3byqH/src/ArrayLayouts.jl:131 [inlined]
 [16] sub_materialize
    @ ~/.julia/packages/ArrayLayouts/3byqH/src/ArrayLayouts.jl:132 [inlined]
 [17] layout_getindex
    @ ~/.julia/packages/ArrayLayouts/3byqH/src/ArrayLayouts.jl:138 [inlined]
 [18] getindex(A::NDTensors.BlockSparseArrays.BlockSparseArray{…}, kr::Colon, jr::Vector{…})
    @ ArrayLayouts ~/.julia/packages/ArrayLayouts/3byqH/src/ArrayLayouts.jl:155
 [19] top-level scope
    @ REPL[296]:1
Some type information was truncated. Use `show(err)` to see complete types.

@ogauthe
Contributor Author

ogauthe commented Jul 8, 2024

Feature request: LinearAlgebra.Adjoint{T, BlockSparseMatrix} should behave the same way as BlockSparseMatrix under slicing operations.
Currently, slicing an Adjoint with a Block returns a BlockedArray instead:

typeof(m1[BlockArrays.Block(1),:])  # BlockSparseArray
typeof(adjoint(m1)[BlockArrays.Block(1),:])  # Matrix

EDIT: I wonder whether the design of LinearAlgebra.Adjoint{T, BlockSparseMatrix} is the best one. Maybe adjoint should return a BlockSparseMatrix where each block is a LinearAlgebra.Adjoint{Matrix}? That would solve the slicing issues and would be similar to supporting DiagonalArrays as a block type.
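
A minimal sketch of that alternative design (block_adjoint is a hypothetical name; dual axes are ignored here, and the current setindex! may still convert the Adjoint blocks back to Matrix, as in the Diagonal issue above):

using BlockArrays: Block

# Hypothetical: eagerly materialize the adjoint as a new BlockSparseMatrix,
# transposing the block pattern and adjointing each stored block.
function block_adjoint(m)
  out = BlockSparseArrays.BlockSparseArray{eltype(m)}(reverse(axes(m))...)
  for b in BlockSparseArrays.block_stored_indices(m)
    r, c = Int.(Tuple(b))
    out[Block(c, r)] = adjoint(m[Block(r, c)])
  end
  return out
end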

@mtfishman
Member

mtfishman commented Jul 27, 2024

@ogauthe thanks for the issue reports.

  1. One reason not to use Vector{<:Block} for the return type of block_stored_indices is that lookup wouldn't be O(1); that's why it is a Dictionary. You are free to convert to Vector{<:Block} if you want to (see the sketch after this list).
  2. Regarding block_stored_indices(::Adjoint), I can look into that when I have time but it may be faster for you to do it yourself, it should be an easy fix. Feel free to make a PR.
  3. I'm aware of the issues using LinearAlgebra.Diagonal as blocks of a BlockSparseMatrix; I definitely plan to support that but haven't had time to look into it. I've clarified that in the issue list in the first post. For now you'll just have to use Array blocks instead and accept the performance penalty. Mostly I think the issue is that, as a shortcut, I hardcoded the return types of certain operations (for example in some similar definitions) to be BlockSparseArray with Array blocks instead of inferring the block type from the input BlockSparseArray. Feel free to make PRs fixing issues you come across with that.
  4. Those issues with slicing block sparse arrays with dual axes and also adjointed block sparse arrays look more subtle, feel free to look into them and make PRs if you have time.
  5. I don't think it is a good idea to make adjoint of BlockSparseArray not return Adjoint; that would make some generic code harder to write, since it would be more difficult to get this kind of behavior:
julia> v = [1, 2, 3];

julia> v' * v
14

if v' were a plain BlockSparseVector rather than an Adjoint wrapping a BlockSparseVector. More generally, the Adjoint wrapper is helpful for performing optimizations and dispatch. We should just focus on fixing issues with BlockSparseArray wrapped in Adjoint as they come up. I think the challenge with slicing is that it creates a SubArray wrapping an Adjoint wrapping a BlockSparseArray; handling multiple wrapper types is a bit tricky, but I'm sure we can make it work. Again, feel free to make a PR with fixes for issues you come across.
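
Regarding point 1, a minimal sketch of the trade-off (values taken from the example earlier in the thread; assumes Dictionaries.jl semantics):

using BlockArrays: Block
using Dictionaries: Dictionary

d = Dictionary([CartesianIndex(1, 2)], [Block(1, 2)])
haskey(d, CartesianIndex(1, 2))  # true, O(1) membership test
collect(d)  # Vector of Blocks, if iteration order is all you need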

@ogauthe
Contributor Author

ogauthe commented Jul 31, 2024

Thanks for the detailed answer.

Concerning block_stored_indices, I get your point about complexity. I still feel the duplication is not needed and should be avoided: the issue arises precisely because the keys duplicated at NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/interface_optional.jl:27 are only partially transposed at NDTensors/src/lib/BlockSparseArrays/src/blocksparsearrayinterface/blocksparsearrayinterface.jl:221. Preserving a Dictionaries.Indices (the type before duplication) or a Set, for instance, would avoid reaching such an inconsistent state.
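
A minimal sketch of that suggestion, using a Set so each stored block appears exactly once and transposition is a single map (illustration only, not the actual NDTensors data structure):

stored = Set([CartesianIndex(1, 2)])  # one entry per stored block
adj_stored = Set(CartesianIndex(reverse(Tuple(I))) for I in stored)
# adj_stored == Set([CartesianIndex(2, 1)]): no partially transposed state is possible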

@ogauthe
Contributor Author

ogauthe commented Sep 11, 2024

issue: display error when dual is involved
main at 5d4111e3f0d720b17b58de137f633225094fb379

g1 = gradedrange([U1(0) => 1])
m1 = BlockSparseArrays.BlockSparseArray{Float64}(g1, g1)
m2 = BlockSparseArrays.BlockSparseArray{Float64}(g1, dual(g1))
display(m1[:,:])  # Ok
display(m2)  # Ok
display(m2[:,:])  # MethodError
ERROR: MethodError: no method matching LabelledInteger{Int64, U1}(::Int64)

Closest candidates are:
  (::Type{LabelledInteger{Value, Label}} where {Value<:Integer, Label})(::Any, ::Any)
   @ NDTensors ~/Documents/Recherche/itensor/ITensors.jl/NDTensors/src/lib/LabelledNumbers/src/labelledinteger.jl:2
  (::Type{T})(::T) where T<:Number
   @ Core boot.jl:792
  (::Type{IntT})(::NDTensors.Block{1}) where IntT<:Integer
   @ NDTensors ~/Documents/Recherche/itensor/ITensors.jl/NDTensors/src/blocksparse/block.jl:63
  ...

Stacktrace:
  [1] convert(::Type{LabelledInteger{Int64, U1}}, x::Int64)
    @ Base ./number.jl:7
  [2] cvt1
    @ ./essentials.jl:468 [inlined]
  [3] ntuple
    @ ./ntuple.jl:49 [inlined]
  [4] convert(::Type{Tuple{Int64, LabelledInteger{Int64, U1}}}, x::Tuple{Int64, Int64})
    @ Base ./essentials.jl:470
  [5] push!(a::Vector{Tuple{Int64, LabelledInteger{Int64, U1}}}, item::Tuple{Int64, Int64})
    @ Base ./array.jl:1118
  [6] alignment(io::IOContext{…}, X::AbstractVecOrMat, rows::Vector{…}, cols::Vector{…}, cols_if_complete::Int64, cols_otherwise::Int64, sep::Int64, ncols::Int64)
    @ Base ./arrayshow.jl:76
  [7] _print_matrix(io::IOContext{…}, X::AbstractVecOrMat, pre::String, sep::String, post::String, hdots::String, vdots::String, ddots::String, hmod::Int64, vmod::Int64, rowsA::UnitRange{…}, colsA::UnitRange{…})
    @ Base ./arrayshow.jl:207
  [8] print_matrix(io::IOContext{…}, X::NDTensors.BlockSparseArrays.BlockSparseArray{…}, pre::String, sep::String, post::String, hdots::String, vdots::String, ddots::String, hmod::Int64, vmod::Int64)
    @ Base ./arrayshow.jl:171
  [9] print_matrix
    @ ./arrayshow.jl:171 [inlined]
 [10] print_array
    @ ./arrayshow.jl:358 [inlined]
 [11] show(io::IOContext{…}, ::MIME{…}, X::NDTensors.BlockSparseArrays.BlockSparseArray{…})
    @ Base ./arrayshow.jl:399
 [12] #blocksparse_show#11
    @ ~/Documents/Recherche/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/ext/BlockSparseArraysGradedAxesExt/src/BlockSparseArraysGradedAxesExt.jl:120 [inlined]
 [13] blocksparse_show
    @ ~/Documents/Recherche/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/ext/BlockSparseArraysGradedAxesExt/src/BlockSparseArraysGradedAxesExt.jl:112 [inlined]
 [14] #show#12
    @ ~/Documents/Recherche/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/ext/BlockSparseArraysGradedAxesExt/src/BlockSparseArraysGradedAxesExt.jl:130 [inlined]
 [15] show(io::IOContext{…}, mime::MIME{…}, a::NDTensors.BlockSparseArrays.BlockSparseArray{…})
    @ NDTensors.BlockSparseArrays.BlockSparseArraysGradedAxesExt ~/Documents/Recherche/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/ext/BlockSparseArraysGradedAxesExt/src/BlockSparseArraysGradedAxesExt.jl:127
 [16] (::OhMyREPL.var"#15#16"{REPL.REPLDisplay{REPL.LineEditREPL}, MIME{Symbol("text/plain")}, Base.RefValue{Any}})(io::IOContext{Base.TTY})
    @ OhMyREPL ~/.julia/packages/OhMyREPL/HzW5x/src/output_prompt_overwrite.jl:23
 [17] with_repl_linfo(f::Any, repl::REPL.LineEditREPL)
    @ REPL ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:569
 [18] display
    @ ~/.julia/packages/OhMyREPL/HzW5x/src/output_prompt_overwrite.jl:6 [inlined]
 [19] display
    @ ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:278 [inlined]
 [20] display(x::Any)
    @ Base.Multimedia ./multimedia.jl:340
 [21] top-level scope
    @ REPL[30]:1
Some type information was truncated. Use `show(err)` to see complete types.

This is the same error as in #1336 (comment), in a different context. That previous case was fixed and no longer errors.

This is another case that should be fixed by refactoring UnitRangeDual (I am currently working on it).

@ogauthe
Contributor Author

ogauthe commented Sep 12, 2024

I realize there are other issues with a[:,:]. The axes are cast to Base.IdentityUnitRange wrapping the original axes(a). This causes the display error above, but it also creates issues with blocklabels and blocklengths. So when calling a[:,:], the array itself is correct, but the axes have yet another type that does not behave as expected.

Should we change the behavior of a[:,:] to preserve the axes or should we define blocklabels and blocklengths for Base.IdentityUnitRange{GradedUnitRange}? I currently favor fixing the axes types to avoid adding more code to support exotic axes types.

@mtfishman
Member

I think a[:, :] should always be exactly the same as copy(a), if I understand correctly, so if it doesn't act like that, we should change it so it does.
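
Written as tests, the invariant would be something like this sketch (m2 as defined in the comment above; using Test assumed, and the second test reflects the current IdentityUnitRange behavior):

using Test
@test m2[:, :] == copy(m2)  # same values
@test_broken typeof(axes(m2[:, :])) == typeof(axes(m2))  # axes currently become Base.IdentityUnitRange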

@ogauthe
Contributor Author

ogauthe commented Sep 12, 2024

issue: it is still possible to create a GradedAxes.GradedUnitRange, a.k.a. a BlockArrays.BlockedUnitRange{<:LabelledInteger}, by slicing a BlockedOneTo. Using such an axis for a BlockSparseArray creates many issues, e.g.

r = gradedrange([U1(1) => 2, U1(2) => 2])[1:3]
a = BlockSparseArray{Float64}(r,r)
a[1:2,1:2]  # MethodError
ERROR: MethodError: no method matching to_blockindices(::BlockArrays.BlockedUnitRange{…}, ::UnitRange{…})

Closest candidates are:
  to_blockindices(::UnitRangeDual, ::UnitRange{<:Integer})
   @ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/GradedAxes/src/unitrangedual.jl:54
  to_blockindices(::Base.OneTo, ::UnitRange{<:Integer})
   @ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/GradedAxes/src/blockedunitrange.jl:186
  to_blockindices(::BlockedOneTo, ::UnitRange{<:Integer})
   @ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/GradedAxes/src/blockedunitrange.jl:170

Stacktrace:
 [1] blocksparse_to_indices(a::BlockSparseArray{…}, inds::Tuple{…}, I::Tuple{…})
   @ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/blocksparsearrayinterface/blocksparsearrayinterface.jl:32
 [2] to_indices(a::BlockSparseArray{…}, inds::Tuple{…}, I::Tuple{…})
   @ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/wrappedabstractblocksparsearray.jl:26
 [3] to_indices
   @ ./indices.jl:344 [inlined]
 [4] view
   @ ./subarray.jl:183 [inlined]
 [5] layout_getindex
   @ ~/.julia/packages/ArrayLayouts/31idh/src/ArrayLayouts.jl:138 [inlined]
 [6] getindex(::BlockSparseArray{…}, ::UnitRange{…}, ::UnitRange{…})
   @ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/wrappedabstractblocksparsearray.jl:92
 [7] top-level scope
   @ REPL[57]:1
Some type information was truncated. Use `show(err)` to see complete types.

main at 5d4111e3f0d720b17b58de137f633225094fb379

@ogauthe
Contributor Author

ogauthe commented Sep 16, 2024

issue: copy(adjoint) does not preserve dual axes

r = gradedrange([U1(0) => 2, U1(1) => 2])
a = BlockSparseArray{Float64}(r, r)

@test isdual.(axes(a)) == (false, false)
@test isdual.(axes(adjoint(a))) == (true, true)
@test_broken isdual.(axes(copy(adjoint(a)))) == (true, true)

main at 5d4111e3f0d720b17b58de137f633225094fb379

EDIT: I got confused about adjoint; the issue here is only with copy.
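
A hedged workaround sketch until copy is fixed (copy_preserving_axes is a hypothetical helper): rebuild the array from its own, possibly dual, axes and copy the stored blocks over. For adjoint inputs this additionally relies on block_stored_indices(::Adjoint) behaving correctly, see above.

function copy_preserving_axes(a)
  out = BlockSparseArrays.BlockSparseArray{eltype(a)}(axes(a)...)
  for b in BlockSparseArrays.block_stored_indices(a)
    out[b] = copy(a[b])
  end
  return out
end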
