diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json index e6ee3648..5b76d358 100644 --- a/dev/.documenter-siteinfo.json +++ b/dev/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.12.0-DEV.0","generation_timestamp":"2024-02-16T12:14:06","documenter_version":"1.2.1"}} \ No newline at end of file +{"documenter":{"julia_version":"1.12.0-DEV.44","generation_timestamp":"2024-02-21T17:22:02","documenter_version":"1.2.1"}} \ No newline at end of file diff --git a/dev/index.html b/dev/index.html index 0a1e99c4..cf9feacd 100644 --- a/dev/index.html +++ b/dev/index.html @@ -62,7 +62,7 @@ 3-element SparseVector{Float64, Int64} with 2 stored entries: [1] = 1.0 [3] = 1.0

You can go in the other direction using the Array constructor. The issparse function can be used to query if a matrix is sparse.

julia> issparse(spzeros(5))
-true

Sparse matrix operations

Arithmetic operations on sparse matrices also work as they do on dense matrices. Indexing of, assignment into, and concatenation of sparse matrices work in the same way as for dense matrices. Indexing operations, especially assignment, are expensive when carried out one element at a time. In many cases it may be better to convert the sparse matrix into (I,J,V) format using findnz, manipulate the values or the structure in the dense vectors (I,J,V), and then reconstruct the sparse matrix.
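A minimal sketch of that round trip (the example matrix is invented):

using SparseArrays

S = sparse([1, 2, 3], [1, 2, 3], [1.0, 2.0, 3.0])  # small example sparse matrix
I, J, V = findnz(S)                # extract the coordinate (I,J,V) representation
V .= 2 .* V                        # manipulate the values as ordinary dense vectors
S2 = sparse(I, J, V, size(S)...)   # reconstruct the sparse matrix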

Correspondence of dense and sparse methods

The following table gives a correspondence between built-in methods on sparse matrices and their corresponding methods on dense matrix types. In general, methods that generate sparse matrices differ from their dense counterparts in that the resulting matrix follows the same sparsity pattern as a given sparse matrix S, or that the resulting sparse matrix has density d, i.e. each matrix element has a probability d of being non-zero.

Details can be found in the Sparse Vectors and Matrices section of the standard library reference.

Sparse | Dense | Description
spzeros(m,n) | zeros(m,n) | Creates an m-by-n matrix of zeros. (spzeros(m,n) is empty.)
sparse(I,n,n) | Matrix(I,n,n) | Creates an n-by-n identity matrix.
sparse(A) | Array(S) | Interconverts between dense and sparse formats.
sprand(m,n,d) | rand(m,n) | Creates an m-by-n random matrix (of density d) with iid non-zero elements distributed uniformly on the half-open interval $[0, 1)$.
sprandn(m,n,d) | randn(m,n) | Creates an m-by-n random matrix (of density d) with iid non-zero elements distributed according to the standard normal (Gaussian) distribution.
sprandn(rng,m,n,d) | randn(rng,m,n) | Creates an m-by-n random matrix (of density d) with iid non-zero elements generated with the rng random number generator.

SparseArrays API

SparseArrays.AbstractSparseArrayType
AbstractSparseArray{Tv,Ti,N}

Supertype for N-dimensional sparse arrays (or array-like types) with elements of type Tv and index type Ti. SparseMatrixCSC, SparseVector and SuiteSparse.CHOLMOD.Sparse are subtypes of this.

source
SparseArrays.AbstractSparseVectorType
AbstractSparseVector{Tv,Ti}

Supertype for one-dimensional sparse arrays (or array-like types) with elements of type Tv and index type Ti. Alias for AbstractSparseArray{Tv,Ti,1}.

source
SparseArrays.AbstractSparseMatrixType
AbstractSparseMatrix{Tv,Ti}

Supertype for two-dimensional sparse arrays (or array-like types) with elements of type Tv and index type Ti. Alias for AbstractSparseArray{Tv,Ti,2}.

source
SparseArrays.SparseVectorType
SparseVector{Tv,Ti<:Integer} <: AbstractSparseVector{Tv,Ti}

Vector type for storing sparse vectors. Can be created by passing the length of the vector, a sorted vector of non-zero indices, and a vector of non-zero values.

For instance, the vector [5, 6, 0, 7] can be represented as

SparseVector(4, [1, 2, 4], [5, 6, 7])

This indicates that the element at index 1 is 5, at index 2 is 6, at index 3 is zero(Int), and at index 4 is 7.

It may be more convenient to create sparse vectors directly from dense vectors using sparse as

sparse([5, 6, 0, 7])

yields the same sparse vector.

source
SparseArrays.SparseMatrixCSCType
SparseMatrixCSC{Tv,Ti<:Integer} <: AbstractSparseMatrixCSC{Tv,Ti}

Matrix type for storing sparse matrices in the Compressed Sparse Column format. The standard way of constructing SparseMatrixCSC is through the sparse function. See also spzeros, spdiagm and sprand.

source
SparseArrays.sparseFunction
sparse(A)

Convert an AbstractMatrix A into a sparse matrix.

Examples

julia> A = Matrix(1.0I, 3, 3)
+true

Sparse matrix operations

Arithmetic operations on sparse matrices also work as they do on dense matrices. Indexing of, assignment into, and concatenation of sparse matrices work in the same way as for dense matrices. Indexing operations, especially assignment, are expensive when carried out one element at a time. In many cases it may be better to convert the sparse matrix into (I,J,V) format using findnz, manipulate the values or the structure in the dense vectors (I,J,V), and then reconstruct the sparse matrix.

Correspondence of dense and sparse methods

The following table gives a correspondence between built-in methods on sparse matrices and their corresponding methods on dense matrix types. In general, methods that generate sparse matrices differ from their dense counterparts in that the resulting matrix follows the same sparsity pattern as a given sparse matrix S, or that the resulting sparse matrix has density d, i.e. each matrix element has a probability d of being non-zero.

Details can be found in the Sparse Vectors and Matrices section of the standard library reference.

Sparse | Dense | Description
spzeros(m,n) | zeros(m,n) | Creates an m-by-n matrix of zeros. (spzeros(m,n) is empty.)
sparse(I,n,n) | Matrix(I,n,n) | Creates an n-by-n identity matrix.
sparse(A) | Array(S) | Interconverts between dense and sparse formats.
sprand(m,n,d) | rand(m,n) | Creates an m-by-n random matrix (of density d) with iid non-zero elements distributed uniformly on the half-open interval $[0, 1)$.
sprandn(m,n,d) | randn(m,n) | Creates an m-by-n random matrix (of density d) with iid non-zero elements distributed according to the standard normal (Gaussian) distribution.
sprandn(rng,m,n,d) | randn(rng,m,n) | Creates an m-by-n random matrix (of density d) with iid non-zero elements generated with the rng random number generator.

SparseArrays API

SparseArrays.AbstractSparseVectorType
AbstractSparseVector{Tv,Ti}

Supertype for one-dimensional sparse arrays (or array-like types) with elements of type Tv and index type Ti. Alias for AbstractSparseArray{Tv,Ti,1}.

source
SparseArrays.AbstractSparseMatrixType
AbstractSparseMatrix{Tv,Ti}

Supertype for two-dimensional sparse arrays (or array-like types) with elements of type Tv and index type Ti. Alias for AbstractSparseArray{Tv,Ti,2}.

source
SparseArrays.SparseVectorType
SparseVector{Tv,Ti<:Integer} <: AbstractSparseVector{Tv,Ti}

Vector type for storing sparse vectors. Can be created by passing the length of the vector, a sorted vector of non-zero indices, and a vector of non-zero values.

For instance, the vector [5, 6, 0, 7] can be represented as

SparseVector(4, [1, 2, 4], [5, 6, 7])

This indicates that the element at index 1 is 5, at index 2 is 6, at index 3 is zero(Int), and at index 4 is 7.

It may be more convenient to create sparse vectors directly from dense vectors using sparse as

sparse([5, 6, 0, 7])

yields the same sparse vector.

source
SparseArrays.sparseFunction
sparse(A)

Convert an AbstractMatrix A into a sparse matrix.

Examples

julia> A = Matrix(1.0I, 3, 3)
 3×3 Matrix{Float64}:
  1.0  0.0  0.0
  0.0  1.0  0.0
@@ -72,7 +72,7 @@
 3×3 SparseMatrixCSC{Float64, Int64} with 3 stored entries:
  1.0   ⋅    ⋅
   ⋅   1.0   ⋅
-  ⋅    ⋅   1.0
source
sparse(I, J, V,[ m, n, combine])

Create a sparse matrix S of dimensions m x n such that S[I[k], J[k]] = V[k]. The combine function is used to combine duplicates. If m and n are not specified, they are set to maximum(I) and maximum(J) respectively. If the combine function is not supplied, combine defaults to + unless the elements of V are Booleans in which case combine defaults to |. All elements of I must satisfy 1 <= I[k] <= m, and all elements of J must satisfy 1 <= J[k] <= n. Numerical zeros in (I, J, V) are retained as structural nonzeros; to drop numerical zeros, use dropzeros!.

For additional documentation and an expert driver, see SparseArrays.sparse!.

Examples

julia> Is = [1; 2; 3];
+  ⋅    ⋅   1.0
source
sparse(I, J, V,[ m, n, combine])

Create a sparse matrix S of dimensions m x n such that S[I[k], J[k]] = V[k]. The combine function is used to combine duplicates. If m and n are not specified, they are set to maximum(I) and maximum(J) respectively. If the combine function is not supplied, combine defaults to + unless the elements of V are Booleans in which case combine defaults to |. All elements of I must satisfy 1 <= I[k] <= m, and all elements of J must satisfy 1 <= J[k] <= n. Numerical zeros in (I, J, V) are retained as structural nonzeros; to drop numerical zeros, use dropzeros!.

For additional documentation and an expert driver, see SparseArrays.sparse!.

Examples

julia> Is = [1; 2; 3];
 
 julia> Js = [1; 2; 3];
 
@@ -82,10 +82,10 @@
 3×3 SparseMatrixCSC{Int64, Int64} with 3 stored entries:
  1  ⋅  ⋅
  ⋅  2  ⋅
- ⋅  ⋅  3
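As a hedged illustration of the combine argument described above (the data are invented), duplicate (row, column) pairs are merged by the supplied function:

using SparseArrays

I = [1, 1, 2]; J = [1, 1, 2]; V = [3.0, 4.0, 5.0]   # the (1,1) coordinate appears twice
S1 = sparse(I, J, V)              # default combine is +, so S1[1,1] == 7.0
S2 = sparse(I, J, V, 2, 2, max)   # a custom combine keeps the larger duplicate: S2[1,1] == 4.0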
source
SparseArrays.sparse!Function
sparse!(I::AbstractVector{Ti}, J::AbstractVector{Ti}, V::AbstractVector{Tv},
         m::Integer, n::Integer, combine, klasttouch::Vector{Ti},
         csrrowptr::Vector{Ti}, csrcolval::Vector{Ti}, csrnzval::Vector{Tv},
-        [csccolptr::Vector{Ti}], [cscrowval::Vector{Ti}, cscnzval::Vector{Tv}] ) where {Tv,Ti<:Integer}

Parent of and expert driver for sparse; see sparse for basic usage. This method allows the user to provide preallocated storage for sparse's intermediate objects and result as described below. This capability enables more efficient successive construction of SparseMatrixCSCs from coordinate representations, and also enables extraction of an unsorted-column representation of the result's transpose at no additional cost.

This method consists of three major steps: (1) Counting-sort the provided coordinate representation into an unsorted-row CSR form including repeated entries. (2) Sweep through the CSR form, simultaneously calculating the desired CSC form's column-pointer array, detecting repeated entries, and repacking the CSR form with repeated entries combined; this stage yields an unsorted-row CSR form with no repeated entries. (3) Counting-sort the preceding CSR form into a fully-sorted CSC form with no repeated entries.

Input arrays csrrowptr, csrcolval, and csrnzval constitute storage for the intermediate CSR forms and require length(csrrowptr) >= m + 1, length(csrcolval) >= length(I), and length(csrnzval) >= length(I). Input array klasttouch, workspace for the second stage, requires length(klasttouch) >= n. Optional input arrays csccolptr, cscrowval, and cscnzval constitute storage for the returned CSC form S. If necessary, these are resized automatically to satisfy length(csccolptr) = n + 1, length(cscrowval) = nnz(S) and length(cscnzval) = nnz(S); hence, if nnz(S) is unknown at the outset, passing in empty vectors of the appropriate type (Vector{Ti}() and Vector{Tv}() respectively) suffices, as does calling the sparse! method neglecting cscrowval and cscnzval.

On return, csrrowptr, csrcolval, and csrnzval contain an unsorted-column representation of the result's transpose.

You may reuse the input arrays' storage (I, J, V) for the output arrays (csccolptr, cscrowval, cscnzval). For example, you may call sparse!(I, J, V, csrrowptr, csrcolval, csrnzval, I, J, V). Note that they will be resized to satisfy the conditions above.

For the sake of efficiency, this method performs no argument checking beyond 1 <= I[k] <= m and 1 <= J[k] <= n. Use with care. Testing with --check-bounds=yes is wise.

This method runs in O(m, n, length(I)) time. The HALFPERM algorithm described in F. Gustavson, "Two fast algorithms for sparse matrices: multiplication and permuted transposition," ACM TOMS 4(3), 250-269 (1978) inspired this method's use of a pair of counting sorts.

source
SparseArrays.sparse!(I, J, V, [m, n, combine]) -> SparseMatrixCSC

Variant of sparse! that re-uses the input vectors (I, J, V) for the final matrix storage. After construction the input vectors will alias the matrix buffers; S.colptr === I, S.rowval === J, and S.nzval === V holds, and they will be resize!d as necessary.

Note that some work buffers will still be allocated. Specifically, this method is a convenience wrapper around sparse!(I, J, V, m, n, combine, klasttouch, csrrowptr, csrcolval, csrnzval, csccolptr, cscrowval, cscnzval) where this method allocates klasttouch, csrrowptr, csrcolval, and csrnzval of appropriate size, but reuses I, J, and V for csccolptr, cscrowval, and cscnzval.

Arguments m, n, and combine default to maximum(I), maximum(J), and +, respectively.

Julia 1.10

This method requires Julia version 1.10 or later.
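A minimal usage sketch of this variant (Julia 1.10 or later; the inputs are invented), showing how the inputs end up aliasing the result's buffers:

using SparseArrays

I = [1, 2, 3]; J = [1, 2, 3]; V = [1.0, 2.0, 3.0]
S = SparseArrays.sparse!(I, J, V)   # I, J, V are resized and reused as the matrix storage
S.colptr === I && S.rowval === J && S.nzval === V   # true, as described above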

source
SparseArrays.sparsevecFunction
sparsevec(I, V, [m, combine])

Create a sparse vector S of length m such that S[I[k]] = V[k]. Duplicates are combined using the combine function, which defaults to + if no combine argument is provided, unless the elements of V are Booleans in which case combine defaults to |.

Examples

julia> II = [1, 3, 3, 5]; V = [0.1, 0.2, 0.3, 0.2];
+        [csccolptr::Vector{Ti}], [cscrowval::Vector{Ti}, cscnzval::Vector{Tv}] ) where {Tv,Ti<:Integer}

Parent of and expert driver for sparse; see sparse for basic usage. This method allows the user to provide preallocated storage for sparse's intermediate objects and result as described below. This capability enables more efficient successive construction of SparseMatrixCSCs from coordinate representations, and also enables extraction of an unsorted-column representation of the result's transpose at no additional cost.

This method consists of three major steps: (1) Counting-sort the provided coordinate representation into an unsorted-row CSR form including repeated entries. (2) Sweep through the CSR form, simultaneously calculating the desired CSC form's column-pointer array, detecting repeated entries, and repacking the CSR form with repeated entries combined; this stage yields an unsorted-row CSR form with no repeated entries. (3) Counting-sort the preceding CSR form into a fully-sorted CSC form with no repeated entries.

Input arrays csrrowptr, csrcolval, and csrnzval constitute storage for the intermediate CSR forms and require length(csrrowptr) >= m + 1, length(csrcolval) >= length(I), and length(csrnzval) >= length(I). Input array klasttouch, workspace for the second stage, requires length(klasttouch) >= n. Optional input arrays csccolptr, cscrowval, and cscnzval constitute storage for the returned CSC form S. If necessary, these are resized automatically to satisfy length(csccolptr) = n + 1, length(cscrowval) = nnz(S) and length(cscnzval) = nnz(S); hence, if nnz(S) is unknown at the outset, passing in empty vectors of the appropriate type (Vector{Ti}() and Vector{Tv}() respectively) suffices, as does calling the sparse! method neglecting cscrowval and cscnzval.

On return, csrrowptr, csrcolval, and csrnzval contain an unsorted-column representation of the result's transpose.

You may reuse the input arrays' storage (I, J, V) for the output arrays (csccolptr, cscrowval, cscnzval). For example, you may call sparse!(I, J, V, csrrowptr, csrcolval, csrnzval, I, J, V). Note that they will be resized to satisfy the conditions above.

For the sake of efficiency, this method performs no argument checking beyond 1 <= I[k] <= m and 1 <= J[k] <= n. Use with care. Testing with --check-bounds=yes is wise.

This method runs in O(m, n, length(I)) time. The HALFPERM algorithm described in F. Gustavson, "Two fast algorithms for sparse matrices: multiplication and permuted transposition," ACM TOMS 4(3), 250-269 (1978) inspired this method's use of a pair of counting sorts.

source
SparseArrays.sparse!(I, J, V, [m, n, combine]) -> SparseMatrixCSC

Variant of sparse! that re-uses the input vectors (I, J, V) for the final matrix storage. After construction the input vectors will alias the matrix buffers; S.colptr === I, S.rowval === J, and S.nzval === V holds, and they will be resize!d as necessary.

Note that some work buffers will still be allocated. Specifically, this method is a convenience wrapper around sparse!(I, J, V, m, n, combine, klasttouch, csrrowptr, csrcolval, csrnzval, csccolptr, cscrowval, cscnzval) where this method allocates klasttouch, csrrowptr, csrcolval, and csrnzval of appropriate size, but reuses I, J, and V for csccolptr, cscrowval, and cscnzval.

Arguments m, n, and combine default to maximum(I), maximum(J), and +, respectively.

Julia 1.10

This method requires Julia version 1.10 or later.

source
SparseArrays.sparsevecFunction
sparsevec(I, V, [m, combine])

Create a sparse vector S of length m such that S[I[k]] = V[k]. Duplicates are combined using the combine function, which defaults to + if no combine argument is provided, unless the elements of V are Booleans in which case combine defaults to |.

Examples

julia> II = [1, 3, 3, 5]; V = [0.1, 0.2, 0.3, 0.2];
 
 julia> sparsevec(II, V)
 5-element SparseVector{Float64, Int64} with 3 stored entries:
@@ -103,14 +103,14 @@
 3-element SparseVector{Bool, Int64} with 3 stored entries:
   [1]  =  1
   [2]  =  0
-  [3]  =  1
source
sparsevec(d::Dict, [m])

Create a sparse vector of length m where the nonzero indices are keys from the dictionary, and the nonzero values are the values from the dictionary.

Examples

julia> sparsevec(Dict(1 => 3, 2 => 2))
+  [3]  =  1
source
sparsevec(d::Dict, [m])

Create a sparse vector of length m where the nonzero indices are keys from the dictionary, and the nonzero values are the values from the dictionary.

Examples

julia> sparsevec(Dict(1 => 3, 2 => 2))
 2-element SparseVector{Int64, Int64} with 2 stored entries:
   [1]  =  3
-  [2]  =  2
source
sparsevec(A)

Convert a vector A into a sparse vector of length m.

Examples

julia> sparsevec([1.0, 2.0, 0.0, 0.0, 3.0, 0.0])
+  [2]  =  2
source
sparsevec(A)

Convert a vector A into a sparse vector of length m.

Examples

julia> sparsevec([1.0, 2.0, 0.0, 0.0, 3.0, 0.0])
 6-element SparseVector{Float64, Int64} with 3 stored entries:
   [1]  =  1.0
   [2]  =  2.0
-  [5]  =  3.0
source
Base.similarMethod
similar(A::AbstractSparseMatrixCSC{Tv,Ti}, [::Type{TvNew}, ::Type{TiNew}, m::Integer, n::Integer]) where {Tv,Ti}

Create an uninitialized mutable array with the given element type, index type, and size, based upon the given source SparseMatrixCSC. The new sparse matrix maintains the structure of the original sparse matrix, except in the case where the dimensions of the output matrix differ from those of the input.

The output matrix has zeros in the same locations as the input, but uninitialized values for the nonzero locations.
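A small sketch (the matrix is invented) of how the structure carries over:

using SparseArrays

A = sparse([1, 2], [1, 2], [1.0, 2.0])
B = similar(A)                        # same stored locations, undefined stored values
C = similar(A, ComplexF64)            # same structure, new element type
D = similar(A, Float64, Int32, 4, 4)  # new index type and size; the structure is not preserved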

source
SparseArrays.issparseFunction
issparse(S)

Returns true if S is sparse, and false otherwise.

Examples

julia> sv = sparsevec([1, 4], [2.3, 2.2], 10)
+  [5]  =  3.0
source
Base.similarMethod
similar(A::AbstractSparseMatrixCSC{Tv,Ti}, [::Type{TvNew}, ::Type{TiNew}, m::Integer, n::Integer]) where {Tv,Ti}

Create an uninitialized mutable array with the given element type, index type, and size, based upon the given source SparseMatrixCSC. The new sparse matrix maintains the structure of the original sparse matrix, except in the case where the dimensions of the output matrix differ from those of the input.

The output matrix has zeros in the same locations as the input, but uninitialized values for the nonzero locations.

source
SparseArrays.issparseFunction
issparse(S)

Returns true if S is sparse, and false otherwise.

Examples

julia> sv = sparsevec([1, 4], [2.3, 2.2], 10)
 10-element SparseVector{Float64, Int64} with 2 stored entries:
   [1]  =  2.3
   [4]  =  2.2
@@ -119,37 +119,37 @@
 true
 
 julia> issparse(Array(sv))
-false
source
SparseArrays.nnzFunction
nnz(A)

Returns the number of stored (filled) elements in a sparse array.

Examples

julia> A = sparse(2I, 3, 3)
+false
source
SparseArrays.nnzFunction
nnz(A)

Returns the number of stored (filled) elements in a sparse array.

Examples

julia> A = sparse(2I, 3, 3)
 3×3 SparseMatrixCSC{Int64, Int64} with 3 stored entries:
  2  ⋅  ⋅
  ⋅  2  ⋅
  ⋅  ⋅  2
 
 julia> nnz(A)
-3
source
SparseArrays.findnzFunction
findnz(A::SparseMatrixCSC)

Return a tuple (I, J, V) where I and J are the row and column indices of the stored ("structurally non-zero") values in sparse matrix A, and V is a vector of the values.

Examples

julia> A = sparse([1 2 0; 0 0 3; 0 4 0])
+3
source
SparseArrays.findnzFunction
findnz(A::SparseMatrixCSC)

Return a tuple (I, J, V) where I and J are the row and column indices of the stored ("structurally non-zero") values in sparse matrix A, and V is a vector of the values.

Examples

julia> A = sparse([1 2 0; 0 0 3; 0 4 0])
 3×3 SparseMatrixCSC{Int64, Int64} with 4 stored entries:
  1  2  ⋅
  ⋅  ⋅  3
  ⋅  4  ⋅
 
 julia> findnz(A)
-([1, 1, 3, 2], [1, 2, 2, 3], [1, 2, 4, 3])
source
SparseArrays.spzerosFunction
spzeros([type,]m[,n])

Create a sparse vector of length m or sparse matrix of size m x n. This sparse array will not contain any nonzero values. No storage will be allocated for nonzero values during construction. The type defaults to Float64 if not specified.

Examples

julia> spzeros(3, 3)
+([1, 1, 3, 2], [1, 2, 2, 3], [1, 2, 4, 3])
source
SparseArrays.spzerosFunction
spzeros([type,]m[,n])

Create a sparse vector of length m or sparse matrix of size m x n. This sparse array will not contain any nonzero values. No storage will be allocated for nonzero values during construction. The type defaults to Float64 if not specified.

Examples

julia> spzeros(3, 3)
 3×3 SparseMatrixCSC{Float64, Int64} with 0 stored entries:
   ⋅    ⋅    ⋅
   ⋅    ⋅    ⋅
   ⋅    ⋅    ⋅
 
 julia> spzeros(Float32, 4)
-4-element SparseVector{Float32, Int64} with 0 stored entries
source
spzeros([type], I::AbstractVector, J::AbstractVector, [m, n])

Create a sparse matrix S of dimensions m x n with structural zeros at S[I[k], J[k]].

This method can be used to construct the sparsity pattern of the matrix, and is more efficient than using e.g. sparse(I, J, zeros(length(I))).
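A brief sketch (Julia 1.10 or later; the indices are invented) of building only the sparsity pattern:

using SparseArrays

Is = [1, 2, 3]; Js = [1, 1, 3]
S = spzeros(Float64, Is, Js)   # 3×3 matrix with three structurally stored zeros
nnz(S)                         # 3, even though every stored value is 0.0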

For additional documentation and an expert driver, see SparseArrays.spzeros!.

Julia 1.10

This method requires Julia version 1.10 or later.

source
SparseArrays.spzeros!Function
spzeros!(::Type{Tv}, I::AbstractVector{Ti}, J::AbstractVector{Ti}, m::Integer, n::Integer,
+4-element SparseVector{Float32, Int64} with 0 stored entries
source
spzeros([type], I::AbstractVector, J::AbstractVector, [m, n])

Create a sparse matrix S of dimensions m x n with structural zeros at S[I[k], J[k]].

This method can be used to construct the sparsity pattern of the matrix, and is more efficient than using e.g. sparse(I, J, zeros(length(I))).

For additional documentation and an expert driver, see SparseArrays.spzeros!.

Julia 1.10

This method requires Julia version 1.10 or later.

source
SparseArrays.spzeros!Function
spzeros!(::Type{Tv}, I::AbstractVector{Ti}, J::AbstractVector{Ti}, m::Integer, n::Integer,
          klasttouch::Vector{Ti}, csrrowptr::Vector{Ti}, csrcolval::Vector{Ti},
-         [csccolptr::Vector{Ti}], [cscrowval::Vector{Ti}, cscnzval::Vector{Tv}]) where {Tv,Ti<:Integer}

Parent of and expert driver for spzeros(I, J) allowing user to provide preallocated storage for intermediate objects. This method is to spzeros what SparseArrays.sparse! is to sparse. See documentation for SparseArrays.sparse! for details and required buffer lengths.

Julia 1.10

This method requires Julia version 1.10 or later.

source
SparseArrays.spzeros!(::Type{Tv}, I, J, [m, n]) -> SparseMatrixCSC{Tv}

Variant of spzeros! that re-uses the input vectors I and J for the final matrix storage. After construction the input vectors will alias the matrix buffers; S.colptr === I and S.rowval === J holds, and they will be resize!d as necessary.

Note that some work buffers will still be allocated. Specifically, this method is a convenience wrapper around spzeros!(Tv, I, J, m, n, klasttouch, csrrowptr, csrcolval, csccolptr, cscrowval) where this method allocates klasttouch, csrrowptr, and csrcolval of appropriate size, but reuses I and J for csccolptr and cscrowval.

Arguments m and n default to maximum(I) and maximum(J).

Julia 1.10

This method requires Julia version 1.10 or later.

source
SparseArrays.spdiagmFunction
spdiagm(kv::Pair{<:Integer,<:AbstractVector}...)
+         [csccolptr::Vector{Ti}], [cscrowval::Vector{Ti}, cscnzval::Vector{Tv}]) where {Tv,Ti<:Integer}

Parent of and expert driver for spzeros(I, J) allowing user to provide preallocated storage for intermediate objects. This method is to spzeros what SparseArrays.sparse! is to sparse. See documentation for SparseArrays.sparse! for details and required buffer lengths.

Julia 1.10

This method requires Julia version 1.10 or later.

source
SparseArrays.spzeros!(::Type{Tv}, I, J, [m, n]) -> SparseMatrixCSC{Tv}

Variant of spzeros! that re-uses the input vectors I and J for the final matrix storage. After construction the input vectors will alias the matrix buffers; S.colptr === I and S.rowval === J holds, and they will be resize!d as necessary.

Note that some work buffers will still be allocated. Specifically, this method is a convenience wrapper around spzeros!(Tv, I, J, m, n, klasttouch, csrrowptr, csrcolval, csccolptr, cscrowval) where this method allocates klasttouch, csrrowptr, and csrcolval of appropriate size, but reuses I and J for csccolptr and cscrowval.

Arguments m and n default to maximum(I) and maximum(J).

Julia 1.10

This method requires Julia version 1.10 or later.

source
SparseArrays.spdiagmFunction
spdiagm(kv::Pair{<:Integer,<:AbstractVector}...)
 spdiagm(m::Integer, n::Integer, kv::Pair{<:Integer,<:AbstractVector}...)

Construct a sparse diagonal matrix from Pairs of vectors and diagonals. Each vector kv.second will be placed on the kv.first diagonal. By default, the matrix is square and its size is inferred from kv, but a non-square size m×n (padded with zeros as needed) can be specified by passing m,n as the first arguments.

Examples

julia> spdiagm(-1 => [1,2,3,4], 1 => [4,3,2,1])
 5×5 SparseMatrixCSC{Int64, Int64} with 8 stored entries:
  ⋅  4  ⋅  ⋅  ⋅
  1  ⋅  3  ⋅  ⋅
  ⋅  2  ⋅  2  ⋅
  ⋅  ⋅  3  ⋅  1
- ⋅  ⋅  ⋅  4  ⋅
source
spdiagm(v::AbstractVector)
+ ⋅  ⋅  ⋅  4  ⋅
source
spdiagm(v::AbstractVector)
 spdiagm(m::Integer, n::Integer, v::AbstractVector)

Construct a sparse matrix with elements of the vector as diagonal elements. By default (no given m and n), the matrix is square and its size is given by length(v), but a non-square size m×n can be specified by passing m and n as the first arguments.

Julia 1.6

These functions require at least Julia 1.6.

Examples

julia> spdiagm([1,2,3])
 3×3 SparseMatrixCSC{Int64, Int64} with 3 stored entries:
  1  ⋅  ⋅
@@ -160,13 +160,13 @@
 3×3 SparseMatrixCSC{Int64, Int64} with 2 stored entries:
  1  ⋅  ⋅
  ⋅  ⋅  ⋅
- ⋅  ⋅  3
source
SparseArrays.sparse_hcatFunction
sparse_hcat(A...)

Concatenate along dimension 2. Return a SparseMatrixCSC object.

Julia 1.8

This method was added in Julia 1.8. It mimics previous concatenation behavior, where the concatenation with specialized "sparse" matrix types from LinearAlgebra.jl automatically yielded sparse output even in the absence of any SparseArray argument.

source
SparseArrays.sparse_vcatFunction
sparse_vcat(A...)

Concatenate along dimension 1. Return a SparseMatrixCSC object.

Julia 1.8

This method was added in Julia 1.8. It mimics previous concatenation behavior, where the concatenation with specialized "sparse" matrix types from LinearAlgebra.jl automatically yielded sparse output even in the absence of any SparseArray argument.

source
SparseArrays.sparse_hvcatFunction
sparse_hvcat(rows::Tuple{Vararg{Int}}, values...)

Sparse horizontal and vertical concatenation in one call. This function is called for block matrix syntax. The first argument specifies the number of arguments to concatenate in each block row.

Julia 1.8

This method was added in Julia 1.8. It mimics previous concatenation behavior, where the concatenation with specialized "sparse" matrix types from LinearAlgebra.jl automatically yielded sparse output even in the absence of any SparseArray argument.
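A small block-concatenation sketch (Julia 1.8 or later; the blocks are invented):

using LinearAlgebra, SparseArrays

A = sparse(1.0I, 2, 2); B = spzeros(2, 3); C = spzeros(2, 5)
S = sparse_hvcat((2, 1), A, B, C)   # two blocks in the first block row, one in the second
size(S)                             # (4, 5), the sparse analogue of [A B; C]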

source
SparseArrays.blockdiagFunction
blockdiag(A...)

Concatenate matrices block-diagonally. Currently only implemented for sparse matrices.

Examples

julia> blockdiag(sparse(2I, 3, 3), sparse(4I, 2, 2))
+ ⋅  ⋅  3
source
SparseArrays.sparse_hcatFunction
sparse_hcat(A...)

Concatenate along dimension 2. Return a SparseMatrixCSC object.

Julia 1.8

This method was added in Julia 1.8. It mimics previous concatenation behavior, where the concatenation with specialized "sparse" matrix types from LinearAlgebra.jl automatically yielded sparse output even in the absence of any SparseArray argument.

source
SparseArrays.sparse_vcatFunction
sparse_vcat(A...)

Concatenate along dimension 1. Return a SparseMatrixCSC object.

Julia 1.8

This method was added in Julia 1.8. It mimics previous concatenation behavior, where the concatenation with specialized "sparse" matrix types from LinearAlgebra.jl automatically yielded sparse output even in the absence of any SparseArray argument.

source
SparseArrays.sparse_hvcatFunction
sparse_hvcat(rows::Tuple{Vararg{Int}}, values...)

Sparse horizontal and vertical concatenation in one call. This function is called for block matrix syntax. The first argument specifies the number of arguments to concatenate in each block row.

Julia 1.8

This method was added in Julia 1.8. It mimics previous concatenation behavior, where the concatenation with specialized "sparse" matrix types from LinearAlgebra.jl automatically yielded sparse output even in the absence of any SparseArray argument.

source
SparseArrays.blockdiagFunction
blockdiag(A...)

Concatenate matrices block-diagonally. Currently only implemented for sparse matrices.

Examples

julia> blockdiag(sparse(2I, 3, 3), sparse(4I, 2, 2))
 5×5 SparseMatrixCSC{Int64, Int64} with 5 stored entries:
  2  ⋅  ⋅  ⋅  ⋅
  ⋅  2  ⋅  ⋅  ⋅
  ⋅  ⋅  2  ⋅  ⋅
  ⋅  ⋅  ⋅  4  ⋅
- ⋅  ⋅  ⋅  ⋅  4
source
SparseArrays.sprandFunction
sprand([rng],[T::Type],m,[n],p::AbstractFloat)
 sprand([rng],m,[n],p::AbstractFloat,[rfn=rand])

Create a random length m sparse vector or m by n sparse matrix, in which the probability of any element being nonzero is independently given by p (and hence the mean density of nonzeros is also exactly p). The optional rng argument specifies a random number generator, see Random Numbers. The optional T argument specifies the element type, which defaults to Float64.

By default, nonzero values are sampled from a uniform distribution using the rand function, i.e. by rand(T), or rand(rng, T) if rng is supplied; for the default T=Float64, this corresponds to nonzero values sampled uniformly in [0,1).

You can sample nonzero values from a different distribution by passing a custom rfn function instead of rand. This should be a function rfn(k) that returns an array of k random numbers sampled from the desired distribution; alternatively, if rng is supplied, it should instead be a function rfn(rng, k).
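For instance (a hedged sketch, with an invented density and distribution), the stored values could be drawn from an exponential-style distribution by supplying a custom rfn:

using SparseArrays, Random

A = sprand(5, 5, 0.3, k -> -log.(rand(k)))   # nonzeros sampled as -log of uniforms
# when an rng is passed, rfn must accept (rng, k)
B = sprand(MersenneTwister(1234), 5, 5, 0.3, (rng, k) -> -log.(rand(rng, k)))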

Examples

julia> sprand(Bool, 2, 2, 0.5)
 2×2 SparseMatrixCSC{Bool, Int64} with 2 stored entries:
  1  1
@@ -175,10 +175,10 @@
 julia> sprand(Float64, 3, 0.75)
 3-element SparseVector{Float64, Int64} with 2 stored entries:
   [1]  =  0.795547
-  [2]  =  0.49425
source
SparseArrays.sprandnFunction
sprandn([rng][,Type],m[,n],p::AbstractFloat)

Create a random sparse vector of length m or sparse matrix of size m by n with the specified (independent) probability p of any entry being nonzero, where nonzero values are sampled from the normal distribution. The optional rng argument specifies a random number generator, see Random Numbers.

Julia 1.1

Specifying the output element type Type requires at least Julia 1.1.

Examples

julia> sprandn(2, 2, 0.75)
+  [2]  =  0.49425
source
SparseArrays.sprandnFunction
sprandn([rng][,Type],m[,n],p::AbstractFloat)

Create a random sparse vector of length m or sparse matrix of size m by n with the specified (independent) probability p of any entry being nonzero, where nonzero values are sampled from the normal distribution. The optional rng argument specifies a random number generator, see Random Numbers.

Julia 1.1

Specifying the output element type Type requires at least Julia 1.1.

Examples

julia> sprandn(2, 2, 0.75)
 2×2 SparseMatrixCSC{Float64, Int64} with 3 stored entries:
  -1.20577     ⋅
-  0.311817  -0.234641
source
SparseArrays.nonzerosFunction
nonzeros(A)

Return a vector of the structural nonzero values in sparse array A. This includes zeros that are explicitly stored in the sparse array. The returned vector points directly to the internal nonzero storage of A, and any modifications to the returned vector will mutate A as well. See rowvals and nzrange.

Examples

julia> A = sparse(2I, 3, 3)
+  0.311817  -0.234641
source
SparseArrays.nonzerosFunction
nonzeros(A)

Return a vector of the structural nonzero values in sparse array A. This includes zeros that are explicitly stored in the sparse array. The returned vector points directly to the internal nonzero storage of A, and any modifications to the returned vector will mutate A as well. See rowvals and nzrange.

Examples

julia> A = sparse(2I, 3, 3)
 3×3 SparseMatrixCSC{Int64, Int64} with 3 stored entries:
  2  ⋅  ⋅
  ⋅  2  ⋅
@@ -188,7 +188,7 @@
 3-element Vector{Int64}:
  2
  2
- 2
source
SparseArrays.rowvalsFunction
rowvals(A::AbstractSparseMatrixCSC)

Return a vector of the row indices of A. Any modifications to the returned vector will mutate A as well. Providing access to how the row indices are stored internally can be useful in conjunction with iterating over structural nonzero values. See also nonzeros and nzrange.

Examples

julia> A = sparse(2I, 3, 3)
+ 2
source
SparseArrays.rowvalsFunction
rowvals(A::AbstractSparseMatrixCSC)

Return a vector of the row indices of A. Any modifications to the returned vector will mutate A as well. Providing access to how the row indices are stored internally can be useful in conjunction with iterating over structural nonzero values. See also nonzeros and nzrange.

Examples

julia> A = sparse(2I, 3, 3)
 3×3 SparseMatrixCSC{Int64, Int64} with 3 stored entries:
  2  ⋅  ⋅
  ⋅  2  ⋅
@@ -198,7 +198,7 @@
 3-element Vector{Int64}:
  1
  2
- 3
source
SparseArrays.nzrangeFunction
nzrange(A::AbstractSparseMatrixCSC, col::Integer)

Return the range of indices to the structural nonzero values of a sparse matrix column. In conjunction with nonzeros and rowvals, this allows for convenient iteration over a sparse matrix:

A = sparse(I,J,V)
+ 3
source
SparseArrays.nzrangeFunction
nzrange(A::AbstractSparseMatrixCSC, col::Integer)

Return the range of indices to the structural nonzero values of a sparse matrix column. In conjunction with nonzeros and rowvals, this allows for convenient iteration over a sparse matrix:

A = sparse(I,J,V)
 rows = rowvals(A)
 vals = nonzeros(A)
 m, n = size(A)
@@ -208,7 +208,7 @@
       val = vals[i]
       # perform sparse wizardry...
    end
-end
Warning

Adding nonzero elements to or removing them from the matrix may invalidate the nzrange; one should not mutate the matrix while iterating over it.

source
nzrange(x::SparseVectorUnion, col)

Give the range of indices to the structural nonzero values of a sparse vector. The column index col is ignored (assumed to be 1).

source
SparseArrays.droptol!Function
droptol!(A::AbstractSparseMatrixCSC, tol)

Removes stored values from A whose absolute value is less than or equal to tol.
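A small sketch (the values are invented) of what gets dropped:

using SparseArrays

A = sparse([1, 2, 3], [1, 2, 3], [1e-12, 0.5, -1e-9])
droptol!(A, 1e-10)   # the 1e-12 entry is dropped; 0.5 and -1e-9 stay stored
nnz(A)               # 2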

source
droptol!(x::AbstractCompressedVector, tol)

Removes stored values from x whose absolute value is less than or equal to tol.

source
SparseArrays.dropzeros!Function
dropzeros!(x::AbstractCompressedVector)

Removes stored numerical zeros from x.

For an out-of-place version, see dropzeros. For algorithmic information, see fkeep!.

source
SparseArrays.dropzerosFunction
dropzeros(A::AbstractSparseMatrixCSC)

Generates a copy of A and removes stored numerical zeros from that copy.

For an in-place version and algorithmic information, see dropzeros!.

Examples

julia> A = sparse([1, 2, 3], [1, 2, 3], [1.0, 0.0, 1.0])
+end
Warning

Adding nonzero elements to or removing them from the matrix may invalidate the nzrange; one should not mutate the matrix while iterating over it.

source
nzrange(x::SparseVectorUnion, col)

Give the range of indices to the structural nonzero values of a sparse vector. The column index col is ignored (assumed to be 1).

source
SparseArrays.droptol!Function
droptol!(A::AbstractSparseMatrixCSC, tol)

Removes stored values from A whose absolute value is less than or equal to tol.

source
droptol!(x::AbstractCompressedVector, tol)

Removes stored values from x whose absolute value is less than or equal to tol.

source
SparseArrays.dropzeros!Function
dropzeros!(x::AbstractCompressedVector)

Removes stored numerical zeros from x.

For an out-of-place version, see dropzeros. For algorithmic information, see fkeep!.

source
SparseArrays.dropzerosFunction
dropzeros(A::AbstractSparseMatrixCSC)

Generates a copy of A and removes stored numerical zeros from that copy.

For an in-place version and algorithmic information, see dropzeros!.

Examples

julia> A = sparse([1, 2, 3], [1, 2, 3], [1.0, 0.0, 1.0])
 3×3 SparseMatrixCSC{Float64, Int64} with 3 stored entries:
  1.0   ⋅    ⋅
   ⋅   0.0   ⋅
@@ -218,7 +218,7 @@
 3×3 SparseMatrixCSC{Float64, Int64} with 2 stored entries:
  1.0   ⋅    ⋅
   ⋅    ⋅    ⋅
-  ⋅    ⋅   1.0
source
dropzeros(x::AbstractCompressedVector)

Generates a copy of x and removes numerical zeros from that copy.

For an in-place version and algorithmic information, see dropzeros!.

Examples

julia> A = sparsevec([1, 2, 3], [1.0, 0.0, 1.0])
+  ⋅    ⋅   1.0
source
dropzeros(x::AbstractCompressedVector)

Generates a copy of x and removes numerical zeros from that copy.

For an in-place version and algorithmic information, see dropzeros!.

Examples

julia> A = sparsevec([1, 2, 3], [1.0, 0.0, 1.0])
 3-element SparseVector{Float64, Int64} with 3 stored entries:
   [1]  =  1.0
   [2]  =  0.0
@@ -227,7 +227,7 @@
 julia> dropzeros(A)
 3-element SparseVector{Float64, Int64} with 2 stored entries:
   [1]  =  1.0
-  [3]  =  1.0
source
SparseArrays.permuteFunction
permute(A::AbstractSparseMatrixCSC{Tv,Ti}, p::AbstractVector{<:Integer},
         q::AbstractVector{<:Integer}) where {Tv,Ti}

Bilaterally permute A, returning PAQ (A[p,q]). Column-permutation q's length must match A's column count (length(q) == size(A, 2)). Row-permutation p's length must match A's row count (length(p) == size(A, 1)).

For expert drivers and additional information, see permute!.

Examples

julia> A = spdiagm(0 => [1, 2, 3, 4], 1 => [5, 6, 7])
 4×4 SparseMatrixCSC{Int64, Int64} with 7 stored entries:
  1  5  ⋅  ⋅
@@ -247,9 +247,9 @@
  ⋅  ⋅  5  1
  ⋅  6  2  ⋅
  7  3  ⋅  ⋅
- 4  ⋅  ⋅  ⋅
source
Base.permute!Method
permute!(X::AbstractSparseMatrixCSC{Tv,Ti}, A::AbstractSparseMatrixCSC{Tv,Ti},
+ 4  ⋅  ⋅  ⋅
source
Base.permute!Method
permute!(X::AbstractSparseMatrixCSC{Tv,Ti}, A::AbstractSparseMatrixCSC{Tv,Ti},
          p::AbstractVector{<:Integer}, q::AbstractVector{<:Integer},
          [C::AbstractSparseMatrixCSC{Tv,Ti}]) where {Tv,Ti}

Bilaterally permute A, storing result PAQ (A[p,q]) in X. Stores intermediate result (AQ)^T (transpose(A[:,q])) in optional argument C if present. Requires that none of X, A, and, if present, C alias each other; to store result PAQ back into A, use the following method lacking X:

permute!(A::AbstractSparseMatrixCSC{Tv,Ti}, p::AbstractVector{<:Integer},
          q::AbstractVector{<:Integer}[, C::AbstractSparseMatrixCSC{Tv,Ti},
-         [workcolptr::Vector{Ti}]]) where {Tv,Ti}

X's dimensions must match those of A (size(X, 1) == size(A, 1) and size(X, 2) == size(A, 2)), and X must have enough storage to accommodate all allocated entries in A (length(rowvals(X)) >= nnz(A) and length(nonzeros(X)) >= nnz(A)). Column-permutation q's length must match A's column count (length(q) == size(A, 2)). Row-permutation p's length must match A's row count (length(p) == size(A, 1)).

C's dimensions must match those of transpose(A) (size(C, 1) == size(A, 2) and size(C, 2) == size(A, 1)), and C must have enough storage to accommodate all allocated entries in A (length(rowvals(C)) >= nnz(A) and length(nonzeros(C)) >= nnz(A)).

For additional (algorithmic) information, and for versions of these methods that forgo argument checking, see (unexported) parent methods unchecked_noalias_permute! and unchecked_aliasing_permute!.

See also permute.

source
SparseArrays.halfperm!Function
halfperm!(X::AbstractSparseMatrixCSC{Tv,Ti}, A::AbstractSparseMatrixCSC{TvA,Ti},
-          q::AbstractVector{<:Integer}, f::Function = identity) where {Tv,TvA,Ti}

Column-permute and transpose A, simultaneously applying f to each entry of A, storing the result (f(A)Q)^T (map(f, transpose(A[:,q]))) in X.

Element type Tv of X must match f(::TvA), where TvA is the element type of A. X's dimensions must match those of transpose(A) (size(X, 1) == size(A, 2) and size(X, 2) == size(A, 1)), and X must have enough storage to accommodate all allocated entries in A (length(rowvals(X)) >= nnz(A) and length(nonzeros(X)) >= nnz(A)). Column-permutation q's length must match A's column count (length(q) == size(A, 2)).

This method is the parent of several methods performing transposition and permutation operations on SparseMatrixCSCs. As this method performs no argument checking, prefer the safer child methods ([c]transpose[!], permute[!]) to direct use.

This method implements the HALFPERM algorithm described in F. Gustavson, "Two fast algorithms for sparse matrices: multiplication and permuted transposition," ACM TOMS 4(3), 250-269 (1978). The algorithm runs in O(size(A, 1), size(A, 2), nnz(A)) time and requires no space beyond that passed in.

source
SparseArrays.ftranspose!Function
ftranspose!(X::AbstractSparseMatrixCSC{Tv,Ti}, A::AbstractSparseMatrixCSC{Tv,Ti}, f::Function) where {Tv,Ti}

Transpose A and store it in X while applying the function f to the non-zero elements. Does not remove the zeros created by f. size(X) must be equal to size(transpose(A)). No additional memory is allocated other than resizing the rowval and nzval of X, if needed.

See halfperm!

source

Noteworthy External Sparse Packages

Several other Julia packages provide sparse matrix implementations that should be mentioned:

  1. SuiteSparseGraphBLAS.jl is a wrapper over the fast, multithreaded SuiteSparse:GraphBLAS C library. On CPU this is typically the fastest option, often significantly outperforming MKLSparse.

  2. CUDA.jl exposes the CUSPARSE library for GPU sparse matrix operations.

  3. SparseMatricesCSR.jl provides a Julia native implementation of the Compressed Sparse Rows (CSR) format.

  4. MKLSparse.jl accelerates SparseArrays sparse-dense matrix operations using Intel's MKL library.

  5. SparseArrayKit.jl is available for multidimensional sparse arrays.

  6. LuxurySparse.jl provides static sparse array formats, as well as a coordinate format.

  7. ExtendableSparse.jl enables fast insertion into sparse matrices using a lazy approach to new stored indices.

  8. Finch.jl supports extensive multidimensional sparse array formats and operations through a mini tensor language and compiler, all in native Julia. It supports COO, CSF, CSR, CSC, and more, as well as operations like broadcast and reduce, and custom operations.

+ [workcolptr::Vector{Ti}]]) where {Tv,Ti}

X's dimensions must match those of A (size(X, 1) == size(A, 1) and size(X, 2) == size(A, 2)), and X must have enough storage to accommodate all allocated entries in A (length(rowvals(X)) >= nnz(A) and length(nonzeros(X)) >= nnz(A)). Column-permutation q's length must match A's column count (length(q) == size(A, 2)). Row-permutation p's length must match A's row count (length(p) == size(A, 1)).

C's dimensions must match those of transpose(A) (size(C, 1) == size(A, 2) and size(C, 2) == size(A, 1)), and C must have enough storage to accommodate all allocated entries in A (length(rowvals(C)) >= nnz(A) and length(nonzeros(C)) >= nnz(A)).

For additional (algorithmic) information, and for versions of these methods that forgo argument checking, see (unexported) parent methods unchecked_noalias_permute! and unchecked_aliasing_permute!.

See also permute.

source
SparseArrays.halfperm!Function
halfperm!(X::AbstractSparseMatrixCSC{Tv,Ti}, A::AbstractSparseMatrixCSC{TvA,Ti},
+          q::AbstractVector{<:Integer}, f::Function = identity) where {Tv,TvA,Ti}

Column-permute and transpose A, simultaneously applying f to each entry of A, storing the result (f(A)Q)^T (map(f, transpose(A[:,q]))) in X.

Element type Tv of X must match f(::TvA), where TvA is the element type of A. X's dimensions must match those of transpose(A) (size(X, 1) == size(A, 2) and size(X, 2) == size(A, 1)), and X must have enough storage to accommodate all allocated entries in A (length(rowvals(X)) >= nnz(A) and length(nonzeros(X)) >= nnz(A)). Column-permutation q's length must match A's column count (length(q) == size(A, 2)).

This method is the parent of several methods performing transposition and permutation operations on SparseMatrixCSCs. As this method performs no argument checking, prefer the safer child methods ([c]transpose[!], permute[!]) to direct use.

This method implements the HALFPERM algorithm described in F. Gustavson, "Two fast algorithms for sparse matrices: multiplication and permuted transposition," ACM TOMS 4(3), 250-269 (1978). The algorithm runs in O(size(A, 1), size(A, 2), nnz(A)) time and requires no space beyond that passed in.

source
SparseArrays.ftranspose!Function
ftranspose!(X::AbstractSparseMatrixCSC{Tv,Ti}, A::AbstractSparseMatrixCSC{Tv,Ti}, f::Function) where {Tv,Ti}

Transpose A and store it in X while applying the function f to the non-zero elements. Does not remove the zeros created by f. size(X) must be equal to size(transpose(A)). No additional memory is allocated other than resizing the rowval and nzval of X, if needed.

See halfperm!

source

Noteworthy External Sparse Packages

Several other Julia packages provide sparse matrix implementations that should be mentioned:

  1. SuiteSparseGraphBLAS.jl is a wrapper over the fast, multithreaded SuiteSparse:GraphBLAS C library. On CPU this is typically the fastest option, often significantly outperforming MKLSparse.

  2. CUDA.jl exposes the CUSPARSE library for GPU sparse matrix operations.

  3. SparseMatricesCSR.jl provides a Julia native implementation of the Compressed Sparse Rows (CSR) format.

  4. MKLSparse.jl accelerates SparseArrays sparse-dense matrix operations using Intel's MKL library.

  5. SparseArrayKit.jl is available for multidimensional sparse arrays.

  6. LuxurySparse.jl provides static sparse array formats, as well as a coordinate format.

  7. ExtendableSparse.jl enables fast insertion into sparse matrices using a lazy approach to new stored indices.

  8. Finch.jl supports extensive multidimensional sparse array formats and operations through a mini tensor language and compiler, all in native Julia. It supports COO, CSF, CSR, CSC, and more, as well as operations like broadcast and reduce, and custom operations.

diff --git a/dev/solvers/index.html b/dev/solvers/index.html index f2b7aaf9..b98cb34c 100644 --- a/dev/solvers/index.html +++ b/dev/solvers/index.html @@ -56,7 +56,7 @@ 0.707107 -0.408248 1.1547 julia> L * L' ≈ A -true
Note

This method uses the CHOLMOD[ACM887][DavisHager2009] library from SuiteSparse. CHOLMOD only supports real or complex types in single or double precision. Input matrices not of those element types will be converted to these types as appropriate.

Many other functions from CHOLMOD are wrapped but not exported from the Base.SparseArrays.CHOLMOD module.

source
LinearAlgebra.cholesky!Function
cholesky!(F::CHOLMOD.Factor, A::SparseMatrixCSC; shift = 0.0, check = true) -> CHOLMOD.Factor

Compute the Cholesky ($LL'$) factorization of A, reusing the symbolic factorization F. A must be a SparseMatrixCSC or a Symmetric/Hermitian view of a SparseMatrixCSC. Note that even if A doesn't have the type tag, it must still be symmetric or Hermitian.

See also cholesky.

Note

This method uses the CHOLMOD library from SuiteSparse, which only supports real or complex types in single or double precision. Input matrices not of those element types will be converted to these types as appropriate.

source
LinearAlgebra.ldltFunction
ldlt(A::SparseMatrixCSC; shift = 0.0, check = true, perm=nothing) -> CHOLMOD.Factor

Compute the $LDL'$ factorization of a sparse matrix A. A must be a SparseMatrixCSC or a Symmetric/Hermitian view of a SparseMatrixCSC. Note that even if A doesn't have the type tag, it must still be symmetric or Hermitian. A fill-reducing permutation is used. F = ldlt(A) is most frequently used to solve systems of equations A*x = b with F\b. The returned factorization object F also supports the methods diag, det, logdet, and inv. You can extract individual factors from F using F.L. However, since pivoting is on by default, the factorization is internally represented as A == P'*L*D*L'*P with a permutation matrix P; using just L without accounting for P will give incorrect answers. To include the effects of permutation, it is typically preferable to extract "combined" factors like PtL = F.PtL (the equivalent of P'*L) and LtP = F.UP (the equivalent of L'*P). The complete list of supported factors is :L, :PtL, :D, :UP, :U, :LD, :DU, :PtLD, :DUP.

When check = true, an error is thrown if the decomposition fails. When check = false, responsibility for checking the decomposition's validity (via issuccess) lies with the user.

Setting the optional shift keyword argument computes the factorization of A+shift*I instead of A. If the perm argument is provided, it should be a permutation of 1:size(A,1) giving the ordering to use (instead of CHOLMOD's default AMD ordering).
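A minimal usage sketch (the matrix and right-hand side are invented):

using LinearAlgebra, SparseArrays

A = sparse([2.0 1.0 0.0; 1.0 2.0 1.0; 0.0 1.0 2.0])   # symmetric tridiagonal
b = [1.0, 2.0, 3.0]
F = ldlt(Symmetric(A))   # factor once ...
x = F \ b                # ... then solve A*x = b
A * x ≈ b                # true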

Note

This method uses the CHOLMOD[ACM887][DavisHager2009] library from SuiteSparse. CHOLMOD only supports real or complex types in single or double precision. Input matrices not of those element types will be converted to these types as appropriate.

Many other functions from CHOLMOD are wrapped but not exported from the Base.SparseArrays.CHOLMOD module.

source
LinearAlgebra.qrFunction
qr(A::SparseMatrixCSC; tol=_default_tol(A), ordering=ORDERING_DEFAULT) -> QRSparse

Compute the QR factorization of a sparse matrix A. Fill-reducing row and column permutations are used such that F.R = F.Q'*A[F.prow,F.pcol]. The main application of this type is to solve least squares or underdetermined problems with \. The function calls the C library SPQR[ACM933].

Note

qr(A::SparseMatrixCSC) uses the SPQR library that is part of SuiteSparse. As this library only supports sparse matrices with Float64 or ComplexF64 elements, as of Julia v1.4 qr converts A into a copy that is of type SparseMatrixCSC{Float64} or SparseMatrixCSC{ComplexF64} as appropriate.

Examples

julia> A = sparse([1,2,3,4], [1,1,2,2], [1.0,1.0,1.0,1.0])
+true
Note

This method uses the CHOLMOD[ACM887][DavisHager2009] library from SuiteSparse. CHOLMOD only supports real or complex types in single or double precision. Input matrices not of those element types will be converted to these types as appropriate.

Many other functions from CHOLMOD are wrapped but not exported from the Base.SparseArrays.CHOLMOD module.

source
LinearAlgebra.cholesky!Function
cholesky!(F::CHOLMOD.Factor, A::SparseMatrixCSC; shift = 0.0, check = true) -> CHOLMOD.Factor

Compute the Cholesky ($LL'$) factorization of A, reusing the symbolic factorization F. A must be a SparseMatrixCSC or a Symmetric/Hermitian view of a SparseMatrixCSC. Note that even if A doesn't have the type tag, it must still be symmetric or Hermitian.

See also cholesky.

Note

This method uses the CHOLMOD library from SuiteSparse, which only supports real or complex types in single or double precision. Input matrices not of those element types will be converted to these types as appropriate.

source
LinearAlgebra.ldltFunction
ldlt(A::SparseMatrixCSC; shift = 0.0, check = true, perm=nothing) -> CHOLMOD.Factor

Compute the $LDL'$ factorization of a sparse matrix A. A must be a SparseMatrixCSC or a Symmetric/Hermitian view of a SparseMatrixCSC. Note that even if A doesn't have the type tag, it must still be symmetric or Hermitian. A fill-reducing permutation is used. F = ldlt(A) is most frequently used to solve systems of equations A*x = b with F\b. The returned factorization object F also supports the methods diag, det, logdet, and inv. You can extract individual factors from F using F.L. However, since pivoting is on by default, the factorization is internally represented as A == P'*L*D*L'*P with a permutation matrix P; using just L without accounting for P will give incorrect answers. To include the effects of permutation, it is typically preferable to extract "combined" factors like PtL = F.PtL (the equivalent of P'*L) and LtP = F.UP (the equivalent of L'*P). The complete list of supported factors is :L, :PtL, :D, :UP, :U, :LD, :DU, :PtLD, :DUP.

When check = true, an error is thrown if the decomposition fails. When check = false, responsibility for checking the decomposition's validity (via issuccess) lies with the user.

Setting the optional shift keyword argument computes the factorization of A+shift*I instead of A. If the perm argument is provided, it should be a permutation of 1:size(A,1) giving the ordering to use (instead of CHOLMOD's default AMD ordering).

Note

This method uses the CHOLMOD[ACM887][DavisHager2009] library from SuiteSparse. CHOLMOD only supports real or complex types in single or double precision. Input matrices not of those element types will be converted to these types as appropriate.

Many other functions from CHOLMOD are wrapped but not exported from the Base.SparseArrays.CHOLMOD module.

source
LinearAlgebra.qrFunction
qr(A::SparseMatrixCSC; tol=_default_tol(A), ordering=ORDERING_DEFAULT) -> QRSparse

Compute the QR factorization of a sparse matrix A. Fill-reducing row and column permutations are used such that F.R = F.Q'*A[F.prow,F.pcol]. The main application of this type is to solve least squares or underdetermined problems with \. The function calls the C library SPQR[ACM933].

Note

qr(A::SparseMatrixCSC) uses the SPQR library that is part of SuiteSparse. As this library only supports sparse matrices with Float64 or ComplexF64 elements, as of Julia v1.4 qr converts A into a copy that is of type SparseMatrixCSC{Float64} or SparseMatrixCSC{ComplexF64} as appropriate.

Examples

julia> A = sparse([1,2,3,4], [1,1,2,2], [1.0,1.0,1.0,1.0])
 4×2 SparseMatrixCSC{Float64, Int64} with 4 stored entries:
  1.0   ⋅
  1.0   ⋅
@@ -80,9 +80,9 @@
 Column permutation:
 2-element Vector{Int64}:
  1
- 2
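A short least-squares sketch built on the same matrix as the example above (the right-hand side is invented):

using LinearAlgebra, SparseArrays

A = sparse([1, 2, 3, 4], [1, 1, 2, 2], [1.0, 1.0, 1.0, 1.0])
b = [1.0, 2.0, 3.0, 4.0]
x = qr(A) \ b   # least-squares solution, minimizes norm(A*x - b)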
source
LinearAlgebra.luFunction
lu(A::AbstractSparseMatrixCSC; check = true, q = nothing, control = get_umfpack_control()) -> F::UmfpackLU

Compute the LU factorization of a sparse matrix A.

For sparse A with real or complex element type, the return type of F is UmfpackLU{Tv, Ti}, with Tv = Float64 or ComplexF64 respectively and Ti is an integer type (Int32 or Int64).

When check = true, an error is thrown if the decomposition fails. When check = false, responsibility for checking the decomposition's validity (via issuccess) lies with the user.

The permutation q can either be a permutation vector or nothing. If no permutation vector is provided or q is nothing, UMFPACK's default is used. If the permutation is not zero-based, a zero-based copy is made.

The control vector defaults to the Julia SparseArrays package's default configuration for UMFPACK (NB: this is modified from the UMFPACK defaults to disable iterative refinement), but can be changed by passing a vector of length UMFPACK_CONTROL; see the UMFPACK manual for possible configurations. For example, to reenable iterative refinement:

umfpack_control = SparseArrays.UMFPACK.get_umfpack_control(Float64, Int64) # read Julia default configuration for a Float64 sparse matrix
+ 2
source
LinearAlgebra.luFunction
lu(A::AbstractSparseMatrixCSC; check = true, q = nothing, control = get_umfpack_control()) -> F::UmfpackLU

Compute the LU factorization of a sparse matrix A.

For sparse A with real or complex element type, the return type of F is UmfpackLU{Tv, Ti}, with Tv = Float64 or ComplexF64 respectively and Ti is an integer type (Int32 or Int64).

When check = true, an error is thrown if the decomposition fails. When check = false, responsibility for checking the decomposition's validity (via issuccess) lies with the user.

The permutation q can either be a permutation vector or nothing. If no permutation vector is provided or q is nothing, UMFPACK's default is used. If the permutation is not zero-based, a zero-based copy is made.

The control vector defaults to the Julia SparseArrays package's default configuration for UMFPACK (NB: this is modified from the UMFPACK defaults to disable iterative refinement), but can be changed by passing a vector of length UMFPACK_CONTROL; see the UMFPACK manual for possible configurations. For example, to reenable iterative refinement:

umfpack_control = SparseArrays.UMFPACK.get_umfpack_control(Float64, Int64) # read Julia default configuration for a Float64 sparse matrix
 SparseArrays.UMFPACK.show_umf_ctrl(umfpack_control) # optional - display values
 umfpack_control[SparseArrays.UMFPACK.JL_UMFPACK_IRSTEP] = 2.0 # reenable iterative refinement (2 is UMFPACK default max iterative refinement steps)
 
 Alu = lu(A; control = umfpack_control)
-x = Alu \ b   # solve Ax = b, including UMFPACK iterative refinement

The individual components of the factorization F can be accessed by indexing:

Component | Description
L | L (lower triangular) part of LU
U | U (upper triangular) part of LU
p | right permutation Vector
q | left permutation Vector
Rs | Vector of scaling factors
: | (L,U,p,q,Rs) components

The relation between F and A is

F.L*F.U == (F.Rs .* A)[F.p, F.q]
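A quick check of that relation on an invented matrix (a sketch, not part of the docstring):

using LinearAlgebra, SparseArrays

A = sparse([1.0 2.0; 3.0 4.0])
F = lu(A)
F.L * F.U ≈ (F.Rs .* A)[F.p, F.q]   # true (≈ allows for floating-point rounding)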

F further supports the following functions:

See also lu!

Note

lu(A::AbstractSparseMatrixCSC) uses the UMFPACK[ACM832] library that is part of SuiteSparse. As this library only supports sparse matrices with Float64 or ComplexF64 elements, lu converts A into a copy that is of type SparseMatrixCSC{Float64} or SparseMatrixCSC{ComplexF64} as appropriate.

source
LinearAlgebra.lowrankupdateFunction
lowrankupdate(F::CHOLMOD.Factor, C::AbstractArray) -> FF::CHOLMOD.Factor

Get an LDLt Factorization of A + C*C' given an LDLt or LLt factorization F of A.

The returned factor is always an LDLt factorization.

See also lowrankupdate!, lowrankdowndate, lowrankdowndate!.
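A rough rank-one update sketch (the matrix and vector are invented; cholesky succeeds here because the example matrix is positive definite):

using LinearAlgebra, SparseArrays

A = sparse([4.0 1.0; 1.0 4.0])
F = cholesky(A)            # LLt factorization of A
c = [1.0, 0.0]
FF = lowrankupdate(F, c)   # factorization of A + c*c', per the description above
FF \ [1.0, 2.0] ≈ (A + c * c') \ [1.0, 2.0]   # should hold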

source
LinearAlgebra.lowrankupdate!Function
lowrankupdate!(F::CHOLMOD.Factor, C::AbstractArray)

Update an LDLt or LLt Factorization F of A to a factorization of A + C*C'.

LLt factorizations are converted to LDLt.

See also lowrankupdate, lowrankdowndate, lowrankdowndate!.

source
LinearAlgebra.lowrankdowndateFunction
lowrankdowndate(F::CHOLMOD.Factor, C::AbstractArray) -> FF::CHOLMOD.Factor

Get an LDLt Factorization of A - C*C' given an LDLt or LLt factorization F of A.

The returned factor is always an LDLt factorization.

See also lowrankdowndate!, lowrankupdate, lowrankupdate!.

source
LinearAlgebra.lowrankdowndate!Function
lowrankdowndate!(F::CHOLMOD.Factor, C::AbstractArray)

Update an LDLt or LLt Factorization F of A to a factorization of A - C*C'.

LLt factorizations are converted to LDLt.

See also lowrankdowndate, lowrankupdate, lowrankupdate!.

source
SparseArrays.CHOLMOD.lowrankupdowndate!Function
lowrankupdowndate!(F::CHOLMOD.Factor, C::Sparse, update::Cint)

Update an LDLt or LLt Factorization F of A to a factorization of A ± C*C'.

If a sparsity-preserving factorization is used, i.e. L*L' == P*A*P', then the new factor will be L*L' == P*A*P' + C'*C.

update: Cint(1) for A + CC', Cint(0) for A - CC'

source
+x = Alu \ b # solve Ax = b, including UMFPACK iterative refinement

The individual components of the factorization F can be accessed by indexing:

Component | Description
L | L (lower triangular) part of LU
U | U (upper triangular) part of LU
p | right permutation Vector
q | left permutation Vector
Rs | Vector of scaling factors
: | (L,U,p,q,Rs) components

The relation between F and A is

F.L*F.U == (F.Rs .* A)[F.p, F.q]

F further supports the following functions:

See also lu!

Note

lu(A::AbstractSparseMatrixCSC) uses the UMFPACK[ACM832] library that is part of SuiteSparse. As this library only supports sparse matrices with Float64 or ComplexF64 elements, lu converts A into a copy that is of type SparseMatrixCSC{Float64} or SparseMatrixCSC{ComplexF64} as appropriate.

source
LinearAlgebra.lowrankupdateFunction
lowrankupdate(F::CHOLMOD.Factor, C::AbstractArray) -> FF::CHOLMOD.Factor

Get an LDLt Factorization of A + C*C' given an LDLt or LLt factorization F of A.

The returned factor is always an LDLt factorization.

See also lowrankupdate!, lowrankdowndate, lowrankdowndate!.

source
LinearAlgebra.lowrankupdate!Function
lowrankupdate!(F::CHOLMOD.Factor, C::AbstractArray)

Update an LDLt or LLt Factorization F of A to a factorization of A + C*C'.

LLt factorizations are converted to LDLt.

See also lowrankupdate, lowrankdowndate, lowrankdowndate!.

source
LinearAlgebra.lowrankdowndateFunction
lowrankdowndate(F::CHOLMOD.Factor, C::AbstractArray) -> FF::CHOLMOD.Factor

Get an LDLt Factorization of A - C*C' given an LDLt or LLt factorization F of A.

The returned factor is always an LDLt factorization.

See also lowrankdowndate!, lowrankupdate, lowrankupdate!.

source
LinearAlgebra.lowrankdowndate!Function
lowrankdowndate!(F::CHOLMOD.Factor, C::AbstractArray)

Update an LDLt or LLt Factorization F of A to a factorization of A - C*C'.

LLt factorizations are converted to LDLt.

See also lowrankdowndate, lowrankupdate, lowrankupdate!.

source
SparseArrays.CHOLMOD.lowrankupdowndate!Function
lowrankupdowndate!(F::CHOLMOD.Factor, C::Sparse, update::Cint)

Update an LDLt or LLt Factorization F of A to a factorization of A ± C*C'.

If a sparsity-preserving factorization is used, i.e. L*L' == P*A*P', then the new factor will be L*L' == P*A*P' + C'*C.

update: Cint(1) for A + CC', Cint(0) for A - CC'

source