Classical Gram-Schmidt with DGKS re-orthogonalization criterion #32
It doesn't need to be reorthogonalized. We are using the modified Gram-Schmidt process. Also, if you want to add classical GS with the DGKS reorthogonalization criterion, there is one in IterativeSolvers.jl: https://github.com/JuliaMath/IterativeSolvers.jl/blob/master/src/orthogonalize.jl#L12
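For reference, a minimal sketch of what classical GS with the DGKS criterion looks like (the function name `cgs_dgks!` and the details are mine, not the IterativeSolvers.jl API; DGKS suggest the threshold 1/√2):

```julia
using LinearAlgebra

# Sketch: orthogonalize u against the (assumed orthonormal) columns of Q
# with classical Gram-Schmidt, repeating the projection whenever the DGKS
# criterion detects heavy cancellation (norm dropped by more than `alpha`).
function cgs_dgks!(u::AbstractVector, Q::AbstractMatrix; alpha=1/√2, maxit=3)
    r = norm(u)
    for _ in 1:maxit
        u .-= Q * (Q' * u)             # one classical GS pass (BLAS-2)
        r_new = norm(u)
        r_new > alpha * r && return u  # little cancellation: accept
        r = r_new                      # heavy cancellation: project again
    end
    @warn "possible loss of orthogonality in cgs_dgks!"
    return u
end
```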
Sorry, I meant here: ExponentialUtilities.jl/src/arnoldi.jl, line 182 at ebb828b. `orthogonalize_and_normalize!` is not used.
It doesn't need to be reorthogonalized. Modified Gram-Schmidt is stable.
Yes, indeed. You are using modified GS; I missed it. The complexity of MGS is O(kn²), and using iterated classical GS (ICGS) can be more efficient. I'm wondering whether the performance of Gram-Schmidt is important in this case?
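For what it's worth, the efficiency argument is less about flop counts (both methods do roughly 2kn² work per vector) and more about BLAS levels: MGS must touch one column at a time, while classical/iterated GS bundles all projections into two matrix-vector products. A rough micro-benchmark sketch (function names are mine; timings will vary by machine):

```julia
using LinearAlgebra, BenchmarkTools

n, k = 2000, 500
Q = Matrix(qr(randn(n, k)).Q)   # an n×k orthonormal basis
u = randn(n)

function mgs_pass!(u, Q)        # MGS-style: sequential dot/axpy (BLAS-1)
    for q in eachcol(Q)
        u .-= dot(q, u) .* q
    end
    return u
end
cgs_pass!(u, Q) = u .-= Q * (Q' * u)   # CGS-style: two gemv calls (BLAS-2)

@btime mgs_pass!(v, $Q) setup=(v = copy($u)) evals=1
@btime cgs_pass!(v, $Q) setup=(v = copy($u)) evals=1
```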
Do a profile on https://benchmarks.sciml.ai/html/MOLPDE/allen_cahn_fdm_wpd.html to see whether it's an important part.
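A sketch of how one could check this in isolation with the Profile stdlib (I'm assuming `expv` is the relevant exported entry point and using a 1-D Laplacian as a stand-in for the FDM operator; adjust to the actual benchmark problem):

```julia
using ExponentialUtilities, SparseArrays, Profile

N = 10_000
A = spdiagm(-1 => ones(N - 1), 0 => fill(-2.0, N), 1 => ones(N - 1))  # 1-D Laplacian stencil
b = randn(N)

expv(1.0, A, b)                 # run once to compile before profiling
Profile.clear()
@profile for _ in 1:100
    expv(1.0, A, b)
end
Profile.print(mincount=50)      # check how many samples land in the Gram-Schmidt loop
```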
Thanks, @ChrisRackauckas. This is a very nice notebook where I can find good examples to benchmark.
Note that all of our benchmarking sources are at https://github.com/SciML/DiffEqBenchmarks.jl
ICGS seems to be more stable than modified GS according to the following benchmark; the Lanczos procedure may have the same issue. But ICGS is a bit slower than MGS. Let's keep this issue open and see if someone complains about the precision problem. The x-axis is the size of the matrix to be orthogonalized. Here is the code for this plot:
using LinearAlgebra, Test, Random

"""
    icgs!(A::AbstractMatrix, j, M=nothing; alpha=0.5, maxit=3)

Iterative classical M-orthogonal Gram-Schmidt orthogonalization.
Orthogonalizes column `j` of `A` against columns `1:j-1` in place.
`M` is a matrix or `nothing`; if not `nothing`, perform M-orthogonalization.
"""
function icgs!(A::AbstractMatrix, j, M=nothing; alpha=0.5, maxit=3)
    u = @view A[:, j]
    Q = @view A[:, 1:j-1]
    Mu = _dot(M, u)
    r_pre = sqrt(abs(dot(u, Mu)))
    converged = false
    for it = 1:maxit
        u .-= Q * (Q' * Mu)       # one classical GS projection pass
        Mu = _dot(M, u)
        r1 = sqrt(abs(dot(u, Mu)))
        if r1 > alpha * r_pre     # little cancellation: done
            converged = true
            break
        end
        r_pre = r1                # heavy cancellation: project again
    end
    converged || @warn "loss of orthogonality @icgs."  # maxit passes were not enough
end

_dot(::Nothing, b) = b
_dot(A::AbstractMatrix, b) = A * b
# Build an ill-conditioned test basis: columns are the Krylov sequence
# v, Hv, H^2*v, ... of a random Hermitian H, which become nearly linearly dependent.
function generate_q(n, seed=2)
    Random.seed!(seed)
    H = randn(ComplexF64, n, n)
    H = H + H'
    v = randn(ComplexF64, n)
    qs = []
    for i = 1:n
        push!(qs, v)
        v = H * v
    end
    hcat(qs...)
end
@testset "icgs" begin
n = 10
q = generate_q(n, 2)
for j = 1:n
icgs!(q, j)
@views q[:,j] ./= norm(q[:,j])
end
@test q' * q ≈ I
@show sum(abs, q' * q - I)
end
# One modified Gram-Schmidt pass: orthogonalize column j against columns
# 1:j-1, one column at a time (sequential BLAS-1 dot/axpy).
function mgs!(V::AbstractMatrix, j)
    y = @view(V[:, j])
    @inbounds for i = 1:j-1
        α = dot(@view(V[:, i]), y)
        axpy!(-α, @view(V[:, i]), y)
    end
    return y
end
@testset "mgs" begin
n = 10
q = generate_q(n, 2)
for j = 1:n
mgs!(q, j)
@views q[:,j] ./= norm(q[:,j])
end
@test q' * q ≈ I
@show sum(abs, q' * q - I)
end
# Total orthogonality error sum(|q'q - I|) for each matrix size in `ns`.
function get_eps(method, seed, ns)
    out = []
    for n = ns
        q = generate_q(n, seed)
        for j = 1:n
            method(q, j)
            @views q[:, j] ./= norm(q[:, j])
        end
        push!(out, sum(abs, q' * q - I))
    end
    out
end
ns = 1:10:200
s1 = get_eps(icgs!, 2, ns)
s2 = get_eps(mgs!, 2, ns)

using Plots
plot(ns, s1, yscale=:log10, label="icgs!")
plot!(ns, s2, label="mgs!")
xlabel!("size of the matrix")
ylabel!("error")
In Gram-Schmidt iterations, re-orthogonalization is often required to ensure that the obtained spanning space is correct up to some precision in many practical applications.
https://www.cs.cornell.edu/~bindel/class/cs6210-f16/lec/2016-11-16.pdf
I didn't find such a procedure in this repo yet. Is it because it is not needed for implementing expmv, is it simply not implemented yet, or have I missed it? I am happy to submit a PR if re-orthogonalization is really needed.
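To make the lecture note's point concrete, here is a hedged demo (function and test matrix are my own): a single classical GS pass loses orthogonality roughly like O(ε·κ²), while a second pass ("twice is enough") restores it to near machine precision, provided the matrix is numerically full rank:

```julia
using LinearAlgebra

# Departure from orthogonality ‖Q'Q − I‖ after `passes` classical-GS
# projection sweeps per column.
function cgs_error(A; passes=1)
    Q = copy(A)
    for j in axes(Q, 2)
        u = @view Q[:, j]
        P = @view Q[:, 1:j-1]
        for _ in 1:passes
            u .-= P * (P' * u)   # classical GS projection
        end
        u ./= norm(u)
    end
    return opnorm(Q' * Q - I)
end

# Test matrix with controlled condition number κ ≈ 1e10 (numerically full rank).
n = 100
U = Matrix(qr(randn(n, n)).Q)
V = Matrix(qr(randn(n, n)).Q)
A = U * Diagonal(10.0 .^ range(0, -10; length=n)) * V'

@show cgs_error(A; passes=1)   # expect O(1): orthogonality is lost
@show cgs_error(A; passes=2)   # expect ≈ machine epsilon
```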