
Classical Gram-Schmidt with DGKS re-orthogonalization criterion #32

Open

GiggleLiu opened this issue Apr 7, 2020 · 8 comments

GiggleLiu (Contributor) commented Apr 7, 2020

In many practical applications of Gram-Schmidt iterations, re-orthogonalization is required to ensure that the computed basis spans the right space and is orthogonal up to working precision.

https://www.cs.cornell.edu/~bindel/class/cs6210-f16/lec/2016-11-16.pdf

I didn't find such a procedure in this repo yet. Is it because it is not needed for implementing expmv, is it simply not implemented yet, or have I missed it? I am happy to submit a PR if re-orthogonalization is really needed.
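
To make the criterion concrete, here is a minimal sketch (illustration only; the helper name dgks_orthogonalize! is hypothetical, not code from this repo): after a classical GS sweep, re-orthogonalize if the vector's norm dropped by more than a factor alpha, since a large drop signals cancellation.

using LinearAlgebra

# One classical GS sweep plus the DGKS drop test: re-orthogonalize when
# the norm of u falls by more than a factor of alpha (commonly ~0.5)
# in a single sweep, i.e. when heavy cancellation occurred.
function dgks_orthogonalize!(u, Q; alpha = 0.5)
    r0 = norm(u)
    u .-= Q * (Q' * u)       # first classical Gram-Schmidt sweep
    if norm(u) < alpha * r0  # cancellation: orthogonality may be lost
        u .-= Q * (Q' * u)   # one extra sweep usually restores it
    end
    return u
end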

YingboMa (Member) commented Apr 7, 2020

It doesn't need to be reorthogonalized; we are using the modified Gram-Schmidt process. Also, if you want classical GS with the DGKS reorthogonalization criterion, there is one in IterativeSolvers.jl: https://github.com/JuliaMath/IterativeSolvers.jl/blob/master/src/orthogonalize.jl#L12
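
For reference, a usage sketch of that routine, under the assumption that orthogonalize_and_normalize! and the DGKS method type are exported as in the linked source (check the linked line for the exact signature):

using LinearAlgebra, IterativeSolvers

n, k = 100, 5
V = Matrix(qr(randn(n, k)).Q)  # existing orthonormal basis, one column per vector
w = randn(n)                   # new vector to orthogonalize against the columns of V
h = zeros(k)                   # receives the Gram-Schmidt coefficients

# Classical GS with the DGKS re-orthogonalization criterion.
orthogonalize_and_normalize!(V, w, h, DGKS)
@show norm(V' * w)             # ≈ 0 if w is now orthogonal to V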

@YingboMa changed the title from "Is the arnodi iteration re-orthorgonalized?" to "Classical Gram-Schmidt with DGKS re-orthogonalization criterion" on Apr 7, 2020
GiggleLiu (Contributor, Author) commented:

Sorry, I meant here:

function arnoldi_step!(j::Integer, iop::Integer, A::AT,

orthogonalize_and_normalize! is not used.

YingboMa (Member) commented Apr 7, 2020

It doesn't need to be reorthogonalized. Modified Gram-Schmidt is stable.

GiggleLiu (Contributor, Author) commented:

Yes, indeed, you are using modified GS; I missed it.

MGS costs O(kn^2), but it is built from level-1 BLAS operations (dot products and axpys); ICGS has the same complexity while using level-2 BLAS matrix-vector products, so it tends to be faster in practice:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.67.9572&rep=rep1&type=pdf#page86
Page 72.

I am wondering whether the performance of GS is important in this case?

ChrisRackauckas (Member) commented:

Do a profile on https://benchmarks.sciml.ai/html/MOLPDE/allen_cahn_fdm_wpd.html to see whether it's a significant part of the runtime.
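
For example, a rough profiling sketch (the semilinear test problem below is a hypothetical stand-in for the Allen-Cahn benchmark, and the solver options assume the exponential-integrator interface in OrdinaryDiffEq):

using OrdinaryDiffEq, LinearAlgebra, Profile

# Hypothetical stand-in: a semilinear problem u' = A*u + f(u),
# the form the exponential integrators expect.
n = 128
A = DiffEqArrayOperator(Tridiagonal(ones(n - 1), -2ones(n), ones(n - 1)))
f(u, p, t) = u .- u .^ 3
prob = SplitODEProblem(A, f, rand(n), (0.0, 1.0))

solve(prob, LawsonEuler(krylov = true, m = 30), dt = 0.01)           # compile first
@profile solve(prob, LawsonEuler(krylov = true, m = 30), dt = 0.01)  # then profile
Profile.print()  # look for time attributed to arnoldi_step!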

GiggleLiu (Contributor, Author) commented:

Thanks, @ChrisRackauckas. This is a very nice notebook; I can find good examples to benchmark there.

ChrisRackauckas (Member) commented:

Note that all of our benchmarking sources are at https://github.com/SciML/DiffEqBenchmarks.jl

GiggleLiu (Contributor, Author) commented Apr 8, 2020

ICGS seems to be more stable than modified GS according to the following benchmark; the Lanczos procedure may have the same issue. However, ICGS is a bit slower than MGS. Let's keep this issue open and see if someone complains about a precision problem.

[Plot: orthogonality error of icgs vs. mgs on a log-scale y-axis; the x-axis is the size of the matrix being orthogonalized.]

Here is the code for this plot:

"""
    icgs(u::AbstractVector, Q, M=nothing; alpha=0.5, maxit=3)

Iterative Classical M-orthogonal Gram-Schmidt orthogonalization.
`A` is a matrix to be orthogonalized,
`M` is a matrix or nothing, if not nothing, perform M-orthogonal.
"""
function icgs!(A::AbstractMatrix, j, M=nothing; alpha=0.5, maxit=3)
    u = @view A[:,j]
    Q = @view A[:,1:j-1]
    Mu = _dot(M, u)
    r_pre = sqrt(abs(dot(u, Mu)))
    local r1
    for it = 1:maxit
        u .-= Q * (Q' * Mu)
        Mu = _dot(M, u)
        r1 = sqrt(abs(dot(u, Mu)))
        if r1 > alpha * r_pre
            break
        end
        r_pre = r1
    end
    if r1 <= alpha * r_pre
        @warn "loss of orthogonality @icgs."
    end
end

# Weighted inner-product helper: `nothing` stands for the Euclidean inner product.
_dot(::Nothing, b) = b
_dot(A::AbstractMatrix, b) = A * b

using LinearAlgebra, Test, Random

# Build an ill-conditioned test matrix whose columns form the Krylov sequence
# v, Hv, H^2*v, ...; these columns quickly become nearly linearly dependent.
function generate_q(n, seed=2)
    Random.seed!(seed)
    H = randn(ComplexF64, n, n)
    H = H + H'
    v = randn(ComplexF64, n)
    qs = []
    for i = 1:n
        push!(qs, v)
        v = H * v
    end
    hcat(qs...)
end

@testset "icgs" begin
    n = 10
    q = generate_q(n, 2)
    for j = 1:n
        icgs!(q, j)
        @views q[:,j] ./= norm(q[:,j])
    end
    @test q' * q ≈ I
    @show sum(abs, q' * q - I)
end

# One modified Gram-Schmidt sweep: orthogonalize column `j` of `V` against
# columns `1:j-1` with sequential projections (level-1 BLAS).
function mgs!(V::AbstractMatrix, j)
    y = @view(V[:, j])
    @inbounds for i = 1:j-1
        α = dot(@view(V[:, i]), y)
        axpy!(-α, @view(V[:, i]), y)
    end
    return y
end

@testset "mgs" begin
    n = 10
    q = generate_q(n, 2)
    for j = 1:n
        mgs!(q, j)
        @views q[:,j] ./= norm(q[:,j])
    end
    @test q' * q ≈ I
    @show sum(abs, q' * q - I)
end

# Orthogonalize the columns of `generate_q(n, seed)` one by one with `method`
# and report the total deviation of q'q from the identity, for each n in `ns`.
function get_eps(method, seed, ns)
    out = []
    for n = ns
        q = generate_q(n, seed)
        for j = 1:n
            method(q, j)
            @views q[:,j] ./= norm(q[:,j])
        end
        push!(out, sum(abs, q' * q - I))
    end
    out
end

ns = 1:10:200
s1 = get_eps(icgs!, 2, ns)
s2 = get_eps(mgs!, 2, ns)

using Plots
plot(ns, s1, yscale=:log10, label="icgs")
plot!(ns, s2, label="mgs")
xlabel!("matrix size n")
ylabel!("error")
