
TV regularization with different lambdas for different parts of x #89

Closed
patrickhaft opened this issue Jul 14, 2024 · 4 comments

@patrickhaft

I am trying to apply TV regularization with different lambdas to different parts of x in an ADMM solver. For some reason, it did not work as expected.

As explained in the documentation of the ADMM solver, instead of using the TV regularization directly, I use TV in the form of a gradient operator combined with L1 regularization. Since x represents two 100x100 images, I defined the regularization as follows:

reg = L1Regularization(0.1)
regTrafo = LinearOperatorCollection.GradientOp(Float64; shape=(100, 100, 2), dims=1:2)

It works fine until I try to apply the TV regularization to only part of x, or use two TV regularizations with different lambdas for the two images in x.
I used a masked regularization because it was the most obvious way for me to apply the regularization to only one part. Is this the right approach?
For some reason, the appearance of the regularized part changes compared to using the TV regularization for both images.

@nHackel
Member

nHackel commented Jul 14, 2024

Hello,

Which version of the package are you using? And how are you constructing the masked proximal maps?

If you have a small MWE, I can take a look later. One possible issue is that you are constructing your mask with indices that fit the shape you pass to the gradient operator, rather than indices that fit the gradient operator's output.
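For reference, the mask sizes involved differ: the gradient operator's output is shorter per dimension than `x` itself. A quick sketch of the sizes, assuming `GradientOp` stacks the forward differences along each requested dimension (each of which drops one sample along the differentiated dimension):

```julia
# Sizes for GradientOp with shape = (100, 100, 2), dims = 1:2,
# assuming stacked forward differences per dimension.
shape = (100, 100, 2)
len_dim1 = (shape[1] - 1) * shape[2] * shape[3]   # differences along dim 1
len_dim2 = shape[1] * (shape[2] - 1) * shape[3]   # differences along dim 2
total = len_dim1 + len_dim2                       # total output length
```

So a mask sized like `x` (20000 entries) cannot line up with the operator's output (39600 entries).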

@patrickhaft
Author

patrickhaft commented Jul 15, 2024

Hi,

I am using the latest commit, as I depend on the GPU-supported version (release 0.16.0). I used to use the gpuStates branch. Great work, by the way :).
For the size of the mask, I tried both the size of x and the size of the gradient operator's output.
Maybe this MWE explains my problem:

using RegularizedLeastSquares
using Images
using LinearOperatorCollection


function plot_recon_img(img)

    # Take magnitudes
    magnitude_array = abs.(img)

    # Normalize to [0, 1] for visualization
    normalized_array = (magnitude_array .- minimum(magnitude_array)) ./ (maximum(magnitude_array) - minimum(magnitude_array))

    resized_array = imresize(normalized_array, size(normalized_array) .* 2)  # Double the size

    # Convert to grayscale image and display
    return Gray.(resized_array)
end

# for plotting x, found_x, and x - found_x 
function compare_images()
    plot_recon_img(
    vcat(hcat(x_as_image[:,:,1],    
    reshaped_found_x[:,:,1],
    x_as_image[:,:,1] - reshaped_found_x[:,:,1]),
    hcat(x_as_image[:,:,2],    
    reshaped_found_x[:,:,2],
    x_as_image[:,:,2] - reshaped_found_x[:,:,2])))
end


# Create Random x and A
x = rand(Float64, 10000 * 2)
A = rand(Float64, 100, 10000 * 2)


# Calculate b
b = A * x 

# store x as 2 images of size 100 x 100
x_as_image = reshape(x, 100, 100, 2)

# Without reg
solver = createLinearSolver(ADMM, A; iterations=10)
found_x = solve!(solver, b)
reshaped_found_x = reshape(found_x, 100, 100, 2)
compare_images()


# Example 1:  TV regularization for all parameters
reg = L1Regularization(0.01)
regTrafo = LinearOperatorCollection.GradientOp(Float64; shape=(100, 100, 2), dims=1:2) 
solver = createLinearSolver(ADMM, A; iterations=10, reg=reg, regTrafo=regTrafo)
found_x = solve!(solver, b)
reshaped_found_x = reshape(found_x, 100, 100, 2)
compare_images()



# Example 2: Using masked regularization with the size of x
mask = zeros(Bool, 100, 100, 2)
mask[:, :, 1] .= true
mask = reshape(mask, :)

reg = MaskedRegularization(L1Regularization(0.01), mask)
regTrafo = LinearOperatorCollection.GradientOp(Float64; shape=(100, 100, 2), dims=1:2)
solver = createLinearSolver(ADMM, A; iterations=10, reg=reg, regTrafo=regTrafo)
found_x = solve!(solver, b)
reshaped_found_x = reshape(found_x, 100, 100, 2)
compare_images()


# Example 3: Using masked regularization with the size of the output of the gradient operator
mask = zeros(Bool, 19800, 2)
mask[:, 1] .= true
mask = reshape(mask, :)

reg = MaskedRegularization(L1Regularization(0.01), mask)
regTrafo = LinearOperatorCollection.GradientOp(Float64; shape=(100, 100, 2), dims=1:2)
solver = createLinearSolver(ADMM, A; iterations=10, reg=reg, regTrafo=regTrafo)
found_x = solve!(solver, b)
reshaped_found_x = reshape(found_x, 100, 100, 2)
compare_images()

@patrickhaft
Author

Well, I think I found the solution to my problem. I assumed that the first half of the gradient operator's output belongs to the first image and the second half to the second image. It turns out I was wrong: the output is grouped by differentiation dimension, so the first part of the gradient operator's output contains the dimension-1 gradients of both images.
Please correct me if I am wrong :).
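Given that dimension-major layout, a mask targeting only the first image has to select the image-1 entries inside each dimension block rather than the first half of the output. A sketch (assuming the intra-block layout is column-major with the image index last, matching the `(100, 100, 2)` shape):

```julia
# Mask that applies the L1 prox only to the first image's gradients,
# assuming the output layout [vec(dim-1 diffs); vec(dim-2 diffs)].
mask_dim1 = zeros(Bool, 99, 100, 2)   # dim-1 differences: (99, 100, 2)
mask_dim1[:, :, 1] .= true            # select image 1 only
mask_dim2 = zeros(Bool, 100, 99, 2)   # dim-2 differences: (100, 99, 2)
mask_dim2[:, :, 1] .= true            # select image 1 only
mask = vcat(vec(mask_dim1), vec(mask_dim2))
```

This keeps the mask aligned with the operator's output length of 39600 while covering exactly the first image's 19800 gradient entries.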

@nHackel
Member

nHackel commented Jul 15, 2024

Ah yes, that is correct. Here you can see the relevant code
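The confirmed stacking can be mimicked with plain finite differences from Base (a sketch using `diff`, not the actual `GradientOp` implementation):

```julia
# Mimic the dimension-major stacking with Base.diff:
# dim-1 differences of both images come first, then dim-2 differences.
x = reshape(collect(1.0:20000.0), 100, 100, 2)
d1 = diff(x; dims=1)          # size (99, 100, 2)
d2 = diff(x; dims=2)          # size (100, 99, 2)
y = vcat(vec(d1), vec(d2))    # stacked output, length 39600
```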
