Training Code Not Found #9
Comments
+1

+1
Would you kindly provide the training code?
I used the train.py script from commit 1ebbdf6, but the model does not converge at all. There are approximately 10-pixel misalignments between the input and ground-truth images (size 512x512), and the CoBi loss fails. So I think it is impossible to reproduce this paper without the original training code.
If you follow the rough alignment scripts and apply the computed matrices correctly during training, you should be able to get results similar to what I showed in the paper. It's not trivial, and I haven't finished cleaning up all the util functions. People have emailed me about small artifacts and details of the training parameters, having already re-implemented the paper and gotten results close to what I've shown. If you just use the old training code without changing anything, I won't be surprised that it doesn't converge.
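(For anyone stuck on this step, here is a minimal sketch of applying a precomputed alignment matrix to the ground truth before computing the loss. It assumes tform.txt holds a 2x3 affine matrix of six floats; the file format and helper names are guesses, not the repo's actual code.)

```python
# Hypothetical sketch: warp the ground-truth patch onto the input's
# coordinate frame with a precomputed matrix before the loss is
# computed. Assumes tform.txt stores a 2x3 affine matrix (six floats);
# the real SR-RAW format may differ.
import numpy as np
import cv2

def load_tform(path):
    """Read six floats from tform.txt into a 2x3 affine matrix."""
    return np.loadtxt(path, dtype=np.float32).reshape(2, 3)

def align_ground_truth(gt_img, tform, out_size=(512, 512)):
    """Warp the ground-truth image so it lines up with the input."""
    return cv2.warpAffine(gt_img, tform, out_size, flags=cv2.INTER_CUBIC)

# e.g. gt_aligned = align_ground_truth(gt, load_tform("tform.txt"))
```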
I have reproduced the paper with my own training code. It works better to set the parameter w_spatial to 0.5 or bigger, and I pretrain the model with an L1 loss. Although the CoBi loss doesn't decrease much, the result is amazingly clear. Still hoping for the official training code...
So, what is your w_spatial? In the author's training code, when adopting 'contextual', w_cont = 1, w_patch = 1.5, w_spatial = 0.5. I tried the training code with the aforementioned weights the author suggested, and the result is bad. Would you kindly explain your weight settings?
loss = 1.0 * cobi_vgg + 1.0 * cobi_rgb, and the w_spatial parameter of both CoBi losses is 0.5.
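(Spelled out, that weighting is just two CoBi terms summed; in the sketch below, `cobi_loss` and its keyword arguments are placeholders, not the actual API of the repo's loss.py.)

```python
# Hypothetical sketch of the weighting described above; `cobi_loss`
# stands in for a CoBi implementation and its signature is assumed.
def total_loss(output, target, cobi_loss):
    # CoBi on VGG features and on RGB patches, each weighted 1.0,
    # with the spatial term of both losses set to 0.5.
    cobi_vgg = cobi_loss(output, target, feature_space='vgg', w_spatial=0.5)
    cobi_rgb = cobi_loss(output, target, feature_space='rgb', w_spatial=0.5)
    return 1.0 * cobi_vgg + 1.0 * cobi_rgb
```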
Thanks a lot, I will try that. However, you mentioned earlier that w_spatial should be set to 0.5 or bigger; setting w_spatial = 0.5 here is fine, right?
I changed the description; I have used both 0.5 and 0.8. But I align the images using "main_align.sh", "main_crop.sh" and "main_wb.sh".
Thanks for your help. We trained the zoom-learn-zoom model following your parameters and got an extremely good result.
@bai-shang did you train the model on raw data or RGB data?
@bai-shang can you share your train.py? Thank you.
May I ask how you use tform.txt and wb.txt during training?
Dear llp: you said that you have reproduced the code, so what training dataset did you use? SR_RAW or your own data? If you used SR_RAW, how did you crop the images of different scales, since the author has not released the 'aligned' part of the SR_RAW training dataset?
I use SR_RAW, with ECC alignment first.
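(For anyone unfamiliar with that step, ECC alignment with OpenCV looks roughly like the sketch below; the affine motion model, iteration count, and threshold are illustrative choices, not necessarily the settings used here.)

```python
# Rough ECC alignment sketch using OpenCV's findTransformECC.
# Inputs are single-channel uint8/float32 images of the same size.
import cv2
import numpy as np

def ecc_align(src_gray, ref_gray, iters=1000, eps=1e-6):
    """Estimate an affine warp mapping src onto ref, then apply it."""
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, iters, eps)
    _, warp = cv2.findTransformECC(ref_gray, src_gray, warp,
                                   cv2.MOTION_AFFINE, criteria)
    h, w = ref_gray.shape
    return cv2.warpAffine(src_gray, warp, (w, h),
                          flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
```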
Thank you, I have found the released script as well.
+1
Would you kindly provide your training code? Thanks!
Is it possible to share your code? I actually did the same but found some obvious artifacts in some regions.
Hi,
I am not able to find the code that performs backpropagation. There are multiple losses in the loss.py file, and I want to understand how to use them. Could you please provide the training/backpropagation code, or explain how to use the loss.py file to run backpropagation?
Thanks
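(In case it helps others asking the same thing: in TensorFlow 1.x, which this repo uses, there is no separate backpropagation file. You reduce the losses from loss.py to a single scalar and hand it to an optimizer; minimize() builds the gradient ops. Below is a toy, self-contained sketch where the one-layer network and the L1 term are placeholders for the real model and the loss.py functions.)

```python
import numpy as np
import tensorflow as tf

# Toy stand-ins: in practice the prediction comes from the repo's
# network and the scalar loss from (a weighted sum of) loss.py terms.
inp = tf.placeholder(tf.float32, [None, 512, 512, 3])
target = tf.placeholder(tf.float32, [None, 512, 512, 3])
pred = tf.layers.conv2d(inp, 3, 3, padding='same')  # placeholder "network"

loss = tf.reduce_mean(tf.abs(pred - target))  # swap in the CoBi terms here

# minimize() differentiates `loss` w.r.t. all trainable variables and
# applies an Adam update: this is the backpropagation step itself.
train_op = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    x = np.zeros([1, 512, 512, 3], np.float32)  # dummy batch
    y = np.zeros([1, 512, 512, 3], np.float32)
    _, l = sess.run([train_op, loss], feed_dict={inp: x, target: y})
```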