Hi! Thank you for publishing such amazing work! In the paper you decay the learning rate after 6e5 steps, while in HQ_Dictionary.yaml it is set to 4e5 steps. The schedule steps, learning rate, and loss weights in the configs for both HQ_Dictionary and RestoreFormer differ from the paper. Which settings should I use to reproduce your excellent results?
Please follow the settings described in the paper. Note that the learning rate set in the config is not the actual learning rate: it is divided by the number of GPUs used. The learning rate described in the paper is the value after this division.
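A minimal sketch of the relationship the maintainer describes, assuming the config value is simply divided by the GPU count (the function name and the example values are illustrative, not taken from the repo's configs):

```python
def effective_lr(config_lr: float, num_gpus: int) -> float:
    """Per-paper learning rate derived from the config value.

    Per the maintainer's note, the LR in the YAML config is divided by
    the number of GPUs; the paper reports the post-division value.
    """
    return config_lr / num_gpus


# Hypothetical example: a config LR of 8e-5 trained on 4 GPUs would
# correspond to a paper-reported LR of 2e-5.
print(effective_lr(8e-5, 4))
```

So to match a learning rate quoted in the paper, multiply it by your GPU count before writing it into the YAML config.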