Hi! Thank you for your excellent work!
I've been trying to reproduce the results reported in the paper recently. Here's what I got:
Using two RTX 4090s and the official config, these are the results I got from pretraining on the homography dataset:
Fine-tuning on MegaDepth
Then I followed the settings described in the paper (lr of 1e-5, decayed by 0.8 after 10 epochs) and got the best checkpoint at: Epoch 48: New best val: loss/total=0.4931303240458171
The test results are as follows:
I also tried the settings in the official config (lr of 1e-4, decayed by 0.95 after 30 epochs); this is what I got:
Epoch 49: New best val: loss/total=0.3537058917681376
I've also tried several other plausible settings but still couldn't match the results reported in the paper.
Could you please share more details on how you fine-tuned the model on the MegaDepth dataset? Or do you have any other suggestions for improving the performance?
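For reference, here is how the two learning-rate schedules I tried compare over the ~50 training epochs, assuming a standard per-epoch exponential decay that starts after a fixed epoch (the helper function below is my own sketch, not code from glue-factory):

```python
def lr_at_epoch(base_lr, decay, start_epoch, epoch):
    """Return the learning rate at a given epoch, assuming the lr is
    held at base_lr until start_epoch and then multiplied by `decay`
    once per epoch afterwards."""
    steps = max(0, epoch - start_epoch)
    return base_lr * decay ** steps

# Paper schedule: lr 1e-5, decayed by 0.8 after epoch 10.
# Official-config schedule: lr 1e-4, decayed by 0.95 after epoch 30.
for epoch in (0, 10, 30, 49):
    paper_lr = lr_at_epoch(1e-5, 0.8, 10, epoch)
    config_lr = lr_at_epoch(1e-4, 0.95, 30, epoch)
    print(f"epoch {epoch:2d}: paper={paper_lr:.3e}  config={config_lr:.3e}")
```

By epoch 49 the paper schedule has decayed the lr by 0.8^39 (several orders of magnitude), while the official config has only decayed it by 0.95^19, so the two runs end at very different effective learning rates; this may partly explain the different best-validation losses.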
Hello @AubreyCH, I am currently trying to understand the process of training LightGlue, but I've run into a problem at the fine-tuning stage: I don't have enough storage space for the downloaded MegaDepth dataset. Could you briefly describe the structure and contents of the MegaDepth dataset you used? I would greatly appreciate your feedback.