May you release the code to finetune on ADE20K? #1
Hello authors, this is really inspiring work, and it is very kind of you to release the code alongside it. Could you please also release the code for fine-tuning on ADE20K? Your paper states that these experiments simply follow MAE; however, I am unable to find the corresponding code in the official MAE release. Thanks a lot for your attention.
Best wishes.

Comments
Hi @Wallace-222, sorry, we don't have a code release for the ADE20K segmentation model yet. For this experiment, we followed the implementation in the MAE paper. I think one can adapt the Swin-Transformer segmentation repo (https://github.com/SwinTransformer/Swin-Transformer-Semantic-Segmentation) and replace the backbone with ViT.
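For anyone trying this before an official release, below is a minimal, hypothetical sketch of such an adaptation in an mmsegmentation-style config (the framework the Swin segmentation repo builds on), swapping the Swin backbone for a plain ViT-Base with an UperNet head. The class names, argument names, and checkpoint path are assumptions that vary across mmsegmentation versions; treat this as illustrative, not as the authors' code.

```python
# Hypothetical mmsegmentation-style config: replace the Swin backbone
# with a plain ViT-Base encoder and keep the UperNet decode head.
# Class/argument names follow mmsegmentation conventions and may differ
# across versions; all values here are illustrative, not from the paper.
model = dict(
    type='EncoderDecoder',
    pretrained='path/to/mae_pretrained_vit_base.pth',  # hypothetical path
    backbone=dict(
        type='VisionTransformer',   # plain ViT instead of SwinTransformer
        img_size=512,
        patch_size=16,
        embed_dims=768,
        num_layers=12,
        num_heads=12,
        out_indices=(3, 5, 7, 11),  # intermediate features fed to UperNet
    ),
    decode_head=dict(
        type='UPerHead',
        in_channels=[768, 768, 768, 768],
        in_index=[0, 1, 2, 3],
        channels=512,
        num_classes=150,            # ADE20K has 150 classes
    ),
)
```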
@ronghanghu Hi,
Hi @youngwanLEE, we follow the same settings as in BEiT, MAE, and ConvNeXt for the ADE20K experiments and sweep the hyperparameters. Our final hyperparameters are as follows:
Hope these are helpful!
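As a rough illustration of what a BEiT/MAE-style ADE20K fine-tuning sweep typically covers, here is a hypothetical sketch. The knob names are the ones commonly tuned in such setups; every value below is a placeholder, not the final settings reported in this thread.

```python
# Hypothetical sweep space for BEiT/MAE-style ADE20K fine-tuning.
# The values are placeholders, NOT the authors' final hyperparameters.
sweep = dict(
    lr=[1e-4, 2e-4, 4e-4],            # peak learning rate (AdamW)
    layer_decay=[0.65, 0.75, 0.85],   # layer-wise lr decay for ViT
    drop_path=[0.1, 0.2],             # stochastic depth rate
    weight_decay=[0.05],
    batch_size=[16],
    iterations=[160000],              # standard ADE20K schedule length
    crop_size=[512],                  # typical input crop for ViT-B models
)
```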
@ronghanghu Many thanks!
@ronghanghu Hi Ronghang, thanks for sharing this great work. Table 2(c) of the main paper indicates that models with different patch sizes (e.g., 8, 16, 24) are fine-tuned on COCO/ADE20K with the same transfer input size, so these three models have different GPU memory usage and different FLOPs but can still achieve similar results. Do I understand this correctly?
Hi @ggjy, during COCO (and similarly ADE20K) fine-tuning, all three pre-trained models in Table 2(c) are fine-tuned with the same ViT patch size of 16 and the same image size of 1024, so they have the same GPU memory usage and FLOPs during fine-tuning. This is mentioned under "Setups" in Sec. 4.1 (page 4, right column).
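To make this concrete, here is a small sketch using the standard transformer FLOPs estimate and assuming ViT-Base dimensions: once the fine-tuning patch size and image size are fixed, the token count, and hence the compute, is identical regardless of which patch size was used during pre-training.

```python
# Sketch: transformer encoder FLOPs depend on the token count at
# fine-tuning time, set by (image_size / patch_size)**2, regardless of
# the patch size used during pre-training. Per-layer cost uses the
# standard attention + MLP FLOPs estimate (ViT-Base dims assumed).

def vit_encoder_flops(image_size=1024, patch_size=16,
                      depth=12, dim=768, mlp_ratio=4):
    n = (image_size // patch_size) ** 2          # number of tokens
    attn = 4 * n * dim**2 + 2 * n**2 * dim       # qkv/proj + attention maps
    mlp = 2 * n * dim * (mlp_ratio * dim)        # two MLP matmuls
    return depth * (attn + mlp)

# Same patch size (16) and image size (1024) at fine-tuning time
# => same token count => same FLOPs and similar memory use,
# no matter which patch size (8/16/24) was used in pre-training.
print(vit_encoder_flops())
```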
Got it! Thanks very much for your quick reply.