
💀 HAMLET

To Adapt or Not to Adapt? Real-Time Adaptation for Semantic Segmentation (ICCV23)

Marc Botet Colomer1,2* Pier Luigi Dovesi3*† Theodoros Panagiotakopoulos4 Joao Frederico Carvalho1 Linus Härenstam-Nielsen5,6 Hossein Azizpour2 Hedvig Kjellström2,3 Daniel Cremers5,6,7 Matteo Poggi8

1 Univrses 2 KTH 3 Silo AI 4 King 5 Technical University of Munich 6 Munich Center of Machine Learning 7 University of Oxford 8 University of Bologna

* Joint first authorship. Part of the work carried out while at Univrses.

📜 arXiv 💀 project page 📽️ video

Method Cover

Citation

If you find this repo useful for your work, please cite our paper:

@inproceedings{colomer2023toadapt,
      title = {To Adapt or Not to Adapt? Real-Time Adaptation for Semantic Segmentation},
      author = {Botet Colomer, Marc and 
                Dovesi, Pier Luigi and 
                Panagiotakopoulos, Theodoros and 
                Carvalho, Joao Frederico and 
                H{\"a}renstam-Nielsen, Linus and 
                Azizpour, Hossein and 
                Kjellstr{\"o}m, Hedvig and 
                Cremers, Daniel and
                Poggi, Matteo},
      booktitle = {IEEE International Conference on Computer Vision},
      note = {ICCV},
      year = {2023}
}

Setup Environment

For this project, we used Python 3.9.13. We recommend setting up a new virtual environment:

python -m venv ~/venv/hamlet
source ~/venv/hamlet/bin/activate

In that environment, the requirements can be installed with:

pip install -r requirements.txt -f https://download.pytorch.org/whl/torch_stable.html
pip install mmcv-full==1.3.7  # requires the other packages to be installed first
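
As an optional sanity check (a minimal sketch, assuming the installation above succeeded), you can verify that the core packages import correctly and print their versions:

python -c "import torch, mmcv; print(torch.__version__, mmcv.__version__)"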

All experiments were executed on an NVIDIA RTX 3090.

Setup Datasets

Cityscapes: Please download leftImg8bit_trainvaltest.zip and gt_trainvaltest.zip from here and extract them to /data/datasets/cityscapes.
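
For example, assuming both archives have already been downloaded from the Cityscapes website to the current directory (a minimal sketch; adjust the target path so it matches the one you pass to the conversion script below):

mkdir -p /data/datasets/cityscapes
unzip leftImg8bit_trainvaltest.zip -d /data/datasets/cityscapes
unzip gt_trainvaltest.zip -d /data/datasets/cityscapes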

Rainy Cityscapes: Please follow the steps as shown here: https://team.inria.fr/rits/computer-vision/weather-augment/

If you have trouble creating the rainy dataset, please contact us at [email protected] to obtain the Rainy Cityscapes dataset.

We refer to MMSegmentation for further instructions about the dataset structure.

Prepare the source dataset:

python tools/convert_datasets/cityscapes.py /data/datasets/Cityscapes --out-dir data/Cityscapes --nproc 8

Training

For convenience, it is possible to run the provided configuration by selecting experiment -1. If wandb is configured, it can be activated by setting the wandb argument to 1:

python run_experiments.py --exp -1 --wandb 1
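
If wandb is not set up, the same experiment can presumably be run with logging disabled (assuming 0 turns the wandb argument off, as implied above):

python run_experiments.py --exp -1 --wandb 0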

All assets needed to run a training can be found here.

Make sure to place the pretrained model mitb1_uda.pth in pretrained/.
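
For instance (a minimal sketch, assuming mitb1_uda.pth was downloaded to the current directory):

mkdir -p pretrained
mv mitb1_uda.pth pretrained/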

We provide a config.py file that can be easily modified to run multiple experiments by changing its parameters. Make sure to place the random modules in random_modules/.
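
As a hedged sketch of a small sweep (the experiment ids 1 and 2 below are purely illustrative; the actual ids are defined in config.py):

mkdir -p random_modules   # copy the downloaded random modules here
for EXP in 1 2; do
    python run_experiments.py --exp "$EXP" --wandb 0
done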

Code structure

This code is based on the MMSegmentation project. The most relevant files are:

Acknowledgements

This project is based on the following open-source projects.
