This repository contains an implementation of the CycleGAN and Pix2Pix models for image-to-image translation tasks, with setup guides for both Windows and Linux environments.
The project focuses on training CycleGAN and Pix2Pix for a variety of image-to-image translation tasks, including style transfer, object transfiguration, season transfer, and photo enhancement.
Choose the appropriate guide based on your operating system:
- Clone this repository:

  ```bash
  git clone https://github.com/scalable-design-participation-lab/building2parcel-pix2pix.git
  cd building2parcel-pix2pix
  ```
- Add and initialize the pix2pix submodule:

  ```bash
  git submodule add https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix.git
  git submodule update --init --recursive
  ```
This project uses the original pix2pix repository as a submodule. Adding it as a submodule allows us to:
- Keep the original implementation separate from our project-specific code.
- Easily update to new versions of pix2pix when needed (an example update command is sketched after this list).
- Clearly distinguish between the original code and our modifications.
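If you later want to pull in a newer upstream version, the standard submodule update flow applies. This is a minimal sketch, assuming the submodule lives at its default path `pytorch-CycleGAN-and-pix2pix`:

```bash
# Fetch and check out the latest commit of the submodule's tracked branch
# (submodule path assumed to be the default created by `git submodule add`)
git submodule update --remote pytorch-CycleGAN-and-pix2pix

# Record the new submodule revision in this repository
git add pytorch-CycleGAN-and-pix2pix
git commit -m "Update pytorch-CycleGAN-and-pix2pix submodule"
```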
- Follow the setup instructions for your operating system (see links above).
- Prepare your dataset as described in the setup guides.
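As a rough sketch of the dataset step: pix2pix training expects aligned A/B image pairs. If your data sits in separate A (input) and B (target) folders, the upstream repository's `datasets/combine_A_and_B.py` helper can merge them. The paths below are placeholders, and the script path assumes you call it from the repository root with the submodule checked out:

```bash
# Merge separate A (input) and B (target) folders into aligned AB pairs.
# Paths are placeholders; A and B are expected to contain matching
# train/ (and optionally val/ and test/) subfolders with corresponding filenames.
python pytorch-CycleGAN-and-pix2pix/datasets/combine_A_and_B.py \
    --fold_A ./datasets/your_dataset/A \
    --fold_B ./datasets/your_dataset/B \
    --fold_AB ./datasets/your_dataset
```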
You can run the model either with the provided scripts or with direct Python commands.
- Train the model:
  - For Windows, use the `train_pix2pix.bat` script
  - For Linux, use the `train_pix2pix.sh` script
- Test the model:
  - For Windows, use the `test_pix2pix.bat` script
  - For Linux, use the `test_pix2pix.sh` script
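The actual scripts ship with the repository; conceptually, a minimal Linux wrapper would just forward placeholder arguments to the same training command shown below under the direct-command option. This is a sketch, not the real script contents:

```bash
#!/usr/bin/env bash
# Sketch of a minimal training wrapper; the real train_pix2pix.sh in this
# repository may set additional options. Dataset and experiment names are placeholders.
set -e

python train.py \
    --dataroot ./datasets/your_dataset \
    --name your_experiment_name \
    --model pix2pix \
    --direction AtoB
```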
Alternatively, you can run the model directly using Python commands:
- Train the model:

  ```bash
  python train.py --dataroot ./datasets/your_dataset --name your_experiment_name --model pix2pix --direction AtoB
  ```
- Test the model:

  ```bash
  python test.py --dataroot ./datasets/your_test_dataset --name your_experiment_name --model pix2pix --direction AtoB
  ```
Replace `your_dataset`, `your_experiment_name`, and the other parameters as needed for your specific use case.
Common parameters:

- `--dataroot`: Path to the dataset
- `--name`: Name of the experiment (this creates a folder under `./checkpoints` to store results)
- `--model`: Model to use (`pix2pix`, `cycle_gan`, etc.)
- `--direction`: `AtoB` or `BtoA`
For a full list of available options, refer to the `options` directory in the project, or run:

```bash
python train.py --help
python test.py --help
```
Note: If you're cloning this repository after the submodule has been added, use the following command to clone the repository including all submodules:

```bash
git clone --recurse-submodules https://github.com/scalable-design-participation-lab/building2parcel-pix2pix.git
```
After training and testing, you can find the results in the following locations:

- Training progress: `checkpoints/[experiment_name]/web/index.html`
- Test results: `results/[experiment_name]`
We welcome contributions to improve this project. Please follow these steps to contribute:
- Fork the repository
- Create a new branch for your feature
- Commit your changes
- Push to the branch
- Create a new Pull Request
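In command form, the workflow above looks roughly like this (the branch name and commit message are placeholders):

```bash
# Create a feature branch (name is a placeholder)
git checkout -b feature/my-improvement

# Commit your changes and push the branch
git add .
git commit -m "Describe your change"
git push origin feature/my-improvement
# Then open a Pull Request on GitHub from the pushed branch
```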
This project is licensed under the BSD 2-Clause License - see the LICENSE file for details.
- Original CycleGAN and Pix2Pix implementation
- PyTorch team for their excellent deep learning framework
If you use this code for your research, please cite the following papers:
```bibtex
@inproceedings{CycleGAN2017,
  title={Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks},
  author={Zhu, Jun-Yan and Park, Taesung and Isola, Phillip and Efros, Alexei A},
  booktitle={Computer Vision (ICCV), 2017 IEEE International Conference on},
  year={2017}
}

@inproceedings{isola2017image,
  title={Image-to-Image Translation with Conditional Adversarial Networks},
  author={Isola, Phillip and Zhu, Jun-Yan and Zhou, Tinghui and Efros, Alexei A},
  booktitle={Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on},
  year={2017}
}
```
For more detailed information on usage, parameters, and advanced features, please refer to the OS-specific README files linked above.