Blogpost: https://mlecauchois.github.io/posts/deep-gui/
Install this repository's requirements:

```shell
pip install -r requirements.txt
```

Then install diffusers from source, together with the ControlNet example requirements:

```shell
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install -e .
cd examples/controlnet
pip install -r requirements.txt
```
Configure Accelerate (it will ask a few questions about your setup) and log in to the Hugging Face Hub and Weights & Biases:

```shell
accelerate config
huggingface-cli login
wandb login
```
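If you'd rather skip the interactive prompts, Accelerate can write out a default configuration instead (a minimal sketch; adjust it for your hardware):

```shell
# Write a default accelerate config (single-machine setup, no distributed training)
accelerate config default
```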
Download the dataset from the Hugging Face Hub.
Run the training script:

```shell
./train.sh
```
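The contents of `train.sh` are not shown here; presumably it wraps the diffusers ControlNet training script, roughly along these lines. The dataset name, output directory, and hyperparameters below are placeholders, not the repository's actual values:

```shell
# Hypothetical sketch of train.sh; dataset, output path, and hyperparameters are placeholders.
accelerate launch train_controlnet.py \
    --pretrained_model_name_or_path="stabilityai/stable-diffusion-2-1-base" \
    --dataset_name="YOUR_DATASET" \
    --output_dir="./deepgui-controlnet" \
    --resolution=512 \
    --learning_rate=1e-5 \
    --train_batch_size=4 \
    --report_to=wandb
```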
Download the model from the Hugging Face Hub.
For dependencies, see the installation steps above.
On your GPU-enabled server, run:
```shell
python server.py \
    --server_port SERVER_PORT \
    --base_model_path "stabilityai/stable-diffusion-2-1-base" \
    --controlnet_path "mattlc/deepgui" \
    --control_image_path "conditioning_image.png"
```
On your local machine, run:
```shell
python client.py --server_ip SERVER_IP --server_port SERVER_PORT --output_folder OUTPUT_FOLDER
```
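The wire protocol between `client.py` and `server.py` is defined by the repository's code. Purely as an illustration, a minimal client that fetches one generated frame over a hypothetical HTTP endpoint (the `/frame` path is an assumption, not the actual API) might look like this:

```python
import urllib.request
from pathlib import Path


def fetch_frame(server_ip: str, server_port: int, output_folder: str,
                name: str = "frame.png") -> Path:
    """Request one generated frame from the server and save it locally.

    The /frame endpoint is a hypothetical placeholder; the real protocol
    is whatever server.py and client.py implement.
    """
    out_dir = Path(output_folder)
    out_dir.mkdir(parents=True, exist_ok=True)
    url = f"http://{server_ip}:{server_port}/frame"
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    out_path = out_dir / name
    out_path.write_bytes(data)
    return out_path
```

A real-time loop would call this repeatedly; the round trip per frame (one diffusion pass plus network transfer) is what currently keeps this far from interactive rates.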
If you know how to make this closer to real-time, you are welcome to contribute!