First, download and set up the repo:
```bash
git clone https://github.com/Juanerx/Q-DiT.git
cd Q-DiT
```
Then create the environment and install required packages:
```bash
conda create -n qdit python=3.8
conda activate qdit
pip install -r requirements.txt
pip install .
```
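Once the install finishes, a quick sanity check can confirm the environment is usable. The snippet below is a minimal sketch, assuming PyTorch is pulled in by requirements.txt; quantization and sampling are far slower without a visible CUDA device:

```python
import torch

# Report the installed PyTorch version and whether a CUDA device is visible.
print(torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```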
If you want to use GPTQ or static quantization, first generate the calibration data:
```bash
cd scripts
python collect_cali_data.py
```
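The script's internals are not reproduced here, but the general idea of calibration for post-training quantization is to run the model on a small set of representative inputs and record the activations each layer sees, so that quantization ranges can be fitted to them. The sketch below illustrates this with forward hooks on a stand-in model; the dummy model, tensor shapes, and output filename are hypothetical and not the actual collect_cali_data.py logic:

```python
import torch
import torch.nn as nn

# Stand-in for the real DiT backbone; purely illustrative.
model = nn.Sequential(nn.Linear(64, 128), nn.GELU(), nn.Linear(128, 64)).eval()

calib_inputs = {}  # layer name -> list of observed input tensors


def make_hook(name):
    def hook(module, inputs, output):
        # Record the input activation seen by each linear layer.
        calib_inputs.setdefault(name, []).append(inputs[0].detach().cpu())
    return hook


handles = [m.register_forward_hook(make_hook(n))
           for n, m in model.named_modules() if isinstance(m, nn.Linear)]

# Feed a handful of representative batches through the model.
with torch.no_grad():
    for _ in range(8):
        model(torch.randn(4, 64))

for h in handles:
    h.remove()

# Persist the captured activations for the quantizer to calibrate against.
torch.save(calib_inputs, "cali_data.pt")
```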
Then quantize the model:
```bash
bash quant_main.sh --image-size 256 --num-sampling-steps 50 --cfg-scale 1.5 --use_gptq
```
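The flags follow the usual DiT sampling conventions: `--image-size 256` selects the 256x256 configuration, `--num-sampling-steps 50` sets the number of diffusion sampling steps, `--cfg-scale 1.5` sets the classifier-free-guidance scale, and `--use_gptq` enables GPTQ-based weight quantization. At its core, Q-DiT quantizes weights group-wise; the sketch below shows plain group-wise uniform weight quantization as a rough illustration of that idea. It is not the repository's implementation (Q-DiT additionally allocates group sizes automatically per layer), and the 4-bit width, group size of 128, and matrix shape are example values only:

```python
import torch


def quantize_weight_groupwise(w: torch.Tensor, n_bits: int = 4, group_size: int = 128):
    """Asymmetric uniform quantization of a 2-D weight, one scale/zero-point per group.

    Illustrative sketch only; not the Q-DiT implementation.
    """
    out_features, in_features = w.shape
    w_g = w.reshape(out_features, in_features // group_size, group_size)
    w_min = w_g.amin(dim=-1, keepdim=True)
    w_max = w_g.amax(dim=-1, keepdim=True)
    qmax = 2 ** n_bits - 1
    scale = (w_max - w_min).clamp(min=1e-8) / qmax
    zero = torch.round(-w_min / scale)
    # Quantize to integers, then dequantize to measure the reconstruction error.
    q = torch.clamp(torch.round(w_g / scale) + zero, 0, qmax)
    w_deq = (q - zero) * scale
    return w_deq.reshape(out_features, in_features)


# Example: quantize a random weight matrix to 4 bits and report the error.
w = torch.randn(1152, 1152)
w_q = quantize_weight_groupwise(w, n_bits=4, group_size=128)
print("mean abs error:", (w - w_q).abs().mean().item())
```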
If you find Q-DiT useful, please cite:
```bibtex
@misc{chen2024QDiT,
      title={Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers},
      author={Lei Chen and Yuan Meng and Chen Tang and Xinzhu Ma and Jingyan Jiang and Xin Wang and Zhi Wang and Wenwu Zhu},
      year={2024},
      eprint={2406.17343},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2406.17343},
}
```
This codebase borrows from GPTQ, Atom and ADM. Thanks to the authors for releasing their codebases!