Zero-shot, training-free text-to-perpetual scene generation.
[Project Page] [Paper]
This repository is the official implementation of DreamDrone.
- To add LCM to the pipeline for faster generation.
- To add a temporal filter to enhance the smoothness of the generated videos.
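The temporal filter above is still on the roadmap and not part of this repository yet. As an illustration only, one common approach is an exponential moving average over frames; the function name `temporal_smooth` and the blending weight `alpha` below are hypothetical, not names from the DreamDrone codebase:

```python
import numpy as np

def temporal_smooth(frames: np.ndarray, alpha: float = 0.8) -> np.ndarray:
    """Smooth a video along the time axis with an exponential moving average.

    frames: array of shape (T, H, W, C), values in [0, 1].
    alpha:  weight of the running average; higher means smoother but laggier.
    """
    smoothed = np.empty_like(frames)
    smoothed[0] = frames[0]  # first frame is passed through unchanged
    for t in range(1, len(frames)):
        # blend the running average with the current frame
        smoothed[t] = alpha * smoothed[t - 1] + (1 - alpha) * frames[t]
    return smoothed
```

A stronger `alpha` suppresses frame-to-frame flicker at the cost of ghosting on fast camera motion, which is why such a filter is usually exposed as a tunable parameter.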
- [15/12/2023] Paper DreamDrone released!
- [15/12/2023] Our Hugging Face demo is released!
- Clone this repository and enter the directory:
git clone https://github.com/HyoKong/DreamDrone.git
cd DreamDrone/
- Install requirements using Python 3.8 and CUDA >= 11.7:
conda create -n DreamDrone python=3.8 -y
conda activate DreamDrone
pip install -r requirements.txt
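Before installing the requirements, you can confirm the interpreter in the new environment matches the expected version; this is a minimal sanity check, not part of the DreamDrone codebase:

```python
import sys

# DreamDrone's instructions assume Python 3.8 or newer
assert sys.version_info >= (3, 8), "Python 3.8+ is required"
print("Python version OK:", sys.version.split()[0])
```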
To run the Gradio interface:
python app.py
If you use our work in your research, please cite our publication:
@misc{kong2023dreamdrone,
    title={DreamDrone},
    author={Hanyang Kong and Dongze Lian and Michael Bi Mi and Xinchao Wang},
    year={2023},
    eprint={2312.08746},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
We warmly welcome contributions from everyone. Please feel free to reach out to us.
Without further ado, welcome to DreamDrone – enjoy piloting your virtual drone through imaginative landscapes!