This fork of MagicAnimate adds instructions for setting it up on Windows. It was tested with Python 3.10.6, Windows 10, and an RTX 4080 with NVIDIA graphics driver 536.67, with no CUDA Toolkit or cuDNN installed. A script is provided for downloading the required pretrained models.
Refer to the Windows installation instructions, which describe how to set up a Python virtual environment, clone the repository, and install the necessary dependencies, and which guide you through downloading and organizing the pretrained models from HuggingFace, either manually or with the provided PowerShell script.
Zhongcong Xu · Jianfeng Zhang · Jun Hao Liew · Hanshu Yan · Jia-Wei Liu · Chenxu Zhang · Jiashi Feng · Mike Zheng Shou
National University of Singapore | ByteDance
- [2023.12.4] Released the inference code and Gradio demo. We are working to improve MagicAnimate, so stay tuned!
- [2023.11.23] Released the MagicAnimate paper and project page.
Please download the pretrained base models for StableDiffusion V1.5 and MSE-finetuned VAE.
Download our MagicAnimate checkpoints.
Place them as follows:
magic-animate
|----pretrained_models
    |----MagicAnimate
        |----appearance_encoder
            |----diffusion_pytorch_model.safetensors
            |----config.json
        |----densepose_controlnet
            |----diffusion_pytorch_model.safetensors
            |----config.json
        |----temporal_attention
            |----temporal_attention.ckpt
    |----sd-vae-ft-mse
        |----...
    |----stable-diffusion-v1-5
        |----...
|----...
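Alternatively, the layout above can be fetched programmatically. Below is a minimal sketch using huggingface_hub's snapshot_download; the repo IDs are assumptions based on the model names in the layout and may need adjusting to the actual hosting repositories.

```python
# Minimal sketch: download the pretrained weights into the expected layout.
# The repo IDs below are assumptions; verify them against the model pages.
from huggingface_hub import snapshot_download

targets = {
    "zcxu-eric/MagicAnimate": "pretrained_models/MagicAnimate",                    # assumed repo ID
    "stabilityai/sd-vae-ft-mse": "pretrained_models/sd-vae-ft-mse",                # assumed repo ID
    "runwayml/stable-diffusion-v1-5": "pretrained_models/stable-diffusion-v1-5",   # assumed repo ID
}

for repo_id, local_dir in targets.items():
    # Downloads every file in the repo snapshot into the target directory.
    snapshot_download(repo_id=repo_id, local_dir=local_dir)
```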
Prerequisites: python>=3.8, CUDA>=11.3, and ffmpeg.
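A quick sanity check for these prerequisites is sketched below; the torch import is optional and only used to report the CUDA build, assuming PyTorch has already been installed.

```python
# Minimal sketch: verify the Python version, ffmpeg on PATH, and CUDA availability.
import shutil
import sys

assert sys.version_info >= (3, 8), "Python >= 3.8 is required"
assert shutil.which("ffmpeg"), "ffmpeg not found on PATH"

try:
    import torch
    print("CUDA available:", torch.cuda.is_available(), "| built for CUDA", torch.version.cuda)
except ImportError:
    print("PyTorch not installed yet; install the dependencies first.")
```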
Install with conda:
conda env create -f environment.yaml
conda activate manimate
or pip:
pip3 install -r requirements.txt
Run inference on single GPU:
bash scripts/animate.sh
Run inference with multiple GPUs:
bash scripts/animate_dist.sh
Try our online Gradio demo for a quick start.
Launch the local Gradio demo on a single GPU:
python3 -m demo.gradio_animate
Launch the local Gradio demo if you have multiple GPUs:
python3 -m demo.gradio_animate_dist
Then open the Gradio demo in your local browser.
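For orientation, a local demo wrapper looks roughly like the sketch below. This is not the actual demo.gradio_animate code: run_magic_animate is a hypothetical stand-in for the real pipeline call, and the input fields are illustrative only.

```python
# Minimal sketch of a Gradio wrapper in the spirit of demo.gradio_animate.
import gradio as gr

def run_magic_animate(reference_image, motion_sequence, seed, steps, guidance_scale):
    # Hypothetical placeholder: the real demo feeds these inputs to the
    # MagicAnimate pipeline and returns the path of the rendered video.
    raise NotImplementedError

demo = gr.Interface(
    fn=run_magic_animate,
    inputs=[
        gr.Image(label="Reference image"),
        gr.Video(label="DensePose motion sequence"),
        gr.Number(label="Seed", value=1),
        gr.Number(label="Sampling steps", value=25),
        gr.Number(label="Guidance scale", value=7.5),
    ],
    outputs=gr.Video(label="Animation"),
)

if __name__ == "__main__":
    demo.launch()
```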
We would like to thank AK (@_akhaliq) and the Hugging Face team for their help in setting up the online Gradio demo.
If you find this codebase useful for your research, please cite it using the following BibTeX entry.
@inproceedings{xu2023magicanimate,
author = {Xu, Zhongcong and Zhang, Jianfeng and Liew, Jun Hao and Yan, Hanshu and Liu, Jia-Wei and Zhang, Chenxu and Feng, Jiashi and Shou, Mike Zheng},
title = {MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model},
booktitle = {arXiv},
year = {2023}
}