
About the degradation results of training DynamiCrafter #8

Open
leoisufa opened this issue Aug 1, 2024 · 1 comment

Comments

leoisufa commented Aug 1, 2024

Thanks for your wonderful work!

I tried to reproduce the training results using your DynamiCrafter code. However, I got some degraded results, like:

5_sample0.mp4
14_sample0.mp4
18_sample0.mp4

I only changed the batch size from 2 to 8 in train_512.yaml, because I use 8 × A800-80G GPUs; following the setting in your paper, the total batch size is 8 × 8 = 64. The videos above were generated from the checkpoint at 10,000 steps. Since your paper trains DynamiCrafter for only 20,000 steps, I would not expect the 10,000-step checkpoint to produce such unsatisfactory results. Can you give some advice for reproducing your results?
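For clarity, the effective-batch-size arithmetic above can be sketched as follows (a minimal illustration; the variable names are mine, not from train_512.yaml):

```python
# Effective (global) batch size = per-GPU batch size x number of GPUs.
per_gpu_batch_size = 8   # changed from 2 in train_512.yaml
num_gpus = 8             # 8 x A800-80G
global_batch_size = per_gpu_batch_size * num_gpus
print(global_batch_size)  # 64, matching the paper's total batch size
```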

zhuhz22 (Collaborator) commented Aug 4, 2024

Hi @leoisufa,
Thank you for your kind words, and sorry for the late reply. The degradation comes from examples/DynamiCrafter/lvdm/models/ddpm3d.py#45, where mu_max=4 should be mu_max=1, matching the paper. I'm really sorry: I changed this parameter while tuning hyperparameters for the 1024-resolution training and forgot to change it back. It is now fixed in the repo.
