
Anyone with Successful Training? #10

Open
Ahmer-444 opened this issue Nov 2, 2021 · 4 comments

Comments

@Ahmer-444

Hi!
Is anyone out there able to replicate the results with this training stack? If yes, with which resolution, GPU configuration, number of epochs, and dataset?

I'm able to run the whole pipeline at 224x224 resolution for about 1500 epochs on a dataset of around 10,000 face images generated by StyleGAN2. I have trained for around 4 days on a Tesla V100 but still can't see any results so far. Should I train it longer, even though the "Experiment" section of the original paper mentions only 500 epochs?

@ztf512

ztf512 commented Nov 5, 2021

I have trained this net for about 2.5 days on a single GTX 1080Ti (840 epochs). The training dataset I used was made with make_dataset.py from CelebA (800 images and their latents were selected for training). The training result for each epoch is saved in the name/web/images folder, and the training process looks OK to me.

When I test my trained net with test_one_image.py, I run into a problem: the generated image is blurry, with white noise artifacts. After reading readme.md, I found that test_one_image.py needs a small modification, like this:

```python
from util.norm import SpecificNorm  # add
spNorm = SpecificNorm()             # add

# before:
#   img_fake = model(img_id, img_att, latend_id, latend_id, True)
# after (normalize the attribute image first):
img_fake = model(img_id, spNorm(img_att), latend_id, latend_id, True)
```

Finally the generated image becomes normal as expected.
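For context, SpecificNorm applies a fixed per-channel normalization to the attribute image before it enters the generator. A stand-alone sketch of that kind of preprocessing (the mean/std values below are the standard ImageNet statistics, an assumption on my part, not copied from util/norm.py):

```python
# Toy sketch of ImageNet-style per-channel normalization, the kind of
# preprocessing SpecificNorm applies to img_att. The statistics below
# are the usual ImageNet values (assumed, not taken from the repo).

IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)

def normalize_pixel(rgb):
    """Normalize one RGB pixel (channel values in [0, 1]) per channel."""
    return tuple(
        (value - mean) / std
        for value, mean, std in zip(rgb, IMAGENET_MEAN, IMAGENET_STD)
    )

# A pixel exactly at the mean maps to all zeros:
print(normalize_pixel((0.485, 0.456, 0.406)))  # (0.0, 0.0, 0.0)
```

If the model was trained on normalized attribute images but tested on unnormalized ones, the input distribution mismatch would plausibly explain the washed-out, noisy output described above.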

@Ahmer-444
Author

Hi @ztf512 !
Thanks for your reply. It gives me a way to move forward and some confidence that, if trained for a longer period, we can get something desirable.

I used a 10K dataset of StyleGAN-generated images at 1024x1024 resolution. I resized them to 224x224, and make_dataset reduced the count to around ~5K images.
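The 1024x1024 to 224x224 step is a plain downscale; as a toy illustration of the index mapping behind nearest-neighbor resizing (a real pipeline would use PIL or OpenCV, and likely a better interpolation filter):

```python
def resize_nearest(image, out_h, out_w):
    """Nearest-neighbor resize of a 2D image stored as a list of rows."""
    in_h, in_w = len(image), len(image[0])
    return [
        [image[y * in_h // out_h][x * in_w // out_w] for x in range(out_w)]
        for y in range(out_h)
    ]

# Downscaling a 4x4 image to 2x2 keeps every other sample:
img = [[0, 1, 2, 3],
       [4, 5, 6, 7],
       [8, 9, 10, 11],
       [12, 13, 14, 15]]
print(resize_nearest(img, 2, 2))  # [[0, 2], [8, 10]]
```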

I trained it for around 1400 epochs on a Tesla V100 for about 4 days, but the results didn't look promising, as attached.

[attached images: 0000001, saim, result]

===========================================================================================

Can you please share some of your results here?

@zhangyunming

Same here.

@tiansw1

tiansw1 commented Nov 18, 2021


It finally worked for me on video swapping. The problem is in util/videoswap.py: you need to set crop_size=512 in the video_swap function. You can test a video swap first, then trace through that code to figure out how to swap a single image.
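For anyone hitting the same mismatch: crop_size only controls the side length of the face patch handed to the model. Conceptually it is a center crop, as in this toy sketch (not the repo's actual code, which goes through face detection and alignment first):

```python
def center_crop(image, crop_size):
    """Center-crop a 2D image (list of rows) to crop_size x crop_size."""
    h, w = len(image), len(image[0])
    top = (h - crop_size) // 2
    left = (w - crop_size) // 2
    return [row[left:left + crop_size] for row in image[top:top + crop_size]]

# Cropping a 4x4 image to 2x2 keeps the central block:
img = [[r * 4 + c for c in range(4)] for r in range(4)]
print(center_crop(img, 2))  # [[5, 6], [9, 10]]
```

If the model was trained on 512x512 patches, feeding it patches cropped at a different size would give it out-of-distribution inputs, which matches the bad results reported above.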
