We use the conditional Deep Convolutional Generative Adversarial Network (c-DCGAN) model to generate animated faces and emojis from random noise fed as input.
1. PyTorch
Create a folder named data. Inside it, create one folder per class label (n folders for n classes), then place all images into the corresponding class-label folder.
Root
│
├── data
│   └── train
│       ├── ONE
│       │   ├── ONE_0.png
│       │   ├── ONE_1.png
│       │   └── ...
│       ├── TWO
│       │   ├── TWO_0.png
│       │   ├── TWO_1.png
│       │   └── ...
│       └── ...
│
├── c-dcgan.ipynb
├── G_chk.pt
└── D_chk.pt
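The layout above can be created from the shell; the class names ONE and TWO are just example labels, so substitute your own:

```shell
# Create the train directory with one sub-folder per class label.
mkdir -p data/train/ONE data/train/TWO

# Then copy your images into the matching class folder, e.g.:
# cp /path/to/ones/*.png data/train/ONE/
```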
The larger the number of images and the more variation among them, the higher the classification accuracy.
Generator (
(DCV1_data): ConvTranspose2d(100, 256, kernel_size=(4, 4), stride=(1, 1))
(BN1_data): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True)
(DCV1_label): ConvTranspose2d(2, 256, kernel_size=(4, 4), stride=(1, 1))
(BN1_label): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True)
(DCV2): ConvTranspose2d(512, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(BN2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True)
(DCV3): ConvTranspose2d(256, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(BN3): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True)
(DCV4): ConvTranspose2d(128, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(BN4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True)
(DCV5): ConvTranspose2d(64, 3, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
)
Discriminator (
(CV1_data): Conv2d(3, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(CV1_label): Conv2d(2, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(CV2): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(BN2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True)
(CV3): Conv2d(256, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(BN3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)
(CV4): Conv2d(512, 1024, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(BN4): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True)
(CV5): Conv2d(1024, 1, kernel_size=(4, 4), stride=(1, 1))
)
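The layer names and shapes in the generator summary above can be wired together as a PyTorch module. The sketch below is an assumption about the forward pass: the noise and the one-hot label are projected by their own transposed-convolution branches, concatenated along the channel axis, and upsampled to a 64x64 RGB image; the ReLU/tanh activations are standard DCGAN choices, not taken from the printed summary.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Conditional generator matching the layer summary above.

    The concatenation of the data and label branches is an assumed
    wiring; activations (ReLU, tanh) are conventional DCGAN choices.
    """
    def __init__(self, nz=100, n_classes=2):
        super().__init__()
        self.DCV1_data = nn.ConvTranspose2d(nz, 256, 4, 1)
        self.BN1_data = nn.BatchNorm2d(256)
        self.DCV1_label = nn.ConvTranspose2d(n_classes, 256, 4, 1)
        self.BN1_label = nn.BatchNorm2d(256)
        self.DCV2 = nn.ConvTranspose2d(512, 256, 4, 2, 1)
        self.BN2 = nn.BatchNorm2d(256)
        self.DCV3 = nn.ConvTranspose2d(256, 128, 4, 2, 1)
        self.BN3 = nn.BatchNorm2d(128)
        self.DCV4 = nn.ConvTranspose2d(128, 64, 4, 2, 1)
        self.BN4 = nn.BatchNorm2d(64)
        self.DCV5 = nn.ConvTranspose2d(64, 3, 4, 2, 1)

    def forward(self, z, y):
        # z: (N, nz, 1, 1) noise; y: (N, n_classes, 1, 1) one-hot label
        h_z = torch.relu(self.BN1_data(self.DCV1_data(z)))    # (N, 256, 4, 4)
        h_y = torch.relu(self.BN1_label(self.DCV1_label(y)))  # (N, 256, 4, 4)
        h = torch.cat([h_z, h_y], dim=1)                      # (N, 512, 4, 4)
        h = torch.relu(self.BN2(self.DCV2(h)))                # (N, 256, 8, 8)
        h = torch.relu(self.BN3(self.DCV3(h)))                # (N, 128, 16, 16)
        h = torch.relu(self.BN4(self.DCV4(h)))                # (N, 64, 32, 32)
        return torch.tanh(self.DCV5(h))                       # (N, 3, 64, 64)

G = Generator()
labels = torch.eye(2)[[0, 1, 0, 1]].view(4, 2, 1, 1)  # one-hot class labels
img = G(torch.randn(4, 100, 1, 1), labels)
print(img.shape)  # torch.Size([4, 3, 64, 64])
```

The discriminator summary mirrors this: its CV1_data and CV1_label branches produce 64 channels each, which concatenate to the 128 input channels of CV2.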
Run the c-dcgan.ipynb Jupyter notebook to start the training.
Once training is completed, the resulting checkpoints are stored in the main folder as D_chk_100.pt and G_chk_100.pt.
- Train.
- Save/Load checkpoint.
- Generate.
- UI for input, output and to download checkpoint.
@inproceedings{aneja2016modeling,
title={Modeling Stylized Character Expressions via Deep Learning},
author={Aneja, Deepali and Colburn, Alex and Faigin, Gary and Shapiro, Linda and Mones, Barbara},
booktitle={Asian Conference on Computer Vision},
pages={136--153},
year={2016},
organization={Springer}
}