- Create a project directory from the command line: `mkdir AnimeCharImg-Gnrtr`
- Create a virtual environment and activate it (example commands below).
- Install the required packages: `pip install --upgrade gradio`
- Create a simple demo with Gradio and launch it.
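A minimal sketch of these setup steps on the command line (the environment name `venv` is an assumption, and the activation command varies by platform):

```bash
# Create the project directory and enter it
mkdir AnimeCharImg-Gnrtr
cd AnimeCharImg-Gnrtr

# Create a virtual environment and activate it
python -m venv venv
source venv/bin/activate      # macOS/Linux
# venv\Scripts\activate       # Windows (cmd)

# Install the required packages
pip install --upgrade gradio
```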
This creates a Gradio interface using the `gr.Interface` class. It registers the `make_image` function as the main function, defines the input sliders and their ranges, sets the title and CSS styling (which controls the displayed image's size), and specifies the output as an image. Finally, the `.launch()` method is called to launch the Gradio interface; a sketch of this call appears at the end of the walkthrough below.
## Creating the application

The code sets up a Gradio interface for creating anime characters using a pre-trained model. Here is a brief explanation of each step:
- Import the necessary libraries for image transformation, PyTorch, and Gradio.
```python
import requests
import torch
import torch.nn as nn
from torchvision.transforms.functional import to_pil_image
from PIL import Image  # For resizing images with high quality
import gradio as gr
```
- Download the pretrained model and save it to disk.
url = "https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-GPXX0GD8EN/G_trained.pth"
response = requests.get(url)
with open("G_trained.pth", "wb") as f:
f.write(response.content)
- This defines the size of the latent vector, which is the input to the generator.
```python
latent_vector_size = 128
```
- This block of code defines a custom generator class that inherits from `nn.Module` in PyTorch. It specifies the architecture of the generator network.
```python
class Generator(nn.Module):
    def __init__(self):
        super(Generator, self).__init__()
        # Upsample the latent vector through transposed convolutions:
        # (latent_vector_size, 1, 1) -> (3, 64, 64)
        self.main = nn.Sequential(
            nn.ConvTranspose2d(latent_vector_size, 64 * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(64 * 8),
            nn.ReLU(True),
            nn.ConvTranspose2d(64 * 8, 64 * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(64 * 4),
            nn.ReLU(True),
            nn.ConvTranspose2d(64 * 4, 64 * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(64 * 2),
            nn.ReLU(True),
            nn.ConvTranspose2d(64 * 2, 64, 4, 2, 1, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(True),
            nn.ConvTranspose2d(64, 3, 4, 2, 1, bias=False),
            nn.Tanh()  # squash pixel values to [-1, 1]
        )

    def forward(self, input):
        return self.main(input)
```
- This creates an instance of the Generator class, which represents the generator network.
```python
# Load the generator model
G = Generator()
```
- These lines set the device to CPU and load the saved state dictionary of the generator model from the `G_trained.pth` file.
```python
device = torch.device("cpu")
G.load_state_dict(torch.load("G_trained.pth", map_location=device))
```
- This defines a function named `make_image` that takes parameters `a`, `b`, and `value` and generates an image using the generator network.
```python
def make_image(a, b, value):
```
- This creates the latent vector `z` from random noise scaled by `a` plus the bias `b`, then generates an image by passing `z` through the generator network `G`. The resulting output is detached from the computational graph, and the dimensions are squeezed to remove any singleton dimensions.
```python
    z = a * torch.randn(1, latent_vector_size, 1, 1) + b
    Xhat = G(z)[0].detach().squeeze(0)
```
- This performs min-max normalization on the image tensor `Xhat`, scaling its values between 0 and 1.
```python
    Xhat = (Xhat - Xhat.min()) / (Xhat.max() - Xhat.min())
```
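- The walkthrough stops at normalization, but the feature summary below mentions converting the result to a high-quality PIL image resized to 512x512. A plausible continuation of `make_image` (the `LANCZOS` filter is an assumption, not confirmed by the source) is:

```python
    # Convert the normalized (C, H, W) tensor to a PIL image and
    # upscale it to 512x512; LANCZOS is one high-quality filter choice
    img = to_pil_image(Xhat)
    return img.resize((512, 512), Image.LANCZOS)
```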
- These lines define the title and CSS styling for the Gradio interface.
title = "Anime Character Generator"
css = ".output_image {height: 60rem !important; width: 100% !important;}"
- Interactive Sliders: Adjust variation, bias, and latent variable value.
- High-Quality Resizing: Output images are resized to 512x512 using high-quality filtering.
- Gradio Interface: Simple and intuitive UI for generating anime characters.
- Model Initialization: Loads a pre-trained generator model.
- Image Generation:
  - A latent vector (`z`) is created using random noise scaled by the `Variation` slider.
  - The generator processes `z` to create a raw image tensor.
  - The image is normalized and converted to a high-quality PIL image.
- Gradio Interface:
  - Users adjust sliders to control `Variation`, `Bias`, and `Fine Tune`.
This project uses a `Dockerfile` to containerize a Python application.
- Clone the Repository: ensure all necessary files (`Dockerfile`, `requirements.txt`, and `demo.py`) are in the same directory.
- Build the Docker Image: run this command to build the Docker image: `docker build -t python-app .`
- Run the Docker Container: use this command to start the container: `docker run -it --rm python-app`
- The `Dockerfile` does the following:
  - Uses a Python base image.
  - Sets `/app` as the working directory.
  - Copies `requirements.txt` and installs dependencies.
  - Copies all application files into the image.
  - Runs `demo.py` when the container starts.
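Based on that description, a minimal `Dockerfile` might look like this sketch (the base image tag is an assumption):

```dockerfile
# Use a Python base image (the exact tag is an assumption)
FROM python:3.11-slim

# Set /app as the working directory
WORKDIR /app

# Copy requirements.txt and install dependencies first, so this
# layer is cached when only the application code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy all application files into the image
COPY . .

# Run demo.py when the container starts
CMD ["python", "demo.py"]
```

Note that `docker run -it --rm python-app` does not publish a port; to reach the Gradio app from the host you would typically add `-p 7860:7860` (Gradio's default port) and launch with `server_name="0.0.0.0"` as in the interface sketch earlier.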
- Docker installed on your system.
- A `requirements.txt` file with dependencies.
- A `demo.py` script as the application entry point.
If you want to test locally, you can install the dependencies:
```bash
pip install -r requirements.txt
```
Here are two YouTube videos that helped me understand this concept better:
- How to install Docker on Windows - 2024 [step by step guide]: a step-by-step guide covering how to set up and enable WSL2 and then install Docker.
- How to “Dockerize” Your Python Applications | How To Build And Run A Python App In Docker Container: Docker is an open-source platform that makes it easy to build, ship, and run distributed applications.