
StyleGAN3 CLIP-based guidance

StyleGAN3 + CLIP: Open in Colab

StyleGAN3 + inversion + CLIP: Open in Colab

This repo is a collection of Jupyter notebooks made to easily play with StyleGAN3 [1] and CLIP [2] for text-based guided image generation.
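At a high level, both notebooks follow the same loop: CLIP scores how well the generated image matches a text prompt, and that score is backpropagated into StyleGAN3's latent code. The snippet below is a minimal, hypothetical sketch of that idea, not the notebooks' actual code: the prompt, learning rate, step count, and the network_pkl path are placeholders; dnnlib and legacy are the loading helpers from NVIDIA's stylegan3 repo and must be importable; CLIP's usual input normalization is omitted for brevity.

```python
import torch
import torch.nn.functional as F
import clip            # OpenAI CLIP: pip install git+https://github.com/openai/CLIP
import dnnlib, legacy  # loading helpers from NVIDIA's stylegan3 repo (must be on sys.path)

device = "cuda" if torch.cuda.is_available() else "cpu"

# Encode the guiding text prompt with CLIP (prompt is just an example).
clip_model, _ = clip.load("ViT-B/32", device=device)
clip_model = clip_model.float().eval()
tokens = clip.tokenize(["a watercolor painting of a fox"]).to(device)
with torch.no_grad():
    text_feat = F.normalize(clip_model.encode_text(tokens), dim=-1)

# Load a pretrained StyleGAN3 generator (placeholder path; use a real .pkl).
network_pkl = "path/to/stylegan3-model.pkl"
with dnnlib.util.open_url(network_pkl) as f:
    G = legacy.load_network_pkl(f)["G_ema"].to(device)

# Optimize a latent code w so the generated image matches the prompt.
z = torch.randn([1, G.z_dim], device=device)
w = G.mapping(z, None).detach().requires_grad_(True)   # shape [1, num_ws, w_dim]
opt = torch.optim.Adam([w], lr=0.05)

for step in range(200):
    img = (G.synthesis(w) + 1) / 2                      # [-1, 1] -> [0, 1]
    img = F.interpolate(img, size=224, mode="bilinear", align_corners=False)
    img_feat = F.normalize(clip_model.encode_image(img.clamp(0, 1)), dim=-1)
    loss = 1 - (img_feat * text_feat).sum()             # cosine distance to the prompt
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The notebooks add further tricks on top of this basic loop (for example, better sampling and image augmentations before scoring with CLIP), but the core mechanism is the same.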

Both notebooks are heavily based on this notebook, created by nshepperd (thank you!).

Special thanks also to Katherine Crowson for coming up with many improved sampling tricks, as well as some of the code.

Feel free to suggest any changes! If anyone has an idea of what license this repo should use, please let me know.

Footnotes

  1. StyleGAN3 was created by NVIDIA. Here is the original repo.

  2. CLIP (Contrastive Language-Image Pre-Training) is a multimodal model made by OpenAI. For more information, head over here.
