
METU CENG 796 - Deep Generative Models - Paper re-implementation projects

Paper re-implementation projects from the course METU CENG796 Deep Generative Models (Spring 2020, 2021, 2022, 2023 and 2024).

Each project involves the re-implementation of a recent paper, typically from a top-tier machine learning / computer vision conference, by a student pair. An initial version of each project is peer-reviewed by the project groups. The goal has been to reproduce each paper based on the paper itself, ideally without consulting existing public implementations, where available. More information regarding the development process can be found on the course homepages.

The Jupyter notebooks with pre-computed outputs, the source code, and the obtained results, together with discussions of the difficulties encountered during the projects, can be found in the project folders listed below. Please note that some of the pre-computed outputs (animated plots, etc.) may not appear correctly in GitHub's notebook rendering.
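
If GitHub's preview does not render a notebook's pre-computed outputs, the notebooks can be viewed locally instead. Below is a minimal sketch (not part of the repository) that lists the project notebooks and converts one to standalone HTML; it assumes the repository has been cloned into a folder named deep-generative-models-course-projects and that Jupyter with nbconvert is installed (e.g. via pip install nbconvert).

```python
# Minimal sketch: locate the project notebooks and render one to HTML locally,
# so that animated plots and other pre-computed outputs display correctly.
# Assumes the repo is cloned to ./deep-generative-models-course-projects and
# that `jupyter nbconvert` is available on the PATH.
from pathlib import Path
import subprocess

repo_root = Path("deep-generative-models-course-projects")

# Each project lives in its own top-level folder; collect the notebooks inside them.
notebooks = sorted(repo_root.glob("*/*.ipynb"))
for nb in notebooks:
    print(nb)

# Convert the first notebook (with its saved outputs) to a standalone HTML file.
if notebooks:
    subprocess.run(
        ["jupyter", "nbconvert", "--to", "html", str(notebooks[0])],
        check=True,
    )
```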

In this repository, the projects are shared as is, with no guarantees, with the kind permission of the students. Nearly all projects are licensed under the standard MIT license. Please check the project folders for more information, including details of the project groups.

Spring 2024 projects

  • DMD - One-step Diffusion with Distribution Matching Distillation (CVPR'24)
  • Diff-Retinex - Diff-Retinex: Rethinking Low-light Image Enhancement with A Generative Diffusion Model (CVPR'23)
  • DiffDis - DiffDis: Empowering Generative Diffusion Model with Cross-Modal Discrimination Capability (ICCV'23)
  • KD-DLGAN - KD-DLGAN: Data Limited Image Generation via Knowledge Distillation (CVPR'23)
  • MasterStyleTransfer - Master: Meta Style Transformer for Controllable Zero-Shot and Few-Shot Artistic Style Transfer (CVPR'23)
  • SeD - SeD: Semantic-Aware Discriminator for Image Super-Resolution (CVPR'24)
  • VideoDIP - Video Decomposition Prior: Editing Videos Layer by Layer

Spring 2023 projects

  • SaShiMi - It's Raw! Audio Generation with State-Space Models (ICML'22)
  • MVCGAN - Multi-View Consistent Generative Adversarial Networks for 3D-aware Image Synthesis (CVPR'22)
  • GANSEG - GANSeg: Learning to Segment by Unsupervised Hierarchical Image Generation (CVPR'22)
  • Styleformer - Styleformer: Transformer based Generative Adversarial Networks with Style Vector (CVPR'22)
  • GPNN - Drop The GAN: In Defense of Patch Nearest Neighbors as Single Image Generative Models (CVPR'22)
  • FurryGAN - FurryGAN: High Quality Foreground-aware Image Synthesis (ECCV'22)
  • ProtVAE - ProtVAE: End-to-End deep structure generative model for protein design
  • GenDA - Few-shot cross-domain image generation via inference-time latent-code learning (ICLR'23)

Spring 2022 projects

  • AWGAN - Adaptive Weighted Discriminator for Training Generative Adversarial Networks (CVPR'21)
  • AttnFlow - Generative Flows with Invertible Attentions (CVPR'22)
  • DC-VAE - Dual Contradistinctive Generative Autoencoder (CVPR'21)
  • HeadGAN - HeadGAN: One-shot Neural Head Synthesis and Editing (ICCV'21)
  • HistoGAN - HistoGAN: Controlling Colors of GAN-Generated and Real Images via Color Histograms (CVPR'21)
  • NCSNv2 - Improved Techniques for Training Score-Based Generative Models (NeurIPS'20)
  • PD-GAN - PD-GAN: Probabilistic Diverse GAN for Image Inpainting (CVPR'21)
  • StyleSwinGAN - StyleSwin: Transformer-based GAN for High-resolution Image Generation (CVPR'22)

Spring 2021 projects

  • DeblurGANv2 - DeblurGAN-v2: Deblurring (Orders-of-Magnitude) Faster and Better
  • MSG-GAN - MSG-GAN: Multi-Scale Gradients for Generative Adversarial Networks
  • STGAN - STGAN: A Unified Selective Transfer Network
  • TransGAN - TransGAN: Two Transformers Can Make One Strong GAN, and That Can Scale Up
  • TuiGAN - TuiGAN: Learning Versatile Image-to-Image Translation with Two Unpaired Images
  • U-GAT-IT - Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation
  • UNetGAN - A U-Net Based Discriminator for Generative Adversarial Networks

Spring 2020 projects

  • CGWIN - Generative Well-intentioned Networks
  • DualGAN - DualGAN: Unsupervised Dual Learning for Image-to-Image Translation
  • DupGAN - Duplex Generative Adversarial Network for Unsupervised Domain Adaptation
  • HoloGAN - HoloGAN: Unsupervised Learning of 3D Representations From Natural Images
  • MGAN - MGAN: Training Generative Adversarial Nets with Multiple Generators
  • MarginGAN - MarginGAN: Adversarial Training in Semi-Supervised Learning
  • NbrReg - Deep Semantic Text Hashing with Weak Supervision
  • ProGAN - Progressive Growing of GANs for Improved Quality, Stability, and Variation
  • RGAN - The Relativistic Discriminator: A Key Element Missing from Standard GAN
  • SWGAN - Generative Modeling using the Sliced Wasserstein Distance
  • SinGAN - SinGAN: Learning a Generative Model from a Single Natural Image
  • SphereGAN - Sphere Generative Adversarial Network Based on Geometric Moment Matching
  • WAE - Wasserstein Auto-Encoders
