
Wölfflin Affective Generative Analysis

This repo contains the implementation of the StyleCAN1 and StyleCAN2 models proposed in the paper Wölfflin Affective Generative Analysis for Visual Art, published at the International Conference on Computational Creativity (ICCC) 2021. The implementations of the StyleGAN1 and StyleGAN2 models are taken from rosinality/style-based-gan-pytorch and rosinality/stylegan2-pytorch, respectively.

Requirements

  • PyTorch
  • torchvision
  • CUDA 10.1/10.2
  • numpy
  • scipy
  • matplotlib
  • pandas
  • pillow
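Package versions are not pinned in this repo. As a minimal sketch, the Python dependencies can be installed with pip as shown below; pick the torch/torchvision build that matches your CUDA 10.1/10.2 installation (see pytorch.org for the exact install command for your setup).

pip install torch torchvision numpy scipy matplotlib pandas pillow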

Usage

First, download the WikiArt dataset from here.

Extract the data into the root directory of this project.
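For example, assuming the dataset was downloaded as an archive named wikiart.zip (the archive name is an assumption; use the filename you actually downloaded), it can be extracted from the project root as follows, producing the wikiart/ folder listed in the directory structure below.

unzip wikiart.zip -d .   # should create ./wikiart containing the dataset images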

StyleCAN1

To train StyleCAN1, run:

cd StyleCAN1
python -u train.py \
        --sched \
        --max_size=256 \
        --use_CAN

Generate images using a pretrained model.

Download the StyleCAN1 and StyleGAN1 pretrained models, put them into the StyleCAN1 folder, and then run

python generate.py --ckpt stylecan1.pt --size 256 # to run StyleCAN1

python generate.py --ckpt stylegan1.pt --size 256 # to run StyleGAN1

StyleCAN2

To train StyleCAN2, run:

cd StyleCAN2
python -m torch.distributed.launch --nproc_per_node=4 --master_port=12895 train.py \
        --batch=32 \
        --n_sample=25 \
        --size=256 \
        --use_CAN

Generate images using a pretrained model.

Download the StyleCAN2 and StyleGAN2 pretrained models, put them into the StyleCAN2 folder, and then run

python generate.py --ckpt stylecan2.pt --size 256 # to run StyleCAN2

python generate.py --ckpt stylegan2.pt --size 256 # to run StyleGAN2

Run analysis

To run the analysis notebooks, first download the precomputed features for the generated and real images from here, and extract them into analyses_notebooks/datasets/features (one way to do this is sketched at the end of this section). These features are only required for get_best_correlation_coefficients_for_wofflin_principle.ipynb; the other analysis notebooks can be run without them. In your terminal, change to the root folder of this project and start a Jupyter server using

jupyter notebook

Navigate to the analyses_notebooks folder and run the notebook of your choice.
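As a minimal sketch of placing the precomputed features, assuming they were downloaded as an archive named features.zip (the filename is an assumption; use the file you actually downloaded):

mkdir -p analyses_notebooks/datasets/features                 # create the expected folder
unzip features.zip -d analyses_notebooks/datasets/features    # extract the downloaded features into it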

Directory structure

├── analyses_notebooks
│   ├── datasets
│   │   ├── features <- Extracted features go here
│   │   ├── processed
│   │   └── raw
│   ├── emotion_plot.ipynb
│   ├── get_best_correlation_coefficients_for_wofflin_principle.ipynb
│   └── train_wofflin_to_emotion.ipynb
├── LICENSE
├── README.md
├── samples
├── StyleCAN1
│   ├── stylecan1.pt  <- Downloaded model here
├── StyleCAN2
│   ├── stylecan2.pt  <- Downloaded model here
└── wikiart <- Extracted wikiart dataset goes here.

Sample Results

StyleCAN1

StyleCAN1 sample

StyleCAN2

StyleCAN2 sample
