# Towards Bridging the Gap for Fairness in Knowledge Distillation

## Installation

Install the necessary packages with the following command:

```bash
pip install -r requirements.txt
```

## Data preparation

### CelebA

Download the CelebA dataset (the aligned-and-cropped images version) from the official website to a directory of your choice. Then follow the steps in `PreprocessData.ipynb` to build a fair test benchmark from the original test split, following previous works.
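If you prefer a scripted download, the sketch below uses torchvision's built-in CelebA dataset class, which fetches the aligned-and-cropped images along with the annotation files. This is a convenience sketch, not part of this repository: the `data` root directory is an assumption, so point it wherever `PreprocessData.ipynb` expects the raw files (the Google Drive source can also hit quota limits, in which case a manual download from the official site is needed).

```python
# Convenience sketch (not part of this repo): download the aligned-and-cropped
# CelebA images and annotations via torchvision. The "data" root directory is
# an assumption -- point it wherever PreprocessData.ipynb expects the raw files.
from torchvision import datasets

datasets.CelebA(root="data", split="all", target_type="attr", download=True)
```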

## Training and evaluating

### CelebA

For a quick start, use the scripts provided in the `scripts/` directory. Run all scripts from the root directory of this repository.

1. Train the baseline student and teacher models:

   ```bash
   bash ./scripts/celeba/clip.sh
   bash ./scripts/celeba/clip50.sh
   bash ./scripts/celeba/flava.sh
   bash ./scripts/celeba/res18.sh
   bash ./scripts/celeba/res34.sh
   bash ./scripts/celeba/shuffv2.sh
   ```

2. Run the KD baselines:

   a. BKD

   ```bash
   bash scripts/celeba/kd/clip.sh
   bash scripts/celeba/kd/clip50.sh
   bash scripts/celeba/kd/flava.sh
   bash scripts/celeba/kd/res18.sh
   bash scripts/celeba/kd/res34.sh
   bash scripts/celeba/kd/shuffv2.sh
   ```

   b. FitNet Stage 1

   ```bash
   bash scripts/celeba/fit-s1/clip.sh
   bash scripts/celeba/fit-s1/clip50.sh
   bash scripts/celeba/fit-s1/flava.sh
   bash scripts/celeba/fit-s1/res18.sh
   bash scripts/celeba/fit-s1/res34.sh
   bash scripts/celeba/fit-s1/shuffv2.sh
   ```

   c. FitNet Stage 2

   ```bash
   bash scripts/celeba/fit-s2/clip.sh
   bash scripts/celeba/fit-s2/clip50.sh
   bash scripts/celeba/fit-s2/flava.sh
   bash scripts/celeba/fit-s2/res18.sh
   bash scripts/celeba/fit-s2/res34.sh
   bash scripts/celeba/fit-s2/shuffv2.sh
   ```

   d. AT

   ```bash
   bash scripts/celeba/AT/res18.sh
   bash scripts/celeba/AT/res34.sh
   bash scripts/celeba/AT/shuffv2.sh
   ```

   e. AD

   ```bash
   bash scripts/celeba/AD/clip.sh
   bash scripts/celeba/AD/clip50.sh
   bash scripts/celeba/AD/flava.sh
   bash scripts/celeba/AD/res18.sh
   bash scripts/celeba/AD/res34.sh
   bash scripts/celeba/AD/shuffv2.sh
   ```

   f. MFD

   ```bash
   bash scripts/celeba/mmdv2/clip.sh
   bash scripts/celeba/mmdv2/clip50.sh
   bash scripts/celeba/mmdv2/flava.sh
   bash scripts/celeba/mmdv2/res18.sh
   bash scripts/celeba/mmdv2/res34.sh
   bash scripts/celeba/mmdv2/shuffv2.sh
   ```

3. Run BIRD (our method):

   ```bash
   bash scripts/celeba/bird/v1/clip.sh
   bash scripts/celeba/bird/v1/clip50.sh
   bash scripts/celeba/bird/v1/flava.sh
   bash scripts/celeba/bird/v1/res18.sh
   bash scripts/celeba/bird/v1/res34.sh
   bash scripts/celeba/bird/v1/shuffv2.sh
   ```

## Notes to the users

1. Each of the above scripts launches 5 independent runs of its method and logs results to the wandb server. To collect averaged results, download and parse the wandb project page using pandas (see the sketch after this list).
2. If running the scripts in parallel causes a CUDA out-of-memory error, remove the trailing `&` from each bash command to run the models sequentially.
3. BIRD incurs additional training time per step due to its extra dependencies. We plan to optimize the training time in a subsequent update.
4. To run the remaining experiments, change the model flags within the provided scripts.
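A minimal sketch of the wandb-to-pandas parsing mentioned in note 1, using the public `wandb.Api`. The project path `your-entity/your-project` and the `method` grouping key are placeholders; substitute the entity/project name and config keys your runs actually log.

```python
# Minimal sketch: pull run summaries from wandb and average them with pandas.
# "your-entity/your-project" and the "method" config key are placeholders.
import pandas as pd
import wandb

api = wandb.Api()
rows = []
for run in api.runs("your-entity/your-project"):
    row = {"name": run.name}
    # Keep user-set config values; wandb prefixes internal keys with "_".
    row.update({k: v for k, v in run.config.items() if not k.startswith("_")})
    row.update(run.summary._json_dict)  # final logged metrics
    rows.append(row)

df = pd.DataFrame(rows)
# Average metrics across the 5 independent runs of each method.
print(df.groupby("method").mean(numeric_only=True))
```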