This repository contains the code to reproduce the results of the paper:
The code is based on the PyTorch machine learning library.
If you want to use t-softmax in your classifiers/neural networks, you can find the modules in src/tsoftmax.py.
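For intuition, here is a minimal pure-Python sketch of a Student-t style softmax: a heavy-tailed kernel that decays polynomially rather than exponentially, so far-away (out-of-distribution) inputs receive less extreme confidence. The function names, the distance-based formulation, and the nu parameter below are illustrative assumptions only; refer to src/tsoftmax.py for the actual PyTorch implementation.

```python
def t_kernel(d, nu=1.0):
    # Student-t kernel over a non-negative squared distance d.
    # Heavier tails than exp(-d): probabilities saturate more slowly.
    # Hypothetical sketch -- the real module may use a different form.
    return (1.0 + d / nu) ** (-(nu + 1.0) / 2.0)

def t_softmax(dists, nu=1.0):
    # dists: squared distances to each class representative;
    # smaller distance -> larger probability. Normalizes to sum to 1.
    ks = [t_kernel(d, nu) for d in dists]
    s = sum(ks)
    return [k / s for k in ks]
```

With nu=1 the kernel reduces to 1/(1+d), so a class at distance 0 dominates classes at larger distances without driving their probabilities to numerical zero.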
We use conda to create a reproducible environment. Run:
conda env create -f conda_env.yml
to install the dependencies, then activate the environment:
conda activate tsoftmax
In path.sh, set PYPATH to the bin directory of the environment you just created, e.g.:
PYPATH="/path_to_conda/miniconda3/envs/tsoftmax/bin"
and set LSUNPATH to the location of the LSUN dataset:
LSUNPATH="/path_to_lsun/"
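After both edits, path.sh would contain something like the following (the paths are the placeholder examples from above, not real locations):

```shell
# path.sh -- local configuration (example values only)
PYPATH="/path_to_conda/miniconda3/envs/tsoftmax/bin"   # bin dir of the conda env
LSUNPATH="/path_to_lsun/"                              # root of the LSUN dataset
```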
To train a fmnist classifier run:
bash run_bw.sh fmnist
To train a kmnist classifier run:
bash run_bw.sh kmnist
This will train several models and save them in the models folder.
Run the evaluation scripts with:
bash run_eval_bw.sh
Finally, you can view the results by running:
python plot_fom.py --arch convnet --data fmnist
and
python plot_fom.py --arch convnet --data kmnist
For the CIFAR10 experiments, the procedure is similar:
- Training:
bash run.sh
(Note: training each model can be time-consuming. You may want to comment out some lines in this script and run them in parallel on different machines.)
- Confidence measures:
bash run_eval.sh
- Visualizing results:
python plot_fom.py --arch densenet --data cifar10