- Clone the repository.
- Install the necessary packages:
$ pip install -r requirements.txt
- Download the benchmark databases and required data:
$ bash downloaddata.sh
In our experiments, we do not directly use the benchmark APIs published in their repositories (e.g., NAS-Bench-101, NAS-Bench-201). Instead, we build smaller databases by querying those APIs and logging only the content we need.
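The idea of logging only the necessary fields can be sketched as follows. This is a minimal, hypothetical illustration, not the repo's actual extraction code: the `full_benchmark` dict stands in for a real benchmark API object, and the field names (`arch_str`, `test_acc`, `params`, `train_times`) are placeholders assumed for the example.

```python
import pickle

# Hypothetical stand-in for a full benchmark API (e.g., NAS-Bench-201).
# Real API objects are far larger and carry many fields we never query.
full_benchmark = {
    0: {"arch_str": "|nor_conv_3x3~0|", "test_acc": 91.2, "params": 1.1,
        "train_times": [30.5] * 200},  # heavy field we do not need
    1: {"arch_str": "|skip_connect~0|", "test_acc": 88.7, "params": 0.4,
        "train_times": [28.1] * 200},
}

def build_small_db(api, keep=("arch_str", "test_acc", "params")):
    """Copy only the fields needed for evaluation, dropping the rest."""
    return {idx: {k: entry[k] for k in keep} for idx, entry in api.items()}

# Serialize the reduced database so experiments can run without the full API.
small_db = build_small_db(full_benchmark)
with open("small_db.pkl", "wb") as f:
    pickle.dump(small_db, f)
```

The reduced pickle keeps lookups by architecture index but omits bulky training logs, which is what makes the resulting database much smaller than the published one.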
You can reproduce our results by running the scripts below:
$ python train.py --benchmark <DARTS, NASNet, ENAS, PNAS, Amoeba, NB201, NB101, Macro, all>
All weight files from the training process are provided here.
$ python test.py --checkpoint /path/to/checkpoint
$ python search.py --checkpoint /path/to/checkpoint
@misc{le2023efficacy,
      title={Efficacy of Neural Prediction-Based Zero-Shot NAS},
      author={Minh Le and Nhan Nguyen and Ngoc Hoang Luong},
      year={2023},
      eprint={2308.16775},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
Our source code builds on the following works:
- NAS-Bench-101: Towards Reproducible Neural Architecture Search
- NAS-Bench-201: Extending the Scope of Reproducible Neural Architecture Search
- NAS-Bench-Macro: Prioritized Architecture Sampling with Monte-Carlo Tree Search
- NDS: Designing Network Design Spaces
- ZenNAS: A Zero-Shot NAS for High-Performance Deep Image Recognition
- Fast Differentiable Sorting and Ranking