PyTorch release benchmark suite Improvements #2468

Open

atalman opened this issue Sep 24, 2024 · 0 comments

atalman (Contributor) commented Sep 24, 2024

For PyTorch releases we execute the following benchmark suite:
https://github.com/pytorch/benchmark/tree/main/userbenchmark/release-test
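
For context, userbenchmarks in that repository are normally driven through its top-level runner. A hedged example of how release-test is typically invoked (the exact flags and any config file are assumptions, so check the userbenchmark's README):

# Assumed invocation via pytorch/benchmark's top-level run_benchmark.py runner.
git clone https://github.com/pytorch/benchmark.git
cd benchmark
python run_benchmark.py release-test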

These are the tests that we run:

# run mnist
mkdir -p "${RESULT_DIR}/mnist"
pushd "${EXAMPLES_DIR}/mnist"
export LOG_FILE="${RESULT_DIR}/mnist/result.log"
export MEM_FILE="${RESULT_DIR}/mnist/result_mem.log"
${PREFIX} bash "${CURRENT_DIR}/monitor_proc.sh" python main.py --epochs 10

# run mnist-hogwild
mkdir -p "${RESULT_DIR}/mnist_hogwild"
pushd "${EXAMPLES_DIR}/mnist_hogwild"
export LOG_FILE="${RESULT_DIR}/mnist_hogwild/result.log"
export MEM_FILE="${RESULT_DIR}/mnist_hogwild/result_mem.log"
${PREFIX} bash "${CURRENT_DIR}/monitor_proc.sh" python main.py --epochs 10

# run CPU WLM LSTM
mkdir -p "${RESULT_DIR}/wlm_cpu_lstm"
pushd "${EXAMPLES_DIR}/word_language_model"
export LOG_FILE="${RESULT_DIR}/wlm_cpu_lstm/result.log"
export MEM_FILE="${RESULT_DIR}/wlm_cpu_lstm/result_mem.log"
${PREFIX} bash "${CURRENT_DIR}/monitor_proc.sh" python main.py --epochs 10 --model LSTM

# run GPU WLM LSTM
mkdir -p "${RESULT_DIR}/wlm_gpu_lstm"
pushd "${EXAMPLES_DIR}/word_language_model"
export LOG_FILE="${RESULT_DIR}/wlm_gpu_lstm/result.log"
export MEM_FILE="${RESULT_DIR}/wlm_gpu_lstm/result_mem.log"
${PREFIX} bash "${CURRENT_DIR}/monitor_proc.sh" python main.py --epochs 10 --model LSTM --cuda

# run CPU WLM Transformer
mkdir -p "${RESULT_DIR}/wlm_cpu_trans"
pushd "${EXAMPLES_DIR}/word_language_model"
export LOG_FILE="${RESULT_DIR}/wlm_cpu_trans/result.log"
export MEM_FILE="${RESULT_DIR}/wlm_cpu_trans/result_mem.log"
${PREFIX} bash "${CURRENT_DIR}/monitor_proc.sh" python main.py --epochs 10 --model Transformer

# run GPU WLM Transformer
mkdir -p "${RESULT_DIR}/wlm_gpu_trans"
pushd "${EXAMPLES_DIR}/word_language_model"
export LOG_FILE="${RESULT_DIR}/wlm_gpu_trans/result.log"
export MEM_FILE="${RESULT_DIR}/wlm_gpu_trans/result_mem.log"
${PREFIX} bash "${CURRENT_DIR}/monitor_proc.sh" python main.py --epochs 10 --model Transformer --cuda
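
Every run is wrapped in monitor_proc.sh, which is what produces result.log and result_mem.log. The wrapper itself is not shown above; as a rough illustration only (the real script in the repo may differ), a minimal memory-sampling wrapper could look like:

# Hypothetical sketch of a monitor_proc.sh-style wrapper, not the actual script:
# run the wrapped command, capture its output in LOG_FILE, and sample the
# process's resident memory into MEM_FILE roughly once per second.
"$@" > "${LOG_FILE}" 2>&1 &
pid=$!
while kill -0 "${pid}" 2>/dev/null; do
    # VmRSS is the resident set size of the benchmark process (Linux only)
    grep VmRSS "/proc/${pid}/status" >> "${MEM_FILE}" 2>/dev/null
    sleep 1
done
wait "${pid}"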

The models are taken from https://github.com/pytorch/examples.
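
For anyone reproducing the runs locally, a minimal setup sketch follows. The variable names match the script above, but the directory layout and the empty PREFIX are assumptions, not the official harness setup:

# Assumed local layout; the release-test userbenchmark prepares these itself.
git clone https://github.com/pytorch/examples.git
export EXAMPLES_DIR="$(pwd)/examples"
export RESULT_DIR="$(pwd)/results"
export CURRENT_DIR="$(pwd)"   # directory containing monitor_proc.sh
export PREFIX=""              # optional launcher, e.g. numactl or taskset
mkdir -p "${RESULT_DIR}"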

Improvement suggestions:
