Fix a bug in test_bench (#2069)
Summary:
Fix a metric-name check in `run_config`: the latency metric is named `latencies` (as in the output below), but the code compared against `"latency"`, so the latency branch never matched and the later `result[metric]` lookup failed. Other metrics now fall back to `getattr` on the metrics output and are reported as `"failed"` when missing.

Pull Request resolved: #2069

Test Plan:
```
$ python run_benchmark.py test_bench -m BERT_pytorch -d cuda -t train,eval --backend torchscript
Running TorchBenchModelConfig(name='BERT_pytorch', test='train', device='cuda', batch_size=None, extra_args=['--backend', 'torchscript'], extra_env=None, output_dir=None) ... [done]
Running TorchBenchModelConfig(name='BERT_pytorch', test='eval', device='cuda', batch_size=None, extra_args=['--backend', 'torchscript'], extra_env=None, output_dir=None) ... [done]
```

```
{
    "name": "test_bench",
    "environ": {
        "pytorch_git_version": "b2f25d6342ed483b461e831c6f970ae59a4fcca2",
        "pytorch_version": "2.2.0.dev20231127+cu121",
        "device": "NVIDIA A100-PG509-200"
    },
    "metrics": {
        "model=BERT_pytorch, test=train, device=cuda, bs=None, extra_args=['--backend', 'torchscript'], metric=latencies": 284.174904,
        "model=BERT_pytorch, test=train, device=cuda, bs=None, extra_args=['--backend', 'torchscript'], metric=cpu_peak_mem": 8.958984375,
        "model=BERT_pytorch, test=train, device=cuda, bs=None, extra_args=['--backend', 'torchscript'], metric=gpu_peak_mem": 7.0191650390625,
        "model=BERT_pytorch, test=eval, device=cuda, bs=None, extra_args=['--backend', 'torchscript'], metric=latencies": 169.736414,
        "model=BERT_pytorch, test=eval, device=cuda, bs=None, extra_args=['--backend', 'torchscript'], metric=cpu_peak_mem": 2.6162109375,
        "model=BERT_pytorch, test=eval, device=cuda, bs=None, extra_args=['--backend', 'torchscript'], metric=gpu_peak_mem": 4.2965087890625
    }
}
```
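Each metric key flattens the benchmark config into a `name=value, name=value` string. A minimal sketch of turning one key back into a dict (the `parse_metric_key` helper is hypothetical, not part of test_bench; it assumes the comma-separated layout shown above):

```python
# Hypothetical helper, not part of test_bench: recover the fields of one
# metric key from the JSON above. Re-joins pieces that were split inside
# a bracketed value.
def parse_metric_key(key: str) -> dict:
    fields = {}
    current = None
    for part in key.split(", "):
        if "=" in part:
            current, _, value = part.partition("=")
            fields[current] = value
        elif current is not None:
            # e.g. extra_args=['--backend', 'torchscript'] contains ", "
            fields[current] += ", " + part
    return fields

key = ("model=BERT_pytorch, test=train, device=cuda, bs=None, "
       "extra_args=['--backend', 'torchscript'], metric=latencies")
fields = parse_metric_key(key)
print(fields["metric"])      # latencies
print(fields["extra_args"])  # ['--backend', 'torchscript']
```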

Reviewed By: aaronenyeshi

Differential Revision: D51711874

Pulled By: xuzhao9

fbshipit-source-id: 6597bedd23b4b8b5b0e2a58ac403cdbea62a2dbc
xuzhao9 authored and facebook-github-bot committed Nov 30, 2023
1 parent 850364a commit 490ccec
Showing 1 changed file with 4 additions and 3 deletions: userbenchmark/test_bench/run.py
```diff
@@ -111,10 +111,11 @@ def run_config(config: TorchBenchModelConfig, metrics: List[str], dryrun: bool=F
         metrics_output: TorchBenchModelMetrics = get_model_test_metrics(model, metrics=metrics)
         result = {}
         for metric in metrics:
-            if metric == "latency" and metrics_output.latencies:
+            if metric == "latencies" and metrics_output.latencies:
                 result[metric] = numpy.median(metrics_output.latencies)
-            if not result[metric]:
-                result[metric] = "failed"
+            else:
+                result[metric] = getattr(metrics_output, metric, None)
+            result[metric] = "failed" if result[metric] == None else result[metric]
         print(" [done]", flush=True)
         return result
     except NotImplementedError as e:
```
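For context, a minimal standalone sketch of the fixed loop (`FakeMetrics` and `collect` are hypothetical stand-ins for `TorchBenchModelMetrics` and the real `run_config` body, and `is None` stands in for the `== None` comparison): before the fix, the first branch compared against `"latency"`, never matched the `latencies` metric, and so never recorded a latency; after it, unmatched or missing metrics fall back to `getattr` and are marked `"failed"`.

```python
from dataclasses import dataclass, field
from typing import List, Optional

import numpy

@dataclass
class FakeMetrics:
    # Stand-in for TorchBenchModelMetrics (illustration only)
    latencies: List[float] = field(default_factory=lambda: [170.1, 169.7, 169.9])
    cpu_peak_mem: Optional[float] = 2.616
    gpu_peak_mem: Optional[float] = None  # a metric the run failed to collect

def collect(metrics_output: FakeMetrics, metrics: List[str]) -> dict:
    # Mirrors the fixed loop in userbenchmark/test_bench/run.py
    result = {}
    for metric in metrics:
        if metric == "latencies" and metrics_output.latencies:
            result[metric] = numpy.median(metrics_output.latencies)
        else:
            result[metric] = getattr(metrics_output, metric, None)
        result[metric] = "failed" if result[metric] is None else result[metric]
    return result

result = collect(FakeMetrics(), ["latencies", "cpu_peak_mem", "gpu_peak_mem"])
print(result["latencies"], result["gpu_peak_mem"])  # 169.9 failed
```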
