Reduce box_detections_per_img for vision_maskrcnn (#115487)
Summary:
This fixes a failure on the [perf dashboard](https://hud.pytorch.org/benchmark/compilers) with `--amp` mode.  I believe boxes 5 and 6 were getting swapped.  The existing comment explains the issue.

Before
```
$ ./benchmarks/dynamo/torchbench.py --training --accuracy --no-translation-validation --amp --backend=inductor --disable-cudagraphs --only vision_maskrcnn
...
[2023-12-09 13:21:27,292] torch._dynamo.utils: [ERROR] RMSE (res-fp64): 0.00171, (ref-fp64): 0.00054 and shape=torch.Size([256, 256, 3, 3])
[2023-12-09 13:21:27,292] torch._dynamo.utils: [ERROR] Accuracy failed for key name backbone.fpn.layer_blocks.2.0.weight.grad
fail_accuracy
```

After
```
$ ./benchmarks/dynamo/torchbench.py --training --accuracy --no-translation-validation --amp --backend=inductor --disable-cudagraphs --only vision_maskrcnn
...
pass
```

X-link: pytorch/pytorch#115487
Approved by: https://github.com/yanboliang

Reviewed By: osalpekar

Differential Revision: D52062336

Pulled By: jansel

fbshipit-source-id: bb900b583113c9bda5c13990ea643827b06c2211
jansel authored and facebook-github-bot committed Dec 12, 2023
1 parent ecb251a commit d01dc3a
Showing 1 changed file with 2 additions and 2 deletions.
`userbenchmark/dynamo/dynamobench/torchbench.py` (2 additions, 2 deletions):
```diff
@@ -417,8 +417,8 @@ def load_model(
             # comparison hard with torch.compile. torch.compile can cause minor
             # divergences in the output because of how fusion works for amp in
             # TorchInductor compared to eager. Therefore, instead of looking at
-            # all the bounding boxes, we compare only top 5.
-            model_kwargs = {"box_detections_per_img": 5}
+            # all the bounding boxes, we compare only top 4.
+            model_kwargs = {"box_detections_per_img": 4}
             benchmark = benchmark_cls(
                 test="train",
                 device=device,
```
