🚨 Performance regression in #279 (#281)
Direct link to comment: 5da330e#commitcomment-137748136

I think that, looking at the prior commit db4375f as well as a handful of commits before that, the numbers for the "before" baseline were spurious. @samuelburnham I don't think this could have been a caching issue (unpacking a prior bench result that isn't quite for the same machine), could it? We don't have caching installed, after all.
OK, so, I've run this benchmarking procedure on three interesting commits, which all resolve grumpkin to:

Benchmark results (tables not preserved in this transcript): CompressedSNARK-NIVC-1, CompressedSNARK-NIVC-2, CompressedSNARK-NIVC-Commitments-2
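For context, here is a minimal sketch of how Criterion benchmark groups with names like the ones above are typically set up in a Rust crate; the group name, sample size, and the measured workload are placeholders, not arecibo's actual bench code:

```rust
// Hypothetical Criterion bench; the closure below stands in for the real
// CompressedSNARK proving workload.
use criterion::{criterion_group, criterion_main, Criterion};

fn bench_compressed_snark(c: &mut Criterion) {
    // One group per scenario, mirroring the group names reported above.
    let mut group = c.benchmark_group("CompressedSNARK-NIVC-1");
    group.sample_size(10); // proving is slow, so keep the sample count small
    group.bench_function("Prove", |b| {
        b.iter(|| std::hint::black_box(placeholder_workload()))
    });
    group.finish();
}

// Stand-in for the expensive proving step being measured.
fn placeholder_workload() -> u64 {
    (0..1_000_000u64).sum()
}

criterion_group!(benches, bench_compressed_snark);
criterion_main!(benches);
```

Such a bench runs under `cargo bench` once it is registered as a `[[bench]]` target with `harness = false`.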
I tested on the penultimate commit of a#290, which contains no use of the MSM in the IPA (see the PR description for why). This does tell us (as expected, see a#290) that using the GPU-MSM code path in the IPA (remember, this never resolves to an actual GPU MSM, just SN's CPU one) is faster in both cases (6s vs 9s without CUDA, 9s vs 12s with), but that the proving/verifying discrepancy is due to something else.

Benchmark results (tables not preserved in this transcript): CompressedSNARK-NIVC-1, CompressedSNARK-NIVC-2, CompressedSNARK-NIVC-Commitments-2
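To make the "GPU-MSM code path that still runs on the CPU" point concrete, here is an illustrative sketch of the kind of feature-gated dispatch being discussed; all names (the types, the functions, the `cuda` feature) are assumptions for illustration, not arecibo's actual API:

```rust
// Illustrative only: the "cuda" feature selects a different MSM code path,
// but for the curve discussed above that path still performs the work on the
// CPU (via a fast CPU backend) rather than on an actual GPU.
type Point = u64;  // stand-in for a curve point
type Scalar = u64; // stand-in for a field element

#[cfg(feature = "cuda")]
fn msm(bases: &[Point], scalars: &[Scalar]) -> Point {
    // "GPU" code path: here it delegates to a fast CPU backend, not a GPU kernel.
    fast_cpu_backend_msm(bases, scalars)
}

#[cfg(not(feature = "cuda"))]
fn msm(bases: &[Point], scalars: &[Scalar]) -> Point {
    // Plain CPU path used when the cuda feature is disabled.
    bases.iter().zip(scalars).map(|(b, s)| b * s).sum()
}

#[cfg(feature = "cuda")]
fn fast_cpu_backend_msm(bases: &[Point], scalars: &[Scalar]) -> Point {
    bases.iter().zip(scalars).map(|(b, s)| b * s).sum()
}

fn main() {
    let bases = vec![1, 2, 3];
    let scalars = vec![4, 5, 6];
    println!("msm = {}", msm(&bases, &scalars)); // 1*4 + 2*5 + 3*6 = 32
}
```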
TL;DR: this is not a regression, but rather an inherent property of the batched approach (introduced in #131): it performs better without GPU acceleration. This is not due to the final commitment (the IPA, in our benches).
Tested yesterday: removing parallelism in commitments on CUDA made no change.
Regression >= 30.0% found during merge of: #279
Commit: 5da330e
Triggered by: https://github.com/lurk-lab/arecibo/actions/runs/7646169891
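For reference, an alert like the one above boils down to a relative-slowdown check against a baseline; the sketch below assumes a simple 30% threshold and is not the actual CI workflow's logic:

```rust
// Hypothetical threshold check: flag a benchmark whose new time exceeds the
// baseline time by the given relative threshold (0.30 = 30%).
fn is_regression(baseline_secs: f64, current_secs: f64, threshold: f64) -> bool {
    (current_secs - baseline_secs) / baseline_secs >= threshold
}

fn main() {
    // Example: a bench going from 6.0s to 9.0s is a +50% change,
    // which trips a 30% threshold.
    println!("flagged: {}", is_regression(6.0, 9.0, 0.30));
}
```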