Tweaks to benchmarking #1401
Conversation
Ad load time test results (bot comment with test conditions and results table omitted)
```diff
 module.exports = webpackMerge.smart(config, {
 	mode: 'production',
 	output: {
-		filename: `[chunkhash]/graun.standalone.commercial.js`,
-		chunkFilename: `[chunkhash]/graun.[name].commercial.js`,
+		filename: `${prefix}graun.standalone.commercial.js`,
```
So that we can optionally build prod without a hash, meaning Playwright knows the file names and can override requests for them.
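As a rough illustration (not necessarily how the tests in this PR do it), a hash-free filename makes the bundle URL predictable, so a Playwright test can intercept the request and serve the local build instead. The dist path below is an assumption:

```ts
import { test } from '@playwright/test';

// With no [chunkhash] in the prod filename, the bundle URL is predictable,
// so requests for it can be fulfilled from the locally built file.
// The dist path here is an assumption, not the repo's actual output path.
test.beforeEach(async ({ page }) => {
	await page.route('**/graun.standalone.commercial.js', (route) =>
		route.fulfill({ path: 'dist/prod/graun.standalone.commercial.js' }),
	);
});
```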
```diff
@@ -0,0 +1,73 @@
+import { defineConfig, devices } from '@playwright/test';
```
New separate Playwright config using Playwright projects, which can depend on each other.
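For anyone unfamiliar with Playwright projects, a minimal sketch of the dependency mechanism is below; the project names, test file pattern and storage-state filename are assumptions, not the actual config in this PR:

```ts
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
	projects: [
		// Runs first: sets the consent state and saves it as storage state.
		{ name: 'consent-setup', testMatch: /consent\.setup\.ts/ },
		// Benchmarks only start once 'consent-setup' has finished, and reuse
		// the saved storage state so every test starts already consented.
		{
			name: 'benchmarks',
			dependencies: ['consent-setup'],
			use: {
				...devices['Desktop Chrome'],
				storageState: 'consent-state.json',
			},
		},
	],
});
```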
Very nice!
```
		body
	});
	return;
} else {
```
Fab addition, so much more helpful having this as a comment instead of having to manually check the test outcome
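For context, something along these lines could publish the results as a PR comment; the owner/repo values and environment variables below are assumptions, not necessarily what this workflow uses:

```ts
import { Octokit } from '@octokit/rest';

// Hypothetical helper: post the benchmark results as a PR comment,
// falling back to logging when there is no PR number (e.g. a local run).
const postResultsComment = async (body: string): Promise<void> => {
	const prNumber = Number(process.env.PR_NUMBER);
	if (Number.isNaN(prNumber)) {
		console.log(body);
		return;
	}
	const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });
	await octokit.rest.issues.createComment({
		owner: 'guardian', // assumed owner
		repo: 'commercial', // assumed repo name
		issue_number: prNumber,
		body,
	});
};
```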
Co-authored-by: Emma Imber <[email protected]>
What does this change?
Builds on the work done by @emma-imber in #1363
- Reconfigures the phases into separate Playwright projects which can depend on one another.
- Uses a project to set up the consent state and save the storage state for very rapid testing (we can apply this to our other tests! See the first sketch after this list).
- Builds a prod bundle, then runs the tests against prod while overriding requests for the commercial files with the locally built versions, to get results that are as realistic as possible.
- Runs the benchmarks in parallel.
- Writes the results to files so they can be averaged across the workers (see the second sketch after this list).
- Adds a little PR message with the results (see below).
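A rough sketch of what the consent-setup project might look like; the page URL, the CMP iframe selector and the storage-state filename are all assumptions:

```ts
import { test as setup } from '@playwright/test';

// Accept consent once, then persist cookies and localStorage so that every
// benchmark worker can start from an already-consented state.
setup('save consented storage state', async ({ page, context }) => {
	await page.goto('https://www.theguardian.com/uk');
	// The CMP banner lives in an iframe; this selector is an assumption.
	await page
		.frameLocator('iframe[title*="Consent"]')
		.getByRole('button', { name: /accept/i })
		.click();
	await context.storageState({ path: 'consent-state.json' });
});
```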
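And a sketch of the averaging step, assuming each worker writes its timings as a JSON array of milliseconds into a shared results directory (the directory name and file layout are assumptions):

```ts
import { readdirSync, readFileSync } from 'node:fs';
import { join } from 'node:path';

// Collect every worker's timings and reduce them to a single average.
const resultsDir = 'benchmark-results';
const times: number[] = readdirSync(resultsDir)
	.filter((file) => file.endsWith('.json'))
	.flatMap(
		(file) => JSON.parse(readFileSync(join(resultsDir, file), 'utf8')) as number[],
	);

const average = times.reduce((sum, t) => sum + t, 0) / times.length;
console.log(
	`Average ad render time: ${average.toFixed(0)}ms across ${times.length} runs`,
);
```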
Why?
Often we don't confidently know how a change will affect ad rendering time (e.g. this PR, where we're not sure), and running an AB test just to find out isn't ideal.
It's also revealed that with poorer network connections the difference in ad load times between consented and consentless is not very pronounced 🤔 That was a mistake: the consentless benchmark was still accepting all!