Designed to benchmark the performance of PCs, laptops, WSL, etc. when working on NodeJS-based front-end projects.
- Clone this repo
- Install Python3 (`sudo apt install python-is-python3` on WSL)
- Install `pip` (`sudo apt install python3-pip` on WSL)
- Install `nodeenv`: `pip install nodeenv`
- Run `npm ci` to install deps
- Copy `config.example.ts` to `config.ts`
- Modify `config.ts` to your liking (add projects, commands, optionally patches, etc.), see Configuration
- Run `npm start` (or `npm start -- --run-indefinitely`)
- See results in the CLI (mean ± standard deviation):

  ```
  Benchmarking "build"... Average: 10s ±132ms
  Benchmarking "unit test"... Average: 45s ±12s
  ```

  and more details in the `results.json` file
`--run-indefinitely` - when set, will re-run benchmarks for all projects until you stop the process manually (using Ctrl+C). Useful when you can leave the device running for a long time and want more precise benchmark results. Note: `afterAll()` hooks won't run in this case, which might affect reporters.
Available options (a full example entry follows this list):

- `name`: the name of your project
- `gitUrl`: URL of your Git repository to clone (make sure credentials are saved before running benchmarks)
- `gitCliConfigOverrides`: key-value object, will be passed to `git clone -c your=option -c another=option` to override global config options, such as `autocrlf`, etc.
- `rootFolder`: this is your NodeJS root folder (where the `package.json` is). If you have multiple projects within the same repo, add multiple project entries with different root folders
- `patches`: optional array of patches to apply; requires `name` and `file` options, see Patching
- `commands`: an array of commands to be benchmarked, see Commands
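For illustration only, a project entry in `config.ts` might look like the sketch below. The option names come from this README; the surrounding shape (a `projects` array, a default export) and all values are assumptions - check `config.example.ts` for the authoritative structure.

```ts
// Hypothetical config.ts entry — option names are from this README,
// the exported shape is an assumption; see config.example.ts.
const config = {
  projects: [
    {
      name: "my-app",
      gitUrl: "https://github.com/me/my-app.git", // credentials must already be saved
      gitCliConfigOverrides: {
        // passed as `git clone -c core.autocrlf=false ...`
        "core.autocrlf": "false",
      },
      rootFolder: "packages/web", // folder that contains package.json
      patches: [],                // optional, see Patching
      commands: [
        { name: "install", npmCommand: "ci" },
        { name: "build", npmScriptName: "build" },
      ],
    },
  ],
};

export default config;
```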
Commands are what is being benchmarked; common examples: `npm ci`, `npm test`, `npm run build`, etc.

Every command needs a `name`.

Types of commands (only one per command):
- `npmScriptName` - will call `npm run ${npmScriptName}`, for example: `npmScriptName: "build-dev"`
- `npxCommand` - will call `npx ${npxCommand}`, for example: `npxCommand: "jest"`
- `npmCommand` - will call `npm ${npmCommand}`, for example: `npmCommand: "ci"`
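Putting the three types together, a `commands` array could look like this sketch (the command names are illustrative and assume the corresponding scripts exist in the target project):

```ts
// Illustrative commands array — exactly one command type per entry.
const commands = [
  { name: "clean install", npmCommand: "ci" },   // runs `npm ci`
  { name: "build", npmScriptName: "build-dev" }, // runs `npm run build-dev`
  { name: "unit test", npxCommand: "jest" },     // runs `npx jest`
];
```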
Patching can be useful to disable certain tests, change scripts, engines, etc. It's run right after cloning, before installing nodeenv and npm modules.
Available patching options (see the example below):

- replace: set `search: "find-me"` and `replace: "replace-with-me"` - it'll replace the first occurrence
- delete: set `delete: true` - will delete the file
- append: set `append: "some-string"` - will append to the file

Note: all patching options are mutually exclusive.
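As a sketch, a `patches` array might look like the following; the option names match this README, while the patch names, file paths, and values are made up:

```ts
// Hypothetical patches array — each patch needs name + file,
// plus exactly one of the patching options (replace/delete/append).
const patches = [
  {
    name: "skip flaky test",
    file: "src/app.test.ts",
    search: "it('flaky case'",       // replace: first occurrence only
    replace: "it.skip('flaky case'",
  },
  {
    name: "drop postinstall hook",
    file: "scripts/postinstall.js",
    delete: true,                    // delete: removes the file entirely
  },
  {
    name: "relax engine check",
    file: ".npmrc",
    append: "engine-strict=false\n", // append: adds to the end of the file
  },
];
```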
The system supports multiple reporters that extend the `Reporter` class.

Available reporters:

- `cli` - logs totals, averages and deviation to stdout
- `fs` - preserves reports in the `results.json` file
- `chart` - saves a visual representation in the `results.png` file

All reporters have to implement the `collectResult()` method, which is called after each command is benchmarked. Some reporters may choose to implement an `afterAll()` method, which is called after all benchmarks are done for all projects.
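A minimal custom reporter might therefore look roughly like the sketch below. Only the `Reporter` class and the `collectResult()`/`afterAll()` hooks are documented above; the `BenchmarkResult` shape and its field names are assumptions for illustration.

```ts
// Stand-in for the tool's result type — this shape is an assumption,
// not the project's actual type.
interface BenchmarkResult {
  project: string;
  command: string;
  meanMs: number;
  stdDevMs: number;
}

// Stand-in for the project's Reporter base class (actual API may differ).
abstract class Reporter {
  abstract collectResult(result: BenchmarkResult): void; // after each command
  afterAll?(): void;                                     // optional, after all projects
}

// Example custom reporter that emits one JSON line per benchmarked command.
class JsonLinesReporter extends Reporter {
  collectResult(result: BenchmarkResult): void {
    console.log(JSON.stringify(result));
  }

  afterAll(): void {
    // Remember: afterAll() is skipped when running with --run-indefinitely.
    console.log("all benchmarks complete");
  }
}
```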
- Enable debug logging: `npx -y cross-env DEBUG=true npm start` and look in `log.txt`
- Add visual comparison of results: using Vega prototype
- Need to figure out how to implement this. Just adding it as a reporter doesn't make a lot of sense, because in a given environment we only have results from that environment. Maybe consider creating a Gist reporter that will upload/append results to a Gist; a chart can then be generated based on all those results.
- Related to the previous point: think about how we can make managing results easier for the end user.
- Maybe pull config from a Gist - this way the user can make the Gist public and avoid having to auth in all envs