How can I reproduce and update bloop numbers? #7
I actually had upgraded bloop to 1.3.2 a while back but didn't push the results. I updated the master branch to use 1.3.2. Each tool exists as a resource directory. The program requires Java 11 to compile and run, but you can specify the java home for the forked build tool process on the command line.

You can also run the benchmark from the sbt shell, but that's a bit slower, so I usually just use the command line (also, $HOME needs to be expanded to an absolute path when run from sbt). The -i parameter sets how many test iterations to run. The -w parameter sets how many initial iterations to run and throw out (to warm up the JVM). The -e parameter sets how many extra source files to generate for the second test, which attempts to measure the additional I/O overhead of having many source files in your build even when only one of them changes.

Bloop was more complicated to add to the benchmarks because I had to manage the client and server processes. My understanding is that 1.4.0 simplifies that with snailgun?

I tested on CI because I figured that if I just ran the tests on my computer, I'd get complaints about that methodology as well. In general, I find the results to be quite consistent no matter where I run the tests, though the total time is slower in CI than when I run locally. To see what this looks like subjectively, I've uploaded two asciinema clips that show the benchmarks running locally on my Mac with sbt 1.3.0 turbo.

Bloop seems quite fast for compilation, but does it fork for test? You can see in the asciinema clip the multi-second delay between compilation finishing and the test starting. I suspect that if the benchmark were just testing ~compile, then bloop would either win or be neck and neck with sbt 1.3.0. Also, I had to add a sleep between tests for bloop because of scalacenter/bloop#980; otherwise bloop wouldn't always detect the changes.
The bloop version is outdated. How can I update the bloop version and run the benchmark again?

P.S. Is this testing on CI? How is this benchmark representative and reliable at all? CI performance numbers contain lots of noise; I'd be wary of drawing meaningful conclusions from them.