How can I reproduce and update bloop numbers? #7

Open
jvican opened this issue Sep 12, 2019 · 1 comment

jvican commented Sep 12, 2019

The bloop version is outdated. How can I update it and run the benchmark again?

P.S. Is this testing on CI? How is this benchmark representative and reliable at all? CI performance numbers contain lots of noise; I'd be wary of drawing meaningful conclusions from them.

eatkins commented Sep 13, 2019

I had actually upgraded bloop to 1.3.2 a while back but hadn't pushed the results. I've now updated the master branch to use 1.3.2.

Each tool exists as a resource directory, e.g. src/main/resources/bloop-1.3.2. Whenever a tool is benchmarked, the program extracts the tool's resource directory into a temporary directory, then sets up and runs the tool. The entire benchmark program lives in https://github.com/eatkins/scala-build-watch-performance/blob/master/src/main/java/build/performance/Main.java. The code for setting up the bloop project is at https://github.com/eatkins/scala-build-watch-performance/blob/233d2ea48db1774baaaec4934da65fc4f2199134/src/main/java/build/performance/Main.java#L210.
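For illustration, here is a minimal sketch of that extraction step, assuming hypothetical class and method names (the real logic is in Main.java linked above):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Hypothetical helper: copies a file from a tool's bundled resource
// directory (e.g. src/main/resources/bloop-1.3.2) into a scratch
// directory so the benchmark can set up and run the tool in isolation.
class ResourceExtractor {
    static Path extract(String toolDir, String fileName) throws IOException {
        Path tempDir = Files.createTempDirectory(toolDir);
        try (InputStream in = ResourceExtractor.class
                .getResourceAsStream("/" + toolDir + "/" + fileName)) {
            if (in == null) throw new IOException("missing resource: " + fileName);
            Path target = tempDir.resolve(fileName);
            Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
            return target;
        }
    }
}
```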

The program requires Java 11 to compile and run, but you can specify the Java home for the forked build tool process with the -j $JAVA_HOME option. For example, I tested bloop just now by running:

javac -cp lib/swoval/file-tree-views-2.1.1.jar src/main/java/build/performance/Main.java -d target/classes 
java -cp "target/classes:src/main/resources:lib/swoval/file-tree-views-2.1.1.jar" build.performance.Main -i 50 -w 50 -e 5000 -j ~/.jabba/jdk/[email protected]/Contents/Home bloop-1.3.2

You can also just do

> run -i 50 -w 50 -e 5000 -j $HOME/.jabba/jdk/[email protected]/Contents/Home bloop-1.3.2

in the sbt shell, but that's a bit slower, so I usually just use the command line (note that $HOME needs to be expanded to an absolute path when running from sbt).

The -i parameter sets how many test iterations to run. The -w parameter sets how many iterations to run initially and throw away (to warm up the JVM). The -e parameter sets how many extra source files to generate for the second test, which attempts to measure the additional I/O overhead of having many source files in your build even if only one of them changes.
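As a rough sketch of how -i and -w interact (an assumed structure, not the actual code in Main.java): the warmup iterations run first and are discarded, and only the remaining iterations are timed:

```java
// Hypothetical sketch of the iteration scheme implied by -i and -w:
// run `warmup` untimed edit/rebuild cycles, then time `iterations` cycles.
class IterationSketch {
    static long[] run(int iterations, int warmup, Runnable editAndWaitForRebuild) {
        for (int i = 0; i < warmup; i++) {
            editAndWaitForRebuild.run(); // JVM warmup; results thrown out
        }
        long[] samples = new long[iterations];
        for (int i = 0; i < iterations; i++) {
            long start = System.nanoTime();
            editAndWaitForRebuild.run();
            samples[i] = System.nanoTime() - start;
        }
        return samples;
    }
}
```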

Bloop was more complicated to add to the benchmarks because I had to manage the client and server processes. My understanding is that 1.4.0 simplifies that with snailgun?

I tested on CI because I figured that if I just ran the tests on my computer, I'd get complaints about that methodology as well. In general, I find the results to be quite consistent no matter where I run the tests, though the total time is higher in CI than when I run locally.

To see what this looks like subjectively, I've uploaded two asciinema clips that show the benchmarks running locally on my mac: one with sbt 1.3.0 turbo and one with bloop 1.3.2 (asciicast embeds in the original issue).

Bloop seems quite fast for compilation, but does it fork for test? You can see in the asciinema clip a multi-second delay between the compilation run and the test starting. I suspect that if the benchmark were just testing ~compile, bloop would either win or be neck and neck with sbt 1.3.0.

Also, I had to add a sleep between tests for bloop because of scalacenter/bloop#980. Otherwise bloop wouldn't always detect the changes.
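The workaround amounts to pausing before touching a source file so that bloop's watcher can settle. A sketch, with an illustrative delay value rather than the one the benchmark actually uses:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileTime;

// Hypothetical illustration of the workaround for scalacenter/bloop#980:
// sleep before touching a source file so that bloop's file watcher
// reliably picks up the change.
class TouchWithDelay {
    static void touch(Path source) throws Exception {
        Thread.sleep(1000); // illustrative delay, not the benchmark's actual value
        Files.setLastModifiedTime(source, FileTime.fromMillis(System.currentTimeMillis()));
    }
}
```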
