MINOR: Update jmh to 1.27 for async profiler support (apache#9129)
Also updated the jmh readme to make it easier for new people to know
what's possible and best practices.

There were some changes in the generated benchmarking code that
required adjusting `spotbugs-exclude.xml` and for a `javac` warning
to be suppressed for the benchmarking module. I took the chance
to make the spotbugs exclusion more maintainable via a regex
pattern.

Tested the commands on Linux and macOS with zsh.

JMH highlights:

* async-profiler integration. Can be used with -prof async,
pass -prof async:help to look for the accepted options.
* perf c2c [2] integration. Can be used with -prof perfc2c,
if available.
* JFR profiler integration. Can be used with -prof jfr, pass
-prof jfr:help to look for the accepted options.

Full details:
* 1.24: https://mail.openjdk.java.net/pipermail/jmh-dev/2020-August/002982.html
* 1.25: https://mail.openjdk.java.net/pipermail/jmh-dev/2020-August/002987.html
* 1.26: https://mail.openjdk.java.net/pipermail/jmh-dev/2020-October/003024.html
* 1.27: https://mail.openjdk.java.net/pipermail/jmh-dev/2020-December/003096.html

Reviewers: Manikumar Reddy <[email protected]>, Chia-Ping Tsai <[email protected]>, Bill Bejeck <[email protected]>, Lucas Bradstreet <[email protected]>
ijuma authored Dec 11, 2020
1 parent 567a2ec commit 8cabd57
Showing 6 changed files with 111 additions and 45 deletions.
5 changes: 5 additions & 0 deletions README.md
@@ -199,6 +199,11 @@ You can run spotbugs using:
The spotbugs warnings will be found in `reports/spotbugs/main.html` and `reports/spotbugs/test.html` files in the subproject build
directories. Use -PxmlSpotBugsReport=true to generate an XML report instead of an HTML one.

### JMH microbenchmarks ###
We use [JMH](https://openjdk.java.net/projects/code-tools/jmh/) to write microbenchmarks that produce reliable results in the JVM.

See [jmh-benchmarks/README.md](https://github.com/apache/kafka/blob/trunk/jmh-benchmarks/README.md) for details on how to run the microbenchmarks.

### Common build options ###

The following options should be set with a `-P` switch, for example `./gradlew -PmaxParallelForks=1 test`.
10 changes: 9 additions & 1 deletion build.gradle
@@ -1713,7 +1713,10 @@ project(':jmh-benchmarks') {
}

dependencies {
compile project(':core')
compile(project(':core')) {
// jmh requires jopt 4.x while `core` depends on 5.0, they are not binary compatible
exclude group: 'net.sf.jopt-simple', module: 'jopt-simple'
}
compile project(':clients')
compile project(':streams')
compile project(':core')
@@ -1726,6 +1729,11 @@ project(':jmh-benchmarks') {
compile libs.slf4jlog4j
}

tasks.withType(JavaCompile) {
// Suppress warning caused by code generated by jmh: `warning: [cast] redundant cast to long`
options.compilerArgs << "-Xlint:-cast"
}

jar {
manifest {
attributes "Main-Class": "org.openjdk.jmh.Main"
2 changes: 1 addition & 1 deletion gradle/dependencies.gradle
@@ -70,7 +70,7 @@ versions += [
jacoco: "0.8.5",
jetty: "9.4.33.v20201020",
jersey: "2.31",
jmh: "1.23",
jmh: "1.27",
hamcrest: "2.2",
log4j: "1.2.17",
scalaLogging: "3.9.2",
15 changes: 2 additions & 13 deletions gradle/spotbugs-exclude.xml
@@ -237,19 +237,8 @@ For a detailed description of spotbugs bug categories, see https://spotbugs.read
</Match>

<Match>
<!-- Suppress some minor warnings about machine-generated code for
benchmarking. -->
<Or>
<Package name="org.apache.kafka.jmh.cache.generated"/>
<Package name="org.apache.kafka.jmh.common.generated"/>
<Package name="org.apache.kafka.jmh.record.generated"/>
<Package name="org.apache.kafka.jmh.partition.generated"/>
<Package name="org.apache.kafka.jmh.producer.generated"/>
<Package name="org.apache.kafka.jmh.fetchsession.generated"/>
<Package name="org.apache.kafka.jmh.fetcher.generated"/>
<Package name="org.apache.kafka.jmh.server.generated"/>
<Package name="org.apache.kafka.jmh.consumer.generated"/>
</Or>
<!-- Suppress some minor warnings about machine-generated code for benchmarking. -->
<Package name="~org\.apache\.kafka\.jmh\..*\.jmh_generated"/>
</Match>

<Match>
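SpotBugs treats a `Package` name that begins with `~` as a `java.util.regex` pattern, which is what lets the nine explicit `<Package>` entries collapse into one. A quick self-contained sketch of what the new pattern covers (the package names below are illustrative, and full-match semantics are assumed):

```java
import java.util.regex.Pattern;

public class SpotbugsExcludePatternCheck {
    // The pattern from spotbugs-exclude.xml, minus the leading '~' marker
    // that tells SpotBugs to treat the name as a regex.
    static final Pattern JMH_GENERATED =
        Pattern.compile("org\\.apache\\.kafka\\.jmh\\..*\\.jmh_generated");

    public static boolean excluded(String packageName) {
        return JMH_GENERATED.matcher(packageName).matches();
    }

    public static void main(String[] args) {
        // Generated benchmark packages are excluded...
        System.out.println(excluded("org.apache.kafka.jmh.cache.jmh_generated"));
        // ...while hand-written benchmark code is still checked.
        System.out.println(excluded("org.apache.kafka.jmh.cache"));
    }
}
```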
122 changes: 93 additions & 29 deletions jmh-benchmarks/README.md
@@ -1,10 +1,71 @@
### JMH-Benchmark module
### JMH-Benchmarks module

This module contains benchmarks written using [JMH](https://openjdk.java.net/projects/code-tools/jmh/) from OpenJDK.
Writing correct micro-benchmarks in Java (or another JVM language) is difficult and there are many non-obvious pitfalls (many
due to compiler optimizations). JMH is a framework for running and analyzing benchmarks (micro or macro) written in Java (or
another JVM language).
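
One classic pitfall can be sketched in plain Java (a hypothetical, self-contained example, not code from this repository): if a computation's result is never consumed, the JIT is free to eliminate the work entirely, so a hand-rolled timing loop can report a meaningless "speedup". This is one reason JMH asks benchmark methods to return their result or sink it into a `Blackhole`.

```java
// Illustrates why naive timing loops mislead: the first loop discards its
// result and is a dead-code-elimination candidate; the second consumes it,
// which is what JMH does for you via return values and Blackhole.
public class NaiveBenchmarkPitfall {

    public static double work() {
        double sum = 0.0;
        for (int i = 1; i <= 10_000; i++) {
            sum += Math.log(i);
        }
        return sum;
    }

    public static void main(String[] args) {
        // Pitfall: result discarded -- the JIT may remove the loop body.
        long t0 = System.nanoTime();
        for (int i = 0; i < 100; i++) {
            work();
        }
        long discarded = System.nanoTime() - t0;

        // Fix: consume the result so the work cannot be optimized away.
        double sink = 0.0;
        long t1 = System.nanoTime();
        for (int i = 0; i < 100; i++) {
            sink += work();
        }
        long consumed = System.nanoTime() - t1;

        System.out.printf("discarded=%dns consumed=%dns sink=%.1f%n",
            discarded, consumed, sink);
    }
}
```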

### Running benchmarks

If you want to set specific JMH flags or only run certain benchmarks, passing arguments via
gradle tasks is cumbersome. The provided `jmh.sh` script simplifies this.

The default behavior is to run all benchmarks:

./jmh-benchmarks/jmh.sh

Pass a pattern or name after the command to select the benchmarks:

./jmh-benchmarks/jmh.sh LRUCacheBenchmark

Check which benchmarks match the provided pattern:

./jmh-benchmarks/jmh.sh -l LRUCacheBenchmark

Run a specific test and override the number of forks, iterations and warm-up iterations to `2`:

./jmh-benchmarks/jmh.sh -f 2 -i 2 -wi 2 LRUCacheBenchmark

Run a specific test with async and GC profilers on Linux and flame graph output:

./jmh-benchmarks/jmh.sh -prof gc -prof async:libPath=/path/to/libasyncProfiler.so\;output=flamegraph LRUCacheBenchmark

The following sections cover async profiler and GC profilers in more detail.

### Using JMH with async profiler

It's good practice to check profiler output for microbenchmarks in order to verify that they represent the expected
application behavior and measure what you expect to measure. Some example pitfalls include the use of expensive mocks
or accidental inclusion of test setup code in the benchmarked code. JMH includes
[async-profiler](https://github.com/jvm-profiling-tools/async-profiler) integration that makes this easy:

./jmh-benchmarks/jmh.sh -prof async:libPath=/path/to/libasyncProfiler.so

With flame graph output (the semicolon is escaped to ensure it is not treated as a command separator):

./jmh-benchmarks/jmh.sh -prof async:libPath=/path/to/libasyncProfiler.so\;output=flamegraph

A number of arguments can be passed to configure async profiler; run the following for a description:

./jmh-benchmarks/jmh.sh -prof async:help

### Using JMH GC profiler

It's good practice to run your benchmark with `-prof gc` to measure its allocation rate:

./jmh-benchmarks/jmh.sh -prof gc

Of particular importance are the `norm` alloc rates, which measure allocations per operation rather than allocations
per second (the latter can increase when you make your code faster).
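
The distinction can be sketched with made-up numbers (`gc.alloc.rate.norm` is the metric name the JMH gc profiler reports; the figures below are hypothetical):

```java
public class NormAllocRate {
    // gc.alloc.rate.norm is effectively the measured allocation rate divided
    // by throughput: bytes per second / operations per second = bytes per op.
    public static double normRate(double bytesPerSec, double opsPerSec) {
        return bytesPerSec / opsPerSec;
    }

    public static void main(String[] args) {
        // Hypothetical: an optimization doubles throughput, so the absolute
        // allocation rate in B/sec doubles with it...
        double slowNorm = normRate(1_024_000, 1_000);
        double fastNorm = normRate(2_048_000, 2_000);
        // ...but the per-operation allocation is unchanged at 1024 B/op.
        System.out.println(slowNorm + " B/op vs " + fastNorm + " B/op");
    }
}
```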

### Running JMH outside of gradle

The JMH benchmarks can be run outside of gradle as you would with any executable jar file:

java -jar <kafka-repo-dir>/jmh-benchmarks/build/libs/kafka-jmh-benchmarks-all.jar -f2 LRUCacheBenchmark

### Writing benchmarks

For help in writing correct JMH tests, the best place to start is the [sample code](https://hg.openjdk.java.net/code-tools/jmh/file/tip/jmh-samples/src/main/java/org/openjdk/jmh/samples/) provided
by the JMH project.

@@ -15,47 +76,50 @@ uber-jar file containing the benchmarking code and required JMH classes.
JMH is highly configurable and users are encouraged to look through the samples for suggestions
on what options are available. A good tutorial for using JMH can be found [here](http://tutorials.jenkov.com/java-performance/jmh.html#return-value-from-benchmark-method)

### Gradle Tasks / Running benchmarks in gradle
### Gradle Tasks

If no benchmark mode is specified, the default (throughput) is used. It is assumed that users run
the gradle tasks with './gradlew' from the root of the Kafka project.

* jmh-benchmarks:shadowJar - creates the uber jar required to run the benchmarks.

* jmh-benchmarks:jmh - runs the `clean` and `shadowJar` tasks followed by all the benchmarks.

### Using the jmh script
If you want to set specific JMH flags or only run a certain test(s) passing arguments via
gradle tasks is cumbersome. Instead you can use the `jhm.sh` script. NOTE: It is assumed users run
the jmh.sh script from the jmh-benchmarks module.
the gradle tasks with `./gradlew` from the root of the Kafka project.

* Run a specific test setting fork-mode (number iterations) to 2 :`./jmh.sh -f 2 LRUCacheBenchmark`
* `jmh-benchmarks:shadowJar` - creates the uber jar required to run the benchmarks.

* By default all JMH output goes to stdout. To run a benchmark and capture the results in a file:
`./jmh.sh -f 2 -o benchmarkResults.txt LRUCacheBenchmark`
NOTE: For now this script needs to be run from the jmh-benchmarks directory.

### Running JMH outside of gradle
The JMH benchmarks can be run outside of gradle as you would with any executable jar file:
`java -jar <kafka-repo-dir>/jmh-benchmarks/build/libs/kafka-jmh-benchmarks-all.jar -f2 LRUCacheBenchmark`
* `jmh-benchmarks:jmh` - runs the `clean` and `shadowJar` tasks followed by all the benchmarks.

### JMH Options
Some common JMH options are:

```text
-e <regexp+> Benchmarks to exclude from the run.
-f <int> How many times to fork a single benchmark. Use 0 to
disable forking altogether. Warning: disabling
forking may have detrimental impact on benchmark
and infrastructure reliability, you might want
to use different warmup mode instead.
-i <int> Number of measurement iterations to do. Measurement
iterations are counted towards the benchmark score.
(default: 1 for SingleShotTime, and 5 for all other
modes)
-l List the benchmarks that match a filter, and exit.
-lprof List profilers, and exit.
-o <filename> Redirect human-readable output to a given file.
-v <mode> Verbosity mode. Available modes are: [SILENT, NORMAL,
EXTRA]
-prof <profiler> Use profilers to collect additional benchmark data.
Some profilers are not available on all JVMs and/or
all OSes. Please see the list of available profilers
with -lprof.
-v <mode> Verbosity mode. Available modes are: [SILENT, NORMAL,
EXTRA]
-wi <int> Number of warmup iterations to do. Warmup iterations
are not counted towards the benchmark score. (default:
0 for SingleShotTime, and 5 for all other modes)
```

To view all options, run jmh with the `-h` flag.
2 changes: 1 addition & 1 deletion jmh-benchmarks/jmh.sh
@@ -35,7 +35,7 @@ $gradleCmd -q :jmh-benchmarks:clean :jmh-benchmarks:shadowJar

echo "gradle build done"

echo "running JMH with args [$@]"
echo "running JMH with args: $@"

java -jar ${libDir}/kafka-jmh-benchmarks-all.jar "$@"

