diff --git a/README.md b/README.md
index a7254c9905d4c..34e13b6ea7da7 100644
--- a/README.md
+++ b/README.md
@@ -199,6 +199,11 @@ You can run spotbugs using:
The spotbugs warnings will be found in `reports/spotbugs/main.html` and `reports/spotbugs/test.html` files in the subproject build
directories. Use -PxmlSpotBugsReport=true to generate an XML report instead of an HTML one.
+### JMH microbenchmarks ###
+We use [JMH](https://openjdk.java.net/projects/code-tools/jmh/) to write microbenchmarks that produce reliable results in the JVM.
+
+See [jmh-benchmarks/README.md](https://github.com/apache/kafka/blob/trunk/jmh-benchmarks/README.md) for details on how to run the microbenchmarks.
+
### Common build options ###
The following options should be set with a `-P` switch, for example `./gradlew -PmaxParallelForks=1 test`.
diff --git a/build.gradle b/build.gradle
index dc486a17ee134..28ad728079b4c 100644
--- a/build.gradle
+++ b/build.gradle
@@ -1713,7 +1713,10 @@ project(':jmh-benchmarks') {
}
dependencies {
- compile project(':core')
+ compile(project(':core')) {
+ // jmh requires jopt 4.x while `core` depends on 5.0; they are not binary compatible
+ exclude group: 'net.sf.jopt-simple', module: 'jopt-simple'
+ }
compile project(':clients')
compile project(':streams')
compile project(':core')
@@ -1726,6 +1729,11 @@ project(':jmh-benchmarks') {
compile libs.slf4jlog4j
}
+ tasks.withType(JavaCompile) {
+ // Suppress warning caused by code generated by jmh: `warning: [cast] redundant cast to long`
+ options.compilerArgs << "-Xlint:-cast"
+ }
+
jar {
manifest {
attributes "Main-Class": "org.openjdk.jmh.Main"
diff --git a/gradle/dependencies.gradle b/gradle/dependencies.gradle
index f4868e95761a0..902a54091f20a 100644
--- a/gradle/dependencies.gradle
+++ b/gradle/dependencies.gradle
@@ -70,7 +70,7 @@ versions += [
jacoco: "0.8.5",
jetty: "9.4.33.v20201020",
jersey: "2.31",
- jmh: "1.23",
+ jmh: "1.27",
hamcrest: "2.2",
log4j: "1.2.17",
scalaLogging: "3.9.2",
diff --git a/gradle/spotbugs-exclude.xml b/gradle/spotbugs-exclude.xml
index 9115e0d59ae82..722bfd1fd84bc 100644
--- a/gradle/spotbugs-exclude.xml
+++ b/gradle/spotbugs-exclude.xml
@@ -237,19 +237,8 @@ For a detailed description of spotbugs bug categories, see https://spotbugs.read
-
-
-
-
-
-
-
-
-
-
-
-
+
+
diff --git a/jmh-benchmarks/README.md b/jmh-benchmarks/README.md
index f731badec1418..216a43433bcb9 100644
--- a/jmh-benchmarks/README.md
+++ b/jmh-benchmarks/README.md
@@ -1,10 +1,71 @@
-### JMH-Benchmark module
+### JMH-Benchmarks module
This module contains benchmarks written using [JMH](https://openjdk.java.net/projects/code-tools/jmh/) from OpenJDK.
Writing correct micro-benchmarks in Java (or another JVM language) is difficult and there are many non-obvious pitfalls (many
due to compiler optimizations). JMH is a framework for running and analyzing benchmarks (micro or macro) written in Java (or
another JVM language).
+### Running benchmarks
+
+If you want to set specific JMH flags or only run certain benchmarks, passing arguments via
+gradle tasks is cumbersome. The provided `jmh.sh` script simplifies this.
+
+The default behavior is to run all benchmarks:
+
+ ./jmh-benchmarks/jmh.sh
+
+Pass a pattern or name after the command to select the benchmarks:
+
+ ./jmh-benchmarks/jmh.sh LRUCacheBenchmark
+
+Check which benchmarks match the provided pattern:
+
+ ./jmh-benchmarks/jmh.sh -l LRUCacheBenchmark
+
+Run a specific benchmark and override the number of forks, iterations and warm-up iterations to `2`:
+
+ ./jmh-benchmarks/jmh.sh -f 2 -i 2 -wi 2 LRUCacheBenchmark
+
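+By default all JMH output goes to stdout. To run a benchmark and capture the results in a file:
+
+    ./jmh-benchmarks/jmh.sh -f 2 -o benchmarkResults.txt LRUCacheBenchmark
+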
+Run a specific test with async and GC profilers on Linux and flame graph output:
+
+ ./jmh-benchmarks/jmh.sh -prof gc -prof async:libPath=/path/to/libasyncProfiler.so\;output=flamegraph LRUCacheBenchmark
+
+The following sections cover the async and GC profilers in more detail.
+
+### Using JMH with async profiler
+
+It's good practice to check profiler output for microbenchmarks in order to verify that they represent the expected
+application behavior and measure what you expect to measure. Some example pitfalls include the use of expensive mocks
+or accidental inclusion of test setup code in the benchmarked code. JMH includes
+[async-profiler](https://github.com/jvm-profiling-tools/async-profiler) integration that makes this easy:
+
+ ./jmh-benchmarks/jmh.sh -prof async:libPath=/path/to/libasyncProfiler.so
+
+With flame graph output (the semicolon is escaped to ensure it is not treated as a command separator):
+
+ ./jmh-benchmarks/jmh.sh -prof async:libPath=/path/to/libasyncProfiler.so\;output=flamegraph
+
+A number of arguments can be passed to configure async profiler; run the following for a description:
+
+ ./jmh-benchmarks/jmh.sh -prof async:help
+
+### Using JMH GC profiler
+
+It's good practice to run your benchmark with `-prof gc` to measure its allocation rate:
+
+ ./jmh-benchmarks/jmh.sh -prof gc
+
+Of particular importance are the `norm` alloc rates, which measure the allocations per operation
+rather than the allocations per second, since the latter can increase simply because you have made
+your code faster.
+
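+The report adds `·gc.alloc.rate.norm` rows to the benchmark output, along the lines of the
+following (the method name and numbers here are purely illustrative, not actual measurements):
+
+    LRUCacheBenchmark.testCachePerformance:·gc.alloc.rate.norm  thrpt  5  128.000 ± 0.001  B/op
+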
+### Running JMH outside of gradle
+
+The JMH benchmarks can be run outside of gradle as you would with any executable jar file:
+
+ java -jar /jmh-benchmarks/build/libs/kafka-jmh-benchmarks-all.jar -f2 LRUCacheBenchmark
+
+### Writing benchmarks
+
For help in writing correct JMH tests, the best place to start is the [sample code](https://hg.openjdk.java.net/code-tools/jmh/file/tip/jmh-samples/src/main/java/org/openjdk/jmh/samples/) provided
by the JMH project.
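+
+As a sketch of the general shape of a JMH benchmark (a hypothetical example, not one of the
+benchmarks in this module), a minimal class looks like:
+
+```java
+import java.util.HashMap;
+import java.util.Map;
+
+import org.openjdk.jmh.annotations.Benchmark;
+import org.openjdk.jmh.annotations.Scope;
+import org.openjdk.jmh.annotations.Setup;
+import org.openjdk.jmh.annotations.State;
+
+@State(Scope.Benchmark)
+public class MapGetBenchmark {
+    private final Map<String, Integer> map = new HashMap<>();
+
+    @Setup
+    public void setup() {
+        map.put("key", 42);
+    }
+
+    @Benchmark
+    public Integer testGet() {
+        // Returning the result hands it to JMH's implicit Blackhole so the
+        // JVM cannot dead-code-eliminate the lookup being measured.
+        return map.get("key");
+    }
+}
+```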
@@ -15,47 +76,50 @@ uber-jar file containing the benchmarking code and required JMH classes.
JMH is highly configurable and users are encouraged to look through the samples for suggestions
on what options are available. A good tutorial for using JMH can be found [here](http://tutorials.jenkov.com/java-performance/jmh.html#return-value-from-benchmark-method)
-### Gradle Tasks / Running benchmarks in gradle
+### Gradle Tasks
If no benchmark mode is specified, the default mode, throughput, is used. It is assumed that users run
-the gradle tasks with './gradlew' from the root of the Kafka project.
-
-* jmh-benchmarks:shadowJar - creates the uber jar required to run the benchmarks.
-
-* jmh-benchmarks:jmh - runs the `clean` and `shadowJar` tasks followed by all the benchmarks.
-
-### Using the jmh script
-If you want to set specific JMH flags or only run a certain test(s) passing arguments via
-gradle tasks is cumbersome. Instead you can use the `jhm.sh` script. NOTE: It is assumed users run
-the jmh.sh script from the jmh-benchmarks module.
+the gradle tasks with `./gradlew` from the root of the Kafka project.
-* Run a specific test setting fork-mode (number iterations) to 2 :`./jmh.sh -f 2 LRUCacheBenchmark`
+* `jmh-benchmarks:shadowJar` - creates the uber jar required to run the benchmarks.
-* By default all JMH output goes to stdout. To run a benchmark and capture the results in a file:
-`./jmh.sh -f 2 -o benchmarkResults.txt LRUCacheBenchmark`
-NOTE: For now this script needs to be run from the jmh-benchmarks directory.
-
-### Running JMH outside of gradle
-The JMH benchmarks can be run outside of gradle as you would with any executable jar file:
-`java -jar /jmh-benchmarks/build/libs/kafka-jmh-benchmarks-all.jar -f2 LRUCacheBenchmark`
+* `jmh-benchmarks:jmh` - runs the `clean` and `shadowJar` tasks followed by all the benchmarks.
### JMH Options
Some common JMH options are:
+
```text
-
+
-e Benchmarks to exclude from the run.
-
+
-f How many times to fork a single benchmark. Use 0 to
disable forking altogether. Warning: disabling
forking may have detrimental impact on benchmark
and infrastructure reliability, you might want
- to use different warmup mode instead.
-
+ to use different warmup mode instead.
+
+ -i Number of measurement iterations to do. Measurement
+ iterations are counted towards the benchmark score.
+ (default: 1 for SingleShotTime, and 5 for all other
+ modes)
+
+ -l List the benchmarks that match a filter, and exit.
+
+ -lprof List profilers, and exit.
+
-o Redirect human-readable output to a given file.
-
-
-
- -v Verbosity mode. Available modes are: [SILENT, NORMAL,
- EXTRA]
+
+ -prof Use profilers to collect additional benchmark data.
+ Some profilers are not available on all JVMs and/or
+ all OSes. Please see the list of available profilers
+ with -lprof.
+
+ -v Verbosity mode. Available modes are: [SILENT, NORMAL,
+ EXTRA]
+
+ -wi Number of warmup iterations to do. Warmup iterations
+ are not counted towards the benchmark score. (default:
+ 0 for SingleShotTime, and 5 for all other modes)
```
+
To view all options run jmh with the -h flag.
diff --git a/jmh-benchmarks/jmh.sh b/jmh-benchmarks/jmh.sh
index e59634bdffe7f..25a40e2fea160 100755
--- a/jmh-benchmarks/jmh.sh
+++ b/jmh-benchmarks/jmh.sh
@@ -35,7 +35,7 @@ $gradleCmd -q :jmh-benchmarks:clean :jmh-benchmarks:shadowJar
echo "gradle build done"
-echo "running JMH with args [$@]"
+echo "running JMH with args: $@"
java -jar ${libDir}/kafka-jmh-benchmarks-all.jar "$@"