This project showcases two features of the JDK:
- the ability to create a custom runtime for your application
- native support for Alpine Linux
Both boil down to more efficient container usage, space-wise.
Creating a custom runtime for your app means that you can drop all the modules you don't need from your packaging. In this example, why should a server application ship with all the user interface toolkits that Java includes (AWT, Swing, and JavaFX)?
Being able to put that custom runtime natively on Alpine Linux gives you a really slim end result. People routinely use Alpine Linux for small containers, and the community has maintained a patched JRE for Java folks. That was fine for a long time, but now we no longer have to use an unofficial JRE.
The end result will be a pretty small, but functional, Docker image for a Java app:
$ docker images
REPOSITORY                       TAG            IMAGE ID      CREATED         SIZE
vmj0/http-server-linux-java11    1.0-SNAPSHOT   ef252ec83afd  11 minutes ago  101MB
vmj0/http-server-alpine-java11   1.0-SNAPSHOT   7854466a32f3  12 minutes ago  35.2MB
alpine                           3.6            77144d8c6bdc  3 months ago    3.97MB
vbatts/slackware                 13.37          777ef0cd51ed  6 months ago    69.8MB
As can be seen above, vmj0/http-server-alpine-java11 is less than half the size of vmj0/http-server-linux-java11. The latter is based on a typical Linux distro, vbatts/slackware in this case.
Below, I will show you the commands you can use to manually arrive at the above result. It will be a lot of typing; in reality you would use a build tool to automate it.
You will need a recent JDK, version 9 or above. I'm using JDK 9 even though it has been superseded by JDK 10.
$ java -version
openjdk version "9.0.4"
OpenJDK Runtime Environment (build 9.0.4+11)
OpenJDK 64-Bit Server VM (build 9.0.4+11, mixed mode)
As you probably know, the JDK was modularized in Java 9. Let's see how many modules this JDK alone has:
$ java --list-modules | wc -l
73
In order to follow these steps, you will also need Docker and git installed.
This project has just two Java files: the module info and the main class.
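To give you an idea of what we're compiling, here is a rough sketch of the two files. This is a minimal version consistent with the behavior shown in the Test section below, not the verbatim sources; for instance, the real main class also answers HEAD and OPTIONS requests.

// src/main/java/module-info.java
// The only JDK module the app needs besides java.base:
module http.server {
    requires jdk.httpserver;
}

// src/main/java/fi/linuxbox/http/Main.java
package fi.linuxbox.http;

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class Main {
    public static void main(String[] args) throws Exception {
        // Bind the JDK's built-in HTTP server to port 9000
        HttpServer server = HttpServer.create(new InetSocketAddress(9000), 0);
        server.createContext("/", exchange -> {
            byte[] body = "Hello World\n".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-type", "text/plain; charset=utf-8");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}

The compilation is nothing new: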
$ rm -rf build/classes/main
$ mkdir -p build/classes/main
$ javac -d build/classes/main \
    src/main/java/module-info.java \
    src/main/java/fi/linuxbox/http/Main.java
Since we have the module-info in there, we can put the class files on the module path, as opposed to the classpath, and run the module:
$ java --module-path build/classes/main \
    -m http.server/fi.linuxbox.http.Main
After testing (see below), use CTRL-C or similar to interrupt the application.
While the server is running, hit it with requests. E.g.:
$ curl http://localhost:9000/
Hello World
This little server also supports the HEAD method:
$ curl --head http://localhost:9000/
HTTP/1.1 200 OK
Date: Sun, 18 Jun 2017 15:37:11 GMT
Content-type: text/plain; charset=utf-8
And OPTIONS (note the Allow header in the response):
$ curl -v -X OPTIONS http://localhost:9000/
*   Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 9000 (#0)
> OPTIONS / HTTP/1.1
> Host: localhost:9000
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Sun, 18 Jun 2017 15:37:21 GMT
< Allow: GET, OPTIONS
< Content-type: text/plain; charset=utf-8
< Content-length: 0
<
* Connection #0 to host localhost left intact
The class files by themselves are a bit inconvenient to ship, so let’s build a modular JAR:
$ rm -rf build/jmods
$ mkdir -p build/jmods
$ jar --create --file build/jmods/http-server-1.0-SNAPSHOT.jar \
    --main-class fi.linuxbox.http.Main \
    -C build/classes/main .
As you can see, there's nothing new here: our javac and jar invocations look exactly like they used to.
Now running the application looks like this:
$ java --module-path build/jmods -m http.server
We’re just putting the JAR, instead of the class files, on the module path. Also, there’s no need to specify the main class anymore.
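If you want to see what the jar tool recorded, it can describe the module for you; the output should include, among other things, the main class:

$ jar --describe-module --file build/jmods/http-server-1.0-SNAPSHOT.jar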
To verify that it works, you can go through the Test section again.
The modular JAR is a fine distribution form for libraries, but for applications we can do a bit better.
One of the coolest features that came with JDK 9, IMHO, is the jlink command. It allows you to build a custom runtime just for your app.
For convenience, let's define an environment variable that points to the JDK modules. You will need to adjust the directory here, since this one is specific to my setup. You can find the jmods directory within your Java installation.
$ export TARGET_JMODS=/Users/vmj/.sdkman/candidates/java/9.0.4-openjdk/jmods
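On macOS, if your JDK is installed where the system can find it (mine is managed by SDKMAN!, so this doesn't apply to me), you might locate the directory with the java_home utility instead:

$ export TARGET_JMODS="$(/usr/libexec/java_home -v 9)/jmods"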
Note: While you can use JDK 9 to compile and package (as a JAR) your code, you can't use the JDK 9 jlink if you are building a custom runtime for Java 10 or 11. The major version number of jlink has to match the major version number of the JDK modules. We will address this further down.
Now, here's the basic usage of jlink:
$ jlink --module-path build/jmods:$TARGET_JMODS \
    --add-modules http.server \
    --output build/jre/native
It will analyze the http.server module dependencies transitively, and spit out a small runtime:
$ du -csh build/jre/native
 35M    build/jre/native
 35M    total
$ build/jre/native/bin/java --list-modules
http.server
java.base@9.0.4
java.logging@9.0.4
jdk.httpserver@9.0.4
So now you've got a 35MB directory that includes your app, its dependencies, and a java executable. You're down from 74 modules (73 for the JDK and 1 for your app) to just 4 modules. Nice :)
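If you're curious where that module list comes from, jdeps can show the same dependency analysis that jlink performs. For the modular JAR built above, its summary should name java.base and jdk.httpserver as the direct dependencies (java.logging comes in transitively via jdk.httpserver):

$ jdeps -s build/jmods/http-server-1.0-SNAPSHOT.jar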
Turns out that you can shrink the custom runtime even more. Let’s build it again with some more flags:
$ rm -rf build/jre/native
$ jlink --module-path build/jmods:$TARGET_JMODS \
    --strip-debug --vm server --compress 2 \
    --class-for-name \
    --exclude-jmod-section=headers --exclude-jmod-section=man \
    --dedup-legal-notices=error-if-not-same-content \
    --add-modules http.server \
    --output build/jre/native
$ du -csh build/jre/native
 21M    build/jre/native
 21M    total
That’s more than 40% off of an already small base :)
Just to check that things are still working, you can run the app using the custom runtime like this:
$ ./build/jre/native/bin/java -m http.server
And the Test section should look familiar by now.
Now you could zip that directory and send it to everyone who's using the same platform as you are. (That's why I chose the name native.)
In order to be platform agnostic (this is a Java app, after all), we can Dockerize the custom runtime.
Note: The custom runtime needs to be cross-compiled for Linux, because that's what's running in the container. Don't worry, the JDK folks have made it child's play :)
Most Linux distributions use the GNU C library, known as glibc. Alpine Linux, in order to shrink the size of the distribution, is based on the musl C library. Hence, the "normal" Linux JDK builds are not compatible with it, because they are linked against glibc.
Luckily, Project Portola ported the JDK to Alpine Linux, and their effort was already included in JDK 9 EA build 171, released at the beginning of June 2017. As of this writing, there no longer seem to be JDK 9 or 10 builds available, but the latest JDK 11 build for Alpine Linux is available on the Early Access page.
So, in order to cross-compile, you will need to download the target JDK; a JRE is not enough. Also, you will need a matching native JDK for the linking to work.
You don’t have to install these JDKs, just extract them somewhere.
Head on to http://jdk.java.net/11/ and grab your native JDK and the Alpine Linux JDK. If you want to compare to a Linux distribution that is based on glibc, grab the Linux JDK, too. (Unless that happens to be your native platform.)
Then extract the JDK(s) somewhere. For example, I've got the Alpine JDK in /Users/vmj/jdks/x64-musl/ and the Linux JDK in /Users/vmj/jdks/x64-linux/.
$ cd ~/jdks/x64-musl
$ tar xzf openjdk-11-ea+11_linux-x64-musl_bin.tar.gz
$ cd ../x64-linux
$ tar xzf openjdk-11-ea+12_linux-x64_bin.tar.gz
$ cd ../x64-osx
$ tar xzf openjdk-11-ea+12_osx-x64_bin.tar.gz
Point your TARGET_JMODS env var to the target JDK. Here I'm pointing it to the Alpine Linux modules:
$ export TARGET_JMODS=/Users/vmj/jdks/x64-musl/jdk-11/jmods
And for convenience, point the JLINK env var to the matching native jlink. Since I have JDK 9 as my regular JDK, I point JLINK to the JDK 11 version I extracted above:
$ export JLINK=/Users/vmj/jdks/x64-osx/jdk-11.jdk/Contents/Home/bin/jlink
Now go back to the project directory and build the custom runtime for Alpine:
$ $JLINK --module-path build/jmods:$TARGET_JMODS \
    --strip-debug --vm server --compress 2 \
    --class-for-name \
    --exclude-jmod-section=headers --exclude-jmod-section=man \
    --dedup-legal-notices=error-if-not-same-content \
    --add-modules http.server \
    --output build/jre/alpine
Note that we're now pointing the module path to the target JDK instead of that of the build host. jlink, which we launch from the native JDK, will notice that we're cross-compiling, and it will spit out a different result. We're also changing the output directory, just so we can have multiple custom runtimes.
You can optionally run jlink again with TARGET_JMODS pointing to the Linux JDK and with the option --output build/jre/linux. That will give you a glibc-based runtime for comparison.
$ export TARGET_JMODS=/Users/vmj/jdks/x64-linux/jdk-11/jmods
$ $JLINK --module-path build/jmods:$TARGET_JMODS \
    --strip-debug --vm server --compress 2 \
    --class-for-name \
    --exclude-jmod-section=headers --exclude-jmod-section=man \
    --dedup-legal-notices=error-if-not-same-content \
    --add-modules http.server \
    --output build/jre/linux
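As a sanity check, you can confirm that the two runtimes really are linked differently; file(1) should report a musl dynamic loader as the interpreter for the Alpine build, and the glibc one for the Linux build:

$ file build/jre/alpine/bin/java build/jre/linux/bin/java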
Let’s create some simplistic Dockerfiles for our images:
$ rm -rf build/dockerfile
$ mkdir -p build/dockerfile
$ sed -e 's BASE_IMAGE alpine:3.6 ' Dockerfile.in >build/dockerfile/alpine
$ sed -e 's BASE_IMAGE vbatts/slackware:13.37 ' Dockerfile.in >build/dockerfile/linux
The second sed invocation is optional. In it, you could also use debian:stretch-slim or pretty much any glibc-based Linux distribution.
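For reference, given the BASE_IMAGE substitution above and the build context assembled below, Dockerfile.in presumably looks roughly like this (the exact COPY destination and entrypoint are my assumptions; the file in the repository is the authority):

# a sketch of Dockerfile.in; BASE_IMAGE is the placeholder replaced by sed
FROM BASE_IMAGE
# copy the jre directory from the build context (assumed destination)
COPY jre /opt/jre
EXPOSE 9000
ENTRYPOINT ["/opt/jre/bin/java", "-m", "http.server"]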
Now we can do the docker dance. First create a docker build context:
$ rm -rf build/docker
$ mkdir -p build/docker
Then copy the custom runtime and the Dockerfile to the build context:
$ cp -a build/jre/alpine build/docker/jre
$ cp build/dockerfile/alpine build/docker/Dockerfile
Now you can upload the build context to the docker daemon and build the image:
$ (cd build/docker && docker build --tag vmj0/http-server-alpine-java11:1.0-SNAPSHOT .)
And, you can do the same dance for Linux, just replacing alpine with linux in three places:
$ rm -rf build/docker
$ mkdir -p build/docker
$ cp -a build/jre/linux build/docker/jre
$ cp build/dockerfile/linux build/docker/Dockerfile
$ (cd build/docker && docker build --tag vmj0/http-server-linux-java11:1.0-SNAPSHOT .)
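At this point you can reproduce the image size comparison from the beginning of this article:

$ docker images | grep http-server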
Running the container is old news:
$ docker run --rm -it -p9000:9000 vmj0/http-server-alpine-java11:1.0-SNAPSHOT
And checking that it works is… yes, in the Test section.
All of the above is very tedious. But now that the process is known, you can use whatever build tool you're comfortable with. Here are a couple of examples.
You can build the Alpine Docker image in one command using the Dockerfile.multistage, which was contributed by @StevenACoffman (thanks!).
For example:
$ docker build -f Dockerfile.multistage --tag vmj0/http-server-multistage:1.0-SNAPSHOT .
You probably noticed the Makefile. It's optional, since I've shown you above how to do things, but it contains all the above commands.
If you've got GNU make, try invoking make help. Or do the following (adjusting the variables, naturally) and read the help later:
$ unset TARGET_JMODS
$ export NATIVE_JMODS=/Users/vmj/jdks/x64-osx/jdk-11.jdk/Contents/Home/jmods
$ export ALPINE_JMODS=/Users/vmj/jdks/x64-musl/jdk-11/jmods
$ export LINUX_JMODS=/Users/vmj/jdks/x64-linux/jdk-11/jmods
$ for target in native alpine linux ; do make jre TARGET=$target ; done
$ export DOCKER_NAME=vmj0
$ for target in alpine linux ; do make dockerImage TARGET=$target ; done
Have fun!