This repository contains various artifacts to create Infinispan server and CLI images.
Currently we provide the following images which are all based upon the ubi-minimal base image:
- infinispan/server - Infinispan is executed using the Java 11 OpenJDK JVM.
- infinispan/server-native - Infinispan is executed natively.
- infinispan/cli - A natively compiled version of the Infinispan CLI.
The server and server-native images are configured the same. The server instructions throughout these docs are applicable to both images unless otherwise stated.
To run the CLI image, execute:
docker run -it infinispan/cli
The image's endpoint is the CLI binary, so it's possible to pass the usual CLI args straight to the image. For example:
docker run -it infinispan/cli --connect http://<server-url>:11222
You can find complete documentation for the CLI in our CLI User Guide.
To get started with the Infinispan server on your local machine, simply execute:
docker run -p 11222:11222 infinispan/server
or
podman run --net=host -p 11222:11222 infinispan/server
When utilising podman, the --net=host option must be passed when not executing as sudo.
By default the image has authentication enabled on all exposed endpoints. When executing the above command, the image automatically generates a username/password combination with the "admin" role, prints the values to stdout and then starts the Infinispan server with the authenticated Hot Rod and REST endpoints exposed on port 11222. Therefore, it's necessary to utilise the printed credentials when attempting to access the exposed endpoints via clients.
It's also possible to provide an admin username/password combination via environment variables like so:
docker run -p 11222:11222 -e USER="admin" -e PASS="changeme" infinispan/server
We recommend utilising the auto-generated credentials or the USER & PASS env variables for initial development only. Providing authentication and authorization configuration via an Identities Batch file allows for much greater control.
When connecting a HotRod client to the image, the following SASL properties must be configured on your client (with the username and password properties changed as required):
infinispan.client.hotrod.auth_username=admin
infinispan.client.hotrod.auth_password=changeme
infinispan.client.hotrod.sasl_mechanism=DIGEST-MD5
User identities and roles can be defined by providing a CLI batch file via the IDENTITIES_BATCH env variable.
All of the CLI commands defined in this file are executed before the server is started, so only offline commands are possible; otherwise the container will fail to start. For example, including create cache ... in the batch would fail as it requires a connection to a running Infinispan server.
Infinispan provides implicit roles for some users.
TIP: Check the Infinispan documentation to learn more about implicit roles and authorization.
Below is an example Identities Batch CLI file, identities.batch, that defines four users and their roles:
user create "Alan Shearer" -p "striker9" -g admin
user create "observer" -p "secret1"
user create "deployer" -p "secret2"
user create "Rigoberta Baldini" -p "secret3" -g monitor
To run the image using a local identities.batch, execute:
docker run -v $(pwd):/user-config -e IDENTITIES_BATCH="/user-config/identities.batch" -p 11222:11222 infinispan/server
The Infinispan image passes all container arguments to the created server, so it's possible to configure the server in the same manner as a non-containerised deployment.
Below shows how a local directory can be mounted as a docker volume in order to run the Infinispan image with the configuration file my-infinispan-config.xml located in the user's current working directory.
docker run -v $(pwd):/user-config -e IDENTITIES_BATCH="/user-config/identities.batch" -p 11222:11222 infinispan/server -c /user-config/my-infinispan-config.xml
When running in a managed environment such as Kubernetes, it is not possible to utilise multicasting for initial node discovery, therefore we must utilise the JGroups DNS_PING protocol to discover cluster members. To enable this, we must provide the jgroups.dns.query property and configure the kubernetes stack.
To utilise the kubernetes stack with DNS_PING, execute the following command:
docker run -v $(pwd):/user-config infinispan/server --bind-address=0.0.0.0 -Dinfinispan.cluster.stack=kubernetes -Djgroups.dns.query="infinispan-dns-ping.myproject.svc.cluster.local"
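The DNS query in the command above corresponds to a headless Kubernetes Service named infinispan-dns-ping in the myproject namespace whose DNS records resolve to the Infinispan pods. Below is a minimal sketch of such a Service; the pod selector label and the JGroups port are assumptions and must match your actual deployment:
apiVersion: v1
kind: Service
metadata:
  name: infinispan-dns-ping
  namespace: myproject
spec:
  clusterIP: None                  # headless Service so DNS resolves directly to pod IPs
  publishNotReadyAddresses: true   # allow discovery before pods report ready
  selector:
    app: infinispan-server         # assumed pod label, adjust to your deployment
  ports:
    - name: jgroups
      port: 7800                   # assumed JGroups TCP port
      protocol: TCP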
It's possible to provide additional java properties and JVM options to the server images via the JAVA_OPTIONS
env variable.
For example, to quickly configure CORS without providing a server.yaml file, it's possible to do the following:
docker run -e JAVA_OPTIONS="-Dinfinispan.cors.enableAll=https://host.domain:port" infinispan/server
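Since JAVA_OPTIONS accepts both JVM flags and system properties, they can be combined in a single space-separated value. For example (the heap size below is just an illustrative value):
docker run -e JAVA_OPTIONS="-Xmx512m -Dinfinispan.cors.enableAll=https://host.domain:port" infinispan/server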
Deploy artifacts to the server lib directory using the SERVER_LIBS
env variable.
For example, to add the PostgreSQL JDBC driver to the server:
docker run -e SERVER_LIBS="org.postgresql:postgresql:42.3.1" infinispan/server
The SERVER_LIBS variable supports multiple, space-separated artifacts represented as URLs or as Maven coordinates. Archive artifacts in .tar, .tar.gz or .zip format will be extracted. Refer to the CLI install command help to learn about all possible arguments and options.
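For instance, multiple artifacts can be specified together; in the following command the archive URL is a hypothetical placeholder:
docker run -e SERVER_LIBS="org.postgresql:postgresql:42.3.1 https://example.com/custom-libs.tar.gz" infinispan/server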
The image scripts that are used to configure and launch the executables can be debugged by setting the environment variable DEBUG=true as follows:
docker run -e DEBUG=true infinispan/<image-name>
It's also possible to debug the Infinispan server in the image by setting the DEBUG_PORT
environment variable as follows:
docker run -e DEBUG_PORT="*:8787" -p 8787:8787 infinispan/server
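Once the container is running with the debug port exposed, a remote debugger on the host can attach to it. As a minimal sketch, using jdb from the JDK:
jdb -connect com.sun.jdi.SocketAttach:hostname=localhost,port=8787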
In order to keep the image's size as small as possible, we utilise the ubi-minimal image. Consequently, the image does not provide all of the tools that are commonly available in Linux distributions. Below is a list of common tools/recipes that are useful for debugging.
| Task | Command |
|---|---|
| Text editor | vi |
| Get the PID of the java process | ps -fC java |
| Get socket/file information | lsof |
| List all open files excluding network sockets | lsof |
| List all TCP sockets | ss -t -a |
| List all UDP sockets | ss -u -a |
| Network configuration | ip |
| Show unicast routes | ip route |
| Show multicast routes | ip maddress |
It's recommended to utilise Infinispan's REST endpoint in order to determine if the server is ready/live. To do this, you can configure Kubernetes httpGet probes as follows:
livenessProbe:
  httpGet:
    path: /rest/v2/cache-managers/default/health/status
    port: 11222
  failureThreshold: 5
  initialDelaySeconds: 10
  successThreshold: 1
  timeoutSeconds: 10
readinessProbe:
  httpGet:
    path: /rest/v2/cache-managers/default/health/status
    port: 11222
  failureThreshold: 5
  initialDelaySeconds: 10
  successThreshold: 1
  timeoutSeconds: 10
All of our images are created using the Cekit tool. Installation instructions can be found here.
The exact dependencies that you require depend on the "builder" that you want to use to create your image. For example, OSBS has different requirements from Docker.
We leverage cekit descriptor files in order to create the different image types.
- server-openjdk.yaml - Creates the infinispan/server image.
- server-native.yaml - Creates the infinispan/server-native image.
- cli.yaml - Creates the infinispan/cli image with a natively compiled CLI.
- server-dev-native.yaml - Creates the infinispan/server-native image using local artifact paths that must be added to the descriptor.
- cli-dev.yaml - Creates the infinispan/cli image using a local CLI executable that must be added to the descriptor.
We recommend pulling stable image releases from Quay.io or Docker Hub; however, it is also possible to recreate stable releases of an image.
To recreate a given release, it's necessary to check out the corresponding git tag and build using cekit --descriptor <descriptor-file> build <build-engine>.
For example, the following commands will recreate the infinispan/server:11.0.0.Dev05 image.
git checkout 11.0.0.Dev05
cekit --descriptor server-openjdk.yaml build docker
The *-dev-*.yaml descriptors can be used to create local images for development purposes. In order to use these, it's necessary to update the paths of the artifacts in the descriptor and then issue the following command:
BUILD_ENGINE="podman"
DESCRIPTOR="server-dev-native.yaml"
cekit -v --descriptor $DESCRIPTOR build $BUILD_ENGINE
See License.