The agent is supported on Red Hat® OpenShift® 4.5 and newer.
The agent can be installed in your cluster using a set of YAML files we provide. These files contain the minimum necessary OpenShift objects and settings to run the agent. Teams should review and modify these YAML files for the specific needs of their clusters.
- LogDNA Account - Create an account with LogDNA by following our quick start guide.
- LogDNA Ingestion Key - You can find an ingestion key at the top of your account's Add a Log Source page.
- OpenShift cluster running Kubernetes 1.9 or greater.
- Local clone of this repository.
- Navigate to the root directory of the cloned `logdna-agent-v2` repository.
- Run the following commands to create and configure a new project, secret, and service account:
oc new-project logdna-agent
oc create serviceaccount logdna-agent
oc create secret generic logdna-agent-key --from-literal=logdna-agent-key=<YOUR LOGDNA INGESTION KEY>
oc adm policy add-scc-to-user privileged system:serviceaccount:logdna-agent:logdna-agent
- Create the remaining resources:
oc apply -f k8s/agent-resources-openshift.yaml
- Monitor the pods for startup success:
foo@bar:~$ oc get pods --watch
NAME READY STATUS RESTARTS AGE
logdna-agent-jb2rg 1/1 Running 0 7s
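If a pod does not reach `Running`, a quick sanity check (not part of the official steps, just a suggestion) is to confirm that the objects created earlier actually exist in the project:
oc get serviceaccount,secret -n logdna-agent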
⚠️ By default the agent will run as root. To run the agent as a non-root user, refer to the section Run as Non-Root below.
Note: To run as non-root, your OpenShift container must still be marked as privileged.
There are two components, the configuration and the image, that can be upgraded independently of each other. While not strictly required, we always recommend upgrading both components together.
Not every version update of the agent makes a change to our supplied configuration YAML files. If there are changes, they will be outlined on the release page.
Depending on what version of the agent configuration you're using, different steps are required to update it. If you are unsure which version of the configuration you have, you can always check the `app.kubernetes.io/version` label of the DaemonSet:
foo@bar:~$ oc describe daemonset -l app.kubernetes.io/name=logdna-agent
Name: logdna-agent
Selector: app=logdna-agent
Node-Selector: <none>
Labels: app.kubernetes.io/instance=logdna-agent
app.kubernetes.io/name=logdna-agent
app.kubernetes.io/version=2.2.0
...
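If you prefer a one-liner over scanning the describe output, the version label can also be read with a JSONPath query (a convenience sketch; it assumes the DaemonSet is named logdna-agent and carries the label):
oc get daemonset logdna-agent -o jsonpath="{.metadata.labels['app\.kubernetes\.io/version']}"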
Older versions of our configurations do not provide these labels. In that case, each upgrade path below provides an example of each configuration to compare to what's running on your cluster.
- Example Configuration YAML Files:
- Differences: The configuration is lacking a number of new OpenShift objects. It also uses a removed environment variable, `USEJOURNALD`, for controlling journald monitoring.
- Upgrade Steps:
  - If you have changes you want to persist to the new DaemonSet, back up the old DaemonSet:
    - Run `oc get daemonset -o yaml logdna-agent > old-logdna-agent-daemon-set.yaml`.
    - Copy any desired changes from `old-logdna-agent-daemon-set.yaml` to the DaemonSet object in `k8s/agent-resources-openshift.yaml`.
  - If you want to continue using journald, follow the steps for enabling journald monitoring on the agent.
  - Overwrite the DaemonSet as well as create the new OpenShift objects; run `oc apply -f k8s/agent-resources-openshift.yaml`.
⚠️ Exporting OpenShift objects with `oc get <resource> -o yaml` includes extra information about the object's state. This data does not need to be copied over to the new YAML file.
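As a rough guide (illustrative, not exhaustive), the server-populated fields that are safe to drop from an exported object look like this:
metadata:
  creationTimestamp: ...  # set by the API server
  resourceVersion: ...    # internal bookkeeping
  uid: ...                # internal bookkeeping
status: {}                # the entire status section is runtime state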
The image contains the actual agent code that is run on the Pods created by the DaemonSet. New versions of the agent always strive for backwards compatibility with old configuration versions. Any breaking changes will be outlined in the change log. We always recommend upgrading to the latest configuration to guarantee access to new features.
The upgrade path for the image depends on which image tag you are using in your DaemonSet.
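If you're unsure which tag is currently in use, you can read it out of the pod template (assuming the DaemonSet is named logdna-agent, as in the supplied YAML):
oc get daemonset logdna-agent -o jsonpath='{.spec.template.spec.containers[0].image}'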
If your DaemonSet is configured with `logdna/logdna-agent:3`, or some other major version tag, then restarting your Pods will trigger them to pull down the latest version of the LogDNA agent image under that major version (in this example, `3`):
oc rollout restart daemonset logdna-agent
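Keep in mind that a restart only pulls a newer image for the same floating tag if the container's imagePullPolicy is Always; you can check what your DaemonSet specifies with:
oc get daemonset logdna-agent -o jsonpath='{.spec.template.spec.containers[0].imagePullPolicy}'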
Otherwise, if your DaemonSet is configured with a different tag (e.g. `logdna/logdna-agent:3.5.1`), you'll need to update the image and tag, which will trigger a rollover of all the pods:
oc patch daemonset logdna-agent --type json -p '[{"op":"replace","path":"/spec/template/spec/containers/0/image","value":"logdna/logdna-agent:3.5.1"}]'
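Either way, you can watch the rollout finish before moving on (a standard check, not specific to the agent):
oc rollout status daemonset logdna-agent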
The specific tag you should use depends on your requirements; we offer a list of tags of varying specificity:

- `3` - Updates with each minor and patch version update under `3.x.x`.
- `3.5` - Updates with each patch version update under `3.5.x`.
- `3.5.1` - Targets a specific version of the agent.
Note: This list isn't exhaustive; for a full list, check out the logdna-agent Docker Hub page.
The default configuration places all of the OpenShift objects in a unique project. To completely remove all traces of the agent, simply delete the `logdna-agent` project within the Web UI.

Note: OpenShift has no way to delete projects with the `oc` CLI. View OpenShift's documentation for managing projects.

If you're sharing the project with other applications, and thus need to leave the project in place, you can instead remove all traces of the agent by deleting its objects with a label filter. You'll also need to remove the `logdna-agent-key` secret and the `logdna-agent` service account, neither of which has a label:
oc api-resources --verbs=list --namespaced -o name | xargs -n 1 oc get --show-kind --ignore-not-found -l app.kubernetes.io/name=logdna-agent -o name | xargs oc delete
oc delete secret logdna-agent-key
oc delete serviceaccount logdna-agent
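To verify that everything was removed, you can re-run the label query without the delete; it should print nothing:
oc api-resources --verbs=list --namespaced -o name | xargs -n 1 oc get --show-kind --ignore-not-found -l app.kubernetes.io/name=logdna-agent -o name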
By default the agent is configured to run as root; however, the DaemonSet can be modified to run the agent as a non-root user.
Note: To run as non-root the agent container must still be marked as privileged.
This is accomplished through Linux capabilities, turning the agent binary into a "capability-dumb binary." The binary is given `CAP_DAC_READ_SEARCH` to read all files on the file system. The image already comes with this change and the necessary user and group. The only required step is configuring the agent DaemonSet to run as the user and group `5000:5000`.
Add two new fields, `runAsUser` and `runAsGroup`, to the `securityContext` section found in the `logdna-agent` container in the `logdna-agent` DaemonSet inside of `k8s/agent-resources-openshift.yaml` [`spec.template.spec.containers.0.securityContext`]:
securityContext:
  runAsUser: 5000
  runAsGroup: 5000
Apply the updated configuration to your cluster:
oc apply -f k8s/agent-resources-openshift.yaml
Alternatively, update the DaemonSet configuration directly by using the following patch command:
oc patch daemonset logdna-agent --type json -p '[{"op":"add","path":"/spec/template/spec/containers/0/securityContext/runAsUser","value":5000},{"op":"add","path":"/spec/template/spec/containers/0/securityContext/runAsGroup","value":5000}]'
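To confirm the fields took effect, you can print the container's securityContext (a quick check; assumes the DaemonSet is named logdna-agent):
oc get daemonset logdna-agent -o jsonpath='{.spec.template.spec.containers[0].securityContext}'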
To avoid possible duplication or skipping of log messages during agent restart or upgrade, the agent stores its current file offsets on the host node's filesystem, using a hostPath volume.
The host directory must be writable by the user or group specified in the securityContext. If the directory does not already exist with the correct permissions, Kubernetes will create it without write permissions for the agent user, in which case the permissions must be set before the agent starts. When running as non-root, the agent pod does not have permission to do this itself, so an init container may be used.
Below is an example manifest section that can be added to the pod specification alongside the containers array. It assumes the agent user and group are both 5000 and the volume is mounted at `/var/lib/logdna`:
initContainers:
  - name: volume-mount-permissions-fix
    image: busybox
    command: ["sh", "-c", "chmod -R 775 /var/lib/logdna && chown -R 5000:5000 /var/lib/logdna"]
    volumeMounts:
      - name: varliblogdna
        mountPath: /var/lib/logdna
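The init container references a volume named `varliblogdna`. If your manifest does not already define it, the matching hostPath entry in the pod specification might look like the following sketch (the `DirectoryOrCreate` type is our assumption; the supplied YAML may use a different name or path):
volumes:
  - name: varliblogdna
    hostPath:
      path: /var/lib/logdna
      type: DirectoryOrCreate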
By default, the agent only captures logs generated by the containers running on the OpenShift cluster's container runtime. It does not, however, collect system component logs from services running directly on the node, such as the kubelet and the container runtime itself. With some configuration on both the node and the agent, these journald logs can be exposed from the node to the agent.
The agent can access journald logs from the host node by mounting the logs from `/var/log/journal`. This requires enabling journald log storage on the node as well as configuring the agent to monitor the directory.
Follow OpenShift's documentation for enabling journald on your nodes.
To enable journald monitoring in the agent, add a new environment variable, `LOGDNA_JOURNALD_PATHS`, with a value of `/var/log/journal`, to the logdna-agent DaemonSet:
- If you are updating an already deployed agent:
  - Patch the existing agent by running:

oc patch daemonset -n logdna-agent logdna-agent --type json -p '[{"op":"add","path":"/spec/template/spec/containers/0/env/-","value":{"name":"LOGDNA_JOURNALD_PATHS","value":"/var/log/journal"}}]'

- If you are modifying a YAML file:
  - Add the new environment variable to the env section of the DaemonSet object in `k8s/agent-resources-openshift.yaml` [`spec.template.spec.containers.0.env`]; the snippet below shows the result.
  - Apply the new configuration file; run `oc apply -f k8s/agent-resources-openshift.yaml`.
env:
  - name: LOGDNA_JOURNALD_PATHS
    value: /var/log/journal
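After applying either change, you can confirm the variable is set on the DaemonSet (a convenience check; `oc set env` with `--list` prints the configured environment):
oc set env daemonset/logdna-agent --list -n logdna-agent | grep LOGDNA_JOURNALD_PATHS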