Commit

remove referencing files in kubernetes section
maycmlee committed Oct 1, 2024
1 parent bdd9fbf commit 6cf2550
Showing 4 changed files with 2 additions and 48 deletions.
40 changes: 1 addition & 39 deletions content/en/observability_pipelines/advanced_configurations.md
@@ -26,7 +26,7 @@ further_reading:

## Overview

This document goes over [bootstrapping the Observability Pipelines Worker](#bootstrap-options) and [referencing files in Kubernetes](#referencing-files-in-kubernetes).
This document goes over bootstrapping the Observability Pipelines Worker.

## Bootstrap Options

@@ -90,44 +90,6 @@ The following is a list of bootstrap options, their related pipeline environment
: &nbsp;&nbsp;&nbsp;&nbsp;proxy:<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;enabled: true<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;https: https://foo.bar:3128
: <b>Note</b>: The `DD_PROXY_HTTP(S)` and `HTTP(S)_PROXY` environment variables need to be already exported in your environment for the Worker to resolve them. They cannot be prepended to the Worker installation script.
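A minimal sketch of what the note above means in practice (the proxy URL is the example value from the table; the installation script command itself is elided):

```shell
# Export the proxy variables first, in the same shell session that will
# run the Worker installation script.
export DD_PROXY_HTTPS="https://foo.bar:3128"
export HTTPS_PROXY="https://foo.bar:3128"

# Then run the Worker installation script as a separate command.
# Prepending the variables inline to the script invocation does not work;
# they must already be exported in the environment.
```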

## Referencing files in Kubernetes

If you are referencing files in Kubernetes for Google Cloud Storage authentication, TLS certificates for certain sources, or an enrichment table processor, you need to use `volumeMounts[*].subPath` to mount files from a `configMap` or `secret`.

For example, if you have a `secret` defined as:

```yaml
apiVersion: v1
kind: Secret
metadata:
name: my-secret
type: Opaque
data:
credentials1.json: bXktc2VjcmV0LTE=
credentials2.json: bXktc2VjcmV0LTI=
```
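The base64 strings in the `data` field are the encoded file contents. A sketch of producing them, using a placeholder value in place of your real credential contents:

```shell
# Base64-encode the credential contents for the Secret's data field.
# "my-secret-1" is a placeholder; in practice encode the file itself,
# for example: base64 -w0 < credentials1.json   (GNU coreutils)
printf '%s' 'my-secret-1' | base64
```

Alternatively, `kubectl create secret generic my-secret --from-file=credentials1.json --from-file=credentials2.json` builds the same Secret and handles the encoding for you.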

Then you need to override `extraVolumes` and `extraVolumeMounts` in the `values.yaml` file to mount the secret files to Observability Pipelines Worker pods using `subPath`:

```yaml
# extraVolumes -- Specify additional Volumes to use.
extraVolumes:
- name: my-secret-volume
secret:
secretName: my-secret
# extraVolumeMounts -- Specify Additional VolumeMounts to use.
extraVolumeMounts:
- name: my-secret-volume
mountPath: /var/lib/observability-pipelines-worker/config/credentials1.json
subPath: credentials1.json
- name: my-secret-volume
mountPath: /var/lib/observability-pipelines-worker/config/credentials2.json
subPath: credentials2.json
```

**Note**: If you override the `datadog.dataDir` parameter, you need to override the `mountPath` as well.

## Further reading

{{< partial name="whats-next/whats-next.html" >}}
@@ -11,8 +11,6 @@

To authenticate the Observability Pipelines Worker for Google Cloud Storage, contact your Google Security Operations representative for a Google Developer Service Account Credential. This credential is a JSON file and must be placed under `DD_OP_DATA_DIR/config`. See [Getting API authentication credential][9092] for more information.
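A sketch of placing the credential file, under stated assumptions: the filename is a placeholder, and the real `DD_OP_DATA_DIR` on a Linux host is typically `/var/lib/observability-pipelines-worker` (a writable stand-in path is used here for illustration):

```shell
# Stand-in for the Worker's data directory; adjust to your installation.
DD_OP_DATA_DIR="${DD_OP_DATA_DIR:-/tmp/observability-pipelines-worker}"
mkdir -p "$DD_OP_DATA_DIR/config"

# "credentials.json" stands in for the JSON credential from your Google
# Security Operations representative; an empty placeholder is created
# here so the copy step is self-contained.
printf '{}' > credentials.json
cp credentials.json "$DD_OP_DATA_DIR/config/"
```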

**Note**: If you are installing the Worker in Kubernetes, see [Referencing files in Kubernetes][9097] for information on how to reference the credentials file.

#### Connect the storage bucket to Datadog Log Archives

1. Navigate to Datadog [Log Forwarding][9094].
@@ -1,8 +1,5 @@
To authenticate the Observability Pipelines Worker for Google Chronicle, contact your Google Security Operations representative for a Google Developer Service Account Credential. This credential is a JSON file and must be placed under `DD_OP_DATA_DIR/config`. See [Getting API authentication credential][10001] for more information.

**Note**: If you are installing the Worker in Kubernetes, see [Referencing files in Kubernetes][10004] for information on how to reference the credentials file.


To set up the Worker's Google Chronicle destination:

1. Enter the customer ID for your Google Chronicle instance.
@@ -13,5 +10,4 @@ To set up the Worker's Google Chronicle destination:
**Note**: Logs sent to the Google Chronicle destination must have ingestion labels. For example, if the logs are from an A10 load balancer, they must have the ingestion label `A10_LOAD_BALANCER`. See Google Cloud's [Support log types with a default parser][10003] for a list of available log types and their respective ingestion labels.

[10001]: https://cloud.google.com/chronicle/docs/reference/ingestion-api#getting_api_authentication_credentials
[10003]: https://cloud.google.com/chronicle/docs/ingestion/parser-list/supported-default-parsers#with-default-parser
[10004]: /observability_pipelines/advanced_configurations/#referencing-files-in-kubernetes
[10003]: https://cloud.google.com/chronicle/docs/ingestion/parser-list/supported-default-parsers#with-default-parser
@@ -8,7 +8,6 @@ To set up the enrichment table processor:
- For the **File** type:
1. Enter the file path.
1. Enter the column name. The column name in the enrichment table is used for matching the source attribute value. See the [Enrichment file example](#enrichment-file-example).
<br>**Note**: If you are installing the Worker in Kubernetes, see [Referencing files in Kubernetes][10011] for information on how to reference the file.
- For the **GeoIP** type, enter the GeoIP path.

##### Enrichment file example
@@ -37,4 +36,3 @@ merchant_info {
"state":"Colorado"
}
```
[10011]: /observability_pipelines/advanced_configurations/#referencing-files-in-kubernetes
