diff --git a/CHANGELOG.md b/CHANGELOG.md
index 3bde8a8576..9b754161a2 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -10,6 +10,11 @@ internal API changes are not present.
 Main (unreleased)
 -----------------
 
+### Breaking changes
+
+- The default listen port for `otelcol.receiver.opencensus` has changed from
+  4317 to 55678 to align with upstream. (@rfratto)
+
 ### Enhancements
 
 - Add support for importing folders as single module to `import.file`. (@wildum)
@@ -19,6 +24,9 @@ Main (unreleased)
 - Improve converter diagnostic output by including a Footer and removing lower
   level diagnostics when a configuration fails to generate. (@erikbaranowski)
 
+- Increased the alert interval and renamed the `ClusterSplitBrain` alert to `ClusterNodeCountMismatch` in the Grafana
+  Agent Mixin to better match the alert conditions. (@thampiotr)
+
 ### Features
 
 - Added a new CLI flag `--stability.level` which defines the minimum stability
@@ -30,14 +38,27 @@ Main (unreleased)
 - Fix an issue where JSON string array elements were not parsed correctly in
   `loki.source.cloudflare`. (@thampiotr)
 
+- Update `gcp_exporter` to a newer version with a patch for incorrect delta histograms. (@kgeckhart)
+
+### Other changes
+
+- Clustering for Grafana Agent in Flow mode has graduated from beta to stable.
+
+- Resync defaults for `otelcol.processor.k8sattributes` with upstream. (@hainenber)
+
+- Resync defaults for `otelcol.exporter.otlp` and `otelcol.exporter.otlphttp` with upstream. (@hainenber)
+
+v0.40.3 (2024-03-14)
+--------------------
+
+### Bugfixes
+
 - Fix a bug where structured metadata and parsed field are not passed further in
   `loki.source.api` (@marchellodev)
 
 - Change `import.git` to use Git pulls rather than fetches to fix scenarios
   where the local code did not get updated. (@mattdurham)
 
 ### Other changes
 
-- Clustering for Grafana Agent in Flow mode has graduated from beta to stable.
-
 - Upgrade to Go 1.22.1 (@thampiotr)
 
 v0.40.2 (2024-03-05)
diff --git a/docs/developer/release/3-update-version-in-code.md b/docs/developer/release/3-update-version-in-code.md
index 73026115de..6384973cc4 100644
--- a/docs/developer/release/3-update-version-in-code.md
+++ b/docs/developer/release/3-update-version-in-code.md
@@ -40,9 +40,9 @@ The project must be updated to reference the upcoming release tag whenever a new
    - Stable Release example PR [here](https://github.com/grafana/agent/pull/3119)
    - Patch Release example PR [here](https://github.com/grafana/agent/pull/3191)
 
-4. Create a branch from `release/VERSION_PREFIX` for [grafana/agent](https://github.com/grafana/agent).
+4. If one doesn't exist yet, create a branch called `release/VERSION_PREFIX` for [grafana/alloy](https://github.com/grafana/alloy).
 
-5. Cherry pick the commit on main from the merged PR in Step 3 from main into the new branch from Step 4:
+5. Cherry-pick the commit created on `main` by the merged PR in Step 3 into the branch from Step 4:
 
    ```
    git cherry-pick -x COMMIT_SHA
diff --git a/docs/developer/release/8-update-helm-charts.md b/docs/developer/release/8-update-helm-charts.md
index f269a3271c..00dbaaa916 100644
--- a/docs/developer/release/8-update-helm-charts.md
+++ b/docs/developer/release/8-update-helm-charts.md
@@ -16,7 +16,7 @@ Our Helm charts require some version updates as well.
 
 1. Copy the content of the last CRDs into helm-charts.
-   Copy the contents from agent repo `production/operator/crds/` to replace the contents of helm-charts repo `charts/agent-operator/crds`
+   Copy the contents from agent repo `operations/agent-static-operator/crds` to replace the contents of helm-charts repo `charts/agent-operator/crds`
 
 2. Update references of agent-operator app version in helm-charts pointing to release version.
diff --git a/docs/sources/get-started/install/kubernetes.md b/docs/sources/get-started/install/kubernetes.md
index 25f38b1fab..cf60add7c7 100644
--- a/docs/sources/get-started/install/kubernetes.md
+++ b/docs/sources/get-started/install/kubernetes.md
@@ -32,22 +32,47 @@ To deploy {{< param "PRODUCT_ROOT_NAME" >}} on Kubernetes using Helm, run the fo
    helm repo update
    ```
 
+1. Create a namespace for {{< param "PRODUCT_NAME" >}}:
+
+   ```shell
+   kubectl create namespace <NAMESPACE>
+   ```
+
+   Replace the following:
+
+   - _`<NAMESPACE>`_: The namespace to use for your {{< param "PRODUCT_NAME" >}}
+     installation, such as `alloy`.
+
 1. Install {{< param "PRODUCT_ROOT_NAME" >}}:
 
    ```shell
-   helm install <RELEASE_NAME> grafana/grafana-alloy
+   helm install --namespace <NAMESPACE> <RELEASE_NAME> grafana/grafana-alloy
    ```
 
    Replace the following:
 
-   - _`<RELEASE_NAME>`_: The name to use for your {{< param "PRODUCT_ROOT_NAME" >}} installation, such as `grafana-alloy`.
+   - _`<NAMESPACE>`_: The namespace created in the previous step.
+   - _`<RELEASE_NAME>`_: The name to use for your {{< param "PRODUCT_ROOT_NAME" >}} installation, such as `grafana-alloy`.
 
-For more information on the {{< param "PRODUCT_ROOT_NAME" >}} Helm chart, refer to the Helm chart documentation on [Artifact Hub][].
+1. Verify that the {{< param "PRODUCT_NAME" >}} pods are running:
+
+   ```shell
+   kubectl get pods --namespace <NAMESPACE>
+   ```
+
+   Replace the following:
+
+   - _`<NAMESPACE>`_: The namespace used in the previous step.
+
+You have successfully deployed {{< param "PRODUCT_NAME" >}} on Kubernetes using default Helm settings.
+To configure {{< param "PRODUCT_NAME" >}}, refer to the [Configure {{< param "PRODUCT_NAME" >}} on Kubernetes][Configure] guide.
 
 ## Next steps
 
 - [Configure {{< param "PRODUCT_NAME" >}}][Configure]
+- Refer to the [{{< param "PRODUCT_NAME" >}} Helm chart documentation on Artifact Hub][Artifact Hub] for more information about the Helm chart.
+
 [Helm]: https://helm.sh
 [Artifact Hub]: https://artifacthub.io/packages/helm/grafana/grafana-alloy
 [Configure]: ../../../tasks/configure/configure-kubernetes/
diff --git a/docs/sources/reference/components/otelcol.processor.k8sattributes.md b/docs/sources/reference/components/otelcol.processor.k8sattributes.md
index 1622902877..2326f341e4 100644
--- a/docs/sources/reference/components/otelcol.processor.k8sattributes.md
+++ b/docs/sources/reference/components/otelcol.processor.k8sattributes.md
@@ -219,6 +219,10 @@ Name | Type | Description
 
 The `exclude` block configures which pods to exclude from the processor.
 
+{{< admonition type="note" >}}
+Pods with the name `jaeger-agent` or `jaeger-collector` are excluded by default.
+{{< /admonition >}}
+
 ### pod block
 
 The `pod` block configures a pod to be excluded from the processor.
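For illustration, the following River snippet makes those default exclusions explicit. It's a minimal sketch that assumes the `pod` block accepts a `name` attribute as documented above and that a downstream `otelcol.processor.batch.default` component exists; adjust it to your own pipeline:

```river
otelcol.processor.k8sattributes "default" {
  exclude {
    pod {
      name = "jaeger-agent"
    }
    pod {
      name = "jaeger-collector"
    }
  }

  output {
    traces = [otelcol.processor.batch.default.input]
  }
}
```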
diff --git a/docs/sources/reference/components/otelcol.receiver.opencensus.md b/docs/sources/reference/components/otelcol.receiver.opencensus.md index bf78f52021..0884f6b1b8 100644 --- a/docs/sources/reference/components/otelcol.receiver.opencensus.md +++ b/docs/sources/reference/components/otelcol.receiver.opencensus.md @@ -38,7 +38,7 @@ otelcol.receiver.opencensus "LABEL" { Name | Type | Description | Default | Required ---- | ---- | ----------- | ------- | -------- `cors_allowed_origins` | `list(string)` | A list of allowed Cross-Origin Resource Sharing (CORS) origins. | | no -`endpoint` | `string` | `host:port` to listen for traffic on. | `"0.0.0.0:4317"` | no +`endpoint` | `string` | `host:port` to listen for traffic on. | `"0.0.0.0:55678"` | no `transport` | `string` | Transport to use for the gRPC server. | `"tcp"` | no `max_recv_msg_size` | `string` | Maximum size of messages the server will accept. 0 disables a limit. | | no `max_concurrent_streams` | `number` | Limit the number of concurrent streaming RPC calls. | | no @@ -54,7 +54,7 @@ The "endpoint" parameter is the same for both gRPC and HTTP/JSON, as the protoco To write traces with HTTP/JSON, `POST` to `[address]/v1/trace`. The JSON message format parallels the gRPC protobuf format. For details, refer to its [OpenApi specification](https://github.com/census-instrumentation/opencensus-proto/blob/master/gen-openapi/opencensus/proto/agent/trace/v1/trace_service.swagger.json). -Note that `max_recv_msg_size`, `read_buffer_size` and `write_buffer_size` are formatted in a way so that the units are included +Note that `max_recv_msg_size`, `read_buffer_size` and `write_buffer_size` are formatted in a way so that the units are included in the string, such as "512KiB" or "1024KB". ## Blocks @@ -153,56 +153,56 @@ finally sending it to an OTLP-capable endpoint: ```river otelcol.receiver.opencensus "default" { - cors_allowed_origins = ["https://*.test.com", "https://test.com"] - - endpoint = "0.0.0.0:9090" - transport = "tcp" - - max_recv_msg_size = "32KB" - max_concurrent_streams = "16" - read_buffer_size = "1024KB" - write_buffer_size = "1024KB" - include_metadata = true - - tls { - cert_file = "test.crt" - key_file = "test.key" - } - - keepalive { - server_parameters { - max_connection_idle = "11s" - max_connection_age = "12s" - max_connection_age_grace = "13s" - time = "30s" - timeout = "5s" - } - - enforcement_policy { - min_time = "10s" - permit_without_stream = true - } - } - - output { - metrics = [otelcol.processor.batch.default.input] - logs = [otelcol.processor.batch.default.input] - traces = [otelcol.processor.batch.default.input] - } + cors_allowed_origins = ["https://*.test.com", "https://test.com"] + + endpoint = "0.0.0.0:9090" + transport = "tcp" + + max_recv_msg_size = "32KB" + max_concurrent_streams = "16" + read_buffer_size = "1024KB" + write_buffer_size = "1024KB" + include_metadata = true + + tls { + cert_file = "test.crt" + key_file = "test.key" + } + + keepalive { + server_parameters { + max_connection_idle = "11s" + max_connection_age = "12s" + max_connection_age_grace = "13s" + time = "30s" + timeout = "5s" + } + + enforcement_policy { + min_time = "10s" + permit_without_stream = true + } + } + + output { + metrics = [otelcol.processor.batch.default.input] + logs = [otelcol.processor.batch.default.input] + traces = [otelcol.processor.batch.default.input] + } } otelcol.processor.batch "default" { - output { - metrics = [otelcol.exporter.otlp.default.input] - logs = [otelcol.exporter.otlp.default.input] - traces = 
[otelcol.exporter.otlp.default.input]
-  }
+  output {
+    metrics = [otelcol.exporter.otlp.default.input]
+    logs = [otelcol.exporter.otlp.default.input]
+    traces = [otelcol.exporter.otlp.default.input]
+  }
 }
 
 otelcol.exporter.otlp "default" {
-  client {
-    endpoint = env("OTLP_ENDPOINT")
-  }
+  client {
+    endpoint = env("OTLP_ENDPOINT")
+  }
 }
 ```
@@ -219,4 +219,4 @@ Connecting some components may not be sensible or components may require further
 Refer to the linked documentation for more details.
 
 {{< /admonition >}}
-
\ No newline at end of file
+
diff --git a/docs/sources/reference/components/prometheus.exporter.unix.md b/docs/sources/reference/components/prometheus.exporter.unix.md
index 1d322aced9..feda1ab9f5 100644
--- a/docs/sources/reference/components/prometheus.exporter.unix.md
+++ b/docs/sources/reference/components/prometheus.exporter.unix.md
@@ -130,6 +130,8 @@ The following blocks are supported inside the definition of
 
 ### filesystem block
 
+The default values can vary by the operating system the agent runs on. Refer to the [integration source](https://github.com/grafana/agent/blob/main/internal/static/integrations/node_exporter/config.go) for up-to-date values on each OS.
+
 | Name                   | Type       | Description                                                           | Default                                         | Required |
 | ---------------------- | ---------- | --------------------------------------------------------------------- | ----------------------------------------------- | -------- |
 | `fs_types_exclude`     | `string`   | Regexp of filesystem types to ignore for filesystem collector.        | (_see below_ )                                  | no       |
@@ -139,7 +141,7 @@ The following blocks are supported inside the definition of
 `fs_types_exclude` defaults to the following regular expression string:
 
 ```
-^(autofs\|binfmt_misc\|bpf\|cgroup2?\|configfs\|debugfs\|devpts\|devtmpfs\|fusectl\|hugetlbfs\|iso9660\|mqueue\|nsfs\|overlay\|proc\|procfs\|pstore\|rpc_pipefs\|securityfs\|selinuxfs\|squashfs\|sysfs\|tracefs)$
+^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
 ```
 
 ### ipvs block
@@ -183,7 +185,7 @@ The following blocks are supported inside the definition of
 `fields` defaults to the following regular expression string:
 
 ```
-"^(.*_(InErrors\|InErrs)\|Ip_Forwarding\|Ip(6\|Ext)_(InOctets\|OutOctets)\|Icmp6?_(InMsgs\|OutMsgs)\|TcpExt_(Listen.*\|Syncookies.*\|TCPSynRetrans\|TCPTimeouts)\|Tcp_(ActiveOpens\|InSegs\|OutSegs\|OutRsts\|PassiveOpens\|RetransSegs\|CurrEstab)\|Udp6?_(InDatagrams\|OutDatagrams\|NoPorts\|RcvbufErrors\|SndbufErrors))$"
+"^(.*_(InErrors|InErrs)|Ip_Forwarding|Ip(6|Ext)_(InOctets|OutOctets)|Icmp6?_(InMsgs|OutMsgs)|TcpExt_(Listen.*|Syncookies.*|TCPSynRetrans|TCPTimeouts)|Tcp_(ActiveOpens|InSegs|OutSegs|OutRsts|PassiveOpens|RetransSegs|CurrEstab)|Udp6?_(InDatagrams|OutDatagrams|NoPorts|RcvbufErrors|SndbufErrors))$"
 ```
 
 ### perf block
diff --git a/docs/sources/shared/reference/components/otelcol-queue-block.md b/docs/sources/shared/reference/components/otelcol-queue-block.md
index a7fbde5804..c08d6a3bd5 100644
--- a/docs/sources/shared/reference/components/otelcol-queue-block.md
+++ b/docs/sources/shared/reference/components/otelcol-queue-block.md
@@ -10,14 +10,14 @@ Name | Type | Description | Default | Required
 ----------------|-----------|----------------------------------------------------------------------------|---------|---------
 `enabled` | `boolean` | Enables an in-memory buffer before sending data to the client.
| `true` | no
 `num_consumers` | `number` | Number of readers to send batches written to the queue in parallel. | `10` | no
-`queue_size` | `number` | Maximum number of unwritten batches allowed in the queue at the same time. | `5000` | no
+`queue_size` | `number` | Maximum number of unwritten batches allowed in the queue at the same time. | `1000` | no
 
 When `enabled` is `true`, data is first written to an in-memory buffer before sending it to the configured server.
 Batches sent to the component's `input` exported field are added to the buffer as long as the number of unsent batches doesn't exceed the configured `queue_size`.
 
 `queue_size` determines how long an endpoint outage is tolerated.
-Assuming 100 requests/second, the default queue size `5000` provides about 50 seconds of outage tolerance.
-To calculate the correct value for `queue_size`, multiply the average number of outgoing requests per second by the time in seconds that outages are tolerated.
+Assuming 100 requests/second, the default queue size `1000` provides about 10 seconds of outage tolerance.
+To calculate the correct value for `queue_size`, multiply the average number of outgoing requests per second by the time in seconds that outages are tolerated. A very high value can cause Out Of Memory (OOM) kills.
 
 The `num_consumers` argument controls how many readers read from the buffer and send data in parallel.
 Larger values of `num_consumers` allow data to be sent more quickly at the expense of increased network traffic.
diff --git a/docs/sources/tasks/configure/configure-kubernetes.md b/docs/sources/tasks/configure/configure-kubernetes.md
index e8709d4056..d853b00ad9 100644
--- a/docs/sources/tasks/configure/configure-kubernetes.md
+++ b/docs/sources/tasks/configure/configure-kubernetes.md
@@ -8,30 +8,58 @@ weight: 200
 
 # Configure {{% param "PRODUCT_NAME" %}} on Kubernetes
 
-To configure {{< param "PRODUCT_NAME" >}} on Kubernetes, perform the following steps:
+This page describes how to apply a new configuration to {{< param "PRODUCT_NAME" >}} when running on Kubernetes with the Helm chart.
+It assumes that:
 
-1. Download a local copy of [values.yaml][] for the Helm chart.
+- You have [installed {{< param "PRODUCT_NAME" >}} on Kubernetes using the Helm chart][k8s-install].
+- You already have a new {{< param "PRODUCT_NAME" >}} configuration that you want to apply to your Helm chart installation.
 
-1. Make changes to your copy of `values.yaml` to customize settings for the Helm chart.
+If you're looking for help configuring {{< param "PRODUCT_NAME" >}} to perform a specific task, consult the following guides instead:
 
-   Refer to the inline documentation in the `values.yaml` for more information about each option.
+- [Collect and forward Prometheus metrics][prometheus]
+- [Collect OpenTelemetry data][otel]
+- The [tasks section][tasks] for all remaining configuration guides
+
+[prometheus]: ../../collect-prometheus-metrics/
+[otel]: ../../collect-opentelemetry-data/
+[tasks]: ../
+[k8s-install]: ../../../get-started/install/kubernetes/
+
+## Configure the Helm chart
+
+To modify {{< param "PRODUCT_NAME" >}}'s Helm chart configuration, perform the following steps:
+
+1. Create a local `values.yaml` file with a new Helm chart configuration.
+
+   1. You can use your own copy of the values file or download a copy of the
+      default [values.yaml][].
+
+   1. Make changes to your `values.yaml` to customize settings for the
+      Helm chart.
+
+      Refer to the inline documentation in the default [values.yaml][] for more
+      information about each option.
 
 1. Run the following command in a terminal to upgrade your {{< param "PRODUCT_NAME" >}} installation:
 
    ```shell
-   helm upgrade RELEASE_NAME grafana/alloy -f VALUES_PATH
+   helm upgrade --namespace <NAMESPACE> <RELEASE_NAME> grafana/alloy -f <VALUES_PATH>
    ```
 
-   1. Replace `RELEASE_NAME` with the name you used for your {{< param "PRODUCT_NAME" >}} installation.
+   Replace the following:
+   - _`<NAMESPACE>`_: The namespace you used for your {{< param "PRODUCT_NAME" >}} installation.
+   - _`<RELEASE_NAME>`_: The name you used for your {{< param "PRODUCT_NAME" >}} installation.
+   - _`<VALUES_PATH>`_: The path to your copy of `values.yaml` to use.
 
-   1. Replace `VALUES_PATH` with the path to your copy of `values.yaml` to use.
+[values.yaml]: https://raw.githubusercontent.com/grafana/alloy/main/operations/helm/charts/alloy/values.yaml
 
 ## Kustomize considerations
 
 If you are using [Kustomize][] to inflate and install the [Helm chart][], be careful
 when using a `configMapGenerator` to generate the ConfigMap containing the configuration.
 By default, the generator appends a hash to the name and patches the resource mentioning it,
 triggering a rolling update.
 
-This behavior is undesirable for {{< param "PRODUCT_NAME" >}} because the startup time can be significant depending on the size of the Write-Ahead Log.
+This behavior is undesirable for {{< param "PRODUCT_NAME" >}} because the startup time can be significant, for example, when your deployment has a large metrics Write-Ahead Log.
 You can use the [Helm chart][] sidecar container to watch the ConfigMap and trigger a dynamic reload.
 
 The following is an example snippet of a `kustomization` that disables this behavior:
@@ -44,6 +72,87 @@ configMapGenerator:
     options:
       disableNameSuffixHash: true
 ```
+
+## Configure {{< param "PRODUCT_NAME" >}}
+
+This section describes how to modify the {{< param "PRODUCT_NAME" >}} configuration, which is stored in a ConfigMap in the Kubernetes cluster.
+There are two methods to perform this task.
+
+### Method 1: Modify the configuration in the values.yaml file
+
+Use this method if you prefer to embed your {{< param "PRODUCT_NAME" >}} configuration in the Helm chart's `values.yaml` file.
+
+1. Modify the configuration file contents directly in the `values.yaml` file:
+
+   ```yaml
+   alloy:
+     configMap:
+       content: |-
+         // Write your Agent config here:
+         logging {
+           level = "info"
+           format = "logfmt"
+         }
+   ```
+
+1. Run the following command in a terminal to upgrade your {{< param "PRODUCT_NAME" >}} installation:
+
+   ```shell
+   helm upgrade --namespace <NAMESPACE> <RELEASE_NAME> grafana/grafana-agent -f <VALUES_PATH>
+   ```
+
+   Replace the following:
+
+   - _`<NAMESPACE>`_: The namespace you used for your {{< param "PRODUCT_NAME" >}} installation.
+   - _`<RELEASE_NAME>`_: The name you used for your {{< param "PRODUCT_NAME" >}} installation.
+   - _`<VALUES_PATH>`_: The path to your copy of `values.yaml` to use.
+
+### Method 2: Create a separate ConfigMap from a file
+
+Use this method if you prefer to write your {{< param "PRODUCT_NAME" >}} configuration in a separate file.
+
+1. Write your configuration to a file, for example, `config.river`.
+
+   ```river
+   // Write your Agent config here:
+   logging {
+     level = "info"
+     format = "logfmt"
+   }
+   ```
+
+1. Create a ConfigMap called `agent-config` from the above file:
+
+   ```shell
+   kubectl create configmap --namespace <NAMESPACE> agent-config "--from-file=config.river=./config.river"
+   ```
+
+   Replace the following:
+
+   - _`<NAMESPACE>`_: The namespace you used for your {{< param "PRODUCT_NAME" >}} installation.
+
+1.
Modify the Helm chart configuration in your `values.yaml` to use the existing ConfigMap:
+
+   ```yaml
+   agent:
+     configMap:
+       create: false
+       name: agent-config
+       key: config.river
+   ```
+
+1. Run the following command in a terminal to upgrade your {{< param "PRODUCT_NAME" >}} installation:
+
+   ```shell
+   helm upgrade --namespace <NAMESPACE> <RELEASE_NAME> grafana/grafana-agent -f <VALUES_PATH>
+   ```
+
+   Replace the following:
+
+   - _`<NAMESPACE>`_: The namespace you used for your {{< param "PRODUCT_NAME" >}} installation.
+   - _`<RELEASE_NAME>`_: The name you used for your {{< param "PRODUCT_NAME" >}} installation.
+   - _`<VALUES_PATH>`_: The path to your copy of `values.yaml` to use.
+
 [values.yaml]: https://raw.githubusercontent.com/grafana/alloy/main/operations/helm/charts/alloy/values.yaml
 [Helm chart]: https://github.com/grafana/alloy/tree/main/operations/helm/charts/alloy
 [Kustomize]: https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/
diff --git a/docs/sources/tasks/debug.md b/docs/sources/tasks/debug.md
index 5b2a25146e..6f92d7ce68 100644
--- a/docs/sources/tasks/debug.md
+++ b/docs/sources/tasks/debug.md
@@ -105,6 +105,11 @@ To debug issues when using [clustering][], check for the following symptoms.
 
 - **Node stuck in terminating state**: The node attempted to gracefully shut down and set its state to Terminating, but it has not completely gone away. Check the clustering page to view the state of the peers and verify that the terminating {{< param "PRODUCT_ROOT_NAME" >}} has been shut down.
 
+{{< admonition type="note" >}}
+Some issues that appear to be clustering issues may be symptoms of other issues. For example, problems with scraping or service discovery can result in missing metrics for an agent, which can be misinterpreted as a node not joining the cluster.
+{{< /admonition >}}
+
 [logging]: ../../reference/config-blocks/logging/
 [clustering]: ../../concepts/clustering/
 [secret]: ../../concepts/config-language/expressions/types_and_values/#secrets
+
diff --git a/go.mod b/go.mod
index f017127088..05ccc57cdb 100644
--- a/go.mod
+++ b/go.mod
@@ -16,7 +16,7 @@ require (
 	github.com/IBM/sarama v1.42.1
 	github.com/Lusitaniae/apache_exporter v0.11.1-0.20220518131644-f9522724dab4
 	github.com/Masterminds/sprig/v3 v3.2.3
-	github.com/PuerkitoBio/rehttp v1.1.0
+	github.com/PuerkitoBio/rehttp v1.3.0
 	github.com/alecthomas/kingpin/v2 v2.4.0
 	github.com/alecthomas/units v0.0.0-20211218093645-b94a6e3cc137
 	github.com/aws/aws-sdk-go v1.45.25 // indirect
@@ -214,16 +214,16 @@ require (
 	go.uber.org/goleak v1.2.1
 	go.uber.org/multierr v1.11.0
 	go.uber.org/zap v1.26.0
-	golang.org/x/crypto v0.18.0
+	golang.org/x/crypto v0.20.0
 	golang.org/x/exp v0.0.0-20231206192017-f3f8817b8deb
-	golang.org/x/net v0.20.0
+	golang.org/x/net v0.21.0
 	golang.org/x/oauth2 v0.16.0
-	golang.org/x/sys v0.16.0
+	golang.org/x/sys v0.17.0
 	golang.org/x/text v0.14.0
-	golang.org/x/time v0.3.0
-	google.golang.org/api v0.149.0
+	golang.org/x/time v0.5.0
+	google.golang.org/api v0.152.0
 	google.golang.org/grpc v1.61.0
-	google.golang.org/protobuf v1.32.0
+	google.golang.org/protobuf v1.33.0
 	gopkg.in/yaml.v2 v2.4.0
 	gopkg.in/yaml.v3 v3.0.1
 	gotest.tools v2.2.0+incompatible
@@ -428,13 +428,13 @@ require (
 	github.com/influxdata/telegraf v1.16.3 // indirect
 	github.com/ionos-cloud/sdk-go/v6 v6.1.9 // indirect
 	github.com/jackc/chunkreader/v2 v2.0.1 // indirect
-	github.com/jackc/pgconn v1.14.0 // indirect
+	github.com/jackc/pgconn v1.14.3 // indirect
 	github.com/jackc/pgio v1.0.0 // indirect
 	github.com/jackc/pgpassfile v1.0.0 // indirect
-	github.com/jackc/pgproto3/v2 v2.3.2 // indirect
+	
github.com/jackc/pgproto3/v2 v2.3.3 // indirect
 	github.com/jackc/pgservicefile v0.0.0-20221227161230-091c0ba34f0a // indirect
 	github.com/jackc/pgtype v1.14.0 // indirect
-	github.com/jackc/pgx/v4 v4.18.1 // indirect
+	github.com/jackc/pgx/v4 v4.18.2 // indirect
 	github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99 // indirect
 	github.com/jcmturner/aescts/v2 v2.0.0 // indirect
 	github.com/jcmturner/dnsutils/v2 v2.0.0 // indirect
@@ -583,7 +583,7 @@ require (
 	go4.org/netipx v0.0.0-20230125063823-8449b0a6169f // indirect
 	golang.org/x/mod v0.14.0 // indirect
 	golang.org/x/sync v0.5.0 // indirect
-	golang.org/x/term v0.16.0 // indirect
+	golang.org/x/term v0.17.0 // indirect
 	golang.org/x/tools v0.16.0
 	golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2 // indirect
 	gomodules.xyz/jsonpatch/v2 v2.4.0 // indirect
@@ -617,6 +617,7 @@ require (
 	github.com/open-telemetry/opentelemetry-collector-contrib/processor/filterprocessor v0.87.0
 	github.com/open-telemetry/opentelemetry-collector-contrib/receiver/prometheusreceiver v0.87.0
 	github.com/open-telemetry/opentelemetry-collector-contrib/receiver/vcenterreceiver v0.87.0
+	github.com/prometheus/tsdb v0.10.0
 	go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v0.42.0
 	golang.org/x/crypto/x509roots/fallback v0.0.0-20240208163226-62c9f1799c91
 	k8s.io/apimachinery v0.28.3
@@ -647,7 +648,7 @@ require (
 	github.com/containerd/log v0.1.0 // indirect
 	github.com/dgryski/go-metro v0.0.0-20180109044635-280f6062b5bc // indirect
 	github.com/drone/envsubst v1.0.3 // indirect
-	github.com/go-jose/go-jose/v3 v3.0.1 // indirect
+	github.com/go-jose/go-jose/v3 v3.0.3 // indirect
 	github.com/golang-jwt/jwt/v5 v5.0.0 // indirect
 	github.com/google/gnostic-models v0.6.8 // indirect
 	github.com/grafana/jfr-parser v0.8.0 // indirect
@@ -769,6 +770,10 @@ exclude (
 
 replace github.com/github/smimesign => github.com/grafana/smimesign v0.2.1-0.20220408144937-2a5adf3481d3
 
+// Replacing with an internal fork that includes a bugfix for delta histograms, https://github.com/grafana/stackdriver_exporter/pull/1
+// Moving back to upstream is being tracked in an internal issue.
+replace github.com/prometheus-community/stackdriver_exporter => github.com/grafana/stackdriver_exporter v0.0.0-20240228143257-3a2c9acef5a2
+
 // Submodules.
// TODO(rfratto): Change all imports of github.com/grafana/river in favor of // importing github.com/grafana/alloy/syntax and change module and package diff --git a/go.sum b/go.sum index 4c686d9ee2..073ac65bff 100644 --- a/go.sum +++ b/go.sum @@ -228,8 +228,8 @@ github.com/ProtonMail/go-crypto v0.0.0-20230828082145-3c4c8a2d2371 h1:kkhsdkhsCv github.com/ProtonMail/go-crypto v0.0.0-20230828082145-3c4c8a2d2371/go.mod h1:EjAoLdwvbIOoOQr3ihjnSoLZRtE8azugULFRteWMNc0= github.com/PuerkitoBio/purell v1.0.0/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0= github.com/PuerkitoBio/purell v1.1.1/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0= -github.com/PuerkitoBio/rehttp v1.1.0 h1:JFZ7OeK+hbJpTxhNB0NDZT47AuXqCU0Smxfjtph7/Rs= -github.com/PuerkitoBio/rehttp v1.1.0/go.mod h1:LUwKPoDbDIA2RL5wYZCNsQ90cx4OJ4AWBmq6KzWZL1s= +github.com/PuerkitoBio/rehttp v1.3.0 h1:w54Pb72MQn2eJrSdPsvGqXlAfiK1+NMTGDrOJJ4YvSU= +github.com/PuerkitoBio/rehttp v1.3.0/go.mod h1:LUwKPoDbDIA2RL5wYZCNsQ90cx4OJ4AWBmq6KzWZL1s= github.com/PuerkitoBio/urlesc v0.0.0-20160726150825-5bd2802263f2/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE= github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE= github.com/SAP/go-hdb v0.12.0/go.mod h1:etBT+FAi1t5k3K3tf5vQTnosgYmhDkRi8jEnQqCnxF0= @@ -560,6 +560,7 @@ github.com/dgryski/go-metro v0.0.0-20180109044635-280f6062b5bc h1:8WFBn63wegobsY github.com/dgryski/go-metro v0.0.0-20180109044635-280f6062b5bc/go.mod h1:c9O8+fpSOX1DM8cPNSkX/qsBWdkD4yd2dpciOWQjpBw= github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78= github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc= +github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no= github.com/digitalocean/godo v1.1.1/go.mod h1:h6faOIcZ8lWIwNQ+DN7b3CgX4Kwby5T+nbpNqkUIozU= github.com/digitalocean/godo v1.10.0/go.mod h1:h6faOIcZ8lWIwNQ+DN7b3CgX4Kwby5T+nbpNqkUIozU= github.com/digitalocean/godo v1.104.1 h1:SZNxjAsskM/su0YW9P8Wx3gU0W1Z13b6tZlYNpl5BnA= @@ -715,8 +716,8 @@ github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2 github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8= github.com/go-ini/ini v1.25.4/go.mod h1:ByCAeIL28uOIIG0E3PJtZPDL8WnHpFKFOtgjp+3Ies8= github.com/go-jose/go-jose/v3 v3.0.0/go.mod h1:RNkWWRld676jZEYoV3+XK8L2ZnNSvIsxFMht0mSX+u8= -github.com/go-jose/go-jose/v3 v3.0.1 h1:pWmKFVtt+Jl0vBZTIpz/eAKwsm6LkIxDVVbFHKkchhA= -github.com/go-jose/go-jose/v3 v3.0.1/go.mod h1:RNkWWRld676jZEYoV3+XK8L2ZnNSvIsxFMht0mSX+u8= +github.com/go-jose/go-jose/v3 v3.0.3 h1:fFKWeig/irsp7XD2zBxvnmA/XaRWp5V3CBsZXJF7G7k= +github.com/go-jose/go-jose/v3 v3.0.3/go.mod h1:5b+7YgP7ZICgJDBdfjZaIt+H/9L9T/YQrVfLAMboGkQ= github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as= github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as= github.com/go-kit/kit v0.10.0/go.mod h1:xUsJbQ/Fp4kEt7AFgCuvyX4a71u8h9jB8tj/ORgOZ7o= @@ -1082,6 +1083,8 @@ github.com/grafana/smimesign v0.2.1-0.20220408144937-2a5adf3481d3 h1:UPkAxuhlAcR github.com/grafana/smimesign v0.2.1-0.20220408144937-2a5adf3481d3/go.mod h1:iZiiwNT4HbtGRVqCQu7uJPEZCuEE5sfSSttcnePkDl4= github.com/grafana/snowflake-prometheus-exporter v0.0.0-20221213150626-862cad8e9538 h1:tkT0yha3JzB5S5VNjfY4lT0cJAe20pU8XGt3Nuq73rM= 
github.com/grafana/snowflake-prometheus-exporter v0.0.0-20221213150626-862cad8e9538/go.mod h1:VxVydRyq8f6w1qmX/5MSYIdSbgujre8rdFRLgU6u/RI= +github.com/grafana/stackdriver_exporter v0.0.0-20240228143257-3a2c9acef5a2 h1:xBGGPnQyQNK0Apz269BZoKTnFxKKxYhhXzI++N2phE0= +github.com/grafana/stackdriver_exporter v0.0.0-20240228143257-3a2c9acef5a2/go.mod h1:Ce7MjYSAUzZZeFb5jBNqSUUZ45w5IMdnNEKfz3jJRos= github.com/grafana/tail v0.0.0-20230510142333-77b18831edf0 h1:bjh0PVYSVVFxzINqPFYJmAmJNrWPgnVjuSdYJGHmtFU= github.com/grafana/tail v0.0.0-20230510142333-77b18831edf0/go.mod h1:7t5XR+2IA8P2qggOAHTj/GCZfoLBle3OvNSYh1VkRBU= github.com/grafana/vmware_exporter v0.0.4-beta h1:Tb8Edm/wDYh0Lvhm38HLNTlkflUrlPGB+jD+/hW4xHI= @@ -1317,8 +1320,8 @@ github.com/jackc/pgconn v0.0.0-20190831204454-2fabfa3c18b7/go.mod h1:ZJKsE/KZfsU github.com/jackc/pgconn v1.8.0/go.mod h1:1C2Pb36bGIP9QHGBYCjnyhqu7Rv3sGshaQUvmfGIB/o= github.com/jackc/pgconn v1.9.0/go.mod h1:YctiPyvzfU11JFxoXokUOOKQXQmDMoJL9vJzHH8/2JY= github.com/jackc/pgconn v1.9.1-0.20210724152538-d89c8390a530/go.mod h1:4z2w8XhRbP1hYxkpTuBjTS3ne3J48K83+u0zoyvg2pI= -github.com/jackc/pgconn v1.14.0 h1:vrbA9Ud87g6JdFWkHTJXppVce58qPIdP7N8y0Ml/A7Q= -github.com/jackc/pgconn v1.14.0/go.mod h1:9mBNlny0UvkgJdCDvdVHYSjI+8tD2rnKK69Wz8ti++E= +github.com/jackc/pgconn v1.14.3 h1:bVoTr12EGANZz66nZPkMInAV/KHD2TxH9npjXXgiB3w= +github.com/jackc/pgconn v1.14.3/go.mod h1:RZbme4uasqzybK2RK5c65VsHxoyaml09lx3tXOcO/VM= github.com/jackc/pgio v1.0.0 h1:g12B9UwVnzGhueNavwioyEEpAmqMe1E/BN9ES+8ovkE= github.com/jackc/pgio v1.0.0/go.mod h1:oP+2QK2wFfUWgr+gxjoBH9KGBb31Eio69xUb0w5bYf8= github.com/jackc/pgmock v0.0.0-20190831213851-13a1b77aafa2/go.mod h1:fGZlG77KXmcq05nJLRkk0+p82V8B8Dw8KN2/V9c/OAE= @@ -1334,8 +1337,8 @@ github.com/jackc/pgproto3/v2 v2.0.0-rc3/go.mod h1:ryONWYqW6dqSg1Lw6vXNMXoBJhpzvW github.com/jackc/pgproto3/v2 v2.0.0-rc3.0.20190831210041-4c03ce451f29/go.mod h1:ryONWYqW6dqSg1Lw6vXNMXoBJhpzvWKnT95C46ckYeM= github.com/jackc/pgproto3/v2 v2.0.6/go.mod h1:WfJCnwN3HIg9Ish/j3sgWXnAfK8A9Y0bwXYU5xKaEdA= github.com/jackc/pgproto3/v2 v2.1.1/go.mod h1:WfJCnwN3HIg9Ish/j3sgWXnAfK8A9Y0bwXYU5xKaEdA= -github.com/jackc/pgproto3/v2 v2.3.2 h1:7eY55bdBeCz1F2fTzSz69QC+pG46jYq9/jtSPiJ5nn0= -github.com/jackc/pgproto3/v2 v2.3.2/go.mod h1:WfJCnwN3HIg9Ish/j3sgWXnAfK8A9Y0bwXYU5xKaEdA= +github.com/jackc/pgproto3/v2 v2.3.3 h1:1HLSx5H+tXR9pW3in3zaztoEwQYRC9SQaYUHjTSUOag= +github.com/jackc/pgproto3/v2 v2.3.3/go.mod h1:WfJCnwN3HIg9Ish/j3sgWXnAfK8A9Y0bwXYU5xKaEdA= github.com/jackc/pgservicefile v0.0.0-20200714003250-2b9c44734f2b/go.mod h1:vsD4gTJCa9TptPL8sPkXrLZ+hDuNrZCnj29CQpr4X1E= github.com/jackc/pgservicefile v0.0.0-20221227161230-091c0ba34f0a h1:bbPeKD0xmW/Y25WS6cokEszi5g+S0QxI/d45PkRi7Nk= github.com/jackc/pgservicefile v0.0.0-20221227161230-091c0ba34f0a/go.mod h1:5TJZWKEWniPve33vlWYSoGYefn3gLQRzjfDlhSJ9ZKM= @@ -1350,12 +1353,11 @@ github.com/jackc/pgx/v4 v4.0.0-20190420224344-cc3461e65d96/go.mod h1:mdxmSJJuR08 github.com/jackc/pgx/v4 v4.0.0-20190421002000-1b8f0016e912/go.mod h1:no/Y67Jkk/9WuGR0JG/JseM9irFbnEPbuWV2EELPNuM= github.com/jackc/pgx/v4 v4.0.0-pre1.0.20190824185557-6972a5742186/go.mod h1:X+GQnOEnf1dqHGpw7JmHqHc1NxDoalibchSk9/RWuDc= github.com/jackc/pgx/v4 v4.12.1-0.20210724153913-640aa07df17c/go.mod h1:1QD0+tgSXP7iUjYm9C1NxKhny7lq6ee99u/z+IHFcgs= -github.com/jackc/pgx/v4 v4.18.1 h1:YP7G1KABtKpB5IHrO9vYwSrCOhs7p3uqhvhhQBptya0= -github.com/jackc/pgx/v4 v4.18.1/go.mod h1:FydWkUyadDmdNH/mHnGob881GawxeEm7TcMCzkb+qQE= +github.com/jackc/pgx/v4 v4.18.2 h1:xVpYkNR5pk5bMCZGfClbO962UIqVABcAGt7ha1s/FeU= 
+github.com/jackc/pgx/v4 v4.18.2/go.mod h1:Ey4Oru5tH5sB6tV7hDmfWFahwF15Eb7DNXlRKx2CkVw= github.com/jackc/puddle v0.0.0-20190413234325-e4ced69a3a2b/go.mod h1:m4B5Dj62Y0fbyuIc15OsIqK0+JU8nkqQjsgx7dvjSWk= github.com/jackc/puddle v0.0.0-20190608224051-11cab39313c9/go.mod h1:m4B5Dj62Y0fbyuIc15OsIqK0+JU8nkqQjsgx7dvjSWk= github.com/jackc/puddle v1.1.3/go.mod h1:m4B5Dj62Y0fbyuIc15OsIqK0+JU8nkqQjsgx7dvjSWk= -github.com/jackc/puddle v1.3.0/go.mod h1:m4B5Dj62Y0fbyuIc15OsIqK0+JU8nkqQjsgx7dvjSWk= github.com/jaegertracing/jaeger v1.50.0 h1:qsOcPeB3nAc3h8tx+gnZ3JODAZfqbYmQr45jPEwBd2w= github.com/jaegertracing/jaeger v1.50.0/go.mod h1:MVGvxf4+Pcn31gz9RnLo0097w3khKFwJIprIZHOt89s= github.com/jarcoal/httpmock v0.0.0-20180424175123-9c70cfe4a1da/go.mod h1:ks+b9deReOc7jgqp+e7LuFiCBH6Rm5hL32cLcEAArb4= @@ -1712,8 +1714,8 @@ github.com/onsi/gomega v1.4.1/go.mod h1:C1qb7wdrVGGVU+Z6iS04AVkA3Q65CEZX59MT0QO5 github.com/onsi/gomega v1.4.2/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY= github.com/onsi/gomega v1.4.3/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY= github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY= -github.com/onsi/gomega v1.27.10 h1:naR28SdDFlqrG6kScpT8VWpu1xWY5nJRCF3XaYyBjhI= -github.com/onsi/gomega v1.27.10/go.mod h1:RsS8tutOdbdgzbPtzzATp12yT7kM5I5aElG3evPbQ0M= +github.com/onsi/gomega v1.30.0 h1:hvMK7xYz4D3HapigLTeGdId/NcfQx1VHMJc60ew99+8= +github.com/onsi/gomega v1.30.0/go.mod h1:9sxs+SwGrKI0+PWe4Fxa9tFQQBG5xSsSbMXOI8PPpoQ= github.com/op/go-logging v0.0.0-20160315200505-970db520ece7/go.mod h1:HzydrMdWErDVzsI23lYNej1Htcns9BCg93Dk0bBINWk= github.com/open-telemetry/opentelemetry-collector-contrib/connector/servicegraphconnector v0.87.0 h1:ArBXfq0KQ89DV9th/MU/snH205Uh6jFCnIiwd/wKp+s= github.com/open-telemetry/opentelemetry-collector-contrib/connector/servicegraphconnector v0.87.0/go.mod h1:hN1ufLEIhE10FeG7L/yKMXMr9B0hcyrvqiZ3vR/qq/c= @@ -1918,8 +1920,6 @@ github.com/prometheus-community/go-runit v0.1.0 h1:uTWEj/Fn2RoLdfg/etSqwzgYNOYPr github.com/prometheus-community/go-runit v0.1.0/go.mod h1:AvJ9Jo3gAFu2lbM4+qfjdpq30FfiLDJZKbQ015u08IQ= github.com/prometheus-community/prom-label-proxy v0.6.0 h1:vRY29tUex8qI2MEimovTzJdieEwiSko+f7GuPCLjFkI= github.com/prometheus-community/prom-label-proxy v0.6.0/go.mod h1:XyAyskjjhqEx0qnbGUVeAkYSz3Wm9gStT7/wXFxD8n0= -github.com/prometheus-community/stackdriver_exporter v0.13.0 h1:4h7v28foRJ4/RuchNZCYsoDp+CkF4Mp9nebtPzgil3g= -github.com/prometheus-community/stackdriver_exporter v0.13.0/go.mod h1:ZFO015Mexz1xNHSvFjZFiIspYx6qhDg9Kre4LPUjO9s= github.com/prometheus-community/windows_exporter v0.24.1-0.20231127180936-5a872a227c2f h1:nEIgTweLXQk5ihBuKa84+l9WG/xrqA/1qX0jgbJ69OQ= github.com/prometheus-community/windows_exporter v0.24.1-0.20231127180936-5a872a227c2f/go.mod h1:wxKb/CTmvhDaZz4BokGt3btOn4aCrr5ruDQT7KxmJok= github.com/prometheus-operator/prometheus-operator v0.66.0 h1:Jj4mbGAkfBbTih6ait03f2vUjEHB7Kb4gnlAmWu7AJ0= @@ -2006,6 +2006,8 @@ github.com/prometheus/snmp_exporter v0.24.1/go.mod h1:j6uIGkdR0DXvKn7HJtSkeDj//U github.com/prometheus/statsd_exporter v0.22.7/go.mod h1:N/TevpjkIh9ccs6nuzY3jQn9dFqnUakOjnEuMPJJJnI= github.com/prometheus/statsd_exporter v0.22.8 h1:Qo2D9ZzaQG+id9i5NYNGmbf1aa/KxKbB9aKfMS+Yib0= github.com/prometheus/statsd_exporter v0.22.8/go.mod h1:/DzwbTEaFTE0Ojz5PqcSk6+PFHOPWGxdXVr6yC8eFOM= +github.com/prometheus/tsdb v0.10.0 h1:If5rVCMTp6W2SiRAQFlbpJNgVlgMEd+U2GZckwK38ic= +github.com/prometheus/tsdb v0.10.0/go.mod h1:oi49uRhEe9dPUTlS3JRZOwJuVi6tmh10QSgwXEyGCt4= github.com/rcrowley/go-metrics 
v0.0.0-20181016184325-3113b8401b8a/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4= github.com/rcrowley/go-metrics v0.0.0-20200313005456-10cdbea86bc0/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4= github.com/rcrowley/go-metrics v0.0.0-20201227073835-cf1acfcdf475 h1:N/ElC8H3+5XpJzTSTfLsJV/mx9Q9g7kxmchpfZyxgzM= @@ -2499,8 +2501,9 @@ golang.org/x/crypto v0.3.1-0.20221117191849-2c476679df9a/go.mod h1:hebNnKkNXi2Uz golang.org/x/crypto v0.6.0/go.mod h1:OFC/31mSvZgRz0V1QTNCzfAI1aIRzbiufJtkMIlEp58= golang.org/x/crypto v0.7.0/go.mod h1:pYwdfH91IfpZVANVyUOhSIPZaFoJGxTFbZhFTx+dXZU= golang.org/x/crypto v0.10.0/go.mod h1:o4eNf7Ede1fv+hwOwZsTHl9EsPFO6q6ZvYR8vYfY45I= -golang.org/x/crypto v0.18.0 h1:PGVlW0xEltQnzFZ55hkuX5+KLyrMYhHld1YHO4AKcdc= -golang.org/x/crypto v0.18.0/go.mod h1:R0j02AL6hcrfOiy9T4ZYp/rcWeMxM3L6QYxlOuEG1mg= +golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU= +golang.org/x/crypto v0.20.0 h1:jmAMJJZXr5KiCw05dfYK9QnqaqKLYXijU23lsEdcQqg= +golang.org/x/crypto v0.20.0/go.mod h1:Xwo95rrVNIoSMx9wa1JroENMToLWn3RNVrTBpLHgZPQ= golang.org/x/crypto/x509roots/fallback v0.0.0-20240208163226-62c9f1799c91 h1:Lyizcy9jX02jYR0ceBkL6S+jRys8Uepf7wt1vrz6Ras= golang.org/x/crypto/x509roots/fallback v0.0.0-20240208163226-62c9f1799c91/go.mod h1:kNa9WdvYnzFwC79zRpLRMJbdEFlhyM5RPFBBZp/wWH8= golang.org/x/exp v0.0.0-20180321215751-8460e604b9de/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= @@ -2623,8 +2626,8 @@ golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs= golang.org/x/net v0.8.0/go.mod h1:QVkue5JL9kW//ek3r6jTKnTFis1tRmNAW2P1shuFdJc= golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg= golang.org/x/net v0.11.0/go.mod h1:2L/ixqYpgIVXmeoSA/4Lu7BzTG4KIyPIryS4IsOd1oQ= -golang.org/x/net v0.20.0 h1:aCL9BSgETF1k+blQaYUBx9hJ9LOGP3gAVemcZlf1Kpo= -golang.org/x/net v0.20.0/go.mod h1:z8BVo6PvndSri0LbOE3hAn0apkU+1YvI6E70E9jsnvY= +golang.org/x/net v0.21.0 h1:AQyQV4dYCvJ7vGmJyKki9+PBdyvhkSd8EIx/qb0AYv4= +golang.org/x/net v0.21.0/go.mod h1:bIjVDfnllIU7BJ2DNgfnXvpSvtn8VRwhlsaeUTyUS44= golang.org/x/oauth2 v0.0.0-20170807180024-9a379c6b3e95/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= @@ -2786,8 +2789,8 @@ golang.org/x/sys v0.9.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.10.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.11.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.16.0 h1:xWw16ngr6ZMtmxDyKyIgsE93KNKz5HKmMa3b8ALHidU= -golang.org/x/sys v0.16.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= +golang.org/x/sys v0.17.0 h1:25cE3gD+tdBA7lp7QfhuV+rJiE9YXTcS3VG1SqssI/Y= +golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= @@ -2798,8 +2801,8 @@ golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k= golang.org/x/term v0.6.0/go.mod 
h1:m6U89DPEgQRMq3DNkDClhWw02AUbt2daBVO4cn4Hv9U= golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo= golang.org/x/term v0.9.0/go.mod h1:M6DEAAIenWoTxdKrOltXcmDY3rSplQUkrvaDU5FcQyo= -golang.org/x/term v0.16.0 h1:m+B6fahuftsE9qjo0VWp2FW0mB3MTJvR0BaMQrq0pmE= -golang.org/x/term v0.16.0/go.mod h1:yn7UURbUtPyrVJPGPq404EukNFxcm/foM+bV/bfcDsY= +golang.org/x/term v0.17.0 h1:mkTF7LCd6WGJNL3K1Ad7kwxNfYAW6a8a8QqtMblp/4U= +golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk= golang.org/x/text v0.0.0-20160726164857-2910a502d2bf/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= @@ -2825,8 +2828,8 @@ golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxb golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= golang.org/x/time v0.0.0-20200416051211-89c76fbcd5d1/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= golang.org/x/time v0.0.0-20210220033141-f8bda1e9f3ba/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= -golang.org/x/time v0.3.0 h1:rg5rLMjNzMS1RkNLzCG38eapWhnYLFYXDXj2gOlr8j4= -golang.org/x/time v0.3.0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= +golang.org/x/time v0.5.0 h1:o7cqy6amK/52YcAKIPlM3a+Fpj35zvRj2TP+e1xFSfk= +golang.org/x/time v0.5.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM= golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= golang.org/x/tools v0.0.0-20180525024113-a5b4c53f6e8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= golang.org/x/tools v0.0.0-20180828015842-6cd1fcedba52/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= @@ -2958,8 +2961,8 @@ google.golang.org/api v0.51.0/go.mod h1:t4HdrdoNgyN5cbEfm7Lum0lcLDLiise1F8qDKX00 google.golang.org/api v0.54.0/go.mod h1:7C4bFFOvVDGXjfDTAsgGwDgAxRDeQ4X8NvUedIt6z3k= google.golang.org/api v0.55.0/go.mod h1:38yMfeP1kfjsl8isn0tliTjIb1rJXcQi4UXlbqivdVE= google.golang.org/api v0.57.0/go.mod h1:dVPlbZyBo2/OjBpmvNdpn2GRm6rPy75jyU7bmhdrMgI= -google.golang.org/api v0.149.0 h1:b2CqT6kG+zqJIVKRQ3ELJVLN1PwHZ6DJ3dW8yl82rgY= -google.golang.org/api v0.149.0/go.mod h1:Mwn1B7JTXrzXtnvmzQE2BD6bYZQ8DShKZDZbeN9I7qI= +google.golang.org/api v0.152.0 h1:t0r1vPnfMc260S2Ci+en7kfCZaLOPs5KI0sVV/6jZrY= +google.golang.org/api v0.152.0/go.mod h1:3qNJX5eOmhiWYc67jRA/3GsDw97UFb5ivv7Y2PrriAY= google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= google.golang.org/appengine v1.2.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= @@ -3093,8 +3096,8 @@ google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQ google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc= google.golang.org/protobuf v1.28.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I= google.golang.org/protobuf v1.28.1/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I= -google.golang.org/protobuf v1.32.0 h1:pPC6BG5ex8PDFnkbrGU3EixyhKcQ2aDuBS36lqK/C7I= -google.golang.org/protobuf v1.32.0/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos= +google.golang.org/protobuf v1.33.0 h1:uNO2rsAINq/JlFpSdYEKIZ0uKD/R9cpdv0T+yoGwGmI= +google.golang.org/protobuf v1.33.0/go.mod 
h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos= gopkg.in/airbrake/gobrake.v2 v2.0.9/go.mod h1:/h5ZAUhDkGaJfjzjKLSjv6zCL6O0LLBxU4K+aSYdM/U= gopkg.in/alecthomas/kingpin.v2 v2.2.6 h1:jMFz6MfLP0/4fUyZle81rXUoxOBFi19VUFKVDOQfozc= gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw= diff --git a/internal/component/loki/rules/kubernetes/diff.go b/internal/component/common/kubernetes/diff.go similarity index 68% rename from internal/component/loki/rules/kubernetes/diff.go rename to internal/component/common/kubernetes/diff.go index 34c74ed62e..5a1ab8d8d7 100644 --- a/internal/component/loki/rules/kubernetes/diff.go +++ b/internal/component/common/kubernetes/diff.go @@ -1,4 +1,4 @@ -package rules +package kubernetes import ( "bytes" @@ -7,27 +7,27 @@ import ( "gopkg.in/yaml.v3" // Used for prometheus rulefmt compatibility instead of gopkg.in/yaml.v2 ) -type ruleGroupDiffKind string +type RuleGroupDiffKind string const ( - ruleGroupDiffKindAdd ruleGroupDiffKind = "add" - ruleGroupDiffKindRemove ruleGroupDiffKind = "remove" - ruleGroupDiffKindUpdate ruleGroupDiffKind = "update" + RuleGroupDiffKindAdd RuleGroupDiffKind = "add" + RuleGroupDiffKindRemove RuleGroupDiffKind = "remove" + RuleGroupDiffKindUpdate RuleGroupDiffKind = "update" ) -type ruleGroupDiff struct { - Kind ruleGroupDiffKind +type RuleGroupDiff struct { + Kind RuleGroupDiffKind Actual rulefmt.RuleGroup Desired rulefmt.RuleGroup } -type ruleGroupsByNamespace map[string][]rulefmt.RuleGroup -type ruleGroupDiffsByNamespace map[string][]ruleGroupDiff +type RuleGroupsByNamespace map[string][]rulefmt.RuleGroup +type RuleGroupDiffsByNamespace map[string][]RuleGroupDiff -func diffRuleState(desired, actual ruleGroupsByNamespace) ruleGroupDiffsByNamespace { +func DiffRuleState(desired, actual RuleGroupsByNamespace) RuleGroupDiffsByNamespace { seenNamespaces := map[string]bool{} - diff := make(ruleGroupDiffsByNamespace) + diff := make(RuleGroupDiffsByNamespace) for namespace, desiredRuleGroups := range desired { seenNamespaces[namespace] = true @@ -55,8 +55,8 @@ func diffRuleState(desired, actual ruleGroupsByNamespace) ruleGroupDiffsByNamesp return diff } -func diffRuleNamespaceState(desired []rulefmt.RuleGroup, actual []rulefmt.RuleGroup) []ruleGroupDiff { - var diff []ruleGroupDiff +func diffRuleNamespaceState(desired []rulefmt.RuleGroup, actual []rulefmt.RuleGroup) []RuleGroupDiff { + var diff []RuleGroupDiff seenGroups := map[string]bool{} @@ -70,8 +70,8 @@ desiredGroups: continue desiredGroups } - diff = append(diff, ruleGroupDiff{ - Kind: ruleGroupDiffKindUpdate, + diff = append(diff, RuleGroupDiff{ + Kind: RuleGroupDiffKindUpdate, Actual: actualRuleGroup, Desired: desiredRuleGroup, }) @@ -79,8 +79,8 @@ desiredGroups: } } - diff = append(diff, ruleGroupDiff{ - Kind: ruleGroupDiffKindAdd, + diff = append(diff, RuleGroupDiff{ + Kind: RuleGroupDiffKindAdd, Desired: desiredRuleGroup, }) } @@ -90,8 +90,8 @@ desiredGroups: continue } - diff = append(diff, ruleGroupDiff{ - Kind: ruleGroupDiffKindRemove, + diff = append(diff, RuleGroupDiff{ + Kind: RuleGroupDiffKindRemove, Actual: actualRuleGroup, }) } diff --git a/internal/component/loki/rules/kubernetes/diff_test.go b/internal/component/common/kubernetes/diff_test.go similarity index 83% rename from internal/component/loki/rules/kubernetes/diff_test.go rename to internal/component/common/kubernetes/diff_test.go index e52ae13288..7b22e963cf 100644 --- a/internal/component/loki/rules/kubernetes/diff_test.go +++ 
b/internal/component/common/kubernetes/diff_test.go @@ -1,4 +1,4 @@ -package rules +package kubernetes import ( "fmt" @@ -42,7 +42,7 @@ groups: name string desired map[string][]rulefmt.RuleGroup actual map[string][]rulefmt.RuleGroup - expected map[string][]ruleGroupDiff + expected map[string][]RuleGroupDiff } testCases := []testCase{ @@ -50,7 +50,7 @@ groups: name: "empty sets", desired: map[string][]rulefmt.RuleGroup{}, actual: map[string][]rulefmt.RuleGroup{}, - expected: map[string][]ruleGroupDiff{}, + expected: map[string][]RuleGroupDiff{}, }, { name: "add rule group", @@ -58,10 +58,10 @@ groups: managedNamespace: ruleGroupsA, }, actual: map[string][]rulefmt.RuleGroup{}, - expected: map[string][]ruleGroupDiff{ + expected: map[string][]RuleGroupDiff{ managedNamespace: { { - Kind: ruleGroupDiffKindAdd, + Kind: RuleGroupDiffKindAdd, Desired: ruleGroupsA[0], }, }, @@ -73,10 +73,10 @@ groups: actual: map[string][]rulefmt.RuleGroup{ managedNamespace: ruleGroupsA, }, - expected: map[string][]ruleGroupDiff{ + expected: map[string][]RuleGroupDiff{ managedNamespace: { { - Kind: ruleGroupDiffKindRemove, + Kind: RuleGroupDiffKindRemove, Actual: ruleGroupsA[0], }, }, @@ -90,10 +90,10 @@ groups: actual: map[string][]rulefmt.RuleGroup{ managedNamespace: ruleGroupsAModified, }, - expected: map[string][]ruleGroupDiff{ + expected: map[string][]RuleGroupDiff{ managedNamespace: { { - Kind: ruleGroupDiffKindUpdate, + Kind: RuleGroupDiffKindUpdate, Desired: ruleGroupsA[0], Actual: ruleGroupsAModified[0], }, @@ -108,28 +108,28 @@ groups: actual: map[string][]rulefmt.RuleGroup{ managedNamespace: ruleGroupsA, }, - expected: map[string][]ruleGroupDiff{}, + expected: map[string][]RuleGroupDiff{}, }, } for _, tc := range testCases { t.Run(tc.name, func(t *testing.T) { - actual := diffRuleState(tc.desired, tc.actual) + actual := DiffRuleState(tc.desired, tc.actual) requireEqualRuleDiffs(t, tc.expected, actual) }) } } -func requireEqualRuleDiffs(t *testing.T, expected, actual map[string][]ruleGroupDiff) { +func requireEqualRuleDiffs(t *testing.T, expected, actual map[string][]RuleGroupDiff) { require.Equal(t, len(expected), len(actual)) - var summarizeDiff = func(diff ruleGroupDiff) string { + var summarizeDiff = func(diff RuleGroupDiff) string { switch diff.Kind { - case ruleGroupDiffKindAdd: + case RuleGroupDiffKindAdd: return fmt.Sprintf("add: %s", diff.Desired.Name) - case ruleGroupDiffKindRemove: + case RuleGroupDiffKindRemove: return fmt.Sprintf("remove: %s", diff.Actual.Name) - case ruleGroupDiffKindUpdate: + case RuleGroupDiffKindUpdate: return fmt.Sprintf("update: %s", diff.Desired.Name) } panic("unreachable") diff --git a/internal/component/common/kubernetes/event.go b/internal/component/common/kubernetes/event.go new file mode 100644 index 0000000000..6850500582 --- /dev/null +++ b/internal/component/common/kubernetes/event.go @@ -0,0 +1,61 @@ +package kubernetes + +import ( + "github.com/go-kit/log" + "github.com/grafana/agent/internal/flow/logging/level" + "k8s.io/client-go/tools/cache" + "k8s.io/client-go/util/workqueue" +) + +// This type must be hashable, so it is kept simple. The indexer will maintain a +// cache of current state, so this is mostly used for logging. 
+type Event struct { + Typ EventType + ObjectKey string +} + +type EventType string + +const ( + EventTypeResourceChanged EventType = "resource-changed" +) + +type queuedEventHandler struct { + log log.Logger + queue workqueue.RateLimitingInterface +} + +func NewQueuedEventHandler(log log.Logger, queue workqueue.RateLimitingInterface) *queuedEventHandler { + return &queuedEventHandler{ + log: log, + queue: queue, + } +} + +// OnAdd implements the cache.ResourceEventHandler interface. +func (c *queuedEventHandler) OnAdd(obj interface{}, _ bool) { + c.publishEvent(obj) +} + +// OnUpdate implements the cache.ResourceEventHandler interface. +func (c *queuedEventHandler) OnUpdate(oldObj, newObj interface{}) { + c.publishEvent(newObj) +} + +// OnDelete implements the cache.ResourceEventHandler interface. +func (c *queuedEventHandler) OnDelete(obj interface{}) { + c.publishEvent(obj) +} + +func (c *queuedEventHandler) publishEvent(obj interface{}) { + key, err := cache.MetaNamespaceKeyFunc(obj) + if err != nil { + level.Error(c.log).Log("msg", "failed to get key for object", "err", err) + return + } + + c.queue.AddRateLimited(Event{ + Typ: EventTypeResourceChanged, + ObjectKey: key, + }) +} diff --git a/internal/component/common/kubernetes/rules.go b/internal/component/common/kubernetes/rules.go new file mode 100644 index 0000000000..c89d9742af --- /dev/null +++ b/internal/component/common/kubernetes/rules.go @@ -0,0 +1,34 @@ +package kubernetes + +import ( + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/labels" +) + +type LabelSelector struct { + MatchLabels map[string]string `river:"match_labels,attr,optional"` + MatchExpressions []MatchExpression `river:"match_expression,block,optional"` +} + +type MatchExpression struct { + Key string `river:"key,attr"` + Operator string `river:"operator,attr"` + Values []string `river:"values,attr,optional"` +} + +func ConvertSelectorToListOptions(selector LabelSelector) (labels.Selector, error) { + matchExpressions := []metav1.LabelSelectorRequirement{} + + for _, me := range selector.MatchExpressions { + matchExpressions = append(matchExpressions, metav1.LabelSelectorRequirement{ + Key: me.Key, + Operator: metav1.LabelSelectorOperator(me.Operator), + Values: me.Values, + }) + } + + return metav1.LabelSelectorAsSelector(&metav1.LabelSelector{ + MatchLabels: selector.MatchLabels, + MatchExpressions: matchExpressions, + }) +} diff --git a/internal/component/common/kubernetes/rules_test.go b/internal/component/common/kubernetes/rules_test.go new file mode 100644 index 0000000000..3994ea36b6 --- /dev/null +++ b/internal/component/common/kubernetes/rules_test.go @@ -0,0 +1,13 @@ +package kubernetes + +import ( + "testing" + + "k8s.io/client-go/util/workqueue" +) + +func TestEventTypeIsHashable(t *testing.T) { + // This test is here to ensure that the EventType type is hashable according to the workqueue implementation + queue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter()) + queue.AddRateLimited(Event{}) +} diff --git a/internal/component/loki/rules/kubernetes/events.go b/internal/component/loki/rules/kubernetes/events.go index f8f80da31f..cde73f79cd 100644 --- a/internal/component/loki/rules/kubernetes/events.go +++ b/internal/component/loki/rules/kubernetes/events.go @@ -6,69 +6,15 @@ import ( "regexp" "time" - "github.com/go-kit/log" + "github.com/grafana/agent/internal/component/common/kubernetes" "github.com/grafana/agent/internal/flow/logging/level" "github.com/hashicorp/go-multierror" promv1 
"github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1" "github.com/prometheus/prometheus/model/rulefmt" - "k8s.io/client-go/tools/cache" - "k8s.io/client-go/util/workqueue" "sigs.k8s.io/yaml" // Used for CRD compatibility instead of gopkg.in/yaml.v2 ) -// This type must be hashable, so it is kept simple. The indexer will maintain a -// cache of current state, so this is mostly used for logging. -type event struct { - typ eventType - objectKey string -} - -type eventType string - -const ( - eventTypeResourceChanged eventType = "resource-changed" - eventTypeSyncLoki eventType = "sync-loki" -) - -type queuedEventHandler struct { - log log.Logger - queue workqueue.RateLimitingInterface -} - -func newQueuedEventHandler(log log.Logger, queue workqueue.RateLimitingInterface) *queuedEventHandler { - return &queuedEventHandler{ - log: log, - queue: queue, - } -} - -// OnAdd implements the cache.ResourceEventHandler interface. -func (c *queuedEventHandler) OnAdd(obj interface{}, _ bool) { - c.publishEvent(obj) -} - -// OnUpdate implements the cache.ResourceEventHandler interface. -func (c *queuedEventHandler) OnUpdate(oldObj, newObj interface{}) { - c.publishEvent(newObj) -} - -// OnDelete implements the cache.ResourceEventHandler interface. -func (c *queuedEventHandler) OnDelete(obj interface{}) { - c.publishEvent(obj) -} - -func (c *queuedEventHandler) publishEvent(obj interface{}) { - key, err := cache.MetaNamespaceKeyFunc(obj) - if err != nil { - level.Error(c.log).Log("msg", "failed to get key for object", "err", err) - return - } - - c.queue.AddRateLimited(event{ - typ: eventTypeResourceChanged, - objectKey: key, - }) -} +const eventTypeSyncLoki kubernetes.EventType = "sync-loki" func (c *Component) eventLoop(ctx context.Context) { for { @@ -78,14 +24,14 @@ func (c *Component) eventLoop(ctx context.Context) { return } - evt := eventInterface.(event) - c.metrics.eventsTotal.WithLabelValues(string(evt.typ)).Inc() + evt := eventInterface.(kubernetes.Event) + c.metrics.eventsTotal.WithLabelValues(string(evt.Typ)).Inc() err := c.processEvent(ctx, evt) if err != nil { retries := c.queue.NumRequeues(evt) if retries < 5 { - c.metrics.eventsRetried.WithLabelValues(string(evt.typ)).Inc() + c.metrics.eventsRetried.WithLabelValues(string(evt.Typ)).Inc() c.queue.AddRateLimited(evt) level.Error(c.log).Log( "msg", "failed to process event, will retry", @@ -94,7 +40,7 @@ func (c *Component) eventLoop(ctx context.Context) { ) continue } else { - c.metrics.eventsFailed.WithLabelValues(string(evt.typ)).Inc() + c.metrics.eventsFailed.WithLabelValues(string(evt.Typ)).Inc() level.Error(c.log).Log( "msg", "failed to process event, max retries exceeded", "retries", fmt.Sprintf("%d/5", retries), @@ -110,12 +56,12 @@ func (c *Component) eventLoop(ctx context.Context) { } } -func (c *Component) processEvent(ctx context.Context, e event) error { +func (c *Component) processEvent(ctx context.Context, e kubernetes.Event) error { defer c.queue.Done(e) - switch e.typ { - case eventTypeResourceChanged: - level.Info(c.log).Log("msg", "processing event", "type", e.typ, "key", e.objectKey) + switch e.Typ { + case kubernetes.EventTypeResourceChanged: + level.Info(c.log).Log("msg", "processing event", "type", e.Typ, "key", e.ObjectKey) case eventTypeSyncLoki: level.Debug(c.log).Log("msg", "syncing current state from ruler") err := c.syncLoki(ctx) @@ -123,7 +69,7 @@ func (c *Component) processEvent(ctx context.Context, e event) error { return err } default: - return fmt.Errorf("unknown event type: %s", e.typ) + 
return fmt.Errorf("unknown event type: %s", e.Typ) } return c.reconcileState(ctx) @@ -156,7 +102,7 @@ func (c *Component) reconcileState(ctx context.Context) error { return err } - diffs := diffRuleState(desiredState, c.currentState) + diffs := kubernetes.DiffRuleState(desiredState, c.currentState) var result error for ns, diff := range diffs { err = c.applyChanges(ctx, ns, diff) @@ -169,13 +115,13 @@ func (c *Component) reconcileState(ctx context.Context) error { return result } -func (c *Component) loadStateFromK8s() (ruleGroupsByNamespace, error) { +func (c *Component) loadStateFromK8s() (kubernetes.RuleGroupsByNamespace, error) { matchedNamespaces, err := c.namespaceLister.List(c.namespaceSelector) if err != nil { return nil, fmt.Errorf("failed to list namespaces: %w", err) } - desiredState := make(ruleGroupsByNamespace) + desiredState := make(kubernetes.RuleGroupsByNamespace) for _, ns := range matchedNamespaces { crdState, err := c.ruleLister.PrometheusRules(ns.Name).List(c.ruleSelector) if err != nil { @@ -213,26 +159,26 @@ func convertCRDRuleGroupToRuleGroup(crd promv1.PrometheusRuleSpec) ([]rulefmt.Ru return groups.Groups, nil } -func (c *Component) applyChanges(ctx context.Context, namespace string, diffs []ruleGroupDiff) error { +func (c *Component) applyChanges(ctx context.Context, namespace string, diffs []kubernetes.RuleGroupDiff) error { if len(diffs) == 0 { return nil } for _, diff := range diffs { switch diff.Kind { - case ruleGroupDiffKindAdd: + case kubernetes.RuleGroupDiffKindAdd: err := c.lokiClient.CreateRuleGroup(ctx, namespace, diff.Desired) if err != nil { return err } level.Info(c.log).Log("msg", "added rule group", "namespace", namespace, "group", diff.Desired.Name) - case ruleGroupDiffKindRemove: + case kubernetes.RuleGroupDiffKindRemove: err := c.lokiClient.DeleteRuleGroup(ctx, namespace, diff.Actual.Name) if err != nil { return err } level.Info(c.log).Log("msg", "removed rule group", "namespace", namespace, "group", diff.Actual.Name) - case ruleGroupDiffKindUpdate: + case kubernetes.RuleGroupDiffKindUpdate: err := c.lokiClient.CreateRuleGroup(ctx, namespace, diff.Desired) if err != nil { return err diff --git a/internal/component/loki/rules/kubernetes/events_test.go b/internal/component/loki/rules/kubernetes/events_test.go index e6ebf800d6..8c0b5f928e 100644 --- a/internal/component/loki/rules/kubernetes/events_test.go +++ b/internal/component/loki/rules/kubernetes/events_test.go @@ -8,6 +8,7 @@ import ( "time" "github.com/go-kit/log" + "github.com/grafana/agent/internal/component/common/kubernetes" lokiClient "github.com/grafana/agent/internal/loki/client" v1 "github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1" promListers "github.com/prometheus-operator/prometheus-operator/pkg/client/listers/monitoring/v1" @@ -135,7 +136,7 @@ func TestEventLoop(t *testing.T) { args: Arguments{LokiNameSpacePrefix: "agent"}, metrics: newMetrics(), } - eventHandler := newQueuedEventHandler(component.log, component.queue) + eventHandler := kubernetes.NewQueuedEventHandler(component.log, component.queue) ctx, cancel := context.WithCancel(context.Background()) defer cancel() @@ -153,7 +154,7 @@ func TestEventLoop(t *testing.T) { require.NoError(t, err) return len(rules) == 1 }, time.Second, 10*time.Millisecond) - component.queue.AddRateLimited(event{typ: eventTypeSyncLoki}) + component.queue.AddRateLimited(kubernetes.Event{Typ: eventTypeSyncLoki}) // Update the rule in kubernetes rule.Spec.Groups[0].Rules = append(rule.Spec.Groups[0].Rules, v1.Rule{ @@ 
-170,7 +171,7 @@ func TestEventLoop(t *testing.T) { rules := allRules[lokiNamespaceForRuleCRD("agent", rule)][0].Rules return len(rules) == 2 }, time.Second, 10*time.Millisecond) - component.queue.AddRateLimited(event{typ: eventTypeSyncLoki}) + component.queue.AddRateLimited(kubernetes.Event{Typ: eventTypeSyncLoki}) // Remove the rule from kubernetes ruleIndexer.Delete(rule) diff --git a/internal/component/loki/rules/kubernetes/rules.go b/internal/component/loki/rules/kubernetes/rules.go index 9f3d2c486d..081b1f283c 100644 --- a/internal/component/loki/rules/kubernetes/rules.go +++ b/internal/component/loki/rules/kubernetes/rules.go @@ -8,6 +8,7 @@ import ( "github.com/go-kit/log" "github.com/grafana/agent/internal/component" + commonK8s "github.com/grafana/agent/internal/component/common/kubernetes" "github.com/grafana/agent/internal/featuregate" "github.com/grafana/agent/internal/flow/logging/level" lokiClient "github.com/grafana/agent/internal/loki/client" @@ -63,7 +64,7 @@ type Component struct { namespaceSelector labels.Selector ruleSelector labels.Selector - currentState ruleGroupsByNamespace + currentState commonK8s.RuleGroupsByNamespace metrics *metrics healthMut sync.RWMutex @@ -202,8 +203,8 @@ func (c *Component) Run(ctx context.Context) error { c.shutdown() return nil case <-c.ticker.C: - c.queue.Add(event{ - typ: eventTypeSyncLoki, + c.queue.Add(commonK8s.Event{ + Typ: eventTypeSyncLoki, }) } } @@ -274,12 +275,12 @@ func (c *Component) init() error { c.ticker.Reset(c.args.SyncInterval) - c.namespaceSelector, err = convertSelectorToListOptions(c.args.RuleNamespaceSelector) + c.namespaceSelector, err = commonK8s.ConvertSelectorToListOptions(c.args.RuleNamespaceSelector) if err != nil { return err } - c.ruleSelector, err = convertSelectorToListOptions(c.args.RuleSelector) + c.ruleSelector, err = commonK8s.ConvertSelectorToListOptions(c.args.RuleSelector) if err != nil { return err } @@ -287,23 +288,6 @@ func (c *Component) init() error { return nil } -func convertSelectorToListOptions(selector LabelSelector) (labels.Selector, error) { - matchExpressions := []metav1.LabelSelectorRequirement{} - - for _, me := range selector.MatchExpressions { - matchExpressions = append(matchExpressions, metav1.LabelSelectorRequirement{ - Key: me.Key, - Operator: metav1.LabelSelectorOperator(me.Operator), - Values: me.Values, - }) - } - - return metav1.LabelSelectorAsSelector(&metav1.LabelSelector{ - MatchLabels: selector.MatchLabels, - MatchExpressions: matchExpressions, - }) -} - func (c *Component) startNamespaceInformer() error { factory := informers.NewSharedInformerFactoryWithOptions( c.k8sClient, @@ -316,7 +300,7 @@ func (c *Component) startNamespaceInformer() error { namespaces := factory.Core().V1().Namespaces() c.namespaceLister = namespaces.Lister() c.namespaceInformer = namespaces.Informer() - _, err := c.namespaceInformer.AddEventHandler(newQueuedEventHandler(c.log, c.queue)) + _, err := c.namespaceInformer.AddEventHandler(commonK8s.NewQueuedEventHandler(c.log, c.queue)) if err != nil { return err } @@ -338,7 +322,7 @@ func (c *Component) startRuleInformer() error { promRules := factory.Monitoring().V1().PrometheusRules() c.ruleLister = promRules.Lister() c.ruleInformer = promRules.Informer() - _, err := c.ruleInformer.AddEventHandler(newQueuedEventHandler(c.log, c.queue)) + _, err := c.ruleInformer.AddEventHandler(commonK8s.NewQueuedEventHandler(c.log, c.queue)) if err != nil { return err } diff --git a/internal/component/loki/rules/kubernetes/rules_test.go 
b/internal/component/loki/rules/kubernetes/rules_test.go index 332c8942fe..74ccd4cbeb 100644 --- a/internal/component/loki/rules/kubernetes/rules_test.go +++ b/internal/component/loki/rules/kubernetes/rules_test.go @@ -5,15 +5,8 @@ import ( "github.com/grafana/river" "github.com/stretchr/testify/require" - "k8s.io/client-go/util/workqueue" ) -func TestEventTypeIsHashable(t *testing.T) { - // This test is here to ensure that the EventType type is hashable according to the workqueue implementation - queue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter()) - queue.AddRateLimited(event{}) -} - func TestRiverConfig(t *testing.T) { var exampleRiverConfig = ` address = "GRAFANA_CLOUD_METRICS_URL" diff --git a/internal/component/loki/rules/kubernetes/types.go b/internal/component/loki/rules/kubernetes/types.go index 0e9f0bfedc..b98db47196 100644 --- a/internal/component/loki/rules/kubernetes/types.go +++ b/internal/component/loki/rules/kubernetes/types.go @@ -5,6 +5,7 @@ import ( "time" "github.com/grafana/agent/internal/component/common/config" + "github.com/grafana/agent/internal/component/common/kubernetes" ) type Arguments struct { @@ -15,8 +16,8 @@ type Arguments struct { SyncInterval time.Duration `river:"sync_interval,attr,optional"` LokiNameSpacePrefix string `river:"loki_namespace_prefix,attr,optional"` - RuleSelector LabelSelector `river:"rule_selector,block,optional"` - RuleNamespaceSelector LabelSelector `river:"rule_namespace_selector,block,optional"` + RuleSelector kubernetes.LabelSelector `river:"rule_selector,block,optional"` + RuleNamespaceSelector kubernetes.LabelSelector `river:"rule_namespace_selector,block,optional"` } var DefaultArguments = Arguments{ @@ -42,14 +43,3 @@ func (args *Arguments) Validate() error { // We must explicitly Validate because HTTPClientConfig is squashed and it won't run otherwise return args.HTTPClientConfig.Validate() } - -type LabelSelector struct { - MatchLabels map[string]string `river:"match_labels,attr,optional"` - MatchExpressions []MatchExpression `river:"match_expression,block,optional"` -} - -type MatchExpression struct { - Key string `river:"key,attr"` - Operator string `river:"operator,attr"` - Values []string `river:"values,attr,optional"` -} diff --git a/internal/component/mimir/rules/kubernetes/diff.go b/internal/component/mimir/rules/kubernetes/diff.go deleted file mode 100644 index 34c74ed62e..0000000000 --- a/internal/component/mimir/rules/kubernetes/diff.go +++ /dev/null @@ -1,113 +0,0 @@ -package rules - -import ( - "bytes" - - "github.com/prometheus/prometheus/model/rulefmt" - "gopkg.in/yaml.v3" // Used for prometheus rulefmt compatibility instead of gopkg.in/yaml.v2 -) - -type ruleGroupDiffKind string - -const ( - ruleGroupDiffKindAdd ruleGroupDiffKind = "add" - ruleGroupDiffKindRemove ruleGroupDiffKind = "remove" - ruleGroupDiffKindUpdate ruleGroupDiffKind = "update" -) - -type ruleGroupDiff struct { - Kind ruleGroupDiffKind - Actual rulefmt.RuleGroup - Desired rulefmt.RuleGroup -} - -type ruleGroupsByNamespace map[string][]rulefmt.RuleGroup -type ruleGroupDiffsByNamespace map[string][]ruleGroupDiff - -func diffRuleState(desired, actual ruleGroupsByNamespace) ruleGroupDiffsByNamespace { - seenNamespaces := map[string]bool{} - - diff := make(ruleGroupDiffsByNamespace) - - for namespace, desiredRuleGroups := range desired { - seenNamespaces[namespace] = true - - actualRuleGroups := actual[namespace] - subDiff := diffRuleNamespaceState(desiredRuleGroups, actualRuleGroups) - - if len(subDiff) == 0 { - 
continue - } - - diff[namespace] = subDiff - } - - for namespace, actualRuleGroups := range actual { - if seenNamespaces[namespace] { - continue - } - - subDiff := diffRuleNamespaceState(nil, actualRuleGroups) - - diff[namespace] = subDiff - } - - return diff -} - -func diffRuleNamespaceState(desired []rulefmt.RuleGroup, actual []rulefmt.RuleGroup) []ruleGroupDiff { - var diff []ruleGroupDiff - - seenGroups := map[string]bool{} - -desiredGroups: - for _, desiredRuleGroup := range desired { - seenGroups[desiredRuleGroup.Name] = true - - for _, actualRuleGroup := range actual { - if desiredRuleGroup.Name == actualRuleGroup.Name { - if equalRuleGroups(desiredRuleGroup, actualRuleGroup) { - continue desiredGroups - } - - diff = append(diff, ruleGroupDiff{ - Kind: ruleGroupDiffKindUpdate, - Actual: actualRuleGroup, - Desired: desiredRuleGroup, - }) - continue desiredGroups - } - } - - diff = append(diff, ruleGroupDiff{ - Kind: ruleGroupDiffKindAdd, - Desired: desiredRuleGroup, - }) - } - - for _, actualRuleGroup := range actual { - if seenGroups[actualRuleGroup.Name] { - continue - } - - diff = append(diff, ruleGroupDiff{ - Kind: ruleGroupDiffKindRemove, - Actual: actualRuleGroup, - }) - } - - return diff -} - -func equalRuleGroups(a, b rulefmt.RuleGroup) bool { - aBuf, err := yaml.Marshal(a) - if err != nil { - return false - } - bBuf, err := yaml.Marshal(b) - if err != nil { - return false - } - - return bytes.Equal(aBuf, bBuf) -} diff --git a/internal/component/mimir/rules/kubernetes/diff_test.go b/internal/component/mimir/rules/kubernetes/diff_test.go deleted file mode 100644 index e52ae13288..0000000000 --- a/internal/component/mimir/rules/kubernetes/diff_test.go +++ /dev/null @@ -1,157 +0,0 @@ -package rules - -import ( - "fmt" - "testing" - - "github.com/prometheus/prometheus/model/rulefmt" - "github.com/stretchr/testify/require" -) - -func parseRuleGroups(t *testing.T, buf []byte) []rulefmt.RuleGroup { - t.Helper() - - groups, errs := rulefmt.Parse(buf) - require.Empty(t, errs) - - return groups.Groups -} - -func TestDiffRuleState(t *testing.T) { - ruleGroupsA := parseRuleGroups(t, []byte(` -groups: -- name: rule-group-a - interval: 1m - rules: - - record: rule_a - expr: 1 -`)) - - ruleGroupsAModified := parseRuleGroups(t, []byte(` -groups: -- name: rule-group-a - interval: 1m - rules: - - record: rule_a - expr: 3 -`)) - - managedNamespace := "agent/namespace/name/12345678-1234-1234-1234-123456789012" - - type testCase struct { - name string - desired map[string][]rulefmt.RuleGroup - actual map[string][]rulefmt.RuleGroup - expected map[string][]ruleGroupDiff - } - - testCases := []testCase{ - { - name: "empty sets", - desired: map[string][]rulefmt.RuleGroup{}, - actual: map[string][]rulefmt.RuleGroup{}, - expected: map[string][]ruleGroupDiff{}, - }, - { - name: "add rule group", - desired: map[string][]rulefmt.RuleGroup{ - managedNamespace: ruleGroupsA, - }, - actual: map[string][]rulefmt.RuleGroup{}, - expected: map[string][]ruleGroupDiff{ - managedNamespace: { - { - Kind: ruleGroupDiffKindAdd, - Desired: ruleGroupsA[0], - }, - }, - }, - }, - { - name: "remove rule group", - desired: map[string][]rulefmt.RuleGroup{}, - actual: map[string][]rulefmt.RuleGroup{ - managedNamespace: ruleGroupsA, - }, - expected: map[string][]ruleGroupDiff{ - managedNamespace: { - { - Kind: ruleGroupDiffKindRemove, - Actual: ruleGroupsA[0], - }, - }, - }, - }, - { - name: "update rule group", - desired: map[string][]rulefmt.RuleGroup{ - managedNamespace: ruleGroupsA, - }, - actual: 
map[string][]rulefmt.RuleGroup{ - managedNamespace: ruleGroupsAModified, - }, - expected: map[string][]ruleGroupDiff{ - managedNamespace: { - { - Kind: ruleGroupDiffKindUpdate, - Desired: ruleGroupsA[0], - Actual: ruleGroupsAModified[0], - }, - }, - }, - }, - { - name: "unchanged rule groups", - desired: map[string][]rulefmt.RuleGroup{ - managedNamespace: ruleGroupsA, - }, - actual: map[string][]rulefmt.RuleGroup{ - managedNamespace: ruleGroupsA, - }, - expected: map[string][]ruleGroupDiff{}, - }, - } - - for _, tc := range testCases { - t.Run(tc.name, func(t *testing.T) { - actual := diffRuleState(tc.desired, tc.actual) - requireEqualRuleDiffs(t, tc.expected, actual) - }) - } -} - -func requireEqualRuleDiffs(t *testing.T, expected, actual map[string][]ruleGroupDiff) { - require.Equal(t, len(expected), len(actual)) - - var summarizeDiff = func(diff ruleGroupDiff) string { - switch diff.Kind { - case ruleGroupDiffKindAdd: - return fmt.Sprintf("add: %s", diff.Desired.Name) - case ruleGroupDiffKindRemove: - return fmt.Sprintf("remove: %s", diff.Actual.Name) - case ruleGroupDiffKindUpdate: - return fmt.Sprintf("update: %s", diff.Desired.Name) - } - panic("unreachable") - } - - for namespace, expectedDiffs := range expected { - actualDiffs, ok := actual[namespace] - require.True(t, ok) - - require.Equal(t, len(expectedDiffs), len(actualDiffs)) - - for i, expectedDiff := range expectedDiffs { - actualDiff := actualDiffs[i] - - if expectedDiff.Kind != actualDiff.Kind || - !equalRuleGroups(expectedDiff.Desired, actualDiff.Desired) || - !equalRuleGroups(expectedDiff.Actual, actualDiff.Actual) { - - t.Logf("expected diff: %s", summarizeDiff(expectedDiff)) - t.Logf("actual diff: %s", summarizeDiff(actualDiff)) - t.Fail() - } - } - } -} diff --git a/internal/component/mimir/rules/kubernetes/events.go b/internal/component/mimir/rules/kubernetes/events.go index ed3ace0523..7752077d97 100644 --- a/internal/component/mimir/rules/kubernetes/events.go +++ b/internal/component/mimir/rules/kubernetes/events.go @@ -6,69 +6,15 @@ import ( "regexp" "time" - "github.com/go-kit/log" + "github.com/grafana/agent/internal/component/common/kubernetes" "github.com/grafana/agent/internal/flow/logging/level" "github.com/hashicorp/go-multierror" promv1 "github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1" "github.com/prometheus/prometheus/model/rulefmt" - "k8s.io/client-go/tools/cache" - "k8s.io/client-go/util/workqueue" "sigs.k8s.io/yaml" // Used for CRD compatibility instead of gopkg.in/yaml.v2 ) -// This type must be hashable, so it is kept simple. The indexer will maintain a -// cache of current state, so this is mostly used for logging. -type event struct { - typ eventType - objectKey string -} - -type eventType string - -const ( - eventTypeResourceChanged eventType = "resource-changed" - eventTypeSyncMimir eventType = "sync-mimir" -) - -type queuedEventHandler struct { - log log.Logger - queue workqueue.RateLimitingInterface -} - -func newQueuedEventHandler(log log.Logger, queue workqueue.RateLimitingInterface) *queuedEventHandler { - return &queuedEventHandler{ - log: log, - queue: queue, - } -} - -// OnAdd implements the cache.ResourceEventHandler interface. -func (c *queuedEventHandler) OnAdd(obj interface{}, _ bool) { - c.publishEvent(obj) -} - -// OnUpdate implements the cache.ResourceEventHandler interface. -func (c *queuedEventHandler) OnUpdate(oldObj, newObj interface{}) { - c.publishEvent(newObj) -} - -// OnDelete implements the cache.ResourceEventHandler interface. 
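The Mimir hunks that follow mirror the Loki ones above: the per-component `event` type and `queuedEventHandler` are deleted in favor of the shared `kubernetes` package, and both components keep the same queue-driven loop that retries a failed event up to five times before dropping it. Below is a minimal, self-contained sketch of that loop using `client-go`'s workqueue. The `Event` type here is a simplified stand-in for the shared one, and the `Forget` calls are conventional workqueue hygiene rather than something visible in these hunks.

```go
package main

import (
	"fmt"

	"k8s.io/client-go/util/workqueue"
)

// Event is a simplified stand-in for the shared kubernetes.Event type.
// It must stay hashable, because the workqueue tracks retry counts per item.
type Event struct {
	Typ       string
	ObjectKey string
}

const maxRetries = 5

func eventLoop(queue workqueue.RateLimitingInterface, process func(Event) error) {
	for {
		item, shutdown := queue.Get()
		if shutdown {
			return
		}
		evt := item.(Event)

		err := func() error {
			// Mirror the component's processEvent: mark the item done
			// once this attempt finishes, whatever the outcome.
			defer queue.Done(evt)
			return process(evt)
		}()

		switch {
		case err == nil:
			// Reset the rate limiter's backoff for this item.
			queue.Forget(evt)
		case queue.NumRequeues(evt) < maxRetries:
			// Re-enqueue with backoff and try again later.
			queue.AddRateLimited(evt)
		default:
			fmt.Printf("dropping %v after %d retries: %v\n", evt, maxRetries, err)
			queue.Forget(evt)
		}
	}
}

func main() {
	queue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
	queue.Add(Event{Typ: "sync", ObjectKey: "ns/name"})

	eventLoop(queue, func(e Event) error {
		fmt.Println("processing", e)
		queue.ShutDown() // one demo event, then stop; real components stop on ctx.Done()
		return nil
	})
}
```

Because retry bookkeeping is keyed by item identity, the event struct has to remain comparable, which is why the removed `TestEventTypeIsHashable` existed and why its guarantee now lives with the shared type.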
-func (c *queuedEventHandler) OnDelete(obj interface{}) { - c.publishEvent(obj) -} - -func (c *queuedEventHandler) publishEvent(obj interface{}) { - key, err := cache.MetaNamespaceKeyFunc(obj) - if err != nil { - level.Error(c.log).Log("msg", "failed to get key for object", "err", err) - return - } - - c.queue.AddRateLimited(event{ - typ: eventTypeResourceChanged, - objectKey: key, - }) -} +const eventTypeSyncMimir kubernetes.EventType = "sync-mimir" func (c *Component) eventLoop(ctx context.Context) { for { @@ -78,14 +24,14 @@ func (c *Component) eventLoop(ctx context.Context) { return } - evt := eventInterface.(event) - c.metrics.eventsTotal.WithLabelValues(string(evt.typ)).Inc() + evt := eventInterface.(kubernetes.Event) + c.metrics.eventsTotal.WithLabelValues(string(evt.Typ)).Inc() err := c.processEvent(ctx, evt) if err != nil { retries := c.queue.NumRequeues(evt) if retries < 5 { - c.metrics.eventsRetried.WithLabelValues(string(evt.typ)).Inc() + c.metrics.eventsRetried.WithLabelValues(string(evt.Typ)).Inc() c.queue.AddRateLimited(evt) level.Error(c.log).Log( "msg", "failed to process event, will retry", @@ -94,7 +40,7 @@ func (c *Component) eventLoop(ctx context.Context) { ) continue } else { - c.metrics.eventsFailed.WithLabelValues(string(evt.typ)).Inc() + c.metrics.eventsFailed.WithLabelValues(string(evt.Typ)).Inc() level.Error(c.log).Log( "msg", "failed to process event, max retries exceeded", "retries", fmt.Sprintf("%d/5", retries), @@ -110,12 +56,12 @@ func (c *Component) eventLoop(ctx context.Context) { } } -func (c *Component) processEvent(ctx context.Context, e event) error { +func (c *Component) processEvent(ctx context.Context, e kubernetes.Event) error { defer c.queue.Done(e) - switch e.typ { - case eventTypeResourceChanged: - level.Info(c.log).Log("msg", "processing event", "type", e.typ, "key", e.objectKey) + switch e.Typ { + case kubernetes.EventTypeResourceChanged: + level.Info(c.log).Log("msg", "processing event", "type", e.Typ, "key", e.ObjectKey) case eventTypeSyncMimir: level.Debug(c.log).Log("msg", "syncing current state from ruler") err := c.syncMimir(ctx) @@ -123,7 +69,7 @@ func (c *Component) processEvent(ctx context.Context, e event) error { return err } default: - return fmt.Errorf("unknown event type: %s", e.typ) + return fmt.Errorf("unknown event type: %s", e.Typ) } return c.reconcileState(ctx) @@ -156,7 +102,7 @@ func (c *Component) reconcileState(ctx context.Context) error { return err } - diffs := diffRuleState(desiredState, c.currentState) + diffs := kubernetes.DiffRuleState(desiredState, c.currentState) var result error for ns, diff := range diffs { err = c.applyChanges(ctx, ns, diff) @@ -169,13 +115,13 @@ func (c *Component) reconcileState(ctx context.Context) error { return result } -func (c *Component) loadStateFromK8s() (ruleGroupsByNamespace, error) { +func (c *Component) loadStateFromK8s() (kubernetes.RuleGroupsByNamespace, error) { matchedNamespaces, err := c.namespaceLister.List(c.namespaceSelector) if err != nil { return nil, fmt.Errorf("failed to list namespaces: %w", err) } - desiredState := make(ruleGroupsByNamespace) + desiredState := make(kubernetes.RuleGroupsByNamespace) for _, ns := range matchedNamespaces { crdState, err := c.ruleLister.PrometheusRules(ns.Name).List(c.ruleSelector) if err != nil { @@ -211,26 +157,26 @@ func convertCRDRuleGroupToRuleGroup(crd promv1.PrometheusRuleSpec) ([]rulefmt.Ru return groups.Groups, nil } -func (c *Component) applyChanges(ctx context.Context, namespace string, diffs []ruleGroupDiff) error { +func (c 
*Component) applyChanges(ctx context.Context, namespace string, diffs []kubernetes.RuleGroupDiff) error { if len(diffs) == 0 { return nil } for _, diff := range diffs { switch diff.Kind { - case ruleGroupDiffKindAdd: + case kubernetes.RuleGroupDiffKindAdd: err := c.mimirClient.CreateRuleGroup(ctx, namespace, diff.Desired) if err != nil { return err } level.Info(c.log).Log("msg", "added rule group", "namespace", namespace, "group", diff.Desired.Name) - case ruleGroupDiffKindRemove: + case kubernetes.RuleGroupDiffKindRemove: err := c.mimirClient.DeleteRuleGroup(ctx, namespace, diff.Actual.Name) if err != nil { return err } level.Info(c.log).Log("msg", "removed rule group", "namespace", namespace, "group", diff.Actual.Name) - case ruleGroupDiffKindUpdate: + case kubernetes.RuleGroupDiffKindUpdate: err := c.mimirClient.CreateRuleGroup(ctx, namespace, diff.Desired) if err != nil { return err diff --git a/internal/component/mimir/rules/kubernetes/events_test.go b/internal/component/mimir/rules/kubernetes/events_test.go index 621f3383ef..e177e41bd1 100644 --- a/internal/component/mimir/rules/kubernetes/events_test.go +++ b/internal/component/mimir/rules/kubernetes/events_test.go @@ -8,6 +8,7 @@ import ( "time" "github.com/go-kit/log" + "github.com/grafana/agent/internal/component/common/kubernetes" mimirClient "github.com/grafana/agent/internal/mimir/client" v1 "github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1" promListers "github.com/prometheus-operator/prometheus-operator/pkg/client/listers/monitoring/v1" @@ -135,7 +136,7 @@ func TestEventLoop(t *testing.T) { args: Arguments{MimirNameSpacePrefix: "agent"}, metrics: newMetrics(), } - eventHandler := newQueuedEventHandler(component.log, component.queue) + eventHandler := kubernetes.NewQueuedEventHandler(component.log, component.queue) ctx, cancel := context.WithCancel(context.Background()) defer cancel() @@ -153,7 +154,7 @@ func TestEventLoop(t *testing.T) { require.NoError(t, err) return len(rules) == 1 }, time.Second, 10*time.Millisecond) - component.queue.AddRateLimited(event{typ: eventTypeSyncMimir}) + component.queue.AddRateLimited(kubernetes.Event{Typ: eventTypeSyncMimir}) // Update the rule in kubernetes rule.Spec.Groups[0].Rules = append(rule.Spec.Groups[0].Rules, v1.Rule{ @@ -170,7 +171,7 @@ func TestEventLoop(t *testing.T) { rules := allRules[mimirNamespaceForRuleCRD("agent", rule)][0].Rules return len(rules) == 2 }, time.Second, 10*time.Millisecond) - component.queue.AddRateLimited(event{typ: eventTypeSyncMimir}) + component.queue.AddRateLimited(kubernetes.Event{Typ: eventTypeSyncMimir}) // Remove the rule from kubernetes ruleIndexer.Delete(rule) diff --git a/internal/component/mimir/rules/kubernetes/rules.go b/internal/component/mimir/rules/kubernetes/rules.go index 692e176266..e6e6d03d8f 100644 --- a/internal/component/mimir/rules/kubernetes/rules.go +++ b/internal/component/mimir/rules/kubernetes/rules.go @@ -8,6 +8,7 @@ import ( "github.com/go-kit/log" "github.com/grafana/agent/internal/component" + commonK8s "github.com/grafana/agent/internal/component/common/kubernetes" "github.com/grafana/agent/internal/featuregate" "github.com/grafana/agent/internal/flow/logging/level" mimirClient "github.com/grafana/agent/internal/mimir/client" @@ -63,7 +64,7 @@ type Component struct { namespaceSelector labels.Selector ruleSelector labels.Selector - currentState ruleGroupsByNamespace + currentState commonK8s.RuleGroupsByNamespace metrics *metrics healthMut sync.RWMutex @@ -202,8 +203,8 @@ func (c *Component) Run(ctx 
context.Context) error { c.shutdown() return nil case <-c.ticker.C: - c.queue.Add(event{ - typ: eventTypeSyncMimir, + c.queue.Add(commonK8s.Event{ + Typ: eventTypeSyncMimir, }) } } @@ -275,12 +276,12 @@ func (c *Component) init() error { c.ticker.Reset(c.args.SyncInterval) - c.namespaceSelector, err = convertSelectorToListOptions(c.args.RuleNamespaceSelector) + c.namespaceSelector, err = commonK8s.ConvertSelectorToListOptions(c.args.RuleNamespaceSelector) if err != nil { return err } - c.ruleSelector, err = convertSelectorToListOptions(c.args.RuleSelector) + c.ruleSelector, err = commonK8s.ConvertSelectorToListOptions(c.args.RuleSelector) if err != nil { return err } @@ -288,23 +289,6 @@ func (c *Component) init() error { return nil } -func convertSelectorToListOptions(selector LabelSelector) (labels.Selector, error) { - matchExpressions := []metav1.LabelSelectorRequirement{} - - for _, me := range selector.MatchExpressions { - matchExpressions = append(matchExpressions, metav1.LabelSelectorRequirement{ - Key: me.Key, - Operator: metav1.LabelSelectorOperator(me.Operator), - Values: me.Values, - }) - } - - return metav1.LabelSelectorAsSelector(&metav1.LabelSelector{ - MatchLabels: selector.MatchLabels, - MatchExpressions: matchExpressions, - }) -} - func (c *Component) startNamespaceInformer() error { factory := informers.NewSharedInformerFactoryWithOptions( c.k8sClient, @@ -317,7 +301,7 @@ func (c *Component) startNamespaceInformer() error { namespaces := factory.Core().V1().Namespaces() c.namespaceLister = namespaces.Lister() c.namespaceInformer = namespaces.Informer() - _, err := c.namespaceInformer.AddEventHandler(newQueuedEventHandler(c.log, c.queue)) + _, err := c.namespaceInformer.AddEventHandler(commonK8s.NewQueuedEventHandler(c.log, c.queue)) if err != nil { return err } @@ -339,7 +323,7 @@ func (c *Component) startRuleInformer() error { promRules := factory.Monitoring().V1().PrometheusRules() c.ruleLister = promRules.Lister() c.ruleInformer = promRules.Informer() - _, err := c.ruleInformer.AddEventHandler(newQueuedEventHandler(c.log, c.queue)) + _, err := c.ruleInformer.AddEventHandler(commonK8s.NewQueuedEventHandler(c.log, c.queue)) if err != nil { return err } diff --git a/internal/component/mimir/rules/kubernetes/rules_test.go b/internal/component/mimir/rules/kubernetes/rules_test.go index 332c8942fe..74ccd4cbeb 100644 --- a/internal/component/mimir/rules/kubernetes/rules_test.go +++ b/internal/component/mimir/rules/kubernetes/rules_test.go @@ -5,15 +5,8 @@ import ( "github.com/grafana/river" "github.com/stretchr/testify/require" - "k8s.io/client-go/util/workqueue" ) -func TestEventTypeIsHashable(t *testing.T) { - // This test is here to ensure that the EventType type is hashable according to the workqueue implementation - queue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter()) - queue.AddRateLimited(event{}) -} - func TestRiverConfig(t *testing.T) { var exampleRiverConfig = ` address = "GRAFANA_CLOUD_METRICS_URL" diff --git a/internal/component/mimir/rules/kubernetes/types.go b/internal/component/mimir/rules/kubernetes/types.go index d59265f9c6..564d6b4f0e 100644 --- a/internal/component/mimir/rules/kubernetes/types.go +++ b/internal/component/mimir/rules/kubernetes/types.go @@ -5,6 +5,7 @@ import ( "time" "github.com/grafana/agent/internal/component/common/config" + "github.com/grafana/agent/internal/component/common/kubernetes" ) type Arguments struct { @@ -16,8 +17,8 @@ type Arguments struct { SyncInterval time.Duration 
`river:"sync_interval,attr,optional"` MimirNameSpacePrefix string `river:"mimir_namespace_prefix,attr,optional"` - RuleSelector LabelSelector `river:"rule_selector,block,optional"` - RuleNamespaceSelector LabelSelector `river:"rule_namespace_selector,block,optional"` + RuleSelector kubernetes.LabelSelector `river:"rule_selector,block,optional"` + RuleNamespaceSelector kubernetes.LabelSelector `river:"rule_namespace_selector,block,optional"` } var DefaultArguments = Arguments{ @@ -44,14 +45,3 @@ func (args *Arguments) Validate() error { // We must explicitly Validate because HTTPClientConfig is squashed and it won't run otherwise return args.HTTPClientConfig.Validate() } - -type LabelSelector struct { - MatchLabels map[string]string `river:"match_labels,attr,optional"` - MatchExpressions []MatchExpression `river:"match_expression,block,optional"` -} - -type MatchExpression struct { - Key string `river:"key,attr"` - Operator string `river:"operator,attr"` - Values []string `river:"values,attr,optional"` -} diff --git a/internal/component/otelcol/config_grpc.go b/internal/component/otelcol/config_grpc.go index 36c5279f18..da6cca47ed 100644 --- a/internal/component/otelcol/config_grpc.go +++ b/internal/component/otelcol/config_grpc.go @@ -13,6 +13,8 @@ import ( otelextension "go.opentelemetry.io/collector/extension" ) +const DefaultBalancerName = "pick_first" + // GRPCServerArguments holds shared gRPC settings for components which launch // gRPC servers. type GRPCServerArguments struct { @@ -168,6 +170,12 @@ func (args *GRPCClientArguments) Convert() *otelconfiggrpc.GRPCClientSettings { auth = &otelconfigauth.Authentication{AuthenticatorID: args.Auth.ID} } + // Set default value for `balancer_name` to sync up with upstream's + balancerName := args.BalancerName + if balancerName == "" { + balancerName = DefaultBalancerName + } + return &otelconfiggrpc.GRPCClientSettings{ Endpoint: args.Endpoint, @@ -180,7 +188,7 @@ func (args *GRPCClientArguments) Convert() *otelconfiggrpc.GRPCClientSettings { WriteBufferSize: int(args.WriteBufferSize), WaitForReady: args.WaitForReady, Headers: opaqueHeaders, - BalancerName: args.BalancerName, + BalancerName: balancerName, Authority: args.Authority, Auth: auth, diff --git a/internal/component/otelcol/config_queue.go b/internal/component/otelcol/config_queue.go index 154e9772bd..b6a61294e9 100644 --- a/internal/component/otelcol/config_queue.go +++ b/internal/component/otelcol/config_queue.go @@ -21,16 +21,12 @@ var DefaultQueueArguments = QueueArguments{ Enabled: true, NumConsumers: 10, - // Copied from [upstream]: + // Copied from [upstream](https://github.com/open-telemetry/opentelemetry-collector/blob/241334609fc47927b4a8533dfca28e0f65dad9fe/exporter/exporterhelper/queue_sender.go#L50-L53) // - // 5000 queue elements at 100 requests/sec gives about 50 seconds of survival - // of destination outage. This is a pretty decent value for production. Users - // should calculate this from the perspective of how many seconds to buffer - // in case of a backend outage and multiply that by the number of requests - // per second. 
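The new `DefaultBalancerName` constant is applied with the same three-line substitution in `config_grpc.go`, the loadbalancing exporter, and the otlp/loadbalancing converters: an empty `balancer_name` is replaced at convert time so the generated OTel config matches upstream's `pick_first` default. Reduced to its essentials, with a hypothetical `ClientArgs` type standing in for the real argument structs:

```go
package main

import "fmt"

// DefaultBalancerName matches the gRPC default used upstream.
const DefaultBalancerName = "pick_first"

// ClientArgs is a hypothetical stand-in for the gRPC client arguments.
type ClientArgs struct {
	Endpoint     string
	BalancerName string
}

// convert substitutes the upstream default when balancer_name was left
// empty, so the resulting config stays in sync with upstream behavior.
func convert(args ClientArgs) ClientArgs {
	balancerName := args.BalancerName
	if balancerName == "" {
		balancerName = DefaultBalancerName
	}
	return ClientArgs{
		Endpoint:     args.Endpoint,
		BalancerName: balancerName,
	}
}

func main() {
	fmt.Println(convert(ClientArgs{Endpoint: "database:4317"}))                              // falls back to pick_first
	fmt.Println(convert(ClientArgs{Endpoint: "database:4317", BalancerName: "round_robin"})) // kept as-is
}
```

Applying the default inside `Convert`, not only in `DefaultGRPCClientArguments`, also covers values that arrive empty from converted upstream configs.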
- // - // [upstream]: https://github.com/open-telemetry/opentelemetry-collector/blob/ff73e49f74d8fd8c57a849aa3ff23ae1940cc16a/exporter/exporterhelper/queued_retry.go#L62-L65 - QueueSize: 5000, + // By default, batches are 8192 spans, for a total of up to 8 million spans in the queue + // This can be estimated at 1-4 GB worth of maximum memory usage + // This default is probably still too high, and may be adjusted further down in a future release + QueueSize: 1000, } // SetToDefault implements river.Defaulter. diff --git a/internal/component/otelcol/exporter/loadbalancing/loadbalancing.go b/internal/component/otelcol/exporter/loadbalancing/loadbalancing.go index ad0bac1cd1..d9b87a01fb 100644 --- a/internal/component/otelcol/exporter/loadbalancing/loadbalancing.go +++ b/internal/component/otelcol/exporter/loadbalancing/loadbalancing.go @@ -285,6 +285,11 @@ func (args *GRPCClientArguments) Convert() *otelconfiggrpc.GRPCClientSettings { auth = &otelconfigauth.Authentication{AuthenticatorID: args.Auth.ID} } + balancerName := args.BalancerName + if balancerName == "" { + balancerName = otelcol.DefaultBalancerName + } + return &otelconfiggrpc.GRPCClientSettings{ Compression: args.Compression.Convert(), @@ -295,7 +300,7 @@ func (args *GRPCClientArguments) Convert() *otelconfiggrpc.GRPCClientSettings { WriteBufferSize: int(args.WriteBufferSize), WaitForReady: args.WaitForReady, Headers: opaqueHeaders, - BalancerName: args.BalancerName, + BalancerName: balancerName, Authority: args.Authority, Auth: auth, @@ -317,7 +322,7 @@ var DefaultGRPCClientArguments = GRPCClientArguments{ Headers: map[string]string{}, Compression: otelcol.CompressionTypeGzip, WriteBufferSize: 512 * 1024, - BalancerName: "pick_first", + BalancerName: otelcol.DefaultBalancerName, } // SetToDefault implements river.Defaulter. diff --git a/internal/component/otelcol/exporter/loadbalancing/loadbalancing_test.go b/internal/component/otelcol/exporter/loadbalancing/loadbalancing_test.go index 445efff92b..8034531ffb 100644 --- a/internal/component/otelcol/exporter/loadbalancing/loadbalancing_test.go +++ b/internal/component/otelcol/exporter/loadbalancing/loadbalancing_test.go @@ -20,14 +20,10 @@ func TestConfigConversion(t *testing.T) { defaultRetrySettings = exporterhelper.NewDefaultRetrySettings() defaultTimeoutSettings = exporterhelper.NewDefaultTimeoutSettings() - // TODO(rfratto): resync defaults with upstream. - // - // We have drifted from the upstream defaults, which have decreased the - // default queue_size to 1000 since we introduced the defaults. 
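The replacement comment justifies the smaller `QueueSize` with rough arithmetic: 1000 queued batches at the default 8192 spans per batch is about 8 million spans. The bytes-per-span figures below are my own illustrative assumptions, not numbers from the patch, but they reproduce the comment's "1-4 GB" estimate:

```go
package main

import "fmt"

func main() {
	const (
		queueSize     = 1000 // new default number of queued batches
		spansPerBatch = 8192 // default batch size cited in the comment
	)

	maxSpans := queueSize * spansPerBatch
	fmt.Printf("up to %d spans queued\n", maxSpans) // 8192000, i.e. ~8 million

	// Assumed per-span sizes, for illustration only; real memory use
	// depends on attribute counts and string lengths.
	for _, bytesPerSpan := range []int{128, 512} {
		gib := float64(maxSpans) * float64(bytesPerSpan) / (1 << 30)
		fmt.Printf("~%.1f GiB at %d bytes/span\n", gib, bytesPerSpan)
	}
}
```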
defaultQueueSettings = exporterhelper.QueueSettings{ Enabled: true, NumConsumers: 10, - QueueSize: 5000, + QueueSize: 1000, } defaultProtocol = loadbalancingexporter.Protocol{ @@ -37,7 +33,7 @@ func TestConfigConversion(t *testing.T) { Compression: "gzip", WriteBufferSize: 512 * 1024, Headers: map[string]configopaque.String{}, - BalancerName: "pick_first", + BalancerName: otelcol.DefaultBalancerName, }, RetrySettings: defaultRetrySettings, TimeoutSettings: defaultTimeoutSettings, @@ -131,7 +127,7 @@ func TestConfigConversion(t *testing.T) { Compression: "gzip", WriteBufferSize: 512 * 1024, Headers: map[string]configopaque.String{}, - BalancerName: "pick_first", + BalancerName: otelcol.DefaultBalancerName, Authority: "authority", }, }, diff --git a/internal/component/otelcol/exporter/otlp/otlp.go b/internal/component/otelcol/exporter/otlp/otlp.go index b228e7bd68..d50c876226 100644 --- a/internal/component/otelcol/exporter/otlp/otlp.go +++ b/internal/component/otelcol/exporter/otlp/otlp.go @@ -94,7 +94,7 @@ var DefaultGRPCClientArguments = GRPCClientArguments{ Headers: map[string]string{}, Compression: otelcol.CompressionTypeGzip, WriteBufferSize: 512 * 1024, - BalancerName: "pick_first", + BalancerName: otelcol.DefaultBalancerName, } // SetToDefault implements river.Defaulter. diff --git a/internal/component/otelcol/extension/jaeger_remote_sampling/jaeger_remote_sampling.go b/internal/component/otelcol/extension/jaeger_remote_sampling/jaeger_remote_sampling.go index 2f5d3e1257..d4365706f3 100644 --- a/internal/component/otelcol/extension/jaeger_remote_sampling/jaeger_remote_sampling.go +++ b/internal/component/otelcol/extension/jaeger_remote_sampling/jaeger_remote_sampling.go @@ -146,7 +146,7 @@ var DefaultGRPCClientArguments = GRPCClientArguments{ Headers: map[string]string{}, Compression: otelcol.CompressionTypeGzip, WriteBufferSize: 512 * 1024, - BalancerName: "pick_first", + BalancerName: otelcol.DefaultBalancerName, } // SetToDefault implements river.Defaulter. diff --git a/internal/component/otelcol/extension/jaeger_remote_sampling/jaeger_remote_sampling_test.go b/internal/component/otelcol/extension/jaeger_remote_sampling/jaeger_remote_sampling_test.go index f9ff03ec27..9b2bd7374f 100644 --- a/internal/component/otelcol/extension/jaeger_remote_sampling/jaeger_remote_sampling_test.go +++ b/internal/component/otelcol/extension/jaeger_remote_sampling/jaeger_remote_sampling_test.go @@ -221,7 +221,7 @@ func TestUnmarshalUsesDefaults(t *testing.T) { Headers: map[string]string{}, Compression: otelcol.CompressionTypeGzip, WriteBufferSize: 512 * 1024, - BalancerName: "pick_first", + BalancerName: otelcol.DefaultBalancerName, }, }, }, diff --git a/internal/component/otelcol/processor/k8sattributes/k8sattributes.go b/internal/component/otelcol/processor/k8sattributes/k8sattributes.go index 6dd081bc31..082fc6d51a 100644 --- a/internal/component/otelcol/processor/k8sattributes/k8sattributes.go +++ b/internal/component/otelcol/processor/k8sattributes/k8sattributes.go @@ -43,6 +43,18 @@ type Arguments struct { Output *otelcol.ConsumerArguments `river:"output,block"` } +// SetToDefault implements river.Defaulter. 
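The hunk continuing below gives `otelcol.processor.k8sattributes` its first `SetToDefault` implementation. River invokes `SetToDefault` before unmarshaling, so a config that omits the `exclude` block now inherits upstream's `jaeger-agent`/`jaeger-collector` excludes instead of silently excluding nothing. A minimal sketch of the pattern, with simplified stand-in types rather than the processor's real ones:

```go
package main

import "fmt"

// ExcludePodConfig and ExcludeConfig are simplified stand-ins for the
// processor's real configuration types.
type ExcludePodConfig struct{ Name string }

type ExcludeConfig struct{ Pods []ExcludePodConfig }

type Arguments struct {
	Exclude ExcludeConfig
}

// SetToDefault seeds the arguments with upstream's defaults. The decoder
// calls this before unmarshaling, so user-provided values overwrite these
// while omitted ones keep them.
func (args *Arguments) SetToDefault() {
	args.Exclude = ExcludeConfig{
		Pods: []ExcludePodConfig{
			{Name: "jaeger-agent"},
			{Name: "jaeger-collector"},
		},
	}
}

func main() {
	var args Arguments
	args.SetToDefault()
	fmt.Println(args.Exclude.Pods) // [{jaeger-agent} {jaeger-collector}]
}
```

The reworked test below accordingly splits into "default excludes" and "custom excludes" subtests, covering both the seeded defaults and the override path.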
+func (args *Arguments) SetToDefault() { + // These are default excludes from upstream opentelemetry-collector-contrib + // Source: https://github.com/open-telemetry/opentelemetry-collector-contrib/blame/main/processor/k8sattributesprocessor/factory.go#L21 + args.Exclude = ExcludeConfig{ + Pods: []ExcludePodConfig{ + {Name: "jaeger-agent"}, + {Name: "jaeger-collector"}, + }, + } +} + // Validate implements river.Validator. func (args *Arguments) Validate() error { cfg, err := args.Convert() diff --git a/internal/component/otelcol/processor/k8sattributes/k8sattributes_test.go b/internal/component/otelcol/processor/k8sattributes/k8sattributes_test.go index 94a844c595..1a5df80a63 100644 --- a/internal/component/otelcol/processor/k8sattributes/k8sattributes_test.go +++ b/internal/component/otelcol/processor/k8sattributes/k8sattributes_test.go @@ -313,13 +313,30 @@ func Test_Passthrough(t *testing.T) { } func Test_Exclude(t *testing.T) { - cfg := ` - exclude { - pod { - name = "jaeger-agent" + t.Run("default excludes", func(t *testing.T) { + cfg := ` + exclude { } + output { + // no-op: will be overridden by test code. } + ` + var args k8sattributes.Arguments + require.NoError(t, river.Unmarshal([]byte(cfg), &args)) + + convertedArgs, err := args.Convert() + require.NoError(t, err) + otelObj := (convertedArgs).(*k8sattributesprocessor.Config) + + exclude := &otelObj.Exclude + require.Len(t, exclude.Pods, 2) + require.Equal(t, "jaeger-agent", exclude.Pods[0].Name) + require.Equal(t, "jaeger-collector", exclude.Pods[1].Name) + }) + t.Run("custom excludes", func(t *testing.T) { + cfg := ` + exclude { pod { - name = "jaeger-collector" + name = "grafana-agent" } } @@ -327,15 +344,15 @@ func Test_Exclude(t *testing.T) { // no-op: will be overridden by test code. } ` - var args k8sattributes.Arguments - require.NoError(t, river.Unmarshal([]byte(cfg), &args)) + var args k8sattributes.Arguments + require.NoError(t, river.Unmarshal([]byte(cfg), &args)) - convertedArgs, err := args.Convert() - require.NoError(t, err) - otelObj := (convertedArgs).(*k8sattributesprocessor.Config) + convertedArgs, err := args.Convert() + require.NoError(t, err) + otelObj := (convertedArgs).(*k8sattributesprocessor.Config) - exclude := &otelObj.Exclude - require.Len(t, exclude.Pods, 2) - require.Equal(t, "jaeger-agent", exclude.Pods[0].Name) - require.Equal(t, "jaeger-collector", exclude.Pods[1].Name) + exclude := &otelObj.Exclude + require.Len(t, exclude.Pods, 1) + require.Equal(t, "grafana-agent", exclude.Pods[0].Name) + }) } diff --git a/internal/component/otelcol/processor/k8sattributes/types.go b/internal/component/otelcol/processor/k8sattributes/types.go index 1fdeb47e92..d44d8f5828 100644 --- a/internal/component/otelcol/processor/k8sattributes/types.go +++ b/internal/component/otelcol/processor/k8sattributes/types.go @@ -157,6 +157,7 @@ func (args ExcludeConfig) convert() map[string]interface{} { for _, pod := range args.Pods { pods = append(pods, pod.convert()) } + result["pods"] = pods return result diff --git a/internal/component/otelcol/receiver/opencensus/opencensus.go b/internal/component/otelcol/receiver/opencensus/opencensus.go index 2ebeb412d8..1a4ac11573 100644 --- a/internal/component/otelcol/receiver/opencensus/opencensus.go +++ b/internal/component/otelcol/receiver/opencensus/opencensus.go @@ -43,7 +43,7 @@ var _ receiver.Arguments = Arguments{} // Default server settings. 
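A subtle fix rides along in the `types.go` hunk above: `ExcludeConfig.convert` built the `pods` slice but never stored it in the returned map, so configured pod excludes were dropped on the way to the underlying processor config. The bug pattern, reduced to illustrative types:

```go
package main

import "fmt"

type pod struct{ Name string }

func convert(pods []pod) map[string]interface{} {
	result := map[string]interface{}{}

	converted := make([]interface{}, 0, len(pods))
	for _, p := range pods {
		converted = append(converted, map[string]interface{}{"name": p.Name})
	}
	// The missing line: without this assignment, the loop above is dead
	// code and the excludes never reach the processor configuration.
	result["pods"] = converted

	return result
}

func main() {
	fmt.Println(convert([]pod{{Name: "jaeger-agent"}}))
}
```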
var DefaultArguments = Arguments{ GRPC: otelcol.GRPCServerArguments{ - Endpoint: "0.0.0.0:4317", + Endpoint: "0.0.0.0:55678", Transport: "tcp", ReadBufferSize: 512 * units.Kibibyte, diff --git a/internal/converter/internal/otelcolconvert/converter.go b/internal/converter/internal/otelcolconvert/converter.go index 4939289f8b..77d74f61c2 100644 --- a/internal/converter/internal/otelcolconvert/converter.go +++ b/internal/converter/internal/otelcolconvert/converter.go @@ -5,6 +5,7 @@ import ( "strings" "github.com/grafana/agent/internal/converter/diag" + "github.com/grafana/agent/internal/converter/internal/common" "github.com/grafana/river/token/builder" "go.opentelemetry.io/collector/component" "go.opentelemetry.io/collector/otelcol" @@ -61,8 +62,9 @@ type state struct { // extensionLookup maps OTel extensions to Flow component IDs. extensionLookup map[component.ID]componentID - componentID component.InstanceID // ID of the current component being converted. - componentConfig component.Config // Config of the current component being converted. + componentID component.InstanceID // ID of the current component being converted. + componentConfig component.Config // Config of the current component being converted. + componentLabelPrefix string // Prefix for the label of the current component being converted. } type converterKey struct { @@ -119,9 +121,13 @@ func (state *state) flowLabelForComponent(c component.InstanceID) string { // // Otherwise, we'll replace empty group and component names with "default" // and concatenate them with an underscore. + unsanitizedLabel := state.componentLabelPrefix + if unsanitizedLabel != "" { + unsanitizedLabel += "_" + } switch { case groupName == "" && componentName == "": - return defaultLabel + unsanitizedLabel += defaultLabel default: if groupName == "" { @@ -130,8 +136,10 @@ func (state *state) flowLabelForComponent(c component.InstanceID) string { if componentName == "" { componentName = defaultLabel } - return fmt.Sprintf("%s_%s", groupName, componentName) + unsanitizedLabel += fmt.Sprintf("%s_%s", groupName, componentName) } + + return common.SanitizeIdentifierPanics(unsanitizedLabel) } // Next returns the set of Flow component IDs for a given data type that the diff --git a/internal/converter/internal/otelcolconvert/converter_bearertokenauthextension.go b/internal/converter/internal/otelcolconvert/converter_bearertokenauthextension.go new file mode 100644 index 0000000000..63f134ec8a --- /dev/null +++ b/internal/converter/internal/otelcolconvert/converter_bearertokenauthextension.go @@ -0,0 +1,82 @@ +package otelcolconvert + +import ( + "fmt" + "time" + + "github.com/grafana/agent/internal/component/local/file" + "github.com/grafana/agent/internal/component/otelcol/auth/bearer" + "github.com/grafana/agent/internal/converter/diag" + "github.com/grafana/agent/internal/converter/internal/common" + "github.com/grafana/river/rivertypes" + "github.com/grafana/river/token/builder" + "github.com/open-telemetry/opentelemetry-collector-contrib/extension/bearertokenauthextension" + "go.opentelemetry.io/collector/component" +) + +func init() { + converters = append(converters, bearerTokenAuthExtensionConverter{}) +} + +type bearerTokenAuthExtensionConverter struct{} + +func (bearerTokenAuthExtensionConverter) Factory() component.Factory { + return bearertokenauthextension.NewFactory() +} + +func (bearerTokenAuthExtensionConverter) InputComponentName() string { return "otelcol.auth.bearer" } + +func (bearerTokenAuthExtensionConverter) ConvertAndAppend(state 
*state, id component.InstanceID, cfg component.Config) diag.Diagnostics { + var diags diag.Diagnostics + + label := state.FlowComponentLabel() + + bcfg := cfg.(*bearertokenauthextension.Config) + var block *builder.Block + + if bcfg.Filename == "" { + args := toBearerTokenAuthExtension(bcfg) + block = common.NewBlockWithOverride([]string{"otelcol", "auth", "bearer"}, label, args) + } else { + args, fileContents := toBearerTokenAuthExtensionWithFilename(state, bcfg) + overrideHook := func(val interface{}) interface{} { + switch value := val.(type) { + case rivertypes.Secret: + return common.CustomTokenizer{Expr: fileContents} + default: + return value + } + } + block = common.NewBlockWithOverrideFn([]string{"otelcol", "auth", "bearer"}, label, args, overrideHook) + } + + diags.Add( + diag.SeverityLevelInfo, + fmt.Sprintf("Converted %s into %s", stringifyInstanceID(id), stringifyBlock(block)), + ) + + state.Body().AppendBlock(block) + return diags +} + +func toBearerTokenAuthExtension(cfg *bearertokenauthextension.Config) *bearer.Arguments { + return &bearer.Arguments{ + Scheme: cfg.Scheme, + Token: rivertypes.Secret(string(cfg.BearerToken)), + } +} +func toBearerTokenAuthExtensionWithFilename(state *state, cfg *bearertokenauthextension.Config) (*bearer.Arguments, string) { + label := state.FlowComponentLabel() + args := &file.Arguments{ + Filename: cfg.Filename, + Type: file.DefaultArguments.Type, // Using the default type (fsnotify) since that's what upstream also uses. + PollFrequency: 60 * time.Second, // Setting an arbitrary polling time. + IsSecret: true, + } + block := common.NewBlockWithOverride([]string{"local", "file"}, label, args) + state.Body().AppendBlock(block) + + return &bearer.Arguments{ + Scheme: cfg.Scheme, + }, fmt.Sprintf("%s.content", stringifyBlock(block)) +} diff --git a/internal/converter/internal/otelcolconvert/converter_headerssetterextension.go b/internal/converter/internal/otelcolconvert/converter_headerssetterextension.go new file mode 100644 index 0000000000..799bc96042 --- /dev/null +++ b/internal/converter/internal/otelcolconvert/converter_headerssetterextension.go @@ -0,0 +1,65 @@ +package otelcolconvert + +import ( + "fmt" + + "github.com/grafana/agent/internal/component/otelcol/auth/headers" + "github.com/grafana/agent/internal/converter/diag" + "github.com/grafana/agent/internal/converter/internal/common" + "github.com/grafana/river/rivertypes" + "github.com/open-telemetry/opentelemetry-collector-contrib/extension/headerssetterextension" + "go.opentelemetry.io/collector/component" +) + +func init() { + converters = append(converters, headersSetterExtensionConverter{}) +} + +type headersSetterExtensionConverter struct{} + +func (headersSetterExtensionConverter) Factory() component.Factory { + return headerssetterextension.NewFactory() +} + +func (headersSetterExtensionConverter) InputComponentName() string { return "otelcol.auth.headers" } + +func (headersSetterExtensionConverter) ConvertAndAppend(state *state, id component.InstanceID, cfg component.Config) diag.Diagnostics { + var diags diag.Diagnostics + + label := state.FlowComponentLabel() + + args := toHeadersSetterExtension(cfg.(*headerssetterextension.Config)) + block := common.NewBlockWithOverride([]string{"otelcol", "auth", "headers"}, label, args) + + diags.Add( + diag.SeverityLevelInfo, + fmt.Sprintf("Converted %s into %s", stringifyInstanceID(id), stringifyBlock(block)), + ) + + state.Body().AppendBlock(block) + return diags +} + +func toHeadersSetterExtension(cfg 
*headerssetterextension.Config) *headers.Arguments { + res := make([]headers.Header, 0, len(cfg.HeadersConfig)) + for _, h := range cfg.HeadersConfig { + var val *rivertypes.OptionalSecret + if h.Value != nil { + val = &rivertypes.OptionalSecret{ + IsSecret: false, // we default to non-secret so that the converted configuration includes the actual value instead of (secret). + Value: *h.Value, + } + } + + res = append(res, headers.Header{ + Key: *h.Key, // h.Key cannot be nil or it's not valid configuration for the upstream component. + Value: val, + FromContext: h.FromContext, + Action: headers.Action(h.Action), + }) + } + + return &headers.Arguments{ + Headers: res, + } +} diff --git a/internal/converter/internal/otelcolconvert/converter_loadbalancingexporter.go b/internal/converter/internal/otelcolconvert/converter_loadbalancingexporter.go index 92e14b4a1b..a01136e1d2 100644 --- a/internal/converter/internal/otelcolconvert/converter_loadbalancingexporter.go +++ b/internal/converter/internal/otelcolconvert/converter_loadbalancingexporter.go @@ -68,6 +68,13 @@ func toProtocol(cfg loadbalancingexporter.Protocol) loadbalancing.Protocol { if cfg.OTLP.Auth != nil { a = &auth.Handler{} } + + // Set default value for `balancer_name` to sync up with upstream's + balancerName := cfg.OTLP.BalancerName + if balancerName == "" { + balancerName = otelcol.DefaultBalancerName + } + return loadbalancing.Protocol{ // NOTE(rfratto): this has a lot of overlap with converting the // otlpexporter, but otelcol.exporter.loadbalancing uses custom types to @@ -86,7 +93,7 @@ func toProtocol(cfg loadbalancingexporter.Protocol) loadbalancing.Protocol { WriteBufferSize: units.Base2Bytes(cfg.OTLP.WriteBufferSize), WaitForReady: cfg.OTLP.WaitForReady, Headers: toHeadersMap(cfg.OTLP.Headers), - BalancerName: cfg.OTLP.BalancerName, + BalancerName: balancerName, Authority: cfg.OTLP.Authority, Auth: a, diff --git a/internal/converter/internal/otelcolconvert/converter_loggingexporter.go b/internal/converter/internal/otelcolconvert/converter_loggingexporter.go new file mode 100644 index 0000000000..76d85cd2f0 --- /dev/null +++ b/internal/converter/internal/otelcolconvert/converter_loggingexporter.go @@ -0,0 +1,57 @@ +package otelcolconvert + +import ( + "fmt" + + "github.com/grafana/agent/internal/component/otelcol/exporter/logging" + "github.com/grafana/agent/internal/converter/diag" + "github.com/grafana/agent/internal/converter/internal/common" + "go.opentelemetry.io/collector/component" + "go.opentelemetry.io/collector/exporter/loggingexporter" + "go.uber.org/zap/zapcore" +) + +func init() { + converters = append(converters, loggingExporterConverter{}) +} + +type loggingExporterConverter struct{} + +func (loggingExporterConverter) Factory() component.Factory { + return loggingexporter.NewFactory() +} + +func (loggingExporterConverter) InputComponentName() string { + return "otelcol.exporter.logging" +} + +func (loggingExporterConverter) ConvertAndAppend(state *state, id component.InstanceID, cfg component.Config) diag.Diagnostics { + var diags diag.Diagnostics + + label := state.FlowComponentLabel() + args := toOtelcolExporterLogging(cfg.(*loggingexporter.Config)) + block := common.NewBlockWithOverrideFn([]string{"otelcol", "exporter", "logging"}, label, args, nil) + + diags.Add( + diag.SeverityLevelInfo, + fmt.Sprintf("Converted %s into %s", stringifyInstanceID(id), stringifyBlock(block)), + ) + + diags.AddAll(common.ValidateSupported(common.NotEquals, + cfg.(*loggingexporter.Config).LogLevel, + zapcore.InfoLevel, + 
"otelcol logging exporter loglevel", + "use verbosity instead since loglevel is deprecated")) + + state.Body().AppendBlock(block) + return diags +} + +func toOtelcolExporterLogging(cfg *loggingexporter.Config) *logging.Arguments { + return &logging.Arguments{ + Verbosity: cfg.Verbosity, + SamplingInitial: cfg.SamplingInitial, + SamplingThereafter: cfg.SamplingThereafter, + DebugMetrics: common.DefaultValue[logging.Arguments]().DebugMetrics, + } +} diff --git a/internal/converter/internal/otelcolconvert/converter_otlpexporter.go b/internal/converter/internal/otelcolconvert/converter_otlpexporter.go index 8fbc4809a4..230478144c 100644 --- a/internal/converter/internal/otelcolconvert/converter_otlpexporter.go +++ b/internal/converter/internal/otelcolconvert/converter_otlpexporter.go @@ -92,6 +92,13 @@ func toGRPCClientArguments(cfg configgrpc.GRPCClientSettings) otelcol.GRPCClient if cfg.Auth != nil { a = &auth.Handler{} } + + // Set default value for `balancer_name` to sync up with upstream's + balancerName := cfg.BalancerName + if balancerName == "" { + balancerName = otelcol.DefaultBalancerName + } + return otelcol.GRPCClientArguments{ Endpoint: cfg.Endpoint, @@ -104,7 +111,7 @@ func toGRPCClientArguments(cfg configgrpc.GRPCClientSettings) otelcol.GRPCClient WriteBufferSize: units.Base2Bytes(cfg.WriteBufferSize), WaitForReady: cfg.WaitForReady, Headers: toHeadersMap(cfg.Headers), - BalancerName: cfg.BalancerName, + BalancerName: balancerName, Authority: cfg.Authority, Auth: a, diff --git a/internal/converter/internal/otelcolconvert/otelcolconvert.go b/internal/converter/internal/otelcolconvert/otelcolconvert.go index 719887ac91..8262053c0c 100644 --- a/internal/converter/internal/otelcolconvert/otelcolconvert.go +++ b/internal/converter/internal/otelcolconvert/otelcolconvert.go @@ -65,7 +65,7 @@ func Convert(in []byte, extraArgs []string) ([]byte, diag.Diagnostics) { f := builder.NewFile() - diags.AddAll(appendConfig(f, cfg)) + diags.AddAll(AppendConfig(f, cfg, "")) diags.AddAll(common.ValidateNodes(f)) var buf bytes.Buffer @@ -141,9 +141,9 @@ func getFactories() otelcol.Factories { return facts } -// appendConfig converts the provided OpenTelemetry config into an equivalent +// AppendConfig converts the provided OpenTelemetry config into an equivalent // Flow config and appends the result to the provided file. 
-func appendConfig(file *builder.File, cfg *otelcol.Config) diag.Diagnostics { +func AppendConfig(file *builder.File, cfg *otelcol.Config, labelPrefix string) diag.Diagnostics { var diags diag.Diagnostics groups, err := createPipelineGroups(cfg.Service.Pipelines) @@ -198,8 +198,9 @@ func appendConfig(file *builder.File, cfg *otelcol.Config) diag.Diagnostics { converterLookup: converterTable, - componentConfig: cfg.Extensions, - componentID: cid, + componentConfig: cfg.Extensions, + componentID: cid, + componentLabelPrefix: labelPrefix, } key := converterKey{Kind: component.KindExtension, Type: ext.Type()} @@ -244,8 +245,9 @@ func appendConfig(file *builder.File, cfg *otelcol.Config) diag.Diagnostics { converterLookup: converterTable, extensionLookup: extensionTable, - componentConfig: componentSet.configLookup[id], - componentID: componentID, + componentConfig: componentSet.configLookup[id], + componentID: componentID, + componentLabelPrefix: labelPrefix, } key := converterKey{Kind: componentSet.kind, Type: id.Type()} diff --git a/internal/converter/internal/otelcolconvert/testdata/attributes.yaml b/internal/converter/internal/otelcolconvert/testdata/attributes.yaml index dc9cfcd6e7..1e24baf4d6 100644 --- a/internal/converter/internal/otelcolconvert/testdata/attributes.yaml +++ b/internal/converter/internal/otelcolconvert/testdata/attributes.yaml @@ -6,12 +6,7 @@ receivers: exporters: otlp: - # Our defaults have drifted from upstream, so we explicitly set our - # defaults below (balancer_name and queue_size). endpoint: database:4317 - balancer_name: pick_first - sending_queue: - queue_size: 5000 processors: attributes/example: diff --git a/internal/converter/internal/otelcolconvert/testdata/basicauth.yaml b/internal/converter/internal/otelcolconvert/testdata/basicauth.yaml index bc585d4f9a..9f12acbd5d 100644 --- a/internal/converter/internal/otelcolconvert/testdata/basicauth.yaml +++ b/internal/converter/internal/otelcolconvert/testdata/basicauth.yaml @@ -25,12 +25,7 @@ exporters: otlp: auth: authenticator: basicauth - # Our defaults have drifted from upstream, so we explicitly set our - # defaults below (balancer_name and queue_size). endpoint: database:4317 - balancer_name: pick_first - sending_queue: - queue_size: 5000 service: extensions: [basicauth, basicauth/client] diff --git a/internal/converter/internal/otelcolconvert/testdata/batch.yaml b/internal/converter/internal/otelcolconvert/testdata/batch.yaml index 7cb5f20639..42e3bb4fd1 100644 --- a/internal/converter/internal/otelcolconvert/testdata/batch.yaml +++ b/internal/converter/internal/otelcolconvert/testdata/batch.yaml @@ -9,12 +9,7 @@ processors: exporters: otlp: - # Our defaults have drifted from upstream, so we explicitly set our - # defaults below (balancer_name and queue_size). 
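The `AppendConfig` hunk above exports the converter entry point and threads a `labelPrefix` through to `flowLabelForComponent`, which now routes its result through `common.SanitizeIdentifierPanics` so that prefixed labels stay valid River identifiers. A sketch of the label construction follows; the `sanitize` helper here assumes invalid characters are replaced with underscores, which may not match the real helper exactly:

```go
package main

import (
	"fmt"
	"regexp"
)

const defaultLabel = "default"

// sanitize is a stand-in for common.SanitizeIdentifierPanics; the
// underscore-replacement rule is an assumption for illustration.
var invalidChars = regexp.MustCompile(`[^a-zA-Z0-9_]`)

func sanitize(s string) string {
	return invalidChars.ReplaceAllString(s, "_")
}

// flowLabel mirrors the logic in flowLabelForComponent: optional prefix,
// "default" placeholders for empty group/component names, then sanitize.
func flowLabel(prefix, groupName, componentName string) string {
	label := prefix
	if label != "" {
		label += "_"
	}
	switch {
	case groupName == "" && componentName == "":
		label += defaultLabel
	default:
		if groupName == "" {
			groupName = defaultLabel
		}
		if componentName == "" {
			componentName = defaultLabel
		}
		label += fmt.Sprintf("%s_%s", groupName, componentName)
	}
	return sanitize(label)
}

func main() {
	fmt.Println(flowLabel("", "", ""))             // default
	fmt.Println(flowLabel("agent", "traces", ""))  // agent_traces_default
	fmt.Println(flowLabel("my-app", "", "otlp/2")) // my_app_default_otlp_2
}
```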
endpoint: database:4317 - balancer_name: pick_first - sending_queue: - queue_size: 5000 service: pipelines: diff --git a/internal/converter/internal/otelcolconvert/testdata/bearertoken.river b/internal/converter/internal/otelcolconvert/testdata/bearertoken.river new file mode 100644 index 0000000000..83a26c92d8 --- /dev/null +++ b/internal/converter/internal/otelcolconvert/testdata/bearertoken.river @@ -0,0 +1,39 @@ +local.file "default_fromfile" { + filename = "file-containing.token" + is_secret = true +} + +otelcol.auth.bearer "default_fromfile" { + token = local.file.default_fromfile.content +} + +otelcol.auth.bearer "default_withscheme" { + scheme = "CustomScheme" + token = "randomtoken" +} + +otelcol.receiver.otlp "default" { + grpc { } + + http { } + + output { + metrics = [otelcol.exporter.otlp.default_withauth.input, otelcol.exporter.otlphttp.default_withauth.input] + logs = [otelcol.exporter.otlp.default_withauth.input, otelcol.exporter.otlphttp.default_withauth.input] + traces = [otelcol.exporter.otlp.default_withauth.input, otelcol.exporter.otlphttp.default_withauth.input] + } +} + +otelcol.exporter.otlp "default_withauth" { + client { + endpoint = "database:4317" + auth = otelcol.auth.bearer.default_fromfile.handler + } +} + +otelcol.exporter.otlphttp "default_withauth" { + client { + endpoint = "database:4318" + auth = otelcol.auth.bearer.default_withscheme.handler + } +} diff --git a/internal/converter/internal/otelcolconvert/testdata/bearertoken.yaml b/internal/converter/internal/otelcolconvert/testdata/bearertoken.yaml new file mode 100644 index 0000000000..dc4b3cfb94 --- /dev/null +++ b/internal/converter/internal/otelcolconvert/testdata/bearertoken.yaml @@ -0,0 +1,42 @@ +extensions: + bearertokenauth/fromfile: + token: "somerandomtoken" # this will be ignored in lieu of the filename field. + filename: "file-containing.token" + bearertokenauth/withscheme: + scheme: "CustomScheme" + token: "randomtoken" + +receivers: + otlp: + protocols: + grpc: + http: + +exporters: + otlp/withauth: + # Our defaults have drifted from upstream, so we explicitly set our + # defaults below (balancer_name). + endpoint: database:4317 + auth: + authenticator: bearertokenauth/fromfile + balancer_name: pick_first + otlphttp/withauth: + endpoint: database:4318 + auth: + authenticator: bearertokenauth/withscheme + +service: + extensions: [bearertokenauth/fromfile, bearertokenauth/withscheme] + pipelines: + metrics: + receivers: [otlp] + processors: [] + exporters: [otlp/withauth, otlphttp/withauth] + logs: + receivers: [otlp] + processors: [] + exporters: [otlp/withauth, otlphttp/withauth] + traces: + receivers: [otlp] + processors: [] + exporters: [otlp/withauth, otlphttp/withauth] diff --git a/internal/converter/internal/otelcolconvert/testdata/filter.yaml b/internal/converter/internal/otelcolconvert/testdata/filter.yaml index d67232cc50..2a6366f7b8 100644 --- a/internal/converter/internal/otelcolconvert/testdata/filter.yaml +++ b/internal/converter/internal/otelcolconvert/testdata/filter.yaml @@ -6,12 +6,7 @@ receivers: exporters: otlp: - # Our defaults have drifted from upstream, so we explicitly set our - # defaults below (balancer_name and queue_size). 
endpoint: database:4317 - balancer_name: pick_first - sending_queue: - queue_size: 5000 processors: filter/ottl: diff --git a/internal/converter/internal/otelcolconvert/testdata/headerssetter.river b/internal/converter/internal/otelcolconvert/testdata/headerssetter.river new file mode 100644 index 0000000000..327bf44f09 --- /dev/null +++ b/internal/converter/internal/otelcolconvert/testdata/headerssetter.river @@ -0,0 +1,42 @@ +otelcol.auth.headers "default" { + header { + key = "X-Scope-OrgID" + from_context = "tenant_id" + action = "insert" + } + + header { + key = "User-ID" + value = "user_id" + } + + header { + key = "User-ID" + value = "user_id" + action = "update" + } + + header { + key = "Some-Header" + action = "delete" + } +} + +otelcol.receiver.otlp "default" { + grpc { } + + http { } + + output { + metrics = [otelcol.exporter.otlp.default.input] + logs = [otelcol.exporter.otlp.default.input] + traces = [otelcol.exporter.otlp.default.input] + } +} + +otelcol.exporter.otlp "default" { + client { + endpoint = "database:4317" + auth = otelcol.auth.headers.default.handler + } +} diff --git a/internal/converter/internal/otelcolconvert/testdata/headerssetter.yaml b/internal/converter/internal/otelcolconvert/testdata/headerssetter.yaml new file mode 100644 index 0000000000..a5d44e3d4c --- /dev/null +++ b/internal/converter/internal/otelcolconvert/testdata/headerssetter.yaml @@ -0,0 +1,45 @@ +extensions: + headers_setter: + headers: + - action: insert + key: X-Scope-OrgID + from_context: tenant_id + - action: upsert + key: User-ID + value: user_id + - action: update + key: User-ID + value: user_id + - action: delete + key: Some-Header + +receivers: + otlp: + protocols: + grpc: + http: + +exporters: + otlp: + # Our defaults have drifted from upstream, so we explicitly set our + # defaults below (balancer_name). + endpoint: database:4317 + auth: + authenticator: headers_setter + balancer_name: pick_first + +service: + extensions: [ headers_setter ] + pipelines: + metrics: + receivers: [otlp] + processors: [] + exporters: [otlp] + logs: + receivers: [otlp] + processors: [] + exporters: [otlp] + traces: + receivers: [otlp] + processors: [] + exporters: [otlp] diff --git a/internal/converter/internal/otelcolconvert/testdata/inconsistent_processor.yaml b/internal/converter/internal/otelcolconvert/testdata/inconsistent_processor.yaml index b53b67761c..519e26de7e 100644 --- a/internal/converter/internal/otelcolconvert/testdata/inconsistent_processor.yaml +++ b/internal/converter/internal/otelcolconvert/testdata/inconsistent_processor.yaml @@ -9,12 +9,7 @@ processors: exporters: otlp: - # Our defaults have drifted from upstream, so we explicitly set our - # defaults below (balancer_name and queue_size). endpoint: database:4317 - balancer_name: pick_first - sending_queue: - queue_size: 5000 service: pipelines: diff --git a/internal/converter/internal/otelcolconvert/testdata/jaeger.yaml b/internal/converter/internal/otelcolconvert/testdata/jaeger.yaml index 0f92e78718..dcc7525bd5 100644 --- a/internal/converter/internal/otelcolconvert/testdata/jaeger.yaml +++ b/internal/converter/internal/otelcolconvert/testdata/jaeger.yaml @@ -8,12 +8,7 @@ receivers: exporters: otlp: - # Our defaults have drifted from upstream, so we explicitly set our - # defaults below (balancer_name and queue_size). 
endpoint: database:4317 - balancer_name: pick_first - sending_queue: - queue_size: 5000 service: pipelines: diff --git a/internal/converter/internal/otelcolconvert/testdata/jaegerremotesampling.yaml b/internal/converter/internal/otelcolconvert/testdata/jaegerremotesampling.yaml index d85b388771..c52e5600f9 100644 --- a/internal/converter/internal/otelcolconvert/testdata/jaegerremotesampling.yaml +++ b/internal/converter/internal/otelcolconvert/testdata/jaegerremotesampling.yaml @@ -27,12 +27,7 @@ receivers: exporters: otlp: - # Our defaults have drifted from upstream, so we explicitly set our - # defaults below (balancer_name and queue_size). endpoint: database:4317 - balancer_name: pick_first - sending_queue: - queue_size: 5000 service: extensions: [jaegerremotesampling] diff --git a/internal/converter/internal/otelcolconvert/testdata/k8sattributes.river b/internal/converter/internal/otelcolconvert/testdata/k8sattributes.river index f2819753ce..a79183e7af 100644 --- a/internal/converter/internal/otelcolconvert/testdata/k8sattributes.river +++ b/internal/converter/internal/otelcolconvert/testdata/k8sattributes.river @@ -17,16 +17,6 @@ otelcol.processor.k8sattributes "default" { metadata = ["container.image.name", "container.image.tag", "k8s.deployment.name", "k8s.namespace.name", "k8s.node.name", "k8s.pod.name", "k8s.pod.start_time", "k8s.pod.uid"] } - exclude { - pod { - name = "jaeger-agent" - } - - pod { - name = "jaeger-collector" - } - } - output { metrics = [otelcol.exporter.otlp.default.input] logs = [otelcol.exporter.otlp.default.input] diff --git a/internal/converter/internal/otelcolconvert/testdata/k8sattributes.yaml b/internal/converter/internal/otelcolconvert/testdata/k8sattributes.yaml index dfeee2cebc..bb59afa0fb 100644 --- a/internal/converter/internal/otelcolconvert/testdata/k8sattributes.yaml +++ b/internal/converter/internal/otelcolconvert/testdata/k8sattributes.yaml @@ -6,12 +6,7 @@ receivers: exporters: otlp: - # Our defaults have drifted from upstream, so we explicitly set our - # defaults below (balancer_name and queue_size). endpoint: database:4317 - balancer_name: pick_first - sending_queue: - queue_size: 5000 processors: k8sattributes: diff --git a/internal/converter/internal/otelcolconvert/testdata/kafka.yaml b/internal/converter/internal/otelcolconvert/testdata/kafka.yaml index 456c87a007..fb8455bbf5 100644 --- a/internal/converter/internal/otelcolconvert/testdata/kafka.yaml +++ b/internal/converter/internal/otelcolconvert/testdata/kafka.yaml @@ -25,12 +25,7 @@ receivers: exporters: otlp: - # Our defaults have drifted from upstream, so we explicitly set our - # defaults below (balancer_name and queue_size). 
endpoint: database:4317 - balancer_name: pick_first - sending_queue: - queue_size: 5000 service: pipelines: diff --git a/internal/converter/internal/otelcolconvert/testdata/loadbalancing.yaml b/internal/converter/internal/otelcolconvert/testdata/loadbalancing.yaml index 3c51727183..dcb7e55d33 100644 --- a/internal/converter/internal/otelcolconvert/testdata/loadbalancing.yaml +++ b/internal/converter/internal/otelcolconvert/testdata/loadbalancing.yaml @@ -8,9 +8,6 @@ exporters: routing_key: "service" protocol: otlp: - balancer_name: pick_first - sending_queue: - queue_size: 5000 resolver: static: hostnames: diff --git a/internal/converter/internal/otelcolconvert/testdata/logging.diags b/internal/converter/internal/otelcolconvert/testdata/logging.diags new file mode 100644 index 0000000000..77de050d8c --- /dev/null +++ b/internal/converter/internal/otelcolconvert/testdata/logging.diags @@ -0,0 +1 @@ +(Error) The converter does not support converting the provided otelcol logging exporter loglevel config: use verbosity instead since loglevel is deprecated \ No newline at end of file diff --git a/internal/converter/internal/otelcolconvert/testdata/logging.river b/internal/converter/internal/otelcolconvert/testdata/logging.river new file mode 100644 index 0000000000..78a74badd9 --- /dev/null +++ b/internal/converter/internal/otelcolconvert/testdata/logging.river @@ -0,0 +1,23 @@ +otelcol.receiver.otlp "default" { + grpc { } + + http { } + + output { + metrics = [otelcol.exporter.logging.default.input, otelcol.exporter.logging.default_2.input] + logs = [otelcol.exporter.logging.default.input, otelcol.exporter.logging.default_2.input] + traces = [otelcol.exporter.logging.default.input, otelcol.exporter.logging.default_2.input] + } +} + +otelcol.exporter.logging "default" { + verbosity = "Detailed" + sampling_initial = 5 + sampling_thereafter = 200 +} + +otelcol.exporter.logging "default_2" { + verbosity = "Detailed" + sampling_initial = 5 + sampling_thereafter = 200 +} diff --git a/internal/converter/internal/otelcolconvert/testdata/logging.yaml b/internal/converter/internal/otelcolconvert/testdata/logging.yaml new file mode 100644 index 0000000000..78589031ca --- /dev/null +++ b/internal/converter/internal/otelcolconvert/testdata/logging.yaml @@ -0,0 +1,30 @@ +receivers: + otlp: + protocols: + grpc: + http: + +exporters: + logging: + verbosity: detailed + sampling_initial: 5 + sampling_thereafter: 200 + logging/2: + sampling_initial: 5 + sampling_thereafter: 200 + loglevel: debug + +service: + pipelines: + metrics: + receivers: [otlp] + processors: [] + exporters: [logging,logging/2] + logs: + receivers: [otlp] + processors: [] + exporters: [logging,logging/2] + traces: + receivers: [otlp] + processors: [] + exporters: [logging,logging/2] diff --git a/internal/converter/internal/otelcolconvert/testdata/memorylimiter.yaml b/internal/converter/internal/otelcolconvert/testdata/memorylimiter.yaml index 1056c8959e..8dded50387 100644 --- a/internal/converter/internal/otelcolconvert/testdata/memorylimiter.yaml +++ b/internal/converter/internal/otelcolconvert/testdata/memorylimiter.yaml @@ -6,12 +6,7 @@ receivers: exporters: otlp: - # Our defaults have drifted from upstream, so we explicitly set our - # defaults below (balancer_name and queue_size). 
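The new `logging.diags` fixture above records the converter rejecting the deprecated `loglevel` key on the logging exporter. A hedged reconstruction of that guard follows; the boolean is a placeholder for however the real converter detects the field, while the message string is verbatim from the fixture:

```go
package example

import "github.com/grafana/agent/internal/converter/diag"

// rejectDeprecatedLogLevel sketches the check behind logging.diags: when the
// deprecated loglevel key is set, emit an error steering users to verbosity.
func rejectDeprecatedLogLevel(diags *diag.Diagnostics, usesDeprecatedLogLevel bool) {
	if usesDeprecatedLogLevel {
		diags.Add(diag.SeverityLevelError,
			"The converter does not support converting the provided otelcol logging exporter loglevel config: "+
				"use verbosity instead since loglevel is deprecated")
	}
}
```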
endpoint: database:4317 - balancer_name: pick_first - sending_queue: - queue_size: 5000 processors: memory_limiter: diff --git a/internal/converter/internal/otelcolconvert/testdata/oauth2.yaml b/internal/converter/internal/otelcolconvert/testdata/oauth2.yaml index d337d40bca..40a2930009 100644 --- a/internal/converter/internal/otelcolconvert/testdata/oauth2.yaml +++ b/internal/converter/internal/otelcolconvert/testdata/oauth2.yaml @@ -26,23 +26,14 @@ receivers: exporters: otlphttp/noauth: - # Our defaults have drifted from upstream, so we explicitly set our - # defaults below for queue_size. endpoint: database:4318 - sending_queue: - queue_size: 5000 otlp/withauth: tls: ca_file: /tmp/certs/ca.pem auth: authenticator: oauth2client - # Our defaults have drifted from upstream, so we explicitly set our - # defaults below (balancer_name and queue_size). endpoint: database:4317 - balancer_name: pick_first - sending_queue: - queue_size: 5000 service: extensions: [oauth2client] diff --git a/internal/converter/internal/otelcolconvert/testdata/opencensus.river b/internal/converter/internal/otelcolconvert/testdata/opencensus.river index 156647ab31..e4ef378fa3 100644 --- a/internal/converter/internal/otelcolconvert/testdata/opencensus.river +++ b/internal/converter/internal/otelcolconvert/testdata/opencensus.river @@ -1,6 +1,4 @@ otelcol.receiver.opencensus "default" { - endpoint = "0.0.0.0:55678" - output { metrics = [otelcol.exporter.otlp.default.input] traces = [otelcol.exporter.otlp.default.input] diff --git a/internal/converter/internal/otelcolconvert/testdata/opencensus.yaml b/internal/converter/internal/otelcolconvert/testdata/opencensus.yaml index 52777dc5ec..ef1c70b0f2 100644 --- a/internal/converter/internal/otelcolconvert/testdata/opencensus.yaml +++ b/internal/converter/internal/otelcolconvert/testdata/opencensus.yaml @@ -3,12 +3,7 @@ receivers: exporters: otlp: - # Our defaults have drifted from upstream, so we explicitly set our - # defaults below (balancer_name and queue_size). endpoint: database:4317 - balancer_name: pick_first - sending_queue: - queue_size: 5000 service: pipelines: diff --git a/internal/converter/internal/otelcolconvert/testdata/otlp.yaml b/internal/converter/internal/otelcolconvert/testdata/otlp.yaml index d7803ffd8d..289b5be9a6 100644 --- a/internal/converter/internal/otelcolconvert/testdata/otlp.yaml +++ b/internal/converter/internal/otelcolconvert/testdata/otlp.yaml @@ -6,12 +6,7 @@ receivers: exporters: otlp: - # Our defaults have drifted from upstream, so we explicitly set our - # defaults below (balancer_name and queue_size). endpoint: database:4317 - balancer_name: pick_first - sending_queue: - queue_size: 5000 service: pipelines: diff --git a/internal/converter/internal/otelcolconvert/testdata/otlphttp.yaml b/internal/converter/internal/otelcolconvert/testdata/otlphttp.yaml index 91d4c03bfa..0d57cfe2d5 100644 --- a/internal/converter/internal/otelcolconvert/testdata/otlphttp.yaml +++ b/internal/converter/internal/otelcolconvert/testdata/otlphttp.yaml @@ -6,11 +6,7 @@ receivers: exporters: otlphttp: - # Our defaults have drifted from upstream, so we explicitly set our - # defaults below for queue_size. 
endpoint: database:4318 - sending_queue: - queue_size: 5000 service: pipelines: diff --git a/internal/converter/internal/otelcolconvert/testdata/probabilistic_sampler.yaml b/internal/converter/internal/otelcolconvert/testdata/probabilistic_sampler.yaml index 6f058dbd6c..07fe159096 100644 --- a/internal/converter/internal/otelcolconvert/testdata/probabilistic_sampler.yaml +++ b/internal/converter/internal/otelcolconvert/testdata/probabilistic_sampler.yaml @@ -6,12 +6,7 @@ receivers: exporters: otlp: - # Our defaults have drifted from upstream, so we explicitly set our - # defaults below (balancer_name and queue_size). endpoint: database:4317 - balancer_name: pick_first - sending_queue: - queue_size: 5000 processors: probabilistic_sampler: diff --git a/internal/converter/internal/otelcolconvert/testdata/span.yaml b/internal/converter/internal/otelcolconvert/testdata/span.yaml index 3d61052153..5797594932 100644 --- a/internal/converter/internal/otelcolconvert/testdata/span.yaml +++ b/internal/converter/internal/otelcolconvert/testdata/span.yaml @@ -6,12 +6,7 @@ receivers: exporters: otlp: - # Our defaults have drifted from upstream, so we explicitly set our - # defaults below (balancer_name and queue_size). endpoint: database:4317 - balancer_name: pick_first - sending_queue: - queue_size: 5000 processors: span: diff --git a/internal/converter/internal/otelcolconvert/testdata/span_full.yaml b/internal/converter/internal/otelcolconvert/testdata/span_full.yaml index e7a5173727..f9335da0dd 100644 --- a/internal/converter/internal/otelcolconvert/testdata/span_full.yaml +++ b/internal/converter/internal/otelcolconvert/testdata/span_full.yaml @@ -6,12 +6,7 @@ receivers: exporters: otlp: - # Our defaults have drifted from upstream, so we explicitly set our - # defaults below (balancer_name and queue_size). endpoint: database:4317 - balancer_name: pick_first - sending_queue: - queue_size: 5000 processors: # Since this processor has deeply nested attributes, we're adding a more diff --git a/internal/converter/internal/otelcolconvert/testdata/spanmetrics.yaml b/internal/converter/internal/otelcolconvert/testdata/spanmetrics.yaml index 58786875cc..cf907299aa 100644 --- a/internal/converter/internal/otelcolconvert/testdata/spanmetrics.yaml +++ b/internal/converter/internal/otelcolconvert/testdata/spanmetrics.yaml @@ -6,12 +6,7 @@ receivers: exporters: otlp: - # Our defaults have drifted from upstream, so we explicitly set our - # defaults below (balancer_name and queue_size). 
endpoint: database:4317 - balancer_name: pick_first - sending_queue: - queue_size: 5000 processors: batch: diff --git a/internal/converter/internal/otelcolconvert/testdata/spanmetrics_full.river b/internal/converter/internal/otelcolconvert/testdata/spanmetrics_full.river index 07665b91f2..ad20c45537 100644 --- a/internal/converter/internal/otelcolconvert/testdata/spanmetrics_full.river +++ b/internal/converter/internal/otelcolconvert/testdata/spanmetrics_full.river @@ -32,18 +32,18 @@ otelcol.connector.spanmetrics "default" { } } -otelcol.exporter.otlp "foo_metrics_backend_two" { +otelcol.exporter.otlp "_2_metrics_backend_2" { client { endpoint = "database:54317" } } -otelcol.connector.spanmetrics "foo_default" { +otelcol.connector.spanmetrics "_2_default" { histogram { explicit { } } output { - metrics = [otelcol.exporter.otlp.foo_metrics_backend_two.input] + metrics = [otelcol.exporter.otlp._2_metrics_backend_2.input] } } diff --git a/internal/converter/internal/otelcolconvert/testdata/spanmetrics_full.yaml b/internal/converter/internal/otelcolconvert/testdata/spanmetrics_full.yaml index b2ebe10cfa..9e00d25918 100644 --- a/internal/converter/internal/otelcolconvert/testdata/spanmetrics_full.yaml +++ b/internal/converter/internal/otelcolconvert/testdata/spanmetrics_full.yaml @@ -6,28 +6,13 @@ receivers: exporters: otlp/traces_backend: - # Our defaults have drifted from upstream, so we explicitly set our - # defaults below (balancer_name and queue_size). endpoint: database:34317 - balancer_name: pick_first - sending_queue: - queue_size: 5000 otlp/metrics_backend: - # Our defaults have drifted from upstream, so we explicitly set our - # defaults below (balancer_name and queue_size). endpoint: database:44317 - balancer_name: pick_first - sending_queue: - queue_size: 5000 - otlp/metrics_backend_two: - # Our defaults have drifted from upstream, so we explicitly set our - # defaults below (balancer_name and queue_size). + otlp/metrics_backend/2: endpoint: database:54317 - balancer_name: pick_first - sending_queue: - queue_size: 5000 connectors: spanmetrics: @@ -42,7 +27,7 @@ service: metrics: receivers: [spanmetrics] exporters: [otlp/metrics_backend] - metrics/foo: + metrics/2: receivers: [spanmetrics] - exporters: [otlp/metrics_backend_two] + exporters: [otlp/metrics_backend/2] diff --git a/internal/converter/internal/otelcolconvert/testdata/tail_sampling.yaml b/internal/converter/internal/otelcolconvert/testdata/tail_sampling.yaml index ac87148c80..9a1b11c47e 100644 --- a/internal/converter/internal/otelcolconvert/testdata/tail_sampling.yaml +++ b/internal/converter/internal/otelcolconvert/testdata/tail_sampling.yaml @@ -6,12 +6,7 @@ receivers: exporters: otlp: - # Our defaults have drifted from upstream, so we explicitly set our - # defaults below (balancer_name and queue_size). endpoint: database:4317 - balancer_name: pick_first - sending_queue: - queue_size: 5000 processors: tail_sampling: diff --git a/internal/converter/internal/otelcolconvert/testdata/transform.yaml b/internal/converter/internal/otelcolconvert/testdata/transform.yaml index 4bd271d264..563f63aed5 100644 --- a/internal/converter/internal/otelcolconvert/testdata/transform.yaml +++ b/internal/converter/internal/otelcolconvert/testdata/transform.yaml @@ -51,12 +51,7 @@ processors: exporters: otlp: - # Our defaults have drifted from upstream, so we explicitly set our - # defaults below (balancer_name and queue_size). 
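The `foo_*` → `_2_*` renames in `spanmetrics_full.river` above fall out of label sanitization: a pipeline named `metrics/2` produces identifier candidates that start with a digit, and `scanner.SanitizeIdentifier` (the same helper the converter uses elsewhere in this diff) prefixes an underscore. A standalone illustration; the printed value is inferred from the golden file rather than verified here:

```go
package main

import (
	"fmt"

	"github.com/grafana/river/scanner"
)

func main() {
	// "metrics/2" plus the component name "default" yields the candidate
	// "2_default", which is not a valid river identifier on its own.
	label, err := scanner.SanitizeIdentifier("2_default")
	if err != nil {
		panic(err)
	}
	fmt.Println(label) // expected per the golden file: _2_default
}
```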
endpoint: database:4317 - balancer_name: pick_first - sending_queue: - queue_size: 5000 service: pipelines: diff --git a/internal/converter/internal/otelcolconvert/testdata/zipkin.yaml b/internal/converter/internal/otelcolconvert/testdata/zipkin.yaml index a750a0c7fe..7a4a0d8523 100644 --- a/internal/converter/internal/otelcolconvert/testdata/zipkin.yaml +++ b/internal/converter/internal/otelcolconvert/testdata/zipkin.yaml @@ -3,12 +3,7 @@ receivers: exporters: otlp: - # Our defaults have drifted from upstream, so we explicitly set our - # defaults below (balancer_name and queue_size). endpoint: database:4317 - balancer_name: pick_first - sending_queue: - queue_size: 5000 service: pipelines: diff --git a/internal/converter/internal/staticconvert/internal/build/apache_exporter.go b/internal/converter/internal/staticconvert/internal/build/apache_exporter.go index 41cd1e311c..df368bb744 100644 --- a/internal/converter/internal/staticconvert/internal/build/apache_exporter.go +++ b/internal/converter/internal/staticconvert/internal/build/apache_exporter.go @@ -7,7 +7,7 @@ import ( apache_exporter_v2 "github.com/grafana/agent/internal/static/integrations/v2/apache_http" ) -func (b *IntegrationsConfigBuilder) appendApacheExporter(config *apache_http.Config) discovery.Exports { +func (b *ConfigBuilder) appendApacheExporter(config *apache_http.Config) discovery.Exports { args := toApacheExporter(config) return b.appendExporterBlock(args, config.Name(), nil, "apache") } @@ -20,7 +20,7 @@ func toApacheExporter(config *apache_http.Config) *apache.Arguments { } } -func (b *IntegrationsConfigBuilder) appendApacheExporterV2(config *apache_exporter_v2.Config) discovery.Exports { +func (b *ConfigBuilder) appendApacheExporterV2(config *apache_exporter_v2.Config) discovery.Exports { args := toApacheExporterV2(config) return b.appendExporterBlock(args, config.Name(), config.Common.InstanceKey, "apache") } diff --git a/internal/converter/internal/staticconvert/internal/build/app_agent_receiver.go b/internal/converter/internal/staticconvert/internal/build/app_agent_receiver.go index d5179e9c11..1926fdb97d 100644 --- a/internal/converter/internal/staticconvert/internal/build/app_agent_receiver.go +++ b/internal/converter/internal/staticconvert/internal/build/app_agent_receiver.go @@ -14,7 +14,7 @@ import ( "github.com/grafana/river/scanner" ) -func (b *IntegrationsConfigBuilder) appendAppAgentReceiverV2(config *app_agent_receiver_v2.Config) { +func (b *ConfigBuilder) appendAppAgentReceiverV2(config *app_agent_receiver_v2.Config) { args := toAppAgentReceiverV2(config) compLabel, err := scanner.SanitizeIdentifier(b.formatJobName(config.Name(), nil)) diff --git a/internal/converter/internal/staticconvert/internal/build/azure_exporter.go b/internal/converter/internal/staticconvert/internal/build/azure_exporter.go index 90493479f9..d099c67849 100644 --- a/internal/converter/internal/staticconvert/internal/build/azure_exporter.go +++ b/internal/converter/internal/staticconvert/internal/build/azure_exporter.go @@ -6,7 +6,7 @@ import ( "github.com/grafana/agent/internal/static/integrations/azure_exporter" ) -func (b *IntegrationsConfigBuilder) appendAzureExporter(config *azure_exporter.Config, instanceKey *string) discovery.Exports { +func (b *ConfigBuilder) appendAzureExporter(config *azure_exporter.Config, instanceKey *string) discovery.Exports { args := toAzureExporter(config) return b.appendExporterBlock(args, config.Name(), instanceKey, "azure") } diff --git 
a/internal/converter/internal/staticconvert/internal/build/blackbox_exporter.go b/internal/converter/internal/staticconvert/internal/build/blackbox_exporter.go index 38bd62c53c..0c2fb9b9f7 100644 --- a/internal/converter/internal/staticconvert/internal/build/blackbox_exporter.go +++ b/internal/converter/internal/staticconvert/internal/build/blackbox_exporter.go @@ -10,7 +10,7 @@ import ( "github.com/grafana/river/rivertypes" ) -func (b *IntegrationsConfigBuilder) appendBlackboxExporter(config *blackbox_exporter.Config) discovery.Exports { +func (b *ConfigBuilder) appendBlackboxExporter(config *blackbox_exporter.Config) discovery.Exports { args := toBlackboxExporter(config) return b.appendExporterBlock(args, config.Name(), nil, "blackbox") } @@ -27,7 +27,7 @@ func toBlackboxExporter(config *blackbox_exporter.Config) *blackbox.Arguments { } } -func (b *IntegrationsConfigBuilder) appendBlackboxExporterV2(config *blackbox_exporter_v2.Config) discovery.Exports { +func (b *ConfigBuilder) appendBlackboxExporterV2(config *blackbox_exporter_v2.Config) discovery.Exports { args := toBlackboxExporterV2(config) return b.appendExporterBlock(args, config.Name(), config.Common.InstanceKey, "blackbox") } diff --git a/internal/converter/internal/staticconvert/internal/build/builder.go b/internal/converter/internal/staticconvert/internal/build/builder.go index 1f8c695031..677e28b91e 100644 --- a/internal/converter/internal/staticconvert/internal/build/builder.go +++ b/internal/converter/internal/staticconvert/internal/build/builder.go @@ -1,67 +1,22 @@ package build import ( - "fmt" "strings" - "github.com/grafana/agent/internal/component" - "github.com/grafana/agent/internal/component/discovery" - "github.com/grafana/agent/internal/component/prometheus/remotewrite" "github.com/grafana/agent/internal/converter/diag" - "github.com/grafana/agent/internal/converter/internal/common" - "github.com/grafana/agent/internal/converter/internal/prometheusconvert" "github.com/grafana/agent/internal/static/config" - agent_exporter "github.com/grafana/agent/internal/static/integrations/agent" - "github.com/grafana/agent/internal/static/integrations/apache_http" - "github.com/grafana/agent/internal/static/integrations/azure_exporter" - "github.com/grafana/agent/internal/static/integrations/blackbox_exporter" - "github.com/grafana/agent/internal/static/integrations/cadvisor" - "github.com/grafana/agent/internal/static/integrations/cloudwatch_exporter" - int_config "github.com/grafana/agent/internal/static/integrations/config" - "github.com/grafana/agent/internal/static/integrations/consul_exporter" - "github.com/grafana/agent/internal/static/integrations/dnsmasq_exporter" - "github.com/grafana/agent/internal/static/integrations/elasticsearch_exporter" - "github.com/grafana/agent/internal/static/integrations/gcp_exporter" - "github.com/grafana/agent/internal/static/integrations/github_exporter" - "github.com/grafana/agent/internal/static/integrations/kafka_exporter" - "github.com/grafana/agent/internal/static/integrations/memcached_exporter" - "github.com/grafana/agent/internal/static/integrations/mongodb_exporter" - mssql_exporter "github.com/grafana/agent/internal/static/integrations/mssql" - "github.com/grafana/agent/internal/static/integrations/mysqld_exporter" - "github.com/grafana/agent/internal/static/integrations/node_exporter" - "github.com/grafana/agent/internal/static/integrations/oracledb_exporter" - "github.com/grafana/agent/internal/static/integrations/postgres_exporter" - 
"github.com/grafana/agent/internal/static/integrations/process_exporter" - "github.com/grafana/agent/internal/static/integrations/redis_exporter" - "github.com/grafana/agent/internal/static/integrations/snmp_exporter" - "github.com/grafana/agent/internal/static/integrations/snowflake_exporter" - "github.com/grafana/agent/internal/static/integrations/squid_exporter" - "github.com/grafana/agent/internal/static/integrations/statsd_exporter" - agent_exporter_v2 "github.com/grafana/agent/internal/static/integrations/v2/agent" - apache_exporter_v2 "github.com/grafana/agent/internal/static/integrations/v2/apache_http" - app_agent_receiver_v2 "github.com/grafana/agent/internal/static/integrations/v2/app_agent_receiver" - blackbox_exporter_v2 "github.com/grafana/agent/internal/static/integrations/v2/blackbox_exporter" - common_v2 "github.com/grafana/agent/internal/static/integrations/v2/common" - eventhandler_v2 "github.com/grafana/agent/internal/static/integrations/v2/eventhandler" - metricsutils_v2 "github.com/grafana/agent/internal/static/integrations/v2/metricsutils" - snmp_exporter_v2 "github.com/grafana/agent/internal/static/integrations/v2/snmp_exporter" - "github.com/grafana/agent/internal/static/integrations/windows_exporter" - "github.com/grafana/river/scanner" "github.com/grafana/river/token/builder" - "github.com/prometheus/common/model" - prom_config "github.com/prometheus/prometheus/config" - "github.com/prometheus/prometheus/model/relabel" ) -type IntegrationsConfigBuilder struct { +type ConfigBuilder struct { f *builder.File diags *diag.Diagnostics cfg *config.Config globalCtx *GlobalContext } -func NewIntegrationsConfigBuilder(f *builder.File, diags *diag.Diagnostics, cfg *config.Config, globalCtx *GlobalContext) *IntegrationsConfigBuilder { - return &IntegrationsConfigBuilder{ +func NewConfigBuilder(f *builder.File, diags *diag.Diagnostics, cfg *config.Config, globalCtx *GlobalContext) *ConfigBuilder { + return &ConfigBuilder{ f: f, diags: diags, cfg: cfg, @@ -69,298 +24,11 @@ func NewIntegrationsConfigBuilder(f *builder.File, diags *diag.Diagnostics, cfg } } -func (b *IntegrationsConfigBuilder) Build() { +func (b *ConfigBuilder) Build() { b.appendLogging(b.cfg.Server) b.appendServer(b.cfg.Server) b.appendIntegrations() -} - -func (b *IntegrationsConfigBuilder) appendIntegrations() { - switch b.cfg.Integrations.Version { - case config.IntegrationsVersion1: - b.appendV1Integrations() - case config.IntegrationsVersion2: - b.appendV2Integrations() - default: - panic(fmt.Sprintf("unknown integrations version %d", b.cfg.Integrations.Version)) - } -} - -func (b *IntegrationsConfigBuilder) appendV1Integrations() { - for _, integration := range b.cfg.Integrations.ConfigV1.Integrations { - if !integration.Common.Enabled { - continue - } - - scrapeIntegration := b.cfg.Integrations.ConfigV1.ScrapeIntegrations - if integration.Common.ScrapeIntegration != nil { - scrapeIntegration = *integration.Common.ScrapeIntegration - } - - if !scrapeIntegration { - b.diags.Add(diag.SeverityLevelError, fmt.Sprintf("The converter does not support handling integrations which are not being scraped: %s.", integration.Name())) - continue - } - - var exports discovery.Exports - switch itg := integration.Config.(type) { - case *agent_exporter.Config: - exports = b.appendAgentExporter(itg) - case *apache_http.Config: - exports = b.appendApacheExporter(itg) - case *node_exporter.Config: - exports = b.appendNodeExporter(itg, nil) - case *blackbox_exporter.Config: - exports = b.appendBlackboxExporter(itg) - case 
*cloudwatch_exporter.Config: - exports = b.appendCloudwatchExporter(itg, nil) - case *consul_exporter.Config: - exports = b.appendConsulExporter(itg, nil) - case *dnsmasq_exporter.Config: - exports = b.appendDnsmasqExporter(itg, nil) - case *elasticsearch_exporter.Config: - exports = b.appendElasticsearchExporter(itg, nil) - case *gcp_exporter.Config: - exports = b.appendGcpExporter(itg, nil) - case *github_exporter.Config: - exports = b.appendGithubExporter(itg, nil) - case *kafka_exporter.Config: - exports = b.appendKafkaExporter(itg, nil) - case *memcached_exporter.Config: - exports = b.appendMemcachedExporter(itg, nil) - case *mongodb_exporter.Config: - exports = b.appendMongodbExporter(itg, nil) - case *mssql_exporter.Config: - exports = b.appendMssqlExporter(itg, nil) - case *mysqld_exporter.Config: - exports = b.appendMysqldExporter(itg, nil) - case *oracledb_exporter.Config: - exports = b.appendOracledbExporter(itg, nil) - case *postgres_exporter.Config: - exports = b.appendPostgresExporter(itg, nil) - case *process_exporter.Config: - exports = b.appendProcessExporter(itg, nil) - case *redis_exporter.Config: - exports = b.appendRedisExporter(itg, nil) - case *snmp_exporter.Config: - exports = b.appendSnmpExporter(itg) - case *snowflake_exporter.Config: - exports = b.appendSnowflakeExporter(itg, nil) - case *squid_exporter.Config: - exports = b.appendSquidExporter(itg, nil) - case *statsd_exporter.Config: - exports = b.appendStatsdExporter(itg, nil) - case *windows_exporter.Config: - exports = b.appendWindowsExporter(itg, nil) - case *azure_exporter.Config: - exports = b.appendAzureExporter(itg, nil) - case *cadvisor.Config: - exports = b.appendCadvisorExporter(itg, nil) - } - - if len(exports.Targets) > 0 { - b.appendExporter(&integration.Common, integration.Name(), exports.Targets) - } - } -} - -func (b *IntegrationsConfigBuilder) appendExporter(commonConfig *int_config.Common, name string, extraTargets []discovery.Target) { - var relabelConfigs []*relabel.Config - if commonConfig.InstanceKey != nil { - defaultConfig := relabel.DefaultRelabelConfig - relabelConfig := &defaultConfig - relabelConfig.TargetLabel = "instance" - relabelConfig.Replacement = *commonConfig.InstanceKey - - relabelConfigs = append(relabelConfigs, relabelConfig) - } - - if relabelConfig := b.getJobRelabelConfig(name, commonConfig.RelabelConfigs); relabelConfig != nil { - relabelConfigs = append(relabelConfigs, b.getJobRelabelConfig(name, commonConfig.RelabelConfigs)) - } - - scrapeConfig := prom_config.DefaultScrapeConfig - scrapeConfig.JobName = b.formatJobName(name, nil) - scrapeConfig.RelabelConfigs = append(commonConfig.RelabelConfigs, relabelConfigs...) 
- scrapeConfig.MetricRelabelConfigs = commonConfig.MetricRelabelConfigs - scrapeConfig.HTTPClientConfig.TLSConfig = b.cfg.Integrations.ConfigV1.TLSConfig - - scrapeConfig.ScrapeInterval = model.Duration(commonConfig.ScrapeInterval) - if commonConfig.ScrapeInterval == 0 { - scrapeConfig.ScrapeInterval = b.cfg.Integrations.ConfigV1.PrometheusGlobalConfig.ScrapeInterval - } - - scrapeConfig.ScrapeTimeout = model.Duration(commonConfig.ScrapeTimeout) - if commonConfig.ScrapeTimeout == 0 { - scrapeConfig.ScrapeTimeout = b.cfg.Integrations.ConfigV1.PrometheusGlobalConfig.ScrapeTimeout - } - - scrapeConfigs := []*prom_config.ScrapeConfig{&scrapeConfig} - - promConfig := &prom_config.Config{ - GlobalConfig: b.cfg.Integrations.ConfigV1.PrometheusGlobalConfig, - ScrapeConfigs: scrapeConfigs, - RemoteWriteConfigs: b.cfg.Integrations.ConfigV1.PrometheusRemoteWrite, - } - - if len(b.cfg.Integrations.ConfigV1.PrometheusRemoteWrite) == 0 { - b.diags.Add(diag.SeverityLevelError, "The converter does not support handling integrations which are not connected to a remote_write.") - } - - jobNameToCompLabelsFunc := func(jobName string) string { - return b.jobNameToCompLabel(jobName) - } - - b.diags.AddAll(prometheusconvert.AppendAllNested(b.f, promConfig, jobNameToCompLabelsFunc, extraTargets, b.globalCtx.RemoteWriteExports)) - b.globalCtx.InitializeRemoteWriteExports() -} - -func (b *IntegrationsConfigBuilder) appendV2Integrations() { - for _, integration := range b.cfg.Integrations.ConfigV2.Configs { - var exports discovery.Exports - var commonConfig common_v2.MetricsConfig - - switch itg := integration.(type) { - case *agent_exporter_v2.Config: - exports = b.appendAgentExporterV2(itg) - commonConfig = itg.Common - case *apache_exporter_v2.Config: - exports = b.appendApacheExporterV2(itg) - commonConfig = itg.Common - case *app_agent_receiver_v2.Config: - b.appendAppAgentReceiverV2(itg) - commonConfig = itg.Common - case *blackbox_exporter_v2.Config: - exports = b.appendBlackboxExporterV2(itg) - commonConfig = itg.Common - case *eventhandler_v2.Config: - b.appendEventHandlerV2(itg) - case *snmp_exporter_v2.Config: - exports = b.appendSnmpExporterV2(itg) - commonConfig = itg.Common - case *metricsutils_v2.ConfigShim: - commonConfig = itg.Common - switch v1_itg := itg.Orig.(type) { - case *azure_exporter.Config: - exports = b.appendAzureExporter(v1_itg, itg.Common.InstanceKey) - case *cadvisor.Config: - exports = b.appendCadvisorExporter(v1_itg, itg.Common.InstanceKey) - case *cloudwatch_exporter.Config: - exports = b.appendCloudwatchExporter(v1_itg, itg.Common.InstanceKey) - case *consul_exporter.Config: - exports = b.appendConsulExporter(v1_itg, itg.Common.InstanceKey) - case *dnsmasq_exporter.Config: - exports = b.appendDnsmasqExporter(v1_itg, itg.Common.InstanceKey) - case *elasticsearch_exporter.Config: - exports = b.appendElasticsearchExporter(v1_itg, itg.Common.InstanceKey) - case *gcp_exporter.Config: - exports = b.appendGcpExporter(v1_itg, itg.Common.InstanceKey) - case *github_exporter.Config: - exports = b.appendGithubExporter(v1_itg, itg.Common.InstanceKey) - case *kafka_exporter.Config: - exports = b.appendKafkaExporter(v1_itg, itg.Common.InstanceKey) - case *memcached_exporter.Config: - exports = b.appendMemcachedExporter(v1_itg, itg.Common.InstanceKey) - case *mongodb_exporter.Config: - exports = b.appendMongodbExporter(v1_itg, itg.Common.InstanceKey) - case *mssql_exporter.Config: - exports = b.appendMssqlExporter(v1_itg, itg.Common.InstanceKey) - case *mysqld_exporter.Config: - exports = 
b.appendMysqldExporter(v1_itg, itg.Common.InstanceKey) - case *node_exporter.Config: - exports = b.appendNodeExporter(v1_itg, itg.Common.InstanceKey) - case *oracledb_exporter.Config: - exports = b.appendOracledbExporter(v1_itg, itg.Common.InstanceKey) - case *postgres_exporter.Config: - exports = b.appendPostgresExporter(v1_itg, itg.Common.InstanceKey) - case *process_exporter.Config: - exports = b.appendProcessExporter(v1_itg, itg.Common.InstanceKey) - case *redis_exporter.Config: - exports = b.appendRedisExporter(v1_itg, itg.Common.InstanceKey) - case *snowflake_exporter.Config: - exports = b.appendSnowflakeExporter(v1_itg, itg.Common.InstanceKey) - case *squid_exporter.Config: - exports = b.appendSquidExporter(v1_itg, itg.Common.InstanceKey) - case *statsd_exporter.Config: - exports = b.appendStatsdExporter(v1_itg, itg.Common.InstanceKey) - case *windows_exporter.Config: - exports = b.appendWindowsExporter(v1_itg, itg.Common.InstanceKey) - } - } - - if len(exports.Targets) > 0 { - b.appendExporterV2(&commonConfig, integration.Name(), exports.Targets) - } - } -} - -func (b *IntegrationsConfigBuilder) appendExporterV2(commonConfig *common_v2.MetricsConfig, name string, extraTargets []discovery.Target) { - var relabelConfigs []*relabel.Config - - for _, extraLabel := range commonConfig.ExtraLabels { - defaultConfig := relabel.DefaultRelabelConfig - relabelConfig := &defaultConfig - relabelConfig.SourceLabels = []model.LabelName{"__address__"} - relabelConfig.TargetLabel = extraLabel.Name - relabelConfig.Replacement = extraLabel.Value - - relabelConfigs = append(relabelConfigs, relabelConfig) - } - - if commonConfig.InstanceKey != nil { - defaultConfig := relabel.DefaultRelabelConfig - relabelConfig := &defaultConfig - relabelConfig.TargetLabel = "instance" - relabelConfig.Replacement = *commonConfig.InstanceKey - - relabelConfigs = append(relabelConfigs, relabelConfig) - } - - if relabelConfig := b.getJobRelabelConfig(name, commonConfig.Autoscrape.RelabelConfigs); relabelConfig != nil { - relabelConfigs = append(relabelConfigs, relabelConfig) - } - - commonConfig.ApplyDefaults(b.cfg.Integrations.ConfigV2.Metrics.Autoscrape) - scrapeConfig := prom_config.DefaultScrapeConfig - scrapeConfig.JobName = b.formatJobName(name, commonConfig.InstanceKey) - scrapeConfig.RelabelConfigs = append(commonConfig.Autoscrape.RelabelConfigs, relabelConfigs...) - scrapeConfig.MetricRelabelConfigs = commonConfig.Autoscrape.MetricRelabelConfigs - scrapeConfig.ScrapeInterval = commonConfig.Autoscrape.ScrapeInterval - scrapeConfig.ScrapeTimeout = commonConfig.Autoscrape.ScrapeTimeout - - scrapeConfigs := []*prom_config.ScrapeConfig{&scrapeConfig} - - var remoteWriteExports *remotewrite.Exports - for _, metrics := range b.cfg.Metrics.Configs { - if metrics.Name == commonConfig.Autoscrape.MetricsInstance { - // This must match the name of the existing remote write config in the metrics config: - label, err := scanner.SanitizeIdentifier("metrics_" + metrics.Name) - if err != nil { - b.diags.Add(diag.SeverityLevelCritical, fmt.Sprintf("failed to sanitize job name: %s", err)) - } - - remoteWriteExports = &remotewrite.Exports{ - Receiver: common.ConvertAppendable{Expr: "prometheus.remote_write." 
+ label + ".receiver"}, - } - break - } - } - - if remoteWriteExports == nil { - b.diags.Add(diag.SeverityLevelCritical, fmt.Sprintf("integration %s is looking for an undefined metrics config: %s", name, commonConfig.Autoscrape.MetricsInstance)) - } - - promConfig := &prom_config.Config{ - GlobalConfig: b.cfg.Metrics.Global.Prometheus, - ScrapeConfigs: scrapeConfigs, - } - - jobNameToCompLabelsFunc := func(jobName string) string { - return b.jobNameToCompLabel(jobName) - } - - // Need to pass in the remote write reference from the metrics config here: - b.diags.AddAll(prometheusconvert.AppendAllNested(b.f, promConfig, jobNameToCompLabelsFunc, extraTargets, remoteWriteExports)) + b.appendTraces() } func splitByCommaNullOnEmpty(s string) []string { @@ -370,53 +38,3 @@ func splitByCommaNullOnEmpty(s string) []string { return strings.Split(s, ",") } - -func (b *IntegrationsConfigBuilder) jobNameToCompLabel(jobName string) string { - labelSuffix := strings.TrimPrefix(jobName, "integrations/") - if labelSuffix == "" { - return b.globalCtx.LabelPrefix - } - - return fmt.Sprintf("%s_%s", b.globalCtx.LabelPrefix, labelSuffix) -} - -func (b *IntegrationsConfigBuilder) formatJobName(name string, instanceKey *string) string { - jobName := b.globalCtx.LabelPrefix - if instanceKey != nil { - jobName = fmt.Sprintf("%s/%s", jobName, *instanceKey) - } else { - jobName = fmt.Sprintf("%s/%s", jobName, name) - } - - return jobName -} - -func (b *IntegrationsConfigBuilder) appendExporterBlock(args component.Arguments, configName string, instanceKey *string, exporterName string) discovery.Exports { - compLabel, err := scanner.SanitizeIdentifier(b.formatJobName(configName, instanceKey)) - if err != nil { - b.diags.Add(diag.SeverityLevelCritical, fmt.Sprintf("failed to sanitize job name: %s", err)) - } - - b.f.Body().AppendBlock(common.NewBlockWithOverride( - []string{"prometheus", "exporter", exporterName}, - compLabel, - args, - )) - - return common.NewDiscoveryExports(fmt.Sprintf("prometheus.exporter.%s.%s.targets", exporterName, compLabel)) -} - -func (b *IntegrationsConfigBuilder) getJobRelabelConfig(name string, relabelConfigs []*relabel.Config) *relabel.Config { - // Don't add a job relabel if that label is already targeted - for _, relabelConfig := range relabelConfigs { - if relabelConfig.TargetLabel == "job" { - return nil - } - } - - defaultConfig := relabel.DefaultRelabelConfig - relabelConfig := &defaultConfig - relabelConfig.TargetLabel = "job" - relabelConfig.Replacement = "integrations/" + name - return relabelConfig -} diff --git a/internal/converter/internal/staticconvert/internal/build/builder_integrations.go b/internal/converter/internal/staticconvert/internal/build/builder_integrations.go new file mode 100644 index 0000000000..1f268e299b --- /dev/null +++ b/internal/converter/internal/staticconvert/internal/build/builder_integrations.go @@ -0,0 +1,391 @@ +package build + +import ( + "fmt" + "strings" + + "github.com/grafana/agent/internal/component" + "github.com/grafana/agent/internal/component/discovery" + "github.com/grafana/agent/internal/component/prometheus/remotewrite" + "github.com/grafana/agent/internal/converter/diag" + "github.com/grafana/agent/internal/converter/internal/common" + "github.com/grafana/agent/internal/converter/internal/prometheusconvert" + "github.com/grafana/agent/internal/static/config" + agent_exporter "github.com/grafana/agent/internal/static/integrations/agent" + "github.com/grafana/agent/internal/static/integrations/apache_http" + 
"github.com/grafana/agent/internal/static/integrations/azure_exporter" + "github.com/grafana/agent/internal/static/integrations/blackbox_exporter" + "github.com/grafana/agent/internal/static/integrations/cadvisor" + "github.com/grafana/agent/internal/static/integrations/cloudwatch_exporter" + int_config "github.com/grafana/agent/internal/static/integrations/config" + "github.com/grafana/agent/internal/static/integrations/consul_exporter" + "github.com/grafana/agent/internal/static/integrations/dnsmasq_exporter" + "github.com/grafana/agent/internal/static/integrations/elasticsearch_exporter" + "github.com/grafana/agent/internal/static/integrations/gcp_exporter" + "github.com/grafana/agent/internal/static/integrations/github_exporter" + "github.com/grafana/agent/internal/static/integrations/kafka_exporter" + "github.com/grafana/agent/internal/static/integrations/memcached_exporter" + "github.com/grafana/agent/internal/static/integrations/mongodb_exporter" + mssql_exporter "github.com/grafana/agent/internal/static/integrations/mssql" + "github.com/grafana/agent/internal/static/integrations/mysqld_exporter" + "github.com/grafana/agent/internal/static/integrations/node_exporter" + "github.com/grafana/agent/internal/static/integrations/oracledb_exporter" + "github.com/grafana/agent/internal/static/integrations/postgres_exporter" + "github.com/grafana/agent/internal/static/integrations/process_exporter" + "github.com/grafana/agent/internal/static/integrations/redis_exporter" + "github.com/grafana/agent/internal/static/integrations/snmp_exporter" + "github.com/grafana/agent/internal/static/integrations/snowflake_exporter" + "github.com/grafana/agent/internal/static/integrations/squid_exporter" + "github.com/grafana/agent/internal/static/integrations/statsd_exporter" + agent_exporter_v2 "github.com/grafana/agent/internal/static/integrations/v2/agent" + apache_exporter_v2 "github.com/grafana/agent/internal/static/integrations/v2/apache_http" + app_agent_receiver_v2 "github.com/grafana/agent/internal/static/integrations/v2/app_agent_receiver" + blackbox_exporter_v2 "github.com/grafana/agent/internal/static/integrations/v2/blackbox_exporter" + common_v2 "github.com/grafana/agent/internal/static/integrations/v2/common" + eventhandler_v2 "github.com/grafana/agent/internal/static/integrations/v2/eventhandler" + metricsutils_v2 "github.com/grafana/agent/internal/static/integrations/v2/metricsutils" + snmp_exporter_v2 "github.com/grafana/agent/internal/static/integrations/v2/snmp_exporter" + "github.com/grafana/agent/internal/static/integrations/windows_exporter" + "github.com/grafana/river/scanner" + "github.com/prometheus/common/model" + prom_config "github.com/prometheus/prometheus/config" + "github.com/prometheus/prometheus/model/relabel" +) + +func (b *ConfigBuilder) appendIntegrations() { + switch b.cfg.Integrations.Version { + case config.IntegrationsVersion1: + b.appendV1Integrations() + case config.IntegrationsVersion2: + b.appendV2Integrations() + default: + panic(fmt.Sprintf("unknown integrations version %d", b.cfg.Integrations.Version)) + } +} + +func (b *ConfigBuilder) appendV1Integrations() { + for _, integration := range b.cfg.Integrations.ConfigV1.Integrations { + if !integration.Common.Enabled { + continue + } + + scrapeIntegration := b.cfg.Integrations.ConfigV1.ScrapeIntegrations + if integration.Common.ScrapeIntegration != nil { + scrapeIntegration = *integration.Common.ScrapeIntegration + } + + if !scrapeIntegration { + b.diags.Add(diag.SeverityLevelError, fmt.Sprintf("The converter 
does not support handling integrations which are not being scraped: %s.", integration.Name())) + continue + } + + var exports discovery.Exports + switch itg := integration.Config.(type) { + case *agent_exporter.Config: + exports = b.appendAgentExporter(itg) + case *apache_http.Config: + exports = b.appendApacheExporter(itg) + case *node_exporter.Config: + exports = b.appendNodeExporter(itg, nil) + case *blackbox_exporter.Config: + exports = b.appendBlackboxExporter(itg) + case *cloudwatch_exporter.Config: + exports = b.appendCloudwatchExporter(itg, nil) + case *consul_exporter.Config: + exports = b.appendConsulExporter(itg, nil) + case *dnsmasq_exporter.Config: + exports = b.appendDnsmasqExporter(itg, nil) + case *elasticsearch_exporter.Config: + exports = b.appendElasticsearchExporter(itg, nil) + case *gcp_exporter.Config: + exports = b.appendGcpExporter(itg, nil) + case *github_exporter.Config: + exports = b.appendGithubExporter(itg, nil) + case *kafka_exporter.Config: + exports = b.appendKafkaExporter(itg, nil) + case *memcached_exporter.Config: + exports = b.appendMemcachedExporter(itg, nil) + case *mongodb_exporter.Config: + exports = b.appendMongodbExporter(itg, nil) + case *mssql_exporter.Config: + exports = b.appendMssqlExporter(itg, nil) + case *mysqld_exporter.Config: + exports = b.appendMysqldExporter(itg, nil) + case *oracledb_exporter.Config: + exports = b.appendOracledbExporter(itg, nil) + case *postgres_exporter.Config: + exports = b.appendPostgresExporter(itg, nil) + case *process_exporter.Config: + exports = b.appendProcessExporter(itg, nil) + case *redis_exporter.Config: + exports = b.appendRedisExporter(itg, nil) + case *snmp_exporter.Config: + exports = b.appendSnmpExporter(itg) + case *snowflake_exporter.Config: + exports = b.appendSnowflakeExporter(itg, nil) + case *squid_exporter.Config: + exports = b.appendSquidExporter(itg, nil) + case *statsd_exporter.Config: + exports = b.appendStatsdExporter(itg, nil) + case *windows_exporter.Config: + exports = b.appendWindowsExporter(itg, nil) + case *azure_exporter.Config: + exports = b.appendAzureExporter(itg, nil) + case *cadvisor.Config: + exports = b.appendCadvisorExporter(itg, nil) + } + + if len(exports.Targets) > 0 { + b.appendExporter(&integration.Common, integration.Name(), exports.Targets) + } + } +} + +func (b *ConfigBuilder) appendExporter(commonConfig *int_config.Common, name string, extraTargets []discovery.Target) { + var relabelConfigs []*relabel.Config + if commonConfig.InstanceKey != nil { + defaultConfig := relabel.DefaultRelabelConfig + relabelConfig := &defaultConfig + relabelConfig.TargetLabel = "instance" + relabelConfig.Replacement = *commonConfig.InstanceKey + + relabelConfigs = append(relabelConfigs, relabelConfig) + } + + if relabelConfig := b.getJobRelabelConfig(name, commonConfig.RelabelConfigs); relabelConfig != nil { + relabelConfigs = append(relabelConfigs, relabelConfig) + } + + scrapeConfig := prom_config.DefaultScrapeConfig + scrapeConfig.JobName = b.formatJobName(name, nil) + scrapeConfig.RelabelConfigs = append(commonConfig.RelabelConfigs, relabelConfigs...)
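One idiom worth noting in `appendExporter` above: `relabel.DefaultRelabelConfig` is a shared package-level value, so every use takes a value copy before mutating it. Isolated, with the job relabel as the example (mirroring `getJobRelabelConfig` later in this file):

```go
package example

import "github.com/prometheus/prometheus/model/relabel"

// jobRelabel shows the copy-then-mutate idiom used throughout the builder:
// copying relabel.DefaultRelabelConfig by value keeps the shared default
// untouched while the local copy is customized.
func jobRelabel(name string) *relabel.Config {
	defaultConfig := relabel.DefaultRelabelConfig
	relabelConfig := &defaultConfig
	relabelConfig.TargetLabel = "job"
	relabelConfig.Replacement = "integrations/" + name
	return relabelConfig
}
```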
+ scrapeConfig.MetricRelabelConfigs = commonConfig.MetricRelabelConfigs + scrapeConfig.HTTPClientConfig.TLSConfig = b.cfg.Integrations.ConfigV1.TLSConfig + + scrapeConfig.ScrapeInterval = model.Duration(commonConfig.ScrapeInterval) + if commonConfig.ScrapeInterval == 0 { + scrapeConfig.ScrapeInterval = b.cfg.Integrations.ConfigV1.PrometheusGlobalConfig.ScrapeInterval + } + + scrapeConfig.ScrapeTimeout = model.Duration(commonConfig.ScrapeTimeout) + if commonConfig.ScrapeTimeout == 0 { + scrapeConfig.ScrapeTimeout = b.cfg.Integrations.ConfigV1.PrometheusGlobalConfig.ScrapeTimeout + } + + scrapeConfigs := []*prom_config.ScrapeConfig{&scrapeConfig} + + promConfig := &prom_config.Config{ + GlobalConfig: b.cfg.Integrations.ConfigV1.PrometheusGlobalConfig, + ScrapeConfigs: scrapeConfigs, + RemoteWriteConfigs: b.cfg.Integrations.ConfigV1.PrometheusRemoteWrite, + } + + if len(b.cfg.Integrations.ConfigV1.PrometheusRemoteWrite) == 0 { + b.diags.Add(diag.SeverityLevelError, "The converter does not support handling integrations which are not connected to a remote_write.") + } + + jobNameToCompLabelsFunc := func(jobName string) string { + return b.jobNameToCompLabel(jobName) + } + + b.diags.AddAll(prometheusconvert.AppendAllNested(b.f, promConfig, jobNameToCompLabelsFunc, extraTargets, b.globalCtx.IntegrationsRemoteWriteExports)) + b.globalCtx.InitializeIntegrationsRemoteWriteExports() +} + +func (b *ConfigBuilder) appendV2Integrations() { + for _, integration := range b.cfg.Integrations.ConfigV2.Configs { + var exports discovery.Exports + var commonConfig common_v2.MetricsConfig + + switch itg := integration.(type) { + case *agent_exporter_v2.Config: + exports = b.appendAgentExporterV2(itg) + commonConfig = itg.Common + case *apache_exporter_v2.Config: + exports = b.appendApacheExporterV2(itg) + commonConfig = itg.Common + case *app_agent_receiver_v2.Config: + b.appendAppAgentReceiverV2(itg) + commonConfig = itg.Common + case *blackbox_exporter_v2.Config: + exports = b.appendBlackboxExporterV2(itg) + commonConfig = itg.Common + case *eventhandler_v2.Config: + b.appendEventHandlerV2(itg) + case *snmp_exporter_v2.Config: + exports = b.appendSnmpExporterV2(itg) + commonConfig = itg.Common + case *metricsutils_v2.ConfigShim: + commonConfig = itg.Common + switch v1_itg := itg.Orig.(type) { + case *azure_exporter.Config: + exports = b.appendAzureExporter(v1_itg, itg.Common.InstanceKey) + case *cadvisor.Config: + exports = b.appendCadvisorExporter(v1_itg, itg.Common.InstanceKey) + case *cloudwatch_exporter.Config: + exports = b.appendCloudwatchExporter(v1_itg, itg.Common.InstanceKey) + case *consul_exporter.Config: + exports = b.appendConsulExporter(v1_itg, itg.Common.InstanceKey) + case *dnsmasq_exporter.Config: + exports = b.appendDnsmasqExporter(v1_itg, itg.Common.InstanceKey) + case *elasticsearch_exporter.Config: + exports = b.appendElasticsearchExporter(v1_itg, itg.Common.InstanceKey) + case *gcp_exporter.Config: + exports = b.appendGcpExporter(v1_itg, itg.Common.InstanceKey) + case *github_exporter.Config: + exports = b.appendGithubExporter(v1_itg, itg.Common.InstanceKey) + case *kafka_exporter.Config: + exports = b.appendKafkaExporter(v1_itg, itg.Common.InstanceKey) + case *memcached_exporter.Config: + exports = b.appendMemcachedExporter(v1_itg, itg.Common.InstanceKey) + case *mongodb_exporter.Config: + exports = b.appendMongodbExporter(v1_itg, itg.Common.InstanceKey) + case *mssql_exporter.Config: + exports = b.appendMssqlExporter(v1_itg, itg.Common.InstanceKey) + case *mysqld_exporter.Config: + 
exports = b.appendMysqldExporter(v1_itg, itg.Common.InstanceKey) + case *node_exporter.Config: + exports = b.appendNodeExporter(v1_itg, itg.Common.InstanceKey) + case *oracledb_exporter.Config: + exports = b.appendOracledbExporter(v1_itg, itg.Common.InstanceKey) + case *postgres_exporter.Config: + exports = b.appendPostgresExporter(v1_itg, itg.Common.InstanceKey) + case *process_exporter.Config: + exports = b.appendProcessExporter(v1_itg, itg.Common.InstanceKey) + case *redis_exporter.Config: + exports = b.appendRedisExporter(v1_itg, itg.Common.InstanceKey) + case *snowflake_exporter.Config: + exports = b.appendSnowflakeExporter(v1_itg, itg.Common.InstanceKey) + case *squid_exporter.Config: + exports = b.appendSquidExporter(v1_itg, itg.Common.InstanceKey) + case *statsd_exporter.Config: + exports = b.appendStatsdExporter(v1_itg, itg.Common.InstanceKey) + case *windows_exporter.Config: + exports = b.appendWindowsExporter(v1_itg, itg.Common.InstanceKey) + } + } + + if len(exports.Targets) > 0 { + b.appendExporterV2(&commonConfig, integration.Name(), exports.Targets) + } + } +} + +func (b *ConfigBuilder) appendExporterV2(commonConfig *common_v2.MetricsConfig, name string, extraTargets []discovery.Target) { + var relabelConfigs []*relabel.Config + + for _, extraLabel := range commonConfig.ExtraLabels { + defaultConfig := relabel.DefaultRelabelConfig + relabelConfig := &defaultConfig + relabelConfig.SourceLabels = []model.LabelName{"__address__"} + relabelConfig.TargetLabel = extraLabel.Name + relabelConfig.Replacement = extraLabel.Value + + relabelConfigs = append(relabelConfigs, relabelConfig) + } + + if commonConfig.InstanceKey != nil { + defaultConfig := relabel.DefaultRelabelConfig + relabelConfig := &defaultConfig + relabelConfig.TargetLabel = "instance" + relabelConfig.Replacement = *commonConfig.InstanceKey + + relabelConfigs = append(relabelConfigs, relabelConfig) + } + + if relabelConfig := b.getJobRelabelConfig(name, commonConfig.Autoscrape.RelabelConfigs); relabelConfig != nil { + relabelConfigs = append(relabelConfigs, relabelConfig) + } + + commonConfig.ApplyDefaults(b.cfg.Integrations.ConfigV2.Metrics.Autoscrape) + scrapeConfig := prom_config.DefaultScrapeConfig + scrapeConfig.JobName = b.formatJobName(name, commonConfig.InstanceKey) + scrapeConfig.RelabelConfigs = append(commonConfig.Autoscrape.RelabelConfigs, relabelConfigs...) + scrapeConfig.MetricRelabelConfigs = commonConfig.Autoscrape.MetricRelabelConfigs + scrapeConfig.ScrapeInterval = commonConfig.Autoscrape.ScrapeInterval + scrapeConfig.ScrapeTimeout = commonConfig.Autoscrape.ScrapeTimeout + + scrapeConfigs := []*prom_config.ScrapeConfig{&scrapeConfig} + + var remoteWriteExports *remotewrite.Exports + for _, metrics := range b.cfg.Metrics.Configs { + if metrics.Name == commonConfig.Autoscrape.MetricsInstance { + // This must match the name of the existing remote write config in the metrics config: + label, err := scanner.SanitizeIdentifier("metrics_" + metrics.Name) + if err != nil { + b.diags.Add(diag.SeverityLevelCritical, fmt.Sprintf("failed to sanitize job name: %s", err)) + } + + remoteWriteExports = &remotewrite.Exports{ + Receiver: common.ConvertAppendable{Expr: "prometheus.remote_write." 
+ label + ".receiver"}, + } + break + } + } + + if remoteWriteExports == nil { + b.diags.Add(diag.SeverityLevelCritical, fmt.Sprintf("integration %s is looking for an undefined metrics config: %s", name, commonConfig.Autoscrape.MetricsInstance)) + } + + promConfig := &prom_config.Config{ + GlobalConfig: b.cfg.Metrics.Global.Prometheus, + ScrapeConfigs: scrapeConfigs, + } + + jobNameToCompLabelsFunc := func(jobName string) string { + return b.jobNameToCompLabel(jobName) + } + + // Need to pass in the remote write reference from the metrics config here: + b.diags.AddAll(prometheusconvert.AppendAllNested(b.f, promConfig, jobNameToCompLabelsFunc, extraTargets, remoteWriteExports)) +} + +func (b *ConfigBuilder) jobNameToCompLabel(jobName string) string { + labelSuffix := strings.TrimPrefix(jobName, "integrations/") + if labelSuffix == "" { + return b.globalCtx.IntegrationsLabelPrefix + } + + return fmt.Sprintf("%s_%s", b.globalCtx.IntegrationsLabelPrefix, labelSuffix) +} + +func (b *ConfigBuilder) appendExporterBlock(args component.Arguments, configName string, instanceKey *string, exporterName string) discovery.Exports { + compLabel, err := scanner.SanitizeIdentifier(b.formatJobName(configName, instanceKey)) + if err != nil { + b.diags.Add(diag.SeverityLevelCritical, fmt.Sprintf("failed to sanitize job name: %s", err)) + } + + b.f.Body().AppendBlock(common.NewBlockWithOverride( + []string{"prometheus", "exporter", exporterName}, + compLabel, + args, + )) + + return common.NewDiscoveryExports(fmt.Sprintf("prometheus.exporter.%s.%s.targets", exporterName, compLabel)) +} + +func (b *ConfigBuilder) getJobRelabelConfig(name string, relabelConfigs []*relabel.Config) *relabel.Config { + // Don't add a job relabel if that label is already targeted + for _, relabelConfig := range relabelConfigs { + if relabelConfig.TargetLabel == "job" { + return nil + } + } + + defaultConfig := relabel.DefaultRelabelConfig + relabelConfig := &defaultConfig + relabelConfig.TargetLabel = "job" + relabelConfig.Replacement = "integrations/" + name + return relabelConfig +} + +func (b *ConfigBuilder) formatJobName(name string, instanceKey *string) string { + jobName := b.globalCtx.IntegrationsLabelPrefix + if instanceKey != nil { + jobName = fmt.Sprintf("%s/%s", jobName, *instanceKey) + } else { + jobName = fmt.Sprintf("%s/%s", jobName, name) + } + + return jobName +} diff --git a/internal/converter/internal/staticconvert/internal/build/logging.go b/internal/converter/internal/staticconvert/internal/build/builder_logging.go similarity index 88% rename from internal/converter/internal/staticconvert/internal/build/logging.go rename to internal/converter/internal/staticconvert/internal/build/builder_logging.go index f64eb11de4..cda77f4849 100644 --- a/internal/converter/internal/staticconvert/internal/build/logging.go +++ b/internal/converter/internal/staticconvert/internal/build/builder_logging.go @@ -8,7 +8,7 @@ import ( "github.com/grafana/agent/internal/static/server" ) -func (b *IntegrationsConfigBuilder) appendLogging(config *server.Config) { +func (b *ConfigBuilder) appendLogging(config *server.Config) { args := toLogging(config) if !reflect.DeepEqual(*args, logging.DefaultOptions) { b.f.Body().AppendBlock(common.NewBlockWithOverride( diff --git a/internal/converter/internal/staticconvert/internal/build/server.go b/internal/converter/internal/staticconvert/internal/build/builder_server.go similarity index 97% rename from internal/converter/internal/staticconvert/internal/build/server.go rename to 
internal/converter/internal/staticconvert/internal/build/builder_server.go index 187f10f6c7..be742a448b 100644 --- a/internal/converter/internal/staticconvert/internal/build/server.go +++ b/internal/converter/internal/staticconvert/internal/build/builder_server.go @@ -8,7 +8,7 @@ import ( "github.com/grafana/agent/internal/static/server" ) -func (b *IntegrationsConfigBuilder) appendServer(config *server.Config) { +func (b *ConfigBuilder) appendServer(config *server.Config) { args := toServer(config) if !reflect.DeepEqual(*args.TLS, http.TLSArguments{}) { b.f.Body().AppendBlock(common.NewBlockWithOverride( diff --git a/internal/converter/internal/staticconvert/internal/build/builder_traces.go b/internal/converter/internal/staticconvert/internal/build/builder_traces.go new file mode 100644 index 0000000000..b37b658924 --- /dev/null +++ b/internal/converter/internal/staticconvert/internal/build/builder_traces.go @@ -0,0 +1,96 @@ +package build + +import ( + "fmt" + "reflect" + + "github.com/grafana/agent/internal/converter/diag" + "github.com/grafana/agent/internal/converter/internal/otelcolconvert" + "github.com/grafana/agent/internal/static/traces" + otel_component "go.opentelemetry.io/collector/component" + "go.opentelemetry.io/collector/exporter/loggingexporter" + "go.opentelemetry.io/collector/otelcol" +) + +func (b *ConfigBuilder) appendTraces() { + if reflect.DeepEqual(b.cfg.Traces, traces.Config{}) { + return + } + + for _, cfg := range b.cfg.Traces.Configs { + otelCfg, err := cfg.OtelConfig() + if err != nil { + b.diags.Add(diag.SeverityLevelCritical, fmt.Sprintf("failed to load otelConfig from agent traces config: %s", err)) + continue + } + + // Remove the push receiver which is an implementation detail for static mode and unnecessary for the otel config. + removeReceiver(otelCfg, "traces", "push_receiver") + + b.translateAutomaticLogging(otelCfg, cfg) + + // Only prefix component labels if we are doing more than 1 trace config. + labelPrefix := "" + if len(b.cfg.Traces.Configs) > 1 { + labelPrefix = cfg.Name + } + b.diags.AddAll(otelcolconvert.AppendConfig(b.f, otelCfg, labelPrefix)) + } +} + +func (b *ConfigBuilder) translateAutomaticLogging(otelCfg *otelcol.Config, cfg traces.InstanceConfig) { + if _, ok := otelCfg.Processors[otel_component.NewID("automatic_logging")]; !ok { + return + } + + if cfg.AutomaticLogging.Backend == "stdout" { + b.diags.Add(diag.SeverityLevelWarn, "automatic_logging for traces has no direct flow equivalent. "+ + "A best effort translation has been made to otelcol.exporter.logging but the behavior will differ.") + } else { + b.diags.Add(diag.SeverityLevelError, "automatic_logging for traces has no direct flow equivalent. "+ + "A best effort translation can be made which only outputs to stdout and not directly to loki by bypassing errors.") + } + + // Add the logging exporter to the otel config with default values + otelCfg.Exporters[otel_component.NewID("logging")] = loggingexporter.NewFactory().CreateDefaultConfig() + + // Add the logging exporter to all pipelines + for _, pipeline := range otelCfg.Service.Pipelines { + pipeline.Exporters = append(pipeline.Exporters, otel_component.NewID("logging")) + } + + // Remove the custom automatic_logging processor + removeProcessor(otelCfg, "traces", "automatic_logging") +} + +// removeReceiver removes a receiver from the otel config for a specific pipeline type. 
+func removeReceiver(otelCfg *otelcol.Config, pipelineType otel_component.Type, receiverType otel_component.Type) { + if _, ok := otelCfg.Receivers[otel_component.NewID(receiverType)]; !ok { + return + } + + delete(otelCfg.Receivers, otel_component.NewID(receiverType)) + spr := make([]otel_component.ID, 0, len(otelCfg.Service.Pipelines[otel_component.NewID(pipelineType)].Receivers)-1) + for _, r := range otelCfg.Service.Pipelines[otel_component.NewID(pipelineType)].Receivers { + if r != otel_component.NewID(receiverType) { + spr = append(spr, r) + } + } + otelCfg.Service.Pipelines[otel_component.NewID(pipelineType)].Receivers = spr +} + +// removeProcessor removes a processor from the otel config for a specific pipeline type. +func removeProcessor(otelCfg *otelcol.Config, pipelineType otel_component.Type, processorType otel_component.Type) { + if _, ok := otelCfg.Processors[otel_component.NewID(processorType)]; !ok { + return + } + + delete(otelCfg.Processors, otel_component.NewID(processorType)) + spr := make([]otel_component.ID, 0, len(otelCfg.Service.Pipelines[otel_component.NewID(pipelineType)].Processors)-1) + for _, r := range otelCfg.Service.Pipelines[otel_component.NewID(pipelineType)].Processors { + if r != otel_component.NewID(processorType) { + spr = append(spr, r) + } + } + otelCfg.Service.Pipelines[otel_component.NewID(pipelineType)].Processors = spr +} diff --git a/internal/converter/internal/staticconvert/internal/build/cadvisor_exporter.go b/internal/converter/internal/staticconvert/internal/build/cadvisor_exporter.go index 00c8ab7089..eab148d9f0 100644 --- a/internal/converter/internal/staticconvert/internal/build/cadvisor_exporter.go +++ b/internal/converter/internal/staticconvert/internal/build/cadvisor_exporter.go @@ -8,7 +8,7 @@ import ( cadvisor_integration "github.com/grafana/agent/internal/static/integrations/cadvisor" ) -func (b *IntegrationsConfigBuilder) appendCadvisorExporter(config *cadvisor_integration.Config, instanceKey *string) discovery.Exports { +func (b *ConfigBuilder) appendCadvisorExporter(config *cadvisor_integration.Config, instanceKey *string) discovery.Exports { args := toCadvisorExporter(config) return b.appendExporterBlock(args, config.Name(), instanceKey, "cadvisor") } diff --git a/internal/converter/internal/staticconvert/internal/build/cloudwatch_exporter.go b/internal/converter/internal/staticconvert/internal/build/cloudwatch_exporter.go index d288c090e7..3e35cc4d4e 100644 --- a/internal/converter/internal/staticconvert/internal/build/cloudwatch_exporter.go +++ b/internal/converter/internal/staticconvert/internal/build/cloudwatch_exporter.go @@ -6,7 +6,7 @@ import ( "github.com/grafana/agent/internal/static/integrations/cloudwatch_exporter" ) -func (b *IntegrationsConfigBuilder) appendCloudwatchExporter(config *cloudwatch_exporter.Config, instanceKey *string) discovery.Exports { +func (b *ConfigBuilder) appendCloudwatchExporter(config *cloudwatch_exporter.Config, instanceKey *string) discovery.Exports { args := toCloudwatchExporter(config) return b.appendExporterBlock(args, config.Name(), instanceKey, "cloudwatch") } diff --git a/internal/converter/internal/staticconvert/internal/build/consul_exporter.go b/internal/converter/internal/staticconvert/internal/build/consul_exporter.go index 8281aa84c4..e6c5231a9c 100644 --- a/internal/converter/internal/staticconvert/internal/build/consul_exporter.go +++ b/internal/converter/internal/staticconvert/internal/build/consul_exporter.go @@ -6,7 +6,7 @@ import ( 
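Editorial aside: removeReceiver and removeProcessor above rebuild a pipeline's []otel_component.ID slice with near-identical loops. A sketch of a shared helper that could express the filtering once; the removeID name is illustrative and not part of this change:

package build

import otel_component "go.opentelemetry.io/collector/component"

// removeID returns ids with every occurrence of drop filtered out,
// preserving the order of the remaining IDs. Both removeReceiver and
// removeProcessor could delegate their slice rebuilding to it.
func removeID(ids []otel_component.ID, drop otel_component.ID) []otel_component.ID {
	out := make([]otel_component.ID, 0, len(ids))
	for _, id := range ids {
		if id != drop {
			out = append(out, id)
		}
	}
	return out
}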
"github.com/grafana/agent/internal/static/integrations/consul_exporter" ) -func (b *IntegrationsConfigBuilder) appendConsulExporter(config *consul_exporter.Config, instanceKey *string) discovery.Exports { +func (b *ConfigBuilder) appendConsulExporter(config *consul_exporter.Config, instanceKey *string) discovery.Exports { args := toConsulExporter(config) return b.appendExporterBlock(args, config.Name(), instanceKey, "consul") } diff --git a/internal/converter/internal/staticconvert/internal/build/dnsmasq_exporter.go b/internal/converter/internal/staticconvert/internal/build/dnsmasq_exporter.go index 1bcc43071e..a3cc9edfdd 100644 --- a/internal/converter/internal/staticconvert/internal/build/dnsmasq_exporter.go +++ b/internal/converter/internal/staticconvert/internal/build/dnsmasq_exporter.go @@ -6,7 +6,7 @@ import ( "github.com/grafana/agent/internal/static/integrations/dnsmasq_exporter" ) -func (b *IntegrationsConfigBuilder) appendDnsmasqExporter(config *dnsmasq_exporter.Config, instanceKey *string) discovery.Exports { +func (b *ConfigBuilder) appendDnsmasqExporter(config *dnsmasq_exporter.Config, instanceKey *string) discovery.Exports { args := toDnsmasqExporter(config) return b.appendExporterBlock(args, config.Name(), instanceKey, "dnsmasq") } diff --git a/internal/converter/internal/staticconvert/internal/build/elasticsearch_exporter.go b/internal/converter/internal/staticconvert/internal/build/elasticsearch_exporter.go index 21fc667211..4b39f46ca3 100644 --- a/internal/converter/internal/staticconvert/internal/build/elasticsearch_exporter.go +++ b/internal/converter/internal/staticconvert/internal/build/elasticsearch_exporter.go @@ -8,7 +8,7 @@ import ( "github.com/grafana/river/rivertypes" ) -func (b *IntegrationsConfigBuilder) appendElasticsearchExporter(config *elasticsearch_exporter.Config, instanceKey *string) discovery.Exports { +func (b *ConfigBuilder) appendElasticsearchExporter(config *elasticsearch_exporter.Config, instanceKey *string) discovery.Exports { args := toElasticsearchExporter(config) return b.appendExporterBlock(args, config.Name(), instanceKey, "elasticsearch") } diff --git a/internal/converter/internal/staticconvert/internal/build/eventhandler.go b/internal/converter/internal/staticconvert/internal/build/eventhandler.go index ef0dad2743..2381a23b00 100644 --- a/internal/converter/internal/staticconvert/internal/build/eventhandler.go +++ b/internal/converter/internal/staticconvert/internal/build/eventhandler.go @@ -13,7 +13,7 @@ import ( "github.com/grafana/river/scanner" ) -func (b *IntegrationsConfigBuilder) appendEventHandlerV2(config *eventhandler_v2.Config) { +func (b *ConfigBuilder) appendEventHandlerV2(config *eventhandler_v2.Config) { compLabel, err := scanner.SanitizeIdentifier(b.formatJobName(config.Name(), nil)) if err != nil { b.diags.Add(diag.SeverityLevelCritical, fmt.Sprintf("failed to sanitize job name: %s", err)) @@ -44,7 +44,7 @@ func (b *IntegrationsConfigBuilder) appendEventHandlerV2(config *eventhandler_v2 )) } -func (b *IntegrationsConfigBuilder) injectExtraLabels(config *eventhandler_v2.Config, receiver common.ConvertLogsReceiver, compLabel string) common.ConvertLogsReceiver { +func (b *ConfigBuilder) injectExtraLabels(config *eventhandler_v2.Config, receiver common.ConvertLogsReceiver, compLabel string) common.ConvertLogsReceiver { var relabelConfigs []*flow_relabel.Config for _, extraLabel := range config.ExtraLabels { defaultConfig := flow_relabel.DefaultRelabelConfig diff --git 
a/internal/converter/internal/staticconvert/internal/build/gcp_exporter.go b/internal/converter/internal/staticconvert/internal/build/gcp_exporter.go index 823d4bfcca..27a984fac0 100644 --- a/internal/converter/internal/staticconvert/internal/build/gcp_exporter.go +++ b/internal/converter/internal/staticconvert/internal/build/gcp_exporter.go @@ -6,7 +6,7 @@ import ( "github.com/grafana/agent/internal/static/integrations/gcp_exporter" ) -func (b *IntegrationsConfigBuilder) appendGcpExporter(config *gcp_exporter.Config, instanceKey *string) discovery.Exports { +func (b *ConfigBuilder) appendGcpExporter(config *gcp_exporter.Config, instanceKey *string) discovery.Exports { args := toGcpExporter(config) return b.appendExporterBlock(args, config.Name(), instanceKey, "gcp") } diff --git a/internal/converter/internal/staticconvert/internal/build/github_exporter.go b/internal/converter/internal/staticconvert/internal/build/github_exporter.go index 5531b554c8..8759eb27c8 100644 --- a/internal/converter/internal/staticconvert/internal/build/github_exporter.go +++ b/internal/converter/internal/staticconvert/internal/build/github_exporter.go @@ -7,7 +7,7 @@ import ( "github.com/grafana/river/rivertypes" ) -func (b *IntegrationsConfigBuilder) appendGithubExporter(config *github_exporter.Config, instanceKey *string) discovery.Exports { +func (b *ConfigBuilder) appendGithubExporter(config *github_exporter.Config, instanceKey *string) discovery.Exports { args := toGithubExporter(config) return b.appendExporterBlock(args, config.Name(), instanceKey, "github") } diff --git a/internal/converter/internal/staticconvert/internal/build/global_context.go b/internal/converter/internal/staticconvert/internal/build/global_context.go index 270e84fc33..9ffaceaa6a 100644 --- a/internal/converter/internal/staticconvert/internal/build/global_context.go +++ b/internal/converter/internal/staticconvert/internal/build/global_context.go @@ -6,14 +6,14 @@ import ( ) type GlobalContext struct { - LabelPrefix string - RemoteWriteExports *remotewrite.Exports + IntegrationsLabelPrefix string + IntegrationsRemoteWriteExports *remotewrite.Exports } -func (g *GlobalContext) InitializeRemoteWriteExports() { - if g.RemoteWriteExports == nil { - g.RemoteWriteExports = &remotewrite.Exports{ - Receiver: common.ConvertAppendable{Expr: "prometheus.remote_write." + g.LabelPrefix + ".receiver"}, +func (g *GlobalContext) InitializeIntegrationsRemoteWriteExports() { + if g.IntegrationsRemoteWriteExports == nil { + g.IntegrationsRemoteWriteExports = &remotewrite.Exports{ + Receiver: common.ConvertAppendable{Expr: "prometheus.remote_write." 
+ g.IntegrationsLabelPrefix + ".receiver"}, } } } diff --git a/internal/converter/internal/staticconvert/internal/build/kafka_exporter.go b/internal/converter/internal/staticconvert/internal/build/kafka_exporter.go index b5d180a7f8..b67aab7537 100644 --- a/internal/converter/internal/staticconvert/internal/build/kafka_exporter.go +++ b/internal/converter/internal/staticconvert/internal/build/kafka_exporter.go @@ -7,7 +7,7 @@ import ( "github.com/grafana/river/rivertypes" ) -func (b *IntegrationsConfigBuilder) appendKafkaExporter(config *kafka_exporter.Config, instanceKey *string) discovery.Exports { +func (b *ConfigBuilder) appendKafkaExporter(config *kafka_exporter.Config, instanceKey *string) discovery.Exports { args := toKafkaExporter(config) return b.appendExporterBlock(args, config.Name(), instanceKey, "kafka") } diff --git a/internal/converter/internal/staticconvert/internal/build/memcached_exporter.go b/internal/converter/internal/staticconvert/internal/build/memcached_exporter.go index 46176c2348..fd9b428aab 100644 --- a/internal/converter/internal/staticconvert/internal/build/memcached_exporter.go +++ b/internal/converter/internal/staticconvert/internal/build/memcached_exporter.go @@ -7,7 +7,7 @@ import ( "github.com/grafana/agent/internal/static/integrations/memcached_exporter" ) -func (b *IntegrationsConfigBuilder) appendMemcachedExporter(config *memcached_exporter.Config, instanceKey *string) discovery.Exports { +func (b *ConfigBuilder) appendMemcachedExporter(config *memcached_exporter.Config, instanceKey *string) discovery.Exports { args := toMemcachedExporter(config) return b.appendExporterBlock(args, config.Name(), instanceKey, "memcached") } diff --git a/internal/converter/internal/staticconvert/internal/build/mongodb_exporter.go b/internal/converter/internal/staticconvert/internal/build/mongodb_exporter.go index 36839e97c2..5dfa296770 100644 --- a/internal/converter/internal/staticconvert/internal/build/mongodb_exporter.go +++ b/internal/converter/internal/staticconvert/internal/build/mongodb_exporter.go @@ -7,7 +7,7 @@ import ( "github.com/grafana/river/rivertypes" ) -func (b *IntegrationsConfigBuilder) appendMongodbExporter(config *mongodb_exporter.Config, instanceKey *string) discovery.Exports { +func (b *ConfigBuilder) appendMongodbExporter(config *mongodb_exporter.Config, instanceKey *string) discovery.Exports { args := toMongodbExporter(config) return b.appendExporterBlock(args, config.Name(), instanceKey, "mongodb") } diff --git a/internal/converter/internal/staticconvert/internal/build/mssql_exporter.go b/internal/converter/internal/staticconvert/internal/build/mssql_exporter.go index 87ef828edf..388e93cc7f 100644 --- a/internal/converter/internal/staticconvert/internal/build/mssql_exporter.go +++ b/internal/converter/internal/staticconvert/internal/build/mssql_exporter.go @@ -7,7 +7,7 @@ import ( "github.com/grafana/river/rivertypes" ) -func (b *IntegrationsConfigBuilder) appendMssqlExporter(config *mssql_exporter.Config, instanceKey *string) discovery.Exports { +func (b *ConfigBuilder) appendMssqlExporter(config *mssql_exporter.Config, instanceKey *string) discovery.Exports { args := toMssqlExporter(config) return b.appendExporterBlock(args, config.Name(), instanceKey, "mssql") } diff --git a/internal/converter/internal/staticconvert/internal/build/mysqld_exporter.go b/internal/converter/internal/staticconvert/internal/build/mysqld_exporter.go index 4694e934a6..f7de2572d5 100644 --- a/internal/converter/internal/staticconvert/internal/build/mysqld_exporter.go 
+++ b/internal/converter/internal/staticconvert/internal/build/mysqld_exporter.go @@ -7,7 +7,7 @@ import ( "github.com/grafana/river/rivertypes" ) -func (b *IntegrationsConfigBuilder) appendMysqldExporter(config *mysqld_exporter.Config, instanceKey *string) discovery.Exports { +func (b *ConfigBuilder) appendMysqldExporter(config *mysqld_exporter.Config, instanceKey *string) discovery.Exports { args := toMysqldExporter(config) return b.appendExporterBlock(args, config.Name(), instanceKey, "mysql") } diff --git a/internal/converter/internal/staticconvert/internal/build/node_exporter.go b/internal/converter/internal/staticconvert/internal/build/node_exporter.go index 32b0a57e59..59a4762f30 100644 --- a/internal/converter/internal/staticconvert/internal/build/node_exporter.go +++ b/internal/converter/internal/staticconvert/internal/build/node_exporter.go @@ -6,7 +6,7 @@ import ( "github.com/grafana/agent/internal/static/integrations/node_exporter" ) -func (b *IntegrationsConfigBuilder) appendNodeExporter(config *node_exporter.Config, instanceKey *string) discovery.Exports { +func (b *ConfigBuilder) appendNodeExporter(config *node_exporter.Config, instanceKey *string) discovery.Exports { args := toNodeExporter(config) return b.appendExporterBlock(args, config.Name(), instanceKey, "unix") } diff --git a/internal/converter/internal/staticconvert/internal/build/oracledb_exporter.go b/internal/converter/internal/staticconvert/internal/build/oracledb_exporter.go index bbf0a859e6..bc768c1b7d 100644 --- a/internal/converter/internal/staticconvert/internal/build/oracledb_exporter.go +++ b/internal/converter/internal/staticconvert/internal/build/oracledb_exporter.go @@ -7,7 +7,7 @@ import ( "github.com/grafana/river/rivertypes" ) -func (b *IntegrationsConfigBuilder) appendOracledbExporter(config *oracledb_exporter.Config, instanceKey *string) discovery.Exports { +func (b *ConfigBuilder) appendOracledbExporter(config *oracledb_exporter.Config, instanceKey *string) discovery.Exports { args := toOracledbExporter(config) return b.appendExporterBlock(args, config.Name(), instanceKey, "oracledb") } diff --git a/internal/converter/internal/staticconvert/internal/build/postgres_exporter.go b/internal/converter/internal/staticconvert/internal/build/postgres_exporter.go index 9a54a7251d..e73877e964 100644 --- a/internal/converter/internal/staticconvert/internal/build/postgres_exporter.go +++ b/internal/converter/internal/staticconvert/internal/build/postgres_exporter.go @@ -7,7 +7,7 @@ import ( "github.com/grafana/river/rivertypes" ) -func (b *IntegrationsConfigBuilder) appendPostgresExporter(config *postgres_exporter.Config, instanceKey *string) discovery.Exports { +func (b *ConfigBuilder) appendPostgresExporter(config *postgres_exporter.Config, instanceKey *string) discovery.Exports { args := toPostgresExporter(config) return b.appendExporterBlock(args, config.Name(), instanceKey, "postgres") } diff --git a/internal/converter/internal/staticconvert/internal/build/process_exporter.go b/internal/converter/internal/staticconvert/internal/build/process_exporter.go index 4634b40982..d8136cfa55 100644 --- a/internal/converter/internal/staticconvert/internal/build/process_exporter.go +++ b/internal/converter/internal/staticconvert/internal/build/process_exporter.go @@ -6,7 +6,7 @@ import ( "github.com/grafana/agent/internal/static/integrations/process_exporter" ) -func (b *IntegrationsConfigBuilder) appendProcessExporter(config *process_exporter.Config, instanceKey *string) discovery.Exports { +func (b *ConfigBuilder) 
appendProcessExporter(config *process_exporter.Config, instanceKey *string) discovery.Exports { args := toProcessExporter(config) return b.appendExporterBlock(args, config.Name(), instanceKey, "process") } diff --git a/internal/converter/internal/staticconvert/internal/build/redis_exporter.go b/internal/converter/internal/staticconvert/internal/build/redis_exporter.go index 659bb122db..e54bc2f7a6 100644 --- a/internal/converter/internal/staticconvert/internal/build/redis_exporter.go +++ b/internal/converter/internal/staticconvert/internal/build/redis_exporter.go @@ -7,7 +7,7 @@ import ( "github.com/grafana/river/rivertypes" ) -func (b *IntegrationsConfigBuilder) appendRedisExporter(config *redis_exporter.Config, instanceKey *string) discovery.Exports { +func (b *ConfigBuilder) appendRedisExporter(config *redis_exporter.Config, instanceKey *string) discovery.Exports { args := toRedisExporter(config) return b.appendExporterBlock(args, config.Name(), instanceKey, "redis") } diff --git a/internal/converter/internal/staticconvert/internal/build/self_exporter.go b/internal/converter/internal/staticconvert/internal/build/self_exporter.go index 54e81e2a9e..31e7b50551 100644 --- a/internal/converter/internal/staticconvert/internal/build/self_exporter.go +++ b/internal/converter/internal/staticconvert/internal/build/self_exporter.go @@ -7,7 +7,7 @@ import ( agent_exporter_v2 "github.com/grafana/agent/internal/static/integrations/v2/agent" ) -func (b *IntegrationsConfigBuilder) appendAgentExporter(config *agent_exporter.Config) discovery.Exports { +func (b *ConfigBuilder) appendAgentExporter(config *agent_exporter.Config) discovery.Exports { args := toAgentExporter(config) return b.appendExporterBlock(args, config.Name(), nil, "self") } @@ -16,7 +16,7 @@ func toAgentExporter(config *agent_exporter.Config) *self.Arguments { return &self.Arguments{} } -func (b *IntegrationsConfigBuilder) appendAgentExporterV2(config *agent_exporter_v2.Config) discovery.Exports { +func (b *ConfigBuilder) appendAgentExporterV2(config *agent_exporter_v2.Config) discovery.Exports { args := toAgentExporterV2(config) return b.appendExporterBlock(args, config.Name(), config.Common.InstanceKey, "self") } diff --git a/internal/converter/internal/staticconvert/internal/build/snmp_exporter.go b/internal/converter/internal/staticconvert/internal/build/snmp_exporter.go index 23dab102a8..cc7dbe3c03 100644 --- a/internal/converter/internal/staticconvert/internal/build/snmp_exporter.go +++ b/internal/converter/internal/staticconvert/internal/build/snmp_exporter.go @@ -10,7 +10,7 @@ import ( snmp_config "github.com/prometheus/snmp_exporter/config" ) -func (b *IntegrationsConfigBuilder) appendSnmpExporter(config *snmp_exporter.Config) discovery.Exports { +func (b *ConfigBuilder) appendSnmpExporter(config *snmp_exporter.Config) discovery.Exports { args := toSnmpExporter(config) return b.appendExporterBlock(args, config.Name(), nil, "snmp") } @@ -58,7 +58,7 @@ func toSnmpExporter(config *snmp_exporter.Config) *snmp.Arguments { } } -func (b *IntegrationsConfigBuilder) appendSnmpExporterV2(config *snmp_exporter_v2.Config) discovery.Exports { +func (b *ConfigBuilder) appendSnmpExporterV2(config *snmp_exporter_v2.Config) discovery.Exports { args := toSnmpExporterV2(config) return b.appendExporterBlock(args, config.Name(), config.Common.InstanceKey, "snmp") } diff --git a/internal/converter/internal/staticconvert/internal/build/snowflake_exporter.go b/internal/converter/internal/staticconvert/internal/build/snowflake_exporter.go index 
b496258d60..3b0e204aa9 100644 --- a/internal/converter/internal/staticconvert/internal/build/snowflake_exporter.go +++ b/internal/converter/internal/staticconvert/internal/build/snowflake_exporter.go @@ -7,7 +7,7 @@ import ( "github.com/grafana/river/rivertypes" ) -func (b *IntegrationsConfigBuilder) appendSnowflakeExporter(config *snowflake_exporter.Config, instanceKey *string) discovery.Exports { +func (b *ConfigBuilder) appendSnowflakeExporter(config *snowflake_exporter.Config, instanceKey *string) discovery.Exports { args := toSnowflakeExporter(config) return b.appendExporterBlock(args, config.Name(), instanceKey, "snowflake") } diff --git a/internal/converter/internal/staticconvert/internal/build/squid_exporter.go b/internal/converter/internal/staticconvert/internal/build/squid_exporter.go index 9999a4c805..2c93845620 100644 --- a/internal/converter/internal/staticconvert/internal/build/squid_exporter.go +++ b/internal/converter/internal/staticconvert/internal/build/squid_exporter.go @@ -7,7 +7,7 @@ import ( "github.com/grafana/river/rivertypes" ) -func (b *IntegrationsConfigBuilder) appendSquidExporter(config *squid_exporter.Config, instanceKey *string) discovery.Exports { +func (b *ConfigBuilder) appendSquidExporter(config *squid_exporter.Config, instanceKey *string) discovery.Exports { args := toSquidExporter(config) return b.appendExporterBlock(args, config.Name(), instanceKey, "squid") } diff --git a/internal/converter/internal/staticconvert/internal/build/statsd_exporter.go b/internal/converter/internal/staticconvert/internal/build/statsd_exporter.go index 5f8b509565..78aca3ec37 100644 --- a/internal/converter/internal/staticconvert/internal/build/statsd_exporter.go +++ b/internal/converter/internal/staticconvert/internal/build/statsd_exporter.go @@ -7,7 +7,7 @@ import ( "github.com/grafana/agent/internal/static/integrations/statsd_exporter" ) -func (b *IntegrationsConfigBuilder) appendStatsdExporter(config *statsd_exporter.Config, instanceKey *string) discovery.Exports { +func (b *ConfigBuilder) appendStatsdExporter(config *statsd_exporter.Config, instanceKey *string) discovery.Exports { args := toStatsdExporter(config) if config.MappingConfig != nil { diff --git a/internal/converter/internal/staticconvert/internal/build/windows_exporter.go b/internal/converter/internal/staticconvert/internal/build/windows_exporter.go index 7c88ae7d47..079c68d489 100644 --- a/internal/converter/internal/staticconvert/internal/build/windows_exporter.go +++ b/internal/converter/internal/staticconvert/internal/build/windows_exporter.go @@ -8,7 +8,7 @@ import ( "github.com/grafana/agent/internal/static/integrations/windows_exporter" ) -func (b *IntegrationsConfigBuilder) appendWindowsExporter(config *windows_exporter.Config, instanceKey *string) discovery.Exports { +func (b *ConfigBuilder) appendWindowsExporter(config *windows_exporter.Config, instanceKey *string) discovery.Exports { args := toWindowsExporter(config) return b.appendExporterBlock(args, config.Name(), instanceKey, "windows") } diff --git a/internal/converter/internal/staticconvert/staticconvert.go b/internal/converter/internal/staticconvert/staticconvert.go index bb4a94f14b..ccb7e36938 100644 --- a/internal/converter/internal/staticconvert/staticconvert.go +++ b/internal/converter/internal/staticconvert/staticconvert.go @@ -70,8 +70,7 @@ func AppendAll(f *builder.File, staticConfig *config.Config) diag.Diagnostics { diags.AddAll(appendStaticPrometheus(f, staticConfig)) diags.AddAll(appendStaticPromtail(f, staticConfig)) - 
diags.AddAll(appendStaticIntegrations(f, staticConfig)) - // TODO otel + diags.AddAll(appendStaticConfig(f, staticConfig)) diags.AddAll(validate(staticConfig)) @@ -164,10 +163,10 @@ func appendStaticPromtail(f *builder.File, staticConfig *config.Config) diag.Dia return diags } -func appendStaticIntegrations(f *builder.File, staticConfig *config.Config) diag.Diagnostics { +func appendStaticConfig(f *builder.File, staticConfig *config.Config) diag.Diagnostics { var diags diag.Diagnostics - b := build.NewIntegrationsConfigBuilder(f, &diags, staticConfig, &build.GlobalContext{LabelPrefix: "integrations"}) + b := build.NewConfigBuilder(f, &diags, staticConfig, &build.GlobalContext{IntegrationsLabelPrefix: "integrations"}) b.Build() return diags diff --git a/internal/converter/internal/staticconvert/testdata/traces.diags b/internal/converter/internal/staticconvert/testdata/traces.diags new file mode 100644 index 0000000000..4f7c851a10 --- /dev/null +++ b/internal/converter/internal/staticconvert/testdata/traces.diags @@ -0,0 +1,2 @@ +(Warning) automatic_logging for traces has no direct flow equivalent. A best effort translation has been made to otelcol.exporter.logging but the behavior will differ. +(Warning) Please review your agent command line flags and ensure they are set in your Flow mode config file where necessary. \ No newline at end of file diff --git a/internal/converter/internal/staticconvert/testdata/traces.river b/internal/converter/internal/staticconvert/testdata/traces.river new file mode 100644 index 0000000000..750e89cd55 --- /dev/null +++ b/internal/converter/internal/staticconvert/testdata/traces.river @@ -0,0 +1,282 @@ +otelcol.receiver.otlp "default" { + grpc { + include_metadata = true + } + + http { + include_metadata = true + } + + output { + metrics = [] + logs = [] + traces = [otelcol.processor.attributes.default.input] + } +} + +otelcol.processor.attributes "default" { + action { + key = "db.table" + action = "delete" + } + + action { + key = "redacted_span" + value = true + action = "upsert" + } + + action { + key = "copy_key" + from_attribute = "key_original" + action = "update" + } + + action { + key = "account_id" + value = 2245 + action = "insert" + } + + action { + key = "account_password" + action = "delete" + } + + action { + key = "account_email" + action = "hash" + } + + action { + key = "http.status_code" + converted_type = "int" + action = "convert" + } + + output { + metrics = [] + logs = [] + traces = [otelcol.processor.tail_sampling.default.input] + } +} + +otelcol.processor.tail_sampling "default" { + policy { + name = "test-policy-1" + type = "always_sample" + } + + policy { + name = "test-policy-2" + type = "latency" + + latency { + threshold_ms = 5000 + } + } + + policy { + name = "test-policy-3" + type = "numeric_attribute" + + numeric_attribute { + key = "key1" + min_value = 50 + max_value = 100 + } + } + + policy { + name = "test-policy-4" + type = "probabilistic" + + probabilistic { + sampling_percentage = 10 + } + } + + policy { + name = "test-policy-5" + type = "status_code" + + status_code { + status_codes = ["ERROR", "UNSET"] + } + } + + policy { + name = "test-policy-6" + type = "string_attribute" + + string_attribute { + key = "key2" + values = ["value1", "value2"] + } + } + + policy { + name = "test-policy-7" + type = "string_attribute" + + string_attribute { + key = "key2" + values = ["value1", "val*"] + enabled_regex_matching = true + cache_max_size = 10 + } + } + + policy { + name = "test-policy-8" + type = "rate_limiting" + + 
rate_limiting { + spans_per_second = 35 + } + } + + policy { + name = "test-policy-9" + type = "string_attribute" + + string_attribute { + key = "http.url" + values = ["\\/health", "\\/metrics"] + enabled_regex_matching = true + invert_match = true + } + } + + policy { + name = "test-policy-10" + type = "span_count" + + span_count { + min_spans = 2 + max_spans = 20 + } + } + + policy { + name = "test-policy-11" + type = "trace_state" + + trace_state { + key = "key3" + values = ["value1", "value2"] + } + } + + policy { + name = "test-policy-12" + type = "boolean_attribute" + + boolean_attribute { + key = "key4" + value = true + } + } + + policy { + name = "test-policy-11" + type = "ottl_condition" + + ottl_condition { + error_mode = "ignore" + span = ["attributes[\"test_attr_key_1\"] == \"test_attr_val_1\"", "attributes[\"test_attr_key_2\"] != \"test_attr_val_1\""] + spanevent = ["name != \"test_span_event_name\"", "attributes[\"test_event_attr_key_2\"] != \"test_event_attr_val_1\""] + } + } + + policy { + name = "and-policy-1" + type = "and" + + and { + and_sub_policy { + name = "test-and-policy-1" + type = "numeric_attribute" + + numeric_attribute { + key = "key1" + min_value = 50 + max_value = 100 + } + } + + and_sub_policy { + name = "test-and-policy-2" + type = "string_attribute" + + string_attribute { + key = "key2" + values = ["value1", "value2"] + } + } + } + } + + policy { + name = "composite-policy-1" + type = "composite" + + composite { + max_total_spans_per_second = 1000 + policy_order = ["test-composite-policy-1", "test-composite-policy-2", "test-composite-policy-3"] + + composite_sub_policy { + name = "test-composite-policy-1" + type = "numeric_attribute" + + numeric_attribute { + key = "key1" + min_value = 50 + max_value = 100 + } + } + + composite_sub_policy { + name = "test-composite-policy-2" + type = "string_attribute" + + string_attribute { + key = "key2" + values = ["value1", "value2"] + } + } + + composite_sub_policy { + name = "test-composite-policy-3" + type = "always_sample" + } + + rate_allocation { + policy = "test-composite-policy-1" + percent = 50 + } + + rate_allocation { + policy = "test-composite-policy-2" + percent = 25 + } + } + } + decision_wait = "5s" + + output { + traces = [otelcol.exporter.otlp.default_0.input, otelcol.exporter.logging.default.input] + } +} + +otelcol.exporter.otlp "default_0" { + retry_on_failure { + max_elapsed_time = "1m0s" + } + + client { + endpoint = "http://localhost:1234/write" + } +} + +otelcol.exporter.logging "default" { } diff --git a/internal/converter/internal/staticconvert/testdata/traces.yaml b/internal/converter/internal/staticconvert/testdata/traces.yaml new file mode 100644 index 0000000000..f5e2fffec4 --- /dev/null +++ b/internal/converter/internal/staticconvert/testdata/traces.yaml @@ -0,0 +1,166 @@ +traces: + configs: + - name: trace_config + receivers: + otlp: + protocols: + grpc: + http: + remote_write: + - endpoint: http://localhost:1234/write + automatic_logging: + backend: "stdout" + tail_sampling: + policies: + [ + { + name: test-policy-1, + type: always_sample + }, + { + name: test-policy-2, + type: latency, + latency: {threshold_ms: 5000} + }, + { + name: test-policy-3, + type: numeric_attribute, + numeric_attribute: {key: key1, min_value: 50, max_value: 100} + }, + { + name: test-policy-4, + type: probabilistic, + probabilistic: {sampling_percentage: 10} + }, + { + name: test-policy-5, + type: status_code, + status_code: {status_codes: [ERROR, UNSET]} + }, + { + name: test-policy-6, + type: 
string_attribute, + string_attribute: {key: key2, values: [value1, value2]} + }, + { + name: test-policy-7, + type: string_attribute, + string_attribute: {key: key2, values: [value1, val*], enabled_regex_matching: true, cache_max_size: 10} + }, + { + name: test-policy-8, + type: rate_limiting, + rate_limiting: {spans_per_second: 35} + }, + { + name: test-policy-9, + type: string_attribute, + string_attribute: {key: http.url, values: [\/health, \/metrics], enabled_regex_matching: true, invert_match: true} + }, + { + name: test-policy-10, + type: span_count, + span_count: {min_spans: 2, max_spans: 20} + }, + { + name: test-policy-11, + type: trace_state, + trace_state: { key: key3, values: [value1, value2] } + }, + { + name: test-policy-12, + type: boolean_attribute, + boolean_attribute: {key: key4, value: true} + }, + { + name: test-policy-11, + type: ottl_condition, + ottl_condition: { + error_mode: ignore, + span: [ + "attributes[\"test_attr_key_1\"] == \"test_attr_val_1\"", + "attributes[\"test_attr_key_2\"] != \"test_attr_val_1\"", + ], + spanevent: [ + "name != \"test_span_event_name\"", + "attributes[\"test_event_attr_key_2\"] != \"test_event_attr_val_1\"", + ] + } + }, + { + name: and-policy-1, + type: and, + and: { + and_sub_policy: + [ + { + name: test-and-policy-1, + type: numeric_attribute, + numeric_attribute: { key: key1, min_value: 50, max_value: 100 } + }, + { + name: test-and-policy-2, + type: string_attribute, + string_attribute: { key: key2, values: [ value1, value2 ] } + }, + ] + } + }, + { + name: composite-policy-1, + type: composite, + composite: + { + max_total_spans_per_second: 1000, + policy_order: [test-composite-policy-1, test-composite-policy-2, test-composite-policy-3], + composite_sub_policy: + [ + { + name: test-composite-policy-1, + type: numeric_attribute, + numeric_attribute: {key: key1, min_value: 50, max_value: 100} + }, + { + name: test-composite-policy-2, + type: string_attribute, + string_attribute: {key: key2, values: [value1, value2]} + }, + { + name: test-composite-policy-3, + type: always_sample + } + ], + rate_allocation: + [ + { + policy: test-composite-policy-1, + percent: 50 + }, + { + policy: test-composite-policy-2, + percent: 25 + } + ] + } + }, + ] + attributes: + actions: + - key: db.table + action: delete + - key: redacted_span + value: true + action: upsert + - key: copy_key + from_attribute: key_original + action: update + - key: account_id + value: 2245 + action: insert + - key: account_password + action: delete + - key: account_email + action: hash + - key: http.status_code + action: convert + converted_type: int \ No newline at end of file diff --git a/internal/converter/internal/staticconvert/testdata/traces_multi.diags b/internal/converter/internal/staticconvert/testdata/traces_multi.diags new file mode 100644 index 0000000000..a4a05d1a3b --- /dev/null +++ b/internal/converter/internal/staticconvert/testdata/traces_multi.diags @@ -0,0 +1 @@ +(Warning) Please review your agent command line flags and ensure they are set in your Flow mode config file where necessary. 
\ No newline at end of file diff --git a/internal/converter/internal/staticconvert/testdata/traces_multi.river b/internal/converter/internal/staticconvert/testdata/traces_multi.river new file mode 100644 index 0000000000..b7e4a4ea76 --- /dev/null +++ b/internal/converter/internal/staticconvert/testdata/traces_multi.river @@ -0,0 +1,74 @@ +otelcol.receiver.otlp "trace_config_1_default" { + grpc { + include_metadata = true + } + + http { } + + output { + metrics = [] + logs = [] + traces = [otelcol.processor.attributes.trace_config_1_default.input] + } +} + +otelcol.processor.attributes "trace_config_1_default" { + action { + key = "db.table" + action = "delete" + } + + output { + metrics = [] + logs = [] + traces = [otelcol.exporter.otlp.trace_config_1_default_0.input] + } +} + +otelcol.exporter.otlp "trace_config_1_default_0" { + retry_on_failure { + max_elapsed_time = "1m0s" + } + + client { + endpoint = "http://localhost:1234/write" + } +} + +otelcol.receiver.otlp "trace_config_2_default" { + grpc { + include_metadata = true + } + + http { } + + output { + metrics = [] + logs = [] + traces = [otelcol.processor.attributes.trace_config_2_default.input] + } +} + +otelcol.processor.attributes "trace_config_2_default" { + action { + key = "redacted_span" + value = true + action = "upsert" + } + + output { + metrics = [] + logs = [] + traces = [otelcol.exporter.otlp.trace_config_2_default_0.input] + } +} + +otelcol.exporter.otlp "trace_config_2_default_0" { + retry_on_failure { + max_elapsed_time = "1m0s" + } + + client { + endpoint = "http://localhost:1234/write" + } +} diff --git a/internal/converter/internal/staticconvert/testdata/traces_multi.yaml b/internal/converter/internal/staticconvert/testdata/traces_multi.yaml new file mode 100644 index 0000000000..4135c94216 --- /dev/null +++ b/internal/converter/internal/staticconvert/testdata/traces_multi.yaml @@ -0,0 +1,27 @@ +traces: + configs: + - name: trace_config_1 + receivers: + otlp: + protocols: + grpc: + http: + remote_write: + - endpoint: http://localhost:1234/write + attributes: + actions: + - key: db.table + action: delete + - name: trace_config_2 + receivers: + otlp: + protocols: + grpc: + http: + remote_write: + - endpoint: http://localhost:1234/write + attributes: + actions: + - key: redacted_span + value: true + action: upsert \ No newline at end of file diff --git a/internal/converter/internal/staticconvert/testdata/unsupported.diags b/internal/converter/internal/staticconvert/testdata/unsupported.diags index 98330af73d..0958f0e79e 100644 --- a/internal/converter/internal/staticconvert/testdata/unsupported.diags +++ b/internal/converter/internal/staticconvert/testdata/unsupported.diags @@ -1,9 +1,9 @@ (Error) The converter does not support handling integrations which are not being scraped: mssql. (Error) mapping_config is not supported in statsd_exporter integrations config +(Error) automatic_logging for traces has no direct flow equivalent. A best effort translation can be made which only outputs to stdout and not directly to loki by bypassing errors. (Warning) Please review your agent command line flags and ensure they are set in your Flow mode config file where necessary. (Error) The converter does not support converting the provided grpc_tls_config server config: flow mode does not have a gRPC server to configure. (Error) The converter does not support converting the provided prefer_server_cipher_suites server config. 
(Warning) The converter does not support converting the provided metrics wal_directory config: Use the run command flag --storage.path for Flow mode instead. (Warning) disabled integrations do nothing and are not included in the output: node_exporter. -(Error) The converter does not support converting the provided traces config. (Error) The converter does not support converting the provided agent_management config. \ No newline at end of file diff --git a/internal/converter/internal/staticconvert/testdata/unsupported.river b/internal/converter/internal/staticconvert/testdata/unsupported.river index 76923a6c7f..3e9a55630b 100644 --- a/internal/converter/internal/staticconvert/testdata/unsupported.river +++ b/internal/converter/internal/staticconvert/testdata/unsupported.river @@ -49,3 +49,29 @@ prometheus.remote_write "integrations" { metadata_config { } } } + +otelcol.receiver.otlp "default" { + grpc { + include_metadata = true + } + + http { } + + output { + metrics = [] + logs = [] + traces = [otelcol.exporter.otlp.default_0.input, otelcol.exporter.logging.default.input] + } +} + +otelcol.exporter.otlp "default_0" { + retry_on_failure { + max_elapsed_time = "1m0s" + } + + client { + endpoint = "http://localhost:1234/write" + } +} + +otelcol.exporter.logging "default" { } diff --git a/internal/converter/internal/staticconvert/testdata/unsupported.yaml b/internal/converter/internal/staticconvert/testdata/unsupported.yaml index 11613e49b3..8dd14ab125 100644 --- a/internal/converter/internal/staticconvert/testdata/unsupported.yaml +++ b/internal/converter/internal/staticconvert/testdata/unsupported.yaml @@ -44,10 +44,6 @@ integrations: outcome: "$3" job: "${1}_server" -traces: - configs: - - name: trace_config - logs: positions_directory: /path global: @@ -56,5 +52,18 @@ logs: configs: - name: log_config +traces: + configs: + - name: trace_config + receivers: + otlp: + protocols: + grpc: + http: + remote_write: + - endpoint: http://localhost:1234/write + automatic_logging: + backend: "something else" + agent_management: host: host_name diff --git a/internal/converter/internal/staticconvert/validate.go b/internal/converter/internal/staticconvert/validate.go index a703dd5a43..216b791f59 100644 --- a/internal/converter/internal/staticconvert/validate.go +++ b/internal/converter/internal/staticconvert/validate.go @@ -218,8 +218,6 @@ func validateIntegrationsV2(integrationsConfig *v2.SubsystemOptions) diag.Diagno func validateTraces(tracesConfig traces.Config) diag.Diagnostics { var diags diag.Diagnostics - diags.AddAll(common.ValidateSupported(common.NotDeepEquals, tracesConfig, traces.Config{}, "traces", "")) - return diags } diff --git a/internal/flow/import_git_test.go b/internal/flow/import_git_test.go new file mode 100644 index 0000000000..c2b4c4f921 --- /dev/null +++ b/internal/flow/import_git_test.go @@ -0,0 +1,110 @@ +//go:build linux + +package flow_test + +import ( + "context" + "os" + "os/exec" + "path/filepath" + "sync" + "testing" + "time" + + "github.com/stretchr/testify/require" +) + +func TestPullUpdating(t *testing.T) { + // Previously we used fetch instead of pull, which would set the FETCH_HEAD but not HEAD + // This caused changes not to propagate if there were changes, since HEAD was pinned to whatever it was on the initial download. + // Switching to pull removes this problem at the expense of network bandwidth. 
+ // Tried switching to FETCH_HEAD, but FETCH_HEAD is only set on fetch and not on the initial repo clone, so we + // would need to remember to always call fetch after every clone. + // + // This test ensures we always pull the correct values down when the repository updates. It works by creating a + // local file-based git repo, committing a file, running the component, and then updating the file in the repo. + testRepo := t.TempDir() + + contents := `declare "add" { + argument "a" {} + argument "b" {} + + export "sum" { + value = argument.a.value + argument.b.value + } +}` + main := ` +import.git "testImport" { + repository = "` + testRepo + `" + path = "math.river" + pull_frequency = "5s" +} + +testImport.add "cc" { + a = 1 + b = 1 +} +` + init := exec.Command("git", "init", testRepo) + err := init.Run() + require.NoError(t, err) + math := filepath.Join(testRepo, "math.river") + err = os.WriteFile(math, []byte(contents), 0666) + require.NoError(t, err) + add := exec.Command("git", "add", ".") + add.Dir = testRepo + err = add.Run() + require.NoError(t, err) + commit := exec.Command("git", "commit", "-m \"test\"") + commit.Dir = testRepo + err = commit.Run() + require.NoError(t, err) + + defer verifyNoGoroutineLeaks(t) + ctrl, f := setup(t, main) + err = ctrl.LoadSource(f, nil) + require.NoError(t, err) + ctx, cancel := context.WithCancel(context.Background()) + + var wg sync.WaitGroup + defer func() { + cancel() + wg.Wait() + }() + + wg.Add(1) + go func() { + defer wg.Done() + ctrl.Run(ctx) + }() + + // Check for initial condition + require.Eventually(t, func() bool { + export := getExport[map[string]interface{}](t, ctrl, "", "testImport.add.cc") + return export["sum"] == 2 + }, 3*time.Second, 10*time.Millisecond) + + contentsMore := `declare "add" { + argument "a" {} + argument "b" {} + + export "sum" { + value = argument.a.value + argument.b.value + 1 + } +}` + err = os.WriteFile(math, []byte(contentsMore), 0666) + require.NoError(t, err) + add2 := exec.Command("git", "add", ".") + add2.Dir = testRepo + require.NoError(t, add2.Run()) + + commit2 := exec.Command("git", "commit", "-m \"test2\"") + commit2.Dir = testRepo + require.NoError(t, commit2.Run()) + + // Check for final condition. + require.Eventually(t, func() bool { + export := getExport[map[string]interface{}](t, ctrl, "", "testImport.add.cc") + return export["sum"] == 3 + }, 20*time.Second, 1*time.Millisecond) +} diff --git a/internal/flow/import_test.go b/internal/flow/import_test.go index 691d9246b5..89fddafc46 100644 --- a/internal/flow/import_test.go +++ b/internal/flow/import_test.go @@ -4,7 +4,6 @@ import ( "context" "io/fs" "os" - "os/exec" "path/filepath" "strings" "sync" @@ -251,101 +250,6 @@ func TestImportError(t *testing.T) { } } -func TestPullUpdating(t *testing.T) { - // Previously we used fetch instead of pull, which would set the FETCH_HEAD but not HEAD - // This caused changes not to propagate if there were changes, since HEAD was pinned to whatever it was on the initial download. - // Switching to pull removes this problem at the expense of network bandwidth. - // Tried switching to FETCH_HEAD but FETCH_HEAD is only set on fetch and not initial repo clone so we would need to - // remember to always call fetch after clone. - // - // This test ensures we can pull the correct values down if they update no matter what, it works by creating a local - // file based git repo then committing a file, running the component, then updating the file in the repo.
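Editorial aside: each git invocation in TestPullUpdating above repeats the exec.Command, Dir, and Run plumbing. A hypothetical helper along these lines (runGit is not part of this change, and it reuses the imports already present in the test file) would keep each call to one line and make the error checks impossible to forget:

// runGit runs a git subcommand in dir and fails the test on any error,
// so no invocation can silently misfire.
func runGit(t *testing.T, dir string, args ...string) {
	t.Helper()
	cmd := exec.Command("git", args...)
	cmd.Dir = dir
	require.NoError(t, cmd.Run())
}

With it, the repository setup collapses to calls such as runGit(t, testRepo, "add", ".") and runGit(t, testRepo, "commit", "-m", "test").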
- testRepo := t.TempDir() - - contents := `declare "add" { - argument "a" {} - argument "b" {} - - export "sum" { - value = argument.a.value + argument.b.value - } -}` - main := ` -import.git "testImport" { - repository = "` + testRepo + `" - path = "math.river" - pull_frequency = "5s" -} - -testImport.add "cc" { - a = 1 - b = 1 -} -` - init := exec.Command("git", "init", testRepo) - err := init.Run() - require.NoError(t, err) - math := filepath.Join(testRepo, "math.river") - err = os.WriteFile(math, []byte(contents), 0666) - require.NoError(t, err) - add := exec.Command("git", "add", ".") - add.Dir = testRepo - err = add.Run() - require.NoError(t, err) - commit := exec.Command("git", "commit", "-m \"test\"") - commit.Dir = testRepo - err = commit.Run() - require.NoError(t, err) - - defer verifyNoGoroutineLeaks(t) - ctrl, f := setup(t, main) - err = ctrl.LoadSource(f, nil) - require.NoError(t, err) - ctx, cancel := context.WithCancel(context.Background()) - - var wg sync.WaitGroup - defer func() { - cancel() - wg.Wait() - }() - - wg.Add(1) - go func() { - defer wg.Done() - ctrl.Run(ctx) - }() - - // Check for initial condition - require.Eventually(t, func() bool { - export := getExport[map[string]interface{}](t, ctrl, "", "testImport.add.cc") - return export["sum"] == 2 - }, 3*time.Second, 10*time.Millisecond) - - contentsMore := `declare "add" { - argument "a" {} - argument "b" {} - - export "sum" { - value = argument.a.value + argument.b.value + 1 - } -}` - err = os.WriteFile(math, []byte(contentsMore), 0666) - require.NoError(t, err) - add2 := exec.Command("git", "add", ".") - add2.Dir = testRepo - add2.Run() - - commit2 := exec.Command("git", "commit", "-m \"test2\"") - commit2.Dir = testRepo - commit2.Run() - - // Check for final condition. - require.Eventually(t, func() bool { - export := getExport[map[string]interface{}](t, ctrl, "", "testImport.add.cc") - return export["sum"] == 3 - }, 20*time.Second, 1*time.Millisecond) -} - func testConfig(t *testing.T, config string, reloadConfig string, update func()) { defer verifyNoGoroutineLeaks(t) ctrl, f := setup(t, config) diff --git a/internal/flow/logging/logger_test.go b/internal/flow/logging/logger_test.go index 7b8fb15d76..79c84d6307 100644 --- a/internal/flow/logging/logger_test.go +++ b/internal/flow/logging/logger_test.go @@ -18,7 +18,7 @@ import ( ) /* Most recent performance results on M2 Macbook Air: -$ go test -count=1 -benchmem ./pkg/flow/logging -run ^$ -bench BenchmarkLogging_ +$ go test -count=1 -benchmem ./internal/flow/logging -run ^$ -bench BenchmarkLogging_ goos: darwin goarch: arm64 pkg: github.com/grafana/agent/internal/flow/logging diff --git a/internal/static/integrations/gcp_exporter/gcp_exporter.go b/internal/static/integrations/gcp_exporter/gcp_exporter.go index acfea2d36f..3b684e402f 100644 --- a/internal/static/integrations/gcp_exporter/gcp_exporter.go +++ b/internal/static/integrations/gcp_exporter/gcp_exporter.go @@ -14,6 +14,7 @@ import ( "github.com/go-kit/log" "github.com/grafana/dskit/multierror" "github.com/prometheus-community/stackdriver_exporter/collectors" + "github.com/prometheus-community/stackdriver_exporter/delta" "github.com/prometheus-community/stackdriver_exporter/utils" "github.com/prometheus/client_golang/prometheus" "golang.org/x/oauth2/google" @@ -105,8 +106,8 @@ func (c *Config) NewIntegration(l log.Logger) (integrations.Integration, error) AggregateDeltas: true, }, l, - collectors.NewInMemoryDeltaCounterStore(l, 30*time.Minute), - collectors.NewInMemoryDeltaDistributionStore(l, 
30*time.Minute), + delta.NewInMemoryCounterStore(l, 30*time.Minute), + delta.NewInMemoryHistogramStore(l, 30*time.Minute), ) if err != nil { return nil, fmt.Errorf("failed to create monitoring collector: %w", err) } diff --git a/internal/static/traces/config.go b/internal/static/traces/config.go index 10fc5fef10..12aa4c784e 100644 --- a/internal/static/traces/config.go +++ b/internal/static/traces/config.go @@ -655,7 +655,7 @@ func formatPolicies(cfg []policy) ([]map[string]interface{}, error) { return policies, nil } -func (c *InstanceConfig) otelConfig() (*otelcol.Config, error) { +func (c *InstanceConfig) OtelConfig() (*otelcol.Config, error) { otelMapStructure := map[string]interface{}{} if len(c.Receivers) == 0 { diff --git a/internal/static/traces/config_test.go b/internal/static/traces/config_test.go index a5e9bbedd4..fb8443abab 100644 --- a/internal/static/traces/config_test.go +++ b/internal/static/traces/config_test.go @@ -1510,7 +1510,7 @@ service: err := yaml.Unmarshal([]byte(tc.cfg), &cfg) require.NoError(t, err) // check error - actualConfig, err := cfg.otelConfig() + actualConfig, err := cfg.OtelConfig() if tc.expectedError { assert.Error(t, err) return @@ -1733,7 +1733,7 @@ load_balancing: require.NoError(t, err) // check error - actualConfig, err := cfg.otelConfig() + actualConfig, err := cfg.OtelConfig() require.NoError(t, err) require.Equal(t, len(tc.expectedProcessors), len(actualConfig.Service.Pipelines)) @@ -1892,7 +1892,7 @@ remote_write: cfg := InstanceConfig{} err := yaml.Unmarshal([]byte(test), &cfg) assert.Nil(t, err) - otel, err := cfg.otelConfig() + otel, err := cfg.OtelConfig() assert.Nil(t, err) assert.Contains(t, otel.Service.Pipelines[component.NewID("traces")].Receivers, component.NewID(pushreceiver.TypeStr)) } diff --git a/internal/static/traces/instance.go b/internal/static/traces/instance.go index bfa063353f..0c2e3fcb19 100644 --- a/internal/static/traces/instance.go +++ b/internal/static/traces/instance.go @@ -94,7 +94,7 @@ func (i *Instance) stop() { func (i *Instance) buildAndStartPipeline(ctx context.Context, cfg InstanceConfig, logs *logs.Logs, instManager instance.Manager, reg prom_client.Registerer) error { // create component factories - otelConfig, err := cfg.otelConfig() + otelConfig, err := cfg.OtelConfig() if err != nil { return fmt.Errorf("failed to load otelConfig from agent traces config: %w", err) } diff --git a/operations/agent-mixin/alerts/clustering.libsonnet b/operations/agent-mixin/alerts/clustering.libsonnet index b4d5edc989..5e2ad3c026 100644 --- a/operations/agent-mixin/alerts/clustering.libsonnet +++ b/operations/agent-mixin/alerts/clustering.libsonnet @@ -7,23 +7,22 @@ alert.newGroup( alert.newRule( 'ClusterNotConverging', 'stddev by (cluster, namespace) (sum without (state) (cluster_node_peers)) != 0', - 'Cluster is not converging.', + 'Cluster is not converging: nodes report a different number of peers in the cluster.', '10m', ), - // Cluster has entered a split brain state. alert.newRule( - 'ClusterSplitBrain', - // Assert that the set of known peers (regardless of state) for an - // agent matches the same number of running agents in the same cluster - // and namespace. + 'ClusterNodeCountMismatch', + // Assert that the number of known peers (regardless of state) reported by each + // agent matches the number of running agents in the same cluster + // and namespace as reported by a count of Prometheus metrics.
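+      // For example: in a healthy three-node cluster, each node reports
+      // cluster_node_peers == 3 and count(cluster_node_info) == 3, so the two
+      // sides of the expression match. If one node's metrics stop being
+      // scraped, the count drops to 2 while the reported peers stay at 3,
+      // and the alert fires.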
||| sum without (state) (cluster_node_peers) != on (cluster, namespace) group_left count by (cluster, namespace) (cluster_node_info) |||, - 'Cluster nodes have entered a split brain state.', - '10m', + 'Nodes report a different number of peers than the count of observed agent metrics. Some agent metrics may be missing, or the cluster is in a split-brain state.', + '15m', ), // Nodes health score is not zero. @@ -32,7 +31,7 @@ alert.newGroup( ||| cluster_node_gossip_health_score > 0 |||, - 'Cluster node is reporting a health score > 0.', + 'Cluster node is reporting a gossip protocol health score > 0.', '10m', ), diff --git a/operations/agent-mixin/dashboards/controller.libsonnet b/operations/agent-mixin/dashboards/controller.libsonnet index ac6125d578..ec059de981 100644 --- a/operations/agent-mixin/dashboards/controller.libsonnet +++ b/operations/agent-mixin/dashboards/controller.libsonnet @@ -276,7 +276,7 @@ local filename = 'agent-flow-controller.json'; // // This panel supports both native and classic histograms, though it only shows one at a time. ( - panel.newNativeHistogramHeatmap('Component evaluation histogram') + + panel.newNativeHistogramHeatmap('Component evaluation histogram', 's') + panel.withDescription(||| Detailed histogram view of how long component evaluations take. @@ -301,7 +301,7 @@ local filename = 'agent-flow-controller.json'; // // This panel supports both native and classic histograms, though it only shows one at a time. ( - panel.newNativeHistogramHeatmap('Component dependency wait histogram') + + panel.newNativeHistogramHeatmap('Component dependency wait histogram', 's') + panel.withDescription(||| Detailed histogram of how long components wait to be evaluated after their dependency is updated. diff --git a/operations/agent-mixin/dashboards/opentelemetry.libsonnet b/operations/agent-mixin/dashboards/opentelemetry.libsonnet index a88fdf3893..cd3aeb1efc 100644 --- a/operations/agent-mixin/dashboards/opentelemetry.libsonnet +++ b/operations/agent-mixin/dashboards/opentelemetry.libsonnet @@ -39,8 +39,9 @@ local stackedPanelMixin = { ( panel.new(title='Accepted spans', type='timeseries') + panel.withDescription(||| - Number of spans successfully pushed into the pipeline. - |||) + + Number of spans successfully pushed into the pipeline. + |||) + + stackedPanelMixin + panel.withPosition({ x: 0, y: 0, w: 8, h: 10 }) + panel.withQueries([ panel.newQuery( @@ -58,19 +59,19 @@ local stackedPanelMixin = { panel.withDescription(||| Number of spans that could not be pushed into the pipeline. |||) + + stackedPanelMixin + panel.withPosition({ x: 8, y: 0, w: 8, h: 10 }) + panel.withQueries([ panel.newQuery( expr=||| - rate(receiver_refused_spans_ratio_total{cluster="$cluster", namespace="$namespace", instance=~"$instance"}[$__rate_interval]) + rate(receiver_refused_spans_ratio_total{cluster="$cluster", namespace="$namespace", instance=~"$instance"}[$__rate_interval]) |||, legendFormat='{{ pod }} / {{ transport }}', ), ]) ), ( - panel.newHeatmap('RPC server duration (traces)') + - panel.withUnit('milliseconds') + + panel.newHeatmap('RPC server duration', 'ms') + panel.withDescription(||| The duration of inbound RPCs.
|||) + @@ -86,13 +87,14 @@ local stackedPanelMixin = { // "Batching" row ( - panel.new('Batching [otelcol.processor.batch]', 'row') + + panel.new('Batching of logs, metrics, and traces [otelcol.processor.batch]', 'row') + panel.withPosition({ h: 1, w: 24, x: 0, y: 10 }) ), ( - panel.newHeatmap('Number of units in the batch') + + panel.newHeatmap('Number of units in the batch', 'short') + + panel.withUnit('short') + panel.withDescription(||| - Number of units in the batch + Number of spans, metric datapoints, or log lines in a batch |||) + panel.withPosition({ x: 0, y: 10, w: 8, h: 10 }) + panel.withQueries([ @@ -105,6 +107,9 @@ local stackedPanelMixin = { ), ( panel.new(title='Distinct metadata values', type='timeseries') + + //TODO: Clarify what metadata means. I think it's the metadata in the HTTP headers? + //TODO: Mention that if this metric is too high, it could hit the metadata_cardinality_limit + //TODO: Make a metric for the current value of metadata_cardinality_limit and create an alert if the actual cardinality reaches it? panel.withDescription(||| Number of distinct metadata value combinations being processed |||) + @@ -112,7 +117,7 @@ local stackedPanelMixin = { panel.withQueries([ panel.newQuery( expr=||| - processor_batch_metadata_cardinality_ratio{cluster="$cluster", namespace="$namespace", instance=~"$instance"} + processor_batch_metadata_cardinality_ratio{cluster="$cluster", namespace="$namespace", instance=~"$instance"} |||, legendFormat='{{ pod }}', ), @@ -127,7 +132,7 @@ local stackedPanelMixin = { panel.withQueries([ panel.newQuery( expr=||| - rate(processor_batch_timeout_trigger_send_ratio_total{cluster="$cluster", namespace="$namespace", instance=~"$instance"}[$__rate_interval]) + rate(processor_batch_timeout_trigger_send_ratio_total{cluster="$cluster", namespace="$namespace", instance=~"$instance"}[$__rate_interval]) |||, legendFormat='{{ pod }}', ), @@ -144,6 +149,7 @@ local stackedPanelMixin = { panel.withDescription(||| Number of spans successfully sent to destination. |||) + + stackedPanelMixin + panel.withPosition({ x: 0, y: 20, w: 8, h: 10 }) + panel.withQueries([ panel.newQuery( @@ -159,6 +165,7 @@ local stackedPanelMixin = { panel.withDescription(||| Number of spans in failed attempts to send to destination. |||) + + stackedPanelMixin + panel.withPosition({ x: 8, y: 20, w: 8, h: 10 }) + panel.withQueries([ panel.newQuery( diff --git a/operations/agent-mixin/dashboards/utils/panel.jsonnet b/operations/agent-mixin/dashboards/utils/panel.jsonnet index a59e6a4b54..9ebf39f34d 100644 --- a/operations/agent-mixin/dashboards/utils/panel.jsonnet +++ b/operations/agent-mixin/dashboards/utils/panel.jsonnet @@ -30,7 +30,7 @@ }, }, - newHeatmap(title=''):: $.new(title, 'heatmap') { + newHeatmap(title='', unit=''):: $.new(title, 'heatmap') { maxDataPoints: 30, options: { calculate: false, @@ -53,13 +53,13 @@ yHistogram: true, }, yAxis: { - unit: 's', + unit: unit, }, }, pluginVersion: '9.0.6', }, - newNativeHistogramHeatmap(title=''):: $.newHeatmap(title) { + newNativeHistogramHeatmap(title='', unit=''):: $.newHeatmap(title, unit) { options+: { cellGap: 0, color: { diff --git a/tools/gen-versioned-files/agent-version.txt b/tools/gen-versioned-files/agent-version.txt index 1e24a0583a..6b97562039 100644 --- a/tools/gen-versioned-files/agent-version.txt +++ b/tools/gen-versioned-files/agent-version.txt @@ -1 +1 @@ -v0.40.2 +v0.40.3