diff --git a/_includes/logs/loc-availability.rst b/_includes/logs/loc-availability.rst
index 782bcd12b..e55279502 100644
--- a/_includes/logs/loc-availability.rst
+++ b/_includes/logs/loc-availability.rst
@@ -1,3 +1,3 @@
-Splunk Log Observer Connect is available in the AWS regions us0, us1, eu0, jp0, and au0, and in the GCP region us2. Splunk Log Observer Connect is compatible with Splunk Enterprise versions 9.0.1 and higher, and Splunk Cloud Platform versions 9.0.2209 and higher. Log Observer Connect is not available for Splunk Cloud Platform trials.
+Splunk Log Observer Connect is available in the AWS regions us0, us1, eu0, eu1, eu2, jp0, and au0, and in the GCP region us2. Splunk Log Observer Connect is compatible with Splunk Enterprise versions 9.0.1 and higher, and Splunk Cloud Platform versions 9.0.2209 and higher. Log Observer Connect is not available for Splunk Cloud Platform trials. You cannot access logs from a GovCloud environment through Log Observer Connect. However, you can use global data links to link from Log Observer Connect to your GovCloud environment where you can access your logs. For more information on global data links, see :ref:`link-metadata-to-content`.
\ No newline at end of file
diff --git a/admin/notif-services/admin-notifs-index.rst b/admin/notif-services/admin-notifs-index.rst
index 5738efeff..348723bb7 100644
--- a/admin/notif-services/admin-notifs-index.rst
+++ b/admin/notif-services/admin-notifs-index.rst
@@ -119,6 +119,14 @@ The following table contains a list of the IP addresses that you can use to allo
     - * 108.128.26.145/32
       * 34.250.243.212/32
       * 54.171.237.247/32
+  * - eu1
+    - * 3.73.240.7/32
+      * 18.196.129.64/32
+      * 3.126.181.171/32
+  * - eu2
+    - * 13.41.86.83/32
+      * 52.56.124.93/32
+      * 35.177.204.133/32
   * - jp0
     - * 35.78.47.79/32
       * 35.77.252.198/32
diff --git a/gdi/opentelemetry/collector-kubernetes/collector-configuration-tutorial-k8s/about-collector-config-tutorial.rst b/gdi/opentelemetry/collector-kubernetes/collector-configuration-tutorial-k8s/about-collector-config-tutorial.rst
index 1b9596ef6..42c9edd99 100644
--- a/gdi/opentelemetry/collector-kubernetes/collector-configuration-tutorial-k8s/about-collector-config-tutorial.rst
+++ b/gdi/opentelemetry/collector-kubernetes/collector-configuration-tutorial-k8s/about-collector-config-tutorial.rst
@@ -1,7 +1,7 @@
 .. _about-collector-configuration-tutorial-k8s:

 *****************************************************************************************
-Tutorial: Configure the Splunk Distribution of OpenTelemetry Collector on Kubernetes
+Tutorial: Configure the Splunk Distribution of the OpenTelemetry Collector on Kubernetes
 *****************************************************************************************

 .. meta::
@@ -14,7 +14,7 @@ Tutorial: Configure the Splunk Distribution of OpenTelemetry Collector on Kubern
    collector-config-tutorial-start
    collector-config-tutorial-edit

-The Splunk Distribution of OpenTelemetry Collector is a :new-page:`distribution ` of the OpenTelemetry Collector that includes components, installers, and default settings so that it's ready to work with Splunk Observability Cloud.
+The Splunk Distribution of the OpenTelemetry Collector is a :new-page:`distribution ` of the OpenTelemetry Collector that includes components, installers, and default settings so that it's ready to work with Splunk Observability Cloud.

 Follow this tutorial for a walkthrough of configuring the Splunk Distribution of OpenTelemetry Collector to collect telemetry in common situations.
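+
+If you haven't deployed the Collector yet, install the Helm chart first. A minimal sketch of an installation, where the realm, access token, and cluster name are placeholder values to replace with your own:
+
+.. code-block:: bash
+
+   # Add the Splunk OpenTelemetry Collector chart repository and install the chart
+   helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart
+   helm repo update
+   helm install splunk-otel-collector \
+     --set="splunkObservability.realm=us0" \
+     --set="splunkObservability.accessToken=xxxxxx" \
+     --set="clusterName=my-k8s-cluster" \
+     splunk-otel-collector-chart/splunk-otel-collector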
diff --git a/gdi/opentelemetry/collector-kubernetes/collector-configuration-tutorial-k8s/collector-config-tutorial-edit.rst b/gdi/opentelemetry/collector-kubernetes/collector-configuration-tutorial-k8s/collector-config-tutorial-edit.rst
index 92daf2701..14e3df780 100644
--- a/gdi/opentelemetry/collector-kubernetes/collector-configuration-tutorial-k8s/collector-config-tutorial-edit.rst
+++ b/gdi/opentelemetry/collector-kubernetes/collector-configuration-tutorial-k8s/collector-config-tutorial-edit.rst
@@ -26,7 +26,6 @@ Download the default :new-page:`values.yaml
    Kubernetes (EKS Add-on)
    Configure with Helm
-   Advanced config
+   Add components and data sources
    Configure logs and events
+   Advanced configuration
    Default Kubernetes metrics
    Upgrade
    Uninstall
@@ -55,12 +56,13 @@ Optionally, you can also:

 Configure the Collector for Kubernetes
 ==========================================

-To configure the Collector, see:
+To configure the Collector, including adding additional components or activating automatic discovery, see:

 * :ref:`otel-kubernetes-config`
-* :ref:`otel-kubernetes-config-advanced`
-* :ref:`kubernetes-config-logs`
+* :ref:`kubernetes-config-add`
 * :ref:`discovery-mode-k8s`
+* :ref:`kubernetes-config-logs`
+* :ref:`otel-kubernetes-config-advanced`

 .. raw:: html
diff --git a/gdi/opentelemetry/collector-kubernetes/kubernetes-config-add.rst b/gdi/opentelemetry/collector-kubernetes/kubernetes-config-add.rst
new file mode 100644
index 000000000..abd2f19ee
--- /dev/null
+++ b/gdi/opentelemetry/collector-kubernetes/kubernetes-config-add.rst
@@ -0,0 +1,207 @@
+.. _kubernetes-config-add:
+
+**********************************************************************************************
+Configure the Collector for Kubernetes with Helm: Add components and data sources
+**********************************************************************************************
+
+.. meta::
+   :description: Optional configurations for the Splunk Distribution of OpenTelemetry Collector for Kubernetes: Add components or new data sources.
+
+Read on to learn how to add additional components or data sources to your Collector for Kubernetes config.
+
+For other config options, see:
+
+* :ref:`otel-kubernetes-config`
+* :ref:`discovery-mode-k8s`
+* :ref:`kubernetes-config-logs`
+* :ref:`otel-kubernetes-config-advanced`
+
+For a practical example of how to configure the Collector for Kubernetes, see :ref:`about-collector-configuration-tutorial-k8s`.
+
+.. _otel-kubernetes-config-add-components:
+
+Add additional components to the configuration
+======================================================
+
+To use any additional OTel component, integration, or legacy monitor, add it to the relevant configuration sections in the values.yaml file. Depending on your requirements, you might want to include it in the ``agent.config`` or the ``clusterReceiver.config`` section of the values.yaml. See more at :ref:`helm-chart-components`.
+
+For a full list of available components and how to configure them, see :ref:`otel-components`. For a list of available application integrations, see :ref:`monitor-data-sources`.
+
+How to collect data: agent or cluster receiver?
+-----------------------------------------------------------------------------
+
+Read the following table to decide which option to choose to collect your data:
+
+.. list-table::
+   :header-rows: 1
+   :width: 100%
+   :widths: 20 40 40
+
+   * -
+     - Collect via the Collector agent
+     - Collect via the Collector cluster receiver
+
+   * - Where is data collected?
+     - At the node level.
+     - At the Kubernetes service level, through a single point.
+
+   * - Advantages
+     - * Granularity: This option ensures that you capture the complete picture of your cluster's performance and health.
+       * Fault tolerance: If a node becomes isolated or experiences issues, its metrics are still being collected independently. This gives you visibility into problems affecting individual nodes.
+     - Simplicity: This option simplifies the setup and management.
+
+   * - Considerations
+     - Complexity: Managing and configuring agents on each node can increase operational complexity, specifically agent config file management.
+     - Incomplete data: This option might result in a partial view of your cluster's health and performance. If the service collects metrics only from a subset of nodes, you might miss critical metrics from parts of your cluster.
+
+   * - Use cases
+     - - Use this in environments where you need detailed insights into each node's operations. This allows for better issue diagnosis and performance optimization.
+       - Use this to collect metrics from application pods that have multiple replicas running across multiple nodes.
+     - Use this in environments where operational simplicity is a priority, or if your cluster is already simple and has only one node.
+
+Example: Add the MySQL receiver
+-----------------------------------------------------------------------------
+
+This example shows how to add the :ref:`mysql-receiver` to your configuration file.
+
+Add the MySQL receiver in the ``agent`` section
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To use the Collector agent daemonset to collect ``mysql`` metrics from every node the agent is deployed to, add this to your configuration:
+
+.. code:: yaml
+
+   agent:
+     config:
+       receivers:
+         mysql:
+           endpoint: localhost:3306
+           ...
+
+Add the MySQL receiver in the ``clusterReceiver`` section
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To use the Collector cluster receiver deployment to collect ``mysql`` metrics from a single endpoint, add this to your configuration:
+
+.. code:: yaml
+
+   clusterReceiver:
+     config:
+       receivers:
+         mysql:
+           endpoint: mysql-k8s-service:3306
+           ...
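+
+In both cases, the receiver only collects data once it's referenced in a pipeline. A minimal sketch, following the same pattern as the RabbitMQ example that follows and assuming you keep the default ``metrics`` pipeline name:
+
+.. code:: yaml
+
+   service:
+     pipelines:
+       metrics:
+         receivers:
+           # Reference the receiver defined in the agent or clusterReceiver section
+           - mysql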
+
+Example: Add the RabbitMQ monitor
+-----------------------------------------------------------------------------
+
+This example shows how to add the :ref:`rabbitmq` integration to your configuration file.
+
+Add RabbitMQ in the ``agent`` section
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If you want to activate the RabbitMQ monitor in the Collector agent daemonset, add ``smartagent/rabbitmq`` to the ``receivers`` section of your agent section in the configuration file:
+
+.. code:: yaml
+
+   agent:
+     config:
+       receivers:
+         smartagent/rabbitmq:
+           type: collectd/rabbitmq
+           host: localhost
+           port: 5672
+           username: otel
+           password: ${env:RABBITMQ_PASSWORD}
+
+Next, include the receiver in the ``metrics`` pipeline of the ``service`` section of your configuration file:
+
+.. code:: yaml
+
+   service:
+     pipelines:
+       metrics:
+         receivers:
+           - smartagent/rabbitmq
+
+Add RabbitMQ in the ``clusterReceiver`` section
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Similarly, if you want to activate the RabbitMQ monitor in the cluster receiver, add ``smartagent/rabbitmq`` to the ``receivers`` section of your cluster receiver section in the configuration file:
+
+.. code:: yaml
+
+   clusterReceiver:
+     config:
+       receivers:
+         smartagent/rabbitmq:
+           type: collectd/rabbitmq
+           host: rabbitmq-service
+           port: 5672
+           username: otel
+           password: ${env:RABBITMQ_PASSWORD}
+
+Next, include the receiver in the ``metrics`` pipeline of the ``service`` section of your configuration file:
+
+.. code:: yaml
+
+   service:
+     pipelines:
+       metrics:
+         receivers:
+           - smartagent/rabbitmq
+
+Activate discovery mode on the Collector
+============================================
+
+Use the discovery mode of the Splunk Distribution of OpenTelemetry Collector to detect metric sources and create
+a configuration based on the results.
+
+See :ref:`discovery-mode-k8s` for instructions on how to activate discovery mode in the Helm chart.
+
+.. _otel-kubernetes-config-resources:
+
+Add additional telemetry sources
+===========================================
+
+Use the ``autodetect`` configuration option to activate additional telemetry sources.
+
+Set ``autodetect.prometheus=true`` if you want the Collector to scrape Prometheus metrics from pods that have generic Prometheus-style annotations. Add the following annotations on pods to allow fine-grained control of the scraping process:
+
+* ``prometheus.io/scrape: true``: The default configuration scrapes all pods. If set to ``false``, this annotation excludes the pod from the scraping process.
+* ``prometheus.io/path``: The path to scrape the metrics from. The default value is ``/metrics``.
+* ``prometheus.io/port``: The port to scrape the metrics from. The default value is ``9090``.
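+
+For example, a pod that exposes its own metrics endpoint might be annotated as follows. This is an illustrative sketch: the pod name, image, path, and port are hypothetical values, not defaults:
+
+.. code:: yaml
+
+   apiVersion: v1
+   kind: Pod
+   metadata:
+     name: my-app  # hypothetical pod
+     annotations:
+       # Opt the pod in explicitly and point the scraper at the app's endpoint.
+       # Annotation values must be strings, so quote booleans and numbers.
+       prometheus.io/scrape: "true"
+       prometheus.io/path: "/custom-metrics"
+       prometheus.io/port: "8080"
+   spec:
+     containers:
+       - name: my-app
+         image: my-app:latest
+         ports:
+           - containerPort: 8080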
+
+If the Collector is running in an Istio environment, set ``autodetect.istio=true`` to make sure that all traces, metrics, and logs reported by Istio are collected in a unified manner.
+
+For example, use the following configuration to activate automatic detection of both Prometheus and Istio telemetry sources:
+
+.. code-block:: yaml
+
+   splunkObservability:
+     accessToken: xxxxxx
+     realm: us0
+   clusterName: my-k8s-cluster
+   autodetect:
+     istio: true
+     prometheus: true
+
+.. _otel-kubernetes-deactivate-telemetry:
+
+Deactivate particular types of telemetry
+============================================
+
+By default, OpenTelemetry sends only metrics and traces to Splunk Observability Cloud and sends only logs to Splunk Platform. You can activate or deactivate any kind of telemetry data collection for a specific destination.
+
+For example, the following configuration allows the Collector to send all collected telemetry data to Splunk Observability Cloud and the Splunk Platform if you've properly configured them:
+
+.. code-block:: yaml
+
+   splunkObservability:
+     metricsEnabled: true
+     tracesEnabled: true
+     logsEnabled: true
+   splunkPlatform:
+     metricsEnabled: true
+     logsEnabled: true
+
diff --git a/gdi/opentelemetry/collector-kubernetes/kubernetes-config-advanced.rst b/gdi/opentelemetry/collector-kubernetes/kubernetes-config-advanced.rst
index b7f101f59..56a93e62b 100644
--- a/gdi/opentelemetry/collector-kubernetes/kubernetes-config-advanced.rst
+++ b/gdi/opentelemetry/collector-kubernetes/kubernetes-config-advanced.rst
@@ -75,7 +75,8 @@ The following table shows which Kubernetes distributions support control plane m
 .. list-table::
    :header-rows: 1
-   :width: 60%
+   :width: 100%
+   :widths: 50 50

    * - Supported
      - Unsupported
@@ -134,6 +135,7 @@ The following example shows how to connect to a nonstandard API server that uses
    useHTTPS: true
    useServiceAccount: false

+.. _kubernetes-config-advanced-non-root:

 Run the container in non-root user mode
 ==================================================
@@ -151,6 +153,91 @@ To run the container in ``non-root`` user mode, use ``agent.securityContext`` to
 .. note:: Running the collector agent for log collection in non-root mode is not currently supported in CRI-O and OpenShift environments at this time. For more details, see the :new-page:`related GitHub feature request issue `.

+.. _kubernetes-config-advanced-tls-certificates:
+
+Configure custom TLS certificates
+==================================================
+
+If your organization requires custom TLS certificates for secure communication with the Collector, follow these steps:
+
+1. Create a Kubernetes secret containing the Root CA certificate, TLS certificate, and private key files
+---------------------------------------------------------------------------------------------------------------------
+
+Store your custom CA certificate, key, and cert files in a Kubernetes secret in the same namespace as your Splunk Helm chart.
+
+For example, you can run this command:
+
+.. code-block:: bash
+
+   kubectl create secret generic my-custom-tls --from-file=ca.crt=/path/to/custom_ca.crt --from-file=apiserver.key=/path/to/custom_key.key --from-file=apiserver.crt=/path/to/custom_cert.crt -n <namespace>
+
+.. note:: You are responsible for externally managing this secret, which is not part of the Splunk Helm chart deployment.
+
+2. Mount the secret in the Splunk Helm chart
+-----------------------------------------------------------------------------
+
+Apply this configuration to the ``agent``, ``clusterReceiver``, or ``gateway`` using the following Helm values:
+
+* ``agent.extraVolumes``, ``agent.extraVolumeMounts``
+* ``clusterReceiver.extraVolumes``, ``clusterReceiver.extraVolumeMounts``
+* ``gateway.extraVolumes``, ``gateway.extraVolumeMounts``
+
+Learn more about Helm components at :ref:`helm-chart-components`.
+
+For example:
+
+.. code-block:: yaml
+
+   agent:
+     extraVolumes:
+       - name: custom-tls
+         secret:
+           secretName: my-custom-tls
+     extraVolumeMounts:
+       - name: custom-tls
+         mountPath: /etc/ssl/certs/
+         readOnly: true
+
+   clusterReceiver:
+     extraVolumes:
+       - name: custom-tls
+         secret:
+           secretName: my-custom-tls
+     extraVolumeMounts:
+       - name: custom-tls
+         mountPath: /etc/ssl/certs/
+         readOnly: true
+
+   gateway:
+     extraVolumes:
+       - name: custom-tls
+         secret:
+           secretName: my-custom-tls
+     extraVolumeMounts:
+       - name: custom-tls
+         mountPath: /etc/ssl/certs/
+         readOnly: true
+
+3. Override your TLS configuration
+-----------------------------------------------------------------------------
+
+Update the TLS configuration for specific Collector components, such as the agent's ``kubeletstatsreceiver``, to use the mounted certificate, key, and CA files. The mounted files take the names of the secret's keys, so with the secret from step 1 the files are ``ca.crt``, ``apiserver.key``, and ``apiserver.crt``.
+
+For example:
+
+.. code-block:: yaml
+
+   agent:
+     config:
+       receivers:
+         kubeletstats:
+           auth_type: "tls"
+           ca_file: "/etc/ssl/certs/ca.crt"
+           key_file: "/etc/ssl/certs/apiserver.key"
+           cert_file: "/etc/ssl/certs/apiserver.crt"
+           insecure_skip_verify: true
+
+.. note:: To skip certificate checks, you can deactivate secure TLS checks per component. This option is not recommended for production environments due to security concerns.
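+
+After you apply these values (for example, with ``helm upgrade``), you can confirm that the secret's files are mounted at the expected path by listing the mount directory inside a running Collector pod. A minimal sketch, where ``<namespace>`` and ``<collector-agent-pod>`` are placeholders for your own values:
+
+.. code-block:: bash
+
+   # Expect ca.crt, apiserver.key, and apiserver.crt in the output
+   kubectl exec -n <namespace> <collector-agent-pod> -- ls /etc/ssl/certs/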

 Collect network telemetry using eBPF
 ==================================================
diff --git a/gdi/opentelemetry/collector-kubernetes/kubernetes-config.rst b/gdi/opentelemetry/collector-kubernetes/kubernetes-config.rst
index 986b8ed19..b5542a92e 100644
--- a/gdi/opentelemetry/collector-kubernetes/kubernetes-config.rst
+++ b/gdi/opentelemetry/collector-kubernetes/kubernetes-config.rst
@@ -7,7 +7,16 @@ Configure the Collector for Kubernetes with Helm
 .. meta::
    :description: Optional configurations for the Splunk Distribution of OpenTelemetry Collector for Kubernetes.

-After you've :ref:`installed the Collector for Kubernetes `, these are the available settings you can configure. Additionally, see also :ref:`the advanced configuration options ` and :ref:`otel-kubernetes-config-logs`.
+After you've :ref:`installed the Collector for Kubernetes `, read on to learn which settings you can configure.
+
+Additionally, see:
+
+* :ref:`kubernetes-config-add`
+* :ref:`discovery-mode-k8s`
+* :ref:`kubernetes-config-logs`
+* :ref:`otel-kubernetes-config-advanced`
+
+For a practical example of how to configure the Collector for Kubernetes, see :ref:`about-collector-configuration-tutorial-k8s`.

 .. caution::
@@ -190,139 +199,6 @@ For example:
     clusterName: my-k8s-cluster
     cloudProvider: aws

-.. _otel-kubernetes-config-add-components:
-
-Add additional components to the configuration
-======================================================
-
-To use any additional OTel component, integration or legacy monitor, add it the relevant sections of the configuration file. Depending on your requirements, you might want to include it in the ``agent`` or the ``clusterReceiver`` component section of the configuration. See more at :ref:`helm-chart-components`.
-
-For a full list of available components and how to configure them, see :ref:`otel-components`. For a list of available application integrations, see :ref:`monitor-data-sources`.
-
-How to collect data: agent or cluster receiver?
------------------------------------------------------------------------------
-
-Read the following table to decide which option to chose to collect your data:
-
-.. list-table::
-   :header-rows: 1
-   :width: 100%
-   :widths: 20 40 40
-
-   * -
-     - Collect via the Collector agent
-     - Collect via the Collector cluster receiver
-
-   * - Where is data collected?
-     - At the node level.
-     - At the Kubernetes service level, through a single point.
-
-   * - Advantages
-     - * Granularity: This option ensures that you capture the complete picture of your cluster's performance and health.
-       * Fault tolerance: If a node becomes isolated or experiences issues, its metrics are still being collected independently. This gives you visibility into problems affecting individual nodes.
-     - Simplicity: This option simplifies the setup and management.
-
-   * - Considerations
-     - Complexity: Managing and configuring agents on each node can increase operational complexity, specifically agent config file management.
-     - Uncomplete data: This option might result in a partial view of your cluster's health and performance. If the service collects metrics only from a subset of nodes, you might miss critical metrics from parts of your cluster.
-
-   * - Use cases
-     - - Use this in environments where you need detailed insights into each node's operations. This allows better issue diagnosing and optimizing performance.
-       - Use this to collect metrics from application pods that have multiple replicas that can be running on multiple nodes.
-     - Use this in environments where operational simplicity is a priority, or if your cluster is already simple and has only 1 node.
-
-Example: Add the MySQL receiver
------------------------------------------------------------------------------
-
-This example shows how to add the :ref:`mysql-receiver` to your configuration file.
-
-Add the MySQL receiver in the ``agent`` section
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-To use the Collector agent daemonset to collect ``mysql`` metrics from every node the agent is deployed to, add this to your configuration:
-
-.. code:: yaml
-
-   agent:
-     config:
-       receivers:
-         mysql:
-           endpoint: localhost:3306
-           ...
-
-Add the MySQL receiver in the ``clusterReceiver`` section
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-To use the Collector cluster receiver deployment to collect ``mysql`` metrics from a single endpoint, add this to your configuration:
-
-.. code:: yaml
-
-   clusterReceiver:
-     config:
-       receivers:
-         mysql:
-           endpoint: mysql-k8s-service:3306
-           ...
-
-Example: Add the Rabbit MQ monitor
------------------------------------------------------------------------------
-
-This example shows how to add the :ref:`rabbitmq` integration to your configuration file.
-
-Add RabbitMQ in the ``agent`` section
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-If you want to activate the RabbitMQ monitor in the Collector agent daemonset, add ``mysql`` to the ``receivers`` section of your agent section in the configuration file:
-
-.. code:: yaml
-
-   agent:
-     config:
-       receivers:
-         smartagent/rabbitmq:
-           type: collectd/rabbitmq
-           host: localhost
-           port: 5672
-           username: otel
-           password: ${env:RABBITMQ_PASSWORD}
-
-Next, include the receiver in the ``metrics`` pipeline of the ``service`` section of your configuration file:
-
-.. code:: yaml
-
-   service:
-     pipelines:
-       metrics:
-         receivers:
-           - smartagent/rabbitmq
-
-Add RabbitMQ in the ``clusterReceiver`` section
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Similarly, if you want to activate the RabbitMQ monitor in the cluster receiver, add ``mysql`` to the ``receivers`` section of your cluster receiver section in the configuration file:
-
-.. code:: yaml
-
-   clusterReceiver:
-     config:
-       receivers:
-         smartagent/rabbitmq:
-           type: collectd/rabbitmq
-           host: rabbitmq-service
-           port: 5672
-           username: otel
-           password: ${env:RABBITMQ_PASSWORD}
-
-Next, include the receiver in the ``metrics`` pipeline of the ``service`` section of your configuration file:
-
-.. code:: yaml
-
-   service:
-     pipelines:
-       metrics:
-         receivers:
-           - smartagent/rabbitmq

 .. _otel-kubernetes-config-hostnetwork:

 Configure the agent's use of the host network
 ===============================================
@@ -407,62 +283,6 @@ Instead of having the tokens as clear text in the config file, you can provide t
     create: false
     name: your-secret

-.. _otel-kubernetes-config-resources:
-
-Add additional telemetry sources
-===========================================
-
-Use the ``autodetect`` configuration option to activate additional telemetry sources.
-
-Set ``autodetect.prometheus=true`` if you want the Collector to scrape Prometheus metrics from pods that have generic Prometheus-style annotations. Add the following annotations on pods to allow a fine control of the scraping process:
-
-* ``prometheus.io/scrape: true``: The default configuration scrapes all pods. If set to ``false``, this annotation excludes the pod from the scraping process.
-* ``prometheus.io/path``: The path to scrape the metrics from. The default value is ``/metrics``.
-* ``prometheus.io/port``: The port to scrape the metrics from. The default value is ``9090``.
-
-If the Collector is running in an Istio environment, set ``autodetect.istio=true`` to make sure that all traces, metrics, and logs reported by Istio are collected in a unified manner.
-
-For example, use the following configuration to activate automatic detection of both Prometheus and Istio telemetry sources:
-
-.. code-block:: yaml
-
-   splunkObservability:
-     accessToken: xxxxxx
-     realm: us0
-   clusterName: my-k8s-cluster
-   autodetect:
-     istio: true
-     prometheus: true
-
-.. _otel-kubernetes-discovery-mode:
-
-Activate discovery mode on the Collector
-============================================
-
-Use the discovery mode of the Splunk Distribution of OpenTelemetry Collector to detect metric sources and create
-a configuration based on the results.
-
-See :ref:`discovery-mode-k8s` for instructions on how to activate discovery mode in the Helm chart.
-
-.. _otel-kubernetes-deactivate-telemetry:
-
-Deactivate particular types of telemetry
-============================================
-
-By default, OpenTelemetry sends only metrics and traces to Splunk Observability Cloud and sends only logs to Splunk Platform. You can activate or deactivate any kind of telemetry data collection for a specific destination.
-
-For example, the following configuration allows the Collector to send all collected telemetry data to Splunk Observability Cloud and the Splunk Platform if you've properly configured them:
-
-.. code-block:: yaml
-
-   splunkObservability:
-     metricsEnabled: true
-     tracesEnabled: true
-     logsEnabled: true
-   splunkPlatform:
-     metricsEnabled: true
-     logsEnabled: true
-
 Configure Windows worker nodes
 ===============================================
diff --git a/logs/forward-logs.rst b/logs/forward-logs.rst
index eab0ef3d8..5b29b6b3d 100644
--- a/logs/forward-logs.rst
+++ b/logs/forward-logs.rst
@@ -93,6 +93,16 @@ If you already set up Log Observer Connect, you do not need to add the necessary
       | 34.250.243.212/32
       | 54.171.237.247/32

+   * - eu1
+     - | 3.73.240.7/32
+       | 18.196.129.64/32
+       | 3.126.181.171/32
+
+   * - eu2
+     - | 13.41.86.83/32
+       | 52.56.124.93/32
+       | 35.177.204.133/32
+
 .. _authenticate-hec:

 Authenticate the connection to HEC in Splunk Observability Cloud