Merge pull request #1497 from splunk/repo-sync
Pulling refs/heads/main into main
aurbiztondo-splunk authored Aug 21, 2024
2 parents c587a0f + 81784f6 commit 9ba79ec
Showing 9 changed files with 338 additions and 202 deletions.
2 changes: 1 addition & 1 deletion _includes/logs/loc-availability.rst
@@ -1,3 +1,3 @@
Splunk Log Observer Connect is available in the AWS regions us0, us1, eu0, jp0, and au0, and in the GCP region us2. Splunk Log Observer Connect is compatible with Splunk Enterprise versions 9.0.1 and higher, and Splunk Cloud Platform versions 9.0.2209 and higher. Log Observer Connect is not available for Splunk Cloud Platform trials.
Splunk Log Observer Connect is available in the AWS regions us0, us1, eu0, eu1, eu2, jp0, and au0, and in the GCP region us2. Splunk Log Observer Connect is compatible with Splunk Enterprise versions 9.0.1 and higher, and Splunk Cloud Platform versions 9.0.2209 and higher. Log Observer Connect is not available for Splunk Cloud Platform trials.

You cannot access logs from a GovCloud environment through Log Observer Connect. However, you can use global data links to link from Log Observer Connect to your GovCloud environment where you can access your logs. For more information on global data links, see :ref:`link-metadata-to-content`.
8 changes: 8 additions & 0 deletions admin/notif-services/admin-notifs-index.rst
@@ -119,6 +119,14 @@ The following table contains a list of the IP addresses that you can use to allo
- * 108.128.26.145/32
* 34.250.243.212/32
* 54.171.237.247/32
* - eu1
- * 3.73.240.7
* 18.196.129.64
* 3.126.181.171
* - eu2
- * 13.41.86.83
* 52.56.124.93
* 35.177.204.133
* - jp0
- * 35.78.47.79/32
* 35.77.252.198/32
@@ -1,7 +1,7 @@
.. _about-collector-configuration-tutorial-k8s:

*****************************************************************************************
Tutorial: Configure the Splunk Distribution of OpenTelemetry Collector on Kubernetes
Tutorial: Configure the Splunk Distribution of the OpenTelemetry Collector on Kubernetes
*****************************************************************************************

.. meta::
@@ -14,7 +14,7 @@ Tutorial: Configure the Splunk Distribution of OpenTelemetry Collector on Kubern
collector-config-tutorial-start
collector-config-tutorial-edit

The Splunk Distribution of OpenTelemetry Collector is a :new-page:`distribution <https://docs.splunk.com/Splexicon:Distribution>` of the OpenTelemetry Collector that includes components, installers, and default settings so that it's ready to work with Splunk Observability Cloud.
The Splunk Distribution of the OpenTelemetry Collector is a :new-page:`distribution <https://docs.splunk.com/Splexicon:Distribution>` of the OpenTelemetry Collector that includes components, installers, and default settings so that it's ready to work with Splunk Observability Cloud.

Follow this tutorial for a walkthrough of configuring the Splunk Distribution of OpenTelemetry Collector to collect telemetry in common situations.

@@ -26,7 +26,6 @@ Download the default :new-page:`values.yaml <https://github.com/signalfx/splunk-

Take a moment to read through the values.yaml file and examine its structure. Notice how each section configures the Collector for different targets, such as Splunk Observability Cloud and Splunk Cloud Platform. The comments in the file indicate which values you can use and what their effects are.


Configure the Splunk HEC endpoint and token
============================================

@@ -156,7 +155,10 @@ This completes the tutorial. You created a local Kubernetes cluster, configured

To learn more about the Collector installation and components, see the following resources:

- :ref:`otel-install-k8s`
- :ref:`otel-kubernetes-config`
- :ref:`splunk-hec-exporter`
* :ref:`kubernetes-helm-architecture`
* :ref:`otel-install-k8s`
* :ref:`otel-kubernetes-config`
* :ref:`kubernetes-config-add`
* :ref:`splunk-hec-exporter`


@@ -18,8 +18,9 @@ Get started with the Collector for Kubernetes
Install with YAML manifests <install-k8s-manifests.rst>
Kubernetes (EKS Add-on) <install-k8s-addon-eks.rst>
Configure with Helm <kubernetes-config.rst>
Advanced config <kubernetes-config-advanced.rst>
Add components and data sources <kubernetes-config-add.rst>
Configure logs and events <kubernetes-config-logs.rst>
Advanced configuration <kubernetes-config-advanced.rst>
Default Kubernetes metrics <metrics-ootb-k8s.rst>
Upgrade <kubernetes-upgrade.rst>
Uninstall <kubernetes-uninstall.rst>
@@ -55,12 +56,13 @@ Optionally, you can also:
<h2>Configure the Collector for Kubernetes<a name="k8s-configure" class="headerlink" href="#k8s-configure" title="Permalink to this headline">¶</a></h2>
</embed>

To configure the Collector, see:
To configure the Collector, including adding additional components or activating automatic discovery, see:

* :ref:`otel-kubernetes-config`
* :ref:`otel-kubernetes-config-advanced`
* :ref:`kubernetes-config-logs`
* :ref:`kubernetes-config-add`
* :ref:`discovery-mode-k8s`
* :ref:`kubernetes-config-logs`
* :ref:`otel-kubernetes-config-advanced`

.. raw:: html

207 changes: 207 additions & 0 deletions gdi/opentelemetry/collector-kubernetes/kubernetes-config-add.rst
@@ -0,0 +1,207 @@
.. _kubernetes-config-add:

**********************************************************************************************
Configure the Collector for Kubernetes with Helm: Add components and data sources
**********************************************************************************************

.. meta::
:description: Optional configurations for the Splunk Distribution of OpenTelemetry Collector for Kubernetes: Add components or new data sources.

Read on to learn how to add additional components or data sources to your Collector for Kubernetes config.

For other config options, see:

* :ref:`otel-kubernetes-config`
* :ref:`discovery-mode-k8s`
* :ref:`kubernetes-config-logs`
* :ref:`otel-kubernetes-config-advanced`

For a practical example of how to configure the Collector for Kubernetes, see :ref:`about-collector-configuration-tutorial-k8s`.

.. _otel-kubernetes-config-add-components:

Add additional components to the configuration
======================================================

To use any additional OTel component, integration, or legacy monitor, add it to the relevant configuration section in the values.yaml file. Depending on your requirements, you might want to include it in the ``agent.config`` or the ``clusterReceiver.config`` section of the values.yaml file. See more at :ref:`helm-chart-components`.

For a full list of available components and how to configure them, see :ref:`otel-components`. For a list of available application integrations, see :ref:`monitor-data-sources`.
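As a sketch of where these additions live, the relevant sections of the values.yaml file look like this (the empty ``receivers`` maps are placeholders for your components):

.. code:: yaml

   agent:
     config:
       receivers: {}          # components collected on every node
   clusterReceiver:
     config:
       receivers: {}          # components collected once, at the cluster level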

How to collect data: agent or cluster receiver?
-----------------------------------------------------------------------------

Read the following table to decide which option to choose for collecting your data:

.. list-table::
   :header-rows: 1
   :width: 100%
   :widths: 20 40 40

   * -
     - Collect via the Collector agent
     - Collect via the Collector cluster receiver

   * - Where is data collected?
     - At the node level.
     - At the Kubernetes service level, through a single point.

   * - Advantages
     - * Granularity: This option ensures that you capture the complete picture of your cluster's performance and health.
       * Fault tolerance: If a node becomes isolated or experiences issues, its metrics are still collected independently. This gives you visibility into problems affecting individual nodes.
     - Simplicity: This option simplifies setup and management.

   * - Considerations
     - Complexity: Managing and configuring agents on each node can increase operational complexity, specifically agent config file management.
     - Incomplete data: This option might result in a partial view of your cluster's health and performance. If the service collects metrics only from a subset of nodes, you might miss critical metrics from parts of your cluster.

   * - Use cases
     - * Use this option in environments where you need detailed insights into each node's operations, allowing you to diagnose issues and optimize performance more effectively.
       * Use this option to collect metrics from application pods that have multiple replicas running on multiple nodes.
     - Use this option in environments where operational simplicity is a priority, or if your cluster is already simple and has only 1 node.

Example: Add the MySQL receiver
-----------------------------------------------------------------------------

This example shows how to add the :ref:`mysql-receiver` to your configuration file.

Add the MySQL receiver in the ``agent`` section
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To use the Collector agent daemonset to collect ``mysql`` metrics from every node the agent is deployed to, add this to your configuration:

.. code:: yaml

   agent:
     config:
       receivers:
         mysql:
           endpoint: localhost:3306
           ...

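A receiver only takes effect once it's also referenced in a pipeline. Following the same pattern this topic uses for RabbitMQ, a sketch of wiring the receiver into the ``metrics`` pipeline of the ``service`` section:

.. code:: yaml

   service:
     pipelines:
       metrics:
         receivers:
           - mysql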
Add the MySQL receiver in the ``clusterReceiver`` section
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To use the Collector cluster receiver deployment to collect ``mysql`` metrics from a single endpoint, add this to your configuration:

.. code:: yaml

   clusterReceiver:
     config:
       receivers:
         mysql:
           endpoint: mysql-k8s-service:3306
           ...

Example: Add the Rabbit MQ monitor
-----------------------------------------------------------------------------

This example shows how to add the :ref:`rabbitmq` integration to your configuration file.

Add RabbitMQ in the ``agent`` section
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If you want to activate the RabbitMQ monitor in the Collector agent daemonset, add the ``smartagent/rabbitmq`` receiver to the ``receivers`` section of the agent config in your configuration file:

.. code:: yaml

   agent:
     config:
       receivers:
         smartagent/rabbitmq:
           type: collectd/rabbitmq
           host: localhost
           port: 5672
           username: otel
           password: ${env:RABBITMQ_PASSWORD}

Next, include the receiver in the ``metrics`` pipeline of the ``service`` section of your configuration file:

.. code:: yaml

   service:
     pipelines:
       metrics:
         receivers:
           - smartagent/rabbitmq

Add RabbitMQ in the ``clusterReceiver`` section
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Similarly, if you want to activate the RabbitMQ monitor in the cluster receiver, add the ``smartagent/rabbitmq`` receiver to the ``receivers`` section of the cluster receiver config in your configuration file:

.. code:: yaml

   clusterReceiver:
     config:
       receivers:
         smartagent/rabbitmq:
           type: collectd/rabbitmq
           host: rabbitmq-service
           port: 5672
           username: otel
           password: ${env:RABBITMQ_PASSWORD}

Next, include the receiver in the ``metrics`` pipeline of the ``service`` section of your configuration file:

.. code:: yaml

   service:
     pipelines:
       metrics:
         receivers:
           - smartagent/rabbitmq

Activate discovery mode on the Collector
============================================

Use the discovery mode of the Splunk Distribution of OpenTelemetry Collector to detect metric sources and create
a configuration based on the results.

See :ref:`discovery-mode-k8s` for instructions on how to activate discovery mode in the Helm chart.

.. _otel-kubernetes-config-resources:

Add additional telemetry sources
===========================================

Use the ``autodetect`` configuration option to activate additional telemetry sources.

Set ``autodetect.prometheus=true`` if you want the Collector to scrape Prometheus metrics from pods that have generic Prometheus-style annotations. Add the following annotations on pods for fine control of the scraping process:

* ``prometheus.io/scrape: true``: By default, all pods are scraped. Set this annotation to ``false`` to exclude a pod from the scraping process.
* ``prometheus.io/path``: The path to scrape the metrics from. The default value is ``/metrics``.
* ``prometheus.io/port``: The port to scrape the metrics from. The default value is ``9090``.
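For example, a pod exposing Prometheus metrics on a nonstandard path and port might carry the following annotations (the pod name, path, and port are illustrative):

.. code:: yaml

   apiVersion: v1
   kind: Pod
   metadata:
     name: my-app            # illustrative pod name
     annotations:
       prometheus.io/scrape: "true"
       prometheus.io/path: /stats/metrics
       prometheus.io/port: "8080"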

If the Collector is running in an Istio environment, set ``autodetect.istio=true`` to make sure that all traces, metrics, and logs reported by Istio are collected in a unified manner.

For example, use the following configuration to activate automatic detection of both Prometheus and Istio telemetry sources:

.. code-block:: yaml

   splunkObservability:
     accessToken: xxxxxx
     realm: us0
     clusterName: my-k8s-cluster
   autodetect:
     istio: true
     prometheus: true

.. _otel-kubernetes-deactivate-telemetry:

Deactivate particular types of telemetry
============================================

By default, OpenTelemetry sends only metrics and traces to Splunk Observability Cloud and sends only logs to Splunk Platform. You can activate or deactivate any kind of telemetry data collection for a specific destination.

For example, the following configuration allows the Collector to send all collected telemetry data to Splunk Observability Cloud and the Splunk Platform if you've properly configured them:

.. code-block:: yaml

   splunkObservability:
     metricsEnabled: true
     tracesEnabled: true
     logsEnabled: true
   splunkPlatform:
     metricsEnabled: true
     logsEnabled: true
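Conversely, you can switch off a single signal for a destination. For example, a sketch that stops sending traces to Splunk Observability Cloud while keeping metrics:

.. code-block:: yaml

   splunkObservability:
     metricsEnabled: true
     tracesEnabled: false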