Shorter integration pages #564

Closed
wants to merge 17 commits
4 changes: 2 additions & 2 deletions docs/_include/general-shipping/k8s-all-data.md
@@ -1,7 +1,7 @@
## All telemetry (logs, metrics, traces and security reports) at once
## Send all telemetry data (logs, metrics, traces and security reports) at once


To enjoy the full Kubernetes 360 experience, you can send all your telemetry data to Logz.io using one single Helm chart:
Send all of your telemetry data using a single Helm chart:

```sh
helm repo add logzio-helm https://logzio.github.io/logzio-helm
90 changes: 45 additions & 45 deletions docs/_include/general-shipping/k8s.md
@@ -6,25 +6,22 @@
## Prerequisites

:::note
You can find your Logz.io configuration tokens, environment IDs, regions, and other required details [here](https://app.logz.io/#/dashboard/integrations/aws-eks).
Your Logz.io configuration tokens, environment IDs, regions, and other required details are [here](https://app.logz.io/#/dashboard/integrations/aws-eks).
:::

1. [Helm](https://helm.sh/)
* [Helm](https://helm.sh/)

Add Logzio-helm repository
* Add Logzio-helm repository

```sh
helm repo add logzio-helm https://logzio.github.io/logzio-helm && helm repo update
```
{@include: ../../_include/general-shipping/k8s-all-data.md}
```sh
helm repo add logzio-helm https://logzio.github.io/logzio-helm && helm repo update
```

## Send your logs
* {@include: ../../_include/general-shipping/k8s-all-data.md}

`logzio-monitoring` supports the following subcharts for log collection agent:
- `logzio-logs-collector`: Based on opentelemetry collector
- `logzio-fluentd`: Based on fluentd
## Send your logs

### Log collection with logzio-logs-collector
`logzio-monitoring` supports `logzio-logs-collector` for log collection, based on the OpenTelemetry Collector.

_Migrating to `logzio-monitoring` >=6.0.0_

@@ -40,8 +37,8 @@ logzio-monitoring logzio-helm/logzio-monitoring
```

### Log collection with logzio-fluentd
The `logzio-fluentd` chart is disabled by default in favor of the `logzio-logs-collector` chart for log collection.
Deploy `logzio-fluentd`, by adding the following `--set` flags:
The `logzio-fluentd` chart is disabled by default in favor of the `logzio-logs-collector` chart.
Deploy `logzio-fluentd` by adding the following `--set` flags:

```sh
helm install -n monitoring \
@@ -63,12 +60,12 @@ logzio-monitoring logzio-helm/logzio-monitoring
| `<<LOGZIO-REGION>>` | Logzio region. |


For log shipping troubleshooting, see our [user guide](https://docs.logz.io/docs/user-guide/log-management/troubleshooting/troubleshooting-fluentd-for-kubernetes-logs/).
If you encounter an issue, see our [troubleshooting guide](https://docs.logz.io/docs/user-guide/log-management/troubleshooting/troubleshooting-fluentd-for-kubernetes-logs/).

## Send your deploy events logs
## Send deployment events logs

This integration sends data about deployment events in the cluster, and how they affect the cluster's resources.
Currently supported resource kinds are `Deployment`, `Daemonset`, `Statefulset`, `ConfigMap`, `Secret`, `Service Account`, `Cluster Role` and `Cluster Role Binding`.
Send data about deployment events in the cluster, and how they affect its resources.
_Supported resource kinds are `Deployment`, `DaemonSet`, `StatefulSet`, `ConfigMap`, `Secret`, `ServiceAccount`, `ClusterRole` and `ClusterRoleBinding`._

```sh
helm install --namespace=monitoring \
@@ -90,7 +87,8 @@ logzio-monitoring logzio-helm/logzio-monitoring

### Deployment events versioning

To add a versioning indicator to our K8S 360 and Service Overview UI, the specified annotation must be included in the metadata of each resource whose versioning you wish to track. The 'View commit' button will link to the commit URL in your version control system (VCS) from the logzio/commit_url annotation value.
To add a versioning indicator in Kubernetes 360 and Service Overview, include the `logzio/commit_url` annotation in the resource metadata. The 'View commit' button will link to the commit URL in your version control system (VCS).


```yaml
metadata:
@@ -175,7 +173,9 @@ logzio-monitoring logzio-helm/logzio-monitoring
| `<<SPM-METRICS-SHIPPING-TOKEN>>` | Your [span metrics shipping token](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping). |

## Deploy both charts with span metrics and service graph
**Note** `serviceGraph.enabled=true` will have no effect unless `traces.enabled` & `spm.enabled=true` is also set to `true`

**Note:** `serviceGraph.enabled=true` will have no effect unless `traces.enabled` and `spm.enabled` are also set to `true`.

```sh
helm install -n monitoring \
--set metricsOrTraces.enabled=true \
@@ -189,8 +189,10 @@ helm install -n monitoring \
logzio-monitoring logzio-helm/logzio-monitoring
```

#### Deploy metrics chart with Kuberenetes object logs correlation
**Note** `k8sObjectsConfig.enabled=true` will have no effect unless `metrics.enabled` is also set to `true`
#### Deploy metrics chart with Kubernetes object logs correlation

**Note:** `k8sObjectsConfig.enabled=true` will have no effect unless `metrics.enabled` is also set to `true`.

```sh
helm install \
--set logzio-k8s-telemetry.metrics.enabled=true \
@@ -221,36 +223,36 @@ helm install -n monitoring \
| `<<CLUSTER-NAME>>` | The cluster's name, to easily identify the telemetry data for each environment. |


## Modifying the configuration for logs
## Modifying log configuration

You can see a full list of the possible configuration values in the [logzio-fluentd Chart folder](https://github.com/logzio/logzio-helm/tree/master/charts/fluentd#configuration).
View the full list of the possible configuration values in the [logzio-fluentd Chart folder](https://github.com/logzio/logzio-helm/tree/master/charts/fluentd#configuration).

If you would like to modify any of the values found in the `logzio-fluentd` folder, use the `--set` flag with the `logzio-fluentd` prefix.
To modify values in the `logzio-fluentd` folder, use the `--set` flag with the `logzio-fluentd` prefix.

For instance, if there is a parameter called `someField` in the `logzio-telemetry`'s `values.yaml` file, you can set it by adding the following to the `helm install` command:
For example, for a parameter called `someField` in the `logzio-fluentd` `values.yaml` file, set it by adding the following to the `helm install` command:

```sh
--set logzio-fluentd.someField="my new value"
```
You can add `log_type` annotation with a custom value, which will be parsed into a `log_type` field with the same value.

Adding a `log_type` annotation with a custom value will result in a `log_type` field with the same value.
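For example, a sketch of the annotation on a pod spec, assuming the annotation key is `log_type` as described above (the value is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  annotations:
    log_type: "my-app-logs"   # shipped logs will carry log_type: my-app-logs
spec:
  containers:
    - name: my-app
      image: my-app:latest    # placeholder image
```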

### Modifying the configuration for metrics and traces

You can see a full list of the possible configuration values in the [logzio-telemetry Chart folder](https://github.com/logzio/logzio-helm/tree/master/charts/logzio-telemetry).
### Modifying metrics and traces configuration

If you would like to modify any of the values found in the `logzio-telemetry` folder, use the `--set` flag with the `logzio-k8s-telemetry` prefix.
View the full list of the possible configuration values in the [logzio-telemetry Chart folder](https://github.com/logzio/logzio-helm/tree/master/charts/logzio-telemetry).

For instance, if there is a parameter called `someField` in the `logzio-telemetry`'s `values.yaml` file, you can set it by adding the following to the `helm install` command:
To modify values found in the `logzio-telemetry` folder, use the `--set` flag with the `logzio-k8s-telemetry` prefix.

For example, for a parameter called `someField` in the `logzio-telemetry`'s `values.yaml` file, set it by adding the following to the `helm install` command:

```sh
--set logzio-k8s-telemetry.someField="my new value"
```

## Sending telemetry data from eks on fargate
## Sending telemetry data from EKS on Fargate

To ship logs from pods running on Fargate, set the `fargateLogRouter.enabled` value to `true`. Doing so will deploy a dedicated `aws-observability` namespace and a `configmap` for the Fargate log router. For more information on EKS Fargate logging, please refer to the [official AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/fargate-logging.html).
Set the `fargateLogRouter.enabled` value to `true`. This deploys a dedicated `aws-observability` namespace and a `configmap` for the Fargate log router. Read more on EKS Fargate logging in the [official AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/fargate-logging.html).

```shell
helm install -n monitoring \
@@ -284,9 +286,7 @@ logzio-monitoring logzio-helm/logzio-monitoring

## Handling image pull rate limit

In certain situations, such as with spot clusters where pods/nodes are frequently replaced, you may encounter the pull rate limit for images fetched from Docker Hub. This could result in the following error: `You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limits`.

To address this issue, you can use the `--set` commands provided below in order to access an alternative image repository:
Docker Hub pull rate limits could result in the following error: `You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limits`. To avoid this, use the `--set` commands below to access an alternative image repository:

```shell
--set logzio-k8s-telemetry.image.repository=ghcr.io/open-telemetry/opentelemetry-collector-releases/opentelemetry-collector-contrib
@@ -298,15 +298,16 @@

## Upgrading logzio-monitoring to v3.0.0

Before upgrading your logzio-monitoring Chart to v3.0.0 with `helm upgrade`, note that you may encounter an error for some of the logzio-telemetry sub-charts.
Before upgrading your logzio-monitoring chart to v3.0.0 with `helm upgrade`, you might encounter errors with some logzio-telemetry sub-charts.

You have two options:

There are two possible approaches to the upgrade you can choose from:
- Reinstall the chart.
- Before running the `helm upgrade` command, delete the old subcharts resources: `logzio-monitoring-prometheus-pushgateway` deployment and the `logzio-monitoring-prometheus-node-exporter` daemonset.
- Before running `helm upgrade`, delete the old subcharts resources: the `logzio-monitoring-prometheus-pushgateway` deployment and the `logzio-monitoring-prometheus-node-exporter` daemonset (see the example below).
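For the second option, a minimal sketch of the cleanup (the `monitoring` namespace and the default release name are assumptions; adjust them to your installation):

```shell
# Delete the old sub-chart resources before upgrading (resource names assume the default release name)
kubectl delete deployment logzio-monitoring-prometheus-pushgateway -n monitoring
kubectl delete daemonset logzio-monitoring-prometheus-node-exporter -n monitoring
```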

## Configuring logs in JSON format

This configuration sets up a log processor to parse, restructure, and clean JSON-formatted log messages for streamlined analysis and monitoring:
Set up a log processor to parse JSON logs:

```json
<filter **>
@@ -320,17 +321,16 @@ This configuration sets up a log processor to parse, restructure, and clean JSON
</filter>
```

## Adding metric names to K8S 360 filter
## Adding metric names to Kubernetes 360 filter

To customize the metrics collected by Prometheus in your Kubernetes environment, you need to modify the `prometheusFilters` configuration in your Helm chart.
Customize Prometheus metrics in your Kubernetes environment by modifying the `prometheusFilters` configuration in your Helm chart.

### Identify metrics to keep
**1. Identify metrics to keep**

Decide which metrics you need to add to your collection, formatted as a regex string (e.g., `new_metric_1|new_metric_2`).

### Set filters
**2. Set filters**

Run the following command:

```shell
helm upgrade <RELEASE_NAME> logzio-helm/logzio-monitoring \
Expand Up @@ -2,7 +2,7 @@ Replace the placeholders in the code block (indicated by the double angle bracke

| Environment variable | Description |Required/Default|
|---|---|---|
|url| The Logz.io Listener URL for for your region, configured to use port **8050** (default), or port **8052** for http traffic, or port **8053** for https traffic. For more details, see the [Prometheus configuration file remote write reference. ](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write) | Required|
|url| The Logz.io Listener URL for your region, configured to use port **8050** (default), or port **8052** for http traffic, or port **8053** for https traffic. For more details, see the [Prometheus configuration file remote write reference](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write). | Required|
|Bearer|The Logz.io Prometheus Metrics account token. | Required|


2 changes: 1 addition & 1 deletion docs/_include/metric-shipping/custom-dashboard.html
@@ -1 +1 @@
Log in to your Logz.io account and navigate to the current instructions page [inside the Logz.io app](https://app.logz.io/#/dashboard/send-your-data/prometheus-sources/{{page.slug}}).
Navigate to the instructions page within [the Logz.io app](https://app.logz.io/#/dashboard/send-your-data/prometheus-sources/{{page.slug}}).
@@ -1,3 +1,3 @@
| remote_write | The remote write section configuration sets Logz.io as the endpoint for your Prometheus metrics data. Place this section at the same indentation level as the `global` section. ||
|url| The Logz.io Listener URL for for your region, configured to use port **8052** for http traffic, or port **8053** for https traffic. For more details, see the [Prometheus configuration file remote write reference. ](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write) | Required|
|url| The Logz.io Listener URL for your region, configured to use port **8052** for http traffic, or port **8053** for https traffic. For more details, see the [Prometheus configuration file remote write reference](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write). | Required|
|bearer_token|The Logz.io Prometheus Metrics account token. | Required|
4 changes: 2 additions & 2 deletions docs/_include/p8s-shipping/remotewrite-syd-userguide.md
@@ -34,7 +34,7 @@ Add the following parameters to your Prometheus yaml file:
| external_labels | Parameters to tag the metrics from this specific Prometheus server. | |
| p8s_logzio_name |Use the value of the parameter `p8s_logzio_name` to identify from which Prometheus environment the metrics are arriving to Logz.io. Replace the `<labelvalue>` placeholder with a label that will be added to all the metrics that are sent from this specific Prometheus server. | |
| remote_write | The remote write section configuration sets Logz.io as the endpoint for your Prometheus metrics data. Place this section at the same indentation level as the `global` section. ||
|url| The Logz.io Listener URL for for your region, configured to use port **8052** for http traffic, or port **8053** for https traffic. For more details, see the [Prometheus configuration file remote write reference. ](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write) | Required|
|url| The Logz.io Listener URL for your region, configured to use port **8052** for http traffic, or port **8053** for https traffic. For more details, see the [Prometheus configuration file remote write reference](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write). | Required|
|bearer_token|The Logz.io Prometheus Metrics account token. | Required|
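As a reference, a minimal `remote_write` sketch matching this table (the listener host and token are placeholders; use the values for your region and account):

```yaml
global:
  external_labels:
    p8s_logzio_name: <labelvalue>
remote_write:
  - url: https://<<LISTENER-HOST>>:8053
    bearer_token: <<PROMETHEUS-METRICS-SHIPPING-TOKEN>>
```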


@@ -64,7 +64,7 @@ If you want to use a `bearer_token_file` to configure your Prometheus account, c
| external_labels | Parameters to tag the metrics from this specific Prometheus server. |
| p8s_logzio_name |Use the value of the parameter `p8s_logzio_name` to identify from which Prometheus environment the metrics are arriving to Logz.io. Replace the `<labelvalue>` placeholder with a label that will be added to all the metrics that are sent from this specific Prometheus server. |
| remote_write | The remote write section configuration sets Logz.io as the endpoint for your Prometheus metrics data. Place this section at the same indentation level as the `global` section. |
|url| The Logz.io Listener URL for for your region, configured to use port **8052** for http traffic, or port **8053** for https traffic. For more details, see the [Prometheus configuration file remote write reference. ](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write) | Required|
|url| The Logz.io Listener URL for your region, configured to use port **8052** for http traffic, or port **8053** for https traffic. For more details, see the [Prometheus configuration file remote write reference](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write). | Required|
|bearer_token_file|The file path that holds Logz.io Prometheus Metrics account token. | Required|

```yaml
11 changes: 9 additions & 2 deletions docs/_include/tracing-shipping/collector-run-note.md
@@ -1,3 +1,10 @@
:::note
Normally, when you run the OTEL collector in a Docker container, your application will run in separate containers on the same host. In this case, you need to make sure that all your application containers share the same network as the OTEL collector container. One way to achieve this, is to run all containers, including the OTEL collector, with a Docker-compose configuration. Docker-compose automatically makes sure that all containers with the same configuration are sharing the same network.
:::
When you run the OTEL collector in a Docker container, your application typically runs in separate containers on the same host. **Make sure all containers share the same network as the OTEL collector container.** Using Docker Compose handles this automatically, as all containers defined in the same Compose file share a network by default.
:::
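A minimal Docker Compose sketch of this setup (service names, the application image, and the config path are illustrative):

```yaml
# docker-compose.yaml - services defined together share the default Compose network
services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib:0.78.0
    volumes:
      - ./config.yaml:/etc/otelcol-contrib/config.yaml
  my-app:
    image: my-app:latest   # your instrumented application (placeholder)
    environment:
      # the app reaches the collector by its Compose service name
      - OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4317
```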







11 changes: 4 additions & 7 deletions docs/_include/tracing-shipping/docker.md
@@ -68,7 +68,7 @@ service:

{@include: ../../_include/tracing-shipping/replace-tracing-token.html}

##### Tail Sampling
### Tail Sampling

{@include: ../../_include/tracing-shipping/tail-sampling.md}

@@ -158,14 +158,11 @@ service:
{@include: ../../_include/tracing-shipping/replace-tracing-token.html}


{@include: ../../_include/tracing-shipping/tail-sampling.md}


##### Run the container
#### Run the container

Mount the `config.yaml` file as a volume in the `docker run` command and run it as follows.

###### Linux
##### Linux

```
docker run \
@@ -177,7 +174,7 @@ otel/opentelemetry-collector-contrib:0.78.0

Replace `<PATH-TO>` with the path to the `config.yaml` file on your system.

###### Windows
##### Windows

```
docker run \
4 changes: 2 additions & 2 deletions docs/_include/tracing-shipping/dotnet-steps.md
@@ -1,4 +1,4 @@
##### Download instrumentation packages
#### Download instrumentation packages

Run the following command from the application directory:

Expand All @@ -9,7 +9,7 @@ dotnet add package OpenTelemetry.Instrumentation.AspNetCore
dotnet add package OpenTelemetry.Extensions.Hosting
```

##### Enable instrumentation in the code
#### Enable instrumentation in the code

Add the following configuration to the beginning of the Startup.cs file:

9 changes: 5 additions & 4 deletions docs/_include/tracing-shipping/tail-sampling.md
@@ -1,10 +1,11 @@
The `tail_sampling` defines the decision to sample a trace after the completion of all the spans in a request. By default, this configuration collects all traces that have a span that was completed with an error, all traces that are slower than 1000 ms, and 10% of the rest of the traces.
`tail_sampling` defines which traces to sample after all spans in a request are completed. By default, it collects all traces with an error span, traces slower than 1000 ms, and 10% of all other traces.

You can add more policy configurations to the processor. For more on this, refer to [OpenTelemetry Documentation](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/tailsamplingprocessor/README.md).

Additional policy configurations can be added to the processor. For more details, refer to the [OpenTelemetry Documentation](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/tailsamplingprocessor/README.md).

The configurable parameters in the Logz.io default configuration are:

| Parameter | Description | Default |
|---|---|---|
| threshold_ms | Threshold for the spand latency - all traces slower than the threshold value will be filtered in. | 1000 |
| sampling_percentage | Sampling percentage for the probabilistic policy. | 10 |
| threshold_ms | Threshold for the span latency - traces slower than this value will be included. | 1000 |
| sampling_percentage | Percentage of traces to sample using the probabilistic policy. | 10 |
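For reference, a sketch of the default policy set described above in OpenTelemetry Collector configuration (policy names are illustrative):

```yaml
processors:
  tail_sampling:
    policies:
      - name: error-traces          # keep every trace that contains an error span
        type: status_code
        status_code:
          status_codes: [ERROR]
      - name: slow-traces           # keep traces slower than threshold_ms
        type: latency
        latency:
          threshold_ms: 1000
      - name: sample-the-rest       # keep a percentage of the remaining traces
        type: probabilistic
        probabilistic:
          sampling_percentage: 10
```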