diff --git a/docs/shipping/AWS/aws-api-gateway.md b/docs/shipping/AWS/aws-api-gateway.md index e66196e2..2589cfa7 100644 --- a/docs/shipping/AWS/aws-api-gateway.md +++ b/docs/shipping/AWS/aws-api-gateway.md @@ -1,7 +1,7 @@ --- id: AWS-API-Gateway title: AWS API Gateway -overview: This integration creates a Kinesis Data Firehose delivery stream that links to your Amazon MQ metrics stream and then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags. +overview: This integration creates a Kinesis Data Firehose delivery stream that links to your Amazon API Gateway metrics stream and then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags. product: ['metrics'] os: [] filters: ['AWS', 'Access Management', 'Most Popular'] @@ -14,6 +14,92 @@ metrics_alerts: [] drop_filter: [] --- +## Logs + + +:::note +For a much easier and more efficient way to collect and send telemetry, consider using the [Logz.io telemetry collector](https://app.logz.io/#/dashboard/send-your-data/agent/new). +::: + + +## Configure AWS to forward logs to Logz.io + +This integration uses Fluentd in a Docker container to forward logs from your Amazon Elastic Container Service (ECS) cluster to Logz.io. + +:::note +This integration refers to an EC2-based cluster. For Fargate-based cluster see [our Fargate documentation](https://docs.logz.io/shipping/log-sources/fargate.html). +::: + + +:::caution Important +Fluentd will fetch all existing logs, as it is not able to ignore older logs. 
+:::
+
+### Automated CloudFormation deployment
+
+
+#### Configure and create your stack
+
+Click the button that matches your AWS region, then follow the instructions below:
+
+| AWS Region | Launch button |
+| --- | --- |
+| `us-east-1` | [![Deploy to AWS](https://dytvr9ot2sszz.cloudfront.net/logz-docs/lights/LightS-button.png)](https://console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks/create/template?templateURL=https://logzio-aws-integrations-us-east-1.s3.amazonaws.com/logzio-aws-ecs/1.0.0/auto-deployment.json&stackName=logzio-aws-ecs-auto-deployment) |
+
+:::note
+If your region is not on the list, let us know in the [repo's issues](https://github.com/logzio/logzio-aws-ecs/issues), or reach out to the Logz.io support team!
+:::
+
+
+##### In screen **Step 1 Specify template**:
+
+Keep the defaults and click **Next**.
+
+![Screen_1](https://dytvr9ot2sszz.cloudfront.net/logz-docs/ecs/screen_01.png)
+
+##### In screen **Step 2 Specify stack details**:
+
+1. For **Stack name**, either keep the default or enter a new stack name.
+
+2. For **LogzioListener**, choose your Logz.io listener from the list.
+
+3. For **LogzioToken**, insert your Logz.io logs shipping token.
+
+4. Click **Next**.
+
+![Screen_2](https://dytvr9ot2sszz.cloudfront.net/logz-docs/ecs/screen_02.png)
+
+##### In screen **Step 3 Configure stack options** (Optional):
+
+Optionally, add your custom tags, then click **Next**.
+
+##### In screen **Step 4 Review**:
+
+Scroll down and click **Create stack**.
+
+**Give your stack a few moments to launch.**
+
+#### Run the task
+
+1. Go to your AWS ECS page, and on the left menu, click **Task Definitions**, then choose the task you just created.
+
+2. Click the **Actions** button, then choose **Run Task**.
+
+3. In the **Run Task** screen, choose **EC2** as your **Launch type**.
+
+4. Choose the cluster you want to ship logs from.
+
+5. For **Placement Templates**, choose **One Task Per Host**.
+ +6. Click on **Run Task**. + +#### Check Logz.io for your logs + +Give your logs some time to get from your system to ours, and then open [Open Search Dashboards](https://app.logz.io/#/dashboard/osd). + + +## Metrics + :::note For a much easier and more efficient way to collect and send metrics, consider using the [Logz.io telemetry collector](https://app.logz.io/#/dashboard/send-your-data/agent/new). diff --git a/docs/shipping/Containers/kubernetes-events.md b/docs/shipping/Containers/kubernetes-events.md deleted file mode 100644 index 689f2da4..00000000 --- a/docs/shipping/Containers/kubernetes-events.md +++ /dev/null @@ -1,77 +0,0 @@ ---- -id: Kubernetes-events -title: Kubernetes Events -overview: This guide uses the kubernetes-event-exporter tool to ship kubernetes events to Logz.io. -product: ['metrics'] -os: ['windows', 'linux'] -filters: ['Containers', 'Orchestration'] -logo: https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/kubernetes.svg -logs_dashboards: [] -logs_alerts: [] -logs2metrics: [] -metrics_dashboards: [] -metrics_alerts: [] -drop_filter: [] ---- - - -### Shipping Kubernetes Events - -Kubernetes Events are a resource type that Kubernetes automatically creates when other resources get state changes, errors, or other messages that should be shared across the system. - -This guide uses the [kubernetes-event-exporter](https://github.com/opsgenie/kubernetes-event-exporter) tool to ship kubernetes events to Logz.io. 
- -##### Sending logs from nodes with taints - -If you want to ship logs from any of the nodes that have a taint, make sure that the taint key values are listed in your in your daemonset/deployment configuration as follows: - -```yaml -tolerations: -- key: - operator: - value: - effect: -``` - -To determine if a node uses taints as well as to display the taint keys, run: - -``` -kubectl get nodes -o json | jq ".items[]|{name:.metadata.name, taints:.spec.taints}" -``` - - - - -#### Create monitoring namespace - -```shell -kubectl create namespace monitoring -``` - -#### Store your Logz.io credentials -Save your Logz.io shipping credentials as a Kubernetes secret. To do this, customize the sample command below to your specifics and run it. - -```shell -kubectl create secret generic logzio-events-secret \ - --from-literal=logzio-log-shipping-token='<>' \ - --from-literal=logzio-log-listener='<>' \ - -n monitoring -``` - -* {@include: ../../_include/log-shipping/listener-var.html} -* {@include: ../../_include/log-shipping/log-shipping-token.html} - -#### Deploy - -```shell -kubectl apply -f https://raw.githubusercontent.com/logzio/logz-docs/master/shipping-config-samples/k8s-events.yaml -``` - -#### Check Logz.io for your events - -Give your events some time to get from your system to ours, and then open [Open Search Dashboards](https://app.logz.io/#/dashboard/osd). - -If you still don't see your logs, see [Kubernetes log shipping troubleshooting](https://docs.logz.io/user-guide/kubernetes-troubleshooting/). - - - diff --git a/docs/shipping/Containers/kubernetes.md b/docs/shipping/Containers/kubernetes.md index aec2f657..7d795d5d 100644 --- a/docs/shipping/Containers/kubernetes.md +++ b/docs/shipping/Containers/kubernetes.md @@ -15,82 +15,219 @@ drop_filter: [] --- - - - The logzio-monitoring Helm Chart ships your Kubernetes telemetry (logs, metrics, traces and security reports) to your Logz.io account. + ## Prerequisites 1. 
[Helm](https://helm.sh/)
+
Add the `logzio-helm` repository: `helm repo add logzio-helm https://logzio.github.io/logzio-helm && helm repo update`

## Send your logs
+
+
```sh
helm install -n monitoring \
--set logs.enabled=true \
---set logzio-fluentd.secrets.logzioShippingToken="<>" \
---set logzio-fluentd.secrets.logzioListener="<>" \
+--set logzio-fluentd.secrets.logzioShippingToken="{@include: ../../_include/log-shipping/log-shipping-token.html}" \
+--set logzio-fluentd.secrets.logzioListener="{@include: ../../_include/log-shipping/listener-var.html}" \
--set logzio-fluentd.env_id="<>" \
+--set logzio-fluentd.fargateLogRouter.enabled=true \
logzio-monitoring logzio-helm/logzio-monitoring
```

| Parameter | Description |
| --- | --- |
-| `<>` | Your [logs shipping token](https://app.logz.io/#/dashboard/settings/general). |
+| `<>` | Your [logs shipping token](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping). |
| `<>` | Your account's [listener host](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping?product=logs). |
| `<>` | The cluster's name, to easily identify the telemetry data for each environment. |

+### Adding a custom log_type field from attribute
+To add a `log_type` field with a custom value to each log, use the annotation key `log_type` with a custom value. The annotation is automatically parsed into a `log_type` field with the provided value.
+For example:
+```
+...
+  metadata:
+    annotations:
+      log_type: "my_type"
+```
+will result in the following log (JSON):
+```
+{
+...
+,"log_type": "my_type"
+...
+}
+```
+
+
+### Configuring Fluentd to concatenate multiline logs using a plugin
+
+Fluentd splits multiline logs by default. If your original logs span multiple lines, you may find that they arrive in your Logz.io account split into several partial logs.
+
+The Logz.io Docker image comes with a pre-built Fluentd filter plug-in that can be used to concatenate multiline logs.
The plug-in is named `fluent-plugin-concat`, and you can view the full list of configuration options in the [GitHub project](https://github.com/fluent-plugins-nursery/fluent-plugin-concat).
+
+### Example
+
+The following is an example of a multiline log sent from a deployment on a k8s cluster:
+
+```shell
+2021-02-08 09:37:51,031 - errorLogger - ERROR - Traceback (most recent call last):
+File "./code.py", line 25, in my_func
+1/0
+ZeroDivisionError: division by zero
+```
+
+Fluentd's default configuration will split the above log into 4 logs, 1 for each line of the original log. In other words, each line break (`\n`) causes a split.
+
+To avoid this, you can use the `fluent-plugin-concat` plug-in and customize the configuration to meet your needs. The additional configuration is added to:
+
+* `kubernetes.conf` for RBAC/non-RBAC DaemonSet
+* `kubernetes-containerd.conf` for Containerd DaemonSet
+
+For the above example, we could use the following regex expression to identify the start of our example log:
+
+
+```shell
+<filter **>
+  @type concat
+  key message # The key for part of multiline log
+  multiline_start_regexp /^[0-9]{4}-[0-9]{2}-[0-9]{2}/ # This regex expression identifies line starts.
+</filter>
+```
+
+### Monitoring Fluentd with Prometheus
+To monitor Fluentd and collect its input and output metrics, you can enable the Prometheus configuration with the `logzio-fluentd.daemonset.fluentdPrometheusConf` and `logzio-fluentd.windowsDaemonset.fluentdPrometheusConf` parameters (both default to `false`).
+When the Prometheus configuration is enabled, the pod collects and exposes Fluentd metrics on port `24231`, at the `/metrics` endpoint.
+
+### Modifying the configuration
+
+You can see a full list of the possible configuration values in the [logzio-fluentd Chart folder](https://github.com/logzio/logzio-helm/tree/master/charts/fluentd#configuration).
+
+If you would like to modify any of the values found in the `logzio-fluentd` folder, use the `--set` flag with the `logzio-fluentd` prefix.
+
+For instance, if there is a parameter called `someField` in the `logzio-fluentd`'s `values.yaml` file, you can set it by adding the following to the `helm install` command:
+
+```sh
+--set logzio-fluentd.someField="my new value"
+```
+You can add a `log_type` annotation with a custom value, which will be parsed into a `log_type` field with the same value.
+
+### Sending logs from nodes with taints
+
+If you want to ship logs from any of the nodes that have a taint, make sure that the taint key values are listed in your daemonset/deployment configuration as follows:
+
+```yaml
+tolerations:
+- key: 
+  operator: 
+  value: 
+  effect: 
+```
+
+To determine if a node uses taints as well as to display the taint keys, run:
+
+```
+kubectl get nodes -o json | jq ".items[]|{name:.metadata.name, taints:.spec.taints}"
+```
+

For troubleshooting log shipping, see our [user guide](https://docs.logz.io/user-guide/kubernetes-troubleshooting/).

## Send your Metrics

```sh
helm install -n monitoring \
--set metricsOrTraces.enabled=true \
--set logzio-k8s-telemetry.metrics.enabled=true \
---set logzio-k8s-telemetry.secrets.MetricsToken="<>" \
---set logzio-k8s-telemetry.secrets.ListenerHost="https://<>:8053" \
+--set logzio-k8s-telemetry.secrets.MetricsToken="{@include: ../../_include/p8s-shipping/replace-prometheus-token.html}" \
+--set logzio-k8s-telemetry.secrets.ListenerHost="{@include: ../../_include/p8s-shipping/replace-prometheus-listener.html}" \
--set logzio-k8s-telemetry.secrets.p8s_logzio_name="<>" \
--set logzio-k8s-telemetry.secrets.env_id="<>" \
+--set logzio-k8s-telemetry.collector.mode=standalone \
logzio-monitoring logzio-helm/logzio-monitoring
```

| Parameter | Description |
| --- | --- |
-| `<>` | Your [metrics shipping token](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping). |
+| `<>` | Your [metrics shipping token](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping?product=metrics). |
+| `<>` | The name for the environment's metrics, to easily identify the metrics for each environment. |
| `<>` | The cluster's name, to easily identify the telemetry data for each environment. |
| `<>` | Your account's [listener host](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping?product=logs). |

For troubleshooting metrics shipping, see our [user guide](https://docs.logz.io/user-guide/infrastructure-monitoring/troubleshooting/k8-helm-opentelemetry-troubleshooting.html).
+
+
+### Customize the metrics collected by the Helm chart
+The default configuration uses the Prometheus receiver with the following scrape jobs:
+* cAdvisor: scrapes container metrics.
+* Kubernetes service endpoints: these jobs scrape metrics from the node exporters, from Kube state metrics, from any other service for which the `prometheus.io/scrape: true` annotation is set, and from services that expose Prometheus metrics at the `/metrics` endpoint.
+To customize your configuration, edit the `config` section in the `values.yaml` file.
+
+
+### Check Logz.io for your metrics
+Give your metrics some time to get from your system to ours.
+
+
+{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics.
+
+
+
+{@include: ../../_include/metric-shipping/generic-dashboard.html}
+
+
+For troubleshooting this solution, see our [EKS troubleshooting guide](https://docs.logz.io/user-guide/infrastructure-monitoring/troubleshooting/eks-helm-opentelemetry-troubleshooting.html).
+

## Send your traces

```sh
helm install -n monitoring \
--set metricsOrTraces.enabled=true \
--set logzio-k8s-telemetry.traces.enabled=true \
---set logzio-k8s-telemetry.secrets.TracesToken="<>" \
---set logzio-k8s-telemetry.secrets.LogzioRegion="<>" \
+--set logzio-k8s-telemetry.secrets.TracesToken="{@include: ../../_include/tracing-shipping/replace-tracing-token.html}" \
+--set logzio-k8s-telemetry.secrets.LogzioRegion="<>" \
--set logzio-k8s-telemetry.secrets.env_id="<>" \
logzio-monitoring logzio-helm/logzio-monitoring
```

| Parameter | Description |
| --- | --- |
-| `<>` | Your [traces shipping token](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping?product=tracing). |
+| `<>` | Your [traces shipping token](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping?product=traces). |
| `<>` | The cluster's name, to easily identify the telemetry data for each environment. |
-| `<>` | Name of your Logz.io traces region e.g `us`, `eu`... |
+| `<>` | Your account's [listener host](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping?product=traces). |
+| `<>` | Name of your Logz.io traces region, e.g., `us` or `eu`. |

For troubleshooting traces shipping, see our [user guide](https://docs.logz.io/user-guide/distributed-tracing/tracing-troubleshooting.html).
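Once the chart is deployed, you can verify the tracing pipeline end to end by hand-posting a test span to the collector's OTLP/HTTP endpoint. The sketch below is only an illustration and makes a few assumptions: the collector deployed by the chart exposes the standard OTLP/HTTP port (`4318`) and is reachable from your machine (for example via `kubectl port-forward`), and the service and span names are placeholders of our choosing.

```python
import json
import os
import time
import urllib.request


def build_otlp_trace_payload(service_name: str, span_name: str) -> dict:
    """Build a minimal OTLP/HTTP JSON payload containing a single span."""
    now_ns = time.time_ns()
    return {
        "resourceSpans": [{
            "resource": {
                "attributes": [{
                    "key": "service.name",
                    "value": {"stringValue": service_name},
                }]
            },
            "scopeSpans": [{
                "spans": [{
                    # Random IDs, hex-encoded: 16 bytes for traceId, 8 for spanId
                    "traceId": os.urandom(16).hex(),
                    "spanId": os.urandom(8).hex(),
                    "name": span_name,
                    "kind": 1,  # SPAN_KIND_INTERNAL
                    "startTimeUnixNano": str(now_ns - 1_000_000),
                    "endTimeUnixNano": str(now_ns),
                }]
            }],
        }]
    }


def send_test_span(collector_url: str) -> None:
    """POST the payload to the collector's OTLP/HTTP traces endpoint."""
    body = json.dumps(build_otlp_trace_payload("test-service", "test-span")).encode()
    req = urllib.request.Request(
        collector_url,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)


if __name__ == "__main__":
    payload = build_otlp_trace_payload("test-service", "test-span")
    print(json.dumps(payload)[:80], "...")
    # To actually send the span (requires a reachable collector):
    # send_test_span("http://localhost:4318/v1/traces")
```

If the pipeline is healthy, the test span should appear in your Logz.io tracing account after a short delay.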
@@ -102,55 +239,23 @@ For troubleshooting traces shipping, see our [user guide]([https://docs.logz.io/ helm install -n monitoring \ --set metricsOrTraces.enabled=true \ --set logzio-k8s-telemetry.traces.enabled=true \ ---set logzio-k8s-telemetry.secrets.TracesToken="<>" \ ---set logzio-k8s-telemetry.secrets.LogzioRegion="<>" \ +--set logzio-k8s-telemetry.secrets.TracesToken="{@include: ../../_include/tracing-shipping/replace-tracing-token.html}" \ +--set logzio-k8s-telemetry.secrets.LogzioRegion="<>" \ --set logzio-k8s-telemetry.secrets.env_id="<>" \ --set logzio-k8s-telemetry.spm.enabled=true \ ---set logzio-k8s-telemetry.secrets.SpmToken=<> \ +--set logzio-k8s-telemetry.secrets.SpmToken={@include: ../../_include/tracing-shipping/replace-spm-token.html} \ logzio-monitoring logzio-helm/logzio-monitoring ``` | Parameter | Description | | --- | --- | -| `<>` | Your [traces shipping token](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping). | -| `<>` | The cluster's name, to easily identify the telemetry data for each environment. | -| `<>` | Your account's [listener host](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping?product=logs). | -| `<>` | Name of your Logz.io traces region e.g `us`, `eu`... | -| `<>` | Your [span metrics shipping token](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping). | - - -## Scan your cluster for security vulnerabilities - -```sh -helm install -n monitoring \ ---set securityReport.enabled=true \ ---set logzio-trivy.env_id="<>" \ ---set logzio-trivy.secrets.logzioShippingToken="<>" \ ---set logzio-trivy.secrets.logzioListener="<>" \ -``` - -| Parameter | Description | -| --- | --- | -| `<>` | Your [logs shipping token](https://app.logz.io/#/dashboard/settings/general). | -| `<>` | Your account's [listener host](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping?product=logs). 
| +| `<>` | Your [traces shipping token](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping?product=traces). | | `<>` | The cluster's name, to easily identify the telemetry data for each environment. | +| `<>` | Name of your Logz.io traces region e.g `us`, `eu`... | +| `<>` | Your [span metrics shipping token](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping?product=metrics). | -## Modifying the configuration for logs - -You can see a full list of the possible configuration values in the [logzio-fluentd Chart folder](https://github.com/logzio/logzio-helm/tree/master/charts/fluentd#configuration). - -If you would like to modify any of the values found in the `logzio-fluentd` folder, use the `--set` flag with the `logzio-fluentd` prefix. - -For instance, if there is a parameter called `someField` in the `logzio-telemetry`'s `values.yaml` file, you can set it by adding the following to the `helm install` command: - -```sh ---set logzio-fluentd.someField="my new value" -``` -You can add `log_type` annotation with a custom value, which will be parsed into a `log_type` field with the same value. - - -### Modifying the configuration for metrics and traces +## Modifying the configuration for metrics and traces You can see a full list of the possible configuration values in the [logzio-telemetry Chart folder](https://github.com/logzio/logzio-helm/tree/master/charts/logzio-telemetry). @@ -163,47 +268,30 @@ For instance, if there is a parameter called `someField` in the `logzio-telemetr --set logzio-k8s-telemetry.someField="my new value" ``` -## Sending telemetry data from eks on fargate -To ship logs from pods running on Fargate, set the `fargateLogRouter.enabled` value to `true`. Doing so will deploy a dedicated `aws-observability` namespace and a `configmap` for the Fargate log router. 
For more information on EKS Fargate logging, please refer to the [official AWS documentation]((https://docs.aws.amazon.com/eks/latest/userguide/fargate-logging.html). +## Scan your cluster for security vulnerabilities -```shell +```sh helm install -n monitoring \ ---set logs.enabled=true \ ---set logzio-fluentd.fargateLogRouter.enabled=true \ ---set logzio-fluentd.secrets.logzioShippingToken="<>" \ ---set logzio-fluentd.secrets.logzioListener="<>" \ ---set metricsOrTraces.enabled=true \ ---set logzio-k8s-telemetry.metrics.enabled=true \ ---set logzio-k8s-telemetry.secrets.MetricsToken="<>" \ ---set logzio-k8s-telemetry.secrets.ListenerHost="https://<>:8053" \ ---set logzio-k8s-telemetry.secrets.p8s_logzio_name="<>" \ ---set logzio-k8s-telemetry.traces.enabled=true \ ---set logzio-k8s-telemetry.secrets.TracesToken="<>" \ ---set logzio-k8s-telemetry.secrets.LogzioRegion="<>" \ -logzio-monitoring logzio-helm/logzio-monitoring +--set securityReport.enabled=true \ +--set logzio-trivy.env_id="<>" \ +--set logzio-trivy.secrets.logzioShippingToken="<>" \ +--set logzio-trivy.secrets.logzioListener="<>" \ ``` | Parameter | Description | | --- | --- | -| `<>` | Your [logs shipping token](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping?product=logs). | +| `<>` | Your [logs shipping token](https://app.logz.io/#/dashboard/settings/general). | | `<>` | Your account's [listener host](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping?product=logs). | -| `<>` | Your [metrics shipping token](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping?product=metrics). | -| `<>` | The name for the environment's metrics, to easily identify the metrics for each environment. | -| `<>` | The name for your environment's identifier, to easily identify the telemetry data for each environment. | -| `<>` | Your custom name for the environment's metrics, to easily identify the metrics for each environment. 
| -| `<>` | Replace `<>` with the [token](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping?product=tracing) of the account you want to ship to. | -| `<>` | Name of your Logz.io traces region e.g `us` or `eu`. You can find your region code in the [Regions and URLs](https://docs.logz.io/user-guide/accounts/account-region.html#regions-and-urls) table. | +| `<>` | The cluster's name, to easily identify the telemetry data for each environment. | + -## Handling image pull rate limit +## Uninstalling the Chart -In certain situations, such as with spot clusters where pods/nodes are frequently replaced, you may encounter the pull rate limit for images fetched from Docker Hub. This could result in the following error: Y`ou have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limits`. +The `uninstall` command is used to remove all the Kubernetes components associated with the chart and to delete the release. 
-To address this issue, you can use the `--set` commands provided below in order to access an alternative image repository: +To uninstall the `logzio-k8s-telemetry` deployment, use the following command: ```shell ---set logzio-k8s-telemetry.image.repository=ghcr.io/open-telemetry/opentelemetry-collector-releases/opentelemetry-collector-contrib ---set logzio-k8s-telemetry.prometheus-pushgateway.image.repository=public.ecr.aws/logzio/prom-pushgateway ---set logzio-fluentd.image=public.ecr.aws/logzio/logzio-fluentd ---set logzio-trivy.image=public.ecr.aws/logzio/trivy-to-logzio -``` +helm uninstall logzio-k8s-telemetry +``` \ No newline at end of file diff --git a/docs/shipping/Operating-Systems/winlogbeat.md b/docs/shipping/Operating-Systems/winlogbeat.md index 733df393..f87a6be6 100644 --- a/docs/shipping/Operating-Systems/winlogbeat.md +++ b/docs/shipping/Operating-Systems/winlogbeat.md @@ -154,182 +154,3 @@ New-Service -Name LogzioOTELCollector -BinaryPathName "$env:APPDATA\LogzioAgent\ |Service logs|`eventvwr.msc`| |Delete service|`Stop-Service -Name LogzioOTELCollector` `sc.exe DELETE LogzioOTELCollector`| - -## Send your logs using Winlogbeat - - -**Before you begin, you'll need**: -[Winlogbeat 8](https://www.elastic.co/guide/en/beats/winlogbeat/8.7/winlogbeat-installation-configuration.html#installation), [Winlogbeat 7](https://www.elastic.co/guide/en/beats/winlogbeat/7.x/winlogbeat-installation-configuration.html#installation), or [Winlogbeat 6](https://www.elastic.co/guide/en/beats/winlogbeat/6.8/winlogbeat-installation.html). - -### Download the Logz.io public certificate - -Download the -[Logz.io public certificate]({@include: ../../_include/log-shipping/certificate-path.md}) -to `C:\ProgramData\Winlogbeat\COMODORSADomainValidationSecureServerCA.crt` -on your machine. - -### Configure Windows input - -If you're working with the default configuration file, -(`C:\Program Files\Winlogbeat\winlogbeat.yml`) -clear the content and start with a fresh file. 
- -Paste this code block. - -{@include: ../../_include/log-shipping/log-shipping-token.html} - -```yaml -winlogbeat.event_logs: - - name: Application - ignore_older: 72h - - name: Security - - name: System - -fields: - logzio_codec: json - token: <> - type: wineventlog -fields_under_root: true -``` - -If you're running Winlogbeat 7 or 8, paste this code block. -Otherwise, you can leave it out. - -```yaml -# ... For Winlogbeat 7 and 8 only ... -processors: - - rename: - fields: - - from: "agent" - to: "beat_agent" - ignore_missing: true - - rename: - fields: - - from: "log.file.path" - to: "source" - ignore_missing: true - - rename: - fields: - - from: "log" - to: "log_information" - ignore_missing: true -``` - - -### Set Logz.io as the output - -If Logz.io isn't the output, set it now. - -Winlogbeat can have one output only, so remove any other `output` entries. - -{@include: ../../_include/log-shipping/listener-var.html} - -```yaml -output.logstash: - hosts: ["<>:5015"] - ssl: - certificate_authorities: ['C:\ProgramData\Winlogbeat\COMODORSADomainValidationSecureServerCA.crt'] -``` - -### Restart Winlogbeat - -Open PowerShell as an admin and run this command: - -```powershell -Restart-Service winlogbeat -``` - -:::note -If you're starting Winlogbeat, and haven't configured it as a service yet, refer to [Winlogbeat documentation](https://www.elastic.co/guide/en/beats/winlogbeat/current/configuring-howto-winlogbeat.html). -::: - - -### Check Logz.io for your logs - -Give your logs some time to get from your system to ours, and then open [Open Search Dashboards](https://app.logz.io/#/dashboard/osd). - -If you still don't see your logs, see [log shipping troubleshooting]({{site.baseurl}}/user-guide/log-shipping/log-shipping-troubleshooting.html). 
- - - - -## Configure NXLog - -**Before you begin, you'll need**: -[NXLog](https://nxlog.co/products/nxlog-community-edition/download) - - - -### Configure NXLog basics - -Copy this code into your configuration file (`C:\Program Files (x86)\nxlog\conf\nxlog.conf` by default). - -```conf -define ROOT C:\\Program Files (x86)\\nxlog -define ROOT_STRING C:\\Program Files (x86)\\nxlog -define CERTDIR %ROOT%\\cert -Moduledir %ROOT%\\modules -CacheDir %ROOT%\\data -Pidfile %ROOT%\\data\\nxlog.pid -SpoolDir %ROOT%\\data -LogFile %ROOT%\\data\\nxlog.log - - Module xm_charconv - AutodetectCharsets utf-8, euc-jp, utf-16, utf-32, iso8859-2 - -``` - -:::note -For information on parsing multi-line messages, see [this](https://nxlog.co/documentation/nxlog-user-guide/parsing-multiline.html#parsing-multiline) from NXLog. -::: - - -### Add Windows as an input - -Add an `Input` block to append your account token to log records. - -{@include: ../../_include/log-shipping/log-shipping-token.html} - -```conf - - -# For Windows Vista/2008 and later, set Module to `im_msvistalog`. For -# Windows XP/2000/2003, set to `im_mseventlog`. - Module im_msvistalog - - Exec if $raw_event =~ /^#/ drop(); - Exec convert_fields("AUTO", "utf-8"); - Exec $raw_event = '[<>][type=wineventlog]' + $raw_event; - -``` - -### Set Logz.io as the output - -Add the Logz.io listener in the `Output` block. - -{@include: ../../_include/log-shipping/listener-var.html} - -```conf - - Module om_tcp - Host <> - Port 8010 - - - Path eventlog => out - -``` - -### Restart NXLog - -Open PowerShell as an admin and run this command: - -```powershell -Restart-Service nxlog -``` - -### Check Logz.io for your logs - -Give your logs some time to get from your system to ours, and then open [Open Search Dashboards](https://app.logz.io/#/dashboard/osd). 
-
-
diff --git a/docs/shipping/Other/opentelemetry.md b/docs/shipping/Other/opentelemetry.md
index 7ce3ec7a..81cc2fe1 100644
--- a/docs/shipping/Other/opentelemetry.md
+++ b/docs/shipping/Other/opentelemetry.md
@@ -15,6 +15,135 @@ drop_filter: []
 ---
 
+## Logs
+
+
+This project lets you configure the OpenTelemetry collector to send your collected logs to Logz.io.
+
+### Configuring OpenTelemetry to send your log data to Logz.io
+
+#### Download OpenTelemetry collector
+
+:::note
+If you already have OpenTelemetry, proceed to the next step.
+:::
+
+Create a dedicated directory on your host and download the OpenTelemetry collector that is relevant to the operating system of your host.
+
+After downloading the collector, create a configuration file `config.yaml`.
+
+#### Configure the receivers
+
+Open the configuration file and ensure it contains the receivers required to collect your logs.
+
+#### Configure the exporters
+
+In the same configuration file, add the following to the `exporters` section:
+
+```yaml
+exporters:
+  logzio/logs:
+    endpoint: https://<>:<>
+    headers:
+      Authorization: Bearer <>
+```
+
+#### Configure the service pipeline
+
+In the `service` section of the configuration file, add the following configuration:
+
+```yaml
+service:
+  pipelines:
+    logs:
+      receivers: [<>]
+      exporters: [logzio/logs]
+```
+
+* Replace `<>` with the name of your log receiver.
+
+#### Start the collector
+
+Run the following command:
+
+```shell
+/otelcol-contrib --config ./config.yaml
+```
+
+* Replace `` with the path to the directory where you downloaded the collector. If the name of your configuration file is different from `config`, adjust the name in the command accordingly.
+
+#### Check Logz.io for your logs
+
+Give your data some time to get from your system to ours, then log in to your Logz.io account and open the appropriate tab or dashboard to view your logs.
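Putting the pieces above together, a complete minimal `config.yaml` might look like the following sketch. The `filelog` receiver is only an illustrative choice (it ships with the Contrib distribution of the collector), and the file path is a placeholder — substitute the receiver and placeholder values that match your setup:

```yaml
receivers:
  filelog:
    include: [ /var/log/myapp/*.log ]  # placeholder path - point this at your logs

exporters:
  logzio/logs:
    endpoint: https://<>:<>
    headers:
      Authorization: Bearer <>

service:
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [logzio/logs]
```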
+
+## Metrics
+
+
+This project lets you configure the OpenTelemetry collector to send your collected Prometheus-format metrics to Logz.io.
+
+
+### Configuring OpenTelemetry to send your metrics data to Logz.io
+
+#### Download OpenTelemetry collector
+
+:::note
+If you already have OpenTelemetry, proceed to the next step.
+:::
+
+Create a dedicated directory on your host and download the [OpenTelemetry collector](https://github.com/open-telemetry/opentelemetry-collector/releases/tag/v0.60.0) that is relevant to the operating system of your host.
+
+After downloading the collector, create a configuration file `config.yaml`.
+
+#### Configure the receivers
+
+Open the configuration file and make sure that it states the receivers required for your source.
+
+#### Configure the exporters
+
+In the same configuration file, add the following to the `exporters` section:
+
+```yaml
+exporters:
+  prometheusremotewrite:
+    endpoint: https://<>:8053
+    headers:
+      Authorization: Bearer <>
+```
+
+{@include: ../../_include/general-shipping/replace-placeholders-prometheus.html}
+
+#### Configure the service pipeline
+
+In the `service` section of the configuration file, add the following configuration:
+
+```yaml
+service:
+  pipelines:
+    metrics:
+      receivers: [<>]
+      exporters: [prometheusremotewrite]
+```
+* Replace `<>` with the name of your receiver.
+
+
+
+#### Start the collector
+
+Run the following command:
+
+```shell
+/otelcol-contrib --config ./config.yaml
+```
+
+* Replace `` with the path to the directory where you downloaded the collector. If the name of your configuration file is different from `config`, adjust the name in the command accordingly.
+
+#### Check Logz.io for your metrics
+
+Give your data some time to get from your system to ours, then log in to your Logz.io Metrics account, and open [the Logz.io Metrics tab](https://app.logz.io/#/dashboard/metrics/).
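As with the logs setup, the snippets above can be combined into one minimal `config.yaml`. The sketch below assumes a Prometheus receiver scraping a single static target; the job name and target address are placeholders of our choosing:

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: my-app                 # placeholder job name
          scrape_interval: 30s
          static_configs:
            - targets: ['localhost:9100']  # e.g. a local node exporter

exporters:
  prometheusremotewrite:
    endpoint: https://<>:8053
    headers:
      Authorization: Bearer <>

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [prometheusremotewrite]
```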
+
+
+## Traces
+
 Deploy this integration to send traces from your OpenTelemetry installation to Logz.io.
 
 ## Manual configuration
@@ -131,156 +260,6 @@ Run the application to generate traces.
 
 Give your traces some time to get from your system to ours, and then open [Tracing](https://app.logz.io/#/dashboard/jaeger).
 
-## Configuration via Helm
-
-You can use a Helm chart to ship traces to Logz.io via the OpenTelemetry collector. The Helm tool is used to manage packages of pre-configured Kubernetes resources that use charts.
-
-**logzio-k8s-telemetry** allows you to ship traces from your Kubernetes cluster to Logz.io with the OpenTelemetry collector.
-
-
-:::note
-This chart is a fork of the [opentelemtry-collector](https://github.com/open-telemetry/opentelemetry-helm-charts/tree/main/charts/opentelemetry-collector) Helm chart. The main repository for Logz.io helm charts are [logzio-helm](https://github.com/logzio/logzio-helm).
-:::
-
-
-
-:::caution Important
-This integration uses OpenTelemetry Collector Contrib, not the OpenTelemetry Collector Core.
-:::
-
-
-
-### Deploy the Helm chart
-
-Add `logzio-helm` repo as follows:
-
-```shell
-helm repo add logzio-helm https://logzio.github.io/logzio-helm
-helm repo update
-```
-
-### Run the Helm deployment code
-
-```
-helm install \
---set config.exporters.logzio.region=<> \
---set config.exporters.logzio.account_token=<> \
---set traces.enabled=true \
-logzio-k8s-telemetry logzio-helm/logzio-k8s-telemetry
-```
-
-{@include: ../../_include/tracing-shipping/replace-tracing-token.html}
-`<>` - Your Logz.io account region code. [Available regions](https://docs.logz.io/user-guide/accounts/account-region.html#available-regions).
-
-### Check Logz.io for your traces
-
-Give your traces some time to get from your system to ours, then open [Logz.io](https://app.logz.io/).
-
-
-### Customizing Helm chart parameters
-
-#### Configure customization options
-
-You can use the following options to update the Helm chart parameters:
-
-* Specify parameters using the `--set key=value[,key=value]` argument to `helm install`.
-
-* Edit the `values.yaml`.
-
-* Overide default values with your own `my_values.yaml` and apply it in the `helm install` command.
-
-
-If required, you can add the following optional parameters as environment variables:
-
-| Parameter | Description |
-|---|---|
-| secrets.SamplingLatency | Threshold for the spand latency - all traces slower than the threshold value will be filtered in. Default 500. |
-| secrets.SamplingProbability | Sampling percentage for the probabilistic policy. Default 10. |
-
-#### Example
-
-You can run the logzio-k8s-telemetry chart with your custom configuration file that takes precedence over the `values.yaml` of the chart.
-
-For example:
-
-
-:::note
-The collector will sample **ALL traces** where is some span with error with this example configuration.
-:::
-
-
-```yaml
-baseCollectorConfig:
-  processors:
-    tail_sampling:
-      policies:
-        [
-          {
-            name: error-in-policy,
-            type: status_code,
-            status_code: {status_codes: [ERROR]}
-          },
-          {
-            name: slow-traces-policy,
-            type: latency,
-            latency: {threshold_ms: 400}
-          },
-          {
-            name: health-traces,
-            type: and,
-            and: {
-              and_sub_policy:
-              [
-                {
-                  name: ping-operation,
-                  type: string_attribute,
-                  string_attribute: { key: http.url, values: [ /health ] }
-                },
-                {
-                  name: main-service,
-                  type: string_attribute,
-                  string_attribute: { key: service.name, values: [ main-service ] }
-                },
-                {
-                  name: probability-policy-1,
-                  type: probabilistic,
-                  probabilistic: {sampling_percentage: 1}
-                }
-              ]
-            }
-          },
-          {
-            name: probability-policy,
-            type: probabilistic,
-            probabilistic: {sampling_percentage: 20}
-          }
-        ]
-```
-
-```
-helm install -f /my_values.yaml \
---set logzio.region=<> \
---set logzio.tracing_token=<> \
---set traces.enabled=true \
-logzio-k8s-telemetry logzio-helm/logzio-k8s-telemetry
-```
-
-Replace `` with the path to your custom `values.yaml` file.
-
-{@include: ../../_include/tracing-shipping/replace-tracing-token.html}
-
-
-
-### Uninstalling the Chart
-
-The uninstall command is used to remove all the Kubernetes components associated with the chart and to delete the release.
-
-To uninstall the `logzio-k8s-telemetry` deployment, use the following command:
-
-```shell
-helm uninstall logzio-k8s-telemetry
-```
-
 ### Troubleshooting
 
 {@include: ../../_include/tracing-shipping/otel-troubleshooting.md}
diff --git a/docs/shipping/Other/prometheus.md b/docs/shipping/Other/prometheus.md
index 02f6ee03..2d66d93e 100644
--- a/docs/shipping/Other/prometheus.md
+++ b/docs/shipping/Other/prometheus.md
@@ -1,7 +1,7 @@
 ---
 id: Prometheus-data
 title: Prometheus
-overview: This project lets you send Prometheus-format logs, metrics and traces to Logz.io.
+overview: This integration lets you send Prometheus-format metrics and traces to Logz.io.
 product: ['metrics', 'logs']
 os: ['windows', 'linux']
 filters: ['Other', 'Most Popular']
@@ -14,12 +14,10 @@ metrics_alerts: []
 drop_filter: []
 ---
 
-
+## Using Telegraf
 
 This project lets you configure a Telegraf agent to send your collected Prometheus-format metrics to Logz.io.
 
-## Overview
-
 Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems, and IoT sensors.
 
 To send your Prometheus-format metrics to Logz.io, you add the **outputs.http** plug-in to your Telegraf configuration file.
@@ -28,8 +26,6 @@ To send your Prometheus-format metrics to Logz.io, you add the **outputs.http**
 
 #### Configure Telegraf to send your metrics data to Logz.io
 
-
-
 ##### Set up Telegraf v1.17 or higher:
 
 **Ubuntu & Debian**
@@ -99,7 +95,7 @@ The full list of data scraping and configuring options can be found [here](https
 
 {@include: ../../_include/metric-shipping/generic-dashboard.html}
 
-
+## Using Prometheus
 
 To send your Prometheus application metrics to a Logz.io Infrastructure Monitoring account, use remote write to connect to Logz.io as the endpoint. Your data is formatted as JSON documents by the Logz.io listener.
@@ -110,7 +106,6 @@ To send your Prometheus application metrics to a Logz.io Infrastructure Monitori
 :::
 
-
 #### Configuring Remote Write to Logz.io
 
@@ -212,72 +207,6 @@ After your metrics are flowing, [import your existing Prometheus and Grafana das
 
 * **Tune the remote write process**: Learn more about Prometheus [remote write tuning here.](https://prometheus.io/docs/practices/remote_write/)
 
-This project lets you configure the OpenTelemetry collector to send your collected Prometheus-format metrics to Logz.io.
-
-
-#### Configuring OpenTelemetry to send your metrics data to Logz.io
-
-
-
-##### Download OpenTelemetry collector
-
-:::note
-If you already have OpenTelemetry, proceed to the next step.
-:::
-
-
-Create a dedicated directory on your host and download the [OpenTelemetry collector](https://github.com/open-telemetry/opentelemetry-collector/releases/tag/v0.60.0) that is relevant to the operating system of your host.
-
-After downloading the collector, create a configuration file `config.yaml`.
-
-##### Configure the receivers
-
-Open the configuration file and make sure that it states the receivers required for your source.
-
-##### Configure the exporters
-
-In the same configuration file, add the following to the `exporters` section:
-
-```yaml
-exporters:
-  prometheusremotewrite:
-    endpoint: https://<>:8053
-    headers:
-      Authorization: Bearer <>
-```
-
-{@include: ../../_include/general-shipping/replace-placeholders-prometheus.html}
-
-##### Configure the service pipeline
-
-In the `service` section of the configuration file, add the following configuration
-
-```yaml
-service:
-  pipelines:
-    metrics:
-      receivers: [<>]
-      exporters: [prometheusremotewrite]
-```
-* Replace `<>` with the name of your receiver.
-
-
-
-##### Start the collector
-
-Run the following command:
-
-```shell
-/otelcol-contrib --config ./config.yaml
-```
-
-* Replace `` with the path to the directory where you downloaded the collector. If the name of your configuration file is different to `config`, adjust name in the command accordingly.
-
-##### Check Logz.io for your metrics
-
-Give your data some time to get from your system to ours, then log in to your Logz.io Metrics account, and open [the Logz.io Metrics tab](https://app.logz.io/#/dashboard/metrics/).
-
-
diff --git a/docs/shipping/Other/rsyslog.md b/docs/shipping/Other/rsyslog.md
index 6fd9797a..81de8bbf 100644
--- a/docs/shipping/Other/rsyslog.md
+++ b/docs/shipping/Other/rsyslog.md
@@ -14,8 +14,7 @@ metrics_alerts: []
 drop_filter: []
 ---
 
-
-###### Shipping with Rsyslog
+## Rsyslog over TLS
 
 Most Unix systems these days come with pre-installed rsyslog, which is a great lightweight service to consolidate logs.
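For orientation, a TLS forwarding action in rsyslog's RainerScript style typically resembles the sketch below. The certificate path, listener host, port, and message template here are all assumptions for illustration; use the exact values, certificate, and template from your Logz.io setup instructions.

```conf
# Illustrative sketch only -- CA path, host, port, and template are assumptions
global(DefaultNetstreamDriverCAFile="/etc/rsyslog.d/certs/logzio.crt")

# Prepend the account token to every forwarded message (hypothetical template)
template(name="logzioFormat" type="string" string="[<>] %HOSTNAME% %syslogtag%%msg%\n")

action(type="omfwd"
       target="listener.logz.io"          # assumed listener host for your region
       port="5001"                        # assumed TLS port
       protocol="tcp"
       template="logzioFormat"
       StreamDriver="gtls"
       StreamDriverMode="1"               # TLS-only mode
       StreamDriverAuthMode="x509/name")
```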
@@ -112,6 +111,10 @@ If you still don't see your logs, see our [Rsyslog troubleshooting guide](https:
 
 
+## Rsyslog with SELinux
+
+
+
 Security-Enhanced Linux (SELinux) is a security architecture for Linux-based systems that allows administrators to have more control over who can access the system. In systems where SELinux is enabled, rsyslog is one of the system processes that SELinux protects. One of the ways SELinux protects the service is by allowing it to only send logs using the standard port, which is 514 UDP.
 
 To be able to ship logs to Logz.io, you’ll need to modify the current SELinux policy to allow shipping logs using the non-standard port 5000 TCP.
@@ -209,7 +212,7 @@ Give your logs some time to get from your system to ours, and then [open Open Se
 
 If you still don't see your logs, see our [Rsyslog troubleshooting guide](https://docs.logz.io/user-guide/log-shipping/rsyslog-selinux-troubleshooting.html).
 
-
+## Automatic configuration
 
 ###### Shipping with Rsyslog
 
@@ -255,7 +258,7 @@ curl -sLO https://github.com/logzio/logzio-shipper/raw/master/dist/logzio-rsyslo
 
 * `<>`: {@include: ../../_include/log-shipping/type.md}
 
-
+## Manual configuration
 
 ###### Shipping with Rsyslog