diff --git a/docs/_include/general-shipping/k8s-all-data.md b/docs/_include/general-shipping/k8s-all-data.md index 58d81542..5c02466a 100644 --- a/docs/_include/general-shipping/k8s-all-data.md +++ b/docs/_include/general-shipping/k8s-all-data.md @@ -1,20 +1,20 @@ -## All telemetry (logs, metrics, traces and security reports) at once +## Send All Telemetry Data (logs, metrics, traces and security reports)

-To enjoy the full Kubernetes 360 experience, you can send all your telemetry data to Logz.io using one single Helm chart: +Send all of your telemetry data using a single Helm chart:
+
```sh
helm repo add logzio-helm https://logzio.github.io/logzio-helm
helm repo update
helm install -n monitoring --create-namespace \
--set logs.enabled=true \
---set logzio-logs-collector.secrets.logzioLogsToken="<>" \
---set logzio-logs-collector.secrets.logzioRegion="<>" \
---set logzio-logs-collector.secrets.env_id="<>" \
---set logzio-fluentd.enabled=false \
+--set logzio-logs-collector.secrets.logzioLogsToken="<>" \
+--set logzio-logs-collector.secrets.logzioRegion="<>" \
+--set logzio-logs-collector.secrets.env_id="<>" \
--set metricsOrTraces.enabled=true \
--set logzio-k8s-telemetry.metrics.enabled=true \
---set logzio-k8s-telemetry.secrets.MetricsToken="<>" \
+--set logzio-k8s-telemetry.secrets.MetricsToken="<>" \
--set logzio-k8s-telemetry.secrets.ListenerHost="https://<>:8053" \
--set logzio-k8s-telemetry.secrets.p8s_logzio_name="<>" \
--set logzio-k8s-telemetry.traces.enabled=true \
@@ -22,7 +22,7 @@ helm install -n monitoring --create-namespace \
--set logzio-k8s-telemetry.secrets.LogzioRegion="<>" \
--set logzio-k8s-telemetry.spm.enabled=true \
--set logzio-k8s-telemetry.secrets.env_id="<>" \
---set logzio-k8s-telemetry.secrets.SpmToken="<>" \
+--set logzio-k8s-telemetry.secrets.SpmToken="<>" \
--set logzio-k8s-telemetry.serviceGraph.enabled=true \
--set logzio-k8s-telemetry.k8sObjectsConfig.enabled=true \
--set logzio-k8s-telemetry.secrets.k8sObjectsLogsToken="<>" \
@@ -41,9 +41,8 @@ logzio-monitoring logzio-helm/logzio-monitoring
| --- | --- |
| `<>` | Your [logs shipping token](https://app.logz.io/#/dashboard/settings/general). |
| `<>` | Your account's [listener host](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping?product=logs). |
-| `<>` | Your [metrics shipping token](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping). |
-| `<>` | The name for the environment's metrics, to easily identify the metrics for each environment. |
+| `<>` | Your [metrics shipping token](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping). |
+| `<>` | Your [SPM account shipping token](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping). |
| `<>` | The name for your environment's identifier, to easily identify the telemetry data for each environment. |
| `<>` | Your [traces shipping token](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping). |
-| `<>` | Name of your Logz.io traces region e.g `us`, `eu`... |
-
+| `<>` | Your Logz.io [region code](https://docs.logz.io/docs/user-guide/admin/hosting-regions/account-region/#available-regions). |
diff --git a/docs/_include/general-shipping/k8s.md b/docs/_include/general-shipping/k8s.md
index ec4e7b1f..ecdf5a3c 100644
--- a/docs/_include/general-shipping/k8s.md
+++ b/docs/_include/general-shipping/k8s.md
@@ -1,49 +1,48 @@
+## Prerequisites
+* [Helm](https://helm.sh/)
+* Add the Logz.io Helm repository:
+  ```sh
+  helm repo add logzio-helm https://logzio.github.io/logzio-helm && helm repo update
+  ```
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';

-## Prerequisites

-:::note
-You can find your Logz.io configuration tokens, environment IDs, regions, and other required details [here](https://app.logz.io/#/dashboard/integrations/aws-eks).
-:::

-1. [Helm](https://helm.sh/)

+{@include: ../../_include/general-shipping/k8s-all-data.md}

-Add Logzio-helm repository

-```sh
-helm repo add logzio-helm https://logzio.github.io/logzio-helm && helm repo update
-```

-{@include: ../../_include/general-shipping/k8s-all-data.md}

+## Manual Setup
+Below are instructions for configuring each type of telemetry data individually.

-## Send your logs
+## Send your logs

-`logzio-monitoring` supports the following subcharts for log collection agent:
-- `logzio-logs-collector`: Based on opentelemetry collector
-- `logzio-fluentd`: Based on fluentd
+To send your logs, our `logzio-monitoring` chart offers two methods:

-### Log collection with logzio-logs-collector
+* `logzio-logs-collector`, based on the OpenTelemetry Collector
+* `logzio-fluentd`, based on Fluentd

-_Migrating to `logzio-monitoring` >=6.0.0_
-Deploy `logzio-logs-collector`, by replacing `logzio-fluentd` flags with the following `--set` flags:
+### Log collection with OpenTelemetry collector

```sh
helm install -n monitoring --create-namespace \
--set logs.enabled=true \
---set logzio-logs-collector.enabled=true \
---set logzio-fluentd.enabled=false \
---set logzio-logs-collector.secrets.logzioLogsToken="<>" \
---set logzio-logs-collector.secrets.logzioRegion="<>" \
---set logzio-logs-collector.secrets.env_id="<>" \
+--set logzio-logs-collector.secrets.logzioLogsToken="<>" \
+--set logzio-logs-collector.secrets.logzioRegion="<>" \
+--set logzio-logs-collector.secrets.env_id="<>" \
logzio-monitoring logzio-helm/logzio-monitoring
```

-### Log collection with logzio-fluentd
-The `logzio-fluentd` chart is disabled by default in favor of the `logzio-logs-collector` chart for log collection.
-Deploy `logzio-fluentd`, by adding the following `--set` flags:
+### Log collection with Fluentd
+The `logzio-fluentd` chart is disabled by default in favor of the `logzio-logs-collector` chart.
+Deploy `logzio-fluentd` by adding the following `--set` flags:

```sh
helm install -n monitoring --create-namespace \
@@ -62,15 +61,45 @@ logzio-monitoring logzio-helm/logzio-monitoring
| `<>` | Your [logs shipping token](https://app.logz.io/#/dashboard/settings/general). |
| `<>` | Your account's [listener host](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping?product=logs). |
| `<>` | The cluster's name, to easily identify the telemetry data for each environment. |
-| `<>` | Logzio region. |
+| `<>` | Your Logz.io [region code](https://docs.logz.io/docs/user-guide/admin/hosting-regions/account-region/#available-regions). 
| + + +:::note +If you encounter an issue, see our [troubleshooting guide](https://docs.logz.io/docs/user-guide/log-management/troubleshooting/troubleshooting-fluentd-for-kubernetes-logs/). +::: + + +### Custom Configuration + +You can view the full list of possible configuration options for each chart in the links below: +1. [logzio-fluentd Chart](https://github.com/logzio/logzio-helm/tree/master/charts/fluentd#configuration) +2. [logzio-logs-collector Chart](https://github.com/logzio/logzio-helm/tree/master/charts/logzio-logs-collector#configuration) + +To modify values, use the `--set` flag with the chart name as a prefix. +**Example:** +For a parameter called `someField` in the `logzio-logs-collector`'s `values.yaml` file, set it by adding the following to the `helm install` command: -For log shipping troubleshooting, see our [user guide](https://docs.logz.io/docs/user-guide/log-management/troubleshooting/troubleshooting-fluentd-for-kubernetes-logs/). +```sh +--set logzio-logs-collector.someField="my new value" +``` + +:::info +Adding `log_type` annotation with a custom value will be parsed into a `log_type` field with the same value. +::: + + + + + + +## Send deployment events logs -## Send your deploy events logs +Send data about deployment events in the cluster, and how they affect its resources. -This integration sends data about deployment events in the cluster, and how they affect the cluster's resources. -Currently supported resource kinds are `Deployment`, `Daemonset`, `Statefulset`, `ConfigMap`, `Secret`, `Service Account`, `Cluster Role` and `Cluster Role Binding`. +:::info +Supported resource kinds are `Deployment`, `Daemonset`, `Statefulset`, `ConfigMap`, `Secret`, `Service Account`, `Cluster Role` and `Cluster Role Binding`. +::: ```sh helm install -n monitoring --create-namespace \ @@ -82,7 +111,6 @@ logzio-monitoring logzio-helm/logzio-monitoring ``` - | Parameter | Description | | --- | --- | | `<>` | Your [logs shipping token](https://app.logz.io/#/dashboard/settings/general). | @@ -92,7 +120,8 @@ logzio-monitoring logzio-helm/logzio-monitoring ### Deployment events versioning -To add a versioning indicator to our K8S 360 and Service Overview UI, the specified annotation must be included in the metadata of each resource whose versioning you wish to track. The 'View commit' button will link to the commit URL in your version control system (VCS) from the logzio/commit_url annotation value. +To add a versioning indicator in Kubernetes 360 and Service Overview, include the `logzio/commit_url` annotation in the resource metadata. The 'View commit' button will link to the commit URL in your version control system (VCS). + ```yaml metadata: @@ -106,7 +135,14 @@ Commit URL structure: `https://github.com///commit/ + + ## Send your metrics @@ -114,7 +150,7 @@ For log shipping troubleshooting, see our [user guide](https://docs.logz.io/docs helm install -n monitoring --create-namespace \ --set metricsOrTraces.enabled=true \ --set logzio-k8s-telemetry.metrics.enabled=true \ ---set logzio-k8s-telemetry.secrets.MetricsToken="<>" \ +--set logzio-k8s-telemetry.secrets.MetricsToken="<>" \ --set logzio-k8s-telemetry.secrets.ListenerHost="https://<>:8053" \ --set logzio-k8s-telemetry.secrets.p8s_logzio_name="<>" \ --set logzio-k8s-telemetry.secrets.env_id="<>" \ @@ -123,37 +159,49 @@ logzio-monitoring logzio-helm/logzio-monitoring | Parameter | Description | | --- | --- | -| `<>` | Your [metrics shipping token](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping). 
|
+| `<>` | Your [metrics shipping token](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping). |
| `<>` | The cluster's name, to easily identify the telemetry data for each environment. |
| `<>` | Your account's [listener host](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping?product=logs). |

+:::note
+If you encounter an issue, see our [troubleshooting guide](https://docs.logz.io/docs/user-guide/infrastructure-monitoring/troubleshooting/k8s-troubleshooting/).
+:::

+### Custom Configuration

+You can view the full list of the possible configuration values in the [logzio-k8s-telemetry Chart folder](https://github.com/logzio/logzio-helm/tree/master/charts/logzio-telemetry).

+To modify values found in the `logzio-telemetry` folder, use the `--set` flag with the `logzio-k8s-telemetry` prefix.

+For example, for a parameter called `someField` in the `logzio-k8s-telemetry`'s `values.yaml` file, set it by adding the following to the `helm install` command:

+```sh
+--set logzio-k8s-telemetry.someField="my new value"
+```

-For metrics shipping troubleshooting, see our [user guide](https://docs.logz.io/docs/user-guide/infrastructure-monitoring/troubleshooting/k8s-troubleshooting/).

## Send your traces

+We offer three options for sending your traces:
+* Traces only
+* Traces and SPM
+* Traces, SPM, and Service Graph data

```sh
helm install -n monitoring --create-namespace \
--set metricsOrTraces.enabled=true \
--set logzio-k8s-telemetry.traces.enabled=true \
--set logzio-k8s-telemetry.secrets.TracesToken="<>" \
---set logzio-k8s-telemetry.secrets.LogzioRegion="<>" \
+--set logzio-k8s-telemetry.secrets.LogzioRegion="<>" \
--set logzio-k8s-telemetry.secrets.env_id="<>" \
logzio-monitoring logzio-helm/logzio-monitoring
```

-| Parameter | Description |
-| --- | --- |
-| `<>` | Your [traces shipping token](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping?product=tracing). |
-| `<>` | The cluster's name, to easily identify the telemetry data for each environment. |
-| `<>` | Name of your Logz.io traces region e.g `us`, `eu`... |

-For traces shipping troubleshooting, see our [Distributed Tracing troubleshooting](https://docs.logz.io/docs/user-guide/distributed-tracing/troubleshooting/tracing-troubleshooting/).

## Send traces with SPM

```sh
@@ -161,23 +209,19 @@ helm install -n monitoring --create-namespace \
--set metricsOrTraces.enabled=true \
--set logzio-k8s-telemetry.traces.enabled=true \
--set logzio-k8s-telemetry.secrets.TracesToken="<>" \
---set logzio-k8s-telemetry.secrets.LogzioRegion="<>" \
+--set logzio-k8s-telemetry.secrets.LogzioRegion="<>" \
--set logzio-k8s-telemetry.secrets.env_id="<>" \
--set logzio-k8s-telemetry.spm.enabled=true \
--set logzio-k8s-telemetry.secrets.SpmToken=<> \
logzio-monitoring logzio-helm/logzio-monitoring
```

-| Parameter | Description |
-| --- | --- |
-| `<>` | Your [traces shipping token](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping). |
-| `<>` | The cluster's name, to easily identify the telemetry data for each environment. |
-| `<>` | Your account's [listener host](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping?product=logs). |
-| `<>` | Name of your Logz.io traces region e.g `us`, `eu`... |
-| `<>` | Your [span metrics shipping token](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping). |

+## Send Service Graph data
+
+:::warning Important
+`serviceGraph.enabled=true` will have no effect unless both `traces.enabled` and `spm.enabled` are set to `true`.
+:::

-## Deploy both charts with span metrics and service graph
-**Note** `serviceGraph.enabled=true` will have no effect unless `traces.enabled` & `spm.enabled=true` is also set to `true`

```sh
helm install -n monitoring --create-namespace \
--set metricsOrTraces.enabled=true \
@@ -191,21 +235,67 @@ helm install -n monitoring --create-namespace \
logzio-monitoring logzio-helm/logzio-monitoring
```

-#### Deploy metrics chart with Kuberenetes object logs correlation
-**Note** `k8sObjectsConfig.enabled=true` will have no effect unless `metrics.enabled` is also set to `true`
+| Parameter | Description |
+| --- | --- |
+| `<>` | Your [traces shipping token](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping). |
+| `<>` | The cluster's name, to easily identify the telemetry data for each environment. |
+| `<>` | Your account's [listener host](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping?product=logs). |
+| `<>` | Your Logz.io [region code](https://docs.logz.io/docs/user-guide/admin/hosting-regions/account-region/#available-regions). |
+| `<>` | Your [span metrics shipping token](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping). |
+
+:::note
+For traces shipping troubleshooting, see our [Distributed Tracing troubleshooting](https://docs.logz.io/docs/user-guide/distributed-tracing/troubleshooting/tracing-troubleshooting/).
+:::
+
+### Custom Configuration
+
+You can view the full list of the possible configuration values in the [logzio-k8s-telemetry Chart folder](https://github.com/logzio/logzio-helm/tree/master/charts/logzio-telemetry).
+
+To modify values found in the `logzio-telemetry` folder, use the `--set` flag with the `logzio-k8s-telemetry` prefix.
+
+For example, for a parameter called `someField` in the `logzio-k8s-telemetry`'s `values.yaml` file, set it by adding the following to the `helm install` command:
+
+```sh
+--set logzio-k8s-telemetry.someField="my new value"
+```
+
+## Send Metrics with Kubernetes object logs
+
+:::warning Important
+`k8sObjectsConfig.enabled=true` will have no effect unless `metrics.enabled` is also set to `true`.
+:::

```sh
helm install \
--set logzio-k8s-telemetry.metrics.enabled=true \
--set logzio-k8s-telemetry.k8sObjectsConfig.enabled=true \
--set logzio-k8s-telemetry.secrets.LogzioRegion=<> \
---set logzio-k8s-telemetry.secrets.k8sObjectsLogsToken=<> \
---set logzio-k8s-telemetry.secrets.MetricsToken=<> \
+--set logzio-k8s-telemetry.secrets.k8sObjectsLogsToken=<> \
+--set logzio-k8s-telemetry.secrets.MetricsToken=<> \
--set logzio-k8s-telemetry.secrets.ListenerHost=<> \
---set logzio-k8s-telemetry.secrets.p8s_logzio_name=<> \
---set logzio-k8s-telemetry.secrets.env_id=<> \
+--set logzio-k8s-telemetry.secrets.p8s_logzio_name=<> \
+--set logzio-k8s-telemetry.secrets.env_id=<> \
logzio-monitoring logzio-helm/logzio-monitoring
```

+| Parameter | Description |
+| --- | --- |
+| `<>` | Your [logs shipping token](https://app.logz.io/#/dashboard/settings/general). |
+| `<>` | Your [metrics shipping token](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping). 
| +| `<>` | Your Logz.io [region code](https://docs.logz.io/docs/user-guide/admin/hosting-regions/account-region/#available-regions) | +| `<>` | Your account's [listener host](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping?product=logs). | +| `<>` | The cluster's name, to easily identify the telemetry data for each environment. | + + + + + + ## Scan your cluster for security vulnerabilities ```sh @@ -223,36 +313,14 @@ helm install -n monitoring --create-namespace \ | `<>` | The cluster's name, to easily identify the telemetry data for each environment. | -## Modifying the configuration for logs + + -You can see a full list of the possible configuration values in the [logzio-fluentd Chart folder](https://github.com/logzio/logzio-helm/tree/master/charts/fluentd#configuration). -If you would like to modify any of the values found in the `logzio-fluentd` folder, use the `--set` flag with the `logzio-fluentd` prefix. -For instance, if there is a parameter called `someField` in the `logzio-telemetry`'s `values.yaml` file, you can set it by adding the following to the `helm install` command: - -```sh ---set logzio-fluentd.someField="my new value" -``` -You can add `log_type` annotation with a custom value, which will be parsed into a `log_type` field with the same value. - - -### Modifying the configuration for metrics and traces - -You can see a full list of the possible configuration values in the [logzio-telemetry Chart folder](https://github.com/logzio/logzio-helm/tree/master/charts/logzio-telemetry). - -If you would like to modify any of the values found in the `logzio-telemetry` folder, use the `--set` flag with the `logzio-k8s-telemetry` prefix. - -For instance, if there is a parameter called `someField` in the `logzio-telemetry`'s `values.yaml` file, you can set it by adding the following to the `helm install` command: + ## Sending telemetry data from EKS on Fargate - -```sh ---set logzio-k8s-telemetry.someField="my new value" -``` - -## Sending telemetry data from eks on fargate - -To ship logs from pods running on Fargate, set the `fargateLogRouter.enabled` value to `true`. Doing so will deploy a dedicated `aws-observability` namespace and a `configmap` for the Fargate log router. For more information on EKS Fargate logging, please refer to the [official AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/fargate-logging.html). +Set the `fargateLogRouter.enabled` value to `true`. This deploys a dedicated `aws-observability` namespace and a `configmap` for the Fargate log router. Read more on EKS Fargate logging in the [official AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/fargate-logging.html). 
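+
+Optionally, you can first confirm that the `aws-observability` namespace does not already exist in your cluster (a generic `kubectl` check, not part of the chart itself):
+
+```shell
+kubectl get namespace aws-observability
+```
+
+Then run the deployment command:
+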
```shell helm install -n monitoring --create-namespace \ @@ -264,12 +332,12 @@ helm install -n monitoring --create-namespace \ --set logzio-k8s-telemetry.collector.mode=standalone \ --set logzio-k8s-telemetry.enableMetricsFilter.eks=true \ --set logzio-k8s-telemetry.metrics.enabled=true \ ---set logzio-k8s-telemetry.secrets.MetricsToken="<>" \ +--set logzio-k8s-telemetry.secrets.MetricsToken="<>" \ --set logzio-k8s-telemetry.secrets.ListenerHost="https://<>:8053" \ --set logzio-k8s-telemetry.secrets.p8s_logzio_name="<>" \ --set logzio-k8s-telemetry.traces.enabled=true \ --set logzio-k8s-telemetry.secrets.TracesToken="<>" \ ---set logzio-k8s-telemetry.secrets.LogzioRegion="<>" \ +--set logzio-k8s-telemetry.secrets.LogzioRegion="<>" \ logzio-monitoring logzio-helm/logzio-monitoring ``` @@ -277,18 +345,25 @@ logzio-monitoring logzio-helm/logzio-monitoring | --- | --- | | `<>` | Your [logs shipping token](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping?product=logs). | | `<>` | Your account's [listener host](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping?product=logs). | -| `<>` | Your [metrics shipping token](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping?product=metrics). | -| `<>` | The name for the environment's metrics, to easily identify the metrics for each environment. | +| `<>` | Your [metrics shipping token](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping?product=metrics). | | `<>` | The name for your environment's identifier, to easily identify the telemetry data for each environment. | -| `<>` | Your custom name for the environment's metrics, to easily identify the metrics for each environment. | | `<>` | Replace `<>` with the [token](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping?product=tracing) of the account you want to ship to. | -| `<>` | Name of your Logz.io traces region e.g `us` or `eu`. You can find your region code in the [Regions and URLs](https://docs.logz.io/docs/user-guide/admin/hosting-regions/account-region/#regions-and-urls/) table. | +| `<>` | Your Logz.io [region code](https://docs.logz.io/docs/user-guide/admin/hosting-regions/account-region/#available-regions) | -## Handling image pull rate limit -In certain situations, such as with spot clusters where pods/nodes are frequently replaced, you may encounter the pull rate limit for images fetched from Docker Hub. This could result in the following error: `You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limits`. + + -To address this issue, you can use the `--set` commands provided below in order to access an alternative image repository: + +## Advanced Configuration and Troubleshooting + + + + + +## Handling image pull rate limit + +Docker Hub pull rate limits could result in the following error: `You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limits`. 
To avoid this, use the `--set` commands below to access an alternative image repository:

```shell
--set logzio-k8s-telemetry.image.repository=ghcr.io/open-telemetry/opentelemetry-collector-releases/opentelemetry-collector-contrib
--set logzio-trivy.image=public.ecr.aws/logzio/trivy-to-logzio
```

-## Upgrading logzio-monitoring to v3.0.0
-Before upgrading your logzio-monitoring Chart to v3.0.0 with `helm upgrade`, note that you may encounter an error for some of the logzio-telemetry sub-charts.

-There are two possible approaches to the upgrade you can choose from:
-- Reinstall the chart.
-- Before running the `helm upgrade` command, delete the old subcharts resources: `logzio-monitoring-prometheus-pushgateway` deployment and the `logzio-monitoring-prometheus-node-exporter` daemonset.

## Configuring logs in JSON format

-This configuration sets up a log processor to parse, restructure, and clean JSON-formatted log messages for streamlined analysis and monitoring:
+To parse JSON logs using the Fluentd chart, configure the following processor using the `configmap.extraConfig` configuration option:

```json

```

+:::info
+Instructions for using `configmap.extraConfig` can be found [here](https://github.com/logzio/logzio-helm/tree/master/charts/fluentd#configuration).
+:::

-## Adding metric names to K8S 360 filter
+## Custom Filter for Metrics

-To customize the metrics collected by Prometheus in your Kubernetes environment, you need to modify the `prometheusFilters` configuration in your Helm chart.
+We provide the `enableMetricsFilter` setting, which by default keeps only the metrics needed for Kubernetes 360.
+To customize the metrics being sent, modify the `prometheusFilters` configuration in your Helm chart.
+Follow the steps below to adjust this configuration:

-### Identify metrics to keep
-Decide which metrics you need to add to your collection, formatted as a regex string (e.g., `new_metric_1|new_metric_2`).
+**1. Identify metrics to keep**

-### Set filters
+Determine which metrics you need to add to your collection, and format them as a regex string (e.g., `new_metric_1|new_metric_2`).

+**2. Set filters**

+Run the following command to update your chart with the specified metrics:

-Run the following command:

```shell
helm upgrade logzio-helm/logzio-monitoring \
--set logzio-k8s-telemetry.prometheusFilters.keep.<>="<>|" \
--reuse-values
```

* Replace `` with the name of your Helm release.
-* Replace `` with the appropriate service name: `ask`, `eks` or `gke`. \ No newline at end of file
+* Replace `` with the appropriate service name: `aks`, `eks` or `gke`.
diff --git a/docs/_include/log-shipping/certificate.md b/docs/_include/log-shipping/certificate.md
index 7d056c35..91719eab 100644
--- a/docs/_include/log-shipping/certificate.md
+++ b/docs/_include/log-shipping/certificate.md
@@ -1,4 +1,4 @@
-##### Download the Logz.io public certificate to your credentials server
+### Download the Logz.io public certificate

For HTTPS shipping, download the Logz.io public certificate to your certificate authority folder. 
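+
+For example, on most Linux distributions (the target path below follows the common Logz.io examples; adjust it to your system's certificate authority folder):
+
+```shell
+sudo curl https://raw.githubusercontent.com/logzio/public-certificates/master/AAACertificateServices.crt \
+  --create-dirs -o /etc/pki/tls/certs/COMODORSADomainValidationSecureServerCA.crt
+```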
diff --git a/docs/_include/log-shipping/filebeat-installed-port5015-begin.md b/docs/_include/log-shipping/filebeat-installed-port5015-begin.md
index 0defcf8d..eab7d8d2 100644
--- a/docs/_include/log-shipping/filebeat-installed-port5015-begin.md
+++ b/docs/_include/log-shipping/filebeat-installed-port5015-begin.md
@@ -1,3 +1,3 @@
* [Filebeat](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-installation.html) installed
-* Port 5015 open
+* Port 5015 open to outgoing traffic
* Root access \ No newline at end of file
diff --git a/docs/_include/log-shipping/filebeat-installed-port5015-end.md b/docs/_include/log-shipping/filebeat-installed-port5015-end.md
index eb6ea561..d2ecf627 100644
--- a/docs/_include/log-shipping/filebeat-installed-port5015-end.md
+++ b/docs/_include/log-shipping/filebeat-installed-port5015-end.md
@@ -1,2 +1,3 @@
+:::note
While support for [Filebeat 6.3 and later versions](https://www.elastic.co/guide/en/beats/filebeat/6.7/filebeat-installation.html) is available, Logz.io recommends that you use the latest stable version
-* Destination port 5015 open to outgoing traffic \ No newline at end of file
+:::
diff --git a/docs/_include/log-shipping/filebeat-ssl.md b/docs/_include/log-shipping/filebeat-ssl.md
index f78da2ec..676d29ea 100644
--- a/docs/_include/log-shipping/filebeat-ssl.md
+++ b/docs/_include/log-shipping/filebeat-ssl.md
@@ -1,14 +1,11 @@
-###### Disabling SSL for Filebeat log shipping
+#### Disabling SSL

-By default, Filebeat uses SSL/TLS to secure the communication between Filebeat and Logz.io. However, if you want to disable SSL, you can modify the Filebeat configuration accordingly.
+Filebeat uses SSL/TLS to secure the communication between Filebeat and Logz.io. To disable SSL, modify the Filebeat configuration accordingly:

-To ship logs without using SSL in Filebeat:
+1. Open the Filebeat configuration file, typically located at `/etc/filebeat/filebeat.yml` (Linux) or `C:\ProgramData\Filebeat\filebeat.yml` (Windows).

-1. Open the Filebeat configuration file for editing. The configuration file's location may vary depending on your operating system, but it is commonly located at `/etc/filebeat/filebeat.yml` (Linux) or `C:\ProgramData\Filebeat\filebeat.yml` (Windows).
+2. Find the `output.logstash` section in the file.

-2. Look for the `output.logstash` section in the configuration file.
+3. Remove the # character at the beginning of the `#ssl.enabled` line to disable SSL. The line should now look like this: `ssl.enabled: false`

-3. Uncomment the # character at the beginning of the #ssl.enabled line to disable SSL. The line should now look like this:
-   `#ssl.enabled: false`

-4. Save the changes to the configuration file and restart the Filebeat service to apply the changes. \ No newline at end of file
+4. Save the changes and restart the Filebeat service to apply the changes. \ No newline at end of file
diff --git a/docs/_include/log-shipping/filebeat-wizard.html b/docs/_include/log-shipping/filebeat-wizard.html
index be57b957..9fbbfab9 100644
--- a/docs/_include/log-shipping/filebeat-wizard.html
+++ b/docs/_include/log-shipping/filebeat-wizard.html
@@ -1 +1 @@
-Log into your Logz.io account, and go to the [Filebeat log shipping page](https://app.logz.io/#/dashboard/send-your-data/log-sources/filebeat) to use the dedicated Logz.io Filebeat configuration wizard. It's the simplest way to configure Filebeat for your use case. 
+Log in to Logz.io and navigate to the [Filebeat log shipping page](https://app.logz.io/#/dashboard/integrations/Filebeat-data).
diff --git a/docs/_include/log-shipping/filebeat-wizard.md b/docs/_include/log-shipping/filebeat-wizard.md
index 0811f027..fb06ef14 100644
--- a/docs/_include/log-shipping/filebeat-wizard.md
+++ b/docs/_include/log-shipping/filebeat-wizard.md
@@ -1,19 +1,24 @@
-###### Adding log sources to the configuration file
-
-For each of the log types you plan to send to Logz.io, fill in the following:
+#### Adding log sources to the configuration file

* Select your operating system - **Linux** or **Windows**.
-* Specify the full **Path** to the logs.
-* Select a log **Type** from the list or select **Other** and give it a name of your choice to specify a custom log type.
+* Specify the full log **Path**.
+* Select a log **Type** from the list or select **Other** to create and specify a custom log type.
  * If you select a log type from the list, the logs will be automatically parsed and analyzed. [List of types available for parsing by default](https://docs.logz.io/docs/user-guide/data-hub/log-parsing/default-parsing/#built-in-log-types).
-  * If you select **Other**, contact support to request custom parsing assistance. Don't be shy, it's included in your plan!
+  * If you select **Other**, contact support for custom parsing assistance.
* Select the log format - **Plaintext** or **Json**.
-* (_Optional_) Enable the **Multiline** option if your log messages span
+* (Optional) Enable the **Multiline** option if your log messages span
multiple lines. You’ll need to give a regex that identifies the beginning line of each log.
-* (_Optional_) Add a custom field. Click **+ Add a field** to add additional fields.
+* (Optional) Add a custom field. Click **+ Add a field** to add additional fields.
+
+:::note
+The wizard makes it simple to add multiple log types to a single configuration file. To add additional sources, click **+ Add a log type** and fill in the details for another log type. Repeat as necessary.
+:::
+
+#### Filebeat 8.1+
+If you're running Filebeat 8.1+, there are some adjustments you need to make in the config file:

-If you're running Filebeat 8.1+, the `type` of the `filebeat.inputs` is `filestream` instead of `logs`:
+1. Change the `type` of the `filebeat.inputs` to `filestream` instead of `logs`:

```yaml
filebeat.inputs:
@@ -22,7 +27,16 @@ filebeat.inputs:
  - /var/log/*.log
```

-###### Add additional sources (_Optional_)
-
-The wizard makes it simple to add multiple log types to a single configuration file. Click **+ Add a log type** to fill in the details for another log type. Repeat as necessary.
+2. 
**To configure multiline**, nest the multiline settings under `parsers`:

```yaml
- type: filestream
  paths:
    - /var/log/*.log
  parsers:
    - multiline:
        type: pattern
        pattern: '^\d{4}-'
        negate: true
        match: after
```
diff --git a/docs/_include/log-shipping/lambda-xtension-tablink-indox.html b/docs/_include/log-shipping/lambda-xtension-tablink-indox.html
index aa3d5c69..1523f7ef 100644
--- a/docs/_include/log-shipping/lambda-xtension-tablink-indox.html
+++ b/docs/_include/log-shipping/lambda-xtension-tablink-indox.html
@@ -1 +1 @@
-in the [Environment Variables & ARNs](https://docs.logz.io/docs/shipping/Compute/Lambda-extensions#environment-variables) tab \ No newline at end of file
+(https://docs.logz.io/docs/shipping/Compute/Lambda-extensions#environment-variables) \ No newline at end of file
diff --git a/docs/_include/log-shipping/lambda-xtension-tablink.md b/docs/_include/log-shipping/lambda-xtension-tablink.md
index 83c846bd..28087e85 100644
--- a/docs/_include/log-shipping/lambda-xtension-tablink.md
+++ b/docs/_include/log-shipping/lambda-xtension-tablink.md
@@ -1 +1 @@
-(https://app.logz.io/#/dashboard/send-your-data/log-sources/lambda-extensions?type=tables) \ No newline at end of file
+(https://docs.logz.io/docs/shipping/aws/lambda-extensions/#arns) \ No newline at end of file
diff --git a/docs/_include/log-shipping/rsyslog-troubleshooting.md b/docs/_include/log-shipping/rsyslog-troubleshooting.md
index 4b040737..625876b4 100644
--- a/docs/_include/log-shipping/rsyslog-troubleshooting.md
+++ b/docs/_include/log-shipping/rsyslog-troubleshooting.md
@@ -1,20 +1,22 @@
-This section contains some guidelines for handling errors that you may encounter when trying to collect logs for Rsyslog - SELinux configuration.
+## Troubleshooting

-SELinux is a Linux feature that allows you to implement access control security policies in Linux systems. In distributions such as Fedora and RHEL, SELinux is in Enforcing mode by default.
+This section provides guidelines for handling errors when collecting logs for Rsyslog with SELinux configuration.

-Rsyslog is one of the system processes protected by SELinux. This means that rsyslog by default is not allowed to send to a port other than 514/udp (the standard syslog port) has limited access to other files and directories outside of their initial configurations.
+SELinux is a Linux feature for implementing access control security policies. In distributions like Fedora and RHEL, SELinux is enabled in Enforcing mode by default.

-To send information to Logz.io properly in a SELinux environment, it is necessary to add exceptions to allow:
+Rsyslog, a system process protected by SELinux, is restricted by default to sending data only to port 514/udp (the standard syslog port) and has limited access to files and directories beyond its initial configuration.

-* rsyslog to communicate with logz.io through the desired port
-* rsyslog to access the files and directories needed for it to work properly
+To send data to Logz.io in a SELinux environment, you need to add exceptions to allow:

+* rsyslog to communicate with Logz.io through the desired port.
+* rsyslog to access the necessary files and directories.

-##### Possible cause - issue not related to SELinux
+### Issue not related to SELinux

The issue may not be caused by SELinux.

-###### Suggested remedy
+**Suggested remedy**

Disable SELinux temporarily and see if that solves the problem. 
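+
+For example (switching SELinux to permissive mode until the next reboot, then re-testing log shipping):
+
+```shell
+# Check the current SELinux mode
+getenforce
+# Switch to permissive mode temporarily
+sudo setenforce 0
+# ...re-test rsyslog shipping, then restore enforcing mode
+sudo setenforce 1
+```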
@@ -55,23 +57,20 @@ SELINUX=disabled SELINUX=permissive ``` -##### Possible cause - need exceptions to SELinux for Logz.io +### Need to add exceptions You may need to add exception to SELinux configuration to enable Logz.io. -###### Suggested remedy - +**Suggested remedy** -###### Install the policycoreutils and the setroubleshoot packages +1. Install the policycoreutils and the setroubleshoot packages: ```shell # Installing policycoreutils & setroubleshoot packages $ sudo yum install policycoreutils setroubleshoot ``` -###### Check which syslog ports are allowed by SELinux - -Run the command as in the example below: +2. Check which syslog ports are allowed by SELinux: ```shell $ sudo semanage port -l| grep syslog @@ -80,14 +79,14 @@ output: syslogd_port_t udp 514 ``` -###### Add a new port to policy for Logz.io +3. Add a new port to policy for Logz.io: ```shell # Adding a port to SELinux policies $ sudo semanage port -m -t syslogd_port_t -p tcp 5000 ``` -###### Authorize Rsyslog directory +4. Authorize Rsyslog directory: ```shell @@ -96,7 +95,7 @@ $ sudo semanage fcontext -a -t syslogd_var_lib_t "/var/spool/rsyslog/*" $ sudo restorecon -R -v /var/spool/rsyslog ``` -Depending on the distribution, run the following command: +5. Depending on the distribution, run the following command: ```shell # instructing se to authorize /etc/rsyslog.d/* @@ -109,7 +108,7 @@ $ sudo semanage fcontext -a -t etc_t "/etc/rsyslog.d" $ sudo restorecon -v /etc/rsyslog.d ``` -###### Restart Rsyslog +6. Restart Rsyslog: ```shell $ sudo service rsyslog restart diff --git a/docs/_include/log-shipping/stack.md b/docs/_include/log-shipping/stack.md index 7bd11590..18332395 100644 --- a/docs/_include/log-shipping/stack.md +++ b/docs/_include/log-shipping/stack.md @@ -1,4 +1,4 @@ -##### Create new stack +#### Create new stack To deploy this project, click the button that matches the region you wish to deploy your Stack to: @@ -23,7 +23,7 @@ To deploy this project, click the button that matches the region you wish to dep | `ca-central-1` | [![Deploy to AWS](https://dytvr9ot2sszz.cloudfront.net/logz-docs/lights/LightS-button.png)](https://console.aws.amazon.com/cloudformation/home?region=ca-central-1#/stacks/create/review?templateURL=https://logzio-aws-integrations-ca-central-1.s3.amazonaws.com/s3-hook/0.4.2/sam-template.yaml&stackName=logzio-s3-hook¶m_logzioToken=<>¶m_logzioListener=https://<>:8071) | -##### Specify stack details +#### Specify stack details Specify the stack details as per the table below, check the checkboxes and select **Create stack**. @@ -37,39 +37,31 @@ Specify the stack details as per the table below, check the checkboxes and selec | `pathToFields` | Fields from the path to your logs directory that you want to add to the logs. For example, `org-id/aws-type/account-id` will add each of the fields `ord-id`, `aws-type` and `account-id` to the logs that are fetched from the directory that this path refers to. | - | -##### Add trigger +#### Add trigger -Give the stack a few minutes to be deployed. +After deploying the stack, wait a few minutes for it to complete. Once your Lambda function is ready, you'll need to manually add a trigger due to CloudFormation limitations: -Once your Lambda function is ready, you'll need to manually add a trigger. This is due to Cloudformation limitations. +1. Navigate to the function's page and click on **Add trigger**. -Go to the function's page, and click on **Add trigger**. +2. 
Choose **S3** as a trigger, and fill in:
+
+   - **Bucket**: Your bucket name.
+   - **Event type**: Select `All object create events`.
+   - **Prefix** and **Suffix**: Leave these fields empty.
+
+   Confirm the checkbox, and click **Add**.

-![Step 5 screenshot](https://dytvr9ot2sszz.cloudfront.net/logz-docs/control-tower/s3-hook-stack-05.png)

-Then, choose **S3** as a trigger, and fill in:

-- **Bucket**: Your bucket name.
-- **Event type**: Choose option `All object create events`.
-- Prefix and Suffix should be left empty.
-Confirm the checkbox, and click **Add*.

-![Step 5 screenshot](https://dytvr9ot2sszz.cloudfront.net/logz-docs/control-tower/s3-hook-stack-06.png)

-##### Send logs
+#### Send logs

-That's it. Your function is configured.
-Once you upload new files to your bucket, it will trigger the function, and the logs will be sent to your Logz.io account.
+Your function is now configured. When you upload new files to your bucket, the function will be triggered, and the logs will be sent to your Logz.io account.

-###### Parsing
+#### Parsing

-S3 Hook will automatically parse logs in the following cases:
+The S3 Hook will automatically parse logs if the object's path contains the phrase `cloudtrail` (case insensitive).

-- The object's path contains the phrase `cloudtrail` (case insensitive).

-##### Check Logz.io for your logs
+#### Check your logs

-Give your logs some time to get from your system to ours, and then open [OpenSearch Dashboards](https://app.logz.io/#/dashboard/osd/discover/).
+Allow some time for data ingestion, then check your [OpenSearch Dashboards](https://app.logz.io/#/dashboard/osd/discover/).

-If you still don't see your logs, see Log shipping troubleshooting. \ No newline at end of file
+For troubleshooting, refer to our [log shipping troubleshooting](https://docs.logz.io/docs/user-guide/log-management/troubleshooting/log-shipping-troubleshooting/) guide. \ No newline at end of file
diff --git a/docs/_include/log-shipping/validate-yaml.md b/docs/_include/log-shipping/validate-yaml.md
index 25bcf74a..c6bbc29d 100644
--- a/docs/_include/log-shipping/validate-yaml.md
+++ b/docs/_include/log-shipping/validate-yaml.md
@@ -1,7 +1,7 @@
-##### Download and validate the file
+#### Download and validate configuration

When you're done adding your sources, click **Make the config file** to download it.

You can compare it to our [sample configuration](https://raw.githubusercontent.com/logzio/logz-docs/master/shipping-config-samples/logz-filebeat-config.yml) if you have questions.

-If you've edited the file manually, it's a good idea to run it through a YAML validator to rule out indentation errors, clean up extra characters, and check if your yml file is valid. ([Yamllint.com](http://www.yamllint.com/) is a great choice.) \ No newline at end of file
+Validate the file using a YAML validator tool, such as [Yamllint.com](http://www.yamllint.com/). \ No newline at end of file
diff --git a/docs/_include/metric-shipping/aws-metrics-new.md b/docs/_include/metric-shipping/aws-metrics-new.md
index cbe70637..4341ecff 100644
--- a/docs/_include/metric-shipping/aws-metrics-new.md
+++ b/docs/_include/metric-shipping/aws-metrics-new.md
@@ -1,14 +1,14 @@
**Before you begin, you'll need**:

-* An active account with Logz.io
+* An active Logz.io account

-## Configure AWS to forward metrics to Logz.io
+### Configure AWS to forward metrics to Logz.io

-### Set the required minimum IAM permissions
+**1. Set the required minimum IAM permissions**

Make sure you have configured the minimum required IAM permissions as follows:

* **Amazon S3**:
  - `s3:CreateBucket`
@@ -61,7 +61,7 @@ Make sure you have configured the minimum required IAM permissions as follows:
  - `cloudformation:ListStackResources`

-### Create Stack in the relevant region
+**2. Create Stack in the relevant region**

To deploy this project, click the button that matches the region you wish to deploy your Stack to:

@@ -98,25 +98,29 @@ To deploy this project, click the button that matches the region you wish to dep

| `il-central-1` | [![Deploy to AWS](https://dytvr9ot2sszz.cloudfront.net/logz-docs/lights/LightS-button.png)](https://console.aws.amazon.com/cloudformation/home?region=il-central-1#/stacks/create/review?templateURL=https://logzio-aws-integrations-il-central-1.s3.amazonaws.com/metric-stream-helpers/aws/1.3.4/sam-template.yaml&stackName=logzio-metric-stream&param_logzioToken=<>&param_logzioListener=https://<>:8053) |

-### Specify stack details
+**3. Specify stack details**

Specify the stack details as per the table below, check the checkboxes and select **Create stack**.

| Parameter | Description | Required/Default |
|--------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------|
-| `logzioListener` | The Logz.io listener URL for your region. (For more details, see the [regions page](https://docs.logz.io/docs/user-guide/admin/hosting-regions/account-region/). For example - `https://listener.logz.io:8053` | **Required** |
+| `logzioListener` | Logz.io listener URL for your region (for more details, see the [regions page](https://docs.logz.io/docs/user-guide/admin/hosting-regions/account-region/)), e.g., `https://listener.logz.io:8053`. | **Required** |
| `logzioToken` | Your Logz.io metrics shipping token. | **Required** |
-| `awsNamespaces` | Comma-separated list of the AWS namespaces you want to monitor. See [this list](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/aws-services-cloudwatch-metrics.html) of namespaces. If you want to automatically add all namespaces, use value `all-namespaces`. | At least one of `awsNamespaces` or `customNamespace` is required |
-| `customNamespace` | A custom namespace for CloudWatch metrics. This is used to specify a namespace unique to your setup, separate from the standard AWS namespaces. | At least one of `awsNamespaces` or `customNamespace` is required |
-| `logzioDestination` | Your Logz.io destination URL. | **Required** |
-| `httpEndpointDestinationIntervalInSeconds` | The length of time, in seconds, that Kinesis Data Firehose buffers incoming data before delivering it to the destination. | `60` |
-| `httpEndpointDestinationSizeInMBs` | The size of the buffer, in MBs, that Kinesis Data Firehose uses for incoming data before delivering it to the destination. | `5` |
+| `awsNamespaces` | Comma-separated list of AWS namespaces to monitor. See [this list](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/aws-services-cloudwatch-metrics.html) of namespaces. Use the value `all-namespaces` to automatically add all namespaces. | At least one of `awsNamespaces` or `customNamespace` is required |
+| `customNamespace` | A custom namespace for CloudWatch metrics. 
Used to specify a namespace unique to your setup, separate from the standard AWS namespaces. | At least one of `awsNamespaces` or `customNamespace` is required | +| `logzioDestination` | Your Logz.io destination URL. Choose the relevant endpoint from the drop down list based on your Logz.io account region. | **Required** | +| `httpEndpointDestinationIntervalInSeconds` | Buffer time in seconds before Kinesis Data Firehose delivers data. | `60` | +| `httpEndpointDestinationSizeInMBs` | Buffer size in MBs before Kinesis Data Firehose delivers data. | `5` | | `debugMode` | Enable debug mode for detailed logging (true/false). | false | -### Check Logz.io for your metrics -Give your data some time to get from your system to ours, then log in to your Logz.io Metrics account, and open [the Logz.io Metrics tab](https://app.logz.io/#/dashboard/metrics/). + +**4. View your metrics** + + +Allow some time for data ingestion, then open your [Logz.io metrics account](https://app.logz.io/#/dashboard/metrics/). + diff --git a/docs/_include/metric-shipping/aws-metrics.md b/docs/_include/metric-shipping/aws-metrics.md index ffe8b6e8..ff9ab24b 100644 --- a/docs/_include/metric-shipping/aws-metrics.md +++ b/docs/_include/metric-shipping/aws-metrics.md @@ -1,6 +1,6 @@ **Before you begin, you'll need**: -* An active account with Logz.io +* An active Logz.io account diff --git a/docs/_include/metric-shipping/custom-dashboard.html b/docs/_include/metric-shipping/custom-dashboard.html index ce223ae4..049741bc 100644 --- a/docs/_include/metric-shipping/custom-dashboard.html +++ b/docs/_include/metric-shipping/custom-dashboard.html @@ -1 +1 @@ -Log in to your Logz.io account and navigate to the current instructions page [inside the Logz.io app](https://app.logz.io/#/dashboard/send-your-data/prometheus-sources/{{page.slug}}). \ No newline at end of file +Navigate to the instructions page within [the Logz.io app](https://app.logz.io/#/dashboard/send-your-data/prometheus-sources/{{page.slug}}). \ No newline at end of file diff --git a/docs/_include/tracing-shipping/collector-run-note.md b/docs/_include/tracing-shipping/collector-run-note.md index ff950e4f..7779f5ef 100644 --- a/docs/_include/tracing-shipping/collector-run-note.md +++ b/docs/_include/tracing-shipping/collector-run-note.md @@ -1,3 +1,9 @@ :::note -Normally, when you run the OTEL collector in a Docker container, your application will run in separate containers on the same host. In this case, you need to make sure that all your application containers share the same network as the OTEL collector container. One way to achieve this, is to run all containers, including the OTEL collector, with a Docker-compose configuration. Docker-compose automatically makes sure that all containers with the same configuration are sharing the same network. -::: \ No newline at end of file +When running the OTEL collector in a Docker container, your application should run in separate containers on the same host network. **Ensure all containers share the same network**. Using Docker Compose ensures that all containers, including the OTEL collector, share the same network configuration automatically. 
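+
+A minimal sketch of such a setup (service and image names here are illustrative, not prescriptive):
+
+```yaml
+services:
+  otel-collector:
+    image: otel/opentelemetry-collector-contrib:0.78.0
+    ports:
+      - "4317:4317"
+  my-app:
+    image: my-app:latest
+    environment:
+      # Compose places both services on the same default network,
+      # so the collector is reachable by its service name
+      - OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4317
+```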
+::: + + + + + + diff --git a/docs/_include/tracing-shipping/docker.md b/docs/_include/tracing-shipping/docker.md index a64a62a3..51e82d5d 100644 --- a/docs/_include/tracing-shipping/docker.md +++ b/docs/_include/tracing-shipping/docker.md @@ -1,10 +1,10 @@ -### Pull the Docker image for the OpenTelemetry collector +#### Pull the Docker image for the OpenTelemetry collector ```shell docker pull otel/opentelemetry-collector-contrib:0.78.0 ``` -### Create a configuration file +#### Create a configuration file Create a file `config.yaml` with the following content: @@ -68,7 +68,7 @@ service: {@include: ../../_include/tracing-shipping/replace-tracing-token.html} -##### Tail Sampling +#### Tail Sampling {@include: ../../_include/tracing-shipping/tail-sampling.md} @@ -100,7 +100,7 @@ If you already have an OpenTelemetry installation, add the following parameters {@include: ../../_include/tracing-shipping/replace-tracing-token.html} -An example configuration file looks as follows: +Here is an example configuration file: ```yaml receivers: @@ -158,14 +158,11 @@ service: {@include: ../../_include/tracing-shipping/replace-tracing-token.html} -{@include: ../../_include/tracing-shipping/tail-sampling.md} - - -##### Run the container +#### Run the container Mount the `config.yaml` as volume to the `docker run` command and run it as follows. -###### Linux +##### Linux ``` docker run \ @@ -177,7 +174,7 @@ otel/opentelemetry-collector-contrib:0.78.0 Replace `` to the path to the `config.yaml` file on your system. -###### Windows +##### Windows ``` docker run \ @@ -193,4 +190,4 @@ docker run \ -p 4317:4317 \ -p 55681:55681 \ otel/opentelemetry-collector-contrib:0.78.0 -``` +``` \ No newline at end of file diff --git a/docs/_include/tracing-shipping/dotnet-framework-steps.md b/docs/_include/tracing-shipping/dotnet-framework-steps.md index 937c08ae..fef292a8 100644 --- a/docs/_include/tracing-shipping/dotnet-framework-steps.md +++ b/docs/_include/tracing-shipping/dotnet-framework-steps.md @@ -1,4 +1,4 @@ -##### Download instrumentation packages +#### Download instrumentation packages Run the following command from the application directory: @@ -9,7 +9,7 @@ dotnet add package OpenTelemetry.Exporter.OpenTelemetryProtocol dotnet add package OpenTelemetry.Instrumentation.AspNet ``` -##### Modify the Web.Config file +#### Modify the Web.Config file Add a required HttpModule to the Web.Config file as follows: @@ -25,7 +25,7 @@ Add a required HttpModule to the Web.Config file as follows: ``` -##### Enable instrumentation in the code +#### Enable instrumentation in the code Add the following code to the Global.asax.cs file: diff --git a/docs/_include/tracing-shipping/dotnet-steps.md b/docs/_include/tracing-shipping/dotnet-steps.md index edba24e2..4dc11b20 100644 --- a/docs/_include/tracing-shipping/dotnet-steps.md +++ b/docs/_include/tracing-shipping/dotnet-steps.md @@ -1,4 +1,4 @@ -##### Download instrumentation packages +#### Download instrumentation packages Run the following command from the application directory: @@ -9,7 +9,7 @@ dotnet add package OpenTelemetry.Instrumentation.AspNetCore dotnet add package OpenTelemetry.Extensions.Hosting ``` -##### Enable instrumentation in the code +#### Enable instrumentation in the code Add the following configuration to the beginning of the Startup.cs file: diff --git a/docs/_include/tracing-shipping/node-steps.md b/docs/_include/tracing-shipping/node-steps.md index c15937e3..1c639e2a 100644 --- a/docs/_include/tracing-shipping/node-steps.md +++ 
b/docs/_include/tracing-shipping/node-steps.md @@ -1,6 +1,5 @@ -### Download instrumentation packages +#### Download instrumentation packages -Run the following command from the application directory: ```shell npm install --save @opentelemetry/api @@ -13,9 +12,9 @@ npm install --save @opentelemetry/auto-instrumentations-node npm install --save @opentelemetry/sdk-node ``` -### Create a tracer file +#### Create a tracer file -In the directory of your application file, create a file named `tracer.js` with the following configuration: +In your application's directory, create a file named `tracer.js` with the following configuration: ```javascript "use strict"; diff --git a/docs/_include/tracing-shipping/otel-troubleshooting.md b/docs/_include/tracing-shipping/otel-troubleshooting.md index 8bf22a5c..d6c2dc8b 100644 --- a/docs/_include/tracing-shipping/otel-troubleshooting.md +++ b/docs/_include/tracing-shipping/otel-troubleshooting.md @@ -1,26 +1,24 @@ -This section contains some guidelines for handling errors that you may encounter when trying to collect traces with OpenTelemetry. +## Troubleshooting +If traces are not being sent despite instrumentation, follow these steps: -## Problem: No traces are sent -The code has been instrumented, but the traces are not being sent. - -##### Possible cause - Collector not installed +### Collector not installed The OpenTelemetry collector may not be installed on your system. -###### Suggested remedy +**Suggested remedy** -Check if you have an OpenTelemetry collector installed and configured to receive traces from your hosts. +Ensure the OpenTelemetry collector is installed and configured to receive traces from your hosts. -### Possible cause - Collector path not configured +### Collector path not configured -If the collector is installed, it may not have the correct endpoint configured for the receiver. +The collector may not have the correct endpoint configured for the receiver. -###### Suggested remedy +**Suggested remedy** -1. Check that the configuration file of the collector lists the following endpoints: +1. Verify the configuration file lists the following endpoints: ```yaml receivers: @@ -32,29 +30,29 @@ If the collector is installed, it may not have the correct endpoint configured f endpoint: "0.0.0.0:4318" ``` -2. In the instrumentation code, make sure that the endpoint is specified correctly. You can use Logz.io's [integrations hub](https://app.logz.io/#/dashboard/integrations/collectors?tags=Tracing) to ship your data. +2. Ensure the endpoint is correctly specified in the instrumentation code. Use Logz.io's [integrations hub](https://app.logz.io/#/dashboard/integrations/collectors?tags=Tracing) to ship your data. + -##### Possible cause - Traces not genereated -If the collector is installed and the endpoints are properly configured, the instrumentation code may be incorrect. +### Traces not generated +The instrumentation code may be incorrect even if the collector and endpoints are properly configured. -###### Suggested remedy + +**Suggested remedy** 1. Check if the instrumentation can output traces to a console exporter. 2. Use a web-hook to check if the traces are going to the output. -3. Use the metrics endpoint of the collector (http://`<>`:8888/metrics) to see the number of spans received per receiver and the number of spans sent to the Logz.io exporter. - -* Replace `<>` with the address of your collector host, e.g. `localhost`, if the collector is hosted locally. +3. 
Check the metrics endpoint `(http://<>:8888/metrics)` to see spans received and sent. Replace `<>` with your collector's address. -If the above steps do not work, refer to Logz.io's [integrations hub](https://app.logz.io/#/dashboard/integrations/collectors?tags=Tracing) and re-instrument the application. +If issues persist, refer to Logz.io's [integrations hub](https://app.logz.io/#/dashboard/integrations/collectors?tags=Tracing) and re-instrument the application. -### Possible cause - Wrong exporter/protocol/endpoint +### Wrong exporter/protocol/endpoint -If traces are generated but not send, the collector may be using incorrect exporter, protocol and/or endpoint. +Incorrect exporter, protocol, or endpoint configuration. The correct endpoints are: @@ -68,9 +66,10 @@ The correct endpoints are: endpoint: "<>:4318/v1/traces" ``` -###### Suggested remedy +**Suggested remedy** + +1. Activate `debug` logs in the collector's configuration -1. Activate `debug` logs in the configuration file of the collector as follows: ```yaml service: @@ -83,17 +82,17 @@ Debug logs indicate the status code of the http/https post request. If the post request is not successful, check if the collector is configured to use the correct exporter, protocol, and/or endpoint. -If the post request is successful, there will be an additional log with the status code 200. If the post request failed for some reason, there would be another log with the reason for the failure. +A successful post request will log status code 200; failure reasons will also be logged. -##### Possible cause - Collector failure +### Collector failure -If the `debug` logs are sent, but the traces are still not generated, the collector logs need to be investigated. +The collector may fail to generate traces despite sending `debug` logs. -###### Suggested remedy +**Suggested remedy** -1. On Linux and MacOS, see the logs for the collector: +1. On Linux and MacOS, view collector logs: ```shell journalctl | grep otelcol @@ -109,26 +108,28 @@ If the `debug` logs are sent, but the traces are still not generated, the collec This is the endpoint to access the collector metrics in order to see different events that might happen within the collector - receiving spans, sending spans as well as other errors. -##### Possible cause - Exporter failure +### Exporter failure + +The exporter configuration may be incorrect, causing trace export issues. -Traces may not be generated if the exporter is not configured properly. -###### Suggested remedy +**Suggested remedy** If you are unable to export traces to a destination, this may be caused by the following: -* There is a network configuration issue -* The exporter configuration is incorrect -* The destination is unavailable +* There is a network configuration issue. +* The exporter configuration is incorrect. +* The destination is unavailable. To investigate this issue: 1. Make sure that the `exporters` and `service: pipelines` are configured correctly. -2. Check the collector logs as well as `zpages` for potential issues. +2. Check the collector logs and `zpages` for potential issues. 3. Check your network configuration, such as firewall, DNS, or proxy. -For example, those metrics can provide information about the exporter: +Metrics like the following can provide insights: + ```shell # HELP otelcol_exporter_enqueue_failed_metric_points Number of metric points failed to be added to the sending queue. 
@@ -138,22 +139,23 @@ otelcol_exporter_enqueue_failed_metric_points{exporter="logging",service_instanc ``` -##### Possible cause - Receiver failure +### Receiver failure -Traces may not be generated if the receiver is not configured properly. +The receiver may not be configured correctly. -###### Suggested remedy +**Suggested remedy** If you are unable to receive data, this may be caused by the following: -* There is a network configuration issue -* The receiver configuration is incorrect -* The receiver is defined in the receivers section, but not enabled in any pipelines -* The client configuration is incorrect +* There is a network configuration issue. +* The receiver configuration is incorrect. +* The receiver is defined in the receivers section, but not enabled in any pipelines. +* The client configuration is incorrect. + +Metrics for receivers can help diagnose issues: -Those metrics can provide about the receiver: ```shell # HELP otelcol_receiver_accepted_spans Number of spans successfully pushed into the pipeline. diff --git a/docs/_include/tracing-shipping/ruby-steps.md b/docs/_include/tracing-shipping/ruby-steps.md index 6cd9c9ad..889c65d7 100644 --- a/docs/_include/tracing-shipping/ruby-steps.md +++ b/docs/_include/tracing-shipping/ruby-steps.md @@ -1,4 +1,4 @@ -##### Download instrumentation packages +#### Download instrumentation packages Run the following command from the application directory: @@ -8,7 +8,7 @@ gem install opentelemetry-exporter-otlp -v 0.26.1 gem install opentelemetry-instrumentation-all -v 0.40.0 ``` -##### Enable instrumentation in the code +#### Enable instrumentation in the code Add the following configuration to the `Gemfile`: @@ -34,7 +34,7 @@ end Replace `` with the name of your tracing service defined earlier. -##### Install the Bundler +#### Install the Bundler Run the following command: @@ -44,7 +44,7 @@ bundle install ``` -##### Configure data exporter +#### Configure data exporter Run the following command: diff --git a/docs/_include/tracing-shipping/tail-sampling.md b/docs/_include/tracing-shipping/tail-sampling.md index b52869ac..7bb3861e 100644 --- a/docs/_include/tracing-shipping/tail-sampling.md +++ b/docs/_include/tracing-shipping/tail-sampling.md @@ -1,10 +1,11 @@ -The `tail_sampling` defines the decision to sample a trace after the completion of all the spans in a request. By default, this configuration collects all traces that have a span that was completed with an error, all traces that are slower than 1000 ms, and 10% of the rest of the traces. +`tail_sampling` defines which traces to sample after all spans in a request are completed. By default, it collects all traces with an error span, traces slower than 1000 ms, and 10% of all other traces. -You can add more policy configurations to the processor. For more on this, refer to [OpenTelemetry Documentation](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/tailsamplingprocessor/README.md). + +Additional policy configurations can be added to the processor. For more details, refer to the [OpenTelemetry Documentation](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/tailsamplingprocessor/README.md). The configurable parameters in the Logz.io default configuration are: | Parameter | Description | Default | |---|---|---| -| threshold_ms | Threshold for the span latency - all traces slower than the threshold value will be filtered in. | 1000 | -| sampling_percentage | Sampling percentage for the probabilistic policy. 
| 10 | +| threshold_ms | Threshold for the span latency - traces slower than this value will be included. | 1000 | +| sampling_percentage | Percentage of traces to sample using the probabilistic policy. | 10 | \ No newline at end of file diff --git a/docs/shipping/AWS/aws-api-gateway.md b/docs/shipping/AWS/aws-api-gateway.md index de8cbca9..233d55a6 100644 --- a/docs/shipping/AWS/aws-api-gateway.md +++ b/docs/shipping/AWS/aws-api-gateway.md @@ -115,7 +115,7 @@ Deploy this integration to send your Amazon API Gateway metrics to Logz.io. This integration creates a Kinesis Data Firehose delivery stream that links to your Amazon API Gateway metrics stream and then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags. -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. @@ -125,7 +125,7 @@ This integration creates a Kinesis Data Firehose delivery stream that links to y {@include: ../../_include/metric-shipping/aws-metrics-new.md} -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. diff --git a/docs/shipping/AWS/aws-app-elb.md b/docs/shipping/AWS/aws-app-elb.md index 115f6eb8..4143e257 100644 --- a/docs/shipping/AWS/aws-app-elb.md +++ b/docs/shipping/AWS/aws-app-elb.md @@ -58,7 +58,7 @@ Deploy this integration to send your Amazon App ELB metrics to Logz.io. This integration creates a Kinesis Data Firehose delivery stream that links to your Amazon App ELB metrics stream and then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags. -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. @@ -66,7 +66,7 @@ This integration creates a Kinesis Data Firehose delivery stream that links to y {@include: ../../_include/metric-shipping/aws-metrics-new.md} -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. diff --git a/docs/shipping/AWS/aws-classic-elb.md b/docs/shipping/AWS/aws-classic-elb.md index 33c8e100..563ce880 100644 --- a/docs/shipping/AWS/aws-classic-elb.md +++ b/docs/shipping/AWS/aws-classic-elb.md @@ -67,7 +67,7 @@ Deploy this integration to send your Amazon Classic ELB metrics to Logz.io. This integration creates a Kinesis Data Firehose delivery stream that links to your Amazon Classic ELB metrics stream and then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags. -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. 
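Circling back to the `tail_sampling` defaults described in the table above: the following is a rough, illustrative sketch of how policies with those parameters could be spelled out for the OpenTelemetry tail sampling processor. The policy names and file name are assumptions, not taken from the Logz.io configuration:

```shell
# Sketch of the default tail_sampling behavior described above: keep traces
# with an error span, traces slower than threshold_ms, and 10% of the rest.
# Policy names and the output file name are illustrative assumptions.
cat <<'EOF' > tail-sampling.yaml
processors:
  tail_sampling:
    policies:
      - name: error-traces            # keep every trace containing an error span
        type: status_code
        status_code:
          status_codes: [ERROR]
      - name: slow-traces             # keep traces slower than threshold_ms
        type: latency
        latency:
          threshold_ms: 1000
      - name: probabilistic-traces    # keep 10% of the remaining traces
        type: probabilistic
        probabilistic:
          sampling_percentage: 10
EOF
```

Because the processor keeps a trace if any policy matches, this composition approximates the "errors, plus slow traces, plus 10% of everything else" behavior described above.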
@@ -76,7 +76,7 @@ This integration creates a Kinesis Data Firehose delivery stream that links to y {@include: ../../_include/metric-shipping/aws-metrics-new.md} -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. diff --git a/docs/shipping/AWS/aws-cloudfront.md b/docs/shipping/AWS/aws-cloudfront.md index bc6b5ad8..33cdff40 100644 --- a/docs/shipping/AWS/aws-cloudfront.md +++ b/docs/shipping/AWS/aws-cloudfront.md @@ -67,7 +67,7 @@ Deploy this integration to send your Amazon CloudFront metrics to Logz.io. This integration creates a Kinesis Data Firehose delivery stream that links to your Amazon CloudFront metrics stream and then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags. -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. @@ -76,7 +76,7 @@ This integration creates a Kinesis Data Firehose delivery stream that links to y {@include: ../../_include/metric-shipping/aws-metrics-new.md} -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. diff --git a/docs/shipping/AWS/aws-dynamodb.md b/docs/shipping/AWS/aws-dynamodb.md index eae8535a..a01eacca 100644 --- a/docs/shipping/AWS/aws-dynamodb.md +++ b/docs/shipping/AWS/aws-dynamodb.md @@ -159,7 +159,7 @@ Deploy this integration to send your Amazon DynamoDB metrics to Logz.io. This integration creates a Kinesis Data Firehose delivery stream that links to your Amazon DynamoDB metrics stream and then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags. -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. @@ -169,7 +169,7 @@ This integration creates a Kinesis Data Firehose delivery stream that links to y {@include: ../../_include/metric-shipping/aws-metrics-new.md} -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. diff --git a/docs/shipping/AWS/aws-ebs.md b/docs/shipping/AWS/aws-ebs.md index c6efd2be..936f3746 100644 --- a/docs/shipping/AWS/aws-ebs.md +++ b/docs/shipping/AWS/aws-ebs.md @@ -26,7 +26,7 @@ Deploy this integration to send your Amazon EBS metrics to Logz.io. This integration creates a Kinesis Data Firehose delivery stream that links to your Amazon EBS metrics stream and then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags. -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. 
+Install the pre-built dashboard to enhance the observability of your metrics. @@ -36,7 +36,7 @@ This integration creates a Kinesis Data Firehose delivery stream that links to y {@include: ../../_include/metric-shipping/aws-metrics-new.md} -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. diff --git a/docs/shipping/AWS/aws-ec2-auto-scaling.md b/docs/shipping/AWS/aws-ec2-auto-scaling.md index eb46fc04..0f270414 100644 --- a/docs/shipping/AWS/aws-ec2-auto-scaling.md +++ b/docs/shipping/AWS/aws-ec2-auto-scaling.md @@ -163,7 +163,7 @@ Deploy this integration to send your Amazon EC2 Auto Scaling metrics to Logz.io. This integration creates a Kinesis Data Firehose delivery stream that links to your Amazon EC2 Auto Scaling metrics stream and then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags. -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. @@ -175,7 +175,7 @@ This integration creates a Kinesis Data Firehose delivery stream that links to y -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. diff --git a/docs/shipping/AWS/aws-ec2.md b/docs/shipping/AWS/aws-ec2.md index 7d18451f..05bf1cd3 100644 --- a/docs/shipping/AWS/aws-ec2.md +++ b/docs/shipping/AWS/aws-ec2.md @@ -190,7 +190,7 @@ Deploy this integration to send your Amazon EC2 metrics to Logz.io. This integration creates a Kinesis Data Firehose delivery stream that links to your Amazon EC2 metrics stream and then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags. -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. @@ -200,7 +200,7 @@ This integration creates a Kinesis Data Firehose delivery stream that links to y {@include: ../../_include/metric-shipping/aws-metrics-new.md} -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. diff --git a/docs/shipping/AWS/aws-ecs.md b/docs/shipping/AWS/aws-ecs.md index d8ad3a49..377cf098 100644 --- a/docs/shipping/AWS/aws-ecs.md +++ b/docs/shipping/AWS/aws-ecs.md @@ -156,7 +156,7 @@ Deploy this integration to send your Amazon ECS metrics to Logz.io. This integration creates a Kinesis Data Firehose delivery stream that links to your Amazon ECS metrics stream and then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags. 
-{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. @@ -167,7 +167,7 @@ This integration creates a Kinesis Data Firehose delivery stream that links to y -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. diff --git a/docs/shipping/AWS/aws-efs.md b/docs/shipping/AWS/aws-efs.md index 23bf8956..e225a9c8 100644 --- a/docs/shipping/AWS/aws-efs.md +++ b/docs/shipping/AWS/aws-efs.md @@ -27,7 +27,7 @@ Deploy this integration to send your Amazon EFS metrics to Logz.io. This integration creates a Kinesis Data Firehose delivery stream that links to your Amazon EFS metrics stream and then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags. -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. @@ -36,7 +36,7 @@ This integration creates a Kinesis Data Firehose delivery stream that links to y {@include: ../../_include/metric-shipping/aws-metrics-new.md} -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. diff --git a/docs/shipping/AWS/aws-eks.md b/docs/shipping/AWS/aws-eks.md index 18fd9e8c..5eda4b3f 100644 --- a/docs/shipping/AWS/aws-eks.md +++ b/docs/shipping/AWS/aws-eks.md @@ -19,10 +19,10 @@ The logzio-monitoring Helm Chart ships your EKS Fargate telemetry (logs, metrics ## Prerequisites -1. [Helm](https://helm.sh/) +* [Helm](https://helm.sh/) -Add Logzio-helm repository +* Add Logzio-helm repository `helm repo add logzio-helm https://logzio.github.io/logzio-helm && helm repo update` {@include: ../../_include/general-shipping/k8s-all-data.md} @@ -194,7 +194,7 @@ To customize your configuration, edit the `config` section in the `values.yaml` Give your metrics some time to get from your system to ours. -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. diff --git a/docs/shipping/AWS/aws-elasticache-redis.md b/docs/shipping/AWS/aws-elasticache-redis.md index a78b780e..63f9331d 100644 --- a/docs/shipping/AWS/aws-elasticache-redis.md +++ b/docs/shipping/AWS/aws-elasticache-redis.md @@ -26,7 +26,7 @@ Deploy this integration to send your Amazon ElastiCache for Redis metrics to Log This integration creates a Kinesis Data Firehose delivery stream that links to your Amazon ElastiCache for Redis metrics stream and then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags. -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. 
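These Kinesis Data Firehose metric integrations all create the same two kinds of resources, so after deploying any of them you can sanity-check the result from the AWS CLI. This is only a read-only sketch; the region is a placeholder:

```shell
# Optional sanity checks after deploying one of the metric integrations above:
# confirm that the CloudWatch metric stream and the Kinesis Data Firehose
# delivery stream were created. Region is a placeholder.
aws cloudwatch list-metric-streams --region us-east-1
aws firehose list-delivery-streams --region us-east-1
```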
@@ -36,7 +36,7 @@ This integration creates a Kinesis Data Firehose delivery stream that links to y {@include: ../../_include/metric-shipping/aws-metrics-new.md} -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. diff --git a/docs/shipping/AWS/aws-fsx.md b/docs/shipping/AWS/aws-fsx.md index 50d65742..4b874316 100644 --- a/docs/shipping/AWS/aws-fsx.md +++ b/docs/shipping/AWS/aws-fsx.md @@ -112,7 +112,7 @@ Deploy this integration to send your Amazon FSx - metrics to Logz.io. This integration creates a Kinesis Data Firehose delivery stream that links to your Amazon FSx metrics stream and then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags. -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. @@ -121,7 +121,7 @@ This integration creates a Kinesis Data Firehose delivery stream that links to y {@include: ../../_include/metric-shipping/aws-metrics-new.md} -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. diff --git a/docs/shipping/AWS/aws-kafka.md b/docs/shipping/AWS/aws-kafka.md index 6dec8920..e8004826 100644 --- a/docs/shipping/AWS/aws-kafka.md +++ b/docs/shipping/AWS/aws-kafka.md @@ -25,7 +25,7 @@ Deploy this integration to send your Amazon Managed Streaming for Apache Kafka ( This integration creates a Kinesis Data Firehose delivery stream that links to your Amazon Managed Streaming for Apache Kafka (MSK) metrics stream and then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags. -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. @@ -35,7 +35,7 @@ This integration creates a Kinesis Data Firehose delivery stream that links to y {@include: ../../_include/metric-shipping/aws-metrics-new.md} -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. diff --git a/docs/shipping/AWS/aws-kinesis-firehose.md b/docs/shipping/AWS/aws-kinesis-firehose.md index 202382c5..aa15fb39 100644 --- a/docs/shipping/AWS/aws-kinesis-firehose.md +++ b/docs/shipping/AWS/aws-kinesis-firehose.md @@ -16,18 +16,15 @@ drop_filter: [] ## Logs -:::important -The `services` and `customLogGroups` configurations do not work together. If you specify services, it will automatically send all logs from those services, regardless of any custom log groups you define. To collect logs only from specific log groups, do not use the `services` field. Instead, configure the desired log groups directly without adding services. 
-:::
-
This project deploys instrumentation that allows shipping Cloudwatch logs to Logz.io, with a Firehose Delivery Stream. It uses a Cloudformation template to create a Stack that deploys:

* Firehose Delivery Stream with Logz.io as the stream's destination.
* Lambda function that adds Subscription Filters to Cloudwatch Log Groups, as defined by user's input.
* Roles, log groups, and other resources that are necessary for this instrumentation.

-:::important
-This service sends all logs, regardless of custom log groups. When setting up a service, it automatically sends all logs from that service, even if you specify a particular log group. To solve this, directly configure the specific log group without adding a new service.
+
+:::info
+If you want to send logs from specific log groups, use `customLogGroups` instead of `services`, since specifying `services` automatically sends all logs from those services, regardless of any custom log groups you define.
:::

##### Auto-deploy the Stack

@@ -136,23 +133,15 @@ For a much easier and more efficient way to collect and send metrics, consider u

:::

-
-
Deploy this integration to send your Amazon Kinesis Data Firehose metrics to Logz.io.

This integration creates a Kinesis Data Firehose delivery stream that links to your Amazon Kinesis Data Firehose metrics stream and then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags.

-{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics.
-
-
-
-{@include: ../../_include/metric-shipping/generic-dashboard.html}
-

{@include: ../../_include/metric-shipping/aws-metrics-new.md}

-{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics.
+Install the pre-built dashboard to enhance the observability of your metrics.


diff --git a/docs/shipping/AWS/aws-lambda-extensions.md b/docs/shipping/AWS/aws-lambda-extensions.md
index 1fe2ca39..874c287c 100644
--- a/docs/shipping/AWS/aws-lambda-extensions.md
+++ b/docs/shipping/AWS/aws-lambda-extensions.md
@@ -16,7 +16,7 @@ drop_filter: []

Lambda Extensions enable tools to integrate deeply into the Lambda execution environment to control and participate in Lambda’s lifecycle. To read more about Lambda Extensions, [click here](https://docs.aws.amazon.com/lambda/latest/dg/runtimes-extensions-api.html).

-The Logz.io Lambda extension for logs, uses the AWS Extensions API and [AWS Logs API](https://docs.aws.amazon.com/lambda/latest/dg/runtimes-logs-api.html), and sends your Lambda Function Logs directly to your Logz.io account.
+The Logz.io Lambda extension for logs uses the AWS Extensions API and [AWS Logs API](https://docs.aws.amazon.com/lambda/latest/dg/runtimes-logs-api.html), and sends your Lambda function logs directly to your Logz.io account.

This repo is based on the [AWS lambda extensions sample](https://github.com/aws-samples/aws-lambda-extensions).

This extension is written in Go, but can be run with runtimes that support [extensions](https://docs.aws.amazon.com/lambda/latest/dg/using-extensions.html).

@@ -35,11 +35,17 @@ If you want to send all the logs by the time your Lambda function stops running,

This means that if your Lambda function goes into the `SHUTDOWN` phase, the extension will start running and send all logs that are in the queue.
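Returning to the Kinesis Data Firehose log shipping stack above: here is a minimal deployment sketch that ships only specific log groups. `services` and `customLogGroups` are the parameters discussed in that section; the template file, stack name, `logzioToken` parameter name, and log group names are illustrative assumptions:

```shell
# Sketch only: deploy the Firehose log shipping stack with customLogGroups and
# no services, so that just the listed log groups are subscribed.
# Template path, stack name, and logzioToken are placeholder assumptions.
aws cloudformation deploy \
  --template-file firehose-logs.yaml \
  --stack-name logzio-firehose-logs \
  --capabilities CAPABILITY_NAMED_IAM \
  --parameter-overrides \
      logzioToken="<<LOG-SHIPPING-TOKEN>>" \
      customLogGroups="/aws/lambda/my-function,/aws/ecs/my-service"
```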
-## Extension deployment options +## Deploying Logz.io logs extension You can deploy the extension via the AWS CLI or via the AWS Management Console. -## Deploying Logz.io logs extension via the AWS CLI +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; + + + + +## Deploying via the AWS CLI ### Deploy the extension and configuration @@ -55,7 +61,7 @@ aws lambda update-function-configuration \ ``` :::note -This command overwrites the existing function configuration. If you already have your own layers and environment variables for your function, list them as well. +This command overwrites the existing function configuration. If you already have your own layers and environment variables for your function, include them in the list. ::: @@ -63,60 +69,42 @@ This command overwrites the existing function configuration. If you already have |---|---|---| | `<>` | Name of the Lambda Function you want to monitor. |Required| | `<>` | A space-separated list of function layers to add to the function's execution environment. Specify each layer by its ARN, including the version. For the ARN, see the [**ARNs** table]({@include: ../../_include/log-shipping/lambda-xtension-tablink.md}) | | -| `<>` | Key-value pairs containing environment variables that are accessible from function code during execution. Should appear in the following format: `KeyName1=string,KeyName2=string`. For a list of all the environment variables for the extension, see the [**Lambda environment variables** table]{@include: ../../_include/log-shipping/lambda-xtension-tablink.md} | | - -### Run the function - -Use the following command. It may take more than one run of the function for the logs to start shipping to your Logz.io account. +| `<>` | Key-value pairs containing environment variables that are accessible from function code during execution. Should appear in the following format: `KeyName1=string,KeyName2=string`. For a list of all the environment variables for the extension, see the [**Lambda environment variables** table]{@include: ../../_include/log-shipping/lambda-xtension-tablink-indox.html} | | +#### Command example ```shell aws lambda update-function-configuration \ - --function-name <> \ - --layers [] \ - --environment "Variables={}" + --function-name exampleFunction \ + --layers arn:aws:lambda:us-east-1:486140753397:layer:LogzioLambdaExtensionLogs:14 \ + --environment "Variables={LOGZIO_LOGS_TOKEN=<>,LOGZIO_LISTENER=<>,ENABLE_PLATFORM_LOGS=true,GROK_PATTERNS='{}',LOGS_FORMAT='^\[%{NUMBER:logId}\] %{GREEDYDATA:message}',CUSTOM_FIELDS='fieldName1=fieldValue1,fieldName2=fieldValue2',JSON_FIELDS_UNDER_ROOT=true}" ``` -Your lambda logs will appear under the type `lambda-extension-logs`. - - -:::note -This command overwrites the existing function configuration. If you already have your own layers and environment variables for your function, include them in the list. -::: - - -### Check Logz.io for your logs - -Give your logs some time to get from your system to ours. - - -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your logs. 
- - - -{@include: ../../_include/metric-shipping/generic-dashboard.html} - #### Deleting the extension To delete the extension and its environment variables, use the following command: ```shell aws lambda update-function-configuration \ - --function-name some-func \ - --layers [] \ + --function-name <> \ + --layers \ --environment "Variables={}" ``` -:::note -This command overwrites the existing function configuration. If you already have your own layers and environment variables for your function, include them in the list. -::: - +:::info +This command overwrites the existing function configuration. If you already have your own layers and environment variables for your function, include them in the list: +```shell +aws lambda update-function-configuration \ + --function-name <> \ + --layers [<>] \ + --environment "Variables={<>}" +``` +::: -## Deploying Logz.io log extensions via the AWS Management Console - -You'll have to add the extension. - + + +## Deploying via the AWS Management Console ### Add the extension to your Lambda Function @@ -126,40 +114,38 @@ You'll have to add the extension. 2. In the page for the function, scroll down to the `Layers` section and choose `Add Layer`. ![Add layer](https://dytvr9ot2sszz.cloudfront.net/logz-docs/lambda_extensions/lambda-x_1-2.jpg) -3. Select the `Specify an ARN` option, then choose the ARN of the extension with the region code that matches your Lambda Function region from the [**ARNs** table]{@include: ../../_include/log-shipping/lambda-xtension-tablink.md} {@include: ../../_include/log-shipping/lambda-xtension-tablink-indox.html}, and click the `Add` button. +3. Select the `Specify an ARN` option, then choose the ARN of the extension with the region code that matches your Lambda Function region from the [**ARNs** table]{@include: ../../_include/log-shipping/lambda-xtension-tablink.md}, and click the `Add` button. ![Add ARN extension](https://dytvr9ot2sszz.cloudfront.net/logz-docs/lambda_extensions/lambda-x_1-3.jpg) ### Configure the extension parameters -Add the environment variables to the function, according to the [**Environment variables** table]{@include: ../../_include/log-shipping/lambda-xtension-tablink.md} {@include: ../../_include/log-shipping/lambda-xtension-tablink-indox.html}. - - -##### Run the function - -Run the function. It may take more than one run of the function for the logs to start shipping to your Logz.io account. -Your lambda logs will appear under the type `lambda-extension-logs`. - -### Check Logz.io for your logs +Add environment variables to the function, according to the [**Environment variables** table]{@include: ../../_include/log-shipping/lambda-xtension-tablink-indox.html}. -Give your logs some time to get from your system to ours. +#### Deleting the extension -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your logs. +- To delete the **extension layer**: In your function page, go to the **layers** panel. Click `edit`, select the extension layer, and click `save`. +- To delete the extension's **environment variables**: In your function page, select the `Configuration` tab, select `Environment variables`, click `edit`, and remove the variables that you added for the extension. - -{@include: ../../_include/metric-shipping/generic-dashboard.html} + + +### Check Logz.io for your logs +Give your logs some time to get from your system to ours. 
It may take more than one run of the function for the logs to start shipping to your Logz.io account. -#### Deleting the extension +:::info +Your lambda logs will appear under the type `lambda-extension-logs`. +::: -- To delete the **extension layer**: In your function page, go to the **layers** panel. Click `edit`, select the extension layer, and click `save`. -- To delete the extension's **environment variables**: In your function page, select the `Configuration` tab, select `Environment variables`, click `edit`, and remove the variables that you added for the extension. +#### Pre-Built content +Log in to your Logz.io account and navigate to the [current instructions page](https://app.logz.io/#/dashboard/integrations/Lambda-extensions) to install the pre-built dashboard to enhance the observability of your logs. + ## Environment Variables @@ -167,7 +153,7 @@ Give your logs some time to get from your system to ours. | Name | Description |Required/Default| | --- | --- | --- | | `LOGZIO_LOGS_TOKEN` | Your Logz.io log shipping [token](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping). | Required | -| `LOGZIO_LISTENER` | Your Logz.io listener address, with port 8070 (http) or 8071 (https). For example: `https://listener.logz.io:8071`. {@include: ../../_include/log-shipping/listener-var.md} | Required | +| `LOGZIO_LISTENER` | {@include: ../../_include/log-shipping/listener-var.md} For example: `https://listener.logz.io:8071`. | Required | | `LOGS_EXT_LOG_LEVEL` | Log level of the extension. Can be set to one of the following: `debug`, `info`, `warn`, `error`, `fatal`, `panic`. |Default: `info` | | `ENABLE_PLATFORM_LOGS` | The platform log captures runtime or execution environment errors. Set to `true` if you wish the platform logs will be shipped to your Logz.io account. | Default: `false` | | `GROK_PATTERNS` | Must be set with `LOGS_FORMAT`. Use this if you want to parse your logs into fields. A minified JSON list that contains the field name and the regex that will match the field. To understand more see the [parsing logs](https://docs.logz.io/docs/shipping/aws/lambda-extensions/#parsing-logs) section. | - | @@ -217,21 +203,9 @@ Give your logs some time to get from your system to ours. 
| Asia Pacific (Seoul) | `ap-northeast-2` | `arn:aws:lambda:ap-northeast-2:486140753397:layer:LogzioLambdaExtensionLogsArm:5` | | Europe (London) | `eu-west-2` | `arn:aws:lambda:eu-west-2:486140753397:layer:LogzioLambdaExtensionLogsArm:4` | | Europe (Paris) | `eu-west-3` | `arn:aws:lambda:eu-west-3:486140753397:layer:LogzioLambdaExtensionLogsArm:5` | + | - -## Lambda extension versions - -| Version | Supported Runtimes | -|---------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| 0.3.3 | `.NET 6`, `.NET 8`, `provided.al2`, `provided.al2023`, `Java 8`, `Java 11`, `Java 17`, `Node.js 16`, `Node.js 18`, `Python 3.8`, `Python 3.9`, `Python 3.10`, `Python 3.11`, `Python 3.12`, `Ruby 3.2`, `Custom Runtime` | -| 0.3.2 | `.NET 6`, `Go 1.x`, `Java 17`, `Node.js 18`, `Python 3.11`, `Ruby 3.2`, `Java 11`, `Java 8`, `Node.js 16`, `Python 3.10`, `Python 3.9`, `Python 3.8`, `Ruby 2.7`, `Custom Runtime` | -| 0.3.1 | All runtimes | -| 0.3.0 | `.NET Core 3.1`, `Java 11`, `Java 8`, `Node.js 14.x`, `Node.js 12.x`, `Python 3.9`, `Python 3.8`, `Python 3.7`, `Ruby 2.7`, `Custom runtime` | -| 0.2.0 | `.NET Core 3.1`, `Java 11`, `Java 8`, `Node.js 14.x`, `Node.js 12.x`, `Python 3.9`, `Python 3.8`, `Python 3.7`, `Ruby 2.7`, `Custom runtime` | -| 0.1.0 | `.NET Core 3.1`, `Java 11`, `Java 8`, `Node.js 14.x`, `Node.js 12.x`, `Node.js 10.x`, `Python 3.8`, `Python 3.7`, `Ruby 2.7`, `Ruby 2.5`, `Custom runtime` | -| 0.0.1 | `Python 3.7`, `Python 3.8` | - -:::note +:::info If your AWS region is not in the list, please reach out to Logz.io's support or open an issue in the [project's Github repo](https://github.com/logzio/logzio-lambda-extensions). ::: @@ -256,7 +230,7 @@ May 04 2024 10:50:46.532 logzio_sender: Successfully sent bulk to logz.io, size: In Logz.io we wish to have `timestamp`, `app_name` and `message` in their own fields. To do so, we'll set the environment variables as follows: -##### GROK_PATTERNS +#### GROK_PATTERNS The `GROK_PATTERNS` variable contains definitions of custom grok patterns and should be in a JSON format. - key - is the custom pattern name. @@ -273,7 +247,7 @@ Meaning we can set `GROK_PATTERNS` as: {"MY_CUSTOM_TIMESTAMP":"\\w+ \\d{2} \\d{4} \\d{2}:\\d{2}:\\d{2}\\.\\d{3}"} ``` -##### LOGS_FORMAT +#### LOGS_FORMAT The `LOGS_FORMAT` variable contains the full grok pattern that will match the format of the logs, using known patterns and the custom patterns that were defined in `GROK_PATTERNS` (if defined). The variable should be in a grok format: @@ -281,7 +255,7 @@ The variable should be in a grok format: %{GROK_PATTERN_NAME:WANTED_FIELD_NAME} ``` -:::note +:::warning The `WANTED_FIELD_NAME` cannot contain a dot (`.`). ::: @@ -311,10 +285,10 @@ This project uses an external module for its grok parsing. To learn more about i ### Nested fields -As of v0.2.0 the extension can detect if a log is in a JSON format, and to parse the fields to appear as nested fields in the Logz.io app. +**As of v0.2.0** the extension can detect if a log is in a JSON format, and to parse the fields to appear as nested fields in the Logz.io app. For example, the following log: -``` +```json { "foo": "bar", "field2": "val2" } ``` @@ -324,11 +298,11 @@ message_nested.foo: bar message_nested.field2: val2 ``` -As of v0.3.3, to have the fields nested under the root (instead of under `message_nested`), set the `JSON_FIELDS_UNDER_ROOT` environment variable as `true`. 
+**As of v0.3.3**, to have the fields nested under the root (instead of under `message_nested`), set the `JSON_FIELDS_UNDER_ROOT` environment variable as `true`. It is useful in cases where the passed object is in fact meant to be that of a message plus metadata fields. For example, the following log: -``` +```json { "message": "hello", "foo": "bar" } ``` @@ -339,10 +313,3 @@ foo: bar ``` **Note:** The user must insert a valid JSON. Sending a dictionary or any key-value data structure that is not in a JSON format will cause the log to be sent as a string. - -## Upgrading from v0.0.1 to v0.1.0 - -If you have Lambda extension v0.0.1 and you want to upgrade to v0.1.0+, to ensure that your logs are correctly sent to Logz.io: - -1. Delete the existing extension layer, its dependencies, and environment variables as decribed below in this topic. -2. Deploy the new extension, its dependencies, and configuration as described below in this topic. diff --git a/docs/shipping/AWS/aws-lambda.md b/docs/shipping/AWS/aws-lambda.md index 208eb551..4feaa392 100644 --- a/docs/shipping/AWS/aws-lambda.md +++ b/docs/shipping/AWS/aws-lambda.md @@ -18,23 +18,15 @@ drop_filter: [] ## Metrics +Deploy this integration to send Amazon Lambda metrics to Logz.io. It creates a Kinesis Data Firehose delivery stream to send metrics to your Logz.io account and a Lambda function to add AWS namespaces and collect resource tags. -Deploy this integration to send your Amazon Lambda metrics to Logz.io. - - -This integration creates a Kinesis Data Firehose delivery stream that links to your Amazon Lambda metrics stream and then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags. - -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. - - - -{@include: ../../_include/metric-shipping/generic-dashboard.html} +Install the pre-built dashboard to enhance the observability of your metrics. {@include: ../../_include/metric-shipping/aws-metrics-new.md} -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. @@ -43,16 +35,9 @@ This integration creates a Kinesis Data Firehose delivery stream that links to y ## Traces -Deploy this integration for automatic instrumentation of your Node.js or Go applications on AWS Lambda, enabling trace forwarding to your Logz.io account. It involves adding a specialized OpenTelemetry collector layer. Additionally, Node.js applications require an extra layer for auto-instrumentation. Environment variable configurations are necessary for both Node.js and Go integrations. This setup does not require modifications to your existing application code. - - - - -For detailed instructions on implementing this integration for your specific application, please refer to the following documentation: - -- **Go Applications**: For deploying traces from Go applications on AWS Lambda using OpenTelemetry, visit [Traces from Go on AWS Lambda using OpenTelemetry](https://docs.logz.io/docs/shipping/AWS/Lambda-extension-go). 
-
-- **Node.js Applications**: For deploying traces from Node.js applications on AWS Lambda using OpenTelemetry, visit [Traces from Node.js on AWS Lambda using OpenTelemetry](https://docs.logz.io/docs/shipping/aws/lambda-extension-node/).
+Deploy this integration for automatic instrumentation of Node.js or Go applications on AWS Lambda, forwarding traces to Logz.io. This involves adding an OpenTelemetry collector layer (Node.js applications also require an additional auto-instrumentation layer) and configuring environment variables for both Node.js and Go integrations, without modifying your application code.

-These guides offer step-by-step instructions tailored to your application's programming language, ensuring a seamless integration process.
+These guides offer step-by-step instructions tailored to your application's programming language, ensuring a seamless integration process:

+* Traces from **[Go Applications](https://docs.logz.io/docs/shipping/AWS/Lambda-extension-go)** using OpenTelemetry.
+* Traces from **[Node.js Applications](https://docs.logz.io/docs/shipping/aws/lambda-extension-node/)** using OpenTelemetry.


diff --git a/docs/shipping/AWS/aws-mq.md b/docs/shipping/AWS/aws-mq.md
index 1f0c736c..b47a5b20 100644
--- a/docs/shipping/AWS/aws-mq.md
+++ b/docs/shipping/AWS/aws-mq.md
@@ -112,7 +112,7 @@ Deploy this integration to send your Amazon MQ metrics to Logz.io.

This integration creates a Kinesis Data Firehose delivery stream that links to your Amazon MQ metrics stream and then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags.

-{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics.
+Install the pre-built dashboard to enhance the observability of your metrics.



@@ -121,7 +121,7 @@ This integration creates a Kinesis Data Firehose delivery stream that links to y

{@include: ../../_include/metric-shipping/aws-metrics-new.md}

-{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics.
+Install the pre-built dashboard to enhance the observability of your metrics.


diff --git a/docs/shipping/AWS/aws-msk.md b/docs/shipping/AWS/aws-msk.md
index f47d1fe0..12cc8cec 100644
--- a/docs/shipping/AWS/aws-msk.md
+++ b/docs/shipping/AWS/aws-msk.md
@@ -111,7 +111,7 @@ Deploy this integration to send your Amazon MSK metrics to Logz.io.

This integration creates a Kinesis Data Firehose delivery stream that links to your Amazon MSK metrics stream and then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags.

-{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics.
+Install the pre-built dashboard to enhance the observability of your metrics.



@@ -121,7 +121,7 @@ This integration creates a Kinesis Data Firehose delivery stream that links to y

{@include: ../../_include/metric-shipping/aws-metrics-new.md}

-{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics.
+Install the pre-built dashboard to enhance the observability of your metrics.
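As a rough sketch of what the Lambda traces setup above boils down to: attach a collector layer and set environment variables on the function. The layer ARN, function name, and variable values below are placeholder assumptions; the linked Go and Node.js guides list the real values:

```shell
# Sketch only: attach an OpenTelemetry collector layer to a function and set
# a service name for its traces. The ARN and values are placeholder assumptions.
aws lambda update-function-configuration \
  --function-name my-traced-function \
  --layers arn:aws:lambda:us-east-1:123456789012:layer:example-otel-collector:1 \
  --environment "Variables={OTEL_SERVICE_NAME=my-traced-function}"
```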
diff --git a/docs/shipping/AWS/aws-nat.md b/docs/shipping/AWS/aws-nat.md index 2ae382d6..8bd3e70d 100644 --- a/docs/shipping/AWS/aws-nat.md +++ b/docs/shipping/AWS/aws-nat.md @@ -27,7 +27,7 @@ Deploy this integration to send your Amazon NAT metrics to Logz.io. This integration creates a Kinesis Data Firehose delivery stream that links to your Amazon NAT metrics stream and then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags. -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. @@ -37,7 +37,7 @@ This integration creates a Kinesis Data Firehose delivery stream that links to y {@include: ../../_include/metric-shipping/aws-metrics-new.md} -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. diff --git a/docs/shipping/AWS/aws-network-elb.md b/docs/shipping/AWS/aws-network-elb.md index f8cd657d..5fa373da 100644 --- a/docs/shipping/AWS/aws-network-elb.md +++ b/docs/shipping/AWS/aws-network-elb.md @@ -68,7 +68,7 @@ Deploy this integration to send your Amazon Network ELB metrics to Logz.io. This integration creates a Kinesis Data Firehose delivery stream that links to your Amazon Network ELB metrics stream and then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags. -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. @@ -79,7 +79,7 @@ This integration creates a Kinesis Data Firehose delivery stream that links to y -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. diff --git a/docs/shipping/AWS/aws-rds.md b/docs/shipping/AWS/aws-rds.md index 3c66b097..44ed304f 100644 --- a/docs/shipping/AWS/aws-rds.md +++ b/docs/shipping/AWS/aws-rds.md @@ -21,7 +21,7 @@ drop_filter: [] **Before you begin, you'll need**: * MySQL database hosted on Amazon RDS -* An active account with Logz.io +* An active Logz.io account @@ -99,7 +99,7 @@ If you still don't see your logs, see [log shipping troubleshooting](https://doc * MySQL database hosted on Amazon RDS * Destination port 5015 open on your firewall for outgoing traffic. -* An active account with Logz.io +* An active Logz.io account :::note This is a basic deployment. If you need to apply advanced configurations, adjust and edit the deployment accordingly. @@ -195,7 +195,7 @@ Deploy this integration to send your Amazon RDS metrics to Logz.io. This integration creates a Kinesis Data Firehose delivery stream that links to your Amazon RDS metrics stream and then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags. 
-{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics.
+Install the pre-built dashboard to enhance the observability of your metrics.



@@ -206,7 +206,7 @@ This integration creates a Kinesis Data Firehose delivery stream that links to y



-{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics.
+Install the pre-built dashboard to enhance the observability of your metrics.


diff --git a/docs/shipping/AWS/aws-route53.md b/docs/shipping/AWS/aws-route53.md
index a49d7ee4..ca1c2885 100644
--- a/docs/shipping/AWS/aws-route53.md
+++ b/docs/shipping/AWS/aws-route53.md
@@ -158,7 +158,7 @@ Deploy this integration to send your Amazon Route 53 metrics to Logz.io.

This integration creates a Kinesis Data Firehose delivery stream that links to your Amazon Route 53 metrics stream and then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags.

-{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics.
+Install the pre-built dashboard to enhance the observability of your metrics.



@@ -169,7 +169,7 @@ This integration creates a Kinesis Data Firehose delivery stream that links to y

{@include: ../../_include/metric-shipping/aws-metrics-new.md}

-{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics.
+Install the pre-built dashboard to enhance the observability of your metrics.


diff --git a/docs/shipping/AWS/aws-s3-bucket.md b/docs/shipping/AWS/aws-s3-bucket.md
index 973fb29d..c56ad11c 100644
--- a/docs/shipping/AWS/aws-s3-bucket.md
+++ b/docs/shipping/AWS/aws-s3-bucket.md
@@ -20,41 +20,41 @@ drop_filter: []

Some AWS services can be configured to send their logs to an S3 bucket, where Logz.io can directly retrieve them.

+### Shipping logs via S3 Hook
+
+If your data is not alphabetically organized, use the S3 Hook. This requires deploying a Lambda function in your environment to manage the log shipping process.

-### Choosing the Right Shipping Method
-* **S3 Fetcher**: If your data is organized alphabetically, opt for the S3 Fetcher. Logz.io operates this fetcher on their end, directly accessing your S3 to retrieve the data.
-* **S3 Hook**: If your data is not alphabetically organized, use the S3 Hook. This requires deploying a Lambda function within your environment to manage the log shipping process.
+{@include: ../../_include/log-shipping/stack.md}
+

### Shipping logs via S3 Fetcher

+If your data is organized alphabetically, use the S3 Fetcher. Logz.io operates this fetcher, directly accessing your S3 to retrieve the data.
+
:::note
If your S3 bucket is encrypted, add `kms:Decrypt` to the policy on the ARN of the KMS key used to encrypt the bucket.
:::

-#### Best practices
+**Best practices**

-The S3 API does not allow retrieval of object timestamps, so Logz.io must collect logs in alphabetical order.
-Please keep these notes in mind when configuring logging.
+Because the S3 API does not expose object timestamps, Logz.io collects logs in alphabetical order. Keep the following tips in mind when configuring logging:

-* **Make the prefix as specific as possible** \\
+* **Make the prefix as specific as possible** -
  The prefix is the part of your log path that remains constant across all logs.
- This can include folder structure and the beginning of the filename.
-
-* **The log path after the prefix must come in alphabetical order** \\
- We recommend starting the object name (after the prefix) with the Unix epoch time.
- The Unix epoch time is always increasing, ensuring we can always fetch your incoming logs.
+ This includes the folder structure and the beginning of the filename.

-* **The size of each log file should not exceed 50 MB** \\
- To guarantee successful file upload, make sure that the size of each log file does not exceed 50 MB.
+* **The log path after the prefix must come in alphabetical order** -
+ Start the object name (after the prefix) with the Unix epoch time, as it always increases, ensuring that incoming logs are fetched correctly.

+* **The size of each log file should not exceed 50 MB** -
+ Each log file should be no larger than 50 MB to ensure successful upload.

-#### Configure Logz.io to fetch logs from an S3 bucket

+### Add a new S3 bucket via the Logz.io wizard

-#### Add a new S3 bucket using the dedicated Logz.io configuration wizard

{@include: ../../_include/log-shipping/s3-bucket-snippet.html}

@@ -64,31 +64,28 @@

1. Click **+ Add a bucket**
2. Select IAM role as your method of authentication.
+3. The configuration wizard will open:
+   * Select the hosting region from the dropdown list.
+   * Enter the **S3 bucket name**.
+   * _(Optional)_ Add a prefix if desired.
+   * Decide whether to include the **source file path** as a field in your log.
+4. **Save** your information.

-The configuration wizard will open.
-
-3. Select the hosting region from the dropdown list.
-4. Provide the **S3 bucket name**
-5. _Optional_ You have the option to add a prefix.
-6. Choose whether you want to include the **source file path**. This saves the path of the file as a field in your log.
-7. **Save** your information.

-![S3 bucket IAM authentication wizard](https://dytvr9ot2sszz.cloudfront.net/logz-docs/log-shipping/s3-add-bucket.png)

:::note
-Logz.io fetches logs that are generated after configuring an S3 bucket.
-Logz.io cannot fetch old logs retroactively.
+Logz.io fetches logs generated after the S3 bucket is configured. It cannot fetch logs retroactively.
:::

-##### Enable Logz.io to access your S3 bucket
+#### Enable Logz.io to access your S3 bucket

-Logz.io will need the following permissions to your S3 bucket:
+Logz.io needs the following permissions on your S3 bucket:

-* **s3:ListBucket** - to know which files are in your bucket and to thereby keep track of which files have already been ingested
-* **s3:GetObject** - to download your files and ingest them to your account
+* **s3:ListBucket** - to list the files in your bucket and track which files have been ingested.
+* **s3:GetObject** - to download and ingest your files.
+
+Add the following to your IAM policy:

-To do this, add the following to your IAM policy:

```json
{
@@ -122,97 +119,62 @@ Note that the ListBucket permission is set to the entire bucket and the GetObjec



-##### Create a Logz.io-AWS connector
-
-In your Logz.io app, go to **Integration hub** and select the relevant AWS resource.
-
-Inside the integration, click **+ Add a bucket** and select the option to **Authenticate with a role**
-
-![Connect Logz.io to an AWS resource](https://dytvr9ot2sszz.cloudfront.net/logz-docs/log-shipping/s3-bucket-id-dec.png)
-
-Copy and paste the **Account ID** and **External ID** in your text editor.
+#### Create a Logz.io-AWS connector

-Fill in the form to create a new connector.
+Navigate to Logz.io's [**Integration Hub**](https://app.logz.io/#/dashboard/integrations/collectors) and select the relevant AWS resource. Once inside, click **+ Add a bucket** and select the option to **Authenticate with a role**.

-Enter the **S3 bucket name** and, if needed,
-the **Prefix** where your logs are stored.
+Copy the **Account ID** and **External ID** from the integration page.

-Click **Get the role policy**.
-You can review the role policy to confirm the permissions that will be needed.
-Paste the policy in your text editor.
+Fill out the form to create a new connector, including the **S3 bucket name** and optional **Prefix** where your logs are stored.

-Keep this information available so you can use it in AWS.
+Click **Get the role policy** to review and copy the policy. Keep this information handy for use in AWS.

Choose whether you want to include the **source file path**. This saves the path of the file as a field in your log.

-##### Create the policy
+#### Create the policy

Navigate to [IAM policies](https://us-east-1.console.aws.amazon.com/iam/home#/policies) and click **Create policy**.

-In the **JSON** tab,
-replace the default JSON with the policy you copied from Logz.io.
+In the **JSON** tab, replace the default JSON with the policy you copied from Logz.io.

-Click **Next** to continue.
+Click **Next**, provide a **Name** and optional **Description**, and then click **Create policy**. Remember the policy's name.

-Give the policy a **Name** and optional **Description**,
-and then click **Create policy**.
-Remember the policy's name—you'll need this in the next step.
-
-Return to the _Create role_ page.
-
-
-##### Create the IAM Role in AWS
+#### Create the IAM Role in AWS

-Go to your [IAM roles](https://console.aws.amazon.com/iam/home#/roles) page in your AWS admin console.
-
-Click **Create role** to open the _Create role_ wizard.
-
-![Create an IAM role for another AWS account](https://dytvr9ot2sszz.cloudfront.net/logz-docs/aws/create-role-main-screen-dec.png)
+Go to your [IAM roles](https://console.aws.amazon.com/iam/home#/roles) page in your AWS admin console and click **Create role**.

Click **AWS Account > Another AWS account**.

Paste the **Account ID** you copied from Logz.io.

-Select **Require external ID**,
-and then paste the **External ID** you've copied and saved in your text editor.
+Select **Require external ID**, and then paste the **External ID**.

-Click **Next: Permissions** to continue.
+Click **Next: Permissions**, refresh the page, and search for your new policy. Select it and proceed.

-##### Attach the policy to the role
+#### Finalize the role

-Refresh the page,
-and then type your new policy's name in the search box.
-Find your policy in the filtered list and select its check box.
+Provide a **Name** (e.g., "logzio-...") and optional **Description**.

-Click **Next** to review the new role.
+Click **Create role**.

-##### Finalize the role
+#### Copy the ARN to Logz.io

-Give the role a **Name** and optional **Description**.
-We recommend beginning the name with "logzio-"
-so that it's clear you're using this role with Logz.io.
-
-Click **Create role** when you're done.
-
-##### Copy the ARN to Logz.io
-
-In the _IAM roles_ screen, type your new role's name in the search box.
-
-Find your role in the filtered list and click it to go to its summary page.
+In the _IAM roles_ screen, type your new role's name in the search box. Click on it to enter its summary page.

Copy the role ARN (top of the page).

+
In Logz.io, paste the ARN in the **Role ARN** field, and then click **Save**.

-##### Check Logz.io for your logs
+#### Check your logs

-Give your logs some time to get from your system to ours, and then open [Open Search Dashboards](https://app.logz.io/#/dashboard/osd).
+Allow some time for data ingestion, then open [Open Search Dashboards](https://app.logz.io/#/dashboard/osd).

-If you still don't see your logs, see [log shipping troubleshooting](https://docs.logz.io/docs/user-guide/log-management/troubleshooting/log-shipping-troubleshooting/).
+For troubleshooting, refer to our [log shipping troubleshooting](https://docs.logz.io/docs/user-guide/log-management/troubleshooting/log-shipping-troubleshooting/) guide.

@@ -220,10 +182,8 @@ If you still don't see your logs, see [log shipping troubleshooting](https://doc

You can add your buckets directly from Logz.io by providing your S3 credentials and configuration.

-##### Configure Logz.io to fetch logs from an S3 bucket
-
-##### Add a new S3 bucket using the dedicated Logz.io configuration wizard
+#### Add a new S3 bucket using the dedicated Logz.io configuration wizard

{@include: ../../_include/log-shipping/s3-bucket-snippet.html}

@@ -231,33 +191,29 @@

-1. Click **+ Add a bucket**
-2. Select **access keys** as the method of authentication.
-
-The configuration wizard will open.
-
-3. Select the hosting region from the dropdown list.
-4. Provide the **S3 bucket name**
-5. _Optional_ You have the option to add a prefix.
-6. Choose whether you want to include the **source file path**. This saves the path of the file as a field in your log.
-7. **Save** your information.
+1. Click **+ Add a bucket**.
+2. Select **access keys** as your method of authentication.
+3. The configuration wizard will open:
+   * Select the hosting region from the dropdown list.
+   * Enter the **S3 bucket name**.
+   * _(Optional)_ Add a prefix if desired.
+   * Decide whether to include the **source file path** as a field in your log.
+4. **Save** your information.

-![S3 bucket keyaccess authentication wizard](https://dytvr9ot2sszz.cloudfront.net/logz-docs/log-shipping/key-access-config-basic.png)

:::note
-Logz.io fetches logs that are generated after configuring an S3 bucket.
-Logz.io cannot fetch old logs retroactively.
+Logz.io fetches logs generated after the S3 bucket is configured. It cannot fetch logs retroactively.
:::

-##### Enable Logz.io to access your S3 bucket
+#### Enable Logz.io to access your S3 bucket

-Logz.io will need the following permissions to your S3 bucket:
+Logz.io needs the following permissions on your S3 bucket:

-* **s3:ListBucket** - to know which files are in your bucket and to thereby keep track of which files have already been ingested
-* **s3:GetObject** - to download your files and ingest them to your account
+* **s3:ListBucket** - to list the files in your bucket and track which files have been ingested.
+* **s3:GetObject** - to download and ingest your files.
-To do this, add the following to your IAM policy: +Add the following to your IAM policy: ```json { @@ -293,12 +249,9 @@ Note that the ListBucket permission is set to the entire bucket and the GetObjec ::: -##### Create the user +#### Create the user -Browse to the [IAM users](https://console.aws.amazon.com/iam/home#/users) -and click **Create user**. - -![Create an IAM role for another AWS account](https://dytvr9ot2sszz.cloudfront.net/logz-docs/aws/iam-create-user-dec.png) +Navigate to [IAM users](https://console.aws.amazon.com/iam/home#/users) and click **Create user**. Assign a **User name**. @@ -306,14 +259,12 @@ Under _Select AWS access type_, select **Programmatic access**. Click **Next: Permissions** to continue. -##### Create the policy +#### Create the policy -In the _Set permissions_ section, click **Attach existing policies directly > Create policy**. -The _Create policy_ page loads in a new tab. +In the _Set permissions_ section, click **Attach existing policies directly** and then **Create policy**. This opens the _Create policy_ page in a new tab. -![Create policy](https://dytvr9ot2sszz.cloudfront.net/logz-docs/aws/create-policy-visual-editor.png) -Set these permissions: +Set the following permissions: * **Service**: Choose **S3** @@ -328,34 +279,28 @@ Set these permissions: then select **Object name > Any**. Click **Add**. -Click **Review policy** to continue. -Give the policy a **Name** and optional **Description**, and then click **Create policy**. +Click **Review policy**, provide a **Name** and optional **Description**, and click **Create policy**. Remember the policy's name—you'll need this in the next step. Close the tab to return to the _Add user_ page. -##### Attach the policy to the user - -Refresh the page, -and then type your new policy's name in the search box. +#### Attach the policy to the user -Find your policy in the filtered list and select its check box. +Refresh the page, search for your new policy, and select its check box. -Click **Next: Tags**, -and then click **Next: Review** to continue to the _Review_ screen. +Click **Next: Tags**, then **Next: Review** to continue to the _Review_ screen. -##### Finalize the user +#### Finalize the user -Give the user a **Name** and optional **Description**, -and then click **Create user**. +Give the user a **Name** and optional **Description**, and then click **Create user**. You're taken to a success page. -##### Add the bucket to Logz.io +#### Add the bucket to Logz.io -Add the **S3 bucket name** and **Prefix** +Add the **S3 bucket name** and **Prefix**. Copy the _Access key ID_ and _Secret access key_, or click **Download .csv**. @@ -363,101 +308,68 @@ In Logz.io, paste the **Access key** and **Secret key**, and then click **Save**. -##### Check Logz.io for your logs +#### Check your logs + +Allow some time for data ingestion, then open [Open Search Dashboards](https://app.logz.io/#/dashboard/osd). -Give your logs some time to get from your system to ours, and then open [Open Search Dashboards](https://app.logz.io/#/dashboard/osd). +For troubleshooting, refer to our [log shipping troubleshooting](https://docs.logz.io/docs/user-guide/log-management/troubleshooting/log-shipping-troubleshooting/) guide. -If you still don't see your logs, see [log shipping troubleshooting](https://docs.logz.io/docs/user-guide/log-management/troubleshooting/log-shipping-troubleshooting/). 
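+Optionally, you can sanity-check the new access keys yourself before saving the configuration. The sketch below uses the AWS SDK for .NET and assumes the `AWSSDK.S3` NuGet package; the access keys, bucket name, prefix, and region values are placeholders:
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Amazon;
+using Amazon.S3;
+using Amazon.S3.Model;
+
+class S3AccessCheck
+{
+    static async Task Main()
+    {
+        // Use the access key pair created above and your bucket's region.
+        var s3 = new AmazonS3Client("<ACCESS-KEY>", "<SECRET-KEY>", RegionEndpoint.USEast1);
+
+        // s3:ListBucket - list objects under the prefix.
+        var listing = await s3.ListObjectsV2Async(new ListObjectsV2Request
+        {
+            BucketName = "<BUCKET-NAME>",
+            Prefix = "<PREFIX>",
+            MaxKeys = 5
+        });
+        Console.WriteLine($"ListBucket OK, {listing.KeyCount} object(s) visible");
+
+        // s3:GetObject - read one of the listed objects.
+        if (listing.S3Objects.Count > 0)
+        {
+            using var obj = await s3.GetObjectAsync("<BUCKET-NAME>", listing.S3Objects[0].Key);
+            Console.WriteLine($"GetObject OK: {obj.Key} ({obj.ContentLength} bytes)");
+        }
+    }
+}
+```
+
+If either call fails with an access error, re-check the policy you attached to the IAM user.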
### Troubleshooting -#### Migrating IAM roles to a new external ID {#new-external-id} +#### Migrating IAM roles to a new external ID -If you previously set up an IAM role with your own external ID, -we recommend updating your Logz.io and AWS configurations -to use a Logz.io-generated external ID. -This adds security to your AWS account -by removing the predictability -of any internal naming conventions -your company might have. +If you previously set up an IAM role with your own external ID, we recommend updating your Logz.io and AWS configurations to use a Logz.io-generated external ID. This enhances the security of your AWS account by removing the predictability of internal naming conventions. + +**Before Migration:** Identify where the existing IAM role is used in Logz.io. You'll need to replace any S3 fetcher and [Archive & Restore](https://app.logz.io/#/dashboard/tools/archive-and-restore) configurations that use the current role. -Before you migrate, -you'll need to know where the existing IAM role is used in Logz.io. -This is because you'll need to replace any -[S3 fetcher](https://app.logz.io/#/dashboard/send-your-data/log-sources/s3-bucket) -and -[Archive & restore](https://app.logz.io/#/dashboard/tools/archive-and-restore) -configurations that use the existing role. * **If the role is used in a single Logz.io account**: - You can update the external ID - and replace current Logz.io configurations. - See - _Migrate to the Logz.io external ID in the same role_. + Update the external ID and replace current Logz.io configurations. See [_Migrate to the Logz.io external ID in the same role_](https://docs.logz.io/docs/user-guide/admin/give-aws-access-with-iam-roles/#migrate-with-same-role). * **If the role is used with multiple Logz.io accounts**: - You'll need to create a new role for each account - and replace current Logz.io configurations. - See - _Migrate to new IAM roles_. + Create a new role for each account and replace current Logz.io configurations. See + [_Migrate to new IAM roles_](https://docs.logz.io/docs/user-guide/admin/give-aws-access-with-iam-roles/#migrate-to-new-roles). -##### Migrate to the Logz.io external ID in the same role {#migrate-with-same-role} +#### Migrate to the Logz.io external ID in the same role In this procedure, you'll: -* Replace Logz.io configurations to use the new external ID -* Update the external ID in your IAM role's trust policy +* Replace Logz.io configurations to use the new external ID. +* Update the external ID in your IAM role's trust policy. Follow this process only if the IAM role is used in a single Logz.io account. - :::danger Warning When you update your IAM role to the Logz.io external ID, -all Logz.io configurations that rely on that role +all Logz.io configurations relying on that role will stop working. Before you begin, make sure you know everywhere your existing IAM role is used in Logz.io. ::: - +**1. Delete an S3 configuration from Logz.io** -###### Delete an S3 configuration from Logz.io +Choose an S3 fetcher or [Archive & restore](https://app.logz.io/#/dashboard/tools/archive-and-restore) configuration to replace. -Choose an -[S3 fetcher](https://app.logz.io/#/dashboard/send-your-data/log-sources/s3-bucket) -or -[Archive & restore](https://app.logz.io/#/dashboard/tools/archive-and-restore) -configuration to replace. - -Copy the **S3 bucket name** and **Role ARN** to your text editor, -and make a note of the **Bucket region**. -If this is an S3 fetcher, copy the path **Prefix** as well, -and make a note of the **Log type**. 
+Copy the **S3 bucket name**, **Role ARN**, and note the **Bucket region**. For an S3 fetcher, also copy the path **Prefix** and **Log type**. Delete the configuration. -###### Replace the configuration - -If this is for an S3 fetcher, click **Add a bucket**, -and click **Authenticate with a role**. +**2. Replace the configuration** -![S3 fetcher and archive configuration screens](https://dytvr9ot2sszz.cloudfront.net/logz-docs/archive-and-restore/s3-fetcher-and-archive-config-external-id.png) +For an S3 fetcher, click **Add a bucket** and **Authenticate with a role**. -Recreate your configuration with the values you copied in a previous step, -and copy the **External ID** (you'll paste it in AWS in the next step). +Recreate your configuration with the values copied earlier, and copy the **External ID** for use in AWS. -###### Replace the external ID in your IAM role -Browse to the [IAM roles](https://console.aws.amazon.com/iam/home#/roles) page. -Open the role used by the configuration you deleted in step 1. +**3. Update the external ID in your IAM role** -![IAM role summary page, trust relationships tab](https://dytvr9ot2sszz.cloudfront.net/logz-docs/aws/iam-role-edit-trust-relationship.png) +Go to the [IAM roles](https://console.aws.amazon.com/iam/home#/roles) page and open the role used by the deleted configuration. -Open the **Trust relationships** tab -and click **Edit trust relationship** to open the policy document JSON. +In the **Trust relationships** tab, click **Edit trust relationship** to open the policy document JSON. -Find the line with the key `sts:ExternalId`, -and replace the value with the Logz.io external ID you copied in step 2. +Replace the value of `sts:ExternalId` with the Logz.io external ID. For example, if your account's external ID is @@ -468,6 +380,7 @@ you would see this: "sts:ExternalId": "logzio:aws:extid:example0nktixxe8q" ``` +:::caution note Saving the trust policy at this point will immediately change your role's external ID. Any other Logz.io configurations that use this role @@ -476,9 +389,7 @@ will stop working until you update them. Click **Update Trust Policy** to use the Logz.io external ID for this role. -###### Save the new S3 configuration in Logz.io - -Save the configuration in Logz.io: +**4. Save the new S3 configuration in Logz.io** * **For an S3 fetcher**: Click **Save** * **For Archive & restore**: Click **Start archiving** @@ -488,108 +399,85 @@ You'll see a success message if Logz.io authenticated and connected to your S3 b If the connection failed, double-check your credentials in Logz.io and AWS. -###### _(If needed)_ Replace other configurations that use this role +**5. _(If needed)_ Replace other configurations that use this role** + +If other configurations use the same role, update them with the new external ID. Logz.io generates one external ID per account, so you won't need to change the role again. -If there are other S3 fetcher or Archive & restore configurations -in this account that use the same role, -replace those configurations with the updated external ID. -Logz.io generates one external ID per account, -so you won't need to change the role again. -##### Migrate to new IAM roles {#migrate-to-new-roles} +#### Migrate to new IAM roles In this procedure, you'll: -* Create a new IAM role with the new external ID -* Replace Logz.io configurations to use the new role +* Create a new IAM role with the new external ID. +* Replace Logz.io configurations to use the new role. 
+ +Repeat this procedure for each Logz.io account where you need to fetch or archive logs in an S3 bucket. -You'll repeat this procedure for each Logz.io account -where you need to fetch or archive logs in an S3 bucket. -###### Delete an S3 configuration from Logz.io +**1. Delete an S3 configuration from Logz.io** -Choose an -[S3 fetcher](https://app.logz.io/#/dashboard/send-your-data/log-sources/s3-bucket) -or -[Archive & restore](https://app.logz.io/#/dashboard/tools/archive-and-restore) +Choose an S3 fetcher or [Archive & restore](https://app.logz.io/#/dashboard/tools/archive-and-restore) configuration to replace. -Copy the **S3 bucket name** to your text editor, -and make a note of the **Bucket region**. -If this is an S3 fetcher, copy the path **Prefix** as well, -and make a note of the **Log type**. +Copy the **S3 bucket name**, **Role ARN**, and note the **Bucket region**. For an S3 fetcher, also copy the path **Prefix** and **Log type**. Delete the configuration. -###### Replace the configuration +**2. Replace the configuration** -If this is for an S3 fetcher, click **Add a bucket**, -and click **Authenticate with a role**. +For an S3 fetcher, click **Add a bucket** and **Authenticate with a role**. -![S3 fetcher and archive configuration screens](https://dytvr9ot2sszz.cloudfront.net/logz-docs/archive-and-restore/s3-fetcher-and-archive-config-external-id.png) +Recreate your configuration with the values copied earlier, and copy the **External ID** for use in AWS. -Recreate your configuration with the values you copied in step 1, -and copy the **External ID** (you'll paste it in AWS later). -###### Set up your new IAM role +**3. Set up your new IAM role** -Using the information you copied in step 1, -follow the steps in -_Grant access to an S3 bucket_. +Follow the steps in _[Grant access to an S3 bucket](https://docs.logz.io/docs/user-guide/admin/give-aws-access-with-access-keys/#grant-access-to-an-s3-bucket)_ using the information you copied earlier. -Continue with this procedure when you're done. +Complete the setup of the new IAM role. -###### _(If needed)_ Replace other configurations that use this role -If there are other S3 fetcher or Archive & restore configurations -in this account that use the same role, -repeat steps 1 and 2, -and use the role ARN from step 3. +**4. _(If needed)_ Replace other configurations that use this role** -For configurations in other Logz.io accounts, -repeat this procedure from the beginning. - +If there are other S3 fetcher or Archive & Restore configurations in this account using the same role, repeat steps 1 and 2, and use the new role ARN. + +For configurations in other Logz.io accounts, repeat the entire procedure from the beginning. -#### Testing IAM Configuration +#### Testing IAM Configuration After setting up `s3:ListBucket` and `s3:GetObject` permissions, you can test the configuration as follows. - - -##### Install s3cmd +**1. Install s3cmd** -###### For Linux and Mac: +* **For Linux and Mac:** Download the .zip file from the [master branch](https://github.com/s3tools/s3cmd/archive/master.zip) of the s3cmd GitHub repository. -###### For Windows: +* **For Windows:** Download [s3cmd express](https://www.s3express.com/download.htm). -Note that s3cmd will usually prefer your locally-configured s3 credentials over the ones that you provide as parameters. So, either backup your current s3 access settings, or use a new instance or Docker container. 
-##### Configure s3cmd
+Note: s3cmd will usually prefer your locally-configured S3 credentials over those provided as parameters. Backup your current S3 access settings or use a new instance or Docker container.
+
+**2. Configure s3cmd**

Run `s3cmd --configure` and enter your Logz.io IAM user access and secret keys.

-##### List a required bucket
+**3. List a required bucket**

Run `s3cmd ls s3://<BUCKET-NAME>/<PREFIX>/`.

Replace `<BUCKET-NAME>` with the name of your S3 bucket and `<PREFIX>` with the bucket prefix, if the prefix is required.

-##### Get a file from the bucket
+**4. Get a file from the bucket**

Run `s3cmd get s3://<BUCKET-NAME>/<PREFIX>/<FILE-NAME>`.

Replace `<BUCKET-NAME>` with the name of your S3 bucket, `<PREFIX>` with the bucket prefix and `<FILE-NAME>` with the name of the file you want to retrieve.

-
-
-### Shipping logs via S3 Hook

-{@include: ../../_include/log-shipping/stack.md}


## Metrics

@@ -598,18 +486,14 @@ Deploy this integration to send your Amazon S3 metrics to Logz.io.

This integration creates a Kinesis Data Firehose delivery stream that links to your Amazon S3 metrics stream and then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags.

-{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics.
+Install the pre-built dashboard to enhance the observability of your metrics.

-{@include: ../../_include/metric-shipping/generic-dashboard.html}
-

{@include: ../../_include/metric-shipping/aws-metrics-new.md}

-{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics.
+Install the pre-built dashboard to enhance the observability of your metrics.

-{@include: ../../_include/metric-shipping/generic-dashboard.html}
-

diff --git a/docs/shipping/AWS/aws-ses.md b/docs/shipping/AWS/aws-ses.md
index b3ef5a54..e137d463 100644
--- a/docs/shipping/AWS/aws-ses.md
+++ b/docs/shipping/AWS/aws-ses.md
@@ -27,7 +27,7 @@ Deploy this integration to send your Amazon SES metrics to Logz.io.

This integration creates a Kinesis Data Firehose delivery stream that links to your Amazon SES metrics stream and then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags.

-{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics.
+Install the pre-built dashboard to enhance the observability of your metrics.



@@ -37,7 +37,7 @@ This integration creates a Kinesis Data Firehose delivery stream that links to y

{@include: ../../_include/metric-shipping/aws-metrics-new.md}

-{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics.
+Install the pre-built dashboard to enhance the observability of your metrics.


diff --git a/docs/shipping/AWS/aws-sns.md b/docs/shipping/AWS/aws-sns.md
index 0c21a576..3f802aee 100644
--- a/docs/shipping/AWS/aws-sns.md
+++ b/docs/shipping/AWS/aws-sns.md
@@ -27,7 +27,7 @@ Deploy this integration to send your Amazon SNS metrics to Logz.io.

This integration creates a Kinesis Data Firehose delivery stream that links to your Amazon SNS metrics stream and then sends the metrics to your Logz.io account.
It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags. -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. @@ -37,7 +37,7 @@ This integration creates a Kinesis Data Firehose delivery stream that links to y {@include: ../../_include/metric-shipping/aws-metrics-new.md} -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. diff --git a/docs/shipping/AWS/aws-sqs.md b/docs/shipping/AWS/aws-sqs.md index 8809616d..648c04e5 100644 --- a/docs/shipping/AWS/aws-sqs.md +++ b/docs/shipping/AWS/aws-sqs.md @@ -114,7 +114,7 @@ Deploy this integration to send your Amazon SQS metrics to Logz.io. This integration creates a Kinesis Data Firehose delivery stream that links to your Amazon SQS metrics stream and then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags. -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. @@ -124,7 +124,7 @@ This integration creates a Kinesis Data Firehose delivery stream that links to y {@include: ../../_include/metric-shipping/aws-metrics-new.md} -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. diff --git a/docs/shipping/AWS/aws-vpn.md b/docs/shipping/AWS/aws-vpn.md index df6f507a..a5c11dea 100644 --- a/docs/shipping/AWS/aws-vpn.md +++ b/docs/shipping/AWS/aws-vpn.md @@ -28,7 +28,7 @@ Deploy this integration to send your Amazon VPN metrics to Logz.io. This integration creates a Kinesis Data Firehose delivery stream that links to your Amazon VPN metrics stream and then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags. -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. @@ -37,7 +37,7 @@ This integration creates a Kinesis Data Firehose delivery stream that links to y {@include: ../../_include/metric-shipping/aws-metrics-new.md} -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. 
diff --git a/docs/shipping/App360/App360.md b/docs/shipping/App360/App360.md index 103f3565..f7997ddd 100644 --- a/docs/shipping/App360/App360.md +++ b/docs/shipping/App360/App360.md @@ -37,7 +37,7 @@ This integration uses OpenTelemetry Collector Contrib, not the OpenTelemetry Col **Before you begin, you'll need**: * An application instrumented with an OpenTelemetry instrumentation or any other supported instrumentations based on OpenTracing, Zipkin or Jaeger -* An active account with Logz.io +* An active Logz.io account * A Logz.io metrics account diff --git a/docs/shipping/Azure/azure-activity-logs.md b/docs/shipping/Azure/azure-activity-logs.md index 306a1b5a..557684ef 100644 --- a/docs/shipping/Azure/azure-activity-logs.md +++ b/docs/shipping/Azure/azure-activity-logs.md @@ -144,6 +144,7 @@ For users currently on the Classic Application Insights, it's essential to migra 2. Click on the notification that states "Classic Application Insights is deprecated." 3. A "Migrate to Workspace-based" dialog will appear. Here, confirm your preferred Log Analytics Workspace and click 'Apply'. -:::important + +:::caution important Be aware that once you migrate to a workspace-based model, the process cannot be reversed. ::: diff --git a/docs/shipping/Azure/azure-diagnostic-logs.md b/docs/shipping/Azure/azure-diagnostic-logs.md index 1a3e4775..21ef256f 100644 --- a/docs/shipping/Azure/azure-diagnostic-logs.md +++ b/docs/shipping/Azure/azure-diagnostic-logs.md @@ -153,6 +153,6 @@ For users currently on the Classic Application Insights, it's essential to migra 2. Click on the notification that states "Classic Application Insights is deprecated." 3. A "Migrate to Workspace-based" dialog will appear. Here, confirm your preferred Log Analytics Workspace and click 'Apply'. -:::important +:::caution important Be aware that once you migrate to a workspace-based model, the process cannot be reversed. ::: diff --git a/docs/shipping/Azure/azure-graph.md b/docs/shipping/Azure/azure-graph.md index 70183065..17fa9387 100644 --- a/docs/shipping/Azure/azure-graph.md +++ b/docs/shipping/Azure/azure-graph.md @@ -131,7 +131,7 @@ docker run --name logzio-api-fetcher \ logzio/logzio-api-fetcher ``` -:::note +:::info To run in Debug mode add `--level` flag to the command: ```shell docker run --name logzio-api-fetcher \ diff --git a/docs/shipping/Azure/azure-mail-reports.md b/docs/shipping/Azure/azure-mail-reports.md index 7987de90..be39474b 100644 --- a/docs/shipping/Azure/azure-mail-reports.md +++ b/docs/shipping/Azure/azure-mail-reports.md @@ -141,7 +141,7 @@ docker run --name logzio-api-fetcher \ logzio/logzio-api-fetcher ``` -:::note +:::info To run in Debug mode add `--level` flag to the command: ```shell docker run --name logzio-api-fetcher \ diff --git a/docs/shipping/Azure/azure-vm-extension.md b/docs/shipping/Azure/azure-vm-extension.md index b519c6f2..24801f1b 100644 --- a/docs/shipping/Azure/azure-vm-extension.md +++ b/docs/shipping/Azure/azure-vm-extension.md @@ -28,7 +28,7 @@ Logz.io Azure VM extension currently only supports Linux-based VMs. **Before you begin, you'll need**: * Logz.io app installed from your Azure Marketplace. -* An active account with Logz.io. +* An active Logz.io account. * Resource group created under your Logz.io account in Azure. 
diff --git a/docs/shipping/CI-CD/argo-cd.md b/docs/shipping/CI-CD/argo-cd.md index 168ff777..5f03a9a3 100644 --- a/docs/shipping/CI-CD/argo-cd.md +++ b/docs/shipping/CI-CD/argo-cd.md @@ -56,7 +56,7 @@ Now you need to configure the input plug-in to enable Telegraf to scrape the Arg ### Check Logz.io for your metrics -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboards to enhance the observability of your metrics. +Install the pre-built dashboards to enhance the observability of your metrics. diff --git a/docs/shipping/CI-CD/jenkins.md b/docs/shipping/CI-CD/jenkins.md index d833f146..8b6fa8ab 100644 --- a/docs/shipping/CI-CD/jenkins.md +++ b/docs/shipping/CI-CD/jenkins.md @@ -161,7 +161,7 @@ Now you need to configure the input plug-in to enable Telegraf to scrape the Jen #### Check Logz.io for your metrics -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboards to enhance the observability of your metrics. +Install the pre-built dashboards to enhance the observability of your metrics. diff --git a/docs/shipping/CI-CD/teamcity.md b/docs/shipping/CI-CD/teamcity.md index bc7b6abe..7b8f9d25 100644 --- a/docs/shipping/CI-CD/teamcity.md +++ b/docs/shipping/CI-CD/teamcity.md @@ -61,7 +61,7 @@ First you need to configure the input plug-in to enable Telegraf to scrape the T Give your metrics some time to get from your system to ours. -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. diff --git a/docs/shipping/Code/dotnet-traces-kafka.md b/docs/shipping/Code/dotnet-traces-kafka.md index 134522e2..a1dfb62c 100644 --- a/docs/shipping/Code/dotnet-traces-kafka.md +++ b/docs/shipping/Code/dotnet-traces-kafka.md @@ -26,7 +26,7 @@ This integration includes: **Before you begin, you'll need**: * A .NET application without instrumentation. -* An active account with Logz.io. +* An active Logz.io account. * Port 4317 available on your host system. * A name defined for your tracing service to identify traces in Logz.io. diff --git a/docs/shipping/Code/dotnet.md b/docs/shipping/Code/dotnet.md index 12eefa57..41aa305b 100644 --- a/docs/shipping/Code/dotnet.md +++ b/docs/shipping/Code/dotnet.md @@ -23,79 +23,50 @@ drop_filter: [] import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem'; +:::note +[Project's GitHub repo](https://github.com/logzio/logzio-dotnet/) +::: + **Before you begin, you'll need**: -* log4net 2.0.8 or higher -* .NET Core SDK version 2.0 or higher -* .NET Framework version 4.6.1 or higher +* log4net 2.0.8+. +* .NET Core SDK version 2.0+. +* .NET Framework version 4.6.1+. -:::note -[Project's GitHub repo](https://github.com/logzio/logzio-dotnet/) -::: +### Add the dependency -#### Add the dependency to your project - -If you're on Windows, navigate to your project's folder in the command line, and run this command to install the dependency. +On Windows, navigate to your project folder, and run the following command: ``` Install-Package Logzio.DotNet.Log4net ``` -If you're on a Mac or Linux machine, you can install the package using Visual Studio. Select **Project > Add NuGet Packages...**, and then search for `Logzio.DotNet.Log4net`. +On Mac or Linux, open Visual Studio, navigate to **Project > Add NuGet Packages...**, search and install `Logzio.DotNet.Log4net`. 
-#### Configure the appender
-You can configure the appender in a configuration file or directly in the code.
-Use the samples in the code blocks below as a starting point, and replace them with a configuration that matches your needs. See [log4net documentation 🔗](https://github.com/apache/logging-log4net) to learn more about configuration options.
+### Configure the appender in a configuration file

-For a complete list of options, see the configuration parameters below the code blocks.👇
+Use the sample configuration and edit it according to your needs. View [log4net documentation](https://github.com/apache/logging-log4net) for additional options.

-##### Option 1: In a configuration file

```xml
+<log4net>
+    <appender name="LogzioAppender" type="Logzio.DotNet.Log4net.LogzioAppender, Logzio.DotNet.Log4net">
+        <token><></token>
+        <type>log4net</type>
+        <listenerUrl>https://<>:8071</listenerUrl>
+        <bufferSize>100</bufferSize>
+        <bufferTimeout>00:00:05</bufferTimeout>
+        <retriesMaxAttempts>3</retriesMaxAttempts>
+        <retriesInterval>00:00:02</retriesInterval>
+        <gzip>true</gzip>
+        <debug>false</debug>
+        <jsonKeysCamelCase>false</jsonKeysCamelCase>
+        <addTraceContext>false</addTraceContext>
+        <useStaticHttpClient>false</useStaticHttpClient>
+    </appender>
+    <root>
+        <level value="INFO" />
+        <appender-ref ref="LogzioAppender" />
+    </root>
+</log4net>
@@ -107,9 +78,16 @@



```

-Add a reference to the configuration file in your code, as shown in the example [here](https://github.com/logzio/logzio-dotnet/blob/master/sample-applications/LogzioLog4netSampleApplication/Program.cs).

-###### Code sample
+To enable JSON format logging, add the following to your configuration file:
+
+`<parseJsonMessage>true</parseJsonMessage>`
+
+
+Next, reference the configuration file in your code as shown in the example [here](https://github.com/logzio/logzio-dotnet/blob/master/sample-applications/LogzioLog4netSampleApplication/Program.cs).
+
+
+**Run the code:**

```csharp
using System.IO;
@@ -140,31 +118,35 @@ namespace dotnet_log4net
    }
}
```

-##### Option 2: In the code
+### Configure the appender in the code
+
+Use the sample configuration and edit it according to your needs. View [log4net documentation](https://github.com/apache/logging-log4net) for additional options.
+

```csharp
var hierarchy = (Hierarchy)LogManager.GetRepository();
var logzioAppender = new LogzioAppender();

logzioAppender.AddToken("<>");
logzioAppender.AddListenerUrl("<>");
-// <-- Uncomment and edit this line to enable proxy routing: -->
-// logzioAppender.AddProxyAddress("http://your.proxy.com:port");
-// <-- Uncomment this to enable sending logs in Json format -->
-// logzioAppender.ParseJsonMessage(true);
-// <-- Uncomment these lines to enable gzip compression -->
-// logzioAppender.AddGzip(true);
-// logzioAppender.ActivateOptions();
-// logzioAppender.JsonKeysCamelCase(false);
-// logzioAppender.AddTraceContext(false);
-// logzioAppender.UseStaticHttpClient(false);
logzioAppender.ActivateOptions();

hierarchy.Root.AddAppender(logzioAppender);
hierarchy.Root.Level = Level.All;
hierarchy.Configured = true;
```

+Customize your code by adding the following:
+
+
+| Why? | What?
| +|------|-------| +| Enable proxy routing | `logzioAppender.AddProxyAddress("http://your.proxy.com:port");` | +| Enable sending logs in JSON format | `logzioAppender.ParseJsonMessage(true);` | +| Enable gzip compression | `logzioAppender.AddGzip(true);` , `logzioAppender.ActivateOptions();` , `logzioAppender.JsonKeysCamelCase(false);` , `logzioAppender.AddTraceContext(false);` , `logzioAppender.UseStaticHttpClient(false);` | + + -###### Code sample + - // logzioAppender.AddProxyAddress("http://your.proxy.com:port"); - // <-- Uncomment this to enable sending logs in Json format --> - // logzioAppender.ParseJsonMessage(true); - // <-- Uncomment these lines to enable gzip compression --> - // logzioAppender.AddGzip(true); - // logzioAppender.ActivateOptions(); - // logzioAppender.JsonKeysCamelCase(false) - // logzioAppender.AddTraceContext(false); - // logzioAppender.UseStaticHttpClient(false); logzioAppender.ActivateOptions(); hierarchy.Root.AddAppender(logzioAppender); @@ -209,8 +181,9 @@ namespace dotnet_log4net } } ``` +--> -###### Parameters +### Parameters | Parameter | Description | Default/Required | |---|---|---| @@ -231,10 +204,10 @@ namespace dotnet_log4net -##### Custom fields +### Custom fields + +Add static keys and values to all log messages by including these custom fields under ``, as shown: -You can add static keys and values to be added to all log messages. -These custom fields must be children of ``, as shown here. ```xml @@ -249,7 +222,7 @@ These custom fields must be children of ``, as shown here. ``` -##### Extending the appender +### Extending the appender To change or add fields to your logs, inherit the appender and override the `ExtendValues` method. @@ -264,16 +237,16 @@ public class MyAppLogzioAppender : LogzioAppender } ``` -Change your configuration to use your new appender name. -For the example above, you'd use `MyAppLogzioAppender`. +Update your configuration to use the new appender name, such as `MyAppLogzioAppender`. -##### Add trace context +### Add trace context :::note The Trace Context feature does not support .NET Standard 1.3. ::: -If you’re sending traces with OpenTelemetry instrumentation (auto or manual), you can correlate your logs with the trace context. In this way, your logs will have traces data in it: `span id` and `trace id`. To enable this feature, set `true` in your configuration file or `logzioAppender.AddTraceContext(true);` in your code. For example: +To correlate logs with trace context in OpenTelemetry, set `true` in your configuration file or use `logzioAppender.AddTraceContext(true);` in your code. This adds `span id` and `trace id` to your logs. 
For example:
+

```csharp

using log4net;
@@ -293,14 +266,6 @@ namespace dotnet_log4net
            logzioAppender.AddToken("<>");
            logzioAppender.AddListenerUrl("https://<>:8071");

-            // <-- Uncomment and edit this line to enable proxy routing: -->
-            // logzioAppender.AddProxyAddress("http://your.proxy.com:port");
-            // <-- Uncomment this to enable sending logs in Json format -->
-            // logzioAppender.ParseJsonMessage(true);
-            // <-- Uncomment these lines to enable gzip compression -->
-            // logzioAppender.AddGzip(true);
-            // logzioAppender.ActivateOptions();
-            // logzioAppender.JsonKeysCamelCase(false)
            logzioAppender.AddTraceContext(true);

            logzioAppender.ActivateOptions();
@@ -318,11 +283,12 @@ namespace dotnet_log4net
    }
}
```

-##### Serverless platforms
-If you’re using a serverless function, you’ll need to call the appender's flush method at the end of the function run to make sure the logs are sent before the function finishes its execution. You’ll also need to create a static appender in the Startup.cs file so each invocation will use the same appender. The appender should have the `UseStaticHttpClient` flag set to `true`.
+### Serverless platforms
+
+For serverless functions, call the appender's flush method at the end to ensure logs are sent before execution finishes. Create a static appender in Startup.cs with `UseStaticHttpClient` set to `true` for consistent invocations.
+For example:

-###### Azure serverless function code sample
*Startup.cs*
```csharp
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
@@ -388,77 +354,71 @@ namespace LogzioLog4NetSampleApplication

**Before you begin, you'll need**:

-* NLog 4.5.0 or higher
-* .NET Core SDK version 2.0 or higher
-* .NET Framework version 4.6.1 or higher
+* NLog 4.5.0+.
+* .NET Core SDK version 2.0+.
+* .NET Framework version 4.6.1+.


-:::note
-[Project's GitHub repo](https://github.com/logzio/logzio-dotnet/)
-:::

-#### Add the dependency to your project

-If you're on Windows, navigate to your project's folder in the command line, and run this command to install the dependency.
+### Add the dependency
+
+On Windows, navigate to your project folder, and run the following command:
+

```
Install-Package Logzio.DotNet.NLog
```

-If you’re on a Mac or Linux machine, you can install the package using Visual Studio. **Select Project > Add NuGet Packages...**, and then search for `Logzio.DotNet.NLog`.
+On Mac or Linux, open Visual Studio, navigate to **Project > Add NuGet Packages...**, search and install `Logzio.DotNet.NLog`.


-#### Configure the appender
-You can configure the appender in a configuration file or directly in the code.
-Use the samples in the code blocks below as a starting point, and replace them with a configuration that matches your needs. See [NLog documentation 🔗](https://github.com/NLog/NLog/wiki/Configuration-file) to learn more about configuration options.
+### Configure the appender in a configuration file
+
+Use the sample configuration and edit it according to your needs. View [NLog documentation](https://github.com/NLog/NLog/wiki/Configuration-file) for additional options.
-For a complete list of options, see the configuration parameters below the code blocks.👇 -##### Option 1: In a configuration file ```xml - + - - - - bufferSize="100" - bufferTimeout="00:00:05" - retriesMaxAttempts="3" - retriesInterval="00:00:02" - includeEventProperties="true" - useGzip="false" - debug="false" - jsonKeysCamelCase="false" - addTraceContext="false" - - - > - - - + + + + + + + + - + ``` -##### Option 2: In the code +### Configure the appender in the code + +Use the sample configuration and edit it according to your needs. ```csharp var config = new LoggingConfiguration(); - -// Replace these parameters with your configuration var logzioTarget = new LogzioTarget { Name = "Logzio", Token = "<>", @@ -480,7 +440,7 @@ config.AddRule(LogLevel.Debug, LogLevel.Fatal, logzioTarget); LogManager.Configuration = config; ``` -###### Parameters +### Parameters | Parameter | Description | Default/Required | |---|---|---| @@ -498,7 +458,7 @@ LogManager.Configuration = config; | addTraceContext | If want to add trace context to each log, set this field to true. | `false` | | useStaticHttpClient | If want to use the same static HTTP/s client for sending logs, set this field to true. | `false` | -###### Code sample +**Code sample** ```csharp using System; @@ -529,9 +489,10 @@ namespace LogzioNLogSampleApplication } ``` -##### Include context properties +### Include context properties + +Configure the target to include custom values when forwarding logs to Logz.io. For example: -You can configure the target to include your own custom values when forwarding logs to Logz.io. For example: ```xml @@ -544,7 +505,7 @@ You can configure the target to include your own custom values when forwarding l ``` -##### Extending the appender +### Extending the appender To change or add fields to your logs, inherit the appender and override the `ExtendValues` method. @@ -560,11 +521,15 @@ public class MyAppLogzioTarget : LogzioTarget } ``` -Change your configuration to use your new target. For the example above, you'd use `MyAppLogzio`. +Update your configuration to use the new appender name, such as `MyAppLogzio`. + + + +### JSON Layout + +When using `JsonLayout`, set the attribute name to something **other than** 'message'. For example: -##### Json Layout -When using 'JsonLayout' set the name of the attribute to **other than** 'message'. for example: ```xml @@ -572,13 +537,15 @@ When using 'JsonLayout' set the name of the attribute to **other than** 'message ``` -##### Add trace context +### Add trace context :::note The Trace Context feature does not support .NET Standard 1.3. ::: -If you’re sending traces with OpenTelemetry instrumentation (auto or manual), you can correlate your logs with the trace context. In this way, your logs will have traces data in it: `span id` and `trace id`. To enable this feature, set `addTraceContext="true"` in your configuration file or `AddTraceContext = true` in your code. For example: +To correlate logs with trace context in OpenTelemetry (auto or manual), set `addTraceContext="true"` in your configuration file or `AddTraceContext = true` in your code. This adds `span id` and `trace id` to your logs. 
For example: + + ```csharp var config = new LoggingConfiguration(); @@ -604,11 +571,12 @@ config.AddRule(LogLevel.Debug, LogLevel.Fatal, logzioTarget); LogManager.Configuration = config; ``` -##### Serverless platforms -If you’re using a serverless function, you’ll need to call the appender's flush method at the end of the function run to make sure the logs are sent before the function finishes its execution. You’ll also need to create a static appender in the Startup.cs file so each invocation will use the same appender. The appender should have the `UseStaticHttpClient` flag set to `true`. +### Serverless platforms +For serverless functions, call the appender's flush method at the end to ensure logs are sent before execution finishes. Create a static appender in Startup.cs with `UseStaticHttpClient` flag set to `true` for consistent invocations. -###### Azure serverless function code sample + +**Azure serverless function code sample** *Startup.cs* @@ -691,17 +659,15 @@ namespace LogzioNLogSampleApplication **Before you begin, you'll need**: -* log4net 2.0.8 or higher -* .NET Core SDK version 2.0 or higher -* .NET Framework version 4.6.1 or higher +* log4net 2.0.8+. +* .NET Core SDK version 2.0+. +* .NET Framework version 4.6.1+. + -:::note -[Project's GitHub repo](https://github.com/logzio/logzio-dotnet/) -::: -#### Add the dependency to your project +### Add the dependency -If you're on Windows, navigate to your project's folder in the command line, and run these commands to install the dependencies. +On Windows, navigate to your project folder, and run the following command: ``` Install-Package Logzio.DotNet.Log4net @@ -711,53 +677,31 @@ Install-Package Logzio.DotNet.Log4net Install-Package Microsoft.Extensions.Logging.Log4Net.AspNetCore ``` -If you're on a Mac or Linux machine, you can install the package using Visual Studio. Select **Project > Add NuGet Packages...**, and then search for `Logzio.DotNet.Log4net` and `Microsoft.Extensions.Logging.Log4Net.AspNetCore`. +On Mac or Linux, open Visual Studio, navigate to **Project > Add NuGet Packages...**, search and install Logzio.DotNet.Log4net and `Microsoft.Extensions.Logging.Log4Net.AspNetCore`. + -#### Configure the appender -You can configure the appender in a configuration file or directly in the code. -Use the samples in the code blocks below as a starting point, and replace them with a configuration that matches your needs. See [log4net documentation 🔗](https://github.com/apache/logging-log4net) to learn more about configuration options. +### Configure the appender in a configuration file + +Use the sample configuration and edit it according to your needs. View [log4net documentation](https://github.com/apache/logging-log4net) for additional options. + -For a complete list of options, see the configuration parameters below the code blocks.👇 -###### Option 1: In a configuration file ```xml - - <> - - - log4net - https://<>:8071 - - 100 - 00:00:05 - 3 - 00:00:02 - true - false - false - false - false @@ -769,28 +713,34 @@ For a complete list of options, see the configuration parameters below the code ``` -###### Option 2: In the code +### Configure the appender in the code + +Use the sample configuration and edit it according to your needs. View [log4net documentation](https://github.com/apache/logging-log4net) for additional options. 
+ + + ```csharp var hierarchy = (Hierarchy)LogManager.GetRepository(); var logzioAppender = new LogzioAppender(); logzioAppender.AddToken("<>"); logzioAppender.AddListenerUrl("<>"); -// Uncomment and edit this line to enable proxy routing: -// logzioAppender.AddProxyAddress("http://your.proxy.com:port"); -// Uncomment these lines to enable gzip compression -// logzioAppender.AddGzip(true); -// logzioAppender.ActivateOptions(); -// logzioAppender.JsonKeysCamelCase(false); -// logzioAppender.AddTraceContext(false); -// logzioAppender.UseStaticHttpClient(false); logzioAppender.ActivateOptions(); hierarchy.Root.AddAppender(logzioAppender); hierarchy.Root.Level = Level.All; hierarchy.Configured = true; ``` -###### Parameters +Customize your code by adding the following: + + +| Why? | What? | +|------|-------| +| Enable proxy routing | `logzioAppender.AddProxyAddress("http://your.proxy.com:port");` | +| Enable gzip compression | `logzioAppender.AddGzip(true);` , `logzioAppender.ActivateOptions();` , `logzioAppender.JsonKeysCamelCase(false);` , `logzioAppender.AddTraceContext(false);` , `logzioAppender.UseStaticHttpClient(false);` | + + +### Parameters | Parameter | Description | Default/Required | |---|---|---| @@ -809,9 +759,8 @@ hierarchy.Configured = true; | addTraceContext | If want to add trace context to each log, set this field to true. | `false` | | useStaticHttpClient | If want to use the same static HTTP/s client for sending logs, set this field to true. | `false` | -###### Code sample -###### ASP.NET Core +### ASP.NET Core Update Startup.cs file in Configure method to include the Log4Net middleware as in the code below. @@ -860,7 +809,7 @@ In the Controller methods: } ``` -###### .NET Core Desktop Application +### .NET Core Desktop Application ```csharp using System.IO; @@ -894,10 +843,10 @@ In the Controller methods: ``` -##### Custom fields +### Custom fields + +Add static keys and values to all log messages by including these custom fields under ``, as shown: -You can add static keys and values to all log messages. -These custom fields must be children of ``, as shown in the code below. ```xml @@ -912,10 +861,11 @@ These custom fields must be children of ``, as shown in the code below ``` -#### Extending the appender +### Extending the appender To change or add fields to your logs, inherit the appender and override the `ExtendValues` method. + ```csharp public class MyAppLogzioAppender : LogzioAppender { @@ -927,28 +877,23 @@ public class MyAppLogzioAppender : LogzioAppender } ``` -Change your configuration to use your new appender name. -For the example above, you'd use `MyAppLogzioAppender`. +Update your configuration to use the new appender name, such as `MyAppLogzioAppender`. -##### Add trace context + +### Add trace context :::note The Trace Context feature does not support .NET Standard 1.3. ::: -If you’re sending traces with OpenTelemetry instrumentation (auto or manual), you can correlate your logs with the trace context. In this way, your logs will have traces data in it: `span id` and `trace id`. To enable this feature, set `addTraceContext="true"` in your configuration file or `AddTraceContext = true` in your code. For example: +To correlate logs with trace context in OpenTelemetry (auto or manual), set `addTraceContext="true"` in your configuration file or `AddTraceContext = true` in your code. This adds `span id` and `trace id` to your logs. 
For example: + ```csharp var hierarchy = (Hierarchy)LogManager.GetRepository(); var logzioAppender = new LogzioAppender(); logzioAppender.AddToken("<>"); logzioAppender.AddListenerUrl("<>"); -// Uncomment and edit this line to enable proxy routing: -// logzioAppender.AddProxyAddress("http://your.proxy.com:port"); -// Uncomment these lines to enable gzip compression -// logzioAppender.AddGzip(true); -// logzioAppender.ActivateOptions(); -// logzioAppender.JsonKeysCamelCase(false); logzioAppender.AddTraceContext(true); logzioAppender.ActivateOptions(); hierarchy.Root.AddAppender(logzioAppender); @@ -956,11 +901,25 @@ hierarchy.Root.Level = Level.All; hierarchy.Configured = true; ``` -##### Serverless platforms -If you’re using a serverless function, you’ll need to call the appender's flush method at the end of the function run to make sure the logs are sent before the function finishes its execution. You’ll also need to create a static appender in the Startup.cs file so each invocation will use the same appender. The appender should have the `UseStaticHttpClient` flag set to `true`. +Customize your code by adding the following: + + +| Why? | What? | +|------|-------| +| Enable proxy routing | `logzioAppender.AddProxyAddress("http://your.proxy.com:port");` | +| Enable sending logs in JSON format | `logzioAppender.ParseJsonMessage(true);` | +| Enable gzip compression | `logzioAppender.AddGzip(true);` , `logzioAppender.ActivateOptions();` , `logzioAppender.JsonKeysCamelCase(false);` | + + +### Serverless platforms + +For serverless functions, call the appender's flush method at the end to ensure logs are sent before execution finishes. Create a static appender in Startup.cs with `UseStaticHttpClient` set to `true` for consistent invocations. + + + +**Azure serverless function code sample** -###### Azure serverless function code sample *Startup.cs* ```csharp @@ -1036,31 +995,24 @@ This integration is based on [Serilog.Sinks.Logz.Io repository](https://github.c **Before you begin, you'll need**: -* .NET Core SDK version 2.0 or higher -* .NET Framework version 4.6.1 or higher +* .NET Core SDK version 2.0+. +* .NET Framework version 4.6.1+. + -:::note -[Project's GitHub repo](https://github.com/logzio/logzio-dotnet/) -::: -#### Install the Logz.io Serilog sink +### Install the Logz.io Serilog sink -Install `Serilog.Sinks.Logz.Io` using Nuget or by running the following command in the Package Manager Console: +Install `Serilog.Sinks.Logz.Io` via Nuget or by running this command in the Package Manager Console: ```shell PM> Install-Package Serilog.Sinks.Logz.Io ``` -#### Configure the sink - -There are 2 ways to use Serilog: -1. Using a configuration file -2. In the code +### Configure the sink in a configuration file -###### Using a configuration file +Create an `appsettings.json` file and copy this configuration: -Create `appsettings.json` file and copy the following configuration: ```json { @@ -1082,11 +1034,10 @@ Create `appsettings.json` file and copy the following configuration: {@include: ../../_include/log-shipping/listener-var.html} -Replace `<` with the type that you want to assign to your logs. You will use this value to identify these logs in Logz.io. +Replace `<>` with the log type to identify these logs in Logz.io. 
-Add the following code to use the configuration and create logs: +Add the following code to use the configuration and create logs with `Serilog.Settings.Configuration` and `Microsoft.Extensions.Configuration.Json` packages: -* Using Serilog.Settings.Configuration and Microsoft.Extensions.Configuration.Json packages ```csharp using System.IO; @@ -1117,7 +1068,7 @@ namespace Example ``` -###### In the code +#### Run the code: ```csharp @@ -1152,11 +1103,11 @@ namespace Example } ``` -##### Serverless platforms -If you’re using a serverless function, you’ll need to create a static appender in the Startup.cs file so each invocation will use the same appender. -In the Serilog integration, you should use the 'WriteTo.LogzIo()' instad of 'WriteTo.LogzIoDurableHttp()' method as it uses in-memory buffering which is best practice for serverless functions. +### Serverless platforms +For serverless function, create a static appender in Startup.cs to ensure each invocation uses the same appender. For Serilog integration, use `WriteTo.LogzIo()` instead of `WriteTo.LogzIoDurableHttp()` for in-memory buffering, which is best for serverless functions. -###### Azure serverless function code sample + +**Azure serverless function code sample** *Startup.cs* ```csharp @@ -1216,8 +1167,165 @@ namespace LogzioSerilogSampleApplication {@include: ../../_include/log-shipping/listener-var.html} -Replace `<` with the type that you want to assign to your logs. You will use this value to identify these logs in Logz.io. +Replace `<>` with the log type to identify these logs in Logz.io. + + + + + + + + + + + + + +### Prerequisites + +Ensure that you have the following installed locally: +- [.NET SDK](https://dotnet.microsoft.com/download/dotnet) 6+ + +### Example Application +The following example uses a basic [Minimal API with ASP.NET Core](https://learn.microsoft.com/en-us/aspnet/core/tutorials/min-web-api?view=aspnetcore-8.0&tabs=visual-studio) application. + +### Create and launch an HTTP Server + +To begin, set up an environment in a new directory called `dotnet-simple`. Within that directory, execute following command: + +``` +dotnet new web +``` +In the same directory, replace the content of Program.cs with the following code: + +``` +using System.Globalization; + +using Microsoft.AspNetCore.Mvc; + +var builder = WebApplication.CreateBuilder(args); +var app = builder.Build(); + +string HandleRollDice([FromServices]ILogger logger, string? player) +{ + var result = RollDice(); + + if (string.IsNullOrEmpty(player)) + { + logger.LogInformation("Anonymous player is rolling the dice: {result}", result); + } + else + { + logger.LogInformation("{player} is rolling the dice: {result}", player, result); + } + + return result.ToString(CultureInfo.InvariantCulture); +} + +int RollDice() +{ + return Random.Shared.Next(1, 7); +} + +app.MapGet("/rolldice/{player?}", HandleRollDice); + +app.Run(); + +``` + +In the Properties subdirectory, replace the content of launchSettings.json with the following: + +``` +{ + "$schema": "http://json.schemastore.org/launchsettings.json", + "profiles": { + "http": { + "commandName": "Project", + "dotnetRunMessages": true, + "launchBrowser": true, + "applicationUrl": "http://localhost:8080", + "environmentVariables": { + "ASPNETCORE_ENVIRONMENT": "Development" + } + } + } +} + +``` + +Build and run the application with the following command, then open http://localhost:8080/rolldice in your web browser to ensure it is working. 
+ +``` +dotnet build +dotnet run +``` +### Instrumentation + +Next we’ll install the instrumentation [NuGet packages from OpenTelemetry](https://www.nuget.org/profiles/OpenTelemetry) that will generate the telemetry, and set them up. + +1. Add the packages + ``` + dotnet add package OpenTelemetry.Extensions.Hosting + dotnet add package OpenTelemetry.Instrumentation.AspNetCore + dotnet add package OpenTelemetry.Exporter.OpenTelemetryProtocol + ``` + +2. Setup the OpenTelemetry code + + In Program.cs, replace the following lines: + + ``` + var builder = WebApplication.CreateBuilder(args); + var app = builder.Build(); + ``` + With: + ``` + using OpenTelemetry; + using OpenTelemetry.Logs; + using OpenTelemetry.Resources; + using OpenTelemetry.Exporter; + + var builder = WebApplication.CreateBuilder(args); + + const string serviceName = "roll-dice"; + const string logzioEndpoint = "https://otlp-listener.logz.io/v1/logs"; + const string logzioToken = ""; + + builder.Logging.AddOpenTelemetry(options => + { + options + .SetResourceBuilder( + ResourceBuilder.CreateDefault() + .AddService(serviceName)) + .AddOtlpExporter(otlpOptions => + { + otlpOptions.Endpoint = new Uri(logzioEndpoint); + otlpOptions.Headers = $"Authorization=Bearer {logzioToken}, user-agent=logzio-dotnet-logs"; + otlpOptions.Protocol = OtlpExportProtocol.HttpProtobuf; + }); + }); + + var app = builder.Build(); + ``` +3. Run your **application** once again: + + ``` + dotnet run + ``` + Note the output from the dotnet run. + +4. From another terminal, send a request using curl: + + ``` + curl localhost:8080/rolldice + ``` +5. After about 30 sec, stop the server process. + +At this point, you should see log output from the server and client on your Logzio account. + + + ## Metrics @@ -1227,15 +1335,17 @@ Replace `<` with the type that you want to assign to your logs. You will u -Helm is a tool for managing packages of preconfigured Kubernetes resources using Charts. This integration allows you to collect and ship diagnostic metrics of your .NET application in Kubernetes to Logz.io, using dotnet-monitor and OpenTelemetry. logzio-dotnet-monitor runs as a sidecar in the same pod as the .NET application. +Helm manages packages of preconfigured Kubernetes resources using Charts. This integration allows you to collect and ship diagnostic metrics of your .NET application in Kubernetes to Logz.io, using dotnet-monitor and OpenTelemetry. logzio-dotnet-monitor runs as a sidecar in the same pod as the .NET application. :::note [Project's GitHub repo](https://github.com/logzio/logzio-helm/) ::: -###### Sending metrics from nodes with taints +### Sending metrics from nodes with taints + + +To ship metrics from nodes with taints, ensure the taint key values are included in your DaemonSet/Deployment configuration as follows: -If you want to ship metrics from any of the nodes that have a taint, make sure that the taint key values are listed in your in your daemonset/deployment configuration as follows: ```yaml tolerations: @@ -1251,19 +1361,20 @@ To determine if a node uses taints as well as to display the taint keys, run: kubectl get nodes -o json | jq ".items[]|{name:.metadata.name, taints:.spec.taints}" ``` -:::node +:::note You need to use `Helm` client with version `v3.9.0` or above. ::: -#### Standard configuration +### Standard configuration +**1. Select the namespace** + +This integration deploys to the namespace specified in values.yaml. The default is logzio-dotnet-monitor. 
-##### Select the namespace
+To use a different namespace, run:

-This integration will be deployed in the namespace you set in values.yaml. The default namespace for this integration is logzio-dotnet-monitor.
-To select a different namespace, run:


```shell
kubectl create namespace <>
@@ -1272,7 +1383,7 @@ kubectl create namespace <>

* Replace `<>` with the name of your namespace.


-##### Add `logzio-helm` repo
+**2. Add `logzio-helm` repo**

```shell
helm repo add logzio-helm https://logzio.github.io/logzio-helm
@@ -1280,7 +1391,7 @@ helm repo update
```


-###### Run the Helm deployment code
+**3. Run the Helm deployment code**

```shell
helm install -n <> \
@@ -1301,11 +1412,11 @@ volumeMounts:
```


-##### Check Logz.io for your metrics
+**4. Check Logz.io for your metrics**

-Give your metrics some time to get from your system to ours, then open [Logz.io](https://app.logz.io/). You can search for your metrics in Logz.io by searching `{job="dotnet-monitor-collector"}`
+Allow some time for data ingestion, then open [Logz.io](https://app.logz.io/). Search for your metrics in Logz.io using the query `{job="dotnet-monitor-collector"}`.

-{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics.
+Install the pre-built dashboard to enhance the observability of your metrics.



@@ -1313,20 +1424,20 @@ Give your metrics some time to get from your system to ours, then open [Logz.io]



-#### Customizing Helm chart parameters
+### Customizing Helm chart parameters

-##### Configure customization options
+* **Configure customization options**

-You can use the following options to update the Helm chart parameters:
+  Update the Helm chart parameters using the following options:

-* Specify parameters using the `--set key=value[,key=value]` argument to `helm install` or `--set-file key=value[,key=value]`
+  * Specify parameters using the `--set key=value[,key=value]` argument to `helm install` or `--set-file key=value[,key=value]`

-* Edit the `values.yaml`
+  * Edit the `values.yaml`

-* Override default values with your own `my_values.yaml` and apply it in the `helm install` command.
+  * Override default values with your own `my_values.yaml` and apply it in the `helm install` command.


-##### Customization parameters
+* **Customization parameters**

| Parameter | Description | Default |
|---|---|---|
@@ -1357,59 +1468,65 @@ You can use the following options to update the Helm chart parameters:

* To get additional information about dotnet-monitor configuration, click [here](https://github.com/dotnet/dotnet-monitor/blob/main/documentation/api/metrics.md).
* To see well-known providers and their counters, click [here](https://docs.microsoft.com/en-us/dotnet/core/diagnostics/available-counters).

-#### Uninstalling the Chart
+### Uninstalling the Chart

-The Uninstall command is used to remove all the Kubernetes components associated with the chart and to delete the release.
+To remove all Kubernetes components associated with the chart and delete the release, use the uninstall command.

-To uninstall the `dotnet-monitor-collector` deployment, use the following command:
+To uninstall the `dotnet-monitor-collector` deployment, run:

```shell
helm uninstall dotnet-monitor-collector
```

-For troubleshooting this solution, see our [.NET with helm troubleshooting guide](https://docs.logz.io/docs/user-guide/infrastructure-monitoring/troubleshooting/dotnet-helm-troubleshooting/).
+ +For troubleshooting, refer to our [.NET with helm troubleshooting guide](https://docs.logz.io/docs/user-guide/infrastructure-monitoring/troubleshooting/dotnet-helm-troubleshooting/). + + + + -You can send custom metrics from your .NET Core application using Logzio.App.Metrics. Logzio.App.Metrics is an open-source and cross-platform .NET library used to record metrics within an application and forward the data to Logz.io. +Send custom metrics from your .NET Core application using Logzio.App.Metrics, an open-source, cross-platform .NET library for recording metrics and forwarding them to Logz.io. These instructions show you how to: -* Create a basic custom metrics export configuration with a hardcoded Logz.io exporter -* Create a basic custom metrics export configuration with a Logz.io exporter defined by a configuration file -* Add advanced settings to the basic custom metrics export configuration +* Create a basic custom metrics export configuration with a hardcoded Logz.io exporter. +* Create a basic custom metrics export configuration with a Logz.io exporter defined by a configuration file. +* Add advanced settings to the basic custom metrics export configuration. + :::note [Project's GitHub repo](https://github.com/logzio/logzio-app-metrics/) ::: -#### Send custom metrics to Logz.io with a hardcoded Logz.io exporter +### Send custom metrics with a hardcoded Logz.io exporter -**Before you begin, you'll need**: -* An application in .NET Core 3.1 or higher -* An active Logz.io account +**Before you begin, you'll need**: - +* An application in .NET Core 3.1+. +* An active Logz.io account. -##### Install the App.Metrics.Logzio package +**1. Install the App.Metrics.Logzio package** -Install the App.Metrics.Logzio package from the Package Manager Console: +Run the following from the Package Manager Console: ```shell Install-Package Logzio.App.Metrics ``` -If you prefer to install the library manually, download the latest version from the NuGet Gallery. +For manual installation, download the latest version from the NuGet Gallery. + +**2. Create MetricsBuilder** -##### Create MetricsBuilder +Copy and paste the following code into the function where you need to export metrics: -To create MetricsBuilder, copy and paste the following code into the function of the code that you need to export metrics from: ```csharp var metrics = new MetricsBuilder() @@ -1422,9 +1539,9 @@ var metrics = new MetricsBuilder() {@include: ../../_include/metric-shipping/replace-metrics-token.html} -##### Create Scheduler +**3. Create Scheduler** -To create the Scheduler, copy and paste the following code into the same function of the code as the MetricsBuilder: +To create the Scheduler, add the following code into the same function as the MetricsBuilder: ```csharp var scheduler = new AppMetricsTaskScheduler( @@ -1433,19 +1550,19 @@ var scheduler = new AppMetricsTaskScheduler( scheduler.Start(); ``` -##### Add required metrics to your code +**4. Add required metrics to your code** + +* [Apdex (Application Performance Index)](https://www.app-metrics.io/getting-started/metric-types/apdex/) - Monitors end-user satisfaction. +* [Counter](https://www.app-metrics.io/getting-started/metric-types/counters/) - Tracks the number of times an event occurs. +* [Gauge](https://www.app-metrics.io/getting-started/metric-types/gauges/) - Provides an instantaneous measurement of a value that can arbitrarily increase or decrease (e.g., CPU usage). 
+* [Histogram](https://www.app-metrics.io/getting-started/metric-types/histograms/) - Measures the statistical distribution of a set of values. +* [Meter](https://www.app-metrics.io/getting-started/metric-types/meters/) - Measures the rate of event occurrences and the total count. +* [Timer](https://www.app-metrics.io/getting-started/metric-types/timers/) - Combines a histogram and meter to measure event duration, rate of occurrence, and duration statistics. -You can send the following metrics from your code: +To use Logzio.App.Metrics, you must include at least one of the above metrics in your code. -* [Apdex (Application Performance Index)](https://www.app-metrics.io/getting-started/metric-types/apdex/) -* [Counter](https://www.app-metrics.io/getting-started/metric-types/counters/) -* [Gauge](https://www.app-metrics.io/getting-started/metric-types/gauges/) -* [Histogram](https://www.app-metrics.io/getting-started/metric-types/histograms/) -* [Meter](https://www.app-metrics.io/getting-started/metric-types/meters/) -* [Timer](https://www.app-metrics.io/getting-started/metric-types/timers/) +For example, to add a counter metric, insert the following code block into the same function as the MetricsBuilder and Scheduler: -You must have at least one of the above metrics in your code to use the Logzio.App.Metrics. -For example, to add a counter metric to your code, copy and paste the following code block into the same function of the code as the MetricsBuilder and Scheduler. ```csharp var counter = new CounterOptions {Name = "my_counter", Tags = new MetricTags("test", "my_test")}; @@ -1455,43 +1572,16 @@ metrics.Measure.Counter.Increment(counter); In the example above, the metric has a name ("my_counter"), a tag key ("test") and a tag value ("my_test"): These parameters are used to query data from this metric in your Logz.io dashboard. -###### Apdex - -Apdex (Application Performance Index) allows you to monitor end-user satisfaction. For more information on this metric, refer to [App Metrics documentation](https://www.app-metrics.io/getting-started/metric-types/apdex/). - -###### Counter - -Counters are one of the most basic supported metrics types: They enable you to track how many times something has happened. For more information on this metric, refer to [App Metrics documentation](https://www.app-metrics.io/getting-started/metric-types/counters/). - -###### Gauge - -A Gauge is an action that returns an instantaneous measurement for a value that abitrarily increases and decreases (for example, CPU usage). For more information on this metric, refer to [App Metrics documentation](https://www.app-metrics.io/getting-started/metric-types/gauges/). - -###### Histogram - -Histograms measure the statistical distribution of a set of values. For more information on this metric, refer to [App Metrics documentation](https://www.app-metrics.io/getting-started/metric-types/histograms/). - -###### Meter - -A Meter measures the rate at which an event occurs, along with the total count of the occurences. For more information on this metric, refer to [App Metrics documentation](https://www.app-metrics.io/getting-started/metric-types/meters/). - -###### Timer - -A Timer is a combination of a histogram and a meter, which enables you to measure the duration of a type of event, the rate of its occurrence, and provide duration statistics. For more information on this metric, refer to [App Metrics documentation](https://www.app-metrics.io/getting-started/metric-types/timers/). - - -##### Run your application +**5. 
View your metrics** Run your application to start sending metrics to Logz.io. +Allow some time for data ingestion, then check your [Metrics dashboard](https://app.logz.io/#/dashboard/metrics/discover?). -##### Check Logz.io for your events - -Give your events some time to get from your system to ours, and then open the [Metrics dashboard](https://app.logz.io/#/dashboard/metrics/discover?). -##### Filter the metrics by labels +### Filter metrics by labels -Once the metrics are in Logz.io, you can query the required metrics using labels. Each metric has the following labels: +Once the metrics are in Logz.io, you can query them using labels. Each metric includes the following labels: | App Metrics parameter name | Description | Logz.io parameter name | |---|---|---| @@ -1502,7 +1592,7 @@ Once the metrics are in Logz.io, you can query the required metrics using labels Some of the metrics have custom labels, as described below. -###### Meter +#### Meter | App Metrics label name | Logz.io label name | |---|---| @@ -1518,7 +1608,7 @@ Some of the metrics have custom labels, as described below. Replace [[your_meter_name]] with the name that you assigned to the meter metric. -###### Histogram +#### Histogram | App Metrics label name | Logz.io label name | |---|---| @@ -1545,7 +1635,7 @@ Replace [[your_meter_name]] with the name that you assigned to the meter metric. Replace [[your_histogram_name]] with the name that you assigned to the histogram metric. -###### Timer +#### Timer | App Metrics label name | Logz.io label name | |---|---| @@ -1574,7 +1664,7 @@ Replace [[your_histogram_name]] with the name that you assigned to the histogram Replace [[your_timer_name]] with the name that you assigned to the timer metric. -###### Apdex +#### Apdex | App Metrics parameter name | Logz.io parameter name | |---|---| @@ -1592,32 +1682,28 @@ For troubleshooting this solution, see our [.NET core troubleshooting guide](htt -#### Send custom metrics to Logz.io with a Logz.io exporter defined by a config file +### Send custom metrics with a Logz.io exporter defined by a config file **Before you begin, you'll need**: -* An application in .NET Core 3.1 or higher -* An active Logz.io account - - - +* An application in .NET Core 3.1+. +* An active Logz.io account. -##### Install the App.Metrics.Logzio package + **1. Install the App.Metrics.Logzio package** -Install the App.Metrics.Logzio package from the Package Manager Console: +Run the following from the Package Manager Console: ```csharp Install-Package Logzio.App.Metrics ``` -If you prefer to install the library manually, download the latest version from NuGet Gallery. +For manual installation, download the latest version from the NuGet Gallery. +**2. Create MetricsBuilder** -##### Create MetricsBuilder - -To create MetricsBuilder, copy and paste the following code into the function of the code that you need to export metrics from: +Copy and paste the following code into the function where you need to export metrics: ```csharp var metrics = new MetricsBuilder() @@ -1625,7 +1711,7 @@ var metrics = new MetricsBuilder() .Build(); ``` -Add the following code to the configuration file: +Add the following to the configuration file: ```xml @@ -1643,9 +1729,9 @@ Add the following code to the configuration file: {@include: ../../_include/metric-shipping/replace-metrics-token.html} -##### Create Scheduler +**3. 
Create Scheduler** -To create a Scheduler, copy and paste the following code into the same function of the code as the MetricsBuilder: +Copy and paste the following code into the same function as the MetricsBuilder: ```csharp var scheduler = new AppMetricsTaskScheduler( @@ -1654,18 +1740,19 @@ var scheduler = new AppMetricsTaskScheduler( scheduler.Start(); ``` -##### Add the required metrics to your code +**4. Add the required metrics to your code** + -You can send the following metrics from your code: +* [Apdex (Application Performance Index)](https://www.app-metrics.io/getting-started/metric-types/apdex/) - Monitors end-user satisfaction. +* [Counter](https://www.app-metrics.io/getting-started/metric-types/counters/) - Tracks the number of times an event occurs. +* [Gauge](https://www.app-metrics.io/getting-started/metric-types/gauges/) - Provides an instantaneous measurement of a value that can arbitrarily increase or decrease (e.g., CPU usage). +* [Histogram](https://www.app-metrics.io/getting-started/metric-types/histograms/) - Measures the statistical distribution of a set of values. +* [Meter](https://www.app-metrics.io/getting-started/metric-types/meters/) - Measures the rate of event occurrences and the total count. +* [Timer](https://www.app-metrics.io/getting-started/metric-types/timers/) - Combines a histogram and meter to measure event duration, rate of occurrence, and duration statistics. -* [Apdex (Application Performance Index)](https://www.app-metrics.io/getting-started/metric-types/apdex/) -* [Counter](https://www.app-metrics.io/getting-started/metric-types/counters/) -* [Gauge](https://www.app-metrics.io/getting-started/metric-types/gauges/) -* [Histogram](https://www.app-metrics.io/getting-started/metric-types/histograms/) -* [Meter](https://www.app-metrics.io/getting-started/metric-types/meters/) -* [Timer](https://www.app-metrics.io/getting-started/metric-types/timers/) +To use Logzio.App.Metrics, you must include at least one of the above metrics in your code. -You must have at least one of the above metrics in your code to use the Logzio.App.Metrics. For example, to add a counter metric to your code, copy and paste the following code block into the same function of the code as the MetricsBuilder and Scheduler: +For example, to add a counter metric, insert the following code block into the same function as the MetricsBuilder and Scheduler: ```csharp var counter = new CounterOptions {Name = "my_counter", Tags = new MetricTags("test", "my_test")}; @@ -1675,44 +1762,15 @@ metrics.Measure.Counter.Increment(counter); In the example above, the metric has a name ("my_counter"), a tag key ("test") and a tag value ("my_test"). These parameters are used to query data from this metric in your Logz.io dashboard. -###### Apdex - -Apdex (Application Performance Index) allows you to monitor end-user satisfaction. For more information on this metric, refer to [App Metrics documentation](https://www.app-metrics.io/getting-started/metric-types/apdex/). - -###### Counter - -Counters are one of the most basic supported metrics types: They enable you to track how many times something has happened. For more information on this metric, refer to [App Metrics documentation](https://www.app-metrics.io/getting-started/metric-types/counters/). - -###### Gauge - -A Gauge is an action that returns an instantaneous measurement for a value that abitrarily increases and decreases (for example, CPU usage). 
For more information on this metric, refer to [App Metrics documentation](https://www.app-metrics.io/getting-started/metric-types/gauges/). - -###### Histogram - -Histograms measure the statistical distribution of a set of values. For more information on this metric, refer to [App Metrics documentation](https://www.app-metrics.io/getting-started/metric-types/histograms/). - -###### Meter - -A Meter measures the rate at which an event occurs, along with the total count of the occurences. For more information on this metric, refer to [App Metrics documentation](https://www.app-metrics.io/getting-started/metric-types/meters/). - -###### Timer - -A Timer is a combination of a histogram and a meter, which enables you to measure the duration of a type of event, the rate of its occurrence, and provide duration statistics. For more information on this metric, refer to [App Metrics documentation](https://www.app-metrics.io/getting-started/metric-types/timers/). - - - -##### Run your application +**5. View your metrics** Run your application to start sending metrics to Logz.io. +Allow some time for data ingestion, then check your [Metrics dashboard](https://app.logz.io/#/dashboard/metrics/discover?). -##### Check Logz.io for your events +### Filter metrics by labels -Give your events some time to get from your system to ours, and then open [Metrics dashboard](https://app.logz.io/#/dashboard/metrics/discover?). - -##### Filter the metrics by labels - -Once the metrics are in Logz.io, you can query the required metrics using labels. Each metric has the following labels: +Once the metrics are in Logz.io, you can query them using labels. Each metric includes the following labels: | App Metrics parameter name | Description | Logz.io parameter name | |---|---|---| @@ -1723,7 +1781,7 @@ Once the metrics are in Logz.io, you can query the required metrics using labels Some of the metrics have custom labels as described below. -###### Meter +##### Meter | App Metrics label name | Logz.io label name | |---|---| @@ -1739,7 +1797,7 @@ Some of the metrics have custom labels as described below. Replace [[your_meter_name]] with the name that you assigned to the meter metric. -###### Histogram +##### Histogram | App Metrics label name | Logz.io label name | |---|---| @@ -1766,7 +1824,7 @@ Replace [[your_meter_name]] with the name that you assigned to the meter metric. Replace [[your_histogram_name]] with the name that you assigned to the histogram metric. -###### Timer +##### Timer | App Metrics label name | Logz.io label name | |---|---| @@ -1795,7 +1853,7 @@ Replace [[your_histogram_name]] with the name that you assigned to the histogram Replace [[your_timer_name]] with the name that you assigned to the timer metric. -###### Apdex +##### Apdex | App Metrics parameter name | Logz.io parameter name | |---|---| @@ -1813,9 +1871,10 @@ For troubleshooting this solution, see our [.NET core troubleshooting guide](htt -#### Export using ToLogzioHttp exporter +### Export using ToLogzioHttp exporter + +You can configure MetricsBuilder to use ToLogzioHttp exporter, which allows you to export metrics via HTTP using additional export settings. Add the following code block to define the MetricsBuilder: -You can configure MetricsBuilder to use ToLogzioHttp exporter, which allows you to export metrics via HTTP using additional export settings. 
To enable this exporter, add the following code block to define the MetricsBuilder: ```csharp var metrics = new MetricsBuilder() @@ -1840,25 +1899,26 @@ var metrics = new MetricsBuilder() * `HttpPolicy.FailuresBeforeBackoff ` is the value defining the number of failures before backing-off when metrics are failing to report to the metrics ingress endpoint. * `HttpPolicy.Timeout ` is the value in seconds defining the HTTP timeout duration when attempting to report metrics to the metrics ingress endpoint. -#### .NET Core runtime metrics +### .NET Core runtime metrics -The runtime metrics are additional parameters that will be sent from your code. These parameters include: +The runtime metrics include additional parameters sent from your code, such as: -* Garbage collection frequencies and timings by generation/type, pause timings and GC CPU consumption ratio. + +* Garbage collection frequencies, timings by generation/type, pause timings, and GC CPU consumption ratio. * Heap size by generation. * Bytes allocated by small/large object heap. * JIT compilations and JIT CPU consumption ratio. * Thread pool size, scheduling delays and reasons for growing/shrinking. * Lock contention. -To enable collection of these metrics with default settings, add the following code block after the MetricsBuilder: +To enable the collection of these metrics with default settings, add the following code block after the MetricsBuilder: ```csharp // metrics is the MetricsBuilder IDisposable collector = DotNetRuntimeStatsBuilder.Default(metrics).StartCollecting(); ``` -To enable collection of these metrics with custom settings, add the following code block after the MetricsBuilder: +For custom settings, use the following code block after the MetricsBuilder: ```csharp IDisposable collector = DotNetRuntimeStatsBuilder @@ -1873,9 +1933,9 @@ IDisposable collector = DotNetRuntimeStatsBuilder Data collected from these metrics is found in Logz.io, under the Contexts labels `process` and `dotnet`. -#### Get current snapshot +### Get current snapshot -The current snapshot creates a preview of the metrics in Logz.io format. To enable this option, add the following code block to the MetricsBuilder: +To enable the current snapshot preview of metrics in Logz.io format, add the following code block to the MetricsBuilder: ```csharp var metrics = new MetricsBuilder() @@ -1921,7 +1981,7 @@ On deployment, the ASP.NET Core instrumentation automatically captures spans fro **Before you begin, you'll need**: * An ASP.NET Core application without instrumentation -* An active account with Logz.io +* An active Logz.io account * Port `4317` available on your host system * A name defined for your tracing service. You will need it to identify the traces in Logz.io. @@ -1967,7 +2027,7 @@ This integration enables you to auto-instrument your ASP.NET Core application an **Before you begin, you'll need**: * An ASP.NET Core application without instrumentation -* An active account with Logz.io +* An active Logz.io account * Port `4317` available on your host system * A name defined for your tracing service @@ -2014,7 +2074,7 @@ On deployment, the ASP.NET Core instrumentation automatically captures spans fro **Before you begin, you'll need**: * An ASP.NET Core application without instrumentation -* An active account with Logz.io +* An active Logz.io account * Port `4317` available on your host system * A name defined for your tracing service. You will need it to identify the traces in Logz.io. 
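For orientation only: the flow below relies on the auto-instrumentation agent plus a local collector, but a minimal manual-setup sketch in ASP.NET Core shows where the port `4317` and service-name prerequisites come in. This is an assumption-based illustration, not the integration's method; it presumes the standard `OpenTelemetry.Extensions.Hosting`, `OpenTelemetry.Instrumentation.AspNetCore`, and `OpenTelemetry.Exporter.OpenTelemetryProtocol` NuGet packages, and the service name is a placeholder.

```csharp
// Hedged sketch: manual OTLP tracing setup in ASP.NET Core (requires the
// Microsoft.NET.Sdk.Web SDK and the OpenTelemetry packages named above).
using OpenTelemetry.Resources;
using OpenTelemetry.Trace;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddOpenTelemetry()
    // The service name is how you'll identify these traces in Logz.io.
    .ConfigureResource(resource => resource.AddService("<<your-tracing-service-name>>"))
    .WithTracing(tracing => tracing
        .AddAspNetCoreInstrumentation()          // capture spans for incoming HTTP requests
        .AddOtlpExporter(options =>
            // The local OpenTelemetry collector listening on port 4317.
            options.Endpoint = new Uri("http://localhost:4317")));

var app = builder.Build();
app.MapGet("/", () => "traced");
app.Run();
```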
@@ -2062,7 +2122,7 @@ This integration enables you to auto-instrument your ASP.NET Core application an

**Before you begin, you'll need**:

* An ASP.NET Core application without instrumentation
-* An active account with Logz.io
+* An active Logz.io account
* Port `4317` available on your host system
* A name defined for your tracing service

@@ -2096,4 +2156,4 @@ Give your traces some time to get from your system to ours, and then open [Traci

For troubleshooting the OpenTelemetry instrumentation, see our [OpenTelemetry troubleshooting guide](https://docs.logz.io/docs/user-guide/distributed-tracing/troubleshooting/otel-troubleshooting/).

- 
+ 
\ No newline at end of file
diff --git a/docs/shipping/Code/go.md b/docs/shipping/Code/go.md
index 917814a8..65227761 100644
--- a/docs/shipping/Code/go.md
+++ b/docs/shipping/Code/go.md
@@ -377,7 +377,7 @@ _ = metric.Must(meter).NewFloat64UpDownCounterObserver(

Give your data some time to get from your system to ours, then log in to your Logz.io Metrics account, and open [the Logz.io Metrics tab](https://app.logz.io/#/dashboard/metrics/).

-{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics.
+Install the pre-built dashboard to enhance the observability of your metrics.



@@ -407,7 +407,7 @@ On deployment, the Go instrumentation automatically captures spans from your app

**Before you begin, you'll need**:

* A Go application without instrumentation
-* An active account with Logz.io
+* An active Logz.io account
* Port `4318` available on your host system
* A name defined for your tracing service. You will need it to identify the traces in Logz.io.


diff --git a/docs/shipping/Code/java-traces-with-kafka-using-opentelemetry.md b/docs/shipping/Code/java-traces-with-kafka-using-opentelemetry.md
index f3348af7..361d06fb 100644
--- a/docs/shipping/Code/java-traces-with-kafka-using-opentelemetry.md
+++ b/docs/shipping/Code/java-traces-with-kafka-using-opentelemetry.md
@@ -33,7 +33,7 @@ This integration enables you to auto-instrument your Java application and run a

**Before you begin, you'll need**:

* A Java application without instrumentation
-* An active account with Logz.io
+* An active Logz.io account
* Port `4317` available on your host system
* A name defined for your tracing service. You will need it to identify the traces in Logz.io.

diff --git a/docs/shipping/Code/java.md b/docs/shipping/Code/java.md
index 687dd15b..d646b2b6 100644
--- a/docs/shipping/Code/java.md
+++ b/docs/shipping/Code/java.md
@@ -16,7 +16,7 @@ drop_filter: []
---

:::tip
-If your code runs within Kubernetes, it's best practice to use our Kubernetes integration to collect various telemetry types.
+For Kubernetes data, use the [dedicated integration](https://docs.logz.io/docs/shipping/containers/kubernetes/).
:::

## Logs



@@ -32,22 +32,18 @@ import TabItem from '@theme/TabItem';
[Project's GitHub repo](https://github.com/logzio/logzio-log4j2-appender/)
:::

-The Logz.io Log4j 2 appender sends logs using non-blocking threading, bulks, and HTTPS encryption to port 8071.
+The Logz.io Log4j 2 appender sends logs using non-blocking threading, bulks, and HTTPS encryption to port 8071. It uses LogzioSender: logs are queued in a buffer and shipped by a background task, so logging is 100% non-blocking. This .jar includes LogzioSender, BigQueue, Gson, and Guava.

-This appender uses LogzioSender.
-Logs queue in the buffer and are 100% non-blocking.
-A background task handles log shipping.
-To help manage dependencies, this .jar shades LogzioSender, BigQueue, Gson, and Guava. -**Before you begin, you'll need**: -Log4j 2.7 or higher, -Java 8 or higher + +**Requirements:**: +* Log4j 2.7+ +* Java 8+ -#### Add the dependency to your project +### Add a dependency to a configuration file -Add a dependency to your project configuration file (for instance, `pom.xml` in a Maven project). JDK 8: ```xml @@ -59,10 +55,10 @@ JDK 8: ``` :::note -In case of issues, consider using version 1.0.12 or other earlier versions as a potential solution. +If you encounter any issue, try using version 1.0.12 or earlier. ::: -JDK 11 and above: +JDK 11+: ```xml io.logz.log4j2 @@ -71,7 +67,7 @@ JDK 11 and above: ``` -The appender also requires a logger implementation, for example: +The appender also requires a logger implementation: ```xml org.apache.logging.log4j @@ -80,20 +76,18 @@ The appender also requires a logger implementation, for example: ``` -The logzio-log4j2-appender artifact can be found in the Maven central repo at https://search.maven.org/artifact/io.logz.log4j2/logzio-log4j2-appender. +Find the logzio-log4j2-appender artifact in the [Maven central repo](https://search.maven.org/artifact/io.logz.log4j2/logzio-log4j2-appender). -#### Configure the appender +### Appender configuration -Use the samples in the code block below as a starting point, and replace the sample with a configuration that matches your needs. +Replace the placeholders with your configuration. -For a complete list of options, see the configuration parameters below the code block.👇 XML example: ```xml - <> https://<>:8071 @@ -135,44 +129,45 @@ rootLogger.appenderRef.logzioAppender.ref = logzioAppender :::note -See the [Log4j documentation](https://logging.apache.org/log4j/2.x/manual/configuration.html) for more information on the Log4j 2 configuration file. +For more details, see the [Log4j documentation](https://logging.apache.org/log4j/2.x/manual/configuration.html). ::: -#### Parameters +### Appender parameters | Parameter | Default | Explained | Required/Optional | | ------------------ | ------------------------------------ | ----- | ----- | -| **logzioToken** | *None* | Your Logz.io log shipping token securely directs the data to your [Logz.io account](https://app.logz.io/#/dashboard/settings/manage-tokens/log-shipping). {@include: ../../_include/log-shipping/log-shipping-token.html} Begin with `$` to use an environment variable or system property with the specified name. For example, `$LOGZIO_TOKEN` uses the LOGZIO_TOKEN environment variable. | Required | -| **logzioType** | *java* | The [log type](https://support.logz.io/hc/en-us/articles/209486049-What-is-Type-) for that appender, it must not contain any spaces | Optional | +| **logzioToken** | *None* | Your Logz.io log shipping token. {@include: ../../_include/log-shipping/log-shipping-token.html} Begin with `$` to use an environment variable or system property with the specified name. For example, `$LOGZIO_TOKEN` uses the LOGZIO_TOKEN environment variable. | Required | +| **logzioType** | *java* | The [log type](https://support.logz.io/hc/en-us/articles/209486049-What-is-Type-). Can't contain spaces. | Optional | | **logzioUrl** | *https://listener.logz.io:8071* | Listener URL and port. 
{@include: ../../_include/log-shipping/listener-var.html} | Required | -| **drainTimeoutSec** | *5* | How often the appender should drain the queue (in seconds) | Required | -| **socketTimeoutMs** | *10 * 1000* | The socket timeout during log shipment | Required | -| **connectTimeoutMs** | *10 * 1000* | The connection timeout during log shipment | Required | -| **addHostname** | *false* | If true, then a field named 'hostname' will be added holding the host name of the machine. If from some reason there's no defined hostname, this field won't be added | Required | -| **additionalFields** | *None* | Allows to add additional fields to the JSON message sent. The format is "fieldName1=fieldValue1;fieldName2=fieldValue2". You can optionally inject an environment variable value using the following format: "fieldName1=fieldValue1;fieldName2=$ENV_VAR_NAME". In that case, the environment variable should be the only value. In case the environment variable can't be resolved, the field will be omitted. | Optional | -| **debug** | *false* | Print some debug messages to stdout to help to diagnose issues | Required | -| **compressRequests** | *false* | Boolean. `true` if logs are compressed in gzip format before sending. `false` if logs are sent uncompressed. | Required | -| **exceedMaxSizeAction** | *"cut"* | String. cut to truncate the message field or drop to drop log that exceed the allowed maximum size for logzio. If the log size exceeding the maximum size allowed after truncating the message field, the log will be dropped. | Required | - -#### Parameters for in-memory queue +| **drainTimeoutSec** | *5* | How often the appender drains the buffer, in seconds. | Required | +| **socketTimeoutMs** | *10 * 1000* | Socket timeout during log shipment. | Required | +| **connectTimeoutMs** | *10 * 1000* | Connection timeout during log shipment, in milliseconds. | Required | +| **addHostname** | *false* | If true, adds a field named `hostname` with the machine's hostname. If there's no defined hostname, the field won't be added. | Required | +| **additionalFields** | *None* | Allows to add additional fields to the JSON message sent. The format is "fieldName1=fieldValue1;fieldName2=fieldValue2". Optionally, inject an environment variable value using this format: "fieldName1=fieldValue1;fieldName2=$ENV_VAR_NAME". The environment variable should be the only value. If the environment variable can't be resolved, the field will be omitted. | Optional | +| **debug** | *false* | Boolean. Set to `true` to print debug messages to stdout. | Required | +| **compressRequests** | *false* | Boolean. If `true`, logs are compressed in gzip format before sending. If `false`, logs are sent uncompressed. | Required | +| **exceedMaxSizeAction** | *"cut"* | String. Use "cut" to truncate the message or "drop" to discard oversized logs. Logs exceeding the maximum size after truncation will be dropped. | Required | + +### In-memory queue parameters | Parameter | Default | Explained | | ------------------ | ------------------------------------ | ----- | -| **inMemoryQueueCapacityBytes** | *1024 * 1024 * 100* | The amount of memory(bytes) we are allowed to use for the memory queue. If the value is -1 the sender will not limit the queue size.| -| **inMemoryLogsCountCapacity** | *-1* | Number of logs we are allowed to have in the queue before dropping logs. If the value is -1 the sender will not limit the number of logs allowed.| -| **inMemoryQueue** | *false* | Set to true if the appender uses in memory queue. 
By default the appender uses disk queue| +| **inMemoryQueueCapacityBytes** | *1024 * 1024 * 100* | Memory (in bytes) allowed to use for the memory queue. -1 value means no limit.| +| **inMemoryLogsCountCapacity** | *-1* | Number of logs allowed in the queue before dropping logs. -1 value means no limit.| +| **inMemoryQueue** | *false* | Set to true to use in memory queue. Default is disk queue.| + +### Disk queue parameters -#### Parameters for disk queue | Parameter | Default | Explained | | ------------------ | ------------------------------------ | ----- | -| **fileSystemFullPercentThreshold** | *98* | The percent of used file system space at which the sender will stop queueing. When we will reach that percentage, the file system in which the queue is stored will drop all new logs until the percentage of used space drops below that threshold. Set to -1 to never stop processing new logs | -| **gcPersistedQueueFilesIntervalSeconds** | *30* | How often the disk queue should clean sent logs from disk | -| **bufferDir**(deprecated, use queueDir) | *System.getProperty("java.io.tmpdir")* | Where the appender should store the queue | -| **queueDir** | *System.getProperty("java.io.tmpdir")* | Where the appender should store the queue | +| **fileSystemFullPercentThreshold** | *98* | Percentage of file system usage at which the sender stops queueing. Once reached, new logs are dropped until usage falls below the threshold. Set to -1 to never stop processing logs. | +| **gcPersistedQueueFilesIntervalSeconds** | *30* | Interval (in seconds) for cleaning sent logs from disk. | +| **bufferDir**(deprecated, use queueDir) | *System.getProperty("java.io.tmpdir")* | Directory for storing the queue. | +| **queueDir** | *System.getProperty("java.io.tmpdir")* | Directory for storing the queue. | -#### Code Example +Code Example: ```java import org.apache.logging.log4j.LogManager; @@ -190,9 +185,9 @@ public class LogzioLog4j2Example { -#### Troubleshooting +### Troubleshooting -If you receive an error message regarding a missing appender, try adding the following configuration to the beginning and end of the configuration file: +If you receive an error about a missing appender, add the following to the configuration file: ```xml @@ -204,12 +199,10 @@ If you receive an error message regarding a missing appender, try adding the fol ``` -#### MDC +#### Using Mapped Diagnostic Context (MDC) -When you add mapped diagnostic context (MDC) to your logs, -each key-value pair you define is added log lines while the thread is alive. +Add MDC with the following code: -So this code sample... ```java import org.apache.logging.log4j.LogManager; @@ -225,7 +218,7 @@ public class LogzioLog4j2Example { } ``` -...produces this log output. +Which produces the following output: ```json { @@ -235,11 +228,10 @@ public class LogzioLog4j2Example { } ``` -#### Markers +#### Using Markers -Markers are values you can use to tag and enrich log statements. +Markers are used to tag and enrich log statements. Add them by running this: -This code... ```java import org.apache.logging.log4j.LogManager; @@ -256,7 +248,7 @@ public class LogzioLog4j2Example { } ``` -...produces this log output. +Which produces the following output: ```json { @@ -275,26 +267,18 @@ public class LogzioLog4j2Example { Logback sends logs to your Logz.io account using non-blocking threading, bulks, and HTTPS encryption to port 8071. -This appender uses BigQueue implementation of persistent queue, so all logs are backed up to a local file system before being sent. 
-Once you send a log, it will be enqueued in the buffer and 100% non-blocking. -A background task handles the log shipment. -To help manage dependencies, this .jar shades BigQueue, Gson, and Guava. +This appender uses BigQueue for a persistent queue, backing up all logs to the local file system before sending. Logs are enqueued in the buffer and 100% non-blocking, with a background task handling shipment. The `.jar` includes BigQueue, Gson, and Guava for dependency management. -**Before you begin, you'll need**: -Logback 1.1.7 or higher, -Java 8 or higher +**Requirements**: +* Logback 1.1.7+ +* Java 8+ +### Add dependencies from Maven -#### Add the dependency to your project - -Add a dependency to your project configuration file - -#### Installation from Maven +Add the following dependencies to `pom.xml`: -In the `pom.xml` add the following dependencies: - -JDK 11 and above: +JDK 11+: ``` io.logz.logback @@ -304,7 +288,7 @@ JDK 11 and above: ``` -JDK 8 and above: +JDK 8+: ``` io.logz.logback @@ -313,7 +297,7 @@ JDK 8 and above: ``` -Logback appender also requires logback classic: +Logback classic: ``` ch.qos.logback @@ -322,19 +306,16 @@ Logback appender also requires logback classic: ``` -The logzio-log4j2-appender artifact can be found in the Maven central repo at https://search.maven.org/artifact/io.logz.log4j2/logzio-log4j2-appender. - +Find logzio-log4j2-appender artifact in the [Maven central repo](https://search.maven.org/artifact/io.logz.log4j2/logzio-log4j2-appender). -#### Configure the appender -Use the samples in the code block below as a starting point, and replace the sample with a configuration that matches your needs. +#### Appender configuration -For a complete list of options, see the configuration parameters below the code block.👇 +Replace placeholders with your configuration. :::note -See the [Logback documentation](https://logback.qos.ch/manual/configuration.html) for more information on the Logback configuration file. +For more details, see the [Logback documentation](https://logback.qos.ch/manual/configuration.html). ::: - ```xml @@ -357,7 +338,7 @@ See the [Logback documentation](https://logback.qos.ch/manual/configuration.html ``` -If you want to output `debug` messages, include the `debug` parameter into the code as follows: +To output `debug` messages, include the parameter into the code: ```xml @@ -384,17 +365,17 @@ If you want to output `debug` messages, include the `debug` parameter into the c ``` -#### Parameters +#### Appender parameters | Parameter | Description | Required/Default | |---|---|---| -| token | Your Logz.io log shipping token securely directs the data to your [Logz.io account](https://app.logz.io/#/dashboard/settings/manage-tokens/log-shipping). {@include: ../../_include/log-shipping/log-shipping-token.html} Begin with `$` to use an environment variable or system property with the specified name. For example, `$LOGZIO_TOKEN` uses the LOGZIO_TOKEN environment variable. | Required | -| logzioUrl | Listener URL and port. {@include: ../../_include/log-shipping/listener-var.html} | `https://listener.logz.io:8071` | -| logzioType | The [log type](https://docs.logz.io/docs/user-guide/data-hub/log-parsing/default-parsing/#built-in-log-types), shipped as `type` field. Used by Logz.io for consistent parsing. Can't contain spaces. | `java` | -| addHostname | Indicates whether to add `hostname` field to logs. This field holds the machine's host name. Set to `true` to include hostname. Set to `false` to leave it off. 
If a host name can't be found, this field is not added. | `false` | -| additionalFields | Adds fields to the JSON message output, formatted as `field1=value1;field2=value2`. Use `$` to inject an environment variable value, such as `field2=$VAR_NAME`. The environment variable should be the only value in the key-value pair. If the environment variable can't be resolved, the field is omitted. | N/A | +| token | Your Logz.io log shipping token. {@include: ../../_include/log-shipping/log-shipping-token.html} Begin with `$` to use an environment variable or system property with the specified name. For example, `$LOGZIO_TOKEN` uses the LOGZIO_TOKEN environment variable. | Required | +| logzioUrl | Listener URL and port. {@include: ../../_include/log-shipping/listener-var.html} | `https://listener.logz.io:8071` | +| logzioType | The [log type](https://docs.logz.io/docs/user-guide/data-hub/log-parsing/default-parsing/#built-in-log-types), shipped as `type` field. Can't contain spaces. | `java` | +| addHostname | If true, adds a field named `hostname` with the machine's hostname. If there's no defined hostname, the field won't be added. | `false` | +| additionalFields | Adds fields to the JSON message output, formatted as `field1=value1;field2=value2`. Use `$` to inject an environment variable value, such as `field2=$VAR_NAME`. The environment variable should be the only value in the key-value pair. If the environment variable can't be resolved, the field is omitted. | N/A | | bufferDir | Filepath where the appender stores the buffer. | `System.getProperty("java.io.tmpdir")` | -| compressRequests | Boolean. Set to `true` if you're sending gzip-compressed logs. Set to `false` if sending uncompressed logs. | `false` | +| compressRequests | Boolean. If `true`, logs are compressed in gzip format before sending. If `false`, logs are sent uncompressed. | `false` | | connectTimeout | Connection timeout during log shipment, in milliseconds. | `10 * 1000` | | debug | Boolean. Set to `true` to print debug messages to stdout. | `false` | | drainTimeoutSec | How often the appender drains the buffer, in seconds. | `5` | @@ -404,6 +385,7 @@ If you want to output `debug` messages, include the `debug` parameter into the c | socketTimeout | Socket timeout during log shipment, in milliseconds. | `10 * 1000` | + #### Code sample ```java @@ -422,17 +404,9 @@ public class LogzioLogbackExample { -#### More options - -You can optionally add mapped diagnostic context (MDC) -and markers to your logs. +#### Add MDC to your logs -#### MDC - -You can add Mapped Diagnostic Context (MDC) to your logs. -Each key-value pair you define is added log lines while the thread is alive. - -So this code sample... +Each key-value pair you define will be included in log lines while the thread is active. Add it by running the following: ```java import org.slf4j.Logger; @@ -449,7 +423,7 @@ public class LogzioLogbackExample { } ``` -...produces this log output. +Which produces this output: ```json { @@ -459,11 +433,10 @@ public class LogzioLogbackExample { } ``` -#### Markers +#### Add Markers to your logs -Markers are values you can use to tag and enrich log statements. +Markers are used to tag and enrich log statements. Add it by running: -This code... ```java import org.slf4j.Logger; @@ -481,7 +454,7 @@ public class LogzioLogbackExample { } ``` -...produces this log output. 
+Which produces this output: ```json { @@ -529,6 +502,7 @@ If the log appender does not ship logs, add `true :::note [Project's GitHub repo](https://github.com/logzio/micrometer-registry-logzio/) ::: + ### Usage @@ -558,24 +532,26 @@ implementation("io.logz.micrometer:micrometer-registry-logzio:1.0.2") -#### Import in your package +### Import to your code ```java import io.micrometer.logzio.LogzioConfig; import io.micrometer.logzio.LogzioMeterRegistry; ``` -#### Quick start +### Getting started + +Replace the placeholders in the code (indicated by `<< >>`) to match your specifics. + -Replace the placeholders in the code (indicated by the double angle brackets `<< >>`) to match your specifics. | Environment variable | Description |Required/Default| |---|---|---| -|`<>`| The full Logz.io Listener URL for your region, configured to use port **8052** for http traffic, or port **8053** for https traffic (example: https://listener.logz.io:8053). For more details, see the [regions page](https://docs.logz.io/docs/user-guide/admin/hosting-regions/account-region/) in logz.io docs | Required| -|`<>`| The Logz.io Prometheus Metrics account token. Find it under **Settings > Manage accounts**. [Look up your Metrics account token.](https://docs.logz.io/docs/user-guide/admin/authentication-tokens/finding-your-metrics-account-token/) | Required| -|interval | The interval in seconds, to push metrics to Logz.io **Note that your program will need to run for at least one interval for the metrics to be sent** | Required| +|`<>`| Logz.io Listener URL for your region. Port **8052** for HTTP, or port **8053** for HTTPS. See the [regions page](https://docs.logz.io/docs/user-guide/admin/hosting-regions/account-region/) for more info. | Required| +|`<>`| Logz.io Prometheus Metrics account token. Find it under **Settings > Manage accounts**. [Look up your Metrics account token.](https://docs.logz.io/docs/user-guide/admin/authentication-tokens/finding-your-metrics-account-token/). | Required| +|interval | Interval in seconds to push metrics to Logz.io. **Your program must run for at least one interval**. | Required| -#### In your package +### Example: ```java package your_package; @@ -638,9 +614,9 @@ class MicrometerLogzio { } ``` -#### Common tags +### Configuring common tags -You can attach common tags to your registry that will be added to all metrics reported, for example: +Attach common tags to your registry to include them in all reported metrics. For example: ```java // Initialize registry @@ -649,13 +625,9 @@ LogzioMeterRegistry registry = new LogzioMeterRegistry(logzioConfig, Clock.SYSTE registry.config().commonTags("key", "value"); ``` -#### Filter labels +### Filtering labels - Include -You can the `includeLabels` or `excludeLabels` functions to filter your metrics by labels. - -#### Include - -Take for example this following usage, In your `LogzioConfig()` constructor: +Use `includeLabels` in your `LogzioConfig()` constructor: ```java @Override @@ -668,9 +640,9 @@ public Hashtable includeLabels() { ``` The registry will keep only metrics with the label `__name__` matching the regex `my_counter_abc_total|my_second_counter_abc_total`, and with the label `k1` matching the regex `v1`. 
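To make the include filter concrete, here is a small sketch. It assumes the `includeLabels()` implementation shown above and Micrometer's Prometheus naming convention, under which a counter named `my_counter_abc` is published as `my_counter_abc_total`; the second counter's name is hypothetical.

```java
import io.micrometer.core.instrument.Counter;

// Kept: published as my_counter_abc_total (matches the __name__ regex) with k1=v1.
Counter kept = Counter.builder("my_counter_abc")
        .tag("k1", "v1")
        .register(registry);
kept.increment();

// Dropped: other_counter_total does not match the __name__ regex, so the
// registry filters it out and it is never shipped to Logz.io.
Counter filteredOut = Counter.builder("other_counter")
        .tag("k1", "v1")
        .register(registry);
filteredOut.increment();
```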
-#### Exclude
+#### Filtering labels - Exclude

-In your `LogzioConfig()` constructor
+Use `excludeLabels` in your `LogzioConfig()` constructor:

```java
@Override
public Hashtable excludeLabels() {
@@ -685,9 +657,11 @@ public Hashtable excludeLabels() {

The registry will drop all metrics with the label `__name__` matching the regex `my_counter_abc_total|my_second_counter_abc_total`, and with the label `k1` matching the regex `v1`.


-#### Meter binders
+### Using meter binders
+
+Micrometer provides a set of binders for monitoring JVM metrics out of the box:
+

-Micrometer provides a set of binders for monitoring JVM metrics out of the box, for example:

```java
// Initialize registry
@@ -714,21 +688,22 @@ new LogbackMetrics().bindTo(registry);
new Log4j2Metrics().bindTo(registry);
```

-For more information about other binders check out [Micrometer-core](https://github.com/micrometer-metrics/micrometer/tree/main/micrometer-core/src/main/java/io/micrometer/core/instrument/binder) Github repo.
-
-#### Types of metrics
+For more information about other binders, check out the [Micrometer-core](https://github.com/micrometer-metrics/micrometer/tree/main/micrometer-core/src/main/java/io/micrometer/core/instrument/binder) GitHub repo.

-Refer to the Micrometer [documentation](https://micrometer.io/docs/concepts) for more details.
+### Metric types

| Name | Behavior |
| ---- | ---------- |
-| Counter | Metric value can only go up or be reset to 0, calculated per `counter.increment(value); ` call. |
-| Gauge | Metric value can arbitrarily increment or decrement, values can set automaticaly by tracking `Collection` size or set manually by `gauge.set(value)` |
-| DistributionSummary | Metric values captured by the `summary.record(value)` function, the output is a distribution of `count`,`sum` and `max` for the recorded values during the push interval. |
-| Timer | Mesures timing, metric values can be recorded by `timer.record()` call. |
+| Counter | Metric value can only go up or be reset to 0, calculated per `counter.increment(value);` call. |
+| Gauge | Metric value can arbitrarily increment or decrement; values can be set automatically by tracking `Collection` size or manually by `gauge.set(value)`. |
+| DistributionSummary | Captures metric values via `summary.record(value)`. Outputs a distribution of `count`, `sum` and `max` for the recorded values during the push interval. |
+| Timer | Measures timing. Metric values are recorded by `timer.record()` calls. |

-##### [Counter](https://micrometer.io/docs/concepts#_counters)
+For more details, see the Micrometer [documentation](https://micrometer.io/docs/concepts).
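Because metrics are pushed once per interval, a short-lived program can exit before anything is published. One hedged pattern (exact flush behavior on `close()` varies by Micrometer version, so treat this as a sketch) is to close the registry on shutdown:

```java
// Give in-flight metrics a chance to publish before the JVM exits.
Runtime.getRuntime().addShutdownHook(new Thread(registry::close));
```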
+ + +#### [Counter](https://micrometer.io/docs/concepts#_counters) ```java Counter counter = Counter @@ -742,7 +717,7 @@ counter.increment(2); // The following metric will be created and sent to Logz.io: counter_example_total{env="dev"} 3 ``` -##### [Gauge](https://micrometer.io/docs/concepts#_gauges) +#### [Gauge](https://micrometer.io/docs/concepts#_gauges) ```java // Create Gauge @@ -766,7 +741,7 @@ manual_gauge.set(83); // The following metric will be created and sent to Logz.io:: manual_gauge_example{env="dev"} 83 ``` -##### [DistributionSummary](https://micrometer.io/docs/concepts#_distribution_summaries) +#### [DistributionSummary](https://micrometer.io/docs/concepts#_distribution_summaries) ```java // Create DistributionSummary @@ -785,7 +760,7 @@ summary.record(30); // summary_example_sum{env="dev"} 60 ``` -##### [Timer](https://micrometer.io/docs/concepts#_timers) +#### [Timer](https://micrometer.io/docs/concepts#_timers) ```java // Create Timer @@ -810,21 +785,15 @@ timer.record(()-> { // timer_example_duration_seconds_sum{env="dev"} 3000 ``` - - ### Run your application -Run your application to start sending metrics to Logz.io. - - -### Check Logz.io for your metrics - -Give your metrics some time to get from your system to ours, and then open [Metrics dashboard](https://app.logz.io/#/dashboard/metrics/discover?). +Run your application to start sending metrics to Logz.io. Give it some time to run and check the Logz.io [Metrics dashboard](https://app.logz.io/#/dashboard/metrics/discover?). ## Traces -Deploy this integration to enable automatic instrumentation of your Java application using OpenTelemetry. + +Deploy this integration for automatic instrumentation of your Java application using OpenTelemetry. The Java agent captures spans and forwards them to the collector, which exports data to your Logz.io account. This integration includes: @@ -832,53 +801,58 @@ This integration includes: * Installing the OpenTelemetry collector with Logz.io exporter * Establishing communication between the agent and collector -On deployment, the Java agent automatically captures spans from your application and forwards them to the collector, which exports the data to your Logz.io account. -### Setup auto-instrumentation for your locally hosted Java application and send traces to Logz.io +**Requirements**: -**Before you begin, you'll need**: - -* A Java application without instrumentation -* An active account with Logz.io -* Port `4317` available on your host system +* A Java application without instrumentation. +* An active Logz.io account. +* Port `4317` available on your host system. * A name defined for your tracing service. You will need it to identify the traces in Logz.io. :::note This integration uses OpenTelemetry Collector Contrib, not the OpenTelemetry Collector Core. ::: - -### Download Java agent +### Setting up auto-instrumentation and sending Traces to Logz.io -Download the latest version of the [OpenTelemetry Java agent](https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/latest/download/opentelemetry-javaagent.jar) to the host of your Java application. -### Download and configure OpenTelemetry collector +**1. Download Java agent** -Create a dedicated directory on the host of your Java application and download the [OpenTelemetry collector](https://github.com/open-telemetry/opentelemetry-collector-contrib/releases/tag/v0.70.0) that is relevant to the operating system of your host. 
+Download the latest version of the [OpenTelemetry Java agent](https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/latest/download/opentelemetry-javaagent.jar) to your application host. -After downloading the collector, create a configuration file `config.yaml` with the following parameters: + +**2. Download and configure OpenTelemetry collector** + +Create a dedicated directory on the host of your Java application and download the relevant [OpenTelemetry collector](https://github.com/open-telemetry/opentelemetry-collector-contrib/releases/tag/v0.70.0). + +Next, create a configuration file, `config.yaml`, with the following parameters: {@include: ../../_include/tracing-shipping/collector-config.md} {@include: ../../_include/tracing-shipping/replace-tracing-token.html} -### Start the collector -Run the following command: +**3. Start the collector** + +Run: ```shell /otelcontribcol_ --config ./config.yaml ``` -* Replace `` with the path to the directory where you downloaded the collector. -* Replace `` with the version name of the collector applicable to your system, e.g. `otelcontribcol_darwin_amd64`. +* Replace `` with the collector's directory. +* Replace `` with the version name, e.g. `otelcontribcol_darwin_amd64`. + + + +**4. Attach the agent** + +Run the following command from your Java application's directory: -### Attach the agent to the runtime and run it -Run the following command from the directory of your Java application: ```shell java -javaagent:/opentelemetry-javaagent-all.jar \ @@ -889,13 +863,14 @@ java -javaagent:/opentelemetry-javaagent-all.jar \ -jar target/*.jar ``` -* Replace `` with the path to the directory where you downloaded the agent. -* Replace `` with the name of your tracing service defined earlier. +* Replace `` with the collector's directory. +* Replace `` with the tracing service name. + -### Controlling the number of spans +### Control the number of spans -To limit the number of outgoing spans, you can use the sampling option in the Java agent. +Use the sampling option in the Java agent to limit outgoing spans. The sampler configures whether spans will be recorded for any call to `SpanBuilder.startSpan`. @@ -913,6 +888,6 @@ Supported values for `otel.traces.sampler` are - "parentbased_always_off": ParentBased(root=AlwaysOffSampler) - "parentbased_traceidratio": ParentBased(root=TraceIdRatioBased). `otel.traces.sampler.arg` sets the ratio. -### Check Logz.io for your traces +### Viewing Traces in Logz.io -Give your traces some time to get from your system to ours, and then open [Tracing](https://app.logz.io/#/dashboard/jaeger). +Give your traces time to process, after which they'll be available in your [Tracing](https://app.logz.io/#/dashboard/jaeger) dashboard. \ No newline at end of file diff --git a/docs/shipping/Code/json.md b/docs/shipping/Code/json.md index 93f5fafc..30ab580c 100644 --- a/docs/shipping/Code/json.md +++ b/docs/shipping/Code/json.md @@ -21,19 +21,18 @@ import TabItem from '@theme/TabItem'; -If you want to ship logs from your code but don't have a library in place, you can send them directly to the Logz.io listener as a minified JSON file. +To ship logs directly to the Logz.io listener, send them as minified JSON files over an HTTP/HTTPS connection. -The listeners accept bulk uploads over an HTTP/HTTPS connection or TLS/SSL streams over TCP. 
-### The request path and header
+### Request path and header

-For HTTPS shipping _(recommended)_, use this URL configuration:
+For HTTPS _(recommended)_:

```
https://<>:8071?token=<>&type=<>
```

-Otherwise, for HTTP shipping, use this configuration:
+For HTTP:

```
http://<>:8070?token=<>&type=<>
@@ -44,11 +43,13 @@ http://<>:8070?token=<>&type=<>

* {@include: ../../_include/log-shipping/type.md} Otherwise, the default `type` is `http-bulk`.

-### The request body
+### Request body

-Your request's body is a list of logs in minified JSON format. Also, each log must be separated by a new line. You can escape newlines in a JSON string with `\n`.
+The request body is a list of logs in minified JSON format, with each log separated by a newline (`\n`).

-For example:
+
+
+Example:

```json
{"message": "Hello there", "counter": 1}
@@ -57,12 +58,11 @@

### Limitations

-* Max body size is 10 MB (10,485,760 bytes)
-* Each log line must be 500,000 bytes or less
-* If you include a `type` field in the log, it overrides `type` in the request header
+* Max body size: 10 MB (10,485,760 bytes).
+* Max log line size: 500,000 bytes.
+* A `type` field in the log overrides the `type` in the request header.

-
-### Code sample
+For example:

```shell
echo $'{"message":"hello there", "counter": 1}\n{"message":"hello again", "counter": 2}' \
@@ -75,13 +75,13 @@ echo $'{"message":"hello there", "counter": 1}\n{"message":"hello again", "count

#### 200 OK

-All logs were received and validated. Give your logs some time to get from your system to ours, and then check your [Logz.io Log Management account](https://app.logz.io/#/dashboard/osd) for your logs.
+All logs received and validated. Allow some time for data ingestion, then open your [Logz.io Log Management account](https://app.logz.io/#/dashboard/osd).

The response body is empty.

#### 400 BAD REQUEST

-The input wasn't valid. The response message will look like this:
+Invalid input. Response example:

```
@@ -95,32 +95,31 @@

#### 401 UNAUTHORIZED

-The token query string parameter is missing or not valid.
-Make sure you're using the right account token.
+Missing or invalid token query string parameter. Ensure you're using the correct account token.
+
+Response: "Logging token is missing" or "Logging token is not valid".

-In the response body, you'll see either "Logging token is missing" or "Logging token is not valid" as the reason for the response.

#### 413 REQUEST ENTITY TOO LARGE

-The request body size is larger than 10 MB.
+Request body size exceeds 10 MB.

-If you want to ship logs from your code but don't have a library in place, you can send them directly to the Logz.io listener as a minified JSON file.
+You can also ship logs directly to the Logz.io listener as minified JSON log lines over TLS/SSL streams via TCP.

-The listeners accept bulk uploads over an HTTP/HTTPS connection or TLS/SSL streams over TCP.
+### JSON log structure

-### JSON log structure

-Keep to these practices when shipping JSON logs over TCP:
+Follow these practices when shipping JSON logs over TCP (a sample stream follows this list):

+* Each log must be a single-line JSON object.
+* Each log line must be 500,000 bytes or less.
+* Each log line must be followed by a `\n` (even the last log).
+* Include your account token as a top-level property: `{ ... "token": "<>" , ... }`.
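+
+For instance, a stream of two valid log lines could look like this — a minimal illustration reusing the messages from the bulk example above, with `<>` standing in for your log shipping token:
+
+```json
+{"message": "Hello there", "counter": 1, "token": "<>"}
+{"message": "Hello again", "counter": 2, "token": "<>"}
+```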
-* Each log must be a single-line JSON object
-* Each log line must be 500,000 bytes or less
-* Each log line must be followed by a `\n` (even the last log)
-* Include your account token as a top-level property: `{ ... "token": "<>" , ... }`

### Send TLS/SSL streams over TCP

@@ -129,14 +128,15 @@ Keep to these practices when shipping JSON logs over TCP:

### Send the logs

-Using the certificate you just downloaded, send the logs to TCP port 5052 on your region’s listener host. For more information on finding your account’s region, see [Account region](https://docs.logz.io/docs/user-guide/admin/hosting-regions/account-region/).
+Using the downloaded certificate, send logs to TCP port 5052 on your region's listener host. For details on finding your account's region, refer to the [Account region](https://docs.logz.io/docs/user-guide/admin/hosting-regions/account-region/) section.

## Check Logz.io for your logs

-Give your logs some time to get from your system to ours, and then open [Open Search Dashboards](https://app.logz.io/#/dashboard/osd).
-If you still don't see your logs, see [log shipping troubleshooting](https://docs.logz.io/docs/user-guide/log-management/troubleshooting/log-shipping-troubleshooting/).
+Allow some time for data ingestion, then open [OpenSearch Dashboards](https://app.logz.io/#/dashboard/osd).
+
+For troubleshooting, refer to our [log shipping troubleshooting](https://docs.logz.io/docs/user-guide/log-management/troubleshooting/log-shipping-troubleshooting/) guide.

diff --git a/docs/shipping/Code/nestjs.md b/docs/shipping/Code/nestjs.md
index cca6624d..174e9e45 100644
--- a/docs/shipping/Code/nestjs.md
+++ b/docs/shipping/Code/nestjs.md
@@ -34,7 +34,7 @@ On deployment, the NestJS instrumentation automatically captures spans from your

**Before you begin, you'll need**:

* A NestJS application without instrumentation
-* An active account with Logz.io
+* An active Logz.io account
* Port `4317` available on your host system
* A name defined for your tracing service. You will need it to identify the traces in Logz.io.

@@ -176,7 +176,7 @@ This integration enables you to auto-instrument your NestJS application and run

**Before you begin, you'll need**:

* A NestJS application without instrumentation
-* An active account with Logz.io
+* An active Logz.io account
* Port `4317` available on your host system
* A name defined for your tracing service. You will need it to identify the traces in Logz.io.

diff --git a/docs/shipping/Code/node-js.md b/docs/shipping/Code/node-js.md
index 5500e12a..243cf45a 100644
--- a/docs/shipping/Code/node-js.md
+++ b/docs/shipping/Code/node-js.md
@@ -27,30 +27,18 @@ import TabItem from '@theme/TabItem';

[Project's GitHub repo](https://github.com/logzio/logzio-nodejs/)
:::

-logzio-nodejs collects log messages in an array, which is sent asynchronously when it reaches its size limit or time limit (100 messages or 10 seconds), whichever comes first.
-It contains a simple retry mechanism which upon connection reset or client timeout, tries to send a waiting bulk (2 seconds default).
+logzio-nodejs collects log messages in an array and sends them asynchronously once it accumulates 100 messages or after 10 seconds, whichever comes first. On connection reset or timeout, it retries every 2 seconds, doubling the interval between attempts, for up to 3 retries, without blocking other messages. By default, errors are logged to the console; you can customize this with a callback function.

-It's asynchronous, so it doesn't block other messages from being collected and sent.
-The interval increases by a factor of 2 between each retry until it reaches the maximum allowed attempts (3). -By default, any error is logged to the console. -You can change this by using a callback function. +### Configure logzio-nodejs -#### Configure logzio-nodejs - -##### Add the dependency to your project - -Navigate to your project's folder in the command line, and run this command to install the dependency. +Install the dependency: ```shell npm install logzio-nodejs ``` -##### Configure logzio-nodejs - -Use the samples in the code block below as a starting point, and replace the sample with a configuration that matches your needs. - -For a complete list of options, see the configuration parameters below the code block.👇 +Use the sample configuration and edit it according to your needs: ```javascript // Replace these parameters with your configuration @@ -63,7 +51,7 @@ var logger = require('logzio-nodejs').createLogger({ }); ``` -###### Parameters +### Parameters | Parameter | Description | Required/Default | |---|---|---| @@ -81,12 +69,9 @@ var logger = require('logzio-nodejs').createLogger({ | extraFields | JSON format. Adds your custom fields to each log. Format: `extraFields : { field_1: "val_1", field_2: "val_2" , ... }` | -- | | setUserAgent | Set to false to send logs without the user-agent field in the request header. | `true` | -###### Code sample - -You can send log lines as a raw string or as an object. -For more consistent and reliable parsing, we recommend sending logs as objects. +**Code example:** -To send an object (recommended): +You can send log lines as a raw string or an object. For consistent and reliable parsing, we recommend sending them as objects: ```javascript var obj = { @@ -97,21 +82,21 @@ To send an object (recommended): logger.log(obj); ``` -To send raw text: +To send a raw string: ```javascript logger.log('This is a log message'); ``` -Include this line at the end of the run if you're using logzio-nodejs in a severless environment, such as AWS Lambda, Azure Functions, or Google Cloud Functions: +For serverless environments, such as AWS Lambda, Azure Functions, or Google Cloud Functions, include this line at the end of the run: ```javascript logger.sendAndClose(); ``` -###### Custom tags +### Add custom tags to logzio-nodejs -You can add custom tags to your logs using the following format: `{ tags : ['tag1']}`, for example: +Add custom tags using the following format: `{ tags : ['tag1']}`, for example: ```javascript var obj = { @@ -131,28 +116,21 @@ logger.log(obj); [Project's GitHub repo](https://github.com/logzio/winston-logzio/) ::: -This winston plugin is a wrapper for the logzio-nodejs appender, which basically means it just wraps our nodejs logzio shipper. -With winston-logzio, you can take advantage of the winston logger framework with your Node.js app. +This winston plugin is a wrapper for the logzio-nodejs appender, allowing you to use the Logz.io shipper with the winston logger framework in your Node.js app. -#### Configure winston-logzio +### Configure winston-logzio -**Before you begin, you'll need**: Winston 3 (If you're looking for Winston 2, checkout v1.0.8). If you need to run with Typescript, follow the procedure to set up winston with Typescript. +**Before you begin, you'll need**: Winston 3 (for Winston 2, see version v1.0.8). If you're using Typescript, follow the procedure to set up winston with Typescript. 
- - -##### Add the dependency to your project -Navigate to your project's folder in the command line, and run this command to install the dependency. +Install the dependency: ```shell npm install winston-logzio --save ``` -##### Configure winston-logzio - -Here's a sample configuration that you can use as a starting point. -Use the samples in the code block below or replace the sample with a configuration that matches your needs. +Use the sample configuration and edit it according to your needs: ```javascript const winston = require('winston'); @@ -174,13 +152,14 @@ const logger = winston.createLogger({ logger.log('warn', 'Just a test message'); ``` -If winston-logzio is used as part of a serverless service (AWS Lambda, Azure Functions, Google Cloud Functions, etc.), add `await logger.info(“API Called”)` and `logger.close()` at the end of the run, every time you are using the logger. +If you are using winston-logzio in a serverless service (e.g., AWS Lambda, Azure Functions, Google Cloud Functions), add `await logger.info("API Called")` and `logger.close()` at the end of each run to ensure proper logging. + {@include: ../../_include/general-shipping/replace-placeholders.html} -##### Parameters +### Parameters For a complete list of your options, see the configuration parameters below.👇 @@ -188,41 +167,44 @@ For a complete list of your options, see the configuration parameters below.👇 | Parameter | Description | Required/Default | |---|---|---| -| LogzioWinstonTransport | This variable determines what will be passed to the logzio nodejs logger itself. If you want to configure the nodejs logger, add any parameters you want to send to winston when initializing the transport. | -- | -| token | Your Logz.io log shipping token securely directs the data to your [Logz.io account](https://app.logz.io/#/dashboard/settings/manage-tokens/log-shipping). {@include: ../../_include/log-shipping/log-shipping-token.html} | Required | -| protocol | `http` or `https`. The value here affects the default of the `port` parameter. | `http` | +| LogzioWinstonTransport | Determines the settings passed to the logzio-nodejs logger. Configure any parameters you want to send to winston when initializing the transport. | -- | +| token | Your Logz.io log shipping token securely directs data to your [Logz.io account](https://app.logz.io/#/dashboard/settings/manage-tokens/log-shipping). {@include: ../../_include/log-shipping/log-shipping-token.html} | Required | +| protocol | `http` or `https`, affecting the default `port` parameter. | `http` | | host | {@include: ../../_include/log-shipping/listener-var.md} {@include: ../../_include/log-shipping/listener-var.html} | `listener.logz.io` | -| port | Destination port. The default port depends on the `protocol` parameter: `8070` (for HTTP) or `8071` (for HTTPS) | `8070` / `8071` | +| port | Destination port based on the `protocol` parameter: `8070` (for HTTP) or `8071` (for HTTPS) | `8070` / `8071` | | type | {@include: ../../_include/log-shipping/type.md} | `nodejs` | | sendIntervalMs | Time to wait between retry attempts, in milliseconds. | `2000` (2 seconds) | | bufferSize | Maximum number of messages the logger will accumulate before sending them all as a bulk. | `100` | | numberOfRetries | Maximum number of retry attempts. | `3` | | debug | To print debug messsages to the console, `true`. Otherwise, `false`. | `false` | -| callback | A callback function to call when the logger encounters an unrecoverable error. 
The function API is `function(err)`, where `err` is the Error object. | -- |
+| callback | Callback function for unrecoverable errors. The function API is `function(err)`, where `err` is the Error object. | -- |
| timeout | Read/write/connection timeout, in milliseconds. | -- |
-| extraFields | JSON format. Adds your custom fields to each log. Format: `extraFields : { field_1: "val_1", field_2: "val_2" , ... }` | -- |
-| setUserAgent | Set to false to send logs without the user-agent field in the request header. If you want to send data from Firefox browser, set that option to false. | `true` |
+| extraFields | Adds custom fields to each log in JSON format: `extraFields : { field_1: "val_1", field_2: "val_2" , ... }` | -- |
+| setUserAgent | Set to `false` to send logs without the user-agent field in the request header (required when sending data from the Firefox browser). | `true` |
+
+### Additional configuration options
+
+* If you are using winston-logzio in a serverless service (e.g., AWS Lambda, Azure Functions, Google Cloud Functions), add this line at the end of the configuration code block.

-##### Additional configuration options

-* If winston-logzio is used as part of a serverless service (AWS Lambda, Azure Functions, Google Cloud Functions, etc.), add this line at the end of the configuration code block.

```javascript
logger.close()
```

+
+* By default, the winston logger sends all logs to the console. Disable this by adding the following line to your code:
-* The winston logger by default sends all logs to the console. You can easily disable this by adding this line to your code:

```javascript
winston.remove(winston.transports.Console);
```

-* To send a log line:
+* Send a log line:

```javascript
winston.log('info', 'winston logger configured with logzio transport');
```

-* To log the last UncaughtException before Node exits:
+* Log the last UncaughtException before Node exits:

```javascript
var logzIOTransport = new (winstonLogzIO)(loggerOptions);
@@ -244,7 +226,7 @@
});
```

-* Another configuration option
+* Alternative configuration, adding the transport directly to the default winston logger:

```javascript
var winston = require('winston');
@@ -262,9 +244,11 @@
winston.add(logzioWinstonTransport, loggerOptions);
```

-###### Custom tags
-You can add custom tags to your logs using the following format: `{ tags : ['tag1']}`, for example:
+### Add custom tags to winston-logzio
+
+Add custom tags using the following format: `{ tags : ['tag1']}`, for example:
+

```javascript
var obj = {
@@ -280,37 +264,30 @@
logger.log(obj);
```

-
+### Configure winston-logzio with Typescript

-### winston-logzio setup with Typescript

-This winston plugin is a wrapper for the logzio-nodejs appender that runs with Typescript, which basically means it just wraps our nodejs logzio shipper.
-With winston-logzio, you can take advantage of the winston logger framework with your Node.js app.
+This winston plugin is a TypeScript-compatible wrapper for the logzio-nodejs appender, integrating the Logz.io shipper with your Node.js application. With winston-logzio, you can take advantage of the winston logger framework.

-#### Configure winston-logzio

-**Before you begin, you'll need**: Winston 3 (If you're looking for Winston 2, checkout v1.0.8)
+**Before you begin, you'll need**: Winston 3 (for Winston 2, see version v1.0.8).
-

-##### Add the dependency to your project
+Install the dependency:

-Navigate to your project's folder in the command line, and run this command to install the dependency.

```shell
npm install winston-logzio --save
```

-##### Configure winston-logzio with Typescript
-
-If you don't have a `tsconfig.json` file, you'll need to add it first. Start by running:
+To configure winston-logzio with Typescript, you need a `tsconfig.json` file. If you don't have one, create it by running:

```javascript
tsc --init
```

-On your `tsconfig.json` file, under the parameter `compilerOptions` make sure you have the `esModuleInterop` flag set to `true` or add it:
+In your `tsconfig.json` file, under `compilerOptions`, ensure the `esModuleInterop` flag is set to `true`, or add it:

```javascript
"compilerOptions": {
@@ -319,8 +296,9 @@ On your `tsconfig.json` file, under the parameter `compilerOptions` make sure yo
}
```

-Here's a sample configuration that you can use as a starting point.
-Use the samples in the code block below or replace the sample with a configuration that matches your needs.
+
+Use the sample configuration and edit it according to your needs:
+

```javascript
import winston from 'winston';
@@ -338,7 +316,9 @@ const logger = winston.createLogger({
logger.log('warn', 'Just a test message');
```

-If winston-logzio is used as part of a serverless service (AWS Lambda, Azure Functions, Google Cloud Functions, etc.), add this line at the end of the configuration code block, every time you are using the logger.
+If you are using winston-logzio in a serverless service (e.g., AWS Lambda, Azure Functions, Google Cloud Functions), add this line at the end of each run to ensure proper logging.
+
+

```javascript
await logger.info("API Called")
@@ -350,8 +330,7 @@
logger.close()
```

### Troubleshooting

-To fix errors related to `esModuleInterop` flag make sure you run the relevant `tsconfig` file.
-These might help:
+To resolve errors related to the `esModuleInterop` flag, make sure you compile with the appropriate `tsconfig` settings. One of the following commands might help:

```
tsc .ts --esModuleInterop
@@ -364,9 +343,9 @@
tsc --project tsconfig.json
```

-###### Custom tags
+### Add custom tags to winston-logzio with Typescript

-You can add custom tags to your logs using the following format: `{ tags : ['tag1']}`, for example:
+Add custom tags using the following format: `{ tags : ['tag1']}`, for example:

```javascript
var obj = {
@@ -379,15 +358,15 @@ var obj = {
logger.log(obj);
```

+

## Metrics

+These examples use the [OpenTelemetry JS SDK](https://github.com/open-telemetry/opentelemetry-js) and are based on the [OpenTelemetry exporter collector proto](https://github.com/open-telemetry/opentelemetry-js/tree/main/packages/opentelemetry-exporter-collector-proto).

-Deploy this integration to send custom metrics from your Node.js application to Logz.io.
-The provided example uses the [OpenTelemetry JS SDK](https://github.com/open-telemetry/opentelemetry-js) and is based on [OpenTelemetry exporter collector proto](https://github.com/open-telemetry/opentelemetry-js/tree/main/packages/opentelemetry-exporter-collector-proto).

:::note
[Project's GitHub repo](https://github.com/logzio/js-metrics/)
:::

**Before you begin, you'll need**:

-Node 8 or higher
+Node 8 or higher.

:::note
-We advise to use this integration with [the Logz.io Metrics backend](https://app.logz.io/#/dashboard/metrics/).
However, the integration is compatible with all backends that support metrics in `prometheuesrmotewrite` format.
+We recommend using this integration with [the Logz.io Metrics backend](https://app.logz.io/#/dashboard/metrics/), though it is compatible with any backend that supports the `prometheusremotewrite` format.
:::

-
-### Configuring your Node.js application to send custom metrics to Logz.io
-
-#### Install the SDK package
+
+
+
+
+
+
+### Install the SDK package

```shell
npm install logzio-nodejs-metrics-sdk@0.4.0
```

-#### Initialize the exporter and meter provider
-
-Add the following code to your application:
+### Initialize the exporter and meter provider
+

```javascript
const MeterProvider = require('@opentelemetry/sdk-metrics-base');
@@ -441,25 +422,24 @@

{@include: ../../_include/general-shipping/replace-placeholders-prometheus.html}

-#### Add required metrics to the code
+### Add required metrics to the code

-This integration allows you to use the following metrics:
+You can use the following metrics:

| Name | Behavior |
| ---- | ---------- |
-| Counter | Metric value can only go up or be reset to 0, calculated per `counter.Add(context,value,labels)` request. |
+| Counter | Metric value can only increase or reset to 0, calculated per `counter.Add(context,value,labels)` request. |
| UpDownCounter | Metric value can arbitrarily increment or decrement, calculated per `updowncounter.Add(context,value,labels)` request. |
-| Histogram | Metric values captured by the `histogram.Record(context,value,labels)` function, calculated per request. |
+| Histogram | Metric values are captured by the `histogram.Record(context,value,labels)` function and calculated per request. |

-For more information on each of these metrics, see the OpenTelemetry [documentation](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/api.md).
+For details on these metrics, refer to the OpenTelemetry [documentation](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/api.md).

-To add a required metric to your code, copy and paste the required metric code to your application, placing it after the initialization code:
+Insert the following code after initialization to add a metric:

#### Counter

```javascript
-// Create your first counter metric
const requestCounter = meter.createCounter('Counter', {
description: 'Example of a Counter',
});
@@ -474,7 +454,6 @@

#### UpDownCounter

```javascript
-// Create UpDownCounter metric
const upDownCounter = meter.createUpDownCounter('UpDownCounter', {
description: 'Example of an UpDownCounter',
});
@@ -490,7 +469,6 @@

#### Histogram:

```javascript
-// Create ValueRecorder metric
const histogram = meter.createHistogram('test_histogram', {
description: 'Example of a histogram',
});
@@ -505,58 +483,37 @@
// test_histogram_avg{environment: 'prod'} 25.0
```

-#### Run your application
+### View your metrics

Run your application to start sending metrics to Logz.io.
+Allow some time for data ingestion, then check your [Metrics dashboard](https://app.logz.io/#/dashboard/metrics/discover?).

-#### Check Logz.io for your metrics
-
-Give your metrics some time to get from your system to ours, and then open [Metrics dashboard](https://app.logz.io/#/dashboard/metrics/discover?).
-
-{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics.
+Install the pre-built dashboard for enhanced observability.

{@include: ../../_include/metric-shipping/generic-dashboard.html}

-
-
-
-
-
-
-
-

## Traces

-
-
-
-Deploy this integration to enable automatic instrumentation of your Node.js application using OpenTelemetry.
-
-### Manual configuration
+### Auto-instrument Node.js and send Traces to Logz.io

This integration includes:

-* Installing the OpenTelemetry Node.js instrumentation packages on your application host
-* Installing the OpenTelemetry collector with Logz.io exporter
-* Running your Node.js application in conjunction with the OpenTelemetry instrumentation
-
-On deployment, the Node.js instrumentation automatically captures spans from your application and forwards them to the collector, which exports the data to your Logz.io account.
+* Installing OpenTelemetry Node.js instrumentation packages on your host.
+* Installing the OpenTelemetry collector with Logz.io exporter.
+* Running your Node.js application with OpenTelemetry instrumentation.

-
-
-#### Setup auto-instrumentation for your locally hosted Node.js application and send traces to Logz.io
+The Node.js instrumentation captures spans and forwards them to the collector, which exports the data to your Logz.io account.

**Before you begin, you'll need**:

-* A Node.js application without instrumentation
-* An active account with Logz.io
-* Port `4318` available on your host system
-* A name defined for your tracing service. You will need it to identify the traces in Logz.io.
-
+* A Node.js application without instrumentation.
+* An active Logz.io account.
+* Port `4318` available on your host system.
+* A name for your tracing service to identify traces in Logz.io.

:::note
This integration uses OpenTelemetry Collector Contrib, not the OpenTelemetry Collector Core.
@@ -567,23 +524,20 @@

{@include: ../../_include/tracing-shipping/node-steps.md}

-##### Download and configure OpenTelemetry collector
-Create a dedicated directory on the host of your Node.js application and download the [OpenTelemetry collector](https://github.com/open-telemetry/opentelemetry-collector-contrib/releases/tag/v0.82.0) that is relevant to the operating system of your host.
+#### Download and configure the OpenTelemetry collector
+Create a directory on your Node.js host, download the appropriate [OpenTelemetry collector](https://github.com/open-telemetry/opentelemetry-collector-contrib/releases) for your OS, and create a `config.yaml` file with the following parameters:

-After downloading the collector, create a configuration file `config.yaml` with the following parameters:

{@include: ../../_include/tracing-shipping/collector-config.md}
--
{@include: ../../_include/tracing-shipping/replace-tracing-token.html}

-##### Start the collector
+#### Start the collector

-Run the following command from the directory of your application file:

```shell
/otelcontribcol_ --config ./config.yaml
```

* Replace `` with the path to the directory where you downloaded the collector.
* Replace `` with the version name of the collector applicable to your system, e.g. `otelcontribcol_darwin_amd64`.
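+
+For reference, the `tracer.js` created by the included steps above generally follows this shape. This is only an illustrative sketch, assuming the standard OpenTelemetry JS packages and an OTLP HTTP receiver on `localhost:4318`; the included steps remain the authoritative version:
+
+```javascript
+// tracer.js - minimal OpenTelemetry Node.js setup (illustrative sketch)
+const { NodeSDK } = require('@opentelemetry/sdk-node');
+const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-http');
+const { getNodeAutoInstrumentations } = require('@opentelemetry/auto-instrumentations-node');
+const { Resource } = require('@opentelemetry/resources');
+const { SemanticResourceAttributes } = require('@opentelemetry/semantic-conventions');
+
+const sdk = new NodeSDK({
+  // Service name used to identify your traces in Logz.io
+  resource: new Resource({
+    [SemanticResourceAttributes.SERVICE_NAME]: 'YOUR-SERVICE-NAME', // placeholder: replace with your service name
+  }),
+  // Export spans to the local collector's OTLP HTTP endpoint
+  traceExporter: new OTLPTraceExporter({ url: 'http://localhost:4318/v1/traces' }),
+  // Auto-instrument common Node.js libraries (http, express, and so on)
+  instrumentations: [getNodeAutoInstrumentations()],
+});
+
+sdk.start();
+```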
-##### Run the application +#### Run the application -Run the application to generate traces: +Run this command to generate traces: ```shell node --require './tracer.js' .js ``` -##### Check Logz.io for your traces - -Give your traces some time to get from your system to ours, and then open [Tracing](https://app.logz.io/#/dashboard/jaeger). +#### View your traces +Give your traces some time to ingest, and then open your [Tracing account](https://app.logz.io/#/dashboard/jaeger). +### Auto-instrument Node.js with Docker for Logz.io -### Setup auto-instrumentation for your Node.js application using Docker and send traces to Logz.io - -This integration enables you to auto-instrument your Node.js application and run a containerized OpenTelemetry collector to send your traces to Logz.io. If your application also runs in a Docker container, make sure that both the application and collector containers are on the same network. +This integration auto-instruments your Node.js app and runs a containerized OpenTelemetry collector to send traces to Logz.io. Ensure both application and collector containers are on the same network. **Before you begin, you'll need**: -* A Node.js application without instrumentation -* An active account with Logz.io -* Port `4317` available on your host system -* A name defined for your tracing service. You will need it to identify the traces in Logz.io. +* A Node.js application without instrumentation. +* An active Logz.io account. +* Port `4317` available on your host system. +* A name for your tracing service to identify traces in Logz.io. {@include: ../../_include/tracing-shipping/node-steps.md} @@ -642,11 +594,13 @@ node --require './tracer.js' .js #### Check Logz.io for your traces -Give your traces some time to get from your system to ours, and then open [Tracing](https://app.logz.io/#/dashboard/jaeger). +Give your traces some time to ingest, and then open your [Tracing account](https://app.logz.io/#/dashboard/jaeger). + -### Configuratiion using Helm -You can use a Helm chart to ship Traces to Logz.io via the OpenTelemetry collector. The Helm tool is used to manage packages of preconfigured Kubernetes resources that use charts. +### Configuration using Helm + +You can use a Helm chart to ship traces to Logz.io via the OpenTelemetry collector. Helm is a tool for managing packages of preconfigured Kubernetes resources using charts. **logzio-k8s-telemetry** allows you to ship traces from your Kubernetes cluster to Logz.io with the OpenTelemetry collector. @@ -662,11 +616,8 @@ This integration uses OpenTelemetry Collector Contrib, not the OpenTelemetry Col ::: -#### Standard configuration - - -##### 1. Deploy the Helm chart +#### Deploy the Helm chart Add `logzio-helm` repo as follows: @@ -675,7 +626,7 @@ helm repo add logzio-helm https://logzio.github.io/logzio-helm helm repo update ``` -##### 2. Run the Helm deployment code +#### Run the Helm deployment code ``` helm install \ @@ -689,33 +640,36 @@ logzio-k8s-telemetry logzio-helm/logzio-k8s-telemetry {@include: ../../_include/tracing-shipping/replace-tracing-token.html} `<>` - Your Logz.io account region code. [Available regions](https://docs.logz.io/docs/user-guide/admin/hosting-regions/account-region/#available-regions). -##### 3. 
Define the logzio-k8s-telemetry dns name +#### Define the logzio-k8s-telemetry dns name In most cases, the service name will be `logzio-k8s-telemetry.default.svc.cluster.local`, where `default` is the namespace where you deployed the helm chart and `svc.cluster.name` is your cluster domain name. + + -If you are not sure what your cluster domain name is, you can run the following command to look it up: +To find your cluster domain name, run the following command: ```shell kubectl run -it --image=k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.3 --restart=Never shell -- \ sh -c 'nslookup kubernetes.default | grep Name | sed "s/Name:\skubernetes.default//"' ``` -It will deploy a small pod that extracts your cluster domain name from your Kubernetes environment. You can remove this pod after it has returned the cluster domain name. +This command deploys a temporary pod to extract your cluster domain name. You can remove the pod after retrieving the domain name. {@include: ../../_include/tracing-shipping/node-steps.md} -##### 4. Check Logz.io for your traces +#### Check Logz.io for your traces + +Give your traces some time to ingest, and then open your [Tracing account](https://app.logz.io/). + -Give your traces some time to get from your system to ours, then open [Logz.io](https://app.logz.io/). -#### Customizing Helm chart parameters +### Customizing Helm chart parameters -##### Configure customization options -You can use the following options to update the Helm chart parameters: +To customize the Helm chart parameters, you have the following options: * Specify parameters using the `--set key=value[,key=value]` argument to `helm install`. @@ -723,16 +677,16 @@ You can use the following options to update the Helm chart parameters: * Override default values with your own `my_values.yaml` and apply it in the `helm install` command. -If required, you can add the following optional parameters as environment variables: +You can add the following optional parameters as environment variables if needed: | Parameter | Description | |---|---| | secrets.SamplingLatency | Threshold for the span latency - all traces slower than the threshold value will be filtered in. Default 500. | | secrets.SamplingProbability | Sampling percentage for the probabilistic policy. Default 10. | -##### Example +**Code example:** -You can run the logzio-k8s-telemetry chart with your custom configuration file that takes precedence over the `values.yaml` of the chart. +You can run the logzio-k8s-telemetry chart with your custom configuration file, which will override the default `values.yaml` settings. For example: @@ -808,11 +762,11 @@ Replace `` with the path to your custom `values.yaml` file. -#### Uninstalling the Chart +### Uninstalling the Chart -The uninstall command is used to remove all the Kubernetes components associated with the chart and to delete the release. +To remove all Kubernetes components associated with the chart and delete the release, use the uninstall command. -To uninstall the `logzio-k8s-telemetry` deployment, use the following command: +To uninstall the `logzio-k8s-telemetry` deployment, run: ```shell helm uninstall logzio-k8s-telemetry diff --git a/docs/shipping/Code/python.md b/docs/shipping/Code/python.md index b6e1fb8a..223ab51c 100644 --- a/docs/shipping/Code/python.md +++ b/docs/shipping/Code/python.md @@ -21,44 +21,48 @@ drop_filter: [] [Project's GitHub repo](https://github.com/logzio/logzio-python-handler/) ::: -Logz.io Python Handler sends logs in bulk over HTTPS to Logz.io. 
-Logs are grouped into bulks based on their size.
+The Logz.io Python Handler sends logs in bulk over HTTPS to Logz.io, grouping them based on size. If the main thread quits, the handler attempts to send any remaining logs before exiting. If unsuccessful, the logs are saved to the local file system for later retrieval.

-If the main thread quits,the handler tries to consume the remaining logs and then exits.
-If the handler can't send the remaining logs, they're written to the local file system for later retrieval.

-## Set up Logz.io Python Handler
+## Setup Logz.io Python Handler

*Supported versions*: Python 3.5 or newer.

-### Add the dependency to your project
+### Install dependency

-Navigate to your project's folder in the command line, and run this command to install the dependency.
+Navigate to your project's folder and run:

```shell
pip install logzio-python-handler
```

-If you'd like to use Trace context, you need to install the OpenTelemetry logging instrumentation dependency by running the following command:
+For Trace context, install the OpenTelemetry logging instrumentation dependency by running:

```shell
pip install logzio-python-handler[opentelemetry-logging]
```

-### Configure Logz.io Python Handler for a standard Python project
+### Configure Python Handler for a standard project

-Use the samples in the code block below as a starting point, and replace the sample with a configuration that matches your needs.
+Replace the placeholders with your details. These parameters are positional and must be configured **in this exact order**; for example, you cannot set `Debug` to `true` without configuring all of the preceding parameters as well.

-Replace:
-* `<< LOG-SHIPPING-TOKEN >>` - Your Logz.io account log shipping token.
-* `<< LISTENER-HOST >>` - Logz.io listener host, as described [here](https://docs.logz.io/docs/user-guide/admin/hosting-regions/account-region/#regions-and-urls).
-* `<< LOG-TYPE >>` - Log type, for searching in logz.io (defaults to "python")

-For a complete list of options, see the configuration parameters below the code block.👇

-##### Config File
+
+|Parameter|Description| Required/Default |
+|---|---|---|
+| `<< LOG-SHIPPING-TOKEN >>` | Your Logz.io account log shipping token. | Required |
+| `<< LOG-TYPE >>` | Log type, for searching in logz.io. | `python` |
+| `<>` | Time to sleep between draining attempts, in seconds. | `3` |
+| `<< LISTENER-HOST >>` | Logz.io listener host, as described [here](https://docs.logz.io/docs/user-guide/admin/hosting-regions/account-region/#regions-and-urls). | `https://listener.logz.io:8071` |
+| `<>` | Debug flag. If set to True, will print debug messages to stdout. | `false` |
+| `<>` | If set to False, disables the local backup of logs in case of failure. | `true` |
+| `<>` | Network timeout, in seconds, int or float, for sending the logs to logz.io. | `10` |
+| `<>` | Number of retries (retry_no). | `4` |
+| `<>` | Retry timeout (retry_timeout) in seconds. | `2` |
+
+

```python
[handlers]
@@ -68,7 +72,6 @@
keys=LogzioHandler

class=logzio.handler.LogzioHandler
formatter=logzioFormat

-# Parameters must be set in order. Replace these parameters with your configuration.
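+# Note: the args below are positional and must keep this exact order (see the parameters table above).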
args=('<>', '<>', <>, 'https://<>:8071', <>,<>,<>,<>)

[formatters]
@@ -84,21 +87,9 @@
level=INFO

[formatter_logzioFormat]
format={"additional_field": "value"}
```

-*args=() arguments, by order*
- - Your logz.io token
- - Log type, for searching in logz.io (defaults to "python")
- - Time to sleep between draining attempts (defaults to "3")
- - Logz.io Listener address (defaults to `https://listener.logz.io:8071`)
- - Debug flag. Set to True, will print debug messages to stdout. (defaults to "False")
- - Backup logs flag. Set to False, will disable the local backup of logs in case of failure. (defaults to "True")
- - Network timeout, in seconds, int or float, for sending the logs to logz.io. (defaults to 10)
- - Retries number (retry_no, defaults to 4).
- - Retry timeout (retry_timeout) in seconds (defaults to 2).
-
- Please note, that you have to configure those parameters by this exact order.
- i.e. you cannot set Debug to true, without configuring all of the previous parameters as well.
-
-##### Dict Config
+
+
+### Dictionary configuration

```python
LOGGING = {
@@ -132,7 +123,7 @@
}
}
```
-##### Django configuration
+### Django configuration

```python
LOGGING = {
@@ -181,9 +172,13 @@

### Serverless platforms

-If you're using a serverless function, you'll need to:
-1. Import and add the LogzioFlusher annotation before your sender function. To do this, in the Code Example below, uncomment the `import` statement and the `@LogzioFlusher(logger)` annotation line.
-2. Make sure that the Logz.io handler is added to the root logger in your Configuration:
+When using a serverless function, import and add the `LogzioFlusher` annotation before your sender function. In the code example below, uncomment the `import` statement and the `@LogzioFlusher(logger)` annotation line. Next, ensure the Logz.io handler is added to the root logger.
+
+Be sure to replace `superAwesomeLogzioLogger` with the name of your logger.
+
+
+

```python
'loggers': {
'superAwesomeLogzioLogger': {
@@ -193,24 +188,19 @@
}
}
```
-**Note:** replace `superAwesomeLogzioLoggers` with the name you used for your logger in the code (see Code Example below).

-### Code Example
+For example:

```python
import logging
import logging.config
-# If you're using a serverless function, uncomment.
# from logzio.flusher import LogzioFlusher
-
-# If you'd like to leverage the dynamic extra fields feature, uncomment.
# from logzio.handler import ExtraFieldsLogFilter

# Say I have saved my configuration as a dictionary in a variable named 'LOGGING' - see 'Dict Config' sample section
logging.config.dictConfig(LOGGING)
logger = logging.getLogger('superAwesomeLogzioLogger')

-# If you're using a serverless function, uncomment.
# @LogzioFlusher(logger)
def my_func():
logger.info('Test log')
@@ -222,33 +212,30 @@
logger.exception("Supporting exceptions too!")
```

-### Dynamic Extra Fields
-If you prefer, you can add extra fields to your logs dynamically, and not pre-defining them in the configuration.
-This way, you can allow different logs to have different extra fields.
-Example in the code below.
+### Dynamic extra fields
+
+You can dynamically add extra fields to your logs without predefining them in the configuration. This allows each log to have unique extra fields.
+

``` python
-# Example additional code that demonstrates how to dynamically add/remove fields within the code, make sure class is imported.
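+# Requires the filter class import shown in the example above:
+# from logzio.handler import ExtraFieldsLogFilter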
 logger.info("Test log") # Outputs: {"message":"Test log"}

 extra_fields = {"foo":"bar","counter":1}
 logger.addFilter(ExtraFieldsLogFilter(extra_fields))

 logger.warning("Warning test log") # Outputs: {"message":"Warning test log","foo":"bar","counter":1}

 error_fields = {"err_msg":"Failed to run due to exception.","status_code":500}
 logger.addFilter(ExtraFieldsLogFilter(error_fields))

 logger.error("Error test log") # Outputs: {"message":"Error test log","foo":"bar","counter":1,"err_msg":"Failed to run due to exception.","status_code":500}

 # If you'd like to remove filters from future logs using the logger.removeFilter option:
 logger.removeFilter(ExtraFieldsLogFilter(error_fields))

 logger.debug("Debug test log") # Outputs: {"message":"Debug test log","foo":"bar","counter":1}
```

-### Extra Fields
-In case you need to dynamic metadata to a specific log and not dynamically to the logger, other than the constant metadata from the formatter, you can use the "extra" parameter.
-All key values in the dictionary passed in "extra" will be presented in Logz.io as new fields in the log you are sending.
-Please note, that you cannot override default fields by the python logger (i.e. lineno, thread, etc..)
+To add dynamic metadata to a specific log rather than to the logger, use the "extra" parameter. All key-value pairs in the dictionary passed to "extra" will appear as new fields in Logz.io. Note that you cannot override default fields set by the Python logger (e.g., lineno, thread).
+

For example:

```python
@@ -257,15 +244,18 @@
logger.info('Warning', extra={'extra_key':'extra_value'})

### Trace context

-If you're sending traces with OpenTelemetry instrumentation (auto or manual), you can correlate your logs with the trace context.
-That way, your logs will have traces data in it, such as service name, span id and trace id.
+You can correlate your logs with the trace context, adding trace data such as the service name, span ID, and trace ID to your logs. To enable this, install the OpenTelemetry logging instrumentation dependency:
+
+

-Make sure to install the OpenTelemetry logging instrumentation dependecy by running the following command:

```shell
pip install logzio-python-handler[opentelemetry-logging]
```

-To enable this feature, set the `add_context` param in your handler configuration to `True`, like in this example:
+
+Enable this feature by setting the `add_context` parameter to `True` in your handler configuration:
+
+

```python
LOGGING = {
@@ -303,7 +293,7 @@

### Truncating logs

-If you want to create a Python logging filter to truncate log messages to a set number of characters before they are processed, add the following code:
+To create a Python logging filter that truncates log messages to a specific number of characters before processing, use the following code:

```python
class TruncationLoggerFilter(logging.Filter):
@@ -319,12 +309,12 @@
logger = logging.getLogger("logzio")
logger.addFilter(TruncationLoggerFilter())
```

-Th edefault limit is 32700, but you can adjust this value as required.
+The default limit is 32,700, but you can adjust this value as required.

## Metrics

-You can send custom metrics to Logz.io from your Python application. This example uses the [OpenTelemetry Python SDK](https://github.com/open-telemetry/opentelemetry-python-contrib) and the [OpenTelemetry remote write exporter](https://pypi.org/project/opentelemetry-exporter-prometheus-remote-write/), which are both in alpha/preview.
+Send custom metrics to Logz.io from your Python application. This example uses the [OpenTelemetry Python SDK](https://github.com/open-telemetry/opentelemetry-python-contrib) and the [OpenTelemetry remote write exporter](https://pypi.org/project/opentelemetry-exporter-prometheus-remote-write/).

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
@@ -332,10 +322,10 @@



-### Setup in code
+### Code configuration setup


-#### Install the snappy c-library
+**1. Install the snappy c-library**

DEB: `sudo apt-get install libsnappy-dev`

@@ -345,19 +335,20 @@

Windows: `pip install python_snappy-0.5-cp36-cp36m-win_amd64.whl`

-#### Install the exporter and opentelemtry sdk
+**2. Install the exporter and OpenTelemetry SDK**
+

```
pip install opentelemetry-exporter-prometheus-remote-write
```

-#### Add instruments to your application
+**3. Add instruments to your application**

-Replace the placeholders in the `exporter` section code (indicated by the double angle brackets `<< >>`) to match your specifics.
+Replace the placeholders in the `exporter` section to match your specifics.

|Parameter|Description|
|---|---|
-|LISTENER-HOST| The Logz.io Listener URL for your region, configured to use port **8052** for http traffic, or port **8053** for https traffic. {@include: ../../_include//log-shipping/listener-var.html} and add http/https protocol (https://listener.logz.io:8053) |
+|LISTENER-HOST| The Logz.io Listener URL for your region, configured to use port **8052** for http traffic, or port **8053** for https traffic. {@include: ../../_include//log-shipping/listener-var.html} and add http/https protocol (https://listener.logz.io:8053). |
|PROMETHEUS-METRICS-SHIPPING-TOKEN| Your Logz.io Prometheus Metrics account token. {@include: ../../_include//p8s-shipping/replace-prometheus-token.html} |


@@ -440,7 +431,7 @@ sleep(6)

#### Types of metric instruments

-Refer to the OpenTelemetry [documentation](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/api.md) for more details.
+See OpenTelemetry [documentation](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/api.md) for more details.

| Name | Behavior | Default aggregation |
@@ -473,7 +464,7 @@ counter.add(25, labels)

##### [UpDownCounter](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/api.md#updowncounter)

```python
-# create a updowncounter instrument
+# create an updowncounter instrument
requests_active = meter.create_updowncounter(
name="requests_active",
description="number of active requests",
@@ -539,7 +530,7 @@ def get_ram_usage_callback(observer):
"dimension": "value"
}
observer.observe(ram_percent, labels)
-# create a updownsumobserver instrument
+# create an updownsumobserver instrument
meter.register_updownsumobserver(
callback=get_ram_usage_callback,
name="ram_usage",
@@ -567,33 +558,44 @@ meter.register_valueobserver(
)
```

-#### Check Logz.io for your metrics
+**5. Check Logz.io for your metrics**
+
+Allow some time for your data to transfer. Then log in to your Logz.io Metrics account and open the [Metrics](https://app.logz.io/#/dashboard/metrics/) dashboard.
+
+
+
+
+
+

-Give your data some time to get from your system to ours, then log in to your Logz.io Metrics account, and open [the Logz.io Metrics tab](https://app.logz.io/#/dashboard/metrics/).
### Setup Metrics using Lambda

-This integration uses OpenTelemetry collector extention and Python metrics SDK to create and send metrics from your Lambda functions to your Logz.io account.
+This integration uses the OpenTelemetry collector extension and Python metrics SDK to create and send metrics from your Lambda functions to your Logz.io account.
+
+

:::note
-This integration is currently only supported in the following AWS regions: **us-east-1**, **us-east-2**,**us-west-1**, **us-west-2**, **ca-central-1**, **ap-northeast-2**, **ap-northeast-1**,**eu-central-1**, **eu-west-2**. Contact Logz.io Customer Support if you need to deploy in a different region.
+This integration is currently supported in the following AWS regions: **us-east-1**, **us-east-2**, **us-west-1**, **us-west-2**, **ca-central-1**, **ap-northeast-2**, **ap-northeast-1**, **eu-central-1**, **eu-west-2**. Contact Logz.io [Customer Support](mailto:help@logz.io) for other regions.
:::

#### Create Lambda function

Create a new Lambda function in your AWS account (with Python version >= 3.8).

-After creating your new Lambda function, you can use our example [deployment package](https://logzio-aws-integrations-us-east-1.s3.amazonaws.com/aws-otel-lambda-python/logzio-python-lambda-custom-metrics-deployment.zip) that includes the code sample. Upload the .zip file to the **code source** section inside your newly created Lambda function.
+You can use our example [deployment package](https://logzio-aws-integrations-us-east-1.s3.amazonaws.com/aws-otel-lambda-python/logzio-python-lambda-custom-metrics-deployment.zip) by uploading the .zip file to the **code source** section inside your newly created Lambda function.

![Upload deployment package](https://dytvr9ot2sszz.cloudfront.net/logz-docs/log-shipping/uploadzip.gif)

#### Add OpenTelemetry collector config variable

-Add `OPENTELEMETRY_COLLECTOR_CONFIG_FILE` environment variable with a value of `/var/task/collector.yaml`. This will tell the collector extention the path to the configuration file.
+Add the `OPENTELEMETRY_COLLECTOR_CONFIG_FILE` environment variable with a value of `/var/task/collector.yaml`. This indicates the path to the configuration file.
+
+

#### Add OpenTelemetry config file

@@ -624,7 +626,7 @@
service:
exporters: [logging,prometheusremotewrite]
```

-Replace the placeholders (indicated by the double angle brackets `<< >>`) to match your specifics as per the table below.
+Replace the placeholders to match your data:

|Environment variable|Description|
|---|---|
@@ -685,7 +687,7 @@
```

-#### Add Logz.io Otel Python layer
+#### Add Logz.io OTEL Python layer

Add the `logzio-otel-python-layer` lambda layer to your function:

@@ -697,18 +699,20 @@

Replace `<>` with your AWS region.

#### Run the Lambda function

-Start running the Lambda function to send metrics to your Logz.io account.
+Run the Lambda function to send metrics to your Logz.io account.

-#### Check Logz.io for your metrics
+#### Viewing metrics in Logz.io
+
+Give your metrics time to process, after which they'll be available in your [Metrics](https://app.logz.io/#/dashboard/metrics/) dashboard.

-Give your data some time to get from your system to ours, then log in to your Logz.io Metrics account, and open [the Logz.io Metrics tab](https://app.logz.io/#/dashboard/metrics/).


#### Types of metric instruments

-For more information, see the OpenTelemetry [documentation](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/api.md).
+Refer to the OpenTelemetry [documentation](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/api.md) for more details.
+

| Name | Behavior |
| ---- | ---------- |
@@ -716,7 +720,6 @@
| UpDownCounter | Metric value can arbitrarily increment or decrement, calculated per `updowncounter.Add(context,value,labels)` request. |
| Histogram | Metric values captured by the `histogram.Record(context,value,labels)` function, calculated per request. |

-#### More examples

##### [Counter](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/api.md#counter)

@@ -751,17 +754,17 @@



-### Setup Metrics using `prometheus_client` Library
+### Setup Metrics using prometheus_client Library

-#### Install the prometheus_client library
+**1. Install the prometheus_client library**

```python
pip3 install Prometheus-client
```

-#### Add the prometheus_client library to your application
+**2. Add the prometheus_client library to your application**

-In your Python script, use the prometheus_client library and expose the built-in metrics to the Prometheus HTTP server. See the code below for an example:
+In your Python script, use the prometheus_client library and expose the built-in metrics to the Prometheus HTTP server:

```python
from prometheus_client import start_http_server
@@ -780,11 +783,10 @@
if __name__== '__main__':
main()
```

-#### Add system metrics (if required)
+**3. Add system metrics (if required)**

-If you are using Linux, the system metrics such as CPU and memory usage are exposed by default.

-If you are using an OS other than Linux:
+On Linux, system metrics such as CPU and memory usage are exposed by default. If you are using another OS:

1. Instal the `psutil` library:

@@ -830,23 +832,22 @@
```

-#### Check the metrics locally
+**4. Check metrics locally**

-Go to the HTTP server at `localhost:8000` to see the metrics.
+Go to `localhost:8000` to see the metrics.

-#### Download OpenTelemetry collector
+**5. Download OpenTelemetry collector**

:::note
If you already have OpenTelemetry, proceed to the next step.
:::

-Create a dedicated directory on your host and download the OpenTelemetry collector that is relevant to the operating system of your host.
+Create a dedicated directory on your host and download the OpenTelemetry collector for your OS.

-After downloading the collector, create a configuration file `config.yaml`.
+Create a configuration file `config.yaml` with the following:

-#### Configure the Receivers
+#### Receivers configuration

-Open the configuration file and ensure it contains the receivers required to collect your metrics:

```yaml
receivers:
@@ -859,9 +860,7 @@
- targets: ['localhost:8000']
```

-#### Configure the Exporters
-
-In the same configuration file, add the following to the exporters section:
+#### Exporters configuration

```yaml
exporters:
@@ -878,7 +877,7 @@

{@include: ../../_include/p8s-shipping/replace-prometheus-token.html}

-#### Configure teh Processors and Service
+#### Processors and Service configuration

```yaml
processors:
@@ -896,7 +895,7 @@
service:
exporters: [prometheusremotewrite, logging]
```

-#### Start the Collector
+**6. 
Start the Collector**

Run the following command:

@@ -904,11 +903,15 @@
/otelcol-contrib --config ./config.yaml
```

-* Replace `` with the path to the directory where you downloaded the collector. If the name of your configuration file is different to config, adjust the name in the command accordingly.
+* Replace `` with the directory path where you downloaded the collector. Adjust the configuration file name if it is different.
+
+
+
+#### Viewing metrics in Logz.io
+
+Give your metrics time to process, after which they'll be available in your [Metrics](https://app.logz.io/#/dashboard/metrics/) dashboard.

-#### Check Logz.io for your metrics

-Give your data some time to get from your system to ours, then log in to your Logz.io Metrics account, and open [the Logz.io Metrics tab](https://app.logz.io/#/dashboard/metrics/).



@@ -918,7 +921,7 @@

Deploy this integration to enable automatic instrumentation of your Python application using OpenTelemetry.

-### Architecture overview
+### Architecture overview

This integration includes:

@@ -930,12 +933,12 @@
On deployment, the Python instrumentation automatically captures spans from your application and forwards them to the collector, which exports the data to your Logz.io account.

### Local host Python application auto instrumentation

-**Before you begin, you'll need**:
+**Requirements**:

* A Python application without instrumentation
-* An active account with Logz.io
+* An active Logz.io account
* Port `4317` available on your host system
-* A name defined for your tracing service. You will need it to identify the traces in Logz.io.
+* A name defined for your tracing service

:::note
@@ -943,10 +946,8 @@
This integration uses OpenTelemetry Collector Contrib, not the OpenTelemetry Collector Core.
:::

+### Install OpenTelemetry components for Python
-
-#### Install general Python OpenTelemetry instrumentation components
-
-Run the following commands:

```shell
pip3 install opentelemetry-distro
@@ -955,23 +956,18 @@
opentelemetry-bootstrap --action=install
pip3 install opentelemetry-exporter-otlp
```

-#### Set environment variables
+### Set environment variables

-After installation, configure the exporter by running the following command:
+After installation, configure the exporter with this command:

```shell
export OTEL_TRACES_EXPORTER=otlp
export OTEL_RESOURCE_ATTRIBUTES="service.name=<>"
```

-Replace `<>` with the name of your tracing service defined earlier.
-
-#### Download and configure OpenTelemetry collector
-
-Create a dedicated directory on the host of your Python application and download the [OpenTelemetry collector](https://github.com/open-telemetry/opentelemetry-collector-contrib/releases/tag/v0.82.0) that is relevant to the operating system of your host.
-

-After downloading the collector, create a configuration file `config.yaml` with the parameters below.
+### Download and configure OpenTelemetry collector

+Create a directory on your Python application host and download the relevant [OpenTelemetry collector](https://github.com/open-telemetry/opentelemetry-collector-contrib/releases/tag/v0.82.0).
Create a `config.yaml` with the following parameters:

* {@include: ../../_include/tracing-shipping/replace-tracing-token.md}

@@ -980,47 +976,47 @@ After downloading the collector, create a configuration file `config.yaml` with

{@include: ../../_include/tracing-shipping/tail-sampling.md}

-#### Start the collector
+### Start the collector

-Run the following command:
+Run:

```shell
/otelcontribcol_ --config ./config.yaml
```

-* Replace `` with the path to the directory where you downloaded the collector.
-* Replace `` with the version name of the collector applicable to your system, e.g. `otelcontribcol_darwin_amd64`.
+* Replace `` with the collector's directory.
+* Replace `` with the version name, e.g. `otelcontribcol_darwin_amd64`.

-#### Run the OpenTelemetry instrumentation in conjunction with your Python application
+### Run OpenTelemetry with your Python application

-Run the following command from the directory of your Python application script:
+Run this command from the directory of your Python application script:

```shell
opentelemetry-instrument python3 .py
```

-Replace `` with the name of your Python application script.
+Replace `` with your Python application script name.
+
+### Viewing Traces in Logz.io

-#### Check Logz.io for your traces
+Give your traces time to process, after which they'll be available in your [Tracing](https://app.logz.io/#/dashboard/jaeger) dashboard.

-Give your traces some time to get from your system to ours, and then open [Tracing](https://app.logz.io/#/dashboard/jaeger).



### Docker Python application auto instrumentation

-This integration enables you to auto-instrument your Python application and run a containerized OpenTelemetry collector to send your traces to Logz.io. If your application also runs in a Docker container, make sure that both the application and collector containers are on the same network.
+Auto-instrument your Python application and run a containerized OpenTelemetry collector to send traces to Logz.io. If your application also runs in a container, ensure that the application and collector containers share the same network.

-**Before you begin, you'll need**:
+**Requirements**:

* A Python application without instrumentation
-* An active account with Logz.io
+* An active Logz.io account
* Port `4317` available on your host system
-* A name defined for your tracing service. You will need it to identify the traces in Logz.io.
+* A name defined for your tracing service

-#### Install general Python OpenTelemetry instrumentation components
+#### Install OpenTelemetry instrumentation components

-Run the following commands:

```shell
pip3 install opentelemetry-distro
@@ -1031,17 +1027,18 @@ pip3 install opentelemetry-exporter-otlp

#### Set environment variables

-After installation, configure the exporter by running the following command:
+Configure the exporter by running:

```shell
export OTEL_TRACES_EXPORTER=otlp
export OTEL_RESOURCE_ATTRIBUTES="service.name=<>"
```

-Replace `<>` with the name of your tracing service defined earlier.
+Replace `<>` with your tracing service name.
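+
+If your application also runs in a container, one way to satisfy the shared-network requirement noted above is a user-defined bridge network — a sketch, with an illustrative network name:
+
+```shell
+docker network create otel-network
+# start the application container and, later, the collector container with:
+#   docker run --network otel-network ...
+```
+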
-#### Pull the Docker image for the OpenTelemetry collector
+#### Pull the OpenTelemetry collector Docker image
+

```shell
docker pull otel/opentelemetry-collector-contrib:0.78.0
@@ -1049,7 +1046,7 @@ docker pull otel/opentelemetry-collector-contrib:0.78.0

#### Create a configuration file

-Create a file `config.yaml` with the following content:
+Create a `config.yaml` file with the following content:

```yaml
receivers:
@@ -1112,8 +1109,10 @@ service:

{@include: ../../_include/tracing-shipping/tail-sampling.md}

+If you already have an OpenTelemetry installation, add these parameters to your existing collector's configuration file:
+
+

-If you already have an OpenTelemetry installation, add the following parameters to the configuration file of your existing OpenTelemetry collector:

* Under the `exporters` list
@@ -1137,7 +1136,7 @@ If you already have an OpenTelemetry installation, add the following parameters

{@include: ../../_include/tracing-shipping/replace-tracing-token.html}

-An example configuration file looks as follows:
+An example configuration file:

```yaml
receivers:
@@ -1198,9 +1197,9 @@ service:

#### Run the container

-Mount the `config.yaml` as volume to the `docker run` command and run it as follows.
+Mount `config.yaml` as a volume to the `docker run` command and run it as follows.

-###### Linux
+##### Linux

```
docker run \
@@ -1212,7 +1211,7 @@ otel/opentelemetry-collector-contrib:0.78.0

Replace `` to the path to the `config.yaml` file on your system.

-###### Windows
+##### Windows

```
docker run \
@@ -1237,38 +1236,41 @@ otel/opentelemetry-collector-contrib:0.78.0

{@include: ../../_include/tracing-shipping/collector-run-note.md}

-Run the following command from the directory of your Python application script:
+Run this command from your Python application script directory:

```shell
opentelemetry-instrument python3 `<>`.py
```

-Replace `<>` with the name of your Python application script.
+Replace `<>` with your Python application script name.

-#### Check Logz.io for your traces
+#### Viewing Traces in Logz.io

-Give your traces some time to get from your system to ours, and then open [Tracing](https://app.logz.io/#/dashboard/jaeger).
+Give your traces time to process, after which they'll be available in your [Tracing](https://app.logz.io/#/dashboard/jaeger) dashboard.

-### Kuberenetes Python application auto insturmentation
-#### Overview
-You can use a Helm chart to ship Traces to Logz.io via the OpenTelemetry collector. The Helm tool is used to manage packages of preconfigured Kubernetes resources that use charts.
+### Kubernetes Python application auto instrumentation
+
+
+Use a Helm chart to ship traces to Logz.io via the OpenTelemetry collector. The Helm tool manages packages of preconfigured Kubernetes resources.

**logzio-k8s-telemetry** allows you to ship traces from your Kubernetes cluster to Logz.io with the OpenTelemetry collector.

:::note
-This chart is a fork of the [opentelemtry-collector](https://github.com/open-telemetry/opentelemetry-helm-charts/tree/main/charts/opentelemetry-collector) Helm chart. The main repository for Logz.io helm charts are [logzio-helm](https://github.com/logzio/logzio-helm).
+This chart is a fork of the [opentelemetry-collector](https://github.com/open-telemetry/opentelemetry-helm-charts/tree/main/charts/opentelemetry-collector) Helm chart. The main repository for Logz.io helm charts is [logzio-helm](https://github.com/logzio/logzio-helm).
:::

-
+
+
:::caution
This integration uses OpenTelemetry Collector Contrib, not the OpenTelemetry Collector Core.
:::
+

@@ -1276,7 +1278,8 @@ This integration uses OpenTelemetry Collector Contrib, not the OpenTelemetry Col

-##### Deploy the Helm chart
+**1. Deploy the Helm chart**
+

Add `logzio-helm` repo as follows:

@@ -1285,7 +1288,7 @@ helm repo add logzio-helm https://logzio.github.io/logzio-helm
helm repo update
```

-##### Run the Helm deployment code
+**2. Run the Helm deployment code**

```
helm install \
@@ -1298,23 +1301,19 @@ logzio-monitoring logzio-helm/logzio-monitoring -n monitoring

`<>` - Your Logz.io account region code. [Available regions](https://docs.logz.io/docs/user-guide/admin/hosting-regions/account-region/#available-regions).

-##### Define the logzio-k8s-telemetry service DNS
+**3. Define the logzio-k8s-telemetry service DNS**

-In most cases, the service name will be `logzio-k8s-telemetry.default.svc.cluster.local`, where `default` is the namespace where you deployed the helm chart and `svc.cluster.name` is your cluster domain name.
-
-If you are not sure what your cluster domain name is, you can run the following command to look it up:
+Typically, the service name will be `logzio-k8s-telemetry.default.svc.cluster.local`, where `default` is the namespace where you deployed the Helm chart and `cluster.local` is your cluster domain name. If you're unsure what your cluster domain name is, run the following command to find it:

```shell
kubectl run -it --image=k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.3 --restart=Never shell -- \
sh -c 'nslookup kubernetes.default | grep Name | sed "s/Name:\skubernetes.default//"'
```

-It will deploy a small pod that extracts your cluster domain name from your Kubernetes environment. You can remove this pod after it has returned the cluster domain name.
-
+This command deploys a small pod that extracts your cluster domain name; you can remove the pod once it has returned the name.

-#### Install general Python OpenTelemetry instrumentation components

-Run the following commands:
+**4. Install general Python OpenTelemetry instrumentation components**

```shell
pip3 install opentelemetry-distro
@@ -1323,26 +1322,26 @@ opentelemetry-bootstrap --action=install
pip3 install opentelemetry-exporter-otlp
```

-#### Set environment variables
+**5. Set environment variables**

-After installation, configure the exporter by running the following command:
+Configure the exporter by running the following command:

```shell
export OTEL_TRACES_EXPORTER=otlp
export OTEL_RESOURCE_ATTRIBUTES="service.name=<>"
```

-Replace `<>` with the name of your tracing service defined earlier.
+Replace `<>` with your tracing service name.

-#### Check Logz.io for your traces
+**6. View your traces in Logz.io**

-Give your traces some time to get from your system to ours, then open [Logz.io](https://app.logz.io/).
+Give your traces time to process, after which they'll be available in your [Tracing](https://app.logz.io/#/dashboard/jaeger) dashboard.

- Customizing Helm chart parameters

-#### Configure customization options
-You can use the following options to update the Helm chart parameters:
+#### Customizing Helm chart parameters
+
+You can update Helm chart parameters in three ways:

* Specify parameters using the `--set key=value[,key=value]` argument to `helm install`.

@@ -1350,24 +1349,37 @@ You can use the following options to update the Helm chart parameters:

* Override default values with your own `my_values.yaml` and apply it in the `helm install` command.
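+
+For example, overriding the optional sampling parameters described below with `--set` could look like this — values are illustrative, and the `logzio-k8s-telemetry.` prefix assumes you deploy through the `logzio-monitoring` chart as above:
+
+```shell
+helm upgrade --reuse-values \
+--set logzio-k8s-telemetry.secrets.SamplingLatency=300 \
+--set logzio-k8s-telemetry.secrets.SamplingProbability=20 \
+logzio-monitoring logzio-helm/logzio-monitoring -n monitoring
+```
+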
-If required, you can add the following optional parameters as environment variables:
+Optional parameters can be added as environment variables:

-| Parameter | Description |
-|---|---|
-| secrets.SamplingLatency | Threshold for the span latency - all traces slower than the threshold value will be filtered in. Default 500. |
-| secrets.SamplingProbability | Sampling percentage for the probabilistic policy. Default 10. |
+| Parameter | Description | Default |
+|---|---|---|
+| secrets.SamplingLatency | Threshold for the span latency - all traces slower than the threshold value will be filtered in. | `500` |
+| secrets.SamplingProbability | Sampling percentage for the probabilistic policy. | `10` |

##### Example

-You can run the logzio-k8s-telemetry chart with your custom configuration file that takes precedence over the `values.yaml` of the chart.
+You can run the logzio-k8s-telemetry chart with a custom configuration file that takes precedence over the chart's `values.yaml`:
+
+
+```
+helm install -f /my_values.yaml \
+--set logzio-k8s-telemetry.secrets.TracesToken=<> \
+--set logzio-k8s-telemetry.secrets.LogzioRegion=<> \
+--set metricsOrTraces=true \
+logzio-monitoring logzio-helm/logzio-monitoring
+```
+
+Replace `` with your custom `values.yaml` file path.
+
+{@include: ../../_include/tracing-shipping/replace-tracing-token.html}

-For example:

:::note
-The collector will sample **ALL traces** where is some span with error with this example configuration.
+With this example configuration, the collector will sample **all traces** that contain at least one span with an error.
:::

-
+
+

```yaml
baseCollectorConfig:
@@ -1417,26 +1429,15 @@ baseCollectorConfig:
  ]
```

-Command:

-```
-helm install -f /my_values.yaml \
---set logzio-k8s-telemetry.secrets.TracesToken=<> \
---set logzio-k8s-telemetry.secrets.LogzioRegion=<> \
---set metricsOrTraces=true \
-logzio-monitoring logzio-helm/logzio-monitoring
-```

-Replace `` with the path to your custom `values.yaml` file.

-{@include: ../../_include/tracing-shipping/replace-tracing-token.html}



#### Uninstalling the Chart

-The uninstall command is used to remove all the Kubernetes components associated with the chart and to delete the release.

-To uninstall the `logzio-monitoring` deployment, use the following command:
+To uninstall the `logzio-monitoring` deployment, run:

```shell
helm uninstall logzio-monitoring
@@ -1446,11 +1447,8 @@ helm uninstall logzio-monitoring

## Troubleshooting

-#### Logz.io Python handler

-For troubleshooting the Logz.io Python handler, see our [Python logging troubleshooting guide](https://docs.logz.io/docs/user-guide/log-management/troubleshooting/troubleshooting-python/).
+* [Python logging troubleshooting guide](https://docs.logz.io/docs/user-guide/log-management/troubleshooting/troubleshooting-python/).

-#### OpenTelemetry instrumentation

-For troubleshooting the OpenTelemetry instrumentation, see our [OpenTelemetry troubleshooting guide](https://docs.logz.io/docs/user-guide/distributed-tracing/troubleshooting/otel-troubleshooting/).
+* [OpenTelemetry troubleshooting guide](https://docs.logz.io/docs/user-guide/distributed-tracing/troubleshooting/otel-troubleshooting/).
-#### Distributed Tracing account -For troubleshooting your Distributed Tracing account, see our [Distributed Tracing troubleshooting guide](https://docs.logz.io/docs/user-guide/distributed-tracing/troubleshooting/tracing-troubleshooting/) +* [Distributed Tracing troubleshooting guide](https://docs.logz.io/docs/user-guide/distributed-tracing/troubleshooting/tracing-troubleshooting/) \ No newline at end of file diff --git a/docs/shipping/Code/ruby.md b/docs/shipping/Code/ruby.md index 5fb6ef09..cd831550 100644 --- a/docs/shipping/Code/ruby.md +++ b/docs/shipping/Code/ruby.md @@ -35,7 +35,7 @@ On deployment, the Ruby instrumentation automatically captures spans from your a **Before you begin, you'll need**: * A Ruby application without instrumentation -* An active account with Logz.io +* An active Logz.io account * Port `4318` available on your host system * A name defined for your tracing service. You will need it to identify the traces in Logz.io. @@ -103,7 +103,7 @@ This integration enables you to auto-instrument your Ruby application and run a **Before you begin, you'll need**: * A Ruby application without instrumentation -* An active account with Logz.io +* An active Logz.io account * Port `4318` available on your host system * A name defined for your tracing service. You will need it to identify the traces in Logz.io. diff --git a/docs/shipping/Compute/apache-tomcat.md b/docs/shipping/Compute/apache-tomcat.md index b44fee96..638fb094 100644 --- a/docs/shipping/Compute/apache-tomcat.md +++ b/docs/shipping/Compute/apache-tomcat.md @@ -70,7 +70,7 @@ The full list of data scraping and configuring options can be found [here](https ### Check Logz.io for your metrics -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboards to enhance the observability of your metrics. +Install the pre-built dashboards to enhance the observability of your metrics. diff --git a/docs/shipping/Compute/telegraf-sysmetrics.md b/docs/shipping/Compute/telegraf-sysmetrics.md index 92728d50..fde0fdc8 100644 --- a/docs/shipping/Compute/telegraf-sysmetrics.md +++ b/docs/shipping/Compute/telegraf-sysmetrics.md @@ -74,7 +74,7 @@ The full list of data scraping and configuring options can be found [here](https ### Check Logz.io for your metrics -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. diff --git a/docs/shipping/Compute/vmware-vsphere.md b/docs/shipping/Compute/vmware-vsphere.md index ab30a2c6..b8c5a54e 100644 --- a/docs/shipping/Compute/vmware-vsphere.md +++ b/docs/shipping/Compute/vmware-vsphere.md @@ -120,7 +120,7 @@ Here is an example of the configuration file that will enable Telegraf to scrape ### Check Logz.io for your metrics -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboards to enhance the observability of your metrics. +Install the pre-built dashboards to enhance the observability of your metrics. diff --git a/docs/shipping/Containers/docker.md b/docs/shipping/Containers/docker.md index b2f69260..3ee59504 100644 --- a/docs/shipping/Containers/docker.md +++ b/docs/shipping/Containers/docker.md @@ -286,7 +286,7 @@ Below is a list of all environment variables available with this integration. 
If #### Check Logz.io metrics -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboards to enhance the observability of your metrics. +Install the pre-built dashboards to enhance the observability of your metrics. diff --git a/docs/shipping/Containers/kubernetes.md b/docs/shipping/Containers/kubernetes.md index af5994bf..703d8e89 100644 --- a/docs/shipping/Containers/kubernetes.md +++ b/docs/shipping/Containers/kubernetes.md @@ -17,4 +17,4 @@ drop_filter: [] Integrate your Kubernetes system with Logz.io to monitor your logs, metrics, and traces, gain observability into your environment, and be able to identify and resolve issues with just a few clicks. -{@include: ../../_include/general-shipping/k8s.md} +{@include: ../../_include/general-shipping/k8s.md} diff --git a/docs/shipping/Data-Store/etcd.md b/docs/shipping/Data-Store/etcd.md index eb883291..c18ac654 100644 --- a/docs/shipping/Data-Store/etcd.md +++ b/docs/shipping/Data-Store/etcd.md @@ -140,7 +140,7 @@ The full list of data scraping and configuring options can be found [here](https ### Check Logz.io for your metrics -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboards to enhance the observability of your metrics. +Install the pre-built dashboards to enhance the observability of your metrics. diff --git a/docs/shipping/Data-Store/mongodb-atlas.md b/docs/shipping/Data-Store/mongodb-atlas.md index 9da87a53..a11337e1 100644 --- a/docs/shipping/Data-Store/mongodb-atlas.md +++ b/docs/shipping/Data-Store/mongodb-atlas.md @@ -25,7 +25,7 @@ Deploy this integration to send your MongoDB Atlas metric to your Logz.io accoun * A MongoDB Atlas project * Private and public keys created for your MongoDB Atlas [organization](https://docs.atlas.mongodb.com/tutorial/configure-api-access/organization/create-one-api-key/) or the [project](https://docs.atlas.mongodb.com/tutorial/configure-api-access/project/create-one-api-key/) to send the data from. -* An active account with Logz.io +* An active Logz.io account diff --git a/docs/shipping/Data-Store/mongodb.md b/docs/shipping/Data-Store/mongodb.md index c932a5d3..a4435130 100644 --- a/docs/shipping/Data-Store/mongodb.md +++ b/docs/shipping/Data-Store/mongodb.md @@ -224,7 +224,7 @@ The full list of data scraping and configuring options can be found [here](https #### Check Logz.io for your metrics -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. diff --git a/docs/shipping/Database/apache-cassandra.md b/docs/shipping/Database/apache-cassandra.md index 479bf540..5d0a5e52 100644 --- a/docs/shipping/Database/apache-cassandra.md +++ b/docs/shipping/Database/apache-cassandra.md @@ -170,7 +170,7 @@ The full list of data scraping and configuring options can be found [here](https ### Check Logz.io for your metrics -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. 
diff --git a/docs/shipping/Database/mysql.md b/docs/shipping/Database/mysql.md index 4650b4b9..7bbcbd6c 100644 --- a/docs/shipping/Database/mysql.md +++ b/docs/shipping/Database/mysql.md @@ -372,7 +372,7 @@ The full list of data scraping and configuring options can be found [here](https ### Check Logz.io for your metrics -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. diff --git a/docs/shipping/Database/postgresql.md b/docs/shipping/Database/postgresql.md index fac4a012..8d8a1987 100644 --- a/docs/shipping/Database/postgresql.md +++ b/docs/shipping/Database/postgresql.md @@ -65,7 +65,7 @@ The database name is only required for instantiating a connection with the serve Give your metrics some time to get from your system to ours. -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. diff --git a/docs/shipping/Database/redis.md b/docs/shipping/Database/redis.md index 007b0d74..c9117ccf 100644 --- a/docs/shipping/Database/redis.md +++ b/docs/shipping/Database/redis.md @@ -179,7 +179,7 @@ The full list of data scraping and configuring options can be found [here](https Give your metrics some time to get from your system to ours. -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. diff --git a/docs/shipping/Distributed-Messaging/rabbitmq.md b/docs/shipping/Distributed-Messaging/rabbitmq.md index d55f1ffa..76cccf33 100644 --- a/docs/shipping/Distributed-Messaging/rabbitmq.md +++ b/docs/shipping/Distributed-Messaging/rabbitmq.md @@ -109,7 +109,7 @@ The full list of data scraping and configuring options can be found [here](https Give your metrics some time to get from your system to ours. -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. diff --git a/docs/shipping/Load-Balancer/nginx.md b/docs/shipping/Load-Balancer/nginx.md index 32a8fda9..be0c1138 100644 --- a/docs/shipping/Load-Balancer/nginx.md +++ b/docs/shipping/Load-Balancer/nginx.md @@ -233,7 +233,7 @@ First you need to configure the input plug-in to enable Telegraf to scrape the N #### Check Logz.io for your metrics -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboards to enhance the observability of your metrics. +Install the pre-built dashboards to enhance the observability of your metrics. diff --git a/docs/shipping/Network/openvpn.md b/docs/shipping/Network/openvpn.md index ebd22382..26864297 100644 --- a/docs/shipping/Network/openvpn.md +++ b/docs/shipping/Network/openvpn.md @@ -21,7 +21,7 @@ These instructions only apply to Linux and MacOS systems. 
**Before you begin, you'll need**:
 
-* An active account with Logz.io
+* An active Logz.io account
* OpenVPN Access Server installed
* [Filebeat](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-installation.html) installed on the same machine as OpenVPN Access Server
* Root priveleges on your machines

diff --git a/docs/shipping/Operating-Systems/linux.md b/docs/shipping/Operating-Systems/linux.md
index ce4b1972..19f7287c 100644
--- a/docs/shipping/Operating-Systems/linux.md
+++ b/docs/shipping/Operating-Systems/linux.md
@@ -19,38 +19,38 @@ drop_filter: []

* Root access


-## Send your Linux machine logs and metrics using OpenTelemetry service
+## Send Linux logs and metrics with OpenTelemetry

:::note
-For a much easier and more efficient way to collect and send metrics, consider using the [Logz.io telemetry collector](https://app.logz.io/#/dashboard/integrations/collectors?tags=Quick%20Setup).
+For a simpler and more efficient way to collect and send metrics, use the [Logz.io telemetry collector](https://app.logz.io/#/dashboard/integrations/collectors?tags=Quick%20Setup).
:::

-Create a Logz.io directory:
+**1. Create a Logz.io directory:**

```shell
sudo mkdir /opt/logzio-agent
```

-Download OpenTelemetry tar.gz:
+**2. Download OpenTelemetry tar.gz:**

```shell
curl -fsSL "https://github.com/open-telemetry/opentelemetry-collector-releases/releases/download/v0.82.0/otelcol-contrib_0.82.0_linux_amd64.tar.gz" >./otelcol-contrib.tar.gz
```

-Extract the OpenTelemetry binary:
+**3. Extract the OpenTelemetry binary:**

```shell
sudo tar -zxf ./otelcol-contrib.tar.gz --directory /opt/logzio-agent otelcol-contrib
```

-Create the OpenTelemetry config file:
+**4. Create the OpenTelemetry config file:**

```shell
sudo touch /opt/logzio-agent/otel_config.yaml
```

-And copy the following OpenTelemetry content into the config file.
+**5. Copy the following into the config file:**

Replace `<>`, `<>`, and `<>` with the relevant parameters from your Logz.io account.

@@ -141,11 +141,11 @@ service:

:::caution Important
-If you already running OpenTelemetry metrics on port 8888, you will need to edit the `address` field in the config file.
+If OpenTelemetry metrics are already running on port 8888, edit the `address` field in the config file.
:::

-Next, create the service file:
+**6. Create the service file:**

```shell
sudo touch /etc/systemd/system/logzioOTELCollector.service
@@ -168,7 +168,9 @@ WantedBy=multi-user.target
```

-### Manage your OpenTelemetry on Localhost
+## Manage your OpenTelemetry on Localhost
+
+Manage OpenTelemetry on your machine using the following commands:

|Description|Command|
|--|--|
@@ -178,7 +180,7 @@ WantedBy=multi-user.target
|Delete service|`sudo systemctl stop logzioOTELCollector` `sudo systemctl reset-failed logzioOTELCollector 2>/dev/null` `sudo rm /etc/systemd/system/logzioOTELCollector.service 2>/dev/null` `sudo rm /usr/lib/systemd/system/logzioOTELCollector.service 2>/dev/null` `sudo rm /etc/init.d/logzioOTELCollector 2>/dev/null`|


-## Send your logs to Logz.io through rsyslog
+## Send logs through rsyslog


**Before you begin, you'll need**:
@@ -205,9 +207,9 @@ The above assumes the following defaults:

### Check Logz.io for your logs

-Give your logs some time to get from your system to ours, and then [open Open Search Dashboards](https://app.logz.io/#/dashboard/osd). You can search for `type:syslog` to filter for your logs.
+Allow some time for data ingestion, then open [Open Search Dashboards](https://app.logz.io/#/dashboard/osd) and search for `type:syslog` to filter for your logs.
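+
+To confirm the pipeline end to end, you can emit a test line with the standard `logger` utility (the message text is arbitrary):
+
+```shell
+logger "logzio rsyslog test"
+```
+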
-If you still don't see your logs, see [log shipping troubleshooting](https://docs.logz.io/docs/user-guide/log-management/troubleshooting/log-shipping-troubleshooting/).
+For troubleshooting, refer to our [log shipping troubleshooting](https://docs.logz.io/docs/user-guide/log-management/troubleshooting/log-shipping-troubleshooting/) guide.

diff --git a/docs/shipping/Operating-Systems/localhost-mac.md b/docs/shipping/Operating-Systems/localhost-mac.md
index 137b700d..6b2c2ece 100644
--- a/docs/shipping/Operating-Systems/localhost-mac.md
+++ b/docs/shipping/Operating-Systems/localhost-mac.md
@@ -16,40 +16,37 @@ drop_filter: []



-## Send your Mac machine logs and metrics using Opentelemetry service
+## Send Mac logs and metrics with OpenTelemetry

:::note
-For a much easier and more efficient way to collect and send metrics, consider using the [Logz.io telemetry collector](https://app.logz.io/#/dashboard/integrations/collectors?tags=Quick%20Setup).
+For a simpler and more efficient way to collect and send metrics, use the [Logz.io telemetry collector](https://app.logz.io/#/dashboard/integrations/collectors?tags=Quick%20Setup).
:::

-Follow these steps to manually configure OpenTelemetry on your Mac machine
-
-
-Create a Logz.io directory:
+**1. Create a Logz.io directory:**

```shell
sudo mkdir /opt/logzio-agent
```

-Download OpenTelemetry tar.gz:
+**2. Download OpenTelemetry tar.gz:**

```shell
curl -fsSL "https://github.com/logzio/otel-collector-distro/releases/download/v0.82.0/otelcol-logzio-darwin_amd64.tar.gz" >./otelcol-logzio.tar.gz
```

-Extract the OpenTelemetry binary:
+**3. Extract the OpenTelemetry binary:**

```shell
sudo tar -zxf ./otelcol-logzio.tar.gz --directory /opt/logzio-agent
```

-Create the OpenTelemetry config file:
+**4. Create the OpenTelemetry config file:**

```shell
sudo touch /opt/logzio-agent/otel_config.yaml
```

-And copy the following OpenTelemetry config content into the config file.
+**5. Copy the following OpenTelemetry config content into the config file:**

Replace `<>`, `<>`, and `<>` with the relevant parameters from your Logz.io account.

@@ -138,16 +135,16 @@ service:
```
:::caution Important
-If you already running OpenTelemetry metrics on port 8888, you will need to edit the `address` field in the config file.
+If OpenTelemetry metrics are already running on port 8888, edit the `address` field in the config file.
:::

-Create plist file:
+**6. Create a plist file:**

```shell
sudo touch /Library/LaunchDaemons/com.logzio.OTELCollector.plist
```

-And copy the plist file's content:
+Copy the following content into the plist file:


```shell
@@ -170,9 +167,9 @@ And copy the plist file's content:
```


-### Manage your OpenTelemetry on Mac
+## Manage your OpenTelemetry on Mac

-To manage OpenTelemetry on your machine, use the following commands:
+Manage OpenTelemetry on your machine using the following commands:

Description|Command
|--|--|

diff --git a/docs/shipping/Operating-Systems/telegraf-windows-performance.md b/docs/shipping/Operating-Systems/telegraf-windows-performance.md
index 07175ccb..df14e329 100644
--- a/docs/shipping/Operating-Systems/telegraf-windows-performance.md
+++ b/docs/shipping/Operating-Systems/telegraf-windows-performance.md
@@ -165,7 +165,7 @@ telegraf.exe --service start

### Check Logz.io for your metrics

-{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboards to enhance the observability of your metrics.
+Install the pre-built dashboards to enhance the observability of your metrics.
diff --git a/docs/shipping/Operating-Systems/windows.md b/docs/shipping/Operating-Systems/windows.md
index 0e71a50a..5bb73b78 100644
--- a/docs/shipping/Operating-Systems/windows.md
+++ b/docs/shipping/Operating-Systems/windows.md
@@ -15,38 +15,39 @@ drop_filter: []
---


-## Send your Windows machine logs and metrics using OpenTelemetry service
+## Send Windows logs and metrics with OpenTelemetry

:::note
-For a much easier and more efficient way to collect and send metrics, consider using the [Logz.io telemetry collector](https://app.logz.io/#/dashboard/integrations/collectors?tags=Quick%20Setup).
+For a simpler and more efficient way to collect and send metrics, use the [Logz.io telemetry collector](https://app.logz.io/#/dashboard/integrations/collectors?tags=Quick%20Setup).
:::

-Create a Logz.io directory:
+
+**1. Create a Logz.io directory:**

```shell
New-Item -Path $env:APPDATA\LogzioAgent -ItemType Directory -Force
```

-Download OpenTelemetry tar.gz:
+**2. Download the OpenTelemetry zip archive:**

```shell
Invoke-WebRequest -Uri "https://github.com/logzio/otel-collector-distro/releases/download/v0.82.0/otelcol-logzio-windows_amd64.zip" -OutFile C:\Users\<>\Downloads\otelcol-logzio.zip
```

-Extract the OpenTelemetry binary:
+**3. Extract the OpenTelemetry binary:**

```shell
Expand-Archive -LiteralPath C:\Users\<>\Downloads\otelcol-logzio.zip -DestinationPath $env:APPDATA\LogzioAgent -Force
```

-Create the OpenTelemetry config file:
+**4. Create the OpenTelemetry config file:**

```shell
New-Item -Path $env:APPDATA\LogzioAgent\otel_config.yaml -ItemType File -Force
```

-And copy the following OpenTelemetry content into the config file.
+**5. Copy the following into the config file:**

Replace `<>`, `<>`, and `<>` with the relevant parameters from your Logz.io account.

@@ -140,17 +141,19 @@ service:

:::caution Important
-If you already running OpenTelemetry metrics on port 8888, you will need to edit the `address` field in the config file.
+If OpenTelemetry metrics are already running on port 8888, edit the `address` field in the config file.
:::

-Next, create the service file:
+**6. Create the service file:**

```shell
New-Service -Name LogzioOTELCollector -BinaryPathName "$env:APPDATA\LogzioAgent\otelcol-logzio-windows_amd64.exe --config $env:APPDATA\LogzioAgent\otel_config.yaml" -Description "Collects localhost logs/metrics and sends them to Logz.io."
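+
+# New-Service registers the collector as a Windows service but does not start it.
+# You can start it right away; the management table below lists this and related commands:
+Start-Service -Name LogzioOTELCollector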
``` -### Manage your OpenTelemetry on Localhost +## Manage your OpenTelemetry on Localhost + +Manage OpenTelemetry on your machine using the following commands: |Description|Command| |--|--| diff --git a/docs/shipping/Orchestration/istio-traces.md b/docs/shipping/Orchestration/istio-traces.md index 1f207edd..98e5d1dc 100644 --- a/docs/shipping/Orchestration/istio-traces.md +++ b/docs/shipping/Orchestration/istio-traces.md @@ -21,7 +21,7 @@ Deploy this integration to send traces from your Istio service mesh layers to Lo * An applicaion instrumented by Istio in a Kubernetes cluster * [Istioctl](https://istio.io/latest/docs/reference/commands/istioctl/) installed on your machine -* An active account with Logz.io +* An active Logz.io account :::note diff --git a/docs/shipping/Other/aiven.md b/docs/shipping/Other/aiven.md index 45f0dce3..589e1c6a 100644 --- a/docs/shipping/Other/aiven.md +++ b/docs/shipping/Other/aiven.md @@ -18,7 +18,7 @@ drop_filter: [] **Before you begin, you'll need**: -* an active account with Logz.io +* An active Logz.io account * an Aiven project with the service enabled diff --git a/docs/shipping/Other/axonius.md b/docs/shipping/Other/axonius.md index 531fdf17..201df7f7 100644 --- a/docs/shipping/Other/axonius.md +++ b/docs/shipping/Other/axonius.md @@ -19,7 +19,7 @@ drop_filter: [] **Before you begin, you'll need**: * An active account with Axonius -* An active account with Logz.io +* An active Logz.io account * [Filebeat](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-installation.html) installed on your machine * Root priveleges on your machines diff --git a/docs/shipping/Other/bunny-net.md b/docs/shipping/Other/bunny-net.md index 24e90df1..1f2d9a46 100644 --- a/docs/shipping/Other/bunny-net.md +++ b/docs/shipping/Other/bunny-net.md @@ -20,7 +20,7 @@ drop_filter: [] **Before you begin, you'll need**: * An active account with bunny.net -* An active account with Logz.io +* An active Logz.io account * [Filebeat](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-installation.html) installed on your machine * Root priveleges on your machines diff --git a/docs/shipping/Other/consul.md b/docs/shipping/Other/consul.md index f382a6cd..588f8b29 100644 --- a/docs/shipping/Other/consul.md +++ b/docs/shipping/Other/consul.md @@ -147,7 +147,7 @@ Run the following command: ### Check Logz.io for your metrics -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. diff --git a/docs/shipping/Other/curl.md b/docs/shipping/Other/curl.md index b9cc58b5..9388b05c 100644 --- a/docs/shipping/Other/curl.md +++ b/docs/shipping/Other/curl.md @@ -16,30 +16,22 @@ drop_filter: [] -cURL is a command line utility for transferring data. cURL is a quick and easy way to test your configuration or troubleshoot your connectivity to Logz.io. -You can upload JSON or plain text files. +cURL is a command line utility for transferring data, useful for testing configurations or troubleshooting connectivity to Logz.io. You can upload JSON or plain text files. ## Upload a JSON log file -### Limitations +**Limitations** -* Max body size is 10 MB (10,485,760 bytes) -* Each log line must be 500,000 bytes or less -* If you include a `type` field in the log, it overrides `type` in the request header +* Max body size: 10 MB (10,485,760 bytes). +* Max log line size: 500,000 bytes. 
+* A `type` field in the log overrides the `type` in the request header.
 
 
+1. Download [cURL](https://curl.haxx.se/download.html).
 
-**Before you begin, you'll need**:
-[cURL](https://curl.haxx.se/download.html)
 
+2. Upload the file:
 
- 
-
-### Upload the file
-
-If you want to ship logs from your code but don't have a library in place,
-you can send them directly to the Logz.io listener as a minified JSON file.
-
```shell
cat /path/to/log/file | curl -X POST "https://<>:8071?token=<>&type=" \
-H "user-agent:logzio-curl-logs" \
@@ -50,34 +42,26 @@ cat /path/to/log/file | curl -X POST "https://<>:8071?token=<
+

## Configure Filebeat on macOS or Linux

-**Before you begin, you'll need**:
+### Prerequisites
+Before you begin, you'll need:

{@include: ../../_include/log-shipping/filebeat-installed-port5015-begin.md} {@include: ../../_include/log-shipping/filebeat-installed-port5015-end.md}

- 
-
-
{@include: ../../_include/log-shipping/certificate.md}

{@include: ../../_include/log-shipping/filebeat-ssl.md}

-### Configure Filebeat using the dedicated Logz.io configuration wizard
+### Configure Filebeat with the Logz.io configuration wizard

{@include: ../../_include/log-shipping/filebeat-input-extension.md}

@@ -50,42 +53,37 @@ Filebeat is often the easiest way to get logs from your system to Logz.io. Logz.

{@include: ../../_include/log-shipping/validate-yaml.md}

-### Move the configuration file to the Filebeat folder
+#### Move the configuration file to the Filebeat folder

Move your configuration file to `/etc/filebeat/filebeat.yml`.

-### Start Filebeat
+### Start Filebeat and view logs

[Start or restart Filebeat](https://www.elastic.co/guide/en/beats/filebeat/master/filebeat-starting.html) for the changes to take effect.

-### Check Logz.io for your logs
-
-Give your logs some time to get from your system to ours, and then open [Open Search Dashboards](https://app.logz.io/#/dashboard/osd).
-
-If you still don't see your logs, see [Filebeat's troubleshooting guide](https://docs.logz.io/docs/user-guide/log-management/troubleshooting/troubleshooting-filebeat/).
-
- 
+Allow some time for data ingestion, then open [Open Search Dashboards](https://app.logz.io/#/dashboard/osd).

-Filebeat is often the easiest way to get logs from your system to Logz.io. Logz.io has a dedicated configuration wizard to make it simple to configure Filebeat. If you already have Filebeat and you want to add new sources, check out our other shipping instructions to copy & paste just the relevant changes from our code examples.
+If you don't see your logs, see [Filebeat's troubleshooting guide](https://docs.logz.io/docs/user-guide/log-management/troubleshooting/troubleshooting-filebeat/).
+ 
+

## Configure Filebeat on Windows

-**Before you begin, you'll need**:
+### Prerequisites
+Before you begin, you'll need:

-{@include: ../../_include/log-shipping/filebeat-installed-port5015-begin.md} installed as a Windows service{@include: ../../_include/log-shipping/filebeat-installed-port5015-end.md}
+{@include: ../../_include/log-shipping/filebeat-installed-port5015-begin.md} installed as a Windows service

- 
-
-
-
+{@include: ../../_include/log-shipping/filebeat-installed-port5015-end.md}

### Download the Logz.io public certificate

For HTTPS shipping, download the Logz.io public certificate to your certificate authority folder.

+

Download the [Logz.io public certificate]({@include: ../../_include/log-shipping/certificate-path.md}) to `C:\ProgramData\Filebeat\Logzio.crt`

@@ -94,11 +92,10 @@ on your machine.
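+
+A sketch of fetching the certificate with PowerShell — the source URL is a placeholder for the certificate link rendered by the include above:
+
+```powershell
+Invoke-WebRequest -Uri "<LOGZIO-PUBLIC-CERTIFICATE-URL>" -OutFile C:\ProgramData\Filebeat\Logzio.crt
+```
+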
{@include: ../../_include/log-shipping/filebeat-ssl.md} -### Configure Filebeat using the dedicated Logz.io configuration wizard +### Configure Filebeat with Logz.io configuration wizard {@include: ../../_include/log-shipping/filebeat-input-extension.md} - {@include: ../../_include/log-shipping/filebeat-wizard.html} @@ -110,39 +107,34 @@ on your machine. {@include: ../../_include/log-shipping/validate-yaml.md} -### Move the configuration file to the Filebeat folder +#### Move the configuration file to the Filebeat folder Move the configuration file to `C:\Program Files\Filebeat\filebeat.yml`. -### Restart Filebeat +### Restart Filebeat and view logs + +Restart Filebeat for the changes to take effect. + ```powershell PS C:\Program Files\Filebeat> Restart-Service filebeat ``` -### Check Logz.io for your logs +Allow some time for data ingestion, then open [Open Search Dashboards](https://app.logz.io/#/dashboard/osd). + -Give your logs some time to get from your system to ours, and then open [Open Search Dashboards](https://app.logz.io/#/dashboard/osd). +If you don't see your logs, see [Filebeat's troubleshooting guide](https://docs.logz.io/docs/user-guide/log-management/troubleshooting/troubleshooting-filebeat/). -If you still don't see your logs, see [Filebeat's troubleshooting guide](https://docs.logz.io/docs/user-guide/log-management/troubleshooting/troubleshooting-filebeat/). + + - - +## Supported Modules Beat shippers make use of modules to ship data from various sources. Refer to the list below to see which modules each shipper supports. * [Apache ActiveMQ](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-activemq.html#filebeat-module-activemq) - * [AWS](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-aws.html) - * [Azure](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-azure.html) - * [Google Cloud](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-gcp.html) - * [MySQL](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-mysql.html) - -* [Find more modules](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-modules.html) - - - - +* [And more](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-modules.html) diff --git a/docs/shipping/Other/fpm.md b/docs/shipping/Other/fpm.md index 3605e837..40ea7933 100644 --- a/docs/shipping/Other/fpm.md +++ b/docs/shipping/Other/fpm.md @@ -118,7 +118,7 @@ Run the following command: Give your data some time to get from your system to ours, then log in to your Logz.io Metrics account, and open [the Logz.io Metrics tab](https://app.logz.io/#/dashboard/metrics/). -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. @@ -198,7 +198,7 @@ The full list of data scraping and configuring options can be found [here](https Give your data some time to get from your system to ours, then log in to your Logz.io Metrics account, and open [the Logz.io Metrics tab](https://app.logz.io/#/dashboard/metrics/). -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. 
diff --git a/docs/shipping/Other/jaeger.md b/docs/shipping/Other/jaeger.md index fbe0b7a0..cdef2cbc 100644 --- a/docs/shipping/Other/jaeger.md +++ b/docs/shipping/Other/jaeger.md @@ -32,7 +32,7 @@ On deployment, your Jaeger instrumentation captures spans from your application **Before you begin, you'll need**: * An application instrumented with Jaeger -* An active account with Logz.io +* An active Logz.io account #### Download and configure OpenTelemetry collector diff --git a/docs/shipping/Other/logstash.md b/docs/shipping/Other/logstash.md index 173c433b..0a6324b7 100644 --- a/docs/shipping/Other/logstash.md +++ b/docs/shipping/Other/logstash.md @@ -2,7 +2,7 @@ id: Logstash-data title: Logstash overview: Logstash is an open-source server-side data processing pipeline. This integration can ingest data from multiple sources. With Logz.io, you can monitor Logstash instances and quickly identify if and when issues arise. -product: ['metrics'] +product: ['logs'] os: ['windows', 'linux'] filters: ['Other'] logo: https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/logstash_temp.png diff --git a/docs/shipping/Other/opentelemetry.md b/docs/shipping/Other/opentelemetry.md index 1d805377..f9e8305e 100644 --- a/docs/shipping/Other/opentelemetry.md +++ b/docs/shipping/Other/opentelemetry.md @@ -18,9 +18,9 @@ drop_filter: [] ## Logs -This project lets you configure the OpenTelemetry collector to send your collected logs to Logz.io. +This project helps you configure the OpenTelemetry collector to send your logs to Logz.io. -### Configuring OpenTelemetry to send your log data to Logz.io +### Sending OpenTelemetry Logs to Logz.io #### Download OpenTelemetry collector @@ -28,17 +28,20 @@ This project lets you configure the OpenTelemetry collector to send your collect If you already have OpenTelemetry, proceed to the next step. ::: -Create a dedicated directory on your host and download the OpenTelemetry collector that is relevant to the operating system of your host. +Create a dedicated directory on your host and download the relevant OpenTelemetry collector for your host operating system. -After downloading the collector, create a configuration file `config.yaml`. +Create a configuration file `config.yaml`. -#### Configure the Receivers -Open the configuration file and ensure it contains the receivers required to collect your logs. -#### Configure the Exporters +#### Configure Receivers + +Ensure your `config.yaml` file includes the necessary receivers to collect your logs. + +#### Configure Exporters + +Add the following to the exporters section of your `config.yaml`: -In the same configuration file, add the following to the exporters section: ```yaml exporters: @@ -49,9 +52,10 @@ exporters: user-agent: logzio-opentelemetry-logs ``` -#### Configure the Service Pipeline +#### Configure Service Pipeline + +In the service section of your `config.yaml`, add: -In the service section of the configuration file, add the following configuration: ```yaml service: @@ -65,7 +69,7 @@ service: #### Start the Collector -Run the following command: +Run: ```shell /otelcol-contrib --config ./config.yaml @@ -73,35 +77,37 @@ Run the following command: * Replace `` with the path to the directory where you downloaded the collector. If the name of your configuration file is different to config, adjust the name in the command accordingly. -#### Check Logz.io for Your Logs +#### View your logs + +Allow some time for data ingestion, then open Logz.io to view your logs. 
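+
+For reference, a minimal end-to-end logs configuration could look like the sketch below. The `filelog` receiver and its path are illustrative (any log-producing receiver works), and the token and region values are placeholders you must replace; the exporter block mirrors the snippet above:
+
+```yaml
+receivers:
+  filelog:
+    include: [ /var/log/myapp/*.log ]
+
+exporters:
+  logzio/logs:
+    account_token: "<LOG-SHIPPING-TOKEN>"
+    region: "<LOGZIO-REGION-CODE>"
+    headers:
+      user-agent: logzio-opentelemetry-logs
+
+service:
+  pipelines:
+    logs:
+      receivers: [filelog]
+      exporters: [logzio/logs]
+```
+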
+ -Give your data some time to get from your system to ours, then log in to your Logz.io account, and open the appropriate tab or dashboard to view your logs. ## Metrics -This project lets you configure the OpenTelemetry collector to send your collected Prometheus-format metrics to Logz.io. +This project helps you configure the OpenTelemetry collector to send your metrics to Logz.io. -#### Configuring OpenTelemetry to send your metrics data to Logz.io +### Sending OpenTelemetry Metrics to Logz.io -##### Download OpenTelemetry collector +#### Download OpenTelemetry collector :::note If you already have OpenTelemetry, proceed to the next step. ::: -Create a dedicated directory on your host and download the [OpenTelemetry collector](https://github.com/open-telemetry/opentelemetry-collector/releases/tag/v0.60.0) that is relevant to the operating system of your host. +Create a dedicated directory on your host and download the relevant [OpenTelemetry collector](https://github.com/open-telemetry/opentelemetry-collector/releases/tag/v0.60.0) for your host operating system. -After downloading the collector, create a configuration file `config.yaml`. +Create a configuration file `config.yaml`. -##### Configure the receivers +#### Configure receivers -Open the configuration file and make sure that it states the receivers required for your source. +Ensure your `config.yaml` file includes the necessary receivers for your source. -##### Configure the exporters +#### Configure exporters -In the same configuration file, add the following to the `exporters` section: +Add the following to the `exporters` section of your `config.yaml`: ```yaml exporters: @@ -116,9 +122,9 @@ exporters: {@include: ../../_include/general-shipping/replace-placeholders-prometheus.html} -##### Configure the service pipeline +#### Configure service pipeline -In the `service` section of the configuration file, add the following configuration +In the `service` section of your `config.yaml`, add: ```yaml service: @@ -131,9 +137,9 @@ service: -##### Start the collector +#### Start the collector -Run the following command: +Run: ```shell /otelcol-contrib --config ./config.yaml @@ -141,28 +147,27 @@ Run the following command: * Replace `` with the path to the directory where you downloaded the collector. If the name of your configuration file is different to `config`, adjust name in the command accordingly. -##### Check Logz.io for your metrics +#### View your metrics + + +Allow some time for data ingestion, then open [the Logz.io Metrics tab](https://app.logz.io/#/dashboard/metrics/). + -Give your data some time to get from your system to ours, then log in to your Logz.io Metrics account, and open [the Logz.io Metrics tab](https://app.logz.io/#/dashboard/metrics/). ## Traces -Deploy this integration to send traces from your OpenTelemetry installation to Logz.io. +This project helps you configure the OpenTelemetry collector to send your traces to Logz.io. -## Manual configuration -This integration includes: -* Configuring the OpenTelemetry collector to receive traces from your OpenTelemetry installation and send them to Logz.io +## Manual configuration On deployment, your OpenTelemetry instrumentation captures spans from your application and forwards them to the collector, which exports the data to your Logz.io account. 
-### Set up your locally hosted OpenTelemetry installation to send traces to Logz.io - **Before you begin, you'll need**: -* An active account with Logz.io +* An active Logz.io account :::note @@ -171,11 +176,11 @@ This integration uses OpenTelemetry Collector Contrib, not the OpenTelemetry Col -#### Download and configure OpenTelemetry collector +### Download and configure OpenTelemetry collector -Create a dedicated directory on the host of your application and download the [OpenTelemetry collector](https://github.com/open-telemetry/opentelemetry-collector-contrib/releases/tag/v0.70.0) that is relevant to the operating system of your host. +Create a directory on your application's host and download the relevant [OpenTelemetry collector](https://github.com/open-telemetry/opentelemetry-collector-contrib/releases/tag/v0.70.0) for your host operating system -After downloading the collector, create a configuration file `config.yaml` with the following parameters: +Create a configuration file `config.yaml` with the following parameters: {@include: ../../_include/tracing-shipping/collector-config.md} @@ -184,7 +189,7 @@ After downloading the collector, create a configuration file `config.yaml` with {@include: ../../_include/tracing-shipping/tail-sampling.md} -If you already have an OpenTelemetry installation, add the following parameters to the configuration file of your existing OpenTelemetry collector: +If you already have an OpenTelemetry installation, add the following to the configuration file of your existing OpenTelemetry collector: * Under the `exporters` list @@ -209,18 +214,19 @@ If you already have an OpenTelemetry installation, add the following parameters {@include: ../../_include/tracing-shipping/replace-tracing-token.html} -An example configuration file looks as follows: +Example configuration file: {@include: ../../_include/tracing-shipping/collector-config.md} -#### Instrument the application +### Instrument the application -If your application is not yet instrumented, instrument the code as described in our [tracing documents](https://docs.logz.io/shipping/#tracing-sources). +If your application isn't instrumented, begin by downloading the OpenTelemetry agent or library specific to your programming language. Logz.io supports popular open-source instrumentation libraries, including OpenTracing, Jaeger, OpenTelemetry, and Zipkin. Attach the agent, set up the necessary configuration options, and start your application. The agent will automatically instrument your application to capture telemetry data. -#### Start the collector -Run the following command: +### Start the collector + +Run: ```shell /otelcontribcol_ --config ./config.yaml @@ -228,32 +234,30 @@ Run the following command: * Replace `` with the path to the directory where you downloaded the collector. * Replace `` with the version name of the collector applicable to your system, e.g. `otelcontribcol_darwin_amd64`. -#### Run the application +And run the application to generate traces. -Run the application to generate traces. +### View your traces -#### Check Logz.io for your traces +Allow some time for data ingestion, then open your [Tracing](https://app.logz.io/#/dashboard/jaeger) account. -Give your traces some time to get from your system to ours, and then open [Tracing](https://app.logz.io/#/dashboard/jaeger). 
-### Set up your OpenTelemetry installation using containerized collector to send traces to Logz.io +## Configure OpenTelemetry with a Containerized Collector **Before you begin, you'll need**: -* An active account with Logz.io +* An active Logz.io account #### Instrument the application -If your application is not yet instrumented, instrument the code as described in our [tracing documents](https://docs.logz.io/shipping/#tracing-sources). - +If your application isn't instrumented, begin by downloading the OpenTelemetry agent or library specific to your programming language. Logz.io supports popular open-source instrumentation libraries, including OpenTracing, Jaeger, OpenTelemetry, and Zipkin. Attach the agent, set up the necessary configuration options, and start your application. The agent will automatically instrument your application to capture telemetry data. {@include: ../../_include/tracing-shipping/docker.md} {@include: ../../_include/tracing-shipping/replace-tracing-token.html} -#### Run the application +### Run the application {@include: ../../_include/tracing-shipping/collector-run-note.md} @@ -261,12 +265,10 @@ If your application is not yet instrumented, instrument the code as described in Run the application to generate traces. -#### Check Logz.io for your traces +### View your traces -Give your traces some time to get from your system to ours, and then open [Tracing](https://app.logz.io/#/dashboard/jaeger). +Allow some time for data ingestion, then open your [Tracing](https://app.logz.io/#/dashboard/jaeger) account. -### Troubleshooting {@include: ../../_include/tracing-shipping/otel-troubleshooting.md} - diff --git a/docs/shipping/Other/telegraf.md b/docs/shipping/Other/telegraf.md index 04c92698..c3b2ae87 100644 --- a/docs/shipping/Other/telegraf.md +++ b/docs/shipping/Other/telegraf.md @@ -86,7 +86,7 @@ The full list of data scraping and configuring options can be found [here](https ##### Check Logz.io for your metrics -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. 
diff --git a/docs/shipping/Security/avast.md b/docs/shipping/Security/avast.md index 33b025ad..1e76d473 100644 --- a/docs/shipping/Security/avast.md +++ b/docs/shipping/Security/avast.md @@ -19,7 +19,7 @@ drop_filter: [] **Before you begin, you'll need**: * Avast Antivirus installed on your machine -* An active account with Logz.io +* An active Logz.io account * [Filebeat](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-installation.html) installed on your machine * Root priveleges on your machines diff --git a/docs/shipping/Security/crowdstrike.md b/docs/shipping/Security/crowdstrike.md index 0c982258..5545febb 100644 --- a/docs/shipping/Security/crowdstrike.md +++ b/docs/shipping/Security/crowdstrike.md @@ -38,7 +38,7 @@ Upon deployment, the Crowdstrike connector connects to your Crowdstrike account **Before you begin, you'll need**: * an active account with Crowdstrike -* an active account with Logz.io +* An active Logz.io account * FluentD agent on your machine * Crowdstrike connector installed on your machine diff --git a/docs/shipping/Security/cynet.md b/docs/shipping/Security/cynet.md index 453c454d..6fa71957 100644 --- a/docs/shipping/Security/cynet.md +++ b/docs/shipping/Security/cynet.md @@ -20,7 +20,7 @@ drop_filter: [] * An active Cynet license * Cynet login credentials -* An active account with Logz.io +* An active Logz.io account * [Filebeat](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-installation.html) installed on a dedicated machine (acting as a syslog server) * Root priveleges on your machines diff --git a/docs/shipping/Security/pfsense.md b/docs/shipping/Security/pfsense.md index 09e1f1bd..8fa005f7 100644 --- a/docs/shipping/Security/pfsense.md +++ b/docs/shipping/Security/pfsense.md @@ -19,7 +19,7 @@ drop_filter: [] **Before you begin, you'll need**: * pfSense installed and configured on your machine -* an active account with Logz.io +* An active Logz.io account * [Filebeat](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-installation.html) installed on your machine * Root priveleges on your machines diff --git a/docs/shipping/Security/trivy.md b/docs/shipping/Security/trivy.md index f7416672..f27f1fb7 100644 --- a/docs/shipping/Security/trivy.md +++ b/docs/shipping/Security/trivy.md @@ -36,7 +36,7 @@ This integration is presently in its beta phase and may be subject to modificati **Before you begin, you'll need**: -* an active account with Logz.io +* An active Logz.io account * Kubernetes cluster to send reports from diff --git a/docs/shipping/Security/x509.md b/docs/shipping/Security/x509.md index 20b5588e..3082c071 100644 --- a/docs/shipping/Security/x509.md +++ b/docs/shipping/Security/x509.md @@ -88,7 +88,7 @@ Run the ping statistics tests to generate metrics. Give your metrics some time to get from your system to ours, and then open [OpenSearch Dashboards](https://app.logz.io/#/dashboard/osd). All metrics that were sent from the Lambda function will have the prefix `x509` in their name. -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. +Install the pre-built dashboard to enhance the observability of your metrics. @@ -198,7 +198,7 @@ The full list of data scraping and configuring options can be found [here](https ### Check Logz.io for your metrics -{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics. 
+Install the pre-built dashboard to enhance the observability of your metrics.
diff --git a/docs/shipping/Synthetic-Monitoring/api-status-metrics.md b/docs/shipping/Synthetic-Monitoring/api-status-metrics.md
index 849a1998..9509b3ac 100644
--- a/docs/shipping/Synthetic-Monitoring/api-status-metrics.md
+++ b/docs/shipping/Synthetic-Monitoring/api-status-metrics.md
@@ -90,7 +90,7 @@ Run the ping statistics tests to generate metrics.
Give your metrics some time to get from your system to ours, and then open [Open Search Dashboards](https://app.logz.io/#/dashboard/osd). All metrics that were sent from the Lambda function will have the prefix `api_status` in their name.
-{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics.
+Install the pre-built dashboard to enhance the observability of your metrics.
diff --git a/docs/shipping/Synthetic-Monitoring/ping-statistics.md b/docs/shipping/Synthetic-Monitoring/ping-statistics.md
index d4532e67..772efe55 100644
--- a/docs/shipping/Synthetic-Monitoring/ping-statistics.md
+++ b/docs/shipping/Synthetic-Monitoring/ping-statistics.md
@@ -84,7 +84,7 @@ Run the ping statistics tests to generate metrics.
Give your metrics some time to get from your system to ours, and then open [Open Search Dashboards](https://app.logz.io/#/dashboard/osd). All metrics that were sent from the Lambda function will have the prefix `ping_stats` in their name.
-{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics.
+Install the pre-built dashboard to enhance the observability of your metrics.
diff --git a/docs/shipping/Synthetic-Monitoring/synthetic-link-detector.md b/docs/shipping/Synthetic-Monitoring/synthetic-link-detector.md
index 9c8ed6fb..d18e85f0 100644
--- a/docs/shipping/Synthetic-Monitoring/synthetic-link-detector.md
+++ b/docs/shipping/Synthetic-Monitoring/synthetic-link-detector.md
@@ -70,7 +70,7 @@ Specify the stack details as per the table below, check the checkboxes and selec
Give the stack a few minutes to be deployed and the data to get to our system, and then open [Open Search Dashboards](https://app.logz.io/#/dashboard/osd).
-{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your data.
+Install the pre-built dashboard to enhance the observability of your data.
diff --git a/docs/user-guide/Infrastructure-monitoring/grafana-datasource-for-Logzio-metrics.md b/docs/user-guide/Infrastructure-monitoring/grafana-datasource-for-Logzio-metrics.md
new file mode 100644
index 00000000..67df717e
--- /dev/null
+++ b/docs/user-guide/Infrastructure-monitoring/grafana-datasource-for-Logzio-metrics.md
@@ -0,0 +1,52 @@
+---
+sidebar_position: 6
+title: Configuring Grafana Datasource for Logz.io Metrics
+description: This guide provides step-by-step instructions for configuring Grafana to query Prometheus metrics stored in Logz.io. If you have your own Grafana instance and want to use it to visualize metrics from Logz.io, follow the steps below.
+image: https://dytvr9ot2sszz.cloudfront.net/logz-docs/social-assets/docs-social.jpg
+keywords: [metrics, infrastructure monitoring, Prometheus, monitoring, dashboard, observability, logz.io]
+---
+
+### Request Metric API Endpoint Enablement
+
+To enable Metric API access for your accounts, contact your account manager or the [Logz.io support team](mailto:help@logz.io) with your **Metric Account ID**.
+
+### Create API Token for the Metrics Account
+
+1. **Create an API Token:**
+   Generate an [API token](https://docs.logz.io/docs/user-guide/admin/authentication-tokens/api-tokens/) from your Main Log Management account or Log Management sub-account. You will need this token as the `X-API-TOKEN` header in the next step.
+
+2. **Generate a Metric Account Token:**
+   Use the [Create a sub-account API token](https://api-docs.logz.io/docs/logz/create-api-token-request/) endpoint to generate a new token for the metrics account. Include the **Metrics Account ID** in the request body.
+
+### Configure Logz.io Metric Endpoint as a Local Grafana Datasource
+
+To configure the Logz.io metric endpoint as a Prometheus datasource in your Grafana instance:
+
+1. **Navigate to Data Source Configuration:**
+   Add a new datasource of type **Prometheus**.
+
+2. **Configure the Datasource:**
+   - **URL:** Set the URL to `https://api.logz.io/v1/metrics/prometheus` (adjust according to your region).
+   - **Access:** Select **Server (default)** as the access type.
+   - **Custom Headers:** Add a custom header named `X-API-TOKEN` and set its value to the Metric Account API token generated in the previous step.
+
+### Query and Create Dashboards
+
+Once the datasource is configured, you can start creating queries and dashboards.
+
+To use the Prometheus Query API, utilize the endpoints (according to your region) provided under `{LOGZIO_API_URL}/v1/metrics/prometheus`. The supported query APIs are:
+
+- **[Instant Queries](https://prometheus.io/docs/prometheus/latest/querying/api/#instant-queries):**
+  Supports both GET and POST requests to `{LOGZIO_API_URL}/v1/metrics/prometheus/api/v1/query`.
+
+- **[Range Queries](https://prometheus.io/docs/prometheus/latest/querying/api/#range-queries):**
+  Supports both GET and POST requests to `{LOGZIO_API_URL}/v1/metrics/prometheus/api/v1/query_range`.
+
+- **[Series Queries](https://prometheus.io/docs/prometheus/latest/querying/api/#finding-series-by-label-matchers):**
+  Supports both GET and POST requests to `{LOGZIO_API_URL}/v1/metrics/prometheus/api/v1/series`.
+
+- **[Getting Label Names](https://prometheus.io/docs/prometheus/latest/querying/api/#getting-label-names):**
+  Supports both GET and POST requests to `{LOGZIO_API_URL}/v1/metrics/prometheus/api/v1/labels`.
+
+- **[Getting Label Values](https://prometheus.io/docs/prometheus/latest/querying/api/#querying-label-values):**
+  Supports GET requests to `{LOGZIO_API_URL}/v1/metrics/prometheus/api/v1/label/<label_name>/values`.
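+
+As a quick sanity check before wiring up Grafana, you can call the query endpoint directly. A minimal sketch using `curl`; the `api.logz.io` host, the PromQL expression `up`, and the `<API-TOKEN>` placeholder are illustrative assumptions (adjust them to your region and account):
+
+```sh
+# Instant query against the Logz.io Prometheus-compatible endpoint.
+# The X-API-TOKEN header carries the Metric Account API token
+# generated in the steps above.
+curl -s \
+  -H "X-API-TOKEN: <API-TOKEN>" \
+  "https://api.logz.io/v1/metrics/prometheus/api/v1/query?query=up"
+```
+
+A `"status":"success"` JSON response confirms that the token and endpoint work before you add them to the Grafana datasource configuration.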
diff --git a/docs/user-guide/Infrastructure-monitoring/log-correlations/_category_.json b/docs/user-guide/Infrastructure-monitoring/log-correlations/_category_.json
index eef235c9..f4f7441d 100644
--- a/docs/user-guide/Infrastructure-monitoring/log-correlations/_category_.json
+++ b/docs/user-guide/Infrastructure-monitoring/log-correlations/_category_.json
@@ -1,6 +1,6 @@
{
  "label": "Log Correlations",
-  "position": 6,
+  "position": 7,
  "link": {
    "type": "generated-index",
    "description": "Your Metrics account offers several ways to correlate data between your Logz.io Infrastructure Monitoring and Log Management accounts"
diff --git a/docs/user-guide/admin/sso/single-sign-on.md b/docs/user-guide/admin/sso/single-sign-on.md
index f6c0bee3..e98af4c9 100644
--- a/docs/user-guide/admin/sso/single-sign-on.md
+++ b/docs/user-guide/admin/sso/single-sign-on.md
@@ -16,7 +16,7 @@ Single Sign-On (SSO) allows you to manage access to your Logz.io account using a
## How is access managed with SSO?
-:::important
+:::caution important
To use the ‘Sign in with SSO’ button on Logz.io’s login page, **your initial login** must be performed through the SSO identity service provider.
:::
diff --git a/docs/user-guide/cloud-siem/integrations/siemplify.md b/docs/user-guide/cloud-siem/integrations/siemplify.md
index 74daaca4..9bb71af3 100644
--- a/docs/user-guide/cloud-siem/integrations/siemplify.md
+++ b/docs/user-guide/cloud-siem/integrations/siemplify.md
@@ -29,7 +29,7 @@ Siemplify is an industry-leading Security Orchestration, Automation & Response (
**Before you begin, you'll need**:
* An active account with Siemplify.
-* An active account with Logz.io.
+* An active Logz.io account.
* A valid [Logz.io API](https://app.logz.io/#/dashboard/settings/manage-tokens/api) token. Contact support if your account doesn't have one.
### 1. Add a Logz.io instance to your Siemplify workspace
diff --git a/docs/user-guide/data-hub/drop-filters/drop-fiters-logs.md b/docs/user-guide/data-hub/drop-filters/drop-fiters-logs.md
index dedb0f72..6294bed3 100644
--- a/docs/user-guide/data-hub/drop-filters/drop-fiters-logs.md
+++ b/docs/user-guide/data-hub/drop-filters/drop-fiters-logs.md
@@ -51,6 +51,6 @@ Confirm the settings by checking the acknowledgment box and clicking **Apply fil
You can create and manage up to 10 drop filters per account.
-:::important Note
+:::caution note
When restoring logs from an archive, consider temporarily deactivating some filters. This ensures that all logs are indexed and visible in your OpenSearch Dashboards.
:::
\ No newline at end of file
diff --git a/docs/user-guide/explore/new-explore.md b/docs/user-guide/explore/new-explore.md
index c684b8d4..e02080b1 100644
--- a/docs/user-guide/explore/new-explore.md
+++ b/docs/user-guide/explore/new-explore.md
@@ -7,80 +7,84 @@ keywords: [logz.io, dashboard, explore, logs, metrics, traces, analytics, log an
slug: /user-guide/new-explore/
---
-Explore lets you monitor your logs, metrics, and traces in one unified dashboard. It offers a quick and easy way to identify and debug issues quickly and effectively.
+Explore provides a unified dashboard for monitoring your data, offering a quick and efficient way to identify and debug issues. Designed for investigating and analyzing large data volumes, Explore allows you to use filters, queries, and searches to pinpoint and delve into problems effortlessly.
-## Explore Dashboards Overview
+![Explore dashboard](https://dytvr9ot2sszz.cloudfront.net/logz-docs/explore-dashboard/explore-dashboard-aug6.png)
-Explore is designed to investigate and analyze massive volumes of data quickly and easily. Use filters or the auto-complete syntax tool to find the needed logs and drill into them using the quick view panel.
-![Explore dashboard](https://dytvr9ot2sszz.cloudfront.net/logz-docs/explore-dashboard/explore-dashboard-jul22.png)
+### Simple Search / Advanced (Lucene)
+Click on the dropdown menu to switch between Simple Search and Advanced Search, a Lucene query-based search:
-### 1. Simple Search / Advanced (Lucene)
+* **Simple Search**: An intuitive search with auto-complete functionality. It streamlines your search process and enables faster access to data.
-Simple Search is the default query option, offering an intuitive experience with auto-complete functionality. It streamlines your search process and enables faster access to data.
+Build your query by selecting fields, parameters, and conditions. To add a value that doesn't appear in your logs, type its name and click on the + sign. You can also add free text to your search, which will convert it into a Lucene query.
-Start typing to see all of the available fields you can use in your query. Select the operator you want to apply, and the available values will appear.
+![Smart Search gif](https://dytvr9ot2sszz.cloudfront.net/logz-docs/explore-dashboard/simple-search-aug6.gif)
-![Smart Search gif](https://dytvr9ot2sszz.cloudfront.net/logz-docs/explore-dashboard/search-bar-jul22.gif)
+* **Advanced (Lucene)**: Use advanced text querying for log searches. You can search for free text by typing the text string you want to find; for example, error will return all words containing this string, and using quotation marks, "error", will return only the specific word you're searching for.
+![Lucene Search gif](https://dytvr9ot2sszz.cloudfront.net/logz-docs/explore-dashboard/advanced-search-aug6.gif)
-To add a value that doesn't appear in your logs, type its name and click on the + sign.
-add-value
-Click on the dropdown menu to switch between Simple Search and Advanced Search, a Lucene query-based search.
+### Filters
-### 2. Filters
+Filters make it easy to refine and narrow your search. Start by selecting the account you want to filter. Then, click on a field to see its available parameters. Choose the values to include in your view or uncheck to remove them.
+All visible fields appear on the left side, including exceptions (if any) and special fields that cannot be filtered but can be added to the table or used as a **field exists** filter.
-Filters make it easy to refine and narrow your search. First, select the accounts you want to filter. Then, click on a field to see its available parameters. Choose the values you want to include in your view, or uncheck them to remove them. You can use the search bar to quickly find specific fields.
-Special fields are located at the top of the list. These fields cannot be filtered but can be added to the table or used as a **field exists** filter.
+You can pin up to three custom fields by hovering over them and clicking the star icon.
-Additionally, you can pin up to three custom fields to the top of the list by hovering over them and clicking the star icon.
-explore-fields
+explore-fields
+### Graph View
-### 3. Graph View
+Visualize trends over time and group data to support your investigations. Hover over the graph to see additional details about each data point, and click and drag to focus on specific time frames or data points.
-Visualize trends over time and group data based on your preferred categories. Hover over the graph for additional details about each data point.
+You can enlarge or reduce the size of the graph by clicking the arrow button at the top right.
-Use the arrow button at the top right of the chart to minimize or expand the graph view.
+graph-view
-graph-view
+### Exceptions
-### 4. Choose Time Frame
+Logz.io Exceptions automatically identifies and highlights exceptions in Explore.
-The default time frame in Explore is the last 15 minutes.
+You can see the number of exceptions detected for every query you run. Click this button to open the Exception quick view menu for a detailed view of the exceptions found.
+
+Learn more about [Exceptions](https://docs.logz.io/docs/user-guide/explore/exceptions).
-You can select a custom time frame by clicking the data element and choosing what’s relevant for your overview or investigation.
+### Choose Time Frame
+
+The default time frame in Explore is the last 15 minutes.
+To select a custom time frame, click the time element and choose the period relevant to your overview or investigation.
-### 5. Observability IQ Assistant
+### Observability IQ Assistant
-Click the ✨ icon to activate Observability IQ Assistant, an AI-powered, chat-based interface that lets you engage in a dynamic conversation with your data. Use one of the pre-configured prompts or type your own question to get real-time insights about your metrics, anomalies, trends, and the overall health of your environment.
+Click the ✨ Observability IQ button to activate Observability IQ Assistant, an AI-powered, chat-based interface that lets you engage in a dynamic conversation with your data. Use one of the pre-configured prompts or type your own question to get real-time insights about your metrics, anomalies, trends, and the overall health of your environment.
-![Observability IQ Assistant](https://dytvr9ot2sszz.cloudfront.net/logz-docs/explore-dashboard/iq-jul22.gif)
+![Observability IQ Assistant](https://dytvr9ot2sszz.cloudfront.net/logz-docs/explore-dashboard/iq-aug6.gif)
-### 6. Group By
+### Group By
-The default graph view is set to group by all fields, but you can choose specific fields to focus on from the dropdown menu.
+The default graph view is set to group by all fields, and you can choose specific fields to focus on from the dropdown menu.
-![Smart Search group by](https://dytvr9ot2sszz.cloudfront.net/logz-docs/explore-dashboard/groupby-may21.png)
+smart-search-groupby
-### 7. Table Density
+### Table Density
Click the 1L button to change the table view. Selecting **1 Line** provides a compact view, **2 Lines** displays two lines from the logs, and **Expanded** offers a full log view, presenting all relevant data for easier viewing.
-expand-view
+expand-view
-### 8. Create Alert, Copy Link, Export CSV
+### Create Alert, Copy Link, Export CSV
The ⋮ menu offers additional options for Explore, including:
@@ -88,26 +92,14 @@ The ⋮ menu offers additional options for Explore, including:
* **Copy Link**: Generates a URL with your current view, which you can share with team members. You need to be logged in to Logz.io to view it
* **Export CSV**: Exports up to 50,000 logs to a CSV file, including the timestamp and log message
-side-menu
-
-### 9. Logs Table
-
-Use the Logs Table to view and analyze logs. Access relevant logs and their details quickly, and customize the table by adding or removing columns.
-
-You can expand each log line to view additional details, and clicking on a field or value will open a dedicated menu that will let you:
-
-* Add the field to your table
-* Copy the value
-* Add field to log table
-* Group by field in graph
-* Exclude the value from your view
-* Open Observability IQ to learn more about the chosen field or value
+side-menu
-![Smart Search menu](https://dytvr9ot2sszz.cloudfront.net/logz-docs/explore-dashboard/menu-jul22.png)
+### Logs Table
-### Quick View
+Use the Logs Table to view and analyze logs. Access relevant logs and their details quickly, customizing the table by adding or removing columns.
+Expand each log to view additional details, see the log in JSON format, and add columns to the table. Filter values in or out of your view as needed. Use the Observability IQ Assistant on fields or values to gain more information about them.
-Dive deeper into your logs with Quick View, designed to provide comprehensive insights at a glance. Click the eye icon or anywhere inside a log line to open the detailed view, which offers additional information and context for each log to help you easily identify critical details. You can switch between the log table and JSON view depending on your preferences.
+In the top right corner, choose to view a single log in a new window, view surrounding logs for context, and share the URL of the specific log you're viewing.
-![Explore quick view](https://dytvr9ot2sszz.cloudfront.net/logz-docs/explore-dashboard/quick-view-may21.png)
\ No newline at end of file
+smart-search
\ No newline at end of file
diff --git a/docs/user-guide/k8s-360/unified-helm-chart.md b/docs/user-guide/k8s-360/unified-helm-chart.md
index f1ccf707..8e712aad 100644
--- a/docs/user-guide/k8s-360/unified-helm-chart.md
+++ b/docs/user-guide/k8s-360/unified-helm-chart.md
@@ -9,4 +9,5 @@ keywords: [logz.io, helm, Kubernetes, k8s, helm chart]
The logzio-monitoring Helm Chart ships your Kubernetes telemetry (logs, metrics, traces and security reports) to your Logz.io account.
-{@include: ../../_include/general-shipping/k8s.md}
\ No newline at end of file
+{@include: ../../_include/general-shipping/k8s.md}
+
diff --git a/docs/user-guide/log-management/api-fetcher.md b/docs/user-guide/log-management/api-fetcher.md
index 5aa8a62f..7305a3aa 100644
--- a/docs/user-guide/log-management/api-fetcher.md
+++ b/docs/user-guide/log-management/api-fetcher.md
@@ -268,7 +268,7 @@ docker run --name logzio-api-fetcher \
logzio/logzio-api-fetcher
```
-:::note
+:::info
To run in Debug mode, add the `--level` flag to the command:
```shell
docker run --name logzio-api-fetcher \
diff --git a/docs/user-guide/log-management/cold-tier.md b/docs/user-guide/log-management/cold-tier.md
index fb406671..7c365ae3 100644
--- a/docs/user-guide/log-management/cold-tier.md
+++ b/docs/user-guide/log-management/cold-tier.md
@@ -72,8 +72,7 @@ And more.
To investigate the logs further, you can re-ingest them to your Logz.io account by clicking the **Re-ingest** button.
-
-**Note that the re-ingested data will count against your daily quota and may result in an additional charge if you exceed your account's limit.**
+**Note that the re-ingested data will count against your daily quota and may result in an additional charge if you exceed your account's limit.
Additionally, be aware that the maximum restore duration is limited to 24 hours.** You can check your account usage and daily limit by navigating to [**Settings > Manage accounts**](https://app.logz.io/#/dashboard/settings/manage-accounts). diff --git a/static/manifest.json b/static/manifest.json index edc4bbaa..e622c1ca 100644 --- a/static/manifest.json +++ b/static/manifest.json @@ -1 +1 @@ -{"collectors": [{"id": "Apache-ActiveMQ", "title": "Apache ActiveMQ", "description": "Apache ActiveMQ is an open source message broker with a Java Message Service client. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from various sources.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Distributed Messaging"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/activemq-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Distributed-Messaging/apache-activemq.md"}, {"id": "RabbitMQ", "title": "RabbitMQ", "description": "RabbitMQ is an open-source message-broker software that originally implemented the Advanced Message Queuing Protocol and has since been extended with a plug-in architecture to support Streaming Text Oriented Messaging Protocol, MQ Telemetry Transport, and other protocols. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Distributed Messaging"], "recommendedFor": ["Software Engineer"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/rabbitmq-logo.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "77P29wgQwu1pqCaZFMcwnC"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Distributed-Messaging/rabbitmq.md"}, {"id": "Apache-Kafka", "title": "Apache Kafka", "description": "Apache Kafka is a distributed event store and stream-processing platform.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Distributed Messaging"], "recommendedFor": ["DevOps Engineer"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/kafka.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Distributed-Messaging/apache-kafka.md"}, {"id": "NSQ", "title": "NSQ", "description": "NSQ is a realtime distributed messaging platform designed to operate at scale, handling billions of messages per day. 
Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Distributed Messaging"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/nsq.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Distributed-Messaging/nsq.md"}, {"id": "Apache-Storm", "title": "Apache Storm", "description": "This integration allows you to send logs from your Apache Storm server instances to your Logz.io account.", "productTags": ["LOG_ANALYTICS"], "osTags": ["linux"], "filterTags": ["Distributed Messaging"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/apache-storm.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Distributed-Messaging/apache-storm.md"}, {"id": "Filebeat-data", "title": "Filebeat", "description": "Filebeat is often the easiest way to get logs from your system to Logz.io. Logz.io has a dedicated configuration wizard to make it simple to configure Filebeat. If you already have Filebeat and you want to add new sources, check out our other shipping instructions to copy&paste just the relevant changes from our code examples.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/beats.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/filebeat.md"}, {"id": "uWSGI-data", "title": "uWSGI", "description": "uWSGI is a software application that aims at developing a full stack for building hosting services. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/uwsgi-logo1.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/uWSGI-telegraf.md"}, {"id": "NVIDIA-data", "title": "NVIDIA", "description": "NVIDIA System Management Interface (nvidia-smi) is a command line utility, based on top of the NVIDIA Management Library (NVML), intended to aid in the management and monitoring of NVIDIA GPU devices. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/nvidia.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/nvidia.md"}, {"id": "Apache-Aurora-data", "title": "Apache Aurora", "description": "Collect Aurora metrics using Telegraf", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aurora-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/apache-aurora.md"}, {"id": "Vector-data", "title": "Vector", "description": "Vector by Datadog is a lightweight, ultra-fast tool for building observability pipelines. 
Deploy this integration to send logs from your Vector tools to your Logz.io account.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/vector.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/vector.md"}, {"id": "Hashicorp-Consul-data", "title": "Hashicorp Consul", "description": "This project lets you configure the OpenTelemetry collector to send your Prometheus-format metrics from Hashicorp Consul to Logz.io.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/consul-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/consul.md"}, {"id": "Axonius-data", "title": "Axonius", "description": "This integration sends system logs from your Axonius platform to Logz.io.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/axonius.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/axonius.md"}, {"id": "Salesforce-Commerce-Cloud-data", "title": "Salesforce Commerce Cloud", "description": "Salesforce Commerce Cloud is a scalable, cloud-based software-as-a-service (SaaS) ecommerce platform. This integration allows you to collect data from Salesforce Commerce Cloud and send it to your Logz.io account.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/salesforce-commerce-cloud-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/salesforce-commerce-cloud.md"}, {"id": "BigBlueButton-data", "title": "BigBlueButton", "description": "BigBlueButton is a free software web conferencing system for Linux servers. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/bigbluebutton-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/bigbluebutton.md"}, {"id": "invoke-restmethod-data", "title": "Invoke RestMethod", "description": "Invoke-RestMethod is a command to interact with REST APIs in PowerShell. 
Invoke-RestMethod is a quick and easy way to test your configuration or troubleshoot your connectivity to Logz.io.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/Invoke-RestMethod.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/invoke-restmethod.md"}, {"id": "Aiven-data", "title": "Aiven", "description": "Aiven is a cloud service provider that specializes in managed open-source database, messaging, and event streaming solutions.", "productTags": ["LOG_ANALYTICS"], "osTags": [], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aiven-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/aiven.md"}, {"id": "confluent", "title": "Confluent Cloud", "description": "This integration allows you to ship Confluent logs to Logz.io using Cloud HTTP Sink.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/confluent.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/confluent.md"}, {"id": "Fluent-Bit-data", "title": "Fluent Bit", "description": "Fluent Bit is an open source Log Processor and Forwarder which allows you to collect any data like metrics and logs from different sources. This integration allows you to send logs from Fluent Bit running as a standalone app and forward them to your Logz.io account.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/fluent-bit.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/fluent-bit.md"}, {"id": "Redfish-data", "title": "Redfish", "description": "DMTF's Redfish is a standard designed to deliver simple and secure management for converged, hybrid IT and the Software Defined Data Center (SDDC).Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/redfish-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/redfish.md"}, {"id": "Beats-data", "title": "Beats", "description": "Beats is an open platform that allows you to send data from hundreds or thousands of machines and systems. You can send data from your Beats to Logz.io to add a layer of observability to identify and resolve issues quickly.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/beats.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/beats.md"}, {"id": "cURL-data", "title": "cURL", "description": "cURL is a command line utility for transferring data. 
cURL is a quick and easy way to test your configuration or troubleshoot your connectivity to Logz.io.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Other", "Most Popular"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/curl.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/curl.md"}, {"id": "Rsyslog-data", "title": "Rsyslog", "description": "Rsyslog is an open-source software utility used on most UNIX and Unix-like computer systems. It offers a great lightweight service to consolidate logs. With Logz.io, you can monitor these logs, identify if and when issues arise, and solve them before they impact your customers.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Other", "Most Popular"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/linux.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/rsyslog.md"}, {"id": "Intercom-data", "title": "Intercom", "description": "Intercom is a messaging platform with bots, apps, product tours and oher features. Deploy this integration to ship Intercom events from your Intercom account to Logz.io using webhooks.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/intercom.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/intercom.md"}, {"id": "Sysmon-data", "title": "Sysmon (System Monitor) via Winlogbeat", "description": "Sysmon (System Monitor) is a Windows system service that monitors and logs system activity of the Windows event log. It tracks process creations, network connections, and changes to file creation time.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/windows.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/sysmon.md"}, {"id": "Salesforce-data", "title": "Salesforce", "description": "Salesforce is a customer relationship management solution. The Account sObject is an abstraction of the account record and holds the account field information in memory as an object. This integration allows you to collect sObject data from Salesforce and send it to your Logz.io account.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/salesforce-commerce-cloud-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/salesforce.md"}, {"id": "BUNNY-NET-data", "title": "BUNNY.NET", "description": "BUNNY.NET is a content delivery network offering features and performance with a fast global network. 
This document describes how to send system logs from your bunny.net pull zones to Logz.io.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/bunny.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/bunny-net.md"}, {"id": "OpenTelemetry-data", "title": "OpenTelemetry", "description": "OpenTelemetry is a collection of APIs, SDKs, and tools to instrument, generate, collect, and export telemetry data, including logs, metrics, and traces. Logz.io helps you identify anomalies and issues in the data so you can resolve them quickly and easily.", "productTags": ["LOG_ANALYTICS", "METRICS", "TRACING"], "osTags": ["windows", "linux"], "filterTags": ["Other", "Most Popular"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/opentelemetry-icon-color.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "2Q2f3D9WiUgMIyjlDXi0sA"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/opentelemetry.md"}, {"id": "Dovecot-data", "title": "Dovecot", "description": "Dovecot is an open-source IMAP and POP3 server for Unix-like operating systems, written primarily with security in mind. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/dovecot.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/dovecot.md"}, {"id": "Youtube-data", "title": "Youtube", "description": "Youtube is an online video sharing and social media platform. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/youtube-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/youtube.md"}, {"id": "Microsoft-Graph-data", "title": "Microsoft Graph", "description": "Microsoft Graph is a RESTful web API that enables you to access Microsoft Cloud service resources. 
This integration allows you to collect data from Microsoft Graph API and send it to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/graph-api-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/microsoft-graph.md"}, {"id": "Tengine-data", "title": "Tengine", "description": "Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/tengine-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/tengine.md"}, {"id": "cadvisor", "title": "cAdvisor", "description": "This integration lets you send cAdvisor metrics to Logz.io.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other", "Most Popular"], "logo": "https://dytvr9ot2sszz.cloudfront.net/logz-docs/shipper-logos/cadvisor.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/cadvisor.md"}, {"id": "FPM-data", "title": "FPM", "description": "This integration sends Prometheus-format PHP-FPM metrics to Logz.io.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/phpfpm-logo.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "55uVoiaFwAreNAf7DojQZN"}, {"type": "GRAFANA_ALERT", "id": "1A2NfkQQprZqbtzQOVrcO7"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/fpm.md"}, {"id": "Bond-data", "title": "Bond", "description": "This integration allows you to collects metrics from all bond interfaces in your network. 
Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/bond-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/bond.md"}, {"id": "Microsoft-365-data", "title": "Microsoft 365", "description": "Deploy this integration to send Unified Audit Logging logs from Microsoft 365 to Logz.io.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/office365.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/microsoft-365.md"}, {"id": "prometheus-alerts-migrator", "title": "Prometheus Alerts Migrator", "description": "This Helm chart deploys the Prometheus Alerts Migrator as a Kubernetes controller, which automates the migration of Prometheus alert rules to Logz.io's alert format, facilitating monitoring and alert management in a Logz.io integrated environment.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://dytvr9ot2sszz.cloudfront.net/logz-docs/shipper-logos/prometheusio-icon.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/prometheus-alerts-migrator.md"}, {"id": "Burrow-data", "title": "Burrow", "description": "Burrow is a monitoring application for Apache Kafka that monitors committed offsets for all consumers and calculates the status of those consumers on demand. It automatically monitors all consumers and their consumed partitions.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/kafka.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/burrow.md"}, {"id": "Phusion-Passenger-data", "title": "Phusion Passenger", "description": "Phusion Passenger is a free web server and application server with support for Ruby, Python and Node.js. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/phfusion-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/phusion-passenger.md"}, {"id": "Logstash-data", "title": "Logstash", "description": "Logstash is an open-source server-side data processing pipeline. This integration can ingest data from multiple sources. With Logz.io, you can monitor Logstash instances and quickly identify if and when issues arise.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/logstash_temp.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/logstash.md"}, {"id": "Mailchimp-data", "title": "Mailchimp", "description": "Mailchimp is the All-In-One integrated marketing platform for small businesses. 
Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/mailchimp.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/mailchimp.md"}, {"id": "Heroku-data", "title": "Heroku", "description": "Heroku is a platform as a service (PaaS) that enables developers to build, run, and operate applications entirely in the cloud. This integration allows you to send logs from your Heroku applications to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": [], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/heroku.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/heroku.md"}, {"id": "Jaeger-data", "title": "Jaeger", "description": "Jaeger is an open-source software that can help you monitor and troubleshoot problems on microservices. Integrate Jaeger with Logz.io to gain more observability into your data, identify if and when issues occur, and resolve them quickly and easily.", "productTags": ["TRACING"], "osTags": ["windows", "linux"], "filterTags": ["Other", "Most Popular"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/jaeger.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/jaeger.md"}, {"id": "IPMI-data", "title": "IPMI", "description": "IPMI is a standardized computer system interface used by system administrators to manage a computer system and monitor its operation. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/ipmi.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/ipmi.md"}, {"id": "Telegraf", "title": "Telegraf", "description": "This integration lets you send Prometheus-format metrics to Logz.io.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other", "Most Popular"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/mascot-telegraf.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "32X5zm8qW7ByLlp1YPFkrJ"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/telegraf.md"}, {"id": "Fluentd-data", "title": "Fluentd", "description": "Fluentd is a data collector, which unifies the data collection and consumption. This integration allows you to use Fluentd to send logs to your Logz.io account.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Other", "Most Popular"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/fluentd.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/fluentd.md"}, {"id": "Disque-data", "title": "Disque", "description": "Disque is a distributed message broker. 
Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/disque-telegraf.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/disque.md"}, {"id": "Prometheus-remote-write", "title": "Prometheus Remote Write", "description": "This integration lets you send Prometheus-format metrics to Logz.io.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other", "Most Popular"], "logo": "https://dytvr9ot2sszz.cloudfront.net/logz-docs/shipper-logos/prometheusio-icon.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/prometheus.md"}, {"id": "Service-Performance-Monitoring-App360", "title": "App360", "description": "This integration allows you to configure App360 with OpenTelemetry collector and send data from your OpenTelemetry installation to Logz.io.", "productTags": ["TRACING"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/span-metrics.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "40ZpsSfzfGhbguMYoxwOqm"}, {"type": "GRAFANA_DASHBOARD", "id": "5PFq9YHx2iQkwVMLCMOmjJ"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/App360/App360.md"}, {"id": "internet-information-services", "title": "Internet Information Services (IIS)", "description": "Internet Information Services (IIS) for Windows\u00ae Server is a flexible, secure and manageable Web server for hosting on the Web. This integration allows you to send logs from your IIS services to your Logz.io account.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Compute"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/iis.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Compute/internet-information-services.md"}, {"id": "Apache-HTTP-Server", "title": "Apache HTTP Server", "description": "The Apache HTTP Server, colloquially called Apache, is a free and open-source cross-platform web server. 
This integration sends Apache HTTP server logs and metrics to Logz.io.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Compute"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/apache-http-logo.png", "bundle": [{"type": "OSD_DASHBOARD", "id": "5LWLzuSeGMqXVj5p8cP1NX"}, {"type": "LOG_ALERT", "id": "6b8UfKeSHCc4SWxHphMd0O, 5jTENQYn5PpgiZWvezI0Cp, 6OAv4ozj4eRi7NSHgJawl1, 7EgPOsqIuoBUCwcHpq57L3, 6NmeR0XGMoTTanwU82oCrD"}, {"type": "GRAFANA_DASHBOARD", "id": "28VJXdtDINv7w2T3l8oOO9"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Compute/apache-http-server.md"}, {"id": "Telegraf-sysmetrics", "title": "Telegraf System Metrics", "description": "Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/telegraf-logo.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "32X5zm8qW7ByLlp1YPFkrJ"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Compute/telegraf-sysmetrics.md"}, {"id": "VMware-vSphere", "title": "VMware vSphere", "description": "VMware vSphere is VMware's cloud computing virtualization platform. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Compute"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/vsphere-logo.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "VpeHVDlhfo1mF22Lc0UKf"}, {"type": "GRAFANA_DASHBOARD", "id": "6CpW1YzdonmTQ8uIXAN5OL"}, {"type": "GRAFANA_DASHBOARD", "id": "3AvORCMPVJd8948i9oKaBO"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Compute/vmware-vsphere.md"}, {"id": "Apache-Tomcat", "title": "Apache Tomcat", "description": "Apache Tomcat is a web server and servlet container that allows the execution of Java Servlets and JavaServer Pages (JSP) for web applications.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Compute"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/tomcat-logo.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "1QIverGwIdtlC5ZbKohyvj"}, {"type": "GRAFANA_DASHBOARD", "id": "6J2RujMalRK3oC4y0r88ax"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Compute/apache-tomcat.md"}, {"id": "Ping-statistics-synthetic", "title": "Ping Statistics", "description": "Deploy this integration to collect metrics of ping statistics collected from your preferred web addresses and send them to Logz.io.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Synthetic Monitoring"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/ping-logo.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "1rNO8llFw8Cm9N8U3M3vCQ"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Synthetic-Monitoring/ping-statistics.md"}, {"id": "synthetic-link-detector-synthetic", "title": "Synthetic Link Detector", "description": "Deploy this integration to collect data on broken links in a web page, and to get additional data about the links.", "productTags": ["METRICS"], "osTags": 
["windows", "linux"], "filterTags": ["Synthetic Monitoring"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/link.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "4l4xVZhvqsrJWO7rZwOxgx"}, {"type": "GRAFANA_DASHBOARD", "id": "1NiBMzN5DvQZ8BjePpUtvQ"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Synthetic-Monitoring/synthetic-link-detector.md"}, {"id": "API-status-metrics-synthetic", "title": "API Status Metrics", "description": "Deploy this integration to collect API status metrics of user API and send them to Logz.io.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Synthetic Monitoring"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/apii.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "1RCzCjjByhyz0bJ4Hmau0y"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Synthetic-Monitoring/api-status-metrics.md"}, {"id": "AWS-DynamoDB", "title": "AWS DynamoDB", "description": "This integration sends your Amazon DynamoDB logs and metrics to Logz.io.", "productTags": ["METRICS", "LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Database"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-dynamodb.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "1SCWsYpcgBc9DmjM1vELkf"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-dynamodb.md"}, {"id": "Lambda-extension-go", "title": "Traces from Go on AWS Lambda using OpenTelemetry", "description": "This integration to auto-instrument your Go application running on AWS Lambda and send the traces to your Logz.io account.", "productTags": ["TRACING"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Compute"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/AWS-Lambda.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-lambda-extension-go.md"}, {"id": "AWS-CloudTrail", "title": "AWS CloudTrail", "description": "AWS Cloudtrail enables governance, compliance, operational auditing, and risk auditing of your Amazon Web Services account. 
Integrate it with Logz.io to monitor your Cloudtrail logs and metrics and know if and when issues arise.", "productTags": ["LOG_ANALYTICS", "METRICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Security", "Most Popular"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-cloudtrail.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-cloudtrail.md"}, {"id": "AWS-Kafka", "title": "Amazon Managed Streaming for Apache Kafka (MSK)", "description": "Send your Amazon Managed Streaming for Apache Kafka (MSK) metrics to Logz.io.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Distributed Messaging"], "logo": "https://dytvr9ot2sszz.cloudfront.net/logz-docs/shipper-logos/aws-msk.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "7bHNddlAK5q8Iya7xIhbbU"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-kafka.md"}, {"id": "AWS-FSx", "title": "AWS FSx", "description": "This integration sends your Amazon FSx logs and metrics to Logz.io.", "productTags": ["METRICS", "LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Data Store"], "logo": "https://dytvr9ot2sszz.cloudfront.net/logz-docs/shipper-logos/aws-fsx.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "6rVrCJsVXljHWg7wZo51HT"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-fsx.md"}, {"id": "AWS-cross-account", "title": "AWS Cross Account", "description": "Deploy this integration to simultaneously ship logs from multiple AWS accounts to Logz.io.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Access Management"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-cloudwatch.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-cross-account.md"}, {"id": "Amazon-Classic-ELB", "title": "AWS Classic ELB", "description": "Send your AWS Classic ELB logs and metrics to Logz.io.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Load Balancer"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-classic-elb.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "5oFBj0BIKo4M5XLZpwjSgl"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-classic-elb.md"}, {"id": "AWS-ElastiCache-Redis", "title": "AWS ElastiCache for Redis", "description": "Send your Amazon ElastiCache for Redis metrics to Logz.io.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Memory Caching"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-redis-logo.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "2iTJV7AkvtHDJauaEzYobs"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-elasticache-redis.md"}, {"id": "AWS-MSK", "title": "AWS MSK", "description": "This integration sends your Amazon MSK logs and metrics to Logz.io.", "productTags": ["METRICS", "LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Distributed Messaging"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-msk.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "2EGM4H9wch68bVy1vm4oxb"}], "dataLink": 
"https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-msk.md"}, {"id": "AWS-Amplify", "title": "AWS Amplify", "description": "This is an integration that collects Amplify access logs and sends them to Logz.io.", "productTags": ["LOG_ANALYTICS"], "osTags": [], "filterTags": ["AWS", "CI/CD"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/amplify.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-amplify.md"}, {"id": "AWS-SES", "title": "AWS SES", "description": "This integration creates a Kinesis Data Firehose delivery stream that links to your Amazon SES metrics stream and then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-ses.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "6YXSlRl6RxMuGPiTTO9NHg"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-ses.md"}, {"id": "AWS-Network-ELB", "title": "AWS Network ELB", "description": "Send your AWS Network ELB logs and metrics to Logz.io.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Load Balancer"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/elb-network.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "5pihdWdmBYQ1i7AbU9po2R"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-network-elb.md"}, {"id": "AWS-mq", "title": "AWS MQ", "description": "This integration sends your Amazon MQ logs and metrics to Logz.io.", "productTags": ["METRICS", "LOG_ANALYTICS"], "osTags": [], "filterTags": ["AWS", "Distributed Messaging"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-mq.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "1xglfXxBurNsVZIla5zRnS"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-mq.md"}, {"id": "AWS-RDS", "title": "AWS RDS", "description": "This integration sends AWS RDS logs and metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Database"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-rds.svg", "bundle": [{"type": "OSD_DASHBOARD", "id": "2IzSk7ZLwhRFwaqAQg4e2U"}, {"type": "GRAFANA_DASHBOARD", "id": "5azSSei1AhiJPCV7yptVI7"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-rds.md"}, {"id": "Amazon-ElastiCache", "title": "AWS ElastiCache", "description": "Send your Amazon ElastiCache metrics to Logz.io.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Memory Caching"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/Amazon-ElastiCache.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-elasticache.md"}, {"id": "AWS-EBS", "title": "AWS EBS", "description": "Send your Amazon EBS metrics to Logz.io.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Data Store"], "logo": 
"https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-ebs.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "6WqwxluZ76GXXPut0GHGKH"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-ebs.md"}, {"id": "AWS-AppRunner", "title": "AWS AppRunner", "description": "Send your Amazon AppRunner metrics to Logz.io", "productTags": ["METRICS"], "osTags": [], "filterTags": ["AWS", "Compute"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-fusion.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-apprunner.md"}, {"id": "AWS-Security-Hub", "title": "AWS Security Hub", "description": "This integration ships events from AWS Security Hub to Logz.io. It will automatically deploy resources to your AWS Account.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-security-hub.md"}, {"id": "aws-vpn", "title": "AWS VPN", "description": "This integration creates a Kinesis Data Firehose delivery stream that links to your Amazon VPN metrics stream and then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-vpn.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "4nSubW6qKSqV8Pq367JQca"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-vpn.md"}, {"id": "aws-SQS", "title": "AWS SQS", "description": "This integration sends your Amazon SQS logs and metrics to Logz.io.", "productTags": ["METRICS", "LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Distributed Messaging"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-sqs.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "1pEmJtP0bwd5WuuAfEe5cc"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-sqs.md"}, {"id": "Amazon-EC2-Auto-Scaling", "title": "AWS EC2 Auto Scaling", "description": "This integration sends your Amazon EC2 Auto Scaling logs and metrics to Logz.io.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Compute"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-ec2-auto-scaling.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "2VNLppOm4XOFwVouv8dorr"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-ec2-auto-scaling.md"}, {"id": "AWS-Cost-and-Usage-Reports", "title": "AWS Cost and Usage Reports", "description": "AWS Cost and Usage Reports function tracks your AWS usage and provides estimated charges associated with your account.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws.svg", "bundle": [], "dataLink": 
"https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-cost-and-usage-report.md"}, {"id": "AWS-SNS", "title": "AWS SNS", "description": "This integration creates a Kinesis Data Firehose delivery stream that links to your Amazon SNS metrics stream and then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Distributed Messaging"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-sns.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "3G7HxOI10AvzpqGXQNfawA"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-sns.md"}, {"id": "AWS-Control-Tower", "title": "AWS Control Tower", "description": "AWS Control Tower is a tool to control a top-level summary of policies applied to the AWS environment.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Access Management"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-control-tower.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "7bHNddlAK5q8Iya7xIhbbU"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-control-tower.md"}, {"id": "AWS-EFS", "title": "AWS EFS", "description": "Send your Amazon EFS metrics to Logz.io.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-efs.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "7IUpQgVmcbkHV8zAGuLHIL"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-efs.md"}, {"id": "AWS-Redshift", "title": "AWS Redshift", "description": "This integration creates a Kinesis Data Firehose delivery stream that links to your Amazon Redshift metrics stream and then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/Amazon-Redshift.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-redshift.md"}, {"id": "AWS-Kinesis-Firehose", "title": "AWS Kinesis Data Firehose", "description": "This integration sends your Amazon Kinesis Data Firehose logs and metrics to Logz.io.", "productTags": ["METRICS", "LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Distributed Messaging"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/Amazon-Kinesis.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "6c42S4dUE98HajLbiuaShI"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-kinesis-firehose.md"}, {"id": "AWS-Lambda", "title": "AWS Lambda", "description": "AWS Lambda serverless compute service runs code in response to events and automatically manages compute resources. 
Send these events to Logz.io to identify anomalies and issues and quickly solve them.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Compute"], "recommendedFor": ["DevOps Engineer"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/lambda-nodejs2.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "2YLu810AXPlVwzQen8vff1"}, {"type": "GRAFANA_ALERT", "id": "4iuPoRsdogZKww8d0NO7er"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-lambda.md"}, {"id": "AWS-S3-Access", "title": "AWS S3 Access", "description": "Amazon S3 Access Logs provide detailed records about requests that are made to your S3 bucket. This integration allows you to send these logs to your Logz.io account.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-s3.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-s3-access.md"}, {"id": "AWS-NAT", "title": "AWS NAT", "description": "This integration creates a Kinesis Data Firehose delivery stream that links to your Amazon NAT metrics stream and then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-nat.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "1EhgOtbCtQxzsWh6FJjme8"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-nat.md"}, {"id": "AWS-ECS", "title": "AWS ECS", "description": "Send your Amazon ECS logs and metrics to Logz.io.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Compute", "Containers"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-ecs.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "4pY46CjyNMoHWGB3gjgQWd"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-ecs.md"}, {"id": "AWS-CloudFront", "title": "AWS CloudFront", "description": "Send your Amazon CloudFront metrics to Logz.io.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": [], "filterTags": ["AWS", "Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-cloudfront.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "3MJWDTivgQCNz3DQIj3Kry"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-cloudfront.md"}, {"id": "AWS-Route-53", "title": "AWS Route 53", "description": "This integration sends your Amazon Route 53 logs and metrics to Logz.io.", "productTags": ["METRICS", "LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/Amazon-Route-53.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "Tnb9WjjHnI3COgp08Wsin"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-route53.md"}, {"id": "AWS-API-Gateway", "title": "AWS API Gateway", "description": "This integration creates a Kinesis Data Firehose delivery stream that links to your 
Amazon API Gateway metrics stream and then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags.", "productTags": ["METRICS", "LOG_ANALYTICS"], "osTags": [], "filterTags": ["AWS", "Access Management", "Most Popular"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-api-gateway.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "7234Vgs9rITAlaHJH5iqOw"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-api-gateway.md"}, {"id": "AWS-Athena", "title": "AWS Athena", "description": "This integration sends your Amazon Athena logs and metrics to Logz.io.", "productTags": ["METRICS", "LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-athena.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-athena.md"}, {"id": "AWS-EC2", "title": "AWS EC2", "description": "Send your Amazon EC2 logs and metrics to Logz.io.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Compute"], "recommendedFor": ["DevOps Engineer"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-ec2.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "2VNLppOm4XOFwVouv8dorr"}, {"type": "GRAFANA_ALERT", "id": "hWld33IEO6gZMpp2e4vs0"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-ec2.md"}, {"id": "Lambda-extension-node", "title": "Traces from Node.js on AWS Lambda using OpenTelemetry", "description": "Use this integration to auto-instrument your Node.js application running on AWS Lambda and send the traces to your Logz.io account.", "productTags": ["TRACING"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Compute"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/AWS-Lambda.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-lambda-extension-node.md"}, {"id": "Lambda-extensions", "title": "Lambda Extensions", "description": "The Logz.io Lambda extension for logs uses the AWS Extensions API and the AWS Logs API to send your Lambda function logs directly to your Logz.io account.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Compute"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/AWS-Lambda.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-lambda-extensions.md"}, {"id": "AWS-WAF", "title": "AWS WAF", "description": "Ship your AWS WAF logs to Logz.io.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/AWS-WAF.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-waf.md"}, {"id": "AWS-CloudFormation", "title": "AWS CloudFormation", "description": "Send your AWS CloudFormation logs and metrics to Logz.io.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": [], "filterTags": ["AWS", "Network"], "logo": 
"https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-cloudformation.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-cloudformation.md"}, {"id": "AWS-App-ELB", "title": "AWS App ELB", "description": "Send your AWS Application ELB logs and metrics to Logz.io.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": [], "filterTags": ["AWS", "Load Balancer"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-app-elb.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "5BZ6El3juQkCKCIuGm1oyC"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-app-elb.md"}, {"id": "AWS-S3-Bucket", "title": "AWS S3 Bucket", "description": "Amazon S3 stores data within buckets, allowing you to send your AWS logs and metrics to Logz.io. S3 buckets lets you store and access large amounts of data and is often used for big data analytics, root cause analysis, and more.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Data Store", "Other", "Most Popular"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-s3.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "1Pm3OYbu1MRGoELc2qhxQ1"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-s3-bucket.md"}, {"id": "GuardDuty", "title": "GuardDuty", "description": "This integration sends GuardDuty logs.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-guardduty.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-guardduty.md"}, {"id": "AWS-ECS-Fargate", "title": "AWS ECS Fargate", "description": "AWS Fargate is a serverless compute engine for building applications without managing servers.", "productTags": ["LOG_ANALYTICS", "METRICS", "TRACING"], "osTags": [], "filterTags": ["AWS", "Compute", "Containers"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-fargate.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-ecs-fargate.md"}, {"id": "aws-eks", "title": "AWS EKS", "description": "Send Kubernetes logs, metrics and traces to Logz.io.", "productTags": ["LOG_ANALYTICS", "METRICS", "TRACING"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Orchestration"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-eks.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-eks.md"}, {"id": "HAProxy-load", "title": "HAProxy", "description": "HAProxy is a network device, so it needs to transfer logs using the syslog protocol.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Load Balancer"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/haproxy-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Load-Balancer/haproxy.md"}, {"id": "Nginx-load", "title": "Nginx", "description": "Nginx is a web server that can also be used as a reverse proxy, load balancer, mail proxy and HTTP cache. 
Deploy this integration to ship Nginx logs to your Logz.io SIEM account and Nginx metrics, including Plus API, Plus, Stream STS, and VTS.", "productTags": ["LOG_ANALYTICS", "METRICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Load Balancer"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/nginx.svg", "bundle": [{"type": "LOG_ALERT", "id": "5tov4MgrnR6vXZhh1MyuHO"}, {"type": "LOG_ALERT", "id": "63MnOu9ZzkCXdX0KOhXghi"}, {"type": "LOG_ALERT", "id": "4V8BXcfr7noTdtU6EjXp7w"}, {"type": "LOG_ALERT", "id": "2EXnb71ucdTnVolN1PqbM6"}, {"type": "GRAFANA_DASHBOARD", "id": "3HKho6pQhCmEYmwMc4xCeY"}, {"type": "GRAFANA_ALERT", "id": "1Bz57jmzsN7uIiyZLdnNpx"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Load-Balancer/nginx.md"}, {"id": "VPC-network", "title": "VPC", "description": "VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. This integration allows you to send these logs to your Logz.io account.", "productTags": ["LOG_ANALYTICS"], "osTags": [], "filterTags": ["Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/vpc.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Network/vpc.md"}, {"id": "Unbound-network", "title": "Unbound", "description": "Unbound is a validating, recursive, caching DNS resolver. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/unbound-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Network/unbound-telegraf.md"}, {"id": "Juniper-SRX-network", "title": "Juniper SRX", "description": "Juniper SRX is a networking firewall solution and services gateway. If you ship your Juniper firewall logs to your Logz.io Cloud SIEM, you can centralize your security ops and receive alerts about security events logged by Juniper SRX.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/juniper.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Network/juniper-srx.md"}, {"id": "Mcrouter-network", "title": "Mcrouter", "description": "Mcrouter is a memcached protocol router for scaling memcached deployments. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/mcrouter-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Network/mcrouter.md"}, {"id": "WireGuard-network", "title": "WireGuard", "description": "WireGuard is a communication protocol and free and open-source software that implements encrypted virtual private networks, and was designed with the goals of ease of use, high-speed performance, and a low attack surface. 
Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/wireguard-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Network/wireguard.md"}, {"id": "Cloudflare-network", "title": "Cloudflare", "description": "The Cloudflare web application firewall (WAF) protects your internet property against malicious attacks that aim to exploit vulnerabilities such as SQL injection attacks, cross-site scripting, and cross-site forgery requests.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/cloudflare.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Network/cloudflare.md"}, {"id": "Network-devices-network", "title": "Network Devices", "description": "This integration allows you to send logs from your network devices to your Logz.io account.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/network-device.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Network/network-device.md"}, {"id": "OpenVPN-network", "title": "OpenVPN", "description": "OpenVPN is a virtual private network system for secure point-to-point or site-to-site connections.", "productTags": ["METRICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/openvpn.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Network/openvpn.md"}, {"id": "junos-telemetry-interface-network", "title": "Junos Telemetry Interface", "description": "Junos Telemetry Interface (JTI) is a push mechanism for collecting operational metrics to monitor network health, with no scaling limitations. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/juniper.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Network/junos-telemetry-interface.md"}, {"id": "nlnet-labs-name-server-daemon-network", "title": "NLnet Labs Name Server Daemon", "description": "NLnet Labs Name Server Daemon (NSD) is an authoritative DNS name server. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/nsd.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Network/nlnet-labs-name-server-daemon.md"}, {"id": "Synproxy-network", "title": "Synproxy", "description": "Synproxy is a proxy for TCP SYN packets. 
Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/linux.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Network/synproxy.md"}, {"id": "Neptune-apex-iot", "title": "Neptune Apex", "description": "Neptune Apex is an aquarium control system. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["IoT"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/neptune.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/IoT/neptune-apex.md"}, {"id": "Docker", "title": "Docker", "description": "Docker lets you work in standardized environments using local containers, promoting continuous integration and continuous delivery (CI/CD) workflows. With Logz.io you can collect logs and metrics from your Docker environment to gain observability and know if and when issues occur.", "productTags": ["LOG_ANALYTICS", "METRICS"], "recommendedFor": ["DevOps Engineer"], "osTags": ["windows", "linux"], "filterTags": ["Containers", "Most Popular"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/docker.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Containers/docker.md"}, {"id": "oracle-cloud-infrastructure-container-engine-for-kubernetes", "title": "Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE)", "description": "Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE) is a fully-managed, scalable, and highly available service that you can use to deploy your containerized applications to the cloud.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Containers", "Orchestration"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/oke.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Containers/oracle-cloud-infrastructure-container-engine-for-kubernetes.md"}, {"id": "Kubernetes", "title": "Kubernetes", "description": "Kubernetes, also known as K8s, is an open-source system for automating deployments, scaling, and managing containerized applications. 
Integrate your Kubernetes system with Logz.io to monitor your logs, metrics, and traces, gain observability into your environment, and be able to identify and resolve issues with just a few clicks.", "productTags": ["LOG_ANALYTICS", "METRICS", "TRACING"], "recommendedFor": ["DevOps Engineer"], "osTags": ["windows", "linux"], "filterTags": ["Containers", "Most Popular"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/kubernetes.svg", "bundle": [{"type": "OSD_DASHBOARD", "id": "3D1grGcEYB5Oe2feUPImak"}, {"type": "OSD_DASHBOARD", "id": "qryn7oYYoeaBBGMFRvm67"}, {"type": "LOG_ALERT", "id": "1AZRkKc64I12yxAMf2Wyny"}, {"type": "LOG_ALERT", "id": "6H7dfFOPUaHVMIjxdOMASx"}, {"type": "LOG_ALERT", "id": "1F6zSL5me5XJt9Lrjw3vxU"}, {"type": "LOG_ALERT", "id": "2dQHLx0WxmKmLk1kc67Ags"}, {"type": "LOG_ALERT", "id": "3dyFejyivMaZFdudbwKGRG"}, {"type": "GRAFANA_DASHBOARD", "id": "7nILXHYFZbThgTSMObUxkw"}, {"type": "GRAFANA_DASHBOARD", "id": "5TGD77ZKuTiZUXtiM51m6V"}, {"type": "GRAFANA_DASHBOARD", "id": "6pY6DKD0oQJL4sO7bW728"}, {"type": "GRAFANA_DASHBOARD", "id": "5kkUAuEwA0Ygvlgm9iXTHY"}, {"type": "GRAFANA_DASHBOARD", "id": "53g5kSILqoj1T10U1jnvKV"}, {"type": "GRAFANA_DASHBOARD", "id": "5e1xRaDdQnOvs5LCuwKCh5"}, {"type": "GRAFANA_DASHBOARD", "id": "7Cy6DUN78jlKUtMCsbt6GC"}, {"type": "GRAFANA_DASHBOARD", "id": "29HGYsE3kgFEdgJbalTqeY"}, {"type": "GRAFANA_DASHBOARD", "id": "1Hij49FKdnAKVJTjOmpDbH"}, {"type": "GRAFANA_DASHBOARD", "id": "6ThbRK67ZxBGeYwp8n74D0"}, {"type": "GRAFANA_ALERT", "id": "5Ng398K19vXP9197bRV1If"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Containers/kubernetes.md"}, {"id": "Control-plane", "title": "Control Plane", "description": "Control Plane is a hybrid platform that integrates multiple cloud services, such as AWS, GCP, and Azure, providing a unified and flexible environment for developers to build and manage backend applications and services across various public and private clouds.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Containers"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/control-plane.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Containers/control-plane.md"}, {"id": "OpenShift", "title": "OpenShift", "description": "OpenShift is a family of containerization software products developed by Red Hat. Deploy this integration to ship logs from your OpenShift cluster to Logz.io. 
This integration deploys the default DaemonSet, which sends only container logs and ignores all containers in the \"openshift\" namespace.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Containers", "Orchestration"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/openshift.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Containers/openshift.md"}, {"id": "apache-couchdb", "title": "Apache CouchDB", "description": "Apache CouchDB is an open-source document-oriented NoSQL database, implemented in Erlang.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/couchdb.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Data-Store/apache-couchdb.md"}, {"id": "MongoDB-Atlas", "title": "MongoDB Atlas", "description": "MongoDB Atlas is a fully-managed cloud database that handles deploying, managing, and healing your deployments on the cloud service provider of your choice.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/mongoatlas-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Data-Store/mongodb-atlas.md"}, {"id": "etcd", "title": "etcd", "description": "etcd is an open source, distributed, consistent key-value store for shared configuration, service discovery, and scheduler coordination of distributed systems or clusters of machines. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/etcd-logo.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "3Vr8IYt2XR2LEKP6PeVV0r"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Data-Store/etcd.md"}, {"id": "MongoDB", "title": "MongoDB", "description": "MongoDB is a source-available cross-platform document-oriented database program. This integration lets you send logs and metrics from your MongoDB instances to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/mongo-logo.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "13q1IECY8zfnnDXvUq7vvH"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Data-Store/mongodb.md"}, {"id": "ZFS", "title": "ZFS", "description": "ZFS combines a file system with a volume manager. 
Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/zfs-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Data-Store/zfs.md"}, {"id": "Ceph", "title": "Ceph", "description": "Ceph is an open-source, software-defined storage platform that implements object storage on a single distributed computer cluster and provides 3-in-1 interfaces for object-, block-, and file-level storage. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/ceph-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Data-Store/ceph.md"}, {"id": "Elasticsearch", "title": "Elasticsearch", "description": "Elasticsearch is a search engine based on the Lucene library. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/elasticsearch.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Data-Store/elasticsearch.md"}, {"id": "Solr", "title": "Solr", "description": "Solr is an open-source enterprise-search platform, written in Java. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/solr-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Data-Store/solr.md"}, {"id": "Suricata", "title": "Suricata", "description": "Suricata is an open-source intrusion detection and prevention system. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/suricata-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/suricata.md"}, {"id": "Palo-Alto-Networks", "title": "Palo Alto Networks", "description": "Palo Alto Networks provides advanced protection, security and consistency across locations and clouds. 
This integration allows you to send logs from your Palo Alto Networks applications to your Logz.io SIEM account.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Security", "Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/palo-alto-networks.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/palo-alto-networks.md"}, {"id": "Cisco-Meraki", "title": "Cisco Meraki", "description": "This integration allows you to send logs from your Cisco Meraki network devices to your Logz.io SIEM account.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Network", "Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/cisco-meraki-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/cisco-meraki.md"}, {"id": "OpenVAS", "title": "OpenVAS", "description": "These instructions show you how to configure Filebeat to send OpenVAS reports to Logz.io.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/greenbone_icon.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/openvas.md"}, {"id": "ESET", "title": "ESET", "description": "ESET provides anti-virus and firewall solutions. This integration allows you to send ESET logs to your Logz.io SIEM account.", "productTags": ["SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/eset.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/eset.md"}, {"id": "auditbeat", "title": "Auditbeat", "description": "As its name suggests, auditd is a service that audits activities in a Linux environment. It's available for most major Linux distributions.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/linux.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/auditbeat.md"}, {"id": "HashiCorp-Vault", "title": "HashiCorp Vault", "description": "HashiCorp Vault secures, stores, and tightly controls access to tokens, passwords, certificates, API keys, and other secrets in modern computing. This integration allows you to send HashiCorp Vault logs to your Logz.io SIEM account.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/hashicorp-vault.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/hashicorp-vault.md"}, {"id": "Check-Point", "title": "Check Point", "description": "Check Point provides hardware and software products for IT security, including network security, endpoint security, cloud security, mobile security, data security and security management. 
This integration allows you to send Check Point logs to your Logz.io SIEM account.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/check-point.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/check-point.md"}, {"id": "Carbon-Black", "title": "Carbon Black", "description": "Carbon Black enables multi-cloud workload and endpoint threat protection. Connect your Carbon Black to Logz.io to monitor and analyze endpoint security, threat detection, user behavior, software inventory, compliance, and incident response to enhance overall cybersecurity.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/carbon-black.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/carbon-black.md"}, {"id": "x509", "title": "x509", "description": "Deploy this integration to collect X509 certificate metrics from URLs and send them to Logz.io.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/ssl-certificate.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "19AIOkwkFLQCZWmUSINGXT"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/x509.md"}, {"id": "Trivy", "title": "Trivy", "description": "This integration uses the logzio-trivy Helm chart to deploy the Trivy-Operator Helm chart, which scans the cluster and creates Trivy reports, together with a deployment that watches for Trivy reports in the cluster, processes them, and sends them to Logz.io.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/trivy-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/trivy.md"}, {"id": "Zeek", "title": "Zeek", "description": "Zeek is a free and open-source software network analysis framework. This integration allows you to send Zeek logs to your Logz.io SIEM account.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/zeek.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/zeek.md"}, {"id": "Fail2Ban", "title": "Fail2Ban", "description": "Fail2Ban is an intrusion prevention software framework that protects computer servers from brute-force attacks. This integration allows you to send Fail2ban logs to your Logz.io SIEM account.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/fail2ban.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/fail2ban.md"}, {"id": "FortiGate", "title": "FortiGate", "description": "FortiGate units are installed as a gateway or router between two networks. 
This integration allows you to send FortiGate logs to your Logz.io SIEM account.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Network", "Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/fortinet.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/fortigate.md"}, {"id": "McAfee-ePolicy-Orchestrator", "title": "McAfee ePolicy Orchestrator", "description": "McAfee ePolicy Orchestrator (McAfee ePO) software centralizes and streamlines management of endpoint, network, data security, and compliance solutions. This integration allows you to send McAfee ePolicy Orchestrator logs to your Logz.io SIEM account.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/mcafee.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/mcafee-epolicy-orchestrator.md"}, {"id": "Sophos", "title": "Sophos", "description": "Sophos Endpoint is an endpoint protection product that combines antimalware, web and application control, device control and much more. This integration allows you to send logs from your Linux-based Sophos applications to your Logz.io SIEM account.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/sophos-shield.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/sophos.md"}, {"id": "Crowdstrike", "title": "Crowdstrike", "description": "Crowdstrike is a SaaS (software as a service) security solution. Deploy this integration to ship Crowdstrike events from your Crowdstrike account to Logz.io using Fluentd.", "productTags": ["SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/crowdstrike-logo.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/crowdstrike.md"}, {"id": "Wazuh", "title": "Wazuh", "description": "Wazuh is a free, open source and enterprise-ready security monitoring solution for threat detection, integrity monitoring, incident response and compliance. This integration allows you to send Wazuh logs to your Logz.io SIEM account.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/wazuh.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/wazuh.md"}, {"id": "Avast", "title": "Avast", "description": "Avast is a family of cross-platform internet security applications. This topic describes how to send system logs from your Avast Antivirus platform to Logz.io.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/avast.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/avast.md"}, {"id": "Bitdefender", "title": "Bitdefender", "description": "Bitdefender is antivirus software. 
This integration allows you to send Bitdefender logs to your Logz.io SIEM account.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/bitdefender.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/bitdefender.md"}, {"id": "Stormshield", "title": "Stormshield", "description": "Stormshield provides cyber-security solutions. This integration allows you to send logs from your Stormshield applications to your Logz.io SIEM account.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Network", "Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/stormshield.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/stormshield.md"}, {"id": "ModSecurity", "title": "ModSecurity", "description": "ModSecurity, sometimes called Modsec, is an open-source web application firewall. This integration allows you to send ModSecurity logs to your Logz.io SIEM account.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/modsec.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/modsecurity.md"}, {"id": "Windows-Defender", "title": "Windows Defender via Winlogbeat", "description": "This integration enables you to send Windows Defender events to Logz.io using Winlogbeat.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/windows-defender.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/windows-defender.md"}, {"id": "Falco", "title": "Falco", "description": "Falco is a CNCF-approved container security and Kubernetes threat detection engine that logs illegal container activity at runtime. Shipping your Falco logs to your Cloud SIEM can help you monitor your Kubernetes workloads for potentially malicious behavior. This can help you catch attempts to remove logging data from a container, to run recon tools inside a container, or to add potentially malicious repositories to a container.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/falco-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/falco.md"}, {"id": "OSSEC", "title": "OSSEC", "description": "OSSEC is a multiplatform, open source and free Host Intrusion Detection System (HIDS). This integration allows you to send OSSEC logs to your Logz.io SIEM account.", "productTags": ["SIEM", "LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/ossec.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/ossec.md"}, {"id": "Cynet", "title": "Cynet", "description": "Cynet is a cybersecurity asset management platform. 
This topic describes how to send system logs from your Cynet platform to Logz.io.", "productTags": ["SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/cynet.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/cynet.md"}, {"id": "Cisco-SecureX", "title": "Cisco SecureX", "description": "Cisco SecureX connects the breadth of Cisco's integrated security portfolio and your infrastructure. This integration allows you to collect data from Cisco SecureX API and send it to your Logz.io account.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/securex-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/cisco-securex.md"}, {"id": "SentinelOne", "title": "SentinelOne", "description": "The SentinelOne platform delivers the defenses to prevent, detect, and undo known and unknown threats. This integration allows you to send logs from your SentinelOne applications to your Logz.io SIEM account.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/sentintelone-icon.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/sentinelone.md"}, {"id": "Alcide-kAudit", "title": "Alcide kAudit", "description": "Alcide kAudit is a security service for monitoring Kubernetes audit logs and easily identifying abnormal administrative activity and compromised Kubernetes resources.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/alcide.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/alcide-kaudit.md"}, {"id": "Cisco-ASA", "title": "Cisco ASA", "description": "Cisco ASA is a security device that combines firewall, antivirus, intrusion prevention, and virtual private network (VPN) capabilities.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Network", "Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/cisco.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/cisco-asa.md"}, {"id": "SonicWall", "title": "SonicWall", "description": "SonicWall firewalls allow you to identify and control all of the applications in use on your network. 
This integration allows you to send logs from your SonicWall applications to your Logz.io SIEM account.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Network", "Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/SonicWall-Logo.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/sonicwall.md"}, {"id": "Trend-micro", "title": "Trend Micro", "description": "This integration enables users to monitor and analyze cybersecurity threats and events in real-time, enhancing their overall security visibility and incident response capabilities.", "productTags": ["METRICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/trendmicro-small-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/trend-micro.md"}, {"id": "pfSense", "title": "pfSense", "description": "pfSense is an open source firewall solution. This topic describes how to configure pfSense to send system logs to Logz.io via Filebeat running on a dedicated server.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Security", "Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/pfsense-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/pfsense.md"}, {"id": "bCache-memory", "title": "bCache", "description": "bCache is a cache in the Linux kernel's block layer, which is used for accessing secondary storage devices. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Memory Caching"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/linux.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Memory-Caching/bCache.md"}, {"id": "Memcached-memory", "title": "Memcached", "description": "Memcached is a general-purpose distributed memory-caching system. 
Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Memory Caching"], "recommendedFor": ["Software Engineer"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/memcached.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Memory-Caching/memcached.md"}, {"id": "dotnet-traces-with-kafka-using-opentelemetry", "title": ".NET Kafka Tracing with OpenTelemetry", "description": "Deploy this integration to enable kafka instrumentation of your .NET application using OpenTelemetry.", "productTags": ["TRACING"], "osTags": ["windows", "linux"], "filterTags": ["Code", "Distributed Messaging"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/kafka.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Code/dotnet-traces-kafka.md"}, {"id": "JSON", "title": "JSON", "description": "Ship logs from your code directly to the Logz.io listener as a minified JavaScript Object Notation (JSON) file, a standard text-based format for representing structured data based on JavaScript object syntax.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Code", "Most Popular"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/json.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Code/json.md"}, {"id": "Java", "title": "Java", "description": "Integrate your Java applications with Logz.io to gain observability needed to maintain and improve your applications and performance. With Logz.io, you can monitor your Java logs, metrics, and traces, know if and when incidents occur, and quickly resolve them.", "productTags": ["LOG_ANALYTICS", "METRICS", "TRACING"], "osTags": ["windows", "linux"], "filterTags": ["Code", "Most Popular"], "recommendedFor": ["Software Engineer"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/java.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Code/java.md"}, {"id": "Ruby", "title": "Ruby", "description": "Deploy this integration to enable automatic instrumentation of your Ruby application using OpenTelemetry.", "productTags": ["TRACING"], "osTags": ["windows", "linux"], "filterTags": ["Code"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/ruby.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Code/ruby.md"}, {"id": "Node-js", "title": "Node.js", "description": "Send Node.js logs, metrics, and traces to monitor and maintain your applications' stability, dependability, and performance. 
By sending your data to Logz.io, you can rapidly spot any issue that might harm your applications and quickly resolve them.", "productTags": ["LOG_ANALYTICS", "METRICS", "TRACING"], "osTags": ["windows", "linux"], "filterTags": ["Code", "Most Popular"], "recommendedFor": ["Software Engineer"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/nodejs.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "2zAdXztEedvoRJzWTR2dY0"}, {"type": "GRAFANA_ALERT", "id": "14UC8nC6PZmuJ0lqOeHnhv"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Code/node-js.md"}, {"id": "dotnet", "title": ".NET", "description": ".NET is an open-source, managed computer software framework for Windows, Linux, and macOS operating systems. Integrate .NET with Logz.io to monitor logs, metrics, and traces, identify when issues occur, easily troubleshoot them, and improve your applications and services.", "productTags": ["LOG_ANALYTICS", "METRICS", "TRACING"], "osTags": ["windows", "linux"], "filterTags": ["Code", "Most Popular"], "recommendedFor": ["Software Engineer"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/dotnet.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "3lGo7AE5839jDfkAYU8r21"}, {"type": "GRAFANA_ALERT", "id": "1ALFpmGPygXKWi18TDoO5C"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Code/dotnet.md"}, {"id": "java-traces-with-kafka-using-opentelemetry", "title": "Java Traces with Kafka using OpenTelemetry", "description": "Deploy this integration to enable automatic instrumentation of your Java application with Kafka using OpenTelemetry.", "productTags": ["TRACING"], "osTags": ["windows", "linux"], "filterTags": ["Code", "Distributed Messaging"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/kafka.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Code/java-traces-with-kafka-using-opentelemetry.md"}, {"id": "Nestjs", "title": "NestJS OpenTelemetry", "description": "Deploy this integration to enable automatic instrumentation of your NestJS application using OpenTelemetry.", "productTags": ["TRACING"], "osTags": ["windows", "linux"], "filterTags": ["Code"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/nest-logo.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Code/nestjs.md"}, {"id": "Python", "title": "Python", "description": "Logz.io's Python integration allows you to send custom logs, custom metrics, and auto-instrument traces into your account, allowing you to identify and resolve issues in your code.", "productTags": ["METRICS", "LOG_ANALYTICS", "TRACING"], "osTags": ["windows", "linux", "mac"], "filterTags": ["Code", "Most Popular"], "recommendedFor": ["Software Engineer"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/python.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "1B98fgq9MpqTviLUGFMe6Z"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Code/python.md"}, {"id": "GO", "title": "GO", "description": "Send logs, metrics, and traces from your Go code.", "productTags": ["LOG_ANALYTICS", "METRICS", "TRACING"], "osTags": ["windows", "linux"], "filterTags": ["Code"], "recommendedFor": ["Software Engineer"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/go.svg", 
"bundle": [{"type": "GRAFANA_DASHBOARD", "id": "2cm0FZu4VK4vzH0We6SrJb"}, {"type": "GRAFANA_ALERT", "id": "1UqjU2gqNAKht1f62jBC9Q"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Code/go.md"}, {"id": "OneLogin", "title": "OneLogin", "description": "OneLogin is a cloud-based identity and access management (IAM) provider. This integration allows you to ship logs from your OneLogin account to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Access Management"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/onelogin.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Access-Management/onelogin.md"}, {"id": "JumpCloud", "title": "JumpCloud", "description": "JumpCloud is a cloud-based platform for identity and access management. Deploy this integration to ship JumpCloud events to Logz.io.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Access Management"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/jumpcloud.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Access-Management/jumpcloud.md"}, {"id": "Okta", "title": "Okta", "description": "Okta is an enterprise-grade, identity management service, built for the cloud, but compatible with many on-premises applications.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Access Management"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/okta.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Access-Management/okta.md"}, {"id": "Active-Directory", "title": "Active Directory via Winlogbeat", "description": "Active Directory is a directory service developed by Microsoft for Windows domain networks. This integration allows you to send Active Directory logs to your Logz.io SIEM account.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows"], "filterTags": ["Access Management"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/windows.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Access-Management/active-directory.md"}, {"id": "Auth0", "title": "Auth0", "description": "Auth0 is an easy to implement, adaptable authentication and authorization platform. 
Deploy this integration to ship Auth0 events from your Auth0 account to Logz.io using a custom log stream via webhooks.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Access Management"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/auth0.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Access-Management/auth0.md"}, {"id": "Istio-orchestration", "title": "Istio", "description": "Deploy this integration to send traces from your Istio service mesh layers to Logz.io via the OpenTelemetry collector.", "productTags": ["TRACING"], "osTags": ["windows", "linux"], "filterTags": ["Orchestration"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/istio.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Orchestration/istio-traces.md"}, {"id": "DC-OS-orchestration", "title": "DC/OS", "description": "DC/OS (the Distributed Cloud Operating System) is an open-source, distributed operating system based on the Apache Mesos distributed systems kernel. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Orchestration"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/dcos.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Orchestration/ds-os.md"}, {"id": "Apache-ZooKeeper-orchestration", "title": "Apache ZooKeeper", "description": "Apache ZooKeeper is an open-source server for highly reliable distributed coordination of cloud applications.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Orchestration"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/zookeeper-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Orchestration/apache-zookeeper.md"}, {"id": "Beanstalkd-orchestration", "title": "Beanstalkd", "description": "Beanstalkd is a simple, fast work queue.
Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Orchestration"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/beanstalk-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Orchestration/beanstalkd.md"}, {"id": "Apache-Mesos-orchestration", "title": "Apache Mesos", "description": "Apache Mesos is an open-source project to manage computer clusters.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Orchestration"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/mesos-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Orchestration/apache-mesos.md"}, {"id": "GCP-Cloud-Logging", "title": "GCP Cloud Logging", "description": "Send Google Cloud Logging metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/cloudlogging.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-cloud-logging.md"}, {"id": "GCP-Recommendations", "title": "GCP Recommendations", "description": "Send Google Cloud Recommendations metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcpai.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-recommendations.md"}, {"id": "GCP-Cloud-IDS", "title": "GCP IDS", "description": "Send Google Cloud IDS metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/ids.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-cloud-ids.md"}, {"id": "GCP-Cloud-Router", "title": "GCP Router", "description": "Send Google Cloud Router metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcprouter.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-cloud-router.md"}, {"id": "gcp-internet-of-things", "title": "GCP Cloud Internet of Things (IoT) Core", "description": "Send Google Cloud Internet of Things (IoT) Core metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "IoT"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/googleiot.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-internet-of-things.md"}, {"id": "GCP-Cloud-Functions", "title": "GCP Cloud Functions", "description": "Send Google Cloud Functions metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", 
"Compute"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/cloudfunctions.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "78mU6GZUeRLhMtExlMvshT"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-cloudfunctions.md"}, {"id": "GCP-Filestore", "title": "GCP Filestore", "description": "Send Google Cloud Filestore metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcpfilestore.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "4LAZ8Zep644MzbT1x089GG"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-filestorage.md"}, {"id": "GCP-Firestore", "title": "GCP Firestore", "description": "Send Google Cloud Firestore metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Database"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/firestore.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-firestore.md"}, {"id": "GCP-BigQuery", "title": "GCP BigQuery", "description": "Send Google Cloud BigQuery metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/bigquery.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-bigquery.md"}, {"id": "GCP-Compute-Engine", "title": "GCP Compute Engine", "description": "Send Google Cloud Compute Engine metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Compute"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/computeengine.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "2UHWhKZvymlkGU7yy4jKIK"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-compute-engine.md"}, {"id": "gcp-load-balancing", "title": "GCP Load Balancing", "description": "Send Google Cloud Load Balancing metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Load Balancer"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcplb.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "2qF8pBXlwH0Pw6noOMfzRk"}, {"type": "GRAFANA_DASHBOARD", "id": "48vnzAEl0x6hh3DWKIWkpx"}, {"type": "GRAFANA_DASHBOARD", "id": "7s5HblMf4IVimoRSwnCRJ6"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-load-balancing.md"}, {"id": "GCP-BigQuery-Data-Transfer-Service", "title": "GCP BigQuery Data Transfer Service", "description": "Send Google Cloud BigQuery Data Transfer Service metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/bigquery.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-bigquery-data-transfer-service.md"}, {"id": 
"google-certificate-authority-service", "title": "Google Certificate Authority Service", "description": "Send Google Cloud Certificate Authority Service metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/certmanager.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/google-certificate-authority-service.md"}, {"id": "GCP-VPN", "title": "GCP VPN", "description": "Send Google Cloud VPN metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-vpn.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "4gdYz2iIWFeIL3WDDcYRm"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-vpn.md"}, {"id": "gcp-firewall-insights", "title": "GCP Firewall Insights", "description": "Send Google Cloud Firewall metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Network", "Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcpfirewall.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-firewall-insights.md"}, {"id": "GCP-Firebase", "title": "GCP Firebase", "description": "Send Google Cloud Firebase metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/firebase.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-firebase.md"}, {"id": "GCP-Cloud-API", "title": "GCP API", "description": "Send Google Cloud API metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcpapis.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-cloud-api.md"}, {"id": "gcp-managed-service-for-microsoft-active-directory", "title": "GCP Managed Service for Microsoft Active Directory", "description": "Send Google Cloud Managed Service for Microsoft Active Directory metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Access Management"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcpiam.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-managed-service-for-microsoft-active-directory.md"}, {"id": "GCP-Storage-Transfer", "title": "GCP Storage Transfer Service", "description": "Send Google Cloud Storage Transfer Service metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcpstorage.png", "bundle": [], "dataLink": 
"https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-storage-transfer.md"}, {"id": "GCP-Cloud-Interconnect", "title": "GCP Interconnect", "description": "Send Google Cloud Interconnect metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/interconnect.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-cloud-interconnect.md"}, {"id": "GCP-Workspace", "title": "GCP Workspace", "description": "Send Google Cloud Workspace metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/google-workspace.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-workspace.md"}, {"id": "GCP-AI-Platform", "title": "GCP AI Platform", "description": "Send Google AI Platform metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcpai.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-ai-platform.md"}, {"id": "GCP-Bigtable", "title": "GCP Bigtable", "description": "Send Google Cloud Bigtable metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/bigtable.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "z2VVwfx5bq2xD5zhQUzk6"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-bigtable.md"}, {"id": "GCP-Datastream", "title": "GCP Datastream", "description": "Send Google Cloud Datastream metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcpdatastream.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-datastream.md"}, {"id": "GCP-Cloud-Monitoring", "title": "GCP Monitoring", "description": "Send Google Cloud Monitoring metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Monitoring"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/cloudmonitoring.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-cloud-monitoring.md"}, {"id": "gcp-app-engine", "title": "GCP App Engine", "description": "Send Google Cloud App Engine metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Compute"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/appengine.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-app-engine.md"}, {"id": "GCP-BigQuery-BI-Engine", "title": "GCP BigQuery BI Engine", 
"description": "Send Google Cloud BigQuery BI Engine metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/bigquery.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-bigquerybiengine.md"}, {"id": "Google-Cloud-Run", "title": "GCP Cloud Run", "description": "Send Google Cloud Run metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Compute"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/cloudrun.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-cloud-run.md"}, {"id": "GCP-Stackdriver", "title": "GCP Operation Suite (Stackdriver)", "description": "Send Google Cloud Operation Suite (Stackdriver) metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Monitoring"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcp-stackdriver.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-stackdriver.md"}, {"id": "GCP-Cloud-Trace", "title": "GCP Trace", "description": "Send Google Cloud Trace metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcptrace.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-cloud-trace.md"}, {"id": "GCP-Identity-and-Access-Management", "title": "GCP Identity and Access Management", "description": "Send Google Cloud Identity and Access Management metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Access Management"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcpiam.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-identity-and-access-management.md"}, {"id": "GCP-VM-Manager", "title": "GCP VM Manager", "description": "Send Google Cloud VM Manager metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Compute"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/computeengine.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-vm-manager.md"}, {"id": "GCP-reCAPTCHA-Enterprise", "title": "GCP reCAPTCHA Enterprise", "description": "Send Google Cloud reCAPTCHA Enterprise metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/recap.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-recaptcha-enterprise.md"}, {"id": "GCP-API-Gateway", "title": "GCP API Gateway", "description": "Send Google Cloud API Gateway metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", 
"METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Access Management"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/apigateway.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-api-gateway.md"}, {"id": "GCP-Data-Loss-Prevention", "title": "GCP Data Loss Prevention", "description": "Send Google Cloud Data Loss Prevention metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/lossprevention.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-data-loss-prevention.md"}, {"id": "GPC-Apigee", "title": "GCP Apigee", "description": "Apigee, part of Google Cloud, helps design, secure, and scale application programming interfaces (APIs). Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/apigee.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-apigee.md"}, {"id": "GCP-Dataproc", "title": "GCP Dataproc", "description": "Send Google Cloud Dataproc metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Compute"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcpdataproc.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-dataproc.md"}, {"id": "GCP-Compute-Engine-Autoscaler", "title": "GCP Compute Engine Autoscaler", "description": "Send Google Cloud Compute Engine Autoscaler metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Compute"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/computeengine.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-compute-engine-autoscaler.md"}, {"id": "GCP-Cloud-Composer", "title": "GCP Cloud Composer", "description": "Send Google Cloud Composer metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Orchestration"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcpcomposer.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-cloud-composer.md"}, {"id": "GCP-Storage", "title": "GCP Storage", "description": "Send Google Cloud Storage metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcpstorage.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "4LAZ8Zep644MzbT1x089GG"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-cloud-storage.md"}, {"id": "GCP-Cloud-Armor", "title": "GCP Cloud Armor", "description": "Send Google Cloud Armor metrics to your 
Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/cloudarmor.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-cloud-armor.md"}, {"id": "gcp-memorystore-for-memcached", "title": "GCP Memorystore for Memcached", "description": "Send Google Cloud Memorystore for Memcached metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Memory Caching"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/memorystore.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "6V6DBzsX8cRZXCSvuSkHiA"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-memorystore-for-memcached.md"}, {"id": "GCP-Dataflow", "title": "GCP Dataflow", "description": "Send Google Cloud Dataflow metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcpdataflow.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-dataflow.md"}, {"id": "GCP-Cloud-Healthcare", "title": "GCP Healthcare", "description": "Send Google Cloud Healthcare metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcphealthcare.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-cloud-healthcare.md"}, {"id": "GCP-Vertex-AI", "title": "GCP Vertex AI", "description": "Send Google Cloud Vertex AI metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/vertexai.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-vertex-ai.md"}, {"id": "gcp-network-topology", "title": "GCP Network Topology", "description": "Send Google Cloud Network Topology metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcpnetwork.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-network-topology.md"}, {"id": "GCP-Dataproc-Metastore", "title": "GCP Dataproc Metastore", "description": "Send Google Cloud Dataproc Metastore metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcpdataproc.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-dataproc-metastore.md"}, {"id": "GCP-Cloud-TPU", "title": "GCP TPU", "description": "Send Google Cloud TPU metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], 
"filterTags": ["GCP", "Compute"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/tpu.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-cloud-tpu.md"}, {"id": "gcp-contact-center-ai-insights", "title": "GCP Contact Center AI Insights", "description": "Send Google Cloud Contact Center AI Insights metrics to your Logz.io account.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcpai.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-contact-center-ai-insights.md"}, {"id": "GCP-Datastore", "title": "GCP Datastore", "description": "Send Google Cloud Datastore metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcpdatastore.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-cloud-datastore.md"}, {"id": "GCP-Workflows", "title": "GCP Workflows", "description": "Send Google Cloud Workflows metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Orchestration"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/workflows.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-workflows.md"}, {"id": "GCP-Cloud-DNS", "title": "GCP DNS", "description": "Send Google Cloud DNS metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/dns.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-cloud-dns.md"}, {"id": "GCP-Cloud-Tasks", "title": "GCP Tasks", "description": "Send Google Cloud Tasks metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcptasks.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-cloudtasks.md"}, {"id": "GCP-PubSub", "title": "GCP PubSub", "description": "Send Google Cloud PubSub metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Distributed Messaging"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/pubsub.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-pubsub.md"}, {"id": "GCP-Cloud-SQL", "title": "GCP SQL", "description": "Send Google Cloud SQL metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Database"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcpsql.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "4KUp9D8EhuMuCuLLhIZBEP"}], "dataLink": 
"https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-cloudsql.md"}, {"id": "GCP-Memorystore-for-Redis", "title": "GCP Memorystore for Redis", "description": "Send Google Cloud Memorystore for Redis metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Memory Caching"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/memorystore.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "771vgmjMzFBHHma1Jov3bG"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-memorystore-for-redis.md"}, {"id": "localhost-mac", "title": "Mac Operating System", "description": "Send your Mac machine logs and metrics to Logz.io to monitor and manage your Mac data, allowing you to identify anomalies, investigate incidents, get to the root cause of any issue, and quickly resolve it.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["mac"], "filterTags": ["Operating Systems", "Most Popular"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/mac-os.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "2gsQP2xRef7dkwt8pxWieo"}, {"type": "GRAFANA_ALERT", "id": "hWld33IEO6gZMpp2e4vs0"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Operating-Systems/localhost-mac.md"}, {"id": "Linux-data", "title": "Linux Operating System", "description": "Send your Linux machine logs and metrics to Logz.io to monitor and manage your Linux data, allowing you to identify anomalies, investigate incidents, get to the root cause of any issue, and quickly resolve it.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["linux"], "filterTags": ["Operating Systems", "Most Popular"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/linux.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "6hb5Nww0ar4SXoF92QxMx"}, {"type": "GRAFANA_ALERT", "id": "6y7xNsm1RXlXAFUAXLyOpZ"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Operating-Systems/linux.md"}, {"id": "Telegraf-windows-performance", "title": "Telegraf Windows Performance", "description": "Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/windows.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "3AND5wMrjcMC9ngDTghmHx"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Operating-Systems/telegraf-windows-performance.md"}, {"id": "Windows", "title": "Windows Operating System", "description": "Send your Windows machine logs and metrics to Logz.io to monitor and manage your Windows data, allowing you to identify anomalies, investigate incidents, get to the root cause of any issue, and quickly resolve it.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows"], "filterTags": ["Operating Systems", "Most Popular"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/windows.svg", "bundle": [{"type": "LOG_ALERT", "id": "72Yry8pK5OfiGdPOV2y9RZ"}, {"type": "LOG_ALERT", "id": "4Mkw0OICZz7xnZZjlGWX9x"}, {"type": "GRAFANA_DASHBOARD", "id": "7vydxtpnlKLILHIGK4puX5"}, {"type": "GRAFANA_ALERT", "id": "4GVNTAqeH4lSRQBfN7dCXQ"}], "dataLink": 
"https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Operating-Systems/windows.md"}, {"id": "Telegraf-Windows-services", "title": "Telegraf Windows Services", "description": "Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/windows.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Operating-Systems/telegraf-windows-services.md"}, {"id": "Azure-blob-trigger", "title": "Azure Blob Trigger", "description": "Azure Blob Storage is Microsoft's object storage solution for the cloud. Deploy this integration to forward logs from your Azure Blob Storage account to Logz.io using an automated deployment process via the trigger function. Each new log in the container path inside the storage account (including sub directories), will trigger the Logz.io function that will ship the file content to Logz.io.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Azure", "Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/azure-blob.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Azure/azure-blob-trigger.md"}, {"id": "azure-office365-message-trace-reports", "title": "Microsoft Azure Office365 Message Trace Reports (mail reports)", "description": "You can ship mail report logs available from the Microsoft Office365 Message Trace API with Logzio-api-fetcher.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Azure", "Access Management"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/azure.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Azure/azure-mail-reports.md"}, {"id": "Azure-Native", "title": "Azure Native Logs", "description": "This integration uses Logz.io's Cloud-Native Observability Platform to monitor the health and performance of your Azure environment through the Azure portal.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Azure", "Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/Azure-native-integration2.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Azure/azure-native.md"}, {"id": "Azure-Security-Center", "title": "Azure Security Center", "description": "You can ship security logs available from the Microsoft Graph APIs with Logzio api fetcher.", "productTags": ["SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Azure", "Access Management"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/azure.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Azure/azure-security-center.md"}, {"id": "Azure-NSG", "title": "Azure NSG", "description": "Enable an Azure function to forward NSG logs from your Azure Blob Storage account to your Logz.io account.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Azure", "Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/nsg-logo.png", "bundle": [], "dataLink": 
"https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Azure/azure-nsg.md"}, {"id": "azure-active-Directory", "title": "Azure Active Directory", "description": "You can ship logs available from the Microsoft Graph APIs with Logzio-MSGraph.", "productTags": ["SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Azure", "Access Management"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/azure.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Azure/azure-active-directory.md"}, {"id": "Azure-Diagnostic-Logs", "title": "Azure Diagnostic Logs", "description": "Ship your Azure diagnostic logs using an automated deployment process.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Azure", "Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/azure-monitor.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Azure/azure-diagnostic-logs.md"}, {"id": "Azure-VM-Extension", "title": "Azure VM Extension", "description": "Extensions are small applications that provide post-deployment configuration and automation on Azure VMs. You can install Logz.io agents on Azure virtual machines as an extension. This will allow you to ship logs directly from your VM to your Logz.io account.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Azure", "Compute"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/azure-vm.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Azure/azure-vm-extension.md"}, {"id": "Azure-Activity-logs", "title": "Azure Activity Logs", "description": "Ship your Azure activity logs using an automated deployment process.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Azure", "Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/azure-monitor.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Azure/azure-activity-logs.md"}, {"id": "azure-graph", "title": "Microsoft Azure Graph API", "description": "You can ship logs available from the Microsoft Graph APIs with Logzio-api-fetcher.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Azure", "Access Management"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/azure.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Azure/azure-graph.md"}, {"id": "TeamCity", "title": "TeamCity", "description": "TeamCity is a general-purpose CI/CD solution that allows the most flexibility for all sorts of workflows and development practices. 
Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["CI/CD"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/TeamCity-logo.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "1mdHqslZMi4gXaNCLZo9G1"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/CI-CD/teamcity.md"}, {"id": "GitHub", "title": "GitHub", "description": "This integration enable you to collect logs and metrics from github", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["CI/CD"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/github.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/CI-CD/github.md"}, {"id": "Jenkins", "title": "Jenkins", "description": "Jenkins is an automation server for building, testing, and deploying software. This integration allows you to send logs and metrics from your Jenkins servers to your Logz.io account.", "productTags": ["METRICS", "LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["CI/CD"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/jenkins.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "7bmikAb2xNPTy7PESlBqXY"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/CI-CD/jenkins.md"}, {"id": "GitLab", "title": "GitLab", "description": "GitLab is a DevOps platform that combines the ability to develop, secure, and operate software in a single application. This integration allows you to send logs from your GitLan platform to your Logz.io account.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["CI/CD"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gitlab.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/CI-CD/gitlab.md"}, {"id": "Puppet", "title": "Puppet", "description": "Puppet is a software configuration management tool which includes its own declarative language to describe system configuration. Deploy this integration to send logs from your Puppet applications to your Logz,io account.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["CI/CD"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/puppet.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/CI-CD/puppet.md"}, {"id": "Argo-CD", "title": "Argo CD", "description": "Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["CI/CD"], "recommendedFor": ["DevOps Engineer"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/argo.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "6Gx8npV306IL2WZ4SJRIN4"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/CI-CD/argo-cd.md"}, {"id": "Bitbucket", "title": "Bitbucket", "description": "Bitbucket is a Git-based source code repository hosting service. 
This integration allows you to ship logs from your Bitbucket repository to your Logz.io account.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["CI/CD"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/bitbucket.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/CI-CD/bitbucket.md"}, {"id": "Microsoft-SQL-Server", "title": "Microsoft SQL Server", "description": "Microsoft SQL Server is a relational database management system developed by Microsoft. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows"], "filterTags": ["Database"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/mysql.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Database/microsoft-sql-server.md"}, {"id": "PostgreSQL", "title": "PostgreSQL", "description": "PostgreSQL is a free and open-source relational database management system emphasizing extensibility and SQL compliance. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Database"], "recommendedFor": ["Software Engineer"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/postgresql-logo.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "3L7cjHptO2CFcrvpqGCNI0"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Database/postgresql.md"}, {"id": "Redis", "title": "Redis", "description": "Redis is an in-memory data structure store, used as a distributed, in-memory key\u2013value database, cache and message broker, with optional durability. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Database"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/redis-logo.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "1sS7i6SyMz35RIay8NRYGp"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Database/redis.md"}, {"id": "PgBouncer", "title": "PgBouncer", "description": "PgBouncer is a lightweight connection pooler for PostgreSQL. 
Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Database"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/pgbouncer.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Database/pgbouncer.md"}, {"id": "Apache-Cassandra", "title": "Apache Cassandra", "description": "Apache Cassandra is an open source NoSQL distributed database management system designed to process large amounts of data across commodity servers.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Database"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/cassandra-logo.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "5oCUt52hGJu6LmVGHPOktr"}, {"type": "GRAFANA_DASHBOARD", "id": "6J2RujMalRK3oC4y0r88ax"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Database/apache-cassandra.md"}, {"id": "MySQL", "title": "MySQL", "description": "MySQL is an open-source relational database management system. Filebeat is often the easiest way to get logs from your system to Logz.io. Logz.io has a dedicated configuration wizard to make it simple to configure Filebeat. If you already have Filebeat and you want to add new sources, check out our other shipping instructions to copy&paste just the relevant changes from our code examples.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Database"], "recommendedFor": ["Software Engineer"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/mysql.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "2zMVEOdWnIMgOPATDLByX7"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Database/mysql.md"}, {"id": "RavenDB", "title": "RavenDB", "description": "RavenDB is an open source document-oriented NoSQL designed especially for the .NET/Windows platform. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Database"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/ravendb-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Database/ravendb.md"}, {"id": "Aerospike", "title": "Aerospike", "description": "Aerospike is a flash-optimized and in-memory open source distributed key value NoSQL database. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Database"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aerospike-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Database/aerospike.md"}, {"id": "Riak", "title": "Riak", "description": "Riak is a distributed NoSQL key-value data store that offers high availability, fault tolerance, operational simplicity, and scalability. 
Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Database"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/riak-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Database/riak.md"}, {"id": "MarkLogic", "title": "MarkLogic", "description": "MarkLogic is a NoSQL database platform that is used in publishing, government, finance and other sectors, with hundreds of large-scale systems in production. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Database"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/marklogic.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Database/marklogic.md"}, {"id": "ClickHouse", "title": "ClickHouse", "description": "ClickHouse is a fast open-source column-oriented database management system that allows generating analytical data reports in real-time using SQL queries. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Database"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/clickhouse.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Database/clickhouse-telegraf.md"}], "tags": ["Distributed Messaging", "Other", "Most Popular", "Compute", "Synthetic Monitoring", "AWS", "Database", "Security", "Data Store", "Access Management", "Load Balancer", "Memory Caching", "CI/CD", "Network", "Containers", "Orchestration", "IoT", "Code", "GCP", "Monitoring", "Operating Systems", "Azure"]} \ No newline at end of file +{"collectors": [{"id": "Nestjs", "title": "NestJS OpenTelemetry", "description": "Deploy this integration to enable automatic instrumentation of your NestJS application using OpenTelemetry.", "productTags": ["TRACING"], "osTags": ["windows", "linux"], "filterTags": ["Code"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/nest-logo.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Code/nestjs.md"}, {"id": "JSON", "title": "JSON", "description": "Ship logs from your code directly to the Logz.io listener as a minified JavaScript Object Notation (JSON) file, a standard text-based format for representing structured data based on JavaScript object syntax.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Code", "Most Popular"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/json.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Code/json.md"}, {"id": "java-traces-with-kafka-using-opentelemetry", "title": "Java Traces with Kafka using OpenTelemetry", "description": "Deploy this integration to enable automatic instrumentation of your Java application with Kafka using OpenTelemetry.", "productTags": ["TRACING"], "osTags": ["windows", "linux"], "filterTags": ["Code", "Distributed Messaging"], "logo": 
"https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/kafka.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Code/java-traces-with-kafka-using-opentelemetry.md"}, {"id": "dotnet-traces-with-kafka-using-opentelemetry", "title": ".NET Kafka Tracing with OpenTelemetry", "description": "Deploy this integration to enable kafka instrumentation of your .NET application using OpenTelemetry.", "productTags": ["TRACING"], "osTags": ["windows", "linux"], "filterTags": ["Code", "Distributed Messaging"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/kafka.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Code/dotnet-traces-kafka.md"}, {"id": "Python", "title": "Python", "description": "Logz.io's Python integration allows you to send custom logs, custom metrics, and auto-instrument traces into your account, allowing you to identify and resolve issues in your code.", "productTags": ["METRICS", "LOG_ANALYTICS", "TRACING"], "osTags": ["windows", "linux", "mac"], "filterTags": ["Code", "Most Popular"], "recommendedFor": ["Software Engineer"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/python.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "1B98fgq9MpqTviLUGFMe6Z"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Code/python.md"}, {"id": "Java", "title": "Java", "description": "Integrate your Java applications with Logz.io to gain observability needed to maintain and improve your applications and performance. With Logz.io, you can monitor your Java logs, metrics, and traces, know if and when incidents occur, and quickly resolve them.", "productTags": ["LOG_ANALYTICS", "METRICS", "TRACING"], "osTags": ["windows", "linux"], "filterTags": ["Code", "Most Popular"], "recommendedFor": ["Software Engineer"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/java.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Code/java.md"}, {"id": "dotnet", "title": ".NET", "description": ".NET is an open-source, managed computer software framework for Windows, Linux, and macOS operating systems. Integrate .NET with Logz.io to monitor logs, metrics, and traces, identify when issues occur, easily troubleshoot them, and improve your applications and services.", "productTags": ["LOG_ANALYTICS", "METRICS", "TRACING"], "osTags": ["windows", "linux"], "filterTags": ["Code", "Most Popular"], "recommendedFor": ["Software Engineer"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/dotnet.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "3lGo7AE5839jDfkAYU8r21"}, {"type": "GRAFANA_ALERT", "id": "1ALFpmGPygXKWi18TDoO5C"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Code/dotnet.md"}, {"id": "Node-js", "title": "Node.js", "description": "Send Node.js logs, metrics, and traces to monitor and maintain your applications' stability, dependability, and performance. 
By sending your data to Logz.io, you can rapidly spot any issues that might harm your applications and quickly resolve them.", "productTags": ["LOG_ANALYTICS", "METRICS", "TRACING"], "osTags": ["windows", "linux"], "filterTags": ["Code", "Most Popular"], "recommendedFor": ["Software Engineer"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/nodejs.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "2zAdXztEedvoRJzWTR2dY0"}, {"type": "GRAFANA_ALERT", "id": "14UC8nC6PZmuJ0lqOeHnhv"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Code/node-js.md"}, {"id": "Ruby", "title": "Ruby", "description": "Deploy this integration to enable automatic instrumentation of your Ruby application using OpenTelemetry.", "productTags": ["TRACING"], "osTags": ["windows", "linux"], "filterTags": ["Code"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/ruby.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Code/ruby.md"}, {"id": "GO", "title": "GO", "description": "Send logs, metrics, and traces from your Go code.", "productTags": ["LOG_ANALYTICS", "METRICS", "TRACING"], "osTags": ["windows", "linux"], "filterTags": ["Code"], "recommendedFor": ["Software Engineer"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/go.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "2cm0FZu4VK4vzH0We6SrJb"}, {"type": "GRAFANA_ALERT", "id": "1UqjU2gqNAKht1f62jBC9Q"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Code/go.md"}, {"id": "OpenShift", "title": "OpenShift", "description": "OpenShift is a family of containerization software products developed by Red Hat. Deploy this integration to ship logs from your OpenShift cluster to Logz.io. This integration will deploy the default daemonset, which sends only container logs while ignoring all containers in the \"openshift\" namespace.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Containers", "Orchestration"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/openshift.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Containers/openshift.md"}, {"id": "Docker", "title": "Docker", "description": "Docker lets you work in standardized environments using local containers, promoting continuous integration and continuous delivery (CI/CD) workflows. 
With Logz.io you can collect logs and metrics from your Docker environment to gain observability and know if and when issues occur.", "productTags": ["LOG_ANALYTICS", "METRICS"], "recommendedFor": ["DevOps Engineer"], "osTags": ["windows", "linux"], "filterTags": ["Containers", "Most Popular"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/docker.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Containers/docker.md"}, {"id": "oracle-cloud-infrastructure-container-engine-for-kubernetes", "title": "Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE)", "description": "Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE) is a fully-managed, scalable, and highly available service that you can use to deploy your containerized applications to the cloud.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Containers", "Orchestration"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/oke.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Containers/oracle-cloud-infrastructure-container-engine-for-kubernetes.md"}, {"id": "Control-plane", "title": "Control Plane", "description": "Control Plane is a hybrid platform that integrates multiple cloud services, such as AWS, GCP, and Azure, providing a unified and flexible environment for developers to build and manage backend applications and services across various public and private clouds.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Containers"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/control-plane.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Containers/control-plane.md"}, {"id": "Kubernetes", "title": "Kubernetes", "description": "Kubernetes, also known as K8s, is an open-source system for automating the deployment, scaling, and management of containerized applications. 
Integrate your Kubernetes system with Logz.io to monitor your logs, metrics, and traces, gain observability into your environment, and be able to identify and resolve issues with just a few clicks.", "productTags": ["LOG_ANALYTICS", "METRICS", "TRACING"], "recommendedFor": ["DevOps Engineer"], "osTags": ["windows", "linux"], "filterTags": ["Containers", "Most Popular"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/kubernetes.svg", "bundle": [{"type": "OSD_DASHBOARD", "id": "3D1grGcEYB5Oe2feUPImak"}, {"type": "OSD_DASHBOARD", "id": "qryn7oYYoeaBBGMFRvm67"}, {"type": "LOG_ALERT", "id": "1AZRkKc64I12yxAMf2Wyny"}, {"type": "LOG_ALERT", "id": "6H7dfFOPUaHVMIjxdOMASx"}, {"type": "LOG_ALERT", "id": "1F6zSL5me5XJt9Lrjw3vxU"}, {"type": "LOG_ALERT", "id": "2dQHLx0WxmKmLk1kc67Ags"}, {"type": "LOG_ALERT", "id": "3dyFejyivMaZFdudbwKGRG"}, {"type": "GRAFANA_DASHBOARD", "id": "7nILXHYFZbThgTSMObUxkw"}, {"type": "GRAFANA_DASHBOARD", "id": "5TGD77ZKuTiZUXtiM51m6V"}, {"type": "GRAFANA_DASHBOARD", "id": "6pY6DKD0oQJL4sO7bW728"}, {"type": "GRAFANA_DASHBOARD", "id": "5kkUAuEwA0Ygvlgm9iXTHY"}, {"type": "GRAFANA_DASHBOARD", "id": "53g5kSILqoj1T10U1jnvKV"}, {"type": "GRAFANA_DASHBOARD", "id": "5e1xRaDdQnOvs5LCuwKCh5"}, {"type": "GRAFANA_DASHBOARD", "id": "7Cy6DUN78jlKUtMCsbt6GC"}, {"type": "GRAFANA_DASHBOARD", "id": "29HGYsE3kgFEdgJbalTqeY"}, {"type": "GRAFANA_DASHBOARD", "id": "1Hij49FKdnAKVJTjOmpDbH"}, {"type": "GRAFANA_DASHBOARD", "id": "6ThbRK67ZxBGeYwp8n74D0"}, {"type": "GRAFANA_ALERT", "id": "5Ng398K19vXP9197bRV1If"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Containers/kubernetes.md"}, {"id": "Telegraf-sysmetrics", "title": "Telegraf System Metrics", "description": "Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/telegraf-logo.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "32X5zm8qW7ByLlp1YPFkrJ"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Compute/telegraf-sysmetrics.md"}, {"id": "Apache-Tomcat", "title": "Apache Tomcat", "description": "Apache Tomcat is a web server and servlet container that allows the execution of Java Servlets and JavaServer Pages (JSP) for web applications.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Compute"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/tomcat-logo.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "1QIverGwIdtlC5ZbKohyvj"}, {"type": "GRAFANA_DASHBOARD", "id": "6J2RujMalRK3oC4y0r88ax"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Compute/apache-tomcat.md"}, {"id": "internet-information-services", "title": "Internet Information Services (IIS)", "description": "Internet Information Services (IIS) for Windows\u00ae Server is a flexible, secure and manageable Web server for hosting on the Web. 
This integration allows you to send logs from your IIS services to your Logz.io account.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Compute"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/iis.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Compute/internet-information-services.md"}, {"id": "VMware-vSphere", "title": "VMware vSphere", "description": "VMware vSphere is VMware's cloud computing virtualization platform. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Compute"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/vsphere-logo.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "VpeHVDlhfo1mF22Lc0UKf"}, {"type": "GRAFANA_DASHBOARD", "id": "6CpW1YzdonmTQ8uIXAN5OL"}, {"type": "GRAFANA_DASHBOARD", "id": "3AvORCMPVJd8948i9oKaBO"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Compute/vmware-vsphere.md"}, {"id": "Apache-HTTP-Server", "title": "Apache HTTP Server", "description": "The Apache HTTP Server, colloquially called Apache, is a free and open-source cross-platform web server. This integration sends Apache HTTP server logs and metrics to Logz.io.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Compute"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/apache-http-logo.png", "bundle": [{"type": "OSD_DASHBOARD", "id": "5LWLzuSeGMqXVj5p8cP1NX"}, {"type": "LOG_ALERT", "id": "6b8UfKeSHCc4SWxHphMd0O"}, {"type": "LOG_ALERT", "id": "5jTENQYn5PpgiZWvezI0Cp"}, {"type": "LOG_ALERT", "id": "6OAv4ozj4eRi7NSHgJawl1"}, {"type": "LOG_ALERT", "id": "7EgPOsqIuoBUCwcHpq57L3"}, {"type": "LOG_ALERT", "id": "6NmeR0XGMoTTanwU82oCrD"}, {"type": "GRAFANA_DASHBOARD", "id": "28VJXdtDINv7w2T3l8oOO9"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Compute/apache-http-server.md"}, {"id": "Neptune-apex-iot", "title": "Neptune Apex", "description": "Neptune Apex is an aquarium control system. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["IoT"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/neptune.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/IoT/neptune-apex.md"}, {"id": "HAProxy-load", "title": "HAProxy", "description": "HAProxy is an open-source load balancer and reverse proxy. Like a network device, it transfers its logs using the syslog protocol.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Load Balancer"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/haproxy-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Load-Balancer/haproxy.md"}, {"id": "Nginx-load", "title": "Nginx", "description": "Nginx is a web server that can also be used as a reverse proxy, load balancer, mail proxy and HTTP cache. 
Deploy this integration to ship Nginx logs to your Logz.io SIEM account, as well as Nginx metrics, including Plus API, Plus, Stream STS, and VTS.", "productTags": ["LOG_ANALYTICS", "METRICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Load Balancer"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/nginx.svg", "bundle": [{"type": "LOG_ALERT", "id": "5tov4MgrnR6vXZhh1MyuHO"}, {"type": "LOG_ALERT", "id": "63MnOu9ZzkCXdX0KOhXghi"}, {"type": "LOG_ALERT", "id": "4V8BXcfr7noTdtU6EjXp7w"}, {"type": "LOG_ALERT", "id": "2EXnb71ucdTnVolN1PqbM6"}, {"type": "GRAFANA_DASHBOARD", "id": "3HKho6pQhCmEYmwMc4xCeY"}, {"type": "GRAFANA_ALERT", "id": "1Bz57jmzsN7uIiyZLdnNpx"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Load-Balancer/nginx.md"}, {"id": "bCache-memory", "title": "bCache", "description": "bCache is a cache in the Linux kernel's block layer, which is used for accessing secondary storage devices. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Memory Caching"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/linux.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Memory-Caching/bCache.md"}, {"id": "Memcached-memory", "title": "Memcached", "description": "Memcached is a general-purpose distributed memory-caching system. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Memory Caching"], "recommendedFor": ["Software Engineer"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/memcached.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Memory-Caching/memcached.md"}, {"id": "etcd", "title": "etcd", "description": "etcd is an open source, distributed, consistent key-value store for shared configuration, service discovery, and scheduler coordination of distributed systems or clusters of machines. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/etcd-logo.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "3Vr8IYt2XR2LEKP6PeVV0r"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Data-Store/etcd.md"}, {"id": "Solr", "title": "Solr", "description": "Solr is an open-source enterprise-search platform, written in Java. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/solr-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Data-Store/solr.md"}, {"id": "ZFS", "title": "ZFS", "description": "ZFS combines a file system with a volume manager. 
Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/zfs-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Data-Store/zfs.md"}, {"id": "apache-couchdb", "title": "Apache CouchDB", "description": "Apache CouchDB is an open-source document-oriented NoSQL database, implemented in Erlang.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/couchdb.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Data-Store/apache-couchdb.md"}, {"id": "Elasticsearch", "title": "Elasticsearch", "description": "Elasticsearch is a search engine based on the Lucene library. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/elasticsearch.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Data-Store/elasticsearch.md"}, {"id": "MongoDB-Atlas", "title": "MongoDB Atlas", "description": "MongoDB Atlas is a fully-managed cloud database that handles deploying, managing and healing deployments on its cloud service provider.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/mongoatlas-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Data-Store/mongodb-atlas.md"}, {"id": "MongoDB", "title": "MongoDB", "description": "MongoDB is a source-available cross-platform document-oriented database program. This integration lets you send logs and metrics from your MongoDB instances to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/mongo-logo.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "13q1IECY8zfnnDXvUq7vvH"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Data-Store/mongodb.md"}, {"id": "Ceph", "title": "Ceph", "description": "Ceph is an open-source, software-defined storage platform that implements object storage on a single distributed computer cluster and provides 3-in-1 interfaces for object-, block-, and file-level storage. 
Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/ceph-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Data-Store/ceph.md"}, {"id": "gcp-app-engine", "title": "GCP App Engine", "description": "Send Google Cloud App Engine metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Compute"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/appengine.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-app-engine.md"}, {"id": "gcp-managed-service-for-microsoft-active-directory", "title": "GCP Managed Service for Microsoft Active Directory", "description": "Send Google Cloud Managed Service for Microsoft Active Directory metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Access Management"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcpiam.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-managed-service-for-microsoft-active-directory.md"}, {"id": "GCP-Stackdriver", "title": "GCP Operation Suite (Stackdriver)", "description": "Send Google Cloud Operation Suite (Stackdriver) metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Monitoring"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcp-stackdriver.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-stackdriver.md"}, {"id": "GCP-VPN", "title": "GCP VPN", "description": "Send Google Cloud VPN metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-vpn.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "4gdYz2iIWFeIL3WDDcYRm"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-vpn.md"}, {"id": "GCP-Dataflow", "title": "GCP Dataflow", "description": "Send Google Cloud Dataflow metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcpdataflow.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-dataflow.md"}, {"id": "GCP-Data-Loss-Prevention", "title": "GCP Data Loss Prevention", "description": "Send Google Cloud Data Loss Prevention metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/lossprevention.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-data-loss-prevention.md"}, {"id": "GCP-Compute-Engine", "title": "GCP 
Compute Engine", "description": "Send Google Cloud Compute Engine metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Compute"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/computeengine.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "2UHWhKZvymlkGU7yy4jKIK"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-compute-engine.md"}, {"id": "GCP-Cloud-Interconnect", "title": "GCP Interconnect", "description": "Send Google Cloud Interconnect metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/interconnect.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-cloud-interconnect.md"}, {"id": "GCP-reCAPTCHA-Enterprise", "title": "GCP reCAPTCHA Enterprise", "description": "Send Google Cloud reCAPTCHA Enterprise metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/recap.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-recaptcha-enterprise.md"}, {"id": "gcp-load-balancing", "title": "GCP Load Balancing", "description": "Send Google Cloud Load Balancing metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Load Balancer"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcplb.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "2qF8pBXlwH0Pw6noOMfzRk"}, {"type": "GRAFANA_DASHBOARD", "id": "48vnzAEl0x6hh3DWKIWkpx"}, {"type": "GRAFANA_DASHBOARD", "id": "7s5HblMf4IVimoRSwnCRJ6"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-load-balancing.md"}, {"id": "GCP-Identity-and-Access-Management", "title": "GCP Identity and Access Management", "description": "Send Google Cloud Identity and Access Management metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Access Management"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcpiam.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-identity-and-access-management.md"}, {"id": "GCP-Memorystore-for-Redis", "title": "GCP Memorystore for Redis", "description": "Send Google Cloud Memorystore for Redis metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Memory Caching"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/memorystore.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "771vgmjMzFBHHma1Jov3bG"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-memorystore-for-redis.md"}, {"id": "GCP-Cloud-Tasks", "title": "GCP Tasks", "description": "Send Google Cloud Tasks metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Other"], "logo": 
"https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcptasks.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-cloudtasks.md"}, {"id": "GCP-Cloud-TPU", "title": "GCP TPU", "description": "Send Google Cloud TPU metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Compute"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/tpu.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-cloud-tpu.md"}, {"id": "GCP-Cloud-Functions", "title": "GCP Cloud Functions", "description": "Send Google Cloud Functions metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Compute"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/cloudfunctions.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "78mU6GZUeRLhMtExlMvshT"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-cloudfunctions.md"}, {"id": "gcp-contact-center-ai-insights", "title": "GCP Contact Center AI Insights", "description": "Send Google Cloud Contact Center AI Insights metrics to your Logz.io account.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcpai.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-contact-center-ai-insights.md"}, {"id": "GCP-Workspace", "title": "GCP Workspace", "description": "Send Google Cloud Workspace metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/google-workspace.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-workspace.md"}, {"id": "gcp-network-topology", "title": "GCP Network Topology", "description": "Send Google Cloud Network Topology metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcpnetwork.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-network-topology.md"}, {"id": "GCP-BigQuery-BI-Engine", "title": "GCP BigQuery BI Engine", "description": "Send Google Cloud BigQuery BI Engine metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/bigquery.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-bigquerybiengine.md"}, {"id": "GCP-Datastream", "title": "GCP Datastream", "description": "Send Google Cloud Datastream metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcpdatastream.png", "bundle": [], "dataLink": 
"https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-datastream.md"}, {"id": "GCP-Cloud-DNS", "title": "GCP DNS", "description": "Send Google Cloud DNS metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/dns.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-cloud-dns.md"}, {"id": "GCP-Vertex-AI", "title": "GCP Vertex AI", "description": "Send Google Cloud Vertex AI metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/vertexai.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-vertex-ai.md"}, {"id": "GCP-Cloud-Trace", "title": "GCP Trace", "description": "Send Google Cloud Trace metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcptrace.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-cloud-trace.md"}, {"id": "GCP-API-Gateway", "title": "GCP API Gateway", "description": "Send Google Cloud API Gateway metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Access Management"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/apigateway.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-api-gateway.md"}, {"id": "GCP-Workflows", "title": "GCP Workflows", "description": "Send Google Cloud Workflows metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Orchestration"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/workflows.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-workflows.md"}, {"id": "GCP-Storage", "title": "GCP Storage", "description": "Send Google Cloud Storage metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcpstorage.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "4LAZ8Zep644MzbT1x089GG"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-cloud-storage.md"}, {"id": "GCP-AI-Platform", "title": "GCP AI Platform", "description": "Send Google AI Platform metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcpai.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-ai-platform.md"}, {"id": "GCP-Cloud-IDS", "title": "GCP IDS", "description": "Send Google Cloud IDS metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], 
"osTags": ["windows", "linux"], "filterTags": ["GCP", "Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/ids.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-cloud-ids.md"}, {"id": "google-certificate-authority-service", "title": "Google Certificate Authority Service", "description": "Send Google Cloud Certificate Authority Service metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/certmanager.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/google-certificate-authority-service.md"}, {"id": "GCP-PubSub", "title": "GCP PubSub", "description": "Send Google Cloud PubSub metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Distributed Messaging"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/pubsub.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-pubsub.md"}, {"id": "GCP-Cloud-SQL", "title": "GCP SQL", "description": "Send Google Cloud SQL metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Database"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcpsql.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "4KUp9D8EhuMuCuLLhIZBEP"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-cloudsql.md"}, {"id": "GCP-Cloud-Router", "title": "GCP Router", "description": "Send Google Cloud Router metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcprouter.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-cloud-router.md"}, {"id": "GCP-Dataproc", "title": "GCP Dataproc", "description": "Send Google Cloud Dataproc metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Compute"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcpdataproc.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-dataproc.md"}, {"id": "GCP-VM-Manager", "title": "GCP VM Manager", "description": "Send Google Cloud VM Manager metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Compute"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/computeengine.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-vm-manager.md"}, {"id": "GCP-Cloud-Monitoring", "title": "GCP Monitoring", "description": "Send Google Cloud Monitoring metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Monitoring"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/cloudmonitoring.png", 
"bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-cloud-monitoring.md"}, {"id": "GCP-Firebase", "title": "GCP Firebase", "description": "Send Google Cloud Firebase metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/firebase.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-firebase.md"}, {"id": "GCP-Cloud-Logging", "title": "GCP Cloud Logging", "description": "Send Google Cloud Logging metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/cloudlogging.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-cloud-logging.md"}, {"id": "GPC-Apigee", "title": "GCP Apigee", "description": "Apigee, part of Google Cloud, helps design, secure, and scale application programming interfaces (APIs). Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/apigee.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-apigee.md"}, {"id": "GCP-Compute-Engine-Autoscaler", "title": "GCP Compute Engine Autoscaler", "description": "Send Google Cloud Compute Engine Autoscaler metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Compute"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/computeengine.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-compute-engine-autoscaler.md"}, {"id": "GCP-BigQuery", "title": "GCP BigQuery", "description": "Send Google Cloud BigQuery metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/bigquery.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-bigquery.md"}, {"id": "GCP-Recommendations", "title": "GCP Recommendations", "description": "Send Google Cloud Recommendations metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcpai.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-recommendations.md"}, {"id": "GCP-Datastore", "title": "GCP Datastore", "description": "Send Google Cloud Datastore metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcpdatastore.png", "bundle": [], "dataLink": 
"https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-cloud-datastore.md"}, {"id": "GCP-Cloud-Healthcare", "title": "GCP Healthcare", "description": "Send Google Cloud Healthcare metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcphealthcare.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-cloud-healthcare.md"}, {"id": "gcp-firewall-insights", "title": "GCP Firewall Insights", "description": "Send Google Cloud Firewall metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Network", "Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcpfirewall.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-firewall-insights.md"}, {"id": "Google-Cloud-Run", "title": "GCP Cloud Run", "description": "Send Google Cloud Run metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Compute"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/cloudrun.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-cloud-run.md"}, {"id": "GCP-Firestore", "title": "GCP Firestore", "description": "Send Google Cloud Firestore metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Database"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/firestore.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-firestore.md"}, {"id": "GCP-Storage-Transfer", "title": "GCP Storage Transfer Service", "description": "Send Google Cloud Storage Transfer Service metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcpstorage.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-storage-transfer.md"}, {"id": "GCP-Cloud-Armor", "title": "GCP Cloud Armor", "description": "Send Google Cloud Armor metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/cloudarmor.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-cloud-armor.md"}, {"id": "GCP-Filestore", "title": "GCP Filestore", "description": "Send Google Cloud Filestore metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcpfilestore.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "4LAZ8Zep644MzbT1x089GG"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-filestorage.md"}, {"id": "GCP-Cloud-Composer", "title": 
"GCP Cloud Composer", "description": "Send Google Cloud Composer metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Orchestration"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcpcomposer.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-cloud-composer.md"}, {"id": "gcp-memorystore-for-memcached", "title": "GCP Memorystore for Memcached", "description": "Send Google Cloud Memorystore for Memcached metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Memory Caching"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/memorystore.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "6V6DBzsX8cRZXCSvuSkHiA"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-memorystore-for-memcached.md"}, {"id": "GCP-Bigtable", "title": "GCP Bigtable", "description": "Send Google Cloud Bigtable metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/bigtable.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "z2VVwfx5bq2xD5zhQUzk6"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-bigtable.md"}, {"id": "gcp-internet-of-things", "title": "GCP Cloud Internet of Things (IoT) Core", "description": "Send Google Cloud Internet of Things (IoT) Core metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "IoT"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/googleiot.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-internet-of-things.md"}, {"id": "GCP-Dataproc-Metastore", "title": "GCP Dataproc Metastore", "description": "Send Google Cloud Dataproc Metastore metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcpdataproc.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-dataproc-metastore.md"}, {"id": "GCP-Cloud-API", "title": "GCP API", "description": "Send Google Cloud API metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gcpapis.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-cloud-api.md"}, {"id": "GCP-BigQuery-Data-Transfer-Service", "title": "GCP BigQuery Data Transfer Service", "description": "Send Google Cloud BigQuery Data Transfer Service metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["GCP", "Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/bigquery.png", "bundle": [], "dataLink": 
"https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/GCP/gcp-bigquery-data-transfer-service.md"}, {"id": "OneLogin", "title": "OneLogin", "description": "OneLogin is a cloud-based identity and access management (IAM) provider. This integration allows you to ship logs from your OneLogin account to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Access Management"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/onelogin.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Access-Management/onelogin.md"}, {"id": "Active-Directory", "title": "Active Directory via Winlogbeat", "description": "Active Directory is a directory service developed by Microsoft for Windows domain networks. This integration allows you to send Active Directory logs to your Logz.io SIEM account.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows"], "filterTags": ["Access Management"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/windows.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Access-Management/active-directory.md"}, {"id": "Auth0", "title": "Auth0", "description": "Auth0 is an easy to implement, adaptable authentication and authorization platform. Deploy this integration to ship Auth0 events from your Auth0 account to Logz.io using custom log stream via webhooks.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Access Management"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/auth0.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Access-Management/auth0.md"}, {"id": "Okta", "title": "Okta", "description": "Okta is an enterprise-grade, identity management service, built for the cloud, but compatible with many on-premises applications.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Access Management"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/okta.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Access-Management/okta.md"}, {"id": "JumpCloud", "title": "JumpCloud", "description": "JumpCloud is a cloud-based platform for identity and access management. Deploy this integration to ship JumpCloud events to Logz.io.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Access Management"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/jumpcloud.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Access-Management/jumpcloud.md"}, {"id": "nlnet-labs-name-server-daemon-network", "title": "NLnet Labs Name Server Daemon", "description": "NLnet Labs Name Server Daemon (NSD) is an authoritative DNS name server. 
Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/nsd.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Network/nlnet-labs-name-server-daemon.md"}, {"id": "OpenVPN-network", "title": "OpenVPN", "description": "OpenVPN is a virtual private network system for secure point-to-point or site-to-site connections.", "productTags": ["METRICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/openvpn.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Network/openvpn.md"}, {"id": "Synproxy-network", "title": "Synproxy", "description": "Synproxy is a TCP SYN packet proxy. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/linux.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Network/synproxy.md"}, {"id": "WireGuard-network", "title": "WireGuard", "description": "WireGuard is a communication protocol and free and open-source software that implements encrypted virtual private networks, and was designed with the goals of ease of use, high-speed performance, and low attack surface. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/wireguard-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Network/wireguard.md"}, {"id": "junos-telemetry-interface-network", "title": "Junos Telemetry Interface", "description": "Junos Telemetry Interface (JTI) is a push mechanism, with no scaling limitations, for collecting operational metrics to monitor network health. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/juniper.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Network/junos-telemetry-interface.md"}, {"id": "Mcrouter-network", "title": "Mcrouter", "description": "Mcrouter is a memcached protocol router for scaling memcached deployments. 
Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/mcrouter-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Network/mcrouter.md"}, {"id": "Cloudflare-network", "title": "Cloudflare", "description": "The Cloudflare web application firewall (WAF) protects your internet property against malicious attacks that aim to exploit vulnerabilities such as SQL injection attacks, cross-site scripting, and cross-site forgery requests.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/cloudflare.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Network/cloudflare.md"}, {"id": "Network-devices-network", "title": "Network Devices", "description": "This integration allows you to send logs from your network devices to your Logz.io account.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/network-device.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Network/network-device.md"}, {"id": "Unbound-network", "title": "Unbound", "description": "Unbound is a validating, recursive, caching DNS resolver. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/unbound-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Network/unbound-telegraf.md"}, {"id": "VPC-network", "title": "VPC", "description": "VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. This integration allows you to send these logs to your Logz.io account.", "productTags": ["LOG_ANALYTICS"], "osTags": [], "filterTags": ["Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/vpc.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Network/vpc.md"}, {"id": "Juniper-SRX-network", "title": "Juniper SRX", "description": "Juniper SRX is a networking firewall solution and services gateway. 
If you ship your Juniper firewall logs to your Logz.io Cloud SIEM, you can centralize your security ops and receive alerts about security events logged by Juniper SRX.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/juniper.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Network/juniper-srx.md"}, {"id": "confluent", "title": "Confluent Cloud", "description": "This integration allows you to ship Confluent logs to Logz.io using Cloud HTTP Sink.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/confluent.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/confluent.md"}, {"id": "cadvisor", "title": "cAdvisor", "description": "This integration lets you send cAdvisor metrics to Logz.io.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other", "Most Popular"], "logo": "https://dytvr9ot2sszz.cloudfront.net/logz-docs/shipper-logos/cadvisor.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/cadvisor.md"}, {"id": "uWSGI-data", "title": "uWSGI", "description": "uWSGI is a software application that aims at developing a full stack for building hosting services. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/uwsgi-logo1.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/uWSGI-telegraf.md"}, {"id": "Hashicorp-Consul-data", "title": "Hashicorp Consul", "description": "This project lets you configure the OpenTelemetry collector to send your Prometheus-format metrics from Hashicorp Consul to Logz.io.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/consul-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/consul.md"}, {"id": "Burrow-data", "title": "Burrow", "description": "Burrow is a monitoring application for Apache Kafka that monitors committed offsets for all consumers and calculates the status of those consumers on demand. It automatically monitors all consumers and their consumed partitions.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/kafka.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/burrow.md"}, {"id": "Microsoft-Graph-data", "title": "Microsoft Graph", "description": "Microsoft Graph is a RESTful web API that enables you to access Microsoft Cloud service resources. 
This integration allows you to collect data from Microsoft Graph API and send it to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/graph-api-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/microsoft-graph.md"}, {"id": "FPM-data", "title": "FPM", "description": "This integration sends Prometheus-format PHP-FPM metrics to Logz.io.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/phpfpm-logo.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "55uVoiaFwAreNAf7DojQZN"}, {"type": "GRAFANA_ALERT", "id": "1A2NfkQQprZqbtzQOVrcO7"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/fpm.md"}, {"id": "Telegraf", "title": "Telegraf", "description": "This integration lets you send Prometheus-format metrics to Logz.io.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other", "Most Popular"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/mascot-telegraf.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "32X5zm8qW7ByLlp1YPFkrJ"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/telegraf.md"}, {"id": "Heroku-data", "title": "Heroku", "description": "Heroku is a platform as a service (PaaS) that enables developers to build, run, and operate applications entirely in the cloud. This integration allows you to send logs from your Heroku applications to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": [], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/heroku.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/heroku.md"}, {"id": "Sysmon-data", "title": "Sysmon (System Monitor) via Winlogbeat", "description": "Sysmon (System Monitor) is a Windows system service that monitors and logs system activity to the Windows event log. It tracks process creations, network connections, and changes to file creation time.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/windows.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/sysmon.md"}, {"id": "Beats-data", "title": "Beats", "description": "Beats is an open platform that allows you to send data from hundreds or thousands of machines and systems. You can send data from your Beats to Logz.io to add a layer of observability to identify and resolve issues quickly.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/beats.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/beats.md"}, {"id": "Mailchimp-data", "title": "Mailchimp", "description": "Mailchimp is the All-In-One integrated marketing platform for small businesses. 
Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/mailchimp.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/mailchimp.md"}, {"id": "Youtube-data", "title": "Youtube", "description": "Youtube is an online video sharing and social media platform. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/youtube-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/youtube.md"}, {"id": "Intercom-data", "title": "Intercom", "description": "Intercom is a messaging platform with bots, apps, product tours and other features. Deploy this integration to ship Intercom events from your Intercom account to Logz.io using webhooks.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/intercom.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/intercom.md"}, {"id": "invoke-restmethod-data", "title": "Invoke RestMethod", "description": "Invoke-RestMethod is a command to interact with REST APIs in PowerShell. Invoke-RestMethod is a quick and easy way to test your configuration or troubleshoot your connectivity to Logz.io.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/Invoke-RestMethod.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/invoke-restmethod.md"}, {"id": "Jaeger-data", "title": "Jaeger", "description": "Jaeger is an open-source software that can help you monitor and troubleshoot problems on microservices. Integrate Jaeger with Logz.io to gain more observability into your data, identify if and when issues occur, and resolve them quickly and easily.", "productTags": ["TRACING"], "osTags": ["windows", "linux"], "filterTags": ["Other", "Most Popular"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/jaeger.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/jaeger.md"}, {"id": "Fluentd-data", "title": "Fluentd", "description": "Fluentd is a data collector, which unifies the data collection and consumption. This integration allows you to use Fluentd to send logs to your Logz.io account.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Other", "Most Popular"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/fluentd.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/fluentd.md"}, {"id": "Salesforce-Commerce-Cloud-data", "title": "Salesforce Commerce Cloud", "description": "Salesforce Commerce Cloud is a scalable, cloud-based software-as-a-service (SaaS) ecommerce platform. 
This integration allows you to collect data from Salesforce Commerce Cloud and send it to your Logz.io account.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/salesforce-commerce-cloud-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/salesforce-commerce-cloud.md"}, {"id": "Dovecot-data", "title": "Dovecot", "description": "Dovecot is an open-source IMAP and POP3 server for Unix-like operating systems, written primarily with security in mind. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/dovecot.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/dovecot.md"}, {"id": "Rsyslog-data", "title": "Rsyslog", "description": "Rsyslog is an open-source software utility used on most UNIX and Unix-like computer systems. It offers a great lightweight service to consolidate logs. With Logz.io, you can monitor these logs, identify if and when issues arise, and solve them before they impact your customers.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Other", "Most Popular"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/linux.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/rsyslog.md"}, {"id": "cURL-data", "title": "cURL", "description": "cURL is a command line utility for transferring data. 
cURL is a quick and easy way to test your configuration or troubleshoot your connectivity to Logz.io.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Other", "Most Popular"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/curl.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/curl.md"}, {"id": "Axonius-data", "title": "Axonius", "description": "This integration sends system logs from your Axonius platform to Logz.io.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/axonius.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/axonius.md"}, {"id": "Tengine-data", "title": "Tengine", "description": "Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/tengine-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/tengine.md"}, {"id": "prometheus-alerts-migrator", "title": "Prometheus Alerts Migrator", "description": "This Helm chart deploys the Prometheus Alerts Migrator as a Kubernetes controller, which automates the migration of Prometheus alert rules to Logz.io's alert format, facilitating monitoring and alert management in a Logz.io integrated environment.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://dytvr9ot2sszz.cloudfront.net/logz-docs/shipper-logos/prometheusio-icon.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/prometheus-alerts-migrator.md"}, {"id": "Bond-data", "title": "Bond", "description": "This integration allows you to collect metrics from all bond interfaces in your network. 
Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/bond-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/bond.md"}, {"id": "Apache-Aurora-data", "title": "Apache Aurora", "description": "Collect Aurora metrics using Telegraf", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aurora-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/apache-aurora.md"}, {"id": "Prometheus-remote-write", "title": "Prometheus Remote Write", "description": "This integration lets you send Prometheus-format metrics to Logz.io.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other", "Most Popular"], "logo": "https://dytvr9ot2sszz.cloudfront.net/logz-docs/shipper-logos/prometheusio-icon.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/prometheus.md"}, {"id": "Fluent-Bit-data", "title": "Fluent Bit", "description": "Fluent Bit is an open source Log Processor and Forwarder which allows you to collect any data like metrics and logs from different sources. This integration allows you to send logs from Fluent Bit running as a standalone app and forward them to your Logz.io account.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/fluent-bit.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/fluent-bit.md"}, {"id": "Salesforce-data", "title": "Salesforce", "description": "Salesforce is a customer relationship management solution. The Account sObject is an abstraction of the account record and holds the account field information in memory as an object. This integration allows you to collect sObject data from Salesforce and send it to your Logz.io account.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/salesforce-commerce-cloud-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/salesforce.md"}, {"id": "Microsoft-365-data", "title": "Microsoft 365", "description": "Deploy this integration to send Unified Audit Logging logs from Microsoft 365 to Logz.io.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/office365.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/microsoft-365.md"}, {"id": "Disque-data", "title": "Disque", "description": "Disque is a distributed message broker. 
Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/disque-telegraf.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/disque.md"}, {"id": "BigBlueButton-data", "title": "BigBlueButton", "description": "BigBlueButton is a free software web conferencing system for Linux servers. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/bigbluebutton-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/bigbluebutton.md"}, {"id": "NVIDIA-data", "title": "NVIDIA", "description": "NVIDIA System Management Interface (nvidia-smi) is a command line utility, based on top of the NVIDIA Management Library (NVML), intended to aid in the management and monitoring of NVIDIA GPU devices. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/nvidia.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/nvidia.md"}, {"id": "IPMI-data", "title": "IPMI", "description": "IPMI is a standardized computer system interface used by system administrators to manage a computer system and monitor its operation. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/ipmi.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/ipmi.md"}, {"id": "BUNNY-NET-data", "title": "BUNNY.NET", "description": "BUNNY.NET is a content delivery network offering features and performance with a fast global network. This document describes how to send system logs from your bunny.net pull zones to Logz.io.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/bunny.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/bunny-net.md"}, {"id": "Filebeat-data", "title": "Filebeat", "description": "Filebeat is often the easiest way to get logs from your system to Logz.io. Logz.io has a dedicated configuration wizard to make it simple to configure Filebeat. 
If you already have Filebeat and you want to add new sources, check out our other shipping instructions to copy&paste just the relevant changes from our code examples.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/beats.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/filebeat.md"}, {"id": "OpenTelemetry-data", "title": "OpenTelemetry", "description": "OpenTelemetry is a collection of APIs, SDKs, and tools to instrument, generate, collect, and export telemetry data, including logs, metrics, and traces. Logz.io helps you identify anomalies and issues in the data so you can resolve them quickly and easily.", "productTags": ["LOG_ANALYTICS", "METRICS", "TRACING"], "osTags": ["windows", "linux"], "filterTags": ["Other", "Most Popular"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/opentelemetry-icon-color.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "2Q2f3D9WiUgMIyjlDXi0sA"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/opentelemetry.md"}, {"id": "Vector-data", "title": "Vector", "description": "Vector by Datadog is a lightweight, ultra-fast tool for building observability pipelines. Deploy this integration to send logs from your Vector tools to your Logz.io account.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/vector.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/vector.md"}, {"id": "Redfish-data", "title": "Redfish", "description": "DMTF's Redfish is a standard designed to deliver simple and secure management for converged, hybrid IT and the Software Defined Data Center (SDDC). Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/redfish-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/redfish.md"}, {"id": "Aiven-data", "title": "Aiven", "description": "Aiven is a cloud service provider that specializes in managed open-source database, messaging, and event streaming solutions.", "productTags": ["LOG_ANALYTICS"], "osTags": [], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aiven-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/aiven.md"}, {"id": "Logstash-data", "title": "Logstash", "description": "Logstash is an open-source server-side data processing pipeline. This integration can ingest data from multiple sources. 
With Logz.io, you can monitor Logstash instances and quickly identify if and when issues arise.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/logstash_temp.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/logstash.md"}, {"id": "Phusion-Passenger-data", "title": "Phusion Passenger", "description": "Phusion Passenger is a free web server and application server with support for Ruby, Python and Node.js. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/phfusion-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Other/phusion-passenger.md"}, {"id": "API-status-metrics-synthetic", "title": "API Status Metrics", "description": "Deploy this integration to collect API status metrics of user API and send them to Logz.io.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Synthetic Monitoring"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/apii.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "1RCzCjjByhyz0bJ4Hmau0y"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Synthetic-Monitoring/api-status-metrics.md"}, {"id": "synthetic-link-detector-synthetic", "title": "Synthetic Link Detector", "description": "Deploy this integration to collect data on broken links in a web page, and to get additional data about the links.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Synthetic Monitoring"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/link.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "4l4xVZhvqsrJWO7rZwOxgx"}, {"type": "GRAFANA_DASHBOARD", "id": "1NiBMzN5DvQZ8BjePpUtvQ"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Synthetic-Monitoring/synthetic-link-detector.md"}, {"id": "Ping-statistics-synthetic", "title": "Ping Statistics", "description": "Deploy this integration to collect metrics of ping statistics collected from your preferred web addresses and send them to Logz.io.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Synthetic Monitoring"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/ping-logo.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "1rNO8llFw8Cm9N8U3M3vCQ"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Synthetic-Monitoring/ping-statistics.md"}, {"id": "Telegraf-windows-performance", "title": "Telegraf Windows Performance", "description": "Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/windows.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "3AND5wMrjcMC9ngDTghmHx"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Operating-Systems/telegraf-windows-performance.md"}, {"id": 
"Telegraf-Windows-services", "title": "Telegraf Windows Services", "description": "Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/windows.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Operating-Systems/telegraf-windows-services.md"}, {"id": "localhost-mac", "title": "Mac Operating System", "description": "Send your Mac machine logs and metrics to Logz.io to monitor and manage your Mac data, allowing you to identify anomalies, investigate incidents, get to the root cause of any issue, and quickly resolve it.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["mac"], "filterTags": ["Operating Systems", "Most Popular"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/mac-os.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "2gsQP2xRef7dkwt8pxWieo"}, {"type": "GRAFANA_ALERT", "id": "hWld33IEO6gZMpp2e4vs0"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Operating-Systems/localhost-mac.md"}, {"id": "Linux-data", "title": "Linux Operating System", "description": "Send your Linux machine logs and metrics to Logz.io to monitor and manage your Linux data, allowing you to identify anomalies, investigate incidents, get to the root cause of any issue, and quickly resolve it.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["linux"], "filterTags": ["Operating Systems", "Most Popular"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/linux.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "6hb5Nww0ar4SXoF92QxMx"}, {"type": "GRAFANA_ALERT", "id": "6y7xNsm1RXlXAFUAXLyOpZ"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Operating-Systems/linux.md"}, {"id": "Windows", "title": "Windows Operating System", "description": "Send your Windows machine logs and metrics to Logz.io to monitor and manage your Windows data, allowing you to identify anomalies, investigate incidents, get to the root cause of any issue, and quickly resolve it.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows"], "filterTags": ["Operating Systems", "Most Popular"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/windows.svg", "bundle": [{"type": "LOG_ALERT", "id": "72Yry8pK5OfiGdPOV2y9RZ"}, {"type": "LOG_ALERT", "id": "4Mkw0OICZz7xnZZjlGWX9x"}, {"type": "GRAFANA_DASHBOARD", "id": "7vydxtpnlKLILHIGK4puX5"}, {"type": "GRAFANA_ALERT", "id": "4GVNTAqeH4lSRQBfN7dCXQ"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Operating-Systems/windows.md"}, {"id": "Amazon-EC2-Auto-Scaling", "title": "AWS EC2 Auto Scaling", "description": "This integration sends your Amazon EC2 Auto Scaling logs and metrics to Logz.io.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Compute"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-ec2-auto-scaling.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "2VNLppOm4XOFwVouv8dorr"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-ec2-auto-scaling.md"}, {"id": "AWS-Control-Tower", "title": "AWS Control Tower", "description": "AWS Control Tower is a tool to 
set up and govern a secure, multi-account AWS environment.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Access Management"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-control-tower.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "7bHNddlAK5q8Iya7xIhbbU"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-control-tower.md"}, {"id": "AWS-SES", "title": "AWS SES", "description": "This integration creates a Kinesis Data Firehose delivery stream that links to your Amazon SES metrics stream and then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-ses.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "6YXSlRl6RxMuGPiTTO9NHg"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-ses.md"}, {"id": "AWS-API-Gateway", "title": "AWS API Gateway", "description": "This integration creates a Kinesis Data Firehose delivery stream that links to your Amazon API Gateway metrics stream and then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags.", "productTags": ["METRICS", "LOG_ANALYTICS"], "osTags": [], "filterTags": ["AWS", "Access Management", "Most Popular"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-api-gateway.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "7234Vgs9rITAlaHJH5iqOw"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-api-gateway.md"}, {"id": "Amazon-Classic-ELB", "title": "AWS Classic ELB", "description": "Send your AWS Classic ELB logs and metrics to Logz.io.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Load Balancer"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-classic-elb.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "5oFBj0BIKo4M5XLZpwjSgl"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-classic-elb.md"}, {"id": "AWS-EBS", "title": "AWS EBS", "description": "Send your Amazon EBS metrics to Logz.io.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-ebs.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "6WqwxluZ76GXXPut0GHGKH"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-ebs.md"}, {"id": "AWS-Network-ELB", "title": "AWS Network ELB", "description": "Send your AWS Network ELB logs and metrics to Logz.io.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Load Balancer"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/elb-network.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "5pihdWdmBYQ1i7AbU9po2R"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-network-elb.md"}, {"id": 
"AWS-S3-Bucket", "title": "AWS S3 Bucket", "description": "Amazon S3 stores data within buckets, allowing you to send your AWS logs and metrics to Logz.io. S3 buckets lets you store and access large amounts of data and is often used for big data analytics, root cause analysis, and more.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Data Store", "Other", "Most Popular"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-s3.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "1Pm3OYbu1MRGoELc2qhxQ1"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-s3-bucket.md"}, {"id": "AWS-MSK", "title": "AWS MSK", "description": "This integration sends your Amazon MSK logs and metrics to Logz.io.", "productTags": ["METRICS", "LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Distributed Messaging"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-msk.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "2EGM4H9wch68bVy1vm4oxb"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-msk.md"}, {"id": "AWS-App-ELB", "title": "AWS App ELB", "description": "Send your AWS Application ELB logs and metrics to Logz.io.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": [], "filterTags": ["AWS", "Load Balancer"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-app-elb.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "5BZ6El3juQkCKCIuGm1oyC"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-app-elb.md"}, {"id": "Lambda-extension-go", "title": "Traces from Go on AWS Lambda using OpenTelemetry", "description": "This integration to auto-instrument your Go application running on AWS Lambda and send the traces to your Logz.io account.", "productTags": ["TRACING"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Compute"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/AWS-Lambda.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-lambda-extension-go.md"}, {"id": "AWS-cross-account", "title": "AWS Cross Account", "description": "Deploy this integration to simultaneously ship logs from multiple AWS accounts to Logz.io.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Access Management"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-cloudwatch.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-cross-account.md"}, {"id": "AWS-Amplify", "title": "AWS Amplify", "description": "This is an integration that collects Amplify access logs and sends them to Logz.io.", "productTags": ["LOG_ANALYTICS"], "osTags": [], "filterTags": ["AWS", "CI/CD"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/amplify.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-amplify.md"}, {"id": "AWS-CloudFormation", "title": "AWS CloudFormation", "description": "Send your Amazon CloudFront metrics to Logz.io.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": [], "filterTags": ["AWS", "Network"], "logo": 
"https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-cloudformation.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-cloudformation.md"}, {"id": "aws-vpn", "title": "AWS VPN", "description": "This integration creates a Kinesis Data Firehose delivery stream that links to your Amazon VPN metrics stream and then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-vpn.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "4nSubW6qKSqV8Pq367JQca"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-vpn.md"}, {"id": "AWS-Route-53", "title": "AWS Route 53", "description": "This integration sends your Amazon Route 53 logs and metrics to Logz.io.", "productTags": ["METRICS", "LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/Amazon-Route-53.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "Tnb9WjjHnI3COgp08Wsin"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-route53.md"}, {"id": "AWS-Kafka", "title": "Amazon Managed Streaming for Apache Kafka (MSK)", "description": "Send your Amazon Managed Streaming for Apache Kafka (MSK) metrics to Logz.io.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Distributed Messaging"], "logo": "https://dytvr9ot2sszz.cloudfront.net/logz-docs/shipper-logos/aws-msk.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "7bHNddlAK5q8Iya7xIhbbU"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-kafka.md"}, {"id": "Lambda-extension-node", "title": "Traces from Node.js on AWS Lambda using OpenTelemetry", "description": "This integration to auto-instrument your Node.js application running on AWS Lambda and send the traces to your Logz.io account.", "productTags": ["TRACING"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Compute"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/AWS-Lambda.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-lambda-extension-node.md"}, {"id": "AWS-Security-Hub", "title": "AWS Security Hub", "description": "This integration ships events from AWS Security Hub to Logz.io. 
It will automatically deploy resources to your AWS Account.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-security-hub.md"}, {"id": "Lambda-extensions", "title": "Lambda Extensions", "description": "The Logz.io Lambda extension for logs uses the AWS Extensions API and AWS Logs API to send your Lambda function logs directly to your Logz.io account.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Compute"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/AWS-Lambda.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-lambda-extensions.md"}, {"id": "AWS-RDS", "title": "AWS RDS", "description": "This integration sends AWS RDS logs and metrics to your Logz.io account.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Database"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-rds.svg", "bundle": [{"type": "OSD_DASHBOARD", "id": "2IzSk7ZLwhRFwaqAQg4e2U"}, {"type": "GRAFANA_DASHBOARD", "id": "5azSSei1AhiJPCV7yptVI7"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-rds.md"}, {"id": "AWS-Redshift", "title": "AWS Redshift", "description": "This integration creates a Kinesis Data Firehose delivery stream that links to your Amazon Redshift metrics stream and then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/Amazon-Redshift.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-redshift.md"}, {"id": "AWS-S3-Access", "title": "AWS S3 Access", "description": "Amazon S3 Access Logs provide detailed records about requests that are made to your S3 bucket. 
This integration allows you to send these logs to your Logz.io account.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-s3.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-s3-access.md"}, {"id": "aws-SQS", "title": "AWS SQS", "description": "This integration sends your Amazon SQS logs and metrics to Logz.io.", "productTags": ["METRICS", "LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Distributed Messaging"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-sqs.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "1pEmJtP0bwd5WuuAfEe5cc"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-sqs.md"}, {"id": "AWS-ECS-Fargate", "title": "AWS ECS Fargate", "description": "AWS Fargate is a serverless compute engine for building applications without managing servers.", "productTags": ["LOG_ANALYTICS", "METRICS", "TRACING"], "osTags": [], "filterTags": ["AWS", "Compute", "Containers"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-fargate.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-ecs-fargate.md"}, {"id": "AWS-FSx", "title": "AWS FSx", "description": "This integration sends your Amazon FSx logs and metrics to Logz.io.", "productTags": ["METRICS", "LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Data Store"], "logo": "https://dytvr9ot2sszz.cloudfront.net/logz-docs/shipper-logos/aws-fsx.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "6rVrCJsVXljHWg7wZo51HT"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-fsx.md"}, {"id": "AWS-AppRunner", "title": "AWS AppRunner", "description": "Send your Amazon AppRunner metrics to Logz.io", "productTags": ["METRICS"], "osTags": [], "filterTags": ["AWS", "Compute"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-fusion.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-apprunner.md"}, {"id": "AWS-mq", "title": "AWS MQ", "description": "This integration sends your Amazon MQ logs and metrics to Logz.io.", "productTags": ["METRICS", "LOG_ANALYTICS"], "osTags": [], "filterTags": ["AWS", "Distributed Messaging"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-mq.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "1xglfXxBurNsVZIla5zRnS"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-mq.md"}, {"id": "AWS-WAF", "title": "AWS WAF", "description": "Ship your AWS WAF logs to Logz.io.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/AWS-WAF.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-waf.md"}, {"id": "AWS-Cost-and-Usage-Reports", "title": "AWS Cost and Usage Reports", "description": "AWS Cost and Usage Reports function tracks your AWS usage and provides estimated charges associated with your account.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], 
"filterTags": ["AWS", "Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-cost-and-usage-report.md"}, {"id": "AWS-EC2", "title": "AWS EC2", "description": "Send your Amazon EC2 logs and metrics to Logz.io.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Compute"], "recommendedFor": ["DevOps Engineer"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-ec2.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "2VNLppOm4XOFwVouv8dorr"}, {"type": "GRAFANA_ALERT", "id": "hWld33IEO6gZMpp2e4vs0"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-ec2.md"}, {"id": "Amazon-ElastiCache", "title": "AWS ElastiCache", "description": "Send your Amazon ElastiCache metrics to Logz.io.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Memory Caching"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/Amazon-ElastiCache.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-elasticache.md"}, {"id": "AWS-NAT", "title": "AWS NAT", "description": "This integration creates a Kinesis Data Firehose delivery stream that links to your Amazon NAT metrics stream and then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-nat.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "1EhgOtbCtQxzsWh6FJjme8"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-nat.md"}, {"id": "AWS-ElastiCache-Redis", "title": "AWS ElastiCache for Redis", "description": "Send your Amazon ElastiCache for Redis metrics to Logz.io.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Memory Caching"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-redis-logo.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "2iTJV7AkvtHDJauaEzYobs"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-elasticache-redis.md"}, {"id": "GuardDuty", "title": "GuardDuty", "description": "This integration sends GuardDuty logs.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-guardduty.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-guardduty.md"}, {"id": "AWS-CloudTrail", "title": "AWS CloudTrail", "description": "AWS Cloudtrail enables governance, compliance, operational auditing, and risk auditing of your Amazon Web Services account. 
Integrate it with Logz.io to monitor your Cloudtrail logs and metrics and know if and when issues arise.", "productTags": ["LOG_ANALYTICS", "METRICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Security", "Most Popular"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-cloudtrail.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-cloudtrail.md"}, {"id": "AWS-Lambda", "title": "AWS Lambda", "description": "AWS Lambda serverless compute service runs code in response to events and automatically manages compute resources. Send these events to Logz.io to identify anomalies and issues and quickly solve them.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Compute"], "recommendedFor": ["DevOps Engineer"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/lambda-nodejs2.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "2YLu810AXPlVwzQen8vff1"}, {"type": "GRAFANA_ALERT", "id": "4iuPoRsdogZKww8d0NO7er"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-lambda.md"}, {"id": "AWS-SNS", "title": "AWS SNS", "description": "This integration creates a Kinesis Data Firehose delivery stream that links to your Amazon SNS metrics stream and then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Distributed Messaging"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-sns.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "3G7HxOI10AvzpqGXQNfawA"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-sns.md"}, {"id": "AWS-ECS", "title": "AWS ECS", "description": "Send your Amazon ECS logs and metrics to Logz.io.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Compute", "Containers"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-ecs.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "4pY46CjyNMoHWGB3gjgQWd"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-ecs.md"}, {"id": "AWS-CloudFront", "title": "AWS CloudFront", "description": "Send your Amazon CloudFront metrics to Logz.io.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": [], "filterTags": ["AWS", "Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-cloudfront.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "3MJWDTivgQCNz3DQIj3Kry"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-cloudfront.md"}, {"id": "aws-eks", "title": "AWS EKS", "description": "Send Kubernetes logs, metrics and traces to Logz.io.", "productTags": ["LOG_ANALYTICS", "METRICS", "TRACING"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Orchestration"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-eks.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-eks.md"}, {"id": "AWS-EFS", "title": "AWS EFS", "description": "Send your Amazon EFS metrics to Logz.io.", "productTags": ["METRICS"], "osTags": 
["windows", "linux"], "filterTags": ["AWS", "Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-efs.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "7IUpQgVmcbkHV8zAGuLHIL"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-efs.md"}, {"id": "AWS-Kinesis-Firehose", "title": "AWS Kinesis Data Firehose", "description": "This integration sends your Amazon Kinesis Data Firehose logs and metrics to Logz.io.", "productTags": ["METRICS", "LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Distributed Messaging"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/Amazon-Kinesis.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "6c42S4dUE98HajLbiuaShI"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-kinesis-firehose.md"}, {"id": "AWS-DynamoDB", "title": "AWS DynamoDB", "description": "This integration sends your Amazon DynamoDB logs and metrics to Logz.io.", "productTags": ["METRICS", "LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Database"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-dynamodb.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "1SCWsYpcgBc9DmjM1vELkf"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-dynamodb.md"}, {"id": "AWS-Athena", "title": "AWS Athena", "description": "This integration sends your Amazon Athena logs and metrics to Logz.io.", "productTags": ["METRICS", "LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["AWS", "Data Store"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aws-athena.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/AWS/aws-athena.md"}, {"id": "Apache-ZooKeeper-orchestration", "title": "Apache ZooKeeper", "description": "Apache ZooKeeper is an open-source server for highly reliable distributed coordination of cloud applications.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Orchestration"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/zookeeper-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Orchestration/apache-zookeeper.md"}, {"id": "Istio-orchestration", "title": "Istio", "description": "Deploy this integration to send traces from your Istio service mesh layers to Logz.io via the OpenTelemetry collector.", "productTags": ["TRACING"], "osTags": ["windows", "linux"], "filterTags": ["Orchestration"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/istio.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Orchestration/istio-traces.md"}, {"id": "Beanstalkd-orchestration", "title": "Beanstalkd", "description": "Beanstalkd is a simple, fast work queue. 
Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Orchestration"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/beanstalk-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Orchestration/beanstalkd.md"}, {"id": "Apache-Mesos-orchestration", "title": "Apache Mesos", "description": "Apache Mesos is an open-source project to manage computer clusters.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Orchestration"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/mesos-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Orchestration/apache-mesos.md"}, {"id": "DC-OS-orchestration", "title": "DC/OS", "description": "DC/OS (the Distributed Cloud Operating System) is an open-source, distributed operating system based on the Apache Mesos distributed systems kernel. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Orchestration"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/dcos.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Orchestration/ds-os.md"}, {"id": "Azure-Security-Center", "title": "Azure Security Center", "description": "You can ship security logs available from the Microsoft Graph APIs with Logzio-api-fetcher.", "productTags": ["SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Azure", "Access Management"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/azure.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Azure/azure-security-center.md"}, {"id": "Azure-Diagnostic-Logs", "title": "Azure Diagnostic Logs", "description": "Ship your Azure diagnostic logs using an automated deployment process.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Azure", "Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/azure-monitor.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Azure/azure-diagnostic-logs.md"}, {"id": "azure-active-Directory", "title": "Azure Active Directory", "description": "You can ship logs available from the Microsoft Graph APIs with Logzio-MSGraph.", "productTags": ["SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Azure", "Access Management"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/azure.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Azure/azure-active-directory.md"}, {"id": "Azure-NSG", "title": "Azure NSG", "description": "Enable an Azure function to forward NSG logs from your Azure Blob Storage account to your Logz.io account.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Azure", "Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/nsg-logo.png", "bundle": [], "dataLink": 
"https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Azure/azure-nsg.md"}, {"id": "Azure-Activity-logs", "title": "Azure Activity Logs", "description": "Ship your Azure activity logs using an automated deployment process.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Azure", "Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/azure-monitor.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Azure/azure-activity-logs.md"}, {"id": "Azure-Native", "title": "Azure Native Logs", "description": "This integration uses Logz.io's Cloud-Native Observability Platform to monitor the health and performance of your Azure environment through the Azure portal.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Azure", "Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/Azure-native-integration2.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Azure/azure-native.md"}, {"id": "azure-office365-message-trace-reports", "title": "Microsoft Azure Office365 Message Trace Reports (mail reports)", "description": "You can ship mail report logs available from the Microsoft Office365 Message Trace API with Logzio-api-fetcher.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Azure", "Access Management"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/azure.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Azure/azure-mail-reports.md"}, {"id": "Azure-blob-trigger", "title": "Azure Blob Trigger", "description": "Azure Blob Storage is Microsoft's object storage solution for the cloud. Deploy this integration to forward logs from your Azure Blob Storage account to Logz.io using an automated deployment process via the trigger function. Each new log in the container path inside the storage account (including sub directories), will trigger the Logz.io function that will ship the file content to Logz.io.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Azure", "Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/azure-blob.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Azure/azure-blob-trigger.md"}, {"id": "Azure-VM-Extension", "title": "Azure VM Extension", "description": "Extensions are small applications that provide post-deployment configuration and automation on Azure VMs. You can install Logz.io agents on Azure virtual machines as an extension. 
This will allow you to ship logs directly from your VM to your Logz.io account.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Azure", "Compute"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/azure-vm.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Azure/azure-vm-extension.md"}, {"id": "azure-graph", "title": "Microsoft Azure Graph API", "description": "You can ship logs available from the Microsoft Graph APIs with Logzio-api-fetcher.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Azure", "Access Management"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/azure.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Azure/azure-graph.md"}, {"id": "Apache-Cassandra", "title": "Apache Cassandra", "description": "Apache Cassandra is an open source NoSQL distributed database management system designed to process large amounts of data across commodity servers.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Database"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/cassandra-logo.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "5oCUt52hGJu6LmVGHPOktr"}, {"type": "GRAFANA_DASHBOARD", "id": "6J2RujMalRK3oC4y0r88ax"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Database/apache-cassandra.md"}, {"id": "RavenDB", "title": "RavenDB", "description": "RavenDB is an open source document-oriented NoSQL designed especially for the .NET/Windows platform. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Database"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/ravendb-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Database/ravendb.md"}, {"id": "Redis", "title": "Redis", "description": "Redis is an in-memory data structure store, used as a distributed, in-memory key\u2013value database, cache and message broker, with optional durability. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Database"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/redis-logo.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "1sS7i6SyMz35RIay8NRYGp"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Database/redis.md"}, {"id": "ClickHouse", "title": "ClickHouse", "description": "ClickHouse is a fast open-source column-oriented database management system that allows generating analytical data reports in real-time using SQL queries. 
Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Database"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/clickhouse.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Database/clickhouse-telegraf.md"}, {"id": "Aerospike", "title": "Aerospike", "description": "Aerospike is a flash-optimized and in-memory open source distributed key value NoSQL database. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Database"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/aerospike-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Database/aerospike.md"}, {"id": "Riak", "title": "Riak", "description": "Riak is a distributed NoSQL key-value data store that offers high availability, fault tolerance, operational simplicity, and scalability. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Database"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/riak-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Database/riak.md"}, {"id": "MySQL", "title": "MySQL", "description": "MySQL is an open-source relational database management system. Filebeat is often the easiest way to get logs from your system to Logz.io. Logz.io has a dedicated configuration wizard to make it simple to configure Filebeat. If you already have Filebeat and you want to add new sources, check out our other shipping instructions to copy&paste just the relevant changes from our code examples.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Database"], "recommendedFor": ["Software Engineer"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/mysql.svg", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "2zMVEOdWnIMgOPATDLByX7"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Database/mysql.md"}, {"id": "MarkLogic", "title": "MarkLogic", "description": "MarkLogic is a NoSQL database platform that is used in publishing, government, finance and other sectors, with hundreds of large-scale systems in production. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Database"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/marklogic.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Database/marklogic.md"}, {"id": "Microsoft-SQL-Server", "title": "Microsoft SQL Server", "description": "Microsoft SQL Server is a relational database management system developed by Microsoft. 
Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows"], "filterTags": ["Database"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/mysql.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Database/microsoft-sql-server.md"}, {"id": "PostgreSQL", "title": "PostgreSQL", "description": "PostgreSQL is a free and open-source relational database management system emphasizing extensibility and SQL compliance. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Database"], "recommendedFor": ["Software Engineer"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/postgresql-logo.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "3L7cjHptO2CFcrvpqGCNI0"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Database/postgresql.md"}, {"id": "PgBouncer", "title": "PgBouncer", "description": "PgBouncer is a lightweight connection pooler for PostgreSQL. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Database"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/pgbouncer.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Database/pgbouncer.md"}, {"id": "Puppet", "title": "Puppet", "description": "Puppet is a software configuration management tool which includes its own declarative language to describe system configuration. Deploy this integration to send logs from your Puppet applications to your Logz.io account.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["CI/CD"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/puppet.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/CI-CD/puppet.md"}, {"id": "GitLab", "title": "GitLab", "description": "GitLab is a DevOps platform that combines the ability to develop, secure, and operate software in a single application. This integration allows you to send logs from your GitLab platform to your Logz.io account.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["CI/CD"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/gitlab.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/CI-CD/gitlab.md"}, {"id": "Argo-CD", "title": "Argo CD", "description": "Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["CI/CD"], "recommendedFor": ["DevOps Engineer"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/argo.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "6Gx8npV306IL2WZ4SJRIN4"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/CI-CD/argo-cd.md"}, {"id": "Bitbucket", "title": "Bitbucket", "description": "Bitbucket is a Git-based source code repository hosting service. 
This integration allows you to ship logs from your Bitbucket repository to your Logz.io account.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["CI/CD"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/bitbucket.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/CI-CD/bitbucket.md"}, {"id": "GitHub", "title": "GitHub", "description": "This integration enables you to collect logs and metrics from GitHub.", "productTags": ["LOG_ANALYTICS", "METRICS"], "osTags": ["windows", "linux"], "filterTags": ["CI/CD"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/github.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/CI-CD/github.md"}, {"id": "Jenkins", "title": "Jenkins", "description": "Jenkins is an automation server for building, testing, and deploying software. This integration allows you to send logs and metrics from your Jenkins servers to your Logz.io account.", "productTags": ["METRICS", "LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["CI/CD"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/jenkins.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "7bmikAb2xNPTy7PESlBqXY"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/CI-CD/jenkins.md"}, {"id": "TeamCity", "title": "TeamCity", "description": "TeamCity is a general-purpose CI/CD solution that allows the most flexibility for all sorts of workflows and development practices. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["CI/CD"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/TeamCity-logo.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "1mdHqslZMi4gXaNCLZo9G1"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/CI-CD/teamcity.md"}, {"id": "Suricata", "title": "Suricata", "description": "Suricata is an open source-based intrusion detection system and intrusion prevention system. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/suricata-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/suricata.md"}, {"id": "Windows-Defender", "title": "Windows Defender via Winlogbeat", "description": "This integration enables you to send Windows Defender events to Logz.io using Winlogbeat.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/windows-defender.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/windows-defender.md"}, {"id": "Palo-Alto-Networks", "title": "Palo Alto Networks", "description": "Palo Alto Networks provides advanced protection, security and consistency across locations and clouds. 
This integration allows you to send logs from your Palo Alto Networks applications to your Logz.io SIEM account.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Security", "Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/palo-alto-networks.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/palo-alto-networks.md"}, {"id": "Crowdstrike", "title": "Crowdstrike", "description": "Crowdstrike is a SaaS (software as a service) system security solution. Deploy this integration to ship Crowdstrike events from your Crowdstrike account to Logz.io using FluentD.", "productTags": ["SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/crowdstrike-logo.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/crowdstrike.md"}, {"id": "ModSecurity", "title": "ModSecurity", "description": "ModSecurity, sometimes called Modsec, is an open-source web application firewall. This integration allows you to send ModSecurity logs to your Logz.io SIEM account.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/modsec.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/modsecurity.md"}, {"id": "Cisco-SecureX", "title": "Cisco SecureX", "description": "Cisco SecureX connects the breadth of Cisco's integrated security portfolio and your infrastructure. This integration allows you to collect data from Cisco SecureX API and send it to your Logz.io account.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/securex-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/cisco-securex.md"}, {"id": "Fail2Ban", "title": "Fail2Ban", "description": "Fail2Ban is an intrusion prevention software framework that protects computer servers from brute-force attacks. This integration allows you to send Fail2ban logs to your Logz.io SIEM account.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/fail2ban.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/fail2ban.md"}, {"id": "Cisco-ASA", "title": "Cisco ASA", "description": "Cisco ASA is a security device that combines firewall, antivirus, intrusion prevention, and virtual private network (VPN) capabilities.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Network", "Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/cisco.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/cisco-asa.md"}, {"id": "McAfee-ePolicy-Orchestrator", "title": "McAfee ePolicy Orchestrator", "description": "McAfee ePolicy Orchestrator (McAfee ePO) software centralizes and streamlines management of endpoint, network, data security, and compliance solutions. 
This integration allows you to send McAfee ePolicy Orchestrator logs to your Logz.io SIEM account.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/mcafee.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/mcafee-epolicy-orchestrator.md"}, {"id": "Zeek", "title": "Zeek", "description": "Zeek is a free and open-source software network analysis framework. This integration allows you to send Zeek logs to your Logz.io SIEM account.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/zeek.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/zeek.md"}, {"id": "FortiGate", "title": "FortiGate", "description": "FortiGate units are installed as a gateway or router between two networks. This integration allows you to send FortiGate logs to your Logz.io SIEM account.", "productTags": ["LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Network", "Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/fortinet.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/fortigate.md"}, {"id": "ESET", "title": "ESET", "description": "ESET provides anti-virus and firewall solutions. This integration allows you to send ESET logs to your Logz.io SIEM account.", "productTags": ["SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/eset.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/eset.md"}, {"id": "Trend-micro", "title": "Trend Micro", "description": "This integration enables users to monitor and analyze cybersecurity threats and events in real-time, enhancing their overall security visibility and incident response capabilities.", "productTags": ["METRICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/trendmicro-small-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/trend-micro.md"}, {"id": "Trivy", "title": "Trivy", "description": "This integration utilizes the logzio-trivy Helm Chart to deploy the trivy-Operator Helm Chart, which scans the cluster and creates Trivy reports, and a deployment that looks for the Trivy reports in the cluster, processes them, and sends them to Logz.io.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/trivy-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/trivy.md"}, {"id": "Alcide-kAudit", "title": "Alcide kAudit", "description": "Alcide kAudit is a security service for monitoring Kubernetes audit logs, and easily identifying abnormal administrative activity and compromised Kubernetes resources.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": 
"https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/alcide.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/alcide-kaudit.md"}, {"id": "Falco", "title": "Falco", "description": "Falco is a CNCF-approved container security and Kubernetes threat detection engine that logs illegal container activity at runtime. Shipping your Falco logs to your Cloud SIEM can help you monitor your Kubernetes workloads for potentially malicious behavior. This can help you catch attempts to remove logging data from a container, to run recon tools inside a container, or add potentially malicious repositories to a container.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/falco-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/falco.md"}, {"id": "HashiCorp-Vault", "title": "HashiCorp Vault", "description": "HashiCorp Vault secures, stores, and tightly controls access to tokens, passwords, certificates, API keys, and other secrets in modern computing. This integration allows you to send HashiCorp Vault logs to your Logz.io SIEM account.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/hashicorp-vault.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/hashicorp-vault.md"}, {"id": "OpenVAS", "title": "OpenVAS", "description": "These instructions show you how to configure Filebeat to send OpenVAS reports to Logz.io.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/greenbone_icon.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/openvas.md"}, {"id": "OSSEC", "title": "OSSEC", "description": "OSSEC is a multiplatform, open source and free Host Intrusion Detection System (HIDS). This integration allows you to send OSSEC logs to your Logz.io SIEM account.", "productTags": ["SIEM", "LOG_ANALYTICS"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/ossec.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/ossec.md"}, {"id": "SonicWall", "title": "SonicWall", "description": "SonicWall firewalls allow you to identify and control all of the applications in use on your network. This integration allows you to send logs from your SonicWall applications to your Logz.io SIEM account.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Network", "Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/SonicWall-Logo.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/sonicwall.md"}, {"id": "pfSense", "title": "pfSense", "description": "pfSense is an open source firewall solution. 
This topic describes how to configure pfSense to send system logs to Logz.io via Filebeat running on a dedicated server.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Security", "Network"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/pfsense-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/pfsense.md"}, {"id": "SentinelOne", "title": "SentinelOne", "description": "SentinelOne platform delivers the defenses to prevent, detect, and undo\u2014known and unknown\u2014threats. This integration allows you to send logs from your SentinelOne applications to your Logz.io SIEM account.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/sentintelone-icon.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/sentinelone.md"}, {"id": "Avast", "title": "Avast", "description": "Avast is a family of cross-platform internet security applications. This topic describes how to send system logs from your Avast Antivirus platform to Logz.io.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/avast.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/avast.md"}, {"id": "Cisco-Meraki", "title": "Cisco Meraki", "description": "This integration creates a Kinesis Data Firehose delivery stream that links to your Amazon S3 metrics stream and then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Network", "Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/cisco-meraki-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/cisco-meraki.md"}, {"id": "Cynet", "title": "Cynet", "description": "Cynet is a cybersecurity asset management platform. This topic describes how to send system logs from your Cynet platform to Logz.io.", "productTags": ["SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/cynet.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/cynet.md"}, {"id": "auditbeat", "title": "Auditbeat", "description": "As its name suggests, auditd is a service that audits activities in a Linux environment. It's available for most major Linux distributions.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/linux.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/auditbeat.md"}, {"id": "Bitdefender", "title": "Bitdefender", "description": "Bitdefender is an antivirus software. 
This integration allows you to send Bitdefender logs to your Logz.io SIEM account.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/bitdefender.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/bitdefender.md"}, {"id": "Wazuh", "title": "Wazuh", "description": "Wazuh is a free, open source and enterprise-ready security monitoring solution for threat detection, integrity monitoring, incident response and compliance. This integration allows you to send Wazuh logs to your Logz.io SIEM account.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/wazuh.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/wazuh.md"}, {"id": "x509", "title": "x509", "description": "Deploy this integration to collect X509 certificate metrics from URLs and send them to Logz.io.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/ssl-certificate.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "19AIOkwkFLQCZWmUSINGXT"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/x509.md"}, {"id": "Sophos", "title": "Sophos", "description": "Sophos Endpoint is an endpoint protection product that combines antimalware, web and application control, device control and much more. This integration allows you to send logs from your Linux-based Sophos applications to your Logz.io SIEM account.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/sophos-shield.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/sophos.md"}, {"id": "Stormshield", "title": "Stormshield", "description": "Stormshield provides cyber-security solutions. This integration allows you to send logs from your Stormshield applications to your Logz.io SIEM account.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Network", "Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/stormshield.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/stormshield.md"}, {"id": "Check-Point", "title": "Check Point", "description": "Check Point provides hardware and software products for IT security, including network security, endpoint security, cloud security, mobile security, data security and security management. This integration allows you to send Check Point logs to your Logz.io SIEM account.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/check-point.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/check-point.md"}, {"id": "Carbon-Black", "title": "Carbon Black", "description": "Carbon Black enables multi-cloud workload and endpoint threat protection. 
Connect your Carbon Black to Logz.io to monitor and analyze endpoint security, threat detection, user behavior, software inventory, compliance, and incident response to enhance overall cybersecurity.", "productTags": ["LOG_ANALYTICS", "SIEM"], "osTags": ["windows", "linux"], "filterTags": ["Security"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/carbon-black.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Security/carbon-black.md"}, {"id": "RabbitMQ", "title": "RabbitMQ", "description": "RabbitMQ is an open-source message-broker software that originally implemented the Advanced Message Queuing Protocol and has since been extended with a plug-in architecture to support Streaming Text Oriented Messaging Protocol, MQ Telemetry Transport, and other protocols. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Distributed Messaging"], "recommendedFor": ["Software Engineer"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/rabbitmq-logo.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "77P29wgQwu1pqCaZFMcwnC"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Distributed-Messaging/rabbitmq.md"}, {"id": "NSQ", "title": "NSQ", "description": "NSQ is a realtime distributed messaging platform designed to operate at scale, handling billions of messages per day. Telegraf is a plug-in driven server agent for collecting and sending metrics and events from databases, systems and IoT sensors.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Distributed Messaging"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/nsq.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Distributed-Messaging/nsq.md"}, {"id": "Apache-Kafka", "title": "Apache Kafka", "description": "Apache Kafka is a distributed event store and stream-processing platform.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Distributed Messaging"], "recommendedFor": ["DevOps Engineer"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/kafka.svg", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Distributed-Messaging/apache-kafka.md"}, {"id": "Apache-Storm", "title": "Apache Storm", "description": "This integration allows you to send logs from your Apache Storm server instances to your Logz.io account.", "productTags": ["LOG_ANALYTICS"], "osTags": ["linux"], "filterTags": ["Distributed Messaging"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/apache-storm.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Distributed-Messaging/apache-storm.md"}, {"id": "Apache-ActiveMQ", "title": "Apache ActiveMQ", "description": "Apache ActiveMQ is an open source message broker with a Java Message Service client. 
Telegraf is a plug-in driven server agent for collecting and sending metrics and events from various sources.", "productTags": ["METRICS"], "osTags": ["windows", "linux"], "filterTags": ["Distributed Messaging"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/activemq-logo.png", "bundle": [], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/Distributed-Messaging/apache-activemq.md"}, {"id": "Service-Performance-Monitoring-App360", "title": "App360", "description": "This integration allows you to configure App360 with OpenTelemetry collector and send data from your OpenTelemetry installation to Logz.io.", "productTags": ["TRACING"], "osTags": ["windows", "linux"], "filterTags": ["Other"], "logo": "https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/span-metrics.png", "bundle": [{"type": "GRAFANA_DASHBOARD", "id": "40ZpsSfzfGhbguMYoxwOqm"}, {"type": "GRAFANA_DASHBOARD", "id": "5PFq9YHx2iQkwVMLCMOmjJ"}], "dataLink": "https://raw.githubusercontent.com/logzio/documentation/master/docs/shipping/App360/App360.md"}], "tags": ["Code", "Most Popular", "Distributed Messaging", "Containers", "Orchestration", "Other", "Compute", "IoT", "Load Balancer", "Memory Caching", "Data Store", "GCP", "Access Management", "Monitoring", "Network", "Security", "Database", "Synthetic Monitoring", "Operating Systems", "AWS", "CI/CD", "Azure"]} \ No newline at end of file