Doc 694 #469

Closed · wants to merge 7 commits
145 changes: 145 additions & 0 deletions docs/shipping/Code/java.md
Run your application to start sending metrics to Logz.io. Give it some time to run and check the Logz.io [Metrics dashboard](https://app.logz.io/#/dashboard/metrics/discover?).


### Monitoring Java applications with Prometheus

#### Build the JMX Exporter

1. Clone the JMX Exporter repository from GitHub:

```bash
git clone https://github.com/prometheus/jmx_exporter.git
```

2. Navigate to the `jmx_exporter` directory:

```bash
cd jmx_exporter
```

3. Build the JMX Exporter using Maven:

```bash
mvn package
```

4. Identify the built JAR file name:

```bash
ls jmx_prometheus_javaagent/target/
```

Note the name of the JAR file for later use.

#### Run Kafka

1. Ensure Zookeeper is running:

```bash
kafka_2.12-2.4.1/bin/zookeeper-server-start.sh kafka_2.12-2.4.1/config/zookeeper.properties &
```

Make sure to use the correct path for your Kafka version.

2. Set up your Java environment for Kafka:

Export the required environment variables and include the JMX Exporter JAR file using the full path identified earlier. Replace `<JAR_PATH>` with the path to your JAR file and adjust the port if necessary (default is 8090):

```bash
export EXTRA_ARGS="-Dcom.sun.management.jmxremote \
-Dcom.sun.management.jmxremote.authenticate=false \
-Dcom.sun.management.jmxremote.ssl=false \
-Djava.util.logging.config.file=logging.properties \
-javaagent:<JAR_PATH>=8090:jmx_exporter/example_configs/kafka-2_0_0.yml"
```
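The `-javaagent` option above packs three pieces of information: the agent JAR path, the HTTP port the exporter listens on, and the exporter rules file. As a sketch (the `/opt` path below is illustrative, not a required location), the pieces can be pulled apart like this:

```shell
# Anatomy of the option: -javaagent:<JAR_PATH>=<PORT>:<CONFIG_YML>
# The /opt path is illustrative; substitute your own JAR_PATH.
opt='-javaagent:/opt/jmx_prometheus_javaagent.jar=8090:jmx_exporter/example_configs/kafka-2_0_0.yml'

jar="${opt#-javaagent:}"; jar="${jar%%=*}"   # agent JAR path
port="${opt#*=}";         port="${port%%:*}" # port the exporter serves metrics on
cfg="${opt#*=}";          cfg="${cfg#*:}"    # exporter rules file

echo "$jar"
echo "$port"
echo "$cfg"
```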

3. Start Kafka with the specified Java environment:

```bash
kafka_2.12-2.4.1/bin/kafka-server-start.sh kafka_2.12-2.4.1/config/server.properties &
```

This step assumes the environment variables and the JMX Exporter have been set up correctly.

4. View the metrics:

Access the exposed metrics by navigating to http://localhost:8090 (or the port you specified).
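The exporter serves metrics in the Prometheus text format. The sample below is illustrative (metric names and values are made up for the sketch) and shows how you might spot-check a counter from the command line; in a live setup you would pipe `curl -s http://localhost:8090/metrics` instead of the sample string:

```shell
# Illustrative sample of the Prometheus text format served on the exporter port.
sample='# HELP kafka_server_brokertopicmetrics_messagesin_total Messages in
# TYPE kafka_server_brokertopicmetrics_messagesin_total counter
kafka_server_brokertopicmetrics_messagesin_total{topic="demo"} 42
jvm_memory_bytes_used{area="heap"} 123456'

# Print the value of the message counter, skipping the # HELP / # TYPE lines.
printf '%s\n' "$sample" | awk '$1 ~ /messagesin_total/ {print $2}'
```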

#### Configure OpenTelemetry to Send Metrics to Logz.io

1. Configure the Prometheus receiver in the OpenTelemetry Collector to scrape metrics from Kafka:

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: 'otel-collector-kafka'
          scrape_interval: 5s
          static_configs:
            - targets: ['127.0.0.1:8090']
```

2. Set up processors for resource detection and attribute modification:

```yaml
processors:
  resourcedetection/system:
    detectors: ["system"]
    system:
      hostname_sources: ["os"]
  attributes/agent:
    actions:
      - key: logzio_agent_version
        value: v1.0.36
        action: insert
      - key: cloudservice
        value: _CloudService_
        action: insert
      - key: role
        value: _role_
        action: insert
```

3. Configure exporters for logging and sending metrics to Logz.io:

```yaml
exporters:
  logging:
  logzio/logs:
    account_token: **********************
    region: us
  prometheusremotewrite:
    endpoint: https://listener.logz.io:8053
    headers:
      Authorization: "Bearer <<PROMETHEUS-METRICS-SHIPPING-TOKEN>>"
    resource_to_telemetry_conversion:
      enabled: true
```
{@include: ../../_include/general-shipping/replace-prometheus-token.html}


4. Define the service pipelines to use the configured receivers, processors, and exporters:

```yaml
service:
  pipelines:
    metrics:
      receivers:
        - prometheus
      processors: [resourcedetection/system, attributes/agent]
      exporters: [prometheusremotewrite]
```

##### Start the collector

Run the following command:

```shell
<path/to>/otelcol-contrib --config ./config.yaml
```

* Replace `<path/to>` with the path to the directory where you downloaded the collector. If your configuration file is named something other than `config.yaml`, adjust the name in the command accordingly.

### Check Logz.io for your metrics

Give your metrics some time to get from your system to ours, and then open [Metrics dashboard](https://app.logz.io/#/dashboard/metrics/discover?).


## Traces


6 changes: 5 additions & 1 deletion docs/shipping/Containers/kubernetes.md

metrics_alerts: ['5Ng398K19vXP9197bRV1If']
drop_filter: []
---


Integrate your Kubernetes system with Logz.io to monitor your logs, metrics, and traces, gain observability into your environment, and be able to identify and resolve issues with a few clicks.

{@include: ../../_include/general-shipping/k8s.md}

{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboards to enhance the observability of your metrics.

<!-- logzio-inject:install:grafana:dashboards ids=["6ThbRK67ZxBGeYwp8n74D0"] -->

{@include: ../../_include/metric-shipping/generic-dashboard.html}
156 changes: 156 additions & 0 deletions docs/shipping/Distributed-Messaging/jmx-kafka.md
---
id: Kafka-JMX-Receiver
title: Kafka-JMX-Receiver
overview: The JMX Receiver complements the OpenTelemetry JMX Metric Gatherer by capturing metrics from an MBean server. It utilizes a Groovy script with the assistance of a built-in OpenTelemetry helper.
product: ['metrics']
os: ['windows', 'linux']
filters: ['Distributed Messaging']
logo: https://logzbucket.s3.eu-west-1.amazonaws.com/logz-docs/shipper-logos/kafka.svg
logs_dashboards: []
logs_alerts: []
logs2metrics: []
metrics_dashboards: []
metrics_alerts: []
drop_filter: []
---


The Kafka JMX Receiver enhances Java application monitoring by utilizing the OpenTelemetry JMX Metric Gatherer. This combination facilitates the collection of metrics from MBean servers via a Groovy script, using an OpenTelemetry (otel) helper.

## Setup Instructions


**Before you begin, you'll need**:

* Apache Kafka running on your server


### Install the JMX Metric Gatherer

1. **Download the JMX Metric Gatherer JAR**: Obtain the latest version (1.9 or above) from [OpenTelemetry Java Contrib GitHub releases](https://github.com/open-telemetry/opentelemetry-java-contrib/releases).

2. **Specify the JAR Path**: Ensure the JMX Metric Gatherer JAR is accessible. By default, use `/opt/opentelemetry-java-contrib-jmx-metrics.jar` or specify your own path if different.

:::note
The JAR path must point to a released version (1.9+) of the JAR, which can be [downloaded from GitHub](https://github.com/open-telemetry/opentelemetry-java-contrib/releases). If a non-released version is required, you can specify a custom build by providing the SHA-256 hash of your JAR at collector build time using the `ldflags` option:

```go
go build -ldflags "-X github.com/open-telemetry/opentelemetry-collector-contrib/receiver/jmxreceiver.MetricsGathererHash=<sha256hash>"
```
:::

### Configure the JMX Receiver

1. **Set Up the Child JRE Process**: The receiver initiates a child JRE process equipped with your JMX connection details and the target Groovy script for metric gathering.

2. **JMX Connection Configuration**:

* **Endpoint**: Define the JMX Service URL or host and port. Use the format `service:jmx:<protocol>:<sap>` or `host:port`. The `host:port` format will transform into a Service URL like `service:jmx:rmi:///jndi/rmi://<host>:<port>/jmxrmi`.

3. **Target System Selection**:

* Choose from built-in target systems like `activemq`, `cassandra`, `hbase`, `hadoop`, `jetty`, `jvm`, `kafka`, `kafka-consumer`, `kafka-producer`, `solr`, `tomcat`, `wildfly`.
* For additional systems, custom JMX metric gatherer jars might be necessary, configurable via build flags.
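As a sketch, the two endpoint forms from step 2 look like this in a receiver configuration (host, port, and target systems are illustrative):

```yaml
receivers:
  jmx:
    # Short form; expands to service:jmx:rmi:///jndi/rmi://localhost:12346/jmxrmi
    endpoint: localhost:12346
    target_system: kafka
  jmx/full:
    # Full JMX Service URL form
    endpoint: service:jmx:rmi:///jndi/rmi://localhost:12346/jmxrmi
    target_system: jvm
```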

### Set Up Kafka JMX Reporting

1. **Kafka Broker and Producer Metrics Collection**:

* Configure a JMX remote connection without authentication for Kafka metrics.
* Use the command `JMX_PORT=12346 ./bin/kafka-server-start.sh config/server.properties` to start Kafka with JMX enabled.

2. **Starting Zookeeper and Kafka**:

* First, launch Zookeeper by running the following command:

```bash
installation_location\bin\windows\zookeeper-server-start.bat config\zookeeper.properties
```

* Then, start the Kafka server by running the following command:

```bash
installation_location\bin\windows\kafka-server-start.bat config\server.properties
```

3. **Additional JVM Metric Exposure**:

* To monitor other applications, modify the environment files accordingly. For Solr, add `SOLR_JMX_CONFIG="-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.port=12346"` to the `solr.env` file.

### Configure OpenTelemetry

Configure the collector as follows:

```yaml
receivers:
  jmx:
    jar_path: /opt/opentelemetry-java-contrib-jmx-metrics.jar
    endpoint: localhost:12346
    target_system: jvm,kafka
    collection_interval: 10s

processors:
  attributes/jmx:
    actions:
      - key: host
        value: localhost:12346
        action: insert
  attributes/agent:
    actions:
      - key: logzio_agent_version
        value: v1.0.36
        action: insert
      - key: cloudservice
        value: _CloudService_
        action: insert
      - key: role
        value: _role_
        action: insert

exporters:
  logging:
  logzio/logs:
    account_token: <<LOG-SHIPPING-TOKEN>>
    region: us
  prometheusremotewrite:
    endpoint: https://listener.logz.io:8053
    headers:
      Authorization: "Bearer <<PROMETHEUS-METRICS-SHIPPING-TOKEN>>"
    resource_to_telemetry_conversion:
      enabled: true

service:
  pipelines:
    metrics:
      receivers:
        - jmx
      processors:
        - attributes/jmx
        - attributes/agent
      exporters: [prometheusremotewrite]
  telemetry:
    logs:
      level: debug
    metrics:
      address: localhost:8899
```


### Check Logz.io for your metrics

{@include: ../../_include/metric-shipping/custom-dashboard.html} Install the pre-built dashboard to enhance the observability of your metrics.

<!-- logzio-inject:install:grafana:dashboards ids=["1G48F1M2FrS9tBtZ4P8jP6"] -->

{@include: ../../_include/metric-shipping/generic-dashboard.html}





8 changes: 8 additions & 0 deletions docs/user-guide/log-management/cold-tier/_category_.json
{
"label": "Cold tier",
"position": 7,
"link": {
"type": "generated-index",
"description": "Cold Tier allows you to effortlessly search archived cold storage data, view up to 1,000 matching raw logs, and re-ingest them into your Logz.io account for further analysis and investigation."
}
}