diff --git a/README.md b/README.md
index 9fe09572492b9e..d4e8208e653586 100644
--- a/README.md
+++ b/README.md
@@ -233,7 +233,7 @@ By default, Netdata will send e-mail notifications if there is a configured MTA
### 4. **Configure Netdata Parents** :family:
-Optionally, configure one or more Netdata Parents. A Netdata Parent is a Netdata Agent that has been configured to accept [streaming connections](https://learn.netdata.cloud/docs/streaming/streaming-configuration-reference) from other Netdata agents.
+Optionally, configure one or more Netdata Parents. A Netdata Parent is a Netdata Agent that has been configured to accept [streaming connections](https://learn.netdata.cloud/docs/streaming/streaming-configuration-reference) from other Netdata Agents.
Netdata Parents provide:
@@ -264,8 +264,8 @@ If you connect your Netdata Parents, there is no need to connect your Netdata Ag
When your Netdata nodes are connected to Netdata Cloud, you can (on top of the above):
-- Access your Netdata agents from anywhere
-- Access sensitive Netdata agent features (like "Netdata Functions": processes, systemd-journal)
+- Access your Netdata Agents from anywhere
+- Access sensitive Netdata Agent features (like "Netdata Functions": processes, systemd-journal)
- Organize your infrastructure in Spaces and Rooms
- Create, manage, and share **custom dashboards**
- Invite your team and assign roles to them (Role-Based Access Control)
@@ -573,7 +573,7 @@ Here are some suggestions on how to manage and navigate this wealth of informati
If you're looking for specific information, you can use the search feature to find the relevant metrics or charts. This can help you avoid scrolling through all the data.
3. **Customize your Dashboards**
- Netdata allows you to create custom dashboards, which can help you focus on the metrics that are most important to you. Sign-in to Netdata and there you can have your custom dashboards. (coming soon to the agent dashboard too)
+ Netdata allows you to create custom dashboards, which can help you focus on the metrics that are most important to you. Sign in to Netdata and you can create your custom dashboards there. (coming soon to the Agent dashboard too)
4. **Leverage Netdata's Anomaly Detection**
Netdata uses machine learning to detect anomalies in your metrics. This can help you identify potential issues before they become major problems. We have added an `AR` button above the dashboard table of contents to reveal the anomaly rate per section so that you can spot what could need your attention.
@@ -633,7 +633,7 @@ We are aware that for privacy or regulatory reasons, not all environments can al
These steps will disable the anonymous telemetry for your Netdata installation.
-Please note, even with telemetry disabled, Netdata still requires a [Netdata Registry](https://learn.netdata.cloud/docs/configuring/securing-netdata-agents/registry) for alert notifications' Call To Action (CTA) functionality. When you click an alert notification, it redirects you to the Netdata Registry, which then directs your web browser to the specific Netdata Agent that issued the alert for further troubleshooting. The Netdata Registry learns the URLs of your agents when you visit their dashboards.
+Please note, even with telemetry disabled, Netdata still requires a [Netdata Registry](https://learn.netdata.cloud/docs/configuring/securing-netdata-agents/registry) for alert notifications' Call To Action (CTA) functionality. When you click an alert notification, it redirects you to the Netdata Registry, which then directs your web browser to the specific Netdata Agent that issued the alert for further troubleshooting. The Netdata Registry learns the URLs of your Agents when you visit their dashboards.
Any Netdata Agent can act as a Netdata Registry. Designate one Netdata Agent as your registry, and our global Netdata Registry will no longer be in use. For further information on this, please refer to [this guide](https://learn.netdata.cloud/docs/configuring/securing-netdata-agents/registry).
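+A minimal sketch of that designation in `netdata.conf` (assuming the chosen registry Agent is reachable at `http://registry.example.com:19999`; see the guide above for the full procedure):
+```text
+# on the Agent acting as the registry
+[registry]
+    enabled = yes
+    registry to announce = http://registry.example.com:19999
+
+# on every other Agent
+[registry]
+    enabled = no
+    registry to announce = http://registry.example.com:19999
+```
+Restart the Agents for the change to take effect.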
diff --git a/docs/alerts-and-notifications/notifications/README.md b/docs/alerts-and-notifications/notifications/README.md
index 870076b974e584..2efcdbe48b041c 100644
--- a/docs/alerts-and-notifications/notifications/README.md
+++ b/docs/alerts-and-notifications/notifications/README.md
@@ -6,4 +6,4 @@ This section includes the documentation of the integrations for both of Netdata'
- Netdata Cloud provides centralized alert notifications, utilizing the health status data already sent to Netdata Cloud from connected nodes to send alerts to configured integrations. [Supported integrations](/docs/alerts-&-notifications/notifications/centralized-cloud-notifications) include Amazon SNS, Discord, Slack, Splunk, and others.
-- The Netdata Agent offers a [wider range of notification options](/docs/alerts-&-notifications/notifications/agent-dispatched-notifications) directly from the agent itself. You can choose from over a dozen services, including email, Slack, PagerDuty, Twilio, and others, for more granular control over notifications on each node.
+- The Netdata Agent offers a [wider range of notification options](/docs/alerts-&-notifications/notifications/agent-dispatched-notifications) directly from the Agent itself. You can choose from over a dozen services, including email, Slack, PagerDuty, Twilio, and others, for more granular control over notifications on each node.
diff --git a/docs/dashboards-and-charts/README.md b/docs/dashboards-and-charts/README.md
index f94d776a3a1230..3008cfccb37085 100644
--- a/docs/dashboards-and-charts/README.md
+++ b/docs/dashboards-and-charts/README.md
@@ -25,7 +25,7 @@ The Netdata dashboard consists of the following main sections:
> **Note**
>
-> Some sections of the dashboard, when accessed through the agent, may require the user to be signed in to Netdata Cloud or have the Agent claimed to Netdata Cloud for their full functionality. Examples include saving visualization settings on charts or custom dashboards, claiming the node to Netdata Cloud, or executing functions on an Agent.
+> Some sections of the dashboard, when accessed through the Agent, may require the user to be signed in to Netdata Cloud or have the Agent claimed to Netdata Cloud for their full functionality. Examples include saving visualization settings on charts or custom dashboards, claiming the node to Netdata Cloud, or executing functions on an Agent.
## How to access the dashboards?
diff --git a/docs/dashboards-and-charts/events-feed.md b/docs/dashboards-and-charts/events-feed.md
index 34d6ee0e652542..8e31ebb5f453f0 100644
--- a/docs/dashboards-and-charts/events-feed.md
+++ b/docs/dashboards-and-charts/events-feed.md
@@ -49,8 +49,8 @@ At a high-level view, these are the domains from which the Events feed will prov
| Node Removed | The node was removed from the Space, for example by using the `Delete` action on the node. This is a soft delete in that the node gets marked as deleted, but retains the association with this space. If it becomes live again, it will be restored (see `Node Restored` below) and reappear in this space as before. | Node `ip-xyz.ec2.internal` was **deleted (soft)** |
| Node Restored | The node was restored. See `Node Removed` above. | Node `ip-xyz.ec2.internal` was **restored** |
| Node Deleted | The node was deleted from the Space. This is a hard delete and no information on the node is retained. | Node `ip-xyz.ec2.internal` was **deleted (hard)** |
-| Agent Connected | The agent connected to the Cloud MQTT server (Agent-Cloud Link established). These events can only be seen on _All nodes_ Room. | Agent with claim ID `7d87bqs9-cv42-4823-8sd4-3614548850c7` has connected to Cloud. |
-| Agent Disconnected | The agent disconnected from the Cloud MQTT server (Agent-Cloud Link severed). These events can only be seen on _All nodes_ Room. | Agent with claim ID `7d87bqs9-cv42-4823-8sd4-3614548850c7` has disconnected from Cloud: **Connection Timeout**. |
+| Agent Connected | The Agent connected to the Cloud MQTT server (Agent-Cloud Link established). These events can only be seen on _All nodes_ Room. | Agent with claim ID `7d87bqs9-cv42-4823-8sd4-3614548850c7` has connected to Cloud. |
+| Agent Disconnected | The Agent disconnected from the Cloud MQTT server (Agent-Cloud Link severed). These events can only be seen on _All nodes_ Room. | Agent with claim ID `7d87bqs9-cv42-4823-8sd4-3614548850c7` has disconnected from Cloud: **Connection Timeout**. |
| Space Statistics | Daily snapshot of space node statistics. These events can only be seen on _All nodes_ Room. | Space statistics. Nodes: **22 live**, **21 stale**, **18 removed**, **61 total**. |
### Alert events
diff --git a/docs/dashboards-and-charts/netdata-charts.md b/docs/dashboards-and-charts/netdata-charts.md
index c7563aa2901a4c..50b7c15a2b5bfb 100644
--- a/docs/dashboards-and-charts/netdata-charts.md
+++ b/docs/dashboards-and-charts/netdata-charts.md
@@ -274,7 +274,7 @@ Finally, you can reset everything to its defaults by clicking the green "Reset"
## Anomaly Rate ribbon
-Netdata's unsupervised machine learning algorithm creates a unique model for each metric collected by your agents, using exclusively the metric's past data.
+Netdata's unsupervised machine learning algorithm creates a unique model for each metric collected by your Agents, using exclusively the metric's past data.
It then uses these unique models during data collection to predict the value that should be collected and check if the collected value is within the range of acceptable values based on past patterns and behavior.
If the value collected is an outlier, it is marked as anomalous.
diff --git a/docs/developer-and-contributor-corner/style-guide.md b/docs/developer-and-contributor-corner/style-guide.md
index b64a9df0bffff3..16e07f54d06fb2 100644
--- a/docs/developer-and-contributor-corner/style-guide.md
+++ b/docs/developer-and-contributor-corner/style-guide.md
@@ -160,8 +160,7 @@ capitalization. In summary:
Docker, Apache, NGINX)
- Avoid camel case (NetData) or all caps (NETDATA).
-Whenever you refer to the company Netdata, Inc., or the open-source monitoring agent the company develops, capitalize
-**Netdata**.
+Whenever you refer to the company Netdata, Inc., or the open-source monitoring Agent the company develops, capitalize **Netdata** and **Agent**.
However, if you are referring to a process, user, or group on a Linux system, use lowercase and fence the word in an
inline code block: `` `netdata` ``.
diff --git a/docs/glossary.md b/docs/glossary.md
index 78ba180728494b..873f5e27585bfe 100644
--- a/docs/glossary.md
+++ b/docs/glossary.md
@@ -128,7 +128,7 @@ metrics, troubleshoot complex performance problems, and make data interoperable
## S
-- [**Single Node Dashboard**](/docs/dashboards-and-charts/metrics-tab-and-single-node-tabs.md): A dashboard pre-configured with every installation of the Netdata agent, with thousand of metrics and hundreds of interactive charts that requires no set up.
+- [**Single Node Dashboard**](/docs/dashboards-and-charts/metrics-tab-and-single-node-tabs.md): A dashboard pre-configured with every installation of the Netdata Agent, with thousands of metrics and hundreds of interactive charts that require no setup.
- [**Space**](/docs/netdata-cloud/organize-your-infrastructure-invite-your-team.md#netdata-cloud-spaces): A high-level container and virtual collaboration area where you can organize team members, access levels, and the nodes you want to monitor.
diff --git a/docs/netdata-agent/README.md b/docs/netdata-agent/README.md
index 8096e911a0e27a..ef538f2426b60f 100644
--- a/docs/netdata-agent/README.md
+++ b/docs/netdata-agent/README.md
@@ -59,7 +59,7 @@ stateDiagram-v2
6. **Check**: a health engine, triggering alerts and sending notifications. Netdata comes with hundreds of alert configurations that are automatically attached to metrics when they get collected, detecting errors, common configuration errors and performance issues.
7. **Query**: a query engine for querying time-series data.
8. **Score**: a scoring engine for comparing and correlating metrics.
-9. **Stream**: a mechanism to connect Netdata agents and build Metrics Centralization Points (Netdata Parents).
+9. **Stream**: a mechanism to connect Netdata Agents and build Metrics Centralization Points (Netdata Parents).
10. **Visualize**: Netdata's fully automated dashboards for all metrics.
11. **Export**: export metric samples to 3rd party time-series databases, enabling the use of 3rd party tools for visualization, like Grafana.
@@ -77,8 +77,8 @@ stateDiagram-v2
## Dashboard Versions
-The Netdata agents (Standalone, Children and Parents) **share the dashboard** of Netdata Cloud. However, when the user is logged in and the Netdata agent is connected to Netdata Cloud, the following are enabled (which are otherwise disabled):
+The Netdata Agents (Standalone, Children and Parents) **share the dashboard** of Netdata Cloud. However, when the user is logged in and the Agent is connected to the Cloud, the following are enabled (which are otherwise disabled):
-1. **Access to Sensitive Data**: Some data, like systemd-journal logs and several [Top Monitoring](/docs/top-monitoring-netdata-functions.md) features expose sensitive data, like IPs, ports, process command lines and more. To access all these when the dashboard is served directly from a Netdata agent, Netdata Cloud is required to verify that the user accessing the dashboard has the required permissions.
+1. **Access to Sensitive Data**: Some data, like systemd-journal logs and several [Top Monitoring](/docs/top-monitoring-netdata-functions.md) features expose sensitive data, like IPs, ports, process command lines and more. To access all these when the dashboard is served directly from an Agent, Netdata Cloud is required to verify that the user accessing the dashboard has the required permissions.
-2. **Dynamic Configuration**: Netdata agents are configured via configuration files, manually or through some provisioning system. The latest Netdata includes a feature to allow users to change some configurations (collectors, alerts) via the dashboard. This feature is only available to users of paid Netdata Cloud plan.
+2. **Dynamic Configuration**: Netdata Agents are configured via configuration files, manually or through some provisioning system. The latest Netdata includes a feature to allow users to change some configurations (collectors, alerts) via the dashboard. This feature is only available to users on a paid Netdata Cloud plan.
diff --git a/docs/netdata-agent/backup-and-restore-an-agent.md b/docs/netdata-agent/backup-and-restore-an-agent.md
index db9398b2782297..e0b8869ed29cec 100644
--- a/docs/netdata-agent/backup-and-restore-an-agent.md
+++ b/docs/netdata-agent/backup-and-restore-an-agent.md
@@ -34,18 +34,18 @@ In this standard scenario, you're backing up your Netdata Agent in case of a n
sudo tar -cvpzf netdata_backup.tar.gz /etc/netdata/ /var/cache/netdata /var/lib/netdata
```
- Stopping the Netdata agent is typically necessary to back up the database files of the Netdata Agent.
+ Stopping the Netdata Agent is typically necessary to back up its database files.
If you want to minimize the gap in metrics caused by stopping the Netdata Agent, consider implementing a backup job or script that follows this sequence:
- Back up the Agent configuration and identity directories
- Stop the Netdata service
- Back up the database files
-- Restart the netdata agent.
+- Restart the Netdata Agent.
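+A minimal sketch of such a job, assuming a systemd-managed service and the default paths used in this guide:
+```sh
+# 1. Back up configuration and identity directories while the Agent is still running
+sudo tar -cvpzf netdata_config_backup.tar.gz /etc/netdata/ /var/lib/netdata
+# 2. Stop the Netdata service
+sudo systemctl stop netdata
+# 3. Back up the database files
+sudo tar -cvpzf netdata_db_backup.tar.gz /var/cache/netdata
+# 4. Restart the Netdata Agent
+sudo systemctl start netdata
+```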
### Restoring Netdata
-1. Ensure that the Netdata agent is installed and is [stopped](/docs/netdata-agent/start-stop-restart.md)
+1. Ensure that the Netdata Agent is installed and is [stopped](/docs/netdata-agent/start-stop-restart.md)
If you plan to deploy the Agent and restore a backup on top of it, then you might find it helpful to use the [`--dont-start-it`](/packaging/installer/methods/kickstart.md#other-options) option upon installation.
@@ -66,4 +66,4 @@ If you want to minimize the gap in metrics caused by stopping the Netdata Agent,
sudo tar -xvpzf /path/to/netdata_backup.tar.gz -C /
```
-3. [Start the Netdata agent](/docs/netdata-agent/start-stop-restart.md)
+3. [Start the Netdata Agent](/docs/netdata-agent/start-stop-restart.md)
diff --git a/docs/netdata-agent/configuration/anonymous-telemetry-events.md b/docs/netdata-agent/configuration/anonymous-telemetry-events.md
index 4d48de4a219899..a5b4880c92141b 100644
--- a/docs/netdata-agent/configuration/anonymous-telemetry-events.md
+++ b/docs/netdata-agent/configuration/anonymous-telemetry-events.md
@@ -1,7 +1,6 @@
# Anonymous telemetry events
-By default, Netdata collects anonymous usage information from the open-source monitoring agent. For agent events like start, stop, crash, etc. we use our own cloud function in GCP. For frontend telemetry (page views etc.) on the agent dashboard itself, we use the open-source
-product analytics platform [PostHog](https://github.com/PostHog/posthog).
+By default, Netdata collects anonymous usage information from the open-source monitoring Agent. For events like start, stop, crash, etc., we use our own cloud function in GCP. For frontend telemetry (page views, etc.) on the dashboard itself, we use the open-source product analytics platform [PostHog](https://github.com/PostHog/posthog).
We are strongly committed to your [data privacy](https://netdata.cloud/privacy/).
@@ -10,7 +9,7 @@ We use the statistics gathered from this information for two purposes:
1. **Quality assurance**, to help us understand if Netdata behaves as expected, and to help us classify repeated
issues with certain distributions or environments.
-2. **Usage statistics**, to help us interpret how people use the Netdata agent in real-world environments, and to help
+2. **Usage statistics**, to help us interpret how people use the Netdata Agent in real-world environments, and to help
us identify how our development/design decisions influence the community.
Netdata collects usage information via two different channels:
@@ -59,7 +58,7 @@ filename and source code line number of the fatal error.
Starting with v1.21, we additionally collect information about:
- Failures to build the dependencies required to use Cloud features.
-- Unavailability of Cloud features in an agent.
+- Unavailability of Cloud features in an Agent.
- Failures to connect to the Cloud in case the [connection process](/src/claim/README.md) has been completed. This includes error codes
to inform the Netdata team about the reason why the connection failed.
diff --git a/docs/netdata-agent/configuration/optimize-the-netdata-agents-performance.md b/docs/netdata-agent/configuration/optimize-the-netdata-agents-performance.md
index ff51fbf78e44a4..26abcb38ee45a4 100644
--- a/docs/netdata-agent/configuration/optimize-the-netdata-agents-performance.md
+++ b/docs/netdata-agent/configuration/optimize-the-netdata-agents-performance.md
@@ -26,7 +26,7 @@ The following table summarizes the effect of each optimization on the CPU, RAM a
| [Use a different metric storage database](/src/database/README.md) | | :heavy_check_mark: | :heavy_check_mark: |
| [Disable machine learning](#disable-machine-learning) | :heavy_check_mark: | | |
| [Use a reverse proxy](#run-netdata-behind-a-proxy) | :heavy_check_mark: | | |
-| [Disable/lower gzip compression for the agent dashboard](#disablelower-gzip-compression-for-the-dashboard) | :heavy_check_mark: | | |
+| [Disable/lower gzip compression for the Agent dashboard](#disablelower-gzip-compression-for-the-dashboard) | :heavy_check_mark: | | |
## Resources required by a default Netdata installation
@@ -62,7 +62,7 @@ To reduce CPU usage, you can (either one or a combination of the following actio
3. [Reduce the data collection frequency](#reduce-collection-frequency)
4. [Disable unneeded plugins or collectors](#disable-unneeded-plugins-or-collectors)
5. [Use a reverse proxy](#run-netdata-behind-a-proxy),
-6. [Disable/lower gzip compression for the agent dashboard](#disablelower-gzip-compression-for-the-dashboard).
+6. [Disable/lower gzip compression for the Agent dashboard](#disablelower-gzip-compression-for-the-dashboard).
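+For item 6, a hedged sketch of the relevant `netdata.conf` settings (verify the option names against your version and the section linked above):
+```text
+[web]
+    # disable response compression entirely...
+    enable gzip compression = no
+    # ...or keep it enabled at the cheapest level
+    # gzip compression level = 1
+```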
### Memory consumption
@@ -111,7 +111,7 @@ using [streaming and replication](/docs/observability-centralization-points/READ
### Disable health checks on the child nodes
When you set up streaming, we recommend you run your health checks on the parent. This saves resources on the children
-and makes it easier to configure or disable alerts and agent notifications.
+and makes it easier to configure or disable alerts and Agent notifications.
The parents by default run health checks for each child, as long as the child is connected (the details are
in `stream.conf`). On the child nodes you should add to `netdata.conf` the following:
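+A minimal sketch of that child-side addition (it turns the health engine off on the child, leaving alerting to the parent):
+```text
+[health]
+    enabled = no
+```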
diff --git a/docs/netdata-agent/configuration/optimizing-metrics-database/change-metrics-storage.md b/docs/netdata-agent/configuration/optimizing-metrics-database/change-metrics-storage.md
index 2282cbc44e96eb..8c0c11bc1fac2a 100644
--- a/docs/netdata-agent/configuration/optimizing-metrics-database/change-metrics-storage.md
+++ b/docs/netdata-agent/configuration/optimizing-metrics-database/change-metrics-storage.md
@@ -17,8 +17,7 @@ With these defaults, Netdata requires approximately 4 GiB of storage space (incl
## Retention Settings
-> **In a parent-child setup**, these settings manage the shared storage space used by the Netdata parent agent for
-> storing metrics collected by both the parent and its child nodes.
+> **In a parent-child setup**, these settings manage the shared storage space used by the Netdata parent Agent for storing metrics collected by both the parent and its child nodes.
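+As an illustration, a hedged sketch of per-tier limits in the `[db]` section of `netdata.conf` (option names as used by recent Netdata versions and arbitrary example values; verify both against your installation):
+```text
+[db]
+    mode = dbengine
+    storage tiers = 3
+    # tier 0: per-second samples
+    dbengine tier 0 retention size = 1GiB
+    dbengine tier 0 retention time = 14d
+    # tier 1: per-minute aggregates
+    dbengine tier 1 retention size = 1GiB
+    dbengine tier 1 retention time = 3mo
+    # tier 2: per-hour aggregates
+    dbengine tier 2 retention size = 1GiB
+    dbengine tier 2 retention time = 2y
+```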
You can fine-tune retention for each tier by setting a time limit or size limit. Setting a limit to 0 disables it,
allowing for no time-based deletion for that tier or using all available space, respectively. This enables various
diff --git a/docs/netdata-agent/configuration/organize-systems-metrics-and-alerts.md b/docs/netdata-agent/configuration/organize-systems-metrics-and-alerts.md
index f7f56279b7df7c..efc38c00f5c1d7 100644
--- a/docs/netdata-agent/configuration/organize-systems-metrics-and-alerts.md
+++ b/docs/netdata-agent/configuration/organize-systems-metrics-and-alerts.md
@@ -104,8 +104,7 @@ can reload labels using the helpful `netdatacli` tool:
netdatacli reload-labels
```
-Your host labels will now be enabled. You can double-check these by using `curl http://HOST-IP:19999/api/v1/info` to
-read the status of your agent. For example, from a VPS system running Debian 10:
+Your host labels will now be enabled. You can double-check these by using `curl http://HOST-IP:19999/api/v1/info` to read the status of your Agent. For example, from a VPS system running Debian 10:
```json
{
@@ -232,7 +231,7 @@ All go.d plugin collectors support the specification of labels at the "collectio
labels (e.g. generic Prometheus collector, Kubernetes, Docker and more). But you can also add your own custom labels by configuring
the data collection jobs.
-For example, suppose we have a single Netdata agent, collecting data from two remote Apache web servers, located in different data centers.
+For example, suppose we have a single Netdata Agent, collecting data from two remote Apache web servers, located in different data centers.
The web servers are load balanced and provide access to the service "Payments".
You can define the following in `go.d.conf`, to be able to group the web requests by service or location:
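+As an illustration, a sketch of two such Apache collection jobs with custom labels (hypothetical job names and URLs):
+```yaml
+jobs:
+  - name: payments_dc1
+    url: http://10.0.1.10/server-status?auto
+    labels:
+      service: "Payments"
+      location: "dc1"
+  - name: payments_dc2
+    url: http://10.0.2.10/server-status?auto
+    labels:
+      service: "Payments"
+      location: "dc2"
+```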
diff --git a/docs/netdata-agent/configuration/running-the-netdata-agent-behind-a-reverse-proxy/README.md b/docs/netdata-agent/configuration/running-the-netdata-agent-behind-a-reverse-proxy/README.md
index a0810bb5103924..af35c3c662b387 100644
--- a/docs/netdata-agent/configuration/running-the-netdata-agent-behind-a-reverse-proxy/README.md
+++ b/docs/netdata-agent/configuration/running-the-netdata-agent-behind-a-reverse-proxy/README.md
@@ -1,6 +1,6 @@
# Running the Netdata Agent behind a reverse proxy
-If you need to access a Netdata agent's user interface or API in a production environment we recommend you put Netdata behind
+If you need to access a Netdata Agent's user interface or API in a production environment, we recommend you put Netdata behind
another web server and secure access to the dashboard via SSL, user authentication and firewall rules.
A dedicated web server also provides more robustness and capabilities than the Agent's [internal web server](/src/web/README.md).
diff --git a/docs/netdata-agent/configuration/running-the-netdata-agent-behind-a-reverse-proxy/Running-behind-nginx.md b/docs/netdata-agent/configuration/running-the-netdata-agent-behind-a-reverse-proxy/Running-behind-nginx.md
index c0364633a5a950..d38fbe8272e786 100644
--- a/docs/netdata-agent/configuration/running-the-netdata-agent-behind-a-reverse-proxy/Running-behind-nginx.md
+++ b/docs/netdata-agent/configuration/running-the-netdata-agent-behind-a-reverse-proxy/Running-behind-nginx.md
@@ -12,7 +12,7 @@ The software is known for its low impact on memory resources, high scalability,
- Nginx is used and useful in cases when you want to access different instances of Netdata from a single server.
-- Password-protect access to Netdata, until distributed authentication is implemented via the Netdata cloud Sign In mechanism.
+- Password-protect access to Netdata, until distributed authentication is implemented via the Netdata Cloud Sign In mechanism.
- A proxy was necessary to encrypt the communication to Netdata, until v1.16.0, which provided TLS (HTTPS) support.
diff --git a/docs/netdata-agent/sizing-netdata-agents/bandwidth-requirements.md b/docs/netdata-agent/sizing-netdata-agents/bandwidth-requirements.md
index fbbc279d559c74..954860b923dd62 100644
--- a/docs/netdata-agent/sizing-netdata-agents/bandwidth-requirements.md
+++ b/docs/netdata-agent/sizing-netdata-agents/bandwidth-requirements.md
@@ -44,4 +44,4 @@ The information transferred to Netdata Cloud is:
This is not a constant stream of information. Netdata Agents update Netdata Cloud only about status changes on all the above (e.g., an alert being triggered, or a metric no longer being collected). So, there is an initial handshake and exchange of information when Netdata starts, and then there are only updates when required.
-Of course, when you view Netdata Cloud dashboards that need to query the database a Netdata agent maintains, this query is forwarded to an agent that can satisfy it. This means that Netdata Cloud receives metric samples only when a user is accessing a dashboard and the samples transferred are usually aggregations to allow rendering the dashboards.
+Of course, when you view Netdata Cloud dashboards that need to query the database a Netdata Agent maintains, this query is forwarded to an Agent that can satisfy it. This means that Netdata Cloud receives metric samples only when a user is accessing a dashboard and the samples transferred are usually aggregations to allow rendering the dashboards.
diff --git a/docs/netdata-cloud/README.md b/docs/netdata-cloud/README.md
index 6a2406aebf7100..73a0bcc658ebb8 100644
--- a/docs/netdata-cloud/README.md
+++ b/docs/netdata-cloud/README.md
@@ -37,7 +37,7 @@ flowchart TB
NC <-->|secure connection| Agents
```
-Netdata Cloud provides the following features, on top of what the Netdata agents already provide:
+Netdata Cloud provides the following features, on top of what the Netdata Agents already provide:
1. **Horizontal scalability**: Netdata Cloud allows scaling the observability infrastructure horizontally, by adding more independent Netdata Parents and Children. It can aggregate such, otherwise independent, observability islands into one uniform and integrated infrastructure.
@@ -45,11 +45,11 @@ Netdata Cloud provides the following features, on top of what the Netdata agents
2. **Role Based Access Control (RBAC)**: Netdata Cloud has all the mechanisms for user-management and access control. It allows assigning all users a role, segmenting the infrastructure into rooms, and associating Rooms with roles and users.
-3. **Access from anywhere**: Netdata agents are installed on-prem and this is where all your data are always stored. Netdata Cloud allows querying all the Netdata agents (Standalone, Children and Parents) in real-time when dashboards are accessed via Netdata Cloud.
+3. **Access from anywhere**: Netdata Agents are installed on-prem and this is where all your data are always stored. Netdata Cloud allows querying all the Netdata Agents (Standalone, Children and Parents) in real-time when dashboards are accessed via Netdata Cloud.
This enables a much simpler access control, eliminating the complexities of setting up VPNs to access observability, and the bandwidth costs for centralizing all metrics to one place.
-4. **Central dispatch of alert notifications**: Netdata Cloud allows controlling the dispatch of alert notifications centrally. By default, all Netdata agents (Standalone, Children and Parents) send their own notifications. This becomes increasingly complex as the infrastructure grows. So, Netdata Cloud steps in to simplify this process and provide central control of all notifications.
+4. **Central dispatch of alert notifications**: Netdata Cloud allows controlling the dispatch of alert notifications centrally. By default, all Netdata Agents (Standalone, Children and Parents) send their own notifications. This becomes increasingly complex as the infrastructure grows. So, Netdata Cloud steps in to simplify this process and provide central control of all notifications.
Netdata Cloud also enables the use of the **Netdata Mobile App** offering mobile push notifications for all users in commercial plans.
@@ -61,18 +61,18 @@ Netdata Cloud provides the following features, on top of what the Netdata agents
## Data Exposed to Netdata Cloud
-Netdata is thin layer of top of Netdata agents. It does not receive the samples collected, or the logs Netdata agents maintain.
+Netdata Cloud is a thin layer on top of Netdata Agents. It does not receive the samples collected or the logs Netdata Agents maintain.
This is a key design decision for Netdata. If we were centralizing metric samples and logs, Netdata would have the same constraints and cost structure other observability solutions have, and we would be forced to lower metrics resolution, filter out metrics and eventually significantly increase the cost of observability.
Instead, Netdata Cloud receives and stores only metadata related to the metrics collected, such as the nodes collecting metrics and their labels, the metric names, their labels and their retention, the data collection plugins and modules running, the configured alerts and their transitions.
-This information is a small fraction of the total information maintained by Netdata agents, allowing Netdata Cloud to remain high-resolution, high-fidelity and real-time, while being able to:
+This information is a small fraction of the total information maintained by Netdata Agents, allowing Netdata Cloud to remain high-resolution, high-fidelity and real-time, while being able to:
- dispatch alerts centrally for all alert transitions.
-- know which Netdata agents to query when users view the dashboards.
+- know which Netdata Agents to query when users view the dashboards.
-Metric samples and logs are transferred via Netdata Cloud to your Web Browser, only when you view them via Netdata Cloud. And even then, Netdata Cloud does not store this information. It only aggregates the responses of multiple Netdata agents to a single response for your web browser to visualize.
+Metric samples and logs are transferred via Netdata Cloud to your Web Browser, only when you view them via Netdata Cloud. And even then, Netdata Cloud does not store this information. It only aggregates the responses of multiple Netdata Agents to a single response for your web browser to visualize.
## High-Availability
@@ -80,38 +80,38 @@ You can subscribe to Netdata Cloud updates at the [Netdata Cloud Status](https:/
Netdata Cloud is a highly available, auto-scalable solution, however being a monitoring solution, we need to ensure dashboards are accessible during crisis.
-Netdata agents provide the same dashboard Netdata Cloud provides, with the following limitations:
+Netdata Agents provide the same dashboard Netdata Cloud provides, with the following limitations:
-1. Netdata agents (Children and Parents) dashboards are limited to their databases, while on Netdata Cloud the dashboard presents the entire infrastructure, from all Netdata agents connected to it.
+1. Netdata Agents (Children and Parents) dashboards are limited to their databases, while on Netdata Cloud the dashboard presents the entire infrastructure, from all Netdata Agents connected to it.
-2. When you are not logged-in or the agent is not connected to Netdata Cloud, certain features of the Netdata agent dashboard will not be available.
+2. When you are not logged in or the Agent is not connected to Netdata Cloud, certain features of the Netdata Agent dashboard will not be available.
- When you are logged-in and the agent is connected to Netdata Cloud, the agent dashboard has the same functionality as Netdata Cloud.
+ When you are logged in and the Agent is connected to Netdata Cloud, the dashboard has the same functionality as Netdata Cloud.
-To ensure dashboard high availability, Netdata agent dashboards are available by directly accessing them, even when the connectivity between Children and Parents or Netdata Cloud faces issues. This allows the use of the individual Netdata agents' dashboards during crisis, at different levels of aggregation.
+To ensure dashboard high availability, Netdata Agent dashboards remain directly accessible even when the connectivity between Children and Parents or Netdata Cloud faces issues. This allows using the individual Netdata Agents' dashboards during a crisis, at different levels of aggregation.
## Fidelity and Insights
-Netdata Cloud queries Netdata agents, so it provides exactly the same fidelity and insights Netdata agents provide. Dashboards have the same resolution, the same number of metrics, exactly the same data.
+Netdata Cloud queries Netdata Agents, so it provides exactly the same fidelity and insights Netdata Agents provide. Dashboards have the same resolution, the same number of metrics, exactly the same data.
## Performance
-The Netdata agent and Netdata Cloud have similar query performance, but there are additional network latencies involved when the dashboards are viewed via Netdata Cloud.
+The Netdata Agent and Netdata Cloud have similar query performance, but there are additional network latencies involved when the dashboards are viewed via Netdata Cloud.
-Accessing Netdata agents on the same LAN has marginal network latency and their response time is only affected by the queries. However, accessing the same Netdata agents via Netdata Cloud has a bigger network round-trip time, that looks like this:
+Accessing Netdata Agents on the same LAN has marginal network latency and their response time is only affected by the queries. However, accessing the same Netdata Agents via Netdata Cloud has a bigger network round-trip time, which looks like this:
1. Your web browser makes a request to Netdata Cloud.
-2. Netdata Cloud sends the request to your Netdata agents. If multiple Netdata agents are involved, they are queried in parallel.
+2. Netdata Cloud sends the request to your Netdata Agents. If multiple Netdata Agents are involved, they are queried in parallel.
3. Netdata Cloud receives their responses and aggregates them into a single response.
4. Netdata Cloud replies to your web browser.
-If you are sitting on the same LAN as the Netdata agents, the latency will be 2 times the round-trip network latency between this LAN and Netdata Cloud.
+If you are sitting on the same LAN as the Netdata Agents, the latency will be 2 times the round-trip network latency between this LAN and Netdata Cloud.
-However, when there are multiple Netdata agents involved, the queries will be faster compared to a monitoring solution that has one centralization point. Netdata Cloud splits each query into multiple parts and each of the Netdata agents involved will only perform a small part of the original query. So, when querying a large infrastructure, you enjoy the performance of the combined power of all your Netdata agents, which is usually quite higher than any single-centralization-point monitoring solution.
+However, when there are multiple Netdata Agents involved, the queries will be faster compared to a monitoring solution that has one centralization point. Netdata Cloud splits each query into multiple parts and each of the Netdata Agents involved will only perform a small part of the original query. So, when querying a large infrastructure, you enjoy the combined power of all your Netdata Agents, which is usually significantly higher than any single-centralization-point monitoring solution.
## Does Netdata Cloud require Observability Centralization Points?
-No. Any or all Netdata agents can be connected to Netdata Cloud.
+No. Any or all Netdata Agents can be connected to Netdata Cloud.
We recommend to create [observability centralization points](/docs/observability-centralization-points/README.md), as required for operational efficiency (ephemeral nodes, teams or services isolation, central control of alerts, production systems performance), security policies (internet isolation), or cost optimization (use existing capacities before allocating new ones).
diff --git a/docs/netdata-cloud/authentication-and-authorization/api-tokens.md b/docs/netdata-cloud/authentication-and-authorization/api-tokens.md
index a8f304ffba9aac..d5d88779c67714 100644
--- a/docs/netdata-cloud/authentication-and-authorization/api-tokens.md
+++ b/docs/netdata-cloud/authentication-and-authorization/api-tokens.md
@@ -2,9 +2,7 @@
## Overview
-Every single user can get access to the Netdata resource programmatically. It is done through the API Token which
-can be also called as Bearer Token. This token is used for authentication and authorization, it can be issued
-in the Netdata UI under the user Settings:
+Every user can access Netdata resources programmatically. This is done through the API Token, also called a Bearer Token. The token is used for authentication and authorization and can be issued in the Netdata UI under the user Settings:
@@ -16,18 +14,18 @@ The API Tokens are not going to expire and can be limited to a few scopes:
* `scope:agent-ui`
- this token is mainly used by the local Netdata agent accessing the Cloud UI
+ this token is mainly used by the local Netdata Agent accessing the Cloud UI
* `scope:grafana-plugin`
this token is used for the [Netdata Grafana plugin](https://github.com/netdata/netdata-grafana-datasource-plugin/blob/master/README.md)
to access Netdata charts
-Currently, the Netdata Cloud is not exposing stable API.
+Currently, Netdata Cloud does not expose a stable API.
## Example usage
-* get the cloud space list
+* get the Netdata Cloud space list
```console
curl -H 'Accept: application/json' -H "Authorization: Bearer " https://app.netdata.cloud/api/v2/spaces
diff --git a/docs/netdata-cloud/netdata-cloud-on-prem/README.md b/docs/netdata-cloud/netdata-cloud-on-prem/README.md
index 49373c454cfaa9..df53e06982a87e 100644
--- a/docs/netdata-cloud/netdata-cloud-on-prem/README.md
+++ b/docs/netdata-cloud/netdata-cloud-on-prem/README.md
@@ -6,7 +6,7 @@ The overall architecture looks like this:
```mermaid
flowchart TD
- agents("Netdata Agents<br/>Users' infrastructure<br/>Netdata Children & Parents")
+ Agents("Netdata Agents<br/>Users' infrastructure<br/>Netdata Children & Parents")
users[["Unified Dashboards<br/>Integrated Infrastructure<br/>Dashboards"]]
ingress("Ingress Gateway<br/>TLS termination")
traefik((("Traefik<br/>Authentication &<br/>Authorization")))
@@ -15,7 +15,7 @@ flowchart TD
frontend("Front-End<br/>Static Web Files")
auth("Users & Agents<br/>Authorization<br/>Microservices")
spaceroom("Spaces, Rooms,<br/>Nodes, Settings<br/>Microservices for<br/>managing Spaces,<br/>Rooms, Nodes and<br/>related settings")
- charts("Metrics & Queries<br/>Microservices for<br/>dispatching queries<br/>to Netdata agents")
+ charts("Metrics & Queries<br/>Microservices for<br/>dispatching queries<br/>to Netdata Agents")
alerts("Alerts & Notifications<br/>Microservices for<br/>tracking alert<br/>transitions and<br/>deduplicating alerts")
sql[("PostgreSQL<br/>Users, Spaces, Rooms,<br/>Agents, Nodes, Metric<br/>Names, Metrics Retention,<br/>Custom Dashboards,<br/>Settings")]
redis[("Redis<br/>Caches needed<br/>by Microservices")]
diff --git a/docs/netdata-cloud/netdata-cloud-on-prem/installation.md b/docs/netdata-cloud/netdata-cloud-on-prem/installation.md
index a23baa99caa8b2..7082e96cd73ec7 100644
--- a/docs/netdata-cloud/netdata-cloud-on-prem/installation.md
+++ b/docs/netdata-cloud/netdata-cloud-on-prem/installation.md
@@ -123,59 +123,59 @@ Responsible for user registration & authentication. Manages user account informa
### cloud-agent-data-ctrl-service
-Forwards request from the cloud to the relevant agents.
+Forwards requests from the Cloud to the relevant Agents.
The requests include:
-- Fetching chart metadata from the agent
-- Fetching chart data from the agent
-- Fetching function data from the agent
+- Fetching chart metadata from the Agent
+- Fetching chart data from the Agent
+- Fetching function data from the Agent
### cloud-agent-mqtt-input-service
-Forwards MQTT messages emitted by the agent related to the agent entities to the internal Pulsar broker. These include agent connection state updates.
+Forwards MQTT messages emitted by the Agent related to the Agent entities to the internal Pulsar broker. These include Agent connection state updates.
### cloud-agent-mqtt-output-service
-Forwards Pulsar messages emitted in the cloud related to the agent entities to the MQTT broker. From there, the messages reach the relevant agent.
+Forwards Pulsar messages emitted in the Cloud related to the Agent entities to the MQTT broker. From there, the messages reach the relevant Agent.
### cloud-alarm-config-mqtt-input-service
-Forwards MQTT messages emitted by the agent related to the alarm-config entities to the internal Pulsar broker. These include the data for the alarm configuration as seen by the agent.
+Forwards MQTT messages emitted by the Agent related to the alarm-config entities to the internal Pulsar broker. These include the data for the alarm configuration as seen by the Agent.
### cloud-alarm-log-mqtt-input-service
-Forwards MQTT messages emitted by the agent related to the alarm-log entities to the internal Pulsar broker. These contain data about the alarm transitions that occurred in an agent.
+Forwards MQTT messages emitted by the Agent related to the alarm-log entities to the internal Pulsar broker. These contain data about the alarm transitions that occurred in an Agent.
### cloud-alarm-mqtt-output-service
-Forwards Pulsar messages emitted in the cloud related to the alarm entities to the MQTT broker. From there, the messages reach the relevant agent.
+Forwards Pulsar messages emitted in the Cloud related to the alarm entities to the MQTT broker. From there, the messages reach the relevant Agent.
### cloud-alarm-processor-service
-Persists latest alert statuses received from the agent in the cloud.
+Persists latest alert statuses received from the Agent in the Cloud.
Aggregates alert statuses from relevant node instances.
-Exposes API endpoints to fetch alert data for visualization on the cloud.
+Exposes API endpoints to fetch alert data for visualization on the Cloud.
Determines if notifications need to be sent when alert statuses change and emits relevant messages to Pulsar.
Exposes API endpoints to store and return notification-silencing data.
### cloud-alarm-streaming-service
-Responsible for starting the alert stream between the agent and the cloud.
-Ensures that messages are processed in the correct order, and starts a reconciliation process between the cloud and the agent if out-of-order processing occurs.
+Responsible for starting the alert stream between the Agent and the Cloud.
+Ensures that messages are processed in the correct order, and starts a reconciliation process between the Cloud and the Agent if out-of-order processing occurs.
### cloud-charts-mqtt-input-service
-Forwards MQTT messages emitted by the agent related to the chart entities to the internal Pulsar broker. These include the chart metadata that is used to display relevant charts on the cloud.
+Forwards MQTT messages emitted by the Agent related to the chart entities to the internal Pulsar broker. These include the chart metadata that is used to display relevant charts on the Cloud.
### cloud-charts-mqtt-output-service
-Forwards Pulsar messages emitted in the cloud related to the charts entities to the MQTT broker. From there, the messages reach the relevant agent.
+Forwards Pulsar messages emitted in the Cloud related to the charts entities to the MQTT broker. From there, the messages reach the relevant Agent.
### cloud-charts-service
Exposes API endpoints to fetch the chart metadata.
-Forwards data requests via the `cloud-agent-data-ctrl-service` to the relevant agents to fetch chart data points.
-Exposes API endpoints to call various other endpoints on the agent, for instance, functions.
+Forwards data requests via the `cloud-agent-data-ctrl-service` to the relevant Agents to fetch chart data points.
+Exposes API endpoints to call various other endpoints on the Agent, for instance, functions.
### cloud-custom-dashboard-service
@@ -183,8 +183,8 @@ Exposes API endpoints to fetch and store custom dashboard data.
### cloud-environment-service
-Serves as the first contact point between the agent and the cloud.
-Returns authentication and MQTT endpoints to connecting agents.
+Serves as the first contact point between the Agent and the Cloud.
+Returns authentication and MQTT endpoints to connecting Agents.
### cloud-feed-service
@@ -193,7 +193,7 @@ Exposes API endpoints to fetch feed events from Elasticsearch.
### cloud-frontend
-Contains the on-prem cloud website. Serves static content.
+Contains the on-prem Cloud website. Serves static content.
### cloud-iam-user-service
@@ -209,11 +209,11 @@ Exposes API endpoints to fetch a human-friendly explanation of various netdata c
### cloud-node-mqtt-input-service
-Forwards MQTT messages emitted by the agent related to the node entities to the internal Pulsar broker. These include the node metadata as well as their connectivity state, either direct or via parents.
+Forwards MQTT messages emitted by the Agent related to the node entities to the internal Pulsar broker. These include the node metadata as well as their connectivity state, either direct or via parents.
### cloud-node-mqtt-output-service
-Forwards Pulsar messages emitted in the cloud related to the charts entities to the MQTT broker. From there, the messages reach the relevant agent.
+Forwards Pulsar messages emitted in the Cloud related to the node entities to the MQTT broker. From there, the messages reach the relevant Agent.
### cloud-notifications-dispatcher-service
@@ -222,6 +222,6 @@ Handles incoming notification messages and uses the relevant channels(email, sla
### cloud-spaceroom-service
-Exposes API endpoints to fetch and store relations between agents, nodes, spaces, users, and rooms.
-Acts as a provider of authorization for other cloud endpoints.
-Exposes API endpoints to authenticate agents connecting to the cloud.
+Exposes API endpoints to fetch and store relations between Agents, nodes, spaces, users, and rooms.
+Acts as a provider of authorization for other Cloud endpoints.
+Exposes API endpoints to authenticate Agents connecting to the Cloud.
diff --git a/docs/netdata-cloud/netdata-cloud-on-prem/troubleshooting.md b/docs/netdata-cloud/netdata-cloud-on-prem/troubleshooting.md
index ac8bdf6f871d34..39f60b10c7eed8 100644
--- a/docs/netdata-cloud/netdata-cloud-on-prem/troubleshooting.md
+++ b/docs/netdata-cloud/netdata-cloud-on-prem/troubleshooting.md
@@ -8,19 +8,19 @@ The following are questions that are usually asked by Netdata Cloud On-Prem oper
## Loading charts takes a long time or ends with an error
-The charts service is trying to collect data from the agents involved in the query. In most of the cases, this microservice queries many agents (depending on the Room), and all of them have to reply for the query to be satisfied.
+The charts service is trying to collect data from the Agents involved in the query. In most cases, this microservice queries many Agents (depending on the Room), and all of them have to reply for the query to be satisfied.
One or more of the following may be the cause:
1. **Slow Netdata Agent or Netdata Agents with unreliable connections**
- If any of the Netdata agents queried is slow or has an unreliable network connection, the query will stall and Netdata Cloud will have timeout before responding.
+ If any of the Netdata Agents queried is slow or has an unreliable network connection, the query will stall and Netdata Cloud will time out before responding.
- When agents are overloaded or have unreliable connections, we suggest to install more Netdata Parents for providing reliable backends to Netdata Cloud. They will automatically be preferred for all queries, when available.
+ When Agents are overloaded or have unreliable connections, we suggest installing more Netdata Parents to provide reliable backends to Netdata Cloud. They will automatically be preferred for all queries, when available.
2. **Poor Kubernetes cluster management**
- Another common issue is poor management of the Kubernetes cluster. When a node of a Kubernetes cluster is saturated, or the limits set to its containers are small, Netdata Cloud microservices get throttled by Kubernetes and does not get the resources required to process the responses of Netdata agents and aggregate the results for the dashboard.
+ Another common issue is poor management of the Kubernetes cluster. When a node of a Kubernetes cluster is saturated, or the limits set on its containers are too small, Netdata Cloud microservices get throttled by Kubernetes and do not get the resources required to process the responses of Netdata Agents and aggregate the results for the dashboard.
We recommend reviewing the throttling of the containers and increasing the limits if required.
diff --git a/docs/netdata-cloud/versions.md b/docs/netdata-cloud/versions.md
index 1bfd363d601421..37a59d3e2aadee 100644
--- a/docs/netdata-cloud/versions.md
+++ b/docs/netdata-cloud/versions.md
@@ -12,7 +12,7 @@ For more information check our [Pricing](https://www.netdata.cloud/pricing/) pag
## SaaS Version
-[Sign-up to Netdata Cloud](https://app.netdata.cloud) and start connecting your Netdata agents. The commands provided once you have signed up, include all the information to install and automatically connect (claim) Netdata agents to your Netdata Cloud space.
+[Sign up to Netdata Cloud](https://app.netdata.cloud) and start connecting your Netdata Agents. The commands provided once you have signed up include all the information needed to install and automatically connect (claim) Netdata Agents to your Netdata Cloud space.
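+Such a command typically looks like the sketch below (illustrative token and Room ID; always copy the exact command shown in your Space):
+```sh
+wget -O /tmp/netdata-kickstart.sh https://get.netdata.cloud/kickstart.sh
+sh /tmp/netdata-kickstart.sh --claim-token YOUR_CLAIM_TOKEN --claim-rooms YOUR_ROOM_ID --claim-url https://app.netdata.cloud
+```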
## On-Prem Version
diff --git a/docs/observability-centralization-points/best-practices.md b/docs/observability-centralization-points/best-practices.md
index 49bd3d6c3b8f81..74a84da1251db3 100644
--- a/docs/observability-centralization-points/best-practices.md
+++ b/docs/observability-centralization-points/best-practices.md
@@ -32,8 +32,8 @@ Compared to other observability solutions, the design of Netdata offers:
- **Optimized Cost and Performance**: By distributing the load across multiple centralization points, Netdata can optimize both performance and cost. This distribution allows for the efficient use of resources and help mitigate the bottlenecks associated with a single centralization point.
-- **Simplicity**: Netdata agents (Children and Parents) require minimal configuration and maintenance, usually less than the configuration and maintenance required for the agents and exporters of other monitoring solutions. This provides an observability pipeline that has less moving parts and is easier to manage and maintain.
+- **Simplicity**: Netdata Agents (Children and Parents) require minimal configuration and maintenance, usually less than the configuration and maintenance required for the agents and exporters of other monitoring solutions. This provides an observability pipeline that has fewer moving parts and is easier to manage and maintain.
-- **Always On-Prem**: Netdata centralization points are always on-prem. Even when Netdata Cloud is used, Netdata agents and parents are queried to provide the data required for the dashboards.
+- **Always On-Prem**: Netdata centralization points are always on-prem. Even when Netdata Cloud is used, Netdata Agents and Parents are queried to provide the data required for the dashboards.
- **Bottom-Up Observability**: Netdata is designed to monitor systems, containers and applications bottom-up, aiming to provide the maximum resolution, visibility, depth and insights possible. Its ability to segment the infrastructure into multiple independent observability centralization points with customized retention, machine learning and alerts on each of them, while providing unified infrastructure level dashboards at Netdata Cloud, provides a flexible environment that can be tailored per service or team, while still being one unified infrastructure.
diff --git a/docs/observability-centralization-points/metrics-centralization-points/configuration.md b/docs/observability-centralization-points/metrics-centralization-points/configuration.md
index d1f13f0501082f..2ba5d9b070a714 100644
--- a/docs/observability-centralization-points/metrics-centralization-points/configuration.md
+++ b/docs/observability-centralization-points/metrics-centralization-points/configuration.md
@@ -2,7 +2,7 @@
Metrics streaming configuration for both Netdata Children and Parents is done via `stream.conf`.
-`netdata.conf` and `stream.conf` have the same `ini` format, but `netdata.conf` is considered a non-sensitive file, while `stream.conf` contains API keys, IPs and other sensitive information that enable communication between Netdata agents.
+`netdata.conf` and `stream.conf` have the same `ini` format, but `netdata.conf` is considered a non-sensitive file, while `stream.conf` contains API keys, IPs and other sensitive information that enable communication between Netdata Agents.
`stream.conf` has 2 main sections:
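+A minimal sketch of the two sides (a Child pointing at a Parent at `10.0.0.1`, with an illustrative API key):
+```text
+# stream.conf on the Child: where to send metrics
+[stream]
+    enabled = yes
+    destination = 10.0.0.1:19999
+    api key = 11111111-2222-3333-4444-555555555555
+
+# stream.conf on the Parent: which API keys to accept
+[11111111-2222-3333-4444-555555555555]
+    enabled = yes
+```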
diff --git a/docs/observability-centralization-points/metrics-centralization-points/faq.md b/docs/observability-centralization-points/metrics-centralization-points/faq.md
index 1ce0d8534b26c2..917b8088a4c9d1 100644
--- a/docs/observability-centralization-points/metrics-centralization-points/faq.md
+++ b/docs/observability-centralization-points/metrics-centralization-points/faq.md
@@ -49,9 +49,9 @@ Check [Restoring a Netdata Parent after maintenance](/docs/observability-central
When there are multiple data sources for the same node, Netdata Cloud follows this strategy:
-1. Netdata Cloud prefers Netdata agents having `live` data.
-2. For time-series queries, when multiple Netdata agents have the retention required to answer the query, Netdata Cloud prefers the one that is further away from production systems.
-3. For Functions, Netdata Cloud prefers Netdata agents that are closer to the production systems.
+1. Netdata Cloud prefers Netdata Agents having `live` data.
+2. For time-series queries, when multiple Netdata Agents have the retention required to answer the query, Netdata Cloud prefers the one that is further away from production systems.
+3. For Functions, Netdata Cloud prefers Netdata Agents that are closer to the production systems.
## Is there a way to balance child nodes to the parent nodes of a cluster?
@@ -69,7 +69,7 @@ To set the ephemeral flag on a node, edit its netdata.conf and in the `[global]`
A parent node tracks connections and disconnections. When a node is marked as ephemeral and stops connecting for more than 24 hours, the parent will delete it from its memory and local administration, and tell Cloud that it is no longer live nor stale. Data for the node can no longer be accessed, but if the node connects again later, the node will be "revived", and previous data becomes available again.
-A node can be forced into this "forgotten" state with the Netdata CLI tool on the parent the node is connected to (if still connected) or one of the parent agents it was previously connected to. The state will be propagated _upwards_ and _sideways_ in case of an HA setup.
+A node can be forced into this "forgotten" state with the Netdata CLI tool on the parent the node is connected to (if still connected) or one of the parent Agents it was previously connected to. The state will be propagated _upwards_ and _sideways_ in case of an HA setup.
```
netdatacli remove-stale-node
diff --git a/docs/observability-centralization-points/metrics-centralization-points/sizing-netdata-parents.md b/docs/observability-centralization-points/metrics-centralization-points/sizing-netdata-parents.md
index edfbabe934b3d0..677d244a7a6811 100644
--- a/docs/observability-centralization-points/metrics-centralization-points/sizing-netdata-parents.md
+++ b/docs/observability-centralization-points/metrics-centralization-points/sizing-netdata-parents.md
@@ -1,3 +1,3 @@
# Sizing Netdata Parents
-To estimate CPU, RAM, and disk requirements for your Netdata Parents, check [sizing Netdata agents](/docs/netdata-agent/sizing-netdata-agents/README.md).
+To estimate CPU, RAM, and disk requirements for your Netdata Parents, check [sizing Netdata Agents](/docs/netdata-agent/sizing-netdata-agents/README.md).
diff --git a/docs/security-and-privacy-design/README.md b/docs/security-and-privacy-design/README.md
index da484bc0e19c12..5333087a9fcd3f 100644
--- a/docs/security-and-privacy-design/README.md
+++ b/docs/security-and-privacy-design/README.md
@@ -28,7 +28,7 @@ Netdata is committed to adhering to the best practices laid out by the Open Sour
Currently, the Netdata Agent follows the OSSF best practices at the passing level. Feel free to audit our approach to
the [OSSF guidelines](https://bestpractices.coreinfrastructure.org/en/projects/2231).
-Netdata Cloud boasts of comprehensive end-to-end automated testing, encompassing the UI, back-end, and agents, where
+Netdata Cloud boasts comprehensive end-to-end automated testing, encompassing the UI, back-end, and Agents, where
involved. In addition, the Netdata Agent uses an array of third-party services for static code analysis,
security analysis, and CI/CD integrations to ensure code quality on a per-pull-request basis. Tools like GitHub's
CodeQL, GitHub's Dependabot, our own unit tests, various types of linters,
@@ -100,7 +100,7 @@ laws, including GDPR and CCPA.
Netdata ensures user privacy rights as mandated by the GDPR and CCPA. This includes the right to access, correct, and
delete personal data. These functions are all available online via the Netdata Cloud User Interface (UI). In case a user
-wants to remove all personal information (email and activities), they can delete their cloud account by logging
+wants to remove all personal information (email and activities), they can delete their Netdata Cloud account by logging
in and accessing their profile, found at the bottom left of the screen.
### Regular Review and Updates
@@ -124,10 +124,10 @@ Netdata also collects anonymous telemetry events, which provide information on t
and performance metrics. This data is used to understand how the software is being used and to identify areas for
improvement.
-The purpose of collecting these statistics and telemetry data is to guide the development of the open-source agent,
+The purpose of collecting these statistics and telemetry data is to guide the development of the open-source Agent,
focusing on areas that are most beneficial to users.
-Users have the option to opt out of this data collection during the installation of the agent, or at any time by
+Users have the option to opt out of this data collection during the installation of the Agent, or at any time by
removing a specific file from their system.
Netdata retains this data indefinitely in order to track changes and trends within the community over time.
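One commonly documented way to opt out after installation is to create the opt-out flag file in the Netdata config directory and restart the Agent. A minimal sketch, assuming a default install location:

```bash
sudo touch /etc/netdata/.opt-out-from-anonymous-statistics
sudo systemctl restart netdata
```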
diff --git a/docs/security-and-privacy-design/netdata-agent-security.md b/docs/security-and-privacy-design/netdata-agent-security.md
index d2e2e1429cbec2..6d3acf76c2ab68 100644
--- a/docs/security-and-privacy-design/netdata-agent-security.md
+++ b/docs/security-and-privacy-design/netdata-agent-security.md
@@ -27,25 +27,25 @@ neither do most of the data collecting plugins.
Data collection plugins communicate with the main Netdata process via ephemeral, in-memory pipes that are inaccessible
to any other process.
-Streaming of metrics between Netdata agents requires an API key and can also be encrypted with TLS if the user
+Streaming of metrics between Netdata Agents requires an API key and can also be encrypted with TLS if the user
configures it.
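As an illustrative sketch (the hostname is a placeholder), TLS for streaming is typically requested by appending `:SSL` to the destination in the sending Agent's `stream.conf`:

```
[stream]
    destination = parent.example.com:19999:SSL
```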
-The Netdata agent's web API can also use TLS if configured.
+The Netdata Agent's web API can also use TLS if configured.
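A minimal sketch of that configuration (certificate paths are placeholders) in `netdata.conf`:

```
[web]
    ssl key = /etc/netdata/ssl/key.pem
    ssl certificate = /etc/netdata/ssl/cert.pem
```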
-When Netdata agents are claimed to Netdata Cloud, the communication happens via MQTT over Web Sockets over TLS, and
+When Netdata Agents are claimed to Netdata Cloud, the communication happens via MQTT over Web Sockets over TLS, and
public/private keys are used for authorizing access. These keys are exchanged during the claiming process (usually
-during the provisioning of each agent).
+during the provisioning of each Agent).
## Authentication
-Direct user access to the agent is not authenticated, considering that users should either use Netdata Cloud, or they
-are already on the same LAN, or they have configured proper firewall policies. However, Netdata agents can be hidden
+Direct user access to the Agent is not authenticated, on the assumption that users either use Netdata Cloud,
+are already on the same LAN, or have configured proper firewall policies. However, Netdata Agents can be hidden
behind an authenticating web proxy if required.
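A minimal sketch of such a setup, assuming an Nginx reverse proxy with HTTP basic authentication in front of a local Agent (hostname and file paths are placeholders; TLS omitted for brevity):

```
server {
    listen 80;
    server_name netdata.example.com;

    location / {
        auth_basic "Netdata";
        auth_basic_user_file /etc/nginx/netdata.htpasswd;  # created with htpasswd
        proxy_pass http://127.0.0.1:19999;
        proxy_set_header Host $host;
    }
}
```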
-For other Netdata agents streaming metrics to an agent, authentication via API keys is required and TLS can be used if
+For other Netdata Agents streaming metrics to an Agent, authentication via API keys is required and TLS can be used if
configured.
-For Netdata Cloud accessing Netdata agents, public/private key cryptography is used and TLS is mandatory.
+For Netdata Cloud accessing Netdata Agents, public/private key cryptography is used and TLS is mandatory.
## Security Vulnerability Response
@@ -57,12 +57,11 @@ information can be found [here](https://github.com/netdata/netdata/security/poli
## Protection Against Common Security Threats
-The Netdata agent is resilient against common security threats such as DDoS attacks and SQL injections. For DDoS,
-Netdata agent uses a fixed number of threads for processing requests, providing a cap on the resources that can be
+The Netdata Agent is resilient against common security threats such as DDoS attacks and SQL injections. For DDoS, the Agent uses a fixed number of threads for processing requests, providing a cap on the resources that can be
consumed. It also automatically manages its memory to prevent over-utilization. SQL injections are prevented as nothing
from the UI is passed back to the data collection plugins accessing databases.
-Additionally, the Netdata agent is running as a normal, unprivileged, operating system user (a few data collections
+Additionally, the Agent runs as a normal, unprivileged operating system user (a few data collections
require escalated privileges, but these privileges are isolated to just them), every netdata process runs by default
with a nice priority to protect production applications in case the system is starving for CPU resources, and Netdata
agents are configured by default to be the first processes to be killed by the operating system in case the operating
@@ -70,6 +69,4 @@ system starves for memory resources (OS-OOM - Operating System Out Of Memory eve
## User Customizable Security Settings
-Netdata provides users with the flexibility to customize agent security settings. Users can configure TLS across the
-system, and the agent provides extensive access control lists on all its interfaces to limit access to its endpoints
-based on IP. Additionally, users can configure the CPU and Memory priority of Netdata agents.
+Netdata provides users with the flexibility to customize the Agent's security settings. Users can configure TLS across the system, and the Agent provides extensive access control lists on all its interfaces to limit access to its endpoints based on IP. Additionally, users can configure the CPU and Memory priority of Netdata Agents.
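For example (the IP patterns below are placeholders), the access control lists live in the `[web]` section of `netdata.conf`:

```
[web]
    allow connections from = localhost 10.* 192.168.*
    allow dashboard from = localhost 10.*
    allow streaming from = 10.*
```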
diff --git a/docs/security-and-privacy-design/netdata-cloud-security.md b/docs/security-and-privacy-design/netdata-cloud-security.md
index 1df02286075c48..13270e7ec90fce 100644
--- a/docs/security-and-privacy-design/netdata-cloud-security.md
+++ b/docs/security-and-privacy-design/netdata-cloud-security.md
@@ -4,7 +4,7 @@ Netdata Cloud is designed with a security-first approach to ensure the highest l
using Netdata Cloud in environments that require compliance with standards like PCI DSS, SOC 2, or HIPAA, users can be
confident that all collected data is stored within their infrastructure. Data viewed on dashboards and alert
notifications travel over Netdata Cloud, but are not stored; instead, they're transformed in transit, aggregated from
-multiple agents and parents (centralization points), to appear as one data source in the user's browser.
+multiple Agents and parents (centralization points), to appear as one data source in the user's browser.
## User Identification and Authorization
@@ -41,10 +41,7 @@ Netdata Cloud does not store user credentials.
## Security Features and Response
-Netdata Cloud offers a variety of security features, including infrastructure-level dashboards, centralized alerts
-notifications, auditing logs, and role-based access to different segments of the infrastructure. The cloud service
-employs several protection mechanisms against DDoS attacks, such as rate-limiting and automated blacklisting. It also
-uses static code analyzers to prevent other types of attacks.
+Netdata Cloud offers a variety of security features, including infrastructure-level dashboards, centralized alert notifications, auditing logs, and role-based access to different segments of the infrastructure. It employs several protection mechanisms against DDoS attacks, such as rate-limiting and automated blacklisting. It also uses static code analyzers to prevent other types of attacks.
In the event of potential security vulnerabilities or incidents, Netdata Cloud follows the same process as the Netdata
agent. Every report is acknowledged and analyzed by the Netdata team within three working days, and the team keeps the
@@ -59,8 +56,7 @@ security tools, etc.) on a per contract basis.
## Deleting Personal Data
-Users who wish to remove all personal data (including email and activities) can delete their cloud account by logging
-into Netdata Cloud and accessing their profile.
+Users who wish to remove all personal data (including email and activities) can delete their account by logging into Netdata Cloud and accessing their profile.
## User Privacy and Data Protection
diff --git a/docs/top-monitoring-netdata-functions.md b/docs/top-monitoring-netdata-functions.md
index a9caea781337e0..3d461f56eda42c 100644
--- a/docs/top-monitoring-netdata-functions.md
+++ b/docs/top-monitoring-netdata-functions.md
@@ -13,7 +13,7 @@ For more details please check out documentation on how we use our internal colle
The following is required to be able to run Functions from Netdata Cloud.
-- At least one of the nodes claimed to your Space should be on a Netdata agent version higher than `v1.37.1`
+- At least one of the nodes claimed to your Space should be on a Netdata Agent version higher than `v1.37.1`
- Ensure that the node has the collector that exposes the function you want enabled
## What functions are currently available?
diff --git a/integrations/README.md b/integrations/README.md
index 377c1a3061d8ac..3ab22ec4df1519 100644
--- a/integrations/README.md
+++ b/integrations/README.md
@@ -10,7 +10,7 @@ To generate a copy of `integrations.js` locally, you will need:
- A local checkout of https://github.com/netdata/netdata
- A local checkout of https://github.com/netdata/go.d.plugin. The script
expects this to be checked out in a directory called `go.d.plugin`
- in the root directory of the agent repo, though a symlink with that
+ in the root directory of the Agent repo, though a symlink with that
name pointing at the actual location of the repo will work as well.
The first two parts can be easily covered in a Linux environment, such
@@ -21,6 +21,6 @@ as a VM or Docker container:
- On Fedora or RHEL (EPEL is required on RHEL systems): `dnf install python3-jsonschema python3-referencing python3-jinja2 python3-ruamel-yaml`
Once the environment is set up, simply run
-`integrations/gen_integrations.py` from the agent repo. Note that the
+`integrations/gen_integrations.py` from the Agent repo. Note that the
script must be run _from this specific location_, as it uses its own
path to figure out where all the files it needs are.
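Putting the above together, a minimal sketch of the whole workflow (the directory layout is an assumption based on the description above, and the Python dependencies listed earlier must already be installed):

```bash
git clone https://github.com/netdata/netdata
git clone https://github.com/netdata/go.d.plugin netdata/go.d.plugin  # or a symlink named go.d.plugin

cd netdata
integrations/gen_integrations.py  # must be run from the root of the Agent repo
```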
diff --git a/integrations/integrations.js b/integrations/integrations.js
index 95904775a1292d..46041aa145c186 100644
--- a/integrations/integrations.js
+++ b/integrations/integrations.js
@@ -3193,7 +3193,7 @@ export const integrations = [
"most_popular": true
},
"overview": "# Apache\n\nPlugin: go.d.plugin\nModule: apache\n\n## Overview\n\nThis collector monitors the activity and performance of Apache servers, and collects metrics such as the number of connections, workers, requests and more.\n\n\nIt sends HTTP requests to the Apache location [server-status](https://httpd.apache.org/docs/2.4/mod/mod_status.html), \nwhich is a built-in location that provides metrics about the Apache server.\n\n\nThis collector is supported on all platforms.\n\nThis collector supports collecting metrics from multiple instances of this integration, including remote instances.\n\n\n### Default Behavior\n\n#### Auto-Detection\n\nBy default, it detects Apache instances running on localhost that are listening on port 80.\nOn startup, it tries to collect metrics from:\n\n- http://localhost/server-status?auto\n- http://127.0.0.1/server-status?auto\n\n\n#### Limits\n\nThe default configuration for this integration does not impose any limits on data collection.\n\n#### Performance Impact\n\nThe default configuration for this integration is not expected to impose a significant performance impact on the system.\n",
- "setup": "## Setup\n\n### Prerequisites\n\n#### Enable Apache status support\n\n- Enable and configure [status_module](https://httpd.apache.org/docs/2.4/mod/mod_status.html).\n- Ensure that you have [ExtendedStatus](https://httpd.apache.org/docs/2.4/mod/mod_status.html#troubleshoot) set on (enabled by default since Apache v2.3.6).\n\n\n\n### Configuration\n\n#### File\n\nThe configuration file name for this integration is `go.d/apache.conf`.\n\n\nYou can edit the configuration file using the [`edit-config`](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the\nNetdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).\n\n```bash\ncd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata\nsudo ./edit-config go.d/apache.conf\n```\n#### Options\n\nThe following options can be defined globally: update_every, autodetection_retry.\n\n\n{% details open=true summary=\"Config options\" %}\n| Name | Description | Default | Required |\n|:----|:-----------|:-------|:--------:|\n| update_every | Data collection frequency. | 1 | no |\n| autodetection_retry | Recheck interval in seconds. Zero means no recheck will be scheduled. | 0 | no |\n| url | Server URL. | http://127.0.0.1/server-status?auto | yes |\n| timeout | HTTP request timeout. | 1 | no |\n| username | Username for basic HTTP authentication. | | no |\n| password | Password for basic HTTP authentication. | | no |\n| proxy_url | Proxy URL. | | no |\n| proxy_username | Username for proxy basic HTTP authentication. | | no |\n| proxy_password | Password for proxy basic HTTP authentication. | | no |\n| method | HTTP request method. | GET | no |\n| body | HTTP request body. | | no |\n| headers | HTTP request headers. | | no |\n| not_follow_redirects | Redirect handling policy. Controls whether the client follows redirects. | no | no |\n| tls_skip_verify | Server certificate chain and hostname validation policy. Controls whether the client performs this check. | no | no |\n| tls_ca | Certification authority that the client uses when verifying the server's certificates. | | no |\n| tls_cert | Client TLS certificate. | | no |\n| tls_key | Client TLS key. | | no |\n\n{% /details %}\n#### Examples\n\n##### Basic\n\nA basic example configuration.\n\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1/server-status?auto\n\n```\n##### HTTP authentication\n\nBasic HTTP authentication.\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1/server-status?auto\n username: username\n password: password\n\n```\n{% /details %}\n##### HTTPS with self-signed certificate\n\nApache with enabled HTTPS and self-signed certificate.\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: local\n url: https://127.0.0.1/server-status?auto\n tls_skip_verify: yes\n\n```\n{% /details %}\n##### Multi-instance\n\n> **Note**: When you define multiple jobs, their names must be unique.\n\nCollecting metrics from local and remote instances.\n\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1/server-status?auto\n\n - name: remote\n url: http://192.0.2.1/server-status?auto\n\n```\n{% /details %}\n",
+ "setup": "## Setup\n\n### Prerequisites\n\n#### Enable Apache status support\n\n- Enable and configure [status_module](https://httpd.apache.org/docs/2.4/mod/mod_status.html).\n- Ensure that you have [ExtendedStatus](https://httpd.apache.org/docs/2.4/mod/mod_status.html#troubleshoot) set on (enabled by default since Apache v2.3.6).\n\n\n\n### Configuration\n\n#### File\n\nThe configuration file name for this integration is `go.d/apache.conf`.\n\n\nYou can edit the configuration file using the [`edit-config`](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the\nNetdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).\n\n```bash\ncd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata\nsudo ./edit-config go.d/apache.conf\n```\n#### Options\n\nThe following options can be defined globally: update_every, autodetection_retry.\n\n\n{% details open=true summary=\"Config options\" %}\n| Name | Description | Default | Required |\n|:----|:-----------|:-------|:--------:|\n| update_every | Data collection frequency. | 1 | no |\n| autodetection_retry | Recheck interval in seconds. Zero means no recheck will be scheduled. | 0 | no |\n| url | Server URL. | http://127.0.0.1/server-status?auto | yes |\n| timeout | HTTP request timeout. | 1 | no |\n| username | Username for basic HTTP authentication. | | no |\n| password | Password for basic HTTP authentication. | | no |\n| proxy_url | Proxy URL. | | no |\n| proxy_username | Username for proxy basic HTTP authentication. | | no |\n| proxy_password | Password for proxy basic HTTP authentication. | | no |\n| method | HTTP request method. | GET | no |\n| body | HTTP request body. | | no |\n| headers | HTTP request headers. | | no |\n| not_follow_redirects | Redirect handling policy. Controls whether the client follows redirects. | no | no |\n| force_http2 | Forces the use of HTTP/2 protocol for all requests, even over plain TCP (h2c). | no | no |\n| tls_skip_verify | Server certificate chain and hostname validation policy. Controls whether the client performs this check. | no | no |\n| tls_ca | Certification authority that the client uses when verifying the server's certificates. | | no |\n| tls_cert | Client TLS certificate. | | no |\n| tls_key | Client TLS key. | | no |\n\n{% /details %}\n#### Examples\n\n##### Basic\n\nA basic example configuration.\n\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1/server-status?auto\n\n```\n##### HTTP authentication\n\nBasic HTTP authentication.\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1/server-status?auto\n username: username\n password: password\n\n```\n{% /details %}\n##### HTTPS with self-signed certificate\n\nApache with enabled HTTPS and self-signed certificate.\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: local\n url: https://127.0.0.1/server-status?auto\n tls_skip_verify: yes\n\n```\n{% /details %}\n##### Multi-instance\n\n> **Note**: When you define multiple jobs, their names must be unique.\n\nCollecting metrics from local and remote instances.\n\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1/server-status?auto\n\n - name: remote\n url: http://192.0.2.1/server-status?auto\n\n```\n{% /details %}\n",
"troubleshooting": "## Troubleshooting\n\n### Debug Mode\n\n**Important**: Debug mode is not supported for data collection jobs created via the UI using the Dyncfg feature.\n\nTo troubleshoot issues with the `apache` collector, run the `go.d.plugin` with the debug option enabled. The output\nshould give you clues as to why the collector isn't working.\n\n- Navigate to the `plugins.d` directory, usually at `/usr/libexec/netdata/plugins.d/`. If that's not the case on\n your system, open `netdata.conf` and look for the `plugins` setting under `[directories]`.\n\n ```bash\n cd /usr/libexec/netdata/plugins.d/\n ```\n\n- Switch to the `netdata` user.\n\n ```bash\n sudo -u netdata -s\n ```\n\n- Run the `go.d.plugin` to debug the collector:\n\n ```bash\n ./go.d.plugin -d -m apache\n ```\n\n### Getting Logs\n\nIf you're encountering problems with the `apache` collector, follow these steps to retrieve logs and identify potential issues:\n\n- **Run the command** specific to your system (systemd, non-systemd, or Docker container).\n- **Examine the output** for any warnings or error messages that might indicate issues. These messages should provide clues about the root cause of the problem.\n\n#### System with systemd\n\nUse the following command to view logs generated since the last Netdata service restart:\n\n```bash\njournalctl _SYSTEMD_INVOCATION_ID=\"$(systemctl show --value --property=InvocationID netdata)\" --namespace=netdata --grep apache\n```\n\n#### System without systemd\n\nLocate the collector log file, typically at `/var/log/netdata/collector.log`, and use `grep` to filter for collector's name:\n\n```bash\ngrep apache /var/log/netdata/collector.log\n```\n\n**Note**: This method shows logs from all restarts. Focus on the **latest entries** for troubleshooting current issues.\n\n#### Docker Container\n\nIf your Netdata runs in a Docker container named \"netdata\" (replace if different), use this command:\n\n```bash\ndocker logs netdata 2>&1 | grep apache\n```\n\n",
"alerts": "## Alerts\n\nThere are no alerts configured by default for this integration.\n",
"metrics": "## Metrics\n\nMetrics grouped by *scope*.\n\nThe scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.\n\nAll metrics available only if [ExtendedStatus](https://httpd.apache.org/docs/2.4/mod/core.html#extendedstatus) is on.\n\n\n### Per Apache instance\n\nThese metrics refer to the entire monitored application.\n\nThis scope has no labels.\n\nMetrics:\n\n| Metric | Dimensions | Unit | Basic | Extended |\n|:------|:----------|:----|:---:|:---:|\n| apache.connections | connections | connections | \u2022 | \u2022 |\n| apache.conns_async | keepalive, closing, writing | connections | \u2022 | \u2022 |\n| apache.workers | idle, busy | workers | \u2022 | \u2022 |\n| apache.scoreboard | waiting, starting, reading, sending, keepalive, dns_lookup, closing, logging, finishing, idle_cleanup, open | connections | \u2022 | \u2022 |\n| apache.requests | requests | requests/s | | \u2022 |\n| apache.net | sent | kilobit/s | | \u2022 |\n| apache.reqpersec | requests | requests/s | | \u2022 |\n| apache.bytespersec | served | KiB/s | | \u2022 |\n| apache.bytesperreq | size | KiB | | \u2022 |\n| apache.uptime | uptime | seconds | | \u2022 |\n\n",
@@ -3242,7 +3242,7 @@ export const integrations = [
"most_popular": true
},
"overview": "# HTTPD\n\nPlugin: go.d.plugin\nModule: apache\n\n## Overview\n\nThis collector monitors the activity and performance of Apache servers, and collects metrics such as the number of connections, workers, requests and more.\n\n\nIt sends HTTP requests to the Apache location [server-status](https://httpd.apache.org/docs/2.4/mod/mod_status.html), \nwhich is a built-in location that provides metrics about the Apache server.\n\n\nThis collector is supported on all platforms.\n\nThis collector supports collecting metrics from multiple instances of this integration, including remote instances.\n\n\n### Default Behavior\n\n#### Auto-Detection\n\nBy default, it detects Apache instances running on localhost that are listening on port 80.\nOn startup, it tries to collect metrics from:\n\n- http://localhost/server-status?auto\n- http://127.0.0.1/server-status?auto\n\n\n#### Limits\n\nThe default configuration for this integration does not impose any limits on data collection.\n\n#### Performance Impact\n\nThe default configuration for this integration is not expected to impose a significant performance impact on the system.\n",
- "setup": "## Setup\n\n### Prerequisites\n\n#### Enable Apache status support\n\n- Enable and configure [status_module](https://httpd.apache.org/docs/2.4/mod/mod_status.html).\n- Ensure that you have [ExtendedStatus](https://httpd.apache.org/docs/2.4/mod/mod_status.html#troubleshoot) set on (enabled by default since Apache v2.3.6).\n\n\n\n### Configuration\n\n#### File\n\nThe configuration file name for this integration is `go.d/apache.conf`.\n\n\nYou can edit the configuration file using the [`edit-config`](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the\nNetdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).\n\n```bash\ncd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata\nsudo ./edit-config go.d/apache.conf\n```\n#### Options\n\nThe following options can be defined globally: update_every, autodetection_retry.\n\n\n{% details open=true summary=\"Config options\" %}\n| Name | Description | Default | Required |\n|:----|:-----------|:-------|:--------:|\n| update_every | Data collection frequency. | 1 | no |\n| autodetection_retry | Recheck interval in seconds. Zero means no recheck will be scheduled. | 0 | no |\n| url | Server URL. | http://127.0.0.1/server-status?auto | yes |\n| timeout | HTTP request timeout. | 1 | no |\n| username | Username for basic HTTP authentication. | | no |\n| password | Password for basic HTTP authentication. | | no |\n| proxy_url | Proxy URL. | | no |\n| proxy_username | Username for proxy basic HTTP authentication. | | no |\n| proxy_password | Password for proxy basic HTTP authentication. | | no |\n| method | HTTP request method. | GET | no |\n| body | HTTP request body. | | no |\n| headers | HTTP request headers. | | no |\n| not_follow_redirects | Redirect handling policy. Controls whether the client follows redirects. | no | no |\n| tls_skip_verify | Server certificate chain and hostname validation policy. Controls whether the client performs this check. | no | no |\n| tls_ca | Certification authority that the client uses when verifying the server's certificates. | | no |\n| tls_cert | Client TLS certificate. | | no |\n| tls_key | Client TLS key. | | no |\n\n{% /details %}\n#### Examples\n\n##### Basic\n\nA basic example configuration.\n\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1/server-status?auto\n\n```\n##### HTTP authentication\n\nBasic HTTP authentication.\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1/server-status?auto\n username: username\n password: password\n\n```\n{% /details %}\n##### HTTPS with self-signed certificate\n\nApache with enabled HTTPS and self-signed certificate.\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: local\n url: https://127.0.0.1/server-status?auto\n tls_skip_verify: yes\n\n```\n{% /details %}\n##### Multi-instance\n\n> **Note**: When you define multiple jobs, their names must be unique.\n\nCollecting metrics from local and remote instances.\n\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1/server-status?auto\n\n - name: remote\n url: http://192.0.2.1/server-status?auto\n\n```\n{% /details %}\n",
+ "setup": "## Setup\n\n### Prerequisites\n\n#### Enable Apache status support\n\n- Enable and configure [status_module](https://httpd.apache.org/docs/2.4/mod/mod_status.html).\n- Ensure that you have [ExtendedStatus](https://httpd.apache.org/docs/2.4/mod/mod_status.html#troubleshoot) set on (enabled by default since Apache v2.3.6).\n\n\n\n### Configuration\n\n#### File\n\nThe configuration file name for this integration is `go.d/apache.conf`.\n\n\nYou can edit the configuration file using the [`edit-config`](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the\nNetdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).\n\n```bash\ncd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata\nsudo ./edit-config go.d/apache.conf\n```\n#### Options\n\nThe following options can be defined globally: update_every, autodetection_retry.\n\n\n{% details open=true summary=\"Config options\" %}\n| Name | Description | Default | Required |\n|:----|:-----------|:-------|:--------:|\n| update_every | Data collection frequency. | 1 | no |\n| autodetection_retry | Recheck interval in seconds. Zero means no recheck will be scheduled. | 0 | no |\n| url | Server URL. | http://127.0.0.1/server-status?auto | yes |\n| timeout | HTTP request timeout. | 1 | no |\n| username | Username for basic HTTP authentication. | | no |\n| password | Password for basic HTTP authentication. | | no |\n| proxy_url | Proxy URL. | | no |\n| proxy_username | Username for proxy basic HTTP authentication. | | no |\n| proxy_password | Password for proxy basic HTTP authentication. | | no |\n| method | HTTP request method. | GET | no |\n| body | HTTP request body. | | no |\n| headers | HTTP request headers. | | no |\n| not_follow_redirects | Redirect handling policy. Controls whether the client follows redirects. | no | no |\n| force_http2 | Forces the use of HTTP/2 protocol for all requests, even over plain TCP (h2c). | no | no |\n| tls_skip_verify | Server certificate chain and hostname validation policy. Controls whether the client performs this check. | no | no |\n| tls_ca | Certification authority that the client uses when verifying the server's certificates. | | no |\n| tls_cert | Client TLS certificate. | | no |\n| tls_key | Client TLS key. | | no |\n\n{% /details %}\n#### Examples\n\n##### Basic\n\nA basic example configuration.\n\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1/server-status?auto\n\n```\n##### HTTP authentication\n\nBasic HTTP authentication.\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1/server-status?auto\n username: username\n password: password\n\n```\n{% /details %}\n##### HTTPS with self-signed certificate\n\nApache with enabled HTTPS and self-signed certificate.\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: local\n url: https://127.0.0.1/server-status?auto\n tls_skip_verify: yes\n\n```\n{% /details %}\n##### Multi-instance\n\n> **Note**: When you define multiple jobs, their names must be unique.\n\nCollecting metrics from local and remote instances.\n\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1/server-status?auto\n\n - name: remote\n url: http://192.0.2.1/server-status?auto\n\n```\n{% /details %}\n",
"troubleshooting": "## Troubleshooting\n\n### Debug Mode\n\n**Important**: Debug mode is not supported for data collection jobs created via the UI using the Dyncfg feature.\n\nTo troubleshoot issues with the `apache` collector, run the `go.d.plugin` with the debug option enabled. The output\nshould give you clues as to why the collector isn't working.\n\n- Navigate to the `plugins.d` directory, usually at `/usr/libexec/netdata/plugins.d/`. If that's not the case on\n your system, open `netdata.conf` and look for the `plugins` setting under `[directories]`.\n\n ```bash\n cd /usr/libexec/netdata/plugins.d/\n ```\n\n- Switch to the `netdata` user.\n\n ```bash\n sudo -u netdata -s\n ```\n\n- Run the `go.d.plugin` to debug the collector:\n\n ```bash\n ./go.d.plugin -d -m apache\n ```\n\n### Getting Logs\n\nIf you're encountering problems with the `apache` collector, follow these steps to retrieve logs and identify potential issues:\n\n- **Run the command** specific to your system (systemd, non-systemd, or Docker container).\n- **Examine the output** for any warnings or error messages that might indicate issues. These messages should provide clues about the root cause of the problem.\n\n#### System with systemd\n\nUse the following command to view logs generated since the last Netdata service restart:\n\n```bash\njournalctl _SYSTEMD_INVOCATION_ID=\"$(systemctl show --value --property=InvocationID netdata)\" --namespace=netdata --grep apache\n```\n\n#### System without systemd\n\nLocate the collector log file, typically at `/var/log/netdata/collector.log`, and use `grep` to filter for collector's name:\n\n```bash\ngrep apache /var/log/netdata/collector.log\n```\n\n**Note**: This method shows logs from all restarts. Focus on the **latest entries** for troubleshooting current issues.\n\n#### Docker Container\n\nIf your Netdata runs in a Docker container named \"netdata\" (replace if different), use this command:\n\n```bash\ndocker logs netdata 2>&1 | grep apache\n```\n\n",
"alerts": "## Alerts\n\nThere are no alerts configured by default for this integration.\n",
"metrics": "## Metrics\n\nMetrics grouped by *scope*.\n\nThe scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.\n\nAll metrics available only if [ExtendedStatus](https://httpd.apache.org/docs/2.4/mod/core.html#extendedstatus) is on.\n\n\n### Per Apache instance\n\nThese metrics refer to the entire monitored application.\n\nThis scope has no labels.\n\nMetrics:\n\n| Metric | Dimensions | Unit | Basic | Extended |\n|:------|:----------|:----|:---:|:---:|\n| apache.connections | connections | connections | \u2022 | \u2022 |\n| apache.conns_async | keepalive, closing, writing | connections | \u2022 | \u2022 |\n| apache.workers | idle, busy | workers | \u2022 | \u2022 |\n| apache.scoreboard | waiting, starting, reading, sending, keepalive, dns_lookup, closing, logging, finishing, idle_cleanup, open | connections | \u2022 | \u2022 |\n| apache.requests | requests | requests/s | | \u2022 |\n| apache.net | sent | kilobit/s | | \u2022 |\n| apache.reqpersec | requests | requests/s | | \u2022 |\n| apache.bytespersec | served | KiB/s | | \u2022 |\n| apache.bytesperreq | size | KiB | | \u2022 |\n| apache.uptime | uptime | seconds | | \u2022 |\n\n",
@@ -3577,7 +3577,7 @@ export const integrations = [
"most_popular": true
},
"overview": "# Consul\n\nPlugin: go.d.plugin\nModule: consul\n\n## Overview\n\nThis collector monitors [key metrics](https://developer.hashicorp.com/consul/docs/agent/telemetry#key-metrics) of Consul Agents: transaction timings, leadership changes, memory usage and more.\n\n\nIt periodically sends HTTP requests to [Consul REST API](https://developer.hashicorp.com/consul/api-docs).\n\nUsed endpoints:\n\n- [/operator/autopilot/health](https://developer.hashicorp.com/consul/api-docs/operator/autopilot#read-health)\n- [/agent/checks](https://developer.hashicorp.com/consul/api-docs/agent/check#list-checks)\n- [/agent/self](https://developer.hashicorp.com/consul/api-docs/agent#read-configuration)\n- [/agent/metrics](https://developer.hashicorp.com/consul/api-docs/agent#view-metrics)\n- [/coordinate/nodes](https://developer.hashicorp.com/consul/api-docs/coordinate#read-lan-coordinates-for-all-nodes)\n\n\nThis collector is supported on all platforms.\n\nThis collector supports collecting metrics from multiple instances of this integration, including remote instances.\n\n\n### Default Behavior\n\n#### Auto-Detection\n\nThis collector discovers instances running on the local host, that provide metrics on port 8500.\n\nOn startup, it tries to collect metrics from:\n\n- http://localhost:8500\n- http://127.0.0.1:8500\n\n\n#### Limits\n\nThe default configuration for this integration does not impose any limits on data collection.\n\n#### Performance Impact\n\nThe default configuration for this integration is not expected to impose a significant performance impact on the system.\n",
- "setup": "## Setup\n\n### Prerequisites\n\n#### Enable Prometheus telemetry\n\n[Enable](https://developer.hashicorp.com/consul/docs/agent/config/config-files#telemetry-prometheus_retention_time) telemetry on your Consul agent, by increasing the value of `prometheus_retention_time` from `0`.\n\n\n#### Add required ACLs to Token\n\nRequired **only if authentication is enabled**.\n\n| ACL | Endpoint |\n|:---------------:|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `operator:read` | [autopilot health status](https://developer.hashicorp.com/consul/api-docs/operator/autopilot#read-health) |\n| `node:read` | [checks](https://developer.hashicorp.com/consul/api-docs/agent/check#list-checks) |\n| `agent:read` | [configuration](https://developer.hashicorp.com/consul/api-docs/agent#read-configuration), [metrics](https://developer.hashicorp.com/consul/api-docs/agent#view-metrics), and [lan coordinates](https://developer.hashicorp.com/consul/api-docs/coordinate#read-lan-coordinates-for-all-nodes) |\n\n\n\n### Configuration\n\n#### File\n\nThe configuration file name for this integration is `go.d/consul.conf`.\n\n\nYou can edit the configuration file using the [`edit-config`](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the\nNetdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).\n\n```bash\ncd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata\nsudo ./edit-config go.d/consul.conf\n```\n#### Options\n\nThe following options can be defined globally: update_every, autodetection_retry.\n\n\n{% details open=true summary=\"All options\" %}\n| Name | Description | Default | Required |\n|:----|:-----------|:-------|:--------:|\n| update_every | Data collection frequency. | 1 | no |\n| autodetection_retry | Recheck interval in seconds. Zero means no recheck will be scheduled. | 0 | no |\n| url | Server URL. | http://localhost:8500 | yes |\n| acl_token | ACL token used in every request. | | no |\n| max_checks | Checks processing/charting limit. | | no |\n| max_filter | Checks processing/charting filter. Uses [simple patterns](https://github.com/netdata/netdata/blob/master/src/libnetdata/simple_pattern/README.md). | | no |\n| username | Username for basic HTTP authentication. | | no |\n| password | Password for basic HTTP authentication. | | no |\n| proxy_url | Proxy URL. | | no |\n| proxy_username | Username for proxy basic HTTP authentication. | | no |\n| proxy_password | Password for proxy basic HTTP authentication. | | no |\n| timeout | HTTP request timeout. | 1 | no |\n| method | HTTP request method. | GET | no |\n| body | HTTP request body. | | no |\n| headers | HTTP request headers. | | no |\n| not_follow_redirects | Redirect handling policy. Controls whether the client follows redirects. | no | no |\n| tls_skip_verify | Server certificate chain and hostname validation policy. Controls whether the client performs this check. | no | no |\n| tls_ca | Certification authority that the client uses when verifying the server's certificates. | | no |\n| tls_cert | Client tls certificate. | | no |\n| tls_key | Client tls key. 
| | no |\n\n{% /details %}\n#### Examples\n\n##### Basic\n\nAn example configuration.\n\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1:8500\n acl_token: \"ec15675e-2999-d789-832e-8c4794daa8d7\"\n\n```\n##### Basic HTTP auth\n\nLocal server with basic HTTP authentication.\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1:8500\n acl_token: \"ec15675e-2999-d789-832e-8c4794daa8d7\"\n username: foo\n password: bar\n\n```\n{% /details %}\n##### Multi-instance\n\n> **Note**: When you define multiple jobs, their names must be unique.\n\nCollecting metrics from local and remote instances.\n\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1:8500\n acl_token: \"ec15675e-2999-d789-832e-8c4794daa8d7\"\n\n - name: remote\n url: http://203.0.113.10:8500\n acl_token: \"ada7f751-f654-8872-7f93-498e799158b6\"\n\n```\n{% /details %}\n",
+ "setup": "## Setup\n\n### Prerequisites\n\n#### Enable Prometheus telemetry\n\n[Enable](https://developer.hashicorp.com/consul/docs/agent/config/config-files#telemetry-prometheus_retention_time) telemetry on your Consul Agent, by increasing the value of `prometheus_retention_time` from `0`.\n\n\n#### Add required ACLs to Token\n\nRequired **only if authentication is enabled**.\n\n| ACL | Endpoint |\n|:---------------:|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `operator:read` | [autopilot health status](https://developer.hashicorp.com/consul/api-docs/operator/autopilot#read-health) |\n| `node:read` | [checks](https://developer.hashicorp.com/consul/api-docs/agent/check#list-checks) |\n| `agent:read` | [configuration](https://developer.hashicorp.com/consul/api-docs/agent#read-configuration), [metrics](https://developer.hashicorp.com/consul/api-docs/agent#view-metrics), and [lan coordinates](https://developer.hashicorp.com/consul/api-docs/coordinate#read-lan-coordinates-for-all-nodes) |\n\n\n\n### Configuration\n\n#### File\n\nThe configuration file name for this integration is `go.d/consul.conf`.\n\n\nYou can edit the configuration file using the [`edit-config`](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the\nNetdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).\n\n```bash\ncd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata\nsudo ./edit-config go.d/consul.conf\n```\n#### Options\n\nThe following options can be defined globally: update_every, autodetection_retry.\n\n\n{% details open=true summary=\"All options\" %}\n| Name | Description | Default | Required |\n|:----|:-----------|:-------|:--------:|\n| update_every | Data collection frequency. | 1 | no |\n| autodetection_retry | Recheck interval in seconds. Zero means no recheck will be scheduled. | 0 | no |\n| url | Server URL. | http://localhost:8500 | yes |\n| acl_token | ACL token used in every request. | | no |\n| max_checks | Checks processing/charting limit. | | no |\n| max_filter | Checks processing/charting filter. Uses [simple patterns](https://github.com/netdata/netdata/blob/master/src/libnetdata/simple_pattern/README.md). | | no |\n| username | Username for basic HTTP authentication. | | no |\n| password | Password for basic HTTP authentication. | | no |\n| proxy_url | Proxy URL. | | no |\n| proxy_username | Username for proxy basic HTTP authentication. | | no |\n| proxy_password | Password for proxy basic HTTP authentication. | | no |\n| timeout | HTTP request timeout. | 1 | no |\n| method | HTTP request method. | GET | no |\n| body | HTTP request body. | | no |\n| headers | HTTP request headers. | | no |\n| not_follow_redirects | Redirect handling policy. Controls whether the client follows redirects. | no | no |\n| tls_skip_verify | Server certificate chain and hostname validation policy. Controls whether the client performs this check. | no | no |\n| tls_ca | Certification authority that the client uses when verifying the server's certificates. | | no |\n| tls_cert | Client tls certificate. | | no |\n| tls_key | Client tls key. 
| | no |\n\n{% /details %}\n#### Examples\n\n##### Basic\n\nAn example configuration.\n\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1:8500\n acl_token: \"ec15675e-2999-d789-832e-8c4794daa8d7\"\n\n```\n##### Basic HTTP auth\n\nLocal server with basic HTTP authentication.\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1:8500\n acl_token: \"ec15675e-2999-d789-832e-8c4794daa8d7\"\n username: foo\n password: bar\n\n```\n{% /details %}\n##### Multi-instance\n\n> **Note**: When you define multiple jobs, their names must be unique.\n\nCollecting metrics from local and remote instances.\n\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1:8500\n acl_token: \"ec15675e-2999-d789-832e-8c4794daa8d7\"\n\n - name: remote\n url: http://203.0.113.10:8500\n acl_token: \"ada7f751-f654-8872-7f93-498e799158b6\"\n\n```\n{% /details %}\n",
"troubleshooting": "## Troubleshooting\n\n### Debug Mode\n\n**Important**: Debug mode is not supported for data collection jobs created via the UI using the Dyncfg feature.\n\nTo troubleshoot issues with the `consul` collector, run the `go.d.plugin` with the debug option enabled. The output\nshould give you clues as to why the collector isn't working.\n\n- Navigate to the `plugins.d` directory, usually at `/usr/libexec/netdata/plugins.d/`. If that's not the case on\n your system, open `netdata.conf` and look for the `plugins` setting under `[directories]`.\n\n ```bash\n cd /usr/libexec/netdata/plugins.d/\n ```\n\n- Switch to the `netdata` user.\n\n ```bash\n sudo -u netdata -s\n ```\n\n- Run the `go.d.plugin` to debug the collector:\n\n ```bash\n ./go.d.plugin -d -m consul\n ```\n\n### Getting Logs\n\nIf you're encountering problems with the `consul` collector, follow these steps to retrieve logs and identify potential issues:\n\n- **Run the command** specific to your system (systemd, non-systemd, or Docker container).\n- **Examine the output** for any warnings or error messages that might indicate issues. These messages should provide clues about the root cause of the problem.\n\n#### System with systemd\n\nUse the following command to view logs generated since the last Netdata service restart:\n\n```bash\njournalctl _SYSTEMD_INVOCATION_ID=\"$(systemctl show --value --property=InvocationID netdata)\" --namespace=netdata --grep consul\n```\n\n#### System without systemd\n\nLocate the collector log file, typically at `/var/log/netdata/collector.log`, and use `grep` to filter for collector's name:\n\n```bash\ngrep consul /var/log/netdata/collector.log\n```\n\n**Note**: This method shows logs from all restarts. Focus on the **latest entries** for troubleshooting current issues.\n\n#### Docker Container\n\nIf your Netdata runs in a Docker container named \"netdata\" (replace if different), use this command:\n\n```bash\ndocker logs netdata 2>&1 | grep consul\n```\n\n",
"alerts": "## Alerts\n\n\nThe following alerts are available:\n\n| Alert name | On metric | Description |\n|:------------|:----------|:------------|\n| [ consul_node_health_check_status ](https://github.com/netdata/netdata/blob/master/src/health/health.d/consul.conf) | consul.node_health_check_status | node health check ${label:check_name} has failed on server ${label:node_name} datacenter ${label:datacenter} |\n| [ consul_service_health_check_status ](https://github.com/netdata/netdata/blob/master/src/health/health.d/consul.conf) | consul.service_health_check_status | service health check ${label:check_name} for service ${label:service_name} has failed on server ${label:node_name} datacenter ${label:datacenter} |\n| [ consul_client_rpc_requests_exceeded ](https://github.com/netdata/netdata/blob/master/src/health/health.d/consul.conf) | consul.client_rpc_requests_exceeded_rate | number of rate-limited RPC requests made by server ${label:node_name} datacenter ${label:datacenter} |\n| [ consul_client_rpc_requests_failed ](https://github.com/netdata/netdata/blob/master/src/health/health.d/consul.conf) | consul.client_rpc_requests_failed_rate | number of failed RPC requests made by server ${label:node_name} datacenter ${label:datacenter} |\n| [ consul_gc_pause_time ](https://github.com/netdata/netdata/blob/master/src/health/health.d/consul.conf) | consul.gc_pause_time | time spent in stop-the-world garbage collection pauses on server ${label:node_name} datacenter ${label:datacenter} |\n| [ consul_autopilot_health_status ](https://github.com/netdata/netdata/blob/master/src/health/health.d/consul.conf) | consul.autopilot_health_status | datacenter ${label:datacenter} cluster is unhealthy as reported by server ${label:node_name} |\n| [ consul_autopilot_server_health_status ](https://github.com/netdata/netdata/blob/master/src/health/health.d/consul.conf) | consul.autopilot_server_health_status | server ${label:node_name} from datacenter ${label:datacenter} is unhealthy |\n| [ consul_raft_leader_last_contact_time ](https://github.com/netdata/netdata/blob/master/src/health/health.d/consul.conf) | consul.raft_leader_last_contact_time | median time elapsed since leader server ${label:node_name} datacenter ${label:datacenter} was last able to contact the follower nodes |\n| [ consul_raft_leadership_transitions ](https://github.com/netdata/netdata/blob/master/src/health/health.d/consul.conf) | consul.raft_leadership_transitions_rate | there has been a leadership change and server ${label:node_name} datacenter ${label:datacenter} has become the leader |\n| [ consul_raft_thread_main_saturation ](https://github.com/netdata/netdata/blob/master/src/health/health.d/consul.conf) | consul.raft_thread_main_saturation_perc | average saturation of the main Raft goroutine on server ${label:node_name} datacenter ${label:datacenter} |\n| [ consul_raft_thread_fsm_saturation ](https://github.com/netdata/netdata/blob/master/src/health/health.d/consul.conf) | consul.raft_thread_fsm_saturation_perc | average saturation of the FSM Raft goroutine on server ${label:node_name} datacenter ${label:datacenter} |\n| [ consul_license_expiration_time ](https://github.com/netdata/netdata/blob/master/src/health/health.d/consul.conf) | consul.license_expiration_time | Consul Enterprise licence expiration time on node ${label:node_name} datacenter ${label:datacenter} |\n",
"metrics": "## Metrics\n\nMetrics grouped by *scope*.\n\nThe scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.\n\nThe set of metrics depends on the [Consul Agent mode](https://developer.hashicorp.com/consul/docs/install/glossary#agent).\n\n\n### Per Consul instance\n\nThese metrics refer to the entire monitored application.\n\nThis scope has no labels.\n\nMetrics:\n\n| Metric | Dimensions | Unit | Leader | Follower | Client |\n|:------|:----------|:----|:---:|:---:|:---:|\n| consul.client_rpc_requests_rate | rpc | requests/s | \u2022 | \u2022 | \u2022 |\n| consul.client_rpc_requests_exceeded_rate | exceeded | requests/s | \u2022 | \u2022 | \u2022 |\n| consul.client_rpc_requests_failed_rate | failed | requests/s | \u2022 | \u2022 | \u2022 |\n| consul.memory_allocated | allocated | bytes | \u2022 | \u2022 | \u2022 |\n| consul.memory_sys | sys | bytes | \u2022 | \u2022 | \u2022 |\n| consul.gc_pause_time | gc_pause | seconds | \u2022 | \u2022 | \u2022 |\n| consul.kvs_apply_time | quantile_0.5, quantile_0.9, quantile_0.99 | ms | \u2022 | \u2022 | |\n| consul.kvs_apply_operations_rate | kvs_apply | ops/s | \u2022 | \u2022 | |\n| consul.txn_apply_time | quantile_0.5, quantile_0.9, quantile_0.99 | ms | \u2022 | \u2022 | |\n| consul.txn_apply_operations_rate | txn_apply | ops/s | \u2022 | \u2022 | |\n| consul.autopilot_health_status | healthy, unhealthy | status | \u2022 | \u2022 | |\n| consul.autopilot_failure_tolerance | failure_tolerance | servers | \u2022 | \u2022 | |\n| consul.autopilot_server_health_status | healthy, unhealthy | status | \u2022 | \u2022 | |\n| consul.autopilot_server_stable_time | stable | seconds | \u2022 | \u2022 | |\n| consul.autopilot_server_serf_status | active, failed, left, none | status | \u2022 | \u2022 | |\n| consul.autopilot_server_voter_status | voter, not_voter | status | \u2022 | \u2022 | |\n| consul.network_lan_rtt | min, max, avg | ms | \u2022 | \u2022 | |\n| consul.raft_commit_time | quantile_0.5, quantile_0.9, quantile_0.99 | ms | \u2022 | | |\n| consul.raft_commits_rate | commits | commits/s | \u2022 | | |\n| consul.raft_leader_last_contact_time | quantile_0.5, quantile_0.9, quantile_0.99 | ms | \u2022 | | |\n| consul.raft_leader_oldest_log_age | oldest_log_age | seconds | \u2022 | | |\n| consul.raft_follower_last_contact_leader_time | leader_last_contact | ms | | \u2022 | |\n| consul.raft_rpc_install_snapshot_time | quantile_0.5, quantile_0.9, quantile_0.99 | ms | | \u2022 | |\n| consul.raft_leader_elections_rate | leader | elections/s | \u2022 | \u2022 | |\n| consul.raft_leadership_transitions_rate | leadership | transitions/s | \u2022 | \u2022 | |\n| consul.server_leadership_status | leader, not_leader | status | \u2022 | \u2022 | |\n| consul.raft_thread_main_saturation_perc | quantile_0.5, quantile_0.9, quantile_0.99 | percentage | \u2022 | \u2022 | |\n| consul.raft_thread_fsm_saturation_perc | quantile_0.5, quantile_0.9, quantile_0.99 | percentage | \u2022 | \u2022 | |\n| consul.raft_fsm_last_restore_duration | last_restore_duration | ms | \u2022 | \u2022 | |\n| consul.raft_boltdb_freelist_bytes | freelist | bytes | \u2022 | \u2022 | |\n| consul.raft_boltdb_logs_per_batch_rate | written | logs/s | \u2022 | \u2022 | |\n| consul.raft_boltdb_store_logs_time | quantile_0.5, quantile_0.9, quantile_0.99 | ms | \u2022 | \u2022 | |\n| consul.license_expiration_time | license_expiration | seconds | \u2022 | \u2022 | \u2022 |\n\n### Per node check\n\nMetrics about checks on Node level.\n\nLabels:\n\n| 
Label | Description |\n|:-----------|:----------------|\n| datacenter | Datacenter Identifier |\n| node_name | The node's name |\n| check_name | The check's name |\n\nMetrics:\n\n| Metric | Dimensions | Unit | Leader | Follower | Client |\n|:------|:----------|:----|:---:|:---:|:---:|\n| consul.node_health_check_status | passing, maintenance, warning, critical | status | \u2022 | \u2022 | \u2022 |\n\n### Per service check\n\nMetrics about checks at a Service level.\n\nLabels:\n\n| Label | Description |\n|:-----------|:----------------|\n| datacenter | Datacenter Identifier |\n| node_name | The node's name |\n| check_name | The check's name |\n| service_name | The service's name |\n\nMetrics:\n\n| Metric | Dimensions | Unit | Leader | Follower | Client |\n|:------|:----------|:----|:---:|:---:|:---:|\n| consul.service_health_check_status | passing, maintenance, warning, critical | status | \u2022 | \u2022 | \u2022 |\n\n",
@@ -6259,7 +6259,7 @@ export const integrations = [
"most_popular": true
},
"overview": "# PostgreSQL\n\nPlugin: go.d.plugin\nModule: postgres\n\n## Overview\n\nThis collector monitors the activity and performance of Postgres servers, collects replication statistics, metrics for each database, table and index, and more.\n\n\nIt establishes a connection to the Postgres instance via a TCP or UNIX socket.\nTo collect metrics for database tables and indexes, it establishes an additional connection for each discovered database.\n\n\nThis collector is supported on all platforms.\n\nThis collector supports collecting metrics from multiple instances of this integration, including remote instances.\n\n\n### Default Behavior\n\n#### Auto-Detection\n\nBy default, it detects instances running on localhost by trying to connect as root and netdata using known PostgreSQL TCP and UNIX sockets:\n\n- 127.0.0.1:5432\n- /var/run/postgresql/\n\n\n#### Limits\n\nTable and index metrics are not collected for databases with more than 50 tables or 250 indexes.\nThese limits can be changed in the configuration file.\n\n\n#### Performance Impact\n\nThe default configuration for this integration is not expected to impose a significant performance impact on the system.\n",
- "setup": "## Setup\n\n### Prerequisites\n\n#### Create netdata user\n\nCreate a user with granted `pg_monitor`\nor `pg_read_all_stat` [built-in role](https://www.postgresql.org/docs/current/predefined-roles.html).\n\nTo create the `netdata` user with these permissions, execute the following in the psql session, as a user with CREATEROLE privileges:\n\n```postgresql\nCREATE USER netdata;\nGRANT pg_monitor TO netdata;\n```\n\nAfter creating the new user, restart the Netdata agent with `sudo systemctl restart netdata`, or\nthe [appropriate method](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/start-stop-restart.md) for your\nsystem.\n\n\n\n### Configuration\n\n#### File\n\nThe configuration file name for this integration is `go.d/postgres.conf`.\n\n\nYou can edit the configuration file using the [`edit-config`](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the\nNetdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).\n\n```bash\ncd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata\nsudo ./edit-config go.d/postgres.conf\n```\n#### Options\n\nThe following options can be defined globally: update_every, autodetection_retry.\n\n\n{% details open=true summary=\"Config options\" %}\n| Name | Description | Default | Required |\n|:----|:-----------|:-------|:--------:|\n| update_every | Data collection frequency. | 5 | no |\n| autodetection_retry | Recheck interval in seconds. Zero means no recheck will be scheduled. | 0 | no |\n| dsn | Postgres server DSN (Data Source Name). See [DSN syntax](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING). | postgres://postgres:postgres@127.0.0.1:5432/postgres | yes |\n| timeout | Query timeout in seconds. | 2 | no |\n| collect_databases_matching | Databases selector. Determines which database metrics will be collected. Syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/src/go/pkg/matcher#simple-patterns-matcher). | | no |\n| max_db_tables | Maximum number of tables in the database. Table metrics will not be collected for databases that have more tables than max_db_tables. 0 means no limit. | 50 | no |\n| max_db_indexes | Maximum number of indexes in the database. Index metrics will not be collected for databases that have more indexes than max_db_indexes. 0 means no limit. | 250 | no |\n\n{% /details %}\n#### Examples\n\n##### TCP socket\n\nAn example configuration.\n\n```yaml\njobs:\n - name: local\n dsn: 'postgresql://netdata@127.0.0.1:5432/postgres'\n\n```\n##### Unix socket\n\nAn example configuration.\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: local\n dsn: 'host=/var/run/postgresql dbname=postgres user=netdata'\n\n```\n{% /details %}\n##### Multi-instance\n\n> **Note**: When you define multiple jobs, their names must be unique.\n\nLocal and remote instances.\n\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: local\n dsn: 'postgresql://netdata@127.0.0.1:5432/postgres'\n\n - name: remote\n dsn: 'postgresql://netdata@203.0.113.0:5432/postgres'\n\n```\n{% /details %}\n",
+ "setup": "## Setup\n\n### Prerequisites\n\n#### Create netdata user\n\nCreate a user with granted `pg_monitor`\nor `pg_read_all_stat` [built-in role](https://www.postgresql.org/docs/current/predefined-roles.html).\n\nTo create the `netdata` user with these permissions, execute the following in the psql session, as a user with CREATEROLE privileges:\n\n```postgresql\nCREATE USER netdata;\nGRANT pg_monitor TO netdata;\n```\n\nAfter creating the new user, restart the Netdata Agent with `sudo systemctl restart netdata`, or\nthe [appropriate method](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/start-stop-restart.md) for your\nsystem.\n\n\n\n### Configuration\n\n#### File\n\nThe configuration file name for this integration is `go.d/postgres.conf`.\n\n\nYou can edit the configuration file using the [`edit-config`](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the\nNetdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).\n\n```bash\ncd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata\nsudo ./edit-config go.d/postgres.conf\n```\n#### Options\n\nThe following options can be defined globally: update_every, autodetection_retry.\n\n\n{% details open=true summary=\"Config options\" %}\n| Name | Description | Default | Required |\n|:----|:-----------|:-------|:--------:|\n| update_every | Data collection frequency. | 5 | no |\n| autodetection_retry | Recheck interval in seconds. Zero means no recheck will be scheduled. | 0 | no |\n| dsn | Postgres server DSN (Data Source Name). See [DSN syntax](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING). | postgres://postgres:postgres@127.0.0.1:5432/postgres | yes |\n| timeout | Query timeout in seconds. | 2 | no |\n| collect_databases_matching | Databases selector. Determines which database metrics will be collected. Syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/src/go/pkg/matcher#simple-patterns-matcher). | | no |\n| max_db_tables | Maximum number of tables in the database. Table metrics will not be collected for databases that have more tables than max_db_tables. 0 means no limit. | 50 | no |\n| max_db_indexes | Maximum number of indexes in the database. Index metrics will not be collected for databases that have more indexes than max_db_indexes. 0 means no limit. | 250 | no |\n\n{% /details %}\n#### Examples\n\n##### TCP socket\n\nAn example configuration.\n\n```yaml\njobs:\n - name: local\n dsn: 'postgresql://netdata@127.0.0.1:5432/postgres'\n\n```\n##### Unix socket\n\nAn example configuration.\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: local\n dsn: 'host=/var/run/postgresql dbname=postgres user=netdata'\n\n```\n{% /details %}\n##### Multi-instance\n\n> **Note**: When you define multiple jobs, their names must be unique.\n\nLocal and remote instances.\n\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: local\n dsn: 'postgresql://netdata@127.0.0.1:5432/postgres'\n\n - name: remote\n dsn: 'postgresql://netdata@203.0.113.0:5432/postgres'\n\n```\n{% /details %}\n",
"troubleshooting": "## Troubleshooting\n\n### Debug Mode\n\n**Important**: Debug mode is not supported for data collection jobs created via the UI using the Dyncfg feature.\n\nTo troubleshoot issues with the `postgres` collector, run the `go.d.plugin` with the debug option enabled. The output\nshould give you clues as to why the collector isn't working.\n\n- Navigate to the `plugins.d` directory, usually at `/usr/libexec/netdata/plugins.d/`. If that's not the case on\n your system, open `netdata.conf` and look for the `plugins` setting under `[directories]`.\n\n ```bash\n cd /usr/libexec/netdata/plugins.d/\n ```\n\n- Switch to the `netdata` user.\n\n ```bash\n sudo -u netdata -s\n ```\n\n- Run the `go.d.plugin` to debug the collector:\n\n ```bash\n ./go.d.plugin -d -m postgres\n ```\n\n### Getting Logs\n\nIf you're encountering problems with the `postgres` collector, follow these steps to retrieve logs and identify potential issues:\n\n- **Run the command** specific to your system (systemd, non-systemd, or Docker container).\n- **Examine the output** for any warnings or error messages that might indicate issues. These messages should provide clues about the root cause of the problem.\n\n#### System with systemd\n\nUse the following command to view logs generated since the last Netdata service restart:\n\n```bash\njournalctl _SYSTEMD_INVOCATION_ID=\"$(systemctl show --value --property=InvocationID netdata)\" --namespace=netdata --grep postgres\n```\n\n#### System without systemd\n\nLocate the collector log file, typically at `/var/log/netdata/collector.log`, and use `grep` to filter for collector's name:\n\n```bash\ngrep postgres /var/log/netdata/collector.log\n```\n\n**Note**: This method shows logs from all restarts. Focus on the **latest entries** for troubleshooting current issues.\n\n#### Docker Container\n\nIf your Netdata runs in a Docker container named \"netdata\" (replace if different), use this command:\n\n```bash\ndocker logs netdata 2>&1 | grep postgres\n```\n\n",
"alerts": "## Alerts\n\n\nThe following alerts are available:\n\n| Alert name | On metric | Description |\n|:------------|:----------|:------------|\n| [ postgres_total_connection_utilization ](https://github.com/netdata/netdata/blob/master/src/health/health.d/postgres.conf) | postgres.connections_utilization | average total connection utilization over the last minute |\n| [ postgres_acquired_locks_utilization ](https://github.com/netdata/netdata/blob/master/src/health/health.d/postgres.conf) | postgres.locks_utilization | average acquired locks utilization over the last minute |\n| [ postgres_txid_exhaustion_perc ](https://github.com/netdata/netdata/blob/master/src/health/health.d/postgres.conf) | postgres.txid_exhaustion_perc | percent towards TXID wraparound |\n| [ postgres_db_cache_io_ratio ](https://github.com/netdata/netdata/blob/master/src/health/health.d/postgres.conf) | postgres.db_cache_io_ratio | average cache hit ratio in db ${label:database} over the last minute |\n| [ postgres_db_transactions_rollback_ratio ](https://github.com/netdata/netdata/blob/master/src/health/health.d/postgres.conf) | postgres.db_cache_io_ratio | average aborted transactions percentage in db ${label:database} over the last five minutes |\n| [ postgres_db_deadlocks_rate ](https://github.com/netdata/netdata/blob/master/src/health/health.d/postgres.conf) | postgres.db_deadlocks_rate | number of deadlocks detected in db ${label:database} in the last minute |\n| [ postgres_table_cache_io_ratio ](https://github.com/netdata/netdata/blob/master/src/health/health.d/postgres.conf) | postgres.table_cache_io_ratio | average cache hit ratio in db ${label:database} table ${label:table} over the last minute |\n| [ postgres_table_index_cache_io_ratio ](https://github.com/netdata/netdata/blob/master/src/health/health.d/postgres.conf) | postgres.table_index_cache_io_ratio | average index cache hit ratio in db ${label:database} table ${label:table} over the last minute |\n| [ postgres_table_toast_cache_io_ratio ](https://github.com/netdata/netdata/blob/master/src/health/health.d/postgres.conf) | postgres.table_toast_cache_io_ratio | average TOAST hit ratio in db ${label:database} table ${label:table} over the last minute |\n| [ postgres_table_toast_index_cache_io_ratio ](https://github.com/netdata/netdata/blob/master/src/health/health.d/postgres.conf) | postgres.table_toast_index_cache_io_ratio | average index TOAST hit ratio in db ${label:database} table ${label:table} over the last minute |\n| [ postgres_table_bloat_size_perc ](https://github.com/netdata/netdata/blob/master/src/health/health.d/postgres.conf) | postgres.table_bloat_size_perc | bloat size percentage in db ${label:database} table ${label:table} |\n| [ postgres_table_last_autovacuum_time ](https://github.com/netdata/netdata/blob/master/src/health/health.d/postgres.conf) | postgres.table_autovacuum_since_time | time elapsed since db ${label:database} table ${label:table} was vacuumed by the autovacuum daemon |\n| [ postgres_table_last_autoanalyze_time ](https://github.com/netdata/netdata/blob/master/src/health/health.d/postgres.conf) | postgres.table_autoanalyze_since_time | time elapsed since db ${label:database} table ${label:table} was analyzed by the autovacuum daemon |\n| [ postgres_index_bloat_size_perc ](https://github.com/netdata/netdata/blob/master/src/health/health.d/postgres.conf) | postgres.index_bloat_size_perc | bloat size percentage in db ${label:database} table ${label:table} index ${label:index} |\n",
"metrics": "## Metrics\n\nMetrics grouped by *scope*.\n\nThe scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.\n\n\n\n### Per PostgreSQL instance\n\nThese metrics refer to the entire monitored application.\n\nThis scope has no labels.\n\nMetrics:\n\n| Metric | Dimensions | Unit |\n|:------|:----------|:----|\n| postgres.connections_utilization | used | percentage |\n| postgres.connections_usage | available, used | connections |\n| postgres.connections_state_count | active, idle, idle_in_transaction, idle_in_transaction_aborted, disabled | connections |\n| postgres.transactions_duration | a dimension per bucket | transactions/s |\n| postgres.queries_duration | a dimension per bucket | queries/s |\n| postgres.locks_utilization | used | percentage |\n| postgres.checkpoints_rate | scheduled, requested | checkpoints/s |\n| postgres.checkpoints_time | write, sync | milliseconds |\n| postgres.bgwriter_halts_rate | maxwritten | events/s |\n| postgres.buffers_io_rate | checkpoint, backend, bgwriter | B/s |\n| postgres.buffers_backend_fsync_rate | fsync | calls/s |\n| postgres.buffers_allocated_rate | allocated | B/s |\n| postgres.wal_io_rate | write | B/s |\n| postgres.wal_files_count | written, recycled | files |\n| postgres.wal_archiving_files_count | ready, done | files/s |\n| postgres.autovacuum_workers_count | analyze, vacuum_analyze, vacuum, vacuum_freeze, brin_summarize | workers |\n| postgres.txid_exhaustion_towards_autovacuum_perc | emergency_autovacuum | percentage |\n| postgres.txid_exhaustion_perc | txid_exhaustion | percentage |\n| postgres.txid_exhaustion_oldest_txid_num | xid | xid |\n| postgres.catalog_relations_count | ordinary_table, index, sequence, toast_table, view, materialized_view, composite_type, foreign_table, partitioned_table, partitioned_index | relations |\n| postgres.catalog_relations_size | ordinary_table, index, sequence, toast_table, view, materialized_view, composite_type, foreign_table, partitioned_table, partitioned_index | B |\n| postgres.uptime | uptime | seconds |\n| postgres.databases_count | databases | databases |\n\n### Per repl application\n\nThese metrics refer to the replication application.\n\nLabels:\n\n| Label | Description |\n|:-----------|:----------------|\n| application | application name |\n\nMetrics:\n\n| Metric | Dimensions | Unit |\n|:------|:----------|:----|\n| postgres.replication_app_wal_lag_size | sent_lag, write_lag, flush_lag, replay_lag | B |\n| postgres.replication_app_wal_lag_time | write_lag, flush_lag, replay_lag | seconds |\n\n### Per repl slot\n\nThese metrics refer to the replication slot.\n\nLabels:\n\n| Label | Description |\n|:-----------|:----------------|\n| slot | replication slot name |\n\nMetrics:\n\n| Metric | Dimensions | Unit |\n|:------|:----------|:----|\n| postgres.replication_slot_files_count | wal_keep, pg_replslot_files | files |\n\n### Per database\n\nThese metrics refer to the database.\n\nLabels:\n\n| Label | Description |\n|:-----------|:----------------|\n| database | database name |\n\nMetrics:\n\n| Metric | Dimensions | Unit |\n|:------|:----------|:----|\n| postgres.db_transactions_ratio | committed, rollback | percentage |\n| postgres.db_transactions_rate | committed, rollback | transactions/s |\n| postgres.db_connections_utilization | used | percentage |\n| postgres.db_connections_count | connections | connections |\n| postgres.db_cache_io_ratio | miss | percentage |\n| postgres.db_io_rate | memory, disk | B/s |\n| 
postgres.db_ops_fetched_rows_ratio | fetched | percentage |\n| postgres.db_ops_read_rows_rate | returned, fetched | rows/s |\n| postgres.db_ops_write_rows_rate | inserted, deleted, updated | rows/s |\n| postgres.db_conflicts_rate | conflicts | queries/s |\n| postgres.db_conflicts_reason_rate | tablespace, lock, snapshot, bufferpin, deadlock | queries/s |\n| postgres.db_deadlocks_rate | deadlocks | deadlocks/s |\n| postgres.db_locks_held_count | access_share, row_share, row_exclusive, share_update, share, share_row_exclusive, exclusive, access_exclusive | locks |\n| postgres.db_locks_awaited_count | access_share, row_share, row_exclusive, share_update, share, share_row_exclusive, exclusive, access_exclusive | locks |\n| postgres.db_temp_files_created_rate | created | files/s |\n| postgres.db_temp_files_io_rate | written | B/s |\n| postgres.db_size | size | B |\n\n### Per table\n\nThese metrics refer to the database table.\n\nLabels:\n\n| Label | Description |\n|:-----------|:----------------|\n| database | database name |\n| schema | schema name |\n| table | table name |\n| parent_table | parent table name |\n\nMetrics:\n\n| Metric | Dimensions | Unit |\n|:------|:----------|:----|\n| postgres.table_rows_dead_ratio | dead | percentage |\n| postgres.table_rows_count | live, dead | rows |\n| postgres.table_ops_rows_rate | inserted, deleted, updated | rows/s |\n| postgres.table_ops_rows_hot_ratio | hot | percentage |\n| postgres.table_ops_rows_hot_rate | hot | rows/s |\n| postgres.table_cache_io_ratio | miss | percentage |\n| postgres.table_io_rate | memory, disk | B/s |\n| postgres.table_index_cache_io_ratio | miss | percentage |\n| postgres.table_index_io_rate | memory, disk | B/s |\n| postgres.table_toast_cache_io_ratio | miss | percentage |\n| postgres.table_toast_io_rate | memory, disk | B/s |\n| postgres.table_toast_index_cache_io_ratio | miss | percentage |\n| postgres.table_toast_index_io_rate | memory, disk | B/s |\n| postgres.table_scans_rate | index, sequential | scans/s |\n| postgres.table_scans_rows_rate | index, sequential | rows/s |\n| postgres.table_autovacuum_since_time | time | seconds |\n| postgres.table_vacuum_since_time | time | seconds |\n| postgres.table_autoanalyze_since_time | time | seconds |\n| postgres.table_analyze_since_time | time | seconds |\n| postgres.table_null_columns | null | columns |\n| postgres.table_size | size | B |\n| postgres.table_bloat_size_perc | bloat | percentage |\n| postgres.table_bloat_size | bloat | B |\n\n### Per index\n\nThese metrics refer to the table index.\n\nLabels:\n\n| Label | Description |\n|:-----------|:----------------|\n| database | database name |\n| schema | schema name |\n| table | table name |\n| parent_table | parent table name |\n| index | index name |\n\nMetrics:\n\n| Metric | Dimensions | Unit |\n|:------|:----------|:----|\n| postgres.index_size | size | B |\n| postgres.index_bloat_size_perc | bloat | percentage |\n| postgres.index_bloat_size | bloat | B |\n| postgres.index_usage_status | used, unused | status |\n\n",
@@ -17046,8 +17046,8 @@ export const integrations = [
],
"most_popular": false
},
- "overview": "# Tomcat\n\nPlugin: go.d.plugin\nModule: tomcat\n\n## Overview\n\nThis collector monitors Tomcat metrics about bandwidth, processing time, threads and more.\n\n\nIt parses the information provided by the [Server Status](https://tomcat.apache.org/tomcat-10.0-doc/manager-howto.html#Server_Status) HTTP endpoint.\n\n\nThis collector is supported on all platforms.\n\nThis collector supports collecting metrics from multiple instances of this integration, including remote instances.\n\nBy default, this Tomcat collector cannot access the server's status page. To enable data collection, you will need to configure access credentials with appropriate permissions.\n\n\n### Default Behavior\n\n#### Auto-Detection\n\nIf the Netdata agent and Tomcat are on the same host, the collector will attempt to connect to the Tomcat server's status page at `http://localhost:8080/manager/status?XML=true`.\n\n\n#### Limits\n\nThe default configuration for this integration does not impose any limits on data collection.\n\n#### Performance Impact\n\nThe default configuration for this integration is not expected to impose a significant performance impact on the system.\n",
- "setup": "## Setup\n\n### Prerequisites\n\n#### Access to Tomcat Status Endpoint\n\nThe Netdata agent needs read-only access to its status endpoint to collect data from the Tomcat server.\n\nYou can achieve this by creating a dedicated user named `netdata` with read-only permissions specifically for accessing the [Server Status](https://tomcat.apache.org/tomcat-10.0-doc/manager-howto.html#Server_Status) endpoint.\n\nOnce you've created the `netdata` user, you'll need to configure the username and password in the collector configuration file.\n\n\n\n### Configuration\n\n#### File\n\nThe configuration file name for this integration is `go.d/tomcat.conf`.\n\n\nYou can edit the configuration file using the [`edit-config`](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the\nNetdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).\n\n```bash\ncd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata\nsudo ./edit-config go.d/tomcat.conf\n```\n#### Options\n\nThe following options can be defined globally: update_every, autodetection_retry.\n\n\n{% details open=true summary=\"Config options\" %}\n| Name | Description | Default | Required |\n|:----|:-----------|:-------|:--------:|\n| update_every | Data collection frequency. | 1 | no |\n| autodetection_retry | Recheck interval in seconds. Zero means no recheck will be scheduled. | 0 | no |\n| url | Server URL. | http://127.0.0.1:8080 | yes |\n| timeout | HTTP request timeout. | 1 | no |\n| username | Username for basic HTTP authentication. | | no |\n| password | Password for basic HTTP authentication. | | no |\n| proxy_url | Proxy URL. | | no |\n| proxy_username | Username for proxy basic HTTP authentication. | | no |\n| proxy_password | Password for proxy basic HTTP authentication. | | no |\n| method | HTTP request method. | POST | no |\n| body | HTTP request body. | | no |\n| headers | HTTP request headers. | | no |\n| not_follow_redirects | Redirect handling policy. Controls whether the client follows redirects. | no | no |\n| tls_skip_verify | Server certificate chain and hostname validation policy. Controls whether the client performs this check. | no | no |\n| tls_ca | Certification authority that the client uses when verifying the server's certificates. | | no |\n| tls_cert | Client TLS certificate. | | no |\n| tls_key | Client TLS key. | | no |\n\n{% /details %}\n#### Examples\n\n##### Basic\n\nA basic example configuration.\n\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1:8080\n username: John\n password: Doe\n\n```\n##### Multi-instance\n\n> **Note**: When you define multiple jobs, their names must be unique.\n\nCollecting metrics from local and remote instances.\n\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1:8080\n username: admin1\n password: hackme1\n\n - name: remote\n url: http://192.0.2.1:8080\n username: admin2\n password: hackme2\n\n```\n{% /details %}\n",
+ "overview": "# Tomcat\n\nPlugin: go.d.plugin\nModule: tomcat\n\n## Overview\n\nThis collector monitors Tomcat metrics about bandwidth, processing time, threads and more.\n\n\nIt parses the information provided by the [Server Status](https://tomcat.apache.org/tomcat-10.0-doc/manager-howto.html#Server_Status) HTTP endpoint.\n\n\nThis collector is supported on all platforms.\n\nThis collector supports collecting metrics from multiple instances of this integration, including remote instances.\n\nBy default, this Tomcat collector cannot access the server's status page. To enable data collection, you will need to configure access credentials with appropriate permissions.\n\n\n### Default Behavior\n\n#### Auto-Detection\n\nIf the Netdata Agent and Tomcat are on the same host, the collector will attempt to connect to the Tomcat server's status page at `http://localhost:8080/manager/status?XML=true`.\n\n\n#### Limits\n\nThe default configuration for this integration does not impose any limits on data collection.\n\n#### Performance Impact\n\nThe default configuration for this integration is not expected to impose a significant performance impact on the system.\n",
+ "setup": "## Setup\n\n### Prerequisites\n\n#### Access to Tomcat Status Endpoint\n\nThe Netdata Agent needs read-only access to its status endpoint to collect data from the Tomcat server.\n\nYou can achieve this by creating a dedicated user named `netdata` with read-only permissions specifically for accessing the [Server Status](https://tomcat.apache.org/tomcat-10.0-doc/manager-howto.html#Server_Status) endpoint.\n\nOnce you've created the `netdata` user, you'll need to configure the username and password in the collector configuration file.\n\n\n\n### Configuration\n\n#### File\n\nThe configuration file name for this integration is `go.d/tomcat.conf`.\n\n\nYou can edit the configuration file using the [`edit-config`](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the\nNetdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).\n\n```bash\ncd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata\nsudo ./edit-config go.d/tomcat.conf\n```\n#### Options\n\nThe following options can be defined globally: update_every, autodetection_retry.\n\n\n{% details open=true summary=\"Config options\" %}\n| Name | Description | Default | Required |\n|:----|:-----------|:-------|:--------:|\n| update_every | Data collection frequency. | 1 | no |\n| autodetection_retry | Recheck interval in seconds. Zero means no recheck will be scheduled. | 0 | no |\n| url | Server URL. | http://127.0.0.1:8080 | yes |\n| timeout | HTTP request timeout. | 1 | no |\n| username | Username for basic HTTP authentication. | | no |\n| password | Password for basic HTTP authentication. | | no |\n| proxy_url | Proxy URL. | | no |\n| proxy_username | Username for proxy basic HTTP authentication. | | no |\n| proxy_password | Password for proxy basic HTTP authentication. | | no |\n| method | HTTP request method. | POST | no |\n| body | HTTP request body. | | no |\n| headers | HTTP request headers. | | no |\n| not_follow_redirects | Redirect handling policy. Controls whether the client follows redirects. | no | no |\n| tls_skip_verify | Server certificate chain and hostname validation policy. Controls whether the client performs this check. | no | no |\n| tls_ca | Certification authority that the client uses when verifying the server's certificates. | | no |\n| tls_cert | Client TLS certificate. | | no |\n| tls_key | Client TLS key. | | no |\n\n{% /details %}\n#### Examples\n\n##### Basic\n\nA basic example configuration.\n\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1:8080\n username: John\n password: Doe\n\n```\n##### Multi-instance\n\n> **Note**: When you define multiple jobs, their names must be unique.\n\nCollecting metrics from local and remote instances.\n\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1:8080\n username: admin1\n password: hackme1\n\n - name: remote\n url: http://192.0.2.1:8080\n username: admin2\n password: hackme2\n\n```\n{% /details %}\n",
"troubleshooting": "## Troubleshooting\n\n### Debug Mode\n\n**Important**: Debug mode is not supported for data collection jobs created via the UI using the Dyncfg feature.\n\nTo troubleshoot issues with the `tomcat` collector, run the `go.d.plugin` with the debug option enabled. The output\nshould give you clues as to why the collector isn't working.\n\n- Navigate to the `plugins.d` directory, usually at `/usr/libexec/netdata/plugins.d/`. If that's not the case on\n your system, open `netdata.conf` and look for the `plugins` setting under `[directories]`.\n\n ```bash\n cd /usr/libexec/netdata/plugins.d/\n ```\n\n- Switch to the `netdata` user.\n\n ```bash\n sudo -u netdata -s\n ```\n\n- Run the `go.d.plugin` to debug the collector:\n\n ```bash\n ./go.d.plugin -d -m tomcat\n ```\n\n### Getting Logs\n\nIf you're encountering problems with the `tomcat` collector, follow these steps to retrieve logs and identify potential issues:\n\n- **Run the command** specific to your system (systemd, non-systemd, or Docker container).\n- **Examine the output** for any warnings or error messages that might indicate issues. These messages should provide clues about the root cause of the problem.\n\n#### System with systemd\n\nUse the following command to view logs generated since the last Netdata service restart:\n\n```bash\njournalctl _SYSTEMD_INVOCATION_ID=\"$(systemctl show --value --property=InvocationID netdata)\" --namespace=netdata --grep tomcat\n```\n\n#### System without systemd\n\nLocate the collector log file, typically at `/var/log/netdata/collector.log`, and use `grep` to filter for collector's name:\n\n```bash\ngrep tomcat /var/log/netdata/collector.log\n```\n\n**Note**: This method shows logs from all restarts. Focus on the **latest entries** for troubleshooting current issues.\n\n#### Docker Container\n\nIf your Netdata runs in a Docker container named \"netdata\" (replace if different), use this command:\n\n```bash\ndocker logs netdata 2>&1 | grep tomcat\n```\n\n",
"alerts": "## Alerts\n\nThere are no alerts configured by default for this integration.\n",
"metrics": "## Metrics\n\nMetrics grouped by *scope*.\n\nThe scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.\n\n\n\n### Per Tomcat instance\n\nThese metrics refer to the entire monitored application.\n\nThis scope has no labels.\n\nMetrics:\n\n| Metric | Dimensions | Unit |\n|:------|:----------|:----|\n| tomcat.jvm_memory_usage | free, used | bytes |\n\n### Per jvm memory pool\n\nThese metrics refer to the JVM memory pool.\n\nLabels:\n\n| Label | Description |\n|:-----------|:----------------|\n| mempool_name | Memory Pool name. |\n| mempool_type | Memory Pool type. |\n\nMetrics:\n\n| Metric | Dimensions | Unit |\n|:------|:----------|:----|\n| tomcat.jvm_mem_pool_memory_usage | commited, used, max | bytes |\n\n### Per connector\n\nThese metrics refer to the connector.\n\nLabels:\n\n| Label | Description |\n|:-----------|:----------------|\n| connector_name | Connector name. |\n\nMetrics:\n\n| Metric | Dimensions | Unit |\n|:------|:----------|:----|\n| tomcat.connector_requests | requests | requests/s |\n| tomcat.connector_bandwidth | received, sent | bytes/s |\n| tomcat.connector_requests_processing_time | processing_time | milliseconds |\n| tomcat.connector_errors | errors | errors/s |\n| tomcat.connector_request_threads | idle, busy | threads |\n\n",
@@ -17578,7 +17578,7 @@ export const integrations = [
}
}
},
- "overview": "# Active Directory\n\nPlugin: go.d.plugin\nModule: windows\n\n## Overview\n\n**Deprecation Notice**: This collector is no longer the recommended method for Windows monitoring and will be removed in a future release.\n\nThe official Netdata Agent for Windows now provides a robust and user-friendly solution for real-time system and application performance monitoring. By installing Netdata on your Windows host, you'll gain access to a wide range of metrics and visualizations without the need for additional collectors or complex configurations.\n\nTo get started with Netdata on Windows, see the [Netdata Windows Installer](https://github.com/netdata/netdata/blob/master/packaging/windows/WINDOWS_INSTALLER.md).\n\n---\n\nThis collector monitors the performance of Windows machines, collects both host metrics and metrics from various Windows applications (e.g. Active Directory, MSSQL).\n\n\nIt collect metrics by periodically sending HTTP requests to [Prometheus exporter for Windows machines](https://github.com/prometheus-community/windows_exporter), a native Windows agent running on each host.\n\n\nThis collector is supported on all platforms.\n\nThis collector supports collecting metrics from multiple instances of this integration, including remote instances.\n\n\n### Default Behavior\n\n#### Auto-Detection\n\nThis integration doesn't support auto-detection.\n\n#### Limits\n\nThe default configuration for this integration does not impose any limits on data collection.\n\n#### Performance Impact\n\nData collection affects the CPU usage of the Windows host. CPU usage depends on the frequency of data collection and the [enabled collectors](https://github.com/prometheus-community/windows_exporter#collectors).\n\n",
+ "overview": "# Active Directory\n\nPlugin: go.d.plugin\nModule: windows\n\n## Overview\n\n**Deprecation Notice**: This collector is no longer the recommended method for Windows monitoring and will be removed in a future release.\n\nThe official Netdata Agent for Windows now provides a robust and user-friendly solution for real-time system and application performance monitoring. By installing Netdata on your Windows host, you'll gain access to a wide range of metrics and visualizations without the need for additional collectors or complex configurations.\n\nTo get started with Netdata on Windows, see the [Netdata Windows Installer](https://github.com/netdata/netdata/blob/master/packaging/windows/WINDOWS_INSTALLER.md).\n\n---\n\nThis collector monitors the performance of Windows machines, collects both host metrics and metrics from various Windows applications (e.g. Active Directory, MSSQL).\n\n\nIt collect metrics by periodically sending HTTP requests to [Prometheus exporter for Windows machines](https://github.com/prometheus-community/windows_exporter), a native Windows Agent running on each host.\n\n\nThis collector is supported on all platforms.\n\nThis collector supports collecting metrics from multiple instances of this integration, including remote instances.\n\n\n### Default Behavior\n\n#### Auto-Detection\n\nThis integration doesn't support auto-detection.\n\n#### Limits\n\nThe default configuration for this integration does not impose any limits on data collection.\n\n#### Performance Impact\n\nData collection affects the CPU usage of the Windows host. CPU usage depends on the frequency of data collection and the [enabled collectors](https://github.com/prometheus-community/windows_exporter#collectors).\n\n",
"setup": "## Setup\n\n### Prerequisites\n\n#### Install Windows exporter\n\nTo install the Windows exporter, follow the [official installation guide](https://github.com/prometheus-community/windows_exporter#installation).\n\n\n\n### Configuration\n\n#### File\n\nThe configuration file name for this integration is `go.d/windows.conf`.\n\n\nYou can edit the configuration file using the [`edit-config`](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the\nNetdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).\n\n```bash\ncd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata\nsudo ./edit-config go.d/windows.conf\n```\n#### Options\n\nThe following options can be defined globally: update_every, autodetection_retry.\n\n\n{% details open=true summary=\"Config options\" %}\n| Name | Description | Default | Required |\n|:----|:-----------|:-------|:--------:|\n| update_every | Data collection frequency. | 1 | no |\n| autodetection_retry | Recheck interval in seconds. Zero means no recheck will be scheduled. | 0 | no |\n| url | Server URL. | | yes |\n| timeout | HTTP request timeout. | 1 | no |\n| username | Username for basic HTTP authentication. | | no |\n| password | Password for basic HTTP authentication. | | no |\n| proxy_url | Proxy URL. | | no |\n| proxy_username | Username for proxy basic HTTP authentication. | | no |\n| proxy_password | Password for proxy basic HTTP authentication. | | no |\n| method | HTTP request method. | GET | no |\n| body | HTTP request body. | | no |\n| headers | HTTP request headers. | | no |\n| not_follow_redirects | Redirect handling policy. Controls whether the client follows redirects. | no | no |\n| tls_skip_verify | Server certificate chain and hostname validation policy. Controls whether the client performs this check. | no | no |\n| tls_ca | Certification authority that the client uses when verifying the server's certificates. | | no |\n| tls_cert | Client TLS certificate. | | no |\n| tls_key | Client TLS key. 
| | no |\n\n{% /details %}\n#### Examples\n\n##### Basic\n\nA basic example configuration.\n\n```yaml\njobs:\n - name: win_server\n url: http://192.0.2.1:9182/metrics\n\n```\n##### HTTP authentication\n\nBasic HTTP authentication.\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: win_server\n url: http://192.0.2.1:9182/metrics\n username: username\n password: password\n\n```\n{% /details %}\n##### HTTPS with self-signed certificate\n\nDo not validate server certificate chain and hostname.\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: win_server\n url: https://192.0.2.1:9182/metrics\n tls_skip_verify: yes\n\n```\n{% /details %}\n##### Virtual Node\n\nThe Virtual Node functionality allows you to define nodes in configuration files and treat them as ordinary nodes in all interfaces, panels, tabs, filters, etc.\nYou can create a virtual node for all your Windows machines and control them as separate entities.\n\nTo make your Windows server a virtual node, you need to define virtual nodes in `/etc/netdata/vnodes/vnodes.conf`:\n\n> **Note**: To create a valid guid, you can use the `uuidgen` command on Linux, or the `[guid]::NewGuid()` command in PowerShell on Windows.\n\n```yaml\n# /etc/netdata/vnodes/vnodes.conf\n- hostname: win_server\n guid: \n```\n\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: win_server\n vnode: win_server\n url: http://192.0.2.1:9182/metrics\n\n```\n{% /details %}\n##### Multi-instance\n\n> **Note**: When you define multiple jobs, their names must be unique.\n\nCollecting metrics from multiple remote instances.\n\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: win_server1\n url: http://192.0.2.1:9182/metrics\n\n - name: win_server2\n url: http://192.0.2.2:9182/metrics\n\n```\n{% /details %}\n",
"troubleshooting": "## Troubleshooting\n\n### Debug Mode\n\n**Important**: Debug mode is not supported for data collection jobs created via the UI using the Dyncfg feature.\n\nTo troubleshoot issues with the `windows` collector, run the `go.d.plugin` with the debug option enabled. The output\nshould give you clues as to why the collector isn't working.\n\n- Navigate to the `plugins.d` directory, usually at `/usr/libexec/netdata/plugins.d/`. If that's not the case on\n your system, open `netdata.conf` and look for the `plugins` setting under `[directories]`.\n\n ```bash\n cd /usr/libexec/netdata/plugins.d/\n ```\n\n- Switch to the `netdata` user.\n\n ```bash\n sudo -u netdata -s\n ```\n\n- Run the `go.d.plugin` to debug the collector:\n\n ```bash\n ./go.d.plugin -d -m windows\n ```\n\n### Getting Logs\n\nIf you're encountering problems with the `windows` collector, follow these steps to retrieve logs and identify potential issues:\n\n- **Run the command** specific to your system (systemd, non-systemd, or Docker container).\n- **Examine the output** for any warnings or error messages that might indicate issues. These messages should provide clues about the root cause of the problem.\n\n#### System with systemd\n\nUse the following command to view logs generated since the last Netdata service restart:\n\n```bash\njournalctl _SYSTEMD_INVOCATION_ID=\"$(systemctl show --value --property=InvocationID netdata)\" --namespace=netdata --grep windows\n```\n\n#### System without systemd\n\nLocate the collector log file, typically at `/var/log/netdata/collector.log`, and use `grep` to filter for collector's name:\n\n```bash\ngrep windows /var/log/netdata/collector.log\n```\n\n**Note**: This method shows logs from all restarts. Focus on the **latest entries** for troubleshooting current issues.\n\n#### Docker Container\n\nIf your Netdata runs in a Docker container named \"netdata\" (replace if different), use this command:\n\n```bash\ndocker logs netdata 2>&1 | grep windows\n```\n\n",
"alerts": "## Alerts\n\n\nThe following alerts are available:\n\n| Alert name | On metric | Description |\n|:------------|:----------|:------------|\n| [ windows_10min_cpu_usage ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.cpu_utilization_total | average CPU utilization over the last 10 minutes |\n| [ windows_ram_in_use ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.memory_utilization | memory utilization |\n| [ windows_inbound_packets_discarded ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_discarded | number of inbound discarded packets for the network interface in the last 10 minutes |\n| [ windows_outbound_packets_discarded ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_discarded | number of outbound discarded packets for the network interface in the last 10 minutes |\n| [ windows_inbound_packets_errors ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_errors | number of inbound errors for the network interface in the last 10 minutes |\n| [ windows_outbound_packets_errors ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_errors | number of outbound errors for the network interface in the last 10 minutes |\n| [ windows_disk_in_use ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.logical_disk_space_usage | disk space utilization |\n",
@@ -17618,7 +17618,7 @@ export const integrations = [
}
}
},
- "overview": "# HyperV\n\nPlugin: go.d.plugin\nModule: windows\n\n## Overview\n\n**Deprecation Notice**: This collector is no longer the recommended method for Windows monitoring and will be removed in a future release.\n\nThe official Netdata Agent for Windows now provides a robust and user-friendly solution for real-time system and application performance monitoring. By installing Netdata on your Windows host, you'll gain access to a wide range of metrics and visualizations without the need for additional collectors or complex configurations.\n\nTo get started with Netdata on Windows, see the [Netdata Windows Installer](https://github.com/netdata/netdata/blob/master/packaging/windows/WINDOWS_INSTALLER.md).\n\n---\n\nThis collector monitors the performance of Windows machines, collects both host metrics and metrics from various Windows applications (e.g. Active Directory, MSSQL).\n\n\nIt collect metrics by periodically sending HTTP requests to [Prometheus exporter for Windows machines](https://github.com/prometheus-community/windows_exporter), a native Windows agent running on each host.\n\n\nThis collector is supported on all platforms.\n\nThis collector supports collecting metrics from multiple instances of this integration, including remote instances.\n\n\n### Default Behavior\n\n#### Auto-Detection\n\nThis integration doesn't support auto-detection.\n\n#### Limits\n\nThe default configuration for this integration does not impose any limits on data collection.\n\n#### Performance Impact\n\nData collection affects the CPU usage of the Windows host. CPU usage depends on the frequency of data collection and the [enabled collectors](https://github.com/prometheus-community/windows_exporter#collectors).\n\n",
+ "overview": "# HyperV\n\nPlugin: go.d.plugin\nModule: windows\n\n## Overview\n\n**Deprecation Notice**: This collector is no longer the recommended method for Windows monitoring and will be removed in a future release.\n\nThe official Netdata Agent for Windows now provides a robust and user-friendly solution for real-time system and application performance monitoring. By installing Netdata on your Windows host, you'll gain access to a wide range of metrics and visualizations without the need for additional collectors or complex configurations.\n\nTo get started with Netdata on Windows, see the [Netdata Windows Installer](https://github.com/netdata/netdata/blob/master/packaging/windows/WINDOWS_INSTALLER.md).\n\n---\n\nThis collector monitors the performance of Windows machines, collects both host metrics and metrics from various Windows applications (e.g. Active Directory, MSSQL).\n\n\nIt collect metrics by periodically sending HTTP requests to [Prometheus exporter for Windows machines](https://github.com/prometheus-community/windows_exporter), a native Windows Agent running on each host.\n\n\nThis collector is supported on all platforms.\n\nThis collector supports collecting metrics from multiple instances of this integration, including remote instances.\n\n\n### Default Behavior\n\n#### Auto-Detection\n\nThis integration doesn't support auto-detection.\n\n#### Limits\n\nThe default configuration for this integration does not impose any limits on data collection.\n\n#### Performance Impact\n\nData collection affects the CPU usage of the Windows host. CPU usage depends on the frequency of data collection and the [enabled collectors](https://github.com/prometheus-community/windows_exporter#collectors).\n\n",
"setup": "## Setup\n\n### Prerequisites\n\n#### Install Windows exporter\n\nTo install the Windows exporter, follow the [official installation guide](https://github.com/prometheus-community/windows_exporter#installation).\n\n\n\n### Configuration\n\n#### File\n\nThe configuration file name for this integration is `go.d/windows.conf`.\n\n\nYou can edit the configuration file using the [`edit-config`](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the\nNetdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).\n\n```bash\ncd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata\nsudo ./edit-config go.d/windows.conf\n```\n#### Options\n\nThe following options can be defined globally: update_every, autodetection_retry.\n\n\n{% details open=true summary=\"Config options\" %}\n| Name | Description | Default | Required |\n|:----|:-----------|:-------|:--------:|\n| update_every | Data collection frequency. | 1 | no |\n| autodetection_retry | Recheck interval in seconds. Zero means no recheck will be scheduled. | 0 | no |\n| url | Server URL. | | yes |\n| timeout | HTTP request timeout. | 1 | no |\n| username | Username for basic HTTP authentication. | | no |\n| password | Password for basic HTTP authentication. | | no |\n| proxy_url | Proxy URL. | | no |\n| proxy_username | Username for proxy basic HTTP authentication. | | no |\n| proxy_password | Password for proxy basic HTTP authentication. | | no |\n| method | HTTP request method. | GET | no |\n| body | HTTP request body. | | no |\n| headers | HTTP request headers. | | no |\n| not_follow_redirects | Redirect handling policy. Controls whether the client follows redirects. | no | no |\n| tls_skip_verify | Server certificate chain and hostname validation policy. Controls whether the client performs this check. | no | no |\n| tls_ca | Certification authority that the client uses when verifying the server's certificates. | | no |\n| tls_cert | Client TLS certificate. | | no |\n| tls_key | Client TLS key. 
| | no |\n\n{% /details %}\n#### Examples\n\n##### Basic\n\nA basic example configuration.\n\n```yaml\njobs:\n - name: win_server\n url: http://192.0.2.1:9182/metrics\n\n```\n##### HTTP authentication\n\nBasic HTTP authentication.\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: win_server\n url: http://192.0.2.1:9182/metrics\n username: username\n password: password\n\n```\n{% /details %}\n##### HTTPS with self-signed certificate\n\nDo not validate server certificate chain and hostname.\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: win_server\n url: https://192.0.2.1:9182/metrics\n tls_skip_verify: yes\n\n```\n{% /details %}\n##### Virtual Node\n\nThe Virtual Node functionality allows you to define nodes in configuration files and treat them as ordinary nodes in all interfaces, panels, tabs, filters, etc.\nYou can create a virtual node for all your Windows machines and control them as separate entities.\n\nTo make your Windows server a virtual node, you need to define virtual nodes in `/etc/netdata/vnodes/vnodes.conf`:\n\n> **Note**: To create a valid guid, you can use the `uuidgen` command on Linux, or the `[guid]::NewGuid()` command in PowerShell on Windows.\n\n```yaml\n# /etc/netdata/vnodes/vnodes.conf\n- hostname: win_server\n guid: \n```\n\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: win_server\n vnode: win_server\n url: http://192.0.2.1:9182/metrics\n\n```\n{% /details %}\n##### Multi-instance\n\n> **Note**: When you define multiple jobs, their names must be unique.\n\nCollecting metrics from multiple remote instances.\n\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: win_server1\n url: http://192.0.2.1:9182/metrics\n\n - name: win_server2\n url: http://192.0.2.2:9182/metrics\n\n```\n{% /details %}\n",
"troubleshooting": "## Troubleshooting\n\n### Debug Mode\n\n**Important**: Debug mode is not supported for data collection jobs created via the UI using the Dyncfg feature.\n\nTo troubleshoot issues with the `windows` collector, run the `go.d.plugin` with the debug option enabled. The output\nshould give you clues as to why the collector isn't working.\n\n- Navigate to the `plugins.d` directory, usually at `/usr/libexec/netdata/plugins.d/`. If that's not the case on\n your system, open `netdata.conf` and look for the `plugins` setting under `[directories]`.\n\n ```bash\n cd /usr/libexec/netdata/plugins.d/\n ```\n\n- Switch to the `netdata` user.\n\n ```bash\n sudo -u netdata -s\n ```\n\n- Run the `go.d.plugin` to debug the collector:\n\n ```bash\n ./go.d.plugin -d -m windows\n ```\n\n### Getting Logs\n\nIf you're encountering problems with the `windows` collector, follow these steps to retrieve logs and identify potential issues:\n\n- **Run the command** specific to your system (systemd, non-systemd, or Docker container).\n- **Examine the output** for any warnings or error messages that might indicate issues. These messages should provide clues about the root cause of the problem.\n\n#### System with systemd\n\nUse the following command to view logs generated since the last Netdata service restart:\n\n```bash\njournalctl _SYSTEMD_INVOCATION_ID=\"$(systemctl show --value --property=InvocationID netdata)\" --namespace=netdata --grep windows\n```\n\n#### System without systemd\n\nLocate the collector log file, typically at `/var/log/netdata/collector.log`, and use `grep` to filter for collector's name:\n\n```bash\ngrep windows /var/log/netdata/collector.log\n```\n\n**Note**: This method shows logs from all restarts. Focus on the **latest entries** for troubleshooting current issues.\n\n#### Docker Container\n\nIf your Netdata runs in a Docker container named \"netdata\" (replace if different), use this command:\n\n```bash\ndocker logs netdata 2>&1 | grep windows\n```\n\n",
"alerts": "## Alerts\n\n\nThe following alerts are available:\n\n| Alert name | On metric | Description |\n|:------------|:----------|:------------|\n| [ windows_10min_cpu_usage ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.cpu_utilization_total | average CPU utilization over the last 10 minutes |\n| [ windows_ram_in_use ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.memory_utilization | memory utilization |\n| [ windows_inbound_packets_discarded ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_discarded | number of inbound discarded packets for the network interface in the last 10 minutes |\n| [ windows_outbound_packets_discarded ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_discarded | number of outbound discarded packets for the network interface in the last 10 minutes |\n| [ windows_inbound_packets_errors ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_errors | number of inbound errors for the network interface in the last 10 minutes |\n| [ windows_outbound_packets_errors ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_errors | number of outbound errors for the network interface in the last 10 minutes |\n| [ windows_disk_in_use ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.logical_disk_space_usage | disk space utilization |\n",
@@ -17656,7 +17656,7 @@ export const integrations = [
}
}
},
- "overview": "# MS Exchange\n\nPlugin: go.d.plugin\nModule: windows\n\n## Overview\n\n**Deprecation Notice**: This collector is no longer the recommended method for Windows monitoring and will be removed in a future release.\n\nThe official Netdata Agent for Windows now provides a robust and user-friendly solution for real-time system and application performance monitoring. By installing Netdata on your Windows host, you'll gain access to a wide range of metrics and visualizations without the need for additional collectors or complex configurations.\n\nTo get started with Netdata on Windows, see the [Netdata Windows Installer](https://github.com/netdata/netdata/blob/master/packaging/windows/WINDOWS_INSTALLER.md).\n\n---\n\nThis collector monitors the performance of Windows machines, collects both host metrics and metrics from various Windows applications (e.g. Active Directory, MSSQL).\n\n\nIt collect metrics by periodically sending HTTP requests to [Prometheus exporter for Windows machines](https://github.com/prometheus-community/windows_exporter), a native Windows agent running on each host.\n\n\nThis collector is supported on all platforms.\n\nThis collector supports collecting metrics from multiple instances of this integration, including remote instances.\n\n\n### Default Behavior\n\n#### Auto-Detection\n\nThis integration doesn't support auto-detection.\n\n#### Limits\n\nThe default configuration for this integration does not impose any limits on data collection.\n\n#### Performance Impact\n\nData collection affects the CPU usage of the Windows host. CPU usage depends on the frequency of data collection and the [enabled collectors](https://github.com/prometheus-community/windows_exporter#collectors).\n\n",
+ "overview": "# MS Exchange\n\nPlugin: go.d.plugin\nModule: windows\n\n## Overview\n\n**Deprecation Notice**: This collector is no longer the recommended method for Windows monitoring and will be removed in a future release.\n\nThe official Netdata Agent for Windows now provides a robust and user-friendly solution for real-time system and application performance monitoring. By installing Netdata on your Windows host, you'll gain access to a wide range of metrics and visualizations without the need for additional collectors or complex configurations.\n\nTo get started with Netdata on Windows, see the [Netdata Windows Installer](https://github.com/netdata/netdata/blob/master/packaging/windows/WINDOWS_INSTALLER.md).\n\n---\n\nThis collector monitors the performance of Windows machines, collects both host metrics and metrics from various Windows applications (e.g. Active Directory, MSSQL).\n\n\nIt collect metrics by periodically sending HTTP requests to [Prometheus exporter for Windows machines](https://github.com/prometheus-community/windows_exporter), a native Windows Agent running on each host.\n\n\nThis collector is supported on all platforms.\n\nThis collector supports collecting metrics from multiple instances of this integration, including remote instances.\n\n\n### Default Behavior\n\n#### Auto-Detection\n\nThis integration doesn't support auto-detection.\n\n#### Limits\n\nThe default configuration for this integration does not impose any limits on data collection.\n\n#### Performance Impact\n\nData collection affects the CPU usage of the Windows host. CPU usage depends on the frequency of data collection and the [enabled collectors](https://github.com/prometheus-community/windows_exporter#collectors).\n\n",
"setup": "## Setup\n\n### Prerequisites\n\n#### Install Windows exporter\n\nTo install the Windows exporter, follow the [official installation guide](https://github.com/prometheus-community/windows_exporter#installation).\n\n\n\n### Configuration\n\n#### File\n\nThe configuration file name for this integration is `go.d/windows.conf`.\n\n\nYou can edit the configuration file using the [`edit-config`](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the\nNetdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).\n\n```bash\ncd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata\nsudo ./edit-config go.d/windows.conf\n```\n#### Options\n\nThe following options can be defined globally: update_every, autodetection_retry.\n\n\n{% details open=true summary=\"Config options\" %}\n| Name | Description | Default | Required |\n|:----|:-----------|:-------|:--------:|\n| update_every | Data collection frequency. | 1 | no |\n| autodetection_retry | Recheck interval in seconds. Zero means no recheck will be scheduled. | 0 | no |\n| url | Server URL. | | yes |\n| timeout | HTTP request timeout. | 1 | no |\n| username | Username for basic HTTP authentication. | | no |\n| password | Password for basic HTTP authentication. | | no |\n| proxy_url | Proxy URL. | | no |\n| proxy_username | Username for proxy basic HTTP authentication. | | no |\n| proxy_password | Password for proxy basic HTTP authentication. | | no |\n| method | HTTP request method. | GET | no |\n| body | HTTP request body. | | no |\n| headers | HTTP request headers. | | no |\n| not_follow_redirects | Redirect handling policy. Controls whether the client follows redirects. | no | no |\n| tls_skip_verify | Server certificate chain and hostname validation policy. Controls whether the client performs this check. | no | no |\n| tls_ca | Certification authority that the client uses when verifying the server's certificates. | | no |\n| tls_cert | Client TLS certificate. | | no |\n| tls_key | Client TLS key. 
| | no |\n\n{% /details %}\n#### Examples\n\n##### Basic\n\nA basic example configuration.\n\n```yaml\njobs:\n - name: win_server\n url: http://192.0.2.1:9182/metrics\n\n```\n##### HTTP authentication\n\nBasic HTTP authentication.\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: win_server\n url: http://192.0.2.1:9182/metrics\n username: username\n password: password\n\n```\n{% /details %}\n##### HTTPS with self-signed certificate\n\nDo not validate server certificate chain and hostname.\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: win_server\n url: https://192.0.2.1:9182/metrics\n tls_skip_verify: yes\n\n```\n{% /details %}\n##### Virtual Node\n\nThe Virtual Node functionality allows you to define nodes in configuration files and treat them as ordinary nodes in all interfaces, panels, tabs, filters, etc.\nYou can create a virtual node for all your Windows machines and control them as separate entities.\n\nTo make your Windows server a virtual node, you need to define virtual nodes in `/etc/netdata/vnodes/vnodes.conf`:\n\n> **Note**: To create a valid guid, you can use the `uuidgen` command on Linux, or the `[guid]::NewGuid()` command in PowerShell on Windows.\n\n```yaml\n# /etc/netdata/vnodes/vnodes.conf\n- hostname: win_server\n guid: \n```\n\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: win_server\n vnode: win_server\n url: http://192.0.2.1:9182/metrics\n\n```\n{% /details %}\n##### Multi-instance\n\n> **Note**: When you define multiple jobs, their names must be unique.\n\nCollecting metrics from multiple remote instances.\n\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: win_server1\n url: http://192.0.2.1:9182/metrics\n\n - name: win_server2\n url: http://192.0.2.2:9182/metrics\n\n```\n{% /details %}\n",
"troubleshooting": "## Troubleshooting\n\n### Debug Mode\n\n**Important**: Debug mode is not supported for data collection jobs created via the UI using the Dyncfg feature.\n\nTo troubleshoot issues with the `windows` collector, run the `go.d.plugin` with the debug option enabled. The output\nshould give you clues as to why the collector isn't working.\n\n- Navigate to the `plugins.d` directory, usually at `/usr/libexec/netdata/plugins.d/`. If that's not the case on\n your system, open `netdata.conf` and look for the `plugins` setting under `[directories]`.\n\n ```bash\n cd /usr/libexec/netdata/plugins.d/\n ```\n\n- Switch to the `netdata` user.\n\n ```bash\n sudo -u netdata -s\n ```\n\n- Run the `go.d.plugin` to debug the collector:\n\n ```bash\n ./go.d.plugin -d -m windows\n ```\n\n### Getting Logs\n\nIf you're encountering problems with the `windows` collector, follow these steps to retrieve logs and identify potential issues:\n\n- **Run the command** specific to your system (systemd, non-systemd, or Docker container).\n- **Examine the output** for any warnings or error messages that might indicate issues. These messages should provide clues about the root cause of the problem.\n\n#### System with systemd\n\nUse the following command to view logs generated since the last Netdata service restart:\n\n```bash\njournalctl _SYSTEMD_INVOCATION_ID=\"$(systemctl show --value --property=InvocationID netdata)\" --namespace=netdata --grep windows\n```\n\n#### System without systemd\n\nLocate the collector log file, typically at `/var/log/netdata/collector.log`, and use `grep` to filter for collector's name:\n\n```bash\ngrep windows /var/log/netdata/collector.log\n```\n\n**Note**: This method shows logs from all restarts. Focus on the **latest entries** for troubleshooting current issues.\n\n#### Docker Container\n\nIf your Netdata runs in a Docker container named \"netdata\" (replace if different), use this command:\n\n```bash\ndocker logs netdata 2>&1 | grep windows\n```\n\n",
"alerts": "## Alerts\n\n\nThe following alerts are available:\n\n| Alert name | On metric | Description |\n|:------------|:----------|:------------|\n| [ windows_10min_cpu_usage ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.cpu_utilization_total | average CPU utilization over the last 10 minutes |\n| [ windows_ram_in_use ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.memory_utilization | memory utilization |\n| [ windows_inbound_packets_discarded ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_discarded | number of inbound discarded packets for the network interface in the last 10 minutes |\n| [ windows_outbound_packets_discarded ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_discarded | number of outbound discarded packets for the network interface in the last 10 minutes |\n| [ windows_inbound_packets_errors ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_errors | number of inbound errors for the network interface in the last 10 minutes |\n| [ windows_outbound_packets_errors ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_errors | number of outbound errors for the network interface in the last 10 minutes |\n| [ windows_disk_in_use ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.logical_disk_space_usage | disk space utilization |\n",
@@ -17696,7 +17696,7 @@ export const integrations = [
}
}
},
- "overview": "# MS SQL Server\n\nPlugin: go.d.plugin\nModule: windows\n\n## Overview\n\n**Deprecation Notice**: This collector is no longer the recommended method for Windows monitoring and will be removed in a future release.\n\nThe official Netdata Agent for Windows now provides a robust and user-friendly solution for real-time system and application performance monitoring. By installing Netdata on your Windows host, you'll gain access to a wide range of metrics and visualizations without the need for additional collectors or complex configurations.\n\nTo get started with Netdata on Windows, see the [Netdata Windows Installer](https://github.com/netdata/netdata/blob/master/packaging/windows/WINDOWS_INSTALLER.md).\n\n---\n\nThis collector monitors the performance of Windows machines, collects both host metrics and metrics from various Windows applications (e.g. Active Directory, MSSQL).\n\n\nIt collect metrics by periodically sending HTTP requests to [Prometheus exporter for Windows machines](https://github.com/prometheus-community/windows_exporter), a native Windows agent running on each host.\n\n\nThis collector is supported on all platforms.\n\nThis collector supports collecting metrics from multiple instances of this integration, including remote instances.\n\n\n### Default Behavior\n\n#### Auto-Detection\n\nThis integration doesn't support auto-detection.\n\n#### Limits\n\nThe default configuration for this integration does not impose any limits on data collection.\n\n#### Performance Impact\n\nData collection affects the CPU usage of the Windows host. CPU usage depends on the frequency of data collection and the [enabled collectors](https://github.com/prometheus-community/windows_exporter#collectors).\n\n",
+ "overview": "# MS SQL Server\n\nPlugin: go.d.plugin\nModule: windows\n\n## Overview\n\n**Deprecation Notice**: This collector is no longer the recommended method for Windows monitoring and will be removed in a future release.\n\nThe official Netdata Agent for Windows now provides a robust and user-friendly solution for real-time system and application performance monitoring. By installing Netdata on your Windows host, you'll gain access to a wide range of metrics and visualizations without the need for additional collectors or complex configurations.\n\nTo get started with Netdata on Windows, see the [Netdata Windows Installer](https://github.com/netdata/netdata/blob/master/packaging/windows/WINDOWS_INSTALLER.md).\n\n---\n\nThis collector monitors the performance of Windows machines, collects both host metrics and metrics from various Windows applications (e.g. Active Directory, MSSQL).\n\n\nIt collect metrics by periodically sending HTTP requests to [Prometheus exporter for Windows machines](https://github.com/prometheus-community/windows_exporter), a native Windows Agent running on each host.\n\n\nThis collector is supported on all platforms.\n\nThis collector supports collecting metrics from multiple instances of this integration, including remote instances.\n\n\n### Default Behavior\n\n#### Auto-Detection\n\nThis integration doesn't support auto-detection.\n\n#### Limits\n\nThe default configuration for this integration does not impose any limits on data collection.\n\n#### Performance Impact\n\nData collection affects the CPU usage of the Windows host. CPU usage depends on the frequency of data collection and the [enabled collectors](https://github.com/prometheus-community/windows_exporter#collectors).\n\n",
"setup": "## Setup\n\n### Prerequisites\n\n#### Install Windows exporter\n\nTo install the Windows exporter, follow the [official installation guide](https://github.com/prometheus-community/windows_exporter#installation).\n\n\n\n### Configuration\n\n#### File\n\nThe configuration file name for this integration is `go.d/windows.conf`.\n\n\nYou can edit the configuration file using the [`edit-config`](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the\nNetdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).\n\n```bash\ncd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata\nsudo ./edit-config go.d/windows.conf\n```\n#### Options\n\nThe following options can be defined globally: update_every, autodetection_retry.\n\n\n{% details open=true summary=\"Config options\" %}\n| Name | Description | Default | Required |\n|:----|:-----------|:-------|:--------:|\n| update_every | Data collection frequency. | 1 | no |\n| autodetection_retry | Recheck interval in seconds. Zero means no recheck will be scheduled. | 0 | no |\n| url | Server URL. | | yes |\n| timeout | HTTP request timeout. | 1 | no |\n| username | Username for basic HTTP authentication. | | no |\n| password | Password for basic HTTP authentication. | | no |\n| proxy_url | Proxy URL. | | no |\n| proxy_username | Username for proxy basic HTTP authentication. | | no |\n| proxy_password | Password for proxy basic HTTP authentication. | | no |\n| method | HTTP request method. | GET | no |\n| body | HTTP request body. | | no |\n| headers | HTTP request headers. | | no |\n| not_follow_redirects | Redirect handling policy. Controls whether the client follows redirects. | no | no |\n| tls_skip_verify | Server certificate chain and hostname validation policy. Controls whether the client performs this check. | no | no |\n| tls_ca | Certification authority that the client uses when verifying the server's certificates. | | no |\n| tls_cert | Client TLS certificate. | | no |\n| tls_key | Client TLS key. 
| | no |\n\n{% /details %}\n#### Examples\n\n##### Basic\n\nA basic example configuration.\n\n```yaml\njobs:\n - name: win_server\n url: http://192.0.2.1:9182/metrics\n\n```\n##### HTTP authentication\n\nBasic HTTP authentication.\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: win_server\n url: http://192.0.2.1:9182/metrics\n username: username\n password: password\n\n```\n{% /details %}\n##### HTTPS with self-signed certificate\n\nDo not validate server certificate chain and hostname.\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: win_server\n url: https://192.0.2.1:9182/metrics\n tls_skip_verify: yes\n\n```\n{% /details %}\n##### Virtual Node\n\nThe Virtual Node functionality allows you to define nodes in configuration files and treat them as ordinary nodes in all interfaces, panels, tabs, filters, etc.\nYou can create a virtual node for all your Windows machines and control them as separate entities.\n\nTo make your Windows server a virtual node, you need to define virtual nodes in `/etc/netdata/vnodes/vnodes.conf`:\n\n> **Note**: To create a valid guid, you can use the `uuidgen` command on Linux, or the `[guid]::NewGuid()` command in PowerShell on Windows.\n\n```yaml\n# /etc/netdata/vnodes/vnodes.conf\n- hostname: win_server\n guid: \n```\n\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: win_server\n vnode: win_server\n url: http://192.0.2.1:9182/metrics\n\n```\n{% /details %}\n##### Multi-instance\n\n> **Note**: When you define multiple jobs, their names must be unique.\n\nCollecting metrics from multiple remote instances.\n\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: win_server1\n url: http://192.0.2.1:9182/metrics\n\n - name: win_server2\n url: http://192.0.2.2:9182/metrics\n\n```\n{% /details %}\n",
"troubleshooting": "## Troubleshooting\n\n### Debug Mode\n\n**Important**: Debug mode is not supported for data collection jobs created via the UI using the Dyncfg feature.\n\nTo troubleshoot issues with the `windows` collector, run the `go.d.plugin` with the debug option enabled. The output\nshould give you clues as to why the collector isn't working.\n\n- Navigate to the `plugins.d` directory, usually at `/usr/libexec/netdata/plugins.d/`. If that's not the case on\n your system, open `netdata.conf` and look for the `plugins` setting under `[directories]`.\n\n ```bash\n cd /usr/libexec/netdata/plugins.d/\n ```\n\n- Switch to the `netdata` user.\n\n ```bash\n sudo -u netdata -s\n ```\n\n- Run the `go.d.plugin` to debug the collector:\n\n ```bash\n ./go.d.plugin -d -m windows\n ```\n\n### Getting Logs\n\nIf you're encountering problems with the `windows` collector, follow these steps to retrieve logs and identify potential issues:\n\n- **Run the command** specific to your system (systemd, non-systemd, or Docker container).\n- **Examine the output** for any warnings or error messages that might indicate issues. These messages should provide clues about the root cause of the problem.\n\n#### System with systemd\n\nUse the following command to view logs generated since the last Netdata service restart:\n\n```bash\njournalctl _SYSTEMD_INVOCATION_ID=\"$(systemctl show --value --property=InvocationID netdata)\" --namespace=netdata --grep windows\n```\n\n#### System without systemd\n\nLocate the collector log file, typically at `/var/log/netdata/collector.log`, and use `grep` to filter for collector's name:\n\n```bash\ngrep windows /var/log/netdata/collector.log\n```\n\n**Note**: This method shows logs from all restarts. Focus on the **latest entries** for troubleshooting current issues.\n\n#### Docker Container\n\nIf your Netdata runs in a Docker container named \"netdata\" (replace if different), use this command:\n\n```bash\ndocker logs netdata 2>&1 | grep windows\n```\n\n",
"alerts": "## Alerts\n\n\nThe following alerts are available:\n\n| Alert name | On metric | Description |\n|:------------|:----------|:------------|\n| [ windows_10min_cpu_usage ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.cpu_utilization_total | average CPU utilization over the last 10 minutes |\n| [ windows_ram_in_use ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.memory_utilization | memory utilization |\n| [ windows_inbound_packets_discarded ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_discarded | number of inbound discarded packets for the network interface in the last 10 minutes |\n| [ windows_outbound_packets_discarded ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_discarded | number of outbound discarded packets for the network interface in the last 10 minutes |\n| [ windows_inbound_packets_errors ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_errors | number of inbound errors for the network interface in the last 10 minutes |\n| [ windows_outbound_packets_errors ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_errors | number of outbound errors for the network interface in the last 10 minutes |\n| [ windows_disk_in_use ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.logical_disk_space_usage | disk space utilization |\n",
@@ -17734,7 +17734,7 @@ export const integrations = [
}
}
},
- "overview": "# NET Framework\n\nPlugin: go.d.plugin\nModule: windows\n\n## Overview\n\n**Deprecation Notice**: This collector is no longer the recommended method for Windows monitoring and will be removed in a future release.\n\nThe official Netdata Agent for Windows now provides a robust and user-friendly solution for real-time system and application performance monitoring. By installing Netdata on your Windows host, you'll gain access to a wide range of metrics and visualizations without the need for additional collectors or complex configurations.\n\nTo get started with Netdata on Windows, see the [Netdata Windows Installer](https://github.com/netdata/netdata/blob/master/packaging/windows/WINDOWS_INSTALLER.md).\n\n---\n\nThis collector monitors the performance of Windows machines, collects both host metrics and metrics from various Windows applications (e.g. Active Directory, MSSQL).\n\n\nIt collect metrics by periodically sending HTTP requests to [Prometheus exporter for Windows machines](https://github.com/prometheus-community/windows_exporter), a native Windows agent running on each host.\n\n\nThis collector is supported on all platforms.\n\nThis collector supports collecting metrics from multiple instances of this integration, including remote instances.\n\n\n### Default Behavior\n\n#### Auto-Detection\n\nThis integration doesn't support auto-detection.\n\n#### Limits\n\nThe default configuration for this integration does not impose any limits on data collection.\n\n#### Performance Impact\n\nData collection affects the CPU usage of the Windows host. CPU usage depends on the frequency of data collection and the [enabled collectors](https://github.com/prometheus-community/windows_exporter#collectors).\n\n",
+ "overview": "# NET Framework\n\nPlugin: go.d.plugin\nModule: windows\n\n## Overview\n\n**Deprecation Notice**: This collector is no longer the recommended method for Windows monitoring and will be removed in a future release.\n\nThe official Netdata Agent for Windows now provides a robust and user-friendly solution for real-time system and application performance monitoring. By installing Netdata on your Windows host, you'll gain access to a wide range of metrics and visualizations without the need for additional collectors or complex configurations.\n\nTo get started with Netdata on Windows, see the [Netdata Windows Installer](https://github.com/netdata/netdata/blob/master/packaging/windows/WINDOWS_INSTALLER.md).\n\n---\n\nThis collector monitors the performance of Windows machines, collects both host metrics and metrics from various Windows applications (e.g. Active Directory, MSSQL).\n\n\nIt collect metrics by periodically sending HTTP requests to [Prometheus exporter for Windows machines](https://github.com/prometheus-community/windows_exporter), a native Windows Agent running on each host.\n\n\nThis collector is supported on all platforms.\n\nThis collector supports collecting metrics from multiple instances of this integration, including remote instances.\n\n\n### Default Behavior\n\n#### Auto-Detection\n\nThis integration doesn't support auto-detection.\n\n#### Limits\n\nThe default configuration for this integration does not impose any limits on data collection.\n\n#### Performance Impact\n\nData collection affects the CPU usage of the Windows host. CPU usage depends on the frequency of data collection and the [enabled collectors](https://github.com/prometheus-community/windows_exporter#collectors).\n\n",
"setup": "## Setup\n\n### Prerequisites\n\n#### Install Windows exporter\n\nTo install the Windows exporter, follow the [official installation guide](https://github.com/prometheus-community/windows_exporter#installation).\n\n\n\n### Configuration\n\n#### File\n\nThe configuration file name for this integration is `go.d/windows.conf`.\n\n\nYou can edit the configuration file using the [`edit-config`](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the\nNetdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).\n\n```bash\ncd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata\nsudo ./edit-config go.d/windows.conf\n```\n#### Options\n\nThe following options can be defined globally: update_every, autodetection_retry.\n\n\n{% details open=true summary=\"Config options\" %}\n| Name | Description | Default | Required |\n|:----|:-----------|:-------|:--------:|\n| update_every | Data collection frequency. | 1 | no |\n| autodetection_retry | Recheck interval in seconds. Zero means no recheck will be scheduled. | 0 | no |\n| url | Server URL. | | yes |\n| timeout | HTTP request timeout. | 1 | no |\n| username | Username for basic HTTP authentication. | | no |\n| password | Password for basic HTTP authentication. | | no |\n| proxy_url | Proxy URL. | | no |\n| proxy_username | Username for proxy basic HTTP authentication. | | no |\n| proxy_password | Password for proxy basic HTTP authentication. | | no |\n| method | HTTP request method. | GET | no |\n| body | HTTP request body. | | no |\n| headers | HTTP request headers. | | no |\n| not_follow_redirects | Redirect handling policy. Controls whether the client follows redirects. | no | no |\n| tls_skip_verify | Server certificate chain and hostname validation policy. Controls whether the client performs this check. | no | no |\n| tls_ca | Certification authority that the client uses when verifying the server's certificates. | | no |\n| tls_cert | Client TLS certificate. | | no |\n| tls_key | Client TLS key. 
| | no |\n\n{% /details %}\n#### Examples\n\n##### Basic\n\nA basic example configuration.\n\n```yaml\njobs:\n - name: win_server\n url: http://192.0.2.1:9182/metrics\n\n```\n##### HTTP authentication\n\nBasic HTTP authentication.\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: win_server\n url: http://192.0.2.1:9182/metrics\n username: username\n password: password\n\n```\n{% /details %}\n##### HTTPS with self-signed certificate\n\nDo not validate server certificate chain and hostname.\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: win_server\n url: https://192.0.2.1:9182/metrics\n tls_skip_verify: yes\n\n```\n{% /details %}\n##### Virtual Node\n\nThe Virtual Node functionality allows you to define nodes in configuration files and treat them as ordinary nodes in all interfaces, panels, tabs, filters, etc.\nYou can create a virtual node for all your Windows machines and control them as separate entities.\n\nTo make your Windows server a virtual node, you need to define virtual nodes in `/etc/netdata/vnodes/vnodes.conf`:\n\n> **Note**: To create a valid guid, you can use the `uuidgen` command on Linux, or the `[guid]::NewGuid()` command in PowerShell on Windows.\n\n```yaml\n# /etc/netdata/vnodes/vnodes.conf\n- hostname: win_server\n guid: \n```\n\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: win_server\n vnode: win_server\n url: http://192.0.2.1:9182/metrics\n\n```\n{% /details %}\n##### Multi-instance\n\n> **Note**: When you define multiple jobs, their names must be unique.\n\nCollecting metrics from multiple remote instances.\n\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: win_server1\n url: http://192.0.2.1:9182/metrics\n\n - name: win_server2\n url: http://192.0.2.2:9182/metrics\n\n```\n{% /details %}\n",
"troubleshooting": "## Troubleshooting\n\n### Debug Mode\n\n**Important**: Debug mode is not supported for data collection jobs created via the UI using the Dyncfg feature.\n\nTo troubleshoot issues with the `windows` collector, run the `go.d.plugin` with the debug option enabled. The output\nshould give you clues as to why the collector isn't working.\n\n- Navigate to the `plugins.d` directory, usually at `/usr/libexec/netdata/plugins.d/`. If that's not the case on\n your system, open `netdata.conf` and look for the `plugins` setting under `[directories]`.\n\n ```bash\n cd /usr/libexec/netdata/plugins.d/\n ```\n\n- Switch to the `netdata` user.\n\n ```bash\n sudo -u netdata -s\n ```\n\n- Run the `go.d.plugin` to debug the collector:\n\n ```bash\n ./go.d.plugin -d -m windows\n ```\n\n### Getting Logs\n\nIf you're encountering problems with the `windows` collector, follow these steps to retrieve logs and identify potential issues:\n\n- **Run the command** specific to your system (systemd, non-systemd, or Docker container).\n- **Examine the output** for any warnings or error messages that might indicate issues. These messages should provide clues about the root cause of the problem.\n\n#### System with systemd\n\nUse the following command to view logs generated since the last Netdata service restart:\n\n```bash\njournalctl _SYSTEMD_INVOCATION_ID=\"$(systemctl show --value --property=InvocationID netdata)\" --namespace=netdata --grep windows\n```\n\n#### System without systemd\n\nLocate the collector log file, typically at `/var/log/netdata/collector.log`, and use `grep` to filter for collector's name:\n\n```bash\ngrep windows /var/log/netdata/collector.log\n```\n\n**Note**: This method shows logs from all restarts. Focus on the **latest entries** for troubleshooting current issues.\n\n#### Docker Container\n\nIf your Netdata runs in a Docker container named \"netdata\" (replace if different), use this command:\n\n```bash\ndocker logs netdata 2>&1 | grep windows\n```\n\n",
"alerts": "## Alerts\n\n\nThe following alerts are available:\n\n| Alert name | On metric | Description |\n|:------------|:----------|:------------|\n| [ windows_10min_cpu_usage ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.cpu_utilization_total | average CPU utilization over the last 10 minutes |\n| [ windows_ram_in_use ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.memory_utilization | memory utilization |\n| [ windows_inbound_packets_discarded ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_discarded | number of inbound discarded packets for the network interface in the last 10 minutes |\n| [ windows_outbound_packets_discarded ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_discarded | number of outbound discarded packets for the network interface in the last 10 minutes |\n| [ windows_inbound_packets_errors ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_errors | number of inbound errors for the network interface in the last 10 minutes |\n| [ windows_outbound_packets_errors ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_errors | number of outbound errors for the network interface in the last 10 minutes |\n| [ windows_disk_in_use ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.logical_disk_space_usage | disk space utilization |\n",
@@ -17771,7 +17771,7 @@ export const integrations = [
}
}
},
- "overview": "# Windows\n\nPlugin: go.d.plugin\nModule: windows\n\n## Overview\n\n**Deprecation Notice**: This collector is no longer the recommended method for Windows monitoring and will be removed in a future release.\n\nThe official Netdata Agent for Windows now provides a robust and user-friendly solution for real-time system and application performance monitoring. By installing Netdata on your Windows host, you'll gain access to a wide range of metrics and visualizations without the need for additional collectors or complex configurations.\n\nTo get started with Netdata on Windows, see the [Netdata Windows Installer](https://github.com/netdata/netdata/blob/master/packaging/windows/WINDOWS_INSTALLER.md).\n\n---\n\nThis collector monitors the performance of Windows machines, collects both host metrics and metrics from various Windows applications (e.g. Active Directory, MSSQL).\n\n\nIt collect metrics by periodically sending HTTP requests to [Prometheus exporter for Windows machines](https://github.com/prometheus-community/windows_exporter), a native Windows agent running on each host.\n\n\nThis collector is supported on all platforms.\n\nThis collector supports collecting metrics from multiple instances of this integration, including remote instances.\n\n\n### Default Behavior\n\n#### Auto-Detection\n\nThis integration doesn't support auto-detection.\n\n#### Limits\n\nThe default configuration for this integration does not impose any limits on data collection.\n\n#### Performance Impact\n\nData collection affects the CPU usage of the Windows host. CPU usage depends on the frequency of data collection and the [enabled collectors](https://github.com/prometheus-community/windows_exporter#collectors).\n\n",
+ "overview": "# Windows\n\nPlugin: go.d.plugin\nModule: windows\n\n## Overview\n\n**Deprecation Notice**: This collector is no longer the recommended method for Windows monitoring and will be removed in a future release.\n\nThe official Netdata Agent for Windows now provides a robust and user-friendly solution for real-time system and application performance monitoring. By installing Netdata on your Windows host, you'll gain access to a wide range of metrics and visualizations without the need for additional collectors or complex configurations.\n\nTo get started with Netdata on Windows, see the [Netdata Windows Installer](https://github.com/netdata/netdata/blob/master/packaging/windows/WINDOWS_INSTALLER.md).\n\n---\n\nThis collector monitors the performance of Windows machines, collects both host metrics and metrics from various Windows applications (e.g. Active Directory, MSSQL).\n\n\nIt collect metrics by periodically sending HTTP requests to [Prometheus exporter for Windows machines](https://github.com/prometheus-community/windows_exporter), a native Windows Agent running on each host.\n\n\nThis collector is supported on all platforms.\n\nThis collector supports collecting metrics from multiple instances of this integration, including remote instances.\n\n\n### Default Behavior\n\n#### Auto-Detection\n\nThis integration doesn't support auto-detection.\n\n#### Limits\n\nThe default configuration for this integration does not impose any limits on data collection.\n\n#### Performance Impact\n\nData collection affects the CPU usage of the Windows host. CPU usage depends on the frequency of data collection and the [enabled collectors](https://github.com/prometheus-community/windows_exporter#collectors).\n\n",
"setup": "## Setup\n\n### Prerequisites\n\n#### Install Windows exporter\n\nTo install the Windows exporter, follow the [official installation guide](https://github.com/prometheus-community/windows_exporter#installation).\n\n\n\n### Configuration\n\n#### File\n\nThe configuration file name for this integration is `go.d/windows.conf`.\n\n\nYou can edit the configuration file using the [`edit-config`](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the\nNetdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).\n\n```bash\ncd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata\nsudo ./edit-config go.d/windows.conf\n```\n#### Options\n\nThe following options can be defined globally: update_every, autodetection_retry.\n\n\n{% details open=true summary=\"Config options\" %}\n| Name | Description | Default | Required |\n|:----|:-----------|:-------|:--------:|\n| update_every | Data collection frequency. | 1 | no |\n| autodetection_retry | Recheck interval in seconds. Zero means no recheck will be scheduled. | 0 | no |\n| url | Server URL. | | yes |\n| timeout | HTTP request timeout. | 1 | no |\n| username | Username for basic HTTP authentication. | | no |\n| password | Password for basic HTTP authentication. | | no |\n| proxy_url | Proxy URL. | | no |\n| proxy_username | Username for proxy basic HTTP authentication. | | no |\n| proxy_password | Password for proxy basic HTTP authentication. | | no |\n| method | HTTP request method. | GET | no |\n| body | HTTP request body. | | no |\n| headers | HTTP request headers. | | no |\n| not_follow_redirects | Redirect handling policy. Controls whether the client follows redirects. | no | no |\n| tls_skip_verify | Server certificate chain and hostname validation policy. Controls whether the client performs this check. | no | no |\n| tls_ca | Certification authority that the client uses when verifying the server's certificates. | | no |\n| tls_cert | Client TLS certificate. | | no |\n| tls_key | Client TLS key. 
| | no |\n\n{% /details %}\n#### Examples\n\n##### Basic\n\nA basic example configuration.\n\n```yaml\njobs:\n - name: win_server\n url: http://192.0.2.1:9182/metrics\n\n```\n##### HTTP authentication\n\nBasic HTTP authentication.\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: win_server\n url: http://192.0.2.1:9182/metrics\n username: username\n password: password\n\n```\n{% /details %}\n##### HTTPS with self-signed certificate\n\nDo not validate server certificate chain and hostname.\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: win_server\n url: https://192.0.2.1:9182/metrics\n tls_skip_verify: yes\n\n```\n{% /details %}\n##### Virtual Node\n\nThe Virtual Node functionality allows you to define nodes in configuration files and treat them as ordinary nodes in all interfaces, panels, tabs, filters, etc.\nYou can create a virtual node for all your Windows machines and control them as separate entities.\n\nTo make your Windows server a virtual node, you need to define virtual nodes in `/etc/netdata/vnodes/vnodes.conf`:\n\n> **Note**: To create a valid guid, you can use the `uuidgen` command on Linux, or the `[guid]::NewGuid()` command in PowerShell on Windows.\n\n```yaml\n# /etc/netdata/vnodes/vnodes.conf\n- hostname: win_server\n guid: \n```\n\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: win_server\n vnode: win_server\n url: http://192.0.2.1:9182/metrics\n\n```\n{% /details %}\n##### Multi-instance\n\n> **Note**: When you define multiple jobs, their names must be unique.\n\nCollecting metrics from multiple remote instances.\n\n\n{% details open=true summary=\"Config\" %}\n```yaml\njobs:\n - name: win_server1\n url: http://192.0.2.1:9182/metrics\n\n - name: win_server2\n url: http://192.0.2.2:9182/metrics\n\n```\n{% /details %}\n",
"troubleshooting": "## Troubleshooting\n\n### Debug Mode\n\n**Important**: Debug mode is not supported for data collection jobs created via the UI using the Dyncfg feature.\n\nTo troubleshoot issues with the `windows` collector, run the `go.d.plugin` with the debug option enabled. The output\nshould give you clues as to why the collector isn't working.\n\n- Navigate to the `plugins.d` directory, usually at `/usr/libexec/netdata/plugins.d/`. If that's not the case on\n your system, open `netdata.conf` and look for the `plugins` setting under `[directories]`.\n\n ```bash\n cd /usr/libexec/netdata/plugins.d/\n ```\n\n- Switch to the `netdata` user.\n\n ```bash\n sudo -u netdata -s\n ```\n\n- Run the `go.d.plugin` to debug the collector:\n\n ```bash\n ./go.d.plugin -d -m windows\n ```\n\n### Getting Logs\n\nIf you're encountering problems with the `windows` collector, follow these steps to retrieve logs and identify potential issues:\n\n- **Run the command** specific to your system (systemd, non-systemd, or Docker container).\n- **Examine the output** for any warnings or error messages that might indicate issues. These messages should provide clues about the root cause of the problem.\n\n#### System with systemd\n\nUse the following command to view logs generated since the last Netdata service restart:\n\n```bash\njournalctl _SYSTEMD_INVOCATION_ID=\"$(systemctl show --value --property=InvocationID netdata)\" --namespace=netdata --grep windows\n```\n\n#### System without systemd\n\nLocate the collector log file, typically at `/var/log/netdata/collector.log`, and use `grep` to filter for collector's name:\n\n```bash\ngrep windows /var/log/netdata/collector.log\n```\n\n**Note**: This method shows logs from all restarts. Focus on the **latest entries** for troubleshooting current issues.\n\n#### Docker Container\n\nIf your Netdata runs in a Docker container named \"netdata\" (replace if different), use this command:\n\n```bash\ndocker logs netdata 2>&1 | grep windows\n```\n\n",
"alerts": "## Alerts\n\n\nThe following alerts are available:\n\n| Alert name | On metric | Description |\n|:------------|:----------|:------------|\n| [ windows_10min_cpu_usage ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.cpu_utilization_total | average CPU utilization over the last 10 minutes |\n| [ windows_ram_in_use ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.memory_utilization | memory utilization |\n| [ windows_inbound_packets_discarded ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_discarded | number of inbound discarded packets for the network interface in the last 10 minutes |\n| [ windows_outbound_packets_discarded ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_discarded | number of outbound discarded packets for the network interface in the last 10 minutes |\n| [ windows_inbound_packets_errors ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_errors | number of inbound errors for the network interface in the last 10 minutes |\n| [ windows_outbound_packets_errors ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_errors | number of outbound errors for the network interface in the last 10 minutes |\n| [ windows_disk_in_use ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.logical_disk_space_usage | disk space utilization |\n",
@@ -21119,7 +21119,7 @@ export const integrations = [
"exporter",
"json"
],
- "overview": "# JSON\n\nUse the JSON connector for the exporting engine to archive your agent's metrics to JSON document databases for long-term storage,\nfurther analysis, or correlation with data from other sources\n\n",
+ "overview": "# JSON\n\nUse the JSON connector for the exporting engine to archive your Agent's metrics to JSON document databases for long-term storage,\nfurther analysis, or correlation with data from other sources\n\n",
"setup": "## Setup\n\n### Prerequisites\n\n#### \n\n\n\n\n### Configuration\n\n#### File\n\nThe configuration file name for this integration is `exporting.conf`.\n\n\nYou can edit the configuration file using the [`edit-config`](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the\nNetdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).\n\n```bash\ncd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata\nsudo ./edit-config exporting.conf\n```\n#### Options\n\nThe following options can be defined for this exporter.\n\n\n{% details open=true summary=\"Config options\" %}\n| Name | Description | Default | Required |\n|:----|:-----------|:-------|:--------:|\n| enabled | Enables or disables an exporting connector instance (yes/no). | no | yes |\n| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | pubsub.googleapis.com | yes |\n| username | Username for HTTP authentication | my_username | no |\n| password | Password for HTTP authentication | my_password | no |\n| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | no |\n| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | no |\n| prefix | The prefix to add to all metrics. | Netdata | no |\n| update every | Frequency of sending sending data to the external database, in seconds. | 10 | no |\n| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | no |\n| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 2 * update_every * 1000 | no |\n| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/src/libnetdata/simple_pattern#simple-patterns). | localhost * | no |\n| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | no |\n| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | no |\n| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | no |\n| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | no |\n\n##### destination\n\nThe format of each item in this list, is: [PROTOCOL:]IP[:PORT].\n- PROTOCOL can be udp or tcp. tcp is the default and only supported by the current exporting engine.\n- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you can to enclose the IP in [] to separate it from the port.\n- PORT can be a number of a service name. 
If omitted, the default port for the exporting connector will be used.\n\nExample IPv4:\n ```yaml\n destination = localhost:5448\n ```\nWhen multiple servers are defined, Netdata will try the next one when the previous one fails.\n\n\n##### update every\n\nNetdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers\nsend data to the same database. This randomness does not affect the quality of the data, only the time they are sent.\n\n\n##### buffer on failures\n\nIf the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).\n\n\n##### send hosts matching\n\nIncludes one or more space separated patterns, using * as wildcard (any number of times within each pattern).\nThe patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to\nfilter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.\n\nA pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,\nuse `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).\n\n\n##### send charts matching\n\nA pattern starting with ! gives a negative match. So to match all charts named apps.* except charts ending in *reads,\nuse !*reads apps.* (so, the order is important: the first pattern matching the chart id or the chart name will be used,\npositive or negative). There is also a URL parameter filter that can be used while querying allmetrics. The URL parameter\nhas a higher priority than the configuration option.\n\n\n##### send names instead of ids\n\nNetdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names\nare human friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are\ndifferent : disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.\n\n\n{% /details %}\n#### Examples\n\n##### Basic configuration\n\n\n\n```yaml\n[json:my_json_instance]\n enabled = yes\n destination = localhost:5448\n\n```\n##### Configuration with HTTPS and HTTP authentication\n\nAdd `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `json:https:my_json_instance`.\n\n```yaml\n[json:my_json_instance]\n enabled = yes\n destination = localhost:5448\n username = my_username\n password = my_password\n\n```\n",
"integration_type": "exporter",
"edit_link": "https://github.com/netdata/netdata/blob/master/src/exporting/json/metadata.yaml",
@@ -21251,7 +21251,7 @@ export const integrations = [
"exporter",
"MongoDB"
],
- "overview": "# MongoDB\n\nUse the MongoDB connector for the exporting engine to archive your agent's metrics to a MongoDB database\nfor long-term storage, further analysis, or correlation with data from other sources.\n\n",
+ "overview": "# MongoDB\n\nUse the MongoDB connector for the exporting engine to archive your Agent's metrics to a MongoDB database\nfor long-term storage, further analysis, or correlation with data from other sources.\n\n",
"setup": "## Setup\n\n### Prerequisites\n\n#### \n\n- To use MongoDB as an external storage for long-term archiving, you should first [install](https://www.mongodb.com/docs/languages/c/c-driver/current/libmongoc/tutorials/obtaining-libraries/installing/#std-label-installing) libmongoc 1.7.0 or higher.\n- Next, re-install Netdata from the source, which detects that the required library is now available.\n\n\n\n### Configuration\n\n#### File\n\nThe configuration file name for this integration is `exporting.conf`.\n\n\nYou can edit the configuration file using the [`edit-config`](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the\nNetdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).\n\n```bash\ncd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata\nsudo ./edit-config exporting.conf\n```\n#### Options\n\nThe following options can be defined for this exporter.\n\n\n{% details open=true summary=\"Config options\" %}\n| Name | Description | Default | Required |\n|:----|:-----------|:-------|:--------:|\n| enabled | Enables or disables an exporting connector instance (yes/no). | no | yes |\n| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | localhost | yes |\n| username | Username for HTTP authentication | my_username | no |\n| password | Password for HTTP authentication | my_password | no |\n| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | no |\n| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | no |\n| prefix | The prefix to add to all metrics. | Netdata | no |\n| update every | Frequency of sending sending data to the external database, in seconds. | 10 | no |\n| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | no |\n| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 2 * update_every * 1000 | no |\n| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/src/libnetdata/simple_pattern#simple-patterns). | localhost * | no |\n| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | no |\n| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | no |\n| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | no |\n| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | no |\n\n##### destination\n\nThe format of each item in this list, is: [PROTOCOL:]IP[:PORT].\n- PROTOCOL can be udp or tcp. tcp is the default and only supported by the current exporting engine.\n- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you can to enclose the IP in [] to separate it from the port.\n- PORT can be a number of a service name. 
If omitted, the default port for the exporting connector will be used.\n\nExample IPv4:\n ```yaml\n destination = 10.11.14.2:27017 10.11.14.3:4242 10.11.14.4:27017\n ```\nExample IPv6 and IPv4 together:\n```yaml\ndestination = [ffff:...:0001]:2003 10.11.12.1:2003\n```\nWhen multiple servers are defined, Netdata will try the next one when the previous one fails.\n\n\n##### update every\n\nNetdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers\nsend data to the same database. This randomness does not affect the quality of the data, only the time they are sent.\n\n\n##### buffer on failures\n\nIf the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).\n\n\n##### send hosts matching\n\nIncludes one or more space separated patterns, using * as wildcard (any number of times within each pattern).\nThe patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to\nfilter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.\n\nA pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,\nuse `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).\n\n\n##### send charts matching\n\nA pattern starting with ! gives a negative match. So to match all charts named apps.* except charts ending in *reads,\nuse !*reads apps.* (so, the order is important: the first pattern matching the chart id or the chart name will be used,\npositive or negative). There is also a URL parameter filter that can be used while querying allmetrics. The URL parameter\nhas a higher priority than the configuration option.\n\n\n##### send names instead of ids\n\nNetdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names\nare human friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are\ndifferent : disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.\n\n\n{% /details %}\n#### Examples\n\n##### Basic configuration\n\nThe default socket timeout depends on the exporting connector update interval.\nThe timeout is 500 ms shorter than the interval (but not less than 1000 ms). You can alter the timeout using the sockettimeoutms MongoDB URI option.\n\n\n```yaml\n[mongodb:my_instance]\n enabled = yes\n destination = mongodb://\n database = your_database_name\n collection = your_collection_name\n\n```\n",
"integration_type": "exporter",
"edit_link": "https://github.com/netdata/netdata/blob/master/src/exporting/mongodb/metadata.yaml",
@@ -21911,7 +21911,7 @@ export const integrations = [
"PagerDuty"
],
"overview": "# PagerDuty\n\nPagerDuty is an enterprise incident resolution service that integrates with ITOps and DevOps monitoring stacks to improve operational reliability and agility. From enriching and aggregating events to correlating them into incidents, PagerDuty streamlines the incident management process by reducing alert noise and resolution times.\nYou can send notifications to PagerDuty using Netdata's Agent alert notification feature, which supports dozens of endpoints, user roles, and more.\n\n",
- "setup": "## Setup\n\n### Prerequisites\n\n#### \n\n- An installation of the [PagerDuty](https://www.pagerduty.com/docs/guides/agent-install-guide/) agent on the node running the Netdata Agent\n- A PagerDuty Generic API service using either the `Events API v2` or `Events API v1`\n- [Add a new service](https://support.pagerduty.com/docs/services-and-integrations#section-configuring-services-and-integrations) to PagerDuty. Click Use our API directly and select either `Events API v2` or `Events API v1`. Once you finish creating the service, click on the Integrations tab to find your Integration Key.\n- Access to the terminal where Netdata Agent is running\n\n\n\n### Configuration\n\n#### File\n\nThe configuration file name for this integration is `health_alarm_notify.conf`.\n\n\nYou can edit the configuration file using the [`edit-config`](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the\nNetdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).\n\n```bash\ncd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata\nsudo ./edit-config health_alarm_notify.conf\n```\n#### Options\n\nThe following options can be defined for this notification\n\n{% details open=true summary=\"Config Options\" %}\n| Name | Description | Default | Required |\n|:----|:-----------|:-------|:--------:|\n| SEND_PD | Set `SEND_PD` to YES | YES | yes |\n| DEFAULT_RECIPIENT_PD | Set `DEFAULT_RECIPIENT_PD` to the PagerDuty service key you want the alert notifications to be sent to. You can define multiple service keys like this: `pd_service_key_1` `pd_service_key_2`. | | yes |\n\n##### DEFAULT_RECIPIENT_PD\n\nAll roles will default to this variable if left unconfigured.\n\nThe `DEFAULT_RECIPIENT_PD` can be edited in the following entries at the bottom of the same file:\n```text\nrole_recipients_pd[sysadmin]=\"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxa\"\nrole_recipients_pd[domainadmin]=\"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxb\"\nrole_recipients_pd[dba]=\"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxc\"\nrole_recipients_pd[webmaster]=\"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxd\"\nrole_recipients_pd[proxyadmin]=\"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxe\"\nrole_recipients_pd[sitemgr]=\"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxf\"\n```\n\n\n{% /details %}\n#### Examples\n\n##### Basic Configuration\n\n\n\n```yaml\n#------------------------------------------------------------------------------\n# pagerduty.com notification options\n\nSEND_PD=\"YES\"\nDEFAULT_RECIPIENT_PD=\"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\"\nUSE_PD_VERSION=\"2\"\n\n```\n",
+ "setup": "## Setup\n\n### Prerequisites\n\n#### \n\n- An installation of the [PagerDuty](https://www.pagerduty.com/docs/guides/agent-install-guide/) Agent on the node running the Netdata Agent\n- A PagerDuty Generic API service using either the `Events API v2` or `Events API v1`\n- [Add a new service](https://support.pagerduty.com/docs/services-and-integrations#section-configuring-services-and-integrations) to PagerDuty. Click Use our API directly and select either `Events API v2` or `Events API v1`. Once you finish creating the service, click on the Integrations tab to find your Integration Key.\n- Access to the terminal where Netdata Agent is running\n\n\n\n### Configuration\n\n#### File\n\nThe configuration file name for this integration is `health_alarm_notify.conf`.\n\n\nYou can edit the configuration file using the [`edit-config`](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the\nNetdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).\n\n```bash\ncd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata\nsudo ./edit-config health_alarm_notify.conf\n```\n#### Options\n\nThe following options can be defined for this notification\n\n{% details open=true summary=\"Config Options\" %}\n| Name | Description | Default | Required |\n|:----|:-----------|:-------|:--------:|\n| SEND_PD | Set `SEND_PD` to YES | YES | yes |\n| DEFAULT_RECIPIENT_PD | Set `DEFAULT_RECIPIENT_PD` to the PagerDuty service key you want the alert notifications to be sent to. You can define multiple service keys like this: `pd_service_key_1` `pd_service_key_2`. | | yes |\n\n##### DEFAULT_RECIPIENT_PD\n\nAll roles will default to this variable if left unconfigured.\n\nThe `DEFAULT_RECIPIENT_PD` can be edited in the following entries at the bottom of the same file:\n```text\nrole_recipients_pd[sysadmin]=\"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxa\"\nrole_recipients_pd[domainadmin]=\"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxb\"\nrole_recipients_pd[dba]=\"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxc\"\nrole_recipients_pd[webmaster]=\"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxd\"\nrole_recipients_pd[proxyadmin]=\"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxe\"\nrole_recipients_pd[sitemgr]=\"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxf\"\n```\n\n\n{% /details %}\n#### Examples\n\n##### Basic Configuration\n\n\n\n```yaml\n#------------------------------------------------------------------------------\n# pagerduty.com notification options\n\nSEND_PD=\"YES\"\nDEFAULT_RECIPIENT_PD=\"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\"\nUSE_PD_VERSION=\"2\"\n\n```\n",
"troubleshooting": "## Troubleshooting\n\n### Test Notification\n\nYou can run the following command by hand, to test alerts configuration:\n\n```bash\n# become user netdata\nsudo su -s /bin/bash netdata\n\n# enable debugging info on the console\nexport NETDATA_ALARM_NOTIFY_DEBUG=1\n\n# send test alarms to sysadmin\n/usr/libexec/netdata/plugins.d/alarm-notify.sh test\n\n# send test alarms to any role\n/usr/libexec/netdata/plugins.d/alarm-notify.sh test \"ROLE\"\n```\n\nNote that this will test _all_ alert mechanisms for the selected role.\n\n",
"integration_type": "agent_notification",
"edit_link": "https://github.com/netdata/netdata/blob/master/src/health/notifications/pagerduty/metadata.yaml"
diff --git a/integrations/integrations.json b/integrations/integrations.json
index 9e1c16ba0cf7fa..5f62b8b566170c 100644
--- a/integrations/integrations.json
+++ b/integrations/integrations.json
@@ -3191,7 +3191,7 @@
"most_popular": true
},
"overview": "# Apache\n\nPlugin: go.d.plugin\nModule: apache\n\n## Overview\n\nThis collector monitors the activity and performance of Apache servers, and collects metrics such as the number of connections, workers, requests and more.\n\n\nIt sends HTTP requests to the Apache location [server-status](https://httpd.apache.org/docs/2.4/mod/mod_status.html), \nwhich is a built-in location that provides metrics about the Apache server.\n\n\nThis collector is supported on all platforms.\n\nThis collector supports collecting metrics from multiple instances of this integration, including remote instances.\n\n\n### Default Behavior\n\n#### Auto-Detection\n\nBy default, it detects Apache instances running on localhost that are listening on port 80.\nOn startup, it tries to collect metrics from:\n\n- http://localhost/server-status?auto\n- http://127.0.0.1/server-status?auto\n\n\n#### Limits\n\nThe default configuration for this integration does not impose any limits on data collection.\n\n#### Performance Impact\n\nThe default configuration for this integration is not expected to impose a significant performance impact on the system.\n",
- "setup": "## Setup\n\n### Prerequisites\n\n#### Enable Apache status support\n\n- Enable and configure [status_module](https://httpd.apache.org/docs/2.4/mod/mod_status.html).\n- Ensure that you have [ExtendedStatus](https://httpd.apache.org/docs/2.4/mod/mod_status.html#troubleshoot) set on (enabled by default since Apache v2.3.6).\n\n\n\n### Configuration\n\n#### File\n\nThe configuration file name for this integration is `go.d/apache.conf`.\n\n\nYou can edit the configuration file using the [`edit-config`](/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the\nNetdata [config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).\n\n```bash\ncd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata\nsudo ./edit-config go.d/apache.conf\n```\n#### Options\n\nThe following options can be defined globally: update_every, autodetection_retry.\n\n\n| Name | Description | Default | Required |\n|:----|:-----------|:-------|:--------:|\n| update_every | Data collection frequency. | 1 | no |\n| autodetection_retry | Recheck interval in seconds. Zero means no recheck will be scheduled. | 0 | no |\n| url | Server URL. | http://127.0.0.1/server-status?auto | yes |\n| timeout | HTTP request timeout. | 1 | no |\n| username | Username for basic HTTP authentication. | | no |\n| password | Password for basic HTTP authentication. | | no |\n| proxy_url | Proxy URL. | | no |\n| proxy_username | Username for proxy basic HTTP authentication. | | no |\n| proxy_password | Password for proxy basic HTTP authentication. | | no |\n| method | HTTP request method. | GET | no |\n| body | HTTP request body. | | no |\n| headers | HTTP request headers. | | no |\n| not_follow_redirects | Redirect handling policy. Controls whether the client follows redirects. | no | no |\n| tls_skip_verify | Server certificate chain and hostname validation policy. Controls whether the client performs this check. | no | no |\n| tls_ca | Certification authority that the client uses when verifying the server's certificates. | | no |\n| tls_cert | Client TLS certificate. | | no |\n| tls_key | Client TLS key. | | no |\n\n#### Examples\n\n##### Basic\n\nA basic example configuration.\n\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1/server-status?auto\n\n```\n##### HTTP authentication\n\nBasic HTTP authentication.\n\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1/server-status?auto\n username: username\n password: password\n\n```\n##### HTTPS with self-signed certificate\n\nApache with enabled HTTPS and self-signed certificate.\n\n```yaml\njobs:\n - name: local\n url: https://127.0.0.1/server-status?auto\n tls_skip_verify: yes\n\n```\n##### Multi-instance\n\n> **Note**: When you define multiple jobs, their names must be unique.\n\nCollecting metrics from local and remote instances.\n\n\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1/server-status?auto\n\n - name: remote\n url: http://192.0.2.1/server-status?auto\n\n```\n",
+ "setup": "## Setup\n\n### Prerequisites\n\n#### Enable Apache status support\n\n- Enable and configure [status_module](https://httpd.apache.org/docs/2.4/mod/mod_status.html).\n- Ensure that you have [ExtendedStatus](https://httpd.apache.org/docs/2.4/mod/mod_status.html#troubleshoot) set on (enabled by default since Apache v2.3.6).\n\n\n\n### Configuration\n\n#### File\n\nThe configuration file name for this integration is `go.d/apache.conf`.\n\n\nYou can edit the configuration file using the [`edit-config`](/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the\nNetdata [config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).\n\n```bash\ncd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata\nsudo ./edit-config go.d/apache.conf\n```\n#### Options\n\nThe following options can be defined globally: update_every, autodetection_retry.\n\n\n| Name | Description | Default | Required |\n|:----|:-----------|:-------|:--------:|\n| update_every | Data collection frequency. | 1 | no |\n| autodetection_retry | Recheck interval in seconds. Zero means no recheck will be scheduled. | 0 | no |\n| url | Server URL. | http://127.0.0.1/server-status?auto | yes |\n| timeout | HTTP request timeout. | 1 | no |\n| username | Username for basic HTTP authentication. | | no |\n| password | Password for basic HTTP authentication. | | no |\n| proxy_url | Proxy URL. | | no |\n| proxy_username | Username for proxy basic HTTP authentication. | | no |\n| proxy_password | Password for proxy basic HTTP authentication. | | no |\n| method | HTTP request method. | GET | no |\n| body | HTTP request body. | | no |\n| headers | HTTP request headers. | | no |\n| not_follow_redirects | Redirect handling policy. Controls whether the client follows redirects. | no | no |\n| force_http2 | Forces the use of HTTP/2 protocol for all requests, even over plain TCP (h2c). | no | no |\n| tls_skip_verify | Server certificate chain and hostname validation policy. Controls whether the client performs this check. | no | no |\n| tls_ca | Certification authority that the client uses when verifying the server's certificates. | | no |\n| tls_cert | Client TLS certificate. | | no |\n| tls_key | Client TLS key. | | no |\n\n#### Examples\n\n##### Basic\n\nA basic example configuration.\n\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1/server-status?auto\n\n```\n##### HTTP authentication\n\nBasic HTTP authentication.\n\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1/server-status?auto\n username: username\n password: password\n\n```\n##### HTTPS with self-signed certificate\n\nApache with enabled HTTPS and self-signed certificate.\n\n```yaml\njobs:\n - name: local\n url: https://127.0.0.1/server-status?auto\n tls_skip_verify: yes\n\n```\n##### Multi-instance\n\n> **Note**: When you define multiple jobs, their names must be unique.\n\nCollecting metrics from local and remote instances.\n\n\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1/server-status?auto\n\n - name: remote\n url: http://192.0.2.1/server-status?auto\n\n```\n",
"troubleshooting": "## Troubleshooting\n\n### Debug Mode\n\n**Important**: Debug mode is not supported for data collection jobs created via the UI using the Dyncfg feature.\n\nTo troubleshoot issues with the `apache` collector, run the `go.d.plugin` with the debug option enabled. The output\nshould give you clues as to why the collector isn't working.\n\n- Navigate to the `plugins.d` directory, usually at `/usr/libexec/netdata/plugins.d/`. If that's not the case on\n your system, open `netdata.conf` and look for the `plugins` setting under `[directories]`.\n\n ```bash\n cd /usr/libexec/netdata/plugins.d/\n ```\n\n- Switch to the `netdata` user.\n\n ```bash\n sudo -u netdata -s\n ```\n\n- Run the `go.d.plugin` to debug the collector:\n\n ```bash\n ./go.d.plugin -d -m apache\n ```\n\n### Getting Logs\n\nIf you're encountering problems with the `apache` collector, follow these steps to retrieve logs and identify potential issues:\n\n- **Run the command** specific to your system (systemd, non-systemd, or Docker container).\n- **Examine the output** for any warnings or error messages that might indicate issues. These messages should provide clues about the root cause of the problem.\n\n#### System with systemd\n\nUse the following command to view logs generated since the last Netdata service restart:\n\n```bash\njournalctl _SYSTEMD_INVOCATION_ID=\"$(systemctl show --value --property=InvocationID netdata)\" --namespace=netdata --grep apache\n```\n\n#### System without systemd\n\nLocate the collector log file, typically at `/var/log/netdata/collector.log`, and use `grep` to filter for collector's name:\n\n```bash\ngrep apache /var/log/netdata/collector.log\n```\n\n**Note**: This method shows logs from all restarts. Focus on the **latest entries** for troubleshooting current issues.\n\n#### Docker Container\n\nIf your Netdata runs in a Docker container named \"netdata\" (replace if different), use this command:\n\n```bash\ndocker logs netdata 2>&1 | grep apache\n```\n\n",
"alerts": "## Alerts\n\nThere are no alerts configured by default for this integration.\n",
"metrics": "## Metrics\n\nMetrics grouped by *scope*.\n\nThe scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.\n\nAll metrics available only if [ExtendedStatus](https://httpd.apache.org/docs/2.4/mod/core.html#extendedstatus) is on.\n\n\n### Per Apache instance\n\nThese metrics refer to the entire monitored application.\n\nThis scope has no labels.\n\nMetrics:\n\n| Metric | Dimensions | Unit | Basic | Extended |\n|:------|:----------|:----|:---:|:---:|\n| apache.connections | connections | connections | \u2022 | \u2022 |\n| apache.conns_async | keepalive, closing, writing | connections | \u2022 | \u2022 |\n| apache.workers | idle, busy | workers | \u2022 | \u2022 |\n| apache.scoreboard | waiting, starting, reading, sending, keepalive, dns_lookup, closing, logging, finishing, idle_cleanup, open | connections | \u2022 | \u2022 |\n| apache.requests | requests | requests/s | | \u2022 |\n| apache.net | sent | kilobit/s | | \u2022 |\n| apache.reqpersec | requests | requests/s | | \u2022 |\n| apache.bytespersec | served | KiB/s | | \u2022 |\n| apache.bytesperreq | size | KiB | | \u2022 |\n| apache.uptime | uptime | seconds | | \u2022 |\n\n",
@@ -3240,7 +3240,7 @@
"most_popular": true
},
"overview": "# HTTPD\n\nPlugin: go.d.plugin\nModule: apache\n\n## Overview\n\nThis collector monitors the activity and performance of Apache servers, and collects metrics such as the number of connections, workers, requests and more.\n\n\nIt sends HTTP requests to the Apache location [server-status](https://httpd.apache.org/docs/2.4/mod/mod_status.html), \nwhich is a built-in location that provides metrics about the Apache server.\n\n\nThis collector is supported on all platforms.\n\nThis collector supports collecting metrics from multiple instances of this integration, including remote instances.\n\n\n### Default Behavior\n\n#### Auto-Detection\n\nBy default, it detects Apache instances running on localhost that are listening on port 80.\nOn startup, it tries to collect metrics from:\n\n- http://localhost/server-status?auto\n- http://127.0.0.1/server-status?auto\n\n\n#### Limits\n\nThe default configuration for this integration does not impose any limits on data collection.\n\n#### Performance Impact\n\nThe default configuration for this integration is not expected to impose a significant performance impact on the system.\n",
- "setup": "## Setup\n\n### Prerequisites\n\n#### Enable Apache status support\n\n- Enable and configure [status_module](https://httpd.apache.org/docs/2.4/mod/mod_status.html).\n- Ensure that you have [ExtendedStatus](https://httpd.apache.org/docs/2.4/mod/mod_status.html#troubleshoot) set on (enabled by default since Apache v2.3.6).\n\n\n\n### Configuration\n\n#### File\n\nThe configuration file name for this integration is `go.d/apache.conf`.\n\n\nYou can edit the configuration file using the [`edit-config`](/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the\nNetdata [config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).\n\n```bash\ncd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata\nsudo ./edit-config go.d/apache.conf\n```\n#### Options\n\nThe following options can be defined globally: update_every, autodetection_retry.\n\n\n| Name | Description | Default | Required |\n|:----|:-----------|:-------|:--------:|\n| update_every | Data collection frequency. | 1 | no |\n| autodetection_retry | Recheck interval in seconds. Zero means no recheck will be scheduled. | 0 | no |\n| url | Server URL. | http://127.0.0.1/server-status?auto | yes |\n| timeout | HTTP request timeout. | 1 | no |\n| username | Username for basic HTTP authentication. | | no |\n| password | Password for basic HTTP authentication. | | no |\n| proxy_url | Proxy URL. | | no |\n| proxy_username | Username for proxy basic HTTP authentication. | | no |\n| proxy_password | Password for proxy basic HTTP authentication. | | no |\n| method | HTTP request method. | GET | no |\n| body | HTTP request body. | | no |\n| headers | HTTP request headers. | | no |\n| not_follow_redirects | Redirect handling policy. Controls whether the client follows redirects. | no | no |\n| tls_skip_verify | Server certificate chain and hostname validation policy. Controls whether the client performs this check. | no | no |\n| tls_ca | Certification authority that the client uses when verifying the server's certificates. | | no |\n| tls_cert | Client TLS certificate. | | no |\n| tls_key | Client TLS key. | | no |\n\n#### Examples\n\n##### Basic\n\nA basic example configuration.\n\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1/server-status?auto\n\n```\n##### HTTP authentication\n\nBasic HTTP authentication.\n\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1/server-status?auto\n username: username\n password: password\n\n```\n##### HTTPS with self-signed certificate\n\nApache with enabled HTTPS and self-signed certificate.\n\n```yaml\njobs:\n - name: local\n url: https://127.0.0.1/server-status?auto\n tls_skip_verify: yes\n\n```\n##### Multi-instance\n\n> **Note**: When you define multiple jobs, their names must be unique.\n\nCollecting metrics from local and remote instances.\n\n\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1/server-status?auto\n\n - name: remote\n url: http://192.0.2.1/server-status?auto\n\n```\n",
+ "setup": "## Setup\n\n### Prerequisites\n\n#### Enable Apache status support\n\n- Enable and configure [status_module](https://httpd.apache.org/docs/2.4/mod/mod_status.html).\n- Ensure that you have [ExtendedStatus](https://httpd.apache.org/docs/2.4/mod/mod_status.html#troubleshoot) set on (enabled by default since Apache v2.3.6).\n\n\n\n### Configuration\n\n#### File\n\nThe configuration file name for this integration is `go.d/apache.conf`.\n\n\nYou can edit the configuration file using the [`edit-config`](/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the\nNetdata [config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).\n\n```bash\ncd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata\nsudo ./edit-config go.d/apache.conf\n```\n#### Options\n\nThe following options can be defined globally: update_every, autodetection_retry.\n\n\n| Name | Description | Default | Required |\n|:----|:-----------|:-------|:--------:|\n| update_every | Data collection frequency. | 1 | no |\n| autodetection_retry | Recheck interval in seconds. Zero means no recheck will be scheduled. | 0 | no |\n| url | Server URL. | http://127.0.0.1/server-status?auto | yes |\n| timeout | HTTP request timeout. | 1 | no |\n| username | Username for basic HTTP authentication. | | no |\n| password | Password for basic HTTP authentication. | | no |\n| proxy_url | Proxy URL. | | no |\n| proxy_username | Username for proxy basic HTTP authentication. | | no |\n| proxy_password | Password for proxy basic HTTP authentication. | | no |\n| method | HTTP request method. | GET | no |\n| body | HTTP request body. | | no |\n| headers | HTTP request headers. | | no |\n| not_follow_redirects | Redirect handling policy. Controls whether the client follows redirects. | no | no |\n| force_http2 | Forces the use of HTTP/2 protocol for all requests, even over plain TCP (h2c). | no | no |\n| tls_skip_verify | Server certificate chain and hostname validation policy. Controls whether the client performs this check. | no | no |\n| tls_ca | Certification authority that the client uses when verifying the server's certificates. | | no |\n| tls_cert | Client TLS certificate. | | no |\n| tls_key | Client TLS key. | | no |\n\n#### Examples\n\n##### Basic\n\nA basic example configuration.\n\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1/server-status?auto\n\n```\n##### HTTP authentication\n\nBasic HTTP authentication.\n\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1/server-status?auto\n username: username\n password: password\n\n```\n##### HTTPS with self-signed certificate\n\nApache with enabled HTTPS and self-signed certificate.\n\n```yaml\njobs:\n - name: local\n url: https://127.0.0.1/server-status?auto\n tls_skip_verify: yes\n\n```\n##### Multi-instance\n\n> **Note**: When you define multiple jobs, their names must be unique.\n\nCollecting metrics from local and remote instances.\n\n\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1/server-status?auto\n\n - name: remote\n url: http://192.0.2.1/server-status?auto\n\n```\n",
"troubleshooting": "## Troubleshooting\n\n### Debug Mode\n\n**Important**: Debug mode is not supported for data collection jobs created via the UI using the Dyncfg feature.\n\nTo troubleshoot issues with the `apache` collector, run the `go.d.plugin` with the debug option enabled. The output\nshould give you clues as to why the collector isn't working.\n\n- Navigate to the `plugins.d` directory, usually at `/usr/libexec/netdata/plugins.d/`. If that's not the case on\n your system, open `netdata.conf` and look for the `plugins` setting under `[directories]`.\n\n ```bash\n cd /usr/libexec/netdata/plugins.d/\n ```\n\n- Switch to the `netdata` user.\n\n ```bash\n sudo -u netdata -s\n ```\n\n- Run the `go.d.plugin` to debug the collector:\n\n ```bash\n ./go.d.plugin -d -m apache\n ```\n\n### Getting Logs\n\nIf you're encountering problems with the `apache` collector, follow these steps to retrieve logs and identify potential issues:\n\n- **Run the command** specific to your system (systemd, non-systemd, or Docker container).\n- **Examine the output** for any warnings or error messages that might indicate issues. These messages should provide clues about the root cause of the problem.\n\n#### System with systemd\n\nUse the following command to view logs generated since the last Netdata service restart:\n\n```bash\njournalctl _SYSTEMD_INVOCATION_ID=\"$(systemctl show --value --property=InvocationID netdata)\" --namespace=netdata --grep apache\n```\n\n#### System without systemd\n\nLocate the collector log file, typically at `/var/log/netdata/collector.log`, and use `grep` to filter for collector's name:\n\n```bash\ngrep apache /var/log/netdata/collector.log\n```\n\n**Note**: This method shows logs from all restarts. Focus on the **latest entries** for troubleshooting current issues.\n\n#### Docker Container\n\nIf your Netdata runs in a Docker container named \"netdata\" (replace if different), use this command:\n\n```bash\ndocker logs netdata 2>&1 | grep apache\n```\n\n",
"alerts": "## Alerts\n\nThere are no alerts configured by default for this integration.\n",
"metrics": "## Metrics\n\nMetrics grouped by *scope*.\n\nThe scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.\n\nAll metrics available only if [ExtendedStatus](https://httpd.apache.org/docs/2.4/mod/core.html#extendedstatus) is on.\n\n\n### Per Apache instance\n\nThese metrics refer to the entire monitored application.\n\nThis scope has no labels.\n\nMetrics:\n\n| Metric | Dimensions | Unit | Basic | Extended |\n|:------|:----------|:----|:---:|:---:|\n| apache.connections | connections | connections | \u2022 | \u2022 |\n| apache.conns_async | keepalive, closing, writing | connections | \u2022 | \u2022 |\n| apache.workers | idle, busy | workers | \u2022 | \u2022 |\n| apache.scoreboard | waiting, starting, reading, sending, keepalive, dns_lookup, closing, logging, finishing, idle_cleanup, open | connections | \u2022 | \u2022 |\n| apache.requests | requests | requests/s | | \u2022 |\n| apache.net | sent | kilobit/s | | \u2022 |\n| apache.reqpersec | requests | requests/s | | \u2022 |\n| apache.bytespersec | served | KiB/s | | \u2022 |\n| apache.bytesperreq | size | KiB | | \u2022 |\n| apache.uptime | uptime | seconds | | \u2022 |\n\n",
@@ -3575,7 +3575,7 @@
"most_popular": true
},
"overview": "# Consul\n\nPlugin: go.d.plugin\nModule: consul\n\n## Overview\n\nThis collector monitors [key metrics](https://developer.hashicorp.com/consul/docs/agent/telemetry#key-metrics) of Consul Agents: transaction timings, leadership changes, memory usage and more.\n\n\nIt periodically sends HTTP requests to [Consul REST API](https://developer.hashicorp.com/consul/api-docs).\n\nUsed endpoints:\n\n- [/operator/autopilot/health](https://developer.hashicorp.com/consul/api-docs/operator/autopilot#read-health)\n- [/agent/checks](https://developer.hashicorp.com/consul/api-docs/agent/check#list-checks)\n- [/agent/self](https://developer.hashicorp.com/consul/api-docs/agent#read-configuration)\n- [/agent/metrics](https://developer.hashicorp.com/consul/api-docs/agent#view-metrics)\n- [/coordinate/nodes](https://developer.hashicorp.com/consul/api-docs/coordinate#read-lan-coordinates-for-all-nodes)\n\n\nThis collector is supported on all platforms.\n\nThis collector supports collecting metrics from multiple instances of this integration, including remote instances.\n\n\n### Default Behavior\n\n#### Auto-Detection\n\nThis collector discovers instances running on the local host, that provide metrics on port 8500.\n\nOn startup, it tries to collect metrics from:\n\n- http://localhost:8500\n- http://127.0.0.1:8500\n\n\n#### Limits\n\nThe default configuration for this integration does not impose any limits on data collection.\n\n#### Performance Impact\n\nThe default configuration for this integration is not expected to impose a significant performance impact on the system.\n",
- "setup": "## Setup\n\n### Prerequisites\n\n#### Enable Prometheus telemetry\n\n[Enable](https://developer.hashicorp.com/consul/docs/agent/config/config-files#telemetry-prometheus_retention_time) telemetry on your Consul agent, by increasing the value of `prometheus_retention_time` from `0`.\n\n\n#### Add required ACLs to Token\n\nRequired **only if authentication is enabled**.\n\n| ACL | Endpoint |\n|:---------------:|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `operator:read` | [autopilot health status](https://developer.hashicorp.com/consul/api-docs/operator/autopilot#read-health) |\n| `node:read` | [checks](https://developer.hashicorp.com/consul/api-docs/agent/check#list-checks) |\n| `agent:read` | [configuration](https://developer.hashicorp.com/consul/api-docs/agent#read-configuration), [metrics](https://developer.hashicorp.com/consul/api-docs/agent#view-metrics), and [lan coordinates](https://developer.hashicorp.com/consul/api-docs/coordinate#read-lan-coordinates-for-all-nodes) |\n\n\n\n### Configuration\n\n#### File\n\nThe configuration file name for this integration is `go.d/consul.conf`.\n\n\nYou can edit the configuration file using the [`edit-config`](/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the\nNetdata [config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).\n\n```bash\ncd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata\nsudo ./edit-config go.d/consul.conf\n```\n#### Options\n\nThe following options can be defined globally: update_every, autodetection_retry.\n\n\n| Name | Description | Default | Required |\n|:----|:-----------|:-------|:--------:|\n| update_every | Data collection frequency. | 1 | no |\n| autodetection_retry | Recheck interval in seconds. Zero means no recheck will be scheduled. | 0 | no |\n| url | Server URL. | http://localhost:8500 | yes |\n| acl_token | ACL token used in every request. | | no |\n| max_checks | Checks processing/charting limit. | | no |\n| max_filter | Checks processing/charting filter. Uses [simple patterns](/src/libnetdata/simple_pattern/README.md). | | no |\n| username | Username for basic HTTP authentication. | | no |\n| password | Password for basic HTTP authentication. | | no |\n| proxy_url | Proxy URL. | | no |\n| proxy_username | Username for proxy basic HTTP authentication. | | no |\n| proxy_password | Password for proxy basic HTTP authentication. | | no |\n| timeout | HTTP request timeout. | 1 | no |\n| method | HTTP request method. | GET | no |\n| body | HTTP request body. | | no |\n| headers | HTTP request headers. | | no |\n| not_follow_redirects | Redirect handling policy. Controls whether the client follows redirects. | no | no |\n| tls_skip_verify | Server certificate chain and hostname validation policy. Controls whether the client performs this check. | no | no |\n| tls_ca | Certification authority that the client uses when verifying the server's certificates. | | no |\n| tls_cert | Client tls certificate. | | no |\n| tls_key | Client tls key. 
| | no |\n\n#### Examples\n\n##### Basic\n\nAn example configuration.\n\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1:8500\n acl_token: \"ec15675e-2999-d789-832e-8c4794daa8d7\"\n\n```\n##### Basic HTTP auth\n\nLocal server with basic HTTP authentication.\n\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1:8500\n acl_token: \"ec15675e-2999-d789-832e-8c4794daa8d7\"\n username: foo\n password: bar\n\n```\n##### Multi-instance\n\n> **Note**: When you define multiple jobs, their names must be unique.\n\nCollecting metrics from local and remote instances.\n\n\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1:8500\n acl_token: \"ec15675e-2999-d789-832e-8c4794daa8d7\"\n\n - name: remote\n url: http://203.0.113.10:8500\n acl_token: \"ada7f751-f654-8872-7f93-498e799158b6\"\n\n```\n",
+ "setup": "## Setup\n\n### Prerequisites\n\n#### Enable Prometheus telemetry\n\n[Enable](https://developer.hashicorp.com/consul/docs/agent/config/config-files#telemetry-prometheus_retention_time) telemetry on your Consul Agent, by increasing the value of `prometheus_retention_time` from `0`.\n\n\n#### Add required ACLs to Token\n\nRequired **only if authentication is enabled**.\n\n| ACL | Endpoint |\n|:---------------:|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `operator:read` | [autopilot health status](https://developer.hashicorp.com/consul/api-docs/operator/autopilot#read-health) |\n| `node:read` | [checks](https://developer.hashicorp.com/consul/api-docs/agent/check#list-checks) |\n| `agent:read` | [configuration](https://developer.hashicorp.com/consul/api-docs/agent#read-configuration), [metrics](https://developer.hashicorp.com/consul/api-docs/agent#view-metrics), and [lan coordinates](https://developer.hashicorp.com/consul/api-docs/coordinate#read-lan-coordinates-for-all-nodes) |\n\n\n\n### Configuration\n\n#### File\n\nThe configuration file name for this integration is `go.d/consul.conf`.\n\n\nYou can edit the configuration file using the [`edit-config`](/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the\nNetdata [config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).\n\n```bash\ncd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata\nsudo ./edit-config go.d/consul.conf\n```\n#### Options\n\nThe following options can be defined globally: update_every, autodetection_retry.\n\n\n| Name | Description | Default | Required |\n|:----|:-----------|:-------|:--------:|\n| update_every | Data collection frequency. | 1 | no |\n| autodetection_retry | Recheck interval in seconds. Zero means no recheck will be scheduled. | 0 | no |\n| url | Server URL. | http://localhost:8500 | yes |\n| acl_token | ACL token used in every request. | | no |\n| max_checks | Checks processing/charting limit. | | no |\n| max_filter | Checks processing/charting filter. Uses [simple patterns](/src/libnetdata/simple_pattern/README.md). | | no |\n| username | Username for basic HTTP authentication. | | no |\n| password | Password for basic HTTP authentication. | | no |\n| proxy_url | Proxy URL. | | no |\n| proxy_username | Username for proxy basic HTTP authentication. | | no |\n| proxy_password | Password for proxy basic HTTP authentication. | | no |\n| timeout | HTTP request timeout. | 1 | no |\n| method | HTTP request method. | GET | no |\n| body | HTTP request body. | | no |\n| headers | HTTP request headers. | | no |\n| not_follow_redirects | Redirect handling policy. Controls whether the client follows redirects. | no | no |\n| tls_skip_verify | Server certificate chain and hostname validation policy. Controls whether the client performs this check. | no | no |\n| tls_ca | Certification authority that the client uses when verifying the server's certificates. | | no |\n| tls_cert | Client tls certificate. | | no |\n| tls_key | Client tls key. 
| | no |\n\n#### Examples\n\n##### Basic\n\nAn example configuration.\n\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1:8500\n acl_token: \"ec15675e-2999-d789-832e-8c4794daa8d7\"\n\n```\n##### Basic HTTP auth\n\nLocal server with basic HTTP authentication.\n\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1:8500\n acl_token: \"ec15675e-2999-d789-832e-8c4794daa8d7\"\n username: foo\n password: bar\n\n```\n##### Multi-instance\n\n> **Note**: When you define multiple jobs, their names must be unique.\n\nCollecting metrics from local and remote instances.\n\n\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1:8500\n acl_token: \"ec15675e-2999-d789-832e-8c4794daa8d7\"\n\n - name: remote\n url: http://203.0.113.10:8500\n acl_token: \"ada7f751-f654-8872-7f93-498e799158b6\"\n\n```\n",
"troubleshooting": "## Troubleshooting\n\n### Debug Mode\n\n**Important**: Debug mode is not supported for data collection jobs created via the UI using the Dyncfg feature.\n\nTo troubleshoot issues with the `consul` collector, run the `go.d.plugin` with the debug option enabled. The output\nshould give you clues as to why the collector isn't working.\n\n- Navigate to the `plugins.d` directory, usually at `/usr/libexec/netdata/plugins.d/`. If that's not the case on\n your system, open `netdata.conf` and look for the `plugins` setting under `[directories]`.\n\n ```bash\n cd /usr/libexec/netdata/plugins.d/\n ```\n\n- Switch to the `netdata` user.\n\n ```bash\n sudo -u netdata -s\n ```\n\n- Run the `go.d.plugin` to debug the collector:\n\n ```bash\n ./go.d.plugin -d -m consul\n ```\n\n### Getting Logs\n\nIf you're encountering problems with the `consul` collector, follow these steps to retrieve logs and identify potential issues:\n\n- **Run the command** specific to your system (systemd, non-systemd, or Docker container).\n- **Examine the output** for any warnings or error messages that might indicate issues. These messages should provide clues about the root cause of the problem.\n\n#### System with systemd\n\nUse the following command to view logs generated since the last Netdata service restart:\n\n```bash\njournalctl _SYSTEMD_INVOCATION_ID=\"$(systemctl show --value --property=InvocationID netdata)\" --namespace=netdata --grep consul\n```\n\n#### System without systemd\n\nLocate the collector log file, typically at `/var/log/netdata/collector.log`, and use `grep` to filter for collector's name:\n\n```bash\ngrep consul /var/log/netdata/collector.log\n```\n\n**Note**: This method shows logs from all restarts. Focus on the **latest entries** for troubleshooting current issues.\n\n#### Docker Container\n\nIf your Netdata runs in a Docker container named \"netdata\" (replace if different), use this command:\n\n```bash\ndocker logs netdata 2>&1 | grep consul\n```\n\n",
"alerts": "## Alerts\n\n\nThe following alerts are available:\n\n| Alert name | On metric | Description |\n|:------------|:----------|:------------|\n| [ consul_node_health_check_status ](https://github.com/netdata/netdata/blob/master/src/health/health.d/consul.conf) | consul.node_health_check_status | node health check ${label:check_name} has failed on server ${label:node_name} datacenter ${label:datacenter} |\n| [ consul_service_health_check_status ](https://github.com/netdata/netdata/blob/master/src/health/health.d/consul.conf) | consul.service_health_check_status | service health check ${label:check_name} for service ${label:service_name} has failed on server ${label:node_name} datacenter ${label:datacenter} |\n| [ consul_client_rpc_requests_exceeded ](https://github.com/netdata/netdata/blob/master/src/health/health.d/consul.conf) | consul.client_rpc_requests_exceeded_rate | number of rate-limited RPC requests made by server ${label:node_name} datacenter ${label:datacenter} |\n| [ consul_client_rpc_requests_failed ](https://github.com/netdata/netdata/blob/master/src/health/health.d/consul.conf) | consul.client_rpc_requests_failed_rate | number of failed RPC requests made by server ${label:node_name} datacenter ${label:datacenter} |\n| [ consul_gc_pause_time ](https://github.com/netdata/netdata/blob/master/src/health/health.d/consul.conf) | consul.gc_pause_time | time spent in stop-the-world garbage collection pauses on server ${label:node_name} datacenter ${label:datacenter} |\n| [ consul_autopilot_health_status ](https://github.com/netdata/netdata/blob/master/src/health/health.d/consul.conf) | consul.autopilot_health_status | datacenter ${label:datacenter} cluster is unhealthy as reported by server ${label:node_name} |\n| [ consul_autopilot_server_health_status ](https://github.com/netdata/netdata/blob/master/src/health/health.d/consul.conf) | consul.autopilot_server_health_status | server ${label:node_name} from datacenter ${label:datacenter} is unhealthy |\n| [ consul_raft_leader_last_contact_time ](https://github.com/netdata/netdata/blob/master/src/health/health.d/consul.conf) | consul.raft_leader_last_contact_time | median time elapsed since leader server ${label:node_name} datacenter ${label:datacenter} was last able to contact the follower nodes |\n| [ consul_raft_leadership_transitions ](https://github.com/netdata/netdata/blob/master/src/health/health.d/consul.conf) | consul.raft_leadership_transitions_rate | there has been a leadership change and server ${label:node_name} datacenter ${label:datacenter} has become the leader |\n| [ consul_raft_thread_main_saturation ](https://github.com/netdata/netdata/blob/master/src/health/health.d/consul.conf) | consul.raft_thread_main_saturation_perc | average saturation of the main Raft goroutine on server ${label:node_name} datacenter ${label:datacenter} |\n| [ consul_raft_thread_fsm_saturation ](https://github.com/netdata/netdata/blob/master/src/health/health.d/consul.conf) | consul.raft_thread_fsm_saturation_perc | average saturation of the FSM Raft goroutine on server ${label:node_name} datacenter ${label:datacenter} |\n| [ consul_license_expiration_time ](https://github.com/netdata/netdata/blob/master/src/health/health.d/consul.conf) | consul.license_expiration_time | Consul Enterprise licence expiration time on node ${label:node_name} datacenter ${label:datacenter} |\n",
"metrics": "## Metrics\n\nMetrics grouped by *scope*.\n\nThe scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.\n\nThe set of metrics depends on the [Consul Agent mode](https://developer.hashicorp.com/consul/docs/install/glossary#agent).\n\n\n### Per Consul instance\n\nThese metrics refer to the entire monitored application.\n\nThis scope has no labels.\n\nMetrics:\n\n| Metric | Dimensions | Unit | Leader | Follower | Client |\n|:------|:----------|:----|:---:|:---:|:---:|\n| consul.client_rpc_requests_rate | rpc | requests/s | \u2022 | \u2022 | \u2022 |\n| consul.client_rpc_requests_exceeded_rate | exceeded | requests/s | \u2022 | \u2022 | \u2022 |\n| consul.client_rpc_requests_failed_rate | failed | requests/s | \u2022 | \u2022 | \u2022 |\n| consul.memory_allocated | allocated | bytes | \u2022 | \u2022 | \u2022 |\n| consul.memory_sys | sys | bytes | \u2022 | \u2022 | \u2022 |\n| consul.gc_pause_time | gc_pause | seconds | \u2022 | \u2022 | \u2022 |\n| consul.kvs_apply_time | quantile_0.5, quantile_0.9, quantile_0.99 | ms | \u2022 | \u2022 | |\n| consul.kvs_apply_operations_rate | kvs_apply | ops/s | \u2022 | \u2022 | |\n| consul.txn_apply_time | quantile_0.5, quantile_0.9, quantile_0.99 | ms | \u2022 | \u2022 | |\n| consul.txn_apply_operations_rate | txn_apply | ops/s | \u2022 | \u2022 | |\n| consul.autopilot_health_status | healthy, unhealthy | status | \u2022 | \u2022 | |\n| consul.autopilot_failure_tolerance | failure_tolerance | servers | \u2022 | \u2022 | |\n| consul.autopilot_server_health_status | healthy, unhealthy | status | \u2022 | \u2022 | |\n| consul.autopilot_server_stable_time | stable | seconds | \u2022 | \u2022 | |\n| consul.autopilot_server_serf_status | active, failed, left, none | status | \u2022 | \u2022 | |\n| consul.autopilot_server_voter_status | voter, not_voter | status | \u2022 | \u2022 | |\n| consul.network_lan_rtt | min, max, avg | ms | \u2022 | \u2022 | |\n| consul.raft_commit_time | quantile_0.5, quantile_0.9, quantile_0.99 | ms | \u2022 | | |\n| consul.raft_commits_rate | commits | commits/s | \u2022 | | |\n| consul.raft_leader_last_contact_time | quantile_0.5, quantile_0.9, quantile_0.99 | ms | \u2022 | | |\n| consul.raft_leader_oldest_log_age | oldest_log_age | seconds | \u2022 | | |\n| consul.raft_follower_last_contact_leader_time | leader_last_contact | ms | | \u2022 | |\n| consul.raft_rpc_install_snapshot_time | quantile_0.5, quantile_0.9, quantile_0.99 | ms | | \u2022 | |\n| consul.raft_leader_elections_rate | leader | elections/s | \u2022 | \u2022 | |\n| consul.raft_leadership_transitions_rate | leadership | transitions/s | \u2022 | \u2022 | |\n| consul.server_leadership_status | leader, not_leader | status | \u2022 | \u2022 | |\n| consul.raft_thread_main_saturation_perc | quantile_0.5, quantile_0.9, quantile_0.99 | percentage | \u2022 | \u2022 | |\n| consul.raft_thread_fsm_saturation_perc | quantile_0.5, quantile_0.9, quantile_0.99 | percentage | \u2022 | \u2022 | |\n| consul.raft_fsm_last_restore_duration | last_restore_duration | ms | \u2022 | \u2022 | |\n| consul.raft_boltdb_freelist_bytes | freelist | bytes | \u2022 | \u2022 | |\n| consul.raft_boltdb_logs_per_batch_rate | written | logs/s | \u2022 | \u2022 | |\n| consul.raft_boltdb_store_logs_time | quantile_0.5, quantile_0.9, quantile_0.99 | ms | \u2022 | \u2022 | |\n| consul.license_expiration_time | license_expiration | seconds | \u2022 | \u2022 | \u2022 |\n\n### Per node check\n\nMetrics about checks on Node level.\n\nLabels:\n\n| 
Label | Description |\n|:-----------|:----------------|\n| datacenter | Datacenter Identifier |\n| node_name | The node's name |\n| check_name | The check's name |\n\nMetrics:\n\n| Metric | Dimensions | Unit | Leader | Follower | Client |\n|:------|:----------|:----|:---:|:---:|:---:|\n| consul.node_health_check_status | passing, maintenance, warning, critical | status | \u2022 | \u2022 | \u2022 |\n\n### Per service check\n\nMetrics about checks at a Service level.\n\nLabels:\n\n| Label | Description |\n|:-----------|:----------------|\n| datacenter | Datacenter Identifier |\n| node_name | The node's name |\n| check_name | The check's name |\n| service_name | The service's name |\n\nMetrics:\n\n| Metric | Dimensions | Unit | Leader | Follower | Client |\n|:------|:----------|:----|:---:|:---:|:---:|\n| consul.service_health_check_status | passing, maintenance, warning, critical | status | \u2022 | \u2022 | \u2022 |\n\n",
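The `max_checks` and `max_filter` options documented above can be combined to keep check charts manageable on larger clusters. A minimal sketch, assuming the default local endpoint; the cap and the simple-pattern filter are illustrative values, not defaults:

```yaml
jobs:
  - name: local
    url: http://127.0.0.1:8500
    max_checks: 50                  # illustrative cap on the number of charted checks
    max_filter: '!serfHealth *'     # illustrative simple pattern: skip serfHealth, chart everything else
```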
@@ -6257,7 +6257,7 @@
"most_popular": true
},
"overview": "# PostgreSQL\n\nPlugin: go.d.plugin\nModule: postgres\n\n## Overview\n\nThis collector monitors the activity and performance of Postgres servers, collects replication statistics, metrics for each database, table and index, and more.\n\n\nIt establishes a connection to the Postgres instance via a TCP or UNIX socket.\nTo collect metrics for database tables and indexes, it establishes an additional connection for each discovered database.\n\n\nThis collector is supported on all platforms.\n\nThis collector supports collecting metrics from multiple instances of this integration, including remote instances.\n\n\n### Default Behavior\n\n#### Auto-Detection\n\nBy default, it detects instances running on localhost by trying to connect as root and netdata using known PostgreSQL TCP and UNIX sockets:\n\n- 127.0.0.1:5432\n- /var/run/postgresql/\n\n\n#### Limits\n\nTable and index metrics are not collected for databases with more than 50 tables or 250 indexes.\nThese limits can be changed in the configuration file.\n\n\n#### Performance Impact\n\nThe default configuration for this integration is not expected to impose a significant performance impact on the system.\n",
- "setup": "## Setup\n\n### Prerequisites\n\n#### Create netdata user\n\nCreate a user with granted `pg_monitor`\nor `pg_read_all_stat` [built-in role](https://www.postgresql.org/docs/current/predefined-roles.html).\n\nTo create the `netdata` user with these permissions, execute the following in the psql session, as a user with CREATEROLE privileges:\n\n```postgresql\nCREATE USER netdata;\nGRANT pg_monitor TO netdata;\n```\n\nAfter creating the new user, restart the Netdata agent with `sudo systemctl restart netdata`, or\nthe [appropriate method](/docs/netdata-agent/start-stop-restart.md) for your\nsystem.\n\n\n\n### Configuration\n\n#### File\n\nThe configuration file name for this integration is `go.d/postgres.conf`.\n\n\nYou can edit the configuration file using the [`edit-config`](/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the\nNetdata [config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).\n\n```bash\ncd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata\nsudo ./edit-config go.d/postgres.conf\n```\n#### Options\n\nThe following options can be defined globally: update_every, autodetection_retry.\n\n\n| Name | Description | Default | Required |\n|:----|:-----------|:-------|:--------:|\n| update_every | Data collection frequency. | 5 | no |\n| autodetection_retry | Recheck interval in seconds. Zero means no recheck will be scheduled. | 0 | no |\n| dsn | Postgres server DSN (Data Source Name). See [DSN syntax](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING). | postgres://postgres:postgres@127.0.0.1:5432/postgres | yes |\n| timeout | Query timeout in seconds. | 2 | no |\n| collect_databases_matching | Databases selector. Determines which database metrics will be collected. Syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/src/go/pkg/matcher#simple-patterns-matcher). | | no |\n| max_db_tables | Maximum number of tables in the database. Table metrics will not be collected for databases that have more tables than max_db_tables. 0 means no limit. | 50 | no |\n| max_db_indexes | Maximum number of indexes in the database. Index metrics will not be collected for databases that have more indexes than max_db_indexes. 0 means no limit. | 250 | no |\n\n#### Examples\n\n##### TCP socket\n\nAn example configuration.\n\n```yaml\njobs:\n - name: local\n dsn: 'postgresql://netdata@127.0.0.1:5432/postgres'\n\n```\n##### Unix socket\n\nAn example configuration.\n\n```yaml\njobs:\n - name: local\n dsn: 'host=/var/run/postgresql dbname=postgres user=netdata'\n\n```\n##### Multi-instance\n\n> **Note**: When you define multiple jobs, their names must be unique.\n\nLocal and remote instances.\n\n\n```yaml\njobs:\n - name: local\n dsn: 'postgresql://netdata@127.0.0.1:5432/postgres'\n\n - name: remote\n dsn: 'postgresql://netdata@203.0.113.0:5432/postgres'\n\n```\n",
+ "setup": "## Setup\n\n### Prerequisites\n\n#### Create netdata user\n\nCreate a user with granted `pg_monitor`\nor `pg_read_all_stat` [built-in role](https://www.postgresql.org/docs/current/predefined-roles.html).\n\nTo create the `netdata` user with these permissions, execute the following in the psql session, as a user with CREATEROLE privileges:\n\n```postgresql\nCREATE USER netdata;\nGRANT pg_monitor TO netdata;\n```\n\nAfter creating the new user, restart the Netdata Agent with `sudo systemctl restart netdata`, or\nthe [appropriate method](/docs/netdata-agent/start-stop-restart.md) for your\nsystem.\n\n\n\n### Configuration\n\n#### File\n\nThe configuration file name for this integration is `go.d/postgres.conf`.\n\n\nYou can edit the configuration file using the [`edit-config`](/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the\nNetdata [config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).\n\n```bash\ncd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata\nsudo ./edit-config go.d/postgres.conf\n```\n#### Options\n\nThe following options can be defined globally: update_every, autodetection_retry.\n\n\n| Name | Description | Default | Required |\n|:----|:-----------|:-------|:--------:|\n| update_every | Data collection frequency. | 5 | no |\n| autodetection_retry | Recheck interval in seconds. Zero means no recheck will be scheduled. | 0 | no |\n| dsn | Postgres server DSN (Data Source Name). See [DSN syntax](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING). | postgres://postgres:postgres@127.0.0.1:5432/postgres | yes |\n| timeout | Query timeout in seconds. | 2 | no |\n| collect_databases_matching | Databases selector. Determines which database metrics will be collected. Syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/src/go/pkg/matcher#simple-patterns-matcher). | | no |\n| max_db_tables | Maximum number of tables in the database. Table metrics will not be collected for databases that have more tables than max_db_tables. 0 means no limit. | 50 | no |\n| max_db_indexes | Maximum number of indexes in the database. Index metrics will not be collected for databases that have more indexes than max_db_indexes. 0 means no limit. | 250 | no |\n\n#### Examples\n\n##### TCP socket\n\nAn example configuration.\n\n```yaml\njobs:\n - name: local\n dsn: 'postgresql://netdata@127.0.0.1:5432/postgres'\n\n```\n##### Unix socket\n\nAn example configuration.\n\n```yaml\njobs:\n - name: local\n dsn: 'host=/var/run/postgresql dbname=postgres user=netdata'\n\n```\n##### Multi-instance\n\n> **Note**: When you define multiple jobs, their names must be unique.\n\nLocal and remote instances.\n\n\n```yaml\njobs:\n - name: local\n dsn: 'postgresql://netdata@127.0.0.1:5432/postgres'\n\n - name: remote\n dsn: 'postgresql://netdata@203.0.113.0:5432/postgres'\n\n```\n",
"troubleshooting": "## Troubleshooting\n\n### Debug Mode\n\n**Important**: Debug mode is not supported for data collection jobs created via the UI using the Dyncfg feature.\n\nTo troubleshoot issues with the `postgres` collector, run the `go.d.plugin` with the debug option enabled. The output\nshould give you clues as to why the collector isn't working.\n\n- Navigate to the `plugins.d` directory, usually at `/usr/libexec/netdata/plugins.d/`. If that's not the case on\n your system, open `netdata.conf` and look for the `plugins` setting under `[directories]`.\n\n ```bash\n cd /usr/libexec/netdata/plugins.d/\n ```\n\n- Switch to the `netdata` user.\n\n ```bash\n sudo -u netdata -s\n ```\n\n- Run the `go.d.plugin` to debug the collector:\n\n ```bash\n ./go.d.plugin -d -m postgres\n ```\n\n### Getting Logs\n\nIf you're encountering problems with the `postgres` collector, follow these steps to retrieve logs and identify potential issues:\n\n- **Run the command** specific to your system (systemd, non-systemd, or Docker container).\n- **Examine the output** for any warnings or error messages that might indicate issues. These messages should provide clues about the root cause of the problem.\n\n#### System with systemd\n\nUse the following command to view logs generated since the last Netdata service restart:\n\n```bash\njournalctl _SYSTEMD_INVOCATION_ID=\"$(systemctl show --value --property=InvocationID netdata)\" --namespace=netdata --grep postgres\n```\n\n#### System without systemd\n\nLocate the collector log file, typically at `/var/log/netdata/collector.log`, and use `grep` to filter for collector's name:\n\n```bash\ngrep postgres /var/log/netdata/collector.log\n```\n\n**Note**: This method shows logs from all restarts. Focus on the **latest entries** for troubleshooting current issues.\n\n#### Docker Container\n\nIf your Netdata runs in a Docker container named \"netdata\" (replace if different), use this command:\n\n```bash\ndocker logs netdata 2>&1 | grep postgres\n```\n\n",
"alerts": "## Alerts\n\n\nThe following alerts are available:\n\n| Alert name | On metric | Description |\n|:------------|:----------|:------------|\n| [ postgres_total_connection_utilization ](https://github.com/netdata/netdata/blob/master/src/health/health.d/postgres.conf) | postgres.connections_utilization | average total connection utilization over the last minute |\n| [ postgres_acquired_locks_utilization ](https://github.com/netdata/netdata/blob/master/src/health/health.d/postgres.conf) | postgres.locks_utilization | average acquired locks utilization over the last minute |\n| [ postgres_txid_exhaustion_perc ](https://github.com/netdata/netdata/blob/master/src/health/health.d/postgres.conf) | postgres.txid_exhaustion_perc | percent towards TXID wraparound |\n| [ postgres_db_cache_io_ratio ](https://github.com/netdata/netdata/blob/master/src/health/health.d/postgres.conf) | postgres.db_cache_io_ratio | average cache hit ratio in db ${label:database} over the last minute |\n| [ postgres_db_transactions_rollback_ratio ](https://github.com/netdata/netdata/blob/master/src/health/health.d/postgres.conf) | postgres.db_cache_io_ratio | average aborted transactions percentage in db ${label:database} over the last five minutes |\n| [ postgres_db_deadlocks_rate ](https://github.com/netdata/netdata/blob/master/src/health/health.d/postgres.conf) | postgres.db_deadlocks_rate | number of deadlocks detected in db ${label:database} in the last minute |\n| [ postgres_table_cache_io_ratio ](https://github.com/netdata/netdata/blob/master/src/health/health.d/postgres.conf) | postgres.table_cache_io_ratio | average cache hit ratio in db ${label:database} table ${label:table} over the last minute |\n| [ postgres_table_index_cache_io_ratio ](https://github.com/netdata/netdata/blob/master/src/health/health.d/postgres.conf) | postgres.table_index_cache_io_ratio | average index cache hit ratio in db ${label:database} table ${label:table} over the last minute |\n| [ postgres_table_toast_cache_io_ratio ](https://github.com/netdata/netdata/blob/master/src/health/health.d/postgres.conf) | postgres.table_toast_cache_io_ratio | average TOAST hit ratio in db ${label:database} table ${label:table} over the last minute |\n| [ postgres_table_toast_index_cache_io_ratio ](https://github.com/netdata/netdata/blob/master/src/health/health.d/postgres.conf) | postgres.table_toast_index_cache_io_ratio | average index TOAST hit ratio in db ${label:database} table ${label:table} over the last minute |\n| [ postgres_table_bloat_size_perc ](https://github.com/netdata/netdata/blob/master/src/health/health.d/postgres.conf) | postgres.table_bloat_size_perc | bloat size percentage in db ${label:database} table ${label:table} |\n| [ postgres_table_last_autovacuum_time ](https://github.com/netdata/netdata/blob/master/src/health/health.d/postgres.conf) | postgres.table_autovacuum_since_time | time elapsed since db ${label:database} table ${label:table} was vacuumed by the autovacuum daemon |\n| [ postgres_table_last_autoanalyze_time ](https://github.com/netdata/netdata/blob/master/src/health/health.d/postgres.conf) | postgres.table_autoanalyze_since_time | time elapsed since db ${label:database} table ${label:table} was analyzed by the autovacuum daemon |\n| [ postgres_index_bloat_size_perc ](https://github.com/netdata/netdata/blob/master/src/health/health.d/postgres.conf) | postgres.index_bloat_size_perc | bloat size percentage in db ${label:database} table ${label:table} index ${label:index} |\n",
"metrics": "## Metrics\n\nMetrics grouped by *scope*.\n\nThe scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.\n\n\n\n### Per PostgreSQL instance\n\nThese metrics refer to the entire monitored application.\n\nThis scope has no labels.\n\nMetrics:\n\n| Metric | Dimensions | Unit |\n|:------|:----------|:----|\n| postgres.connections_utilization | used | percentage |\n| postgres.connections_usage | available, used | connections |\n| postgres.connections_state_count | active, idle, idle_in_transaction, idle_in_transaction_aborted, disabled | connections |\n| postgres.transactions_duration | a dimension per bucket | transactions/s |\n| postgres.queries_duration | a dimension per bucket | queries/s |\n| postgres.locks_utilization | used | percentage |\n| postgres.checkpoints_rate | scheduled, requested | checkpoints/s |\n| postgres.checkpoints_time | write, sync | milliseconds |\n| postgres.bgwriter_halts_rate | maxwritten | events/s |\n| postgres.buffers_io_rate | checkpoint, backend, bgwriter | B/s |\n| postgres.buffers_backend_fsync_rate | fsync | calls/s |\n| postgres.buffers_allocated_rate | allocated | B/s |\n| postgres.wal_io_rate | write | B/s |\n| postgres.wal_files_count | written, recycled | files |\n| postgres.wal_archiving_files_count | ready, done | files/s |\n| postgres.autovacuum_workers_count | analyze, vacuum_analyze, vacuum, vacuum_freeze, brin_summarize | workers |\n| postgres.txid_exhaustion_towards_autovacuum_perc | emergency_autovacuum | percentage |\n| postgres.txid_exhaustion_perc | txid_exhaustion | percentage |\n| postgres.txid_exhaustion_oldest_txid_num | xid | xid |\n| postgres.catalog_relations_count | ordinary_table, index, sequence, toast_table, view, materialized_view, composite_type, foreign_table, partitioned_table, partitioned_index | relations |\n| postgres.catalog_relations_size | ordinary_table, index, sequence, toast_table, view, materialized_view, composite_type, foreign_table, partitioned_table, partitioned_index | B |\n| postgres.uptime | uptime | seconds |\n| postgres.databases_count | databases | databases |\n\n### Per repl application\n\nThese metrics refer to the replication application.\n\nLabels:\n\n| Label | Description |\n|:-----------|:----------------|\n| application | application name |\n\nMetrics:\n\n| Metric | Dimensions | Unit |\n|:------|:----------|:----|\n| postgres.replication_app_wal_lag_size | sent_lag, write_lag, flush_lag, replay_lag | B |\n| postgres.replication_app_wal_lag_time | write_lag, flush_lag, replay_lag | seconds |\n\n### Per repl slot\n\nThese metrics refer to the replication slot.\n\nLabels:\n\n| Label | Description |\n|:-----------|:----------------|\n| slot | replication slot name |\n\nMetrics:\n\n| Metric | Dimensions | Unit |\n|:------|:----------|:----|\n| postgres.replication_slot_files_count | wal_keep, pg_replslot_files | files |\n\n### Per database\n\nThese metrics refer to the database.\n\nLabels:\n\n| Label | Description |\n|:-----------|:----------------|\n| database | database name |\n\nMetrics:\n\n| Metric | Dimensions | Unit |\n|:------|:----------|:----|\n| postgres.db_transactions_ratio | committed, rollback | percentage |\n| postgres.db_transactions_rate | committed, rollback | transactions/s |\n| postgres.db_connections_utilization | used | percentage |\n| postgres.db_connections_count | connections | connections |\n| postgres.db_cache_io_ratio | miss | percentage |\n| postgres.db_io_rate | memory, disk | B/s |\n| 
postgres.db_ops_fetched_rows_ratio | fetched | percentage |\n| postgres.db_ops_read_rows_rate | returned, fetched | rows/s |\n| postgres.db_ops_write_rows_rate | inserted, deleted, updated | rows/s |\n| postgres.db_conflicts_rate | conflicts | queries/s |\n| postgres.db_conflicts_reason_rate | tablespace, lock, snapshot, bufferpin, deadlock | queries/s |\n| postgres.db_deadlocks_rate | deadlocks | deadlocks/s |\n| postgres.db_locks_held_count | access_share, row_share, row_exclusive, share_update, share, share_row_exclusive, exclusive, access_exclusive | locks |\n| postgres.db_locks_awaited_count | access_share, row_share, row_exclusive, share_update, share, share_row_exclusive, exclusive, access_exclusive | locks |\n| postgres.db_temp_files_created_rate | created | files/s |\n| postgres.db_temp_files_io_rate | written | B/s |\n| postgres.db_size | size | B |\n\n### Per table\n\nThese metrics refer to the database table.\n\nLabels:\n\n| Label | Description |\n|:-----------|:----------------|\n| database | database name |\n| schema | schema name |\n| table | table name |\n| parent_table | parent table name |\n\nMetrics:\n\n| Metric | Dimensions | Unit |\n|:------|:----------|:----|\n| postgres.table_rows_dead_ratio | dead | percentage |\n| postgres.table_rows_count | live, dead | rows |\n| postgres.table_ops_rows_rate | inserted, deleted, updated | rows/s |\n| postgres.table_ops_rows_hot_ratio | hot | percentage |\n| postgres.table_ops_rows_hot_rate | hot | rows/s |\n| postgres.table_cache_io_ratio | miss | percentage |\n| postgres.table_io_rate | memory, disk | B/s |\n| postgres.table_index_cache_io_ratio | miss | percentage |\n| postgres.table_index_io_rate | memory, disk | B/s |\n| postgres.table_toast_cache_io_ratio | miss | percentage |\n| postgres.table_toast_io_rate | memory, disk | B/s |\n| postgres.table_toast_index_cache_io_ratio | miss | percentage |\n| postgres.table_toast_index_io_rate | memory, disk | B/s |\n| postgres.table_scans_rate | index, sequential | scans/s |\n| postgres.table_scans_rows_rate | index, sequential | rows/s |\n| postgres.table_autovacuum_since_time | time | seconds |\n| postgres.table_vacuum_since_time | time | seconds |\n| postgres.table_autoanalyze_since_time | time | seconds |\n| postgres.table_analyze_since_time | time | seconds |\n| postgres.table_null_columns | null | columns |\n| postgres.table_size | size | B |\n| postgres.table_bloat_size_perc | bloat | percentage |\n| postgres.table_bloat_size | bloat | B |\n\n### Per index\n\nThese metrics refer to the table index.\n\nLabels:\n\n| Label | Description |\n|:-----------|:----------------|\n| database | database name |\n| schema | schema name |\n| table | table name |\n| parent_table | parent table name |\n| index | index name |\n\nMetrics:\n\n| Metric | Dimensions | Unit |\n|:------|:----------|:----|\n| postgres.index_size | size | B |\n| postgres.index_bloat_size_perc | bloat | percentage |\n| postgres.index_bloat_size | bloat | B |\n| postgres.index_usage_status | used, unused | status |\n\n",
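To spot-check that the collector is actually populating these charts, you can query one of them from the local Agent's data API. A minimal sketch, assuming the Agent's API is reachable on the default `localhost:19999` and using the `postgres.connections_utilization` chart listed above:

```bash
# Fetch the last 60 seconds of the connections utilization chart as JSON.
curl -s "http://localhost:19999/api/v1/data?chart=postgres.connections_utilization&after=-60"
```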
@@ -17044,8 +17044,8 @@
],
"most_popular": false
},
- "overview": "# Tomcat\n\nPlugin: go.d.plugin\nModule: tomcat\n\n## Overview\n\nThis collector monitors Tomcat metrics about bandwidth, processing time, threads and more.\n\n\nIt parses the information provided by the [Server Status](https://tomcat.apache.org/tomcat-10.0-doc/manager-howto.html#Server_Status) HTTP endpoint.\n\n\nThis collector is supported on all platforms.\n\nThis collector supports collecting metrics from multiple instances of this integration, including remote instances.\n\nBy default, this Tomcat collector cannot access the server's status page. To enable data collection, you will need to configure access credentials with appropriate permissions.\n\n\n### Default Behavior\n\n#### Auto-Detection\n\nIf the Netdata agent and Tomcat are on the same host, the collector will attempt to connect to the Tomcat server's status page at `http://localhost:8080/manager/status?XML=true`.\n\n\n#### Limits\n\nThe default configuration for this integration does not impose any limits on data collection.\n\n#### Performance Impact\n\nThe default configuration for this integration is not expected to impose a significant performance impact on the system.\n",
- "setup": "## Setup\n\n### Prerequisites\n\n#### Access to Tomcat Status Endpoint\n\nThe Netdata agent needs read-only access to its status endpoint to collect data from the Tomcat server.\n\nYou can achieve this by creating a dedicated user named `netdata` with read-only permissions specifically for accessing the [Server Status](https://tomcat.apache.org/tomcat-10.0-doc/manager-howto.html#Server_Status) endpoint.\n\nOnce you've created the `netdata` user, you'll need to configure the username and password in the collector configuration file.\n\n\n\n### Configuration\n\n#### File\n\nThe configuration file name for this integration is `go.d/tomcat.conf`.\n\n\nYou can edit the configuration file using the [`edit-config`](/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the\nNetdata [config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).\n\n```bash\ncd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata\nsudo ./edit-config go.d/tomcat.conf\n```\n#### Options\n\nThe following options can be defined globally: update_every, autodetection_retry.\n\n\n| Name | Description | Default | Required |\n|:----|:-----------|:-------|:--------:|\n| update_every | Data collection frequency. | 1 | no |\n| autodetection_retry | Recheck interval in seconds. Zero means no recheck will be scheduled. | 0 | no |\n| url | Server URL. | http://127.0.0.1:8080 | yes |\n| timeout | HTTP request timeout. | 1 | no |\n| username | Username for basic HTTP authentication. | | no |\n| password | Password for basic HTTP authentication. | | no |\n| proxy_url | Proxy URL. | | no |\n| proxy_username | Username for proxy basic HTTP authentication. | | no |\n| proxy_password | Password for proxy basic HTTP authentication. | | no |\n| method | HTTP request method. | POST | no |\n| body | HTTP request body. | | no |\n| headers | HTTP request headers. | | no |\n| not_follow_redirects | Redirect handling policy. Controls whether the client follows redirects. | no | no |\n| tls_skip_verify | Server certificate chain and hostname validation policy. Controls whether the client performs this check. | no | no |\n| tls_ca | Certification authority that the client uses when verifying the server's certificates. | | no |\n| tls_cert | Client TLS certificate. | | no |\n| tls_key | Client TLS key. | | no |\n\n#### Examples\n\n##### Basic\n\nA basic example configuration.\n\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1:8080\n username: John\n password: Doe\n\n```\n##### Multi-instance\n\n> **Note**: When you define multiple jobs, their names must be unique.\n\nCollecting metrics from local and remote instances.\n\n\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1:8080\n username: admin1\n password: hackme1\n\n - name: remote\n url: http://192.0.2.1:8080\n username: admin2\n password: hackme2\n\n```\n",
+ "overview": "# Tomcat\n\nPlugin: go.d.plugin\nModule: tomcat\n\n## Overview\n\nThis collector monitors Tomcat metrics about bandwidth, processing time, threads and more.\n\n\nIt parses the information provided by the [Server Status](https://tomcat.apache.org/tomcat-10.0-doc/manager-howto.html#Server_Status) HTTP endpoint.\n\n\nThis collector is supported on all platforms.\n\nThis collector supports collecting metrics from multiple instances of this integration, including remote instances.\n\nBy default, this Tomcat collector cannot access the server's status page. To enable data collection, you will need to configure access credentials with appropriate permissions.\n\n\n### Default Behavior\n\n#### Auto-Detection\n\nIf the Netdata Agent and Tomcat are on the same host, the collector will attempt to connect to the Tomcat server's status page at `http://localhost:8080/manager/status?XML=true`.\n\n\n#### Limits\n\nThe default configuration for this integration does not impose any limits on data collection.\n\n#### Performance Impact\n\nThe default configuration for this integration is not expected to impose a significant performance impact on the system.\n",
+ "setup": "## Setup\n\n### Prerequisites\n\n#### Access to Tomcat Status Endpoint\n\nThe Netdata Agent needs read-only access to its status endpoint to collect data from the Tomcat server.\n\nYou can achieve this by creating a dedicated user named `netdata` with read-only permissions specifically for accessing the [Server Status](https://tomcat.apache.org/tomcat-10.0-doc/manager-howto.html#Server_Status) endpoint.\n\nOnce you've created the `netdata` user, you'll need to configure the username and password in the collector configuration file.\n\n\n\n### Configuration\n\n#### File\n\nThe configuration file name for this integration is `go.d/tomcat.conf`.\n\n\nYou can edit the configuration file using the [`edit-config`](/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the\nNetdata [config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).\n\n```bash\ncd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata\nsudo ./edit-config go.d/tomcat.conf\n```\n#### Options\n\nThe following options can be defined globally: update_every, autodetection_retry.\n\n\n| Name | Description | Default | Required |\n|:----|:-----------|:-------|:--------:|\n| update_every | Data collection frequency. | 1 | no |\n| autodetection_retry | Recheck interval in seconds. Zero means no recheck will be scheduled. | 0 | no |\n| url | Server URL. | http://127.0.0.1:8080 | yes |\n| timeout | HTTP request timeout. | 1 | no |\n| username | Username for basic HTTP authentication. | | no |\n| password | Password for basic HTTP authentication. | | no |\n| proxy_url | Proxy URL. | | no |\n| proxy_username | Username for proxy basic HTTP authentication. | | no |\n| proxy_password | Password for proxy basic HTTP authentication. | | no |\n| method | HTTP request method. | POST | no |\n| body | HTTP request body. | | no |\n| headers | HTTP request headers. | | no |\n| not_follow_redirects | Redirect handling policy. Controls whether the client follows redirects. | no | no |\n| tls_skip_verify | Server certificate chain and hostname validation policy. Controls whether the client performs this check. | no | no |\n| tls_ca | Certification authority that the client uses when verifying the server's certificates. | | no |\n| tls_cert | Client TLS certificate. | | no |\n| tls_key | Client TLS key. | | no |\n\n#### Examples\n\n##### Basic\n\nA basic example configuration.\n\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1:8080\n username: John\n password: Doe\n\n```\n##### Multi-instance\n\n> **Note**: When you define multiple jobs, their names must be unique.\n\nCollecting metrics from local and remote instances.\n\n\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1:8080\n username: admin1\n password: hackme1\n\n - name: remote\n url: http://192.0.2.1:8080\n username: admin2\n password: hackme2\n\n```\n",
"troubleshooting": "## Troubleshooting\n\n### Debug Mode\n\n**Important**: Debug mode is not supported for data collection jobs created via the UI using the Dyncfg feature.\n\nTo troubleshoot issues with the `tomcat` collector, run the `go.d.plugin` with the debug option enabled. The output\nshould give you clues as to why the collector isn't working.\n\n- Navigate to the `plugins.d` directory, usually at `/usr/libexec/netdata/plugins.d/`. If that's not the case on\n your system, open `netdata.conf` and look for the `plugins` setting under `[directories]`.\n\n ```bash\n cd /usr/libexec/netdata/plugins.d/\n ```\n\n- Switch to the `netdata` user.\n\n ```bash\n sudo -u netdata -s\n ```\n\n- Run the `go.d.plugin` to debug the collector:\n\n ```bash\n ./go.d.plugin -d -m tomcat\n ```\n\n### Getting Logs\n\nIf you're encountering problems with the `tomcat` collector, follow these steps to retrieve logs and identify potential issues:\n\n- **Run the command** specific to your system (systemd, non-systemd, or Docker container).\n- **Examine the output** for any warnings or error messages that might indicate issues. These messages should provide clues about the root cause of the problem.\n\n#### System with systemd\n\nUse the following command to view logs generated since the last Netdata service restart:\n\n```bash\njournalctl _SYSTEMD_INVOCATION_ID=\"$(systemctl show --value --property=InvocationID netdata)\" --namespace=netdata --grep tomcat\n```\n\n#### System without systemd\n\nLocate the collector log file, typically at `/var/log/netdata/collector.log`, and use `grep` to filter for collector's name:\n\n```bash\ngrep tomcat /var/log/netdata/collector.log\n```\n\n**Note**: This method shows logs from all restarts. Focus on the **latest entries** for troubleshooting current issues.\n\n#### Docker Container\n\nIf your Netdata runs in a Docker container named \"netdata\" (replace if different), use this command:\n\n```bash\ndocker logs netdata 2>&1 | grep tomcat\n```\n\n",
"alerts": "## Alerts\n\nThere are no alerts configured by default for this integration.\n",
"metrics": "## Metrics\n\nMetrics grouped by *scope*.\n\nThe scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.\n\n\n\n### Per Tomcat instance\n\nThese metrics refer to the entire monitored application.\n\nThis scope has no labels.\n\nMetrics:\n\n| Metric | Dimensions | Unit |\n|:------|:----------|:----|\n| tomcat.jvm_memory_usage | free, used | bytes |\n\n### Per jvm memory pool\n\nThese metrics refer to the JVM memory pool.\n\nLabels:\n\n| Label | Description |\n|:-----------|:----------------|\n| mempool_name | Memory Pool name. |\n| mempool_type | Memory Pool type. |\n\nMetrics:\n\n| Metric | Dimensions | Unit |\n|:------|:----------|:----|\n| tomcat.jvm_mem_pool_memory_usage | commited, used, max | bytes |\n\n### Per connector\n\nThese metrics refer to the connector.\n\nLabels:\n\n| Label | Description |\n|:-----------|:----------------|\n| connector_name | Connector name. |\n\nMetrics:\n\n| Metric | Dimensions | Unit |\n|:------|:----------|:----|\n| tomcat.connector_requests | requests | requests/s |\n| tomcat.connector_bandwidth | received, sent | bytes/s |\n| tomcat.connector_requests_processing_time | processing_time | milliseconds |\n| tomcat.connector_errors | errors | errors/s |\n| tomcat.connector_request_threads | idle, busy | threads |\n\n",
@@ -17576,7 +17576,7 @@
}
}
},
- "overview": "# Active Directory\n\nPlugin: go.d.plugin\nModule: windows\n\n## Overview\n\n**Deprecation Notice**: This collector is no longer the recommended method for Windows monitoring and will be removed in a future release.\n\nThe official Netdata Agent for Windows now provides a robust and user-friendly solution for real-time system and application performance monitoring. By installing Netdata on your Windows host, you'll gain access to a wide range of metrics and visualizations without the need for additional collectors or complex configurations.\n\nTo get started with Netdata on Windows, see the [Netdata Windows Installer](/packaging/windows/WINDOWS_INSTALLER.md).\n\n---\n\nThis collector monitors the performance of Windows machines, collects both host metrics and metrics from various Windows applications (e.g. Active Directory, MSSQL).\n\n\nIt collect metrics by periodically sending HTTP requests to [Prometheus exporter for Windows machines](https://github.com/prometheus-community/windows_exporter), a native Windows agent running on each host.\n\n\nThis collector is supported on all platforms.\n\nThis collector supports collecting metrics from multiple instances of this integration, including remote instances.\n\n\n### Default Behavior\n\n#### Auto-Detection\n\nThis integration doesn't support auto-detection.\n\n#### Limits\n\nThe default configuration for this integration does not impose any limits on data collection.\n\n#### Performance Impact\n\nData collection affects the CPU usage of the Windows host. CPU usage depends on the frequency of data collection and the [enabled collectors](https://github.com/prometheus-community/windows_exporter#collectors).\n\n",
+ "overview": "# Active Directory\n\nPlugin: go.d.plugin\nModule: windows\n\n## Overview\n\n**Deprecation Notice**: This collector is no longer the recommended method for Windows monitoring and will be removed in a future release.\n\nThe official Netdata Agent for Windows now provides a robust and user-friendly solution for real-time system and application performance monitoring. By installing Netdata on your Windows host, you'll gain access to a wide range of metrics and visualizations without the need for additional collectors or complex configurations.\n\nTo get started with Netdata on Windows, see the [Netdata Windows Installer](/packaging/windows/WINDOWS_INSTALLER.md).\n\n---\n\nThis collector monitors the performance of Windows machines, collects both host metrics and metrics from various Windows applications (e.g. Active Directory, MSSQL).\n\n\nIt collect metrics by periodically sending HTTP requests to [Prometheus exporter for Windows machines](https://github.com/prometheus-community/windows_exporter), a native Windows Agent running on each host.\n\n\nThis collector is supported on all platforms.\n\nThis collector supports collecting metrics from multiple instances of this integration, including remote instances.\n\n\n### Default Behavior\n\n#### Auto-Detection\n\nThis integration doesn't support auto-detection.\n\n#### Limits\n\nThe default configuration for this integration does not impose any limits on data collection.\n\n#### Performance Impact\n\nData collection affects the CPU usage of the Windows host. CPU usage depends on the frequency of data collection and the [enabled collectors](https://github.com/prometheus-community/windows_exporter#collectors).\n\n",
"setup": "## Setup\n\n### Prerequisites\n\n#### Install Windows exporter\n\nTo install the Windows exporter, follow the [official installation guide](https://github.com/prometheus-community/windows_exporter#installation).\n\n\n\n### Configuration\n\n#### File\n\nThe configuration file name for this integration is `go.d/windows.conf`.\n\n\nYou can edit the configuration file using the [`edit-config`](/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the\nNetdata [config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).\n\n```bash\ncd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata\nsudo ./edit-config go.d/windows.conf\n```\n#### Options\n\nThe following options can be defined globally: update_every, autodetection_retry.\n\n\n| Name | Description | Default | Required |\n|:----|:-----------|:-------|:--------:|\n| update_every | Data collection frequency. | 1 | no |\n| autodetection_retry | Recheck interval in seconds. Zero means no recheck will be scheduled. | 0 | no |\n| url | Server URL. | | yes |\n| timeout | HTTP request timeout. | 1 | no |\n| username | Username for basic HTTP authentication. | | no |\n| password | Password for basic HTTP authentication. | | no |\n| proxy_url | Proxy URL. | | no |\n| proxy_username | Username for proxy basic HTTP authentication. | | no |\n| proxy_password | Password for proxy basic HTTP authentication. | | no |\n| method | HTTP request method. | GET | no |\n| body | HTTP request body. | | no |\n| headers | HTTP request headers. | | no |\n| not_follow_redirects | Redirect handling policy. Controls whether the client follows redirects. | no | no |\n| tls_skip_verify | Server certificate chain and hostname validation policy. Controls whether the client performs this check. | no | no |\n| tls_ca | Certification authority that the client uses when verifying the server's certificates. | | no |\n| tls_cert | Client TLS certificate. | | no |\n| tls_key | Client TLS key. 
| | no |\n\n#### Examples\n\n##### Basic\n\nA basic example configuration.\n\n```yaml\njobs:\n - name: win_server\n url: http://192.0.2.1:9182/metrics\n\n```\n##### HTTP authentication\n\nBasic HTTP authentication.\n\n```yaml\njobs:\n - name: win_server\n url: http://192.0.2.1:9182/metrics\n username: username\n password: password\n\n```\n##### HTTPS with self-signed certificate\n\nDo not validate server certificate chain and hostname.\n\n```yaml\njobs:\n - name: win_server\n url: https://192.0.2.1:9182/metrics\n tls_skip_verify: yes\n\n```\n##### Virtual Node\n\nThe Virtual Node functionality allows you to define nodes in configuration files and treat them as ordinary nodes in all interfaces, panels, tabs, filters, etc.\nYou can create a virtual node for all your Windows machines and control them as separate entities.\n\nTo make your Windows server a virtual node, you need to define virtual nodes in `/etc/netdata/vnodes/vnodes.conf`:\n\n> **Note**: To create a valid guid, you can use the `uuidgen` command on Linux, or the `[guid]::NewGuid()` command in PowerShell on Windows.\n\n```yaml\n# /etc/netdata/vnodes/vnodes.conf\n- hostname: win_server\n guid: \n```\n\n\n```yaml\njobs:\n - name: win_server\n vnode: win_server\n url: http://192.0.2.1:9182/metrics\n\n```\n##### Multi-instance\n\n> **Note**: When you define multiple jobs, their names must be unique.\n\nCollecting metrics from multiple remote instances.\n\n\n```yaml\njobs:\n - name: win_server1\n url: http://192.0.2.1:9182/metrics\n\n - name: win_server2\n url: http://192.0.2.2:9182/metrics\n\n```\n",
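Before adding a job, you can verify from the Netdata host that the Windows exporter endpoint is reachable. A minimal sketch, using the hypothetical exporter address from the examples above (substitute the URL you plan to put in `go.d/windows.conf`):

```bash
# Print the first few exposed Prometheus metrics; an empty or error
# response usually points to a firewall or exporter configuration issue.
curl -s "http://192.0.2.1:9182/metrics" | head -n 20
```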
"troubleshooting": "## Troubleshooting\n\n### Debug Mode\n\n**Important**: Debug mode is not supported for data collection jobs created via the UI using the Dyncfg feature.\n\nTo troubleshoot issues with the `windows` collector, run the `go.d.plugin` with the debug option enabled. The output\nshould give you clues as to why the collector isn't working.\n\n- Navigate to the `plugins.d` directory, usually at `/usr/libexec/netdata/plugins.d/`. If that's not the case on\n your system, open `netdata.conf` and look for the `plugins` setting under `[directories]`.\n\n ```bash\n cd /usr/libexec/netdata/plugins.d/\n ```\n\n- Switch to the `netdata` user.\n\n ```bash\n sudo -u netdata -s\n ```\n\n- Run the `go.d.plugin` to debug the collector:\n\n ```bash\n ./go.d.plugin -d -m windows\n ```\n\n### Getting Logs\n\nIf you're encountering problems with the `windows` collector, follow these steps to retrieve logs and identify potential issues:\n\n- **Run the command** specific to your system (systemd, non-systemd, or Docker container).\n- **Examine the output** for any warnings or error messages that might indicate issues. These messages should provide clues about the root cause of the problem.\n\n#### System with systemd\n\nUse the following command to view logs generated since the last Netdata service restart:\n\n```bash\njournalctl _SYSTEMD_INVOCATION_ID=\"$(systemctl show --value --property=InvocationID netdata)\" --namespace=netdata --grep windows\n```\n\n#### System without systemd\n\nLocate the collector log file, typically at `/var/log/netdata/collector.log`, and use `grep` to filter for collector's name:\n\n```bash\ngrep windows /var/log/netdata/collector.log\n```\n\n**Note**: This method shows logs from all restarts. Focus on the **latest entries** for troubleshooting current issues.\n\n#### Docker Container\n\nIf your Netdata runs in a Docker container named \"netdata\" (replace if different), use this command:\n\n```bash\ndocker logs netdata 2>&1 | grep windows\n```\n\n",
"alerts": "## Alerts\n\n\nThe following alerts are available:\n\n| Alert name | On metric | Description |\n|:------------|:----------|:------------|\n| [ windows_10min_cpu_usage ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.cpu_utilization_total | average CPU utilization over the last 10 minutes |\n| [ windows_ram_in_use ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.memory_utilization | memory utilization |\n| [ windows_inbound_packets_discarded ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_discarded | number of inbound discarded packets for the network interface in the last 10 minutes |\n| [ windows_outbound_packets_discarded ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_discarded | number of outbound discarded packets for the network interface in the last 10 minutes |\n| [ windows_inbound_packets_errors ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_errors | number of inbound errors for the network interface in the last 10 minutes |\n| [ windows_outbound_packets_errors ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_errors | number of outbound errors for the network interface in the last 10 minutes |\n| [ windows_disk_in_use ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.logical_disk_space_usage | disk space utilization |\n",
@@ -17616,7 +17616,7 @@
}
}
},
- "overview": "# HyperV\n\nPlugin: go.d.plugin\nModule: windows\n\n## Overview\n\n**Deprecation Notice**: This collector is no longer the recommended method for Windows monitoring and will be removed in a future release.\n\nThe official Netdata Agent for Windows now provides a robust and user-friendly solution for real-time system and application performance monitoring. By installing Netdata on your Windows host, you'll gain access to a wide range of metrics and visualizations without the need for additional collectors or complex configurations.\n\nTo get started with Netdata on Windows, see the [Netdata Windows Installer](/packaging/windows/WINDOWS_INSTALLER.md).\n\n---\n\nThis collector monitors the performance of Windows machines, collects both host metrics and metrics from various Windows applications (e.g. Active Directory, MSSQL).\n\n\nIt collect metrics by periodically sending HTTP requests to [Prometheus exporter for Windows machines](https://github.com/prometheus-community/windows_exporter), a native Windows agent running on each host.\n\n\nThis collector is supported on all platforms.\n\nThis collector supports collecting metrics from multiple instances of this integration, including remote instances.\n\n\n### Default Behavior\n\n#### Auto-Detection\n\nThis integration doesn't support auto-detection.\n\n#### Limits\n\nThe default configuration for this integration does not impose any limits on data collection.\n\n#### Performance Impact\n\nData collection affects the CPU usage of the Windows host. CPU usage depends on the frequency of data collection and the [enabled collectors](https://github.com/prometheus-community/windows_exporter#collectors).\n\n",
+ "overview": "# HyperV\n\nPlugin: go.d.plugin\nModule: windows\n\n## Overview\n\n**Deprecation Notice**: This collector is no longer the recommended method for Windows monitoring and will be removed in a future release.\n\nThe official Netdata Agent for Windows now provides a robust and user-friendly solution for real-time system and application performance monitoring. By installing Netdata on your Windows host, you'll gain access to a wide range of metrics and visualizations without the need for additional collectors or complex configurations.\n\nTo get started with Netdata on Windows, see the [Netdata Windows Installer](/packaging/windows/WINDOWS_INSTALLER.md).\n\n---\n\nThis collector monitors the performance of Windows machines, collects both host metrics and metrics from various Windows applications (e.g. Active Directory, MSSQL).\n\n\nIt collect metrics by periodically sending HTTP requests to [Prometheus exporter for Windows machines](https://github.com/prometheus-community/windows_exporter), a native Windows Agent running on each host.\n\n\nThis collector is supported on all platforms.\n\nThis collector supports collecting metrics from multiple instances of this integration, including remote instances.\n\n\n### Default Behavior\n\n#### Auto-Detection\n\nThis integration doesn't support auto-detection.\n\n#### Limits\n\nThe default configuration for this integration does not impose any limits on data collection.\n\n#### Performance Impact\n\nData collection affects the CPU usage of the Windows host. CPU usage depends on the frequency of data collection and the [enabled collectors](https://github.com/prometheus-community/windows_exporter#collectors).\n\n",
"setup": "## Setup\n\n### Prerequisites\n\n#### Install Windows exporter\n\nTo install the Windows exporter, follow the [official installation guide](https://github.com/prometheus-community/windows_exporter#installation).\n\n\n\n### Configuration\n\n#### File\n\nThe configuration file name for this integration is `go.d/windows.conf`.\n\n\nYou can edit the configuration file using the [`edit-config`](/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the\nNetdata [config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).\n\n```bash\ncd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata\nsudo ./edit-config go.d/windows.conf\n```\n#### Options\n\nThe following options can be defined globally: update_every, autodetection_retry.\n\n\n| Name | Description | Default | Required |\n|:----|:-----------|:-------|:--------:|\n| update_every | Data collection frequency. | 1 | no |\n| autodetection_retry | Recheck interval in seconds. Zero means no recheck will be scheduled. | 0 | no |\n| url | Server URL. | | yes |\n| timeout | HTTP request timeout. | 1 | no |\n| username | Username for basic HTTP authentication. | | no |\n| password | Password for basic HTTP authentication. | | no |\n| proxy_url | Proxy URL. | | no |\n| proxy_username | Username for proxy basic HTTP authentication. | | no |\n| proxy_password | Password for proxy basic HTTP authentication. | | no |\n| method | HTTP request method. | GET | no |\n| body | HTTP request body. | | no |\n| headers | HTTP request headers. | | no |\n| not_follow_redirects | Redirect handling policy. Controls whether the client follows redirects. | no | no |\n| tls_skip_verify | Server certificate chain and hostname validation policy. Controls whether the client performs this check. | no | no |\n| tls_ca | Certification authority that the client uses when verifying the server's certificates. | | no |\n| tls_cert | Client TLS certificate. | | no |\n| tls_key | Client TLS key. 
| | no |\n\n#### Examples\n\n##### Basic\n\nA basic example configuration.\n\n```yaml\njobs:\n - name: win_server\n url: http://192.0.2.1:9182/metrics\n\n```\n##### HTTP authentication\n\nBasic HTTP authentication.\n\n```yaml\njobs:\n - name: win_server\n url: http://192.0.2.1:9182/metrics\n username: username\n password: password\n\n```\n##### HTTPS with self-signed certificate\n\nDo not validate server certificate chain and hostname.\n\n```yaml\njobs:\n - name: win_server\n url: https://192.0.2.1:9182/metrics\n tls_skip_verify: yes\n\n```\n##### Virtual Node\n\nThe Virtual Node functionality allows you to define nodes in configuration files and treat them as ordinary nodes in all interfaces, panels, tabs, filters, etc.\nYou can create a virtual node for all your Windows machines and control them as separate entities.\n\nTo make your Windows server a virtual node, you need to define virtual nodes in `/etc/netdata/vnodes/vnodes.conf`:\n\n> **Note**: To create a valid guid, you can use the `uuidgen` command on Linux, or the `[guid]::NewGuid()` command in PowerShell on Windows.\n\n```yaml\n# /etc/netdata/vnodes/vnodes.conf\n- hostname: win_server\n guid: \n```\n\n\n```yaml\njobs:\n - name: win_server\n vnode: win_server\n url: http://192.0.2.1:9182/metrics\n\n```\n##### Multi-instance\n\n> **Note**: When you define multiple jobs, their names must be unique.\n\nCollecting metrics from multiple remote instances.\n\n\n```yaml\njobs:\n - name: win_server1\n url: http://192.0.2.1:9182/metrics\n\n - name: win_server2\n url: http://192.0.2.2:9182/metrics\n\n```\n",
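If you use the Virtual Node setup described above, the vnodes file can be created in one step. A minimal sketch, assuming the stock `/etc/netdata` layout, a node named `win_server` as in the examples, and `uuidgen` being available on the Netdata host:

```bash
# Generate a guid and write a single-entry vnodes.conf for win_server.
guid="$(uuidgen)"
sudo mkdir -p /etc/netdata/vnodes
sudo tee /etc/netdata/vnodes/vnodes.conf > /dev/null <<EOF
- hostname: win_server
  guid: ${guid}
EOF
```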
"troubleshooting": "## Troubleshooting\n\n### Debug Mode\n\n**Important**: Debug mode is not supported for data collection jobs created via the UI using the Dyncfg feature.\n\nTo troubleshoot issues with the `windows` collector, run the `go.d.plugin` with the debug option enabled. The output\nshould give you clues as to why the collector isn't working.\n\n- Navigate to the `plugins.d` directory, usually at `/usr/libexec/netdata/plugins.d/`. If that's not the case on\n your system, open `netdata.conf` and look for the `plugins` setting under `[directories]`.\n\n ```bash\n cd /usr/libexec/netdata/plugins.d/\n ```\n\n- Switch to the `netdata` user.\n\n ```bash\n sudo -u netdata -s\n ```\n\n- Run the `go.d.plugin` to debug the collector:\n\n ```bash\n ./go.d.plugin -d -m windows\n ```\n\n### Getting Logs\n\nIf you're encountering problems with the `windows` collector, follow these steps to retrieve logs and identify potential issues:\n\n- **Run the command** specific to your system (systemd, non-systemd, or Docker container).\n- **Examine the output** for any warnings or error messages that might indicate issues. These messages should provide clues about the root cause of the problem.\n\n#### System with systemd\n\nUse the following command to view logs generated since the last Netdata service restart:\n\n```bash\njournalctl _SYSTEMD_INVOCATION_ID=\"$(systemctl show --value --property=InvocationID netdata)\" --namespace=netdata --grep windows\n```\n\n#### System without systemd\n\nLocate the collector log file, typically at `/var/log/netdata/collector.log`, and use `grep` to filter for collector's name:\n\n```bash\ngrep windows /var/log/netdata/collector.log\n```\n\n**Note**: This method shows logs from all restarts. Focus on the **latest entries** for troubleshooting current issues.\n\n#### Docker Container\n\nIf your Netdata runs in a Docker container named \"netdata\" (replace if different), use this command:\n\n```bash\ndocker logs netdata 2>&1 | grep windows\n```\n\n",
"alerts": "## Alerts\n\n\nThe following alerts are available:\n\n| Alert name | On metric | Description |\n|:------------|:----------|:------------|\n| [ windows_10min_cpu_usage ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.cpu_utilization_total | average CPU utilization over the last 10 minutes |\n| [ windows_ram_in_use ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.memory_utilization | memory utilization |\n| [ windows_inbound_packets_discarded ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_discarded | number of inbound discarded packets for the network interface in the last 10 minutes |\n| [ windows_outbound_packets_discarded ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_discarded | number of outbound discarded packets for the network interface in the last 10 minutes |\n| [ windows_inbound_packets_errors ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_errors | number of inbound errors for the network interface in the last 10 minutes |\n| [ windows_outbound_packets_errors ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_errors | number of outbound errors for the network interface in the last 10 minutes |\n| [ windows_disk_in_use ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.logical_disk_space_usage | disk space utilization |\n",
@@ -17654,7 +17654,7 @@
}
}
},
- "overview": "# MS Exchange\n\nPlugin: go.d.plugin\nModule: windows\n\n## Overview\n\n**Deprecation Notice**: This collector is no longer the recommended method for Windows monitoring and will be removed in a future release.\n\nThe official Netdata Agent for Windows now provides a robust and user-friendly solution for real-time system and application performance monitoring. By installing Netdata on your Windows host, you'll gain access to a wide range of metrics and visualizations without the need for additional collectors or complex configurations.\n\nTo get started with Netdata on Windows, see the [Netdata Windows Installer](/packaging/windows/WINDOWS_INSTALLER.md).\n\n---\n\nThis collector monitors the performance of Windows machines, collects both host metrics and metrics from various Windows applications (e.g. Active Directory, MSSQL).\n\n\nIt collect metrics by periodically sending HTTP requests to [Prometheus exporter for Windows machines](https://github.com/prometheus-community/windows_exporter), a native Windows agent running on each host.\n\n\nThis collector is supported on all platforms.\n\nThis collector supports collecting metrics from multiple instances of this integration, including remote instances.\n\n\n### Default Behavior\n\n#### Auto-Detection\n\nThis integration doesn't support auto-detection.\n\n#### Limits\n\nThe default configuration for this integration does not impose any limits on data collection.\n\n#### Performance Impact\n\nData collection affects the CPU usage of the Windows host. CPU usage depends on the frequency of data collection and the [enabled collectors](https://github.com/prometheus-community/windows_exporter#collectors).\n\n",
+ "overview": "# MS Exchange\n\nPlugin: go.d.plugin\nModule: windows\n\n## Overview\n\n**Deprecation Notice**: This collector is no longer the recommended method for Windows monitoring and will be removed in a future release.\n\nThe official Netdata Agent for Windows now provides a robust and user-friendly solution for real-time system and application performance monitoring. By installing Netdata on your Windows host, you'll gain access to a wide range of metrics and visualizations without the need for additional collectors or complex configurations.\n\nTo get started with Netdata on Windows, see the [Netdata Windows Installer](/packaging/windows/WINDOWS_INSTALLER.md).\n\n---\n\nThis collector monitors the performance of Windows machines, collects both host metrics and metrics from various Windows applications (e.g. Active Directory, MSSQL).\n\n\nIt collect metrics by periodically sending HTTP requests to [Prometheus exporter for Windows machines](https://github.com/prometheus-community/windows_exporter), a native Windows Agent running on each host.\n\n\nThis collector is supported on all platforms.\n\nThis collector supports collecting metrics from multiple instances of this integration, including remote instances.\n\n\n### Default Behavior\n\n#### Auto-Detection\n\nThis integration doesn't support auto-detection.\n\n#### Limits\n\nThe default configuration for this integration does not impose any limits on data collection.\n\n#### Performance Impact\n\nData collection affects the CPU usage of the Windows host. CPU usage depends on the frequency of data collection and the [enabled collectors](https://github.com/prometheus-community/windows_exporter#collectors).\n\n",
"setup": "## Setup\n\n### Prerequisites\n\n#### Install Windows exporter\n\nTo install the Windows exporter, follow the [official installation guide](https://github.com/prometheus-community/windows_exporter#installation).\n\n\n\n### Configuration\n\n#### File\n\nThe configuration file name for this integration is `go.d/windows.conf`.\n\n\nYou can edit the configuration file using the [`edit-config`](/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the\nNetdata [config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).\n\n```bash\ncd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata\nsudo ./edit-config go.d/windows.conf\n```\n#### Options\n\nThe following options can be defined globally: update_every, autodetection_retry.\n\n\n| Name | Description | Default | Required |\n|:----|:-----------|:-------|:--------:|\n| update_every | Data collection frequency. | 1 | no |\n| autodetection_retry | Recheck interval in seconds. Zero means no recheck will be scheduled. | 0 | no |\n| url | Server URL. | | yes |\n| timeout | HTTP request timeout. | 1 | no |\n| username | Username for basic HTTP authentication. | | no |\n| password | Password for basic HTTP authentication. | | no |\n| proxy_url | Proxy URL. | | no |\n| proxy_username | Username for proxy basic HTTP authentication. | | no |\n| proxy_password | Password for proxy basic HTTP authentication. | | no |\n| method | HTTP request method. | GET | no |\n| body | HTTP request body. | | no |\n| headers | HTTP request headers. | | no |\n| not_follow_redirects | Redirect handling policy. Controls whether the client follows redirects. | no | no |\n| tls_skip_verify | Server certificate chain and hostname validation policy. Controls whether the client performs this check. | no | no |\n| tls_ca | Certification authority that the client uses when verifying the server's certificates. | | no |\n| tls_cert | Client TLS certificate. | | no |\n| tls_key | Client TLS key. 
| | no |\n\n#### Examples\n\n##### Basic\n\nA basic example configuration.\n\n```yaml\njobs:\n - name: win_server\n url: http://192.0.2.1:9182/metrics\n\n```\n##### HTTP authentication\n\nBasic HTTP authentication.\n\n```yaml\njobs:\n - name: win_server\n url: http://192.0.2.1:9182/metrics\n username: username\n password: password\n\n```\n##### HTTPS with self-signed certificate\n\nDo not validate server certificate chain and hostname.\n\n```yaml\njobs:\n - name: win_server\n url: https://192.0.2.1:9182/metrics\n tls_skip_verify: yes\n\n```\n##### Virtual Node\n\nThe Virtual Node functionality allows you to define nodes in configuration files and treat them as ordinary nodes in all interfaces, panels, tabs, filters, etc.\nYou can create a virtual node for all your Windows machines and control them as separate entities.\n\nTo make your Windows server a virtual node, you need to define virtual nodes in `/etc/netdata/vnodes/vnodes.conf`:\n\n> **Note**: To create a valid guid, you can use the `uuidgen` command on Linux, or the `[guid]::NewGuid()` command in PowerShell on Windows.\n\n```yaml\n# /etc/netdata/vnodes/vnodes.conf\n- hostname: win_server\n guid: \n```\n\n\n```yaml\njobs:\n - name: win_server\n vnode: win_server\n url: http://192.0.2.1:9182/metrics\n\n```\n##### Multi-instance\n\n> **Note**: When you define multiple jobs, their names must be unique.\n\nCollecting metrics from multiple remote instances.\n\n\n```yaml\njobs:\n - name: win_server1\n url: http://192.0.2.1:9182/metrics\n\n - name: win_server2\n url: http://192.0.2.2:9182/metrics\n\n```\n",
"troubleshooting": "## Troubleshooting\n\n### Debug Mode\n\n**Important**: Debug mode is not supported for data collection jobs created via the UI using the Dyncfg feature.\n\nTo troubleshoot issues with the `windows` collector, run the `go.d.plugin` with the debug option enabled. The output\nshould give you clues as to why the collector isn't working.\n\n- Navigate to the `plugins.d` directory, usually at `/usr/libexec/netdata/plugins.d/`. If that's not the case on\n your system, open `netdata.conf` and look for the `plugins` setting under `[directories]`.\n\n ```bash\n cd /usr/libexec/netdata/plugins.d/\n ```\n\n- Switch to the `netdata` user.\n\n ```bash\n sudo -u netdata -s\n ```\n\n- Run the `go.d.plugin` to debug the collector:\n\n ```bash\n ./go.d.plugin -d -m windows\n ```\n\n### Getting Logs\n\nIf you're encountering problems with the `windows` collector, follow these steps to retrieve logs and identify potential issues:\n\n- **Run the command** specific to your system (systemd, non-systemd, or Docker container).\n- **Examine the output** for any warnings or error messages that might indicate issues. These messages should provide clues about the root cause of the problem.\n\n#### System with systemd\n\nUse the following command to view logs generated since the last Netdata service restart:\n\n```bash\njournalctl _SYSTEMD_INVOCATION_ID=\"$(systemctl show --value --property=InvocationID netdata)\" --namespace=netdata --grep windows\n```\n\n#### System without systemd\n\nLocate the collector log file, typically at `/var/log/netdata/collector.log`, and use `grep` to filter for collector's name:\n\n```bash\ngrep windows /var/log/netdata/collector.log\n```\n\n**Note**: This method shows logs from all restarts. Focus on the **latest entries** for troubleshooting current issues.\n\n#### Docker Container\n\nIf your Netdata runs in a Docker container named \"netdata\" (replace if different), use this command:\n\n```bash\ndocker logs netdata 2>&1 | grep windows\n```\n\n",
"alerts": "## Alerts\n\n\nThe following alerts are available:\n\n| Alert name | On metric | Description |\n|:------------|:----------|:------------|\n| [ windows_10min_cpu_usage ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.cpu_utilization_total | average CPU utilization over the last 10 minutes |\n| [ windows_ram_in_use ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.memory_utilization | memory utilization |\n| [ windows_inbound_packets_discarded ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_discarded | number of inbound discarded packets for the network interface in the last 10 minutes |\n| [ windows_outbound_packets_discarded ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_discarded | number of outbound discarded packets for the network interface in the last 10 minutes |\n| [ windows_inbound_packets_errors ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_errors | number of inbound errors for the network interface in the last 10 minutes |\n| [ windows_outbound_packets_errors ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_errors | number of outbound errors for the network interface in the last 10 minutes |\n| [ windows_disk_in_use ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.logical_disk_space_usage | disk space utilization |\n",
@@ -17694,7 +17694,7 @@
}
}
},
- "overview": "# MS SQL Server\n\nPlugin: go.d.plugin\nModule: windows\n\n## Overview\n\n**Deprecation Notice**: This collector is no longer the recommended method for Windows monitoring and will be removed in a future release.\n\nThe official Netdata Agent for Windows now provides a robust and user-friendly solution for real-time system and application performance monitoring. By installing Netdata on your Windows host, you'll gain access to a wide range of metrics and visualizations without the need for additional collectors or complex configurations.\n\nTo get started with Netdata on Windows, see the [Netdata Windows Installer](/packaging/windows/WINDOWS_INSTALLER.md).\n\n---\n\nThis collector monitors the performance of Windows machines, collects both host metrics and metrics from various Windows applications (e.g. Active Directory, MSSQL).\n\n\nIt collect metrics by periodically sending HTTP requests to [Prometheus exporter for Windows machines](https://github.com/prometheus-community/windows_exporter), a native Windows agent running on each host.\n\n\nThis collector is supported on all platforms.\n\nThis collector supports collecting metrics from multiple instances of this integration, including remote instances.\n\n\n### Default Behavior\n\n#### Auto-Detection\n\nThis integration doesn't support auto-detection.\n\n#### Limits\n\nThe default configuration for this integration does not impose any limits on data collection.\n\n#### Performance Impact\n\nData collection affects the CPU usage of the Windows host. CPU usage depends on the frequency of data collection and the [enabled collectors](https://github.com/prometheus-community/windows_exporter#collectors).\n\n",
+ "overview": "# MS SQL Server\n\nPlugin: go.d.plugin\nModule: windows\n\n## Overview\n\n**Deprecation Notice**: This collector is no longer the recommended method for Windows monitoring and will be removed in a future release.\n\nThe official Netdata Agent for Windows now provides a robust and user-friendly solution for real-time system and application performance monitoring. By installing Netdata on your Windows host, you'll gain access to a wide range of metrics and visualizations without the need for additional collectors or complex configurations.\n\nTo get started with Netdata on Windows, see the [Netdata Windows Installer](/packaging/windows/WINDOWS_INSTALLER.md).\n\n---\n\nThis collector monitors the performance of Windows machines, collects both host metrics and metrics from various Windows applications (e.g. Active Directory, MSSQL).\n\n\nIt collect metrics by periodically sending HTTP requests to [Prometheus exporter for Windows machines](https://github.com/prometheus-community/windows_exporter), a native Windows Agent running on each host.\n\n\nThis collector is supported on all platforms.\n\nThis collector supports collecting metrics from multiple instances of this integration, including remote instances.\n\n\n### Default Behavior\n\n#### Auto-Detection\n\nThis integration doesn't support auto-detection.\n\n#### Limits\n\nThe default configuration for this integration does not impose any limits on data collection.\n\n#### Performance Impact\n\nData collection affects the CPU usage of the Windows host. CPU usage depends on the frequency of data collection and the [enabled collectors](https://github.com/prometheus-community/windows_exporter#collectors).\n\n",
"setup": "## Setup\n\n### Prerequisites\n\n#### Install Windows exporter\n\nTo install the Windows exporter, follow the [official installation guide](https://github.com/prometheus-community/windows_exporter#installation).\n\n\n\n### Configuration\n\n#### File\n\nThe configuration file name for this integration is `go.d/windows.conf`.\n\n\nYou can edit the configuration file using the [`edit-config`](/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the\nNetdata [config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).\n\n```bash\ncd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata\nsudo ./edit-config go.d/windows.conf\n```\n#### Options\n\nThe following options can be defined globally: update_every, autodetection_retry.\n\n\n| Name | Description | Default | Required |\n|:----|:-----------|:-------|:--------:|\n| update_every | Data collection frequency. | 1 | no |\n| autodetection_retry | Recheck interval in seconds. Zero means no recheck will be scheduled. | 0 | no |\n| url | Server URL. | | yes |\n| timeout | HTTP request timeout. | 1 | no |\n| username | Username for basic HTTP authentication. | | no |\n| password | Password for basic HTTP authentication. | | no |\n| proxy_url | Proxy URL. | | no |\n| proxy_username | Username for proxy basic HTTP authentication. | | no |\n| proxy_password | Password for proxy basic HTTP authentication. | | no |\n| method | HTTP request method. | GET | no |\n| body | HTTP request body. | | no |\n| headers | HTTP request headers. | | no |\n| not_follow_redirects | Redirect handling policy. Controls whether the client follows redirects. | no | no |\n| tls_skip_verify | Server certificate chain and hostname validation policy. Controls whether the client performs this check. | no | no |\n| tls_ca | Certification authority that the client uses when verifying the server's certificates. | | no |\n| tls_cert | Client TLS certificate. | | no |\n| tls_key | Client TLS key. 
| | no |\n\n#### Examples\n\n##### Basic\n\nA basic example configuration.\n\n```yaml\njobs:\n - name: win_server\n url: http://192.0.2.1:9182/metrics\n\n```\n##### HTTP authentication\n\nBasic HTTP authentication.\n\n```yaml\njobs:\n - name: win_server\n url: http://192.0.2.1:9182/metrics\n username: username\n password: password\n\n```\n##### HTTPS with self-signed certificate\n\nDo not validate server certificate chain and hostname.\n\n```yaml\njobs:\n - name: win_server\n url: https://192.0.2.1:9182/metrics\n tls_skip_verify: yes\n\n```\n##### Virtual Node\n\nThe Virtual Node functionality allows you to define nodes in configuration files and treat them as ordinary nodes in all interfaces, panels, tabs, filters, etc.\nYou can create a virtual node for all your Windows machines and control them as separate entities.\n\nTo make your Windows server a virtual node, you need to define virtual nodes in `/etc/netdata/vnodes/vnodes.conf`:\n\n> **Note**: To create a valid guid, you can use the `uuidgen` command on Linux, or the `[guid]::NewGuid()` command in PowerShell on Windows.\n\n```yaml\n# /etc/netdata/vnodes/vnodes.conf\n- hostname: win_server\n guid: \n```\n\n\n```yaml\njobs:\n - name: win_server\n vnode: win_server\n url: http://192.0.2.1:9182/metrics\n\n```\n##### Multi-instance\n\n> **Note**: When you define multiple jobs, their names must be unique.\n\nCollecting metrics from multiple remote instances.\n\n\n```yaml\njobs:\n - name: win_server1\n url: http://192.0.2.1:9182/metrics\n\n - name: win_server2\n url: http://192.0.2.2:9182/metrics\n\n```\n",
"troubleshooting": "## Troubleshooting\n\n### Debug Mode\n\n**Important**: Debug mode is not supported for data collection jobs created via the UI using the Dyncfg feature.\n\nTo troubleshoot issues with the `windows` collector, run the `go.d.plugin` with the debug option enabled. The output\nshould give you clues as to why the collector isn't working.\n\n- Navigate to the `plugins.d` directory, usually at `/usr/libexec/netdata/plugins.d/`. If that's not the case on\n your system, open `netdata.conf` and look for the `plugins` setting under `[directories]`.\n\n ```bash\n cd /usr/libexec/netdata/plugins.d/\n ```\n\n- Switch to the `netdata` user.\n\n ```bash\n sudo -u netdata -s\n ```\n\n- Run the `go.d.plugin` to debug the collector:\n\n ```bash\n ./go.d.plugin -d -m windows\n ```\n\n### Getting Logs\n\nIf you're encountering problems with the `windows` collector, follow these steps to retrieve logs and identify potential issues:\n\n- **Run the command** specific to your system (systemd, non-systemd, or Docker container).\n- **Examine the output** for any warnings or error messages that might indicate issues. These messages should provide clues about the root cause of the problem.\n\n#### System with systemd\n\nUse the following command to view logs generated since the last Netdata service restart:\n\n```bash\njournalctl _SYSTEMD_INVOCATION_ID=\"$(systemctl show --value --property=InvocationID netdata)\" --namespace=netdata --grep windows\n```\n\n#### System without systemd\n\nLocate the collector log file, typically at `/var/log/netdata/collector.log`, and use `grep` to filter for collector's name:\n\n```bash\ngrep windows /var/log/netdata/collector.log\n```\n\n**Note**: This method shows logs from all restarts. Focus on the **latest entries** for troubleshooting current issues.\n\n#### Docker Container\n\nIf your Netdata runs in a Docker container named \"netdata\" (replace if different), use this command:\n\n```bash\ndocker logs netdata 2>&1 | grep windows\n```\n\n",
"alerts": "## Alerts\n\n\nThe following alerts are available:\n\n| Alert name | On metric | Description |\n|:------------|:----------|:------------|\n| [ windows_10min_cpu_usage ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.cpu_utilization_total | average CPU utilization over the last 10 minutes |\n| [ windows_ram_in_use ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.memory_utilization | memory utilization |\n| [ windows_inbound_packets_discarded ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_discarded | number of inbound discarded packets for the network interface in the last 10 minutes |\n| [ windows_outbound_packets_discarded ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_discarded | number of outbound discarded packets for the network interface in the last 10 minutes |\n| [ windows_inbound_packets_errors ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_errors | number of inbound errors for the network interface in the last 10 minutes |\n| [ windows_outbound_packets_errors ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_errors | number of outbound errors for the network interface in the last 10 minutes |\n| [ windows_disk_in_use ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.logical_disk_space_usage | disk space utilization |\n",
@@ -17732,7 +17732,7 @@
}
}
},
- "overview": "# NET Framework\n\nPlugin: go.d.plugin\nModule: windows\n\n## Overview\n\n**Deprecation Notice**: This collector is no longer the recommended method for Windows monitoring and will be removed in a future release.\n\nThe official Netdata Agent for Windows now provides a robust and user-friendly solution for real-time system and application performance monitoring. By installing Netdata on your Windows host, you'll gain access to a wide range of metrics and visualizations without the need for additional collectors or complex configurations.\n\nTo get started with Netdata on Windows, see the [Netdata Windows Installer](/packaging/windows/WINDOWS_INSTALLER.md).\n\n---\n\nThis collector monitors the performance of Windows machines, collects both host metrics and metrics from various Windows applications (e.g. Active Directory, MSSQL).\n\n\nIt collect metrics by periodically sending HTTP requests to [Prometheus exporter for Windows machines](https://github.com/prometheus-community/windows_exporter), a native Windows agent running on each host.\n\n\nThis collector is supported on all platforms.\n\nThis collector supports collecting metrics from multiple instances of this integration, including remote instances.\n\n\n### Default Behavior\n\n#### Auto-Detection\n\nThis integration doesn't support auto-detection.\n\n#### Limits\n\nThe default configuration for this integration does not impose any limits on data collection.\n\n#### Performance Impact\n\nData collection affects the CPU usage of the Windows host. CPU usage depends on the frequency of data collection and the [enabled collectors](https://github.com/prometheus-community/windows_exporter#collectors).\n\n",
+ "overview": "# NET Framework\n\nPlugin: go.d.plugin\nModule: windows\n\n## Overview\n\n**Deprecation Notice**: This collector is no longer the recommended method for Windows monitoring and will be removed in a future release.\n\nThe official Netdata Agent for Windows now provides a robust and user-friendly solution for real-time system and application performance monitoring. By installing Netdata on your Windows host, you'll gain access to a wide range of metrics and visualizations without the need for additional collectors or complex configurations.\n\nTo get started with Netdata on Windows, see the [Netdata Windows Installer](/packaging/windows/WINDOWS_INSTALLER.md).\n\n---\n\nThis collector monitors the performance of Windows machines, collects both host metrics and metrics from various Windows applications (e.g. Active Directory, MSSQL).\n\n\nIt collect metrics by periodically sending HTTP requests to [Prometheus exporter for Windows machines](https://github.com/prometheus-community/windows_exporter), a native Windows Agent running on each host.\n\n\nThis collector is supported on all platforms.\n\nThis collector supports collecting metrics from multiple instances of this integration, including remote instances.\n\n\n### Default Behavior\n\n#### Auto-Detection\n\nThis integration doesn't support auto-detection.\n\n#### Limits\n\nThe default configuration for this integration does not impose any limits on data collection.\n\n#### Performance Impact\n\nData collection affects the CPU usage of the Windows host. CPU usage depends on the frequency of data collection and the [enabled collectors](https://github.com/prometheus-community/windows_exporter#collectors).\n\n",
"setup": "## Setup\n\n### Prerequisites\n\n#### Install Windows exporter\n\nTo install the Windows exporter, follow the [official installation guide](https://github.com/prometheus-community/windows_exporter#installation).\n\n\n\n### Configuration\n\n#### File\n\nThe configuration file name for this integration is `go.d/windows.conf`.\n\n\nYou can edit the configuration file using the [`edit-config`](/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the\nNetdata [config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).\n\n```bash\ncd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata\nsudo ./edit-config go.d/windows.conf\n```\n#### Options\n\nThe following options can be defined globally: update_every, autodetection_retry.\n\n\n| Name | Description | Default | Required |\n|:----|:-----------|:-------|:--------:|\n| update_every | Data collection frequency. | 1 | no |\n| autodetection_retry | Recheck interval in seconds. Zero means no recheck will be scheduled. | 0 | no |\n| url | Server URL. | | yes |\n| timeout | HTTP request timeout. | 1 | no |\n| username | Username for basic HTTP authentication. | | no |\n| password | Password for basic HTTP authentication. | | no |\n| proxy_url | Proxy URL. | | no |\n| proxy_username | Username for proxy basic HTTP authentication. | | no |\n| proxy_password | Password for proxy basic HTTP authentication. | | no |\n| method | HTTP request method. | GET | no |\n| body | HTTP request body. | | no |\n| headers | HTTP request headers. | | no |\n| not_follow_redirects | Redirect handling policy. Controls whether the client follows redirects. | no | no |\n| tls_skip_verify | Server certificate chain and hostname validation policy. Controls whether the client performs this check. | no | no |\n| tls_ca | Certification authority that the client uses when verifying the server's certificates. | | no |\n| tls_cert | Client TLS certificate. | | no |\n| tls_key | Client TLS key. 
| | no |\n\n#### Examples\n\n##### Basic\n\nA basic example configuration.\n\n```yaml\njobs:\n - name: win_server\n url: http://192.0.2.1:9182/metrics\n\n```\n##### HTTP authentication\n\nBasic HTTP authentication.\n\n```yaml\njobs:\n - name: win_server\n url: http://192.0.2.1:9182/metrics\n username: username\n password: password\n\n```\n##### HTTPS with self-signed certificate\n\nDo not validate server certificate chain and hostname.\n\n```yaml\njobs:\n - name: win_server\n url: https://192.0.2.1:9182/metrics\n tls_skip_verify: yes\n\n```\n##### Virtual Node\n\nThe Virtual Node functionality allows you to define nodes in configuration files and treat them as ordinary nodes in all interfaces, panels, tabs, filters, etc.\nYou can create a virtual node for all your Windows machines and control them as separate entities.\n\nTo make your Windows server a virtual node, you need to define virtual nodes in `/etc/netdata/vnodes/vnodes.conf`:\n\n> **Note**: To create a valid guid, you can use the `uuidgen` command on Linux, or the `[guid]::NewGuid()` command in PowerShell on Windows.\n\n```yaml\n# /etc/netdata/vnodes/vnodes.conf\n- hostname: win_server\n guid: \n```\n\n\n```yaml\njobs:\n - name: win_server\n vnode: win_server\n url: http://192.0.2.1:9182/metrics\n\n```\n##### Multi-instance\n\n> **Note**: When you define multiple jobs, their names must be unique.\n\nCollecting metrics from multiple remote instances.\n\n\n```yaml\njobs:\n - name: win_server1\n url: http://192.0.2.1:9182/metrics\n\n - name: win_server2\n url: http://192.0.2.2:9182/metrics\n\n```\n",
"troubleshooting": "## Troubleshooting\n\n### Debug Mode\n\n**Important**: Debug mode is not supported for data collection jobs created via the UI using the Dyncfg feature.\n\nTo troubleshoot issues with the `windows` collector, run the `go.d.plugin` with the debug option enabled. The output\nshould give you clues as to why the collector isn't working.\n\n- Navigate to the `plugins.d` directory, usually at `/usr/libexec/netdata/plugins.d/`. If that's not the case on\n your system, open `netdata.conf` and look for the `plugins` setting under `[directories]`.\n\n ```bash\n cd /usr/libexec/netdata/plugins.d/\n ```\n\n- Switch to the `netdata` user.\n\n ```bash\n sudo -u netdata -s\n ```\n\n- Run the `go.d.plugin` to debug the collector:\n\n ```bash\n ./go.d.plugin -d -m windows\n ```\n\n### Getting Logs\n\nIf you're encountering problems with the `windows` collector, follow these steps to retrieve logs and identify potential issues:\n\n- **Run the command** specific to your system (systemd, non-systemd, or Docker container).\n- **Examine the output** for any warnings or error messages that might indicate issues. These messages should provide clues about the root cause of the problem.\n\n#### System with systemd\n\nUse the following command to view logs generated since the last Netdata service restart:\n\n```bash\njournalctl _SYSTEMD_INVOCATION_ID=\"$(systemctl show --value --property=InvocationID netdata)\" --namespace=netdata --grep windows\n```\n\n#### System without systemd\n\nLocate the collector log file, typically at `/var/log/netdata/collector.log`, and use `grep` to filter for collector's name:\n\n```bash\ngrep windows /var/log/netdata/collector.log\n```\n\n**Note**: This method shows logs from all restarts. Focus on the **latest entries** for troubleshooting current issues.\n\n#### Docker Container\n\nIf your Netdata runs in a Docker container named \"netdata\" (replace if different), use this command:\n\n```bash\ndocker logs netdata 2>&1 | grep windows\n```\n\n",
"alerts": "## Alerts\n\n\nThe following alerts are available:\n\n| Alert name | On metric | Description |\n|:------------|:----------|:------------|\n| [ windows_10min_cpu_usage ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.cpu_utilization_total | average CPU utilization over the last 10 minutes |\n| [ windows_ram_in_use ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.memory_utilization | memory utilization |\n| [ windows_inbound_packets_discarded ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_discarded | number of inbound discarded packets for the network interface in the last 10 minutes |\n| [ windows_outbound_packets_discarded ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_discarded | number of outbound discarded packets for the network interface in the last 10 minutes |\n| [ windows_inbound_packets_errors ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_errors | number of inbound errors for the network interface in the last 10 minutes |\n| [ windows_outbound_packets_errors ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_errors | number of outbound errors for the network interface in the last 10 minutes |\n| [ windows_disk_in_use ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.logical_disk_space_usage | disk space utilization |\n",
@@ -17769,7 +17769,7 @@
}
}
},
- "overview": "# Windows\n\nPlugin: go.d.plugin\nModule: windows\n\n## Overview\n\n**Deprecation Notice**: This collector is no longer the recommended method for Windows monitoring and will be removed in a future release.\n\nThe official Netdata Agent for Windows now provides a robust and user-friendly solution for real-time system and application performance monitoring. By installing Netdata on your Windows host, you'll gain access to a wide range of metrics and visualizations without the need for additional collectors or complex configurations.\n\nTo get started with Netdata on Windows, see the [Netdata Windows Installer](/packaging/windows/WINDOWS_INSTALLER.md).\n\n---\n\nThis collector monitors the performance of Windows machines, collects both host metrics and metrics from various Windows applications (e.g. Active Directory, MSSQL).\n\n\nIt collect metrics by periodically sending HTTP requests to [Prometheus exporter for Windows machines](https://github.com/prometheus-community/windows_exporter), a native Windows agent running on each host.\n\n\nThis collector is supported on all platforms.\n\nThis collector supports collecting metrics from multiple instances of this integration, including remote instances.\n\n\n### Default Behavior\n\n#### Auto-Detection\n\nThis integration doesn't support auto-detection.\n\n#### Limits\n\nThe default configuration for this integration does not impose any limits on data collection.\n\n#### Performance Impact\n\nData collection affects the CPU usage of the Windows host. CPU usage depends on the frequency of data collection and the [enabled collectors](https://github.com/prometheus-community/windows_exporter#collectors).\n\n",
+ "overview": "# Windows\n\nPlugin: go.d.plugin\nModule: windows\n\n## Overview\n\n**Deprecation Notice**: This collector is no longer the recommended method for Windows monitoring and will be removed in a future release.\n\nThe official Netdata Agent for Windows now provides a robust and user-friendly solution for real-time system and application performance monitoring. By installing Netdata on your Windows host, you'll gain access to a wide range of metrics and visualizations without the need for additional collectors or complex configurations.\n\nTo get started with Netdata on Windows, see the [Netdata Windows Installer](/packaging/windows/WINDOWS_INSTALLER.md).\n\n---\n\nThis collector monitors the performance of Windows machines, collects both host metrics and metrics from various Windows applications (e.g. Active Directory, MSSQL).\n\n\nIt collect metrics by periodically sending HTTP requests to [Prometheus exporter for Windows machines](https://github.com/prometheus-community/windows_exporter), a native Windows Agent running on each host.\n\n\nThis collector is supported on all platforms.\n\nThis collector supports collecting metrics from multiple instances of this integration, including remote instances.\n\n\n### Default Behavior\n\n#### Auto-Detection\n\nThis integration doesn't support auto-detection.\n\n#### Limits\n\nThe default configuration for this integration does not impose any limits on data collection.\n\n#### Performance Impact\n\nData collection affects the CPU usage of the Windows host. CPU usage depends on the frequency of data collection and the [enabled collectors](https://github.com/prometheus-community/windows_exporter#collectors).\n\n",
"setup": "## Setup\n\n### Prerequisites\n\n#### Install Windows exporter\n\nTo install the Windows exporter, follow the [official installation guide](https://github.com/prometheus-community/windows_exporter#installation).\n\n\n\n### Configuration\n\n#### File\n\nThe configuration file name for this integration is `go.d/windows.conf`.\n\n\nYou can edit the configuration file using the [`edit-config`](/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the\nNetdata [config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).\n\n```bash\ncd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata\nsudo ./edit-config go.d/windows.conf\n```\n#### Options\n\nThe following options can be defined globally: update_every, autodetection_retry.\n\n\n| Name | Description | Default | Required |\n|:----|:-----------|:-------|:--------:|\n| update_every | Data collection frequency. | 1 | no |\n| autodetection_retry | Recheck interval in seconds. Zero means no recheck will be scheduled. | 0 | no |\n| url | Server URL. | | yes |\n| timeout | HTTP request timeout. | 1 | no |\n| username | Username for basic HTTP authentication. | | no |\n| password | Password for basic HTTP authentication. | | no |\n| proxy_url | Proxy URL. | | no |\n| proxy_username | Username for proxy basic HTTP authentication. | | no |\n| proxy_password | Password for proxy basic HTTP authentication. | | no |\n| method | HTTP request method. | GET | no |\n| body | HTTP request body. | | no |\n| headers | HTTP request headers. | | no |\n| not_follow_redirects | Redirect handling policy. Controls whether the client follows redirects. | no | no |\n| tls_skip_verify | Server certificate chain and hostname validation policy. Controls whether the client performs this check. | no | no |\n| tls_ca | Certification authority that the client uses when verifying the server's certificates. | | no |\n| tls_cert | Client TLS certificate. | | no |\n| tls_key | Client TLS key. 
| | no |\n\n#### Examples\n\n##### Basic\n\nA basic example configuration.\n\n```yaml\njobs:\n - name: win_server\n url: http://192.0.2.1:9182/metrics\n\n```\n##### HTTP authentication\n\nBasic HTTP authentication.\n\n```yaml\njobs:\n - name: win_server\n url: http://192.0.2.1:9182/metrics\n username: username\n password: password\n\n```\n##### HTTPS with self-signed certificate\n\nDo not validate server certificate chain and hostname.\n\n```yaml\njobs:\n - name: win_server\n url: https://192.0.2.1:9182/metrics\n tls_skip_verify: yes\n\n```\n##### Virtual Node\n\nThe Virtual Node functionality allows you to define nodes in configuration files and treat them as ordinary nodes in all interfaces, panels, tabs, filters, etc.\nYou can create a virtual node for all your Windows machines and control them as separate entities.\n\nTo make your Windows server a virtual node, you need to define virtual nodes in `/etc/netdata/vnodes/vnodes.conf`:\n\n> **Note**: To create a valid guid, you can use the `uuidgen` command on Linux, or the `[guid]::NewGuid()` command in PowerShell on Windows.\n\n```yaml\n# /etc/netdata/vnodes/vnodes.conf\n- hostname: win_server\n guid: \n```\n\n\n```yaml\njobs:\n - name: win_server\n vnode: win_server\n url: http://192.0.2.1:9182/metrics\n\n```\n##### Multi-instance\n\n> **Note**: When you define multiple jobs, their names must be unique.\n\nCollecting metrics from multiple remote instances.\n\n\n```yaml\njobs:\n - name: win_server1\n url: http://192.0.2.1:9182/metrics\n\n - name: win_server2\n url: http://192.0.2.2:9182/metrics\n\n```\n",
"troubleshooting": "## Troubleshooting\n\n### Debug Mode\n\n**Important**: Debug mode is not supported for data collection jobs created via the UI using the Dyncfg feature.\n\nTo troubleshoot issues with the `windows` collector, run the `go.d.plugin` with the debug option enabled. The output\nshould give you clues as to why the collector isn't working.\n\n- Navigate to the `plugins.d` directory, usually at `/usr/libexec/netdata/plugins.d/`. If that's not the case on\n your system, open `netdata.conf` and look for the `plugins` setting under `[directories]`.\n\n ```bash\n cd /usr/libexec/netdata/plugins.d/\n ```\n\n- Switch to the `netdata` user.\n\n ```bash\n sudo -u netdata -s\n ```\n\n- Run the `go.d.plugin` to debug the collector:\n\n ```bash\n ./go.d.plugin -d -m windows\n ```\n\n### Getting Logs\n\nIf you're encountering problems with the `windows` collector, follow these steps to retrieve logs and identify potential issues:\n\n- **Run the command** specific to your system (systemd, non-systemd, or Docker container).\n- **Examine the output** for any warnings or error messages that might indicate issues. These messages should provide clues about the root cause of the problem.\n\n#### System with systemd\n\nUse the following command to view logs generated since the last Netdata service restart:\n\n```bash\njournalctl _SYSTEMD_INVOCATION_ID=\"$(systemctl show --value --property=InvocationID netdata)\" --namespace=netdata --grep windows\n```\n\n#### System without systemd\n\nLocate the collector log file, typically at `/var/log/netdata/collector.log`, and use `grep` to filter for collector's name:\n\n```bash\ngrep windows /var/log/netdata/collector.log\n```\n\n**Note**: This method shows logs from all restarts. Focus on the **latest entries** for troubleshooting current issues.\n\n#### Docker Container\n\nIf your Netdata runs in a Docker container named \"netdata\" (replace if different), use this command:\n\n```bash\ndocker logs netdata 2>&1 | grep windows\n```\n\n",
"alerts": "## Alerts\n\n\nThe following alerts are available:\n\n| Alert name | On metric | Description |\n|:------------|:----------|:------------|\n| [ windows_10min_cpu_usage ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.cpu_utilization_total | average CPU utilization over the last 10 minutes |\n| [ windows_ram_in_use ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.memory_utilization | memory utilization |\n| [ windows_inbound_packets_discarded ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_discarded | number of inbound discarded packets for the network interface in the last 10 minutes |\n| [ windows_outbound_packets_discarded ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_discarded | number of outbound discarded packets for the network interface in the last 10 minutes |\n| [ windows_inbound_packets_errors ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_errors | number of inbound errors for the network interface in the last 10 minutes |\n| [ windows_outbound_packets_errors ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.net_nic_errors | number of outbound errors for the network interface in the last 10 minutes |\n| [ windows_disk_in_use ](https://github.com/netdata/netdata/blob/master/src/health/health.d/windows.conf) | windows.logical_disk_space_usage | disk space utilization |\n",
@@ -21117,7 +21117,7 @@
"exporter",
"json"
],
- "overview": "# JSON\n\nUse the JSON connector for the exporting engine to archive your agent's metrics to JSON document databases for long-term storage,\nfurther analysis, or correlation with data from other sources\n\n",
+ "overview": "# JSON\n\nUse the JSON connector for the exporting engine to archive your Agent's metrics to JSON document databases for long-term storage,\nfurther analysis, or correlation with data from other sources\n\n",
"setup": "## Setup\n\n### Prerequisites\n\n#### \n\n\n\n\n### Configuration\n\n#### File\n\nThe configuration file name for this integration is `exporting.conf`.\n\n\nYou can edit the configuration file using the [`edit-config`](/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the\nNetdata [config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).\n\n```bash\ncd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata\nsudo ./edit-config exporting.conf\n```\n#### Options\n\nThe following options can be defined for this exporter.\n\n\n| Name | Description | Default | Required |\n|:----|:-----------|:-------|:--------:|\n| enabled | Enables or disables an exporting connector instance (yes/no). | no | yes |\n| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | pubsub.googleapis.com | yes |\n| username | Username for HTTP authentication | my_username | no |\n| password | Password for HTTP authentication | my_password | no |\n| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | no |\n| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | no |\n| prefix | The prefix to add to all metrics. | Netdata | no |\n| update every | Frequency of sending sending data to the external database, in seconds. | 10 | no |\n| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | no |\n| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 2 * update_every * 1000 | no |\n| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/src/libnetdata/simple_pattern#simple-patterns). | localhost * | no |\n| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | no |\n| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | no |\n| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | no |\n| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | no |\n\n##### destination\n\nThe format of each item in this list, is: [PROTOCOL:]IP[:PORT].\n- PROTOCOL can be udp or tcp. tcp is the default and only supported by the current exporting engine.\n- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you can to enclose the IP in [] to separate it from the port.\n- PORT can be a number of a service name. If omitted, the default port for the exporting connector will be used.\n\nExample IPv4:\n ```yaml\n destination = localhost:5448\n ```\nWhen multiple servers are defined, Netdata will try the next one when the previous one fails.\n\n\n##### update every\n\nNetdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers\nsend data to the same database. 
This randomness does not affect the quality of the data, only the time they are sent.\n\n\n##### buffer on failures\n\nIf the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).\n\n\n##### send hosts matching\n\nIncludes one or more space separated patterns, using * as wildcard (any number of times within each pattern).\nThe patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to\nfilter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.\n\nA pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,\nuse `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).\n\n\n##### send charts matching\n\nA pattern starting with ! gives a negative match. So to match all charts named apps.* except charts ending in *reads,\nuse !*reads apps.* (so, the order is important: the first pattern matching the chart id or the chart name will be used,\npositive or negative). There is also a URL parameter filter that can be used while querying allmetrics. The URL parameter\nhas a higher priority than the configuration option.\n\n\n##### send names instead of ids\n\nNetdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names\nare human friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are\ndifferent : disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.\n\n\n#### Examples\n\n##### Basic configuration\n\n\n\n```yaml\n[json:my_json_instance]\n enabled = yes\n destination = localhost:5448\n\n```\n##### Configuration with HTTPS and HTTP authentication\n\nAdd `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `json:https:my_json_instance`.\n\n```yaml\n[json:my_json_instance]\n enabled = yes\n destination = localhost:5448\n username = my_username\n password = my_password\n\n```\n",
"integration_type": "exporter",
"edit_link": "https://github.com/netdata/netdata/blob/master/src/exporting/json/metadata.yaml",
@@ -21249,7 +21249,7 @@
"exporter",
"MongoDB"
],
- "overview": "# MongoDB\n\nUse the MongoDB connector for the exporting engine to archive your agent's metrics to a MongoDB database\nfor long-term storage, further analysis, or correlation with data from other sources.\n\n",
+ "overview": "# MongoDB\n\nUse the MongoDB connector for the exporting engine to archive your Agent's metrics to a MongoDB database\nfor long-term storage, further analysis, or correlation with data from other sources.\n\n",
"setup": "## Setup\n\n### Prerequisites\n\n#### \n\n- To use MongoDB as an external storage for long-term archiving, you should first [install](https://www.mongodb.com/docs/languages/c/c-driver/current/libmongoc/tutorials/obtaining-libraries/installing/#std-label-installing) libmongoc 1.7.0 or higher.\n- Next, re-install Netdata from the source, which detects that the required library is now available.\n\n\n\n### Configuration\n\n#### File\n\nThe configuration file name for this integration is `exporting.conf`.\n\n\nYou can edit the configuration file using the [`edit-config`](/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the\nNetdata [config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).\n\n```bash\ncd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata\nsudo ./edit-config exporting.conf\n```\n#### Options\n\nThe following options can be defined for this exporter.\n\n\n| Name | Description | Default | Required |\n|:----|:-----------|:-------|:--------:|\n| enabled | Enables or disables an exporting connector instance (yes/no). | no | yes |\n| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | localhost | yes |\n| username | Username for HTTP authentication | my_username | no |\n| password | Password for HTTP authentication | my_password | no |\n| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | no |\n| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | no |\n| prefix | The prefix to add to all metrics. | Netdata | no |\n| update every | Frequency of sending sending data to the external database, in seconds. | 10 | no |\n| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | no |\n| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 2 * update_every * 1000 | no |\n| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/src/libnetdata/simple_pattern#simple-patterns). | localhost * | no |\n| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | no |\n| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | no |\n| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | no |\n| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | no |\n\n##### destination\n\nThe format of each item in this list, is: [PROTOCOL:]IP[:PORT].\n- PROTOCOL can be udp or tcp. tcp is the default and only supported by the current exporting engine.\n- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you can to enclose the IP in [] to separate it from the port.\n- PORT can be a number of a service name. 
If omitted, the default port for the exporting connector will be used.\n\nExample IPv4:\n ```yaml\n destination = 10.11.14.2:27017 10.11.14.3:4242 10.11.14.4:27017\n ```\nExample IPv6 and IPv4 together:\n```yaml\ndestination = [ffff:...:0001]:2003 10.11.12.1:2003\n```\nWhen multiple servers are defined, Netdata will try the next one when the previous one fails.\n\n\n##### update every\n\nNetdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers\nsend data to the same database. This randomness does not affect the quality of the data, only the time they are sent.\n\n\n##### buffer on failures\n\nIf the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).\n\n\n##### send hosts matching\n\nIncludes one or more space separated patterns, using * as wildcard (any number of times within each pattern).\nThe patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to\nfilter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.\n\nA pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,\nuse `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).\n\n\n##### send charts matching\n\nA pattern starting with ! gives a negative match. So to match all charts named apps.* except charts ending in *reads,\nuse !*reads apps.* (so, the order is important: the first pattern matching the chart id or the chart name will be used,\npositive or negative). There is also a URL parameter filter that can be used while querying allmetrics. The URL parameter\nhas a higher priority than the configuration option.\n\n\n##### send names instead of ids\n\nNetdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names\nare human friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are\ndifferent : disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.\n\n\n#### Examples\n\n##### Basic configuration\n\nThe default socket timeout depends on the exporting connector update interval.\nThe timeout is 500 ms shorter than the interval (but not less than 1000 ms). You can alter the timeout using the sockettimeoutms MongoDB URI option.\n\n\n```yaml\n[mongodb:my_instance]\n enabled = yes\n destination = mongodb://\n database = your_database_name\n collection = your_collection_name\n\n```\n",
"integration_type": "exporter",
"edit_link": "https://github.com/netdata/netdata/blob/master/src/exporting/mongodb/metadata.yaml",
@@ -21909,7 +21909,7 @@
"PagerDuty"
],
"overview": "# PagerDuty\n\nPagerDuty is an enterprise incident resolution service that integrates with ITOps and DevOps monitoring stacks to improve operational reliability and agility. From enriching and aggregating events to correlating them into incidents, PagerDuty streamlines the incident management process by reducing alert noise and resolution times.\nYou can send notifications to PagerDuty using Netdata's Agent alert notification feature, which supports dozens of endpoints, user roles, and more.\n\n",
- "setup": "## Setup\n\n### Prerequisites\n\n#### \n\n- An installation of the [PagerDuty](https://www.pagerduty.com/docs/guides/agent-install-guide/) agent on the node running the Netdata Agent\n- A PagerDuty Generic API service using either the `Events API v2` or `Events API v1`\n- [Add a new service](https://support.pagerduty.com/docs/services-and-integrations#section-configuring-services-and-integrations) to PagerDuty. Click Use our API directly and select either `Events API v2` or `Events API v1`. Once you finish creating the service, click on the Integrations tab to find your Integration Key.\n- Access to the terminal where Netdata Agent is running\n\n\n\n### Configuration\n\n#### File\n\nThe configuration file name for this integration is `health_alarm_notify.conf`.\n\n\nYou can edit the configuration file using the [`edit-config`](/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the\nNetdata [config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).\n\n```bash\ncd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata\nsudo ./edit-config health_alarm_notify.conf\n```\n#### Options\n\nThe following options can be defined for this notification\n\n| Name | Description | Default | Required |\n|:----|:-----------|:-------|:--------:|\n| SEND_PD | Set `SEND_PD` to YES | YES | yes |\n| DEFAULT_RECIPIENT_PD | Set `DEFAULT_RECIPIENT_PD` to the PagerDuty service key you want the alert notifications to be sent to. You can define multiple service keys like this: `pd_service_key_1` `pd_service_key_2`. | | yes |\n\n##### DEFAULT_RECIPIENT_PD\n\nAll roles will default to this variable if left unconfigured.\n\nThe `DEFAULT_RECIPIENT_PD` can be edited in the following entries at the bottom of the same file:\n```text\nrole_recipients_pd[sysadmin]=\"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxa\"\nrole_recipients_pd[domainadmin]=\"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxb\"\nrole_recipients_pd[dba]=\"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxc\"\nrole_recipients_pd[webmaster]=\"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxd\"\nrole_recipients_pd[proxyadmin]=\"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxe\"\nrole_recipients_pd[sitemgr]=\"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxf\"\n```\n\n\n#### Examples\n\n##### Basic Configuration\n\n\n\n```yaml\n#------------------------------------------------------------------------------\n# pagerduty.com notification options\n\nSEND_PD=\"YES\"\nDEFAULT_RECIPIENT_PD=\"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\"\nUSE_PD_VERSION=\"2\"\n\n```\n",
+ "setup": "## Setup\n\n### Prerequisites\n\n#### \n\n- An installation of the [PagerDuty](https://www.pagerduty.com/docs/guides/agent-install-guide/) Agent on the node running the Netdata Agent\n- A PagerDuty Generic API service using either the `Events API v2` or `Events API v1`\n- [Add a new service](https://support.pagerduty.com/docs/services-and-integrations#section-configuring-services-and-integrations) to PagerDuty. Click Use our API directly and select either `Events API v2` or `Events API v1`. Once you finish creating the service, click on the Integrations tab to find your Integration Key.\n- Access to the terminal where Netdata Agent is running\n\n\n\n### Configuration\n\n#### File\n\nThe configuration file name for this integration is `health_alarm_notify.conf`.\n\n\nYou can edit the configuration file using the [`edit-config`](/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the\nNetdata [config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).\n\n```bash\ncd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata\nsudo ./edit-config health_alarm_notify.conf\n```\n#### Options\n\nThe following options can be defined for this notification\n\n| Name | Description | Default | Required |\n|:----|:-----------|:-------|:--------:|\n| SEND_PD | Set `SEND_PD` to YES | YES | yes |\n| DEFAULT_RECIPIENT_PD | Set `DEFAULT_RECIPIENT_PD` to the PagerDuty service key you want the alert notifications to be sent to. You can define multiple service keys like this: `pd_service_key_1` `pd_service_key_2`. | | yes |\n\n##### DEFAULT_RECIPIENT_PD\n\nAll roles will default to this variable if left unconfigured.\n\nThe `DEFAULT_RECIPIENT_PD` can be edited in the following entries at the bottom of the same file:\n```text\nrole_recipients_pd[sysadmin]=\"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxa\"\nrole_recipients_pd[domainadmin]=\"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxb\"\nrole_recipients_pd[dba]=\"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxc\"\nrole_recipients_pd[webmaster]=\"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxd\"\nrole_recipients_pd[proxyadmin]=\"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxe\"\nrole_recipients_pd[sitemgr]=\"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxf\"\n```\n\n\n#### Examples\n\n##### Basic Configuration\n\n\n\n```yaml\n#------------------------------------------------------------------------------\n# pagerduty.com notification options\n\nSEND_PD=\"YES\"\nDEFAULT_RECIPIENT_PD=\"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\"\nUSE_PD_VERSION=\"2\"\n\n```\n",
"troubleshooting": "## Troubleshooting\n\n### Test Notification\n\nYou can run the following command by hand, to test alerts configuration:\n\n```bash\n# become user netdata\nsudo su -s /bin/bash netdata\n\n# enable debugging info on the console\nexport NETDATA_ALARM_NOTIFY_DEBUG=1\n\n# send test alarms to sysadmin\n/usr/libexec/netdata/plugins.d/alarm-notify.sh test\n\n# send test alarms to any role\n/usr/libexec/netdata/plugins.d/alarm-notify.sh test \"ROLE\"\n```\n\nNote that this will test _all_ alert mechanisms for the selected role.\n\n",
"integration_type": "agent_notification",
"edit_link": "https://github.com/netdata/netdata/blob/master/src/health/notifications/pagerduty/metadata.yaml"
diff --git a/packaging/PLATFORM_SUPPORT.md b/packaging/PLATFORM_SUPPORT.md
index 8fa2b977f7e977..3d7bdd757b70d2 100644
--- a/packaging/PLATFORM_SUPPORT.md
+++ b/packaging/PLATFORM_SUPPORT.md
@@ -28,7 +28,7 @@ The following table shows a general outline of the various support tiers and cat
| Previously Supported | Users asked to upgrade | None | None | Yes, but only already published versions | Best Effort |
- “Bug Support”: How we handle platform-specific bugs.
-- “Guaranteed Configurations”: Which runtime configurations for the agent we try to guarantee will work with minimal
+- “Guaranteed Configurations”: Which runtime configurations for the Agent we try to guarantee will work with minimal
effort from users.
- “CI Coverage”: What level of coverage we provide for the platform in CI.
- “Native Packages”: Whether we provide native packages for the system package manager for the platform.
diff --git a/packaging/VERSIONING_AND_PUBLIC_API.md b/packaging/VERSIONING_AND_PUBLIC_API.md
index ce672a6fc6cc45..8ef443f7613ce3 100644
--- a/packaging/VERSIONING_AND_PUBLIC_API.md
+++ b/packaging/VERSIONING_AND_PUBLIC_API.md
@@ -59,7 +59,7 @@ Netdata Agent git repository.
## Public API
-The remainder of the document outlines the public API of the Netdata agent.
+The remainder of the document outlines the public API of the Netdata Agent.
We define two categories of components within the public API:
@@ -89,7 +89,7 @@ notes at least one minor release prior to being merged:
- The protocol used for communicating with external data collection plugins.
- The APIs provided by the `python.d.plugin` and `charts.d.plugin` data collection frameworks.
- The set of optional features supported by the Agent which are provided by default in our pre-built packages. If
- support for an optional feature is being completely removed from the agent, that is instead covered by what
+ support for an optional feature is being completely removed from the Agent, that is instead covered by what
component that feature is part of.
### Loosely Defined Public API Components
diff --git a/packaging/docker/README.md b/packaging/docker/README.md
index 0f9ad23d64f43a..7bfc7aadb32ea4 100644
--- a/packaging/docker/README.md
+++ b/packaging/docker/README.md
@@ -7,7 +7,7 @@ import TabItem from '@theme/TabItem';
We do not officially support running our Docker images with the Docker CLI `--user` option or the Docker Compose
`user:` parameter. Such usage will usually still work, but some features will not be available when run this
-way. Note that the agent will drop privileges appropriately inside the container during startup, meaning that even
+way. Note that the Agent will drop privileges appropriately inside the container during startup, meaning that even
when run without these options almost nothing in the container will actually run with an effective UID of 0.
Our POWER8+ Docker images do not support our FreeIPMI collector. This is a technical limitation in FreeIPMI itself,
@@ -620,12 +620,12 @@ Our Docker image provides integrated support for health checks through the stand
You can control how the health checks run by using the environment variable `NETDATA_HEALTHCHECK_TARGET` as follows:
-- If left unset, the health check will attempt to access the `/api/v1/info` endpoint of the agent.
-- If set to the exact value 'cli', the health check script will use `netdatacli ping` to determine if the agent is
+- If left unset, the health check will attempt to access the `/api/v1/info` endpoint of the Agent.
+- If set to the exact value 'cli', the health check script will use `netdatacli ping` to determine if the Agent is
running correctly or not. This is sufficient to ensure that Netdata did not hang during startup, but does not provide
a rigorous verification that the daemon is collecting data or is otherwise usable.
- If set to anything else, the health check will treat the value as a URL to check for a 200 status code on. In most
- cases, this should start with `http://localhost:19999/` to check the agent running in the container.
+ cases, this should start with `http://localhost:19999/` to check the Agent running in the container.
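+
+For illustration, here is a minimal sketch of switching the health check to `netdatacli ping` when starting the container (it uses the official `netdata/netdata` image and omits the usual volumes and capabilities for brevity):
+
+```bash
+# Run the official image with the health check pointed at `netdatacli ping`
+docker run -d --name=netdata \
+  -e NETDATA_HEALTHCHECK_TARGET=cli \
+  netdata/netdata
+```
+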
In most cases, the default behavior of checking the `/api/v1/info` endpoint will be sufficient. If you are using a
configuration which disables the web server or restricts access to certain APIs, you will need to use a non-default
diff --git a/packaging/installer/methods/ansible.md b/packaging/installer/methods/ansible.md
index 82e4095f7e067b..a1cc83836ec3d9 100644
--- a/packaging/installer/methods/ansible.md
+++ b/packaging/installer/methods/ansible.md
@@ -11,7 +11,7 @@ code?
Enter [Ansible](https://ansible.com), a popular system provisioning, configuration management, and infrastructure as
code (IaC) tool. Ansible uses **playbooks** to glue many standardized operations together with a simple syntax, then run
-those operations over standard and secure SSH connections. There's no agent to install on the remote system, so all you
+those operations over standard and secure SSH connections. There's no agent to install on the remote system, so all you
have to worry about is your application and your monitoring software.
Ansible has some competition from the likes of [Puppet](https://puppet.com/) or [Chef](https://www.chef.io/), but the
diff --git a/packaging/installer/methods/freebsd.md b/packaging/installer/methods/freebsd.md
index 05137598b175c6..a7aa5bc54dfa2e 100644
--- a/packaging/installer/methods/freebsd.md
+++ b/packaging/installer/methods/freebsd.md
@@ -21,7 +21,7 @@ Please respond in the affirmative for any relevant prompts during the installati
The simplest method is to use the single line [kickstart script](/packaging/installer/methods/kickstart.md)
-If you have a Netdata cloud account then clicking on the **Connect Nodes** button will generate the kickstart command you should use. Use the command from the "Linux" tab, it should look something like this:
+If you have a Netdata Cloud account, clicking on the **Connect Nodes** button will generate the kickstart command you should use. Use the command from the "Linux" tab; it should look something like this:
```sh
wget -O /tmp/netdata-kickstart.sh https://get.netdata.cloud/kickstart.sh && sh /tmp/netdata-kickstart.sh --claim-token --claim-url https://app.netdata.cloud
@@ -120,7 +120,7 @@ The following options are mutually exclusive and specify special operations othe
- `--uninstall`: Uninstall an existing installation of Netdata. Fails if there is no existing install.
- `--claim-only`: If there is an existing install, only try to claim it without attempting to update it. If there is no existing install, install and claim Netdata normally.
- `--repositories-only`: Only install repository configuration packages instead of doing a full install of Netdata. Automatically sets --native-only.
-- `--prepare-offline-install-source`: Instead of installing the agent, prepare a directory that can be used to install on another system without needing to download anything. See our [offline installation documentation](/packaging/installer/methods/offline.md) for more info.
+- `--prepare-offline-install-source`: Instead of installing the Agent, prepare a directory that can be used to install on another system without needing to download anything. See our [offline installation documentation](/packaging/installer/methods/offline.md) for more info.
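+
+For example, a rough sketch of staging an offline install source (the target directory name here is an arbitrary choice):
+
+```bash
+# Download the kickstart script and stage everything needed for an offline install
+wget -O /tmp/netdata-kickstart.sh https://get.netdata.cloud/kickstart.sh
+sh /tmp/netdata-kickstart.sh --prepare-offline-install-source ./netdata-offline
+```
+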
Additionally, the following environment variables may be used to further customize how the script runs (most users
should not need to use special values for any of these):
diff --git a/packaging/installer/methods/kickstart.md b/packaging/installer/methods/kickstart.md
index ed5a4ae41df13c..4ce467eeac1480 100644
--- a/packaging/installer/methods/kickstart.md
+++ b/packaging/installer/methods/kickstart.md
@@ -65,7 +65,7 @@ The `kickstart.sh` script accepts a number of optional parameters to control how
The script automatically detects if it is running interactively, on a user's terminal, or headless in a CI/CD environment. These are options related to overriding this behavior.
- `--non-interactive` or `--dont-wait`
- Don’t prompt for anything and assume yes whenever possible, overriding any automatic detection of an interactive run. Use this option when installing Netdata agent with a provisioning tool or in CI/CD.
+ Don’t prompt for anything and assume yes whenever possible, overriding any automatic detection of an interactive run. Use this option when installing the Netdata Agent with a provisioning tool or in CI/CD.
- `--interactive`
Act as if running interactively, even if automatic detection indicates a run is non-interactive.
@@ -109,22 +109,22 @@ By default, the script installs a cron job to automatically update Netdata to th
### Netdata Cloud related options
-By default, the kickstart script will provide a Netdata agent installation that can potentially communicate with Netdata Cloud if the Netdata agent is further configured to do so.
+By default, the kickstart script will provide a Netdata Agent installation that can potentially communicate with Netdata Cloud if the Netdata Agent is further configured to do so.
- `--claim-token`
Specify a unique claiming token associated with your Space in Netdata Cloud to be used to connect to the node after the installation. This will connect and claim the Netdata Agent to Netdata Cloud.
- `--claim-url`
- Specify a URL to use when connecting to the cloud. Defaults to `https://app.netdata.cloud`. Use this option to change the Netdata Cloud URL to point to your Netdata Cloud installation.
+ Specify a URL to use when connecting to the Cloud. Defaults to `https://app.netdata.cloud`. Use this option to change the Netdata Cloud URL to point to your Netdata Cloud installation.
- `--claim-rooms`
Specify a comma-separated list of tokens for each Room this node should appear in.
- `--claim-proxy`
- Specify a proxy to use when connecting to the cloud in the form of `http://[user:pass@]host:ip` for an HTTP(S) proxy. See [connecting through a proxy](/src/claim/README.md#automatically-via-a-provisioning-system-or-the-command-line) for details.
+ Specify a proxy to use when connecting to the Cloud in the form of `http://[user:pass@]host:ip` for an HTTP(S) proxy. See [connecting through a proxy](/src/claim/README.md#automatically-via-a-provisioning-system-or-the-command-line) for details.
- `--claim-only`
If there is an existing installation, only try to claim it without attempting to update it. If there is no existing installation, install and claim Netdata normally.
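+
+For example, a sketch of a one-line install that connects the node during installation (the claim token and Room IDs below are placeholders, not real values):
+
+```bash
+wget -O /tmp/netdata-kickstart.sh https://get.netdata.cloud/kickstart.sh && \
+  sh /tmp/netdata-kickstart.sh \
+    --claim-token YOUR_CLAIM_TOKEN \
+    --claim-rooms ROOM_ID_1,ROOM_ID_2 \
+    --claim-url https://app.netdata.cloud
+```
+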
### anonymous telemetry
-By default, the agent is sending anonymous telemetry data to help us take identify the most common operating systems and the configurations Netdata agents run. We use this information to prioritize our efforts towards what is most commonly used by our community.
+By default, the Agent sends anonymous telemetry data to help us identify the most common operating systems and the configurations Netdata Agents run with. We use this information to prioritize our efforts towards what is most commonly used by our community.
- `--disable-telemetry`
Disable anonymous statistics.
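+
+A minimal sketch of combining this with a non-interactive run (assuming the script has already been downloaded as shown earlier):
+
+```bash
+sh /tmp/netdata-kickstart.sh --non-interactive --disable-telemetry
+```
+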
@@ -157,7 +157,7 @@ The following options are mutually exclusive and specify special operations othe
- `--repositories-only`
Only install repository configuration packages instead of doing a full install of Netdata. Automatically sets --native-only.
- `--prepare-offline-install-source`
- Instead of installing the agent, prepare a directory that can be used to install on another system without needing to download anything. See our [offline installation documentation](/packaging/installer/methods/offline.md) for more info.
+ Instead of installing the Agent, prepare a directory that can be used to install on another system without needing to download anything. See our [offline installation documentation](/packaging/installer/methods/offline.md) for more info.
### environment variables
diff --git a/packaging/installer/methods/macos.md b/packaging/installer/methods/macos.md
index 0843753b6e42b1..e90f123e6cb060 100644
--- a/packaging/installer/methods/macos.md
+++ b/packaging/installer/methods/macos.md
@@ -37,9 +37,9 @@ area](/docs/netdata-cloud/organize-your-infrastructure-invite-your-team.md#netda
- `--claim-token`: Specify a unique claiming token associated with your Space in Netdata Cloud to be used to connect to the node
after the install.
- `--claim-rooms`: Specify a comma-separated list of tokens for each Room this node should appear in.
-- `--claim-proxy`: Specify a proxy to use when connecting to the cloud in the form of `http://[user:pass@]host:ip` for an HTTP(S) proxy.
+- `--claim-proxy`: Specify a proxy to use when connecting to the Cloud in the form of `http://[user:pass@]host:ip` for an HTTP(S) proxy.
See [connecting through a proxy](/src/claim/README.md#automatically-via-a-provisioning-system-or-the-command-line) for details.
-- `--claim-url`: Specify a URL to use when connecting to the cloud. Defaults to `https://app.netdata.cloud`.
+- `--claim-url`: Specify a URL to use when connecting to the Cloud. Defaults to `https://app.netdata.cloud`.
For example:
diff --git a/packaging/installer/methods/manual.md b/packaging/installer/methods/manual.md
index 0b7bdb279bc5d5..423b68a07fe64c 100644
--- a/packaging/installer/methods/manual.md
+++ b/packaging/installer/methods/manual.md
@@ -210,7 +210,7 @@ cd netdata
Unlike the [`kickstart.sh`](/packaging/installer/methods/kickstart.md), the `netdata-installer.sh` script does
not allow you to automatically [connect](/src/claim/README.md) your node to Netdata Cloud immediately after installation.
-See the [connect to cloud](/src/claim/README.md) doc for details on connecting a node with a manual installation of Netdata.
+See the [connect to Netdata Cloud](/src/claim/README.md) doc for details on connecting a node with a manual installation of Netdata.
### 'nonrepresentable section on output' errors
diff --git a/packaging/installer/methods/source.md b/packaging/installer/methods/source.md
index f09db53d0db228..cb14eeabdc83d4 100644
--- a/packaging/installer/methods/source.md
+++ b/packaging/installer/methods/source.md
@@ -45,7 +45,7 @@ are using a source tarball published by the Netdata project, then these are incl
of the Git repository, you may need to explicitly fetch and update the submodules using `git submodule update
--init --recursive`.
-### Netdata cloud
+### Netdata Cloud
## Building Netdata
@@ -93,7 +93,7 @@ On Linux systems, Netdata has support for using the kernel's eBPF
interface to monitor performance-related VFS, network, and process events,
allowing for insights into process lifetimes and file access
patterns. Using this functionality requires additional code managed in
-a separate repository from the core Netdata agent. You can either install
+a separate repository from the core Netdata Agent. You can either install
a pre-built copy of the required code, or build it locally.
#### Installing the pre-built eBPF code
diff --git a/src/claim/README.md b/src/claim/README.md
index a0af190b952b24..146992a04ae70e 100644
--- a/src/claim/README.md
+++ b/src/claim/README.md
@@ -1,7 +1,7 @@
# Connect Agent to Cloud
This section guides you through installing and securely connecting a new Netdata Agent to Netdata Cloud via the
-encrypted Agent-Cloud Link ([ACLK](/src/aclk/README.md)). Connecting your agent to Netdata Cloud unlocks additional
+encrypted Agent-Cloud Link ([ACLK](/src/aclk/README.md)). Connecting your Agent to Netdata Cloud unlocks additional
features like centralized monitoring and easier collaboration.
## Connect
@@ -70,8 +70,8 @@ example:
insecure = no
```
-If the agent is already running, you can either run `netdatacli reload-claiming-state` or restart the agent.
-Otherwise, the agent will be claimed when it starts.
+If the Agent is already running, you can either run `netdatacli reload-claiming-state` or restart the Agent.
+Otherwise, the Agent will be claimed when it starts.
If the claiming process fails, the reason will be logged in daemon.log (search for "CLAIM") and the `cloud` section of `http://ip:19999/api/v2/info`.
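+
+As an illustrative check (the `jq` dependency is an assumption; any JSON viewer works), you can apply a new `claim.conf` and then confirm the claiming status reported by the Agent:
+
+```bash
+# Pick up the new claiming configuration without a full restart
+sudo netdatacli reload-claiming-state
+
+# Inspect the cloud section of the local Agent's info endpoint
+curl -s http://localhost:19999/api/v2/info | jq '.cloud'
+```
+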
@@ -102,7 +102,7 @@ sudo rm -rf cloud.d/
> **IMPORTANT**
>
-> Keep in mind that the Agent will be **re-claimed automatically** if the environment variables or `claim.conf` exist when the agent is restarted.
+> Keep in mind that the Agent will be **re-claimed automatically** if the environment variables or `claim.conf` exist when the Agent is restarted.
This node no longer has access to the credentials it was used when connecting to Netdata Cloud via the ACLK. You will
still be able to see this node in your Rooms in an **unreachable** state.
diff --git a/src/collectors/ebpf.plugin/README.md b/src/collectors/ebpf.plugin/README.md
index 1246fec04f1be7..7f502f4f33eb3d 100644
--- a/src/collectors/ebpf.plugin/README.md
+++ b/src/collectors/ebpf.plugin/README.md
@@ -957,7 +957,7 @@ Then compile your `netdata_ebpf.te` file with the following commands to create a
# semodule_package -o netdata_ebpf.pp -m netdata_ebpf.mod
```
-Finally, you can load the new policy and start the Netdata agent again:
+Finally, you can load the new policy and start the Netdata Agent again:
```bash
# semodule -i netdata_ebpf.pp
diff --git a/src/collectors/proc.plugin/README.md b/src/collectors/proc.plugin/README.md
index 8523309c7c9d5b..d6189bad68de87 100644
--- a/src/collectors/proc.plugin/README.md
+++ b/src/collectors/proc.plugin/README.md
@@ -581,7 +581,7 @@ Default configuration will monitor only enabled infiniband ports, and refresh ne
## AMD GPUs
-This module monitors every AMD GPU card discovered at agent startup.
+This module monitors every AMD GPU card discovered at Agent startup.
### Monitored GPU metrics
diff --git a/src/collectors/profile.plugin/README.md b/src/collectors/profile.plugin/README.md
index 992e6de9929407..a63671c81bef46 100644
--- a/src/collectors/profile.plugin/README.md
+++ b/src/collectors/profile.plugin/README.md
@@ -1,6 +1,6 @@
# profile.plugin
-This plugin allows someone to backfill an agent with random data.
+This plugin allows you to backfill an Agent with random data.
A user can specify:
diff --git a/src/collectors/systemd-journal.plugin/README.md b/src/collectors/systemd-journal.plugin/README.md
index 74eba78de0707c..890f9928110704 100644
--- a/src/collectors/systemd-journal.plugin/README.md
+++ b/src/collectors/systemd-journal.plugin/README.md
@@ -396,15 +396,15 @@ free Netdata Cloud account.
### Is any of my data exposed to Netdata Cloud from this plugin?
-No. When you access the agent directly, none of your data passes through Netdata Cloud.
+No. When you access the Agent directly, none of your data passes through Netdata Cloud.
You need a free Netdata Cloud account only to verify your identity and enable the use of
-Netdata Functions. Once this is done, all the data flow directly from your Netdata agent
+Netdata Functions. Once this is done, all the data flows directly from your Netdata Agent
to your web browser.
Also check [this discussion](https://github.com/netdata/netdata/discussions/16136).
When you access Netdata via `https://app.netdata.cloud`, your data travel via Netdata Cloud,
-but they are not stored in Netdata Cloud. This is to allow you access your Netdata agents from
+but they are not stored in Netdata Cloud. This is to allow you to access your Netdata Agents from
anywhere. All communication from/to Netdata Cloud is encrypted.
### What are `volatile` and `persistent` journals?
diff --git a/src/collectors/windows-events.plugin/README.md b/src/collectors/windows-events.plugin/README.md
index ecaa4349ab049b..76b00f48114002 100644
--- a/src/collectors/windows-events.plugin/README.md
+++ b/src/collectors/windows-events.plugin/README.md
@@ -216,12 +216,12 @@ account.
### Is any of my data exposed to Netdata Cloud from this plugin?
-No. When you access the agent directly, none of your data passes through Netdata Cloud. You need a free Netdata
+No. When you access the Agent directly, none of your data passes through Netdata Cloud. You need a free Netdata
Cloud account only to verify your identity and enable the use of Netdata Functions. Once this is done, all the
-data flow directly from your Netdata agent to your web browser.
+data flows directly from your Netdata Agent to your web browser.
When you access Netdata via https://app.netdata.cloud, your data travel via Netdata Cloud, but they are not stored
-in Netdata Cloud. This is to allow you access your Netdata agents from anywhere. All communication from/to
+in Netdata Cloud. This is to allow you to access your Netdata Agents from anywhere. All communication from/to
Netdata Cloud is encrypted.
### What are the different types of event logs supported by this plugin?
diff --git a/src/daemon/config/README.md b/src/daemon/config/README.md
index 7217ec4ea3bbb1..a674d7d1d7f259 100644
--- a/src/daemon/config/README.md
+++ b/src/daemon/config/README.md
@@ -167,7 +167,7 @@ monitoring](/src/health/README.md).
| script to execute on alarm | `/usr/libexec/netdata/plugins.d/alarm-notify.sh` | The script that sends alert notifications. Note that in versions before 1.16, the plugins.d directory may be installed in a different location in certain OSs (e.g. under `/usr/lib/netdata`). |
| run at least every | `10s` | Controls how often all alert conditions should be evaluated. |
| postpone alarms during hibernation for | `1m` | Prevents false alerts. May need to be increased if you get alerts during hibernation. |
-| health log retention | `5d` | Specifies the history of alert events (in seconds) kept in the agent's sqlite database. |
+| health log retention | `5d` | Specifies the retention period of alert events kept in the Agent's SQLite database. |
| enabled alarms | * | Defines which alerts to load from both user and stock directories. This is a [simple pattern](/src/libnetdata/simple_pattern/README.md) list of alert or template names. Can be used to disable specific alerts. For example, `enabled alarms = !oom_kill *` will load all alerts except `oom_kill`. |
### [web] section options
diff --git a/src/database/README.md b/src/database/README.md
index e861582d468a6b..771a2bbf02631e 100644
--- a/src/database/README.md
+++ b/src/database/README.md
@@ -45,7 +45,7 @@ You can select the database mode by editing `netdata.conf` and setting:
## Netdata Longer Metrics Retention
Metrics retention is controlled only by the disk space allocated to storing metrics. But it also affects the memory and
-CPU required by the agent to query longer timeframes.
+CPU required by the Agent to query longer timeframes.
Since Netdata Agents usually run on the edge, on production systems, Netdata Agent **parents** should be considered.
When having a [**parent - child**](/docs/observability-centralization-points/README.md) setup, the child (the
diff --git a/src/database/engine/README.md b/src/database/engine/README.md
index 078271228accab..c95b71209fee04 100644
--- a/src/database/engine/README.md
+++ b/src/database/engine/README.md
@@ -122,7 +122,7 @@ Until **hot pages** and **dirty pages** are **flushed** to disk they are at risk
power failure), as they are stored only in memory.
The supported way of ensuring high data availability is the use of Netdata Parents to stream the data in real-time to
-multiple other Netdata agents.
+multiple other Netdata Agents.
## Memory requirements and retention
diff --git a/src/exporting/json/integrations/json.md b/src/exporting/json/integrations/json.md
index b6d87249254686..7cf95a54689483 100644
--- a/src/exporting/json/integrations/json.md
+++ b/src/exporting/json/integrations/json.md
@@ -13,7 +13,7 @@ endmeta-->
-Use the JSON connector for the exporting engine to archive your agent's metrics to JSON document databases for long-term storage,
+Use the JSON connector for the exporting engine to archive your Agent's metrics to JSON document databases for long-term storage,
further analysis, or correlation with data from other sources
diff --git a/src/exporting/json/metadata.yaml b/src/exporting/json/metadata.yaml
index 75abfdac333e15..c1bd8ee8ec5ea1 100644
--- a/src/exporting/json/metadata.yaml
+++ b/src/exporting/json/metadata.yaml
@@ -12,7 +12,7 @@ keywords:
- json
overview:
exporter_description: |
- Use the JSON connector for the exporting engine to archive your agent's metrics to JSON document databases for long-term storage,
+ Use the JSON connector for the exporting engine to archive your Agent's metrics to JSON document databases for long-term storage,
further analysis, or correlation with data from other sources
exporter_limitations: ''
setup:
diff --git a/src/exporting/mongodb/integrations/mongodb.md b/src/exporting/mongodb/integrations/mongodb.md
index 9d333bd09a966b..495563fd659227 100644
--- a/src/exporting/mongodb/integrations/mongodb.md
+++ b/src/exporting/mongodb/integrations/mongodb.md
@@ -13,7 +13,7 @@ endmeta-->
-Use the MongoDB connector for the exporting engine to archive your agent's metrics to a MongoDB database
+Use the MongoDB connector for the exporting engine to archive your Agent's metrics to a MongoDB database
for long-term storage, further analysis, or correlation with data from other sources.
diff --git a/src/exporting/mongodb/metadata.yaml b/src/exporting/mongodb/metadata.yaml
index 6597df7147978f..1b51cced156e67 100644
--- a/src/exporting/mongodb/metadata.yaml
+++ b/src/exporting/mongodb/metadata.yaml
@@ -12,7 +12,7 @@ keywords:
- MongoDB
overview:
exporter_description: |
- Use the MongoDB connector for the exporting engine to archive your agent's metrics to a MongoDB database
+ Use the MongoDB connector for the exporting engine to archive your Agent's metrics to a MongoDB database
for long-term storage, further analysis, or correlation with data from other sources.
exporter_limitations: ''
setup:
diff --git a/src/go/plugin/go.d/modules/activemq/testdata/config.json b/src/go/plugin/go.d/modules/activemq/testdata/config.json
index 13327dd3fab2e4..8c22484838af84 100644
--- a/src/go/plugin/go.d/modules/activemq/testdata/config.json
+++ b/src/go/plugin/go.d/modules/activemq/testdata/config.json
@@ -21,5 +21,6 @@
"tls_ca": "ok",
"tls_cert": "ok",
"tls_key": "ok",
- "tls_skip_verify": true
+ "tls_skip_verify": true,
+ "force_http2": true
}
diff --git a/src/go/plugin/go.d/modules/activemq/testdata/config.yaml b/src/go/plugin/go.d/modules/activemq/testdata/config.yaml
index dbb4232e98ac0f..0d68150df55f59 100644
--- a/src/go/plugin/go.d/modules/activemq/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/activemq/testdata/config.yaml
@@ -20,3 +20,4 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
diff --git a/src/go/plugin/go.d/modules/apache/config_schema.json b/src/go/plugin/go.d/modules/apache/config_schema.json
index 4c68bbd578062c..92dcc4eeda1933 100644
--- a/src/go/plugin/go.d/modules/apache/config_schema.json
+++ b/src/go/plugin/go.d/modules/apache/config_schema.json
@@ -30,6 +30,11 @@
"description": "If set, the client will not follow HTTP redirects automatically.",
"type": "boolean"
},
+ "force_http2": {
+ "title": "Force HTTP2",
+ "description": "If set, forces the use of HTTP/2 protocol for all requests, even over plain TCP (h2c).",
+ "type": "boolean"
+ },
"username": {
"title": "Username",
"description": "The username for basic authentication.",
@@ -143,7 +148,8 @@
"update_every",
"url",
"timeout",
- "not_follow_redirects"
+ "not_follow_redirects",
+ "force_http2"
]
},
{
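The schema hunks above introduce the new `force_http2` option for the Apache collector. As a hedged illustration only, a go.d job enabling it might look like the sketch below; the `go.d/apache.conf` location, the job name, and the example values follow common go.d conventions and are assumptions rather than part of this change.

```yaml
# Illustrative go.d/apache.conf job (values are placeholders).
jobs:
  - name: local
    url: http://127.0.0.1/server-status?auto
    timeout: 1
    not_follow_redirects: no
    force_http2: yes   # speak HTTP/2 for all requests, even over plain TCP (h2c)
```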
diff --git a/src/go/plugin/go.d/modules/apache/integrations/apache.md b/src/go/plugin/go.d/modules/apache/integrations/apache.md
index c97d26c9057ef8..50e35527ccba5b 100644
--- a/src/go/plugin/go.d/modules/apache/integrations/apache.md
+++ b/src/go/plugin/go.d/modules/apache/integrations/apache.md
@@ -137,6 +137,7 @@ The following options can be defined globally: update_every, autodetection_retry
| body | HTTP request body. | | no |
| headers | HTTP request headers. | | no |
| not_follow_redirects | Redirect handling policy. Controls whether the client follows redirects. | no | no |
+| force_http2 | Forces the use of HTTP/2 protocol for all requests, even over plain TCP (h2c). | no | no |
| tls_skip_verify | Server certificate chain and hostname validation policy. Controls whether the client performs this check. | no | no |
| tls_ca | Certification authority that the client uses when verifying the server's certificates. | | no |
| tls_cert | Client TLS certificate. | | no |
diff --git a/src/go/plugin/go.d/modules/apache/integrations/httpd.md b/src/go/plugin/go.d/modules/apache/integrations/httpd.md
index 02b3face7dd273..f1d2d07f5b726f 100644
--- a/src/go/plugin/go.d/modules/apache/integrations/httpd.md
+++ b/src/go/plugin/go.d/modules/apache/integrations/httpd.md
@@ -137,6 +137,7 @@ The following options can be defined globally: update_every, autodetection_retry
| body | HTTP request body. | | no |
| headers | HTTP request headers. | | no |
| not_follow_redirects | Redirect handling policy. Controls whether the client follows redirects. | no | no |
+| force_http2 | Forces the use of HTTP/2 protocol for all requests, even over plain TCP (h2c). | no | no |
| tls_skip_verify | Server certificate chain and hostname validation policy. Controls whether the client performs this check. | no | no |
| tls_ca | Certification authority that the client uses when verifying the server's certificates. | | no |
| tls_cert | Client TLS certificate. | | no |
diff --git a/src/go/plugin/go.d/modules/apache/metadata.yaml b/src/go/plugin/go.d/modules/apache/metadata.yaml
index bfab73fcfe45a8..2623dd9ef02bdb 100644
--- a/src/go/plugin/go.d/modules/apache/metadata.yaml
+++ b/src/go/plugin/go.d/modules/apache/metadata.yaml
@@ -119,6 +119,10 @@ modules:
description: Redirect handling policy. Controls whether the client follows redirects.
default_value: no
required: false
+ - name: force_http2
+ description: Forces the use of HTTP/2 protocol for all requests, even over plain TCP (h2c).
+ default_value: no
+ required: false
- name: tls_skip_verify
description: Server certificate chain and hostname validation policy. Controls whether the client performs this check.
default_value: no
diff --git a/src/go/plugin/go.d/modules/apache/testdata/config.json b/src/go/plugin/go.d/modules/apache/testdata/config.json
index 984c3ed6e7a880..adedab97fdc7df 100644
--- a/src/go/plugin/go.d/modules/apache/testdata/config.json
+++ b/src/go/plugin/go.d/modules/apache/testdata/config.json
@@ -16,5 +16,6 @@
"tls_ca": "ok",
"tls_cert": "ok",
"tls_key": "ok",
- "tls_skip_verify": true
+ "tls_skip_verify": true,
+ "force_http2": true
}
diff --git a/src/go/plugin/go.d/modules/apache/testdata/config.yaml b/src/go/plugin/go.d/modules/apache/testdata/config.yaml
index 8558b61cc05cf8..744c7c996d5a21 100644
--- a/src/go/plugin/go.d/modules/apache/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/apache/testdata/config.yaml
@@ -15,3 +15,4 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
diff --git a/src/go/plugin/go.d/modules/bind/testdata/config.json b/src/go/plugin/go.d/modules/bind/testdata/config.json
index 145df9ff4a40bd..e14f33913765d0 100644
--- a/src/go/plugin/go.d/modules/bind/testdata/config.json
+++ b/src/go/plugin/go.d/modules/bind/testdata/config.json
@@ -17,5 +17,6 @@
"tls_cert": "ok",
"tls_key": "ok",
"tls_skip_verify": true,
+ "force_http2": true,
"permit_view": "ok"
}
diff --git a/src/go/plugin/go.d/modules/bind/testdata/config.yaml b/src/go/plugin/go.d/modules/bind/testdata/config.yaml
index cc0a33b7470a47..e7f7aca429a317 100644
--- a/src/go/plugin/go.d/modules/bind/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/bind/testdata/config.yaml
@@ -15,4 +15,5 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
permit_view: "ok"
diff --git a/src/go/plugin/go.d/modules/cassandra/testdata/config.json b/src/go/plugin/go.d/modules/cassandra/testdata/config.json
index 984c3ed6e7a880..adedab97fdc7df 100644
--- a/src/go/plugin/go.d/modules/cassandra/testdata/config.json
+++ b/src/go/plugin/go.d/modules/cassandra/testdata/config.json
@@ -16,5 +16,6 @@
"tls_ca": "ok",
"tls_cert": "ok",
"tls_key": "ok",
- "tls_skip_verify": true
+ "tls_skip_verify": true,
+ "force_http2": true
}
diff --git a/src/go/plugin/go.d/modules/cassandra/testdata/config.yaml b/src/go/plugin/go.d/modules/cassandra/testdata/config.yaml
index 8558b61cc05cf8..744c7c996d5a21 100644
--- a/src/go/plugin/go.d/modules/cassandra/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/cassandra/testdata/config.yaml
@@ -15,3 +15,4 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
diff --git a/src/go/plugin/go.d/modules/ceph/testdata/config.json b/src/go/plugin/go.d/modules/ceph/testdata/config.json
index 984c3ed6e7a880..adedab97fdc7df 100644
--- a/src/go/plugin/go.d/modules/ceph/testdata/config.json
+++ b/src/go/plugin/go.d/modules/ceph/testdata/config.json
@@ -16,5 +16,6 @@
"tls_ca": "ok",
"tls_cert": "ok",
"tls_key": "ok",
- "tls_skip_verify": true
+ "tls_skip_verify": true,
+ "force_http2": true
}
diff --git a/src/go/plugin/go.d/modules/ceph/testdata/config.yaml b/src/go/plugin/go.d/modules/ceph/testdata/config.yaml
index 8558b61cc05cf8..744c7c996d5a21 100644
--- a/src/go/plugin/go.d/modules/ceph/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/ceph/testdata/config.yaml
@@ -15,3 +15,4 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
diff --git a/src/go/plugin/go.d/modules/clickhouse/testdata/config.json b/src/go/plugin/go.d/modules/clickhouse/testdata/config.json
index 984c3ed6e7a880..adedab97fdc7df 100644
--- a/src/go/plugin/go.d/modules/clickhouse/testdata/config.json
+++ b/src/go/plugin/go.d/modules/clickhouse/testdata/config.json
@@ -16,5 +16,6 @@
"tls_ca": "ok",
"tls_cert": "ok",
"tls_key": "ok",
- "tls_skip_verify": true
+ "tls_skip_verify": true,
+ "force_http2": true
}
diff --git a/src/go/plugin/go.d/modules/clickhouse/testdata/config.yaml b/src/go/plugin/go.d/modules/clickhouse/testdata/config.yaml
index 8558b61cc05cf8..744c7c996d5a21 100644
--- a/src/go/plugin/go.d/modules/clickhouse/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/clickhouse/testdata/config.yaml
@@ -15,3 +15,4 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
diff --git a/src/go/plugin/go.d/modules/cockroachdb/testdata/config.json b/src/go/plugin/go.d/modules/cockroachdb/testdata/config.json
index 984c3ed6e7a880..adedab97fdc7df 100644
--- a/src/go/plugin/go.d/modules/cockroachdb/testdata/config.json
+++ b/src/go/plugin/go.d/modules/cockroachdb/testdata/config.json
@@ -16,5 +16,6 @@
"tls_ca": "ok",
"tls_cert": "ok",
"tls_key": "ok",
- "tls_skip_verify": true
+ "tls_skip_verify": true,
+ "force_http2": true
}
diff --git a/src/go/plugin/go.d/modules/cockroachdb/testdata/config.yaml b/src/go/plugin/go.d/modules/cockroachdb/testdata/config.yaml
index 8558b61cc05cf8..744c7c996d5a21 100644
--- a/src/go/plugin/go.d/modules/cockroachdb/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/cockroachdb/testdata/config.yaml
@@ -15,3 +15,4 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
diff --git a/src/go/plugin/go.d/modules/consul/integrations/consul.md b/src/go/plugin/go.d/modules/consul/integrations/consul.md
index 55a1bbf5905216..69f426602b24b4 100644
--- a/src/go/plugin/go.d/modules/consul/integrations/consul.md
+++ b/src/go/plugin/go.d/modules/consul/integrations/consul.md
@@ -180,7 +180,7 @@ The following alerts are available:
#### Enable Prometheus telemetry
-[Enable](https://developer.hashicorp.com/consul/docs/agent/config/config-files#telemetry-prometheus_retention_time) telemetry on your Consul agent, by increasing the value of `prometheus_retention_time` from `0`.
+[Enable](https://developer.hashicorp.com/consul/docs/agent/config/config-files#telemetry-prometheus_retention_time) telemetry on your Consul agent by increasing the value of `prometheus_retention_time` from `0`.
#### Add required ACLs to Token
diff --git a/src/go/plugin/go.d/modules/consul/metadata.yaml b/src/go/plugin/go.d/modules/consul/metadata.yaml
index 34445cd7e8270e..63500fb3a84bcd 100644
--- a/src/go/plugin/go.d/modules/consul/metadata.yaml
+++ b/src/go/plugin/go.d/modules/consul/metadata.yaml
@@ -58,7 +58,7 @@ modules:
list:
- title: Enable Prometheus telemetry
description: |
- [Enable](https://developer.hashicorp.com/consul/docs/agent/config/config-files#telemetry-prometheus_retention_time) telemetry on your Consul agent, by increasing the value of `prometheus_retention_time` from `0`.
+ [Enable](https://developer.hashicorp.com/consul/docs/agent/config/config-files#telemetry-prometheus_retention_time) telemetry on your Consul Agent, by increasing the value of `prometheus_retention_time` from `0`.
- title: Add required ACLs to Token
description: |
Required **only if authentication is enabled**.
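The prerequisites above cover enabling Consul telemetry and, when ACLs are in use, supplying a token. As a hedged sketch only, a go.d Consul job carrying that token might look like this; the `go.d/consul.conf` path, the default API address, and the token value are illustrative assumptions.

```yaml
# Illustrative go.d/consul.conf job (token is a placeholder, needed only when ACLs are enabled).
jobs:
  - name: local
    url: http://127.0.0.1:8500
    acl_token: "<your-consul-acl-token>"
```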
diff --git a/src/go/plugin/go.d/modules/consul/testdata/config.json b/src/go/plugin/go.d/modules/consul/testdata/config.json
index bcd07a41b4e865..24908ca16d4f54 100644
--- a/src/go/plugin/go.d/modules/consul/testdata/config.json
+++ b/src/go/plugin/go.d/modules/consul/testdata/config.json
@@ -17,5 +17,6 @@
"tls_cert": "ok",
"tls_key": "ok",
"tls_skip_verify": true,
- "acl_token": "ok"
+ "acl_token": "ok",
+ "force_http2": true
}
diff --git a/src/go/plugin/go.d/modules/consul/testdata/config.yaml b/src/go/plugin/go.d/modules/consul/testdata/config.yaml
index def554c7e66a27..705904fa9ffc22 100644
--- a/src/go/plugin/go.d/modules/consul/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/consul/testdata/config.yaml
@@ -16,3 +16,4 @@ tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
acl_token: "ok"
+force_http2: yes
diff --git a/src/go/plugin/go.d/modules/coredns/testdata/config.json b/src/go/plugin/go.d/modules/coredns/testdata/config.json
index 2dc54a1a2be4d0..5b11abe281d251 100644
--- a/src/go/plugin/go.d/modules/coredns/testdata/config.json
+++ b/src/go/plugin/go.d/modules/coredns/testdata/config.json
@@ -17,6 +17,7 @@
"tls_cert": "ok",
"tls_key": "ok",
"tls_skip_verify": true,
+ "force_http2": true,
"per_server_stats": {
"includes": [
"ok"
diff --git a/src/go/plugin/go.d/modules/coredns/testdata/config.yaml b/src/go/plugin/go.d/modules/coredns/testdata/config.yaml
index be474167fd0fb5..5551696f17c8ec 100644
--- a/src/go/plugin/go.d/modules/coredns/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/coredns/testdata/config.yaml
@@ -15,6 +15,7 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
per_server_stats:
includes:
- "ok"
diff --git a/src/go/plugin/go.d/modules/couchbase/testdata/config.json b/src/go/plugin/go.d/modules/couchbase/testdata/config.json
index 984c3ed6e7a880..adedab97fdc7df 100644
--- a/src/go/plugin/go.d/modules/couchbase/testdata/config.json
+++ b/src/go/plugin/go.d/modules/couchbase/testdata/config.json
@@ -16,5 +16,6 @@
"tls_ca": "ok",
"tls_cert": "ok",
"tls_key": "ok",
- "tls_skip_verify": true
+ "tls_skip_verify": true,
+ "force_http2": true
}
diff --git a/src/go/plugin/go.d/modules/couchbase/testdata/config.yaml b/src/go/plugin/go.d/modules/couchbase/testdata/config.yaml
index 8558b61cc05cf8..744c7c996d5a21 100644
--- a/src/go/plugin/go.d/modules/couchbase/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/couchbase/testdata/config.yaml
@@ -15,3 +15,4 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
diff --git a/src/go/plugin/go.d/modules/couchdb/testdata/config.json b/src/go/plugin/go.d/modules/couchdb/testdata/config.json
index 0fa716e5d4f38d..19cca35eff95bc 100644
--- a/src/go/plugin/go.d/modules/couchdb/testdata/config.json
+++ b/src/go/plugin/go.d/modules/couchdb/testdata/config.json
@@ -17,6 +17,7 @@
"tls_cert": "ok",
"tls_key": "ok",
"tls_skip_verify": true,
+ "force_http2": true,
"node": "ok",
"databases": "ok"
}
diff --git a/src/go/plugin/go.d/modules/couchdb/testdata/config.yaml b/src/go/plugin/go.d/modules/couchdb/testdata/config.yaml
index 4968ed263b4bb5..22934dcd516cb0 100644
--- a/src/go/plugin/go.d/modules/couchdb/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/couchdb/testdata/config.yaml
@@ -15,5 +15,6 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
node: "ok"
databases: "ok"
diff --git a/src/go/plugin/go.d/modules/dnsdist/testdata/config.json b/src/go/plugin/go.d/modules/dnsdist/testdata/config.json
index 984c3ed6e7a880..adedab97fdc7df 100644
--- a/src/go/plugin/go.d/modules/dnsdist/testdata/config.json
+++ b/src/go/plugin/go.d/modules/dnsdist/testdata/config.json
@@ -16,5 +16,6 @@
"tls_ca": "ok",
"tls_cert": "ok",
"tls_key": "ok",
- "tls_skip_verify": true
+ "tls_skip_verify": true,
+ "force_http2": true
}
diff --git a/src/go/plugin/go.d/modules/dnsdist/testdata/config.yaml b/src/go/plugin/go.d/modules/dnsdist/testdata/config.yaml
index 8558b61cc05cf8..744c7c996d5a21 100644
--- a/src/go/plugin/go.d/modules/dnsdist/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/dnsdist/testdata/config.yaml
@@ -15,3 +15,4 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
diff --git a/src/go/plugin/go.d/modules/docker_engine/testdata/config.json b/src/go/plugin/go.d/modules/docker_engine/testdata/config.json
index 984c3ed6e7a880..adedab97fdc7df 100644
--- a/src/go/plugin/go.d/modules/docker_engine/testdata/config.json
+++ b/src/go/plugin/go.d/modules/docker_engine/testdata/config.json
@@ -16,5 +16,6 @@
"tls_ca": "ok",
"tls_cert": "ok",
"tls_key": "ok",
- "tls_skip_verify": true
+ "tls_skip_verify": true,
+ "force_http2": true
}
diff --git a/src/go/plugin/go.d/modules/docker_engine/testdata/config.yaml b/src/go/plugin/go.d/modules/docker_engine/testdata/config.yaml
index 8558b61cc05cf8..744c7c996d5a21 100644
--- a/src/go/plugin/go.d/modules/docker_engine/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/docker_engine/testdata/config.yaml
@@ -15,3 +15,4 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
diff --git a/src/go/plugin/go.d/modules/dockerhub/testdata/config.json b/src/go/plugin/go.d/modules/dockerhub/testdata/config.json
index 3496e747cc42c8..3cbb615c810bc3 100644
--- a/src/go/plugin/go.d/modules/dockerhub/testdata/config.json
+++ b/src/go/plugin/go.d/modules/dockerhub/testdata/config.json
@@ -17,6 +17,7 @@
"tls_cert": "ok",
"tls_key": "ok",
"tls_skip_verify": true,
+ "force_http2": true,
"repositories": [
"ok"
]
diff --git a/src/go/plugin/go.d/modules/dockerhub/testdata/config.yaml b/src/go/plugin/go.d/modules/dockerhub/testdata/config.yaml
index 20c4ba61b43a8f..e25234c1475c80 100644
--- a/src/go/plugin/go.d/modules/dockerhub/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/dockerhub/testdata/config.yaml
@@ -15,5 +15,6 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
repositories:
- "ok"
diff --git a/src/go/plugin/go.d/modules/elasticsearch/testdata/config.json b/src/go/plugin/go.d/modules/elasticsearch/testdata/config.json
index a456d1d5619886..201ca27792b499 100644
--- a/src/go/plugin/go.d/modules/elasticsearch/testdata/config.json
+++ b/src/go/plugin/go.d/modules/elasticsearch/testdata/config.json
@@ -17,6 +17,7 @@
"tls_cert": "ok",
"tls_key": "ok",
"tls_skip_verify": true,
+ "force_http2": true,
"cluster_mode": true,
"collect_node_stats": true,
"collect_cluster_health": true,
diff --git a/src/go/plugin/go.d/modules/elasticsearch/testdata/config.yaml b/src/go/plugin/go.d/modules/elasticsearch/testdata/config.yaml
index af1b4a1369b133..87476834c6716f 100644
--- a/src/go/plugin/go.d/modules/elasticsearch/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/elasticsearch/testdata/config.yaml
@@ -15,6 +15,7 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
cluster_mode: yes
collect_node_stats: yes
collect_cluster_health: yes
diff --git a/src/go/plugin/go.d/modules/envoy/testdata/config.json b/src/go/plugin/go.d/modules/envoy/testdata/config.json
index 984c3ed6e7a880..adedab97fdc7df 100644
--- a/src/go/plugin/go.d/modules/envoy/testdata/config.json
+++ b/src/go/plugin/go.d/modules/envoy/testdata/config.json
@@ -16,5 +16,6 @@
"tls_ca": "ok",
"tls_cert": "ok",
"tls_key": "ok",
- "tls_skip_verify": true
+ "tls_skip_verify": true,
+ "force_http2": true
}
diff --git a/src/go/plugin/go.d/modules/envoy/testdata/config.yaml b/src/go/plugin/go.d/modules/envoy/testdata/config.yaml
index 8558b61cc05cf8..744c7c996d5a21 100644
--- a/src/go/plugin/go.d/modules/envoy/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/envoy/testdata/config.yaml
@@ -15,3 +15,4 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
diff --git a/src/go/plugin/go.d/modules/fluentd/testdata/config.json b/src/go/plugin/go.d/modules/fluentd/testdata/config.json
index 6477bd57d2cceb..cb7e83bfc329d5 100644
--- a/src/go/plugin/go.d/modules/fluentd/testdata/config.json
+++ b/src/go/plugin/go.d/modules/fluentd/testdata/config.json
@@ -17,5 +17,6 @@
"tls_cert": "ok",
"tls_key": "ok",
"tls_skip_verify": true,
+ "force_http2": true,
"permit_plugin_id": "ok"
}
diff --git a/src/go/plugin/go.d/modules/fluentd/testdata/config.yaml b/src/go/plugin/go.d/modules/fluentd/testdata/config.yaml
index 0afd42e6769842..c832c88dd50632 100644
--- a/src/go/plugin/go.d/modules/fluentd/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/fluentd/testdata/config.yaml
@@ -15,4 +15,5 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
permit_plugin_id: "ok"
diff --git a/src/go/plugin/go.d/modules/geth/testdata/config.json b/src/go/plugin/go.d/modules/geth/testdata/config.json
index 984c3ed6e7a880..adedab97fdc7df 100644
--- a/src/go/plugin/go.d/modules/geth/testdata/config.json
+++ b/src/go/plugin/go.d/modules/geth/testdata/config.json
@@ -16,5 +16,6 @@
"tls_ca": "ok",
"tls_cert": "ok",
"tls_key": "ok",
- "tls_skip_verify": true
+ "tls_skip_verify": true,
+ "force_http2": true
}
diff --git a/src/go/plugin/go.d/modules/geth/testdata/config.yaml b/src/go/plugin/go.d/modules/geth/testdata/config.yaml
index 8558b61cc05cf8..744c7c996d5a21 100644
--- a/src/go/plugin/go.d/modules/geth/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/geth/testdata/config.yaml
@@ -15,3 +15,4 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
diff --git a/src/go/plugin/go.d/modules/haproxy/testdata/config.json b/src/go/plugin/go.d/modules/haproxy/testdata/config.json
index 984c3ed6e7a880..adedab97fdc7df 100644
--- a/src/go/plugin/go.d/modules/haproxy/testdata/config.json
+++ b/src/go/plugin/go.d/modules/haproxy/testdata/config.json
@@ -16,5 +16,6 @@
"tls_ca": "ok",
"tls_cert": "ok",
"tls_key": "ok",
- "tls_skip_verify": true
+ "tls_skip_verify": true,
+ "force_http2": true
}
diff --git a/src/go/plugin/go.d/modules/haproxy/testdata/config.yaml b/src/go/plugin/go.d/modules/haproxy/testdata/config.yaml
index 8558b61cc05cf8..744c7c996d5a21 100644
--- a/src/go/plugin/go.d/modules/haproxy/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/haproxy/testdata/config.yaml
@@ -15,3 +15,4 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
diff --git a/src/go/plugin/go.d/modules/hdfs/testdata/config.json b/src/go/plugin/go.d/modules/hdfs/testdata/config.json
index 984c3ed6e7a880..adedab97fdc7df 100644
--- a/src/go/plugin/go.d/modules/hdfs/testdata/config.json
+++ b/src/go/plugin/go.d/modules/hdfs/testdata/config.json
@@ -16,5 +16,6 @@
"tls_ca": "ok",
"tls_cert": "ok",
"tls_key": "ok",
- "tls_skip_verify": true
+ "tls_skip_verify": true,
+ "force_http2": true
}
diff --git a/src/go/plugin/go.d/modules/hdfs/testdata/config.yaml b/src/go/plugin/go.d/modules/hdfs/testdata/config.yaml
index 8558b61cc05cf8..744c7c996d5a21 100644
--- a/src/go/plugin/go.d/modules/hdfs/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/hdfs/testdata/config.yaml
@@ -15,3 +15,4 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
diff --git a/src/go/plugin/go.d/modules/httpcheck/testdata/config.json b/src/go/plugin/go.d/modules/httpcheck/testdata/config.json
index 649393cdda0dfe..9de1100be8a9b8 100644
--- a/src/go/plugin/go.d/modules/httpcheck/testdata/config.json
+++ b/src/go/plugin/go.d/modules/httpcheck/testdata/config.json
@@ -17,6 +17,7 @@
"tls_cert": "ok",
"tls_key": "ok",
"tls_skip_verify": true,
+ "force_http2": true,
"status_accepted": [
123
],
diff --git a/src/go/plugin/go.d/modules/httpcheck/testdata/config.yaml b/src/go/plugin/go.d/modules/httpcheck/testdata/config.yaml
index 1a66590e6467c8..0af4237c2e3dd3 100644
--- a/src/go/plugin/go.d/modules/httpcheck/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/httpcheck/testdata/config.yaml
@@ -15,6 +15,7 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
status_accepted:
- 123
response_match: "ok"
diff --git a/src/go/plugin/go.d/modules/icecast/testdata/config.json b/src/go/plugin/go.d/modules/icecast/testdata/config.json
index 984c3ed6e7a880..adedab97fdc7df 100644
--- a/src/go/plugin/go.d/modules/icecast/testdata/config.json
+++ b/src/go/plugin/go.d/modules/icecast/testdata/config.json
@@ -16,5 +16,6 @@
"tls_ca": "ok",
"tls_cert": "ok",
"tls_key": "ok",
- "tls_skip_verify": true
+ "tls_skip_verify": true,
+ "force_http2": true
}
diff --git a/src/go/plugin/go.d/modules/icecast/testdata/config.yaml b/src/go/plugin/go.d/modules/icecast/testdata/config.yaml
index 8558b61cc05cf8..744c7c996d5a21 100644
--- a/src/go/plugin/go.d/modules/icecast/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/icecast/testdata/config.yaml
@@ -15,3 +15,4 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
diff --git a/src/go/plugin/go.d/modules/ipfs/testdata/config.json b/src/go/plugin/go.d/modules/ipfs/testdata/config.json
index b99928ca655e4c..83826f4933045c 100644
--- a/src/go/plugin/go.d/modules/ipfs/testdata/config.json
+++ b/src/go/plugin/go.d/modules/ipfs/testdata/config.json
@@ -17,6 +17,7 @@
"tls_cert": "ok",
"tls_key": "ok",
"tls_skip_verify": true,
+ "force_http2": true,
"pinapi": false,
"repoapi": false
}
diff --git a/src/go/plugin/go.d/modules/ipfs/testdata/config.yaml b/src/go/plugin/go.d/modules/ipfs/testdata/config.yaml
index 271695e647cb76..8be0e7bb3a58b8 100644
--- a/src/go/plugin/go.d/modules/ipfs/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/ipfs/testdata/config.yaml
@@ -15,5 +15,6 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
pinapi: no
repoapi: no
diff --git a/src/go/plugin/go.d/modules/k8s_kubelet/testdata/config.json b/src/go/plugin/go.d/modules/k8s_kubelet/testdata/config.json
index d854839533036a..bc694dc57745f6 100644
--- a/src/go/plugin/go.d/modules/k8s_kubelet/testdata/config.json
+++ b/src/go/plugin/go.d/modules/k8s_kubelet/testdata/config.json
@@ -17,5 +17,6 @@
"tls_cert": "ok",
"tls_key": "ok",
"tls_skip_verify": true,
+ "force_http2": true,
"token_path": "ok"
}
diff --git a/src/go/plugin/go.d/modules/k8s_kubelet/testdata/config.yaml b/src/go/plugin/go.d/modules/k8s_kubelet/testdata/config.yaml
index 9e4f3fdc45c2ed..ae468ded892ac2 100644
--- a/src/go/plugin/go.d/modules/k8s_kubelet/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/k8s_kubelet/testdata/config.yaml
@@ -15,4 +15,5 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
token_path: "ok"
diff --git a/src/go/plugin/go.d/modules/k8s_kubeproxy/testdata/config.json b/src/go/plugin/go.d/modules/k8s_kubeproxy/testdata/config.json
index 984c3ed6e7a880..adedab97fdc7df 100644
--- a/src/go/plugin/go.d/modules/k8s_kubeproxy/testdata/config.json
+++ b/src/go/plugin/go.d/modules/k8s_kubeproxy/testdata/config.json
@@ -16,5 +16,6 @@
"tls_ca": "ok",
"tls_cert": "ok",
"tls_key": "ok",
- "tls_skip_verify": true
+ "tls_skip_verify": true,
+ "force_http2": true
}
diff --git a/src/go/plugin/go.d/modules/k8s_kubeproxy/testdata/config.yaml b/src/go/plugin/go.d/modules/k8s_kubeproxy/testdata/config.yaml
index 8558b61cc05cf8..744c7c996d5a21 100644
--- a/src/go/plugin/go.d/modules/k8s_kubeproxy/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/k8s_kubeproxy/testdata/config.yaml
@@ -15,3 +15,4 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
diff --git a/src/go/plugin/go.d/modules/lighttpd/testdata/config.json b/src/go/plugin/go.d/modules/lighttpd/testdata/config.json
index 984c3ed6e7a880..adedab97fdc7df 100644
--- a/src/go/plugin/go.d/modules/lighttpd/testdata/config.json
+++ b/src/go/plugin/go.d/modules/lighttpd/testdata/config.json
@@ -16,5 +16,6 @@
"tls_ca": "ok",
"tls_cert": "ok",
"tls_key": "ok",
- "tls_skip_verify": true
+ "tls_skip_verify": true,
+ "force_http2": true
}
diff --git a/src/go/plugin/go.d/modules/lighttpd/testdata/config.yaml b/src/go/plugin/go.d/modules/lighttpd/testdata/config.yaml
index 8558b61cc05cf8..744c7c996d5a21 100644
--- a/src/go/plugin/go.d/modules/lighttpd/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/lighttpd/testdata/config.yaml
@@ -15,3 +15,4 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
diff --git a/src/go/plugin/go.d/modules/logstash/testdata/config.json b/src/go/plugin/go.d/modules/logstash/testdata/config.json
index 984c3ed6e7a880..adedab97fdc7df 100644
--- a/src/go/plugin/go.d/modules/logstash/testdata/config.json
+++ b/src/go/plugin/go.d/modules/logstash/testdata/config.json
@@ -16,5 +16,6 @@
"tls_ca": "ok",
"tls_cert": "ok",
"tls_key": "ok",
- "tls_skip_verify": true
+ "tls_skip_verify": true,
+ "force_http2": true
}
diff --git a/src/go/plugin/go.d/modules/logstash/testdata/config.yaml b/src/go/plugin/go.d/modules/logstash/testdata/config.yaml
index 8558b61cc05cf8..744c7c996d5a21 100644
--- a/src/go/plugin/go.d/modules/logstash/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/logstash/testdata/config.yaml
@@ -15,3 +15,4 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
diff --git a/src/go/plugin/go.d/modules/maxscale/testdata/config.json b/src/go/plugin/go.d/modules/maxscale/testdata/config.json
index 984c3ed6e7a880..adedab97fdc7df 100644
--- a/src/go/plugin/go.d/modules/maxscale/testdata/config.json
+++ b/src/go/plugin/go.d/modules/maxscale/testdata/config.json
@@ -16,5 +16,6 @@
"tls_ca": "ok",
"tls_cert": "ok",
"tls_key": "ok",
- "tls_skip_verify": true
+ "tls_skip_verify": true,
+ "force_http2": true
}
diff --git a/src/go/plugin/go.d/modules/maxscale/testdata/config.yaml b/src/go/plugin/go.d/modules/maxscale/testdata/config.yaml
index 8558b61cc05cf8..744c7c996d5a21 100644
--- a/src/go/plugin/go.d/modules/maxscale/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/maxscale/testdata/config.yaml
@@ -15,3 +15,4 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
diff --git a/src/go/plugin/go.d/modules/monit/testdata/config.json b/src/go/plugin/go.d/modules/monit/testdata/config.json
index 984c3ed6e7a880..adedab97fdc7df 100644
--- a/src/go/plugin/go.d/modules/monit/testdata/config.json
+++ b/src/go/plugin/go.d/modules/monit/testdata/config.json
@@ -16,5 +16,6 @@
"tls_ca": "ok",
"tls_cert": "ok",
"tls_key": "ok",
- "tls_skip_verify": true
+ "tls_skip_verify": true,
+ "force_http2": true
}
diff --git a/src/go/plugin/go.d/modules/monit/testdata/config.yaml b/src/go/plugin/go.d/modules/monit/testdata/config.yaml
index 8558b61cc05cf8..744c7c996d5a21 100644
--- a/src/go/plugin/go.d/modules/monit/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/monit/testdata/config.yaml
@@ -15,3 +15,4 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
diff --git a/src/go/plugin/go.d/modules/nginx/testdata/config.json b/src/go/plugin/go.d/modules/nginx/testdata/config.json
index 984c3ed6e7a880..adedab97fdc7df 100644
--- a/src/go/plugin/go.d/modules/nginx/testdata/config.json
+++ b/src/go/plugin/go.d/modules/nginx/testdata/config.json
@@ -16,5 +16,6 @@
"tls_ca": "ok",
"tls_cert": "ok",
"tls_key": "ok",
- "tls_skip_verify": true
+ "tls_skip_verify": true,
+ "force_http2": true
}
diff --git a/src/go/plugin/go.d/modules/nginx/testdata/config.yaml b/src/go/plugin/go.d/modules/nginx/testdata/config.yaml
index 8558b61cc05cf8..744c7c996d5a21 100644
--- a/src/go/plugin/go.d/modules/nginx/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/nginx/testdata/config.yaml
@@ -15,3 +15,4 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
diff --git a/src/go/plugin/go.d/modules/nginxplus/testdata/config.json b/src/go/plugin/go.d/modules/nginxplus/testdata/config.json
index 984c3ed6e7a880..adedab97fdc7df 100644
--- a/src/go/plugin/go.d/modules/nginxplus/testdata/config.json
+++ b/src/go/plugin/go.d/modules/nginxplus/testdata/config.json
@@ -16,5 +16,6 @@
"tls_ca": "ok",
"tls_cert": "ok",
"tls_key": "ok",
- "tls_skip_verify": true
+ "tls_skip_verify": true,
+ "force_http2": true
}
diff --git a/src/go/plugin/go.d/modules/nginxplus/testdata/config.yaml b/src/go/plugin/go.d/modules/nginxplus/testdata/config.yaml
index 8558b61cc05cf8..744c7c996d5a21 100644
--- a/src/go/plugin/go.d/modules/nginxplus/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/nginxplus/testdata/config.yaml
@@ -15,3 +15,4 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
diff --git a/src/go/plugin/go.d/modules/nginxunit/testdata/config.json b/src/go/plugin/go.d/modules/nginxunit/testdata/config.json
index 984c3ed6e7a880..adedab97fdc7df 100644
--- a/src/go/plugin/go.d/modules/nginxunit/testdata/config.json
+++ b/src/go/plugin/go.d/modules/nginxunit/testdata/config.json
@@ -16,5 +16,6 @@
"tls_ca": "ok",
"tls_cert": "ok",
"tls_key": "ok",
- "tls_skip_verify": true
+ "tls_skip_verify": true,
+ "force_http2": true
}
diff --git a/src/go/plugin/go.d/modules/nginxunit/testdata/config.yaml b/src/go/plugin/go.d/modules/nginxunit/testdata/config.yaml
index 8558b61cc05cf8..744c7c996d5a21 100644
--- a/src/go/plugin/go.d/modules/nginxunit/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/nginxunit/testdata/config.yaml
@@ -15,3 +15,4 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
diff --git a/src/go/plugin/go.d/modules/nginxvts/testdata/config.json b/src/go/plugin/go.d/modules/nginxvts/testdata/config.json
index 984c3ed6e7a880..adedab97fdc7df 100644
--- a/src/go/plugin/go.d/modules/nginxvts/testdata/config.json
+++ b/src/go/plugin/go.d/modules/nginxvts/testdata/config.json
@@ -16,5 +16,6 @@
"tls_ca": "ok",
"tls_cert": "ok",
"tls_key": "ok",
- "tls_skip_verify": true
+ "tls_skip_verify": true,
+ "force_http2": true
}
diff --git a/src/go/plugin/go.d/modules/nginxvts/testdata/config.yaml b/src/go/plugin/go.d/modules/nginxvts/testdata/config.yaml
index 8558b61cc05cf8..744c7c996d5a21 100644
--- a/src/go/plugin/go.d/modules/nginxvts/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/nginxvts/testdata/config.yaml
@@ -15,3 +15,4 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
diff --git a/src/go/plugin/go.d/modules/phpdaemon/testdata/config.json b/src/go/plugin/go.d/modules/phpdaemon/testdata/config.json
index 984c3ed6e7a880..adedab97fdc7df 100644
--- a/src/go/plugin/go.d/modules/phpdaemon/testdata/config.json
+++ b/src/go/plugin/go.d/modules/phpdaemon/testdata/config.json
@@ -16,5 +16,6 @@
"tls_ca": "ok",
"tls_cert": "ok",
"tls_key": "ok",
- "tls_skip_verify": true
+ "tls_skip_verify": true,
+ "force_http2": true
}
diff --git a/src/go/plugin/go.d/modules/phpdaemon/testdata/config.yaml b/src/go/plugin/go.d/modules/phpdaemon/testdata/config.yaml
index 8558b61cc05cf8..744c7c996d5a21 100644
--- a/src/go/plugin/go.d/modules/phpdaemon/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/phpdaemon/testdata/config.yaml
@@ -15,3 +15,4 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
diff --git a/src/go/plugin/go.d/modules/phpfpm/testdata/config.json b/src/go/plugin/go.d/modules/phpfpm/testdata/config.json
index 458343f7415aed..3a6e24bd5fe418 100644
--- a/src/go/plugin/go.d/modules/phpfpm/testdata/config.json
+++ b/src/go/plugin/go.d/modules/phpfpm/testdata/config.json
@@ -17,6 +17,7 @@
"tls_cert": "ok",
"tls_key": "ok",
"tls_skip_verify": true,
+ "force_http2": true,
"socket": "ok",
"address": "ok",
"fcgi_path": "ok"
diff --git a/src/go/plugin/go.d/modules/phpfpm/testdata/config.yaml b/src/go/plugin/go.d/modules/phpfpm/testdata/config.yaml
index 6c7bea094bab4f..092d0f96d05c8d 100644
--- a/src/go/plugin/go.d/modules/phpfpm/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/phpfpm/testdata/config.yaml
@@ -15,6 +15,7 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
socket: "ok"
address: "ok"
fcgi_path: "ok"
diff --git a/src/go/plugin/go.d/modules/pihole/testdata/config.json b/src/go/plugin/go.d/modules/pihole/testdata/config.json
index 2d82443b0642b6..5de773728f2136 100644
--- a/src/go/plugin/go.d/modules/pihole/testdata/config.json
+++ b/src/go/plugin/go.d/modules/pihole/testdata/config.json
@@ -17,5 +17,6 @@
"tls_cert": "ok",
"tls_key": "ok",
"tls_skip_verify": true,
+ "force_http2": true,
"setup_vars_path": "ok"
}
diff --git a/src/go/plugin/go.d/modules/pihole/testdata/config.yaml b/src/go/plugin/go.d/modules/pihole/testdata/config.yaml
index a9361246af839a..7122ad0931bcef 100644
--- a/src/go/plugin/go.d/modules/pihole/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/pihole/testdata/config.yaml
@@ -15,4 +15,5 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
setup_vars_path: "ok"
diff --git a/src/go/plugin/go.d/modules/postgres/integrations/postgresql.md b/src/go/plugin/go.d/modules/postgres/integrations/postgresql.md
index 48dab8dc9e7df7..72d7594de874f9 100644
--- a/src/go/plugin/go.d/modules/postgres/integrations/postgresql.md
+++ b/src/go/plugin/go.d/modules/postgres/integrations/postgresql.md
@@ -266,7 +266,7 @@ CREATE USER netdata;
GRANT pg_monitor TO netdata;
```
-After creating the new user, restart the Netdata agent with `sudo systemctl restart netdata`, or
+After creating the new user, restart the Netdata Agent with `sudo systemctl restart netdata`, or
the [appropriate method](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/start-stop-restart.md) for your
system.
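As a hedged companion to the setup steps above, a go.d job that authenticates with the newly created `netdata` user might look like the sketch below; the `go.d/postgres.conf` path, the `dsn` option name, and the connection string format are assumptions based on the collector's usual configuration, and the password is a placeholder.

```yaml
# Illustrative go.d/postgres.conf job using the dedicated monitoring user.
jobs:
  - name: local
    dsn: 'postgres://netdata:<password>@127.0.0.1:5432/postgres'   # placeholder credentials
```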
diff --git a/src/go/plugin/go.d/modules/postgres/metadata.yaml b/src/go/plugin/go.d/modules/postgres/metadata.yaml
index 7cdf4c7b7c9bbd..e85d52df134b70 100644
--- a/src/go/plugin/go.d/modules/postgres/metadata.yaml
+++ b/src/go/plugin/go.d/modules/postgres/metadata.yaml
@@ -68,7 +68,7 @@ modules:
GRANT pg_monitor TO netdata;
```
- After creating the new user, restart the Netdata agent with `sudo systemctl restart netdata`, or
+ After creating the new user, restart the Netdata Agent with `sudo systemctl restart netdata`, or
the [appropriate method](/docs/netdata-agent/start-stop-restart.md) for your
system.
configuration:
diff --git a/src/go/plugin/go.d/modules/powerdns/testdata/config.json b/src/go/plugin/go.d/modules/powerdns/testdata/config.json
index 984c3ed6e7a880..adedab97fdc7df 100644
--- a/src/go/plugin/go.d/modules/powerdns/testdata/config.json
+++ b/src/go/plugin/go.d/modules/powerdns/testdata/config.json
@@ -16,5 +16,6 @@
"tls_ca": "ok",
"tls_cert": "ok",
"tls_key": "ok",
- "tls_skip_verify": true
+ "tls_skip_verify": true,
+ "force_http2": true
}
diff --git a/src/go/plugin/go.d/modules/powerdns/testdata/config.yaml b/src/go/plugin/go.d/modules/powerdns/testdata/config.yaml
index 8558b61cc05cf8..744c7c996d5a21 100644
--- a/src/go/plugin/go.d/modules/powerdns/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/powerdns/testdata/config.yaml
@@ -15,3 +15,4 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
diff --git a/src/go/plugin/go.d/modules/powerdns_recursor/testdata/config.json b/src/go/plugin/go.d/modules/powerdns_recursor/testdata/config.json
index 984c3ed6e7a880..adedab97fdc7df 100644
--- a/src/go/plugin/go.d/modules/powerdns_recursor/testdata/config.json
+++ b/src/go/plugin/go.d/modules/powerdns_recursor/testdata/config.json
@@ -16,5 +16,6 @@
"tls_ca": "ok",
"tls_cert": "ok",
"tls_key": "ok",
- "tls_skip_verify": true
+ "tls_skip_verify": true,
+ "force_http2": true
}
diff --git a/src/go/plugin/go.d/modules/powerdns_recursor/testdata/config.yaml b/src/go/plugin/go.d/modules/powerdns_recursor/testdata/config.yaml
index 8558b61cc05cf8..744c7c996d5a21 100644
--- a/src/go/plugin/go.d/modules/powerdns_recursor/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/powerdns_recursor/testdata/config.yaml
@@ -15,3 +15,4 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
diff --git a/src/go/plugin/go.d/modules/prometheus/testdata/config.json b/src/go/plugin/go.d/modules/prometheus/testdata/config.json
index 75d7e9ba3cd38c..fa3d543d137493 100644
--- a/src/go/plugin/go.d/modules/prometheus/testdata/config.json
+++ b/src/go/plugin/go.d/modules/prometheus/testdata/config.json
@@ -17,6 +17,7 @@
"tls_cert": "ok",
"tls_key": "ok",
"tls_skip_verify": true,
+ "force_http2": true,
"name": "ok",
"app": "ok",
"label_prefix": "ok",
diff --git a/src/go/plugin/go.d/modules/prometheus/testdata/config.yaml b/src/go/plugin/go.d/modules/prometheus/testdata/config.yaml
index d7ab417ecdde8e..f00094bd988b7d 100644
--- a/src/go/plugin/go.d/modules/prometheus/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/prometheus/testdata/config.yaml
@@ -15,6 +15,7 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
name: "ok"
app: "ok"
label_prefix: "ok"
diff --git a/src/go/plugin/go.d/modules/pulsar/testdata/config.json b/src/go/plugin/go.d/modules/pulsar/testdata/config.json
index ab4f38fe0826c2..42a77ceac41404 100644
--- a/src/go/plugin/go.d/modules/pulsar/testdata/config.json
+++ b/src/go/plugin/go.d/modules/pulsar/testdata/config.json
@@ -17,6 +17,7 @@
"tls_cert": "ok",
"tls_key": "ok",
"tls_skip_verify": true,
+ "force_http2": true,
"topic_filter": {
"includes": [
"ok"
diff --git a/src/go/plugin/go.d/modules/pulsar/testdata/config.yaml b/src/go/plugin/go.d/modules/pulsar/testdata/config.yaml
index f2645d9e9f4007..e7a631527054e4 100644
--- a/src/go/plugin/go.d/modules/pulsar/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/pulsar/testdata/config.yaml
@@ -15,6 +15,7 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
topic_filter:
includes:
- "ok"
diff --git a/src/go/plugin/go.d/modules/puppet/testdata/config.json b/src/go/plugin/go.d/modules/puppet/testdata/config.json
index 984c3ed6e7a880..adedab97fdc7df 100644
--- a/src/go/plugin/go.d/modules/puppet/testdata/config.json
+++ b/src/go/plugin/go.d/modules/puppet/testdata/config.json
@@ -16,5 +16,6 @@
"tls_ca": "ok",
"tls_cert": "ok",
"tls_key": "ok",
- "tls_skip_verify": true
+ "tls_skip_verify": true,
+ "force_http2": true
}
diff --git a/src/go/plugin/go.d/modules/puppet/testdata/config.yaml b/src/go/plugin/go.d/modules/puppet/testdata/config.yaml
index 8558b61cc05cf8..744c7c996d5a21 100644
--- a/src/go/plugin/go.d/modules/puppet/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/puppet/testdata/config.yaml
@@ -15,3 +15,4 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
diff --git a/src/go/plugin/go.d/modules/rabbitmq/testdata/config.json b/src/go/plugin/go.d/modules/rabbitmq/testdata/config.json
index b3f637f06ab17c..a8159f1fef33df 100644
--- a/src/go/plugin/go.d/modules/rabbitmq/testdata/config.json
+++ b/src/go/plugin/go.d/modules/rabbitmq/testdata/config.json
@@ -17,5 +17,6 @@
"tls_cert": "ok",
"tls_key": "ok",
"tls_skip_verify": true,
+ "force_http2": true,
"collect_queues_metrics": true
}
diff --git a/src/go/plugin/go.d/modules/rabbitmq/testdata/config.yaml b/src/go/plugin/go.d/modules/rabbitmq/testdata/config.yaml
index 12bb79bece0570..3c62ad30a125bc 100644
--- a/src/go/plugin/go.d/modules/rabbitmq/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/rabbitmq/testdata/config.yaml
@@ -15,4 +15,5 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
collect_queues_metrics: yes
diff --git a/src/go/plugin/go.d/modules/riakkv/testdata/config.json b/src/go/plugin/go.d/modules/riakkv/testdata/config.json
index 984c3ed6e7a880..adedab97fdc7df 100644
--- a/src/go/plugin/go.d/modules/riakkv/testdata/config.json
+++ b/src/go/plugin/go.d/modules/riakkv/testdata/config.json
@@ -16,5 +16,6 @@
"tls_ca": "ok",
"tls_cert": "ok",
"tls_key": "ok",
- "tls_skip_verify": true
+ "tls_skip_verify": true,
+ "force_http2": true
}
diff --git a/src/go/plugin/go.d/modules/riakkv/testdata/config.yaml b/src/go/plugin/go.d/modules/riakkv/testdata/config.yaml
index 8558b61cc05cf8..744c7c996d5a21 100644
--- a/src/go/plugin/go.d/modules/riakkv/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/riakkv/testdata/config.yaml
@@ -15,3 +15,4 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
diff --git a/src/go/plugin/go.d/modules/rspamd/testdata/config.json b/src/go/plugin/go.d/modules/rspamd/testdata/config.json
index 984c3ed6e7a880..adedab97fdc7df 100644
--- a/src/go/plugin/go.d/modules/rspamd/testdata/config.json
+++ b/src/go/plugin/go.d/modules/rspamd/testdata/config.json
@@ -16,5 +16,6 @@
"tls_ca": "ok",
"tls_cert": "ok",
"tls_key": "ok",
- "tls_skip_verify": true
+ "tls_skip_verify": true,
+ "force_http2": true
}
diff --git a/src/go/plugin/go.d/modules/rspamd/testdata/config.yaml b/src/go/plugin/go.d/modules/rspamd/testdata/config.yaml
index 8558b61cc05cf8..744c7c996d5a21 100644
--- a/src/go/plugin/go.d/modules/rspamd/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/rspamd/testdata/config.yaml
@@ -15,3 +15,4 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
diff --git a/src/go/plugin/go.d/modules/scaleio/testdata/config.json b/src/go/plugin/go.d/modules/scaleio/testdata/config.json
index 984c3ed6e7a880..adedab97fdc7df 100644
--- a/src/go/plugin/go.d/modules/scaleio/testdata/config.json
+++ b/src/go/plugin/go.d/modules/scaleio/testdata/config.json
@@ -16,5 +16,6 @@
"tls_ca": "ok",
"tls_cert": "ok",
"tls_key": "ok",
- "tls_skip_verify": true
+ "tls_skip_verify": true,
+ "force_http2": true
}
diff --git a/src/go/plugin/go.d/modules/scaleio/testdata/config.yaml b/src/go/plugin/go.d/modules/scaleio/testdata/config.yaml
index 8558b61cc05cf8..744c7c996d5a21 100644
--- a/src/go/plugin/go.d/modules/scaleio/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/scaleio/testdata/config.yaml
@@ -15,3 +15,4 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
diff --git a/src/go/plugin/go.d/modules/squid/testdata/config.json b/src/go/plugin/go.d/modules/squid/testdata/config.json
index 984c3ed6e7a880..adedab97fdc7df 100644
--- a/src/go/plugin/go.d/modules/squid/testdata/config.json
+++ b/src/go/plugin/go.d/modules/squid/testdata/config.json
@@ -16,5 +16,6 @@
"tls_ca": "ok",
"tls_cert": "ok",
"tls_key": "ok",
- "tls_skip_verify": true
+ "tls_skip_verify": true,
+ "force_http2": true
}
diff --git a/src/go/plugin/go.d/modules/squid/testdata/config.yaml b/src/go/plugin/go.d/modules/squid/testdata/config.yaml
index 8558b61cc05cf8..744c7c996d5a21 100644
--- a/src/go/plugin/go.d/modules/squid/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/squid/testdata/config.yaml
@@ -15,3 +15,4 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
diff --git a/src/go/plugin/go.d/modules/supervisord/testdata/config.json b/src/go/plugin/go.d/modules/supervisord/testdata/config.json
index 825b0c394d96d8..15c4216b4b0245 100644
--- a/src/go/plugin/go.d/modules/supervisord/testdata/config.json
+++ b/src/go/plugin/go.d/modules/supervisord/testdata/config.json
@@ -7,5 +7,6 @@
"tls_ca": "ok",
"tls_cert": "ok",
"tls_key": "ok",
- "tls_skip_verify": true
+ "tls_skip_verify": true,
+ "force_http2": true
}
diff --git a/src/go/plugin/go.d/modules/supervisord/testdata/config.yaml b/src/go/plugin/go.d/modules/supervisord/testdata/config.yaml
index e1a01abd7146bf..022c36e72ac723 100644
--- a/src/go/plugin/go.d/modules/supervisord/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/supervisord/testdata/config.yaml
@@ -7,3 +7,4 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
diff --git a/src/go/plugin/go.d/modules/tengine/testdata/config.json b/src/go/plugin/go.d/modules/tengine/testdata/config.json
index 984c3ed6e7a880..adedab97fdc7df 100644
--- a/src/go/plugin/go.d/modules/tengine/testdata/config.json
+++ b/src/go/plugin/go.d/modules/tengine/testdata/config.json
@@ -16,5 +16,6 @@
"tls_ca": "ok",
"tls_cert": "ok",
"tls_key": "ok",
- "tls_skip_verify": true
+ "tls_skip_verify": true,
+ "force_http2": true
}
diff --git a/src/go/plugin/go.d/modules/tengine/testdata/config.yaml b/src/go/plugin/go.d/modules/tengine/testdata/config.yaml
index 8558b61cc05cf8..744c7c996d5a21 100644
--- a/src/go/plugin/go.d/modules/tengine/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/tengine/testdata/config.yaml
@@ -15,3 +15,4 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
diff --git a/src/go/plugin/go.d/modules/tomcat/integrations/tomcat.md b/src/go/plugin/go.d/modules/tomcat/integrations/tomcat.md
index 7097729562eb32..b62a15d009ad11 100644
--- a/src/go/plugin/go.d/modules/tomcat/integrations/tomcat.md
+++ b/src/go/plugin/go.d/modules/tomcat/integrations/tomcat.md
@@ -38,7 +38,7 @@ By default, this Tomcat collector cannot access the server's status page. To ena
#### Auto-Detection
-If the Netdata agent and Tomcat are on the same host, the collector will attempt to connect to the Tomcat server's status page at `http://localhost:8080/manager/status?XML=true`.
+If the Netdata Agent and Tomcat are on the same host, the collector will attempt to connect to the Tomcat server's status page at `http://localhost:8080/manager/status?XML=true`.
#### Limits
@@ -120,7 +120,7 @@ There are no alerts configured by default for this integration.
#### Access to Tomcat Status Endpoint
-The Netdata agent needs read-only access to its status endpoint to collect data from the Tomcat server.
+The Netdata Agent needs read-only access to its status endpoint to collect data from the Tomcat server.
You can achieve this by creating a dedicated user named `netdata` with read-only permissions specifically for accessing the [Server Status](https://tomcat.apache.org/tomcat-10.0-doc/manager-howto.html#Server_Status) endpoint.
diff --git a/src/go/plugin/go.d/modules/tomcat/metadata.yaml b/src/go/plugin/go.d/modules/tomcat/metadata.yaml
index d5815cf70ceae3..7b02bba230e9ec 100644
--- a/src/go/plugin/go.d/modules/tomcat/metadata.yaml
+++ b/src/go/plugin/go.d/modules/tomcat/metadata.yaml
@@ -39,7 +39,7 @@ modules:
default_behavior:
auto_detection:
description: >
- If the Netdata agent and Tomcat are on the same host, the collector will attempt to connect to the Tomcat server's status page at `http://localhost:8080/manager/status?XML=true`.
+ If the Netdata Agent and Tomcat are on the same host, the collector will attempt to connect to the Tomcat server's status page at `http://localhost:8080/manager/status?XML=true`.
limits:
description: ""
performance_impact:
@@ -49,7 +49,7 @@ modules:
list:
- title: Access to Tomcat Status Endpoint
description: |
- The Netdata agent needs read-only access to its status endpoint to collect data from the Tomcat server.
+ The Netdata Agent needs read-only access to its status endpoint to collect data from the Tomcat server.
You can achieve this by creating a dedicated user named `netdata` with read-only permissions specifically for accessing the [Server Status](https://tomcat.apache.org/tomcat-10.0-doc/manager-howto.html#Server_Status) endpoint.
diff --git a/src/go/plugin/go.d/modules/tomcat/testdata/config.json b/src/go/plugin/go.d/modules/tomcat/testdata/config.json
index 984c3ed6e7a880..adedab97fdc7df 100644
--- a/src/go/plugin/go.d/modules/tomcat/testdata/config.json
+++ b/src/go/plugin/go.d/modules/tomcat/testdata/config.json
@@ -16,5 +16,6 @@
"tls_ca": "ok",
"tls_cert": "ok",
"tls_key": "ok",
- "tls_skip_verify": true
+ "tls_skip_verify": true,
+ "force_http2": true
}
diff --git a/src/go/plugin/go.d/modules/tomcat/testdata/config.yaml b/src/go/plugin/go.d/modules/tomcat/testdata/config.yaml
index 8558b61cc05cf8..744c7c996d5a21 100644
--- a/src/go/plugin/go.d/modules/tomcat/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/tomcat/testdata/config.yaml
@@ -15,3 +15,4 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
diff --git a/src/go/plugin/go.d/modules/traefik/testdata/config.json b/src/go/plugin/go.d/modules/traefik/testdata/config.json
index 984c3ed6e7a880..adedab97fdc7df 100644
--- a/src/go/plugin/go.d/modules/traefik/testdata/config.json
+++ b/src/go/plugin/go.d/modules/traefik/testdata/config.json
@@ -16,5 +16,6 @@
"tls_ca": "ok",
"tls_cert": "ok",
"tls_key": "ok",
- "tls_skip_verify": true
+ "tls_skip_verify": true,
+ "force_http2": true
}
diff --git a/src/go/plugin/go.d/modules/traefik/testdata/config.yaml b/src/go/plugin/go.d/modules/traefik/testdata/config.yaml
index 8558b61cc05cf8..744c7c996d5a21 100644
--- a/src/go/plugin/go.d/modules/traefik/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/traefik/testdata/config.yaml
@@ -15,3 +15,4 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
diff --git a/src/go/plugin/go.d/modules/typesense/testdata/config.json b/src/go/plugin/go.d/modules/typesense/testdata/config.json
index 628fa6317c7d72..6c8695d384b0a8 100644
--- a/src/go/plugin/go.d/modules/typesense/testdata/config.json
+++ b/src/go/plugin/go.d/modules/typesense/testdata/config.json
@@ -17,5 +17,6 @@
"tls_ca": "ok",
"tls_cert": "ok",
"tls_key": "ok",
- "tls_skip_verify": true
+ "tls_skip_verify": true,
+ "force_http2": true
}
diff --git a/src/go/plugin/go.d/modules/typesense/testdata/config.yaml b/src/go/plugin/go.d/modules/typesense/testdata/config.yaml
index 7274c3ab066f28..ddabb8696ea6b8 100644
--- a/src/go/plugin/go.d/modules/typesense/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/typesense/testdata/config.yaml
@@ -16,3 +16,4 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
diff --git a/src/go/plugin/go.d/modules/vcsa/testdata/config.json b/src/go/plugin/go.d/modules/vcsa/testdata/config.json
index 984c3ed6e7a880..adedab97fdc7df 100644
--- a/src/go/plugin/go.d/modules/vcsa/testdata/config.json
+++ b/src/go/plugin/go.d/modules/vcsa/testdata/config.json
@@ -16,5 +16,6 @@
"tls_ca": "ok",
"tls_cert": "ok",
"tls_key": "ok",
- "tls_skip_verify": true
+ "tls_skip_verify": true,
+ "force_http2": true
}
diff --git a/src/go/plugin/go.d/modules/vcsa/testdata/config.yaml b/src/go/plugin/go.d/modules/vcsa/testdata/config.yaml
index 8558b61cc05cf8..744c7c996d5a21 100644
--- a/src/go/plugin/go.d/modules/vcsa/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/vcsa/testdata/config.yaml
@@ -15,3 +15,4 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
diff --git a/src/go/plugin/go.d/modules/vernemq/testdata/config.json b/src/go/plugin/go.d/modules/vernemq/testdata/config.json
index 984c3ed6e7a880..adedab97fdc7df 100644
--- a/src/go/plugin/go.d/modules/vernemq/testdata/config.json
+++ b/src/go/plugin/go.d/modules/vernemq/testdata/config.json
@@ -16,5 +16,6 @@
"tls_ca": "ok",
"tls_cert": "ok",
"tls_key": "ok",
- "tls_skip_verify": true
+ "tls_skip_verify": true,
+ "force_http2": true
}
diff --git a/src/go/plugin/go.d/modules/vernemq/testdata/config.yaml b/src/go/plugin/go.d/modules/vernemq/testdata/config.yaml
index 8558b61cc05cf8..744c7c996d5a21 100644
--- a/src/go/plugin/go.d/modules/vernemq/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/vernemq/testdata/config.yaml
@@ -15,3 +15,4 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
diff --git a/src/go/plugin/go.d/modules/vsphere/testdata/config.json b/src/go/plugin/go.d/modules/vsphere/testdata/config.json
index 3e4a7739640c7f..f40ac5d0460d2f 100644
--- a/src/go/plugin/go.d/modules/vsphere/testdata/config.json
+++ b/src/go/plugin/go.d/modules/vsphere/testdata/config.json
@@ -17,6 +17,7 @@
"tls_cert": "ok",
"tls_key": "ok",
"tls_skip_verify": true,
+ "force_http2": true,
"discovery_interval": 123.123,
"host_include": [
"ok"
diff --git a/src/go/plugin/go.d/modules/vsphere/testdata/config.yaml b/src/go/plugin/go.d/modules/vsphere/testdata/config.yaml
index d15e2346fd6c23..8c86028c568c6d 100644
--- a/src/go/plugin/go.d/modules/vsphere/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/vsphere/testdata/config.yaml
@@ -15,6 +15,7 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
discovery_interval: 123.123
host_include:
- "ok"
diff --git a/src/go/plugin/go.d/modules/windows/integrations/active_directory.md b/src/go/plugin/go.d/modules/windows/integrations/active_directory.md
index 94d431e1093cef..7934ba673ff45a 100644
--- a/src/go/plugin/go.d/modules/windows/integrations/active_directory.md
+++ b/src/go/plugin/go.d/modules/windows/integrations/active_directory.md
@@ -32,7 +32,7 @@ To get started with Netdata on Windows, see the [Netdata Windows Installer](http
This collector monitors the performance of Windows machines, collects both host metrics and metrics from various Windows applications (e.g. Active Directory, MSSQL).
-It collect metrics by periodically sending HTTP requests to [Prometheus exporter for Windows machines](https://github.com/prometheus-community/windows_exporter), a native Windows agent running on each host.
+It collects metrics by periodically sending HTTP requests to [Prometheus exporter for Windows machines](https://github.com/prometheus-community/windows_exporter), a native Windows Agent running on each host.
This collector is supported on all platforms.
diff --git a/src/go/plugin/go.d/modules/windows/integrations/hyperv.md b/src/go/plugin/go.d/modules/windows/integrations/hyperv.md
index 0d1c8917d87010..f0784dbc269766 100644
--- a/src/go/plugin/go.d/modules/windows/integrations/hyperv.md
+++ b/src/go/plugin/go.d/modules/windows/integrations/hyperv.md
@@ -32,7 +32,7 @@ To get started with Netdata on Windows, see the [Netdata Windows Installer](http
This collector monitors the performance of Windows machines, collects both host metrics and metrics from various Windows applications (e.g. Active Directory, MSSQL).
-It collect metrics by periodically sending HTTP requests to [Prometheus exporter for Windows machines](https://github.com/prometheus-community/windows_exporter), a native Windows agent running on each host.
+It collects metrics by periodically sending HTTP requests to [Prometheus exporter for Windows machines](https://github.com/prometheus-community/windows_exporter), a native Windows Agent running on each host.
This collector is supported on all platforms.
diff --git a/src/go/plugin/go.d/modules/windows/integrations/ms_exchange.md b/src/go/plugin/go.d/modules/windows/integrations/ms_exchange.md
index 5da10ed32a7a42..4ad6d9243229aa 100644
--- a/src/go/plugin/go.d/modules/windows/integrations/ms_exchange.md
+++ b/src/go/plugin/go.d/modules/windows/integrations/ms_exchange.md
@@ -32,7 +32,7 @@ To get started with Netdata on Windows, see the [Netdata Windows Installer](http
This collector monitors the performance of Windows machines, collects both host metrics and metrics from various Windows applications (e.g. Active Directory, MSSQL).
-It collect metrics by periodically sending HTTP requests to [Prometheus exporter for Windows machines](https://github.com/prometheus-community/windows_exporter), a native Windows agent running on each host.
+It collects metrics by periodically sending HTTP requests to [Prometheus exporter for Windows machines](https://github.com/prometheus-community/windows_exporter), a native Windows Agent running on each host.
This collector is supported on all platforms.
diff --git a/src/go/plugin/go.d/modules/windows/integrations/ms_sql_server.md b/src/go/plugin/go.d/modules/windows/integrations/ms_sql_server.md
index 2485c2bd0bebc5..15a5f627938af5 100644
--- a/src/go/plugin/go.d/modules/windows/integrations/ms_sql_server.md
+++ b/src/go/plugin/go.d/modules/windows/integrations/ms_sql_server.md
@@ -32,7 +32,7 @@ To get started with Netdata on Windows, see the [Netdata Windows Installer](http
This collector monitors the performance of Windows machines, collects both host metrics and metrics from various Windows applications (e.g. Active Directory, MSSQL).
-It collect metrics by periodically sending HTTP requests to [Prometheus exporter for Windows machines](https://github.com/prometheus-community/windows_exporter), a native Windows agent running on each host.
+It collects metrics by periodically sending HTTP requests to [Prometheus exporter for Windows machines](https://github.com/prometheus-community/windows_exporter), a native Windows Agent running on each host.
This collector is supported on all platforms.
diff --git a/src/go/plugin/go.d/modules/windows/integrations/net_framework.md b/src/go/plugin/go.d/modules/windows/integrations/net_framework.md
index 7cab7c75c57f6d..21e67134e15e8a 100644
--- a/src/go/plugin/go.d/modules/windows/integrations/net_framework.md
+++ b/src/go/plugin/go.d/modules/windows/integrations/net_framework.md
@@ -32,7 +32,7 @@ To get started with Netdata on Windows, see the [Netdata Windows Installer](http
This collector monitors the performance of Windows machines, collects both host metrics and metrics from various Windows applications (e.g. Active Directory, MSSQL).
-It collect metrics by periodically sending HTTP requests to [Prometheus exporter for Windows machines](https://github.com/prometheus-community/windows_exporter), a native Windows agent running on each host.
+It collects metrics by periodically sending HTTP requests to [Prometheus exporter for Windows machines](https://github.com/prometheus-community/windows_exporter), a native Windows Agent running on each host.
This collector is supported on all platforms.
diff --git a/src/go/plugin/go.d/modules/windows/integrations/windows.md b/src/go/plugin/go.d/modules/windows/integrations/windows.md
index e3755b94ea50ca..df156b34fddf28 100644
--- a/src/go/plugin/go.d/modules/windows/integrations/windows.md
+++ b/src/go/plugin/go.d/modules/windows/integrations/windows.md
@@ -32,7 +32,7 @@ To get started with Netdata on Windows, see the [Netdata Windows Installer](http
This collector monitors the performance of Windows machines, collects both host metrics and metrics from various Windows applications (e.g. Active Directory, MSSQL).
-It collect metrics by periodically sending HTTP requests to [Prometheus exporter for Windows machines](https://github.com/prometheus-community/windows_exporter), a native Windows agent running on each host.
+It collects metrics by periodically sending HTTP requests to [Prometheus exporter for Windows machines](https://github.com/prometheus-community/windows_exporter), a native Windows Agent running on each host.
This collector is supported on all platforms.
diff --git a/src/go/plugin/go.d/modules/windows/metadata.yaml b/src/go/plugin/go.d/modules/windows/metadata.yaml
index 3f93fe2007b982..bcc8963dac9fa4 100644
--- a/src/go/plugin/go.d/modules/windows/metadata.yaml
+++ b/src/go/plugin/go.d/modules/windows/metadata.yaml
@@ -33,7 +33,7 @@ modules:
This collector monitors the performance of Windows machines, collects both host metrics and metrics from various Windows applications (e.g. Active Directory, MSSQL).
method_description: |
- It collect metrics by periodically sending HTTP requests to [Prometheus exporter for Windows machines](https://github.com/prometheus-community/windows_exporter), a native Windows agent running on each host.
+ It collect metrics by periodically sending HTTP requests to [Prometheus exporter for Windows machines](https://github.com/prometheus-community/windows_exporter), a native Windows Agent running on each host.
default_behavior:
auto_detection:
description: ""
diff --git a/src/go/plugin/go.d/modules/windows/testdata/config.json b/src/go/plugin/go.d/modules/windows/testdata/config.json
index 6f8c1084e16265..d0080c440aaae1 100644
--- a/src/go/plugin/go.d/modules/windows/testdata/config.json
+++ b/src/go/plugin/go.d/modules/windows/testdata/config.json
@@ -17,5 +17,6 @@
"tls_ca": "ok",
"tls_cert": "ok",
"tls_key": "ok",
- "tls_skip_verify": true
+ "tls_skip_verify": true,
+ "force_http2": true
}
diff --git a/src/go/plugin/go.d/modules/windows/testdata/config.yaml b/src/go/plugin/go.d/modules/windows/testdata/config.yaml
index 4bbb7474d0227c..2b67fbf480635e 100644
--- a/src/go/plugin/go.d/modules/windows/testdata/config.yaml
+++ b/src/go/plugin/go.d/modules/windows/testdata/config.yaml
@@ -16,3 +16,4 @@ tls_ca: "ok"
tls_cert: "ok"
tls_key: "ok"
tls_skip_verify: yes
+force_http2: yes
diff --git a/src/go/plugin/go.d/pkg/web/client_config.go b/src/go/plugin/go.d/pkg/web/client_config.go
index 0ab3a045a1d2de..bf010c9810483b 100644
--- a/src/go/plugin/go.d/pkg/web/client_config.go
+++ b/src/go/plugin/go.d/pkg/web/client_config.go
@@ -3,12 +3,16 @@
package web
import (
+ "context"
+ "crypto/tls"
"errors"
"fmt"
"net"
"net/http"
"net/url"
+ "golang.org/x/net/http2"
+
"github.com/netdata/netdata/go/plugins/plugin/go.d/pkg/confopt"
"github.com/netdata/netdata/go/plugins/plugin/go.d/pkg/tlscfg"
)
@@ -34,10 +38,34 @@ type ClientConfig struct {
// TLSConfig specifies the TLS configuration.
tlscfg.TLSConfig `yaml:",inline" json:""`
+
+ ForceHTTP2 bool `yaml:"force_http2,omitempty" json:"force_http2"`
}
// NewHTTPClient returns a new *http.Client given a ClientConfig configuration and an error if any.
func NewHTTPClient(cfg ClientConfig) (*http.Client, error) {
+ var transport http.RoundTripper
+ var err error
+
+ if cfg.ForceHTTP2 {
+ transport, err = newHTTP2Transport(cfg)
+ } else {
+ transport, err = newHTTPTransport(cfg)
+ }
+ if err != nil {
+ return nil, err
+ }
+
+ client := &http.Client{
+ Timeout: cfg.Timeout.Duration(),
+ Transport: transport,
+ CheckRedirect: redirectFunc(cfg.NotFollowRedirect),
+ }
+
+ return client, nil
+}
+
+func newHTTPTransport(cfg ClientConfig) (*http.Transport, error) {
tlsConfig, err := tlscfg.NewTLSConfig(cfg.TLSConfig)
if err != nil {
return nil, fmt.Errorf("error on creating TLS config: %v", err)
@@ -52,24 +80,54 @@ func NewHTTPClient(cfg ClientConfig) (*http.Client, error) {
d := &net.Dialer{Timeout: cfg.Timeout.Duration()}
transport := &http.Transport{
- Proxy: proxyFunc(cfg.ProxyURL),
TLSClientConfig: tlsConfig,
DialContext: d.DialContext,
TLSHandshakeTimeout: cfg.Timeout.Duration(),
+ Proxy: proxyFunc(cfg.ProxyURL),
}
- return &http.Client{
- Timeout: cfg.Timeout.Duration(),
- Transport: transport,
- CheckRedirect: redirectFunc(cfg.NotFollowRedirect),
- }, nil
+ return transport, nil
}
-func redirectFunc(notFollowRedirect bool) func(req *http.Request, via []*http.Request) error {
- if follow := !notFollowRedirect; follow {
- return nil
+func newHTTP2Transport(cfg ClientConfig) (*http2Transport, error) {
+ tlsConfig, err := tlscfg.NewTLSConfig(cfg.TLSConfig)
+ if err != nil {
+ return nil, fmt.Errorf("error on creating TLS config: %v", err)
+ }
+
+ d := &net.Dialer{Timeout: cfg.Timeout.Duration()}
+
+ transport := &http2Transport{
+ t2: &http2.Transport{
+ TLSClientConfig: tlsConfig,
+ },
+ t2c: &http2.Transport{
+ AllowHTTP: true,
+ DialTLSContext: func(ctx context.Context, network, addr string, _ *tls.Config) (net.Conn, error) {
+ return d.DialContext(ctx, network, addr)
+ },
+ TLSClientConfig: tlsConfig,
+ },
+ }
+
+ return transport, nil
+}
+
+type http2Transport struct {
+ t2 *http2.Transport
+ t2c *http2.Transport
+}
+
+func (t *http2Transport) RoundTrip(req *http.Request) (resp *http.Response, err error) {
+ if req.URL.Scheme == "https" {
+ return t.t2.RoundTrip(req)
}
- return func(_ *http.Request, _ []*http.Request) error { return ErrRedirectAttempted }
+ return t.t2c.RoundTrip(req)
+}
+
+func (t *http2Transport) CloseIdleConnections() {
+ t.t2.CloseIdleConnections()
+ t.t2c.CloseIdleConnections()
}
func proxyFunc(rawProxyURL string) func(r *http.Request) (*url.URL, error) {
@@ -79,3 +137,10 @@ func proxyFunc(rawProxyURL string) func(r *http.Request) (*url.URL, error) {
proxyURL, _ := url.Parse(rawProxyURL)
return http.ProxyURL(proxyURL)
}
+
+func redirectFunc(notFollow bool) func(req *http.Request, via []*http.Request) error {
+ if notFollow {
+ return func(_ *http.Request, _ []*http.Request) error { return ErrRedirectAttempted }
+ }
+ return nil
+}
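
Below is a minimal, hypothetical sketch of how a go.d job ends up using the new `force_http2` option through `NewHTTPClient`. The `web` import path is inferred from the sibling `confopt`/`tlscfg` imports in this file, the target URL is made up, and all other `ClientConfig` fields are left at their zero values:

```go
package main

import (
	"fmt"

	"github.com/netdata/netdata/go/plugins/plugin/go.d/pkg/web" // path inferred from the imports in client_config.go
)

func main() {
	// Equivalent of setting `force_http2: yes` in a job configuration.
	cfg := web.ClientConfig{ForceHTTP2: true}

	client, err := web.NewHTTPClient(cfg)
	if err != nil {
		panic(err)
	}

	// With ForceHTTP2 set, an "http" URL is handled by the h2c transport
	// (prior-knowledge HTTP/2 over plain TCP), while "https" URLs use TLS.
	resp, err := client.Get("http://127.0.0.1:9090/metrics") // hypothetical endpoint that speaks h2c
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("negotiated protocol:", resp.Proto) // expected "HTTP/2.0" when the server speaks h2c
}
```
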
diff --git a/src/health/README.md b/src/health/README.md
index 081a8b8f8f2d12..ba244e3adabf11 100644
--- a/src/health/README.md
+++ b/src/health/README.md
@@ -10,7 +10,7 @@ Read our documentation on [configuring alerts](/src/health/REFERENCE.md) to chan
- Netdata Cloud provides centralized alert notifications, utilizing the health status data already sent to Netdata Cloud from connected nodes to send alerts to configured integrations. [Supported integrations](/docs/alerts-&-notifications/notifications/centralized-cloud-notifications) include Amazon SNS, Discord, Slack, Splunk, and others.
-- The Netdata Agent offers a [wider range of notification options](/docs/alerts-&-notifications/notifications/agent-dispatched-notifications) directly from the agent itself. You can choose from over a dozen services, including email, Slack, PagerDuty, Twilio, and others, for more granular control over notifications on each node.
+- The Netdata Agent offers a [wider range of notification options](/docs/alerts-&-notifications/notifications/agent-dispatched-notifications) directly from the Agent itself. You can choose from over a dozen services, including email, Slack, PagerDuty, Twilio, and others, for more granular control over notifications on each node.
The Netdata Agent is a health watchdog for the health and performance of your systems, services, and applications. We've worked closely with our community of DevOps engineers, SREs, and developers to define hundreds of production-ready alerts that work without any configuration.
diff --git a/src/health/REFERENCE.md b/src/health/REFERENCE.md
index b46012d04b56bf..b84a3818663b3e 100644
--- a/src/health/REFERENCE.md
+++ b/src/health/REFERENCE.md
@@ -89,7 +89,7 @@ available options are described below.
### Disable all alerts
-In the `netdata.conf` `[health]` section, set `enabled` to `no`, and restart the agent.
+In the `netdata.conf` `[health]` section, set `enabled` to `no`, and restart the Agent.
### Disable some alerts
@@ -116,7 +116,7 @@ When you need to frequently disable all or some alerts from triggering during ce
when running backups) you can use the
[health management API](/src/web/api/health/README.md).
The API allows you to issue commands to control the health engine's behavior without changing configuration,
-or restarting the agent.
+or restarting the Agent.
### Temporarily silence notifications at runtime
@@ -124,7 +124,7 @@ If you want health checks to keep running and alerts to keep getting triggered,
suppressed temporarily, you can use the
[health management API](/src/web/api/health/README.md).
The API allows you to issue commands to control the health engine's behavior without changing configuration,
-or restarting the agent.
+or restarting the Agent.
## Write a new health entity
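
As a rough illustration of the runtime silencing described above, the sketch below drives the health management API from Go. The `/api/v1/manage/health` endpoint, the `SILENCE ALL`/`RESET` commands, and the `X-Auth-Token` header follow the health management API documentation linked above; the token file location mentioned in the comments is the usual default and should be verified on your installation:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

// healthCmd sends one health management API command to a local Agent.
func healthCmd(baseURL, token, cmd string) error {
	req, err := http.NewRequest(http.MethodGet,
		baseURL+"/api/v1/manage/health?cmd="+url.QueryEscape(cmd), nil)
	if err != nil {
		return err
	}
	req.Header.Set("X-Auth-Token", token) // token read from the Agent's netdata.api.key file

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s -> %s %s\n", cmd, resp.Status, body)
	return nil
}

func main() {
	// Assumed default token location: /var/lib/netdata/netdata.api.key
	token := "<contents of netdata.api.key>"

	_ = healthCmd("http://localhost:19999", token, "SILENCE ALL") // checks keep running, notifications are suppressed
	_ = healthCmd("http://localhost:19999", token, "RESET")       // restore normal behavior
}
```
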
diff --git a/src/health/guides/anomalies/anomalies_anomaly_probabilities.md b/src/health/guides/anomalies/anomalies_anomaly_probabilities.md
index cea04a43e767a3..90ec9280ffedad 100644
--- a/src/health/guides/anomalies/anomalies_anomaly_probabilities.md
+++ b/src/health/guides/anomalies/anomalies_anomaly_probabilities.md
@@ -1,6 +1,6 @@
### Understand the alert
-This alert, `anomalies_anomaly_probabilities`, is generated by the Netdata agent when the average anomaly probability over the last 2 minutes is 50. An anomaly probability is a value calculated by the machine learning (ML) component in Netdata, aiming to detect unusual events or behavior in system metrics.
+This alert, `anomalies_anomaly_probabilities`, is generated by the Netdata Agent when the average anomaly probability over the last 2 minutes is 50. An anomaly probability is a value calculated by the machine learning (ML) component in Netdata, aiming to detect unusual events or behavior in system metrics.
### What is anomaly probability?
diff --git a/src/health/guides/dbengine/10min_dbengine_global_fs_errors.md b/src/health/guides/dbengine/10min_dbengine_global_fs_errors.md
index a4093681b6f5f6..50aff56e475d4e 100644
--- a/src/health/guides/dbengine/10min_dbengine_global_fs_errors.md
+++ b/src/health/guides/dbengine/10min_dbengine_global_fs_errors.md
@@ -2,7 +2,7 @@
The Database Engine works like a traditional database. It dedicates a certain amount of RAM to data caching and indexing, while the rest of the data resides compressed on disk. Unlike other memory modes, the amount of historical metrics stored is based on the amount of disk space you allocate and the effective compression ratio, not a fixed number of metrics collected.
-By using both RAM and disk space, the database engine allows for long-term storage of per-second metrics inside of the Netdata agent itself.
+By using both RAM and disk space, the database engine allows for long-term storage of per-second metrics inside of the Netdata Agent itself.
Netdata monitors the number of filesystem errors in the last 10 minutes. The Dbengine is experiencing filesystem errors (too many open files, wrong permissions, etc.)
diff --git a/src/health/guides/exporting/exporting_last_buffering.md b/src/health/guides/exporting/exporting_last_buffering.md
index 1139b0b6d2698b..4543714a7cb66e 100644
--- a/src/health/guides/exporting/exporting_last_buffering.md
+++ b/src/health/guides/exporting/exporting_last_buffering.md
@@ -16,7 +16,7 @@ This alert is related to the Netdata Exporting engine, which calculates the numb
```
Replace `new_value` with the desired number that matches your system requirements.
-4. Restart the Netdata Agent: After modifying the `exporting.conf` file, don't forget to restart the Netdata Agent for changes to take effect. Use the following command to restart the agent:
+4. Restart the Netdata Agent: After modifying the `exporting.conf` file, don't forget to restart the Netdata Agent for changes to take effect. Use the following command to restart the Agent:
```
sudo systemctl restart netdata
diff --git a/src/health/guides/httpcheck/httpcheck_web_service_bad_content.md b/src/health/guides/httpcheck/httpcheck_web_service_bad_content.md
index cbf42694d1c4c5..c838f83cce02a6 100644
--- a/src/health/guides/httpcheck/httpcheck_web_service_bad_content.md
+++ b/src/health/guides/httpcheck/httpcheck_web_service_bad_content.md
@@ -1,6 +1,6 @@
### Understand the alert
-The Netdata Agent monitors your HTTP endpoints. You can specify endpoints that the agent will monitor in Agent's Go module under `go.d/httpcheck.conf`. You can also specify the expected response pattern. This HTTP endpoint will send in the `response_match` option. If the endpoint's response does not match the `response_match` pattern, then the Agent marks the response as unexpected.
+The Netdata Agent monitors your HTTP endpoints. You can specify endpoints that the Agent will monitor in the Agent's Go module under `go.d/httpcheck.conf`. You can also specify the expected response pattern that this HTTP endpoint should return, using the `response_match` option. If the endpoint's response does not match the `response_match` pattern, then the Agent marks the response as unexpected.
The Netdata Agent calculates the average ratio of HTTP responses with unexpected content over the last 5 minutes.
diff --git a/src/health/guides/httpcheck/httpcheck_web_service_unreachable.md b/src/health/guides/httpcheck/httpcheck_web_service_unreachable.md
index 306ce1fee58c31..de950e49eda502 100644
--- a/src/health/guides/httpcheck/httpcheck_web_service_unreachable.md
+++ b/src/health/guides/httpcheck/httpcheck_web_service_unreachable.md
@@ -1,6 +1,6 @@
### Understand the alert
-The Netdata agent monitors your HTTP endpoints. You can specify endpoints the Agent will monitor in the Agent's Go module under `go.d/httpcheck.conf`.
+The Netdata Agent monitors your HTTP endpoints. You can specify endpoints the Agent will monitor in the Agent's Go module under `go.d/httpcheck.conf`.
If your system fails to connect to your endpoint, or if the request to that endpoint times out, then the Agent will mark the requests and log them as "unreachable".
diff --git a/src/health/guides/httpcheck/httpcheck_web_service_up.md b/src/health/guides/httpcheck/httpcheck_web_service_up.md
index be17fadd564f5f..7a9e6ad81eada2 100644
--- a/src/health/guides/httpcheck/httpcheck_web_service_up.md
+++ b/src/health/guides/httpcheck/httpcheck_web_service_up.md
@@ -30,9 +30,9 @@ An HTTP endpoint is like a door where clients make requests to access web servic
curl -I http://example.com/some/endpoint
```
-4. Check for network issues between the monitoring agent and the HTTP endpoint.
+4. Check for network issues between the monitoring Agent and the HTTP endpoint.
- Use tools like `ping`, `traceroute`, or `mtr` to check for network latency or packet loss between the monitoring agent and the HTTP endpoint.
+ Use tools like `ping`, `traceroute`, or `mtr` to check for network latency or packet loss between the monitoring Agent and the HTTP endpoint.
5. Review the web server or application configuration.
diff --git a/src/health/guides/ml/ml_1min_node_ar.md b/src/health/guides/ml/ml_1min_node_ar.md
index b5f12389bb74b8..51fd6d30c80cd4 100644
--- a/src/health/guides/ml/ml_1min_node_ar.md
+++ b/src/health/guides/ml/ml_1min_node_ar.md
@@ -6,7 +6,7 @@ For example, with the default of `warn: $this > 1`, this means that 1% or more o
### Troubleshoot the alert
-This alert is a signal that some significant percentage of metrics within your infrastructure have been flagged as anomalous accoring to the ML based anomaly detection models the Netdata agent continually trains and re-trains for each metric. This tells us something somewhere might look strange in some way. THe next step is to try drill in and see what metrics are actually driving this.
+This alert is a signal that some significant percentage of metrics within your infrastructure have been flagged as anomalous according to the ML-based anomaly detection models the Netdata Agent continually trains and re-trains for each metric. This tells us something somewhere might look strange in some way. The next step is to try to drill in and see which metrics are actually driving this.
1. **Filter for the node or nodes relevant**: First we need to reduce as much noise as possible by filtering for just those nodes that have the elevated node anomaly rate. Look at the `anomaly_detection.anomaly_rate` chart and group by `node` to see which nodes have an elevated anomaly rate. Filter for just those nodes since this will reduce any noise as much as possible.
diff --git a/src/health/guides/net/1m_received_traffic_overflow.md b/src/health/guides/net/1m_received_traffic_overflow.md
index 270dd892d883d9..5fcbc10fc3117c 100644
--- a/src/health/guides/net/1m_received_traffic_overflow.md
+++ b/src/health/guides/net/1m_received_traffic_overflow.md
@@ -1,6 +1,6 @@
### Understand the alert
-Network interfaces are categorized primarily on the bandwidth they can operate (1 Gbps, 10 Gbps, etc). High network utilization occurs when the volume of data on a network link approaches the capacity of the link. Netdata agent
+Network interfaces are categorized primarily by the bandwidth at which they can operate (1 Gbps, 10 Gbps, etc). High network utilization occurs when the volume of data on a network link approaches the capacity of the link. The Netdata Agent
calculates the average outbound utilization for a specific network interface over the last minute. High outbound utilization increases latency and packet loss because packet bursts are buffered
This alarm may indicate either network congestion or malicious activity.
diff --git a/src/health/guides/net/1m_sent_traffic_overflow.md b/src/health/guides/net/1m_sent_traffic_overflow.md
index 376d578cd48c28..f8359fe2bb958e 100644
--- a/src/health/guides/net/1m_sent_traffic_overflow.md
+++ b/src/health/guides/net/1m_sent_traffic_overflow.md
@@ -1,6 +1,6 @@
### Understand the alert
-Network interfaces are categorized primarily on the bandwidth rate at which they can operate (1 Gbps, 10 Gbps, etc). High network utilization occurs when the volume of data on a network link approaches the capacity of the link. Netdata agent calculates the average outbound utilization for a specific network interface over the last minute. High outbound utilization increases latency and packet loss because packet bursts are buffered.
+Network interfaces are categorized primarily by the bandwidth at which they can operate (1 Gbps, 10 Gbps, etc). High network utilization occurs when the volume of data on a network link approaches the capacity of the link. The Netdata Agent calculates the average outbound utilization for a specific network interface over the last minute. High outbound utilization increases latency and packet loss because packet bursts are buffered.
This alarm may indicate either a network congestion or malicious activity.
diff --git a/src/health/guides/netdev/1min_netdev_budget_ran_outs.md b/src/health/guides/netdev/1min_netdev_budget_ran_outs.md
index 30539322543ef2..d3103dd9297fc5 100644
--- a/src/health/guides/netdev/1min_netdev_budget_ran_outs.md
+++ b/src/health/guides/netdev/1min_netdev_budget_ran_outs.md
@@ -2,7 +2,7 @@
Your system communicates with the devices attached to it through interrupt requests. In a nutshell, when an interrupt occurs, the operating system stops what it was doing and starts addressing that interrupt.
-Network interfaces can receive thousands of packets per second. To avoid burying the system with thousands of interrupts, the Linux kernel uses the NAPI polling framework. In this way, we can replace hundreds of hardware interrupts with one poll by managing them with a few Soft Interrupt ReQuests (Soft IRQs). Ksoftirqd is a per-CPU kernel thread responsible for handling those unserved Soft Interrupt ReQuests (Soft IRQs). The Netdata agent inspects the average number of times Ksoftirqd ran out of netdev_budget or CPU time when there was still work to be done. This abnormality may cause packet overflow on the intermediate buffers and, as a result, drop packet on your network interfaces.
+Network interfaces can receive thousands of packets per second. To avoid burying the system with thousands of interrupts, the Linux kernel uses the NAPI polling framework. In this way, we can replace hundreds of hardware interrupts with one poll by managing them with a few Soft Interrupt ReQuests (Soft IRQs). Ksoftirqd is a per-CPU kernel thread responsible for handling those unserved Soft Interrupt ReQuests (Soft IRQs). The Netdata Agent inspects the average number of times Ksoftirqd ran out of netdev_budget or CPU time when there was still work to be done. This abnormality may cause packet overflow on the intermediate buffers and, as a result, dropped packets on your network interfaces.
The default value of the netdev_budget is 300. However, this may not be enough in some cases, such as:
diff --git a/src/health/guides/portcheck/portcheck_connection_timeouts.md b/src/health/guides/portcheck/portcheck_connection_timeouts.md
index b3608f62e9d8ce..f3eef935919b15 100644
--- a/src/health/guides/portcheck/portcheck_connection_timeouts.md
+++ b/src/health/guides/portcheck/portcheck_connection_timeouts.md
@@ -32,7 +32,7 @@ This alert triggers a warning state when the ratio of timeouts is between 10-40%
5. Check the Netdata configuration
- Review the Netdata configuration file `/etc/netdata/netdata.conf` to ensure the `portcheck` plugin settings are correctly configured for monitoring the TCP endpoint.
- - If necessary, update and restart the Netdata agent.
+ - If necessary, update and restart the Netdata Agent.
### Useful resources
diff --git a/src/health/guides/vernemq/vernemq_cluster_dropped.md b/src/health/guides/vernemq/vernemq_cluster_dropped.md
index 0bdc6f08d86adc..9056521c949819 100644
--- a/src/health/guides/vernemq/vernemq_cluster_dropped.md
+++ b/src/health/guides/vernemq/vernemq_cluster_dropped.md
@@ -1,6 +1,6 @@
### Understand the alert
-This alert indicates that VerneMQ, an MQTT broker, is experiencing issues with inter-node message delivery within a clustered environment. The Netdata agent calculates the amount of traffic dropped during communication with cluster nodes in the last minute. If you receive this alert, it means that the outgoing cluster buffer is full and some messages cannot be delivered.
+This alert indicates that VerneMQ, an MQTT broker, is experiencing issues with inter-node message delivery within a clustered environment. The Netdata Agent calculates the amount of traffic dropped during communication with cluster nodes in the last minute. If you receive this alert, it means that the outgoing cluster buffer is full and some messages cannot be delivered.
### What does dropped messages mean?
diff --git a/src/health/guides/web_log/web_log_1m_redirects.md b/src/health/guides/web_log/web_log_1m_redirects.md
index 663f04f5f0f2e8..1225227ce2187b 100644
--- a/src/health/guides/web_log/web_log_1m_redirects.md
+++ b/src/health/guides/web_log/web_log_1m_redirects.md
@@ -2,7 +2,7 @@
HTTP response status codes indicate whether a specific HTTP request has been successfully completed or not.
-The 3XX class of status code indicates that further action needs to be taken by the user agent in order to fulfill the request. The action required may be carried out by the user agent without interaction with the user if and only if the method used in the second request is GET or HEAD. A client SHOULD detect infinite redirection loops, since such loops generate network traffic for each redirection.
+The 3XX class of status code indicates that further action needs to be taken by the user agent in order to fulfill the request. The action required may be carried out by the user agent without interaction with the user if and only if the method used in the second request is GET or HEAD. A client SHOULD detect infinite redirection loops, since such loops generate network traffic for each redirection.
The Netdata Agent calculates the ratio of redirection HTTP requests over the last minute. This metric does not include the "304 Not modified" message.
diff --git a/src/health/guides/web_log/web_log_1m_total_requests.md b/src/health/guides/web_log/web_log_1m_total_requests.md
index 7dc19983d98ff0..a50ccbd7358fb2 100644
--- a/src/health/guides/web_log/web_log_1m_total_requests.md
+++ b/src/health/guides/web_log/web_log_1m_total_requests.md
@@ -10,7 +10,7 @@ An increase in workload means that your web server is handling more traffic than
1. Analyze web traffic logs
- To understand the reason behind the increased workload, the first step is to analyze the web server traffic logs. Look for any patterns, specific time intervals, or specific user agents that are contributing to the high number of requests.
+ To understand the reason behind the increased workload, the first step is to analyze the web server traffic logs. Look for any patterns, specific time intervals, or specific user Agents that are contributing to the high number of requests.
2. Check the web server performance
diff --git a/src/health/guides/web_log/web_log_5m_successful.md b/src/health/guides/web_log/web_log_5m_successful.md
index d3ca5916af92e5..1dad775acf3ba6 100644
--- a/src/health/guides/web_log/web_log_5m_successful.md
+++ b/src/health/guides/web_log/web_log_5m_successful.md
@@ -22,7 +22,7 @@ A successful HTTP request is one that receives a response with an HTTP status co
4. Verify client connections
- Investigate the IP addresses and user agents that are making a significant number of requests during the alert period. If there's a spike in requests from a single or a few IPs, it could be a sign of a coordinated attack, excessive crawling, or other unexpected behavior.
+ Investigate the IP addresses and user Agents that are making a significant number of requests during the alert period. If there's a spike in requests from a single or a few IPs, it could be a sign of a coordinated attack, excessive crawling, or other unexpected behavior.
5. Check your web application
diff --git a/src/health/notifications/pagerduty/README.md b/src/health/notifications/pagerduty/README.md
index d85dd46c99ea09..59e883120b7b14 100644
--- a/src/health/notifications/pagerduty/README.md
+++ b/src/health/notifications/pagerduty/README.md
@@ -26,7 +26,7 @@ You can send notifications to PagerDuty using Netdata's Agent alert notification
####
-- An installation of the [PagerDuty](https://www.pagerduty.com/docs/guides/agent-install-guide/) agent on the node running the Netdata Agent
+- An installation of the [PagerDuty](https://www.pagerduty.com/docs/guides/agent-install-guide/) Agent on the node running the Netdata Agent
- A PagerDuty Generic API service using either the `Events API v2` or `Events API v1`
- [Add a new service](https://support.pagerduty.com/docs/services-and-integrations#section-configuring-services-and-integrations) to PagerDuty. Click Use our API directly and select either `Events API v2` or `Events API v1`. Once you finish creating the service, click on the Integrations tab to find your Integration Key.
- Access to the terminal where Netdata Agent is running
diff --git a/src/health/notifications/pagerduty/metadata.yaml b/src/health/notifications/pagerduty/metadata.yaml
index 3973825fcf2004..048e55ce43b29c 100644
--- a/src/health/notifications/pagerduty/metadata.yaml
+++ b/src/health/notifications/pagerduty/metadata.yaml
@@ -19,7 +19,7 @@
list:
- title: ''
description: |
- - An installation of the [PagerDuty](https://www.pagerduty.com/docs/guides/agent-install-guide/) agent on the node running the Netdata Agent
+ - An installation of the [PagerDuty](https://www.pagerduty.com/docs/guides/agent-install-guide/) Agent on the node running the Netdata Agent
- A PagerDuty Generic API service using either the `Events API v2` or `Events API v1`
- [Add a new service](https://support.pagerduty.com/docs/services-and-integrations#section-configuring-services-and-integrations) to PagerDuty. Click Use our API directly and select either `Events API v2` or `Events API v1`. Once you finish creating the service, click on the Integrations tab to find your Integration Key.
- Access to the terminal where Netdata Agent is running
diff --git a/src/health/notifications/web/README.md b/src/health/notifications/web/README.md
index baa0bfaaa66cb9..21561cb7301d7d 100644
--- a/src/health/notifications/web/README.md
+++ b/src/health/notifications/web/README.md
@@ -1,4 +1,4 @@
-# Browser pop up agent alert notifications
+# Browser pop up Agent alert notifications
The Netdata dashboard shows HTML notifications, when it is open.
diff --git a/src/libnetdata/log/README.md b/src/libnetdata/log/README.md
index c7a42f28b05afa..5f1614e3ad03bf 100644
--- a/src/libnetdata/log/README.md
+++ b/src/libnetdata/log/README.md
@@ -247,7 +247,7 @@ The structure of the logs are as follows:
- Channel `Netdata/Collector`: general messages about Netdata external plugins
- Channel `Netdata/Health`: alert transitions and general messages generated by Netdata's health engine
- Channel `Netdata/Access`: all accesses to Netdata APIs
- - Channel `Netdata/Aclk`: for cloud connectivity tracing (disabled by default)
+ - Channel `Netdata/Aclk`: for Cloud connectivity tracing (disabled by default)
Retention can be configured per Channel via the Event Viewer. Netdata does not set a default, so the system default is used.
@@ -272,7 +272,7 @@ For WEL, Netdata logs as follows:
- Publisher `NetdataCollector`: general messages about Netdata external plugins
- Publisher `NetdataHealth`: alert transitions and general messages generated by Netdata's health engine
- Publisher `NetdataAccess`: all accesses to Netdata APIs
- - Publisher `NetdataAclk`: for cloud connectivity tracing (disabled by default)
+ - Publisher `NetdataAclk`: for Cloud connectivity tracing (disabled by default)
Publishers must have unique names system-wide, so we had to prefix them with `Netdata`.
diff --git a/src/ml/notebooks/README.md b/src/ml/notebooks/README.md
index 5e9db6dee8ed47..408f806780f461 100644
--- a/src/ml/notebooks/README.md
+++ b/src/ml/notebooks/README.md
@@ -2,4 +2,4 @@
This folder is a home for any documentation supporting machine learning related notebooks.
-- [Netdata anomaly detection deepdive](netdata_anomaly_detection_deepdive.ipynb): This is a starter notebook to help users understand how anomaly detection works in the Netdata agent and go a little deeper if they want.
\ No newline at end of file
+- [Netdata anomaly detection deepdive](netdata_anomaly_detection_deepdive.ipynb): This is a starter notebook to help users understand how anomaly detection works in the Netdata Agent and go a little deeper if they want.
\ No newline at end of file
diff --git a/src/plugins.d/README.md b/src/plugins.d/README.md
index d82a7cd9db7e0b..90f8e5a450dbad 100644
--- a/src/plugins.d/README.md
+++ b/src/plugins.d/README.md
@@ -451,7 +451,7 @@ The `source` is an integer field that can have the following values:
- `1`: The value was set automatically.
- `2`: The value was set manually.
- `4`: This is a K8 label.
-- `8`: This is a label defined using `netdata` agent cloud link.
+- `8`: This is a label defined using `netdata` Agent-Cloud link.
#### CLABEL_COMMIT
diff --git a/src/registry/README.md b/src/registry/README.md
index 97db113f7dc795..ff6009084c848d 100644
--- a/src/registry/README.md
+++ b/src/registry/README.md
@@ -135,7 +135,7 @@ Keep in mind that connections to Netdata API ports are filtered by `[web].allow
`[registry].allow from` should also be allowed by `[web].allow connection from`.
The patterns can be matches over IP addresses or FQDN of the host. In order to check the FQDN of the connection without
-opening the Netdata agent to DNS-spoofing, a reverse-dns record must be setup for the connecting host. At connection
+opening the Netdata Agent to DNS-spoofing, a reverse-dns record must be set up for the connecting host. At connection
time the reverse-dns of the peer IP address is resolved, and a forward DNS resolution is made to validate the IP address
against the name-pattern.
diff --git a/src/streaming/README.md b/src/streaming/README.md
index 74b5691d06d883..989c9366b8f4d3 100644
--- a/src/streaming/README.md
+++ b/src/streaming/README.md
@@ -50,7 +50,7 @@ This section is used by the sending Netdata.
### `[API_KEY]` sections
-This section defines an API key for other agents to connect to this Netdata.
+This section defines an API key for other Agents to connect to this Netdata.
| Setting | Default | Description |
|------------------------------|------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
@@ -75,12 +75,12 @@ This section defines an API key for other agents to connect to this Netdata.
### `[MACHINE_GUID]` sections
-This section is about customizing configuration for specific agents. It allows many agents to share the same API key, while providing customizability per remote agent.
+This section is about customizing configuration for specific Agents. It allows many Agents to share the same API key, while providing customizability per remote Agent.
| Setting | Default | Description |
|------------------------------|------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `enabled` | `no` | Whether this MACHINE_GUID enabled or disabled. |
-| `type` | `machine` | This section defines the configuration for a specific agent. |
+| `type` | `machine` | This section defines the configuration for a specific Agent. |
| [`allow from`](#allow-from) | `*` | A space-separated list of [Netdata simple patterns](/src/libnetdata/simple_pattern/README.md) matching the IPs of nodes that will stream metrics using this API key. [Read more →](#allow-from) |
| `retention` | `3600` | The default amount of child metrics history to retain when using the `ram` db. |
| [`db`](#default-memory-mode) | `dbengine` | The [database](/src/database/README.md) to use for all nodes using this `API_KEY`. Valid settings are `dbengine`, `ram`, or `none`. [Read more →](#default-memory-mode) |
@@ -221,7 +221,7 @@ default `dbengine` as specified by the `API_KEY`, and alerts are disabled.
[![Supported version Netdata Agent release](https://img.shields.io/badge/Supported%20Netdata%20stream%20version-v5%2B-blue)](https://github.com/netdata/netdata/releases/latest)
#### OS dependencies
-* Streaming compression is based on [lz4 v1.9.0+](https://github.com/lz4/lz4). The [lz4 v1.9.0+](https://github.com/lz4/lz4) library must be installed in your OS in order to enable streaming compression. Any lower version will disable Netdata streaming compression for compatibility purposes between the older versions of Netdata agents.
+* Streaming compression is based on [lz4 v1.9.0+](https://github.com/lz4/lz4). The [lz4 v1.9.0+](https://github.com/lz4/lz4) library must be installed in your OS in order to enable streaming compression. Any lower version will disable streaming compression, to maintain compatibility with older versions of Netdata Agents.
To check if your Netdata Agent supports stream compression run the following GET request in your browser or terminal:
@@ -251,7 +251,7 @@ A compressed data packet is determined and decompressed on the fly.
#### Limitations
This limitation will be withdrawn asap and is work-in-progress.
-The current implementation of streaming data compression can support only a few number of dimensions in a chart with names that cannot exceed the size of 16384 bytes. In case your instance hit this limitation, the agent will deactivate compression during runtime to avoid stream corruption. This limitation can be seen in the error.log file with the sequence of the following messages:
+The current implementation of streaming data compression supports only a limited number of dimensions per chart, with names that cannot exceed 16384 bytes. If your instance hits this limitation, the Agent will deactivate compression at runtime to avoid stream corruption. This limitation can be seen in the error.log file with the following sequence of messages:
```
netdata INFO : STREAM_SENDER[child01] : STREAM child01 [send to my.parent.IP]: connecting...
netdata INFO : STREAM_SENDER[child01] : STREAM child01 [send to my.parent.IP]: initializing communication...
@@ -266,7 +266,7 @@ netdata ERROR : PLUGINSD[go.d] : STREAM_COMPRESSION child01 [send to my.parent.I
netdata INFO : STREAM_SENDER[child01] : STREAM child01 [send to my.parent.IP]: connecting...
netdata INFO : STREAM_SENDER[child01] : STREAM child01 [send to my.parent.IP]: initializing communication...
netdata INFO : STREAM_SENDER[child01] : STREAM child01 [send to my.parent.IP]: waiting response from remote netdata...
-netdata INFO : STREAM_SENDER[child01] : Stream is uncompressed! One of the agents (my.parent.IP <-> child01) does not support compression OR compression is disabled.
+netdata INFO : STREAM_SENDER[child01] : Stream is uncompressed! One of the Agents (my.parent.IP <-> child01) does not support compression OR compression is disabled.
netdata INFO : STREAM_SENDER[child01] : STREAM child01 [send to my.parent.IP]: established communication with a parent using protocol version 4 - ready to send metrics...
netdata INFO : WEB_SERVER[static4] : STREAM child01 [send]: sending metrics...
```
@@ -283,7 +283,7 @@ To enable stream compression:
2. In the `[stream]` section, set `enable compression` to `yes`.
```
-# This is the default stream compression flag for an agent.
+# This is the default stream compression flag for an Agent.
[stream]
enable compression = yes | no
diff --git a/src/web/README.md b/src/web/README.md
index 942cdc89c69ded..f87738376b788d 100644
--- a/src/web/README.md
+++ b/src/web/README.md
@@ -6,7 +6,7 @@ we put a lot of emphasis on real-time, meaningful, and context-aware charts.
We bundle Netdata with a dashboard and hundreds of charts, designed by both our
team and the community, but you can also customize them yourself.
-There are two primary ways to view Netdata's dashboards on the agent:
+There are two primary ways to view Netdata's dashboards on the Agent:
1. The [local Agent dashboard](/src/web/gui/README.md) that comes pre-configured with every Netdata installation. You can
see it at `http://NODE:19999`, replacing `NODE` with `localhost`, the hostname of your node, or its IP address. You
diff --git a/src/web/api/README.md b/src/web/api/README.md
index 7ad1a7ad42bb33..885df0f8425819 100644
--- a/src/web/api/README.md
+++ b/src/web/api/README.md
@@ -1,12 +1,12 @@
# API
-## Netdata agent REST API
+## Netdata Agent REST API
-The complete documentation of the Netdata agent's REST API is documented in the OpenAPI format [in our GitHub repository](https://raw.githubusercontent.com/netdata/netdata/master/src/web/api/netdata-swagger.yaml).
+The complete documentation of the Netdata Agent's REST API is documented in the OpenAPI format [in our GitHub repository](https://raw.githubusercontent.com/netdata/netdata/master/src/web/api/netdata-swagger.yaml).
You can explore it using the **[Swagger UI](https://learn.netdata.cloud/api)**, or the **[Swagger Editor](https://editor.swagger.io/?url=https://raw.githubusercontent.com/netdata/netdata/master/src/web/api/netdata-swagger.yaml)**.
-## Netdata cloud API
+## Netdata Cloud API
-A very basic Netdata cloud REST API supports the [Grafana data source plugin](https://github.com/netdata/netdata-grafana-datasource-plugin/blob/master/README.md),
+A very basic Netdata Cloud REST API supports the [Grafana data source plugin](https://github.com/netdata/netdata-grafana-datasource-plugin/blob/master/README.md),
but has not yet been expanded for wider use. We intend to provide a properly documented API in the future.
diff --git a/src/web/api/netdata-swagger.json b/src/web/api/netdata-swagger.json
index 29828b875d4c1a..246c415ff38845 100644
--- a/src/web/api/netdata-swagger.json
+++ b/src/web/api/netdata-swagger.json
@@ -60,7 +60,7 @@
},
{
"name": "management",
- "description": "Everything related to managing netdata agents"
+ "description": "Everything related to managing netdata Agents"
}
],
"paths": {
@@ -71,7 +71,7 @@
"nodes"
],
"summary": "Nodes Info v2",
- "description": "Get a list of all nodes hosted by this Netdata agent.\n",
+ "description": "Get a list of all nodes hosted by this Netdata Agent.\n",
"parameters": [
{
"$ref": "#/components/parameters/scopeNodes"
@@ -92,7 +92,7 @@
"content": {
"application/json": {
"schema": {
- "description": "`/api/v2/nodes` response for all nodes hosted by a Netdata agent.\n",
+ "description": "`/api/v2/nodes` response for all nodes hosted by a Netdata Agent.\n",
"type": "object",
"properties": {
"api": {
@@ -125,7 +125,7 @@
"contexts"
],
"summary": "Contexts Info v2",
- "description": "Get a list of all contexts, across all nodes, hosted by this Netdata agent.\n",
+ "description": "Get a list of all contexts, across all nodes, hosted by this Netdata Agent.\n",
"parameters": [
{
"$ref": "#/components/parameters/scopeNodes"
@@ -161,7 +161,7 @@
"contexts"
],
"summary": "Full Text Search v2",
- "description": "Get a list of contexts, across all nodes, hosted by this Netdata agent, matching a string expression\n",
+ "description": "Get a list of contexts, across all nodes, hosted by this Netdata Agent, matching a string expression\n",
"parameters": [
{
"name": "q",
@@ -1853,7 +1853,7 @@
"scopeNodes": {
"name": "scope_nodes",
"in": "query",
- "description": "A simple pattern limiting the nodes scope of the query. The scope controls both data and metadata response. The simple pattern is checked against the nodes' machine guid, node id and hostname. The default nodes scope is all nodes for which this agent has data for. Usually the nodes scope is used to slice the entire dashboard (e.g. the Global Nodes Selector at the Netdata Cloud overview dashboard). Both positive and negative simple pattern expressions are supported.\n",
+ "description": "A simple pattern limiting the nodes scope of the query. The scope controls both data and metadata response. The simple pattern is checked against the nodes' machine guid, node id and hostname. The default nodes scope is all nodes for which this Agent has data for. Usually the nodes scope is used to slice the entire dashboard (e.g. the Global Nodes Selector at the Netdata Cloud overview dashboard). Both positive and negative simple pattern expressions are supported.\n",
"required": false,
"schema": {
"type": "string",
@@ -1864,7 +1864,7 @@
"scopeContexts": {
"name": "scope_contexts",
"in": "query",
- "description": "A simple pattern limiting the contexts scope of the query. The scope controls both data and metadata response. The default contexts scope is all contexts for which this agent has data for. Usually the contexts scope is used to slice data on the dashboard (e.g. each context based chart has its own contexts scope, limiting the chart to all the instances of the selected context). Both positive and negative simple pattern expressions are supported.\n",
+ "description": "A simple pattern limiting the contexts scope of the query. The scope controls both data and metadata response. The default contexts scope is all contexts for which this Agent has data for. Usually the contexts scope is used to slice data on the dashboard (e.g. each context based chart has its own contexts scope, limiting the chart to all the instances of the selected context). Both positive and negative simple pattern expressions are supported.\n",
"required": false,
"schema": {
"type": "string",
@@ -1992,7 +1992,7 @@
"dataQueryOptions": {
"name": "options",
"in": "query",
- "description": "Options that affect data generation.\n* `jsonwrap` - Wrap the output in a JSON object with metadata about the query.\n* `raw` - change the output so that it is aggregatable across multiple such queries. Supported by `/api/v2` data queries and `json2` format.\n* `minify` - Remove unnecessary spaces and newlines from the output.\n* `debug` - Provide additional information in `jsonwrap` output to help tracing issues.\n* `nonzero` - Do not return dimensions that all their values are zero, to improve the visual appearance of charts. They will still be returned if all the dimensions are entirely zero.\n* `null2zero` - Replace `null` values with `0`.\n* `absolute` or `abs` - Traditionally Netdata returns select dimensions negative to improve visual appearance. This option turns this feature off.\n* `display-absolute` - Only used by badges, to do color calculation using the signed value, but render the value without a sign.\n* `flip` or `reversed` - Order the timestamps array in reverse order (newest to oldest).\n* `min2max` - When flattening multi-dimensional data into a single metric format, use `max - min` instead of `sum`. This is EOL - use `/api/v2` to control aggregation across dimensions.\n* `percentage` - Convert all values into a percentage vs the row total. When enabled, Netdata will query all dimensions, even the ones that have not been selected or are hidden, to find the row total, in order to calculate the percentage of each dimension selected.\n* `seconds` - Output timestamps in seconds instead of dates.\n* `milliseconds` or `ms` - Output timestamps in milliseconds instead of dates.\n* `unaligned` - by default queries are aligned to the the view, so that as time passes past data returned do not change. When a data query will not be used for visualization, `unaligned` can be given to avoid aligning the query time-frame for visual precision.\n* `match-ids`, `match-names`. By default filters match both IDs and names when they are available. Setting either of the two options will disable the other.\n* `anomaly-bit` - query the anomaly information instead of metric values. This is EOL, use `/api/v2` and `json2` format which always returns this information and many more.\n* `jw-anomaly-rates` - return anomaly rates as a separate result set in the same `json` format response. This is EOL, use `/api/v2` and `json2` format which always returns information and many more. \n* `details` - `/api/v2/data` returns in `jsonwrap` the full tree of dimensions that have been matched by the query.\n* `group-by-labels` - `/api/v2/data` returns in `jsonwrap` flattened labels per output dimension. These are used to identify the instances that have been aggregated into each dimension, making it possible to provide a map, like Netdata does for Kubernetes.\n* `natural-points` - return timestamps as found in the database. The result is again fixed-step, but the query engine attempts to align them with the timestamps found in the database.\n* `virtual-points` - return timestamps independent of the database alignment. This is needed aggregating data across multiple Netdata agents, to ensure that their outputs do not need to be interpolated to be merged.\n* `selected-tier` - use data exclusively from the selected tier given with the `tier` parameter. This option is set automatically when the `tier` parameter is set.\n* `all-dimensions` - In `/api/v1` `jsonwrap` include metadata for all candidate metrics examined. 
In `/api/v2` this is standard behavior and no option is needed.\n* `label-quotes` - In `csv` output format, enclose each header label in quotes.\n* `objectrows` - Each row of value should be an object, not an array (only for `json` format).\n* `google_json` - Comply with google JSON/JSONP specs (only for `json` format).\n",
+ "description": "Options that affect data generation.\n* `jsonwrap` - Wrap the output in a JSON object with metadata about the query.\n* `raw` - change the output so that it is aggregatable across multiple such queries. Supported by `/api/v2` data queries and `json2` format.\n* `minify` - Remove unnecessary spaces and newlines from the output.\n* `debug` - Provide additional information in `jsonwrap` output to help tracing issues.\n* `nonzero` - Do not return dimensions that all their values are zero, to improve the visual appearance of charts. They will still be returned if all the dimensions are entirely zero.\n* `null2zero` - Replace `null` values with `0`.\n* `absolute` or `abs` - Traditionally Netdata returns select dimensions negative to improve visual appearance. This option turns this feature off.\n* `display-absolute` - Only used by badges, to do color calculation using the signed value, but render the value without a sign.\n* `flip` or `reversed` - Order the timestamps array in reverse order (newest to oldest).\n* `min2max` - When flattening multi-dimensional data into a single metric format, use `max - min` instead of `sum`. This is EOL - use `/api/v2` to control aggregation across dimensions.\n* `percentage` - Convert all values into a percentage vs the row total. When enabled, Netdata will query all dimensions, even the ones that have not been selected or are hidden, to find the row total, in order to calculate the percentage of each dimension selected.\n* `seconds` - Output timestamps in seconds instead of dates.\n* `milliseconds` or `ms` - Output timestamps in milliseconds instead of dates.\n* `unaligned` - by default queries are aligned to the the view, so that as time passes past data returned do not change. When a data query will not be used for visualization, `unaligned` can be given to avoid aligning the query time-frame for visual precision.\n* `match-ids`, `match-names`. By default filters match both IDs and names when they are available. Setting either of the two options will disable the other.\n* `anomaly-bit` - query the anomaly information instead of metric values. This is EOL, use `/api/v2` and `json2` format which always returns this information and many more.\n* `jw-anomaly-rates` - return anomaly rates as a separate result set in the same `json` format response. This is EOL, use `/api/v2` and `json2` format which always returns information and many more. \n* `details` - `/api/v2/data` returns in `jsonwrap` the full tree of dimensions that have been matched by the query.\n* `group-by-labels` - `/api/v2/data` returns in `jsonwrap` flattened labels per output dimension. These are used to identify the instances that have been aggregated into each dimension, making it possible to provide a map, like Netdata does for Kubernetes.\n* `natural-points` - return timestamps as found in the database. The result is again fixed-step, but the query engine attempts to align them with the timestamps found in the database.\n* `virtual-points` - return timestamps independent of the database alignment. This is needed aggregating data across multiple Netdata Agents, to ensure that their outputs do not need to be interpolated to be merged.\n* `selected-tier` - use data exclusively from the selected tier given with the `tier` parameter. This option is set automatically when the `tier` parameter is set.\n* `all-dimensions` - In `/api/v1` `jsonwrap` include metadata for all candidate metrics examined. 
In `/api/v2` this is standard behavior and no option is needed.\n* `label-quotes` - In `csv` output format, enclose each header label in quotes.\n* `objectrows` - Each row of value should be an object, not an array (only for `json` format).\n* `google_json` - Comply with google JSON/JSONP specs (only for `json` format).\n",
"required": false,
"allowEmptyValue": false,
"schema": {
@@ -2186,7 +2186,7 @@
"timeoutMS": {
"name": "timeout",
"in": "query",
- "description": "Specify a timeout value in milliseconds after which the agent will abort the query and return a 503 error. A value of 0 indicates no timeout.\n",
+ "description": "Specify a timeout value in milliseconds after which the Agent will abort the query and return a 503 error. A value of 0 indicates no timeout.\n",
"required": false,
"schema": {
"type": "number",
@@ -2197,7 +2197,7 @@
"timeoutSecs": {
"name": "timeout",
"in": "query",
- "description": "Specify a timeout value in seconds after which the agent will abort the query and return a 504 error. A value of 0 indicates no timeout, but some endpoints, like `weights`, do not accept infinite timeouts (they have a predefined default), so to disable the timeout it must be set to a really high value.\n",
+ "description": "Specify a timeout value in seconds after which the Agent will abort the query and return a 504 error. A value of 0 indicates no timeout, but some endpoints, like `weights`, do not accept infinite timeouts (they have a predefined default), so to disable the timeout it must be set to a really high value.\n",
"required": false,
"schema": {
"type": "number",
@@ -3490,38 +3490,38 @@
"type": "integer"
},
"agents": {
- "description": "An array of agent definitions consulted to compose this response.\n",
+ "description": "An array of Agent definitions consulted to compose this response.\n",
"type": "array",
"items": {
"type": "object",
"properties": {
"mg": {
- "description": "The agent machine GUID.",
+ "description": "The Agent machine GUID.",
"type": "string",
"format": "uuid"
},
"nd": {
- "description": "The agent cloud node ID.",
+ "description": "The Agent cloud node ID.",
"type": "string",
"format": "uuid"
},
"nm": {
- "description": "The agent hostname.",
+ "description": "The Agent hostname.",
"type": "string"
},
"ai": {
- "description": "The agent index ID for this agent, in this response.",
+ "description": "The Agent index ID for this Agent, in this response.",
"type": "integer"
},
"now": {
- "description": "The current unix epoch timestamp of this agent.",
+ "description": "The current unix epoch timestamp of this Agent.",
"type": "integer"
}
}
}
},
"versions": {
- "description": "Hashes that allow the caller to detect important database changes of Netdata agents.\n",
+ "description": "Hashes that allow the caller to detect important database changes of Netdata Agents.\n",
"type": "object",
"properties": {
"nodes_hard_hash": {
@@ -3577,11 +3577,11 @@
"type": "object",
"properties": {
"ai": {
- "description": "The agent index id that has been contacted for this node.",
+ "description": "The Agent index id that has been contacted for this node.",
"type": "integer"
},
"code": {
- "description": "The HTTP response code of the response for this node. When working directly with an agent, this is always 200. If the `code` is missing, it should be assumed to be 200.",
+ "description": "The HTTP response code of the response for this node. When working directly with an Agent, this is always 200. If the `code` is missing, it should be assumed to be 200.",
"type": "integer"
},
"msg": {
@@ -3589,7 +3589,7 @@
"type": "string"
},
"ms": {
- "description": "The time in milliseconds this node took to respond, or if the local agent responded for this node, the time it needed to execute the query. If `ms` is missing, the time that was required to query this node is unknown.",
+ "description": "The time in milliseconds this node took to respond, or if the local Agent responded for this node, the time it needed to execute the query. If `ms` is missing, the time that was required to query this node is unknown.",
"type": "number"
}
}
@@ -3641,11 +3641,11 @@
"type": "string"
},
"hops": {
- "description": "How many hops away from the origin node, the queried one is. 0 means the agent itself is the origin node.",
+ "description": "How many hops away from the origin node, the queried one is. 0 means the Agent itself is the origin node.",
"type": "integer"
},
"state": {
- "description": "The current state of the node on this agent.",
+ "description": "The current state of the node on this Agent.",
"type": "string",
"enum": [
"reachable",
@@ -3678,7 +3678,7 @@
}
},
"contexts2": {
- "description": "`/api/v2/contexts` and `/api/v2/q` response about multi-node contexts hosted by a Netdata agent.\n",
+ "description": "`/api/v2/contexts` and `/api/v2/q` response about multi-node contexts hosted by a Netdata Agent.\n",
"type": "object",
"properties": {
"api": {
@@ -4310,7 +4310,7 @@
"properties": {
"aclk-available": {
"type": "string",
- "description": "Describes whether this agent is capable of connection to the Cloud. False means agent has been built without ACLK component either on purpose (user choice) or due to missing dependency.\n"
+ "description": "Describes whether this Agent is capable of connection to the Cloud. False means Agent has been built without ACLK component either on purpose (user choice) or due to missing dependency.\n"
},
"aclk-version": {
"type": "integer",
@@ -4323,18 +4323,18 @@
"type": "string"
}
},
- "agent-claimed": {
+ "Agent-claimed": {
"type": "boolean",
- "description": "Informs whether this agent has been added to a space in the cloud (User has to perform claiming). If false (user didn't perform claiming) agent will never attempt any cloud connection."
+ "description": "Informs whether this Agent has been added to a space in the cloud (User has to perform claiming). If false (user didn't perform claiming) Agent will never attempt any cloud connection."
},
"claimed_id": {
"type": "string",
"format": "uuid",
- "description": "Unique ID this agent uses to identify when connecting to cloud"
+ "description": "Unique ID this Agent uses to identify when connecting to cloud"
},
"online": {
"type": "boolean",
- "description": "Informs if this agent was connected to the cloud at the time this request has been processed."
+ "description": "Informs if this Agent was connected to the cloud at the time this request has been processed."
},
"used-cloud-protocol": {
"type": "string",
@@ -4605,7 +4605,7 @@
"properties": {
"version": {
"type": "integer",
- "description": "The version of dynamic configuration supported by the Netdata agent."
+ "description": "The version of dynamic configuration supported by the Netdata Agent."
},
"tree": {
"type": "object",
diff --git a/src/web/api/netdata-swagger.yaml b/src/web/api/netdata-swagger.yaml
index abcb6db02a1fd2..6be46e015861db 100644
--- a/src/web/api/netdata-swagger.yaml
+++ b/src/web/api/netdata-swagger.yaml
@@ -32,7 +32,7 @@ tags:
- name: alerts
description: Everything related to alerts
- name: management
- description: Everything related to managing netdata agents
+ description: Everything related to managing Netdata Agents
paths:
/api/v2/nodes:
get:
@@ -41,7 +41,7 @@ paths:
- nodes
summary: Nodes Info v2
description: |
- Get a list of all nodes hosted by this Netdata agent.
+ Get a list of all nodes hosted by this Netdata Agent.
parameters:
- $ref: '#/components/parameters/scopeNodes'
- $ref: '#/components/parameters/scopeContexts'
@@ -54,7 +54,7 @@ paths:
application/json:
schema:
description: |
- `/api/v2/nodes` response for all nodes hosted by a Netdata agent.
+ `/api/v2/nodes` response for all nodes hosted by a Netdata Agent.
type: object
properties:
api:
@@ -74,7 +74,7 @@ paths:
- contexts
summary: Contexts Info v2
description: |
- Get a list of all contexts, across all nodes, hosted by this Netdata agent.
+ Get a list of all contexts, across all nodes, hosted by this Netdata Agent.
parameters:
- $ref: '#/components/parameters/scopeNodes'
- $ref: '#/components/parameters/scopeContexts'
@@ -94,7 +94,7 @@ paths:
- contexts
summary: Full Text Search v2
description: |
- Get a list of contexts, across all nodes, hosted by this Netdata agent, matching a string expression
+ Get a list of contexts, across all nodes, hosted by this Netdata Agent, matching a string expression
parameters:
- name: q
in: query
@@ -1186,7 +1186,7 @@ components:
name: scope_nodes
in: query
description: |
- A simple pattern limiting the nodes scope of the query. The scope controls both data and metadata response. The simple pattern is checked against the nodes' machine guid, node id and hostname. The default nodes scope is all nodes for which this agent has data for. Usually the nodes scope is used to slice the entire dashboard (e.g. the Global Nodes Selector at the Netdata Cloud overview dashboard). Both positive and negative simple pattern expressions are supported.
+ A simple pattern limiting the nodes scope of the query. The scope controls both data and metadata response. The simple pattern is checked against the nodes' machine GUID, node ID and hostname. The default nodes scope is all nodes for which this Agent has data. Usually the nodes scope is used to slice the entire dashboard (e.g. the Global Nodes Selector at the Netdata Cloud overview dashboard). Both positive and negative simple pattern expressions are supported.
required: false
schema:
type: string
@@ -1196,7 +1196,7 @@ components:
name: scope_contexts
in: query
description: |
- A simple pattern limiting the contexts scope of the query. The scope controls both data and metadata response. The default contexts scope is all contexts for which this agent has data for. Usually the contexts scope is used to slice data on the dashboard (e.g. each context based chart has its own contexts scope, limiting the chart to all the instances of the selected context). Both positive and negative simple pattern expressions are supported.
+ A simple pattern limiting the contexts scope of the query. The scope controls both data and metadata response. The default contexts scope is all contexts for which this Agent has data. Usually the contexts scope is used to slice data on the dashboard (e.g. each context-based chart has its own contexts scope, limiting the chart to all the instances of the selected context). Both positive and negative simple pattern expressions are supported.
required: false
schema:
type: string
@@ -1333,7 +1333,7 @@ components:
* `details` - `/api/v2/data` returns in `jsonwrap` the full tree of dimensions that have been matched by the query.
* `group-by-labels` - `/api/v2/data` returns in `jsonwrap` flattened labels per output dimension. These are used to identify the instances that have been aggregated into each dimension, making it possible to provide a map, like Netdata does for Kubernetes.
* `natural-points` - return timestamps as found in the database. The result is again fixed-step, but the query engine attempts to align them with the timestamps found in the database.
- * `virtual-points` - return timestamps independent of the database alignment. This is needed aggregating data across multiple Netdata agents, to ensure that their outputs do not need to be interpolated to be merged.
+ * `virtual-points` - return timestamps independent of the database alignment. This is needed when aggregating data across multiple Netdata Agents, to ensure that their outputs do not need to be interpolated to be merged.
* `selected-tier` - use data exclusively from the selected tier given with the `tier` parameter. This option is set automatically when the `tier` parameter is set.
* `all-dimensions` - In `/api/v1` `jsonwrap` include metadata for all candidate metrics examined. In `/api/v2` this is standard behavior and no option is needed.
* `label-quotes` - In `csv` output format, enclose each header label in quotes.
@@ -1520,7 +1520,7 @@ components:
name: timeout
in: query
description: |
- Specify a timeout value in milliseconds after which the agent will abort the query and return a 503 error. A value of 0 indicates no timeout.
+ Specify a timeout value in milliseconds after which the Agent will abort the query and return a 503 error. A value of 0 indicates no timeout.
required: false
schema:
type: number
@@ -1530,7 +1530,7 @@ components:
name: timeout
in: query
description: |
- Specify a timeout value in seconds after which the agent will abort the query and return a 504 error. A value of 0 indicates no timeout, but some endpoints, like `weights`, do not accept infinite timeouts (they have a predefined default), so to disable the timeout it must be set to a really high value.
+ Specify a timeout value in seconds after which the Agent will abort the query and return a 504 error. A value of 0 indicates no timeout, but some endpoints, like `weights`, do not accept infinite timeouts (they have a predefined default), so to disable the timeout it must be set to a really high value.
required: false
schema:
type: number
@@ -2563,31 +2563,31 @@ components:
type: integer
agents:
description: |
- An array of agent definitions consulted to compose this response.
+ An array of Agent definitions consulted to compose this response.
type: array
items:
type: object
properties:
mg:
- description: The agent machine GUID.
+ description: The Agent machine GUID.
type: string
format: uuid
nd:
- description: The agent cloud node ID.
+ description: The Agent cloud node ID.
type: string
format: uuid
nm:
- description: The agent hostname.
+ description: The Agent hostname.
type: string
ai:
- description: The agent index ID for this agent, in this response.
+ description: The Agent index ID for this Agent, in this response.
type: integer
now:
- description: The current unix epoch timestamp of this agent.
+ description: The current unix epoch timestamp of this Agent.
type: integer
versions:
description: |
- Hashes that allow the caller to detect important database changes of Netdata agents.
+ Hashes that allow the caller to detect important database changes of Netdata Agents.
type: object
properties:
nodes_hard_hash:
@@ -2636,16 +2636,16 @@ components:
type: object
properties:
ai:
- description: The agent index id that has been contacted for this node.
+ description: The index ID of the Agent that has been contacted for this node.
type: integer
code:
- description: The HTTP response code of the response for this node. When working directly with an agent, this is always 200. If the `code` is missing, it should be assumed to be 200.
+ description: The HTTP response code of the response for this node. When working directly with an Agent, this is always 200. If the `code` is missing, it should be assumed to be 200.
type: integer
msg:
description: A human readable description of the error, if any. If `msg` is missing, or is the empty string `""` or is `null`, there is no description associated with the current status.
type: string
ms:
- description: The time in milliseconds this node took to respond, or if the local agent responded for this node, the time it needed to execute the query. If `ms` is missing, the time that was required to query this node is unknown.
+ description: The time in milliseconds this node took to respond, or if the local Agent responded for this node, the time it needed to execute the query. If `ms` is missing, the time that was required to query this node is unknown.
type: number
nodeWithDataStatistics:
allOf:
@@ -2673,10 +2673,10 @@ components:
description: The version of the Netdata Agent the node runs.
type: string
hops:
- description: How many hops away from the origin node, the queried one is. 0 means the agent itself is the origin node.
+ description: How many hops away from the origin node the queried one is. 0 means the Agent itself is the origin node.
type: integer
state:
- description: The current state of the node on this agent.
+ description: The current state of the node on this Agent.
type: string
enum:
- reachable
@@ -2697,7 +2697,7 @@ components:
type: boolean
contexts2:
description: |
- `/api/v2/contexts` and `/api/v2/q` response about multi-node contexts hosted by a Netdata agent.
+ `/api/v2/contexts` and `/api/v2/q` response about multi-node contexts hosted by a Netdata Agent.
type: object
properties:
api:
@@ -3167,7 +3167,7 @@ components:
aclk-available:
type: string
description: |
- Describes whether this agent is capable of connection to the Cloud. False means agent has been built without ACLK component either on purpose (user choice) or due to missing dependency.
+ Describes whether this Agent is capable of connecting to the Cloud. False means the Agent has been built without the ACLK component, either on purpose (user choice) or due to a missing dependency.
aclk-version:
type: integer
description: Describes which ACLK version is currently used.
@@ -3176,17 +3176,17 @@ components:
description: List of supported protocols for communication with Cloud.
items:
type: string
agent-claimed:
type: boolean
- description: Informs whether this agent has been added to a space in the cloud (User has to perform claiming).
- If false (user didn't perform claiming) agent will never attempt any cloud connection.
+ description: Informs whether this Agent has been added to a space in the cloud (the user has to perform claiming).
+ If false (the user didn't perform claiming), the Agent will never attempt any cloud connection.
claimed_id:
type: string
format: uuid
- description: Unique ID this agent uses to identify when connecting to cloud
+ description: Unique ID this Agent uses to identify itself when connecting to the cloud
online:
type: boolean
- description: Informs if this agent was connected to the cloud at the time this request has been processed.
+ description: Informs if this Agent was connected to the cloud at the time this request was processed.
used-cloud-protocol:
type: string
description: Informs which protocol is used to communicate with cloud
@@ -3374,7 +3374,7 @@ components:
properties:
version:
type: integer
- description: The version of dynamic configuration supported by the Netdata agent.
+ description: The version of dynamic configuration supported by the Netdata Agent.
tree:
type: object
description: A map of configuration entity paths, each containing one or more configurable entities.
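The same parameter components (`scope_contexts`, `options`, the millisecond `timeout`) are typically combined in data queries. The sketch below is a hedged illustration, assuming a local Agent, a `system.cpu` context, the `json2` format, and that `/api/v2/data` accepts the millisecond timeout described above; none of these values come from this change.

```python
# Minimal sketch of a /api/v2/data query combining the parameters described above.
import json
import urllib.parse
import urllib.request

BASE_URL = "http://localhost:19999"  # assumption: a local Agent on the default port

params = urllib.parse.urlencode({
    "scope_contexts": "system.cpu",  # assumption: the context we want to query
    "options": "jsonwrap|minify",    # wrap the result with metadata, strip whitespace
    "format": "json2",               # the multi-node aware v2 output format
    "timeout": "5000",               # milliseconds, per the timeoutMS parameter above
})

with urllib.request.urlopen(f"{BASE_URL}/api/v2/data?{params}", timeout=10) as response:
    reply = json.load(response)

# With `jsonwrap`, the reply is a JSON object carrying query metadata alongside the data.
print(sorted(reply))
```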
diff --git a/src/web/server/README.md b/src/web/server/README.md
index 4052ee2b17dbe6..fce80777d86748 100644
--- a/src/web/server/README.md
+++ b/src/web/server/README.md
@@ -90,7 +90,7 @@ Using the above, Netdata will bind to:
- IPv4 127.0.0.1 at port 19999 (port was used from `default port`). Only the UI (dashboard) and the read API will be accessible on this port. Both HTTP and HTTPS requests will be accepted.
- IPv4 10.1.1.1 at port 19998. The management API and `netdata.conf` will be accessible on this port.
- All the IPs `hostname` resolves to (both IPv4 and IPv6 depending on the resolved IPs) at port 19997. Only badges will be accessible on this port.
-- All IPv6 IPs at port 19996. Only metric streaming requests from other Netdata agents will be accepted on this port. Only encrypted streams will be allowed (i.e. child nodes also need to be [configured for TLS](/src/streaming/README.md).
+- All IPv6 IPs at port 19996. Only metric streaming requests from other Netdata Agents will be accepted on this port. Only encrypted streams will be allowed (i.e. child nodes also need to be [configured for TLS](/src/streaming/README.md)).
- All the IPs `localhost` resolves to (both IPv4 and IPv6 depending the resolved IPs) at port 19996. This port will only accept registry API requests.
- All IPv4 and IPv6 IPs at port `http` as set in `/etc/services`. Only the UI (dashboard) and the read API will be accessible on this port.
- Unix domain socket `/run/netdata/netdata.sock`. All requests are serviceable on this socket. Note that in some OSs like Fedora, every service sees a different `/tmp`, so don't create a Unix socket under `/tmp`. `/run` or `/var/run` is suggested.
@@ -179,7 +179,7 @@ Example:
For information how to configure the child to use TLS, check [securing the communication](/src/streaming/README.md#securing-streaming-with-tlsssl) in the streaming documentation. There you will find additional details on the expected behavior for client and server nodes, when their respective TLS options are enabled.
-When we define the use of SSL in a Netdata agent for different ports, Netdata will apply the behavior specified on each port. For example, using the configuration line below:
+When SSL is enabled on different ports of a Netdata Agent, Netdata will apply the behavior specified for each port. For example, using the configuration line below:
```text
[web]
@@ -235,7 +235,7 @@ Netdata supports access lists in `netdata.conf`:
- `allow management from` checks the IPs to allow API management calls. Management via the API is currently supported for [health](/src/web/api/health/README.md#health-management-api)
-In order to check the FQDN of the connection without opening the Netdata agent to DNS-spoofing, a reverse-dns record
+In order to check the FQDN of the connection without opening the Netdata Agent to DNS-spoofing, a reverse-dns record
must be setup for the connecting host. At connection time the reverse-dns of the peer IP address is resolved, and
a forward DNS resolution is made to validate the IP address against the name-pattern.
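The binding, per-port service restrictions and access lists discussed in this section all live in the `[web]` section of `netdata.conf`. The snippet below is only a hedged illustration of that layout; the ports, service labels and patterns are assumed values to adapt, not a recommended configuration.

```text
[web]
    # one listener per service set (illustrative values)
    bind to = 127.0.0.1:19999=dashboard 10.1.1.1:19998=management|netdata.conf [::]:19996=streaming^SSL=force unix:/run/netdata/netdata.sock
    # access lists take simple patterns; FQDN patterns trigger the reverse DNS check described above
    allow connections from = localhost 10.*
    allow management from = localhost *.mgmt.example.com
```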