diff --git a/blog-service/2022/12-31.md b/blog-service/2022/12-31.md index 499fdd3f32..a226bbb05e 100644 --- a/blog-service/2022/12-31.md +++ b/blog-service/2022/12-31.md @@ -204,7 +204,7 @@ Update - GitHub Advanced Security dashboards are now available for the Sumo Logi New - We’re happy to announce the release of Alert Grouping, which allows you to generate more than one alert from a given monitor by specifying a group condition on one or more fields. For example, rather than creating multiple monitors for each `service`, you could create one single monitor that notifies you when some metric (i.e., CPU utilization, error count) goes above the threshold for a given `service`. [Learn more](/docs/alerts/monitors/alert-grouping). -New - Configurable Resolution Window for Logs allows more quickly resolve alerts when the underlying issues are fixed. You can configure how long a monitor will wait, before resolving the alert, when the underlying issues was corrected (earlier the monitor waited one complete window before resolving). [Learn more](/docs/alerts/monitors/create-monitor/#trigger-type). +New - Configurable Resolution Window for Logs allows more quickly resolve alerts when the underlying issues are fixed. You can configure how long a monitor will wait, before resolving the alert, when the underlying issues was corrected (earlier the monitor waited one complete window before resolving). See [Logs trigger types](/docs/alerts/monitors/create-monitor/#trigger-type-logs) and [Metrics trigger types](/docs/alerts/monitors/create-monitor/#trigger-type-metrics). New - You can now access your monitor playbook as a template variable, `{{playbook}}`. You can reference this template variable to customize your notification payloads similar to any other template variable. [Learn more](/docs/alerts/monitors/alert-variables). diff --git a/docs/alerts/monitors/alert-response.md b/docs/alerts/monitors/alert-response.md index e5fcd83ca1..fbbb00f8e8 100644 --- a/docs/alerts/monitors/alert-response.md +++ b/docs/alerts/monitors/alert-response.md @@ -63,12 +63,11 @@ The following is an example Slack payload with the variable: ## Alerts list -The Alerts list shows all of your Alerts from monitors triggered within the past seven days. By default, the list is sorted by status (showing **Active** on top, followed by **Resolved**), and then chronologically by creation time. +The Alerts list shows all of your Alerts from monitors triggered within the past 7 days. By default, the list is sorted by status (showing **Active** on top, followed by **Resolved**), and then chronologically by creation time. [**Classic UI**](/docs/get-started/sumo-logic-ui-classic). To access the Alerts list, click the bell icon in the top menu.
alert-list-page-bell-border [**New UI**](/docs/get-started/sumo-logic-ui/). To access the Alerts list, in the main Sumo Logic menu select **Alerts > Alert List**. You can also click the **Go To...** menu at the top of the screen and select **Alert List**. - To filter or sort by category (e.g., **Name**, **Severity**, **Status**), you can use the search bar or click on a column header.
![search alert list.png](/img/monitors/search-alert-list.png)
@@ -76,13 +75,13 @@ To filter or sort by category (e.g., **Name**, **Severity**, **Status**), you c
 
 The Alerts list displays up to 1,000 alerts.
 :::
 
-### Resolve alerts
+### Resolving alerts
 
 To resolve an alert, click a row to select it, then click **Resolve**.
 
 ### Translating thresholds
 
-Threshold translating allows you to open the Alert Response Page in the **Metrics Explorer** that helps you to easily view the threshold associated with an alert. This also helps you to understand how your monitor's thresholds are translating into metrics and compare the threshold values set in a monitor with the data displayed in the Metrics Explorer chart.
+Threshold translating lets you open the Alert Response page in the **Metrics Explorer** so you can easily view the threshold associated with an alert. It also helps you understand how your monitor's thresholds translate into metrics, and lets you compare the threshold values set in a monitor with the data displayed in the Metrics Explorer chart.
 
 For example, when you open an alert response page in Metrics Explorer, you can see critical thresholds defined with some number. You can then see that this threshold is also applied and enabled in the Metrics Explorer view, with exactly the same number defined.
arp-metrics-explorer @@ -99,7 +98,7 @@ To view the Alert Response chart in Metrics Explorer, follow the steps below: 1. Use this feature to compare the threshold values set in a monitor with the data displayed in the Metrics Explorer graph and gain a better understanding of how your monitors are translating into metrics. :::note -Note that the same threshold translating functionality supports to [Create Monitors from the Metrics Explorer](/docs/alerts/monitors/create-monitor/#from-your-metrics-explorer) and [Opening Monitor in the Metrics Explorer](/docs/alerts/monitors/settings/#view-in-metrics-explorer). +Note that the same threshold translating functionality supports to [Create Monitors from the Metrics Explorer](/docs/alerts/monitors/create-monitor/#from-metrics-explorer) and [Opening Monitor in the Metrics Explorer](/docs/alerts/monitors/settings/#view-in-metrics-explorer). ::: @@ -111,7 +110,7 @@ An Alert provides curated information to on-calls in order for them to troublesh * **Alert Details**. Overview of the alert that was triggered to help you understand the issue and its potential impact.  * **Alert Context**. System curated context helps you understand potential underlying symptoms within the system that might be causing the issue. -### Alert Details +### Alert details The details section provides: * a chart to visualize the alerting KPI before and during the alert. @@ -154,13 +153,13 @@ Below this, as you scroll down on the page, you'll see context cards covered in * Related Alerts and Monitor History show the top 250 alerts. ::: -### Context Cards +### Alert context cards **Alert Context** provides additional insights that the system has discovered automatically by analyzing your data. The system uses artificial intelligence and machine learning to track your logs and metrics data and find interesting patterns in the data that might help explain the underlying issue and surfaces them in the form of context cards. Depending on the type of data an alert is based on (metrics or logs) and the detection method (static or outlier), you'll see different context cards. You will see a progress spinner labeled **Analyzing alert content** at the bottom of the window when cards are still being loaded. It may take a minute for some cards to load.
![analyzing alert content.png](/img/monitors/analyzing-alert-content.png) -### Log Fluctuations +### Log fluctuations This card detects different signatures in your log messages using [LogReduce](/docs/search/logreduce) such as errors, exceptions, timeouts, and successes. It compares log signatures trends with a normal baseline period and surfaces noteworthy changes in signatures. @@ -315,7 +314,7 @@ To cancel an inherited subscription, you'll need to remove the subscription from Alert notification preferences give you granular control over specific monitor activity you want to follow.
alert-list-page-bell-border -1. [**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu, select your username and then **Preferences**.
[**New UI**](/docs/get-started/sumo-logic-ui). In the top menu, select your username and then **Preferences**. +1. [**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu, select your username and then **Preferences**.
[**New UI**](/docs/get-started/sumo-logic-ui). In the top menu, select your username and then **Preferences**. 2. Click on any of the following checkboxes to enable your desired preferences: * **Display alert badge when my subscribed monitors are triggered**. the bell icon is displayed in the top nav * **Notify about only subscribed monitors**. the bell icon will only push notifications for monitors you're subscribed to diff --git a/docs/alerts/monitors/alert-variables.md b/docs/alerts/monitors/alert-variables.md index ef8d9b74bc..89504f9314 100644 --- a/docs/alerts/monitors/alert-variables.md +++ b/docs/alerts/monitors/alert-variables.md @@ -23,7 +23,7 @@ Variables must be enclosed by double curly brackets (`{{ }}`). Unresolved variab | `{{Query}}` | The query used to run the alert. | ✅| ✅| | `{{QueryURL}}` | The URL to the logs or metrics query within Sumo Logic. | ✅| ✅| | `{{ResultsJson}}` | JSON object containing the query results that triggered the alert. A maximum of 200 aggregate results or 10 raw messages for this field can be sent via webhook. | ✅| ✅
Not available with Email notifications | -| `{{ResultsJson.fieldName}}` | The value of the specified field name. For example, the payload specification `{{ResultsJson.client_ip}} had {{ResultsJson.errors}} errors` would result in a subject line like this: `70.69.152.165 had 391 errors`.

A maximum of 200 aggregate results or 10 raw messages for this field can be sent via webhook.

A field name must match (case-insensitive) the field from your search and must be alphanumeric characters, underscores, and spaces. If you have a field name that has an unsupported character, use the [`as`](../../search/search-query-language/search-operators/as.md) operator to rename it.

You can return a specific result by providing an array index value in bracket notation such as `{{ResultsJson.fieldName}}[0]` to return the first result.

**Reserved Fields**
The following are reserved field names. They are generated by Sumo Logic during collection or search operations. | ✅| ✅
Email notifications only return the first result. | +| `{{ResultsJson.fieldName}}` | The value of the specified field name. For example, the payload specification `{{ResultsJson.client_ip}} had {{ResultsJson.errors}} errors` would result in a subject line like this: `70.69.152.165 had 391 errors`.

A maximum of 200 aggregate results or 10 raw messages for this field can be sent via webhook.

A field name must match (case-insensitive) the field from your search and can contain only alphanumeric characters, underscores, and spaces. If a field name contains an unsupported character, use the [`as`](../../search/search-query-language/search-operators/as.md) operator to rename it.

You can return a specific result by providing an array index value in bracket notation such as `{{ResultsJson.fieldName}}[0]` to return the first result.

**Reserved Fields**
The following are reserved field names. They are generated by Sumo Logic during collection or search operations. | ✅| ✅
Email notifications only return the first result. | | `{{NumQueryResults}}` | The number of results the query returned. Results can be raw messages, time-series, or aggregates.
An aggregate query returns the number of aggregate results; displayed in the **Aggregates** tab of the [Search page](/docs/search).
A non-aggregate query returns the number of raw results; displayed in the **Messages** tab of the [Log Search](/docs/search) page. | ✅| ✅| | `{{Id}}` | The unique identifier of the monitor or search that triggered the alert. For example, `00000000000468D5`. | ✅| ✅| | `{{DetectionMethod}}` | This is the type of Detection Method used to detect alerts. Values are based on static or outlier triggers and data type, either logs or metrics. The value will be either `LogsStaticCondition`, `MetricsStaticCondition`, `LogsOutlierCondition`, `MetricsOutlierCondition`, `LogsMissingDataCondition`, `MetricsMissingDataCondition`, or `StaticCondition` (deprecated). | ✅| ✅| @@ -37,7 +37,37 @@ Variables must be enclosed by double curly brackets (`{{ }}`). Unresolved variab | `{{SourceURL}}` | The URL to the configuration or status page of the monitor in Sumo Logic. | ✅| ❌ | | `{{AlertResponseUrl}}` | When your monitor is triggered, it will generate a URL and provide it as the value of this variable where you can use it to open alert response. | ✅| ❌ | | `{{AlertName}}` | Name of the alert that will be displayed on the alert page. | ✅| ✅| -| `{{Playbook}}` | Allows you to access the [playbook content](/docs/alerts/monitors/create-monitor#trigger-type) that was configured as part of the initial monitor setup. | ✅| ✅| +| `{{Playbook}}` | Allows you to access the [playbook content](/docs/alerts/monitors/create-monitor/#step-4-playbook-optional) configured as part of your initial monitor setup. | ✅| ✅| + +:::info Legacy variables + +Here are legacy variables available for alert notifications from metrics monitors and Scheduled Searches. + +
+Click to view + +| Variables | Description | Metrics Monitors | Scheduled Searches | +| :-- | :-- | :-- | :-- | +| `{{SearchName}}` | Description of the saved search or monitor. In the delivered payload, this variable is replaced with the Name you assigned to the search or monitor when you created it. | ✅| ✅| +| `{{SearchDescription}}` | Description of the saved search or monitor. In the delivered payload, this variable is replaced by the Description you assigned to the search or monitor when you created it. | ✅| ✅| +| `{{SearchQuery}}` | The query used to run the saved search. In the delivered payload, this variable is replaced by your saved search query or metric query. | ✅| ✅| +| `{{SearchQueryUrl}}` | The URL to the search or metrics query. In the delivered payload, this is a URL that you can click to run the saved logs or metric query. | ✅| ✅| +| `{{TimeRange}}` | The time range that triggered the alert. | ✅| ✅| +| `{{FireTime}}` | The start time of the log search or metric query that triggered the notification. | ✅| ✅| +| ` {{AggregateResultsJson}}` | JSON object containing search aggregation results. A maximum of 200 aggregate results can be sent via webhook. | ❌ | ✅
Not available with Email notifications | +| `{{RawResultsJson}}` | JSON object containing raw messages. A maximum of 10 raw messages can be sent via webhook. | ❌ | ✅
Not available with Email notifications | +| `{{NumRawResults}}` | Number of results returned by the search. | ❌ | ✅| +| `{{Results.fieldname}}` | The value returned from the search result for the specified field. For example, this payload specification:
`{{Results.client_ip}} had {{Results.errors}} errors`
Results in a subject line like this:
`70.69.152.165 had 391 errors`
A maximum of 200 aggregate results or 10 raw messages for this field can be sent via webhook.
A field name must match (case-insensitive) the field from your search and must contain only **alphanumeric characters**, **underscores**, and **spaces**. If you have a field name that has an unsupported character, use the [as](../../search/search-query-language/search-operators/as.md) operator to rename it. | ✅| ✅|
+| `{{AlertThreshold}}` | The condition that triggered the alert (for example, above 90 at least once in the last 5 minutes). | ✅| ❌ |
+| `{{AlertSource}}` | The metric and sourceHost that triggered the alert, including associated tags for that metric. | ✅| ❌ |
+| `{{AlertGroup}}` | The alert grouping that triggered the alert, including associated values for that metric. | ✅| ❌ |
+| `{{AlertSource.fieldname}}` | The value returned from the AlertSource object for the specified field name. | ✅| ❌ |
+| `{{AlertID}}` | The ID of the triggered alert. | ✅| ❌ |
+| `{{AlertStatus}}` | Current status of the time series that triggered (for example, Critical or Warning). | ✅| ❌ |
+| `{{AlertCondition}}` | The condition that triggered the alert. | ❌ | ✅ |
+
+</details>
+::: ## Examples @@ -90,31 +120,3 @@ Variables must be enclosed by double curly brackets (`{{ }}`). Unresolved variab ```sql Monitor Alert: {{TriggerTimeRange}} on {{Name}} ``` - -## Legacy variables - -This section provides the old variables available for alert notifications from metrics monitors and Scheduled Searches. The following table shows where the old variables are supported. - -:::tip -We recommend instead using the new variables listed above. In the future, legacy variables will be deprecated. -::: - -| Variables | Description | Metrics Monitors | Scheduled Searches | -| :-- | :-- | :-- | :-- | -| `{{SearchName}}` | Description of the saved search or monitor. In the delivered payload, this variable is replaced with the Name you assigned to the search or monitor when you created it. | ✅| ✅| -| `{{SearchDescription}}` | Description of the saved search or monitor. In the delivered payload, this variable is replaced by the Description you assigned to the search or monitor when you created it. | ✅| ✅| -| `{{SearchQuery}}` | The query used to run the saved search. In the delivered payload, this variable is replaced by your saved search query or metric query. | ✅| ✅| -| `{{SearchQueryUrl}}` | The URL to the search or metrics query. In the delivered payload, this is a URL that you can click to run the saved logs or metric query. | ✅| ✅| -| `{{TimeRange}}` | The time range that triggered the alert. | ✅| ✅| -| `{{FireTime}}` | The start time of the log search or metric query that triggered the notification. | ✅| ✅| -| ` {{AggregateResultsJson}}` | JSON object containing search aggregation results. A maximum of 200 aggregate results can be sent via webhook. | ❌ | ✅
Not available with Email notifications | -| `{{RawResultsJson}}` | JSON object containing raw messages. A maximum of 10 raw messages can be sent via webhook. | ❌ | ✅
Not available with Email notifications | -| `{{NumRawResults}}` | Number of results returned by the search. | ❌ | ✅| -| `{{Results.fieldname}}` | The value returned from the search result for the specified field. For example, this payload specification:
`{{Results.client_ip}} had {{Results.errors}} errors`
Results in a subject line like this:
`70.69.152.165 had 391 errors`
A maximum of 200 aggregate results or 10 raw messages for this field can be sent via webhook.
A field name must match (case-insensitive) the field from your search and must be **alphanumeric characters**, **underscores**, and b. If you have a field name that has an unsupported character use the [as](../../search/search-query-language/search-operators/as.md) operator to rename it. | ✅| ✅| -| `{{AlertThreshold}}` | The condition that triggered the alert (for example, above 90 at least once in the last 5 minutes) | ✅| ❌ | -| `{{AlertSource}}` | The metric and sourceHost that triggered the alert, including associated tags for that metric. | ✅| ❌ | -| `{{AlertGroup}}` | The alert grouping that triggered the alert, including associated values for that metric. | ✅| ❌ | -| `{{AlertSource.fieldname}}` | The value returned from the AlertSource object for the specified field name. | ✅| ❌ | -| `{{AlertID}}` | The ID of the triggered alert. | ✅| ❌ | -| `{{AlertStatus}}` | Current status of the time series that triggered (for example, Critical or Warning). | ✅| ❌ | -| `{{AlertCondition}}` | The condition that triggered the alert. | ❌ | ✅ | diff --git a/docs/alerts/monitors/create-monitor.md b/docs/alerts/monitors/create-monitor.md index ad5d2c4616..693ec31040 100644 --- a/docs/alerts/monitors/create-monitor.md +++ b/docs/alerts/monitors/create-monitor.md @@ -5,124 +5,148 @@ description: Learn how to create a Sumo Logic monitor. --- import useBaseUrl from '@docusaurus/useBaseUrl'; -import AlertsTimeslice from '../../reuse/alerts-timeslice.md'; -This topic shows you how to create a monitor. +This guide will walk you through the steps of creating a monitor in Sumo Logic, from setting up trigger conditions to configuring advanced settings, notifications, and playbooks. -
-Use the New Monitor dialog to create a monitor (expand to view) - -Screenshot of the 'New Monitor' setup page in Sumo Logic, displaying sections for configuring trigger conditions, advanced settings, notifications, playbook, and monitor details. It includes options to select monitor type (Logs, Metrics, SLO), detection method (Static, Anomaly), and set alert criteria. The advanced settings section provides options for alert name and evaluation delay, while the notifications section allows configuring notification preferences. The playbook section supports adding text and automated playbooks. The monitor details section has fields for monitor name, location, tags, and description, with 'Cancel' and 'Save' buttons at the bottom. - -
+## Open the New Monitor window +There are several ways to create a new monitor, depending on where you are in Sumo Logic. -## Open the New Monitor window +### From the Monitors page -### From your Monitors page +1. [**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu, select **Manage Data > Monitoring > Monitors**.
[**New UI**](/docs/get-started/sumo-logic-ui). In the main Sumo Logic menu, select **Alerts > Monitors**. You can also click the **Go To...** menu at the top of the screen and select **Monitors**. +1. Click **Add** > **New Monitor**, and the **New Monitor** dialog box will appear. -1. [**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu, select **Manage Data > Monitoring > Monitors**.
[**New UI**](/docs/get-started/sumo-logic-ui). In the main Sumo Logic menu, select **Alerts > Monitors**. You can also click the **Go To...** menu at the top of the screen and select **Monitors**. -1. Click on the **Add** button > **New Monitor** to add a new Monitor. The **New Monitor** dialog box will appear. + -### From your Metrics Explorer +### From Metrics Explorer Creating a monitor based on the threshold values defined in the Metrics page can save time and effort. By using the pre-filled monitor editor, you can quickly create a monitor with the same threshold values as defined in the Metrics page. This will ensure that the monitor is using the same criteria as the Metrics page, providing consistency in monitoring. To create a monitor from the [Metrics Explorer](/docs/metrics/metrics-queries/metrics-explorer/), follow the steps below: -1. Open the Metrics Explorer page and enter the metrics query to create a monitor from it.Screenshot of the Metrics Explorer in Sumo Logic displaying a query -1. In the **Threshold** section, define the critical and warning thresholds for your metrics query.
Screenshot of the Metrics Explorer in Sumo Logic, displaying a line chart for node memory utilization over time. The chart shows the memory utilization metric from 17:42:12 to 17:57:12 on 21/02/2023. The right side of the screen includes a thresholds panel with critical and warning thresholds set to 500000000 and 80, respectively. The 'Fill remaining area as green' option is toggled off. -1. Click the kebab button at the end of the query field and select **Create a Monitor**.
Screenshot of the Metrics Explorer in Sumo Logic, showing the dropdown menu accessed via the three vertical dots icon. The menu includes options for Basic Mode, Duplicate Query, Create a Monitor, and Create an SLO. The option 'Create a Monitor' is highlighted. Below the menu, the thresholds panel shows critical and warning thresholds set to 500000000 and 80, respectively, with the 'Fill remaining area as green' option toggled off. -1. The Monitor Editor will open with prefilled data based on the threshold values set on the Metrics page.
Screenshot of the 'New Monitor' setup page in Sumo Logic, specifically focusing on the Trigger Conditions section. The Monitor Type is set to Metrics and Detection Method to Static. The query is set for node memory utilization for a specific collector. The Alert Grouping options include one alert per monitor or one alert per time series. The Trigger Type section shows critical alerts set to trigger when the result is greater than or equal to 500000000 within 15 minutes. The recovery settings are enabled to recover automatically when the result is less than 500000000 within a 15-minute window. Historical Trend is displayed below, with a dashed red line indicating the threshold. -1. In the **Trigger Type** section of the monitor editor, enable the checkbox that corresponds to the threshold value that you want to use (either "Critical", "Warning", or both). -1. The threshold values will be the same as defined in the Metrics page for both Critical and Warning thresholds. -1. All other parameters should be set to default, including the window (15 minutes) and the "at all times" box. -1. Ensure that the Recover value is set to the default, which is the opposite of the Alert value. The Edit Recovery button should be off. -1. Once all values have been set, click on **Save** to create the monitor. +1. Open the Metrics Explorer page: + * [**Classic UI**](/docs/get-started/sumo-logic-ui-classic). From Sumo Logic home, click **Metrics**.
Screenshot of the Sumo Logic home page with rectangle around the Metrics icon + * [**New UI**](/docs/get-started/sumo-logic-ui). Click the **Go To...** menu at the top of the screen and select **Metrics Search**. +1. Enter a metrics query. For example:
Screenshot of the Metrics Explorer in Sumo Logic displaying a query +1. In the **Thresholds** section, define the critical and warning thresholds for your metrics query.
Screenshot of the Metrics Explorer in Sumo Logic, displaying a line chart for node memory utilization over time. The chart shows the memory utilization metric from 17:42:12 to 17:57:12 on 21/02/2023. The right side of the screen includes a thresholds panel with critical and warning thresholds set to 500000000 and 80, respectively. The 'Fill remaining area as green' option is toggled off. +1. Click the three-dot kebab icon button at the end of the query field and select **Create a Monitor**.
Screenshot of the Metrics Explorer in Sumo Logic, showing the dropdown menu accessed via the three vertical dots icon. The menu includes options for Basic Mode, Duplicate Query, Create a Monitor, and Create an SLO. The option 'Create a Monitor' is highlighted. Below the menu, the thresholds panel shows critical and warning thresholds set to 500000000 and 80, respectively, with the 'Fill remaining area as green' option toggled off. +1. The **New Monitor** will open with prefilled data based on the threshold values you set in the previous steps.
Screenshot of the 'New Monitor' setup page in Sumo Logic, specifically focusing on the Trigger Conditions section. The Monitor Type is set to Metrics and Detection Method to Static. The query is set for node memory utilization for a specific collector. The Alert Grouping options include one alert per monitor or one alert per time series. The Trigger Type section shows critical alerts set to trigger when the result is greater than or equal to 500000000 within 15 minutes. The recovery settings are enabled to recover automatically when the result is less than 500000000 within a 15-minute window. Historical Trend is displayed below, with a dashed red line indicating the threshold. +1. In the **Trigger Type** section, enable the checkbox that corresponds to the threshold value that you want to use (Critical and/or Warning). + * The threshold values will be the same as defined in the Metrics page for both Critical and Warning thresholds. + * Set all other parameters to default, including the window (15 minutes) and the **at all times** box. + * Ensure that the Recover value is set to the default, which is the opposite of the Alert value. The Edit Recovery button should be off. +1. Once all values have been set, click **Save** to create the monitor. 1. The same threshold will also be applied to the histogram chart. :::note -Note that the same threshold translating functionality supports to [Opening Alerts Response Page in the Metrics Explorer](/docs/alerts/monitors/alert-response/#translating-thresholds) and [Opening Monitor in the Metrics Explorer](/docs/alerts/monitors/settings/#view-in-metrics-explorer). +The same threshold translating functionality supports to [Opening Alerts Response Page in the Metrics Explorer](/docs/alerts/monitors/alert-response/#translating-thresholds) and [Opening Monitor in the Metrics Explorer](/docs/alerts/monitors/settings/#view-in-metrics-explorer). ::: ## Step 1. Set trigger conditions -The first step when you create a new monitor is to set the trigger conditions. +The first step when creating a new monitor is setting the **Trigger Conditions**. -Set trigger conditions +### Monitor Type -### Select monitor type and detection method +Select a **Monitor Type**, which will create alerts based on [Logs](/docs/search/), [Metrics](/docs/metrics/metrics-queries/), or an [SLO](/docs/observability/reliability-management-slo/).
Monitor types -1. Select a **Monitor Type**.
Monitor types - * **Logs**. Creates alerts based on a [log search](/docs/search/). - * **Metrics**. Creates alerts based on [metrics queries](/docs/metrics/metrics-queries/). - * **SLO**. Creates alerts based on a [Service Level Objectives (SLO)](/docs/observability/reliability-management-slo/). -1. Select a **Detection Method**. The methods available depend on whether you choose **Logs** or **Metrics** as the monitor type (there is no detection type for **SLO**):
Logs detection methods
Logs detection methods - * **Static** allows you to set specific threshold conditions. Use this detection method when you are alerting on KPIs that have well defined and constant thresholds for what's good and bad. For example, infrastructure metrics like CPU utilization, and memory. - * **Anomaly** lets you uncover unusual behavior identified by anomaly detection, which applies machine learning techniques to detect anomalies and identifies suspicious patterns of activity. This type of detection, also called [*AI-Driven Alerting*](https://www.youtube.com/watch?v=nMRoYb1YCfg), works by establishing baselines for normal behavior so you can receive alerts when deviations or unusual activities are detected. When you create a monitor using this method, it establishes a baseline for normal signal behavior, leveraging historical data to minimize false positives. AI-driven alerting overcomes monitoring limitations through: - * **Model-driven anomaly detection**. Utilizing historical data, ML models establish accurate baselines, eliminating guesswork and noise in alerts. - * **AutoML**. The system self-tunes, including seasonality detection, minimizing user intervention for a simpler experience. - * **User context**. Users set alert sensitivity and incident thresholds, adding context to anomaly detection to mitigate noise. - * **One-click playbook assignment**. Monitors seamlessly [link to Automation Service playbooks](/docs/alerts/monitors/use-playbooks-with-monitors/#create-an-anomaly-monitor-that-runs-an-automated-playbook), expediting response without manual intervention. - * **Auto-diagnosis and recovery**. Sumo Logic Automation Service automates diagnosis and resolution, closing the loop from alert to recovery. - * **Outlier** lets you detect an unusual change or a spike in a time series of a key indicator. Use this detection method when you are alerting on KPIs that don't have well-defined constant thresholds for what's good and bad. You want the Monitor to automatically detect and alert on unusual changes or spikes on the alerting query. For example, application KPIs like page request, throughput, and latency.  -1. If you chose the **Anomaly** detection method, choose **Use Outlier** if you want to trigger alerts on outlier direction rather than anomaly detection:
Screenshot of the Monitor Type and Detection Method options in Sumo Logic's 'New Monitor' setup page. Logs is selected as the Monitor Type, and Anomaly is selected as the Detection Method. There is an option to use Outlier detection, which is currently toggled off. +### Detection Method -### Provide a query (logs and metrics only) +Select a **Detection Method**. -1. Provide a query if you are creating a log or metrics monitor type. - * Logs Monitors can have one query up to 15,000 characters long. - * Metrics Monitors can have up to six queries. When providing multiple metrics queries, use the letter labels to reference a query row. The Monitor will automatically detect the query that triggers your alert, and will mark that row with a notification bell icon. See [joined metrics queries](../../metrics/metrics-queries/metrics-explorer.md) for details.
![Screenshot of the 'New Monitor' setup page in Sumo Logic, showing the Trigger Conditions section. Metrics is selected as the Monitor Type and Static as the Detection Method. The query includes two metrics: CPU_Sys and CPU_User, with an alert condition combining both metrics (#B + #C). A bell icon is highlighted on the left side.](/img/monitors/metrics-monitor-query-row.png) -1. If you're using the **Outlier** or **Anomaly** detection method, you'll need to select the **Direction** you want to track (Up, Down, or Both).
Outlier detection direction
Anomaly detection direction - * **Up.** Only get alerted if there is an abnormal *increase* in the tracked key indicator.  - * **Down.** Only get alerted if there is an abnormal *decrease* in the tracked key indicator. - * **Both.** Get alerted if there is *any* abnormality in the data whether an increase or a decrease. +:::note logs and metrics only +There is no detection method for **SLO**. +::: - If you chose the **Static** detection method, you won't see this option. +#### Logs -### Trigger Type +Logs detection methods -Specify the **Trigger Type**. A Monitor can have one critical, warning, and missing data trigger condition, each with one or more notification destinations. Triggers have different options depending on the query and alert type. Click the **Expand** button next to the query type you're using for configuration details. +**Static** -
-Logs Trigger Types (expand to view) +Allows you to set specific threshold conditions. Use this detection method when you are alerting on KPIs that have well defined and constant thresholds for what's good and bad. For example, infrastructure metrics like CPU utilization and memory. -#### Logs Trigger Types +**Anomaly** -Screenshot of the 'New Monitor' setup page in Sumo Logic, showing the Trigger Conditions section. Logs is selected as the Monitor Type, and Static is selected as the Detection Method. The query input field is empty, waiting for a metric to be entered. +Lets you uncover unusual behavior identified by anomaly detection, which applies machine learning techniques to detect anomalies and identifies suspicious patterns of activity. This type of detection, also called [*AI-Driven Alerting*](https://www.youtube.com/watch?v=nMRoYb1YCfg), works by establishing baselines for normal behavior so you can receive alerts when deviations or unusual activities are detected. When you create a monitor using this method, it establishes a baseline for normal signal behavior, leveraging historical data to minimize false positives. AI-driven alerting overcomes monitoring limitations through: +* **Model-driven anomaly detection**. Utilizing historical data, ML models establish accurate baselines, eliminating guesswork and noise in alerts. +* **AutoML**. The system self-tunes, including seasonality detection, minimizing user intervention for a simpler experience. +* **User context**. Users set alert sensitivity and incident thresholds, adding context to anomaly detection to mitigate noise. +* **One-click playbook assignment**. Monitors seamlessly [link to Automation Service playbooks](/docs/alerts/monitors/use-playbooks-with-monitors/#create-an-anomaly-monitor-that-runs-an-automated-playbook), expediting response without manual intervention. +* **Auto-diagnosis and recovery**. Sumo Logic Automation Service automates diagnosis and resolution, closing the loop from alert to recovery. -Trigger alerts on:
![trigger alerts on field.png](/img/monitors/trigger-alerts-field.png) +If you want to trigger alerts on outlier direction rather than anomaly detection, select **Anomaly** and enable **Use Outlier**.
Screenshot of the Monitor Type and Detection Method options in Sumo Logic's 'New Monitor' setup page. Logs is selected as the Monitor Type, and Anomaly is selected as the Detection Method. There is an option to use Outlier detection, which is currently toggled off. -You can set the trigger based on the following: -* **returned row count** (default): the number of rows returned from the log search. -* A numeric field returned from the search. You can pick any numeric field from your query, and alert on the value of that field. The field is `_count` in the above screenshot. To convert a string to a number use the [num operator](/docs/search/search-query-language/search-operators/num). For example, if you have a field named **duration** you would use the num operator as follows to convert it to a number value. +#### Metrics + +Metrics detection methods + +**Static** + +Allows you to set specific threshold conditions. Use this detection method when you are alerting on KPIs that have well defined and constant thresholds for what's good and bad. For example, infrastructure metrics like CPU utilization, and memory. + +**Outlier** + +Lets you detect an unusual change or a spike in a time series of a key indicator. Use this detection method when you are alerting on KPIs that don't have well-defined constant thresholds for what's good and bad. You want the Monitor to automatically detect and alert on unusual changes or spikes on the alerting query. For example, application KPIs like page request, throughput, and latency.  + +### Query -```sh -| num(duration) -``` +In the next step, you'll need to provide a logs or metrics query. -##### Static detection method +:::note logs and metrics monitors only +No need to enter a query for **SLO** monitors. +::: + +**Logs** monitors can have one query up to 15,000 characters long. + +**Metrics** monitors can have up to six queries. When providing multiple metrics queries, use the letter labels to reference a query row. The monitor will automatically detect the query that triggers your alert, and will mark that row with a notification bell icon. See [Joined metrics queries](/docs/metrics/metrics-queries/metrics-explorer/#join-metric-queries) for details.
Screenshot of the 'New Monitor' setup page in Sumo Logic, showing the Trigger Conditions section. Metrics is selected as the Monitor Type and Static as the Detection Method. The query includes two metrics: CPU_Sys and CPU_User, with an alert condition combining both metrics (#B + #C). A bell icon is highlighted on the left side. + +### Anomaly or Outlier Direction + +If you're using the outlier or anomaly detection method, you'll need to select the **Anomaly Direction** or **Outlier Direction** you want to track (up and/or down). If you chose the **Static** detection method, you won't see this option. + +Anomaly detection direction
-**Log Trigger Type: Critical and Warning** +Outlier detection direction -![logs trigger type.png](/img/monitors/logs-trigger-type.png) +* **Up.** Only get alerted if there is an abnormal *increase* in the tracked key indicator.  +* **Down.** Only get alerted if there is an abnormal *decrease* in the tracked key indicator. +* **Both.** Get alerted if there is *any* abnormality in the data whether an increase or a decrease. + + +### Trigger Type (Logs) + +Next, you'll need to configure the **Trigger Type** for logs or [metrics](#trigger-type-metrics). Trigger alerts on:
trigger alerts on field + +You can set the trigger based on the following: +* A **returned row count** (default), which is the number of rows returned from the log search. +* A numeric field returned from the search. You can pick any numeric field from your query, and alert on the value of that field. The field is `_count` in the above screenshot. To convert a string to a number use the [`num` operator](/docs/search/search-query-language/search-operators/num). For example, if you have a field named **duration**, you would use the `num` operator as follows to convert it to a number value. + ```sh + | num(duration) + ``` + +#### Static detection method + +**Example: Log Trigger Type - Critical and Warning** + +logs trigger type.png `Alert when returned row count is within
+metrics query.png -
-Metrics Trigger Types (expand to view) +#### Static detection method -#### Metrics Trigger Types +**Example: Metrics Trigger Type - Critical and Warning** -![metrics query.png](/img/monitors/metrics-query.png) - - -##### Static detection method - -**Metrics Trigger Type: Critical and Warning** - -![metrics trigger types.png](/img/monitors/metrics-trigger-types.png) +metrics trigger types.png `Alert when result is
## Step 2. Advanced settings (optional) -The second step when you create a new monitor is to configure advanced settings. +Configure advanced settings like the Alert Name and Evaluation Delay. Screenshot of the Advanced Settings section in Sumo Logic's 'New Monitor' setup page. It includes options to use the monitor name or customize the alert name, and an evaluation delay slider set to 0 seconds with a maximum of 120 minutes. ### Alert Name Alert Name allows you to customize the name that appears on the Alert page. By default, the Alert name is the monitor name, but you may want to create a custom name based on your use case. You can include any of the available alert variables, except `{{AlertName}}`, `Playbook`, `{{AlertResponseURL}}`, and `{{ResultsJson}}`, in the name such as the type of monitor or trigger condition. You can check the alert variables list for details. - * Example: `{{Resultsjson.Env}}` - High CPU. This alert will produce an Alert with the name like PROD - High CPU. Here we are assuming that there is a field name Env in underlying data that has a value of "PROD". + +Example: `{{Resultsjson.Env}}` - High CPU. This alert will produce an Alert with the name like PROD - High CPU. Here we are assuming that there is a field name Env in underlying data that has a value of "PROD". ### Evaluation Delay -Collection delays may occur due to your environment and it takes a couple of minutes for data to be processed into Sumo Logic. Since Monitors run on data from the most current time period, it's possible for Monitors to evaluate against incomplete data. As a result, Monitors can generate false positives or negatives that can cause confusion. Set an evaluation delay in seconds to delay the evaluation of a Monitor, so it doesn't look at the most current time (where data can be incomplete) and instead looks at an older period of time, where you have more complete data.
![additional settings evaluation delay.png](/img/monitors/additional-settings-evaluation-delay.png)
If your data is coming from the [Amazon CloudWatch Source for Metrics](/docs/send-data/hosted-collectors/amazon-aws/amazon-cloudwatch-source-metrics.md) we recommend a setting of 900 seconds. -## Step 3. Notifications (optional) +Use this setting to delay monitor processing by the specified interval of time to account for data ingestion delays. It delays the evaluation of a monitor so that it doesn't look at the most current time (where data can be incomplete) and instead looks at an older period of time, where you have more complete data. -The third step when you create a new monitor is to configure notifications. +Collection delays may occur due to your environment and it takes a couple of minutes for data to be processed into Sumo Logic. Since monitors run on data from the most current time period, it's possible for them to evaluate against incomplete data. As a result, monitors can generate false positives or negatives that can cause confusion. -Screenshot of the Notifications section in Sumo Logic's 'New Monitor' setup page. It includes an option to select the preferred notification time zone, set to (GMT-06:00) America/Chicago. Below is a section to configure connection types for notifications, with options for Critical, Alert, Recovery, Warning, and Missing Data. There is also a button to add a new notification. +:::note +If your data is coming from the [Amazon CloudWatch Source for Metrics](/docs/send-data/hosted-collectors/amazon-aws/amazon-cloudwatch-source-metrics), we recommend a setting of 900 seconds. +::: -When a trigger condition is met, you can send notifications to other people and services. Metrics monitors have an option to send notifications either as a group or separately. **Group Notifications** define whether you want single notifications per time series that match the Monitor query or you want group notifications where you receive a single notification for the entire Monitor. Log monitors always group notifications. +## Step 3. Notifications (optional) + +Configure who gets notified when the monitor triggers an alert. When a trigger condition is met, you can send notifications to other people and services. Metrics monitors have an option to send notifications either as a group or separately. **Group Notifications** define whether you want single notifications per time series that match the Monitor query or you want group notifications where you receive a single notification for the entire Monitor. Log monitors always group notifications. -To add notifications, click the **Add Notification** button. You can add more than one notification channel for a Monitor. +Screenshot of the Notifications section in Sumo Logic's 'New Monitor' setup page. It includes an option to select the preferred notification time zone, set to (GMT-06:00) America/Chicago. Below is a section to configure connection types for notifications, with options for Critical, Alert, Recovery, Warning, and Missing Data. There is also a button to add a new notification. -1. Set your **Preferred Notification Time Zone** for your monitor's alert notifications. If you do not select anything, it will default to the time zone specified in your user preferences. -1. The **Connection Type** specifies the notification channel where you want to get notified, such as an email or webhook. See [Connections](/docs/alerts/webhook-connections) for details. Monitor notifications support variables to reference its configuration settings or your raw data. 
See [alert variables](/docs/alerts/monitors/alert-variables) for a table of the available variables. +* **Preferred Notification Time Zone**. Set the time zone for your alert notifications. If you do not select anything, it will default to the time zone specified in your user preferences. +* **Connection Type**. Choose the [connection](/docs/alerts/webhook-connections) notification method (e.g., email, Webhook, PagerDuty). Monitor notifications support [Alert Variables](/docs/alerts/monitors/alert-variables) to reference its configuration settings or your raw data. * **Email**. Provide 1-100 recipient email addresses. You can customize the email subject and body. * **Webhook**. By default, the payload defined on the Connection is used. You can customize your payload for each notification if needed. -1. Select the **Alert** and **Recovery** checkboxes for each trigger type based on when you want to send a notification.  You can have different Trigger Conditions send a notification to different channels. For example, you can get notified on PagerDuty for critical Incidents and get an email or Slack notification for warning incidents. - * If your connection type is Lambda, Microsoft Teams, OpsGenie, PagerDuty, Slack, or a generic webhook, the **Recovery** checkbox enables an automatic resolution process that updates the connection when an alert has recovered within Sumo Logic. Support for other connection types is coming soon. - * **Add Notifications** to add additional notification channels as needed. You can configure different notifications for each trigger type, critical, warning, and missing data. +* **Trigger Type Notifications**. Set different notification channels for each trigger type (**Critical**, **Warning**, **Missing Data**). Select the **Alert** and **Recovery** checkboxes for each trigger type based on when you want to send a notification. You can have different Trigger Conditions send a notification to different channels. For example, you can get notified on PagerDuty for critical Incidents and get an email or Slack notification for warning incidents. + * For the connection types listed [here](/docs/alerts/webhook-connections), you can use the **Recovery** checkbox to enable an automatic resolution process that updates the connection when an alert has recovered within Sumo Logic. + * Optionally, you can click **Add Notifications** to add more notification channels. and configure different notifications for each trigger type (critical, warning, and missing data). -## Step 4. Playbook (optional) -The fourth step when you create a new monitor is to add playbooks. +## Step 4. Playbook (optional) Screenshot of the Playbook section in Sumo Logic's 'New Monitor' setup page. It includes a Text Playbook field with a placeholder 'Click here to start typing' and a note indicating that Markdown is supported. Below, there is a dropdown menu to select an automated playbook, with options to add or manage playbooks -In this step, you can add a **Playbook** to run in response to an alert. +In this step, you can add a playbook to run in response to an alert. -1. **Text Playbook**. Enter instructions for how to handle the alerts resulting from the monitor. This allows admins to codify tribal knowledge for an on-call so that they know what to do upon receiving an alert. Markdown is supported. For an example, see [Alert details](/docs/alerts/monitors/alert-response/#alert-details). -1. **Automated Playbooks**. Select an existing playbook from the Automation Service to run when an alert is fired. 
For more information, see [Automated Playbooks in Monitors](/docs/alerts/monitors/use-playbooks-with-monitors/). -1. **Add Playbook**. If desired, you can add more automated playbooks to run sequentially. -1. Click **Manage Playbooks** to manage the automated playbooks in the Automation Service. +* **Text Playbook**. Provide manual instructions to handle alerts resulting from the monitor. This allows admins to codify tribal knowledge for an on-call so that they know what to do upon receiving an alert. Markdown is supported. For an example, see [Alert details](/docs/alerts/monitors/alert-response/#alert-details). +* **Automated Playbooks**. Select from existing automated playbooks in the [Automation Service](/docs/platform-services/automation-service) to run when an alert is fired. For more information, see [Automated Playbooks in Monitors](/docs/alerts/monitors/use-playbooks-with-monitors/). Optionally, you can click **Add Playbook** to add more automated playbooks to run sequentially, and **Manage Playbooks** to manage the automated playbooks in the Automation Service. ## Step 5. Monitor details -In this step, you'll configure monitor details. +Finalize your monitor by configuring its details. **Monitor Name** gives your monitor a name and **Location** where the monitor will be saved. Monitor details modal -1. Enter a **Monitor Name** and the **Location** where you want to save it. -1. (Optional) Add one or more **Tags**. [Learn more here](/docs/alerts/monitors/settings#tags). -1. (Optional) Add a **Description**. -1. When you're done creating the monitor, click **Save**. +Optionally, you can add [**Tags**](/docs/alerts/monitors/settings#tags) to organize your monitors and/or a **Description**. ## Other configurations diff --git a/docs/alerts/monitors/monitor-faq.md b/docs/alerts/monitors/monitor-faq.md index 654f59be2c..a4b4be71ba 100644 --- a/docs/alerts/monitors/monitor-faq.md +++ b/docs/alerts/monitors/monitor-faq.md @@ -4,6 +4,8 @@ title: Monitors FAQ description: Frequently asked questions about Sumo Logic monitors. --- +import AlertsTimeslice from '../../reuse/alerts-timeslice.md'; + ## Can I convert my existing Scheduled Search to a monitor? Yes, however, it's a manual process. You have to create a new monitor with the appropriate query and alerting condition based on your existing Scheduled Search. See the [differences between monitors and Scheduled Searches](/docs/alerts/difference-from-scheduled-searches) before you consider converting. @@ -16,7 +18,7 @@ For example, instead of creating one monitor to alert on CPU utilization, you co ## Why does my monitor get automatically disabled?  -Sumo Logic will automatically disable a monitor if it violates specific limitations. You can check the reason it was disabled with the [System Event Index](/docs/manage/security/audit-indexes/system-event-index.md). The following query will search the System Event Index for the reason: +Sumo Logic will automatically disable a monitor if it violates specific limitations. You can check the reason it was disabled with the [System Event Index](/docs/manage/security/audit-indexes/system-event-index.md). The following query will search the System Event Index for the reason: ```sql _index=sumologic_system_events MonitorSystemDisabled @@ -54,6 +56,11 @@ For the best experience, we recommend being mindful of the number of monitors yo Yes, you can use [Alert Variables](/docs/alerts/monitors/alert-variables) to reference various monitor configurations in your custom payload. 
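For example, a minimal custom webhook payload sketch that references a few of those variables could look like the following. The `service` field in `{{ResultsJson.service}}` is a hypothetical field name; substitute a field that your monitor query actually returns.

```json
{
  "alert": "{{Name}}",
  "time_range": "{{TriggerTimeRange}}",
  "query": "{{Query}}",
  "result_count": "{{NumQueryResults}}",
  "service": "{{ResultsJson.service}}",
  "alert_url": "{{AlertResponseUrl}}",
  "playbook": "{{Playbook}}"
}
```

When the monitor triggers, each variable is replaced with its runtime value before the payload is delivered to the connection.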
+ +## How does a timeslice affect a monitor? + + + ## Does Sumo Logic let me get alerts from a specific static IP address that I can allowlist? Yes, Sumo Logic provides webhook notifications through static IP addresses. You can allowlist those IP addresses to receive notifications directly from Sumo Logic. For a list of our allowlist addresses, contact [Support](https://support.sumologic.com/support/s). diff --git a/docs/alerts/monitors/settings.md b/docs/alerts/monitors/settings.md index 59abd54096..069026dc19 100644 --- a/docs/alerts/monitors/settings.md +++ b/docs/alerts/monitors/settings.md @@ -12,7 +12,7 @@ The **Monitors** page allows you to view, create, manage, and organize your moni [**Classic UI**](/docs/get-started/sumo-logic-ui-classic). To access the **Monitors** page, in the main Sumo Logic menu select **Manage Data > Monitoring > Monitors**. [**New UI**](/docs/get-started/sumo-logic-ui/). To access the Monitors page, in the main Sumo Logic menu select **Alerts > Monitors**. You can also click the **Go To...** menu at the top of the screen and select **Monitors**. - + The page displays the following information: * **Name**. Name of the monitor. @@ -69,7 +69,7 @@ To view the thresholds translating values in your metrics explorer, follow the s 1. To view the values on chart, you may need to change the window time range in the graph to some other as the default is 15 minutes.
thresholds-graph :::note -Note that the same threshold translating functionality supports to [Creating Monitor from the Metrics Explorer](/docs/alerts/monitors/create-monitor/#from-your-metrics-explorer) and [Opening Alerts Response Page in the Metrics Explorer](/docs/alerts/monitors/alert-response/#translating-thresholds) +Note that the same threshold translating functionality supports to [Creating Monitor from the Metrics Explorer](/docs/alerts/monitors/create-monitor/#from-metrics-explorer) and [Opening Alerts Response Page in the Metrics Explorer](/docs/alerts/monitors/alert-response/#translating-thresholds) ::: ## Edit, Disable, More Actions diff --git a/docs/alerts/monitors/use-playbooks-with-monitors.md b/docs/alerts/monitors/use-playbooks-with-monitors.md index 0176d0ae4f..c807987502 100644 --- a/docs/alerts/monitors/use-playbooks-with-monitors.md +++ b/docs/alerts/monitors/use-playbooks-with-monitors.md @@ -1,7 +1,7 @@ --- id: use-playbooks-with-monitors title: Automated Playbooks in Monitors -sidebar_label: Automated Playbooks in Monitors +sidebar_label: Using Automated Playbooks description: Learn how to use Automation Service playbooks with monitors. --- @@ -105,7 +105,7 @@ To create an anomaly monitor that runs an automated playbook in response to an a 1. Go to [Step 1: Trigger Conditions](/docs/alerts/monitors/create-monitor/#step-1-set-trigger-conditions) in the **New Monitor** window. 1. Select the **Logs** monitor type. 1. Select **Anomaly** under **Detection Method**.
(Note that **Outlier** monitors are under **Anomaly** because they use anomaly detection on in-query data.)
Anomaly detection method -1. In **Query**, [provide a query](/docs/alerts/monitors/create-monitor/#provide-a-query-logs-and-metrics-only) for the logs to be monitored for anomalous behavior. +1. In **Query**, [provide a query](/docs/alerts/monitors/create-monitor/#query) for the logs to be monitored for anomalous behavior. 1. In the **Critical** tab under **Trigger Type**, select the parameters for the alert trigger: * **Alert when anomaly count is at least ___ (max. 5) at any time within ___**. Enter the minimum number of anomalies to detect during the detection window before triggering an alert, and the duration of time to watch for anomalies (from 5 minutes to 24 hours). Ensure that the time period window is 5-10 times longer than the timeslice used in the log query. This setting helps you add context to anomaly detection. For example, if you know a particular signal is noisy, you may want to wait for a number of anomalous data points in the detection window before triggering an alert. If the time period is set to 5 minutes, and the minimum anomaly count is set to 1, then an alert is triggered if 1 anomaly appears within a 5-minute time period. * **Show me fewer alerts --- more alerts**. Tune the number of anomalous data points detected per day compared to the predicted baseline for the detection window. Select more alerts if you do not want to miss out on most anomalies. diff --git a/docs/get-started/ai-machine-learning.md b/docs/get-started/ai-machine-learning.md index 1cfc12bd09..8303d1f865 100644 --- a/docs/get-started/ai-machine-learning.md +++ b/docs/get-started/ai-machine-learning.md @@ -63,7 +63,7 @@ LogCompare simplifies log analysis by enabling easy comparison of log data from #### Anomaly Detection -[Anomaly Detection](/docs/alerts/monitors/create-monitor/#select-monitor-type-and-detection-method), powered by machine learning, efficiently flags suspicious activities by establishing baseline behavior and minimizing false positives. It also automatically fine-tunes anomaly detection with minimal user input, and you can associate it with a playbook to link anomaly responses with monitors, streamlining incident response. +[Anomaly Detection](/docs/alerts/monitors/create-monitor/#step-1-set-trigger-conditions), powered by machine learning, efficiently flags suspicious activities by establishing baseline behavior and minimizing false positives. It also automatically fine-tunes anomaly detection with minimal user input, and you can associate it with a playbook to link anomaly responses with monitors, streamlining incident response. #### Automated playbooks diff --git a/docs/reuse/alerts-timeslice.md b/docs/reuse/alerts-timeslice.md index 446e67a941..fce8a625b3 100644 --- a/docs/reuse/alerts-timeslice.md +++ b/docs/reuse/alerts-timeslice.md @@ -1,8 +1,3 @@ -
diff --git a/docs/reuse/alerts-timeslice.md b/docs/reuse/alerts-timeslice.md
index 446e67a941..fce8a625b3 100644
--- a/docs/reuse/alerts-timeslice.md
+++ b/docs/reuse/alerts-timeslice.md
@@ -1,8 +1,3 @@
-
-How does a timeslice affect a monitor?
-
-Monitor query output is matched with the configured threshold during its evaluation. If it matches, the alert triggers. If there are multiple rows in the search query output because of [`timeslice`](/docs/search/search-query-language/search-operators/timeslice) or any other reason (such as a [`group by`](/docs/search/search-query-language/group-aggregate-operators) operator), it would match each row with the monitor threshold and if it matches for any row, it would trigger the alert. So if the query is `_sourceCategory=abc | timeslice 1m | count by _timeslice`, the timeRange is `15m`, and there are 15 rows in the query output, it would trigger the alert if `_count` for any row matches the threshold and resolve when none of the rows match the alert threshold (and all match resolution threshold).
-
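The removed example above can also be read as the following query sketch. Everything here restates that example: `abc` is its placeholder source category, and with a 15-minute monitor time range the output contains up to 15 rows, one `_count` per 1-minute timeslice, each compared against the monitor threshold independently.

```
// Restatement of the example removed above; `abc` is the placeholder source category from that text.
// Over a 15m monitor time range this yields up to 15 rows (one per 1m timeslice).
// The monitor triggers if any row's _count crosses the alert threshold, and resolves
// only when no row crosses it (and all rows satisfy the resolution threshold).
_sourceCategory=abc
| timeslice 1m
| count by _timeslice
```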
diff --git a/docs/search/search-query-language/search-operators/timeslice.md b/docs/search/search-query-language/search-operators/timeslice.md
index b629a1ec0b..53fec50076 100644
--- a/docs/search/search-query-language/search-operators/timeslice.md
+++ b/docs/search/search-query-language/search-operators/timeslice.md
@@ -67,8 +67,9 @@ For example, in Australia, DST goes into effect on October 2nd for Spring. For t
In another example, if you had a 4h timeslice, you would usually see results at 12 a.m., 4 a.m., 8 a.m., 12 p.m., etc. But when the DST happens, the result after 12 a.m. could be either 3 a.m. or 5 a.m., depending on Fall or Spring.

-
+#### How does a timeslice affect a monitor?
+

### Basic examples

diff --git a/i18n/ja/alerts/alerts/monitors/index.md b/i18n/ja/alerts/alerts/monitors/index.md
index 091cdda93a..71f6c8bd6a 100644
--- a/i18n/ja/alerts/alerts/monitors/index.md
+++ b/i18n/ja/alerts/alerts/monitors/index.md
@@ -188,8 +188,6 @@ import TabItem from '@theme/TabItem';

-![Logs monitors.png](/img/monitors/logs-monitors.png)
-
Trigger alerts on:

![trigger alerts on field.png](/img/monitors/trigger-alerts-field.png)

diff --git a/static/img/monitors/create-monitor.png b/static/img/monitors/create-monitor.png
index c672a6875c..cb14f2f3f9 100644
Binary files a/static/img/monitors/create-monitor.png and b/static/img/monitors/create-monitor.png differ

diff --git a/static/img/monitors/edit-recovery-settings1.png b/static/img/monitors/edit-recovery-settings1.png
index b224979724..247b8e0a92 100644
Binary files a/static/img/monitors/edit-recovery-settings1.png and b/static/img/monitors/edit-recovery-settings1.png differ

diff --git a/static/img/monitors/home-metrics.png b/static/img/monitors/home-metrics.png
new file mode 100644
index 0000000000..c1dae27dd5
Binary files /dev/null and b/static/img/monitors/home-metrics.png differ

diff --git a/static/img/monitors/logs-missing-data.png b/static/img/monitors/logs-missing-data.png
index 88142fa5db..ece5e05441 100644
Binary files a/static/img/monitors/logs-missing-data.png and b/static/img/monitors/logs-missing-data.png differ

diff --git a/static/img/monitors/logs-monitors.png b/static/img/monitors/logs-monitors.png
deleted file mode 100644
index 887203cb00..0000000000
Binary files a/static/img/monitors/logs-monitors.png and /dev/null differ

diff --git a/static/img/monitors/metrics-explorer-thresholds.png b/static/img/monitors/metrics-explorer-thresholds.png
index 2bbf2f6aec..4ba8133a4d 100644
Binary files a/static/img/monitors/metrics-explorer-thresholds.png and b/static/img/monitors/metrics-explorer-thresholds.png differ

diff --git a/static/img/monitors/metrics-explorer-view.png b/static/img/monitors/metrics-explorer-view.png
index 5df3d29090..cb6dbd575c 100644
Binary files a/static/img/monitors/metrics-explorer-view.png and b/static/img/monitors/metrics-explorer-view.png differ

diff --git a/static/img/monitors/metrics-monitor-query-row.png b/static/img/monitors/metrics-monitor-query-row.png
index cb96105408..d0e910b06a 100644
Binary files a/static/img/monitors/metrics-monitor-query-row.png and b/static/img/monitors/metrics-monitor-query-row.png differ

diff --git a/static/img/monitors/metrics-query.png b/static/img/monitors/metrics-query.png
index e588558869..e7ea72f875 100644
Binary files a/static/img/monitors/metrics-query.png and b/static/img/monitors/metrics-query.png differ

diff --git a/static/img/monitors/metrics-trigger-types.png b/static/img/monitors/metrics-trigger-types.png
index 8fb77a54db..cbf6003c8b 100644
Binary files a/static/img/monitors/metrics-trigger-types.png and b/static/img/monitors/metrics-trigger-types.png differ

diff --git a/static/img/monitors/missing.png b/static/img/monitors/missing.png
index 8c1cfaf931..8e3fa29733 100644
Binary files a/static/img/monitors/missing.png and b/static/img/monitors/missing.png differ

diff --git a/static/img/monitors/monitor-detection-methods-for-logs.png b/static/img/monitors/monitor-detection-methods-for-logs.png
index cc9f900de1..6de74c4d4c 100644
Binary files a/static/img/monitors/monitor-detection-methods-for-logs.png and b/static/img/monitors/monitor-detection-methods-for-logs.png differ

diff --git a/static/img/monitors/monitor-detection-methods-for-metrics.png b/static/img/monitors/monitor-detection-methods-for-metrics.png
index d72a6a14a4..873e69f84e 100644
Binary files a/static/img/monitors/monitor-detection-methods-for-metrics.png and b/static/img/monitors/monitor-detection-methods-for-metrics.png differ

diff --git a/static/img/monitors/monitor-detector-types-for-anamoly.png b/static/img/monitors/monitor-detector-types-for-anamoly.png
deleted file mode 100644
index 379c95b725..0000000000
Binary files a/static/img/monitors/monitor-detector-types-for-anamoly.png and /dev/null differ

diff --git a/static/img/monitors/monitor-detector-types-for-anomaly.png b/static/img/monitors/monitor-detector-types-for-anomaly.png
new file mode 100644
index 0000000000..4e3f9ad6fb
Binary files /dev/null and b/static/img/monitors/monitor-detector-types-for-anomaly.png differ

diff --git a/static/img/monitors/monitor-metrics-outlier-triggers.png b/static/img/monitors/monitor-metrics-outlier-triggers.png
index b95f561df3..3f9f2272e7 100644
Binary files a/static/img/monitors/monitor-metrics-outlier-triggers.png and b/static/img/monitors/monitor-metrics-outlier-triggers.png differ

diff --git a/static/img/monitors/monitor-outlier-logs.png b/static/img/monitors/monitor-outlier-logs.png
index 9e8ccbcb1f..acac3e3ed6 100644
Binary files a/static/img/monitors/monitor-outlier-logs.png and b/static/img/monitors/monitor-outlier-logs.png differ

diff --git a/static/img/monitors/new-monitor-advanced-settings.png b/static/img/monitors/new-monitor-advanced-settings.png
index 52bfa42057..2a4f6d18f6 100644
Binary files a/static/img/monitors/new-monitor-advanced-settings.png and b/static/img/monitors/new-monitor-advanced-settings.png differ

diff --git a/static/img/monitors/new-monitor-dialog.png b/static/img/monitors/new-monitor-dialog.png
deleted file mode 100644
index fb767ac08b..0000000000
Binary files a/static/img/monitors/new-monitor-dialog.png and /dev/null differ

diff --git a/static/img/monitors/new-monitor-set-trigger-conditions.png b/static/img/monitors/new-monitor-set-trigger-conditions.png
deleted file mode 100644
index 6a0ce3be6a..0000000000
Binary files a/static/img/monitors/new-monitor-set-trigger-conditions.png and /dev/null differ

diff --git a/static/img/monitors/new-monitor-window.png b/static/img/monitors/new-monitor-window.png
index b7f745cafc..cb0233e147 100644
Binary files a/static/img/monitors/new-monitor-window.png and b/static/img/monitors/new-monitor-window.png differ

diff --git a/static/img/monitors/trigger-alerts-field.png b/static/img/monitors/trigger-alerts-field.png
index a6ae4fdbaf..6fcc842e93 100644
Binary files a/static/img/monitors/trigger-alerts-field.png and b/static/img/monitors/trigger-alerts-field.png differ