Commit

Merge branch 'main' into issue-6038-alerts-bugs
nastasha-solomon authored Dec 18, 2024
2 parents 7652062 + 19e3484 commit 28ef252
Showing 53 changed files with 421 additions and 219 deletions.
15 changes: 5 additions & 10 deletions .github/ISSUE_TEMPLATE/docs-request-internal.yaml
Original file line number Diff line number Diff line change
@@ -44,19 +44,14 @@ body:
default: 0
validations:
required: true
- type: dropdown
- type: textarea
id: version-ess
attributes:
label: ESS release
description: Select a release version if your request is tied to the Elastic Stack release schedule.
options:
- '8.12'
- '8.13'
- '8.14'
- '8.15'
- '8.16'
- 'N/A'
default: 0
description: Please provide a release version if your request is tied to the Elastic Stack release schedule.
placeholder: |
For example:
"The functionality is being introduced in ESS version 8.18.0"
validations:
required: true
- type: input
32 changes: 6 additions & 26 deletions .mergify.yml
@@ -29,7 +29,6 @@ pull_request_rules:
- name: backport patches to 8.x branch
conditions:
- merged
- base=main
- label=v8.18.0
actions:
backport:
@@ -43,7 +42,6 @@ pull_request_rules:
- name: backport patches to 8.17 branch
conditions:
- merged
- base=main
- label=v8.17.0
actions:
backport:
@@ -57,7 +55,6 @@ pull_request_rules:
- name: backport patches to 8.16 branch
conditions:
- merged
- base=main
- label=v8.16.0
actions:
backport:
@@ -71,7 +68,6 @@ pull_request_rules:
- name: backport patches to 8.15 branch
conditions:
- merged
- base=main
- label=v8.15.0
actions:
backport:
@@ -85,7 +81,6 @@ pull_request_rules:
- name: backport patches to 8.14 branch
conditions:
- merged
- base=main
- label=v8.14.0
actions:
backport:
@@ -99,7 +94,6 @@ pull_request_rules:
- name: backport patches to 8.13 branch
conditions:
- merged
- base=main
- label=v8.13.0
actions:
backport:
@@ -113,7 +107,6 @@ pull_request_rules:
- name: backport patches to 8.12 branch
conditions:
- merged
- base=main
- label=v8.12.0
actions:
backport:
@@ -127,7 +120,6 @@ pull_request_rules:
- name: backport patches to 8.11 branch
conditions:
- merged
- base=main
- label=v8.11.0
actions:
backport:
@@ -141,7 +133,6 @@ pull_request_rules:
- name: backport patches to 8.10 branch
conditions:
- merged
- base=main
- label=v8.10.0
actions:
backport:
@@ -155,7 +146,6 @@ pull_request_rules:
- name: backport patches to 8.9 branch
conditions:
- merged
- base=main
- label=v8.9.0
actions:
backport:
@@ -169,7 +159,6 @@ pull_request_rules:
- name: backport patches to 8.8 branch
conditions:
- merged
- base=main
- label=v8.8.0
actions:
backport:
@@ -183,7 +172,6 @@ pull_request_rules:
- name: backport patches to 8.7 branch
conditions:
- merged
- base=main
- label=v8.7.0
actions:
backport:
@@ -197,7 +185,6 @@ pull_request_rules:
- name: backport patches to 8.6 branch
conditions:
- merged
- base=main
- label=v8.6.0
actions:
backport:
@@ -211,7 +198,6 @@ pull_request_rules:
- name: backport patches to 8.5 branch
conditions:
- merged
- base=main
- label=v8.5.0
actions:
backport:
@@ -225,7 +211,6 @@ pull_request_rules:
- name: backport patches to 8.4 branch
conditions:
- merged
- base=main
- label=v8.4.0
actions:
backport:
@@ -239,7 +224,6 @@ pull_request_rules:
- name: backport patches to 8.3 branch
conditions:
- merged
- base=main
- label=v8.3.0
actions:
backport:
@@ -253,7 +237,6 @@ pull_request_rules:
- name: backport patches to 8.2 branch
conditions:
- merged
- base=main
- label=v8.2.0
actions:
backport:
@@ -267,7 +250,6 @@ pull_request_rules:
- name: backport patches to 8.1 branch
conditions:
- merged
- base=main
- label=v8.1.0
actions:
backport:
@@ -278,31 +260,29 @@ pull_request_rules:
title: "[{{ destination_branch }}] {{ title }} (backport #{{ number }})"
labels:
- backport
- name: backport patches to 7.17 branch
- name: backport patches to 8.0 branch
conditions:
- merged
- base=main
- label=v7.17.0
- label=v8.0.0
actions:
backport:
assignees:
- "{{ author }}"
branches:
- "7.17"
- "8.0"
title: "[{{ destination_branch }}] {{ title }} (backport #{{ number }})"
labels:
- backport
- name: backport patches to 8.0 branch
- name: backport patches to 7.17 branch
conditions:
- merged
- base=main
- label=v8.0.0
- label=v7.17.0
actions:
backport:
assignees:
- "{{ author }}"
branches:
- "8.0"
- "7.17"
title: "[{{ destination_branch }}] {{ title }} (backport #{{ number }})"
labels:
- backport
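
All of the rules changed above share one template; the commit's substantive change is deleting the `- base=main` condition from each, so the rules now fire on merged PRs regardless of base branch. A minimal sketch of the shared rule shape, with `X.Y` as a placeholder version (assuming Mergify's standard `pull_request_rules` schema):

```yaml
pull_request_rules:
  - name: backport patches to X.Y branch
    conditions:
      - merged            # act only after the PR has merged
      - label=vX.Y.0      # maintainers opt in by applying the version label
    actions:
      backport:
        assignees:
          - "{{ author }}"   # assign the backport PR to the original author
        branches:
          - "X.Y"            # target release branch
        title: "[{{ destination_branch }}] {{ title }} (backport #{{ number }})"
        labels:
          - backport
```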
36 changes: 20 additions & 16 deletions docs/AI-for-security/connect-to-byo.asciidoc
@@ -10,7 +10,7 @@ This page provides instructions for setting up a connector to a large language m

This example uses a single server hosted in GCP to run the following components:

* LM Studio with the https://mistral.ai/technology/#models[Mixtral-8x7b] model
* LM Studio with the https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407[Mistral-Nemo-Instruct-2407] model
* A reverse proxy using Nginx to authenticate to Elastic Cloud

image::images/lms-studio-arch-diagram.png[Architecture diagram for this guide]
@@ -20,7 +20,7 @@ NOTE: For testing, you can use alternatives to Nginx such as https://learn.micro
[discrete]
== Configure your reverse proxy

NOTE: If your Elastic instance is on the same host as LM Studio, you can skip this step.
NOTE: If your Elastic instance is on the same host as LM Studio, you can skip this step. Also, check out our https://www.elastic.co/blog/herding-llama-3-1-with-elastic-and-lm-studio[blog post] that walks through the whole process of setting up a single-host implementation.

You need to set up a reverse proxy to enable communication between LM Studio and Elastic. For more complete instructions, refer to a guide such as https://www.digitalocean.com/community/tutorials/how-to-configure-nginx-as-a-reverse-proxy-on-ubuntu-22-04[this one].

@@ -74,7 +74,14 @@ server {
}
--------------------------------------------------

IMPORTANT: If using the example configuration file above, you must replace several values: Replace `<secret token>` with your actual token, and keep it safe since you'll need it to set up the {elastic-sec} connector. Replace `<yourdomainname.com>` with your actual domain name. Update the `proxy_pass` value at the bottom of the configuration if you decide to change the port number in LM Studio to something other than 1234.
[IMPORTANT]
====
If using the example configuration file above, you must replace several values:

* Replace `<secret token>` with your actual token, and keep it safe since you'll need it to set up the {elastic-sec} connector.
* Replace `<yourdomainname.com>` with your actual domain name.
* Update the `proxy_pass` value at the bottom of the configuration if you decide to change the port number in LM Studio to something other than 1234.
====
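
The server block this diff modifies is truncated here (only its closing braces survive). For orientation, a minimal sketch of a reverse proxy of this kind — the directives, certificate paths, and header check below are illustrative assumptions, not the guide's exact file; `<yourdomainname.com>` and `<secret token>` are the same placeholders the IMPORTANT note describes:

```nginx
server {
    listen 443 ssl;
    server_name <yourdomainname.com>;

    ssl_certificate     /etc/letsencrypt/live/<yourdomainname.com>/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/<yourdomainname.com>/privkey.pem;

    location / {
        # Reject requests that do not present the shared secret token
        if ($http_authorization != "Bearer <secret token>") {
            return 401;
        }
        # Forward authenticated traffic to LM Studio's default port (1234)
        proxy_pass http://localhost:1234/;
    }
}
```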

[discrete]
=== (Optional) Set up performance monitoring for your reverse proxy
@@ -85,23 +92,20 @@ You can use Elastic's {integrations-docs}/nginx[Nginx integration] to monitor pe

First, install https://lmstudio.ai/[LM Studio]. LM Studio supports the OpenAI SDK, which makes it compatible with Elastic's OpenAI connector, allowing you to connect to any model available in the LM Studio marketplace.

One current limitation of LM Studio is that when it is installed on a server, you must launch the application using its GUI before doing so using the CLI. For example, by using Chrome RDP with an https://cloud.google.com/architecture/chrome-desktop-remote-on-compute-engine[X Window System]. After you've opened the application the first time using the GUI, you can start it by using `sudo lms server start` in the CLI.
You must launch the application using its GUI before doing so using the CLI. For example, use Chrome RDP with an https://cloud.google.com/architecture/chrome-desktop-remote-on-compute-engine[X Window System]. After you've opened the application the first time using the GUI, you can start it by using `sudo lms server start` in the CLI.

Once you've launched LM Studio:

1. Go to LM Studio's Search window.
2. Search for an LLM (for example, `Mixtral-8x7B-instruct`). Your chosen model must include `instruct` in its name in order to work with Elastic.
3. Filter your search for "Compatibility Guess" to optimize results for your hardware. Results will be color coded:
* Green means "Full GPU offload possible", which yields the best results.
* Blue means "Partial GPU offload possible", which may work.
* Red for "Likely too large for this machine", which typically will not work.
2. Search for an LLM (for example, `Mistral-Nemo-Instruct-2407`). Your chosen model must include `instruct` in its name in order to work with Elastic.
3. After you find a model, view download options and select a recommended version (green). For best performance, select one with the thumbs-up icon that indicates good performance on your hardware.
4. Download one or more models.

IMPORTANT: For security reasons, before downloading a model, verify that it is from a trusted source. It can be helpful to review community feedback on the model (for example using a site like Hugging Face).

image::images/lms-model-select.png[The LM Studio model selection interface]

In this example we used https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF[`TheBloke/Mixtral-8x7B-Instruct-v0.1.Q3_K_M.gguf`]. It has 46.7B total parameters, a 32,000 token context window, and uses GGUF https://huggingface.co/docs/transformers/main/en/quantization/overview[quantization]. For more information about model names and formats, refer to the following table.
In this example we used https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407[`mistralai/Mistral-Nemo-Instruct-2407`]. It has 12B total parameters, a 128,000 token context window, and uses GGUF https://huggingface.co/docs/transformers/main/en/quantization/overview[quantization]. For more information about model names and formats, refer to the following table.

[cols="1,1,1,1", options="header"]
|===
@@ -124,18 +128,18 @@ After downloading a model, load it in LM Studio using the GUI or LM Studio's htt
[discrete]
=== Option 1: load a model using the CLI (Recommended)

It is a best practice to download models from the marketplace using the GUI, and then load or unload them using the CLI. The GUI allows you to search for models, whereas the CLI only allows you to import specific paths, but the CLI provides a good interface for loading and unloading.
It is a best practice to download models from the marketplace using the GUI, and then load or unload them using the CLI. The GUI lets you browse and search for models interactively, while the CLI offers `lms get` for searching and provides a good interface for loading and unloading.

Use the following commands in your CLI:
Once you've downloaded a model, use the following commands in your CLI:

1. Verify LM Studio is installed: `lms`
2. Check LM Studio's status: `lms status`
3. List all downloaded models: `lms ls`
4. Load a model: `lms load`
4. Load a model: `lms load`.

image::images/lms-cli-welcome.png[The CLI interface during execution of initial LM Studio commands]

After the model loads, you should see a `Model loaded successfully` message in the CLI.
After the model loads, you should see a `Model loaded successfully` message in the CLI.

image::images/lms-studio-model-loaded-msg.png[The CLI message that appears after a model loads]

@@ -156,8 +160,8 @@ Refer to the following video to see how to load a model using LM Studio's GUI. Y
<img
style="width: 100%; margin: auto; display: block;"
class="vidyard-player-embed"
src="https://play.vidyard.com/FMx2wxGQhquWPVhGQgjkyM.jpg"
data-uuid="FMx2wxGQhquWPVhGQgjkyM"
src="https://play.vidyard.com/c4AxH8d9tWMnwNp5J6bcfX.jpg"
data-uuid="c4AxH8d9tWMnwNp5J6bcfX"
data-v="4"
data-type="inline"
/>
Binary file modified docs/AI-for-security/images/lms-cli-welcome.png
Binary file modified docs/AI-for-security/images/lms-model-select.png
Binary file modified docs/AI-for-security/images/lms-ps-command.png
Binary file modified docs/AI-for-security/images/lms-studio-model-loaded-msg.png
3 changes: 2 additions & 1 deletion docs/AI-for-security/llm-performance-matrix.asciidoc
@@ -13,4 +13,5 @@ This table describes the performance of various large language models (LLMs) for
| *Assistant - Knowledge retrieval* | Good | Excellent | Excellent | Excellent | Excellent | Excellent | Great | Excellent | Excellent
| *Attack Discovery* | Great | Great | Excellent | Poor | Poor | Great | Poor | Excellent | Poor
|===


NOTE: `Excellent` is the best rating, followed by `Great`, then by `Good`, and finally by `Poor`.
2 changes: 2 additions & 0 deletions docs/advanced-entity-analytics/entity-risk-scoring.asciidoc
@@ -37,6 +37,8 @@ NOTE: Entities without any alerts, or with only `Closed` alerts, are not assigne
== How is risk score calculated?

. The risk scoring engine runs hourly to aggregate `Open` and `Acknowledged` alerts from the last 30 days. For each entity, the engine processes up to 10,000 alerts.
+
NOTE: When <<turn-on-risk-engine, turning on the risk engine>>, you can choose to also include `Closed` alerts in risk scoring calculations.

. The engine groups alerts by `host.name` or `user.name`, and aggregates the individual alert risk scores (`kibana.alert.risk_score`) such that alerts with higher risk scores contribute more than alerts with lower risk scores. The resulting aggregated risk score is assigned to the **Alerts** category in the entity's <<host-risk-summary, risk summary>>.

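The rank-weighted idea in the step above — alerts with higher risk scores contribute more than alerts with lower scores — can be sketched as follows. This is an illustration only: the exponent `p`, the normalization, and the helper name `aggregate_risk` are assumptions for the sketch, not Elastic's published implementation.

```python
# Illustrative rank-weighted aggregation: alerts are sorted by risk score
# and each is down-weighted by its rank, so high scores dominate the total.
# ASSUMPTION: the exponent p and the normalization below are placeholders
# chosen for illustration, not Elastic's exact formula.

def aggregate_risk(scores, p=1.5, max_alerts=10_000):
    """Aggregate per-alert risk scores (0-100) into one 0-100 entity score."""
    ranked = sorted(scores, reverse=True)[:max_alerts]  # engine caps alerts per entity
    if not ranked:
        return 0.0
    weighted = sum(s / (i + 1) ** p for i, s in enumerate(ranked))
    # Normalize against the maximum possible weighted sum (every alert at 100)
    max_possible = sum(100 / (i + 1) ** p for i in range(len(ranked)))
    return 100 * weighted / max_possible
```

Under this scheme a single high-severity alert outweighs several low ones: `aggregate_risk([100, 10, 10])` is roughly 68, while `aggregate_risk([50, 50, 50])` is exactly 50.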
Binary file modified docs/advanced-entity-analytics/images/preview-risky-entities.png
Binary file modified docs/advanced-entity-analytics/images/turn-on-risk-engine.png
4 changes: 3 additions & 1 deletion docs/advanced-entity-analytics/turn-on-risk-engine.asciidoc
@@ -29,7 +29,9 @@ image::images/preview-risky-entities.png[Preview of risky entities]
If you're installing the risk scoring engine for the first time:

. Find **Entity Risk Score** in the navigation menu.
. Turn the **Entity risk score** toggle on.
. On the **Entity Risk Score** page, turn the toggle on.

You can also choose to include `Closed` alerts in risk scoring calculations and specify a date and time range for the calculation.

[role="screenshot"]
image::images/turn-on-risk-engine.png[Turn on entity risk scoring]
2 changes: 1 addition & 1 deletion docs/cases/cases-req.asciidoc
@@ -54,7 +54,7 @@ once, which creates a user profile.
| Give view-only access for cases
a| **Read** for the *Security* feature and **All** for the *Cases* feature

NOTE: You can customize the sub-feature privileges to allow access to deleting cases, deleting alerts and comments from cases, and viewing or editing case settings.
NOTE: You can customize the sub-feature privileges to allow access to deleting cases, deleting alerts and comments from cases, viewing or editing case settings, adding case comments and attachments, and re-opening cases.

| Revoke all access to cases | **None** for the *Cases* feature under *Security*

@@ -41,6 +41,7 @@ include::cspm.asciidoc[leveloffset=+1]
include::cspm-get-started-aws.asciidoc[leveloffset=+2]
include::cspm-get-started-gcp.asciidoc[leveloffset=+2]
include::cspm-get-started-azure.asciidoc[leveloffset=+2]
include::cspm-permissions.asciidoc[leveloffset=+2]
include::cspm-findings.asciidoc[leveloffset=+2]
include::cspm-benchmark-rules.asciidoc[leveloffset=+2]
include::cspm-cloud-posture-dashboard.asciidoc[leveloffset=+2]
9 changes: 1 addition & 8 deletions docs/cloud-native-security/cspm-get-started-aws.asciidoc
@@ -10,17 +10,10 @@ This page explains how to get started monitoring the security posture of your cl
.Requirements
[sidebar]
--
* Minimum privileges vary depending on whether you need to read, write, or manage CSPM data and integrations. Refer to <<cspm-required-permissions>>.
* The CSPM integration is available to all {ecloud} users. On-premise deployments require an https://www.elastic.co/pricing[Enterprise subscription].
* CSPM only works in the `Default` {kib} space. Installing the CSPM integration on a different {kib} space will not work.
* CSPM is supported only on AWS, GCP, and Azure commercial cloud platforms, and AWS GovCloud. Other government cloud platforms are not supported. https://github.com/elastic/kibana/issues/new/choose[Click here to request support].
* `Read` privileges for the following {es} indices:
** `logs-cloud_security_posture.findings_latest-*`
** `logs-cloud_security_posture.scores-*`
* The following {kib} privileges:
** Security: `Read`
** Integrations: `Read`
** Saved Objects Management: `Read`
** Fleet: `All`
* The user who gives the CSPM integration AWS permissions must be an AWS account `admin`.
--

9 changes: 1 addition & 8 deletions docs/cloud-native-security/cspm-get-started-azure.asciidoc
@@ -10,17 +10,10 @@ This page explains how to get started monitoring the security posture of your cl
.Requirements
[sidebar]
--
* Minimum privileges vary depending on whether you need to read, write, or manage CSPM data and integrations. Refer to <<cspm-required-permissions>>.
* The CSPM integration is available to all {ecloud} users. On-premise deployments require an https://www.elastic.co/pricing[Enterprise subscription].
* CSPM only works in the `Default` {kib} space. Installing the CSPM integration on a different {kib} space will not work.
* CSPM is supported only on AWS, GCP, and Azure commercial cloud platforms, and AWS GovCloud. Other government cloud platforms are not supported. https://github.com/elastic/kibana/issues/new/choose[Click here to request support].
* `Read` privileges for the following {es} indices:
** `logs-cloud_security_posture.findings_latest-*`
** `logs-cloud_security_posture.scores-*`
* The following {kib} privileges:
** Security: `Read`
** Integrations: `Read`
** Saved Objects Management: `Read`
** Fleet: `All`
* The user who gives the CSPM integration permissions in Azure must be an Azure subscription `admin`.
--
