
Commit

Merge branch 'main' into issue-5341-unified-timeline-integration
nastasha-solomon authored Jul 16, 2024
2 parents af7d328 + bce4151 commit dc63100
Showing 98 changed files with 444 additions and 725 deletions.
2 changes: 1 addition & 1 deletion .backportrc.json
@@ -1,5 +1,5 @@
{
"upstream": "elastic/security-docs",
"branches": [{ "name": "7.x", "checked": true }, "8.15", "8.14", "8.13", "8.12", "8.11", "8.10", "8.9", "8.8", "8.7", "8.6", "8.5", "8.4", "8.3", "8.2", "8.1", "8.0", "7.17", "7.16", "7.15", "7.14", "7.13", "7.12", "7.11", "7.10", "7.9", "7.8"],
"branches": ["8.15", "8.14", "8.13", "8.12", "8.11", "8.10", "8.9", "8.8", "8.7", "8.6", "8.5", "8.4", "8.3", "8.2", "8.1", "8.0", "7.17", "7.16", "7.15", "7.14", "7.13", "7.12", "7.11", "7.10", "7.9", "7.8"],
"labels": ["backport"]
}
16 changes: 9 additions & 7 deletions docs/AI-for-security/ai-for-security.asciidoc
@@ -8,17 +8,19 @@

You can use {elastic-sec}'s built-in AI tools to speed up your work and augment your team's capabilities. The pages in this section describe <<security-assistant, AI Assistant>>, which answers questions and enhances your workflows throughout {elastic-sec}, and <<attack-discovery, Attack discovery>>, which speeds up the triage process by finding patterns and identifying attacks spanning multiple alerts.

include::security-assistant.asciidoc[leveloffset=+1]
include::ai-security-assistant.asciidoc[leveloffset=+1]
include::attack-discovery.asciidoc[leveloffset=+1]

include::llm-connector-guides.asciidoc[leveloffset=+1]
include::azure-openai-setup.asciidoc[leveloffset=+2]
include::connector-guides-landing-pg.asciidoc[leveloffset=+1]
include::connect-to-azure-openai.asciidoc[leveloffset=+2]
include::connect-to-bedrock.asciidoc[leveloffset=+2]
include::connect-to-openai.asciidoc[leveloffset=+2]
include::connect-to-byo.asciidoc[leveloffset=+2]

include::ai-use-cases.asciidoc[leveloffset=+1]
include::ai-alert-triage.asciidoc[leveloffset=+2]
include::use-attack-discovery-ai-assistant-incident-reporting.asciidoc[leveloffset=+2]
include::ai-esql-queries.asciidoc[leveloffset=+2]

include::usecase-landing-pg.asciidoc[leveloffset=+1]
include::usecase-alert-triage.asciidoc[leveloffset=+2]
include::usecase-attack-discovery-ai-assistant-incident-reporting.asciidoc[leveloffset=+2]
include::usecase-esql-queries.asciidoc[leveloffset=+2]

include::llm-performance-matrix.asciidoc[leveloffset=+1]
6 changes: 5 additions & 1 deletion docs/AI-for-security/attack-discovery.asciidoc
@@ -52,7 +52,7 @@ image::images/select-model-empty-state.png[]
+
. Once you've selected a connector, click **Generate** to start the analysis.

It may take from a few seconds up to several minutes to generate discoveries, depending on the number of alerts and the model you selected. Note that Attack discovery is in technical preview and will only analyze opened and acknowledged alerts from the past 24 hours.
It may take from a few seconds up to several minutes to generate discoveries, depending on the number of alerts and the model you selected.

IMPORTANT: Attack discovery is in technical preview and will only analyze opened and acknowledged alerts from the past 24 hours. By default, it only analyzes up to 20 alerts within this timeframe, but you can expand this up to 100 by going to **AI Assistant → Settings (image:images/icon-settings.png[Settings icon,17,17]) → Knowledge Base** and updating the **Alerts** setting.

image::images/knowledge-base-settings.png["AI Assistant's settings menu open to the Knowledge Base tab",75%]

IMPORTANT: Attack discovery uses the same data anonymization settings as <<security-assistant, Elastic AI Assistant>>. To configure which alert fields are sent to the LLM and which of those fields are obfuscated, use the Elastic AI Assistant settings. Consider the privacy policies of third-party LLMs before sending them sensitive data.

192 changes: 192 additions & 0 deletions docs/AI-for-security/connect-to-byo.asciidoc
@@ -0,0 +1,192 @@
[[connect-to-byo-llm]]
= Connect to your own local LLM

:frontmatter-description: Set up a connector to LM Studio so you can use a local model with AI Assistant.
:frontmatter-tags-products: [security]
:frontmatter-tags-content-type: [guide]
:frontmatter-tags-user-goals: [get-started]

This page provides instructions for setting up a connector to a large language model (LLM) of your choice using LM Studio. This allows you to use your chosen model within {elastic-sec}. You'll first need to set up a reverse proxy to communicate with {elastic-sec}, then set up LM Studio on a server, and finally configure the connector in your Elastic deployment. https://www.elastic.co/blog/ai-assistant-locally-hosted-models[Learn more about the benefits of using a local LLM].

This example uses a single server hosted in GCP to run the following components:

* LM Studio with the https://mistral.ai/technology/#models[Mixtral-8x7b] model
* A reverse proxy using Nginx to authenticate to Elastic Cloud

image::images/lms-studio-arch-diagram.png[Architecture diagram for this guide]

NOTE: For testing, you can use alternatives to Nginx such as https://learn.microsoft.com/en-us/azure/developer/dev-tunnels/overview[Azure Dev Tunnels] or https://ngrok.com/[Ngrok], but using Nginx makes it easy to collect additional telemetry and monitor its status by using Elastic's native Nginx integration. While this example uses cloud infrastructure, it could also be replicated locally without an internet connection.

[discrete]
== Configure your reverse proxy

NOTE: If your Elastic instance is on the same host as LM Studio, you can skip this step.

You need to set up a reverse proxy to enable communication between LM Studio and Elastic. For more complete instructions, refer to a guide such as https://www.digitalocean.com/community/tutorials/how-to-configure-nginx-as-a-reverse-proxy-on-ubuntu-22-04[this one].

The following is an example Nginx configuration file:

[source,txt]
--------------------------------------------------
server {
listen 80;
listen [::]:80;
server_name <yourdomainname.com>;
server_tokens off;
add_header x-xss-protection "1; mode=block" always;
add_header x-frame-options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name <yourdomainname.com>;
server_tokens off;
ssl_certificate /etc/letsencrypt/live/<yourdomainname.com>/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/<yourdomainname.com>/privkey.pem;
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_session_tickets on;
ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256';
ssl_protocols TLSv1.3 TLSv1.2;
ssl_prefer_server_ciphers on;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
add_header x-xss-protection "1; mode=block" always;
add_header x-frame-options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/letsencrypt/live/<yourdomainname.com>/fullchain.pem;
resolver 1.1.1.1;
location / {
if ($http_authorization != "Bearer <secret token>") {
return 401;
}
proxy_pass http://localhost:1234/;
}
}
--------------------------------------------------

IMPORTANT: If you use the example configuration file above, you must replace several values. Replace `<secret token>` with your actual token, and keep it safe since you'll need it to set up the {elastic-sec} connector. Replace `<yourdomainname.com>` with your actual domain name. If you change the port number in LM Studio to something other than 1234, update the `proxy_pass` value at the bottom of the configuration to match.
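
If you don't already have a bearer token, one common approach (not required by this guide) is to generate a random value on the proxy host. After editing the file, it's also worth validating and reloading Nginx. A minimal sketch, assuming a standard Nginx installation:

[source,sh]
--------------------------------------------------
# Generate a random value to use as the bearer token (one possible approach)
openssl rand -hex 32

# Validate the edited configuration and reload Nginx
sudo nginx -t && sudo systemctl reload nginx
--------------------------------------------------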

[discrete]
=== (Optional) Set up performance monitoring for your reverse proxy
You can use Elastic's {integrations-docs}/nginx[Nginx integration] to monitor performance and populate monitoring dashboards in the {security-app}.

[discrete]
== Configure LM Studio and download a model

First, install https://lmstudio.ai/[LM Studio]. LM Studio supports the OpenAI SDK, which makes it compatible with Elastic's OpenAI connector, allowing you to connect to any model available in the LM Studio marketplace.

One current limitation of LM Studio is that when it's installed on a server, you must launch the application from its GUI before you can start it from the CLI (for example, by using Chrome RDP with an https://cloud.google.com/architecture/chrome-desktop-remote-on-compute-engine[X Window System]). After you've opened the application for the first time using the GUI, you can start it from the CLI with `sudo lms server start`.
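
For example, on a Linux server the headless startup might look like the following (a sketch, assuming LM Studio has already been opened once through the GUI and is using its default port of 1234):

[source,sh]
--------------------------------------------------
# Start the LM Studio server (only works after the first GUI launch)
sudo lms server start

# Confirm the OpenAI-compatible API is responding on the default port
curl http://localhost:1234/v1/models
--------------------------------------------------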

Once you've launched LM Studio:

1. Go to LM Studio's Search window.
2. Search for an LLM (for example, `Mixtral-8x7B-instruct`). Your chosen model must include `instruct` in its name in order to work with Elastic.
3. Filter your search for "Compatibility Guess" to optimize results for your hardware. Results are color-coded:
* Green means "Full GPU offload possible", which yields the best results.
* Blue means "Partial GPU offload possible", which may work.
* Red means "Likely too large for this machine", which typically will not work.
4. Download one or more models.

IMPORTANT: For security reasons, before downloading a model, verify that it is from a trusted source. It can be helpful to review community feedback on the model (for example using a site like Hugging Face).

image::images/lms-model-select.png[The LM Studio model selection interface]

In this example, we used https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF[`TheBloke/Mixtral-8x7B-Instruct-v0.1.Q3_K_M.gguf`]. It has 46.7B total parameters, a 32,000 token context window, and uses GGUF https://huggingface.co/docs/transformers/main/en/quantization/overview[quantization]. For more information about model names and formats, refer to the following table.

[cols="1,1,1,1", options="header"]
|===
| Model Name | Parameter Size | Tokens/Context Window | Quantization Format
| Name of model, sometimes with a version number.
| LLMs are often compared by their number of parameters — higher numbers mean more powerful models.
| Tokens are small chunks of input information. Tokens do not necessarily correspond to characters. You can use https://platform.openai.com/tokenizer[Tokenizer] to see how many tokens a given prompt might contain.
| Quantization reduces the precision of the model's parameters, which lowers memory use and helps the model run faster, but reduces accuracy.
| Examples: Llama, Mistral, Phi-3, Falcon.
| The number of parameters is a measure of the size and the complexity of the model. The more parameters a model has, the more data it can process, learn from, generate, and predict.
| The context window defines how much information the model can process at once. If the number of input tokens exceeds this limit, input gets truncated.
| Specific quantization formats vary; most models now support GPU rather than CPU offloading.
|===

[discrete]
== Load a model in LM Studio

After downloading a model, load it in LM Studio using the GUI or LM Studio's https://lmstudio.ai/blog/lms[CLI tool].

[discrete]
=== Option 1: load a model using the CLI (Recommended)

It is a best practice to download models from the marketplace using the GUI, and then load or unload them using the CLI. The GUI allows you to search for models, whereas the CLI only allows you to import specific paths, but the CLI provides a good interface for loading and unloading.

Use the following commands in your CLI:

1. Verify LM Studio is installed: `lms`
2. Check LM Studio's status: `lms status`
3. List all downloaded models: `lms ls`
4. Load a model: `lms load`
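
Put together, a typical session might look like the following (a sketch of the same commands, assuming you've already downloaded a model):

[source,sh]
--------------------------------------------------
lms          # verify LM Studio's CLI is installed
lms status   # check the server's status
lms ls       # list the models you've downloaded
lms load     # choose a downloaded model to load
--------------------------------------------------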

image::images/lms-cli-welcome.png[The CLI interface during execution of initial LM Studio commands]

After the model loads, you should see a `Model loaded successfully` message in the CLI.

image::images/lms-studio-model-loaded-msg.png[The CLI message that appears after a model loads]

To verify which model is loaded, use the `lms ps` command.

image::images/lms-ps-command.png[The CLI message that appears after running lms ps]

If your model uses NVIDIA drivers, you can check the GPU performance with the `sudo nvidia-smi` command.

[discrete]
=== Option 2: load a model using the GUI

Refer to the following video to see how to load a model using LM Studio's GUI. You can change the **port** setting, which is referenced in the Nginx configuration file. In the video, **GPU offload** was set to **Max**.

=======
++++
<script type="text/javascript" async src="https://play.vidyard.com/embed/v4.js"></script>
<img
style="width: 100%; margin: auto; display: block;"
class="vidyard-player-embed"
src="https://play.vidyard.com/FMx2wxGQhquWPVhGQgjkyM.jpg"
data-uuid="FMx2wxGQhquWPVhGQgjkyM"
data-v="4"
data-type="inline"
/>
<br/>
++++
=======

[discrete]
== (Optional) Collect logs using Elastic's Custom Logs integration

You can monitor the performance of the host running LM Studio using Elastic's {integrations-docs}/log[Custom Logs integration]. This can also help with troubleshooting. Note that the default path for LM Studio logs is `/tmp/lmstudio-server-log.txt`, as in the following screenshot:

image::images/lms-custom-logs-config.png[The configuration window for the custom logs integration]
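
Before configuring the integration, you can confirm the log file exists and is being written on the LM Studio host. A quick check, assuming the default path above:

[source,sh]
--------------------------------------------------
# Watch LM Studio's server log to confirm new entries are being written
tail -f /tmp/lmstudio-server-log.txt
--------------------------------------------------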

[discrete]
== Configure the connector in your Elastic deployment

Finally, configure the connector:

1. Log in to your Elastic deployment.
2. Navigate to **Stack Management → Connectors → Create Connector → OpenAI**. The OpenAI connector enables this use case because LM Studio uses the OpenAI SDK.
3. Name your connector to help keep track of the model version you are using.
4. Under **URL**, enter the domain name specified in your Nginx configuration file, followed by `/v1/chat/completions`.
5. Under **Default model**, enter `local-model`.
6. Under **API key**, enter the secret token specified in your Nginx configuration file.
7. Click **Save**.

image::images/lms-edit-connector.png[The Edit connector page in the {security-app}, with appropriate values populated]
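
To verify the full path (reverse proxy, authentication, and loaded model), you can send a test request in the same OpenAI chat completions format the connector uses. This is a sketch; substitute the domain name and secret token from your Nginx configuration:

[source,sh]
--------------------------------------------------
curl https://<yourdomainname.com>/v1/chat/completions \
  -H "Authorization: Bearer <secret token>" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "local-model",
        "messages": [{"role": "user", "content": "Hello"}]
      }'
--------------------------------------------------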

Setup is now complete. You can use the model you've loaded in LM Studio to power Elastic's generative AI features. You can test a variety of models as you interact with AI Assistant to see what works best without having to update your connector.

NOTE: While local models work well for <<security-assistant, AI Assistant>>, we recommend you use one of <<llm-performance-matrix, these models>> for interacting with <<attack-discovery, Attack discovery>>. As local models become more performant over time, this is likely to change.
@@ -8,4 +8,4 @@ Setup guides are available for the following LLM providers:
* <<assistant-connect-to-azure-openai, Azure OpenAI>>
* <<assistant-connect-to-bedrock, Amazon Bedrock>>
* <<assistant-connect-to-openai, OpenAI>>

* <<connect-to-byo-llm, LM Studio (custom local LLM)>>
Binary file added docs/AI-for-security/images/lms-cli-welcome.png
Binary file added docs/AI-for-security/images/lms-model-select.png
Binary file added docs/AI-for-security/images/lms-ps-command.png
File renamed without changes.
5 changes: 5 additions & 0 deletions docs/advanced-entity-analytics/entity-risk-scoring.asciidoc
@@ -1,6 +1,11 @@
[[entity-risk-scoring]]
= Entity risk scoring

[sidebar]
--
If you’ve installed the original user and host risk score modules, refer to {security-guide-all}/8.11/host-risk-score.html[Host risk score] and {security-guide-all}/8.11/user-risk-score.html[User risk score].
--

beta::[]

Entity risk scoring is an advanced {elastic-sec} analytics feature that helps security analysts detect changes in an entity's risk posture, hunt for new threats, and prioritize incident response.
@@ -23,7 +23,7 @@ image::images/preview-risky-entities.png[Preview of risky entities]
[NOTE]
======
* To view risk score data, you must have alerts generated in your environment.
* If you previously installed the original <<user-risk-score, user>> and <<host-risk-score, host risk score>> modules, and you're upgrading to {stack} version 8.11 or newer, refer to <<upgrade-risk-engine, Upgrade to the latest risk engine>>.
* If you previously installed the original user and host risk score modules, and you're upgrading to {stack} version 8.11 or newer, refer to <<upgrade-risk-engine, Upgrade to the latest risk engine>>.
======

If you're installing the risk scoring engine for the first time:
2 changes: 2 additions & 0 deletions docs/cloud-native-security/d4c-get-started.asciidoc
@@ -6,6 +6,8 @@
:frontmatter-tags-content-type: [how-to]
:frontmatter-tags-user-goals: [get-started]

beta::[]

This page describes how to set up Cloud Workload Protection (CWP) for Kubernetes.

.Requirements
2 changes: 2 additions & 0 deletions docs/cloud-native-security/d4c-overview.asciidoc
@@ -1,6 +1,8 @@
[[d4c-overview]]
= Cloud workload protection for Kubernetes

beta::[]

Elastic Cloud Workload Protection (CWP) for Kubernetes provides cloud-native runtime protections for containerized environments by identifying and optionally blocking unexpected system behavior in Kubernetes containers.

[[d4c-use-cases]]
4 changes: 4 additions & 0 deletions docs/cloud-native-security/vuln-management-faq.asciidoc
@@ -11,6 +11,10 @@ The CNVM integration uses various security data sources. The complete list can b

CNVM uses the open source scanner https://github.com/aquasecurity/trivy[Trivy] v0.35.

*What system architectures are supported?*

Because of Trivy's limitations, CNVM can only be deployed on ARM-based VMs. However, it can scan hosts regardless of system architecture.

*How often are the security data sources synchronized?*

The CNVM integration fetches the latest data sources at the beginning of every scan cycle to ensure up-to-date vulnerability information.
@@ -8,7 +8,8 @@ This page explains how to set up Cloud Native Vulnerability Management (CNVM).
--
* CNVM is available to all {ecloud} users. On-premise deployments require an https://www.elastic.co/pricing[Enterprise subscription].
* Requires {stack} and {agent} version 8.8 or higher.
* CNVM only works in the `Default` {kib} space. Installing the CNVM integration on a different {kib} space will not work.
* Only works in the `Default` {kib} space. Installing the CNVM integration on a different {kib} space will not work.
* CNVM can only be deployed on ARM-based VMs.
* To view vulnerability scan findings, you need at least `read` privileges for the following indices:
** `logs-cloud_security_posture.vulnerabilities-*`
** `logs-cloud_security_posture.vulnerabilities_latest-*`
2 changes: 0 additions & 2 deletions docs/detections/alerts-view-details.asciidoc
@@ -144,8 +144,6 @@ image::images/insights-section-rp.png[Insights section of the Overview tab, 65%]

The Entities overview provides high-level details about the user and host that are related to the alert. Host and user risk classifications are also available with a https://www.elastic.co/pricing[Platinum subscription] or higher.

NOTE: <<user-risk-score,User>> and <<host-risk-score,host>> risk scores are technical preview features.

[role="screenshot"]
image::images/entities-overview.png[Overview of the entity details section in the right panel, 60%]

