diff --git a/docs/cloud-native-security/cloud-nat-sec-kubernetes-dashboard.asciidoc b/docs/cloud-native-security/cloud-nat-sec-kubernetes-dashboard.asciidoc
index c9d0b174a7..4452e96a72 100644
--- a/docs/cloud-native-security/cloud-nat-sec-kubernetes-dashboard.asciidoc
+++ b/docs/cloud-native-security/cloud-nat-sec-kubernetes-dashboard.asciidoc
@@ -1,7 +1,7 @@
 [[cloud-nat-sec-kubernetes-dashboard]]
 // Note: This page is intentionally duplicated by docs/dashboards/kubernetes-dashboard.asciidoc. When you update this page, update that page to match. And careful with the anchor links because they should not match.
-== Kubernetes dashboard
+= Kubernetes dashboard
 
 The Kubernetes dashboard provides insight into Linux process data from your Kubernetes clusters. It shows sessions in detail and in the context of your monitored infrastructure.
 
@@ -33,74 +33,29 @@ The *Metadata* tab is organized into these expandable sections:
 - *Container:* `id`, `name`, `image.name`, `image.tag`, `image.hash.all`
 - *Orchestrator:* `resource.ip`, `resource.name`, `resource.type`, `namespace`, `cluster.id`, `cluster.name`, `parent.type`
 
-
 [discrete]
-[[cloud-nat-sec-k8s-dash-setup]]
 == Setup
-To collect session data for the dashboard, you'll deploy a Kubernetes DaemonSet to your clusters that implements the {elastic-defend} integration.
+To get data for this dashboard, set up <> for the clusters you want to display on the dashboard.
 
 .Requirements
 [sidebar]
 --
-- Session data capture requires a Linux kernel version of 5.10.16 or higher. Session View does not support older kernels.
-- This feature requires Elastic Stack version 8.4 or newer.
-- You need an active {fleet-guide}/fleet-overview.html[{fleet} Server].
-- Your Elastic deployment must have the {elastic-defend} integration <>.
-- The {elastic-defend} integration policy must have **Include session data** set to `true`. To modify this setting, go to **Manage -> Policies**, select your policy, and find `Include session data` near the bottom of the `Policy settings` tab.
+- Kubernetes node operating systems must have Linux kernel version 5.10.16 or higher.
+- {stack} version 8.8 or higher.
 --
 
-WARNING: Do not install the {elastic-defend} DaemonSet on hosts already running the {agent} DaemonSet. The {elastic-defend} DaemonSet deploys the {agent}, so trying to install both can cause problems since only one {agent} should run on each host.
-
-**Support matrix**: This feature is currently available on GKE and EKS using Linux hosts and Kubernetes versions that match the following specifications:
-|=====================
-| | **Kubernetes versions** | **Node OSes**
-|**EKS**| 1.18; 1.19; 1.20; 1.21 | Amazon Linux 2, Bottlerocket OS
-|**GKE**| Regular (default channel): 1.21 and 1.22; Stable: 1.20 and 1.21; Rapid: 1.22 and 1.23 | Container-optimized OS (COS), Ubuntu
-|=====================
-
-NOTE: When running within Kubernetes, {elastic-endpoint}'s <> and <> features are disabled.
-
-[discrete]
-=== Download and modify the DaemonSet manifest
-The DaemonSet integrates {elastic-endpoint} into your Kubernetes cluster. The {agent} is enrolled to a running {fleet-server} using the `FLEET_URL` parameter, and connected to a specific {agent} policy using the `FLEET_ENROLLMENT_TOKEN`.
-
-You first need to download the DaemonSet manifest `.yaml`, then modify it to include your {fleet} URL and Enrollment Token before you deploy it to the clusters you want to monitor.
-
-. Download the DaemonSet manifest using this command:
-+
-[source,console]
-----
-curl -L -O https://raw.githubusercontent.com/elastic/endpoint/main/releases/8.7.0/kubernetes/deploy/elastic-defend.yaml
-----
-
-. Fill in the manifest's `FLEET_URL` field with your {fleet} server's `Host URL`. To find it, go to **{kib} -> Management -> {fleet} -> Settings**. For more information, refer to {fleet-guide}/fleet-settings.html[Fleet UI settings].
-. Fill in the manifest's `FLEET_ENROLLMENT_TOKEN` field with a Fleet enrollment token. To find one, go to **{kib} -> Management -> {fleet} -> Enrollment tokens**. For more information, refer to {fleet-guide}/fleet-enrollment-tokens.html[Fleet enrollment tokens].
-
-
-[discrete]
-=== Apply the modified manifest to your cluster or clusters
-
-To ensure you install {elastic-endpoint} on the desired Kubernetes cluster(s), set the default context using command: `kubectl config use-context `.
-To check which contexts exist, use `kubectl config get-contexts` to list them from your local kubectl config file. An asterisk indicates the current default context.
-
-You can repeat the following steps for multiple contexts.
-
-**Example:**
-
-- Apply the manifest to a cluster: `kubectl apply -f elastic-defend.yaml`
-- Check the DaemonSet’s status: `kubectl get pods -A`
-
-Once the DaemonSet is running, Elastic Endpoint will start sending Linux session data from Kubernetes to {kib}. You can then view that data from the Kubernetes dashboard.
-
+**Support matrix**:
+This feature is currently available on GKE and EKS using Linux hosts and Kubernetes versions that match the following specifications:
+|===
+| | EKS 1.24-1.26 (AL2022) | GKE 1.24-1.26 (COS)
+| Process event exports | ✓ | ✓
+| Network event exports | ✓ | ✓
+| File event exports | ✓ | ✓
+| File blocking | ✓ | ✓
+| Process blocking | ✓ | ✓
+| Network blocking | ✗ | ✗
+| Drift prevention | ✓ | ✓
+| Mount point awareness | ✓ | ✓
+|===
 
 IMPORTANT: This dashboard uses data from the `logs-*` index pattern, which is included by default in the <>. To collect data from multiple {es} clusters (as in a cross-cluster deployment), update `logs-*` to `*:logs-*`.
-
-[discrete]
-=== Remove the DaemonSet from your clusters
-
-To uninstall the agent DaemonSet:
-
-1. Switch to the `kube-system` namespace
-2. Execute `kubectl delete -f elastic-defend.yaml`
-
-This will delete the DaemonSet along with the RBAC roles and service accounts created during its deployment.
diff --git a/docs/cloud-native-security/cloud-native-security-index.asciidoc b/docs/cloud-native-security/cloud-native-security-index.asciidoc
index 903758072c..b9657a4ceb 100644
--- a/docs/cloud-native-security/cloud-native-security-index.asciidoc
+++ b/docs/cloud-native-security/cloud-native-security-index.asciidoc
@@ -24,16 +24,16 @@ Scans your cloud workloads for known vulnerabilities. When it finds a vulnerabil
 <>.
 
 [discrete]
-== Container Workload Protection
+== Container Workload Protection for Kubernetes
 Provides cloud-native runtime protections for containerized environments by identifying and (optionally) blocking unexpected system behavior in Kubernetes containers. These capabilities are sometimes referred to as container drift detection and prevention. The solution also captures detailed process and file telemetry from monitored containers, allowing you to set up custom alerts and protection rules.
-<>.
+<>.
 
 [discrete]
 == Cloud Workload Protection for VMs
 Helps you monitor and protect your Linux VMs. It uses {elastic-defend} to instantly detect and prevent malicious behavior and malware, and captures workload telemetry data for process, file, and network activity. You can use this data with Elastic's out-of-the-box detection rules and {ml} models. These detections generate alerts that quickly help you identify and remediate threats.
-<>.
+<>.
 
 include::security-posture-management.asciidoc[leveloffset=+1]
@@ -60,8 +60,8 @@ include::vuln-management-faq.asciidoc[leveloffset=+2]
 include::d4c-overview.asciidoc[leveloffset=+1]
 include::d4c-get-started.asciidoc[leveloffset=+2]
 include::d4c-policy-guide.asciidoc[leveloffset=+2]
+include::cloud-nat-sec-kubernetes-dashboard.asciidoc[leveloffset=+2]
 
 include::cloud-workload-protection.asciidoc[leveloffset=+1]
 include::session-view.asciidoc[leveloffset=+1]
-include::cloud-nat-sec-kubernetes-dashboard.asciidoc[leveloffset=+1]
 include::environment-variable-capture.asciidoc[leveloffset=+1]
diff --git a/docs/cloud-native-security/d4c-get-started.asciidoc b/docs/cloud-native-security/d4c-get-started.asciidoc
index 9c357d207c..2722f20588 100644
--- a/docs/cloud-native-security/d4c-get-started.asciidoc
+++ b/docs/cloud-native-security/d4c-get-started.asciidoc
@@ -1,12 +1,19 @@
 [[d4c-get-started]]
-= Get started with CWP
+= Get started with CWP for Kubernetes
 
 :frontmatter-description: Secure your containerized workloads and start detecting threats and vulnerabilities.
 :frontmatter-tags-products: [security]
 :frontmatter-tags-content-type: [how-to]
 :frontmatter-tags-user-goals: [get-started]
 
-This page describes how to set up Container Workload Protection (CWP) for various use cases.
+This page describes how to set up Cloud Workload Protection (CWP) for Kubernetes.
+
+.Requirements
+[sidebar]
+--
+- Kubernetes node operating systems must have Linux kernel version 5.10.16 or higher.
+- {stack} version 8.8 or higher.
+--
 
 [discrete]
 == Initial setup
diff --git a/docs/cloud-native-security/d4c-overview.asciidoc b/docs/cloud-native-security/d4c-overview.asciidoc
index 5c3634a911..51035420e7 100644
--- a/docs/cloud-native-security/d4c-overview.asciidoc
+++ b/docs/cloud-native-security/d4c-overview.asciidoc
@@ -1,7 +1,7 @@
 [[d4c-overview]]
-= Container workload protection
+= Cloud workload protection for Kubernetes
 
-Elastic Container Workload Protection (CWP) provides cloud-native runtime protections for containerized environments by identifying and optionally blocking unexpected system behavior in Kubernetes containers.
+Elastic Cloud Workload Protection (CWP) for Kubernetes provides cloud-native runtime protections for containerized environments by identifying and optionally blocking unexpected system behavior in Kubernetes containers.
 
 [[d4c-use-cases]]
 [discrete]
@@ -9,7 +9,7 @@ Elastic Container Workload Protection (CWP) provides cloud-native runtime protec
 
 [discrete]
 === Threat detection & threat hunting
-CWP sends system events from your containers to {es}. {elastic-sec}'s prebuilt security rules include many designed to detect malicious behaviors in container runtimes. These can help you detect behaviors that should never occur in containers, such as reverse shell executions, privilege escalation, container escape attempts, and more.
+CWP for Kubernetes sends system events from your containers to {es}. {elastic-sec}'s prebuilt security rules include many designed to detect malicious behavior in container runtimes. These can help you detect events that should never occur in containers, such as reverse shell executions, privilege escalation, container escape attempts, and more.
 
 [discrete]
 === Drift detection & prevention
@@ -17,7 +17,7 @@ Cloud-native containers should be immutable, meaning that their file systems sho
 
 [discrete]
 === Workload protection policies
-CWP uses a powerful policy language to restrict container workloads to a set of allowlisted capabilities chosen by you. When employed with Drift and Threat Detection, this can provide multiple layers of defense.
+CWP for Kubernetes uses a flexible policy language to restrict container workloads to a set of allowlisted capabilities chosen by you. When employed with Drift and Threat Detection, this can provide multiple layers of defense.
 
 [discrete]
 == Support matrix:
@@ -28,15 +28,15 @@ CWP uses a powerful policy language to restrict container workloads to a set of
 | Network event exports | ✓ | ✓
 | File event exports | ✓ | ✓
 | File blocking | ✓ | ✓
-| Process blocking | Coming Soon | Coming Soon
+| Process blocking | ✓ | ✓
 | Network blocking | ✗ | ✗
 | Drift prevention | ✓ | ✓
 | Mount point awareness | ✓ | ✓
 |===
 
 [discrete]
-== How CWP works
-CWP uses a lightweight integration, Defend for Containers (D4C). When you set up the D4C integration, it gets deployed by {agent}. Specifically, the {agent} gets installed as a DaemonSet on your Kubernetes clusters, where it enables D4C to use eBPF Linux Security Modules https://docs.kernel.org/bpf/prog_lsm.html[LSM] and tracepoint probes to record system events. Events are evaluated against LSM hook points, enabling {agent} to evaluate system activity against your policy before allowing it to proceed.
+== How CWP for Kubernetes works
+CWP for Kubernetes uses a lightweight integration, Defend for Containers (D4C). When you set up the D4C integration, it is deployed by {agent}. Specifically, the {agent} is installed as a DaemonSet on your Kubernetes clusters, where it enables D4C to use eBPF Linux Security Modules (https://docs.kernel.org/bpf/prog_lsm.html[LSM]) and tracepoint probes to record system events. Events are evaluated against LSM hook points, enabling {agent} to evaluate system activity against your policy before allowing it to proceed.
 
 Your D4C integration policy determines which system behaviors (for example, process execution or file creation or deletion) will result in which actions. _Selectors_ and _responses_ define each policy. Selectors define the conditions which cause the associated responses to run. Responses are associated with one or more selectors, and specify one or more actions (such as `log`, `alert`, or `block`) that will occur when the conditions defined in an associated selector are met.
diff --git a/docs/cloud-native-security/vuln-management-findings.asciidoc b/docs/cloud-native-security/vuln-management-findings.asciidoc
index a193ab4c71..af5808c650 100644
--- a/docs/cloud-native-security/vuln-management-findings.asciidoc
+++ b/docs/cloud-native-security/vuln-management-findings.asciidoc
@@ -32,7 +32,7 @@ Independent of grouping, you can filter data in two ways:
 
 [[vuln-findings-learn-more]]
 == Learn more about a vulnerability
-Click the arrow to the left of a vulnerability's row to open the vulnerability details flyout. This flyout includes a link to the related vulnerability database, the vulnerability's publication date, CVSS vector strings, fix versions (if available), and more.
+Click a vulnerability to open the vulnerability details flyout. This flyout includes a link to the related vulnerability database, the vulnerability's publication date, CVSS vector strings, fix versions (if available), and more.
 
 When you open the vulnerability details flyout, it defaults to the *Overview* tab, which highlights key information. To view every field present in the vulnerability document, select the *Table* or *JSON* tabs.
 
@@ -41,3 +41,12 @@ When you open the vulnerability details flyout, it defaults to the *Overview* ta
 
 == Remediate vulnerabilities
 To remediate a vulnerability and reduce your attack surface, update the affected package if a fix is available.
+
+[discrete]
+[[cnvm-create-rule-from-finding]]
+== Generate alerts for vulnerabilities
+You can create detection rules that detect specific vulnerabilities directly from the Findings page:
+
+. Click a vulnerability to open the vulnerability details flyout.
+. Click **Take action**, then **Create a detection rule**. This automatically creates a detection rule that generates alerts when the associated vulnerability is found.
+. To review or customize the new rule, click **View rule**.
diff --git a/docs/dashboards/kubernetes-dashboard.asciidoc b/docs/dashboards/kubernetes-dashboard.asciidoc
index f0dd73361d..b6da9c1dec 100644
--- a/docs/dashboards/kubernetes-dashboard.asciidoc
+++ b/docs/dashboards/kubernetes-dashboard.asciidoc
@@ -34,74 +34,29 @@ The *Metadata* tab is organized into these expandable sections:
 - *Container:* `id`, `name`, `image.name`, `image.tag`, `image.hash.all`
 - *Orchestrator:* `resource.ip`, `resource.name`, `resource.type`, `namespace`, `cluster.id`, `cluster.name`, `parent.type`
 
-
 [discrete]
-[[k8s-dash-setup]]
 == Setup
-To collect session data for the dashboard, you'll deploy a Kubernetes DaemonSet to your clusters that implements the {elastic-defend} integration.
+To get data for this dashboard, set up <> for the clusters you want to display on the dashboard.
 
 .Requirements
 [sidebar]
 --
-- Session data capture requires a Linux kernel version of 5.10.16 or higher. Session View does not support older kernels.
-- This feature requires Elastic Stack version 8.4 or newer.
-- You need an active {fleet-guide}/fleet-overview.html[{fleet} Server].
-- Your Elastic deployment must have the {elastic-defend} integration <>.
-- The {elastic-defend} integration policy must have **Include session data** set to `true`. To modify this setting, go to **Manage -> Policies**, select your policy, and find `Include session data` near the bottom of the `Policy settings` tab.
+- Kubernetes node operating systems must have Linux kernel version 5.10.16 or higher.
+- {stack} version 8.8 or higher.
 --
 
-WARNING: Do not install the {elastic-defend} DaemonSet on hosts already running the {agent} DaemonSet. The {elastic-defend} DaemonSet deploys the {agent}, so trying to install both can cause problems since only one {agent} should run on each host.
-
-**Support matrix**: This feature is currently available on GKE and EKS using Linux hosts and Kubernetes versions that match the following specifications:
-|=====================
-| | **Kubernetes versions** | **Node OSes**
-|**EKS**| 1.18; 1.19; 1.20; 1.21 | Amazon Linux 2, Bottlerocket OS
-|**GKE**| Regular (default channel): 1.21 and 1.22; Stable: 1.20 and 1.21; Rapid: 1.22 and 1.23 | Container-optimized OS (COS), Ubuntu
-|=====================
-
-NOTE: When running within Kubernetes, {elastic-endpoint}'s <> and <> features are disabled.
-
-[discrete]
-=== Download and modify the DaemonSet manifest
-The DaemonSet integrates {elastic-endpoint} into your Kubernetes cluster. The {agent} is enrolled to a running {fleet-server} using the `FLEET_URL` parameter, and connected to a specific {agent} policy using the `FLEET_ENROLLMENT_TOKEN`.
-
-You first need to download the DaemonSet manifest `.yaml`, then modify it to include your {fleet} URL and Enrollment Token before you deploy it to the clusters you want to monitor.
-
-. Download the DaemonSet manifest using this command:
-+
-[source,console]
-----
-curl -L -O https://raw.githubusercontent.com/elastic/endpoint/main/releases/8.7.0/kubernetes/deploy/elastic-defend.yaml
-----
-
-. Fill in the manifest's `FLEET_URL` field with your {fleet} server's `Host URL`. To find it, go to **{kib} -> Management -> {fleet} -> Settings**. For more information, refer to {fleet-guide}/fleet-settings.html[Fleet UI settings].
-. Fill in the manifest's `FLEET_ENROLLMENT_TOKEN` field with a Fleet enrollment token. To find one, go to **{kib} -> Management -> {fleet} -> Enrollment tokens**. For more information, refer to {fleet-guide}/fleet-enrollment-tokens.html[Fleet enrollment tokens].
-
-
-[discrete]
-=== Apply the modified manifest to your cluster or clusters
-
-To ensure you install {elastic-endpoint} on the desired Kubernetes cluster(s), set the default context using command: `kubectl config use-context `.
-To check which contexts exist, use `kubectl config get-contexts` to list them from your local kubectl config file. An asterisk indicates the current default context.
-
-You can repeat the following steps for multiple contexts.
-
-**Example:**
-
-- Apply the manifest to a cluster: `kubectl apply -f elastic-defend.yaml`
-- Check the DaemonSet’s status: `kubectl get pods -A`
-
-Once the DaemonSet is running, Elastic Endpoint will start sending Linux session data from Kubernetes to {kib}. You can then view that data from the Kubernetes dashboard.
-
+**Support matrix**:
+This feature is currently available on GKE and EKS using Linux hosts and Kubernetes versions that match the following specifications:
+|===
+| | EKS 1.24-1.26 (AL2022) | GKE 1.24-1.26 (COS)
+| Process event exports | ✓ | ✓
+| Network event exports | ✓ | ✓
+| File event exports | ✓ | ✓
+| File blocking | ✓ | ✓
+| Process blocking | ✓ | ✓
+| Network blocking | ✗ | ✗
+| Drift prevention | ✓ | ✓
+| Mount point awareness | ✓ | ✓
+|===
 
 IMPORTANT: This dashboard uses data from the `logs-*` index pattern, which is included by default in the <>. To collect data from multiple {es} clusters (as in a cross-cluster deployment), update `logs-*` to `*:logs-*`.
-
-[discrete]
-=== Remove the DaemonSet from your clusters
-
-To uninstall the agent DaemonSet:
-
-1. Switch to the `kube-system` namespace
-2. Execute `kubectl delete -f elastic-defend.yaml`
-
-This will delete the DaemonSet along with the RBAC roles and service accounts created during its deployment.
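The D4C policy model kept in `d4c-overview.asciidoc` (selectors name conditions; responses map one or more selectors to `log`, `alert`, or `block` actions) is easier to picture with a concrete sketch. The YAML below is a minimal illustration only: the field names (`name`, `operation`, `targetFilePath`, `interactive`, `match`, `actions`) are assumptions not defined in this patch, and the authoritative schema lives in `d4c-policy-guide.asciidoc`.

[source,yaml]
----
# Illustrative sketch only. Field names are assumptions, not the shipped D4C schema.
# A selector names the conditions to watch; a response maps selectors to actions.
file:
  selectors:
    # Watch for executables being created or modified under /usr/bin.
    - name: binDirExecutableChanges
      operation: [createExecutable, modifyExecutable]
      targetFilePath: [/usr/bin/**]
  responses:
    # When the selector above matches, record the event and raise an alert.
    - match: [binDirExecutableChanges]
      actions: [log, alert]
process:
  selectors:
    # Watch for interactive sessions started inside a container.
    - name: interactiveSessions
      interactive: true
  responses:
    - match: [interactiveSessions]
      actions: [log]
----

Whatever the exact schema, the pairing is the part to keep in mind: a response only fires when one of the selectors it names has matched, which is how a single policy can log some behaviors while blocking others.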