diff --git a/docs/en/observability/images/quickstart-autodetection-command.png b/docs/en/observability/images/quickstart-autodetection-command.png deleted file mode 100644 index 8ee8386567..0000000000 Binary files a/docs/en/observability/images/quickstart-autodetection-command.png and /dev/null differ diff --git a/docs/en/observability/images/quickstart-aws-firehose-entry-point.png b/docs/en/observability/images/quickstart-aws-firehose-entry-point.png index 2ec56faef7..6a0bbe3399 100644 Binary files a/docs/en/observability/images/quickstart-aws-firehose-entry-point.png and b/docs/en/observability/images/quickstart-aws-firehose-entry-point.png differ diff --git a/docs/en/observability/images/quickstart-k8s-entry-point.png b/docs/en/observability/images/quickstart-k8s-entry-point.png index 6a00630071..e558f81054 100644 Binary files a/docs/en/observability/images/quickstart-k8s-entry-point.png and b/docs/en/observability/images/quickstart-k8s-entry-point.png differ diff --git a/docs/en/observability/images/quickstart-k8s-otel-entry-point.png b/docs/en/observability/images/quickstart-k8s-otel-entry-point.png index e47fac646b..739832bac2 100644 Binary files a/docs/en/observability/images/quickstart-k8s-otel-entry-point.png and b/docs/en/observability/images/quickstart-k8s-otel-entry-point.png differ diff --git a/docs/en/observability/images/quickstart-monitor-hosts-entry-point.png b/docs/en/observability/images/quickstart-monitor-hosts-entry-point.png new file mode 100644 index 0000000000..d59756fa2d Binary files /dev/null and b/docs/en/observability/images/quickstart-monitor-hosts-entry-point.png differ diff --git a/docs/en/observability/quickstarts/collect-data-with-aws-firehose.asciidoc b/docs/en/observability/quickstarts/collect-data-with-aws-firehose.asciidoc index 86f1b4ec62..50af15cabc 100644 --- a/docs/en/observability/quickstarts/collect-data-with-aws-firehose.asciidoc +++ b/docs/en/observability/quickstarts/collect-data-with-aws-firehose.asciidoc @@ -47,17 
+47,17 @@ You can use an AWS CLI command or upload the template to the AWS CloudFormation * `FirehoseStreamNameForLogs`: Name for Amazon Data Firehose Stream for collecting CloudWatch logs. Default is `elastic-firehose-logs`. ==== -IMPORTANT: Some AWS services need additional manual configuration to properly ingest logs and metrics. For more information, check the +IMPORTANT: Some AWS services need additional manual configuration to properly ingest logs and metrics. For more information, check the link:https://www.elastic.co/docs/current/integrations/aws[AWS integration] documentation. -Data collection with AWS Firehose is supported on ESS deployments in AWS, Azure and GCP. +Data collection with AWS Firehose is supported on ESS deployments in AWS, Azure and GCP. [discrete] == Prerequisites * A deployment using our hosted {ess} on {ess-trial}[{ecloud}]. The deployment includes an {es} cluster for storing and searching your data, and {kib} for visualizing and managing your data. * A user with the `superuser` {ref}/built-in-roles.html[built-in role] or the privileges required to onboard data. -+ ++ [%collapsible] .Expand to view required privileges ==== @@ -75,16 +75,16 @@ NOTE: The default CloudFormation stack is created in the AWS region selected for The AWS Firehose receiver has the following limitations: * It does not support AWS PrivateLink. -* It is not available for on-premise Elastic Stack deployments. -* The CloudFormation template detects and ingests logs and metrics within a single AWS region only. +* It is not available for on-premise Elastic Stack deployments. +* The CloudFormation template detects and ingests logs and metrics within a single AWS region only. 
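The template parameters described above can also be supplied when creating the stack from the AWS CLI. A minimal sketch, with placeholder values throughout (the onboarding flow provides the real template URL, endpoint, and API key; the stack name is an assumption):

```shell
# Hypothetical sketch of creating the CloudFormation stack from the AWS CLI.
# Every value here is a placeholder; copy the real template URL, endpoint,
# and API key from the onboarding flow.
STACK_NAME="elastic-firehose"
TEMPLATE_URL="https://example.com/elastic-firehose-template.yaml"
PARAMS="ParameterKey=ElasticEndpointURL,ParameterValue=https://my-deployment.es.example.io:443 ParameterKey=ElasticAPIKey,ParameterValue=REDACTED ParameterKey=FirehoseStreamNameForLogs,ParameterValue=elastic-firehose-logs"

# Echoed so the sketch is safe to run as-is; remove `echo` to create the stack.
echo aws cloudformation create-stack \
  --stack-name "$STACK_NAME" \
  --template-url "$TEMPLATE_URL" \
  --parameters $PARAMS \
  --capabilities CAPABILITY_NAMED_IAM
```

Uploading the template in the CloudFormation console and setting the same parameters there is equivalent.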
The following table shows the type of data ingested by the supported AWS services: |=== -| AWS Service | Data type +| AWS Service | Data type -| VPC Flow Logs |Logs -| API Gateway|Logs, Metrics +| VPC Flow Logs |Logs +| API Gateway|Logs, Metrics | CloudTrail | Logs | Network Firewall | Logs, Metrics | Route53 | Logs @@ -113,9 +113,9 @@ The following table shows the type of data ingested by the supported AWS service [discrete] == Collect your data -. In {kib}, go to **Observability** and click **Add Data**. +. In {kib}, go to the **Observability** UI and click **Add Data**. -. Select **Cloud**, **AWS**, and then select **AWS Firehose**. +. Under **What do you want to monitor?** select **Cloud**, **AWS**, and then select **AWS Firehose**. + [role="screenshot"] image::images/quickstart-aws-firehose-entry-point.png[AWS Firehose entry point] diff --git a/docs/en/observability/quickstarts/monitor-hosts-with-elastic-agent.asciidoc b/docs/en/observability/quickstarts/monitor-hosts-with-elastic-agent.asciidoc index b93990d16f..478146ce7a 100644 --- a/docs/en/observability/quickstarts/monitor-hosts-with-elastic-agent.asciidoc +++ b/docs/en/observability/quickstarts/monitor-hosts-with-elastic-agent.asciidoc @@ -1,8 +1,6 @@ [[quickstart-monitor-hosts-with-elastic-agent]] = Quickstart: Monitor hosts with {agent} -preview::[] - In this quickstart guide, you'll learn how to scan your host to detect and collect logs and metrics, then navigate to dashboards to further analyze and explore your observability data. You'll also learn how to get value out of your observability data. @@ -15,7 +13,7 @@ The script also generates an {agent} configuration file that you can use with yo [discrete] == Prerequisites -* A deployment using our hosted {ess} on {ess-trial}[{ecloud}]. The deployment includes an {es} cluster for storing and searching your data, and {kib} for visualizing and managing your data. 
+* An {es} cluster for storing and searching your data, and {kib} for visualizing and managing your data. This quickstart is available for all Elastic deployment models. To get started quickly, try out our hosted {ess} on {ess-trial}[{ecloud}]. * A user with the `superuser` {ref}/built-in-roles.html[built-in role] or the privileges required to onboard data. + [%collapsible] @@ -30,8 +28,6 @@ The script also generates an {agent} configuration file that you can use with yo [discrete] == Limitations -* The auto-detection script currently scans for metrics and logs from Apache, Docker, Nginx, and the host system. - It also scans for custom log files. * The auto-detection script works on Linux and MacOS only. Support for the `lsof` command is also required if you want to detect custom log files. * If you've installed Apache or Nginx in a non-standard location, you'll need to specify log file paths manually when you run the scan. * Because Docker Desktop runs in a VM, its logs are not auto-detected. @@ -39,14 +35,14 @@ The script also generates an {agent} configuration file that you can use with yo [discrete] == Collect your data -. Go to the **Observability** UI and click **Add Data**. -. Select **Collect and analyze logs**, and then select **Auto-detect logs and metrics**. -. Copy the command that's shown. For example: +. In {kib}, go to the **Observability** UI and click **Add Data**. +. Under **What do you want to monitor?** select **Host**, and then select **Elastic Agent: Logs & Metrics**. + [role="screenshot"] -image::images/quickstart-autodetection-command.png[Quick start showing command for running auto-detection] +image::images/quickstart-monitor-hosts-entry-point.png[Host monitoring entry point] +. Copy the install command. + -You'll run this command to download the auto-detection script and scan your system for observability data. +You'll run this command to download the auto-detection script, scan your system for observability data, and install {agent}. . 
Open a terminal on the host you want to scan, and run the command. . Review the list of log files: * Enter `Y` to ingest all the log files listed. @@ -59,6 +55,7 @@ There might be a slight delay before logs and other data are ingested. ***** **Need to scan your host again?** +The auto-detection script (`auto_detect.sh`) is downloaded to the directory where you ran the installation command. You can re-run the script on the same host to detect additional logs. The script will scan the host and reconfigure {agent} with any additional logs that are found. If the script misses any custom logs, you can add them manually by entering `n` after the script has finished scanning the host. @@ -75,23 +72,27 @@ the page may link to the following integration assets: |==== | Integration asset | Description -| **System** -| Prebuilt dashboard for monitoring host status and health using system metrics. - | **Apache** | Prebuilt dashboard for monitoring Apache HTTP server health using error and access log data. +| **Custom .log files** +| Logs Explorer for analyzing custom logs. | **Docker** | Prebuilt dashboard for monitoring the status and health of Docker containers. +| **MySQL** +| Prebuilt dashboard for monitoring MySQL server health using error and access log data. | **Nginx** | Prebuilt dashboard for monitoring Nginx server health using error and access log data. +| **System** +| Prebuilt dashboard for monitoring host status and health using system metrics. -| **Custom .log files** -| Logs Explorer for analyzing custom logs. +| **Other prebuilt dashboards** +| Prebuilt dashboards are also available for systems and services not described here, +including PostgreSQL, Redis, HAProxy, Kafka, RabbitMQ, Prometheus, Apache Tomcat, and MongoDB. |==== For example, you can navigate the **Host overview** dashboard to explore detailed metrics about system usage and throughput.
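The `lsof` requirement in the limitations exists because custom log discovery has to work from files that running processes hold open. As a rough, hypothetical illustration of that idea (not the script's actual logic), the following is safe to run on any host:

```shell
# Rough illustration only (not the auto-detection script's actual logic):
# list regular files ending in .log that running processes hold open.
echo "Open .log files detected:"
if command -v lsof >/dev/null 2>&1; then
  # -F selects field output; lines starting with "n" carry the file name.
  lsof -Fn 2>/dev/null | sed -n 's/^n\(.*\.log\)$/\1/p' | sort -u
else
  echo "(lsof is not installed; the script needs it for custom log discovery)"
fi
```

This is why custom log files opened only briefly, or by no process at all, may need to be added manually when the script prompts you.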
diff --git a/docs/en/observability/quickstarts/monitor-k8s-logs-metrics.asciidoc b/docs/en/observability/quickstarts/monitor-k8s-logs-metrics.asciidoc index 340a909b6c..f7b3cad2c6 100644 --- a/docs/en/observability/quickstarts/monitor-k8s-logs-metrics.asciidoc +++ b/docs/en/observability/quickstarts/monitor-k8s-logs-metrics.asciidoc @@ -1,8 +1,6 @@ [[monitor-k8s-logs-metrics-with-elastic-agent]] = Quickstart: Monitor your Kubernetes cluster with {agent} -preview::[] - In this quickstart guide, you'll learn how to create the Kubernetes resources that are required to monitor your cluster infrastructure. This new approach requires minimal configuration and provides you with an easy setup to monitor your infrastructure. You no longer need to download, install, or configure the Elastic Agent, everything happens automatically when you run the kubectl command. @@ -12,7 +10,7 @@ The kubectl command installs the standalone Elastic Agent in your Kubernetes clu [discrete] == Prerequisites -* A deployment using our hosted {ess} on {ess-trial}[{ecloud}]. The deployment includes an {es} cluster for storing and searching your data, and {kib} for visualizing and managing your data. +* An {es} cluster for storing and searching your data, and {kib} for visualizing and managing your data. This quickstart is available for all Elastic deployment models. To get started quickly, try out our hosted {ess} on {ess-trial}[{ecloud}]. * A user with the `superuser` {ref}/built-in-roles.html[built-in role] or the privileges required to onboard data. + [%collapsible] @@ -28,9 +26,9 @@ The kubectl command installs the standalone Elastic Agent in your Kubernetes clu [discrete] == Collect your data -. Go to the **Observability** UI and click **Add Data**. +. In {kib}, go to the **Observability** UI and click **Add Data**. -. Select **Monitor infrastructure**, and then select **Kubernetes**. +. 
Under **What do you want to monitor?** select **Kubernetes**, and then select **Elastic Agent: Logs & Metrics**. + [role="screenshot"] image::images/quickstart-k8s-entry-point.png[Kubernetes entry point] diff --git a/docs/en/observability/quickstarts/monitor-k8s-otel.asciidoc b/docs/en/observability/quickstarts/monitor-k8s-otel.asciidoc index 788925318e..8c50350bed 100644 --- a/docs/en/observability/quickstarts/monitor-k8s-otel.asciidoc +++ b/docs/en/observability/quickstarts/monitor-k8s-otel.asciidoc @@ -3,7 +3,7 @@ preview::[] -In this quickstart guide, you will learn how to send Kubernetes logs, metrics, and application traces to Elasticsearch, using the https://github.com/open-telemetry/opentelemetry-operator/[OpenTelemetry Operator] to orchestrate https://github.com/elastic/opentelemetry/tree/main[Elastic Distributions of OpenTelemetry] (EDOT) Collectors and SDK instances. +In this quickstart guide, you'll learn how to send Kubernetes logs, metrics, and application traces to Elasticsearch, using the https://github.com/open-telemetry/opentelemetry-operator/[OpenTelemetry Operator] to orchestrate https://github.com/elastic/opentelemetry/tree/main[Elastic Distributions of OpenTelemetry] (EDOT) Collectors and SDK instances. All the components will be deployed through the https://github.com/open-telemetry/opentelemetry-helm-charts/tree/main/charts/opentelemetry-kube-stack[opentelemetry-kube-stack] helm chart. They include: @@ -17,7 +17,7 @@ For a more detailed description of the components and advanced configuration, re [discrete] == Prerequisites -* A deployment using our hosted {ess} on {ess-trial}[{ecloud}]. The deployment includes an {es} cluster for storing and searching your data, and {kib} for visualizing and managing your data. +* An {es} cluster for storing and searching your data, and {kib} for visualizing and managing your data. This quickstart is available for all Elastic deployment models. 
To get started quickly, try out our hosted {ess} on {ess-trial}[{ecloud}]. * A running Kubernetes cluster (v1.23 or newer). * https://kubernetes.io/docs/reference/kubectl/[Kubectl]. * https://helm.sh/docs/intro/install/[Helm]. @@ -26,9 +26,9 @@ For a more detailed description of the components and advanced configuration, re [discrete] == Collect your data -. In {kib}, go to the **Observability** UI and click **Add Data**. +. In {kib}, go to the **Observability** UI and click **Add Data**. -. Under *`What do you want to monitor?`*, select **Kubernetes**, and then select the **OpenTelemetry: Full Observability** option. +. Under **What do you want to monitor?** select **Kubernetes**, and then select **OpenTelemetry: Full Observability**. + [role="screenshot"] image::images/quickstart-k8s-otel-entry-point.png[Kubernetes-OTel entry point] @@ -41,7 +41,7 @@ The default installation deploys the OpenTelemetry Operator with a self-signed T ==== + Deploy the OpenTelemetry Operator and EDOT Collectors using the kube-stack Helm chart with the provided `values.yaml` file. You will run a few commands to: -+ ++ * Add the helm chart repository needed for the installation. * Create a namespace. * Create a secret with an API Key and the {es} endpoint to be used by the collectors. 
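The steps listed above can be sketched as shell commands. All names and values here are illustrative assumptions; the quickstart UI generates the exact commands, including the secret contents and chart values, for your deployment:

```shell
# Illustrative sketch of the steps above; names and values are assumptions.
# Echoed so the sketch is safe to run as-is; set run="" to execute for real.
run=echo

$run helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
$run kubectl create namespace opentelemetry-operator-system
$run kubectl create secret generic elastic-secret-otel \
  --namespace opentelemetry-operator-system \
  --from-literal=elastic_endpoint='https://my-deployment.es.example.io:443' \
  --from-literal=elastic_api_key='REDACTED'
$run helm install opentelemetry-kube-stack open-telemetry/opentelemetry-kube-stack \
  --namespace opentelemetry-operator-system \
  --values values.yaml
```

Keeping the secret separate from the chart values means the API key never has to appear in `values.yaml`.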
diff --git a/docs/en/serverless/images/quickstart-autodetection-command.png b/docs/en/serverless/images/quickstart-autodetection-command.png new file mode 100644 index 0000000000..0e19ddd94d Binary files /dev/null and b/docs/en/serverless/images/quickstart-autodetection-command.png differ diff --git a/docs/en/serverless/images/quickstart-aws-firehose-entry-point.png b/docs/en/serverless/images/quickstart-aws-firehose-entry-point.png new file mode 100644 index 0000000000..6a0bbe3399 Binary files /dev/null and b/docs/en/serverless/images/quickstart-aws-firehose-entry-point.png differ diff --git a/docs/en/serverless/images/quickstart-k8s-entry-point.png b/docs/en/serverless/images/quickstart-k8s-entry-point.png new file mode 100644 index 0000000000..e558f81054 Binary files /dev/null and b/docs/en/serverless/images/quickstart-k8s-entry-point.png differ diff --git a/docs/en/serverless/images/quickstart-monitor-hosts-entry-point.png b/docs/en/serverless/images/quickstart-monitor-hosts-entry-point.png new file mode 100644 index 0000000000..d59756fa2d Binary files /dev/null and b/docs/en/serverless/images/quickstart-monitor-hosts-entry-point.png differ diff --git a/docs/en/serverless/quickstarts/collect-data-with-aws-firehose.asciidoc b/docs/en/serverless/quickstarts/collect-data-with-aws-firehose.asciidoc new file mode 100644 index 0000000000..40e64fceb5 --- /dev/null +++ b/docs/en/serverless/quickstarts/collect-data-with-aws-firehose.asciidoc @@ -0,0 +1,132 @@ +[[collect-data-with-aws-firehose]] += Quickstart: Collect data with AWS Firehose + +preview::[] + +In this quickstart guide, you'll learn how to use AWS Firehose to send logs and metrics to Elastic. + +The AWS Firehose streams are created using a CloudFormation template, which can collect all available CloudWatch logs and metrics for your AWS account. 
+ +This approach requires minimal configuration as the CloudFormation template creates a Firehose stream, enables CloudWatch metrics collection across all namespaces, and sets up an account-level subscription filter for CloudWatch log groups to send logs to Elastic via Firehose. +You can use an AWS CLI command or upload the template to the AWS CloudFormation portal to customize the following parameter values: + +[%collapsible] +.Required Input Parameters +==== +* `ElasticEndpointURL`: Elastic endpoint URL. +* `ElasticAPIKey`: Elastic API Key. +==== + +[%collapsible] +.Optional Input Parameters +==== +* `HttpBufferInterval`: The Kinesis Firehose HTTP buffer interval, in seconds. Default is `60`. +* `HttpBufferSize`: The Kinesis Firehose HTTP buffer size, in MiB. Default is `1`. +* `S3BackupMode`: Source record backup in Amazon S3, failed data only or all data. Default is `FailedDataOnly`. +* `S3BufferInterval`: The Kinesis Firehose S3 buffer interval, in seconds. Default is `300`. +* `S3BufferSize`: The Kinesis Firehose S3 buffer size, in MiB. Default is `5`. +* `S3BackupBucketARN`: By default, an S3 bucket for backup will be created. You can override this behaviour by providing an ARN of an existing S3 bucket that ensures the data can be recovered if record processing transformation does not produce the desired results. +* `Attributes`: List of attribute name-value pairs for HTTP endpoint separated by commas. For example "name1=value1,name2=value2". +==== + +[%collapsible] +.Optional Input Parameters Specific for Metrics +==== +* `EnableCloudWatchMetrics`: Enable CloudWatch Metrics collection. Default is `true`. When CloudWatch metrics collection is enabled, by default a metric stream will be created with metrics from all namespaces. +* `FirehoseStreamNameForMetrics`: Name for Amazon Data Firehose Stream for collecting CloudWatch metrics. Default is `elastic-firehose-metrics`. +* `IncludeOrExclude`: Select the metrics you want to stream. 
You can include or exclude specific namespaces and metrics. If no filter namespace is given, the filter defaults to all namespaces. Default is `Include`. +* `MetricNameFilters`: Comma-delimited list of namespace-metric name pairs to use for filtering metrics from the stream. If no metric name filter is given, the filter defaults to all namespaces and all metrics. For example "AWS/EC2:CPUUtilization|NetworkIn|NetworkOut,AWS/RDS,AWS/S3:AllRequests". +* `IncludeLinkedAccountsMetrics`: If you are creating a metric stream in a monitoring account, specify `true` to include metrics from source accounts that are linked to this monitoring account in the metric stream. Default is `false`. +* `Tags`: Comma-delimited list of tags to apply to the metric stream. For example "org:eng,project:firehose". +==== + +[%collapsible] +.Optional Input Parameters Specific for Logs +==== +* `EnableCloudWatchLogs`: Enable CloudWatch Logs collection. Default is `true`. When CloudWatch logs collection is enabled, an account-level subscription filter policy is created for all CloudWatch log groups (except the log groups created for Firehose logs). +* `FirehoseStreamNameForLogs`: Name for Amazon Data Firehose Stream for collecting CloudWatch logs. Default is `elastic-firehose-logs`. +==== + +IMPORTANT: Some AWS services need additional manual configuration to properly ingest logs and metrics. For more information, check the +link:https://www.elastic.co/docs/current/integrations/aws[AWS integration] documentation. + +Data collection with AWS Firehose is supported on Amazon Web Services. + +[discrete] +== Prerequisites + +* An {obs-serverless} project. To learn more, refer to <>. +* A user with the **Admin** role or higher—required to onboard system logs and metrics. To learn more, refer to <>. +* An active AWS account and the necessary permissions to create delivery streams. + +NOTE: The default CloudFormation stack is created in the AWS region selected for the user's account.
This region can be modified either through the AWS Console interface or by specifying a `--region` parameter in the AWS CLI command when creating the stack. + +[discrete] +== Limitations + +The AWS Firehose receiver has the following limitations: + +* It does not support AWS PrivateLink. +* The CloudFormation template detects and ingests logs and metrics within a single AWS region only. + +The following table shows the type of data ingested by the supported AWS services: + +|=== +| AWS Service | Data type + +| VPC Flow Logs |Logs +| API Gateway|Logs, Metrics +| CloudTrail | Logs +| Network Firewall | Logs, Metrics +| Route53 | Logs +| WAF | Logs +| DynamoDB | Metrics +| EBS | Metrics +| EC2 | Metrics +| ECS | Metrics +| ELB | Metrics +| EMR | Metrics +| MSK | Metrics +| Kinesis Data Stream | Metrics +| Lambda | Metrics +| NAT Gateway | Metrics +| RDS | Metrics +| S3 | Metrics +| SNS | Metrics +| SQS | Metrics +| Transit Gateway | Metrics +| AWS Usage | Metrics +| VPN | Metrics +| Uncategorized Firehose Logs | Logs + +|=== + +[discrete] +== Collect your data + +. <>, or open an existing one. +. In your {obs-serverless} project, go to **Add Data**. +. Under **What do you want to monitor?** select **Cloud**, **AWS**, and then select **AWS Firehose**. ++ +[role="screenshot"] +image::images/quickstart-aws-firehose-entry-point.png[AWS Firehose entry point] + +. Click **Create Firehose Stream in AWS** to create a CloudFormation stack from the CloudFormation template. + +. Go back to the **Add Observability Data** page. + +[discrete] +== Visualize your data + +After installation is complete and all relevant data is flowing into Elastic, +the **Visualize your data** section allows you to access the different dashboards for the various services. 
+ +[role="screenshot"] +image::images/quickstart-aws-firehose-dashboards.png[AWS Firehose dashboards] + +Here is an example of the VPC Flow logs dashboard: + +[role="screenshot"] +image::images/quickstart-aws-firehose-vpc-flow.png[AWS Firehose VPC flow] + +Refer to <> for a description of other useful features. diff --git a/docs/en/serverless/quickstarts/k8s-logs-metrics.asciidoc b/docs/en/serverless/quickstarts/k8s-logs-metrics.asciidoc new file mode 100644 index 0000000000..a483b42e33 --- /dev/null +++ b/docs/en/serverless/quickstarts/k8s-logs-metrics.asciidoc @@ -0,0 +1,51 @@ +[[observability-quickstarts-k8s-logs-metrics]] += Quickstart: Monitor your Kubernetes cluster with Elastic Agent + +// :description: Learn how to monitor your cluster infrastructure running on Kubernetes. +// :keywords: serverless, observability, how-to + +In this quickstart guide, you'll learn how to create the Kubernetes resources that are required to monitor your cluster infrastructure. + +This new approach requires minimal configuration and provides you with an easy setup to monitor your infrastructure. You no longer need to download, install, or configure the Elastic Agent; everything happens automatically when you run the kubectl command. + +The kubectl command installs the standalone Elastic Agent in your Kubernetes cluster, downloads all the Kubernetes resources needed to collect metrics from the cluster, and sends the data to Elastic. + +[discrete] +[[observability-quickstarts-k8s-logs-metrics-prerequisites]] +== Prerequisites + +* An {obs-serverless} project. To learn more, refer to <>. +* A user with the **Admin** role or higher—required to onboard system logs and metrics. To learn more, refer to <>. +* A running Kubernetes cluster. +* https://kubernetes.io/docs/reference/kubectl/[Kubectl]. + +[discrete] +[[observability-quickstarts-k8s-logs-metrics-collect-your-data]] +== Collect your data + +. <>, or open an existing one. +. In your {obs-serverless} project, go to **Add Data**. +. 
Under **What do you want to monitor?** select **Kubernetes**, and then select **Elastic Agent: Logs & Metrics**. ++ +[role="screenshot"] +image::images/quickstart-k8s-entry-point.png[Kubernetes entry point] +. To install the Elastic Agent on your host, copy and run the install command. ++ +You will use the kubectl command to download a manifest file, inject the API key generated by Kibana, and create the Kubernetes resources. +. Go back to the **Add Observability Data** page. +There might be a slight delay before data is ingested. When ready, you will see the message **We are monitoring your cluster**. +. Click **Explore Kubernetes cluster** to navigate to dashboards and explore your data. + +[discrete] +[[observability-quickstarts-k8s-logs-metrics-visualize-your-data]] +== Visualize your data + +After installation is complete and all relevant data is flowing into Elastic, +the **Visualize your data** section allows you to access the Kubernetes Cluster Overview dashboard that can be used to monitor the health of the cluster. + +[role="screenshot"] +image::images/quickstart-k8s-overview.png[Kubernetes overview dashboard] + +Furthermore, you can access other useful prebuilt dashboards for monitoring Kubernetes resources, for example running pods per namespace, as well as the resources they consume, like CPU and memory. + +Refer to <> for a description of other useful features. diff --git a/docs/en/serverless/quickstarts/monitor-hosts-with-elastic-agent.asciidoc b/docs/en/serverless/quickstarts/monitor-hosts-with-elastic-agent.asciidoc new file mode 100644 index 0000000000..9777fca364 --- /dev/null +++ b/docs/en/serverless/quickstarts/monitor-hosts-with-elastic-agent.asciidoc @@ -0,0 +1,132 @@ +[[observability-quickstarts-monitor-hosts-with-elastic-agent]] += Quickstart: Monitor hosts with {agent} + +// :description: Learn how to scan your hosts to detect and collect logs and metrics.
+// :keywords: serverless, observability, how-to + +In this quickstart guide, you'll learn how to scan your host to detect and collect logs and metrics, +then navigate to dashboards to further analyze and explore your observability data. +You'll also learn how to get value out of your observability data. + +To scan your host, you'll run an auto-detection script that downloads and installs {agent}, +which is used to collect observability data from the host and send it to Elastic. + +The script also generates an {agent} configuration file that you can use with your existing Infrastructure-as-Code tooling. + +[discrete] +[[observability-quickstarts-monitor-hosts-with-elastic-agent-prerequisites]] +== Prerequisites + +* An {obs-serverless} project. To learn more, refer to <>. +* A user with the **Admin** role or higher—required to onboard system logs and metrics. To learn more, refer to <>. +* Root privileges on the host—required to run the auto-detection script used in this quickstart. + +[discrete] +[[observability-quickstarts-monitor-hosts-with-elastic-agent-limitations]] +== Limitations + +* The auto-detection script works on Linux and macOS only. Support for the `lsof` command is also required if you want to detect custom log files. +* If you've installed Apache or Nginx in a non-standard location, you'll need to specify log file paths manually when you run the scan. +* Because Docker Desktop runs in a VM, its logs are not auto-detected. + +[discrete] +[[observability-quickstarts-monitor-hosts-with-elastic-agent-collect-your-data]] +== Collect your data + +. <>, or open an existing one. +. In your {obs-serverless} project, go to **Add Data**. +. Under **What do you want to monitor?** select **Host**, and then select **Elastic Agent: Logs & Metrics**. ++ +[role="screenshot"] +image::images/quickstart-monitor-hosts-entry-point.png[Host monitoring entry point] +. Copy the install command. 
++ +You'll run this command to download the auto-detection script, scan your system for observability data, and install {agent}. +. Open a terminal on the host you want to scan, and run the command. +. Review the list of log files: ++ +** Enter `Y` to ingest all the log files listed. +** Enter `n` to either exclude log files or specify additional log paths. Enter `Y` to confirm your selections. + +When the script is done, you'll see a message like "{agent} is configured and running." + +There might be a slight delay before logs and other data are ingested. + +.Need to scan your host again? +[NOTE] +==== +The auto-detection script (`auto_detect.sh`) is downloaded to the directory where you ran the installation command. +You can re-run the script on the same host to detect additional logs. +The script will scan the host and reconfigure {agent} with any additional logs that are found. +If the script misses any custom logs, you can add them manually by entering `n` after the script has finished scanning the host. +==== + +[discrete] +[[observability-quickstarts-monitor-hosts-with-elastic-agent-visualize-your-data]] +== Visualize your data + +After installation is complete and all relevant data is flowing into Elastic, +the **Visualize your data** section will show links to assets you can use to analyze your data. +Depending on what type of observability data was collected, +the page may link to the following integration assets: + +|=== +| Integration asset | Description + +| **Apache** +| Prebuilt dashboard for monitoring Apache HTTP server health using error and access log data. + +| **Custom .log files** +| Logs Explorer for analyzing custom logs. + +| **Docker** +| Prebuilt dashboard for monitoring the status and health of Docker containers. + +| **MySQL** +| Prebuilt dashboard for monitoring MySQL server health using error and access log data. + +| **Nginx** +| Prebuilt dashboard for monitoring Nginx server health using error and access log data. 
+ +| **System** +| Prebuilt dashboard for monitoring host status and health using system metrics. + +| **Other prebuilt dashboards** +| Prebuilt dashboards are also available for systems and services not described here, +including PostgreSQL, Redis, HAProxy, Kafka, RabbitMQ, Prometheus, Apache Tomcat, and MongoDB. +|=== + +For example, you can navigate the **Host overview** dashboard to explore detailed metrics about system usage and throughput. +Metrics that indicate a possible problem are highlighted in red. + +[role="screenshot"] +image::images/quickstart-host-overview.png[Host overview dashboard] + +[discrete] +[[observability-quickstarts-monitor-hosts-with-elastic-agent-get-value-out-of-your-data]] +== Get value out of your data + +After using the dashboards to examine your data and confirm you've ingested all the host logs and metrics you want to monitor, +you can use {obs-serverless} to gain deeper insight into your data. + +For host monitoring, the following capabilities and features are recommended: + +* In the <>, analyze and compare data collected from your hosts. +You can also: ++ +** <> for memory usage and network traffic on hosts. +** <> that notify you when an anomaly is detected or a metric exceeds a given value. +* In the <>, search and filter your log data, +get information about the structure of log fields, and display your findings in a visualization. +You can also: ++ +** <> to find degraded documents. +** <> to find patterns in unstructured log messages. +** <> that notify you when an Observability data type reaches or exceeds a given value. +* Use <> to apply predictive analytics and machine learning to your data: ++ +** <> by comparing real-time and historical data from different sources to look for unusual, problematic patterns. +** <>. +** <> in your time series data. + +Refer to <> for a description of other useful features.
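Beyond the dashboards and features above, you can sanity-check that host data is arriving with a direct {es} query. A hypothetical sketch; the endpoint, API key, and even the `metrics-system.cpu-default` data stream name (the default naming used by the system integration) are assumptions to adapt to your project:

```shell
# Hypothetical ingestion check; endpoint and key are placeholders, and the
# data stream name assumes default system integration naming.
ES_URL="https://my-project.es.example.io"
API_KEY="REDACTED"

# Echoed so the sketch is safe to run as-is; remove `echo` to send the request.
echo curl -s "$ES_URL/metrics-system.cpu-default/_count" \
  -H "Authorization: ApiKey $API_KEY"
```

A non-zero `count` in the response confirms system metrics are being indexed.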