Merge branch 'main' into issue#elastic#4248

Showing 16 changed files with 223 additions and 12 deletions.
66 changes: 66 additions & 0 deletions
docs/en/observability/quickstarts/monitor-k8s-otel.asciidoc
[[monitor-k8s-otel-edot]]
= Quickstart: Unified Kubernetes Observability with Elastic Distributions of OpenTelemetry (EDOT)

preview::[]

In this quickstart guide, you will learn how to send Kubernetes logs, metrics, and application traces to Elasticsearch, using the https://github.com/open-telemetry/opentelemetry-operator/[OpenTelemetry Operator] to orchestrate https://github.com/elastic/opentelemetry/tree/main[Elastic Distributions of OpenTelemetry] (EDOT) Collectors and SDK instances.

All the components will be deployed through the https://github.com/open-telemetry/opentelemetry-helm-charts/tree/main/charts/opentelemetry-kube-stack[opentelemetry-kube-stack] Helm chart. They include the following (a quick verification sketch follows the list):

* https://github.com/open-telemetry/opentelemetry-operator/[OpenTelemetry Operator].
* `DaemonSet` EDOT Collector configured for node-level metrics.
* `Deployment` EDOT Collector configured for cluster-level metrics.
* `Instrumentation` object for application https://opentelemetry.io/docs/kubernetes/operator/automatic/[auto-instrumentation].
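
Once the installation described later in this guide has completed, a minimal way to confirm that these components exist is to query them with `kubectl`. This is only a sketch: the namespace shown here is illustrative and may differ in your setup.

[source,sh]
----
# Operator and EDOT Collector pods (namespace is illustrative)
kubectl get pods -n opentelemetry-operator-system

# DaemonSet and Deployment collectors
kubectl get daemonsets,deployments -n opentelemetry-operator-system

# Instrumentation object used for auto-instrumentation
kubectl get instrumentations -n opentelemetry-operator-system
----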

For a more detailed description of the components and advanced configuration, refer to the https://github.com/elastic/opentelemetry/blob/main/docs/kubernetes/operator/README.md[elastic/opentelemetry] GitHub repository.

[discrete]
== Prerequisites

* A deployment using our hosted {ess} on {ess-trial}[{ecloud}]. The deployment includes an {es} cluster for storing and searching your data, and {kib} for visualizing and managing your data.
* A running Kubernetes cluster (v1.23 or newer).
* https://kubernetes.io/docs/reference/kubectl/[Kubectl].
* https://helm.sh/docs/intro/install/[Helm].
* (optional) https://cert-manager.io/docs/installation/[Cert-manager], if you opt for automatic generation and renewal of TLS certificates.
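
If you want to double-check the command-line prerequisites before you start, the following commands (a quick sketch, not part of the official instructions) confirm that the tools are installed and that the cluster is reachable:

[source,sh]
----
# Kubernetes cluster reachable and kubectl installed
kubectl version

# Helm installed
helm version
----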

[discrete]
== Collect your data

. In {kib}, go to the **Observability** UI and click **Add Data**.

. Under *`What do you want to monitor?`*, select **Kubernetes**, and then select the **OpenTelemetry: Full Observability** option.
+
[role="screenshot"]
image::images/quickstart-k8s-otel-entry-point.png[Kubernetes-OTel entry point]

. Follow the on-screen instructions to install all needed components.
+
[NOTE]
====
The default installation deploys the OpenTelemetry Operator with a self-signed TLS certificate valid for 365 days. This certificate **won't be renewed** unless the Helm chart release is manually updated. Refer to the https://github.com/elastic/opentelemetry/blob/main/docs/kubernetes/operator/README.md#cert-manager[cert-manager integrated installation] guide to enable automatic certificate generation and renewal using https://cert-manager.io/docs/installation/[cert-manager].
====
+
Deploy the OpenTelemetry Operator and EDOT Collectors using the kube-stack Helm chart with the provided `values.yaml` file. You will run a few commands (sketched after this list) to:
+
* Add the Helm chart repository needed for the installation.
* Create a namespace.
* Create a secret with an API key and the {es} endpoint to be used by the collectors.
* Install the `opentelemetry-kube-stack` Helm chart with the provided `values.yaml`.
* Optionally, for instrumenting applications, apply the corresponding `annotations` as shown in {kib}.
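
The following is a minimal sketch of what those commands typically look like. The repository URL is the upstream OpenTelemetry Helm charts repository; the namespace, secret name, release name, workload name, and `values.yaml` path are illustrative placeholders, so use the exact values and annotations shown in the {kib} onboarding flow for your deployment.

[source,sh]
----
# Add the Helm chart repository that hosts opentelemetry-kube-stack
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo update

# Create a namespace for the operator and collectors (name is illustrative)
kubectl create namespace opentelemetry-operator-system

# Create a secret holding the Elasticsearch endpoint and an API key for the collectors
kubectl create secret generic elastic-secret-otel \
  --namespace opentelemetry-operator-system \
  --from-literal=elastic_endpoint='<your-elasticsearch-endpoint>' \
  --from-literal=elastic_api_key='<your-api-key>'

# Install the kube-stack chart with the provided values.yaml
helm install opentelemetry-kube-stack open-telemetry/opentelemetry-kube-stack \
  --namespace opentelemetry-operator-system \
  --values values.yaml

# Optionally, add an auto-instrumentation annotation to a workload's pod template
# (deployment name, annotation key, and Instrumentation reference are illustrative;
# copy the exact annotations shown in Kibana for your language and namespace)
kubectl patch deployment my-app -p \
  '{"spec":{"template":{"metadata":{"annotations":{"instrumentation.opentelemetry.io/inject-java":"opentelemetry-operator-system/elastic-instrumentation"}}}}}'
----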

[discrete]
== Visualize your data

After installation is complete and all relevant data is flowing into Elastic, the **Visualize your data** section provides a link to the *[OTEL][Metrics Kubernetes]Cluster Overview* dashboard used to monitor the health of the cluster.

[role="screenshot"]
image::images/quickstart-k8s-otel-dashboard.png[Kubernetes overview dashboard]

[discrete]
== Troubleshooting and more

* For troubleshooting deployment and installation, refer to https://github.com/elastic/opentelemetry/tree/main/docs/kubernetes/operator#installation-verification[installation verification].
* For application instrumentation details, refer to https://github.com/elastic/opentelemetry/blob/main/docs/kubernetes/operator/instrumenting-applications.md[Instrumenting applications with EDOT SDKs on Kubernetes].
* For customizing the configuration, refer to https://github.com/elastic/opentelemetry/tree/main/docs/kubernetes/operator#custom-configuration[custom configuration].
* Refer to <<observability-introduction>> for a description of other useful features.
4 changes: 2 additions & 2 deletions
...ss/technical-preview-limitations.asciidoc → docs/en/serverless/limitations.asciidoc

132 changes: 132 additions & 0 deletions
docs/en/serverless/quickstarts/collect-data-with-aws-firehose.asciidoc
[[collect-data-with-aws-firehose]]
= Collect data with AWS Firehose

preview::[]

In this quickstart guide, you'll learn how to use AWS Firehose to send logs and metrics to Elastic.

The AWS Firehose streams are created using a CloudFormation template, which can collect all available CloudWatch logs and metrics for your AWS account.

This approach requires minimal configuration: the CloudFormation template creates a Firehose stream, enables CloudWatch metrics collection across all namespaces, and sets up an account-level subscription filter for CloudWatch log groups to send logs to Elastic via Firehose.
You can use an AWS CLI command (an example sketch follows the parameter lists below) or upload the template to the AWS CloudFormation console to customize the following parameter values:

[%collapsible]
.Required input parameters
====
* `ElasticEndpointURL`: Elastic endpoint URL.
* `ElasticAPIKey`: Elastic API key.
====

[%collapsible]
.Optional input parameters
====
* `HttpBufferInterval`: The Kinesis Firehose HTTP buffer interval, in seconds. Default is `60`.
* `HttpBufferSize`: The Kinesis Firehose HTTP buffer size, in MiB. Default is `1`.
* `S3BackupMode`: Source record backup in Amazon S3: failed data only or all data. Default is `FailedDataOnly`.
* `S3BufferInterval`: The Kinesis Firehose S3 buffer interval, in seconds. Default is `300`.
* `S3BufferSize`: The Kinesis Firehose S3 buffer size, in MiB. Default is `5`.
* `S3BackupBucketARN`: By default, an S3 bucket for backup is created. You can override this behavior by providing the ARN of an existing S3 bucket, which ensures the data can be recovered if the record processing transformation does not produce the desired results.
* `Attributes`: Comma-separated list of attribute name-value pairs for the HTTP endpoint. For example, "name1=value1,name2=value2".
====

[%collapsible]
.Optional input parameters specific to metrics
====
* `EnableCloudWatchMetrics`: Enable CloudWatch metrics collection. Default is `true`. When CloudWatch metrics collection is enabled, a metric stream is created by default with metrics from all namespaces.
* `FirehoseStreamNameForMetrics`: Name of the Amazon Data Firehose stream for collecting CloudWatch metrics. Default is `elastic-firehose-metrics`.
* `IncludeOrExclude`: Select the metrics you want to stream. You can include or exclude specific namespaces and metrics. If no filter namespace is given, defaults to all namespaces. Default is `Include`.
* `MetricNameFilters`: Comma-delimited list of namespace and metric name pairs to use for filtering metrics from the stream. If no metric name filter is given, defaults to all namespaces and all metrics. For example, "AWS/EC2:CPUUtilization|NetworkIn|NetworkOut,AWS/RDS,AWS/S3:AllRequests".
* `IncludeLinkedAccountsMetrics`: If you are creating a metric stream in a monitoring account, specify `true` to include metrics from source accounts that are linked to this monitoring account in the metric stream. Default is `false`.
* `Tags`: Comma-delimited list of tags to apply to the metric stream. For example, "org:eng,project:firehose".
====

[%collapsible]
.Optional input parameters specific to logs
====
* `EnableCloudWatchLogs`: Enable CloudWatch logs collection. Default is `true`. When CloudWatch logs collection is enabled, an account-level subscription filter policy is created for all CloudWatch log groups (except the log groups created for Firehose logs).
* `FirehoseStreamNameForLogs`: Name of the Amazon Data Firehose stream for collecting CloudWatch logs. Default is `elastic-firehose-logs`.
====
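
For reference, here is a sketch of creating the stack with the AWS CLI. The stack name and template file path are illustrative placeholders for the template provided through the onboarding flow; supply your own endpoint and API key, add any optional parameters you need, and set `--region` if you want a region other than your default.

[source,sh]
----
# Create the CloudFormation stack from the provided template
# (stack name, template path, and region are illustrative; --capabilities is
# typically required because the template creates IAM resources)
aws cloudformation create-stack \
  --stack-name elastic-firehose-stack \
  --template-body file://firehose-template.yaml \
  --parameters \
      ParameterKey=ElasticEndpointURL,ParameterValue='<your-elastic-endpoint-url>' \
      ParameterKey=ElasticAPIKey,ParameterValue='<your-elastic-api-key>' \
  --capabilities CAPABILITY_NAMED_IAM \
  --region us-east-1
----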

IMPORTANT: Some AWS services need additional manual configuration to properly ingest logs and metrics. For more information, check the
link:https://www.elastic.co/docs/current/integrations/aws[AWS integration] documentation.

Data collection with AWS Firehose is supported on Amazon Web Services.

[discrete]
== Prerequisites

* An {obs-serverless} project. To learn more, refer to <<observability-create-an-observability-project>>.
* A user with the **Admin** role or higher, which is required to onboard system logs and metrics. To learn more, refer to <<general-assign-user-roles>>.
* An active AWS account and the necessary permissions to create delivery streams.

NOTE: The default CloudFormation stack is created in the AWS region selected for the user's account. This region can be modified either through the AWS Console interface or by specifying a `--region` parameter in the AWS CLI command when creating the stack.

[discrete]
== Limitations

The AWS Firehose receiver has the following limitations:

* It does not support AWS PrivateLink.
* The CloudFormation template detects and ingests logs and metrics within a single AWS region only.

The following table shows the type of data ingested by the supported AWS services:

|===
| AWS Service | Data type

| VPC Flow Logs | Logs
| API Gateway | Logs, Metrics
| CloudTrail | Logs
| Network Firewall | Logs, Metrics
| Route53 | Logs
| WAF | Logs
| DynamoDB | Metrics
| EBS | Metrics
| EC2 | Metrics
| ECS | Metrics
| ELB | Metrics
| EMR | Metrics
| MSK | Metrics
| Kinesis Data Stream | Metrics
| Lambda | Metrics
| NAT Gateway | Metrics
| RDS | Metrics
| S3 | Metrics
| SNS | Metrics
| SQS | Metrics
| Transit Gateway | Metrics
| AWS Usage | Metrics
| VPN | Metrics
| Uncategorized Firehose Logs | Logs

|===

[discrete]
== Collect your data

. <<observability-create-an-observability-project,Create a new {obs-serverless} project>>, or open an existing one.
. In your {obs-serverless} project, go to **Add Data**.
. Go to **Cloud** > **AWS**, and then select **AWS Firehose**.
+
[role="screenshot"]
image::images/quickstart-aws-firehose-entry-point.png[AWS Firehose entry point]

. Click **Create Firehose Stream in AWS** to create a CloudFormation stack from the CloudFormation template.

. Go back to the **Add Observability Data** page.

[discrete]
== Visualize your data

After installation is complete and all relevant data is flowing into Elastic, the **Visualize your data** section allows you to access the different dashboards for the various services.

[role="screenshot"]
image::images/quickstart-aws-firehose-dashboards.png[AWS Firehose dashboards]

Here is an example of the VPC Flow logs dashboard:

[role="screenshot"]
image::images/quickstart-aws-firehose-vpc-flow.png[AWS Firehose VPC flow]

Refer to <<observability-serverless-observability-overview>> for a description of other useful features.
["appendix",role="exclude",id="redirects"] | ||
= Deleted pages | ||
|
||
The following pages have moved or been deleted. | ||
|
||
[role="exclude",id="observability-technical-preview-limitations"] | ||
=== Technical preview limitations | ||
|
||
Refer to <<observability-limitations,Limitations>>. |