diff --git a/en/images/OIDC/OIDC-config.jpg b/en/images/OIDC/OIDC-config.jpg new file mode 100644 index 00000000..ca76dbd9 Binary files /dev/null and b/en/images/OIDC/OIDC-config.jpg differ diff --git a/en/images/OIDC/adfs-1.png b/en/images/OIDC/adfs-1.png new file mode 100644 index 00000000..654e91bb Binary files /dev/null and b/en/images/OIDC/adfs-1.png differ diff --git a/en/images/OIDC/adfs-2.png b/en/images/OIDC/adfs-2.png new file mode 100644 index 00000000..57d6bb68 Binary files /dev/null and b/en/images/OIDC/adfs-2.png differ diff --git a/en/images/OIDC/adfs-3.png b/en/images/OIDC/adfs-3.png new file mode 100644 index 00000000..ee36d59b Binary files /dev/null and b/en/images/OIDC/adfs-3.png differ diff --git a/en/images/OIDC/adfs-4.png b/en/images/OIDC/adfs-4.png new file mode 100644 index 00000000..e34986d6 Binary files /dev/null and b/en/images/OIDC/adfs-4.png differ diff --git a/en/images/OIDC/adfs-5.png b/en/images/OIDC/adfs-5.png new file mode 100644 index 00000000..3bb0b86b Binary files /dev/null and b/en/images/OIDC/adfs-5.png differ diff --git a/en/images/OIDC/adfs-6.png b/en/images/OIDC/adfs-6.png new file mode 100644 index 00000000..1ad0b4d1 Binary files /dev/null and b/en/images/OIDC/adfs-6.png differ diff --git a/en/images/OIDC/adfs-7.png b/en/images/OIDC/adfs-7.png new file mode 100644 index 00000000..3d05c6dd Binary files /dev/null and b/en/images/OIDC/adfs-7.png differ diff --git a/en/images/OIDC/adfs-8.png b/en/images/OIDC/adfs-8.png new file mode 100644 index 00000000..efd16a40 Binary files /dev/null and b/en/images/OIDC/adfs-8.png differ diff --git a/en/images/OIDC/adfs-9.png b/en/images/OIDC/adfs-9.png new file mode 100644 index 00000000..a30e8eba Binary files /dev/null and b/en/images/OIDC/adfs-9.png differ diff --git a/en/images/OIDC/authentication-configuration.png b/en/images/OIDC/authentication-configuration.png new file mode 100644 index 00000000..19454fc9 Binary files /dev/null and b/en/images/OIDC/authentication-configuration.png differ diff --git a/en/images/OIDC/authorization-configuration.png b/en/images/OIDC/authorization-configuration.png new file mode 100644 index 00000000..4e632acb Binary files /dev/null and b/en/images/OIDC/authorization-configuration.png differ diff --git a/en/images/OIDC/cognito-hostUI-new.png b/en/images/OIDC/cognito-hostUI-new.png new file mode 100644 index 00000000..715914f6 Binary files /dev/null and b/en/images/OIDC/cognito-hostUI-new.png differ diff --git a/en/images/OIDC/cognito-new-console-clientID.png b/en/images/OIDC/cognito-new-console-clientID.png new file mode 100644 index 00000000..f5ccd0ed Binary files /dev/null and b/en/images/OIDC/cognito-new-console-clientID.png differ diff --git a/en/images/OIDC/cognito-new-console-userpoolID.png b/en/images/OIDC/cognito-new-console-userpoolID.png new file mode 100644 index 00000000..17a175f5 Binary files /dev/null and b/en/images/OIDC/cognito-new-console-userpoolID.png differ diff --git a/en/images/OIDC/endpoint-info.png b/en/images/OIDC/endpoint-info.png new file mode 100644 index 00000000..d4b41c2e Binary files /dev/null and b/en/images/OIDC/endpoint-info.png differ diff --git a/en/images/OIDC/keycloak-access-token-setting.png b/en/images/OIDC/keycloak-access-token-setting.png new file mode 100644 index 00000000..4b4873d5 Binary files /dev/null and b/en/images/OIDC/keycloak-access-token-setting.png differ diff --git a/en/images/OIDC/keycloak-client-setting.jpg b/en/images/OIDC/keycloak-client-setting.jpg new file mode 100644 index 00000000..0febbd60 Binary files /dev/null 
and b/en/images/OIDC/keycloak-client-setting.jpg differ diff --git a/en/images/OIDC/keycloak-example-realm.jpg b/en/images/OIDC/keycloak-example-realm.jpg new file mode 100644 index 00000000..ab62c9cc Binary files /dev/null and b/en/images/OIDC/keycloak-example-realm.jpg differ diff --git a/en/images/PRD/doc-help-panel.png b/en/images/PRD/doc-help-panel.png new file mode 100644 index 00000000..a9c36acd Binary files /dev/null and b/en/images/PRD/doc-help-panel.png differ diff --git a/en/images/access-proxy-link.png b/en/images/access-proxy-link.png new file mode 100644 index 00000000..88b57513 Binary files /dev/null and b/en/images/access-proxy-link.png differ diff --git a/en/images/app-log/app-pipline-upgrade-v1.0.png b/en/images/app-log/app-pipline-upgrade-v1.0.png new file mode 100644 index 00000000..0e54183d Binary files /dev/null and b/en/images/app-log/app-pipline-upgrade-v1.0.png differ diff --git a/en/images/architecture/alarm.svg b/en/images/architecture/alarm.svg new file mode 100644 index 00000000..f9515c16 --- /dev/null +++ b/en/images/architecture/alarm.svg @@ -0,0 +1,192 @@ + + + + + + + + + + + + + + + + + + + + +
+ [SVG text labels (alarm.svg): AWS Cloud; Amazon OpenSearch Service; State Change; CloudWatch Alarms; Notify; Amazon SNS; Admin; Send To Target; Amazon EventBridge; steps 1-3]
\ No newline at end of file diff --git a/en/images/architecture/app-log-architecture.png b/en/images/architecture/app-log-architecture.png new file mode 100644 index 00000000..60ef9595 Binary files /dev/null and b/en/images/architecture/app-log-architecture.png differ diff --git a/en/images/architecture/app-log-pipeline-ec2-eks.png b/en/images/architecture/app-log-pipeline-ec2-eks.png new file mode 100644 index 00000000..8a8253cc Binary files /dev/null and b/en/images/architecture/app-log-pipeline-ec2-eks.png differ diff --git a/en/images/architecture/app-log-pipeline-ec2-eks.svg b/en/images/architecture/app-log-pipeline-ec2-eks.svg new file mode 100644 index 00000000..606ee6b7 --- /dev/null +++ b/en/images/architecture/app-log-pipeline-ec2-eks.svg @@ -0,0 +1,4 @@ + + + +
[SVG text labels (app-log-pipeline-ec2-eks.svg): Main Account; Sources; Upload logs; Application Log Source; Amazon EC2 w/ SSM Agent; Trigger; Amazon Kinesis Data Streams; Bulk upload; Failed records; AWS Lambda (Log Processor); Amazon S3 (Backup Bucket); Amazon OpenSearch Service; Peering Connection; Amazon EKS; Amazon S3 (Log Bucket); Log Buffer; steps 1-4]
\ No newline at end of file diff --git a/en/images/architecture/app-log-pipeline-syslog.png b/en/images/architecture/app-log-pipeline-syslog.png new file mode 100644 index 00000000..bb1fc25b Binary files /dev/null and b/en/images/architecture/app-log-pipeline-syslog.png differ diff --git a/en/images/architecture/app-log-pipeline-syslog.svg b/en/images/architecture/app-log-pipeline-syslog.svg new file mode 100644 index 00000000..7b11bac8 --- /dev/null +++ b/en/images/architecture/app-log-pipeline-syslog.svg @@ -0,0 +1,4 @@ + + + +
[SVG text labels (app-log-pipeline-syslog.svg): Sources; Syslog Endpoint; Main Account; Upload logs; Trigger; Amazon Kinesis Data Streams; Bulk upload; Failed records; AWS Lambda (Log Processor); Amazon S3 (Backup Bucket); Amazon OpenSearch Service; Peering Connection; Amazon S3 (Log Bucket); Log Buffer; Amazon ECS (Syslog Server); TCP/UDP; Network Load Balancer; Syslog Client 1 (Router/Switch/Firewall); Syslog Client 2 (Linux OS); steps 1-5]
\ No newline at end of file diff --git a/en/images/architecture/app-log-pipeline.png b/en/images/architecture/app-log-pipeline.png new file mode 100644 index 00000000..ebd71542 Binary files /dev/null and b/en/images/architecture/app-log-pipeline.png differ diff --git a/en/images/architecture/app-log-pipeline.svg b/en/images/architecture/app-log-pipeline.svg new file mode 100644 index 00000000..07c5766d --- /dev/null +++ b/en/images/architecture/app-log-pipeline.svg @@ -0,0 +1,4 @@ + + + +
[SVG text labels (app-log-pipeline.svg): Syslog Endpoint; Main Account; Sources; Upload logs; Application Log Source; Amazon EC2 w/ SSM Agent; Trigger; Amazon Kinesis Data Streams; Bulk upload; Failed records; AWS Lambda (Log Processor); Amazon S3 (Backup Bucket); Amazon OpenSearch Service; Peering Connection; Amazon EKS; Amazon S3 (Log Bucket); Log Buffer; Amazon ECS (Syslog Server); Elastic Load Balancing; Syslog Client; steps 1-4]
\ No newline at end of file diff --git a/en/images/architecture/arch-cn.png b/en/images/architecture/arch-cn.png new file mode 100644 index 00000000..b2b10ea7 Binary files /dev/null and b/en/images/architecture/arch-cn.png differ diff --git a/en/images/architecture/arch-new-cn.png b/en/images/architecture/arch-new-cn.png new file mode 100644 index 00000000..28e7e675 Binary files /dev/null and b/en/images/architecture/arch-new-cn.png differ diff --git a/en/images/architecture/arch.png b/en/images/architecture/arch.png new file mode 100644 index 00000000..2fa76ae7 Binary files /dev/null and b/en/images/architecture/arch.png differ diff --git a/en/images/architecture/eks-cluster-log-deployment.png b/en/images/architecture/eks-cluster-log-deployment.png new file mode 100644 index 00000000..44169b11 Binary files /dev/null and b/en/images/architecture/eks-cluster-log-deployment.png differ diff --git a/en/images/architecture/logs-from-amazon-ec2-eks-light-engine.drawio.png b/en/images/architecture/logs-from-amazon-ec2-eks-light-engine.drawio.png new file mode 100644 index 00000000..1ea50642 Binary files /dev/null and b/en/images/architecture/logs-from-amazon-ec2-eks-light-engine.drawio.png differ diff --git a/en/images/architecture/logs-in-s3-light-engine.drawio.png b/en/images/architecture/logs-in-s3-light-engine.drawio.png new file mode 100644 index 00000000..97ee2e22 Binary files /dev/null and b/en/images/architecture/logs-in-s3-light-engine.drawio.png differ diff --git a/en/images/architecture/proxy.png b/en/images/architecture/proxy.png new file mode 100644 index 00000000..b9c52684 Binary files /dev/null and b/en/images/architecture/proxy.png differ diff --git a/en/images/architecture/proxy.svg b/en/images/architecture/proxy.svg new file mode 100644 index 00000000..61b8371e --- /dev/null +++ b/en/images/architecture/proxy.svg @@ -0,0 +1,4 @@ + + + +
[SVG text labels (proxy.svg): OpenSearch VPC; Centralized Logging with OpenSearch VPC; Private subnet; Auto Scaling group; DNS query; Amazon EC2 (Nginx); Load balancing; Peering Connection; Access; Web Client; Application Load Balancer; DNS Service; OpenSearch Dashboards; Amazon OpenSearch Service; Proxy; steps 1-5]
\ No newline at end of file diff --git a/en/images/architecture/service-pipeline-cw.png b/en/images/architecture/service-pipeline-cw.png new file mode 100644 index 00000000..e3899c3c Binary files /dev/null and b/en/images/architecture/service-pipeline-cw.png differ diff --git a/en/images/architecture/service-pipeline-cw.svg b/en/images/architecture/service-pipeline-cw.svg new file mode 100644 index 00000000..a5ceb656 --- /dev/null +++ b/en/images/architecture/service-pipeline-cw.svg @@ -0,0 +1,4 @@ + + + +
[SVG text labels (service-pipeline-cw.svg): Main Account; Sources; Amazon OpenSearch Service; Amazon RDS; AWS Lambda; Subscribe; Put log files; Bulk upload; Write logs; AWS Services; Trigger; Send events; Read Log File; Amazon CloudWatch; Amazon Kinesis Data Firehose; Amazon S3 (Log Bucket); Amazon SQS; AWS Lambda (Log Processor); Amazon S3 (Backup Bucket); Failed records; steps 1-8]
\ No newline at end of file diff --git a/en/images/architecture/service-pipeline-cwl-to-kds.svg b/en/images/architecture/service-pipeline-cwl-to-kds.svg new file mode 100644 index 00000000..758cc0c4 --- /dev/null +++ b/en/images/architecture/service-pipeline-cwl-to-kds.svg @@ -0,0 +1,4 @@ + + + +
[SVG text labels (service-pipeline-cwl-to-kds.svg): Sources; AWS Services; Main Account; Amazon OpenSearch Service; Stream Logs; Bulk upload; Write logs; Trigger; Amazon CloudWatch; AWS Lambda (Log Processor); Amazon S3 (Backup Bucket); Failed records; AWS CloudTrail; VPC Flow Logs; Amazon Kinesis Data Stream; steps 1-4]
\ No newline at end of file diff --git a/en/images/architecture/service-pipeline-kdf-to-s3.svg b/en/images/architecture/service-pipeline-kdf-to-s3.svg new file mode 100644 index 00000000..9608b05a --- /dev/null +++ b/en/images/architecture/service-pipeline-kdf-to-s3.svg @@ -0,0 +1,4 @@ + + + +
[SVG text labels (service-pipeline-kdf-to-s3.svg): Main Account; Sources; Amazon OpenSearch Service; Amazon RDS; AWS Lambda; Subscribe; Put log files; Bulk upload; Write logs; AWS Services; Trigger; Send events; Read Log File; Amazon CloudWatch; Amazon Kinesis Data Firehose; Amazon S3 (Log Bucket); Amazon SQS; AWS Lambda (Log Processor); Amazon S3 (Backup Bucket); Failed records; steps 1-6]
\ No newline at end of file diff --git a/en/images/architecture/service-pipeline-kds.svg b/en/images/architecture/service-pipeline-kds.svg new file mode 100644 index 00000000..47beb96c --- /dev/null +++ b/en/images/architecture/service-pipeline-kds.svg @@ -0,0 +1,4 @@ + + + +
[SVG text labels (service-pipeline-kds.svg): Sources; AWS Services; Main Account; Amazon OpenSearch Service; Write logs; Bulk upload; Trigger; AWS Lambda (Log Processor); Amazon S3 (Backup Bucket); Failed records; Amazon Kinesis Data Stream; Amazon CloudFront; steps 1-4]
\ No newline at end of file diff --git a/en/images/architecture/service-pipeline-s3.png b/en/images/architecture/service-pipeline-s3.png new file mode 100644 index 00000000..dc715732 Binary files /dev/null and b/en/images/architecture/service-pipeline-s3.png differ diff --git a/en/images/architecture/service-pipeline-s3.svg b/en/images/architecture/service-pipeline-s3.svg new file mode 100644 index 00000000..0513e47f --- /dev/null +++ b/en/images/architecture/service-pipeline-s3.svg @@ -0,0 +1,4 @@ + + + +
[SVG text labels (service-pipeline-s3.svg): Sources; Main Account; Send events; Bulk upload; Read log files; Amazon OpenSearch Service; Trigger; Amazon SQS; Write logs; AWS Services; Elastic Load Balancing; AWS CloudTrail; Amazon S3; Amazon CloudFront; Failed records; AWS WAF; Amazon S3 (Log Bucket); AWS Lambda (Log Processor); Amazon S3 (Backup Bucket); steps 1-6]
\ No newline at end of file diff --git a/en/images/authing/OIDC-config.jpg b/en/images/authing/OIDC-config.jpg new file mode 100644 index 00000000..ac92f264 Binary files /dev/null and b/en/images/authing/OIDC-config.jpg differ diff --git a/en/images/authing/add-domain.png b/en/images/authing/add-domain.png new file mode 100644 index 00000000..789d43dd Binary files /dev/null and b/en/images/authing/add-domain.png differ diff --git a/en/images/authing/app-name.png b/en/images/authing/app-name.png new file mode 100644 index 00000000..d371cc67 Binary files /dev/null and b/en/images/authing/app-name.png differ diff --git a/en/images/authing/authentication-configuration.png b/en/images/authing/authentication-configuration.png new file mode 100644 index 00000000..19454fc9 Binary files /dev/null and b/en/images/authing/authentication-configuration.png differ diff --git a/en/images/authing/authorization-configuration.png b/en/images/authing/authorization-configuration.png new file mode 100644 index 00000000..4e632acb Binary files /dev/null and b/en/images/authing/authorization-configuration.png differ diff --git a/en/images/authing/cloudfront-alternative.png b/en/images/authing/cloudfront-alternative.png new file mode 100644 index 00000000..41e87d34 Binary files /dev/null and b/en/images/authing/cloudfront-alternative.png differ diff --git a/en/images/authing/create-app.png b/en/images/authing/create-app.png new file mode 100644 index 00000000..88b54f48 Binary files /dev/null and b/en/images/authing/create-app.png differ diff --git a/en/images/authing/endpoint-info.png b/en/images/authing/endpoint-info.png new file mode 100644 index 00000000..d4b41c2e Binary files /dev/null and b/en/images/authing/endpoint-info.png differ diff --git a/en/images/authing/keycloak-accept-dns-res.jpg b/en/images/authing/keycloak-accept-dns-res.jpg new file mode 100644 index 00000000..d6b2c5bf Binary files /dev/null and b/en/images/authing/keycloak-accept-dns-res.jpg differ diff --git a/en/images/authing/keycloak-add-realm.jpg b/en/images/authing/keycloak-add-realm.jpg new file mode 100644 index 00000000..f93e675e Binary files /dev/null and b/en/images/authing/keycloak-add-realm.jpg differ diff --git a/en/images/authing/keycloak-add-user.jpg b/en/images/authing/keycloak-add-user.jpg new file mode 100644 index 00000000..a0c70a7c Binary files /dev/null and b/en/images/authing/keycloak-add-user.jpg differ diff --git a/en/images/authing/keycloak-alb-nerwork-mapping.jpg b/en/images/authing/keycloak-alb-nerwork-mapping.jpg new file mode 100644 index 00000000..17d068f8 Binary files /dev/null and b/en/images/authing/keycloak-alb-nerwork-mapping.jpg differ diff --git a/en/images/authing/keycloak-alb-param.jpg b/en/images/authing/keycloak-alb-param.jpg new file mode 100644 index 00000000..ff2901e9 Binary files /dev/null and b/en/images/authing/keycloak-alb-param.jpg differ diff --git a/en/images/authing/keycloak-cfn-output.png b/en/images/authing/keycloak-cfn-output.png new file mode 100644 index 00000000..28180c07 Binary files /dev/null and b/en/images/authing/keycloak-cfn-output.png differ diff --git a/en/images/authing/keycloak-client-setting.jpg b/en/images/authing/keycloak-client-setting.jpg new file mode 100644 index 00000000..9730320a Binary files /dev/null and b/en/images/authing/keycloak-client-setting.jpg differ diff --git a/en/images/authing/keycloak-clientId.jpg b/en/images/authing/keycloak-clientId.jpg new file mode 100644 index 00000000..86dac3de Binary files /dev/null and b/en/images/authing/keycloak-clientId.jpg 
differ diff --git a/en/images/authing/keycloak-create-alb.jpg b/en/images/authing/keycloak-create-alb.jpg new file mode 100644 index 00000000..50b05dde Binary files /dev/null and b/en/images/authing/keycloak-create-alb.jpg differ diff --git a/en/images/authing/keycloak-create-client.jpg b/en/images/authing/keycloak-create-client.jpg new file mode 100644 index 00000000..90e6ac21 Binary files /dev/null and b/en/images/authing/keycloak-create-client.jpg differ diff --git a/en/images/authing/keycloak-create-internal-ALB.jpg b/en/images/authing/keycloak-create-internal-ALB.jpg new file mode 100644 index 00000000..dd0d79f8 Binary files /dev/null and b/en/images/authing/keycloak-create-internal-ALB.jpg differ diff --git a/en/images/authing/keycloak-credentials.jpg b/en/images/authing/keycloak-credentials.jpg new file mode 100644 index 00000000..91294202 Binary files /dev/null and b/en/images/authing/keycloak-credentials.jpg differ diff --git a/en/images/authing/keycloak-delete-listener.jpg b/en/images/authing/keycloak-delete-listener.jpg new file mode 100644 index 00000000..6119f03a Binary files /dev/null and b/en/images/authing/keycloak-delete-listener.jpg differ diff --git a/en/images/authing/keycloak-dns-action.jpg b/en/images/authing/keycloak-dns-action.jpg new file mode 100644 index 00000000..71c784e9 Binary files /dev/null and b/en/images/authing/keycloak-dns-action.jpg differ diff --git a/en/images/authing/keycloak-edit-dns-settings.jpg b/en/images/authing/keycloak-edit-dns-settings.jpg new file mode 100644 index 00000000..81057116 Binary files /dev/null and b/en/images/authing/keycloak-edit-dns-settings.jpg differ diff --git a/en/images/authing/keycloak-example-realm.jpg b/en/images/authing/keycloak-example-realm.jpg new file mode 100644 index 00000000..ab62c9cc Binary files /dev/null and b/en/images/authing/keycloak-example-realm.jpg differ diff --git a/en/images/authing/keycloak-login.jpg b/en/images/authing/keycloak-login.jpg new file mode 100644 index 00000000..05dd6e5c Binary files /dev/null and b/en/images/authing/keycloak-login.jpg differ diff --git a/en/images/authing/keycloak-parameter.png b/en/images/authing/keycloak-parameter.png new file mode 100644 index 00000000..6e82dc8c Binary files /dev/null and b/en/images/authing/keycloak-parameter.png differ diff --git a/en/images/authing/keycloak-password.jpg b/en/images/authing/keycloak-password.jpg new file mode 100644 index 00000000..584517dd Binary files /dev/null and b/en/images/authing/keycloak-password.jpg differ diff --git a/en/images/authing/keycloak-peering.jpg b/en/images/authing/keycloak-peering.jpg new file mode 100644 index 00000000..2112d8af Binary files /dev/null and b/en/images/authing/keycloak-peering.jpg differ diff --git a/en/images/authing/keycloak-portal.png b/en/images/authing/keycloak-portal.png new file mode 100644 index 00000000..e83b6370 Binary files /dev/null and b/en/images/authing/keycloak-portal.png differ diff --git a/en/images/authing/keycloak-realm-name.jpg b/en/images/authing/keycloak-realm-name.jpg new file mode 100644 index 00000000..3540ec55 Binary files /dev/null and b/en/images/authing/keycloak-realm-name.jpg differ diff --git a/en/images/authing/keycloak-route-table.jpg b/en/images/authing/keycloak-route-table.jpg new file mode 100644 index 00000000..56bd5b1b Binary files /dev/null and b/en/images/authing/keycloak-route-table.jpg differ diff --git a/en/images/authing/keycloak-routes.jpg b/en/images/authing/keycloak-routes.jpg new file mode 100644 index 00000000..73503baf Binary files /dev/null 
and b/en/images/authing/keycloak-routes.jpg differ diff --git a/en/images/authing/keycloak-secret.jpg b/en/images/authing/keycloak-secret.jpg new file mode 100644 index 00000000..319acdc2 Binary files /dev/null and b/en/images/authing/keycloak-secret.jpg differ diff --git a/en/images/authing/keycloak-secrets.jpg b/en/images/authing/keycloak-secrets.jpg new file mode 100644 index 00000000..e02b6557 Binary files /dev/null and b/en/images/authing/keycloak-secrets.jpg differ diff --git a/en/images/authing/keycloak-subnet.jpg b/en/images/authing/keycloak-subnet.jpg new file mode 100644 index 00000000..5133c605 Binary files /dev/null and b/en/images/authing/keycloak-subnet.jpg differ diff --git a/en/images/authing/keycloak-user.jpg b/en/images/authing/keycloak-user.jpg new file mode 100644 index 00000000..047f3c41 Binary files /dev/null and b/en/images/authing/keycloak-user.jpg differ diff --git a/en/images/authing/loghub.jpg b/en/images/authing/loghub.jpg new file mode 100644 index 00000000..84700181 Binary files /dev/null and b/en/images/authing/loghub.jpg differ diff --git a/en/images/authing/proxy-creation.png b/en/images/authing/proxy-creation.png new file mode 100644 index 00000000..9ab0ec9f Binary files /dev/null and b/en/images/authing/proxy-creation.png differ diff --git a/en/images/authing/secrets.jpg b/en/images/authing/secrets.jpg new file mode 100644 index 00000000..cc94d37e Binary files /dev/null and b/en/images/authing/secrets.jpg differ diff --git a/en/images/aws-solutions.png b/en/images/aws-solutions.png new file mode 100644 index 00000000..ee21f69b Binary files /dev/null and b/en/images/aws-solutions.png differ diff --git a/en/images/cloudtrail-log.png b/en/images/cloudtrail-log.png new file mode 100644 index 00000000..651e42a6 Binary files /dev/null and b/en/images/cloudtrail-log.png differ diff --git a/en/images/dashboards/apache.png b/en/images/dashboards/apache.png new file mode 100644 index 00000000..fa768353 Binary files /dev/null and b/en/images/dashboards/apache.png differ diff --git a/en/images/dashboards/cloudfront-db.png b/en/images/dashboards/cloudfront-db.png new file mode 100644 index 00000000..e45c3470 Binary files /dev/null and b/en/images/dashboards/cloudfront-db.png differ diff --git a/en/images/dashboards/cloudtrail-db.png b/en/images/dashboards/cloudtrail-db.png new file mode 100644 index 00000000..2407e260 Binary files /dev/null and b/en/images/dashboards/cloudtrail-db.png differ diff --git a/en/images/dashboards/config-db.png b/en/images/dashboards/config-db.png new file mode 100644 index 00000000..b11b98fa Binary files /dev/null and b/en/images/dashboards/config-db.png differ diff --git a/en/images/dashboards/elb-db.png b/en/images/dashboards/elb-db.png new file mode 100644 index 00000000..d8675c50 Binary files /dev/null and b/en/images/dashboards/elb-db.png differ diff --git a/en/images/dashboards/lambda-db.png b/en/images/dashboards/lambda-db.png new file mode 100644 index 00000000..68e42a48 Binary files /dev/null and b/en/images/dashboards/lambda-db.png differ diff --git a/en/images/dashboards/nginx-1.png b/en/images/dashboards/nginx-1.png new file mode 100644 index 00000000..b5c28c37 Binary files /dev/null and b/en/images/dashboards/nginx-1.png differ diff --git a/en/images/dashboards/nginx-2.png b/en/images/dashboards/nginx-2.png new file mode 100644 index 00000000..8521037c Binary files /dev/null and b/en/images/dashboards/nginx-2.png differ diff --git a/en/images/dashboards/rds-db.png b/en/images/dashboards/rds-db.png new file mode 100644 index 
00000000..f2078e1c Binary files /dev/null and b/en/images/dashboards/rds-db.png differ diff --git a/en/images/dashboards/s3-db.png b/en/images/dashboards/s3-db.png new file mode 100644 index 00000000..11ebb04d Binary files /dev/null and b/en/images/dashboards/s3-db.png differ diff --git a/en/images/dashboards/vpcflow-db.png b/en/images/dashboards/vpcflow-db.png new file mode 100644 index 00000000..3643341c Binary files /dev/null and b/en/images/dashboards/vpcflow-db.png differ diff --git a/en/images/dashboards/waf-db.png b/en/images/dashboards/waf-db.png new file mode 100644 index 00000000..95f225c4 Binary files /dev/null and b/en/images/dashboards/waf-db.png differ diff --git a/en/images/design-diagram/alarm-process.png b/en/images/design-diagram/alarm-process.png new file mode 100644 index 00000000..ea3723b7 Binary files /dev/null and b/en/images/design-diagram/alarm-process.png differ diff --git a/en/images/design-diagram/app-log-er-diagram.png b/en/images/design-diagram/app-log-er-diagram.png new file mode 100644 index 00000000..bef2a80c Binary files /dev/null and b/en/images/design-diagram/app-log-er-diagram.png differ diff --git a/en/images/design-diagram/application-log-ingestion.png b/en/images/design-diagram/application-log-ingestion.png new file mode 100644 index 00000000..b6416cb0 Binary files /dev/null and b/en/images/design-diagram/application-log-ingestion.png differ diff --git a/en/images/design-diagram/auto-import-aos-domain.png b/en/images/design-diagram/auto-import-aos-domain.png new file mode 100644 index 00000000..8bb86bf2 Binary files /dev/null and b/en/images/design-diagram/auto-import-aos-domain.png differ diff --git a/en/images/design-diagram/collect-control-plane-logging.png b/en/images/design-diagram/collect-control-plane-logging.png new file mode 100644 index 00000000..90ce3e31 Binary files /dev/null and b/en/images/design-diagram/collect-control-plane-logging.png differ diff --git a/en/images/design-diagram/create-app-log-ingestion.png b/en/images/design-diagram/create-app-log-ingestion.png new file mode 100644 index 00000000..ea9e4b1d Binary files /dev/null and b/en/images/design-diagram/create-app-log-ingestion.png differ diff --git a/en/images/design-diagram/create-eks-pod-log-ingestion.png b/en/images/design-diagram/create-eks-pod-log-ingestion.png new file mode 100644 index 00000000..454c8c4c Binary files /dev/null and b/en/images/design-diagram/create-eks-pod-log-ingestion.png differ diff --git a/en/images/design-diagram/create-svc-pipe-uml.png b/en/images/design-diagram/create-svc-pipe-uml.png new file mode 100644 index 00000000..2edb1384 Binary files /dev/null and b/en/images/design-diagram/create-svc-pipe-uml.png differ diff --git a/en/images/design-diagram/create-svc.png b/en/images/design-diagram/create-svc.png new file mode 100644 index 00000000..a0703f21 Binary files /dev/null and b/en/images/design-diagram/create-svc.png differ diff --git a/en/images/design-diagram/db-export.png b/en/images/design-diagram/db-export.png new file mode 100644 index 00000000..6efa8f91 Binary files /dev/null and b/en/images/design-diagram/db-export.png differ diff --git a/en/images/design-diagram/delete-domain.png b/en/images/design-diagram/delete-domain.png new file mode 100644 index 00000000..90cbb7cc Binary files /dev/null and b/en/images/design-diagram/delete-domain.png differ diff --git a/en/images/design-diagram/delete-svc-pipe-uml.png b/en/images/design-diagram/delete-svc-pipe-uml.png new file mode 100644 index 00000000..5d16f63e Binary files /dev/null and 
b/en/images/design-diagram/delete-svc-pipe-uml.png differ diff --git a/en/images/design-diagram/delete-svc.png b/en/images/design-diagram/delete-svc.png new file mode 100644 index 00000000..bc9eeb49 Binary files /dev/null and b/en/images/design-diagram/delete-svc.png differ diff --git a/en/images/design-diagram/eks-application-log-ingestion-flow.png b/en/images/design-diagram/eks-application-log-ingestion-flow.png new file mode 100644 index 00000000..34f6f66a Binary files /dev/null and b/en/images/design-diagram/eks-application-log-ingestion-flow.png differ diff --git a/en/images/design-diagram/eks-pod-log-ingestion-overview.png b/en/images/design-diagram/eks-pod-log-ingestion-overview.png new file mode 100644 index 00000000..87d91d2b Binary files /dev/null and b/en/images/design-diagram/eks-pod-log-ingestion-overview.png differ diff --git a/en/images/design-diagram/eks-pod-log-stfn-flow.png b/en/images/design-diagram/eks-pod-log-stfn-flow.png new file mode 100644 index 00000000..cb034bc2 Binary files /dev/null and b/en/images/design-diagram/eks-pod-log-stfn-flow.png differ diff --git a/en/images/design-diagram/get-domain.png b/en/images/design-diagram/get-domain.png new file mode 100644 index 00000000..9cce5638 Binary files /dev/null and b/en/images/design-diagram/get-domain.png differ diff --git a/en/images/design-diagram/import-domain.png b/en/images/design-diagram/import-domain.png new file mode 100644 index 00000000..0eea9df2 Binary files /dev/null and b/en/images/design-diagram/import-domain.png differ diff --git a/en/images/design-diagram/import-eks-cluster.png b/en/images/design-diagram/import-eks-cluster.png new file mode 100644 index 00000000..66708b1a Binary files /dev/null and b/en/images/design-diagram/import-eks-cluster.png differ diff --git a/en/images/design-diagram/install-log-agent.png b/en/images/design-diagram/install-log-agent.png new file mode 100644 index 00000000..ef2ed00a Binary files /dev/null and b/en/images/design-diagram/install-log-agent.png differ diff --git a/en/images/design-diagram/list-domain.png b/en/images/design-diagram/list-domain.png new file mode 100644 index 00000000..b4afb1a7 Binary files /dev/null and b/en/images/design-diagram/list-domain.png differ diff --git a/en/images/design-diagram/proxy-process.png b/en/images/design-diagram/proxy-process.png new file mode 100644 index 00000000..ccb87fb7 Binary files /dev/null and b/en/images/design-diagram/proxy-process.png differ diff --git a/en/images/design-diagram/request-create-eks-pod-log-ingestion.png b/en/images/design-diagram/request-create-eks-pod-log-ingestion.png new file mode 100644 index 00000000..1b925311 Binary files /dev/null and b/en/images/design-diagram/request-create-eks-pod-log-ingestion.png differ diff --git a/en/images/design-diagram/service-pipeline-process.png b/en/images/design-diagram/service-pipeline-process.png new file mode 100644 index 00000000..3ea3fa6d Binary files /dev/null and b/en/images/design-diagram/service-pipeline-process.png differ diff --git a/en/images/domain/add-sg-rules.png b/en/images/domain/add-sg-rules.png new file mode 100644 index 00000000..c7064b97 Binary files /dev/null and b/en/images/domain/add-sg-rules.png differ diff --git a/en/images/domain/cloudwatch-alarm-link-en.png b/en/images/domain/cloudwatch-alarm-link-en.png new file mode 100644 index 00000000..3eae112b Binary files /dev/null and b/en/images/domain/cloudwatch-alarm-link-en.png differ diff --git a/en/images/domain/cloudwatch-alarm-link-zh.png b/en/images/domain/cloudwatch-alarm-link-zh.png 
new file mode 100644 index 00000000..31255fe8 Binary files /dev/null and b/en/images/domain/cloudwatch-alarm-link-zh.png differ diff --git a/en/images/domain/domain-vpc-peering.png b/en/images/domain/domain-vpc-peering.png new file mode 100644 index 00000000..d6e71e3b Binary files /dev/null and b/en/images/domain/domain-vpc-peering.png differ diff --git a/en/images/domain/domain-vpc-peering.svg b/en/images/domain/domain-vpc-peering.svg new file mode 100644 index 00000000..4ecd3d56 --- /dev/null +++ b/en/images/domain/domain-vpc-peering.svg @@ -0,0 +1,341 @@ + + + + + + + + + + + + + + + + + + + + +
+ [SVG text labels (domain-vpc-peering.svg): Centralized Logging with OpenSearch VPC (10.255.0.0/16); Private subnet; AWS Lambda (Log Processor); Route Table: 10.0.0.0/16 -> peering connection; Security Group: ALLOW TCP/443 from 10.255.0.0/16; OpenSearch VPC (10.0.0.0/16); Private subnet; Amazon OpenSearch Service (using VPC); Route Table: 10.255.0.0/16 -> peering connection; Peering Connection]
\ No newline at end of file diff --git a/en/images/domain/policy.png b/en/images/domain/policy.png new file mode 100644 index 00000000..9df81a56 Binary files /dev/null and b/en/images/domain/policy.png differ diff --git a/en/images/domain/proxy.png b/en/images/domain/proxy.png new file mode 100644 index 00000000..a4c8d454 Binary files /dev/null and b/en/images/domain/proxy.png differ diff --git a/en/images/faq/assume-role-latency.png b/en/images/faq/assume-role-latency.png new file mode 100644 index 00000000..bde59798 Binary files /dev/null and b/en/images/faq/assume-role-latency.png differ diff --git a/en/images/faq/cloudformation-stuck.png b/en/images/faq/cloudformation-stuck.png new file mode 100644 index 00000000..bdac425b Binary files /dev/null and b/en/images/faq/cloudformation-stuck.png differ diff --git a/en/images/lambda-dahsboard.png b/en/images/lambda-dahsboard.png new file mode 100644 index 00000000..22389490 Binary files /dev/null and b/en/images/lambda-dahsboard.png differ diff --git a/en/images/launch-stack.png b/en/images/launch-stack.png new file mode 100644 index 00000000..2745adf4 Binary files /dev/null and b/en/images/launch-stack.png differ diff --git a/en/images/log analytics pipeline - application - log group concept.png b/en/images/log analytics pipeline - application - log group concept.png new file mode 100644 index 00000000..58b8d015 Binary files /dev/null and b/en/images/log analytics pipeline - application - log group concept.png differ diff --git a/en/images/log analytics pipeline - application.png b/en/images/log analytics pipeline - application.png new file mode 100644 index 00000000..ce9d1377 Binary files /dev/null and b/en/images/log analytics pipeline - application.png differ diff --git a/en/images/product-high-level-arch.png b/en/images/product-high-level-arch.png new file mode 100644 index 00000000..d5abbbdd Binary files /dev/null and b/en/images/product-high-level-arch.png differ diff --git a/en/images/s3-access-log.png b/en/images/s3-access-log.png new file mode 100644 index 00000000..fbfe06ae Binary files /dev/null and b/en/images/s3-access-log.png differ diff --git a/en/images/web-console-url.png b/en/images/web-console-url.png new file mode 100644 index 00000000..49ec1aea Binary files /dev/null and b/en/images/web-console-url.png differ diff --git a/en/images/workshop/500.png b/en/images/workshop/500.png new file mode 100644 index 00000000..d76de028 Binary files /dev/null and b/en/images/workshop/500.png differ diff --git a/en/images/workshop/Log-processing-3.png b/en/images/workshop/Log-processing-3.png new file mode 100644 index 00000000..ec899a15 Binary files /dev/null and b/en/images/workshop/Log-processing-3.png differ diff --git a/en/images/workshop/app-log-ingest-setting.png b/en/images/workshop/app-log-ingest-setting.png new file mode 100644 index 00000000..7c9c4780 Binary files /dev/null and b/en/images/workshop/app-log-ingest-setting.png differ diff --git a/en/images/workshop/app-pipe.png b/en/images/workshop/app-pipe.png new file mode 100644 index 00000000..196e34a2 Binary files /dev/null and b/en/images/workshop/app-pipe.png differ diff --git a/en/images/workshop/c9instancerole.png b/en/images/workshop/c9instancerole.png new file mode 100644 index 00000000..c85b9992 Binary files /dev/null and b/en/images/workshop/c9instancerole.png differ diff --git a/en/images/workshop/certification-success.png b/en/images/workshop/certification-success.png new file mode 100644 index 00000000..2434b70a Binary files /dev/null and 
b/en/images/workshop/certification-success.png differ diff --git a/en/images/workshop/chrome-warning.png b/en/images/workshop/chrome-warning.png new file mode 100644 index 00000000..54b46958 Binary files /dev/null and b/en/images/workshop/chrome-warning.png differ diff --git a/en/images/workshop/cloud9-1.png b/en/images/workshop/cloud9-1.png new file mode 100644 index 00000000..0229c569 Binary files /dev/null and b/en/images/workshop/cloud9-1.png differ diff --git a/en/images/workshop/cloud9-2.png b/en/images/workshop/cloud9-2.png new file mode 100644 index 00000000..f7ee4e59 Binary files /dev/null and b/en/images/workshop/cloud9-2.png differ diff --git a/en/images/workshop/cloud9-3.png b/en/images/workshop/cloud9-3.png new file mode 100644 index 00000000..c1c99399 Binary files /dev/null and b/en/images/workshop/cloud9-3.png differ diff --git a/en/images/workshop/cloud9-eks-1.png b/en/images/workshop/cloud9-eks-1.png new file mode 100644 index 00000000..fc9650cb Binary files /dev/null and b/en/images/workshop/cloud9-eks-1.png differ diff --git a/en/images/workshop/cloud9-role.png b/en/images/workshop/cloud9-role.png new file mode 100644 index 00000000..611033a5 Binary files /dev/null and b/en/images/workshop/cloud9-role.png differ diff --git a/en/images/workshop/cloudfront-arch-2.png b/en/images/workshop/cloudfront-arch-2.png new file mode 100644 index 00000000..7711a419 Binary files /dev/null and b/en/images/workshop/cloudfront-arch-2.png differ diff --git a/en/images/workshop/cloudfront-creating.png b/en/images/workshop/cloudfront-creating.png new file mode 100644 index 00000000..5cbb0a48 Binary files /dev/null and b/en/images/workshop/cloudfront-creating.png differ diff --git a/en/images/workshop/cloudfront-dashboard.png b/en/images/workshop/cloudfront-dashboard.png new file mode 100644 index 00000000..21a876a3 Binary files /dev/null and b/en/images/workshop/cloudfront-dashboard.png differ diff --git a/en/images/workshop/copy-policy.png b/en/images/workshop/copy-policy.png new file mode 100644 index 00000000..c8c8bad0 Binary files /dev/null and b/en/images/workshop/copy-policy.png differ diff --git a/en/images/workshop/create-application-log.png b/en/images/workshop/create-application-log.png new file mode 100644 index 00000000..0f4bd41b Binary files /dev/null and b/en/images/workshop/create-application-log.png differ diff --git a/en/images/workshop/create-index-pattern.png b/en/images/workshop/create-index-pattern.png new file mode 100644 index 00000000..331b8645 Binary files /dev/null and b/en/images/workshop/create-index-pattern.png differ diff --git a/en/images/workshop/create-ingestion.png b/en/images/workshop/create-ingestion.png new file mode 100644 index 00000000..49ceb5c9 Binary files /dev/null and b/en/images/workshop/create-ingestion.png differ diff --git a/en/images/workshop/create-policy.png b/en/images/workshop/create-policy.png new file mode 100644 index 00000000..949c58d5 Binary files /dev/null and b/en/images/workshop/create-policy.png differ diff --git a/en/images/workshop/create-service-log.png b/en/images/workshop/create-service-log.png new file mode 100644 index 00000000..4e57f0c0 Binary files /dev/null and b/en/images/workshop/create-service-log.png differ diff --git a/en/images/workshop/dashboard-global.png b/en/images/workshop/dashboard-global.png new file mode 100644 index 00000000..d9e5e9b0 Binary files /dev/null and b/en/images/workshop/dashboard-global.png differ diff --git a/en/images/workshop/define-index-pattern.png 
b/en/images/workshop/define-index-pattern.png new file mode 100644 index 00000000..72299697 Binary files /dev/null and b/en/images/workshop/define-index-pattern.png differ diff --git a/en/images/workshop/discover.png b/en/images/workshop/discover.png new file mode 100644 index 00000000..98229454 Binary files /dev/null and b/en/images/workshop/discover.png differ diff --git a/en/images/workshop/edit-attribute.png b/en/images/workshop/edit-attribute.png new file mode 100644 index 00000000..26e23c91 Binary files /dev/null and b/en/images/workshop/edit-attribute.png differ diff --git a/en/images/workshop/editing.png b/en/images/workshop/editing.png new file mode 100644 index 00000000..ab204014 Binary files /dev/null and b/en/images/workshop/editing.png differ diff --git a/en/images/workshop/eks-fluent-bit-1.png b/en/images/workshop/eks-fluent-bit-1.png new file mode 100644 index 00000000..b5f79311 Binary files /dev/null and b/en/images/workshop/eks-fluent-bit-1.png differ diff --git a/en/images/workshop/eks-fluent-bit-2.png b/en/images/workshop/eks-fluent-bit-2.png new file mode 100644 index 00000000..506c4272 Binary files /dev/null and b/en/images/workshop/eks-fluent-bit-2.png differ diff --git a/en/images/workshop/eks-generatelog-1.png b/en/images/workshop/eks-generatelog-1.png new file mode 100644 index 00000000..34f4a630 Binary files /dev/null and b/en/images/workshop/eks-generatelog-1.png differ diff --git a/en/images/workshop/eks-ingestion-1.png b/en/images/workshop/eks-ingestion-1.png new file mode 100644 index 00000000..448ccc2f Binary files /dev/null and b/en/images/workshop/eks-ingestion-1.png differ diff --git a/en/images/workshop/eks-ingestion-2.png b/en/images/workshop/eks-ingestion-2.png new file mode 100644 index 00000000..d5b3a712 Binary files /dev/null and b/en/images/workshop/eks-ingestion-2.png differ diff --git a/en/images/workshop/eks-log-import.png b/en/images/workshop/eks-log-import.png new file mode 100644 index 00000000..ddc3ea20 Binary files /dev/null and b/en/images/workshop/eks-log-import.png differ diff --git a/en/images/workshop/eks-network-sg.png b/en/images/workshop/eks-network-sg.png new file mode 100644 index 00000000..8012eda8 Binary files /dev/null and b/en/images/workshop/eks-network-sg.png differ diff --git a/en/images/workshop/eks-network-sg2.png b/en/images/workshop/eks-network-sg2.png new file mode 100644 index 00000000..e2914019 Binary files /dev/null and b/en/images/workshop/eks-network-sg2.png differ diff --git a/en/images/workshop/eks-network-sg3.png b/en/images/workshop/eks-network-sg3.png new file mode 100644 index 00000000..1d93b38f Binary files /dev/null and b/en/images/workshop/eks-network-sg3.png differ diff --git a/en/images/workshop/eks-network.png b/en/images/workshop/eks-network.png new file mode 100644 index 00000000..d1925aeb Binary files /dev/null and b/en/images/workshop/eks-network.png differ diff --git a/en/images/workshop/eks-nginx-dashboard.png b/en/images/workshop/eks-nginx-dashboard.png new file mode 100644 index 00000000..bdc9f279 Binary files /dev/null and b/en/images/workshop/eks-nginx-dashboard.png differ diff --git a/en/images/workshop/eks-nginx-log-1.png b/en/images/workshop/eks-nginx-log-1.png new file mode 100644 index 00000000..cfc7083b Binary files /dev/null and b/en/images/workshop/eks-nginx-log-1.png differ diff --git a/en/images/workshop/elb-dashboard.png b/en/images/workshop/elb-dashboard.png new file mode 100644 index 00000000..6a26612c Binary files /dev/null and b/en/images/workshop/elb-dashboard.png differ diff 
--git a/en/images/workshop/elb-parameters.png b/en/images/workshop/elb-parameters.png new file mode 100644 index 00000000..3998bfe7 Binary files /dev/null and b/en/images/workshop/elb-parameters.png differ diff --git a/en/images/workshop/find-lb.png b/en/images/workshop/find-lb.png new file mode 100644 index 00000000..bcae8096 Binary files /dev/null and b/en/images/workshop/find-lb.png differ diff --git a/en/images/workshop/fire-fox-2.png b/en/images/workshop/fire-fox-2.png new file mode 100644 index 00000000..10c98cdb Binary files /dev/null and b/en/images/workshop/fire-fox-2.png differ diff --git a/en/images/workshop/generate-logs.png b/en/images/workshop/generate-logs.png new file mode 100644 index 00000000..f091f0f6 Binary files /dev/null and b/en/images/workshop/generate-logs.png differ diff --git a/en/images/workshop/generate-slow-query-log.png b/en/images/workshop/generate-slow-query-log.png new file mode 100644 index 00000000..4635ed2b Binary files /dev/null and b/en/images/workshop/generate-slow-query-log.png differ diff --git a/en/images/workshop/generated-logs.png b/en/images/workshop/generated-logs.png new file mode 100644 index 00000000..d3055bd4 Binary files /dev/null and b/en/images/workshop/generated-logs.png differ diff --git a/en/images/workshop/import-domain-success.png b/en/images/workshop/import-domain-success.png new file mode 100644 index 00000000..5d2e105d Binary files /dev/null and b/en/images/workshop/import-domain-success.png differ diff --git a/en/images/workshop/import-domain.png b/en/images/workshop/import-domain.png new file mode 100644 index 00000000..7c6e5d43 Binary files /dev/null and b/en/images/workshop/import-domain.png differ diff --git a/en/images/workshop/index-patterns.png b/en/images/workshop/index-patterns.png new file mode 100644 index 00000000..b3927b6b Binary files /dev/null and b/en/images/workshop/index-patterns.png differ diff --git a/en/images/workshop/instance-group-install.png b/en/images/workshop/instance-group-install.png new file mode 100644 index 00000000..ad310dcf Binary files /dev/null and b/en/images/workshop/instance-group-install.png differ diff --git a/en/images/workshop/instance-group-installed.png b/en/images/workshop/instance-group-installed.png new file mode 100644 index 00000000..085b9b00 Binary files /dev/null and b/en/images/workshop/instance-group-installed.png differ diff --git a/en/images/workshop/loghub-portal-2.png b/en/images/workshop/loghub-portal-2.png new file mode 100644 index 00000000..e407195f Binary files /dev/null and b/en/images/workshop/loghub-portal-2.png differ diff --git a/en/images/workshop/loghub-success.png b/en/images/workshop/loghub-success.png new file mode 100644 index 00000000..43f5c45e Binary files /dev/null and b/en/images/workshop/loghub-success.png differ diff --git a/en/images/workshop/moto-detail.png b/en/images/workshop/moto-detail.png new file mode 100644 index 00000000..4f48ac79 Binary files /dev/null and b/en/images/workshop/moto-detail.png differ diff --git a/en/images/workshop/multi-line.png b/en/images/workshop/multi-line.png new file mode 100644 index 00000000..7cd7908a Binary files /dev/null and b/en/images/workshop/multi-line.png differ diff --git a/en/images/workshop/on-my-own.png b/en/images/workshop/on-my-own.png new file mode 100644 index 00000000..a0ee5324 Binary files /dev/null and b/en/images/workshop/on-my-own.png differ diff --git a/en/images/workshop/parse-log-2.png b/en/images/workshop/parse-log-2.png new file mode 100644 index 00000000..b2d1b8b7 Binary files /dev/null 
and b/en/images/workshop/parse-log-2.png differ diff --git a/en/images/workshop/policy-edit.png b/en/images/workshop/policy-edit.png new file mode 100644 index 00000000..6b519b88 Binary files /dev/null and b/en/images/workshop/policy-edit.png differ diff --git a/en/images/workshop/portal-signin.png b/en/images/workshop/portal-signin.png new file mode 100644 index 00000000..1ebba201 Binary files /dev/null and b/en/images/workshop/portal-signin.png differ diff --git a/en/images/workshop/proxy-create.png b/en/images/workshop/proxy-create.png new file mode 100644 index 00000000..2f04b00e Binary files /dev/null and b/en/images/workshop/proxy-create.png differ diff --git a/en/images/workshop/proxy-creating.png b/en/images/workshop/proxy-creating.png new file mode 100644 index 00000000..271472d5 Binary files /dev/null and b/en/images/workshop/proxy-creating.png differ diff --git a/en/images/workshop/proxy-enable.png b/en/images/workshop/proxy-enable.png new file mode 100644 index 00000000..f76ec2df Binary files /dev/null and b/en/images/workshop/proxy-enable.png differ diff --git a/en/images/workshop/proxy-link.png b/en/images/workshop/proxy-link.png new file mode 100644 index 00000000..880b4c57 Binary files /dev/null and b/en/images/workshop/proxy-link.png differ diff --git a/en/images/workshop/rds-arch-2.png b/en/images/workshop/rds-arch-2.png new file mode 100644 index 00000000..1b77f493 Binary files /dev/null and b/en/images/workshop/rds-arch-2.png differ diff --git a/en/images/workshop/rds-arch.png b/en/images/workshop/rds-arch.png new file mode 100644 index 00000000..a97201fc Binary files /dev/null and b/en/images/workshop/rds-arch.png differ diff --git a/en/images/workshop/rds-dashboard.png b/en/images/workshop/rds-dashboard.png new file mode 100644 index 00000000..1ee6e323 Binary files /dev/null and b/en/images/workshop/rds-dashboard.png differ diff --git a/en/images/workshop/rds-specify-settings.png b/en/images/workshop/rds-specify-settings.png new file mode 100644 index 00000000..2deda29c Binary files /dev/null and b/en/images/workshop/rds-specify-settings.png differ diff --git a/en/images/workshop/select-cloudfront.png b/en/images/workshop/select-cloudfront.png new file mode 100644 index 00000000..2d572c9d Binary files /dev/null and b/en/images/workshop/select-cloudfront.png differ diff --git a/en/images/workshop/select-time-field.png b/en/images/workshop/select-time-field.png new file mode 100644 index 00000000..11071782 Binary files /dev/null and b/en/images/workshop/select-time-field.png differ diff --git a/en/images/workshop/stack-management.png b/en/images/workshop/stack-management.png new file mode 100644 index 00000000..ecc42181 Binary files /dev/null and b/en/images/workshop/stack-management.png differ diff --git a/en/images/workshop/tenant.png b/en/images/workshop/tenant.png new file mode 100644 index 00000000..25193ba7 Binary files /dev/null and b/en/images/workshop/tenant.png differ diff --git a/en/images/workshop/view-dashboard.png b/en/images/workshop/view-dashboard.png new file mode 100644 index 00000000..5481537a Binary files /dev/null and b/en/images/workshop/view-dashboard.png differ diff --git a/en/images/workshop/web-console.png b/en/images/workshop/web-console.png new file mode 100644 index 00000000..102dca19 Binary files /dev/null and b/en/images/workshop/web-console.png differ diff --git a/en/images/workshop/workshop-demo.png b/en/images/workshop/workshop-demo.png new file mode 100644 index 00000000..699804e8 Binary files /dev/null and 
b/en/images/workshop/workshop-demo.png differ diff --git a/en/images/workshop/workshop-web-reboot.png b/en/images/workshop/workshop-web-reboot.png new file mode 100644 index 00000000..ded29018 Binary files /dev/null and b/en/images/workshop/workshop-web-reboot.png differ diff --git a/en/images/workshop/workshop-web.png b/en/images/workshop/workshop-web.png new file mode 100644 index 00000000..48afb8d1 Binary files /dev/null and b/en/images/workshop/workshop-web.png differ diff --git a/en/implementation-guide/architecture/index.html b/en/implementation-guide/architecture/index.html index ff8d5fdc..b9b4474b 100644 --- a/en/implementation-guide/architecture/index.html +++ b/en/implementation-guide/architecture/index.html @@ -1638,7 +1638,7 @@

Architecture diagram

Deploying this solution with the default parameters builds the following environment in the AWS Cloud.

-

arch +

arch Centralized Logging with OpenSearch architecture

This solution deploys the AWS CloudFormation template in your AWS Cloud account and completes the following settings.

    @@ -1692,13 +1692,13 @@

    Logs through Amazon S3

  1. Logs to Amazon S3 directly (OpenSearch as log processor)

    In this scenario, the service directly sends logs to Amazon S3.

    -

    arch-service-pipeline-s3 +

    arch-service-pipeline-s3 Amazon S3 based service log pipeline architecture

  2. Logs to Amazon S3 via Kinesis Data Firehose (OpenSearch as log processor)

    In this scenario, the service cannot directly put its logs to Amazon S3. Instead, the logs are sent to Amazon CloudWatch, and Kinesis Data Firehose (KDF) subscribes to the CloudWatch log group and then delivers the logs into Amazon S3 (see the subscription sketch after the diagram).

    -

    arch-service-pipeline-kdf-to-s3 +

    arch-service-pipeline-kdf-to-s3 Amazon S3 (via KDF) based service log pipeline architecture

  3. @@ -1728,7 +1728,7 @@

    Logs through Amazon S3

  4. Logs to Amazon S3 directly (Light Engine as log processor)

    In this scenario, the service directly sends logs to Amazon S3.

    -

    arch-service-pipeline-s3-lightengine +

    arch-service-pipeline-s3-lightengine Amazon S3 based service log pipeline architecture

  5. @@ -1749,13 +1749,13 @@

    Logs through Amazon Kinesis Data Streams
  6. Logs to KDS directly

    In this scenario, the service directly streams logs to Amazon Kinesis Data Streams (KDS).

    -

    arch-service-pipeline-kds +

    arch-service-pipeline-kds Amazon KDS based service log pipeline architecture

  7. Logs to KDS via subscription

    In this scenario, the service delivers the logs to a CloudWatch log group, and CloudWatch Logs then streams them in real time to KDS as the subscription destination (see the decoding sketch after the diagram).

    -

    arch-service-pipeline-cwl-to-kds +

    arch-service-pipeline-cwl-to-kds Amazon KDS (via subscription) based service log pipeline architecture

  8. @@ -1789,30 +1789,51 @@

    Application log analytics pipeline

    Logs from Amazon EC2 / Amazon EKS

    The log pipeline runs the following workflow:

    1. Fluent Bit works as the underlying log agent to collect logs from application servers and send them to an optional Log Buffer, or ingest them into the OpenSearch domain directly.
    2. -
    3. An event notification is sent to Amazon SQS using S3 Event Notifications when a new log file is created.
    4. -
    5. Amazon SQS initiates AWS Lambda.
    6. -
    7. AWS Lambda get objects from the Amazon S3 log bucket.
    8. -
    9. AWS Lambda put objects to the staging bucket.
    10. -
    11. The Log Processor, AWS Step Functions, processes raw log files stored in the staging bucket in batches.
    12. -
    13. The Log Processor, AWS Step Functions, converts log data into Apache Parquet format and automatically partitions all incoming data based on criteria including time and region.
    14. +
    15. +

      The Log Buffer triggers the Lambda (Log Processor) to run.

      +
    16. +
    17. +

      The log processor reads and processes the log records and ingests the logs into the OpenSearch domain.

      +
    18. +
    19. +

      Logs that fail to be processed are exported to an Amazon S3 bucket (Backup Bucket).

      +


    -

    arch-app-log-pipeline-lighengine +

    arch-app-log-pipeline-lighengine Application log pipeline architecture for EC2/EKS

    The log pipeline runs the following workflow:

      -
    1. Fluent Bit works as the underlying log agent to collect logs from application servers and send them to an optional Log Buffer.
    2. -
    3. The Log Buffer triggers the Lambda to copy objects from log bucket to staging bucket.
    4. -
    5. Log Processor, AWS Step Functions, processes raw log files stored in the staging bucket in batches, converts them to Apache Parquet, and automatically partitions all incoming data by criteria including time and region.
    6. +
    7. +

      Fluent Bit works as the underlying log agent to collect logs from application servers and send them to an optional Log Buffer, or ingest them into the OpenSearch domain directly.

      +
    8. +
    9. +

      An event notification is sent to Amazon SQS using S3 Event Notifications when a new log file is created.

      +
    10. +
    11. +

      Amazon SQS initiates AWS Lambda.

      +
    12. +
    13. +

      AWS Lambda gets objects from the Amazon S3 log bucket.

      +
    14. +
    15. +

      AWS Lambda puts objects to the staging bucket.

      +
    16. +
    17. +

      The Log Processor, AWS Step Functions, processes raw log files stored in the staging bucket in batches.

      +
    18. +
    19. +

      The Log Processor, AWS Step Functions, converts log data into Apache Parquet format and automatically partitions all incoming data based on criteria including time and region.
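As an illustration only (the actual partition keys and prefixes are defined by the solution), Hive-style partitioning on time and region produces object keys along these lines:

    # Hypothetical layout of the partitioned Parquet data:
    s3://my-staging-bucket/datalake/app_log/year=2023/month=01/day=01/region=us-east-1/part-00000.parquet
    s3://my-staging-bucket/datalake/app_log/year=2023/month=01/day=01/region=eu-west-1/part-00001.parquet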

      +

    Logs from Syslog Client

    @@ -1822,7 +1843,7 @@

    Logs from Syslog Client

  9. The NLB together with the ECS containers in the architecture diagram will be provisioned only when you create a Syslog ingestion, and will be automatically deleted when there is no Syslog ingestion.
-

arch-syslog-pipeline +

arch-syslog-pipeline Application log pipeline architecture for Syslog
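For reference, pointing an Rsyslog client at the NLB is a one-line change. A minimal sketch, assuming a hypothetical NLB DNS name and TCP port 514 (use the protocol and port you configured for the ingestion):

    # /etc/rsyslog.d/50-centralized-logging.conf
    # "@@" forwards over TCP; a single "@" would forward over UDP.
    *.* @@syslog-nlb-1234567890.elb.us-east-1.amazonaws.com:514

    sudo systemctl restart rsyslog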

  1. diff --git a/en/implementation-guide/aws-services/cloudfront/index.html b/en/implementation-guide/aws-services/cloudfront/index.html index 0cad982b..6740f2b5 100644 --- a/en/implementation-guide/aws-services/cloudfront/index.html +++ b/en/implementation-guide/aws-services/cloudfront/index.html @@ -1743,12 +1743,12 @@

    Using the CloudFormation Stack

    AWS Regions -Launch Stack +Launch Stack Template AWS China Regions -Launch Stack +Launch Stack Template @@ -2052,7 +2052,7 @@

    Sample dashboard

    You can click the image below to view the high-resolution sample dashboard.

    -

    cloudfront-db

    +

    cloudfront-db

    Create log ingestion (Light Engine for log analytics)

    Using the Console

      @@ -2092,12 +2092,12 @@

      Using the CloudFormation Stack

      AWS Regions -Launch Stack +Launch Stack Template AWS China Regions -Launch Stack +Launch Stack Template diff --git a/en/implementation-guide/aws-services/cloudtrail/index.html b/en/implementation-guide/aws-services/cloudtrail/index.html index a6bfb23b..d2dbf3d6 100644 --- a/en/implementation-guide/aws-services/cloudtrail/index.html +++ b/en/implementation-guide/aws-services/cloudtrail/index.html @@ -1673,12 +1673,12 @@

      Using the standalone CloudFormation stack AWS Regions -Launch Stack +Launch Stack Template AWS China Regions -Launch Stack +Launch Stack Template @@ -1957,7 +1957,7 @@

      Sample dashboard

      You can click the image below to view the high-resolution sample dashboard.

      -

      cloudtrail-db

      +

      cloudtrail-db

      diff --git a/en/implementation-guide/aws-services/config/index.html b/en/implementation-guide/aws-services/config/index.html index 29fe54f6..0f16053e 100644 --- a/en/implementation-guide/aws-services/config/index.html +++ b/en/implementation-guide/aws-services/config/index.html @@ -1674,12 +1674,12 @@

      Using the standalone CloudFormation stack AWS Standard Regions -Launch Stack +Launch Stack Template AWS China Regions -Launch Stack +Launch Stack Template @@ -1958,7 +1958,7 @@

      Sample Dashboard

      You can click the image below to view the high-resolution sample dashboard.

      -

      config-db

      +

      config-db

      diff --git a/en/implementation-guide/aws-services/elb/index.html b/en/implementation-guide/aws-services/elb/index.html index 43bfd6f8..74863721 100644 --- a/en/implementation-guide/aws-services/elb/index.html +++ b/en/implementation-guide/aws-services/elb/index.html @@ -1746,12 +1746,12 @@

      Using the CloudFormation Stack

      AWS Regions -Launch Stack +Launch Stack Template AWS China Regions -Launch Stack +Launch Stack Template @@ -2103,12 +2103,12 @@

      Using the CloudFormation Stack

      AWS Regions -Launch Stack +Launch Stack Template AWS China Regions -Launch Stack +Launch Stack Template diff --git a/en/implementation-guide/aws-services/lambda/index.html b/en/implementation-guide/aws-services/lambda/index.html index ded9564f..281ee57b 100644 --- a/en/implementation-guide/aws-services/lambda/index.html +++ b/en/implementation-guide/aws-services/lambda/index.html @@ -1668,12 +1668,12 @@

      Using the CloudFormation Stack

      AWS Regions -Launch Stack +Launch Stack Template AWS China Regions -Launch Stack +Launch Stack Template @@ -1887,7 +1887,7 @@

      Sample Dashboard

      You can click the image below to view the high-resolution sample dashboard.

      -

      lambda-db

      +

      lambda-db

      diff --git a/en/implementation-guide/aws-services/rds/index.html b/en/implementation-guide/aws-services/rds/index.html index 53b9b75e..46212443 100644 --- a/en/implementation-guide/aws-services/rds/index.html +++ b/en/implementation-guide/aws-services/rds/index.html @@ -1717,12 +1717,12 @@

      Using the CloudFormation Stack

      AWS Regions -Launch Stack +Launch Stack Template AWS China Regions -Launch Stack +Launch Stack Template @@ -2016,7 +2016,7 @@

      Sample Dashboard

      You can click the image below to view the high-resolution sample dashboard.

      -

      rds-db

      +

      rds-db

      diff --git a/en/implementation-guide/aws-services/s3/index.html b/en/implementation-guide/aws-services/s3/index.html index 71499ec7..5aa6edf0 100644 --- a/en/implementation-guide/aws-services/s3/index.html +++ b/en/implementation-guide/aws-services/s3/index.html @@ -1673,12 +1673,12 @@

      Using the standalone CloudFormation stack AWS Regions -Launch Stack +Launch Stack Template AWS China Regions -Launch Stack +Launch Stack Template @@ -1952,7 +1952,7 @@

      Sample Dashboard

      You can click the image below to view the high-resolution sample dashboard.

      -

      s3-db

      +

      s3-db

      diff --git a/en/implementation-guide/aws-services/vpc/index.html b/en/implementation-guide/aws-services/vpc/index.html index da88855b..0895d303 100644 --- a/en/implementation-guide/aws-services/vpc/index.html +++ b/en/implementation-guide/aws-services/vpc/index.html @@ -1678,12 +1678,12 @@

      Using the standalone CloudFormation stack AWS Standard Regions -Launch Stack +Launch Stack Template AWS China Regions -Launch Stack +Launch Stack Template @@ -2002,7 +2002,7 @@

      Sample Dashboard

      You can click the image below to view the high-resolution sample dashboard.

      -

      vpcflow-db

      +

      vpcflow-db

      diff --git a/en/implementation-guide/aws-services/waf/index.html b/en/implementation-guide/aws-services/waf/index.html index 2cd1cd9b..a1969ef6 100644 --- a/en/implementation-guide/aws-services/waf/index.html +++ b/en/implementation-guide/aws-services/waf/index.html @@ -1757,22 +1757,22 @@

      Using the CloudFormation Stack

      AWS Regions (Full Request) -Launch Stack +Launch Stack Template AWS China Regions (Full Request) -Launch Stack +Launch Stack Template AWS Regions (Sampled Request) -Launch Stack +Launch Stack Template AWS China Regions (Sampled Request) -Launch Stack +Launch Stack Template @@ -2084,7 +2084,7 @@

      Sample Dashboard

      You can click the image below to view the high-resolution sample dashboard.

      -

      waf-db

      +

      waf-db

      Create log ingestion (Light Engine for log analytics)

      Using the Console

        @@ -2122,12 +2122,12 @@

        Using the CloudFormation Stack

        AWS Region(Full Request) -Launch Stack +Launch Stack Template AWS China Regions (Full Request) -Launch Stack +Launch Stack Template diff --git a/en/implementation-guide/deployment/with-cognito/index.html b/en/implementation-guide/deployment/with-cognito/index.html index 0179a887..d5c2beac 100644 --- a/en/implementation-guide/deployment/with-cognito/index.html +++ b/en/implementation-guide/deployment/with-cognito/index.html @@ -1596,11 +1596,11 @@

        Step 1. Launch the stack

        Launch with a new VPC -Launch Stack +Launch Stack Launch with an existing VPC -Launch Stack +Launch Stack diff --git a/en/implementation-guide/deployment/with-oidc/index.html b/en/implementation-guide/deployment/with-oidc/index.html index cfe20671..240fa43b 100644 --- a/en/implementation-guide/deployment/with-oidc/index.html +++ b/en/implementation-guide/deployment/with-oidc/index.html @@ -1721,10 +1721,10 @@

        (Option 1) Using Cognito User Pool
      1. Set up the hosted UI with the Amazon Cognito console based on this guide.
      2. Choose Public client when selecting the App type.
      3. Enter the Callback URL and Sign out URL using your domain name for the Centralized Logging with OpenSearch console. If your hosted UI is set up, you should be able to see something like the following. -
      4. +
      5. Save the App client ID, User pool ID, and the AWS Region to a file, which will be used later. - -
      6. + +

      In Step 2. Launch the stack, the OidcClientID is the App client ID, and OidcProvider is https://cognito-idp.${REGION}.amazonaws.com/${USER_POOL_ID}.
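To double-check the OidcProvider value, you can query the user pool's OIDC discovery document. The region and user pool ID below are placeholders:

    curl -s https://cognito-idp.us-east-1.amazonaws.com/us-east-1_AbCdEfGhI/.well-known/openid-configuration
    # The "issuer" field in the response should match the OidcProvider value exactly.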

      (Option 2) Authing.cn OIDC client

      @@ -1737,14 +1737,14 @@

      (Option 2) Authing.cn OIDC client

    1. Enter the Application Name and Subdomain.
    2. Save the App ID (that is, client_id) and Issuer to a text file from Endpoint Information, which will be used later. -

      +

    3. Update the Login Callback URL and Logout Callback URL to your ICP recorded domain name.

    4. Set the Authorization Configuration. -

      +

    You have successfully created an Authing self-built application.

    @@ -1761,11 +1761,11 @@

    (Option 3) Keycloak OIDC client

  2. Go to the realm setting page. Choose Endpoints, and then OpenID Endpoint Configuration from the list.

    -

    +

  3. In the JSON file that opens up in your browser, record the issuer value which will be used later.

    -

    +

  4. Go back to the Keycloak console, select Clients in the left navigation bar, and choose Create.

    @@ -1815,7 +1815,7 @@

    (Option 4) ADFS OpenID Connect Client

    Under Windows PowerShell on the ADFS server, run the following command to get the Issuer (issuer) of ADFS, which is similar to https://adfs.domain.com/adfs.

    Get-ADFSProperties | Select IdTokenIssuer
     
    -

    +

Step 2. Launch the stack

@@ -1837,19 +1837,19 @@

Step 2. Launch the stack

Launch with a new VPC in AWS Regions -Launch Stack +Launch Stack Launch with an existing VPC in AWS Regions -Launch Stack +Launch Stack Launch with a new VPC in AWS China Regions -Launch Stack +Launch Stack Launch with an existing VPC in AWS China Regions -Launch Stack +Launch Stack diff --git a/en/implementation-guide/domains/alarms/index.html b/en/implementation-guide/domains/alarms/index.html index 8e1fb50b..58676518 100644 --- a/en/implementation-guide/domains/alarms/index.html +++ b/en/implementation-guide/domains/alarms/index.html @@ -1619,7 +1619,7 @@

Using the CloudFormation stack

  1. Log in to the AWS Management Console and select the button to launch the AWS CloudFormation template.

    -

    Launch Stack

    +

    Launch Stack

    You can also download the template as a starting point for your own implementation.

  2. @@ -1737,7 +1737,7 @@

    Using the CloudFormation stack

    a CREATE_COMPLETE status in approximately 5 minutes.

    Once you have created the alarms, a confirmation email will be sent to your email address. You need to click the Confirm link in the email.

    Go to the CloudWatch Alarms page by choosing the General configuration > Alarms > CloudWatch Alarms link on the Centralized Logging with OpenSearch console, and the link location is shown as follows:

    -

    +

    Make sure that all the alarms are in the OK status, because you might have missed the notification if an alarm changed its status before you confirmed the subscription.
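One quick way to verify this is to list any alarms that are not in the OK state with the AWS CLI, for example:

    aws cloudwatch describe-alarms --state-value ALARM --query 'MetricAlarms[].AlarmName'
    aws cloudwatch describe-alarms --state-value INSUFFICIENT_DATA --query 'MetricAlarms[].AlarmName'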

    Note

    diff --git a/en/implementation-guide/domains/import/index.html b/en/implementation-guide/domains/import/index.html index d37bf2c6..5a6efa9a 100644 --- a/en/implementation-guide/domains/import/index.html +++ b/en/implementation-guide/domains/import/index.html @@ -1650,7 +1650,7 @@

    Prerequisite

  3. Centralized Logging with OpenSearch supports Amazon OpenSearch Service with engine version OpenSearch 1.3 or later.
  4. Centralized Logging with OpenSearch supports OpenSearch clusters within VPC. If you don't have an Amazon OpenSearch Service domain yet, you can create an Amazon OpenSearch Service domain within VPC. See Launching your Amazon OpenSearch Service domains within a VPC.
  5. Centralized Logging with OpenSearch supports OpenSearch clusters with fine-grained access control only. In the security configuration, the Access policy should look like the image below: -
  6. +
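You can verify these prerequisites from the command line before importing the domain; a sketch with a hypothetical domain name:

    aws opensearch describe-domain --domain-name my-domain \
      --query 'DomainStatus.{EngineVersion:EngineVersion,FineGrainedAccessControl:AdvancedSecurityOptions.Enabled,VpcId:VPCOptions.VPCId}'
    # EngineVersion should be OpenSearch_1.3 or later, FineGrainedAccessControl should be true,
    # and VpcId should be present (meaning the domain is within a VPC).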

Import an Amazon OpenSearch Service Domain

    @@ -1671,7 +1671,7 @@

    Set up VPC Peering

    Note

    Automatic mode will create the VPC peering connection and configure the route tables automatically. You do not need to set up VPC peering again.

    -

    +

    Follow this section to create a VPC peering connection, update security groups, and update route tables, roughly as in the CLI sketch below.
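If you prefer the AWS CLI over the console for the manual steps in this section, the flow looks roughly like this; all resource IDs and CIDR ranges are placeholders:

    # Request and accept the peering connection between the two VPCs
    aws ec2 create-vpc-peering-connection --vpc-id vpc-0solution1111 --peer-vpc-id vpc-0opensearch22
    aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-0example3333
    # Route traffic destined for the other VPC's CIDR through the peering connection
    aws ec2 create-route --route-table-id rtb-0example4444 \
      --destination-cidr-block 10.1.0.0/16 --vpc-peering-connection-id pcx-0example3333
    # Allow HTTPS from the solution's VPC CIDR in the OpenSearch domain's security group
    aws ec2 authorize-security-group-ingress --group-id sg-0example5555 \
      --protocol tcp --port 443 --cidr 10.0.0.0/16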

    Create VPC Peering Connection

      diff --git a/en/implementation-guide/domains/proxy/index.html b/en/implementation-guide/domains/proxy/index.html index dfab1851..27014601 100644 --- a/en/implementation-guide/domains/proxy/index.html +++ b/en/implementation-guide/domains/proxy/index.html @@ -1663,7 +1663,7 @@

      Access proxy

    Architecture

    Centralized Logging with OpenSearch creates an Auto Scaling Group (ASG) together with an Application Load Balancer (ALB).

    -

    Proxy Stack Architecture

    +

    Proxy Stack Architecture

    The workflow is as follows:

    1. @@ -1716,7 +1716,7 @@

      Using the CloudFormation stack

      1. Log in to the AWS Management Console and select the button to launch the AWS CloudFormation template.

        -

        Launch Stack

        +

        Launch Stack

        You can also download the template as a starting point for your own implementation.

      2. @@ -1897,7 +1897,7 @@

        Create an associated DNS record

      Access Amazon OpenSearch Service via proxy

      After the DNS record takes effect, you can access the Amazon OpenSearch Service built-in dashboard from anywhere via proxy. You can enter the domain of the proxy in your browser, or click the Link button under Access Proxy in the General Configuration section.

      -

      Access Proxy Link

      +

      Access Proxy Link

      Delete a Proxy

      1. Log in to the Centralized Logging with OpenSearch console.
      2. diff --git a/en/implementation-guide/getting-started/2.create-proxy/index.html b/en/implementation-guide/getting-started/2.create-proxy/index.html index 66939413..50674c8c 100644 --- a/en/implementation-guide/getting-started/2.create-proxy/index.html +++ b/en/implementation-guide/getting-started/2.create-proxy/index.html @@ -1578,7 +1578,7 @@

        Create a Nginx proxy

      3. Enter the Domain Name.
      4. Choose the associated Load Balancer SSL Certificate that applies to the domain name.
      5. Choose the Nginx Instance Key Name. -
      6. +
      7. Choose Create.

      After provisioning the proxy infrastructure, you need to create an associated DNS record in your DNS resolver. The following describes how to find the Application Load Balancer (ALB) domain and then create a CNAME record pointing to this domain.
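If your DNS is hosted in Amazon Route 53, the CNAME record can be created as in the sketch below; the hosted zone ID, record name, and ALB domain are placeholders:

    aws route53 change-resource-record-sets --hosted-zone-id Z0EXAMPLE12345 \
      --change-batch '{
        "Changes": [{
          "Action": "UPSERT",
          "ResourceRecordSet": {
            "Name": "logs-proxy.example.com",
            "Type": "CNAME",
            "TTL": 300,
            "ResourceRecords": [{"Value": "internal-alb-1234567890.us-east-1.elb.amazonaws.com"}]
          }
        }]
      }'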

      diff --git a/en/implementation-guide/trouble-shooting/index.html b/en/implementation-guide/trouble-shooting/index.html index a7d1cb4a..25075c32 100644 --- a/en/implementation-guide/trouble-shooting/index.html +++ b/en/implementation-guide/trouble-shooting/index.html @@ -1722,7 +1722,7 @@

      Error: Unable to add backend role

      Centralized Logging with OpenSearch only supports Amazon OpenSearch Service domains with Fine-grained access control enabled. You need to go to the Amazon OpenSearch Service console and edit the Access policy for the Amazon OpenSearch Service domain.

      Error: User xxx is not authorized to perform sts:AssumeRole on resource

      -

      +

      If you see this error, make sure you have entered the correct information during the cross-account setup, and then wait for several minutes.

      Centralized Logging with OpenSearch uses AssumeRole for cross-account access. This is the best practice to temporarily access AWS resources in your member account. @@ -1744,7 +1744,7 @@
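You can test the cross-account role directly from the central logging account with the AWS CLI; the ARN below is a placeholder:

    aws sts assume-role \
      --role-arn arn:aws:iam::<member-account-id>:role/<cross-account-role-name> \
      --role-session-name cl-cross-account-test
    # If this call fails with the same error, review the role's trust policy in the member account.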

      Error: P

    You can get more information from Amazon EKS IAM role configuration.

    My CloudFormation stack is stuck on deleting an AWS::Lambda::Function resource when I update the stack. How do I resolve it?

    -

    +

    The Lambda function resides in a VPC, and you need to wait for the associated ENI resource to be deleted.

    The agent status is offline after I restart the EC2 instance. How can I make it auto-start on instance restart?

    This usually happens if you have installed the log agent, but restart the instance before you create any Log Ingestion. The log agent will auto restart if there is at least one Log Ingestion. If you have a log ingestion, but the problem still exists, you can use systemctl status fluent-bit diff --git a/en/search/search_index.json b/en/search/search_index.json index f86ebb07..7c2936b3 100644 --- a/en/search/search_index.json +++ b/en/search/search_index.json @@ -1 +1 @@ -{"config":{"indexing":"full","lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"The Centralized Logging with OpenSearch solution provides comprehensive log management and analysis functions to help you simplify the build of log analytics pipelines. Built on top of Amazon OpenSearch Service, the solution allows you to streamline log ingestion, log processing, and log visualization. You can leverage the solution in multiple use cases such as to abide by security and compliance regulations, achieve refined business operations, and enhance IT troubleshooting and maintenance. Use this navigation table to quickly find answers to these questions: If you want to \u2026 Read\u2026 Know the cost for running this solution Cost Understand the security considerations for this solution Security Know which AWS Regions are supported for this solution Supported AWS Regions Get started with the solution quickly to import an Amazon OpenSearch Service domain, build a log analytics pipeline, and access the built-in dashboard Getting started Learn the operations related to Amazon OpenSearch Service domains Domain management Walk through the processes of building log analytics pipelines AWS Services logs and Applications logs This implementation guide describes architectural considerations and configuration steps for deploying the Centralized Logging with OpenSearch solution in the AWS cloud. It includes links to CloudFormation templates that launches and configures the AWS services required to deploy this solution using AWS best practices for security and availability. The guide is intended for IT architects, developers, DevOps, data engineers with practical experience architecting on the AWS Cloud.","title":"Overview"},{"location":"implementation-guide/alarm/","text":"There are different types of log alarms: log processor alarms, buffer layer alarms, and source alarms (only for application log pipeline). The alarms will be triggered when the defined condition is met. Log alarm type Log alarm condition Description Log processor alarms Error invocation # >= 10 for 5 minutes, 1 consecutive time When the number of log processor Lambda error calls is greater than or equal to 10 within 5 minutes (including 5 minutes), an email alarm will be triggered. Log processor alarms Failed record # >= 1 for 1 minute, 1 consecutive time When the number of failed records is greater than or equal to 1 within a 1-minute window, an alarm will be triggered. Log processor alarms Average execution duration in last 5 minutes >= 60000 milliseconds In the last 5 minutes, when the average execution time of log processor Lambda is greater than or equal to 60 seconds, an email alarm will be triggered. Buffer layer alarms SQS Oldest Message Age >= 30 minutes When the age of the oldest SQS message is greater than or equal to 30 minutes, it means that the message has not been consumed for at least 30 minutes, an email alarm will be triggered. 
Source alarms (only for application log pipeline) Fluent Bit output_retried_record_total >= 100 for last 5 minutes When the total number of retry records output by Fluent Bit in the past 5 minutes is greater than or equal to 100, an email alarm will be triggered. You can choose to enable log alarms or disable them according to your needs. Enable log alarms Sign in to the Centralized Logging with OpenSearch console. In the left navigation bar, under Log Analytics Pipelines , choose AWS Service Log or Application Log . Select the log pipeline created and choose View details . Select the Alarm tab. Switch on Alarms if needed and select an exiting SNS topic. If you choose Create a new SNS topic , you need to provide email address for the newly-created SNS topic to notify. Disable log alarms Sign in to the Centralized Logging with OpenSearch console. In the left navigation bar, under Log Analytics Pipelines , choose AWS Service Log or Application Log . Select the log pipeline created and choose View details . Select the Alarm tab. Switch off Alarms .","title":"Log alarms"},{"location":"implementation-guide/alarm/#enable-log-alarms","text":"Sign in to the Centralized Logging with OpenSearch console. In the left navigation bar, under Log Analytics Pipelines , choose AWS Service Log or Application Log . Select the log pipeline created and choose View details . Select the Alarm tab. Switch on Alarms if needed and select an exiting SNS topic. If you choose Create a new SNS topic , you need to provide email address for the newly-created SNS topic to notify.","title":"Enable log alarms"},{"location":"implementation-guide/alarm/#disable-log-alarms","text":"Sign in to the Centralized Logging with OpenSearch console. In the left navigation bar, under Log Analytics Pipelines , choose AWS Service Log or Application Log . Select the log pipeline created and choose View details . Select the Alarm tab. Switch off Alarms .","title":"Disable log alarms"},{"location":"implementation-guide/architecture/","text":"Deploying this solution with the default parameters builds the following environment in the AWS Cloud. Centralized Logging with OpenSearch architecture This solution deploys the AWS CloudFormation template in your AWS Cloud account and completes the following settings. Amazon CloudFront distributes the frontend web UI assets hosted in Amazon S3 bucket. Amazon Cognito user pool or OpenID Connector (OIDC) can be used for authentication. AWS AppSync provides the backend GraphQL APIs. Amazon DynamoDB stores the solution related information as backend database. AWS Lambda interacts with other AWS Services to process core logic of managing log pipelines or log agents, and obtains information updated in DynamoDB tables. AWS Step Functions orchestrates on-demand AWS CloudFormation deployment of a set of predefined stacks for log pipeline management. The log pipeline stacks deploy separate AWS resources and are used to collect and process logs and ingest them into Amazon OpenSearch Service for further analysis and visualization. Service Log Pipeline or Application Log Pipeline are provisioned on demand via Centralized Logging with OpenSearch console. AWS Systems Manager and Amazon EventBridge manage log agents for collecting logs from application servers, such as installing log agents (Fluent Bit) for application servers and monitoring the health status of the agents. Amazon EC2 or Amazon EKS installs Fluent Bit agents, and uploads log data to application log pipeline. 
Application log pipelines read, parse, process application logs and ingest them into Amazon OpenSearch domains or Light Engine. Service log pipelines read, parse, process AWS service logs and ingest them into Amazon OpenSearch domains or Light Engine. After deploying the solution, you can use AWS WAF to protect CloudFront or AppSync. Moreover, you can follow this guide to configure your WAF settings to prevent GraphQL schema introspection. This solution supports two types of log pipelines: Service Log Analytics Pipeline and Application Log Analytics Pipeline . Service log analytics pipeline Centralized Logging with OpenSearch supports log analysis for AWS services, such as Amazon S3 access logs, and Application Load Balancer access logs. For a complete list of supported AWS services, refer to Supported AWS Services . This solution ingests different AWS service logs using different workflows. Note Centralized Logging with OpenSearch supports cross-account log ingestion . If you want to ingest the logs from another AWS account, the resources in the Sources group in the architecture diagram will be in another account. Logs through Amazon S3 This section is applicable to Amazon S3 access logs, CloudFront standard logs, CloudTrail logs (S3), Application Load Balancing access logs, WAF logs, VPC Flow logs (S3), AWS Config logs, Amazon RDS/Aurora logs, and AWS Lambda Logs. The workflow supports two scenarios: Logs to Amazon S3 directly\uff08OpenSearch as log processor\uff09 In this scenario, the service directly sends logs to Amazon S3. Amazon S3 based service log pipeline architecture Logs to Amazon S3 via Kinesis Data Firehose\uff08OpenSearch as log processor\uff09 In this scenario, the service cannot directly put their logs to Amazon S3. The logs are sent to Amazon CloudWatch, and Kinesis Data Firehose ( KDF ) is used to subscribe the logs from CloudWatch Log Group and then put logs into Amazon S3. Amazon S3 (via KDF) based service log pipeline architecture The log pipeline runs the following workflow: AWS services logs are stored in Amazon S3 bucket (Log Bucket). An event notification is sent to Amazon SQS using S3 Event Notifications when a new log file is created. Amazon SQS initiates the Log Processor Lambda to run. The log processor reads and processes the log files. The log processor ingests the logs into the Amazon OpenSearch Service. Logs that fail to be processed are exported to Amazon S3 bucket (Backup Bucket). For cross-account ingestion, the AWS Services store logs in Amazon S3 bucket in the member account, and other resources remain in central logging account. Logs to Amazon S3 directly\uff08Light Engine as log processor\uff09 In this scenario, the service directly sends logs to Amazon S3. Amazon S3 based service log pipeline architecture The log pipeline runs the following workflow: AWS service logs are stored in an Amazon S3 bucket (Log Bucket). An event notification is sent to Amazon SQS using S3 Event Notifications when a new log file is created. Amazon SQS initiates AWS Lambda. AWS Lambda get objects from the Amazon S3 log bucket. AWS Lambda put objects to the staging bucket. The Log Processor, AWS Step Functions, processes raw log files stored in the staging bucket in batches. The Log Processor, AWS Step Functions, converts log data into Apache Parquet format and automatically partitions all incoming data based on criteria including time and region. 
Logs through Amazon Kinesis Data Streams This section is applicable to CloudFront real-time logs, CloudTrail logs (CloudWatch), and VPC Flow logs (CloudWatch). The workflow supports two scenarios: Logs to KDS directly In this scenario, the service directly streams logs to Amazon Kinesis Data Streams ( KDS ). Amazon KDS based service log pipeline architecture Logs to KDS via subscription In this scenario, the service delivers the logs to CloudWatch Log Group, and then CloudWatch Logs stream the logs in real-time to KDS as the subscription destination. Amazon KDS (via subscription) based service log pipeline architecture The log pipeline runs the following workflow: AWS Services logs are streamed to Kinesis Data Stream. KDS initiates the Log Processor Lambda to run. The log processor processes and ingests the logs into the Amazon OpenSearch Service. Logs that fail to be processed are exported to Amazon S3 bucket (Backup Bucket). For cross-account ingestion, the AWS Services store logs on Amazon CloudWatch log group in the member account, and other resources remain in central logging account. Warning This solution does not support cross-account ingestion for CloudFront real-time logs. Application log analytics pipeline Centralized Logging with OpenSearch supports log analysis for application logs, such as Nginx/Apache HTTP Server logs or custom application logs. Note Centralized Logging with OpenSearch supports cross-account log ingestion . If you want to ingest logs from the same account, the resources in the Sources group will be in the same account as your Centralized Logging with OpenSearch account. Otherwise, they will be in another AWS account. Logs from Amazon EC2 / Amazon EKS Logs from Amazon EC2/ Amazon EKS(OpenSearch as log processor) Application log pipeline architecture for EC2/EKS The log pipeline runs the following workflow: Fluent Bit works as the underlying log agent to collect logs from application servers and send them to an optional Log Buffer , or ingest into OpenSearch domain directly. An event notification is sent to Amazon SQS using S3 Event Notifications when a new log file is created. Amazon SQS initiates AWS Lambda. AWS Lambda get objects from the Amazon S3 log bucket. AWS Lambda put objects to the staging bucket. The Log Processor, AWS Step Functions, processes raw log files stored in the staging bucket in batches. The Log Processor, AWS Step Functions, converts log data into Apache Parquet format and automatically partitions all incoming data based on criteria including time and region. Logs from Amazon EC2/ Amazon EKS(Light Engine as log processor) Application log pipeline architecture for EC2/EKS The log pipeline runs the following workflow: Fluent Bit works as the underlying log agent to collect logs from application servers and send them to an optional Log Buffer. The Log Buffer triggers the Lambda to copy objects from log bucket to staging bucket. Log Processor, AWS Step Functions, processes raw log files stored in the staging bucket in batches, converts them to Apache Parquet, and automatically partitions all incoming data by criteria including time and region. Logs from Syslog Client Important Make sure your Syslog generator/sender's subnet is connected to Centralized Logging with OpenSearch' two private subnets. You need to use VPC Peering Connection or Transit Gateway to connect these VPCs. 
The NLB together with the ECS containers in the architecture diagram will be provisioned only when you create a Syslog ingestion and be automated deleted when there is no Syslog ingestion. Application log pipeline architecture for Syslog Syslog client (like Rsyslog ) send logs to a Network Load Balancer (NLB) in Centralized Logging with OpenSearch's private subnets, and NLB routes to the ECS containers running Syslog servers. Fluent Bit works as the underlying log agent in the ECS Service to parse logs, and send them to an optional Log Buffer , or ingest into OpenSearch domain directly. The Log Buffer triggers the Lambda (Log Processor) to run. The log processor reads and processes the log records and ingests the logs into the OpenSearch domain. Logs that fail to be processed are exported to an Amazon S3 bucket (Backup Bucket).","title":"Architecture diagram"},{"location":"implementation-guide/architecture/#service-log-analytics-pipeline","text":"Centralized Logging with OpenSearch supports log analysis for AWS services, such as Amazon S3 access logs, and Application Load Balancer access logs. For a complete list of supported AWS services, refer to Supported AWS Services . This solution ingests different AWS service logs using different workflows. Note Centralized Logging with OpenSearch supports cross-account log ingestion . If you want to ingest the logs from another AWS account, the resources in the Sources group in the architecture diagram will be in another account.","title":"Service log analytics pipeline"},{"location":"implementation-guide/architecture/#logs-through-amazon-s3","text":"This section is applicable to Amazon S3 access logs, CloudFront standard logs, CloudTrail logs (S3), Application Load Balancing access logs, WAF logs, VPC Flow logs (S3), AWS Config logs, Amazon RDS/Aurora logs, and AWS Lambda Logs. The workflow supports two scenarios: Logs to Amazon S3 directly\uff08OpenSearch as log processor\uff09 In this scenario, the service directly sends logs to Amazon S3. Amazon S3 based service log pipeline architecture Logs to Amazon S3 via Kinesis Data Firehose\uff08OpenSearch as log processor\uff09 In this scenario, the service cannot directly put their logs to Amazon S3. The logs are sent to Amazon CloudWatch, and Kinesis Data Firehose ( KDF ) is used to subscribe the logs from CloudWatch Log Group and then put logs into Amazon S3. Amazon S3 (via KDF) based service log pipeline architecture The log pipeline runs the following workflow: AWS services logs are stored in Amazon S3 bucket (Log Bucket). An event notification is sent to Amazon SQS using S3 Event Notifications when a new log file is created. Amazon SQS initiates the Log Processor Lambda to run. The log processor reads and processes the log files. The log processor ingests the logs into the Amazon OpenSearch Service. Logs that fail to be processed are exported to Amazon S3 bucket (Backup Bucket). For cross-account ingestion, the AWS Services store logs in Amazon S3 bucket in the member account, and other resources remain in central logging account. Logs to Amazon S3 directly\uff08Light Engine as log processor\uff09 In this scenario, the service directly sends logs to Amazon S3. Amazon S3 based service log pipeline architecture The log pipeline runs the following workflow: AWS service logs are stored in an Amazon S3 bucket (Log Bucket). An event notification is sent to Amazon SQS using S3 Event Notifications when a new log file is created. Amazon SQS initiates AWS Lambda. 
AWS Lambda get objects from the Amazon S3 log bucket. AWS Lambda put objects to the staging bucket. The Log Processor, AWS Step Functions, processes raw log files stored in the staging bucket in batches. The Log Processor, AWS Step Functions, converts log data into Apache Parquet format and automatically partitions all incoming data based on criteria including time and region.","title":"Logs through Amazon S3"},{"location":"implementation-guide/architecture/#logs-through-amazon-kinesis-data-streams","text":"This section is applicable to CloudFront real-time logs, CloudTrail logs (CloudWatch), and VPC Flow logs (CloudWatch). The workflow supports two scenarios: Logs to KDS directly In this scenario, the service directly streams logs to Amazon Kinesis Data Streams ( KDS ). Amazon KDS based service log pipeline architecture Logs to KDS via subscription In this scenario, the service delivers the logs to CloudWatch Log Group, and then CloudWatch Logs stream the logs in real-time to KDS as the subscription destination. Amazon KDS (via subscription) based service log pipeline architecture The log pipeline runs the following workflow: AWS Services logs are streamed to Kinesis Data Stream. KDS initiates the Log Processor Lambda to run. The log processor processes and ingests the logs into the Amazon OpenSearch Service. Logs that fail to be processed are exported to Amazon S3 bucket (Backup Bucket). For cross-account ingestion, the AWS Services store logs on Amazon CloudWatch log group in the member account, and other resources remain in central logging account. Warning This solution does not support cross-account ingestion for CloudFront real-time logs.","title":"Logs through Amazon Kinesis Data Streams"},{"location":"implementation-guide/architecture/#application-log-analytics-pipeline","text":"Centralized Logging with OpenSearch supports log analysis for application logs, such as Nginx/Apache HTTP Server logs or custom application logs. Note Centralized Logging with OpenSearch supports cross-account log ingestion . If you want to ingest logs from the same account, the resources in the Sources group will be in the same account as your Centralized Logging with OpenSearch account. Otherwise, they will be in another AWS account.","title":"Application log analytics pipeline"},{"location":"implementation-guide/architecture/#logs-from-amazon-ec2-amazon-eks","text":"Logs from Amazon EC2/ Amazon EKS(OpenSearch as log processor) Application log pipeline architecture for EC2/EKS The log pipeline runs the following workflow: Fluent Bit works as the underlying log agent to collect logs from application servers and send them to an optional Log Buffer , or ingest into OpenSearch domain directly. An event notification is sent to Amazon SQS using S3 Event Notifications when a new log file is created. Amazon SQS initiates AWS Lambda. AWS Lambda get objects from the Amazon S3 log bucket. AWS Lambda put objects to the staging bucket. The Log Processor, AWS Step Functions, processes raw log files stored in the staging bucket in batches. The Log Processor, AWS Step Functions, converts log data into Apache Parquet format and automatically partitions all incoming data based on criteria including time and region. Logs from Amazon EC2/ Amazon EKS(Light Engine as log processor) Application log pipeline architecture for EC2/EKS The log pipeline runs the following workflow: Fluent Bit works as the underlying log agent to collect logs from application servers and send them to an optional Log Buffer. 
The Log Buffer triggers the Lambda to copy objects from log bucket to staging bucket. Log Processor, AWS Step Functions, processes raw log files stored in the staging bucket in batches, converts them to Apache Parquet, and automatically partitions all incoming data by criteria including time and region.","title":"Logs from Amazon EC2 / Amazon EKS"},{"location":"implementation-guide/architecture/#logs-from-syslog-client","text":"Important Make sure your Syslog generator/sender's subnet is connected to Centralized Logging with OpenSearch' two private subnets. You need to use VPC Peering Connection or Transit Gateway to connect these VPCs. The NLB together with the ECS containers in the architecture diagram will be provisioned only when you create a Syslog ingestion and be automated deleted when there is no Syslog ingestion. Application log pipeline architecture for Syslog Syslog client (like Rsyslog ) send logs to a Network Load Balancer (NLB) in Centralized Logging with OpenSearch's private subnets, and NLB routes to the ECS containers running Syslog servers. Fluent Bit works as the underlying log agent in the ECS Service to parse logs, and send them to an optional Log Buffer , or ingest into OpenSearch domain directly. The Log Buffer triggers the Lambda (Log Processor) to run. The log processor reads and processes the log records and ingests the logs into the OpenSearch domain. Logs that fail to be processed are exported to an Amazon S3 bucket (Backup Bucket).","title":"Logs from Syslog Client"},{"location":"implementation-guide/faq/","text":"Frequently Asked Questions General Q: What is Centralized Logging with OpenSearch solution? Centralized Logging with OpenSearch is an AWS Solution that simplifies the building of log analytics pipelines. It provides to customers, as complementary of Amazon OpenSearch Service, capabilities to ingest and process both application logs and AWS service logs without writing code, and create visualization dashboards from out-of-the-box templates. Centralized Logging with OpenSearch automatically assembles the underlying AWS services, and provides you a web console to manage log analytics pipelines. Q: What are the supported logs in this solution? Centralized Logging with OpenSearch supports both AWS service logs and EC2/EKS application logs. Refer to the supported AWS services , and the supported application log formats and sources for more details. Q: Does Centralized Logging with OpenSearch support ingesting logs from multiple AWS accounts? Yes. Centralized Logging with OpenSearch supports ingesting AWS service logs and application logs from a different AWS account in the same region. For more information, see cross-account ingestion . Q: Does Centralized Logging with OpenSearch support ingesting logs from multiple AWS Regions? Currently, Centralized Logging with OpenSearch does not automate the log ingestion from a different AWS Region. You need to ingest logs from other regions into pipelines provisioned by Centralized Logging with OpenSearch. For AWS services which store the logs in S3 bucket, you can leverage the S3 Cross-Region Replication to copy the logs to the Centralized Logging with OpenSearch deployed region, and import incremental logs using the manual mode by specifying the log location in the S3 bucket. For application logs on EC2 and EKS, you need to set up the networking (for example, Kinesis VPC endpoint, VPC Peering), install agents, and configure the agents to ingest logs to Centralized Logging with OpenSearch pipelines. 
Q: What is the license of this solution? This solution is provided under the Apache-2.0 license . It is a permissive free software license written by the Apache Software Foundation. It allows users to use the software for any purpose, to distribute it, to modify it, and to distribute modified versions of the software under the terms of the license, without concern for royalties. Q: How can I find the roadmap of this solution? This solution uses GitHub project to manage the roadmap. You can find the roadmap here . Q: How can I submit a feature request or bug report? You can submit feature requests and bug report through the GitHub issues. Here are the templates for feature request , bug report . Q: How can I use stronger TLS Protocols to secure traffic, namely TLS 1.2 and above? By default, CloudFront uses the TLSv1 security policy along with a default certificate. Changing the TLS settings for CloudFront depends on the presence of your SSL certificates. If you don't have your own SSL certificates, you won't be able to alter the TLS setting for CloudFront. In order to configure TLS 1.2 or above, you will need a custom domain. This setup will enable you to enforce stronger TLS protocols for your traffic. To learn how to configure a custom domain and enable TLS 1.2+ for your service, you can follow the guide provided here: Use a Custom Domain with AWS AppSync, Amazon CloudFront, and Amazon Route 53 . Setup and configuration Q: Can I deploy Centralized Logging with OpenSearch on AWS in any AWS Region? Centralized Logging with OpenSearch provides two deployment options: option 1 with Cognito User Pool, and option 2 with OpenID Connect. For option 1, customers can deploy the solution in AWS Regions where Amazon Cognito User Pool, AWS AppSync, Amazon Kinesis Data Firehose (optional) are available. For option 2, customers can deploy the solution in AWS Regions where AWS AppSync, Amazon Kinesis Data Firehose (optional) are available. Refer to supported regions for deployment for more information. Q: What are the prerequisites of deploying this solution? Centralized Logging with OpenSearch does not provision Amazon OpenSearch clusters, and you need to import existing OpenSearch clusters through the web console. The clusters must meet the requirements specified in prerequisites . Q: Why do I need a domain name with ICP recordal when deploying the solution in AWS China Regions? The Centralized Logging with OpenSearch console is served via CloudFront distribution which is considered as an Internet information service. According to the local regulations, any Internet information service must bind to a domain name with ICP recordal . Q: What versions of OpenSearch does the solution work with? Centralized Logging with OpenSearch supports Amazon OpenSearch Service, with OpenSearch 1.3 or later. Q: What are the index name rules for OpenSearch created by the Log Analytics Pipeline? You can change the index name if needed when using the Centralized Logging with OpenSearch console to create a log analytics pipeline. If the log analytics pipeline is created for service logs, the index name is composed of - - -<00000x>, where you can define a name for Index Prefix and service-type is automatically generated by the solution according to the service type you have chosen. Moreover, you can choose different index suffix types to adjust index rollover time window. YYYY-MM-DD-HH: Amazon OpenSearch will roll the index by hour. YYYY-MM-DD: Amazon OpenSearch will roll the index by 24 hours. 
YYYY-MM: Amazon OpenSearch will roll the index by 30 days. YYYY: Amazon OpenSearch will roll the index by 365 days. It should be noted that in OpenSearch, the time is in UTC 0 time zone. Regarding the 00000x part, Amazon OpenSearch will automatically append a 6-digit suffix to the index name, where the first index rule is 000001, rollover according to the index, and increment backwards, such as 000002, 000003. If the log analytics pipeline is created for application log, the index name is composed of - -<00000x>. The rules for index prefix and index suffix, 00000x are the same as those for service logs. Q: What are the index rollover rules for OpenSearch created by the Log Analytics Pipeline? Index rollover is determined by two factors. One is the Index Suffix in the index name. If you enable the index rollover by capacity, Amazon OpenSearch will roll your index when the index capacity equals or exceeds the specified size, regardless of the rollover time window. Note that if one of these two factors matches, index rollover can be triggered. For example, we created an application log pipeline on January 1, 2023, deleted the application log pipeline at 9:00 on January 4, 2023, and the index name is nginx-YYYY-MM-DD-<00000x>. At the same time, we enabled the index rollover by capacity and entered 300GB. If the log data volume increases suddenly after creation, it can reach 300GB every hour, and the duration is 2 hours and 10 minutes. After that, it returns to normal, and the daily data volume is 90GB. Then OpenSearch creates three indexes on January 1, the index names are nginx-2023-01-01-000001, nginx-2023-01-01-000002, nginx-2023-01-01-000003, and then creates one every day Indexes respectively: nginx-2023-01-02-000004, nginx-2023-01-03-000005, nginx-2023-01-04-000006. Q: Can I deploy the solution in an existing VPC? Yes. You can either launch the solution with a new VPC or launch the solution with an existing VPC. When using an existing VPC, you need to select the VPC and the corresponding subnets. Refer to launch with Cognito User Pool or launch with OpenID Connect for more details. Q: I did not receive the email containing the temporary password when launching the solution with Cognito User Pool. How can I resend the password? Your account is managed by the Cognito User Pool. To resend the temporary password, you can find the user pool created by the solution, delete and recreate the user using the same email address. If you still have the same issue, try with another email address. Q: How can I create more users for this solution? If you launched the solution with Cognito User Pool, go to the AWS console, find the user pool created by the solution, and you can create more users. If you launched the solution with OpenID Connect (OIDC), you should add more users in the user pool managed by the OIDC provider. Note that all users have the same privileges. Pricing Q: How will I be charged and billed for the use of this solution? The solution is free to use, and you are responsible for the cost of AWS services used while running this solution. You pay only for what you use, and there are no minimum or setup fees. Refer to the Centralized Logging with OpenSearch Cost section for detailed cost estimation. Q: Will there be additional cost for cross-account ingestion? No. The cost will be same as ingesting logs within the same AWS account. Log Ingestion Q: What is the log agent used in the Centralized Logging with OpenSearch solution? 
Centralized Logging with OpenSearch uses AWS for Fluent Bit , a distribution of Fluent Bit maintained by AWS. The solution uses this distribution to ingest logs from Amazon EC2 and Amazon EKS. Q: I have already stored the AWS service logs of member accounts in a centralized logging account. How should I create service log ingestion for member accounts? In this case, you need to deploy the Centralized Logging with OpenSearch solution in the centralized logging account, and ingest AWS service logs using the Manual mode from the logging account. Refer to this guide for ingesting Application Load Balancer logs with Manual mode. You can do the same with other supported AWS services which output logs to S3. Q: Why there are some duplicated records in OpenSearch when ingesting logs via Kinesis Data Streams? This is usually because there is no enough Kinesis Shards to handle the incoming requests. When threshold error occurs in Kinesis, the Fluent Bit agent will retry that chunk . To avoid this issue, you need to estimate your log throughput and set a proper Kinesis shard number. Please refer to the Kinesis Data Streams quotas and limits . Centralized Logging with OpenSearch provides a built-in feature to scale-out and scale-in the Kinesis shards, and it would take a couple of minutes to scale out to the desired number. Q: How to install log agent on CentOS 7? Log in to your CentOS 7 machine and install SSM Agent manually. sudo yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm sudo systemctl enable amazon-ssm-agent sudo systemctl start amazon-ssm-agent Go to the Instance Group panel of Centralized Logging with OpenSearch console, create Instance Group , select the CentOS 7 machine, choose Install log agent and wait for its status to be offline . Log in to CentOS 7 and install fluent-bit 1.9.3 manually. export RELEASE_URL = ${ FLUENT_BIT_PACKAGES_URL :- https ://packages.fluentbit.io } export RELEASE_KEY = ${ FLUENT_BIT_PACKAGES_KEY :- https ://packages.fluentbit.io/fluentbit.key } sudo rpm --import $RELEASE_KEY cat << EOF | sudo tee /etc/yum.repos.d/fluent-bit.repo [fluent-bit] name = Fluent Bit baseurl = $RELEASE_URL/centos/VERSION_ARCH_SUBSTR gpgcheck=1 repo_gpgcheck=1 gpgkey=$RELEASE_KEY enabled=1 EOF sudo sed -i 's|VERSION_ARCH_SUBSTR|\\$releasever/\\$basearch/|g' /etc/yum.repos.d/fluent-bit.repo sudo yum install -y fluent-bit-1.9.3-1 # Modify the configuration file sudo sed -i 's/ExecStart.*/ExecStart=\\/opt\\/fluent-bit\\/bin\\/fluent-bit -c \\/opt\\/fluent-bit\\/etc\\/fluent-bit.conf/g' /usr/lib/systemd/system/fluent-bit.service sudo systemctl daemon-reload sudo systemctl enable fluent-bit sudo systemctl start fluent-bit 4. Go back to the Instance Groups panel of the Centralized Logging with OpenSearch console and wait for the CentOS 7 machine status to be Online and proceed to create the instance group. Q: How can I consume CloudWatch custom logs? You can use Firehose to subscribe CloudWatch logs and transfer logs into Amazon S3. Firstly, create subscription filters with Amazon Kinesis Data Firehose based on this guide . Next, follow the instructions to learn how to transfer logs to Amazon S3. Then, you can use Centralized Logging with OpenSearch to ingest logs from Amazon S3 to OpenSearch. Log Visualization Q: How can I find the built-in dashboards in OpenSearch? Please refer to the AWS Service Logs and Application Logs to find out if there is a built-in dashboard supported. 
You also need to turn on the Sample Dashboard option when creating a log analytics pipeline. The dashboard will be inserted into the Amazon OpenSearch Service under Global Tenant . You can switch to the Global Tenant from the top right coder of the OpenSearch Dashboards.","title":"FAQ"},{"location":"implementation-guide/faq/#frequently-asked-questions","text":"","title":"Frequently Asked Questions"},{"location":"implementation-guide/faq/#general","text":"Q: What is Centralized Logging with OpenSearch solution? Centralized Logging with OpenSearch is an AWS Solution that simplifies the building of log analytics pipelines. It provides to customers, as complementary of Amazon OpenSearch Service, capabilities to ingest and process both application logs and AWS service logs without writing code, and create visualization dashboards from out-of-the-box templates. Centralized Logging with OpenSearch automatically assembles the underlying AWS services, and provides you a web console to manage log analytics pipelines. Q: What are the supported logs in this solution? Centralized Logging with OpenSearch supports both AWS service logs and EC2/EKS application logs. Refer to the supported AWS services , and the supported application log formats and sources for more details. Q: Does Centralized Logging with OpenSearch support ingesting logs from multiple AWS accounts? Yes. Centralized Logging with OpenSearch supports ingesting AWS service logs and application logs from a different AWS account in the same region. For more information, see cross-account ingestion . Q: Does Centralized Logging with OpenSearch support ingesting logs from multiple AWS Regions? Currently, Centralized Logging with OpenSearch does not automate the log ingestion from a different AWS Region. You need to ingest logs from other regions into pipelines provisioned by Centralized Logging with OpenSearch. For AWS services which store the logs in S3 bucket, you can leverage the S3 Cross-Region Replication to copy the logs to the Centralized Logging with OpenSearch deployed region, and import incremental logs using the manual mode by specifying the log location in the S3 bucket. For application logs on EC2 and EKS, you need to set up the networking (for example, Kinesis VPC endpoint, VPC Peering), install agents, and configure the agents to ingest logs to Centralized Logging with OpenSearch pipelines. Q: What is the license of this solution? This solution is provided under the Apache-2.0 license . It is a permissive free software license written by the Apache Software Foundation. It allows users to use the software for any purpose, to distribute it, to modify it, and to distribute modified versions of the software under the terms of the license, without concern for royalties. Q: How can I find the roadmap of this solution? This solution uses GitHub project to manage the roadmap. You can find the roadmap here . Q: How can I submit a feature request or bug report? You can submit feature requests and bug report through the GitHub issues. Here are the templates for feature request , bug report . Q: How can I use stronger TLS Protocols to secure traffic, namely TLS 1.2 and above? By default, CloudFront uses the TLSv1 security policy along with a default certificate. Changing the TLS settings for CloudFront depends on the presence of your SSL certificates. If you don't have your own SSL certificates, you won't be able to alter the TLS setting for CloudFront. In order to configure TLS 1.2 or above, you will need a custom domain. 
This setup will enable you to enforce stronger TLS protocols for your traffic. To learn how to configure a custom domain and enable TLS 1.2+ for your service, you can follow the guide provided here: Use a Custom Domain with AWS AppSync, Amazon CloudFront, and Amazon Route 53 .","title":"General"},{"location":"implementation-guide/faq/#setup-and-configuration","text":"Q: Can I deploy Centralized Logging with OpenSearch on AWS in any AWS Region? Centralized Logging with OpenSearch provides two deployment options: option 1 with Cognito User Pool, and option 2 with OpenID Connect. For option 1, customers can deploy the solution in AWS Regions where Amazon Cognito User Pool, AWS AppSync, Amazon Kinesis Data Firehose (optional) are available. For option 2, customers can deploy the solution in AWS Regions where AWS AppSync, Amazon Kinesis Data Firehose (optional) are available. Refer to supported regions for deployment for more information. Q: What are the prerequisites of deploying this solution? Centralized Logging with OpenSearch does not provision Amazon OpenSearch clusters, and you need to import existing OpenSearch clusters through the web console. The clusters must meet the requirements specified in prerequisites . Q: Why do I need a domain name with ICP recordal when deploying the solution in AWS China Regions? The Centralized Logging with OpenSearch console is served via CloudFront distribution which is considered as an Internet information service. According to the local regulations, any Internet information service must bind to a domain name with ICP recordal . Q: What versions of OpenSearch does the solution work with? Centralized Logging with OpenSearch supports Amazon OpenSearch Service, with OpenSearch 1.3 or later. Q: What are the index name rules for OpenSearch created by the Log Analytics Pipeline? You can change the index name if needed when using the Centralized Logging with OpenSearch console to create a log analytics pipeline. If the log analytics pipeline is created for service logs, the index name is composed of - - -<00000x>, where you can define a name for Index Prefix and service-type is automatically generated by the solution according to the service type you have chosen. Moreover, you can choose different index suffix types to adjust index rollover time window. YYYY-MM-DD-HH: Amazon OpenSearch will roll the index by hour. YYYY-MM-DD: Amazon OpenSearch will roll the index by 24 hours. YYYY-MM: Amazon OpenSearch will roll the index by 30 days. YYYY: Amazon OpenSearch will roll the index by 365 days. It should be noted that in OpenSearch, the time is in UTC 0 time zone. Regarding the 00000x part, Amazon OpenSearch will automatically append a 6-digit suffix to the index name, where the first index rule is 000001, rollover according to the index, and increment backwards, such as 000002, 000003. If the log analytics pipeline is created for application log, the index name is composed of - -<00000x>. The rules for index prefix and index suffix, 00000x are the same as those for service logs. Q: What are the index rollover rules for OpenSearch created by the Log Analytics Pipeline? Index rollover is determined by two factors. One is the Index Suffix in the index name. If you enable the index rollover by capacity, Amazon OpenSearch will roll your index when the index capacity equals or exceeds the specified size, regardless of the rollover time window. Note that if one of these two factors matches, index rollover can be triggered. 
For example, suppose we created an application log pipeline on January 1, 2023, deleted it at 9:00 on January 4, 2023, and the index name is nginx-YYYY-MM-DD-<00000x>. We also enabled index rollover by capacity and entered 300GB. Suppose the log data volume increases suddenly after creation, reaching 300GB per hour for 2 hours and 10 minutes, and after that it returns to a normal daily data volume of 90GB. OpenSearch then creates three indexes on January 1 (nginx-2023-01-01-000001, nginx-2023-01-01-000002, and nginx-2023-01-01-000003), and afterwards creates one index per day: nginx-2023-01-02-000004, nginx-2023-01-03-000005, and nginx-2023-01-04-000006. Q: Can I deploy the solution in an existing VPC? Yes. You can either launch the solution with a new VPC or launch the solution with an existing VPC. When using an existing VPC, you need to select the VPC and the corresponding subnets. Refer to launch with Cognito User Pool or launch with OpenID Connect for more details. Q: I did not receive the email containing the temporary password when launching the solution with Cognito User Pool. How can I resend the password? Your account is managed by the Cognito User Pool. To resend the temporary password, you can find the user pool created by the solution, then delete and recreate the user using the same email address. If you still have the same issue, try with another email address. Q: How can I create more users for this solution? If you launched the solution with Cognito User Pool, go to the AWS console, find the user pool created by the solution, and you can create more users. If you launched the solution with OpenID Connect (OIDC), you should add more users in the user pool managed by the OIDC provider. Note that all users have the same privileges.","title":"Setup and configuration"},{"location":"implementation-guide/faq/#pricing","text":"Q: How will I be charged and billed for the use of this solution? The solution is free to use, and you are responsible for the cost of AWS services used while running this solution. You pay only for what you use, and there are no minimum or setup fees. Refer to the Centralized Logging with OpenSearch Cost section for detailed cost estimation. Q: Will there be additional cost for cross-account ingestion? No. The cost will be the same as ingesting logs within the same AWS account.","title":"Pricing"},{"location":"implementation-guide/faq/#log-ingestion","text":"Q: What is the log agent used in the Centralized Logging with OpenSearch solution? Centralized Logging with OpenSearch uses AWS for Fluent Bit , a distribution of Fluent Bit maintained by AWS. The solution uses this distribution to ingest logs from Amazon EC2 and Amazon EKS. Q: I have already stored the AWS service logs of member accounts in a centralized logging account. How should I create service log ingestion for member accounts? In this case, you need to deploy the Centralized Logging with OpenSearch solution in the centralized logging account, and ingest AWS service logs using the Manual mode from the logging account. Refer to this guide for ingesting Application Load Balancer logs with Manual mode. You can do the same with other supported AWS services which output logs to S3. Q: Why are there some duplicated records in OpenSearch when ingesting logs via Kinesis Data Streams? This is usually because there are not enough Kinesis shards to handle the incoming requests.
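As a rough guide, each Kinesis shard accepts up to 1 MiB/s or 1,000 records per second of writes, so a sustained 5 MiB/s log stream, for example, needs at least 5 shards. If you prefer to resize an existing stream manually, you could use the AWS CLI. This is a sketch with placeholder values:
aws kinesis update-shard-count --stream-name <your-log-stream> --target-shard-count 10 --scaling-type UNIFORM_SCALING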
When a threshold error occurs in Kinesis, the Fluent Bit agent will retry that chunk . To avoid this issue, you need to estimate your log throughput and set a proper Kinesis shard number. Please refer to the Kinesis Data Streams quotas and limits . Centralized Logging with OpenSearch provides a built-in feature to scale out and scale in the Kinesis shards, and it takes a couple of minutes to scale out to the desired number. Q: How to install the log agent on CentOS 7? Log in to your CentOS 7 machine and install the SSM Agent manually. sudo yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm sudo systemctl enable amazon-ssm-agent sudo systemctl start amazon-ssm-agent Go to the Instance Group panel of the Centralized Logging with OpenSearch console, create an Instance Group , select the CentOS 7 machine, choose Install log agent and wait for its status to be Offline . Log in to CentOS 7 and install fluent-bit 1.9.3 manually. export RELEASE_URL=${FLUENT_BIT_PACKAGES_URL:-https://packages.fluentbit.io} export RELEASE_KEY=${FLUENT_BIT_PACKAGES_KEY:-https://packages.fluentbit.io/fluentbit.key} sudo rpm --import $RELEASE_KEY cat << EOF | sudo tee /etc/yum.repos.d/fluent-bit.repo [fluent-bit] name=Fluent Bit baseurl=$RELEASE_URL/centos/VERSION_ARCH_SUBSTR gpgcheck=1 repo_gpgcheck=1 gpgkey=$RELEASE_KEY enabled=1 EOF sudo sed -i 's|VERSION_ARCH_SUBSTR|\\$releasever/\\$basearch/|g' /etc/yum.repos.d/fluent-bit.repo sudo yum install -y fluent-bit-1.9.3-1 # Modify the configuration file sudo sed -i 's/ExecStart.*/ExecStart=\\/opt\\/fluent-bit\\/bin\\/fluent-bit -c \\/opt\\/fluent-bit\\/etc\\/fluent-bit.conf/g' /usr/lib/systemd/system/fluent-bit.service sudo systemctl daemon-reload sudo systemctl enable fluent-bit sudo systemctl start fluent-bit Go back to the Instance Groups panel of the Centralized Logging with OpenSearch console, wait for the CentOS 7 machine status to be Online , and then proceed to create the instance group. Q: How can I consume CloudWatch custom logs? You can use Amazon Kinesis Data Firehose to subscribe to CloudWatch Logs and transfer the logs into Amazon S3. First, create a subscription filter with Amazon Kinesis Data Firehose based on this guide . Next, follow the instructions to learn how to transfer the logs to Amazon S3. Then, you can use Centralized Logging with OpenSearch to ingest the logs from Amazon S3 into OpenSearch.
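As an illustration, creating such a subscription filter from the CLI might look like the following sketch; the log group name, delivery stream ARN, and role ARN are placeholders, and the role must allow CloudWatch Logs to put records into Firehose:
aws logs put-subscription-filter --log-group-name <your-log-group> --filter-name all-events --filter-pattern \"\" --destination-arn arn:aws:firehose:<region>:<account-id>:deliverystream/<your-stream> --role-arn arn:aws:iam::<account-id>:role/<cwl-to-firehose-role>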
","title":"Log Ingestion"},{"location":"implementation-guide/faq/#log-visualization","text":"Q: How can I find the built-in dashboards in OpenSearch? Please refer to the AWS Service Logs and Application Logs to find out if there is a built-in dashboard supported. You also need to turn on the Sample Dashboard option when creating a log analytics pipeline. The dashboard will be inserted into the Amazon OpenSearch Service under Global Tenant . You can switch to the Global Tenant from the top right corner of the OpenSearch Dashboards.","title":"Log Visualization"},{"location":"implementation-guide/include-dashboard/","text":"You can access the built-in dashboard in Amazon OpenSearch to view log data. For more information, see Access Dashboard . You can click the image below to view the high-resolution sample dashboard.","title":"Include dashboard"},{"location":"implementation-guide/monitoring/","text":"Types of metrics The following types of metrics are available on the Centralized Logging with OpenSearch console. Log source metrics Fluent Bit FluentBitOutputProcRecords - The number of log records that this output instance has successfully sent. This is the total record count of all unique chunks sent by this output. If a record is not successfully sent, it does not count towards this metric. FluentBitOutputProcBytes - The number of bytes of log records that this output instance has successfully sent. This is the total byte size of all unique chunks sent by this output. If a record is not sent due to some error, then it will not count towards this metric. FluentBitOutputDroppedRecords - The number of log records that have been dropped by the output. This means they encountered an unrecoverable error, or retries expired for their chunk. FluentBitOutputErrors - The number of chunks that have faced an error (either unrecoverable or retriable). This is the number of times a chunk has failed, and does not correspond with the number of error messages you see in the Fluent Bit log output. FluentBitOutputRetriedRecords - The number of log records that experienced a retry. Note that this is calculated at the chunk level, and the count increases when an entire chunk is marked for retry. An output plugin may or may not perform multiple actions that generate many error messages when uploading a single chunk. FluentBitOutputRetriesFailed - The number of times that retries expired for a chunk. Each plugin configures a Retry_Limit which applies to chunks. Once the Retry_Limit has been reached for a chunk, it is discarded and this metric is incremented. FluentBitOutputRetries - The number of times this output instance requested a retry for a chunk. Network Load Balancer SyslogNLBActiveFlowCount - The total number of concurrent flows (or connections) from clients to targets. This metric includes connections in the SYN_SENT and ESTABLISHED states. TCP connections are not terminated at the load balancer, so a client opening a TCP connection to a target counts as a single flow. SyslogNLBProcessedBytes - The total number of bytes processed by the load balancer, including TCP/IP headers. This count includes traffic to and from targets, minus health check traffic. Buffer metrics Log Buffer is a buffer layer between the Log Agent and OpenSearch clusters. The agent uploads logs into the buffer layer before they are processed and delivered into the OpenSearch clusters. A buffer layer is a way to protect OpenSearch clusters from being overwhelmed. Kinesis Data Stream KDSIncomingBytes \u2013 The number of bytes successfully put to the Kinesis stream over the specified time period. This metric includes bytes from PutRecord and PutRecords operations. Minimum, Maximum, and Average statistics represent the bytes in a single put operation for the stream in the specified time period. KDSIncomingRecords \u2013 The number of records successfully put to the Kinesis stream over the specified time period. This metric includes record counts from PutRecord and PutRecords operations. Minimum, Maximum, and Average statistics represent the records in a single put operation for the stream in the specified time period. KDSPutRecordBytes \u2013 The number of bytes put to the Kinesis stream using the PutRecord operation over the specified time period. KDSThrottledRecords \u2013 The number of records rejected due to throttling in a PutRecords operation per Kinesis data stream, measured over the specified time period. KDSWriteProvisionedThroughputExceeded \u2013 The number of records rejected due to throttling for the stream over the specified time period.
This metric includes throttling from PutRecord and PutRecords operations. The most commonly used statistic for this metric is Average. When the Minimum statistic has a non-zero value, records were throttled for the stream during the specified time period. When the Maximum statistic has a value of 0 (zero), no records were throttled for the stream during the specified time period. SQS SQSNumberOfMessagesSent - The number of messages added to a queue. SQSNumberOfMessagesDeleted - The number of messages deleted from the queue. Amazon SQS emits the NumberOfMessagesDeleted metric for every successful deletion operation that uses a valid receipt handle, including duplicate deletions. The following scenarios might cause the value of the NumberOfMessagesDeleted metric to be higher than expected: Calling the DeleteMessage action on different receipt handles that belong to the same message: If the message is not processed before the visibility timeout expires, the message becomes available to other consumers that can process it and delete it again, increasing the value of the NumberOfMessagesDeleted metric. Calling the DeleteMessage action on the same receipt handle: If the message is processed and deleted, but you call the DeleteMessage action again using the same receipt handle, a success status is returned, increasing the value of the NumberOfMessagesDeleted metric. SQSApproximateNumberOfMessagesVisible - The number of messages available for retrieval from the queue. SQSApproximateAgeOfOldestMessage - The approximate age of the oldest non-deleted message in the queue. After a message is received three times (or more) and not processed, the message is moved to the back of the queue and the ApproximateAgeOfOldestMessage metric points at the second-oldest message that hasn't been received more than three times. This action occurs even if the queue has a redrive policy. Because a single poison-pill message (received multiple times but never deleted) can distort this metric, the age of a poison-pill message isn't included in the metric until the poison-pill message is consumed successfully. When the queue has a redrive policy, the message is moved to a dead-letter queue after the configured Maximum Receives . When the message is moved to the dead-letter queue, the ApproximateAgeOfOldestMessage metric of the dead-letter queue represents the time when the message was moved to the dead-letter queue (not the original time the message was sent). Log processor metrics The Log Processor Lambda is responsible for performing final processing on the data and bulk writing it to OpenSearch. TotalLogs \u2013 The total number of log records or events processed by the Lambda function. ExcludedLogs \u2013 The number of log records or events that were excluded from processing, which could be due to filtering or other criteria. LoadedLogs \u2013 The number of log records or events that were successfully processed and loaded into OpenSearch. FailedLogs \u2013 The number of log records or events that failed to be processed or loaded into OpenSearch. ConcurrentExecutions \u2013 The number of function instances that are processing events. If this number reaches your concurrent executions quota for the Region, or the reserved concurrency limit on the function, then Lambda throttles additional invocation requests. Duration \u2013 The amount of time that your function code spends processing an event. The billed duration for an invocation is the value of Duration rounded up to the nearest millisecond. Throttles \u2013 The number of invocation requests that are throttled. When all function instances are processing requests and no concurrency is available to scale up, Lambda rejects additional requests with a TooManyRequestsException error. Throttled requests and other invocation errors don't count as either Invocations or Errors. Invocations \u2013 The number of times that your function code is invoked, including successful invocations and invocations that result in a function error. Invocations aren't recorded if the invocation request is throttled or otherwise results in an invocation error. The value of Invocations equals the number of requests billed.
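These pipeline metrics are published as standard Amazon CloudWatch metrics, so you can also query them with the AWS CLI. This is a sketch only; the namespace, metric name, and time range are placeholders, and you should check the metric details in the CloudWatch console for the exact namespace and dimensions used by your pipeline:
aws cloudwatch get-metric-statistics --namespace <your-pipeline-namespace> --metric-name TotalLogs --start-time 2024-01-01T00:00:00Z --end-time 2024-01-02T00:00:00Z --period 3600 --statistics Sum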
","title":"Monitoring"},{"location":"implementation-guide/monitoring/#types-of-metrics","text":"The following types of metrics are available on the Centralized Logging with OpenSearch console.","title":"Types of metrics"},{"location":"implementation-guide/monitoring/#log-source-metrics","text":"","title":"Log source metrics"},{"location":"implementation-guide/monitoring/#fluent-bit","text":"FluentBitOutputProcRecords - The number of log records that this output instance has successfully sent. This is the total record count of all unique chunks sent by this output. If a record is not successfully sent, it does not count towards this metric. FluentBitOutputProcBytes - The number of bytes of log records that this output instance has successfully sent. This is the total byte size of all unique chunks sent by this output. If a record is not sent due to some error, then it will not count towards this metric. FluentBitOutputDroppedRecords - The number of log records that have been dropped by the output. This means they encountered an unrecoverable error, or retries expired for their chunk. FluentBitOutputErrors - The number of chunks that have faced an error (either unrecoverable or retriable). This is the number of times a chunk has failed, and does not correspond with the number of error messages you see in the Fluent Bit log output. FluentBitOutputRetriedRecords - The number of log records that experienced a retry. Note that this is calculated at the chunk level, and the count increases when an entire chunk is marked for retry. An output plugin may or may not perform multiple actions that generate many error messages when uploading a single chunk. FluentBitOutputRetriesFailed - The number of times that retries expired for a chunk. Each plugin configures a Retry_Limit which applies to chunks. Once the Retry_Limit has been reached for a chunk, it is discarded and this metric is incremented. FluentBitOutputRetries - The number of times this output instance requested a retry for a chunk.","title":"Fluent Bit"},{"location":"implementation-guide/monitoring/#network-load-balancer","text":"SyslogNLBActiveFlowCount - The total number of concurrent flows (or connections) from clients to targets. This metric includes connections in the SYN_SENT and ESTABLISHED states. TCP connections are not terminated at the load balancer, so a client opening a TCP connection to a target counts as a single flow. SyslogNLBProcessedBytes - The total number of bytes processed by the load balancer, including TCP/IP headers. This count includes traffic to and from targets, minus health check traffic.","title":"Network Load Balancer"},{"location":"implementation-guide/monitoring/#buffer-metrics","text":"Log Buffer is a buffer layer between the Log Agent and OpenSearch clusters.
The agent uploads logs into the buffer layer before they are processed and delivered into the OpenSearch clusters. A buffer layer is a way to protect OpenSearch clusters from being overwhelmed.","title":"Buffer metrics"},{"location":"implementation-guide/monitoring/#kinesis-data-stream","text":"KDSIncomingBytes \u2013 The number of bytes successfully put to the Kinesis stream over the specified time period. This metric includes bytes from PutRecord and PutRecords operations. Minimum, Maximum, and Average statistics represent the bytes in a single put operation for the stream in the specified time period. KDSIncomingRecords \u2013 The number of records successfully put to the Kinesis stream over the specified time period. This metric includes record counts from PutRecord and PutRecords operations. Minimum, Maximum, and Average statistics represent the records in a single put operation for the stream in the specified time period. KDSPutRecordBytes \u2013 The number of bytes put to the Kinesis stream using the PutRecord operation over the specified time period. KDSThrottledRecords \u2013 The number of records rejected due to throttling in a PutRecords operation per Kinesis data stream, measured over the specified time period. KDSWriteProvisionedThroughputExceeded \u2013 The number of records rejected due to throttling for the stream over the specified time period. This metric includes throttling from PutRecord and PutRecords operations. The most commonly used statistic for this metric is Average. When the Minimum statistic has a non-zero value, records were throttled for the stream during the specified time period. When the Maximum statistic has a value of 0 (zero), no records were throttled for the stream during the specified time period.","title":"Kinesis Data Stream"},{"location":"implementation-guide/monitoring/#sqs","text":"SQSNumberOfMessagesSent - The number of messages added to a queue. SQSNumberOfMessagesDeleted - The number of messages deleted from the queue. Amazon SQS emits the NumberOfMessagesDeleted metric for every successful deletion operation that uses a valid receipt handle, including duplicate deletions. The following scenarios might cause the value of the NumberOfMessagesDeleted metric to be higher than expected: Calling the DeleteMessage action on different receipt handles that belong to the same message: If the message is not processed before the visibility timeout expires, the message becomes available to other consumers that can process it and delete it again, increasing the value of the NumberOfMessagesDeleted metric. Calling the DeleteMessage action on the same receipt handle: If the message is processed and deleted, but you call the DeleteMessage action again using the same receipt handle, a success status is returned, increasing the value of the NumberOfMessagesDeleted metric. SQSApproximateNumberOfMessagesVisible - The number of messages available for retrieval from the queue. SQSApproximateAgeOfOldestMessage - The approximate age of the oldest non-deleted message in the queue. After a message is received three times (or more) and not processed, the message is moved to the back of the queue and the ApproximateAgeOfOldestMessage metric points at the second-oldest message that hasn't been received more than three times. This action occurs even if the queue has a redrive policy.
Because a single poison-pill message (received multiple times but never deleted) can distort this metric, the age of a poison-pill message isn't included in the metric until the poison-pill message is consumed successfully. When the queue has a redrive policy, the message is moved to a dead-letter queue after the configured Maximum Receives . When the message is moved to the dead-letter queue, the ApproximateAgeOfOldestMessage metric of the dead-letter queue represents the time when the message was moved to the dead-letter queue (not the original time the message was sent).","title":"SQS"},{"location":"implementation-guide/monitoring/#log-processor-metrics","text":"The Log Processor Lambda is responsible for performing final processing on the data and bulk writing it to OpenSearch. TotalLogs \u2013 The total number of log records or events processed by the Lambda function. ExcludedLogs \u2013 The number of log records or events that were excluded from processing, which could be due to filtering or other criteria. LoadedLogs \u2013 The number of log records or events that were successfully processed and loaded into OpenSearch. FailedLogs \u2013 The number of log records or events that failed to be processed or loaded into OpenSearch. ConcurrentExecutions \u2013 The number of function instances that are processing events. If this number reaches your concurrent executions quota for the Region, or the reserved concurrency limit on the function, then Lambda throttles additional invocation requests. Duration \u2013 The amount of time that your function code spends processing an event. The billed duration for an invocation is the value of Duration rounded up to the nearest millisecond. Throttles \u2013 The number of invocation requests that are throttled. When all function instances are processing requests and no concurrency is available to scale up, Lambda rejects additional requests with a TooManyRequestsException error. Throttled requests and other invocation errors don't count as either Invocations or Errors. Invocations \u2013 The number of times that your function code is invoked, including successful invocations and invocations that result in a function error. Invocations aren't recorded if the invocation request is throttled or otherwise results in an invocation error. The value of Invocations equals the number of requests billed.","title":"Log processor metrics"},{"location":"implementation-guide/release-notes/","text":"Date Changes March 2023 Initial release. April 2023 Released version 1.0.1 Fixed deployment failure due to S3 ACL changes. June 2023 Released version 1.0.3 Fixed the EKS Fluent Bit deployment configuration generation issue. 
Aug 2023 Released version 2.0.0 Added the feature of ingesting logs from an S3 bucket continuously or on demand Added a log pipeline monitoring dashboard to the solution console Supported one-click enablement of pipeline alarms Added an option to automatically attach required IAM policies when creating an Instance Group Displayed an error message on the console when the installation of the log agent fails Updated the Application log pipeline creation process by allowing customers to specify a log source Added validations to the OpenSearch domain when importing a domain or selecting a domain to create a log pipeline Supported installing the log agent on AL2023 instances Supported ingesting WAF (associated with CloudFront) sampled logs to OpenSearch in regions other than us-east-1 Allowed the same index name in different OpenSearch domains September 2023 Released version 2.0.1 Fixed the following issues: Automatically adjust the log processor Lambda request's body size based on the AOS instance type When you create an application log pipeline and select Nginx as the log format, the default sample dashboard option is set to \"Yes\" The Monitoring page cannot show metrics when there is only one data point The time of the data point of the monitoring metrics does not match the time on the horizontal axis Nov 2023 Released version 2.1.0 Added Light Engine to provide an Athena-based serverless and cost-effective log analytics engine to analyze infrequent access logs Added OpenSearch Ingestion to provide more log processing capabilities, with which OSI can provision compute resources (OCU) and pay per ingestion capacity Supported parsing logs in nested JSON format Supported CloudTrail logs ingestion from the specified bucket manually Fixed the issue that instances could not be listed when creating an instance group Fixed the issue that EC2 instances launched by the Auto Scaling group failed to pass the health check Dec 2023 Released version 2.1.1 Fixed the following issues: An instance could be added to the same Instance Group more than once CLO could not be deployed in the UAE Region Log ingestion errors in Light Engine when no time key was specified in the Log Config Mar 2024 Released version 2.1.2 Fixed the following issues: The upgrade from versions earlier than 2.1.0 led to the loss of Amazon S3 notifications, preventing the proper collection of logs from the Amazon S3 buffer Including the \"@timestamp\" field in log configurations led to failures in creating index_templates and an inability to write data to Amazon OpenSearch Due to the absence of the 'batch_size' variable, process failures occurred in the Log Processor Lambda The Log Analytics Pipeline could not deploy cross-account AWS Lambda pipelines An issue with the ELB Service Log Parser resulted in the omission of numerous log lines An inaccurate warning message was displayed during pipeline creation with an existing index in Amazon OpenSearch An incorrect error message occurred when deleting an instance group in Application Logs","title":"Revisions"},{"location":"implementation-guide/source/","text":"Visit our GitHub repository to download the source code for this solution. The solution template is generated using the AWS Cloud Development Kit (CDK) . Refer to the README.md file for additional information.
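If you want to synthesize the template from source, a typical local workflow might look like the following sketch; the repository URL is a placeholder, and a Node.js environment with the AWS CDK Toolkit is assumed:
git clone https://github.com/<your-org>/<repo>.git
cd <repo>
npm install
npx cdk synth  # emits the CloudFormation template locally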
","title":"Developer guide"},{"location":"implementation-guide/trouble-shooting/","text":"Troubleshooting The following sections help you fix errors or problems that you might encounter when using Centralized Logging with OpenSearch. Error: Failed to assume service-linked role arn:x:x:x:/AWSServiceRoleForAppSync The reason for this error is that the account has never used the AWS AppSync service. You can deploy the solution's CloudFormation template again. AWS has already created the role automatically when you encountered the error. You can also go to AWS CloudShell or the local terminal and run the following AWS CLI command to create the AppSync service-linked role: aws iam create-service-linked-role --aws-service-name appsync.amazonaws.com Error: Unable to add backend role Centralized Logging with OpenSearch only supports Amazon OpenSearch Service domains with Fine-grained access control enabled. You need to go to the Amazon OpenSearch Service console, and edit the Access policy for the Amazon OpenSearch Service domain. Error: User xxx is not authorized to perform sts:AssumeRole on resource If you see this error, please make sure you have entered the correct information during cross account setup , and then wait for several minutes. Centralized Logging with OpenSearch uses AssumeRole for cross-account access. This is a best practice for temporarily accessing AWS resources in your member account. However, the roles created during cross account setup take seconds or minutes to take effect. Error: PutRecords API responded with error='InvalidSignatureException' The Fluent-bit agent reports PutRecords API responded with error='InvalidSignatureException', message='The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.' Please restart the fluent-bit agent. For example, on EC2 with Amazon Linux 2, run the command: sudo service fluent-bit restart Error: PutRecords API responded with error='AccessDeniedException' The Fluent-bit agent deployed on an EKS Cluster reports \"AccessDeniedException\" when sending records to Kinesis. Verify that the IAM role trust relations are correctly set. With the Centralized Logging with OpenSearch console: Open the Centralized Logging with OpenSearch console. In the left sidebar, under Log Source , choose EKS Clusters . Choose the EKS Cluster that you want to check. Click the IAM Role ARN which will open the IAM Role in the AWS Console. Choose the Trust relationships to verify that the OIDC Provider, the service account namespace, and conditions are correctly set. You can get more information from Amazon EKS IAM role configuration
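You can also inspect the role's trust policy from the command line. This is a sketch with a placeholder role name:
aws iam get-role --role-name <your-eks-ingestion-role> --query Role.AssumeRolePolicyDocument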
My CloudFormation stack is stuck on deleting an AWS::Lambda::Function resource when I update the stack. How can I resolve it? The Lambda function resides in a VPC, and you need to wait for the associated ENI resource to be deleted. The agent status is offline after I restart the EC2 instance. How can I make it auto start on instance restart? This usually happens if you have installed the log agent, but restarted the instance before you created any Log Ingestion. The log agent will auto restart if there is at least one Log Ingestion. If you have a log ingestion, but the problem still exists, you can use systemctl status fluent-bit to check its status inside the instance. I have switched to Global tenant. However, I still cannot find the dashboard in OpenSearch. This is usually because Centralized Logging with OpenSearch received a 403 error from OpenSearch when creating the index template and dashboard. This can be fixed by re-running the Lambda function manually by following the steps below: With the Centralized Logging with OpenSearch console: Open the Centralized Logging with OpenSearch console, and find the AWS Service Log pipeline which has this issue. Copy the first 5 characters from the ID section. For example, you should copy c169c from the ID c169cb23-88f3-4a7e-90d7-4ab4bc18982c Go to AWS Console > Lambda. Paste it into the function filter. This will filter all the Lambda functions created for this AWS Service Log ingestion. Click the Lambda function whose name contains \"OpenSearchHelperFn\". In the Test tab, create a new event with any Event name. Click the Test button to trigger the Lambda, and wait for the Lambda function to complete. The dashboard should be available in OpenSearch. Error from Fluent-bit agent: version `GLIBC_2.25' not found This error is caused by an old version of glibc . Centralized Logging with OpenSearch versions later than 1.2 require glibc-2.25 or above, so you must upgrade the existing version on EC2 first. The upgrade commands for different kinds of OS are shown as follows: Important We strongly recommend you test the commands in a non-production environment first. Any upgrade failure may cause severe loss. Redhat 7.9 For Redhat 7.9, the whole process will take about 2 hours, and at least 10 GB of storage is needed. # install library yum install -y gcc gcc-c++ m4 python3 bison fontconfig-devel libXpm-devel texinfo bzip2 wget echo /usr/local/lib >> /etc/ld.so.conf # create tmp directory mkdir -p /tmp/library cd /tmp/library # install gmp-6.1.0 wget https://ftp.gnu.org/gnu/gmp/gmp-6.1.0.tar.bz2 tar xjvf gmp-6.1.0.tar.bz2 cd gmp-6.1.0 ./configure --prefix=/usr/local make && make install ldconfig cd .. # install mpfr-3.1.4 wget https://gcc.gnu.org/pub/gcc/infrastructure/mpfr-3.1.4.tar.bz2 tar xjvf mpfr-3.1.4.tar.bz2 cd mpfr-3.1.4 ./configure --with-gmp=/usr/local --prefix=/usr/local make && make install ldconfig cd .. # install mpc-1.0.3 wget https://gcc.gnu.org/pub/gcc/infrastructure/mpc-1.0.3.tar.gz tar xzvf mpc-1.0.3.tar.gz cd mpc-1.0.3 ./configure --prefix=/usr/local make && make install ldconfig cd .. # install gcc-9.3.0 wget https://ftp.gnu.org/gnu/gcc/gcc-9.3.0/gcc-9.3.0.tar.gz tar xzvf gcc-9.3.0.tar.gz cd gcc-9.3.0 mkdir build cd build/ ../configure --enable-checking=release --enable-languages=c,c++ --disable-multilib --prefix=/usr make -j4 && make install ldconfig cd ../.. # install make-4.3 wget https://ftp.gnu.org/gnu/make/make-4.3.tar.gz tar xzvf make-4.3.tar.gz cd make-4.3 mkdir build cd build ../configure --prefix=/usr make && make install cd ../..
# install glibc-2.31 wget https://ftp.gnu.org/gnu/glibc/glibc-2.31.tar.gz tar xzvf glibc-2.31.tar.gz cd glibc-2.31 mkdir build cd build/ ../configure --prefix=/usr --disable-profile --enable-add-ons --with-headers=/usr/include --with-binutils=/usr/bin --disable-sanity-checks --disable-werror make all && make install make localedata/install-locales # clean tmp directory cd /tmp rm -rf /tmp/library Ubuntu 22 sudo ln -s /snap/core20/1623/usr/lib/x86_64-linux-gnu/libcrypto.so.1.1 /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1 sudo ln -s /snap/core20/1623/usr/lib/x86_64-linux-gnu/libssl.so.1.1 /usr/lib/x86_64-linux-gnu/libssl.so.1.1 sudo ln -s /usr/lib/x86_64-linux-gnu/libsasl2.so.2 /usr/lib/libsasl2.so.3 Amazon Linux 2023 sudo su - yum install -y wget perl unzip gcc zlib-devel mkdir /tmp/openssl cd /tmp/openssl wget https://www.openssl.org/source/openssl-1.1.1s.tar.gz tar xzvf openssl-1.1.1s.tar.gz cd openssl-1.1.1s ./config --prefix=/usr/local/openssl11 --openssldir=/usr/local/openssl11 shared zlib make make install echo /usr/local/openssl11/lib/ >> /etc/ld.so.conf ldconfig","title":"Troubleshooting"},{"location":"implementation-guide/trouble-shooting/#troubleshooting","text":"The following sections help you fix errors or problems that you might encounter when using Centralized Logging with OpenSearch.","title":"Troubleshooting"},{"location":"implementation-guide/trouble-shooting/#error-failed-to-assume-service-linked-role-arnxxxawsserviceroleforappsync","text":"The reason for this error is that the account has never used the AWS AppSync service. You can deploy the solution's CloudFormation template again. AWS has already created the role automatically when you encountered the error. You can also go to AWS CloudShell or the local terminal and run the following AWS CLI command to create the AppSync service-linked role: aws iam create-service-linked-role --aws-service-name appsync.amazonaws.com","title":"Error: Failed to assume service-linked role arn:x:x:x:/AWSServiceRoleForAppSync"},{"location":"implementation-guide/trouble-shooting/#error-unable-to-add-backend-role","text":"Centralized Logging with OpenSearch only supports Amazon OpenSearch Service domains with Fine-grained access control enabled. You need to go to the Amazon OpenSearch Service console, and edit the Access policy for the Amazon OpenSearch Service domain.","title":"Error: Unable to add backend role"},{"location":"implementation-guide/trouble-shooting/#erroruser-xxx-is-not-authorized-to-perform-stsassumerole-on-resource","text":"If you see this error, please make sure you have entered the correct information during cross account setup , and then wait for several minutes. Centralized Logging with OpenSearch uses AssumeRole for cross-account access. This is a best practice for temporarily accessing AWS resources in your member account. However, the roles created during cross account setup take seconds or minutes to take effect.","title":"Error: User xxx is not authorized to perform sts:AssumeRole on resource"},{"location":"implementation-guide/trouble-shooting/#error-putrecords-api-responded-with-errorinvalidsignatureexception","text":"The Fluent-bit agent reports PutRecords API responded with error='InvalidSignatureException', message='The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.' Please restart the fluent-bit agent.
For example, on EC2 with Amazon Linux 2, run the command: sudo service fluent-bit restart","title":"Error: PutRecords API responded with error='InvalidSignatureException'"},{"location":"implementation-guide/trouble-shooting/#error-putrecords-api-responded-with-erroraccessdeniedexception","text":"The Fluent-bit agent deployed on an EKS Cluster reports \"AccessDeniedException\" when sending records to Kinesis. Verify that the IAM role trust relations are correctly set. With the Centralized Logging with OpenSearch console: Open the Centralized Logging with OpenSearch console. In the left sidebar, under Log Source , choose EKS Clusters . Choose the EKS Cluster that you want to check. Click the IAM Role ARN which will open the IAM Role in the AWS Console. Choose the Trust relationships to verify that the OIDC Provider, the service account namespace, and conditions are correctly set. You can get more information from Amazon EKS IAM role configuration","title":"Error: PutRecords API responded with error='AccessDeniedException'"},{"location":"implementation-guide/trouble-shooting/#my-cloudformation-stack-is-stuck-on-deleting-an-awslambdafunction-resource-when-i-update-the-stack-how-to-resolve-it","text":"The Lambda function resides in a VPC, and you need to wait for the associated ENI resource to be deleted.","title":"My CloudFormation stack is stuck on deleting an AWS::Lambda::Function resource when I update the stack. How to resolve it?"},{"location":"implementation-guide/trouble-shooting/#the-agent-status-is-offline-after-i-restart-the-ec2-instance-how-can-i-make-it-auto-start-on-instance-restart","text":"This usually happens if you have installed the log agent, but restarted the instance before you created any Log Ingestion. The log agent will auto restart if there is at least one Log Ingestion. If you have a log ingestion, but the problem still exists, you can use systemctl status fluent-bit to check its status inside the instance.","title":"The agent status is offline after I restart the EC2 instance, how can I make it auto start on instance restart?"},{"location":"implementation-guide/trouble-shooting/#i-have-switched-to-global-tenant-however-i-still-cannot-find-the-dashboard-in-opensearch","text":"This is usually because Centralized Logging with OpenSearch received a 403 error from OpenSearch when creating the index template and dashboard. This can be fixed by re-running the Lambda function manually by following the steps below: With the Centralized Logging with OpenSearch console: Open the Centralized Logging with OpenSearch console, and find the AWS Service Log pipeline which has this issue. Copy the first 5 characters from the ID section. For example, you should copy c169c from the ID c169cb23-88f3-4a7e-90d7-4ab4bc18982c Go to AWS Console > Lambda. Paste it into the function filter. This will filter all the Lambda functions created for this AWS Service Log ingestion. Click the Lambda function whose name contains \"OpenSearchHelperFn\". In the Test tab, create a new event with any Event name. Click the Test button to trigger the Lambda, and wait for the Lambda function to complete. The dashboard should be available in OpenSearch.","title":"I have switched to Global tenant. However, I still cannot find the dashboard in OpenSearch."},{"location":"implementation-guide/trouble-shooting/#error-from-fluent-bit-agent-version-glibc_225-not-found","text":"This error is caused by an old version of glibc . Centralized Logging with OpenSearch versions later than 1.2 require glibc-2.25 or above.
Therefore, you must upgrade the existing version on EC2 first. The upgrade commands for different kinds of OS are shown as follows: Important We strongly recommend you test the commands in a non-production environment first. Any upgrade failure may cause severe loss.","title":"Error from Fluent-bit agent: version `GLIBC_2.25' not found"},{"location":"implementation-guide/trouble-shooting/#redhat-79","text":"For Redhat 7.9, the whole process will take about 2 hours, and at least 10 GB of storage is needed. # install library yum install -y gcc gcc-c++ m4 python3 bison fontconfig-devel libXpm-devel texinfo bzip2 wget echo /usr/local/lib >> /etc/ld.so.conf # create tmp directory mkdir -p /tmp/library cd /tmp/library # install gmp-6.1.0 wget https://ftp.gnu.org/gnu/gmp/gmp-6.1.0.tar.bz2 tar xjvf gmp-6.1.0.tar.bz2 cd gmp-6.1.0 ./configure --prefix=/usr/local make && make install ldconfig cd .. # install mpfr-3.1.4 wget https://gcc.gnu.org/pub/gcc/infrastructure/mpfr-3.1.4.tar.bz2 tar xjvf mpfr-3.1.4.tar.bz2 cd mpfr-3.1.4 ./configure --with-gmp=/usr/local --prefix=/usr/local make && make install ldconfig cd .. # install mpc-1.0.3 wget https://gcc.gnu.org/pub/gcc/infrastructure/mpc-1.0.3.tar.gz tar xzvf mpc-1.0.3.tar.gz cd mpc-1.0.3 ./configure --prefix=/usr/local make && make install ldconfig cd .. # install gcc-9.3.0 wget https://ftp.gnu.org/gnu/gcc/gcc-9.3.0/gcc-9.3.0.tar.gz tar xzvf gcc-9.3.0.tar.gz cd gcc-9.3.0 mkdir build cd build/ ../configure --enable-checking=release --enable-languages=c,c++ --disable-multilib --prefix=/usr make -j4 && make install ldconfig cd ../.. # install make-4.3 wget https://ftp.gnu.org/gnu/make/make-4.3.tar.gz tar xzvf make-4.3.tar.gz cd make-4.3 mkdir build cd build ../configure --prefix=/usr make && make install cd ../.. # install glibc-2.31 wget https://ftp.gnu.org/gnu/glibc/glibc-2.31.tar.gz tar xzvf glibc-2.31.tar.gz cd glibc-2.31 mkdir build cd build/ ../configure --prefix=/usr --disable-profile --enable-add-ons --with-headers=/usr/include --with-binutils=/usr/bin --disable-sanity-checks --disable-werror make all && make install make localedata/install-locales # clean tmp directory cd /tmp rm -rf /tmp/library","title":"Redhat 7.9"},{"location":"implementation-guide/trouble-shooting/#ubuntu-22","text":"sudo ln -s /snap/core20/1623/usr/lib/x86_64-linux-gnu/libcrypto.so.1.1 /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1 sudo ln -s /snap/core20/1623/usr/lib/x86_64-linux-gnu/libssl.so.1.1 /usr/lib/x86_64-linux-gnu/libssl.so.1.1 sudo ln -s /usr/lib/x86_64-linux-gnu/libsasl2.so.2 /usr/lib/libsasl2.so.3","title":"Ubuntu 22"},{"location":"implementation-guide/trouble-shooting/#amazon-linux-2023","text":"sudo su - yum install -y wget perl unzip gcc zlib-devel mkdir /tmp/openssl cd /tmp/openssl wget https://www.openssl.org/source/openssl-1.1.1s.tar.gz tar xzvf openssl-1.1.1s.tar.gz cd openssl-1.1.1s ./config --prefix=/usr/local/openssl11 --openssldir=/usr/local/openssl11 shared zlib make make install echo /usr/local/openssl11/lib/ >> /etc/ld.so.conf ldconfig","title":"Amazon Linux 2023"},{"location":"implementation-guide/uninstall/","text":"Uninstall the Centralized Logging with OpenSearch Warning You will encounter an IAM role missing error if you delete the Centralized Logging with OpenSearch main stack before you delete the log pipelines. The Centralized Logging with OpenSearch console launches additional CloudFormation stacks to ingest logs. If you want to uninstall the Centralized Logging with OpenSearch solution, we recommend that you delete the log pipelines (including
AWS Service log pipelines and application log pipelines) before uninstalling the solution. Step 1. Delete Application Log Pipelines Important Please delete all the log ingestions before deleting an application log pipeline. Go to the Centralized Logging with OpenSearch console, and in the left sidebar, choose Application Log . Click the application log pipeline to view details. In the ingestion tab, delete all the application log ingestions in the pipeline. Uninstall/Disable the Fluent Bit agent. EC2 (Optional): after removing the log ingestion from the EC2 instance group, Fluent Bit will automatically stop shipping logs, so it is optional for you to stop Fluent Bit on your instances. Here are the commands for stopping the Fluent Bit agent. sudo service fluent-bit stop sudo systemctl disable fluent-bit.service EKS DaemonSet (Mandatory): if you have chosen to deploy the Fluent Bit agent using DaemonSet, you need to delete your Fluent Bit agent. Otherwise, the agent will continue shipping logs to Centralized Logging with OpenSearch pipelines. kubectl delete -f ~/fluent-bit-logging.yaml EKS SideCar (Mandatory): please remove the fluent-bit agent in your .yaml file, and restart your pod. Delete the Application Log pipeline. Repeat Steps 2 to 5 to delete all your application log pipelines. Step 2. Delete AWS Service Log Pipelines Go to the Centralized Logging with OpenSearch console, and in the left sidebar, choose AWS Service Log . Select and delete the AWS Service Log Pipelines one by one. Step 3. Clean up imported OpenSearch domains Delete the Access Proxy , if you have created the proxy using the Centralized Logging with OpenSearch console. Delete Alarms , if you have created alarms using the Centralized Logging with OpenSearch console. Delete the VPC peering connection between Centralized Logging with OpenSearch's VPC and OpenSearch's VPC. Go to the AWS VPC Console . Choose Peering connections in the left sidebar. Find and delete the VPC peering connection between the Centralized Logging with OpenSearch's VPC and OpenSearch's VPC. You may not have Peering Connections if you did not use the \"Automatic\" mode when importing OpenSearch domains. (Optional) Remove imported OpenSearch Domains. (This will not delete the Amazon OpenSearch domain in the AWS account.) Step 4. Delete Centralized Logging with OpenSearch stack Go to the CloudFormation console . Find the CloudFormation stack of the Centralized Logging with OpenSearch solution. (Optional) Delete S3 buckets created by Centralized Logging with OpenSearch. Important The S3 bucket whose name contains LoggingBucket is the centralized bucket for your AWS service logs. You might have enabled AWS services to send logs to this S3 bucket. Deleting this bucket will cause these AWS services to fail to send logs. Choose the CloudFormation stack of the Centralized Logging with OpenSearch solution, and select the Resources tab. In the search bar, enter AWS::S3::Bucket . This will show all the S3 buckets created by the Centralized Logging with OpenSearch solution, and the Physical ID field is the S3 bucket name. Go to the S3 console, and find each S3 bucket using the bucket name. Empty and Delete the S3 bucket.
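If you prefer the command line, emptying and removing a bucket could look like the following sketch; the bucket name is a placeholder, and for versioned buckets you also need to delete old object versions, for example via the console's Empty action:
aws s3 rm s3://<your-logging-bucket> --recursive
aws s3 rb s3://<your-logging-bucket>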
Delete the CloudFormation stack of the Centralized Logging with OpenSearch solution.","title":"Uninstall the solution"},{"location":"implementation-guide/uninstall/#uninstall-the-centralized-logging-with-opensearch","text":"Warning You will encounter an IAM role missing error if you delete the Centralized Logging with OpenSearch main stack before you delete the log pipelines. The Centralized Logging with OpenSearch console launches additional CloudFormation stacks to ingest logs. If you want to uninstall the Centralized Logging with OpenSearch solution, we recommend that you delete the log pipelines (including AWS Service log pipelines and application log pipelines) before uninstalling the solution.","title":"Uninstall the Centralized Logging with OpenSearch"},{"location":"implementation-guide/uninstall/#step-1-delete-application-log-pipelines","text":"Important Please delete all the log ingestions before deleting an application log pipeline. Go to the Centralized Logging with OpenSearch console, and in the left sidebar, choose Application Log . Click the application log pipeline to view details. In the ingestion tab, delete all the application log ingestions in the pipeline. Uninstall/Disable the Fluent Bit agent. EC2 (Optional): after removing the log ingestion from the EC2 instance group, Fluent Bit will automatically stop shipping logs, so it is optional for you to stop Fluent Bit on your instances. Here are the commands for stopping the Fluent Bit agent. sudo service fluent-bit stop sudo systemctl disable fluent-bit.service EKS DaemonSet (Mandatory): if you have chosen to deploy the Fluent Bit agent using DaemonSet, you need to delete your Fluent Bit agent. Otherwise, the agent will continue shipping logs to Centralized Logging with OpenSearch pipelines. kubectl delete -f ~/fluent-bit-logging.yaml EKS SideCar (Mandatory): please remove the fluent-bit agent in your .yaml file, and restart your pod. Delete the Application Log pipeline. Repeat Steps 2 to 5 to delete all your application log pipelines.","title":"Step 1. Delete Application Log Pipelines"},{"location":"implementation-guide/uninstall/#step-2-delete-aws-service-log-pipelines","text":"Go to the Centralized Logging with OpenSearch console, and in the left sidebar, choose AWS Service Log . Select and delete the AWS Service Log Pipelines one by one.","title":"Step 2. Delete AWS Service Log Pipelines"},{"location":"implementation-guide/uninstall/#step-3-clean-up-imported-opensearch-domains","text":"Delete the Access Proxy , if you have created the proxy using the Centralized Logging with OpenSearch console. Delete Alarms , if you have created alarms using the Centralized Logging with OpenSearch console. Delete the VPC peering connection between Centralized Logging with OpenSearch's VPC and OpenSearch's VPC. Go to the AWS VPC Console . Choose Peering connections in the left sidebar. Find and delete the VPC peering connection between the Centralized Logging with OpenSearch's VPC and OpenSearch's VPC. You may not have Peering Connections if you did not use the \"Automatic\" mode when importing OpenSearch domains. (Optional) Remove imported OpenSearch Domains. (This will not delete the Amazon OpenSearch domain in the AWS account.)","title":"Step 3. Clean up imported OpenSearch domains"},{"location":"implementation-guide/uninstall/#step-4-delete-centralized-logging-with-opensearch-stack","text":"Go to the CloudFormation console . Find the CloudFormation stack of the Centralized Logging with OpenSearch solution. (Optional) Delete S3 buckets created by Centralized Logging with OpenSearch. Important The S3 bucket whose name contains LoggingBucket is the centralized bucket for your AWS service logs. You might have enabled AWS services to send logs to this S3 bucket. Deleting this bucket will cause these AWS services to fail to send logs. Choose the CloudFormation stack of the Centralized Logging with OpenSearch solution, and select the Resources tab.
In the search bar, enter AWS::S3::Bucket . This will show all the S3 buckets created by the Centralized Logging with OpenSearch solution, and the Physical ID field is the S3 bucket name. Go to the S3 console, and find each S3 bucket using the bucket name. Empty and Delete the S3 bucket. Delete the CloudFormation stack of the Centralized Logging with OpenSearch solution.","title":"Step 4. Delete Centralized Logging with OpenSearch stack"},{"location":"implementation-guide/applications/","text":"Application Log Analytics Pipelines Centralized Logging with OpenSearch supports ingesting application logs from the following log sources: Amazon EC2 instance group : the solution automatically installs the log agent (Fluent Bit 1.9), collects application logs on EC2 instances, and then sends the logs into Amazon OpenSearch.
Currently, Centralized Logging with OpenSearch only supports Fluent Bit 1.9 log agent which is installed automatically. The Fluent Bit agent has a dependency of OpenSSL 1.1 . To learn how to install OpenSSL on Linux instances, refer to OpenSSL installation . To find the supported platforms by Fluent Bit, refer to this link . Log Buffer Log Buffer is a buffer layer between the Log Agent and OpenSearch clusters. The agent uploads logs into the buffer layer before being processed and delivered into the OpenSearch clusters. A buffer layer is a way to protect OpenSearch clusters from overwhelming. This solution provides the following types of buffer layers. Amazon S3 . Use this option if you can bear minutes-level latency for log ingestion. The log agent periodically uploads logs to an Amazon S3 bucket. The frequency of data delivery to Amazon S3 is determined by Buffer size (default value is 50 MiB) and Buffer interval (default value is 60 seconds) value that you configured when creating the application log analytics pipelines. The condition satisfied first triggers data delivery to Amazon S3. Amazon Kinesis Data Streams . Use this option if you need real-time log ingestion. The log agent uploads logs to Amazon Kinesis Data Stream in seconds. The frequency of data delivery to Kinesis Data Streams is determined by Buffer size (10 MiB) and Buffer interval (5 seconds). The condition satisfied first triggers data delivery to Kinesis Data Streams. Log Buffer is optional when creating an application log analytics pipeline. For all types of application logs, this solution allows you to ingest logs without any buffer layers. However, we only recommend this option when you have small log volume, and you are confident that the logs will not exceed the thresholds at the OpenSearch side. Log Source A Log Source refers to a location where you want Centralized Logging with OpenSearch to collect application logs from. Supported log sources includes: Amazon EC2 Instance Group Amazon EKS Cluster Amazon S3 Syslog Instance Group An instance group is a collection of EC2 instances from which you want to collect application logs. Centralized Logging with OpenSearch can help you install the log agent in each instance within a group. You can select arbitrary instances through the user interface, or choose an EC2 Auto Scaling Group . EKS Cluster The EKS Cluster in Centralized Logging with OpenSearch refers to the Amazon EKS from which you want to collect pod logs. Centralized Logging with OpenSearch will guide you to deploy the log agent as a DaemonSet or Sidecar in the EKS Cluster. Amazon S3 Centralized Logging with OpenSearch supports collectings logs stored in an Amazon S3 bucket. Syslog Centralized Logging with OpenSearch supports collecting syslog logs through UDP or TCP protocol. Log Config A Log Config is a configuration that defines the format of logs (that is, what fields each log line includes, and the data type of each field), based on which the Log Analytics Pipeline parses the logs before ingesting them into log storage. Log Config also allows you to define filters of the logs based on the fields in the logs.","title":"Overview"},{"location":"implementation-guide/applications/#application-log-analytics-pipelines","text":"Centralized Logging with OpenSearch supports ingesting application logs from the following log sources: Amazon EC2 instance group : the solution automatically installs log agent (Fluent Bit 1.9), collects application logs on EC2 instances and then sends logs into Amazon OpenSearch. 
Amazon EKS cluster : the solution generates an all-in-one configuration file for customers to deploy the log agent (Fluent Bit 1.9) as a DaemonSet or Sidecar. After the log agent is deployed, the solution starts collecting pod logs and sending them to Amazon OpenSearch Service. Amazon S3 : the solution either continuously ingests logs from the specified Amazon S3 location or performs a one-time ingestion. You can also filter logs based on an Amazon S3 prefix or parse logs with a custom Log Config. Syslog : the solution collects syslog logs through the UDP or TCP protocol. Amazon OpenSearch Service is suitable for real-time log analytics and frequent queries and has full-text search capability. As of release 2.1.0, the solution supports log ingestion into Light Engine, which is suitable for non-real-time log analytics and infrequent queries and has SQL-like search capability. This feature is supported when you choose an Amazon EC2 instance group or an Amazon EKS cluster as the log source. After creating a log analytics pipeline, you can add more log sources to it. For more information, see add a new log source . Important If you are using Centralized Logging with OpenSearch to create an application log pipeline for the first time, we recommend that you learn the concepts and the supported log formats and log sources .","title":"Application Log Analytics Pipelines"},{"location":"implementation-guide/applications/#supported-log-formats-and-log-sources","text":"The following table lists the log formats supported by each log source. For more information about how to create log ingestion for each log format, refer to Log Config . Log Format EC2 Instance Group EKS Cluster Amazon S3 Syslog Nginx Yes Yes Yes No Apache HTTP Server Yes Yes Yes No JSON Yes Yes Yes Yes Single-line Text Yes Yes Yes Yes Multi-line Text Yes Yes Yes No Multi-line Text (Spring Boot) Yes Yes Yes No Syslog RFC5424/RFC3164 No No No Yes Syslog Custom No No No Yes","title":"Supported Log Formats and Log Sources"},{"location":"implementation-guide/applications/#concepts","text":"The following concepts help you understand how application log ingestion works.","title":"Concepts"},{"location":"implementation-guide/applications/#application-log-analytics-pipeline","text":"To collect application logs, a data pipeline is needed. The pipeline not only buffers the data in transit but also cleans or pre-processes it, for example, transforming an IP address into a geographic location. Currently, Kinesis Data Streams is used as the data buffer for the EC2 log source.","title":"Application Log Analytics Pipeline"},{"location":"implementation-guide/applications/#log-ingestion","text":"A log ingestion configures the Log Source, the Log Config, and the Application Log Analytics Pipeline for the log agent used by Centralized Logging with OpenSearch. After that, Centralized Logging with OpenSearch starts collecting the specified type of logs from the log source and sending them to Amazon OpenSearch Service.","title":"Log Ingestion"},{"location":"implementation-guide/applications/#log-agent","text":"A log agent is a program that reads logs from one location and sends them to another location (for example, OpenSearch). Currently, Centralized Logging with OpenSearch supports only the Fluent Bit 1.9 log agent, which is installed automatically. The Fluent Bit agent depends on OpenSSL 1.1 . To learn how to install OpenSSL on Linux instances, refer to OpenSSL installation . 
To find the platforms supported by Fluent Bit, refer to this link .","title":"Log Agent"},{"location":"implementation-guide/applications/#log-buffer","text":"Log Buffer is a buffer layer between the Log Agent and the OpenSearch clusters. The agent uploads logs into the buffer layer before they are processed and delivered to the OpenSearch clusters. A buffer layer protects the OpenSearch clusters from being overwhelmed. This solution provides the following types of buffer layers. Amazon S3 . Use this option if you can tolerate minutes-level latency for log ingestion. The log agent periodically uploads logs to an Amazon S3 bucket. The frequency of data delivery to Amazon S3 is determined by the Buffer size (default value is 50 MiB) and Buffer interval (default value is 60 seconds) values that you configured when creating the application log analytics pipeline. Whichever condition is satisfied first triggers data delivery to Amazon S3. Amazon Kinesis Data Streams . Use this option if you need real-time log ingestion. The log agent uploads logs to an Amazon Kinesis data stream in seconds. The frequency of data delivery to Kinesis Data Streams is determined by the Buffer size (10 MiB) and Buffer interval (5 seconds). Whichever condition is satisfied first triggers data delivery to Kinesis Data Streams. Log Buffer is optional when creating an application log analytics pipeline. For all types of application logs, this solution allows you to ingest logs without any buffer layer. However, we recommend this option only when you have a small log volume and you are confident that the logs will not exceed the thresholds on the OpenSearch side.","title":"Log Buffer"},{"location":"implementation-guide/applications/#log-source","text":"A Log Source refers to a location from which you want Centralized Logging with OpenSearch to collect application logs. Supported log sources include: Amazon EC2 Instance Group Amazon EKS Cluster Amazon S3 Syslog","title":"Log Source"},{"location":"implementation-guide/applications/#instance-group","text":"An instance group is a collection of EC2 instances from which you want to collect application logs. Centralized Logging with OpenSearch can help you install the log agent on each instance within a group. You can select arbitrary instances through the user interface, or choose an EC2 Auto Scaling Group .","title":"Instance Group"},{"location":"implementation-guide/applications/#eks-cluster","text":"The EKS Cluster in Centralized Logging with OpenSearch refers to the Amazon EKS cluster from which you want to collect pod logs. Centralized Logging with OpenSearch guides you to deploy the log agent as a DaemonSet or Sidecar in the EKS cluster.","title":"EKS Cluster"},{"location":"implementation-guide/applications/#amazon-s3","text":"Centralized Logging with OpenSearch supports collecting logs stored in an Amazon S3 bucket.","title":"Amazon S3"},{"location":"implementation-guide/applications/#syslog","text":"Centralized Logging with OpenSearch supports collecting syslog logs through the UDP or TCP protocol.","title":"Syslog"},{"location":"implementation-guide/applications/#log-config","text":"A Log Config is a configuration that defines the format of logs (that is, which fields each log line includes, and the data type of each field), based on which the Log Analytics Pipeline parses the logs before ingesting them into log storage. 
Log Config also allows you to define log filters based on the fields in the logs.","title":"Log Config"},{"location":"implementation-guide/applications/create-log-config/","text":"Log Config The Centralized Logging with OpenSearch solution supports creating log configs for the following log formats: JSON Apache Nginx Syslog Single-line text Multi-line text For more information, refer to supported log formats and log sources . The following describes how to create a log config for each log format. Create a JSON config Sign in to the Centralized Logging with OpenSearch Console. In the left sidebar, under Resources , choose Log Config . Choose Create a log config . Specify Config Name . Specify Log Path . You can use , to separate multiple paths. Choose JSON in the log type dropdown list. In the Sample log parsing section, paste a sample JSON log and click Parse log to verify that the log parsing is successful. The JSON type supports nested JSON with a maximum nesting depth of X. If your JSON log sample is nested JSON, choosing Parse Log displays a list of field type options for each layer, and you can set the corresponding field type for each layer of fields if needed. If you choose Remove to delete a field, the field type will be automatically inferred by OpenSearch. For example: {\"timestamp\": \"2023-11-06T08:29:55.266Z\", \"correlationId\": \"566829027325526589\", \"processInfo\": { \"startTime\": \"2023-11-06T08:29:55.266Z\", \"hostname\": \"ltvtix0apidev01\", \"domainId\": \"e6826d97-a60f-45cb-93e1-b4bb5a7add29\", \"groupId\": \"group-2\", \"groupName\": \"grp_dev_bba\", \"serviceId\": \"instance-1\", \"serviceName\": \"ins_dev_bba\", \"version\": \"7.7.20210130\" }, \"transactionSummary\": { \"path\": \"https://www.leadmission-critical.info/relationships\", \"protocol\": \"https\", \"protocolSrc\": \"97\", \"status\": \"exception\", \"serviceContexts\": [ { \"service\": \"NSC_APP-117127_DCTM_Get Documentum Token\", \"monitor\": true, \"client\": \"Pass Through\", \"org\": null, \"app\": null, \"method\": \"getTokenUsingPOST\", \"status\": \"exception\", \"duration\": 25270 } ] } } Check whether each field's type mapping is correct. You can change the type by selecting the dropdown menu in the second column. For all supported types, see Data Types . Note You must specify the datetime of the log using key \u201ctime\u201d. If not specified, system time will be added. For nested JSON, the Time Key must be on the first level. Specify the Time format . The format syntax follows strptime . Check this for details. (Optional) In the Filter section, you can add conditions to filter logs on the log agent side. The solution ingests only logs that match ALL the specified conditions. Select Create . Create an Apache HTTP server log config Apache HTTP Server (httpd) is capable of writing error and access log files to a local directory. You can configure Centralized Logging with OpenSearch to ingest Apache HTTP server logs. Sign in to the Centralized Logging with OpenSearch Console. In the left sidebar, under Resources , choose Log Config . Click the Create a log config button. Specify Config Name . Specify Log Path . You can use , to separate multiple paths. Choose Apache HTTP server in the log type dropdown menu. In the Apache Log Format section, paste your Apache HTTP server log format configuration. It is in the format of /etc/httpd/conf/httpd.conf and starts with LogFormat . 
For example: LogFormat \"%h %l %u %t \\\"%r\\\" %>s %b \\\"%{Referer}i\\\" \\\"%{User-Agent}i\\\"\" combined (Optional) In the Sample log parsing section, paste a sample Apache HTTP server log to verify that the log parsing is successful. For example: 127.0.0.1 - - [22/Dec/2021:06:48:57 +0000] \"GET /xxx HTTP/1.1\" 404 196 \"-\" \"curl/7.79.1\" Choose Create . Create an Nginx log config Sign in to the Centralized Logging with OpenSearch Console. In the left sidebar, under Resources , choose Log Config . Click the Create a log config button. Specify Config Name . Specify Log Path . You can use , to separate multiple paths. Choose Nginx in the log type dropdown menu. In the Nginx Log Format section, paste your Nginx log format configuration. It is in the format of /etc/nginx/nginx.conf and starts with log_format . For example: log_format main '$remote_addr - $remote_user [$time_local] \"$request\" ' '$status $body_bytes_sent \"$http_referer\" ' '\"$http_user_agent\" \"$http_x_forwarded_for\"'; (Optional) In the Sample log parsing section, paste a sample Nginx log to verify that the log parsing is successful. For example: 127.0.0.1 - - [24/Dec/2021:01:27:11 +0000] \"GET / HTTP/1.1\" 200 3520 \"-\" \"curl/7.79.1\" \"-\" (Optional) In the Filter section, you can add conditions to filter logs on the log agent side. The solution ingests only logs that match ALL the specified conditions. Select Create . Create a Syslog config Sign in to the Centralized Logging with OpenSearch Console. In the left sidebar, under Resources , choose Log Config . Click the Create a log config button. Specify Config Name . Choose Syslog in the log type dropdown menu. Note that Centralized Logging with OpenSearch also supports Syslog with JSON format and single-line text format. RFC5424 Paste a sample RFC5424 log. For example: <35>1 2013-10-11T22:14:15Z client_machine su - - - 'su root' failed for joe on /dev/pts/2 Choose Parse Log . Check whether each field's type mapping is correct. You can change the type by selecting the dropdown menu in the second column. For all supported types, see Data Types . Note You must specify the datetime of the log using key \u201ctime\u201d. If not specified, system time will be added. Specify the Time format . The format syntax follows strptime . Check this manual for details. For example: %Y-%m-%dT%H:%M:%SZ (Optional) In the Filter section, you can add conditions to filter logs on the log agent side. The solution ingests only logs that match ALL the specified conditions. Select Create . RFC3164 Paste a sample RFC3164 log. For example: <35>Oct 12 22:14:15 client_machine su: 'su root' failed for joe on /dev/pts/2 Choose Parse Log . Check whether each field's type mapping is correct. You can change the type by selecting the dropdown menu in the second column. For all supported types, see Data Types . Note You must specify the datetime of the log using key \u201ctime\u201d. If not specified, system time will be added. Since there is no year in the timestamp of RFC3164, it cannot be displayed as a time histogram in the Discover interface of Amazon OpenSearch. Specify the Time format . The format syntax follows strptime . Check this for details. For example: %b %d %H:%M:%S (Optional) In the Filter section, you can add conditions to filter logs on the log agent side. The solution ingests only logs that match ALL the specified conditions. Select Create . Custom In the Syslog Format section, paste your Syslog log format configuration. 
It is in the format of /etc/rsyslog.conf and starts with template or $template . The format syntax follows Syslog Message Format . For example: <%pri%>1 %timestamp:::date-rfc3339% %HOSTNAME% %app-name% %procid% %msgid% %msg%\\n In the Sample log parsing section, paste a sample syslog log to verify that the log parsing is successful. For example: <35>1 2013-10-11T22:14:15.003Z client_machine su - - 'su root' failed for joe on /dev/pts/2 Check whether each field's type mapping is correct. Change the type by selecting the dropdown menu in the second column. For all supported types, see Data Types . Note You must specify the datetime of the log using key \u201ctime\u201d. If not specified, system time will be added. Specify the Time format . The format syntax follows strptime . Check this manual for details. (Optional) In the Filter section, you can add conditions to filter logs on the log agent side. The solution ingests only logs that match ALL the specified conditions. Select Create . Create a single-line text config Sign in to the Centralized Logging with OpenSearch Console. In the left sidebar, under Resources , choose Log Config . Click the Create a log config button. Specify Config Name . Specify Log Path . You can use , to separate multiple paths. Choose Single-line Text in the log type dropdown menu. Write the regular expression in Rubular to validate it first, and then enter the value. For example: (?<remote_addr>\S+)\s*-\s*(?<remote_user>\S+)\s*\[(?<time_local>\d+/\S+/\d+:\d+:\d+:\d+)\s+\S+\]\s*\"(?<request_method>\S+)\s+(?<request_uri>\S+)\s+\S+\"\s*(?<status>\S+)\s*(?<body_bytes_sent>\S+)\s*\"(?<http_referer>[^\"]*)\"\s*\"(?<http_user_agent>[^\"]*)\"\s*\"(?<http_x_forwarded_for>[^\"]*)\".* In the Sample log parsing section, paste a sample single-line text log and click Parse log to verify that the log parsing is successful. For example: 127.0.0.1 - - [24/Dec/2021:01:27:11 +0000] \"GET / HTTP/1.1\" 200 3520 \"-\" \"curl/7.79.1\" \"-\" Check whether each field's type mapping is correct. Change the type by selecting the dropdown menu in the second column. For all supported types, see Data Types . Note You must specify the datetime of the log using key \u201ctime\u201d. If not specified, system time will be added. Specify the Time format . The format syntax follows strptime . Check this manual for details. (Optional) In the Filter section, you can add conditions to filter logs on the log agent side. The solution ingests only logs that match ALL the specified conditions. Select Create . Create a multi-line text config Sign in to the Centralized Logging with OpenSearch Console. In the left sidebar, under Resources , choose Log Config . Click the Create a log config button. Specify Config Name . Specify Log Path . You can use , to separate multiple paths. Choose Multi-line Text in the log type dropdown menu. Java - Spring Boot For Java Spring Boot logs, you could provide a simple log format. For example: %d{yyyy-MM-dd HH:mm:ss.SSS} %-5level [%thread] %logger : %msg%n Paste a sample multi-line log. 
For example: 2022-02-18 10:32:26.400 ERROR [http-nio-8080-exec-1] org.apache.catalina.core.ContainerBase.[Tomcat].[localhost].[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is java.lang.ArithmeticException: / by zero] with root cause java.lang.ArithmeticException: / by zero at com.springexamples.demo.web.LoggerController.logs(LoggerController.java:22) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke Choose Parse Log . Check whether each field's type mapping is correct. You can change the type by selecting the dropdown menu in the second column. For all supported types, see Data Types . Note You must specify the datetime of the log using key \u201ctime\u201d. If not specified, system time will be added. Specify the Time format . The format syntax follows strptime . Check this for details. (Optional) In the Filter section, you can add conditions to filter logs on the log agent side. The solution ingests only logs that match ALL the specified conditions. Select Create . Custom For other kinds of logs, you could specify the first-line regex pattern. For example: (?
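The custom first-line pattern above is truncated in the source. As a general illustration of the technique, here is a minimal sketch in Python: a regex that matches only the first line of each record (the timestamp pattern below is hypothetical, modeled on the Spring Boot sample earlier) lets continuation lines such as stack-trace frames be glued onto the record that precedes them.

```python
import re

# Minimal sketch: group multi-line records by detecting first lines.
# The pattern is hypothetical -- it matches lines that begin with a
# "YYYY-MM-DD HH:MM:SS.mmm" timestamp, like the Spring Boot sample above.
FIRST_LINE = re.compile(r"^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}")

def group_records(lines):
    record = []
    for line in lines:
        if FIRST_LINE.match(line) and record:
            yield "\n".join(record)  # flush the previous record
            record = []
        record.append(line)
    if record:
        yield "\n".join(record)

log_lines = [
    "2022-02-18 10:32:26.400 ERROR [http-nio-8080-exec-1] ... / by zero",
    "java.lang.ArithmeticException: / by zero",
    "    at com.springexamples.demo.web.LoggerController.logs(LoggerController.java:22)",
    "2022-02-18 10:32:26.401 INFO [http-nio-8080-exec-1] request completed",
]
for record in group_records(log_lines):
    print(record)
    print("---")
```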
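Before pasting a single-line text regular expression like the one shown earlier into the console, you can also test it against a sample log line locally. A minimal sketch in Python follows; note that Python spells named groups as (?P&lt;name&gt;...), while Rubular (Ruby) accepts (?&lt;name&gt;...), and the group names here mirror the Nginx access log fields.

```python
import re

# Minimal sketch: test the single-line text regex against the sample log.
# Python requires (?P<name>...) for named groups; Rubular uses (?<name>...).
pattern = re.compile(
    r'(?P<remote_addr>\S+)\s*-\s*(?P<remote_user>\S+)\s*'
    r'\[(?P<time_local>\d+/\S+/\d+:\d+:\d+:\d+)\s+\S+\]\s*'
    r'"(?P<request_method>\S+)\s+(?P<request_uri>\S+)\s+\S+"\s*'
    r'(?P<status>\S+)\s*(?P<body_bytes_sent>\S+)\s*'
    r'"(?P<http_referer>[^"]*)"\s*"(?P<http_user_agent>[^"]*)"\s*'
    r'"(?P<http_x_forwarded_for>[^"]*)".*'
)

line = '127.0.0.1 - - [24/Dec/2021:01:27:11 +0000] "GET / HTTP/1.1" 200 3520 "-" "curl/7.79.1" "-"'
match = pattern.match(line)
print(match.groupdict())  # {'remote_addr': '127.0.0.1', ..., 'status': '200', ...}
```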
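Every Time format on this page follows the strptime syntax, so a format string can be sanity-checked against a sample timestamp before saving a config. A minimal sketch in Python; Python's strptime dialect uses %f for fractional seconds, so the exact token set may differ slightly from the agent-side syntax.

```python
from datetime import datetime

# Minimal sketch: validate strptime-style Time formats against sample
# timestamps taken from this page.
print(datetime.strptime("2013-10-11T22:14:15Z", "%Y-%m-%dT%H:%M:%SZ"))
print(datetime.strptime("2023-11-06T08:29:55.266Z", "%Y-%m-%dT%H:%M:%S.%fZ"))

# RFC3164 timestamps carry no year, so strptime defaults it to 1900.
# This is why RFC3164 logs cannot be drawn on a time histogram as-is.
print(datetime.strptime("Oct 12 22:14:15", "%b %d %H:%M:%S"))  # year 1900
```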
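Returning to the Log Buffer concept described in the overview, the rule that whichever condition is satisfied first triggers data delivery can be made concrete with a small calculation. This is a minimal sketch with hypothetical log rates; the thresholds are the defaults described earlier (Amazon S3: 50 MiB / 60 seconds, Kinesis Data Streams: 10 MiB / 5 seconds).

```python
# Minimal sketch: estimate which buffer condition fires first.
# The log rates below are hypothetical; the thresholds match the
# defaults described in the Log Buffer section above.

def delivery_delay_seconds(log_rate_mib_per_s, buffer_size_mib, buffer_interval_s):
    """Expected delay before a batch is delivered to the buffer layer."""
    time_to_fill = buffer_size_mib / log_rate_mib_per_s
    # Whichever condition is satisfied first triggers delivery.
    return min(time_to_fill, buffer_interval_s)

print(delivery_delay_seconds(0.2, 50, 60))  # S3 buffer: 60 s, interval wins
print(delivery_delay_seconds(0.2, 10, 5))   # Kinesis: 5 s, interval wins
print(delivery_delay_seconds(5.0, 50, 60))  # S3 buffer: 10 s, size wins
```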

Architecture Diagram

Deploying this solution with the default parameters builds the following environment in the AWS Cloud.

arch Figure 1: Solution architecture diagram

This solution deploys the AWS CloudFormation template in your AWS Cloud account and completes the following settings.


Ingest logs via Amazon S3

1. Logs written directly to an Amazon S3 bucket (OpenSearch as the log analytics engine)

This AWS service can write logs directly to an Amazon S3 bucket.

arch-service-pipeline-s3 Figure 2: Architecture for analyzing AWS service logs ingested through Amazon S3

2. Logs delivered to an Amazon S3 bucket via Kinesis Data Firehose (KDF) (OpenSearch as the log analytics engine)

This AWS service cannot write logs directly to an Amazon S3 bucket and can only log to Amazon CloudWatch. KDF is used to subscribe to the logs from the CloudWatch log group first, and then deliver them to the Amazon S3 bucket (a code sketch of this subscription follows the figure below).

arch-service-pipeline-kdf-to-s3 Figure 3: Architecture for analyzing AWS service logs delivered to Amazon S3 through KDF
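The CloudWatch-to-KDF subscription in Figure 3 can be expressed in a few lines of code. This is a minimal sketch using boto3, where every ARN and name is a hypothetical placeholder rather than a resource created by the solution:

```python
import boto3

# Minimal sketch: subscribe a CloudWatch log group to a Kinesis Data
# Firehose delivery stream that buffers logs into an Amazon S3 bucket.
# All names and ARNs below are hypothetical placeholders.
logs = boto3.client("logs")

logs.put_subscription_filter(
    logGroupName="/aws/example/service-logs",
    filterName="to-firehose",
    filterPattern="",  # an empty pattern forwards every log event
    destinationArn="arn:aws:firehose:us-east-1:111122223333:deliverystream/example-to-s3",
    roleArn="arn:aws:iam::111122223333:role/CWLtoFirehoseRole",
)
```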

3.

Ingest logs via Amazon S3

4. Logs written directly to an Amazon S3 bucket (Light Engine as the log analytics engine)

This AWS service can write logs directly to an Amazon S3 bucket.

arch-service-pipeline-s3-lightengine Figure: Architecture for analyzing AWS service logs ingested through Amazon S3

5.

Ingest logs via Amazon Kinesis Data Streams (KDS
6. Real-time logs written directly to Amazon KDS

This AWS service can send logs directly to Amazon KDS.

arch-service-pipeline-kds Figure 4: Architecture for analyzing AWS service logs through KDS

7. Real-time streaming logs subscribed via Amazon KDS

This AWS service writes logs to Amazon CloudWatch. KDS is used to subscribe to the real-time streaming logs from the CloudWatch log group.

arch-service-pipeline-cwl-to-kds Figure 5: Architecture for analyzing AWS service logs subscribed through KDS

8.

Application Log Analytics Pipelines

Logs from Amazon EC2 / Amazon EKS

The log pipeline runs the following workflow:


Logs from Amazon EC2 / Amazon EKS

arch-app-log-pipeline-lighengine Figure: Application log analytics architecture

The log pipeline runs the following workflow:


Logs from Syslog Clients

1. The NLB and ECS containers in the architecture diagram are provisioned only when you create a Syslog ingestion, and are automatically removed when there is no Syslog ingestion.
arch-syslog-pipeline Figure 7: Application log pipeline architecture for Syslog

1.
diff --git a/zh/implementation-guide/aws-services/cloudfront/index.html b/zh/implementation-guide/aws-services/cloudfront/index.html

Using the CloudFormation stack

AWS Regions: Launch Stack | Template
AWS China Regions: Launch Stack | Template

Using the CloudFormation stack

AWS Regions: Launch Stack | Template
AWS China Regions: Launch Stack | Template

Viewing the dashboard

cloudfront-db

diff --git a/zh/implementation-guide/aws-services/cloudtrail/index.html b/zh/implementation-guide/aws-services/cloudtrail/index.html

Using a standalone CloudFormation stack

AWS Regions: Launch Stack | Template
AWS China Regions: Launch Stack | Template

Sample Dashboard

You can click the image below to view the high-resolution sample dashboard.

cloudtrail-db

diff --git a/zh/implementation-guide/aws-services/config/index.html b/zh/implementation-guide/aws-services/config/index.html

Using the CloudFormation stack

AWS Regions: Launch Stack | Template
AWS China Regions: Launch Stack | Template

Sample Dashboard

You can click the image below to view the high-resolution sample dashboard.

config-db

diff --git a/zh/implementation-guide/aws-services/elb/index.html b/zh/implementation-guide/aws-services/elb/index.html

Using the CloudFormation stack

AWS Regions: Launch Stack | Template
AWS China Regions: Launch Stack | Template

Sample Dashboard

You can click the image below to view the high-resolution sample dashboard.

elb-db

Create log ingestion (Light Engine)

Using the Centralized Logging with OpenSearch console


Using the CloudFormation stack

AWS Regions: Launch Stack | Template
AWS China Regions: Launch Stack | Template
diff --git a/zh/implementation-guide/aws-services/lambda/index.html b/zh/implementation-guide/aws-services/lambda/index.html

Using the CloudFormation stack

AWS Regions: Launch Stack | Template
AWS China Regions: Launch Stack | Template

Sample Dashboard

You can click the image below to view the high-resolution sample dashboard.

lambda-db

diff --git a/zh/implementation-guide/aws-services/rds/index.html b/zh/implementation-guide/aws-services/rds/index.html

Using the CloudFormation stack

AWS Regions: Launch Stack | Template
AWS China Regions: Launch Stack | Template
diff --git a/zh/implementation-guide/aws-services/s3/index.html b/zh/implementation-guide/aws-services/s3/index.html

Using the CloudFormation stack

AWS Regions: Launch Stack | Template
AWS China Regions: Launch Stack | Template

Sample Dashboard

You can click the image below to view the high-resolution sample dashboard.

s3-db

diff --git a/zh/implementation-guide/aws-services/vpc/index.html b/zh/implementation-guide/aws-services/vpc/index.html

Using the CloudFormation stack

AWS Regions: Launch Stack | Template
AWS China Regions: Launch Stack | Template

Sample Dashboard

You can click the image below to view the high-resolution sample dashboard.

vpcflow-db

diff --git a/zh/implementation-guide/aws-services/waf/index.html b/zh/implementation-guide/aws-services/waf/index.html

Using the CloudFormation stack

AWS Regions (full requests): Launch Stack | Template
AWS China Regions (full requests): Launch Stack | Template
AWS Regions (sampled requests): Launch Stack | Template
AWS China Regions (sampled requests): Launch Stack | Template

Sample Dashboard

You can click the image below to view the high-resolution sample dashboard.

waf-db

Create log ingestion (Light Engine)

Using the Centralized Logging with OpenSearch console


Using the CloudFormation stack

AWS Regions (full requests): Launch Stack | Template
AWS China Regions (full requests): Launch Stack | Template
diff --git a/zh/implementation-guide/deployment/with-cognito/index.html b/zh/implementation-guide/deployment/with-cognito/index.html

Step 1. Launch the stack

Deploy from a new VPC: Launch Stack
Deploy from an existing VPC: Launch Stack
diff --git a/zh/implementation-guide/deployment/with-oidc/index.html b/zh/implementation-guide/deployment/with-oidc/index.html

(Option 1) Use a Cognito user pool in another Region
1. Follow this guide to add an app client and set up the hosted UI.
2. For App type, choose Public client.
3. When filling in the Allowed callback URLs and Allowed sign-out URLs, use the domain name prepared for the Centralized Logging with OpenSearch console. After your hosted UI is set up successfully, you can see the resulting status in the console.
4. Save the Client ID and User pool ID to a text file for later use.

In Step 2. Launch the stack, the OidcClientID is the Client ID, and the OidcProvider is https://cognito-idp.${REGION}.amazonaws.com/${USER_POOL_ID}
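For clarity, here is a minimal sketch showing how the OidcProvider value is assembled from the Region and the user pool ID (both values below are hypothetical placeholders):

```python
# Minimal sketch: derive the OidcProvider value from the Cognito user pool.
# Both values below are hypothetical placeholders; substitute your own.
region = "us-east-1"                  # the Region where the user pool lives
user_pool_id = "us-east-1_xxxxxxxx"   # your user pool ID

oidc_provider = f"https://cognito-idp.{region}.amazonaws.com/{user_pool_id}"
print(oidc_provider)
```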

(Option 2) Authing.cn OIDC client


(Option 2) Authing.cn OIDC client

1. Enter the App Name and the Authentication Address.
2. Save the App ID (that is, the client_id) and the Issuer from the Endpoint Information section to a text file for later use.

        3. Login Callback URLLogout Callback URL更新为IPC记录的域名。

4. Set the following authorization configuration.

You have successfully created a self-built authentication application.

(Option 3) Keycloak OIDC client


(Option 3) Keycloak OIDC client

2. In the left navigation bar, choose Add Realm. Skip this step if you already have a realm.
3. Go to the realm settings page. Choose Endpoints, and then choose OpenID Endpoint Configuration from the list.


4. In the JSON file opened in your browser, record the issuer value for later use.


5. Return to the Keycloak console, choose Clients in the left navigation bar, and then choose Create.


(Option 4) Use ADFS OpenID Connect

In Windows PowerShell on the ADFS server, run the following command to get the ADFS issuer, which looks like https://adfs.domain.com/adfs:

        Get-ADFSProperties | Select IdTokenIssuer
         

Step 2. Launch the stack


Step 2. Launch the stack

Deploy from a new VPC in AWS Regions: Launch Stack
Deploy from an existing VPC in AWS Regions: Launch Stack
Deploy from a new VPC in AWS China Regions: Launch Stack
Deploy from an existing VPC in AWS China Regions: Launch Stack
diff --git a/zh/implementation-guide/domains/alarms/index.html b/zh/implementation-guide/domains/alarms/index.html

Using the CloudFormation stack

1. Sign in to the AWS Management Console and choose the button to launch the AWS CloudFormation template.

Launch Stack

You can also download the template to start the deployment.

2.

Using the CloudFormation stack

The CREATE_COMPLETE status appears in approximately 5 minutes.

After the alarm is created, a confirmation email will be sent to your email address. You need to click the *Confirm* link in the email.

You can access the newly created alarms in CloudWatch through the General configuration > Alarms > CloudWatch alarms link in the Centralized Logging with OpenSearch console, as shown in the figure below:


Make sure all alarms are in the OK state, because any alarm that fires before you confirm the email subscription will not send a notification email.

Note

diff --git a/zh/implementation-guide/domains/import/index.html b/zh/implementation-guide/domains/import/index.html

Prerequisites

3. At least one Amazon OpenSearch Service domain within a VPC. If you do not have an Amazon OpenSearch Service domain yet, you can create one in a VPC. See Launch your Amazon OpenSearch Service domain within a VPC.
4. Centralized Logging with OpenSearch supports only Amazon OpenSearch Service domains with fine-grained access control enabled. In the security configuration, your access policy should look similar to the figure below.

Import an Amazon OpenSearch Service domain


Set up VPC peering

Note

Automatic mode creates the VPC peering and configures the route tables automatically. You do not need to set up VPC peering again.


Follow the steps below to create a VPC peering connection, update the security groups, and update the route tables.

Create a VPC peering connection

diff --git a/zh/implementation-guide/domains/proxy/index.html b/zh/implementation-guide/domains/proxy/index.html

Access Proxy

Architecture

Centralized Logging with OpenSearch creates an Auto Scaling group (ASG) and an Application Load Balancer (ALB).

Proxy stack architecture

The workflow is as follows:

1.

Using the CloudFormation stack

1. Sign in to the AWS Management Console and choose the button to launch the AWS CloudFormation template.

Launch Stack

You can also download the template to start the deployment.

2.

Create an associated DNS record

Access Amazon OpenSearch Service through the proxy

After the DNS record takes effect, you can access the built-in Amazon OpenSearch Service dashboards from anywhere through the proxy. You can enter the proxy domain in your browser, or click the link button under Access Proxy in the General configuration section.

Access proxy link

Delete the proxy

1. Sign in to the Centralized Logging with OpenSearch console.
2.
diff --git a/zh/implementation-guide/getting-started/2.create-proxy/index.html b/zh/implementation-guide/getting-started/2.create-proxy/index.html

Create an Nginx proxy

3. Enter the Domain Name.
4. Choose the associated Load Balancer SSL certificate that applies to the domain name.
5. Choose the Nginx Instance Key Name.
6. Choose Create.

After configuring the proxy infrastructure, you need to create an associated DNS record in your DNS resolver. The following describes how to find the Application Load Balancer (ALB) domain, and then create a CNAME record pointing to it.

diff --git a/zh/implementation-guide/trouble-shooting/index.html b/zh/implementation-guide/trouble-shooting/index.html

          Error: Unable to add backend role

Centralized Logging with OpenSearch supports only Amazon OpenSearch Service domains with fine-grained access control enabled.

You need to go to the Amazon OpenSearch Service console and edit the access policy of the Amazon OpenSearch Service domain.

Error: User xxx is not authorized to perform sts:AssumeRole on resource


If you see this error, first make sure you filled in all the information correctly in the cross-account log ingestion setup. Then wait about one minute and retry.

Centralized Logging with OpenSearch uses AssumeRole to list or create AWS resources in your member accounts. These IAM roles are created when you set up cross-account log ingestion, and they take a few seconds or minutes to take effect.


          Error: P

You can go to Amazon EKS IAM role configuration for more information.

My CloudFormation stack is stuck deleting an AWS::Lambda::Function resource when I update the stack. How do I resolve it?


The Lambda function resides in a VPC; you need to wait for the associated ENI resources to be deleted.

After restarting the EC2 instance, the agent status is offline. How do I make it start automatically on instance restart?

This usually happens when the log agent is installed but the instance restarts before any log ingestion is created. The log agent restarts automatically if there is at least one log ingestion. If you have a log ingestion but the problem persists, you can use systemctl status fluent-bit to check the agent status inside the instance.

        diff --git a/zh/sitemap.xml.gz b/zh/sitemap.xml.gz index 1a162a4c..842cd401 100644 Binary files a/zh/sitemap.xml.gz and b/zh/sitemap.xml.gz differ