
Commit

Merge pull request #217 from James96315/main
Update to version v2.1.0
James96315 authored Nov 16, 2023
2 parents f6f8549 + 1d326eb commit 7fec31f
Showing 1,032 changed files with 200,212 additions and 17,497 deletions.
14 changes: 14 additions & 0 deletions CHANGELOG.md
@@ -5,6 +5,20 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [2.1.0] - 2023-11-15

### Added

- Added Light Engine, an Athena-based, serverless, and cost-effective log analytics engine for analyzing infrequently accessed logs
- Added OpenSearch Ingestion (OSI) to provide more log processing capabilities; OSI provisions compute resources as OpenSearch Compute Units (OCUs) and is billed per ingestion capacity
- Supported parsing logs in nested JSON format
- Supported manual ingestion of CloudTrail logs from a specified bucket

### Fixed

- Fixed the issue that the solution could not list instances when creating instance groups #214
- Fixed the issue that EC2 instances launched by the Auto Scaling group failed to pass the health check #202

## [2.0.1] - 2023-09-27

### Fixed
30 changes: 13 additions & 17 deletions CONTRIBUTING.md
@@ -6,56 +6,52 @@ documentation, we greatly value feedback and contributions from our community.
Please read through this document before submitting any issues or pull requests to ensure we have all the necessary
information to effectively respond to your bug report or contribution.

## Reporting Bugs/Feature Requests

We welcome you to use the GitHub issue tracker to report bugs or suggest features.

When filing an issue, please check [existing open](https://github.com/aws-solutions/centralized-logging-with-opensearch/issues) or [recently closed](https://github.com/aws-solutions/centralized-logging-with-opensearch/issues?utf8=%E2%9C%93&q=is%3Aissue%20is%3Aclosed%20) issues to make sure somebody else hasn't already reported the issue. Please try to include as much information as you can. Details like these are incredibly useful:

- A reproducible test case or series of steps
- The version of our code being used
- Any modifications you've made relevant to the bug
- Anything unusual about your environment or deployment

## Contributing via Pull Requests

Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that:

1. You are working against the latest source on the _main_ branch.
2. You check existing open and recently merged pull requests to make sure someone else hasn't already addressed the problem.
3. You open an issue to discuss any significant work - we would hate for your time to be wasted.

To send us a pull request, please:

1. Fork the repository.
2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change.
3. Ensure local tests pass.
4. Commit to your fork using clear commit messages.
5. Send us a pull request, answering any default questions in the pull request interface.
6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation.

GitHub provides additional documentation on [forking a repository](https://help.github.com/articles/fork-a-repo/) and
[creating a pull request](https://help.github.com/articles/creating-a-pull-request/).

## Finding contributions to work on

Looking at the existing issues is a great way to find something to contribute to. As our projects, by default, use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any ['help wanted'](https://github.com/aws-solutions/centralized-logging-with-opensearch/labels/help%20wanted) issues is a great place to start.

## Code of Conduct

This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct).
For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact
[email protected] with any additional questions or comments.

## Security issue notifications

If you discover a potential security issue in this project, we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public GitHub issue.

## Licensing

See the [LICENSE](https://github.com/aws-solutions/centralized-logging-with-opensearch/blob/main/LICENSE.txt) file for our project's licensing. We will ask you to confirm the licensing of your contribution.

16 changes: 14 additions & 2 deletions NOTICE.txt
@@ -1,6 +1,7 @@
Centralized Logging with OpenSearch

Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
Licensed under the Apache License Version 2.0 (the "License"). You may not use this file except
in compliance with the License. A copy of the License is located at http://www.apache.org/licenses/
or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express or implied. See the License for the
@@ -11,6 +12,12 @@ THIRD PARTY COMPONENTS
**********************
This software includes third party software subject to the following copyrights:

binaryornot under BSD License
chardet under GNU Lesser General Public License v2 or later (LGPLv2+)
func-timeout under GNU Lesser General Public License v2 (LGPLv2)
numpy under The 3-Clause BSD License
pyarrow under Apache License, Version 2.0
pytest-httpserver under The MIT License
boolean.py under BSD-2-Clause
aws-sam-translator under the Apache License, Version 2.0
aws-xray-sdk under the Apache License, Version 2.0
@@ -151,4 +158,9 @@ sweetalert2 under The MIT License
typescript under the Apache License, Version 2.0
web-vitals under the Apache License, Version 2.0
notice-js under MIT-0
js-json-schema-inferrer under ISC License
@aws-cdk/aws-glue-alpha under the Apache License, Version 2.0
@reduxjs/toolkit under The MIT License
redux-mock-store under The MIT License
@types/redux-mock-store under The MIT License
jsonschema-path under the Apache License, Version 2.0
1 change: 0 additions & 1 deletion docs/en/images

This file was deleted.

1 change: 1 addition & 0 deletions docs/en/images
@@ -0,0 +1 @@
../images
42 changes: 37 additions & 5 deletions docs/en/implementation-guide/applications/create-log-config.md
@@ -21,19 +21,51 @@ The following describes how to create a log config for each log format.
4. Specify **Config Name**.
5. Specify **Log Path**. You can use `,` to separate multiple paths.
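
    For example, a value with multiple log paths (hypothetical paths, shown for illustration only) could look like: `/var/log/nginx/access.log, /var/log/app/*.log`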
6. Choose **JSON** in the log type dropdown list.
7. In the **Sample log parsing** section, paste a sample JSON log and click **Parse log** to verify that the log parsing is successful. The JSON type supports nested JSON with a maximum nesting depth of X.

For example:
```json
{"host":"81.95.250.9", "user-identifier":"-", "time":"08/Mar/2022:06:28:03 +0000", "method": "PATCH", "request": "/clicks-and-mortar/24%2f7", "protocol":"HTTP/2.0", "status":502, "bytes":24337, "referer": "https://www.investorturn-key.net/functionalities/innovative/integrated"}

- If your sample log is nested JSON, choose **Parse log** and it displays a list of field type options for each layer. If needed, you can set the corresponding field type for each layer of fields. You can also choose **Remove** to delete a field; the type of a removed field will be inferred automatically by OpenSearch.

For example:
```json
{
  "timestamp": "2023-11-06T08:29:55.266Z",
  "correlationId": "566829027325526589",
  "processInfo": {
    "startTime": "2023-11-06T08:29:55.266Z",
    "hostname": "ltvtix0apidev01",
    "domainId": "e6826d97-a60f-45cb-93e1-b4bb5a7add29",
    "groupId": "group-2",
    "groupName": "grp_dev_bba",
    "serviceId": "instance-1",
    "serviceName": "ins_dev_bba",
    "version": "7.7.20210130"
  },
  "transactionSummary": {
    "path": "https://www.leadmission-critical.info/relationships",
    "protocol": "https",
    "protocolSrc": "97",
    "status": "exception",
    "serviceContexts": [
      {
        "service": "NSC_APP-117127_DCTM_Get Documentum Token",
        "monitor": true,
        "client": "Pass Through",
        "org": null,
        "app": null,
        "method": "getTokenUsingPOST",
        "status": "exception",
        "duration": 25270
      }
    ]
  }
}
```

8. Check if each field's type mapping is correct. You can change the type by selecting the dropdown menu in the second column. For all supported types, see [Data Types](https://opensearch.org/docs/latest/search-plugins/sql/datatypes/).
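
For reference, these type choices correspond to field mappings in the OpenSearch index. Below is a hand-written sketch (not the solution's actual output) of what a mapping for a few fields of the flat sample log above could look like:

```json
{
  "mappings": {
    "properties": {
      "host":   { "type": "ip" },
      "status": { "type": "integer" },
      "bytes":  { "type": "long" },
      "time":   { "type": "date" }
    }
  }
}
```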

!!! Note "Note"

You must specify the datetime of the log using the key `time`. If it is not specified, the system time will be added.

For nested JSON, the Time Key must be on the first level.

9. Specify the **Time format**. The format syntax follows [strptime](https://linux.die.net/man/3/strptime). See [this document](https://docs.fluentbit.io/manual/pipeline/parsers/configuring-parser#time-resolution-and-fractional-seconds) for details.
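
For example, the `time` value in the flat sample log above is `08/Mar/2022:06:28:03 +0000`; the matching strptime pattern (shown here for illustration) would be:

```
%d/%b/%Y:%H:%M:%S %z
```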

10. (Optional) In the **Filter** section, you can add conditions to filter logs at the log agent side. The solution will ingest only logs that match ALL the specified conditions.
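
As a rough illustration of how such a condition works (assuming the log agent is Fluent Bit and using its standard `grep` filter syntax; the configuration the solution actually generates may differ), a condition like "field `status` matches `5\d\d`" corresponds to:

```
[FILTER]
    # Keep only records whose "status" field matches the regex
    Name    grep
    Match   *
    Regex   status 5\d\d
```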
77 changes: 74 additions & 3 deletions docs/en/implementation-guide/applications/ec2-pipeline.md
@@ -3,18 +3,20 @@ An instance group represents a group of EC2 Linux instances, which enables the s

This article guides you to create a log pipeline that ingests logs from an EC2 instance group.

## Create a log analytics pipeline (Amazon OpenSearch)

### Prerequisites

1. [Import an Amazon OpenSearch Service domain](../domains/import.md).

### Create a log analytics pipeline

1. Sign in to the Centralized Logging with OpenSearch Console.

2. In the left sidebar, under **Log Analytics Pipelines**, choose **Application Log**.

3. Choose **Create a pipeline**.

4. Choose **Instance Group** as Log Source, choose **Amazon OpenSearch**, and choose **Next**.

5. Select an instance group. If you have no instance group yet, choose **Create Instance Group** at the top right corner, and follow the [Instance Group](./create-log-source.md#amazon-ec2-instance-group) guide to create an instance group. After that, choose **Refresh** and then select the newly created instance group.

@@ -82,7 +84,76 @@ You have created a log source for the log analytics pipeline. Now you are ready
12. Wait for the application pipeline to turn to the "Active" state.

## Create a log analytics pipeline (Light Engine)

### Create a log analytics pipeline

1. Sign in to the Centralized Logging with OpenSearch Console.

2. In the left sidebar, under **Log Analytics Pipelines**, choose **Application Log**.

3. Choose **Create a pipeline**.

4. Choose **Instance Group** as Log Source, choose **Light Engine**, and choose **Next**.

5. Select an instance group. If you have no instance group yet, choose **Create Instance Group** at the top right corner, and follow the [Instance Group](./create-log-source.md#amazon-ec2-instance-group) guide to create an instance group. After that, choose **Refresh** and then select the newly created instance group.

6. (Auto Scaling Group only) If your instance group is created based on an Auto Scaling Group, after the ingestion status becomes "Created", you can find the generated shell script on the instance group's detail page. Copy the shell script and update the User Data of the Auto Scaling group's [launch configuration](https://docs.aws.amazon.com/autoscaling/ec2/userguide/launch-configurations.html) or [launch template](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-launch-templates.html).

7. Keep the default **Permission grant method**.

8. (Optional) If you choose **I will manually add the below required permissions after pipeline creation**, continue to do the following:

1. Choose **Expand to view required permissions** and copy the provided JSON policy.
2. Go to **AWS Management Console**.
3. On the left navigation pane, choose **IAM**, and select **Policies** under **Access management**.
4. Choose **Create Policy**, choose **JSON**, and replace all the content inside the text block. Make sure to substitute `<YOUR ACCOUNT ID>` with your account ID.
5. Choose **Next**, and then enter a name for this policy.
6. Attach the policy to your EC2 instance profile to grant the log agent permissions to send logs to the application log pipeline. If you are using an Auto Scaling group, you need to update the IAM instance profile associated with the Auto Scaling group. If needed, you can follow the documentation to update your [launch template][launch-template] or [launch configuration][launch-configuration].
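
The exact policy to use is the JSON shown in the console. As a rough sketch only (hypothetical actions, placeholder bucket and role names), such a policy has the following shape:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::<YOUR BUFFER BUCKET>/*"
    },
    {
      "Effect": "Allow",
      "Action": ["sts:AssumeRole"],
      "Resource": "arn:aws:iam::<YOUR ACCOUNT ID>:role/<BUFFER ACCESS ROLE>"
    }
  ]
}
```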

9. Choose **Next**.

You have created a log source for the log analytics pipeline. Now you are ready to make further configurations for the log analytics pipeline with an Amazon EC2 instance group as the log source.

1. Enter a **Log Path** to specify the location of logs you want to collect.

2. Select a log config and then choose **Next**. If you do not find the desired log config in the drop-down list, choose **Create New**, and follow the instructions in [Log Config](./create-log-config.md).

3. In the **Buffer** section,

* S3 buffer parameters

| Parameter | Default | Description |
| ---------------------------- | ------------------------------------------------ | ------------------------------------------------------------ |
| S3 Bucket | *A log bucket will be created by the solution.* | You can also select a bucket to store the log data. |
| Buffer size | 50 MiB | The maximum size of log data cached at the log agent side before delivering to S3. For more information, see [Data Delivery Frequency](https://docs.aws.amazon.com/firehose/latest/dev/basic-deliver.html#frequency). |
| Buffer interval | 60 seconds | The maximum interval of the log agent to deliver logs to S3. For more information, see [Data Delivery Frequency](https://docs.aws.amazon.com/firehose/latest/dev/basic-deliver.html#frequency). |
| Compression for data records | `Gzip` | The log agent compresses records before delivering them to the S3 bucket. |

4. Choose **Next**.

5. In the **Specify Light Engine Configuration** section, if you want to import the associated templated Grafana dashboards, select **Yes** for the sample dashboard.

6. Choose an existing Grafana; if you need to import a new one, go to Grafana for configuration.

7. Select an S3 bucket to store partitioned logs and define a name for the log table. We have provided a predefined table name, but you can modify it according to your business needs.

8. The log processing frequency is set to **5** minutes by default, with a minimum processing frequency of **1** minute.

9. In the **Log Lifecycle** section, enter the log merge time and log archive time. We have provided default values, but you can adjust them based on your business requirements.

10. Select **Next**.

11. Enable **Alarms** if needed and select an existing SNS topic. If you choose **Create a new SNS topic**, please provide a name and an email address for the new SNS topic.

12. If desired, add tags.

13. Select **Create**.

14. Wait for the application pipeline to turn to the "Active" state.

[kds]: https://aws.amazon.com/kinesis/data-streams/
[ssm-agent]: https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent.html
