Merge pull request #225 from jonrau1/kdf
Outputs Update (Add KDF, Remove DDB), `--toml-file` command, AWS ISO-E and ISO-F Support
jonrau1 authored Feb 4, 2024
2 parents 323e877 + 27de092 commit 9809b09
Showing 23 changed files with 620 additions and 264 deletions.
56 changes: 36 additions & 20 deletions README.md
@@ -65,45 +65,51 @@ ElectricEye also utilizes other tools such as [Shodan.io](https://www.shodan.io)…

1. First, clone this repository and install the requirements using `pip3`: `pip3 install -r requirements.txt`.

2. If you are evaluating anything other than your local AWS Account, provide the path to a modified [TOML configuration](./eeauditor/external_providers.toml) with `--toml-path`, or modify the example provided at `ElectricEye/eeauditor/external_providers.toml` and omit that argument. The TOML file specifies multi-account, multi-region, credential, and Output specifics.

3. Finally, run the Controller to learn about the various Checks, Auditors, Assessment Targets, and Outputs.

```
python3 eeauditor/controller.py --help
Usage: controller.py [OPTIONS]

Options:
  -t, --target-provider [AWS|Azure|OCI|GCP|Servicenow|M365|Salesforce]
                                  CSP or SaaS Vendor Assessment Target, ensure
                                  that any -a or -c arg maps to your target
                                  provider e.g., -t AWS -a
                                  Amazon_APIGW_Auditor
  -a, --auditor-name TEXT         Specify which Auditor you want to run by
                                  using its name NOT INCLUDING .py. Defaults
                                  to ALL Auditors
  -c, --check-name TEXT           A specific Check in a specific Auditor you
                                  want to run, this correlates to the function
                                  name. Defaults to ALL Checks
  -d, --delay INTEGER             Time in seconds to sleep between Auditors
                                  being run, defaults to 0
  -o, --outputs TEXT              A list of Outputs (files, APIs, databases,
                                  ChatOps) to send ElectricEye Findings,
                                  specify multiple with additional arguments:
                                  -o csv -o postgresql -o slack  [default:
                                  stdout]
  --output-file TEXT              For file outputs such as JSON and CSV, the
                                  name of the file, DO NOT SPECIFY .file_type
                                  [default: output]
  --list-options                  Lists all valid Output options
  --list-checks                   Prints a table of Auditors, Checks, and
                                  Check descriptions to stdout - use this for
                                  -a or -c args
  --create-insights               Create AWS Security Hub Insights for
                                  ElectricEye. This only needs to be done once
                                  per Account per Region for Security Hub
  --list-controls                 Lists all ElectricEye Controls (e.g. Check
                                  Titles) for an Assessment Target
  --toml-path TEXT                The full path to the TOML file used for
                                  configuration e.g.,
                                  ~/path/to/mydir/external_providers.toml. If
                                  this value is not provided the default path
                                  of ElectricEye/eeauditor/external_providers.
                                  toml is used.
  --help                          Show this message and exit.
```
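For example, a hypothetical invocation that runs a single AWS Auditor and writes findings to a JSON file might look like the following (the Auditor name, file name, and TOML path are illustrative; use `--list-checks` to discover real Auditor names):

```bash
# Run one AWS Auditor, write findings to ./my_findings.json, and read the
# TOML configuration from a custom path (all values are placeholders)
python3 eeauditor/controller.py \
    -t AWS \
    -a Amazon_EC2_Auditor \
    -o json \
    --output-file my_findings \
    --toml-path ~/path/to/mydir/external_providers.toml
```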

@@ -169,9 +175,15 @@ To pull from the various repositories, use these commands; you can replace `latest` …

#### NOTE!! You can skip this section if you are using hard-coded credentials in your TOML and if you will not be using any AWS Output or running any AWS Auditors

When interacting with AWS credential stores such as AWS Systems Manager and AWS Secrets Manager, when using Outputs such as AWS Security Hub, and for Role Assumption into the Role specified in the `aws_electric_eye_iam_role_name` TOML parameter, ElectricEye uses your current (default) Boto3 Session, which is derived from your credentials.

If you run ElectricEye from AWS infrastructure that has an attached Role, or from a location where `aws cli` credentials are already instantiated, this is handled transparently.

When using Docker, you will need to provide [Environment Variables](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html#using-environment-variables) directly to the Container.

If you will be using AWS SSM (`ssm:GetParameter`), AWS Secrets Manager (`secretsmanager:GetSecretValue`), AWS Security Hub (`securityhub:BatchImportFindings`), Amazon SQS (`sqs:SendMessage`), and/or Amazon DynamoDB (`dynamodb:PutItem`) for credentials and Outputs, ensure that you have the proper permissions! You will likely also require `kms:Decrypt` if you are using AWS Key Management Service (KMS) Customer-managed Keys (CMKs) for your secrets/parameters encryption.

You will need `sts:AssumeRole` to assume into the Role specified in the `aws_electric_eye_iam_role_name` TOML parameter.

You will need to pass in your AWS Region, an AWS Access Key, and an AWS Secret Access Key. If you are NOT using an AWS IAM User with Access Keys, you will also need to provide an AWS Session Token, which is produced by temporary credentials such as an IAM Role or EC2 Instance Profile.
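For example, a minimal sketch of passing these as Boto3 environment variables to `docker run` (the image name, Region, and credential values are illustrative):

```bash
# Pass AWS credentials into the container as Boto3 environment variables
# (image name and values are placeholders)
sudo docker run \
    -e AWS_DEFAULT_REGION="us-east-1" \
    -e AWS_ACCESS_KEY_ID="$AWS_ACCESS_KEY_ID" \
    -e AWS_SECRET_ACCESS_KEY="$AWS_SECRET_ACCESS_KEY" \
    -e AWS_SESSION_TOKEN="$AWS_SESSION_TOKEN" \
    electriceye /bin/bash -c "python3 eeauditor/controller.py --help"
```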

@@ -231,7 +243,11 @@ sudo docker run \
electriceye /bin/bash -c "python3 eeauditor/controller.py --help"
```

To save a local file output such as `-o json`, `-o cam-json`, `-o csv`, or `-o html` and so on, ensure that you specify a file name that begins with `/eeauditor/`, as the `eeuser` within the Docker Image only has permissions within that directory.

To retrieve the files you cannot use `docker cp`, but you can submit a file to remote APIs you control by `base64`-encoding the output, or use the Session with AWS S3 permissions to upload the file to S3.

If you are evaluating Oracle Cloud or Google Cloud Platform, your credentials will be locally loaded and you can upload to Oracle Object Storage or Google Cloud Storage buckets, respectively.

```bash
BUCKET_NAME="your_s3_bucket_you_have_access_to"
```

@@ -272,7 +288,7 @@ Feel free to open PRs and Issues where syntax, grammatical, and implementation errors…

### ElectricEye is for sale

Hit me up at [email protected] (I don't actually have a SaaS tool) and I'll gladly sell the rights to this repo, take it down, and give you all of the domains and even the AWS Accounts that I use behind the scenes.

### Early Contributors

28 changes: 13 additions & 15 deletions docs/outputs/OUTPUTS.md
@@ -11,17 +11,17 @@ This documentation is all about Outputs supported by ElectricEye and how to configure them.
- [HTML Compliance Output](#html-compliance-output)
- [Normalized JSON Output](#json-normalized-output)
- [Cloud Asset Management JSON Output](#json-cloud-asset-management-cam-output)
- [Open Cyber Security Format (OCSF) V1.1.0 Output](#open-cyber-security-format-ocsf-v110-output)
- [CSV Output](#csv-output)
- [AWS Security Hub Output](#aws-security-hub-output)
- [MongoDB & AWS DocumentDB Output](#mongodb--aws-documentdb-output)
- [Cloud Asset Management MongoDB & AWS DocumentDB Output](#mongodb--aws-documentdb-cloud-asset-management-cam-output)
- [PostgreSQL Output](#postgresql-output)
- [Cloud Asset Management PostgreSQL Output](#postgresql-cloud-asset-management-cam-output)
- [Firemon Cloud Defense (DisruptOps) Output](#firemon-cloud-defense-disruptops-output)
- [Amazon Simple Queue Service (SQS) Output](#amazon-simple-queue-service-sqs-output)
- [Slack Output](#slack-output)
- [Open Cybersecurity Format (OCSF) -> Amazon Kinesis Data Firehose](#open-cybersecurity-format-ocsf---amazon-kinesis-data-firehose)

## Key Considerations

@@ -386,6 +386,8 @@ arn:aws-iso:ec2:us-iso-west-1:111111111111:volume/vol-123456abcdef/ebs-volume-en…

## AWS Security Hub Output

**IMPORTANT NOTE**: This requires `securityhub:BatchImportFindings` IAM permissions!

The AWS Security Hub Output selection will write all ElectricEye findings into AWS Security Hub using the BatchImportFindings API in chunks of 100. All ElectricEye findings are already in ASFF, so no other processing is done to them besides removing `ProductFields.AssetDetails`, as Security Hub *cannot* support dicts or other complex types within `ProductFields`.
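For reference, a minimal sketch of the same call made with the AWS CLI (the findings file is illustrative; ElectricEye performs this for you through `boto3`):

```bash
# Import a batch of up to 100 ASFF findings from a local JSON file
# (file name is a placeholder)
aws securityhub batch-import-findings \
    --findings file://my_asff_findings.json
```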

This Output will *not* provide the `ProductFields.AssetDetails` information.

@@ -1111,16 +1113,10 @@ Additionally, values within the `[outputs.postgresql]` section of the TOML file…

- **`firemon_cloud_defense_api_key_value`**: This variable should be set to the API Key for your FireMon Cloud Defense tenant. This key is used to authenticate with the FireMon Cloud Defense API. The location where these credentials are stored should match the value of the `global.credentials_location` variable, which specifies the location of the credentials for all integrations.

## Amazon Simple Queue Service (SQS) Output

**IMPORTANT NOTE**: This requires `sqs:SendMessage` IAM permissions!

The Amazon SQS Output selection will write all ElectricEye findings to an Amazon Simple Queue Service (SQS) queue by using `json.dumps()` to insert messages into the queue with a one-second delay. To make use of the messages in the queue, ensure you are parsing the `["body"]` using `json.loads()`, or using another library in your preferred language to load the stringified JSON back into a proper JSON object. Using Amazon SQS is a great way to distribute ElectricEye findings to many other locations using various messaging service architectures with Lambda or Amazon Simple Notification Service (SNS) topics.
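For example, a quick consumer sketch using the AWS CLI and `jq` (the queue URL is a placeholder):

```bash
# Receive up to 10 messages and load each stringified finding back into JSON
QUEUE_URL="https://sqs.us-east-1.amazonaws.com/111111111111/electriceye-findings"

aws sqs receive-message \
    --queue-url "$QUEUE_URL" \
    --max-number-of-messages 10 \
    | jq -r '.Messages[]?.Body | fromjson | .Title'
```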

This Output will provide the `ProductFields.AssetDetails` information.

@@ -1218,14 +1214,16 @@ An example of the "Findings" output.

![SlackFindings](../../screenshots/outputs/slack_findings_output.jpg)

## Open Cybersecurity Format (OCSF) -> Amazon Kinesis Data Firehose

**IMPORTANT NOTE**: This requires `firehose:PutRecordBatch` IAM permissions!

This Output will send the most up-to-date version of OCSF that ElectricEye supports to Kinesis Data Firehose; as of 4 FEB 2024 that is OCSF V1.1.0 mapped into Compliance Findings. Kinesis Data Firehose is an extremely high-throughput data streaming service on AWS that can batch several hundred events per second to destinations such as Snowflake, Splunk, Amazon S3, Datadog, OpenSearch Service, and more. You can configure buffering so that your records are batch-written to their destination; this is helpful for building data lakes on AWS, as you can keep each file as large as possible and the overall number of files low. You can also configure dynamic partitioning on certain platforms to craft Hive-like partitions (e.g., `year=YYYY/month=MM/day=DD` and so on), and for certain destinations you can automatically transform the data into the Apache Parquet columnar binary format or use AWS Lambda to perform Extraction-Loading-Transformation (ELT) on the records.

As an added bonus, this is well suited for adding a custom CSPM, EASM & SSPM source into your Amazon Security Lake or Snowflake Data Cloud!

This Output will provide the `ProductFields.AssetDetails` information.

To use this Output, include the following arguments in your ElectricEye CLI: `python3 eeauditor/controller.py {..args..} -o ocsf_kdf`

Additionally, values within the `[outputs.firehose]` section of the TOML file *must be provided* for this integration to work.
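As a quick way to confirm the `firehose:PutRecordBatch` permission and your delivery stream before a full run, a hedged sketch (the stream name and test record are illustrative):

```bash
# Send one base64-encoded test record to the delivery stream
# (stream name is a placeholder; the Data payload is {"test":"finding"})
aws firehose put-record-batch \
    --delivery-stream-name my-electriceye-stream \
    --records '[{"Data":"eyJ0ZXN0IjoiZmluZGluZyJ9"}]'
```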
2 changes: 1 addition & 1 deletion docs/setup/Setup_AWS.md
@@ -25,7 +25,7 @@ All AWS API interactivity is handled by `boto3` (and to a lesser extent lower-level…
| Retrieving Accounts from one or more of your AWS Organizational Units | `organizations:ListAccountsForParent` | **NO** | You must either be in your Organizations Management Account or you must be a Delegated Administrator for an Organizations-enabled Service such as AWS Firewall Manager or Amazon GuardDuty |
| Sending findings to AWS Security Hub | `securityhub:BatchImportFindings` | **NO** | Ensure that AWS Security Hub is enabled in your Account & Region |
| Sending findings to Amazon SQS | `sqs:SendMessage` | **NO** | Ensure that your SQS Queue's Resource Policy also allows your IAM principal to `sqs:SendMessage` to it. </br> You will also require `kms:Decrypt` permissions and access to the key (via Key Policy) if you encrypt your Queue with a Customer Managed Key. |
| Sending findings to Amazon Kinesis Data Firehose | `firehose:PutRecordBatch` | **NO** | You will also require `kms:Decrypt` permissions and access to the key (via Key Policy) if you encrypt your Records going to KDF with a Customer Managed Key |
| Retrieving credentials from AWS Systems Manager Parameter Store | `ssm:GetParameter*` | **NO** | You will also require `kms:Decrypt` permissions and access to the key (via Key Policy) if you encrypt your SecureString Parameters with a Customer Managed Key |
| Retrieving credentials from AWS Secrets Manager | `secretsmanager:GetSecretValue` | **NO** | You will also require `kms:Decrypt` permissions and access to the key (via Key Policy) if you encrypt your Secrets with a Customer Managed Key |
| If you run ElectricEye within a container without a separate block device or managed file share, you will need to send file-based Outputs to S3, maybe | `s3:PutObject` | **NO** | If you do use S3, ensure that your Bucket Policy allows you to perform `s3:PutObject`. </br> You will also require `kms:Decrypt` permissions and access to the key (via Key Policy) if you encrypt your Bucket with a Customer Managed Key. |
2 changes: 1 addition & 1 deletion eeauditor/auditors/aws/AWS_License_Manager_Auditor.py
@@ -42,7 +42,7 @@ def get_license_manager_configurations(cache, session):
    try:
        liscMgrConfigs = licensemanager.list_license_configurations()["LicenseConfigurations"]
    except ClientError as e:
        logger.warning(
            "Cannot retrieve Amazon License Manager configurations, this is likely due to not using this service or you deleted the IAM Service Role. Refer to the error for more information: %s",
            e.response["Error"]["Message"]
        )
6 changes: 3 additions & 3 deletions eeauditor/auditors/aws/AWS_TrustedAdvisor_Auditor.py
@@ -203,7 +203,7 @@ def trusted_advisor_failing_cloudfront_ssl_cert_iam_certificate_store_check(cache…
            }
            yield finding
        except IndexError:
            logging.warning(
                "An Index Error was encountered while attempting to evaluate Trusted Advisor; this is likely because you do not have the appropriate AWS Support level."
            )

@@ -351,7 +351,7 @@ def trusted_advisor_failing_cloudfront_ssl_cert_on_origin_check(cache: dict, session…
            }
            yield finding
        except IndexError:
            logging.warning(
                "An Index Error was encountered while attempting to evaluate Trusted Advisor; this is likely because you do not have the appropriate AWS Support level."
            )

@@ -531,7 +531,7 @@ def trusted_advisor_failing_exposed_access_keys_check(cache: dict, session, awsAccountId…
            }
            yield finding
        except IndexError:
            logging.warning(
                "An Index Error was encountered while attempting to evaluate Trusted Advisor; this is likely because you do not have the appropriate AWS Support level."
            )


