diff --git a/README.md b/README.md index db0827f2..65304472 100644 --- a/README.md +++ b/README.md @@ -65,45 +65,51 @@ ElectricEye also utilizes other tools such as [Shodan.io](https://www.shoda 1. First, clone this repository and install the requirements using `pip3`: `pip3 install -r requirements.txt`. -2. Then, modify the [TOML configuration](./eeauditor/external_providers.toml) located in `ElectricEye/eeauditor/external_providers.toml` to specify various configurations for the CSP(s) and SaaS Provider(s) you want to assess, specify where credentials are stored, and configure Outputs. +2. If you are evaluating anything other than your local AWS Account, provide the path to a modified [TOML configuration](./eeauditor/external_providers.toml) with `--toml-path`, or modify the example at `ElectricEye/eeauditor/external_providers.toml` and omit that argument. The TOML file specifies multi-account, multi-region, credential, and Output specifics. 3. Finally, run the Controller to learn about the various Checks, Auditors, Assessment Targets, and Outputs. ``` -$ python3 eeauditor/controller.py --help +python3 eeauditor/controller.py --help Usage: controller.py [OPTIONS] Options: - -t, --target-provider [AWS|Azure|OCI|GCP|Servicenow|M365|Salesforce] + -t, --target-provider [AWS|Azure|OCI|GCP|Servicenow|M365|Salesforce] CSP or SaaS Vendor Assessment Target, ensure - that any -a or -c arg maps to your target + that any -a or -c arg maps to your target provider e.g., -t AWS -a Amazon_APGIW_Auditor - -a, --auditor-name TEXT Specify which Auditor you want to run by - using its name NOT INCLUDING .py. Defaults + -a, --auditor-name TEXT Specify which Auditor you want to run by + using its name NOT INCLUDING .py. Defaults to ALL Auditors - -c, --check-name TEXT A specific Check in a specific Auditor you + -c, --check-name TEXT A specific Check in a specific Auditor you want to run, this correlates to the function name. Defaults to ALL Checks - -d, --delay INTEGER Time in seconds to sleep between Auditors + -d, --delay INTEGER Time in seconds to sleep between Auditors being ran, defaults to 0 - -o, --outputs TEXT A list of Outputs (files, APIs, databases, - ChatOps) to send ElectricEye Findings, - specify multiple with additional arguments: - -o csv -o postgresql -o slack [default: + -o, --outputs TEXT A list of Outputs (files, APIs, databases, + ChatOps) to send ElectricEye Findings, + specify multiple with additional arguments: + -o csv -o postgresql -o slack [default: stdout] - --output-file TEXT For file outputs such as JSON and CSV, the - name of the file, DO NOT SPECIFY .file_type + --output-file TEXT For file outputs such as JSON and CSV, the + name of the file, DO NOT SPECIFY .file_type [default: output] --list-options Lists all valid Output options - --list-checks Prints a table of Auditors, Checks, and - Check descriptions to stdout - use this for + --list-checks Prints a table of Auditors, Checks, and + Check descriptions to stdout - use this for -a or -c args --create-insights Create AWS Security Hub Insights for ElectricEye. This only needs to be done once per Account per Region for Security Hub --list-controls Lists all ElectricEye Controls (e.g. Check Titles) for an Assessment Target + --toml-path TEXT The full path to the TOML file used for + configuration, e.g., + ~/path/to/mydir/external_providers.toml. If + this value is not provided, the default path + of ElectricEye/eeauditor/external_providers. + toml is used. --help Show this message and exit. 
``` @@ -169,9 +175,15 @@ To pull from the various repositories, use these commands, you can replace `late #### NOTE!! You can skip this section if you are using hard-coded credentials in your TOML and if you will not be using any AWS Output or running any AWS Auditors -When interacting with AWS credential stores such as AWS Systems Manager, AWS Secrets Manager and Outputs such as AWS Security and for Role Assumption into the Role specified in the `aws_electric_eye_iam_role_name` TOML parameter, ElectricEye uses your current (default) Boto3 Session which is derived from your credentials. Running ElectricEye from AWS Infrastructure that has an attached Role, or running from a location with `aws cli` credentials already instantiated, this is handled transparently. When using Docker, you will need to provide [Environment Variables](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html#using-environment-variables) directly to the Container. +When interacting with AWS credential stores such as AWS Systems Manager and AWS Secrets Manager, Outputs such as AWS Security Hub, and for Role Assumption into the Role specified in the `aws_electric_eye_iam_role_name` TOML parameter, ElectricEye uses your current (default) Boto3 Session which is derived from your credentials. -Ensure that if you will be using AWS SSM (`ssm:GetParameter`), AWS Secrets Manager (`secretsmanager:GetSecretValue`), AWS Security Hub (`securityhub:BatchImportFindings`), Amazon SQS (`sqs:SendMessage`), and/or Amazon DynamoDB (`dynamodb:PutItem`) for credentials and Outputs that you have the proper permissions! You will likely also require `kms:Decrypt` depending if you are using AWS Key Management Service (KMS) Customer-managed Keys (CMKs) for your secrets/parameters encryption. You will need `sts:AssumeRole` to assume into the Role specified in the `aws_electric_eye_iam_role_name` TOML parameter. +If you run ElectricEye from AWS Infrastructure that has an attached Role, or from a location where `aws cli` credentials are already instantiated, this is handled transparently. + +When using Docker, you will need to provide [Environment Variables](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html#using-environment-variables) directly to the Container. + +If you will be using AWS SSM (`ssm:GetParameter`), AWS Secrets Manager (`secretsmanager:GetSecretValue`), AWS Security Hub (`securityhub:BatchImportFindings`), Amazon SQS (`sqs:SendMessage`), and/or Amazon Kinesis Data Firehose (`firehose:PutRecordBatch`) for credentials and Outputs, ensure that you have the proper permissions! You will likely also require `kms:Decrypt` depending on whether you are using AWS Key Management Service (KMS) Customer-managed Keys (CMKs) for your secrets/parameters encryption. + +You will need `sts:AssumeRole` to assume into the Role specified in the `aws_electric_eye_iam_role_name` TOML parameter. You will need to pass in your AWS Region, an AWS Access Key, and an AWS Secret Access Key. If you are NOT using an AWS IAM User with Access Keys you will need to also provide an AWS Session Token which is produced by temporary credentials such as an IAM Role or EC2 Instance Profile. @@ -231,7 +243,11 @@ sudo docker run \ electriceye /bin/bash -c "python3 eeauditor/controller.py --help" ``` -To save a local file output such as `-o json`. `-o cam-json`, `-o csv`, or `-o html` and so on, ensure that you specify a file name that begins with `/eeauditor/` as the `eeuser` within the Docker Image only has permissions within that directory. 
To remove the files you cannot use `docker cp` but you can submit the file to remote APIs you have control of by `base64` encoding the output or you can use the Session with AWS S3 permissions to upload the file to S3. If you are evaluating Oracle Cloud or Google Cloud Platform, your credentials will be locally loaded and you can upload to Oracle Object Storage or Google Cloud Storage buckets, respectively. +To save a local file output such as `-o json`, `-o cam-json`, `-o csv`, or `-o html` and so on, ensure that you specify a file name that begins with `/eeauditor/`, as the `eeuser` within the Docker Image only has permissions within that directory. + +You cannot use `docker cp` to retrieve the files, but you can submit the file to remote APIs you control by `base64`-encoding the output, or you can use the Session with AWS S3 permissions to upload the file to S3. + +If you are evaluating Oracle Cloud or Google Cloud Platform, your credentials will be locally loaded and you can upload to Oracle Object Storage or Google Cloud Storage buckets, respectively. ```bash BUCKET_NAME="your_s3_bucket_you_have_access_to" @@ -272,7 +288,7 @@ Feel free to open PRs and Issues where syntax, grammatic, and implementation err ### ElectricEye is for sale -Contact the maintainer for more information! +Hit me up at opensource@electriceye.cloud (I don't actually have a SaaS tool) and I'll gladly sell the rights to this repo, take it down, and hand over all of the domains and even the AWS Accounts that I use behind the scenes. ### Early Contributors diff --git a/docs/outputs/OUTPUTS.md b/docs/outputs/OUTPUTS.md index 5a84102a..8960d72e 100644 --- a/docs/outputs/OUTPUTS.md +++ b/docs/outputs/OUTPUTS.md @@ -11,6 +11,7 @@ This documentation is all about Outputs supported by ElectricEye and how to conf - [HTML Compliance Output](#html-compliance-output) - [Normalized JSON Output](#json-normalized-output) - [Cloud Asset Management JSON Output](#json-cloud-asset-management-cam-output) +- [Open Cyber Security Format (OCSF) V1.1.0 Output](#open-cyber-security-format-ocsf-v110-output) - [CSV Output](#csv-output) - [AWS Security Hub Output](#aws-security-hub-output) - [MongoDB & AWS DocumentDB Output](#mongodb--aws-documentdb-output) @@ -18,10 +19,9 @@ This documentation is all about Outputs supported by ElectricEye and how to conf - [PostgreSQL Output](#postgresql-output) - [Cloud Asset Management PostgreSQL Output](#postgresql-cloud-asset-management-cam-output) - [Firemon Cloud Defense (DisruptOps) Output](#firemon-cloud-defense-disruptops-output) -- [AWS DynamoDB Output](#aws-dynamodb-output) - [Amazon Simple Queue Service (SQS) Output](#amazon-simple-queue-service-sqs-output) - [Slack Output](#slack-output) -- [Microsoft Teams Summary Output](#microsoft-teams-summary-output) +- [Open Cybersecurity Format (OCSF) -> Amazon Kinesis Data Firehose](#open-cybersecurity-format-ocsf---amazon-kinesis-data-firehose) ## Key Considerations @@ -386,6 +386,8 @@ arn:aws-iso:ec2:us-iso-west-1:111111111111:volume/vol-123456abcdef/ebs-volume-en ## AWS Security Hub Output +**IMPORTANT NOTE**: This requires `securityhub:BatchImportFindings` IAM permissions! + The AWS Security Hub Output selection will write all ElectricEye findings into AWS Security Hub using the BatchImportFindings API in chunks of 100. All ElectricEye findings are already in ASFF, so no other processing is done to them, besides removing `ProductFields.AssetDetails` as Security Hub *cannot* support dicts or other complex types within `ProductFields`. 
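A minimal sketch of that chunk-and-import pattern is shown below; the helper names are illustrative and not this Output's exact implementation:

```python
import boto3

securityhub = boto3.client("securityhub")

def send_to_security_hub(findings: list):
    # Security Hub cannot accept dicts or other complex types in ProductFields,
    # so strip AssetDetails from every finding first
    sanitized = [
        {**f, "ProductFields": {k: v for k, v in f["ProductFields"].items() if k != "AssetDetails"}}
        for f in findings
    ]
    # BatchImportFindings accepts at most 100 findings per call
    for i in range(0, len(sanitized), 100):
        response = securityhub.batch_import_findings(Findings=sanitized[i : i + 100])
        if response["FailedCount"] > 0:
            print(f"{response['FailedCount']} findings failed to import")
```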
This Output will *not* provide the `ProductFields.AssetDetails` information. @@ -1111,16 +1113,10 @@ Additionally, values within the `[outputs.postgresql]` section of the TOML file - **`firemon_cloud_defense_api_key_value`**: This variable should be set to the API Key for your FireMon Cloud Defense tenant. This key is used to authenticate with the FireMon Cloud Defense API. The location where these credentials are stored should match the value of the `global.credentials_location` variable, which specifies the location of the credentials for all integrations. -### Example Firemon Cloud Defense (DisruptOps) Output - -**NOT AVAILABLE** - -## AWS DynamoDB Output - -*Coming Soon* - ## Amazon Simple Queue Service (SQS) Output +**IMPORTANT NOTE**: This requires `sqs:SendMessage` IAM permissions! + The Amazon SQS Output selection will write all ElectricEye findings to an Amazon Simple Queue Service (SQS) queue by using `json.dumps()` to insert messages into the queue with a one-second delay. To make use of the messages in the queue, ensure you are parsing the `["body"]` using `json.loads()`, or using another library in your preferred language to load the stringified JSON back into a proper JSON object. Using Amazon SQS is a great way to distribute ElectricEye findings to many other locations using various messaging service architectures with Lambda or Amazon Simple Notification Service (SNS) topics. This Output will provide the `ProductFields.AssetDetails` information. @@ -1218,14 +1214,16 @@ An example of the "Findings" output. ![SlackFindings](../../screenshots/outputs/slack_findings_output.jpg) -## Microsoft Teams Summary Output +## Open Cybersecurity Format (OCSF) -> Amazon Kinesis Data Firehose -*Coming Soon* +**IMPORTANT NOTE**: This requires `firehose:PutRecordBatch` IAM permissions! -## Open Security Finding Format (OCSF) v1.1.0 +This output sends the most up-to-date version of OCSF that ElectricEye supports to Kinesis Data Firehose; as of 4 FEB 2024 that is OCSF V1.1.0 mapped into Compliance Findings. Kinesis Data Firehose is an extremely high-throughput data streaming service on AWS that allows you to batch several hundred events per second to select destinations such as Snowflake, Splunk, Amazon S3, Datadog, OpenSearch Service, and more. You can configure buffering, which batch-writes your records to their destination; this is helpful for building data lakes on AWS, as writing files that are as large as possible keeps the overall number of files low. Additionally, you can configure dynamic partitioning on certain platforms to craft Hive-like partitions (e.g., `year=YYYY/month=MM/day=DD` and so on), and for certain destinations you can automatically transform the data into the Apache Parquet columnar binary format or use AWS Lambda to perform Extract-Load-Transform (ELT) on the records. -The Open Security Finding Format (OCSF) v1.1.0 Output will write all ElectricEye findings to a JSON file using Python's `json.dumps()` in the OCSF v1.1.0 format (as of 21 NOV 2023 `v1.1.0-dev` is still the tag). OCSF is a normalized security data format, ElectricEye makes use of the proposed [Compliance Finding (2003)](https://schema.ocsf.io/1.1.0-dev/classes/compliance_finding?extensions=) Event Class to normalize data about the finding, related compliance controls, remediation, resource and cloud-specific information. +As an added bonus, this is well suited for adding a custom CSPM, EASM & SSPM source into your Amazon Security Lake or Snowflake Data Cloud! 
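To illustrate the delivery mechanics, here is a minimal sketch of batch-writing JSON records to a Delivery Stream with `boto3`; the stream name and Region are placeholders, and `PutRecordBatch` itself accepts at most 500 records or 4 MiB per call:

```python
import json
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")  # placeholder Region

def stream_to_firehose(records: list, delivery_stream: str = "electriceye-findings"):  # placeholder name
    # PutRecordBatch accepts up to 500 records (4 MiB total) per call
    for i in range(0, len(records), 500):
        batch = [{"Data": json.dumps(record).encode("utf-8")} for record in records[i : i + 500]]
        response = firehose.put_record_batch(
            DeliveryStreamName=delivery_stream,
            Records=batch
        )
        if response["FailedPutCount"] > 0:
            print(f"{response['FailedPutCount']} records were not delivered and should be retried")
```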
This Output will provide the `ProductFields.AssetDetails` information. -To use this Output include the following arguments in your ElectricEye CLI: `python3 eeauditor/controller.py {..args..} -o ocsf_v1_1_0 --output-file my_file_name_here` \ No newline at end of file +To use this Output, include the following arguments in your ElectricEye CLI: `python3 eeauditor/controller.py {..args..} -o ocsf_kdf` + +Additionally, values within the `[outputs.firehose]` section of the TOML file *must be provided* for this integration to work. \ No newline at end of file diff --git a/docs/setup/Setup_AWS.md b/docs/setup/Setup_AWS.md index 6f54633c..c1784ba5 100644 --- a/docs/setup/Setup_AWS.md +++ b/docs/setup/Setup_AWS.md @@ -25,7 +25,7 @@ All AWS API interactivity is handled by `boto3` (and to a lesser extent lower-le | Retrieving Accounts from one or more of your AWS Organizational Units | `organizations:ListAccountsForParent` | **NO** | You must either be in your Organizations Management Account or you must be a Delegated Administrator for an Organizations-enabled Service such as AWS Firewall Manager or Amazon GuardDuty | | Sending findings to AWS Security Hub | `securityhub:BatchImportFindings` | **NO** | Ensure that AWS Security Hub is enabled in your Account & Region | | Sending findings to Amazon SQS | `sqs:SendMessage` | **NO** | Ensure that your SQS Queue's Resource Policy also allows your IAM principal to `sqs:SendMessage` to it.
You will also require `kms:Decrypt` permissions and access to the key (via Key Policy) if you encrypt your Queue with a Customer Managed Key. | -| Sending findings to Amazon DynamoDB | `dynamodb:PutItem` | **NO** | You will also require `kms:Decrypt` permissions and access to the key (via Key Policy) if you encrypt your Table with a Customer Managed Key | +| Sending findings to Amazon Kinesis Data Firehose | `firehose:PutRecordBatch` | **NO** | You will also require `kms:Decrypt` permissions and access to the key (via Key Policy) if you encrypt your Records going to KDF with a Customer Managed Key | | Retrieving credentials from AWS Systems Manager Parameter Store | `ssm:GetParameter*` | **NO** | You will also require `kms:Decrypt` permissions and access to the key (via Key Policy) if you encrypt your SecureString Parameters with a Customer Managed Key | | Retrieving credentials from AWS Secrets Manager | `secretsmanager:GetSecretValue` | **NO** | You will also require `kms:Decrypt` permissions and access to the key (via Key Policy) if you encrypt your Secrets with a Customer Managed Key | | If you run ElectricEye within a container without a seperate block device or file share managed, you will need to send file-based Outputs to S3, maybe | `s3:PutObject` | **NO** | If you do use S3, ensure that your Bucket Policy allows you to perform `s3:PutObject`.
You will also require `kms:Decrypt` permissions and access to the key (via Key Policy) if you encrypt your Bucket with a Customer Managed Key. | diff --git a/eeauditor/auditors/aws/AWS_License_Manager_Auditor.py b/eeauditor/auditors/aws/AWS_License_Manager_Auditor.py index 23d016b3..31b5f60c 100644 --- a/eeauditor/auditors/aws/AWS_License_Manager_Auditor.py +++ b/eeauditor/auditors/aws/AWS_License_Manager_Auditor.py @@ -42,7 +42,7 @@ def get_license_manager_configurations(cache, session): try: liscMgrConfigs = licensemanager.list_license_configurations()["LicenseConfigurations"] except ClientError as e: - logger.warn( + logger.warning( "Cannot retrieve Amazon License Manager configurations, this is likely due to not using this service or you deleted the IAM Service Role. Refer to the error for more information: %s", e.response["Error"]["Message"] ) diff --git a/eeauditor/auditors/aws/AWS_TrustedAdvisor_Auditor.py b/eeauditor/auditors/aws/AWS_TrustedAdvisor_Auditor.py index 2c553f16..09297374 100644 --- a/eeauditor/auditors/aws/AWS_TrustedAdvisor_Auditor.py +++ b/eeauditor/auditors/aws/AWS_TrustedAdvisor_Auditor.py @@ -203,7 +203,7 @@ def trusted_advisor_failing_cloudfront_ssl_cert_iam_certificate_store_check(cach } yield finding except IndexError: - logging.warn( + logging.warning( "Index Error was found encountered attempted to evaluate Trusted Advisor, this is likely because you do not have the appropriate AWS Support level." ) @@ -351,7 +351,7 @@ def trusted_advisor_failing_cloudfront_ssl_cert_on_origin_check(cache: dict, ses } yield finding except IndexError: - logging.warn( + logging.warning( "Index Error was found encountered attempted to evaluate Trusted Advisor, this is likely because you do not have the appropriate AWS Support level." ) @@ -531,7 +531,7 @@ def trusted_advisor_failing_exposed_access_keys_check(cache: dict, session, awsA } yield finding except IndexError: - logging.warn( + logging.warning( "Index Error was found encountered attempted to evaluate Trusted Advisor, this is likely because you do not have the appropriate AWS Support level." ) diff --git a/eeauditor/cloud_utils.py b/eeauditor/cloud_utils.py index 5903dc0f..c530dd76 100644 --- a/eeauditor/cloud_utils.py +++ b/eeauditor/cloud_utils.py @@ -27,7 +27,7 @@ import json from botocore.exceptions import ClientError -logger = logging.getLogger(__name__) +logger = logging.getLogger("CloudUtils") # Boto3 Clients sts = boto3.client("sts") @@ -45,9 +45,12 @@ class CloudConfig(object): for use in EEAuditor when running ElectricEye Auditors and Check """ - def __init__(self, assessmentTarget): - here = path.abspath(path.dirname(__file__)) - tomlFile = f"{here}/external_providers.toml" + def __init__(self, assessmentTarget, tomlPath): + if tomlPath is None: + here = path.abspath(path.dirname(__file__)) + tomlFile = f"{here}/external_providers.toml" + else: + tomlFile = tomlPath with open(tomlFile, "rb") as f: data = tomload(f) @@ -108,7 +111,7 @@ def __init__(self, assessmentTarget): # Process ["aws_electric_eye_iam_role_name"] electricEyeRoleName = data["regions_and_accounts"]["aws"]["aws_electric_eye_iam_role_name"] if electricEyeRoleName is None or electricEyeRoleName == "": - logger.warn( + logger.warning( "A value for ['aws_electric_eye_iam_role_name'] was not provided. Will attempt to use current session credentials, this will likely fail if you're attempting to assess another AWS account." 
) electricEyeRoleName = None @@ -426,6 +429,10 @@ def get_aws_regions(self): # majority of Regions have a "opt-in-not-required", hence the "not not opted in" list comp regions = [region["RegionName"] for region in ec2.describe_regions()["Regions"] if region["OptInStatus"] != "not-opted-in"] except ClientError as e: + logger.error( + "Could not retrieve AWS Regions because: %s", + e + ) raise e return regions @@ -549,17 +556,24 @@ def check_aws_partition(region: str): """ # GovCloud partition override - if region in ["us-gov-east-1", "us-gov-west-1"]: + if region in ["us-gov-east-1", "us-gov-west-1"] or "us-gov-" in region: partition = "aws-us-gov" # China partition override - elif region in ["cn-north-1", "cn-northwest-1"]: + elif region in ["cn-north-1", "cn-northwest-1"] or "cn-" in region: partition = "aws-cn" # AWS Secret Region override - elif region in ["us-isob-east-1", "us-isob-west-1"]: + elif region in ["us-isob-east-1", "us-isob-west-1"] or "isob-" in region: partition = "aws-isob" # AWS Top Secret Region override - elif region in ["us-iso-east-1", "us-iso-west-1"]: + elif region in ["us-iso-east-1", "us-iso-west-1"] or "iso-" in region: partition = "aws-iso" + # AWS UKSOF / British MOD Region override + elif "iso-e" in region or "isoe" in region: + partition = "aws-isoe" + # AWS Intel Community us-isof-south-1 Region override + elif region in ["us-isof-south-1"] or "iso-f" in region or "isof" in region: + partition = "aws-isof" + # TODO: Add European Sovereign Cloud Partition else: partition = "aws" diff --git a/eeauditor/controller.py b/eeauditor/controller.py index 3e3ce59a..dc403846 100644 --- a/eeauditor/controller.py +++ b/eeauditor/controller.py @@ -23,6 +23,7 @@ from insights import create_sechub_insights from eeauditor import EEAuditor from processor.main import get_providers, process_findings +from os import environ def print_controls(assessmentTarget, auditorName=None): app = EEAuditor(assessmentTarget) @@ -38,11 +39,12 @@ def print_checks(assessmentTarget, auditorName=None): app.print_checks_md() -def run_auditor(assessmentTarget, auditorName=None, pluginName=None, delay=0, outputs=None, outputFile=""): +def run_auditor(assessmentTarget, auditorName=None, pluginName=None, delay=0, outputs=None, outputFile="", tomlPath=None): if not outputs: outputs = ["stdout"] - app = EEAuditor(assessmentTarget) + app = EEAuditor(assessmentTarget, tomlPath) + app.load_plugins(auditorName) # Per-target calls - ensure you use the right run_*_checks*() function if assessmentTarget == "AWS": @@ -59,6 +61,11 @@ def run_auditor(assessmentTarget, auditorName=None, pluginName=None, delay=0, ou findings = list(app.run_non_aws_checks(pluginName=pluginName, delay=delay)) print(f"Done running Checks for {assessmentTarget}") + + if tomlPath is None: + environ["TOML_FILE_PATH"] = "None" + else: + environ["TOML_FILE_PATH"] = tomlPath # Multiple outputs supported process_findings( @@ -147,6 +154,12 @@ def run_auditor(assessmentTarget, auditorName=None, pluginName=None, delay=0, ou is_flag=True, help="Lists all ElectricEye Controls (e.g. Check Titles) for an Assessment Target" ) +# TOML Path +@click.option( + "--toml-path", + default=None, + help="The full path to the TOML file used for configuration, e.g., ~/path/to/mydir/external_providers.toml. If this value is not provided, the default path of ElectricEye/eeauditor/external_providers.toml is used." 
+) def main( target_provider, @@ -159,6 +172,7 @@ def main( list_checks, create_insights, list_controls, + toml_path ): if list_controls: print_controls( @@ -191,6 +205,7 @@ def main( delay=delay, outputs=outputs, outputFile=output_file, + tomlPath=toml_path ) if __name__ == "__main__": diff --git a/eeauditor/eeauditor.py b/eeauditor/eeauditor.py index fc8e0e4a..d0244357 100644 --- a/eeauditor/eeauditor.py +++ b/eeauditor/eeauditor.py @@ -31,7 +31,7 @@ from cloud_utils import CloudConfig from pluginbase import PluginBase -logger = logging.getLogger(__name__) +logger = logging.getLogger("EEAuditor") here = path.abspath(path.dirname(__file__)) getPath = partial(path.join, here) @@ -42,7 +42,7 @@ class EEAuditor(object): credentials and cross-boundary configurations, and runs Checks and yields results back to controller.py CLI """ - def __init__(self, assessmentTarget, searchPath=None): + def __init__(self, assessmentTarget, tomlPath=None, searchPath=None): # each check must be decorated with the @registry.register_check("cache_name") # to be discovered during plugin loading. self.registry = CheckRegister() @@ -54,7 +54,7 @@ def __init__(self, assessmentTarget, searchPath=None): # AWS if assessmentTarget == "AWS": searchPath = "./auditors/aws" - utils = CloudConfig(assessmentTarget) + utils = CloudConfig(assessmentTarget, tomlPath) # parse specific values for Assessment Target - these should match 1:1 with CloudConfig self.awsAccountTargets = utils.awsAccountTargets self.awsRegionsSelection = utils.awsRegionsSelection @@ -62,13 +62,13 @@ def __init__(self, assessmentTarget, searchPath=None): # GCP elif assessmentTarget == "GCP": searchPath = "./auditors/gcp" - utils = CloudConfig(assessmentTarget) + utils = CloudConfig(assessmentTarget, tomlPath) # parse specific values for Assessment Target - these should match 1:1 with CloudConfig self.gcpProjectIds = utils.gcp_project_ids # OCI elif assessmentTarget == "OCI": searchPath = "./auditors/oci" - utils = CloudConfig(assessmentTarget) + utils = CloudConfig(assessmentTarget, tomlPath) # parse specific values for Assessment Target - these should match 1:1 with CloudConfig self.ociTenancyId = utils.ociTenancyId self.ociUserId = utils.ociUserId @@ -78,15 +78,15 @@ def __init__(self, assessmentTarget, searchPath=None): # Azure elif assessmentTarget == "Azure": searchPath = "./auditors/azure" - utils = CloudConfig(assessmentTarget) + utils = CloudConfig(assessmentTarget, tomlPath) # Alibaba elif assessmentTarget == "Alibaba": searchPath = "./auditors/alibabacloud" - utils = CloudConfig(assessmentTarget) + utils = CloudConfig(assessmentTarget, tomlPath) # VMWare Cloud on AWS elif assessmentTarget == "VMC": searchPath = "./auditors/vmwarecloud" - utils = CloudConfig(assessmentTarget) + utils = CloudConfig(assessmentTarget, tomlPath) ################################### # SOFTWARE-AS-A-SERVICE PROVIDERS # @@ -94,11 +94,11 @@ def __init__(self, assessmentTarget, searchPath=None): # Servicenow elif assessmentTarget == "Servicenow": searchPath = "./auditors/servicenow" - utils = CloudConfig(assessmentTarget) + utils = CloudConfig(assessmentTarget, tomlPath) # M365 elif assessmentTarget == "M365": searchPath = "./auditors/m365" - utils = CloudConfig(assessmentTarget) + utils = CloudConfig(assessmentTarget, tomlPath) # parse specific values for Assessment Target - these should match 1:1 with CloudConfig self.m365TenantLocation = utils.m365TenantLocation self.m365ClientId = utils.m365ClientId @@ -107,7 +107,7 @@ def __init__(self, assessmentTarget, 
searchPath=None): # Salesforce elif assessmentTarget == "Salesforce": searchPath = "./auditors/salesforce" - utils = CloudConfig(assessmentTarget) + utils = CloudConfig(assessmentTarget, tomlPath) self.salesforceAppClientId = utils.salesforceAppClientId self.salesforceAppClientSecret = utils.salesforceAppClientSecret self.salesforceApiUsername = utils.salesforceApiUsername @@ -117,15 +117,15 @@ def __init__(self, assessmentTarget, searchPath=None): # GitHub elif assessmentTarget == "GitHub": searchPath = "./auditors/github" - utils = CloudConfig(assessmentTarget) + utils = CloudConfig(assessmentTarget, tomlPath) # Google Workspaces elif assessmentTarget == "GoogleWorkspaces": searchPath = "./auditors/google_workspaces" - utils = CloudConfig(assessmentTarget) + utils = CloudConfig(assessmentTarget, tomlPath) # Workday ERP elif assessmentTarget == "Workday": searchPath = "./auditors/workday_erp" - utils = CloudConfig(assessmentTarget) + utils = CloudConfig(assessmentTarget, tomlPath) # Search path for Auditors self.source = self.plugin_base.make_plugin_source( diff --git a/eeauditor/external_providers.toml b/eeauditor/external_providers.toml index 3310e931..14a89d44 100644 --- a/eeauditor/external_providers.toml +++ b/eeauditor/external_providers.toml @@ -319,29 +319,19 @@ title = "ElectricEye Configuration" mongodb_collection_name = "" - [outputs.dynamodb] - - # Table name - - dynamodb_table_name = "" - [outputs.amazon_sqs] - # Queue Name / URL + # Queue Name / URL, this must be in the same account as your current credentials amazon_sqs_queue_url = "" - # Batch + # Batch Size amazon_sqs_batch_size = 10 # This must be an integer - [outputs.microsoft_teams] + # Queue Region - # The location (or actual contents) of the Webhook for your MS Teams Channel - # this location must match the value of `global.credentials_location` e.g., if you specify "AWS_SSM" then - # the value for this variable should be the name of the AWS Systems Manager Parameter Store SecureString Parameter - - microsoft_teams_webhook_value = "" + amazon_sqs_queue_region = "" [outputs.slack] @@ -372,4 +362,14 @@ title = "ElectricEye Configuration" # A list of ElectricEye Finding States (matching the ASFF RecordState) that you want sent to slack if your selection for # `electric_eye_slack_message_type` is "FindingS". 
This defaults to ["ACTIVE"] - electric_eye_slack_finding_state_filter = ["ACTIVE"] # VALID VALUES | "ACTIVE", "ARCHIVED" \ No newline at end of file + electric_eye_slack_finding_state_filter = ["ACTIVE"] # VALID VALUES | "ACTIVE", "ARCHIVED" + + [outputs.firehose] + + # The name of your Kinesis Data Firehose Delivery Stream, this must be in the same account as your current credentials + + kinesis_firehose_delivery_stream_name = "" + + # Delivery Stream Region + + kinesis_firehose_region = "" \ No newline at end of file diff --git a/eeauditor/processor/outputs/amazon_sqs_output.py b/eeauditor/processor/outputs/amazon_sqs_output.py index 2fc06a81..5078e6fb 100644 --- a/eeauditor/processor/outputs/amazon_sqs_output.py +++ b/eeauditor/processor/outputs/amazon_sqs_output.py @@ -21,29 +21,30 @@ import tomli import boto3 import sys -from os import path +import os import json from base64 import b64decode #from hashlib import new as hasher from botocore.exceptions import ClientError from processor.outputs.output_base import ElectricEyeOutput -sqs = boto3.client("sqs") - @ElectricEyeOutput -class JsonProvider(object): +class AmazonSqsProvider(object): __provider__ = "amazon_sqs" def __init__(self): print("Preparing Amazon SQS output.") - # Get the absolute path of the current directory - currentDir = path.abspath(path.dirname(__file__)) - # Go two directories back to /eeauditor/ - twoBack = path.abspath(path.join(currentDir, "../../")) + if os.environ["TOML_FILE_PATH"] == "None": + # Get the absolute path of the current directory + currentDir = os.path.abspath(os.path.dirname(__file__)) + # Go two directories back to /eeauditor/ + twoBack = os.path.abspath(os.path.join(currentDir, "../../")) + # TOML is located in /eeauditor/ directory + tomlFile = f"{twoBack}/external_providers.toml" + else: + tomlFile = os.environ["TOML_FILE_PATH"] - # TOML is located in /eeauditor/ directory - tomlFile = f"{twoBack}/external_providers.toml" with open(tomlFile, "rb") as f: data = tomli.load(f) @@ -52,15 +53,19 @@ def __init__(self): queueUrl = sqsDetails["amazon_sqs_queue_url"] queueBatchSize = sqsDetails["amazon_sqs_batch_size"] + awsRegion = sqsDetails["amazon_sqs_queue_region"] + if awsRegion is None or awsRegion == "": + awsRegion = boto3.Session().region_name # Ensure that values are provided for all variable - use all() and a list comprehension to check the vars # empty strings will trigger `if not` - if not all(s for s in [queueUrl, queueBatchSize]): + if not all(s for s in [queueUrl, queueBatchSize, awsRegion]): print("An empty value was detected in '[outputs.amazon_sqs]'. 
Review the TOML file and try again!") sys.exit(2) self.queueUrl = queueUrl self.queueBatchSize = queueBatchSize + self.sqs = boto3.client("sqs", region_name=awsRegion) def write_findings(self, findings: list, **kwargs): if len(findings) == 0: @@ -110,8 +115,10 @@ def create_hashed_message_id(self, findingId): def send_message_to_sqs(self, batch): """ - TO DO + Writes batches of ASFF findings into SQS """ + sqs = self.sqs + for entry in batch: try: sqs.send_message( diff --git a/eeauditor/processor/outputs/cam_mongodb_output.py b/eeauditor/processor/outputs/cam_mongodb_output.py index c2cdb86e..dbaf839c 100644 --- a/eeauditor/processor/outputs/cam_mongodb_output.py +++ b/eeauditor/processor/outputs/cam_mongodb_output.py @@ -43,13 +43,16 @@ class CamMongodbProvider(object): def __init__(self): print("Preparing MongoDB / AWS DocumentDB credentials and PEM files (as needed).") - # Get the absolute path of the current directory - currentDir = os.path.abspath(os.path.dirname(__file__)) - # Go two directories back to /eeauditor/ - twoBack = os.path.abspath(os.path.join(currentDir, "../../")) + if os.environ["TOML_FILE_PATH"] == "None": + # Get the absolute path of the current directory + currentDir = os.path.abspath(os.path.dirname(__file__)) + # Go two directories back to /eeauditor/ + twoBack = os.path.abspath(os.path.join(currentDir, "../../")) + # TOML is located in /eeauditor/ directory + tomlFile = f"{twoBack}/external_providers.toml" + else: + tomlFile = os.environ["TOML_FILE_PATH"] - # TOML is located in /eeauditor/ directory - tomlFile = f"{twoBack}/external_providers.toml" with open(tomlFile, "rb") as f: data = tomli.load(f) diff --git a/eeauditor/processor/outputs/cam_postgresql_output.py b/eeauditor/processor/outputs/cam_postgresql_output.py index 65dffd9a..6e6f8c01 100644 --- a/eeauditor/processor/outputs/cam_postgresql_output.py +++ b/eeauditor/processor/outputs/cam_postgresql_output.py @@ -42,13 +42,16 @@ class CamPostgresProvider(object): def __init__(self): print("Preparing PostgreSQL credentials.") - # Get the absolute path of the current directory - currentDir = os.path.abspath(os.path.dirname(__file__)) - # Go two directories back to /eeauditor/ - twoBack = os.path.abspath(os.path.join(currentDir, "../../")) - - # TOML is located in /eeauditor/ directory - tomlFile = f"{twoBack}/external_providers.toml" + if os.environ["TOML_FILE_PATH"] == "None": + # Get the absolute path of the current directory + currentDir = os.path.abspath(os.path.dirname(__file__)) + # Go two directories back to /eeauditor/ + twoBack = os.path.abspath(os.path.join(currentDir, "../../")) + # TOML is located in /eeauditor/ directory + tomlFile = f"{twoBack}/external_providers.toml" + else: + tomlFile = os.environ["TOML_FILE_PATH"] + with open(tomlFile, "rb") as f: data = tomli.load(f) diff --git a/eeauditor/processor/outputs/dynamodb_output.py b/eeauditor/processor/outputs/dynamodb_output.py deleted file mode 100644 index 143cdaf4..00000000 --- a/eeauditor/processor/outputs/dynamodb_output.py +++ /dev/null @@ -1,105 +0,0 @@ -#This file is part of ElectricEye. -#SPDX-License-Identifier: Apache-2.0 - -#Licensed to the Apache Software Foundation (ASF) under one -#or more contributor license agreements. See the NOTICE file -#distributed with this work for additional information -#regarding copyright ownership. The ASF licenses this file -#to you under the Apache License, Version 2.0 (the -#"License"); you may not use this file except in compliance -#with the License. 
You may obtain a copy of the License at - -#http://www.apache.org/licenses/LICENSE-2.0 - -#Unless required by applicable law or agreed to in writing, -#software distributed under the License is distributed on an -#"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -#KIND, either express or implied. See the License for the -#specific language governing permissions and limitations -#under the License. -import boto3 -import sys -import os -from processor.outputs.output_base import ElectricEyeOutput - -# export DYNAMODB_TABLE_NAME='EEBackend' - -@ElectricEyeOutput -class JsonProvider(object): - __provider__ = "ddb_backend" - - def __init__(self): - # DynamoDB Table Name - try: - ddbBackendTableName = os.environ["DYNAMODB_TABLE_NAME"] - except KeyError: - ddbBackendTableName = "placeholder" - - if ddbBackendTableName == ("placeholder" or None): - print('Valid DynamoDB Table name was not provided!') - sys.exit(2) - else: - self.db_name = ddbBackendTableName - - def write_findings(self, findings: list, **kwargs): - - print(f"Writing {len(findings)} findings to backend") - - dynamodb = boto3.resource("dynamodb") - table = dynamodb.Table(self.db_name) - - # loop the findings and create a flatter structure - better for indexing without the nested lists - for fi in findings: - # Pull out the Finding ID just in case there is an underlying `KeyError` issue for debug - findingId = fi["Id"] - # some values may not always be present (Details, etc.) - change this to an empty Map - try: - resourceDetails = fi["Resources"][0]["Details"] - if not resourceDetails: - resourceDetails = [] - except KeyError: - resourceDetails = [] - # Partition data mapping - partition = fi["Resources"][0]["Partition"] - if partition == "aws": - partitionName = "AWS Commercial" - elif partition == "aws-us-gov": - partitionName = "AWS GovCloud" - elif partition == "aws-cn": - partitionName = "AWS China" - elif partition == "aws-isob": - partitionName = "AWS ISOB" # Secret Region - elif partition == "aws-iso": - partitionName = "AWS ISO" # Top Secret Region - # Chop up the Title to remove the finding ID - title = str(fi["Title"]).split('] ')[1] - - try: - # This format should map to FastAPI schema - tableItem = { - "FindingId": findingId, - "Provider": "AWS", - "ProviderAccountId": fi["AwsAccountId"], - "CreatedAt": str(fi["CreatedAt"]), - "Severity": fi["Severity"]["Label"].lower().capitalize(), - "Title": title, - "Description": fi["Description"], - "RecommendationText": str(fi["Remediation"]["Recommendation"]["Text"]), - "RecommendationUrl": str(fi["Remediation"]["Recommendation"]["Url"]), - "ResourceType": str(fi["Resources"][0]["Type"]), - "ResourceId": str(fi["Resources"][0]["Id"]), - "ResourcePartition": partitionName, - "ResourceDetails": resourceDetails, - "FindingStatus": fi["Workflow"]["Status"].lower().capitalize(), - "AuditReadinessMapping": fi["Compliance"]["RelatedRequirements"], - "AuditReadinessStatus": fi["Compliance"]["Status"].lower().capitalize() - } - # Write to DDB - table.put_item( - Item=tableItem - ) - except KeyError as e: - print(f"Issue with Finding ID {findingId} due to missing value {e}") - continue - - return True \ No newline at end of file diff --git a/eeauditor/processor/outputs/firemon_cloud_defense_output.py b/eeauditor/processor/outputs/firemon_cloud_defense_output.py index 2125ffea..4b0bf5c9 100644 --- a/eeauditor/processor/outputs/firemon_cloud_defense_output.py +++ b/eeauditor/processor/outputs/firemon_cloud_defense_output.py @@ -42,13 +42,16 @@ class FiremonCloudDefenseProvider(object): 
def __init__(self): print("Preparing Firemon Cloud Defense (DisruptOps) credentials.") - # Get the absolute path of the current directory - currentDir = os.path.abspath(os.path.dirname(__file__)) - # Go two directories back to /eeauditor/ - twoBack = os.path.abspath(os.path.join(currentDir, "../../")) - - # TOML is located in /eeauditor/ directory - tomlFile = f"{twoBack}/external_providers.toml" + if os.environ["TOML_FILE_PATH"] == "None": + # Get the absolute path of the current directory + currentDir = os.path.abspath(os.path.dirname(__file__)) + # Go two directories back to /eeauditor/ + twoBack = os.path.abspath(os.path.join(currentDir, "../../")) + # TOML is located in /eeauditor/ directory + tomlFile = f"{twoBack}/external_providers.toml" + else: + tomlFile = os.environ["TOML_FILE_PATH"] + with open(tomlFile, "rb") as f: data = tomli.load(f) diff --git a/eeauditor/processor/outputs/mongodb_output.py b/eeauditor/processor/outputs/mongodb_output.py index 1e7c2ac5..02aba523 100644 --- a/eeauditor/processor/outputs/mongodb_output.py +++ b/eeauditor/processor/outputs/mongodb_output.py @@ -48,13 +48,16 @@ class MongodbProvider(object): def __init__(self): print("Preparing MongoDB / AWS DocumentDB credentials and PEM files (as needed).") - # Get the absolute path of the current directory - currentDir = os.path.abspath(os.path.dirname(__file__)) - # Go two directories back to /eeauditor/ - twoBack = os.path.abspath(os.path.join(currentDir, "../../")) - - # TOML is located in /eeauditor/ directory - tomlFile = f"{twoBack}/external_providers.toml" + if os.environ["TOML_FILE_PATH"] == "None": + # Get the absolute path of the current directory + currentDir = os.path.abspath(os.path.dirname(__file__)) + # Go two directories back to /eeauditor/ + twoBack = os.path.abspath(os.path.join(currentDir, "../../")) + # TOML is located in /eeauditor/ directory + tomlFile = f"{twoBack}/external_providers.toml" + else: + tomlFile = os.environ["TOML_FILE_PATH"] + with open(tomlFile, "rb") as f: data = tomli.load(f) diff --git a/eeauditor/processor/outputs/ocsf_to_firehose_output.py b/eeauditor/processor/outputs/ocsf_to_firehose_output.py new file mode 100644 index 00000000..99de9a43 --- /dev/null +++ b/eeauditor/processor/outputs/ocsf_to_firehose_output.py @@ -0,0 +1,392 @@ +#This file is part of ElectricEye. +#SPDX-License-Identifier: Apache-2.0 + +#Licensed to the Apache Software Foundation (ASF) under one +#or more contributor license agreements. See the NOTICE file +#distributed with this work for additional information +#regarding copyright ownership. The ASF licenses this file +#to you under the Apache License, Version 2.0 (the +#"License"); you may not use this file except in compliance +#with the License. You may obtain a copy of the License at + +#http://www.apache.org/licenses/LICENSE-2.0 + +#Unless required by applicable law or agreed to in writing, +#software distributed under the License is distributed on an +#"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +#KIND, either express or implied. See the License for the +#specific language governing permissions and limitations +#under the License. 
+ +import logging +import tomli +import boto3 +import sys +from typing import NamedTuple +from os import path, environ +from processor.outputs.output_base import ElectricEyeOutput +import json +from base64 import b64decode +from datetime import datetime +from botocore.exceptions import ClientError + +logger = logging.getLogger("OCSF_to_KDF_Output") + +# NOTE TO SELF: Update this and FAQ.md as new standards are added +SUPPORTED_FRAMEWORKS = [ + "NIST CSF V1.1", + "NIST SP 800-53 Rev. 4", + "AICPA TSC", + "ISO 27001:2013", + "CIS Critical Security Controls V8", + "NIST SP 800-53 Rev. 5", + "NIST SP 800-171 Rev. 2", + "CSA Cloud Controls Matrix V4.0", + "CMMC 2.0", + "UK NCSC Cyber Essentials V2.2", + "HIPAA Security Rule 45 CFR Part 164 Subpart C", + "FFIEC Cybersecurity Assessment Tool", + "NERC Critical Infrastructure Protection", + "NYDFS 23 NYCRR Part 500", + "UK NCSC Cyber Assessment Framework V3.1", + "PCI-DSS V4.0", + "NZISM V3.5", + "ISO 27001:2022", + "Critical Risk Profile V1.2", + "ECB CROE", + "Equifax SCF V1.0", + "FBI CJIS Security Policy V5.9", + "CIS Amazon Web Services Foundations Benchmark V1.5" +] + +class AsffOcsfNormalizedMapping(NamedTuple): + severityId: int + severity: str + cloudAccountTypeId: int + cloudAccountType: str + complianceStatusId: int + complianceStatus: str + +here = path.abspath(path.dirname(__file__)) +with open(f"{here}/mapped_compliance_controls.json") as jsonfile: + CONTROLS_CROSSWALK = json.load(jsonfile) + +@ElectricEyeOutput +class OcsfFirehoseOutput(object): + __provider__ = "ocsf_kdf" + + def __init__(self): + print("Preparing to send OCSF V1.1.0 Compliance Findings to Amazon Kinesis Data Firehose.") + + if environ["TOML_FILE_PATH"] == "None": + # Get the absolute path of the current directory + currentDir = path.abspath(path.dirname(__file__)) + # Go two directories back to /eeauditor/ + twoBack = path.abspath(path.join(currentDir, "../../")) + # TOML is located in /eeauditor/ directory + tomlFile = f"{twoBack}/external_providers.toml" + else: + tomlFile = environ["TOML_FILE_PATH"] + + with open(tomlFile, "rb") as f: + data = tomli.load(f) + + # Variable for the entire [outputs.firehose] section + firehoseDetails = data["outputs"]["firehose"] + + deliveryStream = firehoseDetails["kinesis_firehose_delivery_stream_name"] + awsRegion = firehoseDetails["kinesis_firehose_region"] + if awsRegion is None or awsRegion == "": + awsRegion = boto3.Session().region_name + + # Ensure that a value is provided for the Delivery Stream name + # an empty string will trigger `if not` + if not deliveryStream: + logger.error("An empty value was detected in '[outputs.firehose]'. 
Review the TOML file and try again!") + sys.exit(2) + + self.deliveryStream = deliveryStream + self.firehose = boto3.client("firehose", region_name=awsRegion) + + def write_findings(self, findings: list, **kwargs): + if len(findings) == 0: + logger.error("There are not any findings to send to Kinesis Data Firehose!") + sys.exit(0) + + logger.info( + "Writing %s OCSF Compliance Findings to Kinesis Data Firehose!", + len(findings) + ) + + """# Use another list comprehension to remove `ProductFields.AssetDetails` from non-Asset reporting outputs + newFindings = [ + {**d, "ProductFields": {k: v for k, v in d["ProductFields"].items() if k != "AssetDetails"}} for d in findings + ] + + del findings""" + + """ + This list comprhension will base64 decode and convert a string to JSON for all instances of `ProductFields.AssetDetails` + except where it is a None type (this is done for placeholders in Checks where the Asset doesn't exist) and it will also + skip over areas in the event that `ProductFields` is missing any Cloud Asset Management required fields + """ + decodedFindings = [ + {**d, "ProductFields": {**d["ProductFields"], + "AssetDetails": json.loads(b64decode(d["ProductFields"]["AssetDetails"]).decode("utf-8")) + if d["ProductFields"]["AssetDetails"] is not None + else None + }} if "AssetDetails" in d["ProductFields"] + else d + for d in findings + ] + + del findings + + # Map in the new compliance controls + for finding in decodedFindings: + complianceRelatedRequirements = list(finding["Compliance"]["RelatedRequirements"]) + newControls = [] + nistCsfControls = [control for control in complianceRelatedRequirements if control.startswith("NIST CSF V1.1")] + for control in nistCsfControls: + crosswalkedControls = self.nist_csf_v_1_1_controls_crosswalk(control) + # Not every single NIST CSF Control maps across to other frameworks + if crosswalkedControls: + for crosswalk in crosswalkedControls: + if crosswalk not in newControls: + newControls.append(crosswalk) + else: + continue + + complianceRelatedRequirements.extend(newControls) + + del finding["Compliance"]["RelatedRequirements"] + finding["Compliance"]["RelatedRequirements"] = complianceRelatedRequirements + + ocsfFindings = self.ocsf_compliance_finding_mapping(decodedFindings) + + del decodedFindings + + firehose = self.firehose + + # TODO: Make this more performant, because woah dawg, this shit's stupid! 
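+ # PutRecordBatch accepts at most 500 records or 4 MiB per call (1,000 KiB per record), so the 25-record batches below sit well under the service limits at the cost of extra round-trips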
+ for i in range(0, len(ocsfFindings), 25): + encodedRecords = [] + records = ocsfFindings[i : i + 25] + for record in records: + encodedRecords.append({"Data": json.dumps(record).encode("utf-8")}) + del records + + try: + response = firehose.put_record_batch( + DeliveryStreamName=self.deliveryStream, + Records=encodedRecords + ) + if response["FailedPutCount"] > 0: + logger.warning( + "Failed to deliver %s records", + response["FailedPutCount"] + ) + except ClientError as e: + logger.warning( + "Error with sending batch to Firehose due to: %s", + e.response["Error"]["Message"] + ) + continue + + print("Finished writing OCSF Compliance Findings to Kinesis Data Firehose.") + + return True + + def nist_csf_v_1_1_controls_crosswalk(self, nistCsfSubcategory): + """ + This function returns a list of additional control framework control IDs that map to a provided + NIST CSF V1.1 Subcategory (control) + """ + + # Not every single NIST CSF Control maps across to other frameworks + try: + return CONTROLS_CROSSWALK[nistCsfSubcategory] + except KeyError: + return [] + + def asff_to_ocsf_normalization(self, severityLabel: str, cloudProvider: str, complianceStatusLabel: str) -> AsffOcsfNormalizedMapping: + """ + Normalizes the following ASFF Severity, Cloud Account Provider, and Compliance values into OCSF + """ + + # map Severity.Label -> base_event.severity_id, base_event.severity + if severityLabel == "INFORMATIONAL": + severityId = 1 + severity = severityLabel.lower().capitalize() + elif severityLabel == "LOW": + severityId = 2 + severity = severityLabel.lower().capitalize() + elif severityLabel == "MEDIUM": + severityId = 3 + severity = severityLabel.lower().capitalize() + elif severityLabel == "HIGH": + severityId = 4 + severity = severityLabel.lower().capitalize() + elif severityLabel == "CRITICAL": + severityId = 5 + severity = severityLabel.lower().capitalize() + else: + severityId = 99 + severity = severityLabel.lower().capitalize() + + # map ProductFields.Provider -> cloud.account.type_id, cloud.account.type + if cloudProvider == "AWS": + acctTypeId = 10 + acctType = "AWS Account" + elif cloudProvider == "GCP": + acctTypeId = 5 + acctType = "GCP Account" + else: + acctTypeId = 99 + acctType = cloudProvider + + # map Compliance.Status -> compliance.status_id, compliance.status + if complianceStatusLabel == "PASSED": + complianceStatusId = 1 + complianceStatus = "Pass" + elif complianceStatusLabel == "WARNING": + complianceStatusId = 2 + complianceStatus = "Warning" + elif complianceStatusLabel == "FAILED": + complianceStatusId = 3 + complianceStatus = "Fail" + else: + complianceStatusId = 99 + complianceStatus = complianceStatusLabel.lower().capitalize() + + return AsffOcsfNormalizedMapping( + severityId, + severity, + acctTypeId, + acctType, + complianceStatusId, + complianceStatus + ) + + def iso8061_to_epochseconds(self, iso8061: str) -> int: + """ + Converts an ISO 8601 datetime into an epoch seconds timestamp + """ + return int(datetime.fromisoformat(iso8061).timestamp()) + + def ocsf_compliance_finding_mapping(self, findings: list) -> list: + """ + Takes ElectricEye ASFF and outputs to OCSF v1.1.0 Compliance Finding (2003), returns a list of new findings + """ + + ocsfFindings = [] + + logger.info("Mapping ASFF to OCSF") + + for finding in findings: + + asffToOcsf = self.asff_to_ocsf_normalization( + severityLabel=finding["Severity"]["Label"], + cloudProvider=finding["ProductFields"]["Provider"], + complianceStatusLabel=finding["Compliance"]["Status"] + ) + + ocsf = { + # Base Event data + "activity_id": 1, 
"activity_name": "Create", + "category_name": "Findings", + "category_uid": 2, + "class_name": "Compliance Finding", + "class_uid": 2003, + "confidence_score": finding["Confidence"], + "severity": asffToOcsf[1], + "severity_id": asffToOcsf[0], + "status": "New", + "status_id": 1, + "time": self.iso8061_to_epochseconds(finding["CreatedAt"]), + "type_name": "Compliance Finding: Create", + "type_uid": 200301, + # Profiles / Metadata + "metadata": { + "uid": finding["Id"], + "correlation_uid": finding["GeneratorId"], + "version":"1.1.0", + "product": { + "name":"ElectricEye", + "version":"3.0", + "url_string":"https://github.com/jonrau1/ElectricEye", + "vendor_name":"ElectricEye" + }, + "profiles":[ + "cloud" + ] + }, + "cloud": { + "provider": finding["ProductFields"]["Provider"], + "project_uid": finding["ProductFields"]["ProviderAccountId"], + "region": finding["ProductFields"]["AssetRegion"], + "account": { + "uid": finding["ProductFields"]["ProviderAccountId"], + "type": asffToOcsf[3], + "type_uid": asffToOcsf[2] + } + }, + # Observables + "observables": [ + # Cloud Account (Project) UID + { + "name": "cloud.project_uid", + "type": "Resource UID", + "type_id": 10, + "value": finding["ProductFields"]["ProviderAccountId"] + }, + # Resource UID + { + "name": "resource.uid", + "type": "Resource UID", + "type_id": 10, + "value": finding["Resources"][0]["Id"] + } + ], + # Compliance Finding Class Info + "compliance": { + "requirements": finding["Compliance"]["RelatedRequirements"], + "control": str(finding["Title"]).split("] ")[0].replace("[",""), + "standards": SUPPORTED_FRAMEWORKS, + "status": asffToOcsf[5], + "status_id": asffToOcsf[4] + }, + "finding_info": { + "created_time": self.iso8061_to_epochseconds(finding["CreatedAt"]), + "desc": finding["Description"], + "first_seen_time": self.iso8061_to_epochseconds(finding["FirstObservedAt"]), + "modified_time": self.iso8061_to_epochseconds(finding["UpdatedAt"]), + "product_uid": finding["ProductArn"], + "title": finding["Title"], + "types": finding["Types"], + "uid": finding["Id"] + }, + "remediation": { + "desc": finding["Remediation"]["Recommendation"]["Text"], + "references": [finding["Remediation"]["Recommendation"]["Url"]] + }, + "resource": { + "data": finding["ProductFields"]["AssetDetails"], + "cloud_partition": finding["Resources"][0]["Partition"], + "region": finding["ProductFields"]["AssetRegion"], + "type": finding["ProductFields"]["AssetService"], + "uid": finding["Resources"][0]["Id"] + }, + "unmapped": { + "provide_type": finding["ProductFields"]["ProviderType"], + "asset_class": finding["ProductFields"]["AssetClass"], + "asset_service": finding["ProductFields"]["AssetService"], + "asset_component": finding["ProductFields"]["AssetComponent"], + "workflow_status": finding["Workflow"]["Status"], + "record_state": finding["RecordState"] + } + } + ocsfFindings.append(ocsf) + + return ocsfFindings \ No newline at end of file diff --git a/eeauditor/processor/outputs/ocsf_v1_1_0_output.py b/eeauditor/processor/outputs/ocsf_v1_1_0_output.py index b3955089..bfad01e5 100644 --- a/eeauditor/processor/outputs/ocsf_v1_1_0_output.py +++ b/eeauditor/processor/outputs/ocsf_v1_1_0_output.py @@ -27,32 +27,33 @@ from base64 import b64decode from datetime import datetime -logger = logging.getLogger(__name__) +logger = logging.getLogger("OCSF_V1.1.0_Output") # NOTE TO SELF: Updated this and FAQ.md as new standards are added -SUPPORTED_STANDARDS = [ - "NIST Cybersecurity Framework Version 1.1", - "NIST Special Publication 800-53 Revision 4", - "NIST 
Special Publication 800-53 Revision 5", - "NIST Special Publication 800-171 Revision 2", - "American Institute of Certified Public Accountants (AICPA) Trust Service Criteria (TSC) 2017/2020 for SOC 2", - "ISO/IEC 27001:2013/2017 Annex A", - "ISO/IEC 27001:2022 Annex A", - "Center for Internet Security (CIS) Critical Security Controls Version 8", - "Cloud Security Alliance (CSA) Cloud Controls Matrix (CCM) Version 4.0", - "United States Department of Defense Cybersecurity Maturity Model Certification (CMMC) Version 2.0", - "United States Federal Bureau of Investigation (FBI) Criminal Justice Information System (CJIS) Security Policy Version 5.9", - "United Kingdom National Cybercrime Security Center (NCSC) Cyber Essentials Version 2.2", - "United Kingdom National Cybercrime Security Center (NCSC) Assessment Framework Version 3.1", - "HIPAA 'Security Rule' U.S. Code 45 CFR Part 164 Subpart C", - "Federal Financial Institutions Examination Council (FFIEC) Cybersecurity Assessment Tool (CAT)", - "North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP) Standard", - "New Zealand Information Security Manual Version 3.5", - "New York Department of Financial Services (NYDFS) Series 23 NYCRR Part 500; AKA NYDFS500", - "Critical Risk Institute (CRI) Critical Risk Profile Version 1.2", - "European Central Bank (ECB) Cyber Resilience Oversight Expectations (CROEs)", - "Equifax Security Controls Framework Version 1.0", - "Payment Card Industry (PCI) Data Security Standard (DSS) Version 4.0" +SUPPORTED_FRAMEWORKS = [ + "NIST CSF V1.1", + "NIST SP 800-53 Rev. 4", + "AICPA TSC", + "ISO 27001:2013", + "CIS Critical Security Controls V8", + "NIST SP 800-53 Rev. 5", + "NIST SP 800-171 Rev. 2", + "CSA Cloud Controls Matrix V4.0", + "CMMC 2.0", + "UK NCSC Cyber Essentials V2.2", + "HIPAA Security Rule 45 CFR Part 164 Subpart C", + "FFIEC Cybersecurity Assessment Tool", + "NERC Critical Infrastructure Protection", + "NYDFS 23 NYCRR Part 500", + "UK NCSC Cyber Assessment Framework V3.1", + "PCI-DSS V4.0", + "NZISM V3.5", + "ISO 27001:2022", + "Critical Risk Profile V1.2", + "ECB CROE", + "Equifax SCF V1.0", + "FBI CJIS Security Policy V5.9", + "CIS Amazon Web Services Foundations Benchmark V1.5" ] class AsffOcsfNormalizedMapping(NamedTuple): @@ -299,7 +300,7 @@ def ocsf_compliance_finding_mapping(self, findings: list) -> list: "compliance": { "requirements": finding["Compliance"]["RelatedRequirements"], "control": str(finding["Title"]).split("] ")[0].replace("[",""), - "standards": SUPPORTED_STANDARDS, + "standards": SUPPORTED_FRAMEWORKS, "status": asffToOcsf[5], "status_id": asffToOcsf[4] }, diff --git a/eeauditor/processor/outputs/postgresql_output.py b/eeauditor/processor/outputs/postgresql_output.py index 4759406b..cef430bc 100644 --- a/eeauditor/processor/outputs/postgresql_output.py +++ b/eeauditor/processor/outputs/postgresql_output.py @@ -46,13 +46,16 @@ class PostgresProvider(object): def __init__(self): print("Preparing PostgreSQL credentials.") - # Get the absolute path of the current directory - currentDir = os.path.abspath(os.path.dirname(__file__)) - # Go two directories back to /eeauditor/ - twoBack = os.path.abspath(os.path.join(currentDir, "../../")) - - # TOML is located in /eeauditor/ directory - tomlFile = f"{twoBack}/external_providers.toml" + if os.environ["TOML_FILE_PATH"] == "None": + # Get the absolute path of the current directory + currentDir = os.path.abspath(os.path.dirname(__file__)) + # Go two directories back to /eeauditor/ + twoBack = 
os.path.abspath(os.path.join(currentDir, "../../")) + # TOML is located in /eeauditor/ directory + tomlFile = f"{twoBack}/external_providers.toml" + else: + tomlFile = os.environ["TOML_FILE_PATH"] + with open(tomlFile, "rb") as f: data = tomli.load(f) diff --git a/eeauditor/processor/outputs/slack_output.py b/eeauditor/processor/outputs/slack_output.py index afd7911f..204336b6 100644 --- a/eeauditor/processor/outputs/slack_output.py +++ b/eeauditor/processor/outputs/slack_output.py @@ -48,13 +48,16 @@ class SlackProvider(object): def __init__(self): print("Preparing Slack credentials.") - # Get the absolute path of the current directory - currentDir = os.path.abspath(os.path.dirname(__file__)) - # Go two directories back to /eeauditor/ - twoBack = os.path.abspath(os.path.join(currentDir, "../../")) - - # TOML is located in /eeauditor/ directory - tomlFile = f"{twoBack}/external_providers.toml" + if os.environ["TOML_FILE_PATH"] == "None": + # Get the absolute path of the current directory + currentDir = os.path.abspath(os.path.dirname(__file__)) + # Go two directories back to /eeauditor/ + twoBack = os.path.abspath(os.path.join(currentDir, "../../")) + # TOML is located in /eeauditor/ directory + tomlFile = f"{twoBack}/external_providers.toml" + else: + tomlFile = os.environ["TOML_FILE_PATH"] + with open(tomlFile, "rb") as f: data = tomli.load(f) diff --git a/screenshots/ElectricEye2024Architecture.svg b/screenshots/ElectricEye2024Architecture.svg index 180cf7e9..ba2a49ba 100644 --- a/screenshots/ElectricEye2024Architecture.svg +++ b/screenshots/ElectricEye2024Architecture.svg @@ -1 +1 @@ -EVALUATECOMING SOON!COMING SOON!CLOUD SECURITY POSTUREMANAGEMENT (CSPM)SAAS SECURITY POSTUREMANAGEMENT (SSPM)ENRICHREPORTATTACK SURFACEMONITORING (ASM)COMINGSOON!AWS SecurityHubJSON (x3)AmazonDocumentDBMongoDBFiremon CloudDefenseTeamsSlackAmazonSQSAmazonDynamoDBCSVHTML (x2)SUPPORTED OUTPUTS(OCSF,File, DB, Queue, SaaS)PostgreSQLCOMINGSOON!COMINGSOON!OCSF v1.1.0 \ No newline at end of file +EVALUATECOMING SOON!COMING SOON!CLOUD SECURITY POSTUREMANAGEMENT (CSPM)SAAS SECURITY POSTUREMANAGEMENT (SSPM)ENRICHREPORTATTACK SURFACEMONITORING (ASM)COMINGSOON!AWS SecurityHubJSON (x3)AmazonDocumentDBMongoDBFiremonCloudDefenseSlackAmazonSQSAWS KinesisFirehoseCSVHTML (x2)SUPPORTED OUTPUTS(OCSF, File, DB, Queue, SaaS)PostgreSQLOCSF v1.1.0 \ No newline at end of file diff --git a/screenshots/ElectricEyeAnimated.gif b/screenshots/ElectricEyeAnimated.gif index c9c9aadf..d7fa5efc 100644 Binary files a/screenshots/ElectricEyeAnimated.gif and b/screenshots/ElectricEyeAnimated.gif differ diff --git a/screenshots/architecture-for-github-thumbnail.jpg b/screenshots/architecture-for-github-thumbnail.jpg index d5b8090a..58a8a19e 100644 Binary files a/screenshots/architecture-for-github-thumbnail.jpg and b/screenshots/architecture-for-github-thumbnail.jpg differ diff --git a/screenshots/extras/ElectricEye.pptx b/screenshots/extras/ElectricEye.pptx index ff95da69..fb1444b5 100644 Binary files a/screenshots/extras/ElectricEye.pptx and b/screenshots/extras/ElectricEye.pptx differ