From b1a47eb4db251780bfaf929be00374748bd76ac4 Mon Sep 17 00:00:00 2001 From: <> Date: Sat, 15 Jun 2024 00:14:39 +0000 Subject: [PATCH] Deployed 8516cc4 with MkDocs version: 1.5.3 --- search/search_index.json | 2 +- searches/threat_object_prevalence/index.html | 260 ++++++++++++++----- sitemap.xml.gz | Bin 127 -> 127 bytes 3 files changed, 190 insertions(+), 72 deletions(-) diff --git a/search/search_index.json b/search/search_index.json index c86cd61..2d0d78e 100644 --- a/search/search_index.json +++ b/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"RBA all day","text":""},{"location":"#welcome-to-the-wonderful-world-of-risk-based-alerting","title":"Welcome to the wonderful world of Risk-Based Alerting!","text":"

RBA is Splunk's method to aggregate low-fidelity security events as interesting observations tagged with security metadata to create high-fidelity, low-volume alerts.

"},{"location":"#searches","title":"Searches","text":"

Useful SPL from the RBA community for working with risk events.

"},{"location":"#dashboards","title":"Dashboards","text":"

Simple XML or JSON for Splunk dashboards to streamline risk analysis.

"},{"location":"#risk-rules","title":"Risk Rules","text":"

Splunk's Threat Research Team has an incredible library of over 1000 detections in Splunk's Enterprise Security Content Updates library. You can use Marcus Ferrera and Drew Church's awesome ATT&CK Detections Collector to pop out a handy HTML file of relevant ESCU detections for you to align with MITRE ATT&CK.

"},{"location":"#the-rba-community","title":"The RBA Community","text":"The RBA Community
    Join the RBA Community Today!\n

The RBA Community is a group of professionals dedicated to advancing the field of risk-based alerting (RBA) and Splunk Enterprise Security (ES). Our mission is to provide a forum for sharing knowledge, best practices, and the latest developments in RBA and ES, and to help professionals enhance their understanding and skills in these areas.

Whether you\u2019re new to RBA and ES or a seasoned pro, The RBA Community has something for everyone. We invite you to join us on this journey to enhance your understanding and expertise in RBA and ES \u2013 don\u2019t miss out on this opportunity to learn from the best and connect with other professionals in the field.

Learn more

"},{"location":"#contributing","title":"Contributing","text":"

Want to contribute? See our contributing guidelines.

"},{"location":"#discussionfaq","title":"Discussion/FAQ","text":"

See discussions and frequently asked questions on our GitHub Discussions board.

Visit Discussion Board

"},{"location":"contributing/contributing-guidelines/","title":"Contributing Guidelines","text":"

All are welcome to contribute!

A GitHub account is required to create a pull request to submit new content. If you do not want to submit changes, you may also consider the following:

"},{"location":"contributing/contributing-guidelines/#how-to-contribute","title":"How to Contribute","text":"

This repository uses MkDocs with the Material for MkDocs theme.

If you know Markdown, then using this style of documentation will be a breeze. For a full list of capabilities, see the MkDocs website.

"},{"location":"contributing/contributing-guidelines/#fork-the-rba-github","title":"Fork the RBA GitHub","text":""},{"location":"contributing/contributing-guidelines/#create-a-local-environment-for-testing","title":"Create a local environment for testing","text":"

Testing locally will be a great way to ensure your changes will work with what currently exists.

The easiest way to get started is by using a Python virtual environment. For simplicity, pipenv will be used for the following.

  1. Install Python and pipenv on your local workstation -> Pipenv docs.
  2. Once installed, navigate to your forked repository and run the following to install the latest requirements.

    # your forked rba directory\n# ./rba\npipenv install -r docs/requirements.txt\n
  3. Now you can run pipenv run mkdocs serve, which will start a web server you can reach by opening your browser and navigating to http://localhost:8000.

"},{"location":"contributing/contributors/","title":"Thanks to our GitHub Contributors!","text":"

Daren Cook

Dean Luxton

Donald Murchison

elusive-mesmer

gabs - Gabriel Vasseur

7thdrxn - Haylee Mills

hettervik

matt-snyder-stuff

RedTigR

nterl0k - Steven Dick

Tyler Younger

ZachTheSplunker

"},{"location":"dashboards/","title":"Dashboards","text":""},{"location":"dashboards/#attck-matrix-risk-business-view","title":"ATT&CK Matrix Risk (Business View)","text":"

attack_matrix_risk.xml

Portrays risk in your environment through the lens of RBA and the MITRE ATT&CK framework.

"},{"location":"dashboards/#attribution-analytics-tuning-view","title":"Attribution Analytics (Tuning View)","text":"

audit_attribution_analytics.xml

Helpful for tuning new detections.

"},{"location":"dashboards/#rba-data-source-review","title":"RBA Data Source Review","text":"

rba_data_source_overview.xml

This helps you better understand what data sources you are using in RBA and see gaps in your coverage.

"},{"location":"dashboards/#risk-attributions-investigative-view","title":"Risk Attributions (Investigative View)","text":"

risk_attributions.xml

Risk Attributions.

"},{"location":"dashboards/#risk-investigation","title":"Risk Investigation","text":"

risk_investigation.xml

Risk Investigations.

"},{"location":"dashboards/#risk-notable-analysis","title":"Risk Notable Analysis","text":"

risk_notable_analysis_dashboard.xml

Risk Notable Analysis.

"},{"location":"dashboards/attack_matrix_risk/","title":"ATT&CK Matrix Risk (Business View)","text":"

View on GitHub

"},{"location":"dashboards/audit_attribution_analytics/","title":"Attribution Analytics (Tuning View)","text":"

Helpful for tuning new detections.

View on GitHub

"},{"location":"dashboards/rba_data_source_overview/","title":"RBA Data Source Review","text":"

This helps you better understand what data sources you are using in RBA and see gaps in your coverage.

View on GitHub

"},{"location":"dashboards/risk_attributions/","title":"Risk Attributions (Investigative View)","text":"Prerequisites

View on GitHub

"},{"location":"dashboards/risk_investigation/","title":"Risk Investigation","text":"

View on GitHub

"},{"location":"dashboards/risk_notable_analysis_dashboard/","title":"Risk Notable Analysis","text":"

View on GitHub

"},{"location":"searches/","title":"Helpful Searches","text":"

These are some SPL techniques to get the most out of RBA by adding new features to your implementation or handling a common issue.

"},{"location":"searches/#chaining-behaviors","title":"Chaining behaviors","text":"

This is some simple SPL to organize risk events by risk_object and create risk rules which look for a specific sequence of events or chain of behaviors.
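
As a rough illustration of the idea, here is a minimal sketch; the rule names and the two-stage ordering requirement are hypothetical placeholders rather than searches that ship with ES, so swap in your own risk rules and stages.

Hypothetical chaining sketch
| tstats `summariesonly` min(_time) as firstTime from datamodel=Risk.All_Risk where (source=\"Hypothetical RR - Stage One\" OR source=\"Hypothetical RR - Stage Two\") by All_Risk.risk_object,All_Risk.risk_object_type,source\n| `drop_dm_object_name(\"All_Risk\")`\n``` keep the earliest firing of each stage per risk object ```\n| stats min(eval(if(source=\"Hypothetical RR - Stage One\",firstTime,null()))) as stage_one_time min(eval(if(source=\"Hypothetical RR - Stage Two\",firstTime,null()))) as stage_two_time by risk_object,risk_object_type\n``` only keep objects where both stages fired and stage one came first ```\n| where isnotnull(stage_one_time) AND isnotnull(stage_two_time) AND stage_one_time < stage_two_time\n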

"},{"location":"searches/#deduplicate-notables","title":"Deduplicate Notables","text":"

This feature will drastically reduce the number of duplicate Risk Notables by removing alerts where the events are essentially the same, have already been reviewed, or have already caused another Risk Incident Rule to fire.

"},{"location":"searches/#dynamic-drilldowns","title":"Dynamic Drilldowns","text":"

If you're utilizing a custom risk notable investigation dashboard, it can be incredibly helpful for each risk event source to have its own drilldown. Thanks to Donald Murchison from the RBA Slack for contributing this method, which is explained in more detail in this blog post.

"},{"location":"searches/#essential-rba-searches","title":"Essential RBA searches","text":"

This is all of the handy SPL contained in the Essential Guide to Risk Based Alerting; includes searches for finding noise, reducing noisy notables, and tuning risk rules.

"},{"location":"searches/#integrate-ai-with-rir","title":"Integrate A&I with RiR","text":"

Adding this SPL into your Risk Incident Rules normalizes your risk object to a unique key in the Asset & Identity Framework; the primary advantage of this is throttling to prevent a Risk Incident Rule from firing on both a system and user that represent the same risk events.

"},{"location":"searches/#limit-score-stacking","title":"Limit score stacking","text":"

This SPL for your Risk Score Risk Incident Rules ensures that a single correlation search can only contribute risk a total of three times (or whatever you would like). This is handy for reducing rapidly stacking risk which is common early in the RBA maturation process.

"},{"location":"searches/#naming-systemunknowncomputer-accounts","title":"Naming SYSTEM/Unknown/Computer Accounts","text":"

Computer accounts are used by Active Directory to authenticate machines to the domain, and RBA detections may find behavior in a log where the user account is simply listed as \"SYSTEM\" or even left blank because it is the computer account. This method renames the account to distinguish it as host$ from the noise of \"SYSTEM\" or \"unknown\". It can also be tied into the Asset & Identity framework and contribute to detections on user risk objects.

"},{"location":"searches/#risk-incident-rule-ideas","title":"Risk Incident Rule Ideas","text":"

Alternative ways to alert from the risk index that you may find useful. Later searches will be relying on the base search found in the \"Capped Risk Score by Source\" approach.

"},{"location":"searches/#risk-info-field","title":"Risk info field","text":"

This is one of my favorite additions to RBA; adding this macro to your risk rules creates a field called risk_info (which you can add to your Risk Datamodel) containing all of the useful fields your analyst might use for analysis. It's in JSON formatting which allows easy manipulation in SPL and excellent material for dashboards and unique drilldowns per field.

ADDITIONALLY, this frees risk_message to be used as a short and sweet summary rather than where you store all of the event detail. This lets Risk Notables tell a high level overview of events via risk_message, and is also handy to throttle or deduplicate by.

"},{"location":"searches/#risk-notable-history","title":"Risk Notable History","text":"

Tyler Younger from the RBA Slack contributed this handy method for including some useful history of risk notables for that risk object when it fires. I played with it a bit and created a version I might use in a dashboard for additional context. You should check with your analysts to see what would be most helpful for them.
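
If you want a feel for the shape of that context before building your own, here is a minimal dashboard panel sketch; the $risk_object_token$ token and the chosen columns are assumptions to adapt to your dashboard and analysts.

Hypothetical history panel sketch
`notable`\n| search eventtype=risk_notables risk_object=\"$risk_object_token$\"\n``` list prior risk notables for this object with their contributing rules and disposition ```\n| table _time source orig_source status_label risk_score\n| sort - _time\n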

"},{"location":"searches/#threat-object-prevalence","title":"Threat Object Prevalence","text":"

One of the great features in RBA is knowing how often something has occurred in an environment; generally, the rarer or more anomalous something is, the more likely it is to be malicious. The threat object drilldown in the sample Risk Investigation Dashboard is designed to offer an analyst that context, but with a simple saved search, we could use that context in our Risk Notables as well.
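
As a minimal sketch of that saved search (the lookup name is a made-up placeholder, and the prevalence window comes from whatever time range you schedule it over), you could periodically count how many distinct risk objects each threat object has touched and write that to a lookup:

Hypothetical prevalence saved search
| tstats `summariesonly` count dc(All_Risk.risk_object) as risk_object_count from datamodel=Risk.All_Risk by All_Risk.threat_object,All_Risk.threat_object_type\n| `drop_dm_object_name(\"All_Risk\")`\n``` event count and distinct risk objects per threat object over the scheduled time range ```\n| outputlookup threat_object_prevalence.csv\n

A Risk Incident Rule could then enrich its results with something like | lookup threat_object_prevalence.csv threat_object OUTPUTNEW risk_object_count as threat_object_prevalence, though you will need to decide how to handle multi-valued threat_object fields.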

"},{"location":"searches/#threat-object-types","title":"Threat Object Types","text":"

Increasing the number of threat object types you track in Risk Rules can be really helpful for tuning noisy alerts, threat hunting on anomalous combinations, and automating SOAR enrichment to unique threat object types. Haylee and Stuart's Threat Object Fun dashboards can be helpful for all three.

"},{"location":"searches/asset_and_identity_rir_logic/","title":"Integrate Asset & Identity Information into Risk Incident Rules","text":"

Note

This feature was added in ES 7.1, utilizing the normalized_risk_object field. It is also used for throttling in the default Risk Incident Rules, which prevents notables from firing repeatedly on the same identity with different users or the same asset with different hosts if A&I is configured.

This was a comment on this excellent Splunk Idea to lower() or upper() the risk_object in Risk Incident Rules, which goes one step further by integrating A&I information:

| tstats `summariesonly` min(_time) as firstTime max(_time) as lastTime sum(All_Risk.calculated_risk_score) as risk_score, count(All_Risk.calculated_risk_score) as risk_event_count,values(All_Risk.annotations.mitre_attack.mitre_tactic_id) as annotations.mitre_attack.mitre_tactic_id, values(All_Risk.annotations.mitre_attack.mitre_technique_id) as annotations.mitre_attack.mitre_technique_id, values(All_Risk.tag) as tag, values(source) as source from datamodel=Risk.All_Risk by All_Risk.risk_object,All_Risk.risk_object_type\n| `drop_dm_object_name(\"All_Risk\")`\n| eval risk_object=upper(risk_object)\n| lookup update=true identity_lookup_expanded identity as risk_object OUTPUTNEW _key as asset_identity_id,identity as asset_identity_value\n| lookup update=true asset_lookup_by_str asset as risk_object OUTPUTNEW _key as asset_identity_id,asset as asset_identity_value\n| eval asset_identity_risk_object=CASE(isnull(asset_identity_id),risk_object,true(),asset_identity_id)\n| stats min(firstTime) as firstTime max(lastTime) as lastTime sum(risk_score) as risk_score, sum(risk_event_count) as risk_event_count,values(annotations.mitre_attack.mitre_tactic_id) as annotations.mitre_attack.mitre_tactic_id, dc(annotations.mitre_attack.mitre_tactic_id) as mitre_tactic_id_count, values(annotations.mitre_attack.mitre_technique_id) as annotations.mitre_attack.mitre_technique_id, dc(annotations.mitre_attack.mitre_technique_id) as mitre_technique_id_count, values(tag) as tag, values(source) as source, dc(source) as source_count values(asset_identity_value) as asset_identity_value values(risk_object) as risk_object dc(risk_object) as risk_object_count by asset_identity_risk_object,risk_object_type\n| eval \"annotations.mitre_attack\"='annotations.mitre_attack.mitre_technique_id', risk_threshold=100\n| eval user=case(risk_object_type=\"user\",risk_object,true(),user),src=case(risk_object_type=\"system\",risk_object,true(),src)\n| where risk_score >= $risk_threshold$\n| `get_risk_severity(risk_score)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n

Note

As they mention in the comment -- the one \"catch\" is you'll need to change your throttle object from \"risk_object\" to \"asset_identity_risk_object\" -- but this is great for preventing duplicate notables on the same basic user / system combination.

"},{"location":"searches/asset_and_identity_rir_logic/#extra-credit","title":"Extra Credit","text":"

Adding the above logic will increase the accuracy of risk-based alerting; however, pivoting via the built-in drilldown will still be limited. The following changes will allow analysts to pivot directly to all Risk alerts detected by the associated RIR.

Create a macro called get_risk_asset_ident(2) with the following.

Macro Definition
eval risk_in=\"$risk_object_in$\",risk_type_in=\"$risk_object_type_in$\"\n| lookup update=true identity_lookup_expanded identity as risk_object OUTPUTNEW _key as assetid_ident_id,identity as assetid_ident_value \n| lookup update=true asset_lookup_by_str asset as risk_object OUTPUTNEW _key as assetid_asset_id,asset as assetid_asset_value \n| lookup update=true identity_lookup_expanded identity as risk_in OUTPUTNEW _key as assetid_in_ident,identity as assetid_in_ident_value \n| lookup update=true asset_lookup_by_str asset as risk_in OUTPUTNEW _key as assetid_in_asset,asset as assetid_in_asset_value \n| eval risk_object_out=CASE((risk_type_in=\"user\" AND assetid_ident_id = 'assetid_in_ident'),assetid_in_ident_value, (risk_type_in=\"system\" AND (assetid_asset_id = 'assetid_in_asset')),assetid_in_asset_value)\n| eval risk_in=upper(risk_in)\n| eval risk_object=upper(risk_object)\n| where isnotnull(risk_object_out) OR (risk_object = risk_in)\n
Arguments
risk_object_in,risk_object_type_in\n

The completed get_risk_asset_ident(2) macro.

Update macro permissions

Assign global scope (All apps) and allow all users read permission.

"},{"location":"searches/asset_and_identity_rir_logic/#update-existing-rir-drilldowns","title":"Update existing RiR drilldowns","text":"

Modify existing RIR drilldowns to include the macro, similar to the example below.

Example
| from datamodel:\"Risk.All_Risk\"  \n| `get_risk_asset_ident($risk_object|s$,$risk_object_type|s$)`\n| `get_correlations`  \n| rename annotations.mitre_attack.mitre_tactic_id as mitre_tactic_id, annotations.mitre_attack.mitre_tactic as mitre_tactic, annotations.mitre_attack.mitre_technique_id as mitre_technique_id, annotations.mitre_attack.mitre_technique as mitre_technique\n

Authors

@7thdrxn - Haylee Mills"},{"location":"searches/deduplicate_notables/","title":"Deduplicate Notable Events","text":"

Throttle Alerts Which Have Already Been Reviewed or Fired

Because Risk Notables look at a period of time, it is common for a risk_object to keep creating notables as additional (and even duplicate) events roll in, as well as when events fall off as the time period moves forward. Additionally, different Risk Incident Rules could be firing on the same risk_object with the same events but create new Risk Notables. It is difficult to get around this with throttling, so here are some methods to deduplicate notables.

"},{"location":"searches/deduplicate_notables/#navigation","title":"Navigation","text":"

Here are two methods for Deduplicating Notable Events:

- Skill Level Pros Cons Method I Intermediate Deduplicates on front and back end More setup time Method II Beginner Easy to get started with Only deduplicates on back end"},{"location":"searches/deduplicate_notables/#method-i","title":"Method I","text":"

We'll use a Saved Search to store each Risk Notable's risk events and our analyst's status decision as a cross-reference for new notables. Entirely new events will still fire, but repeated events from the same source will not. This also takes care of duplicate notables on the back end as events roll off of our search window.

KEEP IN MIND

Edits to the Incident Review - Main search may be replaced on updates to Enterprise Security, requiring you to make this minor edit again to regain this functionality. Ensure you have a step in your relevant process to check this search after an update.

"},{"location":"searches/deduplicate_notables/#1-create-a-truth-table","title":"1. Create a Truth Table","text":"

This method is described in Stuart McIntosh's 2019 .conf Talk (about 9m10s in), and we're going to create a similar lookup table. You can either download and import that file yourself, or create something like this in the Lookup Editor app:

Truth Table"},{"location":"searches/deduplicate_notables/#2-create-a-saved-search","title":"2. Create a Saved Search","text":"

Then we'll create a Saved Search which runs relatively frequently to store notable data and statuses.

  1. Navigate to Settings -> Searches, reports, and alerts.
  2. Select \"New Report\" in the top right.

Here is a sample to replicate

Sample Report With this SPL
index=notable eventtype=risk_notables\n| eval indexer_guid=replace(_bkt,\".*~(.+)\",\"\\1\"),event_hash=md5(_time._raw),event_id=indexer_guid.\"@@\".index.\"@@\".event_hash\n| fields _time event_hash event_id risk_object risk_score source orig_source\n| eval temp_time=time()+86400\n| lookup update=true event_time_field=temp_time incident_review_lookup rule_id AS event_id OUTPUT status as new_status\n| lookup update=true correlationsearches_lookup _key as source OUTPUTNEW default_status\n| eval status=case(isnotnull(new_status),new_status,isnotnull(status),status,1==1,default_status)\n| fields - temp_time,new_status,default_status\n| eval temp_status=if(isnull(status),-1,status)\n| lookup update=true reviewstatuses_lookup _key as temp_status OUTPUT status,label as status_label\n| fields - temp_status\n| eval sources = if(isnull(sources) , orig_source , sources )\n| table _time event_hash risk_object source status_label sources risk_score\n| reverse\n| streamstats current=f window=0 latest(event_hash) as previous_event_hash values(*) as previous_* by risk_object\n| eval previousNotable=if(isnotnull(previous_event_hash) , \"T\" , \"F\" )\n| fillnull value=\"unknown\" previous_event_hash previous_status_label previous_sources previous_risk_score\n| eval matchScore = if( risk_score != previous_risk_score , \"F\" , \"T\" )\n| eval previousStatus = case( match(previous_status_label, \"(Closed)\") , \"nonmalicious\" , match(previous_status_label, \"(New|Resolved)\") , \"malicious\" , true() , \"malicious\" )\n# (1)!\n| mvexpand sources\n| eval matchRR = if(sources != previous_sources , \"F\", \"T\")\n| stats  dc(sources) as dcSources dc(matchRR) as sourceCheckFlag values(*) as * by _time risk_object event_hash\n| eval matchRR = if(sourceCheckFlag > 1 , \"F\" , matchRR )\n| lookup RIR-Truth-Table.csv previousNotable previousStatus matchRR matchScore OUTPUT alert\n| table _time risk_object source risk_score event_hash dcSources alert previousNotable previousStatus matchRR matchScore\n| outputlookup RIR-Deduplicate.csv\n
  1. previousStatus uses the default ES status label \"Closed\".

In the SPL for previousStatus above, I used the default ES status label \"Closed\" as our only nonmalicious status. You'll have to make sure to use status labels which are relevant for your Incident Review settings. \"Malicious\" is used as the fallback status just in case, but you may want to differentiate \"New\" or unmatched statuses as something else for audit purposes; just make sure to create relevant matches in your truth table.

I recommend copying the alert column from malicious events

"},{"location":"searches/deduplicate_notables/#schedule-the-saved-search","title":"Schedule the Saved Search","text":"Create schedule
    Now find the search in this menu, click *Edit -> Edit Schedule* and try these settings:\n

I made this search pretty lean, so running it every three minutes should work pretty well; I also decided to only look back seven days as this lookup could balloon in size and cause bundle replication issues. You probably want to stagger your Risk Incident Rule cron schedules by one minute more than this one so they don't fire on the same risk_object with the same risk events.

"},{"location":"searches/deduplicate_notables/#3-deduplicate-notables","title":"3. Deduplicate notables","text":"

Our last step is to ensure that the Incident Review panel doesn't show us notables when we've found a match to our truth table which doesn't make sense to alert on. In the Searches, reports, alerts page, find the search Incident Review - Main and click Edit -> Edit Search.

By default it looks like this:

Default incident review search

And we're just inserting these lines after the base search:

Append to the base search
...\n| lookup RIR-Deduplicate.csv _time risk_object source OUTPUTNEW alert\n| search NOT alert=\"no\"\n

Updated incident review search"},{"location":"searches/deduplicate_notables/#congratulations","title":"Congratulations!","text":"

You should now have a significant reduction in duplicate notables!

If something isn't working, make sure that the Saved Search is correctly outputting a lookup (which should have Global permissions), and ensure if you | inputlookup RIR-Deduplicate.csv you see all of the fields being returned as expected. If Incident Review is not working, something is wrong with the lookup or your edit to that search.

"},{"location":"searches/deduplicate_notables/#extra-credit","title":"Extra Credit","text":"

If you utilize the Risk info field so you have a short and sweet risk_message, you can add another level of granularity to your truth table.

If you utilize risk_message for ALL of the event detail, it may be too granular and won't be as helpful for throttling.

This is especially useful if you are creating risk events from a data source with its own signatures like EDR, IDS, or DLP. Because the initial truth table only looks at score and correlation rule, if you have one correlation rule importing numerous signatures, you may want to alert when a new signature within that source fires.

"},{"location":"searches/deduplicate_notables/#create-a-calculated-field","title":"Create a calculated field","text":"

First, we'll create a new Calculated Field from risk_message in our Risk Datamodel called risk_hash with eval's md5() function, which bypasses the need to deal with special characters or other strangeness that might be in that field. If you haven't done this before - no worries - you just have to go to Settings -> Data Models -> Risk Data Model -> Edit -> Edit Acceleration and turn this off. Afterwards, you can Create New -> Eval Expression like this:

Creating risk_hash from md5(risk_message) in data model Don't forget to re-enable the acceleration

You may have to rebuild the data model from the Settings -> Data Model menu for this field to appear in your events.
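
If you want to see what the new field will look like, here is a quick standalone sketch you can run in search; the message text is just a made-up example.

Testing md5(risk_message)
| makeresults\n| eval risk_message=\"Hypothetical - suspicious download observed on host\"\n``` same expression used for the risk_hash calculated field ```\n| eval risk_hash=md5(risk_message)\n| table risk_message risk_hash\n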

"},{"location":"searches/deduplicate_notables/#update-spl","title":"Update SPL","text":"

Then we have to add this field into our Risk Incident Rules by adding this line to their initial SPL and ensure this field is retained downstream:

Field to add to RiR
values(All_Risk.risk_hash) as risk_hashes\n

Now our Risk Notables will have a multi-value list of risk_message hashes. We must update our truth table to include a field called \"matchHashes\" - I've created a sample truth table here, but you must decide what is the proper risk appetite for your organization.

Next we'll edit the Saved Search we created above to include the new fields and logic:

Updated logic (changes highlighted)
...\n| eval sources = if(isnull(sources) , orig_source , sources )\n| table _time event_hash risk_object source status_label sources risk_score risk_hashes\n| reverse\n| streamstats current=f window=0 latest(event_hash) as previous_event_hash values(*) as previous_* by risk_object\n| eval previousNotable=if(isnotnull(previous_event_hash) , \"T\" , \"F\" )\n| fillnull value=\"unknown\" previous_event_hash previous_status_label previous_sources previous_risk_score previous_risk_hashes\n| eval matchScore = if( risk_score != previous_risk_score , \"F\" , \"T\" )\n| eval previousStatus = case( match(previous_status_label, \"(Closed)\") , \"nonmalicious\" , match(previous_status_label, \"(New|Resolved)\") , \"malicious\" , true() , \"malicious\" )\n| mvexpand risk_hashes\n| eval matchHashes= if(risk_hashes != previous_risk_hashes , \"F\" , \"T\" )\n| stats dc(matchHashes) as hashCheckFlag values(*) as * by _time risk_object event_hash\n| eval matchHashes = if(hashCheckFlag > 1 , \"F\" , matchHashes )\n| mvexpand sources\n| eval matchRR = if(sources != previous_sources , \"F\", \"T\")\n| stats  dc(sources) as dcSources dc(matchRR) as sourceCheckFlag values(*) as * by _time risk_object event_hash\n| eval matchRR = if(sourceCheckFlag > 1 , \"F\" , matchRR )\n| lookup RIR-Truth-Table.csv previousNotable previousStatus matchRR matchScore matchHashes OUTPUT alert\n| table _time risk_object source risk_score event_hash dcSources alert previousNotable previousStatus matchRR matchScore matchHashes\n| outputlookup RIR-Deduplicate.csv\n

Voila! We now ensure that our signature-based risk rule data sources will properly alert if there are interesting new events for that risk object.

"},{"location":"searches/deduplicate_notables/#method-ii","title":"Method II","text":"

This method is elegantly simple: it ensures notables don't re-fire as earlier events drop off the rolling search window of your Risk Incident Rules by only firing if the latest risk event is from the past 70 minutes.

Append to existing RIR
...\n| stats latest(_indextime) AS latest_risk\n| where latest_risk >= relative_time(now(),\"-70m@m\")\n

Credit to Josh Hrabar and James Campbell, this is brilliant. Thanks y'all!

Authors

@7thdrxn - Haylee Mills"},{"location":"searches/dynamic_drilldowns/","title":"Dynamic Drilldowns by Source","text":"

If you're utilizing a custom risk notable investigation dashboard, it can be incredibly helpful for each risk event source to have its own drilldown. Thanks to Donald Murchison from the RBA Slack for contributing this method, which is explained in more detail in this blog post.

"},{"location":"searches/dynamic_drilldowns/#create-a-drilldown-lookup","title":"Create a Drilldown Lookup","text":"

First you'll need a lookup file with your risk rule name, the drilldown itself, and a description, like so:

Specific drilldowns will help analysts find exactly what they want to know

You can utilize this example from Donald's article and change it to suit your purposes.

"},{"location":"searches/dynamic_drilldowns/#create-your-drilldown-panel","title":"Create your Drilldown Panel","text":"

In Donald's example, this panel shows the list of sources for the risk object indicated by $risk_object_token$ (which you will need to ensure matches whatever token your dashboard uses), a description, and the drilldown logic itself. Here is the SPL and helpful comments:

Drilldown Panel SPL
| tstats summariesonly=false count from datamodel=Risk.All_Risk where All_Risk.risk_object=\"$risk_object_token$\" by source\n``` Get a list of all risk rules that have generated a risk event for this risk object - assumes the dashboard has an input which stores risk_object in \"risk_object_token\"\nreplace risk_object_token with your own token name - helpful to use risk_object_type in search if this is in a token as well ```\n| fields source\n``` Use map to run a search for each risk rule to generate the drilldowns - map was used to be able to pass the risk rule name as an argument to the subsearch.\nThis is required because we must run an individual \"\n| eval drilldown=\u2026\" for each risk rule in case fields are used in the drilldown that do not exist in other risk events.\nString concatenation with a null field would make our entire string null.\nIf you wanted to remove map for better performance you could do this by only using fields that are present in every risk rule or building drilldowns with coalesce - coalesce(risk_object,\\\"\\\") - to ensure no null fields```\n| map search=\"index=risk risk_object=\\\"$risk_object_token$\\\" | eval drilldown=[| inputlookup rba_risk_rule_drilldowns.csv | eval search_name=split(search_name,\\\"|\\\") | search search_name=\\\"$$source$$\\\" | eval drilldown=\\\"\\\\\\\"\\\".search_name.\\\"||@||\\\\\\\".\\\".drilldown.\\\".\\\\\\\"||@||\\\".description.\\\"\\\\\\\"\\\"\n``` In the map search, we first search for all risk events related to the risk rule. Every risk event will get a drilldown field that we will dedup later. We do not use the datamodel in case fields outside of the datamodel are used in the drilldown.\nThe |inputlookup subsearch concatenates search_name, drilldown, and description for each row```\n| stats values(drilldown) as drilldown\n| eval drilldown=mvjoin(drilldown,\\\".\\\\\\\"||&||\\\\\\\".\\\")\n``` We then condense all drilldowns to a single field and concatenate together - this allows us to evaluate all drilldowns within a single eval statement```\n|return $drilldown] | fields drilldown\"\n```Now we break out the drilldowns into their respective components```\n| eval drilldown=split(drilldown,\"||&||\")\n| mvexpand drilldown\n| eval search_name=mvindex(split(drilldown,\"||@||\"),0)\n| eval drilldown_description=mvindex(split(drilldown,\"||@||\"),2)\n| eval drilldown=mvindex(split(drilldown,\"||@||\"),1)\n| stats values(*) as * by drilldown\n``` Use stats to dedup the drilldowns - depending on the fields used in the drilldown there could be multiple variations of the same drilldown```\n| table search_name drilldown_description drilldown\n
"},{"location":"searches/dynamic_drilldowns/#add-drilldown-functionality","title":"Add Drilldown Functionality","text":"

You can follow along with Donald's article to add the drilldown in the GUI editor, but the SimpleXML for this panel would be:

Drilldown SimpleXML
<drilldown>\n<link target=\"_blank\">search?q=$row.drilldown$&amp;earliest=$field1.earliest$&amp;latest=$field1.latest$</link>\n</drilldown>\n

This allows a click anywhere on that row to drive the search. Make sure your time picker tokens match here as well!

You could also utilize the time around an event by retaining _time in the initial search and declaring this later in SPL: Extra Time Control

| eval lowtime = _time - 300\n| eval hightime = _time + 300\n

So you could use $row.lowtime$ and $row.hightime$ for your drilldown and search a window of five minutes on either side of an event instead of utilizing the standard time picker for your dashboard.

"},{"location":"searches/dynamic_drilldowns/#customize-your-heart-out","title":"Customize Your Heart Out","text":"

This is a great way to incorporate something akin to Workflow Actions for your custom dashboards. You could go a bit further and potentially:

Just as examples. Please share your variations on the RBA Slack!

Authors

@7thdrxn - Haylee Mills Donald Murchison"},{"location":"searches/limit_score_stacking/","title":"Limit Risk Rule Score Stacking","text":"

These will help reduce the maximum amount of risk which can be added from noisy Risk Rules.

"},{"location":"searches/limit_score_stacking/#navigation","title":"Navigation","text":"

There are two methods for limiting score stacking

- Skill Level Pros Cons Method I Beginner Easy to get started with Less context around what was capped and why Method II Intermediate More precise deduplication and additional information Additional understanding of SPL"},{"location":"searches/limit_score_stacking/#method-i","title":"Method I","text":"

This caps the risk score contribution of a single source by 3x the highest score from that source.

| tstats summariesonly=true sum(All_Risk.calculated_risk_score) as summed_risk_score max(All_Risk.calculated_risk_score) as single_risk_score dc(source) as source_count count\n FROM datamodel=Risk.All_Risk\n WHERE All_Risk.risk_object_type=\"*\" (All_Risk.risk_object=\"*\" OR risk_object=\"*\")\n BY All_Risk.risk_object All_Risk.risk_object_type source\n| eval capped_risk_score=if(summed_risk_score < single_risk_score*3, summed_risk_score, single_risk_score*3)\n| stats sum(capped_risk_score) as capped_risk_score sum(summed_risk_score) as summed_risk_score dc(source) as source sum(count) as count\n BY All_Risk.risk_object All_Risk.risk_object_type\n| sort 1000 - risk_score\n...\n

Note

You may want to limit this to particular sources, but this is extra handy for noisy sources like EDR, DLP, or IDS.

Thanks David Dorsey!

"},{"location":"searches/limit_score_stacking/#method-ii","title":"Method II","text":"

This option adds some complexity; however, it provides more information and better deduplication. The full write-up of how to accomplish this method can be found on gabs' website.

Visit Website

*reference: https://www.gabrielvasseur.com/post/rba-a-better-way-to-dedup-risk-events

Final SPL from blog post
| inputlookup TEMP_GABS_riskybusiness.csv\n``` First we take the breakdown of what actually happened, before doing any kind of deduping ```\n| eventstats sum(count) as count_msg\n    by risk_object risk_object_type risk_score source risk_message ```Get breakdown per risk_message``` \n| eventstats values(eval(count_msg.\"*\".risk_score)) as breakdown_msg\n    by risk_object risk_object_type            source risk_message ```Get breakdown per risk_message```\n| eventstats sum(count) as count_src\n    by risk_object risk_object_type risk_score source ```Get breakdown per source```\n| eventstats values(eval(count_src.\"*\".risk_score)) as breakdown_src\n    by risk_object risk_object_type            source ```Get breakdown per source```\n| stats sum(count) as risk_event_count, values(breakdown_src) as breakdown_src,\n    values(breakdown_msg) as breakdown_msg, sum(eval(risk_score*count)) as total_score,\n    max(risk_score) as max_score, latest(_time) as _time, values(mitre_*) as mitre_*\n    by risk_object risk_object_type source risk_message ```Reduce to unique risk_message\n    (it's not impossible to have several risks with the same risk_message but different scores)```\n| eval risk_message= mvjoin(breakdown_msg,\"+\").\"=\".max_score\n    . if( total_score!=max_score, \" (!\" . total_score . \")\", \"\") . \" \" .risk_message\n``` START limit to a maximum of 10 contributions per source ```\n| sort 0 risk_object risk_object_type source - max_score ``` Only the lowest scores will be dedup'd ```\n| eventstats dc(risk_message) as dc_msg_per_source by risk_object risk_object_type source \n| streamstats count as rank_per_source by risk_object risk_object_type source \n| eval risk_message=case( \n    rank_per_source <= 10, risk_message,\n    rank_per_source = 11, \"...+\" . ( dc_msg_per_source - 20 ) . \" others from '\" . source . \"'...\" ,\n    1==1, null() ) \n| eval max_score=if( rank_per_source <= 10, max_score, 0 )\n``` END limit to a maximum of 10 contributions per source ```\n| stats sum(risk_event_count) as risk_event_count, values(breakdown_src) as breakdown_src,\n    list(risk_message) as risk_message, sum(max_score) as risk_score,\n    sum(total_score) as risk_score_nodedup, latest(_time) as _time, values(mitre_*) as mitre_*\n    by risk_object risk_object_type source ```Reduce to unique source```\n| eval breakdown_src = mvjoin(breakdown_src,\"+\") .\"=\".risk_score\n    . if( risk_score!=risk_score_nodedup, \" (!\" . risk_score_nodedup . \")\", \"\" ) . \" \".source\n| stats sum(risk_event_count) as risk_event_count, list(source) as source, dc(source) as source_count,\n    list(breakdown_src) as srcs, list(risk_message) as risk_message, sum(risk_score) as risk_score,\n    sum(risk_score_nodedup) as risk_score_nodedup, latest(_time) as _time, values(mitre_*) as mitre_*,\n    dc( mitre_tactic_id) as mitre_tactic_id_count, dc(mitre_technique_id) as mitre_technique_id_count\n    by risk_object risk_object_type ```Reduce to unique object```\n

Authors

@7thdrxn - Haylee Mills @gabs - Gabriel Vasseur"},{"location":"searches/naming_system_unknown_computer_accounts/","title":"Naming SYSTEM / Unknown / Computer Accounts - The SEAL Method","text":"

Computer accounts are used by Active Directory to authenticate machines to the domain, and RBA detections may find behavior in a log where the user account is simply listed as \"SYSTEM\" or even left blank because it is the computer account. This method renames the account to distinguish it as host$ from the noise of \"SYSTEM\" or \"unknown\". It can also be tied into the Asset & Identity framework and contribute to detections on user risk objects.

"},{"location":"searches/naming_system_unknown_computer_accounts/#steps","title":"Steps","text":"

Navigate to Settings > Fields > Calculated Fields > Add New

Setting Value Source XmlWinEventLog:Security Source XmlWinEventLog:Microsoft-Windows-Sysmon/Operational Name user Eval Expression if(user=\"SYSTEM\" OR user=\"-\",'host'+\"$\",'user') Conflicting knowledge objects - Sysmon TA

We have to be careful with the existing order of knowledge objects and calculated fields. The Sysmon TA already has a calculated field for user, which we can update as follows:

Existing:
user = upper(case(\n    NOT isnull(User) AND NOT User IN (\"-\"), replace(User, \"(.*)\\\\\\(.+)$\",\"\\2\"),\n    NOT isnull(SourceUser) AND NOT isnull(TargetUser) AND SourceUser==TargetUser, replace(SourceUser, \"(.*)\\\\\\(.+)$\",\"\\2\")\n    ))\n
Update to:
user = upper(case(\n    match(User,\".+\\\\\\SYSTEM\"), host.\"$\",\n    NOT isnull(User) AND NOT User IN (\"-\"), replace(User, \"(.*)\\\\\\(.+)$\",\"\\2\"),\n    NOT isnull(SourceUser) AND NOT isnull(TargetUser) AND SourceUser==TargetUser, replace(SourceUser, \"(.*)\\\\\\(.+)$\",\"\\2\")\n    ))\n
"},{"location":"searches/naming_system_unknown_computer_accounts/#extra-credit","title":"Extra Credit","text":"

We're not going to map this entire process due to how different it can be in each environment, but you can now add the computer account to your Identity lookup to aggregate with other user accounts. For example, you might take the fields nt_host and owner from your Asset lookup (asset_lookup_by_str), then map owner to email in the Identity lookup (identity_lookup_expanded). If you make a saved search that outputs a CSV, you can now use that to add fields into your Identity lookup.
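
As one hedged example of what that saved search could look like (the output lookup name and the identity format are assumptions to adapt to your own A&I conventions), you might pull nt_host and owner from the asset lookup and emit rows you can merge into your identity list:

Hypothetical identity-mapping saved search
| inputlookup asset_lookup_by_str\n| search nt_host=* owner=*\n``` treat the computer account (host$) as an identity tied to the asset owner ```\n| eval identity=lower(nt_host).\"$\", email=owner\n| table identity email\n| outputlookup computer_account_identities.csv\n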

Authors

@Dean Luxton @StevenD"},{"location":"searches/risk_guide_searches/","title":"Essential RBA searches","text":"

Handy SPL contained in the Essential Guide to Risk Based Alerting.

"},{"location":"searches/risk_guide_searches/#determine-correlation-searches-with-high-falsebenign-positive-rates","title":"Determine Correlation Searches with High False/Benign Positive Rates","text":"
`notable`\n| stats count(eval(status_label=\"Incident\")) as incident count(eval(status_label=\"Resolved\")) as closed\n BY source\n| eval benign_rate = 1 - incident / (incident + closed)\n| sort - benign_rate\n
Note

Be sure to replace the status_label with whatever is used in your environment.

"},{"location":"searches/risk_guide_searches/#risk-rules-generating-the-most-risk","title":"Risk Rules Generating the Most Risk","text":"
| tstats summariesonly=false sum(All_Risk.calculated_risk_score)\n   as risk_score,dc(All_Risk.risk_object)\n   as risk_objects,count\n FROM datamodel=Risk.All_Risk\n WHERE * All_Risk.risk_object_type=\"*\" (All_Risk.risk_object=\"*\" OR risk_object=\"*\")\n BY source\n| sort 1000 - count risk_score\n
"},{"location":"searches/risk_guide_searches/#dig-into-noisy-threat-objects","title":"Dig into Noisy Threat Objects","text":"
| tstats summariesonly=true count dc(All_Risk.risk_object) as dc_objects dc(All_Risk.src) as dc_src dc(All_Risk.dest) as dc_dest dc(All_Risk.user) as dc_users dc(All_Risk.user_bunit) as dc_bunit sum(All_Risk.calculated_risk_score) as risk_score values(source) as source\n FROM datamodel=Risk.All_Risk\n BY All_Risk.threat_object,All_Risk.threat_object_type\n| `drop_dm_object_name(\"All_Risk\")`\n| sort 1000 - risk_score\n
"},{"location":"searches/risk_guide_searches/#find-noisiest-risk-rules-in-risk-notables","title":"Find Noisiest Risk Rules in Risk Notables","text":"
index=notable eventtype=risk_notables\n| stats count\n BY orig_source\n| eventstats sum(count) as total\n| eval percentage = round((count / total) * 100,2)\n| sort - percentage\n
"},{"location":"searches/risk_guide_searches/#structural-changes","title":"Structural Changes","text":""},{"location":"searches/risk_guide_searches/#notable-macro-to-edit-for-qa-risk-notables","title":"Notable Macro to Edit for QA Risk Notables","text":"

Add | eval QA=1 to the end of your Risk Incident Rules, then edit the get_notable_index macro from the default to enable \"QA\" mode.

default
index=notable\n
QA mode
index=notable NOT QA=1\n

This will keep Risk Notables out of your Incident Review queue while you develop RBA.

"},{"location":"searches/risk_guide_searches/#create-a-sandbox-for-risk-rules-away-from-risk-notables","title":"Create a Sandbox for Risk Rules away from Risk Notables","text":"

Create an eventtype called something like QA and have it apply a tag called QA, then add the following to your Risk Incident Rules.

...\nWHERE NOT All_Risk.tag=QA\n...\n

This keeps your curated risk ecology preserved so you can compare how many Risk Notables you would see if your QA content was added.

"},{"location":"searches/risk_guide_searches/#include-previous-notables-in-new-notables","title":"Include Previous Notables in New Notables","text":"

If you create a lookup from a saved search called Past7DayNotables.csv where you store the previous time, status, and sources, you could include this in your Risk Incident Rules:

| lookup Past7DayNotables.csv risk_object OUTPUT prev_time prev_status prev_sources\n| eval prev_alerts = prev_time.\" - \".prev_status.\" - \".prev_sources\n
Note

Make sure to add prev_alerts to the Incident Review Settings page so this shows up in the Incident Review panel.
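
The saved search that populates Past7DayNotables.csv is not spelled out here, but a minimal sketch (scheduled over a 7 day time range; the exact fields are assumptions to adjust for your analysts) could look like this:

Hypothetical Past7DayNotables saved search
`notable`\n| search eventtype=risk_notables\n``` keep the most recent disposition and the contributing rules per risk object ```\n| stats latest(_time) as prev_time latest(status_label) as prev_status values(orig_source) as prev_sources by risk_object\n| eval prev_time=strftime(prev_time,\"%Y-%m-%d %H:%M\"), prev_sources=mvjoin(prev_sources,\", \")\n| outputlookup Past7DayNotables.csv\n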

"},{"location":"searches/risk_guide_searches/#tuning","title":"Tuning","text":""},{"location":"searches/risk_guide_searches/#remove-results-with-a-lookup","title":"Remove Results with a Lookup","text":"

Once you have a lookup built out, insert it into a search like this:

index=proxy http_method=\"POST\" NOT\n  [| inputlookup RR_Proxy_Allowlist.csv\n  | fields Web.src Web.dest\n  | rename Web.* AS *]\n

You could also do this with a datamodel:

| tstats summariesonly=t values(Web.dest) as dest from datamodel Web.Web where Web.http_method=\"POST\" NOT\n  [| inputlookup RR_Proxy_Allowlist.csv | fields Web.src Web.dest]\n  by _time, Web.src\n

This uses the Web datamodel field constraints as an example so we can properly exclude results from index- or datamodel-based risk rules.

"},{"location":"searches/risk_guide_searches/#adjust-risk-scores","title":"Adjust Risk Scores","text":""},{"location":"searches/risk_guide_searches/#using-eval","title":"Using eval","text":"
index=proxy signature=*\n| table src user user_bunit dest signature http_code\n| eval risk_score = case(\n  signature=\"JS:Adware.Lnkr.A\",\"10\",\n  signature=\"Win32.Adware.YTDownloader\",\"0\",\n  NOT http_code=\"200\",\"25\",\n  signature=\"Trojan.Win32.Emotet\" AND NOT user_bunit=\"THREAT INTELLIGENCE\",\"100\"\n  )\n

In this example, we are:

"},{"location":"searches/risk_guide_searches/#using-lookup","title":"Using lookup","text":"
index=proxy signature=*\n| table src user user_bunit dest signature http_code\n| lookup RR_Proxy_Adjust.csv src user user_bunit dest signature http_code OUTPUTNEW risk_score\n

We can do the same with a lookup and as many relevant fields as we need for the most constrained exclusions.

"},{"location":"searches/risk_guide_searches/#dedup-similar-events-from-counting-multiple-times-in-risk-notables-score","title":"Dedup Similar Events from Counting Multiple Times in Risk Notables (Score)","text":"
...\n BY All_Risk.risk_object,All_Risk.risk_object_type\n| `drop_dm_object_name(\"All_Risk\")`\n| streamstats sum(risk_score) as original_score values(source) as sources values(risk_message) as risk_messages by risk_object\n| eval adjust_score = case(\n source IN (\"My Noisy Rule That Fires a Lot but I Still Want to Know About, Once\", \"My Other Really Useful Context Low Risk Rule\"),\"1\",\n match(risk_message,\"IDS - Rule Category 1.*|IDS - Rule Category 2.*\") OR match(risk_message,\"DLP - Rule Category 1.*|DLP - Rule Category 2.*\"),\"1\",\n 1=1,null())\n| eval combine = coalesce(adjust_score,risk_message)\n| dedup combine risk_score\n| streamstats sum(risk_score) as risk_score values(sources) as source values(risk_messages) as risk_message by risk_object\n...\n

This makes sure similar detections on basically the same event only count once in our total risk score.

"},{"location":"searches/risk_guide_searches/#weight-events-from-noisy-sources-in-risk-notables-metadata","title":"Weight Events from Noisy Sources in Risk Notables (Metadata)","text":"
...\nBY All_Risk.risk_object,All_Risk.risk_object_type\n| `drop_dm_object_name(\"All_Risk\")`\n| mvexpand source\n| lookup RIRadjust-rule_weight.csv source OUTPUTNEW mitre_weight source_weight\n| eval mitre_weight = if(isnotnull(mitre_weight),mitre_weight,\"0\")\n| eval source_weight = if(isnotnull(source_weight),source_weight,\"0\")\n| streamstats sum(mitre_weight) as mitre_weight_total sum(source_weight) as source_weight_total values(*) as * by risk_object risk_object_type\n| eval mitre_tactic_id_count = mitre_tactic_id_count - mitre_weight_total\n| eval source_count = source_count - source_weight_total\n| eval \"annotations.mitre_attack\" = 'annotations.mitre_attack.mitre_technique_id'\n| where mitre_tactic_id_count >= 3 and source_count >= 4\n

This is for tuning Risk Incident Rules that don't rely on an accretive score to alert but still need a lever to tweak noisy sources. In our example lookup, we would include a value between 0 and 1 for each noisy source; i.e., 0.75 to only count a rule as \u00bc of its standard weight, 0.5 to only count it as \u00bd, etc.

Authors

@7thdrxn - Haylee Mills"},{"location":"searches/risk_incident_rule_ideas/","title":"Risk Incident Rule Ideas","text":"

Here are some alternative ways to alert from the risk index that you may find useful. Later searches will be relying on the base search found in the \"Capped Risk Score by Source\" approach.

Name Description Capped Risk Score by Source From the limit score stacking approach Events from Multiple Sourcetypes For events from multiple sourcetypes Events from Multiple Sourcetypes with Meta-Scoring Similar, but with more control over what alerts and how MITRE Counts with Meta-Scoring Meta-scoring approach to MITRE alert"},{"location":"searches/risk_incident_rule_ideas/#capped-risk-score-by-source","title":"Capped Risk Score by Source","text":"

Utilizes the limit score stacking approach to cap the score contribution from a single source at double its highest-scoring risk event.

| tstats `summariesonly`\ncount as count\ncount(All_Risk.calculated_risk_score) as risk_event_count,\nsum(All_Risk.calculated_risk_score) as summed_risk_score,\nmax(All_Risk.calculated_risk_score) as single_risk_score,\nvalues(All_Risk.risk_message) as risk_message,\nvalues(All_Risk.annotations.mitre_attack.mitre_tactic_id) as annotations.mitre_attack.mitre_tactic_id,\ndc(All_Risk.annotations.mitre_attack.mitre_tactic_id) as mitre_tactic_id_count,\nvalues(All_Risk.annotations.mitre_attack.mitre_technique_id) as annotations.mitre_attack.mitre_technique_id,\ndc(All_Risk.annotations.mitre_attack.mitre_technique_id) as mitre_technique_id_count,\nvalues(All_Risk.tag) as tag,\nvalues(All_Risk.threat_object) as threat_object,\nvalues(All_Risk.threat_object_type) as threat_object_type,\ndc(source) as source_count,\nmax(_time) as _time\nfrom datamodel=Risk.All_Risk by All_Risk.risk_object,All_Risk.risk_object_type, source | `drop_dm_object_name(\"All_Risk\")` | eval \"annotations.mitre_attack\"='annotations.mitre_attack.mitre_technique_id' | `get_risk_severity(risk_score)`\n| eval capped_risk_score=if(summed_risk_score < single_risk_score*2, summed_risk_score, single_risk_score*2)\n| stats values(*) as * sum(capped_risk_score) as capped_risk_score sum(summed_risk_score) as summed_risk_score dc(annotations.mitre_attack.mitre_tactic_id) as mitre_tactic_id_count dc(annotations.mitre_attack.mitre_technique_id) as mitre_technique_id_count sum(risk_event_count) as risk_event_count dc(source) as source_count\n BY risk_object risk_object_type\n| fields - single_risk_score count\n| eval risk_score = summed_risk_score\n| where capped_risk_score > 100\n
"},{"location":"searches/risk_incident_rule_ideas/#events-from-multiple-sourcetypes","title":"Events from Multiple Sourcetypes","text":"

This is a very effective approach that looks for when a single risk object has events from multiple security data sources. With a well-defined naming scheme for your searches, you may not need to utilize a saved search to retain this information in your risk rules. Otherwise, you could run something like this somewhat infrequently as a saved search:

| rest splunk_server=local count=0 /services/saved/searches\n| search action.correlationsearch.enabled=1\n| rename dispatch.earliest_time as early_time qualifiedSearch as search_spl\n| table title search_spl\n| eval data_sourcetype = case(\nmatch(search_spl,\".*\\`(sysmon|wmi|powershell|wineventlog_(security|system))\\`.*\") OR match(search_spl,\".*datamodel(:|=|\\s)(|\\\")Endpoint.*\") OR match(title,\"Endpoint.*\") OR match(search_spl,\".*sourcetype\\=(|\\\")(xmlwineventlog:microsoft-windows-sysmon/operational).*\"),\"Endpoint\",\nmatch(search_spl,\".*datamodel(:|=|\\s)(|\\\")Endpoint.*\") OR match(title,\"Threat.*\") OR match(search_spl,\".*sourcetype\\=(|\\\")(wdtap:alerts).*\"),\"Malware\",\nmatch(search_spl,\".*\\`(okta|gws_reports_login)\\`.*\") OR match(search_spl,\".*datamodel(:|=|\\s)(|\\\")Authentication.*\"),\"Authentication\",\nmatch(search_spl,\".*datamodel(:|=|\\s)(|\\\")Change.*\"),\"Change\",\nmatch(search_spl,\".*\\`(stream_http)\\`.*\") OR match(search_spl,\".*datamodel(:|=|\\s)(|\\\")Web.*\"),\"Web\",\nmatch(search_spl,\".*\\`(o365_management_activity|gsuite_gmail)\\`.*\") OR match(search_spl,\".*datamodel(:|=|\\s)(|\\\")Email.*\"),\"Email\",\nmatch(search_spl,\".*\\`(gsuite_gdrive)\\`.*\") OR match(search_spl,\".*datamodel(:|=|\\s)(|\\\")Data Loss.*\"),\"DLP\",\nmatch(search_spl,\".*datamodel(:|=|\\s)(|\\\")Alerts.*\"),\"Alerts\",\nmatch(search_spl,\".*datamodel(:|=|\\s)(|\\\")Intrusion.*\"),\"IDS\",\nmatch(search_spl,\".*\\`(cisco_networks)\\`.*\") OR match(search_spl,\".*datamodel(:|=|\\s)(|\\\")Network.*\"),\"Network\",\nmatch(search_spl,\".*\\`(kubernetes_azure|azuread|cloudtrail|aws_securityhub_finding|aws_cloudwatchlogs_eks|azure_audit|google_gcp_pubsub_message|aws_s3_accesslogs)\\`.*\"),\"Cloud\",\ntrue(),\"Unknown\")\n| fields - search_spl\n| outputlookup RR_sources.csv\n

This looks at the SPL of each search to determine which sourcetype to group it under. Please modify this search as you see fit for your environment. This allows you to create a Risk Incident Rule like this:

...\n| eval capped_risk_score=if(summed_risk_score < single_risk_score*2, summed_risk_score, single_risk_score*2)\n| lookup RR_sources.csv title AS source OUTPUTNEW data_sourcetype\n| stats values(*) as * sum(capped_risk_score) as capped_risk_score sum(summed_risk_score) as summed_risk_score dc(annotations.mitre_attack.mitre_tactic_id) as mitre_tactic_id_count dc(annotations.mitre_attack.mitre_technique_id) as mitre_technique_id_count sum(risk_event_count) as risk_event_count dc(source) as source_count values(data_sourcetype) as sourcetypes dc(data_sourcetype) as sourcetype_count\n BY risk_object risk_object_type\n| fields - single_risk_score count\n| eval risk_score = summed_risk_score\n| where sourcetype_count > 1\n
"},{"location":"searches/risk_incident_rule_ideas/#events-from-multiple-sourcetypes-with-meta-scoring","title":"Events from Multiple Sourcetypes with Meta-Scoring","text":"

Sometimes, you may need more ways of distinguishing which events should have more relevance in an alert beyond a simple count or distinct count. The gist of this strategy is to declare a new variable with a value of 0, then utilize multiple eval statements to add to this value based on attributes about the event. Remember that a case() statement will only apply once and will apply the first match it finds, so you want to ensure your most important matches hit first. Don't be afraid to stack multiple eval statements, and you'll have to tweak what the threshold is depending on the values you chose.

...\n| eval capped_risk_score=if(summed_risk_score < single_risk_score*2, summed_risk_score, single_risk_score*2)\n| lookup RR_sources.csv title AS source OUTPUTNEW data_sourcetype\n| rex field=risk_message \"Severity\\=(?<severity>\\w*)\\s\"\n| stats values(*) as * sum(capped_risk_score) as capped_risk_score sum(summed_risk_score) as summed_risk_score dc(annotations.mitre_attack.mitre_tactic_id) as mitre_tactic_id_count dc(annotations.mitre_attack.mitre_technique_id) as mitre_technique_id_count sum(risk_event_count) as risk_event_count dc(source) as source_count values(data_sourcetype) as sourcetypes dc(data_sourcetype) as sourcetype_count\n BY risk_object risk_object_type\n| fields - single_risk_score count\n| eval risk_score = summed_risk_score\n| eval sourcetype_mod = 0\n| eval sourcetype_mod = if(match(sourcetypes,\"Endpoint\"),sourcetype_mod+20,sourcetype_mod)\n| eval sourcetype_mod = if(match(sourcetypes,\"Malware\"),sourcetype_mod+20,sourcetype_mod)\n| eval sourcetype_mod = if(match(sourcetypes,\"Web\"),sourcetype_mod+10,sourcetype_mod)\n| eval sourcetype_mod = if(match(sourcetypes,\"DLP\"),sourcetype_mod+10,sourcetype_mod)\n| eval sourcetype_mod = case(\nmatch(sourcetypes,\"IDS\") AND match(severity,\"(high|critical)\"),sourcetype_mod+20,\nmatch(sourcetypes,\"IDS\"),sourcetype_mod+10,\ntrue(),sourcetype_mod)\n| where sourcetype_mod > 39\n

Because sourcetypes is now a multi-valued field by risk_object, I had to create multiple eval checks so that the operation would apply more than once if events from multiple sourcetypes were found. You can also see how I pulled out severity from the risk_message earlier on with rex so I could make a distinction between higher and lower severity IDS events in the meta-scoring. This assumes only my IDS events have that particular formatting to indicate severity; you may have to use more logic to distinguish different sourcetypes and severities, as this is just an example.

For the scoring threshold of 40, I chose this because of how I've structured the score additions. I will get an alert if a risk object has events from:

This may remove a lot of noise from combinations which aren't as likely to be malicious. It is still worthwhile to occasionally review what doesn't pass the threshold to ensure you've crafted a method that surfaces high-fidelity alerts, and that the events it drops are caught by other Risk Incident Rules.

"},{"location":"searches/risk_incident_rule_ideas/#mitre-counts-with-meta-scoring","title":"MITRE Counts with Meta-Scoring","text":"

The meta-scoring method is useful for getting more value from your MITRE count thresholding rules.

...\n| eval capped_risk_score=if(summed_risk_score < single_risk_score*2, summed_risk_score, single_risk_score*2)\n| eval mitre_weight = case(\ncapped_risk_score>70,\"0\",\ncapped_risk_score>40,\"0.5\",\ncapped_risk_score>5,\"0.75\",\ntrue(),\"1\")\n| eval mitre_weight_tactic = mitre_weight * mitre_tactic_id_count\n| eval mitre_weight_technique = mitre_weight * mitre_technique_id_count\n| eventstats sum(mitre_weight_tactic) as mitre_weight_tactic_total sum(mitre_weight_technique) as mitre_weight_technique_total by risk_object risk_object_type source\n| eval mitre_tactic_id_count = mitre_tactic_id_count - mitre_weight_tactic_total\n| eval mitre_technique_id_count = mitre_technique_id_count - mitre_weight_technique_total\n| stats values(*) as * sum(capped_risk_score) as capped_risk_score sum(summed_risk_score) as summed_risk_score sum(mitre_tactic_id_count) as mitre_tactic_id_count sum(mitre_technique_id_count) as mitre_technique_id_count sum(risk_event_count) as risk_event_count dc(source) as source_count\n BY risk_object risk_object_type\n| fields - mitre_weight* single_risk_score count\n| eval risk_score = summed_risk_score\n| eval mitre_mod = 0\n| eval mitre_mod = case(\nmitre_tactic_id_count > 3,mitre_mod+20,\nmitre_tactic_id_count < 4 AND mitre_tactic_id_count > 1,mitre_mod+10,\ntrue(),mitre_mod)\n| eval mitre_mod = case(\nmitre_technique_id_count > 4,mitre_mod+20,\nmitre_technique_id_count < 5 AND mitre_technique_id_count > 2,mitre_mod+10,\ntrue(),mitre_mod)\n| eval mitre_mod = case(\nmvcount(source) > 4,mitre_mod+20,\nmvcount(source) < 5 AND mvcount(source) > 1,mitre_mod+10,\ntrue(),mitre_mod)\n| eval mitre_mod = case(\nmatch(sourcetypes,\"(Malware|Endpoint)\"),mitre_mod+20,\nmatch(sourcetypes,\"IDS\"),mitre_mod+10,\ntrue(),mitre_mod)\n| eval mitre_mod = case(\nmatch(user_category,\"(privileged|technical|executive|watchlist)\"),mitre_mod+20,\nmatch(src_category,\"(Server|DMZ)\"),mitre_mod+10,\ntrue(),mitre_mod)\n| where mitre_mod > 49\n

Near the beginning, we juggle some logic to count lower-scoring events differently: when we aggregate on the number of MITRE Tactics/Techniques involved, we might want events with a higher risk score to count more heavily toward the overall total. This is especially true when aggregating events over longer periods, like the out-of-the-box 7 day rule, or something going as far back as 30 or 90 days.

In the meta-scoring itself, we have all sorts of ways to distinguish what might be more relevant to us. We now incorporate:

Which gives us more control over the types of events that might bubble up in our alerts.

Authors

@7thdrxn - Haylee Mills"},{"location":"searches/risk_info_event_detail/","title":"Risk info field","text":""},{"location":"searches/risk_info_event_detail/#create-macro-for-risk_info-field","title":"Create macro for risk_info field","text":"

You may want to keep risk_message relatively brief as a sort of high-level overview of a risk event, then utilize a new field to store details. We can create a macro called risk_info(1) to create a JSON-formatted field with this SPL:

Macro definition
eval risk_info = \"{\\\"risk_info\\\":{\"\n| foreach $fields$\n    [\n    | eval <<FIELD>>=if(isnull(<<FIELD>>), \"unknown\", <<FIELD>>)\n    ```Preparing json array if FIELD is multivalue, otherwise simple json value```\n    | eval json=if(mvcount(<<FIELD>>)>1,mv_to_json_array(mvdedup(<<FIELD>>)),\"\\\"\".<<FIELD>>.\"\\\"\") \n    | eval risk_info=risk_info.\"\\\"\".\"<<FIELD>>\".\"\\\": \".json.\",\"\n    ]\n| rex mode=sed field=risk_info \"s/,$/}}/\"\n| fields - json\n

Many thanks to RedTigR on the RBA Slack for providing the multi-value friendly version of this macro.

Utilize the macro like risk_info(\"field1,field2,field3,etc\") to get a JSON-formatted field containing any of the fields we like.
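For example, here is a minimal sketch of a risk rule calling the macro; the index, detection logic, and field names are hypothetical placeholders for your own rule:

index=wineventlog EventCode=4688 ...\n| eval risk_message=\"Process launched by \".user.\" on \".dest\n| `risk_info(\"user,dest,process_name,parent_process_name\")`\n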

And then, if we wanted to break this out in a dashboard, we could use spath to extract each field into its own column, or a rex command like this:

Example

| rex field=risk_info max_match=100 \"(?<risk_info2>\\\"\\w+\\\":\\s*((?:(?<!\\\\\\)\\\"[^\\\"]*\\\"|\\[[^\\]]*\\]))(?=,|\\s*}))\"\n

This breaks out each field as a multi-value on its own line in the same column. It looks really pretty, and you can even use $click.value2$ to determine exactly which MV field was clicked and utilize different drilldowns per field, for example.

"},{"location":"searches/risk_info_event_detail/#extracting-existing-fields-from-risk-events-into-risk_info-field","title":"Extracting existing fields from risk events into risk_info field","text":"

Assumption

Your risk rules are outputting specific details in addition to the risk fields (e.g. risk_message, risk_object etc.)

The following search replaces the View the individual Risk Attributions drilldown within a risk incident rule. It allows us to dynamically bring in the output of each individual risk rule in a concise manner.

The aim of this is to minimize pivoting when performing the initial assessment of a risk incident while keeping the notable and risk_message field concise.

index=risk\n| search risk_object=$risk_object$\n| rename annotations.mitre_attack.mitre_tactic_id AS mitre_tactic_id, annotations.mitre_attack.mitre_tactic AS mitre_tactic\n| rex field=_raw max_match=0 \"(?<risk_info>[^\\=]+\\=\\\"([^\\\"]+\\\")+?)((, )|$)\"\n| eval risk_info=mvfilter(NOT match(risk_info, \"^(annotations)|(info_)|(savedsearch_description)|(risk_)|(orig_time)|(([0-9]+, )?search_name)\"))\n| table _time, source, risk_object, risk_score, risk_message, risk_info, risk_object_type, mitre_tactic_id, mitre_tactic\n| eval calculated_risk_score=risk_score\n| sort _time\n

Breaking down some decisions:

Authors

@7thdrxn - Haylee Mills @RedTigR @elusive-mesmer"},{"location":"searches/risk_notable_history/","title":"Risk Notable History","text":"

Tyler Younger from the RBA Slack contributed this handy method for including some useful history of risk notables for that risk object when it fires. I played with it a bit and created a version I might use in a dashboard for additional context. You should check with your analysts to see what would be most helpful for them.

"},{"location":"searches/risk_notable_history/#adding-risk-notable-history","title":"Adding Risk Notable History","text":"

You could add this subsearch to your Risk Incident Rules and add this field to Incident Review Settings so analysts see it when reviewing a notable event, or maybe have it as a panel in an investigation dashboard. I will leave it with the makeresults and tabled results as an example so you can play around until it looks right.

| makeresults\n| eval risk_object=\"tyounger\"\n| join type=left max=0 risk_object\n    [| search earliest=-31d latest=now `notable`\n    ``` ### This may or may not make sense in your enviornment, the idea was to tidy up the search names, adjust as needed\n    ```\n   | replace \"* - Rule\" WITH * IN search_name\n   | replace \"Audit - UC - *\" WITH * IN search_name\n   | replace \"Threat - UC - *\" WITH * IN search_name\n   | replace \"Access - UC - *\" WITH * IN search_name\n   | replace \"Network - UC -*\" WITH * IN search_name\n   | replace \"Identity - UC -*\" WITH * IN search_name\n   | replace \"Endpoint - UC -*\" WITH * IN search_name\n   ``` ### ```\n    | eventstats count as history_count dc(search_name) as search_name_count values(search_name) as search_names first(_time) as last_time by risk_object,search_name\n    | eval days_ago=round((abs(last_time-now())/86400),2)\n    | convert ctime(first_time) as first_time\n    | convert ctime(last_time) as last_time\n    | eval history_count=if(isnull(history_count),\" new\", history_count)\n    | eval search_names=if(isnull(search_names),\" search null\",search_names)\n    | eval last_time=if(isnull(last_time),\" last time null\",last_time)\n    | eval days_ago=if(isnull(days_ago),\" days ago null\",days_ago)\n    | fillnull comment value=\"N/A\"\n    | table risk_object sources rule_name history_count risk_object first_time last_time search_name_count search_names days_ago status_label comment\n        ]\n    ``` ### Format history fields ### ```\n| eval notable_risk_history=\"(\".risk_object. \") previously alerted \".history_count.\" times with the following notable(s) [\".search_names.\"]\".\" with status label(s) (\".status_label.\") most recently on [\".last_time.\"] \".days_ago.\" days ago. comment(s) comments: (\".comment.\")\"\n| eval notable_risk_history=if(isnull(notable_risk_history),\"Risk object has not generated any notable events\",notable_risk_history)\n| eval search_names=if(isnull(search_names),\"N/A\",search_names)\n| makemv delim=\"comments: \" notable_risk_history\n| eval notable_risk_history=mvjoin(notable_risk_history, \"\")\n| table risk_object notable_risk_history\n

You should be able to use the join and its logic all the way up to the final table command, and perhaps turn it into a macro that can be added to the end of your Risk Incident Rules to provide that context.
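As a rough sketch (assuming you wrap the join logic above in a macro; risk_notable_history is a made-up name), the tail of a Risk Incident Rule could then be:

...\n| `risk_notable_history`\n| table risk_object risk_object_type risk_score source notable_risk_history\n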

"},{"location":"searches/risk_notable_history/#adding-risk-and-traditional-notable-history","title":"Adding Risk and Traditional Notable History","text":"

You might want to check other fields in regular notables to see if this risk object appears there as well. In this example, I am using coalesce to check src, dest, and user and bring those in on the join. I'm also playing with the spacing and formatting of the final results in case that gives you some ideas:

| makeresults\n| eval risk_object=\"gravity\"\n| join type=left max=0 risk_object\n    [| search earliest=-5000d latest=now `notable`\n    ``` ### This may or may not make sense in your enviornment, the idea was to tidy up the search names, adjust as needed\n    ```\n   | replace \"* - Rule\" WITH * IN search_name\n   | replace \"* - Combined\" WITH * IN search_name\n   | replace \"Audit - *\" WITH * IN search_name\n   | replace \"Threat - *\" WITH * IN search_name\n   | replace \"Access - *\" WITH * IN search_name\n   | replace \"Network -*\" WITH * IN search_name\n   | replace \"Identity -*\" WITH * IN search_name\n   | replace \"Endpoint -*\" WITH * IN search_name\n   ``` ### ```\n   | eval risk_object=coalesce(risk_object,src)\n   | eval risk_object=coalesce(risk_object,dest)\n   | eval risk_object=coalesce(risk_object,user)\n   | eval comment = \"|||---- \".comment\n    | eventstats count as history_count dc(search_name) as search_name_count values(search_name) as search_names latest(_time) as last_time latest(status_label) as status_label values(comment) as comments by risk_object,search_name\n    | dedup risk_object,search_name\n    | convert ctime(last_time) as last_time\n    | fillnull history_count search_names last_time value=\"N/A\"\n    | fillnull comments value=\"---- no comments\"\n    | eval comments = mvjoin(comments,\"\")\n    | table risk_object history_count risk_object last_time time search_name_count search_name status_label comments\n        ]\n    ``` ### Format history fields ### ```\n| eval notables = last_time.\" - \".history_count.\" - \".search_name.\" :: \".upper(status_label).\"|||\".comments\n| stats sum(history_count) as history_count values(notables) as notables by risk_object\n| eval notables = mvjoin(notables,\"||| |||-- \")\n| eval notable_history=\"(+. \".upper(risk_object). \" .+) previously alerted \".history_count.\" times with the following notable(s):||| |||-- \".notables\n| eval notable_history=split(notable_history,\"|||\")\n| fields - notables history_count\n| eval notable_history=if(isnull(notable_history),\"Risk object has not generated any notable events\",notable_history)\n

Either way, letting your analysts know what was seen before is helpful context when they begin investigating.

Authors

Tyler Younger"},{"location":"searches/this_then_that_alerts/","title":"Detect Chain of Behaviors","text":"

To make a risk rule that looks for two rules firing close together, we can use sort followed by the autoregress command within a certain duration:

index=risk sourcetype=stash search_name=\"Search1\" OR search_name=\"Search2\"\n| sort by user _time | dedup _time search_name user\n| delta _time as gap\n| autoregress search_name as prev_search\n| autoregress user as prev_user\n| where user = prev_user\n| table _time gap src user prev_user search_name prev_search\n| where ((search_name=\"Search1\" OR search_name=\"Search2\") AND (prev_search=\"Search1\" OR prev_search=\"Search2\") AND gap<600)\n

The benefit of not doing this in a single search is that you still have the individual risk events as useful observations; you can then add more risk when they are observed together, or tweak risk down for noisy events without \"allowlisting\" them altogether.

Ryan Moss from Verizon also spoke about using Analytic Stories with RBA, which is another excellent method for low-volume, high-fidelity chained detections.

Authors

@7thdrxn - Haylee Mills"},{"location":"searches/threat_object_prevalence/","title":"Threat Object Prevalence","text":"

THIS IS A WIP PAGE, THIS IS COMING SOON!

"},{"location":"searches/threat_object_types/","title":"Additional Threat Object Types","text":"

Increasing the number of threat object types you track in Risk Rules can be really helpful for tuning noisy alerts, threat hunting on anomalous combinations, and automating SOAR enrichment to unique threat object types. Haylee and Stuart's Threat Object Fun dashboards can be helpful for all three.

"},{"location":"searches/threat_object_types/#threat-object-types","title":"Threat Object Types","text":"

Some potential threat_object_types to keep in mind when creating risk rules:

source | threat_object_type
email, endpoint, network, proxy | ip
email, endpoint, proxy | src_user
email, endpoint, proxy | user
endpoint, email | file_hash
endpoint, email | file_name
endpoint, proxy | domain
endpoint, proxy | url
email | email_subject
email | email_body
endpoint | command
endpoint | parent_process
endpoint | parent_process_name
endpoint | process
endpoint | process_file_name
endpoint | process_hash
endpoint | process_name
endpoint | registry_path
endpoint | registry_value_name
endpoint | registry_value_text
endpoint | service
endpoint | service_dll_file_hash
endpoint | service_file_hash
proxy | certificate_common_name
proxy | certificate_organization
proxy | certificate_serial
proxy | certificate_unit
proxy | http_referrer
proxy | http_user_agent
"},{"location":"searches/threat_object_types/#other-types","title":"Other Types","text":"

You could also use open-source server handshake hashing algorithms like JA3, JA4, JARM, or CYU to identify anomalous server handshakes and potentially include:

"}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"RBA all day","text":""},{"location":"#welcome-to-the-wonderful-world-of-risk-based-alerting","title":"Welcome to the wonderful world of Risk-Based Alerting!","text":"

RBA is Splunk's method to aggregate low-fidelity security events as interesting observations tagged with security metadata to create high-fidelity, low-volume alerts.

"},{"location":"#searches","title":"Searches","text":"

Useful SPL from the RBA community for working with risk events.

"},{"location":"#dashboards","title":"Dashboards","text":"

Simple XML or JSON for Splunk dashboards to streamline risk analysis.

"},{"location":"#risk-rules","title":"Risk Rules","text":"

Splunk's Threat Research Team has an incredible library of over 1000 detections in the Splunk's Enterprise Security Content Updates library. You can use Marcus Ferrera and Drew Church's awesome ATT&CK Detections Collector to pop out a handy HTML file of relevant ESCU detections for you to align with MITRE ATT&CK.

"},{"location":"#the-rba-community","title":"The RBA Community","text":"The RBA Community
    Join the RBA Community Today!\n

The RBA Community is a group of professionals dedicated to advancing the field of risk-based alerting (RBA) and Splunk Enterprise Security (ES). Our mission is to provide a forum for sharing knowledge, best practices, and the latest developments in RBA and ES, and to help professionals enhance their understanding and skills in these areas.

Whether you\u2019re new to RBA and ES or a seasoned pro, The RBA Community has something for everyone. We invite you to join us on this journey to enhance your understanding and expertise in RBA and ES \u2013 don\u2019t miss out on this opportunity to learn from the best and connect with other professionals in the field.

Learn more

"},{"location":"#contributing","title":"Contributing","text":"

Want to contribute? See our contributing guidelines.

"},{"location":"#discussionfaq","title":"Discussion/FAQ","text":"

See discussions and frequently asked questions on our GitHub Discussions board.

Visit Discussion Board

"},{"location":"contributing/contributing-guidelines/","title":"Contributing Guidelines","text":"

All are welcome to contribute!

A GitHub account is required to create a pull request to submit new content. If you do not want to submit changes, you may also consider the following:

"},{"location":"contributing/contributing-guidelines/#how-to-contribute","title":"How to Contribute","text":"

This repository uses MkDocs with the Material for MkDocs theme.

If you know the markdown language then using this style of documentation will be a breeze. For a full list of capabilities see MkDocs's website.

"},{"location":"contributing/contributing-guidelines/#fork-the-rba-github","title":"Fork the RBA GitHub","text":""},{"location":"contributing/contributing-guidelines/#create-a-local-environment-for-testing","title":"Create a local environment for testing","text":"

Testing locally is a great way to ensure your changes will work with what currently exists.

The easiest way to get started is by using a python virtual environment. For simplicity, pipenv will be used for the following.

  1. Install python and pipenv on your local workstation -> Pipenv docs.
  2. Once installed, navigate to your forked repository and run the following to install the latest requirements.

    # your forked rba directory\n# ./rba\npipenv install -r docs/requirements.txt\n
  3. Now you can run pipenv run mkdocs serve, which starts a web server you can reach by opening your browser and navigating to http://localhost:8000 (see the example below).
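Putting steps 2 and 3 together, the commands look like this:

    # your forked rba directory\n# ./rba\npipenv install -r docs/requirements.txt\npipenv run mkdocs serve\n# then browse to http://localhost:8000\n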

"},{"location":"contributing/contributors/","title":"Thanks to our GitHub Contributors!","text":"

Daren Cook

Dean Luxton

Donald Murchison

elusive-mesmer

gabs - Gabriel Vasseur

7thdrxn - Haylee Mills

hettervik

matt-snyder-stuff

RedTigR

nterl0k - Steven Dick

Tyler Younger

ZachTheSplunker

"},{"location":"dashboards/","title":"Dashboards","text":""},{"location":"dashboards/#attck-matrix-risk-business-view","title":"ATT&CK Matrix Risk (Business View)","text":"

attack_matrix_risk.xml

Portrays risk in your environment through the lens of RBA and the MITRE ATT&CK framework.

"},{"location":"dashboards/#attribution-analytics-tuning-view","title":"Attribution Analytics (Tuning View)","text":"

audit_attribution_analytics.xml

Helpful for tuning new detections.

"},{"location":"dashboards/#rba-data-source-review","title":"RBA Data Source Review","text":"

rba_data_source_overview.xml

This helps you better understand what data sources you are using in RBA and see gaps in your coverage.

"},{"location":"dashboards/#risk-attributions-investigative-view","title":"Risk Attributions (Investigative View)","text":"

risk_attributions.xml

Risk Attributions.

"},{"location":"dashboards/#risk-investigation","title":"Risk Investigation","text":"

risk_investigation.xml

Risk Investigations.

"},{"location":"dashboards/#risk-notable-analysis","title":"Risk Notable Analysis","text":"

risk_notable_analysis_dashboard.xml

Risk Notable Analysis.

"},{"location":"dashboards/attack_matrix_risk/","title":"ATT&CK Matrix Risk (Business View)","text":"

View on GitHub

"},{"location":"dashboards/audit_attribution_analytics/","title":"Attribution Analytics (Tuning View)","text":"

Helpful for tuning new detections.

View on GitHub

"},{"location":"dashboards/rba_data_source_overview/","title":"RBA Data Source Review","text":"

This helps you better understand what data sources you are using in RBA and see gaps in your coverage.

View on GitHub

"},{"location":"dashboards/risk_attributions/","title":"Risk Attributions (Investigative View)","text":"Prerequisites

View on GitHub

"},{"location":"dashboards/risk_investigation/","title":"Risk Investigation","text":"

View on GitHub

"},{"location":"dashboards/risk_notable_analysis_dashboard/","title":"Risk Notable Analysis","text":"

View on GitHub

"},{"location":"searches/","title":"Helpful Searches","text":"

These are some SPL techniques to get the most out of RBA by adding new features to your implementation or handling a common issue.

"},{"location":"searches/#chaining-behaviors","title":"Chaining behaviors","text":"

This is some simple SPL to organize risk events by risk_object and create risk rules which look for a specific sequence of events or chain of behaviors.

"},{"location":"searches/#deduplicate-notables","title":"Deduplicate Notables","text":"

This feature will drastically reduce the number of duplicate Risk Notables by removing alerts where the events are basically the same, have already been reviewed, or have already been covered by another Risk Incident Rule.

"},{"location":"searches/#dynamic-drilldowns","title":"Dynamic Drilldowns","text":"

If you're utilizing a custom risk notable investigation dashboard, it can be incredibly helpful for each risk event source to have its own drilldown. Thanks to Donald Murchison from the RBA Slack for contributing this method, which is explained in more detail in this blog post.

"},{"location":"searches/#essential-rba-searches","title":"Essential RBA searches","text":"

This is all of the handy SPL contained in the Essential Guide to Risk Based Alerting; it includes searches for finding noise, reducing noisy notables, and tuning risk rules.

"},{"location":"searches/#integrate-ai-with-rir","title":"Integrate A&I with RiR","text":"

Adding this SPL into your Risk Incident Rules normalizes your risk object to a unique key in the Asset & Identity Framework; the primary advantage of this is throttling to prevent a Risk Incident Rule from firing on both a system and user that represent the same risk events.

"},{"location":"searches/#limit-score-stacking","title":"Limit score stacking","text":"

This SPL for your Risk Score Risk Incident Rules ensures that a single correlation search can only contribute risk a total of three times (or whatever you would like). This is handy for reducing rapidly stacking risk which is common early in the RBA maturation process.

"},{"location":"searches/#naming-systemunknowncomputer-accounts","title":"Naming SYSTEM/Unknown/Computer Accounts","text":"

Computer accounts are used by Active Directory to authenticate machines to the domain, and RBA detections may find behavior in a log where the user account is simply listed as \"SYSTEM\" or even left blank because it is the computer account. This method renames the account to distinguish it as host$ from the noise of \"SYSTEM\" or \"unknown\". It can also be tied into the Asset & Identity framework and contribute to detections on user risk objects.

"},{"location":"searches/#risk-incident-rule-ideas","title":"Risk Incident Rule Ideas","text":"

Alternative ways to alert from the risk index that you may find useful. Later searches rely on the base search found in the \"Capped Risk Score by Source\" approach.

"},{"location":"searches/#risk-info-field","title":"Risk info field","text":"

This is one of my favorite additions to RBA; adding this macro to your risk rules creates a field called risk_info (which you can add to your Risk Datamodel) containing all of the useful fields your analyst might need for analysis. It's in JSON format, which allows easy manipulation in SPL and makes excellent material for dashboards and unique drilldowns per field.

ADDITIONALLY, this frees risk_message to be used as a short and sweet summary rather than the place where you store all of the event detail. This lets Risk Notables give a high-level overview of events via risk_message, which is also handy to throttle or deduplicate by.

"},{"location":"searches/#risk-notable-history","title":"Risk Notable History","text":"

Tyler Younger from the RBA Slack contributed this handy method for including some useful history of risk notables for that risk object when it fires. I played with it a bit and created a version I might use in a dashboard for additional context. You should check with your analysts to see what would be most helpful for them.

"},{"location":"searches/#threat-object-prevalence","title":"Threat Object Prevalence","text":"

One of the great features in RBA is knowing how often something has occurred in an environment; generally, the more rare or anomalous something is, the more likely it is to be malicious. The threat object drilldown in the sample Risk Investigation Dashboard is designed to offer an analyst that context, but with a simple saved search, we could use that context in our Risk Notables as well.

"},{"location":"searches/#threat-object-types","title":"Threat Object Types","text":"

Increasing the number of threat object types you track in Risk Rules can be really helpful for tuning noisy alerts, threat hunting on anomalous combinations, and automating SOAR enrichment to unique threat object types. Haylee and Stuart's Threat Object Fun dashboards can be helpful for all three.

"},{"location":"searches/asset_and_identity_rir_logic/","title":"Integrate Asset & Identity Information into Risk Incident Rules","text":"

Note

This feature has been added to ES 7.1, utilizing the normalized_risk_object field. It is also used for throttling in the default Risk Incident Rules, which prevents notables from firing repeatedly on the same identity with different users, or the same asset with different hosts, if A&I is configured.

This was a comment on this excellent Splunk Idea to lower() or upper() the risk_object in Risk Incident Rules, which goes one step further by integrating A&I information:

| tstats `summariesonly` min(_time) as firstTime max(_time) as lastTime sum(All_Risk.calculated_risk_score) as risk_score, count(All_Risk.calculated_risk_score) as risk_event_count,values(All_Risk.annotations.mitre_attack.mitre_tactic_id) as annotations.mitre_attack.mitre_tactic_id, values(All_Risk.annotations.mitre_attack.mitre_technique_id) as annotations.mitre_attack.mitre_technique_id, values(All_Risk.tag) as tag, values(source) as source from datamodel=Risk.All_Risk by All_Risk.risk_object,All_Risk.risk_object_type\n| `drop_dm_object_name(\"All_Risk\")`\n| eval risk_object=upper(risk_object)\n| lookup update=true identity_lookup_expanded identity as risk_object OUTPUTNEW _key as asset_identity_id,identity as asset_identity_value\n| lookup update=true asset_lookup_by_str asset as risk_object OUTPUTNEW _key as asset_identity_id,asset as asset_identity_value\n| eval asset_identity_risk_object=CASE(isnull(asset_identity_id),risk_object,true(),asset_identity_id)\n| stats min(firstTime) as firstTime max(lastTime) as lastTime sum(risk_score) as risk_score, sum(risk_event_count) as risk_event_count,values(annotations.mitre_attack.mitre_tactic_id) as annotations.mitre_attack.mitre_tactic_id, dc(annotations.mitre_attack.mitre_tactic_id) as mitre_tactic_id_count, values(annotations.mitre_attack.mitre_technique_id) as annotations.mitre_attack.mitre_technique_id, dc(annotations.mitre_attack.mitre_technique_id) as mitre_technique_id_count, values(tag) as tag, values(source) as source, dc(source) as source_count values(asset_identity_value) as asset_identity_value values(risk_object) as risk_object dc(risk_object) as risk_object_count by asset_identity_risk_object,risk_object_type\n| eval \"annotations.mitre_attack\"='annotations.mitre_attack.mitre_technique_id', risk_threshold=100\n| eval user=case(risk_object_type=\"user\",risk_object,true(),user),src=case(risk_object_type=\"system\",risk_object,true(),src)\n| where risk_score >= $risk_threshold$\n| `get_risk_severity(risk_score)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n

Note

As they mention in the comment -- the one \"catch\" is you'll need to change your throttle object from \"risk_object\" to \"asset_identity_risk_object\" -- but this is great for preventing duplicate notables on the same basic user / system combination.

"},{"location":"searches/asset_and_identity_rir_logic/#extra-credit","title":"Extra Credit","text":"

Adding the above logic will increase the accuracy of risk-based alerting; however, pivoting via the built-in drilldown will still be limited. The following changes will allow analysts to pivot directly to all Risk alerts detected by the associated RIR.

Create a macro called get_risk_asset_ident(2) with the following.

Macro Definition
eval risk_in=\"$risk_object_in$\",risk_type_in=\"$risk_object_type_in$\"\n| lookup update=true identity_lookup_expanded identity as risk_object OUTPUTNEW _key as assetid_ident_id,identity as assetid_ident_value \n| lookup update=true asset_lookup_by_str asset as risk_object OUTPUTNEW _key as assetid_asset_id,asset as assetid_asset_value \n| lookup update=true identity_lookup_expanded identity as risk_in OUTPUTNEW _key as assetid_in_ident,identity as assetid_in_ident_value \n| lookup update=true asset_lookup_by_str asset as risk_in OUTPUTNEW _key as assetid_in_asset,asset as assetid_in_asset_value \n| eval risk_object_out=CASE((risk_type_in=\"user\" AND assetid_ident_id = 'assetid_in_ident'),assetid_in_ident_value, (risk_type_in=\"system\" AND (assetid_asset_id = 'assetid_in_asset')),assetid_in_asset_value)\n| eval risk_in=upper(risk_in)\n| eval risk_object=upper(risk_object)\n| where isnotnull(risk_object_out) OR (risk_object = risk_in)\n
Arguments
risk_object_in,risk_object_type_in\n

The completed get_risk_asset_ident(2) macro.

Update macro permissions

Assign global scope (All apps) and allow all users read permission.

"},{"location":"searches/asset_and_identity_rir_logic/#update-existing-rir-drilldowns","title":"Update existing RiR drilldowns","text":"

Modify existing RIR drilldowns to include the macro, similar to the example below.

Example
| from datamodel:\"Risk.All_Risk\"  \n| `get_risk_asset_ident($risk_object|s$,$risk_object_type|s$)`\n| `get_correlations`  \n| rename annotations.mitre_attack.mitre_tactic_id as mitre_tactic_id, annotations.mitre_attack.mitre_tactic as mitre_tactic, annotations.mitre_attack.mitre_technique_id as mitre_technique_id, annotations.mitre_attack.mitre_technique as mitre_technique\n

Authors

@7thdrxn - Haylee Mills"},{"location":"searches/deduplicate_notables/","title":"Deduplicate Notable Events","text":"

Throttle Alerts Which Have Already Been Reviewed or Fired

Because Risk Notables look at a period of time, it is common for a risk_object to keep creating notables as additional (and even duplicate) events roll in, as well as when events fall off as the time period moves forward. Additionally, different Risk Incident Rules could be firing on the same risk_object with the same events but create new Risk Notables. It is difficult to get around this with throttling, so here are some methods to deduplicate notables.

"},{"location":"searches/deduplicate_notables/#navigation","title":"Navigation","text":"

Here are two methods for Deduplicating Notable Events:

Method | Skill Level | Pros | Cons
Method I | Intermediate | Deduplicates on front and back end | More setup time
Method II | Beginner | Easy to get started with | Only deduplicates on back end
"},{"location":"searches/deduplicate_notables/#method-i","title":"Method I","text":"

We'll use a Saved Search to store each Risk Notable's risk events and our analysts' status decisions as a cross-reference for new notables. Entirely new events will still fire, but repeated events from the same source will not. This also takes care of duplicate notables on the back end as events roll off of our search window.

KEEP IN MIND

Edits to the Incident Review - Main search may be replaced on updates to Enterprise Security, requiring you to make this minor edit again to regain this functionality. Ensure you have a step in your relevant process to check this search after an update.

"},{"location":"searches/deduplicate_notables/#1-create-a-truth-table","title":"1. Create a Truth Table","text":"

This method is described in Stuart McIntosh's 2019 .conf Talk (about 9m10s in), and we're going to create a similar lookup table. You can either download and import that file yourself, or create something like this in the Lookup Editor app:
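If you want something to start from before grabbing that file, here is a bare-bones sketch of the table; the columns match the lookup call in the saved search below, but the alert decisions shown are assumptions you should adapt to your own risk appetite:

previousNotable,previousStatus,matchRR,matchScore,alert\nF,malicious,F,F,yes\nT,malicious,T,T,no\nT,nonmalicious,T,T,no\nT,malicious,F,F,yes\nT,nonmalicious,F,F,yes\n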

Truth Table"},{"location":"searches/deduplicate_notables/#2-create-a-saved-search","title":"2. Create a Saved Search","text":"

Then we'll create a Saved Search which runs relatively frequently to store notable data and statuses.

  1. Navigate to Settings -> Searches, reports, and alerts.
  2. Select \"New Report\" in the top right.

Here is a sample to replicate

Sample Report With this SPL
index=notable eventtype=risk_notables\n| eval indexer_guid=replace(_bkt,\".*~(.+)\",\"\\1\"),event_hash=md5(_time._raw),event_id=indexer_guid.\"@@\".index.\"@@\".event_hash\n| fields _time event_hash event_id risk_object risk_score source orig_source\n| eval temp_time=time()+86400\n| lookup update=true event_time_field=temp_time incident_review_lookup rule_id AS event_id OUTPUT status as new_status\n| lookup update=true correlationsearches_lookup _key as source OUTPUTNEW default_status\n| eval status=case(isnotnull(new_status),new_status,isnotnull(status),status,1==1,default_status)\n| fields - temp_time,new_status,default_status\n| eval temp_status=if(isnull(status),-1,status)\n| lookup update=true reviewstatuses_lookup _key as temp_status OUTPUT status,label as status_label\n| fields - temp_status\n| eval sources = if(isnull(sources) , orig_source , sources )\n| table _time event_hash risk_object source status_label sources risk_score\n| reverse\n| streamstats current=f window=0 latest(event_hash) as previous_event_hash values(*) as previous_* by risk_object\n| eval previousNotable=if(isnotnull(previous_event_hash) , \"T\" , \"F\" )\n| fillnull value=\"unknown\" previous_event_hash previous_status_label previous_sources previous_risk_score\n| eval matchScore = if( risk_score != previous_risk_score , \"F\" , \"T\" )\n| eval previousStatus = case( match(previous_status_label, \"(Closed)\") , \"nonmalicious\" , match(previous_status_label, \"(New|Resolved)\") , \"malicious\" , true() , \"malicious\" )\n# (1)!\n| mvexpand sources\n| eval matchRR = if(sources != previous_sources , \"F\", \"T\")\n| stats  dc(sources) as dcSources dc(matchRR) as sourceCheckFlag values(*) as * by _time risk_object event_hash\n| eval matchRR = if(sourceCheckFlag > 1 , \"F\" , matchRR )\n| lookup RIR-Truth-Table.csv previousNotable previousStatus matchRR matchScore OUTPUT alert\n| table _time risk_object source risk_score event_hash dcSources alert previousNotable previousStatus matchRR matchScore\n| outputlookup RIR-Deduplicate.csv\n
  1. previousStatus uses the default ES status label \"Closed\".

In the SPL for previousStatus above, I used the default ES status label \"Closed\" as our only nonmalicious status. You'll have to make sure to use status labels which are relevant for your Incident Review settings. \"Malicious\" is used as the fallback status just in case, but you may want to differentiate \"New\" or unmatched statuses as something else for audit purposes; just make sure to create relevant matches in your truth table.

I recommend copying the alert column from malicious events

"},{"location":"searches/deduplicate_notables/#schedule-the-saved-search","title":"Schedule the Saved Search","text":"Create schedule
    Now find the search in this menu, click *Edit -> Edit Schedule* and try these settings:\n

I made this search pretty lean, so running it every three minutes should work pretty well; I also decided to only look back seven days as this lookup could balloon in size and cause bundle replication issues. You probably want to stagger your Risk Incident Rule cron schedules by one minute more than this one so they don't fire on the same risk_object with the same risk events.

"},{"location":"searches/deduplicate_notables/#3-deduplicate-notables","title":"3. Deduplicate notables","text":"

Our last step is to ensure that the Incident Review panel doesn't show us notables when we've found a match to our truth table which doesn't make sense to alert on. In the Searches, reports, alerts page, find the search Incident Review - Main and click Edit -> Edit Search.

By default it looks like this:

Default incident review search

And we're just inserting these lines after the base search:

Append to the base search
...\n| lookup RIR-Deduplicate.csv _time risk_object source OUTPUTNEW alert\n| search NOT alert=\"no\"\n

Updated incident review search"},{"location":"searches/deduplicate_notables/#congratulations","title":"Congratulations!","text":"

You should now have a significant reduction in duplicate notables

If something isn't working, make sure that the Saved Search is correctly outputting a lookup (which should have Global permissions), and ensure that if you run | inputlookup RIR-Deduplicate.csv you see all of the fields being returned as expected. If Incident Review is not working, something is wrong with the lookup or your edit to that search.
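For a quick sanity check, something like this should return rows with the fields the Incident Review edit expects (a sketch; the field list mirrors the final table of the Saved Search above):

| inputlookup RIR-Deduplicate.csv\n| table _time risk_object source risk_score alert\n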

"},{"location":"searches/deduplicate_notables/#extra-credit","title":"Extra Credit","text":"

If you utilize the Risk info field so you have a short and sweet risk_message, you can add another level of granularity to your truth table.

However, if you utilize risk_message for ALL of the event detail, it may be too granular and won't be as helpful for throttling.

This is especially useful if you are creating risk events from a data source with its own signatures like EDR, IDS, or DLP. Because the initial truth table only looks at score and correlation rule, if you have one correlation rule importing numerous signatures, you may want to alert when a new signature within that source fires.

"},{"location":"searches/deduplicate_notables/#create-a-calculated-field","title":"Create a calculated field","text":"

First, we'll create a new Calculated Field from risk_message in our Risk Datamodel called risk_hash with eval's md5() function, which bypasses the need to deal with special characters or other strangeness that might be in that field. If you haven't done this before - no worries - you just have to go to Settings -> Data Models -> Risk Data Model -> Edit -> Edit Acceleration and turn this off. Afterwards, you can Create New -> Eval Expression like this:

Creating risk_hash from md5(risk_message) in data model Don't forget to re-enable the acceleration
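If you would rather type it than squint at a screenshot, the eval expression for the risk_hash calculated field is simply:

md5(risk_message)\n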

You may have to rebuild the data model from the Settings -> Data Model menu for this field to appear in your events.

"},{"location":"searches/deduplicate_notables/#update-spl","title":"Update SPL","text":"

Then we have to add this field into our Risk Incident Rules by adding this line to their initial SPL and ensure this field is retained downstream:

Field to add to RiR
values(All_Risk.risk_hash) as risk_hashes\n

Now our Risk Notables will have a multi-value list of risk_message hashes. We must update our truth table to include a field called \"matchHashes\" - I've created a sample truth table here, but you must decide what is the proper risk appetite for your organization.

Next we'll edit the Saved Search we created above to include the new fields and logic:

Updated logic (changes highlighted)
...\n| eval sources = if(isnull(sources) , orig_source , sources )\n| table _time event_hash risk_object source status_label sources risk_score risk_hashes\n| reverse\n| streamstats current=f window=0 latest(event_hash) as previous_event_hash values(*) as previous_* by risk_object\n| eval previousNotable=if(isnotnull(previous_event_hash) , \"T\" , \"F\" )\n| fillnull value=\"unknown\" previous_event_hash previous_status_label previous_sources previous_risk_score previous_risk_hashes\n| eval matchScore = if( risk_score != previous_risk_score , \"F\" , \"T\" )\n| eval previousStatus = case( match(previous_status_label, \"(Closed)\") , \"nonmalicious\" , match(previous_status_label, \"(New|Resolved)\") , \"malicious\" , true() , \"malicious\" )\n| mvexpand risk_hashes\n| eval matchHashes= if(risk_hashes != previous_risk_hashes , \"F\" , \"T\" )\n| stats dc(matchHashes) as hashCheckFlag values(*) as * by _time risk_object event_hash\n| eval matchHashes = if(hashCheckFlag > 1 , \"F\" , matchHashes )\n| mvexpand sources\n| eval matchRR = if(sources != previous_sources , \"F\", \"T\")\n| stats  dc(sources) as dcSources dc(matchRR) as sourceCheckFlag values(*) as * by _time risk_object event_hash\n| eval matchRR = if(sourceCheckFlag > 1 , \"F\" , matchRR )\n| lookup RIR-Truth-Table.csv previousNotable previousStatus matchRR matchScore matchHashes OUTPUT alert\n| table _time risk_object source risk_score event_hash dcSources alert previousNotable previousStatus matchRR matchScore matchHashes\n| outputlookup RIR-Deduplicate.csv\n

Voila! We now ensure that our signature-based risk rule data sources will properly alert if there are interesting new events for that risk object.

"},{"location":"searches/deduplicate_notables/#method-ii","title":"Method II","text":"

This method is elegantly simple: it ensures notables don't re-fire as earlier events drop off the rolling search window of your Risk Incident Rules by only firing if the latest risk event is from the past 70 minutes.

Append to existing RIR
...\n| stats latest(_indextime) AS latest_risk\n| where latest_risk >= relative_time(now(),\"-70m@m\")\n

Credit to Josh Hrabar and James Campbell, this is brilliant. Thanks y'all!

Authors

@7thdrxn - Haylee Mills"},{"location":"searches/dynamic_drilldowns/","title":"Dynamic Drilldowns by Source","text":"

If you're utilizing a custom risk notable investigation dashboard, it can be incredibly helpful for each risk event source to have its own drilldown. Thanks to Donald Murchison from the RBA Slack for contributing this method, which is explained in more detail in this blog post.

"},{"location":"searches/dynamic_drilldowns/#create-a-drilldown-lookup","title":"Create a Drilldown Lookup","text":"

First you'll need a lookup file with your risk rule name, the drilldown itself, and a description, like so:

Specific drilldowns will help analysts find exactly what they want to know

You can utilize this example from Donald's article and change it to suit your purposes.
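As a minimal sketch of rba_risk_rule_drilldowns.csv (the column names match the SPL below; the rule name, index, and drilldown search here are hypothetical and should be replaced with your own):

search_name,drilldown,description\n\"Endpoint - Suspicious Process - Rule\",\"index=my_endpoint_index dest=$risk_object$ | table _time dest user process_name\",\"Show raw process events for this host\"\n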

"},{"location":"searches/dynamic_drilldowns/#create-your-drilldown-panel","title":"Create your Drilldown Panel","text":"

In Donald's example, this panel shows the list of sources for the risk object indicated by $risk_object_token$ (which you will need to ensure matches whatever token your dashboard uses), a description, and the drilldown logic itself. Here is the SPL and helpful comments:

Drilldown Panel SPL
| tstats summariesonly=false count from datamodel=Risk.All_Risk where All_Risk.risk_object=\"$risk_object_token$\" by source\n``` Get a list of all risk rules that have generated a risk event for this risk object - assumes the dashbaord has an input which stores risk_object in \"risk_object_token\"\nreplace risk_object_token with your own token name - helpful to use risk_object_type in search if this is in a token as well ```\n| fields source\n``` Use map to run a search for each risk rule to generate the drilldowns - map was used to be able to pass the risk rule name as an argument to the subsearch.\nThis is required because we must run an individual \"\n| eval drilldown=\u2026\" for each risk rule in case fields are used in the drilldown that do not exist in other risk events.\nString concatentation with a null field would make our entire string null.\nIf you wanted to remove map for better performance you could do this by only using fields that are present in every risk rule or building drilldowns with coalesce - coalesce(risk_object,\\\"\\\") - to ensure no null fields```\n| map search=\"index=risk risk_object=\\\"$risk_object_token$\\\" | eval drilldown=[| inputlookup rba_risk_rule_drilldowns.csv | eval search_name=split(search_name,\\\"|\\\") | search search_name=\\\"$$source$$\\\" | eval drilldown=\\\"\\\\\\\"\\\".search_name.\\\"||@||\\\\\\\".\\\".drilldown.\\\".\\\\\\\"||@||\\\".description.\\\"\\\\\\\"\\\"\n``` In the map search, we first search for all risk events related to the risk rule. Every risk event will get a drilldown field that we will dedup later. We do not use the datamodel in case fields outside of the datamodel are used in the drilldown.\nThe |inputlookup subsearch concatenates search_name, drilldown, and description for each row```\n| stats values(drilldown) as drilldown\n| eval drilldown=mvjoin(drilldown,\\\".\\\\\\\"||&||\\\\\\\".\\\")\n``` We then condense all drilldowns to a single field and concatenate together - this allows us to evaluate all drilldowns within a single eval statement```\n|return $drilldown] | fields drilldown\"\n```Now we break out the drilldowns into their respective components```\n| eval drilldown=split(drilldown,\"||&||\")\n| mvexpand drilldown\n| eval search_name=mvindex(split(drilldown,\"||@||\"),0)\n| eval drilldown_description=mvindex(split(drilldown,\"||@||\"),2)\n| eval drilldown=mvindex(split(drilldown,\"||@||\"),1)\n| stats values(*) as * by drilldown\n``` Use stats to dedup the drilldowns - depending on the fields used in the drilldown there could be multiple variations of the same drilldown```\n| table search_name drilldown_description drilldown\n
"},{"location":"searches/dynamic_drilldowns/#add-drilldown-functionality","title":"Add Drilldown Functionality","text":"

You can follow along with Donald's article to add the drilldown in the GUI editor, but the SimpleXML for this panel would be:

Drilldown SimpleXML
<drilldown>\n<link target=\"_blank\">search?q=$row.drilldown$&amp;earliest=$field1.earliest$&amp;latest=$field1.latest$</link>\n</drilldown>\n

This allows a click anywhere on that row to drive the search. Make sure your time picker tokens match here as well!

You could also utilize the time around an event by retaining _time in the initial search and declaring this later in SPL: Extra Time Control

| eval lowtime = _time - 300\n| eval hightime = _time + 300\n

So you could use $row.lowtime$ and $row.hightime$ for your drilldown and search a five minute window around an event instead of utilizing the standard time picker for your dashboard.

"},{"location":"searches/dynamic_drilldowns/#customize-your-heart-out","title":"Customize Your Heart Out","text":"

This is a great way to incorporate something akin to Workflow Actions for your custom dashboards. You could go a bit further and potentially:

Just as examples. Please share your variations on the RBA Slack!

Authors

@7thdrxn - Haylee Mills Donald Murchison"},{"location":"searches/limit_score_stacking/","title":"Limit Risk Rule Score Stacking","text":"

These will help reduce the maximum amount of risk which can be added from noisy Risk Rules.

"},{"location":"searches/limit_score_stacking/#navigation","title":"Navigation","text":"

There are two methods for limiting score stacking

Method | Skill Level | Pros | Cons
Method I | Beginner | Easy to get started with | Less context around what was capped and why
Method II | Intermediate | More precise deduplication and additional information | Additional understanding of SPL
"},{"location":"searches/limit_score_stacking/#method-i","title":"Method I","text":"

This caps the risk score contribution of a single source at 3x the highest score from that source.

| tstats summariesonly=true sum(All_Risk.calculated_risk_score) as summed_risk_score max(All_Risk.calculated_risk_score) as single_risk_score dc(source) as source_count count\n FROM datamodel=Risk.All_Risk\n WHERE All_Risk.risk_object_type=\"*\" (All_Risk.risk_object=\"*\" OR risk_object=\"*\")\n BY All_Risk.risk_object All_Risk.risk_object_type source\n| eval capped_risk_score=if(summed_risk_score < single_risk_score*3, summed_risk_score, single_risk_score*3)\n| stats sum(capped_risk_score) as capped_risk_score sum(summed_risk_score) as summed_risk_score dc(source) as source sum(count) as count\n BY All_Risk.risk_object All_Risk.risk_object_type\n| sort 1000 - risk_score\n...\n

Note

You may want to limit this to particular sources, but this is extra handy for noisy sources like EDR, DLP, or IDS.

Thanks David Dorsey!

"},{"location":"searches/limit_score_stacking/#method-ii","title":"Method II","text":"

This option adds some complexity; however, it provides more information and better deduplication. The full write-up of how to accomplish this method can be found on gabs' website.

Visit Website

*reference: https://www.gabrielvasseur.com/post/rba-a-better-way-to-dedup-risk-events

Final SPL from blog post
| inputlookup TEMP_GABS_riskybusiness.csv\n``` First we take the breakdown of what actually happened, before doing any kind of deduping ```\n| eventstats sum(count) as count_msg\n    by risk_object risk_object_type risk_score source risk_message ```Get breakdown per risk_message``` \n| eventstats values(eval(count_msg.\"*\".risk_score)) as breakdown_msg\n    by risk_object risk_object_type            source risk_message ```Get breakdown per risk_message```\n| eventstats sum(count) as count_src\n    by risk_object risk_object_type risk_score source ```Get breakdown per source```\n| eventstats values(eval(count_src.\"*\".risk_score)) as breakdown_src\n    by risk_object risk_object_type            source ```Get breakdown per source```\n| stats sum(count) as risk_event_count, values(breakdown_src) as breakdown_src,\n    values(breakdown_msg) as breakdown_msg, sum(eval(risk_score*count)) as total_score,\n    max(risk_score) as max_score, latest(_time) as _time, values(mitre_*) as mitre_*\n    by risk_object risk_object_type source risk_message ```Reduce to unique risk_message\n    (it's not impossible to have several risks with the same risk_message but different scores)```\n| eval risk_message= mvjoin(breakdown_msg,\"+\").\"=\".max_score\n    . if( total_score!=max_score, \" (!\" . total_score . \")\", \"\") . \" \" .risk_message\n``` START limit to a maximum of 10 contributions per source ```\n| sort 0 risk_object risk_object_type source - max_score ``` Only the lowest scores will be dedup'd ```\n| eventstats dc(risk_message) as dc_msg_per_source by risk_object risk_object_type source \n| streamstats count as rank_per_source by risk_object risk_object_type source \n| eval risk_message=case( \n    rank_per_source <= 10, risk_message,\n    rank_per_source = 11, \"...+\" . ( dc_msg_per_source - 20 ) . \" others from '\" . source . \"'...\" ,\n    1==1, null() ) \n| eval max_score=if( rank_per_source <= 10, max_score, 0 )\n``` END limit to a maximum of 10 contributions per source ```\n| stats sum(risk_event_count) as risk_event_count, values(breakdown_src) as breakdown_src,\n    list(risk_message) as risk_message, sum(max_score) as risk_score,\n    sum(total_score) as risk_score_nodedup, latest(_time) as _time, values(mitre_*) as mitre_*\n    by risk_object risk_object_type source ```Reduce to unique source```\n| eval breakdown_src = mvjoin(breakdown_src,\"+\") .\"=\".risk_score\n    . if( risk_score!=risk_score_nodedup, \" (!\" . risk_score_nodedup . \")\", \"\" ) . \" \".source\n| stats sum(risk_event_count) as risk_event_count, list(source) as source, dc(source) as source_count,\n    list(breakdown_src) as srcs, list(risk_message) as risk_message, sum(risk_score) as risk_score,\n    sum(risk_score_nodedup) as risk_score_nodedup, latest(_time) as _time, values(mitre_*) as mitre_*,\n    dc( mitre_tactic_id) as mitre_tactic_id_count, dc(mitre_technique_id) as mitre_technique_id_count\n    by risk_object risk_object_type ```Reduce to unique object```\n

Authors

@7thdrxn - Haylee Mills @gabs - Gabriel Vasseur"},{"location":"searches/naming_system_unknown_computer_accounts/","title":"Naming SYSTEM / Unknown / Computer Accounts - The SEAL Method","text":"

Computer accounts are used by Active Directory to authenticate machines to the domain, and RBA detections may find behavior in a log where the user account is simply listed as \"SYSTEM\" or even left blank because it is the computer account. This method renames the account to distinguish it as host$ from the noise of \"SYSTEM\" or \"unknown\". It can also be tied into the Asset & Identity framework and contribute to detections on user risk objects.

"},{"location":"searches/naming_system_unknown_computer_accounts/#steps","title":"Steps","text":"

Navigate to Settings > Fields > Calculated Fields > Add New

Setting | Value
Source | XmlWinEventLog:Security
Source | XmlWinEventLog:Microsoft-Windows-Sysmon/Operational
Name | user
Eval Expression | if(user=\"SYSTEM\" OR user=\"-\",'host'+\"$\",'user')

Conflicting knowledge objects - Sysmon TA

We have to be careful with the existing order of knowledge objects and calculated fields. The Sysmon TA already ships a calculated field for user, which we can update as follows:

Existing:
user = upper(case(\n    NOT isnull(User) AND NOT User IN (\"-\"), replace(User, \"(.*)\\\\\\(.+)$\",\"\\2\"),\n    NOT isnull(SourceUser) AND NOT isnull(TargetUser) AND SourceUser==TargetUser, replace(SourceUser, \"(.*)\\\\\\(.+)$\",\"\\2\")\n    ))\n
Update to:
user = upper(case(\n    match(User,\".+\\\\\\SYSTEM\"), host.\"$\",\n    NOT isnull(User) AND NOT User IN (\"-\"), replace(User, \"(.*)\\\\\\(.+)$\",\"\\2\"),\n    NOT isnull(SourceUser) AND NOT isnull(TargetUser) AND SourceUser==TargetUser, replace(SourceUser, \"(.*)\\\\\\(.+)$\",\"\\2\")\n    ))\n
"},{"location":"searches/naming_system_unknown_computer_accounts/#extra-credit","title":"Extra Credit","text":"

I'm not going to map this entire process out, due to how different it can be in each environment, but you can now add the computer account to your Identity lookup to aggregate with other user accounts. For example, you might take the fields nt_host and owner from your Asset lookup (asset_lookup_by_str), then map owner to email in the Identity lookup (identity_lookup_expanded). If you make a saved search that outputs a CSV, you can now use that to add fields into your Identity lookup.
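Not a full recipe, but a sketch of such a saved search might look like this (assuming nt_host and owner are populated in your Asset lookup; the output filename is made up):

| inputlookup asset_lookup_by_str\n| search owner=*\n| eval identity=lower(nt_host).\"$\"\n| table identity owner\n| outputlookup computer_account_identities.csv\n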

Authors

@Dean Luxton @StevenD"},{"location":"searches/risk_guide_searches/","title":"Essential RBA searches","text":"

Handy SPL contained in the Essential Guide to Risk Based Alerting.

"},{"location":"searches/risk_guide_searches/#determine-correlation-searches-with-high-falsebenign-positive-rates","title":"Determine Correlation Searches with High False/Benign Positive Rates","text":"
`notable`\n| stats count(eval(status_label=\"Incident\")) as incident count(eval(status_label=\"Resolved\")) as closed\n BY source\n| eval benign_rate = 1 - incident / (incident + closed)\n| sort - benign_rate\n
Note

Be sure to replace the status_label with whatever is used in your environment.

"},{"location":"searches/risk_guide_searches/#risk-rules-generating-the-most-risk","title":"Risk Rules Generating the Most Risk","text":"
| tstats summariesonly=false sum(All_Risk.calculated_risk_score)\n   as risk_score,dc(All_Risk.risk_object)\n   as risk_objects,count\n FROM datamodel=Risk.All_Risk\n WHERE * All_Risk.risk_object_type=\"*\" (All_Risk.risk_object=\"*\" OR risk_object=\"*\")\n BY source\n| sort 1000 - count risk_score\n
"},{"location":"searches/risk_guide_searches/#dig-into-noisy-threat-objects","title":"Dig into Noisy Threat Objects","text":"
| tstats summariesonly=true count dc(All_Risk.risk_object) as dc_objects dc(All_Risk.src) as dc_src dc(All_Risk.dest) as dc_dest dc(All_Risk.user) as dc_users dc(All_Risk.user_bunit) as dc_bunit sum(All_Risk.calculated_risk_score) as risk_score values(source) as source\n FROM datamodel=Risk.All_Risk\n BY All_Risk.threat_object,All_Risk.threat_object_type\n| `drop_dm_object_name(\"All_Risk\")`\n| sort 1000 - risk_score\n
"},{"location":"searches/risk_guide_searches/#find-noisiest-risk-rules-in-risk-notables","title":"Find Noisiest Risk Rules in Risk Notables","text":"
index=notable eventtype=risk_notables\n| stats count\n BY orig_source\n| eventstats sum(count) as total\n| eval percentage = round((count / total) * 100,2)\n| sort - percentage\n
"},{"location":"searches/risk_guide_searches/#structural-changes","title":"Structural Changes","text":""},{"location":"searches/risk_guide_searches/#notable-macro-to-edit-for-qa-risk-notables","title":"Notable Macro to Edit for QA Risk Notables","text":"

Add | eval QA=1 to the end of your Risk Incident Rules, then edit the get_notable_index macro from the default definition to the \"QA\" mode definition shown below.

default
index=notable\n
QA mode
index=notable NOT QA=1\n

This will keep Risk Notables out of your Incident Review queue while you develop RBA.

"},{"location":"searches/risk_guide_searches/#create-a-sandbox-for-risk-rules-away-from-risk-notables","title":"Create a Sandbox for Risk Rules away from Risk Notables","text":"

Create an eventtype called something like QA and have it apply a tag called QA, then add the following to your Risk Incident Rules.

...\nWHERE NOT All_Risk.tag=QA\n...\n

This preserves your curated risk ecology while letting you compare how many Risk Notables you would see if your QA content were added.

"},{"location":"searches/risk_guide_searches/#include-previous-notables-in-new-notables","title":"Include Previous Notables in New Notables","text":"

If you create a lookup called Past7DayNotables.csv from a saved search that stores each risk object's previous time, status, and sources, you could include this in your Risk Incident Rules:

| lookup Past7DayNotables.csv risk_object OUTPUT prev_time prev_status prev_sources\n| eval prev_alerts = prev_time.\" - \".prev_status.\" - \".prev_sources\n
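A minimal sketch of the saved search that could build this lookup, assuming the prev_time, prev_status, and prev_sources field names used above; schedule it to look back over the last seven days:

`notable`
| stats latest(_time) as prev_time latest(status_label) as prev_status values(source) as prev_sources by risk_object
| convert ctime(prev_time)
| outputlookup Past7DayNotables.csv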
Note

Make sure to add prev_alerts to the Incident Review Settings page so this shows up in the Incident Review panel.

"},{"location":"searches/risk_guide_searches/#tuning","title":"Tuning","text":""},{"location":"searches/risk_guide_searches/#remove-results-with-a-lookup","title":"Remove Results with a Lookup","text":"

Once you have a lookup built out, insert it into a search like this:

index=proxy http_method=\"POST\" NOT\n  [| inputlookup RR_Proxy_Allowlist.csv\n  | fields Web.src Web.dest\n  | rename Web.* AS *]\n

You could also do this with a datamodel:

| tstats summariesonly=t values(Web.dest) as dest from datamodel=Web.Web where Web.http_method=\"POST\" NOT\n  [| inputlookup RR_Proxy_Allowlist.csv | fields Web.src Web.dest]\n  by _time, Web.src\n

This uses the Web datamodel field constraints as an example so we can properly exclude results from index-based or datamodel-based risk rules.

"},{"location":"searches/risk_guide_searches/#adjust-risk-scores","title":"Adjust Risk Scores","text":""},{"location":"searches/risk_guide_searches/#using-eval","title":"Using eval","text":"
index=proxy signature=*\n| table src user user_bunit dest signature http_code\n| eval risk_score = case(\n  signature=\"JS:Adware.Lnkr.A\",\"10\",\n  signature=\"Win32.Adware.YTDownloader\",\"0\",\n  NOT http_code=\"200\",\"25\",\n  signature=\"Trojan.Win32.Emotet\" AND NOT user_bunit=\"THREAT INTELLIGENCE\",\"100\"\n  )\n

In this example, we are: scoring a common adware signature at a low 10, zeroing out a signature we never want to generate risk, scoring any request that did not return an HTTP 200 at 25, and scoring Emotet at 100 unless it comes from the threat intelligence business unit.

"},{"location":"searches/risk_guide_searches/#using-lookup","title":"Using lookup","text":"
index=proxy signature=*\n| table src user user_bunit dest signature http_code\n| lookup RR_Proxy_Adjust.csv src user user_bunit dest signature http_code OUTPUTNEW risk_score\n

We can do the same with a lookup and as many relevant fields as we need for the most constrained exclusions.
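As a rough sketch of how that lookup could be seeded (the RR_Proxy_Adjust.csv name comes from the search above; the row values here are hypothetical), you might write an initial row with outputlookup and grow it from there:

| makeresults
| eval src="10.1.2.3", user="jdoe", user_bunit="FINANCE", dest="ads.example.com", signature="JS:Adware.Lnkr.A", http_code="200", risk_score=10
| fields - _time
| outputlookup RR_Proxy_Adjust.csv

Note that the lookup command matches on every field you pass it, so either populate all six match fields per row or configure the lookup definition with wildcard matching for broader entries.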

"},{"location":"searches/risk_guide_searches/#dedup-similar-events-from-counting-multiple-times-in-risk-notables-score","title":"Dedup Similar Events from Counting Multiple Times in Risk Notables (Score)","text":"
...\n BY All_Risk.risk_object,All_Risk.risk_object_type\n| `drop_dm_object_name(\"All_Risk\")`\n| streamstats sum(risk_score) as original_score values(source) as sources values(risk_message) as risk_messages by risk_object\n| eval adjust_score = case(\n source IN (\"My Noisy Rule That Fires a Lot but I Still Want to Know About, Once\", \"My Other Really Useful Context Low Risk Rule\"),\"1\",\n match(risk_message,\"IDS - Rule Category 1.*|IDS - Rule Category 2.*\") OR match(risk_message,\"DLP - Rule Category 1.*|DLP - Rule Category 2.*\"),\"1\",\n 1=1,null())\n| eval combine = coalesce(adjust_score,risk_message)\n| dedup combine risk_score\n| streamstats sum(risk_score) as risk_score values(sources) as source values(risk_messages) as risk_message by risk_object\n...\n

This makes sure that similar detections of essentially the same event only count once toward our total risk score.

"},{"location":"searches/risk_guide_searches/#weight-events-from-noisy-sources-in-risk-notables-metadata","title":"Weight Events from Noisy Sources in Risk Notables (Metadata)","text":"
...\nBY All_Risk.risk_object,All_Risk.risk_object_type\n| `drop_dm_object_name(\"All_Risk\")`\n| mvexpand source\n| lookup RIRadjust-rule_weight.csv source OUTPUTNEW mitre_weight source_weight\n| eval mitre_weight = if(isnotnull(mitre_weight),mitre_weight,\"0\")\n| eval source_weight = if(isnotnull(source_weight),source_weight,\"0\")\n| streamstats sum(mitre_weight) as mitre_weight_total sum(source_weight) as source_weight_total values(*) as * by risk_object risk_object_type\n| eval mitre_tactic_id_count = mitre_tactic_id_count - mitre_weight_total\n| eval source_count = source_count - source_weight_total\n| eval \"annotations.mitre_attack\" = 'annotations.mitre_attack.mitre_technique_id'\n| where mitre_tactic_id_count >= 3 and source_count >= 4\n

This is for tuning Risk Incident Rules that don't rely on an accretive score to alert but still need a lever to tweak noisy sources. In our example lookup, we would include a value between 0 and 1 for each noisy source; i.e., 0.75 to count a rule as only ¼ of a standard weight, 0.5 to count it as only ½, and so on. A sketch of seeding that lookup follows.
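A minimal sketch of seeding that lookup, assuming the source, mitre_weight, and source_weight columns referenced by the search above; the rule name is hypothetical:

| makeresults
| eval source="Endpoint - Noisy But Useful Context - Rule", mitre_weight=0.75, source_weight=0.75
| fields - _time
| outputlookup RIRadjust-rule_weight.csv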

Authors

@7thdrxn - Haylee Mills"},{"location":"searches/risk_incident_rule_ideas/","title":"Risk Incident Rule Ideas","text":"

Here are some alternative ways to alert from the risk index that you may find useful. Later searches rely on the base search found in the \"Capped Risk Score by Source\" approach.

Capped Risk Score by Source (from the limit score stacking approach); Events from Multiple Sourcetypes (for events from multiple sourcetypes); Events from Multiple Sourcetypes with Meta-Scoring (similar, but with more control over what alerts and how); MITRE Counts with Meta-Scoring (a meta-scoring approach to MITRE count alerts)"},{"location":"searches/risk_incident_rule_ideas/#capped-risk-score-by-source","title":"Capped Risk Score by Source","text":"

This utilizes the limit score stacking approach to cap the score contribution from any single source at double its highest-scoring risk event.

| tstats `summariesonly`\ncount as count\ncount(All_Risk.calculated_risk_score) as risk_event_count,\nsum(All_Risk.calculated_risk_score) as summed_risk_score,\nmax(All_Risk.calculated_risk_score) as single_risk_score,\nvalues(All_Risk.risk_message) as risk_message,\nvalues(All_Risk.annotations.mitre_attack.mitre_tactic_id) as annotations.mitre_attack.mitre_tactic_id,\ndc(All_Risk.annotations.mitre_attack.mitre_tactic_id) as mitre_tactic_id_count,\nvalues(All_Risk.annotations.mitre_attack.mitre_technique_id) as annotations.mitre_attack.mitre_technique_id,\ndc(All_Risk.annotations.mitre_attack.mitre_technique_id) as mitre_technique_id_count,\nvalues(All_Risk.tag) as tag,\nvalues(All_Risk.threat_object) as threat_object,\nvalues(All_Risk.threat_object_type) as threat_object_type,\ndc(source) as source_count,\nmax(_time) as _time\nfrom datamodel=Risk.All_Risk by All_Risk.risk_object,All_Risk.risk_object_type, source | `drop_dm_object_name(\"All_Risk\")` | eval \"annotations.mitre_attack\"='annotations.mitre_attack.mitre_technique_id' | `get_risk_severity(risk_score)`\n| eval capped_risk_score=if(summed_risk_score < single_risk_score*2, summed_risk_score, single_risk_score*2)\n| stats values(*) as * sum(capped_risk_score) as capped_risk_score sum(summed_risk_score) as summed_risk_score dc(annotations.mitre_attack.mitre_tactic_id) as mitre_tactic_id_count dc(annotations.mitre_attack.mitre_technique_id) as mitre_technique_id_count sum(risk_event_count) as risk_event_count dc(source) as source_count\n BY risk_object risk_object_type\n| fields - single_risk_score count\n| eval risk_score = summed_risk_score\n| where capped_risk_score > 100\n
"},{"location":"searches/risk_incident_rule_ideas/#events-from-multiple-sourcetypes","title":"Events from Multiple Sourcetypes","text":"

This is a very effective approach that looks for when a single risk object has events from multiple security data sources. With a well-defined naming scheme for your searches, you may not need to utilize a saved search to retain this information in your risk rules. Otherwise, you could run something like this somewhat infrequently as a saved search:

| rest splunk_server=local count=0 /services/saved/searches\n| search action.correlationsearch.enabled=1\n| rename dispatch.earliest_time as early_time qualifiedSearch as search_spl\n| table title search_spl\n| eval data_sourcetype = case(\nmatch(search_spl,\".*\\`(sysmon|wmi|powershell|wineventlog_(security|system))\\`.*\") OR match(search_spl,\".*datamodel(:|=|\\s)(|\\\")Endpoint.*\") OR match(title,\"Endpoint.*\") OR match(search_spl,\".*sourcetype\\=(|\\\")(xmlwineventlog:microsoft-windows-sysmon/operational).*\"),\"Endpoint\",\nmatch(search_spl,\".*datamodel(:|=|\\s)(|\\\")Endpoint.*\") OR match(title,\"Threat.*\") OR match(search_spl,\".*sourcetype\\=(|\\\")(wdtap:alerts).*\"),\"Malware\",\nmatch(search_spl,\".*\\`(okta|gws_reports_login)\\`.*\") OR match(search_spl,\".*datamodel(:|=|\\s)(|\\\")Authentication.*\"),\"Authentication\",\nmatch(search_spl,\".*datamodel(:|=|\\s)(|\\\")Change.*\"),\"Change\",\nmatch(search_spl,\".*\\`(stream_http)\\`.*\") OR match(search_spl,\".*datamodel(:|=|\\s)(|\\\")Web.*\"),\"Web\",\nmatch(search_spl,\".*\\`(o365_management_activity|gsuite_gmail)\\`.*\") OR match(search_spl,\".*datamodel(:|=|\\s)(|\\\")Email.*\"),\"Email\",\nmatch(search_spl,\".*\\`(gsuite_gdrive)\\`.*\") OR match(search_spl,\".*datamodel(:|=|\\s)(|\\\")Data Loss.*\"),\"DLP\",\nmatch(search_spl,\".*datamodel(:|=|\\s)(|\\\")Alerts.*\"),\"Alerts\",\nmatch(search_spl,\".*datamodel(:|=|\\s)(|\\\")Intrusion.*\"),\"IDS\",\nmatch(search_spl,\".*\\`(cisco_networks)\\`.*\") OR match(search_spl,\".*datamodel(:|=|\\s)(|\\\")Network.*\"),\"Network\",\nmatch(search_spl,\".*\\`(kubernetes_azure|azuread|cloudtrail|aws_securityhub_finding|aws_cloudwatchlogs_eks|azure_audit|google_gcp_pubsub_message|aws_s3_accesslogs)\\`.*\"),\"Cloud\",\ntrue(),\"Unknown\")\n| fields - search_spl\n| outputlookup RR_sources.csv\n

This looks at the SPL of each correlation search to determine which sourcetype to group it under. Please modify this search as you see fit for your environment. The resulting lookup allows you to create a Risk Incident Rule like this:

...\n| eval capped_risk_score=if(summed_risk_score < single_risk_score*2, summed_risk_score, single_risk_score*2)\n| lookup RR_sources.csv title AS source OUTPUTNEW data_sourcetype\n| stats values(*) as * sum(capped_risk_score) as capped_risk_score sum(summed_risk_score) as summed_risk_score dc(annotations.mitre_attack.mitre_tactic_id) as mitre_tactic_id_count dc(annotations.mitre_attack.mitre_technique_id) as mitre_technique_id_count sum(risk_event_count) as risk_event_count dc(source) as source_count values(data_sourcetype) as sourcetypes dc(data_sourcetype) as sourcetype_count\n BY risk_object risk_object_type\n| fields - single_risk_score count\n| eval risk_score = summed_risk_score\n| where sourcetype_count > 1\n
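If your correlation searches already follow a consistent naming convention (for example "Endpoint - Something Suspicious - Rule"), you may be able to skip the lookup entirely and derive the category from each risk event's source name; a rough sketch under that naming assumption:

| tstats summariesonly=false sum(All_Risk.calculated_risk_score) as risk_score from datamodel=Risk.All_Risk by All_Risk.risk_object All_Risk.risk_object_type source
| `drop_dm_object_name("All_Risk")`
| rex field=source "^(?<data_sourcetype>\w+) - "
| stats sum(risk_score) as risk_score values(source) as source values(data_sourcetype) as sourcetypes dc(data_sourcetype) as sourcetype_count by risk_object risk_object_type
| where sourcetype_count > 1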
"},{"location":"searches/risk_incident_rule_ideas/#events-from-multiple-sourcetypes-with-meta-scoring","title":"Events from Multiple Sourcetypes with Meta-Scoring","text":"

Sometimes you may need more ways of distinguishing which events should carry more weight in an alert than a simple count or distinct count. The gist of this strategy is to declare a new variable with a value of 0, then use multiple eval statements to add to that value based on attributes of the event. Remember that a case() statement only applies the first match it finds, so make sure your most important matches are listed first. Don't be afraid to stack multiple eval statements, and you'll have to tweak the threshold depending on the values you choose.

...\n| eval capped_risk_score=if(summed_risk_score < single_risk_score*2, summed_risk_score, single_risk_score*2)\n| lookup RR_sources.csv title AS source OUTPUTNEW data_sourcetype\n| rex field=risk_message \"Severity\\=(?<severity>\\w*)\\s\"\n| stats values(*) as * sum(capped_risk_score) as capped_risk_score sum(summed_risk_score) as summed_risk_score dc(annotations.mitre_attack.mitre_tactic_id) as mitre_tactic_id_count dc(annotations.mitre_attack.mitre_technique_id) as mitre_technique_id_count sum(risk_event_count) as risk_event_count dc(source) as source_count values(data_sourcetype) as sourcetypes dc(data_sourcetype) as sourcetype_count\n BY risk_object risk_object_type\n| fields - single_risk_score count\n| eval risk_score = summed_risk_score\n| eval sourcetype_mod = 0\n| eval sourcetype_mod = if(match(sourcetypes,\"Endpoint\"),sourcetype_mod+20,sourcetype_mod)\n| eval sourcetype_mod = if(match(sourcetypes,\"Malware\"),sourcetype_mod+20,sourcetype_mod)\n| eval sourcetype_mod = if(match(sourcetypes,\"Web\"),sourcetype_mod+10,sourcetype_mod)\n| eval sourcetype_mod = if(match(sourcetypes,\"DLP\"),sourcetype_mod+10,sourcetype_mod)\n| eval sourcetype_mod = case(\nmatch(sourcetypes,\"IDS\") AND match(severity,\"(high|critical)\"),sourcetype_mod+20,\nmatch(sourcetypes,\"IDS\"),sourcetype_mod+10,\ntrue(),sourcetype_mod)\n| where sourcetype_mod > 39\n

Because sourcetypes is now a multi-valued field per risk_object, I had to create multiple eval checks so that the operation applies more than once when events from multiple sourcetypes are found. You can also see how I pulled severity out of risk_message earlier with rex so I could distinguish higher- and lower-severity IDS events in the meta-scoring. This assumes only my IDS events have that particular formatting to indicate severity; you may need more logic to distinguish different sourcetypes and severities, as this is just an example.

I chose the scoring threshold of 40 because of how I've structured the score additions. I will get an alert if a risk object has events from: two of the 20-point categories (Endpoint, Malware, or high/critical IDS), or one 20-point category plus at least two of the 10-point categories (Web, DLP, or lower-severity IDS).

This may remove a lot of noise from combinations that aren't as likely to be malicious. It is still worthwhile to occasionally review what doesn't pass the threshold, both to confirm you've crafted a method that surfaces high-fidelity alerts and to check that the combinations you filtered out are caught by other Risk Incident Rules.

"},{"location":"searches/risk_incident_rule_ideas/#mitre-counts-with-meta-scoring","title":"MITRE Counts with Meta-Scoring","text":"

The meta-scoring method is useful for getting more value from your MITRE count thresholding rules.

...\n| eval capped_risk_score=if(summed_risk_score < single_risk_score*2, summed_risk_score, single_risk_score*2)\n| eval mitre_weight = case(\ncapped_risk_score>70,\"0\",\ncapped_risk_score>40,\"0.5\",\ncapped_risk_score>5,\"0.75\",\ntrue(),\"1\")\n| eval mitre_weight_tactic = mitre_weight * mitre_tactic_id_count\n| eval mitre_weight_technique = mitre_weight * mitre_technique_id_count\n| eventstats sum(mitre_weight_tactic) as mitre_weight_tactic_total sum(mitre_weight_technique) as mitre_weight_technique_total by risk_object risk_object_type source\n| eval mitre_tactic_id_count = mitre_tactic_id_count - mitre_weight_tactic_total\n| eval mitre_technique_id_count = mitre_technique_id_count - mitre_weight_technique_total\n| stats values(*) as * sum(capped_risk_score) as capped_risk_score sum(summed_risk_score) as summed_risk_score sum(mitre_tactic_id_count) as mitre_tactic_id_count sum(mitre_technique_id_count) as mitre_technique_id_count sum(risk_event_count) as risk_event_count dc(source) as source_count\n BY risk_object risk_object_type\n| fields - mitre_weight* single_risk_score count\n| eval risk_score = summed_risk_score\n| eval mitre_mod = 0\n| eval mitre_mod = case(\nmitre_tactic_id_count > 3,mitre_mod+20,\nmitre_tactic_id_count < 4 AND mitre_tactic_id_count > 1,mitre_mod+10,\ntrue(),mitre_mod)\n| eval mitre_mod = case(\nmitre_technique_id_count > 4,mitre_mod+20,\nmitre_technique_id_count < 5 AND mitre_technique_id_count > 2,mitre_mod+10,\ntrue(),mitre_mod)\n| eval mitre_mod = case(\nmvcount(source) > 4,mitre_mod+20,\nmvcount(source) < 5 AND mvcount(source) > 1,mitre_mod+10,\ntrue(),mitre_mod)\n| eval mitre_mod = case(\nmatch(sourcetypes,\"(Malware|Endpoint)\"),mitre_mod+20,\nmatch(sourcetypes,\"IDS\"),mitre_mod+10,\ntrue(),mitre_mod)\n| eval mitre_mod = case(\nmatch(user_category,\"(privileged|technical|executive|watchlist)\"),mitre_mod+20,\nmatch(src_category,\"(Server|DMZ)\"),mitre_mod+10,\ntrue(),mitre_mod)\n| where mitre_mod > 49\n

Near the beginning, we juggle some logic so that events with a lower risk score count differently: when we aggregate on the number of MITRE Tactics and Techniques involved, we may want events with a higher risk score to count more heavily toward the overall total. This is especially true when aggregating events over longer periods, like the out-of-the-box 7 day rule or something going as far back as 30 or 90 days.

In the meta-scoring, we now have all sorts of ways to distinguish what might be more relevant to us. We incorporate: the weighted counts of distinct MITRE tactics and techniques, the number of contributing sources, which sourcetype categories are involved (Malware, Endpoint, IDS), and asset and identity context such as privileged, technical, executive, or watchlisted users and Server or DMZ sources.

This gives us more control over the types of events that bubble up in our alerts.

Authors

@7thdrxn - Haylee Mills"},{"location":"searches/risk_info_event_detail/","title":"Risk info field","text":""},{"location":"searches/risk_info_event_detail/#create-macro-for-risk_info-field","title":"Create macro for risk_info field","text":"

You may want to keep risk_message relatively brief as a sort of high-level overview of a risk event, then utilize a new field to store details. We can create a macro called risk_info(1) to create a JSON-formatted field with this SPL:

Macro definition
eval risk_info = \"{\\\"risk_info\\\":{\"\n| foreach $fields$\n    [\n    | eval <<FIELD>>=if(isnull(<<FIELD>>), \"unknown\", <<FIELD>>)\n    ```Preparing json array if FIELD is multivalue, otherwise simple json value```\n    | eval json=if(mvcount(<<FIELD>>)>1,mv_to_json_array(mvdedup(<<FIELD>>)),\"\\\"\".<<FIELD>>.\"\\\"\") \n    | eval risk_info=risk_info.\"\\\"\".\"<<FIELD>>\".\"\\\": \".json.\",\"\n    ]\n| rex mode=sed field=risk_info \"s/,$/}}/\"\n| fields - json\n

Many thanks to RedTigR on the RBA Slack for providing the multi-value friendly version of this macro.

Utilize the macro like risk_info(\"field1,field2,field3\") and so on, to get a JSON-formatted field containing whichever fields we like.

If we then wanted to break this out in a dashboard, we could use spath to split the fields into their own columns, or a rex command like this:

Example

| rex field=risk_info max_match=100 \"(?<risk_info2>\\\"\\w+\\\":\\s*((?:(?<!\\\\\\)\\\"[^\\\"]*\\\"|\\[[^\\]]*\\]))(?=,|\\s*}))\"\n

This breaks out each field as a multivalue entry on its own line in the same column. It looks really pretty, and you can even use $click.value2$ to determine exactly which multivalue field was clicked and use different drilldowns per field, for example.

"},{"location":"searches/risk_info_event_detail/#extracting-existing-fields-from-risk-events-into-risk_info-field","title":"Extracting existing fields from risk events into risk_info field","text":"

Assumption

Your risk rules are outputting specific details in addition to the risk fields (e.g. risk_message, risk_object etc.)

The following search replaces the \"View the individual Risk Attributions\" drilldown within a risk incident rule. It lets us dynamically bring in the output of each individual risk rule in a concise manner.

The aim of this is to minimize pivoting when performing the initial assessment of a risk incident while keeping the notable and risk_message field concise.

index=risk\n| search risk_object=$risk_object$\n| rename annotations.mitre_attack.mitre_tactic_id AS mitre_tactic_id, annotations.mitre_attack.mitre_tactic AS mitre_tactic\n| rex field=_raw max_match=0 \"(?<risk_info>[^\\=]+\\=\\\"([^\\\"]+\\\")+?)((, )|$)\"\n| eval risk_info=mvfilter(NOT match(risk_info, \"^(annotations)|(info_)|(savedsearch_description)|(risk_)|(orig_time)|(([0-9]+, )?search_name)\"))\n| table _time, source, risk_object, risk_score, risk_message, risk_info, risk_object_type, mitre_tactic_id, mitre_tactic\n| eval calculated_risk_score=risk_score\n| sort _time\n

Breaking down some decisions: the rex against _raw pulls every key=\"value\" pair from the raw risk event into risk_info; the mvfilter then drops framework fields (annotations, info_*, savedsearch_description, risk_*, orig_time, search_name) so only rule-specific detail remains; calculated_risk_score is copied from risk_score so the output matches what the stock drilldown provides; and sorting by _time presents the events chronologically.

Authors

@7thdrxn - Haylee Mills @RedTigR @elusive-mesmer"},{"location":"searches/risk_notable_history/","title":"Risk Notable History","text":"

Tyler Younger from the RBA Slack contributed this handy method for including some useful history of risk notables for that risk object when it fires. I played with it a bit and created a version I might use in a dashboard for additional context. You should check with your analysts to see what would be most helpful for them.

"},{"location":"searches/risk_notable_history/#adding-risk-notable-history","title":"Adding Risk Notable History","text":"

You could add this subsearch to your Risk Incident Rules and add the resulting field to Incident Review Settings so analysts see it when reviewing a notable event, or maybe have it as a panel in an investigation dashboard. I will leave it with the makeresults and tabled results as an example so you can play around until it looks right.

| makeresults\n| eval risk_object=\"tyounger\"\n| join type=left max=0 risk_object\n    [| search earliest=-31d latest=now `notable`\n    ``` ### This may or may not make sense in your environment, the idea was to tidy up the search names, adjust as needed\n    ```\n   | replace \"* - Rule\" WITH * IN search_name\n   | replace \"Audit - UC - *\" WITH * IN search_name\n   | replace \"Threat - UC - *\" WITH * IN search_name\n   | replace \"Access - UC - *\" WITH * IN search_name\n   | replace \"Network - UC -*\" WITH * IN search_name\n   | replace \"Identity - UC -*\" WITH * IN search_name\n   | replace \"Endpoint - UC -*\" WITH * IN search_name\n   ``` ### ```\n    | eventstats count as history_count dc(search_name) as search_name_count values(search_name) as search_names first(_time) as last_time by risk_object,search_name\n    | eval days_ago=round((abs(last_time-now())/86400),2)\n    | convert ctime(first_time) as first_time\n    | convert ctime(last_time) as last_time\n    | eval history_count=if(isnull(history_count),\" new\", history_count)\n    | eval search_names=if(isnull(search_names),\" search null\",search_names)\n    | eval last_time=if(isnull(last_time),\" last time null\",last_time)\n    | eval days_ago=if(isnull(days_ago),\" days ago null\",days_ago)\n    | fillnull comment value=\"N/A\"\n    | table risk_object sources rule_name history_count risk_object first_time last_time search_name_count search_names days_ago status_label comment\n        ]\n    ``` ### Format history fields ### ```\n| eval notable_risk_history=\"(\".risk_object. \") previously alerted \".history_count.\" times with the following notable(s) [\".search_names.\"]\".\" with status label(s) (\".status_label.\") most recently on [\".last_time.\"] \".days_ago.\" days ago. comment(s) comments: (\".comment.\")\"\n| eval notable_risk_history=if(isnull(notable_risk_history),\"Risk object has not generated any notable events\",notable_risk_history)\n| eval search_names=if(isnull(search_names),\"N/A\",search_names)\n| makemv delim=\"comments: \" notable_risk_history\n| eval notable_risk_history=mvjoin(notable_risk_history, \"\")\n| table risk_object notable_risk_history\n

You should be able to use the join and the logic all the way up to the final table command as-is, and perhaps turn it into a macro that you add to the end of your Risk Incident Rules to provide that context.
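For example, assuming you saved everything from the join through the final notable_risk_history eval as a hypothetical macro named risk_notable_history, the tail of a Risk Incident Rule might look like this:

...
 BY All_Risk.risk_object,All_Risk.risk_object_type
| `drop_dm_object_name("All_Risk")`
| `risk_notable_history`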

"},{"location":"searches/risk_notable_history/#adding-risk-and-traditional-notable-history","title":"Adding Risk and Traditional Notable History","text":"

You might want to check other fields in regular notables to see if this risk object appears there as well. In this example, I am using coalesce to check src, dest, and user and bring those in on the join. I'm also playing with the spacing and formatting of the final results in case that gives you some ideas:

| makeresults\n| eval risk_object=\"gravity\"\n| join type=left max=0 risk_object\n    [| search earliest=-5000d latest=now `notable`\n    ``` ### This may or may not make sense in your environment, the idea was to tidy up the search names, adjust as needed\n    ```\n   | replace \"* - Rule\" WITH * IN search_name\n   | replace \"* - Combined\" WITH * IN search_name\n   | replace \"Audit - *\" WITH * IN search_name\n   | replace \"Threat - *\" WITH * IN search_name\n   | replace \"Access - *\" WITH * IN search_name\n   | replace \"Network -*\" WITH * IN search_name\n   | replace \"Identity -*\" WITH * IN search_name\n   | replace \"Endpoint -*\" WITH * IN search_name\n   ``` ### ```\n   | eval risk_object=coalesce(risk_object,src)\n   | eval risk_object=coalesce(risk_object,dest)\n   | eval risk_object=coalesce(risk_object,user)\n   | eval comment = \"|||---- \".comment\n    | eventstats count as history_count dc(search_name) as search_name_count values(search_name) as search_names latest(_time) as last_time latest(status_label) as status_label values(comment) as comments by risk_object,search_name\n    | dedup risk_object,search_name\n    | convert ctime(last_time) as last_time\n    | fillnull history_count search_names last_time value=\"N/A\"\n    | fillnull comments value=\"---- no comments\"\n    | eval comments = mvjoin(comments,\"\")\n    | table risk_object history_count risk_object last_time time search_name_count search_name status_label comments\n        ]\n    ``` ### Format history fields ### ```\n| eval notables = last_time.\" - \".history_count.\" - \".search_name.\" :: \".upper(status_label).\"|||\".comments\n| stats sum(history_count) as history_count values(notables) as notables by risk_object\n| eval notables = mvjoin(notables,\"||| |||-- \")\n| eval notable_history=\"(+. \".upper(risk_object). \" .+) previously alerted \".history_count.\" times with the following notable(s):||| |||-- \".notables\n| eval notable_history=split(notable_history,\"|||\")\n| fields - notables history_count\n| eval notable_history=if(isnull(notable_history),\"Risk object has not generated any notable events\",notable_history)\n

Either way, letting your analysts know what was seen before is helpful context when they begin investigating.

Authors

Tyler Younger"},{"location":"searches/this_then_that_alerts/","title":"Detect Chain of Behaviors","text":"

To make a risk rule that looks for two rules firing close together within a certain duration, we can use sort followed by the autoregress command:

index=risk sourcetype=stash search_name=\"Search1\" OR search_name=\"Search2\"\n| sort by user _time | dedup _time search_name user\n| delta _time as gap\n| autoregress search_name as prev_search\n| autoregress user as prev_user\n| where user = prev_user\n| table _time gap src user prev_user search_name prev_search\n| where ((search_name=\"Search1\" OR search_name=\"Search2\") AND (prev_search=\"Search1\" OR prev_search=\"Search2\") AND gap<600)\n

The benefit of not doing this in a single search is that you still have the individual risk events as useful observations; you can then add more risk when they are observed together, or tweak risk down for noisy events without \"allowlisting\" them altogether.

Ryan Moss from Verizon also spoke about using Analytic Stories with RBA, which is another excellent method for low-volume, high-fidelity chained detections.

Authors

@7thdrxn - Haylee Mills"},{"location":"searches/threat_object_prevalence/","title":"Threat Object Prevalence","text":"

One of my favorite features in RBA is knowing how often something has occurred in an environment; generally, the more rare or anomalous something is, the more likely it is to be malicious. The threat object drilldown in the sample Risk Investigation Dashboard is designed to offer an analyst that context, but we could utilize unique counts of threat objects to automatically tune scores.

"},{"location":"searches/threat_object_prevalence/#create-a-saved-search","title":"Create a Saved Search","text":"

You'll have to decide how often you want this information updated, but utilizing tstats against the Risk Index means this should be pretty snappy and could run pretty frequently over a long timeframe. Create a new saved search with this logic, firing with a frequency you like, and looking back at least 7 days:

| from datamodel Risk.All_Risk\n| eval threat_object = mvzip(threat_object_type,threat_object,\"|||\")\n| table risk_object threat_object user dest source\n| eval to_full = threat_object \n| eval to_split = mvjoin(to_full,\"::::\")\n| eventstats count by threat_object\n| sistats count values(to_full) values(user) values(dest) by threat_object,to_split,source\n| rename psrsvd_vm_to_full AS to_full\n| rename psrsvd_vm_user AS users\n| rename psrsvd_vm_dest AS dests\n| rename psrsvd_gc AS count\n| rex field=to_full max_match=0 \"(?<threat_pal_temp>.*?;\\d+;)\"\n| rex field=users max_match=0 \"(?<user_temp>.*?;\\d+;)\"\n| rex field=dests max_match=0 \"(?<dest_temp>.*?;\\d+;)\"\n| eventstats count as count2 by threat_object\n| table count user_temp dest_temp source threat_pal_temp threat_object count2\n| stats max(count) as total_count dc(user_temp) as dc_users values(user_temp) as users dc(dest_temp) as dc_dests values(dest_temp) as dests dc(sources_temp) as dc_sources values(count2) as to_count by threat_object threat_pal_temp source\n| table total_count to_count source dc_users users dc_dests dests threat_pal_temp threat_object\n| rex field=threat_pal_temp \"(?<threat_pal>.*?);(?<threat_count>\\d+);\"\n| rex field=users \"(?<users_temp>.*?);\\d+;\"\n| rex field=dests \"(?<dests_temp>.*?);\\d+;\"\n| rex field=threat_object \"^(?<threat_object_type>.+)\\|\\|\\|(?<threat_notype>.*)\"\n| rex field=threat_object \"^(?<threat_pal_type>.+)\\|\\|\\|(?<threat_pal_notype>.*)\"\n| eval to_count = if(threat_pal=threat_object,to_count,total_count)\n| stats sum(threat_count) as threat_count dc(users_temp) as dc_users dc(dests_temp) as dc_dests values(users) as users values(dests) as dests values(source) as sources dc(source) as dc_sources values(to_count) as to_count values(threat_object_type) as threat_object_type values(threat_notype) as threat_notype values(threat_pal_type) as threat_pal_type values(threat_pal_notype) as threat_pal_notype BY threat_object,threat_pal\n| eval triage_status = case(\ndc_dests>10,\"many_dest\",\ndc_users>10,\"many_user\",\ndc_dests>3,\"multiple_dest\",\ndc_users>3,\"multiple_user\",\nthreat_count>10, \"regular_event\",\ntrue(), \"rare\" )\n| eval threat_multiplier = case(\ntriage_status=\"many_dest\" OR triage_status=\"many_user\",\"0\",\ntriage_status=\"multiple_dest\" OR triage_status=\"multiple_user\",\"0.25\",\ntriage_status=\"regular_event\",\"0.5\",\ntrue(),\"1\")\n| eval threat_info=triage_status.\" \".threat_object_type.\": dc_users=\".dc_users.\" dc_dests=\".dc_dests.\" dc_sources=\".dc_sources.\" observed_count/alt_count=\".threat_count.\"/\".to_count.\" threat_object=\".threat_notype\n| eval users = if(mvcount(users)>9,\"10+ entities\",users), dests = if(mvcount(dests)>9,\"10+ entities\",dests)\n| table threat_multiplier threat_count to_count threat_info triage_status dc_users users dc_dests dests dc_sources sources threat_object threat_pal threat_object_type threat_pal_type threat_notype threat_pal_notype\n| sort - threat_count\n| outputlookup threat_check.csv\n

What I've tried to do with this lookup is see how often each threat object fires with another pair, which adds some complexity to the logic but ensures that a unique threat object pairing maintains its score and doesn't decrease scores inadvertently when a regularly firing threat object fires with something new.

The reason I sort in descending order by count is so that in the following search, mvindex() utilizes the highest count for that threat pairing.

Feel free to tweak the threat_multiplier to your heart's content, or include additional logic with triage_status.
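For instance, you might not only down-weight common threat objects but also boost a rare threat object that more than one rule has observed. A sketch of an adjusted threat_multiplier eval using the same fields as the search above (the 1.5 boost value is arbitrary):

| eval threat_multiplier = case(
triage_status="many_dest" OR triage_status="many_user","0",
triage_status="multiple_dest" OR triage_status="multiple_user","0.25",
triage_status="regular_event","0.5",
triage_status="rare" AND dc_sources>1,"1.5",
true(),"1")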

"},{"location":"searches/threat_object_prevalence/#incorporating-into-risk-notables","title":"Incorporating into Risk Notables","text":"

I will use the general logic for the limit score stacking Risk Incident Rule with some modifications, and the datamodel command for clarity and speed:

| datamodel Risk search\n| `drop_dm_object_name(\"All_Risk\")`\n| eval threat_object = mvzip(threat_object_type,threat_object,\"|||\")\n| fillnull threat_object threat_object_type value=\"blah\"\n| lookup threat_check.csv threat_object AS threat_object threat_pal AS threat_object threat_object_type sources AS source OUTPUT threat_info threat_multiplier\n| eval combined_ro = risk_object . \"|\" . risk_object_type\n| fillnull risk_message value=\"\" | fillnull threat_multiplier value=\"1\" | eval risk_hash = md5(risk_message)\n| eval threat_info = mvindex(threat_info,0) , threat_multiplier = mvindex(threat_multiplier,0)\n| eventstats sum(calculated_risk_score) as total_risk_score max(calculated_risk_score) as single_risk_score by combined_ro source\n| eventstats sum(total_risk_score) as supersum max(single_risk_score) as single_score by combined_ro source\n| eval cross_total_score=if(supersum < single_score*2, supersum, single_score*2)\n| eval cross_total_score = cross_total_score * threat_multiplier\n| stats count as observed_count\n    sum(calculated_risk_score) as total_risk_score\n    max(calculated_risk_score) as single_risk_score\n    values(threat_info) as threat_info\n    values(risk_message) as risk_message\n    values(risk_hash) as risk_hash\n    values(threat_full) as threat_full\n    values(threat_multiplier) as threat_multiplier\n    values(annotations.mitre_attack.mitre_tactic_id) as mitre_tactic_id\n    dc(annotations.mitre_attack.mitre_tactic_id) as mitre_tactic_id_count\n    values(annotations.mitre_attack.mitre_technique_id) as mitre_technique_id,\n    dc(annotations.mitre_attack.mitre_technique_id) as mitre_technique_id_count\n    sum(cross_total_score) as cross_total_score\n    values(tag) as tag\n    dc(_time) AS dc_events \nby combined_ro, source, threat_object\n| eval capped_risk_score=if(total_risk_score < single_risk_score*2, total_risk_score, single_risk_score*2)\n| eval risk_object = mvindex(split(combined_ro, \"|\"), 0)\n| eval risk_object_type = mvindex(split(combined_ro, \"|\"), 1)\n| fields - combined_*\n| eval threat_info = mvjoin(threat_info,\"::::\")\n| eval threat_full = mvzip(threat_object,threat_info,\"::::THREAT OBJECT PAIRS::::\")\n| rex mode=sed field=threat_full \"s/::::/\\n/g\"\n| eval total_risk_score = total_risk_score * threat_multiplier\n| eval capped_risk_score = capped_risk_score * threat_multiplier\n| eventstats values(risk_object) as related_objects by risk_hash\n| stats \n    values(*) as *\n    sum(total_risk_score) as total_risk_score \n    sum(capped_risk_score) as capped_risk_score\n    values(cross_total_score) as cross_total_score\n    sum(normalized_risk_score) as normalized_risk_score\n    sum(normalcap_risk_score) as normalcap_risk_score\n    dc(mitre_tactic_id) as mitre_tactic_id_count\n    dc(mitre_technique_id) as mitre_technique_id_count\n    dc(source) as source_count\n    by risk_object\n| eval cross_total_score = sum(cross_total_score) , cross_total_score = if(cross_total_score>=total_risk_score,capped_risk_score,cross_total_score)\n| eval risk_score = capped_risk_score\n| search risk_score > 100\n| table risk_object risk_object_type risk_score cross_total_score capped_risk_score total_risk_score related_objects mitre_tactic_id_count mitre_tactic_id mitre_technique_id_count mitre_technique_id source_count source threat_full threat_object risk_message risk_hash\n

The threat_check.csv has more fields than are utilized here; you may want to use more of them in the lookup OUTPUT section or as context for an investigation dashboard.

risk_hash is utilized to see related_objects which have fired with the same events; you can use this as a throttling field instead of risk_object to prevent notables from firing with the same events on a user and a system.

I recommend utilizing \"capped_risk_score\" as the risk score, as the code above does; however, you may find that \"cross_total_score\" works better in some environments.

Please test this out in your environment and give me feedback! There are definitely multiple ways to trim this chia pet, but I wanted to give folks an idea of what was possible.

Authors

@7thdrxn - Haylee Mills"},{"location":"searches/threat_object_types/","title":"Additional Threat Object Types","text":"

Increasing the number of threat object types you track in Risk Rules can be really helpful for tuning noisy alerts, threat hunting on anomalous combinations, and automating SOAR enrichment to unique threat object types. Haylee and Stuart's Threat Object Fun dashboards can be helpful for all three.
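For instance, a quick hunting search over the Risk datamodel that surfaces the rarest threat object type and value combinations (nothing assumed beyond the standard Risk datamodel fields):

| tstats summariesonly=false count dc(All_Risk.risk_object) as risk_objects from datamodel=Risk.All_Risk by All_Risk.threat_object_type All_Risk.threat_object
| `drop_dm_object_name("All_Risk")`
| sort + count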

"},{"location":"searches/threat_object_types/#threat-object-types","title":"Threat Object Types","text":"

Some potential threat_object_types to keep in mind when creating risk rules:

threat_object_type (source):
ip (email, endpoint, network, proxy)
src_user (email, endpoint, proxy)
user (email, endpoint, proxy)
file_hash (endpoint, email)
file_name (endpoint, email)
domain (endpoint, proxy)
url (endpoint, proxy)
email_subject (email)
email_body (email)
command (endpoint)
parent_process (endpoint)
parent_process_name (endpoint)
process (endpoint)
process_file_name (endpoint)
process_hash (endpoint)
process_name (endpoint)
registry_path (endpoint)
registry_value_name (endpoint)
registry_value_text (endpoint)
service (endpoint)
service_dll_file_hash (endpoint)
service_file_hash (endpoint)
certificate_common_name (proxy)
certificate_organization (proxy)
certificate_serial (proxy)
certificate_unit (proxy)
http_referrer (proxy)
http_user_agent (proxy)"},{"location":"searches/threat_object_types/#other-types","title":"Other Types","text":"

You could also use open-source handshake fingerprinting methods like JA3, JA4, JARM, or CYU to identify anomalous client and server handshakes, and potentially include those fingerprint hashes as additional threat object types, as sketched below.
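A minimal sketch of what that could look like in a risk rule, assuming your TLS events already carry a hypothetical ja3 field (for example from Zeek or Suricata) in a hypothetical network_tls index:

index=network_tls ja3=*
| eval risk_object=src, risk_object_type="system", threat_object=ja3, threat_object_type="ja3"
| table _time src dest ja3 risk_object risk_object_type threat_object threat_object_type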

"}]} \ No newline at end of file diff --git a/searches/threat_object_prevalence/index.html b/searches/threat_object_prevalence/index.html index de3032c..e00074d 100644 --- a/searches/threat_object_prevalence/index.html +++ b/searches/threat_object_prevalence/index.html @@ -9,74 +9,192 @@ body[data-md-color-scheme="slate"] .gdesc-inner { background: var(--md-default-bg-color);} body[data-md-color-scheme="slate"] .gslide-title { color: var(--md-default-fg-color);} body[data-md-color-scheme="slate"] .gslide-desc { color: var(--md-default-fg-color);} -
\ No newline at end of file +

Threat Object Prevalence

One of my favorite features in RBA is knowing how often something has occurred in an environment; generally, the more rare or anomalous something is, the more likely it is to be malicious. The threat object drilldown in the sample Risk Investigation Dashboard is designed to offer an analyst that context, but we could utilize unique counts of threat objects to automatically tune scores.

You'll have to decide how often you want this information updated, but utilizing tstats against the Risk Index means this should be pretty snappy and could run pretty frequently over a long timeframe. Create a new saved search with this logic, firing with a frequency you like, and looking back at least 7 days:

| from datamodel Risk.All_Risk
+| eval threat_object = mvzip(threat_object_type,threat_object,"|||")
+| table risk_object threat_object user dest source
+| eval to_full = threat_object 
+| eval to_split = mvjoin(to_full,"::::")
+| eventstats count by threat_object
+| sistats count values(to_full) values(user) values(dest) by threat_object,to_split,source
+| rename psrsvd_vm_to_full AS to_full
+| rename psrsvd_vm_user AS users
+| rename psrsvd_vm_dest AS dests
+| rename psrsvd_gc AS count
+| rex field=to_full max_match=0 "(?<threat_pal_temp>.*?;\d+;)"
+| rex field=users max_match=0 "(?<user_temp>.*?;\d+;)"
+| rex field=dests max_match=0 "(?<dest_temp>.*?;\d+;)"
+| eventstats count as count2 by threat_object
+| table count user_temp dest_temp source threat_pal_temp threat_object count2
+| stats max(count) as total_count dc(user_temp) as dc_users values(user_temp) as users dc(dest_temp) as dc_dests values(dest_temp) as dests dc(sources_temp) as dc_sources values(count2) as to_count by threat_object threat_pal_temp source
+| table total_count to_count source dc_users users dc_dests dests threat_pal_temp threat_object
+| rex field=threat_pal_temp "(?<threat_pal>.*?);(?<threat_count>\d+);"
+| rex field=users "(?<users_temp>.*?);\d+;"
+| rex field=dests "(?<dests_temp>.*?);\d+;"
+| rex field=threat_object "^(?<threat_object_type>.+)\|\|\|(?<threat_notype>.*)"
+| rex field=threat_object "^(?<threat_pal_type>.+)\|\|\|(?<threat_pal_notype>.*)"
+| eval to_count = if(threat_pal=threat_object,to_count,total_count)
+| stats sum(threat_count) as threat_count dc(users_temp) as dc_users dc(dests_temp) as dc_dests values(users) as users values(dests) as dests values(source) as sources dc(source) as dc_sources values(to_count) as to_count values(threat_object_type) as threat_object_type values(threat_notype) as threat_notype values(threat_pal_type) as threat_pal_type values(threat_pal_notype) as threat_pal_notype BY threat_object,threat_pal
+| eval triage_status = case(
+dc_dests>10,"many_dest",
+dc_users>10,"many_user",
+dc_dests>3,"multiple_dest",
+dc_users>3,"multiple_user",
+threat_count>10, "regular_event",
+true(), "rare" )
+| eval threat_multiplier = case(
+triage_status="many_dest" OR triage_status="many_user","0",
+triage_status="multiple_dest" OR triage_status="multiple_user","0.25",
+triage_status="regular_event","0.5",
+true(),"1")
+| eval threat_info=triage_status." ".threat_object_type.": dc_users=".dc_users." dc_dests=".dc_dests." dc_sources=".dc_sources." observed_count/alt_count=".threat_count."/".to_count." threat_object=".threat_notype
+| eval users = if(mvcount(users)>9,"10+ entities",users), dests = if(mvcount(dests)>9,"10+ entities",dests)
+| table threat_multiplier threat_count to_count threat_info triage_status dc_users users dc_dests dests dc_sources sources threat_object threat_pal threat_object_type threat_pal_type threat_notype threat_pal_notype
+| sort - threat_count
+| outputlookup threat_check.csv
+

What I've tried to do with this lookup is see how often each threat object fires with another pair, which adds some complexity to the logic but ensures that a unique threat object pairing maintains its score and doesn't decrease scores inadvertently when a regularly firing threat object fires with something new.

The reason I sort in descending order by count is so that in the following search, mvindex() utilizes the highest count for that threat pairing.

Feel free to tweak the threat_multiplier to your heart's content, or include additional logic with triage_status.

Incorporating into Risk Notables

I will use the general logic for the limit score stacking Risk Incident Rule with some modifications, and the datamodel command for clarity and speed:

| datamodel Risk search
+| `drop_dm_object_name("All_Risk")`
+| eval threat_object = mvzip(threat_object_type,threat_object,"|||")
+| fillnull threat_object threat_object_type value="blah"
+| lookup threat_check.csv threat_object AS threat_object threat_pal AS threat_object threat_object_type sources AS source OUTPUT threat_info threat_multiplier
+| eval combined_ro = risk_object . "|" . risk_object_type
+| fillnull risk_message value="" | fillnull threat_multiplier value="1" | eval risk_hash = md5(risk_message)
+| eval threat_info = mvindex(threat_info,0) , threat_multiplier = mvindex(threat_multiplier,0)
+| eventstats sum(calculated_risk_score) as total_risk_score max(calculated_risk_score) as single_risk_score by combined_ro source
+| eventstats sum(total_risk_score) as supersum max(single_risk_score) as single_score by combined_ro source
+| eval cross_total_score=if(supersum < single_score*2, supersum, single_score*2)
+| eval cross_total_score = cross_total_score * threat_multiplier
+| stats count as observed_count
+    sum(calculated_risk_score) as total_risk_score
+    max(calculated_risk_score) as single_risk_score
+    values(threat_info) as threat_info
+    values(risk_message) as risk_message
+    values(risk_hash) as risk_hash
+    values(threat_full) as threat_full
+    values(threat_multiplier) as threat_multiplier
+    values(annotations.mitre_attack.mitre_tactic_id) as mitre_tactic_id
+    dc(annotations.mitre_attack.mitre_tactic_id) as mitre_tactic_id_count
+    values(annotations.mitre_attack.mitre_technique_id) as mitre_technique_id,
+    dc(annotations.mitre_attack.mitre_technique_id) as mitre_technique_id_count
+    sum(cross_total_score) as cross_total_score
+    values(tag) as tag
+    dc(_time) AS dc_events 
+by combined_ro, source, threat_object
+| eval capped_risk_score=if(total_risk_score < single_risk_score*2, total_risk_score, single_risk_score*2)
+| eval risk_object = mvindex(split(combined_ro, "|"), 0)
+| eval risk_object_type = mvindex(split(combined_ro, "|"), 1)
+| fields - combined_*
+| eval threat_info = mvjoin(threat_info,"::::")
+| eval threat_full = mvzip(threat_object,threat_info,"::::THREAT OBJECT PAIRS::::")
+| rex mode=sed field=threat_full "s/::::/\n/g"
+| eval total_risk_score = total_risk_score * threat_multiplier
+| eval capped_risk_score = capped_risk_score * threat_multiplier
+| eventstats values(risk_object) as related_objects by risk_hash
+| stats 
+    values(*) as *
+    sum(total_risk_score) as total_risk_score 
+    sum(capped_risk_score) as capped_risk_score
+    values(cross_total_score) as cross_total_score
+    sum(normalized_risk_score) as normalized_risk_score
+    sum(normalcap_risk_score) as normalcap_risk_score
+    dc(mitre_tactic_id) as mitre_tactic_id_count
+    dc(mitre_technique_id) as mitre_technique_id_count
+    dc(source) as source_count
+    by risk_object
+| eval cross_total_score = sum(cross_total_score) , cross_total_score = if(cross_total_score>=total_risk_score,capped_risk_score,cross_total_score)
+| eval risk_score = capped_risk_score
+| search risk_score > 100
+| table risk_object risk_object_type risk_score cross_total_score capped_risk_score total_risk_score related_objects mitre_tactic_id_count mitre_tactic_id mitre_technique_id_count mitre_technique_id source_count source threat_full threat_object risk_message risk_hash
+

The threat_check.csv has more fields than are utilized here; you may want to use more of them in the lookup OUTPUT section or as context for an investigation dashboard.

risk_hash is utilized to see related_objects which have fired with the same events; you can use this as a throttling field instead of risk_object to prevent notables from firing with the same events on a user and a system.

I recommend utilizing "capped_risk_score" as the risk score, as the code above does; however, you may find that "cross_total_score" works better in some environments.

Please test this out in your environment and give me feedback! There are definitely multiple ways to trim this chia pet, but I wanted to give folks an idea of what was possible.

Authors

@7thdrxn - Haylee Mills

Last update: June 15, 2024
Created: September 1, 2023
\ No newline at end of file diff --git a/sitemap.xml.gz b/sitemap.xml.gz index 50f1d7ed89b1fa9434d8736612818442818f6c35..5854900f96f66bdd9d7bbb09e38a38aeaf7bd390 100644 GIT binary patch delta 12 Tcmb=gXOr*d;COduB3mT@8`cDw delta 12 Tcmb=gXOr*d;BdS-k*yK{7-<9P