Remove Starter Digital Fingerprinting (DFP) #1903

Merged
merged 11 commits into from
Oct 22, 2024
Changes from 3 commits
1 change: 0 additions & 1 deletion ci/release/update-version.sh
Original file line number Diff line number Diff line change
Expand Up @@ -91,7 +91,6 @@ sed_runner "s/v${CURRENT_FULL_VERSION}-runtime/v${NEXT_FULL_VERSION}-runtime/g"
examples/digital_fingerprinting/production/docker-compose.yml \
examples/digital_fingerprinting/production/Dockerfile
sed_runner "s/v${CURRENT_FULL_VERSION}-runtime/v${NEXT_FULL_VERSION}-runtime/g" examples/digital_fingerprinting/production/Dockerfile
sed_runner "s|blob/branch-${CURRENT_SHORT_TAG}|blob/branch-${NEXT_SHORT_TAG}|g" examples/digital_fingerprinting/starter/README.md

# examples/developer_guide
sed_runner 's/'"VERSION ${CURRENT_FULL_VERSION}.*"'/'"VERSION ${NEXT_FULL_VERSION}"'/g' \
Expand Down
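The `sed_runner` helper used throughout this release script is, in essence, an in-place `sed` over each listed file. A self-contained sketch of the version-bump pattern above (the helper body is an assumption — the real helper is defined earlier in the script — and the demo file stands in for the real `docker-compose.yml`/`Dockerfile` targets):

```shell
# Illustrative stand-in for the script's sed_runner helper: apply one
# sed expression in-place to every file passed after it.
sed_runner() {
    local expr="$1"
    shift
    sed -i "${expr}" "$@"
}

CURRENT_FULL_VERSION="24.06"
NEXT_FULL_VERSION="24.10"

# Demo file standing in for the docker-compose.yml / Dockerfile targets.
demo=$(mktemp)
echo "image: morpheus:v${CURRENT_FULL_VERSION}-runtime" > "${demo}"

# Same substitution shape as the release script's runtime-image version bump.
sed_runner "s/v${CURRENT_FULL_VERSION}-runtime/v${NEXT_FULL_VERSION}-runtime/g" "${demo}"
cat "${demo}"   # -> image: morpheus:v24.10-runtime
```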
3 changes: 0 additions & 3 deletions docs/source/basics/overview.rst
Expand Up @@ -52,15 +52,12 @@ run:
--help Show this message and exit.

Commands:
pipeline-ae Run the inference pipeline with an AutoEncoder model
pipeline-fil Run the inference pipeline with a FIL model
pipeline-nlp Run the inference pipeline with a NLP model
pipeline-other Run a custom inference pipeline without a specific model type

Currently, the Morpheus pipeline can be operated in three different modes.

* ``pipeline-ae``
* This pipeline mode is used to run training/inference on the AutoEncoder model.
* ``pipeline-fil``
* This pipeline mode is used to run inference on FIL (Forest Inference Library) models such as XGBoost, RandomForestClassifier, etc.
* ``pipeline-nlp``
Expand Down
48 changes: 3 additions & 45 deletions docs/source/cloud_deployment_guide.md
Expand Up @@ -32,7 +32,6 @@ limitations under the License.
- [Verify Model Deployment](#verify-model-deployment)
- [Create Kafka Topics](#create-kafka-topics)
- [Example Workflows](#example-workflows)
- [Run AutoEncoder Digital Fingerprinting Pipeline](#run-autoencoder-digital-fingerprinting-pipeline)
- [Run NLP Phishing Detection Pipeline](#run-nlp-phishing-detection-pipeline)
- [Run NLP Sensitive Information Detection Pipeline](#run-nlp-sensitive-information-detection-pipeline)
- [Run FIL Anomalous Behavior Profiling Pipeline](#run-fil-anomalous-behavior-profiling-pipeline)
Expand Down Expand Up @@ -383,10 +382,9 @@ kubectl -n $NAMESPACE exec deploy/broker -c broker -- kafka-topics.sh \

This section describes example workflows to run on Morpheus. Three sample pipelines are provided.

1. AutoEncoder pipeline performing Digital Fingerprinting (DFP).
2. NLP pipeline performing Phishing Detection (PD).
3. NLP pipeline performing Sensitive Information Detection (SID).
4. FIL pipeline performing Anomalous Behavior Profiling (ABP).
1. NLP pipeline performing Phishing Detection (PD).
2. NLP pipeline performing Sensitive Information Detection (SID).
3. FIL pipeline performing Anomalous Behavior Profiling (ABP).

Multiple command options are given for each pipeline, with varying data input/output methods, ranging from local files to Kafka Topics.

Expand Down Expand Up @@ -424,46 +422,6 @@ helm install --set ngc.apiKey="$API_KEY" \
morpheus-sdk-client
```


### Run AutoEncoder Digital Fingerprinting Pipeline
The following AutoEncoder pipeline example shows how to train and validate the AutoEncoder model and write the inference results to a specified location. Digital fingerprinting has also been referred to as **HAMMAH (Human as Machine <> Machine as Human)**.
These use cases are currently implemented to detect user behavior changes that indicate a change from a human to a machine or a machine to a human, thus leaving a "digital fingerprint." The model is an ensemble of an autoencoder and fast Fourier transform reconstruction.

Inference and training based on a user ID (`user123`). The model is trained once and inference is conducted on the supplied input entries in the example pipeline below. The `--train_data_glob` parameter must be removed for continuous training.

```bash
helm install --set ngc.apiKey="$API_KEY" \
--set sdk.args="morpheus --log_level=DEBUG run \
--num_threads=2 \
--edge_buffer_size=4 \
--pipeline_batch_size=1024 \
--model_max_batch_size=1024 \
--use_cpp=False \
pipeline-ae \
--columns_file=data/columns_ae_cloudtrail.txt \
--userid_filter=user123 \
--feature_scaler=standard \
--userid_column_name=userIdentitysessionContextsessionIssueruserName \
--timestamp_column_name=event_dt \
from-cloudtrail --input_glob=/common/models/datasets/validation-data/dfp-cloudtrail-*-input.csv \
--max_files=200 \
train-ae --train_data_glob=/common/models/datasets/training-data/dfp-cloudtrail-*.csv \
--source_stage_class=morpheus.stages.input.cloud_trail_source_stage.CloudTrailSourceStage \
--seed 42 \
preprocess \
inf-pytorch \
add-scores \
timeseries --resolution=1m --zscore_threshold=8.0 --hot_start \
monitor --description 'Inference Rate' --smoothing=0.001 --unit inf \
serialize \
to-file --filename=/common/data/<YOUR_OUTPUT_DIR>/cloudtrail-dfp-detections.csv --overwrite" \
--namespace $NAMESPACE \
<YOUR_RELEASE_NAME> \
morpheus-sdk-client
```

For more information on the Digital Fingerprint use cases, refer to the starter example and a more production-ready example that can be found in the `examples` source directory.

### Run NLP Phishing Detection Pipeline

The following Phishing Detection pipeline examples use a pre-trained NLP model to analyze email bodies and classify each as phishing or benign. The sample data shown below is passed as input to the pipeline.
Expand Down
2 changes: 1 addition & 1 deletion docs/source/developer_guide/contributing.md
Expand Up @@ -347,7 +347,7 @@ Launching a full production Kafka cluster is outside the scope of this project;

### Pipeline Validation

To verify that all pipelines are working correctly, validation scripts have been added at `${MORPHEUS_ROOT}/scripts/validation`. There are scripts for each of the main workflows: Anomalous Behavior Profiling (ABP), Humans-as-Machines-Machines-as-Humans (HAMMAH), Phishing Detection (Phishing), and Sensitive Information Detection (SID).
To verify that all pipelines are working correctly, validation scripts have been added at `${MORPHEUS_ROOT}/scripts/validation`. There are scripts for each of the main workflows: Anomalous Behavior Profiling (ABP), Phishing Detection (Phishing), and Sensitive Information Detection (SID).

To run all of the validation workflow scripts, use the following commands:

Expand Down
55 changes: 12 additions & 43 deletions docs/source/developer_guide/guides/5_digital_fingerprinting.md
Expand Up @@ -23,7 +23,7 @@ Every account, user, service, and machine has a digital fingerprint that represe
To construct this digital fingerprint, we will be training unsupervised behavioral models at various granularities, including a generic model for all users in the organization along with fine-grained models for each user to monitor their behavior. These models are continuously updated and retrained over time​, and alerts are triggered when deviations from normality occur for any user​.

## Training Sources
The data we will want to use for the training and inference will be any sensitive system that the user interacts with, such as VPN, authentication and cloud services. The digital fingerprinting example (`examples/digital_fingerprinting/README.md`) included in Morpheus ingests logs from [AWS CloudTrail](https://docs.aws.amazon.com/cloudtrail/index.html), [Azure Active Directory](https://docs.microsoft.com/en-us/azure/active-directory/reports-monitoring/concept-sign-ins), and [Duo Authentication](https://duo.com/docs/adminapi).
The data we want to use for training and inference comes from any sensitive system the user interacts with, such as VPN, authentication, and cloud services. The digital fingerprinting example (`examples/digital_fingerprinting/README.md`) included in Morpheus ingests logs from [Azure Active Directory](https://docs.microsoft.com/en-us/azure/active-directory/reports-monitoring/concept-sign-ins) and [Duo Authentication](https://duo.com/docs/adminapi).

The location of these logs could be either local to the machine running Morpheus, a shared file system like NFS, or on a remote store such as [Amazon S3](https://aws.amazon.com/s3/).

Expand All @@ -44,54 +44,23 @@ Adding a new source for the DFP pipeline requires defining five critical pieces:
1. A [`DataFrameInputSchema`](6_digital_fingerprinting_reference.md#dataframe-input-schema-dataframeinputschema) for the [`DFPFileToDataFrameStage`](6_digital_fingerprinting_reference.md#file-to-dataframe-stage-dfpfiletodataframestage) stage.
1. A [`DataFrameInputSchema`](6_digital_fingerprinting_reference.md#dataframe-input-schema-dataframeinputschema) for the [`DFPPreprocessingStage`](6_digital_fingerprinting_reference.md#preprocessing-stage-dfppreprocessingstage).
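Conceptually, a `DataFrameInputSchema` is a declarative list of per-column transforms (rename, type conversion, timestamp parsing) applied as rows are loaded. The stand-in below is hypothetical — it is not the Morpheus API, and the Azure-style field names are illustrative only:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Callable


@dataclass
class ColumnRule:
    """One output column: where it comes from and how it is transformed."""
    name: str                               # output column name
    source: str                             # input column to read
    transform: Callable = lambda v: v       # identity by default


@dataclass
class InputSchema:
    """A declarative bundle of column rules applied to each input row."""
    rules: list

    def apply(self, row: dict) -> dict:
        return {r.name: r.transform(row[r.source]) for r in self.rules}


# Example: rename a nested Azure-style field and parse the timestamp.
schema = InputSchema(rules=[
    ColumnRule("username", "properties.userPrincipalName"),
    ColumnRule("timestamp", "time", lambda v: datetime.fromisoformat(v)),
])
```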

## DFP Examples
The DFP workflow is provided as two separate examples: a simple, "starter" pipeline for new users and a complex, "production" pipeline for full scale deployments. While these two examples both perform the same general tasks, they do so in very different ways. The following is a breakdown of the differences between the two examples.

### The "Starter" Example

This example is designed to simplify the number of stages and components and provide a fully contained workflow in a single pipeline.

Key Differences:
* A single pipeline which performs both training and inference
* Requires no external services
* Can be run from the Morpheus CLI

This example is described in more detail in `examples/digital_fingerprinting/starter/README.md`.

### The "Production" Example
## Production Deployment Example

This example is designed to illustrate a full-scale, production-ready, DFP deployment in Morpheus. It contains all of the necessary components (such as a model store), to allow multiple Morpheus pipelines to communicate at a scale that can handle the workload of an entire company.

Key Differences:
Key Features:
* Multiple pipelines are specialized to perform either training or inference
* Requires setting up a model store to allow the training and inference pipelines to communicate
* Uses a model store to allow the training and inference pipelines to communicate
* Organized into a docker-compose deployment for easy startup
* Contains a Jupyter notebook service to ease development and debugging
* Can be deployed to Kubernetes using provided Helm charts
* Uses many customized stages to maximize performance.

This example is described in `examples/digital_fingerprinting/production/README.md` as well as the rest of this document.

### DFP Features
## DFP Features

#### AWS CloudTrail
| Feature | Description |
| ------- | ----------- |
| `userIdentityaccessKeyId` | for example, `ACPOSBUM5JG5BOW7B2TR`, `ABTHWOIIC0L5POZJM2FF`, `AYI2CM8JC3NCFM4VMMB4` |
| `userAgent` | for example, `Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 10.0; Trident/5.1)`, `Mozilla/5.0 (Linux; Android 4.3.1) AppleWebKit/536.1 (KHTML, like Gecko) Chrome/62.0.822.0 Safari/536.1`, `Mozilla/5.0 (Macintosh; U; PPC Mac OS X 10 7_0; rv:1.9.4.20) Gecko/2012-06-10 12:09:43 Firefox/3.8` |
| `userIdentitysessionContextsessionIssueruserName` | for example, `role-g` |
| `sourceIPAddress` | for example, `208.49.113.40`, `123.79.131.26`, `128.170.173.123` |
| `userIdentityaccountId` | for example, `Account-123456789` |
| `errorMessage` | for example, `The input fails to satisfy the constraints specified by an AWS service.`, `The specified subnet cannot be found in the VPN with which the Client VPN endpoint is associated.`, `Your account is currently blocked. Contact [email protected] if you have questions.` |
| `userIdentitytype` | for example, `FederatedUser` |
| `eventName` | for example, `GetSendQuota`, `ListTagsForResource`, `DescribeManagedPrefixLists` |
| `userIdentityprincipalId` | for example, `39c71b3a-ad54-4c28-916b-3da010b92564`, `0baf594e-28c1-46cf-b261-f60b4c4790d1`, `7f8a985f-df3b-4c5c-92c0-e8bffd68abbf` |
| `errorCode` | for example, success, `MissingAction`, `ValidationError` |
| `eventSource` | for example, `lopez-byrd.info`, `robinson.com`, `lin.com` |
| `userIdentityarn` | for example, `arn:aws:4a40df8e-c56a-4e6c-acff-f24eebbc4512`, `arn:aws:573fd2d9-4345-487a-9673-87de888e4e10`, `arn:aws:c8c23266-13bb-4d89-bce9-a6eef8989214` |
| `apiVersion` | for example, `1984-11-26`, `1990-05-27`, `2001-06-09` |

#### Azure Active Directory
### Azure Active Directory
| Feature | Description |
| ------- | ----------- |
| `appDisplayName` | for example, `Windows sign in`, `MS Teams`, `Office 365`​ |
Expand All @@ -104,14 +73,14 @@ This example is described in `examples/digital_fingerprinting/production/README.
| `location.countryOrRegion` | country or region name​ |
| `location.city` | city name |

##### Derived Features
#### Derived Features
| Feature | Description |
| ------- | ----------- |
| `logcount` | tracks the number of logs generated by a user within that day (increments with every log)​ |
| `locincrement` | increments every time we observe a new city (`location.city`) in a user's logs within that day​ |
| `appincrement` | increments every time we observe a new app (`appDisplayName`) in a user's logs within that day​ |
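These per-day counters are simple to compute. A framework-free sketch of the logic (the real pipeline derives them during preprocessing, typically on DataFrames; the dict-based log shape here is an assumption for illustration):

```python
from collections import defaultdict


def derive_features(logs):
    """Annotate each log with per-user, per-day derived features.

    Each log is a dict with 'user', 'day', 'location.city', and
    'appDisplayName' keys (illustrative field names).
    """
    logcount = defaultdict(int)     # (user, day) -> running log count
    seen_cities = defaultdict(set)  # (user, day) -> distinct cities seen
    seen_apps = defaultdict(set)    # (user, day) -> distinct apps seen
    out = []
    for log in logs:
        key = (log["user"], log["day"])
        logcount[key] += 1
        seen_cities[key].add(log["location.city"])
        seen_apps[key].add(log["appDisplayName"])
        out.append({**log,
                    "logcount": logcount[key],
                    "locincrement": len(seen_cities[key]),
                    "appincrement": len(seen_apps[key])})
    return out
```

The same counters apply to the Duo features below, keyed on that source's city and application fields.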

#### Duo Authentication
### Duo Authentication
| Feature | Description |
| ------- | ----------- |
| `auth_device.name` | phone number​ |
Expand All @@ -121,7 +90,7 @@ This example is described in `examples/digital_fingerprinting/production/README.
| `reason` | reason for the results, for example, `User Cancelled`, `User Approved`, `User Mistake`, `No Response`​ |
| `access_device.location.city` | city name |

##### Derived Features
#### Derived Features
| Feature | Description |
| ------- | ----------- |
| `logcount` | tracks the number of logs generated by a user within that day (increments with every log)​ |
Expand All @@ -133,16 +102,16 @@ DFP in Morpheus is accomplished via two independent pipelines: training and inference

![High Level Architecture](img/dfp_high_level_arch.png)

#### Training Pipeline
### Training Pipeline
* Trains user models and uploads to the model store​
* Capable of training individual user models or a fallback generic model for all users​

#### Inference Pipeline
### Inference Pipeline
* Downloads user models from the model store​
* Generates anomaly scores per log​
* Sends detected anomalies to monitoring services
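At its core, each inference score is the reconstruction error of a log's feature vector under that user's autoencoder. A toy, framework-free sketch of the scoring and thresholding step (the real pipeline uses per-user PyTorch models; the threshold value is illustrative):

```python
def anomaly_score(features, reconstruction):
    """Mean squared reconstruction error; higher means more anomalous."""
    return sum((f - r) ** 2 for f, r in zip(features, reconstruction)) / len(features)


def detect(logs, model, threshold=4.0):
    """Score each log and keep those whose score exceeds the threshold.

    `model` is any callable mapping a feature vector to its reconstruction.
    """
    detections = []
    for log in logs:
        score = anomaly_score(log["features"], model(log["features"]))
        if score > threshold:
            detections.append({**log, "score": score})
    return detections
```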

#### Monitoring
### Monitoring
* Detected anomalies are published to an S3 bucket, a directory, or a Kafka topic.
* Output can be integrated with a monitoring tool.
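A minimal stand-in for the publishing step — JSON lines written to a local directory, one file per user (the production sinks are an S3 bucket or a Kafka topic; everything here is a simplified sketch):

```python
import json
from pathlib import Path


def publish_detections(detections, out_dir):
    """Append each detection as one JSON line to a per-user file.

    One file per user keeps downstream per-user review simple; a real
    deployment would swap this for an S3 or Kafka producer.
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for det in detections:
        with (out / f"{det['user']}.jsonl").open("a", encoding="utf-8") as fh:
            fh.write(json.dumps(det) + "\n")
```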

Expand Down
30 changes: 0 additions & 30 deletions docs/source/getting_started.md
Expand Up @@ -368,36 +368,6 @@ Commands:
trigger Buffer data until the previous stage has completed.
validate Validate pipeline output for testing.
```

And for the AE pipeline:

```
$ morpheus run pipeline-ae --help
Usage: morpheus run pipeline-ae [OPTIONS] COMMAND1 [ARGS]... [COMMAND2 [ARGS]...]...

<Help Paragraph Omitted>

Commands:
add-class Add detected classifications to each message.
add-scores Add probability scores to each message.
buffer (Deprecated) Buffer results.
delay (Deprecated) Delay results for a certain duration.
filter Filter message by a classification threshold.
from-azure Source stage is used to load Azure Active Directory messages.
from-cloudtrail Load messages from a CloudTrail directory.
from-duo Source stage is used to load Duo Authentication messages.
inf-pytorch Perform inference with PyTorch.
inf-triton Perform inference with Triton Inference Server.
monitor Display throughput numbers at a specific point in the pipeline.
preprocess Prepare Autoencoder input DataFrames for inference.
serialize Includes & excludes columns from messages.
timeseries Perform time series anomaly detection and add prediction.
to-file Write all messages to a file.
to-kafka Write all messages to a Kafka cluster.
train-ae Train an Autoencoder model on incoming data.
trigger Buffer data until the previous stage has completed.
validate Validate pipeline output for testing.
```
Note: The available commands for different types of pipelines are not the same. This means that the same stage, when used in different pipelines, may have different options. Check the CLI help for the most up-to-date information during development.

## Next Steps
Expand Down
6 changes: 0 additions & 6 deletions docs/source/stages/morpheus_stages.md
Expand Up @@ -44,19 +44,15 @@ Stages are the building blocks of Morpheus pipelines. Below is a list of the mos

## Inference

- Auto Encoder Inference Stage {py:class}`~morpheus.stages.inference.auto_encoder_inference_stage.AutoEncoderInferenceStage` PyTorch inference stage used for Auto Encoder pipeline mode.
- PyTorch Inference Stage {py:class}`~morpheus.stages.inference.pytorch_inference_stage.PyTorchInferenceStage` PyTorch inference stage used for most pipeline modes.
- Triton Inference Stage {py:class}`~morpheus.stages.inference.triton_inference_stage.TritonInferenceStage` Inference stage which utilizes a [Triton Inference Server](https://developer.nvidia.com/nvidia-triton-inference-server).

## Input

- App Shield Source Stage {py:class}`~morpheus.stages.input.appshield_source_stage.AppShieldSourceStage` Load App Shield messages from one or more plugins into a DataFrame.
- Azure Source Stage {py:class}`~morpheus.stages.input.azure_source_stage.AzureSourceStage` Load Azure Active Directory messages.
- Cloud Trail Source Stage {py:class}`~morpheus.stages.input.cloud_trail_source_stage.CloudTrailSourceStage` Load messages from a CloudTrail directory.
- Control Message File Source Stage {py:class}`~morpheus.stages.input.control_message_file_source_stage.ControlMessageFileSourceStage` Receives control messages from different sources specified by a list of [fsspec](https://filesystem-spec.readthedocs.io/en/latest/api.html?highlight=open_files#fsspec.open_files) strings.
- Control Message Kafka Source Stage {py:class}`~morpheus.stages.input.control_message_kafka_source_stage.ControlMessageKafkaSourceStage` Load control messages from a Kafka cluster.
- Databricks Delta Lake Source Stage {py:class}`~morpheus.stages.input.databricks_deltalake_source_stage.DataBricksDeltaLakeSourceStage` Source stage used to load messages from a DeltaLake table.
- Duo Source Stage {py:class}`~morpheus.stages.input.duo_source_stage.DuoSourceStage` Load Duo Authentication messages.
- File Source Stage {py:class}`~morpheus.stages.input.file_source_stage.FileSourceStage` Load messages from a file.
- HTTP Client Source Stage {py:class}`~morpheus.stages.input.http_client_source_stage.HttpClientSourceStage` Poll a remote HTTP server for incoming data.
- HTTP Server Source Stage {py:class}`~morpheus.stages.input.http_server_source_stage.HttpServerSourceStage` Start an HTTP server and listens for incoming requests on a specified endpoint.
Expand Down Expand Up @@ -92,7 +88,5 @@ Stages are the building blocks of Morpheus pipelines. Below is a list of the mos

- Deserialize Stage {py:class}`~morpheus.stages.preprocess.deserialize_stage.DeserializeStage` Partition messages based on the `pipeline_batch_size` parameter of the pipeline's `morpheus.config.Config` object.
- Drop Null Stage {py:class}`~morpheus.stages.preprocess.drop_null_stage.DropNullStage` Drop null data entries from a DataFrame.
- Preprocess AE Stage {py:class}`~morpheus.stages.preprocess.preprocess_ae_stage.PreprocessAEStage` Prepare Autoencoder input DataFrames for inference.
- Preprocess FIL Stage {py:class}`~morpheus.stages.preprocess.preprocess_fil_stage.PreprocessFILStage` Prepare FIL input DataFrames for inference.
- Preprocess NLP Stage {py:class}`~morpheus.stages.preprocess.preprocess_nlp_stage.PreprocessNLPStage` Prepare NLP input DataFrames for inference.
- Train AE Stage {py:class}`~morpheus.stages.preprocess.train_ae_stage.TrainAEStage` Train an Autoencoder model on incoming data.