Merge pull request #4 from nbaltaci/nbaltaci-patch-1
Nbaltaci patch 1
devjayati authored Jan 10, 2024
2 parents e142aa0 + 05c8f65 commit ad31c6f
Showing 1 changed file with 6 additions and 8 deletions.
14 changes: 6 additions & 8 deletions README.md
@@ -17,29 +17,27 @@ Developed by the Comcast SPIDER Team</p>

## Overview

-AI/ML applications have unique security threats. Project GuardRail is a set of security and privacy requirements that AI/ML applications should meet during their design phase that serve as guardrails against these threats. These requirements help scope the threats such applications must be protected against. It consists of a baseline set required for **all** AI/ML applications, and two additional set of requirements that are specific to **continuously learning** models and **user interacting** models respectively. There are four additional questions that are specific to generative AI applications only.
+AI/ML applications have unique security threats. Project GuardRail provides a questionnaire with a set of threat modeling questions for AI/ML applications. It helps ensure that security and privacy requirements are met during the design phase, serving as guardrails against those threats. These requirements help scope the threats that AI/ML applications must be protected against. The questionnaire consists of a baseline set required for **all** AI/ML applications and two additional sets of requirements specific to **continuous learning** and **user-interacting** models. There are four additional questions that are specific to generative AI applications only.

## Structure
-The content of this library comes from a variety of frameworks, lists, and sources, both from academia and industry. We have performed several iterations to refine the library to accurately determine the scope and language of the questions. [Sources](#sources) below provides a list of all such sources that this material is derived from.
-
-For every application, security and privacy threat models are conducted as usual.
+The content of this library is derived from a variety of frameworks, lists, and sources, both from academia and industry. We have performed several iterations to refine the library to accurately determine the scope and language of the questions. The [sources](#sources) provided below offer a comprehensive list of all the materials contributing to this library.

As shown in the diagram below, the "Questionnaire for Manual Threat Modeling" defines the library. The 53 threats (and 4 additional generative AI threats) are divided into three categories as shown.

- All AI/ML applications must meet the 28 [baseline](./baseline.md) requirements.
-- If an application is continuously learning, they must meet 6 [additional](./additional-1.md) requirements apart from baseline.
-- If they EITHER train on user data OR interact with users, they must meet 19 [additional](./additional-2.md) requirements apart from baseline.
+- If an application is continuously learning, it must meet 6 [additional](./additional-1.md) requirements in addition to the baseline.
+- If an application EITHER trains on user data OR interacts with users, it must meet 19 [additional](./additional-2.md) requirements.

Generative AI questions are differentiated and put into a separate group under each category if applicable.

![Structure-Diagram-GuardRail](assets/Structure-Diagram-GuardRail.jpg)

-Each of the requirements are divided into four sub categories - data, model, artefact (output), and system/infrastructure, depending on which element of the ML application a threat is applicable to.
+Each requirement is divided into four subcategories - data, model, artefact (output), and system/infrastructure - depending on the element of the ML application to which a threat applies.

<b>Data</b> indicates all input information that the model trains on. <b>Model</b> indicates the source code of the AI/ML application. <b>Artefact</b> indicates the output of the model, including predictions if applicable. <b>System/infrastructure</b> is the underlying architecture supporting the model's functionality, such as hardware.

## Usage
-This requirement document can be used as an assessment for both AI/ML applications as well as new third-party AI vendors. After an application undergoes the usual security review process and it is determined that it is not an AI/ML-driven application, the review ends. Otherwise, the application developers can take the baseline assessment. Following this, depending on whether the underlying model fits into the two additional categories outlined above, additional assessment questions can be added. This questionnaire can then be reported to the threat modeling team for review.
+This threat modeling questionnaire can be used as an assessment for both AI/ML applications and new third-party AI vendors. If the usual security review process determines that an application is not AI/ML-driven, the review ends there. Otherwise, the application developers take the baseline assessment; depending on whether the underlying model fits either of the two additional categories outlined above, the corresponding assessment questions are added. The completed questionnaire is then reported to the threat modeling team for review.

![Process-Diagram-GuardRail](assets/Process-Diagram-GuardRail.jpg)
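The assessment flow described above can be sketched in code. This is an illustrative sketch only, not part of Project GuardRail itself; the function and parameter names are hypothetical, while the set sizes (28 baseline, 6 continuous-learning, 19 user-interacting) come from the README.

```python
# Illustrative sketch (hypothetical names) of how the questionnaire's
# requirement sets are selected for an application under review.
def select_requirement_sets(is_ai_ml: bool,
                            continuously_learning: bool,
                            trains_on_user_data: bool,
                            interacts_with_users: bool) -> list:
    """Return the requirement sets an application must be assessed against."""
    if not is_ai_ml:
        # Non-AI/ML applications finish with the usual security review only.
        return []
    sets = ["baseline (28 questions)"]
    if continuously_learning:
        sets.append("additional-1 (6 questions)")
    if trains_on_user_data or interacts_with_users:
        sets.append("additional-2 (19 questions)")
    return sets

# Example: a continuously learning model that interacts with users
# must answer all three sets before review by the threat modeling team.
print(select_requirement_sets(True, True, False, True))
```

The two conditional appends mirror the OR logic in the structure section: training on user data or interacting with users each independently triggers the 19 user-interaction questions.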
