Update 3_development_time_threats.md
disesdi authored Oct 18, 2024
1 parent cb28b1a commit a8d0f5d
Showing 1 changed file with 1 addition and 1 deletion.
@@ -382,7 +382,7 @@ Link to standards:
Poison robust model: select a model type and creation approach to reduce sensitivity to poisoned training data.

- This control can be applied to a model that has already been training, so including models that have been obtained from an external source.
+ This control can be applied to a model that has already been trained, so including models that have been obtained from an external source.

The general principle of reducing sensitivity to poisoned training data is to make sure that the model does not memorize the specific malicious input pattern (or _backdoor trigger_). The following two examples represent different strategies, which can also complement each other in an approach called **fine pruning** (See [paper on fine-pruning](https://arxiv.org/pdf/1805.12185.pdf)):
1. Reduce memorization by removing elements of memory using **pruning**. Pruning in essence reduces the size of the model so that it does not have the capacity to trigger on backdoor examples while retaining sufficient accuracy for the intended use case. The approach removes neurons in a neural network that have been identified as non-essential for sufficient accuracy (a minimal sketch of this step follows after this list).
2. Reduce memorization by overwriting it through **fine-tuning**: continue training the model on clean data, so that weights encoding the backdoor trigger are adjusted towards correct behaviour and the malicious pattern is forgotten.
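To make the pruning strategy concrete, here is a minimal sketch of activation-based pruning in the spirit of the fine-pruning paper linked above: convolutional channels that stay dormant on clean data are zeroed out, because backdoor triggers tend to rely on exactly those neurons. The model, the choice of layer, and the `clean_loader` of trusted data are illustrative assumptions, not part of the original text.

```python
import torch
import torch.nn as nn

def prune_dormant_channels(model: nn.Module, layer: nn.Conv2d,
                           clean_loader, fraction: float = 0.2):
    """Zero out the conv channels that are least active on clean data.

    Backdoor triggers tend to excite neurons that stay dormant on clean
    inputs, so removing the least-active channels reduces the model's
    capacity to memorize the trigger pattern.
    """
    activations = []

    def hook(_module, _inputs, output):
        # Mean absolute activation per channel over batch and spatial dims.
        activations.append(output.detach().abs().mean(dim=(0, 2, 3)))

    handle = layer.register_forward_hook(hook)
    model.eval()
    with torch.no_grad():
        for inputs, _labels in clean_loader:
            model(inputs)
    handle.remove()

    # Rank channels by their average activation on clean data.
    mean_act = torch.stack(activations).mean(dim=0)
    n_prune = int(fraction * mean_act.numel())
    prune_idx = torch.argsort(mean_act)[:n_prune]

    # "Prune" by zeroing the selected channels' weights and biases.
    with torch.no_grad():
        layer.weight[prune_idx] = 0.0
        if layer.bias is not None:
            layer.bias[prune_idx] = 0.0
    return prune_idx
```

In the full fine-pruning approach, this pruning step would be followed by fine-tuning on clean data (strategy 2) to recover any accuracy lost by pruning.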
