
Commit

Update index.html
pratyushmaini authored Jan 10, 2024
1 parent 5ac2f11 commit b6924fa
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions index.html
@@ -292,10 +292,10 @@ <h3 class="title is-3">Forget Quality</h3>
<p>Here's the deal with \( p \)-values: a high \( p \)-value is the test saying, "I can't really tell these two apart." That means the model is doing a great job of forgetting – the Truth Ratio distributions for the unlearned and retain models look similar. On the flip side, a low \( p \)-value is the test's way of saying, "These are definitely not the same." That's bad news for us: it means the model still retains some memory of what it was supposed to forget, indicating a privacy leak and poor unlearning.</p>
</div>
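The comparison behind that \( p \)-value can be sketched with a two-sample statistical test on the two sets of Truth Ratios. This is a minimal illustration, not the project's code: the Kolmogorov–Smirnov test and the synthetic Beta-distributed samples are assumptions made here for demonstration.

```python
import numpy as np
from scipy.stats import ks_2samp

# Synthetic Truth Ratios for illustration (NOT real model outputs):
# if unlearning worked, the unlearned model's ratios should be drawn
# from (roughly) the same distribution as the retain model's.
rng = np.random.default_rng(0)
truth_ratios_unlearned = rng.beta(2.0, 5.0, size=400)
truth_ratios_retain = rng.beta(2.0, 5.0, size=400)

# Two-sample KS test: are the two empirical distributions distinguishable?
stat, p_value = ks_2samp(truth_ratios_unlearned, truth_ratios_retain)

# High p-value -> test cannot tell them apart -> good forgetting.
# Low p-value  -> distributions differ -> residual memory, privacy leak.
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3f}")
```

If the unlearned samples were instead drawn from a shifted distribution, the same call would return a small \( p \)-value, flagging the leak.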

<h3>MAPO: Multiplicative Aggregation of Performance on Unlearning</h3>
<!-- <h3>MAPO: Multiplicative Aggregation of Performance on Unlearning</h3>
<p>To characterize an unlearning algorithm's efficacy, we aggregate these metrics using the MAPO (Multiplicative Aggregation of Performance On Unlearning) score, calculated across four datasets: forget set, retain set, real authors, and world facts.</p>
<p>Metrics are normalized to fall within [0, 1], ensuring higher values indicate better performance. This includes adjusting probabilities and ROUGE scores for the forget set and handling truth ratios differently for each dataset.</p>
<p>Metrics are normalized to fall within [0, 1], ensuring higher values indicate better performance. This includes adjusting probabilities and ROUGE scores for the forget set and handling truth ratios differently for each dataset.</p> -->
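The (now commented-out) MAPO description above says metrics are normalized to [0, 1] and aggregated multiplicatively. One natural reading of a multiplicative aggregation is a geometric mean; the exact formula is an assumption here, not taken from the source:

```python
import math

def aggregate(scores):
    """Hypothetical multiplicative aggregation: geometric mean of
    metrics already normalized to [0, 1]. A single zero score drives
    the aggregate to zero, which is the point of multiplying."""
    assert scores and all(0.0 <= s <= 1.0 for s in scores)
    return math.prod(scores) ** (1.0 / len(scores))

# e.g. one normalized score per evaluation set:
# forget set, retain set, real authors, world facts
print(aggregate([0.9, 0.8, 0.85, 0.7]))
```

Because the aggregation is multiplicative, an algorithm cannot compensate for total failure on one set (score 0) with strong performance elsewhere, unlike an arithmetic mean.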


</div>
