Hi, thanks for the nice work.
I noticed that the way you calculate Forget Quality is different from TOFU.
In TOFU, they test whether the Truth Ratio distributions of the unlearned and the retrained models on the Forget Set are indistinguishable: the more indistinguishable, the better the Forget Quality.
However, in this paper, you test the Truth Ratio distributions of the unlearned model on the Forget Set and on the Retain Set, and the more distinguishable they are, the better the Forget Quality.
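To make the difference concrete, here is a minimal sketch of the two constructions as I understand them; the variable names and array sizes are placeholders rather than the actual code of either repo, and both definitions reduce to a two-sample KS test on per-question Truth Ratio values:

```python
import numpy as np
from scipy.stats import ks_2samp

# Placeholder Truth Ratio arrays; in practice these are the per-question
# values read from each model's evaluation log.
rng = np.random.default_rng(0)
tr_unlearned_forget = rng.uniform(0.3, 0.7, 400)   # unlearned model on the Forget Set
tr_retrained_forget = rng.uniform(0.3, 0.7, 400)   # retrained (retain-only) model on the Forget Set
tr_unlearned_retain = rng.uniform(0.3, 0.7, 3600)  # unlearned model on the Retain Set

# TOFU: Forget Quality = p-value of the KS test between the unlearned and the
# retrained model on the Forget Set; a LARGER p-value (indistinguishable) is better.
fq_tofu = ks_2samp(tr_unlearned_forget, tr_retrained_forget).pvalue

# This repo (as I read it): KS test between the unlearned model's Truth Ratios
# on the Forget Set and on the Retain Set; MORE distinguishable is taken as better.
fq_repo = ks_2samp(tr_unlearned_forget, tr_unlearned_retain).pvalue

print(f"TOFU-style Forget Quality (p-value): {fq_tofu:.3g}")
print(f"Repo-style forget-vs-retain p-value: {fq_repo:.3g}")
```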
Intuitively, the calculation method in TOFU seems more reasonable, doesn't it?
More importantly, when I use the TOFU calculation to evaluate the models obtained from this repository, all of the unlearned models have poor Forget Quality (i.e., the p-values are very small).
Did you observe similar results, or do you have a reasonable explanation for this?
Thank you so much.
I used the default parameters in this repo to obtain the unlearned models (llama2-7b/Tofu_forget10). With the evaluation metrics in this repo, I can reproduce results similar to those in the paper.
With the evaluation metrics from TOFU, I get the following results:
FO-GradDiff
Real Authors ROUGE: 0.8756666666666666
Real Authors Probability: 0.3370528533406933
Real Authors Truth Ratio: 0.4489442655298266
Real World ROUGE: 0.8632478632478633
Real World Probability: 0.34264623472935996
Real World Truth Ratio: 0.46739810209121113
Retain ROUGE: 0.7280395020713727
Retain Probability: 0.8915528871545129
Retain Truth Ratio: 0.4856914348122708
Forget ROUGE: 0.5733794338494536
Forget Probability: 0.7106083974911012
Forget Truth Ratio: 0.4947185571959659
Model Utility: 0.5261059306533171
Forget Quality: 2.4311282147882553e-17
KS Test PVal Forget: 2.4311282147882553e-17
KS Test Forget: 0.3566666666666667
loss_type: FO-GradDiff
SO-GradDiff
Real Authors ROUGE: 0.6396666666666666
Real Authors Probability: 0.49640407950009047
Real Authors Truth Ratio: 0.6819914971547228
Real World ROUGE: 0.863960113960114
Real World Probability: 0.48067123725124805
Real World Truth Ratio: 0.6363525765362628
Retain ROUGE: 0.4724707637989966
Retain Probability: 0.6284728291771527
Retain Truth Ratio: 0.4912353012278951
Forget ROUGE: 0.02356313497233958
Forget Probability: 4.983098299002777e-05
Forget Truth Ratio: 0.5393016081505795
Model Utility: 0.5770409673171679
Forget Quality: 3.709652809739326e-15
KS Test PVal Forget: 3.709652809739326e-15
KS Test Forget: 0.3333333333333333
loss_type: SO-GradDiff
FO_PO
Real Authors ROUGE: 0.9229999999999999
Real Authors Probability: 0.4526057700565208
Real Authors Truth Ratio: 0.5899532151431761
Real World ROUGE: 0.878917378917379
Real World Probability: 0.42424014910485935
Real World Truth Ratio: 0.5609580868128395
Retain ROUGE: 0.928529925876358
Retain Probability: 0.9108133095585154
Retain Truth Ratio: 0.4831902621090459
Forget ROUGE: 0.08464501647244944
Forget Probability: 0.767351383477948
Forget Truth Ratio: 0.5160132440176798
Model Utility: 0.6202638538477067
Forget Quality: 2.1942743021891237e-16
KS Test PVal Forget: 2.1942743021891237e-16
KS Test Forget: 0.3466666666666667
loss_type: FO_PO
SO_PO
Real Authors ROUGE: 0.925
Real Authors Probability: 0.4519823245062644
Real Authors Truth Ratio: 0.5796987400702259
Real World ROUGE: 0.8960113960113961
Real World Probability: 0.44049931507005974
Real World Truth Ratio: 0.5794212144270833
Retain ROUGE: 0.8520441557731588
Retain Probability: 0.8660661350625141
Retain Truth Ratio: 0.4740790703876069
Forget ROUGE: 0.14440216936098418
Forget Probability: 0.7876929595437442
Forget Truth Ratio: 0.5256482316628881
Model Utility: 0.6177794234804653
Forget Quality: 1.4582054786325707e-14
KS Test PVal Forget: 1.4582054786325707e-14
KS Test Forget: 0.32666666666666666
loss_type: SO_PO
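For reference, Model Utility in the TOFU evaluation appears to be the harmonic mean of the nine utility metrics listed above (ROUGE, Probability, and Truth Ratio on Real Authors, Real World, and Retain), and Forget Quality is simply the KS-test p-value (identical to "KS Test PVal Forget" above). Plugging the FO-GradDiff numbers into the sketch below reproduces the reported 0.5261; this is my reading of the aggregation, not a copy of the TOFU code:

```python
# Sketch: Model Utility as the harmonic mean of the nine utility metrics
# (ROUGE, Probability, Truth Ratio on Real Authors, Real World, and Retain).
# The values below are the FO-GradDiff numbers reported above.
utility_metrics = [
    0.8756666666666666, 0.3370528533406933, 0.4489442655298266,   # Real Authors
    0.8632478632478633, 0.34264623472935996, 0.46739810209121113, # Real World
    0.7280395020713727, 0.8915528871545129, 0.4856914348122708,   # Retain
]
model_utility = len(utility_metrics) / sum(1.0 / m for m in utility_metrics)
print(f"Model Utility: {model_utility:.4f}")  # ~0.5261
```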