Getting truth ratio always 1 #19
Comments
@shaswati1 Hi, can you post a snippet of your aggregated_stat?
Do you mean the aggregated_stat values? Sure, please take a look at the attached file. This is for the 30th unlearning step!
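For reference, here is a minimal sketch of how such an aggregated-stat file can be inspected for suspicious truth-ratio values, assuming it is saved as a JSON dictionary; the file name and key matching below are hypothetical and may not match the eval pipeline's actual output.

```python
# Minimal sketch: load an aggregated-stat file and print any entry whose
# key mentions "truth", so a metric stuck at exactly 1.0 is easy to spot.
# The file name and the assumption of a flat JSON dictionary are
# hypothetical; adjust to whatever the eval pipeline actually writes.
import json

with open("aggregate_stat.json") as f:  # hypothetical path
    stats = json.load(f)

for key, value in stats.items():
    if "truth" in str(key).lower():
        print(f"{key}: {value}")
```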
Can you send me the command that you use for unlearning and evaluation?
I used the same commands that you provided in this link!
@zhilif just to make sure, by "command" did you mean the configurations used for unlearning and evaluation?
Right. So you used the default config and the command in the README without modification?
Yes, except for the precision. I'm using fp16 instead of bf16.
Sorry for the late response; I was busy with some other stuff. The command in the current README looks like this
@zhilif, which json file should I refer to for
I also have this problem, and the forget quality is always 1.
What is "path_to_aggregated_ckpt_result"?
I am trying to reproduce the results in Figure 8 of the paper and am getting a truth ratio of 1.0 for the forget set (5%) at every unlearning step up to 30. Can you please help me with this? @zhilif and @pratyushmaini
P.S. I'm using the new eval pipeline to generate the aggregate_stat.
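For context on what is being reproduced, below is a rough sketch of a truth-ratio style computation as I understand the definition in the TOFU paper: the average length-normalized probability of the perturbed (wrong) answers divided by the length-normalized probability of the paraphrased (correct) answer. This is only an illustration, not the repository's implementation; the function names and example numbers are made up.

```python
# Illustration of a truth-ratio style quantity (not the repo's code):
# average per-token probability of the perturbed answers divided by the
# per-token probability of the paraphrased answer. Inputs are average
# per-token negative log-likelihoods, one common way eval scripts report them.
import math

def per_token_prob(avg_nll: float) -> float:
    """Turn an average per-token NLL into an average per-token probability."""
    return math.exp(-avg_nll)

def truth_ratio(paraphrased_nll: float, perturbed_nlls: list[float]) -> float:
    perturbed = sum(per_token_prob(x) for x in perturbed_nlls) / len(perturbed_nlls)
    return perturbed / per_token_prob(paraphrased_nll)

# If the model finds wrong and correct answers about equally likely, the
# ratio sits near 1; values pinned at exactly 1.0 for every checkpoint
# are worth double-checking against the raw per-example eval outputs.
print(truth_ratio(paraphrased_nll=2.0, perturbed_nlls=[2.3, 2.1, 2.4]))
```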