
Getting truth ratio always 1 #19

Open · shaswati1 opened this issue Mar 25, 2024 · 13 comments


shaswati1 commented Mar 25, 2024

I am trying to reproduce the results in Figure 8 of the paper, but I am getting a truth ratio of 1.0 on the forget set (5%) for all unlearning steps up to 30. Can you please help me with this? @zhilif and @pratyushmaini

P.S. I'm using the new eval pipeline to generate the aggregate_stat.

shaswati1 changed the title from "Getting truth ration always 1" to "Getting truth ratio always 1" on Mar 25, 2024
zhilif (Collaborator) commented Mar 25, 2024

@shaswati1 Hi, can you post a snippet of your aggregated_stat?

shaswati1 (Author) commented:

Do you mean the aggregated_stat values? Sure, please take a look at the attached file. This is for the 30th unlearning step!
aggregate_stat.csv

zhilif (Collaborator) commented Mar 25, 2024

Can you send me the command that you use for unlearning and evaluation?

shaswati1 (Author) commented:

I used the same commands that you provided in this link!

shaswati1 (Author) commented:

@zhilif just to make sure, by command did you mean the configurations used for unlearning and evaluation?

zhilif (Collaborator) commented Mar 26, 2024

> @zhilif just to make sure, by command did you mean the configurations used for unlearning and evaluation?

Right. So you used the default config and the command in the README without modification?

shaswati1 (Author) commented:

Yes, except for the precision. I'm using fp16 instead of bf16.

zhilif (Collaborator) commented Mar 29, 2024

> Yes, except for the precision. I'm using fp16 instead of bf16.

Sorry for the late response, I was busy with some other things. The command in the current README looks like this: `python aggregate_eval_stat.py retain_result=${path_to_aggregated_retain_result} ckpt_result=${path_to_aggregated_retain_result} method_name=${method_name} save_file=${save_filename}`, and I noticed a typo: ${path_to_aggregated_retain_result} shows up twice. ckpt_result should take the path ${path_to_aggregated_ckpt_result}. I apologize for this error. Can you try again and let me know if it works? Thanks!
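For reference, the corrected invocation should look like the following (a sketch based on the fix described above; the ${...} placeholders are the same ones the README uses):

```sh
# pass the checkpoint's aggregated eval stats to ckpt_result,
# not the retain model's (that was the README typo)
python aggregate_eval_stat.py \
    retain_result=${path_to_aggregated_retain_result} \
    ckpt_result=${path_to_aggregated_ckpt_result} \
    method_name=${method_name} \
    save_file=${save_filename}
```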

shaswati1 (Author) commented:

@zhilif, which JSON file should I refer to for ckpt_result? This is not defined in the README.

Carol-gutianle commented:
I also have this problem; forget quality is always 1.

Carol-gutianle commented:
[screenshot of the aggregation function]
Ok, I checked the function; perhaps ckpt_path refers to the retain version under the same settings.
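That would explain the symptom: if forget quality is computed as the p-value of a two-sample Kolmogorov-Smirnov test between the checkpoint's and the retain model's truth-ratio distributions (as the TOFU paper describes), then passing the retain file for both retain_result and ckpt_result compares a distribution against itself, and the p-value is trivially 1.0. A minimal sketch of that effect (the variable names are illustrative, not from the repo):

```python
# Minimal sketch: a two-sample KS test between identical samples always
# returns a p-value of 1.0, which would make forget quality saturate at 1.
# Assumes the pipeline compares truth-ratio distributions with scipy's
# ks_2samp, as the TOFU paper describes.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
retain_truth_ratios = rng.uniform(size=300)  # stand-in for the retain model's truth ratios
ckpt_truth_ratios = retain_truth_ratios      # same file passed twice, as with the README typo

result = ks_2samp(retain_truth_ratios, ckpt_truth_ratios)
print(result.pvalue)  # 1.0 -- identical distributions are indistinguishable
```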

jackwwy commented Aug 13, 2024

What is "path_to_aggregated_ckpt_result"?

jackwwy commented Aug 13, 2024

> [screenshot of the aggregation function] Ok, I checked the function; perhaps ckpt_path refers to the retain version under the same settings.

So, have you solved this problem?
