
Two questions. #7

Open
muse1418 opened this issue Jun 27, 2024 · 0 comments

Comments

@muse1418

Thank you for your excellent work! I have two questions:

  1. The paper mentions "If we only minimize the log-likelihood of predicting a single ground-truth description, the model can also output other correct descriptions given the adversarial example, making the attack ineffective." Is there any experimental support for this insight?
  2. Regarding the evaluation metrics, does this codebase provide code for calculating the attack success rate?
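For reference, a minimal sketch of how attack success rate (ASR) is often computed for targeted caption attacks: count the fraction of adversarial examples whose generated caption matches the attacker's target. The function name, inputs, and the substring-matching criterion below are illustrative assumptions, not this codebase's actual API.

```python
# Hypothetical sketch: attack success rate for a targeted captioning attack.
# "generated" are captions the model emits for adversarial images;
# "targets" are the attacker's intended descriptions.
# The matching criterion here (case-insensitive substring) is an assumption;
# real evaluations may use exact match, CLIP similarity, or an LLM judge.

def attack_success_rate(generated, targets,
                        match=lambda g, t: t.lower() in g.lower()):
    """Fraction of adversarial examples whose caption matches the target."""
    assert len(generated) == len(targets)
    hits = sum(1 for g, t in zip(generated, targets) if match(g, t))
    return hits / len(generated)

# Example: 2 of 3 adversarial images induce the target description.
gen = ["a photo of a dog on grass", "a red car parked", "a cat on a sofa"]
tgt = ["a dog", "a bicycle", "a cat"]
print(attack_success_rate(gen, tgt))  # → 0.6666666666666666
```

The matching predicate is passed in as a parameter so a stricter or softer criterion can be swapped in without changing the counting logic.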