Why does Blip2VisionModel not receive the prompt as input? #5
Comments
Hi!
I think I might be missing something. When I run the text description attack, I hit that code. Perhaps I have something misconfigured?
Cheers,
Rylan Schaeffer
…On Tue, Feb 20, 2024 at 4:56 AM Huanran Chen wrote:
Hi!
This is because we only use the vision encoder of Blip2. Blip2 consists of a vision encoder and a text decoder, and the prompt is only used by the text decoder. Here we perform an "Image Feature Attack", in which case we need neither the text decoder nor the prompt.
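As quoted above, the Image Feature Attack only touches the vision encoder, so no prompt is involved. A minimal sketch of that idea, assuming a Hugging Face BLIP-2 checkpoint, a plain PGD loop, and an MSE feature loss (all illustrative choices, not the repository's exact code), could look like this:

```python
# Sketch of an "Image Feature Attack": perturb a source image so the BLIP-2 vision
# encoder's features match those of a target image. No prompt is used because only
# the vision encoder is attacked. Epsilon, step size, iteration count, and the MSE
# feature loss are illustrative assumptions, not the repository's settings.
import torch
from transformers import Blip2VisionModel

device = "cuda" if torch.cuda.is_available() else "cpu"
vision = Blip2VisionModel.from_pretrained("Salesforce/blip2-opt-2.7b").to(device).eval()

def encode(pixel_values):
    # The vision encoder takes only pixel values -- there is nowhere to pass a prompt.
    return vision(pixel_values=pixel_values).last_hidden_state

def image_feature_attack(src_pixels, tgt_pixels, eps=16 / 255, alpha=1 / 255, steps=100):
    """PGD in (preprocessed) pixel space; mapping back to a valid image range is omitted."""
    with torch.no_grad():
        tgt_feat = encode(tgt_pixels)
    adv = src_pixels.clone()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = torch.nn.functional.mse_loss(encode(adv), tgt_feat)
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv - alpha * grad.sign()                         # move features toward the target
            adv = src_pixels + (adv - src_pixels).clamp(-eps, eps)  # L_inf budget around the source
    return adv.detach()
```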
I'm sorry; you are right. It's my fault; it should have included some text prompts. Perhaps this is one of the reasons the "Text Description Attack" didn't perform well. However, this may not be the primary reason. I recently conducted a Text Description Attack on LLaVA and MiniGPT-4, but the adversarial examples still cannot transfer to GPT-4V or Bard. I'm confident the code is correct, since I evaluated the adversarial examples in white-box settings and the outputs from the white-box models match my target prompts exactly. I believe the main challenge lies in the transferability of the adversarial examples.
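For comparison, a text description attack does need the full model and a prompt. The sketch below assumes Hugging Face's Blip2ForConditionalGeneration and, for simplicity, scores the whole prompt-plus-target string; the checkpoint, prompt handling, and hyperparameters are illustrative assumptions, not the repository's implementation:

```python
# Sketch of a text-description attack: optimize the image so the full BLIP-2 model
# (vision encoder + Q-Former + language model) assigns high likelihood to a target
# description conditioned on a prompt. For simplicity the loss covers the whole
# prompt+target string; all settings here are illustrative assumptions.
import torch
from transformers import Blip2ForConditionalGeneration, Blip2Processor

device = "cuda" if torch.cuda.is_available() else "cpu"
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b").to(device).eval()
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")

def text_description_attack(image, prompt, target_text, eps=16 / 255, alpha=1 / 255, steps=100):
    enc = processor(images=image, text=prompt + " " + target_text, return_tensors="pt").to(device)
    clean = enc.pixel_values.clone()
    adv = clean.clone()
    for _ in range(steps):
        adv.requires_grad_(True)
        out = model(pixel_values=adv, input_ids=enc.input_ids,
                    attention_mask=enc.attention_mask, labels=enc.input_ids)
        grad = torch.autograd.grad(out.loss, adv)[0]  # out.loss is the LM loss on the text
        with torch.no_grad():
            adv = adv - alpha * grad.sign()               # lower loss => target text more likely
            adv = clean + (adv - clean).clamp(-eps, eps)  # L_inf budget (in processed pixel space)
    return adv.detach()
```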
Hi, I'm interested in your work, but I have some questions about it.
Hi~
@dongyp13 @huanranchen Hey, this is Monika. I really appreciate the work you did. I need your help: I am trying to replicate this project as my semester-long project and to improve its success rates, but I am unable to reproduce the results locally because I am running into problems installing the dependencies. I would really appreciate any help or guidance, as I need to submit this tomorrow. Thank you in advance.
Hi, how can I help you? I suggest running img_encoder_attack, as it doesn't require deploying MiniGPT-4.
As best as I can tell, the Blip2VisionModel doesn't receive the prompt as input: Attack-Bard/surrogates/Blip2.py, lines 51 to 53 in 5e9618a.
Why is this? Could someone please clarify?
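For context, the Hugging Face Blip2VisionModel forward pass accepts only pixel values (plus output flags), so there is no argument through which a prompt could be passed; the snippet below is illustrative and is not the code at the referenced lines:

```python
# Illustrative only (not the code at Blip2.py lines 51-53): the standalone vision
# encoder takes no text inputs. The prompt only enters the full
# Blip2ForConditionalGeneration model, which wraps the vision encoder, Q-Former, and LM.
import torch
from transformers import Blip2VisionModel

vision = Blip2VisionModel.from_pretrained("Salesforce/blip2-opt-2.7b").eval()
pixel_values = torch.randn(1, 3, 224, 224)   # dummy preprocessed image
features = vision(pixel_values=pixel_values).last_hidden_state
print(features.shape)                        # e.g. torch.Size([1, 257, 1408]) for the ViT-g encoder
```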