
The attack doesn't work with the poisons stored #6

Open
Sanghyun-Hong opened this issue Sep 25, 2023 · 0 comments
Comments

@Sanghyun-Hong

I've run this code many times to reproduce the results in the paper. However, the attack does not seem to work at all with the poisons I stored from the following run (which uses the settings the paper recommends):

python sleeper_agent.py \
      --dataset CIFAR10 \
      --net ResNet18 \
      --eps 16 \
      --budget 0.01 \
      --patch_size 8 \
      --ensemble 4 \
      --name benchmark_test \
      --save benchmark \
      --benchmark_idx 0 \
      --poisonkey 2000000000 \
      --backdoor_poisoning \
      --retrain_from_init \
      --defend_features_only \
      --disable_adaptive_attack

This command stores three files: base_indices.pickle, poisons.pickle, and source.pickle. When I load them into the poisoning-benchmark written by Avi et al., the success rate of this backdoor attack is only ~0.5%. Other poisoning attacks, such as Witches' Brew and Bullseye Polytope, work as expected.
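For reference, this is roughly how I load the stored files before handing them to the benchmark (a minimal sketch; what each object contains is my assumption based on the filenames, not something documented in the repository):

import pickle

# Load the three artifacts written by sleeper_agent.py with --save benchmark.
# The contents noted in the comments are my assumptions.
with open("poisons.pickle", "rb") as f:
    poisons = pickle.load(f)        # assumed: the perturbed training images (and their labels)
with open("base_indices.pickle", "rb") as f:
    base_indices = pickle.load(f)   # assumed: indices of the poisoned samples in the CIFAR-10 train set
with open("source.pickle", "rb") as f:
    source = pickle.load(f)         # assumed: the patched source-class image(s) used at test time

print(len(poisons), len(base_indices))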

Does this attack only work in a specific setup within the source code, or is there anything specific we need to do to make these poisons compatible with the poisoning benchmark?
