The results of Mean detector are consistent #7

Open
wrx1990 opened this issue Nov 7, 2024 · 2 comments

wrx1990 commented Nov 7, 2024

In synthid_text_huggingface_integration.ipynb, in the Option 1: Mean detector part, I got the sample results below. From the results, it cannot be distinguished whether there is a watermark or not.

Mean scores for watermarked responses:  [0.5001013  0.49999997 0.49628338 0.49961364]
Mean scores for unwatermarked responses:  [0.5001013  0.49999997 0.49628338 0.49961364]
Weighted Mean scores for watermarked responses:  [0.49701414 0.5016432  0.4982002  0.5014231 ]
Weighted Mean scores for unwatermarked responses:  [0.49701414 0.5016432  0.4982002  0.5014231 ]

Whether or not I add a watermark to the text, with the same random-seed conditions but different models, the output is the same. I use synthid_mixin.SynthIDGemmaForCausalLM and transformers.GemmaForCausalLM; as I understand it, the results of the two models should not be identical.

SynthIDGemmaForCausalLM

# Initialize a SynthID Text-enabled model.
model = synthid_mixin.SynthIDGemmaForCausalLM.from_pretrained(
    MODEL_NAME,
    device_map='auto',
    torch_dtype=torch.bfloat16,
)
# Prepare your inputs in the usual way.
inputs = tokenizer(
    INPUTS,
    return_tensors='pt',
    padding=True,
).to(DEVICE)
# Generate watermarked text.
outputs = model.generate(
    **inputs,
    do_sample=True,
    max_length=1024,
    temperature=TEMPERATURE,
    top_k=TOP_K,
    top_p=TOP_P,
)

GemmaForCausalLM

# Initialize a standard tokenizer from Transformers.
tokenizer = transformers.AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"
# Initialize a GemmaForCausalLM model.
model = transformers.GemmaForCausalLM.from_pretrained(
    MODEL_NAME,
    device_map='auto',
    torch_dtype=torch.bfloat16,
)

inputs = tokenizer(
    INPUTS,
    return_tensors='pt',
    padding=True,
).to(DEVICE)
outputs = model.generate(
    **inputs,
    do_sample=True,
    max_length=1024,
    temperature=TEMPERATURE,
    top_k=TOP_K,
    top_p=TOP_P,
)

My understanding is that after adding the watermark the output should be at least slightly different, but judging from the results the two are currently completely identical. Can you provide a sample of the notebook's result set? Thank you.
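
A minimal way to check what I mean, assuming outputs_wm and outputs_uw hold the generate() results from the two code blocks above and that both models share the same tokenizer (the variable names are mine, not the notebook's):

# Compare the two generations directly; with an active watermark the sampled
# continuations should diverge somewhere, so fully identical text for every
# prompt suggests the watermark is not being applied at all.
wm_texts = tokenizer.batch_decode(outputs_wm, skip_special_tokens=True)
uw_texts = tokenizer.batch_decode(outputs_uw, skip_special_tokens=True)
for i, (wm, uw) in enumerate(zip(wm_texts, uw_texts)):
    print(f"prompt {i}: identical={wm == uw}")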

sumedhghaisas2 (Collaborator) commented

@wrx1990 A couple of questions: are you using this model

model = synthid_mixin.SynthIDGemmaForCausalLM.from_pretrained(
    MODEL_NAME,
    device_map='auto',
    torch_dtype=torch.bfloat16,
)

as the watermarked model? The watermarked model needs to be set up differently, which you can walk through in the colab implementation.
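
For reference, in the upstream Hugging Face integration the watermark is only applied when a watermarking config is passed to generate(); a plain generate() call produces unwatermarked text. A minimal sketch of that route, assuming a recent transformers release that ships SynthID Text support (the model name and key values are placeholders, and this is not necessarily how the colab's synthid_mixin wires it up):

import torch
import transformers
from transformers import SynthIDTextWatermarkingConfig

MODEL_NAME = 'google/gemma-2b-it'  # placeholder model name

tokenizer = transformers.AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = 'left'
model = transformers.AutoModelForCausalLM.from_pretrained(
    MODEL_NAME,
    device_map='auto',
    torch_dtype=torch.bfloat16,
)

# The watermark is driven entirely by this config; the keys below are
# illustrative values, not the notebook's actual configuration.
watermarking_config = SynthIDTextWatermarkingConfig(
    keys=[654, 400, 836, 123, 340, 443, 597, 160, 57, 29],
    ngram_len=5,
)

inputs = tokenizer(
    ['Tell me a short story.'],
    return_tensors='pt',
    padding=True,
).to(model.device)

# Watermarked generation: pass the config to generate().
watermarked = model.generate(
    **inputs,
    do_sample=True,
    max_length=1024,
    watermarking_config=watermarking_config,
)
# Unwatermarked baseline: the same call without the config.
unwatermarked = model.generate(**inputs, do_sample=True, max_length=1024)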


wrx1990 commented Nov 12, 2024

@sumedhghaisas2
First, I ran the colab code in my local environment, and the results in the Mean detector part are exactly the same. In theory, the results with and without the watermark should be different.

Mean scores for watermarked responses:  [0.5001013  0.49999997 0.49628338 0.49961364]
Mean scores for unwatermarked responses:  [0.5001013  0.49999997 0.49628338 0.49961364]
Weighted Mean scores for watermarked responses:  [0.49701414 0.5016432  0.4982002  0.5014231 ]
Weighted Mean scores for unwatermarked responses:  [0.49701414 0.5016432  0.4982002  0.5014231 ]

Second, in the Generate watermarked output section the output is identical whether enable_watermarking is true or false; I can't see any difference in the results with the watermark.

Can you tell me how to run the program so that the output differs with and without the watermark?
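
For context, what I would like to see is something like the following: fix the seed, generate once with the watermark enabled and once without, and get different token sequences. A sketch assuming the transformers watermarking_config route from the comment above (model, inputs and watermarking_config as defined there), not the colab's enable_watermarking flag:

import torch

def generate_with_seed(model, inputs, seed, **gen_kwargs):
    # Re-seed before each call so both runs share the same random draws;
    # any divergence then has to come from the watermarking logits processor.
    torch.manual_seed(seed)
    return model.generate(**inputs, do_sample=True, **gen_kwargs)

out_wm = generate_with_seed(model, inputs, seed=0, max_length=1024,
                            watermarking_config=watermarking_config)
out_uw = generate_with_seed(model, inputs, seed=0, max_length=1024)

# Expected: False once the watermark is active (torch.equal also returns
# False when the two outputs have different lengths).
print(torch.equal(out_wm, out_uw))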
