Suggestion: Using Target Prompt for Improved Real Image Editing Results #4
Hi, many thanks for your insightful suggestions! The results are very promising. I conducted a quick test on another image with the command:

```
python playground.py --model_path runwayml/stable-diffusion-v1-5 --image_real corgi.jpg --inv_scale 1 --scale 5 --prompt1 "a photo of a corgi" --prompt2 "a photo of a corgi in lego style" --inv_prompt tar
```

and the results are:
Hi @ljzycmd, thanks for your feedback and for conducting a quick test! Looking forward to future developments in this awesome project!
Do you mind sharing playground.py?
@lavenderrz, you can find
Hi there,
Thank you for the amazing work! I thoroughly enjoyed reading your paper. I have a suggestion for potentially improving real image editing results. I noticed that in some cases, using the target prompt for DDIM inversion seems to yield better editing results compared to using the source prompt (as shown in Figure 3). Here are two examples (input image):
Example 1:
Using source prompt:
Using target prompt:

Example 2:
Using source prompt:
Using target prompt:
I used the commands from here. The car's pose seems better aligned with the original input image; I've observed similar behavior in my own experiments. I guess this shares some similarities with the idea behind Imagic. While I'm not certain this would be universally beneficial, I think it might be worth exploring further. Once again, thank you, and congratulations on the fantastic work!
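To illustrate why the choice of inversion prompt matters, here is a minimal NumPy sketch of deterministic DDIM inversion, where the noise estimate is conditioned on a prompt embedding. This is not the repo's actual implementation: `toy_eps` is a hypothetical stand-in for the UNet noise predictor, and the latent shapes and alpha-bar schedule are made up for illustration. The point is only that conditioning on the source vs. target prompt changes the inverted latent, and hence the starting point for editing.

```python
import numpy as np

def ddim_invert_step(x_t, eps, alpha_t, alpha_next):
    """One deterministic DDIM inversion step (toward higher noise).

    x_t:        current latent
    eps:        noise predicted by the (toy) predictor, conditioned on
                the chosen inversion prompt (source or target)
    alpha_t:    cumulative alpha-bar at the current timestep
    alpha_next: cumulative alpha-bar at the next, noisier timestep
    """
    # Predicted clean latent x_0 from the current latent and noise estimate.
    x0_pred = (x_t - np.sqrt(1.0 - alpha_t) * eps) / np.sqrt(alpha_t)
    # Re-noise toward the next timestep along the deterministic DDIM path.
    return np.sqrt(alpha_next) * x0_pred + np.sqrt(1.0 - alpha_next) * eps

def toy_eps(x_t, prompt_embedding):
    # Hypothetical stand-in for the UNet noise predictor: the prompt
    # embedding shifts the prediction, which is enough to show that the
    # inversion trajectory depends on the conditioning prompt.
    return 0.1 * x_t + 0.05 * prompt_embedding

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))       # toy "latent"
src = rng.standard_normal((4, 8))     # toy source-prompt embedding
tar = rng.standard_normal((4, 8))     # toy target-prompt embedding

alphas = np.linspace(0.99, 0.5, 6)    # toy alpha-bar schedule, t = 0 .. T

x_src, x_tar = x.copy(), x.copy()
for a_t, a_next in zip(alphas[:-1], alphas[1:]):
    x_src = ddim_invert_step(x_src, toy_eps(x_src, src), a_t, a_next)
    x_tar = ddim_invert_step(x_tar, toy_eps(x_tar, tar), a_t, a_next)

# The two inversion prompts yield different inverted latents, which is
# why choosing the target prompt (--inv_prompt tar) changes the edit.
print(np.abs(x_src - x_tar).max() > 0)
```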