I guess the simple answer is to just not include the face in the mask_image's white painted area. That's such an obvious answer that I'm almost thinking you were only using an init_image and didn't use a mask_image to make it actual inpainting.
You can use the built-in Dream Mask Maker WebUI to open your init image, paint a rough blob over the face, then choose Mask mode "Everything Else", and you'll have the parameters and images ready to run that Dream prompt. Hope that's a full answer to your request.
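To make the mask convention concrete, here's a minimal sketch of what an "Everything Else" mask amounts to: white (255) pixels get repainted by the model, black (0) pixels are preserved. This is plain Python with nested lists rather than an actual image; the function name and the face box coordinates are hypothetical placeholders, not part of the notebook's API.

```python
def make_keep_face_mask(width, height, face_box):
    """Build an inpainting mask that repaints everything EXCEPT face_box.

    face_box is (left, top, right, bottom) in pixel coordinates.
    Pixels inside the box are 0 (black = keep the face as-is);
    all other pixels are 255 (white = let the model repaint them).
    """
    left, top, right, bottom = face_box
    return [
        [0 if (left <= x < right and top <= y < bottom) else 255
         for x in range(width)]
        for y in range(height)
    ]

# Hypothetical 8x8 image with the face roughly in the middle.
mask = make_keep_face_mask(8, 8, (2, 2, 6, 6))
assert mask[3][3] == 0    # inside the face box: preserved
assert mask[0][0] == 255  # outside the face box: inpainted
```

In a real run you'd save this as a grayscale image and pass it as the mask_image alongside your init_image; the Dream Mask Maker WebUI produces the same thing for you when you pick "Everything Else".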
I don't think the CLIP-guided notebook will go in the direction of img2img or inpainting support; I don't believe that's on their planned feature list, but it would be cool.
Also want to mention that as of a few days ago, they updated the Inpainting model and code and made some major improvements to the results. However, they added a new HuggingFace Inpainting model card that you have to click on and accept before you can use the feature. Another minor inconvenience, but that's out of my control.
Thanks, enjoy using my notebook. A rewritten version of it is coming very soon with a pretty WebUI. Stay tuned.
Is there any way to preserve faces when inpainting?
Will CLIP guided diffusion ever support inpainting as well, or is that a dumb question?