IP-Adapter #157
-
Yes, we need to upgrade diffusers and maybe PyTorch (2.2). I'm planning that, but we need to ensure that it will work on Windows/Linux/Mac/Android.
-
Never mind, I was using IP-Adapter with the default weight of 1.0, which is too high. Reducing the adapter weight improves how well the result follows the text prompt. IP-Adapter face models seem to work better with realistic models, though reducing the adapter weight also reduces how closely it follows the facial features. Maybe the IP-Adapter FaceID model gives better results? Gonna try it.
So, I guess IP-Adapter support in FastSD CPU will have to wait, then.
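For context, a minimal sketch of the adapter-weight tuning described above, using the diffusers (0.24+) API; the model ID, image path, and prompt are placeholders rather than the exact setup used here:

```python
import torch
from diffusers import AutoPipelineForText2Image
from diffusers.utils import load_image

# Any diffusers-format SD 1.5 checkpoint works; this ID is just an example.
pipe = AutoPipelineForText2Image.from_pretrained(
    "Lykon/dreamshaper-8", torch_dtype=torch.float32  # float32 keeps it CPU-friendly
)

# Standard SD 1.5 IP-Adapter weights from the h94/IP-Adapter repo.
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models",
                     weight_name="ip-adapter_sd15.bin")
pipe.set_ip_adapter_scale(0.5)  # default is 1.0; lower values follow the text prompt more closely

reference = load_image("reference_face.png")  # hypothetical input image
image = pipe(
    prompt="portrait photo, soft studio lighting",
    ip_adapter_image=reference,
    num_inference_steps=25,
).images[0]
image.save("out.png")
```

Sweeping the scale between roughly 0.3 and 0.7 is a quick way to see the trade-off between prompt adherence and fidelity to the reference image.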
-
Yeah, dreamshaper-8 with a low adapter weight does better at following the text prompt.
-
I still don't quite get IP-Adapter; sometimes it generates images that closely follow the input image, but sometimes it generates completely unrelated images. I guess it's more meant to be used for inpainting or with image masks in multi-adapter mode, where it seems to have the most potential, but unfortunately multi-adapter probably has requirements too high for me to even try it.
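Along those lines, a sketch of the inpainting use mentioned above, assuming diffusers 0.24+; the checkpoint ID and file paths are placeholders. Only the masked region is regenerated, guided by the reference image:

```python
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image

# Any SD 1.5 inpainting checkpoint works; this ID is a placeholder.
pipe = AutoPipelineForInpainting.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float32
)
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models",
                     weight_name="ip-adapter_sd15.bin")
pipe.set_ip_adapter_scale(0.6)

base = load_image("scene.png")                 # image to edit (hypothetical path)
mask = load_image("mask.png")                  # white = region to regenerate
reference = load_image("reference_face.png")   # identity/style source for the adapter

result = pipe(
    prompt="a person sitting in a cafe, natural light",
    image=base,
    mask_image=mask,
    ip_adapter_image=reference,
    num_inference_steps=30,
).images[0]
result.save("inpainted.png")
```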
-
Thanks, will try it. OK, that code is not much different from the code I was using; I mean, there aren't many ways you can write the code to try IP-Adapter. It's just that with ControlNet you have a clear idea of what to expect in the output image, whereas with IP-Adapter not so much.
-
Anyway, I've been running some tests and I think it might be possible to add IP-Adapter support by using the code straight from the IP-Adapter GitHub repo instead of the official diffusers implementation, so there would be no need to upgrade the diffusers requirement.
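A sketch of what that could look like using the `ip_adapter` package from the tencent-ailab/IP-Adapter repo instead of the diffusers built-in loader; the checkpoint ID and weight paths are placeholders, and the repo's code may need small tweaks (e.g. dtype handling) to run on CPU:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionPipeline
from ip_adapter import IPAdapter  # class shipped in the IP-Adapter GitHub repo

base_model = "Lykon/dreamshaper-8"             # placeholder SD 1.5 checkpoint
image_encoder_path = "models/image_encoder/"   # CLIP image encoder weights from the repo
ip_ckpt = "models/ip-adapter_sd15.bin"         # adapter weights from the repo
device = "cpu"  # the repo assumes float16/GPU in places, so CPU may need edits

pipe = StableDiffusionPipeline.from_pretrained(
    base_model, torch_dtype=torch.float32, safety_checker=None, feature_extractor=None
)
ip_model = IPAdapter(pipe, image_encoder_path, ip_ckpt, device)

reference = Image.open("reference_face.png")
images = ip_model.generate(
    pil_image=reference,
    prompt="best quality, portrait photo in a garden",
    scale=0.5,               # adapter weight, same knob as set_ip_adapter_scale()
    num_samples=1,
    num_inference_steps=30,
    seed=42,
)
images[0].save("out.png")
```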
-
I'm currently taking a look at IP-Adapter; I don't know if I'm using it wrong, but I don't get it! It's true that it follows the face of the control image somewhat closely, but it's otherwise terrible at following text prompts. Also, I always get the exact same face I use as input, only either flipped horizontally or slightly tilted, and usually badly blended in; I don't think that's much different from what you could get by pasting the face in and using plain img2img.
In any case, adding IP-Adapter support to FastSD CPU would require upgrading the diffusers requirement to version 0.24 or 0.25, which might take some work; to begin with, it seems that FastSD CPU can't even run on 0.25.
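For completeness, a small sketch of that version constraint plus the face-focused variants, going through the diffusers path; the weight names are the ones published in the h94/IP-Adapter repository, and whether every variant loads on 0.24 exactly is an assumption worth checking:

```python
import torch
import diffusers
from packaging import version
from diffusers import AutoPipelineForText2Image

# load_ip_adapter() / set_ip_adapter_scale() first shipped in diffusers 0.24,
# so older pins cannot use the built-in IP-Adapter support at all.
assert version.parse(diffusers.__version__) >= version.parse("0.24.0"), \
    "diffusers >= 0.24 is needed for built-in IP-Adapter support"

pipe = AutoPipelineForText2Image.from_pretrained(
    "Lykon/dreamshaper-8", torch_dtype=torch.float32  # placeholder SD 1.5 checkpoint
)

# Face-focused checkpoints published in h94/IP-Adapter (some variants may need a
# newer diffusers release than 0.24 -- treat that as an assumption to verify):
#   ip-adapter-plus-face_sd15.bin   (conditioned on cropped face images)
#   ip-adapter-full-face_sd15.bin
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models",
                     weight_name="ip-adapter-full-face_sd15.bin")
pipe.set_ip_adapter_scale(0.5)  # below the 1.0 default so the text prompt still has influence
```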