
enable multiple prompts for T2I #125

Merged
merged 2 commits into from
Jul 17, 2024

Conversation

ad-astra-video
Collaborator

This PR enables requests to use the multiple-prompt support in SDXL and SD3, with no API updates needed. If the prompt is not split, Diffusers uses the same prompt for all text_encoders. SDXL and SD3 models support multiple prompts and can benefit from specializing the prompts to the different text_encoders if tuned properly.

The prompts are split by including a | at the point where you want the prompt split.
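A minimal sketch of that splitting behavior (the helper name `split_prompt` is hypothetical, not the function merged in this PR):

```python
def split_prompt(prompt: str, max_splits: int = 2) -> list[str]:
    """Split a user prompt on '|' into per-text-encoder prompts.

    max_splits caps the number of pieces: SDXL has two text encoders,
    SD3 has three. Any '|' separators beyond the cap are left in the
    last piece.
    """
    return [p.strip() for p in prompt.split("|", max_splits - 1)]

# split_prompt("a cat | oil painting, detailed", 2)
# -> ["a cat", "oil painting, detailed"]
```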

For SDXL, see the diffusers pipeline documentation here: https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_xl
Specifically the prompt_2 input to the __call__: https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_xl#diffusers.StableDiffusionXLPipeline.__call__

For SD3, see the diffusers pipeline documentation here: https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_3
Specifically the prompt_2 and prompt_3 inputs to the __call__: https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_3#diffusers.StableDiffusion3Pipeline.__call__
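To illustrate how the split pieces could be routed to those `__call__` arguments, here is a hedged sketch; `build_prompt_kwargs` is a hypothetical helper, not the code merged here. The `prompt`, `prompt_2`, and `prompt_3` keyword names do match the documented SDXL/SD3 pipeline signatures linked above.

```python
def build_prompt_kwargs(prompt: str, num_encoders: int) -> dict:
    """Map a '|'-separated prompt to per-encoder pipeline kwargs.

    num_encoders is 2 for SDXL (prompt, prompt_2) and 3 for SD3
    (prompt, prompt_2, prompt_3). If the user supplies fewer pieces
    than encoders, only the available keys are set and diffusers
    falls back to reusing `prompt` for the rest.
    """
    parts = [p.strip() for p in prompt.split("|", num_encoders - 1)]
    keys = ["prompt", "prompt_2", "prompt_3"][:num_encoders]
    return {key: part for key, part in zip(keys, parts)}

# Usage against an SD3 pipeline would then look roughly like:
#   pipeline(**build_prompt_kwargs("a cat | photoreal | 4k detail", 3))
```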

@ad-astra-video ad-astra-video requested a review from rickstaa as a code owner July 17, 2024 01:11
Member

@rickstaa rickstaa left a comment

@ad-astra-video thanks for implementing this so quickly. I just updated the code a bit and am now testing to see if the logic works with all models without throwing errors. After that we can merge 🚀.

@rickstaa rickstaa self-requested a review July 17, 2024 12:47
ad-astra-video and others added 2 commits July 17, 2024 14:48
This commit ensures that the prompt splitting logic implemented in the
previous commits can be reused in multiple pipelines.
@rickstaa rickstaa force-pushed the av-enable-multiple-prompts branch from e021b8a to 3b19060 Compare July 17, 2024 12:49
@rickstaa rickstaa merged commit 68b8d85 into livepeer:main Jul 17, 2024
1 check passed
@ad-astra-video ad-astra-video deleted the av-enable-multiple-prompts branch July 25, 2024 01:58
eliteprox pushed a commit to eliteprox/ai-worker that referenced this pull request Jul 26, 2024
* enable multiple prompts for T2I

* refactor(runner): make prompt splitting more general

This commit ensures that the prompt splitting logic implemented in the
previous commits can be reused in multiple pipelines.

---------

Co-authored-by: Rick Staa <[email protected]>
2 participants