Ability to load local models #39
Hi, thanks for making this project available!
I was wondering if it's possible to point directly to local models, instead of downloading from a URL or the HF Hub?
Hey! Sure, thanks. In theory this should be pretty easy, but I've never tried it personally 😅 So, I think it should be this simple:
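A minimal sketch of the idea, assuming the container respects a `MODEL_ID` environment variable that can point at a mounted filesystem path; the image name, port, and paths here are illustrative, not confirmed by this thread:

```bash
# Mount a local diffusers checkpoint into the container and point
# MODEL_ID at that path instead of a Hugging Face model name.
# (Image name, port, and env var semantics are assumptions; check
# the project README for the exact ones.)
docker run --gpus all -p 8000:8000 \
  -v /path/on/host/stable-diffusion-v1-4:/models/stable-diffusion-v1-4 \
  -e MODEL_ID="/models/stable-diffusion-v1-4" \
  gadicc/diffusers-api
```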
That's assuming you only want one model loaded per run. If you want to be able to switch models at runtime, you can instead enable RUNTIME_DOWNLOADS and pass the model to use with each request, as sketched below. Hope that helps! Let me know either way.
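A hypothetical request along those lines; whether the key is `MODEL_ID` (or, say, `MODEL_URL`) in `callInputs`, and the endpoint itself, are assumptions rather than anything confirmed in this thread:

```bash
# Per-request model selection (assumes RUNTIME_DOWNLOADS is enabled
# and that callInputs accepts a model identifier per call).
curl -s -X POST http://localhost:8000/ \
  -H "Content-Type: application/json" \
  -d '{
    "callInputs": { "MODEL_ID": "/models/stable-diffusion-v1-4" },
    "modelInputs": { "prompt": "an astronaut riding a horse" }
  }'
```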
Thanks for the tips @gadicc! I tried this using https://huggingface.co/CompVis/stable-diffusion-v1-4 in a volume mounted to the container. I think the model loads fine, but I'm getting an error during inference. Any ideas what I'm doing wrong?
I get this error:
And here are the pod logs:
Hey @shimizust! Looks like a bug... maybe because upstream diffusers removed a default, otherwise I'm not sure why we never saw this before. Let me first explain the most relevant lines of the error, and then the fix. You don't need to know or understand any of this, so feel free to skip it if it's not of interest.

The failing line computes the current inference step modulo `callback_steps`, and `callback_steps` ends up unset (`None`), so the modulo fails. How we get there is a bit more complicated: in docker-diffusers-api we automatically set a `callback` on the pipeline (to report progress back), but we leave `callback_steps` to diffusers' default, which apparently no longer exists.

So, the workaround (until I push a proper fix) is to provide a `callback_steps` yourself:

```json
{
  "modelInputs": {
    // ...
    "callback_steps": 20
  },
  // ...
}
```

This just controls how often we report back the current progress via webhook; if it's irrelevant for your application, just use a number higher than your `num_inference_steps` (there's a full example request sketched after this comment). Two other things I noticed (unrelated):
Good luck!
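For completeness, a full request with the workaround applied might look like the following sketch; only `callback_steps` living in `modelInputs` is confirmed by the thread, while the endpoint and the other fields are assumptions:

```bash
# callback_steps sits in modelInputs alongside the usual generation
# parameters; setting it above num_inference_steps (100 > 50 here)
# effectively disables progress webhooks.
curl -s -X POST http://localhost:8000/ \
  -H "Content-Type: application/json" \
  -d '{
    "modelInputs": {
      "prompt": "an astronaut riding a horse",
      "num_inference_steps": 50,
      "callback_steps": 100
    }
  }'
```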
Thanks @gadicc! Specifying "callback_steps" as an int in "modelInputs" works, and I'm able to generate images from my local model now. I guess setting RUNTIME_DOWNLOADS isn't strictly necessary then.