-
If you're going with Linux, you should use ROCm instead of ONNX; you'll get 3-4x better performance. Automatic1111's webUI works with AMD GPUs on Linux.
-
Added negative prompt to txt2img_onnx.py. BTW, if you're going to focus on non-GPU systems, you should take a look at the fp16 version of the model. It works with CPU ONNX, but not with DirectML.
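A minimal sketch of what the negative-prompt path can look like, assuming diffusers' ONNX pipeline underneath (the model ID/revision below are placeholders, not necessarily what txt2img_onnx.py uses); `negative_prompt` is a standard parameter of `OnnxStableDiffusionPipeline`, and the execution provider is where CPU vs. DirectML is chosen:

```python
from diffusers import OnnxStableDiffusionPipeline

# CPUExecutionProvider runs on plain ONNX Runtime;
# DmlExecutionProvider would target DirectML instead.
pipe = OnnxStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder; a local converted ONNX model dir also works
    revision="onnx",
    provider="CPUExecutionProvider",
)

image = pipe(
    "a watercolor landscape",
    negative_prompt="blurry, low quality",  # text the sampler steers away from
    num_inference_steps=25,
).images[0]
image.save("out.png")
```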
-
Could you add an option to set a negative prompt on the command line (a sketch of such a flag follows below)? I would really appreciate this, since I'm running on CPU and having the browser + Gradio open ends up adding 10 more seconds per iteration. It's claimed that disabling the Gradio loading bar improves inference speed here.
I might end up trying to install this on a lightweight Linux distribution in order to generate at a higher maximum size in one go without using swap memory. This kit is awesome, by the way: the model conversion just works, it is only 18% slower than bes-dev/openvino, lower sizes are an option (finetuned miniSD), and I am able to use better sampling (DPM++ 2M) in 6-10 steps thanks to diffusers.
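A hedged sketch of what a headless command-line flag could look like; this is not the repo's actual txt2img_onnx.py interface, and the flag names and `./onnx-model` path are assumptions. It also swaps in DPM++ 2M via diffusers' `DPMSolverMultistepScheduler`, matching the 6-10 step sampling mentioned above:

```python
# Hypothetical headless CLI (not the repo's actual txt2img_onnx.py):
# the flag names and ./onnx-model path are assumptions.
import argparse

from diffusers import DPMSolverMultistepScheduler, OnnxStableDiffusionPipeline

parser = argparse.ArgumentParser(description="txt2img without Gradio or a browser")
parser.add_argument("--prompt", required=True)
parser.add_argument("--negative-prompt", default=None)
parser.add_argument("--steps", type=int, default=8)
parser.add_argument("--model", default="./onnx-model")
args = parser.parse_args()

pipe = OnnxStableDiffusionPipeline.from_pretrained(
    args.model, provider="CPUExecutionProvider"
)
# DPM++ 2M keeps quality reasonable at 6-10 steps.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

image = pipe(
    args.prompt,
    negative_prompt=args.negative_prompt,
    num_inference_steps=args.steps,
).images[0]
image.save("output.png")
```

Running this directly from a terminal avoids the Gradio/browser overhead described above entirely, rather than just disabling the loading bar.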