This repository has been archived by the owner on Oct 8, 2024. It is now read-only.
TypeError: unsupported operand type(s) for -: 'numpy.ndarray' and 'Tensor' #24
penguinpenguin24 asked this question in Q&A (unanswered). Replies: 1 comment, 1 reply.
-
For PowerShell, you're supposed to use Activate.ps1 (under virtualenv\Scripts\) rather than activate.bat, which is why the (virtualenv) prefix never appeared. The "unsupported operand" error you're seeing is caused by a bug in v0.6.0 where only the PNDM scheduler currently works. If you stick with the PNDM scheduler, it should work fine. See this section on how to get other schedulers working, or wait until the next diffusers release (something greater than v0.7.2).
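Until a fixed release lands, the general shape of the workaround is to coerce whatever the scheduler hands back into NumPy before the ONNX pipeline does arithmetic on it. This is a minimal sketch of that idea, not the repo's actual code — `to_numpy` and the duck-typed `.numpy()` check are my own illustration:

```python
import numpy as np

def to_numpy(x):
    """Coerce framework tensors into plain numpy arrays.

    Anything exposing .numpy() (e.g. a CPU torch Tensor) is converted;
    everything else goes through np.asarray(). Keeping both operands as
    ndarrays avoids the mixed ndarray/Tensor subtraction that raises
    the TypeError below.
    """
    if hasattr(x, "numpy"):
        return x.numpy()
    return np.asarray(x)

# With both operands as ndarrays, the subtraction is well-defined:
sample = np.ones(4, dtype=np.float32)
model_output = to_numpy([0.5, 0.5, 0.5, 0.5])
residual = sample - model_output
```
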
-
Hello,

I have no idea what I'm doing when it comes to this stuff, so I apologize if I'm just doing something stupid; I looked all over for a solution before coming here. I would greatly appreciate any help!

Everything was working fine at first, but after closing PowerShell and reopening it I couldn't get the GUI to generate images. I know I'm supposed to run activate.bat before doing anything, but I no longer get the (virtualenv) prefix at the start of the line after running it, as I did when I initially set this up, and I think that breaks everything. In Command Prompt it does seem to work, and I can generate files using txt2img_onnx.py. The problem is generating images through the GUI: the URL loads fine, but every generation fails with an error. I'll post the full message below; sorry it's a bit long. I assume the issue is at the end:

"TypeError: unsupported operand type(s) for -: 'numpy.ndarray' and 'Tensor'
Keyboard interruption in main thread... closing server."
2022-11-14 16:21:47.5486863 [W:onnxruntime:, inference_session.cc:491 onnxruntime::InferenceSession::RegisterExecutionProvider] Having memory pattern enabled is not supported while using the DML Execution Provider. So disabling it for this session since it uses the DML Execution Provider.
2022-11-14 16:21:47.9062300 [W:onnxruntime:, session_state.cc:1030 onnxruntime::VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
2022-11-14 16:21:47.9125095 [W:onnxruntime:, session_state.cc:1032 onnxruntime::VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
[the same three onnxruntime warnings repeat for each of the remaining inference sessions]
0%| | 0/40 [00:04<?, ?it/s]
Traceback (most recent call last):
File "C:\Users\jaime\Desktop\stable-diffusion3\virtualenv\lib\site-packages\gradio\routes.py", line 289, in run_predict
output = await app.blocks.process_api(
File "C:\Users\jaime\Desktop\stable-diffusion3\virtualenv\lib\site-packages\gradio\blocks.py", line 982, in process_api
result = await self.call_function(fn_index, inputs, iterator)
File "C:\Users\jaime\Desktop\stable-diffusion3\virtualenv\lib\site-packages\gradio\blocks.py", line 824, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\Users\jaime\Desktop\stable-diffusion3\virtualenv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\Users\jaime\Desktop\stable-diffusion3\virtualenv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "C:\Users\jaime\Desktop\stable-diffusion3\virtualenv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
result = context.run(func, *args)
File "C:\Users\jaime\Desktop\stable-diffusion3\onnxUI.py", line 162, in generate_click
return run_diffusers(
File "C:\Users\jaime\Desktop\stable-diffusion3\onnxUI.py", line 88, in run_diffusers
batch_images = pipe(
File "C:\Users\jaime\Desktop\stable-diffusion3\virtualenv\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_onnx_stable_diffusion.py", line 180, in __call__
latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
File "C:\Users\jaime\Desktop\stable-diffusion3\virtualenv\lib\site-packages\diffusers\schedulers\scheduling_ddim.py", line 256, in step
pred_original_sample = (sample - beta_prod_t ** (0.5) * model_output) / alpha_prod_t ** (0.5)
TypeError: unsupported operand type(s) for -: 'numpy.ndarray' and 'Tensor'
Keyboard interruption in main thread... closing server.
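For context on the failing line: `sample` comes from the ONNX pipeline as a numpy array, while the DDIM scheduler produces torch Tensors, and subtracting one from the other is what blows up. The computation on that line is itself simple; here is a standalone numpy-only re-implementation (my own sketch for illustration, not diffusers code) showing that with consistent types the arithmetic goes through:

```python
import numpy as np

def ddim_pred_original(sample, model_output, alpha_prod_t):
    """The quantity computed on scheduling_ddim.py line 256, in pure numpy:

        pred_x0 = (x_t - sqrt(1 - alpha_bar_t) * eps) / sqrt(alpha_bar_t)
    """
    beta_prod_t = 1.0 - alpha_prod_t
    return (sample - beta_prod_t ** 0.5 * model_output) / alpha_prod_t ** 0.5

# When every operand is an ndarray (or a Python scalar), '-' is well-defined.
sample = np.full(4, 2.0)
noise = np.full(4, 1.0)
pred = ddim_pred_original(sample, noise, alpha_prod_t=0.25)
```

With alpha_prod_t = 1.0 (no noise accumulated) the function simply returns the sample unchanged, which is a quick sanity check on the formula.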