First, confirm
I have updated the extension to the latest version
What happened?
Trying the simple faceswap workflow provided.
Steps to reproduce the problem
Your workflow
Error when running the node.
Sysinfo
Windows 11, RTX 3060
Relevant console log
To see the GUI go to: http://127.0.0.1:8188
FETCH DATA from: I:\AI I\ComfyUI-studio-v63\App\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json [DONE]
got prompt
[ReActor] 10:54:06 - STATUS - Working: source face index [0, 1], target face index [0, 1]
[ReActor] 10:54:06 - STATUS - Analyzing Source Image...
2024-10-13 10:54:07.3330653 [E:onnxruntime:Default, provider_bridge_ort.cc:1351 onnxruntime::TryGetProviderInfo_CUDA] D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1131 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "I:\AI I\ComfyUI-studio-v63\App\python_embeded\Lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"
EP Error D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:636 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.
when using ['CUDAExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
2024-10-13 10:54:07.4878836 [E:onnxruntime:Default, provider_bridge_ort.cc:1351 onnxruntime::TryGetProviderInfo_CUDA] D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1131 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "I:\AI I\ComfyUI-studio-v63\App\python_embeded\Lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"
!!! Exception during processing !!! D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:636 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.
Traceback (most recent call last):
File "I:\AI I\ComfyUI-studio-v63\App\python_embeded\Lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 383, in __init__
self._create_inference_session(providers, provider_options, disabled_optimizers)
File "I:\AI I\ComfyUI-studio-v63\App\python_embeded\Lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 435, in _create_inference_session
sess.initialize_session(providers, provider_options, disabled_optimizers)
RuntimeError: D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:636 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "I:\AI I\ComfyUI-studio-v63\App\ComfyUI\execution.py", line 323, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "I:\AI I\ComfyUI-studio-v63\App\ComfyUI\execution.py", line 198, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "I:\AI I\ComfyUI-studio-v63\App\ComfyUI\execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "I:\AI I\ComfyUI-studio-v63\App\ComfyUI\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "I:\AI I\ComfyUI-studio-v63\App\ComfyUI\custom_nodes\comfyui-reactor-node\nodes.py", line 350, in execute
script.process(
File "I:\AI I\ComfyUI-studio-v63\App\ComfyUI\custom_nodes\comfyui-reactor-node\scripts\reactor_faceswap.py", line 101, in process
result = swap_face(
^^^^^^^^^^
File "I:\AI I\ComfyUI-studio-v63\App\ComfyUI\custom_nodes\comfyui-reactor-node\scripts\reactor_swapper.py", line 246, in swap_face
source_faces = analyze_faces(source_img)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "I:\AI I\ComfyUI-studio-v63\App\ComfyUI\custom_nodes\comfyui-reactor-node\scripts\reactor_swapper.py", line 151, in analyze_faces
face_analyser = getAnalysisModel(det_size)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "I:\AI I\ComfyUI-studio-v63\App\ComfyUI\custom_nodes\comfyui-reactor-node\scripts\reactor_swapper.py", line 81, in getAnalysisModel
ANALYSIS_MODEL = insightface.app.FaceAnalysis(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "I:\AI I\ComfyUI-studio-v63\App\ComfyUI\custom_nodes\comfyui-reactor-node\reactor_patcher.py", line 48, in patched_faceanalysis_init
model = model_zoo.get_model(onnx_file, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "I:\AI I\ComfyUI-studio-v63\App\python_embeded\Lib\site-packages\insightface\model_zoo\model_zoo.py", line 96, in get_model
model = router.get_model(providers=providers, provider_options=provider_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "I:\AI I\ComfyUI-studio-v63\App\ComfyUI\custom_nodes\comfyui-reactor-node\reactor_patcher.py", line 21, in patched_get_model
session = PickableInferenceSession(self.onnx_file, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "I:\AI I\ComfyUI-studio-v63\App\python_embeded\Lib\site-packages\insightface\model_zoo\model_zoo.py", line 25, in __init__
super().__init__(model_path, **kwargs)
File "I:\AI I\ComfyUI-studio-v63\App\python_embeded\Lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 394, in __init__
raise fallback_error from e
File "I:\AI I\ComfyUI-studio-v63\App\python_embeded\Lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 389, in __init__
self._create_inference_session(self._fallback_providers, None)
File "I:\AI I\ComfyUI-studio-v63\App\python_embeded\Lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 435, in _create_inference_session
sess.initialize_session(providers, provider_options, disabled_optimizers)
RuntimeError: D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:636 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.
Additional information
CUDA is verified and installed.
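A quick sanity check run from the embedded Python (a minimal sketch, assuming the bundled onnxruntime wheel with GPU support and the torch build that ships with this ComfyUI package):

import onnxruntime as ort
import torch

# Providers compiled into the installed onnxruntime wheel; listing
# CUDAExecutionProvider here does NOT guarantee its DLLs (and their
# CUDA/cuDNN dependencies) can actually be loaded at runtime
print(ort.get_available_providers())
print(ort.get_device())

# Whether PyTorch (used by ComfyUI itself) can reach the GPU,
# and which CUDA toolkit version it was built against
print(torch.cuda.is_available(), torch.version.cuda)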