(PS: install_mac.sh should install tensorflow-macos and tensorflow-metal instead of tensorflow-gpu.)
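The suggested install_mac.sh change could look like the sketch below. The exact original line is an assumption (I have not checked the script's contents); the point is only that on Apple-silicon Macs TensorFlow is shipped as the tensorflow-macos and tensorflow-metal packages:

```diff
-pip install tensorflow-gpu
+pip install tensorflow-macos tensorflow-metal
```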
Running the following command: python main.py -f /foo/bar/face.jpeg -t /foo/bar/input.mp4 --cli --apple produces many instances of the following error (possibly one for every frame of the input video):
HUSTON, WE HAD AN EXCEPTION, PROCEED WITH CAUTION, SEND RICHARD THIS: cannot unpack non-iterable NoneType object. Line 947
2024-11-15 15:24:47.055665 [E:onnxruntime:, sequential_executor.cc:516 ExecuteKernel] Non-zero status code returned while running CoreML_8352120247314976988_6 node. Name:'CoreMLExecutionProvider_CoreML_8352120247314976988_6_6' Status Message: Exception: /Users/runner/work/1/s/onnxruntime/core/providers/coreml/model/model.mm:66 InlinedVector<int64_t> onnxruntime::coreml::(anonymous namespace)::GetStaticOutputShape(gsl::span, gsl::span, const logging::Logger &) inferred_shape.size() == coreml_static_shape.size() was false. CoreML static output shape ({1,1,1,512,1}) and inferred shape ({3200,1}) have different ranks.
Exception in thread Thread-1357 (face_analyser_thread):
Traceback (most recent call last):
File "/Users/anonymous/miniconda3/envs/FastFaceSwap/lib/python3.12/threading.py", line 1075, in _bootstrap_inner
self.run()
File "/Users/anonymous/AI/FastFaceSwap/utils.py", line 217, in run
self._return = self._target(*self._args, **self._kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/anonymous/AI/FastFaceSwap/main.py", line 590, in face_analyser_thread
faces = face_analysers[sw].get(frame)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/anonymous/miniconda3/envs/FastFaceSwap/lib/python3.12/site-packages/insightface/app/face_analysis.py", line 59, in get
bboxes, kpss = self.det_model.detect(img,
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/anonymous/miniconda3/envs/FastFaceSwap/lib/python3.12/site-packages/insightface/model_zoo/retinaface.py", line 224, in detect
scores_list, bboxes_list, kpss_list = self.forward(det_img, self.det_thresh)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/anonymous/miniconda3/envs/FastFaceSwap/lib/python3.12/site-packages/insightface/model_zoo/retinaface.py", line 152, in forward
net_outs = self.session.run(self.output_names, {self.input_name : blob})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/anonymous/miniconda3/envs/FastFaceSwap/lib/python3.12/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 266, in run
return self._sess.run(output_names, input_feed, run_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Non-zero status code returned while running CoreML_8352120247314976988_6 node. Name:'CoreMLExecutionProvider_CoreML_8352120247314976988_6_6' Status Message: Exception: /Users/runner/work/1/s/onnxruntime/core/providers/coreml/model/model.mm:66 InlinedVector<int64_t> onnxruntime::coreml::(anonymous namespace)::GetStaticOutputShape(gsl::span, gsl::span, const logging::Logger &) inferred_shape.size() == coreml_static_shape.size() was false. CoreML static output shape ({1,1,1,512,1}) and inferred shape ({3200,1}) have different ranks.
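The "cannot unpack non-iterable NoneType object" message presumably follows from this traceback: when the analyser thread dies inside ONNX Runtime, its result stays None, and the caller unpacks it anyway. The helper below is a hypothetical illustration of that failure mode, not FastFaceSwap's actual code:

```python
# Illustrative sketch (unpack_faces is a hypothetical helper, not part of
# FastFaceSwap): if the face analyser thread crashed, its stored result is
# None, and "bboxes, kpss = result" raises the confusing
# "cannot unpack non-iterable NoneType object" TypeError seen above.

def unpack_faces(result):
    """Guard the thread result before unpacking it."""
    if result is None:
        # Surface the real cause instead of a bare TypeError.
        raise RuntimeError("face analyser thread produced no result; "
                           "see the ONNX Runtime/CoreML error above")
    bboxes, kpss = result
    return bboxes, kpss

print(unpack_faces(([[0, 0, 10, 10]], [[1, 2]])))
```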
At the beginning of execution, before the above error appeared, the following messages were printed to the console (they may help diagnose the error):
/Users/anonymous/miniconda3/envs/FastFaceSwap/lib/python3.12/site-packages/codeformer/basicsr/utils/realesrgan_utils.py:56: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
loadnet = torch.load(model_path, map_location=torch.device("cpu"))
2024-11-15 17:14:47.775163 [W:onnxruntime:, graph.cc:4285 CleanUnusedInitializersAndNodeArgs] Removing initializer 'buff2fs'. It is not used by any node and should be removed from the model.
2024-11-15 17:14:47.791110 [W:onnxruntime:, coreml_execution_provider.cc:115 GetCapability] CoreMLExecutionProvider::GetCapability, number of partitions supported by CoreML: 16 number of nodes in the graph: 226 number of nodes supported by CoreML: 186
2024-11-15 17:14:51.717352 [W:onnxruntime:, session_state.cc:1168 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
2024-11-15 17:14:51.717381 [W:onnxruntime:, session_state.cc:1170 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
Applied providers: ['CoreMLExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CoreMLExecutionProvider': {}}
inswapper-shape: [1, 3, 128, 128]
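Since the log shows only 186 of 226 nodes are supported by CoreML and the failure is a CoreML static-shape rank mismatch, one plausible workaround (an assumption, not verified against FastFaceSwap) is to drop the CoreML execution provider for the detection model and let ONNX Runtime fall back to the CPU. The choose_providers helper below is hypothetical, sketching how a provider list might be built:

```python
# Hypothetical sketch: build an ONNX Runtime provider list with CPU as the
# guaranteed fallback. Passing prefer_coreml=False avoids the CoreML
# GetStaticOutputShape rank mismatch, at the cost of slower detection.

def choose_providers(available, prefer_coreml=True):
    """Return a provider preference list for onnxruntime.InferenceSession."""
    providers = []
    if prefer_coreml and "CoreMLExecutionProvider" in available:
        providers.append("CoreMLExecutionProvider")
    providers.append("CPUExecutionProvider")  # always keep the CPU fallback
    return providers

print(choose_providers(
    ["CoreMLExecutionProvider", "CPUExecutionProvider"],
    prefer_coreml=False))
```

insightface's FaceAnalysis also accepts a providers argument, so forcing e.g. providers=["CPUExecutionProvider"] when it is constructed may be enough to test whether CoreML is the culprit.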
Environment:
Hardware: macbook M3 air 24G RAM
OS: macOS 14.3
miniconda w/ python 3.12