inference_session.cc Exception during initialization: invalid unordered_map<K, T> key #18885
Labels: ep:DML, model:transformer, platform:windows
Describe the issue
I compiled ONNX Runtime (the 'main' branch) on a Lenovo X13s (an ARM64 Windows 11 device), building with the '--use_dml' option so that the resulting package is 'onnxruntime-directml'. The build command was as below:
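(A sketch of the command, assuming the standard build.bat options; the '--config' and '--skip_tests' values are assumptions, but '--use_dml' was included.)

```
.\build.bat --config Release --build_wheel --parallel --use_dml --skip_tests
```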
When I load the 'text_encoder' model with the Python code below, I get an error. I am using the model from 'https://huggingface.co/tlwu/stable-diffusion-v1-5/tree/fp16', which is mentioned in this article: 'https://medium.com/microsoftazure/accelerating-stable-diffusion-inference-with-onnx-runtime-203bd7728540'.
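(A minimal sketch of the loading code, assuming the standard onnxruntime Python API; the model path is illustrative.)

```python
import onnxruntime as ort

# Create a session for the text encoder on the DirectML EP,
# falling back to CPU. The path below is illustrative; it points
# at the fp16 model downloaded from the Hugging Face repo above.
session = ort.InferenceSession(
    "stable-diffusion-v1-5/text_encoder/model.onnx",
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],
)
```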
Error log:
The 'unet' and 'vae_decoder' models load successfully.
By adding logging to debug this issue, I found the failure is in 'GraphDesc BuildGraphDesc@GraphDescBuilder.cpp'; it crashes at the line 'const auto& outputNodeAndIndex = nameToNodeAndIndexMap.at(graphOutput->Name());'. (On MSVC, 'unordered_map::at' throws 'std::out_of_range' with the message 'invalid unordered_map<K, T> key' when the key is missing, which matches the exception in the issue title.)
Log:
Here 'graphOutput->Name()' is '/text_model/Gather_3_output_0_CUDAExecutionProvider'. My device doesn't support CUDA, and my code never registers 'CUDAExecutionProvider', yet this output name refers to it; the suffix suggests the ONNX file was saved after offline graph optimization with the CUDA EP registered.
Does this mean the model has a problem running with 'DmlExecutionProvider' or 'CPUExecutionProvider'?
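(As a quick sanity check, not part of the original report: the providers compiled into the wheel can be listed; a DirectML build should report 'DmlExecutionProvider' and 'CPUExecutionProvider' but not 'CUDAExecutionProvider'.)

```python
import onnxruntime as ort

# On an onnxruntime-directml build this should print something like
# ['DmlExecutionProvider', 'CPUExecutionProvider'], with no CUDA entry.
print(ort.get_available_providers())
```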
And by adding a log in the code below, we can see this name was never added to the 'nameToNodeAndIndexMap' variable:
To reproduce
Compile onnxruntime as above and run the loading code with ARM64 Python 3.11.5.
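(A sketch of the steps; the wheel filename depends on the build, and 'load_text_encoder.py' is a hypothetical script containing the loading code above.)

```
pip install build\Windows\Release\Release\dist\onnxruntime_directml-1.17.0-cp311-cp311-win_arm64.whl
python load_text_encoder.py
```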
Urgency
No response
Platform
Windows
OS Version
22H2
ONNX Runtime Installation
Built from Source
ONNX Runtime Version or Commit ID
1.17.0 (main branch)
ONNX Runtime API
Python
Architecture
ARM64
Execution Provider
DirectML
Execution Provider Library Version
1.17.0