
Error Code 9: Internal Error (node_of_bevpool_out: could not find any supported formats consistent with input/output data types) #5

lDarryll opened this issue Oct 9, 2023 · 1 comment

lDarryll commented Oct 9, 2023

log:

 ----> debug <---- call [Bevpoolv2Trt2IPluginV2DynamicExt.cpp:225][getOutputDimensions]
 ----> debug <---- call [Bevpoolv2Trt2IPluginV2DynamicExt.cpp:214][clone]
 ----> debug <---- call [Bevpoolv2Trt2IPluginV2DynamicExt.cpp:82][Bevpoolv2Trt2IPluginV2DynamicExt]
 ----> debug <---- call [Bevpoolv2Trt2IPluginV2DynamicExt.cpp:174][setPluginNamespace]
 ----> debug <---- call [Bevpoolv2Trt2IPluginV2DynamicExt.cpp:225][getOutputDimensions]
[10/09/2023-18:07:26] [V] [TRT] Applying generic optimizations to the graph for inference.
[10/09/2023-18:07:26] [V] [TRT] Original: 6 layers
[10/09/2023-18:07:26] [V] [TRT] After dead-layer removal: 6 layers
 ----> debug <---- call [Bevpoolv2Trt2IPluginV2DynamicExt.cpp:103][getPluginType]
[10/09/2023-18:07:26] [V] [TRT] After Myelin optimization: 6 layers
[10/09/2023-18:07:26] [V] [TRT] Applying ScaleNodes fusions.
[10/09/2023-18:07:26] [V] [TRT] After scale fusion: 6 layers
[10/09/2023-18:07:26] [V] [TRT] After dupe layer removal: 6 layers
[10/09/2023-18:07:26] [V] [TRT] After final dead-layer removal: 6 layers
[10/09/2023-18:07:26] [V] [TRT] After tensor merging: 6 layers
[10/09/2023-18:07:26] [V] [TRT] After vertical fusions: 6 layers
[10/09/2023-18:07:26] [V] [TRT] After dupe layer removal: 6 layers
[10/09/2023-18:07:26] [V] [TRT] After final dead-layer removal: 6 layers
[10/09/2023-18:07:26] [V] [TRT] After tensor merging: 6 layers
[10/09/2023-18:07:26] [V] [TRT] After slice removal: 6 layers
[10/09/2023-18:07:26] [V] [TRT] After concat removal: 6 layers
[10/09/2023-18:07:26] [V] [TRT] Trying to split Reshape and strided tensor
[10/09/2023-18:07:26] [V] [TRT] Graph construction and optimization completed in 0.00248367 seconds.
[10/09/2023-18:07:26] [V] [TRT] Using cublas as a tactic source
[10/09/2023-18:07:26] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +115, GPU +46, now: CPU 293, GPU 275 (MiB)
[10/09/2023-18:07:26] [V] [TRT] Using cuDNN as a tactic source
[10/09/2023-18:07:26] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +95, GPU +40, now: CPU 388, GPU 315 (MiB)
[10/09/2023-18:07:26] [I] [TRT] Local timing cache in use. Profiling results in this builder pass will not be stored.
[10/09/2023-18:07:26] [V] [TRT] Constructing optimization profile number 0 [1/1].
 ----> debug <---- call [Bevpoolv2Trt2IPluginV2DynamicExt.cpp:214][clone]
 ----> debug <---- call [Bevpoolv2Trt2IPluginV2DynamicExt.cpp:82][Bevpoolv2Trt2IPluginV2DynamicExt]
 ----> debug <---- call [Bevpoolv2Trt2IPluginV2DynamicExt.cpp:174][setPluginNamespace]
 ----> debug <---- call [Bevpoolv2Trt2IPluginV2DynamicExt.cpp:244][supportsFormatCombination]
 ----> debug <---- call [Bevpoolv2Trt2IPluginV2DynamicExt.cpp:244][supportsFormatCombination]
 ----> debug <---- call [Bevpoolv2Trt2IPluginV2DynamicExt.cpp:244][supportsFormatCombination]
 ----> debug <---- call [Bevpoolv2Trt2IPluginV2DynamicExt.cpp:244][supportsFormatCombination]
 ----> debug <---- call [Bevpoolv2Trt2IPluginV2DynamicExt.cpp:244][supportsFormatCombination]
 ----> debug <---- call [Bevpoolv2Trt2IPluginV2DynamicExt.cpp:167][destroy]
 ----> debug <---- call [Bevpoolv2Trt2IPluginV2DynamicExt.cpp:167][destroy]
[10/09/2023-18:07:26] [E] Error[9]: [pluginV2Builder.cpp::reportPluginError::23] Error Code 9: Internal Error (node_of_bevpool_out: could not find any supported formats consistent with input/output data types)
[10/09/2023-18:07:26] [E] Error[2]: [builder.cpp::buildSerializedNetwork::636] Error Code 2: Internal Error (Assertion engine != nullptr failed. )
[10/09/2023-18:07:26] [E] Engine could not be created from network
[10/09/2023-18:07:26] [E] Building engine failed
[10/09/2023-18:07:26] [E] Failed to create engine from model or file.
[10/09/2023-18:07:26] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v8401] # /Data2/zengsheng/TensorRT-8.4.1.5/bin/trtexec --onnx=bevpoolv2.onnx --plugins=/Data1/zengsheng/works/dl_works/tensorrt_plugin_generator/bevpoolv2_plugin_codes/Bevpoolv2Trt2IPluginV2DynamicExt/libBevpoolv2Trt2IPluginV2DynamicExt.so --verbose
 ----> debug <---- call [Bevpoolv2Trt2IPluginV2DynamicExt.cpp:167][destroy]

onnx:
bevpoolv2.zip

yml:

BEVPoolV2TRT2:
  attributes:
    out_height:
      datatype: int32
    out_width:
      datatype: int32
  inputs:
    tpg_input_0:
      shape: 6x59x16x44
    tpg_input_1:
      shape: 6x16x44x64
    tpg_input_2:
      shape: 179324
    tpg_input_3:
      shape: 179324
    tpg_input_4:
      shape: 179324
    tpg_input_5:
      shape: 11431
    tpg_input_6:
      shape: 11431
  outputs:
    tpg_output_0:
      shape: 1x128x128x64
  plugin_type: IPluginV2DynamicExt
  support_format_combination:
  - "float32+float32+float32+float32+float32+float32+float32+float32"
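One plausible cause of the Error Code 9 (this is a guess, not confirmed by the log): BEVPoolV2's rank/interval inputs (`tpg_input_2` through `tpg_input_6`) are typically exported as int32 in the ONNX graph, but the generated plugin only advertises `float32` for every position, so TensorRT cannot find any format combination consistent with the actual input data types. The sketch below illustrates the kind of per-position type check `supportsFormatCombination` would need. All types here are minimal stand-ins for the real `nvinfer1` ones, and the "inputs 2..6 are int32" assumption must be verified against the actual ONNX model.

```cpp
#include <cassert>

// Hypothetical stand-ins for nvinfer1::DataType, nvinfer1::TensorFormat and
// nvinfer1::PluginTensorDesc (from NvInfer.h), for illustration only.
enum class DataType { kFLOAT, kINT32 };
enum class TensorFormat { kLINEAR };
struct PluginTensorDesc {
    DataType type;
    TensorFormat format;
};

// Sketch of a supportsFormatCombination() for a BEVPoolV2-style op, assuming:
// positions 0-1 are float feature/depth tensors, positions 2..nbInputs-1 are
// int32 index tensors (ranks/intervals), and the output is float32 linear.
// If the ONNX index inputs really are int32 while the plugin only reports
// float32, TensorRT rejects every combination and fails with Error Code 9.
bool supportsFormatCombination(int pos, PluginTensorDesc const* inOut,
                               int nbInputs, int nbOutputs) {
    assert(pos < nbInputs + nbOutputs);
    if (inOut[pos].format != TensorFormat::kLINEAR) {
        return false;  // only linear layout supported in this sketch
    }
    bool isIndexInput = (pos >= 2 && pos < nbInputs);  // rank/interval tensors
    return isIndexInput ? inOut[pos].type == DataType::kINT32
                        : inOut[pos].type == DataType::kFLOAT;
}
```

If this diagnosis is right, the corresponding fix on the YAML side would be to declare the matching types in `support_format_combination` (e.g. int32 for the five index inputs) so the generated plugin advertises them, rather than float32 for all eight positions.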

Why is this error reported? Could the author help answer this question? Thank you!

zerollzeng (Collaborator) commented

Have you solved it? Sorry, I missed the issue. If you can provide a reproduction, I can help take a look.
