
Simple FP8 GEMM is not runnable in CPU EP #22269

Open
galagam opened this issue Sep 30, 2024 · 1 comment
Labels
quantization (issues related to quantization), stale (issues that have not been addressed in a while; categorized by a bot)

Comments

galagam commented Sep 30, 2024

Describe the issue

A simple model with GEMM(DQ(Q(input0)), DQ(Q(input1))), quantizing FP32 -> FP8E4M3, fails to run on the CPU EP. It runs correctly on the CUDA EP.
An otherwise identical model quantizing FP16 -> FP8E4M3 runs on both the CPU EP and the CUDA EP.
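A minimal sketch of the failing pattern is below (this is not the attached model; tensor names, shapes, and the unit scale are illustrative). It builds two FP8E4M3FN Q/DQ pairs feeding a Gemm at opset 19, which is the structure the CPU EP rejects:

```python
# Minimal illustrative repro of GEMM(DQ(Q(A)), DQ(Q(B))) with FP8E4M3FN (opset 19).
# Shapes, names, and scale values are placeholders, not the attached model's.
import numpy as np
import onnx
from onnx import TensorProto, helper, numpy_helper

scale = numpy_helper.from_array(np.array(1.0, dtype=np.float32), name="scale")
zero = helper.make_tensor("zero", TensorProto.FLOAT8E4M3FN, [], [0])

nodes = [
    helper.make_node("QuantizeLinear", ["A", "scale", "zero"], ["A_q"]),
    helper.make_node("DequantizeLinear", ["A_q", "scale", "zero"], ["A_dq"]),
    helper.make_node("QuantizeLinear", ["B", "scale", "zero"], ["B_q"]),
    helper.make_node("DequantizeLinear", ["B_q", "scale", "zero"], ["B_dq"]),
    helper.make_node("Gemm", ["A_dq", "B_dq"], ["Y"]),
]

graph = helper.make_graph(
    nodes,
    "fp8_qdq_gemm",
    [helper.make_tensor_value_info("A", TensorProto.FLOAT, [4, 4]),
     helper.make_tensor_value_info("B", TensorProto.FLOAT, [4, 4])],
    [helper.make_tensor_value_info("Y", TensorProto.FLOAT, [4, 4])],
    initializer=[scale, zero],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 19)])
onnx.checker.check_model(model)  # the ONNX graph itself is valid; the failure is in ORT
onnx.save(model, "fp8_qdq_gemm.onnx")
```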

Reported error:
onnxruntime.capi.onnxruntime_pybind11_state.InvalidGraph: [ONNXRuntimeError] : 10 : INVALID_GRAPH : This is an invalid model. Type Error: Type 'tensor(float8e4m3fn)' of input parameter (/self_attention/proj/TRT_FP8QuantizeLinear_output_0) of operator (QGemm) in node (/self_attention/proj/Gemm) is invalid.

To reproduce

See the attached model and script: ort_bug_fp8_fp32_gemm.zip

Usage: python test_ort.py ort_bug_gemm_fp8_fp32.onnx
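The actual script is in the attachment; a hypothetical equivalent, assuming it just creates a CPU EP session and feeds random float32 inputs, would look like this (session creation is where the INVALID_GRAPH error above is raised):

```python
# Hypothetical stand-in for test_ort.py: load the model on the CPU EP and run it once.
import sys
import numpy as np
import onnxruntime as ort

model_path = sys.argv[1]
# InferenceSession construction fails with INVALID_GRAPH on the CPU EP.
sess = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])

feeds = {}
for inp in sess.get_inputs():
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]
    feeds[inp.name] = np.random.rand(*shape).astype(np.float32)

sess.run(None, feeds)
print("ran OK on CPU EP")
```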

Urgency

No response

Platform

Linux

OS Version

Ubuntu 22.04

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

1.19.2

ONNX Runtime API

Python

Architecture

X64

Execution Provider

Default CPU

Execution Provider Library Version

No response

@yihonglyu yihonglyu added the quantization issues related to quantization label Oct 5, 2024

github-actions bot commented Nov 5, 2024

This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.

@github-actions github-actions bot added the stale issues that have not been addressed in a while; categorized by a bot label Nov 5, 2024