ORTModelForCausalLM inference fails (after converting transformer to ONNX) #1678

Open
ingo-m opened this issue Feb 2, 2024 · 4 comments
Labels: bug (Something isn't working), onnxruntime (Related to ONNX Runtime)

Comments

ingo-m commented Feb 2, 2024

System Info

The bug described below occurs both locally on my system with the following specs, and on Google Colab (see below for a reproducible example):

- System: Ubuntu 22.04.3 LTS
- Kernel: 6.5.0-15-generic
- NVIDIA Driver Version: 525.147.05
- CUDA Version: 12.0
- Python: 3.10.13
- torch==2.2.0
- transformers==4.37.1
- onnxruntime-gpu==1.17.0
- optimum[onnxruntime-gpu]==1.16.2

Who can help?

@michaelbenayoun (error happens with a transformer model converted to ONNX)
@JingyaHuang (error seems to be related to ONNX runtime)

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction (minimal, reproducible, runnable)

The bug is described below; here is a reproducible example:
https://colab.research.google.com/drive/1QZ4_vttj-r5D3fwff49KZ0gzqwB5BRuM?usp=sharing
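
In essence, the flow in the notebook is: export the checkpoint to ONNX with optimum, then run generation with the ORT model (the exact inference code is quoted further down in this thread). Below is a minimal sketch of the export step only, assuming default export settings; the output directory name is illustrative.

from optimum.onnxruntime import ORTModelForCausalLM

# Export the PyTorch checkpoint to ONNX and save it for later inference.
# (Sketch only; the output directory name is illustrative.)
ort_model = ORTModelForCausalLM.from_pretrained("bigscience/bloomz-560m", export=True)
ort_model.save_pretrained("./bloomz-560m-onnx")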

Expected behavior

I am trying to convert a transformer model ("bigscience/bloomz-560m") to ONNX format, and then perform inference with the ONNX model.

I was previously able to do this with the following library versions:

torch==2.0.1
transformers==4.30.2
onnxruntime-gpu==1.15.1
optimum[onnxruntime-gpu]==1.9.1

However, after upgrading to the latest versions, performing inference with the ONNX model fails. These are the versions I upgraded to:

torch==2.2.0
transformers==4.37.1
onnxruntime-gpu==1.17.0
optimum[onnxruntime-gpu]==1.16.2

Now, when trying to perform inference, I get this error:

RuntimeError: Error when binding input: There's no data transfer registered for copying tensors from Device:[DeviceType:1 MemoryType:0 DeviceId:0] to Device:[DeviceType:0 MemoryType:0 DeviceId:0]

When running locally, I additionally get this message in the error traceback (I don't get this on Colab):

Failed to load library libonnxruntime_providers_cuda.so with error: libcublasLt.so.11: cannot open shared object file: No such file or directory

The weird thing is that (when running locally) the respective virtual env does actually have libcublasLt.so.11 (in my case at ~/miniconda3/envs/py-onnx/lib/python3.10/site-packages/nvidia/cublas/lib):

.
├── __init__.py
├── libcublasLt.so.11
├── libcublasLt.so.12
├── libcublas.so.11
├── libcublas.so.12
├── libnvblas.so.11
└── libnvblas.so.12

So the CUDA library cannot be found, even though it is there? And why does it look for libcublasLt.so.11 (and not libcublasLt.so.12)? 🤔

According to this issue, onnxruntime 1.17.0 does support CUDA 12. My CUDA version is 12.0 (which I didn't change).
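
A quick way to check whether these libraries are actually loadable from the Python process (a hedged diagnostic sketch, not part of the original report):

import ctypes

# Try to dlopen the cuBLAS libraries the CUDA execution provider asks for.
# If the directory listed above is not on the dynamic linker's search path,
# both loads fail even though the files exist on disk.
for lib in ("libcublasLt.so.11", "libcublasLt.so.12"):
    try:
        ctypes.CDLL(lib)
        print(f"{lib}: loaded")
    except OSError as exc:
        print(f"{lib}: {exc}")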

@ingo-m ingo-m added the bug Something isn't working label Feb 2, 2024
@fxmarty fxmarty added the onnxruntime Related to ONNX Runtime label Feb 5, 2024
fxmarty (Contributor) commented Feb 5, 2024

Hi @ingo-m, thank you for the report.

Locally, how did you install onnxruntime-gpu? The wheel hosted on the PyPI index is built for CUDA 11.8. https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html gives instructions on how to install the ORT CUDA EP for CUDA 12.1.

Not sure it will work, but you can also try export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path/to/miniconda3/envs/py-onnx/lib/python3.10/site-packages/nvidia/cublas/lib

Regarding the

RuntimeError: Error when binding input: There's no data transfer registered for copying tensors from Device:[DeviceType:1 MemoryType:0 DeviceId:0] to Device:[DeviceType:0 MemoryType:0 DeviceId:0]

I'm not sure yet, will investigate.

fxmarty (Contributor) commented Feb 5, 2024

@ingo-m I cannot reproduce the issue with:

import torch
from optimum.onnxruntime import ORTModelForCausalLM
from transformers import AutoTokenizer

base_model_name = "bigscience/bloomz-560m"
device_name = "cuda"

tokenizer = AutoTokenizer.from_pretrained(base_model_name)

ort_model = ORTModelForCausalLM.from_pretrained(
    base_model_name,
    use_io_binding=True,
    export=True,
    provider="CUDAExecutionProvider",
)

prompt = "i like pancakes"
inference_ids = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(
    device_name
)

# Generate a prediction (this is the call that fails for the issue author).
output_ids = ort_model.generate(
    input_ids=inference_ids["input_ids"],
    attention_mask=inference_ids["attention_mask"],
    max_new_tokens=512,
    temperature=1e-8,
    do_sample=True,
)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

with CUDA 11.8, torch==2.1.2+cu118, optimum==1.16.2, onnxruntime-gpu==1.17.0, onnx==1.15.0.

ingo-m (Author) commented Feb 5, 2024

@fxmarty thanks for looking into it.

Locally, I installed directly from PyPI (with pipenv). In other words, I did not follow the specific instructions for CUDA 12, so that explains the problem. (However, it's strange that I had no problems with CUDA 12 when I was still using the older version optimum[onnxruntime-gpu]==1.9.1 🤔).

On Google Colab, !nvidia-smi shows that it's using CUDA 12 as well (this is a free-tier Colab instance):

Mon Feb  5 12:59:37 2024       
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.104.05             Driver Version: 535.104.05   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  Tesla T4                       Off | 00000000:00:04.0 Off |                    0 |
| N/A   61C    P8              10W /  70W |      0MiB / 15360MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
                                                                                         
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+

As you said, it looks like CUDA 12 is the culprit.
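
For completeness, a quick way to see what the Python-side packages report (a hedged sketch; note that the CUDA version printed by nvidia-smi is the driver's supported CUDA version, which can differ from the CUDA version the installed wheels were built against):

import torch
import onnxruntime as ort

# CUDA version the installed torch wheel was built against (e.g. "12.1" or "11.8").
print(torch.version.cuda)
# Execution providers the installed onnxruntime build advertises.
print(ort.get_available_providers())
# "GPU" for onnxruntime-gpu builds, "CPU" otherwise.
print(ort.get_device())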

ingo-m (Author) commented Feb 5, 2024

Regarding this error:

RuntimeError: Error when binding input: There's no data transfer registered for copying tensors from Device:[DeviceType:1 MemoryType:0 DeviceId:0] to Device:[DeviceType:0 MemoryType:0 DeviceId:0]

Perhaps the ORTModelForCausalLM model was not placed on the GPU for inference (since the CUDAExecutionProvider failed to load due to the CUDA 12 issue), while the input tokens were placed on the GPU, and the error then occurs because the model and the tokens are not on the same device?
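
One way to check this hypothesis (a hedged sketch; ort_model.providers is assumed from optimum's ORTModel API, and get_providers() on the underlying onnxruntime.InferenceSession is the equivalent lower-level call):

# Which execution providers the exported model actually ended up with. If
# CUDAExecutionProvider could not be loaded, onnxruntime falls back to
# CPUExecutionProvider, while the tokenized inputs were moved to "cuda",
# which would be consistent with this device-mismatch error during IO binding.
print(ort_model.providers)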
