
[Performance] GPU op placement control when some ops must be on the CPU #23154

Open

Craigacp opened this issue Dec 19, 2024 · 0 comments

Labels: performance (issues related to performance regressions)

Describe the issue

We export ONNX transformer encoder models in OML4Py with the tokenizer attached to the front of the model, so the ONNX model accepts a string tensor input and returns the embedding vector. We use the tokenizer operations from onnxruntime-extensions, which are CPU-only, and wrap them in an ONNX graph that batches, pads, and truncates the tokenized representation using the SequenceMap operation.
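Structurally, the exported graph looks roughly like the sketch below. This is an illustrative simplification, not the exact exporter output: the node names are made up, the vocab and casing attributes on BertTokenizer are omitted, and the real op produces several outputs (ids, type ids, attention mask) rather than one.

```python
import onnx
from onnx import TensorProto, helper

# Body graph mapped over each element of the string sequence: one string in,
# a variable-length row of token ids out. BertTokenizer comes from
# onnxruntime-extensions (ai.onnx.contrib domain) and is CPU-only.
text_in = helper.make_tensor_value_info("text", TensorProto.STRING, [1])
ids_out = helper.make_tensor_value_info("ids", TensorProto.INT64, [None])
body = helper.make_graph(
    [helper.make_node("BertTokenizer", ["text"], ["ids"],
                      domain="ai.onnx.contrib")],
    "tokenize_one", [text_in], [ids_out],
)

# Outer graph: split the string batch into a sequence and map the tokenizer
# body over it. ONNX Runtime executes SequenceMap via a Loop op.
strings = helper.make_tensor_value_info("strings", TensorProto.STRING, [None])
ids_seq = helper.make_tensor_sequence_value_info("ids_seq", TensorProto.INT64,
                                                 [None])
graph = helper.make_graph(
    [
        helper.make_node("SplitToSequence", ["strings"], ["seq"]),
        helper.make_node("SequenceMap", ["seq"], ["ids_seq"], body=body),
    ],
    "tokenizer_wrapper", [strings], [ids_seq],
)
model = helper.make_model(
    graph,
    opset_imports=[
        helper.make_opsetid("", 17),               # SequenceMap needs opset >= 17
        helper.make_opsetid("ai.onnx.contrib", 1), # custom tokenizer domain
    ],
)
```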

When increasing the batch size we noticed that much of the runtime is spent in the Loop op that SequenceMap executes through, which is odd given how little work that op itself does. After some investigation we determined that this is because most of the ops in the tokenizer graph are placed on the GPU rather than the CPU, even though the subgraph we loop over must run on the CPU due to the presence of the BertTokenizer op. We would like the whole tokenization graph, including the Loop op, to be placed on the CPU EP, but there doesn't appear to be a way to control op placement in the C API. Alternatively, there might be a bug in how ops are assigned to the CPU: this code looks like it should fall back the whole subgraph and the enclosing Loop to the CPU, but I don't understand it well enough to tell whether that's part of the issue.
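The only placement control we are aware of is at the session level, which suggests a workaround we have not actually shipped: split the model at the tokenizer boundary and pin the tokenizer half to the CPU by never giving that session the CUDA provider. A minimal Python sketch of the idea (the file names tokenizer.onnx and encoder.onnx are hypothetical, and the same session-level control exists in the C API):

```python
import onnxruntime as ort
from onnxruntime_extensions import get_library_path

# Tokenizer half: register the custom ops and create the session with the
# CPU EP only, so every node in it (including the Loop) stays on the CPU.
cpu_opts = ort.SessionOptions()
cpu_opts.register_custom_ops_library(get_library_path())
tok_sess = ort.InferenceSession(
    "tokenizer.onnx", cpu_opts, providers=["CPUExecutionProvider"]
)

# Encoder half: free to use CUDA, with CPU fallback.
enc_sess = ort.InferenceSession(
    "encoder.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
```

The cost of this workaround is an extra session boundary and a copy of the tokenized tensors between the two runs, which is what attaching the tokenizer to the model was meant to avoid.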

To reproduce

Run the supplied bert-tok.onnx graph with a batch size of 100 and the CUDA EP enabled; most of the runtime is spent in the tokenization Loop op transferring data between the CPU and GPU.
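We hit this through the C API, but a minimal Python equivalent of the repro would look like the following sketch (the input name is resolved from the model, and the batch contents are placeholder strings):

```python
import numpy as np
import onnxruntime as ort
from onnxruntime_extensions import get_library_path

opts = ort.SessionOptions()
opts.register_custom_ops_library(get_library_path())  # BertTokenizer et al.
opts.enable_profiling = True  # emits a JSON trace showing time in the Loop op
sess = ort.InferenceSession(
    "bert-tok.onnx", opts,
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

# Batch of 100 strings; ORT expects string tensors as numpy object arrays.
batch = np.array(["some input text"] * 100, dtype=object)
input_name = sess.get_inputs()[0].name
outputs = sess.run(None, {input_name: batch})
```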

Urgency

This performance issue prevents the use of GPUs to accelerate our models.

Platform

Linux

OS Version

OL8

ONNX Runtime Installation

Built from Source

ONNX Runtime Version or Commit ID

1.20.0

ONNX Runtime API

C

Architecture

X64

Execution Provider

CUDA

Execution Provider Library Version

No response

Model File

bert-tok.onnx.zip

Is this a quantized model?

No
