Motivation

In the Vision-Language Models (VLMs) Deployment section, particularly under the Offline Inference Pipeline, there are examples that demonstrate two separate functionalities:
- Running batched inference.
- Running a chat conversation, which processes multiple inputs.
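For concreteness, those two documented patterns look roughly like the following. This is a minimal sketch assuming the lmdeploy pipeline API; the model name and image URLs are placeholders:

```python
from lmdeploy import pipeline
from lmdeploy.vl import load_image

# Placeholder model; any lmdeploy-supported VLM should work the same way.
pipe = pipeline('liuhaotian/llava-v1.6-vicuna-7b')

# 1) Batched inference: a flat list of independent (text, image) prompts.
image_urls = ['https://example.com/a.jpg', 'https://example.com/b.jpg']  # placeholders
prompts = [('describe this image', load_image(url)) for url in image_urls]
responses = pipe(prompts)

# 2) Multi-turn chat: one conversation, carried through a session object.
sess = pipe.chat(('describe this image', load_image(image_urls[0])))
sess = pipe.chat('What is unusual about it?', session=sess)
print(sess.response.text)
```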
I'm wondering if there's a way to combine both approaches. Specifically, I'd like to provide a list of lists, where each inner list represents one conversation made up of turns that are either (text, image) tuples or plain text, and then run batched inference over 8 conversations at a time.

This would streamline scenarios involving multiple conversations that mix image and text inputs. Is there any existing support for this, or could it be considered as a new feature?
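In the meantime, the closest I can get with the current API seems to be a wrapper along these lines. This is a sketch under two assumptions: that `pipe.chat` accepts `session=None` for a fresh conversation, and `run_conversations` is a hypothetical helper of my own, not an existing lmdeploy function. Note that turns are still issued one `pipe.chat` call at a time, so nothing is actually batched across conversations, which is exactly what the requested feature would address:

```python
from lmdeploy import pipeline
from lmdeploy.vl import load_image

pipe = pipeline('liuhaotian/llava-v1.6-vicuna-7b')  # placeholder model

def run_conversations(pipe, conversations, group_size=8):
    """Hypothetical helper: drive conversations in groups of `group_size`.

    Each conversation is a list of turns; a turn is either a plain string
    or a (text, image) tuple. Turns advance in lockstep across a group,
    but each pipe.chat() call still serves one request at a time, so this
    is not true cross-conversation batching.
    """
    final_replies = []
    for start in range(0, len(conversations), group_size):
        group = conversations[start:start + group_size]
        sessions = [None] * len(group)
        for turn_idx in range(max(len(conv) for conv in group)):
            for i, conv in enumerate(group):
                if turn_idx < len(conv):
                    sessions[i] = pipe.chat(conv[turn_idx], session=sessions[i])
        final_replies.extend(sess.response.text for sess in sessions)
    return final_replies

# Example: two conversations, one multimodal and one text-only (URL is a placeholder).
conversations = [
    [('describe this image', load_image('https://example.com/cat.jpg')),
     'What breed could it be?'],
    ['Summarize the benefits of batched inference in one sentence.'],
]
print(run_conversations(pipe, conversations))
```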
Thanks in advance!
Related resources
No response
Additional context
No response