[WIP] LLaVA support #720
Conversation
```diff
@@ -41,6 +87,7 @@ class PromptSource(Enum):
 class OutputFormat(Enum):
     OPENAI_CHAT_COMPLETIONS = auto()
     OPENAI_COMPLETIONS = auto()
+    OPENAI_VISION = auto()
```
The response format for chat VLMs is the same as for regular chat completions, since we only get text out, so why have a separate entry?
The name of the enum is a bit misleading 😅 The OutputFormat enum is actually not about the format of the response; it describes the format of the input JSON file that LlmInputs generates.
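For context, a minimal sketch of that distinction, assuming an endpoint mapping along the lines this PR adds ("add vision to endpoint mapping" in the commit list below); the `FORMAT_TO_ENDPOINT` table and its paths are illustrative, not the actual genai-perf code:

```python
from enum import Enum, auto


class OutputFormat(Enum):
    OPENAI_CHAT_COMPLETIONS = auto()
    OPENAI_COMPLETIONS = auto()
    OPENAI_VISION = auto()


# OPENAI_VISION reuses the chat-completions endpoint; only the request bodies
# that LlmInputs writes into the input JSON file differ, since vision entries
# carry image content alongside the text prompt.
FORMAT_TO_ENDPOINT = {
    OutputFormat.OPENAI_CHAT_COMPLETIONS: "v1/chat/completions",
    OutputFormat.OPENAI_VISION: "v1/chat/completions",
    OutputFormat.OPENAI_COMPLETIONS: "v1/completions",
}
```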
Fantastic work! Did you mean to delete test_end_to_end.py as part of this PR?
@dyastremsky yes, the script was originally created by Tim at the beginning of genai-perf when we didn't have CI, but we never used it afterwards (it's not even part of our unit tests). Since we now have CI in place, I don't think we need this script anymore.
Great job cleaning this up!
* POC for LLaVA support
* non-streaming request in VLM tests
* image component sent in "image_url" field instead of HTML tag
* generate sample image instead of loading from docs
* add vision to endpoint mapping
* fixes for handling OutputFormat
* refactor - extract image preparation to a separate module
* fixes to the refactor
* replace match-case syntax with if-elseif-else
* Update image payload format and fix tests
* Few clean ups and tickets added for follow up tasks
* Fix and add tests for vision format
* Remove output format from profile data parser
* Revert irrelevant code change
* Revert changes
* Remove unused dependency
* Comment test_extra_inputs

---------

Co-authored-by: Hyunjae Woo <[email protected]>
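To make the "image_url" change in the commit list above concrete, here is a hedged sketch of the payload shape involved: the image is base64-encoded and embedded as a data URI inside an OpenAI-style chat message instead of an HTML tag. The helper names `image_to_data_uri` and `vision_user_message` are illustrative only, not genai-perf code, and the media type is assumed to be PNG.

```python
import base64
from pathlib import Path


def image_to_data_uri(path: str, media_type: str = "image/png") -> str:
    """Base64-encode an image file and wrap it as a data URI."""
    encoded = base64.b64encode(Path(path).read_bytes()).decode("utf-8")
    return f"data:{media_type};base64,{encoded}"


def vision_user_message(prompt: str, image_path: str) -> dict:
    """Build one chat message carrying both a text part and an image_url part."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": image_to_data_uri(image_path)}},
        ],
    }
```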
* POC LLaVA VLM support (#720)
  * POC for LLaVA support
  * non-streaming request in VLM tests
  * image component sent in "image_url" field instead of HTML tag
  * generate sample image instead of loading from docs
  * add vision to endpoint mapping
  * fixes for handling OutputFormat
  * refactor - extract image preparation to a separate module
  * fixes to the refactor
  * replace match-case syntax with if-elseif-else
  * Update image payload format and fix tests
  * Few clean ups and tickets added for follow up tasks
  * Fix and add tests for vision format
  * Remove output format from profile data parser
  * Revert irrelevant code change
  * Revert changes
  * Remove unused dependency
  * Comment test_extra_inputs
* Support multi-modal input from file for OpenAI Chat Completions (#749)
* add synthetic image generator (#751)
  * synthetic image generator
  * format randomization
  * images should be base64-encoded arbitrarily
  * randomized image format
  * randomized image shape
  * prepare SyntheticImageGenerator to support different image sources
  * read from files
  * python 3.10 support fixes
  * remove unused imports
  * skip sampled image sizes with negative values
  * formats type fix
  * remove unused variable
  * synthetic image generator encodes images to base64
  * image format not randomized
  * sample each dimension independently
  * apply code-review suggestions
  * update class name
  * deterministic synthetic image generator
  * add typing to SyntheticImageGenerator
  * SyntheticImageGenerator doesn't load files
  * SyntheticImageGenerator always encodes images to base64
  * remove unused imports
  * generate gaussian noise instead of blank images
* Add command line arguments for synthetic image generation (#753)
  * Add CLI options for synthetic image generation
  * read image format from file when --input-file is used
  * move encode_image method to utils
  * Lazy import some modules
* Support synthetic image generation in GenAI-Perf (#754)
  * support synthetic image generation for VLM model
  * add test
  * integrate synthetic image generator into LlmInputs
  * add source images for synthetic image data
  * use abs to get positive int

Co-authored-by: Hyunjae Woo <[email protected]>
Co-authored-by: Marek Wawrzos <[email protected]>
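The synthetic image generator described in the commits above can be sketched roughly as follows: sample each dimension independently around a mean, fill the image with Gaussian noise, pick an encoding format, and base64-encode the result. This is a simplified illustration under those assumptions, not the actual SyntheticImageGenerator implementation; the function name and default values are made up for the example.

```python
import base64
import io
import random
from typing import Optional

import numpy as np
from PIL import Image


def generate_synthetic_image(mean_width: int = 256, mean_height: int = 256,
                             stddev: float = 30.0, seed: Optional[int] = None) -> str:
    """Return a base64 data URI for a synthetic image filled with Gaussian noise."""
    rng = random.Random(seed)
    # Sample width and height independently; abs() keeps sampled sizes positive.
    width = max(1, abs(int(rng.gauss(mean_width, stddev))))
    height = max(1, abs(int(rng.gauss(mean_height, stddev))))

    # Gaussian noise instead of a blank image, clipped into the valid pixel range.
    noise = np.random.default_rng(seed).normal(128, 32, size=(height, width, 3))
    image = Image.fromarray(np.clip(noise, 0, 255).astype(np.uint8), mode="RGB")

    # Randomize the encoded format, then base64-encode the bytes.
    image_format = rng.choice(["PNG", "JPEG"])
    buffer = io.BytesIO()
    image.save(buffer, format=image_format)
    encoded = base64.b64encode(buffer.getvalue()).decode("utf-8")
    return f"data:image/{image_format.lower()};base64,{encoded}"
```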
The goal of this PR is to enable measuring VLM throughput and latency when the input includes images.