vLLM model deployer #3032
base: develop
Conversation
Force-pushed a674cb7 to 3d5a694
Thank you for the contribution @dudeperf3ct, and great work! We've been looking forward to this and can't wait to test it out.
NAME = VLLM
REQUIREMENTS = ["vllm", "openai"]
Can we please pin some versions here to avoid conflicts and also make sure the integration keeps working?
For now, I have pinned this to the following:
REQUIREMENTS = ["vllm >= 0.6.0", "openai >= 1.0.0"]
@safoinme should we add an upper bound of <0.7.0 for vllm? The library is changing rapidly and we can't be sure the implementation will keep working for the next major release.
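For illustration, the pin with the discussed upper bound might look like the following sketch (the lower bounds come from this thread; the <0.7.0 cap is the suggestion under discussion, not a decided value):

# Sketch only: exact cap still to be agreed on
REQUIREMENTS = ["vllm>=0.6.0,<0.7.0", "openai>=1.0.0"]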
blocking: bool = True
model: Optional[str] = None
tokenizer: Optional[str] = None
We need to add all the other config options we want here, but maybe let's start with only the essential ones: tokenizer_mode, trust_remote_code, dtype, revision, served_model_name.
Added it in.
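For reference, a minimal sketch of what the extended config could look like (the class name comes from this PR's diff; the base class and import path are assumptions and may not match the final code):

from typing import Optional
from zenml.services import ServiceConfig  # assumed import path

class VLLMServiceConfig(ServiceConfig):
    """Sketch of the vLLM service config with the fields requested above."""
    blocking: bool = True
    model: Optional[str] = None
    tokenizer: Optional[str] = None
    served_model_name: Optional[str] = None
    tokenizer_mode: Optional[str] = None
    trust_remote_code: Optional[bool] = None
    dtype: Optional[str] = None
    revision: Optional[str] = None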
It would be better if we took this file out and instead documented to users how they would deploy models. Can you please also provide a full example of a pipeline that deploys using vLLM and create a PR for it in https://github.com/zenml-io/zenml-projects?
I have removed it. I will create a separate PR on https://github.com/zenml-io/zenml-projects that creates the step and pipeline for using vllm.
@safoinme Tracked here: zenml-io/zenml-projects#131
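As a rough sketch of what that zenml-projects example could look like (the step name vllm_model_deployer_step, its import path, and its parameters are assumptions based on this PR, not a confirmed API):

from zenml import pipeline, step
from zenml.integrations.vllm.steps import vllm_model_deployer_step  # assumed path

@step
def model_source() -> str:
    # Hugging Face model ID (or local path) to serve; placeholder value only.
    return "facebook/opt-125m"

@pipeline
def deploy_vllm_pipeline():
    model = model_source()
    vllm_model_deployer_step(model=model)

if __name__ == "__main__":
    deploy_vllm_pipeline()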
Force-pushed 3d5a694 to 80f867a
Force-pushed 9a973c5 to c4e6526
1 file reviewed, 4 total issues found.
Force-pushed 5355006 to 18f423f
1 file reviewed, 4 total issues found.
Note: this and prior reviews were resolved because we updated the style guide. We'll leave a new review below.
1 file reviewed, 5 total issues found.
We noticed a change to the style guide files, so we resolved the existing comments to account for any changes to your style guide.
Returns:
    The flavor logo.
"""
return "https://raw.githubusercontent.com/vllm-project/vllm/main/docs/source/assets/logos/vllm-logo-text-dark.png"
@schustmi can we please upload this logo to our S3 bucket under the logos path and share the URL here so it can be changed?
description="vLLM Inference prediction service", | ||
) | ||
config: VLLMServiceConfig | ||
|
Is there any specific reason for not having an endpoint implementation here? While a static implementation for healthcheck_url and prediction_url would work in some cases, it becomes invalid when port 8000 is not the port actually used.
Updated the code to reflect the same.
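For illustration, a sketch of how the URL could be derived from the service endpoint instead of a hardcoded port 8000 (attribute names follow other ZenML local daemon services and are an assumption here, not the final implementation):

@property
def prediction_url(self) -> Optional[str]:
    """URI of the running vLLM server, resolved from the service endpoint."""
    if not self.is_running:
        return None
    # endpoint.status.uri reflects whatever host/port was actually bound
    return self.endpoint.status.uri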
args = parser.parse_args()
# Update the arguments in place
args.__dict__.update(self.config.model_dump())
uvloop.run(run_server(args=args))
One of the main things the LocalDaemonServiceEndpoint adds is looking for a free port in the local environment, which can then be used when starting the server to avoid problems when the given port is already taken by another process. Check this. We can then add the port to the args to make sure our server starts on it, example here.
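A sketch of that suggestion applied to the snippet above (prepare_for_start() and status.port are assumptions based on ZenML's LocalDaemonServiceEndpoint; names may differ in the final code):

# Let the endpoint allocate a free local port before starting the server
self.endpoint.prepare_for_start()
args = parser.parse_args()
# Update the arguments in place
args.__dict__.update(self.config.model_dump())
# Bind the vLLM server to the allocated port instead of the default 8000
args.port = self.endpoint.status.port
uvloop.run(run_server(args=args))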
Thanks @safoinme for the pointers.
I have updated the code to use endpoint configuration.
Also updated the example on the zenml-projects repo and tested it.
Describe changes
This PR adds support for vLLM as a model deployer.
TODO
- Clarity on endpoints here for vLLM. For now, I have implemented a static version as part of get_prediction_url and get_healthcheck_url of VLLMDeploymentService.
- Ideal approach to add additional arguments specified under the EngineArgs dataclass here. For now, only model and tokenizer are added.
- Add documentation for the new model deployer.

Pre-requisites
Please ensure you have done the following:
- My branch is based on develop and the open PR is targeting develop. If your branch wasn't based on develop, read the Contribution guide on rebasing a branch to develop.

Types of changes