Add instructions for running vLLM backend #8
Would installing dependencies be part of the build? Or do we need a separate section on dependencies?
Good catch. I'll add this. I had made the assumption that this is using the vLLM backend, but we need to clarify/offer an independent build (e.g. adding these to a general Triton container).
There are a couple of options for how you can build the vLLM backend.

Option 1. You can follow the steps described in Building With Docker and use the `build.py` script. The sample command will build a Triton Server container with all available options enabled.

Option 2. You can install the vLLM backend directly into our NGC Triton container. In this case, please install vLLM first (`pip install vllm`), then set up `vllm_backend` in the container as follows.

Note: we should also mention the separate container at some point.
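A rough sketch of both options might look like the following; the `build.py` flags, the `r23.09`/`23.09` tags, and the backend install path are assumptions and should be checked against the Triton build documentation:

```bash
# Option 1: build a Triton Server container that includes the vLLM backend via build.py.
# Flags and the r23.09 tags below are assumptions -- adjust them to your target release.
git clone https://github.com/triton-inference-server/server.git
cd server
./build.py -v --enable-all \
    --backend=python:r23.09 \
    --backend=vllm:r23.09

# Option 2: start from the NGC Triton container and add vLLM yourself.
docker run --gpus all -it --rm nvcr.io/nvidia/tritonserver:23.09-py3 bash
# ...then, inside the container (the backend path is an assumption based on the
# usual layout for Python-based backends):
pip install vllm
git clone https://github.com/triton-inference-server/vllm_backend.git
mkdir -p /opt/tritonserver/backends/vllm
cp -r vllm_backend/src/* /opt/tritonserver/backends/vllm/
```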
Thanks for drafting these instructions. Added!
By default, this is in the pre-built container? Maybe this is best described as the vLLM version installed in the system (or container).
IIRC, `nvcr.io/nvidia/tritonserver:23.09-py3` should have the Python backend in there, right?
I think if the latest container (which is mentioned here) already has the Python backend, we can skip its build in the `.sh` script and just create the environment. The latest Python backend would be able to find the stub in the default Python backend location.
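A quick way to check this, assuming the standard layout of the NGC container (the path below is an assumption):

```bash
# List the Python backend directory shipped in the container; expect to see
# triton_python_backend_stub alongside libtriton_python.so.
docker run --rm nvcr.io/nvidia/tritonserver:23.09-py3 \
    ls /opt/tritonserver/backends/python/
```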
Olga and I discussed this offline. Since the CUDA version differs, it is most likely still necessary to create the stubs.
Note: When the vLLM backend container ships, it would be good to update the above command to use that so users do not need to pull the Triton server container unnecessarily.
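For reference, a sketch of the custom stub build following the python_backend README; the `r23.09` branch and repo tags are assumptions and should match the release in use:

```bash
# Build a custom triton_python_backend_stub against the Python interpreter
# you intend to use at runtime.
git clone https://github.com/triton-inference-server/python_backend -b r23.09
cd python_backend
mkdir build && cd build
cmake -DTRITON_ENABLE_GPU=ON \
      -DTRITON_BACKEND_REPO_TAG=r23.09 \
      -DTRITON_COMMON_REPO_TAG=r23.09 \
      -DTRITON_CORE_REPO_TAG=r23.09 \
      -DCMAKE_INSTALL_PREFIX:PATH=$(pwd)/install ..
make triton-python-backend-stub
# Copy the resulting triton_python_backend_stub next to the model's model.py.
```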
I looked into the docs after the meetings; it seems like the CUDA version should not be a trigger for a new stub: https://github.com/triton-inference-server/python_backend#building-custom-python-backend-stub

CC @Tabrizian and @tanmayv25, I would like to clarify the above. If we already ship a container with the Python backend, we should be able to re-use it without an extra cmake build. I'll test it locally meanwhile.
Thanks for checking! Yeah, if it works on your end, we can simplify this. Feel free to make a commit to co-author this PR, if you would like, or let me know when it works and I'll remove those steps.
Working on it
These steps can be removed for the 23.10 release, in my opinion. We can just talk about building with `build.py`. If they need to build with a different version of vLLM, they can update the vLLM version in the version map, or simply remove and re-install the vllm package in our shipped container. Let's just offer these two alternatives for simplicity.
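A minimal sketch of the second alternative (the version number is only an example; the first alternative means editing the vLLM entry in `build.py`'s version map and rebuilding as shown earlier):

```bash
# Inside the shipped container, swap in the vLLM release you need.
pip install vllm==0.2.1
```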
I think I agree with @tanmayv25. Since we are moving in the direction of shipping the container with a conda environment, the `conda` folder will become redundant, and it would be easier to add it later as (and if) needed.
They cannot do that for all versions of vLLM. If vLLM releases another pinned version, they can update the version map or install it in the container. However, if they want to use the latest vLLM version, it will be incompatible with the CUDA in the container (hence the need for Conda). Is that still acceptable?
I'm happy to remove this. I just want to make sure customers can still use whichever version of vLLM they want.
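For the latest-vLLM case, a minimal sketch of packaging a custom execution environment with conda-pack, following the python_backend custom execution environment docs (the environment name and Python version are placeholders):

```bash
# Create an isolated environment carrying the vLLM version you want,
# independent of what the container ships with.
conda create -n vllm_env python=3.10 -y
conda run -n vllm_env pip install vllm
# Pack it into a tarball the Python backend can unpack at model load time.
conda install -y -n base conda-pack
conda-pack -n vllm_env -o vllm_env.tar.gz
# Then point the model at the packed env via the EXECUTION_ENV_PATH parameter
# in config.pbtxt, as described in the python_backend docs.
```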