GMC: Add GPU support for GMC. #292
Conversation
```yaml
serviceName: tgi-service-llama
config:
  endpoint: /generate
  MODEL_ID: Intel/neural-chat-7b-v3-3
```
this is supposed to be the llama model
Thank you for your review. This is a workaround: if using the llama model, one GPU card may fail to launch two instances due to insufficient GPU memory. I am OK with keeping the llama model here; what do you think?
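For background, a sketch of why this happens: the Kubernetes NVIDIA device plugin schedules GPUs as whole devices, so each TGI pod requesting `nvidia.com/gpu: 1` claims a full card, and two llama-sized instances cannot split one card's memory. A minimal, illustrative resource stanza (not taken from this PR):

```yaml
# Illustrative only: each TGI replica claims one whole GPU,
# so two replicas require two physical cards.
resources:
  limits:
    nvidia.com/gpu: 1
```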
This is just a workaround for the current NV machine. For the example we want to provide to end users, it is better to use a meaningful example, such as keeping the llama model.
Fixed
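For readers following the thread: the fix presumably restores a llama checkpoint in `MODEL_ID`. A hedged sketch of the corrected stanza, where the exact model name is an assumption and not copied from the PR:

```yaml
serviceName: tgi-service-llama
config:
  endpoint: /generate
  # Assumed llama model; check the PR diff for the actual value.
  MODEL_ID: meta-llama/Llama-2-7b-chat-hf
```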
````diff
@@ -39,7 +43,7 @@ kubectl create deployment client-test -n chatqa --image=python:3.8.13 -- sleep i
 **Access the pipeline using the above URL from the client pod**

 ```bash
-export CLIENT_POD=$(kubectl get pod -l app=client-test -o jsonpath={.items..metadata.name})
+export CLIENT_POD=$(kubectl get pod -n chatqa -l app=client-test -o jsonpath={.items..metadata.name})
````
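The added `-n chatqa` matters: the client pod lives in the `chatqa` namespace, so without the flag `kubectl get pod` searches the default namespace and `CLIENT_POD` comes back empty. A sketch of how the variable is typically used afterwards, where `$accessUrl` stands in for the pipeline URL exported earlier and the request payload is illustrative:

```bash
export CLIENT_POD=$(kubectl get pod -n chatqa -l app=client-test -o jsonpath={.items..metadata.name})
# Issue a test request from inside the client pod; payload is illustrative.
kubectl exec -n chatqa "$CLIENT_POD" -- \
  curl "$accessUrl" -X POST -d '{"text":"What is machine learning?"}' -H 'Content-Type: application/json'
```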
good catch!
Enable NVIDIA GPU support for GMC, including sequence and switch modes. Note that switch mode may fail due to insufficient GPU memory. Signed-off-by: PeterYang12 <[email protected]>
da66129 to 04c72cb
for more information, see https://pre-commit.ci
lgtm
Thank you, Iris and Daisy. :)
Description
Enable NVIDIA GPU support for GMC, including sequence and switch modes. Note that switch mode may fail due to not enough GPU memory.
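A quick, generic way to confirm that nodes actually advertise NVIDIA GPUs to the scheduler (not a test listed in this PR):

```bash
# Each GPU node should report a nonzero nvidia.com/gpu count
# under Capacity/Allocatable.
kubectl describe nodes | grep -i "nvidia.com/gpu"
```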
Issues
List the issue or RFC link this PR is working on. If there is no such link, please mark it as `n/a`.
Type of change
List the applicable type of change below. Please delete options that are not relevant.
Dependencies
List any newly introduced third-party dependency, if one exists.
Tests
Describe the tests that you ran to verify your changes.