add k8s docs for getting started, K8s Manifest and Helm #179
Conversation
Signed-off-by: devpramod <[email protected]>
some suggested edits
Also, when you add new documents, they need to be linked into the table of contents structure. There's an index.rst file in this folder you can edit to add these two documents.
I'd suggest you add an edit to the index.rst doc in this deploy folder, and replace the existing Kubernetes section with this:
Kubernetes
**********

.. toctree::
   :maxdepth: 1

   k8s_getting_started
   TGI on Xeon with Helm Charts <k8s_helm>
* Xeon & Gaudi with GMC
* Xeon & Gaudi without GMC
LGTM, thanks!
Some of the stuff I see in the docs is just a tutorial on things that already have docs, like TGI/TEI, Helm, and Kubernetes. It feels a lot like we're overexplaining concepts that could be answered with a link to the other tool's source docs plus a command showing how it's relevant to use with ChatQnA.
For reference, this is the most handholding I would do in the case of deploying TGI:
Configure Model Server
Before we deploy a model, we need to configure the model server with information like which model to use and the maximum number of tokens. We will be using the tgi-on-intel helm chart. This chart normally uses XPU to serve the model, but we are going to configure it to use Gaudi2 instead.
First, look at the configuration files in the tgi directory and add/remove any configuration options relevant to your workflow:
cd tgi
# Create a new configmap for your model server to use
kubectl apply -f cm.yaml
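To make the step concrete, here is a minimal sketch of the kind of ConfigMap a cm.yaml like this might contain. The key names follow the Hugging Face launcher environment variables linked below; the configmap name and the values are illustrative assumptions, not taken from the actual chart:

```shell
# Write an example ConfigMap to a file (names/values are illustrative only;
# MODEL_ID, MAX_INPUT_LENGTH, and MAX_TOTAL_TOKENS are TGI launcher env vars)
cat > cm-example.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: model-server-config
data:
  MODEL_ID: "Intel/neural-chat-7b-v3-3"
  MAX_INPUT_LENGTH: "1024"
  MAX_TOTAL_TOKENS: "2048"
EOF

# Count the lines mentioning MODEL_ID to confirm the file was written
grep -c 'MODEL_ID' cm-example.yaml   # prints 1
```

You would then apply it with `kubectl apply -f cm-example.yaml`, the same way as cm.yaml above.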
Tip
Here is the reference to the Huggingface Launcher Environment Variables and the TGI-Gaudi Environment Variables.
Deploy Model Server
Now that we have configured the model server, we can deploy it to Kubernetes using the provided config.yaml file in the tgi directory. Modify any values, such as resources or replicas, in the config.yaml file to suit your needs. Then deploy the model server:
# Encode HF Token for secret.encodedToken
echo -n '<token>' | base64
# Install Chart
git clone https://github.com/intel/ai-containers
helm install model-server -f config.yaml ai-containers/workflows/charts/tgi
# Check the pod status
kubectl get pod
kubectl logs -f <pod-name>
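Since a mis-encoded token is an easy mistake here, it can help to sanity-check the base64 round-trip before pasting the value into secret.encodedToken. A minimal sketch, using a placeholder string rather than a real HF token:

```shell
# 'hf_example' is a placeholder, NOT a real Hugging Face token
token='hf_example'

# Encode without a trailing newline (printf, like echo -n, avoids adding one)
encoded=$(printf '%s' "$token" | base64)
echo "$encoded"   # prints aGZfZXhhbXBsZQ==

# Decode and confirm it matches the original
decoded=$(printf '%s' "$encoded" | base64 -d)
[ "$decoded" = "$token" ] && echo "round-trip ok"
```

If the decoded value differs (a stray newline is the usual culprit), the model server will fail to authenticate with Hugging Face when pulling the model.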
Please use a tool like markdownlint to ensure consistent styling.
I've got a script in docs/scripts/checkmd.sh that uses pymarkdown (lint) to scan markdown files, with a bunch of checks disabled. Alas, if I wasn't retiring today, including a markdown linter was on my list to add to the CI checks. :)
This PR contains the following docs:
* Getting Started for k8s - installation, a basic introduction to k8s, and sections for Helm and a K8s manifest. As more k8s deployment modes are added, corresponding sections will be created in this doc.
* Deploy using Helm charts - a doc that follows the xeon.md template as closely as possible to deploy ChatQnA on k8s using Helm.
* Deploy using a K8s manifest - a doc that follows the xeon.md template as closely as possible to deploy ChatQnA on k8s using a K8s manifest YAML.