add tgi bf16 setup on CPU k8s. (opea-project#795)
Co-authored-by: root <[email protected]>
Co-authored-by: Eero Tamminen <[email protected]>
3 people authored Sep 13, 2024
1 parent f990f79 commit ba17031
Showing 2 changed files with 1,478 additions and 0 deletions.
11 changes: 11 additions & 0 deletions ChatQnA/kubernetes/intel/README.md
@@ -17,6 +17,17 @@ sed -i "s/insert-your-huggingface-token-here/${HUGGINGFACEHUB_API_TOKEN}/g" chatqna.yaml
kubectl apply -f chatqna.yaml
```

Newer CPUs such as Intel Cooper Lake and Sapphire Rapids support the [`bfloat16` data type](https://en.wikipedia.org/wiki/Bfloat16_floating-point_format). If you have such CPUs and the given model supports `bfloat16`, adding the `--dtype bfloat16` argument to the `huggingface/text-generation-inference` server halves its memory usage and speeds it up a bit. To use it, run the following commands:

```
# label your node so that the service is scheduled on it automatically
kubectl label node 'your-node-name' node-type=node-bfloat16
# add a matching `nodeSelector` for the `huggingface/text-generation-inference` server in `chatqna_bf16.yaml`
# deploy the services
kubectl apply -f chatqna_bf16.yaml
```
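
For reference, the relevant part of the TGI Deployment in `chatqna_bf16.yaml` could look roughly like the sketch below. This is a minimal illustration, not the manifest from this commit: the Deployment name, image tag, model ID, and port are placeholders, and only the `nodeSelector` and `--dtype bfloat16` pieces are the point.

```
# Illustrative excerpt of a TGI Deployment (not the actual chatqna_bf16.yaml)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tgi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tgi
  template:
    metadata:
      labels:
        app: tgi
    spec:
      # matches the label applied above with `kubectl label node ...`
      nodeSelector:
        node-type: node-bfloat16
      containers:
        - name: tgi
          image: ghcr.io/huggingface/text-generation-inference:latest  # tag is a placeholder
          args:
            - "--model-id"
            - "your-model-id"  # placeholder; pick a model that supports bfloat16
            - "--dtype"
            - "bfloat16"  # halves TGI memory usage on bf16-capable CPUs
          ports:
            - containerPort: 80
```

After applying the manifest, `kubectl get pods -o wide` shows which node each pod was scheduled on, so you can confirm the TGI pod landed on the labeled node.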

## Deploy On Gaudi

1,467 changes: 1,467 additions & 0 deletions (second changed file, presumably `chatqna_bf16.yaml`; diff too large to display)
