How to run Tabby in an offline environment with a local model? #3517

wapleeeeee asked this question in Q&A · Answered by zwpaper

Hi @wapleeeeee, Tabby requires an embedding model in addition to the chat and completion models, so to run Tabby in an offline environment you also need to add an HTTP embedding model to the configuration.

For example:

[model.embedding.http]
# Points Tabby at an OpenAI-compatible /v1/embeddings endpoint.
kind = "openai/embedding"
# Model name as exposed by the local server; adjust to the model you actually serve.
model_name = "text-embedding-3-small"
# Base URL of the local OpenAI-compatible API.
api_endpoint = "http://localhost:8099/v1"
# Placeholder value; many local servers accept any non-empty key.
api_key = "apikey"
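
Before starting Tabby, you can confirm the endpoint is reachable by querying the OpenAI-compatible embeddings API directly. This curl call is a sketch that assumes a server is already listening on port 8099 with the model name from the config above:

curl http://localhost:8099/v1/embeddings \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer apikey" \
  -d '{"model": "text-embedding-3-small", "input": "hello world"}'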

We have also created an example for vLLM at https://tabby.tabbyml.com/docs/references/models-http-api/vllm/.
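
As a rough sketch, a recent vLLM release can expose such an OpenAI-compatible embeddings endpoint. The model name and the --task flag below are assumptions, so verify them against the linked docs and your vLLM version:

# Serve an embedding model with an OpenAI-compatible API on port 8099.
# BAAI/bge-m3 is illustrative; substitute the embedding model you use locally.
vllm serve BAAI/bge-m3 --task embed --port 8099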

Answer selected by wapleeeeee