
Commit

update spec back
wsxiaoys committed Nov 2, 2023
1 parent bf07bed commit ecf6952
Showing 2 changed files with 0 additions and 4 deletions.
1 change: 0 additions & 1 deletion CHANGELOG.md
@@ -3,7 +3,6 @@
## Notice

* llama.cpp backend (CPU, Metal) now requires a redownload of gguf model due to upstream format changes: https://github.com/TabbyML/tabby/pull/645 https://github.com/ggerganov/llama.cpp/pull/3252
* With tabby fully migrated to the `llama.cpp` serving stack, the `--model` and `--chat-model` options now accept local file paths instead of a directory path containing both the `tabby.json` and `ggml` files, as was the case previously.

## Features

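For readers skimming this diff, the removed CHANGELOG bullet above describes the `--model` / `--chat-model` change. A minimal before/after sketch of the invocation, assuming the behavior stated in that bullet; the paths and model names below are illustrative placeholders, not taken from this commit:

```bash
# Before the llama.cpp migration: --model pointed at a model directory
# containing tabby.json plus the ggml weight files (path is hypothetical).
tabby serve --model ./models/StarCoder-1B

# After the migration: --model (and --chat-model) accept a local
# gguf file path directly (filename is hypothetical).
tabby serve --model ./models/starcoder-1b.gguf
```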
3 changes: 0 additions & 3 deletions MODEL_SPEC.md
@@ -1,8 +1,5 @@
# Tabby Model Specification (Unstable)

> [!WARNING]
> This documentation is no longer valid; Tabby accepts gguf files directly since the release of v0.5. See https://github.com/TabbyML/registry-tabby for details.
Tabby organizes the model within a directory. This document provides an explanation of the necessary contents for supporting model serving. An example model directory can be found at https://huggingface.co/TabbyML/StarCoder-1B

The minimal Tabby model directory should include the following contents:
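For reference, a sketch of the (now deprecated) directory layout the spec above describes, inferred only from the CHANGELOG entry in this same commit (tabby.json plus ggml files); the directory name is illustrative and the spec's full list of required contents is truncated in this diff view:

```
StarCoder-1B/
├── tabby.json   # model metadata
└── ggml/        # ggml/gguf weight file(s)
```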
