From ecf695289703f063b46326b7746bad0e2d599b46 Mon Sep 17 00:00:00 2001
From: Meng Zhang
Date: Thu, 2 Nov 2023 15:45:50 -0700
Subject: [PATCH] update spec back

---
 CHANGELOG.md  | 1 -
 MODEL_SPEC.md | 3 ---
 2 files changed, 4 deletions(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 494bb6bf8f87..a4ab1188839a 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -3,7 +3,6 @@
 ## Notice
 
 * llama.cpp backend (CPU, Metal) now requires a redownload of gguf model due to upstream format changes: https://github.com/TabbyML/tabby/pull/645 https://github.com/ggerganov/llama.cpp/pull/3252
-* With tabby fully migrated to the `llama.cpp` serving stack, the `--model` and `--chat-model` options now accept local file paths instead of a directory path containing both the `tabby.json` and `ggml` files, as was the case previously.
 
 ## Features
 
diff --git a/MODEL_SPEC.md b/MODEL_SPEC.md
index cd968537fa37..d214d61e2365 100644
--- a/MODEL_SPEC.md
+++ b/MODEL_SPEC.md
@@ -1,8 +1,5 @@
 # Tabby Model Specification (Unstable)
 
-> [!WARNING]
-> This documentation is no longer valid , tabby accept gguf files directly since release of v0.5. see https://github.com/TabbyML/registry-tabby for details.
-
 Tabby organizes the model within a directory. This document provides an explanation of the necessary contents for supporting model serving. An example model directory can be found at https://huggingface.co/TabbyML/StarCoder-1B
 
 The minimal Tabby model directory should include the following contents: