
Releases: withcatai/node-llama-cpp

v3.0.0-beta.23 (Pre-release)

09 Jun 19:46 · 4ea0c3c

3.0.0-beta.23 (2024-06-09)

Shipped with llama.cpp release b3091

To use the latest llama.cpp release available, run npx --no node-llama-cpp download --release latest.
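The download command above can also pin a specific llama.cpp release rather than tracking latest. A minimal sketch, assuming the --release flag accepts an upstream release tag such as b3091 (the tag this version shipped with):

```shell
# Download and build the exact llama.cpp release this package version was tested with
npx --no node-llama-cpp download --release b3091

# Or track the newest upstream release (may be less stable)
npx --no node-llama-cpp download --release latest
```

The --no flag tells npx to use the locally installed node-llama-cpp instead of fetching a new copy.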

v2.8.11

24 May 13:13 · 81a203e

2.8.11 (2024-05-24)

Bug Fixes

  • bump llama.cpp release used in prebuilt binaries (#223) (81a203e)

v3.0.0-beta.22 (Pre-release)

19 May 19:00 · 6619b28

3.0.0-beta.22 (2024-05-19)

Shipped with llama.cpp release b2929

To use the latest llama.cpp release available, run npx --no node-llama-cpp download --release latest.

v3.0.0-beta.21 (Pre-release)

19 May 01:42 · ba03ca9

3.0.0-beta.21 (2024-05-19)

Shipped with llama.cpp release b2929

To use the latest llama.cpp release available, run npx --no node-llama-cpp download --release latest.

v3.0.0-beta.20 (Pre-release)

19 May 00:13 · d6a0f43

3.0.0-beta.20 (2024-05-19)

Bug Fixes

  • improve binary compatibility detection on Linux (#217) (d6a0f43)

Features

  • init command to scaffold a new project from a template (with node-typescript and electron-typescript-react templates) (#217) (d6a0f43)
  • debug mode (#217) (d6a0f43)
  • load LoRA adapters (#217) (d6a0f43)
  • improve Electron support (#217) (d6a0f43)
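The init command listed above scaffolds a new project from a template. A hedged sketch of the invocation; the template names (node-typescript, electron-typescript-react) come from this release's notes, but the exact command form is an assumption:

```shell
# Scaffold a new project; an interactive prompt asks which template to use
# (e.g. node-typescript or electron-typescript-react, per this release)
npx --no node-llama-cpp init
```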

Shipped with llama.cpp release b2928

To use the latest llama.cpp release available, run npx --no node-llama-cpp download --release latest.

v3.0.0-beta.19 (Pre-release)

12 May 20:48 · d321fe3

3.0.0-beta.19 (2024-05-12)

Shipped with llama.cpp release b2861

To use the latest llama.cpp release available, run npx --no node-llama-cpp download --release latest.

v3.0.0-beta.18 (Pre-release)

09 May 23:28 · 453c162

3.0.0-beta.18 (2024-05-09)

Bug Fixes

  • use a more efficient algorithm to find the maximum context size (#214) (453c162)
  • make embedding-only models work correctly (#214) (453c162)
  • perform context shift on the correct token index on generation (#214) (453c162)
  • make context loading work for all models on Electron (#214) (453c162)

Shipped with llama.cpp release b2834

To use the latest llama.cpp release available, run npx --no node-llama-cpp download --release latest.

v2.8.10

27 Apr 18:28 · 29e8c67

2.8.10 (2024-04-27)

v3.0.0-beta.17 (Pre-release)

24 Apr 17:23 · ef501f9

3.0.0-beta.17 (2024-04-24)

Bug Fixes

  • FunctionaryChatWrapper bugs (#205) (ef501f9)
  • function calling syntax bugs (#205) (ef501f9)
  • show GPU layers in the Model line in CLI commands (#205) (ef501f9)
  • refactor: rename LlamaChatWrapper to Llama2ChatWrapper

Shipped with llama.cpp release b2717

To use the latest llama.cpp release available, run npx --no node-llama-cpp download --release latest.

v3.0.0-beta.16 (Pre-release)

13 Apr 17:14 · d332b77

3.0.0-beta.16 (2024-04-13)

Shipped with llama.cpp release b2665

To use the latest llama.cpp release available, run npx --no node-llama-cpp download --release latest.