Releases: superlinear-ai/raglite

v0.6.2

06 Jan 22:27
290e2c0

What's Changed

  • fix: remove unnecessary stop sequence by @lsorber in #84

Full Changelog: v0.6.1...v0.6.2

v0.6.1

06 Jan 14:18
d1e1f39

What's Changed

  • fix: conditionally enable LlamaRAMCache by @lsorber in #83
  • fix(deps): exclude litellm versions that break get_model_info by @lsorber in #78
  • fix: improve (re)insertion speed by @lsorber in #80
  • fix: fix Markdown heading boundary probas by @lsorber in #81

Full Changelog: v0.6.0...v0.6.1

v0.6.0

05 Jan 15:39
b19963d

What's Changed

New Contributors

Full Changelog: v0.5.1...v0.6.0

v0.5.1

18 Dec 15:15
bf598dc

What's Changed

  • fix: improve output for empty databases by @lsorber in #68

Full Changelog: v0.5.0...v0.5.1

v0.5.0

17 Dec 09:49
2e9bfaf

What's Changed

New Contributors

Full Changelog: v0.4.1...v0.5.0

v0.4.1

05 Dec 20:50
0c5b7b5

What's Changed

  • fix: support embedding with LiteLLM for Ragas by @undo76 in #56
  • fix: add and enable OpenAI strict mode by @undo76 in #55

Full Changelog: v0.4.0...v0.4.1

v0.4.0

04 Dec 16:31
abb4d1b

What's Changed

  • feat: improve late chunking and optimize pgvector settings by @lsorber in #51
    • Add a workaround for #24 to increase the embedder's context size from 512 to a user-definable size.
    • Increase the default embedder context size to 1024 tokens (larger context sizes degrade bge-m3's performance).
    • Upgrade llama-cpp-python to the latest version.
    • Test rerankers more robustly with Kendall's rank correlation coefficient.
    • Optimize pgvector's settings.
    • Offer better control of oversampling in hybrid and vector search (see the sketch after this list).
    • Upgrade to PostgreSQL 17.
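
The sketch below illustrates, in rough terms, how the user-definable embedder context size and the oversampling control from this release might be used together. It is a minimal sketch rather than an excerpt from RAGLite: the `@1024` context-size suffix on the embedder string and the `oversample` keyword are assumptions about the API, not taken from these notes.

```python
# Minimal sketch (assumed API, not from the release notes).
from raglite import RAGLiteConfig, hybrid_search  # Top-level names assumed to exist.

config = RAGLiteConfig(
    # PostgreSQL with pgvector; this release targets PostgreSQL 17.
    db_url="postgresql://user:password@localhost:5432/raglite",
    # A local bge-m3 embedder with a 1024-token context size;
    # the "@1024" suffix is an assumed syntax for setting it.
    embedder="llama-cpp-python/lm-kit/bge-m3-gguf/*Q4_K_M.gguf@1024",
)

# Oversample candidates in hybrid search before the final ranking;
# the `oversample` keyword is an assumption standing in for whatever
# parameter actually controls this in RAGLite.
chunk_ids, scores = hybrid_search(
    "How does late chunking work?",
    num_results=5,
    oversample=4,
    config=config,
)
```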

Full Changelog: v0.3.0...v0.4.0

v0.3.0

03 Dec 18:26
0fd1970

What's Changed

  • feat: support prompt caching and apply Anthropic's long-context prompt format by @undo76 in #52

Full Changelog: v0.2.1...v0.3.0

v0.2.1

22 Nov 17:12
003967b

What's Changed

  • fix: add fallbacks for model info by @undo76 in #44
  • fix: improve unpacking of keyword search results by @lsorber in #46
  • fix: upgrade rerankers and remove flashrank patch by @lsorber in #47
  • fix: improve structured output extraction and query adapter updates by @emilradix in #34

New Contributors

Full Changelog: v0.2.0...v0.2.1

v0.2.0

21 Oct 15:36
fdf803b

What's Changed

Full Changelog: v0.1.4...v0.2.0