This repository has been archived by the owner on Oct 11, 2024. It is now read-only.

feat(code): switch fully to ollama as LLM provider #101

Merged
merged 26 commits into main from ollama
Feb 20, 2024

Conversation

@frgfm (Member) commented Feb 20, 2024

This PR introduces the following modifications:

  • adds ollama as LLM service provider
  • drops usage of OpenAI in the community version
  • adds script & results of latency benchmark on ollama models
  • adds test cases for the chat route
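The provider swap described above boils down to calling Ollama's local REST API instead of the OpenAI SDK. The sketch below is illustrative only, not the PR's actual `src/app/services/ollama.py`: the helper names and the model tag are assumptions, while the `/api/chat` endpoint, the request fields, and the default port 11434 are Ollama's documented defaults.

```python
import json
import urllib.request

# Default address of a locally running Ollama server (assumption: no custom host).
OLLAMA_URL = "http://localhost:11434"


def build_chat_payload(model: str, messages: list[dict], stream: bool = False) -> dict:
    """Build the JSON body expected by Ollama's /api/chat endpoint."""
    return {"model": model, "messages": messages, "stream": stream}


def chat(model: str, messages: list[dict]) -> str:
    """Send a non-streaming chat request to Ollama and return the reply text."""
    payload = build_chat_payload(model, messages)
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/chat",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # For non-streaming requests, Ollama returns a single JSON object
    # whose "message" field holds the assistant turn.
    return body["message"]["content"]


if __name__ == "__main__":
    # Requires an Ollama server with the model pulled; "mistral:7b" is a placeholder.
    print(chat("mistral:7b", [{"role": "user", "content": "Hello!"}]))
```

Keeping the payload construction in its own helper makes the chat route testable without a running model server, which matches the PR's addition of test cases for that route.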

codecov bot commented Feb 20, 2024

Codecov Report

Attention: 2 lines in your changes are missing coverage. Please review.

Comparison: base (6bc7921) 82.59% vs. head (4ef800a) 86.09%.

Files                        Patch %   Lines
src/app/services/ollama.py   80.00%    2 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #101      +/-   ##
==========================================
+ Coverage   82.59%   86.09%   +3.49%     
==========================================
  Files          29       29              
  Lines         971      906      -65     
==========================================
- Hits          802      780      -22     
+ Misses        169      126      -43     
Flag        Coverage Δ
unittests   86.09% <94.44%> (+3.49%) ⬆️

Flags with carried forward coverage won't be shown.


@frgfm frgfm merged commit 2aaa508 into main Feb 20, 2024
13 of 15 checks passed
@frgfm frgfm deleted the ollama branch February 20, 2024 22:35