feat: ollama support #8
Merged
+1,220 −609
42 commits
All 42 commits are by olimorris:

- df970d5 wip: first attempt at an adapter
- a234364 add luadoc blocks
- 5019e69 refactor adapters
- d8b0b1d chore: clean up spec
- 4cf8283 wip: note on schema
- fbba3a1 chore: slight word tweak for spec
- d36dbde refactor: strategy folder is now strategies
- e012ebe can form parameters based on the schema definition
- 6621185 switch to using the schema from the adapter
- e3fd4db clean up client
- cf9ffff fix adapter
- e93fe4d client now uses adapter from config
- 135829f clean up client calls throughout the plugin
- f9938b3 start adding callbacks to adapters
- 71b935a fix test
- 83ae51d make test more explict
- 982ecf7 clean up test
- 1d58fe9 chat buffer now fully moved to openai adapter
- 82072d3 fix env var in tests
- c843a16 inline strategy now uses adapter
- 6cc4802 env vars in headers can be swapped in
- e46a065 adapter call to format input messages to api
- 1c8bab4 fix tests
- 1f150b3 add ollama support
- 9775fda fix ollama `is_done` method
- eb000d7 fix displaying decimal places in settings
- 051058b tweak adapters
- eb70e27 update README.md
- 3b1d5da feat: add anthropic adapter
- a47b569 refactor name of error callback
- cd1db08 better handling of errors
- d7386b4 start adding adapter tests
- 8f25246 add ollama adapter test
- c9647c5 allow adapters to be customised from the config
- f9214e8 fix tests
- 1447bca make chat buffer more efficient and streamline adapters
- 66773c4 update README.md
- 90f9d0e fix inline for ollama and openai
- d82bccf fix anthropic adapter
- 6c90031 tweaks
- 580a5d6 add adapters.md
- e26c3e0 add anthropic spec
New file (+33 lines):

```lua
---@class CodeCompanion.Adapter
---@field name string
---@field url string
---@field header table
---@field parameters table
---@field schema table
local Adapter = {}

---@class CodeCompanion.AdapterArgs
---@field name string
---@field url string
---@field header table
---@field parameters table
---@field schema table

---@param args table
---@return CodeCompanion.Adapter
function Adapter.new(args)
  return setmetatable(args, { __index = Adapter })
end

---@param settings table
---@return CodeCompanion.Adapter
function Adapter:set_params(settings)
  -- TODO: Need to take into account the schema's "mapping" field
  for k, v in pairs(settings) do
    self.parameters[k] = v
  end

  return self
end

return Adapter
```
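The TODO in `set_params` mentions the schema's "mapping" field. A minimal, hypothetical sketch of what honouring it could look like (the `Adapter` scaffolding is repeated so the snippet stands alone; the `num_ctx`/`options` pair is invented for illustration and is not part of the diff):

```lua
-- Hypothetical sketch: each schema entry's "mapping" field names the
-- adapter table its value should be written into, instead of always
-- writing to self.parameters.
local Adapter = {}

function Adapter.new(args)
  return setmetatable(args, { __index = Adapter })
end

function Adapter:set_params(settings)
  for k, v in pairs(settings) do
    -- fall back to "parameters" when a setting has no schema entry
    local target = (self.schema[k] and self.schema[k].mapping) or "parameters"
    self[target] = self[target] or {}
    self[target][k] = v
  end
  return self
end

-- demo: "model" maps to parameters, "num_ctx" to a separate options table
local demo = Adapter.new({
  parameters = {},
  schema = {
    model = { mapping = "parameters" },
    num_ctx = { mapping = "options" },
  },
})
demo:set_params({ model = "gpt-4", num_ctx = 2048 })
```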
New file (+134 lines):

```lua
local Adapter = require("codecompanion.adapter")

---@class CodeCompanion.Adapter
---@field name string
---@field url string
---@field headers table
---@field parameters table
---@field schema table

local adapter = {
  name = "OpenAI",
  url = "https://api.openai.com/v1/chat/completions",
  headers = {
    content_type = "application/json",
    Authorization = "Bearer ", -- ignore the API key for now
  },
  parameters = {
    stream = true,
  },
  schema = {
    model = {
      order = 1,
      mapping = "parameters",
      type = "enum",
      desc = "ID of the model to use. See the model endpoint compatibility table for details on which models work with the Chat API.",
      default = "gpt-4-0125-preview",
      choices = {
        "gpt-4-1106-preview",
        "gpt-4",
        "gpt-3.5-turbo-1106",
        "gpt-3.5-turbo",
      },
    },
    temperature = {
      order = 2,
      mapping = "parameters",
      type = "number",
      optional = true,
      default = 1,
      desc = "What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both.",
      validate = function(n)
        return n >= 0 and n <= 2, "Must be between 0 and 2"
      end,
    },
    top_p = {
      order = 3,
      mapping = "parameters",
      type = "number",
      optional = true,
      default = 1,
      desc = "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.",
      validate = function(n)
        return n >= 0 and n <= 1, "Must be between 0 and 1"
      end,
    },
    stop = {
      order = 4,
      mapping = "parameters",
      type = "list",
      optional = true,
      default = nil,
      subtype = {
        type = "string",
      },
      desc = "Up to 4 sequences where the API will stop generating further tokens.",
      validate = function(l)
        return #l >= 1 and #l <= 4, "Must have between 1 and 4 elements"
      end,
    },
    max_tokens = {
      order = 5,
      mapping = "parameters",
      type = "integer",
      optional = true,
      default = nil,
      desc = "The maximum number of tokens to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length.",
      validate = function(n)
        return n > 0, "Must be greater than 0"
      end,
    },
    presence_penalty = {
      order = 6,
      mapping = "parameters",
      type = "number",
      optional = true,
      default = 0,
      desc = "Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.",
      validate = function(n)
        return n >= -2 and n <= 2, "Must be between -2 and 2"
      end,
    },
    frequency_penalty = {
      order = 7,
      mapping = "parameters",
      type = "number",
      optional = true,
      default = 0,
      desc = "Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.",
      validate = function(n)
        return n >= -2 and n <= 2, "Must be between -2 and 2"
      end,
    },
    logit_bias = {
      order = 8,
      mapping = "parameters",
      type = "map",
      optional = true,
      default = nil,
      desc = "Modify the likelihood of specified tokens appearing in the completion. Maps tokens (specified by their token ID) to an associated bias value from -100 to 100. Use https://platform.openai.com/tokenizer to find token IDs.",
      subtype_key = {
        type = "integer",
      },
      subtype = {
        type = "integer",
        validate = function(n)
          return n >= -100 and n <= 100, "Must be between -100 and 100"
        end,
      },
    },
    user = {
      order = 9,
      mapping = "parameters",
      type = "string",
      optional = true,
      default = nil,
      desc = "A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. Learn more.",
      validate = function(u)
        return u:len() < 100, "Cannot be longer than 100 characters"
      end,
    },
  },
}

return Adapter.new(adapter)
```
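Each `validate` callback returns a boolean plus an error message, a convention the plugin can surface in the UI when a setting is out of range. A stand-alone copy of the temperature rule above shows the shape:

```lua
-- Copy of the temperature validator from the schema above, extracted
-- so it can be exercised on its own.
local validate_temperature = function(n)
  return n >= 0 and n <= 2, "Must be between 0 and 2"
end

local ok = validate_temperature(0.8)     -- within [0, 2]
local bad, err = validate_temperature(3) -- out of range
```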
New file (+25 lines):

```lua
local assert = require("luassert")

local chat_buffer_settings = {
  frequency_penalty = 0,
  model = "gpt-4-0125-preview",
  presence_penalty = 0,
  temperature = 1,
  top_p = 1,
  stop = nil,
  max_tokens = nil,
  logit_bias = nil,
  user = nil,
}

describe("Adapter", function()
  it("can form parameters from a chat buffer's settings", function()
    local adapter = require("codecompanion.adapters.openai")
    local result = adapter:set_params(chat_buffer_settings)

    -- Ignore this for now
    result.parameters.stream = nil

    assert.are.same(chat_buffer_settings, result.parameters)
  end)
end)
```
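A similar busted-style spec could also exercise the schema validators directly. This sketch assumes the same luassert setup and module path as the spec above; it is illustrative only and not part of the diff:

```lua
local assert = require("luassert")

describe("OpenAI schema", function()
  it("rejects an out-of-range temperature", function()
    local openai = require("codecompanion.adapters.openai")
    local ok, err = openai.schema.temperature.validate(3)
    assert.is_false(ok)
    assert.are.equal("Must be between 0 and 2", err)
  end)
end)
```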
Is this the OpenAPI schema? If so, you may be able to autogenerate it in CI 👀
It's from their Chat Completions API.
Stevearc had brilliantly created a schema in his dotfiles that we leverage for this:
Love the idea of being able to do this for every adapter.