feat: ollama support #8

Merged
merged 42 commits on Mar 7, 2024
Changes from 6 commits
Commits (42)
df970d5
wip: first attempt at an adapter
olimorris Feb 29, 2024
a234364
add luadoc blocks
olimorris Feb 29, 2024
5019e69
refactor adapters
olimorris Feb 29, 2024
d8b0b1d
chore: clean up spec
olimorris Feb 29, 2024
4cf8283
wip: note on schema
olimorris Feb 29, 2024
fbba3a1
chore: slight word tweak for spec
olimorris Feb 29, 2024
d36dbde
refactor: strategy folder is now strategies
olimorris Mar 1, 2024
e012ebe
can form parameters based on the schema definition
olimorris Mar 1, 2024
6621185
switch to using the schema from the adapter
olimorris Mar 1, 2024
e3fd4db
clean up client
olimorris Mar 1, 2024
cf9ffff
fix adapter
olimorris Mar 1, 2024
e93fe4d
client now uses adapter from config
olimorris Mar 2, 2024
135829f
clean up client calls throughout the plugin
olimorris Mar 3, 2024
f9938b3
start adding callbacks to adapters
olimorris Mar 3, 2024
71b935a
fix test
olimorris Mar 3, 2024
83ae51d
make test more explicit
olimorris Mar 3, 2024
982ecf7
clean up test
olimorris Mar 3, 2024
1d58fe9
chat buffer now fully moved to openai adapter
olimorris Mar 3, 2024
82072d3
fix env var in tests
olimorris Mar 3, 2024
c843a16
inline strategy now uses adapter
olimorris Mar 5, 2024
6cc4802
env vars in headers can be swapped in
olimorris Mar 5, 2024
e46a065
adapter call to format input messages to api
olimorris Mar 5, 2024
1c8bab4
fix tests
olimorris Mar 5, 2024
1f150b3
add ollama support
olimorris Mar 5, 2024
9775fda
fix ollama `is_done` method
olimorris Mar 5, 2024
eb000d7
fix displaying decimal places in settings
olimorris Mar 5, 2024
051058b
tweak adapters
olimorris Mar 5, 2024
eb70e27
update README.md
olimorris Mar 5, 2024
3b1d5da
feat: add anthropic adapter
olimorris Mar 6, 2024
a47b569
refactor name of error callback
olimorris Mar 6, 2024
cd1db08
better handling of errors
olimorris Mar 6, 2024
d7386b4
start adding adapter tests
olimorris Mar 6, 2024
8f25246
add ollama adapter test
olimorris Mar 6, 2024
c9647c5
allow adapters to be customised from the config
olimorris Mar 7, 2024
f9214e8
fix tests
olimorris Mar 7, 2024
1447bca
make chat buffer more efficient and streamline adapters
olimorris Mar 7, 2024
66773c4
update README.md
olimorris Mar 7, 2024
90f9d0e
fix inline for ollama and openai
olimorris Mar 7, 2024
d82bccf
fix anthropic adapter
olimorris Mar 7, 2024
6c90031
tweaks
olimorris Mar 7, 2024
580a5d6
add adapters.md
olimorris Mar 7, 2024
e26c3e0
add anthropic spec
olimorris Mar 7, 2024
33 changes: 33 additions & 0 deletions lua/codecompanion/adapter.lua
@@ -0,0 +1,33 @@
---@class CodeCompanion.Adapter
---@field name string
---@field url string
---@field headers table
---@field parameters table
---@field schema table
local Adapter = {}

---@class CodeCompanion.AdapterArgs
---@field name string
---@field url string
---@field headers table
---@field parameters table
---@field schema table

---Create a new adapter from the given arguments
---@param args CodeCompanion.AdapterArgs
---@return CodeCompanion.Adapter
function Adapter.new(args)
return setmetatable(args, { __index = Adapter })
end

---Map settings from the chat buffer onto the adapter's parameters
---@param settings table
---@return CodeCompanion.Adapter
function Adapter:set_params(settings)
-- TODO: Need to take into account the schema's "mapping" field
for k, v in pairs(settings) do
self.parameters[k] = v
end

return self
end

return Adapter
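The TODO above is still open at this commit. Below is a minimal sketch of one way to honour the schema's "mapping" field — a hypothetical reading, not the PR's eventual implementation — where each setting is routed into whichever adapter table its schema entry names (the OpenAI schema further down declares mapping = "parameters" for every field):

---Hypothetical, mapping-aware variant of set_params (sketch only)
---@param settings table
---@return CodeCompanion.Adapter
function Adapter:set_params_mapped(settings)
  for k, v in pairs(settings) do
    -- Route the value into the table named by the schema entry's
    -- "mapping" field, defaulting to "parameters" when no entry exists
    local mapping = (self.schema[k] and self.schema[k].mapping) or "parameters"
    self[mapping] = self[mapping] or {}
    self[mapping][k] = v
  end
  return self
end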
134 changes: 134 additions & 0 deletions lua/codecompanion/adapters/openai.lua
@@ -0,0 +1,134 @@
local Adapter = require("codecompanion.adapter")

---@class CodeCompanion.Adapter
---@field name string
---@field url string
---@field headers table
---@field parameters table
---@field schema table

local adapter = {
name = "OpenAI",
url = "https://api.openai.com/v1/chat/completions",
headers = {
content_type = "application/json",
Authorization = "Bearer ", -- ignore the API key for now
},
parameters = {
stream = true,
},
schema = {
[Review thread on this line]
Contributor: is this the OpenAPI schema? You may be able to autogenerate this in CI if so 👀
Owner Author: It's from their chat completion API. Stevearc had brilliantly created a schema in his dotfiles that we leverage for this (screenshot omitted). Love the idea of being able to do this for every adapter.

model = {
order = 1,
mapping = "parameters",
type = "enum",
desc = "ID of the model to use. See the model endpoint compatibility table for details on which models work with the Chat API.",
default = "gpt-4-0125-preview",
choices = {
"gpt-4-1106-preview",
"gpt-4",
"gpt-3.5-turbo-1106",
"gpt-3.5-turbo",
},
},
temperature = {
order = 2,
mapping = "parameters",
type = "number",
optional = true,
default = 1,
desc = "What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both.",
validate = function(n)
return n >= 0 and n <= 2, "Must be between 0 and 2"
end,
},
top_p = {
order = 3,
mapping = "parameters",
type = "number",
optional = true,
default = 1,
desc = "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.",
validate = function(n)
return n >= 0 and n <= 1, "Must be between 0 and 1"
end,
},
stop = {
order = 4,
mapping = "parameters",
type = "list",
optional = true,
default = nil,
subtype = {
type = "string",
},
desc = "Up to 4 sequences where the API will stop generating further tokens.",
validate = function(l)
return #l >= 1 and #l <= 4, "Must have between 1 and 4 elements"
end,
},
max_tokens = {
order = 5,
mapping = "parameters",
type = "integer",
optional = true,
default = nil,
desc = "The maximum number of tokens to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length.",
validate = function(n)
return n > 0, "Must be greater than 0"
end,
},
presence_penalty = {
order = 6,
mapping = "parameters",
type = "number",
optional = true,
default = 0,
desc = "Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.",
validate = function(n)
return n >= -2 and n <= 2, "Must be between -2 and 2"
end,
},
frequency_penalty = {
order = 7,
mapping = "parameters",
type = "number",
optional = true,
default = 0,
desc = "Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.",
validate = function(n)
return n >= -2 and n <= 2, "Must be between -2 and 2"
end,
},
logit_bias = {
order = 8,
mapping = "parameters",
type = "map",
optional = true,
default = nil,
desc = "Modify the likelihood of specified tokens appearing in the completion. Maps tokens (specified by their token ID) to an associated bias value from -100 to 100. Use https://platform.openai.com/tokenizer to find token IDs.",
subtype_key = {
type = "integer",
},
subtype = {
type = "integer",
validate = function(n)
return n >= -100 and n <= 100, "Must be between -100 and 100"
end,
},
},
user = {
order = 9,
mapping = "parameters",
type = "string",
optional = true,
default = nil,
desc = "A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. Learn more.",
validate = function(u)
return u:len() < 100, "Cannot be longer than 100 characters"
end,
},
},
}

return Adapter.new(adapter)
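The Ollama adapter this PR is named for lands in a later commit (1f150b3) and is not part of the six commits shown in this view. For orientation, here is a speculative sketch of its shape, mirroring the OpenAI adapter above — the url, headers, and schema values are assumptions, not the PR's code:

local Adapter = require("codecompanion.adapter")

-- Speculative sketch only; mirrors the OpenAI adapter's structure
return Adapter.new({
  name = "Ollama",
  url = "http://localhost:11434/api/chat", -- Ollama's default local endpoint (assumed)
  headers = {
    content_type = "application/json",
  },
  parameters = {
    stream = true,
  },
  schema = {
    model = {
      order = 1,
      mapping = "parameters",
      type = "enum",
      desc = "The local model to run; it must already be pulled via `ollama pull`.",
      default = "llama2",
      choices = { "llama2", "mistral", "codellama" },
    },
  },
})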
25 changes: 25 additions & 0 deletions lua/spec/codecompanion/adapter_spec.lua
@@ -0,0 +1,25 @@
local assert = require("luassert")

local chat_buffer_settings = {
frequency_penalty = 0,
model = "gpt-4-0125-preview",
presence_penalty = 0,
temperature = 1,
top_p = 1,
stop = nil,
max_tokens = nil,
logit_bias = nil,
user = nil,
}

describe("Adapter", function()
it("can form parameters from a chat buffer's settings", function()
local adapter = require("codecompanion.adapters.openai")
local result = adapter:set_params(chat_buffer_settings)

-- Remove the stream default for now; it's set by the adapter, not the chat buffer
result.parameters.stream = nil

assert.are.same(chat_buffer_settings, result.parameters)
end)
end)
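For skimmers, this is what the spec above pins down at the call site — a sketch with illustrative values, not code from the PR:

-- Illustrative only: the behaviour the spec above asserts
local adapter = require("codecompanion.adapters.openai")
adapter:set_params({ model = "gpt-4", temperature = 0.2 })

print(adapter.parameters.model)       --> "gpt-4"
print(adapter.parameters.temperature) --> 0.2
print(adapter.parameters.stream)      --> true (the adapter's own default)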