feat: add support for anthropic and ollama via adapters
olimorris authored Mar 7, 2024
1 parent ceb7b25 commit cd610db
Showing 27 changed files with 1,220 additions and 609 deletions.
2 changes: 2 additions & 0 deletions .github/workflows/ci.yml
@@ -42,6 +42,8 @@ jobs:
ln -s $(pwd) ~/.local/share/nvim/site/pack/vendor/start
- name: Run tests
env:
OPENAI_API_KEY: abc-123
run: |
export PATH="${PWD}/_neovim/bin:${PATH}"
export VIM="${PWD}/_neovim/share/nvim/runtime"
8 changes: 8 additions & 0 deletions ADAPTERS.md
@@ -0,0 +1,8 @@
# Adapters

The purpose of this guide is to showcase how you can extend the functionality of CodeCompanion by adding your own adapters.


## Testing your adapters

- Uncomment the two commented-out lines in `client.lua` and in the adapter itself
102 changes: 58 additions & 44 deletions README.md
@@ -1,7 +1,7 @@
<!-- panvimdoc-ignore-start -->

<p align="center">
<img src="https://github.com/olimorris/codecompanion.nvim/assets/9512444/5bc2145f-4a26-4cee-9e3c-57f2393b070f" alt="CodeCompanion.nvim" />
<img src="https://github.com/olimorris/codecompanion.nvim/assets/9512444/cc50a085-722b-4299-9f26-0373a96c7c55" alt="CodeCompanion.nvim" />
</p>

<h1 align="center">CodeCompanion.nvim</h1>
@@ -14,7 +14,8 @@
</p>

<p align="center">
Use the <a href="https://platform.openai.com/docs/guides/text-generation/chat-completions-api">OpenAI APIs</a> directly in Neovim. Use it to chat, author and advise you on your code.
Use the power of generative AI in Neovim. Use it to chat, author and advise you on your code.<br><br>
Supports Anthropic, Ollama and OpenAI.
</p>

> [!IMPORTANT]
@@ -29,9 +30,10 @@ Use the <a href="https://platform.openai.com/docs/guides/text-generation/chat-co
## :sparkles: Features

- :speech_balloon: A Copilot Chat experience from within Neovim
- :electric_plug: Adapter support for many generative AI services
- :rocket: Inline code creation and modification
- :sparkles: Built-in actions for specific language prompts, LSP error fixes and code advice
- :building_construction: Create your own custom actions for Neovim which hook into OpenAI
- :building_construction: Create your own custom actions for Neovim which hook into generative AI APIs
- :floppy_disk: Save and restore your chats
- :muscle: Async execution for improved performance

@@ -48,13 +50,12 @@ Use the <a href="https://platform.openai.com/docs/guides/text-generation/chat-co

## :zap: Requirements

- An API key from OpenAI (get one [here](https://platform.openai.com/api-keys))
- The `curl` library installed
- Neovim 0.9.2 or greater
- _(Optional)_ An API key to be set in your shell for your chosen generative AI service

## :package: Installation

- Set your OpenAI API Key as an environment variable in your shell (default name: `OPENAI_API_KEY`)
- Install the plugin with your package manager of choice:

```lua
@@ -95,34 +96,9 @@ You only need to call the `setup` function if you wish to change any of the

```lua
require("codecompanion").setup({
api_key = "OPENAI_API_KEY", -- Your API key
org_api_key = "OPENAI_ORG_KEY", -- Your organisation API key
base_url = "https://api.openai.com", -- The URL to use for the API requests
ai_settings = {
-- Default settings for the Completions API
-- See https://platform.openai.com/docs/api-reference/chat/create
chat = {
model = "gpt-4-0125-preview",
temperature = 1,
top_p = 1,
stop = nil,
max_tokens = nil,
presence_penalty = 0,
frequency_penalty = 0,
logit_bias = nil,
user = nil,
},
inline = {
model = "gpt-3.5-turbo-0125",
temperature = 1,
top_p = 1,
stop = nil,
max_tokens = nil,
presence_penalty = 0,
frequency_penalty = 0,
logit_bias = nil,
user = nil,
},
adapters = {
chat = require("codecompanion.adapters").use("openai"),
inline = require("codecompanion.adapters").use("openai"),
},
saved_chats = {
save_dir = vim.fn.stdpath("data") .. "/codecompanion/saved_chats", -- Path to save chats to
@@ -169,7 +145,7 @@ require("codecompanion").setup({
["["] = "keymaps.previous", -- Move to the previous header in the chat
},
log_level = "ERROR", -- TRACE|DEBUG|ERROR
send_code = true, -- Send code context to OpenAI? Disable to prevent leaking code outside of Neovim
send_code = true, -- Send code context to the generative AI service? Disable to prevent leaking code outside of Neovim
silence_notifications = false, -- Silence notifications for actions like saving chats?
use_default_actions = true, -- Use the default actions in the action palette?
})
@@ -178,7 +154,45 @@ require("codecompanion").setup({
</details>

> [!WARNING]
> For some users, the sending of any code to OpenAI may not be an option. In those instances, you can set `send_code = false` in your config.
> Depending on your chosen adapter, you may need to set up environment variables within your shell. See the adapters section below for specific information.

### Adapters

The plugin uses adapters to bridge between generative AI services and the plugin. Currently the plugin supports:

- Anthropic (`anthropic`) - Requires `ANTHROPIC_API_KEY` to be set in your shell
- Ollama (`ollama`)
- OpenAI (`openai`) - Requires `OPENAI_API_KEY` to be set in your shell

You can specify an adapter for each of the strategies in the plugin:

```lua
require("codecompanion").setup({
adapters = {
chat = require("codecompanion.adapters").use("openai"),
inline = require("codecompanion.adapters").use("openai"),
},
})
```
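The strategies don't have to share an adapter. As a sketch (using only the adapter names listed above), you could pair a local Ollama model for chat with OpenAI for inline edits:

```lua
-- Illustrative only: mixes adapters across strategies using the
-- adapter names listed above ("ollama" and "openai").
require("codecompanion").setup({
  adapters = {
    chat = require("codecompanion.adapters").use("ollama"),
    inline = require("codecompanion.adapters").use("openai"),
  },
})
```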

#### Modifying Adapters

It may be necessary to modify certain parameters of an adapter. In the example below, we're changing the name of the API key that the OpenAI adapter uses by passing in a table to the `use` method:

```lua
require("codecompanion").setup({
adapters = {
chat = require("codecompanion.adapters").use("openai", {
env = {
api_key = "DIFFERENT_OPENAI_KEY",
},
}),
},
})
```

> [!TIP]
> To create your own adapter, please refer to the [ADAPTERS](ADAPTERS.md) guide.

### Edgy.nvim Configuration

@@ -244,13 +258,13 @@ The Action Palette, opened via `:CodeCompanionActions`, contains all of the acti

<p><img src="https://github.com/olimorris/codecompanion.nvim/assets/9512444/84d5e03a-0b48-4ffb-9ca5-e299d41171bd" alt="chat buffer" /></p>

The chat buffer is where you can converse with the OpenAI APIs, directly from Neovim. It behaves as a regular markdown buffer with some clever additions. When the buffer is written (or "saved"), autocmds trigger the sending of its content to OpenAI, in the form of prompts. These prompts are segmented by H1 headers: `user` and `assistant` (see OpenAI's [Chat Completions API](https://platform.openai.com/docs/guides/text-generation/chat-completions-api) for more on this). When a response is received, it is then streamed back into the buffer. The result is that you experience the feel of conversing with ChatGPT from within Neovim.
The chat buffer is where you can converse with the generative AI service, directly from Neovim. It behaves as a regular markdown buffer with some clever additions. When the buffer is written (or "saved"), autocmds trigger the sending of its content to the generative AI service, in the form of prompts. These prompts are segmented by H1 headers: `user` and `assistant`. When a response is received, it is then streamed back into the buffer. The result is that you experience the feel of conversing with ChatGPT from within Neovim.
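To illustrate the segmentation described above, a chat buffer might look roughly like this (an illustrative sketch; the exact header text and layout come from the plugin):

```markdown
# user

How can I reverse a table in Lua?

# assistant

You can swap elements in place, iterating from both ends toward the middle...
```

Writing the buffer after adding text under the `user` header is what triggers the next request.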

#### Keymaps

When in the chat buffer, there are a number of keymaps available to you (which can be changed in the config):

- `<C-s>` - Save the buffer and trigger a response from the OpenAI API
- `<C-s>` - Save the buffer and trigger a response from the generative AI service
- `<C-c>` - Close the buffer
- `q` - Cancel the stream from the API
- `gc` - Clear the buffer's contents
@@ -265,7 +279,7 @@ Chat buffers are not saved to disk by default, but can be by pressing `gs` in th

#### Settings

If `display.chat.show_settings` is set to `true`, at the very top of the chat buffer will be the OpenAI parameters which can be changed to tweak the response back to you. This enables fine-tuning and parameter tweaking throughout the chat. You can find more detail about them by moving the cursor over them or referring to the [OpenAI Chat Completions reference guide](https://platform.openai.com/docs/api-reference/chat).
If `display.chat.show_settings` is set to `true`, at the very top of the chat buffer will be the adapter parameters which can be changed to tweak the response back to you. This enables fine-tuning and parameter tweaking throughout the chat. You can find more detail about them by moving the cursor over them.
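Following the option path named above, enabling this would look something like:

```lua
-- Show the adapter parameters at the top of the chat buffer
require("codecompanion").setup({
  display = {
    chat = {
      show_settings = true,
    },
  },
})
```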

### Inline Code

@@ -284,7 +298,7 @@ You can use the plugin to create inline code directly into a Neovim buffer. This
> [!NOTE]
> The command can detect if you've made a visual selection and send any code as context to the API alongside the filetype of the buffer.

One of the challenges with inline editing is determining how the API's response should be handled in the buffer. If you've prompted the API to _"create a table of 5 fruits"_ then you may wish for the response to be placed after the cursor in the buffer. However, if you asked the API to _"refactor this function"_ then you'd expect the response to overwrite a visual selection. If this _placement_ isn't specified then the plugin will use OpenAI itself to determine if the response should follow any of the placements below:
One of the challenges with inline editing is determining how the API's response should be handled in the buffer. If you've prompted the API to _"create a table of 5 fruits"_ then you may wish for the response to be placed after the cursor in the buffer. However, if you asked the API to _"refactor this function"_ then you'd expect the response to overwrite a visual selection. If this _placement_ isn't specified then the plugin will use generative AI itself to determine if the response should follow any of the placements below:

- _after_ - after the visual selection
- _before_ - before the visual selection
@@ -296,11 +310,11 @@ As a final example, specifying a prompt like _"create a test for this code in a

### In-Built Actions

The plugin comes with a number of [in-built actions](https://github.com/olimorris/codecompanion.nvim/blob/main/lua/codecompanion/actions.lua) which aim to improve your Neovim workflow. Actions make use of either a _chat_ or an _inline_ strategy, which are essentially bridges between Neovim and OpenAI. The chat strategy opens up a chat buffer whilst an inline strategy will write output from OpenAI into the Neovim buffer.
The plugin comes with a number of [in-built actions](https://github.com/olimorris/codecompanion.nvim/blob/main/lua/codecompanion/actions.lua) which aim to improve your Neovim workflow. Actions make use of either a _chat_ or an _inline_ strategy. The chat strategy opens up a chat buffer whilst an inline strategy will write output from the generative AI service into the Neovim buffer.

#### Chat and Chat as

Both of these actions utilise the `chat` strategy. The `Chat` action opens up a fresh chat buffer. The `Chat as` action allows for persona based context to be set in the chat buffer allowing for better and more detailed responses from OpenAI.
Both of these actions utilise the `chat` strategy. The `Chat` action opens up a fresh chat buffer. The `Chat as` action allows for persona based context to be set in the chat buffer allowing for better and more detailed responses from the generative AI service.

> [!TIP]
> Both of these actions allow for visually selected code to be sent to the chat buffer as code blocks.
@@ -328,7 +342,7 @@ As the name suggests, this action provides advice on a visual selection of code

#### LSP assistant

Taken from the fantastic [Wtf.nvim](https://github.com/piersolenski/wtf.nvim) plugin, this action provides advice on how to correct any LSP diagnostics which are present on the visually selected lines. Again, the `send_code = false` value can be set in your config to prevent the code itself being sent to OpenAI.
Taken from the fantastic [Wtf.nvim](https://github.com/piersolenski/wtf.nvim) plugin, this action provides advice on how to correct any LSP diagnostics which are present on the visually selected lines. Again, the `send_code = false` value can be set in your config to prevent the code itself being sent to the generative AI service.

## :rainbow: Helpers

@@ -363,10 +377,10 @@ vim.api.nvim_create_autocmd({ "User" }, {
### Heirline.nvim

If you're using the fantastic [Heirline.nvim](https://github.com/rebelot/heirline.nvim) plugin, consider the following snippet to display an icon in the statusline whilst CodeCompanion is conversing with OpenAI:
If you're using the fantastic [Heirline.nvim](https://github.com/rebelot/heirline.nvim) plugin, consider the following snippet to display an icon in the statusline whilst CodeCompanion is conversing with the generative AI service:

```lua
local OpenAI = {
local CodeCompanion = {
static = {
processing = false,
},
2 changes: 1 addition & 1 deletion RECIPES.md
@@ -298,7 +298,7 @@ And to determine the visibility of actions in the palette itself:
strategy = "saved_chats",
description = "Load your previously saved chats",
condition = function()
local saved_chats = require("codecompanion.strategy.saved_chats")
local saved_chats = require("codecompanion.strategies.saved_chats")
return saved_chats:has_chats()
end,
picker = {
4 changes: 2 additions & 2 deletions lua/codecompanion/actions.lua
@@ -505,14 +505,14 @@ M.static.actions = {
strategy = "saved_chats",
description = "Load your previously saved chats",
condition = function()
local saved_chats = require("codecompanion.strategy.saved_chats")
local saved_chats = require("codecompanion.strategies.saved_chats")
return saved_chats:has_chats()
end,
picker = {
prompt = "Load chats",
items = function()
local client = require("codecompanion").get_client()
local saved_chats = require("codecompanion.strategy.saved_chats")
local saved_chats = require("codecompanion.strategies.saved_chats")
local items = saved_chats:list({ sort = true })

local chats = {}
107 changes: 107 additions & 0 deletions lua/codecompanion/adapter.lua
@@ -0,0 +1,107 @@
local log = require("codecompanion.utils.log")

---@class CodeCompanion.Adapter
---@field name string
---@field url string
---@field env? table
---@field raw? table
---@field headers table
---@field parameters table
---@field callbacks table
---@field schema table
local Adapter = {}

---@class CodeCompanion.AdapterArgs
---@field name string
---@field url string
---@field env? table
---@field raw? table
---@field headers table
---@field parameters table
---@field callbacks table
---@field schema table

---@param args table
---@return CodeCompanion.Adapter
function Adapter.new(args)
return setmetatable(args, { __index = Adapter })
end

---@return table
function Adapter:get_default_settings()
local settings = {}

for key, value in pairs(self.schema) do
if value.default ~= nil then
settings[key] = value.default
end
end

return settings
end

---@param settings? table
---@return CodeCompanion.Adapter
function Adapter:set_params(settings)
if not settings then
settings = self:get_default_settings()
end

for k, v in pairs(settings) do
local mapping = self.schema[k] and self.schema[k].mapping
if mapping then
local segments = {}
for segment in string.gmatch(mapping, "[^.]+") do
table.insert(segments, segment)
end

local current = self
for i = 1, #segments - 1 do
if not current[segments[i]] then
current[segments[i]] = {}
end
current = current[segments[i]]
end

-- Before setting the value, ensure the target exists or initialize it.
local target = segments[#segments]
if not current[target] then
current[target] = {}
end

-- Ensure 'target' is not nil and 'k' can be assigned to the final segment.
if target then
current[target][k] = v
end
end
end

return self
end

---@return CodeCompanion.Adapter
function Adapter:replace_header_vars()
if self.headers then
for k, v in pairs(self.headers) do
self.headers[k] = v:gsub("${(.-)}", function(var)
local env_var = self.env[var]

if env_var then
env_var = os.getenv(env_var)
if not env_var then
log:error("Error: Could not find env var: %s", self.env[var])
return vim.notify(
string.format("[CodeCompanion.nvim]\nCould not find env var: %s", self.env[var]),
vim.log.levels.ERROR
)
end
return env_var
end
end)
end
end

return self
end

return Adapter
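Based on the fields declared in the class annotations above, a minimal custom adapter could be constructed as follows. This is a sketch only: the field names come from `CodeCompanion.AdapterArgs`, but the exact shape of `callbacks` and of the `schema` entries is an assumption; refer to the bundled adapters for the real contract.

```lua
-- Illustrative sketch: "my_service" and its URL are hypothetical.
local Adapter = require("codecompanion.adapter")

local my_adapter = Adapter.new({
  name = "my_service",
  url = "https://api.example.com/v1/chat",
  env = {
    -- Name of the shell variable holding the key; substituted into
    -- the ${api_key} placeholder by replace_header_vars()
    api_key = "MY_SERVICE_API_KEY",
  },
  headers = {
    ["Content-Type"] = "application/json",
    Authorization = "Bearer ${api_key}",
  },
  parameters = {
    stream = true,
  },
  callbacks = {},
  schema = {
    model = {
      default = "my-model",
      mapping = "parameters", -- set_params() writes defaults under this path
    },
  },
})
```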