Use the OpenAI APIs directly in Neovim. Use them to chat about your code, author code inline, and get advice on it.
Important
This plugin is provided as-is and is primarily developed for my own workflows. As such, I offer no guarantees of regular updates or support and I expect the plugin's API to change regularly. Bug fixes and feature enhancements will be implemented at my discretion, and only if they align with my personal use-case. Feel free to fork the project and customize it to your needs, but please understand my involvement in further development will be minimal.
- 💬 A Copilot Chat experience from within Neovim
- 🚀 Inline code creation and modification
- ✨ Built-in actions for specific language prompts, LSP error fixes and code advice
- 🏗️ Create your own custom actions for Neovim which hook into OpenAI
- 💾 Save and restore your chats
- 💪 Async execution for improved performance
- An API key from OpenAI (get one here)
- The curl library installed
- Neovim 0.9.2 or greater
- Set your OpenAI API key as an environment variable in your shell (default name: OPENAI_API_KEY)
- Install the plugin with your package manager of choice:
-- Lazy.nvim
{
"olimorris/codecompanion.nvim",
dependencies = {
"nvim-treesitter/nvim-treesitter",
"nvim-lua/plenary.nvim",
{
"stevearc/dressing.nvim", -- Optional: Improves the default Neovim UI
opts = {},
},
},
config = true
}
-- Packer.nvim
use({
"olimorris/codecompanion.nvim",
config = function()
require("codecompanion").setup()
end,
requires = {
"nvim-treesitter/nvim-treesitter",
"nvim-lua/plenary.nvim",
"stevearc/dressing.nvim"
}
})
You only need to call the setup function if you wish to change any of the defaults:
Click to see the default configuration
require("codecompanion").setup({
api_key = "OPENAI_API_KEY", -- Your API key
org_api_key = "OPENAI_ORG_KEY", -- Your organisation API key
base_url = "https://api.openai.com", -- The URL to use for the API requests
ai_settings = {
-- Default settings for the Completions API
-- See https://platform.openai.com/docs/api-reference/chat/create
chat = {
model = "gpt-4-0125-preview",
temperature = 1,
top_p = 1,
stop = nil,
max_tokens = nil,
presence_penalty = 0,
frequency_penalty = 0,
logit_bias = nil,
user = nil,
},
inline = {
model = "gpt-3.5-turbo-0125",
temperature = 1,
top_p = 1,
stop = nil,
max_tokens = nil,
presence_penalty = 0,
frequency_penalty = 0,
logit_bias = nil,
user = nil,
},
},
saved_chats = {
save_dir = vim.fn.stdpath("data") .. "/codecompanion/saved_chats", -- Path to save chats to
},
display = {
action_palette = {
width = 95,
height = 10,
},
chat = { -- Options for the chat strategy
type = "float", -- float|buffer
show_settings = true, -- Show the model settings in the chat buffer?
show_token_count = true, -- Show the token count for the current chat in the buffer?
buf_options = { -- Buffer options for the chat buffer
buflisted = false,
},
float_options = { -- Float window options if the type is "float"
border = "single",
buflisted = false,
max_height = 0,
max_width = 0,
padding = 1,
},
win_options = { -- Window options for the chat buffer
cursorcolumn = false,
cursorline = false,
foldcolumn = "0",
linebreak = true,
list = false,
signcolumn = "no",
spell = false,
wrap = true,
},
},
},
keymaps = {
["<C-s>"] = "keymaps.save", -- Save the chat buffer and trigger the API
["<C-c>"] = "keymaps.close", -- Close the chat buffer
["q"] = "keymaps.cancel_request", -- Cancel the currently streaming request
["gc"] = "keymaps.clear", -- Clear the contents of the chat
["ga"] = "keymaps.codeblock", -- Insert a codeblock into the chat
["gs"] = "keymaps.save_chat", -- Save the current chat
["]"] = "keymaps.next", -- Move to the next header in the chat
["["] = "keymaps.previous", -- Move to the previous header in the chat
},
log_level = "ERROR", -- TRACE|DEBUG|ERROR
send_code = true, -- Send code context to OpenAI? Disable to prevent leaking code outside of Neovim
silence_notifications = false, -- Silence notifications for actions like saving chats?
use_default_actions = true, -- Use the default actions in the action palette?
})
Warning
For some users, the sending of any code to OpenAI may not be an option. In those instances, you can set send_code = false
in your config.
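As a minimal sketch, a setup call that keeps every other default but prevents code context from leaving Neovim might look like this:

```lua
-- Minimal setup: keep the defaults but never send code context to OpenAI
require("codecompanion").setup({
  send_code = false,
})
```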
The author recommends pairing with edgy.nvim for a Copilot Chat-like experience:
{
"folke/edgy.nvim",
event = "VeryLazy",
init = function()
vim.opt.laststatus = 3
vim.opt.splitkeep = "screen"
end,
opts = {
right = {
{ ft = "codecompanion", title = "Code Companion Chat", size = { width = 0.45 } },
}
}
}
The plugin sets the following highlight groups during setup:
- CodeCompanionTokens - Virtual text showing the token count when in a chat buffer
- CodeCompanionVirtualText - All other virtual text in the chat buffer
The plugin has a number of commands:
- :CodeCompanion - Inline code writing and refactoring
- :CodeCompanionChat - Open a new chat buffer
- :CodeCompanionToggle - Toggle a chat buffer
- :CodeCompanionActions - Open the action palette window
For an optimum workflow, the plugin author recommends the following keymaps:
vim.api.nvim_set_keymap("n", "<C-a>", "<cmd>CodeCompanionActions<cr>", { noremap = true, silent = true })
vim.api.nvim_set_keymap("v", "<C-a>", "<cmd>CodeCompanionActions<cr>", { noremap = true, silent = true })
vim.api.nvim_set_keymap("n", "<LocalLeader>a", "<cmd>CodeCompanionToggle<cr>", { noremap = true, silent = true })
vim.api.nvim_set_keymap("v", "<LocalLeader>a", "<cmd>CodeCompanionToggle<cr>", { noremap = true, silent = true })
Note
For some actions, visual mode allows your selection to be sent directly to the chat buffer or the API itself (in the case of inline code actions).
Note
Please see the RECIPES guide in order to add your own actions to the palette.
The Action Palette, opened via :CodeCompanionActions, contains all of the actions and their associated strategies for the plugin. It's the fastest way to start leveraging CodeCompanion. The options available to you in the palette depend on whether you're in normal or visual mode.
Tip
If you wish to turn off the default actions, set use_default_actions = false
in your config.
The chat buffer is where you can converse with the OpenAI APIs, directly from Neovim. It behaves as a regular markdown buffer with some clever additions. When the buffer is written (or "saved"), autocmds trigger the sending of its content to OpenAI, in the form of prompts. These prompts are segmented by H1 headers: user
and assistant
(see OpenAI's Chat Completions API for more on this). When a response is received, it is then streamed back into the buffer. The result is that you experience the feel of conversing with ChatGPT from within Neovim.
When in the chat buffer, there are a number of keymaps available to you (which can be changed in the config):
- <C-s> - Save the buffer and trigger a response from the OpenAI API
- <C-c> - Close the buffer
- q - Cancel the stream from the API
- gc - Clear the buffer's contents
- ga - Add a codeblock
- gs - Save the chat to disk
- ] - Move to the next header
- [ - Move to the previous header
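If any of the defaults clash with your own mappings, the keymaps table shown in the default configuration above can be overridden in the setup call. A sketch, assuming (as is common for Neovim plugin setups) that your options are deep-merged with the defaults:

```lua
-- Hypothetical remapping: move "cancel" off "q" and add <CR> as a
-- second trigger for sending the chat buffer to the API
require("codecompanion").setup({
  keymaps = {
    ["<CR>"] = "keymaps.save",           -- also save/trigger on <CR>
    ["gq"] = "keymaps.cancel_request",   -- cancel streaming with gq
  },
})
```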
Chat buffers are not saved to disk by default, but can be by pressing gs
in the buffer. Saved chats can then be restored via the Action Palette and the Saved chats action.
If display.chat.show_settings
is set to true
, at the very top of the chat buffer will be the OpenAI parameters which can be changed to tweak the response back to you. This enables fine-tuning and parameter tweaking throughout the chat. You can find more detail about them by moving the cursor over them or referring to the OpenAI Chat Completions reference guide.
You can use the plugin to create inline code directly into a Neovim buffer. This can be invoked by using the Action Palette (as above) or from the command line via :CodeCompanion
. For example:
:CodeCompanion create a table of 5 fruits
:'<,'>CodeCompanion refactor the code to make it more concise
Note
The command can detect if you've made a visual selection and send any code as context to the API alongside the filetype of the buffer.
One of the challenges with inline editing is determining how the API's response should be handled in the buffer. If you've prompted the API to "create a table of 5 fruits" then you may wish for the response to be placed after the cursor in the buffer. However, if you asked the API to "refactor this function" then you'd expect the response to overwrite a visual selection. If this placement isn't specified then the plugin will use OpenAI itself to determine if the response should follow any of the placements below:
- after - after the visual selection
- before - before the visual selection
- cursor - one column after the cursor position
- new - in a new buffer
- replace - replacing the visual selection
As a final example, specifying a prompt like "create a test for this code in a new buffer" would result in a new Neovim buffer being created.
The plugin comes with a number of in-built actions which aim to improve your Neovim workflow. Actions make use of either a chat or an inline strategy, which are essentially bridges between Neovim and OpenAI. The chat strategy opens up a chat buffer whilst an inline strategy will write output from OpenAI into the Neovim buffer.
Both of these actions utilise the chat
strategy. The Chat
action opens up a fresh chat buffer. The Chat as
action allows for persona based context to be set in the chat buffer allowing for better and more detailed responses from OpenAI.
Tip
Both of these actions allow for visually selected code to be sent to the chat buffer as code blocks.
This action enables users to easily navigate between their open chat buffers. A chat buffer may be deleted (and removed from memory) by pressing <C-q>.
This action utilises the inline strategy. It can be useful for writing inline code in a buffer or even refactoring a visual selection, all based on a user's prompt. The action is designed to write code for the filetype of the buffer it is initiated in, or, if run from a terminal prompt, to write commands.
The strategy comes with a number of helpers which the user can type in the prompt, similar to GitHub Copilot Chat:
- /doc to add a documentation comment
- /optimize to analyze and improve the running time of the selected code
- /tests to create unit tests for the selected code
Note
The options available to the user in the Action Palette will depend on the Vim mode.
As the name suggests, this action provides advice on a visual selection of code and utilises the chat
strategy. The response from the API is streamed into a chat buffer which follows the display.chat
settings in your configuration.
Taken from the fantastic Wtf.nvim plugin, this action provides advice on how to correct any LSP diagnostics which are present on the visually selected lines. Again, the send_code = false
value can be set in your config to prevent the code itself being sent to OpenAI.
The plugin fires the following events during its lifecycle:
- CodeCompanionRequest - Fired during the API request. Outputs data.status with a value of started or finished
- CodeCompanionChatSaved - Fired after a chat has been saved to disk
- CodeCompanionChat - Fired at various points during the chat buffer. Comes with the following attributes:
  - data.action = close_buffer - For when a chat buffer has been permanently closed
  - data.action = hide_buffer - For when a chat buffer is hidden
  - data.action = show_buffer - For when a chat buffer is visible after being hidden
- CodeCompanionInline - Fired during the inline API request alongside CodeCompanionRequest. Outputs data.status with a value of started or finished
Events can be hooked into as follows:
local group = vim.api.nvim_create_augroup("CodeCompanionHooks", {})
vim.api.nvim_create_autocmd({ "User" }, {
pattern = "CodeCompanionInline",
group = group,
callback = function(request)
print(request.data.status) -- outputs "started" or "finished"
end,
})
Tip
A possible use case is for formatting the buffer after an inline code request
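As a sketch of that idea, the autocmd below formats the current buffer via the LSP once an inline request has finished streaming (the event name and data.status values are those listed above; swap vim.lsp.buf.format for your formatter of choice):

```lua
local group = vim.api.nvim_create_augroup("CodeCompanionFormat", {})

vim.api.nvim_create_autocmd("User", {
  pattern = "CodeCompanionInline",
  group = group,
  callback = function(request)
    -- Only format once the inline response has fully streamed in
    if request.data.status == "finished" then
      vim.lsp.buf.format({ async = true })
    end
  end,
})
```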
If you're using the fantastic Heirline.nvim plugin, consider the following snippet to display an icon in the statusline whilst CodeCompanion is conversing with OpenAI:
local OpenAI = {
static = {
processing = false,
},
update = {
"User",
pattern = "CodeCompanionRequest",
callback = function(self, args)
self.processing = (args.data.status == "started")
vim.cmd("redrawstatus")
end,
},
{
condition = function(self)
return self.processing
end,
provider = " ",
hl = { fg = "yellow" },
},
}
- Steven Arcangeli for his genius creation of the chat buffer and his feedback
- Wtf.nvim for the LSP assistant action
- ChatGPT.nvim for the calculation of tokens