Visual Studio Code (VSCode) Ollama Local Code Co-pilot

An example of running an instance of Ollama via docker-compose and connecting it to VSCode for code completion / generation.

Prerequisites

The project uses the following software:

  • Docker and Docker Compose
  • Visual Studio Code
  • The CodeGPT extension for VSCode
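
Before starting, it can help to confirm that Docker and Docker Compose are installed. The commands below are just a sanity check; any reasonably recent versions should work.

    # Verify Docker and Docker Compose are available on the host
    docker --version
    docker-compose --version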

Usage

  1. Navigate to the directory where the repository has been cloned and start the containers:

    docker-compose up -d --build
    
  2. Wait for the ollama-setup-1 service to finish downloading codellama and the other models; a quick way to verify the download is sketched after this list.

     2024-01-31 23:36:50 {"status":"verifying sha256 digest"}
     2024-01-31 23:36:50 {"status":"writing manifest"}
     2024-01-31 23:36:50 {"status":"removing any unused layers"}
     2024-01-31 23:36:50 {"status":"success"}
     100 1128k    0 1128k    0    21   2546      0 --:--:--  0:07:33 --:--:--    23
    
  3. Install the CodeGPT extension in VSCode and follow the instructions provided here.
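
Once the setup service reports success, you can confirm that the Ollama server is reachable and that the expected models were pulled. This is a minimal sketch, assuming the container exposes Ollama's default port 11434 on the host:

    # List the models the local Ollama instance currently has available;
    # the JSON response should include an entry for codellama
    curl http://localhost:11434/api/tags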

Note

Initial prompts to Ollama might be slow if the container was not already running or the model has not been used recently.
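
One way to work around this is to pre-load the model before opening VSCode by sending a throwaway generation request. This is a sketch, again assuming the default port 11434 and the codellama model pulled by the setup service:

    # Hypothetical warm-up request so the first completion in VSCode is fast
    curl http://localhost:11434/api/generate -d '{
      "model": "codellama",
      "prompt": "Say hello.",
      "stream": false
    }'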

Notes

Using different Large Language Models (LLMs) with CodeGPT

You can edit docker-entrypoint.sh to pull any model available in the Ollama library; however, CodeGPT currently supports only a few models via its UI. You will need to manually type the name of the model you have pulled into the 'Model' field for the Ollama provider.
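
The repository's docker-entrypoint.sh is not reproduced here, but based on the setup log above it pulls models through Ollama's HTTP API, so adding another model would look roughly like the line below. The model name and the ollama hostname are assumptions; substitute whatever model you want from the Ollama library and the service name used in the compose file.

    # Hypothetical extra pull added to docker-entrypoint.sh
    curl http://ollama:11434/api/pull -d '{"name": "deepseek-coder"}'

After the pull completes, enter the same model name (e.g. deepseek-coder) into CodeGPT's 'Model' field for the Ollama provider.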

What devices has this been tested on?

I have only tried the container on my MacBook Pro with an M1 Pro chip and 16 GB of RAM, so YMMV.
