An example of running an Ollama instance via docker-compose and connecting it to VSCode for code completion / generation.
The project uses the following software:

- Docker and Docker Compose
- Ollama (run inside a container)
- VSCode with the CodeGPT extension
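For reference, a minimal `docker-compose.yml` for this kind of setup might look like the sketch below. The service names, images, volume paths, and entrypoint are assumptions for illustration and may differ from the file in this repository.

```yaml
# Minimal sketch: an Ollama server plus a one-shot setup service that pulls models.
# Service names, volumes, and the setup script path are illustrative assumptions.
services:
  ollama:
    image: ollama/ollama            # official Ollama image
    ports:
      - "11434:11434"               # Ollama's default API port
    volumes:
      - ollama-data:/root/.ollama   # persist downloaded models across restarts

  ollama-setup:
    image: curlimages/curl          # small image with curl, used to pull models
    depends_on:
      - ollama
    volumes:
      - ./docker-entrypoint.sh:/docker-entrypoint.sh
    entrypoint: ["sh", "/docker-entrypoint.sh"]

volumes:
  ollama-data:
```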
- Navigate to the directory where the repository has been cloned and start the containers:

  ```sh
  docker-compose up -d --build
  ```
- Wait for the `ollama-setup-1` service to finish downloading `codellama` and the other models. Its logs should end with something like the following (see the verification command after this list):

  ```
  2024-01-31 23:36:50 {"status":"verifying sha256 digest"}
  2024-01-31 23:36:50 {"status":"writing manifest"}
  2024-01-31 23:36:50 {"status":"removing any unused layers"}
  2024-01-31 23:36:50 {"status":"success"}
  100 1128k    0 1128k    0    21   2546      0 --:--:--  0:07:33 --:--:--    23
  ```
- Install the CodeGPT extension in VSCode and follow the instructions provided here.
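Once the setup service reports success, you can confirm which models are available by querying Ollama's `/api/tags` endpoint (part of Ollama's standard REST API; the port below assumes the default `11434` mapping):

```sh
# List the models the Ollama server currently has available.
curl http://localhost:11434/api/tags
```

The response is a JSON object whose `models` array should include `codellama`.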
> **Note**: Initial prompts to Ollama might be slow if the container was not already running or the model has not been used recently.
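If you want to warm the model up before the first completion, one option is to use Ollama's documented behavior of loading a model into memory when `/api/generate` is called with a model name and no prompt (the port again assumes the default mapping):

```sh
# Loads the model into memory without generating any text,
# so subsequent prompts from CodeGPT respond faster.
curl http://localhost:11434/api/generate -d '{"model": "codellama"}'
```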
You can edit `docker-entrypoint.sh` to pull any model available in the Ollama library; however, CodeGPT currently supports only a few models via its UI, so you will need to manually type the name of the model you have pulled into the 'Model' field for the Ollama provider.
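Judging from the setup logs above, the script appears to pull models through Ollama's `/api/pull` endpoint via curl. Below is a minimal sketch of what such a script might look like; the `ollama` hostname, the wait loop, and the model list are assumptions for illustration, and the actual script in this repository may differ:

```sh
#!/bin/sh
# Hypothetical sketch of a model-pulling entrypoint; not necessarily the repo's actual script.

# Wait until the Ollama API is reachable (assumes a compose service named "ollama").
until curl -s http://ollama:11434/ > /dev/null; do
  sleep 1
done

# Pull each model from the Ollama library; add names here to pull others.
for model in codellama; do
  curl http://ollama:11434/api/pull -d "{\"name\": \"$model\"}"
done
```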
I have only tried the container on my MacBook Pro (M1 Pro, 16 GB RAM), so YMMV.