diff --git a/fern/docs/pages/installation/installation.mdx b/fern/docs/pages/installation/installation.mdx
index ce8decab02..a33ad1a163 100644
--- a/fern/docs/pages/installation/installation.mdx
+++ b/fern/docs/pages/installation/installation.mdx
@@ -312,7 +312,7 @@ $env:CMAKE_ARGS='-DLLAMA_CUBLAS=on'; poetry run pip install --force-reinstall --
 
 If your installation was correct, you should see a message similar to the following next
 time you start the server `BLAS = 1`. If there is some issue, please refer to the
-[troubleshooting](#/installation/getting-started/troubleshooting#guide-for-building-llama-cpp-with-cuda-support) section.
+[troubleshooting](#/installation/getting-started/troubleshooting#building-llama-cpp-with-nvidia-gpu-support) section.
 
 ```console
 llama_new_context_with_model: total VRAM used: 4857.93 MB (model: 4095.05 MB, context: 762.87 MB)
@@ -345,7 +345,7 @@ CMAKE_ARGS='-DLLAMA_CUBLAS=on' poetry run pip install --force-reinstall --no-cac
 
 If your installation was correct, you should see a message similar to the following next
 time you start the server `BLAS = 1`. If there is some issue, please refer to the
-[troubleshooting](#/installation/getting-started/troubleshooting#guide-for-building-llama-cpp-with-cuda-support) section.
+[troubleshooting](#/installation/getting-started/troubleshooting#building-llama-cpp-with-nvidia-gpu-support) section.
 
 ```
 llama_new_context_with_model: total VRAM used: 4857.93 MB (model: 4095.05 MB, context: 762.87 MB)