From 83427431f7fc5cfad504881674022641aef73bf7 Mon Sep 17 00:00:00 2001
From: Javier Martinez
Date: Wed, 7 Aug 2024 11:27:53 +0200
Subject: [PATCH] fix: troubleshooting link

...
---
 fern/docs/pages/installation/installation.mdx | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/fern/docs/pages/installation/installation.mdx b/fern/docs/pages/installation/installation.mdx
index ce8decab0..e7f80c87d 100644
--- a/fern/docs/pages/installation/installation.mdx
+++ b/fern/docs/pages/installation/installation.mdx
@@ -312,7 +312,7 @@ $env:CMAKE_ARGS='-DLLAMA_CUBLAS=on'; poetry run pip install --force-reinstall --
 
 If your installation was correct, you should see a message similar to the following next
 time you start the server `BLAS = 1`. If there is some issue, please refer to the
-[troubleshooting](#/installation/getting-started/troubleshooting#guide-for-building-llama-cpp-with-cuda-support) section.
+[troubleshooting](/installation/getting-started/troubleshooting#building-llama-cpp-with-nvidia-gpu-support) section.
 
 ```console
 llama_new_context_with_model: total VRAM used: 4857.93 MB (model: 4095.05 MB, context: 762.87 MB)
@@ -345,7 +345,7 @@ CMAKE_ARGS='-DLLAMA_CUBLAS=on' poetry run pip install --force-reinstall --no-cac
 
 If your installation was correct, you should see a message similar to the following next
 time you start the server `BLAS = 1`. If there is some issue, please refer to the
-[troubleshooting](#/installation/getting-started/troubleshooting#guide-for-building-llama-cpp-with-cuda-support) section.
+[troubleshooting](/installation/getting-started/troubleshooting#building-llama-cpp-with-nvidia-gpu-support) section.
 ```
 llama_new_context_with_model: total VRAM used: 4857.93 MB (model: 4095.05 MB, context: 762.87 MB)