This sample application demonstrates an end-to-end Retrieval Augmented Generation application in Vespa, where all the steps are run within Vespa. No other systems are required.
This sample application focuses on the generation part of RAG and builds upon the MS Marco passage ranking sample application. Refer to that sample application for details on more advanced forms of retrieval, such as vector search and cross-encoder re-ranking. The generation steps in this sample application happen after retrieval, so those techniques can easily be used here as well. For the purposes of this sample application, we use simple BM25 text search for retrieval.
We will show three versions of an end-to-end RAG application here:
- Using an external LLM service to generate the final response.
- Using local LLM inference to generate the final response.
- Deploying to Vespa Cloud and using GPU accelerated LLM inference to generate the final response.
For details on using retrieval augmented generation in Vespa, please refer to the RAG in Vespa documentation page. For more on the general use of LLMs in Vespa, please refer to LLMs in Vespa.
The following is a quick start recipe using a tiny slice of the MS Marco passage ranking dataset. See the MS Marco passage ranking sample application for instructions on downloading the entire dataset.
In the following we will deploy the sample application either to a local Docker (or Podman) container or to Vespa Cloud. Querying the sample application does not depend on the type of deployment, and is shown in the querying section below.
Make sure the Vespa CLI is installed, and update it to the newest version:
$ brew install vespa-cli
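You can verify the installation by printing the CLI version:

$ vespa version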
Download this sample application:
$ vespa clone retrieval-augmented-generation rag && cd rag
Here we will deploy the sample application locally to a Docker or Podman container. Please ensure that either Docker or Podman is installed and running with at least 12 GB of memory available.
Validate Docker resource settings, which should be a minimum of 12 GB:
$ docker info | grep "Total Memory"

or

$ podman info | grep "memTotal"
In the following, you can replace `docker` with `podman` and it should work out of the box.
Pull and start the most recent Vespa container image:
$ docker pull vespaengine/vespa
$ docker run --detach --name vespa-rag --hostname vespa-container \
    --publish 8080:8080 --publish 19071:19071 \
    vespaengine/vespa
We will use a local deployment with this Docker image, so set the Vespa CLI target to local:
$ vespa config set target local
Verify that the configuration service (deploy API) is ready:
$ vespa status deploy --wait 300
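For a local container, you can also check the config server's health endpoint directly over HTTP (port 19071 is the one published above); it should report an "up" status once the config server is ready:

$ curl -s http://localhost:19071/state/v1/health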
Deploy the application. This downloads the LLM model file, which can take some time. Note that if you don't want to perform local inference of the LLM, you can remove the corresponding section in `services.xml` so the application skips this download.
$ vespa deploy --wait 900
Now the application should be deployed! You can continue to the querying section below for testing this application.
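Optionally, wait for the query endpoint to come up before feeding and querying:

$ vespa status --wait 300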
Here we will deploy the sample application to Vespa Cloud, using a GPU instance for the generative part. Note that this application fits within the free quota, so it is free to try.
In the following we will set the Vespa CLI target to the cloud. Make sure you have created a tenant at console.vespa-cloud.com. Make note of the tenant name, it will be used in the next steps. For more information, see the Vespa Cloud getting started guide.
Configure the Vespa CLI. Replace `tenant-name` below with your tenant name. We use the application name `rag-app` here, but you are free to choose your own application name:
$ vespa config set target cloud
$ vespa config set application tenant-name.rag-app
Authorize Vespa Cloud access and add your public certificates to the application:
$ vespa auth login
$ vespa auth cert
Deploy the application. This can take some time for all nodes to be provisioned:
$ vespa deploy --wait 900
Now the application should be deployed! You can continue to the querying section below for testing this application.
Let's feed the documents:
$ vespa feed ext/docs.jsonl
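Each line in `ext/docs.jsonl` is a document in the Vespa JSON feed format. As a rough sketch (the actual document type and field names are defined by this application's schema, so treat the identifiers and values below as illustrative only), a line looks like:

{"put": "id:msmarco:passage::0", "fields": {"id": "0", "text": "The Manhattan Project was a research and development undertaking during World War II."}}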
First, run a query to check the retrieval:
$ vespa query query="what was the manhattan project?" hits=5
To test generation using the OpenAI client, post a query which runs the `openai` search chain:
$ vespa query \
    --timeout 60 \
    --header="X-LLM-API-KEY:insert-api-key-here" \
    query="what was the manhattan project?" \
    hits=5 \
    searchChain=openai \
    format=sse \
    traceLevel=1
Here, we specifically set the search chain to `openai`. This calls the `RAGSearcher`, which is set up to use the `OpenAI` client. Note that this requires an OpenAI API key, which is sent in the header. We also add a timeout, as token generation can take some time.
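To avoid pasting the key directly on the command line, you can keep it in an environment variable and reference it in the header (the variable name below is arbitrary):

$ export LLM_API_KEY="insert-api-key-here"
$ vespa query \
    --timeout 60 \
    --header="X-LLM-API-KEY:$LLM_API_KEY" \
    query="what was the manhattan project?" \
    hits=5 \
    searchChain=openai \
    format=sse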
To test generation using the local LLM model, post a query which runs the `local` search chain:
$ vespa query \
    --timeout 120 \
    query="what was the manhattan project?" \
    hits=5 \
    searchChain=local \
    format=sse \
    traceLevel=1
Note that if you are submitting this query to a local Docker deployment, it can take some time before the tokens start appearing. This is because prompt evaluation can take a significant amount of time, particularly on CPUs without many cores. To alleviate this, you can reduce the number of hits retrieved by Vespa, for instance to 3, as shown below. Prompt evaluation and token generation are much more efficient on a GPU.
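For example, the same query against the `local` search chain, retrieving only three hits:

$ vespa query \
    --timeout 120 \
    query="what was the manhattan project?" \
    hits=3 \
    searchChain=local \
    format=sse \
    traceLevel=1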
The parameters here are:
- `query`: the query used both for retrieval and as the prompt question.
- `hits`: the number of hits that Vespa should return in the retrieval stage.
- `searchChain`: the search chain set up in `services.xml` that calls the generative process.
- `format`: sets the format to server-sent events, which streams the tokens as they are generated.
- `traceLevel`: outputs some debug information, such as the actual prompt that was sent to the LLM and token timings.
For more information on how to customize the prompt, please refer to the RAG in Vespa documentation.
For local deployments, shut down and remove this container:
$ docker rm -f vespa-rag
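If you used Podman instead of Docker:

$ podman rm -f vespa-rag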
To remove the application from Vespa Cloud:
$ vespa destroy