Inference pipeline that uses LangChain to create a chain that:
- downloads the fine-tuned model from Comet's model registry
- takes user questions as input
- queries the Qdrant Vector DB and enhances the prompt with related financial news
- calls the fine-tuned LLM for the final answer
- persists the chat history into memory
The inference pipeline defines how the user interacts with all the components we've built so far. Here, we combine them into the actual financial assistant chatbot.
Thus, using LangChain, we will create a series of chains that (see the sketch after this list):
- download & load the fine-tuned model from Comet's model registry
- take the user's input, embed it, and query the Qdrant Vector DB to extract related financial news
- build the prompt based on the user input, financial news context, and chat history
- call the LLM
- persist the history in memory
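To make these steps concrete, here is a minimal sketch of how such a pipeline could be wired together. The Comet workspace and model names, the Qdrant collection and payload keys, the all-MiniLM-L6-v2 embedder, and the prompt template are all illustrative placeholders, not the repository's actual values:

```python
import os

from comet_ml import API
from langchain.chains import LLMChain
from langchain.llms import HuggingFacePipeline
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate
from qdrant_client import QdrantClient
from sentence_transformers import SentenceTransformer
from transformers import pipeline

# 1. Download & load the fine-tuned model from Comet's model registry
#    (workspace, model name, and version are placeholders).
registry_model = API(api_key=os.environ["COMET_API_KEY"]).get_model(
    workspace="my-workspace", model_name="financial-assistant"
)
registry_model.download("1.0.0", output_folder="./model_cache")
llm = HuggingFacePipeline(
    pipeline=pipeline("text-generation", model="./model_cache", max_new_tokens=256)
)

# 2. Embed the user's question and query Qdrant for related financial news.
embedder = SentenceTransformer("all-MiniLM-L6-v2")
qdrant = QdrantClient(
    url=os.environ["QDRANT_URL"], api_key=os.environ["QDRANT_API_KEY"]
)

def retrieve_context(question: str, top_k: int = 3) -> str:
    hits = qdrant.search(
        collection_name="financial_news",  # hypothetical collection name
        query_vector=embedder.encode(question).tolist(),
        limit=top_k,
    )
    # "text" is a hypothetical payload key holding the news body.
    return "\n".join(hit.payload["text"] for hit in hits)

# 3. Build the prompt from the user input, news context, and chat history.
prompt = PromptTemplate(
    input_variables=["chat_history", "context", "question"],
    template=(
        "You are a helpful financial assistant.\n"
        "Chat history:\n{chat_history}\n"
        "Related financial news:\n{context}\n"
        "Question: {question}\n"
        "Answer:"
    ),
)

# 4. & 5. Call the LLM; the memory object persists the chat history.
memory = ConversationBufferMemory(memory_key="chat_history", input_key="question")
chain = LLMChain(llm=llm, prompt=prompt, memory=memory)

question = "Is now a good time to invest in renewable energy stocks?"
print(chain.run(question=question, context=retrieve_context(question)))
```

The memory object is what carries the conversation between calls, so every new prompt automatically includes the chat history.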
The final step is to put the financial assistant to good use and deploy it as a serverless RESTful API using Beam.
Main dependencies you have to install yourself:
- Python 3.10
- Poetry 1.5.1
- GNU Make 4.3
Install dependencies:
make install
For development, run:
make install_dev
Prepare credentials:
cp .env.example .env
Then complete the .env file with your credentials.
You must create a FREE account with Qdrant and generate the QDRANT_API_KEY and QDRANT_URL environment variables. Afterward, be sure to add them to your .env file.
-> Check out this document to see how.
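After this step, the Qdrant entries in your .env should look something like this (both values are placeholders):

```
QDRANT_API_KEY=<your-qdrant-api-key>
QDRANT_URL=<your-qdrant-cluster-url>
```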
Optional step, in case you want to use Beam:
Create and configure a free Beam account to deploy the bot as a serverless RESTful API and show it to your friends. You will pay only for what you use.
-> Create a Beam account & configure it.
Run bot locally:
make run
Run bot locally in dev mode:
make run_dev
Deploy the bot under a RESTful API using Beam:
make deploy_beam
Deploy the bot under a RESTful API using Beam in dev mode:
make deploy_beam_dev
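Under the hood, these targets push a Beam app definition. The following is a rough sketch of what that looks like, assuming Beam's v1 Python SDK (App, Runtime, Image, and the rest_api decorator); the app name, resources, package list, and handler body are placeholders:

```python
from beam import App, Image, Runtime

# Illustrative app definition; resource sizes and packages are placeholders.
app = App(
    name="financial_bot",
    runtime=Runtime(
        cpu=4,
        memory="16Gi",
        gpu="T4",
        image=Image(
            python_version="python3.10",
            python_packages=["langchain", "qdrant-client", "transformers"],
        ),
    ),
)

@app.rest_api(keep_warm_seconds=300)
def run(**inputs):
    # The real handler builds the LangChain pipeline sketched earlier
    # and answers the user's question.
    question = inputs["question"]
    return {"answer": f"You asked: {question}"}
```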
Make a request to the bot calling the RESTful API:
export BEAM_DEPLOYMENT_ID=<BEAM_DEPLOYMENT_ID>
export BEAM_AUTH_TOKEN=<BEAM_AUTH_TOKEN>
make call_restful_api DEPLOYMENT_ID=${BEAM_DEPLOYMENT_ID} TOKEN=${BEAM_AUTH_TOKEN}
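The make target boils down to a single HTTP request. In Python, it would look roughly like this; the endpoint URL format and the question payload key are assumptions, so check your Beam dashboard for the actual values:

```python
import os

import requests

# Hypothetical endpoint format; verify the real URL in your Beam dashboard.
url = f"https://apps.beam.cloud/{os.environ['BEAM_DEPLOYMENT_ID']}"
response = requests.post(
    url,
    headers={"Authorization": f"Basic {os.environ['BEAM_AUTH_TOKEN']}"},
    json={"question": "What is the latest financial news about Tesla?"},  # assumed key
)
print(response.json())
```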
Start the Gradio UI:
make run_ui
Start the Gradio UI in dev mode:
make run_ui_dev
NOTE: Running the commands above will host the UI on your computer. To run them, you need an Nvidia GPU with enough VRAM (e.g., running inference with Falcon 7B requires ~8 GB of VRAM). If you don't have that available, you can deploy the UI to Gradio Spaces on HuggingFace. It is pretty straightforward to do so. Here are some docs to get you started.
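For context, the Gradio UI boils down to something like this minimal sketch, with the answer function standing in for the actual bot call:

```python
import gradio as gr

def answer(message: str, history: list) -> str:
    # Placeholder: the real UI calls the financial bot
    # (locally or through the Beam RESTful API).
    return f"Echo: {message}"

# ChatInterface gives us a chat-style UI out of the box.
gr.ChatInterface(answer).launch()
```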
Check the code for linting issues:
make lint_check
Fix the code for linting issues (note that some issues can't be fixed automatically, so you might need to resolve them manually):
make lint_fix
Check the code for formatting issues:
make format_check
Fix the code for formatting issues:
make format_fix