useful_commands.txt (forked from aritrasen87/LLM_RAG_Model_Deployment)
python3 -m venv .venv
source .venv/bin/activate
which python
python3 -m pip install --upgrade pip
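As a hedged follow-up to the commands above: once the virtual environment is active, dependencies are usually pinned to a requirements.txt so the setup can be reproduced elsewhere (the file name is the common pip convention, not something this file specifies):

```shell
# Create and activate an isolated environment (same as the commands above)
python3 -m venv .venv
source .venv/bin/activate

# Record the exact versions currently installed in the environment
python3 -m pip freeze > requirements.txt

# Recreate the same environment on another machine
python3 -m pip install -r requirements.txt
```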
model_path (GGUF model) = https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF
Kaggle dataset link = https://www.kaggle.com/datasets/harshsinghal/nlp-and-llm-related-arxiv-papers
Embedding model = https://huggingface.co/BAAI/bge-base-en-v1.5
Gradio ChatInterface docs = https://www.gradio.app/docs/chatinterface
1. Creating a virtual environment and managing dependencies.
2. What a .env file is and how to load secrets from it.
3. How to configure and load LLM models from a local folder and via the Together API.
4. How to modularize your code and create a vector DB.
5. Pydantic, and FastAPI from concepts to code.
6. What Gradio is and how to create UIs using Gradio.
7. Combine everything and create a fully functional LLM app.
8. Dockerize your solution. (Optional)
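For the .env step in the list above, a minimal sketch of loading secrets, using only the standard library (in practice the python-dotenv package's load_dotenv is the usual choice; the TOGETHER_API_KEY name here is an illustrative assumption, not something this file specifies):

```python
import os

def load_env(path: str = ".env") -> None:
    """Parse simple KEY=VALUE lines from a .env file into os.environ.

    Minimal illustration only; the python-dotenv package handles
    quoting, comments, and edge cases properly.
    """
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            # Skip blank lines and comments
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            # Don't overwrite variables already set in the real environment
            os.environ.setdefault(key.strip(), value.strip())

if os.path.exists(".env"):
    load_env()
    # TOGETHER_API_KEY is a hypothetical secret name for illustration
    api_key = os.environ.get("TOGETHER_API_KEY")
```

Keeping secrets in .env (and out of version control) means the same code runs locally and in Docker, where the values come from the container environment instead.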