Full-stack LLM application with OpenAI, Flask, React, and Pinecone to help farmers
## Krishi Mitra - LLM ChatBot

Krishi Mitra addresses the challenges faced by rural Indian farmers by providing a platform for agricultural education and guidance. It offers access to modern farming techniques, personalized mentorship, a 24/7 IVR helpline, and engagement with agricultural experts. By bridging the gap between traditional practices and modern advancements, Krishi Mitra empowers farmers to improve productivity, optimize resources, and enhance their livelihoods.

### Features
- Automated calling using the Vonage API.
- AI chatbot specifically trained on agriculture.
- Support for multiple languages.
- Expert support for users.
- Education without internet access.
### Problems Addressed

- Lack of immediate support and guidance.
- Limited access to agricultural resources.
- Dependency on traditional farming methods.
### Architecture

- Backend (Flask): Handles the logic to scrape the website and calls OpenAI's Embeddings API to create embeddings from the website's text. It also stores these embeddings in the vector database (Pinecone) and retrieves the relevant text to help the LLM answer the user's question.
- OpenAI: We'll call two different APIs from OpenAI: (1) the Embeddings API to embed the text of the website as well as the user's question, and (2) the Chat Completions API to get an answer from GPT-4 to send back to the user.
- Pinecone: The vector database that we'll use to (1) store the embeddings of the website's text, and (2) retrieve the most similar text chunks for constructing the prompt sent to the LLM (see the sketch after this list).
- Frontend (React + Tailwind CSS): The interface that the user interacts with to input a URL and ask questions about the webpage.
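
As a rough illustration of how these pieces fit together, the sketch below walks the full flow: scrape a page, embed its text, upsert the vectors to Pinecone, then retrieve the most relevant chunks and ask GPT-4. It assumes the current `openai` and `pinecone` Python SDKs; the index name `krishi-mitra`, the chunking scheme, and the embedding model are illustrative assumptions, not the project's actual code.

```python
import os
import requests
from bs4 import BeautifulSoup
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()                                   # reads OPENAI_API_KEY from the environment
pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])
index = pc.Index("krishi-mitra")                           # hypothetical index name

def scrape_text(url: str) -> str:
    """Fetch a page and strip it down to plain text."""
    html = requests.get(url, timeout=10).text
    return BeautifulSoup(html, "html.parser").get_text(separator=" ", strip=True)

def chunk_text(text: str, size: int = 1000) -> list[str]:
    """Naive fixed-size chunking; the real project may chunk differently."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(texts: list[str]) -> list[list[float]]:
    """Call OpenAI's Embeddings API (the model choice here is an assumption)."""
    resp = openai_client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [d.embedding for d in resp.data]

def ingest(url: str) -> None:
    """Scrape, embed, and upsert the page's chunks into Pinecone."""
    chunks = chunk_text(scrape_text(url))
    vectors = [
        {"id": f"{url}-{i}", "values": v, "metadata": {"text": c}}
        for i, (c, v) in enumerate(zip(chunks, embed(chunks)))
    ]
    index.upsert(vectors=vectors)

def answer(question: str) -> str:
    """Embed the question, pull the closest chunks, and ask GPT-4."""
    result = index.query(vector=embed([question])[0], top_k=3, include_metadata=True)
    context = "\n\n".join(m.metadata["text"] for m in result.matches)
    chat = openai_client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return chat.choices[0].message.content
```

A Flask route would then simply call `ingest()` when the user submits a URL and `answer()` for each question the user asks.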
## Setup

### Install Python dependencies

```bash
pip install -r requirements.txt
```
### Install React dependencies

```bash
cd client
npm install
```
### Create .env file

```
OPENAI_API_KEY=<YOUR_API_KEY>
PINECONE_API_KEY=<YOUR_API_KEY>
```
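
The Flask backend needs these keys at runtime. A minimal way to load them, assuming the project uses `python-dotenv` (an assumption; the repo's own code may load them differently):

```python
# Load API keys from .env into the process environment (assumes python-dotenv is installed).
import os
from dotenv import load_dotenv

load_dotenv()  # reads the .env file in the current working directory

OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]      # fails fast if the key is missing
PINECONE_API_KEY = os.environ["PINECONE_API_KEY"]
```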
### Start the Flask server

```bash
# In root directory
python run.py
```
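
For reference, a minimal `run.py` for a setup like this could look as follows; the route name, payload shape, port, and debug flag are assumptions, not the repo's actual entry point.

```python
# Minimal Flask entry-point sketch; the project's real run.py may be structured differently.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/api/ask", methods=["POST"])
def ask():
    # Hypothetical endpoint: the real route and payload are defined by the repo.
    question = request.get_json().get("question", "")
    return jsonify({"answer": f"Echo: {question}"})  # placeholder; real logic calls the RAG pipeline

if __name__ == "__main__":
    # Development server; the React app (npm start) talks to this backend.
    app.run(host="0.0.0.0", port=5000, debug=True)
```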
### Start the React app

```bash
cd client
npm start
```