This is a [LlamaIndex](https://www.llamaindex.ai) project bootstrapped with `create-llama`.
First, start up the backend as described in the backend README.
Second, run the development server of the frontend as described in the frontend README.
Open http://localhost:3000 with your browser to see the result.
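The two READMEs are the source of truth for the exact commands. As a rough sketch, a create-llama project with a Python backend and a Next.js frontend usually starts like this (the directory names and the backend port are assumptions, not confirmed by this project):

```shell
# Terminal 1: backend (Python, managed with Poetry) -- assumed layout
cd backend
poetry install
poetry run python main.py   # assumption: serves the API on port 8000

# Terminal 2: frontend (Next.js dev server)
cd frontend
npm install
npm run dev                 # serves the app on http://localhost:3000
```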
Alternatively, you can run everything with Docker. First, check the setup instructions in both the frontend and backend READMEs and set the required environment variables.
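The exact variable names are listed in the two READMEs; as an illustration, a minimal `.env` for a create-llama template often looks like this (the variable below is an assumption, not a confirmed requirement of this project):

```shell
# .env -- illustrative only; check the frontend and backend READMEs
# for the exact variables this project requires.
OPENAI_API_KEY=sk-...   # most create-llama templates need an LLM API key
```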
Make sure you have Docker installed, then run:

```shell
docker compose up
```
Then generate the embeddings:

```shell
docker compose exec backend /bin/bash
poetry run generate
exit
```
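If you prefer not to open an interactive shell, the same step can be run as a one-liner (using the `backend` service name from the command above):

```shell
docker compose exec backend poetry run generate
```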
To learn more about LlamaIndex, take a look at the following resources:
- [LlamaIndex Documentation](https://docs.llamaindex.ai) - learn about LlamaIndex (Python features).
- [LlamaIndexTS Documentation](https://ts.llamaindex.ai) - learn about LlamaIndex (TypeScript features).
You can check out the [LlamaIndexTS GitHub repository](https://github.com/run-llama/LlamaIndexTS) - your feedback and contributions are welcome!