This is a LlamaIndex project bootstrapped with `create-llama`
and adapted to include OpenInference instrumentation for OpenAI calls.
This example exports span data simultaneously to the console
and to Arize Phoenix; however, you can run the code anywhere and use any exporter that OpenTelemetry supports.
First, start up the backend as described in the backend README.
Second, run the frontend development server as described in the frontend README.
Open http://localhost:3000 in your browser to see the result.
Alternatively, ensure that Docker is installed and running, then run `docker compose up`
to spin up services for the frontend, backend, and Phoenix. Once those services are running, open http://localhost:3000 to use the chat interface. When you're finished, run `docker compose down`
to spin the services back down.
To learn more about LlamaIndex, take a look at the following resources:
- LlamaIndex Documentation - learn about LlamaIndex (Python features).
- LlamaIndexTS Documentation - learn about LlamaIndex (TypeScript features).
You can check out the LlamaIndexTS GitHub repository - your feedback and contributions are welcome!