
LlamaTutor powered by LlamaEdge/Gaia

This project is a fork of the original LlamaTutor project.

Changes in This Fork

In this fork, I have introduced several enhancements to allow the customization of the server and model used by the application. The primary changes are as follows:

Environment Variables

The .env file uses the following environment variables; the three LLAMAEDGE_* variables are new in this fork:

  • SERPER_API_KEY: The Serper API key used to search for content online. You can also use BING_API_KEY here.
  • HELICONE_API_KEY: The Helicone API key for observability.
  • LLAMAEDGE_BASE_URL: Base URL of the LLM API.
  • LLAMAEDGE_MODEL_NAME: Name of the model to be used.
  • LLAMAEDGE_API_KEY: API key for accessing the LLM services.

These variables let you point the application at your own LLM server and model. The default values are:

LLAMAEDGE_BASE_URL=https://llama.us.gaianet.network/v1
LLAMAEDGE_MODEL_NAME=llama
LLAMAEDGE_API_KEY=LlamaEdge
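
For illustration, here is a minimal TypeScript sketch (not the fork's actual code) of what these three values configure. It assumes the server exposes an OpenAI-compatible /chat/completions endpoint, as LlamaEdge/Gaia nodes do, and the helper name askTutor is made up for this example:

// sketch.ts - illustrative only; assumes Node 18+ (global fetch) and an
// OpenAI-compatible chat completions endpoint at LLAMAEDGE_BASE_URL.
const baseUrl = process.env.LLAMAEDGE_BASE_URL ?? "https://llama.us.gaianet.network/v1";
const model = process.env.LLAMAEDGE_MODEL_NAME ?? "llama";
const apiKey = process.env.LLAMAEDGE_API_KEY ?? "LlamaEdge";

// Hypothetical helper: sends one user message and returns the model's reply.
async function askTutor(question: string): Promise<string> {
  const res = await fetch(`${baseUrl}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: question }],
    }),
  });
  if (!res.ok) throw new Error(`LLM request failed: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}

askTutor("Explain photosynthesis in one paragraph.").then(console.log);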

How to Use

1. Clone the Repository and Navigate to the Project Directory:

git clone https://github.com/second-state/llamatutor.git
cd llamatutor

2. Create and Configure the .env File:

cp .example.env .env

Update the .env file with your desired values for the new variables:

LLAMAEDGE_BASE_URL=https://your-custom-url/v1
LLAMAEDGE_MODEL_NAME=your-custom-model
LLAMAEDGE_API_KEY=your-api-key

3. Get API Keys: Apply for a Serper API key and a Helicone API key, then set SERPER_API_KEY and HELICONE_API_KEY in the .env file.
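
A fully configured .env might then look like this (all values below are placeholders, not real keys):

SERPER_API_KEY=your-serper-api-key
HELICONE_API_KEY=your-helicone-api-key
LLAMAEDGE_BASE_URL=https://your-custom-url/v1
LLAMAEDGE_MODEL_NAME=your-custom-model
LLAMAEDGE_API_KEY=your-api-key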

4. Install Dependencies:

npm install

or

yarn

5. Run the Application:

npm run dev

or

yarn dev
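
If the fork keeps the original Next.js setup, the dev server listens on http://localhost:3000 by default; open it in a browser to try the tutor against your configured server and model.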

By configuring these environment variables, you can point the application to your own LLM server and model, providing greater flexibility and customization.
