This repository contains the source code for handling web API calls to LLM providers such as GPT from OpenAI.
- Install the requirements:
pip install -r requirements.txt
To run the application, execute the app.py script with the following command in your terminal:
python app.py
This will start the Flask server and the application will be accessible at http://localhost:5000.
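As an illustration of what app.py might look like, here is a minimal Flask sketch. The route name (`/generate`), the payload shape, and the stubbed provider logic are assumptions for demonstration, not the repository's actual API:

```python
# Hypothetical sketch of app.py: a minimal Flask app exposing one endpoint.
# The "/generate" route and the JSON payload shape are assumptions.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/generate", methods=["POST"])
def generate():
    # Read the JSON body and pull out the prompt.
    payload = request.get_json(force=True)
    prompt = payload.get("prompt", "")
    # A real handler would forward the prompt to the provider's web API;
    # here we just echo it back as a stub.
    return jsonify({"provider": "stub", "completion": f"echo: {prompt}"})

if __name__ == "__main__":
    # Matches the URL above: the server listens on port 5000.
    app.run(host="0.0.0.0", port=5000)
```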
To build the Docker image, navigate to the root directory of the project and run the following command:
docker build -t llm-api-connector .
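For reference, a Dockerfile supporting the commands above could look like the sketch below. The base image tag and entry point are assumptions, not necessarily the repository's actual Dockerfile:

```dockerfile
# Hypothetical Dockerfile sketch; base image and entry point are assumptions.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["python", "app.py"]
```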
To run the application in Docker, run the following command:
docker run -p 5000:5000 llm-api-connector
The application will be accessible at http://localhost:5000.
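Once the server is up, you can query it from Python using only the standard library. The endpoint path (`/generate`) and JSON payload shape below are assumptions about this repository's API; adjust them to match the real routes:

```python
# Hedged client example using only the standard library.
# The "/generate" endpoint and payload shape are assumptions.
import json
import urllib.request

def build_request(prompt, base_url="http://localhost:5000"):
    """Build a POST request carrying the prompt as a JSON body."""
    data = json.dumps({"prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/generate",
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def query(prompt):
    """Send the request and decode the JSON response."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        return json.loads(resp.read().decode("utf-8"))
```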
To test the code, navigate to the test folder and run the following command:
coverage run -m unittest discover
You can then view the coverage report by running:
coverage report
If you want to see the coverage report in HTML format, run:
coverage html
and open the htmlcov/index.html file in your browser.
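A test module placed in the test folder and picked up by `unittest discover` might look like the sketch below. The helper function is hypothetical, standing in for whatever request-handling logic the repository actually tests:

```python
# Hypothetical test module runnable via `coverage run -m unittest discover`.
# extract_prompt is a stand-in for real logic in this repository.
import unittest

def extract_prompt(payload):
    """Return the prompt from a request payload, defaulting to empty."""
    return payload.get("prompt", "")

class ExtractPromptTest(unittest.TestCase):
    def test_returns_prompt(self):
        self.assertEqual(extract_prompt({"prompt": "hi"}), "hi")

    def test_defaults_to_empty(self):
        self.assertEqual(extract_prompt({}), "")

# No __main__ block is needed: `unittest discover` finds this module itself.
```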