
eval

Evaluate your LLM model outputs.

Overview

This project is built on deepeval, a framework for evaluating LLM outputs.

Features

Installation

  1. Install deepeval and log in:

```shell
pip install -U deepeval
deepeval login --confident-api-key <YOUR_DEEPEVAL_KEY>
```

Dependencies

Make sure the `OPENAI_API_KEY` environment variable is set:

```shell
export OPENAI_API_KEY=<YOUR_OPEN_AI_KEY>
```

Usage

To run a deepeval evaluation on a test file:

```shell
deepeval test run <PROGRAM.py>
```

How it works

TBD
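Until this section is filled in, here is a toy, stdlib-only sketch of the general shape of such an evaluation loop: score each (input, output) pair and compare against a threshold. The word-overlap scorer is purely illustrative; deepeval's real metrics use an LLM judge, not word overlap.

```python
def relevancy_score(question: str, answer: str) -> float:
    """Toy score: fraction of question words that also appear in the answer."""
    q_words = set(question.lower().split())
    a_words = set(answer.lower().split())
    if not q_words:
        return 0.0
    return len(q_words & a_words) / len(q_words)

def evaluate(cases: list[dict], threshold: float = 0.5) -> list[bool]:
    """Return pass/fail for each case, mirroring a metric-threshold check."""
    return [
        relevancy_score(c["input"], c["actual_output"]) >= threshold
        for c in cases
    ]

cases = [
    {"input": "capital of France",
     "actual_output": "The capital of France is Paris."},
    {"input": "capital of France",
     "actual_output": "I like turtles."},
]
print(evaluate(cases))  # → [True, False]
```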

Notes
