
evals and benchmarking structure #21

Merged: 29 commits from tstesco/evals-benchmarking-structure into main on Nov 5, 2024

Conversation

tstescoTT (Contributor) commented:

Change log:

  • adding evals instructions for vLLM with lm-evaluation-harness
  • adding benchmark instructions for vLLM offline_inference_tt.py
  • adding a locust dir placeholder for stress testing (a sketch of a possible locustfile follows this list)
  • adding docs for repo development setup
  • adding pre-commit configuration for ruff linting, addressing "Add linting / formatting checks on PRs" (#7)
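
Since the locust directory lands in this PR as a placeholder only, here is a minimal sketch of what a locustfile for stress-testing a vLLM server could look like. It assumes an OpenAI-compatible /v1/completions endpoint; the host, model id, and payload are illustrative assumptions, not part of this PR.

```python
# locustfile.py -- hypothetical sketch; the endpoint path, model id, and
# payload shape are assumptions and are not defined by this PR.
from locust import HttpUser, task, between


class CompletionUser(HttpUser):
    # Pause 1-3 seconds between tasks to approximate user pacing.
    wait_time = between(1, 3)

    @task
    def completion(self):
        # POST a single completion request to an assumed
        # OpenAI-compatible vLLM endpoint.
        self.client.post(
            "/v1/completions",
            json={
                "model": "meta-llama/Llama-3.1-70B-Instruct",  # assumed model id
                "prompt": "What is the capital of Texas?",
                "max_tokens": 128,
            },
        )
```

Run it against a local server with something like `locust -f locustfile.py --host http://localhost:8000` (host and port assumed); the Locust web UI then lets you ramp up users and watch request latency and failure rates.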

@tstescoTT force-pushed the tstesco/evals-benchmarking-structure branch from 446e74a to ee888d9 on October 24, 2024 18:38.
Resolved review threads (outdated): evals/README.md (×5), requirements-dev.txt (×1)
@milank94 (Contributor) left a comment:

Pending the responses to @mvanniasingheTT's comments, the PR looks good to me.

@tstescoTT merged commit 6af6afb into main on Nov 5, 2024; 1 check passed.
3 participants