- This repo shows how to run the llama-2-7b-hf model on CPU-only machines and how to apply quantization configurations for CPU inference.
- Note: please clone the repo https://huggingface.co/meta-llama/Llama-2-7b-hf/tree/main into a local `Llama-2-7b-hf` folder.
- Hugging Face Transformers docs - https://huggingface.co/docs/transformers/index
- Model used for the experiment - https://huggingface.co/meta-llama/Llama-2-7b-hf
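As a minimal sketch of what the repo describes, the snippet below loads the locally cloned model on CPU with Transformers and applies PyTorch post-training dynamic quantization (int8 weights for `nn.Linear` layers). The local folder name `Llama-2-7b-hf` follows the clone instruction above; the prompt text and generation parameters are illustrative assumptions, not taken from the repo.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Local clone of https://huggingface.co/meta-llama/Llama-2-7b-hf (see note above)
MODEL_DIR = "Llama-2-7b-hf"

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
# float32 on CPU; no device_map / GPU offload
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, torch_dtype=torch.float32)
model.eval()

# Post-training dynamic quantization: nn.Linear weights -> int8,
# activations quantized on the fly at inference time (CPU-friendly)
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# Illustrative prompt (assumption)
inputs = tokenizer("Hello, my name is", return_tensors="pt")
with torch.no_grad():
    out = quantized.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Dynamic quantization roughly halves the memory footprint of the linear layers versus float32 and needs no calibration data, which makes it a reasonable first quantization configuration for CPU-only machines.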
- Repository: gh-madhu1/llm-gen-ai