Groq/Llama-3-Groq-8B-Tool-Use

Meta developed and publicly released the Llama 3 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 8 billion to 70 billion parameters. Llama 3 is an auto-regressive language model that uses an optimized transformer architecture.

In this deployment, the Groq/Llama-3-Groq-8B-Tool-Use model is used, which generates a continuation of the incoming text. To access this model you must be granted access by Meta, which you can request at https://huggingface.co/Groq/Llama-3-Groq-8B-Tool-Use.
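Once your access request is approved, you can verify the token and pull the model locally, for example with the Hugging Face `transformers` library. This is a minimal sketch, assuming `transformers` is installed and the `HF_TOKEN` environment variable holds your access token; the prompt is only an illustration.

```python
import os


def get_hf_token() -> str:
    """Read the Hugging Face access token from the HF_TOKEN environment variable."""
    token = os.environ.get("HF_TOKEN")
    if not token:
        raise RuntimeError(
            "HF_TOKEN is not set; create a token in your Hugging Face account "
            "settings after your access request for the model is approved."
        )
    return token


def generate(prompt: str, max_new_tokens: int = 32) -> str:
    """Generate a continuation of the prompt (downloads the weights on first use)."""
    # Imported lazily: transformers is a heavy optional dependency for this sketch.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="Groq/Llama-3-Groq-8B-Tool-Use",
        token=get_hf_token(),
    )
    return generator(prompt, max_new_tokens=max_new_tokens)[0]["generated_text"]
```

Running `generate("The capital of France is")` returns the prompt followed by the model's continuation; without a valid, approved token the download is refused by the Hub.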

Deploying

Use this SDL to deploy the application on Akash. You will need to enter your Hugging Face access token in the "HF_TOKEN" environment variable, and you can adjust the parameters passed to the "vllm serve" command to match your hardware configuration (refer to the vLLM documentation for the available parameters). Lastly, you can add additional debug flags through environment variables (consult the vLLM and PyTorch documentation for these as well).
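Once the deployment is up, `vllm serve` exposes an OpenAI-compatible HTTP API. The sketch below, using only the Python standard library, builds and sends a completion request; the base URL is an assumption and should be replaced with the host and port your Akash provider exposes.

```python
import json
import urllib.request


def build_completion_request(base_url: str, prompt: str,
                             max_tokens: int = 64) -> urllib.request.Request:
    """Build a POST request for vLLM's OpenAI-compatible /v1/completions endpoint."""
    payload = {
        "model": "Groq/Llama-3-Groq-8B-Tool-Use",
        "prompt": prompt,
        "max_tokens": max_tokens,
    }
    return urllib.request.Request(
        f"{base_url}/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def complete(base_url: str, prompt: str) -> str:
    """Send the prompt to the deployed server and return the generated text."""
    with urllib.request.urlopen(build_completion_request(base_url, prompt)) as resp:
        body = json.load(resp)
    return body["choices"][0]["text"]
```

For example, `complete("http://<provider-host>:<port>", "Hello")` returns the model's continuation of "Hello"; the same endpoint also accepts the standard OpenAI client libraries pointed at that base URL.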