[Local App Snippet] support non conversational LLMs #954
base: main
Conversation
Minor nit, but important, especially wrt llama.cpp.
Added test cases, since the examples are getting more complex and we want to be sure not to break any existing examples.
packages/tasks/src/local-apps.ts
Outdated
```diff
-`curl -X POST "http://localhost:8000/v1/chat/completions" \\ `,
-` -H "Content-Type: application/json" \\ `,
+`curl -X POST "http://localhost:8000/v1/chat/completions" \\`,
+` -H "Content-Type: application/json" \\`,
 ` --data '{`,
 ` "model": "${model.id}",`,
 ` "messages": [`,
 ` {"role": "user", "content": "Hello!"}`,
```
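The trailing space is the whole point of the nit: in a shell, a backslash continues the command onto the next line only when it is the very last character on the line. A quick illustration of the working form:

```shell
# A backslash as the last character of the line joins the two lines
# into a single command, so this prints "one two".
echo one \
two
# With a space after the backslash, the backslash would escape the
# space instead, and "two" would run as a separate (failing) command.
```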
```diff
-` {"role": "user", "content": "Hello!"}`,
+` {"role": "user", "content": "What is the capital of France?"}`,
```
Minor suggestion: "Hello!" looks a bit too terse. Perhaps we can unify the Instruct examples to be the same as llama-cpp-python and so on.
Co-authored-by: vb <[email protected]>
Co-authored-by: Victor Muštar <[email protected]>
Force-pushed from c81dff2 to de46212
Description
Most GGUF files on the Hub are instruct/conversational, but not all of them. Previously, the local app snippets assumed that every GGUF is instruct/conversational.
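A minimal sketch of the idea (the interface shape, the `isConversational` helper, and the `vllmCurlSnippet` function are illustrative assumptions, not the actual `local-apps.ts` API): conversational models get a `/v1/chat/completions` snippet, while base models fall back to a plain `/v1/completions` snippet.

```typescript
// Assumed, simplified model shape for illustration.
interface ModelData {
	id: string;
	tags: string[];
}

// Assumption: conversational models carry a "conversational" tag.
function isConversational(model: ModelData): boolean {
	return model.tags.includes("conversational");
}

// Build a curl snippet for a vLLM server, branching on model type.
function vllmCurlSnippet(model: ModelData): string {
	if (isConversational(model)) {
		return [
			`curl -X POST "http://localhost:8000/v1/chat/completions" \\`,
			`	-H "Content-Type: application/json" \\`,
			`	--data '{`,
			`		"model": "${model.id}",`,
			`		"messages": [`,
			`			{"role": "user", "content": "What is the capital of France?"}`,
			`		]`,
			`	}'`,
		].join("\n");
	}
	// Non-conversational (base) models: plain text completion instead.
	return [
		`curl -X POST "http://localhost:8000/v1/completions" \\`,
		`	-H "Content-Type: application/json" \\`,
		`	--data '{`,
		`		"model": "${model.id}",`,
		`		"prompt": "Once upon a time,"`,
		`	}'`,
	].join("\n");
}
```

With a base model such as meta-llama/Llama-3.2-3B (no "conversational" tag), the generated snippet targets `/v1/completions` with a `prompt` rather than `messages`.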
vLLM
https://huggingface.co/meta-llama/Llama-3.2-3B?local-app=vllm
llama.cpp
https://huggingface.co/mlabonne/gemma-2b-GGUF?local-app=llama.cpp
llama-cpp-python