[email protected]
When running the embedding server:
./llamafiler -m ~/Downloads/all-MiniLM-L6-v2.F32.gguf
and accessing the OpenAI-compatible API endpoint /v1/embeddings, the model name is not populated:
curl -H 'Content-Type: application/json' -d '{ "content":"foo"}' -X POST localhost:8080/v1/embeddings
This results in an empty model string:
{
  "object": "list",
  "model": "",
  "usage": { "prompt_tokens": 3, "total_tokens": 3 },
  "data": [
    {
      "object": "embedding",
      "index": 0,
      "embedding": [0.032392547, 0.010513297, -0.011017947, 0.06687813, -0.066597596, -0.010583614, 0.18420886, 0.03049396, ...]
    }
  ]
}
Could the model name be extracted from the GGUF metadata? Or could the name provided with the -m option be used?
Version
llamafiler v0.8.13

What operating system are you seeing the problem on?
Linux

Relevant log output
./llamafiler -m ~/Downloads/all-MiniLM-L6-v2.F32.gguf
2024-12-14T04:17:08.220113 llamafile/server/listen.cpp:33 server listen http://127.0.0.1:8080