SyntaxError: JSON.parse: unexpected end of data #70
Comments
Which model were you using?
I tried with mistral-large, mistral-nemo, and also llama3.2. I saw on the Ollama website that JSON can be specified as the preferred format when the call is made to the generate endpoint, as in this example: `curl http://localhost:11434/api/generate -d '{ ... }'`, but I cannot find where to put this in the code base.
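For reference, the full request from the Ollama docs looks roughly like this (the model name is just an example):

```
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue? Respond using JSON",
  "format": "json",
  "stream": false
}'
```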
Do you know the line number where the issue occurs? Might not be related to the LLM at all.
Hello, and thank you for the great product.
I experience this trouble when I try to use it with local Llama models.
At first it starts to generate some code, and somewhere in the middle I receive this error:
Unhandled Runtime Error
SyntaxError: JSON.parse: unexpected end of data at line 1 column 1 of the JSON data
I tried many different models, but every time I get the same error.
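One way to see what the model actually returned would be to guard the parse. A minimal sketch (a hypothetical helper, not code from this repo):

```ts
// Hypothetical debugging helper: log the raw model output before parsing,
// so a truncated or empty completion is visible instead of only the
// opaque SyntaxError.
function safeParse(raw: string): unknown {
  try {
    return JSON.parse(raw)
  } catch (err) {
    console.error('Model returned non-JSON or truncated output:', JSON.stringify(raw))
    throw err
  }
}
```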
I saw that in the models.ts file there is:

```ts
if (providerId === 'fireworks') {
  return 'json'
}
```
So I tried to set:

```ts
if (providerId === 'ollama') {
  return 'json'
}
```

but I still experience the same error.
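If the provider call goes through Ollama's native API, the `format` field would need to end up in the request body itself. A minimal sketch of what that looks like, assuming a direct call to the local endpoint (the function name and error handling are illustrative, not from this repo):

```ts
// Hypothetical sketch: calling Ollama's generate endpoint directly with
// "format": "json" so the model is constrained to emit valid JSON.
async function generateJson(model: string, prompt: string): Promise<unknown> {
  const res = await fetch('http://localhost:11434/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model, prompt, format: 'json', stream: false }),
  })
  if (!res.ok) throw new Error(`Ollama request failed: ${res.status}`)
  const data = await res.json()
  // With stream: false, the body is a single object whose "response"
  // field holds the generated text.
  return JSON.parse(data.response)
}
```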
Maybe the models do not respond with properly formatted JSON, or it is something else.
I also tried adding "You should respond in JSON only" to prompts.ts, but it did not work.
What could be the reason?
Is anyone able to help with this issue?
Once again I want to express my deep appreciation for this great project, but running local models is a must-have for my use case, which is why I decided to open this issue.