
SyntaxError: JSON.parse: unexpected end of data #70

Open
stoykovstoyk opened this issue Oct 10, 2024 · 4 comments

Comments

@stoykovstoyk

Hello and thank you for the great product.

I run into this problem when I try to use it with local models via Ollama.

At first it starts generating some code, and then somewhere in the middle I receive this error:

[Screenshot of the error overlay]

Unhandled Runtime Error
SyntaxError: JSON.parse: unexpected end of data at line 1 column 1 of the JSON data
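
(For what it's worth, this wording appears to be exactly what Firefox's JSON.parse throws when it is handed an empty string, so the payload being parsed may be empty rather than malformed:)

// Same message reproduced in a Firefox console:
JSON.parse('')
// SyntaxError: JSON.parse: unexpected end of data at line 1 column 1 of the JSON data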

I tried many different models, but every time I get the same error.

I saw that in the models.ts file there is:
if (providerId === 'fireworks') {
return 'json'
}

So I tried setting:
if (providerId === 'ollama') {
return 'json'
}

but I still experience the same error.

Maybe the models do not respond with properly formatted JSON, or something else is going on.
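
To check that, a small debugging wrapper could log the raw payload before parsing (a minimal sketch; safeParse is a name I made up, not something from this repo):

// Hypothetical helper: surface the raw payload instead of crashing,
// so an empty or truncated model response becomes visible in the console.
function safeParse<T>(raw: string): T | undefined {
  try {
    return JSON.parse(raw) as T
  } catch {
    console.error('JSON.parse failed; raw payload was:', JSON.stringify(raw))
    return undefined
  }
}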

I also tried adding "You should respond in JSON only" to prompts.ts, but it did not work.

What could be the reason?

Is anyone able to help with this issue?

Once again I want to express my deep appreciation for this great project, but running local models is a must-have for my use case, which is why I decided to open this issue.


@mishushakov
Member

Which model were you using?

@stoykovstoyk
Author

stoykovstoyk commented Oct 10, 2024

I tried mistral-large, mistral-nemo, and also llama3.2.
None of them worked.

I saw on the Ollama website that JSON can be specified as the preferred format when calling the generate endpoint, like in this example:

curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "What color is the sky at different times of the day? Respond using JSON",
  "format": "json",
  "stream": false
}'

but I cannot find where to put this in the code base.
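
In case it helps, here is the same call expressed as plain TypeScript (just a sketch against Ollama's documented API, not wired into this repo's code):

// Minimal sketch: call Ollama's generate endpoint directly with JSON mode.
// Assumes a local Ollama server on its default port (11434).
const res = await fetch('http://localhost:11434/api/generate', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'llama3.2',
    prompt: 'What color is the sky at different times of the day? Respond using JSON',
    format: 'json', // ask Ollama to constrain the output to valid JSON
    stream: false,
  }),
})
const data = await res.json()
console.log(data.response) // the model's answer; a JSON string when format is 'json'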

@mishushakov
Member

Do you know the line number where the issue occurs? Might not be related to the LLM at all
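
If you don't, one quick way to surface it (a throwaway sketch; parseWithTrace is a made-up name, assuming you can edit your local checkout) is to wrap the failing parse and log the stack:

// Throwaway wrapper: log where the parse blew up, then re-throw.
function parseWithTrace(raw: string): unknown {
  try {
    return JSON.parse(raw)
  } catch (err) {
    // err.stack includes the file and line where this parse was invoked
    console.error((err as Error).stack)
    throw err
  }
}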
