Why is this generating wildly nonsensical answers? #67
mindphluxnet started this conversation in General
For whatever reason, every time I try to use the Python bindings the only predictable outcome is that the answers make no sense. I have the feeling the model barely understands anything at all and just comes up with random stuff. Maybe I am expecting too much, but I think one could expect at least somewhat predictable responses. The randomness is especially annoying because I wanted to develop an application using pyllamacpp, and that application relies on properly formatted answers. But even telling the AI "you have to answer in JSON and the schema to use is this", followed by the JSON schema to use, results in unusable responses that are always cut short as well.
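For context, the JSON attempt looks roughly like this. The schema below is just a stand-in for the real one my application uses, and the prompt wording is paraphrased:

```python
import json

# Stand-in schema; the real one describes my application's data.
schema = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "poem": {"type": "string"},
    },
    "required": ["title", "poem"],
}

# The instruction plus the schema is prepended to the actual request.
prompt = (
    "You have to answer in JSON and the schema to use is this:\n"
    + json.dumps(schema, indent=2)
    + "\n\nWrite a poem about yourself."
)

# The application then tries to parse the reply; the cut-off and
# off-topic answers make this fail almost every time.
def parse_reply(reply: str):
    try:
        return json.loads(reply)
    except json.JSONDecodeError:
        return None  # unusable response
```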
My code is basically the example, with some altered settings I found in another thread, and I am using ggml-vicuna-13b-4bit-rev1.bin as the model.
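Roughly this, reconstructed from memory, so treat the class and parameter names (ggml_model, n_ctx, n_predict, n_threads, the callback) as following the README example rather than my exact script:

```python
from pyllamacpp.model import Model

# Print tokens as they are generated (callback style used by the
# pyllamacpp examples; the signature may differ between versions).
def new_text_callback(text: str):
    print(text, end="", flush=True)

# Model path and settings are placeholders for the values I actually
# copied from the other thread.
model = Model(ggml_model="./models/ggml-vicuna-13b-4bit-rev1.bin", n_ctx=512)
model.generate(
    "Write a poem about yourself.",
    n_predict=128,
    new_text_callback=new_text_callback,
    n_threads=8,
)
```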
Several examples:
The prompt entered was: "write a poem about yourself". And it did, sort of ... but not only is the result incomplete, it's also not what I asked for. Who said anything about Edgar Allan Poe? Why does the AI think it should bring him up?
So I tried the prompt "Do exactly what I tell you, nothing more, nothing less. Write a poem about yourself." The result?
Again, the answer is cut short, but what on earth is the AI on about? A questionnaire?
Another example:
While, technically, the answer is somewhere in there, at no point was the AI asked to answer in the context of a multiple-choice question. What is going on?