Why are GPT4ALL's results reproducible? #2944
TessDejaeghere started this conversation in General
Hi all!
I experimented with using GPT4ALL for named entity recognition on literary-historical texts (if anyone wants to have a look: https://github.com/GhentCDH/CLSinfra/tree/main/NER_LLM). Unlike ChatGPT or Mixtral (accessed through their respective APIs), GPT4ALL's models always gave me reproducible results for the same prompt and data.
If I understand it correctly, generative LLMs are probabilistic in nature: the most probable next token is not always the chosen candidate, which explains the variance and "creativity" of the output. The temperature was set to 0 for all experiments.
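To illustrate what I mean about temperature, here is a minimal sketch of next-token selection (the logit values and the sampling helper are made up for illustration, this is not GPT4ALL's actual code): at temperature 0 decoding collapses to a greedy argmax, which is deterministic for the same model and input, while temperature sampling draws from a distribution and can differ between runs.

```python
import numpy as np

# Hypothetical scores for three candidate next tokens (illustration only).
logits = np.array([2.0, 1.5, 0.3])

def sample_next_token(logits, temperature, rng):
    if temperature == 0:
        # Greedy decoding: always pick the highest-scoring token,
        # so repeated runs on the same input give the same output.
        return int(np.argmax(logits))
    # Temperature sampling: scale logits, apply softmax, then draw at random.
    # This random draw is where run-to-run variance comes from.
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

rng = np.random.default_rng()
print(sample_next_token(logits, 0.0, rng))  # always token 0
print(sample_next_token(logits, 0.8, rng))  # may vary between runs
```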
Why are GPT4ALL's models giving me reproducible output? What am I missing?