llama3.1 is not stable at generating executable commands via ros2 ai exec
#41
Labels: enhancement (New feature or request)
Issue Description
Compared to OpenAI's gpt-4o (or gpt-4), llama3.1 (via ollama) is quite unstable at generating executable commands from user requests: the generated answers are often off target, and sometimes they make no sense at all, as shown below. The AI models themselves can be very different, and the network sizes differ significantly, since the local llama3.1 model is only 4.7 GB. Even though I am not sure how much the parameters can be adjusted, it would be worth trying an Ollama Modelfile.

Consideration
Originally I thought the System Role configuration was missing for llama3.1 compared to gpt-4o, but according to https://www.llama.com/docs/model-cards-and-prompt-formats/llama3_1#supported-roles and https://ollama.com/blog/openai-compatibility, both support the same system role for chat completions.
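As a concrete starting point for the Modelfile idea above, a minimal sketch; the parameter values and system prompt wording here are illustrative guesses, not tested settings:

```
FROM llama3.1

# Lower temperature to make command generation more deterministic
PARAMETER temperature 0.2
# Narrow the sampling distribution; values are illustrative
PARAMETER top_k 20
PARAMETER top_p 0.5

# Bake a system prompt into the custom model (wording is a placeholder)
SYSTEM You are a generator of executable ros2 commands. Reply with the command only.
```

The customized model could then be built with something like `ollama create ros2ai-llama31 -f Modelfile` (the model name is hypothetical) and selected in place of the stock llama3.1.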
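To double-check the system-role point, a small sketch showing that the same chat-completions payload shape is sent to both backends, so a missing system role is unlikely to explain the difference. The prompts and helper name here are assumptions for illustration; only the model name (and base URL, for Ollama) differs between the two requests:

```python
import json

def build_chat_request(model: str, system_prompt: str, user_prompt: str) -> dict:
    # Identical message structure for OpenAI and for Ollama's
    # OpenAI-compatible endpoint (/v1/chat/completions), including
    # the "system" role that both backends support.
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }

openai_body = build_chat_request("gpt-4o", "Generate executable ros2 commands only.", "Show all topics.")
ollama_body = build_chat_request("llama3.1", "Generate executable ros2 commands only.", "Show all topics.")

# Same payload apart from the model name; Ollama would receive it at
# http://localhost:11434/v1/chat/completions
assert json.dumps(openai_body, sort_keys=True).replace("gpt-4o", "llama3.1") == json.dumps(ollama_body, sort_keys=True)
```

Since the request shape is the same, the instability more likely comes from the model itself, which is what the Modelfile tuning above tries to address.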