
llama3.1 is unstable at generating executable commands via ros2 ai exec #41

Open
fujitatomoya opened this issue Sep 14, 2024 · 0 comments
Labels: enhancement New feature or request

@fujitatomoya (Owner)
Issue Description

Compared to OpenAI gpt-4o (or gpt-4), llama3.1 (via Ollama) is much less stable at generating an executable command from the user request. The generated answers are often off, and sometimes do not really make sense at all, as shown below.

```
root@tomoyafujita:~/ros2_ws/colcon_ws# unset OPENAI_API_KEY
root@tomoyafujita:~/ros2_ws/colcon_ws# export OPENAI_MODEL_NAME=llama3.1
root@tomoyafujita:~/ros2_ws/colcon_ws# export OPENAI_ENDPOINT=http://localhost:11434/v1
root@tomoyafujita:~/ros2_ws/colcon_ws# ros2 ai exec "give me all topics" --dry-run
Command Candidate: 'ros2 topic list /clock /tf /tf_static /parameter_events /rosout /rosout_agg /rosgraph/initial_node_config /topic_info /param_changed /class_loader/class_list /cmd_vel /odom /imu_data /joint_states'
root@tomoyafujita:~/ros2_ws/colcon_ws# ros2 ai exec "give me all topics" --dry-run
Command Candidate: 'ros2 topic list /clock'
root@tomoyafujita:~/ros2_ws/colcon_ws# ros2 ai exec "give me all topics" --dry-run
Command Candidate: 'ros2 topic list /clock '
root@tomoyafujita:~/ros2_ws/colcon_ws# ros2 ai exec "give me all topics" --dry-run
Command Candidate: 'ros2 topic list /clock /cmd_vel /image /joint_states /topic /type_support_msgs/string__multiarray____1_5 /rosout / rosgraph /clock /parameter_events /time'
```

The AI models can behave really differently, and the network size could also be significantly different, since the local LLM llama3.1 is only 4.7GB. Even though I am not sure how much the parameters can be adjusted, it would be worth trying an Ollama Modelfile, e.g. the sketch below.
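For reference, a minimal Modelfile sketch along those lines, deriving a more deterministic llama3.1 variant. The parameter values and the derived model name `ros2ai-llama3.1` are assumptions for illustration, not verified settings:

```
# Modelfile (sketch): derive a llama3.1 variant tuned for single-command output
FROM llama3.1

# Assumed values: lower temperature and top_p to make sampling more deterministic
PARAMETER temperature 0.2
PARAMETER top_p 0.5
```

This could be built with `ollama create ros2ai-llama3.1 -f Modelfile` and then selected via `export OPENAI_MODEL_NAME=ros2ai-llama3.1`.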

Consideration

Originally I thought the System Role configuration was missing in llama3.1 compared to gpt-4o, but according to https://www.llama.com/docs/model-cards-and-prompt-formats/llama3_1#supported-roles and https://ollama.com/blog/openai-compatibility, both support the same system role for chat completion.
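For completeness, a minimal sketch of what an OpenAI-compatible chat completion with a system role looks like against the Ollama endpoint used above (the system prompt text here is a made-up illustration; ros2ai supplies its own):

```python
from openai import OpenAI

# Ollama's OpenAI-compatible endpoint; the api_key is required by the client
# library but ignored by Ollama.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

response = client.chat.completions.create(
    model="llama3.1",
    messages=[
        # Hypothetical system prompt for illustration only.
        {"role": "system", "content": "Return exactly one executable ros2 command and nothing else."},
        {"role": "user", "content": "give me all topics"},
    ],
    temperature=0.0,  # assumption: pin sampling down for repeatable output
)
print(response.choices[0].message.content)
```

Both gpt-4o via api.openai.com and llama3.1 via Ollama accept this same request shape, so the instability does not appear to come from a missing system role.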

@fujitatomoya fujitatomoya self-assigned this Sep 14, 2024
@fujitatomoya fujitatomoya added the enhancement New feature or request label Sep 14, 2024