v2.4.25 #1038

Merged 13 commits into main on Jul 3, 2024
Conversation

ashpreetbedi (Contributor):
Branch for testing this PR: #1029

CorrM and others added 12 commits on June 26, 2024. Highlights from the commit messages:

- The updated prompt shows good results with the model 'qwen2', which sometimes returns just the JSON tool result.
- Push `Ollama` to its limits: improve the prompt that feeds tool results back to Ollama and the instructions used to generate tool calls (a rough sketch of such instructions follows this list):
  - Handled cases where models add text descriptions to tool call JSON
  - Handled cases where models split tool calls into multiple JSON objects
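As a rough illustration only, the kind of instruction block such a prompt might contain could look like the following. This is hypothetical wording; the PR's actual prompt text is not shown on this page.

```python
# Hypothetical instruction block appended to the system prompt so the
# model emits tool calls as bare JSON. Illustrative wording only; the
# PR's actual prompt text is not visible here.
TOOL_CALL_INSTRUCTIONS = """\
To call a tool, respond with only a JSON object of the form:
{"name": "<tool_name>", "arguments": {"<param>": <value>}}
Do not add any explanatory text before or after the JSON object.
If you need multiple tools, emit one JSON object per call.
"""
```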
@@ -257,8 +284,10 @@ def response_stream(self, messages: List[Message]) -> Iterator[str]:
    assistant_message_content += response_content

    # Strip out tool calls from the response
    # If the response is a tool call, it will start with a {
    if not response_is_tool_call and assistant_message_content.strip().startswith("{"):
        extract_tool_calls_result = _extract_tool_calls(assistant_message_content)
Contributor:
Function name without _

phi/llm/ollama/utils.py (review thread marked outdated and resolved)
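For context, here is a minimal sketch of what a helper along the lines of `_extract_tool_calls` could do, using the underscore-free name the reviewer suggests. It assumes the helper scans the accumulated content for balanced JSON objects, covering both commit-message cases (surrounding text and multiple split objects); the real implementation lives in phi/llm/ollama/utils.py and may differ, and the brace counting below ignores braces inside JSON string values.

```python
import json
from typing import List, Optional, Tuple


def extract_tool_calls(content: str) -> Tuple[Optional[List[dict]], str]:
    """Collect every balanced {...} block in `content` that parses as JSON.

    Hypothetical sketch, not the PR's code. Returns the parsed tool-call
    objects (or None if there were none) plus the leftover text with the
    tool-call JSON stripped out.
    """
    tool_calls: List[dict] = []
    text_parts: List[str] = []
    depth = 0
    start: Optional[int] = None
    last_end = 0
    for i, ch in enumerate(content):
        if ch == "{":
            if depth == 0:
                start = i
            depth += 1
        elif ch == "}" and depth > 0:
            depth -= 1
            if depth == 0 and start is not None:
                candidate = content[start : i + 1]
                try:
                    tool_calls.append(json.loads(candidate))
                    # Keep the surrounding text so it can still be shown.
                    text_parts.append(content[last_end:start])
                    last_end = i + 1
                except json.JSONDecodeError:
                    pass  # not valid JSON; leave it in the text
                start = None
    text_parts.append(content[last_end:])
    return (tool_calls or None), "".join(text_parts)


# Example: explanatory text plus two separate tool-call objects
calls, text = extract_tool_calls(
    'Calling tools: {"name": "a", "arguments": {}} {"name": "b", "arguments": {}}'
)
# calls == [{"name": "a", "arguments": {}}, {"name": "b", "arguments": {}}]
# text  == 'Calling tools:  '
```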
ashpreetbedi merged commit a5818f9 into main on Jul 3, 2024.
1 check passed