
"Instructor does not support multiple tool calls" Error When Using List[Model] #1111

Open
andytriboletti opened this issue Oct 23, 2024 · 2 comments
andytriboletti commented Oct 23, 2024

  • This is actually a bug report.
  • I have tried searching the documentation and have not found an answer.

What Model are you using?

Llama3.2

Describe the bug
When trying to handle multiple files using a List field in my response model, I'm getting an error about multiple tool calls.

Code:

class CodeFile(BaseModel):
    filename: str
    content: str

class MultiFileResponse(BaseModel):
    files: List[CodeFile]

Error message:
Error: Instructor does not support multiple tool calls, use List[Model] instead

What's the correct way to structure a model that returns multiple files (each with a filename and content) in a single response? The error message suggests using List[Model], but I'm already using List[CodeFile]. Would appreciate guidance on the correct way to structure this.

To Reproduce

from pydantic import BaseModel
from typing import List
import instructor
from openai import OpenAI

class CodeFile(BaseModel):
    filename: str
    content: str

class MultiFileResponse(BaseModel):
    files: List[CodeFile]

def main():
    client = OpenAI(
        base_url="http://localhost:11434/v1",
        api_key="ollama"
    )
    
    client = instructor.patch(client)
    
    try:
        response = client.chat.completions.create(
            model="llama3.2",
            messages=[
                {"role": "system", "content": "You are a Python code generator. Return multiple Python files."},
                {"role": "user", "content": "Generate a fibonacci generator with tests"}
            ],
            response_model=MultiFileResponse
        )
        print("Response:", response)
    except Exception as e:
        print("Error:", e)

if __name__ == "__main__":
    main()

Run this script.

Expected behavior
It outputs some files.

ivanbelenky (Contributor) commented Oct 23, 2024

As in this issue, your client is created incorrectly.

If you read the docs closely, you will notice that the OpenAI client is leveraged to build an instructor.Instructor instance for Ollama in this case. Furthermore, though it may not be crystal clear, the mode there is set to instructor.Mode.JSON.

For LLMs without tool-calling support, not overriding the default TOOLS mode means instructor attempts to parse tool-call responses, finds none, and throws the error you saw.
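To illustrate the difference, here is a minimal stdlib sketch (not instructor's actual internals): in JSON mode the parser reads one JSON object from the plain message body, whereas TOOLS mode inspects tool-call entries that a model served via Ollama never emits.

```python
import json
from dataclasses import dataclass
from typing import List

@dataclass
class CodeFile:
    filename: str
    content: str

def parse_json_mode(body: str) -> List[CodeFile]:
    # JSON mode: the structured output arrives as a single JSON object in
    # the message body, so no tool calls are ever expected.
    data = json.loads(body)
    return [CodeFile(**f) for f in data["files"]]

# TOOLS mode, by contrast, would look at message.tool_calls; when the model
# returns none, the parser fails with the error reported above.

example = '{"files": [{"filename": "fib.py", "content": "def fib(n): ..."}]}'
files = parse_json_mode(example)
print(files[0].filename)  # fib.py
```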

Hope this clarifies the issue.

andytriboletti (Author) commented Oct 30, 2024

Thank you for the link. I was able to get my program to run. However, I noticed the program took over 10 minutes on my MacBook Pro. Is there anything wrong? How can I speed it up? Should I open a new bug? Thank you for your help.

time python aliendevtool.py JumpSearch
Verified jumpsearch_recursive.py was created successfully
Verified jumpsearch_iterative.py was created successfully
python aliendevtool.py JumpSearch  2.06s user 0.78s system 0% cpu 10:56.93 total

Here is the code I used: https://github.com/greenrobotllc/aliendevtool/blob/main/aliendevtool.py It creates a recursive and iterative version of different algorithms.
