
how to change raw prompt to my custom prompt #963

Open
xmxxiong opened this issue Aug 30, 2024 · 4 comments

Comments

@xmxxiong

Is your feature request related to a problem? Please describe.
Yes, I want to set a system prompt that replaces the raw prompt.

Describe the solution you'd like
Like this:

messages=[
    {
        "role": "system",
        "content": "you are a xxxx, you need xxxx, and provide the parsed objects in json that match the following json_schema{json_schema}",
    },
    {"role": "user", "content": f"Generate a {5} synthetic users"},
],

But right now I get:

you are a xxxx, you need xxxx, and provide the parsed objects in json that match the following json_schema{json_schema} As a genius expert, your task is to understand the content and provide the parsed objects in json that match the following json_schema:xxxx,

What I want to get is:
you are a xxxx, you need xxxx, and provide the parsed objects in json that match the following json_schema{json_schema}
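A simplified illustration of the behavior being described (this is a sketch, not instructor's actual implementation): the library appends its own extraction instruction after the user-supplied system prompt, which produces the concatenated text above.

```python
# Hypothetical sketch of the reported behavior, NOT instructor's real code:
# the library appends its own "raw prompt" after the custom system prompt.
RAW_PROMPT = (
    "As a genius expert, your task is to understand the content and "
    "provide the parsed objects in json that match the following json_schema:"
)

def build_system_prompt(custom_prompt: str, schema: str) -> str:
    # Current (reported) behavior: custom prompt + appended raw prompt.
    return f"{custom_prompt} {RAW_PROMPT}{schema}"

def desired_system_prompt(custom_prompt: str) -> str:
    # Desired behavior: only the custom prompt is sent.
    return custom_prompt

custom = "you are a xxxx, you need xxxx"
print(build_system_prompt(custom, '{"type": "object"}'))
print(desired_system_prompt(custom))
```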


[screenshot attached]

@ivanleomk
Collaborator

Hey @xmxxiong, which client are you currently using?

@xmxxiong
Author

xmxxiong commented Aug 30, 2024

Ollama; the model is qwen2.

@xmxxiong
Author

Maybe the raw prompt should be stored in a unified location so it's easy to customize.
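A unified store for default prompts, as suggested, could look something like this small registry (a hypothetical sketch, not part of instructor's API):

```python
# Hypothetical prompt registry, NOT part of instructor: default prompts
# live in one place and can be overridden per use case.
DEFAULT_PROMPTS = {
    "json_extraction": (
        "As a genius expert, your task is to understand the content and "
        "provide the parsed objects in json that match the following json_schema:"
    ),
}

_overrides: dict[str, str] = {}

def set_prompt(name: str, text: str) -> None:
    """Override the default prompt for a given name."""
    _overrides[name] = text

def get_prompt(name: str) -> str:
    """Return the override if one is set, otherwise the default."""
    return _overrides.get(name, DEFAULT_PROMPTS[name])

set_prompt("json_extraction", "you are a xxxx, you need xxxx")
print(get_prompt("json_extraction"))
```

With a layout like this, the library would read `get_prompt("json_extraction")` instead of a hard-coded string, and users could swap in their own text without touching library code.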

@paulelliotco
Contributor

paulelliotco commented Aug 30, 2024

Based on the information provided, I understand you're trying to customize the system prompt for a chat completion API call, specifically to generate synthetic user data. The issue you're facing is that an additional instruction is being appended to your desired system prompt.

To solve this, we need to modify the code to ensure that only the custom system prompt is used, without any additional instructions being appended. Here's how we can achieve this:

1. First, let's define the custom system prompt:

custom_system_prompt = """
You are an AI assistant specialized in creating synthetic user data. 
Your task is to generate user profiles with names and ages.
Provide the parsed objects in JSON that match the following schema:
{json_schema}
"""

2. Then, we need to format this prompt with the actual JSON schema:

json_schema = User.model_json_schema()
custom_system_prompt = custom_system_prompt.format(json_schema=json_schema)

3. Use this formatted prompt in the chat completion call:

users = client.chat.completions.create(
    model="mixtral-8x7b-32768",  # Groq model
    response_model=list[User],
    messages=[
        {"role": "system", "content": custom_system_prompt},
        {"role": "user", "content": "Generate 5 synthetic users"}
    ]
)

This approach should prevent any additional instructions from being appended to your system prompt. The key is to format the custom_system_prompt with the actual JSON schema before passing it to the chat completion call.
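To make the formatting step concrete without any dependencies, here is a minimal sketch where a hand-written dict stands in for User.model_json_schema() (the real schema comes from pydantic):

```python
# Stand-in for User.model_json_schema(); hand-written for illustration only.
json_schema = {
    "title": "User",
    "type": "object",
    "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
    "required": ["name", "age"],
}

# Format the schema into the prompt template; the {json_schema} placeholder
# is replaced with the dict's string representation.
custom_system_prompt = """
You are an AI assistant specialized in creating synthetic user data.
Your task is to generate user profiles with names and ages.
Provide the parsed objects in JSON that match the following schema:
{json_schema}
""".format(json_schema=json_schema)

print(custom_system_prompt)
```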

I tested this out with Groq; you can adapt it to your current client:

import instructor
from groq import Groq
from pydantic import BaseModel

# Add your Groq API key here
GROQ_API_KEY = "API_KEY_HERE"

class User(BaseModel):
    name: str
    age: int

# Initialize the Groq client with the API key
groq_client = Groq(api_key=GROQ_API_KEY)
# Patch the Groq client with Instructor
client = instructor.patch(groq_client)

# Custom system prompt
custom_system_prompt = """
You are an AI assistant specialized in creating synthetic user data. 
Your task is to generate user profiles with names and ages.
Provide the parsed objects in JSON that match the following schema:
{json_schema}
"""

# Insert the User model's JSON schema into the prompt
json_schema = User.model_json_schema()
custom_system_prompt = custom_system_prompt.format(json_schema=json_schema)

users = client.chat.completions.create(
    model="mixtral-8x7b-32768",  # Groq model
    response_model=list[User],
    messages=[
        {"role": "system", "content": custom_system_prompt},
        {"role": "user", "content": "Generate 5 synthetic users"}
    ]
)

for user in users:
    print(f"Name: {user.name}, Age: {user.age}")

Output:

Name: User 1, Age: 25
Name: User 2, Age: 30
Name: User 3, Age: 35
Name: User 4, Age: 40
Name: User 5, Age: 45
