
Modifying the generate_gpt_response function to work with Gemini API? #25

faseceroo opened this issue Dec 14, 2024 · 1 comment


faseceroo commented Dec 14, 2024

Hello, I'm a beginner in all this, but I followed the instructions and it works perfectly! I can talk with Alexa, and by modifying the messages variable, I can even change its personality - it's amazing!
Now, I'm investigating further and I see that Google AI Studio has a Gemini API, which, from what I understand, allows us to do something similar.

The API call example they provide is:

curl \
  -H "Content-Type: application/json" \
  -d "{\"contents\":[{\"parts\":[{\"text\":\"Explain how AI works\"}]}]}" \
  -X POST "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash-latest:generateContent?key=YOUR_API_KEY"

I wonder if it's possible to modify the generate_gpt_response(...) function to reflect this API call.
I'm a complete beginner in both AI and Python, and I've already asked ChatGPT, Meta, and Gemini itself this question, but none of them provided a working function.

k4l1sh (Owner) commented Dec 25, 2024

Indeed, it is entirely possible to use Gemini (or any other LLM service, such as Claude or Llama) by modifying how the request is sent in the generate_gpt_response(...) function. Currently, this skill simply makes a POST request to ChatGPT’s API endpoint. Because Gemini, Claude, or Llama also offer their own endpoints, you would just need to structure the request body and headers according to whichever service you want to call.
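
For reference, here is a rough sketch of what such a function could look like, assuming the skill can use the requests library. The name generate_gemini_response and the GEMINI_API_KEY constant are just placeholders; you would adapt them to however the existing generate_gpt_response(...) loads its configuration:

import requests

GEMINI_API_KEY = "YOUR_API_KEY"  # placeholder: load it the same way the skill loads the OpenAI key
GEMINI_URL = (
    "https://generativelanguage.googleapis.com/v1beta/models/"
    "gemini-1.5-flash-latest:generateContent"
)

def generate_gemini_response(prompt):
    # Same request shape as the curl example from Google AI Studio
    payload = {"contents": [{"parts": [{"text": prompt}]}]}
    try:
        response = requests.post(
            GEMINI_URL,
            params={"key": GEMINI_API_KEY},
            headers={"Content-Type": "application/json"},
            json=payload,
            timeout=10,
        )
        response.raise_for_status()
        data = response.json()
        # The generated text is nested under candidates -> content -> parts -> text
        return data["candidates"][0]["content"]["parts"][0]["text"]
    except Exception as e:
        return f"Error generating response: {e}"

The main differences from the ChatGPT call are the endpoint URL, the API key being passed as a query parameter instead of an Authorization header, and the contents/parts payload replacing the messages list.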

I have also been considering a flexible approach to handle multiple APIs within this repository. One idea is to create a separate function for each LLM provider (for example, generate_gemini_response(...), generate_claude_response(...), etc.), each with the appropriate payloads and endpoints. In the future, we could extend this to allow dynamically selecting which LLM to use based on user preference or environment configuration.
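
As a purely hypothetical sketch of that selection step, assuming an LLM_PROVIDER environment variable and the per-provider functions mentioned above:

import os

def generate_llm_response(prompt):
    # LLM_PROVIDER is an assumed configuration variable, not something the skill defines today
    provider = os.environ.get("LLM_PROVIDER", "openai").lower()
    if provider == "gemini":
        return generate_gemini_response(prompt)
    return generate_gpt_response(prompt)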

I am definitely open to pull requests or ideas on how best to implement this flow.
