Hello, I'm a beginner in all this, but I followed the instructions and it works perfectly! I can talk with Alexa, and by modifying the messages variable, I can even change its personality - it's amazing!
Now, I'm investigating further and I see that Google AI Studio has a Gemini API, which, from what I understand, allows us to do something similar.
The API call example they provide is:
curl \
-H "Content-Type: application/json" \
-d "{\"contents\":[{\"parts\":[{\"text\":\"Explain how AI works\"}]}]}" \
-X POST "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash-latest:generateContent?key=YOUR_API_KEY"
I wonder if it's possible to modify the generate_gpt_response(...) function to reflect this API call.
I'm a complete beginner in both AI and Python, and I've already asked this question of ChatGPT, Meta, and Gemini itself, but none of them provided a working function.
Indeed, it is entirely possible to use Gemini (or any other LLM service, such as Claude or Llama) by modifying how the request is sent in the generate_gpt_response(...) function. Currently, this skill simply makes a POST request to ChatGPT’s API endpoint. Because Gemini, Claude, or Llama also offer their own endpoints, you would just need to structure the request body and headers according to whichever service you want to call.
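To make this concrete, here is a minimal sketch of what a Gemini counterpart to generate_gpt_response(...) could look like. The endpoint and request body are taken from the curl example above; the response parsing (candidates → content → parts → text) assumes the standard generateContent response layout. The function and variable names are illustrative, not from the existing skill code:

```python
import json
import urllib.request

# Endpoint from the curl example in the question above
GEMINI_URL = (
    "https://generativelanguage.googleapis.com/v1beta/models/"
    "gemini-1.5-flash-latest:generateContent"
)


def build_gemini_payload(prompt):
    """Build the JSON body shown in the curl example."""
    return {"contents": [{"parts": [{"text": prompt}]}]}


def generate_gemini_response(prompt, api_key):
    """POST the prompt to the Gemini API and return the reply text."""
    request = urllib.request.Request(
        f"{GEMINI_URL}?key={api_key}",
        data=json.dumps(build_gemini_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        data = json.loads(response.read().decode("utf-8"))
    # Assumes the usual generateContent response shape; a production
    # version should handle missing keys and API errors gracefully.
    return data["candidates"][0]["content"]["parts"][0]["text"]
```

In the skill, you would then call generate_gemini_response(...) in place of generate_gpt_response(...), passing your API key from Google AI Studio instead of the OpenAI one.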
I have also been considering a flexible approach to handle multiple APIs within this repository. One idea is to create a separate function for each LLM provider (for example, generate_gemini_response(...), generate_claude_response(...), etc.), each with the appropriate payloads and endpoints. In the future, we could extend this to allow dynamically selecting which LLM to use based on user preference or environment configuration.
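The selection step could be as simple as a dictionary mapping a provider name to the matching function. This is only a sketch of the idea; the generate_*_response functions here are hypothetical placeholders for the per-provider implementations described above:

```python
# Placeholder implementations, standing in for the real per-provider
# functions (generate_gemini_response, generate_claude_response, ...)
def generate_gemini_response(prompt):
    return f"[gemini] {prompt}"


def generate_claude_response(prompt):
    return f"[claude] {prompt}"


# One entry per supported LLM provider
PROVIDERS = {
    "gemini": generate_gemini_response,
    "claude": generate_claude_response,
}


def generate_response(prompt, provider="gemini"):
    """Dispatch to the provider chosen by user preference or config."""
    try:
        return PROVIDERS[provider](prompt)
    except KeyError:
        raise ValueError(f"Unknown provider: {provider!r}")
```

The provider string could come from an environment variable or a skill setting, so switching LLMs would not require touching the request code itself.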
I am definitely open to pull requests or ideas on how best to implement this flow.