Currently, as per #17 (comment), the `temperature` of the Prompt API is "by definition a number between 0 and 1," but, in practice, different models tend to have different ranges. GPT-4o models use a range of [0, 2], and here's an example for various Gemini models:

- Range for gemini-1.5-flash: 0.0 - 2.0 (default: 1.0)
- Range for gemini-1.5-pro: 0.0 - 2.0 (default: 1.0)
- Range for gemini-1.0-pro-vision: 0.0 - 1.0 (default: 0.4)
- Range for gemini-1.0-pro-002: 0.0 - 2.0 (default: 1.0)
- Range for gemini-1.0-pro-001: 0.0 - 1.0 (default: 0.9)
To reduce confusion for developers coming from such cloud-based APIs to the Prompt API, would it make sense to add a new optional `maxTemperature` attribute to the `AIAssistantCapabilities` interface?
Knowing the maximum allowed temperature would also help with building a slider-based UI for the model configuration, as is common, for example, with OpenAI's Playground or Hugging Face's Gemini Pro Playground.
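For concreteness, here's a rough TypeScript sketch of how this could look; the `maxTemperature` attribute is the proposal, while the rest of the capabilities shape and the `ai.assistant` entry point are assumptions made for illustration:

```ts
// Hypothetical capabilities shape; only maxTemperature is what this
// issue proposes, the surrounding members are assumed for illustration.
interface AIAssistantCapabilities {
  readonly available: "readily" | "after-download" | "no";
  readonly defaultTemperature: number | null;
  // Proposed addition: the model's maximum allowed temperature.
  readonly maxTemperature?: number | null;
}

// Assumed entry point, following the explainer's ai.assistant naming.
declare const ai: {
  assistant: { capabilities(): Promise<AIAssistantCapabilities> };
};

// Sketch: configure a slider-based temperature UI from the reported
// capability, falling back to 1.0 when no maximum is reported.
async function setUpTemperatureSlider(slider: HTMLInputElement): Promise<void> {
  const caps = await ai.assistant.capabilities();
  slider.min = "0";
  slider.max = String(caps.maxTemperature ?? 1);
  slider.step = "0.01";
  slider.value = String(caps.defaultTemperature ?? 1);
}
```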
Yep. I erroneously thought temperature was always from 0.0-1.0, but it turns out you can plug other numbers into $\Pr(v_k) = \frac{e^{l_k / \tau}}{\sum_i e^{l_i / \tau}}$, with interesting results. Adding shortly.
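To make the "other numbers" point concrete, here's a minimal sketch of that temperature-scaled softmax (the function name and example logits are made up): dividing the logits by a τ greater than 1 flattens the distribution rather than breaking anything, which is why ranges like [0, 2] are coherent.

```ts
// Temperature-scaled softmax: Pr(v_k) = exp(l_k / τ) / Σ_i exp(l_i / τ).
function softmaxWithTemperature(logits: number[], tau: number): number[] {
  // Subtract the max logit before exponentiating, for numerical stability.
  const maxLogit = Math.max(...logits);
  const exps = logits.map((l) => Math.exp((l - maxLogit) / tau));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

// The same logits at different temperatures: higher τ spreads
// probability mass toward less likely tokens.
const logits = [2.0, 1.0, 0.1];
console.log(softmaxWithTemperature(logits, 0.5)); // sharper
console.log(softmaxWithTemperature(logits, 1.0)); // plain softmax
console.log(softmaxWithTemperature(logits, 2.0)); // flatter
```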