
Should there be maxTemperature, analog to maxTopK? #41

Closed
tomayac opened this issue Sep 12, 2024 · 1 comment

tomayac commented Sep 12, 2024

Currently, as per #17 (comment), the temperature of the Prompt API is "by definition a number between 0 and 1," but, in practice, different models tend to have different ranges. GPT-4o models use a range of [0, 2], and here are the ranges for various Gemini models:

Range for gemini-1.5-flash: 0.0 - 2.0 (default: 1.0)
Range for gemini-1.5-pro: 0.0 - 2.0 (default: 1.0)
Range for gemini-1.0-pro-vision: 0.0 - 1.0 (default: 0.4)
Range for gemini-1.0-pro-002: 0.0 - 2.0 (default: 1.0)
Range for gemini-1.0-pro-001: 0.0 - 1.0 (default: 0.9)

To reduce confusion for developers coming to the Prompt API from such cloud-based APIs, would it make sense to add a new optional maxTemperature attribute to the AIAssistantCapabilities interface?

readonly attribute unsigned long? defaultTopK;
readonly attribute unsigned long? maxTopK;
readonly attribute float? defaultTemperature;
readonly attribute float? maxTemperature; // new

Knowing the maximum allowed temperature would also help with building a slider-based UI for the model configuration, as is common, for example, with OpenAI's Playground or Hugging Face's Gemini Pro Playground.

[Screenshots: temperature slider UIs in OpenAI's Playground and Hugging Face's Gemini Pro Playground]
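As a minimal sketch of the slider use case: assuming a capabilities object shaped like the WebIDL above (the object literal and the temperatureSliderConfig helper below are illustrative, not part of any shipped API), the proposed maxTemperature attribute would let a UI derive its slider bounds instead of hard-coding [0, 1]:

```javascript
// Stand-in for what an AIAssistantCapabilities instance might report;
// maxTemperature is the new attribute proposed in this issue.
const capabilities = {
  defaultTopK: 3,
  maxTopK: 8,
  defaultTemperature: 1.0,
  maxTemperature: 2.0, // proposed
};

// Derive slider settings from the reported capabilities,
// falling back to [0, 1] when maxTemperature is unavailable.
function temperatureSliderConfig(caps) {
  const max = caps.maxTemperature ?? 1.0;
  return {
    min: 0,
    max,
    step: max / 100,
    value: Math.min(caps.defaultTemperature ?? 1.0, max),
  };
}

const config = temperatureSliderConfig(capabilities);
// With the values above: { min: 0, max: 2, step: 0.02, value: 1 }
```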


domenic commented Sep 18, 2024

Yep. I erroneously thought temperature was always in the range 0.0 to 1.0, but it turns out you can plug other numbers into $\Pr(v_k) = \frac{e^{l_k / \tau}}{\sum_i e^{l_i / \tau}}$, with interesting results. Adding shortly.
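The effect of $\tau$ in that formula can be demonstrated in a few lines (the softmaxWithTemperature helper and the example logits here are illustrative, not part of the API):

```javascript
// Temperature-scaled softmax: Pr(v_k) = exp(l_k / tau) / sum_i exp(l_i / tau)
function softmaxWithTemperature(logits, tau) {
  const scaled = logits.map((l) => Math.exp(l / tau));
  const sum = scaled.reduce((a, b) => a + b, 0);
  return scaled.map((e) => e / sum);
}

const logits = [2.0, 1.0, 0.1];
// Low temperature sharpens the distribution toward the top logit;
// high temperature flattens it toward uniform.
const sharp = softmaxWithTemperature(logits, 0.5);
const flat = softmaxWithTemperature(logits, 2.0);
```

Nothing in the math caps $\tau$ at 1.0, which is why exposing each model's actual maximum makes sense.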
