enhance modelInfo #17

Closed
hemanth opened this issue Jul 10, 2024 · 6 comments

hemanth (Contributor) commented Jul 10, 2024

The ai.modelInfo interface would be more useful if it included more detailed information about model capabilities and limits.

Use Cases:

The proposed AIModelInfo interface would provide detailed information about an AI model's capabilities and limits. Below is a rough sketch of the shape the examples assume, followed by several practical use cases for this enhanced information.
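The field names here are illustrative only and are not part of the current API; this is a minimal sketch of what the proposed window.ai.getModelInfo() might return, covering the fields used in the examples below:

    // Illustrative shape only; none of these fields exist in the current API.
    const exampleModelInfo = {
      name: 'example-model',
      version: '2.0',
      capabilities: {
        supportsStreaming: true,
        supportedLanguages: ['en', 'es', 'ja']
      },
      limits: {
        maxInputLength: 4096,
        maxOutputLength: 1024,
        minTemperature: 0.0,
        maxTemperature: 1.0,
        defaultTemperature: 0.7
      }
    };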

  1. Dynamic UI Adaptation

    • Scenario: A chatbot application needs to adjust its user interface based on the model's capabilities.
    • Usage: The app checks AIModelInfo.capabilities.supportsStreaming to determine whether to show a "streaming" toggle in the UI.
    async function setupChatInterface() {
      const modelInfo = await window.ai.getModelInfo();
      if (modelInfo.capabilities.supportsStreaming) {
        showStreamingToggle();
      }
    }
  2. Intelligent Input Validation

    • Scenario: An AI-powered document analysis tool needs to ensure user inputs don't exceed model limits.
    • Usage: The application uses AIModelInfo.limits.maxInputLength to validate document length before submission.
    async function validateDocument(doc) {
      const modelInfo = await window.ai.getModelInfo();
      if (doc.length > modelInfo.limits.maxInputLength) {
        alert(`Document exceeds maximum length of ${modelInfo.limits.maxInputLength} characters.`);
        return false;
      }
      return true;
    }
  3. Multilingual Support Detection

    • Scenario: A global customer service platform needs to route queries to appropriate AI models based on language.
    • Usage: The platform checks AIModelInfo.capabilities.supportedLanguages to determine which models can handle specific language inputs.
    async function routeQuery(query, language) {
      const modelInfo = await window.ai.getModelInfo();
      if (modelInfo.capabilities.supportedLanguages.includes(language)) {
        processQuery(query, modelInfo.name);
      } else {
        routeToHumanAgent(query);
      }
    }
  4. Adaptive Temperature Setting

    • Scenario: A creative writing assistant needs to adjust its randomness (temperature) based on the user's preference while staying within model limits.
    • Usage: The app uses AIModelInfo.limits.minTemperature and maxTemperature to set valid temperature range in the UI.
    async function setupTemperatureSlider() {
      const modelInfo = await window.ai.getModelInfo();
      const slider = document.getElementById('temperatureSlider');
      slider.min = modelInfo.limits.minTemperature;
      slider.max = modelInfo.limits.maxTemperature;
      slider.value = modelInfo.limits.defaultTemperature;
    }
  5. Version-Specific Feature Enablement

    • Scenario: An AI-powered code completion tool needs to enable or disable features based on the model version.
    • Usage: The tool checks AIModelInfo.version to determine which features to enable.
    async function enableAdvancedFeatures() {
      const modelInfo = await window.ai.getModelInfo();
      if (parseFloat(modelInfo.version) >= 2.0) {
        enableMultilineCompletion();
        enableSyntaxAwareCompletion();
      }
    }
  6. Resource Allocation in Multi-Model Systems

    • Scenario: A cloud-based AI platform needs to allocate resources efficiently across multiple AI tasks.
    • Usage: The platform uses AIModelInfo.limits to estimate resource requirements for each task.
    async function allocateResources(task) {
      const modelInfo = await window.ai.getModelInfo(task.type);
      const estimatedTokens = task.inputLength + modelInfo.limits.maxOutputLength;
      const estimatedMemory = estimatedTokens * MEMORY_PER_TOKEN;
      allocateMemory(task.id, estimatedMemory);
    }
  7. Model Capability Comparison

    • Scenario: An AI model marketplace needs to provide users with a comparison of different models' capabilities.
    • Usage: The marketplace app fetches AIModelInfo for multiple models and creates a comparison table.
    async function compareModels(modelIds) {
      const comparisonData = await Promise.all(modelIds.map(async id => {
        const info = await window.ai.getModelInfo("text", id);
        return {
          name: info.name,
          version: info.version,
          maxInput: info.limits.maxInputLength,
          supportedLanguages: info.capabilities.supportedLanguages.join(', ')
        };
      }));
      displayComparisonTable(comparisonData);
    }
domenic (Collaborator) commented Jul 10, 2024

This is somewhat related to #3.

Instead of focusing on specific API proposals, please start with the use cases and applications you are trying to build which you cannot build with the current API. (They should be real applications, ideally ones you can link to!) https://whatwg.org/faq#adding-new-features is a good reference here, especially step 1.

hemanth (Contributor, Author) commented Jul 10, 2024

I thought this was simply an enhancement to the existing API and not a new proposal, so I had kept the focus solely on the API. I have updated it with a few examples, assuming multi-model capability support.

domenic (Collaborator) commented Jul 11, 2024

Can you tell me more about what applications you are currently trying to build that the current API cannot support? This reads like a grab-bag wishlist and isn't very actionable. And again, it contains a lot of specific solution proposals, which we are not yet at the stage of evaluating.

hemanth (Contributor, Author) commented Jul 11, 2024

Sure @domenic

Here is a simple chat application that I have built, which could be enhanced with:

  1. Intelligent Input Validation
  2. Multilingual Support (which needs detection)
  3. Adaptive Temperature
  4. Is there a way to know whether there is multi-modal support? (A sketch of such a check follows below.)

Even something as simple as streaming or summarizing text could be enhanced if we knew more about the model's capabilities.
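For item 4, a minimal sketch of what such a check might look like, assuming a hypothetical capabilities.supportedModalities field (not part of the current API or of the examples above):

    // Hypothetical: assumes capabilities.supportedModalities, e.g. ['text', 'image'].
    async function supportsImageInput() {
      const modelInfo = await window.ai.getModelInfo();
      const modalities = modelInfo.capabilities.supportedModalities ?? ['text'];
      return modalities.includes('image');
    }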

domenic (Collaborator) commented Jul 29, 2024

1. Dynamic UI Adaptation

Streaming is required by this API, so there is no need for an API to detect it; it would always return true.

2. Intelligent Input Validation

This was added in 6956d4b.

3. Multilingual Support Detection

This was added in fe41a59.

4. Adaptive Temperature Setting

Temperature is by definition a number between 0 and 1, so this is pointless.

5. Version-Specific Feature Enablement

Dupe of #3.

6. Resource Allocation in Multi-Model Systems

This was added in 6956d4b.

7. Model Capability Comparison

Dupe of #3.

domenic closed this as completed on Jul 29, 2024.

A later comment from hemanth was marked as spam.
