
Making the LLM providers more generics #10

Merged
5 commits merged into jupyterlite:main from generic_llm on Nov 4, 2024

Conversation

@brichet (Collaborator) commented on Oct 28, 2024

This PR adds the possibility of changing the LLM provider, even though only MistralAI is available at the moment.

A single inline completion provider is registered with the manager.
This provider delegates to a completer specific to the current LLM, which makes the actual calls to the LLM API.

Adding a new LLM provider would involve (see the sketch after this list):

  • adding the completer class in the llm-models directory
  • modifying llm-models/utils.ts to return the correct models
  • modifying schema/llm-provider.json to add the new option
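
As a rough sketch of those three steps (the provider name, class, and API endpoint below are hypothetical, not part of this PR):

```ts
// 1. llm-models/my-provider-completer.ts (hypothetical): the completer that
//    the single registered inline completion provider delegates to.
export class MyProviderCompleter {
  constructor(private apiKey: string) {}

  // Call the provider-specific API and map its response to suggestions.
  async complete(prompt: string): Promise<string[]> {
    const response = await fetch('https://api.my-provider.example/complete', {
      method: 'POST',
      headers: { Authorization: `Bearer ${this.apiKey}` },
      body: JSON.stringify({ prompt })
    });
    const data = await response.json();
    return [data.completion];
  }
}

// 2. llm-models/utils.ts: return the correct completer for the selected provider.
export function getCompleter(provider: string, apiKey: string) {
  switch (provider) {
    case 'MyProvider':
      return new MyProviderCompleter(apiKey);
    default:
      return null;
  }
}

// 3. schema/llm-provider.json: add 'MyProvider' to the provider enum so it
//    appears as an option in the settings editor.
```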

TODO, probably in a follow-up PR: dynamically update the available settings when the selected provider changes.

@brichet changed the title from "WIP on making the LLM providers more generics" to "Making the LLM providers more generics" on Oct 31, 2024
@brichet marked this pull request as ready for review on October 31, 2024
@brichet requested a review from jtpio on October 31, 2024
@brichet added the enhancement label on Oct 31, 2024
if (this._llmClient === null) {
  const botMsg: IChatMessage = {
    id: UUID.uuid4(),
    body: '**Chat client not configured**',
@jtpio (Member) commented:

Maybe we can open a new issue to track translating this string (and maybe others) with the ITranslator?

@brichet (Collaborator, Author) replied:

#12
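
For reference, the kind of change tracked in that issue could look like this (a sketch; the translation domain and helper function are assumptions, only `ITranslator`/`nullTranslator` come from `@jupyterlab/translation`):

```ts
import { ITranslator, nullTranslator } from '@jupyterlab/translation';

// Build the "not configured" message through the translator instead of
// hard-coding the English string.
function notConfiguredMessage(translator?: ITranslator): string {
  const trans = (translator ?? nullTranslator).load('jupyterlab-codestral');
  return `**${trans.__('Chat client not configured')}**`;
}
```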

    this._llmClient = options.llmClient;
  }

  get llmClient(): BaseChatModel | null {
@jtpio (Member) commented:

Wondering if "provider" could also be a good name as an alternative to llmClient? Or something else similar to what Jupyter AI uses?

src/index.ts (outdated)
  id: 'jupyterlab-codestral:llm-provider',
  autoStart: true,
  requires: [ICompletionProviderManager, ISettingRegistry],
  provides: ILlmProvider,
@jtpio (Member) commented:

ILlmProvider may read a bit awkward with the change of casing between the two "L". Maybe ILLMProvider or IAIProvider could also work? (or something else)
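
For context, `provides: ILlmProvider` refers to a Lumino token. A minimal sketch of how such a token is usually declared (the interface shape and token id here are guesses, not the PR's actual code):

```ts
import { Token } from '@lumino/coreutils';

// Hypothetical shape of the service other plugins can require.
export interface ILlmProvider {
  /** Name of the currently selected LLM provider, e.g. 'MistralAI'. */
  readonly name: string;
}

// The token listed in `provides` here and in `requires` by consumers.
export const ILlmProvider = new Token<ILlmProvider>(
  'jupyterlab-codestral:llm-provider'
);
```

Renaming it would then only touch this declaration and the plugins' `provides`/`requires` lists.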

@jtpio (Member) left a review:

Thanks @brichet!

Looks like we can get this one in and iterate more on it.

@jtpio (Member) commented on Nov 4, 2024

It would be interesting to follow up by adding support for another provider, for example OpenAI.

Edit: opened #13
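
Since the handler above types its client as LangChain's `BaseChatModel`, an OpenAI provider could plug in the same way; a sketch assuming the `@langchain/openai` package (option names vary between versions):

```ts
import { ChatOpenAI } from '@langchain/openai';
import { BaseChatModel } from '@langchain/core/language_models/chat_models';

// Instantiate an OpenAI chat model that satisfies the same BaseChatModel
// interface the MistralAI client already uses.
export function createOpenAIClient(apiKey: string): BaseChatModel {
  return new ChatOpenAI({ apiKey, model: 'gpt-4o-mini' });
}
```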

@jtpio merged commit f0637eb into jupyterlite:main on Nov 4, 2024 (7 checks passed)
@jtpio mentioned this pull request on Nov 4, 2024
@brichet deleted the generic_llm branch on November 4, 2024