
[widgets] use chatCompletionStream #664

Merged

mishig25 merged 1 commit into main from use_chatCompletionStream on May 15, 2024
Conversation

mishig25 (Collaborator) commented on May 13, 2024

Thanks to #645, it is now possible to use chatCompletionStream in Conversational Widget

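For reference, a minimal sketch of what the widget can now do with chatCompletionStream from @huggingface/inference (the streaming API introduced in #645). The token and model are placeholders, not the widget's actual wiring:

import { HfInference } from "@huggingface/inference";

const hf = new HfInference("hf_***"); // placeholder token

// Stream assistant tokens as they are generated, OpenAI-style.
const stream = hf.chatCompletionStream({
	model: "HuggingFaceH4/zephyr-7b-beta", // model mentioned later in this thread
	messages: [{ role: "user", content: "Hello!" }],
	max_tokens: 500,
});

for await (const chunk of stream) {
	const delta = chunk.choices[0]?.delta?.content;
	if (delta) process.stdout.write(delta);
}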

mishig25 force-pushed the use_chatCompletionStream branch 2 times, most recently from f51c8c7 to 949a5c7, on May 13, 2024 12:11
mishig25 marked this pull request as ready for review on May 13, 2024 12:12
mishig25 requested a review from SBrandeis as a code owner on May 13, 2024 12:12
mishig25 force-pushed the use_chatCompletionStream branch 5 times, most recently from 448b308 to 0aabba9, on May 13, 2024 12:44
mishig25 marked this pull request as draft on May 13, 2024 12:46
mishig25 marked this pull request as ready for review on May 13, 2024 13:05
mishig25 (Collaborator, Author) commented:

ready for review!

radames (Contributor) commented on May 13, 2024

Nice @mishig25, LGTM! One curious question: how are you differentiating models that support the chatCompletion API from plain text-generation ones, e.g. https://huggingface.co/bigscience/bloom-560m versus https://huggingface.co/HuggingFaceH4/zephyr-7b-beta? chatCompletion would throw an error if the backend doesn't support the API.

Comment on lines -138 to -154
let chatText;
try {
	chatText = compiledTemplate.render({
		messages,
		add_generation_prompt: true,
		...specialTokensMap,
	});
} catch (e) {
	error = `An error occurred while rendering the chat template: "${(e as Error).message}"`;
	return;
}
const previousMessages = [...messages];

const input: TextGenerationInput & Required<Pick<TextGenerationInput, "parameters">> = {
	inputs: chatText,
	parameters: {
		return_full_text: false,
	},
Member comment:

Given that this is recent code from @Wauplin and @SBrandeis, I would like a review from them on this.

julien-c (Member) commented:

@radames I think the conversational widget would not be displayed on pure text-generation models like https://huggingface.co/bigscience/bloom-560m

radames (Contributor) commented on May 14, 2024

> @radames I think the conversational widget would not be displayed on pure text-generation models like https://huggingface.co/bigscience/bloom-560m

Yes, that's right. I'm just curious how to differentiate these models, since they all have pipeline_tag: text-generation?

julien-c (Member) commented:

@radames there's a notion of a WidgetType that can be conversational:

export type WidgetType = PipelineType | "conversational";

If a model repo is text-generation AND it has a tag of conversational, it displays that widget.

cc @osanseviero who implemented it
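A hypothetical sketch of that rule; the helper name and the ModelData shape are assumptions for illustration, not the hub's actual code:

// Sketch only: stand-in types for illustration.
type PipelineType = string; // the real type is a union of pipeline names
type WidgetType = PipelineType | "conversational";

interface ModelData {
	pipeline_tag?: PipelineType;
	tags?: string[];
}

// A text-generation model gets the conversational widget only if it is
// also tagged "conversational"; otherwise the widget follows the pipeline tag.
function getWidgetType(model: ModelData): WidgetType | undefined {
	if (model.pipeline_tag === "text-generation" && model.tags?.includes("conversational")) {
		return "conversational";
	}
	return model.pipeline_tag;
}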

SBrandeis (Contributor) left a review:

Nice factorization!

mishig25 merged commit 673718f into main on May 15, 2024 (4 checks passed)
mishig25 deleted the use_chatCompletionStream branch on May 15, 2024 13:44