
Offline Speech Recognition #2089 #2242

Open · wants to merge 21 commits into main

Conversation

VladislavAntonyuk
Collaborator

@VladislavAntonyuk commented Oct 1, 2024

Description of Change

Added 2 new methods for Offline Speech Recognition and removed the ListenAsync method, as it is impossible (with the current implementation) to correctly stop listening. I also added a new state, allowing us to see when listening is active but there is silence.

Linked Issues

PR Checklist

Additional information

@VladislavAntonyuk self-assigned this Oct 1, 2024
@VladislavAntonyuk added the breaking change and area/essentials labels Oct 1, 2024
@VladislavAntonyuk added the needs discussion label Oct 1, 2024
@brminnick added the hacktoberfest-accepted label Oct 3, 2024
@VladislavAntonyuk removed the needs discussion label Oct 5, 2024
Contributor

@bijington left a comment

@VladislavAntonyuk thanks for this. I think it looks great! I only had a few comments to get your thoughts on some things.

Finally, this might be outside the scope of this PR, but I wanted to raise it even though I think it belongs in a follow-up PR: I would love the ability to chain the 2 implementations together when registering with DI, something like:

builder.Services.AddSpeechToText(SpeechToText.Default).WithFallback(OfflineSpeechToText.Default);

And in theory developers could chain the other way round:

builder.Services.AddSpeechToText(OfflineSpeechToText.Default).WithFallback(SpeechToText.Default);

What do you think of the above? We might have to wrap this in another class rather than complicating the flow of the 2 current implementations, so it is probably best done in a follow-up PR.
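
Roughly, the registration side could look like the sketch below. To be clear, this is only to illustrate the shape of the API - none of these types exist in the toolkit today, and FallbackSpeechToText stands in for some wrapper (not shown here) that prefers the primary implementation:

using CommunityToolkit.Maui.Media;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection.Extensions;

public static class SpeechToTextServiceCollectionExtensions
{
    public static SpeechToTextRegistration AddSpeechToText(this IServiceCollection services, ISpeechToText implementation)
    {
        // Register the chosen implementation and hand back a small builder so a fallback can be chained.
        services.AddSingleton(implementation);
        return new SpeechToTextRegistration(services, implementation);
    }
}

public sealed class SpeechToTextRegistration
{
    readonly IServiceCollection services;
    readonly ISpeechToText primary;

    public SpeechToTextRegistration(IServiceCollection services, ISpeechToText primary)
    {
        this.services = services;
        this.primary = primary;
    }

    public IServiceCollection WithFallback(ISpeechToText secondary)
    {
        // Replace the plain registration with the hypothetical wrapper that tries the
        // primary implementation first and only then the secondary one.
        services.Replace(ServiceDescriptor.Singleton<ISpeechToText>(new FallbackSpeechToText(primary, secondary)));
        return services;
    }
}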

@VladislavAntonyuk
Collaborator Author

Thank you @bijington.
If the user registers SpeechToText like this: builder.Services.AddSpeechToText(OfflineSpeechToText.Default).WithFallback(SpeechToText.Default);
they still need to somehow resolve the implementation.

Also, we need to provide the lifetime of the service. And what is under the hood of AddSpeechToText? We'll still have the same AddSingleton(OfflineSpeechToText.Default) call in that method.

@bijington
Contributor

bijington commented Oct 15, 2024

Thank you @bijington. if user registers SpeechToText like this builder.Services.AddSpeechToText(OfflineSpeechToText.Default).WithFallback(SpeechToText.Default); he still needs to somehow resolve the implementation.

Also, we need to provide the lifetime of the service. And what is under the hood of AddSpeechToText? We'll still have the same AddSingleton(OfflineSpeechToText.Default) call in that method.

Yes I agree the developer will need to define the lifetime of the service which increases the complexity.

Perhaps we could move the WithFallback method onto the ISpeechToText interface instead, then the developer could write something like:

builder.Services.AddSingleton(OfflineSpeechToText.Default.WithFallback(SpeechToText.Default));

Then WithFallback could look something like:

public static class ISpeechToTextExtensions
{
    public static ISpeechToText WithFallback(this ISpeechToText primary, ISpeechToText secondary)
    {
        return new PriorityBasedSpeechToText(primary, secondary);
    }
}

internal class PriorityBasedSpeechToText : ISpeechToText
{
    readonly ISpeechToText primaryService;
    readonly ISpeechToText secondaryService;

    public PriorityBasedSpeechToText(ISpeechToText primary, ISpeechToText secondary)
    {
        primaryService = primary;
        secondaryService = secondary;
    }

    public Task<SpeechToTextRecognitionResult> StartListenAsync()
    {
        // attempt primary, if fails fallback to secondary...
    }
}
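
And the fallback inside StartListenAsync might end up roughly like this - just a sketch, and it assumes a failed primary attempt surfaces as an exception, which may not match how the real implementations report failures:

    public async Task<SpeechToTextRecognitionResult> StartListenAsync()
    {
        try
        {
            // Prefer the primary implementation.
            return await primaryService.StartListenAsync();
        }
        catch (Exception)
        {
            // Assumption: a primary failure throws. If the result type instead carries a
            // failure flag, we would inspect that before falling back.
            return await secondaryService.StartListenAsync();
        }
    }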

It could well become more complicated with things like permissions, so we may have to request all permissions for both primary and secondary.

What do you think?

@VladislavAntonyuk
Collaborator Author

I see pros and cons to such an approach.
For developers it makes registering the service much easier, but the strategy for choosing the right implementation may be complicated (user preferences may forbid online recognition, the internet connection may be unstable, etc.).
Also, since WithFallback still receives an interface as a parameter, developers may write code like this:
services.AddSpeechToText(SpeechToText.Default).WithFallback(SpeechToText.Default);

We could technically hide the online/offline recognition in the implementation and keep a single service. We had such an implementation for Windows in our initial release.
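
(Roughly, such a combined service could pick the engine internally, e.g. based on connectivity. The class below is only an illustration - AdaptiveSpeechToText is a hypothetical name, not something in the toolkit:)

using CommunityToolkit.Maui.Media;
using Microsoft.Maui.Networking;

// Illustration only: a hypothetical combined service that picks an engine internally.
class AdaptiveSpeechToText
{
    readonly ISpeechToText onlineService = SpeechToText.Default;
    readonly ISpeechToText offlineService = OfflineSpeechToText.Default;

    // Prefer the online engine when the device reports internet access.
    ISpeechToText Current =>
        Connectivity.Current.NetworkAccess == NetworkAccess.Internet
            ? onlineService
            : offlineService;
}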

The main idea of Offline Speech To Text is to allow developers to explicitly specify the required implementation.
Also, I don't want them to inject 2 separate interfaces into a service (MyService(ISpeechToText s1, IOfflineSpeechToText s2)), because their simultaneous usage may be unpredictable.

We can open a discussion for the next month.

Contributor

@bijington left a comment

@VladislavAntonyuk thanks for this! I have a few comments and I think they are mostly pretty minor

…CommunityToolkit/Maui into 2089-offline-speech-recognition

# Conflicts:
#	src/CommunityToolkit.Maui.Core/Essentials/SpeechToText/OfflineSpeechToTextImplementation.tizen.cs
Contributor

@bijington left a comment

Thanks @VladislavAntonyuk, I have added some XML docs improvements, but the rest looks good to me

bijington previously approved these changes Oct 31, 2024
Contributor

@bijington left a comment

LGTM! Do we just need to write the docs? Is this something you are happy to do? If not I'm happy to write something

@VladislavAntonyuk
Collaborator Author

LGTM! Do we just need to write the docs? Is this something you are happy to do? If not I'm happy to write something

Thank you Shaun! I will create docs PR this weekend.

Member

@pictos left a comment

The sample app is crashing for me. Here's the log that I could get.

I ran it on an iPhone 14 Pro and allowed all permissions.

[screenshot of the crash log]

@VladislavAntonyuk
Collaborator Author

The sample app is crashing for me. Here's the log that I could get.

I ran on iphone 14 pro. And allowed all permissions.

[screenshot of the crash log]

It’s text to speech. I will remove it from the offline sample

@pictos
Member

pictos commented Nov 4, 2024

will try again during this week, asap


Copilot reviewed 20 out of 35 changed files in this pull request and generated no suggestions.

Files not reviewed (15)
  • samples/CommunityToolkit.Maui.Sample.sln: Language not supported
  • samples/CommunityToolkit.Maui.Sample/CommunityToolkit.Maui.Sample.csproj: Language not supported
  • samples/CommunityToolkit.Maui.Sample/Pages/Essentials/OfflineSpeechToTextPage.xaml: Language not supported
  • samples/CommunityToolkit.Maui.Sample/Pages/Essentials/SpeechToTextPage.xaml: Language not supported
  • samples/CommunityToolkit.Maui.Sample/AppShell.xaml.cs: Evaluated as low risk
  • src/CommunityToolkit.Maui.Core/Essentials/SpeechToText/Events/SpeechToTextRecognitionResultCompletedEventArgs.cs: Evaluated as low risk
  • samples/CommunityToolkit.Maui.Sample/ViewModels/Essentials/EssentialsGalleryViewModel.cs: Evaluated as low risk
  • src/CommunityToolkit.Maui.Core/Essentials/SpeechToText/ISpeechToText.shared.cs: Evaluated as low risk
  • samples/CommunityToolkit.Maui.Sample/MauiProgram.cs: Evaluated as low risk
  • samples/CommunityToolkit.Maui.Sample/ViewModels/Essentials/SpeechToTextViewModel.cs: Evaluated as low risk
  • src/CommunityToolkit.Maui.Core/Essentials/SpeechToText/OfflineSpeechToTextImplementation.macos.cs: Evaluated as low risk
  • samples/CommunityToolkit.Maui.Sample/Pages/Essentials/OfflineSpeechToTextPage.xaml.cs: Evaluated as low risk
  • src/CommunityToolkit.Maui.Core/Essentials/SpeechToText/OfflineSpeechToTextImplementation.tizen.cs: Evaluated as low risk
  • samples/CommunityToolkit.Maui.Sample/ViewModels/Essentials/OfflineSpeechToTextViewModel.cs: Evaluated as low risk
  • src/CommunityToolkit.Maui.Core/Essentials/SpeechToText/OfflineSpeechToText.shared.cs: Evaluated as low risk
Comments skipped due to low confidence (3)

src/CommunityToolkit.Maui.Core/Essentials/SpeechToText/OfflineSpeechToTextImplementation.ios.cs:19

  • [nitpick] The error message could be more informative by including the current iOS version.
throw new NotSupportedException("Offline listening is supported on iOS 13 and later");

src/CommunityToolkit.Maui.Core/Essentials/SpeechToText/OfflineSpeechToTextImplementation.android.cs:85

  • [nitpick] The error message 'Failure in speech engine - {error}' might not be very helpful to the end-user. Consider providing a more user-friendly error message.
OnRecognitionResultCompleted(SpeechToTextResult.Failed(new Exception($"Failure in speech engine - {error}")));

src/CommunityToolkit.Maui.Core/Essentials/SpeechToText/OfflineSpeechToTextImplementation.android.cs:156

  • In the OnPartialResults and OnResults methods, the matches variable is being accessed without checking for null or empty values. Although the current implementation handles it, adding an explicit check would improve clarity and avoid potential issues.
if (matches is null || matches.Count == 0)
Labels
area/essentials · breaking change · hacktoberfest-accepted
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Offline SpeechToText
4 participants