Home
The AzSpeech plugin integrates Azure Speech Cognitive Services with Unreal Engine and provides simple functions for the following asynchronous tasks:
- Speech-to-Text
- .wav File-to-Text
- Text-to-Speech
- Text-to-.wav File
- Text-to-Audio Data
- Text-to-Sound Wave
- SSML-to-Speech
- SSML-to-.wav File
- SSML-to-Audio Data
- SSML-to-Sound Wave
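As a rough illustration, a Text-to-Speech task could be started from C++ along the lines of the sketch below. This is a minimal, hypothetical sketch: the class name `UAzSpeechTextToSpeechTask`, the factory function `TextToSpeechAsync`, the delegate `OnSynthesisCompleted`, and their parameters are illustrative placeholders rather than the plugin's confirmed API, so check the plugin's headers (or the matching Blueprint async node) for the actual signatures.

```cpp
// Hypothetical usage sketch. The AzSpeech task class, factory function, and delegate
// names below are illustrative placeholders; verify them against the plugin's headers.
#include "MyActor.h" // Hypothetical actor that declares HandleSpeechFinished as a UFUNCTION

void AMyActor::SpeakGreeting()
{
    // The plugin exposes its features as asynchronous tasks: create the task object,
    // bind to its completion delegate, then activate it.
    UAzSpeechTextToSpeechTask* Task = UAzSpeechTextToSpeechTask::TextToSpeechAsync(
        this,                           // World context
        TEXT("Hello from AzSpeech!"),   // Text to synthesize
        TEXT("en-US"),                  // Language / locale
        TEXT("en-US-AriaNeural"));      // Azure voice name

    if (Task)
    {
        // The bound function must be a UFUNCTION declared on this actor.
        Task->OnSynthesisCompleted.AddDynamic(this, &AMyActor::HandleSpeechFinished);
        Task->Activate();
    }
}
```

The other tasks listed above are expected to follow the same create → bind → activate pattern from C++, differing only in their inputs (text, SSML, or a .wav file path) and in the result delivered by their completion delegates; in Blueprints they appear as async nodes instead.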
The plugin also includes helper functions for:
- Converting Files to Sound Waves
- Converting Audio Data to Sound Waves
- Loading XML to Strings
- Qualifying Paths
- Qualifying XML File Paths
- Qualifying WAV File Paths
- Qualifying File Extensions
- Creating New Directories
- Opening Desktop Folder Picker
- Checking and Adding Android Permissions
- Validating Audio Data
- Checking Return from Recognition Map
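As an illustration, the audio-data helpers might be combined as sketched below to play synthesized audio at runtime. The names `UAzSpeechHelper`, `IsAudioDataValid`, and `ConvertAudioDataToSoundWave` are assumptions standing in for the "Validating Audio Data" and "Converting Audio Data to Sound Waves" helpers listed above; only `UGameplayStatics::PlaySound2D` is stock Unreal Engine API.

```cpp
// Hypothetical sketch of the helper library: UAzSpeechHelper and its functions are
// illustrative placeholders for the helpers listed above; UGameplayStatics is stock Unreal.
#include "Kismet/GameplayStatics.h"
#include "Sound/SoundWave.h"

void PlayGeneratedAudio(const UObject* WorldContext, const TArray<uint8>& AudioData)
{
    // "Validating Audio Data": reject empty or malformed buffers up front.
    if (!UAzSpeechHelper::IsAudioDataValid(AudioData))
    {
        return;
    }

    // "Converting Audio Data to Sound Waves": wrap the raw buffer
    // (e.g. the result of a Text-to-Audio Data task) in a transient USoundWave.
    if (USoundWave* SoundWave = UAzSpeechHelper::ConvertAudioDataToSoundWave(AudioData))
    {
        UGameplayStatics::PlaySound2D(WorldContext, SoundWave);
    }
}
```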
AzSpeech also includes an Editor Tool to generate audio as USoundWave assets directly in the Engine.
Looking for the other pages of this documentation? Check the upper-right section of this wiki and expand the Pages list.