
Can I apply the MAS method to other models? #67

Open
dohuuphu opened this issue Apr 19, 2022 · 1 comment
Comments


dohuuphu commented Apr 19, 2022

I'm applying MAS to FastSpeech2 to replace the MFA tool.
I'm using the FastSpeech2 encoder output in place of the Glow-TTS encoder output and feeding it into MAS.
After several iterations, the duration (the MAS output) was incorrect.

Example: MAS ran over 36 phonemes, but only 2 of them got nonzero values:
[[[0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 3.1355,
0.0000, 0.0000, 4.9053, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000]]]

Can anyone give me some advice?


unrea1-sama commented Apr 28, 2022


I think this may be because the TextEncoder in Glow-TTS predicts the mean and variance of z, i.e. it estimates a probability distribution over z. MAS can then find the best alignment between frames and text by maximizing the likelihood of z under that distribution. The encoder in FastSpeech2, however, doesn't output a distribution, so MAS can't be applied to FastSpeech2 directly.
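To make that concrete: in Glow-TTS the score MAS maximizes is the log-likelihood of each latent frame under each phoneme's predicted diagonal Gaussian. A minimal NumPy sketch (function and variable names are my own, not from the Glow-TTS repo):

```python
import numpy as np

def gaussian_log_likelihood(mu, log_sigma, z):
    """Per-(phoneme, frame) Gaussian log-likelihood, summed over channels.

    mu, log_sigma : (T_text, C) per-phoneme parameters from the text encoder.
    z             : (T_mel, C)  latent frames from the flow decoder.
    Returns a (T_text, T_mel) score matrix that MAS can maximize over.
    """
    diff = z[None, :, :] - mu[:, None, :]       # (T_text, T_mel, C)
    var = np.exp(2.0 * log_sigma)[:, None, :]   # sigma^2, broadcast over frames
    ll = -0.5 * (np.log(2.0 * np.pi)
                 + 2.0 * log_sigma[:, None, :]
                 + diff ** 2 / var)
    return ll.sum(axis=-1)
```

A plain encoder hidden state (as in FastSpeech2) gives you no (mu, sigma) to plug in here, which is the crux of the problem.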
If you want a network that learns the alignment between text and frames without attention, AlignTTS provides such a method.
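For reference, here is a minimal NumPy sketch of the dynamic program MAS runs over such a score matrix (the official implementation is optimized differently; names here are my own). Note that the durations it yields are integer row sums of a 0/1 alignment, so fractional "durations" like those in the issue suggest something other than a valid alignment was being summed:

```python
import numpy as np

def monotonic_alignment_search(log_probs):
    """Monotonic alignment maximizing total log-likelihood (MAS).

    log_probs: (T_text, T_mel) array, log_probs[i, j] = score of mel frame j
    under phoneme i. Requires T_text <= T_mel.
    Returns a (T_text, T_mel) 0/1 alignment matrix.
    """
    T_text, T_mel = log_probs.shape
    # Q[i, j]: best cumulative score over frames 0..j for a path ending at
    # phoneme i. Unreachable states (i > j) stay at -inf.
    Q = np.full((T_text, T_mel), -np.inf)
    Q[0, 0] = log_probs[0, 0]
    for j in range(1, T_mel):
        for i in range(min(j + 1, T_text)):
            stay = Q[i, j - 1]                           # same phoneme
            advance = Q[i - 1, j - 1] if i > 0 else -np.inf  # next phoneme
            Q[i, j] = max(stay, advance) + log_probs[i, j]
    # Backtrack from (last phoneme, last frame).
    alignment = np.zeros((T_text, T_mel), dtype=np.int64)
    i = T_text - 1
    for j in range(T_mel - 1, -1, -1):
        alignment[i, j] = 1
        if i > 0 and (i == j or Q[i - 1, j - 1] >= Q[i, j - 1]):
            i -= 1  # the path advanced a phoneme at this frame
    return alignment

# Toy example: 2 phonemes, 4 frames; phoneme 0 fits the first two frames,
# phoneme 1 the last two.
log_probs = np.array([[0.0, 0.0, -10.0, -10.0],
                      [-10.0, -10.0, 0.0, 0.0]])
alignment = monotonic_alignment_search(log_probs)
durations = alignment.sum(axis=1)  # -> [2, 2] frames per phoneme
```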
