About end2end implementation #12
Comments
@quangnh-2761 Hi! Yes, we've tried several experiments with the end2end pipeline, but for some reason training on raw waveforms with a mean-shifted terminal distribution was not stable enough. The samples you've listened to on the demo page were synthesized with the end2end model, which generates audio from pure Gaussian noise, meaning the denoising process starts from …
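To make the distinction concrete, here is a sketch of the forward SDE as written in the Grad-TTS paper (with $\Sigma = I$); the end2end variant discussed above simply drops the $\mu$ shift in the terminal prior:

$$
dX_t = \tfrac{1}{2}\,\beta_t\,(\mu - X_t)\,dt + \sqrt{\beta_t}\,dW_t,
\qquad
X_T \sim \mathcal{N}(\mu, I)\ \text{(mean-shifted)} \quad \text{vs.} \quad X_T \sim \mathcal{N}(0, I)\ \text{(end2end)}.
$$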
Thanks for your informative response. I will reproduce it on my data and try other end2end architectures to check if they help.
WaveGrad large indeed helps, but interestingly, outputs on my dataset with the WaveGrad base architecture are not distorted; I will experiment with LJSpeech to find out what the difference is. Can I ask which noise schedule you tried? I was struggling to find a suitable …
@quangnh-2761 Great! We've used the same noise schedule as in the original WaveGrad work: …
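The exact schedule values are cut off above, so purely as an illustration, a WaveGrad-style linear beta schedule can be set up as below; the endpoints and step count are assumptions in the spirit of the WaveGrad paper, not the authors' confirmed values.

```python
import numpy as np

# Illustrative linear beta schedule in the spirit of WaveGrad.
# NOTE: beta_start, beta_end and num_steps are assumed values, not the
# schedule confirmed by the authors (their message is truncated above).
def linear_beta_schedule(beta_start=1e-6, beta_end=0.01, num_steps=1000):
    betas = np.linspace(beta_start, beta_end, num_steps)
    alphas = 1.0 - betas
    alphas_cumprod = np.cumprod(alphas)       # \bar{alpha}_t
    noise_level = np.sqrt(alphas_cumprod)     # sqrt(\bar{alpha}_t), the continuous noise-level conditioning
    return betas, noise_level
```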
Thank you. For some reason WaveGrad base with …
@quangnh-2761 Good! Can you share some audio samples to listen to?
Sorry, I cannot share samples from my datasets because of privacy policy, but I am experimenting with LJSpeech. I will send some samples when I finish training.
Here are some samples from the base and large models, trained for ~1k epochs: https://drive.google.com/drive/folders/1OCK_CD6nFmQZGPd_4hSdJLEN_ME1PxIU. I think they are acceptable but still not perfect; I keep training to see if they improve (WaveGrad 2 with the base model can reach nearly 3.9 MOS, maybe because of other small details). A problem is that I cannot do batch inference: in the training phase I randomly segmented the waveform to fit into memory, so the model did not learn how to deal with padding at inference time. The output soon explodes to infinity if I multiply by the mask; otherwise, the generated audio is affected by noise from the padded frames.
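A possible workaround for the padding problem, sketched with hypothetical names: run the reverse diffusion per utterance on the un-padded length instead of in a padded batch. `model.inference`, the tensor shapes, and `hop_size` here are placeholders, not this repo's actual API.

```python
import torch

@torch.no_grad()
def synthesize_one_by_one(model, cond_batch, frame_lengths, hop_size=256):
    # cond_batch: iterable of conditioning features, each of shape (channels, max_frames)
    # frame_lengths: true number of frames per utterance, before padding
    outputs = []
    for cond, n_frames in zip(cond_batch, frame_lengths):
        cond = cond[:, :n_frames].unsqueeze(0)   # drop padded frames before running the sampler
        audio = model.inference(cond)            # hypothetical reverse-diffusion call
        outputs.append(audio.squeeze(0)[: n_frames * hop_size])
    return outputs
```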
@quangnh-2761 I see, good! It seems like a hard increase in the number of parameters really helps. To improve the inference speed, I can suggest trying a noise schedule grid search for the needed number of solver steps. Or you can take your existing model and use our novel reverse SDE solver: https://arxiv.org/abs/2109.13821. It requires a much lower number of steps to produce good-quality samples, and it is easy to implement; you don't need to re-train the model.
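One way the suggested grid search could look, as a rough sketch: enumerate candidate few-step beta schedules and keep the one that scores best on a small validation set. `synthesize` and `quality_score` are placeholder hooks for your own sampler and metric, not functions from this repo.

```python
import itertools
import numpy as np

def grid_search_schedule(synthesize, quality_score, val_items, num_steps=6):
    # Draw increasing beta schedules from a small log-spaced pool and keep
    # the one with the best average score on the validation items.
    pool = np.logspace(-6, -1, num=12)
    best_schedule, best_score = None, -np.inf
    for schedule in itertools.combinations(pool, num_steps):  # combinations keep betas increasing
        schedule = np.array(schedule)
        score = np.mean([quality_score(synthesize(item, schedule), item) for item in val_items])
        if score > best_score:
            best_schedule, best_score = schedule, score
    return best_schedule, best_score
```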
Thank you, I will check your work on the fast solver. As for my dataset, its language (Vietnamese) is monosyllabic and words are not connected when pronounced, so I think it is easier to learn and harder to spot errors in; IMO the required model size is somewhat data-dependent.
Hi, thank you for sharing your excellent work. I want to ask about your end-to-end TTS model. In the paper, you stated that only the decoder is changed so that it can generate waveform (by using the WaveGrad architecture). So the $\mu$ vector is no longer the mean statistics of the mel-spectrogram, just hidden features. I wonder what probability you put into the monotonic alignment if the model cannot access mel features (mu_x and y in your code). Did you still keep the $\mathcal{L}_{enc}$ loss that constrains the similarity between mu_y and the mel spectrogram? Furthermore, since we do not know the mean statistics of the waveform, does the SDE have to change as well? I have listened to some samples from the e2e model on your website and noticed that although the audio had some noise, the alignments were quite decent. Did you try to improve the audio quality with an adversarial loss like HiFi-GAN?
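For reference, the encoder loss I mean is (writing it from memory of the Grad-TTS paper, so the notation may be slightly off):

$$
\mathcal{L}_{enc} = -\sum_{j=1}^{F} \log \varphi\big(y_j;\, \tilde{\mu}_{A(j)},\, I\big),
$$

where $y_j$ is the $j$-th mel-spectrogram frame, $A$ is the monotonic alignment, and $\varphi(\cdot;\mu,\Sigma)$ is a Gaussian density. My question is what takes the place of the mel frames $y_j$ when the decoder outputs raw waveform.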
Thank you.