Hi there. I followed the README and exported CFLAGS to speed up the dump_data step. I used librosa to downsample LJSpeech and sox to create the PCM file.
It takes around 20 min to process a 7 s wav file, and while the wav itself is only 302 KB, the dumped features add up to about 4 GB.
Is this working as intended? Any suggestions would be appreciated. Thanks!
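For reference, the sox step can also be done in pure Python: raw PCM is just the WAV file with its header stripped. A minimal sketch (file names are placeholders, and it assumes the input is already mono 16-bit, e.g. after resampling with librosa):

```python
# Hedged sketch: strip the WAV header to get the headerless
# 16-bit PCM that dump_data reads. Paths are placeholders.
import wave

def wav_to_raw_pcm(wav_path: str, pcm_path: str) -> int:
    with wave.open(wav_path, "rb") as w:
        if w.getsampwidth() != 2:
            raise ValueError("expected 16-bit samples")
        frames = w.readframes(w.getnframes())
    with open(pcm_path, "wb") as f:
        f.write(frames)
    return len(frames)  # number of raw bytes written
```

This only removes the container header; any resampling still has to happen beforehand.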
If your training file is too small, dump_data iterates over it to generate enough augmented data, and a 7 s clip is far too short. That's why it takes so long and why the output is so large.