can't find file bayesmodel.py #83
Finally, I download
After importing this file, it occurs that
Your latest issue is similar to the other: the API has changed whereas I didn't update the recipe, hence it is not fully compatible anymore. I'm not sure exactly what you are looking for: beer is mostly for unsupervised ASR, especially AUD. Maybe if you explain what you intend to achieve I can help you more precisely?
Thanks for your explanation and patience. I'm interested in the HMM-VAE model from this paper and want to reproduce its experiments. I searched
It should be "relatively easy" to get this model working in beer. First, I would strongly recommend that you stick with the

The difficulty of the HMM-VAE is to parallelize the training: the HMM-based AUD model can accumulate its statistics on different jobs, but with the VAE this is not possible anymore. Having said this, the MBOSHI database is small, so you can still train your model sequentially; it will be slow (maybe a day of training) but easier to implement.

Side note: if you consider a GMM-VAE instead of an HMM-VAE, you can still have very efficient training using a GPU. Just a quick question: are you interested in the model itself only, or is your final task also AUD?
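To make the parallelization point above concrete, here is a toy illustration (not beer's actual code) of why the HMM/EM side distributes easily: the sufficient statistics are additive, so each job can accumulate them on its own chunk of data and the master simply sums them before the parameter update. The Gaussian example and all names below are illustrative.

```python
# Toy sketch (not from the beer codebase): additive sufficient statistics
# let EM-style training accumulate per-job results and sum them.
import numpy as np

def gaussian_suff_stats(x):
    """Zeroth/first/second order statistics of a batch of scalars."""
    return np.array([len(x), x.sum(), (x ** 2).sum()])

def ml_update(stats):
    """Recover mean and variance from the accumulated statistics."""
    n, s1, s2 = stats
    mean = s1 / n
    return mean, s2 / n - mean ** 2

data = np.arange(10.0)
chunks = np.array_split(data, 3)  # pretend these chunks live on different jobs
stats = sum(gaussian_suff_stats(c) for c in chunks)  # cheap to sum centrally
assert np.allclose(ml_update(stats), (data.mean(), data.var()))
```

With a VAE in the loop, the update is a gradient step through the network rather than a closed-form function of summed statistics, which is why the same trick no longer applies directly.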
I'm interested in the model itself and have an idea related to HMM-VAE. AUD can be one of my tasks, and other sequential data may work too. I have learned
I've just created a notebook to illustrate how to build a VAE-GMM model (see the examples directory in this branch). That should give you a good starting point to get a VAE-HMM up and running.
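Independently of the notebook mentioned above, the piece that changes when you swap a plain VAE's standard-normal prior for a GMM prior is the log-prior term of the ELBO, log p(z) = log Σ_k π_k N(z; μ_k, diag(σ²_k)), evaluated at the encoder's latent samples. A minimal NumPy sketch (function and parameter names are hypothetical, not beer's API), computed in the log domain for stability:

```python
# Hedged sketch, not beer's API: the GMM-prior term of a VAE-GMM ELBO,
# log p(z) = log sum_k pi_k N(z; mu_k, diag(var_k)), via log-sum-exp.
import numpy as np

def gmm_log_prior(z, log_weights, means, variances):
    """log p(z) under a diagonal-covariance GMM prior.

    z           : (N, D) latent samples drawn from the encoder
    log_weights : (K,)   log mixture weights (their exps sum to 1)
    means       : (K, D) component means
    variances   : (K, D) component variances
    """
    diff = z[:, None, :] - means[None, :, :]                   # (N, K, D)
    log_pdfs = -0.5 * (np.log(2 * np.pi * variances)
                       + diff ** 2 / variances).sum(axis=-1)   # (N, K) Gaussian log-densities
    scores = log_weights[None, :] + log_pdfs
    m = scores.max(axis=1, keepdims=True)                      # stable log-sum-exp
    return (m + np.log(np.exp(scores - m).sum(axis=1, keepdims=True)))[:, 0]
```

With K = 1, zero mean and unit variance this reduces to the standard-normal log-density used by a vanilla VAE, which is a handy sanity check.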
I'm interested in the AUD task for a non-speech domain. I'm trying to incorporate the HMM-VAE model into
What do you suggest to make the VAE model work for the AUD task?
Yes, the HMM-based AUD is easily parallelized as it is more or less an EM algorithm. The VAE-HMM is harder to train on multiple machines and would heavily benefit from a GPU. To make it work, you could reimplement the forward-backward to be GPU friendly. For instance, this is what Kaldi does for lattice-free MMI.

Currently, my forward-backward is implemented in the log domain. On one hand it's slow because you have to switch back and forth between the log and probability domains, but on the other hand it is robust against underflow of floating-point values. A naive version of the forward-backward algorithm is straightforward to implement: making the forward (and backward) computation in the probability domain here will get you there. Unfortunately, you will quickly underflow. To avoid this issue, you need a modified version of the forward-backward algorithm that makes it stable. You can find a good explanation of the normalized forward-backward in this book (chapter 13). I don't know if this version will be sufficiently stable, but it should be a good starting point. If you feel motivated, pull requests are welcome :)
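The normalized forward recursion alluded to above (the chapter 13 version) can be sketched in a few lines: each frame's forward messages are rescaled to sum to one, and the log-likelihood is accumulated from the scaling factors, so nothing underflows while staying in the probability domain. All names here are illustrative, not from the beer codebase:

```python
# Sketch of the scaled ("normalized") forward recursion; names (pi, A, B)
# are illustrative, not from beer.
import numpy as np

def scaled_forward(pi, A, B):
    """Forward pass with per-frame normalization.

    pi : (K,)   initial state probabilities
    A  : (K, K) transitions, A[i, j] = p(s_t = j | s_{t-1} = i)
    B  : (T, K) per-frame emission likelihoods p(x_t | s_t = k)

    Returns the normalized forward messages and the total log-likelihood,
    accumulated from the scaling factors so no frame-level product underflows.
    """
    T, K = B.shape
    alpha = np.empty((T, K))
    alpha[0] = pi * B[0]
    c = alpha[0].sum()              # c_1 = p(x_1)
    alpha[0] /= c
    log_like = np.log(c)
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[t]
        c = alpha[t].sum()          # c_t = p(x_t | x_1, ..., x_{t-1})
        alpha[t] /= c
        log_like += np.log(c)
    return alpha, log_like
```

The backward pass reuses the same scaling factors, and on short sequences the accumulated log-likelihood matches the naive probability-domain forward exactly, which makes it easy to unit-test before moving to a GPU implementation.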
Thanks for your reply. At this moment I'm not concerned with parallel processing; I'm using small synthetic data to begin with. I'm trying to run the HMM-VAE phone-loop model for the unit discovery task. I guess the amdtk library would do the job, right? Is there a publication of yours that explains the HMM-VAE phone-loop model? Is it the same as this paper
About AMDTK, I discourage you from using it. I did implement the HMM-VAE with it, but it was in Theano and it was hell to debug or to change the model. Actually, beer is somewhat a reimplementation of AMDTK using PyTorch to easily integrate neural network models. If you want to start with a toy example, I strongly suggest you look at this example directly. Regarding publications, in addition to the paper you mentioned, the VAE-HMM phone loop was described in these papers:
I've seen the HMM-VAE example, but it does not incorporate a Dirichlet prior over the phones, right? Here's what I tried: I managed to run the AUD-HMM model. Then, using the trained HMM model as the prior for the VAE, I copied and ran the VAE optimization cells from the HMM-VAE notebook. However, the elbos print
The HMM prior model I've used is
How should I change the optimization code for this model to run with the VAE?
For the

In general, training a graphical model (GMM, HMM, ...) and then setting it as the prior of an untrained VAE is probably not a good idea; at the very least, be careful about it. The main reason is that the AUD-HMM has been trained on a feature space which is completely different from the latent space of the VAE. If you want to use an AUD-HMM as a prior over the VAE, I recommend that you look at how I create the AUD-HMM initial model and use that one as the prior (not the trained one).

One last recommendation: VAE + graphical model is a compelling idea on paper but unfortunately doesn't work as easily as one would hope (at least in my experience). Consequently, I strongly encourage you to test first with the simple VAE-HMM (no Dirichlet prior), just to make sure that the code is doing more or less what you expect. Good luck.
Ok, I'll try those. Thanks again for your time.
Indeed, papers have shown that the HMM-VAE performs well, but it's hard to reproduce the good results.
There are many headers including `from .bayesmodel import xxx`, but I can't find `bayesmodel.py` in your repo, which causes lots of problems.