
Using Lorentz model returns nan #2

Open
leoamb opened this issue Feb 17, 2021 · 2 comments

Comments

@leoamb

leoamb commented Feb 17, 2021

Hi, thank you for sharing this work. I found that using the Lorentz model returns nan most of the time. I am not sure if this is because the Lorentz model is unbounded.

@louaaron
Member

Hi. It could be happening for a variety of reasons, depending on your situation. Could you share your test code?

@leoamb
Author

leoamb commented Feb 18, 2021

I used the demo code for batch normalization that you provided, but with the Lorentz model instead of the Poincaré model. It turned out to be a machine-precision problem: with float32 tensors it returns nan, but with float64 it works. However, float64 makes the model twice as large and much slower to train. Any ideas on how to modify the code so it works with float32 tensors?
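For illustration (this is not code from the repository, just a minimal NumPy sketch of a common failure mode): the Lorentz distance is arccosh(-⟨x, y⟩_L), whose argument is mathematically ≥ 1 on the hyperboloid, but in float32 rounding can push it slightly below 1, where arccosh returns nan. A standard workaround is to clamp the argument before calling arccosh. The helper names below (`lorentz_inner`, `lorentz_dist`) are hypothetical:

```python
import numpy as np

def lorentz_inner(x, y):
    # Minkowski inner product: -x0*y0 + <x[1:], y[1:]>
    return -x[..., 0] * y[..., 0] + (x[..., 1:] * y[..., 1:]).sum(-1)

def lorentz_dist(x, y, clamp=True):
    arg = -lorentz_inner(x, y)      # mathematically >= 1 on the hyperboloid
    if clamp:
        arg = np.maximum(arg, 1.0)  # guard against rounding below 1
    return np.arccosh(arg)

# In float32, values just below 1 arise from rounding and make arccosh nan:
bad = np.float32(1.0) - np.float32(1e-7)  # rounds to a float32 strictly < 1
print(np.arccosh(bad))                    # nan

# Lift a Euclidean vector onto the hyperboloid: x0 = sqrt(1 + |v|^2)
v = np.array([30.0, 40.0], dtype=np.float32)
x = np.concatenate(([np.sqrt(1.0 + v @ v)], v)).astype(np.float32)

# d(x, x) should be exactly 0, but cancellation error in x0^2 - |v|^2 is
# on the order of |v|^2 * eps(float32), so the unclamped version may be nan:
print(lorentz_dist(x, x, clamp=False))  # possibly nan in float32
print(lorentz_dist(x, x, clamp=True))   # small finite value after clamping
```

A related mitigation, if clamping alone is not enough, is to keep the model parameters in float32 but cast to float64 only inside the few numerically sensitive operations (distance, exp/log maps) and cast back, which avoids doubling the memory of the whole model.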
