Hi, thank you for sharing this work. I found that using the Lorentz model returns NaN most of the time. I am not sure if this is because the Lorentz model is unbounded.
I used the demo code for batch normalization provided by you, but with the Lorentz model instead of Poincare. It turned out to be a machine-precision problem: with float32 tensors it returns NaN, but with float64 it works. However, float64 makes the model twice as large and much slower to train. Any ideas on how to modify the code so it works with float32 tensors?
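For what it's worth, a common source of the float32 NaNs is the Lorentz distance: for points on the hyperboloid, the negated Minkowski inner product should be exactly ≥ 1, but float32 rounding can push it slightly below 1, so `arccosh` returns NaN. A frequent workaround is to clamp the argument before taking `arccosh`. Here is a minimal NumPy sketch of that clamping trick (the function names and the `eps` value are my own; a PyTorch version would use `torch.clamp` before `torch.acosh` in the same way):

```python
import numpy as np

def lorentz_inner(x, y):
    # Minkowski inner product <x, y>_L = -x0*y0 + sum_i xi*yi,
    # with the time-like coordinate stored at index 0.
    prod = x * y
    return -prod[..., 0] + prod[..., 1:].sum(axis=-1)

def lorentz_distance(x, y, eps=1e-6):
    # Mathematically -<x, y>_L >= 1 on the hyperboloid, but float32
    # rounding can yield values like 0.99999994; arccosh of that is NaN.
    # Clamping to 1 + eps keeps the distance finite at the cost of a
    # tiny bias (arccosh(1 + eps) ~ sqrt(2*eps)) near zero distance.
    alpha = np.maximum(-lorentz_inner(x, y), 1.0 + eps)
    return np.arccosh(alpha)

# Lift some float32 points onto the hyperboloid: x0 = sqrt(1 + |v|^2).
v = np.random.randn(5, 3).astype(np.float32)
x = np.concatenate([np.sqrt(1.0 + (v**2).sum(-1, keepdims=True)), v], axis=-1)

d = lorentz_distance(x, x)  # stays finite even in float32
```

A related half-measure is to keep parameters in float32 and upcast only inside the distance/exp-map computations with `.double()`, which avoids doubling the whole model while protecting the unstable ops.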