Add Unit Tests - Comprehensive #4
Comments
This is great. I know it's just getting off the ground, but regarding the range of numbers: is double precision in the realm of possibility? I know it's fraught with caveats and platform dependence, but I'm curious whether it's possible. 7/10 tests succeed for me, so I'm wondering whether that's a matter of precision, size limits, a relation to powers of two, or something else.
@rreusser Thanks! I haven't found a way to do double precision yet, but I'm open to suggestions and PRs. Hopefully something that makes this easy will make it into the next version of WebGL. Regarding the existing unit tests, it's hard to tell from the output right now when precision is the problem. The best way to test that, at the moment, is adding the …. It defaults to ….
@rreusser I've added a notice to the unit tests that reports when the hardware lacks 32-bit precision support. I also created an issue for emulating higher precision where needed: #8. It looks like it should be possible to get 64-bit precision (even on hardware that only supports 16-bit!). Check it out and give it a 👍 if it looks good to you.
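For readers following along, here is a minimal sketch of the kind of capability check such a notice can hinge on. The extension name and WebGL calls are standard WebGL 1; the structure and warning text are illustrative guesses, not the library's actual code.

```js
// Sketch: detect whether the GPU can render to 32-bit float textures.
// OES_texture_float and the framebuffer-completeness check are standard
// WebGL 1; everything else here is illustrative.
const canvas = document.createElement('canvas');
const gl = canvas.getContext('webgl');

function supportsFloatTextures(gl) {
  if (!gl) return false;

  // OES_texture_float is required just to create FLOAT textures.
  if (!gl.getExtension('OES_texture_float')) return false;

  // Rendering to a float texture additionally requires a complete framebuffer.
  const tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 1, 1, 0, gl.RGBA, gl.FLOAT, null);

  const fb = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, tex, 0);

  const ok = gl.checkFramebufferStatus(gl.FRAMEBUFFER) === gl.FRAMEBUFFER_COMPLETE;
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  return ok;
}

if (!supportsFloatTextures(gl)) {
  console.warn('32-bit float textures unsupported; results may be limited to 16-bit precision.');
}
```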
@rreusser I made some updates to the unit tests to give more information about failures. Would you be willing to pull the latest and post your results here? Thanks again for taking the time to check out my work! It's nice to know there are other people out there who might be interested. 😄 🎉
Ah, that's very helpful debug output! Yeah, it definitely looks like the limits of precision. All of the failed tests have about seven digits of precision, which is about what you'd expect from single precision. Computing the roundoff error probably comes down to something akin to ….

My results:

And an additional console warning for every test:
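As a rough sanity check on that seven-digit figure: single precision carries a 24-bit significand, so the unit roundoff is about 2⁻²⁴ ≈ 6×10⁻⁸, which works out to roughly 7 significant decimal digits. A small sketch of that calculation (plain JavaScript, nothing library-specific):

```js
// Unit roundoff for IEEE-754 single precision: 2^-24.
const u = Math.pow(2, -24);            // ≈ 5.96e-8

// Roughly how many decimal digits that buys you.
const digits = -Math.log10(u);         // ≈ 7.2

// Confirm by round-tripping a value through a Float32Array.
const x = Math.PI;
const x32 = new Float32Array([x])[0];
const relErr = Math.abs(x32 - x) / x;  // on the order of 1e-8

console.log({ u, digits, relErr });
```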
(And yes, I was curious whether this could be plugged into ndarray relatively simply. I still need to sit down and think about whether the BLAS strides are complete enough to permit a generic ndarray.)
Thanks for the test output! I've been able to reproduce this bug in Chrome on OS X. It looks like most numbers are getting 5-7 digits of precision (as expected), but numbers near 129 are getting only 2-3.

There's a special bit of code that encodes numbers into bytes on the GPU, for shipping back into JavaScript. This feels like a bug in that code. I'll work on that today and hopefully have a fix by tomorrow. Thanks again for the test output! 👍
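For readers unfamiliar with the technique being described: when a platform can't read float pixels directly, a common approach is to write the raw IEEE-754 bits of each float into an RGBA byte texture and reassemble them in JavaScript. The sketch below only illustrates that general round trip on the CPU side; it is not the library's actual encoder.

```js
// Illustrative only: round-trip a float through 4 bytes, the same way a GPU
// readback path might pack IEEE-754 bits into RGBA8 and decode them in JS.

function floatToBytes(x) {
  const buf = new ArrayBuffer(4);
  new Float32Array(buf)[0] = x;      // write the single-precision bit pattern
  return new Uint8Array(buf);        // view the same 4 bytes (like RGBA)
}

function bytesToFloat(bytes) {
  const buf = new ArrayBuffer(4);
  new Uint8Array(buf).set(bytes);    // copy the 4 bytes back in
  return new Float32Array(buf)[0];   // reinterpret them as a float
}

// Values near 129 sit just past an exponent boundary (128 = 2^7), which is a
// plausible place for a hand-rolled shader encoder to slip up.
for (const x of [128.5, 129, 129.0001]) {
  console.log(x, '->', bytesToFloat(floatToBytes(x)));
}
```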
@rreusser The bug is fixed in my test environment. Can you run the tests with the latest master and let me know what happens? Also, I'm happy to help integrate this with the scijs ndarray; then I don't have to build a tensor library myself. 😄
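To make the ndarray idea concrete, here is a rough sketch of what the glue might look like. `gpuMatmul` is a hypothetical stand-in for whatever matrix-multiply call the library ends up exposing; only the `ndarray()` constructor is the actual scijs interface.

```js
const ndarray = require('ndarray');

// Hypothetical wrapper: gpuMatmul(a, b, m, n, k) stands in for the library's
// real API and is assumed to return a Float32Array of length m * n in
// row-major order. The ndarray() call is the real scijs constructor, whose
// default strides are also row-major.
function matmulToNdarray(gpuMatmul, a, b, m, n, k) {
  const out = gpuMatmul(a, b, m, n, k);  // Float32Array, row-major
  return ndarray(out, [m, n]);
}

// Usage sketch: result.get(i, j) then reads element (i, j) of the m x n product.
```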
More tests are almost always better.
This is enough for now. 😅
A more extensive set of unit tests, with a larger upper bound on runtime (< 5 min).