Several people asked me about this, given the recent trend in ML (and other areas) toward low-precision, custom bit-widths for floating-point arithmetic.
I suspect this would not be too hard to add as long as the formats are still IEEE-compliant; it would mostly amount to setting up the machine epsilons appropriately. But maybe I am wrong here.
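To illustrate the "setting up machine epsilons" point, here is a small sketch (my own illustration, not FPTaylor code): under round-to-nearest, the unit roundoff of a binary format with p significand bits (implicit bit included) is 2^-p, so most of the analysis machinery only needs the right p per format.

```python
def unit_roundoff(significand_bits: int) -> float:
    """Bound on the relative rounding error under round-to-nearest:
    u = 2**-p for a format with p significand bits (implicit bit included)."""
    return 2.0 ** (-significand_bits)

# IEEE double precision: 53 significand bits
print(unit_roundoff(53))   # 1.1102230246251565e-16
# A tiny custom format with 5 significand bits
print(unit_roundoff(5))    # 0.03125
```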
I have implemented custom floating-point formats in the new `rounding` branch. Here is a simple example:
Variables:

```
// <5, 2> = 5 significand bits (including the implicit bit) and 2 exponent bits
float<5, 2> x1 in [-10, 10];
```

Expressions:

```
// The space between '>' and '=' is required
r1 rnd<4, 4> = x1 + x1;
```
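For intuition about what rounding into such a format does, here is a hypothetical sketch of round-to-nearest with p significand bits (implicit bit included). It models only the precision part of a `<p, e>` format; the exponent range (overflow/underflow) is ignored, and it is not the branch's actual implementation.

```python
import math

def round_to_precision(x: float, p: int) -> float:
    """Round x to the nearest value representable with p significand bits."""
    if x == 0.0:
        return 0.0
    e = math.floor(math.log2(abs(x)))   # exponent of x
    ulp = 2.0 ** (e - (p - 1))          # spacing of representable values near x
    return round(x / ulp) * ulp         # round to nearest multiple of the ulp

# With <5, 2>: p = 5 significand bits
print(round_to_precision(0.1, 5))   # 0.1015625 (= 26 * 2**-8)
```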
I only ran several basic tests with the new formats. I am not going to close this issue until it is confirmed that there are no critical bugs in my implementation (HOL Light proofs do not work with custom formats yet, so I cannot formally verify my work right now).
It would also be nice to add fixed-point formats. That should be fairly straightforward: a fixed-point format is essentially a custom format without exponent bits. But more work needs to be done to implement fixed-point formats correctly.
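A sketch of the fixed-point idea (an assumption about the design, not the branch's implementation): a format with f fractional bits represents multiples of 2^-f, so rounding introduces an absolute error of at most 2^-(f+1), in contrast to floating-point's relative error.

```python
def round_fixed(x: float, frac_bits: int) -> float:
    """Round x to the nearest multiple of 2**-frac_bits."""
    scale = 2.0 ** frac_bits
    return round(x * scale) / scale

# With 4 fractional bits, values are multiples of 1/16:
print(round_fixed(0.1, 4))   # 0.125
# The absolute error is bounded by 2**-(4+1) = 0.03125:
print(abs(0.1 - round_fixed(0.1, 4)))   # 0.025
```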