Add support for custom floating-point bit-widths #19

Open
zvonimir opened this issue Oct 24, 2019 · 1 comment

Comments

@zvonimir
Member

Several people have asked me about this, given the recent trend in ML (and other areas) toward low, custom bit-widths for floating-point numbers.
I suspect this would not be that hard to add as long as the formats are still IEEE compliant: it would mostly amount to setting up the machine epsilons appropriately. But maybe I am wrong here.
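
For intuition, here is a small Python sketch (my own illustration under IEEE-style assumptions, not FPTaylor code) of how the two rounding constants for a custom format could be derived, assuming round-to-nearest and the usual exponent bias:

# Sketch (assumption, not FPTaylor code): rounding constants for an IEEE-style
# format with p significand bits (including the implicit bit) and e exponent bits.
def rounding_constants(p, e):
    eps = 2.0 ** (-p)            # bound on the relative rounding error
    emin = 2 - 2 ** (e - 1)      # minimum normal exponent (IEEE bias convention)
    eta = 2.0 ** (emin - p)      # bound on the absolute error for subnormal results
    return eps, eta

# Mathematically, p=53 and e=11 (binary64) give 2^-53 and 2^-1075,
# and p=24 and e=8 (binary32) give 2^-24 and 2^-150.
print(rounding_constants(5, 2))  # constants for the <5, 2> format discussed below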

@monadius
Member

I have implemented custom floating-point formats in the new rounding branch. Here is a simple example:

Variables
  // <5, 2> = 5 significand bits (including the implicit bit) and 2 exponent bits
  float<5, 2> x1 in [-10, 10];

Expressions
  // The space between '>' and '=' is required 
  r1 rnd<4, 4> = x1 + x1;

The reference is also updated.
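
To make the semantics of these annotations concrete, here is a rough Python sketch (my own illustration under IEEE-style assumptions, not code from the rounding branch) of what rounding a real value to a <p, e> format means:

import math

# Sketch of round-to-nearest for a toy <p, e> format: p significand bits
# (including the implicit bit), e exponent bits, IEEE-style bias, overflow ignored.
def rnd(x, p, e):
    if x == 0.0:
        return 0.0
    emin = 2 - 2 ** (e - 1)                  # minimum normal exponent
    exp = max(math.floor(math.log2(abs(x))), emin)
    ulp = 2.0 ** (exp - (p - 1))             # spacing of representable values near x
    return round(x / ulp) * ulp

print(rnd(3.3, 4, 4))   # 3.25, the nearest value representable in a <4, 4> format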

I have only run a few basic tests with the new formats. I am not going to close this issue until it is confirmed that there are no critical bugs in my implementation (HOL Light proofs do not work with custom formats yet, so I cannot formally verify my work right now).

It would also be nice to add fixed-point formats. This should be pretty straightforward: basically, fixed-point formats are custom formats without exponent bits. But more work needs to be done to implement fixed-point formats correctly.
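
As a rough illustration of that last point (an assumption about the intended semantics, not the branch's implementation), a fixed-point value with f fractional bits is just a real number rounded to the nearest multiple of 2^-f, so the rounding error is purely absolute:

# Sketch: fixed-point rounding with f fractional bits, i.e. rounding to the
# nearest multiple of 2**-f; the absolute error is bounded by 2**-(f+1).
def round_fixed_point(x, f):
    scale = 2 ** f
    return round(x * scale) / scale

print(round_fixed_point(3.14159, 8))  # 3.140625, within 2**-9 of the input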
