That does make sense, because half of all available fp numbers are less than 1 in magnitude. In particular there should be plenty of numbers x such that |x| << 1, so x + 1 ~= 1; in fact, the proportion should be just shy of 50%.
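(As a quick illustration, not part of the tool: for IEEE doubles, anything smaller than half an ulp of 1.0, about 1.1e-16, is absorbed completely when added to 1.)

```python
# Tiny x is absorbed entirely: x + 1 rounds back to exactly 1.0.
x = 1e-20
print(x + 1.0 == 1.0)       # True: x is far below half an ulp of 1.0

# The boundary: 2^-53 is exactly half an ulp of 1.0, and the tie
# rounds to even, i.e. back to 1.0. One full ulp (2^-52) survives.
print(2**-53 + 1.0 == 1.0)  # True
print(2**-52 + 1.0 == 1.0)  # False
```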
But I guess using the density distribution of floating points is rarely useful in a problem. Your actual distribution will almost surely be way different. Imo, the tool presented here should provide a way to manually provide a custom density function (with some common presets like uniform and normal distributions).
That is indeed one of the problems with IEEE floats. There are only about 10^80 atoms in the universe, and a Planck length is about 10^-60 of the radius of the observable universe. But 64-bit floats have an absurd range of over 10^±300! Worse than that, notice that there are about as many bit patterns in the never-used range between 10^300 and 10^301 as there are in the super-important range between 1 and 10! Super wasteful. Not to mention the quadrillions of values reserved to represent "NaN"...
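(You can check that bit-pattern claim directly: for positive doubles, reinterpreting the bits as a 64-bit integer gives an ordering that matches numeric ordering, so subtracting the integer patterns counts the representable doubles in an interval. A sketch, with `to_bits` being my own helper name:)

```python
import struct

def to_bits(x: float) -> int:
    # Reinterpret a positive double's bits as a signed 64-bit int;
    # for positive finite doubles this preserves numeric ordering.
    return struct.unpack('<q', struct.pack('<d', x))[0]

# Count of representable doubles in each decade-wide interval.
print(to_bits(10.0)  - to_bits(1.0))    # between 1 and 10
print(to_bits(1e301) - to_bits(1e300))  # between 1e300 and 1e301
# Both counts come out around 1.5e16 -- roughly the same.
```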
This is one of the problems that alternative formats such as the Posit aim to solve. It's quite interesting: I've got an implementation in rust here if you want to play with it https://github.com/andrepd/posit-rust
I wonder, is there a way to only request reformulations that don’t involve branches? The tool already seems quite nice, but that might be a good feature.
Also, I’m not sure I understand the speedup. Is it latency or throughput?
How useful is this when you are using numbers in a reasonable range, like 10^-12 to 10^12?
Generally I try to scale my numbers to be in this range, whether by picking the right units or scaling constraints and objectives when doing nonlinear programming/ optimization.
The precondition on the link you shared has -1 <= x && x <= 1, so 99 is way outside of that range. But even so, testing for x=1, which is supposed to be inside that range, 0.5 doesn't seem tolerably close to 0.4142.
I have a suspicion that the accuracy number is the mean of accuracies over all valid floats in the range (or something approximating that), which is going to be weighted towards zero where the accuracy is higher, and perhaps where sqrt near 1 has some artefacts.
You're right, I misread the graph. That said, I have played around with Herbie before, trying it out on a few of the more gnarly expressions I had in my code (analytical partial derivatives of equations of motion of a launch vehicle in a rotating spherical frame) and didn't see much appreciable improvement over the expected range of values, but then again I didn't check every single one.
What would be cool is if you could somehow have this kind of analysis done automatically for your whole program, where it finds the needle-in-the-haystack expression that can be improved, assuming you gave expected ranges for your variables.
This is an awesome piece of software, one of my favorite little pieces of magic. Finding more precise or more stable floating point formulas is often arduous and requires a lot of familiarity with the behavior of floats. This finds good formulas completely automatically. Super useful for numerical computation.
Did you read what this does? Because I get the feeling you didn’t…
This isn’t a library, you don’t include it in your application, and it doesn’t try to replace an understanding of floating point issues on the programmer’s part.
Also nice to see an article that's not about AI or politics.
For x in [−1.79e308, 1.79e308]:
Initial Program: 100.0% accurate, 1.0× speedup
Alternative 1: 67.5% accurate, 5.6× speedup
Like looking at this example,
https://herbie.uwplse.org/demo/b070b371a661191752fe37ce0321c...
It is claimed that for the function f(x) = sqrt(x+1) - 1,
accuracy is increased from 8.5% to 98% for Alternative 5, which has f(x) = 0.5x.
Ok, so for x=99, the right answer is sqrt(100) - 1 = 9.
But 0.5 * 99 = 49.5, which doesn't seem too accurate to me.
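(For what it's worth, 0.5x is just the Taylor approximation of sqrt(x+1) - 1 near x = 0, so it's only meant for tiny x. The cancellation problem the naive form has near zero is real, though, and the standard branch-free rewrite multiplies by the conjugate. A sketch, with `naive`/`stable` being my own names, not anything Herbie emits:)

```python
import math

def naive(x):
    # sqrt(x+1) is extremely close to 1 for tiny x, so subtracting 1
    # cancels away most of the significant digits.
    return math.sqrt(x + 1) - 1

def stable(x):
    # Algebraically identical (multiply by (sqrt(x+1)+1)/(sqrt(x+1)+1)):
    # no subtraction of nearly equal quantities, so precision survives.
    return x / (math.sqrt(x + 1) + 1)

x = 1e-15          # true answer is ~5.0e-16
print(naive(x))    # ~4.4e-16: already wrong in the leading digit
print(stable(x))   # ~5.0e-16: correct to full precision
```

For x = 99 both forms agree fine; the rewrite only matters where the cancellation bites, i.e. |x| << 1.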
> If the issue is that people write bad floating-point expressions, a code-writing tutorial would be a better solution.
Yeah you are just criticizing this without even looking at it. Shame.
Is this comment written by AI?