Whenever IEEE 754 and its quirks are discussed, a potential alternative (so far without hardware support) called Unum - Universal Numbers - should not go unmentioned:
IEEE 754 is not a trivial standard (it earned Kahan a Turing award). Error modes for IEEE 754 are precisely defined, though it takes real effort to understand what they mean. (For example, overflow triggers an exception, but gradual underflow is allowed.) Going beyond it requires some serious effort, and unums do not seem to be the solution.
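To see the overflow/underflow asymmetry concretely, here's a small Python sketch (variable names are my own):

```python
import sys, math

# Smallest positive *normal* double (about 2.2e-308).
tiny = sys.float_info.min

# Gradual underflow: halving it does not flush to zero; it lands
# on a subnormal number, so precision degrades gradually.
sub = tiny / 2
print(sub > 0.0)                  # True: still nonzero
print(sub * 2 == tiny)            # True: the halving was exact

# Overflow, by contrast, jumps straight to infinity.
print(math.isinf(sys.float_info.max * 2))   # True
```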
A good book for understanding the IEEE 754 standard is Michael Overton's. The quoted article is unfortunately an example of floating-point "scaremongering". Floating-point arithmetic is not approximate; it is accurately defined, with precise error modes. The model for understanding it is not base-10 arithmetic, however, but "units in the last place" (ULP).
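As a quick illustration of the ULP model in Python (`math.ulp` is in the standard library since 3.9):

```python
import math

# "Units in the last place" (ULP): the spacing between adjacent
# doubles at a given magnitude. Around 1.0 it is 2**-52.
print(math.ulp(1.0) == 2**-52)         # True

# 0.1 + 0.2 is not vaguely "approximate" -- the result is the
# correctly rounded sum of the two doubles nearest 0.1 and 0.2,
# and it lands exactly one ULP above the double nearest 0.3.
x = 0.1 + 0.2
print(x)                                # 0.30000000000000004
print(abs(x - 0.3) == math.ulp(0.3))    # True
```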
A note on the posit (aka sigmoid) standard that Gustafson discusses in the video:
1. Main feasibility benefit: behaves like IEEE 754 numbers in terms of accuracy.
2. Main performance benefit: able to match IEEE 754 precision & accuracy in fewer bits (~1/2).
3. When finalized, the standard will fully specify behavior that IEEE 754 leaves open to a degree of interpretation, which is why IEEE 754 results may not be reproducible across runs and machines.
4. IEEE 754 error handling, which requires extra silicon or cycles, is not implemented. Overflow and underflow are not treated as errors producing Inf or NaN; results are rounded to the nearest representable value, giving uniform treatment (rounding) to all numbers with no exact representation. Illegal operations are to be handled as a general error, à la integers, rather than producing NaN.
4a. Performance benefit: point 4 should mean that less silicon and fewer cycles are required to perform arithmetic than in IEEE 754 - so Gustafson believes, though his team has only just finished an ALU/FPU circuit schematic for posits.
4b. Programming using posits should feel more like programming using integers, versus handling NaN, Inf.
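To make the layout behind points 1-4 concrete, here is a hypothetical little decoder for classic (Type III) 8-bit posits with a configurable number of exponent bits `es`. The function name and structure are my own illustration, not the standard's reference code:

```python
def decode_posit8(bits, es=0):
    """Decode an 8-bit posit to a float (illustrative sketch).

    Layout: sign bit, then a variable-length 'regime' (a run of
    identical bits ended by its complement), then up to `es`
    exponent bits, then the fraction with a hidden leading 1.
    """
    if bits == 0:
        return 0.0
    if bits == 0x80:
        return float("nan")            # NaR: the single non-real pattern
    sign = -1.0 if bits & 0x80 else 1.0
    if bits & 0x80:
        bits = (-bits) & 0xFF          # two's-complement negation
    # Bits after the sign, most significant first.
    rem = [(bits >> i) & 1 for i in range(6, -1, -1)]
    # Regime: length of the leading run of identical bits.
    first, run = rem[0], 1
    while run < len(rem) and rem[run] == first:
        run += 1
    k = run - 1 if first == 1 else -run
    body = rem[run + 1:]               # skip regime + terminating bit
    # Exponent bits; missing low bits count as zero.
    exp_bits = body[:es]
    e = 0
    for b in exp_bits:
        e = (e << 1) | b
    e <<= es - len(exp_bits)
    # Fraction bits with the hidden leading 1.
    frac_bits = body[es:]
    f = 0
    for b in frac_bits:
        f = (f << 1) | b
    frac = f / (1 << len(frac_bits)) if frac_bits else 0.0
    return sign * 2.0 ** (k * (1 << es) + e) * (1.0 + frac)

print(decode_posit8(0x40))   # 1.0
print(decode_posit8(0x48))   # 1.25
print(decode_posit8(0x7F))   # 64.0: maxpos for es=0
```

Note how the regime gives tapered precision: near 1.0 most bits go to the fraction, while extreme magnitudes spend bits on the regime instead.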
> 1. Main feasibility benefit: behaves like IEEE 754 numbers in terms of accuracy.
I think you mean "...in terms of allocation and storage"
2. I should point out that posits improve over IEEE 754 precision & accuracy over some value ranges. Over others, it's worse. The argument, though, is that you're more likely to be using the ranges where posits win than the ones where they lose. It's also not nearly 2x: for 64 bits, for example, you'll realistically get about 6-7 extra bits of precision around 1. The standard DOES require features like an "exact dot product" operation, which can allow you to do things that are "impossible" with the standard 754 spec; for example, exact solutions of systems of linear equations using convergent methods.
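A cheap way to see what exact accumulation buys, sketched in Python with rationals standing in for a Kulisch-style exact accumulator (the example values are my own):

```python
import math
from fractions import Fraction

# A naive left-to-right float sum can lose low-order terms entirely:
xs = [1e16, 1.0, -1e16]
print(sum(xs))          # 0.0 -- the 1.0 vanished into 1e16's rounding

# Exact accumulation (rationals here, a wide fixed-point register in
# hardware) keeps every bit and rounds only once at the end:
exact = float(sum(Fraction(x) for x in xs))
print(exact)            # 1.0

# math.fsum gives the same correctly rounded result in pure floats:
print(math.fsum(xs))    # 1.0
```

An exact dot product is the same idea applied to sums of products, which is what makes exactly computed residuals (and hence convergent refinement of linear systems) possible.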
The way I've been thinking about it of late is that floats and posits are each a "lossy compression of the real number line"; posits make better choices about how to compress this information, but there are of course tradeoffs.
https://en.wikipedia.org/wiki/Unum_(number_format)
Previous discussions (>2y old):
https://news.ycombinator.com/item?id=9943589 and
https://news.ycombinator.com/item?id=10245737
A slideset: https://www.slideshare.net/insideHPC/unum-computing-an-energ...