I absolutely disagree with your disagreement. Please try writing an ERP system where you have a quantity of a billionth of an item and a price of a billion euros (or a price of a billionth of a euro and a quantity of billions) and tell me which integer type you deem sufficient for this.
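To put rough numbers on it, here is a small Python sketch; the fixed-point scheme (nano-items for quantity, cents for price) is purely an assumption for illustration, not anything a particular ERP prescribes:

```python
# Hypothetical fixed-point scales, chosen only to make the magnitudes concrete:
# quantities in nano-items (10^-9 of an item), prices in cents.
INT64_MAX = 2**63 - 1

qty_nano = 1                          # a billionth of an item
price_cents = 1_000_000_000 * 100     # a billion euros

# The raw product is in (nano-item * cent) units and must be divided by 10**9
# afterwards to get back to cents.
raw = qty_nano * price_cents
print(raw, raw > INT64_MAX)           # 100000000000 False -- this one still fits

# Bump the quantity to 0.3 items (3 * 10^8 nano-items) and the intermediate
# product already exceeds what an int64 can hold.
raw = 300_000_000 * price_cents
print(raw, raw > INT64_MAX)           # 30000000000000000000 True
```

Nine fractional digits on one side plus ten integer digits (plus cents) on the other leaves essentially no headroom in a single 64-bit integer.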
Additionally, please research decimal floating point before disagreeing.
> Please try writing an ERP system where you have a quantity of a billionth of an item and a price of a billion euros (or a price of a billionth of a euro and a quantity of billions) and tell me which integer type you deem sufficient for this.
For what it's worth, values on the order of 10^18 are exactly representable in int64 (which tops out around 9.2 * 10^18); decimal64, however, only has 16 significand digits, so at that magnitude you are already affected by rounding: from 10^16 upward the distance between consecutive representable values is greater than 1, and at 10^18 it is already 1000.
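If you want to watch the rounding happen, here is a small Python sketch; it uses the standard decimal module with the context precision set to 16 digits to mimic decimal64's significand (Python's Decimal is not IEEE decimal64; the exponent range and defaults differ), and plain Python ints with an explicit bound check standing in for int64:

```python
from decimal import Decimal, getcontext

getcontext().prec = 16        # decimal64 carries 16 significant decimal digits

a = Decimal(10) ** 18
b = a + 1                     # the exact sum needs 19 digits, so it gets rounded
print(b == a)                 # True: the +1 is lost; the step size at 10^18 is 1000

# int64 keeps every unit exact up to ~9.2 * 10^18 (emulated here with Python
# ints, which never overflow, plus an explicit range check).
INT64_MAX = 2**63 - 1
x = 10**18 + 1
assert x <= INT64_MAX
print(x)                      # 1000000000000000001, exact
```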
In general, with floating point you either waste lots of space on excessive precision around zero, or you quickly lose precision as numbers grow. On top of that, you get all the other quirks of floating-point arithmetic, such as cancellation.
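Both effects are easy to reproduce with ordinary binary64 doubles (plain Python floats; nothing ERP-specific is assumed here):

```python
# Precision fades as magnitudes grow: at 10^16 the spacing between
# consecutive doubles is already 2, so adding 1 does nothing.
a = 1e16
print(a + 1 == a)             # True

# Cancellation: the tiny representation error in x is harmless relative to x,
# but after subtracting the nearly equal y it dominates the result.
x = 1.0 + 1e-15
y = 1.0
print((x - y) / 1e-15)        # ~1.11 instead of 1.0, i.e. about 11% relative error
```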
> Imagine a contract which says the contract partner receives e.g. 0.001% of the total revenue for the fiscal year (which could be 10bn Euros)
I'm imagining it, and I don't see why storage or computation time would be major obstacles to a calculation you run once a year. Use whatever big integers you want?
Suppose we dedicated one $100 hard drive (per year!) to storing all the relevant data. I feel like there would be plenty of space left over, and the budget would cover it.
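For what it's worth, the once-a-year calculation from the quote fits comfortably into exact rational arithmetic. A tiny Python sketch, where the 10bn revenue and the 0.001% share come from the quote and the cent scaling and variable names are made up for illustration:

```python
from fractions import Fraction

revenue_cents = 10_000_000_000 * 100     # 10bn EUR held as integer cents
share = Fraction(1, 100_000)             # 0.001% = 1/100000

payout = revenue_cents * share           # exact rational arithmetic, no rounding
assert payout.denominator == 1           # this one happens to land on whole cents
print(payout.numerator)                  # 10000000 cents = 100,000 EUR
```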