The thing is that if 2.115 represents a calculated dollar figure, such as the value of some transaction or the cost of something, then we should round it to 2.12. (Unless we are working in a financial domain that deals with fractions of a cent.) Now in floating-point we don't have the exact value 2.12, but we have something that is extremely close. So close that if we happen to print it to %.3f, we'd better get 2.120, and if we print it to %.4f, we'd better see 2.1200.
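A quick check bears that out (the literal 2.12 here just stands in for whatever value the calculation produced):

    #include <stdio.h>

    int main(void) {
        /* 2.12 is not exactly representable, but the nearest double
           prints back as expected at these precisions */
        printf("%.3f %.4f\n", 2.12, 2.12);  /* prints: 2.120 2.1200 */
        return 0;
    }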
That some monetary calculation works out to $2.115 (and is left that way) instead of being correctly rounded to $2.12 doesn't add up to a valid argument against using floating-point for money.
I think piadodjanho does have a point there in the grandparent comment; "don't use floating-point for money" may just be a repeated mantra that doesn't entirely hold water. If extremely accurate engineering and scientific calculations can be done with floating-point, surely we can get floating-point values to measure stacks of pennies with the proper care in the programming.
> If extremely accurate engineering and scientific calculations can be done with floating-point, surely we can get floating-point values to measure stacks of pennies with the proper care in the programming.
That was my position for a long time. I have definitely commented before, either here or in /r/programming, to the effect that floating point is fine for money as long as you are aware that it is not exact and not associative, and take that into account when doing your calculations.
Any intermediate result in a calculation chain might be off a tiny amount from the exact value, but as long as you round to the nearest 0.01 before the accumulated error reaches 0.005, you'd be fine.
I think that's probably true for addition of money amounts. If you have a large number of costs to add up, for example, you should be able to add thousands of them, round to nearest 0.01, and get the right result.
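A sketch of that, comparing against an exact integer-cent total (the toy prices are mine; any cent-denominated values would do):

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double sum = 0.0;   /* naive floating-point running total */
        long cents = 0;     /* exact integer-cent reference */
        for (int i = 1; i <= 5000; i++) {
            long c = (i * 37) % 10000;   /* pseudo-random price in cents */
            sum += c / 100.0;
            cents += c;
        }
        /* one final round to the nearest cent recovers the exact total */
        printf("%ld vs %ld cents\n", lround(100 * sum), cents);
        return 0;
    }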
But for tax calculations, such as 10% of $21.15, 0.1 x 21.15 = 2.1149999999999998 in 64-bit IEEE floating point, and rounding to the nearest 0.01 gives 2.11, not the 2.12 that we want. A call to fesetround(FE_UPWARD) makes that come out just above 2.115, and rounding to the nearest 0.01 then gives the desired 2.12.
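Concretely, something like this shows the difference (how faithfully the rounding mode is honored can depend on the compiler and optimization flags; strictly you also need #pragma STDC FENV_ACCESS ON):

    #include <stdio.h>
    #include <math.h>
    #include <fenv.h>

    int main(void) {
        double amt = 21.15, rate = 0.10;

        /* default FE_TONEAREST: the product lands just below 2.115 */
        double tax = amt * rate;
        printf("%.17g -> %ld cents\n", tax, lround(100 * tax));  /* 211 */

        /* FE_UPWARD: the product lands just above 2.115 */
        fesetround(FE_UPWARD);
        tax = amt * rate;
        printf("%.17g -> %ld cents\n", tax, lround(100 * tax));  /* 212 */

        fesetround(FE_TONEAREST);
        return 0;
    }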
Will FE_UPWARD make this work for all amounts and tax rates, or are there amounts and rates where we need FE_TONEAREST or FE_DOWNWARD? If so, how do we tell which one we need? Like I said earlier:
> I'm not fully convinced that you cannot do all the calculations in floating point, but I am convinced that I can't figure it out.
PS: calculating tax in cents given double amt, rate, using this method:
    tax = amt * rate;                /* e.g. 0.1 * 21.15 */
    cents_tax = round(100 * tax);    /* round() from <math.h> */
almost works if the rounding mode is FE_UPWARD. For all amounts from 0.01 through 99.99, and all tax rates from 0.01% through 10.99% in increments of 0.01%, it works except for 3.75% of $67.60 and 7.5% of $33.80.
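The brute-force check was along these lines (the integer half-up computation is the reference I'm assuming here; whether it reproduces exactly those two exceptions depends on the reference you compare against):

    #include <stdio.h>
    #include <math.h>
    #include <fenv.h>

    int main(void) {
        for (long a = 1; a <= 9999; a++) {        /* amount in cents */
            for (long r = 1; r <= 1099; r++) {    /* rate in 0.01% steps */
                fesetround(FE_TONEAREST);         /* decimal inputs -> nearest doubles */
                double amt = a / 100.0, rate = r / 10000.0;
                fesetround(FE_UPWARD);            /* the method under test */
                long got = lround(100 * (amt * rate));
                long want = (a * r + 5000) / 10000;  /* exact half-up, in integers */
                if (got != want)
                    printf("%.2f%% of $%.2f: got %ld, want %ld\n",
                           r / 100.0, amt, got, want);
            }
        }
        return 0;
    }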
> but as long as you round to the nearest 0.01 before the accumulated error reaches 0.005, you'd be fine.
And in run-of-the-mill, everyday finance, there simply isn't enough calculation stuffed between the concrete monetary amounts recorded in the ledger to accumulate that much error.
> If you have a large number of costs to add up, for example, you should be able to add thousands of them, round to nearest 0.01, and get the right result.
Exactly.
> But for tax calculations, such as 10% of $21.15, 0.1 x 21.15 = 2.1149999999999998 in 64-bit IEEE floating point, and rounding to the nearest 0.01 gives 2.11, not the 2.12 that we want.
This problem will be there even if we use integers for the currency amounts and floating-point only for these fractional calculations.
Luckily for us Canadians, I'm pretty sure the Canada Customs and Revenue Agency won't care which way you call this rounding. They also don't collect or refund overall discrepancies of less than around two dollars in a single tax return.
I think I've been mostly rounding taxes down over the years, and tax credits up. E.g. if a tax credit is $235.981..., I make it 235.99.
The myth that has been foisted on programmers is that if you use floating-point for money, the actual ledgers won't balance, and sum totals of columns of figures will appear incorrect if verified by pencil-and-paper arithmetic. That will certainly be true if the math is done very carelessly; and it's true that integers make it easier to get right with less care.
A percentage calculation whose rounding is called the wrong direction will, in and of itself, not cause such a problem. E.g. if we split some sum of money into two complementary percentages, we can do it such that the two add up to the original.
You have to be careful not to do this as two independent percentages. That is, don't take 10% of 21.15 and then 90% of 21.15, round each to a penny, and expect them to add up to 21.15. It has to be centround(21.15 - centround(.1 * 21.15)) to get the 90% residue.
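In code, with centround being the round-to-the-nearest-cent helper named above:

    #include <stdio.h>
    #include <math.h>

    /* round to the nearest cent */
    static double centround(double x) { return round(100 * x) / 100; }

    int main(void) {
        double total = 21.15;
        double part = centround(0.1 * total);   /* 2.11: rounded the "wrong" way */
        double rest = centround(total - part);  /* 19.04: the 90% residue */
        printf("%.2f + %.2f = %.2f\n", part, rest, part + rest);  /* sums to 21.15 */
        return 0;
    }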