
It's not always terrible. I've seen doubles appropriately used in cases where performance was paramount, and floating point error was either not relevant or less important.

That said, yeah, when working with money in situations where money matters, some sort of decimal or rational datatype should be the rule, not the exception.




Storing money in floating point is always terrible. If speed is an issue, store it in integer types representing the smallest unit in the currency, e.g. pennies.

Unless you’re doing, what, massively parallel GPU algos on batches of independent amounts? But even then you could use the float as an int in that way... Honestly when is float ever actually good for money? Not for speed, not for correctness, ...


I think you mean that storing money in floating point is always terrible for accounting. Not all of finance is accounting.

Imagine you work at a hedge fund, and you have a model that predicts the true value of some option. Assume the option is trading for $3.00. You do not really care if your model spits out $3.5 or $3.5000000001, you are going to buy either way. And your model probably involves a bunch of transcendental functions or maybe even non-deterministic machine learning, so it's not really meaningful to expect it to be “exact” to some decimal or even rational value.

Even more saliently, you probably don't care whether your model outputs 2.9999999 or 3.000000 or 3.000001, either, because in any of those cases the actual correct interpretation is “we’re just not sure whether to buy or not”.

I think a good first-order characterization of domains where floating point can safely be used is “when the difference between < and <= is not very meaningful” (in calculus terms: when “how meaningful is a difference of `x`” is a continuous function of `x`).


I think the "floating point are bad for storing currencies" is one of the most common misconception about floating point.

Most people don't realize that an IEEE-754 single precision floating point number represents real numbers with 9 decimal digits (or 23 binary digits). A double, on the other hand, represents real numbers with 17 decimal digits.

This means that the double's error UPPER BOUND is (0.00000000000000001)/2 per operation. In practice the error is usually lower than that bound because of how rounding works.

Also, it is possible to extend the range using denormals, but most (all?) compilers disable them when compiling with anything other than O0 to avoid performance degradation.

The overhead associated with dealing with non-float types might not be worth the cost and risk for most applications. Of course, if the language you are working with provides a currency type, go for it. But if it doesn't, there is no need to worry.


> Most people don't realize that an IEEE-754 single precision floating point number represents real numbers with 9 decimal digits (or 23 binary digits). A double, on the other hand, represents real numbers with 17 decimal digits.

No, they don't. They merely can be converted back to decimal with those numbers of significant digits without loss of information.

That is important because (a) if this matters, you have to make sure you actually control the number of significant digits when converting to decimal, or you might end up with a different decimal, and (b) the operations that you do on the floats do not reliably behave as if there was the supposedly represented decimal number stored in them.
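For example, here is a rough Python sketch of the difference between "round-trips at 17 significant digits" and "stores that decimal value":

  from decimal import Decimal

  x = 0.1
  print(f"{x:.17g}")   # 0.10000000000000001 -- round-trips back to the same double
  print(f"{x:.20g}")   # 0.10000000000000000555 -- ask for more digits and a different decimal appears
  print(Decimal(x))    # exact stored value: 0.1000000000000000055511151231257827021181583404541015625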

Now, sure, you can use floats for currency, if you know what you are doing, but the point of the warning against it is that you have to know what you are doing, and chances are you don't, or if you do, then you know where you can ignore it anyway.

(That is, unless you mean nothing more than that you can encode the information contained in an n-digit decimal in a float/double--which of course is true, but not particular to floating point numbers, as any state with a certain number of bits can, of course, encode any information of no more than that many bits, somehow.)


In a previous discussion, someone was worrying about using floats to represent prices in JS. I think this is a consequence of the fear mongering about using floats to store currencies.

Floating point is hard. There is a study of academics showing that even researchers who work with floating point every day forget about the format's intricacies. And the study didn't even look into the compiler mess.

But I agree with you: some (a lot of?) caution is needed when working with floating point.


World GDP is around 87 trillion dollars.

    $ ruby -e 'pp 87e12 + 0.01'
    87000000000000.02
If you're certain that your software will never handle national-economy-scale or hyperinflationary use cases, then sure, you may be able to get away with 64-bit floats, but I think "no need to worry" is overstating your case. Please do worry about precision until you've proven you don't need to.


You probably want some smaller unit than a dollar for currency as well, in which case it becomes a problem at even smaller amounts.

I really see no reason to use any other representation for currency than decimal fixed point. Store the amount as mils or whatever unit suits your use case.
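For instance, a minimal Python sketch of that scheme (assuming mils, i.e. thousandths of a dollar, are fine enough for the use case; the helper names are just for illustration):

  from decimal import Decimal

  def to_mils(amount: str) -> int:
      # parse a decimal string and keep the amount as an integer count of mils
      return int(Decimal(amount) * 1000)

  def mils_to_str(mils: int) -> str:
      return f"{mils // 1000}.{mils % 1000:03d}"

  subtotal = to_mils("19.99") * 3       # 59970 mils, exact integer arithmetic
  print(mils_to_str(subtotal))          # 59.970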


Yeah, I should have been more careful with my words.

For the vast majority of people, there is no need to worry so much about using fp to represent currencies. There are other issues with floats that will bite you before precision becomes one of them.


Depending on context, you can assume that precision will bite you.

The problem is that rounding is kind of a big deal in certain financial contexts, and the process of rounding can greatly magnify floating point's decimal precision problems when you're dealing with numbers that are close to the .5's.

When I said up above that there are some contexts where IEEE floats are fine, those contexts are largely ones where you never have to round, or where you can guarantee that an accountant is never going to see or care how you rounded. So, to an approximation: Go ahead and fearlessly implement the Black-Scholes formula using doubles, but never, ever use them to do something simple like calculating an invoice.


Fun, tangential anecdote:

I worked with a CSV containing, among other things, phone numbers. A coworker called and complained that the phone numbers were all wrong. He'd edited the thing in MS Excel, which promptly converted the phone numbers to floating point with a loss in precision. When he saved it, those new numbers were happily written back to the disk.


I agree with your overall point: it most likely does not matter when the values are close enough. However :)

There can be two companies with 100M market cap. Corp A has issued 10M shares @ 10 each, Corp B has 10B shares priced at 0.01

A +/-0.001 change in Corp A share price is just 0.01% and moves the market cap by +/-10k, so probably nothing significant. The same nominal change in Corp B amounts to 10%, or +/- 10M in the company value, which is quite a big deal.

Also I think there may be some money to be made in changes at the 7th decimal place with large enough volume of high frequency transactions.


Because of the way floating point numbers work, you'd get an accurate amount for both cases, as it's really the number of significant figures, not decimals.
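A rough Python sketch of that point (eps below is the double-precision machine epsilon):

  import sys

  eps = sys.float_info.epsilon   # 2.220446049250313e-16, relative spacing of doubles
  for price in (10.0, 0.01):
      # neighbouring representable values near `price` differ by at most about price * eps,
      # so a 0.001 move is resolved with plenty of headroom at either scale
      print(price, price * eps)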


And that roughly captures the spot where I was seeing doubles used.

Yes, they could have used fixed point. I am guessing that what happened is that someone who had thought way more deeply about this than I ever needed to (I worked on the accounting side, where, yep, we always used decimals) either determined that, where the modeling was concerned, floating point errors were not worth worrying about, or estimated that the expected cost to the company stemming from bugs due to fixed point math being easier to goof up on would have been smaller than the expected cost to the company due to floating point error.


To see a 0.1 error using _double_ you have to do at least 2*10^17 operations (assuming the worst case scenario and no subnormals).

If you are working with such huge numbers, 0.1 cents is probably a cost you are willing to pay to avoid spending thousands on a software solution. The power saved by using floating point is likely greater than the power your computers would have to expend to get a precise solution.


You can get a larger error than that using one operation.

  fn main() {
      let x: f64 = 9007199254740992.0;
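      // 9007199254740992.0 is 2^53: from here on, consecutive f64 values are 2.0 apart,
      // so x + 1.0 falls exactly halfway between neighbours and rounds (ties-to-even) back to x.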
      assert_eq!(x + 1.0, x);
  }


You are absolutely right.

When adding numbers with large magnitude differences (around 10^17, I think) the result might exceed the format's precision. I should have taken that into account when defining the error boundaries.

In dollars, you start having issues with cents when working with tens of trillions.

For the vast majority of people this won't be an issue.


I can give you 1.0 error. Take a handful of numbers that add up to 1.5, sum them, and then round that result to the nearest unit.

I'm too lazy to figure out a specific example, but sets of numbers where doubles round up and decimals round down (or vice versa) aren't terribly uncommon.


My day job is high performance financial model implementation. Floats storing dollar amounts are the norm for predictions. Operating on values that are linear combinations of integer fractions multiplied by irrational constants (such as Euler’s number) is perfectly possible, but it’s much more performant to be aware of floating point epsilon when writing modeling code.


Financial models are predictive, they don't have to be accurate to a penny, right? Unlike processing actual money people own.

(I do some work with predictive simulations about money, but outside finance, and there we care that the result has accurate order of magnitude. Floats were used extensively in the project; I actually upgraded them to doubles for the sake of handling larger order of magnitude spans.)


That’s right. The trading desk also uses floats for analysis and regulatory reporting. Actual account balances come through an API that gives us floats, but rumor has it that it’s backed by a Hollerith punch card library maintained by cybernetic undead, encoded in 1215-EBCDIC-BLACKTONGUE.


I stand corrected, thanks for this example.


> If speed is an issue, store it in integer types representing the smallest unit in the currency, e.g. pennies

More typically, mills[1] (tenth of a cent).

[1]: https://en.m.wikipedia.org/wiki/Mill_(currency)


Amazon's EC2 hourly prices are rounded to mils ($0.011/hour).

https://aws.amazon.com/emr/pricing/

Azure has some hourly prices specified to ten-thousandths of a dollar ($0.0102/hour):

https://azure.microsoft.com/en-ca/pricing/details/virtual-ma...

Microsoft should use gas station 9/10 pricing conventions to just barely undercut Amazon's lowest price $0.011 with $0.0109.

https://www.marketplace.org/2018/10/11/why-do-gas-prices-end...

>“They found out that if you priced your gas 1/10 of a cent below a break point, let’s say 40 cents a gallon, ‘.399’ just looked to the public like 39 cents…”


Tarsnap goes as low as counting attodollars. Yes, that's 10^-18 dollars, judging by the precision with which individual line items and total account funds are reported. Storage price is 250 picodollars per byte-month.
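Back-of-the-envelope, a quick Python check (keeping the bookkeeping in integer picodollars):

  picodollars_per_byte_month = 250
  bytes_per_gb = 10**9
  cost = picodollars_per_byte_month * bytes_per_gb   # 250_000_000_000 picodollars
  print(cost / 10**12, "dollars per GB-month")       # 0.25 dollars per GB-month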


If it’s not possible to charge such amounts, what exactly is the point of the accuracy?


They're usually charging you for a shitload of them!


Tarsnap is prepaid.


"Tarsnap's author is a geek." ;)

https://www.tarsnap.com/picoUSD-why.html


"when it internally converts storage prices from picodollars per month to attodollars per day, it rounds the prices down to benefit the customer."

A gentleman and a scholar.


Storing money in floating point is fine. Just round to the nearest atomic unit when displaying. Sometimes this is a necessity when working with money in e.g. existing JSON APIs. You lose a few bits of range relative to fixed point storage but it's almost never a practical issue.

Performing arithmetic operations against money in floating point is the dangerous part, as error can accumulate beyond an atomic unit.


> Performing arithmetic operations against money in floating point is the dangerous part, as error can accumulate beyond an atomic unit.

A good example of this is trying to compute the sales tax on $21.15 given a tax rate of 10%. The exact answer would be $2.115, which should round to $2.12.

IEEE 64-bit floating point gives 2.1149999999999998, which is hard to get to round to 2.12 without breaking a bunch of other cases.

Here are three functions that try to compute tax in cents given an amount and a rate, in ways that seem quite plausible:

  def tax_f1(amt, rate):
    tax = round(amt * rate,2)
    return round(tax * 100)
  
  def tax_f2(amt, rate):
    return round(amt*rate*100)
  
  def tax_f3(amt, rate):
    return round(amt*rate*100+.5)
On these four problems:

   1% of $21.50
   3% of $21.50
   6% of $21.50
  10% of $21.15
the right answers are 22, 65, 129, and 212. Here is what those functions give:

  tax_f1:  21  65 129 211
  tax_f2:  22  64 129 211
  tax_f3:  22  65 130 212
Note that none of them gets all four right.

I did some exhaustive testing and determined that storing a money amount in floating point is fine. Just convert to integer cents for computation. Even though the floating point representation in dollars is not exact, it is always close enough that multiplying by 100 and rounding works.

Similar for tax rates. Storing in floating point is fine, but convert to an integer by multiplying by an appropriate power of 10 first. In all the jurisdictions I have to deal with, tax rate x 10000 will always be an integer so I use that.

Given amt and rate, where amt is the integer cents and rate is the underlying rate x 10000, this works to get the tax in cents:

  def tax(amt, rate):
    tax = (amt * rate + 5000)//10000
    return tax
I'm not fully convinced that you cannot do all the calculations in floating point, but I am convinced that I can't figure it out.


> IEEE 64-bit floating point gives 2.1149999999999998, which is hard to get to round to 2.12 without breaking a bunch of other cases.

Your issue is with how to print the float, not with the precision of fp. For instance, `21.15 * 0.1` can be printed as either 2.115 or 2.12 depending on how many decimal digits of precision you set in your print function. I managed to get those results with printf using `%.3f` and `%.2f`, respectively.

To produce a one cent (0.0x) error with the default FP rounding, it takes more than a quadrillion operations. Each operation can only introduce an error of (1*10^-17)/2.

The "you shouldn't be using float to do monetary computation" is likely one the most spread float point misinformation.

The issue with your other examples is that you are rounding the data (and therefore discarding information). If you don't do any manual rounding, the result should be correct (I haven't tested it, though).


> Your issue is with how to print the float, not with the precision of fp. For instance, `21.15 * 0.1` can be printed as either 2.115 or 2.12 depending on how many decimal digits of precision you set in your print function. I managed to get those results with printf using `%.3f` and `%.2f`, respectively.

I get 2.115 with %.3f and 2.11 with %.2f. Here's my test program. Same result on my Mac with clang and my Debian 8 server with gcc.

  #include <stdio.h>
  
  double tax_on(double amt, double rate);
  
  int main(void)
  {
      double amt = 21.15;
      double rate = 0.1;
      double tax = tax_on(amt, rate);
      printf("%.3f\n", tax);
      printf("%.2f\n", tax);
      return 0;
  }
  
  double tax_on(double amt, double rate)
  {
      return amt * rate;
  }


The thing is that if 2.115 represents a calculated dollar figure, such as the value of some transaction or the cost of something or whatever, then we should round it to 2.12. (Unless we are working in a financial domain that deals with fractions of a cent.) Now in floating-point, we don't have the exact value 2.12, but we have something that is extremely close. So close that if we happen to print it to %.3f, we better get 2.120, and if we print it to %.4f, we better see 2.1200.

That some monetary calculation works out to $2.115 (and is left that way) instead of being correctly rounded to $2.12 doesn't add up to a valid argument against using floating-point for money.

I think piadodjanho does have a point there in the grandparent comment; "don't use floating-point for money" may just be a repeated mantra that doesn't entirely hold water. If extremely accurate engineering and scientific calculations can be done with floating-point, surely we can get floating-point values to measure stacks of pennies with the proper care in the programming.


> If extremely accurate engineering and scientific calculations can be done with floating-point, surely we can get floating-point values to measure stacks of pennies with the proper care in the programming.

That was for a long time my position. I definitely have commented before either here or in /r/programming to the effect that floating point is fine for money as long as you are aware that it is not exact and not associative, and take that into account when doing your calculations.

Any intermediate result in a calculation chain might be off a tiny amount from the exact value, but if you just rounded to the nearest 0.01 before you accumulated enough error to be more than 0.005 off, you'd be fine.

I think that's probably true for addition of money amounts. If you have a large number of costs to add up, for example, you should be able to add thousands of them, round to nearest 0.01, and get the right result.
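A quick Python sanity check of that claim (the line-item amounts here are arbitrary):

  from decimal import Decimal

  costs = ["0.10", "19.99", "3.33"] * 1000                 # 3000 line items
  float_total = round(sum(float(c) for c in costs), 2)
  exact_total = sum(Decimal(c) for c in costs)
  print(float_total, exact_total)                          # 23420.0 23420.00 -- they agree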

But for tax calculations, such as 10% of $21.15, 0.1 x 21.15 = 2.1149999999999998 in 64-bit IEEE floating point, and rounding to the nearest 0.01 gives 2.11, not the 2.12 that we want. A call to fesetround(FE_UPWARD) makes that come out 2.115, and then rounding to the nearest 0.01 gives the desired 2.12.

Will FE_UPWARD make this work for all amounts and tax rates, or are there amounts and rates where we need FE_TONEAREST or FE_DOWNWARD? If so, how do we tell which one we need? Like I said earlier:

> I'm not fully convinced that you cannot do all the calculations in floating point, but I am convinced that I can't figure it out.

PS: calculating tax in cents given double amt, rate, using this method:

  tax = amt * rate;
  cents_tax = round(100 * tax);
almost works if the rounding mode is FE_UPWARD. For all amounts from 0.01 through 99.99, and all tax rates from 0.01% through 10.99% in increments of 0.01% it works except for 3.75% of $67.60 and 7.5% of $33.80.


> but if you just rounded to the nearest 0.01 before you accumulated enough error to be more than 0.005 off, you'd be fine.

And in run-of-the-mill, everyday finance, there simply isn't enough calculation stuffed in between the concrete monetary points that are recorded in the ledger.

> If you have a large number of costs to add up, for example, you should be able to add thousands of them, round to nearest 0.01, and get the right result.

Exactly.

> But for tax calculations, such as 10% of $21.15, 0.1 x 21.15 = 2.1149999999999998 in 64-bit IEEE floating point, and rounding to the nearest 0.01 gives 2.11, not the 2.12 that we want.

This problem will be there even if we use integers for the currency amounts, but floating-point only for these fractional calculations.

Luckily for us Canadians, I'm pretty sure the Canada Customs and Revenue Agency won't care which way you call this rounding. They also don't collect or refund overall discrepancies of less than around two dollars in a single tax return. I think I've been mostly rounding taxes down over the years, and tax credits up. E.g. if a tax credit is $235.981..., I make it 235.99.

The myth that has been foisted on programmers is that if you use floating-point for numbers, the actual ledgers won't balance, and sum totals of columns of figures will appear incorrect if verified by pencil-and-paper arithmetic. That will certainly be true if the math is done very carelessly; and it's true that it's easier to get it right with less care using integers.

A percentage calculation whose rounding is called the wrong direction will, in and of itself, not cause such a problem. E.g. if we split some sum of money into two complementary percentages, we can do it such that the two add up to the original.

You have to be careful not to do this as two independent percentages. Like, don't take 10% of 21.15 and then 90% of 21.15, individually round them to a penny, and then expect them to add up to 21.15. It has to be centround(21.15 - centround(.1 * 21.15)) to get the 90% residue.
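A tiny sketch of the complementary split, done here with integer cents rather than the centround() formulation above:

  total_cents = 2115                      # $21.15
  part_a = round(total_cents * 0.10)      # the 10% share, rounded to a cent -> 212
  part_b = total_cents - part_a           # the 90% share is the residue, not round(total_cents * 0.90)
  assert part_a + part_b == total_cents   # holds by construction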


The trick is that by default rounding happens using banker's rounding. Programming languages use this because this is what CPUs use. When you want to round your way, you need an extra digit and round manually:

    def tax_f4(amt, rate):
      tax = round(amt * rate * 1000)
      return tax // 10 + (tax % 10 > 4)


That works for 10% of $21.15, giving the desired 212.

However, for 10.14% of $21.15, it gives 215, but it should be 214. Another example is 3.5% of $60.70, for which it gives 213 but correct is 212.


You're right. My remainder calculation in my code snippet is incorrect. It should've been a floating point remainder instead.

    import math
    
    def tax_f5(amt, rate):
        t = amt * rate * 1000
        return round(t) // 10 + ((math.fmod(t, 10.0) - 5.0) > -1e-7)
But then since there's now an epsilon, it raises the question of how many digits of precision the tax rates typically need. This is indeed a difficult problem.


Some exhaustive testing on all amounts from $0.01 through $999.99 in $0.01 increments and all taxes from 0.01% through 99.99% in increments of 0.01% shows that this is the minimum that does the trick (switching to C from Python for speed):

  unsigned long tax = (unsigned long)round(amt * rate * 1000000);
  return tax/(10000) + (fmod(tax, (double)(10000)) - (double)(5000) > -1e-5 ? 1 : 0);
(Yes, I see that I goofed in translating your code to C and typed -1e-5 instead of -1e-7. It looks like the results are the same with -1e-7.)

I also tested that up through $9999.99 with taxes up to 12%, and no problems.

Adding another 0 to the 1000000, the two 10000's, and the 5000 works. And another, and another. Past that it starts to fail, but not with the simple off-by-one failures you get when you don't use enough digits. These are way, way off, so I'm guessing it's running into some new class of problem. I haven't looked to see what that is yet.


> Storing money in floating point is fine. Just round to the nearest atomic unit when displaying.

Well, it's not just a display issue. In accounting, associativity and commutativity are important. People do care that `a + b + c - a == c + b` should evaluate to “true”.
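The classic demonstration, in Python:

  a, b, c = 0.1, 0.2, 0.3
  print((a + b) + c)                # 0.6000000000000001
  print(a + (b + c))                # 0.6
  print(a + b + c - a == c + b)     # False -- the "ledger" doesn't balance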


It appears you did not see the critical point in the above comment. "Performing arithmetic operations against money in floating point is the dangerous part, as error can accumulate beyond an atomic unit."


You’re right, I missed that. If you’re not going to do any arithmetic, you might as well store them as strings.


There's very little point in storing money in floats if you're not going to do arithmetic in floats; about the only use case I can think of is JavaScript and JSON APIs.


Aside from the cases you mentioned, there are other dynamic languages in which numbers are by default floating point. e.g. Lua. I agree though.


Pennies (or their equivalents) are not necessarily the smallest unit in a currency. Fractions of a penny are perfectly acceptable and even common.


Even decimal floating point is a bad idea (for dealing with money) since you still can't represent a subset of rational numbers without approximation and without introducing rounding error during some calculations. It's just a different subset than what binary floating point can represent without approximation.
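For example, with Python's decimal module (28 significant digits by default):

  from decimal import Decimal

  third = Decimal(1) / Decimal(3)   # 0.3333333333333333333333333333 (rounded)
  print(third * 3)                  # 0.9999999999999999999999999999, not 1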



