
But it is correct.

> It's easy to write a program that sums up 0.01 until the result is not equal to n * 0.01.

It's not easy to do that if you use a decimal floating-point type, as I recommended. Using C#'s decimal, for instance, it will take you somewhere in the neighborhood of 10^26 iterations. With a binary floating-point type, it takes fewer than 10.
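A minimal C# sketch of the contrast (three additions are already enough on the binary side; the printed form of the double depends on the runtime's formatting):

  using System;

  class Sums {
    static void Main() {
      // Binary floating point: the double nearest to 0.01 is slightly
      // above it, so three additions land just past 0.03.
      double d = 0.01 + 0.01 + 0.01;
      Console.WriteLine(d == 0.03);   // False
      Console.WriteLine(d);           // 0.030000000000000002 on .NET Core 3.0+

      // Decimal floating point: 0.01m is exact, so the sum is exact.
      decimal m = 0.01m + 0.01m + 0.01m;
      Console.WriteLine(m == 0.03m);  // True
    }
  }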


Of course with a decimal type there is no rounding issue; that's not what 0.30000000000000004 is about.

Many languages have no built-in decimal support, or at least it is not the default type. With a binary type, the rounding already becomes visible after 10959 additions of 1 cent:

  #include <stdbool.h>
  #include <stdio.h>
  #include <string.h>
  
  /* Compare the exact cent total (computed in integer arithmetic)
     against the float sum, both formatted to two decimal places. */
  bool compare(int cents, float sum) {
    char buf[20], floatbuf[24];
    int len;
    bool result;
  
    len = sprintf(buf, "%d", cents / 100);
    sprintf(buf + len, ".%02d", cents % 100);
    sprintf(floatbuf, "%.2f", sum);
  
    result = !strcmp(buf, floatbuf);
    if (!result)
      printf("Cents: %d, exact: %s, calculated %s\n", cents, buf, floatbuf);
    return result;
  }
  
  int main(void) {
    float cent = 0.01f, sum = 0.0f;
  
    /* Add one cent at a time until the rounded float
       no longer matches the exact total. */
    for (int i = 0; compare(i, sum); i++) {
      sum += cent;
    }
    return 0;
  }
Result:

  Cents: 10959, exact: 109.59, calculated 109.60
This is on 64-bit Intel, Linux, gcc, glibc. But I guess most machines use IEEE 754 floating point these days, so the result should not vary much.


That is simply not true. The C# decimal type doesn't accumulate errors when adding, unless you exceed its ~28 digits of precision. E.g. see here: https://rextester.com/RMHNNF58645
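In the same vein as the linked snippet, a minimal sketch (the million-iteration count is arbitrary):

  using System;

  class DecimalSum {
    static void Main() {
      decimal m = 0m;

      // A million additions of one cent: every intermediate sum is
      // exact, because 0.01m (and every multiple of it in range) is
      // exactly representable in decimal.
      for (int i = 0; i < 1000000; i++)
        m += 0.01m;

      Console.WriteLine(m);              // 10000.00
      Console.WriteLine(m == 10000.00m); // True
    }
  }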


> unless you exceed its ~28 digits of precision

Precisely. That's why I specified ~10^26 addition operations: the sum of 0.01s stays exact only until the running total exceeds decimal's ~28 significant digits.
