
I don't think there is a "right" answer. Defaulting to bignum makes no more sense than defaulting to float for inputs "1" and "3" if the operation to be performed on the next line is division. Symbolic doesn't make sense all of the time either: if it's a calculator app and the user enters "2*π", they probably don't want "2π" as the result.

If we're going to try to find a "right" answer from a language point of view, without knowing the exact program and use cases, then the most reasonable compromise is likely "error", because no types were specified on the constants or parsing functions. A quick sketch of the trade-off follows.
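
To make the trade-off concrete, here's a minimal sketch (Python, chosen purely for illustration; the thread isn't about any particular language) of how the same two inputs come out under different default representations:

  # Same parsed inputs "1" and "3", three different "defaults".
  from decimal import Decimal, getcontext
  from fractions import Fraction

  a, b = "1", "3"

  print(int(a) / int(b))            # binary float division: 0.3333333333333333
  print(Fraction(a) / Fraction(b))  # exact rational ("bignum"-style): 1/3
  getcontext().prec = 6
  print(Decimal(a) / Decimal(b))    # fixed-precision decimal: 0.333333

None of these is wrong in the abstract; each is surprising in some context, which is why erroring out until the programmer picks one seems like the least-bad default.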




There is a mathematically correct answer for this problem given the inputs' decimal representation. That's the correct answer for the math, period. What "good enough" behavior is for a system that uses numbers under the hood depends on context and is only something that the developer can know. Maybe they're doing 3D graphics and single-precision floats are fine; maybe they're doing accounting and they need accuracy to 100ths or 1000ths of a whole number.

The appropriate default is, I would argue, the one which preserves the mathematically correct answer (as closely as possible) in the majority of cases and lets coders override the default behavior if they want to specify the exact underlying numerical representation they desire (instead of it being automatic). That goes along with the "principle of least surprise", which is always a good de facto starting point for any human/computer interaction.
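
As a hedged illustration of that principle (Python again, only for concreteness), a default that silently uses binary floats violates least surprise for decimal inputs, while letting the coder explicitly pick the representation preserves the exact answer:

  # Hypothetical accounting-style example: the implicit binary float drifts
  # from the mathematically correct result, the explicitly chosen decimal
  # type keeps it exact to the cent.
  from decimal import Decimal

  price, qty = "0.10", 3

  print(float(price) * qty)    # 0.30000000000000004 -- surprising default
  print(Decimal(price) * qty)  # 0.30 -- representation chosen by the coder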





