
Floating point math shouldn't be that scary. The rules are well defined in standards, and for many domains are the only realistic option for performance reasons.

I've spent most of my career writing trading systems that have executed hundreds of billions of dollars' worth of trades, and have never had any floating-point-related bugs.

Using some kind of fixed point math would be entirely inappropriate for most HFT or scientific computing applications.



You can certainly make trading systems that work using floating point, but there are just so many fewer edge cases to consider when using fixed point.

With fixed point and at least 2 decimal places, 10.01 + 0.01 is always exactly equal to 10.02. But with FP you may end up with something like 10.0199999999, and then you have to be extra careful anywhere you convert that to a string that it doesn't get truncated to 10.01. That could be logging (not great but maybe not the end of the world if that goes wrong), or you could be generating an order message and then it is a real problem. And either way, you have to take care every time you do that, as opposed to solving the problem once at the source, in the way the value is represented.
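To make the truncation hazard concrete, here's a small Python sketch (the 10.019999... value is illustrative, in the spirit of the comment above, not from any real feed):

```python
import math

# Fixed point: integer cents, exact by construction.
assert 1001 + 1 == 1002            # 10.01 + 0.01 == 10.02, always

# Floating point: the sum may land just below 10.02, and naive
# truncation to two decimals then produces the wrong price string.
x = 10.019999999999               # a value a float sum could plausibly yield
truncated = math.floor(x * 100) / 100
print(f"{truncated:.2f}")         # 10.01 -- wrong order price
```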

> Using some kind of fixed point math would be entirely inappropriate for most HFT or scientific computing applications.

In the case of HFT, this would have to depend very greatly on the particulars. I know the systems I write are almost never limited by arithmetical operations, either FP or integer.


I work on game engines, and the problem with floats isn't with small values like 10.01 but with large ones like 400,010.01; that's when the precision wildly varies.
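Game engines typically use 32-bit floats, where this effect is dramatic. A sketch using Python's `struct` to round-trip through single precision (an assumed stand-in for an engine's `float` type):

```python
import struct

def to_f32(x: float) -> float:
    """Round a Python double to the nearest 32-bit float and back."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Near 10, adjacent 32-bit floats are about 1e-6 apart, so 10.01 survives...
print(to_f32(10.01))

# ...but at 400,010 adjacent floats are 1/32 apart, so the 0.01
# fractional part is rounded away entirely.
print(to_f32(400_010.01))   # 400010.0
```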


The issue with floats is the mental model. The best way to think about them is like a ruler with many points clustered around 0 and exponentially fewer as the magnitude grows. Don't think of a float as a real value - assume that hardly any values are represented with perfect precision. Even "normal-ish" numbers like 10.1 are actually not in the set. When values are converted to strings, even in debuggers sometimes, they are often rounded, which throws people off further ("hey, the value is exactly 10.1 - it is right there in the debugger"). What you can count on, however, is that integers are represented with perfect precision up to a point (e.g. 2^53 - 1 for f64).
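Both halves of that mental model can be checked in a few lines of Python, using `Decimal` to expose the exact binary value hiding behind the rounded string:

```python
from decimal import Decimal

# Decimal(10.1) prints the exact value of the double nearest to 10.1,
# which is not 10.1 (it is something like 10.09999999...).
print(Decimal(10.1))
assert Decimal(10.1) != Decimal("10.1")

# Integers, by contrast, are exact up to 2**53 for a 64-bit float...
assert float(2**53 - 1) == 2**53 - 1

# ...after which adjacent doubles are more than 1 apart.
assert float(2**53) + 1.0 == float(2**53)
```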

The other "mental model" issue is associativity. In real arithmetic, a + (b + c) == (a + b) + c, but in floating point the two can differ due to rounding. This is where fp-precise vs fp-fast comes in. Let's not talk about 80-bit registers (though that used to be another thing to think about).
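The classic demonstration, where the grouping decides whether the 1.0 survives at all (values chosen to force the rounding, not from any particular workload):

```python
a, b, c = 1e16, -1e16, 1.0

left = (a + b) + c    # (1e16 - 1e16) + 1.0          -> 1.0
right = a + (b + c)   # -1e16 + 1.0 rounds to -1e16  -> 0.0
assert left == 1.0
assert right == 0.0
assert left != right   # associativity fails under rounding
```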


Lua is telling me 0.1 + 0.1 == 0.2, but 0.1 + 0.2 != 0.3. That's 64-bit precision. The issue is not precision, but that 1/10 is a repeating fraction in binary.
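The same behavior reproduces in Python, since both languages use 64-bit doubles:

```python
# Doubling a float is exact in binary, so this one holds...
assert 0.1 + 0.1 == 0.2

# ...but 0.1 and 0.2 carry two independent rounding errors that
# don't land exactly on the double nearest to 0.3.
assert 0.1 + 0.2 != 0.3
print(0.1 + 0.2)   # 0.30000000000000004
```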


Not an issue in Scheme or Common Lisp, or even in Forth when operating directly on rationals with custom words.


Not only that, but the precision loss accumulates. Multiply enough numbers with small inaccuracies and you wind up with numbers with large inaccuracies.


It depends on what you're doing. If your system is a linear regression on 30 features, you should probably use floating point. My recollection is that fixed point is prohibitively slower and has far less FOSS support.


I'm wondering if trading systems would run into the same issues as a bank or scientific calculation. You might not be making as many repeated calculations, and might not care if things are "off" by a tiny amount, because you're trading between money and securities, and the "loss" is part of your overhead. If a bank lost $0.01 after every 1 million transactions it would be a minor scandal.


Personally, I would be more concerned about something like determining whether the spread is more than a penny. Something like:

    if (ask - bid > 0.01) {
        // etc
    }
With floating point, I have to think about the following questions:

* What if the constant 0.01 is actually slightly greater than mathematical 0.01?
* What if the constant 0.01 is actually slightly less than mathematical 0.01?
* What if ask - bid is actually slightly greater than the mathematical result?
* What if ask - bid is actually slightly less than the mathematical result?

With floating point, that seemingly obvious code is anything but. With fixed point, you have none of those problems.

Granted, this only works for things that are priced in specific denominations (typically hundredths, thousandths, or ten thousandths), which is most securities.
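A hypothetical fixed-point version of the snippet above, with prices held as integer ticks (a one-cent tick is assumed here), where none of those four questions arise:

```python
# Prices in integer ticks of 0.01: 10.02 -> 1002 ticks.

def spread_exceeds_one_tick(ask_ticks: int, bid_ticks: int) -> bool:
    # Pure integer compare: no representation error, no epsilon fudge.
    return ask_ticks - bid_ticks > 1

assert spread_exceeds_one_tick(1002, 1000)        # spread 0.02
assert not spread_exceeds_one_tick(1002, 1001)    # spread exactly 0.01
```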


So the spread is 0.0099999 instead of 0.01. When will that difference matter?


It matters if the strategy is designed to do very different things depending on whether or not the offers are locked (when bid == ask, or spread is less than 0.01).

In this example, I’m talking about securities that are priced in whole cents. If you represent prices as floats, then it’s possible that the spread appears to be less (or greater) than 0.01 when it’s actually not, due to the inability of floats to exactly represent most real numbers.


But I'm still not understanding the real-world consequences. What will those be, exactly? Any good examples or case studies to look at?


Many trading strategies operate on very thin margins. Most of the time it's less than one cent per share, often as little as a tenth of a cent per share or less.

A different example: let's say that you're trying to buy some security, and you've determined that the maximum price you can pay and still be profitable is 10.01. If you mistakenly use an order price of 10.00, you'll probably get fewer shares than you wanted, possibly none. If you mistakenly use a price of 10.02, you may end up paying too much and then that trade ends up not being profitable. If you use a price of 10.0199999 (assuming it's even possible to represent such a price via whatever protocol you're using), either your broker or the exchange will likely reject the order for having an invalid price.
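One common mitigation (my addition, not something the commenter describes) is to snap the model's float price to the nearest valid tick at the boundary, before it goes anywhere near an order message:

```python
def to_order_price(model_price: float, tick: float = 0.01) -> str:
    """Snap a float model price to the nearest valid tick and format it."""
    ticks = round(model_price / tick)   # integer number of ticks
    return f"{ticks * tick:.2f}"

assert to_order_price(10.0199999) == "10.02"   # never sends 10.0199999
assert to_order_price(10.01) == "10.01"
```

This contains the float-to-decimal hazard at one choke point instead of every place a price is formatted, which is essentially the "solve it once at the source" argument made upthread.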


I can imagine something like: if (bid ask blah blah) { send order to buy 10 million of AAPL; }


All your price field messages are sent to the exchange and back via fixed point, so you are using fixed point for at least some of the process (unless you're targeting those few crypto exchanges that use fp prices).

If you need to be extremely fast (like FPGA fast), you don't waste compute transforming their fixed-point representation into floating point.


Sure, string encodings are used for most APIs and ultra HFT may pattern match on the raw bytes, but for regular HFT if you're doing much math, it's going to be floating point math.


We might have different definitions of "HFT".


> Using some kind of fixed point math would be entirely inappropriate for most HFT or scientific computing applications.

May I ask why? (generally curious)


For starters, it's giving up a lot of performance, since fixed-point isn't accelerated by hardware like floating-point is.


Isn't fixed point just integer?


Yes, but you're not going to have efficient transcendental functions implemented in hardware.


Ah okay, fair enough. But what sort of transcendental functions would you use for HFT?

I guess I understood GGGGP's comment about using fixed point for interacting with currency to be about accounting. I'd expect floating point to be used for trading algorithms, but that's mostly statistics and I presume you'd switch back to fixed point before making trades etc.


Yes, integer combined with bit-shifts.
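As a sketch of what "integer combined with bit-shifts" looks like, here's Q16.16 fixed point (16 integer bits, 16 fractional bits; the format choice is mine for illustration):

```python
FRAC_BITS = 16
ONE = 1 << FRAC_BITS             # 1.0 in Q16.16

def fx(x: float) -> int:
    """Encode a float as a Q16.16 fixed-point integer."""
    return round(x * ONE)

def fx_mul(a: int, b: int) -> int:
    """Multiply two Q16.16 values: multiply wide, then shift back."""
    return (a * b) >> FRAC_BITS

# 1.5 * 2.25 == 3.375, and all three values are exact in this format.
assert fx_mul(fx(1.5), fx(2.25)) == fx(3.375)
```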


The problem with fixed point is in its, well, fixed point. You assign a fixed number of bits to the fractional part of the number. This gives you the same absolute precision everywhere, but the relative precision (distance to the next highest or lowest number) is worse for small numbers - which is a problem, because those tend to be pretty important. It's just overall a less efficient use of the bit encoding space (not just performance-wise, but also in the accuracy of the results you get back). Remember that fixed point does not mean absence of rounding errors, and if you use binary fixed point, you still cannot represent many decimal fractions such as 0.1.
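Both halves of that claim can be checked with the same Q16.16 format used above (16 fractional bits, chosen for illustration):

```python
FRAC_BITS = 16
ONE = 1 << FRAC_BITS

# Binary fixed point cannot represent 0.1 exactly either:
# round(0.1 * 65536) == 6554, and 6554/65536 == 0.100006103515625.
assert round(0.1 * ONE) / ONE != 0.1

# And relative precision collapses for small values: the step size is a
# constant 1/65536, which is over 6% relative error down at 0.0001.
small = round(0.0001 * ONE)   # exact value would be 6.5536 steps
assert abs(small / ONE - 0.0001) / 0.0001 > 0.06
```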


With fixed point you either scale it up or use rationals.


Fundamentally, there is uncertainty associated with any physical measurement, and it is usually proportional to the magnitude being measured. As long as floating-point error is much smaller than this uncertainty, the results are equally predictive. Floating-point numbers bake these assumptions in.


It's the front of house/back of house distinction. Front of house should use fixed point, back of house should use floating point. Unless you're doing trading, you want really strict rules with regards to rounding and such, which are going to be easier to achieve with fixed point.


I don't think it is that clear. The split I think is between calculating settlement amounts which lead to real transfers of money and so should be fixed point whilst risk, pricing (thus trading) and valuation use models which need many calculations so need to be floating point.


How do you handle the lack of commutativity? I've always wondered about the practical implications.


I asked an ex-Bloomberg coder this question once after he told me he used floating points to represent currency all the time, and his response was along the lines of “unless you have blindingly-obvious problems like doing operations on near-zero numbers against very large numbers, these calculations are off by small amounts on their least-significant digits. Why would you waste the time or the electricity dealing with a discrepancy that’s not even worth the money to fix?”


Floating-point is completely commutative (ignoring NaN payloads).

It's the associativity law that it fails to uphold.


Nitpick: FP arithmetic is commutative. It's not associative.



