My sentence there is a bit unclear -- I meant it as an implication.
Arithmetic on fixed-width registers being O(1) is equivalent to saying that bit operations are O(1), which intuitively makes the most sense: after all, adding two uint64 values is not the same as adding two arbitrary-precision integers. But that's not always the model we pick; often we just assume that arithmetic is O(1) and sweep length considerations under the rug.
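A concrete case where the two models diverge (a minimal sketch in Python, whose built-in int is arbitrary precision): the usual analysis of iterative Fibonacci counts O(n) additions, but the numbers grow to Θ(n) bits, so counting bit operations the true cost is Θ(n²), not Θ(n).

```python
import time

def fib(n: int) -> int:
    """Iterative Fibonacci: O(n) additions under the unit-cost model,
    but the operands grow to Theta(n) bits, so the bit-operation cost
    is Theta(n^2)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Rough timing: if additions were really O(1), doubling n should roughly
# double the runtime; for large n it roughly quadruples instead.
for n in (100_000, 200_000, 400_000):
    t0 = time.perf_counter()
    fib(n)
    print(n, f"{time.perf_counter() - t0:.3f}s")
```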
Also, what does "fixed" mean? Is it like the word size in an actual computer architecture, where registers are 64 bits and that's all you'll ever get, or does it depend on the size of the input? The latter is the transdichotomous model, yet another option. Unsurprisingly, you can get different results with different computational models.
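For a taste of why the choice matters (a hedged sketch, emulating 64-bit words in Python): in the transdichotomous model the word size w is at least log n, so word-RAM algorithms can pack several short keys into one word and handle them with a constant number of word operations. The classic bit trick below tests all eight bytes of a word for zero at once, which is the flavor of parallelism that model permits.

```python
# SWAR ("SIMD within a register") trick: detect whether any of the eight
# bytes packed into a 64-bit word is zero, using O(1) word operations
# instead of eight separate byte comparisons.

MASK = (1 << 64) - 1  # emulate uint64 wraparound on Python's bigints

def has_zero_byte(x: int) -> bool:
    """True iff some byte of the 64-bit word x equals 0x00."""
    low = 0x0101010101010101   # 0x01 in every byte
    high = 0x8080808080808080  # 0x80 in every byte
    return ((x - low) & ~x & high & MASK) != 0

assert has_zero_byte(0x1122330044556677)      # one byte is 0x00
assert not has_zero_byte(0x1122334455667788)  # no zero byte
```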
This is not to say that asymptotic analysis is wrong, obviously, but it's a bit of a mess and it's important to remember what computational model the analysis was performed under.
> But that's not always the model we pick; often we just assume that arithmetic is O(1) and sweep length considerations under the rug.
So, I only have rudimentary undergraduate knowledge of complexity theory, but my materials (e.g. in theoretical CS, cryptography, and computational algebra) were always incredibly clear about the fact that arbitrary-precision integer arithmetic isn't constant-time; this is why distinctions such as weakly vs. strongly NP-complete exist. Do you have any examples of where people would treat arbitrary-length integer arithmetic as constant-time? (I think most people just never have to use arbitrary-precision arithmetic, so that's why they don't talk about it explicitly.)
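To make the weak/strong distinction concrete (a minimal sketch in Python): SUBSET-SUM is only weakly NP-complete because the textbook DP runs in O(n·T) time, which is polynomial in the magnitude of the target T but exponential in its bit length, so calling it "polynomial" hinges entirely on how you cost the numbers.

```python
def subset_sum(nums: list[int], target: int) -> bool:
    """Classic pseudo-polynomial DP for SUBSET-SUM.

    Runs in O(len(nums) * target) time: polynomial in the numeric value
    of target, but exponential in its bit length (a b-bit target can be
    as large as 2^b). That gap is exactly why SUBSET-SUM is only
    *weakly* NP-complete.
    """
    reachable = {0}  # sums achievable so far, all <= target
    for x in nums:
        reachable |= {s + x for s in reachable if s + x <= target}
    return target in reachable

assert subset_sum([3, 34, 4, 12, 5, 2], 9)       # 4 + 5 = 9
assert not subset_sum([3, 34, 4, 12, 5, 2], 30)  # no subset sums to 30
```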
You're right that computational complexity depends on the model and that this makes it nontrivial to use in practice. The theoreticians would sweep this under the rug with "these models are all polynomially equivalent to Turing machines" (which is what lets us state a question like P vs. NP precisely), but of course that doesn't help you in practice distinguish between a constant-time, linear, or quadratic algorithm.