The universal approximation theorem actually assumes that the nonlinearity is nonconstant, bounded, monotonically increasing, and continuous. I don't think floating-point nonlinearities technically satisfy all of that.
0) nonconstant. Yes. For most inputs the floating-point nonlinearity maps x => x (it is the identity on representable values), so it is not a constant.
1) bounded. Yes, the nonlinearities are bounded by the range of the floating-point format.
2) monotonically increasing. Yes. Consider a + b where fp(a + b) < a + b, in other words the result has been rounded down. Now examine fp(a + (b - db)) for some db > 0: it cannot be rounded up to a value higher than fp(a + b), so the floating-point rounding function fp must be monotonic for the operation +. A similar argument applies to multiplication, and thus to any linear function (see the first sketch below).
3) continuous. No. Well, you can't win at everything: no finite computer representation can be truly continuous. But it's a reasonable approximation for the purposes of the approximation theorem (see the second sketch below); otherwise ML on computers in general would be hopeless.
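
The monotonicity claim is easy to check empirically. A minimal sketch, assuming we model fp as rounding float64 down to float32 (NumPy assumed; the name fp_array is just for illustration): rounding may collapse neighboring inputs to the same value, but it should never reorder them.

    import numpy as np

    # fp: round float64 to the nearest float32 -- a stand-in for the
    # floating-point rounding map discussed above (name is illustrative).
    def fp_array(xs):
        return xs.astype(np.float32)

    # Sample a large batch of float64 inputs and sort them.
    xs = np.sort(np.random.default_rng(0).uniform(-1e6, 1e6, 100_000))

    # Rounding may map distinct neighbors to the same float32 value,
    # but it never swaps their order: the diffs stay >= 0.
    rounded = fp_array(xs)
    assert np.all(np.diff(rounded) >= 0), "rounding broke monotonicity"
    print("fp is monotone (non-decreasing) on", xs.size, "sorted samples")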
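And the discontinuity can be seen directly: fp is a step function, so nearby inputs collapse to the same float32 value and then jump by one ULP. A minimal sketch under the same float64-to-float32 assumption:

    import numpy as np

    # ULP of 1.0 in float32 (~1.19e-7): the gap to the next representable value.
    eps = float(np.spacing(np.float32(1.0)))

    # Step through 1.0 in quarter-ULP increments (kept in float64), then round.
    # The output plateaus at 1.0, then jumps to 1.0 + eps: a discontinuity.
    for k in range(6):
        x = 1.0 + k * eps / 4
        print(f"x = {x:.10f}  fp(x) = {np.float32(x):.10f}")

The jump is exactly one ULP, which is why fp can only ever approximate a continuous activation rather than be one.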