This thread is an object lesson in the point I'm making: people have forgotten everything we've learned about building ML-based products.
Parent comment doesn't understand the concept of expectation, and this comment is apparently unfamiliar with the fact that SotA for digit recognition [0] was already well above 90% back in the 90s. 90% accuracy on digit recognition is roughly what you get if you use logistic regression as your model.
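To make the baseline claim concrete, here is a minimal sketch of logistic regression as a digit classifier, using scikit-learn's bundled 8x8 digits dataset as a stand-in for MNIST (an assumption on my part; on full 28x28 MNIST a linear model typically lands around the low 90s, far below the ~99% convolutional SotA of the late 90s):

```python
# Minimal sketch: logistic regression as a baseline digit classifier.
# Uses scikit-learn's bundled 8x8 digits dataset, not MNIST itself.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# A plain linear model: no convolutions, no feature engineering.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
acc = clf.score(X_test, y_test)
print(f"test accuracy: {acc:.3f}")
```

The exact number depends on the dataset and split; the point is only that a linear baseline gets you into "looks impressive on paper" territory without getting anywhere near what a deployed zip-code reader needed.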
My point was that numbers that look good in research often aren't close to good enough for the real world. It's not that 90% works for zip codes; it's that by the 90s accuracy was closer to 99%. You have validated my point rather than refuted it.
0. https://en.wikipedia.org/wiki/MNIST_database