We've known about universal function approximators like polynomials and trig functions since the 1700s. Turing and Gödel were around in the 1910s and 1920s. The cybernetics movement was big in the '30s and '40s, and perceptrons in the '50s and '60s.
Taylor expansions do not exist for all functions. Furthermore, our characterization of infinity was still poor, so we didn't even have a solid notion of what it would mean for a formalism to compute all computable functions. The notion of a universal computer arguably didn't exist until Babbage.
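For concreteness, here is a sketch of the standard textbook counterexample (basic analysis, nothing specific to this thread): a function can be smooth everywhere and still not be represented by its Taylor series, and non-differentiable functions like |x| at 0 don't even have one to begin with.

```latex
% Textbook counterexample: f is smooth everywhere, but every derivative at 0
% vanishes, so its Maclaurin series is identically zero and agrees with f
% only at the single point x = 0.
f(x) =
\begin{cases}
  e^{-1/x^{2}} & x \neq 0, \\
  0            & x = 0,
\end{cases}
\qquad
f^{(n)}(0) = 0 \ \text{for all } n
\ \Longrightarrow\
\sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!}\, x^{n} \equiv 0 \neq f(x) \ \text{for } x \neq 0.
```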
I stand by my position that having a mathematical proof of computational universality is a significant difference that separates today from all prior eras that sought to understand the brain through contemporaneous technology.
> That’s not what I’m talking about. This is a basic analysis topic:
It's the same basic flaw: requiring continuous functions. Not all functions are continuous, therefore this is not sufficient.
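Assuming the "basic analysis topic" being referenced is the Weierstrass approximation theorem, here's a sketch of why the continuity requirement can't simply be dropped: polynomials are continuous, and a uniform limit of continuous functions is continuous, so no sequence of polynomials can uniformly approximate a step function across its jump.

```latex
% Sketch: the Heaviside step cannot be a uniform limit of polynomials on [-1, 1].
% If sup |H - p_n| -> 0 with each p_n continuous, then H would be continuous,
% contradicting the jump at 0. (Pointwise or L^2 approximation is a separate story.)
H(x) =
\begin{cases}
  0 & x < 0, \\
  1 & x \ge 0,
\end{cases}
\qquad
\sup_{x \in [-1,1]} |H(x) - p_n(x)| \to 0
\ \Longrightarrow\
H \ \text{continuous on } [-1,1] \ \text{(contradiction)}.
```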
> And you're still ignoring the cybernetics and perceptron movements I keep referring to, which were more than 100 years ago and informed by Turing.
What about them? As long as they're universal, they can all simulate brains. Anything after Church and Turing is just window dressing. Notice how none of these newer ideas claimed to change what could in principle be computed, only how much easier or more natural a given paradigm might be for simulating or creating brains.
This implies it works piecewise. That's also true of neural nets lol. You have to keep adding more neurons to match the granularity of whatever your discontinuities are (see the sketch after this reply).
It's also a different limitation than the one for Taylor series, which relies on differentiability.
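A minimal sketch of the "add neurons to match the discontinuities" point (assuming numpy; the two-ReLU "network" is built by hand rather than trained, purely to show how the fit to a step function tightens as the ramp around the jump gets finer):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def step_approx(x, width):
    """Two-ReLU-neuron approximation of the Heaviside step at 0.

    (relu(x + width/2) - relu(x - width/2)) / width is 0 left of the ramp,
    1 right of it, and linear in between. Shrinking `width` (i.e. spending
    sharper/more units near the discontinuity) tightens the fit everywhere
    except on the ever-narrower ramp itself.
    """
    return (relu(x + width / 2) - relu(x - width / 2)) / width

x = np.linspace(-1.0, 1.0, 2001)
target = (x >= 0).astype(float)  # the discontinuous target function

for width in (0.5, 0.1, 0.01):
    approx = step_approx(x, width)
    # Error is confined to the shrinking ramp, so the mean error drops with width.
    print(f"width={width:>5}: mean |err| = {np.mean(np.abs(approx - target)):.4f}")
```

The mean error falls roughly in proportion to the ramp width; the cost is that each additional or sharper discontinuity needs more units, which is exactly the granularity trade-off described above.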