It would have been pretty interesting if this had NOT held.
It would have meant that even though "neural nets can NOT approximate (arbitrarily well, using the supremum metric) any continuous function", a neural network (the humans involved) was able to discover this limitation.
I find the idea of a neural net finding a limitation of a neural net to be interesting.
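(For reference, the positive result being discussed is the universal approximation theorem; Cybenko's 1989 sigmoidal version says, roughly, that for every continuous f on [0,1]^n and every eps > 0 there is a one-hidden-layer network

```latex
\[
  g(x) = \sum_{j=1}^{N} \alpha_j \,\sigma\!\left(w_j^\top x + b_j\right)
  \quad \text{such that} \quad
  \sup_{x \in [0,1]^n} \lvert f(x) - g(x) \rvert < \varepsilon ,
\]
```

i.e. approximation arbitrarily well in the supremum metric.)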
A "neural net" is what a computer scientist decided to call something he thought behaved somewhat like a now-dated abstract model of how individual neurons work, from a period when neuroscience was really in its infancy.
If we take the phrase neural network to mean, "a series of deeply and widely interconnected elements which assist or inhibit the transmission of signals to one another, activated by the inputs exceeding a threshold," then I think there's a pretty good amount of overlap.
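To make that definition concrete, here is a minimal sketch of one such element in Python (essentially a McCulloch-Pitts unit; the weights and threshold are made-up illustrative values, not from any particular library):

```python
# A minimal threshold element: positive weights "assist", negative
# weights "inhibit", and the unit activates when the combined input
# exceeds a threshold.
def threshold_unit(inputs, weights, threshold):
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Example: a 2-input unit wired to behave like AND.
print(threshold_unit([1, 1], [0.6, 0.6], threshold=1.0))  # -> 1
print(threshold_unit([1, 0], [0.6, 0.6], threshold=1.0))  # -> 0
```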
I understand there's a temporal signaling model (pulses at varying rates, not steady-state signals) and stochastic information (like random firing and a bunch of noise), but once we abstract out the axons, dendrites, and neurochemicals, is there another piece of functional equipment which drastically affects things? How does our simplified view, in which small, individually stupid pieces act in concert to produce complex behavior, differ from the real brain?
The point is more that you can't abstract neurons away into a simple "analog in, digital out" pseudo-transistor with fixed connections and expect that to describe how the brain works. The brain makes active use of all those details you are abstracting away, in ways that would make your model's predictions differ from reality.
No, absolutely not. I'm saying that a "neural net" a la McCulloch doesn't accurately model how computation is performed by the brain. You can't accurately model human thinking by recording the brain's connections as a classical neural net. That's all I'm saying.
You should explain in concrete terms why that is the case. I think it's apparent that the human brain is a much more complex and advanced neural network (Intel 4004 vs Intel i7, perhaps?), but to say it is not is interesting and I would like to hear why.
Perhaps "not even close" was a bit strong, but I'm talking about a neural net as a specific mathematical model here. To say that the brain "is" an instance of a particular mathematical model isn't even really meaningful. At best you could try to argue that the brain is "modelled well by <formalism> for <purpose>", although I think you'd lose that argument for all but the weakest of purposes.
I'm not a neurobiologist (if you are please correct/clarify!), but here are just a few of doubtless many significant differences as I understand them:
* Real neurons don't update in a bunch of coordinated discrete timesteps, as an ANN learning algorithm does. They can fire independently and in continuous time.
* Real neurons' activation behaviour is closer to sudden, delta-function-like spikes than to a smooth activation function or anything which lets them stay in a firing state for longer than a short pulse.
* The structure of the connection graph of neurons in the brain is vastly more complex than that of artificial neural net models, can change over time, isn't split into obvious layers from input to output, and there isn't a clearly identifiable error signal used to train it at the outputs, ...
* A real neuron is an incredibly complex biological system whose behaviour can be modulated in all sorts of ways, and which I expect would take a nightmarishly complicated set of PDEs to model with any degree of realism even as an isolated unit. To think we've captured all aspects of their behaviour relevant to human cognition with a simple weighted sum and a sigmoid (say) seems pretty naive. (A rough contrast between the two abstractions is sketched just below this list.)
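To illustrate the gap, here is a rough, illustrative Python sketch contrasting the ANN unit with a (still heavily simplified) leaky integrate-and-fire spiking neuron; every constant here is a made-up value chosen only to make the code run:

```python
import numpy as np

# The ANN abstraction: a weighted sum squashed through a smooth sigmoid,
# producing a steady output level.
def ann_unit(inputs, weights, bias):
    return 1.0 / (1.0 + np.exp(-(np.dot(weights, inputs) + bias)))

# A leaky integrate-and-fire neuron: the membrane potential leaks
# continuously, integrates input current over time, and emits a brief
# spike when it crosses threshold, then resets.
def lif_neuron(current, dt=0.001, tau=0.02, v_thresh=1.0, v_reset=0.0):
    v, spikes = 0.0, []
    for i in current:
        v += dt * (-v / tau + i)   # continuous-time leaky integration
        if v >= v_thresh:          # delta-like spike, not a sustained level
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return spikes

print(ann_unit([1.0, 0.5], [0.8, -0.2], 0.1))  # one smooth value
print(sum(lif_neuron([60.0] * 1000)))          # spike count over 1 s of input
```

Even this spiking version is a cartoon; it just makes visible how different "a level on a wire" is from "a train of pulses in time".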
Details aside though, the idea that you can start drawing deep philosophical conclusions about the nature of human thought, our ability to conceive of our own limitations, etc., based on an analogy between a complex biological system and a simple mathematical formalism which is at best loosely influenced by certain limited aspects of it, is just silly: one of those "not even wrong" sorts of statements.
I think it's worth pointing out that when computer science researchers play with different kinds of neural nets which expand beyond the simplistic version used in the OP, they still call them neural nets. I have seen abstract computer-driven neural nets that fire independently and in (more) continuous time, are plastic, lack clear layers, etc.
I think when you said "but I'm talking about a neural net as a specific mathematical model here", that is fair: you are asserting that while this model is similar to a biological one at some very primitive level, the model of neural net used here is so far removed from our biological implementation as to render any comparison non-meaningful. I believe that is possible, but we still haven't discussed evidence that this is the case.
I thought this might not be a "silly" question: consider that a Turing machine simulated with pencil and paper, given enough paper and time, is capable of doing any calculation your desktop computer can. It may take 10,000+ years and many operators' lives, but that is beside the point.
I was thinking that the comparison between the pencil-and-paper Turing machine and the desktop machine might be similar to that between this very basic neural net model and our own much more complex biological instance. The idea is that the two systems, while varying vastly in efficiency, might be equivalent in what they can theoretically compute.
I don't see how asking how much we can simplify our biological neural net and still have it be computationally equivalent to the original (even if less efficient) is a silly question.
(I don't like "silly"; it implies the question is not worth asking. Calling your students' questions silly is a good way to make them never ask another.)
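To make the pencil-and-paper point concrete, here is a toy Turing machine simulator in Python; the machine and its rule table are hypothetical, just enough to show how mechanical the formalism is, and how the same rules could be followed by hand:

```python
# A toy Turing machine simulator. Rules map (state, symbol) to
# (symbol to write, head move, next state); "_" is the blank symbol.
def run_tm(rules, tape, state="start", halt="halt", max_steps=10_000):
    tape = dict(enumerate(tape))  # sparse tape, blanks filled on demand
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = tape.get(head, "_")
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Hypothetical machine that flips every bit, then halts at the blank.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_tm(rules, "1011"))  # -> 0100_
```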
Because the neural network described there is just an abstraction that has mathematical and (with some modification) practical utility. Real neurons do not behave in the way the model behaves. Synapses have plasticity, not just neurons. I'm not sure if synaptic efficacy (i.e. how much pre-synaptic input influences output) is fully understood even now.
To put it simply, it is possible for ANNs to fail at something and for us to succeed at the same thing.
Right-- in the musculoskeletal system there are mechanisms that operate approximately like pulleys; so I think you can say "a person is a neural net" in exactly the same sense as you can say "a person is a pulley." Neural nets are models of systems that we see in the human body that have been simplified and adapted such that they can be used as tools. It's not clear what lessons about humans can be drawn from studying NNs.
You could say that a neural network found this limitation of neural networks, to the extent that neural networks could be defined in terms of mathematical logic. However, it's not guaranteed that the neural networks in our brain can be explained in these terms; the physical processes underlying them are not fully understood.