There's no hypothesizing at this point. Neurons have been studied in the lab since – checks notes – 1873. Modern neural nets have largely followed Occam's Razor rather than precise biomimicry, mostly because of 'The Bitter Lesson': simple neural networks show more generalizable emergent behavior when scaled up than architectures that try to be clever about it. Dendrites, for example, have been shown to behave something like a multilayer perceptron on their own. So it's really perceptrons all the way down when it comes to brain circuitry.
https://en.wikipedia.org/wiki/Golgi%27s_method
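For anyone unsure what 'perceptron' means in this context, here's a minimal, illustrative sketch: a neuron reduced to "weighted sum of inputs, then a threshold", with two stacked layers to gesture at the dendrites-as-MLP result. The function names, weights, and inputs are all made up for the example; this isn't anyone's published model.

```python
import numpy as np

def perceptron(x, w, b):
    # Rosenblatt-style unit: weighted sum of inputs, then a hard threshold.
    return 1.0 if np.dot(w, x) + b > 0 else 0.0

def layer(x, W, b):
    # A bank of such units. Stacking a couple of these is the multilayer
    # perceptron that single-neuron dendritic trees roughly approximate.
    return np.where(W @ x + b > 0, 1.0, 0.0)

# Toy "synaptic" inputs and weights -- illustrative values only.
x = np.array([0.2, 0.9, 0.4])
w = np.array([0.5, -0.3, 0.8])
print(perceptron(x, w, b=-0.1))          # -> 1.0 (0.15 - 0.1 > 0, so it fires)

# Two stacked layers on the same inputs: "perceptrons all the way down."
W1 = np.array([[0.5, -0.3, 0.8],
               [0.1,  0.7, -0.2]])
b1 = np.array([-0.1, 0.0])
W2 = np.array([[1.0, 1.0]])
b2 = np.array([-1.5])
print(layer(layer(x, W1, b1), W2, b2))   # -> [1.] only if both hidden units fire
```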
> electrical current flow or gears
Perceptrons were built to be mathematical models of neurons. When gears were invented and electricity was first harnessed, there was no intention of modeling the mind or mimicking neurons whatsoever. That analogy carries no weight.
> Declaring that the problem is basically solved
I'm not making that declaration, for whatever you might mean by 'the problem' here. I'm just pointing out that 'understanding' is still (incorrectly) rooted in the belief that our own means of computation is exceptional. As far as anyone can tell, 'understanding' isn't substrate-dependent, and as far as we know, ours comes from neuronal computation.