That preprint seems to be missing every single figure. From the text,
> The key feature of the model neuron is its use of active dendrites and thousands of synapses, allowing the neuron to recognize hundreds of unique patterns in large populations of cells.
"active dendrites" and "thousands of synapses" sounds an awful like abstracting away a complex mathematical model to fit a particular definition of "single neuron".
They're researching the behaviour of real neurons by making computational models, rather than creating a "neural network" model for AI purposes, so the definition of "single neuron" they're using is meant to reflect the original meaning of the term.
But if the hypothesis is that thousands of little local functions can learn to recognize feature vectors, that's been a well-tested assumption in discriminative machine learning models for decades. Is there a biological twist to this finding?
It's mostly interesting biologically - it relates specifically to how a synapse's proximity to the main cell body affects its action, or more generally how the spatial configuration of a neuron's connections affects its computational function.
The gist seems to be that more distant synapses can't themselves initiate an action potential, or firing of the neuron, but can prime the neuron to fire in response to synapses closer to the cell body. This means that outer synapses can provide 'context', e.g. indicate that prior steps in a sequence have been recognised, while inner synapses can cause firing once that context is fulfilled, i.e. context + necessary condition (recognition of the most recent step in the sequence) = action potential.
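To make that concrete, here's a toy sketch of the idea (my own simplification in Python, not the model from the preprint): distal segments can only put the neuron into a 'primed' predictive state, while proximal input actually triggers the spike, so firing encodes context + most-recent input.

```python
class ToyNeuron:
    """Toy two-zone neuron, NOT the paper's model: distal segments set
    'context' by priming the cell; proximal input actually fires it."""

    def __init__(self, proximal, distal_segments, theta=3):
        self.proximal = set(proximal)                  # synapses near the soma
        self.distal_segments = [set(s) for s in distal_segments]
        self.theta = theta                             # per-zone match threshold
        self.primed = False                            # depolarized 'predictive' state

    def integrate_distal(self, active_cells):
        # Distal input alone can't fire the cell, only prime it.
        self.primed = any(len(seg & active_cells) >= self.theta
                          for seg in self.distal_segments)

    def integrate_proximal(self, active_cells):
        # Proximal input fires the cell; a primed cell is the 'predicted' case.
        if len(self.proximal & active_cells) >= self.theta:
            return "fires (predicted)" if self.primed else "fires (unpredicted)"
        return "silent"

# Distal pattern {10, 11, 12} primes the cell; proximal {0, 1, 2} then fires it.
n = ToyNeuron(proximal=[0, 1, 2, 3], distal_segments=[[10, 11, 12, 13]])
n.integrate_distal({10, 11, 12})
print(n.integrate_proximal({0, 1, 2}))  # -> fires (predicted)
```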
It's not computationally surprising, but it is a specific cellular mechanism.
This MIT Tech Review article is based on only a single, non-peer-reviewed preprint, with missing figures, written by non-experts. The preprint presents a "theory" with no data to support it. I guess it would take real work to do real reporting.
The figures are at the end of the document (this is pretty common for journal submission). The supporting data (in addition to the simulations) is in the form of references to existing experimental literature. Again, a pretty common format.
If we want real AI we need to spend more time reverse engineering the salient ingredients used by biology. I'm biased toward a functional approach to all of this: because these systems have to be robust to noise, I don't think you need to model every minute detail; you just need to capture the core computational elements. This looks like it could be an interesting discovery toward that end.
Maybe I'm over-interpreting, but how is it possible we didn't know something so basic?
I would have expected neurons to have been thoroughly instrumented and tested.
It's a great question, and I think this is a pretty good insight into the general state of the field of neuroscience at the cellular level. We know a lot of details about things going on inside neurons, and a lot of details about synapses. And these details are often specific to one of the hundreds of different types of neurons that are found in different parts of the brain. (e.g. http://www.neuroelectro.org/neuron/index/). And we do have methods to measure and manipulate various things in neurons in a dish. But the dynamics of a neuron's voltage are complicated, non-linear, and time-varying, and there are many parameters (e.g. the concentrations of numerous ionic species and other small molecules, many of which we probably don't even know about yet).
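To give a feel for why even simple abstractions are non-linear and time-varying, here's a minimal leaky integrate-and-fire sketch (a drastic simplification with illustrative parameters, not a model of any real cell type):

```python
import numpy as np

# Minimal leaky integrate-and-fire neuron. Parameters are illustrative
# textbook-style values (ms, mV), not measurements from any real cell.
dt, tau, v_rest, v_thresh, v_reset = 0.1, 10.0, -65.0, -50.0, -70.0

def simulate(input_drive, v0=v_rest):
    v, spike_times = v0, []
    for t, i_in in enumerate(input_drive):
        # Euler step of dv/dt = (-(v - v_rest) + I) / tau (input in mV-equivalents)
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:              # hard threshold: the non-linearity
            spike_times.append(t * dt)
            v = v_reset                # reset erases the recent voltage history
    return spike_times

# Constant drive for 100 ms -> regular spiking; real neurons add dozens of
# interacting conductances and time constants on top of this.
print(simulate(np.full(1000, 20.0)))
```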
Even then, going from these messy biological details (e.g. these 20 proteins assemble into a particular form and release this neurotransmitter from this synapse when X happens) to an explanation for how the neuron works at a more algorithmic level is hard, and the field isn't there yet. Assembling and abstracting the details is hard and it's one of the goals of theoretical neuroscience. The complexity is probably a symptom of our lack of understanding, rather than the cause of it, i.e. there probably are a lot of details that we can abstract away in a simpler functional model.
I haven't read the paper, and I'm only vaguely familiar with Hawkins et al.'s HTM work. But I disagree with the claim at the end of the TR piece that these predictions are imminently testable. Thinking up a specific experiment to try and disprove theoretical ideas is often the hardest part of experimental neuroscience.
In one sense we did know about the capabilities of neurons, or at least made similar predictions. How all of the ingredients come together to achieve this is not understood.
The computational properties of neurons are massively complex, e.g. local filtering in the dendrites, attenuation of depolarizations down the dendrite, the effect of local excitatory and inhibitory inputs, just to name a few. The authors of the manuscript present their interpretation of some of these properties to achieve the goal of spatio-temporal pattern recognition by neurons (I haven't read the full manuscript yet). Others have done the same.
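To pick out just one of those, passive attenuation: the textbook cable-theory result is that a steady-state depolarization decays roughly exponentially with distance from the synapse, V(x) ≈ V0 · exp(−x/λ), where λ is the membrane length constant. A back-of-the-envelope sketch (λ = 300 µm is an illustrative order of magnitude, and real dendrites add active conductances on top of this):

```python
import math

# Steady-state passive attenuation along an idealized infinite cable:
# V(x) = V0 * exp(-x / lam). lam = 300 um is an illustrative order of
# magnitude; the real value varies by neuron and by dendrite.
def surviving_fraction(distance_um, lam_um=300.0):
    return math.exp(-distance_um / lam_um)

for d in (50, 300, 900):  # synapse-to-soma distance in microns
    print(f"{d:4d} um out: {surviving_fraction(d):.0%} of the local EPSP reaches the soma")
# Distal synapses contribute far less direct depolarization at the soma,
# consistent with a modulatory/'context' role for distal input.
```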
These models are based on cortical columns, and therefore only pertain to cortical activity. It would be interesting to see how other parts of the brain could be modeled, like the thalamus and hippocampus.
Anyhow, it would also be great to see if this could reproduce the ocular dominance column patterns.
I'm sad that your comment has been downvoted. I was looking forward to a turn or two of the old "one thing you could do with a hundred billion of them is imagine what you could do with a hundred billion of them" etc.