Machine learning algorithms used to decode and enhance human memory (wired.com)
217 points by lxm on March 3, 2018 | 47 comments



This study is interesting, but it's not really AI and it's not really novel.

The researchers fit a regression to predict word recall from high-frequency EEG activity when memorizing the word. We've known for several years that high-frequency activity predicts memory success, so this part isn't new.
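Concretely, that decoding step amounts to a binary classifier on band-power features. A minimal sketch of that kind of fit (synthetic data and a made-up feature layout, not the paper's code):

    # Sketch of "predict recall from high-frequency power": synthetic band-power
    # features per electrode (X) and a 0/1 recalled label per study word (y).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 64))        # 1000 study words x 64 electrode features
    y = rng.integers(0, 2, size=1000)      # 1 = later recalled, 0 = forgotten

    clf = LogisticRegression(max_iter=1000).fit(X, y)
    p_recall = clf.predict_proba(X)[:, 1]  # per-word probability of recall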

In addition, several papers have tried to improve memory through high-frequency stimulation from brain implants, with various results. This paper proposes "closed-loop" stimulation, delivering stimulation only when the classifier predicts failure. They find that closed-loop is effective.

What the authors really want to claim is that closed-loop is more effective than open-loop, because otherwise their fancy "AI" classifier is useless. Surprisingly, this study does not compare closed-loop vs. open-loop.


I'm sad that the AI acronym has become overused and lost its credibility. Back in 2004 even the expression "Expert System" was used warily, and only when appropriate. The way this is going we're going to have AI toasters by the end of the year.


> The way this is going we're going to have AI toasters by the end of the year.

I'd be surprised if they don't exist already. We already have AI rice cookers: "Zojirushi's top-of-the-line Induction Heating Pressure Rice Cooker & Warmer uses pressurized cooking and AI (Artificial Intelligence) to cook perfect rice." -- from https://www.zojirushi.com/app/product/npnvc


I realize this is somewhat ridiculous, but I actually found their FAQ [0] and the product very interesting.

The term "AI" has become somewhat meaningless, but in this case they appear to be adjusting cooking time based on previous results. I'd guess they are probably adjusting a couple of parameters.

My basic understanding of how rice cookers work is that they essentially apply full heating power until all the water has boiled away or been absorbed. They know when this happens by monitoring the temperature: it won't rise above 100 degrees until all the water is gone. At that point they shut off.
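For what it's worth, a toy sketch of that shut-off logic (the 100-degree plateau trick); read_temp and set_heater are made-up hardware calls:

    # Toy sketch: full power until the pot temperature climbs past boiling,
    # which only happens once the free water is gone, then shut off.
    import time

    def cook(read_temp, set_heater, poll_s=5):
        set_heater(True)                 # full power while water remains
        while read_temp() <= 102.0:      # temperature sits near 100 C until dry
            time.sleep(poll_s)
        set_heater(False)                # water gone -> temperature spikes -> stop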

I guess more "intelligent" rice cookers can do a little more than this: maybe if they see that it's consistently taking less time than expected to cook the rice, they can heat at a lower temperature at the start to aid water absorption, or something? I'd be interested in knowing more.

[0] https://www.zojirushi.com/app/faq/rice-cookers


There is kind of a Gresham's Law with these buzzwords, where if you actually have new and interesting ideas around them, it's much harder to get people to take you seriously.


Every new faculty candidate we’ve had this term has mentioned wanting to apply AI or ML to their future research.

They’re mostly doing systems, so it makes exactly 0 sense.


>> "we're going to have AI toasters by the end of the year."

I hope not. It seems they can be a real pain in the ass:

Red Dwarf toaster: https://www.youtube.com/watch?v=LRq_SAuQDec


AI in toasters is a better use case than many of the ones being considered or publicized.

I would welcome a toaster that let me say "too burnt" or "too raw" or "just right" after each toasting, adjusted the cooking time and temperature accordingly, and generalized well to new kinds of bread and such.
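Even a one-number update rule would get most of the way there. A toy sketch (labels and step size are arbitrary):

    # Toy sketch: nudge the toast time after each piece of feedback.
    def update_toast_time(seconds, feedback, step=10):
        if feedback == "too burnt":
            return max(30, seconds - step)
        if feedback == "too raw":
            return seconds + step
        return seconds  # "just right"

    t = 120
    for fb in ["too raw", "too raw", "too burnt", "just right"]:
        t = update_toast_time(t, fb)
    print(t)  # 130 with these toy inputs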


Yesterday's fuzzy logic is today's AI?


fuzzy logic is a very ... fuzzy term.


A side note: statistical classification is machine learning, which is a subset of AI. Or at least, it was a subset of AI when it was first classified as such. Machine learning has a lot of overlap with artificial intelligence, and statistical classification is, pedantically, AI. On a more general note, AI is an extremely broad field and -- I am assuming this is where you're coming from -- is not limited to the whole General/Narrow/Vertical/Foo/Baz/Bar mumbo jumbo.


Semantics aside, AI is being name-dropped to drive clicks. It's misleading. Doing logistic regression is not noteworthy and has not been for many decades. Further, we already knew how to improve memory without the regression, so the study doesn't accomplish much.

The article misleads about the science being done, and people are better off not reading it. For example, as others have pointed out, regression is not a black box, and it is clear what we do and do not understand with this model.


Update on the slightly better HN title:

1. The regression model used has absolutely nothing to do with decoding memory. The only signal here is high-frequency EEG activity, which does not provide information on the structure of human memory.

2. There is no evidence that the regression model was needed to enhance memory.


could you clarify what you mean by an open-loop system, and why it must be compared with a closed-loop one?


Sure, those terms are used in the paper. In general, open-loop vs. closed-loop refer to systems without and with feedback, respectively. Previous studies already showed that high-frequency stimulation could improve memory. Those were open-loop because they didn't use EEG feedback; stimulation is always on. The alternative is stimulation only when the subject is predicted to forget the word, based on EEG feedback.

The obvious question is whether EEG-based stimulation makes any difference compared to always-on stimulation. It is very possible that the difference is negligible and that the EEG feedback doesn't matter.
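In rough pseudocode, the distinction is just this (a sketch only; predict_recall_prob and stimulate stand in for the EEG classifier and the implant):

    # Open loop: stimulate on every trial, no feedback.
    def open_loop(words, stimulate):
        for _ in words:
            stimulate()

    # Closed loop: read EEG, score with the classifier, stimulate only
    # when the word looks likely to be forgotten.
    def closed_loop(words, stimulate, predict_recall_prob, threshold=0.5):
        for w in words:
            if predict_recall_prob(w) < threshold:
                stimulate()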


Personally what I worry about is that there are too many conflicting, adaptive, self-correcting systems that we don't understand from first principles.

The body has its own internal "AI" that also responds and adapts to these incoming pulses over time. You could probably snort some speed and get the same effect described here ... but if you keep doing it, it won't keep working. Now replace the speed with an AI that generates the pulses and can adapt the dosage in response to the body's AI ... we just don't know what it would do long-term.

The real problem IMO is that the AI prescribing the dosage doesn't have any of the sensory inputs the human brain does. So it might boost working memory in a way that is maladaptive to the situation.

All in all -- I think these technologies could be quite interesting for allowing us to hyper-evolve out of our mental limitations that are still over-fitted to living in the jungle... but might make us weak as a species in the long run by forcing us to have sensory stimulations that are overfitted to a particular prescribed state that we label as "good".


It seems that almost every technological development will cause us to become "weak as a species," for the simple reason that these developments remove difficulties we have faced in the past.

Cars and bicycles damaged our endurance. Shoes softened our soles. If these technologies disappeared overnight, yes we would be worse off as a species, but that says nothing about the benefits of these technologies.

If these technologies improve our mental effectiveness, even if only within a specific type of sensory stimulation, it's likely that we would adapt our sensory perceptions to deliver these "optimized states," possibly through new technology, for an overall net gain in efficacy.


“If men learn this ... they will cease to exercise memory because they rely on that which is written” - Plato on literacy, ca. 350 BCE.


He was right to some extent.


He was absolutely right. It's just that the benefits of written language massively outweigh the small loss in memory performance we experience.

If your computer's OS and all your files were stored on a 64GB RAM disk, it would be fast, yes, but not very useful.


> The fact remains that while Kahana’s system can improve word recall in specific circumstances, he doesn’t know exactly how it’s improving function. That’s the nature of machine learning.

> Luckily, Kahana's team has thought this through, and some algorithms are easier to scrutinize than others. For this particular study, the researchers used a simple linear classifier, which allowed them to draw some inferences about how activity at individual electrodes might contribute to their model's ability to discriminate between patterns of brain activity.

Isn’t linear regression the easiest of all ML to understand? It’s neural networks that cause black boxes.


simple linear classifier... I'm guessing logistic regression? And yes, neither would be a black box.
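And the reason it's not a black box is that the fitted model is just one weight per feature plus a bias, so you can rank electrodes by weight directly. A sketch on synthetic data:

    # Sketch: rank electrode features by the magnitude of their fitted weights.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 8))                                # 500 trials x 8 electrodes
    y = (X[:, 2] + 0.5 * rng.normal(size=500) > 0).astype(int)   # electrode 2 drives recall

    clf = LogisticRegression().fit(X, y)
    for i in np.argsort(-np.abs(clf.coef_[0])):
        print(f"electrode {i}: weight {clf.coef_[0][i]:+.2f}")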


Better title: Tickling the brain with low-intensity electrical stimulation in a specific area can improve verbal short-term memory.

https://www.sciencedaily.com/releases/2018/01/180129134354.h...



The "AI" algorithm used in the paper is far from a black box. It's logistic regression, which is extremely well understood and has been used by statisticians and scientists for decades.


"The fact remains that while Kahana’s system can improve word recall in specific circumstances, he doesn’t know exactly how it’s improving function. That’s the nature of machine learning."

Seems like it's also the nature of electro-stimulus to the brain.

Is the real story here in ML/AI, or in advances regarding 'when is it helpful to shock your brain a bit vs when is it not'?


It's not totally clear to me whether there is a real story in ML/AI or neuroscience.

The authors used logistic regression to try to determine whether a subject would remember a word or not; the classifier did better than chance, but still did pretty badly, with an AUC of 0.61. Then, when the classifier said the probability of remembering the stimulus was less than 0.5, they sent some current through some electrodes. The set of electrodes to stimulate and the current were selected in consultation with a neurologist and fixed at the start of the session. They found that stimulation in the lateral temporal cortex was associated with a significant (but just barely) increase in recall compared to no stimulation or stimulation outside of lateral temporal cortex. (But it's unclear whether the decision to look at effects in LTC vs. outside of LTC was made a priori. If it was not, and many comparisons were conducted before arriving at this result, then the effect may not be statistically significant after adjusting for those comparisons.)
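For context on the AUC of 0.61: it means that, handed one remembered and one forgotten trial, the classifier ranks them correctly about 61% of the time (0.5 is chance). A sketch of the computation on made-up scores:

    # Sketch of what the AUC measures: the probability that a randomly chosen
    # recalled trial gets a higher score than a randomly chosen forgotten one.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(2)
    y_true = rng.integers(0, 2, size=10000)            # recalled vs. forgotten
    scores = 0.4 * y_true + rng.normal(size=10000)     # weakly informative scores
    print(roc_auc_score(y_true, scores))               # roughly 0.61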

Beyond the question of whether the outcome was selected post hoc, the main problem with the study is that, unless I have missed it, there is no control to demonstrate that selecting the trials on which to stimulate using the classifier is better than stimulating on every trial. This control seems necessary to demonstrate that the linear classifier (which is apparently now "artificial intelligence") is in any way useful. Otherwise, this paper has little scientific value, short of possibly providing another data point regarding the effect of stimulation upon memory.

Link to paper: https://www.nature.com/articles/s41467-017-02753-0#Sec19


You can get a bigger improvement simply by learning how to use your memory properly. The memory palace technique has saved my life.


A recent innovation we thought you should be aware of: increasing the spatio-temporal resolution of the underlying EEG data set as a pre-processing step, before feeding the recorded EEG data into machine learning algorithms. TRUUST has pioneered this and is seeing fantastic results pre-clinically on MEAs in drug discovery research for Fragile X and epilepsy indications; the technology was developed with epilepsy in mind, however. If anyone is interested, feel free to reach out to us at info@truustneuroimaging.com; below are related publications and resources. We thought it made more sense to enhance the data quality first rather than trying to optimize the crap out of the algorithms, since better data going in generally produces better outcomes; garbage in, garbage out.

Published paper in Journal for Neuroscience Methods: https://www.clearslide.com/view/mail?iID=3f3TTfMPJNBRhXhRDJD...

Published Poster with Scripps at SfN for Fragile X: https://www.clearslide.com/view/mail?iID=C5dp3gjmMWnMxKktk44...

Cool video showing what is possible with recorded EEG: https://www.youtube.com/watch?v=rhRwpAA1KeA


It would be great if people started referring to classification and regression algorithms as Statistical Learning instead of Machine Learning. But then no one would write an article like this I guess.


Statistical learning using gradient descent on functions with a special structure, which significantly improved classification accuracy on the tasks some people tend to associate with intelligence.


>But people—and institutional review boards—aren’t usually amenable to cracking open skulls in the name of science.

I feel as if future civilizations (if we get there) will look back on the reluctance described above the same way we now look at geocentrism:

Should the needs of the many outweigh the needs of the few?


Summary: Researchers collaborated with epilepsy patients, who already had electrodes implanted in their brains to monitor seizures, to improve the patients' memory. The electrodes are capable of both reading brain patterns and stimulating brain activity. ML algorithms learned what each patient's brain pattern looked like when they successfully memorized a word. The ML algorithms would then provide a jolt to mirror those successful-memorization brain patterns for words that the patient would historically not have memorized.


It will turn into a kind of pre-crime.


That's not what we want. The memory should be recalled from external brainz. I just want to be a professor and a doctor of everything and clearly remember every paper written about everything, complete with animated visuals and indexing.


A step toward building neural laces.


15%


Be careful what you wish for. Not all memories are good. Some are best left forgotten.

The irony is we keep failing to remember to consider unintended consequences.


Even if we consider them, what do you suggest? Stop all the research?


I would definitely suggest not using epilepsy patients as lab rats. First off, the article says "Machine learning is inherently notoriously inscrutable", which is extremely wrong and ignorant. Then they proceed to demonstrate how they essentially failed to force people to memorize useless words on a screen, by zapping their brains while not really knowing what they're doing or how anything works. This is NOT my idea of ethical, late-stage pre-market human research. This is something to do on rats, not people. This is the same as shock therapy for rebellious women in the '60s. It's horrible, has irreversible side effects, and you need to stop doing it. These are people, not toys for PhDs.


I think you can chill out a little. These are volunteers and this is speculative research. These people are already wired and receiving gross electric shocks to different nerve clusters in their brain to regulate epilepsy, so I doubt that more nuanced stimulation like this is much worse. It may lead to new breakthroughs, and may turn what is a handicap into a special ability for these people. They are pioneers.


Ok. So when do we stop chilling out?

The fact remains: not all memories are desirable.


Research with caution. (Guidelines/laws maybe?)


You never know how the product of research is going to be used. How do you imagine laws trying to solve that?


Laws? We don't need laws. We need researchers (and their underwriters) with ethics, foresight, and a sense of responsibility and morals.

The standard cop-out "Oh, I'm / we're not responsible for how X is used" is irresponsible. History shows us that.

We don't need laws. We simply need accountability. It's not rocket science ;)


Maybe we did know but were made to forget.


Did I say that?

What I suggest is we stop ignoring the fact that there are (almost) always unintended consequences.

Why do we keep pretending we are smarter than we really are?



