The Future of Neuromorphic Computing (newyorker.com)
55 points by anthotny on Feb 16, 2017 | 56 comments



The reason AI has been so successful recently is that the research community has adopted a ruthlessly empirical philosophy: no idea, no matter how beautiful or interesting, is considered truly useful until it shows measurable results on some dataset. The reason neuromorphic computing gets such skepticism from AI researchers is that so far it has resisted any attempt at this kind of empiricism. No neuromorphic implementation has shown state-of-the-art results on any important problem.

If/When neuromorphic computers show groundbreaking results, the community will pivot quickly to using them. But expecting AI researchers to show deference to neuromorphic computing because it "mimics the brain" is to ignore the empirical philosophy that has led to AI's success.


To be fair, this whole Deep Learning renaissance was made possible, and kicked off, only after decades of research on multi-layer neural nets (going back to the 80s) by Hinton, LeCun, etc. They stuck to their chosen method despite its lack of great empirical results (the research community shunned NNs in the 90s in favor of SVMs because they worked better), because they believed it should and would work, and eventually it did. So a similar argument for 'basic research' could be made for neuromorphic computing.


Yes, I totally agree. Yann LeCun, Geoff Hinton, Jurgen Schmidhuber and others did unpopular work for a long time. And they deserve tons of credit for their perseverance which paid off.

Similarly, I think it's great that there are AI researchers working on techniques which are currently out of favor. It's important to have diversity of viewpoint.

What irritates me about neuromorphic computing is that much of the work I see publicized (including the work in this article) isn't being presented as basic research on a risky hypothesis. Instead it's presented as the future of AI, despite the current lack of any demonstrated utility, and the almost complete disconnect between the AI researchers building the future of AI and the neuromorphic community.

The burden of proof is always on the researcher to show utility, and if the neuromorphic computing community can do that, I'll be super excited! Until then, I'll be waiting for something measurable and concrete, and rolling my eyes at brain analogies.


> Yes, I totally agree. Yann LeCun, Geoff Hinton, Jurgen Schmidhuber and others did unpopular work for a long time.

...

> Until then, I'll be ... rolling my eyes at brain analogies.

Maybe you don't realize this, but these guys made more brain analogies than you can count over the same period to which you attribute their greatness. Meanwhile, they were attacked year after year by state-of-the-art land grabbers saying the same things you just did.

> isn't being presented as basic research on a risky hypothesis.

It is basic research, but it's not a risky hypothesis. Existing neuromorphic computers achieve 10^14 ops/s at 20 W. That's 5 Tops/Watt. The best GPUs currently achieve less than 200 Gops/Watt. Where is the risk in saying that a man-made neuromorphic chip can achieve more per dollar than a GPU? There is no risk, and suggesting that this field somehow has too much risk for advances to be celebrated is absolutely crazy.
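
To make the arithmetic explicit, here's a back-of-envelope sketch (the 10^14 ops/s and 20 W figures are the ones claimed above; the GPU number takes "less than 200 Gops/Watt" at face value):

    neuromorphic_ops_per_s = 1e14        # claimed throughput
    neuromorphic_watts = 20.0            # claimed power draw
    gpu_ops_per_joule = 200e9            # "less than 200 Gops/Watt", taken at face value

    neuromorphic_ops_per_joule = neuromorphic_ops_per_s / neuromorphic_watts
    print(neuromorphic_ops_per_joule / 1e12)               # 5.0  -> 5 Tops/W
    print(neuromorphic_ops_per_joule / gpu_ops_per_joule)  # 25.0 -> ~25x the GPU figure

Whether those two kinds of "ops" are even comparable is a separate question, but the headline ratio is roughly 25x.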


Non-neuromorphic (analog) deep learning chip startup here. We're forecasting AT LEAST ~50 TOPS/watt for inference.


Sure - I guess it's productive for me to answer why this doesn't disagree with my comment. By the time you get the software to hook up that kind of low-bit-precision (READ: neuromorphic) compute performance with extreme communication-minimizing strategies (READ: neuromorphic), which will invariably require compute-colocated, persistent storage (READ: neuromorphic) in any type of general AI application, you're not exactly making the argument that neuromorphic chips are a bad idea.

We literally have to start taking neuromorphic to mean some silly semantics like "exactly like the brain in every possible way" in order to disagree with it.

Edit: also, to ground this discussion, there are extremely concrete reasons why current neural net architectures will NOT work with the above optimizations. That's the primary motivation for talking about "neuromorphic", or any other synonym you want to coin, as fundamentally different hardware. AI software people need to have a term for the hardware of the future, which simply won't be capable of running AlexNet well at all, in the same way that a GPU can't run CPU code well. I think the term "neuromorphic" to describe this hardware is as productive as any.


Which existing neuromorphic computers achieve 10^14 ops/s at 20 W? If you compare them to GPUs, those "ops" better be FP32 or at least FP16.

Also, you forgot to tell us what is that "extremely concrete reason why current neural net architectures will NOT work with the above optimizations".


>Which existing neuromorphic computers achieve 10^14 ops/s at 20 W? If you compare them to GPUs, those "ops" better be FP32 or at least FP16.

The comparison is of 3-bit neuromorphic synaptic ops against 8-bit Pascal ops. That factor is important (it means the neuromorphic ops are less useful), but it turns out to be dwarfed by the answer to your second question:

> Also, you forgot to tell us what is that "extremely concrete reason why current neural net architectures will NOT work with the above optimizations".

This is rather difficult to justify in this margin. But the idea is that proposals such as those above (50 Tops) tend to be optimistic about the efficiency of the raw compute ops, and they really don't have much to say about the costs of communication (e.g. reading from memory, transmitting along wires, storing in registers, using buses, etc.). It turns out that if you don't have good ways to reduce these costs directly (and there are some, such as changing out registers for SRAMs, but nothing like the 100x speedup from analog computing), you have to change the ratio of ops per bit*mm of communication per second. There are lots of easy ways to do that (e.g. just spin your ops over and over on the same data), but the real question is how to get useful intelligence out of your compute when it is data-starved. This is an open question, and, sadly, very few people are working on it compared to, say, low-bit-precision neural nets. But I predict this sentiment will change over the next few years.
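
To make the communication point concrete, here's a toy calculation (illustrative energies only: 20 fJ/op is what a 50 Tops/W claim implies, and a few tens of fJ per bit is a reasonable order of magnitude for a small on-chip SRAM read):

    # Effective efficiency once each op has to pay for the bits it moves.
    op_energy_fj = 20.0     # implied by 50 Tops/W: 1 W / 50e12 ops/s = 20 fJ per op
    bit_energy_fj = 25.0    # order-of-magnitude cost of reading one bit from a small SRAM

    def effective_tops_per_w(bits_moved_per_op):
        total_fj = op_energy_fj + bits_moved_per_op * bit_energy_fj
        return 1e15 / total_fj / 1e12          # fJ/op -> ops per joule -> Tops/W

    for bits in (0, 1, 6, 16):                 # 6 ~ two fresh 3-bit operands, 16 ~ two INT8 operands
        print(bits, round(effective_tops_per_w(bits), 1))
    # 0 -> 50.0, 1 -> 22.2, 6 -> 5.9, 16 -> 2.4

The headline number only survives if operands are heavily reused or stored next to the compute, which is exactly the ops-per-bit-of-communication ratio I'm talking about.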

Edit for below: no one is suggesting 50 Tops/W hardware running AlexNet software, to my knowledge (though I would love to hear what they are proposing to run at that efficiency). Nvidia, among others, is squeezing efficiency for CV applications with current software, but this comes at the cost of generality (it's unlikely the communication tradeoffs they're making on that chip will make sense for generic AI research), and further improvements will rely on broader software changes, especially revolving around reduced communication. There are a lot of interesting ways to reduce communication without sacrificing performance, such as using smaller matrix sizes, which would reverse the state-of-the-art trends.


Regarding your first answer, it sounds like you're doing an apples-to-oranges comparison here. What are those "synaptic ops"? The Xavier board is announced to be capable of 30 Tops (INT8) at 30 W, so even if your neuromorphic chip does 100 Tops at 20 W, assuming for a second those ops are equivalent to INT3 operations, this makes them very similar in efficiency.
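
Putting the two sets of numbers side by side (the bit-width scaling at the end is a deliberately crude, linear normalization, just to make the "very similar" point concrete; reasonable people can argue for a steeper one):

    xavier_tops_per_w = 30 / 30      # 30 Tops INT8 at 30 W   -> 1.0 Tops/W
    neuro_tops_per_w = 100 / 20      # 10^14 ops/s at 20 W    -> 5.0 Tops/W (3-bit synaptic ops)

    # Crude normalization: count a 3-bit op as 3/8 of an INT8 op
    neuro_int8_equiv = neuro_tops_per_w * 3 / 8
    print(xavier_tops_per_w, neuro_tops_per_w, neuro_int8_equiv)   # 1.0  5.0  1.875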

And you still haven't answered my second question: what is the reason the future neuromorphic chips won't be able to run current neural net architectures?

I'm not even sure what you are talking about at the end of your comment. The 50 Tops/W figure was promised for an analog chip, designed to run modern DL algorithms. Sounds pretty reasonable, and I don't see how your arguments apply to it. Are you saying we can't build an analog chip for DL? Why does it have to be data-starved?


Our hardware can run AlexNet...


In an integrated system at 50 tops/watt? How are you going to even access memory at less than 20 fJ per op? Like, you're specifically trying to hide the catch here. If we were to take you at face value, we'd have to also believe that Nvidia is working on an energy optimized system that is 50x worse for no good reason.

For reference, reading 1 bit from a very small 1.5 kbit SRAM, which is much cheaper than the register caches in a GPU, costs more than 25 fJ per bit you read.


So this is locked up in "secret sauce". But as a hint, the analog aspect can be exploited.


Look, it sounds like you're implying compute-colocated storage in the analog properties of your system (which is exactly what a synaptic weight is, btw), on top of using extremely low bit precision. So explicitly calling your system totally non-neuromorphic is a little deceiving. But even then, I find the idea that you're going to be running the AlexNet communication protocol to pass information around in your system a little strange. If you're doing anything like passing digitized inputs through a fixed analog convolution, then you're not going to beat the SRAM limit, which means that instead you have in mind keeping the data analog at all times, passing it through an increasing length of analog pipelines. Even if you get this working, I'm quite skeptical that by the time you have a complete system, you'll have reduced communication costs by even half the reduction you achieve in computation costs on a log scale. It's of course possible that I'm wrong there (and my entire argument hinges on the hypothesis that computation costs will fall faster than communication costs, which is true for CMOS but may be less true for optical), but this is really the only projection on which we disagree. If I'm right, then regardless of whether you can hit 50 Tops (or any value) on AlexNet, you'd be foolish not to reoptimize the architecture to reduce the communication/compute ratio anyway.


Oh, I see what you meant now. Yes, when processing large amounts of data (e.g. HD video) on an analog chip, DRAM-to-SRAM data transfer can potentially be a significant fraction of the overall energy consumption. However, if this becomes a bottleneck, you can grab the analog input signal directly (e.g. current from a CCD), and this will reduce the communication costs dramatically (I don't have the numbers, but I believe Carver Mead built something called a "Silicon Retina" in the 80s, so you can look it up).

Power consumption is not the only reason to switch to analog. Density and speed are just as important for AI applications.


I should clarify, once data enters the chip, we provide 50 tops/W. The transfer from dram is not included.


I never understand the odd advantage that brains are assumed to have over machines when comparing power consumption.

>... AlphaGo ... was able to beat a world-champion human player of Go, but only after it had trained ... running on approximately a million watts. (Its opponent’s brain, by contrast, would have been about fifty thousand times more energy-thrifty, consuming twenty watts.)

A human brain has a severe limitation, though: it can't consume more or less energy even if we wanted it to. AlphaGo could double, triple, etc its power consumption and expect to improve its performance.

The brain also took decades to train. Computers also have the advantage of being identical. You can't train just any brain to be a master-level Go player.

I just don't see brains as the high watermark of intelligence. They occupy a very specific niche in what I assume is a vast unbounded landscape of possible intelligences.


> The brain also took decades to train.

The brain of an insect doesn't take decades to train, and we're currently unable to match its capabilities, either.

> I just don't see brains as the high watermark of intelligence. They occupy a very specific niche in what I assume is a vast unbounded landscape of possible intelligences.

That is a hypothetical claim because we don't know what intelligence is. Surely, some algorithms are much better at some tasks than the human brain, but that has been the case since the advent of computing, and it does not make them intelligent.

Intelligence, or how we would currently define it colloquially and imprecisely, is an algorithm or a class of algorithms with some specific capabilities. Could those capabilities be taken further than the human brain's? We certainly can't say that they cannot, but it's not obvious that they can, either. The only kind of intelligence we know, our own, comes with a host of disadvantages that may be due to features of the particular algorithm employed by the brain and/or to limitations of the hardware, but they could possibly be essential to intelligence itself. Who knows, maybe an intelligence with access to more powerful hardware would be more prone to incapacitating boredom and depression or other kinds of mental illness. This is just one hypothetical possibility, but given how limited our understanding of intelligence is, there are plenty of possible roadblocks ahead.

Even if a higher intelligence than humans' is possible, its hypothetical achievements are uncertain. Some of the greatest problems encountered by humans are not constrained by intelligence but by resources and observations, and others (e.g. politics) are limited by powers of persuasion (that also don't seem to be simply correlated with intelligence). For example, what's limiting theoretical physics isn't brains but access to experiments, and what's limiting certain optimization problems are computational limits, for which our own intelligence, at least, does not give good approximate solutions at all.


> The brain of an insect doesn't take decades to train, and we're currently unable to match its capabilities, either.

It's not particularly useful to simulate insects. We can far surpass some of their capabilities, but the goal is not to make an insect-robot, just like we didn't care to make a mechanical horse.


Both of these are under active development:

Robotic insects: https://en.wikipedia.org/wiki/RoboBee

Robotic horses: http://www.bostondynamics.com/robot_bigdog.html


Those are mostly interested in biomimetic movement rather than intelligence. They do have some applications, but I don't think they've convinced the world that mimicking organisms is necessarily optimal.


> We can far surpass some of their capabilities

We could far surpass many of the human brain's capabilities from the moment computers were invented. That's what computers are for. But it doesn't mean we've come close to human intelligence, and we're not very close to insect intelligence now. As others have commented, getting to insect intelligence may be quite useful. In any event, that would at least likely put us on the path. Right now, the fact that computers can do more and more things that only humans could does not mean those approaches are getting us close to human intelligence. It just means that we've discovered more useful algorithms, but those algorithms are not necessarily on the path to human intelligence any more than the AI of the 1950s.


Insect intelligence is already being emulated (poorly) because of its useful applications in swarm behavior.

https://www.technologyreview.com/s/603337/a-100-drone-swarm-...


>That is a hypothetical claim because we don't know what intelligence is.

>Intelligence, or how we would currently define it colloquially and imprecisely, is an algorithm or a class of algorithms with some specific capabilities.

To me it sounds like you are expanding an inability to describe how intelligent things function into an inability to even describe intelligence. Intelligence-as-algorithm does not match what I think are the best descriptions of intelligence.

Intelligence is an individual's ability to act towards achieving goals in uncertain environments.

How an individual functions (the way it acts, how it forms goals, how it models the environment) will determine how intelligent that individual is.

>The only kind of intelligence we know, our own, comes with a host of disadvantages that may be due to features of the particular algorithm employed by the brain and/or to limitations of the hardware, but they could possibly be essential to intelligence itself.

I think the idea that limitations to intelligence are inherent to intelligence itself is wrong. Not unsupported, but actually wrong.

>Even if a higher intelligence than humans' is possible, its hypothetical achievements are uncertain.

But you must admit more intelligent beings will certainly not have the same achievements as humans. What separates intelligent beings is the quality of their outcomes. By being more intelligent, they will have better outcomes.

Given the same constraints, they will act more intelligently and achieve better outcomes. And we know machine intelligence will certainly be less constrained than we are. Machines will be able to directly self-modify their intelligence, have infinite life spans, construct any sensors and actuators to expand their input/output signals, determine their level of energy consumption, etc.

I don't see how anyone could not concede that machine intelligence will obviously be superior to human intelligence. The space of intelligence is so much larger than the point we occupy.


The way you define intelligence makes the rest of your statements tautologically true, but makes the existence claim problematic. Of course there could hypothetically be algorithms that better deal with uncertain environments than humans. Those problems are computationally hard to solve precisely, and you must find a decent approximation. But it is not certain at all that we could find an algorithm that consistently and significantly outperforms the one used by human brains. In fact, there are theorems that it is impossible to improve optimization algorithms indefinitely. See https://en.wikipedia.org/wiki/No_free_lunch_in_search_and_op...
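
For reference, the no-free-lunch result says roughly the following (Wolpert and Macready's formulation, stated informally; see the linked article for the exact conditions):

    \sum_{f} P(d_m^y \mid f, m, a_1) = \sum_{f} P(d_m^y \mid f, m, a_2)

That is, summed over all possible objective functions f on a finite domain, the distribution of observed cost values d_m^y after m evaluations is the same for any two search algorithms a_1 and a_2, so an algorithm that outperforms another on one class of problems must underperform it on the complement.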

Now, suppose we don't find a better algorithm than the brain. Wouldn't better hardware make it significantly better? No, because not knowing the algorithm, we don't know how it scales with resources.

> I don't see how anyone could not concede that machine intelligence will obviously be superior to human intelligence.

Because of computational complexity. There's a lot we don't know about computation, but there are some things we do know. A layperson reading sci-fi novels could imagine an artificial brain smart enough to solve the halting problem, but even though it's easy to imagine, we know that's not actually possible, no matter what resources are at our disposal. This means there may be disappointing limitations to our ability to scale intelligence. But note that I'm not saying machine intelligence would not be superior at all; I'm saying it may not be as significantly superior as some would imagine. I don't see what beyond science-fiction makes you so sure of an obvious significant advantage.

> The space of intelligence is so much larger than the point we occupy.

Computational complexity makes this uncertain. It is possible that significantly better approximate results would require an exponential increase in resources. The hypothetical space of computation is also huge, but we know very little of it is actually reachable. If you take into account that the entire "space of intelligence" is incapable of efficiently solving NP-hard problems -- many of which are very important for science and technology -- you'll see that it must exist within parameters that may be pretty narrow.

Also, consider this: our own intelligence, which has been sufficient for art and science and technology that could one day create an intelligence far superior to itself according to you, was really created simply to hunt animals a little better than cats and wolves, and has not evolved much since the agricultural revolution. This may indicate that intelligence may be more a step function than a smooth one. Then how many steps are there past our own? Maybe you're right and there are some more, but for me it's hard to imagine such a significant jump over us without breaking unbreakable complexity barriers. Anyway, I don't think we have enough knowledge to dismiss the possibility that there are no more steps beyond us, and that any improvement, while important and useful, may be less than dramatic.


The crow, gibbon or even jumping spider did not take decades to train. Yet each is capable of feats, whether power is accounted for or not (but especially when it is), that no algorithm can match in terms of sensor fusion, online-learning sample efficiency, and adaptive decision making.

> AlphaGo could double, triple, etc its power consumption and expect to improve its performance.

The problem of heat makes this a suboptimal route. And in the specific case of AlphaGo, the gains from additional hardware eventually saturated.


>The crow, gibbon or even jumping spider did not take decades to train

Rather, they took millennia of evolution.


"Neuromorphic" = "Ornithopter of the mind"

Giving up on flapping wings was the first step to flight.


Indeed. Imagine that the Wright Flyer never happened, but that some time in the 1940s the progress in engines' specific thrust made a wing-flapping machine able to take off. That's where we are with machine learning.


It bugs me when people always talk about "neuromorphic computing" and explore crazy ideas that never work and look at them in awe, but when anyone brings up a somewhat novel architecture for deep learning (nets that are being used today, successfully...) people say "that'll never work".

For example, our startup uses analog computing to achieve accuracy roughly equivalent to that of digital circuits, yet we're told that we're crazy? Meanwhile, people dreaming about memristors are showered with grants and money....


You're from Isocline, right? Your GPS chip was really good.

But your SIMD chip will be much more impressive, right?


No, they are not from Isocline...

There are groups at UCSB and U-Tenn working on analog neural network technologies as well.


Could you please share a bit more about your chip and when it will be ready?


Not Isocline, but yes, our chip will be impressive :)


Tell me more :) ?


That sounds fascinating! Electrical, mechanical, hydraulic?


What do you mean?


I was asking what you meant! :) I've only heard of analog computers in an ancient context, so I have no idea what kinds people are working on nowadays.

You mentioned "chip" in another comment, so I'm guessing it's not mechanical/hydraulic.


Analog as in analog electrical signals, almost surely.


There's a lot of backlash and/or dismissiveness on HN every time someone brings up neuromorphic architectures, and I think it has a lot to do with the same defensiveness that people display when their political beliefs are challenged. When neuromorphic architectures start bearing fruit, programmers will no longer be so in-demand for configuring the machines, as it will shift the balance of power towards hardware engineers and hard scientists.


Computational neuroscientists have been using simplified models like these for decades, and in principle the operation of these 'neuromorphic' neurons can already be simulated in large numbers on 'ordinary' computers. So it's not clear at all what is to be gained. AFAIK, most of the neuroscience community considers TrueNorth a marketing ploy.
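
For a sense of scale, here's a minimal sketch of the kind of simplified spiking model (leaky integrate-and-fire) involved, with made-up parameters; a population like this steps through a second of simulated time in roughly a second of wall-clock time on a laptop:

    import numpy as np

    # Leaky integrate-and-fire population, forward-Euler stepped; parameters and
    # input drive are illustrative, not fit to any biology.
    n, dt, tau = 100_000, 1e-3, 20e-3   # neurons, timestep (s), membrane time constant (s)
    v_th, v_reset = 1.0, 0.0            # spike threshold, reset potential
    v = np.zeros(n)
    drive = 2.0 * np.random.rand(n)     # constant input current per neuron (arbitrary units)
    spike_count = np.zeros(n)

    for _ in range(1000):               # 1 second of simulated time
        v += (dt / tau) * (drive - v)   # leak toward the drive level
        fired = v >= v_th
        spike_count += fired
        v[fired] = v_reset              # reset the neurons that spiked

    print(spike_count.mean(), "spikes per neuron per second, on average")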

I don't think programmers should wait for these chips before they panic. They should already panic now, because deep learning works.


If readers of these articles got into the math behind them, I think they would realize that, currently, the brain is just a metaphor for a style of computation.

The article does state this towards the end: "Given the utter lack of consensus on how the brain actually works, these designs are more or less cartoons of what neuroscientists think might be happening."

We don't really know how the brain does what it does.


> the recent success of A.I.

I guess they mean the recent success, mostly due to modern hardware, of 1960s statistical clustering and classification algorithms that, for PR and historical purposes, some people call "AI", but which are not currently known to have any significant relationship with what we call intelligence.

When we achieve the capabilities of an insect, we'll be able to call our algorithms "AI" without getting red in the face, as we'd know there's a decent chance we're at least on the path to intelligence. Until then, let's just call them statistical learning. That wouldn't make them any less valuable, but it would represent them much more realistically and fairly.

It's funny how statistics was once considered the worst kind of lie, and now for some it's becoming synonymous with intelligence.


In the movie Terminator 2, a futuristic robot with advanced AI was developed by reverse engineering a futuristic chip.

In reality, we do not need to reverse engineer a chip. We can just reverse engineer our own brains.


I see nothing in these "neuromorphic" architectures but hogwash trying to bullshit governments into giving them money. There's no conceptual advancement offered by these computers that can't be simulated in MATLAB. Until the day we actually learn how neurons work, these will just be extremely premature optimizations.


These designs are advances in the field of computer architecture. They look at how the brain processes information for ideas to make hardware more efficient for some applications (such as pattern matching). Did you expect something more?


They use very rudimentary sketches that have little to do with real neurons. ANNs have been mimicking these things at a slightly lower level of detail since the 60s. We can do better pattern matching with ANNs.


I think you might be confused about terminology.

Neuromorphic computing is running some known ANN model directly in hardware. Why do we want it? Because ANN models in software work well for pattern matching, and we want to speed them up and make them more efficient.


Nope, ANNs and deep learning are not used by these boards (Neurogrid, Zeroth, TrueNorth).


https://arxiv.org/abs/1603.08270

They have been designed, and are being used, either for more efficient pattern matching or to speed up brain simulations (again, using known neuronal models).

You seem to expect something else from neuromorphic computing, why?


I stand with Yann LeCun's criticism of the article:

https://m.facebook.com/yann.lecun/posts/10152184295832143

> [the truenorth team had] to shoehorn a convnet on a chip that really wasn't designed for it. I mean, if the goal was to run a convnet at low power, they should have built a chip for that. The performance (and the accuracy) would be a lot better than this.

They used their 'neuromorphic' chip in an explicitly non-neuromorphic way, basically approximately mapping deep learning processes onto their chip. There is very little neuromorphicity (brain-likeness) about it (plasticity rules pulled out of their ass, for starters). And they still get less than state-of-the-art performance on most tasks!

I expect 'neuromorphic' to be used when sound neuroscience is used in large-scale implementations that allow us to actually simulate parts of the brain. Anything else we should call what it is: ANNs.


Well, none of those chips are brain-like at all. For example, TrueNorth is fully digital; it uses separate compute/memory blocks, signal multiplexing, signal encoding, routing protocols, an instruction set, etc., none of which is in any way related to what the brain is doing. What makes you think it's "neuromorphic"?

Whether you like it or not, get used to people calling their hardware ANN implementations "neuromorphic".


Nope, neuromorphic means the hardware would simulate the neurobiology, not ANNs. More practically, they would never get published in Science if their title were "printing ANNs in hardware".


TrueNorth hardware, as I illustrated, does not resemble neurobiology at all. There are no brain-like components there, on any level. Moreover, it can run ANN algorithms just as easily as more "neuromorphic" algorithms.

Pointing to how they chose to name it for publication is not exactly a very convincing argument to support your view, is it? :)


My view is they're useless. I don't get your point, sorry.


My point is architectures like TrueNorth are very impressive from the point of view of a computer engineer, and they are very efficient when running their intended applications (neural network algorithms). The fact that they are not "brain-like" does not make them any less impressive.


> very impressive from the point of view of a computer engineer

Maybe; I suppose about as much as a bitcoin ASIC is.



