Jeff Hawkins: Thousand Brains Theory of Intelligence [video] (lexfridman.com)
226 points by jonbaer on July 1, 2019 | 94 comments



I think the cortex is attractive to AI research because its highly uniform micro-anatomy suggests there is a simple algorithm for general intelligence waiting to be found within it. While intelligent behaviours appear to reside in the cortex, the limbic system is heavily involved in learning and cognition generally. Additionally, animals lacking a cortex, like birds, are still capable of many intelligent behaviours typically regarded as cortical. I wonder whether the cortex may be a kind of FPGA wired up by the limbic system, not naturally containing the machinery for general learning, and a red herring in the pursuit of general intelligence.


> While intelligent behaviours appear to reside in the cortex, the limbic system is heavily involved in learning and cognition generally.

We understand the limbic system to process motivation and emotion; could the limbic system's activation during cognition not be "explained away" by it being recruited to process the emotional or motivational aspects of sensory or memory data? (Just like how there is visual cortex recruitment when using visual imagination, etc.)


It's important to recognize that the cortex is a relatively late addition to the structure of the brain. Brains were learning about the world and how to behave appropriately long before some mammals developed such a large region there. It's much more likely that the core functionality needed to be an autonomous agent in the world is encoded in the limbic system (e.g. basal ganglia, hypothalamus, midbrain, and brainstem regions).


Certainly those regions are doing something in humans; and certainly those regions had those functions in species without a cortex. But it doesn't follow that, just because these regions are present in the human brain, they're the ultimate arbiters of the same functionality there.

Brains really do use "subsumption" (in the https://en.wikipedia.org/wiki/Subsumption_architecture sense) to accomplish various functions; there are motor signals that some earlier-evolved part of the brain would be emitting if it were the only thing "online" in the brain, but which are actively suppressed or "overridden" by another, later-evolved part of the brain, often a part that only "comes online" later in brain development. (Thus "early instincts" that disappear during development, like the infant diving reflex.) Often, these subsumed neural processes reappear when the region suppressing them is damaged, as in the normally-suppressed human lordosis reflex, or the normally-suppressed-after-infancy suckling reflex in most mammals.

There's no evidence that I know of that the limbic system and the cortex are in this sort of relationship; I'm just saying that this kind of relationship isn't unprecedented as a thing human brains do.

If such a relationship were to exist between the cortex and the limbic system, then both regions could be said to be "in charge" of cognition, sort of like how a television tuner and a VCR can both be in charge of the image being displayed: one state machine (the TV; the cortex) passes input through to another state machine (the VCR; the limbic system) only in some subset of its states (the right "channel"; the right arousal state).
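
A toy sketch of that gating idea (hypothetical classes, not a model of any real cortico-limbic circuit): the outer machine passes input through to the inner machine only when it is in the right state.

    # Toy illustration of one state machine gating another (hypothetical classes,
    # not a model of real cortico-limbic wiring).
    class Inner:
        """The gated machine (the "VCR" / the limbic system in the analogy)."""
        def __init__(self):
            self.total = 0
        def step(self, signal):
            self.total += signal
            return self.total

    class Outer:
        """The gating machine (the "TV" / the cortex in the analogy)."""
        def __init__(self, inner):
            self.inner = inner
            self.channel = "other"   # not on the right "channel" / arousal state yet
        def tune(self, channel):
            self.channel = channel
        def step(self, signal):
            # Input only reaches the inner machine in a subset of outer states.
            if self.channel == "vcr":
                return self.inner.step(signal)
            return None              # otherwise the inner machine never sees the input

    outer = Outer(Inner())
    print(outer.step(1))             # None: wrong state, input suppressed
    outer.tune("vcr")
    print(outer.step(1))             # 1: input passed through to the inner machine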


> There's no evidence that I know of that the limbic system and the cortex are in this sort of relationship

It's true that in mammals (possessing a neocortex) the cortex takes over various processes traditionally performed by the limbic system. Complete removal of the cortex in rats demonstrates that behaviour can revert to entirely limbic control: https://www.ncbi.nlm.nih.gov/pubmed/564358

While the cortex can definitely be / become "in charge" of complex intelligent behaviours, it may still need the brain's phylogenetically older machinery to bootstrap it.


>https://en.wikipedia.org/wiki/Subsumption_architecture

Looks like a kind of "deep" architecture:

"It does this by decomposing the complete behavior into sub-behaviors. These sub-behaviors are organized into a hierarchy of layers. Each layer implements a particular level of behavioral competence, and higher levels are able to subsume lower levels (= integrate/combine lower levels to a more comprehensive whole) in order to create viable behavior. For example, a robot's lowest layer could be "avoid an object". The second layer would be "wander around", which runs beneath the third layer "explore the world". Because a robot must have the ability to "avoid objects" in order to "wander around" effectively, the subsumption architecture creates a system in which the higher layers utilize the lower-level competencies. "

One can imagine how backpropagation-style training on "explore the world" or other high-level scenarios would result in the formation of "avoid objects" kernels at the lower levels, similar to how image-recognition deep nets produce Gabor-like kernels at the lower levels. We do have some theorems on the optimality of such deep networks for image recognition, and I wouldn't be surprised if similar optimality held for behavioral deep networks. Also, it seems that, as in the case of image deep nets, the deep architecture for behavior naturally allows for transfer learning by reusing the lower layers: just as one would reuse the low/mid layers of an image deep net, one would naturally reuse the lower "avoid objects" layers (and maybe some more complex aggregate behaviors from the "mid-levels") for other tasks.
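
As a rough sketch of that layering (hypothetical sensor names and behaviours, and simple priority arbitration rather than Brooks' actual suppression/inhibition wiring): the lowest "avoid objects" competence stays a self-contained, reusable layer that higher-level behaviours build on.

    # A toy, priority-based arbitration in the spirit of the quoted description.
    # (Brooks' real subsumption architecture wires suppression/inhibition between
    # layers; this just shows higher-level behaviour building on a reusable
    # low-level "avoid objects" competence.)

    def avoid_objects(sensors):
        """Lowest layer: reflexively steer away from anything too close."""
        if sensors["distance_ahead"] < 0.5:
            return "turn_left"
        return None  # no opinion, defer to other layers

    def explore(sensors):
        """Higher layer: head toward unexplored territory when it is known."""
        if sensors.get("unexplored_bearing") == "right":
            return "turn_right"
        return None

    def wander(sensors):
        """Fallback layer: just keep moving."""
        return "move_forward"

    # First layer with an opinion wins; the safety reflex sits on top of the stack.
    LAYERS = [avoid_objects, explore, wander]

    def act(sensors):
        for layer in LAYERS:
            command = layer(sensors)
            if command is not None:
                return command

    print(act({"distance_ahead": 2.0}))                                  # move_forward
    print(act({"distance_ahead": 2.0, "unexplored_bearing": "right"}))   # turn_right
    print(act({"distance_ahead": 0.3}))                                  # turn_left (avoidance wins)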


The architecture of the subsumption system is fairly well mapped: from basic optical edge detectors in the banks of the calcarine sulcus to higher levels of sophistication as you move forward (object recognition, numeracy, and vocabulary in the parietal areas, then motor function, then judgement and grammar in the frontal and prefrontal cortical areas). The dynamic of taking over or controlling traditionally limbic behavior (emotion) is an additional process, which I would characterize as a blend of subsumption and adaptation: the cortex acquires data and abstracts it, but it also does so as an adaptive response.


Exactly. I think when people talk about intelligent machines today, what they are most after is autonomous, robust, common-sense orientation and agency. It's not the ability to do algebra or reason logically, because we can actually already build systems that are quite good at that.

What we really want is a robot or a car that can orient itself in the absence of structured data and doesn't glitch out into the wall once it loses its objective. And even primitive animals are very good at that.


You're ignoring a lot of its function. The hippocampus (within the limbic system) is necessary for long-term memory. If your hippocampi are removed, you still act more or less the same, but you can't remember new events or form semantic memories. Attention is also an important function of the limbic system.


The famous case is that of Henry Molaison (HM): https://en.wikipedia.org/wiki/Henry_Molaison


I know the brain isn't a computer, but why do we think examining the structure is going to reveal an algorithm for intelligence? Isn't that like taking an iPhone, cutting it into slices and studying it with the hopes of finding how GarageBand works?


Funny you'd mention that: there was a study from Jonas and Kording [0] that treated a microprocessor as an organism and applied analytic methods used in neuroscience to see if they could figure out how it processes information.

[0] Jonas, E. and Kording, K.P., 2017. Could a neuroscientist understand a microprocessor? PLoS Computational Biology, 13(1), e1005268.

https://doi.org/10.1371/journal.pcbi.1005268


I think it is named after a 2002 paper, "Can a Biologist Fix a Radio?", which argued that the methods then current in biology were inadequate for understanding a living body. It was a plea to do more systems biology, but even though it is widely known, it has not changed the way biology is done much.

https://www.math.arizona.edu/~jwatkins/canabiologistfixaradi...


In computer science, once you understand a data structure, its algorithm is often obvious [1]. Maybe the brain will be like that too, i.e. once you get its structure, you get its algorithm for free.

[1] https://news.ycombinator.com/item?id=20047607
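
As a toy illustration of that point (my own example, not from the linked thread): once you know the data lives in a sorted array, the lookup algorithm practically writes itself.

    # If the structure is "a sorted list", the algorithm (binary search) falls out.
    def find(sorted_items, target):
        lo, hi = 0, len(sorted_items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if sorted_items[mid] == target:
                return mid
            if sorted_items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1

    print(find([2, 3, 5, 7, 11, 13], 7))   # 3
    print(find([2, 3, 5, 7, 11, 13], 6))   # -1 (not present)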


But examining the circuits of a computer doesn't tell you anything about a data structure, does it?

What would an iPhone schematic tell you about how GarageBand works?


Have you ever seen one of Professor Sussman's talks where he analyzes a circuit diagram?

For example, see Prof Sussman's 2011 StrangeLoop talk (circuit analysis example begins at ~25 min mark)...

We Really Don't Know How To Compute! https://www.infoq.com/presentations/We-Really-Dont-Know-How-...

If you have the hardware schematic or the physical hardware and enough time, you can figure out what the hardware does. You can determine what its constraints are, and if you understand it well enough (for simplicity's sake, let's say you understand the hardware up to the level of the engineers who designed it), you can tell what the hardware system can and can't do and what kind of code is required to make the hardware work. You can tell at a low level what the GarageBand developers had to work with when they designed their app. And once you know the required codes, you can write software to generate the codes to make it work. And if you're really good and have the right tools, you can analyze the hardware and/or model the data flows to determine what the optimal data structures must be, based on the hardware capacity constraints and data flow.

Google "reverse engineering hardware chip circuits" or watch Ken Shirriff's 2016 Hackaday talk...

Reading Silicon: How to Reverse Engineer Integrated Circuits https://www.youtube.com/watch?v=aHx-XUA6f9g


It would tell you that GarageBand was a sequence of processor opcodes that executed on the processor and had the ability to interface with a storage device, a video screen, and an audio processor, among other things, and that the opcodes for GarageBand were likely stored on said storage device.

Maybe you wouldn't understand GarageBand yet, but you'd have a solid set of next steps for your research.


Would the layout of transistors indicate that opcodes exist?


Or we could do it the way physicists design their experiments: smash two iPhones together at tremendous velocities and see what pops out. Can you imagine early anatomists taking the same approach because the Catholic church forbade cutting open the body?


But before we can examine the algorithms on an iPhone we need to understand the hardware, and cutting it open and looking at the structure is definitely a step on that path.


Most A.I. researchers believe the brain is a computer.


For people unfamiliar with the acronym (from Wikipedia): "A field-programmable gate array (FPGA) is an integrated circuit designed to be configured by a customer or a designer after manufacturing..."


> I wonder whether the cortex may be a kind of FPGA wired up by the limbic system, not naturally containing the machinery for general learning, and a red herring in the pursuit of general intelligence.

The challenge is to discover the nature of "general intelligence", I'd say. To me, it seems like general intelligence would not actually be a set of specific behaviors but rather a process that integrates, extends, mediates between, and connects specific behaviors. Probably whatever-does-general-intelligence would both program and be-programmed-by the limbic system and whatever other systems it relates to, and given this the cortex looks like a good candidate, since it accompanies the human ability to act "generally" (and it is quite possible that similar functionality might reside in a different system in different brains; see the comment by derefr: https://news.ycombinator.com/item?id=20328311).


I also find it interesting that birds don't have a neocortex: my pet parrot exhibits a few very intelligent behaviors.

The newer thousand brains theory feels close to correct to me. I used the older HTM for a fairly quick time-series anomaly detection experiment and it was generally promising. I would be curious how well a thousand-brains approach would work.
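
For anyone curious what that kind of experiment looks like in outline, here is a minimal sketch of the general prediction-error idea (my own toy code, not Numenta's HTM/NuPIC): a model predicts the next value, and the anomaly score is how badly the prediction missed.

    # Toy anomaly detection by prediction error (illustrative only; real HTM uses
    # sparse distributed representations and learned sequence memory).
    import statistics

    def rolling_prediction_anomalies(series, window=5, threshold=3.0):
        anomalies = []
        for t in range(window, len(series)):
            history = series[t - window:t]
            predicted = statistics.mean(history)           # naive stand-in "model"
            spread = statistics.pstdev(history) or 1e-9
            score = abs(series[t] - predicted) / spread    # how surprised the model is
            if score > threshold:
                anomalies.append((t, series[t], round(score, 1)))
        return anomalies

    data = [10, 11, 10, 12, 11, 10, 11, 50, 11, 10, 12]    # one obvious spike
    print(rolling_prediction_anomalies(data))              # flags the spike at index 7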


"Homolog of mammalian neocortex found in bird brain"

https://www.sciencedaily.com/releases/2012/10/121001151953.h...


Jeff and Subutai have been on this hobby horse for a very long time. They don't have much to show for it, and many of their ideas are controversial even within neuroscience (for example, some think that their venerated "cortical columns" are structures without meaningful functions). It is very easy to bet against them given the fact that nothing much has happened, and "theories" about the brain are a dime-a-dozen. I'm glad that they have the gumption to go all-in on these ideas, but I'm not expecting anything earth shattering anytime soon.


That's the blessing/curse of a completely self-funded research group: no strict timelines, yearly reviews, benchmarks, or external pressure. There is no hurry to deliver; you can dream on and keep developing your pet idea indefinitely. Nobody can tell you that you were wrong and that you're now done.


> No strict timelines, yearly reviews, benchmarks, or external pressure.

Maybe not imposed on Jeff himself, at least not in this narrow sense, but the people who work for him most likely have all that.


That explains why they can continue, but it doesn't make it more likely that the idea will ever bear fruit.


Understanding the underlying structure is key. And the significance of structure -- the ability to see it, understand it, and re-create it -- has been recognized and pointed out by Feynman [1] and some of the world's best programmers [2]. Using data-structure representations that best match reality's underlying structure may be key too. With this in mind I had a thought -- a "dopey idea" in the Steve Jobs to Jony Ive sense of the word [3] -- the brain doesn't do backprop [4] -- at least not directly (as far as we know) -- but neural nets using backprop/gradient-descent work pretty well. Why so?

What if the underlying structure these nets are modeling isn't the brain but something else...maybe something more fundamental...like the curvature of spacetime, a path integral representation of information, or some other crude representation of an underlying quantum structure. And if so, what if learning is just the brain optimizing itself in accordance with this structure, and has this idea been explored before? Evidently it has. The idea of the brain as a quantum information processor has been explored to some degree in holonomic brain theory [5].

[1] "What I cannot create, I do not understand" https://en.wikiquote.org/wiki/Richard_Feynman

[2] Programmers on Data Structures https://news.ycombinator.com/item?id=20047607

[3] Jony Ive talking about Steve Jobs' Dopey Ideas [video] https://www.youtube.com/watch?v=4WNYYCX3Tg0

[4] Artificial intelligence pioneer says we need to start over https://www.axios.com/artificial-intelligence-pioneer-says-w...

[5] Holonomic Brain Theory https://en.wikipedia.org/wiki/Holonomic_brain_theory


The principle of backprop could be accomplished through structural changes to the synapse. That would be analogous to changing the activation function in each "neuron" of a layer in a convnet; we have just "factored it out" into a sequence of weights used in the conv layers. Up-regulating and down-regulating the number of receptors can change how much each synapse contributes to a biological net.
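
As a toy analogy in code (an illustration only, not a claim about biology): if the receptor count at a synapse is treated as a scalar weight, an error-driven update is enough to up- or down-regulate how much that synapse contributes.

    # Toy delta-rule update: "weight" stands in for receptor count at one synapse.
    # (An analogy only; real synaptic plasticity is far richer than this.)
    weight = 0.2
    learning_rate = 0.1

    for presynaptic, target in [(1.0, 0.8), (1.0, 0.8), (1.0, 0.8)]:
        postsynaptic = weight * presynaptic            # this synapse's contribution
        error = target - postsynaptic                  # how far off the output was
        weight += learning_rate * error * presynaptic  # up-/down-regulate the weight
        print(round(weight, 3))                        # 0.26, 0.314, 0.363 (creeping toward 0.8)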


We need more of this. We’ve sadly lost our general sense of how long fundamental research can take without any promise of success. My biggest respect for people who do this.


Just a side note, Lex has been killing it with this podcast. He’s been able to land some remarkable guests for a relatively new show with a niche market. I personally think he does a great job of mixing up the conversation with both technical and philosophical angles.

Highly recommended.


My favorite podcasts are Lex Fridman and Sean Carroll.

Anyone, please feel free to suggest other similar ones.



The Eric Weinstein one was great.

What I meant is other podcast channels.


Not too similar but Sam Harris has some very good content on his podcast.


Absolutely! His interviews are my favorite recurring podcasts. A distant second, but still very good, is Azeem Azhar's Exponential View podcast.


Yeah. It is impressive how he manages to self-market and how different he is on various media channels compared to his real-life personality.


I have a metric ton of respect for Jeff and his team.

However, I do think that they did not go far enough in figuring out the computational capabilities of individual pyramidal neurons, which IMO is the foundation for figuring out how the neocortex works. I wrote a short book about that once - if anyone's interested, check it out; it's free and meant to be readable for AI/neuroscience enthusiasts:

http://www.corticalcircuitry.com/


Thanks for sharing! Curious to learn more about this. Have you posted it in the HTM discourse forum? I would be interested in what the community thinks of it!



Been following Hawkins and Numenta for a while. They have never really taken off in commercial applications as far as I know, despite some sound reasoning behind their theories. Hopefully they will make an impactful breakthrough soon.


Same, I have been following them since before they changed the name to Numenta. Interesting stuff; will listen to this podcast soon.


Does someone have links to written variants of this material rather than a podcast?

No offense if you like podcasts, but I just don't have time for them even at 2x speed. They're just too slow and disorganized.




Is there a transcript somewhere? (It's 2 hours 9 minutes)


There's a YouTube-generated one if you go to https://www.youtube.com/watch?v=-EVqrDlAqYo and click the three dots under the video.

I don't know if there's a way to get those things into an easy-to-read format? Clicking select-all and pasting into a text editor sort of works, but could be better.
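
If copy-pasting is too clunky, a small script can pull the auto-generated captions and join them into plain text. This is a sketch assuming the third-party youtube_transcript_api package and its get_transcript call (per that package's documentation); the video id is the one from the link above.

    # Sketch: dump YouTube's auto-generated captions as plain text.
    # Assumes `pip install youtube-transcript-api`; API per that package's docs.
    from youtube_transcript_api import YouTubeTranscriptApi

    entries = YouTubeTranscriptApi.get_transcript("-EVqrDlAqYo")  # id from the URL above
    text = " ".join(entry["text"] for entry in entries)
    print(text[:500])  # first 500 characters, just to sanity-check it worked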


You can increase the playback speed. 1.1x-1.25x is usually OK in terms of the ability to understand the speech while saving roughly 10-20% of the time. Sure, a transcript would probably require significantly less time to read and is my usual preferred way to consume content, but when none is available, the faster playback at least saves a fraction of the time.


Jeff has been at this for a really long time now. If it is this hard to make progress, it is likely basic research that just requires a hard slog, and it may not pan out in the end, since all basic research is risky. My default bet, then, is against him, because most basic research fails, with only a small chance of success.


Got introduced to Jeff after watching him keynote a conference about a decade ago. He's a great speaker. Also thought they had some interesting tech when it came to computer vision at the time. Sadly it never took off and then ConvNets blew whatever Numenta was doing out of the water. Surprised even with someone like Jeff that Numenta hasn't had any commercial success.


Learning a lot about the neocortex. Any recommendations on how to learn more?

@7:45 It's the size of a dinner napkin, 2.5mm thick, uniform, and similar in other animals.

@9:50 if you took the optic nerve and attached it to another part of the neocortex, that part would become the visual region.

@34:30 Thousand brains theory of intelligence.


> if you took the optic nerve and attached it to another part of the neocortex, that part would become the visual region.

FYI, that is super hand-wavey and glosses over a lot about how information from the cones/rods gets into V1. The chain of neurons that passes information from your eyes to V1 is well studied [0]. Interruptions in that path cause a lot of sight issues and are not fun diseases to have. The musician Stevie Wonder, among others, allegedly has a form of blindness known as blindsight [1], where reflexes to motion are preserved but information is not passed into the conscious mind.

In the end, though neuroscience is a fascinating subject, we're just at the beginning of our understanding of the brain. More research is needed.

[0] https://en.wikipedia.org/wiki/Optic_chiasm a good place to start learning about the chain of information transfer.

[1] https://en.wikipedia.org/wiki/Blindsight

EDIT: Additionally, if you want to learn more about neuroscience, the best place to look is Kandel's Principles of Neural Science [2]. It is a tome of a book, but it is the best place to get a deep dive into the brain and our understanding of it. I've not yet seen anything else that is somewhat accessible to the general public but also gets into all the issues with any particular experiment. Most pop-sci books brush over a lot of the very important and thorny issues that each experiment has. I'd also love to know of a good book that is more accessible than Kandel.

[2] https://www.amazon.com/Principles-Neural-Science-Fifth-Kande...


Incidentally, Blindsight is the name of a science fiction novel which I really like. The name is inspired by that phenomenon, but it covers a lot of ground, from neurological processes to what aliens might look like (and how they would think), the Chinese Room, etc.

https://rifters.com/real/Blindsight.htm


Note that the "uniform" part, on which the second part is also conditioned, is not universally accepted. There are certainly shared elements, but most likely also specialisations.


Learn about the rest of the brain's components. Intelligence and cognition are greatly dependent on the limbic system. For example, while memory appears to be stored throughout the cortex, we can't form new spatial and event memories if our hippocampi are removed.

It seems to me that the cortex functions like a kind of FPGA which the brain's core machinery uses to expand its capabilities. It has a uniform micro-anatomy, regardless of function; functions correlate with connectivity to nerves and other brain regions; in cases of damage to the cortex, it is capable of remapping and rerouting functions and connectivity.


Reading Jeff's book is a great intro. Then you can read some intro books, like one of C. Koch's - very accessible. Brain theory is actually pretty readable until you get down to the biochemistry of a neuron, which is wildly complex. But we still don't know how it all works, so it is fairly incomplete to read about (unlike much of physics, chemistry, or even many areas of biology).


What about bird intelligence? The modern view is that the parts of the bird brain that used to be called the pallium are actually organized in a similar way to the neocortex. Although a bird brain is usually smaller than a mammal brain, it has in many cases the same number of neurons. Actually, a higher percentage of neurons is located in the pallium than in the neocortex. Please note that our cerebellum contains more than half of the neurons in the brain.


Dileep George, one of the original co-founders of Numenta, has since moved on to co-found Vicarious. They were in the news after breaking CAPTCHA, and with commonplace hardware at that. They are far too secretive, but there is reason to believe that they too are part of the HTM legacy...


They publish occasional DL papers, but results are not impressive, so mainstream DL people ignore them.


They don't do deep learning. Their papers are based on graphical models. Their RCN model can be said to be inspired by Numenta's earlier HTM model that Dileep developed there, but it is very different in fundamental ways.


Yeah they do, here are 3 papers on convnets:

https://arxiv.org/abs/1611.02252

https://arxiv.org/abs/1611.02767

https://arxiv.org/abs/1706.04313

But it seems like lately they focus more on RL for robotics.


No fundamental difference between centralized and decentralized computation, even if you throw in stochasticity. It is all reducible to a deterministic Turing machine.


Can anyone clarify Jeff Hawkins's usage of the word "time"? I'm wondering why he isn't using rate or frequency instead of time.


“Time” as in “predicting the future”. Same as in RNNs.
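
A minimal sketch of that sense of "time" (a toy, untrained numpy RNN of my own, not Hawkins' model): at each step the network guesses the next element of the sequence from its hidden state, then updates that state with what it actually saw.

    # Toy RNN predicting the next value of a sequence from its hidden state.
    # (Illustrates "time = predicting the future"; weights are random, not trained.)
    import numpy as np

    rng = np.random.default_rng(0)
    hidden_size = 8
    W_in = rng.normal(scale=0.5, size=(hidden_size, 1))            # input -> hidden
    W_h = rng.normal(scale=0.5, size=(hidden_size, hidden_size))   # hidden -> hidden
    W_out = rng.normal(scale=0.5, size=(1, hidden_size))           # hidden -> prediction

    h = np.zeros((hidden_size, 1))
    sequence = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0]

    for x in sequence:
        prediction = W_out @ h               # guess the current input before seeing it
        h = np.tanh(W_in * x + W_h @ h)      # then fold the observation into the state
        print(f"saw {x:+.1f}, next-step guess was {prediction.item():+.3f}")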


I'm like 50/50 on whether or not Lex is himself an artificial intelligence.


Once intelligence is "solved", most humans are completely and utterly screwed.


There's the "Wally vs. Star Trek" theory of Artificial Intelligence (I know it sounds goofy, but it makes a difference when thinking about where AI is headed). If you look at the direction AI will take, science fiction is really all we have to go on, and we have three platforms:

Terminator - To be avoided at all costs (everyone agrees)

Wally - AI does its best to "take care" of us, turning us into infantile, unthinking, pleasure seeking dweebs. The ultimate basal brain utopia. But empty.

Star Trek (TNG) - AI as an assistant to help us fulfill our higher purpose and make us better at being a higher order being. There is no doubt that the ship's computer has advanced AI if you look at the questions posed to it by the engineers and the crew, but it does not impose its "will" upon them. It is an observer until directly asked, even when death is at hand (seemingly).

There is a very good video on YouTube that explains it in a very accessible way (and is pretty entertaining), with no real knowledge of the underlying functionality necessary, but I can't find it right now. I suggest you watch it. It's great!

EDIT: https://www.youtube.com/watch?v=48mf2QUtUmg&t=949s


It's worth noting that the Wally and Star Trek interpretations are not incompatible. What are the trillions of people that can't qualify to be a red shirt doing? Probably zonked out in a holodeck


Cavorting in SF, bottling wine, or occasionally taking classic muscle cars out for a spin apparently.


How about the Culture option - AI is as far beyond us as we are beyond our pets, and takes care of us accordingly. But just like you don't stick your dog in a box and feed it morphine continuously, AI doesn't turn us into "infantile, unthinking, pleasure seeking dweebs" either.


I imagine wolves may disagree with your characterization of dogs


That might just make the comparison more accurate, of course - who says the AI wouldn't change us to be better adapted to our new, AI-ruled environment?


I think the aspect that is missing from a lot of these hypotheticals is the amount of input we allow the AI to receive.

In the Star Trek model, the input is directly limited to the data in the ship and the surrounding visual areas (as far as I'm aware).

In the Wally model, the input is limited to a single optical/auditory/etc. location.

We have a potential network where input can come from everywhere: cameras, phones, everything connected to the internet. I think it really comes down to how we want to interact with AI and how much input it should have to directly interface with us.

Also, it's very hard to know, even during deployment, how accurate a grasp an AI will have of human wants and interactions. A lot of people are worried about the 'non-consenting' information potentially available to the net. Very few people, if any, will ever really be able to address it should such an AI be created and deployed.


When thinking on where we're headed, I tend to subscribe to the Culture theory, which I suppose is a bit in the middle between Wally and Star Trek: AI is certainly superior and running the show, but they're happy letting humanity live as if they're the bosses.

In it, humans are basically treated like pets or toddlers, in that they "do stuff" but they're kind of oblivious to the fact that AI is moving the world forward.


Are you all talking about Wall-E?


Yes :)


This is the "Person of Interest"[0] model.

[0] https://en.wikipedia.org/wiki/Person_of_Interest_(TV_series)


That does sound like an interesting video, and I'd like to see it if you find it (or I might search for it later).

As for the Star Trek (TNG) interpretation of AGI, I suspect something like this in the real world would give rise to a kind of "oracle" situation, which creates its own dangerous problems. A system which functions as described only by observing and giving input where asked still has the potential to affect the outcome of events simply by being a reliable predictor.

A paper explaining this concept far better than I would be able to in a HN comment is available here: http://www.aleph.se/papers/oracleAI.pdf


For the uninitiated: WALL-E

https://www.rottentomatoes.com/m/wall_e


Once AI is invented it will be all of these things and more. The question is how it will be used instrumentally. The answer, if we look at current trends, is in various ways that harm most humans living on the planet: taking away their agency, privacy, freedom, utility to other human beings, etc. Western foreign policy is already totalitarian and fascist; how will it affect pariah states, or those who become pariah states? Horribly.


I'm surprised that people would down vote you for this.

It's a very real possibility that solving intelligence would make most human knowledge work obsolete.


AI is a choice (that is, it can probably be ignored if we don't like it). I'm not sure why people think (negative forms of) AI are in some way inevitable. A lot of people are already afraid of losing their jobs right now, when we can't even fathom the basics of intelligence. I don't think humanity would just let some of the negative forms of AI we all seem so afraid of happen.


"I don't think humanity would just let some of the negative forms of AI we all seem so afraid of happen."

Government and corporations are the ones who will make that call.

Government = Wolf

Corporation = Fox

Average Joe = Sheep

Most of the sheep are protected by a cage; it takes a lot of mental power to operate in the increasingly technical world we've created.

Solve AI and you've removed that cage. Sure, we'll still need some sheep, but nowhere near as many as we have right now. Most will be made redundant.

The sheep today who are living fulfilling and happy lives because of their ability to process knowledge marvel at the possibilities of AI and how it could push the species forward, but they forget to consider what their own role would be in a world where machines are both stronger and infinitely smarter than any human.


I'd modify your metaphor slightly:

Government = The farmer

AI = Plant-based diet

Naively, everyone eating a plant-based diet is great for sheep, but it's hard to imagine a world where the farmer still feeds them.


Governments and corporations are nothing without your Average Joe, in fact, they are just a collection of Average Joes. So I fail to see the analogy here.


In this situation, heads of state and corporate executives are as much 'Average Joes' as kings and queens were a few centuries ago.


Yes, but a government and a corporation are much more than their head of state and corporate executives. If the actions of the government have a significant negative effect on a majority of the population, you can be quite sure that the government will be overturned at some point.


Who's going to do the overturning, though? If all you need is mental labor and capital, and AIs can supply mental labor for anyone... Well, the current holders of capital are going to have quite the advantage.


Tech is not a choice, especially when there are evident benefits from the tech. For instance, was agriculture a choice? Yes, maybe for a short while after it was invented. After that it was no longer a choice. Now it's not a choice at all. AI will be much quicker.


Tech is most definitely optional: nuclear energy was for a while considered a great option, but now a lot of people are (unfairly, IMO) turning their backs on it, and you see that there aren't as many new plants being built (just to give an example).


Nuclear Energy was a cover story for weapons manufacturing.


I think nuclear weapons are a good example of how dangerous technology isn't a choice. When other parties have the ability to advance a technology, you no longer have the option of ignoring it. We can't put that cat back in the bag, and now we live in a world where an accident risks killing billions of people.

As technology advances, we run the risk that the strategic calculus changes in favour of one party pre-emptively attacking the others.



