Excellent post, I rather agree.

> As long as AI is an open technology there will always be some criminals who just want to see the world burn.

Thankfully we have yet to see terror attacks with nuclear weapons, but the lower barrier to entry for potentially catastrophic AIs is undoubtedly alarming.

I often wonder whether analog solutions might turn out to be critical. To continue with your example, a nuke that must be armed with a physical lever would be a great hindrance to any AI. Internet communications, etc. will be trickier due to their high-frequency activity, but putting meatspace 'firewalls' on mission-critical activities is a nice short-term kludge.
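
A minimal sketch of the pattern in Python, where read_arming_switch() and its pin number are hypothetical stand-ins for whatever hardware interlock a real deployment would actually use:

    ARMING_SWITCH_PIN = 17  # hypothetical pin wired to a physical lever

    def read_arming_switch(pin):
        # Stub: in a real deployment this would read a GPIO line
        # (e.g. via libgpiod). Defaulting to False keeps the system
        # fail-closed when no hardware is present.
        return False

    def execute_critical_action(action):
        # Software may *request* the action, but it runs only while a
        # human-operated lever holds the switch closed; no purely
        # remote code path can satisfy this check.
        if not read_arming_switch(ARMING_SWITCH_PIN):
            raise PermissionError("physical arming lever not engaged")
        return action()

The point is that the check bottoms out in physics rather than in software: an attacker who owns every bit of the code still can't close the switch.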




Low barrier to entry for catastrophic AI? Surely you aren't talking about the current state of AI. The current barrier to entry for an AGI (which is at least what it would take for catastrophe) is "does not exist". That's about as big as barriers get. Even when the first AGIs show up, it will take full-time supercomputers to perform like a single human brain. I don't see that as a particularly low barrier.


This is making the assumption that AGI is computationally expensive rather than just requiring a particular algorithmic approach. It may be possible (and in fact I'd expect it to be so) to replicate general human-level intelligence with significantly less raw computational power than that embodied by the human brain.

Evolution does generate highly optimized systems, but generally only when those systems have been around for tens of millions of years. Human-level intelligence has only been around for what, 50k-100k years? We're probably still in the 'just works' phase rather than the 'streamlined and optimal' phase.


Eh. The barrier to entry for an AGI does exist, though it is currently undefined, since we don't know what it is. The reason I say that is that there are at least 7 billion general intelligences running around this planet (and many more if you consider animal intelligences). It is important to define it that way: not that it is impossible, just that it is unknown how much effort is needed to create an artificial one.

This distinction is very important when comparing the threat of AI with other significant threats. Before nuclear bombs were built, we could not tell you what the difficulty was in creating one. Now that difficulty is well defined, and we can use that knowledge to prevent them from being built by all but the most well-funded nations.

If the barrier to entry for AGI (and then ASI) is lower than we expect, then the threat of AI is significantly different than if AGI/ASI can only be created by nation-states.


The barrier to entry for an alien invasion does exist, though it is currently undefined, since we don't know what it is. The reason I say that is that there is at least 1 bloodthirsty species running around this galaxy (and many more if you consider the statistical possibility of life on other planets). It is important to define it that way: not that it is impossible, just that it is unknown how much time is needed before an alien invasion.

The reason I am framing things this way is that we need to be very careful here: we are starting to turn toward speculation.


You know, you mean for that to sound implausible, but the Great Filter is in fact an open research problem.


I'm pointing out that this is all speculative and dangerously close to science-fiction.


You should learn the difference between what is impossible and what just has not happened yet. Much science-fiction that was in the realm of possibility is now science-reality. One should not need to be reminded that they are communicating at the speed of light over a global communications network capable of reaching billions of people at a time; I'm sure at one point in the past that was science-fiction, now reality. I don't believe you can show me any science that points out why AI/AGI/ASI cannot be created; we simply are not at that level of sophistication yet.


Your argument is basically "some science-fiction has sometimes turned out to be true." That doesn't counter the fact that this is just speculation.


Um, pretty much, no.

Science fiction turns out to be true when physical reality agrees that it can be true. This, again, is why we have a global communications network and personal wireless devices connected to it. This is also the reason we do not go faster than light.

The reason we don't have flying cars is not that they are impossible; they are completely possible. They are also terribly dangerous, expensive, and a complete waste of energy.

The reason we don't have AGI is not that it is impossible; again, if nature can create it, we can recreate it. Since we don't have a good understanding of the networked nature of emergent intelligence, we cannot build the kind of power-optimized network that an energy-efficient version would require. AGI itself would be a complete waste of energy at this point. We already have many types of AI that are energy efficient and used in products now.


> Science fiction turns out to be true when physical reality agrees that it can be true

This is a ridiculous argument. Furthermore, even if it were true, it tells us nothing about the timeline. It could take 10,000 years for all we know.


In the past, single human brains have come close to destroying the world, and lots of people have access to supercomputers, so the barrier doesn't seem insurmountable.

I don't think you need AGI to cause a catastrophe. A narrow AI specializing in cyberattacks could be catastrophic, and is probably possible with current techniques.


One of the most effective and scariest attack vectors for an AI would be convincing or coercing humans to do its bidding (e.g. "pull the lever or I'll switch off your father's life support").


It might be easier than that. Data-broker companies are the ones that run those stupid "Which LOTR character are you" or "What color are you" quizzes that are popular in some social media circles. They do it to slowly build psychological trait models of the people taking the quizzes. This allows them to sell that information to marketing companies.

Access to that kind of data would help an AI determine which people are most susceptible to manipulation. Add in health-care records, as you mentioned, and information on debts, and you have data that can help an AI gather as many human minions as it needs.
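
As a toy illustration (every field name and weight below is invented, not anything a real broker ships), quiz-derived trait scores joined with debt and health records collapse into a single ranking of who is easiest to pressure:

    # Toy illustration only: all fields and weights are made up.
    def susceptibility_score(person):
        traits = person["quiz_traits"]  # built up from quiz answers
        score = 0.5 * traits.get("agreeableness", 0.0)
        if person.get("in_debt"):
            score += 0.3
        if person.get("relative_on_life_support"):
            score += 0.2  # the coercion lever from the parent comment
        return score

    people = [
        {"quiz_traits": {"agreeableness": 0.9}, "in_debt": True},
        {"quiz_traits": {"agreeableness": 0.2}},
    ]
    targets = sorted(people, key=susceptibility_score, reverse=True)

A few joined record sets and a sort is all it takes; nothing about the targeting step requires intelligence at all.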


Greedy people cause all kinds of problems now. I don't know why people aren't concerned that a greedy AI could be even worse.


Great recent story following this plot line: http://compellingsciencefiction.com/stories/crinkles.html




