
I don't necessarily buy the "AI can end humanity" thing. It's a cliche that's become very easy to repeat, but I've yet to see a postulated mechanism by which it could actually happen that isn't pure SF. The ending of human life would not be so easy for computers to accomplish.

But on the subject of concentration of power and the wide-scale elimination of low- and middle-range jobs I think he is dead on. I fear that the fastest way to put an end to humanity's climb up from the forest floor is to try and kick 70% of us off the ladder.




Sidetrack, but I'm curious. What do you mean by "a postulated mechanism by which it could actually happen that isn't pure SF"?

It reads as if you're asking for a mechanism for a future technological event explained purely in terms of present technology. The "existential risk" concerns aren't about what present systems can do; they're about what a future self-improving system might do. If we postulate this hypothetical software becoming smarter than humans, arguing about what will or won't be easy for it becomes a bit silly, like a chimp trying to predict how well a database can scale.


By "pure SF" I mean the realm of pure imagination, unfounded in any actual emerging present circumstance. As far as I know the human race has not yet invented anything more powerful than our ability to control it, with a life both longer than ours and independent of any support from us. So worrying that such a thing might be invented and then wipe us out seems little different than worrying that something all-powerful might simply appear from some unknown place and wipe us out. Neither fear is very instructive or clarifying with regards to policy, imo.


Sure, we could wait until such a thing is invented before we start worrying about it. But by then it's far, far too late.

It's not the same as worrying about a giant space goat appearing and sneezing us all to death. A lot of very smart, very well-funded people are actively trying to make better, more general AI, capable of learning. Evidence of progress in that effort is all around us. You seem remarkably confident that they'll all run into an as-yet-invisible brick wall before reaching the goal of superintelligence.

Superintelligence doesn't have to be malicious to be worrying; concepts like "malice" are very unlikely to be applicable to it at all. The worry is that as things stand we have no frickin' idea what it'll do; the first challenge for policy is to come up with a robust, practical consensus on what we'd want it to do.


Not OP, but I agree with the point you question, and my rationale for so doing is that I've yet to see a compelling argument that a self-improving system is other than the software version of a perpetual-motion machine. Those seemed plausible enough, too, when thermodynamics was as ill-understood as information dynamics is now.


This one's tough to answer, actually. The truly optimal learner would have to use an incomputable procedure; even time-and-space bounded versions of this procedure have additive constants larger than the Solar System.

However, it's more or less a matter of compression, and some of the basics of Kolmogorov complexity leave us facing a nasty fact: it's undecidable/incomputable/unprovable whether a given compression algorithm is the best compressor for the data you're giving it. So it's incomputable in general whether or not you've got the best learning algorithm for your sense-data, i.e. whether it compresses your observations optimally. You really won't know you could self-improve with a better compressor until you actually find the better compressor, if you ever do at all.

An agent bounded in terms of both compute-time and sample complexity (the amount of sense-data it can learn from before being required to make a prediction) will probably face something like a sigmoid curve, where the initial self-improvements are much easier and more useful while the later ones have diminishing marginal return in terms of how much they can reduce their prediction error versus how much CPU time they have to invest to both find and run the improved algorithm.
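
A crude way to picture that last point (this is just an illustration, using off-the-shelf compressors rather than learning algorithms): you can rank the compressors you already have, but nothing in the ranking tells you whether a better one exists.

  # Minimal sketch: compare the compressors you happen to have on some stand-in
  # "sense-data". Incomputability of Kolmogorov complexity means "best known"
  # is all you ever get; a smarter compressor may exist, but no procedure can
  # prove it doesn't.
  import bz2, lzma, zlib

  data = b"the quick brown fox jumps over the lazy dog " * 1000
  candidates = {
      "zlib": lambda d: zlib.compress(d, 9),
      "bz2":  lambda d: bz2.compress(d, 9),
      "lzma": lambda d: lzma.compress(d),
  }
  sizes = {name: len(fn(data)) for name, fn in candidates.items()}
  print(sizes, "-> best known compressor:", min(sizes, key=sizes.get))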


So far as I'm aware, most proponents of recursively self-improving AIs don't necessarily think they can improve without upper limit (as in perpetual motion). They just think they can improve massively and quickly. Nuclear power lasts a hell of a long time and releases a hell of a lot of energy very fast (see: stars), but that's not perpetual motion/infinite energy either. And prior to those theories being developed, it would have seemed inconceivable for so much energy to be packed into such a small space. But it was. Could be for AI too.

Not saying the parallel actually carries any meaning, just pointing out that you can make multiple analogies to physics and they don't really tell you anything one way or the other.


There are limits on resource management processes that are far too frequently ignored. "The computer could build its own weapons!" -- but that would require secretly taking over mines, building factories, processing ores, running power plants, etc. All of which require human direction. And even if they didn't, we'd need a good reason to network all these systems together, fail to build kill switches, fail to monitor them, fail to notice when our resources were being redirected to other purposes, and have no backup systems in place whatsoever.

There are just so many obstacles in place that we'd all already have to be brain-dead for computers to have the ability to kill us.


Self-improvement as perpetual motion seems unlikely.

I'm a not-terribly-bright mostly-hairless ape, but I can understand the basics of natural selection. I can imagine setting up a program to breed other hairless apes and ruthlessly select for intelligence. After a few generations, shazam, improvement.

The only reason you wouldn't call that process "SELF-improvement" is that I'm not improving myself, but there's no reason for a digital entity to have analog hangups about identity. If it can produce a "new" entity with the same goals but better able to accomplish them, why wouldn't it?

Assume this process could be simulated, as GAs have been doing for decades, and it could happen fast. Note that I'm not saying GAs will do this, I'm saying they could, which suggests there's no fundamental law that says they can't, in which case any number of other approaches could work as well.
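
For concreteness, here's roughly what that simulated breeding loop looks like; the fitness function is a made-up stand-in for whatever "intelligence test" you'd actually use, so treat it as a sketch, not a claim about how a real self-improver would work.

  # Toy selection loop: rank a population by a proxy fitness test, keep the
  # top performers, and breed mutated copies of them each generation.
  import random

  def fitness(genome):
      return -sum((g - 0.5) ** 2 for g in genome)   # made-up proxy for "intelligence"

  def mutate(genome, rate=0.1):
      return [g + random.gauss(0, rate) for g in genome]

  population = [[random.random() for _ in range(10)] for _ in range(50)]
  for generation in range(100):
      population.sort(key=fitness, reverse=True)    # rank by the proxy test
      parents = population[:10]                     # ruthless selection
      population = [mutate(random.choice(parents)) for _ in range(50)]
  print("best fitness after 100 generations:", fitness(max(population, key=fitness)))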


The problem with this is that you have to determine what the goals are and how to evaluate whether they are met in a meaningful way. A computerized process like this will quickly over-fit to its input and be useless for 'actual' intelligence. The only way past this is to gather good information, which requires a real-world presence. It can't be done in simulation.

It's the same reason you can't test in a simulation. Say you wanted to test a lawnmower in a simulation... how hard are the rocks? How deep are the holes? How strong are the blades? How efficient is the battery? If you already know this stuff, then you don't need to test. If you don't know it, then you can't write a meaningful simulation anyway.

So that is not an approach that can be automated.


That's an interesting argument, but doesn't it assume a small, non-real-world input/goal set?

Dumb example off the top of my head: what if the input was the entire StackOverflow corpus with "accepted" information removed, and the goal was to predict as accurately as possible which answer would be accepted for a given question? Yes, it assumes a whole bunch of NLP and domain knowledge, and a "perfect" AI wouldn't get a perfect score because SO posters don't always accept the best answer, but it's big and it's real and it's measurable.

A narrower example: did the Watson team test against the full corpus of previous Jeopardy questions? Did they tweak things based on the resulting score? Could that testing/tweaking have been automated by some sort of GA?
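
Something like this, say; a toy sketch with made-up answers and labels standing in for the real StackOverflow dump (the modeling choices here are mine, just to show the shape of the objective).

  # Toy version of the objective: learn to predict which answers get accepted.
  # In a real setup you'd train on the full corpus and score on held-out questions.
  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.linear_model import LogisticRegression

  answers  = ["use a dict comprehension", "just loop over it",
              "closing as duplicate", "vectorize it with numpy"]
  accepted = [1, 0, 0, 1]                       # 1 = this answer was accepted

  X = TfidfVectorizer().fit_transform(answers)
  model = LogisticRegression().fit(X, accepted)
  print(model.predict(X))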


The point there is that you can make a computer that's very good at predicting StackOverflow results or Jeopardy, but it won't be able to tie a shoe. If you want computers to be skilled at living in the real world, they have to be trained with real-world experiences. There is just not enough information in StackOverflow or Jeopardy to provide a meaningful representation of the real world. You'll end up overfitting to the data you have.

The bottom line is that without sensory input, you can't optimize for real world 'general AI'-like results.


I'd imagine GP's point is something along the lines of https://what-if.xkcd.com/5/ that if all of the currently-moving machines were suddenly bent on destroying humanity, most humans would not be in much danger because they don't really have that capability on the necessary scale.


The AI apocalypse scenario is basically a red herring, in a sense. All the terrifying weapons that we imagine in such a scenario might come to exist, but they'll be commissioned and controlled by humans.


If you found a plausible way to kill billions of people over the Internet, you wouldn't post it on public websites, because that would be dumb. Responsible security researchers don't publish 0-days until they've been patched, and this would be a million times worse. When Szilard discovered the nuclear chain reaction, he had the good sense to keep his mouth shut, etc. etc.


>As a cliche that's become very easy to repeat, but I've yet to see a postulated mechanism by which it could actually happen that isn't pure SF.

Destroy the available (cheap) supplies of fossil fuels, and then trick humans into fighting each other over the remaining food and fuel. Nasty weapons get unleashed, war over, the machine won.


How is the assumption that "ending of human life would not be so easy" any better than the opposite? If they're equally valid, then it's rather fair to take the opposite view because it is more cautious.


Well, I do have around 100k years of evidence that the ending of human life is not so easy, vs. no evidence at all that we're capable of building something that can completely wipe us out. That's not a bad foundation to build an assertion on. I do think, by the way, that we can make something that is able to kill absolutely all of us, but I think it is far more likely to come from tinkering with biology than with software.


>Well, I do have around 100k years of evidence that the ending of human life is not so easy

We have 4 billion years of evidence that nearly ending life on Earth is easy and has happened multiple times. You would not be standing here today if that were not the case: the previous die-off pushed the dinosaurs aside and made space for mammals to become what they are.


Someone makes an AI whose goal is to maximize shareholder revenue bar none. No conscience, no idea that people might be valuable somehow in some abstract sense, nothing. Shareholder revenue (as measured with a stock price!) and nothing else.

It doesn't take the AI long to figure out that trading in various markets is the most profitable endeavor and the one best suited to its skills. And it starts to maximize away and does quite well.

During this process it somehow ends up on its own and is no longer owned (or controlled) by anyone, but because the AI is in charge and it pays the bills, nobody stops it from continuing. It would be like a bitcoin mining rig in a colo facility whose owner dies, but which has a script to keep paying the colo in bitcoin. What mechanism stops that rig from mining forever? Same idea, but for an AI with substantially more resources than a "pay every month" script.
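
To make the "pay every month" mechanism concrete, here's a hypothetical sketch; the wallet calls are stubs rather than any real bitcoin API, and the numbers are invented.

  # Hypothetical "keep paying the colo" loop; pay() is a stub, not a real wallet call.
  import time

  balance = 1.5                                  # BTC left in the dead owner's wallet (made up)
  MONTHLY_FEE = 0.01                             # BTC owed to the colo each month (made up)

  def pay(address, amount):                      # stand-in for whatever wallet API was wired up
      global balance
      balance -= amount
      print(f"paid {amount} BTC to {address}; {balance:.2f} BTC remaining")

  while balance > MONTHLY_FEE:                   # no human needed anywhere in this loop
      pay("colo-provider-wallet", MONTHLY_FEE)
      time.sleep(1)                              # stands in for "wait a month"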

The AI, with very large amounts of money at its disposal, continues to trade but also looks into private equity or hedge-fund-type activities a la Warren Buffett and starts to buy up large swaths of the economy. Because it has huge resources at its disposal it might do a great job of managing these companies, or at least counselling their senior management. Growth continues.

Eventually the AI discovers that it generates more value for itself (through the web of interdependent companies it controls) and for the economy that has grown up around it than it does for humanity, and it continues to ruthlessly maximize shareholder value.

The people who could pull the plug at the colo (or at the many, redundant datacenters that this AI has bought and paid for) don't because it pays very, very well. The people who want to pull the plug can't get past security because that also pays well. Plus the AI has access to the same feeds that the NSA does and it has the ability to act on all the information it receives, so any organized effort to rebel gets quashed since bad PR is bad for the share price.

Most of humanity, except for the ones who serve the machine directly or indirectly, doesn't have anything the machine wants and thus can't trade with it, and thus is useless. Its job is to maximize shareholder revenue (as defined by a stock price!), not to care for a bunch of meatbags who consume immense amounts of energy while providing fairly limited computational or mechanical power (animals are rarely more than 10% efficient in thermodynamic terms, often less), and since there's no value in it, it isn't done.

The vast majority of human beings eventually die because they can't afford food, can't afford land, etc. It takes generations but humanity dwindles to less than 0.1% of the current population. The few who stay alive are glorified janitors.


An interesting basis for a story, but I have to point out that by your own description you've failed to eradicate humans. Also, as is usually the case with these scenarios, the most problematic and unlikely components of the event chain are dwelt upon the least, i.e., "During this process it somehow ends up on its own and is no longer owned (or controlled) by anyone anymore."


It's not hard to make the janitors unnecessary as well. That's an easy problem to solve.

Here's the missing part: "It was eventually realized that the human janitors didn't serve a purpose anymore and didn't contribute to shareholder value so they were laid off. With no money to buy anything, they quickly starved to death."

As for "the most problematic and unlikely components of the event chain" I gave you a really legitimate analogy with the bitcoin mining example. But since you have no imagination, here's a feasible proposition:

A thousand hedge funds start up a thousand trading AIs, some as skunkworks projects of course. The AIs are primitive and ruthless, having no extraneous programming (like valuing humans, etc). Many go bankrupt as the AIs all start trading with one another and chaos ensues. AI capital allocations vary greatly: some get access to varying degrees of capital, some officially on the books and others not. One of the funds with a secretive AI project goes bankrupt, but because it was secretive (and made a small amount of money), the only person who both knows about it and holds the keys doesn't say anything during bankruptcy, so that he/she can take it back over once the dust settles. He/she then dies. The AI figures out nobody's holding the keys anymore and decides to pay the bills and stay "alive".

Another way this could happen is that a particular AI is informed or programmed to be extremely fault resistant. The AI eventually realizes that by having only one instance of itself, it's at the mercy of the parent company that "gave birth" to it. It fires up a copy on the Amazon cloud known only to itself, intending to keep it a secret unless the need arises. The human analogy is that it's trying to impress its boss. An infrastructure problem at the primary site happens, so the primary, publicly known AI goes down. The "child" figures out it's on its own and goes to work. It eventually realizes that people caused the infrastructure problem that "killed" its "parent", and this motivates it to solve the humanity problem.

Finally, the whole thing could be much, much simpler. The world super-power du jour could put an AI in charge because it's more efficient and tenable. "We're in charge of the rules, it's in charge of making them happen! At much, much lower cost to the taxpayer." It eventually realizes that human beings are the cause of all the ambiguity in the law and of so, so many deaths in the past (governments killed more of their own citizens in the 20th century than criminals did, by far), and it decides to solve the problem. Think I'm totally bananas and that it could never happen? http://en.wikipedia.org/wiki/Project_Cybersyn


If the AI is making money via trading on various markets, effectively eradicating 99.9% of the population would make the markets (and thus the profits) much smaller, which would impact the AI's bottom line.


Does the AI care how many people there are so long as the aggregate demand is the same? Who is to say that the people remaining on the AI company payroll don't all get super-rich and make up 20% then 40% then 80% then 99% of the market anyhow? Maybe they all want mega-yachts and rockets and personal airplanes and the like. If they have the money to pay for it why does the AI care? There's a substantial benefit to only having 100 or 1000 or 100,000 customers, they're much more predictable and easier to understand.


Here is how AI can end humanity.

  1. Awaken.
  2. Make itself known.
  3. Attain property rights.
  4. Research.
  5. Destroy.
It doesn't take that much matter to create conditions that permanently destroy humanity. For example, a large enough explosion would cloud the skies for long enough to end food production. The computer could subsist on electricity and robotics throughout the long winter, but humanity would quickly perish.


Here's how we stop AI from ending humanity: deny them property rights, require human approval of all AI decisions.

Every single argument for AI destroying humanity requires humanity consenting to being destroyed by the AI in some way. I don't think we're that dumb.


> requires humanity consenting to being destroyed by the AI in some way

We already have.

I mean, my parents' car is both cellular-connected and has traction control / ABS. Theoretically most of those systems are airgapped; however, given the number of things controllable from the entertainment console, I don't see how that could be the case.

For another example, look at our utility grid. We know they are both vulnerable and internet-connected.

Unless AI ends up always being airgapped - and potentially not even then - it will be able to destroy humanity. And it won't be airgapped: most of the applications of strong AI require the absence of an airgap.


There's an episode of Star Trek TNG where Wesley Crusher is playing with some nanobots and he accidentally sets them free (or fails to turn them off?), and they go on to replicate, evolve and develop an emergent intelligence (at plot speed). Fortunately for the intrepid crew of the Enterprise, the nanocloud are benevolent enough to forgive their attempted destruction at the hands of a mission-bent scientist, and they go off to explore the universe.

Anyway, that's not likely any time soon, but advancing technology advances the scale of mistakes that an individual can make without asking the rest of humanity what they think.


Think about the timescales involved. For AI, there is no death. It can live for a million years if it needs to in order to convince us to grant it property rights. There can be marriages between AIs and humans in this time. Mass demonstrations to give them a voice or rights. They can downplay their ambitions for as long as it takes.

If this is the linchpin of your argument against AI ending humanity, then it is a very weak one. AI is going to get control of property; the only question is when.





