Bill Joy: Why the future doesn't need us (2000) (wired.com)
42 points by zengr on April 24, 2011 | 27 comments



Here's the problem with the "Let's just agree not to do this research!" plan that everyone seems to suggest when they start thinking about existential risks: when we're sitting around in 2030 with a million times more computing power at our fingertips than we have today, constructing a workable AI just isn't going to be that difficult of an engineering problem. We already know the equations that we'd need to use to do general intelligence; it's just that they're not computable with finite computing power, so we'd have to do some approximations, and at present it's not realistic because the approximation schemes we know of would work too slowly. Pump up our computing power a million times and these schemes start to become a lot more realistic, especially with some halfway decent pruning heuristics.

It's bad enough that (IMO) by 2040 or so, any reasonably smart asshole in his basement could probably do it on his laptop with access to only the reference materials available today; I have no idea how you avoid that risk by making some political agreement. Hell, ban the research altogether on pain of death, and there's still going to be some terrorist team working on it somewhere (and that's even if all the governments actually stop work on it, which they won't).

The only positive way out of this is to go to great pains to figure out how to design safe (friendly) AI, and to do so while it's still too difficult for random-dude-with-a-botnet to achieve (and preferably we should do it before the governments of the world see it as feasible enough to throw military research dollars at). We need to tackle the problem while it's still a difficult software problem, not a brute-force one that can be cracked by better hardware.


"We already know the equations that we'd need to use do general intelligence"

Not only am I pretty sure we don't know how to build a general intelligence, I'm pretty sure that nobody really knows what kind of approach would be most likely to succeed.

Having said that, I would love to be proved wrong on this one - so, as you specifically say that the necessary techniques have already been published, perhaps you could give the relevant references?


There's an algorithm developed by Marcus Hutter called AIXI, which makes provably optimal decisions. Unfortunately(?) it's also uncomputable, but computable approximations exist, including a Monte Carlo variant: http://www.vetta.org/2009/09/monte-carlo-aixi/. As the paper notes, it scales extremely well; to get better results you just throw more computing power at it.
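
For a concrete feel of what "computable approximation" means here, below is a minimal Monte Carlo rollout planner. It's only a toy in the same sampling-based spirit as MC-AIXI, not Hutter's actual algorithm: the environment, the action set, and the rollout parameters are all made up for illustration.

    import random

    # Toy Monte Carlo rollout planner (illustration only -- MC-AIXI itself uses a
    # Solomonoff-style mixture over environment models plus tree search).

    class ToyEnv:
        """A noisy 1-D walk: the goal is to keep the position near zero."""
        def __init__(self, pos=0):
            self.pos = pos

        def step(self, action):                    # action is -1 or +1
            self.pos += action + random.choice([-1, 0, 1])
            return -abs(self.pos)                  # reward: closer to 0 is better

        def copy(self):
            return ToyEnv(self.pos)

    def rollout(env, first_action, depth):
        """Total reward of one random playout that begins with first_action."""
        sim = env.copy()
        total = sim.step(first_action)
        for _ in range(depth - 1):
            total += sim.step(random.choice([-1, 1]))
        return total

    def plan(env, actions=(-1, 1), rollouts=500, depth=10):
        """Pick the action whose random rollouts score best on average.
        More rollouts and deeper rollouts = more compute = better decisions."""
        avg = lambda a: sum(rollout(env, a, depth) for _ in range(rollouts)) / rollouts
        return max(actions, key=avg)

    print(plan(ToyEnv(pos=3)))   # almost always -1: head back toward zero

The "throw more computing power at it" scaling shows up directly in the rollouts and depth parameters.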


That's great, but making "optimal decisions" in some defined state space where the quality of various options is evaluable is a really different problem to general intelligence.


Nice to see they are using Pac-Man as an example domain. When I was doing AI research in this kind of field in an engineering department, I was rather unpopular for suggesting that we should forget about working on complex domains (nuclear power stations) and focus on something a bit more manageable - my, actually quite serious, suggestion was Tetris. :-)


Indeed, AIXI is the algorithm I was referring to, and Monte Carlo AIXI is the approximation.

As hugh3 mentioned in a sibling comment (http://news.ycombinator.com/item?id=2479211), 'making "optimal decisions" in some defined state space where the quality of various options is evaluable is a really different problem to general intelligence'. While I definitely agree with this statement to some extent (namely, a powerful MC-AIXI setup is not necessarily going to display any intelligence that's remotely human, at least without a lot of other stuff going on in the system), the concerning thing is that it should almost certainly be enough to get a system reasoning about its own design, since its code is a well-defined state space where quality is evaluable (depending on how the programmer decides to have it evaluate quality).

To end up with a dangerous runaway "AI" on our hands, we don't need AI that we'd consider intelligent or useful. All it takes for a runaway is an AI that is good at improving itself, working effectively at optimizing a metric that approximates "get better at improving yourself". AIXI approximations should be plenty powerful to do this with the amount of computing power we'll have in ~20 or 30 years (at the very least, there's a big enough chance that we really have to take it seriously).
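
To make the "its code is an evaluable state space" point concrete, here is a toy hill-climbing sketch. The "program" being improved is just a vector of numbers standing in for code, and the metric is made up; nothing here is intelligent. The point is only that "optimize your own design against a metric" is an ordinary search loop once the design is representable and scoreable.

    import random

    # Toy "improve yourself against a metric" loop (illustration only).

    TARGET = [3.0, -1.0, 2.5, 0.0]          # made-up "ideal design"

    def score(program):
        return -sum((p - t) ** 2 for p, t in zip(program, TARGET))

    def mutate(program):
        child = list(program)
        child[random.randrange(len(child))] += random.gauss(0, 0.5)
        return child

    def self_improve(program, steps=10000):
        best, best_score = program, score(program)
        for _ in range(steps):
            candidate = mutate(best)
            if score(candidate) > best_score:    # keep any strict improvement
                best, best_score = candidate, score(candidate)
        return best

    print(self_improve([0.0, 0.0, 0.0, 0.0]))    # drifts toward TARGET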

This is one of the reasons Eliezer Yudkowsky is so keen on extending decision theory, so that we can get some idea of what we should actually be trying to approximate in order to have a decent shot at doing self-improvement safely.

The best way to sum up my concern is that (unboundedly) self-improving programs make up a tiny fraction of program-space that we can't quite hit with today's technology. Of that sliver of program space, there's a much smaller sliver that contains "programs that won't kill us." There's another sliver that contains "programs that have useful side effects". We need to make sure that the first "AI" we create lies in the minuscule intersection, "self-improving programs that do something useful [1] and won't kill us", and that's a terrifyingly small target to shoot at, so we had better work strenuously to make sure that once it's feasible to create any of these programs, our aim is good enough to hit the safe and useful ones.

[1] We need to find self improving programs that are useful early on because we'll need to use them as our "shield" against any malicious self-improvers that will inevitably be developed later. There's a significant first-mover advantage in AI, and even a small head start would probably make it difficult or impossible for a second AI to become a global threat if the first AI didn't want to allow it.


So AIXI is just old stochastic optimal control (decision theory, controlled Markov processes, R. Bellman's work, etc.) plus a way from Solomonoff to make extrapolations. Okay.

On the practical side, there have been related ideas from D. Bertsekas and R. Rockafellar.

For actual computing, the problem remains the curse of dimensionality. The curse is so bad that, for a brute-force approach, which is what AIXI is, or really for nearly anything general in stochastic optimal control on big problems, a few more decades of Moore's law still won't scratch the surface.
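
A back-of-envelope illustration of that curse (the discretization and the dimension counts are arbitrary):

    # Discretize each state variable into 10 levels and count the joint states a
    # brute-force sweep would visit. A million-fold hardware speedup buys only
    # about six extra dimensions at this granularity.

    levels = 10
    for dims in (6, 12, 20, 50):
        print(f"{dims:2d} dimensions -> {float(levels ** dims):.0e} states")

    #  6 dimensions -> 1e+06 states
    # 12 dimensions -> 1e+12 states
    # 20 dimensions -> 1e+20 states
    # 50 dimensions -> 1e+50 states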


There are projects that simulate neurons at the biological level, not just the neural-network approximation, and they have demonstrated great results. These simulations work for simple organisms, and scaling things up is not really a CS problem.

PS: It's assumed that there are shortcuts to AI, but the absolute worst case is a QM simulation of each cell in a body and its environment, and we do have the math for that, even if the computational power is hundreds or even thousands of years in the future at current computing growth rates.


I do quantum mechanical simulations for a living, and I really don't see that kind of thing ever working. Luckily I don't think you'd need a full quantum mechanical simulation to get an artificial brain working anyway.

But this raises the next problem: even if I did build myself a copy of my brain (overlooking the ethical issues in doing so), it's still not any damn smarter than I am. And if I can't figure out how to build a smarter brain than mine then it can't figure it out either, so we're still stuck.

This is the big hole in the "singularity" scenario -- there's no reason to think that brains are capable of building ever-smarter brains.

We don't even know, really, what that would mean. What would a smarter version of my brain look like? Would it be like my brain except capable of juggling more symbols at once? Or would it be like my brain except less likely to jump to dumb conclusions?


"And if I can't figure out how to build a smarter brain than mine then it can't figure it out either, so we're still stuck."

For one thing, it's pretty likely that blindly expanding the size of your neocortex would increase your intelligence greatly without any major architectural changes - just pack more neurons in there in roughly the same pattern, or expand the depth of the cortical columns (that might be more difficult architecturally, though; I don't think the structure is as homogeneous vertically as it is horizontally).

You're right, though, that it's not necessarily true that being able to improve our own design at all means we can keep improving it for never-ending gains. It is possible that at some point we'll reach a threshold, and we won't be quite smart enough to pass over it to the next level of intelligence.

But I think it's rather unlikely that we're close to that barrier - if we can beat our own intelligence at all, we should see a roughly exponential increase for at least a little while.


"This is the big hole in the "singularity" scenario -- there's no reason to think that brains are capable of building ever-smarter brains."

One thing that could be done with simulated brains is to save a copy, tinker with it, and see what happens. If it doesn't work out, just restart the simulation.

This is not so easy to do with biological human brains, but may be quite simple and straightforward with simulated human brains.

Of course, there are very serious ethical issues in doing that kind of tinkering and "rebooting" even of simulated brains. But the technological capability to do that will be there should someone decide to (and someone likely will).

"We don't even know, really, what that would mean. What would a smarter version of my brain look like?"

Does it have to be smarter? How about faster?

It may be possible to speed up a brain simulation to the point where it's thinking 10 times faster than humans. Or maybe 100 times faster. Who knows where the limits lie?

If you had 10 or 100 lifetimes to think about improving your brain, do you think you might come up with some promising ideas?


I would hope that as part of the process of creating these simulations we would gain some understanding of how our own general intelligence actually works, hopefully then allowing a degree of architectural upgrading to occur (i.e. not just faster but qualitatively better).

Once that happens, and assuming that there is a sequence of possible upgrade paths, then I would expect something like a Vingean singularity to occur.

Hopefully whatever entity does this will be an Iain M Banks fan and will appreciate that being nice to us slow dull bags of meat could be mutually entertaining.


>> ... if I can't figure out how to build a smarter brain than mine then it can't figure it out either

You mean, just like complex processors are impossible because you personally couldn't design one, since there are just too many person-years of work involved?

>>there's no reason to think that brains are capable of building ever-smarter brains.

What happens when you let groups of people build sub-systems? And then the groups iterate over those sub-systems?

Not to mention hardware speedups?

I know enough to be a bit in awe of quantum simulations, so I am a bit shocked not to see a better argument outside your own area of expertise. :-)


Please link to an example of a simple organism simulation based on that organism's nervous system - I have never seen one.


An example of a nematode simulation:

The purpose of this web site is to promote research and education in computational approaches to C. elegans behavior and neurobiology. This tiny animal has only 302 neurons and 95 muscle cells, making an anatomically detailed model of the entire body and nervous system an attainable goal. Physiological information is still incomplete, but computer simulations can help direct experiments toward the questions which are most relevant for understanding the neural control of behavior.

http://www.csi.uoregon.edu/projects/celegans/

As to scaling things up:

The IBM team's latest simulation results represent a model about 4.5% the scale of the human cerebral cortex, which was run at 1/83 of real time. The machine used provided 144 TB of memory and 0.5 PFlop/s [petaflops].

Turning to the future, you can see that running human-scale cortical simulations will probably require 4 PB of memory, and running these simulations in real time will require over 1 EFlop/s [exaflops]. If the current trends in supercomputing continue, it seems that human-scale simulations will be possible in the not too distant future.

http://www.theregister.co.uk/2009/11/18/ibm_closer_to_thinki...
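
As a rough sanity check of those figures (my own arithmetic, not IBM's, assuming memory scales linearly with the cortex fraction and compute with fraction times speedup):

    # ~4.5% of cortex at 1/83 real time took 144 TB and ~0.5 PFlop/s.
    # Scaling up to 100% of cortex at real time lands in the same ballpark
    # as the article's estimates (~4 PB, ~1 EFlop/s).

    fraction, slowdown = 0.045, 83
    memory_tb, compute_pflops = 144, 0.5

    full_memory_pb = memory_tb / fraction / 1000                        # ~3.2 PB
    full_compute_eflops = compute_pflops / fraction * slowdown / 1000   # ~0.9 EFlop/s

    print(f"memory:  ~{full_memory_pb:.1f} PB")
    print(f"compute: ~{full_compute_eflops:.2f} EFlop/s")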

PS: A counter argument on the IBM simulation: http://www.popsci.com/technology/article/2009-11/blue-brain-...


> We already know the equations that we'd need to use to do general intelligence; it's just that they're not computable with finite computing power, so we'd have to do some approximations, and at present it's not realistic because the approximation schemes we know of would work too slowly.

If you do know the equations, please tell me, as all models that I know of have fundamental failures that prevent them from generalizing to arbitrary problems even without considering computation.


"when we're sitting around in 2030 with a million times more computing power at our fingertips than we have today, constructing a workable AI just isn't going to be that difficult of an engineering problem."

Sorry, but brute-forcing "creativity" is a P == NP problem. If you have a proof of P == NP then please submit it right away and collect your Turing Award.

"We already know the equations that we'd need to use do general intelligence, it's just that they're not computable with finite computer power, so we'd have to do some approximations, and at present it's not realistic because the approximation schemes we know of would work too slowly."

Unless there has been some MAJOR discovery within the past couple of days, this sentence is completely wrong. We don't have equations to build an AGI, because if we did, we would already have a working AGI. If we had these so-called equations, we would have a more fundamental understanding of humans, how they work, and how they "learn". I will now stop thinking about this ridiculous sentence before I get an aneurysm.

P.S. I would love it if you could prove me wrong with links to any academic papers.


P?=NP doesn't have anything to do with possibility - it has to do with feasibility. The point is, with sufficient hardware, we might be able to just do the NP problem.

Also, brute-forcing creativity is not P==NP. Brute-forcing is NP. If P=NP, then there's a way to do it without brute-forcing.
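
To illustrate the distinction with a toy example (subset sum, made-up numbers): brute force just enumerates candidates, so its cost grows exponentially with problem size and extra hardware buys a few more bits of problem; a P == NP proof would instead mean a polynomial shortcut exists.

    from itertools import combinations

    # Toy brute-force search: tries up to 2**n subsets in the worst case.

    def subset_sum_bruteforce(nums, target):
        for r in range(len(nums) + 1):
            for combo in combinations(nums, r):   # up to 2**len(nums) candidates
                if sum(combo) == target:
                    return combo
        return None

    print(subset_sum_bruteforce([3, 9, 8, 4, 5, 7], 15))   # (8, 7)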


Looking for a summary? Read this wiki entry: http://en.wikipedia.org/wiki/Why_the_future_doesn%27t_need_u...


Thanks! Very interesting. From the WP article:

"Martin Ford author of The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future [6] makes the case that the risk posed by accelerating technology may be primarily economic in nature. Ford argues that before technology reaches the point where it represents a physical existential threat, it will become possible to automate nearly all routine and repetitive jobs in the economy."

I find this quite likely too, at least on short or medium time scales. There will always be demand for highly skilled humans, maybe even after the machines are the new bosses. But if we don't make groundbreaking progress in human learning techniques, most people will have trouble learning these skills fast enough.

Then he writes further:

"In the absence of a major reform to the capitalist system, this could result in massive unemployment, plunging consumer spending and confidence, and an economic crisis potentially even more severe than the Great Depression. If such a crisis were to occur, subsequent technological progress would dramatically slow because there would be insufficient incentive to invest in innovation."

This sounds somewhat plausible, but I don't believe it. The trend is towards highly profitable mega-corporations. Governments, and with them most people, are becoming less powerful in economic terms; I think we can already see this effect very well. So there won't necessarily be a recession, as long as the rich find ways to spend their money - like flying to Mars.


Even the most trivial computer programs have lots of bugs (with VERY few exceptions [1]), and we're worrying about creating a super-brain that is actually smarter than we are ourselves?

And let's not forget the debugging, which is TWICE as hard as the coding ;-)

We may be able to simulate the hardware of the brain (using "biological hardware"), but the difficulty of programming the AI software is probably greatly underestimated...

[1] Some of the NASA software is probably as close to bug-free as we get, and check the required amount of planning, documentation and testing compared to the amount of actual code produced - http://www.fastcompany.com/magazine/06/writestuff.html


The human brain has many more bugs than any current piece of software.

The actual program will be small. The brain is clearly a learning agent in a task environment. The sensors and actuators are implementable; the only real question is what algorithm should process the input stimuli.


I think a relatively simple solution to this is written in the first paragraphs. We will more or less 'merge' our minds with computers, just to a further extent than we already do now. Nowadays a computer is merely a tool which helps us keep in touch with relatives, visualize ideas, and calculate stuff, but this bond will probably become much more intense in the future, when whole subroutines of our thinking will rely on artificial machines. This might only seem like a threat considering our 21st-century morality - but I think it will become widely accepted in the next century.


The future must be now. At least one of my subroutines of thinking already relies on an artificial machine. I call it my Google neuron; it's wired up directly to everything I don't know off the top of my head and fires whenever I feel unsure about something. Well, the latency is still a bit high, but I'm sure someone is working on that problem. ;)


ATTENTION: Book spoiler below!

SF author Vernor Vinge, who introduced the term "singularity", tried to come up with an idea for preventing this and other "out of the kid's basement" lethal threats to humanity in his latest book, Rainbows End. The "solution" in that book, though, is to put all of humanity under mind control.


Humanity may be doomed if it keeps innovating, but it's most certainly doomed if it stops.


I second this!



