
I'm not arguing that machines will be more efficient than human brains. An airplane isn't more efficient than a goose. But airplanes do fly faster, higher, and with more cargo than any flock of geese could ever carry.

Similarly, there is no contradiction between AI being less efficient than a human brain, and AI being preferable to humans because it can deal with data sets that are two or three orders of magnitude too large for any human (or even team of humans).



Even so, such an AI doesn't exist. All the AIs that exist today operate by fitting data: to perform a useful task, an AI has to have well-defined parameters and fit the data according to them. I'm not sure an AI that operates outside of these confines has even been conceived of.
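
To make that concrete, here is a minimal sketch of what I mean by "fitting data according to well-defined parameters". It's a toy illustration in plain numpy that I wrote for this comment, not any real system, but the shape is the same at any scale: a parameterized model, a loss, and an update rule.

    # Toy example of "fitting data": gradient descent on a two-parameter
    # linear model. The data and learning rate are made up for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, size=100)
    y = 3.0 * x + 1.0 + rng.normal(scale=0.1, size=100)  # data to fit

    w, b = 0.0, 0.0   # the well-defined parameters
    lr = 0.1
    for _ in range(200):
        err = (w * x + b) - y
        # Gradients of mean squared error with respect to w and b.
        w -= lr * 2 * np.mean(err * x)
        b -= lr * 2 * np.mean(err)

    print(w, b)   # converges close to the true values (3.0, 1.0)

Every AI system I know of, however large, has this shape; only the model and the data change.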

That an AI can outperform humans in every task has not been proven possible (to my knowledge), not even in theory. An airplane will fly faster, higher, and with more cargo than a flock of geese, but a flock of geese can reproduce, communicate with each other, digest grass, etc. An airplane will not outperform a flock of geese in every task, just in the tasks the airplane is optimized for.

I'm sorry, I confused the debate a little by talking about efficiency. My point was that there might be an inverse relationship between a machine's generality and its efficiency. This was my way of proposing a mechanism by which building a machine that outperforms humans in every task could be impossible. This mechanism, if it exists, could be sufficient to make such machines theoretically impossible, since at some point you would need all the energy in the universe to perform a task better than a specialized machine (such as an organism) does.

Perhaps this inverse relationship doesn't exist. The universe might conspire in a million other ways to make it impossible for us to build an AI that will outperform us in every task. The point is that "AI will outperform humans in any task" is far from inevitable.


> All the AIs that exist today operate by fitting data: to perform a useful task, an AI has to have well-defined parameters and fit the data according to them. I'm not sure an AI that operates outside of these confines has even been conceived of.

Such an AI has absolutely been conceived of. In Superintelligence: Paths, Dangers, Strategies, Nick Bostrom goes over ways such an AI could exist and sketches scenarios in which a recursively self-improving AI could "take off" and exceed human intellectual capacity on its own.

Moreover, we're already building such AIs (in a limited fashion). DeepMind recently made an AI that beats the human baseline on all 57 Atari games [1]. The AI wasn't given "well defined parameters". It was just shown the game, and it figured out, on its own, how to map raw screen input to actions, and which actions resulted in progress towards winning. Then the same AI did this over and over again, eventually beating all 57 games.
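
To give a flavor of what "wasn't given well-defined parameters" means, here's a minimal sketch of the reinforcement-learning setup behind agents like this: the agent only ever sees observations and a reward signal, never the rules of the task. This is tabular Q-learning on a toy world I invented for the comment; the real DeepMind agent is vastly more sophisticated, so treat it purely as an illustration of the setup.

    # Tabular Q-learning on a toy 1-D world: reward waits at the right end.
    # The agent learns a policy purely by trial and error.
    import random

    N_STATES = 6            # positions 0..5
    ACTIONS = [-1, +1]      # move left or move right
    ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

    def step(state, action):
        """Environment dynamics: the agent never sees this code,
        only the (next_state, reward, done) it returns."""
        nxt = max(0, min(N_STATES - 1, state + action))
        done = nxt == N_STATES - 1
        return nxt, (1.0 if done else 0.0), done

    for episode in range(500):
        s, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit current knowledge, sometimes explore.
            if random.random() < EPS:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda a: Q[(s, a)])
            s2, r, done = step(s, a)
            # Q-learning update: nudge the estimate toward
            # observed reward plus discounted future value.
            best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
            Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
            s = s2

    # The learned greedy policy heads right from every non-terminal state.
    print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)])

Nothing in the learning rule mentions the task. Swap in a different step() and the same code learns a different behavior, which is the sense in which this kind of agent generalizes.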

Yes, you can argue that this is still a limited example. However, it is an example that shows that AIs are capable of generalized learning. There's nothing, in principle, that prevents a domain-specific AI from learning and improving at other problem domains. The AI I'm conceiving of is a supersonic jet; this AI is closer to the Wright Flyer. But once you have a Wright Flyer, supersonic jets aren't that far away.

> That an AI can outperform humans in every task has not been proven possible (to my knowledge), not even in theory. An airplane will fly faster, higher, and with more cargo than a flock of geese, but a flock of geese can reproduce, communicate with each other, digest grass, etc. An airplane will not outperform a flock of geese in every task, just in the tasks the airplane is optimized for.

That's fair, but beside the point. The AI doesn't have to be better than humans at everything that humans can do. The AI just has to beat humans at everything that's economically valuable. When all the jobs get eaten by the AI, it's cold comfort to me that the AI is still worse than humans at, say, enjoying a nice cup of tea.

[1]: https://www.technologyreview.com/2020/04/01/974997/deepminds...


The second time around is easier. The hard part was evolution: it took billions of years and enormous resources and energy, but in a single run it produced nature and humans. AI agents can rely on humans to avoid the enormous cost of blind evolution, at least until they reach parity with us; after that they have to pay the price and do extreme open-ended learning (solving all imaginable tasks, trying all strategies, giving up on simple objectives).



