
> To suppose that superhuman AI (AI smarter than us) won't exist

Which is exactly what Kelly doesn't say. He says that the 'smarter' concept is ill-defined, and that our current fantasies of some universally superior AI galloping onto the scene and taking over everything may be just that: fantasies.




> He says that the 'smarter' concept is ill-defined

Which isn't the contradiction he claims it is. It just means that there are many different ways a future AI could be smarter than us; intelligence could be multi-dimensional.

But guess what: we can easily take that multi-dimensional input and find a formula that reduces it to a single scalar based on our practical valuation of those forms of intelligence (almost like an intelligence 'utility function' from economics), and the problem is solved. We're right back to a single ordered dimension for ranking intelligence.
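A minimal sketch of that reduction (the dimensions and weights here are invented purely for illustration, not a real psychometric model):

    # Hypothetical scalar "intelligence score": a weighted sum over whatever
    # dimensions of intelligence we happen to value.
    WEIGHTS = {"reasoning": 0.4, "planning": 0.3, "memory": 0.2, "language": 0.1}

    def intelligence_score(profile: dict) -> float:
        """Collapse a multi-dimensional intelligence profile into one scalar."""
        return sum(w * profile.get(dim, 0.0) for dim, w in WEIGHTS.items())

    human = {"reasoning": 1.0, "planning": 1.0, "memory": 1.0, "language": 1.0}
    ai = {"reasoning": 1.2, "planning": 0.9, "memory": 5.0, "language": 1.1}

    # Back to a single ordered dimension: "smarter" just means a higher
    # score under the weights we chose.
    print(intelligence_score(ai) > intelligence_score(human))  # True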

It was a really weak argument he put forward.

Another weak argument was the branching/fan pattern of various species. Yes, all living species are at the peak of evolution for their environment, but they weren't all pressured to evolve more intelligence. Some evolved strength, speed, or flight suited to their environment.

If natural selection instead began selecting only for intelligence (as humans searching for AGI will), then you could definitely rank all animals linearly on a single path of intelligence.


> It just means that there are many different ways a future AI could be smarter than us; intelligence could be multi-dimensional

A condensed way of saying precisely what Kelly says in the article. Allowing for the very real possibility that I am simply too dumb and am not grasping your point.

> but they weren't all pressured to evolve more intelligence

And it isn't claimed that they were. General evolution is used as an example of potential patterns in the evolution of various intelligences.


He attempted to use the multi-dimensionality of intelligence to make the following claim:

> Intelligence is not a single dimension, so “smarter than humans” is a meaningless concept.

This is poor reasoning. The fact that intelligence is multi-dimensional has no bearing on our ability to declare something smarter than us; it isn't at all meaningless. Yet on that basis he claims there will be no superhuman AI.

By analogy, he says: "you can't compare two football players because one may be stronger, while another is faster." So the concept of "better" is meaningless, and no player can be declared better.

My response is that this is absurd. A simple counter-example: a single player can be both stronger and faster, and thus clearly better.
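In other words, dominance on every dimension settles the comparison. A toy sketch of that check (the attributes and numbers are made up for illustration):

    # Pareto dominance: player a is "clearly better" than player b if a is
    # at least as good on every attribute and strictly better on at least one.
    def dominates(a: dict, b: dict) -> bool:
        keys = a.keys() | b.keys()
        return all(a.get(k, 0) >= b.get(k, 0) for k in keys) and \
               any(a.get(k, 0) > b.get(k, 0) for k in keys)

    p1 = {"strength": 9, "speed": 6}
    p2 = {"strength": 6, "speed": 9}
    p3 = {"strength": 9, "speed": 9}  # both strong and fast

    print(dominates(p3, p1), dominates(p3, p2))  # True True: p3 is clearly better
    print(dominates(p1, p2), dominates(p2, p1))  # False False: a genuine trade-off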


A third player is weaker, but faster. And smarter. Or tougher. Or more agile. More agile but not quite as smart. More persistent. Less predictable. And so on and so forth. Your 'meaningless' only has meaning because you apply it to a hugely simplified model.


> we can easily take that multi-dimensional input and find a formula that reduces it to a single scalar based on our practical valuation of those forms of intelligence (almost like an intelligence 'utility function' from economics)...

My original comment addressed that specific case.



