
I think part of the problem is that humans like to define their own intelligence in grandiose terms. Prior to their being solved, object identification, human-level speech recognition, handwriting recognition, machine translation and many other tasks were thought to require general intelligence. But once we got the machines doing them for us, we decided they weren't so hard after all. From this you can conclude one of two things:

1. We're on the wrong track, and don't understand our own intellects at all.

2. Human intelligence is just a collection of these same hacks, possibly ensembled by some relatively thin meta-algorithm.




#2 is definitely wrong. We aren't a collection of similar hacks; we are qualitatively different, because we include the ability to gain new heuristics and to cross-apply the ones we already have. No clever library of strung-together hacks will have those two properties. In the same way, an ant hill is not just a bunch of ants; it has so many emergent properties that thinking of it in terms of its components is a mistake.

We should, however, think more expansively about intelligence than just the sort humans have. The goal doesn't need to be straight mimicry of humans.


> In the same way, an ant hill is not just a bunch of ants; it has so many emergent properties that thinking of it in terms of its components is a mistake.

The thing about emergent behavior is that you just have to take enough entities and organize them the right way, and the behavior appears, unexpectedly and out of nowhere.

If it is really emergent, then nobody (and that includes me and you) has any idea how far we are from general intelligence.
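If you want a toy version of "organize enough simple entities the right way and behavior appears", Conway's Game of Life is the classic one. A minimal sketch (my own illustration, nothing rigorous): no rule mentions movement, yet a "glider" emerges and walks across the grid.

    import numpy as np

    def step(grid):
        # Count each cell's eight neighbours via shifted copies of the grid.
        n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0))
        # The entire rulebook: birth on 3 neighbours, survival on 2 or 3.
        return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)

    grid = np.zeros((12, 12), dtype=int)
    grid[1:4, 1:4] = [[0, 1, 0], [0, 0, 1], [1, 1, 1]]  # a "glider"

    for _ in range(8):  # the glider crosses the board, unprompted
        grid = step(grid)
    print(grid)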


An emergent property is precisely one that obtains unexpectedly from simple components. For instance, it is supposed by some that the universe truly consists of nothing more than electrons and quarks and what have you which interact in relatively simple ways, and that subjective human experience emerges from that. We then have a situation that is easier for us to reason about using higher level concepts, but there isn't a chapter in the laws of physics called "subjective human experience". That's in the book called "diverse applications of the basic laws of physics".

In any case, I dispute your facts too. I don't think we have the ability to cross-apply our heuristics. I can solve a lot of mathematical problems fairly accurately when I'm riding my bike, but put me in a maths class and I'm stuffed. I don't have independent access to those hacks.


A neural network that can train neural networks is capable of learning new heuristics. That is a thin meta-algorithm on top of a standard function approximation algorithm.

EDIT: I should also add that reinforcement learning more directly falsifies your claim.
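To make "thin meta-algorithm" concrete, here's a minimal Reptile-style sketch: a two-line outer loop that nudges a shared starting point across tasks, sitting on top of ordinary gradient descent. The task family and hyperparameters are toy choices of mine, not a claim about how brains do it.

    import numpy as np

    rng = np.random.default_rng(0)
    W_feat = rng.normal(size=(1, 64))            # fixed random feature basis
    b_feat = rng.uniform(0, 2 * np.pi, 64)

    def make_task():
        # Each task: fit y = a*sin(x + b) from 20 samples.
        a, b = rng.uniform(0.5, 2.0), rng.uniform(0, np.pi)
        x = rng.uniform(-3, 3, size=(20, 1))
        return x, a * np.sin(x + b)

    def inner_train(w, x, y, lr=0.01, steps=50):
        # Standard function approximation: least squares by gradient descent.
        feats = np.sin(x @ W_feat + b_feat)
        for _ in range(steps):
            grad = 2 * feats.T @ (feats @ w - y) / len(x)
            w = w - lr * grad
        return w

    w_meta = np.zeros((64, 1))
    for _ in range(200):                           # the "thin" meta-loop
        x, y = make_task()
        w_task = inner_train(w_meta.copy(), x, y)  # learn this task's heuristic
        w_meta += 0.1 * (w_task - w_meta)          # nudge the shared start point

The inner loop is a bog-standard learner; only the outer loop accumulates reusable structure across tasks.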


> we include the ability to gain new heuristics and to cross apply the ones we already have.

Maybe our collection of hacks includes hack-generating hacks (which, of course, cannot work reliably).


None of the problems you mention is actually solved though. They're all things that work sort of, some of the time, with caveats about how you define "work." They work well enough to be useful, but not well enough to argue we're converging on human-level intelligence.


Optical illusions (there are also physical ones) are often demonstrations that the problems aren't solved in humans either; they're just "things that work sort of, some of the time, with caveats about how you define 'work'".

Even more so if you include reasoning illusions, like people being more scared of catching a plane than of driving a car, or thinking that a lotto ticket is a good investment.
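The lotto one is easy to check with a back-of-envelope expected value (the odds, prize and price below are made-up round numbers, not any real lottery's figures):

    odds = 1 / 14_000_000   # made-up jackpot odds
    jackpot = 5_000_000     # made-up prize, in dollars
    ticket = 2              # made-up ticket price, in dollars

    ev = odds * jackpot - ticket
    print(round(ev, 2))     # -1.64: on average you lose most of the ticket price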

Human intelligence doesn't really meet intuitive definitions of human intelligence. But it does work well enough as long as you ignore all the times it doesn't.


None of those tasks has been solved fully at human level. Subtasks and datasets? Yes.


That isn't always true. Face recognition now exceeds human-level performance. GoogLeNet's image identifier is only 1.7% worse than this guy's performance:

http://karpathy.github.io/2014/09/02/what-i-learned-from-com...

Given that the author is writing a blog about AI, it's reasonable to assume he's above average in intelligence and knowledge, and therefore likely better than the average human at this task. But even if you don't make that assumption, object identification is within spitting distance of human-level performance.

The same is true for many of these other tasks.


No, it is not. Learning new object classes from a single image or a few images is very hard. See

http://www.sciencemag.org/content/350/6266/1332.short

Machine translation is a joke.

Put any comment on this page through Google translate to another language and back to English and see what you get.

I did a small part of yours. Hardly human-level, for just a small, simple sentence.

> But even if you do not make this assumption, identifying the object involves spitting the distance from the performance of the human level.


The fact that one-shot learning is still hard does not falsify what I said. Computers are now better than humans at many tasks that were once thought to be the domain of general intelligence. How they were trained is not particularly significant.


It depends on what you mean by a task. Machines are not universally good at object detection, because they fail in cases where there is too little data. We can't magically wish nonexistent data into existence (yet humans do quite well on those data-scarce tasks).


Yeah, I'm happy to accept that humans are still better at generalizing from scant data. But that is something that's being actively worked on, and progress is being made.

https://en.wikipedia.org/wiki/One-shot_learning
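One common recipe, for the curious (a toy sketch with a stand-in encoder, not the method from the article): embed inputs with a pretrained network, then classify a novel class from its single labelled example by nearest neighbour.

    import numpy as np

    rng = np.random.default_rng(42)
    P = rng.normal(size=(64, 32))  # stand-in for a pretrained feature extractor

    def embed(x):
        v = x @ P
        return v / np.linalg.norm(v)

    # "Support set": exactly one labelled example per novel class.
    support = {"cat": embed(np.ones(64)), "dog": embed(-np.ones(64))}

    def classify(x):
        q = embed(x)
        return max(support, key=lambda label: float(q @ support[label]))

    print(classify(np.ones(64) + rng.normal(scale=0.1, size=64)))  # -> cat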


Agreed. I would love to see quick progress in that too (that is one of my projects).


There have been lots of articles recently about reaching "human-level" performance; you should know better than to believe them. The tests are on constrained datasets and ignore many factors. You mention face recognition: realize that for every image the program correctly recognizes, there also exists an image that you would perceive as identical to it, yet on which the system fails.
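In case it isn't clear what that looks like, here's the standard one-step construction (FGSM). The toy linear "model" and the epsilon are placeholders of mine so the sketch runs self-contained; on a real trained network a perturbation this small routinely flips the label while the two images look identical to you.

    import torch
    import torch.nn.functional as F

    model = torch.nn.Linear(784, 10)    # placeholder for a trained classifier
    x = torch.rand(1, 784, requires_grad=True)
    y = model(x).argmax(dim=1)          # attack the model's current prediction

    loss = F.cross_entropy(model(x), y)
    loss.backward()

    eps = 0.01                          # far below what a human would notice
    x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()
    print(model(x).argmax().item(), model(x_adv).argmax().item())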


Yes, I'm aware of adversarial attacks on neural networks. That doesn't falsify the hypothesis. There are instances where you will fail to recognize a human that those NNs will not.


The brain for sure has a collection of hacks. But the meta-algorithm is pretty strong too.



