This example, and others like it, points to the central weakness of neural networks for image recognition: no matter how much data you feed them, they never really develop concepts or abstractions of what the objects they are classifying actually represent or mean.

This is an excellent point, but it demands an answer to the question "what does 'really mean' mean?" What are all the ways a human can determine what a picture "really means", and which of those methods can be used on a given picture?

We know dogs have certain shapes and goats have certain shapes. Other entities have different characteristics. We can explain how we think we reach conclusions. How we actually reach those conclusions is likely different and may or may not involve "pattern matching on steroids" in a given case - what's more definite is that we try to reconcile our conclusions across examples so that they form a single consistent picture of the world. Is that what determining "what a picture really means" amounts to?




Meaning exists in relationships, and it's clear that the current generation of AI learns some of them. An example is word2vec, which can learn that king - man + woman = queen, and simultaneously king - man + boy = prince, etc.
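
For the curious, here is a minimal sketch of that analogy arithmetic in Python, assuming gensim is installed and using its downloadable pretrained GloVe vectors (GloVe rather than word2vec proper, but the vector arithmetic works the same way):

    # Minimal sketch (not from the original comment): the word2vec-style
    # analogy "king - man + woman ~ queen" using gensim's pretrained vectors.
    # Assumes gensim is installed; the first run downloads the GloVe
    # vectors (roughly 130 MB) via gensim's downloader.
    import gensim.downloader as api

    model = api.load("glove-wiki-gigaword-100")  # pretrained word vectors

    # most_similar performs the vector arithmetic (king - man + woman) and
    # returns the nearest vocabulary words by cosine similarity.
    print(model.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
    # 'queen' is typically the top result

    print(model.most_similar(positive=["king", "boy"], negative=["man"], topn=3))
    # 'prince' typically appears near the top

The exact scores don't matter; the point is that directions in the embedding space encode relational structure.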

The current generation of image recognition is really missing an understanding of physics and 3D space. There's no understanding of what would happen if a dog moved its head around.

The next generation of algorithms might fix this. Some people are excited about "capsule networks", which are supposed to learn features that stay recognizable when an object is rotated significantly.


I haven't seen any follow-ups on capsule networks since their big splash half a year ago. I'm guessing follow-up projects have a research latency of a year.


We're going to get general AI the same way we got natural intelligence: multi-sensory agents existing with agency, instincts, and guides in real 3D space. I cannot conceive of any other way to understand things deeply. Babies run experiments. How does an AI play with a cat? How does it ever understand the concept of a cat's mind without ever playing with one? If we want our AI to have conceptualization as we understand it, we need AI to have similar sensory inputs and similar arrays of potential actions. And sure, we could copy the code from one AI to the next to have identical minds at t0, but I struggle with the ethics of that, and really I'd rather have diversity in AIs than a bunch of clones running around thinking with the same types of thought patterns.

The problem I have once I think about it is that this line of thinking leaves me much less sure of the nature of my own existence. Do we first let the mind of an AI develop an appreciation of humanity before letting it know that it is an AI? Seems like it would solve a lot of possible problems, since Gandhi wouldn't take the murder pill.

http://lesswrong.com/lw/2vj/gandhi_murder_pills_and_mental_i...



