
This IEEE report is actually a very good collection of topics and research currently being done in this field. It discusses specific problems and topics like the neocortex, IIT [1], neuromorphic engineering [2], pose cells, SLAM [3], and more.

For anyone interested in research being done in AI, ML, consciousness, etc., these are great articles written by actual scientists and researchers doing the work (as opposed to the hyperbolic articles and tweets you see online these days about AI).

[1] https://en.wikipedia.org/wiki/Integrated_information_theory

[2] https://spectrum.ieee.org/semiconductors/design/neuromorphic...

[3] https://spectrum.ieee.org/robotics/robotics-software/why-rat...




I'd like to add this article (with a somewhat cheeky title): "The impossibility of intelligence explosion"

https://medium.com/@francois.chollet/the-impossibility-of-in...

It's written by François Chollet, creator of the Keras deep learning framework. The article argues that intelligence and the environment are interrelated. Some of the points also appear in the IEEE Special Report (sensorimotor integration), and they connect to the recent push towards simulation in AI: Atari, OpenAI Gym, AlphaGo, self-driving cars, etc. It's a new front of development, where simulation will create playgrounds for AI.

The main point is that intelligence develops within an environment, and is a function of the complexity of the environment and the task at hand. There is no general intelligence, or intelligence in itself, only task-related intelligence. An intelligence explosion can't happen in a void (or in a brain in a vat, or in a supercomputer that has no interface to the world and can't act on it). The author concludes that AGI is impossible given these environment and task limitations.

An interesting take, because we focus too much on reverse-engineering "the brain" as if it exists in itself, outside the environment. We should instead learn about meaning and behaviour from the environment and from the structure of the problems the agent faces. Meaning is not "secreted" by the brain.


A related 'trend' in Cognitive Science is called Embodied Cognition [1]. Intelligence develops together with the body it inhabits and, as you mention, the environment that body lives in.

Maybe dolphins are as 'intelligent' as we are, but having fins instead of hands and living in a marine environment simply makes it impossible for them to invent fire, printing presses, and automobiles.

[1] https://en.wikipedia.org/wiki/Embodied_cognition


Far-fetched conclusions based on a misinterpretation of the "no free lunch" theorem. The theorem doesn't forbid an intelligence that is universal within our own universe, since our universe doesn't present us with a uniform distribution over all possible problems.
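For anyone unfamiliar with what "no free lunch" actually says: averaged over a *uniform* distribution of all possible objective functions, every search strategy performs identically. A toy sketch of that symmetry (my own illustration, not from the article): enumerate every objective function on a three-point search space and compare two different fixed query orders.

```python
from itertools import product

# Toy "no free lunch" check: averaged over ALL objective functions on a
# tiny search space, two different fixed search orders perform identically.
X = range(3)                                  # search space: 3 points
functions = list(product((0, 1), repeat=3))   # all 8 functions f: X -> {0, 1}

def best_after(order, f, k):
    """Best objective value seen after querying the first k points of `order`."""
    return max(f[x] for x in order[:k])

order_a = [0, 1, 2]   # search strategy A's query order
order_b = [2, 0, 1]   # search strategy B's query order

for k in (1, 2, 3):
    avg_a = sum(best_after(order_a, f, k) for f in functions) / len(functions)
    avg_b = sum(best_after(order_b, f, k) for f in functions) / len(functions)
    assert avg_a == avg_b   # identical average performance at every budget k
```

The theorem only bites under that uniform average; the parent's point is exactly that our universe's problems are nothing like uniformly distributed, so the theorem doesn't rule out broadly capable intelligence here.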


I tend to believe there's a hole in François' argument: a sufficiently powerful computer could simulate an environment internally, in which the AI could thrive.


A hole? In that Swiss cheese? Hardly surprising. He uses Chomsky's hypothetical language-acquisition device to support his claim that "there couldn't be general intelligence", while there is a provably optimal intelligent agent (AIXI) along with computable approximations of it. He uses self-improvement trends established by entities which aren't intelligent agents (military empires, civilizations, mechatronics, personal investing) to predict what the self-improvement of an intelligent agent would look like. It's a prediction on the level of "London streets will be buried under horse manure".

I am not an extreme singularitarian. There are hard physical limits that make exponential progress and a singularity impossible. But bad arguments are bad arguments, no matter how appealing the conclusions.



