Your statement about inductive transfer is false. There are many vision systems, for example, that use transfer learning (as it is known in our community) to apply knowledge gained from one task to another. Moreover, many of these approaches are specific to vision -- they can't easily be applied in other domains.
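To be concrete about what transfer learning means here, below is a minimal sketch of the standard fine-tuning pattern, assuming PyTorch and torchvision are available (the model choice, learning rate, and num_classes are placeholder values, not from any particular system): a network pretrained on one task (ImageNet classification) is reused on a new task by freezing its backbone and training only a new output layer.

    import torch
    import torch.nn as nn
    from torchvision import models

    # Load a network whose weights were learned on ImageNet (the source task).
    model = models.resnet18(weights="IMAGENET1K_V1")

    # Freeze the pretrained backbone so its knowledge is reused, not retrained.
    for param in model.parameters():
        param.requires_grad = False

    # Replace the final classifier with a fresh layer for the target task.
    # num_classes is a placeholder for the new task's label count.
    num_classes = 10
    model.fc = nn.Linear(model.fc.in_features, num_classes)

    # Only the new layer's parameters are optimized during training.
    optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)

The point of the pattern is that the frozen backbone carries what was learned on the first task into the second; only the small new head is trained from scratch.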
The rest of your comments about reasoning systems are either part of theoretical computer science (e.g., most of that list of undecidable problems) or in the realm of philosophy/metaphysics/80s-style AI. The former is an interesting area, but much closer to math than most other areas of CS. I call the latter philosophy/metaphysics because these are often questions that are interesting from a purely academic viewpoint but utterly useless for making progress on real tasks. This was the big problem with 80s-style AI: people hoped they could build general systems that could "reason" about how to derive algorithms for particular tasks. Knowledge bases were supposed to be steps in this direction.
What the community learned is that this is not a valid approach, in part because it doesn't take into account the enormous amount of data people have access to from birth to adulthood (not to mention the fact that we can interact with our environment and see the results of our actions). There doesn't seem to be a good way to provide a computer system with this kind of information.
There are also strong biological arguments against this idea. For example, a large fraction of human brain cells are devoted exclusively to processing visual information. This is on top of the significant amount of visual processing done by our eyes and the optic nerve. There are similar systems in the brain dedicated to speech and language processing.
All of this suggests that a general reasoning system cannot hope to solve the challenging problems in these different domains.