Well said. Furthermore, data on the joy of writing a birthday card (to borrow one of your examples) can be useful in other tasks (such as determining what to write).
Typical machine learning problems deal with isolated training sets and isolated problems. This approach seems strange to me; in the case of neural networks, it is somewhat analogous to a newborn child deprived of all senses, left with only a limited training set to make up their world and good/bad feedback from the loss gradient. How can one expect this hypothetical newborn to learn any meaningful representation of the world from which our machine learning problems are derived?
I think the first step towards realizing anything like "Hollywood General AI" will be a system that spends an early portion of its existence ingesting a universe of contextual data before it is ever presented with a problem to solve (at which point it can draw on seemingly unrelated information to do something like handwriting). Andrew Ng's work on self-taught learning (built on transfer learning) is particularly relevant here, but I think those ideas could be taken a lot further.
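To make the self-taught-learning recipe concrete: learn a representation from plentiful unlabeled "context" data first, then bring only that representation to a small supervised task posed later. Here is a toy sketch of that idea, with PCA standing in for the unsupervised feature learner and synthetic data throughout; every name below is illustrative, not from Ng's papers.

```python
# Toy sketch of self-taught learning: unsupervised feature learning on
# unlabeled data, then a supervised task solved in the learned feature space.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Step 1: a large pool of unlabeled data (the "universe of contextual data").
unlabeled = rng.normal(size=(5000, 100))

# Step 2: learn a representation without any task labels.
feature_learner = PCA(n_components=10).fit(unlabeled)

# Step 3: a much smaller labeled problem, presented only afterwards.
X_task = rng.normal(size=(50, 100))
y_task = (X_task[:, 0] > 0).astype(int)

# Step 4: solve the task using the features learned beforehand.
clf = LogisticRegression().fit(feature_learner.transform(X_task), y_task)
print(clf.score(feature_learner.transform(X_task), y_task))
```

The point of the sketch is only the ordering: the representation exists before the problem does, which is the part I think could be pushed much further.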
General AI has always been fascinating and exciting to me. Lately, though, I feel it will serve us humans better to continue along the path of ML.
We want an AI trained with ML to keep focusing on flying the plane. We may not want an AGI pilot who, just like a human, can get bored and distracted playing games.