I'll agree that this is interesting, but it seems like a lot of people in this thread miss the point: we're working with multi-layer tools now, which lets us model multi-layer processes. The code generation as it stands is obviously a toy, but what happens if we actually think about the real processing layers?
Take this code processing example and front it with a parser that generates an AST. For now, an actual parser for a single language; maybe later, a network trained to be a parser. The AST is then fed to our network. What could we get out of the AST network? Could we get interesting static analysis out of it? Could it tell us the time and/or space complexity? Perhaps we discover that we need other layers to perform certain tasks.
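To make that concrete, here's a rough sketch of what "front it with a parser and feed the AST to a network" might look like, using Python's standard ast module. The flattening is deliberately crude, and the ComplexityClassifier at the end is purely hypothetical, a stand-in for whatever AST-consuming model we might train:

    import ast

    source = """
    def find(needle, haystack):
        for i, item in enumerate(haystack):
            if item == needle:
                return i
        return -1
    """

    # Real parser for a single language: Python source -> AST.
    tree = ast.parse(source)

    # Crude encoding: flatten the tree into a sequence of node-type tokens
    # (ast.walk makes no ordering guarantee, so a real encoding would need
    # something more structured, e.g. a tree traversal with parent links).
    tokens = [type(node).__name__ for node in ast.walk(tree)]
    print(tokens)  # e.g. ['Module', 'FunctionDef', 'arguments', 'For', ...]

    # Hypothetical downstream network -- not a real library:
    # model = ComplexityClassifier.load("ast-static-analysis.pt")
    # print(model.predict(tokens))  # e.g. "O(n) time, O(1) space"

The point isn't this particular encoding; it's that the parser layer hands the network a structured representation instead of raw characters, and the interesting question is what the layers above it could then learn to do.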
This, of course, has parallels in language processing. Humans don't just go in a single (neural) step from excitation of inner ear cells ("sound") directly to "meaning". Cog sci and linguistics work has broken out a number of discrete functions of language processing. Some have been derived via experiment, some observed via individuals with brain lesions, others worked out by studies of children and adult language learners. These "layers" provide their own information and inspiration for building deep learning systems.