
Look, I think the main disagreement here is that you don’t seem to consider spec inference as part of the program synthesis process, whereas I do. Your position may very well be correct from an academic point of view.

From my perspective, spec inference from an informal spec is the main thing to solve. For formal specs, I'd just be programming in a declarative language to create the formal spec.

Spec by example won’t scale, because you can’t provide examples across the entire domain for apps of real-world complexity.

Once spec inference is solved, you are just left with a search problem. I understand that the search space is freakin huge, but I’d still say the latter problem is easier to solve than the former.
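To make the "search problem" concrete, here is a hypothetical sketch (not anyone's actual synthesiser): enumerate every program in a tiny expression grammar and keep the first one that matches a set of input/output examples acting as the spec. All names and the grammar are invented for illustration; real systems prune this space aggressively.

```python
def programs(depth):
    """All expression trees up to `depth`, as nested tuples over a toy grammar."""
    terms = ["x", 0, 1, 2]
    if depth == 0:
        return terms
    subs = programs(depth - 1)
    return subs + [(op, l, r) for op in ("+", "*") for l in subs for r in subs]

def evaluate(prog, x):
    """Interpret an expression tree at input x."""
    if prog == "x":
        return x
    if isinstance(prog, int):
        return prog
    op, l, r = prog
    a, b = evaluate(l, x), evaluate(r, x)
    return a + b if op == "+" else a * b

def synthesise(examples, max_depth=2):
    """Return the first enumerated program consistent with all examples."""
    for prog in programs(max_depth):
        if all(evaluate(prog, x) == y for x, y in examples):
            return prog
    return None

# Three examples of f(x) = 2x + 1 suffice here, but even this toy
# grammar already yields thousands of candidate programs at depth 2.
print(synthesise([(1, 3), (2, 5), (5, 11)]))
```

Even in this toy setting the space grows doubly exponentially with depth, which is the scaling problem the parent comment alludes to.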

And I’d guess that the problem of inferring a spec from an informal description is what requires AGI.

I hope this clarifies my POV. I don’t think we disagree, we just have different perspectives.




Thank you for a level-headed and well thought-out response! I'm glad to see that our communication isn't "ratcheting" to more and more forceful forms.

Now. I think the root of our disagreement was the necessity or not of AGI for various kinds of program synthesis, which I think we've probably pared down to program synthesis from an informal specification, particularly a natural language one.

I don't agree that AGI is necessary for that. I think that, as with many AI tasks, such a complete and uncompromising solution can be avoided and a more problem-specific solution found instead.

In fact, we already had almost a full solution to the problem in 1968, with SHRDLU [1], a program that simulated a robotic hand directed by a human user in natural language to grasp and rearrange objects. This was in the strict confines of a "blocks world", but its capability of interpreting its user's intent and accurately translating it to actions in its well-delineated domain remains unsurpassed [2]. Such a natural language interface could well be implemented for programming in bounded domains, for example to control machinery or run database queries. This capability remains largely unexploited because the trends in AI have shifted and everybody is doing something else now. That's a wider-ranging conversation, though.

My main point is that a fully intelligent machine is not necessary to communicate with a human in order to carry out useful tasks. A machine that can correctly interpret statements in a subset of natural language suffices. This is technology we could be using right now; only nobody has the, let's say, political will to develop it, because it's seen as parochial, despite the fact that its capabilities are beyond those of modern systems.
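To illustrate the flavour of a bounded-domain natural language interface (nothing like SHRDLU's actual sophistication), here is a toy sketch: commands in a tiny restricted-English subset are parsed into actions on a blocks world. The world representation, command pattern, and block names are all invented for illustration.

```python
import re

# Toy blocks world: maps each block to what it sits on.
world = {"a": "table", "b": "table", "c": "a"}

def interpret(sentence):
    """Map a command in a restricted English subset to a world update."""
    m = re.fullmatch(r"put (\w+) on (?:the )?(\w+)", sentence.lower().strip())
    if not m:
        return "I don't understand."
    block, dest = m.groups()
    if block not in world:
        return f"I don't know about '{block}'."
    if dest != "table" and dest not in world:
        return f"I don't know about '{dest}'."
    if any(on == block for on in world.values()):
        return f"I can't move {block}: something is on top of it."
    world[block] = dest
    return "OK."

print(interpret("Put b on c"))   # nothing is on b, so the move succeeds
print(interpret("Put a on b"))   # refused: c sits on a
```

The point is not the (trivial) parsing, but that within a well-delineated domain, a small grammar plus a world model already yields a usable natural language interface, with no general intelligence anywhere in sight.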

As to the other kind of informal specification, by examples or by demonstration: your claim that it won't scale is not true. I mentioned earlier one-shot learning of recursive hypotheses without an example of the base case [3]. To clarify, the reason this is an important capability is that recursive programs can be much more compact than non-recursive ones while still representing large, even infinite, sets of instances. In fact, for some program learning problems, only recursive solutions can be complete solutions (this is the case for arithmetic and grammar learning, for example). The ability to learn such solutions from a single example should, I think, conclusively address concerns about scaling.
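To show why recursive hypotheses sidestep the scaling objection, here is a sketch (not an ILP learner, just the shape of a learned hypothesis): a two-clause recursive definition, written in Python rather than Prolog for convenience, that decides unboundedly many instances no training example ever mentioned.

```python
def last(xs):
    """Recursive hypothesis, Prolog-style:
         last([X])   = X        (base clause)
         last([_|T]) = last(T)  (recursive clause)
    """
    head, *tail = xs
    return head if not tail else last(tail)

# A single positive example of the kind a one-shot learner might see ...
assert last([1, 2, 3]) == 3

# ... yet the same two clauses cover arbitrarily long unseen instances:
assert last(list(range(500))) == 499
```

A non-recursive hypothesis would need separate cases (and separate examples) for each list length; the recursive one covers the infinite family at once, which is why one example can suffice.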

To be fair, this is a capability that was only recently achieved by Inductive Logic Programming (ILP) systems, a form of logic program synthesis (i.e. synthesis where the target language is a logic programming language, usually but not necessarily Prolog). The Gulwani survey I linked mentions this recent advance only in passing. But ILP systems in general have excellent sample complexity and can routinely generalise robustly from a handful of examples (in the single digits), and have been doing so since the 1990s.

The cost of searching a large hypothesis space is, indeed, an issue. There are ways around it, however. Here I have to toot my own horn and mention that my doctoral research is exactly about logic program learning without search. I'd go as far as to say that what's really been holding program synthesis back is the ubiquity of search-based approaches, and that program synthesis will be solved conclusively only when we can construct arbitrary programs without searching. But you can file that under "original research" (I mean, that's literally a description of my job). Anyway, no AGI is needed for any of this.

So, I actually understand your earlier skepticism, along the lines of "if all this is true, why haven't I heard of it?" (well, you asked why I still have a job, but it's more or less the same thing). The answer remains: because it's not the trendy stuff. Trends drive both industry and research directions. Right now the trend is deep learning, so you won't hear about different approaches to program synthesis, or even about program synthesis in general. There's nothing to be done about it. Personally, I just grin and bear it.

_________________

[1] https://en.wikipedia.org/wiki/SHRDLU

[2] For example, in the demonstration conversation quoted by Wikipedia, note the "support supports support" sentence.

[3] This is already growing too long so I won't include examples, but let me know if you're curious for some.



