Hacker News

It doesn't need to be good at solving the problem. It only needs to be good at translating "if the unknown x is divided by 3, it has the same value as if I subtracted 9 from it" into "x/3 == x - 9 && x is a real number". The formal-methods tool will do the rest.
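To make the translation concrete, here is a minimal sketch in plain Python (no SMT solver involved; in practice a solver such as Z3 would find the satisfying assignment automatically, so the algebra below is worked out in a comment and only the constraint itself is checked):

```python
from fractions import Fraction

# Constraint produced by the translation: x/3 == x - 9, with x a real number.
# An SMT solver would discover the model itself; by hand:
#   x/3 = x - 9  =>  x = 3x - 27  =>  2x = 27  =>  x = 27/2
x = Fraction(27, 2)
assert x / 3 == x - 9  # the constraint holds for x = 27/2
print(x)  # 27/2
```

An actual solver query would instead declare `x` as a real-valued constant, assert the equation, and ask the tool whether the constraints are satisfiable and for a model.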

Note that if the LLM gets the implicit assumptions wrong, the solution will be unsatisfactory, and the query can be refined. This is exactly what happens with actual human experts, as per the anecdote I shared in [3]. So the LLM can replace some of the human-in-the-loop work that makes formal-methods tools so hard to use. Humans are good at explaining a problem in human language, but have difficulty formulating it in ways a formal tool can deal with. Other humans, i.e. consultants, help with formalizing such problems in, e.g., SMT. We could skip some of that, and make formal-methods tools much more accessible.




Your example isn't ambiguous, and if it were, LLMs wouldn't be better at choosing the right interpretation.


The problem is not that writing input for formal-methods tools is syntactically tricky. The problem is that it is hard to produce the actual semantic content of the input. Humans don't need help with that, especially not the kind of human who is capable of authoring that content. It's much more technical than stuff like "Make me a website with a blue background" or some crap like that. The potential for an LLM to mistranslate the English input is probably unacceptable.




