It is in writing the code that is supposed to implement my idea that I discover the idea's many flaws.
Sending that idea to an LLM (in the absence of AGI) seems like a great way to find out about those flaws too late.
Otherwise, specifying an application in enough detail to obtain the same effect is essentially coding, just in natural language, which is less precise.