> But lots of tasks

Do you have good examples of tasks for which a dubious verbal response would be an acceptable outcome?
By the way, I noticed:
> AI
Do not confuse LLMs with general AI. Notably, general AI has also been deployed in systems where critical failures would be intolerable, i.e., systems made to be reliable, or made part of a process that is reliable overall.