LLMs may be on the way to AGI, but they are still laughably bad at logical reasoning.
Any remotely interesting coding task is at least somewhat novel and requires some reasoning. So far, LLMs don't seem to be very good at handling things they haven't seen before.