I'm still not convinced it isn't just doing approximate retrieval of reasoning chains, self-triggered to pull in more chains that maximize its goal. I'm seeing a lot of comments from other SWEs using it for non-trivial tasks where it fails, but just tries harder to look like it's problem-solving. Even with more context and documentation, it misses details an experienced SWE would pick up quickly.