AI has a hard time working with code that humans would consider hard to maintain and hard to extend.
If you give AI a set of tests to pass and turn it loose with no oversight, it will happily spit out 500k LOC when 500 would do. And then it will have a very hard time when you ask it to add some functionality.
AI routinely writes code that is beyond its ability to maintain
and extend. It can’t just one shot large code bases either, so any attempt to “regenerate the code”
is going to run into these same issues.
> If you give AI a set of tests to pass and turn it loose with no oversight, it will happily spit out 500k LOC when 500 would do. And then it will have a very hard time when you ask it to add some functionality.
I've been playing around with getting the AI to write a program, where I pretend I don't know anything about coding, only giving it scenarios that need to work in a specific way. The program is about financial planning and tax computations.
I recently discovered AI had implemented four different tax predictions to meet different scenarios. All of them incompatible and all incorrect but able to pass the specific test scenarios because it hardcoded which one to use for which test.
This is the kind of mess I'm seeing in the code when AI is left alone to just meet requirements without any oversight on the code itself.
None of that matters of its not a person writing the code