"Understanding the code" might not be that big of a deal as you might think -- we have this problem today already. A talented coder might leave the company and the employer may not be able to hire a replacement who's as good. Now they have to deal with some magic in the codebase. I don't hear people giving advice not to hire smart people.
At least with AI, you can (presumably) replicate the results if you re-run everything from the same state.
There's also a very interesting paragraph in the paper (I'm in no position to judge whether it's valid or not) that touches on this subject, but with a positive twist:
> Interpretability. One major advantage of code generation models is that code itself is relatively interpretable. Understanding the behavior of neural networks is challenging, but the code that code generation models output is human readable and can be analysed by traditional methods (and is therefore easier to trust). Proving a sorting algorithm is correct is usually easier than proving a network will sort numbers correctly in all cases. Interpretability makes code generation safer for real-world environments and for fairer machine learning. We can examine code written by a human-readable code generation system for bias, and understand the decisions it makes.
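To make the "analysed by traditional methods" point concrete, here's a minimal sketch of what that looks like in practice. The `generated_sort` function is a hypothetical stand-in for model output (not anything from the paper): you can read it line by line, and you can hammer it with a quick property test against a trusted reference, which is the kind of check you can't run directly against a network's weights.

```python
import random

def generated_sort(xs):
    """Hypothetical model-generated sort (here: a simple insertion sort)."""
    result = []
    for x in xs:
        i = 0
        while i < len(result) and result[i] < x:
            i += 1
        result.insert(i, x)
    return result

# "Traditional methods": read the code above, then sanity-check it with a
# property test over random inputs against Python's built-in sorted().
for _ in range(1000):
    xs = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
    out = generated_sort(xs)
    assert out == sorted(xs), f"wrong result for {xs}: {out}"

print("all checks passed")
```

That kind of inspection plus testing is obviously not a proof, but it's a lot more tractable than trying to verify the behavior of the model that produced the code.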
> Now they have to deal with some magic in the codebase. I don't hear people giving advice not to hire smart people.
People do advise against hiring people who write incomprehensible code.
Yeah, every now and then you run across some genius with sloppy code style, and you have to confine them to a module you'll mark "you're not expected to understand this" when they leave, because they really are that much of a genius. But usually the smart people are smart enough to write readable code.