Some of your points stand. In the iron triangle of speed, quality, and scope, though, it turns out that quality and speed are linked: quality is a prerequisite for speed.
So it seems more akin to making meringue with yolks. Eventually it might work, but if you knew what you were doing and cared, it would be done better, faster.
The criticism though of losing sight of the goal is valid. That happens.
"Quality" for scientists does not mean the same specific things as "quality" for a software developer.
The scientist's software has a different goal, so the definition of what counts as "quality" is different.
The article illustrates that: a lot of "software built by following good software-developer practices" ends up being of bad quality for the job, and ends up wasting a lot of time.
Another aspect is the context: the iron triangle was also built for a specific context. Of course code full of very flexible functions will have problems after 5 years of development and usage, which will lead to a drastic decrease in speed. But scientific code should not be used 5 years later. Scientific code exists to test hypotheses; once the article is published, you should not reuse that code, because, by construction, it is full of hypothesis tests that have since been shown to be dead ends.
So, the reason "speed" is related to "quality" is different in science.
> "Quality" for scientists does not mean the same specific things as "quality" for a software developer.
I disagree here. I believe quality is intrinsic to the product. There may be different attributes of "quality" that one person values more than another, but the intrinsic quality is the same. An over-engineered solution is rarely good: don't use a powerboat to cross a small swimming pool, and don't use a shoddy makeshift raft to cross a giant lake.
> The scientist's software has a different goal, so the definition of what counts as "quality" is different.
I disagree here, but I think I see your point. The general goal for everyone (the goal of software) is to accomplish some task. Software is fundamentally just a tool. Software for the sake of software is bad.
This makes me think of an analogy: a person trying to redo the plumbing on their kitchen sink. A bad firm will do wacky, crazy things, leave the situation worse than when they started, and walk away with their bills paid and the job half done. Compared to that, a competent novice is certainly better. But sometimes it is really important to know about certain O-rings: not only would the "true" professional do the job slightly faster and more methodically than the novice, they'll also know about the O-ring that would only become a problem in the winter. At the same time, sometimes there are no such O-rings, and a simple job is ultimately just a relatively simple job.
To that last point, hiring software engineers can sometimes be scary. When you get very intelligent people and pay them to "solve complex problems," they tend to build things that are very flexible, FAANG-scalable, built on the latest industry trends, AI-powered, made to last 5 years, for its own sake, rather than starting simple. Gall's law: "A complex system that works is invariably found to have evolved from a simple system that worked." Or those engineers may start there because they know those are the "best practices," without yet having experienced why they are best practices, or when to apply them and when not to.
You are switching definitions: in this discussion, people have defined "quality" in one way, which is not relevant to the academic sector. Then you arrive with a different definition of quality to argue that your point holds.
If you define "quality" as something intrinsic to the product, then the "good practices for doing quality work" are not actually good for doing quality work, because those practices lead to shit software (as illustrated again and again by plenty of people working in or close to the academic sector, like the author of the article here).
That's the problem in this discussion: "quality is good"; "this practice is good for quality because in context X, the important attributes of quality are A and B"; "therefore this practice is good in context Y, even though there the important attributes of quality are C and D".
I'm saying "what you call quality is A and B, and in science, quality is not A and B".
> This makes me think of an analogy ...
I agree with your analogy, but the person who knows about the O-ring is THE SCIENTIST. The software developers, and the "good practice rules" they apply, don't know anything about O-rings; they don't know how to install a kitchen sink. Software developers don't know how to build scientific software, and they don't even understand that scientific software has different needs and needs different "good practices" than the ones they have learned for their own job, which is a very different context.
It's like saying: a good practice in veterinary medicine is to give product X to a sick dog, so we should follow veterinarians' good practices when treating humans, because veterinarians are making products that help living things feel better, so it's the same, right?
Your example with Gall's law is pretty clear: YOU DON'T WANT SOFTWARE THAT LASTS 5 YEARS IN RESEARCH. You NEED, really need, software that explores something with a very good chance of leading to "no, it was incorrect", and that, if it leads to "yes, it's correct", will be thrown away after the paper is published (because the publication contains the logic, so anyone can rebuild an implementation in a way that satisfies Gall's law if they want). Gall's law is totally correct, but it is talking about a different kind of object than "research software", and scientists are not building that kind of object.