Pretty sure this argument could have been leveled against rocketry too. For scientific telescopes, breaking out of this mindset would most likely require a frame shift analogous to, but distinct from, the one applied to rocketry.
There are two key differences between rockets and space telescopes (any space-faring probe, really) which prohibit "iterate to failure" as a development technique:
1. Forensic difficulty. Debugging a complex system requires extensive data. You can collect far more data from terrestrial testing, which allows the extensive forensic analysis required to achieve the necessary component reliability. Once a telescope is in space, you can no longer inspect it, and many of the sensitive components that might fail on JWST have to be inspected to microscopic precision to perform adequate failure analysis.
2. Design requirements are far, far more precise. Because of point 1, many failures in deep space are effectively impossible to diagnose and correct in later iterations. Telemetry and sensor data are enough to debug rockets, but for JWST you would need to ship so much extra sensor and data infrastructure alongside the telescope that the whole project becomes recursively intractable.
The system complexity is so high that you absolutely must have the ability to make arbitrary system corrections, because the chance of building everything correctly the first time is effectively zero, even with perfect hindsight! Essentially, any time you build the thing from scratch, you will find mission-ending statistical deviations. The objective of on-ground testing is to identify and correct those specific deviations until the whole system is within design tolerances. If you were to rebuild it entirely, the next iteration would have a completely different set of statistical deviations that would need to be corrected. Development therefore involves "hardening" the entire system through extensive testing, because it is impossible to build a fully-hardened system from the start, even after "learning" from previous attempts.
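To see why "building everything correctly the first time" has effectively zero probability, consider how per-component reliability compounds across a serial system. The numbers below are purely illustrative assumptions, not actual JWST figures, but they show the shape of the problem:

```python
def system_reliability(p_component: float, n_components: int) -> float:
    """Probability that every component works, assuming independent
    failures and that any single component failure ends the mission."""
    return p_component ** n_components

# Hypothetical example: even with 99.9%-reliable components, a system
# with 1000 single points of failure succeeds only about 37% of the time.
print(system_reliability(0.999, 1000))  # ~0.368
```

This is why hardening each specific deviation found in ground testing, rather than rebuilding and rolling the dice on a fresh set of deviations, is the only way to push the overall system into tolerance.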
I’m not arguing that the frame shift is identical to the one for rockets, just that breaking assumptions is probably a way to avoid risk-laden events like this launch: for example, finding a way to create a flywheel that drives costs down, a way to take smaller steps, or a way to incentivize more disposable missions that build on each other but can tolerate some failures. I’m not an expert, but the reason rocketry is moving forward again isn’t “move fast and break things” per se; it’s a general rethinking of foundational assumptions about how rockets “must” be developed, which is what led to that methodology being discovered as a useful one.
I do agree with this. Most systems have a significant basis in unnecessary or outdated methodologies. With the JWST project (with partial hindsight), we probably could have benefited from a "stepping stone" optical telescope after Hubble to prove out some of the hard parts of JWST. We learn quite a lot just from running missions beginning-to-end, and extremely long cycle times sacrifice that learning opportunity. They also push us away from developing extensible "platforms" and toward one-of-a-kind systems, which carry over less knowledge to the next project. Shorter mission timelines also let you leverage state-of-the-art technology, rather than being forced into a design that constrains you to decade-old technology.