This sounds like doubling down on the approach that was causing the problems in the first place.
The desire to control and incentivize researchers to compete against each other in order to justify their salary is understandable, but it looks like it has been blown so out of proportion lately that it's doing active harm. Most researchers start their career pretty self-motivated to do good research.
Installing another system to double-check every contribution will just increase the pressure to game the system, on top of the pressure of doing the research itself. Replicating a paper may sometimes cost as much as the original research, and it's not clear when to stop trying. How much collaboration with the original authors are you supposed to attempt if you fail to replicate? And if replication outcomes decide people's careers, you will need yet another system to ensure those decisions aren't arbitrary, and so on.
While I agree that "most" researchers start out with good intentions, I'm afraid I've directly and indirectly witnessed so many examples of fraud, data manipulation, wilful misrepresentation and outright incompetence, that I think we need some proper checks and balances put in place.
When people deliberately fake lab data to further their career, and that fake data is used to perform clinical trials on actual people, that's not just fraudulent, it's morally bankrupt. Yet this has happened.
People deliberately use improper statistics all the time to make their data "significant". It's outright fraud.
I've seen people doing sloppy work in the lab, and when I questioned them, I was told "no one cares so long as it's publishable". Coming from industry, where quality, accuracy and precision are paramount, I found that attitude shocking and repugnant. People should take pride and care in their work. If they can't do that, they shouldn't be working in the field.
PIs don't care so long as things are publishable. They live in wilful ignorance: unless they are forced to investigate, it's easiest not to ask questions that might get unpleasant answers back. Many of them would be shocked if they saw the quality of the work done by their underlings, but they live in an office and rarely get directly involved.
I've since gone back to industry. Academia is fundamentally broken.
To your point that "double-checking" won't solve anything, I'd like to propose a different way of thinking about this:
* lab notebooks are supposed to be kept as a permanent record, checked and signed off. This rarely happens. It should be a manager's responsibility to check and sign off every page, and to question any changes or discrepancies.
* lab work needs independent validation, and lab workers should be able to prove their competence to perform tasks accurately and reproducibly. In industry, labs send samples to reference labs and receive unknown samples to test; the results are used to calculate each lab's deviation from the true value, both against the reference lab and against others in the same industry, and labs get ranked on their real-world performance.
* random external audits that check everything: record keeping, facilities, materials, data, and working practices, with penalties for noncompliance.
Now, academic research is not the same as industry, but my point is that what's largely missing is oversight. By and large, there isn't any. Putting it in place would fix most of these problems, because most of them exist only because they are permitted to flourish in its absence. That's a failure of management in academia, globally. PIs aren't good managers: they see management in terms of academic prestige and expanding their research-group empires, but they are incompetent at it, with zero training and little desire to do it. Management could be made a separate position in a department: stop PIs managing, let them focus on science, and have a professional do it. And make compliance with oversight and work quality part of staff performance metrics, ranked above publication quantity.