"You must include a confirmatory study by an independent lab in order to publish this research, BUT we will give you an accept/reject decision on your paper prior to doing that study."
That way, there's much less bias in the confirmatory results, since the paper gets published either way. And if the paper would get rejected even with a successful confirmation, then the confirmatory study is a ton of wasted effort and money, which is a major disincentive to trying at all.
As it should be, science is about experiments and predictive results, not about whether the outcomes are "desirable". Let's incentivise that.
I think there's a broader problem though -- the rigid coupling between a) whether you've published, b) whether you've done something worthwhile, and c) the minimum size of a publishable unit.
This forces scientists to "repurpose" the journal article to signal a lot more than "hey, we did this and something interesting is going on." Anyone who contributed wants their name on it, and the publication is viewed as a reward rather than a test of an idea's merit (as in the "we promise to publish even if you're wrong" arrangement you're describing).
I think it would be better if we had some fine-grained system for documenting everyone's contribution, like a git DAG. Then you could separate the question of "Alice collected this data" from "Bob proposed a great hypothesis about it" and "Charlie tried to replicate it" and "so-and-so has something publishable".
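To make the git-DAG idea a bit more concrete, here's a rough sketch in Python (the node kinds and field names are made up for illustration, not any existing system) of how each contribution could be recorded as a content-addressed node pointing at whatever it builds on:

    import hashlib
    import json

    def make_node(author, kind, summary, parents=()):
        # A contribution node, addressed by the hash of its contents,
        # much like a git commit. 'kind' might be "data", "hypothesis",
        # "replication", etc.; 'parents' are the hashes of the
        # contributions this one builds on.
        body = {
            "author": author,
            "kind": kind,
            "summary": summary,
            "parents": sorted(parents),
        }
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        return digest, body

    dag = {}

    # Alice collected the data.
    h_data, n_data = make_node("Alice", "data", "raw measurements, run 1")
    dag[h_data] = n_data

    # Bob proposed a hypothesis about that data.
    h_hyp, n_hyp = make_node("Bob", "hypothesis",
                             "effect X explains the data", [h_data])
    dag[h_hyp] = n_hyp

    # Charlie attempted an independent replication of the same data collection.
    h_rep, n_rep = make_node("Charlie", "replication",
                             "independent re-run of Alice's protocol", [h_data])
    dag[h_rep] = n_rep

Who did what then becomes a query over the graph, separate from the question of whether any particular slice of it is "publishable".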
"You must include a confirmatory study by an independent lab in order to publish this research, BUT we will give you an accept/reject decision on your paper prior to doing that study."
That way, there's much reduced bias in the confirmatory results since the paper gets published either way. And if the paper would get rejected even if the confirmation is successful, then that's a ton of wasted effort and money that are major disincentives to trying at all.
As it should be, science is about experiments and predictive results, not outcomes being "desirable". Let's incentivise that.