>> Now if we ask S and R if they felt overwhelmed about an 11th scenario, what are
the chances they say yes? Previous studies have said S is much more likely to
answer yes than R. This correlation is the thing they are measuring.
Thank you, that is a very clear explanation. And "construct validity", from
its name and from a quick look on Wikipedia, is exactly my concern.
If "the degree to which a test measures what it claims, or purports, to be measuring" is your concern, does the notion these researchers are thinking about it all the time leave you with less skepticism, or are you sure your skepticism is warranted because you’re worrying in a way they haven’t thought of while studying this caveat?
I think you're asking me "do you think you know better than all of us, or that we are all idiots?"
Neither. I am aware, though, that whole fields of research periodically go through upheavals and abandon previous methodological orthodoxies, and that happens only because there is debate about those orthodoxies. For example, if I understand correctly, experiments with rats in mazes used to be a mainstay of psychology, but they are much less so now.
Is my criticism really that hurtful?
Edit: I'm also aware that whole fields can get stuck in a rut, continuing work that pays off now, in terms of publications, but is later dismissed. In machine learning we talk of "low-hanging fruit": you can publish a paper now that pushes the state of the art a bit further but does nothing to address the limitations of whatever technique you are using. The result is noise.