If you are going to critique the methodology, please provide a reference showing that it is not a robust method. You may be right, but how can I tell without some references? A Google Scholar search for "controlling for genetics using polygenetic scores" brings up many recent papers about this methodology, and the arguments made in those seem stronger to me than this one comment. IMO, on the internet, where people can easily misinterpret the science, it's important to be as clear as possible, especially when we take things down.
As far as Scientific Reports goes, it's a fine journal; it's run by Nature. It's not on the same planet as the predatory journals that spam inboxes. I worry that people will read your comment, assume you speak from authority, and discount any work they see coming from that journal, when we both know that good science can be found in Scientific Reports, and that impact factor is more strongly correlated with "sexy" or expensive science than with good science anyhow.
I don't speak from authority, but I do speak from experience as an academic. Scientific Reports is run by Springer, which is more competent than Elsevier but just as predatory. Its acceptance rate is 48%, and in 2021 it published about 23,000 articles. (You can check this here: https://www.nature.com/srep/research-articles?year=2021.) At a publication fee of almost $2,000 (per Wikipedia), this is a money-making machine, as skewered by https://www.youtube.com/watch?v=8F9gzQz1Pms. I think it's obvious what the incentives are here.
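For a rough sense of scale, assuming every one of those articles paid the full fee and none was waived: 23,000 articles x ~$2,000 per article is roughly $46 million in publication charges for 2021 alone.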
The reason the method is not robust is that the typical polygenic score explains only 10% or less of the variance of its target phenotype. That leaves 90% of the variance unaccounted for by the control, which means your error term will be correlated with your focal independent variable (the exposure whose effect you are trying to estimate), violating the requirements for regression to give an unbiased estimate. I don't think these claims are controversial. We know polygenic scores are noisy. We know what happens when control variables are noisy.
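To see why a noisy control leaves residual confounding, here is a minimal simulation sketch in Python, under made-up parameters of my own choosing (a score sharing ~10% of its variance with true genetic liability, an exposure partly driven by that liability, and a true exposure effect of exactly zero); the names G, pgs, X, Y are purely illustrative:

    # Minimal sketch with assumed, illustrative parameters: a true genetic
    # liability G confounds exposure X and outcome Y; we can only control
    # for a noisy proxy "pgs" that shares ~10% of its variance with G.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    G = rng.normal(size=n)                                         # true genetic liability
    pgs = np.sqrt(0.10) * G + np.sqrt(0.90) * rng.normal(size=n)   # proxy, R^2 ~ 0.10 with G
    X = 0.5 * G + rng.normal(size=n)                               # exposure partly driven by G
    Y = 0.0 * X + 0.7 * G + rng.normal(size=n)                     # true effect of X on Y is zero

    def ols(y, *regressors):
        """OLS coefficients (intercept first) of y on the given regressors."""
        Z = np.column_stack([np.ones(len(y)), *regressors])
        return np.linalg.lstsq(Z, y, rcond=None)[0]

    print("X coefficient, controlling for true G:  %.3f" % ols(Y, X, G)[1])    # ~0.00
    print("X coefficient, controlling for the PGS: %.3f" % ols(Y, X, pgs)[1])  # ~0.26, spurious

Under these assumed numbers, the "controlled" estimate is still badly biased away from zero, because 90% of the confounder survives in the error term.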
The fact that lots of people do it doesn't, sadly, make it work. Lots of social psychologists run trials with an N of 35 (though they're addressing this critique, to their credit). Lots of historians fail to specify their hypotheses and to search for disconfirming data. Economists spent the 80s and 90s running cross-country regressions, before realizing that they had, in aggregate, more independent variables than cases. And so on.