"People" sciences like sociology, psychology and economics can make incredibly misleading claims because one experiment over a small sample of people at a certain moment in time might seem to support a claim, while the actual reason for the observed results is a factor which is never taken in consideration. On the other hand, conducting those experiments over wider demographics and in different points in time means that the study wants to build "universal" models of how each single person in the whole world acts, which is utterly dismissive of the specific local environment around people.
Sociology in particular should always be approached highly critically, because applying its theories and reasoning in its terms often amounts to mass control over people's free will.
I majored in psychology in undergrad. A big part of why I didn't look for a psychology-focused job is that the science is all so loose. I'd often learn about two study-backed phenomena in two different classes that somewhat contradicted each other. Or I'd learn in a subsequent class that a previously taught study had been invalidated in one way or another. Almost everything is measured subjectively, so huge parts of our knowledge of psychology are a house of cards resting on the assumption that the diagnostic questionnaires used to measure them are accurate and reliable. Many of the measured effects are small, so it's hard to trust that randomization and controls are sufficient, and replication of results is a major issue (the quick simulation sketch below illustrates why).
It all just feels so 'loose' compared to the physical sciences.
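To make the "small effects" point concrete, here is a minimal simulation sketch in Python; the effect size, sample size, and alpha are purely illustrative assumptions, not numbers from any particular study. It shows how a real but small effect studied with modest samples gives an underpowered design, so faithful replications of a "significant" finding routinely come back non-significant:

```python
# Minimal sketch: why small effects plus modest samples make replication fragile.
# The effect size, sample size, and alpha below are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.2     # a "small" standardized effect (Cohen's d)
n_per_group = 30      # participants per condition
alpha = 0.05
n_studies = 10_000    # simulated independent replications

significant = 0
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(true_effect, 1.0, n_per_group)
    _, p_value = stats.ttest_ind(treatment, control)
    significant += p_value < alpha

print(f"Share of studies reaching p < {alpha}: {significant / n_studies:.2f}")
# This comes out to roughly 0.1-0.2: even though the effect is real, most
# honest replications of an initially "significant" study will fail to
# reach significance, simply because the design is underpowered.
```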
Think about how loose medical science used to be (and for how long): leeches, bloodletting, miasma, ridiculous enemas, and all sorts of outright nonsense. We've got a lot more mistakes to make, but the social sciences will improve too.
No, but let's not forget that much of statistics was originally invented to provide a rigorous underpinning for eugenics. Pearson, who invented many of the most commonly used statistical methods, was a prominent eugenicist and contributed greatly to its ideas.
Sure, but it doesn't change the fact that your findings will be as useful or useless as your premise. If you're trying to use statistics to prove pseudoscience, well, it's still pseudoscience at the end of the day. (NB: I don't mean that the psychological sciences are pseudoscience.)
Honestly, all the sociology I have seen or been exposed to, including in college, seems more interested in acting as a platform to push specific ideas than in attempting to find truth.
Beyond that, those involved in sociology seem to believe that a study is the same thing as an experiment, and that it constitutes proof.
Ultimately we can't really run A/B experiments on society at large because we are living in it; however, humanity has at its disposal all of history as a case study. My point is: if you really want to understand how societies interact, form, react, and live, ask a historian, not a sociologist.
I would also apply most of these comments to economics, except that there seems to be more diversity of viewpoints there, and math, more than studies, is used to provide a veneer of respectability.
EDIT:
If someone feels that history is inferior to sociology for understanding how societies act and behave, please tell me why; I want to understand where I am wrong. But a lot of the arguments we are having in society nowadays look to me like the same ones people had a thousand years ago: the discussions over social media are basically the exact same ones people had over the printing press in Europe, and when I recently read "The Republic" I found the exact same arguments I see repeated here.
So if you feel otherwise, please tell me why. I admit I could be wrong, but I want to understand where my reasoning is flawed.
I'm an economist. If I threw away the half of the data that didn't support my findings, and got caught, I'd lose my job and never publish again. I'm pretty sure the same is true in other social sciences, such as psychology. This is true irrespective of the well-documented problems that the article describes, which certainly also apply in economics and elsewhere, to varying degrees.
I think you underestimate how influential historians are in the long run, by changing how we see ourselves. But in any case, my point was about which disciplines we can trust, not about which are more or less powerful.
Economic policy is implemented by the parliament and the executive branch, and it rarely follows economists' advice closely. Even the Fed chair is a lawyer!
The thing about textual evidence is that you can't cite an entire text (obviously). You have to selectively choose what to quote in order to support your claims. Additionally, people can write one thing and then write other, contradictory things, or they can act in ways that contradict what they write. It is from this totality of evidence that non-quantitative methods draw their conclusions. To get to the point: I'm not necessarily claiming that Nancy MacLean (the historian "caught cutting sentences in half") is in the right here, but if you actually follow the debate it seems quite nuanced, and the internet critic hadn't actually read most of the book they were criticizing (and also clearly has certain political leanings to boot). It's certainly nothing like "throwing away half the data that didn't support my findings."
The JEL review which I quote in the linked blog certainly had read the book, and called it "replete with significantly flawed arguments, misplaced citations, and dubious conjectures". And if cutting sentences in half, to remove something which directly contradicts your thesis, doesn't count as historical malpractice, then what would?
It's just not that simple. I'm not making a value judgment on the book (I haven't read it), but a person can say two things in the same sentence and the broader context can make it clear that they're just covering their ass, for example. Perhaps that's not what's going on here. Perhaps the book does constitute "malpractice." But...I think the situation is more complex than you're giving it credit for, and I wouldn't be comfortable drawing conclusions without a greater familiarity with the book and the responses to it. I also don't give a lot of credence to the blog you linked, since they use (as one of their two pieces of evidence) a critique which openly admits it hasn't actually read the thing that is being critiqued.
To your last point, plagiarism, for example, definitely counts as malpractice and humanities professors lose their jobs for it.
After following up on your sources, I surrender my position. It appears that the quest for truth has largely been abandoned in academia, and that integrity is a fool's dream.
I don't disagree with you, but frankly it would be a bit frustrating to limit one's study of human behavior to history alone, without also trying to understand the dynamics of current societies, how they respond to change, and so on. The two fields have completely different sets of instruments and very limited overlap.
The best social science studies often involve accidental experiments, where good experimental conditions arise not by design but by happenstance. The analysis of these situations could be construed as a historical case study, or it could be construed as an experiment. I agree that seeing analogues in past societies is not the best approach, but studying history can sometimes reveal experiment-like conditions.
Another, similar issue is with data from situations that would be clearly unethical to create intentionally: the behaviour of plane crash survivors stranded on mountainsides, castaways, feral children, etc.
I felt this way as well. But you might benefit from reading more old school sociology books.
C. Wright Mills's The Sociological Imagination is great (it should have been taught to you in college). Thorstein Veblen's The Theory of the Leisure Class is good as well. These really seemed to me like attempts to approach truth, and perhaps that's because of the time they were written in versus the time we live in now.
"On the other hand, conducting those experiments over wider demographics and in different points in time means that the study wants to build "universal" models of how each single person in the whole world acts, which is utterly dismissive of the specific local environment around people."
I don't think that building "universal models", or observing recurring patterns through analysis of 'experiments over wider demographics and at different points in time', requires the ambition to predict a single individual's behavior or actions as a corollary.
The problem lies - like you said - with the policymaker, and more generally with people who extrapolate the results of a paper inadequately.
The problem is even more pervasive than that. There is an irresistible tendency to try to make universal statements rather than just share anecdotes without generalizing from them.
Like, for example, I just made two universal statements, didn’t I?
Yeah, so you really have to draw a distinction between empirical science and science here, which Max Weber, one of the pillars of the social sciences, pointed out around one hundred years ago.
"As such, he was a key proponent of methodological anti-positivism, arguing for the study of social action through interpretive (rather than empiricist) methods, based on understanding the purpose and meanings that individuals attach to their own actions."
Alternatively, anti-positivist endeavors should find themselves another space to occupy and not piggyback on institutional adjacency to actual sciences to project credibility and authority, attain public funding, etc.
On the other hand, one might also argue that positivism is just a school of philosophy, and that positivists are piggybacking on a thousands-of-years-old tradition of philosophy.