
> Thomas Kuhn's "The Structure of Scientific Revolutions" calls into question the very idea of there being a "scientific truth" that is independent of historical assumptions.

OK, let's do this.

We have a scientific theory.

From it, we derive some engineering discipline, which uses the theory to, essentially, make predictions about what will happen if we do this to that, with the property that, if the predictions hold true, we'll have something useful.

The people following the engineering discipline create things.

Those things work.

Does that not, then, validate the scientific theory?

And if that scientific theory is validated, does that not knock Kuhn on his ass?

Because the forces that the engineers' artifact is subject to don't give a rat's ass what our current culture says. They were the same billions of years ago and will be the same billions of years hence, the existence of our species, or of intelligent life at all, notwithstanding.

OK, some fields of science don't make sense without humans to study. Right. But others will still be just as true if we're wiped out and replaced by sapient Corvidae, or not replaced at all.




Kuhn never denies that there are facts about the universe that we can observe. And I agree with you about this general class of example: no number of scientific discoveries or paradigm shifts will ever make the photoelectric effect cease occurring.

Kuhn's argument is that science occasionally undergoes what he calls "paradigm shifts" -- a low-level shift in assumptions about a certain realm of scientific thought that fundamentally changes the way we approach a particular field. One example Kuhn gives is the Copernican Revolution. Copernicus, as we all know, proposed the heliocentric model of the solar system. Before Copernicus, most people used Ptolemy's epicycles to model the movement of planetary bodies. Initially this worked, but the cracks started to show as observations accumulated. A major shift in our assumptions about the organization and modeling of planetary bodies had to occur before scientific progress could move forward.

In this sense, scientific knowledge progresses in giant shifts, rather than linearly or incrementally. Consider the theory of atomism and how it was disrupted by the Rutherford gold foil experiment, or the double-slit experiment and what that did for physics. A dominant paradigm must always make way for a new paradigm in order for scientific progress to occur.

The upshot of all of this, according to Kuhn, is that the criteria for scientific truth are always caught up in certain historical assumptions and that we have to take these assumptions into account when assessing the veracity of a given theory. He doesn't say that there aren't facts about the universe, but rather, that the scientific approach to understanding the universe is caught up in a paradigmatic frame which makes it impossible to derive a simple, objective algorithm/process for scientific discovery.

Does that make sense?


> The upshot of all of this, according to Kuhn, is that the criteria for scientific truth are always caught up in certain historical assumptions and that we have to take these assumptions into account when assessing the veracity of a given theory.

If your historical assumptions are true in a useful way, your science will progress to the point where you have more-useful engineering; if they're wrong in an important way, your science will stall out, or give you the wrong answers. If they're neither, and they don't affect the ultimate utility of your predictions one way or the other, it all becomes a bit academic: Is it even useful to say your theories are wrong if they keep making good predictions and allow scientific progress to be made? Note well that atomism (pre-quantum) and geocentrism eventually stopped making good predictions, stopped being a gateway to more complete theories, or both. (For example, geocentrism is probably impossible to integrate with universal gravitation.)

"Truth" is something mathematics has access to, not physics. Truth-With-A-Capital-T is Absolute, Perfect, Incorruptible, and utterly inconsistent with reality as it is outside of the symbol-games we play in our minds, because Platonism is downright insane.

Therefore, "scientific truth" is contingent, sure, but it's contingent on more than mere fashions. It's contingent on experimentation and experiments don't care if your histories are contingent one way, the other, or the other way entirely. Nobody's histories were contingent enough to imagine the Earth repelled small rocks.

So Kuhn agrees that there is a universe and that there are, at least potentially, facts about that universe humans are capable of discovering. That puts him a few up on some philosophers. However, I don't agree that our criteria for scientific truth are fully entwined with our historical accidents as long as we rely on science to predict what the non-human world is going to do.


That doesn't invalidate the scientific method, nor does it preclude having a best-known model that matches all the currently available experimental data. New data that invalidates a model will necessitate a new model, or a new model may come along with better predictive power.

And to use current rationalist terminology: any given scientific model has an associated probability estimate for being true, which is very close to but not equal to 1; any work built on top of that model will depend on the truth of the model; and invalidating a model in favor of a new one requires re-evaluating any work based on that model. The "giant shifts" you're referring to occur when a model lower down the stack, with a pile of things built on it, gets invalidated or replaced by a better model.
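
To make that dependency picture concrete, here's a toy sketch in Python with made-up numbers -- nothing below is a real credence estimate; it only illustrates how confidence in higher layers depends on the models beneath them:

  # Toy "stack of models" with invented probabilities.
  models = {
      "foundational_model": 0.999,  # hypothetical credence in the base model
      "derived_model": 0.99,        # built on top of the foundation
      "applied_result": 0.95,       # built on top of the derived model
  }

  def stacked_confidence(probabilities):
      # Confidence in the top of the stack, treating each layer as
      # only as trustworthy as everything beneath it.
      confidence = 1.0
      for p in probabilities:
          confidence *= p
      return confidence

  print(stacked_confidence(models.values()))  # ~0.94

  # New data slashes our credence in the foundational model;
  # everything built on top of it now needs re-evaluating.
  models["foundational_model"] = 0.5
  print(stacked_confidence(models.values()))  # ~0.47

The multiplication is of course a gross simplification, but it captures why invalidating something low in the stack forces re-evaluation of everything above it.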

On a day-to-day basis, you don't typically re-evaluate the validity of Newtonian or relativistic physics. Most of us regularly use Newtonian models despite knowing that they don't exactly match how the universe works. And we know that relativistic models don't exactly match how the universe works either (notably on a very small scale), though we don't have better models yet that work on both small and large scales.


  That doesn't invalidate the scientific method
You're right, but neither I nor Kuhn ever said that it did. In fact, he echoes your thoughts about how models replace one another:

  First, the new candidate must seem to resolve some outstanding and generally recognized problem that can be met in no other way. Second, the new paradigm must promise to preserve a relatively large part of the concrete problem solving activity that has accrued to science through its predecessors.
He doesn't deny that there is such a thing as scientific progress; he only means to model scientific progress as an episodic cycle in which existing paradigms run up against insoluble problems, and these problems are only resolved when the old paradigm is replaced.

It wasn't so long ago that Einstein declared that God doesn't play dice with the universe. Kuhn doesn't deny the existence of scientific facts or the utility of the scientific method -- he only hopes to illustrate that the notion of scientific truth is contingent on certain assumptions and that these assumptions often get in the way of future progress.

It's also important to remember that Kuhn wrote "The Structure of Scientific Revolutions" back in 1962. At that time, he was largely responding to the logical positivists. While I think a lot of the contemporary rationalist movement is caught up in old logical positivist modes of thought, the kind of willingness to invalidate models based on evidence that you describe wasn't widespread before Kuhn and Popper brought such thinking into the mainstream.


> He doesn't say that there aren't facts about the universe, but rather, that the scientific approach to understanding the universe is caught up in a paradigmatic frame which makes it impossible to derive a simple, objective algorithm/process for scientific discovery.

I don't see how he gets from A to B. How does the fact that we can obviously only build models based on the past - since the future is not accessible to us - prove that it is impossible to refine models to asymptotically approach a hypothetical fundamental truth?

We may not have a formalized algorithm. But whatever is running on human brains has worked so far.

If I boil this down it sounds to me like he's claiming that a system that takes its past states as one of its inputs (observation of the universe being another) is incapable of refining scientific theories? Which, given the fact that's exactly how scientific discovery has been done so far, seems false.

So what am I missing here?


  I don't see how he gets from A to B
There is only so much I can do to summarize his argument in an online forum without replicating the entire book. I recognize that I'm not doing the full text justice, but it's hard to go into much more depth without simply recommending that you read the book itself.

  If I boil this down it sounds to me like he's claiming that a system that takes its past states as one of its inputs (observation of the universe being another) is incapable of refining scientific theories? Which, given the fact that's exactly how scientific discovery has been done so far, seems false.
I wouldn't say that paradigm shifts like the Copernican revolution, the discovery of elementary particles, or the uncovering of quantum mechanics are acts of "refining" existing theories -- they largely involved throwing away a significant amount of work and starting from scratch. Scientists since Ptolemy created extraordinarily complex epicycle models to explain the movement of planetary bodies from a geocentric perspective. When the Copernican heliocentric model became accepted, all the work on epicycles became more or less useless.

Now, we didn't have to abandon Newtonian mechanics entirely, but quantum mechanics has replaced Newtonian mechanics in most fields dealing with particles and small numbers of atoms.

Kuhn's argument is that science has advanced, but not through a simple process of "refining". According to Kuhn, science isn't generally a linear or incremental process -- it is a cyclical one in which our existing models cease being useful and we have to find a better one.

I hope that helps! If you're interested, I highly recommend reading the text -- Kuhn was an excellent writer [1].

[1] http://projektintegracija.pravo.hr/_download/repository/Kuhn...


I think this is a very naive refutation of the point being made about Kuhn.

First off, the notion that engineering "validates" science. What do you mean by validate? Do you mean successful engineering informed by a set of scientific principles, somehow, in a scientifically rigorous way, renders those principles true? As in, end of story true?

Because most of mechanical engineering is done on the assumption that force equals mass times acceleration. The industrial revolution yielded countless engineering marvels on the back of Newton--cars, trains, breathtaking buildings and bridges. But the theory isn't "true," because Albert Einstein did some thinking and realized that all of the physics change as you go really fast--something "engineering" missed, despite the fact that the stuff "worked."

But somehow it feels wrong to say that Newton was wrong, right? Because in his world, with the kind of thinking and set of scientific instruments available to him, and the battle-hardened inverse-square phenomenon [1] that could be painstakingly measured and applied in engineering, it was infallible. It was true. But only in historical context.

[1] Actually not quite so. The strange orbit of Mercury was out of line with Newton's equations, so much so that a planet, Vulcan, conveniently placed where it could not be observed from Earth, was invented to explain it. And so the theory was saved, until General Relativity explained the anomaly away, too. So much for "truth."


> I think this is a very naive refutation of the point being made about Kuhn.

I don't deny that it's naive.

> First off, the notion that engineering "validates" science. What do you mean by validate? Do you mean successful engineering informed by a set of scientific principles, somehow, in a scientifically rigorous way, renders those principles true? As in, end of story true?

Nothing is end-of-story true except in mathematics, where we have access to absolute truth by virtue of first having accepted an axiom system as absolutely true within a context and then having accepted some logical rules as being capable of turning one absolute truth into another absolute truth within the same context.

Mathematics is absolute, but it's only valid within the abstract context of that branch of mathematics.

Physics, for example, is only conditionally true, holding only until we find evidence which refutes a given theory, but it is applicable to the real world.

So engineering provides evidence that the theories we have can predict the behavior of the Universe at least in the context where they're being applied. A theory is only validated in the world in which it is tested. Granted. However, to the extent it is tested and validated, that validation should be accepted as worthwhile, as opposed to being written off as something culturally contingent.

> Because most of mechanical engineering is done on the assumption that force equals mass times acceleration. The industrial revolution yielded countless engineering marvels on the back of Newton--cars, trains, breathtaking buildings and bridges. But the theory isn't "true," because Albert Einstein did some thinking and realized that all of the physics change as you go really fast--something "engineering" missed, despite the fact that the stuff "worked."

Right, and Einstein's predictions about how accelerating a massive particle to near light speed would affect its measurement of time were validated not by engineering but by experimentation. It's also true that bridge engineering validates Newton as much as it does Einstein and Dirac, because it operates in a world where all three theories are "valid" in the sense of "if you use them to help make your bridge, they will not cause it to fall down". It even validates whatever ideas the ancient Roman bridge-builders had, at least if the bridge is of a style the Romans made. I grant all of that.

Philosophically, then, we're back to Popper, in that negative results push science forwards, whereas positive results only make us more sure that the ground we're standing on is solid. We shouldn't ignore positive results, though, because the bridge will still stand even after the next paradigm shift; we should further accept all theories as provisionally correct. That much seems fairly mainstream, philosophically speaking.

However, we are moving forwards. We are able to explain more observations than we have been able to in the past. We are not just moving in circles, with each paradigm shift undoing all of our work and sending us back to square one. We learn to make better and better bridges, to bring this back to engineering.

> But somehow it feels wrong to say that Newton was wrong, right? Because in his world, with the kind of thinking and set of scientific instruments available to him, and the battle-hardened inverse-square phenomenon [1] that could be painstakingly measured and applied in engineering, it was infallible. It was true. But only in historical context.

Newton's laws were always provisional. We now know them to be incomplete, but still useful for human-scale construction, on Earth or in space or on other bodies entirely. They've been subsumed into more modern theories as a special case; they're the equations you recover when you set the parameters to values similar to what humans experience first-hand. And, as you said, they couldn't explain Mercury, which modern theories can, so they were incomplete even before we had GPS satellites to falsify their predictions. (I mean, they were observably incomplete. Our observation doesn't dictate what reality is; any solipsists can kindly imagine that I don't exist and refrain from communicating with me.)
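
To be concrete about the "special case" point (this is just the standard textbook limit, nothing specific to this thread): relativistic kinetic energy reduces to the Newtonian expression when v is small compared with c,

  E_k = (\gamma - 1) m c^2, \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}

  \gamma \approx 1 + \frac{1}{2}\frac{v^2}{c^2} + \frac{3}{8}\frac{v^4}{c^4} + \cdots
  \;\Longrightarrow\;
  E_k \approx \frac{1}{2} m v^2 + \frac{3}{8}\frac{m v^4}{c^2} + \cdots

At human-scale speeds the correction terms are negligible -- for v = 300 m/s, v^2/c^2 is about 10^-12 -- which is why Newton keeps standing in for the fuller theories in everyday engineering.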

So engineering does validate theories, but validation isn't enough to winnow theories until you come up with some test some of them fail. That's just Popperian philosophy, though, isn't it? That's just the philosophy of science that all the cool kids are so done with right now, right? My point is that we shouldn't imagine that the validation is worthless, or imagine that it can be undone, because any new theory will have to explain precisely the same behavior as the old one, paradigm shift or no.


I don't really have time to respond to your comments right now, but I did want to make one tangential remark.

  Mathematics is absolute, but it's only valid within the abstract context of that branch of mathematics.
Mathematics itself went through a paradigm shift in the early 20th century, known as the "foundational crisis". At the time, mathematicians began running into paradoxes which existing theories could not properly address, including Russell's Paradox.

In response, mathematicians developed sets of formal axioms (nowadays most people use ZFC, although von Neumann–Bernays–Gödel set theory and other variations are sometimes used) intended to provide a mathematical foundation that is consistent (i.e. free of paradoxes/contradictions).

However, as Gödel's Incompleteness Theorems demonstrated, no consistent (contradiction-free), effectively axiomatized system capable of expressing basic arithmetic can also be complete (able to prove every true statement expressible in it).

So, while it is true that mathematical proofs are formally valid deductions from a set of axioms, it is worth recognizing that the relationship between mathematics and truth is somewhat more complex than it seems. As it stands, for any such axiomatic system there are infinitely many true mathematical statements that cannot be derived within it. Some philosophers have even sought to identify 'quasi-empiricism' in mathematical thought [1].

And if you find that interesting, you'll love James Conant's paper on Logically Alien Thought [2].

[1] http://en.wikipedia.org/wiki/Quasi-empiricism_in_mathematics

[2] http://philosophy.uchicago.edu/faculty/files/conant/Search%2...


My point about Mercury was important because it shows that there is an appreciable level of "give" that a theory has before the scientific community agrees that there is something wrong with it. That level of give is socially determined. It does matter whether the discoverer of the anomaly is a Cambridge PhD or a crackpot with no credentials. The measurement instruments matter, and the fallibility of those instruments plays into the acceptance of the results, too. A Popperian viewpoint is somewhat naive because what constitutes a falsification is incredibly fraught! Read Lakatos. He models scientific progression as a series of research programs that have "hard cores" of belief that are protected by ancillary theories. In the event of a negative result, it's those ancillary theories that are investigated first. For example: is my telescope correct? Is the theory of light that informs my telescope correct? Is there a dark planet influencing things that I can't see? In hindsight, Mercury should have falsified Newton, because all of the falsifying observations were valid. But it didn't because reasons.

We (and by we I mean Popperians) want to believe that science is a series of universally positive logical assertions that can be cut down by a single negative observation, as logic would dictate. But we don't always know what the criteria are for successful negative observations. The criteria are less rigid and well defined than we would be willing to admit. They vary from community to community. Robert Millikan won the Nobel prize for measuring the charge of an electron with his brilliant oil drop experiment. The only problem? His measurement was wrong. As folks tried to repeat it, they deviated more and more from his original measurement, until, many repetitions and many publications later, they landed on the correct value. If you were to plot the "true" measurement of the charge of the electron against time, you would see something drifting very slowly from an arbitrary incorrect value to the correct one. You have to ask: how on earth is this possible? Bias, authority, imprecision over truth criteria: all at play. And I think it's this sociological fuzziness, in play in many thousands of small ways, that leads us to at least question the assumptions on which truth is founded.


> We (and by we I mean Popperians) want to believe that science is a series of universally positive logical assertions that can be cut down by a single negative observation, as logic would dictate. But we don't always know what the criteria are for successful negative observations. The criteria are less rigid and well defined than we would be willing to admit.

Bayesian reasoning helps here, I think, because people are wrong, and different people are wrong with different probabilities. For example, overturning mass-energy conservation because someone said they saw a professor turn into a small cat or a strange spacecraft appear and disappear is not reasonable: The probability of one person being wrong or insane is a lot higher than the probability of something really well-verified being completely incorrect.
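
A back-of-the-envelope version of that update, in Python, with numbers invented purely for illustration:

  # Toy Bayes update: how much should one extraordinary report move us?
  # All numbers below are invented for illustration.
  prior_violation = 1e-9    # hypothetical prior that conservation really failed
  p_report_if_true = 0.9    # witness reports the event, given that it happened
  p_report_if_false = 1e-3  # witness is mistaken, fooled, or lying anyway

  posterior = (p_report_if_true * prior_violation) / (
      p_report_if_true * prior_violation
      + p_report_if_false * (1 - prior_violation)
  )
  print(posterior)  # ~9e-7: still overwhelmingly likely the witness is wrong

Even a very reliable witness barely dents a sufficiently well-verified theory.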

Is it political at times? Yes. Can it be improved? Sure. But it is flawed, not completely broken, and I think Kuhn makes too much of the flawedness, which encourages people to imagine that it is completely broken and that the next paradigm shift will therefore validate homeopathy.


I don't think anybody is claiming that science is "completely broken," but there are some who want nice, clean, logical delineations, who think that science is filling out some invisible, giant truth table, and that for each assertion in that truth table there's a straightforward "this is how to falsify me" entry that scientists can look up and enact.

Based on how science actually works, this notion is fanciful. No such table exists. If you had the luxury of asking, say, the top physicists to create such a table for you, the tables they produced would very likely all look different.

Also, your comment regarding homeopathy is something of a strawman. Paradigms are incommensurable. If we do undergo a paradigm shift in our lifetimes, it's likely that our current ways of speaking about science will be unable to capture it.


The hang-up in these discussions is the concept of "truth."

Newton's physics are useful in engineering; in fact they have been good enough for almost every engineering project mankind has ever undertaken, including complex stuff like landing a robot on a comet. But are they true?

Well, we know they are not complete. There is evidence that Newton can't explain; that's how we ended up with our modern theories of quantum mechanics and general relativity. But are those true?

Because we know we cannot reconcile them with one another. And then there is dark matter and dark energy, which are as yet unexplained and might comprise about 95% of the mass/energy content of the universe.

More prosaically, think of the last time you saw a bird fly. What's the "truth" there? Your observation, or the theories of quantum mechanics and gravity, which we believe govern the matter and energy of the bird?

In programming we have a concept of "leaky abstractions"--build a layer of abstraction on top of an underlying technology, and chances are that at some point someone will have to descend to the underlying technology to fix a bug or find an optimization.
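
A toy example of the kind of leak I mean, invented purely for illustration:

  # A queue abstraction built on a plain Python list.
  class Queue:
      def __init__(self):
          self._items = []          # the underlying technology

      def enqueue(self, item):
          self._items.append(item)  # amortized O(1): the abstraction holds

      def dequeue(self):
          # Works, but pop(0) shifts every remaining element: O(n).
          # Once the queue gets big and slow, you have to descend to the
          # underlying list (say, switch to collections.deque) to fix it.
          return self._items.pop(0)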

What if our entire history of scientific observation is just a collection of leaky abstractions? We have no way of telling in advance when we've reached the bottom. So the theory we think might be "true" today, might turn out to be a leaky abstraction tomorrow.

Edit to add: Cultural and historical assumptions raise their heads when we find the holes in the abstractions and are trying to explain them. In the absence of reliable theories, or in the face of seemingly incompatible observations, we just try to sort of apply what we already know.

Kuhn's idea is not that scientific knowledge might be wrong. It's that human beings might be wrong when they think they know what the truth, or reality, is.

Einstein said of quantum mechanics, "God does not play dice," and he spent years trying to prove that the universe is as deterministic as he believed it to be. That's what the voice of historical assumptions sounds like.


>From it, we derive some engineering discipline, which uses the theory to, essentially, make predictions about what will happen if we do this to that, with the property that, if the predictions hold true, we'll have something useful.

We actually use logic, along with induction, to arrive at a lot of our theories. That this works is demonstrated by the scientific method, not the other way around.


There are a lot of ways to be mistaken that you're sweeping under the rug. For example, the geocentric model worked reasonably well for predicting the movements of planets.

The scientific revolutions that Kuhn was talking about often don't make the old model entirely invalid, but rather an approximation that works reasonably well in a limited domain. So the old model is not entirely true, but it's sorta true. If you're the sort of person who wants to say that theories are either true or false, it's an edge case that's not easily handled.


I think it's weird that Kuhn never realized that things work. Could have saved himself a lot of work.


This is a fairly uncharitable interpretation of Kuhn's thought. Please read my above explanation of Kuhn's thought or, better yet, skip the middleman and read "The Structure of Scientific Revolutions" and get it from the man himself :)


(the comment you're replying to doesn't make any sense as anything other than sarcasm)


Yeah, I realized that in retrospect. Thanks for not being harsh about it!



