Half the Facts You Know Are Probably Wrong (reason.com)
171 points by tokenadult on Dec 25, 2012 | 105 comments



I'd peg it at over 90%, depending on how you define terms.

One of the difficult parts of being a teenager and young adult is that you learn the rhythm and gist of how things operate without actually learning how they operate. This is one of the reasons "being cool" is such a big deal at that age -- the idea is to learn to mimic and fit in with the rest of how society works, not actually to be able to manipulate it. Faking it is much more important than actually doing it. That's what everybody else is doing anyway.

Somewhere in your 30s or 40s (perhaps sooner or later) you realize that much of life is like that: you don't really understand how a car works, only that most of the time you can get in it and go places. When it doesn't work, you take it to another person. She may also not understand what's wrong. But most all of the time she does. If she doesn't, she takes it to another person, and so on. You keep getting closer and closer to total understanding but you never reach 100%. That same pattern applies to medicine, advanced physics, and most everything else. When there's a billion areas of knowledge, even 99% knowledge leaves a helluva lot of holes. There's this surface layer of generalities and half-truths that work so often that it's just not worth diving down into the details on everything else.

On top of that, other people are very happy to share their own generalizations and heuristics, so we're kind of operating in a rumor-of-a-rumor mode. The vast majority of the time it doesn't matter, but sometimes, like when you ask questions about how startups work and most of your surrounding culture gives you bad answers, it does. Most of life isn't a geometric proof built on the linear progress of science; it's a bunch of heuristics strung together to make something practical that you can use.

What we "know" is a product of generalizations and communications about half-truths from the environment we're in. This is one reason why it's so critical to spend a lot of time challenging the things you think you know, especially in areas that make a big difference to you personally. It also means that you must accept a deep and abiding peacefulness in not knowing very much at all about most of the stuff you actually work with in your day-to-day life.


> you must accept a deep and abiding peacefulness in not knowing very much at all about most of the stuff you actually work with in your day-to-day life

That's how 90% percent of the people think (the ones I only care about using and manipulating to achieve my purposes - not necessarily egotistical purposes, mind it, I might actually ask charity donation from them), but not how I and the 10% I care to hang around with do. For example, you don't kneed to know the details of how a car works, but knowing what an internal combustion engine is and that most breaks work by rubbing two things together and basic stuff like this is what everyone should know - my grandma knows this and she can reason heuristically about this even if she has never been to school and can't read or write. Same about computers - if you are an IT manager or even a sales or marketing guy in something computer related, if, for example, you don't know the difference between compiled and interpreted code and what this has to do with portability and, in consequence, cost of multi-platform development, then you're a mindless tool (probably a bad "tool") and I'll have no respect for you and your rights.

You can't separate "how it works" knowledge from "how to use it" knowledge, no matter how good the abstractions and UIs are. If it's something you're working with, then you should know the principles of how it works!


A lot of people subconsciously define stupidity as "not knowing what I know" and I'm afraid you've succumbed to the same illness.

Why should you know how an internal combustion engine works in order to drive a car? What's the point in knowing the effects of compiled and interpreted code on portability when so many other factors could make that basic knowledge irrelevant?

You save your actual argument for your last paragraph (you can't use it if you don't know how it works) but you never give any examples or explain why that'd even be plausible.


It's not about what I know, it's just that I make a real effort to understand how the things that I do really work, at least at a basic concepts level. For example, when I learned a bit about how to cook I researched a bit about cooking chemistry, when I go to a foreign country I research a bit about the culture, religion and politics. And so on. And it's less effort than it seems. I think seeing knowledge in general as "fun" and just having fun knowing things for knowledge's sake makes your mind work in an entirely different way. Sure, you can google everything, half of what you know is very imprecise or false, and you forget half of the things you learn shortly after learning them. And having a mind rich in "trivia" may impair you in "think fast" situations and sometimes (though rarely) even make you seem "slow" because your mind is weighing extra facts and connections that may be irrelevant while others get faster to the solution because they don't even know these extra "paths" exist and don't waste time exploring them. But it simply makes you "richer" and in a weird way, happier, and not by inflated self-esteem that you know more than others. I know most people think along the lines of "just live life, use stuff that works and enjoy" but I just can't imagine living like this and I find it hard to have more than a superficial conversation with people that live and see the world this way. About the last paragraph, the answer is: there needn't be any answer. I just choose to think this way and I like people that think more like me and I will respect them more and favor them (even unjustly) in all situations. Maybe "my kind" (or at least the corresponding meme) will have higher chances of survival in the evolutionary game of life, maybe not, it's just the dice I "choose" to play :) (Disclaimer: yeah, "you should" should be "I respect/favor you more if you", and yes, I love good abstractions and UIs and I understand their value).


> But it simply makes you "richer" and in a weird way, happier, and not by inflated self-esteem that you know more than others.

That's how I believe you should frame it. "Explore a little bit more of our world, it's awesome" and not "you're an idiot".

And think about the fact that people that don't know something you do, might also know all sorts of things that you don't.


Why is not knowing things so fashionable these days? Especially in a community of people I'd think would be taking things apart just to learn how they work.

> Why should you know how an internal combustion engine works in order to drive a car?

Easy—so when you go to a mechanic with an exhaust blockage and he tries to sell you a new transmission, you know enough to tell him where to stick his transmission.

Yes, this actually happened to me. Thankfully, I knew enough about cars to know I needed another mechanic.

Obviously not every driver needs to be an automotive engineer or even a technician, but it's a little silly to throw up your hands and give up on knowing anything when the basics aren't that hard to understand.

> What's the point in knowing the effects of compiled and interpreted code on portability when so many other factors could make that basic knowledge irrelevant?

At first, I thought "the effects of compiled and interpreted code on portability" was an absurd thing to talk about. Compiled vs. interpreted is a pretty fuzzy distinction (Java is compiled, but only to bytecode that's then interpreted, but that interpreter just just-in-time compiles to native code anyway), and a compiled language doesn't inherently mean you're stuck on one CPU or OS until you rewrite your whole application—sometimes you just need to recompile it. But then I have the basic knowledge that "compiled" often means "compiled to and shipped as native binaries," which matters a lot if I'm trying to run Jim Bob's Digital Accountant on my SPARC workstation running LunarOS 2.7. If that software's only distributed as a Windows binary, I'm pretty much out of luck (unless I happen to have an emulator handy that interprets Windows binaries, but there's that fuzzy distinction again), but if it's a Java application there's a chance it'll just work, as long as Java's been ported to LunarOS on SPARC.
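
To make that concrete, here's a minimal sketch of the distinction (the program and the commands are only illustrative, assuming a C compiler and a JDK happen to be installed):

    /* hello.c -- trivial, but it shows why "compiled" often ends up
       meaning "shipped as a native binary for one particular CPU/OS" */
    #include <stdio.h>

    int main(void) {
        printf("hello from whatever machine compiled me\n");
        return 0;
    }

    /* cc hello.c -o hello   -> a native binary for THIS architecture/OS;
          an x86 Windows build won't run on a SPARC Unix box without a
          recompile from source (or an emulator).
       javac Hello.java      -> Hello.class bytecode, which runs on any
          platform that has a JVM port, no recompile needed. */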

The moral of this story: you should know this stuff so when people talk about it it's not all nonsensical babbling and you don't have to just smile and nod. And, so you can reason about things and know when someone's feeding you a load of BS.

Or so you don't go off and, say, write an article about how Apple is going to switch to ARM so they don't have to maintain two separate versions of their applications for phones/tablets and laptops, because you don't know that code doesn't have to be rewritten from scratch for each architecture, only that a phone needs a different UI than a laptop does.

In general, know more things so the world makes more sense, and so you make more sense interacting with the world.


> In general, know more things so the world makes more sense, and so you make more sense interacting with the world.

No disagreement there. It's good to know things. But it's petty to label people as stupid because they don't know <random fact, skill or theory that you happen to know about> and that's what I was railing against.


most breaks work by rubbing...

At this point, forget, how the pads operate, I'd be happy if people just knew how to spell "brakes"...

I'm not picking on you in particular- I swear it is one of the most frequently misspelled words I read online.


I'd hire anytime a guy who knows how something works over someone who knows how to correctly spell the name of that thing, or even knows how the damned things is called. I've known people way smarter than I am that have way worse spelling, grammar and even basic communication skills than I have. We seriously overvalue communication related skills just because they form one's ..."gift wrapping" (plus the thing with most programmers being pathologically sensitive to grammar and spelling mistakes, but this is somewhat understandable :) ).


I'd hire anytime a guy who knows how something works over someone who knows how to correctly spell the name of that thing, or even knows how the damned things is called.

These things are very frequently closely related in at least one direction. Useful as a filter.

I've known people way smarter than I am that have way worse spelling, grammar and even basic communication skills than I have.

It isn't really the rule, though, is it?


Oh, give them a brake.


I work with the English language every day, and so I have no respect for the rights of people who can't even spell the word "need" correctly. Is that what you're saying? :-)


Instead of comments attacking a geek elitist worldview, I get folks pissed off 'cause I mistyped "breaks" and "kneed" :) ...and I actually find it amusing after almost a dozen whiskey eggnogs ...funny times the holidays :)


This is a great point. Another good example is the "linguistic division of labor", put forth by Putnam (http://en.wikipedia.org/wiki/Hilary_Putnam), wherein it is proposed that language can function (i.e. words have meaning) because people assume there's a person (or people) who can fix the meaning of a certain term. An example: When it was announced in 2011 that researchers had found that the electron was (nearly) a perfect sphere, I had no idea what "perfect sphere" meant in this context, so I appealed to the experts on PE (http://physics.stackexchange.com/questions/10433/what-does-i...).


yeah, rumors, urban legends, folklore, oversimplifications, generalizations, even memes, instead of understanding or even knowing.

This is due to cheap, low quality media, which broadcast and publish any crap imaginable. Pseudo-psychology, pseudo-medicine, pseudo-economics, pseudo-sociology, pseudo-everything.

Just memes made out of other memes. "Sugar is bad, eat fiber". "Paper money is bad, buy gold". "Apple will expand infinitely, buy stocks". "Java is the platform of choice", you name it. And everyone is ready to approve or argue, to express an opinion, based on his own internalized memes.

Uncontrolled consumption of the lowest-quality media leads to losing the ability to see the world as it is, rather than as the talking heads around you describe it.

And all those few standard responses, ready to be displayed whether appropriate or not - not this, then it must be that. lol wut?


This is a remarkable comment, but within it lurks a subtle assertion that deserves to be made explicit: that 'knowing' is somehow important in and of itself.

I'm growing increasingly convinced that this is wrong. Knowledge has value, but it is a strictly secondary value. Its value comes from the things that depend on it. For example, if you're an artisan and/or business person, then your primary driving force is "make stuff people want to buy". In that case, knowledge is indeed very useful for improving your outcomes (from how to build things, to what people want, to how to reach people, etc). If you are a scientist then knowledge is a critical component to building new knowledge--and subsequently defending it. Even that most basic satisfaction, the satisfaction of curiosity, is only a pleasant, transient jolt. (And that pleasure is, alas, so disconnected from the real world that even false knowledge is sufficient to trigger it, much to the chagrin of the habitually skeptical.)

So yes, I think your model is fundamentally correct, that knowledge is an extraordinarily practical thing, designed primarily to know its limits and find a specialist when that limit is reached. But you missed the outer context that gives meaning to this conversation - the fact that knowledge is merely the handmaiden to another driving force, and a handmaiden only need be "good enough" to reach a minimum acceptable threshold.


> This is one reason why it's so critical to spend a lot of time challenging the things you think you know...

Especially the stuff you really think you know. And the ease of doing so is inversely correlated with the amount of stuff you think you know about topic X.


There's this surface layer of generalities and half-truths that work so often that it's just not worth diving down into the details on everything else [...] This is one reason why it's so critical to spend a lot of time challenging the things you think you know

I don't understand how you got from point A to point B. Where did you actually show that it's critical to be unsatisfied with a system that works 99% of the time?


This was a fantastic insight. Thanks for writing.


I love how this spawned a tree with someone demonstrating your point.


>You don't really understand how a car works, only that most of the time you can get in it and go places. When it doesn't work, you take it to another person. She may also not understand what's wrong.

Oh, please! "She"? Let's all ruin the flow of our prose in an attempt at social engineering that probably isn't even worthwhile, shall we? I suppose you didn't bat an eyelash at Denzel Washington as savant pilot Whip Whitaker either.


I don't even understand the purpose of it. What percentage of car mechanics is female? Less than 5%? Now I'm thinking about whether any given car mechanic is most likely to be male or female and am completely distracted from the point she is trying to make.


Sounds like a personal problem. I didn't even notice.


Yes. It's a personal problem. I made that evident by writing from the 1st person as evidenced by the usage of the word "I". And it clearly didn't affect only me as evidenced by the upvotes on my comment as well as the one one level above mine.


I didn't even notice this, perhaps that says something.


It jarred me... and not because the concept of female mechanics is strange to me (it is not any more strange to me than the concept of female programmers or female gamers) but because it really felt forced; I looked, and the commenter had a male-sounding name, which made it feel even more forced.


If the parent had used "he", would you be complaining?


No, because like it or not, "he" is the de facto pronoun for a person of irrelevant gender.


No, "he" is the default generic.


In addition to the point made by the other two users who responded to you, the overwhelming majority of car mechanics are men.


Social Engineering?


The article says:

In 2005, the physician and statistician John Ioannides published “Why Most Published Research Findings Are False” in the journal PLoS Medicine. Ioannides cataloged the flaws of much biomedical research, pointing out that reported studies are less likely to be true when they are small, the postulated effect is likely to be weak, research designs and endpoints are flexible, financial and nonfinancial conflicts of interest are common, and competition in the field is fierce. Ioannides concluded that “for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias.”

If Ioannidis' findings apply to areas of science other than biomedical research (where there is also competition, pressure to publish, conflict of interest, etc.), then most scientific knowledge is already false at the time that it's published; you don't have to wait 45 years for it to become false like this article claims.

Here are some references to Ioannidis' work:

- The cited PLoS paper (2005): http://www.plosmedicine.org/article/info:doi/10.1371/journal...

- A less technical article in The Atlantic: "Lies, Damned Lies, and Medical Science" (2010): http://www.theatlantic.com/magazine/archive/2010/11/lies-dam...

Edit: Corrected spelling of Ioannidis' name; it's misspelled in the article.


The scientific method and the community of science that surrounds it is truly a powerful machine - able to take the worst aspects of human nature, sailing atop a river of garbage specked with half-wrong answers, and spin that mix into the gold of technology. It doesn't matter what your right to wrong to nonsense ratio is when it comes to deciphering the world; so long as you have the will to progress and your sifting mechanism is good enough, accumulating a whole pile of right is just a matter of time.

The front line of science is a messy place; a mostly wrong messy place, as any of us who have spent time there know. A recent study claimed massive error rates across all scientific papers - which is not a surprise to scientists. The closer to the edge of knowledge you come, the more wrong you'll find - a great frothing sea of wrong, enthusiastically generated by scientists in search of nuggets of right. It's all part of the process, and you have to step back from the details in order to see where the process is taking you. In any complex field, and biotechnology and medicine are about as complex as it gets outside astrophysics, validating truth takes time. Scratch any unanswered question and it'll bleed papers and reviews, a dozen for any given position on the topic.


> The front line of science is a messy place; a mostly wrong messy place

Reading the article, the most interesting aspect was the concept of a "half-life" of knowledge. This, of course, applies not just to the front lines, but also to the rear guard.

By definition, a new idea that proves to work well obsoletes an old one that didn't work quite as well.


> "In the past half-century, all of the foregoing facts have turned out to be wrong.... As facts are made and remade with increasing speed, Arbesman is worried that most of us don’t keep up to date."

It can take a lot longer than a half-century to catch up.

For example, it's surprising how often I hear people talk about "exponential" population growth, which was the going theory in 1798 [0] but known to be inaccurate by 1838 [1]. I hear similarly out-of-date comments on topics ranging from evolution to Bible manuscripts with shocking regularity.

People sometimes learn "facts" from trusted friends or preferred authority figures, who learned them the same way, going back generations without anybody thinking to double-check. I suspect the emotional cost of finding out a friend was wrong provides just enough friction to keep people from looking things up, even in a world with wikipedia and snopes at our fingertips.

[0] http://en.wikipedia.org/wiki/An_Essay_on_the_Principle_of_Po...

[1] http://en.wikipedia.org/wiki/Logistic_function#In_ecology:_m...


To be fair, a lot of laypeople use "exponential" to mean superlinear. Not to mention that the "logistic function" you cited is essentially exponential until it hits saturation.
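
(For the curious, the two models differ by one term: exponential growth is dN/dt = rN, while the logistic model adds a carrying capacity K, dN/dt = rN(1 - N/K). While N is small compared to K the extra factor is close to 1, which is why the two curves are nearly indistinguishable early on and only diverge as the population approaches saturation.)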


Laziness seems a far simpler and more believable reason why people don't double-check, not the emotional cost of finding a friend was wrong... or did I miss something?


My first foray into science was working in a lab at NASA on an internship. The scientist I was working with came up with a hypothesis and instead of testing it in a scientific way, simply disregarded the information that didn't agree with her thesis and used the information that did. I'm obviously not saying all scientists do this, but the way she did it, and the complete lack of self awareness for what she was doing make me feel like it happens fairly often.

I also feel like it's a pretty good analogue for modern life. People seem less willing to discover the truth than they are to use facts that support the point they want to make while ignoring the inconvenient parts of the facts.


I think the way the scientific method is generally presented does not accurately represent how science is really done. It's true that there are many scientists who more or less follow the scientific method to the letter. But I think it's more common that scientists individually don't follow the scientific method terribly closely. It's only scientists collectively that follow the scientific method.

One scientist might come up with a hypothesis, never test it, but nevertheless believe it devoutly. But science progresses because other scientists are sceptical and test the hypothesis.

It's in the first scientist's self-interest that the hypothesis is proven true, but it's generally more in other scientists' self-interest to prove the hypothesis false.

Some of the greatest scientists of the 20th century closed their eyes to overwhelming evidence that their pet hypothesis was wrong. One of the classic examples was Geoffrey Burbidge (known best for showing that the heavy elements were created in stars and released into the universe in supernovae) who believed until the day he died that quasars were galactic objects.


I experienced the same thing in my masters degree.


discovery of the truth is in our self-interest, nobody else's.

therefore, it is a more lonesome endeavour, and harder than subscribing to the pap fed to the masses.

see my [other](http://news.ycombinator.com/item?id=4968390) comment.

(edit: link, grammar &c.)


> Most of the DNA in the human genome is junk

This topic is still very controversial and I don't think the article should be citing it in this context.

It was only a few months ago that the ENCODE project released its first results claiming that 80% of the genome is "functional", and many scientists have noted that their definition of "functional" is much too broad. It includes any piece of DNA that becomes bound to protein at some time, while it is known that many proteins bind nonspecifically.


Yup. Most of the DNA is not directly involved in coding proteins, but we've been finding out more and more about how DNA works. Significant parts of DNA are there merely to help maintain chemical and solubility properties so that the DNA coils correctly. And far more of it is there for gene regulation.


The upshot of the article is that little should ever be published unless you have perfect information and understanding about the subject.

No publishing on DNA until we know everything, etc.


Yes I noticed that too and immediately started to godel: "how many of the wrongnesses the author believes are wrong are really right ..."


A counterpoint - The Relativity of Wrong, famous essay by Asimov. http://chem.tufts.edu/answersinscience/relativityofwrong.htm


An interesting BBC Radio Four programme about this kind of stuff, specifically about the mind/brain: (http://www.bbc.co.uk/programmes/b016wzs9)

I'm cautious about children. I've seen plenty of boys learn a long list of (for example) dinosaur names. Is this something I should try to avoid with my son, or should I encourage it, or should I just remain neutral? I have no idea.

I do know that I want to encourage him to think about relationships between things, rather than just learning a list of names. (For example: "That tree is evergreen, which means it keeps its leaves all year round. It does that because [...]" rather than "That's a Leylandii".)


Learning the names along with the dinosaurs seems harmless, as long as he also learns that names sometimes change, or turn out to have been based on an erroneous assumption so that the named creature didn't actually exist.

Dinosaur names change or get dropped only because more information becomes available. Extant species' names are more settled, but mostly because we have so much more information. I expect that the dinosaur species with the most fossil evidence aren't going to have their names change any time soon, unless some sort of new, general approach to naming sweeps the field, leading to a rethinking of the whole thing.

What should be avoided is unintentionally creating the kind of mind-set that could have a hard time dealing with corrections later on in life.

Worst-case scenario, years later he finds out a favorite dinosaur of his youth was a misnamed individual of a different type, and ends up deciding that scientists are frauds and becomes a young-earth creationist.

Basically, he'll be fine as long as there's an understanding that scientists are generally doing the best they can with the available information; that scientists in the early years of a new field may have made more mistakes due to the limited knowledge framework (the early misconception of the iguanodon's thumb spike as a nose spike, etc.); and that scientific errors made in good faith are a sign of science's strength, because they can be corrected by sufficient contrary evidence.


> I do know that I want to encourage him to think about relationships between things, rather than just learning a list of names. (For example: "That tree is evergreen, which means it keeps its leaves all year round. It does that because [...]" rather than "That's a Leylandii".)

I don't suppose there's any harm in loading your children up with trivial knowledge, but I would certainly put more effort into teaching them thinking skills than specific knowledge.

These days, if you have the necessary thinking skills, the sum of human knowledge is within reaching distance. If you know how to research a subject, how to corroborate and verify the veracity of positive claims and theories presented to you, you can find out pretty much everything about anything in a matter of minutes.

I suppose it's the old "give a man a fish" adage applied to knowledge.


"I don't suppose there's any harm in loading your children up with trivial knowledge, but I would certainly put more effort into teaching them thinking skills than specific knowledge."

I find that kids are like sponges for details like names. They'll soak them up whether you're teaching them intentionally or not. Or if you're trying to get your kid to learn animals or stars or whatever, they'll be memorizing scores of baseball statistics, or Pokemon, or airplanes.

Teach thinking skills; the trivia will take care of itself.


> Is this something I should try to avoid with my son, or should I encourage it

Avoid.

I would avoid spending time learning the names of things. Learn the thing itself, but the name tells you very little.

Watch this: http://www.youtube.com/watch?v=05WS0WN7zMQ


This is the only piece of advice I would listen to out of all of these replies. Copy the great, not the average.


[...] It does that because, most likely, it is a gymnosperm, a fruitless tree. Its seeds are ovules not enclosed inside an ovary, which means that it does not need to yield fruit.

Sorry for the diversion: it was a question in today's Trivial Pursuit.


The long list of names is a way to make the dinosaurs a little more personal. Nothing wrong with it. No need to encourage or discourage. Just augment with diverse bits of science and your kid will be fine.


I was telling somebody my theory on this the other day, and I thought I might share it here.

Basically information is power. Due to informational asymmetry[1] being exploitable via economic, political and other forms of power, it is almost always in somebody else's interest that you be misinformed.

Therefore the person with the most to lose (and the person with the best interest in being informed) is you. And instead of seeking out the forms of information that empower us, we fritter away our attentions on trivialities and pop culture (yes, including programming pop culture). A good example of a prophet who tries to steer us programmers the right way is Kalzumeus, imho.

Part of the reason we fritter away our time is that it is simply easier to consume the mass-delusion, because the search for truth can be fairly lonesome (since you are the only interested party).

Under the circumstances outlined above, it is no wonder that most of what we believe may be false, and part of a larger (self-bootstrapping) conspiracy to keep us in the dark (cue ominous music, break out the tin-foil hats).

[1] http://en.wikipedia.org/wiki/Information_asymmetry


I try to keep the question "What do you know, and how do you think you know it?" in my head as much as possible (I think I picked it up from Harry Potter and the Methods of Rationality). I mentally recite it a lot, often without really thinking about it. Every once in a while, though, the thought catches me as I'm recalling some long-known fact, and I'll realize that I don't have a good reason to believe it. Many of these are things I was told long ago by friends/parents/family/teachers and stored away, their sources forgotten. The maxim has proved useful in unlearning things I was too young to disbelieve when I learned them.


I know it's good to have a healthy self-doubt, but this makes dumb people (who don't revise their facts) look more confident than the smart people who are too cautious with their knowledge. This is the reason most dumb people end up being the boss of the smart people.


Obligatory xkcd(s):

http://xkcd.com/843/ (Misconceptions)

http://xkcd.com/1053/ (Ten Thousand)

http://en.wikipedia.org/wiki/List_of_common_misconceptions (The link referred to in 'Misconceptions')

EDIT: Darnit.


Your links are swapped!

Is that a typo or humor about facts being wrong?


Now I know that half the facts I know are probably wrong. But which half is that fact in?


That reminds me of the old joke about medical school: half the things you learn are wrong, and half you forget - you just have to hope it's the same half.


This is a naive view, with inaccuracies in at least one of the "facts" it claims to be wrong. It's what I would call a level 2 grad student view. Level 1 - you believe every research article you read and everything your advisors tell you, and you glom onto one area or viewpoint in particular without considering alternatives. Level 2 - you reject everything that isn't in line with your viewpoint. I've seen professors stuck at level 1 and 2. Level 3 - you start to see what's 'right' and 'wrong' (and more useful or less useful) about everything you learn.

Consider this "fact" from the article, which has no citation whatsoever:

"Increased K-12 spending and lower pupil/teacher ratios boost public school student outcomes."

Which the author claims is "wrong."

Actually the evidence suggests that it is right, although the effect of class size alone may be small. And it is not just a matter of how much of an effect reducing class size has on learning, but also in what circumstances and to what extent. And even with these "facts", it doesn't mean that big classes are inherently always "bad" and useless - MOOCs are still quite useful even though 90% of students don't finish them. And large class lectures actually could be made more effective if given after (not before) a lab or other interactive learning activity. But a wealth of research has shown that one-on-one tutoring is generally the most effective method of teaching (which also directly contradicts the author's assertion that reducing class size has no effect).

http://www.districtadministration.com/article/does-class-siz... http://www.ppta.org.nz/index.php/-issues-in-education/class-...


"Half of the facts are wrong" may be a little steep. Do we breathe air? Does it contain oxygen? I'm sure some enthusiast will give me the composition of air (including some stat on nitrogen :). There are many trivial facts that we know with confidence.

However, in a recent conversation with my mother, I asked her about a talking donkey in the bible. She confirmed the reference, saying "the lord works in mysterious ways" and "donkeys could talk back then". Maybe there is some truth to the claim that half of our facts are wrong... Religion aside, I view my mother as a very clever woman who is very astute with business. She was indoctrinated at a young age.


The talking donkey is described as a miracle, meaning donkeys didn't normally talk even back then :) Miracles by definition are unscientific, unpredictable, unreproducible and contrary to all our previous experience.

But leaving donkeys aside, we are not usually thinking with extreme scientific rigor, because that would be too expensive and, frankly, wasteful. Frequently heuristics - also known as cognitive biases, prejudices, etc. - serve well enough for everyday life. Of course, sometimes you need much better precision and then you have to know how to do it.


Religion is everywhere, even where it is specifically disclaimed (see also: The Singularity).

We're hardwired for it.


Religion is just one form of worldview or belief system.

Everyone has a worldview/belief system, no matter how scientific they are.

And belief systems are non-rational. These are the default nodes in your cognitive framework. They aren't evaluated, they are just assumed to be facts. They aren't easily changeable. For anyone, no matter how good they are at reasoning.


> Everyone has a worldview/belief system, no matter how scientific they are.

Except some worldviews aren't based on beliefs, if you define 'belief' as 'blind faith'.


The point is that regardless of the basis of the worldview, those underlying fact nodes in the cognitive framework are terminal nodes. There isn't any rationalization that goes on to reevaluate them, except under very rare circumstances which aren't motivated by a rational process.


Are you claiming that the idea that AI will eventually be smarter than humans and thus render human work mostly useless is a religious idea?


No, I'm claiming that the idea that progress toward this can be predicted and that the event will occur around 2045 is a religious idea.

Like all of the best religious ideas, it is one that came about by a man faced with his own mortality.

On a side note, and this is opening a huge philosophical can of worms, I'm not convinced of the utility in merging our intelligence with machine intelligence in such a scenario, either from the point of view of overall utility (what do they gain from our meeker intelligence?) or from the point of view of our own "existence" actually being near-immortalized.

Stated simply, if you could make a perfect copy of my brain into a 20 year old clone of myself (I'm 39) and that clone were created such that it didn't have to worry about issues like telomere shortening or any of the other processes theorized to cause issues for clones, but after the process was completed the original copy of me would be terminated, I wouldn't take that deal and I doubt many other people would either regardless of how they feel about the theoretical ability to duplicate an exact copy of their own consciousness. Ultimately, the copy of me that is me wants to continue on but cannot.

I will die.


While I agree that you will die, the rest is claptrap. There's nothing religious about the singularity.

We are most definitely not hard wired for religion. It's a mythology that preceded the scientific era. In a thousand years or less it will be laughed off like the talking donkey nonsense. We're not hard wired at all...


We're not hardwired for something as abstract as religion, but we do have a lot of cognitive heuristics and biases that while they may have had uses in the ancestral environment (and indeed some today), nevertheless make errors in reasoning such as religion (and homeopathy, judging risks and probabilities, reducing fractions, etc.) quite easy for even the most intelligent of us. http://en.wikipedia.org/wiki/List_of_cognitive_biases

I agree there's little that's religious about the Singularity itself or most Singularitarians (http://www.acceleratingfuture.com/steven/?p=21), and the above poster is probably misled about it. (I suspect he's misled because he brought up the 2045 date and because he's confused about identity. That date I believe is only professed strongly by Kurzweil in the public mediasphere, who is at most with respect to the Singularity a predictor with some past successes and equipped with a strong argument based on accelerated returns, but he isn't working directly toward a positive (or negative) Singularity while those who are that I'm aware of hesitate to estimate date ranges.) Personally I think barring mass human extinction the stage is set for a Singularity event within this century, and I do have many justifications for it, not a religious "I have faith and faith is good" 'justification' or an apologetic one trying to extract justification by means of some shaky philosophy and an a priori sacred book (Singularity is Near? Never read it) containing no significant errors.


I see the Singularity as an apocalyptic prophecy, textbook millenarianism [1] that takes much inspiration from old-school Marxism. Like the latter, it presents itself as economic and scientific because it has big-sounding concepts like Law of Accelerating Returns [2] / Law of Accelerating Misery; both seem well reasoned, are slick, intelligent, and easy for smart people to believe in.

Now, Marx couldn't think of politics as independent from the interests of the capitalist class. That's why he failed, as Popper described: the New Deal, Social Security, public and subsidized education, progressive taxation... all those things came and made his prophecy fail. As in hindsight was to be expected, if only because prophecies rarely become true.

Today, Kurzweil can't think of biology as the vast, incredibly complex area that it is, and so his prophecy will fail when it's clear how difficult it is to achieve immortality, or full cybernetisation of the human mind, or general AI, or whatever. I am 90% confident in that outcome.

Yeah, we are not hard wired for religion, but still it's very easy to fall for it. Scientific reasoning is not and probably never will be the norm, if simply for the fact that we're just poor dumb chimps. After all, religion didn't stop appearing after 1900. The Scientologists are proof of that.

[1] https://en.wikipedia.org/wiki/Millenarianism : a Great Event will come during which Humanity will be Judged and Reborn. Singularity just takes out the Judgement part.

[2] LOAR forgets how Moore's law, for example, was very specific and probably short-term. If Kurzweil had been born in the 19th century, he would have probably been talking about the coming Steam Engine Singularity. In the 60s, he'd be talking about scientific bases on Mars by 2000...


Do you have any links to peer-reviewed journal articles that conclude beyond any possible doubt that we're not hard-wired in any way for religion? If not, do you believe it simply because it's a comforting assertion that fits in well with your world view? And isn't believing something without evidence simply because it's comforting rather...religious?


What I thought "singularity" meant is merely that eventually human work will become worthless as easy to replicate machines will be more intelligent.

I agree that predicting when it would happen is somewhat silly. Also, I had no idea that "merging" humans into machines had anything to do with the singularity.

If you just take the notion that eventually AI will surpass human intelligence, I think it is a very reasonable idea.


I'm not the OP, but: Yes.

And on top of that I'm claiming that the singularity will never happen.

We have incredibly fast computers with enormous amounts of memory, yet are no closer to AI.

And even given infinite computing power, we still wouldn't have AI, so the race for computing power is a red herring. (It's not like we know how to do it, but the computers are not powerful enough - we don't have the slightest clue how make AI.)

(I define AI as the ability to learn any topic at all with unstructured instruction - like reading a book, or being told verbally. Additionally after learning the material an AI must be able to invent and synthesize new material from the old.)

And on top of that, computers aren't getting any faster; that's basically stopped. (Although if AI is parallelizable it might not matter.)


> We have incredibly fast computers with enormous amounts of memory, yet are no closer to AI.

Supercomputers today are the equivalent of a tractor - they do simple things that people can do but at a greater scale. Nobody ever looked at the first tractor and claimed it was a step towards better AI, and nor should they when they look at supercomputers.


> And on top of that I'm claiming that the singularity will never happen

You'll get a lot of negative replies about this, after all HN is very pro-Singularity or whatever you want to call it, but my gut feeling is that you're right.

I can see Moore's law being applied successfully for the next 100 years in one form or another (more processing cores, quantum computing finally taking off etc.), enough for us to reach a state where we could say "Mission accomplished" and let the machines do our job, but I cannot imagine, not even in the remotest of ways, what AI humor would sound like, meaning how the machines would make jokes. In fact, that would be my "personal" Turing test for deciding that the Singularity has truly arrived: can the machines understand and perpetuate humor?


We definitely are closer to an AI than we were.

More and more functions that were once human-only are now a machine capability. Intelligently playing games, voice recognition, image recognition, and some (though this is still undergoing intensive research) working with natural language.

I really really don't see how you can believe that humans will forever be more intelligent than machines. What makes humans so special? Why would you believe that?


Please, please don't listen to this article! The author is far worse than the processes he is demeaning. This article gives no reason whatsoever to believe the title. The main argument seems to be "we learn lots of new things, therefore most of what you previously learned is wrong." This has numerous and obvious problems. We can easily be learning new facts about things that were previously not considered or not known. Most facts that humans know are not scientific facts but rather personal. The large majority of people don't know very much about recent experimental results and only hear about ones after they have been so long and well-established that there is little chance they are significantly wrong. A fact can be 'wrong' while still being nearly right. That the list of ten largest cities in the US has changed just means that the person only knew the list of ten largest cities at the time that they learned it.

If 'half of our facts were wrong' is taken at face value, then our truth-determining process is no better than flipping a coin. Do you think we could engineer bridges, build computers, get to the moon, sequence the human genome, or write software if every time we didn't know something we just asked it as a yes-no question and flipped a coin?

So as much as I want to support skepticism and free-thinking or whatever this article is aiming at, this article is wrong and bad. I have the means to go out and determine true facts and I do this routinely.


The problem is the word "fact". Somehow this is now perceived as an eternal truth, rather than a statement which is true right now, but might change soon.

Some of the examples point this out nicely, let's take the 10 biggest cities in the US. Isn't it obvious that this "fact" only works with a timestamp attached to it? The 10 biggest cities in 1990?

Maybe this is more about the language in science, textbooks and science journalism. A little more precision would make the real facts stand out better.


Re: The title: Not for me, baby. I haven't many "facts", and most of those are math or avowed articles of faith. I learned a long time ago to replace "is" with "I have read that" or "I have heard". My opinions are wrong all the time -- but not for long.


Several of these are things that were the good-faith best effort at the time based on available knowledge and/or technology. (DNA, cold-blooded dinosaurs, star size limits, etc).

I don't much like the depiction of scientific knowledge as "manufactured facts".


Complete fantasy tangent here but ...

It's amazing to me how much effort we as human beings put into memorizing and retaining 'factual' knowledge like this. Can you imagine how far we could go if all this knowledge could literally be loaded into our brains at an early age, so that instead of basic arithmetic (for example) kids in elementary school were already doing quantum physics and things of that nature?

I know it's far, far away, but it really, really fascinates me to think of the implications of just doing away with having to relearn "learned" things so that we could devote our energies to far more advanced topics.


Assuming that the act of learning doesn't 'grow' the brain in a meaningful way as part of the process. I think that's probably an assumption too far though.


When you learn, the myelin between the relevant neurons increases. The Talent Code discusses this at length, and argues that skill in something is a function of myelin strength.


>> Half the Facts You Know Are Probably Wrong

Now to which half does this fact belong?


The article speaks about the half-life of truth, but that is really just how long it takes us to find out it was never true in the first place. But even that is assuming that in 100 more years we don't find out it was actually correct. We need to make a distinction between truth that becomes outdated and facts that were never true in the first place. So what percentage of current scientific conclusions are likely to be wrong?


Given the source, I can't help but suspect there's an underlying desire to willfully misunderstand how science works in order to dismiss all research that could be used to support or justify regulation.

Hey, you never know, maybe in ten years we'll find out tobacco use doesn't cause cancer. Better remove all the restrictions that have been placed on tobacco until we're certain.


I would agree that having a healthy scepticism of those in authority is a very fundamental part of libertarian philosophy. But if you find the author's political inclinations objectionable you can read the original book by Samuel Arbesman that Ronald Bailey based his article on.


We are wrong more often than we are right, but the scientific method has proven itself a powerful error-correction tool.


> Arbesman notes that “the number of neurons that can be recorded simultaneously has been growing exponentially, with a doubling time of about seven and a half years.” This suggests that brain/computer linkages will one day be possible.

What?


Current brain-computer interfaces have 96 needle-like electrodes, so can interface with 96 neurons. That produces an enormous amount of data at the sampling rates used, but is already being used for things like controlling robot arms.

In 7.5 years, perhaps it'll be up to 192 neurons, if the connected computers and processors can keep up with the data flow.

7.5 years after that, maybe it'll be up to 384. Etc.


These days, QI is my favourite TV show for exactly these reasons.

http://www.youtube.com/results?search_query=qi&page=&...


I guess I have a small problem with this use of the word "fact" which to me is something provably true. What this piece is actually saying is that half of scientific theory is probably wrong.


Only in mathematics are things provably true. In science, things are considered true based on the best theories and experimental evidence we have so far, and our observational abilities increase as technology gets better over time. For example, Newtonian mechanics was considered "true" for centuries, until it was replaced by relativistic mechanics. In the 18th century, we just didn't have the equipment to measure phenomena at quantum scales; now we do. As we observe the world in greater detail, we find that things that seemed to be true before are now false. That's the best that science can do.


Actually, I'd say that in mathematics, things are provably true given that the initial assumptions are true. However, mathematics does not so far as I know deal with the truth of those initial assumptions (axioms), it simply accepts them as true.

Where this gets interesting is that you can say, what if this axiom was not true? Let's assume it's not. And then possibly come up with a whole new system of mathematics (like non-euclidean geometry) which may turn out to have actual use in previously unsolvable problems.


I think it's better to conceive of mathematical axioms as being part of mathematical system X or not, rather than as true or false. That is, they're not self-evident or universal or true according to some external standard, they're merely postulated as properties of a given mathematical structure.

See http://en.wikipedia.org/wiki/Axiom#Non-logical_axioms


That seems like a useful understanding.


Adding to this, maybe "true" is a bit of a loaded word. Newtonian mechanics works just as well today as it ever did. I prefer to think of it in terms of usefulness. 18th century knowledge is like a Pentium 3 that we just don't want anymore.


I have a more than small problem with the use of the phrase scientific theory[1] which to me is something provable.

>A scientific theory is a well-substantiated explanation of some aspect of the natural world, based on a body of facts that have been repeatedly confirmed through observation and experiment. Such fact-supported theories are not "guesses" but reliable accounts of the real world.[2]

[1]: https://en.wikipedia.org/wiki/Scientific_theory

[2]: http://www.aaas.org/news/press_room/evolution/qanda.shtml


I'm not sure how you'd prove the theory that dinosaurs were cold-blooded, short of taking the rectal temp of a Stegosaur.

But you could falsify the theory by finding characteristics incompatible with being ectothermic. Finding dinosaurs with feathers and other characteristics of the (warm-blooded) birds probably helped, too.

Some of the "incorrect facts" are really theories that were substantiated to the best degree possible at the time, and no falsifying evidence existed at the time.

I suspect some, like "non-functional junk DNA", is a case of scientific qualifiers being ignored in the mainstream media and forgotten.

Humans like "facts", but we inevitably drop the error bars that originally accompanied each new "fact".


It depends on the meaning of "wrong". If it means that there will be facts that are not accounted for by the current version of a certain theory, then it's probably true for most of them. If it means that our current interpretation of certain facts is completely incorrect - such as we think X is caused by Y, but in fact X is caused by Z, which has zero correlation and no causal links with Y - I would be very surprised if it turned out like this for half of the established theories. Though for those that are just being developed, probably much more than half of them will be wrong in this sense. I guess it also depends on the field - I'd say it's unlikely we'd have some overwhelming change in classical mechanics, but if you talk about something like biology or neurophysiology, there are a lot of things that are not clear, so it is entirely possible that our understanding of, for example, how addiction works would be radically different in 20 years.


Including this one.


It's not that widely held bogus views turn out to be wrong by accident or from some congenital defect in human reasoning.

Isaac Newton's physics, while not applying to all possible contexts, still holds as correct over a vast range, and that's because it properly identifies an aspect of Nature's behavior, and it did so due to Newton's extreme methodological diligence. Some of these other "facts" that get overturned likely did not adhere to Newton's Rules of Reasoning, or had some other flaw that led people to jump to conclusions prematurely.

Of course, it's exceedingly unpopular to point out that we should learn why we make mistakes and then strive to not make them by scrutinizing the source of the error and adopting better methods. Most people are amenable to having their mistakes pointed out, but are extremely offended if you try to point out that the way they are arriving at conclusions is mistaken. It's as if most people don't want to believe that yes, there is a difference between wisdom and foolishness, and instead want to believe that the highest form of foolishness is in fact to believe in wisdom.



