We Don’t Need More Blood Tests (fivethirtyeight.com)
110 points by fisherjeff on May 13, 2016 | 96 comments



It seems like one of the biggest issues here is that a single test for a given thing isn't enough data to be able to reliably tell whether or not the results are significant. So people are arguing that we should collect fewer data points?

I totally disagree with this viewpoint. The medical community's current approach to testing (positive result, treat ALL THE THINGS!) is an artifact of the difficulty and cost of performing the tests; the inability of many providers to apply basic concepts of probability to test results should not be used as an argument against advancing the state of the art, particularly as the industry begins baking data-driven clinical decision support into automated health systems.

If you have 40 tests spanning 20 years saying that you aren't at risk for Total Scrotal Implosion, and then suddenly, without any symptoms, you get a result saying your testicles will fall off tomorrow, you have context with which to interpret this result. Without the historic data there is much greater risk of you and your healthcare provider agreeing to an unnecessary knee-jerk scrotalectomy.

Less data is never the answer. Just my 2 cents.


"If you have 40 tests spanning 20 years saying that you aren't at risk for Total Scrotal Implosion, and then suddenly, without any symptoms, you get a result saying your testicles will fall off tomorrow, you have context with which to interpret this result. Without the historic data there is much greater risk of you and your healthcare provider agreeing to an unnecessary knee-jerk scrotalectomy."

This would be true if the reason for a test's error rate were inaccuracy of the result (e.g. I am trying to measure temperature, and 5% of the time it reads higher than it actually is). In that case, measuring often and keeping track of historical data would help.

However, this is NOT the major problem with these sorts of medical tests. The issue is that they are measuring something only RELATED to the disorder they are screening for, and not the disorder itself.

The actual fact is something more like: We have noticed that people that measure above value X on this test have higher rates of Total Scrotal Implosion.

However, there are lots of people who have above value X on the test who do NOT ever get Total Scrotal Implosion. You can test them every day, and the test accurately measures that they have higher than X of whatever is being tested - but they will never get TSI.

You can't fix this with more tests and tracking historical data - the test is accurate for what it is measuring, so repeated tests aren't going to change the overall accuracy of the PREDICTION that is being made from the test.


I'm not sure that's true, if the prediction accounts for the fact that many people with a high value never get the disease. But that's only possible with sufficient data showing a negative correlation.


If many people with the high value never get the disease, then the historical data won't help you discover a false positive (which is what the person I was responding to was arguing).


You are incorrect.

All kinds of "shotgun testing" (i.e. indiscriminate testing for everything like you recommend) have been studied, and proven worthless at best, and more often than not, actively harmful.

First, there is the issue of tests' limitations and extremely low predictive power. For instance, if testing positive on A makes it 20 times more likely that you'll get B, and the prevalence of B is 1 in 1,000,000 in the general population, your own personal risk remains low enough that nothing has changed -- except that you will panic and do unnecessary interventions to reduce this risk. That is precisely the reason tests are ordered when you already have symptoms: if your pretest probability of disease is 10%, a positive test result means you most probably have it, and it is worth doing something about it.
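To make that arithmetic concrete, here is a rough sketch in Python (illustrative numbers only, treating the "20 times more likely" as a likelihood ratio):

    def posttest_probability(pretest_prob, likelihood_ratio):
        # Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio
        prior_odds = pretest_prob / (1 - pretest_prob)
        post_odds = prior_odds * likelihood_ratio
        return post_odds / (1 + post_odds)

    # Screening the general population: prevalence 1 in 1,000,000
    print(posttest_probability(1e-6, 20))   # ~0.00002 -- still essentially zero

    # Testing someone with symptoms: pretest probability 10%
    print(posttest_probability(0.10, 20))   # ~0.69 -- now worth acting on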

Second, all known treatments (including "preventive" ones) carry non-zero risk. When you don't have symptoms, whether or not you test positive, your risk of dying from a disease remains lower than dying from an intervention to prevent the disease -- thus, you gain nothing by testing.

Let's take for instance a 40-year-old female who gets an ECG done for no good reason. It shows signs of heart disease, which could be a variant of normal or a sign of a disease. The lady is worried, so she goes on with a stress test just to be sure. She tests positive (a sizeable proportion of those tests are false positives, for multiple reasons), so she decides to follow up with a coronary angiogram to see if there's any blockage. The angiogram is normal, but a coronary is perforated during the procedure (1/10,000 risk), and she dies on the table, when she never had any health problems beforehand. This kind of stuff happens all the time.

Finally, from an ethical stand-point, as long as healthcare -- and the individual's stupid testing choices -- are paid for collectively, individual choices should be severely restricted.

If we were in a country without any kind of state-sponsored healthcare, where you'd get to pay for any self-harm from your own pocket, I'd argue for free-for-all testing for anyone without any oversight.


I'm a physician. This is the thing most people don't get about tests. It's tiring to see these startups with their flawed agenda.


But what if we could redesign the healthcare system in a way that doesn't expose people to all their test results, only critical ones, but gives the doctor/system that data in order to better watch their patient?

Would that be useful?


No, as I explained, when disease prevalence is low, most tests -- even when clearly "positive" -- don't shift probabilities in any appreciable way, and just add to the noise, and render decision making even harder.

Of all patients, nurses and doctors are the ones least likely to ask for "more tests", precisely because they understand that tests are essentially meaningless when pre-test probability is very low.

Suggested readings :

1. Bayes' Theorem: https://en.wikipedia.org/wiki/Bayes%27_theorem

2. Base rate fallacy: https://en.wikipedia.org/wiki/Base_rate_fallacy


Well, it's a good point but how do you decide what's critical?

To answer my own question; I believe the future lies in machine learning algorithms processing symptoms and tests (I guess a symptom is also a test in the sense that it's the answer to a question).

Most of the time there's also not a simple answer to be found. The right answer depends on many factors including the capabilities of your hospital/country/economy and the state of science.


> The right answer depends on many factors including the capabilities of your hospital [...]

Absolutely!

I'll add that the physician himself is a kind of test, in the sense that his own sensitivity/specificity to diagnosing a disease can be calculated.

It is well known -- and I'd argue it's a feature, not a bug -- that the exact same patient with the exact same symptoms will get a different work-up depending on whether he's seen by a GP, an emergency physician, or, say, a heart surgeon. The reason is pretty simple: because disease prevalence is different in those three practices, the doctor has to order more or fewer tests to get the same predictive power. E.g., when every patient has heart disease, every ECG change is probably a sign of disease, whereas when almost nobody has any heart problems, ECGs are pretty meaningless.


>> I believe the future lies in machine learning algorithms processing symptoms and tests

At the core you're still left with the question: tell your patient directly the results of a statistical analysis of a few possibilities and options - and often see him take the wrong one - or guide him through trust (in you or the machine) while not showing him the full details. Right?


The person you're replying to isn't incorrect. He was saying that a few tests that you treat as predictive aren't as helpful as a long line of tests that are interpreted over time.

Your example is someone taking a test and immediately moving ahead with treatment. Not someone taking a test, noting the result was interesting, and then taking more tests over the next few weeks, months, or years to confirm the results were something to be concerned about.

If the tests are cheap and simple to run, there is no reason to EVER act on the basis of a single data point.


You have no clue, sorry.


While I agree that it's easy to misinterpret statistics, and I agree that sometimes the government needs to restrict freedom of choice to nudge the collective good for the betterment of society, I just can't see testing as one of those issues compared to the other health issues of society, like overwork, junk food, sedentary life styles etc...


Nah, 'paviva is right. Unnecessary tests are a problem, and you can put them in the same bag as hypochondria and WebMD abuse in terms of things where our brain's heuristics and cutting corners in evaluating probabilities start to work against us.

> compared to the other health issues of society, like overwork, junk food, sedentary life styles etc...

You can always find bigger issues. But given that there are startups and big companies that try to solve the issues you mentioned with spurious, half-assed, unscientific pseudo-tests (yay "wearables", yay "Internet of Things"!), it's even more worrying, because suddenly testing abuse may get coupled with the problems above.


This is like the people talking about throwing away their scales or only weighing themselves every few weeks while trying to control their weight.

The solution isn't fewer data points, it's to collect the data frequently and rigorously, and then post-process it into a trend.

That's how testing should be done. You get a long series of measurements that are post-processed into a coherent picture of reality.


No, this is like people telling others to weigh themselves only once a week - because they know that members of the general population can't be bothered to understand "mathy" concepts like a running average, or (gasp!) a low-pass filter. It's recommended because otherwise a lot of people end up freaking out over noise in the data. And this is exactly the topic here - laymen freaking out over data they're not equipped intellectually and emotionally to comprehend.

Now the other point - that moar data is always better; in principle, yes, if you follow rigorous rules about collecting, analyzing and integrating it into the existing body of evidence. Which is not what usually happens outside research conditions. As you said,

> [the solution is] to collect the data frequently and rigorously, and then post-process it into a trend.

The thing is - we're having big problems with the "rigorously" part, as well as with no-bullshit post-processing. The current wave of companies selling "health" sensors ain't helping - they push frequent collection in a totally non-rigorous way, using half-assed measuring equipment, and give that data to normal people first (besides taking it and monetizing it), many of whom will obviously freak out. This is not helping to form a "coherent picture of reality". It's just helping those companies line their pockets.
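For what it's worth, the post-processing being asked of laymen here is nothing fancier than a running average; a minimal sketch in Python with made-up weigh-ins:

    def running_average(values, window=7):
        # trailing moving average -- a crude low-pass filter over noisy measurements
        smoothed = []
        for i in range(len(values)):
            chunk = values[max(0, i - window + 1): i + 1]
            smoothed.append(sum(chunk) / len(chunk))
        return smoothed

    # daily weigh-ins (kg): noisy day to day, slowly trending down
    daily = [82.4, 82.9, 81.8, 82.6, 82.1, 81.5, 82.3, 81.9, 81.2, 81.8]
    print(running_average(daily))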


Completely agree. False positives are a fact of statistics, what we need is more data to establish baselines for people and populations and understand when is the right time to worry. I'm genuinely surprised that doctors aren't leading a charge to get more data about their patients and arguing instead for less testing and data. Do they really feel that their diagnostics are so good that they only need a few data points every couple years? My watch and phone know more about my health than my doctor.


It's been addressed elsewhere in the thread, but the problem is not statistical false positives, but rather technically "true" positives (for what the test measures) that do not correlate to an actual disease. More data points do not help.


More data points do help the individual know their baseline range for when they feel healthy compared to range when they experience a health problem. Looking at second order effects like when a value suddenly changes for a particular individual can be very indicative of a problem. More data points would also allow us to look at correlations in the data to better refine the interpretation for combinations of tests.

I can understand economic reasons why, since we're all paying for insurance collectively, we might want to limit testing. What I have a harder time understanding is the medical profession demanding that physics change to accommodate their process rather than changing their process to accommodate physics and statistics.

I really don't expect any tests to be perfect. I especially don't expect any test given when I'm sick to be able to tell me what a normal value for me should be when I'm well. What I would like to see is us embrace the reality of data and have enough of it that we can start to separate the signal from the noise. Just look at examples like the success of The Nurses Health Study[1] because they looked at lots of data over lots of years from lots of people. Not surprisingly a lot of health issues are difficult to understand looking at single data points.

[1] http://www.nhs3.org/


> The medical community's current approach to testing (positive result, treat ALL THE THINGS!)

No, that's what the patients want.

"You do have cancer. We're going to watch and wait." is something that's only recently been accepted by some patients, even though the side effects of treatment are so drastic. And those positive results only happen because people push inappropriate testing.

> Less data is never the answer.

More dirty data isn't particularly helpful.


> > The medical community's current approach to testing (positive result, treat ALL THE THINGS!)

> No, that's what the patients want.

And the insurance companies, because they don't want to be sued for a false negative with unfortunate consequences. And there's an additional terrible incentive in that if you treat "just in case" and it has a negative consequence in terms of lifestyle, it's OK because "it's better than the alternative".

Consider prostate cancers that develop slowly and could probably have been left alone (referred to as "watchful waiting") -- but if you operate, the patient will survive; the side effects like incontinence aren't the doctor's or insurance company's problem.


The data isn't "dirty", it's perfectly accurate. You are just using it wrong. The test can give you an objective answer like "you have a 20% probability of having cancer".

But the system doesn't weigh the cost and Quality Adjusted Life Years of treatment vs. no treatment; it just defaults to treatment. This is the problem that needs to be fixed, not eliminating data collection.

And if the patients really are the problem, then don't show them the raw numbers. But having them is potentially useful. But maybe they should see the numbers, and maybe if they decide on treatment anyway that is their right to do so, and taking it away is wrong. Either way the problem is the system, not tests themselves.


You're assuming people know what the test results mean. Every time we ask people what the tests mean we find they don't know.

Gerd Gigerenzer (Reckoning with risk) shows that doctors, nurses, and patients don't understand the results of screening tests.

Here's another example: https://www.sciencenews.org/blog/context/doctors-flunk-quiz-...


That's a problem that can be fixed. Very basic statistics is much simpler than most of things doctors have to learn about. The article explained it elegantly in a simple graphic.

I find it hard to believe there is ever a time where collecting less data is an improvement. At worst the data doesn't change anything, but at best it gives you new information that improves outcomes.

If more (correct) information is actually making outcomes worse, it's not the information's fault. It's the system using that information incorrectly.


> But the system doesn't weigh the cost and Quality Adjusted Life Years of treatment vs. no treatment; it just defaults to treatment. This is the problem that needs to be fixed, not eliminating data collection.

Doesn't it? I mean, it depends on the place probably, but I remember having a class with an MD once, and we were discussing the overall goal of healthcare and how to balance physical and mental well-being. The problems that arise there are exactly like this: you know, with your "perfectly accurate" data, that the patient has X and, say, 3 years to live, with serious symptoms showing up only close to the (for lack of a better word) deadline; telling them about it will most likely mean 3 years of stress, painful treatment, and heavy strain on the patient's family and friends for, at best, a small extension of the lifespan. Not telling them means they live 2.5 years happy and then get sick for the last half year. Should you tell them?

Most people scream "yes", and that's exactly your approach of "defaulting to treatment". Doctors would sometimes like to answer "no", but that means lying to the patient, and not showing them the data.

> And if the patients really are the problem, then don't show them the raw numbers. But having them is potentially useful. But maybe they should see the numbers, and maybe if they decide on treatment anyway that is their right to do so, and taking it away is wrong.

It seems like a free will issue, except that if 99% of people do the same wrong, stupid thing when experiencing a particular situation, it doesn't seem right to let them suffer from it. It's one of those human rationality errors. Sometimes people do need to be protected from themselves.

Now the problem is that the current trend of separating the doctor's office from the lab - whether via third-party private labs or all those half-assed smartphone-based tests - means that it's hard to hide raw data from the patient.

And yeah, I'm a bit conflicted about it - I want to look at my own raw data, I want to play with it, graph it, whatever, but I'm also aware I might freak out if something really weird shows up in them.


> More dirty data isn't particularly helpful.

Depends on how it is dirty. If it is systematic error, then of course it doesn't help. If it is statistical error, then repeating the test over and over is exactly what you need to do.


Most medical tests have systematic rather than statistical error. False positives will continue to be positive if you re-administer the test, because they are due to variation in the individual and not in the test.


I definitely wish I had a comprehensive log of tests over my lifetime. I'm now more interested in the changes (trends) than the absolute values.


To clear out some of the issues that may come from an overly emotional approach to medicine, let me state an equivalent problem: arguing that MOAR DATA is good for medicine is the same as arguing the TSA needs to do MOAR screenings of all kinds at airports, on everyone.

Yes, the same statistical issues apply to both cases, and so do harm/good tradeoffs.


What do you think happens to the person who receives a single lab result, out of many, indicating that they have cancer? Human nature is to focus on the negative. To assume that humans can, by looking at a multitude of data points, ignore the small # of false positives is to misunderstand the evolution of human emotion.

https://www.psychologytoday.com/articles/200306/our-brains-n... talks about why we are evolutionarily programmed to latch on to the negative aspects of our life


> Less data is never the answer. Just my 2 cents.

That assumes perfectly rational reactions. Many people can't deal with "You tested positive for X. We should keep an eye on it and see if it develops into something." It makes them nervous. They want a pill. They want surgery. Etc.

The problem with even really good tests that measure exactly what you want is that they have 4 modes - 2 good (test positive for X and you actually have X; test negative for X and you don't have X) and 2 bad (test positive for X but you actually don't have X; test negative for X but you actually do have X).

The problem is that when the actual incidence of "you have X" is very low, the "test positive for X / you actually don't have X" cases can swamp your signal.

Add in the natural noisiness of biological systems, and you wind up with lots of incorrect assessments.


> Many people can't deal with "You tested positive for X. We should keep an eye on it and see if it develops into something."

As I see it, this is mostly a healthcare UX problem. If a test is such that a negative result is very reliable in ruling out the condition tested for but, because of the combination of false positive rate and low incidence, a positive result doesn't indicate the presence of the condition, it shouldn't be presented to a non-technical end-user (i.e., most patients) as a positive result. It should be "The test to rule out Condition X was not able to rule it out."


Similarly, when running a hypothesis test statisticians "fail to reject" the null hypothesis, rather than accepting it.


For those who haven't had a chance to read about the negative impact of testing, check out Atul Gawande's Overkill - http://www.newyorker.com/magazine/2015/05/11/overkill-atul-g...

It explains why one of the most dangerous things for a healthy person to do is get tested. I've been in healthcare analytics for almost a decade now and see the same thing in the population data.

Theranos and other pro-testers are usually well intentioned but fundamentally misguided and haven't looked at population wide data sets, which tell a different story.


What Gawande describes is more of a systematic irrationality in the healthcare system than a flaw with a test.

The decision to over-treat is part of the tradeoff we get when physicians are viewed as authority figures. Their inaction (not doing a treatment) is viewed as a delegitimization of the patient's needs, and so there is social pressure to treat, even when harm could be caused. This is a psychological blind spot that equally affects patient and physician.

But with respect to tests, so long as a test has a known false positive and false negative rate, its result can be accurately factored into a probabilistic model of a patient's overall health.

Our healthcare system is biased toward acute conditions and extreme interventions. Things like early disease progression and wellness are generally not even considered relevant to most doctors.

The reasoning approach of an evidence-based differential diagnosis which is taught to medical students is a powerful heuristic, but it is designed to work within the constraints of acute illness and (potentially) urgent intervention. So of course it fails when test results are considered without appropriate measures to improve the signal-to-noise ratio of the first branch of the decision tree.

With any kind of broad-spectrum, speculative testing, any result would need to be considered over time and in the context of many other factors. It is not a drop-in replacement for any step of the traditional differential.


"But with respect to tests, so long as a test has a known false positive and false negative rate, its result can be accurately factored into a probabilistic model of a patient's overall health."

In order to do that, you would also need to know the correlations BETWEEN tests in terms of false positive and negative. And there are a lot of combinations.


Not only that, many countries have health care systems where the same person who earns their living from testing also earns their living from treatments. That's a tough premise to work with if you're designing health care for Utopia.


> its result can be accurately factored into a probabilistic model of a patient's overall health.

But my health with respect to an illness is not probabilistic[1]. I either have the illness or don't have the illness. Probabilities are not useful when the sample size is one (me).

[1] Pedantic: it is probabilistic, but the probability is either 0% or 100% because the confidence interval sucks.


It's probabilistic because you don't know whether or not you have an illness. It's like tossing a coin - it's 100% on one side and 0% on the other, but you don't know which side it is on until you check - that's why we say a fair coin has a 50% probability of landing on either side when tossed.

Now the test you use with that (mathematical) coin is 100% accurate. Tests in medicine are not. They're more like "oh I see you sort of seem to have X; X has been known to occur a bit more in people suffering from Y than in those not suffering from Y". Hence the uncertainty.


I strongly disagree. We need more blood tests, and we need them badly. We need them in forms which people can self administer, outside of the prescription-and-professional-blood-draw model.

The history of blood glucose testing is informative here. It used to be a hospital lab test, like most other tests: done fasting, infrequently, to diagnose diabetes. Today, that test is done with an over-the-counter test kit, and diabetics do it multiple times per day. This provides information that the infrequent, hospital version couldn't provide at all: how blood glucose responds to meals.

There are many other tests which, if they could be frequent and self-administered, would enable people to make new discoveries. The common metabolic tests, for example - HDL, LDL, etc - are very closely analogous to glucose tests, in that they respond to meals and that response is probably more informative than the fasting test. But there's nothing analogous to a glucose tolerance test for cholesterol, because that requires ten tests in a row and that's too expensive.

One of the more common complaints I hear from friends is about migraines. Blood tests are worthless there because it's impossible to get a blood draw during an actual migraine. Same for most mental illnesses; comparing blood tests between a bipolar person's manic and depressive phases would be fascinating, but no one does it.

And that's not even mentioning micronutrient status screening. The rates of micronutrient deficiencies in the United States found by the National Health and Nutrition Examination Survey (NHANES) are shocking; it's the twenty-first century and the rate of iodine deficiency is 9%.


> The history of blood glucose testing is informative here.

You're conflating a diagnostic test with a test that patients need to control dosing (of insulin). To make a diagnosis of diabetes, such frequent testing is not any more informative. Better tests, such as HbA1C, have been developed to indirectly measure blood glucose levels over a 3-month timescale, which is more appropriate for diagnosis.

I don't think there's any evidence yet that people being able to monitor their lipid levels while eating provides any useful medical information, unless you have some kind of (incredibly rare) inherited lipid metabolism deficiency.

> comparing blood tests between a bipolar person's manic and depressive phases would be fascinating, but no one does it.

There in fact has been plenty of work on this, but in a research setting, where it belongs. See section 6 of http://www.ncbi.nlm.nih.gov/pubmed/27017833

> it's the twenty-first century and the rate of iodine deficiency is 9%.

Micronutrient deficiencies are usually a result of dietary choices. This problem is more easily solved by encouraging everyone to take a daily multivitamin, which would be completely prophylactic, than by encouraging the same population to subscribe to a series of blood tests that may or may not reveal the problem, and would require follow-up action. Again, think about it from a population health perspective.


The fact that you immediately jumped to the assumption that the only useful thing one could do with glucose testing is to diagnose diabetes or plan insulin doses is indicative of the failure of imagination endemic to the system.

Glucose testing is useful for all kinds of things; the fact that you yourself (and most doctors) don't know that, or think that other people can't be trusted with their own data without some Credentialed Professional to interpret it for them, is both insulting and limiting.

I don't want to go back to a world where AT&T had to anticipate the ways I'd want to use telecom. Although sadly in some respects we've never left it.


Whoa, let's not put words in my mouth here. First of all, I said nothing about denying people access to their medical data. Once the tests are done, yes, it's the patient's data (and in the US, HIPAA concurs). We're not in disagreement there.

Secondly, there may be all kinds of other uses for glucose tests that one could research, but consumers running tests on themselves in an uncontrolled manner is not research. I would never say that no other uses will ever be discovered, but let's do that scientifically, please. My specific issue was with how diabetic glucose self-testing was used as rhetorical evidence that more blood tests help people, while failing to note that those tests are done to dose (potentially dangerous, fast-acting) medications, not to "keep tabs" on anybody's diabetes in a diagnostic sense, as was implied by the omission.

You say below that "people are coming around on glucose in the same way that we now understand that the cardio signal [...] is predictive of an enormous number of physiological and psychological phenomena." That's a lovely hypothesis, but please tell me who these people are, and please show me the evidence of the predictive value.

Until then, the Credentialed Professionals are perfectly justified in shrugging their shoulders at post-prandial glucose data from healthy patients (who, contrarily, will demand that needless and dangerous follow-up procedures are ordered for them), and the companies selling consumers these tests will not be helping anybody become healthier. I could go on, but this comment sums up the societal effects better than I could, even referencing your "ideal" of the ECG for screening. https://news.ycombinator.com/item?id=11694341


Really? Okay.

Dunstan, D. W., Daly, R. M., Owen, N., Jolley, D., De Courten, M., Shaw, J., & Zimmet, P. (2002). High-intensity resistance training improves glycemic control in older patients with type 2 diabetes. Diabetes care, 25(10), 1729-1736.

Mäntyselkä, P., Miettola, J., Niskanen, L., & Kumpusalo, E. (2008). Glucose regulation and chronic pain at multiple sites. Rheumatology, 47(8), 1235-1238.

Newcomer, J. W., Haupt, D. W., Fucetola, R., Melson, A. K., Schweiger, J. A., Cooper, B. P., & Selke, G. (2002). Abnormalities in glucose regulation during antipsychotic treatment of schizophrenia. Archives of General Psychiatry, 59(4), 337-345.

Nybo, L. (2003). CNS fatigue and prolonged exercise: effect of glucose supplementation. Medicine and science in sports and exercise, 35(4), 589-594.

You have a lot more faith in Credentialed Professionals than I do, apparently. Or a lot less faith in anybody else.


Just curious, what are the other uses for glucose testing?


Getting a snapshot of your body's ability to metabolize carbohydrates is a much more granular view into metabolic health than are 'summary statistics' like A1C. Continuous glucose monitors, like those by Dexcom, are especially valuable -- you can get a detailed characterization of insulin response to food, of energy mobilization during exercise of different intensities, of general systemic stress, etc.

People are coming around on glucose in the same way that we now understand that the cardio signal (preferably using a sensitive measure like ECG, but even with crappy PPG sensors) is predictive of an enormous number of physiological and psychological phenomena.


Evidence is needed here.

This isn't an industry where "disruption" is harmless; moving fast and breaking things in the endless pursuit of personal fortune is going to cost many innocent people years of their lives.

Because there is no one single class of people, you can't just prescribe on the basis of a self-administered blood test.

Take a liver function test: depending on what you are looking for, and when you last ate, you can "prove" you have the beginnings of cirrhosis. Without controlled testing, people will be led to the wrong conclusions.

Also, what is the half-life of the compounds you are testing for? How long can they sit in a vial before they start to distort?

This is an actual science, and needs to be treated with some respect for the scientific process (evidence leads, actions follow.)

You can draw blood during a migraine, the vein constriction is in the brain, not the rest of the body, otherwise there would be a trivially simple test for migraines vs headaches.

I could go on. The point is this: Theranos is basically a symptom of the wrong type of progress - snake oil dressed as science in the pursuit of personal profit. No science was shared, and humanity's knowledge was not expanded, despite the huge amount of money wasted on something that was clearly bollocks.


You aren't giving false positives enough weight. False positives can be DEVASTATING. If a test has 50, 75, 90% false positive rates, as many do, you are causing much panic for nothing.

If tests were 100% reliable and a simple pinprick.. absolutely, everyone should take one when they wake up and drink their morning coffee. But with the accuracy of tests today? Not so sure.

Basically, take any 100% healthy person. Run 500 tests on them. They will have 5-6 major problems via false positive results. Run more detailed, invasive tests on those. Maybe you rule them out, maybe you get another false positive. You could end up going through chemo or some other "cure" for a disease you never had, and the "cure" could end up giving you real cancer.


>If a test has 50, 75, 90% false positive rates, as many do

What tests are you referring to? I can't find a reference, but I highly doubt the FDA would approve many tests with false positive rates that high.


I think these statistics are claiming '50-90% of observed positives are false', not '50-90% of all tests return false positives'. Suppose 0.1% of the population has a condition and the test for the condition has a 0.5% false positive rate. If administered blindly, most positive tests will be false positives.
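Working through those illustrative numbers (a quick Python sketch, assuming for simplicity that the test never misses a true case):

    population = 1_000_000
    prevalence = 0.001            # 0.1% actually have the condition
    false_positive_rate = 0.005   # 0.5% of healthy people test positive anyway

    sick = population * prevalence                                 # 1,000 true positives
    false_positives = (population - sick) * false_positive_rate    # ~4,995

    print(round(sick / (sick + false_positives), 2))   # ~0.17 -- only about 1 in 6 positives is real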


I think this is the crux of the article's argument - more people testing more frequently and "blindly" means a whole lot more positive results, most of which will be false.


Such a test would still be very useful, when combined with other independent weak tests. If 5 weak tests in a row confirm you have cancer, then you should worry.

And anyway just report the numbers to the doctor and the patient. "This test has a 5:1 likelihood ratio, so if the odds of you having the disease were 1 to 100, it's now 5 : 100 or 1 to 20."

Everyone needs to get a better grasp on probability.
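A sketch of that odds arithmetic in Python (illustrative numbers; note it assumes the tests are genuinely independent, which real tests often are not):

    def combine_odds(prior_odds, likelihood_ratios):
        # Bayes' rule in odds form: multiply prior odds by each test's likelihood ratio
        odds = prior_odds
        for lr in likelihood_ratios:
            odds *= lr
        return odds

    prior = 1 / 100                       # 1:100 odds before any testing
    print(combine_odds(prior, [5]))       # 0.05 -> 5:100, i.e. 1 to 20
    print(combine_odds(prior, [5] * 5))   # five independent weak positives -> 31.25:1, time to worry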


Which doesn't mean it's not a good test to rule out the tested condition if there are symptoms that might indicate it. Sometimes it's clinically valuable to rule out a condition, particularly if the appropriate treatment for another possible condition would be different if the tested-for condition might be present. There are people whose training makes them, in principle, the appropriate experts to decide when these tests are needed and worthwhile; they're called physicians.


>'50-90% of observed positives are false'

if ^ is what he means, he should write that. The term "false-positive" has a very clear definition in medical statistics.


Fair enough. But the more rare a condition, the more observed positives will be false.


If you read the fine article, you will see a very clear example of how a highly accurate test can still be misleading for clinical practice if the clinician is deciding whether or not to treat for a low-base-rate condition. Examples like this have been in books about statistical reasoning for medical decision-making over and over and over again since the turn of the century.


There's a difference between false positive rate and positive predictive value:

https://en.wikipedia.org/wiki/False_positive_rate

https://en.wikipedia.org/wiki/Positive_and_negative_predicti...

The latter depends on the prevalence of the test condition in the population, which is one of the major points of the OP.

A test can have a low false positive rate but still have a low positive predictive value if the test condition is sufficiently rare (as it is for most diseases). brianwawok was probably referring to tests with a low positive predictive value.


All very true. Perhaps he mistyped, not sure. I'm just pointing out that what was written was factually inaccurate.


Your doubt is misplaced, at least at the high end. Pap smears have a false negative rate of ~20% and a false positive rate of ~10%. Many doctors are now recommending less frequent tests:

http://www.wsj.com/articles/SB125875596169058039


A 10% false positive rate is significantly lower than the quoted "50, 75, 90%".


How about you read the article the entire discussion is about? You will learn what is meant right there.


It is indeed a pity blood tests are regulated and not available to anyone who wants them -- I would be making millions out of people like you, who'd draw blood before, after, and during a migraine, and test for who-knows-what for who-knows-why. I'm curious, though, how it is that blood tests are not possible during a migraine attack; I really cannot understand why that should be so.

That being said, you have no clue about human physiology, and not much more about clinical biochemistry.


I'm a T1 and poke my finger on average 10 times a day. It's led to an A1C of a nearly non-diabetic person, 5.3%.


Abstracting out the essence of this article: It's arguing about the danger of ubiquitous "big data" in certain contexts.

The human population is around 8 billion, which is about 2^33. 33 is roughly the number of pairs you can form out of 9 objects. Heuristically, this would mean that if we test for more than 9 boolean variables (high/low), purely by randomness we will start seeing correlations due to a "limited" population size of 8 billion. If we start treating people just based on those observations, we would seriously wreck human health.

Diagnosis (decision making) based on correlations (rather than causality) is very tricky.
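The exact arithmetic is debatable, but the underlying multiple-comparisons worry is easy to demonstrate: correlate enough unrelated variables on a finite sample and some pairs will look related purely by chance. A rough simulation sketch (Python with NumPy, made-up sizes):

    import numpy as np

    rng = np.random.default_rng(0)
    n_people, n_vars = 200, 40   # 40 unrelated boolean "markers" in a small cohort
    data = rng.integers(0, 2, size=(n_people, n_vars)).astype(float)

    corr = np.corrcoef(data, rowvar=False)   # correlation of every pair of markers
    np.fill_diagonal(corr, 0.0)
    n_pairs = n_vars * (n_vars - 1) // 2
    print(n_pairs, round(float(np.abs(corr).max()), 2))
    # hundreds of pairs, and the strongest "correlation" (typically |r| around 0.2)
    # is pure luck -- none of the markers are related to anything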


For the record, we _are_ treating people based on exactly these kinds of observations, albeit correlated over a much smaller number of people. However, we make use of probability and other observational and subjective evidence in order to determine the right course of action. I agree with another comment made above regarding "less data is never the answer". We may make some mistakes with a small amount of data, but the only way we will ever learn is by making hypotheses and acting on them. We should tread carefully, but not so carefully that we never step forward.


A lot of the comments here reveal not having read the fine article (which is not very long). If you look at the graphic labeled "How Accurate Tests Can Be Mostly Wrong," about halfway down the displayed Web article, you will see a very carefully worked out (and realistic) example about how even a very accurate medical test can result in mostly false positive indications of a disease--all that is necessary for that, mathematically, is that there is a low base rate of the disease. Examples like this have been commonplace in books about statistical reasoning for making medical decisions for more than a decade, and I have shared links to Hacker News before that make this same point. This is something everyone needs to know (but the investors in Theranos didn't know) to make sound decisions about how much testing to do and what to do with test results.

Other authors who write about this issue are cited in the article kindly submitted for our discussion. I urge everyone here to read a lot of the writings of Dr. John P.A. Ioannidis,[1] who is quoted in the article.

[1] https://med.stanford.edu/profiles/john-ioannidis?tab=publica...


Pretty disappointed with 538 on this article. The basic argument is that we shouldn't do more medical testing because doctors aren't sure how to interpret the results, and the public isn't sophisticated enough to understand them and too lazy to do anything even if it did.

Personally I feel that if we had more data over a larger group of people we would be able to learn what leads to disease better. Further I'm a little shocked that the medical industry can get away with a stated policy that patients should wait until their symptoms are acute and then get a minimum number of tests so their doctors don't get confused. I'd like to see us have more data about our health even if we're not sure what to do about it right away. Ignorance is easier to implement but isn't always bliss.


>Personally I feel that if we had more data over a larger group of people we would be able to learn what leads to disease better.

The article is pretty much arguing that there isn't currently a method that exists that will accurately give us more data.

How useful is "more data" when 73% of it is garbage?


> Further I'm a little shocked that the medical industry can get away with a stated policy that patients should wait until their symptoms are acute and then get a minimum number of tests so their doctors don't get confused.

It's because this is a reasonable strategy, one that takes into account the actual quality of the tests, the base rate, and the fact that people are idiots^W^Woverreact dramatically when it comes to health. This is a combination of the insight that most serious diseases are rare, so most patients will be fine if they wait a little bit longer, coupled with the insight that they most definitely won't be fine if they freak out - and most of them will.


Think about it as getting a time series of previous measurements to aid in the detection of anomalies. If you just give me the metrics of a production system at an instant in time, it's hard to say whether anything's wrong. But when I know the response time has been in the same range for the past year, and then it increases to 2x and stays that way for a week, I can be pretty confident that something broke.

Whether we're anywhere near feasibly getting that resolution of data is a different question.
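A minimal sketch of that baseline-vs-recent comparison (plain Python, made-up response times):

    def is_anomalous(history, recent, threshold=2.0):
        # flag the recent window if its mean sits far outside the historical spread
        mean = sum(history) / len(history)
        std = (sum((x - mean) ** 2 for x in history) / len(history)) ** 0.5 or 1.0
        recent_mean = sum(recent) / len(recent)
        return abs(recent_mean - mean) > threshold * std

    baseline = [100, 103, 98, 101, 99, 102, 97, 100, 104, 96]   # ms, a stable year
    last_week = [195, 201, 198, 204, 199, 197, 202]             # ms, suddenly ~2x
    print(is_anomalous(baseline, last_week))   # True -- something broke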


The argument is that some blood tests aren't that effective and you get false positives. For certain tests that might be true. However, aren't we still better off if we can perform all tests more cheaply and on a more regular basis? Having a few tests that aren't effective does not invalidate the entire goal.


Very much a chicken/egg problem.

>The effectiveness of screening for a given disease before signs and symptoms appear depends on a host of conditions... including whether there’s an effective treatment for whatever is found...

There are so very few ways to even think about, treat, or test for the earliest stages of a disease precisely because we don't have data about the early stages. Getting that data would provide a basis on which to design better early-stage therapies. Measuring PSA (as mentioned) works well in more advanced stages of prostate cancer, and probably doesn't work well for early stages - but if we had good and regular testing we could find an indicator for those early stages.

So which comes first, the readout, or the therapy? Of course, it's the readout. Just because we don't have those new early-stage-targeting therapies yet doesn't mean a new array of (accurate and reliable) testing wouldn't be extraordinarily helpful.


Blood tests are a single evidence point in a chain of thinking.

You do a blood test to confirm your initial theory, not as an exploratory exercise. You need it to be accurate, because otherwise you'll have to do it multiple times (which, unless the repeated tests together cost less than the original, is pointless).


I agree with your comment. Beyond the false premise, I think that this fivethirtyeight piece is extremely dangerous because it seems to prefer shocking headlines over actual depth, i.e. "We Don’t Need More Blood Tests" and the infographic "How accurate tests can be mostly wrong."


What false premise? Fivethirtyeight is not wrong. In this case, with seemingly reliable tests (>90% accuracy), only 27% of positive tests are correct. If you're talking about something that has an invasive next step, like a biopsy, and you decide to increase the number of tests done each year, then you are going to also be increasing the number of unnecessary invasive procedures each year as well.

Take mammograms. It's a fairly non-invasive test for breast cancer. Everyone wants to reduce breast cancer, right? Unfortunately, the data is very much like fivethirtyeight's example. In a 10-year period, 1 in 2 women is harmed by a false positive and 1 in 5 by an unnecessary surgical procedure. What about the lives saved? None.[0] Screening mammograms only end up harming patients.

While it may make sense intuitively that more screening must be better, the data generally fails to back it up.

[0] http://www.thennt.com/nnt/screening-mammography-for-reducing...


If an exploratory test is done for a condition sufficiently rare that a positive is quite likely to be a false positive, then a positive result should not, in the absence of other evidence, lead to an invasive procedure. What action should be taken depends on the particulars of the condition, but just monitoring for actual symptoms of the condition or maybe trying a different test would almost certainly be more sensible options.

The article seems to be decrying additional testing, which is basically just additional information gathering, on the basis that some people might make dumb decisions based on the information. That's probably true, but I don't think it means that more information is a bad thing.


The case of mammography is often brought up for this argument, but is a cherry picked example. Furthermore, it isn't a blood test, so it isn't even what is being argued against by fivethirtyeight.

There is no harm done by extremely accurate testing for HIV, and less accurate tests should be used in screening by medical professionals who know how to proceed from a positive result. Again, blanket statements like "We Don’t Need More Blood Tests" are dangerous.


Given how 538 seemed to want to be positioned in the marketplace, I'm pretty disappointed with how similar their model of 'shocking headlines over actual depth' is to what are generally dismissed as lowbrow competitors.

It's not actually an informative news source.


What do you consider cheap? I can get a basic metabolic panel of 26 tests (lipids, glucose, CBC, electrolytes, organ functions) for around $50 as an individual. This includes drawing the blood, shipping, and third-party profit margins. Imagine the actual costs, which are probably around $15 or so. I'm guessing a lot of these basic tests have been automated so they can be done cheaply. So I don't think costs are the main issue with the current technology. There are tests which are much more expensive, but those may have manual labor involved and I don't think Theranos can make them any more cost effective.


That's exactly what I was thinking. Sure, if we look at a population then maybe we don't see a remarkable result. But surely we've helped SOMEONE here.

From the article: "which means that only 27% of positive test results are right"

Yes, 27% is not great, but that's still more than 1/4 of the people that had a good result and will live longer or better or whatever. 1/4 of the people got some kind of benefit. Sure, we'd rather see 75% but 27% is SOMETHING and that's worth, well, something I guess. Does the cost mean it's worth it?

For those in the 27% they are sure as heck going to say yes, right?


Maybe, maybe not. For example, the treatments for prostate cancer can be worse than not treating early-stage elevated PSAs.

Being told you have AIDS, then 24h later being told "just kidding, false positive"... do you think that has no consequences on your life?


> For example, the treatments for prostate cancer can be worse than not treating early-stage elevated PSAs.

I suppose that positive test results need not be followed directly by treatment, but can also be used to trigger further testing.


In the specific case of prostate testing, all they can do is repeat the test. And it likely comes back with the same results.

Many men in their 40s and 50s have high test results. Those same men can live 30, 40 more years with the condition. Or they can die of prostate cancer in 3 years. Or they could get treatment, and lose all use of their prostate for the rest of their life.

It seems like prostate cancer testing is the poster child for "the test tells you something, but often you statistically don't want to act on the test...".

I have no plans of ever having the test done.


What do you say to the 73% whose lives got negatively impacted? Prostate screening is a great example. Elevated PSAs are indicative of cancer, which people want to get rid of. So they have a procedure done, which often leaves them incontinent. But the reality is that, the majority of the time, prostate cancer remains effectively dormant. The end result is a lot of old men who would have been healthy but are now walking around in diapers because they got a test.


You've missed the other side of the equation, which is: what if there is no cure for the 27% with the supposed condition? What help has been provided then?

What if there are some recommended follow-up treatments, but they are all expensive and risky, with the possibility of terrible complications?

This is how over-diagnosis actually leads to worse outcomes on a population scale. Medicine is often viewed as this big near-perfect algorithm where information is always enabling. In fact, too much information can be counterproductive for a patient and doctor.


After decades of practicing medicine and seeing results of countless lab tests, I tend to agree with many of the comments here. The article isn't wrong at its core, but it is sensationalizing some very important issues.

The idea that lab studies yield false positives (and negatives too) is hardly novel. Of course test results can be misleading or easily misconstrued. We know a single test, or even a set of tests is rarely definitive. We know interpreting tests is an exercise in probabilistic thinking and careful practitioners rely on test results only to the extent warranted.

I often get a question like "so what does this test mean?" A single anomalous reading? Probably not much. I answer "it's only a test"; confirming a diagnosis is a laborious process of making sure the facts align as best as can be determined. That is, the gamut of history, direct observation, and a variety of lab/imaging measurements looking at a clinical situation from several angles need to converge.

In many practice domains lab/measurement technologies provide tremendous benefit. Think about the contributions of imaging (CT, MRI), endoscopy (colonoscopy, etc.), and yes, advances in medical laboratory science also save untold lives every day. Everyone here on HN knows all technologies can and will be misused but that doesn't mean they are not valuable and worthy.

I take my own advice to never forget: "it's only a test", and any test is no more useful than the limits of its credibility.


I have a very high chance of diabetes and other blood-sugar-related health problems, like heart disease. I certainly think that cheap, repetitive blood tests would help me improve my health in the same way that having my phone count my steps helps me have a more active life. Please make it work. Maybe that start-up won't make it; then please make another one. Thanks.


Something is not clear to me. How do you know that a false positive is, in fact, a false positive?

I presume there are some other pieces of information that provide the reference value.

So when someone gets a positive test, why not put them through the next test to see if they keep coming up positive?

What's the argument for not testing in the first place? It will panic people? If that's the case, why not make it random whether you're called to an extra test?


  > What's the argument for not testing in the first place?

  The wider the pool of people being tested, the greater
  the chance of false positives, which is why screening
  guidelines generally limit the population to be screened.
  The more independent tests you do at once, each with its
  own chance of error, the larger the chance that at least
  one of those tests produces an incorrect result, said
  Rebecca Goldin, director of STATS.org and a professor of
  mathematical sciences at George Mason University.
It's not just panic, but that if the false positive rate is too high then untargeted testing becomes overwhelming and counterproductive. There's a slim, very slim, chance that as a man I could develop breast cancer. But there's little value in testing me as it's not present in my family history (male or female).

So let's say (in this example) that the false positive rate is only 1% in men and the true incidence rate is 0.01%, and we test 100 million men:

    10,000 men have breast cancer, great we've detected it
   999,900 men don't, but are subjected to more testing
That more testing is likely costly, time consuming, and potentially invasive. This acts as a net drain on medical resources because we've just performed nearly 1 million unnecessary mammograms. They take technician and physician time. They take machine time. And so on.

So false positive rates may be low, but when applied to a massive population the total number of false positives can overwhelm the system.
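The same arithmetic as a quick Python sketch (using the hypothetical rates above):

    men_tested = 100_000_000
    incidence = 0.0001          # 0.01% truly have the condition
    false_positive_rate = 0.01  # 1% of healthy men test positive anyway

    true_cases = int(men_tested * incidence)                              # 10,000 detected
    false_alarms = int((men_tested - true_cases) * false_positive_rate)   # 999,900 work-ups for nothing

    print(round(true_cases / (true_cases + false_alarms), 3))   # ~0.01 -- about 1% of positives are real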


> Something is not clear to me. How do you know that a false positive is, in fact, a false positive?

Bob has a full-body MRI. That returns some shadowy spots on his lungs. He's asked if he used to smoke (he did), and if he had any childhood illnesses (German measles, measles, chicken pox, etc.) (he did).

Are those spots early cancer? Or are they just scarring from the childhood illness?

One way is to cut Bob open and have a look. Or jab him with a massive biopsy needle. Except, now you know that this one spot isn't cancer, it was a scar. How about all the rest? That one biopsy increased Bob's risk of harm.

> What's the argument for not testing in the first place?

Testing often doesn't give any useful information. It often gives wrong information - people who have a disease are told they don't have it, and people who don't have it are told they do have it. Many of these people then go on to have treatment. All treatment carries some risk of harm.

You need to know whether the benefits of testing (especially of testing a large, mostly healthy population) outweigh the risks of providing risky treatment to those people, some of whom will not have the disease.

You also have to see what happens if you just don't do anything. Prostate cancer is a good example here. Some people will die of it - they have an aggressive cancer that will kill them rapidly. But many people will die with it - they have a slow growing tumour, and they'll die in old age from something else. Treatment for prostate cancer is pretty harsh.


The article and most comments here are more about EH's VISION of empowering regular people with cheap, on-demand blood tests. But the major problems the regulatory bodies, medical practitioners, and diagnostics experts have are mostly about Theranos' underlying TECHNOLOGY, its basic SCIENCE, and daily lab PRACTICES. To this date none of these three has been established at Theranos (non-peer-reviewed, non-core patents, criticized by CMS and FDA with banning threats). In short, it just does not work (yet?).

It's like someone who always talks about building beautiful resorts or gigantic mining operations on Mars, but fails to develop rocket propulsion systems or space-traveling vehicles. If there is someone who will bring us to Mars, it must be Elon Musk.


See also Science Based Medicine - a skeptical look at screening tests.

https://www.sciencebasedmedicine.org/a-skeptical-look-at-scr...


> which means that only 27% of positive test results are right

The negative test results are still right 99.8% of the time.


But throwing the blood sample in the trash and just saying "You don't have it" is right 98% of the time. Would you support that test? Because I'm willing to charge you for it.


Yeah sure, but you know...what about disruption!



