> This is an unbalanced study. As such, it doesn't tell you anything about the ability to differentiate between the three populations.
Complete Nonsense
> That is not enough to be able to adequately know range of measurable values in the population of patients w/o cancer.
True. But it is a promising initial signal that the distribution of non-cancerous folks is probably very different from the invasive cancer folks. The effect size here is huge. “Range” is less interesting than “Distribution”.
> Error bars here are absolutely necessary. Two reasons: First, you want to know the approximate ranges for each group in Figures 3 and 5. Not showing them is misleading.
You don’t really need error bars when you’re showing all of a small number of data points. But sure, whatever.
> Secondly -- you actually also want error bars for each patient sample. I'd expect for there to be at least three replicates for each saliva sample to show that the strips are able to consistently measure a known value from each sample.
Would be nice. Sounds like they did ten measurements per.
> What you'd really like to show is that the HER2+ patients could be differentiated from HER2- patients. Which, does look really good, but with only one HER2+ sample, you really can't tell much. (And the presence of so much signal in the HER2- samples raises some very interesting biological/mechanistic questions).
I’m not clear why you’re saying HER2+ vs HER2- is the important difference here.
Look, you obviously have your opinions here. I'm not sure just saying "Complete nonsense" at things is really all that helpful.
What I said ("it doesn't tell you anything about the ability to differentiate between the three populations") is quite correct. This study shows that there is a difference between the groups of samples tested with their HER2 test strip, with a one-way p-value of ~0.002.
I'm not convinced that the samples are representative of their populations. The number of non-cancer samples is too low.
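(For context, a one-way p-value like that is what you'd get from something like a one-way ANOVA across the three groups. A minimal sketch of that kind of calculation, with placeholder numbers rather than the paper's actual values:)

```python
# Minimal sketch of a one-way test across the three groups.
# The signal values below are illustrative placeholders, not the paper's data.
from scipy import stats

healthy      = [0.12, 0.18, 0.15, 0.22, 0.10]        # hypothetical strip signals
non_invasive = [0.25, 0.31, 0.28, 0.35, 0.30, 0.27]  # hypothetical
invasive     = [0.55, 0.62, 0.48, 0.70, 0.58, 0.65]  # hypothetical

f_stat, p_value = stats.f_oneway(healthy, non_invasive, invasive)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
```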
> *You don’t really need error bars when you’re showing all of a small number of data points.*
Error bars are visually helpful ways to show that the group values overlap. Which, in this case, they do (I did replot this data to confirm).
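Something along these lines, if you want to see how the replot with error bars looks (the numbers here are placeholders, not my digitized values):

```python
# Rough sketch of the replot: individual points per group plus mean +/- SD bars.
# Values are placeholders, not the paper's data.
import numpy as np
import matplotlib.pyplot as plt

groups = {
    "healthy":      [0.12, 0.18, 0.15, 0.22, 0.10],  # hypothetical
    "non-invasive": [0.25, 0.31, 0.28, 0.35, 0.30],  # hypothetical
    "invasive":     [0.55, 0.62, 0.48, 0.70, 0.58],  # hypothetical
}

for i, (name, vals) in enumerate(groups.items()):
    vals = np.asarray(vals)
    plt.scatter(np.full(vals.size, i), vals, alpha=0.7, label=name)
    plt.errorbar(i, vals.mean(), yerr=vals.std(ddof=1),
                 fmt="_", color="k", capsize=5)

plt.xticks(range(len(groups)), list(groups))
plt.ylabel("strip signal (a.u.)")
plt.legend()
plt.show()
```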
> *Would be nice. Sounds like they did ten measurements per.*
This is for one test: they sampled the same test strip 10 times. They should have tested each sample on at least 3 different test strips to get a mean value for the sample. This is a paper trying to say that their test strips are accurate, so it would make sense to test them multiple times.
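To put it concretely, replicate strips per sample would let you report a per-sample mean and a coefficient of variation instead of a single read. A sketch, with made-up readings:

```python
# Per-sample mean and CV across replicate strips.
# The readings per sample are made up for illustration.
import numpy as np

replicates = {
    "patient_01": [0.61, 0.58, 0.66],  # hypothetical strip readings
    "patient_02": [0.27, 0.33, 0.30],  # hypothetical
}

for sample, reads in replicates.items():
    reads = np.asarray(reads)
    mean, sd = reads.mean(), reads.std(ddof=1)
    print(f"{sample}: mean={mean:.3f}  SD={sd:.3f}  CV={sd / mean:.1%}")
```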
> I’m not clear why you’re saying HER2+ vs HER2- is the important difference here.
I'm not sure what they are trying to claim in their paper... are they trying to say that they can diagnose breast cancer (which would require many more biomarkers), or are they trying to say that they can differentiate between HER2+ and HER2- cancers (which would be more appropriate for a HER2 test)?
The other biomarker has even more overlap, so not sure how helpful that would be.
Really, I think they are also missing an opportunity -- the bigger use for me would be in longitudinal testing. If they could show changes in signal over time for a particular patient that corresponded to treatment status -- that would be a great use for a cheap non-invasive test.
> Look, you obviously have your opinions here. I'm not sure just saying "Complete nonsense" at things is really all that helpful.
> What I said ("it doesn't tell you anything about the ability to differentiate between the three populations") is quite correct. This study shows that there is a difference between the groups of samples tested with their HER2 test strip, with a one-way p-value of ~0.002.
You claimed this was the implication of an unbalanced study. I’m sorry, but that is a complete non sequitur. It is nonsense. I disagree with most of the rest, but it isn’t nonsense.
> I'm not sure what they are trying to claim in there paper... are they trying to say that they can diagnose breast cancer (which would requires many more biomarkers), or are they trying to say that they can differentiate between HER2+ and HER2- cancers (which would be more appropriate for a HER2 test).
They are simply showing different distributions of tests for different populations and observing that, foremost, the invasive cancer ones have a significantly different distribution, and that it shows promise as a test for the future.