My assumption is that 98% of people have no idea what the right sample size is for this particular experiment, including myself of course.
To all who criticize and ridicule someone who would like to have more samples, why do you think 21 is such a perfect number in this case? Wouldn't 15 be enough, if statistics and all applies? 10? 1? Would 30 be too much?
The sample size you need is a function of the effect size, not a fixed property of the experiment. When the effect size is large, you need fewer samples to detect it reliably. In this case it looks pretty large. Would you want more people before rolling it out? Of course. But it's very unlikely to be vaporware provided the samples are random.
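To make that concrete, here is a rough power-analysis sketch (the effect sizes and the 80% power / 5% alpha targets are illustrative assumptions, not numbers from the study) showing how the required sample size per group shrinks as the effect size grows, using statsmodels:

```python
# Sketch: required n per group for a two-sample t-test at alpha=0.05, power=0.8,
# across a range of assumed effect sizes (Cohen's d). Illustrative numbers only.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for effect_size in (0.2, 0.5, 0.8, 1.2):  # small, medium, large, very large
    n = analysis.solve_power(effect_size=effect_size, alpha=0.05, power=0.8)
    print(f"d = {effect_size}: ~{n:.0f} subjects per group")
```

A small effect (d = 0.2) needs hundreds of subjects per group, while a very large one (d = 1.2) needs on the order of a dozen, which is why a study of ~21 people isn't automatically meaningless.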
Thank you! Came here to say this, but found that you'd already written it.
Effect size is also going to be important when talking about clinical significance, not just statistical significance.
I also want to point out that we have no stats on sensitivity, specificity, or diagnostic odds ratio, which are all clinically relevant to physicians deciding when to test and how to interpret test results.
The good news is that it's a non-invasive, low-cost test, which factors into clinical decision-making.
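For what it's worth, those metrics are simple to compute once you have a confusion matrix. A minimal sketch (the counts below are hypothetical, not from the study):

```python
# Sensitivity, specificity, and diagnostic odds ratio from a confusion matrix.
def diagnostic_stats(tp: int, fp: int, fn: int, tn: int) -> dict:
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    # Diagnostic odds ratio: (TP/FN) / (FP/TN) = (TP*TN) / (FP*FN)
    dor = (tp * tn) / (fp * fn) if fp and fn else float("inf")
    return {"sensitivity": sensitivity, "specificity": specificity, "dor": dor}

print(diagnostic_stats(tp=18, fp=2, fn=3, tn=19))  # made-up counts
```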
> To all who criticize and ridicule someone who would like to have more samples, why do you think 21 is such a perfect number in this case? Wouldn't 15 be enough, if statistics and all applies? 10? 1? Would 30 be too much?
This is the point that you are fundamentally misunderstanding. Nobody is making the argument that "21 is such a perfect number in this case". The rest of your sentences ("Wouldn't 15 be enough, if statistics and all applies? 10? 1? Would 30 be too much?") seem to point to a belief that people are just picking numbers with a finger in the wind.
The whole point of (a lot of) statistical testing is that it gives you a specific number for how likely a result at least this extreme would be if chance alone were at work. That is what the p < .05 "standard" is about - it says that results this strong would show up less than 5% of the time by random chance alone (though that 5% cutoff for "significant" is basically pulled out of thin air, and p-hacking is another topic...).

That is, I and the comment I replied to aren't making the argument that 21 "is such a perfect number". We're making the argument that even with a small sample size it's possible to determine the error bars, with precision, using statistical methods, not just pulling a number based on feels. Yes, larger sample sizes shrink those error bars. But often not in ways that are "intuitive". You have to do the math.
None of what I wrote above is meant to imply that statistical tests can't be misused, or that they often require assumptions about the underlying population distribution that may not be correct.
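As a quick illustration of the kind of math involved (the 20-out-of-21 figure below is made up, not the study's actual result), an exact binomial test and its Clopper-Pearson interval show how a small n with a large effect can still be highly significant, and what the error bars actually look like:

```python
# Sketch: hypothetical 20/21 correct calls tested against a 50% chance baseline.
from scipy.stats import binomtest

result = binomtest(k=20, n=21, p=0.5, alternative="greater")
print(f"p-value: {result.pvalue:.2e}")                      # ~1e-5
ci = result.proportion_ci(confidence_level=0.95, method="exact")
print(f"95% CI for the true rate: [{ci.low:.2f}, {ci.high:.2f}]")
```

With a smaller observed effect, the same n = 21 would leave a much wider interval and a far weaker p-value - which is the "you have to do the math" part.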