[Image via NIH/Wikimedia Commons]
Almost 40 years ago, in grad school, I railed a bit about the non-random and small sample sizes of so many journal-published studies, particularly in the social and medical sciences. I felt like a lone voice in the wilderness, though.
Eventually, in 2005, these sorts of concerns took center stage when John Ioannidis published his oft-cited paper, "Why Most Published Research Findings Are False."
Now Katherine Button, Ioannidis, and others have published "Power failure: why small sample size undermines the reliability of neuroscience," further delineating such problems as they play out in the fast-growing field of neuroscience:
http://www.guardian.co.uk/science/sifting-the-evidence/2013/apr/10/unreliable-neuroscience-power-matters
A few lines therefrom:
"There is growing interest in the need to improve reliability in science… Many of the most hyped scientific discoveries eventually cannot be replicated...The article goes on to discuss various problems with statistical samples, false positives, and false negatives, and also making mention of publication bias, before concluding, "The current reliance on small, low-powered studies is wasteful and inefficient, and it undermines the ability of neuroscience to gain genuine insight into brain function and behaviour. " And from the research article's abstract: "Improving reproducibility in neuroscience is a key priority and requires attention to well-established but often ignored methodological principles."
"A major factor that influences the reliability of science is statistical power. We cannot measure everyone or everything, so we take samples and use statistical inference to determine the probability that the results we observe in our sample reflect some underlying scientific truth."
Anyway, read the whole Guardian piece or, if you have access, the original journal article in Nature Reviews Neuroscience.