Tuesday, March 8, 2016
Lies, Damned Lies, and...
yeah, you know what's coming....
If statistics talk leaves you yawning, followed by a major face-plant on the desktop, no reason to continue reading....
For a few years now controversy over the use of stats in social sciences in particular (but also in biomedicine and other research areas) has been brewing. It seems to have erupted Mt. Vesuvius fashion this week.
I was myself a psychology major long ago, but never put much stock in the methodologies or empiricism that were practiced. Indeed, I'm amazed it's taken this long for the controversy (in large part over two pillars: significance-testing and replication) to bubble over; there's so much weak, dodgy psych research out there going back a century.
Below is a sampling of the many recent posts dealing with a new set of "six principles" (in regard to p-values) the American Statistical Association just recommended:
1) First, this NY Times piece gives some of the background to the whole debate:
2) "Retraction Watch" interviews the Executive Director of the American Statistical Association, Ron Wasserstein, about the 'six new principles' for the use of p-values:
http://retractionwatch.com/2016/03/07/were-using-a-common-statistical-test-all-wrong-statisticians-want-to-fix-that/ (a good read)
3) Nature reports:
4) from another British journal:
5) Deborah Mayo adds some nuance to the discussion here:
6) Andrew Gelman adds comments, with more emphasis on 'null hypothesis testing' and "the garden of forking paths":
7) and from FiveThirtyEight blog, perhaps the most entertaining read:
8) Some older work of Nassim Taleb, also pertinent to the discussion here:
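Since so much of the fuss centers on what crossing the 0.05 "significance" line does and doesn't tell you, here's a minimal Python sketch (my own illustration, not drawn from any of the linked pieces) of one point several of these posts hammer on: even when NO real effect exists, roughly 1 in 20 tests will come out "statistically significant" by chance alone — which is exactly why a lone p < 0.05 result, absent replication, proves so little.

```python
import random
import statistics

def null_experiment(rng, n=50):
    """Compare two groups drawn from the SAME distribution (the null is true),
    returning a two-sample z-style test statistic."""
    a = [rng.gauss(0, 1) for _ in range(n)]
    b = [rng.gauss(0, 1) for _ in range(n)]
    # Standard error of the difference in means; with n=50 per group
    # the normal approximation to the t distribution is adequate
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

rng = random.Random(42)       # fixed seed so the run is reproducible
trials = 2000
# |z| > 1.96 corresponds to the conventional two-sided p < 0.05 cutoff
false_positives = sum(abs(null_experiment(rng)) > 1.96 for _ in range(trials))
rate = false_positives / trials
print(f"'Significant' findings with no real effect present: {rate:.1%}")
```

Run it and the false-positive rate hovers near 5% — by construction, since that's what the 0.05 threshold *means*. Now imagine a field where journals publish mostly the "significant" results: those ~100 spurious findings out of 2000 are the ones that get written up.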
These are just a smattering of what is out there right now on this matter. Much of what transpires in psychology (and particularly social psychology), I believe, would be better characterized as "social studies" and NOT "social science" -- and social studies, by the way, are very much worth doing (despite the variables, complexity, and ambiguities involved)... they just ought not be confused with good "science," especially in regard to the generalizability or extrapolation of the findings.
When I took statistics in grad school, the professor warned us on the first day of class that he hated teaching these "statistics for social scientists" courses (but was required to) because there was no way to satisfactorily teach students, in a semester or two, what they needed to know for doing research. And no matter what grade we got in the class, our knowledge and understanding of statistics would be inadequate; we would be turned loose on the world thinking we knew more than we did.
I've often seen mathematicians write that no one really learns or understands calculus when they first take it (again, no matter what grade you get); it is only after further years of mathematics study that a deeper understanding of the calculus finally emerges. So too with statistics.
Anyway, these statistical debates entail hugely important issues that we need to attend to; issues going well beyond journal research articles. Currently, statistics and probability classes are increasingly a part of secondary education, and we need some assurance of getting it right... lest we send off a whole 'nother generation with wrong ways to think about and apply statistics.
This controversy will continue for a long while... because it is tangled and long overdue (and... it's putting a lot of practitioners on the defensive).