Good, succinct post from Andrew Gelman today on statistics and "measurement," noting at start that "Statistics does not require randomness. The three essential elements of statistics are measurement, comparison, and variation." And of those, Gelman believes that "measurement" is the most slighted or "neglected" element:
He says that when it comes to measurement in research or statistics textbooks, "...there’s silence, just an implicit assumption that the measurement is what it is, that it’s valid and that it’s as reliable as it needs to be."
Of course we witness this all across research... from government economic statistics that get routinely "revised" or recomputed on a near-monthly basis, to epidemiology where statistics sometimes change with each new study sample, to even high-level physics where new findings too often have to be altered or abandoned when measurements are retaken or newly analyzed. And perhaps psychology takes the greatest brunt of criticism, as Gelman writes: "A common thread in these [psychological] studies is sloppy, noisy, biased measurement." Indeed, 'behavior' is one of the most difficult things to measure and generalize about empirically. ...But no one ever said it was easy.
I don't think Gelman even goes far enough here. Part of the intrinsic problem of measurement is the necessity of recognizing and precisely defining all the pertinent variables to be measured... in most fields, a close-to-impossible task. So measurement is relegated to a rough (and sometimes VERY rough) approximation, while still being discussed as if exact.
I suspect one of the reasons there is so much anti-science sentiment/distrust in this country is how often the public sees a scientific "measurement" go awry after it had been presented as "certain" (I realize this is often more the fault of the press or other science-writer coverage than of the scientists themselves).