I didn’t expect to take a month-long hiatus from blog writing. But I did! I’ve been exploring the Champlain Valley of northwestern Vermont and getting to know its huge lake, interesting forest types, and cool people. American Basswood everywhere, nice large stands of Solomon’s Seal on rocky ledges, thimbleberries, gooseberries, and blackberries fruiting, different roadside weeds than I’m used to (mugwort! giant hogweed!), and a truly unbelievable amount of poison ivy. Plus foot-cutting zebra mussels in the lake. New bioregion, new hazards!
Now that I’m writing here again, I want to shift gears from the overall land use theme for a moment, because there’s a fascinating article out in the New Yorker about the increasing non-replicability of previously replicable scientific research. There are a lot of angles to take on this. For a while I was very interested in the Sociology of Science field, and this seems to fit well within that tradition of looking at the scientific world from a cultural and behavioral perspective. I think it also points to how much subjectivity there is in psychology and mental health research in general. But the trends of non-replicability the article describes apply to other fields too, like ecology and even particle physics. To me it reveals a kind of “uncertainty principle” weakness in the scientific method: splitting reality down to component parts gets you out of the chaos long enough to take measurements and gather data, but getting out of the chaos can limit your ability to understand all the other variables that might affect those measurements. Or something like that. All in all, another reason why I like systems thinking.
The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise. The hyperactivity of those coked-up Edmonton mice wasn’t an interesting new fact—it was a meaningless outlier, a by-product of invisible variables we don’t understand. The problem, of course, is that such dramatic findings are also the most likely to get published in prestigious journals, since the data are both statistically significant and entirely unexpected. Grants get written, follow-up studies are conducted. The end result is a scientific accident that can take years to unravel.
We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.
Full article here. Worth a read.