a little science, for a change

Dammit, I (re)started this blog to talk about science, and I’m gonna talk about science!
Over at JOHO/blog, Dave is posting summaries of the talks he attends at Foocamp06. This one really pushed my buttons:

Chris Csikszentmihalyi says science doesn’t work the way it thinks it does. For one thing, only 3-5% of experiments are re-proven. Often that’s because they’re so sensitive to instruments and materials. Also, much of the knowledge is tacit. Instead, scientific conflicts are usually settled by looking at the lab it came from, etc.

OK, let’s unpack that a little:

science doesn’t work the way it thinks it does.

Having just read Kuhn’s Structure of Scientific Revolutions, I’m inclined to think this is true. However:

For one thing, only 3-5% of experiments are re-proven.

Where’d he get these numbers? This recent article (not freely available; brief summary and discussion here) shows that, of 19 papers in an apparently randomly chosen issue of Nature, 17 reported results that have been corroborated within four years. My own informal efforts suggest that a majority of results are “re-proven”, for meaningful values of “re-proven”. If CC is talking only about straight replication (same experiment, different hands), he’s simply bypassing the more common mechanisms by which scientific results are established as reliable.
As for mechanism:

Often that’s because they’re so sensitive to instruments and materials. Also, much of the knowledge is tacit.

The article I linked talks about this —

[on] recreating an exact copy of a piece of experimental kit: “It’s very difficult to make a carbon copy. You can make a near one, but if it turns out that what is critical is the way he glued his transducers, and he forgets to tell you that the technician always puts a copy of Physical Review on top of them for weight, well, it could make all the difference.”

True, but let’s not forget that ratio (17/19). This is exactly why most results are corroborated (shown to be reliable by work that builds on them) rather than directly reproduced. Actually, the more basic reason is that scientists tend to trust published data, with good reason (fraud is real but not common¹). Why waste time repeating an experiment that shows X when you can test X just as well, and move your own work forward at the same time, by designing an experiment to build on X? Unless X is thoroughly outrageous/counterintuitive/whatever, that’s what most researchers do: assume that published results will stand up. If the odds are 17/19, I’d call that a pretty fair bet. If CC were right and the odds were more like 95/5 against, wouldn’t science have long since ground to a halt?
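Just to make the compounding explicit, here’s a back-of-envelope sketch in Python (the five-result chain length is my own made-up assumption, as is taking 4% as the middle of CC’s range):

    # Back-of-envelope: chance that a project resting on a chain of
    # published results survives, under two assumed reliability rates.
    corroboration_rate = 17 / 19  # the Nature-issue figure cited above
    cc_rate = 0.04                # middle of CC's claimed 3-5% (assumed)
    chain_length = 5              # hypothetical: prior results a project builds on

    for label, p in (("17/19", corroboration_rate), ("3-5%", cc_rate)):
        print(f"reliability {label}: a {chain_length}-result chain holds with p = {p ** chain_length:.2g}")

At 17/19, a five-result chain still stands better than half the time (about 0.57); at CC’s figure, it’s roughly one in ten million. The second of those is the world where science grinds to a halt.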
Then there’s this:

The Prayer Gauge Debate. In the 19th Century there were attempts to measure the efficacy of prayer. Science went up against a popular paradigm. Chris contrasts this with lab press releases getting done if they promise a cure for cancer. I.e., scientists learn to mis-represent their projects in order to get funded.

See, that just chafes my scrote. Has CC ever tried to get an accurate representation of his work into the press? More to the point, has he ever watched helplessly as some PR flack mangled his research into a press release and made him look like an ass in the media? Scientists, as a matter of course, do not mis-represent their work to the media: they don’t have to, the quality of science journalism being what it is. (They do, of course, tailor grant applications to the priorities of the funding bodies; the extent to which that practice approaches dishonesty is a different conversation altogether.)

----
¹ (though see here, particularly comments by per, for a different view)

One thought on “a little science, for a change”

  1. Scientists, as a matter of course, do not mis-represent their work to the media: they don’t have to, the quality of science journalism being what it is.
    I’ve seen this time and time again with media coverage of our work. We make limited claims; they come out on the other end as sweeping ones. Questions we answer about possible applications and down-the-line speculation, which are qualified as such in the interview/info-gathering process, are presented without context as the crux of our research. Tossed-off comments in the course of a 2-3 hour interview become leads in the finished product. Everything is ultra-simplified and dumbed down. On and on and on.
    I can’t speak much to the verification part, though I think you’re on the right track with the building-upon rather than replication model. Also, peer review incorporates some of this: from what I’ve read, the criticisms often get really involved and basically amount to a form of error-checking on the part of the reviewers. Maybe not actual replication, but testing of the claims against their own (usually similar) store of experience and knowledge. You’ll see things like “I’ve done X, which is very similar to what you are trying, but I didn’t get (or expect, or whatever) these kinds of results. Please elaborate on how or why you did, and answer these questions or doubts. Etc.”
