I’m always a bit leery of edge.org, seeing as how it’s first and foremost a promotional vehicle for John Brockman’s stable of authors, but I do enjoy the Annual Question. This year’s is no exception:
When thinking changes your mind, that’s philosophy.
When God changes your mind, that’s faith.
When facts change your mind, that’s science.
WHAT HAVE YOU CHANGED YOUR MIND ABOUT? WHY?
Science is based on evidence. What happens when the data change? How have scientific findings or arguments changed your mind?
What struck me about the answers was that a number of them point out, if indirectly, that the wording of the question is utter bollocks. Whoever wrote the question has drunk deep of the “impartial search for Truth” Kool-Aid and needs an infusion of Kuhn[1] (or a week in an actual lab), stat.
Roger Highfield comes right out and says it:
I am a heretic. I have come to question the key assumption behind this survey: “When facts change your mind, that’s science.” This idea that science is an objective fact-driven pursuit is laudable, seductive and – alas – a mirage.
Science is a never-ending dialogue between theorists and experimenters. But people are central to that dialogue. And people ignore facts. They distort them or select the ones that suit their cause, depending on how they interpret their meaning. Or they don’t ask the right questions to obtain the relevant facts.
That science is nothing like the simplistic picture we were all fed in school seems to be something of a theme in the answers to this year’s Edge question:
I have changed my mind about the omniscience and omnipotence of science. I now realize that science is strictly limited, and that it is extremely dangerous not to appreciate this.
In two weeks I finished the book [that changed my mind] and then my way of thinking changed. I understood that science was not only a pursuit of knowledge but a social process too, with its rules and tricks: a never-ending tale such as human life.
I’ve begun to rethink the way we teach students to engage in scientific research. I was trained, as a chemist, to use the classic scientific method: Devise a testable hypothesis, and then design an experiment to see if the hypothesis is correct or not. And I was told that this method is equally valid for the social sciences. I’ve changed my mind that this is the best way to do science. I have three reasons for this change of mind.
First, and probably most importantly, I’ve learned that one often needs simply to sit and observe and learn about one’s subject before even attempting to devise a testable hypothesis. […]
Second, I’ve learned that truly interesting questions really often can’t be reduced to a simple testable hypothesis, at least not without being somewhat absurd. […]
Third, I’ve learned that the scientific community’s emphasis on hypothesis-based research leads too many scientists to devise experiments to prove, rather than test, their hypotheses.
Mentors, paper referees and grant reviewers have warned me on occasion about scientific “fishing expeditions,” the conduct of empirical research that does not test a specific hypothesis or is not guided by theory. Such “blind empiricism” was said to be unscientific, to waste time and produce useless data. Although I was never completely convinced of the hazards of fishing, I now reject those warnings, with only a few reservations.
I’m not advocating the collection of random facts, but the use of broad-based descriptive studies to learn what to study and how to study it. Those who fish learn where the fish are, their species, number and habits. Without the guidance of preliminary descriptive studies, hypothesis testing can be inefficient and misguided. Hypothesis testing is a powerful means of rejecting error — of trimming the dead limbs from the scientific tree — but it does not generate hypotheses or signify which are worthy of test.
I used to view the scientific literature as a collective human effort to build an enduring and expanding structure of knowledge. Each new publication in a respected, refereed journal would be digested and debated… [b]ut once it has passed scrutiny, a new contribution would be absorbed into the edifice of science, expanding and enhancing it, while providing a fragment of immortality to the authors.
My perception was wrong. New scientific ideas can be smothered with silence.
At its heart, science is a human endeavor, carried out by people. When the questions are truly ambitious, it takes a great personal commitment to make any headway — a big investment in energy and in emotion as well. I know from having met with many of the lead researchers that the debates can get heated, sometimes uncomfortably so. More importantly, when you’re engaged in an epic struggle like this — trying, for instance, to put together a theory of broad sweep — it may be difficult, if not impossible, to keep an “open mind” because you may be well beyond that stage, having long since cast your lot with a particular line of reasoning. And after making an investment over the course of many years, it’s natural to want to protect it.
…the ambivalence associated with an even probability distribution makes it terribly difficult for an ideal scientist to decide where to go for dinner. […]
I used to believe that the ethos of science, the very nature of science, guaranteed the ethical behavior of its practitioners. As a student and a young researcher, I could not conceive of cheating, claiming credit for the work of others, or fabricating data. Among my mentors and my colleagues, I saw no evidence that anyone else believed otherwise. And I didn’t know enough of the history of my own subject to be aware of ethical lapses by earlier scientists. There was, I sensed, a wonderful purity to science. Looking back, I have to count naiveté as among my virtues as a scientist.
Now I have changed my mind, and I have changed it because of evidence, which is what we scientists are supposed to do. Various examples of cheating, some of them quite serious, have come to light in the last few decades, and misbehaviors in earlier times have been reported as well. Scientists are, as the saying goes, “only human,” which, in my opinion, is neither an excuse nor an adequate explanation.
Popper’s characterization of how science is practiced – as a cycle of conjecture and refutation – bears little relation to what goes on in the labs and journals.
[1] Yes, I read SSR fairly recently, and it gave me a clear structure for a lot of vague suspicions I’d been entertaining since grad school. I suspect I’m doing that “ooh, philosophy of science, yeah, I read Kuhn” thing that probably drives real philosophers of science bugfuck. By way of mitigation, the latter are invited to recommend further reading.
bill, I saw the same trend you’ve noted, although you picked up a number of examples I’d missed. We’ve seen some of the same criticisms of the hypothesis-driven orthodoxy arising in the context of the NIH’s breast-beating over peer review during 2007.
I think the key, as always, is going to be to really hash out what we mean in detail instead of fostering an amorphous skepticism about the conduct of science. The latter is one possible reading of all those comments.
We need to concentrate on questions such as: what is the “good” type of fishing expedition, and what is the “bad” type that gave rise to the original orthodoxy?
I’ll note that some of this is being resolved/driven by the gene array technologies, because the experiments are such obvious fishing expeditions in both the good and bad senses. So for quite some time now, in talks and grants, people have been distinguishing between the “hypothesis generating” (read: don’t ding me on ‘fishing expedition’ grounds, please) and “hypothesis testing” aspects of the research program.
Interestingly, there is no objective reason why this excellent distinction can’t or shouldn’t be made in just about any type of research program. Pepperberg’s comments lay this out pretty well, I think.