Janet Stemwedel is a bit bummed out by all the cynicism in her comments section lately:
What’s sticking in my craw a little is the “Eh, what are you gonna do?” note of resignation about some of the problematic behaviors and outcomes that are acknowledged to be even more common than the headlines would lead us to believe.
Janet claims that “Norms are what we ought to do, not what we suspect everyone actually does”. Me, I think “norms” describes both sets of behaviours, and when the norms of observed behaviour differ from the norms of espoused values, there’s something rotten in the state of whatever field or community you’re looking at. Janet again:
I do not think we can afford to embrace resignation here. I think that seeing the problems actually saddles us with some responsibility to do something about them. […] I don’t think we can have it both ways. I don’t think we can trumpet the reliability of Science and wallow in cynicism about the actual people and institutional structures involved in the production of science.
I don’t disagree, but I do wonder what actual somethings Janet has in mind for “us” to do. One of Janet’s examples (of how we can’t have it both ways) involves reproducibility of results:
[we can’t claim] that reproducibility is a central feature of scientific knowledge […] but […] only the “important” findings will be tested for reproducibility in the course of trying to build new findings upon them
I don’t think this is actually a problem. Very little research is directly reproduced; most is confirmed or corroborated by further experiments predicated on the assumption that the original result was reliable.
If a result is false but the error is never discovered, that probably means no one cared. It’s not as though people are combing through the original research literature and changing their lives, or doing dangerous things, on the basis of what they find there. (Or are they? If so, someone alert the Darwin Awards people.) If no one ever predicates a further experiment on a particular result, that result was presumably entirely uninteresting. I don’t think that “a whole lot of findings on the books but not fully verified” is a problem — the “books” are not running out of room, and the potentially useful findings will be verified or refuted precisely because they are potentially useful.
This, though:
when scientists report that their serious efforts to reproduce a finding have failed, the journal editors and university administrators can be counted on to do very little to recognize that the original finding shouldn’t be counted as “knowledge” anymore
is an entirely different kettle of fish — rotten fish at that — but you can’t blame it on the scientific method. It’s the scientific infrastructure that’s the problem here: [publish or perish] × [shrinking funding] × [far more scientists than permanent positions] = powerful incentive to cut corners or outright cheat, and very little incentive — even for those with tenure and power — to stand up to the cheats or take the corner-cutters to task. When what should happen in response to irreproducible results does not happen, that’s politics — not science.
In a similar vein, Janet says:
I don’t think we can proclaim that scientific work is basically trustworthy while also saying everyone has doubts about his collaborators […], and it’s a completely normal and expected thing for scientists to present their findings as much more certain and important than they are…
Again, I think the system works — that is, scientific findings are generally reliable, for the confirmation/corroboration reasons given above. The larger question, though, is whether it works as well as it could. Here I think the answer is a resounding No, at least from the perspective of a working scientist. The system is a meatgrinder, and if you want to come out whole at the other end, things like not overselling your results become luxuries you can’t afford.
That’s not to say there won’t always be corner-cutters, cheats, and overselling of results (particularly to granting committees), so long as funding has any limits at all. What we have now, though, is a situation in which postdocs so far outnumber the research positions to which they might aspire that it is hardly surprising that “normal misbehaviour” does not seem an oxymoron.
When more than 75% of postdocs will not go on to any kind of permanent research position (and that figure is five years old — I can’t see it having dropped in the meantime!), we are not talking about the kind of competition that selects the best individuals and ensures the best product. We are talking about a situation in which advancement depends more on personal politics and luck than on talent or hard work. Working harder won’t give you an edge — the guy in the next lab sleeps there. Being smart won’t do it — the average IQ where you work would qualify for Mensa. Being willing to cheat, though — if you don’t get caught, that might just help.
Janet goes on to say:
I do think that scientists who care about building good scientific knowledge have some responsibilities to shoulder here.
How do you behave in conducting and reporting your research, and in your work with collaborators and your interactions with competitors? In a community of science where everyone behaved as you do, would it be easier or harder to open a journal and find a credible report? Would it be easier or harder for scientists in a field to get to the bottom of a difficult scientific question?
What kind of behavior do you tolerate in your scientific colleagues? What kind of behavior do you encourage? Do you give other scientists a hard time for doing things that undermine your ability to trust scientific findings or other scientists in the community? If you don’t, why not?
These are all very good questions, but it’s the last one that gets to the heart of the matter. I do what I can — I don’t cheat or cut corners or steal, and if everyone did as I do, the credibility of published research would improve and it would be easier for scientists to do their work (in particular, given my support for Open Science, collaboration would be much easier).
If that sounds like blowing my own trumpet, it’s not: I’m a lowly postdoc, and what I said of myself is probably true of the majority of scientists at or below my level on the food chain. It’s also why I am likely to remain a lowly postdoc until I become unemployable in research: those who go on to be PIs and Department Heads and Directors of Institutes are largely the type-A assholes who are willing to cut corners and stomp on other people to get what they want. How exactly am I supposed to give a PI “a hard time”? If I don’t, I think it’s pretty damn clear why not. (You — anyone reading this — can think less of me for that if you wish, but since I doubt that you are one of the rare few who have put their own, and their families’, livelihoods on the line for a principle, you can also blow me.) I can, and do, discuss these issues with other postdocs — but to what avail? It’s precisely the ones who don’t listen, who secretly think me naive or weak, who are going to have the competitive edge.
Janet ends by saying:
maybe it’s time to turn from cynicism to righteous anger, to exert some peer pressure (and some personal discipline) so that hardly anyone will feel comfortable trying to get away with the stuff “everyone” does now.
Well, I’m full of anger. It doesn’t seem to be helping anything.