Damn good idea.

Via Peer-to-Peer, Ariberto Fassati in this week’s Nature correspondence (sorry, toll access only):

Reviewers [of scientific publications] often make significant contributions in shaping discoveries. They suggest new experiments, propose novel interpretations and reject some papers outright. […] It is well worth keeping a record of such work, for no history of science will be complete and accurate without it.
I therefore propose that journals’ records should be made publicly available after an adequate lapse of time, including the names of reviewers and the confidential comments exchanged between editors and reviewers. The Nobel Foundation makes all its records available after 50 years, as do many governmental and other institutions. This delay may be reduced for scientific journals to, perhaps, 15 or 20 years.

Now that’s a damn good idea: it’s long past time that reviewing got its due as an essential part of a scientist’s job, and opening the records should help to generate such recognition (to say nothing of the invaluable contribution to historiography of science).
My only quibble: why 15 years? If six months is long enough for an embargo on a closed-access paper, why is it not also long enough to keep the reviews secret? I presume the idea is to prevent retaliation for harsh reviews, but if all the information is public it would take a truly dedicated holder of a truly heinous grudge to follow up (in such a way as not to get caught doing it!) after six or twelve months. More to the point, we can dramatically reduce the risk of such retaliation by changing the community attitude towards reviewing. If peer review becomes a fully acknowledged part of the job, excellence in which is respected and rewarded — and if everyone knows their reviews will be made public! — then low-quality (gratuitously mean, ill-informed, lazy, self-serving, etc.) reviews should be a thing of the past.

Happy (blog) Birthday!

I usually try to keep my entries in this category entirely “serious”, because then readers can avoid all the personal and other clutter in this blog, so I don’t do a lot of birthdays and such.
It is, however, hardly unserious to take a moment to wish a happy birthday, and many happy returns, to the indispensable Open Access News, which turns five today — and to extend a hearty thanks and congratulations to its indefatigable author, Peter Suber.
It’s safe to say that Open Access would not be where it is today, nor expanding at its current rapid rate, without Peter and his blog. So thanks Peter, and happy (blog) birthday!

Another new Open Science blog.

Speaking of new faces in the blogosphere, Heather Piwowar has a new blog, Research Remix, focusing on Open Data:

… the goal of this blog is to capture my notes as I flail around learning everything I can about data sharing and re-use, with the short-term goal of writing my biomedical informatics doctoral dissertation literature review. Taking notes here out in the open in case it interests anyone else along the way.

(Link not in original.) Bravo!
In one of her first posts, Heather points to a Nature editorial (sorry, closed access) calling for psychologists to move towards Open Data:

In psychology there is little tradition of making the data on which researchers base their statistical analyses freely available to others after publication. This makes it difficult for anyone to independently reanalyse research results, and prevents small data sets from being combined for meta-analysis, or large ones mined for fresh insights or perspectives.
Psychologists need to rethink their reluctance to share data.

Heather notes that the article only glances off the really interesting question:

Does the concept of sharing data generate unnecessary angst? Does it actually generate angst, or is it mostly laziness or selfishness or fear? If angst, is the angst indeed unwarranted? To what extent does sharing data in fact lead to additional stresses for authors?
I’d love to see research into the reasons why scientists do not share data, and whether their reasons are upheld by events. This knowledge would allow us to address the underlying issues deterring authors from making their data available, which is bound to be more effective for long-term goals than simply relying on requirements from funding agencies and journals.

The article touches on what I think is the most important reason for reluctance to share:

Like many researchers in other disciplines, psychologists fear that if different analytical approaches are brought to bear on their data, different conclusions could be drawn, casting doubt on their competence — or even their integrity.

In my field (biomed), it’s not so much fear of being found out in a mistake or a lie (though I bet a fair proportion are worried about being caught in “normal misbehaviour”). The real killer is ego: what if someone else gets there first? The field has become so over-competitive that many (I’d say most) researchers seek to maximize any edge they can get. Everyone seems to think their Nobel is just around the corner, and they can’t bear the idea of someone else getting it — so they’re willing to let data go underutilized rather than risk having to share credit (or being done out of it).
I think Heather is right about addressing underlying issues, but it does occur to me that the same researchers who won’t share their data may also be unlikely to cooperate with research into the reasons why: those reasons frequently do not reflect well on individuals or the community. In the short term, mandates are probably the only effective mechanism for getting widespread adoption of open access and open data practices over the initial hump of apathy, fear of change, selfishness, laziness and so on. In the long term, I hope that as the mandates take effect, the increased efficiency of open science — of collaboration over competition — will become apparent, and the nature of the scientific community will change in an ever more open direction.

Open Science news

Via Jean-Claude, the Open Science world welcomes another researcher, Sivappa Rasapalli of Totally Retrosynthetic. This is great news, since one of the primary obstacles to wider acceptance of Open Science ideas is the lack of working examples (real research, not just blabber on a blog like mine). In addition to the blog, Shiva also has a wikispace for his research proposals, and (when he is in a position to do so) plans to publish his research results openly as well. In his own words:

Basically, I want to
1. Avoid unnecessary duplication (thus protecting the ideas)
2. Reap the expertise of chemists out there thus improve the ideas further
3. Collaborate with researchers willing to try the ideas and give the credit
4. Help the folks with the research ideas, but no opportunity to execute them.

So feel free to pitch in and voice your opinions on the ideas.

So get on over there, O my tens of readers, and lean those giant brains of yours against Shiva’s research questions. As Jean-Claude is fond of pointing out, what better way to get credit for your idea than to collaborate in real time with an Open Science advocate using documents “registered with third-party time stamps and efficiently indexed by the most popular search engine in the world”?
And besides, collaboration is fun. Discovery is the addiction that drives research — it’s the crackpipe hit, the rush, the thrill that keeps us going through the down times and the plodding; but one of the best ways to alleviate the boredom and despondency that set in between fixes is to collaborate. Not only does it bring fresh perspectives and ideas, it reminds us that we’re not in this alone.
(If you read my last post, that might seem at odds with the views I expressed there. What can I say? I have my bad days. But even on the worst of ’em, it’s the possibilities of Open Science that keep me from throwing up my hands and leaving research altogether.)

Norm is a lazy fat cartoon character.

Janet Stemwedel is a bit bummed out by all the cynicism in her comments section lately:

What’s sticking in my craw a little is the “Eh, what are you gonna do?” note of resignation about some of the problematic behaviors and outcomes that are acknowledged to be even more common than the headlines would lead us to believe.

Janet claims that “Norms are what we ought to do, not what we suspect everyone actually does”. Me, I think “norms” is used to describe both sets of behaviours, and when observed behaviour norms differ from espoused value norms, there’s something rotten in the state of whatever field or community you are looking at. Janet again:

I do not think we can afford to embrace resignation here. I think that seeing the problems actually saddles us with some responsibility to do something about them. […] I don’t think we can have it both ways. I don’t think we can trumpet the reliability of Science and wallow in cynicism about the actual people and institutional structures involved in the production of science.

I don’t disagree, but I do wonder what actual somethings Janet has in mind for “us” to do. One of Janet’s examples (of how we can’t have it both ways) involves reproducibility of results:

[we can’t claim] that reproducibility is a central feature of scientific knowledge […] but […] only the “important” findings will be tested for reproducibility in the course of trying to build new findings upon them

I don’t think this is actually a problem. Very little research is reproduced; most is confirmed or corroborated by means of further experiments predicated on the assumption that the original result is/was reliable.
If a result is false but never found out, it probably means no one cared. It’s not as though people were combing through the original research literature and changing their lives or doing dangerous things on the basis of what they find there. (Or are they? If so, someone alert the Darwin Awards people.) If no one ever predicates a further experiment on a particular result, that result was presumably entirely uninteresting. I don’t think that “a whole lot of findings on the books but not fully verified” is a problem — the “books” are not running out of room, and the potentially useful findings will be verified or refuted precisely because they are potentially useful.
This, though:

when scientists report that their serious efforts to reproduce a finding have failed, the journal editors and university administrators can be counted on to do very little to recognize that the original finding shouldn’t be counted as “knowledge” anymore

is an entirely different kettle of fish — rotten fish at that — but you can’t blame it on the scientific method. It’s the scientific infrastructure that’s the problem here: [publish or perish] x [shrinking funding] x [far more scientists than permanent positions] = powerful incentive to cut corners or outright cheat, and very little incentive — even for those with tenure and power — to stand up to the cheats or take the corner-cutters to task. When what should happen in response to irreproducible results does not happen, that’s politics — not science.
In a similar vein, Janet says:

I don’t think we can proclaim that scientific work is basically trustworthy while also saying everyone has doubts about his collaborators […], and it’s a completely normal and expected thing for scientists to present their findings as much more certain and important than they are…

Again, I think the system works — that is, scientific findings are generally reliable, for reasons of confirmation/corroboration as above. The larger question, though, is: does it work as well as it could? Here I think the answer is a resounding No, at least from the perspective of a working scientist. The system is a meatgrinder, and if you want to come out whole at the other end then things like not overselling your results become luxuries you can’t afford.
That’s not to say that there won’t always, so long as funding has any limits on it at all, be corner-cutters, cheats, overselling of results (particularly to granting committees), and so on. What we have now, though, is a situation in which there are so many more postdocs than research positions to which they might aspire that it is hardly to be wondered at that “normal misbehaviour” does not seem an oxymoron.
When more than 75% of postdocs (and that figure is five years old, and I can’t see it having dropped in that time!) will not go on to any kind of permanent research position, we are not talking about the kind of competition that selects the best individuals and ensures the best product. We are talking about a situation in which advancement is more dependent on personal politics and luck than on talent or hard work. Working harder won’t give you an edge — the guy in the next lab sleeps there. Being smart won’t do it — the average IQ where you work qualifies for Mensa. Being willing to cheat, though — if you don’t get caught, that might just help.
Janet goes on to say:

I do think that scientists who care about building good scientific knowledge have some responsibilities to shoulder here.
How do you behave in conducting and reporting your research, and in your work with collaborators and your interactions with competitors? In a community of science where everyone behaved as you do, would it be easier or harder to open a journal and find a credible report? Would it be easier or harder for scientists in a field to get to the bottom of a difficult scientific question?
What kind of behavior do you tolerate in your scientific colleagues? What kind of behavior do you encourage? Do you give other scientists a hard time for doing things that undermine your ability to trust scientific findings or other scientists in the community? If you don’t, why not?

These are all very good questions, but it’s that last one that gets to the heart of the matter. I do what I can — I don’t cheat or cut corners or steal, and if everyone did as I do the credibility of published research would improve, and it would be easier for scientists to do their work (in particular, given my support for Open Science, collaboration would be much easier).
If that sounds like blowing my own trumpet, it’s not: I’m a lowly postdoc, and what I said of myself is probably true of the majority of scientists at or below my level on the food chain. It’s also why I am likely to remain a lowly postdoc until I become unemployable in research: those who go on to be PIs and Department Heads and Directors of Institutes are largely those type-A assholes who are willing to cut corners and stomp on other people to get what they want. How exactly am I supposed to give a PI “a hard time”? If I don’t, I think it’s pretty damn clear why not. (You — anyone reading this — can think less of me for that if you wish, but since I doubt that you are one of the rare few who have put their own, and their families’, livelihoods on the line for a principle, you can also blow me.) I can, and do, discuss these issues with other postdocs — but to what avail? It’s precisely the ones who don’t listen, who secretly think me naive or weak, who are going to have the competitive edge.
Janet ends by saying:

maybe it’s time to turn from cynicism to righteous anger, to exert some peer pressure (and some personal discipline) so that hardly anyone will feel comfortable trying to get away with the stuff “everyone” does now.

Well, I’m full of anger. It doesn’t seem to be helping anything.

Another “why didn’t I think of that?” moment.

Rich Apodaca provides my daily dose of “smack self in forehead”:

Recently, I attended a talk given by Max Levchin, co-founder of PayPal, on the subject of product design. In it, he advised those seeking to create a successful startup to build products designed to enable users to commit one or more of the Seven Deadly Sins.

His reasoning was simplicity itself. The Seven Deadly Sins were those activities so universal that people needed to be threatened with all kinds of bad things if they did them. Looking at it from a detached, secular perspective, most people seem hard-wired to want to commit one or more of the Seven Deadly Sins – repeatedly and without encouragement. Looking at it from a product designer perspective, cha-ching!

See Rich’s post for a concise summary of the Seven Scientific Deadly Sins, and why they are not necessarily sins at all; the take-home point is this:

Why does any of this matter? For the simple reason that information technology and economics are in the process of rendering obsolete existing models of scientific publication. To build the systems of the future, it’s essential to understand the motivations of those using the current one.

Rich is exactly right. Scientists have all kinds of reasons for publishing, and the particular exigencies of research mean that the nobler impulses tend to be pushed to the back of one’s mind — at the practical, day-to-day level, it’s the Sins that win. This strikes me as an insight that open access/open science advocates would do well to keep in mind.

We are all Rob Knop. Well, us postdocs are, anyway.

Rob Knop is in a jam all too familiar to researchers and their long-suffering loved ones. He’s on the tenure track, but he doesn’t have independent funding — and so his university is basically planning to kick him out:

Vanderbilt has made it 100% clear that without funding at the level of an NSF grant, I will not get tenure, regardless of anything else. Indeed, my chair has told me that funding is the only issue he sees as being a serious question with my tenure case.

Note that Rob is clearly a good enough teacher and colleague, and his scientific acumen sufficiently well regarded, for him to be granted tenure — which is the only form of job security available in academic research. Still, he’s a goner if he doesn’t make that funding cut — which, these days, somewhere between 10 and 20% of applications do, depending on field and political climate.
The system is broken: there are too many PhD graduates and not enough real jobs for them. A postdoc is not a real job; even a tenure-track position, one step up the food chain from a postdoc, is not a real job. A real job will not be yanked out from under you every few years, unless you or your boss can continually win funding — and when you get down to 20% funding levels, between politics and the sheer volume of work dumped on the granting committees, you might as well pick the names out of a hat. A real job does not leave you entirely at the mercy of your superiors, who can demand insane work hours from you, knowing that if you won’t sacrifice your life on the altar of their lab/department/whatever, there are ten other PhDs clamoring for the chance to do so. I’m no fan of the dismal science, but the law of supply and demand does seem to be consistent with observed phenomena here.
There have been a number of responses to Rob’s cri de coeur, and if you’re interested in the issue Google blogsearch and Technorati (if it’s working) will find them for you. I have been collecting links on the “postdoc problem”, and meaning to look for actual data on same, for some time — maybe I’ll even write that post one day. For now I just want to grab one sentence out of Chad’s response:

I’ve been extremely fortunate in my career.

And this is key. The majority of successful (tenured, funded) academics got that way largely by luck. Most of them have all kinds of fairy tales, as Rob puts it, that they tell themselves so that they can believe it was talent and hard work and nothing else, which is why they continue to urge smart kids into dead-end “careers”. (I do not mean to imply that Chad is untalented and lazy! The point here is that he is one of the few who recognizes what he owes to dumb luck.)
You cannot bank on luck.
I’m not saying “don’t ever go to grad school, don’t ever try to make a living out of research” — research is addictive, just look at me, still kidding myself I have more than a year or two left. But I am saying “you probably won’t make it”, by which I mean “have a backup plan”.
For my own field, biomed research, I would encourage would-be grad students to consider medical school instead. You can do basic or clinical research with an MD, and you have a backup career (a real career, not ten years of indentured servitude as a postdoc followed by “tough, yer out”). Hell, if you’re really keen you can do an MD-PhD — although frankly I don’t see the point. You learn nothing about research in a PhD that you can’t learn on the job, and it’s not as though you’re going to go straight from school to running a lab. You’ll be serving a kind of apprenticeship, a sort of postdoc, in any case — but you’ll be treated better. (There’s a widespread perception among PhDs that MDs make lousy researchers, but no one ever presents any hard data and my own experience indicates that the proportion of idiots is the same among MDs and PhDs — roughly 90%, as per Sturgeon’s Law.)