My first mashup. I’m so proud. Even though it’s fairly crap.

I hate antibodies. There, I said it. When they work, they are an exquisite tool; when they don’t, which seems to be most of the bloody time, they are an infuriating waste of money and effort.
About the only thing I hate more than antibodies is shopping, especially comparison shopping, for antibodies. Biocompare is OK, but not great — and I distrust all commercial comparison-shopping services anyway, since I figure they sell priority listings.
Enter the internets: Alf recently pointed to a dynamic version of Google’s custom search, and Nature recently published a tech feature on antibodies — including a nice long table of suppliers, complete with websites.
So for now, here’s the crude version: I just jammed both those things together onto a single page: Google Custom Antibody Search.
What I’d like to do, eventually, is to turn the thing into a communal resource. This will mean finding a way to make it quick and easy for anyone to add a new supplier’s website. I could put it on a wiki somewhere, but I’d like to be able to offer a one-click way for people to contribute… maybe a one-click Simpy button with a tag like “AbSupplier”, a way to produce a non-redundant subset of the links so tagged, and then a way to write those links back out to the custom search page…
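If I ever get that working, the “non-redundant subset” step might look something like this little Python sketch. It’s purely illustrative — the function name and the example links are my own invention, and there’s no real Simpy or Google API here — but it shows the idea: collapse all the bookmarked URLs down to one entry per supplier site.

```python
# Hypothetical sketch: given a list of links gathered under a tag like
# "AbSupplier", reduce them to a non-redundant set of supplier sites
# suitable for feeding back into a custom search page.
from urllib.parse import urlparse

def unique_suppliers(links):
    """Collapse bookmarked URLs to one entry per supplier site."""
    seen = set()
    sites = []
    for link in links:
        host = urlparse(link).netloc.lower()
        if host.startswith("www."):
            host = host[4:]
        if host and host not in seen:
            seen.add(host)
            sites.append(host)
    return sites

links = [
    "http://www.abcam.com/products/primary",
    "http://abcam.com/about",
    "http://www.scbt.com/antibodies",
]
print(unique_suppliers(links))  # ['abcam.com', 'scbt.com']
```

Deduplicating on hostname (rather than full URL) is deliberate: the custom search engine wants sites, not individual pages.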
Anyway, there it is. I don’t even know for sure that it will be useful — I’ll try it myself at work, and see. Feel free to leave a comment here suggesting ways I could improve it, or just take the idea and build the thing properly yourself.

… wait, that’s not a mashup, is it? If I got it working with Simpy or something, then it’d be a mashup. Poo.

Damn good idea.

Via Peer-to-Peer, Ariberto Fassati in this week’s Nature correspondence (sorry, toll access only):

Reviewers [of scientific publications] often make significant contributions in shaping discoveries. They suggest new experiments, propose novel interpretations and reject some papers outright. […] It is well worth keeping a record of such work, for no history of science will be complete and accurate without it.
I therefore propose that journals’ records should be made publicly available after an adequate lapse of time, including the names of reviewers and the confidential comments exchanged between editors and reviewers. The Nobel Foundation makes all its records available after 50 years, as do many governmental and other institutions. This delay may be reduced for scientific journals to, perhaps, 15 or 20 years.

Now that’s a damn good idea: it’s long past time that reviewing got its due as an essential part of a scientist’s job, and opening the records should help to generate such recognition (to say nothing of the invaluable contribution to historiography of science).
My only quibble: why 15 years? If six months is long enough for an embargo on a closed-access paper, why is it not also long enough to keep the reviews secret? I presume the idea is to prevent retaliation for harsh reviews, but if all the information is public it would take a truly dedicated holder of a truly heinous grudge to follow up (in such a way as not to get caught doing it!) after six or twelve months. More to the point, we can dramatically reduce the risk of such retaliation by changing the community attitude towards reviewing. If peer review becomes a fully acknowledged part of the job, excellence in which is respected and rewarded — and if everyone knows their reviews will be made public! — then low quality (gratuitously mean, ill-informed, lazy, self-serving, etc) reviews should be a thing of the past.

Norm is a lazy fat cartoon character.

Janet Stemwedel is a bit bummed out by all the cynicism in her comments section lately:

What’s sticking in my craw a little is the “Eh, what are you gonna do?” note of resignation about some of the problematic behaviors and outcomes that are acknowledged to be even more common than the headlines would lead us to believe.

Janet claims that “Norms are what we ought to do, not what we suspect everyone actually does”. Me, I think “norms” is used to describe both sets of behaviours, and when observed behaviour norms differ from espoused value norms, there’s something rotten in the state of whatever field or community you are looking at. Janet again:

I do not think we can afford to embrace resignation here. I think that seeing the problems actually saddles us with some responsibility to do something about them. […] I don’t think we can have it both ways. I don’t think we can trumpet the reliability of Science and wallow in cynicism about the actual people and institutional structures involved in the production of science.

I don’t disagree, but I do wonder what actual somethings Janet has in mind for “us” to do. One of Janet’s examples (of how we can’t have it both ways) involves reproducibility of results:

[we can’t claim] that reproducibility is a central feature of scientific knowledge […] but […] only the “important” findings will be tested for reproducibility in the course of trying to build new findings upon them

I don’t think this is actually a problem. Very little research is reproduced; most is confirmed or corroborated by means of further experiments predicated on the assumption that the original result is/was reliable.
If a result is false but never found out, it probably means no one cared. It’s not as though people were combing through the original research literature and changing their lives or doing dangerous things on the basis of what they find there. (Or are they? If so, someone alert the Darwin Awards people.) If no one ever predicates a further experiment on a particular result, that result was presumably entirely uninteresting. I don’t think that “a whole lot of findings on the books but not fully verified” is a problem — the “books” are not running out of room, and the potentially useful findings will be verified or refuted precisely because they are potentially useful.
This, though:

when scientists report that their serious efforts to reproduce a finding have failed, the journal editors and university administrators can be counted on to do very little to recognize that the original finding shouldn’t be counted as “knowledge” anymore

is an entirely different kettle of fish — rotten fish at that — but you can’t blame it on the scientific method. It’s the scientific infrastructure that’s the problem here: [publish or perish] x [shrinking funding] x [far more scientists than permanent positions] = powerful incentive to cut corners or outright cheat, and very little incentive — even for those with tenure and power — to stand up to the cheats or take the corner-cutters to task. When what should happen in response to irreproducible results does not happen, that’s politics — not science.
In a similar vein, Janet says:

I don’t think we can proclaim that scientific work is basically trustworthy while also saying everyone has doubts about his collaborators […], and it’s a completely normal and expected thing for scientists to present their findings as much more certain and important than they are…

Again, I think the system works — that is, scientific findings are generally reliable, for reasons of confirmation/corroboration as above. The larger question, though, is whether it works as well as it could. Here I think the answer is a resounding No, at least from the perspective of a working scientist. The system is a meatgrinder, and if you want to come out whole at the other end then things like not overselling your results become luxuries you can’t afford.
That’s not to say that there won’t always, so long as funding has any limits on it at all, be corner-cutters, cheats, overselling of results (particularly to granting committees), and so on. What we have now, though, is a situation in which there are so many more postdocs than research positions to which they might aspire that it is hardly to be wondered at that “normal misbehaviour” does not seem an oxymoron.
When more than 75% of postdocs (and that figure is five years old, and I can’t see it having dropped in that time!) will not go on to any kind of permanent research position, we are not talking about the kind of competition that selects the best individuals and ensures the best product. We are talking about a situation in which advancement is more dependent on personal politics and luck than on talent or hard work. Working harder won’t give you an edge — the guy in the next lab sleeps there. Being smart won’t do it — the average IQ where you work qualifies for MENSA. Being willing to cheat, though — if you don’t get caught, that might just help.
Janet goes on to say:

I do think that scientists who care about building good scientific knowledge have some responsibilities to shoulder here.
How do you behave in conducting and reporting your research, and in your work with collaborators and your interactions with competitors? In a community of science where everyone behaved as you do, would it be easier or harder to open a journal and find a credible report? Would it be easier or harder for scientists in a field to get to the bottom of a difficult scientific question?
What kind of behavior do you tolerate in your scientific colleagues? What kind of behavior do you encourage? Do you give other scientists a hard time for doing things that undermine your ability to trust scientific findings or other scientists in the community? If you don’t, why not?

These are all very good questions, but it’s that last one that gets to the heart of the matter. I do what I can — I don’t cheat or cut corners or steal, and if everyone did as I do the credibility of published research would improve, and it would be easier for scientists to do their work (in particular, given my support for Open Science, collaboration would be much easier).
If that sounds like blowing my own trumpet, it’s not: I’m a lowly postdoc, and what I said of myself is probably true of the majority of scientists at or below my level on the food chain. It’s also why I am likely to remain a lowly postdoc until I become unemployable in research: those who go on to be PIs and Department Heads and Directors of Institutes are largely those type-A assholes who are willing to cut corners and stomp on other people to get what they want. How exactly am I supposed to give a PI “a hard time”? If I don’t, I think it’s pretty damn clear why not. (You — anyone reading this — can think less of me for that if you wish, but since I doubt that you are one of the rare few who has put their own, and their families’, livelihood on the line for a principle, you can also blow me.) I can, and do, discuss these issues with other postdocs — but to what avail? It’s precisely the ones who don’t listen, who secretly think me naive or weak, who are going to have the competitive edge.
Janet ends by saying:

maybe it’s time to turn from cynicism to righteous anger, to exert some peer pressure (and some personal discipline) so that hardly anyone will feel comfortable trying to get away with the stuff “everyone” does now.

Well, I’m full of anger. It doesn’t seem to be helping anything.

We are all Rob Knop. Well, us postdocs are, anyway.

Rob Knop is in a jam all too familiar to researchers and their long-suffering loved ones. He’s on the tenure track, but he doesn’t have independent funding — and so his university is basically planning to kick him out:

Vanderbilt has made it 100% clear that without funding at the level of an NSF grant, I will not get tenure, regardless of anything else. Indeed, my chair has told me that funding is the only issue he sees as being a serious question with my tenure case.

Note that Rob is clearly a sufficiently good teacher and colleague, and his scientific acumen is clearly sufficiently well regarded, for him to be granted tenure — which is the only form of job security available in academic research. Still, he’s a goner if he doesn’t make that funding cut — which, these days, somewhere between 10 and 20% of applications do, depending on field and political climate.
The system is broken: there are too many PhD graduates and not enough real jobs for them. A postdoc is not a real job; even a tenure-track position, one step up the food chain from a postdoc, is not a real job. A real job will not be yanked out from under you every few years, unless you or your boss can continually win funding — and when you get down to 20% funding levels, between politics and the sheer volume of work dumped on the granting committees, you might as well pick the names out of a hat. A real job does not leave you entirely at the mercy of your superiors, who can demand insane work hours from you, knowing that if you won’t sacrifice your life on the altar of their lab/department/whatever, there are ten other PhDs clamoring for the chance to do so. I’m no fan of the dismal science, but the law of supply and demand does seem to be consistent with observed phenomena here.
There have been a number of responses to Rob’s cri de coeur, and if you’re interested in the issue Google blogsearch and Technorati (if it’s working) will find them for you. I have been collecting links on the “postdoc problem”, and meaning to look for actual data on same, for some time — maybe I’ll even write that post one day. For now I just want to grab one sentence out of Chad’s response:

I’ve been extremely fortunate in my career.

And this is key. The majority of successful (tenured, funded) academics got that way largely by luck. Most of them have all kinds of fairy tales, as Rob puts it, that they tell themselves so that they can believe it was talent and hard work and nothing else, which is why they continue to urge smart kids into dead-end “careers”. (I do not mean to imply that Chad is untalented and lazy! The point here is that he is one of the few who recognizes what he owes to dumb luck.)
You cannot bank on luck.
I’m not saying “don’t ever go to grad school, don’t ever try to make a living out of research” — research is addictive, just look at me, still kidding myself I have more than a year or two left. But I am saying “you probably won’t make it”, by which I mean “have a backup plan”.
For my own field, biomed research, I would encourage would-be grad students to consider medical school instead. You can do basic or clinical research with an MD, and you have a backup career (a real career, not ten years of indentured servitude as a postdoc followed by “tough, yer out”). Hell, if you’re really keen you can do an MD-PhD — although frankly I don’t see the point. You learn nothing about research in a PhD that you can’t learn on the job, and it’s not as though you’re going to go straight from school to running a lab. You’ll be serving a kind of apprenticeship, a sort of postdoc, in any case — but you’ll be treated better. (There’s a widespread perception among PhDs that MDs make lousy researchers, but no one ever presents any hard data and my own experience indicates that the proportion of idiots is the same among MDs and PhDs — roughly 90%, as per Sturgeon’s Law.)

Essence of mouse.


In case anyone was wondering, this is the sort of thing I do all day. That cotton-candy-looking stuff is mouse genomic DNA, about 600 micrograms of it, harvested from a tumor caused (we’re trying to find out how) by deletion of the MNT gene in T cells. I was going to try to say something profound, but the little DNA monster (the “eyes” are air bubbles trapped when the DNA came out of solution) rather deflated my pomposity.

It’s here!

[Open Laboratory cover image]
What’s here? Why, the first-ever Science Blogging Anthology, of course: 50 posts, plus a couple of bonus entries, chosen from the best of science blogging in 2006. There’s also a preface and introduction by the editor, Bora Zivkovic of A Blog Around The Clock.
I was privileged to help Bora narrow the field from well over 200 posts, and many of my favorites made it into the final 50. As Bora intimates in his introduction, blogs are conversations and so they lose a certain liveliness when embalmed in a blook (blog + book; don’t blame me, I didn’t coin it!) like this. Nonetheless, there is some excellent writing in this thing; it is as perfect an introduction to science blogging as you’re likely to see offline, and it’s a fun read all on its own. True to the open nature of the original medium, you can of course surf over to Bora’s blog and find the anthology entries listed there. No one will mind if you do, but I hope you will also consider buying the blook — which, after all, unlike the internets, you can carry with you on the bus and leave on the break-room table at work (which is what I plan to do with my second copy). It’s priced at cost and any incidental proceeds will go towards next year’s edition.
Bravo, Bora!

Dear Public: please don’t mistake PZ Myers for a representative of my profession.

New(ish) bioethics blog Biopolitical Times has a post up which takes issue with PZ Myers’ response to the proposal to carry out therapeutic cloning using enucleated cow eggs and human somatic nuclei. Myers:

In fact, I want to go further than these scientists propose.
Don’t terminate the experiment after a few days when you’ve got healthy, growing blastocysts. Slip the best looking ones back into the cow. Work out methods for gestating them in a non-human mammal.
I want to be there nine months later when the vet reaches into the cow’s vagina and pulls out a slick, slimy, healthy human infant.
I want to see the Pope’s head explode when he sees it. I want David Cronenberg there with a camera, cackling happily.

Jesse at BT:

All this is proposed to rile up cultural conservatives, whom the blogger ridicules. Speaking as a generally secular political progressive, this attitude frightens and frustrates me. I’ve long felt that embracing the worst aspects of human biotechnology, such as these “Brave New World” scenarios, is a short road for progressives to lose sight of their core values and alienate the majority of the public. Rubbing this in the faces of those who are opposed – a group much larger than religious conservatives – for the purpose of a “fun and exciting” discussion is adolescent.

Now, I think Myers is trying to be funny. It’s impossible to tell, of course, because mixed in with what might pass for humor is his usual brand of vicious elitism and kneejerk prejudice.
The thing to remember about Myers is that, as I’ve noted before, he’s not a scientist (ask PubMed), nor is he an ethicist. He’s just a loudmouth braying into the cozy echo chamber of his blog. Best to ignore him, except that I feel obligated to push back from time to time just in case real people (“Joe Sixpack”, as Myers would have it) start mistaking him for a spokesman for actual research.

Open letter to Reed Elsevier

Further to the petition and boycott pledge I linked a while back, Tom Stafford has put together an open letter to Reed Elsevier that you can sign if you are an academic or researcher. Tom writes:

The letter will be sent to the Times Higher Education Supplement, a leading UK academics’ weekly, with potential for other national and international coverage. This will be the next in what has now become a series of open letters from professional users of Reed products. Previous letters have been signed by medics (in The Lancet) and high-profile writers (in the Times Literary Supplement), and both have received considerable, and worldwide, media attention.

Here’s the text of the letter (also available as a pdf here):

Mr Jan Hommen
Reed Elsevier PLC
1-3 Strand
xx October 2006
Dear Mr Hommen
We are an international group of academics who are extremely concerned about Reed Elsevier’s involvement in organising major arms fairs in the UK and around the world.

We rely on our academic work to be disseminated chiefly by means of books and peer-reviewed articles, a significant share of these via Reed Elsevier publications. Being both contributors and (unpaid) referees, and readers of Reed Elsevier journals makes us stakeholders in the Reed Elsevier business.

On its website, your company states that it is “committed to making genuine contributions to the science and health communities” and that it is “proud to be part of [these] communities”. Conversely, we are not proud to be associated with Reed Elsevier as we feel your statements are undermined by the conflict between your arms fair activities and our own ethical stance. Arms fairs, marketing the tools of violence, are a major link in the chain of the global arms trade which proliferates arms around the world and fuels a cycle of human, scientific, economic and cultural destruction.

This is entirely at odds with the ethical and social obligations we have to promote the beneficial applications of our work and prevent their misuse, to anticipate and evaluate the possible unintended consequences of scientific and technological developments, and to consider at all times the moral responsibility we carry for our work.

We call on Reed Elsevier to cease all involvement in arms fairs since it is not compatible with the aims of many of your stakeholders.

Yours sincerely

If you want to sign it, send email to tDOTstaffordATsheffieldDOTacDOTuk with “open letter to Reed Elsevier” in the subject line and a brief note including your full academic title, name, discipline and institution (or former institution if retired). The petition is ongoing, so also please sign that if you haven’t already. As I write there are 357 signatories; if you’re reading this you will probably recognize #19, 32, 55 and 90 (I’m #28).
I know that, after the umpteenth petition or letter or fundraiser or whatever, outrage fatigue starts to set in; and I know that, as world affairs go, there are more important issues than scumbags Reed Elsevier branching out into arms dealing. But — and here I’m speaking to my colleagues: researchers, teachers and academics the world over — this is our issue. It’s in our professional backyard; we own a chunk of it. Not only is a major academic publishing house part of our community, or at least of its infrastructure (whether we like it or not), but as the primary consumers of their primary products and services we have an unusual degree of leverage in this situation. Reed Elsevier is a business: if enough of their customers sign Tom’s letter and petition (and Nick’s boycott), they will get out of the arms trade.

I take exception!

In the course of promoting next year’s Science Blogging Conference, Coturnix writes:

Jean-Claude Bradley is the pioneer in the use of blogs in science in the way that too many of us are still too scared to do – posting on a daily basis the ideas, methods and data from the lab.

Not all of us are scared. I have colleagues with legitimate claims on all of the work I am doing at the moment, and none of them are willing to go open-notebook. I anticipate trouble even over my refusal to deal with Elsevier and my intention to publish only in open-access journals.
I’ve been in this lab only a year, so everything I’m doing is based so directly on someone else’s data and ideas that I don’t feel I can insist on an open notebook. Recently, though, I applied for funding to start an entirely new project. This will not mean that I can suddenly ignore my colleagues’ wishes, but it will put me in a stronger position to say, “well, this is my project, and I want to do it this way”.
I think of it as just another experiment. If I’m right, open science is a better way to work, and the benefits of choosing a better model will become apparent to my colleagues, and so open science will spread from early adopters like Jean-Claude (and, soon, I hope, me). If I’m wrong, I’ll fail — but I’ll fail on my own terms, and I can live with that.

How to hold an effective (lab) meeting.

Lab meetings are an unavoidable part of lab life. I’ve worked or studied in seven labs in two countries, and in all of them a regular, usually weekly, meeting was part of normal lab function; I’d venture to say that it’s pretty much a universal. The format doesn’t change much from lab to lab, either. The “body” of the meeting consists of either everyone presenting a quick rundown of what they’ve been doing, or one person presenting their latest work in more detail, and general lab business is an “anyone got anything?” sort of affair tacked on at the beginning or end. No one has a defined role, there is no agenda, no records are kept. And then, of course, everyone complains about wasting time in lab meeting.
This entry was prompted by our (Hurlin lab) meeting on Friday, where we complained about wasting time and talked about ways to improve our meetings. It struck me that if you’re going to do something 50-odd times a year, you might as well get good at it, and with our meeting format currently being overhauled this is the perfect chance for me to try things out. I’m going to go over this with the spousal unit, who is something of an organization junkie/expert, and I’m hoping that the Lazyweb will chime in as well. I’d be very interested to hear about what works, or doesn’t work, in your lab meetings.
So why do we even have lab meetings? There seem to be three basic functions — that is, three things we want to achieve. First, it’s a chance to get everyone together for announcements, organization and joint decisions: do we need more gel rigs, who’s going to be the new safety officer, that kind of thing. Second, it’s a way to keep everyone, particularly the PI, in touch with everyone’s projects. Finally, it’s a way to get everyone’s feedback on your project and any problems you might be having — to get the combined lab brainpower focused on one question or set of questions.
Most of the information on the web relates to (*shudder*) corporate meetings, but I’ve picked out the bits I thought were applicable to lab meetings. Fwiw, here are most of the sites I used to put this list together.
1. Make sure you need a meeting.
Given the functions I listed above, we need the meetings, but perhaps not weekly? Would monthly be too infrequent? What about every two weeks? It probably depends on the size of the lab and how much time the PI spends actually in it, but for most labs I guess weekly meetings are best. Also, should we try to accommodate all three functions in one meeting, or would we be better off splitting the “admin” and “research” functions? Since “admin” doesn’t usually take much time, I think the fewer meetings the better.
2. Start on time and end on time.
Nearly every site I read emphasizes these two points, and that timekeeping is crucial. Another common suggestion is to give people defined roles, including facilitator (see below) and timekeeper. In small meetings, I guess these two roles could be combined, but there might be benefit in keeping them separate.
The question this raises for me is, how long should a lab meeting be? Ours start at 09:30 and can easily stretch until 12:00, which more or less wipes out half a day. I think lab business should take no more than 20 minutes (and often much less), which leaves presentations. One way to get them down to a more reasonable time might be to make them a bit less informal than they currently are (photocopied pages out of someone’s lab notes are not uncommon!). If the presentations were more structured, they could more easily adhere to a time limit. I think I’ll suggest that it shouldn’t take more than 30 minutes to present your last 6 weeks’ worth of work, especially if you focus on questions you want answered by the lab Hive Mind. Supposing that questions and discussion take up a full hour, that’s still a two hour meeting.
3. Have an agenda.
For a lab meeting, I think this means something more like set a format:

  • Lab business first or last? (Last, so there’s incentive not to drag it out. We currently do it first, and tend to yap.)
  • Who will speak? One person at a time, or several, or everybody? I think this depends on the size of the lab. We have 6 people, so if only one person speaks we each present every 6 weeks. I think this is about right, but some of my coworkers would like to get the Hive Mind’s and Peter’s undivided attention more often.
  • Should each presenter follow a general outline, so that talks have a structure? As above, I think so — it will help keep the presenter and the meeting focused on what we’re trying to achieve. I think I’ll suggest something along the lines of: background (what project is this again?), current results, problems, future plans.

4. Keep minutes.
Another nearly universal recommendation. Minutes can be used to start the meeting with action items from last meeting, which can be useful to nudge people along with their commitments. (In the same vein, action items should always come with an attached Person Responsible.) Minutes should be archived in a communal place (like our shared disc drive), so that everyone can refer back and you don’t have to keep reinventing the wheel. Keeper-of-the-Minutes is another role, like timekeeper and facilitator, that needs to be assigned or rotated.
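For what it’s worth, the minutes file needn’t be anything fancy. Here’s a throwaway Python sketch of the sort of skeleton I have in mind — everything in it (function name, headings, the example action item) is my own invention, not any standard format:

```python
# Hypothetical sketch: generate a dated minutes skeleton that carries
# forward last meeting's action items, each with a Person Responsible.
from datetime import date

def minutes_skeleton(meeting_date, action_items):
    """action_items: list of (task, person) tuples carried over."""
    lines = [f"Lab meeting minutes, {meeting_date.isoformat()}", ""]
    lines.append("Action items from last meeting:")
    for task, person in action_items:
        lines.append(f"  [ ] {task} ({person})")
    lines += ["", "Presentation:", "", "Lab business:", "", "New action items:"]
    return "\n".join(lines)

print(minutes_skeleton(date(2007, 1, 12), [("order new gel rigs", "Bill")]))
```

Dropping the output of something like this into the shared drive each week would give everyone a consistent place to check their commitments before the next meeting.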
5. End the meeting with a summary.
Mostly for lab business: what are we going to do? Who is going to do it?
6. Get feedback on whether the format is working.
We’ll be experimenting with these ideas over the next few months, so it will be important to keep track of what’s working. (I’ll report back here.)
7. Facilitation is crucial.
Universally acknowledged, and may well be the most important point. Having someone to keep everything on track seems to be critical for what I am trying to achieve here: avoiding timewasting. Some ideas that seem good to me:

  • the facilitator shouldn’t take sides on an issue, but strive to find out what the consensus is (may be difficult in small meetings)
  • the role of facilitator could be rotated around, so everyone shares the task (and if someone should prove to be especially good at it, they could take it up permanently)
  • a good way to avoid sidetracks is to have a sheet of paper or whiteboard on which to “park” deferred topics
  • a good way to encourage lurkers and dampen the dominant is to go round-robin and get everyone’s feedback as a way of finalizing a topic

So, that’s my first pass at improving lab meetings. Any ideas, Lazyweb?