Motes, beams &c.

A while back, Philip Davis over at The Scholarly Kitchen posted about a small but useful research project of his:

All I did was ask five librarians at institutions administrating Open Access publication charges two simple questions:
“Can you provide a list of Open Access articles that you have supported through your author support program,” and “Have you rejected any requests to date?”

This is (to me) clearly information that such programs should be collating and reporting, and after two weeks Davis’ results were not exactly stellar:

Two weeks after asking my simple questions, I received just two short responses. No list, no numbers, but at least a few details: There was some confusion on the part of faculty of what an OA article publication charge really was. Some faculty requests were actually for page charges in conventional subscription journals; one faculty submitted a request for reprint charges; others submitted invoices to the library when they should have been directed to the external granting agency (like the HHMI). To date, no bonafide requests have been denied.

That’s useful information, as far as it goes, but it doesn’t go very far. Davis plays the conspiracy theory card way too hard for my taste, with “dark secrets” in the post title and an opening paragraph that reeks of melodrama:

You would have thought I was requesting a field manual for interrogating prisoners of war or a list of members on Dick Cheney’s Energy Taskforce. At least in those instances, I would have received a response that answering my questions violated national security or “executive privilege.”

Whoa, cowboy, back up a minute. As commenter Amanda R pointed out, we don’t know much about how Davis went about gathering the information:

As a point of clarification, were you directly refused data, or did libraries simply not respond? Did you contact them back and ask why there was no response, or if there was a reason they weren’t providing the full data you wanted?
Obviously, you deserve a professional response from the libraries you contacted. But, as much as it pains me to say it, I could easily imagine a library in which a request for statistics was bumped around internally for a few weeks before actually being answered.

In a Friendfeed discussion, librarian Christina Pikas made a related point:

the worst part of this is figuring out who you would send a request like that to. It takes me 10 e-mails and 3 phone calls to find the right person at my mothership main library. Almost seems that he’s taking confusion for malicious intent

as did commenter JQ Johnson:

when I in March queried the same institutions that Davis did, I got lots of cooperation. For example, UNC pointed me to a public letter (2/20/2009) to their vice chancellor that summarized in some detail the 12 requests they had funded to date. I’m puzzled why Davis got the response he did. Did he ask the wrong people?

Davis replied to both Amanda R and JQJ, but he gave non-answers containing no information about his methodology and insisted that what he had shown was a lack of transparency:

Whether the lack of response was caused by human error, technological barriers or internal policy, the result is a lack of transparency in how these author-support programs are performing.
These are all good questions but they skirt around the main issue of why I received only 2 responses, and why even these two responses were unable to provide me with any meaningful (even summarized or anonymized) data.

I found this very frustrating and left a comment1 aimed at clarifying why that was so:

JQJ’s comments and questions do not seem to me to skirt the issue at all, but rather to speak directly to alternative explanations for the lack of response. Methodological concerns are not trivial here.

  • Whom did you contact?
  • Did you say explicitly that you were sensitive to confidentiality issues and happy with various forms of anonymized data?
  • Did you phone anyone, or simply email?
  • How do you know your emails didn’t just end up in the spam bin?
  • Did you follow up (an unanswered question from Amanda, above)?

And so on. You have asked good questions, and have shown that routine reporting could be improved for such programs (already a useful outcome). But you need a good deal more evidence — including a more transparent methodology — before you go claiming there are “dark secrets” at work.

Now, it’s been almost two weeks since I left that comment, and it hasn’t appeared or been answered. What dark secrets is Philip Davis hiding? What dim, Crotty-esque ambitions of being the famous naysayer, the Nicholas Carr of Open Access, are forming even now in the troubled subconscious of this —

Or, you know, my comment just got stuck in the spam queue. It happens. 🙂

Davis finishes up by saying something relatively unexceptionable, if taken out of the context of his insistence on ignoring both Occam’s and Hanlon’s razors:

Library Open Access policies cannot exist with secret budgets, ambiguous guidelines, and a practice of stonewalling requests for information.
Those who campaign for Open Access need to be held accountable just like everyone else, and budget transparency is the first step.

Exactly so — everyone else, including bloggers who wish to hold librarian feet to the accountability fire.

1 I added the list formatting for this post, hoping for improved readability.
