Motes, beams &c.

A while back, Philip Davis over at The Scholarly Kitchen posted about a small but useful research project of his:

All I did was ask five librarians at institutions administrating Open Access publication charges two simple questions:
“Can you provide a list of Open Access articles that you have supported through your author support program,” and “Have you rejected any requests to date?”

This is (to me) clearly information that such programs should be collating and reporting, and after two weeks Davis’ results were not exactly stellar:

Two weeks after asking my simple questions, I received just two short responses. No list, no numbers, but at least a few details: There was some confusion on the part of faculty of what an OA article publication charge really was. Some faculty requests were actually for page charges in conventional subscription journals; one faculty submitted a request for reprint charges; others submitted invoices to the library when they should have been directed to the external granting agency (like the HHMI). To date, no bonafide requests have been denied.

That’s useful information, as far as it goes, but it doesn’t go very far. Davis plays the conspiracy theory card way too hard for my taste, with “dark secrets” in the post title and an opening paragraph that reeks of melodrama:

You would have thought I was requesting a field manual for interrogating prisoners of war or a list of members on Dick Cheney’s Energy Taskforce. At least in those instances, I would have received a response that answering my questions violated national security or “executive privilege.”

Whoa, cowboy, back up a minute. As commenter Amanda R pointed out, we don’t know much about how Davis went about gathering the information:

As a point of clarification, were you directly refused data, or did libraries simply not respond? Did you contact them back and ask why there was no response, or if there was a reason they weren’t providing the full data you wanted?
Obviously, you deserve a professional response from the libraries you contacted. But, as much as it pains me to say it, I could easily imagine a library in which a request for statistics was bumped around internally for a few weeks before actually being answered.

In a FriendFeed discussion, librarian Christina Pikas made a related point:

the worst part of this is figuring out who you would send a request like that to. It takes me 10 e-mails and 3 phone calls to find the right person at my mothership main library. Almost seems that he’s taking confusion for malicious intent

as did commenter JQ Johnson:

when I in March queried the same institutions that Davis did, I got lots of cooperation. For example, UNC pointed me to a public letter (2/20/2009) to their vice chancellor that summarized in some detail the 12 requests they had funded to date. I’m puzzled why Davis got the response he did. Did he ask the wrong people?

Davis replied to both Amanda R and JQJ, but he gave non-answers containing no information about his methodology and insisted that what he had shown was a lack of transparency:

Whether the lack of response was caused by human error, technological barriers or internal policy, the result is a lack of transparency in how these author-support programs are performing.
These are all good questions but they skirt around the main issue of why I received only 2 responses, and why even these two responses were unable to provide me with any meaningful (even summarized or anonymized) data.

I found this very frustrating and left a comment1 aimed at clarifying why that was so:

JQJ’s comments and questions do not seem to me to skirt the issue at all, but rather to speak directly to alternative explanations for the lack of response. Methodological concerns are not trivial here.

  • Whom did you contact?
  • Did you say explicitly that you were sensitive to confidentiality issues and happy with various forms of anonymized data?
  • Did you phone anyone, or simply email?
  • How do you know your emails didn’t just end up in the spam bin?
  • Did you follow up (an unanswered question from Amanda, above)?

And so on. You have asked good questions, and have shown that routine reporting could be improved for such programs (already a useful outcome). But you need a good deal more evidence — including a more transparent methodology — before you go claiming there are “dark secrets” at work.

Now, it’s been almost two weeks since I left that comment, and it hasn’t appeared or been answered. What dark secrets is Philip Davis hiding? What dim, Crotty-esque ambitions of being the famous naysayer, the Nicholas Carr of Open Access, are forming even now in the troubled subconscious of this —
Or, you know, my comment just got stuck in the spam queue. It happens. 🙂
Davis finishes up by saying something relatively unexceptionable if taken out of the context of his insistence on ignoring both Occam’s and Hanlon’s razors:

Library Open Access policies cannot exist with secret budgets, ambiguous guidelines, and a practice of stonewalling requests for information.
Those who campaign for Open Access need to be held accountable just like everyone else, and budget transparency is the first step.

Exactly so — everyone else, including bloggers who wish to hold librarian feet to the accountability fire.

1I added the list formatting for this post, hoping for improved readability.

Better than nothing? A bookmarklet for The Open Lab

A while back, I mentioned that the Open Lab could really use a bookmarklet to make submission easier and faster.
Since for once the LazyWeb did not provide, I’ve had a crack at it. I’ve got a simple version working (though I haven’t tested it anywhere but Firefox 3); all it does is pop up a conveniently-sized window showing the submission form:


If you drag that to your toolbar, you can at least hit the bookmarklet while you’re on the page you want to submit, and simply move the popup around in order to copy over the information. I find it a lot more convenient than having to open the submission form in a separate window and go back and forth.
What would really make this useful is if it would auto-fill the submitter’s name, address and website, and pull in the title and URL of the post being submitted. I’m trying to add that functionality, but I’m a complete JavaScript n00b and so far cannot get it to work, no matter what existing bookmarklets I try to use as a model. So I hope it’s useful to someone as-is — and if you know your way around JavaScript, feel free to upgrade it! — but I wouldn’t hold my breath waiting for the improved version.
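For anyone who wants to tinker, the core of such a bookmarklet is small. Here is a minimal sketch of the idea; the form URL below is a placeholder (the real submission address would go there), and the auto-fill part would only work if the form itself read the title and url parameters:

```javascript
// Placeholder address: substitute the real Open Lab submission form URL.
var FORM_URL = "http://example.com/openlab/submit";

// Build the popup URL, passing along the current page's title and
// location so that a cooperating form could pre-fill its fields.
function submissionUrl(pageTitle, pageUrl) {
  return FORM_URL +
    "?title=" + encodeURIComponent(pageTitle) +
    "&url=" + encodeURIComponent(pageUrl);
}

// In the browser, the bookmarklet itself is just this one-liner:
// javascript:window.open(submissionUrl(document.title, location.href),
//   'openlab', 'width=550,height=650,scrollbars=yes');
```

Everything after `javascript:` has to be squeezed onto a single line for the bookmarklet to survive being dragged to the toolbar.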

Oprah and anti-vaccine propaganda

Unless you’ve been living under a rock, you’ll be aware of the latest round of anti-vaccine inanity, but unless you’ve taken some time to look into it you might not be aware of quite how stupid, and how dangerous, the anti-vaccine “crusade” really is.
In either case, do yourself and everyone else a favor and go read Shirley’s open letter to Oprah.

@Oprah, don’t watch show but nice Duke speech. take own advice and make difficult decision to pull support from mccarthy, save lives. kthxbi

It’s a quick read, but contains all the facts you’d want at your fingertips and pointers to plenty more. It’s also warm and calm and human, and strikes me as being far more likely to actually get Oprah to reconsider her stance than some of the angrier and/or more science-focused commentary out there.
So go read it, blog about it, retweet it. See if we can get Oprah’s attention.

The Semantic Web: a long and somewhat convoluted definition.

This1 is an attempt to define and explain the semantic web for a lay audience, though it should be remembered that I am a member of that audience myself…
It is a commonplace that we are drowning in information, and nowhere is this “information overload” more apparent than in scientific research. The National Library of Medicine’s literature database, PubMed, is searched more than 60 million times a month and contains almost 19 million records from more than 5,300 journals — still only a fraction of the approximately 15,000 active, refereed, scientific journals listed in Ulrich’s Periodicals Directory2. GenBank, the world’s foremost repository of nucleic acid sequence information, contains roughly 100 billion bases in 100 million sequence records, and is growing at an exponentially increasing rate that is currently in excess of 50,000 records per day. Unlike PubMed and GenBank, which are cross-disciplinary databases, the Nucleic Acids Research Molecular Biology Database Collection is a carefully curated list of high-value specialist resources; it currently lists 1,170 distinct, largely non-overlapping databases. I could go on, but you get the point3.
As things stand, researchers talk to researchers and use computers to facilitate that conversation; what we need is for computers to be able to talk to computers. To cope with (literally) inhuman volumes of data, we need that data to start making sense to machines, so that they can do something no human brain can do: process all of it. We need to make it possible for machines to transfer richly interconnected data among themselves, mix and remix it, generate new connections, filter it, process it, transform it, and output the results to formats and interfaces that make sense to human brains — substrates on which we can carry out the sorts of synthetic, creative thinking that computers cannot do.
We need a man-machine partnership in which both partners can do what they do best, and that means we need the semantic web.
The semantic web is the outcome of processes and frameworks with which computers can manipulate data in ways that make it accessible to human brains. It is built on the standards and metadata — information about data — that are required for automated data exchange and processing, which in turn is required to create machine-generated, human-scale summaries, skeletons, outlines and other representations of, and interfaces with, the entire knowledge corpus.
Here’s an example. Human brains have no trouble processing the following data:

Another reason for opening access to research. Wilbanks J. BMJ. 333:1306-8 (2006).

To you, that’s a reference; but to a computer, it’s just a string of text. What a computer needs is information (metadata) about each substring:

Title: Another reason for opening access to research.
Author: Wilbanks, J
Journal: British Medical Journal
Volume: 333
Date: 2006

Now the computer “knows” which letters identify John, which constitute the title of the article, and so on. If you set the standards up properly, it even “knows” that Wilbanks is the surname and J the first initial, and so on into ever finer-grained properties.
Now imagine you had, oh, say, about 19 million such records. A human brain cannot do anything useful with such a database, but a computer can — which is exactly why we can ask PubMed human-scale questions like “how many papers did J Wilbanks publish between 2000 and 2009?”, or “show me all the papers with ‘access to research’ in the title”.
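To make that concrete, here is a toy sketch (invented records and field names of my own choosing, not PubMed’s actual schema) of how trivial such questions become once the data is structured:

```javascript
// Two structured records; the first is the Wilbanks reference from
// above, the second is invented for contrast.
var records = [
  { title: "Another reason for opening access to research",
    author: { surname: "Wilbanks", initial: "J" },
    journal: "BMJ", volume: 333, year: 2006 },
  { title: "A made-up paper for illustration",
    author: { surname: "Smith", initial: "A" },
    journal: "Nature", volume: 397, year: 1999 }
];

// "How many papers did J Wilbanks publish between 2000 and 2009?"
// With metadata this is a simple filter; with raw strings it would
// require fragile text-matching.
function countByAuthor(recs, surname, from, to) {
  return recs.filter(function (r) {
    return r.author.surname === surname &&
           r.year >= from && r.year <= to;
  }).length;
}
```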
Now multiply that — the ability to ask human-scale questions of a mass of information far too large for any human brain to absorb or process — by thousands of different types of information (text, gene sequences, chemical formulae, microarray results, etc etc), millions of individual records within each data type, recorded in thousands of journals and databases, produced by hundreds of thousands of laboratories, libraries and garage hackers. Imagine what we could learn if we could query all of that information on a human scale.
There: that’s a glimpse of the potential power of the semantic web.
1 This entry started life as an early draft of a letter in support of John Wilbanks’ application for a TED fellowship. We didn’t get enough signatures in time, so it was never even sent. My apologies to those people who did sign on; if John re-applies I’ll try again, with better planning!
2 tickboxes = active, refereed, scholarly/academic; search = LC Classification Number for [Q* OR R* OR S* OR T* OR U* OR V*]
3In fact, I’m always on the lookout for more good examples of the “data deluge” and the rapid progress of science and tech; post ’em (in comments) if you got ’em.

More on the “Australasian Journal of…” series.

On the basis of the evidence below, I believe the entire “Australasian journal of…” series from Excerpta Medica to be either nonexistent or fake, in the same sense of “fake” that Elsevier has already admitted applies to the following six titles from that series:

  • Australasian Journal of General Practice
  • Australasian Journal of Neurology
  • Australasian Journal of Cardiology
  • Australasian Journal of Clinical Pharmacy
  • Australasian Journal of Cardiovascular Medicine
  • Australasian Journal of Bone & Joint Medicine

WorldCat lists a further thirteen titles in the apparent series:

  • Australasian journal of asthma
  • Australasian journal of bone & joint medicine
  • Australasian journal of dentistry
  • Australasian journal of depression
  • Australasian journal of gastroenterology
  • Australasian journal of hospital pharmacy
  • Australasian journal of infectious diseases
  • Australasian journal of musculoskeletal medicine
  • Australasian journal of obstetrics & gynaecology
  • Australasian journal of paediatrics
  • Australasian journal of pain management
  • Australasian journal of psychiatry
  • Australasian journal of respiratory medicine
  • Australasian journal of sexual health

I believe these all to be either nonexistent or fake because:
1a. Although WorldCat lists ISSNs for all titles, all but two include a note saying “ISSN prepublication record”. The two entries which do not carry that note are also the only two titles listed as being held in any library.
1b. Only the “Australasian journal of musculoskeletal medicine” and the admitted fake “Australasian Journal of Bone & Joint Medicine” are listed as being held by any library in WorldCat. Both are listed at the State Library of New South Wales:

Australasian journal of bone & joint medicine.
Chatswood, N.S.W. : Excerpta Medica Communications, 2002- 
v. : ill. ; 30 cm.
State Ref Library
Vol. 1, issue 2 (2002)-v. 4, issue 1 (2005)
Australasian journal of musculoskeletal medicine.
Chatswood, N.S.W. : Excerpta Media Communications, 2002. 
v. : ill. ; 30 cm.
State Ref Library
Vol. 1, issue 1 (2002).

I’ve written to the library to ask for a copy or photograph of either journal.
2. None of the series titles have websites that I can find.
3. None of them are listed in PubMed, Ulrich’s Periodicals Directory, Elsevier’s own Science Direct or Scopus (I’d be obliged if someone with access could check Web of Science). Update: Peter Murray checked, and couldn’t find any of the titles in the WoS “publication name” field. Thanks Peter!
4a. A phrase search in Google Scholar returns hits only for the Australasian journal of psychiatry; all of these are citations, three of which are apparent self-citations to the same article:

Mellsop GW, Menkes DB, El-Badri S. Releasing Psychiatry from the Constraints of Categorical Diagnosis. Australasian Journal of Psychiatry. 2007;15:3-5. doi: 10.1080/10398560601083134

That DOI resolves to an article of the same name and with the same page numbers in Australasian Psychiatry, which is published by Informa Healthcare for The Royal Australian and New Zealand College of Psychiatrists. I’ve written to the communicating author, Dr Mellsop, to ask for a reprint.
Of the remaining three hits, two are citations to other articles in the Australasian Journal of Psychiatry and one I cannot decipher without paying a fee to see the references of an obscure paper. Of the two I can decipher, one resolves to a paper in Australasian Psychiatry from 2003; the same article is available from Informaworld. The other is to an “in press” citation from 2007 (which also appears in 4b below).
4b. The same search on Google returns a number of hits, including the following:

  • from this page:
    M.I. Loh., & Restubog, S.L. (2007). Lecturers’ and Students’ Perceptions of Current Teaching Methods about Schizophrenia. Australasian Journal of Psychiatry, 15, 347-349.
    This does not seem to be related to the Informaworld journal Australasian Psychiatry since vol 15 p 347 is this, and I could only find these two papers by Jennifer Loh on the informaworld site. 
  • from this page:
    Langdon, R. (2003). Theory of mind and psychopathology: autism versus schizophrenia [Abstract]. Australasian Journal of Psychiatry.
    from this page:
    Griffiths, K., Farrer, L., & Christensen, H. (2007). Clickety-Click: the e-trains on track. Australasian Journal of Psychiatry, 15(2), 100-108.
    also from here and here:
    Griffiths, K.; Farrer, L.; and Christensen, H. Clickety-click: The e-trains on track. Australasian Journal of Psychiatry, In press, accepted 10/06.
    This appears to be the same paper in the Informaworld journal, Australasian Psychiatry.
  • from here, here and here:
    Tarantola D (2007) The interface of mental health and human rights in Indigenous populations: triple jeopardy and triple opportunity Australasian Journal of Psychiatry, 15(Suppl):S10-S17
    Again, here’s the same paper in Australasian Psychiatry.
  • from here and here:
    Cornes, A., & Napier, J. (in press). Challenges of mental health interpreting: Therapy has taught us that it’s all our fault Australasian Journal of Psychiatry.
    And the same paper seems to appear in Australasian Psychiatry.

I’ve written to Drs Loh, Langdon, Griffiths, Tarantola and Napier to ask for copies.

Excerpta Medica in action

The Elsevier fake journal scandal is expanding in two directions. First, it’s now “fake journals”, plural. Elsevier has admitted to publishing six of these things:

  • Australasian Journal of General Practice
  • Australasian Journal of Neurology
  • Australasian Journal of Cardiology
  • Australasian Journal of Clinical Pharmacy
  • Australasian Journal of Cardiovascular Medicine
  • Australasian Journal of Bone & Joint Medicine

Only one, Bone & Joint Medicine, is on the list I posted yesterday of Excerpta Medica “Australasian journal of…” titles from WorldCat. That leaves thirteen titles in the same series, none of which are listed in PubMed, Science Direct, Ulrich’s or (thanks to Peter Murray, see comments on that post) Scopus. Jonathan Rochkind has pointed out how to find the rest of their titles in WorldCat; there are around 50 all told.
That’s the tip; I await the rest of the iceberg.
The second direction in which the scandal is expanding is towards ghostwriting: I think probably Laika was the first person to make this connection clear. This is a separate but related issue, and Excerpta Medica appears to be up to their armpits in this sleazy practice as well. There’s quite a large literature on ghostwriting, so here are just a few quotes (mentioning Excerpta Medica) to whet your appetite (if indeed one could be said to have an ‘appetite’ for something so nauseating):
Anna Wilde Mathews, At medical journals, paid writers play big role

When articles are ghostwritten by someone paid by a company, the big question is whether the article gets slanted. That’s what one former free-lance medical writer alleges she was told to do by a company hired by Johnson & Johnson.
Susanna Dodgson, who holds a doctorate in physiology, says she was hired in 2002 by Excerpta Medica, the Elsevier medical-communications firm, to write an article about J&J’s anemia drug Eprex. A J&J unit had sponsored a study measuring whether Eprex patients could do well taking the drug only once a week. The company was facing competition from a rival drug sold by Amgen Inc. that could be given once a week or less.
Dr. Dodgson says she was given an instruction sheet directing her to emphasize the “main message of the study” — that 79.3 percent of people with anemia had done well on a once-a-week Eprex dose. In fact, only 63.2 percent of patients responded well as defined by the original study protocol, according to a report she was provided. That report said the study’s goal “could not be reached.” Both the instruction sheet and the report were viewed by The Wall Street Journal. The higher figure Dr. Dodgson was asked to highlight used a broader definition of success and excluded patients who dropped out of the trial or didn’t adhere to all its rules. The instructions noted that some patients on large doses didn’t seem to do well with the once-weekly administration but warned that this point “has not been discussed with marketing and is not definitive!”
The Eprex study appeared last year in the journal Clinical Nephrology, highlighting the 79.3 percent figure without mentioning the lower one. The article didn’t acknowledge Dr. Dodgson or Excerpta Medica. Dr. Dodgson, who now teaches medical writing at the University of the Sciences in Philadelphia, says she didn’t like the Eprex assignment “but I had to earn a living.”
The listed lead author, Paul Barre of McGill University in Montreal, says Excerpta Medica did “a lot of the scutwork” but he had “complete freedom” to change its drafts. Dr. Barre says he helped design the study and enroll patients in it. In statements, J&J and Excerpta Medica offered similar explanations of the process. J&J says it regularly uses outside firms “to expedite the development of independent, peer-reviewed publications.”

Carl Elliott, Pharma goes to the laundry: public relations and the business of medical education

One of the most ingenious pieces of the Fen-Phen public relations strategy was its ghostwriting scheme. In 1996 Wyeth hired Excerpta Medica Inc, a New Jersey-based medical communications firm, to write ten articles for medical journals promoting obesity treatment. Wyeth paid Excerpta Medica $20,000 per article. In turn, Excerpta Medica paid prominent university researchers $1,000 to $1,500 to edit drafts of their articles and put their names on the published product. Wyeth kept each article under tight control, scrubbing drafts of any material that could damage sales. One draft article included sentences that read: “Individual case reports also suggest a link between dexfenfluramine and primary pulmonary hypertension.” Wyeth had Excerpta delete it. (21)
What made Excerpta Medica such an inspired choice is that it is a branch of the academic publisher, Reed Elsevier Plc., which publishes many of the world’s most prestigious science journals. Excerpta Medica manages two journals itself: Clinical Therapeutics and Current Therapeutic Research. According to court documents, Excerpta Medica planned to submit most of the articles it produced to Elsevier journals. In the actual event, Excerpta managed to publish only two articles before Fen-Phen was withdrawn from the market in 1997. One appeared in Clinical Therapeutics, the other in the American Journal of Medicine (another Elsevier journal). In neither case did the authors of the articles disclose that they were paid by Excerpta Medica. So clean was the laundering operation, in fact, that many of the authors did not even realize that Wyeth was involved. Richard Atkinson of the University of Wisconsin wrote a letter to Excerpta Medica congratulating them on the thoroughness and clarity of their article. “Perhaps I can get you to write all my papers for me!” he wrote. He did have one reservation about the piece he was signing: “My only general comment is that this piece may make dexfenfluramine sound better than it really is.” (22)

Sergio Sismondo, Ghost Management: How Much of the Medical Literature Is Shaped Behind the Scenes by the Pharmaceutical Industry?

Several of the publication planning firms identified are owned by major publishing houses. For example, Excerpta Medica is “an Elsevier business” and writes that its “relationship with Elsevier allows… access to editors and editorial boards who provide professional advice and deep opinion leader networks” [40]. Wolters Kluwer Health draws attention to its publisher Lippincott Williams & Wilkins, with “nearly 275 periodicals and 1,500 books in more than 100 disciplines,” and to Ovid and its other medical information providers, emphasizing the links it can make between its different arms [41]. Vertical integration is attractive in the industry as a whole: at least three of the world’s largest advertising agencies own not only MECCs, but also CROs [contract research organizations] [13].

No bottom to worse at Elsevier?

Like Dorothea, I haven’t said anything about the slimy Merck/Elsevier fake publication deal, because I thought the blogosphere had plenty of coverage. Anyone who reads me would know all about the scandal.
The latest development, though, strikes me as something that should be shouted from every available rooftop: Elsevier simply must answer the questions raised.
Via Dorothea: Jonathan Rochkind has done a little “forensic librarianship” and raised astonishing questions about the entire imprint, Excerpta Medica, which published the fake journal that started all of this.
Go read Jonathan, but the bottom line is this: Excerpta Medica does not provide a straightforward list of its own publications or make clear which are, ahem, “industry-sponsored“.
Jonathan says “WorldCat lists 50 publications by Excerpta Medica Communications”; I just tried a simple author search for that phrase and got only 21 results, including the recently-exposed-as-fake Australasian journal of bone & joint medicine; how many others are fake? How about the other thirteen “Australasian Journal of” titles in the same list:

  • Australasian journal of asthma
  • Australasian journal of bone & joint medicine
  • Australasian journal of dentistry
  • Australasian journal of depression
  • Australasian journal of gastroenterology
  • Australasian journal of hospital pharmacy
  • Australasian journal of infectious diseases
  • Australasian journal of musculoskeletal medicine
  • Australasian journal of obstetrics & gynaecology
  • Australasian journal of paediatrics
  • Australasian journal of pain management
  • Australasian journal of psychiatry
  • Australasian journal of respiratory medicine
  • Australasian journal of sexual health

Why, for one thing, are none of them indexed by Science Direct? The PubMed journal limit field contains only Australasian journals of dermatology, pharmacy and optometry; the latter two seem to be defunct and the first is published by Wiley.
Further obvious questions arising:

  • What exactly were the 11 “publications” mentioned in this case study, and where were they published?

    Excerpta Medica published more than 11 scientific publications, all offering medical education credits, and targeting medical specialties from the clinical pharmacist to the physician specialist and emergency nurse. Over 700,000 of these publications have been sent to medical professionals to build awareness…

  • Someone should take a close look at the publications (and faculty) mentioned in this case study:

    Excerpta Medica summarized the issues and recommendations from these [“faculty-led regional advisory board”] meetings and communicated them in a funneled approach, beginning with broad reach and comprehensive content, to more regionally focused publications.
    Excerpta Medica first created a full issue and subsequent supplement of Clinical Cornerstone™, the company’s proprietary, peer-reviewed, indexed, continuing medical education (CME) journal distributed to 75,000 physicians. As a result, the data gained significant credibility within the larger physician community.
    The final published product from these regional meetings was a series of regional newsletters. The newsletters referenced the indexed Clinical Cornerstone publications and also highlighted the leading regional attendees on the cover to establish credibility and regional buy-in with the recipients. Approximately 2000 copies of each newsletter were sent to physicians in each region.

  • What exactly is the “company-sponsored journal” created in this case study? We’re told that

    The quarterly publication was created to build awareness of the disease [targeted by the client’s product] and prepare the specialist and primary markets for future indications. It was also designed to establish this client as one of the industry’s authorities on cardiovascular disease.

    and that

    The clinical content was complemented with high-quality photographic images, giving each issue a very professional and attractive appearance.
    The publication was launched in December 2004 and continues to run today. Circulation has increased from 10,000 at launch to 17,000 currently and includes such specialties as cardiology, diabetology, nephrology, internal medicine, and general practice.

    but not the name of the journal. Wanna bet it starts with “Australasian journal of…”?

Alternative Connotea bookmarklets for OATP

Peter Suber launched the Open Access Tracking Project on April 16, and you can read a full description of it in this month’s SPARC OA Newsletter.
I encourage anyone interested in contributing to the OATP to read the full description so as to make your contributions maximally useful. Here are the basics:

  • the project runs on Connotea, using shared tags
  • the only official tag right now is oa.new
  • use the tag for developments from the past six months or so
  • user-defined tags are encouraged and should use the same format, oa.foo, where foo can be any relevant subtopic

If you are pressed for time, and we all are, then it may help to have a Connotea bookmarklet with the tag (or oa.unclassified, if the item is older than six months) already filled in. That way you can just hit the bookmarklet, hit “add to my library” and be done. It’s better if you have time to put in further classifying tags and a description, but at least this way the page will be recorded.
I guess the easiest way to do this would be to have three bookmarklets, the regular one and the “two-click” bookmarklets I describe here. If you’re using Firefox, here are the two-click versions; you can install them the same way as the regular one (drag to the toolbar) and, if you like, rename them using the “Organize Bookmarks” dialog box:


This would obviously be better as a one-click than a two-click bookmarklet, but I failed dismally in my attempt to make it so because I don’t actually know anything about JavaScript. I’ve previously suggested to the LazyWeb that someone make a bookmarklet for another project, and nothing came of it; I’m hoping both that this little hack will be useful, and that it will inspire an actual programmer to improve it.
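For the record, the guts of the two-click version are tiny. This is a sketch only: the query parameter names (uri, tags) are my guesses at what Connotea’s add form might accept, not something I have verified against its actual interface:

```javascript
// Connotea's "add a bookmark" page.
var CONNOTEA_ADD = "http://www.connotea.org/add";

// Build an add-this-page URL with a tag pre-filled. NOTE: "uri" and
// "tags" are assumed parameter names, not checked against Connotea.
function taggedAddUrl(pageUrl, tag) {
  return CONNOTEA_ADD +
    "?uri=" + encodeURIComponent(pageUrl) +
    "&tags=" + encodeURIComponent(tag);
}

// Browser one-liner (all on one line in the actual bookmarklet):
// javascript:window.open(taggedAddUrl(location.href, 'oa.unclassified'),
//   'connotea', 'width=550,height=650,scrollbars=yes');
```

One bookmarklet per canned tag gives the “three bookmarklets” setup described above.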

Congratulations to Harvard.

Harvard has been fortunate enough to secure the services of Peter Suber, who has been appointed a Berkman Fellow.
I cannot say it better so I will simply quote Stevan Harnad’s comments accompanying the announcement:

A brilliant choice, and eminently well-deserved. Peter — whose historic contributions to the growth of OA have been spectacularly successful — will continue his invaluable OA work, but this Fellowship will also make it possible for him to begin writing the books on OA and related matters that are welling up in him, and that the world scholarly and scientific research community (as well as the historians of knowledge) are eagerly waiting to read, digest and learn from for years to come.
It is so gratifying to see true merit being rewarded occasionally, as it ought to be (although my guess is that this is just the beginning of the honors to be accorded to this selfless and sapient transformer of Gutenberg scholarship into PostGutenberg scholarship).