Fooling around with numbers, part 2

Following on from this post, and in the spirit of eating my own dogfood1, herewith the first part of my analysis of the U Cali OSC dataset.
The dataset includes some 3137 titles with accompanying information about publisher, list price, ISI impact factor, UC online uses and average annual price increase; these measures are defined here. The spreadsheet and powerpoint files I used to make the figures below are available here: spreadsheet, ppt.
As a first pass, I’ve simply made pairwise comparisons between impact factor, price and online use. There’s no apparent correlation between impact factor and price, for either the full set or a subset defined by IF and price cutoffs designed to remove “extremes”, as shown in the inset figure:
[Figure: list price vs. ISI impact factor (UCOSCpriceIF.JPG)]
One other thing that stands out is the cluster of Elsevier journals in the high-price, low-impact quadrant, and the smaller cluster of NPG’s highest-IF titles at the opposite extreme. Note that n < 3137 because not all titles have impact factors, usage stats, etc. I've included the correlation coefficients mainly because their absence would probably be more distracting than having the (admittedly fairly meaningless) numbers available, at least for readers whose minds work like mine.

Next I asked whether there was any clearer connection between price and online uses aggregated over all UC campuses:

[Figure: list price vs. online uses (UCOSCpriceuse.JPG)]
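In case anyone wants to reproduce these numbers without a spreadsheet, here's a minimal sketch of the sort of calculation behind the correlation coefficients. The rows below are made up for illustration (they are not values from the UC dataset), and the missing-value handling shows why n falls below 3137:

```python
import math

def pearson_r(xs, ys):
    """Plain-Python Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical rows of (list price, impact factor); None marks a title
# with no impact factor, which is what drops n below the full 3137.
rows = [(1200.0, 2.1), (450.0, None), (3100.0, 1.4), (90.0, 31.0)]
paired = [(p, f) for p, f in rows if f is not None]
prices = [p for p, _ in paired]
impact = [f for _, f in paired]
r = pearson_r(prices, impact)
```

The same function works for any of the pairwise comparisons in this post; only the two columns you feed it change.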
Again, not so much. I played about with various cutoffs, and the best I could get was a weak correlation at the low end of both scales (see inset). And again, note Elsevier in the “low value” quadrant, and Nature in a class of its own. Nature is probably the one scientific journal every lay person can name; in terms of brand recognition, it’s the Albert Einstein of journals. Interestingly, not even the other NPG titles come close to Nature itself on this measure, though they do when plotted against IF. I wonder whether that actually reflects a lay readership?
Finally (for the moment) I played the Everest (“because it’s there”) card and plotted use against impact factor:
[Figure: online uses vs. impact factor (UCOSCuseIF.JPG)]
The relationship here is still weak, but noticeably stronger than for the other two comparisons, particularly once we eliminate the Nature outlier (see inset). I’ve seen papers describing 0.4 as “strong correlation”, but I think for most purposes that’s wishful thinking on the part of the authors. I do wish I knew enough about statistics to be able to say definitively whether this correlation is significantly greater than those in the first two figures. (Yes yes, I could look it up. The word you want is “lazy”, OK?)

Even if the difference is significant, and even if we are lenient and describe the correlation between IF and online use as “moderate”, I would argue that it’s a rich-get-richer effect in action rather than any evidence of quality or value. Higher-IF journals have better name recognition, and researchers tend to pull papers out of their “to-read” pile more often if they know the journal, so when it comes time to write up results those are the papers that get cited.

Just for fun, here’s the same graph with some of the most-used journals identified by name:
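For the record, the standard tool for the “is this correlation significantly greater than that one” question is Fisher’s r-to-z test for two independent correlations. A minimal sketch, with made-up r values and sample sizes rather than the real ones from the dataset (and with the caveat that the comparisons here share the same journals, so they aren’t strictly independent and the test is only approximate):

```python
import math

def compare_correlations(r1, n1, r2, n2):
    """Fisher r-to-z test for two independent Pearson correlations.

    Returns the z statistic; |z| > 1.96 is roughly significant at p < 0.05.
    """
    z1 = math.atanh(r1)  # Fisher transform of each correlation
    z2 = math.atanh(r2)
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))  # std. error of the difference
    return (z1 - z2) / se

# Illustrative numbers only, not the dataset's actual values:
z = compare_correlations(0.4, 2000, 0.1, 2000)
```

With sample sizes in the thousands, even modest differences between correlations come out “significant”, which is one more reason not to read too much into any of these r values.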
UCOSCtitles.JPG
Peter Suber has pointed out a couple of other (formal!) studies that have come to similar conclusions to those presented here. There are probably many more; the relevant literature is dauntingly large. There’s even a journal of scientometrics! The FriendFeed discussion of my earlier post has generated some interesting further questions, for instance Bob O’Hara’s observation that a finer-grained analysis would be more useful. I’m not sure I’m up for manually curating the data, though, and I can’t see any other way to achieve what Bob suggests… I might do it for the smaller Elsevier Life Sciences set. For the moment I think I’ll concentrate on slightly different questions regarding IF and price distributions, as in Fig 3 in my last post — tune in next time for more adventures in inept statistical analysis!
————-
1 I’m always on about Open Data and “publish early, publish often” collaborative models like Open Notebook Science, and it occurs to me that the ethos applies to blogging as much as to formal publications. So I’m going to try to post analyses like this in parts, so as to get earlier feedback, and of course I try to make all my data and methods available. Let me know if you think I’m missing any opportunities to practice what I preach.
