Sunday, July 06, 2008

Academic shenanigans - Impact factors and such

Conflict of interest declaration: I am an editor of two academic journals published by a commercial publisher.

The 'standing' of academic journals in the scientific community is these days evaluated by two sets of criteria.

The old-fashioned one: A journal's relative importance is measured by peer esteem, ie how many really really famous people (let's call them peers) publish in a given journal, how long the journal takes to review a manuscript (inefficiency of the review process is here taken as a measure of the journal's importance), how many manuscripts are rejected (if you published only one manuscript per year, you'd probably be the most competitive journal to get into, ergo the best) and so on and so forth. Very much like the ongoing evaluation of graduate philosophy programs on a US-based website that relies on peer gossip (ie someone chooses someone else as a peer, enough people play along, and voila, you have a 'system' of evaluation). Most, if not all, of this old-fashioned stuff really is quasi-religious in nature and can safely be discarded.

The supposedly scientific one: Well, Thomson Scientific has more or less cornered the market with its ISI. What they do, roughly, is measure a journal's impact by how frequently its recent articles are cited: the citations the journal receives in a given year to articles it published in the preceding two years, divided by the number of citable items it published in those two years. This, of course, equates importance and quality with citations - ie a quantitative measure. Disciplines like medicine and law are well served by this, because it's part and parcel of those disciplines' academic papers to reference meticulously. It also helps, of course, that many more people work in such disciplines than in, say, theology, so there are more publications, and more citations, going around. The result is that such journals tend to rank much higher in terms of impact than even the best theology journal. It's also easy to manipulate this impact factor game, simply by publishing content that is likely to be sufficiently controversial to generate lots and lots of citations. Someone pointed out, rightly so, that the paper published by the fraudulent South Korean cloning researcher, which has since been retracted, helped the journal's impact factor, because it gets cited by everyone as an example of scientific misconduct.
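To make the arithmetic concrete, here is a minimal sketch of that two-year calculation in Python. The journal and all the figures are entirely made up, and the definition follows the standard description of Thomson's method (citations received in year Y to items published in years Y-1 and Y-2, divided by the citable items published in those two years):

    # Minimal sketch of the two-year impact factor calculation.
    # All figures below are made up for a hypothetical journal.

    def impact_factor(citations_to_recent_items, citable_items_published):
        # Impact factor for year Y: citations received in year Y to items
        # published in years Y-1 and Y-2, divided by the number of citable
        # items published in those two years.
        return citations_to_recent_items / citable_items_published

    # Hypothetical journal: 40 citable items in 2006 and 50 in 2007;
    # in 2008 those items were cited 120 and 150 times respectively.
    citations_in_2008 = 120 + 150   # citations in 2008 to 2006-07 items
    items_2006_07 = 40 + 50         # citable items published in 2006-07

    print(impact_factor(citations_in_2008, items_2006_07))  # 270 / 90 = 3.0

Trivial as this ratio is, the point of the episode described next is that even this simple division could not be reproduced from Thomson's own data.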

None of this is really newsworthy, however. What is newsworthy is this: journal editors at the research-focused Rockefeller University in New York City bought the data sets covering their journals from Thomson in order to replicate Thomson's findings (ie their journals' impact factors). They could not reproduce Thomson's results, even using Thomson's own method. Worse, on request Thomson was unable to verify its own results. This is of serious concern, because many academics, myself included, despite misgivings about the counting game to which Thomson reduces academic excellence, thought that it was the best there is. Well, it turns out that it is a very unreliable best there is.

Perhaps it is time to junk the ISI and go along with the Rockefeller scientists' suggestion that publishers 'make their citation data available in a publicly accessible database, and thus free this important information from Thomson Scientific's (and other companies') proprietary stranglehold.' This seems a sensible proposition. Surely it cannot be permitted to continue that academics' careers depend on proprietary commercial data that cannot be independently verified, and that - as the Rockefeller people have shown - cannot even be verified by the company itself.
