Wednesday, 11 April 2012

Short-cited V: where to publish?

The second side of bibliometrics is the one I have not considered so much: how to use citation data to help you decide where to publish. This was more important, I suspect, when Impact Factor was the thing that everyone cared about. Happily, this is no longer the case - at least, not in the UK - but Impact Factor is still undeniably important and is used, in a general sense, to establish whether you, as an author, routinely publish in top-ranking journals.

Call me old-fashioned, but I prefer to choose my journals first on the basis of their research scope and target audience. Second, the format of the articles can be important: sometimes a long article is easier to write and may have more impact than the short, concise papers that some of the highest-impact journals favour. Nevertheless, when the scope and style of the journals appear similar, citation metrics can be an indicator of readership and influence; I know my own opinion of journals is heavily influenced by the individual papers I have read from them, as well as biased by my genetics background, and so an impartial view can be informative.

What I did not realise was the range of citation metrics and resources available for comparing journals. The main two sites (as listed on the bibliometrics course I attended) seem to be Scimago and JCR (Web of Knowledge), which use Scopus and ISI citation data, respectively. To compare journals, however, you need to pick a category, and here is where I encountered my first problem: in Scimago, I could not find any bioinformatics or computational biology category. JCR does at least have a mathematical and computational biology category, but the overlap with my kind of bioinformatics (i.e. molecular evolution) is not that great, and so the problem remains: if your field does not align nicely with one of their selected categories, it is nigh-on impossible to get a general overview or ranking of your options (or papers).

You can, of course, still do pairwise comparisons of specific journals. Then the question becomes, which metric? The traditional metric is "Impact Factor": the mean number of citations received in a given year by a journal's papers from the previous two years. As explored in my last post on bibliometrics, the suitability of this two-year window is obviously very field-dependent and also genre-dependent. Methods articles, for example, are likely to have a longer lag before people start publishing papers using them than discovery articles. More worrying, though, is the way these metrics can be blatantly abused. Self-citation (such as editorials citing papers in the same issue) has been used for years to boost Impact Factor a little, but it seems that this, at least, is something that JCR and the like monitor and respond to. More sinister is something that I only discovered today: the citation cartel. I recommend reading the linked Scholarly Kitchen article but, in a nutshell, this is when a group of editors boost their journal's Impact Factor by writing a review in another journal that almost exclusively cites papers from their own journal published within the last two years! How they can write a coherent article this way without plagiarising heavily, I am not sure, as they must have to ignore masses of relevant literature, but the examples in the article are pretty horrific.
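If you like to see the arithmetic, here is a minimal sketch of the two-year calculation in Python. The journal, years and citation counts are all invented for illustration, and the real JCR figure depends on exactly which articles get counted as "citable items".

```python
# Toy illustration of a two-year Impact Factor calculation.
# All numbers below are invented for the example; the official JCR
# figure also depends on which articles count as "citable items".

# Citations received in 2011 to this hypothetical journal's papers
# published in 2009 and 2010.
citations_to_2009_papers = 450
citations_to_2010_papers = 380

# Number of citable items the journal published in those two years.
items_2009 = 120
items_2010 = 130

impact_factor_2011 = (citations_to_2009_papers + citations_to_2010_papers) / (
    items_2009 + items_2010
)

print(f"2011 Impact Factor (toy example): {impact_factor_2011:.2f}")
# -> 2011 Impact Factor (toy example): 3.32
```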

If you are suspicious of the standard two-year Impact Factor because of these abuses, you can always opt for the 5-year Impact Factor, which is probably a little less prone to abuse and, arguably, a better indicator of impact. (I know that I still make use of a lot of papers dating back to 2007!) If you are not sure which window is more appropriate, you can check the cited half-life (the median age of the journal's articles when they are cited) and the citing half-life (the median age of the articles that the journal cites). There are also slightly more esoteric metrics out there, including the "Article Influence Score", which is Impact Factor weighted (how?!) by the prestige/ranking of the citing journals, and the rather mysterious "Eigenfactor" score, which was described to us as the amount of time spent in a given journal when taking a random walk through article space. Hmmm. At least these ones are harder to fake, I guess.
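For what it's worth, that random-walk description is essentially the same idea as PageRank: imagine a reader hopping from journal to journal by following randomly chosen citations, and score each journal by the fraction of time the reader spends there. The sketch below runs a toy power iteration on an invented three-journal citation matrix; it illustrates the idea only and is not the actual Eigenfactor calculation, which (among other things) uses a five-year window, excludes journal self-citations and weights by article counts.

```python
# Toy random-walk (PageRank-style) journal score on an invented
# three-journal citation network. An illustration of the idea behind
# Eigenfactor only, not the real algorithm.

journals = ["Journal A", "Journal B", "Journal C"]

# cites[i][j] = citations FROM journal i TO journal j
# (numbers invented; self-citations zeroed out).
cites = [
    [0, 30, 10],
    [20, 0, 40],
    [5, 25, 0],
]

# Row-stochastic transition matrix: transition[i][j] is the probability
# of walking from journal i to journal j via a randomly chosen citation.
transition = [[c / sum(row) for c in row] for row in cites]

damping = 0.85       # chance of following a citation vs. jumping to a random journal
score = [1 / 3] * 3  # start the walker evenly spread across journals

# Power iteration: let the walker's distribution settle to a steady state.
for _ in range(100):
    score = [
        (1 - damping) / 3
        + damping * sum(score[i] * transition[i][j] for i in range(3))
        for j in range(3)
    ]

for name, s in zip(journals, score):
    print(f"{name}: {s:.3f}")
```

The scores sum to one and can be read as the share of a reader's time spent in each journal, which is roughly how the Eigenfactor score was pitched to us.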

I think it's quite interesting to look at these things, partly because I am just a geek who likes data and graphs and things, but the general conclusion of Short-cited IV still holds: don't use citation metrics alone to make decisions. Any decisions. Ever!
