
Tuesday, 21 February 2017

ARGH! How to give your journal a bad reputation

Received on 21 February 2017:

Dear Dr. Richard J Edwards,

Good Morning…..!

We are in shortfall of one article for successful release of Volume 3, Issue 2. Is it possible for you to support us with your transcript for this issue before 28th February? If this is a short notice please do send 2 page opinion or mini review, we hope 2 page article isn’t time taken for eminent like you.

We are confident that you are always will be there to support us.

Await your response.

Sincerely,
Brittney Reeves
Advanced Research in Gastroenterology & Hepatology (ARGH)

Begging for a two-page article in a week from someone who doesn’t even work in the field… ARGH indeed!

Monday, 29 February 2016

Thank YOU, PLOS ONE!

There are many flaws with the peer review system but it remains the best system we have for ensuring a certain degree of quality control prior to publication. One of the ways that the system could be improved is better recognition - and therefore motivation - for reviewers.

Ideally, there would be some form of payment, but I find it hard to see this happening any time soon. (It is difficult enough to get funds to publish papers - getting funds to get papers reviewed when they might well end up getting rejected is going to be way harder.)

The next best thing is some kind of reward or recognition. Some journals give discounted publication fees to reviewers, which is a great idea. Another great idea has just been put into action by PLOS ONE*: public recognition for reviewers:

On behalf of PLOS and the PLOS ONE editorial team, I would like to thank you for participating in the peer review process this past year at PLOS ONE.

We know there are many claims on your time and expertise and we very much appreciate your valuable input in 2015. With your help, we have continued to publish an influential, lively and highly accessed Open Access journal. Simply put, we could not do it without you and the thousands of other volunteers for PLOS ONE and the other PLOS journals who graciously contributed time reviewing manuscripts.

A public “Thank You” to our 2015 reviewers – including you – was published earlier this week.

(2016) PLOS ONE 2015 Reviewer Thank You. PLoS ONE 11(2): e0150341. doi:10.1371/journal.pone.0150341

Your name is listed in the Supporting Information file associated with the article. I hope that you will be able to use this letter, along with the article citation, to claim the credit and recognition you deserve within your institution for supporting PLOS ONE and Open Access publishing.

The article itself is short but sweet:

PLOS and the PLOS ONE editorial team would like to express our gratitude to all those individuals who participated in the peer review process of submissions to PLOS ONE over this past year. During 2015 PLOS ONE published over 28,000 research articles. This would not have been possible without the contribution of more than 76,000 reviewers from around the world and a wide range of disciplines.

The names of our 2015 PLOS ONE reviewers are listed in S1–S5 Reviewer Lists. Thank you to all our reviewers for generously sharing your time, insight and expertise with PLOS ONE authors in the evaluation of their work. Your efforts are a key reason for PLOS ONE’s success as an innovative and influential publication.

It’s nice to be appreciated. One more reason to be a fan of PLOS ONE. (Which I am, despite those who look down their noses at the journal because of its “scientifically rigorous research, regardless of novelty” policy.)

*The other PLOS journals did it too, but I did not review anything for them this year.

Monday, 16 April 2012

Short-cited VII: does Open Access publishing encourage bad science?

The argument about Open Access publishing is one that is going to run for some time, I am sure. (See yesterday's piece in The Economist, for example.) One spectre that raises its ugly head from time to time, though, is the notion that Open Access is bad because it encourages bad science. In essence, the argument goes: Open Access is cheap for the publishers, therefore they are willing to publish anything. In support of this, I have seen people cite the high acceptance rates at PLoS and BioMedCentral of (apparently) 50-70% as an indicator that they will publish pretty much anything. I've even seen this used in favour of removing pre-publication peer-review altogether in favour of post-publication peer-review, as in the case of WebmedCentral - the implication being that, in the Open Access world, pre-publication peer-review is useless.

I think this is all rather unfair on Open Access journals, especially the likes of PLoS and BMC. These journals have simply done away with the need for the science to be deemed "interesting", as long as the science is sound. (And even then, only for some of their titles.) This is a far cry from publishing anything and peer-review being worthless.

The peer-review process involves highlighting things that need to be changed to meet the "scientific soundness" criterion. Most authors then go ahead and make the required changes, usually improving the paper as a result. I would argue that this is the single biggest utility of peer-review; as an author, I want to go through this process myself, even if it is sometimes irritating. The fact that PLoS and BMC publish 50-70% of papers just indicates that scientists are capable of producing scientifically sound work (following revisions) 50-70% of the time. The higher rejection rate at Science and Nature is more to do with what science is perceived as being of general interest, rather than the soundness or quality of the work. (I also suspect the acceptance rate for the flagship PLoS and BMC journals that do stipulate a need for impact is considerably lower than across the whole series.)

The peer-review process and the quality of reviewers are not always perfect, but they are still better than none at all. I am very reluctant to ever publish in a journal that does not have pre-publication peer review, and it is not just about the journal's "Impact Factor": the quality of the papers is going to be lower without the opportunity for revisions in the face of peer review, and without at least the threat of being in that 30-50% that aren't deemed scientifically sound enough to publish.

The biggest problem in the modern scientific literature is the pressure on scientists to publish too soon and too often, not the presence of journals that are willing to publish boring science. Until we get rewarded for the quality of our science and not the quantity or impact of our papers (first is not always best), I cannot see this changing, sadly.

Thursday, 12 April 2012

Short-cited VI: the scientific skullduggery of the citation cartel

In yesterday's post, I briefly mentioned and linked to an interesting blog article on "The Emergence of a Citation Cartel", in which editors from one journal are exposed writing review articles in a different journal that almost exclusively cite their journal's papers from the last two years, thus boosting their Impact Factor.

The most striking case cited was a review article in Medical Science Monitor by Eve, Fillmore, Borlongan and Sanberg: "Stem cells have the potential to rejuvenate regenerative medicine research". This review cites 495 papers (at an average of around 9 words per citation!), of which 445 are from "Cell Transplantation - The Regenerative Medicine Journal", published between 2008 and 2009.

At the time, I wondered:
How they can write a coherent article by doing this without plagiarising heavily, I am not sure, as they must have to ignore masses of relevant literature...
The answer turns out to be depressingly simple: despite the rather intriguing title of the review, it is nothing more than literally "an analysis of the articles published in the journal Cell Transplantation - The Regenerative Medicine Journal between 2008 and 2009" under the pretext that this "reveals the topics and categories that are on the cutting edge of regenerative medicine research". If you want to read the whole article, you can download it for personal use here. It's a bit boring, though, to be honest. You can actually get the idea of what they did from the abstract:
The increasing number of publications featuring the use of stem cells in regenerative processes supports the idea that they are revolutionizing regenerative medicine research. In an analysis of the articles published in the journal Cell Transplantation - The Regenerative Medicine Journal between 2008 and 2009, which reveals the topics and categories that are on the cutting edge of regenerative medicine research, stem cells are becoming increasingly relevant as the "runner-up" category to "neuroscience" related articles. The high volume of stem cell research casts a bright light on the hope for stem cells and their role in regenerative medicine as a number of reports deal with research using stem cells entering, or seeking approval for, clinical trials. The "methods and new technologies" and "tissue engineering" sections were almost equally as popular, and in part, reflect attempts to maximize the potential of stem cells and other treatments for the repair of damaged tissue. Transplantation studies were again more popular than non-transplantation, and the contribution of stem cell-related transplants was greater than other types of transplants. The non-transplantation articles were predominantly related to new methods for the preparation, isolation and manipulation of materials for transplant by specific culture media, gene therapy, medicines, dietary supplements, and co-culturing with other cells and further elucidation of disease mechanisms. A sizeable proportion of the transplantation articles reported on how previously new methods may have aided the ability of the cells or tissue to exert beneficial effects following transplantation.
Table 1 of the paper, "Characterization of publications in Cell Transplantation from 2008 to 2009", tabulates all 453 papers from this period. (This actually raises the question of why only 445 seem to be in the reference list, but I am certainly not going to try and find out which eight are missing!) As one can imagine, this is hardly gripping reading and I would seriously question whether two years of one journal can give any unbiased insight into "the topics and categories that are on the cutting edge of regenerative medicine research"; it therefore comes as no surprise that (according to Google Scholar) this paper has only been cited once to date.

So, despite first appearances, this does not seem to be a case of scientific malpractice. It's hard to see it as anything other than scientific skullduggery (and drudgery), however. The pretext for publication is really pretty thin and, I would argue, if the intent was not at least in part to hide the activity then the choice of sources would feature in the title, not just the abstract. As highlighted in the Scholarly Kitchen article, three out of four authors are editorial board members of Cell Transplantation and, if nothing else, this surely constitutes a "conflict of interests". (None were listed.) Indeed, parts of the paper read like an advert for the journal. Perhaps there is a place for such an article, but surely it is as an editorial in the journal itself? The Scholarly Kitchen post covers more activities of a similar ilk - the authors seem to have made this review a regular activity.

I'm not generally one to point fingers and, to be fair to Eve et al., it's a competitive world; it could be argued that they are guilty of nothing other than "playing the game". (And playing it well!) That said, if Impact Factor and other metrics are going to be used to compare journals and select where to publish, surely this kind of thing must be stopped?

Wednesday, 11 April 2012

Short-cited V: where to publish?

The second side of bibliometrics is the one I have not considered so much: how to use citation data to help you decide where to publish. This was more important, I suspect, when Impact Factor was the thing that everyone cared about. Happily this is no longer the case - at least, not in the UK - but it is still undeniably important and used in a general sense to establish whether you, as an author, are routinely publishing in top ranking journals.

Call me old-fashioned but I prefer to choose my journals on the basis of their research scope and target audience. Second, the format of the articles can be important - sometimes, a long article is easier to write and might have higher impact than the short, concise papers that some of the highest-impact journals favour. Nevertheless, when the scope and style of the journals appear similar, citation metrics can be an indicator of readership and influence; I know my own opinion of journals is heavily influenced by individual papers I have read from those journals, as well as being biased by my genetics background, and so an impartial view can be informative.

What I did not realise is the range of citation metrics and resources available for comparing journals. The main two sites (as listed on the bibliometrics course I attended) seem to be Scimago and JCR (Web of Knowledge), which use Scopus and ISI citation data, respectively. To compare journals, however, you need to pick a category and here is where I encountered my first problem: in Scimago, I could not find any bioinformatics or computational biology category. JCR does, at least, have a mathematical and computational biology category but the overlap with my kind of bioinformatics (i.e. molecular evolution) is not that great and so the problem still remains: if your field does not nicely align with one of their selected categories, it is nigh-on impossible to get a general overview or ranking of your options (or papers).

You can, of course, still do pairwise comparisons of specific journals. Then the question becomes: which metric? The traditional metric is "Impact Factor", which is the mean number of times a paper is cited in the two-year period following its publication. As explored in my last post on bibliometrics, the suitability of this period is obviously very field-dependent and also genre-dependent. Methods articles, for example, are likely to have a greater lag before people start publishing papers using them than discovery articles. More worrying, though, is the way these metrics can be blatantly abused. Self-citation (such as editorials citing papers in that issue) has been used for years to boost Impact Factor a little but it seems that this at least is something that JCR etc. monitor and respond to. More sinister is something that I only discovered today: the citation cartel. I recommend reading the linked Scholarly Kitchen article but, in a nutshell, this is when a group of editors boost their journal's Impact Factor by writing a review in another journal that almost exclusively cites papers from their own journal that were published within the last two years! How they can write a coherent article by doing this without plagiarising heavily, I am not sure, as they must have to ignore masses of relevant literature, but the examples in the article are pretty horrific.
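(For concreteness - and assuming I have the standard JCR-style calculation right - the two-year figure for a given year works out as something like

\[
\mathrm{IF}_{2012} \approx \frac{\text{citations received in 2012 by articles published in 2010 and 2011}}{\text{number of citable articles published in 2010 and 2011}}
\]

so it is really a property of the journal and year, not of any individual paper.)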

If you are suspicious of the standard two-year Impact Factor due to these abuses, you can always opt for a 5-year Impact Factor, which is probably a little less prone to abuse and, arguably, a better indicator of impact. (I know that I still make use of a lot of papers dating back to 2007!) If you are not sure which duration is more appropriate, you can check the cited half-life (the median age of the journal's own articles when they are cited) and the citing half-life (the median age of the articles that the journal itself cites). There are also slightly more esoteric metrics out there, including the "Article Influence Score", which is Impact Factor weighted (how?!) by the journal prestige/ranking of citing articles, and the rather mysterious "Eigenfactor" score, which was described to us as the amount of time spent in a given journal when taking a random walk through article space. Hmmm. At least these ones are harder to fake, I guess.
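For anyone who, like me, finds these definitions easier to grasp as arithmetic, here is a minimal sketch in Python with entirely made-up numbers (the citation counts, article counts and ages below are hypothetical, purely for illustration - this is not how JCR actually computes anything behind the scenes):

```python
from statistics import median

# Hypothetical, made-up numbers purely for illustration.
# citations_received[y]: citations received in 2012 by this journal's articles published in year y
citations_received = {2007: 250, 2008: 290, 2009: 380, 2010: 410, 2011: 320}
# articles_published[y]: citable articles the journal published in year y
articles_published = {2007: 145, 2008: 140, 2009: 155, 2010: 160, 2011: 150}

def impact_factor(jcr_year, window=2):
    """Citations in jcr_year to articles from the preceding `window` years,
    divided by the number of articles published in those years."""
    years = range(jcr_year - window, jcr_year)
    cites = sum(citations_received[y] for y in years)
    papers = sum(articles_published[y] for y in years)
    return cites / papers

# Ages (in years) of this journal's articles at the point they were cited in 2012:
ages_of_cited_articles = [1, 1, 2, 2, 3, 3, 4, 5, 6, 8, 10, 12]

print(f"2-year IF: {impact_factor(2012, 2):.2f}")   # uses 2010-2011
print(f"5-year IF: {impact_factor(2012, 5):.2f}")   # uses 2007-2011
print(f"Cited half-life: {median(ages_of_cited_articles)} years")
```

Swapping the window from two to five years is all that separates the two Impact Factor flavours; the cited half-life is just a median over ages, which is perhaps why it feels harder to game.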

I think it's quite interesting to look at these things, partly because I am just a geek who likes data and graphs and things, but I think the general conclusion of short-cited IV still holds: don't use citation metrics alone to make decisions. Any decisions. Ever!

Saturday, 3 December 2011

Hoorah for NAR!

I have a bit of a soft-spot for Nucleic Acids Research. This is partly because one of my colleagues is one of the Senior Executive Editors, and the Editorial Manager is at the desk next to mine. It is also partly because they have been kind enough to publish five of my papers, which is more than any other journal. This week, however, it is mainly because I have just received my "thank you for reviewing" present of The Oxford Book of Modern Science Writing from Oxford University Press, an anthology of extracts from other scientists that Richard Dawkins has chosen, along with a brief introductory spiel to each one by the man himself.

Reviewing for journals is typically a bit of a thankless task and so I really like that NAR reward their reviewers with a small gift voucher for their parent publisher. It's the little things in life that often make the difference, so well done NAR!

As for the book itself... I have only read one extract so far and I enjoyed it. I'll review the book properly once I have read a bit more. Given that, until 2008, Dawkins held the Simonyi Professorship for the Public Understanding of Science at Oxford, I expect that I am in for a treat.