Traditionally, research misconduct has fallen largely into three categories: fabrication, falsification, and plagiarism. But in the digital age of scientific publishing, and with the increasing use of metrics, new forms of scientific manipulation are emerging that do not alter the research within an article but do enhance its impact: so-called ‘post-production misconduct’.
As discussed in a recent essay by Professor Mario Biagioli in the Los Angeles Review of Books, the use of quantitative metrics to measure undefined concepts, such as the ‘impact’ of a paper, has led to individuals gaming the system to their advantage. Professor Biagioli suggests that examples of such approaches may include:
- citation rings, where colleagues agree to extensively cite each other’s articles, regardless of relevance
- coercive citations, where peer reviewers and editors ‘encourage’ authors to cite the reviewers’ own research in order to gain a good review
- inventing co-authors affiliated with prestigious universities to facilitate publication
- buying a place on an author byline of an article submitted for publication by a writing company
- more radically, hacking journal databases and adding one’s name to the byline of an accepted article.
Professor Biagioli describes how these practices can inflate an academic’s citation metrics, which can lead to improved career prospects or financial bonuses. In turn, academics with high citation counts feed into other metrics used to assess the ‘excellence’ of universities which, Professor Biagioli suggests, are themselves not immune to practices that manipulate the system to their advantage.
The extent of citation manipulation (or ‘citation hacking’), whether through self-citation, citation rings, or coercive citation, was the subject of another article, available as a preprint on bioRxiv and summarised in a Nature news article by Richard Van Noorden. The research, carried out by Jonathan D. Wren and Constantin Georgescu, used an algorithm to analyse the PubMed database for unusual citing patterns.
Their findings suggested that around 16% of authors may have engaged in some kind of reference list manipulation.
Given their results, the authors believe that introducing a system to detect and prevent citation hacking may be warranted.
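The preprint’s actual method is considerably more sophisticated, but the underlying idea of flagging statistical outliers in citing behaviour can be sketched simply. The example below is a hypothetical illustration, not the authors’ algorithm: it uses invented self-citation rates and a plain z-score test to flag authors whose rate is unusually high relative to the population.

```python
import statistics

# Hypothetical data for illustration only: for each author, the fraction
# of citations to their work that come from their own papers.
self_citation_rates = {
    "author_a": 0.05,
    "author_b": 0.07,
    "author_c": 0.06,
    "author_d": 0.55,  # outlier: most citations are self-citations
    "author_e": 0.04,
}

def flag_outliers(rates, z_threshold=1.5):
    """Flag authors whose self-citation rate is unusually high
    relative to the population (simple z-score test)."""
    values = list(rates.values())
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [
        author for author, rate in rates.items()
        if (rate - mean) / stdev > z_threshold
    ]

print(flag_outliers(self_citation_rates))  # → ['author_d']
```

A real detection system would need to account for field-specific citation norms, co-author networks (to catch citation rings rather than just self-citation), and legitimate reasons for high self-citation, which is why a naive threshold like this would produce many false positives in practice.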
Professor Biagioli highlights that the difference between this type of misconduct and more traditional methods of scientific manipulation is that it is ongoing, continuing long after the research has been published — impact accumulates as citations increase over time. As long as scientists are rewarded on the basis of metrics such as citation counts, there will always be an incentive for citation hacking: the Nature article concludes that, ultimately, it is this system that will need to change.
With thanks to our sponsor, Aspire Scientific Ltd