Is high-volume publishing threatening research integrity?
KEY TAKEAWAYS
- A recent analysis identified ~20,000 scientific authors publishing implausibly high numbers of articles.
- High-volume publishing in the pursuit of inflated metrics represents a threat to research integrity.

We have reported previously on the rising numbers of highly prolific scientific authors. Dalmeet Singh Chawla recently highlighted this issue in Chemical & Engineering News, discussing findings that ~20,000 scientists from Stanford’s top 2% list publish an “implausibly high” number of papers. Singh Chawla explored the implications of high-volume publishing for research integrity, as well as potential solutions.
Study findings
The study, published in Accountability in Research, examined the publication patterns of ~200,000 researchers across 22 disciplines, drawn from Stanford University’s list of the top 2% of scientists (ranked by citation metrics). It found that:
- around 10% (20,000 scientists) produced an implausibly high volume of publications
- some scientists published hundreds of studies per year, with hundreds or even thousands of new co-authors
- approximately 1,000 were early-career scientists with ≤10 years’ academic experience.
Impact on research integrity
The authors of the analysis, Simone Pilia and Peter Mora, attribute the surprising number of hyperprolific authors to a culture that rewards publication quantity with high metric scores. They suggest that this not only compromises research quality but also leaves some scientists, “particularly the younger ones”, feeling pressured. Pilia and Mora linked the incentive to churn out large quantities of publications with “unethical practices” such as the inclusion of co-authors who have not made adequate contributions to the research. Based on their findings, they warn that normalising high-volume publishing poses a significant threat to the fundamental academic process.
“Normalising high-volume publishing poses a significant threat to the fundamental academic process.”
A divisive solution?
Pilia and Mora propose adjusting metrics for scientists who exceed publication and co-authorship thresholds. However, according to Singh Chawla, information scientist Ludo Waltman fears that such adjustments would make research evaluation too complex and confusing. Waltman proposes instead that research assessment focus less on metrics and more on a wider range of research activities.
The reliability of metrics for research evaluation is an ongoing topic of discussion within the scientific community, and this latest research serves as a reminder for authors to keep research integrity at the heart of their publication decisions.