The journal impact factor (IF) has been the topic of much debate, with many calling for it to be replaced by alternative measures. In a preprint recently posted on arXiv, Manolis Antonoyiannakis questions the use of the IF as a method of ranking journals and reveals just how much this metric can be skewed by a single highly cited paper.
Antonoyiannakis analysed data from 11,639 journals that received an IF in the 2017 Journal Citation Reports, calculating how much the top-cited paper affected the citation average. IF volatility (the gain or loss of IF due to a single paper) was compared to the journal size (the number of citable papers published in 2015–16).
Strikingly, IFs were far more volatile for smaller journals than for larger ones; the effect was particularly pronounced in journals publishing fewer than 250 papers per year.
Antonoyiannakis notes that, compared with large journals, small publications have much more to gain, in terms of IF, from a highly cited paper – and stand to lose far more from papers that attract few citations. He speculates that this may be a strong incentive for small journals with high IFs to remain small.
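The size effect follows directly from how an average behaves: one outlier moves the mean of 20 numbers far more than the mean of 2,000. A minimal sketch, using a simplified citation average as a stand-in for the IF and purely illustrative citation counts (not data from the preprint):

```python
def citation_average(citations):
    """Mean citations per paper -- a simplified stand-in for the IF."""
    return sum(citations) / len(citations)

def top_paper_volatility(citations):
    """How much the citation average drops if the top-cited paper is removed."""
    rest = sorted(citations)[:-1]  # drop the single most-cited paper
    return citation_average(citations) - citation_average(rest)

# Hypothetical journals: one small (20 papers), one large (2,000 papers),
# each with a single paper cited 500 times and every other paper cited twice.
small = [2] * 19 + [500]
large = [2] * 1999 + [500]

print(top_paper_volatility(small))  # ~24.9: the one paper dominates the average
print(top_paper_volatility(large))  # ~0.25: the same paper barely registers
```

The identical outlier shifts the small journal's average by roughly two orders of magnitude more than the large journal's, which is the volatility pattern the preprint quantifies across thousands of real journals.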
Other key findings were that:
- For many journals, the single most cited paper significantly boosts IF – for 381 journals, the IF increased by at least 0.5, and in the most extreme case, the relative change in IF was 474%.
- For small journals, even papers with low or moderate citation counts can substantially boost the IF.
Overall, Antonoyiannakis concludes that IF volatility is not a statistical anomaly but a widespread issue affecting many journals each year. The author considers it critical that alternative methods, based on robust statistics, be used to compare journals and to improve our assessment of research.