Journal impact factors (JIFs) are a topic often found at the centre of debate within the scientific community. Although the JIF was designed to measure the quality of a journal as a whole, many believe that researchers, funders and employers now inappropriately use it to assess the quality of individual papers or authors. This misuse has led many big players in the science publishing world to call for the measure's removal or replacement, as recently reported in a Nature news article.
A journal's JIF is calculated as the average number of citations received in the current year by the articles it published in the previous two years. Although the concept is sound, evidence in a recent paper authored by senior employees at a number of science publishers indicates that the JIF is heavily influenced by a small number of highly cited papers.
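The arithmetic behind this skew is easy to see. The sketch below uses entirely hypothetical citation counts (not data from the paper) to show how a single highly cited paper can pull the mean, and hence the JIF, well above what a typical paper in the journal receives:

```python
# Illustrative sketch with made-up numbers: the JIF is the mean number of
# citations in the current year to items published in the previous two
# years, so one outlier paper can dominate it.

def impact_factor(citations, n_citable_items):
    """Citations received this year to the previous two years' articles,
    divided by the number of citable items published in those years."""
    return sum(citations) / n_citable_items

# Ten hypothetical papers published over the two-year window;
# one is cited far more than the rest.
citations = [0, 1, 1, 2, 2, 3, 3, 4, 5, 100]

jif = impact_factor(citations, len(citations))
share_below = sum(c < jif for c in citations) / len(citations)

print(f"JIF: {jif:.1f}")                    # 12.1
print(f"Papers cited less than the JIF: {share_below:.0%}")  # 90%
```

In this toy example nine of the ten papers receive fewer citations than the journal's "average", which is the same pattern, in exaggerated form, that the authors report for real journals.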
The paper, posted to the preprint server bioRxiv, looked at the distribution of citations for articles published in 11 journals in 2013–14 and compared this with each journal’s 2015 impact factor. The authors found that 65–76% of papers received fewer citations than the impact factor of their journal. They propose that journals adopt citation distributions as a more appropriate representation of a journal’s status, and they provide instructions on how to do this. Others argue that the JIF still has value and that removing it completely would be a mistake. The debate continues…