The journal impact factor has long dominated the assessment of scholarly publishing. Conceived over four decades ago and based on citation indexing, the metric has achieved a breadth of adoption, a longevity, and a purported misuse that its creators are unlikely to have anticipated. Yet the winds seem set to change. In a recent article in Nature, Paul Wouters and his co-authors insist that scholarly publishing needs a “broader, more-transparent suite” of journal indicators, and explore what this might look like.
As the authors highlight, to determine how best to judge a journal, it is paramount to consider the question, “What’s a journal for?”. They identify journals’ key functions as registering, curating, evaluating, disseminating and archiving research, and note that the impact factor may capture only limited aspects of these. While all journal functions should be evaluated, the authors warn that “having more indicators does not equate to having better ones”. They propose that the next generation of journal indicators should be designed and implemented responsibly to meet the following criteria:
- Justified: Indicators should have a minor and explicitly defined role in research assessment.
- Contextualised: Both summary statistics and their underlying distributions should be reported, with consideration of differences between disciplines.
- Informed: There should be indicator education, facilitated by experts.
- Responsible: The potential influence of indicators on researcher and stakeholder behaviour should be considered.
So, who should govern the next generation of journal indicators? The authors suggest the assembly of an inclusive governing organisation that would make recommendations on indicator use, educate stakeholders on good practice, and provide guidance on open access publishing and data sharing. The launch of such a governing body is planned for a 2020 workshop – all interested stakeholders are invited to contact the authors to join the initiative.