- Use of hyperbolic adjectives is increasing in academic publishing and impact submissions, driven by factors like competition for funding.
- This trend raises questions about how reliably the real-world impact of academic research can be judged, and calls for a re-evaluation of assessment methods.
The use of embellished or sensational language to exaggerate the importance of research findings has increased in recent decades. In an article on the LSE Impact Blog, Professor Ken Hyland describes the extent of this issue in academic publishing and impact case studies submitted for UK government funding.
The Research Excellence Framework (REF), introduced in 2014, evaluates the quality and real-world impact of academic research to facilitate equitable funding distribution to UK universities. Prof. Hyland, along with Prof. Kevin Jiang, analysed 800 impact case studies in 8 disciplines that had been submitted to the 2014 REF for evaluation. Their findings highlighted that:
- hyperbole was more common in impact case studies than in research articles (2.11 vs 1.55 ‘hype’ terms per 100 words)
- chemistry, physics, and computer science – where the real-world applications of research are often less immediately apparent – had the highest frequency of hyperbole
- the most prevalent ‘hype’ terms emphasised certainty, accounting for nearly half of all hype instances across all disciplines
- STEM researchers tended to highlight the novelty of their work through terms like ‘first’, ‘novel’, and ‘unique’, while social scientists emphasised the contribution of their research with words like ‘essential’, ‘useful’, ‘critical’, and ‘influential’.
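The figures above rest on a simple frequency metric: occurrences of a predefined hype lexicon per 100 words of text. A minimal sketch of how such a rate could be computed is below; the term list is illustrative, assembled from words quoted in this article, and is not the authors’ actual lexicon.

```python
# Illustrative hype-term list drawn from words quoted in the article;
# the study's real lexicon is larger and discipline-specific.
HYPE_TERMS = {"novel", "unique", "first", "essential", "useful",
              "critical", "influential", "unprecedented", "innovative"}

def hype_rate(text: str) -> float:
    """Return the number of hype terms per 100 words of text."""
    # Lowercase and strip surrounding punctuation from each token.
    words = [w.strip(".,;:!?()'\"").lower() for w in text.split()]
    words = [w for w in words if w]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in HYPE_TERMS)
    return 100 * hits / len(words)

sample = ("This novel and unique approach is the first to offer "
          "a critical, influential contribution.")
print(round(hype_rate(sample), 2))  # 5 hype terms in 14 words
```

On this toy sentence the rate is far above the 2.11-per-100-words average reported for impact case studies, which illustrates how dense the hype in promotional academic writing would have to be to move the aggregate figure.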
An analysis of 360 articles published in leading journals across 4 scientific fields revealed that papers now contain twice as many ‘hype’ terms as they did 50 years ago. Additionally, reports indicate a 9-fold increase in the use of words such as ‘novel’, ‘innovative’, and ‘unprecedented’ in PubMed-indexed journal abstracts between 1974 and 2014.
“All this … is the result of an explosion of publishing fuelled by intensive audit regimes, where individuals are measured by the length of their resumes, as much as the quality of their work. Metrics, financial rewards and career prospects … dominate the lives of academics across the planet, creating greater pressure, more explicit incentives and fiercer competition to publish.”
Prof. Hyland points to shortcomings in how the University Grants Council defines and assesses impact, noting that authors may not be the best judges of their own work’s real-world value. He argues that the current system actively encourages exaggeration of the importance of research findings.
To address this issue, impact assessment methods may need to be re-evaluated to promote accuracy and transparency and foster a culture of rigorous academic contribution.