Although it is a cornerstone of scientific publishing, peer review has come under increasing scrutiny in recent years. Topics such as transparency, bias and ghostwriting in peer review have all been tackled in recent articles on The Publication Plan. But have you ever considered whether the process is arbitrary?
In a paper published in Scientometrics, Elise Brezis and Aliaksandr Birukou investigated the arbitrariness of peer review in contexts where only a set number of submissions can be accepted (such as selecting conference papers for presentation or grant proposals to fund). They focused on two potential sources of arbitrariness: homophily (reviewers' personal bias towards projects aligned with their own preferred level of innovation) and the time allocated for the review. To explore this, Brezis and Birukou developed a mathematical model in which the true value of a paper was defined by three criteria (soundness, contribution and innovation). The model allowed reviewers to differ in the time they spent on evaluation and in their degree of homophily.
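To give a flavour of how such a model can produce different rankings from different committees, here is a minimal sketch (our own illustration, not the authors' actual model or code): a paper's "true value" averages its soundness, contribution and innovation; each reviewer adds noise that shrinks with review time, plus a homophily penalty for papers whose innovation level differs from the reviewer's preferred level. All functional forms, parameter names and values below are assumptions for illustration.

```python
import random

def review_score(paper, reviewer, rng):
    """One reviewer's noisy, homophily-biased estimate of a paper's value."""
    soundness, contribution, innovation = paper
    true_value = (soundness + contribution + innovation) / 3
    # Less review time -> noisier estimate (assumed functional form).
    noise = rng.gauss(0, 1 / reviewer["time"])
    # Homophily: penalise papers far from the reviewer's preferred innovation level.
    penalty = reviewer["homophily"] * abs(innovation - reviewer["preferred"])
    return true_value + noise - penalty

def rank_papers(papers, committee, seed=0):
    """Rank papers (best first) by their average score across a committee."""
    rng = random.Random(seed)
    scores = [
        sum(review_score(p, r, rng) for r in committee) / len(committee)
        for p in papers
    ]
    return sorted(range(len(papers)), key=lambda i: -scores[i])

# (soundness, contribution, innovation) triples for three hypothetical papers.
papers = [(0.9, 0.8, 0.2), (0.7, 0.7, 0.9), (0.8, 0.6, 0.5)]
# Two committees differing in review time and homophily.
committee_a = [{"time": 2.0, "homophily": 0.5, "preferred": 0.3},
               {"time": 1.0, "homophily": 0.8, "preferred": 0.2}]
committee_b = [{"time": 4.0, "homophily": 0.2, "preferred": 0.9},
               {"time": 3.0, "homophily": 0.3, "preferred": 0.8}]

print(rank_papers(papers, committee_a))
print(rank_papers(papers, committee_b))
```

Because the score depends on who reviews and for how long, the same three papers can come back in a different order from each committee, which is exactly the arbitrariness the authors formalise.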
Based on their results, Brezis and Birukou concluded that:
- The peer review process is arbitrary.
“The peer review process leads to arbitrariness: for the same given papers, when the reviewers are different, then we get a different ranking of the papers.”
These results are consistent with the Neural Information Processing Systems (NIPS) experiment, conducted in 2014, which demonstrated that different review committees presented with the same group of papers would select different articles for presentation at the conference. Similar subjectivity in assessment was also identified for peer review of National Institutes of Health (NIH) grant applications.
- Across iterations of the simulated review process, ratings of more innovative papers showed higher variance, lowering their likelihood of acceptance.
“Innovative projects are not highly ranked in the existing peer review process, mainly due to the homophilic trait of reviewers.”
The authors suggest that a degree of conformity exists within peer review that may result in low acceptance rates for more contentious or inventive papers.
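The intuition behind this finding can be sketched with a small Monte Carlo simulation (our illustration under assumed numbers, not the authors' code): two papers of equal underlying quality face a fixed acceptance threshold, but the "innovative" one is marked down whenever it is randomly assigned a homophilic reviewer, giving its scores a lower mean and greater spread.

```python
import random

def acceptance_rate(homophily_penalty, trials=20000, threshold=0.7, seed=1):
    """Fraction of simulated reviews in which the paper clears the threshold."""
    rng = random.Random(seed)
    accepted = 0
    for _ in range(trials):
        # Noisy estimate of an assumed true quality of 0.7.
        score = 0.7 + rng.gauss(0, 0.15)
        # Assume roughly half of reviewers are homophilic towards safer work
        # and mark the innovative paper down by a fixed amount.
        if rng.random() < 0.5:
            score -= homophily_penalty
        if score >= threshold:
            accepted += 1
    return accepted / trials

safe_rate = acceptance_rate(homophily_penalty=0.0)
innovative_rate = acceptance_rate(homophily_penalty=0.3)
print(safe_rate, innovative_rate)
```

Even though both papers are equally good, the innovative paper is accepted less often, mirroring the conformity effect the authors describe.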
The authors highlight that variation in homophily and time spent on peer review is sufficient to generate arbitrariness in the peer review process – regardless of how the soundness, contribution and innovation of a paper are weighted. As an industry, we acknowledge the limitations of the current peer review process. Could, as Brezis and Birukou suggest, artificial intelligence (AI) be the answer? Certainly, the use of AI in peer review is being explored, and widespread implementation may not be too far away.