ChatGPT and peer review: risk or revolution?


KEY TAKEAWAYS

  • AI-generated peer reviews are increasingly common, but they often lack depth and true scientific insight.
  • AI can support, but not replace, expert human review; clear guidelines and transparency are needed to maintain scientific integrity.

A recent article by James Zou in Nature highlights the growing role of AI in peer review: in one sample of computer-science reviews, up to 17% of peer-review comments were AI-generated. While tools like ChatGPT can assist with reviewing research papers, they also present challenges that the academic community must address.

The growing use of AI in peer review

Since the rise of ChatGPT in 2022, researchers have observed an increase in AI-generated peer reviews. These reviews are typically characterised by a formal, verbose style and often fail to engage specifically with the content of the submitted paper. Zou’s study, which analysed 50,000 peer reviews, also found that AI-generated text was more common in last-minute reviews, suggesting that time constraints may drive its use.

Risks and limitations of AI in peer review

While AI can streamline certain peer-review tasks, it cannot replace expert human reviewers. Current large language models (LLMs) struggle with deep scientific reasoning and often generate misguided assessments or ‘hallucinations’. AI-generated feedback frequently lacks technical depth and may overlook critical methodological flaws. Even when AI tools are used for low-risk applications, such as retrieving or summarising information, their outputs can be unreliable and should always be verified by human reviewers. Platforms like OpenReview, which facilitate interactive discussions between authors and reviewers, offer a promising model for balancing AI assistance with human oversight.

Responsible AI use in peer review

Zou concludes that the adoption of AI in academic publishing is inevitable. Instead of banning AI, the scientific community must establish guidelines for its responsible use.

To maintain scientific integrity, journals and conferences should require reviewers to disclose AI usage and develop policies that limit AI’s role to supportive, rather than decision-making, functions. More research is needed to define best practices, ensuring that AI benefits peer review without compromising its core principles.

How should journals handle AI-generated reviews?
