- ChatGPT and AI technology may revolutionise research and publishing, creating both opportunities and concerns.
- Policies and recommendations are needed to ensure ethical and transparent use of AI technologies in science.
ChatGPT is a machine-learning system that autonomously learns from huge data sets to produce what appears to be intelligent writing. Since its release in November 2022, ChatGPT has been the focus of much discussion within the MedComms community, owing to its potential impact on medical research and publication processes.
An example of how ChatGPT can be used was recently reported by Curtis Kendrick in a Scholarly Kitchen article. Kendrick described using ChatGPT to prepare a presentation about racism in academic libraries, querying the system and requesting citations on the subject. He concluded that, while the responses were credible and clearly written, the citations generated were either incomplete or referred to non-existent sources.
In an article published in Nature, Eva A M van Dis and colleagues discuss ChatGPT and other AI technologies in the context of publishing and research. They note that whilst ChatGPT offers many opportunities, it also raises concerns:
ChatGPT “might accelerate the innovation process, shorten time-to-publication and, by helping people to write fluently, make science more equitable and increase the diversity of scientific perspectives. However, it could also degrade the quality and transparency of research and fundamentally alter our autonomy as human researchers”.
van Dis et al. highlight five key recommendations for the use of systems like ChatGPT:
1. Retain human verification steps
- Expert-driven verification processes should be used to prevent inaccuracies, bias, and plagiarism.
- These issues may arise if relevant articles are missing in the ChatGPT training set, relevant information is not extracted, or credible sources are not distinguished from less credible sources.
2. Develop transparency and accountability rules
- The use of AI technologies should be declared by authors (including the extent of their use in the preparation of manuscripts and analyses) and by scientific journals (eg, in the selection of manuscripts for publication).
3. Invest in open-source AI technologies
- The authors encourage investments in non-profit projects to develop open-source, transparent AI technologies that are under democratic control.
- The training sets used to develop AI technology should be publicly available, in line with moves towards increased transparency and open science. Academic publishers should also allow machine-learning systems access to their archives, to ensure AI outputs are accurate and comprehensive.
4. Embrace opportunities
- ChatGPT can accelerate certain tasks, such as performing a literature search. However, this advantage needs to be carefully balanced with the potential loss of skills and autonomy in the research process.
5. Debate on the ethics, integrity, and transparency of ChatGPT use in science
- van Dis et al. call for an ongoing international forum on the development and responsible use of AI technologies for research.
- As a first step, they suggest a summit for scientists, technology companies, research funders, science academies, publishers, non-governmental organisations, and privacy and legal specialists to discuss and make recommendations and policies.
The authors conclude: “The focus should be on embracing the opportunity and managing the risks. We are confident that science will find a way to benefit from conversational AI without losing the many important aspects that render scientific work one of the most profound and gratifying enterprises: curiosity, imagination and discovery”.