Recommendations for ethical use of AI chatbots in publications
KEY TAKEAWAY
- Responsible use of generative AI in publications involves careful consideration of authorship, thorough verification of generated output, and transparent reporting.
ChatGPT, the large language model (LLM) that has taken the world by storm, is making its way into medical communications. As we delve into the era of artificial intelligence (AI)-powered publishing, it is crucial to maximise the potential of ChatGPT and other LLMs while adhering to best practices.
Editorials published in Nature and ACS Nano have summarised valuable insights into ChatGPT’s capabilities, potential benefits, and ethical concerns, and have put forward ground rules for its use in publications. In May 2023, the World Association of Medical Editors (WAME) updated its recommendations for the use of generative AI in scholarly communication. Recognising the importance of ethical use of AI in academic settings, Cambridge University Press has released its first AI research ethics policy. But what does all this mean for the medical communications industry? Relevant recommendations are summarised below.
Authorship
- LLM tools do not qualify as credited authors because they cannot generate new ideas or take responsibility for the accuracy, integrity, and originality of the work.
Verifying output
- AI-generated text should be cross-checked against trusted sources to ensure it is accurate, plagiarism-free, and appropriately referenced.
- Given the potential bias in chatbot output, authors should review AI-generated text to ensure a balanced view of the subject matter.
Transparent reporting
- Authors should specify the extent of chatbot use in their work.
- For papers with AI-drafted text, details of writing assistance, including the prompts used, should be provided in the acknowledgments section.
- If chatbots were used to analyse data or generate results, this should be declared in both the abstract and the experimental section.
- The forthcoming CANGARU checklist, set to be published in March 2024, aims to standardise the reporting of methods and results for clinical and scientific studies using LLMs, facilitating comprehensive disclosure.
By implementing these best practices, the medical communications industry can harness the power of generative AI while upholding ethical standards, fostering transparency, and ensuring accurate information is shared.