Image manipulation: how AI tools are helping journals fight back
KEY TAKEAWAYS
- Image manipulation is a prevalent issue in academic publishing and a potential sign of research misconduct.
- Many journals are now using AI tools to identify problematic images prior to publication; however, these will need to evolve as image manipulation becomes increasingly sophisticated.

Image manipulation in research articles is a growing concern. In a recent article for Nature News, Nicola Jones outlines how academic journals are embracing the use of artificial intelligence (AI) tools to identify manipulated images pre-publication.
How prevalent is image manipulation?
While often unintentional, image manipulation is prevalent and a potential sign of research misconduct. As reported by Jones, a 2016 study by science integrity consultant Dr Elisabeth Bik and colleagues found that nearly 4% of published biomedical science papers contained problematic figures. Similarly, around 4% of the 51,000 documented retractions in the Retraction Watch database flag a concern relating to published images. A more recent study by Dr Sholto David, which used AI to help identify suspect images, suggests the proportion of papers with problematic images may be as high as 16%.
What action is being taken by journals?
Jones highlights that a number of journals are taking steps to identify problematic images prior to publication. Some, including Journal of Cell Science, PLOS Biology, and PLOS ONE, either ask for or require the submission of the raw images used in figures. In addition, many journals now use AI tools such as ImageTwin, ImaChek, and Proofig to screen images for signs of manipulation before publication. In January 2024, the Science family of journals announced that it would use Proofig across all submissions, while other publishers are developing their own AI image integrity software.
Will AI put an end to this issue?
Jones reports that while AI tools make it faster and easier to detect problematic images, experts warn that they have limited capabilities to detect more complex manipulations, such as those made using AI. Bernd Pulverer, chief editor of EMBO Reports, cautions that, as image manipulation becomes increasingly sophisticated, it will become ever harder to detect, and existing screening tools could soon become largely obsolete.
Dr Bik proposes that stamping out image manipulation in the long run will require changing how science is done. She calls for a greater focus on rigour and reproducibility, and for a crackdown on bullying and high-pressure environments in research labs, which she believes create a culture where cheating is acceptable. We look forward to seeing how the development of increasingly advanced AI tools will help in the continuing fight against research misconduct.
