“Academics and editors need to stop pretending that software always catches recycled text and start reading more carefully.” So writes Prof Debora Weber-Wulff, professor of media and computing at HTW Berlin – University of Applied Sciences, in her World View article “Plagiarism detectors are a crutch, and a problem” for Nature. Prof Weber-Wulff has been testing such software for many years and is concerned that too much reliance is placed on the numbers these programs generate.
Plagiarism detection software compares a manuscript's text against other articles and applies algorithms to produce a score of how similar they are. Prof Weber-Wulff emphasises that the numbers generated are frequently difficult to interpret, and that both false positives (eg due to a large number of overlapping references) and false negatives (eg plagiarised text not accessible to the software) are possible. Yet these plagiarism scores are often accepted without any further investigation and may be used by journal editors in their acceptance/rejection decision-making. Plagiarism and duplication of text are widespread issues that need addressing: Prof Weber-Wulff highlights that 38/449 (8%) of abstracts submitted to the World Conference on Research Integrity this year were considered to be either plagiarised or duplicated from already published research.
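To illustrate why such scores need human interpretation, here is a minimal sketch of surface-level text matching. This is not any vendor's actual algorithm, just a naive Jaccard overlap of word 3-grams; note how purely mechanical matching of shared wording (such as a common reference list) would inflate the score even when the body text is original:

```python
# Hypothetical illustration, NOT a real detector's algorithm:
# score similarity as the Jaccard overlap of word 3-grams.

def ngrams(text, n=3):
    """Return the set of word n-grams in a text (case-insensitive)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity_score(manuscript, source, n=3):
    """Percentage overlap of word n-grams between two texts."""
    a, b = ngrams(manuscript, n), ngrams(source, n)
    if not a or not b:
        return 0.0
    return 100.0 * len(a & b) / len(a | b)
```

A score produced this way says only that word sequences match; it cannot distinguish a plagiarised argument from a legitimately shared bibliography, which is exactly the kind of false positive Prof Weber-Wulff warns about.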
Prof Weber-Wulff reiterates that plagiarism detection software can be used to identify matching text but ultimately the decision of originality should be made by a person. Careful reading, analysis and interpretation of the writing and the references used in an article cannot be replicated by a computer; human input is required.