
Predatory journal ‘safelists’ and ‘watchlists’: who decides?

‘Predatory’ publishing practices were first made infamous by Jeffrey Beall in 2008 when he published a list of journals and publishers that engaged in unethical activities, profiting from open access fees. When Beall took his list offline in 2017, several other organisations began publishing ‘safelists’ and ‘watchlists’ to help researchers identify appropriate target journals for their work – although predatory publishing can be difficult to define. A recent article by Professor Amy Koerber and colleagues in The Journal of Academic Librarianship explores how these lists have their own pitfalls.

Professor Koerber et al used qualitative methods to distinguish between four watchlists that aim to identify predatory journals and publishers (eg Stop Predatory Journals) and ten safelists that aim to identify legitimate journals and publishers (eg the Directory of Open Access Journals and the Berlin Institute of Health Open Access Journal Positive List).

Watchlists were typically modelled after Beall’s original list, and all recognised faulty peer review and poor paper quality as key indicators of a predatory journal. These watchlists can be controversial, with lack of transparency a particular concern: only the Dolos list has published its criteria for determining which journals it considers predatory, parasitic or pseudoscientific. Safelists, by contrast, generally identified as legitimate and ethical those journals that maintain a clear separation between business and editorial practices, and that demonstrate ‘truth in branding’ in their public presentation.

However, an inherent risk of both safelists and watchlists is that they are consistently incomplete and out of date due to evolving predatory publishing practices. The authors note that suspect publishing practices may also occur at well-known journals or publishers, making it difficult for organisations to ‘future-proof’ their lists.
In describing the unsystematic approaches used to develop these lists, Professor Koerber et al highlight the need for further research into predatory publishing. For now, the burden of navigating these complexities falls to individual authors – a challenging task when the distinction between trustworthy and predatory journals is not always clear-cut.


Summary by Kristian Clausen MPH from Aspire Scientific


With thanks to our sponsor, Aspire Scientific Ltd


