Ivan Oransky has been at the forefront of efforts to highlight research integrity issues for over a decade, co-founding Retraction Watch in 2010 to track and publicise retractions in the scientific literature. Following his presentation at the 2020 European Medical Writers Association (EMWA) symposium, we spoke to him about retractions during the COVID-19 pandemic and steps he believes should be taken to tackle research integrity challenges in the future.
First of all, COVID-19 is having a huge, ongoing impact on our daily lives and on scientific research – reflected in the huge number of COVID-19-related publications. At the same time, Retraction Watch’s list of retracted COVID-19 papers continues to grow. Which of the COVID-19-related retractions to date do you think have been the most notable, and what do these cases tell us about current practice in scientific publishing?
“I don’t know that I would choose any particular COVID-19-related retraction as most notable – I suppose that’s like asking which of your children is your favourite. There are certainly the ones that gained the most attention – if I had to pick one, it would be the Lancet paper about hydroxychloroquine that was based on a very questionable (at best) dataset from a company called Surgisphere. I think that paper captured the most attention, and close behind it was a New England Journal of Medicine (NEJM) paper that was also based on those alleged data, but wasn’t about hydroxychloroquine so didn’t capture quite so many eyeballs. Those are the retractions where I think a lot of people had a Casablanca “shocked, shocked!” moment, with the idea that, somehow, this was completely different from anything that’s ever happened in science before. And that’s just nonsense – complete revisionist history.
I think it’s more important, or useful in a way, to look at the whole pattern. I wouldn’t exactly call this a dataset, but there have been 87 retractions of COVID-19-related papers to date. That number isn’t all that different from what you would expect to see given the number of papers – and preprints – that have been published.
There have been 87 retractions of COVID-19-related papers to date. That number isn’t all that different from what you would expect to see given the number of papers – and preprints – that have been published.
However, 10 of these retractions were because Elsevier published manuscripts twice that authors had only submitted once. What that speaks to is the rush, or the fast pace, of publishing in the COVID-19 era. The fast pace isn’t so bad, but the system of peer review and publication hasn’t really adapted well enough to it over the years – although I would argue that there have been some strides in that direction.
The fast pace of publishing in the COVID-19 era…isn’t so bad, but the system of peer review and publication hasn’t really adapted well enough to it over the years.
To me, it’s not a particular retraction that’s important – rather the phenomenon that everyone’s rushing and there’s a lot of sloppiness. If anything, I’d say that the proportion of retractions due to misconduct is much lower than you might see in a typical dataset of retractions. I don’t know what to make of that yet, and it could be that people just haven’t found the cases of misconduct so far, but I think that that’s worth paying attention to. It really speaks more to sloppiness and rushing rather than out-and-out fraud accounting for COVID-19-related retractions.”
The proportion of retractions due to misconduct is much lower than you might see in a typical dataset…it really speaks more to sloppiness and rushing rather than out-and-out fraud accounting for COVID-19-related retractions.
While journals have acted quickly to retract some COVID-19-related publications, in general, the pace of investigation and retraction is very slow. However, you’ve recently highlighted a “double-standard” involving rapid retraction when papers draw negative attention on social media. How should journals prioritise their investigations to address allegations in a timely way?
“Well, I think that what journals and publishers should do is actually prioritise investigations. Although some argue that the problem is certain papers being retracted before other papers, the problem is that not enough papers are being retracted, full-stop. There are countless papers being flagged – whether that’s on PubPeer, through correspondence with journals or by scientific sleuths like Elisabeth Bik – where journals are doing nothing. Maybe they’re investigating the cases and it’s just taking them a long time – but why is it taking them so long?
One positive development over the past few years is that some journals are actually hiring entire staffs to look at allegations and to try to catch issues that might lead to retraction before articles are published. Those are the journals and publishers that I think everyone should emulate, such as the Journal of Biological Chemistry, PLOS ONE and FEBS Press.
Some journals are actually hiring entire staffs to look at allegations and to try to catch issues…before articles are published. Those are the journals and publishers that I think everyone should emulate.
So, to me, the issue is not so much whether we should retract some papers before others. The more important question is ‘why are journals not prioritising investigations, full-stop?’ If there has to be some prioritisation, then we should retract papers with fatal flaws that seem to be doing harm, or have the potential for doing harm, first. The problem is that then nobody will do anything about all of the other papers. I really hesitate to talk about prioritising certain ‘retractable offences’ over others as I know what will happen – I’ve been watching journals ignore problems for a decade. If you give journals and publishers an excuse, or a rationalisation for why they’re not getting to something they should be getting to, you’re creating more of an issue, and journals know that.”
I really hesitate to talk about prioritising certain ‘retractable offences’ over others as I know what will happen – I’ve been watching journals ignore problems for a decade.
Recently, Retraction Watch discussed a Scientific Reports article retracted following a post-publication peer review round requested by the Editor. Are changes to peer review processes needed to avoid this kind of retraction? Do you think increasing adoption of post-publication and open peer review processes will impact retraction rates?
“I think whether changes are needed to peer review processes depends on what your goal is. Is your goal to prevent retractions, or is it to actually have a transparent publication process that reflects how science works instead of having papers be the be-all and end-all in terms of promotions, tenure, and so on? I think you have to decide what your goals are, and once you’ve decided this, you can create a system that makes sense.
Part of what always puzzles me is why journals can’t just be honest all the time about how much gets through peer review that shouldn’t.
Part of what always puzzles me is why journals can’t just be honest all the time about how much gets through peer review that shouldn’t. In my opinion, journals have never done a good job of answering this. I hope that one of the illuminating things about the Lancet and NEJM COVID-19-related retractions is that the editors were really forced to admit that their peer review systems were not well-equipped for those papers, although the journals approached this in different ways. These lessons are a good thing, but it’s not as if these issues with peer review only happen when there’s a retraction that catches everyone’s attention.
I hope that one of the illuminating things about the Lancet and NEJM COVID-19-related retractions is that the Editors were really forced to admit that their peer review systems were not well-equipped for those papers.
The paper in Scientific Reports caught everyone’s attention because of what it’s about and the conclusions [the paper made links between obesity and dishonesty], but papers are slipping through like this all the time. Journals need to acknowledge this and provide their peer review reports. I do think that, even if it’s anonymised, publishing peer review comments is a good idea so you can have some faith in the process, see what happened, and believe what happened. I’m not sure that there’s an alternative to journals acknowledging the limitations of peer review processes – I think that they just have to be honest. At this point, every single time a retraction happens, everyone says it was an anomaly and finds a reason for why it was unique. We’re now cataloguing close to 2,000 retractions per year, suggesting that this is not true, and these cases are not unique.”
At this point, every single time a retraction happens, everyone says it was an anomaly and finds a reason for why it was unique. We’re now cataloguing close to 2,000 retractions per year, suggesting that this is not true.
Retractions can occur for any number of reasons, but retraction notices (if they appear at all) can be vague about the underlying cause. How should a retraction ‘ideally’ be conveyed? Is a nomenclature needed, particularly to help protect authors when the retraction is due to honest error?
“Over the years, I’ve actually grown to be increasingly opposed to a nomenclature for various ‘types’ of retraction. I think that in every case I’ve seen where nomenclature is involved, either journals make category errors or they use nomenclature as weasel words. Elsevier have used ‘withdrawn’ in certain cases (and other publishers have followed suit in some ways), and really this is an excuse or rationale not to include any information about why the paper was withdrawn or retracted. That’s a big step backwards. We all make category errors – I make category errors probably every day, but I hope I correct them. For whatever reason, the notion that what we really need is a better taxonomy has persisted – but how that is going to solve the problem of lawyers getting involved in the process and obfuscating reality, or journals not including reliable information in retraction notices, I don’t understand. It won’t help anyone if you still don’t know what actually happened.
What should actually happen – and this is borne out in the economics literature – is that retraction notices should state as clearly as possible what occurred, or state frankly if it’s unclear, as sometimes people have muddied the waters. If that’s the case, then say so: ‘we don’t know what’s happened here because lawyers on either side have been bickering for a year about this – but we feel we should tell readers anyway’. That’s a pretty honest way to go, unlike the approach of not saying anything.
Retraction notices should state as clearly as possible what occurred, or state frankly if it’s unclear.
For individual researchers, it’s very clear that if you retract a paper for fraud, dishonesty or misconduct, you have a retraction penalty, and your citations decline. Maybe your whole subfield’s citations decline as you bring everyone down with you. When you retract a paper due to honest error and the retraction notice very clearly explains this, you don’t see that decline. One study says you might even see a bump, although that hasn’t been replicated.
So, clarity in retraction notices is what’s needed. I think the notion that we can classify everything with a set of words – that will be argued about forever anyway – is the wrong way to go.”
Even after retraction, papers continue to be cited. Do journals need to do more to publicise retractions, and how can authors make sure they don’t fall into this trap?
“Again, it depends what journals want. Do they want to be upfront and help scientists be more efficient, make new discoveries and build knowledge, or are they more interested in protecting their reputations and hiding the fact that something has been retracted? I go by the old adage ‘never ascribe to malice that which is adequately explained by incompetence’, so I’m willing to acknowledge that the lack of action from journals may be due to incompetence rather than being intentional.
Do they [journals] want to be upfront and help scientists be more efficient, make new discoveries and build knowledge, or are they more interested in protecting their reputations and hiding the fact that something has been retracted?
There are now countless studies, conducted by librarians and bibliometrics and scientometrics scholars, showing that it can be very difficult to find that an article has been retracted. Journals and publishers are not transmitting the metadata to where they should (whether this is PubMed, Web of Science, etc) and sometimes they transmit the wrong metadata (eg they call something a correction when it’s a retraction). Even on the journal’s own pages or on the PDFs, articles often don’t show up as retracted. Journals should do more, as they’re the ones who end up publishing papers citing retracted work.
Journals should do more, as they’re the ones who end up publishing papers citing retracted work.
So, how can authors make sure they don’t fall into this trap? We created a database primarily for tracking retractions, and it’s more comprehensive than any other database of or containing retractions. At the moment, there are close to 25,000 retractions in our database – that’s almost twice as many as you’ll find in any other similar database. Authors can search for articles one-by-one using our database, if they want, or they can sign up for software suites and bibliographic management software packages that are working with Retraction Watch’s database. If you use Zotero, for example, you’ll get an automatic flag every time a paper in your library is retracted. We get notes about this on Twitter all the time from people who didn’t know it existed and find it really helpful – we’re thrilled with that. We’d love the Retraction Watch database to be incorporated into more software packages too. Without automated flagging, which publishers just aren’t doing at this point, I just don’t see how authors can avoid citing retracted work – but these automated processes have become pretty easy to do.”
Without automated flagging, which publishers just aren’t doing at this point, I just don’t see how authors can avoid citing retracted work.
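The automated flagging Oransky describes amounts to a simple set lookup: compare the DOIs in a reference list against a snapshot of known-retracted DOIs. The sketch below illustrates the idea in Python; the variable names and the sample DOIs are purely illustrative assumptions and do not reflect the actual Retraction Watch export format or any real retracted papers.

```python
# Minimal sketch of automated retraction flagging: check a bibliography's
# DOIs against a local snapshot of retracted DOIs.
# Sample data below is illustrative only -- not real DOIs or the real
# Retraction Watch export format.

def find_retracted(bibliography_dois, retracted_dois):
    """Return the DOIs from a bibliography that appear in a retraction list.

    Comparison is case-insensitive, since DOI matching is defined to be
    case-insensitive.
    """
    retracted = {doi.lower() for doi in retracted_dois}
    return [doi for doi in bibliography_dois if doi.lower() in retracted]

# Hypothetical snapshot of retracted DOIs (e.g. exported from a database).
retraction_snapshot = [
    "10.1000/example.retracted.1",
    "10.1000/example.retracted.2",
]

# A hypothetical author's reference list.
my_references = [
    "10.1000/example.ok.1",
    "10.1000/EXAMPLE.RETRACTED.2",  # matches despite different casing
]

flags = find_retracted(my_references, retraction_snapshot)
```

A reference manager doing this continuously would simply re-run the same lookup whenever the retraction snapshot is refreshed, which is why Oransky calls these automated processes "pretty easy to do".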
The extent and sophistication of journal targeting by paper mills and scams is ever-increasing. From your perspective, what can be done to tackle this problem and future-proof publishing processes against these attacks?
“To me, this really takes a two-pronged approach. One prong is to tackle what we know is out there that no-one has seen fit to tackle yet. iThenticate and other software that looks for plagiarism and duplication follow this model: journals and publishers realised there was a lot of plagiarism, someone developed some software, and now everyone uses it. The same could be done with our database of retractions. Right now, we don’t have a good set of software tools that can detect image manipulation or image duplication, for example. We have individuals including Elisabeth Bik who are doing amazing work, but that’s not really scalable and we need a scalable solution. However, these solutions are only looking to fight yesterday’s battles. Meanwhile, the people who came up with these bad practices are coming up with more ‘clever’ approaches and we won’t know what those are until they explode. So, all of this fits into one prong – rooting out problems once we know they exist.
We also need to take a step back and move upstream to what the real issue is, which is the incentive structure. If we really want to disincentivise bad (arguably, sometimes criminal) behaviours of misconduct and fraud, we need to decouple every career-affecting decision in academia from publishing papers in top journals. If you remove that incentive, then nobody’s going to feel a particular need to fake papers, go to a paper mill, or anything else.
If we really want to de-incentivise bad (arguably, sometimes criminal) behaviours of misconduct and fraud, we need to decouple every career-affecting decision in academia from publishing papers in top journals.
It’s probably no accident that paper mills tend to be concentrated in places, particularly China, where the incentive structure has been completely warped towards papers for so many years. If we don’t look at these incentive structures, every year or so, another scam will come out.
If we don’t look at these incentive structures, every year or so, another scam will come out.
We wrote about fake peer review back in 2012 – it turns out this hasn’t been eradicated, although it is now easier to detect and has been cut down. We broke a story about selling authorship in Russia, we’ve reported on paper mills – there’s just always something, and there’s always going to be something else. I don’t have the kind of mind to think up what will be next, although I can often find it once it happens thanks to sources like the scientific sleuths. None, or very little, of this will happen if we remove the very pervasive and poisonous incentive structures we have at the moment.”
As noted in the 10 takeaways from 10 years at Retraction Watch, pharma-funded publications account for a low proportion of retractions. You’ve noted that this is unsurprising given the increased scrutiny in pharma versus academia – what changes should academia make to reduce retraction rates?
“Maybe this is controversial, but I don’t know that we should (certainly in the short or medium term) push to reduce retraction rates. If we mean reduce retraction rates as a proxy for reducing ‘bad behaviour’ – sloppiness or even misconduct – then yes, we should take measures to try to prevent that or to detect it better. There are still a lot of papers that should be retracted but haven’t been, so I don’t think we’ve reached the peak of retractions yet. Just like any other metric, if you suddenly decide that we need to cut down on retractions, that will make things worse. I do think that there are lots of steps that academia can take to try to cut down on these bad behaviours – this goes back to incentives, in a large part.
I don’t think we’ve reached the peak of retractions yet. Just like any other metric, if you suddenly decide that we need to cut down on retractions, that will make things worse.
On the flipside, I don’t think that we should absolve pharma-funded publications of bad behaviour or misconduct. For those sorts of papers, studies can be set up in such a way as to get the desired results, but this is not something that would be considered misconduct or would be a ‘retractable offence’. There are gatekeepers and hoops that studies need to jump through (like Institutional Review Boards), but we shouldn’t assume that those systems are perfect.
Both settings have a lot of work to do – in academia you see behaviours that are ‘retractable offences’ while in pharma, that’s not the case, but research practices can have other negative effects. If universities are interested in lowering the rates of misconduct in their ranks, they need to look inwardly and examine whether they’ve created incentive structures that reward good or bad behaviour.”
Finally, in your opinion, what is the biggest challenge to research integrity right now, and how can this be overcome?
“I’m going to sound like a broken record, but I do think that incentives are my main concern and the thing that needs the most attention. That being said, one of the things that worries me is the significant tribalism in science, which has been amplified and made more visible by COVID-19.
One of the things that worries me is the significant tribalism in science, which has been amplified and made more visible by COVID-19.
You want constructive criticisms and critiques in science – you don’t want them to be ad hominem attacks. The critiques should help move the science and the evidence to a better place. Often, the most critical peer reviews are not necessarily of the papers that are most problematic (or frankly those that shouldn’t have been considered for publication in the first place), but are of papers that disagree with your point of view. I guess there’s a tribalism that cuts in every which way, whether it’s scientific, political, or due to the family tree of where and who you trained with. You end up with a lot of people shouting at each other and ‘creating heat without shedding a lot of light’. In the same way, social media has amplified and exacerbated a lot of issues in terms of politics, world events, conspiracy theories and what have you. Sometimes the loudest voices in science don’t have the evidence on their side, but their rhetorical approach is better.
Sometimes the loudest voices in science don’t have the evidence on their side, but their rhetorical approach is better.
I’m all for free speech – I think everyone should feel free to speak their mind and I encourage that, even when they disagree with me – but if we don’t figure out how to get away from this tribalism, we’re just going to polarise science even more. If we couple that with all the issues science is facing, whether it’s a real lack of funding, or publish-or-perish incentives, it’s not going to go well.”
Ivan Oransky is Editor in Chief of Spectrum, Distinguished Writer In Residence at New York University’s Carter Journalism Institute, and President of the Association of Health Care Journalists. He is also co-founder of Retraction Watch, which can be followed on Twitter @RetractionWatch. You can contact Ivan at firstname.lastname@example.org and follow him on Twitter @ivanoransky.
With thanks to our sponsor, Aspire Scientific Ltd