Meeting report: summary of session 2 of the 8th EMWA Symposium
The 8th European Medical Writers Association (EMWA) symposium, entitled ‘Research Integrity & The Medical Communicator: What We Do When No One Is Watching’, took place on 6th November. Researchers, journal publishers and representatives from the pharmaceutical industry and medical communication agencies came together at this virtual event to share their perspectives on the importance of research integrity and how it can be achieved.
A summary of session 2 of the meeting is provided below to benefit those who were unable to attend, and as a timely reminder of the key topics for those who did.
Read our summaries of session 1 and session 3 of the meeting.
Session 2: how we can strengthen research integrity
Evolving approaches to improve research integrity
Marcus Munafò (University of Bristol) gave a fascinating account of the UK Reproducibility Network (UKRN), including why it was established, its structure, its activities, and the ways in which it is facilitating conversations across different levels of the academic and research ecosystem.
Background
We have a scientific method that has provided unparalleled insights into nature for over 400 years. There are concerns, however, that the scientific method has become distorted over time by changes in technology and pressures on academia to publish and obtain grants (a seminal moment being the publication of the essay titled ‘Why Most Published Research Findings Are False’ by Ioannidis in 2005).
Compounding the less-than-ideal incentive structures is the fact that researchers are human and bring to their work certain cognitive biases such as the tendency to see a pattern where there isn’t one. In addition, scientists are incentivised to find something because discovery is prioritised over replication. This prompts the question:
How can we protect against bias and create incentives that benefit science rather than scientists and their individual careers?
Structure
The ‘systems’ nature of the research integrity issue means that responsibility does not sit with any one group and so the UKRN was launched in 2019 to bring together different stakeholders to fill that gap. UKRN comprises:
- a Steering Group – a small group of academics across a variety of disciplines
- local networks – typically led by early career researchers and consisting of informal groups engaged with research integrity who can set up initiatives and provide a voice at the grassroots level
- institutional leads – more senior individuals involved in the high-level strategy of the institution who work with the local network lead so that the voice of the grassroots community is heard by senior management
- a stakeholder engagement group – comprising publishers, learned societies and funders.
The UKRN now has 17 institutional members across the UK and more than 50 local networks at institutions of different types and sizes. Munafò noted that it is particularly important to focus on early career researchers, who are keen to move the field forward. Later in their careers, researchers may come to feel that their idealistic notions of academia were naïve and succumb to the pressure to publish and obtain grants, while senior researchers, who have ‘survived’ the system, may not be aware of the problem.
Goals
A key feature of the UKRN is that it is peer-led, facilitating discussions between researchers, publishers and funders. Its goals are to improve the quality of research through better incentives and training, and to achieve broad disciplinary representation (although the network still works predominantly in the biomedical field, it is increasingly engaging with the arts and humanities). Initiatives include:
- local level initiatives to stimulate conversations at the grassroots level
- stakeholder-level partnerships, eg with funders and publishers, to combine grant review and journal peer review into one process:
  - if a grant is provisionally accepted for funding, it is handed over to a journal for a more thorough review of the protocol, which may lead to provisional acceptance of the results before any data have been collected, focusing on the importance of the question and the strength of the methodology
  - this can accelerate the publication process and reduce the burden on journal reviewers (as authors can avoid approaching multiple journals before their work is accepted).
Other national networks modelled on the UKRN, such as the Swiss Reproducibility Network, are being launched. As publishers, funders and researchers all work internationally, Munafò would ultimately like to see these networks working together at a global level.
Ensuring the integrity of medical evidence generation – a 360-degree view
Mona Khalid (Galapagos NV) highlighted that, increasingly, our lives are being dictated by algorithms that effectively separate winners from losers (eg deciding whose CV gets shortlisted for a job, who gets credit and who doesn’t). In a medical context, algorithms determine who will receive treatment and when. Algorithms are built from two things: the data/evidence that inform them and definitions of success (outcomes). How can we ensure that we trust algorithms in terms of their application in our daily lives?
Research integrity standards need to be tailored to the type of research project. The broad scope and variable design of observational studies mean that there are potential quality issues, and their growing use by a variety of stakeholders has brought increased scrutiny of their design, methods and reporting. So how do we ensure their integrity?
Real world evidence (RWE) studies need to be conducted in a standardised, transparent way: case definitions, coding and outcomes all need to be very clearly described. Several groups have developed statements and papers on good practice for RWE studies:
- ISPOR (International Society for Pharmacoeconomics and Outcomes Research)
- Regulators such as the FDA
- STROBE – the STROBE group has released a variety of statements and checklists for researchers to follow, as well as guidance documents on transparency and data integrity. The STROBE checklist comprises 22 reporting items specific to observational data and provides specific guidance on the nomenclature used to describe the wide array of designs and methodologies of observational studies.
Khalid highlighted that there is no comprehensive registry of observational studies akin to ClinicalTrials.gov. Some mandated registers exist that tackle specific aspects (eg safety registers) but this shortfall needs to be addressed.
Randomised controlled trials (RCTs) have been hailed as the gold standard in evidence generation. Increasingly, however, the outputs of RCTs, which answer the question of which treatment to use in which patients and when, may not constitute the full evidence base. The future is more holistic, and a full spectrum of evidence is increasingly required. In oncology, for example, regulators have granted conditional approvals of treatments with the proviso that manufacturers supplement RCT data with post-marketing evidence from clinical practice in a real-world setting.
The future of data generation for treatment approvals may differ from the traditional pathway, taking a more iterative, looped approach that develops a continuum of evidence. Several examples already exist in which this continuum of evidence has been used to support label expansions or new indications.
Khalid concluded that it is critical to ensure integrity of evidence across medical evidence generation. Ultimately, this is to ensure that people get the most appropriate treatments and that the algorithms that dictate so much of our daily lives get it right.
In the subsequent Q&A session, Khalid noted that, although medical writers may have a very strong background working on RCTs, they may lack familiarity with RWE studies. When reporting RWE studies, it is important to:
- explain why and how patients were selected for observation
- characterise the population very well
- describe effect modifiers
- appreciate that the interpretation and conclusions need to be well informed.
It was noted that EMWA is working on educational initiatives in this area.
How a publisher’s research integrity group works
Suzanne Farley (Springer Nature Research Integrity Group [SNRIG]) noted that while upholding integrity during the publication process involves taking personal responsibility (especially for authors), everyone involved in the research enterprise has a shared responsibility.
No one has a good handle on the number of research integrity problems in the published record, but increased focus on this issue in recent years has prompted data gathering; one of the best resources is the Retraction Watch database. Although the rate of retractions doubled between 2003 and 2009, it appears to be levelling off, and retraction remains a relatively rare phenomenon. However, Farley noted that the number of retractions is not necessarily a reflection of what should be retracted, and feels that we may be ‘uncovering the tip of a very large iceberg’.
Relatively few authors are responsible for a disproportionate number of retractions owing to deliberate misconduct, likely because those who successfully commit research misconduct go on to repeat it with increasing frequency. Farley also highlighted that there is geographic variation in the retraction rate, likely linked to the variable timeline of awareness around the world.
Many large publishers have a department dedicated to managing issues related to research integrity because, although oversight of research integrity is generally considered an inherent part of an editor’s role, many editors have received little training in how to deal with issues when they arise. Standard manuscript quality checks – for example, plagiarism detection software and duplicate submission checks – capture a large portion of potential problems, and technologies such as artificial intelligence are increasingly being used, but technology needs to be supplemented with people.
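At their simplest, such checks are text-similarity screens. The sketch below (a minimal illustration in Python, assuming scikit-learn is available; it does not describe any publisher’s actual tooling, and the manuscript text is hypothetical) shows the kind of comparison a duplicate-submission screen performs, flagging a new manuscript that closely matches previously seen text:

```python
# A minimal sketch of text-similarity screening (illustrative only; real
# publisher systems compare submissions against large indexed corpora).
# Assumes scikit-learn is installed; the document text here is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def flag_similar(new_text, corpus, threshold=0.8):
    """Return indices of corpus documents whose TF-IDF cosine similarity
    to the new submission meets or exceeds the threshold."""
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(corpus + [new_text])
    # Compare the new submission (last row) against every earlier document.
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    return [i for i, score in enumerate(scores) if score >= threshold]

corpus = [
    "Effect of drug X on outcome Y: a randomised controlled trial.",
    "Reporting quality of observational studies: a systematic survey.",
]
# A near-verbatim resubmission of the first manuscript should be flagged.
print(flag_similar("Effect of drug X on outcome Y: a randomised trial.", corpus))
```

In practice, as Farley noted, such automated screens only surface candidates; human judgement is still needed to decide whether a match represents a genuine problem.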
Focusing on the SNRIG, Farley explained that the team of 10 works across 3000+ journals (spanning science, technology, engineering and mathematics, arts, humanities and social sciences), as well as books and databases. Determining whether a research integrity problem is the result of an honest error or deliberate misconduct is not the aim of the group. Rather than ‘administering punishment’, the group functions as an advisory body focused on correcting the published scholarly record through:
- training for in-house staff and external editors
- advice on resolving specific cases
- policy, workflow, product and system development.
Traditionally, research integrity work is fraught and confrontational. The SNRIG is trying to change this by maintaining a non-accusatory approach, adhering to the principle of innocent until proven guilty.
A variety of groups report potential problems, including researchers, institutions, funders, journalists, and members of the public. The number of people reporting problems is increasing steeply, prompted both by growing awareness of the problem and by stakeholders becoming fed up with the literature being ‘polluted’ with unreliable research.
The issues that are reported to the group include:
- plagiarism/duplicate publication
- authorship disputes
- peer review process problems
- deliberate data fabrication/manipulation
- image manipulation.
As well as ‘standard’ cases involving single papers or books, the team also works on ‘large’ cases involving multiple papers, journals or publishers; these represent systematic manipulation of the review process, such as paper mills and organised rings of fake peer review. Cases relating to medical treatments with an immediate impact on human health, or to ethical breaches in human research, are considered serious enough to be treated as ‘large’ cases too. Increasingly, authors engage lawyers, which complicates and slows down the process.
The team is currently working on 36 large cases involving over 2000 papers. Network mapping analysis is being used to identify relationships between individuals involved in this activity across Springer Nature journals.
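To illustrate what such network mapping involves (a hedged sketch, not a description of Springer Nature’s actual tooling), one can build a graph linking individuals who co-occur on flagged papers and look for densely connected clusters. The sketch below uses the Python networkx library; the papers and individuals are hypothetical placeholders:

```python
# Illustrative sketch of network mapping across flagged papers (assumes the
# networkx library; the papers and individuals below are hypothetical).
from itertools import combinations
import networkx as nx

# Each flagged paper lists the individuals involved (authors, reviewers, editors).
flagged_papers = {
    "paper_1": ["author_A", "reviewer_B", "editor_C"],
    "paper_2": ["author_A", "reviewer_B", "author_D"],
    "paper_3": ["reviewer_B", "author_D", "editor_C"],
}

G = nx.Graph()
for people in flagged_papers.values():
    # Connect every pair of individuals who co-occur on the same flagged
    # paper, counting how many papers each pair shares.
    for a, b in combinations(people, 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Clusters of individuals who repeatedly co-occur may point to coordinated
# activity (eg a paper mill or fake peer review ring) rather than isolated
# incidents.
for cluster in nx.connected_components(G):
    print(sorted(cluster))
```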
Farley outlined the process for investigating potential problems:
- when a potential problem is escalated to the SNRIG, the group reviews the full manuscript (and the peer review process)
- an explanation is requested from the author (using neutral language)
- legal advisors are consulted
- the Editor in Chief is advised with a best-practice resolution
- the editor makes the final decision with regard to the action taken, which may include a correction, an editorial expression of concern or a retraction.
Resolutions can sometimes be achieved within a few months but may take 2–3 years, particularly with larger cases when different institutions and legal jurisdictions are involved. Notably, publishers are currently not subject to a regulator, but are instead trusted to have good, standard quality control and to do the right thing when a problem is detected.
Farley closed by highlighting that progress is being made on these issues but is patchy and slow because research integrity is often considered a low priority by publishers.
Publishers are not the only stakeholders and we need to work together to standardise integrity approaches as far as possible.
In the subsequent Q&A session, it was noted that medical writers need to be aware of the prevalence of these problems and if they do detect a potential issue they should report it, anonymously if they wish.
Publishers’ responsibilities in promoting data quality and reproducibility
Session 2 continued with Iain Hrynaszkiewicz (Publisher, Open Research at Public Library of Science [PLOS]) discussing practical ways that publishers can improve the quality and reproducibility of content. Hrynaszkiewicz highlighted that the retractions we see are just the tip of the iceberg: the burden of irreproducible research is much greater. For example, the cost of irreproducible preclinical biological research in the US is estimated to be around half of the total spend, which can have serious consequences in areas such as drug discovery. The following factors may contribute to irreproducibility:
- poor study design
- selective reporting of results
- insufficient protocols, computer code or reagent information available from the original lab
- raw data not available
- insufficient peer review of research.
Hrynaszkiewicz noted that while publishers can’t necessarily measure whether they’re impacting reproducibility, they can measure transparency as a precursor to reproducibility.
Hrynaszkiewicz highlighted 6 things that publishers can do to help improve the situation:
- Understand researchers’ needs – PLOS has been surveying researchers about barriers to sharing code: while some of the issues are practical, publishers need to be aware of legal and ethical barriers that researchers may face.
- Raise awareness and help to create change – publisher/journal data policies have become more common since 2012 and have proliferated since 2016. The stringency of data sharing policies varies by subject area and journal, but policies making data availability statements mandatory have resulted in much stronger compliance. Hrynaszkiewicz noted that the implementation of such policies will differ depending on whether the aim is to increase data availability or to raise awareness. He also acknowledged that stronger policies entail costs, but argued that this should be viewed as an investment given the estimated €10.2 billion annual cost to the European economy of not making data Findable, Accessible, Interoperable and Reusable (FAIR). He also noted the citation benefits for researchers of making data available in a repository.
- Improve the quality and objectivity of peer review – Hrynaszkiewicz highlighted the use of checklists or reporting guidelines to improve reproducibility and the potential for software to detect plagiarism or image manipulation early, allowing peer reviewers’ efforts to be reserved for submissions without these issues. Reviewer engagement with associated data or code is boosted if access to such data or code is made easier by, for example, highlighting data policies in the version of a manuscript sent to peer reviewers.
- Enhance scholarly infrastructure – there is no lack of journals that publish research based on scientific rigour and soundness, regardless of perceived importance. Preprint platforms can be useful for the early sharing of results, while platforms such as protocols.io are available for other research outputs. Publishers could partner with research data repositories and make it easier to link published articles to other research outputs, such as those posted on figshare.
- Enhance established incentives – and create new ones – publishers should look at incentives to share more research, a wider variety of research and more rigorous research. ‘Traditional’ incentives, such as citations to research papers and new authorship opportunities, may be the most effective for encouraging researchers to share research data. Many publishers offer additional article types to encourage sharing of data, software and methods/protocols, eg Scientific Data, Earth System Science Data, Journal of Open Research Software, Current Protocols, MethodsX, and Nature Protocols. Beyond this, ‘experimental’ incentives designed to encourage transparency include the badges for open research practices introduced by the Center for Open Science.
- Be open ourselves – building researchers’ trust in publishers as partners in promoting sharing and openness requires publishers themselves to be open. Supporting open access publishing and reuse, opening up parts of subscription publishing such as reference lists and abstracts, and making user research available can all help to achieve this. Publishers must also collaborate with each other and with other stakeholders – for example, through the 2020 STM Research Data Year, which supports the uptake of common approaches to data sharing, and the Scholarly Link eXchange (Scholix), which collates information on links between research outputs.
In closing, Hrynaszkiewicz looked at practical steps researchers and writers could take to promote transparency. He highlighted the need to consider journal and funder data sharing policies in advance and to be prepared to respond to data sharing requests, and to consider whether findings should be shared early via preprints. Finally, Hrynaszkiewicz urged attendees to publish their results regardless of the outcome, as there is no shortage of publication venues in which to do so.
Responsible data sharing to improve research integrity
In the final talk of session 2, Julie Wood (Director of Strategy and Operations, Vivli) discussed strategies for and the benefits of sharing clinical trial data responsibly. Wood stated that, while obstacles to data sharing (related to skills, ability and funding) remain, we are getting closer to overcoming many of them as cultural change towards data sharing accelerates.
The benefits of openness must be balanced against the need to protect patient privacy. Wood described 3 different approaches to sharing individual patient-level data (IPD):
- open access: anyone can access data with a simple online data use agreement
- managed access: individuals can request access to data for scientific purposes, requests are reviewed and, if granted, data are available in a secure environment supported by a clear legal framework
- restricted access: invitation-only access, limited to those who provide their own data.
While restricted access may not allow the level of openness desired to move science forward, managed access (such as that provided by Vivli and other data repositories) allows data sharing in a secure, isolated research environment that minimises risks to patient identification.
Wood presented 5 considerations for data sharing across industry and academia.
- Why should my organisation share its data?
Alongside the commitments made by pharmaceutical industry bodies, there are ethical obligations to trial participants who expect data to be shared and reused (assuming that adequate safeguards are in place and that this is not done solely for commercial gain). Sharing IPD can leverage trial participants’ contributions to answer multiple scientific questions, preventing repetitive trials and enabling new discovery while building trust in clinical research. As noted in the previous talk, journals may mandate data sharing, and in fact, data sharing statements may influence editorial decisions.
- What are the key components of a data sharing program?
From a repository perspective, there are 3 key elements to consider: policy, mechanism and resources (to respond to queries and fulfil requests). A transparent policy must be publicly available, and consider study dates, data formats/languages, product approval status, legal/contractual constraints and anonymisation issues. The policy must also specify who will approve requests.
- When should we begin a program?
The International Committee of Medical Journal Editors (ICMJE) now requires data sharing plans to be provided at trial registration, while the Institute of Medicine recommends that the underlying data are shared no later than 6 months after publication, 18 months after product abandonment, or 30 days after regulatory approval. Data packages should contain linked study documentation, including the study protocol, data dictionary, statistical analysis plan and clinical study report, alongside de-identified/anonymised IPD.
- How can we manage a data sharing program?
Wood noted that many institutions (such as universities) manage data sharing in-house, which requires policies, a dedicated team, and a suitable platform to be built, managed and updated. Alternatively, a trusted partner can be used to manage and assist with these areas.
- What can partners like Vivli do for us?
Wood briefly outlined Vivli, a non-profit organisation focused on sharing clinical research data through its membership-based platform, which enables industry, academic and funder members to share and access data related to any disease area, country or sponsor (currently covering over 5,700 trials).
In the subsequent panel discussion, Wood noted that data sharing is still relatively new, and that the amount of data shared to date is more limited than one might imagine, particularly at the IPD level. However, Wood’s outlook was positive, noting that new tools are continually being developed to facilitate data sharing and that we are on the cusp of being able to extract greater benefits from big data.
Panel discussion
To finish session 2, Tim Koder (Oxford PharmaGenesis and Open Pharma) moderated a panel discussion. Key points included:
- Although there is more to be done by publishers to share the data they collect to benefit the scientific community, Hrynaszkiewicz noted that many publishers have engaged with OpenCitations and the Initiative for Open Abstracts (I4OA).
- Long timelines for the publication of real-world evidence may negate some of the benefits of being able to conduct analyses of pre-existing real-world data faster than randomised controlled trials. Khalid noted that robust study designs, approaching journals directly and the traction real-world evidence is gaining with regulatory authorities can help to address these challenges.
- Research quality is now a distinct discipline in academia, moving from the ‘epidemiology’ phase (what affects research quality) to the ‘intervention’ phase (testing potential changes to ways of working, such as open research and different publishing systems, to see whether they have a positive impact on research quality, without unintended consequences). Munafò highlighted that cultural change can occur very quickly, particularly when people see the value in it, as seen with the embedding of meta-analysis and evidence-based medicine into the scientific process following the Cochrane Collaboration.
- Industry stakeholders discussed how, through collaborative research, pharma funders can embed their ways of working in the academic community via contracts, research agreements and discussions that set the tone for partnerships. Slávka Baróniková (Conference Director of EMWA and Co-Chair of EMWA’s Medical Communications Special Interest Group [SIG]) added that she has already seen a shift in the approach of academic collaborators, noting that the more they work together, the greater their understanding and adoption of these principles within their own institutions; valuable insights into the hurdles academics face can also be gained. Munafò noted that while pharma processes, often resulting from regulation, may be counter-cultural for academia, tailored elements of this more directive approach are perhaps needed in this setting. This is clearly an area in which academia can follow industry’s lead.
——————————————————–
Written as part of a Media Partnership between EMWA and The Publication Plan, by Caroline Greenwood BSc and Beatrice Tyrrell DPhil from Aspire Scientific, an independent medical writing agency led by experienced editorial team members, and supported by MSc and/or PhD-educated writers.
——————————————————–
With thanks to our sponsor, Aspire Scientific Ltd