“The Publication Plan” is now also on Facebook


“The Publication Plan” has launched its official Facebook page, as a new platform to share news and information with authors, researchers, medical writers and communications professionals, pharmaceutical industry managers, medical journal editors and publishers (amongst others).


Final rule clarifies and expands results reporting requirements of the FDA Amendments Act

It is a legal requirement for those responsible for certain applicable clinical trials of FDA-regulated products to register the studies on ClinicalTrials.gov and report results within a defined timeframe, as set out in Title VIII of the FDA Amendments Act (FDAAA). While registration of studies is now common practice, compliance with the requirement to report trial results remains low. This is thought to be due to a lack of understanding of the statutory requirement and to whom it applies. The final rule has now been developed by the Department of Health and Human Services, in consultation with companies, trade associations, academic institutions and the general public, and acts to clarify and expand the requirement for results reporting. The key issues of interest within the final rule were summarised in a special report by Zarin et al. in the New England Journal of Medicine this month.

The authors explain how the final rule clarifies terminology used in the statutory requirements, including the terms “applicable clinical trial” and “controlled” study. In addition, the authors describe the ways in which the rule improves transparency by, for example, expanding the requirements for reporting results of trials for unapproved products, baseline characteristics and adverse event information. Similarly, the rule now defines the level of specification required for outcome measures and stipulates that the full, up-to-date protocol and statistical analysis plan should be submitted at the time of results reporting. The authors go on to discuss how the National Library of Medicine at the National Institutes of Health (NIH), which manages the ClinicalTrials.gov registry, will post submitted records within 30 days, highlighting any potential quality control concerns, and will also post registration information for trials of unapproved devices.

By removing ambiguity, the rule, which comes into effect on January 18, 2017, aims to take the decision of whether to post study results out of the hands of sponsors and others responsible for clinical trials. The authors conclude that, as the majority of clinical trials run by US academic medical centres will now fall under the FDAAA, they hope sponsors will take the opportunity not just to meet the minimum requirements set out in the act, but to go beyond them.

ClinicalTrials.gov is offering a series of free live webinars to provide responsible parties with further information about the final rule.



Summary by Alice Wareham, PhD from Aspire Scientific.

Data availability information now required by 13 Nature titles

In another step towards improved transparency, Nature and 12 other Nature titles have announced a new policy that requires authors to include a data availability statement in all papers reporting original research.

These statements, trialled previously in five Nature journals, aim to provide information to the reader on whether they can access the data “necessary to interpret, replicate and build on the methods or findings reported in the article” and how they can go about doing this. The policy also encourages authors to cite data sets with an assigned digital object identifier. It is thought that both approaches will not only improve the reuse of published research, but also increase recognition for those who create and share data.

The initial trial, which took place earlier this year, highlighted differences between disciplines in the awareness and openness to data sharing, and illustrated that a lack of appropriate data repositories can be a barrier to adopting the practice. It is hoped that implementation of the policy across all Nature journals by early next year will assist in promoting transparent data sharing, following similar moves by other journals and also a number of research and funding bodies. This policy is part of a larger project by the publisher, Springer Nature, to standardise data policies across all of its journals.



Summary by Alice Wareham, PhD from Aspire Scientific.

Defining ghostwriting: ISMPP and GAPP respond to controversial BMJ article

The British Medical Journal (BMJ) recently published an article titled ‘Ghostwriting: the importance of definition and its place in contemporary drug marketing’. In the article, the author, Alastair Matheson, describes the many ways he believes the pharmaceutical industry influences medical publications of industry-sponsored research for marketing purposes and claims there has been an active rebranding of ghostwriting that has contributed to this. He controversially states, “The use of writers, regardless of whether they are called ghosts, is just one of several options for building commercial perspectives into academic literature, then spinning their attribution to strengthen credibility.”

The claims made in this article have led to Rapid Responses being published on the BMJ website from the International Society for Medical Publication Professionals (ISMPP) and the Global Alliance of Publication Professionals (GAPP), who both categorically refute them. In the response from ISMPP (published on 12 September 2016), the not-for-profit organisation contend that rather than a rebranding, “there has been a positive evolution of transparency and completeness in medical publications reporting industry research”. ISMPP reiterate their position that ghostwriting is unacceptable, but that they fully support the transparent disclosure of the use of professional medical writers in the development of publications. The response goes on to say that current disclosure practices have progressed over the last decade and are now commonplace in medical literature, resulting in improved transparency.

This response was fully supported by GAPP who published their own reply on 15 September 2016, restating that, because their involvement is transparent, professional medical writers are not ghostwriters. GAPP refute Matheson’s claims that the International Committee of Medical Journal Editors (ICMJE) criteria bar writers from authorship, pointing out that the criteria recommend that the contribution of medical writers is appropriately disclosed in the acknowledgements unless all four ICMJE criteria are met, in which case the writer can be given co-authorship. In response to Matheson’s claim that these disclosures of involvement are often in “the small print” of the acknowledgements, GAPP contend that this is in fact controlled by the journal and publishers, and therefore outside the influence of industry or medical writers. GAPP also reject the claim that medical writers are subject to industry leverage over content, presenting evidence that on the whole, this is not the case.

These two associations have striven to set standards in the ethical reporting of industry research and to correct misinformation presented in the media about the medical writing profession. It is therefore unsurprising that both felt it necessary to respond to this controversial article, which suggests that perhaps there is still work to be done.



Summary by Alice Wareham, PhD from Aspire Scientific.

Interpreting clinical trial results: is a positive primary outcome good enough?

In a recent review in the New England Journal of Medicine, Stuart Pocock and Gregg Stone take a close look at the evaluation of “positive” clinical trials, providing readers with a strategy to effectively interpret the results from such trials.

The authors recognise that in-depth examination of trial data is necessary to determine whether the findings provide sufficient evidence to change clinical practice. They propose that answering a set of key questions may assist in this process:

1. Does a P value of <0.05 provide strong enough evidence?
A significance level of 5% for the primary efficacy outcome is the minimum requirement for a trial to be declared “positive”. However, achieving this level of significance may not always represent sufficiently strong evidence of efficacy, and should prompt deeper inspection of the secondary outcomes and how the study was conducted.

2. What is the magnitude of treatment benefit?
Both the relative and absolute treatment effects should be examined, together with the extent of uncertainty indicated by the 95% confidence interval; the treatment difference should be large by all of these measures.

3. Is the primary outcome clinically important (and internally consistent)?
For some diseases, a surrogate primary outcome measure is used to indicate a clinical outcome. Even if this outcome is positive, an analogous effect on important clinical measures, such as mortality, may not be evident. Further to this, positive composite primary outcomes require careful inspection to determine which elements are driving the result.

4. Are secondary outcomes supportive?
Positive, pre-specified secondary outcomes can improve confidence in the overall “positivity” of a trial, while negative secondary outcomes can cast doubt on the primary outcome result.

5. Are the principal findings consistent across important subgroups?
Relative and absolute treatment effects may vary according to patient characteristics. Importantly, there may be subgroups of patients in a “positive” trial that do not benefit from the new treatment and will need protecting from an ineffective (or harmful) treatment.

6. Do concerns about safety counterbalance positive efficacy?
Whether there are any safety concerns that may offset efficacy benefits should be assessed. Absolute benefits and risks should be presented as differences in percentages, and number needed to treat analyses may help to assess clinical benefit.

7. Is the efficacy–safety balance patient-specific?
Statistical and modelling techniques may be required to assess the trade-off between efficacy and safety for different patient populations.

8. Is the trial large enough to be convincing?
Small trials lack statistical power, warranting cautious interpretation.

9. Was the trial stopped early?
An interim estimate of the treatment effect may, through random variation during the trial, be high relative to the true effect. Stopping a trial early may therefore exaggerate treatment efficacy and also reduce data collection for important secondary and safety outcomes.

10. Are there flaws in trial design and conduct?
Biases in the design and conduct of the trial, for example a lack of blinding, may negate any benefit substantiated by a highly significant positive primary outcome.

11. Do the findings apply to my patients?
The eligibility criteria of a trial should be scrutinised to check whether the findings can be generalised to other patients. Results from studies conducted at single centres should be interpreted with caution as centre-specific effects and geographical location may also affect generalisability. Moreover, concurrent advances in care may reduce the relevance of the findings to contemporary clinical practice.
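The absolute-benefit and number needed to treat arithmetic behind points 2 and 6 can be sketched in a few lines of code. This is a minimal illustration using hypothetical event counts, not data from any trial cited in the review:

```python
import math

def trial_metrics(events_treat, n_treat, events_ctrl, n_ctrl):
    """Absolute and relative effect measures discussed in points 2 and 6.

    Inputs are hypothetical event counts and arm sizes, chosen purely
    for illustration.
    """
    p_t = events_treat / n_treat   # event rate, treatment arm
    p_c = events_ctrl / n_ctrl     # event rate, control arm
    arr = p_c - p_t                # absolute risk reduction
    rr = p_t / p_c                 # relative risk
    nnt = 1 / arr                  # number needed to treat
    # 95% confidence interval for the risk difference
    # (simple normal approximation)
    se = math.sqrt(p_t * (1 - p_t) / n_treat + p_c * (1 - p_c) / n_ctrl)
    ci = (arr - 1.96 * se, arr + 1.96 * se)
    return {"ARR": arr, "RR": rr, "NNT": nnt, "ARR_95CI": ci}

# e.g. 80/1000 events on treatment vs 100/1000 on control
m = trial_metrics(80, 1000, 100, 1000)
print(f"ARR={m['ARR']:.3f}  RR={m['RR']:.2f}  NNT={m['NNT']:.0f}")
```

Note how a seemingly large relative effect (a 20% relative risk reduction in this hypothetical example) corresponds to an absolute risk reduction of only 2 percentage points, and that a confidence interval spanning zero would undermine a “positive” headline result — exactly the distinction points 2 and 6 ask readers to draw.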

Pocock and Stone provide examples from published clinical trials in cardiovascular disease to illustrate each of the above points, which can be easily applied to other therapy areas.



Summary by Louise Niven, DPhil from Aspire Scientific.

SmartFigures Lab launched at The EMBO Meeting 2016

At The EMBO Meeting 2016, held in Germany earlier this week, EMBO and John Wiley and Sons, Inc. launched the ‘SmartFigures Lab’, a prototype online publishing website. SmartFigures are interactive figures that link data to biological databases and to related data across papers, enabling users to navigate across literature through interconnected figures. The demo currently includes approximately 8000 SmartFigures from 300 papers and has been built on EMBO’s SourceData platform. Rather than indexing keywords in an article, which may provide a subjective interpretation of the results, SourceData directly searches experimental data and their inter-relations. Over 15,000 experiments from papers across 23 journals have been tagged by SourceData to date.

Thomas Lemberger, SourceData Project Leader at EMBO, commented that “By linking results between papers into a network of related information – a scientific ‘knowledge graph’ – SourceData applications such as SmartFigures may open new ways for scientists to work with scientific articles as data-rich research tools.”

The project is part of an ongoing 8-year partnership between Wiley and EMBO.


Summary by Louise Niven, DPhil from Aspire Scientific.

Wellcome launches new open access requirements for publishers

Wellcome have issued a set of requirements for open access publishing, which will come into force on April 1 2017. The policy outlines the service that publishers must provide to receive an article processing charge (APC) from the charity, focussing on three key services – depositing, licensing and invoicing. It states that “when Wellcome funds are used to pay an APC the article must be deposited, at the time of publication, in PubMed Central (PMC)”. Further to this, updates must be made to PMC if papers are corrected or retracted and all deposited articles must be made available under the Creative Commons Attribution (CC-BY) Licence. Publishers must also have a publicly available policy setting out their approach and criteria for refunding APC payments.

Robert Kiley, Head of Digital Services at Wellcome, explains how, in setting out their expectations of publishers in a more detailed way, Wellcome are “taking the opportunity to: both introduce a small number of new requirements and make previously implicit requirements explicit”.

Wiley, Springer Nature, Oxford University Press, the Royal Society and PLOS are among the publishers that have already committed to the requirements; other publishers that currently provide, or intend to provide, the services outlined by Wellcome have until 16 December 2016 to sign up.

The publisher requirements follow an analysis by Wellcome, which showed that during 2014-2015, 30% of Wellcome and Charity Open Access Fund (COAF) articles for which an APC was paid were non-compliant with their open access policies.

COAF members Cancer Research UK, British Heart Foundation and Parkinson’s UK will also apply the same requirements to outcomes of research they have funded. For the Bill & Melinda Gates Foundation, the requirements are due to come into effect in January 2017.



Summary by Louise Niven, DPhil from Aspire Scientific.

“Sting” operation exposes predatory publisher


Predatory journals exploit the open-access model, charging authors publication fees in return for fast publication, without the associated editorial and publishing services expected from legitimate journals. The number of articles published in predatory journals rose almost eight-fold between 2010 and 2014, along with a similar rise in the number of journals themselves.

A poor quality website or unknown editorial board may give away the identity of a predatory journal. However, the professional appearance of some journals can attract less experienced researchers, who may be vulnerable to their invitations to publish. Beall’s list provides a catalogue of ‘potential, possible, or probable predatory scholarly open-access publishers’, to help researchers navigate the predatory publishing phenomenon.

Previously, authors have submitted ‘hoax’ papers in order to expose predatory publishers. In a recent article, Retraction Watch recounts how, in 2014, Hatixhe Latifi-Pupovci, a researcher at the University of Pristina in Kosovo, submitted a previously published paper in order to expose the poor credibility and lack of peer review at Medical Archives. The paper was accepted less than two months after submission, and following its publication Latifi-Pupovci was sent a reminder to pay the 250 EUR publication fee, which she never did. After Latifi-Pupovci alerted faculty and students at her university to the situation and her decision to renounce the paper, the editor-in-chief, Izet Masic, found out about the sting operation. He went on to accuse the author of plagiarism in an editorial, and it took nearly two years (until June 2016) for an official retraction notice to be issued. The notice does not explain any issues with the paper, nor does it indicate that the author had decided to renounce her submission. In his accompanying editorial, Masic makes a case for the importance of peer review.

While Retraction Watch acknowledge that this particular case may be ‘uniquely jumbled’, they cite other examples of journals publishing hoax papers online, exposing the problems within academic publishing. However, as the use of stings and hoaxes is controversial, alternative approaches may be required to curb predatory publishing activity in the future.


Summary by Louise Niven, DPhil from Aspire Scientific.