
How research can be shared effectively to advance science: insights from the Center for Open Science

In recent years there has been increasing support for data sharing and transparency within scientific publishing, with the aims of improving the quality of research and expediting scientific advancements. Indeed, the need to make research freely and rapidly available has become particularly apparent during the ongoing COVID-19 pandemic. The Publication Plan talked to David Mellor, Director of Policy Initiatives at the Center for Open Science, to find out how the organisation is working to make open science an achievable goal for all.

For anyone who is not already familiar with the Center for Open Science (COS), could you explain what the organisation is aiming to achieve?

“We are a non-profit technology and culture change company located in Charlottesville, VA, USA. Our mission is to increase trust and credibility in scientific research through transparency and rigour.”

Briefly, what are your key strategies to realise those goals?

“We have three main strategies for achieving our mission.

  • First, we seek evidence to understand the barriers to reproducibility in scientific claims. We do this through reproducibility projects in psychology, cancer biology, and the social sciences that attempt to replicate previously reported findings. Challenges in this process include a lack of clarity about precisely how the original research was conducted (which can be fixed through open data and materials) and false positives that result from unreported flexibility in data analysis [see the sketch after this list].
  • Second, we advocate for policies and educate researchers on practices that address the barriers identified above. Our main set of recommendations is covered in the Transparency and Openness Promotion (TOP) Guidelines. These include specific guidance that journals, publishers, funders, and universities can follow to make science more open and reproducible. Beyond open data, they include recommendations on code, materials, reporting guidelines, preregistration, and replication studies.
  • Finally, we build and maintain tools to enable the practices for which we advocate. The Open Science Framework (OSF) is an open-source web platform built to support research projects while connecting directly to built-in data repositories, registries, and preprint servers so that all of the materials and processes that eventually get summarised in a typical journal article are connected and preserved. This is the real evidence that is too often lost in the scholarly workflow.”
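
The false positives mentioned in the first strategy are easy to demonstrate. The short Python sketch below (illustrative only, not COS code; the sample sizes, the number of outcomes, and the SciPy-based t-test are arbitrary assumptions) simulates studies of pure noise in which ten outcomes are analysed but only the best p-value is reported:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

N_STUDIES = 2000   # simulated studies
N_PER_GROUP = 30   # participants per group
N_OUTCOMES = 10    # outcomes analysed per study: the unreported flexibility

false_positive_studies = 0
for _ in range(N_STUDIES):
    # Pure noise: by construction there is no true effect on any outcome.
    group_a = rng.normal(size=(N_OUTCOMES, N_PER_GROUP))
    group_b = rng.normal(size=(N_OUTCOMES, N_PER_GROUP))
    # The questionable practice: test every outcome, keep only the best p-value.
    best_p = min(stats.ttest_ind(a, b).pvalue for a, b in zip(group_a, group_b))
    if best_p < 0.05:
        false_positive_studies += 1

# With ten shots at alpha = 0.05, roughly 1 - 0.95**10 ≈ 40% of these null
# studies come out "significant", versus the 5% a single pre-specified test implies.
print(f"'Significant' null studies: {false_positive_studies / N_STUDIES:.0%}")
```

Preregistering the analysis plan, as the TOP Guidelines recommend, removes exactly this hidden multiplicity.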

Could you tell us a bit about your background, how you became involved with open science and what your role as Director of Policy Initiatives entails?

“My background is in behavioural ecology and citizen science. After receiving my doctorate from Rutgers University, I did a post-doc at Virginia Tech working with citizen scientists in state parks and agricultural areas. Their projects involved studying the effects of different land management practices on water quality and controlling invasive plant species in threatened habitats around the state. We built tools to help them collaborate online and work with land managers. This type of collaboration and data sharing is what piqued my interest in related open science practices, and when a position became available at the COS I jumped on board! At the time, we were promoting new habits for the basic sciences, primarily preregistration, through an education and award campaign, the Prereg Challenge.”

The COS highlights that changes in researcher incentives, scientific infrastructure, and scholarly communication business models are needed to make open science more widespread. For each of these aspects, could you briefly highlight the main issues with current practices and explain how they could be improved?

“I think that incentives are the toughest nut to crack when it comes to improving scientific culture (as Alison Mudditt recently described).

Too much of academic culture is focused on imperfect measures of prestige.

Journals with a high mean citation rate (ie a high journal impact factor) are assumed to be the best and most prestigious, and having a manuscript accepted by such journals is necessary for career advancement. But why should the opinion of a few editors and reviewers, who make decisions in very opaque ways that are not subject to any scrutiny, have any more sway than a more open evaluation process? It’s very unscientific, and it leads people to do whatever they think is necessary to appease ‘Reviewer 2’. Likewise, securing funding is subject to the whims of a panel of folks who have to rank dozens of fabulous research projects. The hyper-competition and low rate of funding lend a veneer of excellence when, in fact, getting into that 5% funded bucket is more a matter of chance than excellence once the field has been narrowed to the set of acceptable projects. There has to be a better way. And there is.

Scientists value transparency. It’s the only way to really evaluate evidence, claims, and good research design.

The TOP Guidelines provide a framework for evaluating how these organisations work, and that is what we are doing with TOP Factor. It’s a measure of institutional process – how transparently, and with how little bias, does a journal, publisher, or funder operate? That can be measured, and we’re measuring it. Our goal is to include every journal and, eventually, every funder in that database. It provides very clear guidance on what needs to be done to improve policies so that they are focused on rigour, which is valued by science, rather than on impact and prestige, which is of course valued by people but is not an ideal of science!

Supporting open infrastructure is another area where we are active. Frankly, it’s a perennial problem: getting funded to build something new and exciting is (relatively) easy, but maintenance and long-term support require business plans that align with scientific ideals and avoid the common pitfalls of monetising the outputs of science. That requires building coalitions with the stakeholders who value the products and services, and working with them to find the right way to support the tools.”

There has been much discussion in recent years about the push towards open science and the drive for increased transparency. In your experience, has this translated into significant improvements in routine daily practice? Do you think the time will come when all research is openly accessible by default?

“There has been a huge shift. The number of basic scientists who are adding clarity to the process of scientific inquiry through preregistration is skyrocketing, particularly in the social sciences, where ideas about bias and sloppy methods have become better known. There is leadership at major funding organisations, such as the Institute of Education Sciences, that is tackling this head on. But work in future years will focus on connecting these pockets of early uptake and spreading them to more communities. There is a lot of work to be done in the preclinical biomedical sciences to make sure that data sharing is common and that bias is reduced through preregistration, so that open science has the opportunity to succeed and make all of science more efficient and clear.”

In your recent presentation at the ISMPP Annual Meeting, you noted the low reproducibility of research (the ability of others to obtain the same findings as previous studies). What is the main reason for this? How can this situation be improved?

“Being able to replicate empirical research findings is a very old idea that dates back hundreds of years. Essentially, if I make a claim, then others should be able to obtain the same findings if they follow the same steps. If not, it’s just a shallow statement. Now, sometimes the events that surrounded a claim cannot be recreated, so instead we want to see how the data were gathered and analysed. In either case, it’s only scientific if we can follow your logic from data to conclusion. Unfortunately, this concept is not widely applied. It takes time to look under the hood. Doing a study for a second time seems like a waste compared to chasing a new experiment. We’re all busy. But we’ve let those excuses cloud our thinking and come to assume that no one is ever going to check. That’s unfortunate, because it’s changed how we act. Across many fields, we see bad practices such as ‘p-hacking’ and HARKing (hypothesising after the results are known) being widely applied, and this undermines our methods. When most research can’t be replicated by others, it erodes trust. And we can either try to sweep that under the rug, which will just lead to less and less trust, or we can tackle these issues head on.

The expectation that others will do the same work or check our steps should be comforting, not threatening. Besides, if we want our results to ever break out of the Ivory Tower and make a positive difference in the world, we need to take these actions. That’s why the steps in the TOP Guidelines are so critical.”

“We need to know that replicating research will get published and funded so that we’ll have jobs and positions to keep finding new discoveries and confirming them in this way.”

Finally, looking to the future, are you able to share with us details of any upcoming initiatives or projects in the pipeline?

“Most of our work in the coming years will be focused on applying these methods to related disciplines. Preclinical disease research suffers from too much bias and opacity, and there are unique challenges to data sharing and replications in education studies that need to be tackled. We’re working with key communities there to ensure that these processes can result in a fair view of the evidence for all research.”

David Mellor is Director of Policy Initiatives at the Center for Open Science. You can contact David via david@cos.io.

——————————————————–

With thanks to our sponsor, Aspire Scientific Ltd

