Improving evidence-based practice through preregistration of applied research: Barriers and recommendations

Authors : Thomas Rhys Evans, Peter Branney, Andrew Clements, Ella Hatton

Preregistration is the practice of publicly sharing plans for central components of the research process before access to, or collection of, data. Within the context of the replication crisis, open science practices like preregistration have been pivotal in facilitating greater transparency in research.

However, such practices have been applied almost exclusively to basic academic research, with rare consideration of their relevance to applied and consultancy-based research. This is particularly problematic because such research is typically reported with very low levels of transparency and accountability, despite being disseminated as influential gray literature to inform practice.

Evidence-based practice is best served by an appreciation of multiple sources of quality evidence; thus, the current review considers the potential of preregistration to improve both the accessibility and credibility of applied research toward more rigorous evidence-based practice.

The current three-part review outlines, first, the opportunities of preregistration for applied research, and second, three barriers – practical challenges, stakeholder roles, and the suitability of preregistration.

Last, this review makes four recommendations to overcome these barriers and maximize the opportunities of preregistration for academics, industry, and the structures they are held within – changes to preregistration templates, new types of templates, education and training, and recognition and structural changes.

URL : Improving evidence-based practice through preregistration of applied research: Barriers and recommendations

DOI : https://doi.org/10.1080/08989621.2021.1969233

Knowledge and Attitudes Among Life Scientists Toward Reproducibility Within Journal Articles: A Research Survey

Authors : Evanthia Kaimaklioti Samota, Robert P. Davey

We constructed a survey to understand how authors and scientists view the issues around reproducibility, focusing on interactive elements, such as interactive figures embedded within online publications, as a solution for enabling the reproducibility of experiments.

We report the views of 251 researchers, comprising authors who have published in eLife and researchers who work at the Norwich Biosciences Institutes (NBI). The survey also outlines the extent to which researchers engage in reproducing experiments themselves. Currently, there is an increasing range of tools that attempt to address the production of reproducible research by making code, data, and analyses available to the community for reuse. We wanted to collect information about attitudes at the consumer end of the spectrum, where life scientists interact with research outputs to interpret scientific results.

Static plots and figures within articles are a central part of this interpretation, and therefore we asked respondents to consider various features for an interactive figure within a research article that would allow them to better understand and reproduce a published analysis.

The majority (91%) of respondents reported that when authors describe their research methodology (methods and analyses) in detail, published research can become more reproducible. Respondents also believe that interactive figures in published papers would be beneficial to themselves, to the papers they read, and to their own readers.

Whilst interactive figures are one potential solution for consuming the results of research more effectively to enable reproducibility, we also review the equally pressing technical and cultural demands on researchers that need to be addressed to achieve greater success in reproducibility in the life sciences.

URL : Knowledge and Attitudes Among Life Scientists Toward Reproducibility Within Journal Articles: A Research Survey

DOI : https://doi.org/10.3389/frma.2021.678554

Open science, the replication crisis, and environmental public health

Author : Daniel J. Hicks

Concerns about a crisis of mass irreplicability across scientific fields (“the replication crisis”) have stimulated a movement for open science, encouraging or even requiring researchers to publish their raw data and analysis code.

A recently proposed rule at the US Environmental Protection Agency (US EPA) would have imposed a strong open data requirement. The rule prompted significant public discussion about whether open science practices are appropriate for fields of environmental public health.

The aims of this paper are to assess (1) whether the replication crisis extends to fields of environmental public health, and (2) whether, in general, open science requirements can address the replication crisis.

There is little empirical evidence for or against mass irreplicability in environmental public health specifically. Without such evidence, strong claims about whether the replication crisis extends to environmental public health – or not – seem premature.

Once three concepts – reproducibility, replicability, and robustness – are distinguished, it is clear that open data initiatives can promote reproducibility and robustness but do little to promote replicability.

I conclude by reviewing some of the other benefits of open science and offering some suggestions for funding streams to mitigate the costs of adopting open science practices in environmental public health.

URL : Open science, the replication crisis, and environmental public health

DOI : https://doi.org/10.1080/08989621.2021.1962713

Replication and trustworthiness

Authors : Rik Peels, Lex Bouter

This paper explores various relations that exist between replication and trustworthiness. After defining “trust”, “trustworthiness”, “replicability”, “replication study”, and “successful replication”, we consider, respectively, how trustworthiness relates to each of the three main kinds of replication: reproductions, direct replications, and conceptual replications.

Subsequently, we explore how trustworthiness relates to the intentionality of a replication. After that, we discuss whether the trustworthiness of research findings depends merely on evidential considerations or also on what is at stake.

We conclude by adding replication to the other issues that should be considered in assessing the trustworthiness of research findings: (1) the likelihood of the findings before the primary study was done (that is, the prior probability of the findings), (2) the study size and the methodological quality of the primary study, (3) the number of replications that were performed and the quality and consistency of their aggregated findings, and (4) what is at stake.

URL : Replication and trustworthiness

DOI : https://doi.org/10.1080/08989621.2021.1963708

Systematizing Confidence in Open Research and Evidence (SCORE)

Authors : Nazanin Alipourfard, Beatrix Arendt, Daniel M. Benjamin, Noam Benkler, Michael Bishop, Mark Burstein, Martin Bush, James Caverlee, Yiling Chen, Chae Clark, Anna Dreber Almenberg, Tim Errington, Fiona Fidler, Nicholas Fox, Aaron Frank, Hannah Fraser, Scott Friedman, Ben Gelman, James Gentile, C Lee Giles, Michael B Gordon, Reed Gordon-Sarney, Christopher Griffin, Timothy Gulden, et al.

Assessing the credibility of research claims is a central, continuous, and laborious part of the scientific process. Credibility assessment strategies range from expert judgment to aggregating existing evidence to systematic replication efforts.

Such assessments can require substantial time and effort. Research progress could be accelerated if there were rapid, scalable, accurate credibility indicators to guide attention and resource allocation for further assessment.

The SCORE program is creating and validating algorithms to provide confidence scores for research claims at scale. To investigate the viability of scalable tools, teams are creating: a database of claims from papers in the social and behavioral sciences; expert and machine-generated estimates of credibility; and evidence of reproducibility, robustness, and replicability to validate the estimates.

Beyond the primary research objective, the data and artifacts generated from this program will be openly shared and provide an unprecedented opportunity to examine research credibility and evidence.

URL : Systematizing Confidence in Open Research and Evidence (SCORE)

DOI : https://doi.org/10.31235/osf.io/46mnb

Science Communication in the Context of Reproducibility and Replicability: How Nonscientists Navigate Scientific Uncertainty

Author : Emily L. Howell

Scientists stand to gain in obvious ways from recent efforts to develop robust standards for and mechanisms of reproducibility and replicability. Demonstrations of reproducibility and replicability may provide clarity with respect to areas of uncertainty in scientific findings and translate into greater impact for the research.

But when it comes to public perceptions of science, it is less clear what gains might come from recent efforts to improve reproducibility and replicability. For example, could such efforts improve public understandings of scientific uncertainty?

To gain insight into this issue, we would need to know how public views of science are shaped by media coverage of reproducibility and replicability, but none of the emergent research on public views of reproducibility and replicability in science considers that question.

We do, however, have the recent report on Reproducibility and Replicability in Science issued by the National Academies of Sciences, Engineering, and Medicine, which provides an overview of public perceptions of uncertainty in science.

Here, I adapt that report to begin a conversation between researchers and practitioners, with the aim of expanding research on public perceptions of scientific uncertainty. This overview draws upon research on risk perception and science communication to describe how the media influences the communication and perception of uncertainty in science.

It ends by presenting recommendations for communicating scientific uncertainty as it pertains to issues of reproducibility and replicability.

URL : Science Communication in the Context of Reproducibility and Replicability: How Nonscientists Navigate Scientific Uncertainty

Original location : https://hdsr.mitpress.mit.edu/pub/3g7u601s/release/2