A Trust Framework for Online Research Data Services

Authors : Malcolm Wolski, Louise Howard, Joanna Richardson

There is worldwide interest in the potential of open science to increase the quality, impact, and benefits of science and research. More recently, attention has been focused on aspects such as transparency, quality, and provenance, particularly in regard to data.

For industry, citizens, and other researchers to participate in the open science agenda, further work needs to be undertaken to establish trust in research environments.

Based on a critical review of the literature, this paper examines the issue of trust in an open science environment, using virtual laboratories as the focus for discussion. A trust framework, which has been developed from an end-user perspective, is proposed as a model for addressing relevant issues within online research data services and tools.


DOI : http://dx.doi.org/10.3390/publications5020014

Replicability and Reproducibility in Comparative Psychology

Author : Jeffrey R. Stevens

Psychology faces a replication crisis. The Reproducibility Project: Psychology sought to replicate the effects of 100 psychology studies. Though 97% of the original studies produced statistically significant results, only 36% of the replication studies did so (Open Science Collaboration, 2015).

This inability to replicate previously published results, however, is not limited to psychology (Ioannidis, 2005). Replication projects in medicine (Prinz et al., 2011) and behavioral economics (Camerer et al., 2016) resulted in replication rates of 25% and 61%, respectively, and analyses in genetics (Munafò, 2009) and neuroscience (Button et al., 2013) question the validity of studies in those fields. Science, in general, is reckoning with challenges in one of its basic tenets: replication.

Comparative psychology also faces the grand challenge of producing replicable research. Though social psychology has borne the brunt of most of the critique regarding failed replications, comparative psychology suffers from some of the same problems faced by social psychology (e.g., small sample sizes).

Yet, comparative psychology follows the methods of cognitive psychology by often using within-subjects designs, which may buffer it from replicability problems (Open Science Collaboration, 2015). In this Grand Challenge article, I explore the shared and unique challenges of and potential solutions for replication and reproducibility in comparative psychology.

URL : http://journal.frontiersin.org/article/10.3389/fpsyg.2017.00862/full

TrueReview: A Platform for Post-Publication Peer Review

Authors : Luca de Alfaro, Marco Faella

In post-publication peer review, scientific contributions are first published in open-access forums, such as arXiv or other digital libraries, and are subsequently reviewed and possibly ranked and/or evaluated.

Compared to the classical process of scientific publishing, in which review precedes publication, post-publication peer review leads to faster dissemination of ideas and publicly available reviews. The chief concern in post-publication reviewing is eliciting high-quality, insightful reviews from participants.

We describe the mathematical foundations and structure of TrueReview, an open-source tool we propose to build in support of post-publication review.

In TrueReview, the motivation to review is provided via an incentive system that promotes reviews and evaluations that are both truthful (they turn out to be correct in the long run) and informative (they provide significant new information).

TrueReview organizes papers in venues, allowing different scientific communities to set their own submission and review policies. These venues can be manually set up, or they can correspond to categories in well-known repositories such as arXiv.

The review incentives can be used to form a reviewer ranking that can be prominently displayed alongside papers in the various disciplines, thus offering a concrete benefit to reviewers. The paper evaluations, in turn, reward the authors of the most significant papers, both via an explicit paper ranking, and via increased visibility in search.

URL : https://arxiv.org/abs/1608.07878

 

What is open peer review? A systematic review

Author : Tony Ross-Hellauer

Background

“Open peer review” (OPR), despite being a major pillar of Open Science, has neither a standardized definition nor an agreed schema of its features and implementations. The literature reflects this, with a myriad of overlapping and often contradictory definitions.

While the term is used by some to refer to peer review where the identities of both author and reviewer are disclosed to each other, for others it signifies systems where reviewer reports are published alongside articles.

For others it signifies both of these conditions, and for yet others it describes systems where not only “invited experts” are able to comment. For still others, it includes a variety of combinations of these and other novel methods.

Methods

Recognising the absence of a consensus view on what open peer review is, this article undertakes a systematic review of definitions of “open peer review” or “open review”, to create a corpus of 122 definitions.

These definitions are then systematically analysed to build a coherent typology of the many different innovations in peer review signified by the term, and hence provide the precise technical definition currently lacking.

Results

This quantifiable data yields rich information on the range and extent of differing definitions over time and by broad subject area. Quantifying definitions in this way allows us to portray exactly how ambiguously the phrase “open peer review” has been used thus far, for the literature offers a total of 22 distinct configurations of seven traits, effectively meaning that there are 22 different definitions of OPR in the literature.
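The seven-trait analysis lends itself to a simple set-based reading: if each surveyed definition is coded as the subset of traits it invokes, then counting distinct subsets gives the number of configurations. The following Python sketch illustrates that deduplication step on an invented toy corpus; the trait names are paraphrased and the coded definitions below are illustrative, not the review's actual 122-definition corpus.

```python
# Illustrative only: each surveyed definition is coded as the subset of
# seven OPR traits it invokes; distinct subsets are the "configurations".
TRAITS = {
    "open identities", "open reports", "open participation",
    "open interaction", "open pre-review manuscripts",
    "open final-version commenting", "open platforms",
}

# Toy corpus of coded definitions (invented for illustration).
definitions = [
    {"open identities"},
    {"open reports"},
    {"open identities", "open reports"},
    {"open identities"},                      # duplicate coding
    {"open participation", "open reports"},
]

# Deduplicate: two definitions with the same trait subset are one configuration.
configurations = {frozenset(d) for d in definitions}
print(len(configurations))  # distinct configurations in the toy corpus → 4
```

Applied to the review's real corpus, the same reduction is what yields 22 configurations from 122 definitions.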

Conclusions

Based on this work, I propose a pragmatic definition of open peer review as an umbrella term for a number of overlapping ways that peer review models can be adapted in line with the ethos of Open Science, including making reviewer and author identities open, publishing review reports and enabling greater participation in the peer review process.


DOI : http://dx.doi.org/10.12688/f1000research.11369.1

Open Science: What, Why, and How

Authors : Barbara A. Spellman, Elizabeth A. Gilbert, Katherine S. Corker

Open Science is a collection of actions designed to make scientific processes more transparent and results more accessible. Its goal is to build a more replicable and robust science; it does so by using new technologies, altering incentives, and changing attitudes.

The current movement towards open science was spurred, in part, by a recent “series of unfortunate events” within psychology and other sciences.

These events include the large number of studies that have failed to replicate and the prevalence of common research and publication procedures that could explain why.

Many journals and funding agencies now encourage, require, or reward some open science practices, including pre-registration, providing full materials, posting data, distinguishing between exploratory and confirmatory analyses, and running replication studies.

Individuals can practice and encourage open science in their many roles as researchers, authors, reviewers, editors, teachers, and members of hiring, tenure, promotion, and awards committees.

A plethora of resources are available to help scientists, and science, achieve these goals.

URL : https://osf.io/preprints/psyarxiv/ak6jr

Metrics for openness

Authors : David M. Nichols, Michael B. Twidale

The characterization of scholarly communication is dominated by citation-based measures. In this paper we propose several metrics to describe different facets of open access and open research.

We discuss measures to represent the public availability of articles along with their archival location, licenses, access costs, and supporting information. Calculations illustrating these new metrics are presented using the authors’ publications.
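One concrete way to read such metrics is as simple aggregates over a publication list: for example, the fraction of outputs that are publicly readable, plus a breakdown by license. The Python sketch below is a hypothetical illustration of that idea; the record format, field names, and sample data are invented and do not reproduce the paper's actual metrics or calculations.

```python
# Hypothetical sketch of two openness-style measures over a publication
# list: an availability ratio and a license breakdown. The field names
# and sample records are invented for illustration.
from collections import Counter

publications = [
    {"title": "Paper A", "publicly_available": True,  "license": "CC-BY"},
    {"title": "Paper B", "publicly_available": True,  "license": "publisher"},
    {"title": "Paper C", "publicly_available": False, "license": "publisher"},
    {"title": "Paper D", "publicly_available": True,  "license": "CC-BY"},
]

# Fraction of outputs a reader can access without payment.
availability_ratio = (
    sum(p["publicly_available"] for p in publications) / len(publications)
)

# How the outputs are licensed.
license_counts = Counter(p["license"] for p in publications)

print(f"availability: {availability_ratio:.2f}")  # availability: 0.75
print(dict(license_counts))
```

Even this toy version shows why the authors argue for explicit measurement: availability and licensing are independent facets, and a citation count alone captures neither.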

We argue that explicit measurement of openness is necessary for a holistic description of research outputs.

URL : http://hdl.handle.net/10289/10842

Imagining tomorrow’s university: open science and its impact

Authors : Adina Howe, Michael D. Howe, Amy L. Kaleita, D. Raj Raman

As part of a recent workshop entitled “Imagining Tomorrow’s University”, we were asked to visualize the future of universities as research becomes increasingly data- and computation-driven, and identify a set of principles characterizing pertinent opportunities and obstacles presented by this shift.

In order to establish a holistic view, we take a multilevel approach and examine the impact of open science on individual scholars as well as on the university as a whole.

At the university level, open science presents a double-edged sword: when well executed, open science can accelerate the rate of scientific inquiry across the institution and beyond; however, haphazard or half-hearted efforts are likely to squander valuable resources, diminish university productivity and prestige, and potentially do more harm than good. We present our perspective on the role of open science at the university.


DOI : http://dx.doi.org/10.12688/f1000research.11232.1