Analytic reproducibility in articles receiving open data badges at the journal Psychological Science: an observational study

Authors : Tom E. Hardwicke, Manuel Bohn, Kyle MacDonald, Emily Hembacher, Michèle B. Nuijten, Benjamin N. Peloquin, Benjamin E. deMayo, Bria Long, Erica J. Yoon, Michael C. Frank

For any scientific report, repeating the original analyses on the original data should yield the original outcomes. We evaluated analytic reproducibility in 25 Psychological Science articles awarded open data badges between 2014 and 2015.

Initially, 16 (64%, 95% confidence interval [43,81]) articles contained at least one ‘major numerical discrepancy’ (>10% difference) prompting us to request input from original authors.

Ultimately, target values were reproducible without author involvement for 9 (36% [20,59]) articles; reproducible with author involvement for 6 (24% [8,47]) articles; not fully reproducible with no substantive author response for 3 (12% [0,35]) articles; and not fully reproducible despite author involvement for 7 (28% [12,51]) articles.

Overall, 37 major numerical discrepancies remained out of 789 checked values (5% [3,6]), but original conclusions did not appear affected.
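The bracketed values above are 95% confidence intervals for proportions. As an illustration only (the authors' exact method is not stated here and may differ), the interval for 9 reproducible articles out of 25 can be approximated with a Wilson score interval:

```python
import math

def wilson_ci(successes, n, z=1.959964):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - margin, center + margin

# 9 of 25 articles reproducible without author involvement
lo, hi = wilson_ci(9, 25)
print(f"point estimate 36%, CI ~ [{lo:.0%}, {hi:.0%}]")
```

This sketch gives an interval close to, but not identical with, the reported [20, 59]; different interval methods (e.g. Clopper-Pearson, bootstrap) produce somewhat different bounds at small n.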

Non-reproducibility was primarily caused by unclear reporting of analytic procedures. These results highlight that open data alone is not sufficient to ensure analytic reproducibility.

URL : Analytic reproducibility in articles receiving open data badges at the journal Psychological Science: an observational study

DOI : https://doi.org/10.1098/rsos.201494

Improving Opportunities for New Value of Open Data: Assessing and Certifying Research Data Repositories

Author : Robert R. Downs

Investments in research that produce scientific and scholarly data can be leveraged by enabling the resulting research data products and services to be used by broader communities and for new purposes, extending reuse beyond the initial users and purposes for which the data were originally collected.

Submitting research data to a data repository offers opportunities for the data to be used in the future, providing ways for new benefits to be realized from data reuse. Improvements to data repositories that facilitate new uses of data increase the potential for data reuse and for gains in the value of open data products and services that are associated with such reuse.

Assessing and certifying the capabilities and services offered by data repositories provides opportunities for improving the repositories and for realizing the value to be attained from new uses of data.

The evolution of data repository certification instruments is described and discussed in terms of the implications for the curation and continuing use of research data.

URL : Improving Opportunities for New Value of Open Data: Assessing and Certifying Research Data Repositories

DOI : https://doi.org/10.5334/dsj-2021-001

Open Science and the Hype Cycle

Author : George Strawn

The introduction of a new technology or innovation is often accompanied by “ups and downs” in its fortunes. Gartner Inc. defined a so-called hype cycle to describe a general pattern that many innovations experience: technology trigger, peak of inflated expectations, trough of disillusionment, slope of enlightenment, and plateau of productivity.

This article will compare the ongoing introduction of Open Science (OS) with the hype cycle model and speculate on the relevance of that model to OS. Lest the title of this article mislead the reader, be assured that the author believes that OS should happen and that it will happen.

However, I also believe that the path to OS will be longer than many of us had hoped. I will give a brief history of today’s “semi-open” science, define what I mean by OS, describe the hype cycle and where OS now sits on it, and finally speculate on what it will take to traverse the cycle and rise to its plateau of productivity (as described by Gartner).

URL : Open Science and the Hype Cycle

DOI : https://doi.org/10.1162/dint_a_00081

The Rigor and Transparency Index Quality Metric for Assessing Biological and Medical Science Methods

Authors : Joe Menke, Martijn Roelandse, Burak Ozyurt, Maryann Martone, Anita Bandrowski

The reproducibility crisis is a multifaceted problem involving ingrained practices within the scientific community. Fortunately, some causes are addressed by authors’ adherence to rigor and reproducibility criteria, implemented via checklists at various journals.

We developed an automated tool (SciScore) that evaluates research articles based on their adherence to key rigor criteria, including NIH criteria and RRIDs, at an unprecedented scale. We show that, despite steady improvements, less than half of the scoring criteria, such as blinding or power analysis, are routinely addressed by authors. Digging deeper, we examined the influence of specific checklists on average scores.

The average score for a journal in a given year was named the Rigor and Transparency Index (RTI), a new journal quality metric. We compared the RTI with the Journal Impact Factor and found there was no correlation. The RTI can potentially serve as a proxy for methodological quality.

URL : The Rigor and Transparency Index Quality Metric for Assessing Biological and Medical Science Methods

DOI : https://doi.org/10.1016/j.isci.2020.101698

Where Do Early Career Researchers Stand on Open Science Practices? A Survey Within the Max Planck Society

Authors : Daniel Toribio-Flórez, Lukas Anneser, Felipe Nathan deOliveira-Lopes, Martijn Pallandt, Isabell Tunn, Hendrik Windel

Open science (OS) is of paramount importance for the improvement of science worldwide and across research fields. Recent years have witnessed a transition toward open and transparent scientific practices, but there is still a long way to go.

Early career researchers (ECRs) are of crucial relevance in the process of steering toward the standardization of OS practices, as they will become the future decision makers of the institutional change that necessarily accompanies this transition. Thus, it is imperative to gain insight into where ECRs stand on OS practices.

Under this premise, the Open Science group of the Max Planck PhDnet designed and conducted an online survey to assess the stance toward OS practices of doctoral candidates from the Max Planck Society.

As one of the leading scientific institutions for basic research worldwide, the Max Planck Society provides a considerable population of researchers from multiple scientific fields, grouped into three sections: biomedical sciences; chemistry, physics, and technology; and human and social sciences.

From an approximate total population of 5,100 doctoral candidates affiliated with the Max Planck Society, the survey collected responses from 568 doctoral candidates. The survey assessed self-reported knowledge, attitudes, and implementation of different OS practices, namely, open access publications, open data, preregistrations, registered reports, and replication studies.

ECRs seemed to hold a generally positive view toward these different practices and to be interested in learning more about them. Furthermore, we found that ECRs’ knowledge and positive attitudes predicted the extent to which they implemented these OS practices, although levels of implementation were rather low in the past. We observed differences and similarities between scientific sections.

We discuss these differences in terms of need and feasibility to apply these OS practices in specific scientific fields, but additionally in relation to the incentive systems that shape scientific communities. Lastly, we discuss the implications that these results can have for the training and career advancement of ECRs, and ultimately, for the consolidation of OS practices.

URL : Where Do Early Career Researchers Stand on Open Science Practices? A Survey Within the Max Planck Society

DOI : https://doi.org/10.3389/frma.2020.586992

Deep Learning in Mining Biological Data

Authors : Mufti Mahmud, M. Shamim Kaiser, T. Martin McGinnity, Amir Hussain

Recent technological advancements in data acquisition tools have allowed life scientists to acquire multimodal data from different biological application domains. Categorized into three broad types (i.e. images, signals, and sequences), these data are enormous in volume and complex in nature.

Mining such enormous amounts of data for pattern recognition is a major challenge and requires sophisticated data-intensive machine learning techniques. Artificial neural network-based learning systems are well known for their pattern recognition capabilities, and lately their deep architectures—known as deep learning (DL)—have been successfully applied to solve many complex pattern recognition problems.

To investigate how DL—especially its different architectures—has been utilized in mining biological data of those three types, a meta-analysis was performed and the resulting resources were critically analysed, examining the applications of different DL architectures to data from diverse biological domains.

This is followed by an exploration of available open access data sources pertaining to the three data types along with popular open-source DL tools applicable to these data. Also, comparative investigations of these tools from qualitative, quantitative, and benchmarking perspectives are provided.

Finally, some open research challenges in using DL to mine biological data are outlined and a number of possible future perspectives are put forward.

URL : Deep Learning in Mining Biological Data

DOI : https://doi.org/10.1007/s12559-020-09773-x

Questionable Research Practices and Open Science in Quantitative Criminology

Authors : Jason Chin, Justin Pickett, Simine Vazire, Alex Holcombe

Objectives

Questionable research practices (QRPs) lead to incorrect research results and contribute to irreproducibility in science. Researchers and institutions have proposed open science practices (OSPs) to improve the detectability of QRPs and the credibility of science. We examine the prevalence of QRPs and OSPs in criminology, and researchers’ opinions of those practices.

Methods

We administered an anonymous survey to authors of articles published in criminology journals. Respondents self-reported their own use of 10 QRPs and 5 OSPs. They also estimated the prevalence of use by others, and reported their attitudes toward the practices.

Results

QRPs and OSPs are both common in quantitative criminology, about as common as they are in other fields. Criminologists who responded to our survey support using QRPs in some circumstances, but are even more supportive of using OSPs.

We did not detect a significant relationship between methodological training and either QRP or OSP use.

Support for QRPs is negatively and significantly associated with support for OSPs. Perceived prevalence estimates for some practices resembled a uniform distribution, suggesting criminologists have little knowledge of the proportion of researchers who engage in certain questionable practices.

Conclusions

Most quantitative criminologists in our sample use QRPs, and many use multiple QRPs. The substantial prevalence of QRPs raises questions about the validity and reproducibility of published criminological research.

We found promising levels of OSP use, albeit lagging behind the levels researchers endorse. The findings thus suggest that additional reforms are needed to decrease QRP use and increase the use of OSPs.

URL : Questionable Research Practices and Open Science in Quantitative Criminology

DOI : https://doi.org/10.31235/osf.io/bwm7s