Transparent, Reproducible, and Open Science Practices of Published Literature in Dermatology Journals: Cross-Sectional Analysis

Authors : J Michael Anderson, Andrew Niemann, Austin L Johnson, Courtney Cook, Daniel Tritz, Matt Vassar

Background

Reproducible research is a foundational component for scientific advancements, yet little is known regarding the extent of reproducible research within the dermatology literature.

Objective

This study aimed to determine the quality and transparency of the literature in dermatology journals by evaluating publications for the presence of 8 indicators of reproducible and transparent research practices.

Methods

By implementing a cross-sectional study design, we conducted an advanced search of publications in dermatology journals from the National Library of Medicine catalog. Our search included articles published between January 1, 2014, and December 31, 2018.

After generating a list of eligible dermatology publications, we then searched for full text PDF versions by using Open Access Button, Google Scholar, and PubMed. Publications were analyzed for 8 indicators of reproducibility and transparency—availability of materials, data, analysis scripts, protocol, preregistration, conflict of interest statement, funding statement, and open access—using a pilot-tested Google Form.
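To make this kind of extraction concrete, here is a minimal Python sketch of recording the 8 indicators per publication and tallying how often each is reported. The field names and sample records are illustrative assumptions, not the authors' actual Google Form.

```python
# Illustrative sketch of an 8-indicator extraction record and a summary tally.
# Field names are assumptions for illustration, not the study's actual form.
from collections import Counter

INDICATORS = [
    "materials_available",
    "data_available",
    "analysis_scripts_available",
    "protocol_available",
    "preregistered",
    "conflict_of_interest_statement",
    "funding_statement",
    "open_access",
]

def summarize(records):
    """Count, for each indicator, how many publications reported it."""
    counts = Counter()
    for record in records:
        for indicator in INDICATORS:
            if record.get(indicator):
                counts[indicator] += 1
    total = len(records)
    return {ind: (counts[ind], 100 * counts[ind] / total) for ind in INDICATORS}

# Two fictional extracted records, for illustration only
sample = [
    {"data_available": False, "funding_statement": True, "open_access": True},
    {"data_available": True, "conflict_of_interest_statement": True},
]
print(summarize(sample))
```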

Results

After exclusion, 127 studies with empirical data were included in our analysis. Certain indicators were more poorly reported than others. We found that most publications (113, 88.9%) did not provide unmodified, raw data used to make computations, 124 (97.6%) failed to make the complete protocol available, and 126 (99.2%) did not include step-by-step analysis scripts.

Conclusions

The studies in our sample of dermatology journals do not appear to include sufficient detail to be accurately and successfully reproduced in their entirety. Solutions to increase the quality, reproducibility, and transparency of dermatology research are warranted.

More robust reporting of key methodological details, open data sharing, and stricter journal standards for author disclosure of study materials might help improve the climate of reproducible research in dermatology.

URL : Transparent, Reproducible, and Open Science Practices of Published Literature in Dermatology Journals: Cross-Sectional Analysis

DOI : https://doi.org/10.2196/16078

Resurfacing Historical Scientific Data: A Case Study Involving Fruit Breeding Data

Authors : Shannon L. Farrell, Lois G. Hendrickson, Kristen L. Mastel, Katherine Adina Allen, Julia A. Kelly

Objective

The objective of this paper is to illustrate the importance and complexities of working with historical analog data that exists on university campuses. Using a case study of fruit breeding data, we highlight issues and opportunities for librarians to help preserve and increase access to potentially valuable data sets.

Methods

We worked in conjunction with researchers to inventory, describe, and increase access to a large, 100-year-old data set of analog fruit breeding data. This involved creating a spreadsheet to capture metadata about each data set, identifying data sets at risk for loss, and digitizing select items for deposit in our institutional repository.
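As a concrete illustration of the inventory step, a minimal sketch of what such a metadata spreadsheet might look like follows. The column names and the example row are assumptions for illustration, not the authors' actual schema.

```python
# Sketch of an inventory spreadsheet for analog datasets.
# Columns and the example row are fictional, for illustration only.
import csv

FIELDS = [
    "dataset_id", "title", "creator", "date_range", "format",
    "physical_location", "extent", "condition", "at_risk", "digitized",
    "repository_url",
]

rows = [
    {
        "dataset_id": "FB-001",
        "title": "Apple seedling evaluation notebooks",
        "creator": "Fruit breeding program",
        "date_range": "1920-1955",
        "format": "bound notebooks",
        "physical_location": "Lab storage room",
        "extent": "12 volumes",
        "condition": "fragile",
        "at_risk": "yes",
        "digitized": "no",
        "repository_url": "",
    },
]

with open("analog_data_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
```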

Results/Discussion

We illustrate the large amounts of data that exist within biological and agricultural sciences departments and labs, and how past practices of data collection, record keeping, storage, and management have hindered data reuse.

We demonstrate that librarians have a role in collaborating with researchers and providing direction in how to preserve analog data and make it available for reuse. This work may provide guidance for other science librarians pursuing similar projects.

Conclusions

This case study demonstrates how science librarians can build or strengthen their role in managing and providing access to analog data by combining their data management skills with researchers’ needs to recover and reuse data.

URL : Resurfacing Historical Scientific Data: A Case Study Involving Fruit Breeding Data

DOI : https://doi.org/10.7191/jeslib.2019.1171

“Data Stewardship Wizard”: A Tool Bringing Together Researchers, Data Stewards, and Data Experts around Data Management Planning

Authors : Robert Pergl, Rob Hooft, Marek Suchánek, Vojtěch Knaisl, Jan Slifka

The Data Stewardship Wizard is a tool for data management planning that is focused on getting the most value out of data management planning for the project itself rather than on fulfilling obligations.

It is based on FAIR Data Stewardship, in which each data-related decision in a project acts to optimize the Findability, Accessibility, Interoperability and/or Reusability of the data.

The background to this philosophy is that the first reuser of the data is the researcher themselves. The tool encourages the consulting of expertise and experts, can help researchers avoid risks they did not know they would encounter by confronting them with practical experience from others, and can help them discover helpful technologies they did not know existed.

In this paper, we discuss the context and motivation for the tool, explain its architecture, and present its key functions, such as knowledge model evolvability and migrations, the assembly of data management plans, and metrics for evaluating data management plans.

URL : “Data Stewardship Wizard”: A Tool Bringing Together Researchers, Data Stewards, and Data Experts around Data Management Planning

DOI : http://doi.org/10.5334/dsj-2019-059

Inferring the causal effect of journals on citations

Author : Vincent Traag

Articles in high-impact journals are by definition more highly cited on average. But are they cited more often because the articles are somehow “better”? Or are they cited more often simply because they appeared in a high-impact journal? Although some evidence suggests the latter, the causal relationship is not clear.

We here compare citations of published journal articles to citations of their preprint versions to uncover the causal mechanism. We build on an earlier model to infer the causal effect of journals on citations. We find evidence for both effects.

We show that high-impact journals seem to select articles that tend to attract more citations. At the same time, we find that high-impact journals augment the citation rate of published articles.
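As a rough illustration of the underlying idea (a deliberate simplification, not the model used in the paper), one can compare mean preprint and published citation counts across journal groups: the preprint counts hint at selection, while the gap between published and preprint counts hints at a journal boost. The groups and numbers below are fictional.

```python
# Illustrative simplification of the selection-vs-boost comparison.
# Not Traag's actual model; all numbers are fictional.
from statistics import mean

def selection_and_boost(articles):
    """articles: list of dicts with 'journal_group', 'preprint_cites', 'published_cites'."""
    groups = {}
    for a in articles:
        groups.setdefault(a["journal_group"], []).append(a)
    for group, items in sorted(groups.items()):
        pre = mean(a["preprint_cites"] for a in items)    # selection signal
        post = mean(a["published_cites"] for a in items)  # selection + journal effect
        print(f"{group}: mean preprint cites {pre:.1f}, mean published cites {post:.1f}")

selection_and_boost([
    {"journal_group": "high-impact", "preprint_cites": 12, "published_cites": 30},
    {"journal_group": "high-impact", "preprint_cites": 9, "published_cites": 25},
    {"journal_group": "low-impact", "preprint_cites": 4, "published_cites": 6},
    {"journal_group": "low-impact", "preprint_cites": 5, "published_cites": 7},
])
```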

Our results yield a deeper understanding of the role of journals in the research system. The use of journal metrics in research evaluation has been increasingly criticised in recent years and article-level citations are sometimes suggested as an alternative.

Our results show that removing impact factors from evaluation does not negate the influence of journals. This insight has important implications for changing practices of research evaluation.

URL : https://arxiv.org/abs/1912.08648

The Pricing of Open Access Journals: Diverse Niches and Sources of Value in Academic Publishing

Authors : Kyle Siler, Koen Frenken

Open Access (OA) publishing has created new academic and economic niches in contemporary science. OA journals offer numerous publication outlets with varying editorial philosophies and business models.

This article analyzes the Directory of Open Access Journals (DOAJ) (N=12,127) to identify characteristics of OA academic journals related to the adoption of Article Processing Charge (APC)-based business models, as well as price points of journals that charge APCs. Journal Impact Factor (JIF), language, publisher mission, DOAJ Seal, economic and geographic regions of publishers, peer review duration and journal discipline are all significantly related to the adoption and pricing of journal APCs.

Even after accounting for other journal characteristics (prestige, discipline, publisher country), journals published by for-profit publishers charge the highest APCs. Journals with status endowments (JIF, DOAJ Seal), articles written in English, published in wealthier regions, and in medical or science-based disciplines are also relatively costlier.
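A minimal sketch of this kind of analysis, assuming fictional data and illustrative column names rather than the authors' actual DOAJ variables, might relate APC price to journal characteristics with an ordinary least squares model:

```python
# Sketch of regressing APC price on journal characteristics.
# Column names and all rows are fictional, for illustration only.
import pandas as pd
import statsmodels.formula.api as smf

journals = pd.DataFrame({
    "apc_usd":    [1500, 950, 2900, 800, 2300, 1200, 3000, 600],
    "jif":        [2.1, 0.8, 5.4, 1.1, 3.8, 1.5, 6.2, 0.5],
    "doaj_seal":  [1, 0, 1, 0, 1, 0, 1, 0],
    "english":    [1, 0, 1, 1, 1, 1, 1, 0],
    "for_profit": [1, 0, 1, 0, 1, 1, 1, 0],
    "medical":    [0, 0, 1, 0, 1, 0, 1, 0],
})

# Model APC as a function of journal characteristics
model = smf.ols("apc_usd ~ jif + doaj_seal + english + for_profit + medical",
                data=journals).fit()
print(model.params)
```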

The OA publishing market reveals insights into forces that create economic and academic value in contemporary science. Political and institutional inequalities manifest in the varying niches occupied by different OA journals and publishers.

URL : The Pricing of Open Access Journals: Diverse Niches and Sources of Value in Academic Publishing

DOI : https://doi.org/10.1162/qss_a_00016

Data Curation for Big Interdisciplinary Science: The Pulley Ridge Experience

Authors : Timothy B. Norris, Christopher C. Mader

The curation and preservation of scientific data has long been recognized as an essential activity for the reproducibility of science and the advancement of knowledge. While investment into data curation for specific disciplines and at individual research institutions has advanced the ability to preserve research data products, data curation for big interdisciplinary science remains relatively unexplored terrain.

To fill this lacuna, this article presents a case study of the data curation for the National Centers for Coastal Ocean Science (NCCOS) funded project “Understanding Coral Ecosystem Connectivity in the Gulf of Mexico-Pulley Ridge to the Florida Keys” undertaken from 2011 to 2018 by more than 30 researchers at several research institutions.

The data curation process is described and a discussion of strengths, weaknesses and lessons learned is presented. Major conclusions from this case study include: the reimplementation of data repository infrastructure builds valuable institutional data curation knowledge but may not meet data curation standards and best practices; data from big interdisciplinary science can be considered as a special collection with the implication that metadata takes the form of a finding aid or catalog of datasets within the larger project context; and there are opportunities for data curators and librarians to synthesize and integrate results across disciplines and to create exhibits as stories that emerge from interdisciplinary big science.

URL : Data Curation for Big Interdisciplinary Science: The Pulley Ridge Experience

Alternative location : https://escholarship.umassmed.edu/jeslib/vol8/iss2/8/

Peer Review of Research Data Submissions to ScholarsArchive@OSU: How can we improve the curation of research datasets to enhance reusability?

Authors : Clara Llebot, Steven Van Tuyl

Objective

Best practices such as the FAIR Principles (Findability, Accessibility, Interoperability, Reusability) were developed to ensure that published datasets are reusable. While we employ best practices in the curation of datasets, we want to learn how domain experts view the reusability of datasets in our institutional repository, ScholarsArchive@OSU.

Curation workflows are designed by data curators based on their own recommendations, but research data is extremely specialized, and such workflows are rarely evaluated by researchers.

In this project we used peer-review by domain experts to evaluate the reusability of the datasets in our institutional repository, with the goal of informing our curation methods and ensuring that the limited resources of our library maximize the reusability of research data.

Methods

We asked all researchers who have datasets deposited in Oregon State University’s repository to refer us to domain experts who could review the reusability of their datasets. Two data curators who are non-experts also reviewed the same datasets.

We gave both groups review guidelines based on the guidelines of several journals. Eleven domain experts and two data curators reviewed eight datasets.

The review included the quality of the repository record, the quality of the documentation, and the quality of the data. We then compared the comments given by the two groups.
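As an illustration of how such reviews might be compared, the following sketch tallies mean scores per review aspect for each reviewer group. The aspect names, scoring scale, and scores are assumptions, not the study's actual guidelines or results.

```python
# Sketch of comparing review scores between domain experts and data curators.
# Aspects, scale, and scores are fictional, for illustration only.
from statistics import mean

ASPECTS = ["repository_record", "documentation", "data_quality"]

def mean_scores(reviews):
    """reviews: list of dicts mapping aspect -> score (e.g., 1-5)."""
    return {aspect: mean(r[aspect] for r in reviews) for aspect in ASPECTS}

domain_expert_reviews = [
    {"repository_record": 4, "documentation": 2, "data_quality": 4},
    {"repository_record": 5, "documentation": 3, "data_quality": 4},
]
curator_reviews = [
    {"repository_record": 4, "documentation": 3, "data_quality": 4},
]

print("experts:", mean_scores(domain_expert_reviews))
print("curators:", mean_scores(curator_reviews))
```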

Results

Domain experts and non-expert data curators largely converged on similar scores for reviewed datasets, but the focus of critique by domain experts was somewhat divergent.

A few broad issues common across reviews were: insufficient documentation, the use of links to journal articles in the place of documentation, and concerns about duplication of effort in creating documentation and metadata. Reviews also reflected the background and skills of the reviewer.

Domain experts expressed a lack of expertise in data curation practices and data curators expressed their lack of expertise in the research domain.

Conclusions

The results of this investigation could help guide future research data curation activities and align domain expert and data curator expectations for reusability of datasets.

We recommend further exploration of these common issues and additional domain expert peer-review projects to further refine and align expectations for research data reusability.

URL : Peer Review of Research Data Submissions to ScholarsArchive@OSU: How can we improve the curation of research datasets to enhance reusability?

DOI : https://doi.org/10.7191/jeslib.2019.1166