Establishing an early indicator for data sharing and reuse

Authors : Agata Piękniewska, Laurel L. Haak, Darla Henderson, Katherine McNeill, Anita Bandrowski, Yvette Seger

Funders, publishers, scholarly societies, universities, and other stakeholders need to be able to track the impact of programs and policies designed to advance data sharing and reuse. With the NIH Data Management and Sharing Policy taking effect in 2023, establishing a pre-policy baseline of sharing and reuse activity is critical for the biological and biomedical community.

Toward this goal, we tested the utility of mentions of research resources, databases, and repositories (RDRs) as a proxy measurement of data sharing and reuse. We captured and processed text from Methods sections of open access biological and biomedical research articles published in 2020 and 2021 and made available in PubMed Central.

We used natural language processing to identify text strings indicating RDR mentions. In this article, we demonstrate our methodology, provide normalized baseline data sharing and reuse activity in this community, and highlight actions authors and publishers can take to encourage data sharing and reuse practices.
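The core idea of counting RDR mentions in Methods text can be sketched with simple pattern matching. This is an illustrative sketch only, not the study's actual pipeline; the repository names and patterns below are invented examples.

```python
import re
from collections import Counter

# Hypothetical example patterns for a few well-known repositories.
# The real study used a more sophisticated NLP approach.
RDR_PATTERNS = {
    "GEO": re.compile(r"\bGene Expression Omnibus\b|\bGEO\b"),
    "Zenodo": re.compile(r"\bZenodo\b", re.IGNORECASE),
    "Dryad": re.compile(r"\bDryad\b", re.IGNORECASE),
}

def count_rdr_mentions(methods_text: str) -> Counter:
    """Count mentions of each RDR pattern in a Methods-section string."""
    counts = Counter()
    for name, pattern in RDR_PATTERNS.items():
        counts[name] = len(pattern.findall(methods_text))
    return counts

methods = ("RNA-seq data were deposited in the Gene Expression Omnibus "
           "(GEO). Analysis scripts are available on Zenodo.")
print(count_rdr_mentions(methods))
```

Aggregating such counts across many open-access articles, then normalizing by article volume, yields the kind of baseline activity measure the abstract describes.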


DOI : https://doi.org/10.1002/leap.1586

PreprintMatch: A tool for preprint to publication detection shows global inequities in scientific publication

Authors : Peter Eckmann, Anita Bandrowski

Preprints, versions of scientific manuscripts that precede peer review, are growing in popularity. They offer an opportunity to democratize and accelerate research, as they involve no publication costs and no lengthy peer review process. Preprints are often later published in peer-reviewed venues, but these publications and the original preprints are frequently not linked in any way.

To address this, we developed a tool, PreprintMatch, to find matches between preprints and their corresponding published papers, where they exist. This tool outperforms existing preprint-to-paper matching techniques in both accuracy and speed. PreprintMatch was applied to search for matches between preprints (from bioRxiv and medRxiv) and papers indexed in PubMed.
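The essence of preprint-to-paper matching is scoring candidate pairs by textual similarity of fields such as title and abstract. A minimal sketch of that idea, using standard-library string similarity rather than PreprintMatch's actual algorithm (all titles below are invented):

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Normalized string similarity in [0, 1], case-insensitive."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def best_match(preprint_title, candidate_titles, threshold=0.8):
    """Return (index, score) of the best candidate above threshold, else None."""
    scored = [(i, similarity(preprint_title, t))
              for i, t in enumerate(candidate_titles)]
    i, score = max(scored, key=lambda pair: pair[1])
    return (i, score) if score >= threshold else None

preprint = "A tool for preprint to publication detection"
candidates = [
    "Unrelated study of mouse cortex development",
    "A tool for preprint-to-publication detection",
]
match = best_match(preprint, candidates)
```

A real pipeline would also compare abstracts and author lists and tolerate the title and author changes that, as noted below, occur between preprint and published version.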

The preliminary nature of preprints offers a unique perspective into scientific projects at a relatively early stage, and with better matching between preprint and paper, we explored questions related to research inequity.

We found that preprints from low-income countries are published as peer-reviewed papers at a lower rate than those from high-income countries (39.6% vs. 61.1%, respectively), and our data are consistent with previous work citing a lack of resources, lack of stability, and policy choices to explain this discrepancy.

Preprints from low-income countries were also published more quickly (178 vs. 203 days) and with lower title, abstract, and author similarity to the published version than those from high-income countries. Low-income countries add more authors between the preprint and the published version than high-income countries (0.42 vs. 0.32 authors, respectively), a practice significantly more frequent in China than in comparable countries.

Finally, we found that some publishers publish work by authors from lower-income countries more frequently than others do.


DOI : https://doi.org/10.1371/journal.pone.0281659

The Rigor and Transparency Index Quality Metric for Assessing Biological and Medical Science Methods

Authors : Joe Menke, Martijn Roelandse, Burak Ozyurt, Maryann Martone, Anita Bandrowski

The reproducibility crisis is a multifaceted problem involving ingrained practices within the scientific community. Fortunately, some of its causes can be addressed through authors' adherence to rigor and reproducibility criteria, implemented via checklists at various journals.

We developed an automated tool (SciScore) that evaluates research articles based on their adherence to key rigor criteria, including NIH criteria and RRIDs, at an unprecedented scale. We show that, despite steady improvements, less than half of the scoring criteria, such as blinding or power analysis, are routinely addressed by authors. Digging deeper, we examined the influence of specific checklists on average scores.

The average score for a journal in a given year was named the Rigor and Transparency Index (RTI), a new journal quality metric. We compared the RTI with the Journal Impact Factor and found there was no correlation. The RTI can potentially serve as a proxy for methodological quality.
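The two quantities the abstract pairs can be sketched in a few lines: the RTI as a per-journal yearly mean of article scores, and a correlation check against the impact factor. This is a toy illustration with invented numbers, not the study's computation (which was run over large article corpora):

```python
from statistics import mean

def rigor_transparency_index(article_scores):
    """RTI for one journal-year: the mean rigor score of its articles."""
    return mean(article_scores)

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

rti_values = [4.1, 5.0, 3.8, 4.6]      # invented per-journal RTIs
impact_factors = [2.3, 9.1, 4.4, 3.0]  # invented impact factors
r = pearson(rti_values, impact_factors)
```

A correlation near zero between the two columns, as the authors report at scale, is what motivates treating the RTI as an independent axis of journal quality rather than a restatement of impact.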


DOI : https://doi.org/10.1016/j.isci.2020.101698