Building a Trustworthy Data Repository: CoreTrustSeal Certification as a Lens for Service Improvements

Authors : Cara Key, Clara Llebot, Michael Boock

Objective

The university library aims to provide university researchers with a trustworthy institutional repository for sharing data. The library sought CoreTrustSeal certification to measure the quality of the repository's data services and to give researchers confidence when depositing their work.

Methods

The authors served on a small team of library staff who collaborated to compose the certification application. They describe the self-assessment process as the team iterated through cycles of compiling information and responding to reviewer feedback.

Results

The application team gained an understanding of data repository best practices, shared knowledge about the institutional repository, and identified the service improvements necessary to meet certification requirements. Based on the application and reviewer feedback, the team took measures to strengthen the repository's preservation strategies, governance, and public-facing policies and documentation.

Conclusions

The university library gained a better understanding of high-quality data services and measurably improved those services by pursuing and obtaining CoreTrustSeal certification.

DOI : https://doi.org/10.7191/jeslib.761

Reproducibility in Management Science

Authors : Miloš Fišar, Ben Greiner, Christoph Huber, Elena Katok, Ali I. Ozkes

With the help of more than 700 reviewers, we assess the reproducibility of nearly 500 articles published in the journal Management Science before and after the introduction of a new Data and Code Disclosure policy in 2019.

When considering only articles for which data accessibility and hardware and software requirements were not an obstacle for reviewers, the results of more than 95% of articles under the new disclosure policy could be fully or largely computationally reproduced. However, for 29% of articles, at least part of the data set was not accessible to the reviewer. Considering all articles in our sample reduces the share of reproduced articles to 68%.

These figures represent a significant increase compared with the period before the introduction of the disclosure policy, when only 12% of articles voluntarily provided replication materials, of which 55% could be (largely) reproduced. Substantial heterogeneity in reproducibility rates across fields is driven mainly by differences in data set accessibility.

Other reasons for unsuccessful reproduction attempts include missing code, unresolvable code errors, weak or missing documentation, demanding software and hardware requirements, and code complexity. Our findings highlight the importance of journal code and data disclosure policies and suggest potential avenues for enhancing their effectiveness.

DOI : https://doi.org/10.1287/mnsc.2023.03556

The Future of Data in Research Publishing: From Nice to Have to Need to Have?

Authors : Christine L. Borgman, Amy Brand

Science policy promotes open access to research data for purposes of transparency and reuse of data in the public interest. We expect demands for open data in scholarly publishing to accelerate, at least partly in response to the opacity of artificial intelligence algorithms.

Open data should be findable, accessible, interoperable, and reusable (FAIR), and also trustworthy and verifiable. The current state of open data in scholarly publishing is in transition from ‘nice to have’ to ‘need to have.’

Research data are valuable, interpretable, and verifiable only in the context of their origin, and with sufficient infrastructure to facilitate reuse. Making research data useful is expensive; benefits and costs are distributed unevenly.

Open data also poses risks for provenance, intellectual property, misuse, and misappropriation in an era of trolls and hallucinating AI algorithms. Scholars and scholarly publishers must make evidentiary data more widely available to promote public trust in research.

To make research processes more trustworthy, transparent, and verifiable, stakeholders need to make greater investments in data stewardship and knowledge infrastructures.

DOI : https://doi.org/10.1162/99608f92.b73aae77

Applying Librarian-Created Evaluation Tools to Determine Quality and Credibility of Open Access Library Science Journals

Authors : Maggie Albro, Jessica L. Serrao, Christopher D. Vidas, Jenessa M. McElfresh, K. Megan Sheffield, Megan Palmer

This article explores the application of journal quality and credibility evaluation tools to library science publications. The researchers investigate quality and credibility attributes of forty-eight peer-reviewed library science journals with open access components using two evaluative tools developed and published by librarians.

The results identify common positive and negative attributes of library science journals, compare the results of the two evaluation tools, and discuss the tools' ease of use and limitations. Overall, the results show that while library science journals do not exhibit the same concerning characteristics that librarians warn other researchers about, there are several areas in which publishers can improve the quality and credibility of their journals.

URL : https://preprint.press.jhu.edu/portal/sites/default/files/06_24.1albro.pdf

Gender differences in submission behavior exacerbate publication disparities in elite journals

Authors : Isabel Basson, Chaoqun Ni, Giovanna Badia, Nathalie Tufenkji, Cassidy R. Sugimoto, Vincent Larivière

Women are particularly underrepresented in journals of the highest scientific impact, with substantial consequences for their careers. While a large body of research has focused on the outcomes and processes of peer review, fewer articles have explicitly examined gendered submission behavior and the explanations for these differences.

In our study of nearly five thousand active authors, we find that women are less likely than men to report having submitted papers and, when they have, report submitting fewer manuscripts on average. Women were more likely to indicate that they did not submit their papers (both in general and their subsequently most cited papers) to Science, Nature, or PNAS because they were advised not to.

In the aggregate, no statistically significant difference was observed between men and women in how they rated the quality of their work. Nevertheless, regardless of discipline, women were more likely than men to indicate that their “work was not ground-breaking or sufficiently novel” as a rationale for not submitting to one of the listed prestigious journals. Men were more likely than women to indicate that the “work would fit better in a more specialized journal.”

We discuss the implications of these findings and interventions that can serve to mitigate the disparities caused by gendered differences in submission behavior.

DOI : https://doi.org/10.1101/2023.08.21.554192