Reproducible and transparent research practices in published neurology research

Authors : Trevor Torgerson, Austin L. Johnson, Jonathan Pollard, Daniel Tritz, Matt Vassar

Background

The objective of this study was to evaluate the nature and extent of reproducible and transparent research practices in neurology publications.

Methods

The NLM catalog was used to identify MEDLINE-indexed neurology journals. A PubMed search of these journals was conducted to retrieve publications over a 5-year period from 2014 to 2018.

A random sample of publications was extracted. Two authors conducted data extraction in a blinded, duplicate fashion using a pilot-tested Google form. This form prompted data extractors to determine whether publications provided access to items such as study materials, raw data, analysis scripts, and protocols.

In addition, we determined if the publication was included in a replication study or systematic review, was preregistered, had a conflict of interest declaration, specified funding sources, and was open access.

Results

Our search identified 223,932 publications meeting the inclusion criteria, from which 400 were randomly sampled. Of these, 389 articles were accessible, and 271 contained empirical data and were included in the analysis.

Our results indicate that 9.4% of publications provided access to study materials, 9.2% provided access to raw data, 0.7% provided access to analysis scripts, and 0.7% linked to a protocol; 3.7% were preregistered.

A third of the sampled publications lacked a funding or conflict of interest statement. None of the sampled publications were included in a replication study, but a fifth were cited in a systematic review or meta-analysis.

Conclusions

Currently, published neurology research does not consistently provide information needed for reproducibility. The implications of poor research reporting can both affect patient care and increase research waste. Collaborative intervention by authors, peer reviewers, journals, and funding sources is needed to mitigate this problem.

DOI : https://doi.org/10.1186/s41073-020-0091-5

The Heritage Data Reuse Charter: from principles to research workflows

Authors : Erzsébet Tóth-Czifra, Laurent Romary

There is a growing need to establish domain- or discipline-specific approaches to research data sharing workflows. A defining feature of data and data workflows in the arts and humanities is their dependence on cultural heritage sources hosted and curated in museums, libraries, galleries and archives.

A major difficulty when scholars interact with heritage data is that cooperation between researchers and Cultural Heritage Institutions (henceforth CHIs) is often constrained by structural and legal challenges, and even more by uncertainty about each party's expectations.

The Heritage Data Reuse Charter aims to address these challenges by designing a common environment that enables all relevant actors to work together to connect and improve access to heritage data, and to make transactions related to the scholarly use of cultural heritage data more visible and transparent.

As a first step, a wide range of stakeholders from the cultural heritage and research sectors agreed upon a set of generic principles, summarized in the Mission Statement of the Charter, that can serve as a baseline governing interactions between CHIs, researchers and data centres.

This was followed by a long and thorough validation of these principles through surveys and workshops. As a second step, we now put forward a questionnaire template tool that helps researchers and CHIs translate the six core principles into specific research project settings.

It contains questions about access to data, provenance information, preferred citation standards, hosting responsibilities and so on, on the basis of which the parties can arrive at mutual reuse agreements that serve as a starting point for FAIR-by-construction data management, right from the project planning and application phase.

The questionnaire template and the resulting mutual agreements can be flexibly applied to projects of different scale and in platform-independent ways. Institutions can embed them into their own exchange protocols while researchers can add them to their Data Management Plans.

As such, they provide evidence of responsible and fair treatment of cultural heritage data, and of fair (but also FAIR) research data management practices based on partnership with the holding institution.

URL : https://halshs.archives-ouvertes.fr/halshs-02475692

Open Access eXchange (OAeX): an economic model and platform for fundraising open scholarship services

Authors : Jack Hyland, Alexander Kouker, Dmitri Zaitsev

This article describes the Open Access eXchange (OAeX) project, a pragmatic and comprehensive economic model and fundraising platform for open scholarship initiatives.

OAeX connects bidders with funders at scale, right across the open scholarship spectrum, through crowdfunding: financial expenditure is regulated by a market of freely competing providers, while financial transactions and transparency are assured by a clearing-house entity.

Specifically, OAeX seeks to facilitate open access publishing without the barrier of article processing charges (APCs), as well as contribute to solving challenges of transparency and economic sustainability in open scholarship projects in the broader sense.

DOI : http://doi.org/10.1629/uksg.500

Open Access+ Service: reframing library support to take research outputs to non-academic audiences

Author : Scott Taylor

The University of Manchester Library has established a key role in facilitating scholarly discourse through its mediated open access (OA) services, but has little track record in intentionally taking OA research outputs to non-academic audiences.

This article outlines recent exploratory steps the Library has taken to convince researchers to fully exploit this part of the scholarly communication chain. Driving developments within this service category is a belief that despite the recent rise in OA, the full public benefit of research outputs is often not being realized as many papers are written in inaccessibly technical language.

Recognizing our unique position to help authors reach broader audiences with simpler expressions of their work, we have evolved our existing managed OA services to systematically share plain-English summaries of OA papers via Twitter.

In parallel, we have taken steps to ensure that our commercial analytics tools work harder to identify and reach the networked communities that form around academic disciplines in the hope that these simpler expressions of research will be more likely to diffuse beyond these networks.

DOI : http://doi.org/10.1629/uksg.499

Demarcating Spectrums of Predatory Publishing: Economic and Institutional Sources of Academic Legitimacy

Author : Kyle Siler

The emergence of Open Access (OA) publishing has altered incentives and opportunities for academic stakeholders and publishers. These changes have yielded a variety of new economic and academic niches, including journals with questionable peer review systems and business models, commonly dubbed ‘predatory publishing.’ Empirical analysis of the Cabell’s Journal Blacklist reveals substantial diversity in types and degrees of predatory publishing.

While some blacklisted publishers produce journals with many severe violations of academic norms, ‘grey’ journals and publishers occupy borderline or ambiguous niches between predation and legitimacy.

Predation in academic publishing is not a simple binary phenomenon and should instead be perceived as a spectrum with varying types and degrees of illegitimacy. Conceptions of predation are based on overlapping evaluations of academic and economic legitimacy.

High institutional status benefits publishers by reducing conflicts between – if not aligning – professional and market institutional logics, which are more likely to conflict and create illegitimacy concerns in downmarket niches.

High rejection rates imbue high-status journals with value and pricing power, while low-status OA journals face ‘predatory’ incentives to optimize revenue via low selectivity.

Status influences the social acceptability of profit-seeking in academic publishing, rendering lower-status publishers vulnerable to being perceived and stigmatized as illegitimate.

URL : https://osf.io/preprints/socarxiv/6r274/

Scopus as a curated, high-quality bibliometric data source for academic research in quantitative science studies

Authors : Jeroen Baas, Michiel Schotten, Andrew Plume, Grégoire Côté, Reza Karimi

Scopus is among the largest curated abstract and citation databases, with a wide global and regional coverage of scientific journals, conference proceedings, and books, while ensuring only the highest quality data are indexed through rigorous content selection and re-evaluation by an independent Content Selection and Advisory Board.

Additionally, extensive quality assurance processes continuously monitor and improve all data elements in Scopus. Besides enriched metadata records of scientific articles, Scopus offers comprehensive author and institution profiles, obtained from advanced profiling algorithms and manual curation, ensuring high precision and recall.

The trustworthiness of Scopus has led to its use as bibliometric data source for large-scale analyses in research assessments, research landscape studies, science policy evaluations, and university rankings.

Scopus data have been made available free of charge for selected studies by the academic research community, for example through application programming interfaces, which has led to many publications employing Scopus data to investigate topics such as researcher mobility, network visualizations, and spatial bibliometrics.

In June 2019, the International Center for the Study of Research was launched, with an advisory board consisting of bibliometricians, aiming to work with the scientometric research community and offering a virtual laboratory where researchers will be able to utilize Scopus data.

DOI : https://doi.org/10.1162/qss_a_00019

OpenCitations, an infrastructure organization for open scholarship

Authors : Silvio Peroni, David Shotton

OpenCitations is an infrastructure organization for open scholarship dedicated to the publication of open citation data as Linked Open Data using Semantic Web technologies, thereby providing a disruptive alternative to traditional proprietary citation indexes.

Open citation data are valuable for bibliometric analysis, increasing the reproducibility of large-scale analyses by enabling publication of the source data. Following brief introductions to the development and benefits of open scholarship and to Semantic Web technologies, this paper describes OpenCitations and its data sets, tools, services, and activities.

These include the OpenCitations Data Model; the SPAR (Semantic Publishing and Referencing) Ontologies; OpenCitations’ open software of generic applicability for searching, browsing, and providing REST APIs over resource description framework (RDF) triplestores; Open Citation Identifiers (OCIs) and the OpenCitations OCI Resolution Service; the OpenCitations Corpus (OCC), a database of open downloadable bibliographic and citation data made available in RDF under a Creative Commons public domain dedication; and the OpenCitations Indexes of open citation data, of which the first and largest is COCI, the OpenCitations Index of Crossref Open DOI-to-DOI Citations, which currently contains over 624 million bibliographic citations and is receiving considerable usage by the scholarly community.
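The COCI index described above is exposed through OpenCitations' public REST API. As a minimal sketch, assuming the `citation-count` endpoint of the COCI API at `opencitations.net` (the endpoint layout is taken from the public OpenCitations documentation and may change over time), a client might build and parse a request like this:

```python
import json
from urllib.parse import quote

# Base URL of the COCI REST API -- an assumption drawn from the public
# OpenCitations documentation; the endpoint layout may change over time.
COCI_API = "https://opencitations.net/index/coci/api/v1"

def citation_count_url(doi: str) -> str:
    """Build the request URL for COCI's citation-count endpoint.

    DOIs appear directly in the URL path; slashes are left unescaped,
    which the API accepts.
    """
    return f"{COCI_API}/citation-count/{quote(doi, safe='/')}"

def parse_citation_count(payload: str) -> int:
    """Parse the endpoint's response, a JSON array like [{"count": "42"}]."""
    records = json.loads(payload)
    return int(records[0]["count"]) if records else 0

# Fetching would then be a single GET (e.g. urllib.request.urlopen) on
# citation_count_url("10.1162/qss_a_00023"); no API key is required.
```

Because the citation data are published under a public domain dedication, such calls can be made without registration, which is part of what makes the index attractive for reproducible bibliometric analyses.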

DOI : https://doi.org/10.1162/qss_a_00023