From Conceptualization to Implementation: FAIR Assessment of Research Data Objects

Authors: Anusuriya Devaraju, Mustapha Mokrane, Linas Cepinskas, Robert Huber, Patricia Herterich, Jerry de Vries, Vesa Akerman, Hervé L’Hours, Joy Davidson, Michael Diepenbroek

Funders and policy makers have strongly recommended the uptake of the FAIR principles in scientific data management. Several initiatives are working to implement the principles and to build standardized applications that systematically evaluate data FAIRness.

This paper presents practical solutions, namely metrics and tools, developed by the FAIRsFAIR project to pilot the FAIR assessment of research data objects in trustworthy data repositories. The metrics are mainly built on the indicators developed by the RDA FAIR Data Maturity Model Working Group.

The tools’ design and evaluation followed an iterative process. We present two applications of the metrics: an awareness-raising self-assessment tool and an automated FAIR data assessment tool.

Initial results of testing the tools with researchers and data repositories are discussed, and future improvements are suggested, including the next steps to enable FAIR data assessment in the broader research data ecosystem.
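
As a rough illustration of how metric-based FAIR assessment can be scored programmatically, the Python sketch below aggregates per-metric pass/fail results by FAIR principle. The FsF-style metric identifiers echo the paper's naming convention, but the result structure and the scoring logic are assumptions made for illustration, not the project's actual implementation.

```python
# Minimal sketch of scoring metric-based FAIR assessment (illustrative only).
# The MetricResult structure and the aggregation logic are hypothetical; the
# FAIRsFAIR metrics themselves build on the RDA FAIR Data Maturity Model
# indicators, as described in the paper.

from dataclasses import dataclass

@dataclass
class MetricResult:
    metric_id: str   # FsF-style identifier, used here as a placeholder
    principle: str   # "F", "A", "I", or "R"
    passed: bool

def fairness_summary(results: list[MetricResult]) -> dict[str, float]:
    """Aggregate per-metric pass/fail results into a score per FAIR principle."""
    summary: dict[str, float] = {}
    for principle in "FAIR":
        relevant = [r for r in results if r.principle == principle]
        if relevant:
            summary[principle] = sum(r.passed for r in relevant) / len(relevant)
    return summary

# Example: a data object that is findable and accessible but lacks
# interoperability and reuse metadata.
results = [
    MetricResult("FsF-F1-01D", "F", True),    # persistent identifier present
    MetricResult("FsF-F2-01M", "F", True),    # descriptive core metadata
    MetricResult("FsF-A1-01M", "A", True),    # metadata retrievable by identifier
    MetricResult("FsF-I1-01M", "I", False),   # no formal knowledge representation
    MetricResult("FsF-R1-01MD", "R", False),  # no content or licence description
]
print(fairness_summary(results))  # {'F': 1.0, 'A': 1.0, 'I': 0.0, 'R': 0.0}
```

An automated tool would populate such results by probing a data object's identifier, metadata, and licence information; here they are hard-coded for brevity.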

URL : From Conceptualization to Implementation: FAIR Assessment of Research Data Objects

DOI : https://doi.org/10.5334/dsj-2021-004

An overview of biomedical platforms for managing research data

Authors : Vivek Navale, Denis von Kaeppler, Matthew McAuliffe

Biomedical platforms provide the hardware and software to securely ingest, process, validate, curate, store, and share data. Many large-scale biomedical platforms use secure cloud computing technology for analyzing, integrating, and storing phenotypic, clinical, and genomic data. Several web-based platforms are available for researchers to access services and tools for biomedical research.

The use of bio-containers can facilitate the integration of bioinformatics software with various data analysis pipelines. Adoption of Common Data Models, Common Data Elements, and Ontologies can increase the likelihood of data reuse. Managing biomedical Big Data will require the development of strategies that can efficiently leverage public cloud computing resources.
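
To make the Common Data Model point concrete, the toy Python sketch below harmonizes records from two hypothetical sources into one shared schema. All field names and the mapping table are invented; real CDMs define far richer vocabularies and validation rules.

```python
# Illustrative sketch of harmonizing source records to a Common Data Model.
# The sources, field names, and mapping table are hypothetical.

# Per-source mapping from local field names to common data elements.
FIELD_MAPS = {
    "clinic_a": {"pt_sex": "sex", "dob": "birth_date", "dx_code": "diagnosis"},
    "clinic_b": {"gender": "sex", "birthDate": "birth_date", "icd10": "diagnosis"},
}

def to_common_model(source: str, record: dict) -> dict:
    """Rename source-specific fields to the shared data elements."""
    mapping = FIELD_MAPS[source]
    return {mapping[k]: v for k, v in record.items() if k in mapping}

a = to_common_model("clinic_a", {"pt_sex": "F", "dob": "1980-02-01", "dx_code": "E11"})
b = to_common_model("clinic_b", {"gender": "F", "birthDate": "1975-07-12", "icd10": "E11"})
# Both records now share one schema, so downstream analysis pipelines
# (and machine learning feature extraction) can treat them uniformly.
print(a, b, sep="\n")
```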

The use of research-community-developed standards for data collection can foster the development of machine learning methods for data processing and analysis. Increasingly, platforms will need to support the integration of data from research across multiple disease areas.

URL : An overview of biomedical platforms for managing research data

DOI : https://doi.org/10.1007/s42488-020-00040-0

Evaluation of Data Sharing After Implementation of the International Committee of Medical Journal Editors Data Sharing Statement Requirement

Authors : Valentin Danchev, Yan Min, John Borghi, Mike Baiocchi, John P. A. Ioannidis

Importance

The benefits of responsible sharing of individual-participant data (IPD) from clinical studies are well recognized, but stakeholders often disagree on how to align those benefits with privacy risks, costs, and incentives for clinical trialists and sponsors.

The International Committee of Medical Journal Editors (ICMJE) required a data sharing statement (DSS) from submissions reporting clinical trials effective July 1, 2018. The required DSSs provide a window into current data sharing rates, practices, and norms among trialists and sponsors.

Objective

To evaluate the implementation of the ICMJE DSS requirement in 3 leading medical journals: JAMA, Lancet, and New England Journal of Medicine (NEJM).

Design, Setting, and Participants

This cross-sectional study examined clinical trial reports published as articles in JAMA, Lancet, and NEJM between July 1, 2018, and April 4, 2020. Articles not eligible for a DSS, including observational studies and letters or correspondence, were excluded.

A MEDLINE/PubMed search identified 487 eligible clinical trials in JAMA (112 trials), Lancet (147 trials), and NEJM (228 trials). Two reviewers evaluated each of the 487 articles independently.
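
The abstract does not report an agreement statistic for the dual independent review, but Cohen's kappa is a standard way to quantify it. The Python sketch below shows the calculation on invented labels, purely as an illustration of the design.

```python
# Hypothetical illustration: Cohen's kappa for two independent reviewers.
# The labels below are invented; the study itself does not report kappa.

from collections import Counter

def cohens_kappa(rater1: list[str], rater2: list[str]) -> float:
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    freq1, freq2 = Counter(rater1), Counter(rater2)
    expected = sum(freq1[c] * freq2[c] for c in freq1) / (n * n)
    return (observed - expected) / (1 - expected)

# Invented labels: did each article declare data sharing?
r1 = ["yes", "yes", "no", "yes", "no", "yes", "no", "yes"]
r2 = ["yes", "yes", "no", "no",  "no", "yes", "no", "yes"]
print(round(cohens_kappa(r1, r2), 2))  # 0.75 on this toy sample
```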

Exposure

Publication of clinical trial reports in an ICMJE medical journal requiring a DSS.

Main Outcomes and Measures

The primary outcomes of the study were declared data availability and actual data availability in repositories. Other captured outcomes were data type, access, and conditions and reasons for data availability or unavailability. Associations with funding sources were examined.

Results

A total of 334 of 487 articles (68.6%; 95% CI, 64%-73%) declared data sharing, with nonindustry NIH-funded trials exhibiting the highest rates of declared data sharing (89%; 95% CI, 80%-98%) and industry-funded trials the lowest (61%; 95% CI, 54%-68%).
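
The headline interval can be sanity-checked with a normal (Wald) approximation for a binomial proportion. The authors' exact CI method is not stated in the abstract; the sketch below simply shows that 334 of 487 with a Wald interval reproduces the reported 64%-73%.

```python
# Reproducing the headline 95% CI with a normal (Wald) approximation.
# This is an assumption about the method, used only as a consistency check.

from math import sqrt

def wald_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    p = successes / n
    half_width = z * sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

low, high = wald_ci(334, 487)
print(f"{334/487:.1%} (95% CI, {low:.0%}-{high:.0%})")  # 68.6% (95% CI, 64%-73%)
```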

However, only 2 IPD sets (0.6%; 95% CI, 0.0%-1.5%) were actually deidentified and publicly available as of April 10, 2020. The remaining data sets were declared accessible by request to the authors (143 of 334 articles [42.8%]), through a repository (89 of 334 articles [26.6%]), or from the company (78 of 334 articles [23.4%]).

Among the 89 articles declaring that IPD would be stored in repositories, only 17 (19.1%) had actually deposited data, with embargoes and pending regulatory approval cited as the main reasons for the shortfall. An embargo was set in 47.3% of data-sharing articles (158 of 334), and in half of those the period exceeded 1 year or was unspecified.

Conclusions and Relevance

Most trials published in JAMA, Lancet, and NEJM after the implementation of the ICMJE policy declared their intent to make clinical data available. However, a wide gap between declared and actual data sharing exists.

To improve transparency and data reuse, journals should promote the use of unique pointers to data set location and standardized choices for embargo periods and access requirements.

URL : Evaluation of Data Sharing After Implementation of the International Committee of Medical Journal Editors Data Sharing Statement Requirement

DOI : https://doi.org/10.1001/jamanetworkopen.2020.33972

Rethinking success, integrity, and culture in research (part 2) — a multi-actor qualitative study on problems of science

Authors : Noémie Aubert Bonn, Wim Pinxten

Background

Research misconduct and questionable research practices have been the subject of increasing attention in the past few years. Yet despite the rich body of research available, few empirical works include the perspectives of non-researcher stakeholders.

Methods

We conducted semi-structured interviews and focus groups with policy makers, funders, institution leaders, editors or publishers, research integrity office members, research integrity community members, laboratory technicians, researchers, research students, and former researchers who changed careers, to inquire about success, integrity, and responsibilities in science.

We used the Flemish biomedical landscape as a baseline for grasping the views of interacting and complementary actors in a system setting.

Results

Given the breadth of our results, we divided our findings into a two-paper series, with the current paper focusing on the problems that affect research integrity and research culture. We first found that different actors have different perspectives on these problems.

Problems were linked either to personalities and attitudes or to the climates in which researchers operate. Elements that were described as essential for success (in the associated paper) were often thought to accentuate the problems of research climates by disrupting research culture and research integrity.

Even though all participants agreed that current research climates need to be addressed, they generally felt neither responsible for nor capable of initiating change. Instead, respondents revealed a circle of blame and mistrust between actor groups.

Conclusions

Our findings resonate with recent debates, and we extrapolate a few action points that might help advance the discussion.

First, the research integrity debate must revisit and tackle the way in which researchers are assessed.

Second, approaches to promote better science need to address the impact that research climates have on research integrity and research culture rather than to capitalize on individual researchers’ compliance.

Finally, inter-actor dialogue and shared decision making must be given priority to ensure that the perspectives of the full research system are captured. Understanding the relations and interdependencies between these perspectives is key to addressing the problems of science.

URL : Rethinking success, integrity, and culture in research (part 2) — a multi-actor qualitative study on problems of science

DOI : https://doi.org/10.1186/s41073-020-00105-z

Rethinking success, integrity, and culture in research (part 1) — a multi-actor qualitative study on success in science

Authors : Noémie Aubert Bonn, Wim Pinxten

Background

Success shapes the lives and careers of scientists. But success in science is difficult to define, let alone to translate into indicators that can be used for assessment. In the past few years, several groups expressed their dissatisfaction with the indicators currently used for assessing researchers.

But given the lack of agreement on what should constitute success in science, most propositions remain unanswered. This paper aims to complement our understanding of success in science and to document areas of tension and conflict in research assessments.

Methods

We conducted semi-structured interviews and focus groups with policy makers, funders, institution leaders, editors or publishers, research integrity office members, research integrity community members, laboratory technicians, researchers, research students, and former researchers who changed careers, to inquire about success, integrity, and responsibilities in science.

We used the Flemish biomedical landscape as a baseline for grasping the views of interacting and complementary actors in a system setting.

Results

Given the breadth of our results, we divided our findings into a two-paper series, with the current paper focusing on what defines and determines success in science. Respondents depicted success as a multi-factorial, context-dependent, and mutable construct.

Success appeared to be an interaction between characteristics from the researcher (Who), research outputs (What), processes (How), and luck. Interviewees noted that current research assessments overvalued outputs but largely ignored the processes deemed essential for research quality and integrity.

Interviewees suggested that science needs a diversity of indicators that are transparent, robust, and valid, and that also allow a balanced and diverse view of success; that assessment of scientists should not blindly depend on metrics but also value human input; and that quality should be valued over quantity.

Conclusions

The objective of research assessments may be to encourage good researchers, to benefit society, or simply to advance science. Yet we show that current assessments fall short on each of these objectives. Open and transparent inter-actor dialogue is needed to understand what research assessments aim for and how they can best achieve their objective.

URL : Rethinking success, integrity, and culture in research (part 1) — a multi-actor qualitative study on success in science

DOI : https://doi.org/10.1186/s41073-020-00104-0

Methodological quality of COVID-19 clinical research

Authors : Richard G. Jung, Pietro Di Santo, Cole Clifford, Graeme Prosperi-Porta, Stephanie Skanes, Annie Hung, Simon Parlow, Sarah Visintini, F. Daniel Ramirez, Trevor Simard, Benjamin Hibbert

The COVID-19 pandemic began in early 2020 with major health consequences. While a need to disseminate information to the medical community and general public was paramount, concerns have been raised regarding the scientific rigor in published reports.

We performed a systematic review to evaluate the methodological quality of currently available COVID-19 studies compared to historical controls. A total of 9895 titles and abstracts were screened and 686 COVID-19 articles were included in the final analysis.

Comparative analysis of COVID-19 articles against historical articles reveals a shorter time to acceptance (13.0 [IQR, 5.0–25.0] days vs. 110.0 [IQR, 71.0–156.0] days in COVID-19 and control articles, respectively; p < 0.0001).

Furthermore, methodological quality scores are lower in COVID-19 articles across all study designs. COVID-19 clinical studies have a shorter time to publication and have lower methodological quality scores than control studies in the same journal. These studies should be revisited with the emergence of stronger evidence.
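
The abstract reports medians with IQRs and a p-value but does not name the statistical test used. A Mann-Whitney U test is a common choice for skewed durations like review times, so the Python sketch below assumes it, on invented samples (requires NumPy and SciPy).

```python
# Hypothetical sketch of the kind of comparison reported above: medians with
# IQRs plus a nonparametric test. The samples are invented; the paper's
# actual test is not stated in the abstract.

import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
# Invented acceptance-time samples (days), right-skewed like review durations.
covid_days = rng.gamma(shape=1.5, scale=12.0, size=300)
control_days = rng.gamma(shape=3.0, scale=40.0, size=300)

def median_iqr(x: np.ndarray) -> str:
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    return f"{med:.1f} [IQR, {q1:.1f}-{q3:.1f}] days"

print("COVID-19:", median_iqr(covid_days))
print("Control: ", median_iqr(control_days))
print("p =", mannwhitneyu(covid_days, control_days, alternative="two-sided").pvalue)
```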

URL : Methodological quality of COVID-19 clinical research

DOI : https://doi.org/10.1038/s41467-021-21220-5

How is science clicked on Twitter? Click metrics for Bitly short links to scientific publications

Authors : Zhichao Fang, Rodrigo Costas, Wencan Tian, Xianwen Wang, Paul Wouters

To provide some context for the potential engagement behavior of Twitter users around science, this article investigates how Bitly short links to scientific publications embedded in scholarly Twitter mentions are clicked on Twitter.

Based on the click metrics of over 1.1 million Bitly short links referring to Web of Science (WoS) publications, our results show that nearly half of them (49.5%) were not clicked by Twitter users at all. For those Bitly short links with clicks from Twitter, the majority of their Twitter clicks accumulated within a short period after they were first tweeted.

Bitly short links to publications in the Social Sciences and Humanities tend to attract more clicks from Twitter than links to publications in other subject fields. This article also assesses the extent to which Twitter clicks correlate with other impact indicators.

Twitter clicks are weakly correlated with scholarly impact indicators (WoS citations and Mendeley readers) but moderately correlated with other Twitter engagement indicators (total retweets and total likes).
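
The abstract does not say which correlation coefficient was used; Spearman's rank correlation is the usual choice for heavy-tailed altmetric counts, so the sketch below assumes it, on invented counts (requires NumPy and SciPy).

```python
# Hypothetical sketch of correlating click counts with other indicators.
# All counts below are invented and only illustrate the rank-correlation step.

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
n = 1000
retweets = rng.poisson(3, n)
clicks = retweets * 2 + rng.poisson(5, n)  # clicks loosely coupled to retweets
citations = rng.poisson(10, n)             # invented, unrelated to clicks

rho_rt, _ = spearmanr(clicks, retweets)
rho_cit, _ = spearmanr(clicks, citations)
print(f"clicks vs retweets:  rho = {rho_rt:.2f}")   # higher, by construction
print(f"clicks vs citations: rho = {rho_cit:.2f}")  # near zero, by construction
```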

In light of these results, we highlight the importance of paying more attention to the click metrics of URLs in scholarly Twitter mentions, to improve our understanding of how science information is disseminated and received on Twitter.

URL : How is science clicked on Twitter? Click metrics for Bitly short links to scientific publications

DOI : https://doi.org/10.1002/asi.24458