Association between the Rankings of Top Bioinformatics and Medical Informatics Journals and the Scholarly Reputations of Chief Editors

Author : Salim Sazzed

Scientometric indices, such as the journal Impact Factor (IF) or SCImago Journal Rank (SJR), often play a determining role when choosing a journal for possible publication. The Editor-in-Chief (EiC), also known as the lead editor or chief editor, usually decides the outcome (e.g., accept, reject) of a submitted manuscript, taking the reviewers’ feedback into account.

This study investigates the associations between the EiC’s scholarly reputation (i.e., citation-level metrics) and the rankings of top Bioinformatics and Computational Biology (BCB) and Medical Informatics (MI) journals. I consider three scholarly indices (i.e., citation, h-index, and i-10 index) of the EiC and four scientometric indices (i.e., h5-index, h5-median, impact factor, and SJR) of various journals.

To study the correlation between the scientometric indices of the EiC and the journal, I apply the Spearman (ρ) and Kendall (τ) correlation coefficients. Moreover, I employ machine learning (ML) models to predict the journal’s SJR and IF from the EiC’s scholarly reputation indices.
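The correlation step described above can be sketched with SciPy. The arrays below are hypothetical stand-ins (illustrative EiC h-index values and journal SJR scores for six journals), not the study’s data:

```python
from scipy.stats import spearmanr, kendalltau

# Hypothetical example data: EiC h-index and the journal's SJR
# for six journals (illustrative values, not the study's data).
eic_h_index = [25, 40, 33, 58, 12, 47]
journal_sjr = [1.8, 2.5, 3.1, 2.0, 1.2, 4.6]

# Rank-based correlation coefficients, as used in the study
rho, rho_p = spearmanr(eic_h_index, journal_sjr)
tau, tau_p = kendalltau(eic_h_index, journal_sjr)

print(f"Spearman rho = {rho:.2f} (p = {rho_p:.3f})")
print(f"Kendall tau  = {tau:.2f} (p = {tau_p:.3f})")
```

Both coefficients measure monotonic association on ranks rather than raw values, which is appropriate for skewed scientometric indicators such as citation counts.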

The analysis reveals no correlation between the EiC’s scholarly achievements and the journal’s quantitative metrics. ML models yield high prediction errors for SJR and IF estimation, which suggests that the EiC’s scholarly indices are not good predictors of journal rankings.

URL : Association between the Rankings of Top Bioinformatics and Medical Informatics Journals and the Scholarly Reputations of Chief Editors

DOI : https://doi.org/10.3390/publications9030042

Covid-19 and Open Science: First Setbacks

Author : Ghislaine Chartron

This article offers an initial assessment of open science during the Covid-19 pandemic: a typology of the resources made available according to their target audiences, an analysis of certain problems with the quality of information and data, the stakes of data science and data governance, a statement of some limits of open science in the Covid-19 context, and the evolution of scholarly communication in virology.

URL : https://hal.archives-ouvertes.fr/hal-03347094

Do authors of research funded by the Canadian Institutes of Health Research comply with its open access mandate?: A meta-epidemiologic study

Authors : Michael A. Scaffidi, Karam Elsolh, Juana Li, Yash Verma, Rishi Bansal, Nikko Gimpaya, Vincent Larivière, Rishad Khan, Samir C. Grover

Background

Since 2008, the Canadian Institutes of Health Research (CIHR) has mandated that studies it funds, in whole or in part, publish their results as open access (OA) within 12 months of publication, using online repositories and/or OA journals.

Yet there is evidence that authors comply poorly with this mandate. Specifically, there has been an apparent decrease in OA publication after 2015, coinciding with a change in the OA policy that year.

One particular policy change that may have contributed to this decline was lifting the requirement that authors deposit their article in an OA repository immediately upon publication.

We investigated OA compliance rates of CIHR-funded studies in the periods before and after the 2015 policy change, with manual confirmation of both CIHR funding and OA status.

Methods and findings

We identified CIHR-funded studies published between 2014 and 2017 using a comprehensive search in the Web of Science (WoS). We took a stratified random sample across all four years, with 250 studies from each year.

Two authors independently reviewed the final full-text publications retrieved from the journal web pages to confirm CIHR funding, as indicated in the acknowledgements or elsewhere in the paper.

For each study, we also collected bibliometric data, including citation count and Altmetric attention score. Statistical analyses were conducted using two-tailed Fisher’s exact tests with relative risk (RR). Among the 851 studies receiving CIHR funding published from 2014 to 2017, the percentage of CIHR-funded studies published as OA significantly decreased from 79.6% in 2014 to 70.3% in 2017 (RR = 0.88, 95% CI: 0.79–0.99, P = 0.028).
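The reported analysis can be illustrated with a minimal SciPy sketch. The per-year counts below are hypothetical (the abstract reports only percentages); they assume roughly 213 sampled studies per year and are chosen to approximate the reported 79.6% (2014) and 70.3% (2017) OA rates:

```python
from scipy.stats import fisher_exact

# Hypothetical counts approximating the reported rates
# (79.6% OA in 2014, 70.3% OA in 2017); not the study's raw data.
oa_2014, total_2014 = 170, 213
oa_2017, total_2017 = 150, 213

# Relative risk of OA publication in 2017 vs 2014
rr = (oa_2017 / total_2017) / (oa_2014 / total_2014)

# Two-tailed Fisher's exact test on the 2x2 contingency table
table = [[oa_2017, total_2017 - oa_2017],
         [oa_2014, total_2014 - oa_2014]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")

print(f"RR = {rr:.2f}, p = {p_value:.3f}")
```

With these illustrative counts, the relative risk comes out near the reported RR of 0.88 and the test rejects equality of OA rates at the 5% level.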

When considering all four years, there was no significant difference in the percentage of CIHR-funded studies published as OA in both 2014 and 2015 compared to both 2016 and 2017 (RR = 0.97, 95% CI: 0.90–1.05, P = 0.493). Additionally, OA publications had significantly higher citation count (both in year of publication and in total) and higher attention scores (P<0.05).

Conclusions

Overall, we found a significant decrease in the proportion of CIHR-funded studies published as OA from 2014 to 2017, though this difference did not persist when comparing 2014–2015 to 2016–2017.

The primary limitation was the reliance on self-reported data from authors about CIHR funding status. We posit that this decrease may be attributable to CIHR’s OA policy change in 2015.

Further exploration is warranted to validate these findings using a larger dataset and, if they hold, to investigate potential interventions to improve OA compliance, such as a CIHR publication database and reinstatement of the requirement that authors immediately deposit their findings in OA repositories upon publication.

URL : Do authors of research funded by the Canadian Institutes of Health Research comply with its open access mandate?: A meta-epidemiologic study

DOI : https://doi.org/10.1371/journal.pone.0256577

Status, use and impact of sharing individual participant data from clinical trials: a scoping review

Authors : Christian Ohmann, David Moher, Maximilian Siebert, Edith Motschall, Florian Naudet

Objectives

To explore the impact of data-sharing initiatives on the intent to share data, on actual data sharing, on the use of shared data and on research output and impact of shared data.

Eligibility criteria

All studies investigating data-sharing practices for individual participant data (IPD) from clinical trials.

Sources of evidence

We searched the Medline database, the Cochrane Library, the Science Citation Index Expanded and the Social Sciences Citation Index via Web of Science, and preprints and proceedings of the International Congress on Peer Review and Scientific Publication.

In addition, we inspected major clinical trial data-sharing platforms, contacted major journals/publishers, editorial groups and some funders.

Charting methods

Two reviewers independently extracted information on methods and results from resources identified using a standardised questionnaire. A map of the extracted data was constructed and accompanied by a narrative summary for each outcome domain.

Results

93 studies identified in the literature search (published between 2001 and 2020, median: 2018) and 5 from additional information sources were included in the scoping review. Most studies were descriptive and focused on early phases of the data-sharing process. While the willingness to share IPD from clinical trials is extremely high, actual data-sharing rates are suboptimal.

A survey of journal data suggests poor to moderate enforcement of the policies by publishers. Metrics provided by platforms suggest that a large majority of data remains unrequested. When requested, the purpose of the reuse is more often secondary analyses and meta-analyses, rarely re-analyses. Finally, studies focused on the real impact of data-sharing were rare and used surrogates such as citation metrics.

Conclusions

There is currently a gap in the evidence base for the impact of IPD sharing, which entails uncertainties in the implementation of current data-sharing policies. High level evidence is needed to assess whether the value of medical research increases with data-sharing practices.

URL : Status, use and impact of sharing individual participant data from clinical trials: a scoping review

Original location : https://bmjopen.bmj.com/content/11/8/e049228

Visual Summary Identification From Scientific Publications via Self-Supervised Learning

Authors : Shintaro Yamamoto, Anne Lauscher, Simone Paolo Ponzetto, Goran Glavaš, Shigeo Morishima

The exponential growth of the scientific literature creates a need to help users effectively and efficiently analyze and understand this growing body of research. This exploratory process can be facilitated by providing graphical abstracts, i.e., visual summaries of scientific publications.

Accordingly, previous work recently presented an initial study on automatic identification of a central figure in a scientific publication, to be used as the publication’s visual summary.

This study, however, was limited to a single (biomedical) domain. This is primarily because the current state of the art relies on supervised machine learning, which typically requires large amounts of labeled data: until now, the only existing annotated data set covered only biomedical publications.

In this work, we build a novel benchmark data set for visual summary identification from scientific publications, which consists of papers presented at conferences from several areas of computer science. We couple this contribution with a new self-supervised learning approach that learns from a heuristic matching of in-text references to figures with figure captions.

Our self-supervised pre-training, executed on a large unlabeled collection of publications, attenuates the need for large annotated data sets for visual summary identification and facilitates domain transfer for this task. We evaluate our self-supervised pretraining for visual summary identification on both the existing biomedical and our newly presented computer science data set.

The experimental results suggest that the proposed method is able to outperform the previous state-of-the-art without any task-specific annotations.

URL : Visual Summary Identification From Scientific Publications via Self-Supervised Learning

DOI : https://doi.org/10.3389/frma.2021.719004

Open science, the replication crisis, and environmental public health

Author : Daniel J. Hicks

Concerns about a crisis of mass irreplicability across scientific fields (“the replication crisis”) have stimulated a movement for open science, encouraging or even requiring researchers to publish their raw data and analysis code.

Recently, a proposed rule at the US Environmental Protection Agency (US EPA) would have imposed a strong open data requirement. The rule prompted significant public discussion about whether open science practices are appropriate for the fields of environmental public health.

The aims of this paper are to assess (1) whether the replication crisis extends to fields of environmental public health; and (2) in general whether open science requirements can address the replication crisis.

There is little empirical evidence for or against mass irreplicability in environmental public health specifically. Without such evidence, strong claims about whether the replication crisis extends to environmental public health – or not – seem premature.

By distinguishing three concepts – reproducibility, replicability, and robustness – it is clear that open data initiatives can promote reproducibility and robustness but do little to promote replicability.

I conclude by reviewing some of the other benefits of open science and offering some suggestions for funding streams to mitigate the costs of adopting open science practices in environmental public health.

URL : Open science, the replication crisis, and environmental public health

DOI : https://doi.org/10.1080/08989621.2021.1962713

Clinical trial transparency and data sharing among biopharmaceutical companies and the role of company size, location and product type: a cross-sectional descriptive analysis

Authors : Sydney A Axson, Michelle M Mello, Deborah Lincow, Catherine Yang, Cary P Gross, Joseph S Ross, Jennifer Miller

Objectives

To examine company characteristics associated with better transparency, and to apply a tool previously used to measure and improve clinical trial transparency among large companies and drugs to smaller companies and biologics.

Design

Cross-sectional descriptive analysis.

Setting and participants

Novel drugs and biologics Food and Drug Administration (FDA) approved in 2016 and 2017 and their company sponsors.

Main outcome measures

Using established Good Pharma Scorecard (GPS) measures, companies and products were evaluated on their clinical trial registration, results dissemination and FDA Amendments Act (FDAAA) implementation; companies were ranked using these measures and a multicomponent data sharing measure.

Associations between company transparency scores with company size (large vs non-large), location (US vs non-US) and sponsored product type (drug vs biologic) were also examined.

Results

26% of products (16/62) had publicly available results for all clinical trials supporting their FDA approval and 67% (39/58) had public results for trials in patients by 6 months after their FDA approval; 58% (32/55) were FDAAA compliant.

Large companies were significantly more transparent than non-large companies (overall median transparency score of 95% (IQR 91–100) vs 59% (IQR 41–70), p<0.001), attributable to higher FDAAA compliance (median of 100% (IQR 88–100) vs 57% (0–100), p=0.01) and better data sharing (median of 100% (IQR 80–100) vs 20% (IQR 20–40), p<0.01). No significant differences were observed by company location or product type.

Conclusions

It was feasible to apply the GPS transparency measures and ranking tool to non-large companies and biologics. Large companies are significantly more transparent than non-large companies, driven by better data sharing procedures and implementation of FDAAA trial reporting requirements.

Greater research transparency is needed, particularly among non-large companies, to maximise the benefits of research for patient care and scientific innovation.

URL : Clinical trial transparency and data sharing among biopharmaceutical companies and the role of company size, location and product type: a cross-sectional descriptive analysis

DOI : http://dx.doi.org/10.1136/bmjopen-2021-053248