Status, use and impact of sharing individual participant data from clinical trials: a scoping review

Authors : Christian Ohmann, David Moher, Maximilian Siebert, Edith Motschall, Florian Naudet

Objectives

To explore the impact of data-sharing initiatives on the intent to share data, on actual data sharing, on the use of shared data and on research output and impact of shared data.

Eligibility criteria

All studies investigating data-sharing practices for individual participant data (IPD) from clinical trials.

Sources of evidence

We searched the Medline database, the Cochrane Library, the Science Citation Index Expanded and the Social Sciences Citation Index via Web of Science, and preprints and proceedings of the International Congress on Peer Review and Scientific Publication.

In addition, we inspected major clinical trial data-sharing platforms, contacted major journals/publishers, editorial groups and some funders.

Charting methods

Two reviewers independently extracted information on methods and results from resources identified using a standardised questionnaire. A map of the extracted data was constructed and accompanied by a narrative summary for each outcome domain.

Results

93 studies identified in the literature search (published between 2001 and 2020; median publication year 2018) and 5 studies from additional information sources were included in the scoping review. Most studies were descriptive and focused on early phases of the data-sharing process. While the willingness to share IPD from clinical trials is extremely high, actual data-sharing rates are suboptimal.

A survey of journal data suggests poor to moderate enforcement of data-sharing policies by publishers. Metrics provided by platforms suggest that a large majority of data remains unrequested. When requested, the purpose of reuse is more often secondary analyses and meta-analyses, and rarely re-analyses. Finally, studies focused on the real impact of data sharing were rare and used surrogates such as citation metrics.

Conclusions

There is currently a gap in the evidence base for the impact of IPD sharing, which entails uncertainties in the implementation of current data-sharing policies. High-level evidence is needed to assess whether the value of medical research increases with data-sharing practices.

URL : Status, use and impact of sharing individual participant data from clinical trials: a scoping review

Original location : https://bmjopen.bmj.com/content/11/8/e049228

Visual Summary Identification From Scientific Publications via Self-Supervised Learning

Authors : Shintaro Yamamoto, Anne Lauscher, Simone Paolo Ponzetto, Goran Glavaš, Shigeo Morishima

The exponential growth of scientific literature creates a need to support users in both effectively and efficiently analyzing and understanding the body of research work. This exploratory process can be facilitated by providing graphical abstracts, i.e., visual summaries of scientific publications.

Accordingly, recent work presented an initial study on automatically identifying a central figure in a scientific publication to be used as the publication’s visual summary.

That study, however, was limited to a single (biomedical) domain, primarily because the current state of the art relies on supervised machine learning and thus on the existence of large amounts of labeled data: until now, the only existing annotated data set covered only biomedical publications.

In this work, we build a novel benchmark data set for visual summary identification from scientific publications, which consists of papers presented at conferences from several areas of computer science. We couple this contribution with a new self-supervised learning approach that learns a heuristic matching of in-text figure references with figure captions.

Our self-supervised pre-training, executed on a large unlabeled collection of publications, attenuates the need for large annotated data sets for visual summary identification and facilitates domain transfer for this task. We evaluate our self-supervised pre-training for visual summary identification on both the existing biomedical data set and our newly presented computer science data set.

The experimental results suggest that the proposed method is able to outperform the previous state-of-the-art without any task-specific annotations.
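
The abstract describes the approach only at a high level. Purely as an illustrative sketch of the kind of self-supervised signal it names, the Python snippet below builds pseudo-labelled (sentence, caption) pairs from an unlabeled paper by matching "Figure N" mentions to caption N, and uses a toy bag-of-words scorer as a stand-in for the learned matcher. The function names, regular expressions and scorer are assumptions for illustration, not the authors' implementation.

# Illustrative sketch only (not the authors' implementation): build
# pseudo-labelled training pairs by matching "Figure N" mentions to caption N,
# and use a toy bag-of-words scorer in place of a learned neural matcher.
import re
from collections import Counter
from math import sqrt

def make_training_pairs(body_text, captions):
    """Self-supervised pairs: (sentence, caption, 1) when the sentence cites that
    figure, (sentence, caption, 0) for every other caption. No manual labels."""
    pairs = []
    for sentence in re.split(r"(?<=[.!?])\s+", body_text):
        for num in re.findall(r"(?:Figure|Fig\.)\s*(\d+)", sentence):
            if int(num) in captions:
                for fig_no, caption in captions.items():
                    pairs.append((sentence, caption, int(fig_no == int(num))))
    return pairs

def bow_cosine(a, b):
    """Cosine similarity of word counts; a stand-in for the learned matcher."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = sqrt(sum(v * v for v in ca.values())) * sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

def pick_visual_summary(body_text, captions):
    """Rank figures by aggregate text-caption similarity; return the top figure."""
    scores = {fig_no: 0.0 for fig_no in captions}
    for sentence in re.split(r"(?<=[.!?])\s+", body_text):
        for fig_no, caption in captions.items():
            scores[fig_no] += bow_cosine(sentence, caption)
    return max(scores, key=scores.get) if scores else None

if __name__ == "__main__":
    captions = {1: "Overview of the proposed pipeline.",
                2: "Ablation results on the test set."}
    text = ("Figure 1 shows the overview of the proposed pipeline. "
            "We report ablation results in Figure 2.")
    print(len(make_training_pairs(text, captions)))  # 4 pseudo-labelled pairs
    print(pick_visual_summary(text, captions))       # -> 1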

URL : Visual Summary Identification From Scientific Publications via Self-Supervised Learning

DOI : https://doi.org/10.3389/frma.2021.719004

Open science, the replication crisis, and environmental public health

Author : Daniel J. Hicks

Concerns about a crisis of mass irreplicability across scientific fields (“the replication crisis”) have stimulated a movement for open science, encouraging or even requiring researchers to publish their raw data and analysis code.

Recently, a rule at the US Environmental Protection Agency (US EPA) would have imposed a strong open data requirement. The rule prompted significant public discussion about whether open science practices are appropriate for fields of environmental public health.

The aims of this paper are to assess (1) whether the replication crisis extends to fields of environmental public health; and (2) in general whether open science requirements can address the replication crisis.

There is little empirical evidence for or against mass irreplicability in environmental public health specifically. Without such evidence, strong claims about whether the replication crisis extends to environmental public health – or not – seem premature.

Distinguishing three concepts – reproducibility, replicability, and robustness – makes it clear that open data initiatives can promote reproducibility and robustness but do little to promote replicability.

I conclude by reviewing some of the other benefits of open science, and offer some suggestions for funding streams to mitigate the costs of adoption of open science practices in environmental public health.

URL : Open science, the replication crisis, and environmental public health

DOI : https://doi.org/10.1080/08989621.2021.1962713

Clinical trial transparency and data sharing among biopharmaceutical companies and the role of company size, location and product type: a cross-sectional descriptive analysis

Authors : Sydney A Axson, Michelle M Mello, Deborah Lincow, Catherine Yang, Cary P Gross, Joseph S Ross, Jennifer Miller

Objectives

To examine company characteristics associated with better transparency, and to apply a tool previously used to measure and improve clinical trial transparency among large companies and drugs to smaller companies and biologics.

Design

Cross-sectional descriptive analysis.

Setting and participants

Novel drugs and biologics approved by the Food and Drug Administration (FDA) in 2016 and 2017, and their company sponsors.

Main outcome measures

Using established Good Pharma Scorecard (GPS) measures, companies and products were evaluated on their clinical trial registration, results dissemination and FDA Amendments Act (FDAAA) implementation; companies were ranked using these measures and a multicomponent data sharing measure.

Associations of company transparency scores with company size (large vs non-large), location (US vs non-US) and sponsored product type (drug vs biologic) were also examined.
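
The abstract does not spell out how the GPS composite scores and rankings are computed. Purely as a hypothetical illustration of ranking companies on binary per-product measures (not the published GPS scoring rules), a minimal Python sketch might look like this:

# Hypothetical illustration only: the published GPS scoring rules are more
# detailed than this. Here each product gets binary flags for the measures
# named in the abstract, and a company's score is the share of flags met
# across its products.
from statistics import mean

products = [  # toy data; real inputs would come from registries and FDA records
    {"company": "A", "registered": 1, "results_public": 1, "fdaaa_compliant": 1},
    {"company": "A", "registered": 1, "results_public": 0, "fdaaa_compliant": 1},
    {"company": "B", "registered": 1, "results_public": 0, "fdaaa_compliant": 0},
]

MEASURES = ("registered", "results_public", "fdaaa_compliant")

def company_scores(records):
    """Return {company: percentage of measure checks met across its products}."""
    by_company = {}
    for rec in records:
        by_company.setdefault(rec["company"], []).extend(rec[m] for m in MEASURES)
    return {c: 100 * mean(flags) for c, flags in by_company.items()}

ranking = sorted(company_scores(products).items(), key=lambda kv: kv[1], reverse=True)
for company, score in ranking:
    print(f"{company}: {score:.0f}%")  # A: 83%, B: 33%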

Results

26% of products (16/62) had publicly available results for all clinical trials supporting their FDA approval and 67% (39/58) had public results for trials in patients by 6 months after their FDA approval; 58% (32/55) were FDAAA compliant.

Large companies were significantly more transparent than non-large companies (overall median transparency score of 95% (IQR 91–100) vs 59% (IQR 41–70), p<0.001), attributable to higher FDAAA compliance (median of 100% (IQR 88–100) vs 57% (0–100), p=0.01) and better data sharing (median of 100% (IQR 80–100) vs 20% (IQR 20–40), p<0.01). No significant differences were observed by company location or product type.

Conclusions

It was feasible to apply the GPS transparency measures and ranking tool to non-large companies and biologics. Large companies are significantly more transparent than non-large companies, driven by better data sharing procedures and implementation of FDAAA trial reporting requirements.

Greater research transparency is needed, particularly among non-large companies, to maximise the benefits of research for patient care and scientific innovation.

URL : Clinical trial transparency and data sharing among biopharmaceutical companies and the role of company size, location and product type: a cross-sectional descriptive analysis

DOI : http://dx.doi.org/10.1136/bmjopen-2021-053248

Preprints in times of COVID19: the time is ripe for agreeing on terminology and good practices

Authors : Raffaella Ravinetto, Céline Caillet, Muhammad H. Zaman, Jerome Amir Singh, Philippe J. Guerin, Aasim Ahmad, Carlos E. Durán, Amar Jesani, Ana Palmero, Laura Merson, Peter W. Horby, E. Bottieau, Tammy Hoffmann, Paul N. Newton

Over recent years, the research community has been increasingly using preprint servers to share manuscripts that have not yet been peer reviewed. While this practice enables quick dissemination of research findings, it raises several challenges in publication ethics and integrity.

In particular, preprints have become an important source of information for stakeholders interested in COVID19 research developments, including traditional media, social media, and policy makers.

Despite caveats about their nature, many users may still confuse preprints with peer-reviewed manuscripts. If unconfirmed but already widely shared first-draft results later prove wrong or misinterpreted, it can be very difficult to “unlearn” what we thought was true. Complexity further increases if unconfirmed findings have been used to inform guidelines.

To help achieve a balance between early access to research findings and its negative consequences, we formulated five recommendations: (a) consensus should be sought on a term clearer than ‘preprint’, such as ‘unrefereed manuscript’, ‘manuscript awaiting peer review’ or ‘non-reviewed manuscript’; (b) caveats about unrefereed manuscripts should be prominent on their first page, and each page should include a red watermark stating ‘Caution—Not Peer Reviewed’; (c) preprint authors should certify that their manuscript will be submitted to a peer-reviewed journal and should regularly update the manuscript status; (d) high-level consultations should be convened to formulate clear principles and policies for the publication and dissemination of non-peer-reviewed research results; (e) in the longer term, an international initiative to certify servers that comply with good practices could be envisaged.

URL : Preprints in times of COVID19: the time is ripe for agreeing on terminology and good practices

DOI : https://doi.org/10.1186/s12910-021-00667-7

Publication patterns’ changes due to the COVID-19 pandemic: a longitudinal and short-term scientometric analysis

Authors : Shir Aviv-Reuven, Ariel Rosenfeld

In recent months, the COVID-19 pandemic (caused by the SARS-CoV-2 coronavirus) has spread throughout the world. In parallel, extensive scholarly research regarding various aspects of the pandemic has been published. In this work, we analyse the changes in biomedical publishing patterns due to the pandemic.

We study the changes in the volume of publications in both peer reviewed journals and preprint servers, average time to acceptance of papers submitted to biomedical journals, international (co-)authorship of these papers (expressed by diversity and volume), and the possible association between journal metrics and said changes.

We study these changes using two approaches: a short-term analysis examining changes during the first six months of the outbreak for both COVID-19-related and non-COVID-19-related papers, and a longitudinal analysis comparing changes with those of the previous four years.
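
As a rough illustration of the short-term comparison described above (the records and field names are assumptions for this sketch, not the authors' data pipeline), the following Python snippet computes the mean time to acceptance separately for COVID-19-related and other papers:

# Minimal sketch: compare time to acceptance for COVID-19-related vs other
# papers. The toy records and field names are illustrative assumptions.
from datetime import date
from statistics import mean

papers = [
    {"covid": True,  "submitted": date(2020, 3, 1),  "accepted": date(2020, 3, 7)},
    {"covid": True,  "submitted": date(2020, 4, 2),  "accepted": date(2020, 4, 20)},
    {"covid": False, "submitted": date(2020, 2, 1),  "accepted": date(2020, 6, 15)},
    {"covid": False, "submitted": date(2020, 1, 10), "accepted": date(2020, 4, 30)},
]

def acceptance_days(group):
    """Days from submission to acceptance for each paper in the group."""
    return [(p["accepted"] - p["submitted"]).days for p in group]

covid_days = acceptance_days([p for p in papers if p["covid"]])
other_days = acceptance_days([p for p in papers if not p["covid"]])

print(f"mean time to acceptance, COVID-19 papers: {mean(covid_days):.1f} days")
print(f"mean time to acceptance, other papers:    {mean(other_days):.1f} days")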

Our results show that the pandemic has so far had a tremendous effect on all examined aspects of scholarly publishing: a sharp increase in publication volume, which can be almost entirely attributed to the pandemic; a significantly faster mean time to acceptance for COVID-19 papers, which has come partially at the expense of non-COVID-19 papers; and a significant reduction in international collaboration for COVID-19 papers.

As the pandemic continues to spread, these changes may cause a slowdown in research in non-COVID-19 biomedical fields and bring about a lower rate of international collaboration.

DOI : https://doi.org/10.1007/s11192-021-04059-x

The Use of Twitter by Medical Journals: Systematic Review of the Literature

Authors : Natalie Erskine, Sharief Hendricks

Background

Medical journals use Twitter to engage readers and disseminate their research articles, implementing a range of strategies to maximize reach and impact.

Objective

This study aims to systematically review the literature to synthesize and describe the different Twitter strategies used by medical journals and their effectiveness on journal impact and readership metrics.

Methods

A systematic search of the literature published before February 2020 was conducted in four electronic databases (PubMed, Web of Science, Scopus, and ScienceDirect). Articles were reviewed using the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines.

Results

The search identified 44 original research studies that evaluated Twitter strategies implemented by medical journals and analyzed the relationship between Twitter metrics and alternative and citation-based metrics. The key findings suggest that promoting publications on Twitter improves citation-based and alternative metrics for academic medical journals.

Moreover, implementing different Twitter strategies maximizes the amount of attention that publications and journals receive. The four key Twitter strategies implemented by many medical journals are tweeting the title and link of the article, posting infographics, sharing podcasts, and hosting monthly internet-based journal clubs. Each strategy was successful in promoting the publications; however, different metrics were used to measure success.

Conclusions

Four key Twitter strategies are implemented by medical journals: tweeting the title and link of the article, posting infographics, sharing podcasts, and hosting monthly internet-based journal clubs. In this review, each strategy successfully promoted publications, but success was measured with different metrics.

Thus, it is difficult to conclude which strategy is most effective. In addition, the four strategies have different costs and effects on dissemination and readership. We recommend that journals and researchers incorporate a combination of Twitter strategies to maximize research impact and capture audiences with a variety of learning methods.

URL : The Use of Twitter by Medical Journals: Systematic Review of the Literature

DOI : https://doi.org/10.2196/26378