Reporting of interventional clinical trial results in a French academic center: a survey of completed studies

Authors : Anne Sophie Alix Doucet, Constant Vinatier, Loïc Fin, Hervé Léna, Hélène Rangé, Clara Locher, Florian Naudet

Background: The dissemination of clinical trial results is an important scientific and ethical endeavour. This survey of completed interventional studies in a French academic center describes their reporting status.

Methods: We explored all interventional studies sponsored by Rennes University Hospital identified on the French Open Science Monitor, which tracks trials registered on EUCTR or ClinicalTrials.gov and provides an automatic assessment of results reporting. For each study, we ascertained the actual reporting of results through systematic searches of the hospital's internal database and bibliographic databases (Google Scholar, PubMed), and by contacting all principal investigators (PIs). We describe several features (including total budget and number of trial participants) of the studies that did not report any results.

Results: The French Open Science Monitor identified 93 interventional studies, among which 10 (11%) reported results. In contrast, our survey identified 36 studies (39%) reporting primary analysis results and an additional 18 (19%) reporting results for secondary analyses (without results for their primary analysis). The overall budget for studies that did not report any results was estimated at €5,051,253, for a total of 6,735 trial participants. The most frequent reasons for the absence of results reported by PIs were lack of time (18; 42%) and logistic difficulties, e.g. delay in obtaining results or another blocking factor (12; 28%). An association was found between non-publication and negative results (adjusted odds ratio 4.70, 95% confidence interval 1.67 to 14.11).

Conclusions: Even allowing for the fact that automatic searches underestimate the number of studies with published results, the level of reporting was disappointingly low. This amounts to a waste of trial participants’ involvement and of research funding. Corrective actions are needed.

DOI : https://doi.org/10.21203/rs.3.rs-3782467/v1

Status, use and impact of sharing individual participant data from clinical trials: a scoping review

Authors : Christian Ohmann, David Moher, Maximilian Siebert, Edith Motschall, Florian Naudet

Objectives

To explore the impact of data-sharing initiatives on the intent to share data, on actual data sharing, on the use of shared data and on research output and impact of shared data.

Eligibility criteria

All studies investigating data-sharing practices for individual participant data (IPD) from clinical trials.

Sources of evidence

We searched the Medline database, the Cochrane Library, the Science Citation Index Expanded and the Social Sciences Citation Index via Web of Science, and preprints and proceedings of the International Congress on Peer Review and Scientific Publication.

In addition, we inspected major clinical trial data-sharing platforms, contacted major journals/publishers, editorial groups and some funders.

Charting methods

Two reviewers independently used a standardised questionnaire to extract information on methods and results from the identified resources. A map of the extracted data was constructed and accompanied by a narrative summary for each outcome domain.

Results

93 studies identified in the literature search (published between 2001 and 2020, median: 2018) and 5 from additional information sources were included in the scoping review. Most studies were descriptive and focused on early phases of the data-sharing process. While the willingness to share IPD from clinical trials is extremely high, actual data-sharing rates are suboptimal.

A survey of journal data suggests poor to moderate enforcement of the policies by publishers. Metrics provided by platforms suggest that a large majority of shared data remains unrequested. When data are requested, the purpose of reuse is more often secondary analyses or meta-analyses, and rarely re-analyses. Finally, studies focusing on the real impact of data sharing were rare and relied on surrogates such as citation metrics.

Conclusions

There is currently a gap in the evidence base for the impact of IPD sharing, which entails uncertainties in the implementation of current data-sharing policies. High-level evidence is needed to assess whether the value of medical research increases with data-sharing practices.

Original location : https://bmjopen.bmj.com/content/11/8/e049228

Publication by association: how the COVID-19 pandemic has shown relationships between authors and editorial board members in the field of infectious diseases

Authors : Clara Locher, David Moher, Ioana Alina Cristea, Florian Naudet

During the COVID-19 pandemic, the rush to scientific and political judgements on the merits of hydroxychloroquine was fuelled by dubious papers which may have been published because the authors were not independent from the practices of the journals in which they appeared.

This example leads us to consider a new type of illegitimate publishing entity, ‘self-promotion journals’ which could be deployed to serve the instrumentalisation of productivity-based metrics, with a ripple effect on decisions about promotion, tenure and grant funding, but also on the quality of manuscripts that are disseminated to the medical community and form the foundation of evidence-based medicine.

DOI : http://dx.doi.org/10.1136/bmjebm-2021-111670

‘Nepotistic journals’: a survey of biomedical journals

Authors : Alexandre Scanff, Florian Naudet, Ioana Cristea, David Moher, Dorothy V M Bishop, Clara Locher

Context

Convergent analyses in different disciplines support the use of the Percentage of Papers by the Most Prolific author (PPMP) as a red flag to identify journals that can be suspected of questionable editorial practices. We examined whether this index, complemented by the Gini index, could be useful for identifying cases of potential editorial bias, using a large sample of biomedical journals.

Methods

We extracted metadata for all biomedical journals referenced in the National Library of Medicine, with any attributed Broad Subject Term and at least 50 authored articles (i.e. articles signed by at least one author) between 2015 and 2019, identifying the most prolific author (i.e. the person who signed the most papers in each particular journal).

We calculated the PPMP and the 2015-2019 Gini index for the distribution of articles across authors. When the relevant information was reported, we also computed the median publication lag (time between submission and acceptance) for articles authored by any of the most prolific authors and that for articles not authored by prolific authors.

For outlier journals, defined as those with a PPMP or Gini index above the 95th percentile of the respective distribution, a random sample of 100 journals was selected and described in relation to the editorial-board status of their most prolific author.
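As a rough illustration of how these two concentration indices work, the following minimal sketch computes the PPMP and the Gini index for one journal's author distribution. The author labels and paper counts are invented for illustration; this is not the authors' actual extraction pipeline.

```python
from collections import Counter

def ppmp(author_paper_counts):
    """Percentage of Papers by the Most Prolific author (PPMP):
    the share of a journal's articles signed by its single most prolific author."""
    total = sum(author_paper_counts.values())
    return 100 * max(author_paper_counts.values()) / total

def gini(author_paper_counts):
    """Gini index of the distribution of articles across a journal's authors:
    0 means a perfectly even spread; values near 1 mean papers are
    concentrated in very few hands."""
    counts = sorted(author_paper_counts.values())
    n = len(counts)
    total = sum(counts)
    # Standard formula based on the cumulative share of the sorted values.
    cum = sum((i + 1) * c for i, c in enumerate(counts))
    return (2 * cum) / (n * total) - (n + 1) / n

# Toy journal: one author signs 6 of its 20 papers.
papers_by_author = Counter({"A": 6, "B": 2, "C": 2, "D": 2, "E": 2,
                            "F": 2, "G": 1, "H": 1, "I": 1, "J": 1})
print(round(ppmp(papers_by_author), 1))  # → 30.0
print(round(gini(papers_by_author), 2))  # → 0.3
```

In this toy example the journal would fall well above the study's observed 95th percentiles for both indices (10.6% for the PPMP, 0.355 for the Gini index), flagging it as an outlier.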

Results

5,468 journals that published 4,986,335 papers between 2015 and 2019 were analysed. The 95th percentile of the PPMP was 10.6% (median 2.9%); the 95th percentile of the Gini index was 0.355 (median 0.183). The correlation between the two indices was 0.35 (95% CI 0.33 to 0.37). Information on publication lag was available for 2,743 journals.

We found that in 277 journals (10.2%), the median publication lag for articles by the most prolific author(s) was shorter than 3 weeks, versus 51 journals (1.9%) for articles not authored by the prolific author(s).

Among the random sample of outlier journals, 98 provided information about their editorial board. In 60 of these 98 cases (61%), the most prolific author was a member of the editorial board, including 25 (26% of the 98) who were editors-in-chief.

Discussion

In most journals, publications are distributed across a large number of authors. Our results reveal a subset of journals where a few authors, often members of the editorial board, were responsible for a disproportionate number of publications.

The papers by these authors were more likely to be accepted for publication within 3 weeks of their submission. To enhance trust in their practices, journals need to be transparent about their editorial and peer review practices.

DOI : https://doi.org/10.1101/2021.02.03.429520

Publication by association: the Covid-19 pandemic reveals relationships between authors and editors

Authors : Clara Locher, David Moher, Ioana Cristea, Florian Naudet

During the COVID-19 pandemic, the rush to scientific and political judgments on the merits of hydroxychloroquine was fuelled by dubious papers which may have been published because the authors were not independent from the practices of the journals in which they appeared.

This example leads us to consider a new type of illegitimate publishing entity, “self-promotion journals” which could be deployed to serve the instrumentalisation of productivity-based metrics, with a ripple effect on decisions about promotion, tenure, and grant funding.

DOI : https://doi.org/10.31222/osf.io/64u3s

Data-sharing recommendations in biomedical journals and randomised controlled trials: an audit of journals following the ICMJE recommendations

Authors : Maximilian Siebert, Jeanne Fabiola Gaba, Laura Caquelin, Henri Gouraud, Alain Dupuy, David Moher, Florian Naudet

Objective

To explore how the International Committee of Medical Journal Editors (ICMJE) data-sharing policy, which came into force on 1 July 2018, has been implemented by ICMJE-member journals and by ICMJE-affiliated journals declaring that they follow the ICMJE recommendations.

Design

A cross-sectional survey of data-sharing policies in 2018 on journal websites and in data-sharing statements in randomised controlled trials (RCTs).

Setting

ICMJE website; PubMed/Medline.

Eligibility criteria

ICMJE-member journals and 489 ICMJE-affiliated journals that published an RCT in 2018, had an accessible online website, and were not considered predatory journals according to Beall’s list. One hundred RCTs published in member journals and 100 RCTs published in affiliated journals with a data-sharing policy, all submitted after 1 July 2018.

Main outcome measures

The primary outcome for the policies was the existence of a data-sharing policy (explicit data-sharing policy, no data-sharing policy, policy merely referring to ICMJE recommendations) as reported on the journal website, especially in the instructions for authors.

For RCTs, our primary outcome was the intention to share individual participant data set out in the data-sharing statement.

Results

Eight (out of 14; 57%) member journals had an explicit data-sharing policy on their website (three were more stringent than the ICMJE requirements, one was less demanding and four were compliant), five (36%) additional journals stated that they followed the ICMJE requirements, and one (7%) had no policy online. In RCTs published in these journals, data-sharing statements were present in 98 out of 100, with an expressed intention to share individual patient data in 77 out of 100 (77%; 95% CI 67% to 85%).

One hundred and forty-five (out of 489) ICMJE-affiliated journals (30%; 26% to 34%) had an explicit data-sharing policy on their website (11 were more stringent than the ICMJE requirements, 85 were less demanding and 49 were compliant) and 276 (56%; 52% to 61%) merely referred to the ICMJE requirements.

In RCTs published in affiliated journals with an explicit data-sharing policy, data-sharing statements were rare (25%), and expressed intentions to share data were found in 22% (15% to 32%).

Conclusion

The implementation of ICMJE data-sharing requirements in online journal policies was suboptimal for ICMJE-member journals and poor for ICMJE-affiliated journals.

In contrast, the implementation of the policy in the data-sharing statements of published RCTs was good in member journals but of concern in affiliated journals. We suggest that audits of medical journal data-sharing policies be conducted continuously in the future.

DOI : http://dx.doi.org/10.1136/bmjopen-2020-038887

Data sharing and reanalysis of randomized controlled trials in leading biomedical journals with a full data sharing policy: survey of studies published in The BMJ and PLOS Medicine

Authors : Florian Naudet, Charlotte Sakarovitch, Perrine Janiaud, Ioana Cristea, Daniele Fanelli, David Moher, John P A Ioannidis

Objectives

To explore the effectiveness of data sharing by randomized controlled trials (RCTs) in journals with a full data sharing policy and to describe potential difficulties encountered in the process of performing reanalyses of the primary outcomes.

Design

Survey of published RCTs.

Setting

PubMed/Medline.

Eligibility criteria

RCTs that had been submitted and published by The BMJ and PLOS Medicine subsequent to the adoption of data sharing policies by these journals.

Main outcome measure

The primary outcome was data availability, defined as the eventual receipt of complete data with clear labelling. Primary outcomes were reanalyzed to assess the extent to which the original results could be reproduced. Difficulties encountered were described.

Results

37 RCTs (21 from The BMJ and 16 from PLOS Medicine) published between 2013 and 2016 met the eligibility criteria. 17/37 (46%, 95% confidence interval 30% to 62%) satisfied the definition of data availability, and 14 of the 17 (82%, 59% to 94%) were fully reproduced on all their primary outcomes. Of the remaining RCTs, errors were identified in two, although the reanalyses reached similar conclusions, and one paper did not provide enough information in its Methods section to reproduce the analyses. Difficulties identified included problems in contacting corresponding authors and their lack of resources to prepare the datasets. In addition, data-sharing practices varied across study groups.

Conclusions

Data availability was not optimal in two journals with a strong policy for data sharing. When investigators shared data, most reanalyses largely reproduced the original results. Data sharing practices need to become more widespread and streamlined to allow meaningful reanalyses and reuse of data.