A systematic examination of preprint platforms for use in the medical and biomedical sciences setting

Authors : Jamie J Kirkham, Naomi Penfold, Fiona Murphy, Isabelle Boutron, John PA Ioannidis, Jessica K Polka, David Moher

Objectives

The objective of this review is to identify all preprint platforms with biomedical and medical scope and to compare and contrast the key characteristics and policies of these platforms. We also aim to provide a searchable database to enable relevant stakeholders to compare between platforms.

Study Design and Setting

Preprint platforms that were launched up to 25th June 2019 and have a biomedical and medical scope according to MEDLINE’s journal selection criteria were identified using existing lists, web-based searches and the expertise of both academic and non-academic publication scientists.

A data extraction form was developed, pilot-tested and used to collect data from each preprint platform’s webpage(s). Data were collected on scope and ownership; content-specific characteristics; information relating to submission, journal transfer options and external discoverability; screening, moderation and permanence of content; and usage metrics and metadata.

Where possible, all online data were verified by the platform owner or representative by correspondence.

Results

A total of 44 preprint platforms were identified as having biomedical and medical scope: 17 (39%) were hosted by the Open Science Framework preprint infrastructure, six (14%) were provided by F1000 Research Ltd (the Open Research Central infrastructure) and 21 (48%) were other independent preprint platforms. Preprint platforms were either owned by non-profit academic groups, scientific societies or funding organisations (n=28; 64%), owned or partly owned by for-profit publishers or companies (n=14; 32%), or owned by individuals or small communities (n=2; 5%).

Twenty-four (55%) preprint platforms accepted content from all scientific fields, although some had restrictions relating to funding source, geographical region or an affiliated journal’s remit.

Thirty-three (75%) preprint platforms provided details about article screening (basic checks), and 14 (32%) of these actively involved researchers with content expertise in the screening process.

The three most common screening checks related to the scope of the article, plagiarism, and legal/ethical/societal issues and compliance. Almost all preprint platforms allow subsequent submission to any peer-reviewed journal and have a preservation plan for read access, and most have a policy covering reasons for retraction and the sustainability of the service.

Forty-one (93%) platforms currently have usage metrics, with the most common metric being the number of downloads presented on the abstract page.

Conclusion

A large number of preprint platforms exist for use in the biomedical and medical sciences, all of which offer researchers an opportunity to rapidly disseminate their research findings on an open-access public server, subject to scope and eligibility.

However, the process by which content is screened before online posting and withdrawn or removed after posting varies between platforms, which may be associated with platform operation, ownership, governance and financing.

DOI : https://doi.org/10.1101/2020.04.27.063578

Reproducible research practices, openness and transparency in health economic evaluations: study protocol for a cross-sectional comparative analysis

Authors : Ferrán Catalá-López, Lisa Caulley, Manuel Ridao, Brian Hutton, Don Husereau, Michael F Drummond, Adolfo Alonso-Arroyo, Manuel Pardo-Fernández, Enrique Bernal-Delgado, Ricard Meneu, Rafael Tabarés-Seisdedos, José Ramón Repullo, David Moher

Introduction

There has been a growing awareness of the need for rigorous and transparent reporting of health research, to ensure that studies can be reproduced by future researchers.

Health economic evaluations, the comparative analysis of alternative interventions in terms of their costs and consequences, have been promoted as an important tool to inform decision-making.

The objective of this study will be to investigate the extent to which articles of economic evaluations of healthcare interventions indexed in MEDLINE incorporate research practices that promote transparency, openness and reproducibility.

Methods and analysis

This is the study protocol for a cross-sectional comparative analysis. We registered the study protocol within the Open Science Framework (osf.io/gzaxr). We will evaluate a random sample of 600 cost-effectiveness analysis publications, a specific form of health economic evaluations, indexed in MEDLINE during 2012 (n=200), 2019 (n=200) and 2022 (n=200).

We will include published papers written in English reporting an incremental cost-effectiveness ratio in terms of costs per life years gained, quality-adjusted life years and/or disability-adjusted life years. Screening and selection of articles will be conducted by at least two researchers.

Reproducible research practices, openness and transparency in each article will be extracted using a standardised data extraction form by multiple researchers, with a 33% random sample (n=200) extracted in duplicate.

Information on general, methodological and reproducibility items will be reported, stratified by year, citation of the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement and journal. Risk ratios with 95% CIs will be calculated to represent changes in reporting between 2012–2019 and 2019–2022.
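The planned comparison between periods reduces to a risk ratio on two proportions with a Wald-type 95% CI on the log scale. As a minimal sketch of that calculation (the counts below are hypothetical, not study data, and the protocol does not specify its exact CI method):

```python
import math

def risk_ratio_ci(events_a, total_a, events_b, total_b, z=1.96):
    """Risk ratio of group A vs group B with a Wald 95% CI on the log scale."""
    rr = (events_a / total_a) / (events_b / total_b)
    # Standard error of log(RR) for two independent binomial samples
    se_log = math.sqrt(1 / events_a - 1 / total_a + 1 / events_b - 1 / total_b)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# Hypothetical counts: a practice reported in 120/200 articles in 2019 vs 90/200 in 2012
rr, lo, hi = risk_ratio_ci(120, 200, 90, 200)
print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")  # → RR ≈ 1.33 (95% CI 1.10 to 1.61)
```

A CI excluding 1 would indicate a change in reporting between the two years; with n=200 per stratum, the study is sized to detect moderate shifts of this kind.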

Ethics and dissemination

Due to the nature of the proposed study, no ethical approval will be required. All data will be deposited in a cross-disciplinary public repository.

It is anticipated that the study findings will be relevant to a variety of audiences. Study findings will be disseminated at scientific conferences and published in peer-reviewed journals.


DOI : http://dx.doi.org/10.1136/bmjopen-2019-034463

Defining predatory journals and responding to the threat they pose: a modified Delphi consensus process

Authors : Samantha Cukier, Manoj M. Lalu, Gregory L. Bryson, Kelly D. Cobey, Agnes Grudniewicz, David Moher

Background

Posing as legitimate open access outlets, predatory journals and publishers threaten the integrity of academic publishing by not following publication best practices. Currently, there is no agreed upon definition of predatory journals, making it difficult for funders and academic institutions to generate practical guidance or policy to ensure their members do not publish in these channels.

Methods

We conducted a modified three-round Delphi survey of an international group of academics, funders, policy makers, journal editors, publishers and others, to generate a consensus definition of predatory journals and suggested ways the research community should respond to the problem.

Results

A total of 45 participants completed the survey on predatory journals and publishers. We reached consensus on 18 of 33 items to be included in a consensus definition of predatory journals and publishers.

We came to consensus on educational outreach and policy initiatives on which to focus, including the development of a single checklist to detect predatory journals and publishers, and public funding to support research in this general area.

We identified technological solutions to address the problem: a ‘one-stop-shop’ website to consolidate information on the topic and a ‘predatory journal research observatory’ to identify ongoing research and analysis about predatory journals/publishers.

Conclusions

In bringing together an international group of diverse stakeholders, we were able to use a modified Delphi process to inform the development of a definition of predatory journals and publishers.

This definition will help institutions, funders and other stakeholders generate practical guidance on avoiding predatory journals and publishers.


DOI : https://doi.org/10.1101/19010850

Knowledge and motivations of researchers publishing in presumed predatory journals: a survey

Authors : Kelly D Cobey, Agnes Grudniewicz, Manoj M Lalu, Danielle B Rice, Hana Raffoul, David Moher

Objectives

To develop effective interventions to prevent publishing in presumed predatory journals (ie, journals that display deceptive characteristics, markers or data that cannot be verified), it is helpful to understand the motivations and experiences of those who have published in these journals.

Design

An online survey was delivered to two sets of corresponding authors. It collected demographic information and asked about researchers’ perceptions of publishing in the presumed predatory journal, the type of article processing fees paid and the quality of peer review received. The survey also included six open-ended items about researchers’ motivations and experiences.

Participants

Using Beall’s lists, we identified two groups of individuals who had published empirical articles in biomedical journals that were presumed to be predatory.

Results

Eighty-two authors at least partially responded to our survey (overall response rate ~14%: 11.4% [44/386] from the initial sample and 19.3% [38/197] from the second sample). The top three countries represented were India (n=21, 25.9%), the USA (n=17, 21.0%) and Ethiopia (n=5, 6.2%).

Three participants (3.9%) thought the journal they published in was predatory at the time of article submission. Participants most often first encountered the journal via an email invitation to submit an article (n=32, 41.0%) or through an online search for a journal with relevant scope (n=22, 28.2%).

Most participants indicated their study received peer review (n=65, 83.3%) and that this review was helpful and substantive (n=51, 79.7%). Almost half (n=32, 45.1%) indicated they did not pay fees to publish.

Conclusions

This work provides some evidence to inform policy to prevent future research from being published in predatory journals.

Our research suggests that common views about predatory journals (eg, no peer review) may not always be true, and that a grey zone between legitimate and presumed predatory journals exists. These results are based on self-reports and may be biased, which limits their interpretation.


DOI : http://dx.doi.org/10.1136/bmjopen-2018-026516

Data sharing and reanalysis of randomized controlled trials in leading biomedical journals with a full data sharing policy: survey of studies published in The BMJ and PLOS Medicine

Authors : Florian Naudet, Charlotte Sakarovitch, Perrine Janiaud, Ioana Cristea, Daniele Fanelli, David Moher, John P A Ioannidis

Objectives

To explore the effectiveness of data sharing by randomized controlled trials (RCTs) in journals with a full data sharing policy and to describe potential difficulties encountered in the process of performing reanalyses of the primary outcomes.

Design

Survey of published RCTs.

Setting

PubMed/Medline.

Eligibility criteria

RCTs that had been submitted and published by The BMJ and PLOS Medicine subsequent to the adoption of data sharing policies by these journals.

Main outcome measure

The primary outcome was data availability, defined as the eventual receipt of complete data with clear labelling. Primary outcomes were reanalyzed to assess the extent to which the original results could be reproduced. Difficulties encountered were described.

Results

37 RCTs (21 from The BMJ and 16 from PLOS Medicine) published between 2013 and 2016 met the eligibility criteria. 17/37 (46%, 95% confidence interval 30% to 62%) satisfied the definition of data availability, and 14 of these 17 (82%, 59% to 94%) were fully reproduced on all of their primary outcomes. Of the remaining RCTs, errors were identified in two, although the reanalyses reached conclusions similar to the original, and one paper did not provide enough information in the Methods section to reproduce the analyses. Difficulties identified included problems contacting corresponding authors and a lack of resources on their part for preparing the datasets. In addition, data sharing practices varied across study groups.
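The interval reported for 17/37 (46%, 30% to 62%) is a binomial-proportion CI. The abstract does not state which method was used; as one common choice, a Wilson score interval gives bounds close to those reported:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% CI for a binomial proportion successes/n."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

lo, hi = wilson_ci(17, 37)
print(f"17/37 = {17/37:.0%}, 95% CI {lo:.0%} to {hi:.0%}")
```

Different CI methods (Wald, Wilson, exact Clopper–Pearson) give slightly different bounds at this sample size, which is why the exact figures may not match to the percentage point.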

Conclusions

Data availability was not optimal in two journals with a strong policy for data sharing. When investigators shared data, most reanalyses largely reproduced the original results. Data sharing practices need to become more widespread and streamlined to allow meaningful reanalyses and reuse of data.


Assessing the utility of an institutional publications officer: a pilot assessment

Authors : Kelly D. Cobey, James Galipeau, Larissa Shamseer, David Moher

Background

The scholarly publication landscape is changing rapidly. We investigated whether the introduction of an institutional publications officer might help facilitate better knowledge of publication topics and related resources, and effectively support researchers to publish.

Methods

In September 2015, a purpose-built survey about researchers’ knowledge and perceptions of publication practices was administered at five Ottawa area research institutions. Subsequently, we publicly announced a newly hired publications officer (KDC) who then began conducting outreach at two of the institutions.

Specifically, the publications officer gave presentations, held one-to-one consultations, developed electronic newsletter content, and generated and maintained a webpage of resources. In March 2016, we re-surveyed our participants regarding their knowledge and perceptions of publishing.

Mean scores for the perception questions and the percentage of correct responses to the knowledge questions were computed for each item in the pre- and post-surveys. Differences between these means or percentages were then examined across the survey measures.

Results

82 participants completed both surveys. Of this group, 29 indicated that they had exposure to the publications officer, while the remaining 53 indicated they did not. Interaction with the publications officer led to improvements in half of the knowledge items (7/14 variables).

While improvements in knowledge of publishing were also found among those who reported not having interacted with the publications officer (9/14), these effects were often smaller in magnitude. Scores for some publication knowledge variables actually decreased between the pre- and post-surveys (3/14).

Effects for researchers’ perceptions of publishing increased for 5/6 variables in the group that interacted with the publications officer.

Discussion

This pilot provides initial indication that, in a short timeframe, introducing an institutional publications officer may improve knowledge and perceptions surrounding publishing.

This study is limited by its modest sample size and by the purely temporal relationship between the introduction of the publications officer and the observed changes in knowledge and perceptions. A randomized trial examining the publications officer as an intervention is needed.


DOI : https://doi.org/10.7717/peerj.3294