Defining predatory journals and responding to the threat they pose: a modified Delphi consensus process

Authors: Samantha Cukier, Manoj M. Lalu, Gregory L. Bryson, Kelly D. Cobey, Agnes Grudniewicz, David Moher

Background

Posing as legitimate open access outlets, predatory journals and publishers threaten the integrity of academic publishing by not following publication best practices. Currently, there is no agreed upon definition of predatory journals, making it difficult for funders and academic institutions to generate practical guidance or policy to ensure their members do not publish in these channels.

Methods

We conducted a modified three-round Delphi survey of an international group of academics, funders, policy makers, journal editors, publishers and others to generate a consensus definition of predatory journals and to suggest ways the research community should respond to the problem.

Results

A total of 45 participants completed the survey on predatory journals and publishers. We reached consensus on 18 of 33 items to be included in a consensus definition of predatory journals and publishers.

We came to consensus on educational outreach and policy initiatives on which to focus, including the development of a single checklist to detect predatory journals and publishers, and public funding to support research in this general area.

We identified technological solutions to address the problem: a ‘one-stop-shop’ website to consolidate information on the topic and a ‘predatory journal research observatory’ to identify ongoing research and analysis about predatory journals/publishers.

Conclusions

In bringing together an international group of diverse stakeholders, we were able to use a modified Delphi process to inform the development of a definition of predatory journals and publishers.

This definition will help institutions, funders and other stakeholders generate practical guidance on avoiding predatory journals and publishers.

Does the use of open, non-anonymous peer review in scholarly publishing introduce bias? Evidence from the F1000 post-publication open peer review publishing model

Authors: Mike Thelwall, Verena Weigert, Liz Allen, Zena Nyakoojo, Eleanor-Rose Papas

This study examines whether there is any evidence of bias in two areas of common critique of open, non-anonymous peer review, as used in the post-publication peer review system operated by the open-access scholarly publishing platform F1000Research.

First, is there evidence of bias where a reviewer based in a specific country assesses the work of an author also based in the same country? Second, are reviewers influenced by being able to see the comments and know the origins of previous reviewers?

Scrutinising the open peer review comments published on F1000Research, we assess the extent of two frequently cited potential influences on reviewers that may be the result of the transparency offered by a fully attributable, open peer review publishing model: the national affiliations of authors and reviewers, and the ability of reviewers to view previously-published reviewer reports before submitting their own.

The effects of these potential influences were investigated for all first versions of articles published on F1000Research by 8 July 2019. In 16 of the 20 countries with the most articles, there was a tendency for reviewers based in the same country as the authors to give a more positive review.

The difference was statistically significant in one country; only three countries showed the reverse tendency. Second, there was no evidence of a conformity bias: when reviewers mentioned a previous review in their peer review report, they were not more likely to give the same overall judgement.

Although reviewers who had longer to potentially read previously published reviewer reports were slightly less likely to agree with previous reviewer judgements, this could be due to these articles being difficult to judge rather than to deliberate non-conformity.

Scientific sinkhole: The pernicious price of formatting

Authors: Allana G. LeBlanc, Joel D. Barnes, Travis J. Saunders, Mark S. Tremblay, Jean-Philippe Chaput

Objective

To conduct a time-cost analysis of formatting in scientific publishing.

Design

International, cross-sectional study (one-time survey).

Setting

Internet-based self-report survey, live between September 2018 and January 2019.

Participants

Anyone working in research, science, or academia who submitted at least one peer-reviewed manuscript for consideration for publication in 2017. Completed surveys were available for 372 participants from 41 countries (60% of respondents were from Canada).

Main outcome measure

Time (hours) and cost (wage per hour × time) associated with formatting a research paper for publication in a peer-reviewed academic journal.

Results

The median annual income category was US$61,000–80,999, and the median number of publications formatted per year was four. Manuscripts required a median of two attempts before they were accepted for publication. The median formatting time was 14 hours per manuscript, or 52 hours per person, per year. This resulted in a median calculated cost of US$477 per manuscript or US$1,908 per person, per year.

Conclusions

To our knowledge, this is the first study to analyze the cost of manuscript formatting in scientific publishing. Our results suggest that scientific formatting represents a loss of 52 hours, costing the equivalent of US$1,908 per researcher per year.
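The headline figures can be roughly reconstructed from the reported medians. A minimal sketch, assuming a 2,080-hour working year and the midpoint of the median income band (both are our assumptions, not parameters stated in the study):

```python
# Rough reconstruction of the abstract's cost arithmetic.
# Assumptions (ours): midpoint of the median income band, 2,080 working
# hours per year. Because the study reports each median separately,
# products of these medians differ slightly from the reported per-person
# medians (e.g., 14 h x 4 manuscripts = 56 h vs. the reported 52 h).
median_income = (61_000 + 80_999) / 2      # US$ per year
hourly_wage = median_income / 2_080        # ~US$34 per hour
hours_per_manuscript = 14                  # median formatting time
manuscripts_per_year = 4                   # median publications formatted

cost_per_manuscript = hourly_wage * hours_per_manuscript
cost_per_year = cost_per_manuscript * manuscripts_per_year
# Close to the reported US$477 per manuscript and US$1,908 per year.
print(f"~US${cost_per_manuscript:.0f} per manuscript, "
      f"~US${cost_per_year:.0f} per year")
```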

These results identify the hidden and pernicious price associated with scientific publishing and provide evidence to advocate for the elimination of strict formatting guidelines, at least prior to acceptance.

bioRxiv: the preprint server for biology

Authors: Richard Sever, Ted Roeder, Samantha Hindle, Linda Sussman, Kevin-John Black, Janet Argentine, Wayne Manos, John R. Inglis

The traditional publication process delays dissemination of new research, often by months, sometimes by years. Preprint servers decouple dissemination of research papers from their evaluation and certification by journals, allowing researchers to share work immediately, receive feedback from a much larger audience, and provide evidence of productivity long before formal publication.

Launched in 2013 as a non-profit community service, the bioRxiv server has brought preprint practice to the life sciences and recently posted its 64,000th manuscript.

The server now receives more than four million views per month and hosts papers spanning all areas of biology. Initially dominated by evolutionary biology, genetics/genomics and computational biology, bioRxiv has been increasingly populated by papers in neuroscience, cell and developmental biology, and many other fields.

Changes in journal and funder policies that encourage preprint posting have helped drive adoption, as has the development of bioRxiv technologies that allow authors to transfer papers easily between the server and journals.

A bioRxiv user survey found that 42% of authors post their preprints prior to journal submission whereas 37% post concurrently with journal submission. Authors are motivated by a desire to share work early; they value the feedback they receive, and very rarely experience any negative consequences of preprint posting.

Rapid dissemination via bioRxiv is also encouraging new initiatives that experiment with the peer review process and the development of novel approaches to literature filtering and assessment.

Authors: Christophe Boudry, Patricio Alvarez-Muñoz, Ricardo Arencibia-Jorge, Didier Ayena, Niels J. Brouwer, Zia Chaudhuri, Brenda Chawner, Emilienne Epee, Khalil Erraïs, Akbar Fotouhi, Almutez M. Gharaibeh, Dina H. Hassanein, Martina C. Herwig-Carl, Katherine Howard, Dieudonne Kaimbo Wa Kaimbo, Patricia-Ann Laughrea, Fernando A. Lopez, Juan D. Machin-Mastromatteo, Fernando K. Malerbi, Papa Amadou Ndiaye, Nina A. Noor, Josmel Pacheco-Mendoza, Vasilios P. Papastefanou, Mufarriq Shah, Carol L. Shields, Ya Xing Wang, Vasily Yartsev, Frederic Mouriaux

Background

The problem of access to medical information, particularly in low-income countries, has been under discussion for many years. Although a number of developments have occurred in the last decade (e.g., the open access (OA) movement and the website Sci-Hub), these difficulties persist widely, mainly because paywalls still limit access to approximately 75% of scholarly documents.

In this study, we compare the accessibility of recent full text articles in the field of ophthalmology in 27 established institutions located worldwide.

Methods

A total of 200 references from articles were retrieved using the PubMed database. Each article was individually checked for OA. Full texts of non-OA (i.e., “paywalled articles”) were examined to determine whether they were available using institutional and Hinari access in each institution studied, using “alternative ways” (i.e., PubMed Central, ResearchGate, Google Scholar, and Online Reprint Request), and using the website Sci-Hub.

Results

The number of full texts of “paywalled articles” available using institutional and Hinari access showed strong heterogeneity, ranging from 0% to 94.8% (mean = 46.8%; SD = 31.5; median = 51.3%).

We found that complementary use of “alternative ways” and Sci-Hub provides access to 95.5% of the full texts of “paywalled articles,” and reduces by a factor of 14 the average extra cost of obtaining all full texts through pay-per-view on publishers’ websites.

Conclusions

The low availability of full texts of “paywalled articles” in most institutions studied encourages researchers in the field of ophthalmology to use Sci-Hub to search for scientific information.

The scientific community and decision-makers must unite and strengthen their efforts to find solutions to improve access to scientific literature worldwide and avoid an implosion of the scientific publishing model.

This study is not an endorsement of using Sci-Hub. The authors, their institutions, and the publishers accept no responsibility for readers’ use of Sci-Hub.

Authors: Akira Inokuchi, Yusuf Sulistyo Nugroho, Fumiaki Konishi, Hideaki Hata, Akito Monden, Kenichi Matsumoto

Academic publications have traditionally been evaluated by their impact on research communities, measured through citation counts. The impact of academic publications on industry, in contrast, has rarely been studied.

This paper investigates how academic publications contribute to software development by analyzing publication citations in source code comments in open source software repositories.

We propose an automated approach to detecting academic publications based on Named Entity Recognition, achieving an F1 score of 0.90. We conduct a large-scale study of publication citations using 319,438,977 comments collected from 25,925 active repositories written in seven programming languages.
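The detection task can be illustrated with a much simpler pattern-based filter. The sketch below is not the authors' NER pipeline; it only shows the idea of flagging comment lines that look like they cite a publication (all patterns and example comments are illustrative):

```python
import re

# Simplified, regex-based stand-in for the paper's NER-based detector:
# flags comments that contain a DOI, an arXiv ID, or an author-year
# pattern. The actual study used Named Entity Recognition.
CITATION_PATTERNS = [
    re.compile(r"10\.\d{4,9}/[-._;()/:\w]+", re.I),               # DOI
    re.compile(r"arxiv:\s*\d{4}\.\d{4,5}", re.I),                 # arXiv ID
    re.compile(r"[A-Z][a-z]+ (et al\.?,? )?\(?(19|20)\d{2}\)?"),  # Author (Year)
]

def cites_publication(comment: str) -> bool:
    """Return True if a source-code comment appears to cite a publication."""
    return any(p.search(comment) for p in CITATION_PATTERNS)

# Illustrative comment lines, not drawn from the study's dataset.
comments = [
    "// See Cormen et al. (2009) for the full proof.",
    "# Based on https://doi.org/10.1145/3377811.3380442",
    "// TODO: refactor this loop",
]
print([cites_publication(c) for c in comments])  # → [True, True, False]
```

A pattern filter like this is cheap to run over hundreds of millions of comments, but it misses citations written as plain titles, which is why a learned NER model is the stronger choice.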

Our findings indicate that academic publications can serve as knowledge sources for software development, but that the knowledge they carry risks becoming outdated over time.

The NIH Open Citation Collection: A public access, broad coverage resource

Authors: B. Ian Hutchins, Kirk L. Baker, Matthew T. Davis, Mario A. Diwersy, Ehsanul Haque, Robert M. Harriman, Travis A. Hoppe, Stephen A. Leicht, Payam Meyer, George M. Santangelo

Citation data have remained hidden behind proprietary, restrictive licensing agreements, which raises barriers to entry for analysts wishing to use the data, increases the expense of performing large-scale analyses, and reduces the robustness and reproducibility of the conclusions.

For the past several years, the National Institutes of Health (NIH) Office of Portfolio Analysis (OPA) has been aggregating and enhancing citation data that can be shared publicly. Here, we describe the NIH Open Citation Collection (NIH-OCC), a public access database for biomedical research that is made freely available to the community.

This dataset, which has been carefully generated from unrestricted data sources such as MEDLINE, PubMed Central (PMC), and CrossRef, now underlies the citation statistics delivered in the NIH iCite analytic platform.

We have also included data from a machine learning pipeline that identifies, extracts, resolves, and disambiguates references from full-text articles available on the internet. Open citation links are available to the public in a major update of iCite (https://icite.od.nih.gov).