Defining predatory journals and responding to the threat they pose: a modified Delphi consensus process

Authors : Samantha Cukier, Manoj M. Lalu, Gregory L. Bryson, Kelly D. Cobey, Agnes Grudniewicz, David Moher

Background

Posing as legitimate open access outlets, predatory journals and publishers threaten the integrity of academic publishing by not following publication best practices. Currently, there is no agreed-upon definition of predatory journals, making it difficult for funders and academic institutions to generate practical guidance or policy to ensure their members do not publish in these channels.

Methods

We conducted a modified three-round Delphi survey of an international group of academics, funders, policy makers, journal editors, publishers and others, to generate a consensus definition of predatory journals and suggested ways the research community should respond to the problem.

Results

A total of 45 participants completed the survey on predatory journals and publishers. We reached consensus on 18 of 33 items to include in a consensus definition of predatory journals and publishers.

We also reached consensus on which educational outreach and policy initiatives to focus on, including the development of a single checklist to detect predatory journals and publishers, and public funding to support research in this general area.

We identified technological solutions to address the problem: a ‘one-stop-shop’ website to consolidate information on the topic and a ‘predatory journal research observatory’ to identify ongoing research and analysis about predatory journals/publishers.

Conclusions

In bringing together an international group of diverse stakeholders, we were able to use a modified Delphi process to inform the development of a definition of predatory journals and publishers.

This definition will help institutions, funders and other stakeholders generate practical guidance on avoiding predatory journals and publishers.

DOI : https://doi.org/10.1101/19010850

Does the use of open, non-anonymous peer review in scholarly publishing introduce bias? Evidence from the F1000 post-publication open peer review publishing model

Authors : Mike Thelwall, Verena Weigert, Liz Allen, Zena Nyakoojo, Eleanor-Rose Papas

This study examines whether there is any evidence of bias in two areas of common critique of open, non-anonymous peer review, as used in the post-publication peer review system operated by the open-access scholarly publishing platform F1000Research.

First, is there evidence of bias when a reviewer based in a specific country assesses the work of an author based in the same country? Second, are reviewers influenced by being able to see the comments and know the origins of previous reviewers?

Scrutinising the open peer review comments published on F1000Research, we assess the extent of two frequently cited potential influences on reviewers that may be the result of the transparency offered by a fully attributable, open peer review publishing model: the national affiliations of authors and reviewers, and the ability of reviewers to view previously-published reviewer reports before submitting their own.

The effects of these potential influences were investigated for all first versions of articles published in F1000Research by 8 July 2019. In 16 of the 20 countries with the most articles, reviewers based in the same country as the author tended to give a more positive review.

The difference was statistically significant in one country, and only three countries showed the reverse tendency. Second, there is no evidence of a conformity bias: when reviewers mentioned a previous review in their peer review report, they were not more likely to give the same overall judgement.

Although reviewers who had had longer to read a previously published reviewer report were slightly less likely to agree with previous reviewer judgements, this could be because those articles were more difficult to judge rather than a sign of deliberate non-conformity.
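
To make the comparison concrete, here is a minimal sketch (in Python, not the authors' code) of how one might test for a same-country positivity effect, assuming a hypothetical per-review dataset with reviewer country, author country and overall judgement; the file name, column names and the "Approved" coding are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code): given one row per review with the
# reviewer's overall judgement and the countries of reviewer and author,
# compare same-country vs. different-country approval rates per country.
# The file name, column names and the "Approved" coding are assumptions.
import pandas as pd
from scipy.stats import fisher_exact

reviews = pd.read_csv("f1000_reviews.csv")  # hypothetical per-review dataset
reviews["same_country"] = reviews["reviewer_country"] == reviews["author_country"]
reviews["positive"] = reviews["judgement"] == "Approved"

for country, group in reviews.groupby("author_country"):
    table = pd.crosstab(group["same_country"], group["positive"])
    if table.shape == (2, 2):  # need both pair types and both outcomes
        odds_ratio, p_value = fisher_exact(table)
        print(f"{country}: odds ratio {odds_ratio:.2f}, p = {p_value:.3f}")
```

Fisher's exact test is chosen here only because per-country contingency tables can be small; the paper's own statistical procedure may differ.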

URL : https://arxiv.org/abs/1911.03379

Scientific sinkhole: The pernicious price of formatting

Authors : Allana G. LeBlanc, Joel D. Barnes, Travis J. Saunders, Mark S. Tremblay, Jean-Philippe Chaput

Objective

To conduct a time-cost analysis of formatting in scientific publishing.

Design

International, cross-sectional study (one-time survey).

Setting

Internet-based self-report survey, live between September 2018 and January 2019.

Participants

Anyone working in research, science, or academia who submitted at least one peer-reviewed manuscript for consideration for publication in 2017. Completed surveys were available for 372 participants from 41 countries (60% of respondents were from Canada).

Main outcome measure

Time (hours) and cost (wage per hour × time) associated with formatting a research paper for publication in a peer-reviewed academic journal.

Results

The median annual income category was US$61,000–80,999, and the median number of publications formatted per year was four. Manuscripts required a median of two attempts before they were accepted for publication.

The median formatting time was 14 hours per manuscript, or 52 hours per person, per year. This resulted in a median calculated cost of US$477 per manuscript or US$1,908 per person, per year.
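
These figures can be sanity-checked against the cost formula from the main outcome measure (wage per hour × time). The hourly wage below is an assumption on our part, derived from the midpoint of the median income category over a 2,080-hour work year; exact agreement with the reported medians is not expected, since a median of products is not the product of medians.

```python
# Worked check of the cost formula (cost = wage per hour x time). The hourly
# wage is an assumption: the midpoint of the median income category
# (US$61,000-80,999) spread over a 2,080-hour work year.
annual_income = 71_000                 # assumed midpoint, USD
hourly_wage = annual_income / 2_080    # ~US$34 per hour

hours_per_manuscript = 14              # reported median
manuscripts_per_year = 4               # reported median

cost_per_manuscript = hourly_wage * hours_per_manuscript    # ~US$478
cost_per_year = cost_per_manuscript * manuscripts_per_year  # ~US$1,912

print(f"per manuscript: ${cost_per_manuscript:,.0f}; per year: ${cost_per_year:,.0f}")
```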

Conclusions

To our knowledge, this is the first study to analyze the cost of manuscript formatting in scientific publishing. Our results suggest that scientific formatting represents a loss of 52 hours, costing the equivalent of US$1,908 per researcher per year.

These results identify the hidden and pernicious price associated with scientific publishing and provide evidence to advocate for the elimination of strict formatting guidelines, at least prior to acceptance.

DOI : https://doi.org/10.1371/journal.pone.0223116

Worldwide inequality in access to full text scientific articles: the example of ophthalmology

Authors : Christophe Boudry, Patricio Alvarez-Muñoz, Ricardo Arencibia-Jorge, Didier Ayena, Niels J. Brouwer, Zia Chaudhuri, Brenda Chawner, Emilienne Epee, Khalil Erraïs, Akbar Fotouhi, Almutez M. Gharaibeh, Dina H. Hassanein, Martina C. Herwig-Carl, Katherine Howard, Dieudonne Kaimbo Wa Kaimbo, Patricia-Ann Laughrea, Fernando A. Lopez, Juan D. Machin-Mastromatteo, Fernando K. Malerbi, Papa Amadou Ndiaye, Nina A. Noor, Josmel Pacheco-Mendoza, Vasilios P. Papastefanou, Mufarriq Shah, Carol L. Shields, Ya Xing Wang, Vasily Yartsev, Frederic Mouriaux

Background

The problem of access to medical information, particularly in low-income countries, has been under discussion for many years. Although a number of developments have occurred in the last decade (e.g., the open access (OA) movement and the website Sci-Hub), these difficulties are widely agreed to persist, mainly because paywalls still limit access to approximately 75% of scholarly documents.

In this study, we compare the accessibility of recent full text articles in the field of ophthalmology in 27 established institutions located worldwide.

Methods

A total of 200 references from articles were retrieved using the PubMed database. Each article was individually checked for OA. Full texts of non-OA (i.e., “paywalled articles”) were examined to determine whether they were available using institutional and Hinari access in each institution studied, using “alternative ways” (i.e., PubMed Central, ResearchGate, Google Scholar, and Online Reprint Request), and using the website Sci-Hub.
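
For readers who want to automate the OA check itself, a minimal sketch using the Unpaywall REST API follows. This is an illustration only: the study verified each article by hand, and Unpaywall is not among the channels it reports using.

```python
# Minimal sketch: check a DOI's open access status via the Unpaywall REST API.
# Illustration only; the study checked each article by hand against
# institutional access, Hinari, "alternative ways" and Sci-Hub.
import requests

EMAIL = "you@example.org"  # Unpaywall requires a contact email parameter

def is_open_access(doi: str) -> bool:
    resp = requests.get(
        f"https://api.unpaywall.org/v2/{doi}",
        params={"email": EMAIL},
        timeout=10,
    )
    resp.raise_for_status()
    return bool(resp.json().get("is_oa"))

print(is_open_access("10.7717/peerj.7850"))  # this article itself is OA -> True
```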

Results

The number of full texts of “paywalled articles” available through institutional and Hinari access was strongly heterogeneous, ranging from 0% to 94.8% of full texts (mean = 46.8%; SD = 31.5; median = 51.3%).

We found that the complementary use of “alternative ways” and Sci-Hub gives access to 95.5% of the full texts of “paywalled articles,” and reduces by a factor of 14 the average extra cost of obtaining all full texts through pay-per-view on publishers’ websites.

Conclusions

The scant availability of full texts of “paywalled articles” in most of the institutions studied encourages researchers in the field of ophthalmology to use Sci-Hub to search for scientific information.

The scientific community and decision-makers must unite and strengthen their efforts to find solutions to improve access to scientific literature worldwide and avoid an implosion of the scientific publishing model.

This study is not an endorsement for using Sci-Hub. The authors, their institutions, and publishers accept no responsibility on behalf of readers.

DOI : https://doi.org/10.7717/peerj.7850

Open Access — Towards a non-normative and systematic understanding

Authors : Niels Taubert, Anne Hobert, Nicolas Fraser, Najko Jahn, Elham Iravani

The term Open Access not only describes a certain model of scholarly publishing — namely in digital format freely accessible to readers — but often also implies that free availability of research results is desirable, and hence has a normative character.

Together with the large variety of presently used definitions of different Open Access types, this normativity hinders a systematic investigation of the development of open availability of scholarly literature.

In this paper, we propose a non-normative definition of Open Access and its usage as a neutral, descriptive term in bibliometric studies and research on science.

To this end, we first specify what normative figures are commonly associated with the term Open Access and then develop a neutral definition. We further identify distinguishing characteristics of openly accessible literature, called dimensions, and derive a classification scheme into Open Access categories based on these dimensions.

Additionally, we present an operationalisation method to assign scientific publications to the respective categories in practice. Here, we describe useful data sources, which can be employed to gather the information needed for the classification of scholarly works according to the presented classification scheme.
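
As an illustration of what such an operationalisation might look like, here is a hedged sketch of a rule-based classifier over hypothetical dimensions (host, licence, version). The dimension and category names are stand-ins chosen for this example, not the scheme proposed in the paper.

```python
# Illustrative sketch only: the paper derives OA categories from "dimensions"
# of openly accessible literature, but does not fix the scheme used here.
# The dimensions (host, licence, version) and category labels below are
# hypothetical stand-ins for how such a rule-based classification could work.
from dataclasses import dataclass

@dataclass
class Copy:
    host: str     # e.g. "publisher" or "repository"
    licence: str  # e.g. "CC-BY" or "none"
    version: str  # e.g. "published" or "accepted"

def classify(copy: Copy) -> str:
    if copy.host == "publisher":
        if copy.licence.startswith("CC"):
            return "publisher OA (open licence)"
        return "publisher free-to-read"
    if copy.host == "repository":
        if copy.version == "published":
            return "repository OA (published version)"
        return "repository OA (manuscript version)"
    return "unclassified"

print(classify(Copy(host="repository", licence="none", version="accepted")))
# -> repository OA (manuscript version)
```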

URL : https://arxiv.org/abs/1910.11568

Different Preservation Levels: The Case of Scholarly Digital Editions

Authors : Elias Oltmanns, Tim Hasler, Wolfgang Peters-Kottig, Heinz-Günter Kuper

Ensuring the long-term availability of research data forms an integral part of data management services. Where OAIS-compliant digital preservation has been established in recent years, the services in almost all cases aim at preserving file-based objects.

In the Digital Humanities, research data is often represented in highly structured aggregations, such as Scholarly Digital Editions. Naturally, scholars would like their editions to remain functionally complete as long as possible.

Besides standard components like web servers, the presentation typically relies on project-specific code interacting with client software such as web browsers. The latter in particular is subject to rapid change over time, which invariably makes such environments awkward to maintain once funding has ended.

Pragmatic approaches have to be found in order to balance the curation effort and the maintainability of access to research data over time. A sketch of four potential service levels aiming at the long-term availability of research data in the humanities is outlined: (1) Continuous Maintenance, (2) Application Conservation, (3) Application Data Preservation, and (4) Bitstream Preservation.

Since the first is too costly and the last hardly satisfactory in general, we suggest that infrastructure providers concentrate their services on levels 2 and 3. We explain the strengths and limitations of these levels using the example of two Scholarly Digital Editions.
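
The four levels lend themselves to a simple enumeration. The sketch below encodes only what the abstract states; the one-line glosses are our reading of the level names, not the authors' definitions.

```python
# Sketch encoding the four service levels named in the paper. Only the level
# names and the recommendation come from the abstract; the one-line glosses
# are our reading of the names, not the authors' definitions.
from enum import IntEnum

class PreservationLevel(IntEnum):
    CONTINUOUS_MAINTENANCE = 1         # live edition kept fully working (too costly)
    APPLICATION_CONSERVATION = 2       # the application itself is kept runnable
    APPLICATION_DATA_PRESERVATION = 3  # the structured research data is preserved
    BITSTREAM_PRESERVATION = 4         # raw bits only (hardly satisfactory)

# The authors suggest infrastructure providers concentrate on levels 2 and 3.
RECOMMENDED = {
    PreservationLevel.APPLICATION_CONSERVATION,
    PreservationLevel.APPLICATION_DATA_PRESERVATION,
}
```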

DOI : http://doi.org/10.5334/dsj-2019-051

From Academia to Software Development: Publication Citations in Source Code Comments

Authors : Akira Inokuchi, Yusuf Sulistyo Nugroho, Fumiaki Konishi, Hideaki Hata, Akito Monden, Kenichi Matsumoto

Academic publications have long been evaluated by their impact on research communities, measured through citation counts. The impact of academic publications on industry, by contrast, has rarely been studied.

This paper investigates how academic publications contribute to software development by analyzing publication citations in source code comments in open source software repositories.

We propose an automated approach to detecting academic publications based on Named Entity Recognition, achieving a detection accuracy of 0.90 F1. We conduct a large-scale study of publication citations using 319,438,977 comments collected from 25,925 active repositories written in seven programming languages.
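
As a simplified stand-in for the paper's NER-based detector, the sketch below flags comments that cite a publication via a DOI or an arXiv identifier, and includes the standard F1 formula used to report detection accuracy. The regex approach is ours for illustration; it does not reproduce the paper's method or its 0.90 F1.

```python
# Simplified stand-in for the paper's NER-based detector: flag source-code
# comments that cite a publication via a DOI or an arXiv identifier. This
# regex is ours for illustration and does not reproduce the reported 0.90 F1.
import re

CITATION = re.compile(
    r"(10\.\d{4,9}/\S+)"           # DOI, e.g. 10.1371/journal.pone.0223116
    r"|(arXiv:\d{4}\.\d{4,5})",    # arXiv ID, post-2007 scheme
    re.IGNORECASE,
)

def cites_publication(comment: str) -> bool:
    """Return True if a source-code comment appears to cite a publication."""
    return CITATION.search(comment) is not None

print(cites_publication("# See Smith et al., doi:10.1371/journal.pone.0223116"))  # True

# Detection accuracy is reported as F1, the harmonic mean of precision and recall:
def f1(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)
```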

Our findings indicate that academic publications can serve as knowledge sources for software development, and that there are potential issues with cited knowledge becoming obsolete.

URL : https://arxiv.org/abs/1910.06932