Prestigious Science Journals Struggle to Reach Even Average Reliability

Author : Björn Brembs

The journal in which a scientist publishes is considered one of the most crucial factors in determining their career. The underlying common assumption is that only the best scientists manage to publish in a highly selective tier of the most prestigious journals.

However, data from several lines of evidence suggest that the methodological quality of scientific experiments does not increase with journal rank. On the contrary, an accumulating body of evidence suggests the inverse: methodological quality and, consequently, the reliability of published research in several fields may be decreasing with increasing journal rank.

The data supporting these conclusions circumvent confounding factors such as the increased readership and scrutiny these journals attract, focusing instead on quantifiable indicators of methodological soundness in the published literature and relying, in part, on semi-automated data extraction from often thousands of publications at a time.

As evidence accumulated over the last decade, so did the realization that the very existence of scholarly journals, due to their inherent hierarchy, constitutes one of the major threats to publicly funded science: hiring, promoting, and funding scientists who publish unreliable science eventually erodes public trust in science.

URL : Prestigious Science Journals Struggle to Reach Even Average Reliability

DOI : https://doi.org/10.3389/fnhum.2018.00037

Evolution of the scholarly mega-journal, 2006–2017

Author : Bo-Christer Björk

Mega-journals are a new kind of scholarly journal made possible by electronic publishing. They are open access (OA) and funded by charges that authors pay for publishing services. What distinguishes mega-journals from other OA journals is, in particular, peer review that focuses only on scientific trustworthiness.

Such journals can easily publish thousands of articles per year, since there is no need to filter articles to fit a restricted number of slots in the publishing schedule. This study updates earlier longitudinal studies of the evolution of mega-journals and their publication volumes.

After very rapid growth in 2010–2013, the increase in overall article volumes has slowed. Mega-journals are also increasingly dependent on Chinese authors for sustained growth; they now contribute 25% of all articles in such journals.

There has also been an internal shift in market shares. PLOS ONE, which totally dominated mega-journal publishing in the early years, currently publishes around one-third of all articles. Scientific Reports has grown rapidly since 2014 and is now the biggest journal.

URL : Evolution of the scholarly mega-journal, 2006–2017

DOI : https://doi.org/10.7717/peerj.4357

Can your paper evade the editor's axe? Towards an AI assisted peer review system

Authors : Tirthankar Ghosal, Rajeev Verma, Asif Ekbal, Sriparna Saha, Pushpak Bhattacharyya, Srinivasa Satya Sameer Kumar Chivukula, Georgios Tsatsaronis, Pascal Coupet, Michelle Gregory

This work is an exploratory study of how we could progress a step towards an AI-assisted peer review system. The proposed approach is an ambitious attempt to automate desk rejection, a phenomenon prevalent in academic peer review.

In this investigation, we first attempt to decipher the possible reasons for rejection of a scientific manuscript at the editor's desk. To address those causes, we combine a suite of information extraction techniques, clustering, and citation analysis to formulate a supervised solution to the identified problems.

The proposed approach addresses two important causes of rejection: i) a paper being rejected for being out of scope, and ii) a paper being rejected due to poor quality. We extract several features to quantify the quality of a paper and its degree of in-scopeness, drawing on keyword search, citation analysis, the reputations of authors and affiliations, and similarity with respect to accepted papers.

The features are then fed to standard machine-learning classifiers to develop an automated system. On a sizeable test set, our generic approach yields promising results across three different journals.
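
As a rough illustration of the pipeline the abstract describes (hand-crafted scope and quality features fed to a standard supervised classifier), here is a minimal sketch in Python. The feature set, the random placeholder data, and the choice of scikit-learn's logistic regression are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of a desk-rejection classifier (illustrative only).
# Feature names and the classifier choice are assumptions; the paper
# combines information extraction, clustering, and citation analysis.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Each row: hypothetical per-manuscript features, e.g.
# [scope_similarity, citation_count, author_reputation, quality_score]
X = np.random.rand(200, 4)          # stand-in for extracted features
y = np.random.randint(0, 2, 200)    # 1 = desk-rejected, 0 = sent to review

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```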

The study points to a renewed interest for the research community in examining rejected papers and motivates further work towards an automated peer review system.

URL : https://arxiv.org/abs/1802.01403

The Social Structure of Consensus in Scientific Review

Authors : Misha Teplitskiy, Daniel Acuna, Aida Elamrani-Raoult, Konrad Kording, James Evans

Personal connections between creators and evaluators of scientific works are ubiquitous, and the possibility of bias ever-present. Although connections have been shown to bias prospective judgments of (uncertain) future performance, it is unknown whether such biases occur in the much more concrete task of assessing the scientific validity of already completed work, and if so, why.

This study presents evidence that personal connections between authors and reviewers of neuroscience manuscripts are associated with biased judgments and explores the mechanisms driving the effect.

Using reviews from 7,981 neuroscience manuscripts submitted to the journal PLOS ONE, which instructs reviewers to evaluate manuscripts only on scientific validity, we find that reviewers favored authors close to them in the co-authorship network by ~0.11 points on a 1.0–4.0 scale for each step of proximity.
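
For intuition, proximity of this kind is commonly operationalized as the shortest-path distance between author and reviewer in a co-authorship graph. A minimal sketch with networkx follows; the graph and names are invented, and the paper's exact construction may differ.

```python
# Sketch: co-authorship proximity as shortest-path distance (illustrative).
import networkx as nx

G = nx.Graph()
# An edge means two researchers have co-authored at least one paper.
G.add_edges_from([("author", "colleague"),
                  ("colleague", "reviewer_A"),
                  ("reviewer_A", "reviewer_B")])

# Proximity step count between the manuscript author and each reviewer.
for reviewer in ("reviewer_A", "reviewer_B"):
    d = nx.shortest_path_length(G, "author", reviewer)
    print(reviewer, "is", d, "steps from the author")
```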

PLOS ONE’s validity-focused review and the substantial amount of favoritism shown by distant vs. very distant reviewers, both of whom should have little to gain from nepotism, point to the central role of substantive disagreements between scientists in different “schools of thought.”

The results suggest that removing bias from peer review cannot be accomplished simply by recusing closely connected reviewers, and they highlight the value of recruiting reviewers embedded in diverse professional networks.

URL : https://arxiv.org/abs/1802.01270

Gathering the Needles: Evaluating the Impact of Gold Open Access Content With Traditional Subscription Journals

Authors : Alison Boba, Jill Emery

Utilizing the Project COUNTER Release 4 JR1-GOA report, two librarians compare gold open access usage data with the journal package subscription usage represented in the JR1 reports. This paper outlines the methodology and study undertaken at the Portland State University Library and the University of Nebraska Medical Center Library, which used these reports for the first time.

The initial outcomes of the study are provided in tables for 2014 and 2015. The intent of the study was to give both institutions a baseline for further study. In addition, some ideas are offered for how these reports can be used in vendor negotiations going forward.
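
As a hedged sketch of the kind of comparison such reports enable, the snippet below merges a JR1-style table of total full-text requests with a JR1-GOA-style table of gold OA requests to estimate an OA share per journal. The table layout and all numbers are invented for illustration, not taken from the study.

```python
# Sketch: combining COUNTER JR1 (total usage) with JR1-GOA (gold OA usage)
# to estimate the OA share of downloads per journal. All numbers invented.
import pandas as pd

jr1 = pd.DataFrame({"journal": ["J1", "J2", "J3"],
                    "total_requests": [1200, 800, 300]})
goa = pd.DataFrame({"journal": ["J1", "J3"],
                    "goa_requests": [150, 120]})

merged = jr1.merge(goa, on="journal", how="left").fillna(0)
merged["goa_share"] = merged["goa_requests"] / merged["total_requests"]
print(merged.sort_values("goa_share", ascending=False))
```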

URL : Gathering the Needles: Evaluating the Impact of Gold Open Access Content With Traditional Subscription Journals

DOI : http://dx.doi.org/10.1629/uksg.291

Allegation of scientific misconduct increases Twitter attention

Authors : Lutz Bornmann, Robin Haunschild

The web-based microblogging system Twitter is a very popular altmetrics source for measuring the broader impact of science. In this case study, we demonstrate how problematic the use of Twitter data for research evaluation can be, even when the aspiration is lowered from measuring impact to measuring attention.

We collected Twitter data for the paper published by Yamamizu et al. (2017), whose main figures an investigative committee found to be fraudulent.

URL : https://arxiv.org/abs/1802.00606

A scoping review of comparisons between abstracts and full reports in primary biomedical research

Authors : Guowei Li, Luciana P. F. Abbade, Ikunna Nwosu, Yanling Jin, Alvin Leenus, Muhammad Maaz, Mei Wang, Meha Bhatt, Laura Zielinski, Nitika Sanger, Bianca Bantoto, Candice Luo, Ieta Shams, Hamnah Shahid, Yaping Chang, Guangwen Sun, Lawrence Mbuagbaw, Zainab Samaan, Mitchell A. H. Levine, Jonathan D. Adachi, Lehana Thabane

Background

Evidence shows that research abstracts are commonly inconsistent with their corresponding full reports, and may mislead readers.

In this scoping review, which is part of our series on the state of reporting of primary biomedical research, we summarized evidence from systematic reviews and surveys to investigate the current state of inconsistent abstract reporting and to evaluate factors associated with improved reporting, by comparing abstracts with their full reports.

Methods

We searched EMBASE, Web of Science, MEDLINE, and CINAHL from 1 January 1996 to 30 September 2016 to retrieve eligible systematic reviews and surveys. Our primary outcome was the level of inconsistency between abstracts and corresponding full reports, expressed as a percentage (with a lower percentage indicating better reporting) or as a categorical rating (such as major/minor difference, or high/medium/low inconsistency), as reported by the authors.

We used medians and interquartile ranges to describe the level of inconsistency across studies. No quantitative syntheses were conducted. Data from the included systematic reviews and surveys were summarized qualitatively.
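
To make the summary statistic concrete, the following minimal sketch computes a median and interquartile range over per-study inconsistency percentages; the input values are invented for illustration, not the review's data.

```python
# Sketch: median and interquartile range of per-study inconsistency levels.
# The percentages below are invented placeholders.
import numpy as np

inconsistency = np.array([4, 11, 14, 22, 35, 39, 41, 50, 54, 63, 78])  # %
median = np.median(inconsistency)
q1, q3 = np.percentile(inconsistency, [25, 75])
print(f"median = {median:.0f}%, IQR = {q1:.0f}%-{q3:.0f}%")
```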

Results

Seventeen studies that addressed this topic were included. The level of inconsistency was reported to have a median of 39% (interquartile range: 14%–54%), and to range from 4% to 78%. In some studies that separated major from minor inconsistency, the level of major inconsistency ranged from 5% to 45% (median: 19%; interquartile range: 7%–31%), which included discrepancies in specifying the study design or sample size, designating a primary outcome measure, presenting main results, and drawing a conclusion.

A longer time interval between conference abstracts and the publication of full reports was the only factor marginally or significantly associated with an increased likelihood of reporting inconsistencies.

Conclusions

This scoping review revealed that abstracts are frequently inconsistent with full reports, and efforts are needed to improve the consistency of abstract reporting in the primary biomedical community.

URL : A scoping review of comparisons between abstracts and full reports in primary biomedical research

DOI : https://doi.org/10.1186/s12874-017-0459-5