Can your paper evade the editors axe? Towards an AI assisted peer review system

Authors : Tirthankar Ghosal, Rajeev Verma, Asif Ekbal, Sriparna Saha, Pushpak Bhattacharyya, Srinivasa Satya Sameer Kumar Chivukula, Georgios Tsatsaronis, Pascal Coupet, Michelle Gregory

This work is an exploratory study of how we could move a step closer to an AI-assisted peer-review system. The proposed approach is an ambitious attempt to automate the desk-rejection phenomenon prevalent in academic peer review.

In this investigation, we first attempt to decipher the possible reasons for rejection of a scientific manuscript from the editor's desk. To address those causes, we combine a range of information extraction techniques, clustering, and citation analysis to formulate a supervised solution to the identified problems.

The proposed approach addresses two important aspects of rejection: i) a paper being rejected because it is out of scope, and ii) a paper being rejected due to poor quality. We extract several features to quantify the quality of a paper and its degree of in-scopeness, drawing on keyword search, citation analysis, the reputation of authors and affiliations, and similarity with respect to accepted papers.

The features are then fed to standard machine-learning classifiers to build an automated system. On a reasonably sized test set, our generic approach yields promising results across three different journals.
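
As a rough illustration only (not the authors' implementation), the sketch below shows how hand-crafted scope and quality features of this kind might be fed to a standard supervised classifier; the feature names and toy data are hypothetical.

```python
# Minimal sketch of the described pipeline: hand-crafted quality/scope
# features fed to an off-the-shelf classifier. All values are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Each row: [keyword overlap with journal scope, citation count of references,
#            author reputation, affiliation reputation,
#            similarity to previously accepted papers]
X = np.array([
    [0.82, 35, 0.7, 0.6, 0.78],   # likely in scope, reasonable quality
    [0.15,  4, 0.2, 0.3, 0.12],   # likely out of scope / low quality
    [0.64, 21, 0.5, 0.5, 0.55],
    [0.30,  8, 0.3, 0.4, 0.25],
])
y = np.array([1, 0, 1, 0])        # 1 = sent to review, 0 = desk-rejected

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=2)   # toy cross-validation
print("mean accuracy:", scores.mean())
```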

The study also points to a potential renewal of the research community's interest in studying rejected papers, and encourages progress towards an automated peer-review system.

URL : https://arxiv.org/abs/1802.01403

The Social Structure of Consensus in Scientific Review

Authors : Misha Teplitskiy, Daniel Acuna, Aida Elamrani-Raoult, Konrad Kording, James Evans

Personal connections between creators and evaluators of scientific works are ubiquitous, and the possibility of bias ever-present. Although connections have been shown to bias prospective judgments of (uncertain) future performance, it is unknown whether such biases occur in the much more concrete task of assessing the scientific validity of already completed work, and if so, why.

This study presents evidence that personal connections between authors and reviewers of neuroscience manuscripts are associated with biased judgments and explores the mechanisms driving the effect.

Using reviews from 7,981 neuroscience manuscripts submitted to the journal PLOS ONE, which instructs reviewers to evaluate manuscripts only on scientific validity, we find that reviewers favored authors close in the co-authorship network by ~0.11 points on a 1.0–4.0 scale for each step of proximity.

PLOS ONE’s validity-focused review and the substantial amount of favoritism shown by distant vs. very distant reviewers, both of whom should have little to gain from nepotism, point to the central role of substantive disagreements between scientists in different “schools of thought.”

The results suggest that removing bias from peer review cannot be accomplished simply by recusing the closely-connected reviewers, and highlight the value of recruiting reviewers embedded in diverse professional networks.

URL : https://arxiv.org/abs/1802.01270

Gathering the Needles: Evaluating the Impact of Gold Open Access Content With Traditional Subscription Journals

Authors : Alison Boba, Jill Emery

Utilizing the Project COUNTER Release 4 JR1-GOA report, two librarians explore gold open access usage data in comparison to the journal package subscription usage represented in the JR1 reports. This paper outlines the methodology and study undertaken at the Portland State University Library and the University of Nebraska Medical Center Library using these reports for the first time.

The initial outcomes of the study are provided in various tables for 2014 and 2015. The intent of the study was to provide both institutions with a baseline from which to do further study. In addition, some ideas are given for how these reports can be used in vendor negotiations going forward.

URL : Gathering the Needles: Evaluating the Impact of Gold Open Access Content With Traditional Subscription Journals

DOI : http://dx.doi.org/10.1629/uksg.291

Allegation of scientific misconduct increases Twitter attention

Authors : Lutz Bornmann, Robin Haunschild

The web-based microblogging system Twitter is a very popular altmetrics source for measuring the broader impact of science. In this case study, we demonstrate how problematic the use of Twitter data for research evaluation can be, even when the aspiration is lowered from measuring impact to measuring attention.

We collected the Twitter data for the paper published by Yamamizu et al. (2017). An investigative committee found that the main figures in the paper are fraudulent.

URL : https://arxiv.org/abs/1802.00606

A scoping review of comparisons between abstracts and full reports in primary biomedical research

Authors : Guowei Li, Luciana P. F. Abbade, Ikunna Nwosu, Yanling Jin, Alvin Leenus, Muhammad Maaz, Mei Wang, Meha Bhatt, Laura Zielinski, Nitika Sanger, Bianca Bantoto, Candice Luo, Ieta Shams, Hamnah Shahid, Yaping Chang, Guangwen Sun, Lawrence Mbuagbaw, Zainab Samaan, Mitchell A. H. Levine, Jonathan D. Adachi, Lehana Thabane

Background

Evidence shows that research abstracts are commonly inconsistent with their corresponding full reports, and may mislead readers.

In this scoping review, which is part of our series on the state of reporting of primary biomedical research, we summarized the evidence from systematic reviews and surveys comparing abstracts with their full reports, to investigate the current state of inconsistent abstract reporting and to evaluate factors associated with improved reporting.

Methods

We searched EMBASE, Web of Science, MEDLINE, and CINAHL from 1 January 1996 to 30 September 2016 to retrieve eligible systematic reviews and surveys. Our primary outcome was the level of inconsistency between abstracts and their corresponding full reports, expressed either as a percentage (with a lower percentage indicating better reporting) or as a categorical rating (such as major/minor difference, or high/medium/low inconsistency), as reported by the authors.

We used medians and interquartile ranges to describe the level of inconsistency across studies. No quantitative syntheses were conducted. Data from the included systematic reviews and surveys were summarized qualitatively.
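
As a small sketch of the summary statistics used here, the snippet below computes a median and interquartile range over per-study inconsistency percentages; the values are illustrative placeholders chosen to roughly reproduce the summary reported in the Results, not the included studies' actual data.

```python
# Summarize per-study inconsistency levels (percentages) with median and IQR.
import numpy as np

inconsistency = np.array([4, 12, 14, 27, 39, 45, 54, 61, 78])  # % per study (made up)

median = np.median(inconsistency)
q1, q3 = np.percentile(inconsistency, [25, 75])
print(f"median = {median}%, interquartile range = {q1}%-{q3}%")
```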

Results

Seventeen studies that addressed this topic were included. The level of inconsistency was reported to have a median of 39% (interquartile range: 14%–54%), and to range from 4% to 78%. In some studies that separated major from minor inconsistency, the level of major inconsistency ranged from 5% to 45% (median: 19%, interquartile range: 7%–31%), which included discrepancies in specifying the study design or sample size, designating a primary outcome measure, presenting main results, and drawing a conclusion.

A longer time interval between the conference abstract and the publication of the full report was the only factor found to be marginally or significantly associated with an increased likelihood of reporting inconsistencies.

Conclusions

This scoping review revealed that abstracts are frequently inconsistent with full reports, and efforts are needed to improve the consistency of abstract reporting in the primary biomedical community.

URL : A scoping review of comparisons between abstracts and full reports in primary biomedical research

DOI : https://doi.org/10.1186/s12874-017-0459-5

Opening Academic Publishing – Development and application of systematic evaluation criteria

Authors : Anna Björk, Juho-Matti Paavola, Teemu Ropponen, Mikael Laakso, Leo Lahti

This report summarizes the development of a standardized scorecard for evaluating the openness of academic publishers. The assessment was completed in January 2018 as part of the Open Science and Research Initiative of the Finnish Ministry of Education and Culture.

The project complements the previous reports published by the Open Science and Research Initiative and the Finnish Ministry of Education and Culture, which have covered (i) the openness of universities and polytechnics, (ii) the overall situation of OA publishing costs in Finland, and (iii) research organization and research funding organizations, including selected European research funders.

The project mapped and evaluated the openness of selected major academic publishers: Association for Computing Machinery (ACM), American Chemical Society (ACS), Elsevier, Institute of Electrical and Electronics Engineers (IEEE), Lippincott Williams & Wilkins (LWW), Sage, Springer Nature, Taylor & Francis, and Wiley-Blackwell. The dimensions of publisher openness were summarized in a scorecard of seven key factors, providing a new tool for systematic and standardized evaluation.

We used data from the publisher websites to compare the key factors of openness, and the publishers were given a chance to comment on the collected information. As complementary sources, we utilized data from commonly acknowledged open databases: the Directory of Open Access Journals (DOAJ), Gold OA Journals 2011-2016 (GOAJ2), Scopus (title list + Scimago), and Sherpa/Romeo.

The main results include the scorecard and the evaluation of openness of the selected major academic publishers. These are based on seven key factors: (i) Fraction of open access (OA) journals and their articles of the total publication output, (ii) costs of OA publishing (article processing charges, APC), (iii) use of Creative Commons (CC) licensing, (iv) self-archiving policies, (v) access to text and data mining (TDM), (vi) openness of citation data, and (vii) accessibility of information relating to OA practices.

To look beyond the publisher level into journal-level practices, we also sampled individual journals. We used the samples to discuss the distribution of journals according to APCs, their licensing, and three impact metrics (CiteScore 2016, Scimago Journal & Country Rank (SJR) 2016, and Source Normalized Impact per Paper (SNIP) 2016).

The evaluation of the selected publishers with the scorecard indicates, for example, that the fraction of OA journals and OA articles in the total publication output is low within this group. In our sample of journals, the most expensive OA journals also tend to carry the highest impact metrics.

A definite view on the matter, however, would require more extensive data and further research. We conclude by discussing key aspects and complexities in quantitative evaluation and in the design of a standardized assessment of publisher openness, and we also note further factors that could be included in future versions of the scorecard.

URL : Opening Academic Publishing – Development and application of systematic evaluation criteria

Alternative location : https://avointiede.fi/documents/10864/12232/OPENING+ACADEMIC+PUBLISHING+.pdf/a4358f81-88cf-4915-92db-88335092c992

The counting house: measuring those who count. Presence of Bibliometrics, Scientometrics, Informetrics, Webometrics and Altmetrics in the Google Scholar Citations, ResearcherID, ResearchGate, Mendeley & Twitter

Authors : Alberto Martin-Martin, Enrique Orduna-Malea, Juan M. Ayllon, Emilio Delgado Lopez-Cozar

Following the metamorphosis of the model of scientific communication (from the Gutenberg galaxy to the Web galaxy), a change in the model and methods of scientific evaluation is also taking place.

A set of new scientific tools now provides a variety of indicators that measure actions and interactions among scientists in the digital space, bringing new aspects of scientific communication to light.

In this work we present a method for capturing the structure of an entire scientific community (the Bibliometrics, Scientometrics, Informetrics, Webometrics, and Altmetrics community) and the main agents that are part of it (scientists, documents, and sources) through the lens of Google Scholar Citations.

Additionally, we compare these author portraits to the ones offered by other profile or social platforms currently used by academics (ResearcherID, ResearchGate, Mendeley, and Twitter), in order to test their degree of use, completeness, reliability, and the validity of the information they provide.

A sample of 814 authors (researchers in Bibliometrics with a public profile created in Google Scholar Citations) was subsequently searched in the other platforms, collecting the main indicators computed by each of them.

The data collection was carried out in September 2015. Spearman correlations were applied to these indicators (a total of 31), and a Principal Component Analysis was carried out in order to reveal the relationships among metrics and platforms, as well as the possible existence of metric clusters.
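
As a hedged sketch of this kind of analysis (not the authors' code), the snippet below runs pairwise Spearman correlations over a set of author-level indicators and a Principal Component Analysis on the same matrix; the sample size and indicator count follow the study, but the data are random placeholders.

```python
# Spearman correlations among author-level indicators, followed by PCA
# to look for clusters of related metrics/platforms. Data are synthetic.
import numpy as np
from scipy.stats import spearmanr
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_authors, n_indicators = 814, 31                # counts taken from the study
indicators = rng.lognormal(size=(n_authors, n_indicators))  # placeholder metric values

# Pairwise Spearman rank correlations among the 31 indicators (31 x 31 matrix)
rho, _ = spearmanr(indicators)

# PCA on standardized indicators to reveal groups of related metrics
pca = PCA(n_components=2)
components = pca.fit_transform(StandardScaler().fit_transform(indicators))
print(rho.shape, components.shape, pca.explained_variance_ratio_)
```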

URL : https://arxiv.org/abs/1602.02412