Peer reviewing: a private affair between the individual researcher and the publishing houses, or a responsibility of the university?

Authors : Leif Longva, Eirik Reierth, Lars Moksness, Bård Smedsrød

Peer reviewing is mandatory for scientific journals as quality control of submitted manuscripts, for universities to rank applicants for scientific positions, and for funding agencies to rank grant applications.

Despite this deep dependence on peer reviewing throughout the academic realm, universities exhibit a peculiar lack of interest in the activity.

The aim of this article is to show that by taking an active interest in peer reviewing, universities can take control of the management and policy shaping of scientific publishing, a regime presently controlled largely by the big publishing houses.

The benefits of gaining control of scientific publishing policy include the possibility to implement open access publishing and to reduce the unjustifiably high subscription rates currently charged by some of the major publishing houses.

A common international clean-up action is needed to move this pivotal element of scientific publishing from the dark hiding places of the scientific journals to where it should be managed: namely, at the universities.

In addition to the economic benefits, we postulate that placing peer reviewing at the universities will improve the quality of published research.


Is there agreement on the prestige of scholarly book publishers in the Humanities? DELPHI over survey results

Authors : Elea Giménez-Toledo, Jorge Mañana-Rodríguez

Although rankings and categorizations play an important role in supporting assessment processes, criticism of evaluation systems and the categorizations they use is frequent. Since acceptance by the scientific community is essential if rankings or categorizations are to be used in research evaluation, the aim of this paper is to test the results of a ranking of scholarly book publishers’ prestige, the Scholarly Publishers Indicators (SPI hereafter).

SPI is a public, survey-based ranking of scholarly publishers’ prestige (among other indicators). The latest version of the ranking (2014) was based on an expert consultation with a large number of respondents.

In order to validate and refine the results for the Humanities fields, as proposed by the assessment agencies, a Delphi technique was applied to the initial rankings with a panel of randomly selected experts.

The results show an equalizing effect of the technique on the initial rankings, as well as a high degree of concordance between its theoretical aim (consensus among experts) and its empirical results (summarized with the Gini index).

The resulting categorization can be considered more conclusive and more likely to be accepted by those under evaluation.
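The Gini index the authors use to summarize consensus is a standard inequality measure and can be computed directly. A minimal sketch in plain Python (illustrative only, not the authors' implementation; 0 means perfectly equal scores, values near 1 mean scores concentrated on few publishers):

```python
def gini(values):
    """Gini coefficient of a list of non-negative values.

    0.0 = perfect equality (all values identical);
    values approaching 1.0 = extreme concentration.
    """
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard formula via the rank-weighted sum of the sorted values
    cum = 0.0
    for i, x in enumerate(xs, start=1):
        cum += i * x
    return (2.0 * cum) / (n * total) - (n + 1.0) / n

# Uniform prestige scores give 0; a single dominant publisher gives a high value.
print(gini([1, 1, 1, 1]))   # -> 0.0
print(gini([0, 0, 0, 1]))   # -> 0.75
```

An "equalizing effect" of the Delphi rounds would show up as the Gini of the post-consensus scores dropping relative to the initial survey round.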


TrueReview: A Platform for Post-Publication Peer Review

Authors : Luca de Alfaro, Marco Faella

In post-publication peer review, scientific contributions are first published in open-access forums, such as arXiv or other digital libraries, and are subsequently reviewed and possibly ranked and/or evaluated.

Compared to the classical process of scientific publishing, in which review precedes publication, post-publication peer review leads to faster dissemination of ideas and publicly available reviews. The chief concern in post-publication reviewing is eliciting high-quality, insightful reviews from participants.

We describe the mathematical foundations and structure of TrueReview, an open-source tool we propose to build in support of post-publication review.

In TrueReview, the motivation to review is provided via an incentive system that promotes reviews and evaluations that are both truthful (they turn out to be correct in the long run) and informative (they provide significant new information).

TrueReview organizes papers in venues, allowing different scientific communities to set their own submission and review policies. These venues can be manually set up, or they can correspond to categories in well-known repositories such as arXiv.

The review incentives can be used to form a reviewer ranking that can be prominently displayed alongside papers in the various disciplines, thus offering a concrete benefit to reviewers. The paper evaluations, in turn, reward the authors of the most significant papers, both via an explicit paper ranking, and via increased visibility in search.



Understanding the Impact of Early Citers on Long-Term Scientific Impact

Authors : Mayank Singh, Ajay Jaiswal, Priya Shree, Arindam Pal, Animesh Mukherjee, Pawan Goyal

This paper explores an interesting new dimension of the challenging problem of predicting long-term scientific impact (LTSI), usually measured by the number of citations a paper accumulates in the long term.

It is well known that early citations (within 1-2 years after publication) positively affect a paper's LTSI. However, no prior work investigates whether the set of authors who bring in these early citations also affects its LTSI.

In this paper, we demonstrate for the first time the impact of these authors, whom we call early citers (EC), on the LTSI of a paper. This study of the complex dynamics of EC introduces a new paradigm in citation behavior analysis.

Using a massive computer science bibliographic dataset, we identify two distinct categories of EC: authors with a high overall publication/citation count in the dataset are labeled influential, and the rest non-influential.

We investigate three characteristic properties of EC and present an extensive analysis of how each category correlates with LTSI in terms of these properties. In contrast to popular perception, we find that influential EC negatively affect LTSI, possibly owing to attention stealing.

To motivate this, we present several representative examples from the dataset. A closer inspection of the collaboration network reveals that this stealing effect is more pronounced when an EC is closer to the authors of the paper under investigation.

As an intuitive use case, we show that incorporating EC properties into state-of-the-art supervised citation prediction models yields substantial performance gains.

In closing, we present an online portal that visualizes EC statistics along with the prediction results for a given query paper.
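The idea of augmenting a citation prediction model with early-citer features can be illustrated with a toy linear model. Everything below is synthetic and invented for illustration (the feature names, the "attention stealing" coefficient, and the data are assumptions, not the authors' dataset or model):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
# Hypothetical per-paper features: early-citation count, and the fraction
# of early citers who are "influential" (both synthetic).
early_citations = rng.poisson(5, n).astype(float)
influential_ec_frac = rng.uniform(0.0, 1.0, n)
# Synthetic LTSI: grows with early citations, dampened by influential EC,
# mimicking the attention-stealing effect described in the abstract.
ltsi = 3.0 * early_citations - 4.0 * influential_ec_frac + rng.normal(0.0, 1.0, n)

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_res = resid @ resid
    ss_tot = (y - y.mean()) @ (y - y.mean())
    return float(1.0 - ss_res / ss_tot)

# Baseline model: early-citation count only; augmented: + EC feature.
r2_baseline = r_squared(early_citations[:, None], ltsi)
r2_with_ec = r_squared(np.column_stack([early_citations, influential_ec_frac]), ltsi)
print(r2_baseline, r2_with_ec)
```

On data generated this way, the EC-augmented model explains more variance than the baseline, which is the kind of margin the paper reports for real prediction models.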


Le défi de l’interopérabilité entre plates-formes pour la construction de savoirs augmentés en sciences humaines et sociales

Authors : Camille Prime-Claverie, Annaïg Mahé

In the digital era, the research sector generates a proliferation of digitized content, and guaranteeing better access to research results is a goal that might seem easily achievable.

Yet, for a decade, the scholarly communication sector has been undergoing profound changes, and all of its actors have found it difficult to position themselves in this new context.

Information is now scattered across multiple platforms created at the initiative of different types of actors, whose positions and interests sometimes diverge.

In this largely distributed environment, achieving interoperability becomes a major challenge for better access to scientific and technical information, as it also enables data to circulate and be enriched.

This contribution addresses the circulation and sharing of scholarly literature in the humanities and social sciences in France, based on data harvestable via the OAI-PMH protocol.

It seeks to highlight what constitutes opportunities for, or obstacles to, the reuse, editorialization, and construction of augmented knowledge in this field.

The study focuses on five French platforms that provide scholarly documents in the humanities and social sciences (SHS) and on a service provider offering enrichment features.
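Harvesting via OAI-PMH, as studied above, works over plain HTTP with standard verbs such as `ListRecords`. A minimal sketch using only the Python standard library (the base URL and the response snippet are illustrative, not those of the platforms studied):

```python
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

DC_NS = "{http://purl.org/dc/elements/1.1/}"

def list_records_url(base_url, metadata_prefix="oai_dc", set_spec=None):
    """Build an OAI-PMH ListRecords request URL for a repository."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    if set_spec:
        params["set"] = set_spec
    return base_url + "?" + urlencode(params)

def titles_from_response(xml_text):
    """Extract Dublin Core titles from a ListRecords response body."""
    root = ET.fromstring(xml_text)
    return [t.text for t in root.iter(DC_NS + "title")]

# Hypothetical repository endpoint, for illustration only:
print(list_records_url("https://example.org/oai"))
# -> https://example.org/oai?verb=ListRecords&metadataPrefix=oai_dc
```

In practice a harvester would fetch that URL, parse the records with `titles_from_response`, and follow the `resumptionToken` the protocol returns for paginated result sets.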


On the origin of nonequivalent states: How we can talk about preprints

Authors : Cameron Neylon, Damian Pattinson, Geoffrey Bilder, Jennifer Lin

Increasingly, preprints are at the center of conversations across the research ecosystem. But disagreements remain about the role they play. Do they “count” for research assessment?

Is it OK to post preprints in more than one place? In this paper, we argue that these discussions often conflate two separate issues: the history of the manuscript and the status granted to it by different communities.

We propose a new model that distinguishes the characteristics of the object, its “state”, from the subjective “standing” granted to it by different communities.

This provides a way to discuss the differences in practice between communities, enabling more productive conversations and facilitating negotiation, while sharpening our focus on how different stakeholders can collectively improve the process of scholarly communication, not only for preprints but for other forms of scholarly contribution.



A Multi-dimensional Investigation of the Effects of Publication Retraction on Scholarly Impact

Authors : Xin Shuai, Isabelle Moulinier, Jason Rollins, Tonya Custis, Frank Schilder, Mathilda Edmunds

Over the past few decades, the rate of publication retractions has increased dramatically in academia. In this study, we investigate retractions from a quantitative perspective, aiming to answer two fundamental questions.

One, how do retractions influence the scholarly impact of retracted papers, authors, and institutions? Two, does this influence propagate to the wider academic community through scholarly associations?

Specifically, we analyzed a set of retracted articles indexed in Thomson Reuters Web of Science (WoS), and ran multiple experiments to compare changes in scholarly impact against a control set of non-retracted articles, authors, and institutions.

We further applied the Granger causality test to investigate whether different scientific topics are affected over time by retractions occurring within them.

Our results show two key findings: first, the scholarly impact of retracted papers and authors significantly decreases after retraction, and the most severe impact decrease correlates to retractions based on proven purposeful scientific misconduct; second, this retraction penalty does not seem to spread through the broader scholarly social graph, but instead has a limited and localized effect.

Our findings may provide useful insights for scholars or science committees to evaluate the scholarly value of papers, authors, or institutions related to retractions.
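The Granger causality test used above asks whether the past of one time series (here, retractions in a topic) improves prediction of another (the topic's impact). A minimal from-scratch sketch of the underlying F-test with NumPy (illustrative only; the function name, lag handling, and synthetic data are assumptions, not the authors' code):

```python
import numpy as np

def granger_f(y, x, lags=1):
    """F-statistic for the null 'x does not Granger-cause y' at a given lag order.

    Restricted model:   y_t ~ const + y_{t-1..t-L}
    Unrestricted model: y_t ~ const + y_{t-1..t-L} + x_{t-1..t-L}
    A large F suggests the past of x helps predict y beyond y's own past.
    """
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    n = len(y) - lags
    Y = y[lags:]
    # Build lagged design matrices, one column per lag
    ylags = np.column_stack([y[lags - k:len(y) - k] for k in range(1, lags + 1)])
    xlags = np.column_stack([x[lags - k:len(x) - k] for k in range(1, lags + 1)])
    ones = np.ones((n, 1))
    X_restricted = np.hstack([ones, ylags])
    X_unrestricted = np.hstack([ones, ylags, xlags])

    def rss(X):
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        resid = Y - X @ beta
        return float(resid @ resid)

    rss_r, rss_u = rss(X_restricted), rss(X_unrestricted)
    df_den = n - X_unrestricted.shape[1]
    return ((rss_r - rss_u) / lags) / (rss_u / df_den)

# Synthetic check: y is driven by the previous value of x, so F is large.
rng = np.random.default_rng(0)
x = rng.normal(size=300)
noise = rng.normal(size=300) * 0.1
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.8 * x[t - 1] + noise[t]
print(granger_f(y, x, lags=1))  # large F: x Granger-causes y
```

In the paper's setting, y would be a topic-level impact series and x the retraction series for the same topic; production analyses would typically use a library implementation (e.g. `statsmodels.tsa.stattools.grangercausalitytests`) rather than this sketch.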