Is there agreement on the prestige of scholarly book publishers in the Humanities? DELPHI over survey results

Authors : Elea Giménez-Toledo, Jorge Mañana-Rodríguez

Although evaluation systems play an important role in supporting assessment processes, criticism of these systems and of the categorizations they use is frequent. Since acceptance by the scientific community is essential if rankings or categorizations are to be used in research evaluation, the aim of this paper is to test the results of a ranking of scholarly book publishers’ prestige, Scholarly Publishers Indicators (SPI hereafter).

SPI is a public, survey-based ranking of scholarly publishers’ prestige (among other indicators). The latest version of the ranking (2014) was based on an expert consultation with a large number of respondents.

In order to validate and refine the results for the Humanities fields proposed by the assessment agencies, a Delphi technique was applied to the initial rankings with a panel of randomly selected experts.

The results show an equalizing effect of the technique on the initial rankings, as well as a high degree of concordance between its theoretical aim (consensus among experts) and its empirical results (summarized with the Gini index).
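As a rough illustration of the kind of consensus measure the abstract mentions, the sketch below computes the Gini index of a distribution of prestige scores before and after the Delphi rounds; a lower index after the rounds would reflect the equalizing effect described above. The function is a standard Gini coefficient, and the sample scores are hypothetical, not data from the paper.

```python
def gini(values):
    """Gini index of a list of non-negative scores.

    0 means all scores are equal; values close to 1 mean
    the scores are concentrated on a few items.
    """
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard formula on ascending-ordered scores:
    # G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n, i = 1..n
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return (2.0 * weighted) / (n * total) - (n + 1.0) / n

# Hypothetical prestige scores for the same publishers
# before and after the Delphi rounds.
initial_round = [9.1, 7.4, 6.0, 3.2, 1.8, 0.9]
final_round = [7.0, 6.5, 5.8, 4.9, 4.1, 3.6]

print(f"Gini before Delphi: {gini(initial_round):.3f}")
print(f"Gini after Delphi:  {gini(final_round):.3f}")  # lower => more equalized
```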

The resulting categorization is understood as more conclusive and more likely to be accepted by those under evaluation.

URL : https://arxiv.org/abs/1705.04517

Novel processes and metrics for a scientific evaluation rooted in the principles of science

Authors : Michaël Bon, Michael Taylor, Gary S McDowell

Scientific evaluation is a determinant of how scientists, institutions and funders behave, and as such is a key element in the making of science. In this article, we propose an alternative to the current norm of evaluating research with journal rank.

Following a well-defined notion of scientific value, we introduce qualitative processes that can also be quantified and give rise to meaningful and easy-to-use article-level metrics.

In our approach, the goal of a scientist is transformed from convincing an editorial board through a vertical process to convincing peers through a horizontal one. We argue that such an evaluation system naturally provides the incentives and logic needed to constantly promote quality, reproducibility, openness and collaboration in science.
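As a purely illustrative sketch of what such horizontal, article-level metrics could look like, the code below tracks two hypothetical indicators for an article: the share of reviewing peers who endorse it, and the number of self-journals that curate it. The data model and field names are assumptions made for this example, not the platform’s actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Article:
    """Hypothetical record of horizontal, peer-driven evaluation."""
    title: str
    endorsements: int = 0  # peers judging the article scientifically sound
    reviews: int = 0       # total peers who reviewed it
    # Self-journals that chose to include the article (assumed structure).
    curating_journals: set = field(default_factory=set)

    def endorsement_rate(self):
        """Share of reviewers endorsing the article (assumed metric)."""
        return self.endorsements / self.reviews if self.reviews else 0.0

    def curation_count(self):
        """Number of self-journals curating the article (assumed metric)."""
        return len(self.curating_journals)

article = Article(title="Example preprint", endorsements=14, reviews=17,
                  curating_journals={"alice", "bob", "carol"})
print(f"endorsement rate: {article.endorsement_rate():.0%}")  # 82%
print(f"curation count:   {article.curation_count()}")        # 3
```

The point of the sketch is the design choice the abstract argues for: both numbers attach to the article itself and aggregate many independent peer judgments, rather than deriving from the rank of the venue that published it.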

The system is legally and technically feasible and can gradually lead to the self-organized reappropriation of the scientific process by the scholarly community and its institutions. We propose an implementation of our evaluation system with the platform “the Self-Journals of Science” (www.sjscience.org).

URL : http://www.sjscience.org/article?id=580