The production of scientific and societal value in research evaluation: a review of societal impact assessment methods

Authors : Jorrit P Smit, Laurens K Hessels

Over the past two decades, several methods have been developed to evaluate the societal impact of research. Compared to the practical development of the field, the conceptual development is relatively weak.

This review article contributes to the latter by elucidating the theoretical aspects of the dominant methods for evaluating societal impact of research, in particular, their presuppositions about the relationship between scientific and societal value of research. We analyse 10 approaches to the assessment of the societal impact of research from a constructivist perspective.

The methods represent different understandings of knowledge exchange, which can be understood in terms of linear, cyclical, and co-production models. In addition, the evaluation methods use a variety of concepts for the societal value of research, which suggest different relationships with scientific value.

While some methods rely on a clear and explicit distinction between the two types of value, other methods, in particular Evaluative Inquiry, ASIRPA, Contribution Mapping, Public Value Mapping, and SIAMPI, consider the mechanisms for producing societal value integral to the research process.

We conclude that evaluation methods must strike a balance between demarcating societal value as a separate performance indicator for practical purposes and doing justice to the (constructivist) science studies’ findings about the integration of scientific and societal value of research.

Our analytic comparison of assessment methods can assist research evaluators in the conscious and responsible selection of an approach that fits with the object under evaluation. As evaluation actively shapes knowledge production, it is important not to use oversimplified concepts of societal value.

DOI : https://doi.org/10.1093/reseval/rvab002

The impact of geographical bias when judging scientific studies

Authors : Marta Kowal, Piotr Sorokowski, Emanuel Kulczycki, Agnieszka Żelaźniewicz

The beauty of science lies in its core assumption that it seeks to identify the truth, and that the truth stands alone, independent of the person who proclaims it. However, people’s proclivity to succumb to various stereotypes is well known, and the scientific world may not be especially immune to the tendency to judge a book by its cover.

An interesting example is geographical bias, that is, judgments distorted by the geographical origin of, among other things, a given work, rather than by its actual quality or value. Here, we tested whether both laypersons (N = 1532) and scientists (N = 480) are prone to geographical bias when rating scientific projects in one of three scientific fields (i.e., biology, philosophy, or psychology).

We found that all participants favored biology projects from the USA over those from China; in particular, expert biologists were more willing to grant further funding to Americans. In philosophy, however, laypersons rated Chinese projects as better than those from the USA. Our findings indicate that geographical biases affect the public perception of research and can influence the results of grant competitions.

DOI : https://doi.org/10.1007/s11192-021-04176-7

Comment sauver l’ouverture de la science ? l’évaluation

Auteur/Author : Denis Jerome

The worlds of research and of publishing both encourage making research results available to everyone, free of charge. The transition to open science is developing rapidly, but it is not without serious problems, which are not only budgetary but may also undermine research ethics and the proper functioning of research.

The key actors, namely researchers, whether individually or through learned societies and academies, must regain control of this transition by reconsidering the role of evaluation, which is the crux of the problem. It is the practice of evaluation that needs to be rethought.

URL : https://hal.archives-ouvertes.fr/hal-03291013

Researchers’ attitudes towards the h-index on Twitter 2007–2020: criticism and acceptance

Authors : Mike Thelwall, Kayvan Kousha

The h-index is an indicator of the scientific impact of an academic publishing career. Its hybrid publishing/citation nature and inherent bias against younger researchers, women, people in low-resourced countries, and those not prioritizing publishing arguably give it little value for most formal and informal research evaluations.
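
For readers unfamiliar with how the indicator is computed, the standard definition (due to Hirsch) is that a researcher has index h if h of their publications have each received at least h citations. The short Python sketch below illustrates this calculation; the function name and the sample citation counts are purely illustrative and not taken from the article.

```python
def h_index(citation_counts):
    """Return the largest h such that at least h papers
    have h or more citations each (Hirsch's definition)."""
    # Sort citation counts from highest to lowest.
    sorted_counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(sorted_counts, start=1):
        # The rank-th paper must itself have at least `rank` citations.
        if citations >= rank:
            h = rank
        else:
            break
    return h

# Illustrative example: ten papers with these citation counts
# yield an h-index of 5 (five papers cited at least five times).
print(h_index([25, 18, 12, 7, 6, 4, 3, 2, 1, 0]))  # -> 5
```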

Nevertheless, it is well-known by academics, used in some promotion decisions, and is prominent in bibliometric databases, such as Google Scholar. In the context of this apparent conflict, it is important to understand researchers’ attitudes towards the h-index.

This article used public tweets in English to analyse how scholars discuss the h-index in public: is it mentioned, are tweets about it positive or negative, and has interest decreased since its shortcomings were exposed?

The January 2021 Twitter Academic Research initiative was harnessed to download all English tweets mentioning the h-index from the 2006 start of Twitter until the end of 2020. The results showed a constantly increasing number of tweets.
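
As an illustration only, and not the authors’ actual pipeline (whose details are not given here), the sketch below shows how English tweets mentioning the h-index could have been retrieved at the time through the Twitter API v2 full-archive search endpoint offered to the Academic Research track; the query string, date range, and token handling are assumptions made for the example.

```python
import os
import requests

# Assumed setup: a bearer token for the (then-available) Twitter
# Academic Research track, stored in an environment variable.
BEARER_TOKEN = os.environ["TWITTER_BEARER_TOKEN"]
SEARCH_URL = "https://api.twitter.com/2/tweets/search/all"

def fetch_h_index_tweets(start="2006-03-21T00:00:00Z",
                         end="2021-01-01T00:00:00Z"):
    """Page through English tweets matching the phrase "h-index"."""
    headers = {"Authorization": f"Bearer {BEARER_TOKEN}"}
    params = {
        "query": '"h-index" lang:en',  # exact phrase, English only
        "start_time": start,
        "end_time": end,
        "max_results": 500,            # maximum page size for this endpoint
    }
    while True:
        response = requests.get(SEARCH_URL, headers=headers, params=params)
        response.raise_for_status()
        payload = response.json()
        for tweet in payload.get("data", []):
            yield tweet["text"]
        next_token = payload.get("meta", {}).get("next_token")
        if not next_token:
            break
        params["next_token"] = next_token

# Example use (subject to the endpoint's rate limits):
# total = sum(1 for _ in fetch_h_index_tweets())
```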

Whilst the most popular tweets unapologetically used the h-index as an indicator of research performance, 28.5% of tweets were critical of its simplistic nature and a further 8% joked about it. The results suggest that interest in the h-index is still increasing online, even though the scientists willing to evaluate the h-index in public tend to be critical of it.

Nevertheless, in limited situations it may be effective at succinctly conveying the message that a researcher has had a successful publishing career.

DOI : https://doi.org/10.1007/s11192-021-03961-8

Le partage des données vu par les chercheurs : une approche par la valeur

Auteur/Author : Violaine Rebouillat

This article focuses on understanding the logics involved in defining the value of research data, since these logics can influence the criteria that determine researchers’ motivation to share.

The methodological approach is based on a qualitative study, conducted as part of a doctoral research project, comprising 57 semi-structured interviews. Whereas existing work on research data has focused on the barriers to and motivations for sharing, the originality of this research lies in identifying the different prisms through which the question of data value affects the motivation and the decision to share them.

The analysis of the results shows that, across all fields, the value of data remains crystallized around publication and the symbolic recognition of the researcher’s work.

The results make it clear that the question of sharing runs up against a blind spot: the current framework of research evaluation, which places the scientific article at the heart of its apparatus.

This work thus helps to show that the future of data sharing depends on future alternative systems of research evaluation associated with open science.

URL : https://lesenjeux.univ-grenoble-alpes.fr/2021/varia/03-le-partage-des-donnees-vu-par-les-chercheurs-une-approche-par-la-valeur/

How faculty define quality, prestige, and impact in research

Authors : Esteban Morales, Erin McKiernan, Meredith T. Niles, Lesley Schimanski, Juan Pablo Alperin

Despite the calls for change, there is significant consensus that when it comes to evaluating publications, review, promotion, and tenure processes should aim to reward research that is of high “quality,” has an “impact,” and is published in “prestigious” journals.

Nevertheless, such terms are highly subjective, and it is challenging to ascertain precisely what such research looks like. Accordingly, this article responds to the question: how do faculty from universities in the United States and Canada define the terms quality, prestige, and impact?

We address this question by surveying 338 faculty members from 55 different institutions. This study’s findings highlight that, despite their highly varied definitions, faculty often describe these terms in overlapping ways. Additionally, the results show that the marked variance in definitions across faculty does not correspond to demographic characteristics.

This study’s results highlight the need for evaluation regimes that do not rely on ill-defined concepts.

DOI : https://doi.org/10.1101/2021.04.14.439880

Research impact evaluation and academic discourse

Author : Marta Natalia Wróblewska

The introduction of ‘impact’ as an element of assessment constitutes a major change in the construction of research evaluation systems. While various protocols of impact evaluation exist, the most articulated one was implemented as part of the British Research Excellence Framework (REF).

This paper investigates the nature and consequences of the rise of ‘research impact’ as an element of academic evaluation from the perspective of discourse. Drawing on linguistic pragmatics and Foucauldian discourse analysis, the study discusses shifts related to the so-called Impact Agenda in four stages, in chronological order: (1) the ‘problematization’ of the notion of ‘impact’, (2) the establishment of an ‘impact infrastructure’, (3) the consolidation of a new genre of writing, the impact case study, and (4) academics’ positioning practices towards the notion of ‘impact’, theorized here as the triggering of new practices of ‘subjectivation’ of the academic self.

The description of the basic functioning of the ‘discourse of impact’ is based on the analysis of two corpora: case studies submitted by a selected group of academics (linguists) to REF2014 (n = 78) and interviews (n = 25) with their authors.

Linguistic pragmatics is particularly useful in analyzing linguistic aspects of the data, while Foucault’s theory helps draw together findings from two datasets in a broader analysis based on a governmentality framework. This approach allows for more general conclusions on the practices of governing (academic) subjects within evaluation contexts.

DOI : https://doi.org/10.1057/s41599-021-00727-8