How should evaluation be? Is a good evaluation of research also just? Towards the implementation of good evaluation

Authors : Cinzia Daraio, Alessio Vaccari

In this paper we answer the question of how evaluation should be by proposing a good evaluation of research practices. A good evaluation of research practices, understood as social practices in MacIntyre's sense, should take into account researchers' stable motivations and character traits (i.e., their virtues).

We also show that a good evaluation is also just, beyond the narrow sense of fairness, because working on good research practices implies taking into account a broader sense of justice. We then propose the development of a knowledge base for the assessment of “good” evaluations of research practices, to be used to implement a questionnaire for the assessment of researchers' virtues.

Although the latter is a challenging task, ontologies and taxonomic knowledge, together with reasoning algorithms that can draw inferences from such knowledge, offer a way to test the consistency of the information reported in the questionnaire and to analyse the data gathered through it correctly and coherently.
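
The abstract does not detail the implementation, but a minimal sketch of how an ontology plus an off-the-shelf reasoner could flag contradictory questionnaire answers might look as follows, assuming Python with the owlready2 library (which bundles the HermiT reasoner and requires a Java runtime). All class, property, and respondent names here are hypothetical illustrations, not the authors' actual knowledge base:

```python
# Sketch: detect contradictory questionnaire answers with an OWL
# ontology and a reasoner. Hypothetical example, not the authors' code.
from owlready2 import (Thing, AllDisjoint, get_ontology,
                       sync_reasoner, OwlReadyInconsistentOntologyError)

onto = get_ontology("http://example.org/research-virtues.owl")

with onto:
    class Researcher(Thing): pass              # taxonomic knowledge
    class HonestResearcher(Researcher): pass
    class DishonestResearcher(Researcher): pass
    # The two profiles cannot overlap.
    AllDisjoint([HonestResearcher, DishonestResearcher])

    # Two questionnaire items classify the same respondent under
    # both (disjoint) profiles -- a contradiction to be detected.
    r1 = HonestResearcher("respondent_42")
    r1.is_a.append(DishonestResearcher)

try:
    with onto:
        sync_reasoner()  # run the HermiT reasoner over the knowledge base
    print("Questionnaire answers are consistent.")
except OwlReadyInconsistentOntologyError:
    print("Contradictory answers detected for:", r1.name)
```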

Finally, we describe the potential application usefulness of our proposal for the reform of current research assessment systems.

DOI : https://doi.org/10.1007/s11192-022-04329-2

Reshaping How Universities Can Evaluate the Research Impact of Open Humanities for Societal Benefit

Authors : Paul Longley Arthur, Lydia Hearn

During the twenty-first century, for the first time, the volume of digital data has surpassed the amount of analog data. As academic practices increasingly become digital, opportunities arise to reshape the future of scholarly communication through more accessible, interactive, open, and transparent methods that engage a far broader and more diverse public.

Yet despite these advances, the research performance of universities and public research institutes remains largely evaluated through publication and citation analysis rather than by public engagement and societal impact.

This article reviews how changes to bibliometric evaluations toward greater use of altmetrics, including social media mentions, could enhance uptake of open scholarship in the humanities.

In addition, the article highlights current challenges faced by the open scholarship movement, given the complexity of the humanities in terms of their sources and outputs, which include monographs, book chapters, and journals in languages other than English; the use of popular media not considered scholarly publications; the lack of time and energy to develop digital skills among research staff; problems of authority and trust regarding the scholarly or non-academic nature of social media platforms; the prestige of large academic publishing houses; and limited awareness of and familiarity with advanced digital applications.

While peer review will continue to be a primary method for evaluating research in the humanities, a combination of altmetrics and other assessments of research impact drawing on different data sources may provide a way forward to ensure the increased use, sustainability, and effectiveness of open scholarship in the humanities.

DOI : https://doi.org/10.3998/jep.788

What Is Wrong With the Current Evaluative Bibliometrics?

Author : Endel Põder

Bibliometric data are relatively simple and describe objective processes of publishing articles and citing others. It seems quite straightforward to define reasonable measures of a researcher’s productivity, research quality, or overall performance based on these data. Why do we still have no acceptable bibliometric measures of scientific performance?

Instead, there are hundreds of indicators, with nobody knowing how to use them. At the same time, an increasing number of researchers and some research fields have been excluded from standard bibliometric analysis to avoid manifestly contradictory conclusions.

I argue that the biggest current problem is the inadequate rule of credit allocation for multi-authored articles in mainstream bibliometrics. Clinging to this historical choice precludes any systematic and logically consistent bibliometrics-based evaluation of researchers, research groups, and institutions.
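
The abstract does not spell out the arithmetic, but the standard contrast is between full counting, where each of an article's k co-authors receives one whole credit (so a single paper is counted k times), and fractional counting, where each co-author receives 1/k (so credit sums to one per paper). A minimal sketch, with invented sample data; neither the rule names nor the figures come from the article itself:

```python
# Sketch: full vs. fractional author credit. Sample data invented.
from collections import defaultdict

papers = [
    {"title": "P1", "authors": ["Ann", "Bob"]},
    {"title": "P2", "authors": ["Ann", "Bob", "Eve", "Joe"]},
    {"title": "P3", "authors": ["Eve"]},
]

full, fractional = defaultdict(float), defaultdict(float)
for paper in papers:
    k = len(paper["authors"])
    for author in paper["authors"]:
        full[author] += 1.0         # credit duplicated k times per paper
        fractional[author] += 1 / k # credit sums to exactly 1 per paper

# Full counting inflates the total: 3 papers yield 7 full credits,
# while fractional credits still sum to the number of papers.
print(sum(full.values()), sum(fractional.values()))  # 7.0 3.0
```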

During the last 50 years, several authors have called for a change. Apparently, there are no serious methodologically justified or evidence-based arguments in favor of the present system.

However, there are intractable social, psychological, and economic issues that make the adoption of a logically sound counting system almost impossible.

DOI : https://doi.org/10.3389/frma.2021.824518

Evaluation and Merit-Based Increase in Academia: A Case Study in the First Person

Author : Christine Musselin

This article provides a reflexive account of the process of defining and implementing a mechanism to evaluate a group of academics in a French higher education institution. The situation is a rather unusual case for France, as the assessed academics are not civil servants but are employed by their university and this evaluation leads to merit-based salary increases.

Improving and implementing this evaluation strategy was one of the author's tasks when she was vice-president for research at the institution concerned.

The article looks at this experience retrospectively, emphasizing three issues of particular relevance to the discussions about valuation studies and management proposed in this symposium: (1) the decision to distinguish between different types of profiles, and thus to categorize, or to apply the same criteria to all; (2) the concrete forms of commensuration to be developed in order to evaluate and rank individuals from different disciplines; (3) the quantification of qualitative appraisals, i.e. their transformation into merit-based salary increases.

DOI : https://doi.org/10.3384/VS.2001-5992.2021.8.2.73-88

The production of scientific and societal value in research evaluation: a review of societal impact assessment methods

Authors : Jorrit P Smit, Laurens K Hessels

Over the past two decades, several methods have been developed to evaluate the societal impact of research. Compared to the practical development of the field, the conceptual development is relatively weak.

This review article contributes to the latter by elucidating the theoretical aspects of the dominant methods for evaluating the societal impact of research, in particular their presuppositions about the relationship between the scientific and societal value of research. We analyse 10 approaches to the assessment of the societal impact of research from a constructivist perspective.

The methods represent different understandings of knowledge exchange, which can be understood in terms of linear, cyclical, and co-production models. In addition, the evaluation methods use a variety of concepts for the societal value of research, which suggest different relationships with scientific value.

While some methods rely on a clear and explicit distinction between the two types of value, other methods, in particular Evaluative Inquiry, ASIRPA, Contribution Mapping, Public Value Mapping, and SIAMPI, consider the mechanisms for producing societal value integral to the research process.

We conclude that evaluation methods must strike a balance between demarcating societal value as a separate performance indicator for practical purposes and doing justice to the (constructivist) science studies' findings about the integration of the scientific and societal value of research.

Our analytic comparison of assessment methods can assist research evaluators in the conscious and responsible selection of an approach that fits with the object under evaluation. As evaluation actively shapes knowledge production, it is important not to use oversimplified concepts of societal value.

DOI : https://doi.org/10.1093/reseval/rvab002

The impact of geographical bias when judging scientific studies

Authors : Marta Kowal, Piotr Sorokowski, Emanuel Kulczycki, Agnieszka Żelaźniewicz

The beauty of science lies in its core assumption that it seeks the truth, and as such, the truth stands alone and does not depend on the person who proclaims it. However, people's proclivity to succumb to various stereotypes is well known, and the scientific world may not be immune to the tendency to judge a book by its cover.

An interesting example is geographical bias, that is, judgments distorted by the geographical origin of, inter alia, a given work rather than by its actual quality or value. Here, we tested whether both laypersons (N = 1532) and scientists (N = 480) are prone to geographical bias when rating scientific projects in one of three scientific fields (i.e., biology, philosophy, or psychology).

We found that participants favored biology projects from the USA over those from China; in particular, expert biologists were more willing to grant further funding to the American projects. In philosophy, however, laypersons rated Chinese projects as better than projects from the USA. Our findings indicate that geographical biases affect the public perception of research and may influence the results of grant competitions.

DOI : https://doi.org/10.1007/s11192-021-04176-7