Les systèmes d’information recherche : un nouvel objet du questionnement éthique

Authors : Joachim Schöpfel, Otmane Azeroual

Open science policy calls into question the criteria and procedures of research evaluation, while putting forward the fundamental principles of scientific ethics, such as transparency, openness, and integrity.

In this context, since 2020 we have been analysing the ethical dimension of information systems dedicated to research evaluation (research information systems).

This article presents the results of an international survey conducted in 2021 with a small sample of professionals and researchers, covering two aspects: ethics as an object of these systems’ data models (metrics), and the ethical dimension of implementing and using these systems.

The discussion relates these findings to the quality of such systems, insists on the distinction between the evaluation of institutions and of individuals, and proposes analysing these systems through the concept of a distributed moral responsibility of ethical infrastructures (infraethics).

DOI : https://doi.org/10.4000/rfsic.13254

How Can Science and Research Work Well? Toward a Critique of New Public Management Practices in Academia From a Socio-Philosophical Perspective

Author : Jan-Philipp Kruse

While New Public Management (NPM) practices have been adopted in academia and higher education over the past two decades, this paper investigates their role in a specifically socio-philosophical way: the preeminent question is what organization of science is likely to make science and research work well in the context of a complex society.

The starting point is an obvious intuition: that academia is being “economized” by NPM (basically, that something comes from the outside and disturbs the inside). Habermas provides a sophisticated theorization of this intuition.

In contrast, the thesis advanced here is that we should consider NPM potentially problematic, but not because it descends from economics or administration outside academia: it is because NPM often cannot help research and science to function well.

In this (rather “essayistic” than strictly deductive) consideration, I will therefore tentatively discuss an alternative approach that takes up critical intuitions while transposing them into a different setting. If we understand science and research as a form of life, a different picture emerges that can still bring immanent standards to bear, but at the same time compose them more broadly.

This outlines a socio-philosophical critique of NPM. Accordingly, the decisive factor is not NPM’s provenance. What is decisive is that it addresses some organizational problems while at the same time creating new ones.

At the end, an outlook is sketched on how the specific situation of NPM permits some hypotheses about the future organization of academia, understood here as the whole research community. Ex negativo, it seems likely that qualitative evaluation criteria and creative freedom will have to play a greater role.

Original location : https://www.frontiersin.org/articles/10.3389/frma.2022.791114/full

How should evaluation be? Is a good evaluation of research also just? Towards the implementation of good evaluation

Authors : Cinzia Daraio, Alessio Vaccari

In this paper we answer the question of how evaluation should be by proposing a good evaluation of research practices. A good evaluation of research practices, understood as social practices à la MacIntyre, should take into account researchers’ stable motivations and character traits (i.e. their virtues).

We also show that a good evaluation is also just, beyond the sense of fairness alone, since working on good research practices implies taking into account a broader sense of justice. We then propose the development of a knowledge base for the assessment of “good” evaluations of research practices, to be used to implement a questionnaire for assessing researchers’ virtues.

Although the latter is a challenging task, the use of ontologies and taxonomic knowledge, together with reasoning algorithms that can draw inferences from such knowledge, offers a way to test the consistency of the information reported in the questionnaire and to analyse correctly and coherently how the data are gathered through it.
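The abstract stops short of implementation details. As a minimal, hypothetical sketch of the general idea (the virtue names, taxonomy, and scoring thresholds below are invented for illustration and are not taken from the paper), a taxonomy-based consistency check over questionnaire answers might look like this in Python:

```python
# Hypothetical sketch: a tiny "is-a" taxonomy of researcher virtues plus
# a simple inference rule that flags internally inconsistent answers.
# All names and thresholds are illustrative assumptions.

# Each key is a narrower trait; its value is the broader virtue it entails.
TAXONOMY = {
    "acknowledging_sources": "honesty",
    "reporting_negative_results": "honesty",
    "careful_data_handling": "rigour",
    "methodological_transparency": "rigour",
}

def broader(trait):
    """Yield every ancestor of a trait along the is-a hierarchy."""
    while trait in TAXONOMY:
        trait = TAXONOMY[trait]
        yield trait

def inconsistencies(answers):
    """Flag cases where a narrow trait is affirmed (>= 4 on a 1-5 scale)
    while a broader virtue it entails is denied (<= 2)."""
    problems = []
    for trait, score in answers.items():
        if score >= 4:
            for parent in broader(trait):
                if answers.get(parent, 3) <= 2:
                    problems.append((trait, parent))
    return problems

# A respondent affirms a sub-trait of honesty while denying honesty itself:
answers = {"acknowledging_sources": 5, "honesty": 1, "rigour": 4}
print(inconsistencies(answers))  # [('acknowledging_sources', 'honesty')]
```

A real implementation would presumably rely on a formal ontology language and an off-the-shelf reasoner; the sketch only shows how is-a inferences allow contradictory questionnaire entries to be flagged automatically.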

Finally, we describe the potential usefulness of our proposal for the reform of current research assessment systems.

DOI : https://doi.org/10.1007/s11192-022-04329-2

Reshaping How Universities Can Evaluate the Research Impact of Open Humanities for Societal Benefit

Authors : Paul Longley Arthur, Lydia Hearn

During the twenty-first century, for the first time, the volume of digital data has surpassed the amount of analog data. As academic practices increasingly become digital, opportunities arise to reshape the future of scholarly communication through more accessible, interactive, open, and transparent methods that engage a far broader and more diverse public.

Yet despite these advances, the research performance of universities and public research institutes remains largely evaluated through publication and citation analysis rather than by public engagement and societal impact.

This article reviews how changes to bibliometric evaluations toward greater use of altmetrics, including social media mentions, could enhance uptake of open scholarship in the humanities.
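As a brief, hedged illustration of the mechanics the article takes as given: altmetric-style scores are generally computed as weighted counts of mentions across sources. The sources and weights in this Python sketch are invented for the example; real providers use their own weightings.

```python
# Hypothetical altmetric-style scoring: a weighted count of mentions.
# Sources and weights are invented for illustration.

WEIGHTS = {"news": 8.0, "blog": 5.0, "policy_doc": 3.0, "tweet": 0.25}

def altmetric_style_score(mentions):
    """Aggregate per-source mention counts into one weighted score."""
    return sum(WEIGHTS.get(source, 0.0) * n for source, n in mentions.items())

# A monograph mentioned in 2 news stories, 3 blog posts, and 40 tweets:
print(altmetric_style_score({"news": 2, "blog": 3, "tweet": 40}))  # 41.0
```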

In addition, the article highlights current challenges faced by the open scholarship movement, given the complexity of the humanities in terms of their sources and outputs, which include monographs, book chapters, and journals in languages other than English; the use of popular media not counted as scholarly papers; the lack of time and energy among research staff to develop digital skills; problems of authority and trust regarding the scholarly or non-academic nature of social media platforms; the prestige of large academic publishing houses; and limited awareness of and familiarity with advanced digital applications.

While peer review will continue to be a primary method for evaluating research in the humanities, a combination of altmetrics and other assessment of research impact through different data sources may provide a way forward to ensure the increased use, sustainability, and effectiveness of open scholarship in the humanities.

DOI : https://doi.org/10.3998/jep.788

What Is Wrong With the Current Evaluative Bibliometrics?

Author : Endel Põder

Bibliometric data are relatively simple and describe objective processes of publishing articles and citing others. It seems quite straightforward to define reasonable measures of a researcher’s productivity, research quality, or overall performance based on these data. Why do we still have no acceptable bibliometric measures of scientific performance?

Instead, there are hundreds of indicators, with nobody knowing how to use them. At the same time, an increasing number of researchers, and some entire research fields, have been excluded from standard bibliometric analysis to avoid manifestly contradictory conclusions.

I argue that the biggest current problem is the inadequate rule of credit allocation for multi-authored articles in mainstream bibliometrics. Clinging to this historical choice precludes any systematic and logically consistent bibliometrics-based evaluation of researchers, research groups, and institutions.
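The abstract does not spell the rule out, but the standard contrast in this debate is between full (whole) counting, where every co-author receives one whole credit per paper, and fractional counting, where one unit of credit is divided among the n co-authors. A minimal sketch of the two rules (author names and data are illustrative):

```python
# Comparing two credit-allocation rules for multi-authored papers.
# The paper list is invented for illustration.

papers = [
    ["Ann"],                          # solo paper
    ["Ann", "Bob"],                   # two co-authors
    ["Ann", "Bob", "Carol", "Dan"],   # four co-authors
]

def full_counting(papers):
    """Mainstream rule: every co-author gets one whole credit per paper."""
    credit = {}
    for authors in papers:
        for a in authors:
            credit[a] = credit.get(a, 0.0) + 1.0
    return credit

def fractional_counting(papers):
    """Alternative rule: one unit of credit split equally among n authors."""
    credit = {}
    for authors in papers:
        share = 1.0 / len(authors)
        for a in authors:
            credit[a] = credit.get(a, 0.0) + share
    return credit

print(full_counting(papers))
# {'Ann': 3.0, 'Bob': 2.0, 'Carol': 1.0, 'Dan': 1.0}  -> 7 credits, 3 papers
print(fractional_counting(papers))
# {'Ann': 1.75, 'Bob': 0.75, 'Carol': 0.25, 'Dan': 0.25}  -> sums to 3.0
```

Under full counting the three papers generate seven credits, so aggregate scores inflate with team size; under fractional counting credits always sum back to the number of papers.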

During the last 50 years, several authors have called for a change. Apparently, there are no serious methodologically justified or evidence-based arguments in favor of the present system.

However, there are intractable social, psychological, and economic issues that make the adoption of a logically sound counting system almost impossible.

DOI : https://doi.org/10.3389/frma.2021.824518