How Can Science and Research Work Well? Toward a Critique of New Public Management Practices in Academia From a Socio-Philosophical Perspective

Author : Jan-Philipp Kruse

While New Public Management (NPM) practices have been adopted in academia and higher education over the past two decades, this paper investigates their role from a specifically socio-philosophical perspective: the preeminent question is what organization of science is likely to make science and research work well in the context of a complex society.

The starting point is an obvious intuition: that academia is being “economized” by NPM (roughly, that something external is intruding and disturbing what is internal to it). Habermas provides a sophisticated theorization of this intuition.

In contrast, the thesis advanced here is that we should consider NPM potentially problematic, but not because it descends from economics or administration outside academia: rather, because NPM often cannot help research and science to function well.

In this (rather “essayistic” than strictly deductive) consideration, I will therefore tentatively discuss an alternative approach that takes up the critical intuitions while transposing them into a different setting. If we understand science and research as a form of life, a different picture emerges that can still bring immanent standards to bear while at the same time situating them more broadly.

This outlines a socio-philosophical critique of NPM. Accordingly, the decisive factor is not NPM’s provenance. What is decisive is that it addresses some organizational problems while at the same time creating new ones.

At the end, an outlook is sketched on how the specific situation of NPM allows some hypotheses about the future organization of the academy (by “academy” I mean the whole research community, i.e., academia). Ex negativo, it seems likely that qualitative evaluation criteria and creative freedom will have to play a greater role.


Original location : https://www.frontiersin.org/articles/10.3389/frma.2022.791114/full

How should evaluation be? Is a good evaluation of research also just? Towards the implementation of good evaluation

Authors : Cinzia Daraio, Alessio Vaccari

In this paper we answer the question of how evaluation should be by proposing a good evaluation of research practices. A good evaluation of research practices, understood as social practices à la MacIntyre, should take into account researchers’ stable motivations and character traits (i.e., their virtues).

We also show that a good evaluation is just, beyond the sense of fairness, since working on good research practices implies taking into account a broader sense of justice. We then propose the development of a knowledge base for the assessment of “good” evaluations of research practices, to be used to implement a questionnaire for the assessment of researchers’ virtues.

Although the latter is a challenging task, ontologies, taxonomic knowledge, and reasoning algorithms that can draw inferences from such knowledge offer a way to test the consistency of the information reported in the questionnaire and to analyse correctly and coherently the data gathered through it.

Finally, we describe the potential usefulness of our proposal for the reform of current research assessment systems.


DOI : https://doi.org/10.1007/s11192-022-04329-2

Reshaping How Universities Can Evaluate the Research Impact of Open Humanities for Societal Benefit

Authors : Paul Longley Arthur, Lydia Hearn

During the twenty-first century, for the first time, the volume of digital data has surpassed the amount of analog data. As academic practices increasingly become digital, opportunities arise to reshape the future of scholarly communication through more accessible, interactive, open, and transparent methods that engage a far broader and more diverse public.

Yet despite these advances, the research performance of universities and public research institutes remains largely evaluated through publication and citation analysis rather than by public engagement and societal impact.

This article reviews how changes to bibliometric evaluations toward greater use of altmetrics, including social media mentions, could enhance uptake of open scholarship in the humanities.

In addition, the article highlights current challenges faced by the open scholarship movement, given the complexity of the humanities in terms of sources and outputs, which include monographs, book chapters, and journals in languages other than English; the use of popular media not considered scholarly publications; the lack of time and energy among research staff to develop digital skills; problems of authority and trust regarding the scholarly or non-academic nature of social media platforms; the prestige of large academic publishing houses; and limited awareness of and familiarity with advanced digital applications.

While peer review will continue to be a primary method for evaluating research in the humanities, a combination of altmetrics and other assessment of research impact through different data sources may provide a way forward to ensure the increased use, sustainability, and effectiveness of open scholarship in the humanities.

DOI : https://doi.org/10.3998/jep.788

What Is Wrong With the Current Evaluative Bibliometrics?

Author : Endel Põder

Bibliometric data are relatively simple and describe objective processes of publishing articles and citing others. It seems quite straightforward to define reasonable measures of a researcher’s productivity, research quality, or overall performance based on these data. Why do we still have no acceptable bibliometric measures of scientific performance?

Instead, there are hundreds of indicators, with nobody knowing how to use them. At the same time, an increasing number of researchers, and some entire research fields, have been excluded from standard bibliometric analysis to avoid manifestly contradictory conclusions.

I argue that the biggest current problem is the inadequate credit-allocation rule for multi-authored articles in mainstream bibliometrics. Clinging to this historical choice precludes any systematic and logically consistent bibliometrics-based evaluation of researchers, research groups, and institutions.

During the last 50 years, several authors have called for a change. Apparently, there are no serious methodologically justified or evidence-based arguments in favor of the present system.

However, there are intractable social, psychological, and economic issues that make the adoption of a logically sound counting system almost impossible.
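Põder’s target is the mainstream credit-allocation rule, under which every coauthor of an n-author paper receives full credit (“whole counting”). The abstract does not spell the rules out, so as a hedged illustration only (function name and sample data are mine, not from the paper), the sketch below contrasts whole counting with the best-known alternative, fractional counting, where one unit of credit per paper is split equally among its authors:

```python
from collections import defaultdict

def allocate_credit(papers, fractional=False):
    """papers: list of author-name lists; returns author -> total credit.

    Whole counting: each coauthor gets 1.0 per paper.
    Fractional counting: each coauthor gets 1/n per n-author paper.
    """
    credit = defaultdict(float)
    for authors in papers:
        share = 1.0 / len(authors) if fractional else 1.0
        for author in authors:
            credit[author] += share
    return dict(credit)

papers = [["A", "B", "C"], ["A", "B"], ["A"]]
whole = allocate_credit(papers)                   # A: 3.0, B: 2.0, C: 1.0
frac = allocate_credit(papers, fractional=True)   # A: 1/3 + 1/2 + 1, etc.
```

Note the consistency property whole counting lacks: under fractional counting the total credit allocated always equals the number of papers, so credit is neither created nor inflated by adding coauthors.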


DOI : https://doi.org/10.3389/frma.2021.824518

Evaluation and Merit-Based Increase in Academia: A Case Study in the First Person

Author : Christine Musselin

This article provides a reflexive account of the process of defining and implementing a mechanism to evaluate a group of academics in a French higher education institution. The situation is rather unusual for France: the assessed academics are not civil servants but are employed by their university, and the evaluation leads to merit-based salary increases.

Improving and implementing this evaluation was one of the author’s tasks when she was vice-president for research at the institution in question.

The article looks at this experience retrospectively, emphasizing three issues of particular relevance to the discussions of valuation studies and management proposed in this symposium: (1) whether to distinguish between different types of profiles, and thus categorize, or to apply the same criteria to all; (2) the concrete forms of commensuration to be developed in order to evaluate and rank individuals from different disciplines; (3) the quantification of qualitative appreciations, i.e., their transformation into merit-based salary increases.


DOI : https://doi.org/10.3384/VS.2001-5992.2021.8.2.73-88