From Research Evaluation to Research Analytics. The digitization of academic performance measurement

Authors : Anne K. Krüger, Sabrina Petersohn

One might think that the bibliometric measurement of academic performance has always been digital, given the computer-assisted origins of the Science Citation Index. Yet since the 2000s, the digitization of bibliometric infrastructure has accelerated rapidly. Citation databases are indexing an increasing variety of publication types.

Altmetric data aggregators are producing data on the reception of research outcomes. Machine-readable persistent identifiers are being created to unambiguously identify researchers, research organizations, and research objects; and evaluative software tools and current research information systems are constantly expanding their functionalities to make use of these data and extract meaning from them.
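To make concrete what "machine-readable persistent identifiers" enable in practice, here is a minimal Python sketch that resolves a DOI through the public Crossref REST API and a researcher's ORCID iD through the public ORCID API. The endpoints are the documented public ones; the function names are our own illustration, and the example DOI is the one cited for this article below.

```python
# Minimal sketch: resolving machine-readable persistent identifiers.
# Uses the public Crossref and ORCID REST APIs via the `requests` library;
# function names are illustrative, not part of either API.
import requests

def fetch_crossref_metadata(doi: str) -> dict:
    """Fetch publication metadata for a DOI from the Crossref REST API."""
    response = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    response.raise_for_status()
    return response.json()["message"]

def fetch_orcid_record(orcid_id: str) -> dict:
    """Fetch a researcher's public record from the ORCID public API."""
    response = requests.get(
        f"https://pub.orcid.org/v3.0/{orcid_id}/record",
        headers={"Accept": "application/json"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

work = fetch_crossref_metadata("10.3384/VS.2001-5992.2022.9.1.11-46")
print(work.get("title"), work.get("is-referenced-by-count"))
```

It is this kind of interoperability, the same identifier resolving to the same structured record from any tool, that allows evaluative software to aggregate data across databases.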

In this article, we analyse how these developments in evaluative bibliometrics have contributed to an extension of indicator-based research evaluation towards data-driven research analytics.

Drawing on empirical material from blogs and websites as well as from research and policy papers, we discuss how interoperability, scalability, and flexibility as material specificities of digital infrastructures generate new ways of data production and their assessment, which affect the possibilities of how academic performance can be understood and (e)valuated.

URL : From Research Evaluation to Research Analytics. The digitization of academic performance measurement

DOI : https://doi.org/10.3384/VS.2001-5992.2022.9.1.11-46

Indicators of research quality, quantity, openness and responsibility in institutional review, promotion and tenure policies across seven countries

Authors : Nancy Pontika, Thomas Klebel, Antonia Correia, Hannah Metzler, Petr Knoth, Tony Ross-Hellauer

The need to reform research assessment processes related to career advancement at research institutions has become increasingly recognised in recent years, especially to better foster open and responsible research practices. Current assessment criteria are believed to focus too heavily on productivity and quantity, as opposed to quality, collaborative open research practices, and the socio-economic impact of research.

However, evidence of the extent of these issues is urgently needed to inform actions for reform. We analyse current practices as revealed by documentation on institutional review, promotion and tenure (RPT) processes in seven countries (Austria, Brazil, Germany, India, Portugal, the United Kingdom and the United States of America).

Through systematic coding and analysis of 143 RPT policy documents from 107 institutions for the prevalence of 17 criteria (including those related to qualitative or quantitative assessment of research, service to the institution or profession, and open and responsible research practices), we compare assessment practices across a range of international institutions to significantly broaden this evidence-base.
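For readers unfamiliar with this kind of methodology, the analysis amounts to coding each policy document for the presence or absence of each criterion and then computing prevalence per country. The following is a hypothetical sketch of that computation; the countries are from the study, but the criterion names and the data are invented for illustration.

```python
# Hypothetical sketch of a prevalence analysis over coded RPT documents:
# each document is coded 1/0 per criterion; prevalence is the share of
# documents per country mentioning it. Data here is illustrative only.
import pandas as pd

coded = pd.DataFrame(
    {
        "country": ["Austria", "Austria", "Brazil", "Brazil"],
        "citations_mentioned": [1, 1, 1, 0],
        "open_access_mentioned": [0, 1, 0, 0],
    }
)

# Prevalence of each coded criterion, as a percentage of documents per country
prevalence = coded.groupby("country").mean().mul(100).round(1)
print(prevalence)
```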

Although the prevalence of indicators varies considerably between countries, overall we find that open and responsible research practices are currently minimally rewarded and that problematic practices of quantification continue to dominate.

URL : Indicators of research quality, quantity, openness and responsibility in institutional review, promotion and tenure policies across seven countries

DOI : https://doi.org/10.1162/qss_a_00224

Implementing the Declaration on Research Assessment: a publisher case study

Authors : Victoria Gardner, Mark Robinson, Elisabetta O’Connell

There has been much debate around the role of metrics in scholarly communication, with particular focus on the misapplication of journal metrics, such as the impact factor in the assessment of research and researchers.

Various initiatives have advocated for a change in this culture, including the Declaration on Research Assessment (DORA), which invites stakeholders throughout the scholarly communication ecosystem to sign up and show their support for practices designed to address the misuse of metrics.

This case study provides an overview of the process undertaken by a large academic publisher (Taylor & Francis Group) in signing up to DORA and implementing some of its key practices, in the hope that it will offer guidance to others considering becoming signatories.

Our experience suggests that research, consultation and flexibility are crucial components of the process. Additionally, approaching signing with a project mindset versus a ‘sign and forget’ mentality can help organizations to understand the practical implications of signing, to anticipate and mitigate potential obstacles and to support cultural change.

URL : Implementing the Declaration on Research Assessment: a publisher case study

DOI : https://doi.org/10.1629/uksg.573

Toward More Inclusive Metrics and Open Science to Measure Research Assessment in Earth and Natural Sciences

Authors : Olivier Pourret, Dasapta Erwin Irawan, Najmeh Shaghaei, Elenora M. van Rijsingen, Lonni Besançon

The conventional assessment of scientists relies on a set of metrics mostly based on the production of scientific articles and their citations. These metrics are primarily established at the journal level (e.g., the Journal Impact Factor), the article level (e.g., times cited), and the author level (e.g., the h-index).
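As a reminder of what the author-level metric actually computes, here is a minimal sketch of the standard h-index definition: the largest h such that the author has at least h papers each cited at least h times. The function name and example data are our own illustration.

```python
# Minimal sketch of the h-index: the largest h such that the author has
# at least h papers with at least h citations each.
def h_index(citation_counts: list[int]) -> int:
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # at least `rank` papers have >= `rank` citations
        else:
            break
    return h

# Example: five papers cited [10, 8, 5, 4, 3] times yield an h-index of 4
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Note how the metric compresses an entire publication record into a single integer, which is precisely the reduction the authors go on to criticize.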

These metrics form the basis of criteria that have been widely used to measure the reputation of institutions, authors, and research groups. Because they rely mostly on citations (Langfeldt et al., 2021), however, they are inherently flawed: they offer only a limited picture of scholarly production. Citations count only the use of documents within other scholarly works and thus give a narrow view of an article's use and impact.

They reveal only the most superficial dimensions of research's impact on society. Even within academia, citations are limited, since the citation link itself carries no qualitative value (Tennant et al., 2019): one article could be cited for the robustness of the work presented, while another could be cited for its main limitation (Aksnes et al., 2019).

As such, two articles could be cited the same number of times for very different reasons, and relying on citations to evaluate scientific work therefore has obvious limitations (Tahamtan et al., 2016). Beyond this issue, however, the conventional assessment of scientists clearly benefits some scientists more than others, and it neither reflects nor encourages the dissemination of knowledge back to the public that ultimately pays scientists.

This is visible in the Earth and natural sciences, which have been organized to solve local community problems in dealing with the Earth system, such as groundwater hazards (Irawan et al., 2021; Dwivedi et al., 2022).

URL : Toward More Inclusive Metrics and Open Science to Measure Research Assessment in Earth and Natural Sciences

DOI : https://doi.org/10.3389/frma.2022.850333