The SIGAPS effect: French medical research in the grip of accounting-based evaluation

Authors : Yves Gingras, Mahdi Khelfaoui

This study aims to highlight the perverse effects generated by the introduction of the SIGAPS system (Système d’interrogation, de gestion, et d’analyse des publications scientifiques) on French scientific output in medicine and the biomedical sciences.

This bibliometric tool for managing and funding research offers an emblematic example of the excesses that research evaluation methods based on purely accounting criteria can produce.

In this note, we first describe how SIGAPS works, and then explain precisely why the methods used to calculate “SIGAPS points”, based on journal impact factors and the order of co-authors’ names, raise numerous problems.
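
To make the mechanics concrete, here is a minimal Python sketch of how a SIGAPS-style score could be computed. The category and position weights below are illustrative placeholders, not the official grid: in the actual system, journals are sorted into categories from their impact factors, and the journal’s points are weighted by the author’s position in the byline, as the note describes.

    # Sketch of a SIGAPS-style score (illustrative weights, not the official grid).
    # A journal's impact-factor category gives base points, multiplied by a
    # weight that depends on the author's position in the byline.
    CATEGORY_POINTS = {"A": 8, "B": 6, "C": 4, "D": 3, "E": 2, "NC": 1}

    def position_weight(position: int, n_authors: int) -> int:
        # Hypothetical weighting: first and last authors count most.
        if position in (1, n_authors):
            return 4
        if position in (2, n_authors - 1):
            return 3
        if position == 3:
            return 2
        return 1

    def sigaps_score(category: str, position: int, n_authors: int) -> int:
        return CATEGORY_POINTS[category] * position_weight(position, n_authors)

    # Third author of a ten-author paper in a category "B" journal:
    print(sigaps_score("B", 3, 10))  # 6 * 2 = 12

Whatever the exact weights, the score depends only on the journal and the byline position, never on the content or reception of the article itself, which is the crux of the criticism.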

In particular, we identify the effects of the SIGAPS system on publication dynamics, the choice of publication venues, the language of publication, and the criteria for recruiting and promoting researchers.

Finally, we show that the use of the SIGAPS system does not satisfy all the criteria of what might be called an “ethics of evaluation”, which should respect certain principles such as transparency, fairness, and the validity of indicators.

URL : https://cirst2.openum.ca/files/sites/179/2020/10/Note_2020-05vf.pdf

Changing how we evaluate research is difficult, but not impossible

Authors : Anna Hatch, Stephen Curry

The San Francisco Declaration on Research Assessment (DORA) was published in 2013 and described how funding agencies, institutions, publishers, organizations that supply metrics, and individual researchers could better evaluate the outputs of scientific research.

Since then DORA has evolved into an active initiative that gives practical advice to institutions on new ways to assess and evaluate research. This article outlines a framework for driving institutional change that was developed at a meeting convened by DORA and the Howard Hughes Medical Institute.

The framework has four broad goals: understanding the obstacles to changes in the way research is assessed; experimenting with different approaches; creating a shared vision when revising existing policies and practices; and communicating that vision on campus and beyond.

DOI : https://doi.org/10.7554/eLife.58654

Use of the journal impact factor for assessing individual articles need not be statistically wrong

Authors : Ludo Waltman, Vincent A. Traag

Most scientometricians reject the use of the journal impact factor for assessing individual articles and their authors. The well-known San Francisco Declaration on Research Assessment also strongly objects to this way of using the impact factor.

Arguments against the use of the impact factor at the level of individual articles are often based on statistical considerations. The skewness of journal citation distributions typically plays a central role in these arguments.

We present a theoretical analysis of statistical arguments against the use of the impact factor at the level of individual articles. Our analysis shows that these arguments do not support the conclusion that the impact factor should not be used for assessing individual articles.

In fact, our computer simulations demonstrate the possibility that the impact factor is a more accurate indicator of the value of an article than the number of citations the article has received.
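
The flavour of such a simulation is easy to reproduce. The sketch below is our own illustrative setup, not the authors’ actual model: each article gets a latent “value”, journals select articles on a noisy review signal, and citations are drawn as a heavily skewed, noisy signal of that value.

    import numpy as np

    # Illustrative simulation (assumed parameters, not the authors' model).
    rng = np.random.default_rng(42)
    n_journals, per_journal = 100, 200

    value = rng.normal(0.0, 1.0, n_journals * per_journal)  # latent article value
    review = value + rng.normal(0.0, 0.5, value.size)       # noisy peer-review signal

    # Assign articles to journals by review-signal rank (selective journals).
    journal = np.empty(value.size, dtype=int)
    journal[np.argsort(-review)] = np.repeat(np.arange(n_journals), per_journal)

    # Skewed citation counts: lognormal noise inside a Poisson draw.
    citations = rng.poisson(np.exp(0.5 * value + rng.normal(0.0, 1.0, value.size)))

    # "Impact factor" proxy: mean citation count of the article's journal.
    impact_factor = np.bincount(journal, weights=citations) / per_journal

    print("corr(value, own citations):", np.corrcoef(value, citations)[0, 1])
    print("corr(value, journal IF):  ", np.corrcoef(value, impact_factor[journal])[0, 1])

With citation noise this large, the journal mean, averaged over many articles, tracks an article’s latent value better than the article’s own citation count does, illustrating the possibility the authors describe.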

It is important to critically discuss the dominant role of the impact factor in research evaluations, but the discussion should not be based on misplaced statistical arguments. Instead, the primary focus should be on the socio-technical implications of the use of the impact factor.

DOI : https://doi.org/10.12688/f1000research.23418.1

Inferring the causal effect of journals on citations

Author : Vincent Traag

Articles in high-impact journals are by definition more highly cited on average. But are they cited more often because the articles are somehow “better”? Or are they cited more often simply because they appeared in a high-impact journal? Although some evidence suggests the latter, the causal relationship is not clear.

Here we compare citations of published journal articles with citations of their preprint versions to uncover the causal mechanism, building on an earlier model to infer the causal effect of journals on citations. We find evidence for both effects.

We show that high-impact journals seem to select articles that tend to attract more citations. At the same time, we find that high-impact journals augment the citation rate of published articles.
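
A toy generative model, with assumed parameters rather than the paper’s actual specification, shows how comparing preprint and published citations can separate the two effects: selection appears as higher preprint citation rates for articles that end up in high-impact venues, while the causal boost appears as an extra published-over-preprint gap for those same articles.

    import numpy as np

    # Toy model with assumed parameters, not the paper's actual model.
    rng = np.random.default_rng(0)
    n = 50_000

    propensity = rng.normal(0.0, 1.0, n)               # latent citation propensity
    high = propensity + rng.normal(0.0, 1.0, n) > 1.0  # noisy selection into high-impact venues
    boost = np.where(high, 0.7, 0.0)                   # causal effect of the venue

    preprint = rng.poisson(np.exp(propensity - 1.0))           # citations to the preprint
    published = rng.poisson(np.exp(propensity + boost - 1.0))  # citations after publication

    for label, mask in (("high-impact", high), ("other", ~high)):
        print(f"{label:12s} preprint: {preprint[mask].mean():.2f}  "
              f"published: {published[mask].mean():.2f}")

In this setup, high-impact preprints are already cited more (selection), and their published versions gain proportionally more on top of that (the boost).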

Our results yield a deeper understanding of the role of journals in the research system. The use of journal metrics in research evaluation has been increasingly criticised in recent years, and article-level citations are sometimes suggested as an alternative.

Our results show that removing impact factors from evaluation does not negate the influence of journals. This insight has important implications for changing practices of research evaluation.

URL : https://arxiv.org/abs/1912.08648

Meta-Research: Use of the Journal Impact Factor in academic review, promotion, and tenure evaluations

Authors : Erin C McKiernan, Lesley A Schimanski, Carol Muñoz Nieves, Lisa Matthias, Meredith T Niles, Juan P Alperin

We analyzed how often and in what ways the Journal Impact Factor (JIF) is currently used in review, promotion, and tenure (RPT) documents from a representative sample of universities in the United States and Canada. We found that 40% of research-intensive institutions and 18% of master’s institutions mentioned the JIF or closely related terms.

Of the institutions that mentioned the JIF, 87% supported its use in at least one of their RPT documents, 13% expressed caution about its use, and none heavily criticized it or prohibited its use. Furthermore, 63% of institutions that mentioned the JIF associated the metric with quality, 40% with impact, importance, or significance, and 20% with prestige, reputation, or status.

We conclude that use of the JIF is encouraged in RPT evaluations, especially at research-intensive universities, and that there is work to be done to avoid the potential misuse of metrics like the JIF.

DOI : https://doi.org/10.7554/eLife.47338.001

Use of the Journal Impact Factor in academic review, promotion, and tenure evaluations

Authors : Erin C. McKiernan, Lesley A. Schimanski, Carol Muñoz Nieves, Lisa Matthias, Meredith T. Niles, Juan Pablo Alperin

The Journal Impact Factor (JIF) was originally designed to aid libraries in deciding which journals to index and purchase for their collections. Over the past few decades, however, it has become a metric relied upon to evaluate research articles based on journal rank. Surveyed faculty often report feeling pressure to publish in journals with high JIFs and mention reliance on the JIF as one problem with current academic evaluation systems.

While faculty reports are useful, information is lacking on how often and in what ways the JIF is currently used for review, promotion, and tenure (RPT). We therefore collected and analyzed RPT documents from a representative sample of 129 universities from the United States and Canada and 381 of their academic units.

We found that 40% of doctoral, research-intensive (R-type) institutions and 18% of master’s, or comprehensive (M-type) institutions explicitly mentioned the JIF, or closely related terms, in their RPT documents.

Undergraduate, or baccalaureate (B-type) institutions did not mention it at all. A detailed reading of these documents suggests that institutions may also be using a variety of terms to indirectly refer to the JIF.

Our qualitative analysis shows that 87% of the institutions that mentioned the JIF supported the metric’s use in at least one of their RPT documents, while 13% of institutions expressed caution about the JIF’s use in evaluations.

None of the RPT documents we analyzed heavily criticized the JIF or prohibited its use in evaluations. Of the institutions that mentioned the JIF, 63% associated it with quality, 40% with impact, importance, or significance, and 20% with prestige, reputation, or status.

In sum, our results show that the use of the JIF is encouraged in RPT evaluations, especially at research-intensive universities, and indicate that there is work to be done to improve evaluation processes and avoid the potential misuse of metrics like the JIF.

DOI : https://doi.org/10.7287/peerj.preprints.27638v2

Outcomes and Impacts of Development Interventions: Toward Conceptual Clarity

Authors : Brian Belcher, Markus Palenberg

The terms “outcome” and “impact” are ubiquitous in evaluation discourse. However, there are many competing definitions that lack clarity and consistency and sometimes represent fundamentally different meanings.

This leads to profound confusion, undermines efforts to improve learning and accountability, and represents a challenge for the evaluation profession. This article investigates how the terms are defined and understood by different institutions and communities. It systematically investigates representative sets of definitions, analyzing them to identify 16 distinct defining elements.

This framework is then used to compare definitions and assess their usefulness and limitations. Based on this assessment, the article proposes a remedy in three parts: applying good definition practice in future definition updates, differentiating causal perspectives and using appropriate causal language, and employing meaningful qualifiers when using the terms outcome and impact.

The article draws on definitions used in international development, but its findings also apply to domestic public sector policies and interventions.

DOI : https://doi.org/10.1177/1098214018765698