Inferring the causal effect of journals on citations

Author : Vincent Traag

Articles in high-impact journals are by definition more highly cited on average. But are they cited more often because the articles are somehow “better”? Or are they cited more often simply because they appeared in a high-impact journal? Although some evidence suggests the latter, the causal relationship is not clear.

Here we compare citations of published journal articles with citations of their preprint versions to uncover the causal mechanism. We build on an earlier model to infer the causal effect of journals on citations. We find evidence for both effects.

We show that high-impact journals seem to select articles that tend to attract more citations. At the same time, we find that high-impact journals augment the citation rate of published articles.

Our results yield a deeper understanding of the role of journals in the research system. The use of journal metrics in research evaluation has been increasingly criticised in recent years and article-level citations are sometimes suggested as an alternative.

Our results show that removing impact factors from evaluation does not negate the influence of journals. This insight has important implications for changing practices of research evaluation.

URL : https://arxiv.org/abs/1912.08648

Meta-Research: Use of the Journal Impact Factor in academic review, promotion, and tenure evaluations

Authors : Erin C McKiernan, Lesley A Schimanski, Carol Muñoz Nieves, Lisa Matthias, Meredith T Niles, Juan P Alperin

We analyzed how often and in what ways the Journal Impact Factor (JIF) is currently used in review, promotion, and tenure (RPT) documents of a representative sample of universities from the United States and Canada. 40% of research-intensive institutions and 18% of master’s institutions mentioned the JIF, or closely related terms.

Of the institutions that mentioned the JIF, 87% supported its use in at least one of their RPT documents, 13% expressed caution about its use, and none heavily criticized it or prohibited its use. Furthermore, 63% of institutions that mentioned the JIF associated the metric with quality, 40% with impact, importance, or significance, and 20% with prestige, reputation, or status.

We conclude that use of the JIF is encouraged in RPT evaluations, especially at research-intensive universities, and that there is work to be done to avoid the potential misuse of metrics like the JIF.


DOI : https://doi.org/10.7554/eLife.47338.001

Use of the Journal Impact Factor in academic review, promotion, and tenure evaluations

Authors : Erin C. McKiernan​, Lesley A. Schimanski, Carol Muñoz Nieves, Lisa Matthias, Meredith T. Niles, Juan Pablo Alperin

The Journal Impact Factor (JIF) was originally designed to aid libraries in deciding which journals to index and purchase for their collections. Over the past few decades, however, it has become a metric widely relied upon to evaluate research articles based on journal rank. Surveyed faculty often report feeling pressure to publish in journals with high JIFs and mention reliance on the JIF as one problem with current academic evaluation systems.

While faculty reports are useful, information is lacking on how often and in what ways the JIF is currently used for review, promotion, and tenure (RPT). We therefore collected and analyzed RPT documents from a representative sample of 129 universities from the United States and Canada and 381 of their academic units.

We found that 40% of doctoral, research-intensive (R-type) institutions and 18% of master’s, or comprehensive (M-type) institutions explicitly mentioned the JIF, or closely related terms, in their RPT documents.

Undergraduate, or baccalaureate (B-type) institutions did not mention it at all. A detailed reading of these documents suggests that institutions may also be using a variety of terms to indirectly refer to the JIF.

Our qualitative analysis shows that 87% of the institutions that mentioned the JIF supported the metric’s use in at least one of their RPT documents, while 13% of institutions expressed caution about the JIF’s use in evaluations.

None of the RPT documents we analyzed heavily criticized the JIF or prohibited its use in evaluations. Of the institutions that mentioned the JIF, 63% associated it with quality, 40% with impact, importance, or significance, and 20% with prestige, reputation, or status.

In sum, our results show that the use of the JIF is encouraged in RPT evaluations, especially at research-intensive universities, and indicate there is work to be done to improve evaluation processes to avoid the potential misuse of metrics like the JIF.


DOI : https://doi.org/10.7287/peerj.preprints.27638v2

Outcomes and Impacts of Development Interventions: Toward Conceptual Clarity

Authors : Brian Belcher, Markus Palenberg

The terms “outcome” and “impact” are ubiquitous in evaluation discourse. However, there are many competing definitions that lack clarity and consistency and sometimes represent fundamentally different meanings.

This leads to profound confusion, undermines efforts to improve learning and accountability, and represents a challenge for the evaluation profession. This article investigates how the terms are defined and understood by different institutions and communities. It systematically investigates representative sets of definitions, analyzing them to identify 16 distinct defining elements.

This framework is then used to compare definitions and assess their usefulness and limitations. Based on this assessment, the article proposes a remedy in three parts: applying good definition practice in future definition updates, differentiating causal perspectives and using appropriate causal language, and employing meaningful qualifiers when using the terms outcome and impact.

The article draws on definitions used in international development, but its findings also apply to domestic public sector policies and interventions.


DOI : https://doi.org/10.1177/1098214018765698

Plus ou moins open : les revues de rang A en Sciences de l’information et de la communication

Authors : Joachim Schöpfel, Hélène Prost, Amel Fraisse

According to a recent study, almost half of the articles published by French researchers are available in open access, either deposited in open archives such as HAL or published online in journals run on the “open access” model, without paid subscriptions.

In this dynamic environment, the agencies that evaluate higher education and research have a role to play, through their evaluation criteria and tools.

Depending on their approach and methodology, these institutions can create opportunities for the development of open access by encouraging the sharing of research results, or they can slow the process down by maintaining the usual criteria, notably bibliometric evaluation based on publication rankings.

Our study looks at our own discipline, offering an overview of the field of information and communication sciences in France, based on the updated list of A-ranked journals published at the end of 2017 and examined from the perspective of open access.

The approach is exploratory. Above all, it is a matter of examining our own standards and practices, as a research community in information and communication sciences, with respect to the scientific policy of open access and open science. 38% of A-ranked journals in the field are open access, but these journals represent only 4% of all open access journals in information and communication sciences.

URL : https://journals.openedition.org/rfsic/4706

L’obsession de la productivité et la fabrique du chercheur publiant

Author : Franck Aggeri

What do young doctoral students in management dream of when they begin their theses? Their aspirations do not differ fundamentally from those of doctoral students in other disciplines: they value the supposed autonomy of the profession, reflection and intellectual discussion, reading, creation, writing, and teaching.

This romantic vision of the profession is often reinforced by encounters with academics who gave them a taste for reflection and introduced them to the aesthetics of writing and argumentation, to landmark texts, or to original fieldwork.

In short, they often dream of becoming distinctive teacher-researchers.

The singularities model versus the productive model

The singularities model in research, Lucien Karpik reminds us, is the one to which researchers have traditionally referred.

It rests on a symbolic orientation “around a set of classical norms and values: discovery as the ultimate goal; the importance of originality, ambition, and intellectual pleasure; an imaginary rooted in the history of science; the central position of peer judgment; collegial or semi-collegial power; a conception of the profession organized around individual independence; and a competition driven by the will to be the first to discover and the first to publish, the first recognized and the first rewarded” (Karpik, 2012, p. 119).

In contrast to the singularities model, a productive model has been developing for some years now, notably in economics and management sciences, one that rests on “objective” performance measured with a simple metric: the number of A-ranked publications.

URL : https://halshs.archives-ouvertes.fr/halshs-01368023

Scholarly Communication Librarians’ Relationship with Research Impact Indicators: An Analysis of a National Survey of Academic Librarians in the United States

Authors: Rachel Ann Miles, Stacy Konkiel, Sarah Sutton

INTRODUCTION

Academic librarians, especially in the field of scholarly communication, are often expected to understand and engage with research impact indicators. However, much of the current literature speculates about how academic librarians are using and implementing research impact indicators in their practice.

METHODS

This study analyzed the results from a 2015 survey administered to over 13,000 academic librarians at Carnegie-classified R1 institutions in the United States. The survey concentrated on academic librarians’ familiarity with and usage of research impact indicators.

RESULTS

This study uncovered findings related to academic librarians’ various levels of familiarity with research impact indicators and how they implement and use research impact indicators in their professional development and in their library job duties.

DISCUSSION

In general, academic librarians with regular scholarly communication support duties tend to have higher levels of familiarity with research impact indicators. Academic librarians are most familiar with citation counts and usage statistics and least familiar with altmetrics.

During consultations with faculty, the Journal Impact Factor (JIF) and citation counts are more likely to be addressed than the author h-index, altmetrics, qualitative measures, and expert peer reviews.

The survey results also point to a growing interest in altmetrics among academic librarians as a means of professional advancement.

CONCLUSION

Academic librarians are continually challenged to keep pace with the changing landscape of research impact metrics and research assessment models. By keeping pace and implementing research impact indicators in their own practices, academic librarians can provide a crucial service to the wider academic community.


DOI : https://doi.org/10.7710/2162-3309.2212