Rethinking success, integrity, and culture in research (part 2) — a multi-actor qualitative study on problems of science

Authors : Noémie Aubert Bonn, Wim Pinxten

Background

Research misconduct and questionable research practices have received increasing attention in the past few years. Yet despite the rich body of research available, few empirical works include the perspectives of non-researcher stakeholders.

Methods

We conducted semi-structured interviews and focus groups with policy makers, funders, institution leaders, editors or publishers, research integrity office members, research integrity community members, laboratory technicians, researchers, research students, and former researchers who had changed careers, to explore the topics of success, integrity, and responsibilities in science.

We used the Flemish biomedical landscape as a baseline from which to capture the views of interacting and complementary actors in a system setting.

Results

Given the breadth of our results, we divided our findings into a two-paper series, with the current paper focusing on the problems that affect research integrity and research culture. We first found that different actors have different perspectives on the problems that affect the integrity and culture of research.

Problems were linked either to personalities and attitudes or to the climates in which researchers operate. Elements that were described as essential for success (in the associated paper) were often thought to accentuate the problems of research climates by disrupting research culture and research integrity.

Even though all participants agreed that current research climates need to be addressed, they generally felt neither responsible for nor capable of initiating change. Instead, respondents revealed a circle of blame and mistrust between actor groups.

Conclusions

Our findings resonate with recent debates and suggest a few action points that might help advance the discussion.

First, the research integrity debate must revisit and tackle the way in which researchers are assessed.

Second, approaches to promote better science need to address the impact that research climates have on research integrity and research culture rather than to capitalize on individual researchers’ compliance.

Finally, inter-actor dialogue and shared decision-making must be given priority to ensure that the perspectives of the full research system are captured. Understanding the relations and interdependencies between these perspectives is key to addressing the problems of science.


DOI : https://doi.org/10.1186/s41073-020-00105-z

Rethinking success, integrity, and culture in research (part 1) — a multi-actor qualitative study on success in science

Authors : Noémie Aubert Bonn, Wim Pinxten

Background

Success shapes the lives and careers of scientists. But success in science is difficult to define, let alone to translate into indicators that can be used for assessment. In the past few years, several groups have expressed their dissatisfaction with the indicators currently used to assess researchers.

But given the lack of agreement on what should constitute success in science, most of these proposals remain unanswered. This paper aims to complement our understanding of success in science and to document areas of tension and conflict in research assessments.

Methods

We conducted semi-structured interviews and focus groups with policy makers, funders, institution leaders, editors or publishers, research integrity office members, research integrity community members, laboratory technicians, researchers, research students, and former researchers who had changed careers, to explore the topics of success, integrity, and responsibilities in science.

We used the Flemish biomedical landscape as a baseline from which to capture the views of interacting and complementary actors in a system setting.

Results

Given the breadth of our results, we divided our findings into a two-paper series, with the current paper focusing on what defines and determines success in science. Respondents depicted success as a multifactorial, context-dependent, and mutable construct.

Success appeared to be an interaction between characteristics of the researcher (Who), research outputs (What), research processes (How), and luck. Interviewees noted that current research assessments overvalued outputs but largely ignored the processes deemed essential for research quality and integrity.

Interviewees suggested that science needs a diversity of indicators that are transparent, robust, and valid, and that allow a balanced and diverse view of success; that the assessment of scientists should not depend blindly on metrics but should also value human input; and that quality should be valued over quantity.

Conclusions

The objective of research assessments may be to encourage good researchers, to benefit society, or simply to advance science. Yet we show that current assessments fall short on each of these objectives. Open and transparent inter-actor dialogue is needed to understand what research assessments aim for and how they can best achieve their objective.


DOI : https://doi.org/10.1186/s41073-020-00104-0

L’effet SIGAPS : La recherche médicale française sous l’emprise de l’évaluation comptable (The SIGAPS effect: French medical research in the grip of accounting-based evaluation)

Authors : Yves Gingras, Mahdi Khelfaoui

This study aims to highlight the perverse effects generated by the introduction of the SIGAPS system (Système d’interrogation, de gestion, et d’analyse des publications scientifiques) on French scientific output in medicine and the biomedical sciences.

This bibliometric tool for managing and funding research is an emblematic example of the excesses that research evaluation methods based on purely accounting-style criteria can generate.

In this note, we first describe how SIGAPS works, and then explain precisely why the methods used to calculate “SIGAPS points”, based on journal impact factors and the order of co-authors’ names, raise numerous problems.
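
To make the criticized mechanism concrete, here is a minimal sketch (constructed for illustration, not taken from the note) of a SIGAPS-style calculation, in which an article’s points are the product of a journal-category score and an author-position coefficient. The specific values below (A = 8 down to NC = 1; first or last author = 4, second = 3, third or next-to-last = 2, others = 1) are the grid commonly reported for SIGAPS and should be treated as assumptions rather than figures quoted from the note.

```python
# Illustrative SIGAPS-style scoring:
#   points = journal-category score x author-position coefficient.
# All coefficients are the commonly reported grid, assumed for illustration.

CATEGORY_POINTS = {"A": 8, "B": 6, "C": 4, "D": 3, "E": 2, "NC": 1}

def position_coefficient(position: int, n_authors: int) -> int:
    """Assumed coefficient for an author's rank in the byline."""
    if position in (1, n_authors):      # first or last author
        return 4
    if position == 2:                   # second author
        return 3
    if position in (3, n_authors - 1):  # third or next-to-last author
        return 2
    return 1                            # any other position

def sigaps_points(category: str, position: int, n_authors: int) -> int:
    return CATEGORY_POINTS[category] * position_coefficient(position, n_authors)

# Example: second author of a six-author paper in a category-B journal.
print(sigaps_points("B", 2, 6))  # 6 * 3 = 18 points
```

Because both factors feed directly into research funding, this single multiplication ties journal choice and author order to money, which is the mechanism behind the effects on publication venues and authorship practices that the note describes.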

In particular, we identify the effects of the SIGAPS system on publication dynamics, the choice of publication venues, the language of publication, and the criteria used to recruit and promote researchers.

Finally, we show that the use of the SIGAPS system does not satisfy all the criteria of what might be called an “ethics of evaluation”, which should respect certain rules, such as transparency, fairness, and the validity of indicators.

URL : https://cirst2.openum.ca/files/sites/179/2020/10/Note_2020-05vf.pdf

Changing how we evaluate research is difficult, but not impossible

Authors : Anna Hatch, Stephen Curry

The San Francisco Declaration on Research Assessment (DORA) was published in 2013 and described how funding agencies, institutions, publishers, organizations that supply metrics, and individual researchers could better evaluate the outputs of scientific research.

Since then, DORA has evolved into an active initiative that gives practical advice to institutions on new ways to assess and evaluate research. This article outlines a framework for driving institutional change that was developed at a meeting convened by DORA and the Howard Hughes Medical Institute.

The framework has four broad goals: understanding the obstacles to changes in the way research is assessed; experimenting with different approaches; creating a shared vision when revising existing policies and practices; and communicating that vision on campus and beyond.


DOI : https://doi.org/10.7554/eLife.58654

Use of the journal impact factor for assessing individual articles need not be statistically wrong

Authors : Ludo Waltman, Vincent A. Traag

Most scientometricians reject the use of the journal impact factor for assessing individual articles and their authors. The well-known San Francisco Declaration on Research Assessment also strongly objects to this way of using the impact factor.

Arguments against the use of the impact factor at the level of individual articles are often based on statistical considerations. The skewness of journal citation distributions typically plays a central role in these arguments.

We present a theoretical analysis of statistical arguments against the use of the impact factor at the level of individual articles. Our analysis shows that these arguments do not support the conclusion that the impact factor should not be used for assessing individual articles.

In fact, our computer simulations demonstrate the possibility that the impact factor is a more accurate indicator of the value of an article than the number of citations the article has received.
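
The paper specifies its own simulations; the toy sketch below is constructed here for illustration, not taken from the paper, but it shows the statistical possibility the authors describe: when an article’s citation count is a noisy readout of its underlying value and journals are at least somewhat selective, the journal’s mean citation rate (an impact-factor-like quantity) can estimate the article’s value with lower error than the article’s own citations. All distributional parameters are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_journals, n_articles = 100, 200

# Latent article "values": journals differ in their mean value (selectivity),
# and articles scatter mildly around their journal's mean.
journal_means = rng.normal(10.0, 3.0, size=(n_journals, 1))
values = journal_means + rng.normal(0.0, 1.0, size=(n_journals, n_articles))

# Observed citations: a noisy readout of value (the noise dominates the
# within-journal spread in this toy regime).
citations = values + rng.normal(0.0, 5.0, size=values.shape)

# Estimator 1: an article's own citation count.
mse_citations = np.mean((citations - values) ** 2)

# Estimator 2: the journal's mean citation rate, an impact-factor-like quantity.
impact_factor = citations.mean(axis=1, keepdims=True)
mse_if = np.mean((impact_factor - values) ** 2)

print(f"MSE using own citations:     {mse_citations:.2f}")
print(f"MSE using journal mean (IF): {mse_if:.2f}")  # much smaller here
```

Whether real citation data sit in the regime where this happens is a separate empirical question; the sketch only shows that the possibility is statistically coherent, which is the paper’s point against purely statistical objections.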

It is important to critically discuss the dominant role of the impact factor in research evaluations, but the discussion should not be based on misplaced statistical arguments. Instead, the primary focus should be on the socio-technical implications of the use of the impact factor.


DOI : https://doi.org/10.12688/f1000research.23418.1

Inferring the causal effect of journals on citations

Author : Vincent Traag

Articles in high-impact journals are, by definition, more highly cited on average. But are they cited more often because the articles are somehow “better”? Or are they cited more often simply because they appeared in a high-impact journal? Although some evidence suggests the latter, the causal relationship is not clear.

Here we compare citations of published journal articles with citations of their preprint versions to uncover the causal mechanism. We build on an earlier model to infer the causal effect of journals on citations. We find evidence for both effects.

We show that high-impact journals seem to select articles that tend to attract more citations. At the same time, we find that high-impact journals augment the citation rate of published articles.
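
The model used in the paper is more elaborate, but a toy simulation (constructed here for illustration, not taken from the paper) shows why the preprint signal helps separate the two effects: journal placement is made both to select on article quality and to add a citation boost, and conditioning on preprint citations removes most, though not all, of the selection component. All parameter values are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Latent article quality drives preprint citations (observed before publication).
quality = rng.normal(0.0, 1.0, n)
preprint_cites = quality + rng.normal(0.0, 0.5, n)

# Selection effect: higher-quality articles are likelier to land in a high-impact journal.
high_impact = (quality + rng.normal(0.0, 1.0, n)) > 1.0

# Causal effect: the journal itself adds an assumed citation boost of +0.8.
true_boost = 0.8
published_cites = quality + true_boost * high_impact + rng.normal(0.0, 0.5, n)

# A naive group comparison conflates selection with the boost.
naive = published_cites[high_impact].mean() - published_cites[~high_impact].mean()

# Regressing on the preprint signal strips out most of the selection effect.
X = np.column_stack([np.ones(n), preprint_cites, high_impact.astype(float)])
beta, *_ = np.linalg.lstsq(X, published_cites, rcond=None)

print(f"naive journal effect:    {naive:.2f}")    # well above 0.8
print(f"adjusted journal effect: {beta[2]:.2f}")  # much closer to the assumed 0.8
```

In the sketch, the naive high- versus low-impact comparison mixes the selection and boost components, while the preprint-adjusted regression recovers something close to the assumed boost; this is the identification idea behind comparing published articles with their preprint versions.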

Our results yield a deeper understanding of the role of journals in the research system. The use of journal metrics in research evaluation has been increasingly criticised in recent years and article-level citations are sometimes suggested as an alternative.

Our results show that removing impact factors from evaluation does not negate the influence of journals. This insight has important implications for changing practices of research evaluation.

URL : https://arxiv.org/abs/1912.08648

Meta-Research: Use of the Journal Impact Factor in academic review, promotion, and tenure evaluations

Authors : Erin C McKiernan, Lesley A Schimanski, Carol Muñoz Nieves, Lisa Matthias, Meredith T Niles, Juan P Alperin

We analyzed how often and in what ways the Journal Impact Factor (JIF) is currently used in review, promotion, and tenure (RPT) documents of a representative sample of universities from the United States and Canada. We found that 40% of research-intensive institutions and 18% of master’s institutions mentioned the JIF or closely related terms.

Of the institutions that mentioned the JIF, 87% supported its use in at least one of their RPT documents, 13% expressed caution about its use, and none heavily criticized it or prohibited its use. Furthermore, 63% of institutions that mentioned the JIF associated the metric with quality, 40% with impact, importance, or significance, and 20% with prestige, reputation, or status.

We conclude that use of the JIF is encouraged in RPT evaluations, especially at research-intensive universities, and that there is work to be done to avoid the potential misuse of metrics like the JIF.


DOI : https://doi.org/10.7554/eLife.47338.001