How unpredictable is research impact? Evidence from the UK’s Research Excellence Framework

Authors : Ohid Yaqub, Dmitry Malkov, Josh Siepel

Although ex post evaluation of impact is increasingly common, the extent to which research impacts emerge largely as anticipated by researchers, or as the result of serendipitous and unpredictable processes, is not well understood.

In this article, we explore whether predictions of impact made at the funding stage align with realized impact, using data from the UK’s Research Excellence Framework (REF). We exploit REF impact cases traced back to research funding applications, as a dataset of 2,194 case–grant pairs, to compare impact topics with funder remits.

For 209 of those pairs, we directly compare their descriptions of ex ante and ex post impact. We find that impact claims in these case–grant pairs are often congruent with each other, with 76% showing alignment between anticipated impact at funding stage and the eventual claimed impact in the REF. Co-production of research, often perceived as a model for impactful research, was a feature of just over half of our cases.

Our results show that, contrary to other preliminary studies of the REF, impact appears to be broadly predictable, although unpredictability remains important. We suggest that co-production is a reasonably good mechanism for addressing the balance of predictable and unpredictable impact outcomes.

URL : How unpredictable is research impact? Evidence from the UK’s Research Excellence Framework

DOI : https://doi.org/10.1093/reseval/rvad019

Judging Journals: How Impact Factor and Other Metrics Differ across Disciplines

Authors : Quinn Galbraith, Alexandra Carlile Butterfield, Chase Cardon

Given academia’s frequent use of publication metrics and the inconsistencies in metrics across disciplines, this study examines how various disciplines are treated differently by metric systems. We seek to offer academic librarians, university rank and tenure committees, and other interested individuals guidelines for distinguishing general differences between journal bibliometrics in various disciplines.

This study addresses the following questions: How well represented are different disciplines in the indexing of each metrics system (Eigenfactor, Scopus, Web of Science, Google Scholar)? How does each metrics system treat disciplines differently, and how do these differences compare across metrics systems?

For university libraries and academic librarians, this study may increase understanding of the comparative value of various metrics, which hopefully will facilitate more informed decisions regarding the purchase of journal subscriptions and the evaluation of journals and metrics systems.

This study indicates that different metrics systems prioritize different disciplines and that metrics are not always easily compared across disciplines. Consequently, simple reliance on metrics in publishing or purchasing decisions is often flawed.

URL : Judging Journals: How Impact Factor and Other Metrics Differ across Disciplines

DOI : https://doi.org/10.5860/crl.84.6.888

Why are these publications missing? Uncovering the reasons behind the exclusion of documents in free-access scholarly databases

Authors : Lorena Delgado-Quirós, Isidro F. Aguillo, Alberto Martín-Martín, Emilio Delgado López-Cózar, Enrique Orduña-Malea, José Luis Ortega

This study analyses the coverage of seven free-access bibliographic databases (Crossref, Dimensions—non-subscription version, Google Scholar, Lens, Microsoft Academic, Scilit, and Semantic Scholar) to identify the potential reasons that might cause the exclusion of scholarly documents and how they could influence coverage.

To do this, 116,000 randomly selected bibliographic records from Crossref were used as a baseline. API endpoints and web scraping were used to query each database. The results show that coverage differences are mainly caused by the way each service builds its database.

While classic bibliographic databases ingest almost exactly the same content from Crossref (Lens and Scilit miss only 0.1% and 0.2% of the records, respectively), academic search engines show lower coverage (Google Scholar misses 9.8% of the records, Semantic Scholar 10%, and Microsoft Academic 12%). These coverage differences are mainly attributed to external factors, such as web accessibility and robot exclusion policies (39.2%–46%), and to internal requirements that exclude secondary content (6.5%–11.6%).

In the case of Dimensions, the classic bibliographic database with the lowest coverage (7.6% of records missing), internal selection criteria, such as indexing full books instead of book chapters (65%) and excluding secondary content (15%), are the main reasons for missing publications.
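
As an illustration of this kind of coverage check, here is a minimal sketch in Python, assuming the Crossref REST API's random `sample` parameter and the Semantic Scholar Graph API's DOI lookup (both public endpoints); the sample size, fields, and lack of error handling are illustrative choices, not the study's actual pipeline.

```python
import requests

CROSSREF = "https://api.crossref.org/works"
S2_BY_DOI = "https://api.semanticscholar.org/graph/v1/paper/DOI:{doi}"

def crossref_sample(n=100):
    """Fetch up to 100 random Crossref records (the API caps `sample` at 100)."""
    r = requests.get(CROSSREF, params={"sample": n}, timeout=30)
    r.raise_for_status()
    return r.json()["message"]["items"]

def in_semantic_scholar(doi):
    """Return True if Semantic Scholar resolves this DOI."""
    r = requests.get(S2_BY_DOI.format(doi=doi), params={"fields": "title"}, timeout=30)
    return r.status_code == 200

if __name__ == "__main__":
    items = crossref_sample(100)
    missing = [item["DOI"] for item in items if not in_semantic_scholar(item["DOI"])]
    print(f"{len(missing)}/{len(items)} sampled Crossref DOIs not found in Semantic Scholar")
```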

URL : Why are these publications missing? Uncovering the reasons behind the exclusion of documents in free-access scholarly databases

DOI : https://doi.org/10.1002/asi.24839

The Predatory Paradox : Ethics, Politics, and Practices in Contemporary Scholarly Publishing

Authors : Amy Koerber, Jesse C. Starkey, Karin Ardon-Dryer, R. Glenn Cummins, Lyombe Eko, Kerk F. Kee

In today’s ‘publish or perish’ academic setting, the institutional prizing of quantity over quality has given rise to and perpetuated the dilemma of predatory publishing. Upon close examination, however, the definition of ‘predatory’ itself becomes slippery, evading the neat boxes or lists that might seek to define and guard against it.

This volume serves to foreground a nuanced representation of this multifaceted issue. In such a rapidly evolving landscape, this book becomes a field guide to its historical, political, and economic aspects, presenting thoughtful interviews, legal analysis and original research. Case studies from both European-American and non-European-American stakeholders emphasize the worldwide nature of the challenge faced by researchers of all levels.

This coauthored book is structured into both textual and supplemental materials. Key takeaways, discussion questions, and complete classroom activities accompanying each chapter provide opportunities for engagement and real-world applications of these concepts.

Crucially relevant to early-career researchers and to the senior faculty, library scholars, and administrators who mentor and support them, ‘The Predatory Paradox: Ethics, Politics, and Practices in Contemporary Scholarly Publishing’ offers practical recommendations for navigating the complex and often contradictory advice currently available. University instructors and teaching faculty will also find the book essential for preparing both graduate and undergraduate students for the pitfalls endemic to scholarly publishing.

URL : The Predatory Paradox : Ethics, Politics, and Practices in Contemporary Scholarly Publishing

DOI : https://doi.org/10.11647/OBP.0364

Enquête quantitative sur les pratiques et les besoins des chercheurs sur la gestion des données de la recherche, algorithmes et codes sources dans les établissements du site toulousain

Authors : Danielle Brunet, Soraya Demay, Pierre Diaz, Borbala Goncz, Laure Leclerc, Flora Poupinot, Sibilla Michelle

The Comité de réflexion pour le partage et la valorisation des données de la recherche et la coordination de la Science Ouverte (CéSO) of the Université de Toulouse conducted a quantitative survey on the management of research data, algorithms, and source code.

Addressed to the entire scientific community of the Toulouse site, its objective was to produce an overview of researchers’ practices, knowledge, and needs in research data management. The results will help refine the range of services offered on the Toulouse site.

The survey covers the member institutions of the Université de Toulouse as well as partner research organizations: Université Toulouse Capitole, Université Toulouse – Jean Jaurès, Université Toulouse III – Paul Sabatier, Institut national polytechnique de Toulouse (Toulouse INP), Institut national des sciences appliquées de Toulouse (INSA Toulouse), Institut supérieur de l’aéronautique et de l’espace (ISAE-SUPAERO), Institut national universitaire Champollion (INU Champollion), École nationale de l’aviation civile (ENAC), École nationale d’ingénieurs de Tarbes (ENIT), École nationale supérieure d’architecture de Toulouse (ENSA Toulouse), École nationale vétérinaire de Toulouse (ENVT), École nationale supérieure de formation de l’enseignement agricole (ENSFEA), Institut catholique d’arts et métiers (ICAM), École nationale supérieure des mines d’Albi-Carmaux (IMT Mines d’Albi), Toulouse Business School (TBS), Centre national d’études spatiales (CNES), Centre national de la recherche scientifique (CNRS), Institut national de recherche pour l’agriculture, l’alimentation et l’environnement (INRAE), Institut national de la santé et de la recherche médicale (Inserm), Institut de recherche pour le développement (IRD), Office national d’études et de recherche aérospatiales (Onera), Météo-France.

URL : Enquête quantitative sur les pratiques et les besoins des chercheurs sur la gestion des données de la recherche, algorithmes et codes sources dans les établissements du site toulousain

Original location : https://ut3-toulouseinp.hal.science/hal-04262708v1/

Measured in a context: making sense of open access book data

Author : Ronald Snijder

Open access (OA) book platforms, such as JSTOR, OAPEN Library or Google Books, have been available for over a decade. Each platform shows usage data, but this results in confusion about how well an individual book is performing overall. Even within one platform, there are considerable usage differences between subjects and languages. Some context is therefore necessary to make sense of OA books usage data.

A possible solution is a new metric – the Transparent Open Access Normalized Index (TOANI) score. It is designed to provide a simple answer to the question of how well an individual open access book or chapter is performing. Its transparency rests on clear rules and on making all of the data used visible.

The data are normalized using a common scale for the complete collection of an open access book platform and, to keep complexity as low as possible, the score is based on a simple metric.

As a proof of concept, the usage of over 18,000 open access books and chapters in the OAPEN Library has been analysed to determine whether each individual title has performed as well as can be expected compared with similar titles.
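
The article itself defines the TOANI score; the sketch below only illustrates the general idea of normalizing a title's usage against similar titles on the same platform. The grouping variables, tertile thresholds, and labels are assumptions for the example, not the published formula.

```python
from collections import defaultdict
from statistics import quantiles

# Hypothetical usage records: (title, subject, language, downloads).
books = [
    ("Book A", "History", "English", 1200),
    ("Book B", "History", "English", 300),
    ("Book C", "History", "English", 4500),
    ("Book D", "History", "English", 800),
    ("Book E", "Linguistics", "German", 150),
]

# Group usage counts by subject and language so that each title is only
# compared with similar titles on the same platform.
groups = defaultdict(list)
for title, subject, language, downloads in books:
    groups[(subject, language)].append(downloads)

def normalized_score(downloads, peer_downloads):
    """Place a title in the lower, middle, or upper third of its peer group (illustrative scale)."""
    if len(peer_downloads) < 3:
        return "too few peers to normalize"
    lower, upper = quantiles(peer_downloads, n=3)  # tertile cut points
    if downloads <= lower:
        return "below expectation"
    if downloads >= upper:
        return "above expectation"
    return "as expected"

for title, subject, language, downloads in books:
    print(title, "->", normalized_score(downloads, groups[(subject, language)]))
```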

URL : Measured in a context: making sense of open access book data

DOI : https://doi.org/10.1629/uksg.627

Academic co-authorship is a risky game

Authors : Teddy Lazebnik, Stephan Beck, Labib Shami

Conducting a project with multiple participants is a complex task that involves many social, economic, and psychological interactions. Conducting academic research in general, and writing an academic manuscript in particular, is notoriously difficult to navigate successfully, due to the collaboration dynamics currently common in academia.

In this study, we propose a game-theory-based model for a co-authorship writing project in which authors are allowed to raise an ultimatum, blocking publication of the manuscript if they do not receive more credit for the work.

Using the proposed model, we explore how each author's contribution and the utility of publishing the manuscript influence the rate at which one or more authors would gain from raising an ultimatum. We also show that the project's duration, its current state, and the number of authors have a major impact on this rate.

In addition, we examine common student-advisor and colleague-colleague co-authorship scenarios. Our model reveals disturbing results and demonstrates that current, broadly accepted academic practices for collaboration are designed in a way that encourages authors to raise an ultimatum, stopped only by their integrity rather than by systematic design.
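
The abstract does not spell out the model's payoff structure; the sketch below is purely hypothetical and only illustrates the kind of trade-off involved: expected credit from publishing as agreed versus expected credit from raising an ultimatum, discounted by the risk that the manuscript is blocked. All parameter names and values are illustrative assumptions.

```python
def worth_raising_ultimatum(current_share, demanded_share, p_accepted, project_value=1.0):
    """Compare the expected payoff of publishing as agreed with that of raising an ultimatum.

    current_share / demanded_share: the author's share of credit before and after the demand.
    p_accepted: probability that the co-authors concede rather than let the project stall.
    project_value: value of the published manuscript to this author.
    (Illustrative only; the paper's actual game-theoretic model is richer than this.)
    """
    payoff_cooperate = current_share * project_value
    # If the ultimatum is rejected, assume the manuscript is blocked and the payoff is zero.
    payoff_ultimatum = p_accepted * demanded_share * project_value
    return payoff_ultimatum > payoff_cooperate

# Example: a 25% contributor demanding 40%, expecting co-authors to concede 70% of the time.
print(worth_raising_ultimatum(0.25, 0.40, 0.70))  # True: 0.28 > 0.25
```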

URL : Academic co-authorship is a risky game

DOI : https://doi.org/10.1007/s11192-023-04843-x