The role of non-scientific factors vis-à-vis the quality of publications in determining their scholarly impact

Authors : Giovanni Abramo, Ciriaco Andrea D’Angelo, Leonardo Grilli

In the evaluation of scientific publications’ impact, the interplay between intrinsic quality and non-scientific factors remains a subject of debate. While peer review traditionally assesses quality, bibliometric techniques gauge scholarly impact. This study investigates the role of non-scientific attributes alongside quality scores from peer review in determining scholarly impact.

Leveraging data from the first Italian Research Assessment Exercise (VTR 2001-2003) and Web of Science citations, we analyse the relationship between quality scores, non-scientific factors, and publications' short- and long-term impact.
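The abstract does not give the model specification; as a minimal sketch of the kind of analysis it describes, assuming a table pairing VTR peer-review quality scores and candidate non-scientific attributes with Web of Science citation counts (all file and column names below are hypothetical):

```python
# Illustrative sketch only: the variable names are assumptions,
# not the authors' actual specification.
import pandas as pd
import statsmodels.formula.api as smf

pubs = pd.read_csv("vtr_publications.csv")  # hypothetical input table

# Citation counts regressed on the peer-review quality score plus
# candidate non-scientific factors (all column names are placeholders).
model = smf.negativebinomial(
    "citations ~ quality_score + n_authors + journal_impact + intl_collab",
    data=pubs,
).fit()
print(model.summary())
```

A negative binomial model is a common choice for over-dispersed citation counts, but the paper may use a different estimator.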

Our findings shed light on the significance of non-scientific elements overlooked in peer review, offering policymakers and research managers insights for choosing evaluation methodologies. The paper's sections delve into the debate, identify non-scientific influences, detail the methodology, present the results, and discuss their implications.

Arxiv : https://arxiv.org/abs/2404.05345

A survey of how biology researchers assess credibility when serving on grant and hiring committees

Authors : Iain Hrynaszkiewicz, Beruria Novich, James Harney, Veronique Kiermer

Researchers who serve on grant review and hiring committees have to make decisions about the intrinsic value of research in short periods of time, and research impact metrics such as the Journal Impact Factor (JIF) exert undue influence on these decisions. Initiatives such as the Coalition for Advancing Research Assessment (CoARA) and the Declaration on Research Assessment (DORA) emphasize responsible use of quantitative metrics and avoidance of journal-based impact metrics for research assessment. Further, our previous qualitative research suggested that assessing credibility, or trustworthiness, of research is important to researchers not only when they seek to inform their own research but also in the context of research assessment committees.

To confirm our findings from previous interviews in quantitative terms, we surveyed 485 biology researchers who have served on committees for grant review or hiring and promotion decisions, to understand how they assess the credibility of research outputs in these contexts. We found that concepts like credibility, trustworthiness, quality, and impact lack consistent definitions and interpretations among researchers, a pattern we had already observed in our interviews.

We also found that assessing credibility is very important to most (81%) researchers serving on these committees, but fewer than half of respondents are satisfied with their ability to assess it. A substantial proportion of respondents (57%) report using journal reputation and the JIF to assess credibility, proxies that research assessment reformers consider inappropriate because they do not rely on intrinsic characteristics of the research.

This gap between the importance of an assessment and satisfaction with the ability to conduct it recurred across the aspects of credibility we tested, and it was greatest for assessing the integrity of research (such as identifying signs of fabrication, falsification, or plagiarism) and the suitability and completeness of research methods. Non-traditional research outputs associated with Open Science practices (shared research data, code, protocols, and preprints) are particularly hard for researchers to assess, despite the potential of Open Science practices to signal trustworthiness.

Our results suggest opportunities to develop better guidance and better signals to support the evaluation of research credibility and trustworthiness, and ultimately to support research assessment reform: away from inappropriate proxies for impact and towards the intrinsic characteristics and values researchers see as important.

DOI : https://doi.org/10.31222/osf.io/ht836

Societal and scientific impact of policy research: A large-scale empirical study of some explanatory factors using Altmetric and Overton

Authors: Pablo Dorta-González, Alejandro Rodríguez-Caro, María Isabel Dorta-González

This study investigates how scientific research influences policymaking by analyzing citations of research articles in policy documents (policy impact) for nearly 125,000 articles across 434 public policy journals. We reveal distinct citation patterns between policymakers and other stakeholders like researchers, journalists, and the public.

News and blog mentions, social media engagement, and open access publication (other than in fully open access journals) significantly increase the likelihood of a research article being cited in policy documents. Conversely, articles locked behind paywalls, and those published in fully open access journals (as classified by Altmetric), have a lower chance of being policy-cited. Publication year and policy type show no significant influence. Our findings emphasize the crucial role of science communication channels like news media and social media in bridging the gap between research and policy.
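As a hedged illustration of the kind of model such an analysis implies (the study's actual specification and variable names are not reproduced here), a logistic regression of policy citation on communication and access-model predictors might look like:

```python
# Illustrative only: column names are assumptions, not the study's variables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("altmetric_overton.csv")  # hypothetical merged dataset

# Logistic regression of policy citation (0/1) on news/blog mentions,
# social media engagement, and access model, controlling for year.
fit = smf.logit(
    "policy_cited ~ news_mentions + blog_mentions + tweets"
    " + C(access_model) + C(pub_year)",
    data=df,
).fit()
print(np.exp(fit.params))  # odds ratios per predictor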

Interestingly, academic citations have a weaker influence on policy citations than news mentions do, suggesting a potential disconnect between how researchers reference research and how policymakers utilize it. This highlights the need for improved communication strategies to ensure research informs policy decisions more effectively.

This study provides valuable insights for researchers, policymakers, and science communicators. Researchers can tailor their dissemination efforts to reach policymakers through media channels. Policymakers can leverage these findings to identify research with higher policy relevance. Science communicators can play a critical role in translating research for policymakers and fostering dialogue between the scientific and policymaking communities.

Arxiv : https://arxiv.org/abs/2403.06714

Scholar Metrics Scraper (SMS): automated retrieval of citation and author data

Authors : Yutong Cao, Nicole A. Cheung, Dean Giustini, Jeffrey LeDue, Timothy H. Murphy

Academic departments, research clusters and evaluators analyze author and citation data to measure research impact and to support strategic planning. We created Scholar Metrics Scraper (SMS) to automate the retrieval of bibliometric data for a group of researchers.

The project contains Jupyter notebooks that take a list of researchers as input and export a CSV file of citation metrics from Google Scholar (GS) to visualize the group's impact and collaboration. A series of graph outputs is also available. SMS is an open solution for automating the retrieval and visualization of citation data.
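SMS itself ships as notebooks; purely as a stand-alone sketch of the same idea, using the third-party scholarly package (an assumption, not necessarily what SMS uses internally), one could pull per-author metrics into a CSV:

```python
# Sketch of the SMS idea, not SMS's own code: fetch Google Scholar
# metrics for a list of researchers and write them to a CSV file.
import csv
from scholarly import scholarly  # third-party package; pip install scholarly

researchers = ["Jane Doe", "John Smith"]  # placeholder names

with open("citation_metrics.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["name", "affiliation", "citations", "h_index"])
    for name in researchers:
        author = next(scholarly.search_author(name))  # first match only
        author = scholarly.fill(author, sections=["basics", "indices"])
        writer.writerow([
            author.get("name"),
            author.get("affiliation"),
            author.get("citedby"),
            author.get("hindex"),
        ])
```

Note that unattended Google Scholar scraping is rate-limited, which is part of what a packaged tool like SMS has to manage.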

DOI : https://doi.org/10.3389/frma.2024.1335454

Does the Use of Unusual Combinations of Datasets Contribute to Greater Scientific Impact?

Authors : Yulin Yu, Daniel M. Romero

Scientific datasets play a crucial role in contemporary data-driven research, as they allow for the progress of science by facilitating the discovery of new patterns and phenomena. This mounting demand for empirical research raises important questions about how the strategic use of data in research projects can stimulate scientific advancement.

In this study, we examine a hypothesis inspired by recombination theory, which suggests that innovative combinations of existing knowledge, including the use of unusual combinations of datasets, can lead to high-impact discoveries. We investigate the scientific outcomes of such atypical data combinations in more than 30,000 publications that leverage over 6,000 datasets curated within one of the largest social science databases, ICPSR.
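The abstract does not detail the atypicality measure; a toy sketch of the underlying idea, scoring dataset pairs by how rarely they co-occur across papers, could look like this (the published measure is presumably more elaborate, e.g., comparing against a null model):

```python
# Illustrative sketch of scoring how unusual a dataset pairing is,
# based on co-usage counts across papers (not the authors' exact measure).
from collections import Counter
from itertools import combinations

# papers maps a paper ID to the set of dataset IDs it uses (toy data).
papers = {
    "p1": {"d1", "d2"},
    "p2": {"d1", "d2", "d3"},
    "p3": {"d1", "d3"},
}

pair_counts = Counter()
for datasets in papers.values():
    for pair in combinations(sorted(datasets), 2):
        pair_counts[pair] += 1

# Here a paper's atypicality is the rarity of its least-common pair;
# lower counts mean a more unusual combination of datasets.
def atypicality(datasets):
    pairs = list(combinations(sorted(datasets), 2))
    return min(pair_counts[p] for p in pairs) if pairs else None

for pid, ds in papers.items():
    print(pid, atypicality(ds))
```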

This study offers four important insights. First, combining datasets, particularly those infrequently paired, significantly contributes to both scientific and broader impacts (e.g., dissemination to the general public). Second, combining datasets whose topics are atypically paired has the opposite effect: the use of such data is associated with fewer citations.

Third, younger and less experienced research teams tend to use atypical combinations of datasets in research at a higher frequency than their older and more experienced counterparts.

Lastly, despite the benefits of data combination, papers that amalgamate data remain infrequent. This finding suggests that the unconventional combination of datasets is an under-utilized but powerful strategy correlated with the scientific and broader impact of scientific discoveries.

URL : https://arxiv.org/abs/2402.05024

Clickbait or conspiracy? How Twitter users address the epistemic uncertainty of a controversial preprint

Authors : Mareike Bauer, Maximilian Heimstädt, Carlos Franzreb, Sonja Schimmler

Many scientists share preprints on social media platforms to gain attention from academic peers, policy-makers, and journalists. In this study we shed light on an unintended but highly consequential effect of sharing preprints: their contribution to conspiracy theories. Although the scientific community might quickly dismiss a preprint as insubstantial and ‘clickbaity’, its uncertain epistemic status nevertheless allows conspiracy theorists to mobilize the text as scientific support for their own narratives.

To better understand the epistemic politics of preprints on social media platforms, we studied the case of a biomedical preprint, which was shared widely and discussed controversially on Twitter in the wake of the coronavirus disease 2019 pandemic. Using a combination of social network analysis and qualitative content analysis, we compared the structures of engagement with the preprint and the discursive practices of scientists and conspiracy theorists.
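As a hedged sketch of the network-analysis side of such a study (not the authors' pipeline), one could build a retweet graph around the preprint and inspect whose posts were most amplified:

```python
# Hedged sketch, not the authors' method: build a retweet graph
# and inspect the structure of engagement with the preprint.
import networkx as nx

# Toy edge list: (retweeter, original poster) pairs (placeholder accounts).
retweets = [
    ("user_a", "scientist_1"),
    ("user_b", "scientist_1"),
    ("user_b", "conspiracist_1"),
    ("user_c", "conspiracist_1"),
]

G = nx.DiGraph()
G.add_edges_from(retweets)

# In-degree approximates how widely each account's posts were amplified.
print(sorted(G.in_degree(), key=lambda kv: kv[1], reverse=True))
```

The qualitative content analysis the study pairs with this kind of structure is, of course, not automatable in this way.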

We found that despite substantial engagement, scientists were unable to dampen the conspiracy theorists’ enthusiasm for the preprint. We further found that members from both groups not only tried to reduce the preprint’s epistemic uncertainty but sometimes deliberately maintained it.

The maintenance of epistemic uncertainty helped conspiracy theorists to reinforce their group’s identity as skeptics and allowed scientists to express concerns with the state of their profession.

Our study contributes to research on the intricate relations between scientific knowledge and conspiracy theories online, as well as the role of social media platforms for new genres of scholarly communication.

DOI : https://doi.org/10.1177/20539517231180575

How unpredictable is research impact? Evidence from the UK’s Research Excellence Framework

Authors : Ohid Yaqub, Dmitry Malkov, Josh Siepel

Although ex post evaluation of impact is increasingly common, the extent to which research impacts emerge largely as anticipated by researchers, or as the result of serendipitous and unpredictable processes, is not well understood.

In this article, we explore whether predictions of impact made at the funding stage align with realized impact, using data from the UK’s Research Excellence Framework (REF). We exploit REF impact cases that can be traced back to research funding applications, yielding a dataset of 2,194 case–grant pairs, to compare impact topics with funder remits.

For 209 of those pairs, we directly compare their descriptions of ex ante and ex post impact. We find that impact claims in these case–grant pairs are often congruent with each other, with 76% showing alignment between anticipated impact at funding stage and the eventual claimed impact in the REF. Co-production of research, often perceived as a model for impactful research, was a feature of just over half of our cases.

Our results show that, contrary to other preliminary studies of the REF, impact appears to be broadly predictable, although unpredictability remains important. We suggest that co-production is a reasonably good mechanism for addressing the balance of predictable and unpredictable impact outcomes.
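The abstract does not say how the ex ante and ex post impact descriptions were compared; purely as an illustration of one automated proxy for such alignment (the study's own comparison may well have rested on qualitative coding), a text-similarity check could look like:

```python
# Purely illustrative proxy for ex ante / ex post "alignment";
# the example texts below are invented, not from the REF dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

ex_ante = ["We anticipate impact on clinical guidelines for sepsis care."]
ex_post = ["The research changed national sepsis treatment guidelines."]

vec = TfidfVectorizer().fit(ex_ante + ex_post)
sim = cosine_similarity(vec.transform(ex_ante), vec.transform(ex_post))
print(f"similarity: {sim[0, 0]:.2f}")  # higher = more congruent claims
```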

DOI : https://doi.org/10.1093/reseval/rvad019