From ‘research impact’ to ‘research value’: a new approach to support research for societal benefit

Authors : Ruth A O'Connor, Sejul Malde, A Wendy Russell, Maya Haviland, Kate Bellchambers, Kirsty Jones, Ginny M Sargent, Sara Bice

University research has a vital role to play in addressing complex societal challenges. The research impact (RI) agenda should enable this but is critiqued for creating an audit culture focused narrowly on economic returns on investment and university rankings. There is a need for alternative approaches that better support research for societal benefit. A current hiatus in research assessment processes in Australia provides an opportunity to explore alternatives.

In this study, we elicited responses from 53 university staff in academic and professional roles to explore what constitutes research impact in practice, and what helps to achieve it. The responses highlight a disconnect between the current institutional framing of research impact and both the practices and values of those seeking to create societal benefit through research.

We identify four tensions between the motivations and practice of research staff on the one hand and the research impact agenda on the other. These tensions relate to (1) narrow definitions of impact inadequately encompassing valuable work; (2) the premise of linear impact pathways inaccurately portraying the complexity of impact; (3) assessment rewarding individual endeavour over collaboration; and (4) assessment focusing on auditing rather than learning through evaluation.

We take these findings and apply current theories of public and cultural value to offer ‘research value’ as an alternative approach to address the four tensions and nurture research for societal benefit.

URL : From ‘research impact’ to ‘research value’: a new approach to support research for societal benefit

DOI : https://doi.org/10.1093/reseval/rvag002

Do data management policies become more open over time?

Author : Beth Montague-Hellen

Research data management (RDM) policies are ubiquitous in UK Higher Education Institutions and are often written and managed by, or with, the library team. RDM policies attempt to balance the requirement to keep data safe and secure when necessary with the need to open up data to allow reuse and to support research integrity.

This article uses a framework analysis approach on 134 policies to investigate whether UK RDM policies have become more open over time in terms of policy points and language. The investigation shows that more recent policies are more likely to be open in several areas: how long data should be archived for, the sharing of software, and the mandatory inclusion of data availability statements in journal articles.

Language around FAIR data terms has increased, as has the use of research integrity as a key reason to manage data according to best practices.

URL : Do data management policies become more open over time?

DOI : https://doi.org/10.53377/lq.23144

Open access journals lack image accessibility guidelines

Authors : Kaitlin Stack Whitney, Julia Perrone, Christie A. Bahlai

In recent decades, there has been a move to “open” science and research. One component of open access is “accessibility,” often used to mean that data and other products are free to use by others. However, accessibility also refers to considering and meeting the needs of people with disabilities.

Our objective was to evaluate how open access journals incorporate disability accessibility as part of publishing. Using a random sample of 300 English-language journals and image accessibility as a lens, we assessed author guidelines. Of 289 journals with guidelines, 38 (13%) included color choice, six (∼2%) included contrast ratios, and none included alternative text.

We also assessed the open access statements for the same 300 journals to understand how they conceive of openness and accessibility. Of the 298 journals with open access statements, 228 (∼77%) included the words access or accessibility. Yet none included disability or disabled, and only two journals (<1%) mentioned inclusive or inclusion.

Our findings indicate that the open access journals sampled are not considering disability accessibility in their submission guidelines or open access frameworks. Incorporating disability accessibility into open scholarship considerations is critical to bridge, and not exacerbate, information inequalities for people with disabilities.

URL : Open access journals lack image accessibility guidelines

DOI : https://doi.org/10.1162/qss_a_00338

Scrolling through science: how accurate is science content on TikTok

Author : Ricardo Morais

TikTok has become a popular platform for science communication, particularly among younger audiences, allowing creators to reach broader audiences. However, concerns about the accuracy of science content shared on the platform have emerged, prompting this study to investigate the reliability of informal science communication, the casual sharing of scientific information on platforms like TikTok, by popular creators.

The main objective is to assess how well this content adheres to established scientific principles and avoids misinformation. By analysing videos from creators with significant followings, we will evaluate their adherence to scientific accuracy and identify factors that influence it, such as the creators’ backgrounds and platform algorithms.

The findings will highlight trends in the accuracy of content, with some creators producing reliable information while others risk spreading misinformation.

Ultimately, the research will provide recommendations for enhancing the accuracy of science content on TikTok, promoting critical thinking among viewers, and advancing informed science communication on social media.

URL : Scrolling through science: how accurate is science content on TikTok

DOI : https://doi.org/10.22323/165520251230163519

Can ChatGPT write better scientific titles? A comparative evaluation of human-written and AI-generated titles

Authors : Paul Sebo, Bing Nie, Ting Wang

Background

Large language models (LLMs) such as GPT-4 are increasingly used in scientific writing, yet little is known about how AI-generated scientific titles are perceived by researchers in terms of quality.

Objective

To compare the perceived alignment with the abstract content (as a surrogate for perceived accuracy), appeal, and overall preference for AI-generated versus human-written scientific titles.

Methods

We conducted a blinded comparative study with 21 researchers from diverse academic backgrounds. A random sample of 50 original titles was selected from 10 high-impact general internal medicine journals. For each title, an alternative version was generated using GPT-4.0. Each rater evaluated 50 pairs of titles, each pair consisting of one original and one AI-generated version, without knowing the source of the titles or the purpose of the study.

For each pair, raters independently assessed both titles on perceived alignment with the abstract content and appeal, and indicated their overall preference. We analyzed alignment and appeal using Wilcoxon signed-rank tests and mixed-effects ordinal logistic regressions, preferences using McNemar’s test and mixed-effects logistic regression, and inter-rater agreement with Gwet’s AC.
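
The abstract names its statistical tests but includes no code. Purely as a hedged illustration, the Python sketch below shows how the two simplest paired tests mentioned, the Wilcoxon signed-rank test and McNemar's test, are typically run; the ratings, the 2×2 table, and the variable names are invented placeholders rather than the study's data, and the mixed-effects models and Gwet's AC are omitted.

```python
# Rough sketch (not the authors' code): the two simplest paired tests named in
# the Methods, applied to invented placeholder data.
import numpy as np
from scipy.stats import wilcoxon
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(42)

# Hypothetical paired ratings (one value per rater-title pair, e.g. a 0-10 scale):
# each AI-generated title is rated alongside the corresponding original title.
ai_ratings = rng.integers(5, 11, size=200)
human_ratings = rng.integers(4, 10, size=200)

# Paired, non-parametric comparison of the two sets of ratings.
stat, p = wilcoxon(ai_ratings, human_ratings)
print(f"Wilcoxon signed-rank: W = {stat:.1f}, p = {p:.4f}")

# Hypothetical 2x2 table of paired binary outcomes; McNemar's test is driven by
# the discordant (off-diagonal) cells. This is a generic setup, not necessarily
# the exact coding used in the study.
table = np.array([[40, 25],
                  [12, 23]])
result = mcnemar(table, exact=True)
print(f"McNemar: statistic = {result.statistic}, p = {result.pvalue:.4f}")
```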

Results

AI-generated titles received significantly higher ratings for both perceived alignment with the abstract content (mean 7.9 vs. 6.7, p < 0.001) and appeal (mean 7.1 vs. 6.7, p < 0.001) than human-written titles. The odds of preferring an AI-generated title were 1.7 times higher (p = 0.001), with 61.8% of 1,049 paired judgments favoring the AI version. Inter-rater agreement was moderate to substantial (Gwet's AC: 0.54–0.70).

Conclusions

AI-generated titles were rated more favorably than human-written titles within the context of this study in terms of perceived alignment with the abstract content, appeal, and preference, suggesting that LLMs may enhance the effectiveness of scientific communication. These findings support the responsible integration of AI tools in research.

URL : Can ChatGPT write better scientific titles? A comparative evaluation of human-written and AI-generated titles

DOI : https://doi.org/10.12688/f1000research.173647.2

A framework for assessing the trustworthiness of scientific research findings

Authors : Brian A. Nosek, David B. Allison, Kathleen Hall Jamieson, Marcia McNutt, A. Beau Nielsen, Susan M. Wolf

Vigorous debate has erupted over the trustworthiness of scientific research findings in a number of domains. The question “what makes research findings trustworthy?” elicits different answers depending on whether the emphasis is on research integrity and ethics, research methods, transparency, inclusion, assessment and peer review, or scholarly communication. Each provides partial insight.

We offer a systems approach that focuses on whether the research is accountable, evaluable, and well-formulated, whether it has been evaluated, whether it controls for bias and reduces error, and whether its claims are warranted by the evidence. We tie each of these components to measurable indicators of trustworthiness for evaluating the research itself, the researchers conducting the research, and the organizations supporting the research.

Our goals are to offer a framework that can be applied across methods, approaches, and disciplines and to foster innovation in development of trustworthiness indicators. Developing valid indicators will improve the conduct and assessment of research and, ultimately, public understanding and trust.

URL : A framework for assessing the trustworthiness of scientific research findings

DOI : https://doi.org/10.1073/pnas.2536736123

Artificial intelligence in academic practices and policy discourses across ‘Big 5’ publishers

Authors : Gergely Ferenc Lendvai, Petra Aczél

The present study investigates how the five largest academic publishers (Elsevier, Springer, Wiley, Taylor & Francis, and SAGE) are responding to the epistemic and procedural challenges posed by generative AI through formal policy frameworks.

Situated within ongoing debates about the boundaries of authorship and the governance of AI-generated content, our research aims to critically assess the discursive and regulatory contours of publishers’ authorship guidelines (PGs).

We employed a multi-method design that combines qualitative coding, semantic network analysis, and comparative matrix visualization to examine the official policy texts collected from each publisher’s website. Findings reveal a foundational consensus across all five publishers in prohibiting AI systems from being credited as authors and in mandating disclosure of AI usage.

However, beyond this shared baseline, marked divergences emerge in the scope, specificity, and normative framing of AI policies. Co-occurrence and semantic analyses underline the centrality of ‘authorship’, ‘ethics’, and ‘accountability’ in AI discourse. Structural similarity measures further reveal alignment among Wiley, Elsevier, and Taylor & Francis, with Springer as a clear outlier.
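
The abstract describes the semantic network step only at a high level. As a rough, hedged sketch of what a term co-occurrence network over coded policy texts can look like, the Python example below uses networkx with invented publisher labels and coded terms; it is not the authors' method, code, or data.

```python
# Rough sketch (not the authors' pipeline): a minimal term co-occurrence network
# built from coded policy texts. All labels and terms are invented placeholders.
from collections import Counter
from itertools import combinations

import networkx as nx

# Hypothetical coded terms per publisher policy document.
coded_policies = {
    "publisher_a": ["authorship", "ethics", "disclosure", "accountability"],
    "publisher_b": ["authorship", "disclosure", "transparency"],
    "publisher_c": ["authorship", "ethics", "accountability", "transparency"],
}

# Count how often each pair of terms is coded in the same policy document.
pair_counts = Counter()
for terms in coded_policies.values():
    for a, b in combinations(sorted(set(terms)), 2):
        pair_counts[(a, b)] += 1

# Weighted co-occurrence graph: edge weight = number of documents sharing the pair.
graph = nx.Graph()
for (a, b), weight in pair_counts.items():
    graph.add_edge(a, b, weight=weight)

# Degree centrality as a crude proxy for how central each concept is.
for term, score in sorted(nx.degree_centrality(graph).items(), key=lambda kv: -kv[1]):
    print(f"{term}: {score:.2f}")
```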

Our results point to an unsettled regulatory landscape where policies serve not only as instruments of governance but also as performative assertions of institutional identity and legitimacy.

Consequently, the fragmented landscape of PGs highlights the need for harmonized, inclusive, and enforceable frameworks that recognize both the potential and the risks of AI in scholarly communication.

URL : Artificial intelligence in academic practices and policy discourses across ‘Big 5’ publishers

DOI : https://doi.org/10.1093/reseval/rvag004