The Open Book Environment (OBE) Dashboard: A Tool for Increasing Publisher Transparency for Authors, Librarians, and the Scholarly Community

Authors : Holly Limbert, Dan DeSanto

Introduction: The Open Book Environment (OBE) Dashboard is introduced as a pioneering tool for fostering transparency and clarity in open access book publishing. In response to the growing need for accessible information among authors, librarians, and other stakeholders, the dashboard aggregates data from a wide range of publishers into a centralized platform.

Description of Program/Service: Employing a comprehensive set of criteria, including pricing for book processing charges (BPCs), licensing options, editorial quality statements, and self-archiving policies, the Dashboard evaluates publisher transparency. Through a color-coded system, it visually represents the degree of openness exhibited by each publisher, empowering authors to make informed decisions about where to publish their work.

Next Steps: Looking ahead, the Dashboard’s dynamic nature allows for continuous updates, facilitating its role as an agent for positive change within the scholarly publishing community. As a versatile resource, the OBE Dashboard holds promise in enhancing efficiency, transparency, and accountability in open access book publishing.

Open Book Environment (OBE) Dashboard: https://bit.ly/OBEdashboard
OBE Additions and Edits Form: https://bit.ly/OBEdashboardform
Zenodo Link: https://zenodo.org/records/13366056

URL : The Open Book Environment (OBE) Dashboard: A Tool for Increasing Publisher Transparency for Authors, Librarians, and the Scholarly Community

DOI : https://doi.org/10.31274/jlsc.18112

Open and impactful academic publishing

Authors : Rosaria Ciriminna, Giovanna Li Petri, Giuseppe Angellotti, Rafael Luque, Mario Pagliaro

Introduction

The advantages of self-archiving research articles on institutional repositories or personal academic websites are numerous and relevant for society and individual researchers. Yet, self-archiving has been adopted by a small minority of active scholars.

Methods

Aiming to further inform educational work on open and impactful academic publishing in the digital era, we posed selected questions to Stevan Harnad 30 years after his “subversive proposal” to maximize research impact by self-archiving scholarly articles in university-hosted or disciplinary online repositories to make published articles openly available.

Results and discussion

Self-archiving is even more needed today than when Professor Harnad first called for it, when the World Wide Web was in its infancy; OA academic publishing is a necessary but not sufficient condition for impactful research; and self-archiving on a personal academic website is often more effective than self-archiving in an institutional repository.

URL : Open and impactful academic publishing

DOI : https://doi.org/10.3389/frma.2025.1544965

Evaluating the predictive capacity of ChatGPT for academic peer review outcomes across multiple platforms

Authors : Mike Thelwall, Abdallah Yaghi

Academic peer review is at the heart of scientific quality control, yet the process is slow and time-consuming. Technology that can predict peer review outcomes may help with this, for example by fast-tracking desk rejection decisions. While previous studies have demonstrated that Large Language Models (LLMs) can predict peer review outcomes to some extent, this paper introduces two new contexts and employs a more robust method—averaging multiple ChatGPT scores.

Averaging 30 ChatGPT predictions, based on reviewer guidelines and using only the submitted titles and abstracts, failed to predict peer review outcomes for F1000Research (Spearman’s rho = 0.00). However, it produced mostly weak positive correlations with the quality dimensions of SciPost Physics (rho = 0.25 for validity, rho = 0.25 for originality, rho = 0.20 for significance, and rho = 0.08 for clarity) and a moderate positive correlation for papers from the International Conference on Learning Representations (ICLR) (rho = 0.38). Including article full texts increased the correlation for ICLR (rho = 0.46) and slightly improved it for F1000Research (rho = 0.09), with variable effects on the four quality dimension correlations when SciPost Physics LaTeX full texts were included.
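
To make the method concrete, here is a minimal sketch of the score-averaging approach: repeatedly ask an LLM to rate a title and abstract, average the scores, and correlate the averages with known peer review outcomes via Spearman's rho. The prompt wording, the 1-5 scale, and the "gpt-4o-mini" model name are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of the score-averaging approach: ask an LLM to rate a
# submission repeatedly, average the scores, then correlate the averages
# with known peer review outcomes via Spearman's rho.
# Assumed for illustration (not from the paper): the OpenAI Python client,
# the prompt wording, a 1-5 scale, and the "gpt-4o-mini" model name.
import re
from statistics import mean

from openai import OpenAI
from scipy.stats import spearmanr

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def score_paper(title: str, abstract: str, n_runs: int = 30) -> float:
    """Return the mean of n_runs model-assigned quality scores."""
    prompt = (
        "Acting as a journal reviewer, rate the overall quality of this "
        "submission on a scale of 1 (reject) to 5 (excellent). "
        f"Reply with a single number.\n\nTitle: {title}\n\nAbstract: {abstract}"
    )
    scores = []
    for _ in range(n_runs):
        reply = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        match = re.search(r"\d+(\.\d+)?", reply.choices[0].message.content)
        if match:
            scores.append(float(match.group()))
    return mean(scores) if scores else float("nan")


def correlate(papers, outcomes):
    """papers: list of (title, abstract); outcomes: matching human scores."""
    predictions = [score_paper(title, abstract) for title, abstract in papers]
    rho, p_value = spearmanr(predictions, outcomes)
    return rho, p_value
```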

The use of simple chain-of-thought system prompts slightly increased the correlation for F1000Research (rho = 0.10), marginally reduced it for ICLR (rho = 0.37), and further decreased it for SciPost Physics (rho = 0.16 for validity, rho = 0.18 for originality, rho = 0.18 for significance, and rho = 0.05 for clarity). Overall, the results suggest that in some contexts, ChatGPT can produce weak pre-publication quality predictions.

However, the effectiveness of such predictions and the optimal strategies for producing them vary considerably between platforms, journals, and conferences. Finally, the most suitable inputs for ChatGPT appear to differ depending on the platform.

URL : Evaluating the predictive capacity of ChatGPT for academic peer review outcomes across multiple platforms

DOI : https://doi.org/10.1007/s11192-025-05287-1

Open access, open infrastructures, and their funding: Learning from histories to more effectively enhance diamond OA ecologies for books

Authors : Kira Hopkins, Kevin Sanders

The decade since the “Bottlenecks in the Open Access System” special issue of JLSC in 2014 has been an expansive one for open access (OA) and OA books in particular. The creation of a scholarly publishing ecosystem that enables works to be freely accessible for readers has been successful in many ways.

However, the underlying politics and economics of OA scholarly publishing often remain opaque or under-interrogated (Lawson et al., 2015). The problems with journal OA funding, specifically regarding inequality of access to publishing, discussed by Bonaccorso et al. (2014) in their contribution to that issue, have also increased and become entrenched as we discuss below.

This entrenchment has occurred largely through the growth and consolidation of gold OA, “transformative” agreements, and read-and-publish journal deals, which have effectively, and unnecessarily, commodified OA publications. We would argue that this is in direct tension with some of the foundations of contemporary OA.

OA was explicitly described from early principles as not a business model and as aiming to reduce financial barriers for authors, libraries, and other groups (Suber, 2024). We would like to note that, while the main focus of this paper is books, we begin with a discussion of journals. This is because we focus on the history, development, and critiques of OA funding in the ten years since the “Bottlenecks” special issue.

OA journal publishing has been at the forefront of discussions of OA funding and has dominated the last decade and more of this discussion; it would therefore be remiss of us not to discuss this history, the resulting landscape of inequity, and the potential ramifications if this inequity were transferred to OA books, a still-nascent field.

URL : Open access, open infrastructures, and their funding: Learning from histories to more effectively enhance diamond OA ecologies for books

DOI : https://doi.org/10.31274/jlsc.18284

Knowledge Production and Intellectual Property: A Perspective on Scientific Publications in the Capitalist System

Author : Sofia Guilhem Basilio

The digital revolution has reshaped the production, dissemination, and accessibility of scientific knowledge. However, capitalist logic persists, commodifying intellectual labour and concentrating market power within a few mega-publishers.

This article critically examines scientific publishing through the lens of Marx’s theory of value, focusing on intellectual property rent as a mechanism of capital accumulation.

By highlighting the Brazilian higher education system – where public resources are redirected to private publishers via paywalls and Article Processing Charges (APCs) – the paper exposes the contradictions of contemporary academic publishing.

It critiques the dual exploitation of researchers as producers and consumers of knowledge and argues for alternative, equitable models like Open Access. Situating the analysis within global and local contexts, the article advocates for the democratisation of scientific knowledge as a form of resistance to commodification and privatisation.

URL : Knowledge Production and Intellectual Property: A Perspective on Scientific Publications in the Capitalist System

DOI : https://doi.org/10.31269/triplec.v23i1.1520

Improving peer review of systematic reviews and related review types by involving librarians and information specialists as methodological peer reviewers: a randomised controlled trial

Authors : Melissa L Rethlefsen, Sara Schroter, Lex M Bouter, Jamie J Kirkham, David Moher, Ana Patricia Ayala, David Blanco, Tara J Brigham, Holly K Grossetta Nardini, Shona Kirtley, Kate Nyhan, Whitney Townsend, Maurice Zeegers

Objective

To evaluate the impact of adding librarians and information specialists (LIS) as methodological peer reviewers to the formal journal peer review process on the quality of search reporting and risk of bias in systematic review searches in the medical literature.

Design

Pragmatic two-group parallel randomised controlled trial.

Setting

Three biomedical journals.

Participants

Systematic reviews and related evidence synthesis manuscripts submitted to The BMJ, BMJ Open and BMJ Medicine and sent out for peer review from 3 January 2023 to 1 September 2023. Randomisation (allocation ratio 1:1) was stratified by journal and used permuted blocks (block size = 4). Of 2670 manuscripts sent to peer review during study enrolment, 400 met inclusion criteria and were randomised (62 The BMJ, 334 BMJ Open, 4 BMJ Medicine). By 2 January 2024, 76 manuscripts in the intervention group and 90 in the control group had been revised and resubmitted.
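
For readers unfamiliar with the allocation scheme, the following is a minimal sketch of stratified permuted-block randomisation with a 1:1 ratio and block size 4; the class and variable names are illustrative, not taken from the trial protocol.

```python
# Minimal sketch of stratified permuted-block randomisation (1:1, block
# size 4): each journal (stratum) draws from its own shuffled blocks, so
# the two arms stay balanced within every journal after each full block.
# Class and variable names are illustrative, not from the trial protocol.
import random
from collections import defaultdict

BLOCK = ["intervention", "intervention", "control", "control"]


class BlockRandomiser:
    def __init__(self, seed=None):
        self.rng = random.Random(seed)
        self.queues = defaultdict(list)  # one allocation queue per stratum

    def allocate(self, stratum: str) -> str:
        """Return the next allocation for a manuscript in this stratum."""
        queue = self.queues[stratum]
        if not queue:  # start a fresh permuted block
            block = BLOCK.copy()
            self.rng.shuffle(block)
            queue.extend(block)
        return queue.pop(0)


randomiser = BlockRandomiser(seed=42)
for journal in ["The BMJ", "BMJ Open", "BMJ Open", "BMJ Medicine"]:
    print(journal, "->", randomiser.allocate(journal))
```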

Interventions

All manuscripts followed usual journal practice for peer review, but those in the intervention group had an additional LIS peer reviewer invited.

Main outcome measures

The primary outcomes were the differences between the intervention and control groups in the quality of reporting and the risk of bias in first revision manuscripts. Quality of reporting was measured using four prespecified PRISMA-S items. Risk of bias was measured using ROBIS Domain 2. Assessments were done in duplicate, and assessors were blinded to group allocation. Secondary outcomes included differences between groups for each individual PRISMA-S and ROBIS Domain 2 item. An additional outcome was the difference between groups in the proportion of manuscripts rejected as the first decision after peer review.

Results

Neither the proportion of adequately reported searches (difference 4.4%, 95% CI −2.0% to 10.7%) nor the risk of bias in searches (difference 0.5%, 95% CI −13.7% to 14.6%) differed significantly between groups. By 4 months post-study, 98 intervention and 70 control group manuscripts had been rejected after peer review (difference 13.8%, 95% CI 3.9% to 23.8%).
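
For orientation, a difference in proportions and its 95% confidence interval of the kind reported here can be computed with a normal (Wald) approximation, sketched below; the per-group denominators in the example are assumed (the abstract does not give them at this stage), and the trial may have used a different interval method.

```python
# Minimal sketch: normal-approximation (Wald) 95% CI for a difference in
# two proportions, the kind of estimate reported above. The denominators
# below are assumed for illustration; the abstract does not give per-group
# sizes at this stage, and the trial may have used a different method.
from math import sqrt


def diff_proportions_ci(x1, n1, x2, n2, z=1.96):
    """Difference p1 - p2 with a Wald z-interval."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se


# e.g. 98 of 200 rejections vs 70 of 200 (group sizes assumed, not reported)
diff, low, high = diff_proportions_ci(98, 200, 70, 200)
print(f"difference = {diff:.1%}, 95% CI {low:.1%} to {high:.1%}")
```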

Conclusions

Inviting LIS peer reviewers did not impact adequate reporting or risk of bias of searches in first revision manuscripts of biomedical systematic reviews and related review types, though LIS peer reviewers may have contributed to a higher rate of rejection after peer review.

URL : Improving peer review of systematic reviews and related review types by involving librarians and information specialists as methodological peer reviewers: a randomised controlled trial

DOI : https://doi.org/10.1136/bmjebm-2024-113527

The Origins and Veracity of References ‘Cited’ by Generative Artificial Intelligence Applications: Implications for the Quality of Responses

Author : Dirk H. R. Spennemann

The public release of ChatGPT in late 2022 generated considerable publicity and widespread discussion of the usefulness and capabilities of generative artificial intelligence (AI) language models. Its ability to extract and summarise data from textual sources and present them as human-like contextual responses makes it an eminently suitable tool for answering questions users might ask.

Expanding on a previous analysis of the capabilities of ChatGPT3.5, this paper tested which archaeological literature appears to have been included in the training data of three recent generative AI language models: ChatGPT4o, ScholarGPT, and DeepSeek R1. While ChatGPT3.5 offered seemingly pertinent references, a large percentage proved to be fictitious. The more recent ScholarGPT, purportedly tailored towards academic needs, performed much better, but it still offered a high rate of fictitious references compared to the general models ChatGPT4o and DeepSeek R1.

Using ‘cloze’ analysis to make inferences about the sources ‘memorized’ by a generative AI model, this paper was unable to prove that any of the four genAI models had perused the full texts of the genuine references. It can be shown, however, that all references provided by ChatGPT and other OpenAI models, as well as DeepSeek, that were found to be genuine have also been cited on Wikipedia pages.
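
A cloze probe of this kind can be sketched as follows: hide part of a genuine reference, ask the model to complete it, and treat a verbatim completion as weak evidence of memorization. The model name, prompt, and match criterion below are illustrative assumptions, not the paper's protocol.

```python
# Minimal sketch of a cloze-style memorization probe: hide the tail of a
# genuine reference, ask the model to complete it, and treat a verbatim
# completion as weak evidence that the text was seen during training.
# The model name, prompt, and match criterion are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def cloze_probe(reference: str, visible_fraction: float = 0.6) -> bool:
    """True if the model reproduces the hidden tail of the reference."""
    cut = int(len(reference) * visible_fraction)
    visible, hidden = reference[:cut], reference[cut:]
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": "Complete this bibliographic reference exactly, "
                       "with no commentary: " + visible,
        }],
    )
    completion = reply.choices[0].message.content or ""
    return hidden.strip().lower() in completion.lower()
```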

This strongly indicates that the source base for at least some, if not most, of the data is found in those pages and thus represents, at best, third-hand source material. The paper discusses the significant implications this has for the quality of the data available to generative AI models when shaping their answers.

URL : The Origins and Veracity of References ‘Cited’ by Generative Artificial Intelligence Applications: Implications for the Quality of Responses

DOI : https://doi.org/10.3390/publications13010012