Global insights: ChatGPT’s influence on academic and research writing, creativity, and plagiarism policies

Authors : Muhammad Abid Malik, Amjad Islam Amjad, Sarfraz Aslam, Abdulnaser Fakhrou

Introduction: The current study explored the influence of Chat Generative Pre-Trained Transformer (ChatGPT) on the concepts, parameters, policies, and practices of creativity and plagiarism in academic and research writing.

Methods: Data were collected from 10 researchers from 10 different countries (Australia, China, the UK, Brazil, Pakistan, Bangladesh, Iran, Nigeria, Trinidad and Tobago, and Türkiye) using semi-structured interviews. NVivo was employed for data analysis.

Results: Based on the responses, five themes about the influence of ChatGPT on academic and research writing were generated, i.e., opportunity, human assistance, thought-provoking, time-saving, and negative attitude. Although the researchers were mostly positive about it, some feared it would degrade their writing skills and lead to plagiarism. Many of them believed that ChatGPT would redefine the concepts, parameters, and practices of creativity and plagiarism.

Discussion: Creativity may no longer be restricted to the ability to write, but may also encompass the ability to use ChatGPT or other large language models (LLMs) to write creatively. Some suggested that machine-generated text might be accepted as the new norm; however, using it without proper acknowledgment would be considered plagiarism. The researchers recommended allowing ChatGPT for academic and research writing; however, they strongly advised that its use be regulated, limited, and properly acknowledged.

DOI : https://doi.org/10.3389/frma.2024.1486832

Peer Reviews of Peer Reviews: A Randomized Controlled Trial and Other Experiments

Authors : Alexander Goldberg, Ivan Stelmakh, Kyunghyun Cho, Alice Oh, Alekh Agarwal, Danielle Belgrave, Nihar B. Shah

Is it possible to reliably evaluate the quality of peer reviews? We study this question driven by two primary motivations: incentivizing high-quality reviewing based on the assessed quality of reviews, and measuring changes in review quality in experiments. We conduct a large-scale study at the NeurIPS 2022 conference, a top-tier conference in machine learning, in which we invited (meta-)reviewers and authors to evaluate the reviews given to submitted papers.

First, we conduct a randomized controlled trial (RCT) to examine bias due to review length. We generate elongated versions of reviews by adding substantial amounts of non-informative content. Participants in the control group evaluate the original reviews, whereas participants in the experimental group evaluate the artificially lengthened versions.

We find that the lengthened reviews are scored as (statistically significantly) higher in quality than the original reviews. In an analysis of observational data, we find that authors are positively biased towards reviews that recommend acceptance of their own papers, even after controlling for confounders such as review length, review quality, and the number of papers per author.

We also measure disagreement rates of 28%–32% between multiple evaluations of the same review, which is comparable to the disagreement rates between paper reviewers at NeurIPS. Further, we assess the amount of miscalibration among evaluators of reviews using a linear model of quality scores and find that it is similar to estimates of miscalibration of paper reviewers at NeurIPS.

Finally, we estimate the amount of variability in subjective opinions around how to map individual criteria to overall scores of review quality and find that it is roughly the same as that in the review of papers. Our results suggest that the various problems that exist in reviews of papers — inconsistency, bias towards irrelevant factors, miscalibration, subjectivity — also arise in reviewing of reviews.
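
To make the length-bias experiment concrete, the sketch below runs a simple two-sample comparison between quality scores given to original reviews and to artificially lengthened ones. The scores are invented for illustration, and the permutation test is only one reasonable analysis choice, not necessarily the one used in the paper.

```python
# Minimal sketch (not the authors' analysis pipeline): a permutation test
# comparing quality scores in the control arm (original reviews) and the
# experimental arm (artificially lengthened reviews). All scores are invented.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-5 quality scores from the two arms of the experiment.
control = np.array([3, 4, 2, 3, 4, 3, 2, 4, 3, 3])
treatment = np.array([4, 4, 3, 4, 5, 3, 4, 4, 3, 4])

observed_diff = treatment.mean() - control.mean()

# Permutation test: shuffle the arm labels and recompute the mean difference.
pooled = np.concatenate([control, treatment])
n_control = len(control)
perm_diffs = []
for _ in range(10_000):
    rng.shuffle(pooled)
    perm_diffs.append(pooled[n_control:].mean() - pooled[:n_control].mean())

p_value = np.mean(np.abs(perm_diffs) >= abs(observed_diff))
print(f"mean difference = {observed_diff:.2f}, two-sided p = {p_value:.3f}")
```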

arXiv : https://arxiv.org/abs/2311.09497

Enhancing peer review efficiency: A mixed-methods analysis of artificial intelligence-assisted reviewer selection across academic disciplines

Author : Shai Farber

This mixed-methods study evaluates the efficacy of artificial intelligence (AI)-assisted reviewer selection in academic publishing across diverse disciplines. Twenty journal editors assessed AI-generated reviewer recommendations for a manuscript. The AI system achieved a 42% overlap with editors’ selections and demonstrated a significant improvement in time efficiency, reducing selection time by 73%.

Editors found that 37% of AI-suggested reviewers who were not part of their initial selection were indeed suitable. The system’s performance varied across disciplines, with higher accuracy in STEM fields (Cohen’s d = 0.68). Qualitative feedback revealed an appreciation for the AI’s ability to identify lesser-known experts but concerns about its grasp of interdisciplinary work. Ethical considerations, including potential algorithmic bias and privacy issues, were highlighted.
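
The two headline statistics above (the overlap with editors' selections and the STEM effect size) can be illustrated with a short sketch. All reviewer names and accuracy values below are hypothetical; this is not the study's data or code.

```python
# Illustrative sketch only: how an overlap percentage and a Cohen's d for the
# STEM vs. non-STEM accuracy difference could be computed. Data are invented.
import numpy as np

# Overlap between AI-recommended reviewers and the editor's own shortlist.
ai_suggested = {"Reviewer A", "Reviewer B", "Reviewer C", "Reviewer D", "Reviewer E"}
editor_picked = {"Reviewer B", "Reviewer D", "Reviewer F", "Reviewer G", "Reviewer H"}
overlap = len(ai_suggested & editor_picked) / len(editor_picked)
print(f"overlap with editor selections: {overlap:.0%}")

# Cohen's d for the difference in AI accuracy between STEM and non-STEM editors.
stem = np.array([0.55, 0.60, 0.48, 0.52, 0.58])       # hypothetical accuracy per editor
non_stem = np.array([0.40, 0.45, 0.38, 0.42, 0.44])
pooled_sd = np.sqrt(((stem.var(ddof=1) * (len(stem) - 1)) +
                     (non_stem.var(ddof=1) * (len(non_stem) - 1))) /
                    (len(stem) + len(non_stem) - 2))
cohens_d = (stem.mean() - non_stem.mean()) / pooled_sd
print(f"Cohen's d (STEM vs. non-STEM): {cohens_d:.2f}")
```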

The study concludes that while AI shows promise in enhancing reviewer selection efficiency and broadening the reviewer pool, it requires human oversight to address limitations in understanding nuanced disciplinary contexts. Future research should focus on larger-scale longitudinal studies and developing ethical frameworks for AI integration in peer-review processes.

DOI : https://doi.org/10.1002/leap.1638

Publish or perish? Innovative models for scholarly publishing in Zimbabwe

Authors : Nomsa Chirisa, Mpho Ngoepe

Innovative publishing models have emerged to meet the demands of the ‘publish or perish’ philosophy prevalent in academic and scholarly circles. Publishing models serve as the operational blueprint underpinning the value and supply chains of products in the publishing industry, aligning operational plans, design strategies, and production methodologies with the overarching goal of scholarly publishing. The duty of scholarly publishers to advance knowledge and disseminate it widely necessitates their role in supporting researchers to meet the expectations of the ‘publish or perish’ culture.

This philosophy becomes even more critical in the endangered landscape of scholarly publishing in Africa, which is evidently perishing as researchers in the region face additional challenges in accessing reputable publishing outlets for their work. Zimbabwe has a low research publishing output; although it ranks second in southern Africa, it lags behind South Africa by an astounding 65%. This intensifies the pressure to publish in order to maintain visibility and credibility within the global academic community.

This paper thus examines the publishing models that scholarly publishers in Zimbabwe use to publish scholarly works. Qualitative data were collected using a Delphi technique design, with publishing experts interviewed over three rounds, and were triangulated with data from document analysis.

The key findings indicate open access, self-publishing, and collaborative publishing as effective market models for university presses. However, Zimbabwean universities are still lagging behind, as few have established university presses.

DOI : https://doi.org/10.1177/02666669241289916

The Costs of Open Access Publication: A Case Study at Catalan Universities

Authors : Ángel Borrego, Lluís Anglada

This article explores the financial dynamics of open access (OA) publication in Catalan universities by combining four data sources: publication data coupled with article processing charge (APC) estimates; information on journal subscriptions, transformative agreements and APC payments made by universities; acknowledgements of APC funding sources in OA scholarly outputs; and a survey of authors.

The findings reveal a consistent increase in OA publication across Catalan universities, with 60% of the articles indexed in the Web of Science being published in either gold or hybrid OA in 2022. In parallel, investment in the research publishing system shows an upward trend. Resources allocated to journal subscription licenses have been redirected towards transformative agreements, leading to a rise in hybrid OA publications. Additional budget allocations have been made to accommodate APCs for gold OA journals.

Authors employ varied funding sources for gold and hybrid OA, with university funding programmes and research grants commonly facilitating gold OA, while transformative agreements often support hybrid OA. Authors associated with Catalan universities frequently benefit from funding schemes and transformative agreements that are accessible to their coauthors.

However, survey responses underscore the multifaceted nature of researchers’ financial support, including personal assets and waivers. Authors express frustration with the evolving OA landscape, particularly concerning the exorbitant publication fees.

Nevertheless, the allure of high-impact journals and expedited peer review processes continues to incentivize authors towards gold OA. Researchers voice concerns regarding the lack of equitable funding programmes and potential conflicts of interest within gold OA models, which signals the risk of compromising peer review integrity to prioritize profits.

This study underscores the need for further research to deepen our understanding of scholarly publishing expenditure and inform strategies for fostering a sustainable, equitable OA ecosystem.

DOI : https://doi.org/10.53377/lq.19069

Open access publications drive few visits from Google Search results to institutional repositories

Authors : Enrique Orduña-Malea, Cristina I. Font-Julián

Given the importance of Google Search in generating visits to institutional repositories (IR), a lack of visibility in search engine results pages can hinder the possibility of their publications being found, read, downloaded, and, eventually, cited.

To address this, institutions need to evaluate the visibility of their repositories to determine what actions might be implemented to enhance it. However, measuring the search engine optimization (SEO) visibility of IRs requires a highly accurate, technically feasible method. This study constitutes the first attempt to design such a method, applied here to measuring the Google Search visibility of the IRs of Spain's national university system using a set of SEO metrics derived from the Ubersuggest SEO tool.

A comprehensive dataset spanning three months and comprising 217,589 bibliographic records and 316,899 organic keywords is used as a baseline. Our findings show that many records deposited in these repositories are not ranked among the top positions in Google Search results, and that the most visible records are mainly academic works (theses and dissertations) written in Spanish in the Humanities and Social Sciences.

However, most visits are generated by a small number of records. All in all, our results call into question the role played by IRs in attracting readers via Google Search to the institutions’ scientific heritage and serve to underscore the prevailing emphasis within IRs on preservation as opposed to online dissemination.
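
As a rough illustration of the kind of aggregation such an SEO-based method involves, the sketch below computes two simple indicators from a hypothetical export of organic-keyword data: the share of records with at least one top-10 Google ranking, and how concentrated estimated visits are among a few records. The file name and column names are assumptions, not the study's actual dataset or metrics.

```python
# Rough sketch of an SEO-visibility aggregation, assuming a CSV export with one
# row per (record, organic keyword) pair and columns "record_id", "position"
# (Google rank) and "estimated_visits". All names here are hypothetical.
import pandas as pd

df = pd.read_csv("repository_keywords.csv")

# Share of records that rank in the top 10 results for at least one keyword.
top10_records = df.loc[df["position"] <= 10, "record_id"].nunique()
share_top10 = top10_records / df["record_id"].nunique()
print(f"records with at least one top-10 ranking: {share_top10:.1%}")

# Concentration of traffic: share of total estimated visits captured by the
# 1% of records that attract the most visits.
visits = df.groupby("record_id")["estimated_visits"].sum().sort_values(ascending=False)
top_1pct = max(1, int(len(visits) * 0.01))
print(f"share of visits from top 1% of records: {visits.iloc[:top_1pct].sum() / visits.sum():.1%}")
```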

Potential improvements might be achieved using enhanced metadata schemes and normalized description practices, as well as by adopting other actionable insights that can strengthen the online visibility of IRs.

This study increases understanding of the role played by web indicators in assessing the web-based impact of research outputs deposited in IRs, and should be of particular interest for a range of stakeholders, including open access and open science advocates, research agencies, library practitioners, repository developers, and website administrators.

DOI : https://doi.org/10.1007/s11192-024-05175-0

The use of ChatGPT for identifying disruptive papers in science: a first exploration

Authors : Lutz Bornmann, Lingfei Wu, Christoph Ettl

ChatGPT has arrived in quantitative research evaluation. With the exploration in this Letter to the Editor, we would like to widen the spectrum of possible uses of ChatGPT in bibliometrics by applying it to the identification of disruptive papers.

The identification of disruptive papers using publication and citation counts has become a popular topic in scientometrics. The disadvantage of this quantitative approach is its computational complexity. ChatGPT might offer an easy-to-use alternative.
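
The computational burden referred to here comes from disruption measures such as the CD index, which require examining every paper that cites a focal paper or its references. Below is a minimal sketch of that index on a toy citation graph; the papers and citation links are invented, and real computations run over full citation networks.

```python
# Minimal sketch of the CD (disruption) index for a single focal paper,
# computed from a toy citation graph. All papers and links are made up.

def cd_index(focal_refs: set[str], citing_papers: dict[str, set[str]], focal: str) -> float:
    """CD index of `focal`: +1 for papers citing only the focal paper,
    -1 for papers citing both the focal paper and its references, 0 for
    papers citing only the references, averaged over all such papers."""
    terms = []
    for refs in citing_papers.values():
        cites_focal = focal in refs
        cites_refs = bool(refs & focal_refs)
        if cites_focal or cites_refs:
            terms.append(-2 * cites_focal * cites_refs + cites_focal)
    return sum(terms) / len(terms) if terms else 0.0

# Toy example: the focal paper F cites R1 and R2; five later papers cite various things.
focal_refs = {"R1", "R2"}
citing = {
    "P1": {"F"},        # cites only the focal paper  -> +1 (disruptive signal)
    "P2": {"F", "R1"},  # cites focal and a reference -> -1 (consolidating signal)
    "P3": {"R2"},       # cites only a reference      ->  0
    "P4": {"F"},        # +1
    "P5": {"F", "R2"},  # -1
}
print(cd_index(focal_refs, citing, focal="F"))  # (1 - 1 + 0 + 1 - 1) / 5 = 0.0
```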

DOI : https://doi.org/10.1007/s11192-024-05176-z