Does it pay to pay? A comparison of the benefits of open-access publishing across various sub-fields in biology

Authors : Amanda D. Clark, Tanner C. Myers, Todd D. Steury, Ali Krzton et al.

Authors are often faced with the decision of whether to maximize traditional impact metrics or minimize costs when choosing where to publish the results of their research. Many subscription-based journals now offer the option of paying an article processing charge (APC) to make their work open.

Though such “hybrid” journals make research more accessible to readers, their APCs often come with high price tags and can exclude authors who lack the capacity to pay to make their research accessible.

Here, we tested whether paying to publish open access in a subscription-based journal benefited authors by conferring more citations relative to closed access articles. We identified 146,415 articles published in 152 hybrid journals in the field of biology from 2013–2018 and compared the number of citations across various types of open access and closed access articles.

In a simple generalized linear model analysis of our full dataset, we found that publishing open access in hybrid journals confers on authors an average advantage of 17.8 citations compared to closed access articles in similar journals.

After taking into account the number of authors, Journal Citation Reports 2020 Quartile, year of publication, and Web of Science category, we still found that open access generated significantly more citations than closed access (p < 0.0001).
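As a rough illustration of the kind of model described here, a citation-count GLM with these covariates could be specified as in the following sketch using Python's statsmodels. The input file, column names, and negative binomial error family are assumptions for illustration only; the abstract does not state the authors' exact specification.

```python
# Minimal sketch (not the authors' code) of a citation-count GLM with the
# covariates mentioned in the abstract: access type, number of authors,
# JCR quartile, publication year, and Web of Science category.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical article-level table: one row per article.
df = pd.read_csv("hybrid_journal_articles.csv")

model = smf.glm(
    "citations ~ C(access_type) + n_authors + C(jcr_quartile) + C(pub_year) + C(wos_category)",
    data=df,
    family=sm.families.NegativeBinomial(),  # assumed; a common choice for over-dispersed citation counts
).fit()
print(model.summary())  # coefficients for each access type relative to the baseline category
```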

However, the results were complex: the exact differences in citation rates among access types depended on these other variables. The citation advantage for open access held even when comparing open and closed access articles published in the same issue of a journal (p < 0.0001).

However, by examining articles where the authors paid an article processing charge, we found that cost itself was not predictive of citation rates (p = 0.14). Based on our findings of access type and other model parameters, we suggest that, in the case of the 152 journals we analyzed, paying for open access does confer a citation advantage.

For authors with limited budgets, we recommend pursuing open access alternatives that do not require paying a fee as they still yielded more citations than closed access. For authors who are considering where to submit their next article, we offer additional suggestions on how to balance exposure via citations with publishing costs.

URL : Does it pay to pay? A comparison of the benefits of open-access publishing across various sub-fields in biology

DOI : https://doi.org/10.7717/peerj.16824

The Nexus of Open Science and Innovation: Insights from Patent Citations

Author : Abdelghani Maddi

This paper aims to analyze the extent to which inventive activity relies on open science. In other words, it investigates whether inventors utilize Open Access (OA) publications more than subscription-based ones, especially given that some inventors may lack institutional access.

To achieve this, we utilized the Marx (2023) database, which contains citations from patents to scientific publications (Non-Patent References, NPRs). We focused on publications closely related to invention, specifically those cited solely by inventors within the body of patent texts. Our dataset was supplemented with OpenAlex data.

The final sample comprised 961,104 publications cited in patents, of which 861,720 had a DOI. Results indicate that across all disciplines, OA publications are 38% more prevalent in patent citations (NPRs) than in the overall OpenAlex database.

In biology and medicine, inventors use 73% and 27% more OA publications, respectively, compared to closed-access ones. Chemistry and computer science are also disciplines where OA publications are more frequently utilized in patent contexts than subscription-based ones.

HAL : https://cnrs.hal.science/hal-04454843

Is gold open access helpful for academic purification? A causal inference analysis based on retracted articles in biochemistry

Authors : Er-Te Zheng, Zhichao Fang, Hui-Zhen Fu

The relationship between transparency and credibility has long been a subject of theoretical and analytical exploration within the realm of social sciences, and it has recently attracted increasing attention in the context of scientific research. Retraction serves as a pivotal mechanism in addressing concerns about research integrity.

This study aims to empirically examine the relationship between the level of open access and the effectiveness of the current academic purification mechanism, centered on retracted articles. We used matching and Difference-in-Differences (DiD) methods to examine whether gold open access aids academic purification in the field of biochemistry.

We collected gold open access (Gold OA) and non-open access (non-OA) retracted articles in biochemistry published from 2005 to 2021 as the treatment group and matched them with corresponding unretracted articles as the control group, based on the Web of Science and the Retraction Watch database.
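A minimal sketch of the difference-in-differences comparison described above is shown below; it is not the authors' code, and the panel file, variable names, and yearly-citation outcome are illustrative assumptions layered on top of the matched treatment and control groups.

```python
# Difference-in-differences sketch on a hypothetical long-format panel:
# one row per article-year, with indicators for retracted articles
# (treatment) and for years after the retraction date (post).
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("matched_biochemistry_panel.csv")

# Classic two-way interaction: the coefficient on retracted:post is the
# DiD estimate of how retraction changes yearly citations relative to
# the matched unretracted controls.
did = smf.ols("yearly_citations ~ retracted * post", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["article_id"]}
)
print(did.summary())

# Fitting the same model separately for Gold OA and non-OA articles (or adding
# a three-way interaction with an is_gold_oa indicator) contrasts the two groups.
```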

The results showed that compared to non-OA, Gold OA is advantageous in reducing the retraction time of flawed articles, but does not demonstrate a significant advantage in reducing citations after retraction. This indicates that Gold OA may help expedite the detection and retraction of flawed articles, ultimately promoting the practice of responsible research.

DOI : https://doi.org/10.1016/j.ipm.2023.103640

How many authors are (too) many? A retrospective, descriptive analysis of authorship in biomedical publications

Authors : Martin Jakab, Eva Kittl, Tobias Kiesslich

Publishing in academic journals is the primary means of disseminating research findings, with authorship reflecting a scientist’s contribution, yielding academic recognition, and carrying significant financial implications. The number of authors per article has risen consistently in recent decades, as demonstrated across various journals and fields.

This study is a comprehensive analysis of authorship trends in biomedical papers from the NCBI PubMed database between 2000 and 2020, utilizing the Entrez Direct (EDirect) E-utilities to retrieve bibliometric data from a dataset of 17,015,001 articles. For all publication types, the mean author number per publication significantly increased over the last two decades from 3.99 to 6.25 (+ 57%, p < 0.0001) following a linear trend (r2 = 0.99) with an average relative increase of 2.28% per year.
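The authors retrieved their bibliometric data with the command-line EDirect tools; the short sketch below uses Biopython's Entrez module to issue the same kind of E-utilities query against PubMed. The query term, small sample size, and email placeholder are illustrative assumptions, not the study's actual pipeline, which processed all ~17 million records.

```python
from Bio import Entrez  # Biopython wrapper around the NCBI E-utilities

Entrez.email = "your.name@example.org"  # NCBI requires a contact address

def mean_author_count(year, sample_size=200):
    """Rough estimate of the mean number of authors per PubMed record
    in a given year, based on a small sample (illustrative only)."""
    handle = Entrez.esearch(
        db="pubmed",
        term=f"journal article[pt] AND {year}[pdat]",
        retmax=sample_size,
    )
    ids = Entrez.read(handle)["IdList"]
    handle.close()

    handle = Entrez.esummary(db="pubmed", id=",".join(ids))
    summaries = Entrez.read(handle)
    handle.close()

    counts = [len(rec.get("AuthorList", [])) for rec in summaries]
    return sum(counts) / len(counts) if counts else float("nan")

print(mean_author_count(2000), mean_author_count(2020))
```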

This increase was largest for clinical trials (+5.67 authors per publication, +97%) and smallest for case reports (+1.01 authors, +24%). The proportion of single/solo authorships dropped by a factor of about three, from 17.03% in 2000 to 5.69% in 2020. The percentage of publications with eleven or more authors increased ~7-fold, ~11-fold, and ~12-fold for reviews, editorials, and systematic reviews, respectively. Confirming prior findings, this study highlights the escalating number of authors in biomedical publications.

Given the potential for unethical practices, preserving authorship as a trustworthy indicator of scientific performance is critical. Understanding and curbing questionable authorship practices and authorship inflation is imperative, as discussed here in light of the relevant literature.

URL : How many authors are (too) many? A retrospective, descriptive analysis of authorship in biomedical publications

DOI : https://doi.org/10.1007/s11192-024-04928-1

Comparison of effect estimates between preprints and peer-reviewed journal articles of COVID-19 trials

Authors : Mauricia Davidson, Theodoros Evrenoglou, Carolina Graña, Anna Chaimani, Isabelle Boutron

Background

Preprints are increasingly used to disseminate research results, providing multiple sources of information for the same study. We assessed the consistency in effect estimates between preprint and subsequent journal article of COVID-19 randomized controlled trials.

Methods

The study utilized data from the COVID-NMA living systematic review of pharmacological treatments for COVID-19 (covid-nma.com) up to July 20, 2022. We identified randomized controlled trials (RCTs) evaluating pharmacological treatments vs. standard of care/placebo for patients with COVID-19 that were originally posted as preprints and subsequently published as journal articles.

Trials that did not report the same analysis in both documents were excluded. Data were extracted independently by pairs of researchers with consensus to resolve disagreements. Effect estimates extracted from the first preprint were compared to effect estimates from the journal article.

Results

The search identified 135 RCTs originally posted as a preprint and subsequently published as a journal article. We excluded 26 RCTs that did not meet the eligibility criteria, of which 13 RCTs reported an interim analysis in the preprint and a final analysis in the journal article. Overall, 109 preprint–article RCTs were included in the analysis.

The median (interquartile range) delay between preprint and journal article was 121 (73–187) days and the median sample size was 150 (71–464) participants; 76% of RCTs had been prospectively registered, 60% received industry or mixed funding, and 72% were multicentric trials. The overall risk of bias was rated as ‘some concerns’ for 80% of RCTs.

We found that 81 preprint–article pairs of RCTs were consistent for all outcomes reported. There were nine RCTs with at least one outcome with a discrepancy in the number of participants with outcome events or the number of participants analyzed, which yielded a minor change in the estimate of the effect. Furthermore, six RCTs had at least one outcome missing in the journal article and 14 RCTs had at least one outcome added in the journal article compared to the preprint. There was a change in the direction of effect in one RCT. No changes in statistical significance or conclusions were found.

Conclusions

Effect estimates were generally consistent between COVID-19 preprints and subsequent journal articles. The main results and interpretation did not change in any trial. Nevertheless, some outcomes were added or deleted in some journal articles.

URL : Comparison of effect estimates between preprints and peer-reviewed journal articles of COVID-19 trials

DOI : https://doi.org/10.1186/s12874-023-02136-8

More ethics in the laboratory, please! Scientists’ perspectives on ethics in the preclinical phase

Authors : Paola Buedo, Eugenia Prieto, Jolanta Perek-Białas, Idalina Odziemczyk-Stawarz, Marcin Waligora

In recent years there have been calls to improve ethics in preclinical research. Promoting ethics in preclinical research should consider the perspectives of scientists. Our study aims to explore researchers’ perspectives on ethics in the preclinical phase.

Using interviews and focus groups, we collected views on ethical issues in preclinical research from experienced (n = 11) and early-stage researchers (ESRs) (n = 14) working in a gene therapy and regenerative medicine consortium. A recurring theme among ESRs was the impact of health-related preclinical research on climate change.

They highlighted the importance of strengthening ethics in relations within the scientific community. Experienced researchers were focused on technicalities of methods used in preclinical research. They stressed the need for more safeguards to protect the sensitive personal data they work with.

Both groups drew attention to the importance of the social context of research and its social impact. They agreed that it is important to be socially responsible – to be aware of and be sensitive to the needs and views of society.

This study helps to identify key ethical challenges and, when combined with more data, can ultimately lead to informed and evidence-based improvements to existing regulations.

URL : More ethics in the laboratory, please! Scientists’ perspectives on ethics in the preclinical phase

DOI : https://doi.org/10.1080/08989621.2023.2294996

Clickbait or conspiracy? How Twitter users address the epistemic uncertainty of a controversial preprint

Authors : Mareike Bauer, Maximilian Heimstädt, Carlos Franzreb, Sonja Schimmler

Many scientists share preprints on social media platforms to gain attention from academic peers, policy-makers, and journalists. In this study we shed light on an unintended but highly consequential effect of sharing preprints: their contribution to conspiracy theories. Although the scientific community might quickly dismiss a preprint as insubstantial and ‘clickbaity’, its uncertain epistemic status nevertheless allows conspiracy theorists to mobilize the text as scientific support for their own narratives.

To better understand the epistemic politics of preprints on social media platforms, we studied the case of a biomedical preprint, which was shared widely and discussed controversially on Twitter in the wake of the coronavirus disease 2019 pandemic. Using a combination of social network analysis and qualitative content analysis, we compared the structures of engagement with the preprint and the discursive practices of scientists and conspiracy theorists.

We found that despite substantial engagement, scientists were unable to dampen the conspiracy theorists’ enthusiasm for the preprint. We further found that members from both groups not only tried to reduce the preprint’s epistemic uncertainty but sometimes deliberately maintained it.

The maintenance of epistemic uncertainty helped conspiracy theorists to reinforce their group’s identity as skeptics and allowed scientists to express concerns with the state of their profession.

Our study contributes to research on the intricate relations between scientific knowledge and conspiracy theories online, as well as the role of social media platforms for new genres of scholarly communication.

URL : Clickbait or conspiracy? How Twitter users address the epistemic uncertainty of a controversial preprint

DOI : https://doi.org/10.1177/20539517231180575