Authors : Kyle Siler, Koen Frenken
Open Access (OA) publishing has created new academic and economic niches in contemporary science. OA journals offer numerous publication outlets with varying editorial philosophies and business models.
This article analyzes the Directory of Open Access Journals (DOAJ) (N=12,127) to identify characteristics of OA academic journals related to the adoption of Article Processing Charge (APC)-based business models, as well as price points of journals that charge APCs. Journal Impact Factor (JIF), language, publisher mission, DOAJ Seal, economic and geographic regions of publishers, peer review duration and journal discipline are all significantly related to the adoption and pricing of journal APCs.
Even after accounting for other journal characteristics (prestige, discipline, publisher country), journals published by for-profit publishers charge the highest APCs. Journals with status endowments (JIF, DOAJ Seal), articles written in English, published in wealthier regions, and in medical or science-based disciplines are also relatively costlier.
The OA publishing market reveals insights into forces that create economic and academic value in contemporary science. Political and institutional inequalities manifest in the varying niches occupied by different OA journals and publishers.
URL : The Pricing of Open Access Journals: Diverse Niches and Sources of Value in Academic Publishing
DOI : https://doi.org/10.1162/qss_a_00016
Authors : Darwin Y. Fu, Jacob J Hughey
Preprints in biology are becoming more popular, but only a small fraction of the articles published in peer-reviewed journals have previously been released as preprints.
To examine whether releasing a preprint on bioRxiv was associated with the attention and citations received by the corresponding peer-reviewed article, we assembled a dataset of 74,239 articles, 5,405 of which had a preprint, published in 39 journals.
Using log-linear regression and random-effects meta-analysis, we found that articles with a preprint had, on average, a 49% higher Altmetric Attention Score and 36% more citations than articles without a preprint.
These associations were independent of several other article- and author-level variables (such as scientific subfield and number of authors), and were unrelated to journal-level variables such as access model and Impact Factor.
This observational study can help researchers and publishers make informed decisions about how to incorporate preprints into their work.
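The reported 49% and 36% figures follow from exponentiating log-linear regression coefficients. A minimal sketch of that conversion, where the coefficients 0.40 and 0.31 are back-calculated illustrations chosen to reproduce the reported effects, not values taken from the paper:

```python
import math

def percent_difference(coef):
    """Percent change in the outcome implied by a log-linear
    regression coefficient: exp(coef) is the multiplicative effect."""
    return (math.exp(coef) - 1) * 100

# Hypothetical coefficients for a binary "has preprint" predictor:
print(round(percent_difference(0.40)))  # 49  (~49% higher attention score)
print(round(percent_difference(0.31)))  # 36  (~36% more citations)
```

Because the outcome is modeled on a log scale, a coefficient translates to a percentage difference rather than an additive one, which is why the study reports multiplicative effects.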
URL : https://elifesciences.org/articles/52646
Authors : Yasar Tonta, Muge Akbulut
One of the main indicators of scientific development of a given country is the number of papers published in high impact scholarly journals. Many countries introduced performance-based research funding systems (PRFSs) to create a more competitive environment where prolific researchers get rewarded with subsidies to increase both the quantity and quality of papers.
Yet, subsidies do not always function as a leverage to improve the citation impact of scholarly papers. This paper investigates the effect of the publication support system of Turkey (TR) on the citation impact of papers authored by Turkish researchers.
Based on a stratified probabilistic sample of 4,521 TR-addressed papers, it compares citation counts to determine whether supported papers were cited more often than unsupported ones, and whether they were published in journals with relatively higher citation impact in terms of journal impact factors, article influence scores and quartiles.
Both supported and unsupported papers received comparable numbers of citations per paper, and were published in journals with similar citation impact values. Findings suggest that subsidies are not an effective incentive to improve the quality of scholarly papers. Such support programs should therefore be reconsidered.
URL : https://arxiv.org/abs/1909.10068
Authors : Erin C McKiernan, Lesley A Schimanski, Carol Muñoz Nieves, Lisa Matthias, Meredith T Niles, Juan P Alperin
We analyzed how often and in what ways the Journal Impact Factor (JIF) is currently used in review, promotion, and tenure (RPT) documents of a representative sample of universities from the United States and Canada. 40% of research-intensive institutions and 18% of master’s institutions mentioned the JIF, or closely related terms.
Of the institutions that mentioned the JIF, 87% supported its use in at least one of their RPT documents, 13% expressed caution about its use, and none heavily criticized it or prohibited its use. Furthermore, 63% of institutions that mentioned the JIF associated the metric with quality, 40% with impact, importance, or significance, and 20% with prestige, reputation, or status.
We conclude that use of the JIF is encouraged in RPT evaluations, especially at research-intensive universities, and that there is work to be done to avoid the potential misuse of metrics like the JIF.
URL : Meta-Research: Use of the Journal Impact Factor in academic review, promotion, and tenure evaluations
DOI : https://doi.org/10.7554/eLife.47338.001
Author : Mike Thelwall
This paper introduces a simple agglomerative clustering method to identify large publishing consortia with at least 20 authors and 80% shared authorship between articles. Based on Scopus journal articles from 1996 to 2018, nearly all (88%) of the large consortia identified under these criteria published research with citation impact above the world average; the exceptions were mainly newer consortia for which average citation counts are unreliable.
On average, consortium research had almost double (1.95) the world average citation impact on the log scale used (Mean Normalised Log Citation Score). At least partial alphabetical author ordering was the norm in most consortia.
The 250 largest consortia were organized either around expensive equipment in nuclear physics and astronomy, or around predominantly health-related topics in genomics, medicine, public health, microbiology and neuropsychology.
For the health-related consortia, except for the first and last few authors, authorship seems primarily to indicate contributions to the shared project infrastructure needed to gather the raw data.
It is impossible for research evaluators to identify the contributions of individual authors in the huge alphabetical consortia of physics and astronomy, and problematic for the middle and end authors of health-related consortia.
For small scale evaluations, authorship contribution statements could be used, when available.
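The "80% shared authorship" criterion can be sketched as a set-overlap check between the author lists of two articles. This is a hypothetical implementation for illustration; the paper's exact overlap definition and clustering details may differ:

```python
def shared_authorship(authors_a, authors_b):
    """Fraction of the smaller author list that also appears in the other.
    Hypothetical overlap measure for an '80% shared authorship' rule."""
    overlap = len(set(authors_a) & set(authors_b))
    return overlap / min(len(authors_a), len(authors_b))

# Two hypothetical articles with 25 authors each, 21 of them shared:
art1 = [f"author{i}" for i in range(25)]
art2 = art1[:21] + ["w", "x", "y", "z"]
print(shared_authorship(art1, art2) >= 0.8)  # True: 21/25 = 0.84
```

In an agglomerative scheme, articles whose pairwise overlap clears the threshold would be merged into the same consortium cluster, which is then kept only if it reaches the 20-author minimum.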
URL : https://arxiv.org/abs/1906.01849
Authors : Michael Fire, Carlos Guestrin
The academic publishing world is changing significantly, with ever-growing numbers of publications each year and shifting publishing patterns. However, the metrics used to measure academic success, such as the number of publications, citation number, and impact factor, have not changed for decades.
Moreover, recent studies indicate that these metrics have become targets and follow Goodhart’s Law, according to which, “when a measure becomes a target, it ceases to be a good measure.”
In this study, we analyzed >120 million papers to examine how the academic publishing world has evolved over the last century, with a deeper look into the specific field of biology. Our study shows that the validity of citation-based measures is being compromised and their usefulness is lessening.
In particular, the number of publications has ceased to be a good metric as a result of longer author lists, shorter papers, and surging publication numbers. Citation-based metrics, such as citation number and h-index, are likewise affected by the flood of papers, self-citations, and lengthy reference lists.
Measures such as a journal’s impact factor have also ceased to be good metrics due to the soaring numbers of papers that are published in top journals, particularly from the same pool of authors.
Moreover, by analyzing properties of >2,600 research fields, we observed that citation-based metrics are not beneficial for comparing researchers in different fields, or even in the same department.
Authors : Margit Osterloh, Bruno S. Frey
Publications in top journals today have a powerful influence on academic careers although there is much criticism of using journal rankings to evaluate individual articles.
We ask why this practice of performance evaluation is still so influential. We suggest this is the case because a majority of authors benefit from the present system due to the extreme skewness of citation distributions. “Performance paradox” effects aggravate the problem.
Three extant suggestions for reforming performance management are critically discussed. We advance a new proposal based on the insight that fundamental uncertainty is symptomatic of scholarly work. It suggests focal randomization using a rationally founded and well-orchestrated procedure.
URL : How to avoid borrowed plumes in academia