Author : Miguel A. Fortuna
In the same way that ecosystems tend to increase maturity by decreasing the flow of energy per unit biomass, we should move towards a more mature science by publishing fewer but higher-quality papers and by moving away from joining large teams in small roles. That is, we should decrease our scientific productivity for good.
URL : https://arxiv.org/abs/1906.02927
Author : Mike Thelwall
This paper introduces a simple agglomerative clustering method to identify large publishing consortia with at least 20 authors and 80% shared authorship between articles. Based on Scopus journal articles published from 1996 to 2018, nearly all (88%) of the large consortia meeting these criteria published research with citation impact above the world average; the exceptions were mainly newer consortia, for which average citation counts are unreliable.
On average, consortium research had almost double (1.95) the world average citation impact on the log scale used (Mean Normalised Log Citation Score). At least partial alphabetical author ordering was the norm in most consortia.
The 250 largest consortia were for nuclear physics and astronomy around expensive equipment, and for predominantly health-related issues in genomics, medicine, public health, microbiology and neuropsychology.
For the health-related consortia, except for the first and last few authors, authorship seems primarily to indicate contributions to the shared project infrastructure necessary to gather the raw data.
It is impossible for research evaluators to identify the contributions of individual authors in the huge alphabetical consortia of physics and astronomy, and problematic for the middle and end authors of health-related consortia.
For small scale evaluations, authorship contribution statements could be used, when available.
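The consortium criterion described above (at least 20 authors and 80% shared authorship between articles) can be sketched as a pairwise check. This is a minimal illustration only, not the paper's full agglomerative procedure, and it assumes overlap is measured against the smaller of the two author lists:

```python
# Toy sketch of the consortium criterion; thresholds come from the paper,
# the overlap definition (smaller list as denominator) is an assumption.

MIN_AUTHORS = 20      # minimum author-list size
MIN_OVERLAP = 0.80    # minimum shared-authorship fraction

def shared_authorship(authors_a, authors_b):
    """Fraction of the smaller author list shared between two articles."""
    a, b = set(authors_a), set(authors_b)
    return len(a & b) / min(len(a), len(b))

def candidate_consortium_pair(authors_a, authors_b):
    """True if both articles are large enough and overlap sufficiently."""
    return (len(set(authors_a)) >= MIN_AUTHORS
            and len(set(authors_b)) >= MIN_AUTHORS
            and shared_authorship(authors_a, authors_b) >= MIN_OVERLAP)
```

An agglomerative pass would then repeatedly merge articles (or clusters) that satisfy this pairwise check until no further merges are possible.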
URL : https://arxiv.org/abs/1906.01849
Authors : Federico Bianchi, Francisco Grimaldo, Flaminio Squazzoni
This paper presents an index that measures reviewer contribution to the editorial processes of scholarly journals. Drawing on a metaphor from ranking algorithms in sports tournaments, we created an index that scores reviewers on several context-specific dimensions: report delivery time, report length, and the alignment of recommendations with editorial decisions.
To test the index, we used a dataset of peer review in a multi-disciplinary journal, covering 544 reviewers on 606 submissions over six years. Although limited by sample size, the test showed that the index efficiently identifies outstanding contributors and weak-performing reviewers.
Our index is flexible, contemplates extensions and could be incorporated into available scholarly journal management tools. It can assist editors in rewarding high performing reviewers and managing editorial turnover.
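The abstract does not give the F3-index formula. As a toy illustration only, the three dimensions it names (report delivery time, report length, alignment with the editorial decision) could be normalized and combined with weights; the caps, weights, and functional form below are invented for the sketch:

```python
def reviewer_score(delay_days, report_words, aligned,
                   max_delay=60, max_words=1000,
                   weights=(0.4, 0.3, 0.3)):
    """Toy composite score in [0, 1].

    All normalizations and weights are illustrative assumptions,
    not the published F3-index.
    """
    timeliness = max(0.0, 1.0 - delay_days / max_delay)   # faster is better
    thoroughness = min(1.0, report_words / max_words)     # longer, up to a cap
    alignment = 1.0 if aligned else 0.0                   # matched the decision
    w_t, w_l, w_a = weights
    return w_t * timeliness + w_l * thoroughness + w_a * alignment
```

A real index would also need to handle context (e.g. field-specific report-length norms), which is why the authors describe their dimensions as context-specific.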
Title : The F3-index. Valuing reviewers for scholarly journals
DOI : https://doi.org/10.1016/j.joi.2018.11.007
Authors : Michaela Strinzel, Anna Severin, Katrin Milzow, Matthias Egger
Despite growing awareness of predatory publishing and research on its market characteristics, the defining attributes of fraudulent journals remain controversial.
We aimed to develop a better understanding of quality criteria for scholarly journals by analysing journals and publishers indexed in blacklists of predatory journals and whitelists of legitimate journals and the lists’ inclusion criteria.
We searched for blacklists and whitelists in early 2018. Lists that included journals across disciplines were eligible. We used a mixed methods approach, combining quantitative and qualitative analyses.
To quantify overlaps between lists in terms of indexed journals and publishers we employed the Jaro-Winkler string metric and Venn diagrams. To identify topics addressed by the lists’ inclusion criteria and to derive their broader conceptual categories, we used a qualitative coding approach.
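The Jaro-Winkler metric mentioned above scores string similarity in [0, 1], boosting pairs that share a common prefix, which makes it well suited to matching journal and publisher names across lists. A self-contained implementation of the standard metric (not the authors' exact matching pipeline) looks like this:

```python
def jaro(s1, s2):
    """Jaro similarity between two strings, in [0, 1]."""
    if s1 == s2:
        return 1.0
    len1, len2 = len(s1), len(s2)
    if not len1 or not len2:
        return 0.0
    window = max(len1, len2) // 2 - 1  # match window around each position
    match1 = [False] * len1
    match2 = [False] * len2
    matches = 0
    for i, c in enumerate(s1):
        lo, hi = max(0, i - window), min(len2, i + window + 1)
        for j in range(lo, hi):
            if not match2[j] and s2[j] == c:
                match1[i] = match2[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    # Count transpositions among matched characters.
    t = k = 0
    for i in range(len1):
        if match1[i]:
            while not match2[k]:
                k += 1
            if s1[i] != s2[k]:
                t += 1
            k += 1
    t //= 2
    return (matches / len1 + matches / len2 + (matches - t) / matches) / 3

def jaro_winkler(s1, s2, p=0.1):
    """Jaro similarity boosted by a shared prefix of up to 4 characters."""
    sim = jaro(s1, s2)
    prefix = 0
    for a, b in zip(s1[:4], s2[:4]):
        if a != b:
            break
        prefix += 1
    return sim + prefix * p * (1 - sim)
```

In practice one would normalize names first (case-folding, stripping punctuation) and treat pairs above a chosen similarity threshold as the same journal or publisher.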
Two blacklists (Beall’s and Cabell’s) and two whitelists (DOAJ and Cabell’s) were eligible. The number of journals per list ranged from 1,404 to 12,357 and the number of publishers from 473 to 5,638. Seventy-three journals and 42 publishers were included in both a blacklist and a whitelist. A total of 198 inclusion criteria were examined.
Seven themes were identified: (i) peer review, (ii) editorial services, (iii) policy, (iv) business practices, (v) publishing, archiving and access, (vi) website and (vii) indexing and metrics.
Business practices accounted for almost half of blacklists’ criteria, whereas whitelists gave more emphasis to criteria related to policy and guidelines. Criteria were grouped into four broad concepts: (i) transparency, (ii) ethics, (iii) professional standards and (iv) peer review and other services.
Whitelists gave more weight to transparency whereas blacklists focused on ethics and professional standards. The criteria included in whitelists were easier to verify than those used in blacklists. Both types of list gave relatively little emphasis to the quality of peer review.
There is overlap between journals and publishers included in blacklists and whitelists. Blacklists and whitelists differ in their criteria for quality and the weight given to different dimensions of quality. Aspects that are central but difficult to verify receive insufficient attention.
Title : “Blacklists” and “whitelists” to tackle predatory publishing: A cross-sectional comparison and thematic analysis
DOI : https://doi.org/10.7287/peerj.preprints.27532v1
Author : Michael B Jackson
AoB PLANTS is a not-for-profit, open access, plant science journal and one of three peer-reviewed journals owned and managed by the Annals of Botany Company. This article explains the events and thinking that led to the founding of AoB PLANTS and how the Journal’s unique features came to be formalized prior to its launch in September 2009.
The article also describes how the Journal’s management developed over the first 10 years and summarizes the Journal’s achievements in a decade where open access journals have proliferated despite subscription journals continuing to dominate the publishing of peer-reviewed botanical science.
Title : Ten years of AoB PLANTS, the open access journal for plant scientists: inception and progress since 2009
DOI : https://doi.org/10.1093/aobpla/plz025
Authors : Michael Fire, Carlos Guestrin
The academic publishing world is changing significantly, with ever-growing numbers of publications each year and shifting publishing patterns. However, the metrics used to measure academic success, such as the number of publications, citation number, and impact factor, have not changed for decades.
Moreover, recent studies indicate that these metrics have become targets and follow Goodhart’s Law, according to which, “when a measure becomes a target, it ceases to be a good measure.”
In this study, we analyzed >120 million papers to examine how the academic publishing world has evolved over the last century, with a deeper look into the specific field of biology. Our study shows that the validity of citation-based measures is being compromised and their usefulness is lessening.
In particular, the number of publications has ceased to be a good metric as a result of longer author lists, shorter papers, and surging publication numbers. Citation-based metrics, such as citation number and the h-index, are likewise affected by the flood of papers, self-citations, and lengthy reference lists.
Measures such as a journal’s impact factor have also ceased to be good metrics due to the soaring numbers of papers that are published in top journals, particularly from the same pool of authors.
Moreover, by analyzing properties of >2,600 research fields, we observed that citation-based metrics are not beneficial for comparing researchers in different fields, or even in the same department.
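To make the citation-metric discussion above concrete: the h-index is the largest h such that an author has h papers with at least h citations each. A minimal implementation also shows why inflated citation counts (e.g. from self-citations or reciprocal citing) feed straight into the metric, since every added citation can push another paper over the threshold:

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:   # the rank-th best paper still clears the bar
            h = rank
        else:
            break
    return h
```

Because the metric depends only on raw counts, it cannot distinguish independent citations from self-citations, which is one reason the authors argue its validity is eroding.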
Authors : Michael B. McNally, Erik G. Christiansen
Transitioning from closed courses and educational resources to open educational resources (OER) and open courseware (OCW) requires considerations of many factors beyond simply the use of an open licence.
This paper examines the pedagogical choices and trade-offs involved in creating OER and OCW. Eight factors that influence openness are identified, including open licensing, accessibility and usability standards, language, cultural considerations, support costs, digital distribution, and file formats.
These factors are examined under closed, mixed and most-open scenarios to compare the relative amount of effort, willingness, skill and knowledge each requires.
The paper concludes by suggesting that maximizing openness is not practical and argues that open educators should strive for ‘open enough’ rather than maximal openness.
DOI : https://doi.org/10.5210/fm.v24i6.9180