Author : Alexander M. Petersen
Since their emergence just a decade ago, megajournals have come to publish nearly 2% of scientific research, representing a major industrial shift in the production of knowledge. Such high-throughput production stresses several aspects of the publication process, including the editorial oversight of peer review.
As the largest megajournal, PLOS ONE has relied on a single-tier editorial board comprising ∼7000 active academics, who thereby face conflicts of interest relating to their dual roles as both producers and gatekeepers of peer-reviewed literature.
While such conflicts of interest are also a factor for editorial boards of smaller journals, little is known about how the scalability of megajournals may introduce perverse incentives for editorial service.
To address this issue, we analyze the activity of PLOS ONE editors over the journal’s inaugural decade (2006–2015) and find highly variable activity levels. We then leverage this variation to model how editorial bias in the manuscript decision process relates to two editor-specific factors: repeated editor-author interactions and shifts in the rates of citations directed at editors – a form of citation remuneration analogous to self-citation.
Our results indicate significantly stronger manuscript bias among a relatively small number of extremely active editors, who also feature relatively high rates of self-citation within the manuscripts they handle.
These anomalous activity patterns are consistent with perverse incentives and the temptations they offer at scale, a dynamic theoretically grounded in the “slippery-slope” evolution of apathy and misconduct in power-driven environments.
By applying quantitative evaluation to the gatekeepers of scientific knowledge, we shed light on various ethics issues crucial to science policy – in particular, calling for more transparent and structured management of editor activity in megajournals that rely on active academics.
URL : Megajournal mismanagement: Manuscript decision bias and anomalous editor activity at PLOS ONE
DOI : https://doi.org/10.1016/j.joi.2019.100974
Authors : Asura Enkhbayar, Stefanie Haustein, Germana Barata, Juan Pablo Alperin
Despite its undisputed position as the biggest social media platform, Facebook has never entered the main stage of altmetrics research. In this study, we argue that this lack of attention is not due to a lack of relevant activity on the platform, but because challenges in collecting Facebook data have limited research to activity that takes place in a select group of public pages and groups.
We present a new method of collecting shares, reactions, and comments across the platform, including private timelines, and use it to gather data for all articles published between 2015 and 2017 in the journal PLOS ONE.
We compare the gathered data with altmetrics collected and aggregated by Altmetric. The results show that 58.7% of sharing activity on the platform happens outside of public view and that, when all shares are collected, the volume of activity approximates patterns of engagement previously only observed for Twitter.
Both results suggest that the role and impact of Facebook as a medium for science and scholarly communication have been underestimated. Furthermore, they emphasise the importance of openness and transparency around the collection and aggregation of altmetrics.
URL : https://arxiv.org/abs/1909.01476
Authors : Wolfgang Otto, Behnam Ghavimi, Philipp Mayr, Rajesh Piryani, Vivek Kumar Singh
In this article, we describe highly cited publications in a PLOS ONE full-text corpus. For these publications, we analyse the citation contexts concerning their position in the text and their age at the time of citing.
By adopting the perspective of highly cited papers, we can distinguish them based on their citation context alone, even without any other information source or metrics.
We describe the top cited references based on how, when and in which context they are cited. The focus of this study is on a time perspective to explain the nature of the reception of highly cited papers.
We find that these references are distinguishable by the IMRaD section in which they are cited and, further, that the section usage of highly cited papers is time-dependent.
The longer the citation interval, the higher the probability that a reference is cited in a method section.
URL : https://arxiv.org/abs/1903.11693
Authors : Simon Wakeling, Peter Willett, Claire Creaser, Jenny Fry, Stephen Pinfield, Valerie Spezi, Marc Bonne, Christina Founti, Itzelle Medina Perea
Article commenting functionality allows users to add publicly visible comments to an article on a publisher’s website. As well as facilitating forms of post-publication peer review, for publishers of open-access mega-journals (large, broad scope, OA journals that seek to publish all technically or scientifically sound research) comments are also thought to serve as a means for the community to discuss and communicate the significance and novelty of the research, factors which are not assessed during peer review.
In this paper we present the results of an analysis of commenting on articles published by the Public Library of Science (PLOS), publisher of the first and best-known mega-journal PLOS ONE, between 2003 and 2016.
We find that while overall commenting rates are low, and have declined since 2010, there is substantial variation across different PLOS titles. Using a typology of comments developed for this research we also find that only around half of comments engage in an academic discussion of the article, and that these discussions are most likely to focus on the paper’s technical soundness.
Our results suggest that publishers have yet to encourage significant numbers of readers to leave comments, with implications for the effectiveness of commenting as a means of collecting and communicating community perceptions of an article’s importance.
DOI : https://doi.org/10.1177%2F0165551518819965
Authors : Lisa M. Federer, Christopher W. Belter, Douglas J. Joubert, Alicia Livinski, Ya-Ling Lu, Lissa N. Snyders, Holly Thompson
A number of publishers and funders, including PLOS, have recently adopted policies requiring researchers to share the data underlying their results and publications. Such policies help increase the reproducibility of the published literature, as well as make a larger body of data available for reuse and re-analysis.
In this study, we evaluate the extent to which authors have complied with this policy by analyzing Data Availability Statements from 47,593 papers published in PLOS ONE between March 2014 (when the policy went into effect) and May 2016.
Our analysis shows that compliance with the policy has increased, with a significant decline over time in papers that did not include a Data Availability Statement. However, only about 20% of statements indicate that data are deposited in a repository, which the PLOS policy states is the preferred method.
More commonly, authors state that their data are in the paper itself or in the supplemental information, though it is unclear whether these data meet the level of sharing required in the PLOS policy.
These findings suggest that additional review of Data Availability Statements or more stringent policies may be needed to increase data sharing.
URL : Data sharing in PLOS ONE: An analysis of Data Availability Statements
DOI : https://doi.org/10.1371/journal.pone.0194768
Author : Bo-Christer Björk
Mega-journals are a new kind of scholarly journal made possible by electronic publishing. They are open access (OA) and funded by charges that authors pay for publishing services. What distinguishes mega-journals from other OA journals is, in particular, a peer review focusing only on scientific trustworthiness.
The journals can easily publish thousands of articles per year and there is no need to filter articles due to restricted slots in the publishing schedule. This study updates some earlier longitudinal studies of the evolution of mega-journals and their publication volumes.
After very rapid growth in 2010–2013, the increase in overall article volumes has slowed down. Mega-journals are also increasingly dependent on Chinese authors for sustained growth; these authors now contribute 25% of all articles in such journals.
There has also been an internal shift in market shares. PLOS ONE, which totally dominated mega-journal publishing in the early years, currently publishes around one-third of all articles. Scientific Reports has grown rapidly since 2014 and is now the biggest journal.
URL : Evolution of the scholarly mega-journal, 2006–2017
DOI : https://doi.org/10.7717/peerj.4357
Authors : Misha Teplitskiy, Daniel Acuna, Aida Elamrani-Raoult, Konrad Kording, James Evans
Personal connections between creators and evaluators of scientific works are ubiquitous, and the possibility of bias ever-present. Although connections have been shown to bias prospective judgments of (uncertain) future performance, it is unknown whether such biases occur in the much more concrete task of assessing the scientific validity of already completed work, and if so, why.
This study presents evidence that personal connections between authors and reviewers of neuroscience manuscripts are associated with biased judgments and explores the mechanisms driving the effect.
Using reviews from 7,981 neuroscience manuscripts submitted to the journal PLOS ONE, which instructs reviewers to evaluate manuscripts only on scientific validity, we find that reviewers favored authors close in the co-authorship network by ~0.11 points on a 1.0 – 4.0 scale for each step of proximity.
PLOS ONE’s validity-focused review and the substantial amount of favoritism shown by distant vs. very distant reviewers, both of whom should have little to gain from nepotism, point to the central role of substantive disagreements between scientists in different “schools of thought.”
The results suggest that removing bias from peer review cannot be accomplished simply by recusing closely connected reviewers, and highlight the value of recruiting reviewers embedded in diverse professional networks.
URL : https://arxiv.org/abs/1802.01270
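The "steps of proximity" in the Teplitskiy et al. abstract refer to shortest-path distance in a co-authorship network. The following sketch shows one way such a distance could be computed; the graph, the names, and the `proximity` helper are all hypothetical illustrations, not the authors' actual method or data.

```python
# Sketch: shortest-path distance between an author and a reviewer in a
# co-authorship network. Names and edges are invented for illustration.
from collections import deque

# Toy co-authorship graph: an edge means two researchers have co-authored a paper.
COAUTHORS = {
    "author": {"colleague_a", "reviewer_2"},
    "colleague_a": {"author", "reviewer_1"},
    "reviewer_1": {"colleague_a"},
    "reviewer_2": {"author"},
    "reviewer_3": {"colleague_b"},
    "colleague_b": {"reviewer_3"},
}

def proximity(graph, start, target):
    """Shortest co-authorship distance in steps; None if unconnected."""
    if start == target:
        return 0
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        for nbr in graph.get(node, ()):
            if nbr == target:
                return dist + 1
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, dist + 1))
    return None  # no path: the two researchers are in disconnected components

print(proximity(COAUTHORS, "author", "reviewer_2"))  # 1 (direct co-author)
print(proximity(COAUTHORS, "author", "reviewer_1"))  # 2 (co-author of a co-author)
print(proximity(COAUTHORS, "author", "reviewer_3"))  # None (no path)
```

Under the paper's reported effect, each one-step reduction in this distance would correspond to roughly a 0.11-point more favorable review score on the 1.0–4.0 scale.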