Authors : Simone Righi, Károly Takács
It is not easy to rationalize how peer review, the current grassroots quality control of science, can work based on the voluntary contributions of reviewers. Reviewers have no direct incentive to write impartial and thorough evaluations.
Consequently, authors face little risk in submitting low-quality work. As a result, scientists face a social dilemma: if everyone acts according to his or her own self-interest, low scientific quality is produced. Still, in practice, reviewers as well as authors invest high effort in reviews and submissions.
We examine how the increased relevance of public good benefits (journal impact factor), the editorial policy of handling incoming reviews, and the acceptance decisions that take into account reputational information can help the evolution of high-quality contributions from authors.
Eliciting high effort from reviewers is problematic even if authors cooperate: reviewers are still best off producing low-quality reviews, which does not halt scientific development but adds random noise and unnecessary costs to it.
We show with agent-based simulations that tacit, reciprocity-based agreements between authors might decrease these costs, but they do not result in superior scientific quality. Our study underlines why certain self-emerged current practices, such as the increased importance of journal metrics, the reputation-based selection of reviewers, and the reputation bias in acceptance decisions, work efficiently for scientific development.
Our results do not show, however, how a system of peer review with impartial and thorough evaluations could be sustained alongside rapid scientific development.
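The reviewing dilemma described above can be illustrated with a toy payoff comparison. This is a minimal sketch with hypothetical payoff values, not the paper's actual agent-based model: whatever the author does, the reviewer's payoff is higher with low effort, so low effort is the dominant strategy.

```python
# Toy illustration of the reviewing dilemma (hypothetical payoffs, not the
# paper's model): the shared benefit of an accurate review only materializes
# when a thorough review meets a high-quality submission, yet the private
# cost of thoroughness always exceeds the reviewer's share of that benefit.
BENEFIT_GOOD_SCIENCE = 2.0   # shared public-good benefit of an accurate review
COST_HIGH_EFFORT = 3.0       # private cost of a thorough review
COST_LOW_EFFORT = 0.5        # private cost of a cursory review

def reviewer_payoff(effort_high: bool, author_high_quality: bool) -> float:
    benefit = BENEFIT_GOOD_SCIENCE if (effort_high and author_high_quality) else 0.0
    return benefit - (COST_HIGH_EFFORT if effort_high else COST_LOW_EFFORT)

for author_hq in (True, False):
    hi = reviewer_payoff(True, author_hq)
    lo = reviewer_payoff(False, author_hq)
    print(f"author high quality={author_hq}: high effort={hi:+.1f}, low effort={lo:+.1f}")
```

With these numbers, low effort strictly dominates in both columns of the payoff table, matching the abstract's claim that reviewers are best off producing low-quality reviews even when authors cooperate.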
URL : http://arxiv.org/abs/1607.02574
Authors : Marco Giordan, Attila Csikasz-Nagy, Andrew M. Collings
Publishing in scientific journals is one of the most important ways in which scientists disseminate research to their peers and to the wider public.
Pre-publication peer review underpins this process, but peer review is subject to various criticisms and is under pressure from growth in the number of scientific publications.
Here we examine an element of the editorial process at eLife, in which the Reviewing Editor usually serves as one of the referees, to see what effect this has on decision times, decision type, and the number of citations.
We analysed a dataset of 8,905 research submissions to eLife since June 2012, of which 2,750 were sent for peer review, using R and Python to perform the statistical analysis.
The Reviewing Editor serving as one of the peer reviewers results in faster decision times on average, with the time to final decision 10 days faster for accepted submissions (n=1,405) and 5 days faster for papers rejected after peer review (n=1,099).
There was no effect on whether submissions were accepted or rejected, and a very small (but significant) effect on citation rates for published articles where the Reviewing Editor served as one of the peer reviewers.
An important aspect of eLife’s peer-review process appears effective: decision times are faster when the Reviewing Editor serves as a reviewer. Other journals hoping to improve decision times could consider adopting a similar approach.
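A group comparison of decision times like the one above can be sketched as follows. The distributions, group sizes aside, are invented for illustration; only the ~10-day gap for accepted submissions echoes the reported effect, and a permutation test stands in for whatever statistical procedure the authors actually used in R and Python.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical decision times in days (assumed distributions, not eLife data):
# the ~10-day gap mirrors the reported effect for accepted submissions.
editor = rng.gamma(shape=4.0, scale=20.0, size=1405)         # Reviewing Editor reviews
external = rng.gamma(shape=4.0, scale=20.0, size=1405) + 10  # external reviewers only

observed = external.mean() - editor.mean()

# Permutation test: shuffle group labels and see how often a gap this large
# arises by chance alone.
pooled = np.concatenate([editor, external])
count = 0
for _ in range(2000):
    rng.shuffle(pooled)
    diff = pooled[1405:].mean() - pooled[:1405].mean()
    if diff >= observed:
        count += 1
p_value = count / 2000

print(f"observed difference: {observed:.1f} days")
print(f"permutation p-value: {p_value:.4f}")
```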
URL : The effects of an editor serving as one of the reviewers during the peer-review process
DOI : http://dx.doi.org/10.12688/f1000research.8452.1
This article narrates the development and experimentation of open peer review and open commentary protocols. The experiment concerned article submissions to VertigO, a digital, open-access scientific journal in the environmental sciences.
The experiment did not run long enough (4 months) or on a large enough corpus (10 preprints) to support firm quantitative conclusions. However, it offers practical insights into the potential and the limitations of open review processes, in the broadest sense, for scientific publishing.
Drawing on the exemplarity of the experiment and on participant observation as a copy-editor dedicated to open peer review, the article finally proposes a model derived from the experimented prototype.
This model, named OPRISM, could be implemented in other publishing contexts for the social sciences and humanities. A central and much-debated activity in the academic world, peer review refers to different practices, such as control, validation, allocation and contradiction, exercised by the scientific community on itself.
Its scope is wide: from the allocation of funding to the relevance of a recruitment. According to common sense, the control of the scientific community by itself is a guarantee of scientific quality.
This issue has become even more important in an international context of competition between universities and between scholars themselves.
URL : Open peer review : from an experiment to a model
Alternative location : https://hal.archives-ouvertes.fr/hal-01302597
Online peer-production platforms facilitate the coordination of creative work and services. Generally considered empowering participatory tools and a source of common good, they can, however, also be alienating instruments of digital labour.
This paper proposes a typology of peer-production platforms, based on the centralization/decentralization levels of several of their design features. Between commons-based peer-production and crowdsourced, user-generated content “enclosed” by corporations, a wide range of models combine different social, political, technical and economic arrangements.
This combined analysis of the level of (de)centralization of platform features characterizes emancipation capabilities in a more granular way than market-based classifications of platforms, which rest on the nature of ownership or business models only.
The five selected features of the proposed typology are: ownership of means of production, technical architecture/design, social organization/governance of work patterns, ownership of the peer-produced resource, and value of the output.
URL : Towards a (De)centralization-Based Typology of Peer Production
Alternative location : http://triplec.at/index.php/tripleC/article/view/728
This article recounts the experimentation of an open peer review and open commentary system for article submissions to VertigO, an open-access electronic scientific journal in the environmental sciences.
Although the experiment did not run long enough (4 months) or on a large enough corpus (10 manuscripts) to support firm quantitative conclusions, it nonetheless presents concrete leads and reflections on the potential and the limits of opening up evaluation processes, in the broadest sense, for scientific publishing.
Drawing on the exemplarity of the experiment and on participant observation as a copy-editor dedicated to open evaluation, the article finally proposes a model of the experimented prototype. This model, named OPRISM, could be used in other editorial settings for the humanities and social sciences.
URL : https://hal-paris1.archives-ouvertes.fr/hal-01283582v1
We apply a novel mistake index to assess trends in the proportion of corrections published between 1993 and 2014 in Nature, Science and PNAS. The index revealed a progressive increase in the proportion of corrections published in these three high-quality journals.
The index appears to be independent of the journal impact factor or the number of items published, as suggested by a comparative analysis of 16 top scientific journals of different impact factors and disciplines. A more detailed analysis suggests that the time-to-correction increased significantly over time and also differed among journals (Nature: 233 days; Science: 136 days; PNAS: 232 days).
A detailed review of 1,428 errors showed that 60% of corrections were related to figures, authors, references or results. According to the three categories established, 34.7% of the corrections were considered mild, 47.7% moderate and 17.6% severe, also differing among journals. Errors occurring during the printing process were responsible for 5% of corrections in Nature, 3% in Science and 18% in PNAS.
The measurement of the temporal trends in the quality of scientific manuscripts can assist editors and reviewers in identifying the most common mistakes, increasing the rigor of peer-review and improving the quality of published scientific manuscripts.
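A mistake index of the kind applied above can be sketched as a simple proportion of corrections relative to items published. The counts below are illustrative placeholders, not the study's data; the point is only to show how such an index makes journals of different sizes comparable.

```python
# Minimal sketch of a "mistake index" as corrections per published item,
# using illustrative counts (hypothetical, not the study's actual data).
corrections = {"Nature": 58, "Science": 41, "PNAS": 120}
items_published = {"Nature": 2900, "Science": 2500, "PNAS": 4000}

mistake_index = {j: corrections[j] / items_published[j] for j in corrections}

# Normalizing by output lets journals of very different sizes be compared.
for journal, idx in sorted(mistake_index.items(), key=lambda kv: -kv[1]):
    print(f"{journal}: {idx:.3%} of published items corrected")
```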
URL : Improving the peer-review process and editorial quality: key errors escaping the review and editorial process in top scientific journals
DOI : https://doi.org/10.7717/peerj.1670
Recent controversies highlighting substandard peer review in Open Access (OA) and traditional (subscription) journals have increased the need for authors, funders, publishers, and institutions to assure quality of peer-review in academic journals. I propose that transparency of the peer-review process may be seen as an indicator of the quality of peer-review, and develop and validate a tool enabling different stakeholders to assess transparency of the peer-review process.
Methods and Findings
Based on editorial guidelines and best practices, I developed a 14-item tool to rate transparency of the peer-review process on the basis of journals’ websites. In Study 1, a random sample of 231 authors of papers in 92 subscription journals in different fields rated the transparency of the journals that published their work. Authors’ ratings of transparency were positively associated with the quality of the peer-review process but unrelated to journals’ impact factors.
In Study 2, 20 experts on OA publishing assessed the transparency of established (non-OA) journals, OA journals categorized as being published by potential predatory publishers, and journals from the Directory of Open Access Journals (DOAJ). Results show high reliability across items (α = .91) and sufficient reliability across raters. Ratings differentiated the three types of journals well.
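The internal-consistency figure reported above (α = .91) is Cronbach's alpha. A minimal sketch of its computation on simulated ratings, where the data (20 raters scoring 14 items, driven by a shared latent "transparency" factor) are assumptions, not the study's:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated ratings for a 14-item transparency checklist (assumed data):
# a shared latent "transparency" factor makes the 14 items correlate,
# which is the consistency that Cronbach's alpha captures.
n_raters, n_items = 20, 14
latent = rng.normal(size=(n_raters, 1))
ratings = latent + 0.5 * rng.normal(size=(n_raters, n_items))

def cronbach_alpha(scores: np.ndarray) -> float:
    """Alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

print(f"alpha = {cronbach_alpha(ratings):.2f}")
```

Because every item shares the latent factor, the total-score variance dwarfs the summed item variances and alpha comes out high, as in the study.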
In Study 3, academic librarians rated a random sample of 140 DOAJ journals and another 54 journals that had received a hoax paper written by Bohannon to test peer-review quality. Journals with higher transparency ratings were less likely to accept the flawed paper and showed higher impact as measured by the h5 index from Google Scholar.
The tool to assess transparency of the peer-review process at academic journals shows promising reliability and validity. The transparency of the peer-review process can be seen as an indicator of peer-review quality allowing the tool to be used to predict academic quality in new journals.
URL : Peer Review Quality and Transparency of the Peer-Review Process in Open Access and Subscription Journals