Open peer review: from an experiment to a model. A narrative of an open peer review experiment

This article recounts the development of an experiment with open peer review and open commentary protocols. The experiment concerned article submissions to the environmental sciences journal VertigO, a digital, open-access scientific publication.

The experiment did not run long enough (four months), nor cover a large enough corpus (10 preprints), to support firm quantitative conclusions. It nonetheless highlights practical leads and reflections on the potential and the limitations of open review processes, in the broadest sense, for scientific publishing.

Drawing on the experiment as an exemplar and on participant observation as a copy-editor dedicated to open peer review, the article finally proposes a model derived from the experimental prototype.

This model, named OPRISM, could be implemented in other publishing contexts for the social sciences and humanities. A central and much-debated activity in the academic world, peer review refers to various practices, such as control, validation, allocation, and contradiction, exercised by the scientific community on itself.

Its scope is wide, ranging from the allocation of funding to the relevance of a recruitment. According to common wisdom, the scientific community's control over itself is a guarantee of scientific quality.

This issue has become even more important in an international context of competition among universities and among scholars themselves.

URL : Open peer review: from an experiment to a model

Alternative location : https://hal.archives-ouvertes.fr/hal-01302597

Towards a (De)centralization-Based Typology of Peer Production

Online peer-production platforms facilitate the coordination of creative work and services. Generally considered as empowering participatory tools and a source of common good, they can also be, however, alienating instruments of digital labour.

This paper proposes a typology of peer-production platforms, based on the centralization/decentralization levels of several of their design features. Between commons-based peer-production and crowdsourced, user-generated content “enclosed” by corporations, a wide range of models combine different social, political, technical and economic arrangements.

This combined analysis of the level of (de)centralization of platform features provides information on emancipation capabilities in a more granular way than a market-based qualification of platforms, based on the nature of ownership or business models only.

The five selected features of the proposed typology are: ownership of means of production, technical architecture/design, social organization/governance of work patterns, ownership of the peer-produced resource, and value of the output.

URL : Towards a (De)centralization-Based Typology of Peer Production

Alternative location : http://triplec.at/index.php/tripleC/article/view/728

Open peer review: from experimentation to modelling. An account of an open peer review experiment

This article recounts the unfolding of an experiment with an open peer review and open commentary mechanism, applied to article submissions to the environmental sciences journal VertigO, an electronic, open-access scientific publication.

While the experiment did not run over a long enough period (four months) or a broad enough corpus (10 manuscripts) to draw firm quantitative conclusions, it nonetheless sets out concrete leads and reflections on the potential and the limits of opening up review processes, in the broad sense, for scientific publishing.

Drawing on the exemplarity of the experiment and on participant observation as a copy-editor dedicated to open review, the article finally proposes a model of the prototype tested. This model, dubbed OPRISM, could be used in other editorial settings for the humanities and social sciences.

URL : https://hal-paris1.archives-ouvertes.fr/hal-01283582v1

Improving the peer-review process and editorial quality: key errors escaping the review and editorial process in top scientific journals

We apply a novel mistake index to assess trends in the proportion of corrections published between 1993 and 2014 in Nature, Science and PNAS. The index revealed a progressive increase in the proportion of corrections published in these three high-quality journals.

The index appears to be independent of the journal impact factor and of the number of items published, as suggested by a comparative analysis of 16 top scientific journals with different impact factors and disciplines. A more detailed analysis suggests that time-to-correction increased significantly over time and also differed among journals (Nature: 233 days; Science: 136 days; PNAS: 232 days).
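
The abstract does not spell out how the index is computed; a minimal sketch, assuming the "mistake index" is simply the share of published corrections among all items a journal publishes in a year (the function name and the yearly counts below are illustrative, not from the study):

```python
def mistake_index(corrections: int, items_published: int) -> float:
    """Share of published corrections among all items a journal published."""
    if items_published <= 0:
        raise ValueError("items_published must be positive")
    return corrections / items_published

# Illustrative (invented) yearly counts for one journal:
yearly_counts = {1993: (12, 2400), 2014: (48, 2600)}
for year, (corrections, items) in sorted(yearly_counts.items()):
    print(year, round(mistake_index(corrections, items), 4))
```

Tracking this ratio per year, as the paper does for 1993-2014, makes the trend comparable across journals that publish very different numbers of items.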

A detailed review of 1,428 errors showed that 60% of corrections were related to figures, authors, references or results. Across the three severity categories established, 34.7% of the corrections were considered mild, 47.7% moderate and 17.6% severe, with proportions also differing among journals. Errors occurring during the printing process were responsible for 5% of corrections in Nature, 3% in Science and 18% in PNAS.

Measuring temporal trends in the quality of scientific manuscripts can assist editors and reviewers in identifying the most common mistakes, increasing the rigor of peer review and improving the quality of published scientific manuscripts.

URL : Improving the peer-review process and editorial quality: key errors escaping the review and editorial process in top scientific journals

DOI : https://doi.org/10.7717/peerj.1670

Peer Review Quality and Transparency of the Peer-Review Process in Open Access and Subscription Journals

Background

Recent controversies highlighting substandard peer review in Open Access (OA) and traditional (subscription) journals have increased the need for authors, funders, publishers, and institutions to assure the quality of peer review in academic journals. I propose that transparency of the peer-review process may be seen as an indicator of the quality of peer review, and develop and validate a tool enabling different stakeholders to assess the transparency of the peer-review process.

Methods and Findings

Based on editorial guidelines and best practices, I developed a 14-item tool to rate the transparency of the peer-review process on the basis of journals' websites. In Study 1, a random sample of 231 authors of papers in 92 subscription journals in different fields rated the transparency of the journals that published their work. Authors' transparency ratings were positively associated with the quality of the peer-review process but unrelated to the journals' impact factors.

In Study 2, 20 experts on OA publishing assessed the transparency of established (non-OA) journals, OA journals categorized as being published by potential predatory publishers, and journals from the Directory of Open Access Journals (DOAJ). Results show high reliability across items (α = .91) and sufficient reliability across raters. Ratings differentiated the three types of journals well.
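
Study 2 reports internal consistency across the 14 items as Cronbach's α = .91. As a reminder of how this statistic is computed, here is a minimal sketch using the standard formula (not the study's code; the rating matrix below is invented):

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a list of respondents, each a list of k item scores."""
    k = len(scores[0])

    def pvar(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_variances = [pvar([row[i] for row in scores]) for i in range(k)]
    total_variance = pvar([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_variances) / total_variance)

# Perfectly consistent (invented) ratings yield alpha = 1.0:
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))
```

Values near 1 indicate that the items move together, so an α of .91 supports treating the 14 items as one transparency scale.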

In Study 3, academic librarians rated a random sample of 140 DOAJ journals and another 54 journals that had received a hoax paper written by Bohannon to test peer-review quality. Journals with higher transparency ratings were less likely to accept the flawed paper and showed higher impact as measured by the h5 index from Google Scholar.

Conclusions

The tool for assessing the transparency of the peer-review process at academic journals shows promising reliability and validity. The transparency of the peer-review process can be seen as an indicator of peer-review quality, allowing the tool to be used to predict academic quality in new journals.

URL : Peer Review Quality and Transparency of the Peer-Review Process in Open Access and Subscription Journals

DOI: 10.1371/journal.pone.0147913

The Peer Reviewers’ Openness Initiative: incentivizing open research practices through peer review

Openness is one of the central values of science. Open scientific practices, such as sharing data, materials and analysis scripts alongside published articles, have many benefits, including easier replication and extension studies, increased availability of data for theory-building and meta-analysis, and increased possibility of review and collaboration even after a paper has been published. Although modern information technology makes sharing easier than ever before, uptake of open practices has been slow. We suggest this might be due in part to a social dilemma arising from misaligned incentives, and propose a specific, concrete mechanism (reviewers withholding comprehensive review) to establish the expectation of open practices as a matter of scientific principle.

URL : The Peer Reviewers’ Openness Initiative: incentivizing open research practices through peer review

DOI: 10.1098/rsos.150547

Peer review: The current landscape and future trends

This paper is based on research commissioned by the Wellcome Trust in 2015 and catalogues current initiatives and trends in the systems and processes surrounding peer review. It considers issues such as open and interactive reviews, post-publication comments and ratings, and the platforms provided by both publishers and other organisations to support such activity; third-party peer review platforms; and measures from publishers and others to provide more recognition and rewards for peer reviewers. It also speculates on likely key trends in peer review for the future.

URL : http://onlinelibrary.wiley.com/doi/10.1002/leap.1008/abstract