Beyond Fact Checking: Reconsidering the Status of Truth of Published Articles

Authors : David Pontille, Didier Torny

Since the 17th century, scientific knowledge has been produced through a collective process, involving specific technologies used to perform experiments, to regulate the participation of peers or lay people, and to ensure the validation of facts and the publication of major results.

In such a world, guided by the quest for a new kind of truth against previous beliefs, various forms of misconduct – from subtle plagiarism to the outright fabrication of data and results – have largely been considered marginal, if not nonexistent.

Yet “betrayers of the truth” have been alleged in numerous fraud cases from at least the 1970s onward, and the phenomenon is now a growing concern in many academic quarters. Facing numerous alerts, journals have generalized dedicated editorial formats to notify their readers of emerging doubts affecting articles they had published.

This short piece focuses exclusively on these formats, which consist in “flagging” certain articles to mark their problematic status. The visibility given to these flags and policies undermines the very basic components of the economy of science: how long can we collectively pretend that peer-reviewed knowledge should be the anchor against a “post-truth” world?

URL : https://halshs.archives-ouvertes.fr/halshs-01576348

International rankings of journals in the humanities and social sciences (SHS)

Authors : David Pontille, Didier Torny

Although several journal rankings were developed as early as the 1970s, what is unprecedented about those that emerged during the 2000s is their status as instruments of public policy. This is the case in Australia, Brazil, France, Flanders, Norway, and the Netherlands, where this evaluation technology is in force for certain fields – notably the humanities and social sciences (SHS).

In this article, we analyse the modes of existence of this specific evaluation technology. Although the generic formula of the “journal ranking” is spreading internationally, different versions are developing in parallel: their modes of production, the values defended by their promoters and users, as well as their concrete forms, vary widely.

We show that the space of variation of journal rankings in the SHS is always bounded by two options: favouring “good research” which, through cumulative advantage, risks leading to a (hyper)normal science that fosters dispositions of social conformity among researchers; or encouraging the emergence of minority communities (linguistic, disciplinary, interdisciplinary) and promoting the diversity of methods, theories, and objects, at the risk of leading to forms of relativism or to an archipelagisation of research.

URL : https://hal-mines-paristech.archives-ouvertes.fr/hal-01256027

Analysing researchers’ outreach efforts and the association with publication metrics: A case study of Kudos

Authors : Mojisola Erdt, Htet Htet Aung, Ashley Sara Aw, Charlie Rapple, Yin-Leng Theng

With the growth of scholarly collaboration networks and social communication platforms, members of the scholarly community are experimenting with their approach to disseminating research outputs, in an effort to increase their audience and outreach.

However, from a researcher’s point of view, it is difficult to determine whether efforts to make work more visible are worthwhile (in terms of the association with publication metrics) and within that, difficult to assess which platform or network is most effective for sharing work and connecting to a wider audience.

We undertook a case study of Kudos (https://www.growkudos.com), a web-based service that claims to help researchers increase the outreach of their publications, to examine the most effective tools for sharing publications online, and to investigate which actions are associated with improved metrics.

We extracted a dataset from Kudos of 830,565 unique publications claimed by authors, of which 20,775 had actions taken to explain or share them via Kudos; for 4,867 of these, full text download data from publishers was available.

Findings show that researchers are most likely to share their work on Facebook, but links shared on Twitter are more likely to be clicked on. A Mann-Whitney U test revealed that the treatment group (publications with actions in Kudos) had a significantly higher median of 149 full text downloads per publication (23.1% more) than the control group (publications with no actions in Kudos), which had a median of 121 full text downloads per publication.

These findings suggest that performing actions on publications, such as sharing, explaining, or enriching, could help to increase the number of full text downloads of a publication.
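
As a rough illustration of the comparison reported above, a Mann-Whitney U test can be run with SciPy. The download counts below are simulated stand-ins (the Kudos dataset itself is not reproduced here), chosen only to mirror the reported medians:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

# Hypothetical full text download counts, simulated around the medians
# reported in the study (149 with Kudos actions, 121 without):
treatment = rng.poisson(149, size=500)  # publications with Kudos actions
control = rng.poisson(121, size=500)    # publications without

# Non-parametric test of whether the two distributions differ
stat, p = mannwhitneyu(treatment, control, alternative="two-sided")
print(f"U = {stat:.0f}, p = {p:.3g}")
print(f"median treatment = {np.median(treatment):.0f}, "
      f"median control = {np.median(control):.0f}")
```

The test compares ranks rather than means, so it is robust to the heavy right skew typical of download counts.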

DOI : https://doi.org/10.1371/journal.pone.0183217

Citation Count Analysis for Papers with Preprints

Authors : Sergey Feldman, Kyle Lo, Waleed Ammar

We explore the degree to which papers prepublished on arXiv garner more citations, in an attempt to paint a sharper picture of fairness issues related to prepublishing. A paper’s citation count is estimated using a negative-binomial generalized linear model (GLM) while observing a binary variable which indicates whether the paper has been prepublished.

We control for author influence (via the authors’ h-index at the time of paper writing), publication venue, and overall time that paper has been available on arXiv. Our analysis only includes papers that were eventually accepted for publication at top-tier CS conferences, and were posted on arXiv either before or after the acceptance notification.

We observe that papers submitted to arXiv before acceptance have, on average, 65% more citations in the following year compared to papers submitted after. We note that this finding is not causal, and discuss possible next steps.

URL : https://arxiv.org/abs/1805.05238

The evolving preprint landscape: Introductory report for the Knowledge Exchange working group on preprints

Authors : Jonathan Tennant, Serge Bauin, Sarah James, Juliane Kant

In 1961, the USA National Institutes of Health (NIH) launched a program called Information Exchange Groups, designed for the circulation of biological preprints, but this shut down in 1967 (Confrey, 1996; Cobb, 2017).

In 1991, the arXiv repository was launched for physics, computer science, and mathematics, which is when preprints (or ‘e-prints’) began to increase in popularity and attention (Wikipedia ArXiv#History; Jackson, 2002). The Social Sciences Research Network (SSRN) was launched in 1994, and in 1997 Research Papers in Economics (Wikipedia RePEc) was launched.

In 2008, the research network platforms Academia.edu and ResearchGate were both launched and allowed sharing of research papers at any stage. In 2013, two new biological preprint servers were launched, bioRxiv (by Cold Spring Harbor Laboratory) and PeerJ Preprints (by PeerJ) (Wikipedia BioRxiv; Wikipedia PeerJ).

Between these major ongoing initiatives came various, less successful attempts to launch preprint servers, including Nature Precedings (which folded in April 2012) and Netprints from the British Medical Journal (Wikipedia Nature Precedings; BMJ, 1999).

Now, a range of innovative services, organisations, and platforms are rapidly developing around preprints, prompting this overview of the present ecosystem on behalf of Knowledge Exchange.

DOI : https://dx.doi.org/10.17605/OSF.IO/796TU

The once and future library: the role of the (national) library in supporting research

Author: Torsten Reimer

The global research environment is changing rapidly and with it the role of libraries in facilitating research. Taking the British Library as an example, this article provides a situational analysis of the challenges research libraries face in this context.

It outlines a new, or at least modified, role for research libraries, taking the emerging research services strategy of the British Library and its ‘Everything Available’ change management portfolio as an example.

It argues that if libraries want to keep adding value to the research process, they need to shift their thinking from focusing on local collections to contributing to a global knowledge environment – in a persistent and open fashion.

DOI : http://doi.org/10.1629/uksg.409

Peer-review under review – A statistical study on proposal ranking at ESO. Part I: the pre-meeting phase

Author : Ferdinando Patat

Peer review is the most common mechanism in place for assessing requests for resources in a large variety of scientific disciplines. One of the strongest criticisms of this paradigm is the limited reproducibility of the process, especially at heavily oversubscribed facilities. In this and in a subsequent paper we address this specific aspect in a quantitative way, through a statistical study on proposal ranking at the European Southern Observatory.

For this purpose we analysed a sample of about 15,000 proposals, submitted by more than 3,000 Principal Investigators over 8 years. The proposals were reviewed by more than 500 referees, who assigned over 140,000 grades in about 200 panel sessions.

After providing a detailed analysis of the statistical properties of the sample, the paper presents a heuristic model based on these findings, which is then used to provide quantitative estimates of the reproducibility of the pre-meeting process.

On average, about one third of the proposals ranked in the top quartile by one referee are ranked in the same quartile by any other referee of the panel. A similar value is observed for the bottom quartile.

In the central quartiles, the agreement fractions are only marginally above the value expected for a fully aleatory process (25%). The agreement fraction between two panels composed of 6 referees is 55 ± 5% (50% confidence level) for the top and bottom quartiles.

The corresponding fraction for the central quartiles is 33 ± 5%. The model predictions are confirmed by the results obtained from bootstrapping the data for sub-panels composed of 3 referees, and are fully consistent with the NIPS experiment. The post-meeting phase will be presented and discussed in a forthcoming paper.
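
The 25% baseline quoted for a fully aleatory process can be checked with a quick simulation; this is a sketch of the null model only, not ESO's actual analysis or data:

```python
import numpy as np

rng = np.random.default_rng(2)
n_prop, quart, n_trials = 1000, 250, 200  # proposals, quartile size, trials

fracs = []
for _ in range(n_trials):
    # Two referees grading at random: each referee's top quartile is an
    # independent, uniformly random subset of the proposals.
    top_a = set(rng.choice(n_prop, quart, replace=False))
    top_b = set(rng.choice(n_prop, quart, replace=False))
    # Fraction of referee A's top quartile also ranked top-quartile by B
    fracs.append(len(top_a & top_b) / quart)

print(f"mean agreement fraction: {np.mean(fracs):.3f}")  # close to 0.25
```

The observed one-third agreement in the extreme quartiles thus sits only modestly above what purely random grading would produce, which is the paper's reproducibility concern in a nutshell.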

URL : https://arxiv.org/abs/1805.06981