Peer-review under review – A statistical study on proposal ranking at ESO. Part I: the pre-meeting phase

Author: Ferdinando Patat

Peer review is the most common mechanism in place for assessing requests for resources across a wide variety of scientific disciplines. One of the strongest criticisms of this paradigm is the limited reproducibility of the process, especially at heavily oversubscribed facilities. In this paper and a subsequent one, we address this specific aspect quantitatively, through a statistical study of proposal ranking at the European Southern Observatory.

For this purpose we analysed a sample of about 15000 proposals, submitted by more than 3000 Principal Investigators over 8 years. The proposals were reviewed by more than 500 referees, who assigned over 140000 grades in about 200 panel sessions.

After providing a detailed analysis of the statistical properties of the sample, the paper presents a heuristic model based on these findings, which is then used to provide quantitative estimates of the reproducibility of the pre-meeting process.

On average, about one third of the proposals ranked in the top quartile by one referee are ranked in the same quartile by any other referee of the panel. A similar value is observed for the bottom quartile.

In the central quartiles, the agreement fractions are only marginally above the value expected for a fully random process (25%). The agreement fraction between two panels composed of 6 referees is 55±5% (50% confidence level) for the top and bottom quartiles.
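As a back-of-the-envelope check (not part of the study's analysis; the proposal count and seed are illustrative), the 25% baseline for a fully random process can be reproduced with a short simulation: two referees who rank proposals independently at random agree on top-quartile membership about a quarter of the time.

```python
import random

def quartile_agreement(n_proposals=10000, seed=42):
    """Simulate two referees ranking the same proposals independently at
    random, and return the fraction of proposals placed in the top quartile
    by referee A that also land in referee B's top quartile."""
    rng = random.Random(seed)
    q = n_proposals // 4  # size of one quartile

    # Each referee's ranking is an independent random permutation.
    rank_a = list(range(n_proposals))
    rank_b = list(range(n_proposals))
    rng.shuffle(rank_a)
    rng.shuffle(rank_b)

    # Proposals whose rank falls below q are in that referee's top quartile.
    top_a = {p for p, r in enumerate(rank_a) if r < q}
    top_b = {p for p, r in enumerate(rank_b) if r < q}

    return len(top_a & top_b) / q

print(f"random-baseline agreement: {quartile_agreement():.3f}")  # close to 0.25
```

The observed one-third agreement in the extreme quartiles therefore sits only modestly above this chance level, which is the core of the reproducibility argument.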

The corresponding fraction for the central quartiles is 33±5%. The model predictions are confirmed by bootstrapping the data for sub-panels composed of 3 referees, and are fully consistent with the NIPS experiment. The post-meeting phase will be presented and discussed in a forthcoming paper.


Dissertation-to-Book Publication Patterns Among a Sample of R1 Institutions

Authors: Karen Rupp-Serrano, Jen Waller


A common concern about openly available electronic theses and dissertations is that their “openness” will prevent graduate student authors from publishing their work commercially in the future. A handful of studies have explored aspects of this topic; this study reviewed dissertation-to-book publication patterns at Carnegie Classification R1 academic institutions.


This study analyzed over 23,000 dissertations from twelve U.S. universities to determine how frequently dissertations were subsequently published as books matching the original dissertation in pagination, chapters, and subject matter.

WorldCat and several other resources were used to make publication determinations.


Across the sample set, a very small percentage of dissertations were published as books that matched the original dissertation on pagination, chapters, and subject matter. The average number of years for dissertations in the study to be published as books was determined for broad subject categories and for select academic disciplines.

Results were compared across public and private institutions, and books that were self-published or published by questionable organizations were identified.


Dissertation-to-book publication occurs primarily in the social sciences, humanities, and arts. For dissertations whose authors are actively working to publish them as books, the commonly offered 6- to 24-month embargo periods appear sufficient, provided that extensions or renewals remain available.


This study has implications for librarians providing services to graduate students, faculty advisors, and graduate colleges/schools in regard to dissertation embargo lengths, self-publishing, and what we have termed questionable publishers, as these areas continue to provide opportunities for librarians to educate these stakeholders.

URL: Dissertation-to-Book Publication Patterns Among a Sample of R1 Institutions



Prepublication disclosure of scientific results: Norms, competition, and commercial orientation

Authors: Jerry G. Thursby, Carolin Haeussler, Marie C. Thursby, Lin Jiang

On the basis of a survey of 7103 active faculty researchers in nine fields, we examine the extent to which scientists disclose prepublication results and, when they do, why. Except in two fields, more scientists disclose results before publication than not, but there is significant variation in their reasons for disclosing, in the frequency of such disclosure, and in the withholding of crucial results when making public presentations.

They disclose results for feedback and credit and to attract collaborators. Particularly in formulaic fields, scientists disclose to attract new researchers to the field independent of collaboration and to deter others from working on their exact problem.

A probability model shows that 70% of field variation in disclosure is related to differences in respondent beliefs about norms, competition, and commercialization. Our results suggest new research directions—for example, do the problems addressed or the methods of scientific production themselves shape norms and competition?

Are the levels we observe optimal or simply path-dependent? What is the interplay of norms, competition, and commercialization in disclosure and the progress of science?

URL: Prepublication disclosure of scientific results: Norms, competition, and commercial orientation


Measuring Scientific Broadness

Authors: Tom Price, Sabine Hossenfelder

Who has not read letters of recommendation that comment on a student's “broadness” and wondered what to make of it?

Here we propose a way to quantify scientific broadness through a semantic analysis of researchers' publications. We apply our methods to papers on the open-access server and report our findings.


Opium in science and society: Numbers

Authors: Julian N. Marewski, Lutz Bornmann

In science and beyond, numbers are omnipresent when it comes to justifying different kinds of judgments. Which scientific author, hiring committee member, or advisory board panelist has not been confronted with page-long “publication manuals”, “assessment reports”, or “evaluation guidelines” calling for p-values, citation rates, h-indices, or other statistics to motivate judgments about the “quality” of findings, applicants, or institutions?

Yet many of those relying on and calling for statistics do not seem to understand what information those numbers can actually convey, and what they cannot. Focusing on the uninformed use of bibliometrics as a worrisome outgrowth of the increasing quantification of science and society, we place the abuse of numbers into larger historical contexts and trends.

These are characterized by a technology-driven bureaucratization of science, obsessions with control and accountability, and mistrust in human intuitive judgment. The ongoing digital revolution increases those trends.

We call for bringing sanity back into scientific judgment exercises. Despite all the number crunching, many judgments – be they about scientific output, scientists, or research institutions – will be neither unambiguous, uncontroversial, nor testable by external standards, nor can they be otherwise validated or objectified.

Under uncertainty, good human judgment remains indispensable, for the better, but it can be aided, we conclude, by a toolbox of simple judgment tools called heuristics.

Research evaluators are in the best position to use those heuristics when they (1) have expertise in the area of research being evaluated, (2) have profound knowledge of bibliometrics, and (3) are statistically literate.


Open and inclusive collaboration in science: A framework

Authors: Qian Dai, Eunjung Shin, Carthage Smith

Open science can be variously defined. In some communities it refers principally to open access to scientific publications; for others it includes open access to research data; and for others still it extends to opening up the processes of academic research to engage all interested civil society stakeholders.

The absence of a common understanding of what is, and isn't, included in open science creates confusion in discussions across these different communities. It is potentially holding back efforts to develop effective policies for promoting open science at the international level.

This paper builds on the limited conceptual work published to date and proposes a broad framework for open science. The framework is not meant to be prescriptive but should help different communities and policy makers to decide on their own priorities within the open science space and to better visualise how these priorities link to different stages of the scientific process and to different actors.

Such a framework can also be useful in considering how best to incentivise and measure various aspects of open science. Digitalisation is fundamentally changing science, and the paper lays out some of the opportunities, risks and major policy challenges associated with these changes.


Predatory Publishers using Spamming Strategies for Call for Papers and Review Requests: A Case Study

Author: Alexandru-Ionut Petrisor

Spam e-mail and calls from predatory publishers are very similar in purpose: both are deceptive and produce material losses. Moreover, predatory publishers show evolving strategies for luring potential victims as their numbers increase. In an effort to help researchers defend against this constant menace, this article aims to identify a set of features common to spam e-mail and calls from predatory publishers.

The methodology consisted of a comparative analysis of data found on the Internet and e-mails received at several addresses during December 2017 – January 2018. The results indicate that the most prominent common features are a concealed, fake, or disguised identity of the sender and/or of the message; mass mailing; a missing or useless opt-out option; and an obvious commercial character.

Moreover, the location of predatory publishers is well disguised; analysis of their real locations, found using web-based tools, suggests joint management, or at least concerted action, by several publishers, and raises additional questions about the reasons for masking the true location.

From a theoretical standpoint, the results show once again that predatory publishers are part of a worldwide scam and should be ‘convicted’ in a similar way, including by means of legal action. From a practical perspective, distinct recommendations are offered for researchers, policy makers, libraries, and future research.

URL: Predatory Publishers using Spamming Strategies for Call for Papers and Review Requests: A Case Study

Alternative location: