The Two-Way Street of Open Access Journal Publishing: Flip It and Reverse It

Authors : Lisa Matthias, Najko Jahn, Mikael Laakso

As open access (OA) is often perceived as the end goal of scholarly publishing, much research has focused on flipping subscription journals to an OA model. Focusing on what can happen after the presumed finish line, this study identifies journals that have converted from OA to a subscription model, and places these “reverse flips” within the greater context of scholarly publishing.

In particular, we examine specific journal descriptors, such as access mode, publisher, subject area, society affiliation, article volume, and citation metrics, to deepen our understanding of reverse flips.

Our results show that at least 152 actively publishing journals have reverse-flipped since 2005, suggesting that this phenomenon is not limited to a few marginal outliers but instead constitutes a common pattern within scholarly publishing.

Notably, we found that 62% of reverse flips (N = 95) had not been born-OA journals, but had been founded as subscription journals, and hence have experienced a three-stage transformation from closed to open to closed.

We argue that reverse flips present a unique perspective on OA, and that further research would greatly benefit from enhanced data and tools for identifying such cases.

URL : The Two-Way Street of Open Access Journal Publishing: Flip It and Reverse It 

DOI : https://doi.org/10.3390/publications7020023

Does bibliometric research confer legitimacy to research assessment practice? A sociological study of reputational control, 1972-2016

Authors : Arlette Jappe, David Pithan, Thomas Heinze

The use of bibliometric measures in the evaluation of research has increased considerably based on expertise from the growing research field of evaluative citation analysis (ECA).

However, mounting criticism of such metrics suggests that the professionalization of bibliometric expertise remains contested. This paper investigates why impact metrics, such as the journal impact factor and the h-index, proliferate even though their legitimacy as a means of professional research assessment is questioned.

Our analysis is informed by two relevant sociological theories: Andrew Abbott’s theory of professions and Richard Whitley’s theory of scientific work. These complementary concepts are connected in order to demonstrate that ECA has failed so far to provide scientific authority for professional research assessment.

This argument is based on an empirical investigation of the extent of reputational control in the relevant research area. Using three measures of reputational control that are computed from longitudinal inter-organizational networks in ECA (1972–2016), we show that peripheral and isolated actors contribute the same number of novel bibliometric indicators as central actors. In addition, the share of newcomers to the academic sector has remained high.

These findings demonstrate that recent methodological debates in ECA have not been accompanied by the formation of an intellectual field in the sociological sense of a reputational organization.

Therefore, we conclude that a growing gap exists between an academic sector with little capacity for collective action and increasing demand for routine performance assessment by research organizations and funding agencies.

This gap has been filled by database providers. By selecting and distributing research metrics, these commercial providers have gained a powerful role in defining de-facto standards of research excellence without being challenged by expert authority.

URL : Does bibliometric research confer legitimacy to research assessment practice? A sociological study of reputational control, 1972-2016

DOI : https://doi.org/10.1371/journal.pone.0199031

Opium in science and society: Numbers

Authors : Julian N. Marewski, Lutz Bornmann

In science and beyond, numbers are omnipresent when it comes to justifying different kinds of judgments. Which scientific author, hiring committee member, or advisory board panelist has not been confronted with page-long “publication manuals”, “assessment reports”, or “evaluation guidelines” calling for p-values, citation rates, h-indices, or other statistics to motivate judgments about the “quality” of findings, applicants, or institutions?

Yet, many of those relying on and calling for statistics do not even seem to understand what information those numbers can actually convey, and what they cannot. Focusing on the uninformed usage of bibliometrics as a worrisome outgrowth of the increasing quantification of science and society, we place the abuse of numbers into larger historical contexts and trends.

These are characterized by a technology-driven bureaucratization of science, obsessions with control and accountability, and mistrust in human intuitive judgment. The ongoing digital revolution reinforces these trends.

We call for bringing sanity back into scientific judgment exercises. Despite all the number crunching, many judgments – be it about scientific output, scientists, or research institutions – will be neither unambiguous, uncontroversial, nor testable by external standards, nor can they be otherwise validated or objectified.

Under uncertainty, good human judgment remains, for the better, indispensable, but it can be aided, so we conclude, by a toolbox of simple judgment tools, called heuristics.

In the best position to use those heuristics are research evaluators (1) who have expertise in the to-be-evaluated area of research, (2) who have profound knowledge in bibliometrics, and (3) who are statistically literate.

URL : https://arxiv.org/abs/1804.11210

How to counter undeserving authorship

Authors : Stefan Eriksson, Tove Godskesen, Lars Andersson, Gert Helgesson

The average number of authors listed on contributions to scientific journals has increased considerably over time. While this may be accounted for by the increased complexity of much research and a corresponding need for extended collaboration, several studies suggest that the prevalence of non-deserving authors on research papers is alarming.

In this paper a combined qualitative and quantitative approach is suggested to reduce the number of undeserving authors on academic papers: 1) ask scholars who apply for positions to explain the basics of a random selection of their co-authored papers, and 2) in bibliometric measurements, divide publications and citations by the number of authors.
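
The second suggestion amounts to fractional (author-normalized) counting. Below is a minimal sketch of the arithmetic, using purely illustrative numbers rather than any data from the paper:

```python
# Fractional (author-normalized) counting: each publication and each of its
# citations is divided by the number of listed authors, so adding co-authors
# no longer inflates an individual's totals.

# Illustrative data only: (number of authors on the paper, citations received).
papers = [
    (3, 12),
    (7, 40),
    (1, 5),
]

fractional_publications = sum(1 / n_authors for n_authors, _ in papers)
fractional_citations = sum(cites / n_authors for n_authors, cites in papers)

print(f"Whole counts:      {len(papers)} publications, "
      f"{sum(c for _, c in papers)} citations")
print(f"Fractional counts: {fractional_publications:.2f} publications, "
      f"{fractional_citations:.2f} citations")
```

Dividing both publications and citations by the number of authors removes the incentive to add honorary co-authors, since every extra name shrinks each contributor's share.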

URL : How to counter undeserving authorship

DOI : http://doi.org/10.1629/uksg.395

The counting house: measuring those who count. Presence of Bibliometrics, Scientometrics, Informetrics, Webometrics and Altmetrics in the Google Scholar Citations, ResearcherID, ResearchGate, Mendeley & Twitter

Authors : Alberto Martín-Martín, Enrique Orduña-Malea, Juan M. Ayllón, Emilio Delgado López-Cózar

Following the recent metamorphosis of the model of scientific communication (from the Gutenberg galaxy to the Web galaxy), a change in the model and methods of scientific evaluation is also taking place.

A set of new scientific tools now provides a variety of indicators which measure all actions and interactions among scientists in the digital space, making new aspects of scientific communication emerge.

In this work we present a method for capturing the structure of an entire scientific community (the Bibliometrics, Scientometrics, Informetrics, Webometrics, and Altmetrics community) and the main agents that are part of it (scientists, documents, and sources) through the lens of Google Scholar Citations.

Additionally, we compare these author portraits to the ones offered by other profile or social platforms currently used by academics (ResearcherID, ResearchGate, Mendeley, and Twitter), in order to test their degree of use, completeness, reliability, and the validity of the information they provide.

A sample of 814 authors (researchers in Bibliometrics with a public profile created in Google Scholar Citations) was subsequently searched in the other platforms, collecting the main indicators computed by each of them.

The data collection was carried out in September 2015. Spearman correlations were computed among these indicators (31 in total), and a Principal Component Analysis was carried out in order to reveal the relationships among metrics and platforms, as well as the possible existence of metric clusters.
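
For readers who want to run a similar analysis on their own profile data, here is a minimal sketch of those two steps. The file name and column layout are placeholders, not the study's actual dataset; all that is assumed is a table with one row per author and one column per indicator.

```python
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Placeholder dataset: rows = authors, columns = indicators collected from the
# different platforms (e.g. GS citations, RG score, Mendeley readers, tweets).
indicators = pd.read_csv("author_indicators.csv", index_col="author")

# Pairwise Spearman rank correlations between all indicators.
spearman_matrix = indicators.corr(method="spearman")
print(spearman_matrix.round(2))

# Principal Component Analysis to look for clusters of related metrics;
# indicators are standardized first so that no single scale dominates.
scaled = StandardScaler().fit_transform(indicators.fillna(0))
pca = PCA(n_components=2)
components = pca.fit_transform(scaled)
print("Explained variance ratio:", pca.explained_variance_ratio_)
```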

URL : https://arxiv.org/abs/1602.02412

Evaluation of research activities of universities of Ukraine and Belarus: a set of bibliometric indicators and its implementation

Authors : Vladimir Lazarev, Serhii Nazarovets, Alexey Skalaban

Monitoring the bibliometric indicators used in university rankings is considered part of a university library's activities. In order to carry out a comparative assessment of the research activities of the universities of Ukraine and Belarus, the authors introduced a set of bibliometric indicators.

A comparative assessment of the research activities of the corresponding universities was carried out, and data on the leading universities are presented. The sensitivity of one of the indicators to rapid changes in universities' research activity, and the fact that the other is normalized across fields of science, give the proposed set an advantage over the set previously used in the corresponding national rankings.

URL : https://arxiv.org/abs/1711.02059

Improving the Measurement of Scientific Success by Reporting a Self-Citation Index

Authors : Justin W. Flatt, Alessandro Blasimme, Effy Vayena

Who among the many researchers is most likely to usher in a new era of scientific breakthroughs? This question is of critical importance to universities and funding agencies, as well as to scientists who must compete under great pressure for limited research funding.

Citations are the current primary means of evaluating one’s scientific productivity and impact, and while often helpful, there is growing concern over the use of excessive self-citations to help build sustainable careers in science.

Incorporating superfluous self-citations in one’s writings requires little effort, receives virtually no penalty, and can boost, albeit artificially, scholarly impact and visibility, which are both necessary for moving up the academic ladder.

Such behavior is likely to increase, given the recent explosive rise in popularity of web-based citation analysis tools (Web of Science, Google Scholar, Scopus, and Altmetric) that rank research performance.

Here, we argue for new metrics centered on transparency to help curb this form of self-promotion that, if left unchecked, can have a negative impact on the scientific workforce, the way that we publish new knowledge, and ultimately the course of scientific advance.
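
The paper argues for reporting a self-citation index alongside conventional citation counts. As a hedged illustration only: the definition sketched below (the share of incoming citations that come from papers sharing at least one author with the cited paper) is a simplification for demonstration purposes, not necessarily the exact index proposed by the authors.

```python
def self_citation_index(papers):
    """Share of incoming citations that are self-citations.

    `papers` is a list of dicts with two illustrative fields:
      - "authors": set of author names on the cited paper
      - "citing_authors": list of author sets, one per citing paper
    A citation counts as a self-citation if the citing and cited
    papers share at least one author.
    """
    total = self_cites = 0
    for paper in papers:
        for citing in paper["citing_authors"]:
            total += 1
            if paper["authors"] & set(citing):
                self_cites += 1
    return self_cites / total if total else 0.0


# Illustrative example: one of three citations shares an author with the cited paper.
example = [{
    "authors": {"A. Researcher", "B. Colleague"},
    "citing_authors": [{"A. Researcher"}, {"C. Other"}, {"D. Someone"}],
}]
print(f"Self-citation index: {self_citation_index(example):.2f}")  # 0.33
```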

URL : Improving the Measurement of Scientific Success by Reporting a Self-Citation Index

DOI : http://www.mdpi.com/2304-6775/5/3/20