Does bibliometric research confer legitimacy to research assessment practice? A sociological study of reputational control, 1972-2016

Authors : Arlette Jappe, David Pithan, Thomas Heinze

The use of bibliometric measures in the evaluation of research has increased considerably, drawing on expertise from the growing research field of evaluative citation analysis (ECA).

However, mounting criticism of such metrics suggests that the professionalization of bibliometric expertise remains contested. This paper investigates why impact metrics, such as the journal impact factor and the h-index, proliferate even though their legitimacy as a means of professional research assessment is questioned.

Our analysis is informed by two relevant sociological theories: Andrew Abbott’s theory of professions and Richard Whitley’s theory of scientific work. We connect these complementary concepts to demonstrate that ECA has so far failed to provide scientific authority for professional research assessment.

This argument is based on an empirical investigation of the extent of reputational control in the relevant research area. Using three measures of reputational control that are computed from longitudinal inter-organizational networks in ECA (1972–2016), we show that peripheral and isolated actors contribute the same number of novel bibliometric indicators as central actors. In addition, the share of newcomers to the academic sector has remained high.

These findings demonstrate that recent methodological debates in ECA have not been accompanied by the formation of an intellectual field in the sociological sense of a reputational organization.

Therefore, we conclude that a growing gap exists between an academic sector with little capacity for collective action and increasing demand for routine performance assessment by research organizations and funding agencies.

This gap has been filled by database providers. By selecting and distributing research metrics, these commercial providers have gained a powerful role in defining de-facto standards of research excellence without being challenged by expert authority.

DOI : https://doi.org/10.1371/journal.pone.0199031

Opium in science and society: Numbers

Authors : Julian N. Marewski, Lutz Bornmann

In science and beyond, numbers are omnipresent when it comes to justifying different kinds of judgments. Which scientific author, hiring committee member, or advisory board panelist has not been confronted with page-long “publication manuals”, “assessment reports”, or “evaluation guidelines” calling for p-values, citation rates, h-indices, or other statistics in order to motivate judgments about the “quality” of findings, applicants, or institutions?

Yet many of those relying on and calling for statistics do not even seem to understand what information those numbers can actually convey, and what they cannot. Focusing on the uninformed usage of bibliometrics as a worrisome outgrowth of the increasing quantification of science and society, we place the abuse of numbers into larger historical contexts and trends.

These are characterized by a technology-driven bureaucratization of science, obsessions with control and accountability, and mistrust in human intuitive judgment. The ongoing digital revolution reinforces those trends.

We call for bringing sanity back into scientific judgment exercises. Despite all the number crunching, many judgments – be it about scientific output, scientists, or research institutions – will be neither unambiguous, uncontroversial, nor testable by external standards, nor can they be otherwise validated or objectified.

Under uncertainty, good human judgment remains, for the better, indispensable; but it can be aided, we conclude, by a toolbox of simple judgment tools called heuristics.

In the best position to use those heuristics are research evaluators (1) who have expertise in the to-be-evaluated area of research, (2) who have profound knowledge of bibliometrics, and (3) who are statistically literate.

URL : https://arxiv.org/abs/1804.11210

How to counter undeserving authorship

Authors : Stefan Eriksson, Tove Godskesen, Lars Andersson, Gert Helgesson

The average number of authors listed on contributions to scientific journals has increased considerably over time. While this may be accounted for by the increased complexity of much research and a corresponding need for extended collaboration, several studies suggest that the prevalence of undeserving authors on research papers is alarming.

In this paper, a combined qualitative and quantitative approach is suggested to reduce the number of undeserving authors on academic papers: (1) ask scholars who apply for positions to explain the basics of a random selection of their co-authored papers, and (2) in bibliometric measurements, divide publications and citations by the number of authors.
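As a rough illustration of suggestion (2), here is a minimal Python sketch of such fractional counting, assuming the simplest scheme in which each paper and its citations are divided equally among its authors; the sample data are invented for the example.

```python
# A minimal sketch of suggestion (2): fractional counting, in which each
# paper contributes 1/n_authors to an author's publication count and its
# citations are divided by the number of authors. The data are invented.

def fractional_counts(papers):
    """papers: list of dicts with 'n_authors' and 'citations' keys."""
    pub_credit = sum(1 / p["n_authors"] for p in papers)
    citation_credit = sum(p["citations"] / p["n_authors"] for p in papers)
    return pub_credit, citation_credit

papers = [
    {"n_authors": 1, "citations": 12},
    {"n_authors": 4, "citations": 40},
    {"n_authors": 10, "citations": 100},
]

pubs, cites = fractional_counts(papers)
print(f"fractional publications: {pubs:.2f}")  # 1.00 + 0.25 + 0.10 = 1.35
print(f"fractional citations: {cites:.2f}")    # 12 + 10 + 10 = 32.00
```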

DOI : http://doi.org/10.1629/uksg.395

The counting house: measuring those who count. Presence of Bibliometrics, Scientometrics, Informetrics, Webometrics and Altmetrics in the Google Scholar Citations, ResearcherID, ResearchGate, Mendeley & Twitter

Authors : Alberto Martin-Martin, Enrique Orduna-Malea, Juan M. Ayllon, Emilio Delgado Lopez-Cozar

Following in the footsteps of the model of scientific communication, which has recently gone through a metamorphosis (from the Gutenberg galaxy to the Web galaxy), a change in the model and methods of scientific evaluation is also taking place.

A set of new scientific tools now provides a variety of indicators that measure all actions and interactions among scientists in the digital space, making new aspects of scientific communication visible.

In this work we present a method for capturing the structure of an entire scientific community (the Bibliometrics, Scientometrics, Informetrics, Webometrics, and Altmetrics community) and the main agents that are part of it (scientists, documents, and sources) through the lens of Google Scholar Citations.

Additionally, we compare these author portraits to the ones offered by other profile or social platforms currently used by academics (ResearcherID, ResearchGate, Mendeley, and Twitter), in order to test their degree of use, completeness, reliability, and the validity of the information they provide.

A sample of 814 authors (researchers in Bibliometrics with a public profile created in Google Scholar Citations) was subsequently searched for in the other platforms, and the main indicators computed by each of them were collected.

The data collection was carried out in September 2015. Spearman correlations were applied to these indicators (31 in total), and a Principal Component Analysis was carried out in order to reveal the relationships among metrics and platforms, as well as the possible existence of metric clusters.
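As a rough illustration of this analysis step, the following Python sketch applies a Spearman correlation and a PCA to an indicator matrix; the matrix here is synthetic and the indicator names are placeholders, not the 31 indicators actually collected in the study.

```python
# A minimal sketch of the analysis described above: Spearman correlations
# between indicators, followed by a PCA to look for clusters of related
# metrics. The indicator matrix is synthetic; in the study, 31 indicators
# were collected for 814 authors across the five platforms.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# rows = authors, columns = indicators (e.g. citations, h-index, followers, ...)
indicators = pd.DataFrame(
    rng.poisson(lam=50, size=(814, 31)),
    columns=[f"indicator_{i}" for i in range(1, 32)],
)

# Spearman (rank) correlation between every pair of indicators
spearman_matrix = indicators.corr(method="spearman")

# PCA on standardized values; the loadings show which metrics vary together
pca = PCA(n_components=2)
pca.fit(StandardScaler().fit_transform(indicators))
loadings = pd.DataFrame(pca.components_.T, index=indicators.columns,
                        columns=["PC1", "PC2"])
print(pca.explained_variance_ratio_)
print(loadings.head())
```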

URL : https://arxiv.org/abs/1602.02412

Evaluation of research activities of universities of Ukraine and Belarus: a set of bibliometric indicators and its implementation

Authors : Vladimir Lazarev, Serhii Nazarovets, Alexey Skalaban

Monitoring the bibliometric indicators used in university rankings is considered part of a university library's activities. To carry out a comparative assessment of the research activities of the universities of Ukraine and Belarus, the authors introduced a set of bibliometric indicators.

A comparative assessment of the research activities of the corresponding universities was carried out, and data on the leading universities are presented. One of the proposed indicators is sensitive to rapid changes in a university's research activity, and the other is normalized across fields of science; these properties give the proposed set an advantage over the one used in practice in the corresponding national rankings.
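The abstract does not define the indicators themselves, so the following Python sketch only illustrates the general idea of a field-normalized citation indicator of the kind mentioned; the field baselines and publication data are hypothetical.

```python
# A generic sketch of a field-normalized citation indicator: each paper's
# citation count is divided by an (assumed) average citation rate for its
# field, so that universities with different field profiles can be compared.
# The field baselines and publication data below are hypothetical.

FIELD_BASELINE = {"physics": 12.0, "economics": 6.0, "history": 2.0}

def mean_normalized_citation_score(publications):
    """publications: list of (field, citations) tuples for one university."""
    normalized = [cites / FIELD_BASELINE[field] for field, cites in publications]
    return sum(normalized) / len(normalized)

pubs = [("physics", 24), ("economics", 3), ("history", 4)]
print(round(mean_normalized_citation_score(pubs), 2))  # (2.0 + 0.5 + 2.0) / 3 = 1.5
```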

URL : https://arxiv.org/abs/1711.02059

Improving the Measurement of Scientific Success by Reporting a Self-Citation Index

Authors : Justin W. Flatt, Alessandro Blasimme, Effy Vayena

Who among the many researchers is most likely to usher in a new era of scientific breakthroughs? This question is of critical importance to universities, funding agencies, and scientists who must compete under great pressure for limited amounts of research money.

Citations are currently the primary means of evaluating one’s scientific productivity and impact, and while they are often helpful, there is growing concern over the use of excessive self-citations to help build sustainable careers in science.

Incorporating superfluous self-citations in one’s writings requires little effort, receives virtually no penalty, and can boost, albeit artificially, scholarly impact and visibility, which are both necessary for moving up the academic ladder.

Such behavior is likely to increase, given the recent explosive rise in popularity of web-based citation analysis tools (Web of Science, Google Scholar, Scopus, and Altmetric) that rank research performance.

Here, we argue for new metrics centered on transparency to help curb this form of self-promotion that, if left unchecked, can have a negative impact on the scientific workforce, the way that we publish new knowledge, and ultimately the course of scientific advance.
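One transparency-centered metric of this kind is an index computed from self-citations alone. The Python sketch below assumes a definition analogous to the h-index but restricted to self-citations, which may or may not match the exact formulation the authors propose; the citation counts are invented.

```python
# A sketch of a self-citation index, assumed here to mirror the h-index but
# restricted to self-citations: the largest s such that s of the author's
# papers each received at least s citations from the author's own later work.
# Whether this matches the authors' exact proposal is an assumption.

def self_citation_index(self_citations_per_paper):
    """Largest s such that s papers each received at least s self-citations."""
    ranked = sorted(self_citations_per_paper, reverse=True)
    s = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            s = rank
        else:
            break
    return s

print(self_citation_index([9, 5, 4, 4, 1, 0]))  # -> 4
```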

URL : http://www.mdpi.com/2304-6775/5/3/20

DuEPublicA: Automated bibliometric reports based on the University Bibliography and external citation data

Author : Eike T. Spielberg

This paper describes a web application to generate bibliometric reports based on the University Bibliography and the Scopus citation database. Our goal is to offer an alternative to easy-to-prepare automated reports from commercial sources.

These often suffer from incomplete coverage of publication types and difficult attribution to people, institutes, and universities. Using our University Bibliography as the source for selecting relevant publications solves both problems.

As it is a local system, maintained and set up by the library, we can include every publication type we want. As the University Bibliography is linked to the identity management system of the university, it enables an easy selection of publications for people, institutes and the whole university.

The program is designed as a web application, which collects publications from the University Bibliography, enriches them with citation data from Scopus and performs three kinds of analyses:
1. A general analysis (number and type of publications, publications per year etc.),
2. A citation analysis (average citations per publication, h-index, uncitedness; see the sketch after this list), and
3. An affiliation analysis (home and partner institutions).
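As a rough illustration of the citation analysis step, the following Python sketch computes average citations per publication, the h-index, and the share of uncited publications from a list of citation counts; in the application described, these counts would come from Scopus, whereas here they are invented.

```python
# A minimal sketch of the citation analysis (step 2): average citations per
# publication, h-index, and uncitedness. Citation counts are illustrative;
# the application described above would fetch them from Scopus.

def citation_analysis(citation_counts):
    n = len(citation_counts)
    average = sum(citation_counts) / n
    uncitedness = sum(1 for c in citation_counts if c == 0) / n
    # h-index: largest h such that h publications have at least h citations each
    ranked = sorted(citation_counts, reverse=True)
    h_index = sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)
    return {"average_citations": average, "h_index": h_index,
            "uncitedness": uncitedness}

print(citation_analysis([25, 10, 8, 3, 0, 0]))
# average_citations ≈ 7.67, h_index = 3, uncitedness ≈ 0.33
```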

We tried to keep the code highly generic, so that the inclusion of other databases (Web of Science, IEEE) or other bibliographies is easily feasible. The application is written in Java and XML and uses XSL transformations and LaTeX to generate bibliometric reports as HTML pages and in pdf format.

Warnings and alerts are automatically included if the citation analysis covers only a small fraction of the publications from the University Bibliography. In addition, we describe a small tool that helps to collect author details for an analysis.

URL : http://journal.code4lib.org/articles/12549