A multidimensional perspective on the citation impact of scientific publications

Authors : Yi Bu, Ludo Waltman, Yong Huang

The citation impact of scientific publications is usually seen as a one-dimensional concept. We introduce a three-dimensional perspective on the citation impact of publications. In addition to the level of citation impact, quantified by the number of citations received by a publication, we also conceptualize and operationalize the depth and dependence of citation impact.

This enables us to make a distinction between publications that have a deep impact concentrated in one specific field of research and publications that have a broad impact scattered over different research fields.

It also allows us to distinguish between publications that are strongly dependent on earlier work and publications that make a more independent scientific contribution.
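The paper operationalizes depth and dependence using the citation relations among a publication's citing works; as a much simpler illustration of the level/depth idea only, one could proxy depth by how concentrated a publication's citations are in a single field. The function below is a hypothetical sketch under that assumption, not the measure used in the paper.

```python
from collections import Counter

def citation_level_and_depth(citing_fields):
    """Illustrative proxy only: 'level' is the citation count and
    'depth' is the share of citations coming from the single most
    frequent citing field (deep impact is concentrated in one field,
    broad impact is scattered over several)."""
    level = len(citing_fields)
    if level == 0:
        return 0, 0.0
    counts = Counter(citing_fields)
    depth = max(counts.values()) / level
    return level, depth

# A publication cited mostly from within one field -> deep impact.
print(citation_level_and_depth(
    ["scientometrics"] * 8 + ["economics", "sociology"]))  # (10, 0.8)
```

A broadly cited publication would instead yield a depth value close to 1 divided by the number of citing fields.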

We present a large-scale empirical analysis of the level, depth, and dependence of the citation impact of publications. In addition, we report a case study focusing on publications in the field of scientometrics.

Our three-dimensional citation impact framework provides a more detailed understanding of the citation impact of a publication than a traditional one-dimensional perspective.

URL : https://arxiv.org/abs/1901.09663

Systematic analysis of agreement between metrics and peer review in the UK REF

Authors : Vincent Traag, Ludo Waltman

When performing a national research assessment, some countries rely on citation metrics whereas others, such as the UK, primarily use peer review. The influential Metric Tide report found low agreement between metrics and peer review in the UK Research Excellence Framework (REF).

However, earlier studies observed much higher agreement between metrics and peer review in the REF and argued in favour of using metrics. This shows that there is considerable ambiguity in the discussion on agreement between metrics and peer review.

We provide clarity in this discussion by considering four important points: (1) the level of aggregation of the analysis; (2) the use of either a size-dependent or a size-independent perspective; (3) the suitability of different measures of agreement; and (4) the uncertainty in peer review.

In the context of the REF, we argue that agreement between metrics and peer review should be assessed at the institutional level rather than at the publication level. Both a size-dependent and a size-independent perspective are relevant in the REF.

The interpretation of correlations may be problematic; as an alternative, we therefore use measures of agreement that are based on the absolute or relative differences between metrics and peer review.

To get an idea of the uncertainty in peer review, we rely on a model to bootstrap peer review outcomes. We conclude that particularly in Physics, Clinical Medicine, and Public Health, metrics agree quite well with peer review and may offer an alternative to peer review.
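The paper relies on a specific model to bootstrap peer review outcomes; as a generic illustration of the bootstrap idea only, the sketch below resamples hypothetical publication-level review scores with replacement to estimate the uncertainty in an institution's average score. All names and numbers here are assumptions for illustration, not the paper's model.

```python
import random

def bootstrap_mean_interval(scores, n_boot=1000, seed=42):
    """Generic bootstrap: resample the scores with replacement many
    times and return a 95% percentile interval for the mean score."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_boot):
        sample = [rng.choice(scores) for _ in scores]
        means.append(sum(sample) / len(sample))
    means.sort()
    return means[int(0.025 * n_boot)], means[int(0.975 * n_boot)]

# Hypothetical peer-review scores (e.g. REF-style star ratings).
scores = [4, 3, 4, 2, 3, 4, 4, 3, 2, 4]
low, high = bootstrap_mean_interval(scores)
print(f"95% interval for the mean score: [{low:.2f}, {high:.2f}]")
```

The width of this interval gives a rough sense of how much a peer-review outcome could vary, which is the kind of uncertainty the agreement analysis has to account for.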

URL : https://arxiv.org/abs/1808.03491

A Large-Scale Analysis of Impact Factor Biased Journal Self-Citations

Authors : Caspar Chorus, Ludo Waltman

Based on three decades of citation data from across the fields of science, we study trends in impact-factor-biased self-citations of scholarly journals, using a purpose-built, easy-to-use citation-based measure.

Our measure is given by the ratio between i) the relative share of journal self-citations to papers published in the last two years, and ii) the relative share of journal self-citations to papers published in preceding years.

A ratio higher than one suggests that a journal’s impact factor is disproportionately affected (inflated) by self-citations. Using recently reported survey data, we show that there is a relation between high values of our proposed measure and coercive journal self-citation malpractices.
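Under one plausible reading of the measure described above, the two "relative shares" are the self-citation share among citations to papers from the two-year impact-factor window and the corresponding share among citations to older papers. The sketch below computes the ratio under that assumption, with hypothetical counts; the paper's exact operationalization may differ.

```python
def self_citation_bias_ratio(self_recent, total_recent,
                             self_older, total_older):
    """Ratio of the self-citation share among citations to papers from
    the two-year impact-factor window to the self-citation share among
    citations to older papers. Values above one suggest impact-factor-
    biased self-citation."""
    share_recent = self_recent / total_recent
    share_older = self_older / total_older
    return share_recent / share_older

# Hypothetical journal: 30 of 200 citations to recent papers are
# self-citations, versus 50 of 1000 citations to older papers.
r = self_citation_bias_ratio(30, 200, 50, 1000)
print(round(r, 1))  # 3.0
```

Here the journal cites its own recent papers three times as intensively as its older ones, the pattern the measure flags as impact-factor inflation.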

We use our measure to perform a large-scale analysis of impact-factor-biased journal self-citations. Our main empirical result is that the share of journals for which our measure takes a (very) high value remained stable from the 1980s to the early 2000s, but has since risen strongly in all fields of science.

This time span corresponds well with the growing obsession with the impact factor as a journal evaluation measure over the last decade.

Taken together, this suggests a trend of increasingly pervasive journal self-citation malpractices, with all due unwanted consequences such as inflated perceived importance of journals and biased journal rankings.

DOI : http://dx.doi.org/10.1371/journal.pone.0161021