Authors : Vincent Traag, Ludo Waltman
When performing national research assessments, some countries rely on citation metrics, whereas others, such as the UK, rely primarily on peer review. The influential Metric Tide report found low agreement between metrics and peer review in the UK Research Excellence Framework (REF).
However, earlier studies observed much higher agreement between metrics and peer review in the REF and argued in favour of using metrics. This shows that there is considerable ambiguity in the discussion on agreement between metrics and peer review.
We provide clarity in this discussion by considering four important points: (1) the level of aggregation of the analysis; (2) the use of either a size-dependent or a size-independent perspective; (3) the suitability of different measures of agreement; and (4) the uncertainty in peer review.
In the context of the REF, we argue that agreement between metrics and peer review should be assessed at the institutional level rather than at the publication level. Both a size-dependent and a size-independent perspective are relevant in the REF.
The interpretation of correlations may be problematic; as an alternative, we therefore use measures of agreement that are based on the absolute or relative differences between metrics and peer review.
To get an idea of the uncertainty in peer review, we rely on a model to bootstrap peer review outcomes. We conclude that particularly in Physics, Clinical Medicine, and Public Health, metrics agree quite well with peer review and may offer an alternative to peer review.
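The abstract mentions agreement measures based on absolute differences and a bootstrap to gauge uncertainty. As a rough illustration only (the paper bootstraps peer review outcomes through a model, whereas this sketch simply resamples institutions, and all scores below are invented), the general idea might look like:

```python
import random
import statistics

# Hypothetical institutional scores (invented for illustration): for each
# institution, a metric-based score and a peer-review-based score, e.g. the
# share of outputs rated in the top category.
metric_scores = [0.32, 0.18, 0.45, 0.27, 0.39, 0.21]
review_scores = [0.30, 0.22, 0.41, 0.25, 0.35, 0.24]

def mean_absolute_difference(xs, ys):
    """Agreement measure based on absolute differences between the two scores."""
    return sum(abs(x - y) for x, y in zip(xs, ys)) / len(xs)

def bootstrap_mad(xs, ys, n_samples=1000, seed=42):
    """Resample institutions with replacement to gauge uncertainty in the measure.
    (A simplification: the paper instead models uncertainty in peer review itself.)"""
    rng = random.Random(seed)
    indices = list(range(len(xs)))
    samples = []
    for _ in range(n_samples):
        draw = [rng.choice(indices) for _ in indices]
        samples.append(mean_absolute_difference([xs[i] for i in draw],
                                                [ys[i] for i in draw]))
    return statistics.mean(samples), statistics.stdev(samples)

mad = mean_absolute_difference(metric_scores, review_scores)
boot_mean, boot_sd = bootstrap_mad(metric_scores, review_scores)
```

A lower mean absolute difference indicates closer agreement between metrics and peer review, and the bootstrap spread gives a sense of how stable that agreement is.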
URL : https://arxiv.org/abs/1808.03491
Authors : Caspar Chorus, Ludo Waltman
Based on three decades of citation data from across all fields of science, we study trends in impact-factor-biased self-citations of scholarly journals, using a purpose-built, easy-to-use citation-based measure.
Our measure is given by the ratio between i) the relative share of journal self-citations to papers published in the last two years, and ii) the relative share of journal self-citations to papers published in preceding years.
A ratio higher than one suggests that a journal’s impact factor is disproportionately affected (inflated) by self-citations. Using recently reported survey data, we show that there is a relation between high values of our proposed measure and coercive journal self-citation malpractices.
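On one plausible reading of this definition (the paper's exact operationalisation may differ, and the citation counts below are invented), the measure compares the self-citation share among citations to papers from the two-year impact factor window against the self-citation share among citations to older papers:

```python
def self_citation_ratio(self_recent, total_recent, self_older, total_older):
    """Ratio of the journal self-citation share in the impact factor window
    (citations to papers from the last two years) to the self-citation share
    for papers from preceding years. Values above 1 suggest the impact factor
    is disproportionately inflated by self-citations."""
    share_recent = self_recent / total_recent  # share of self-citations, recent window
    share_older = self_older / total_older     # share of self-citations, older papers
    return share_recent / share_older

# Hypothetical journal (numbers invented for illustration): 60 of 300 citations
# to recent papers are self-citations, versus 40 of 500 for older papers.
ratio = self_citation_ratio(self_recent=60, total_recent=300,
                            self_older=40, total_older=500)
```

Here the recent self-citation share (0.20) is 2.5 times the older-paper share (0.08), so this hypothetical journal would score well above one on the measure.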
We use our measure to perform a large-scale analysis of impact-factor-biased journal self-citations. Our main empirical result is that the share of journals for which our measure has a (very) high value remained stable from the 1980s to the early 2000s but has since risen strongly in all fields of science.
This time span corresponds well with the growing obsession with the impact factor as a journal evaluation measure over the last decade.
Taken together, this suggests a trend of increasingly pervasive journal self-citation malpractices, with all due unwanted consequences such as inflated perceived importance of journals and biased journal rankings.
Title : A Large-Scale Analysis of Impact Factor Biased Journal Self-Citations
DOI : http://dx.doi.org/10.1371/journal.pone.0161021