Inferring the causal effect of journals on citations

Author : Vincent Traag

Articles in high-impact journals are by definition more highly cited on average. But are they cited more often because the articles are somehow “better”? Or are they cited more often simply because they appeared in a high-impact journal? Although some evidence suggests the latter, the causal relationship is not clear.

Here we compare citations of published journal articles with citations of their preprint versions to uncover the causal mechanism. We build on an earlier model to infer the causal effect of journals on citations, and we find evidence for both effects.

We show that high-impact journals seem to select articles that tend to attract more citations. At the same time, we find that high-impact journals augment the citation rate of published articles.
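To make the two effects concrete, here is a toy sketch (hypothetical numbers and variables, not the authors' data or model): a selection effect would show up as preprints destined for high-impact journals already attracting more citations, while a boosting effect would show up as a higher published-to-preprint citation ratio for those journals.

```python
# Toy sketch: disentangling selection from boosting using paired
# preprint/published citation counts. All numbers are invented.
from statistics import mean

papers = [
    # (journal_impact_factor, preprint_citations, published_citations)
    (2.1, 4, 6),
    (2.3, 3, 5),
    (9.8, 5, 14),
    (10.4, 6, 17),
]

high = [p for p in papers if p[0] >= 5]
low = [p for p in papers if p[0] < 5]

for label, group in (("high-impact", high), ("low-impact", low)):
    # Selection: preprints headed for high-impact journals may already
    # gather more citations before publication.
    pre_mean = mean(p[1] for p in group)
    # Boosting: publication itself may raise the citation rate further.
    boost = mean(p[2] / p[1] for p in group)
    print(f"{label}: mean preprint citations {pre_mean:.1f}, "
          f"published/preprint citation ratio {boost:.2f}")
```

In this invented example the high-impact group scores higher on both quantities, which is the pattern the paper reports: selection and boosting operating together.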

Our results yield a deeper understanding of the role of journals in the research system. The use of journal metrics in research evaluation has been increasingly criticised in recent years, and article-level citations are sometimes suggested as an alternative.

Our results show that removing impact factors from evaluation does not negate the influence of journals. This insight has important implications for changing practices of research evaluation.

URL : https://arxiv.org/abs/1912.08648

Systematic analysis of agreement between metrics and peer review in the UK REF

Authors : Vincent Traag, Ludo Waltman

When performing a national research assessment, some countries rely on citation metrics, whereas others, such as the UK, primarily use peer review. The influential Metric Tide report found low agreement between metrics and peer review in the UK Research Excellence Framework (REF).

However, earlier studies observed much higher agreement between metrics and peer review in the REF and argued in favour of using metrics. These conflicting findings show that there is considerable ambiguity in the discussion on the agreement between metrics and peer review.

We provide clarity in this discussion by considering four important points: (1) the level of aggregation of the analysis; (2) the use of either a size-dependent or a size-independent perspective; (3) the suitability of different measures of agreement; and (4) the uncertainty in peer review.

In the context of the REF, we argue that agreement between metrics and peer review should be assessed at the institutional level rather than at the publication level. Both a size-dependent and a size-independent perspective are relevant in the REF.

The interpretation of correlations may be problematic; as an alternative, we use measures of agreement based on the absolute or relative differences between metrics and peer review.
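As a toy illustration (invented institutions and scores, not REF data), a measure based on absolute differences can be read directly on the same scale as the scores themselves, which a correlation coefficient cannot:

```python
# Hypothetical institution-level scores: share of output rated 4*
# by peer review versus by a citation metric (invented numbers).
peer   = {"Inst A": 0.30, "Inst B": 0.22, "Inst C": 0.45}
metric = {"Inst A": 0.28, "Inst B": 0.31, "Inst C": 0.44}

# Mean absolute difference: 0 means perfect agreement, and the value
# is interpretable on the same 0-1 scale as the scores.
mad = sum(abs(peer[i] - metric[i]) for i in peer) / len(peer)
print(f"mean absolute difference: {mad:.3f}")  # 0.040
```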

To get an idea of the uncertainty in peer review, we rely on a model to bootstrap peer review outcomes. We conclude that, particularly in Physics, Clinical Medicine, and Public Health, metrics agree quite well with peer review and may offer an alternative.
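The bootstrap idea can be sketched minimally (invented star ratings and a deliberately simplified resampling scheme, not the authors' exact model): resampling an institution's publication-level peer review scores with replacement yields a distribution for its aggregate score, and hence a sense of how uncertain peer review outcomes are.

```python
import random

random.seed(0)
scores = [4, 3, 4, 2, 3, 4, 4, 1, 3, 4]  # hypothetical star ratings

# Resample the publication-level scores with replacement and record
# the resulting institutional mean each time.
boot_means = sorted(
    sum(random.choices(scores, k=len(scores))) / len(scores)
    for _ in range(10_000)
)

# Percentile-based ~95% interval for the institutional score.
lo, hi = boot_means[249], boot_means[9749]
print(f"observed mean {sum(scores) / len(scores):.2f}, "
      f"95% bootstrap interval [{lo:.2f}, {hi:.2f}]")
```

If a metrics-based score falls inside such an interval, metrics and peer review can be said to agree within the uncertainty of peer review itself.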

URL : https://arxiv.org/abs/1808.03491