Open access book usage data – how close is COUNTER to the other kind?

Author : Ronald Snijder

In April 2020, the OAPEN Library moved to a new platform, based on DSpace 6. During the same period, IRUS-UK started working on the deployment of Release 5 of the COUNTER Code of Practice (R5). This is, therefore, a good moment to compare two widely used usage metrics – R5 and Google Analytics (GA).

This article discusses the download data of close to 11,000 books and chapters from the OAPEN Library, from the period 15 April 2020 to 31 July 2020. When a book or chapter is downloaded, it is logged by GA and at the same time a signal is sent to IRUS-UK.

This results in two datasets: the monthly downloads measured in GA and the usage reported by R5, also clustered by month. The number of downloads reported by GA is considerably larger than that reported by R5. The total number of downloads in GA for the period is over 3.6 million.

In contrast, the amount reported by R5 is 1.5 million, around 400,000 downloads per month. Contrasting R5 and GA data on a country-by-country basis shows significant differences. GA lists more than five times the number of downloads for several countries, although the totals for other countries are about the same.
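A minimal sketch of this kind of country-level comparison, assuming two hypothetical CSV exports (ga_downloads.csv and r5_downloads.csv); the file and column names are illustrative, not the actual OAPEN or IRUS-UK data:

    # Sketch: compare hypothetical GA and COUNTER R5 download exports by country.
    # File names and column names are assumptions for illustration only.
    import pandas as pd

    ga = pd.read_csv("ga_downloads.csv")  # columns: country, month, downloads
    r5 = pd.read_csv("r5_downloads.csv")  # columns: country, month, downloads

    # Total downloads per country in each source over the whole period.
    ga_totals = ga.groupby("country")["downloads"].sum().rename("ga")
    r5_totals = r5.groupby("country")["downloads"].sum().rename("r5")

    # Keep countries present in both sources and compute the GA/R5 ratio.
    comparison = pd.concat([ga_totals, r5_totals], axis=1).dropna()
    comparison["ga_to_r5_ratio"] = comparison["ga"] / comparison["r5"]

    # Countries where GA reports more than five times the R5 figure.
    print(comparison[comparison["ga_to_r5_ratio"] > 5]
          .sort_values("ga_to_r5_ratio", ascending=False))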

When looking at individual titles, of the 500 highest-ranked titles in GA that are also among the 1,000 highest-ranked titles in R5, only 6% rank relatively close together in both. The choice of metric service therefore has considerable consequences for what is reported.

Thus, drawing conclusions about the results should be done with care. One metric is not better than the other, but we should be open about the choices made. After all, open access book metrics are complicated, and we can only benefit from clarity.

URL : Open access book usage data – how close is COUNTER to the other kind?

DOI : http://doi.org/10.1629/uksg.539

Day-to-day discovery of preprint–publication links

Authors : Guillaume Cabanac, Theodora Oikonomidi, Isabelle Boutron

Preprints promote the open and fast communication of non-peer reviewed work. Once a preprint is published in a peer-reviewed venue, the preprint server updates its web page: a prominent hyperlink leading to the newly published work is added.

Linking preprints to publications is of utmost importance as it provides readers with the latest version of a now certified work. Yet leading preprint servers fail to identify all existing preprint–publication links.

This limitation calls for a more thorough approach to this critical information retrieval task: overlooking published evidence translates into partial and even inaccurate systematic reviews on health-related issues, for instance.

We designed an algorithm leveraging Crossref, a public and free source of bibliographic metadata, to comb the literature for preprint–publication links. We tested it on a reference preprint set identified and curated for a living systematic review on interventions for preventing and treating COVID-19, performed by an international collaboration: the COVID-NMA initiative (covid-nma.com).
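The general idea of such a linker can be sketched with Crossref’s public REST API: query it with a preprint’s bibliographic details and check whether a matching journal article comes back. The simple title-similarity rule below is an illustration, not the authors’ actual matching algorithm:

    # Rough sketch: look up a possible published version of a preprint via the
    # public Crossref REST API. The title-similarity threshold is an arbitrary
    # simplification, not the matching rule used by the authors.
    import requests
    from difflib import SequenceMatcher

    def find_publication(preprint_title, mailto="you@example.org"):
        resp = requests.get(
            "https://api.crossref.org/works",
            params={"query.bibliographic": preprint_title, "rows": 5, "mailto": mailto},
            timeout=30,
        )
        resp.raise_for_status()
        for item in resp.json()["message"]["items"]:
            if item.get("type") != "journal-article":
                continue  # skip non-journal records, including the preprint itself
            candidate = (item.get("title") or [""])[0]
            if SequenceMatcher(None, preprint_title.lower(), candidate.lower()).ratio() > 0.9:
                return item["DOI"], candidate
        return None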

The reference set comprised 343 preprints, 121 of which appeared as a publication in a peer-reviewed journal. While the preprint servers identified 39.7% of the preprint–publication links, our linker identified 90.9% of the expected links with no clues taken from the preprint servers.

The accuracy of the proposed linker is 91.5% on this reference set, with 90.9% sensitivity and 91.9% specificity. This is a 16.26% increase in accuracy compared to that of preprint servers. We release this software as supplementary material to foster its integration into preprint servers’ workflows and enhance a daily preprint–publication chase that is useful to all readers, including systematic reviewers.

This preprint–publication linker currently provides day-to-day updates to the biomedical experts of the COVID-NMA initiative.

URL : Day-to-day discovery of preprint–publication links

DOI : https://doi.org/10.1007/s11192-021-03900-7

What happens when a journal converts to Open Access? A bibliometric analysis

Authors : Fakhri Momeni, Philipp Mayr, Nicholas Fraser, Isabella Peters

In recent years, increased stakeholder pressure to transition research to Open Access has led to many journals converting, or ‘flipping’, from a closed access (CA) to an open access (OA) publishing model.

Changing the publishing model can influence the decision of authors to submit their papers to a journal, and increased article accessibility may influence citation behaviour. In this paper we aimed to understand how flipping a journal to an OA model influences the journal’s future publication volumes and citation impact.

We analysed two independent sets of journals that had flipped to an OA model, one from the Directory of Open Access Journals (DOAJ) and one from the Open Access Directory (OAD), and compared their development with two respective control groups of similar journals. For bibliometric analyses, journals were matched to the Scopus database.

We assessed changes in the number of articles published over time, as well as two citation metrics at the journal and article level: the normalised impact factor (IF) and the average relative citations (ARC), respectively. Our results show that overall, journals that flipped to an OA model increased their publication output compared to journals that remained closed.

Mean normalised IF and ARC also generally increased following the flip to an OA model, at a greater rate than was observed in the control groups. However, the changes appear to vary largely by scientific discipline. Overall, these results indicate that flipping to an OA publishing model can bring positive changes to a journal.
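As a rough illustration of the article-level metric used in this kind of analysis, average relative citations can be computed by normalising each article’s citation count against the mean for its field and publication year; the file and column names below are assumptions, not the authors’ Scopus pipeline:

    # Illustrative computation of average relative citations (ARC): each
    # article's citations divided by the mean citation count of articles in
    # the same field and publication year. Column names are assumptions.
    import pandas as pd

    articles = pd.read_csv("articles.csv")  # columns: journal, field, year, citations

    field_year_mean = articles.groupby(["field", "year"])["citations"].transform("mean")
    articles["relative_citations"] = articles["citations"] / field_year_mean

    # Average relative citations per journal and year, e.g. before and after a flip.
    print(articles.groupby(["journal", "year"])["relative_citations"].mean().head())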

URL : https://arxiv.org/abs/2103.14522

Novelty, Disruption, and the Evolution of Scientific Impact

Authors : Yiling Lin, James Allen Evans, Lingfei Wu

Since the 1950s, citation impact has been the dominant metric by which science is quantitatively evaluated. But research contributions play distinct roles in the unfolding drama of scientific debate, agreement and advance, and institutions may value different kinds of advances.

Computational power, access to citation data and an array of modeling techniques have given rise to a widening portfolio of metrics for extracting different signals about a work’s contribution. Here we unpack the complex, temporally evolving relationship between citation impact and two emerging measures, novelty and disruption, which capture the degree to which science not only influences, but transforms, later work.

Novelty captures how research draws upon unusual combinations of prior work. Disruption captures how research comes to eclipse the prior work on which it builds, becoming recognized as a new scientific direction.

We demonstrate that: 1) novel papers disrupt existing theories and expand the scientific frontier; 2) novel papers are more likely to become “sleeping beauties” and accumulate citation impact over the long run; 3) novelty can be reformulated as distance in journal embedding spaces to map the moving frontier of science.

The evolution of embedding spaces over time reveals how yesterday’s novelty forms today’s scientific conventions, which condition the novelty and surprise of tomorrow’s breakthroughs.
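A toy sketch of the reformulation in point 3, scoring a paper’s novelty as the average pairwise cosine distance between embedding vectors of the journals it cites; the journal names and vectors below are made up for illustration:

    # Toy sketch: novelty as distance in a journal embedding space, scored as
    # the mean pairwise cosine distance between the journals a paper cites.
    # The journal names and 3-dimensional vectors are made-up illustrations.
    import numpy as np
    from itertools import combinations

    journal_vectors = {
        "Journal A (neuroscience)": np.array([0.9, 0.1, 0.0]),
        "Journal B (physics)":      np.array([0.1, 0.8, 0.2]),
        "Journal C (cell biology)": np.array([0.8, 0.2, 0.1]),
    }

    def cosine_distance(a, b):
        return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    def novelty(cited_journals):
        """Mean pairwise distance between the journals cited by one paper."""
        pairs = combinations(cited_journals, 2)
        return float(np.mean([cosine_distance(journal_vectors[a], journal_vectors[b])
                              for a, b in pairs]))

    # An unusual combination of fields scores higher than a conventional one.
    print(novelty(["Journal A (neuroscience)", "Journal B (physics)"]))       # ~0.77
    print(novelty(["Journal A (neuroscience)", "Journal C (cell biology)"]))  # ~0.02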

URL : https://arxiv.org/abs/2103.03398

Research impact evaluation and academic discourse

Author : Marta Natalia Wróblewska

The introduction of ‘impact’ as an element of assessment constitutes a major change in the construction of research evaluation systems. While various protocols of impact evaluation exist, the most articulated one was implemented as part of the British Research Excellence Framework (REF).

This paper investigates the nature and consequences of the rise of ‘research impact’ as an element of academic evaluation from the perspective of discourse. Drawing on linguistic pragmatics and Foucauldian discourse analysis, the study discusses shifts related to the so-called Impact Agenda in four stages, in chronological order: (1) the ‘problematization’ of the notion of ‘impact’, (2) the establishment of an ‘impact infrastructure’, (3) the consolidation of a new genre of writing, the impact case study, and (4) academics’ positioning practices towards the notion of ‘impact’, theorized here as the triggering of new practices of ‘subjectivation’ of the academic self.

The description of the basic functioning of the ‘discourse of impact’ is based on the analysis of two corpora: case studies submitted by a selected group of academics (linguists) to REF2014 (n = 78) and interviews (n = 25) with their authors.

Linguistic pragmatics is particularly useful in analyzing linguistic aspects of the data, while Foucault’s theory helps draw together findings from two datasets in a broader analysis based on a governmentality framework. This approach allows for more general conclusions on the practices of governing (academic) subjects within evaluation contexts.

URL : Research impact evaluation and academic discourse

DOI : https://doi.org/10.1057/s41599-021-00727-8

What is the benefit from publishing a working paper in a journal in terms of citations? Evidence from economics

Authors : Klaus Wohlrabe, Constantin Bürgi

Many papers in economics that are published in peer-reviewed journals are initially released in widely circulated working paper series. This raises the question of the benefit, in terms of citations, of publishing in a peer-reviewed journal.

Specifically, we address the question: to what extent does the stamp of approval obtained by publishing in a peer-reviewed journal lead to more subsequent citations for papers that are already available in working paper series? Our data set comprises about 28,000 working papers from four major working paper series in economics.

Using panel data methods, we show that publication in a peer-reviewed journal results in around twice the number of yearly citations relative to working papers that are never published in a journal. Our results hold in several robustness checks.
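A highly simplified sketch of the kind of panel regression implied here, with paper and year fixed effects and a post-publication indicator; the data file, column names and specification are assumptions, not the paper’s actual setup:

    # Simplified sketch: regress (log) yearly citations on an indicator for
    # having appeared in a peer-reviewed journal, with paper and year fixed
    # effects. File, columns and specification are illustrative assumptions.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    panel = pd.read_csv("working_paper_citations.csv")
    # expected columns: paper_id, year, citations, published (1 from the year
    # the working paper appears in a journal onwards, else 0)

    model = smf.ols(
        "np.log1p(citations) ~ published + C(paper_id) + C(year)", data=panel
    ).fit(cov_type="cluster", cov_kwds={"groups": panel["paper_id"]})

    # exp(coefficient) approximates the multiplicative effect on yearly citations.
    print(np.exp(model.params["published"]))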

URL : What is the benefit from publishing a working paper in a journal in terms of citations? Evidence from economics

DOI : https://doi.org/10.1007/s11192-021-03942-x

Inferring the causal effect of journals on citations

Author : Vincent A Traag

Articles in high-impact journals are, on average, more frequently cited. But are they cited more often because those articles are somehow more “citable”? Or are they cited more often simply because they are published in a high-impact journal? Although some evidence suggests the latter, the causal relationship is not clear.

We here compare citations of preprints to citations of the published version to uncover the causal mechanism. We build on an earlier model of citation dynamics to infer the causal effect of journals on citations. We find that high-impact journals select articles that tend to attract more citations.

At the same time, we find that high-impact journals augment the citation rate of published articles. Our results yield a deeper understanding of the role of journals in the research system.

The use of journal metrics in research evaluation has been increasingly criticized in recent years and article-level citations are sometimes suggested as an alternative. Our results show that removing impact factors from evaluation does not negate the influence of journals. This insight has important implications for changing practices of research evaluation.

DOI : https://doi.org/10.1162/qss_a_00128