Improving the discoverability and web impact of open repositories: techniques and evaluation

Author : George Macgregor

In this contribution we experiment with a suite of repository adjustments and improvements performed on Strathprints, the institutional repository of the University of Strathclyde, Glasgow, powered by EPrints 3.3.13.

These adjustments were designed to support improved repository web visibility and user engagement, thereby increasing usage. Although the experiments were performed on EPrints, most of the adopted improvements are thought to be equally applicable to other repository platforms.

Following preliminary results reported elsewhere, and using Strathprints as a case study, this paper outlines the approaches implemented, reports on comparative search traffic data and usage metrics, and delivers conclusions on the efficacy of the techniques implemented.

The evaluation provides persuasive evidence that specific enhancements to technical aspects of a repository can result in significant improvements to repository visibility, resulting in a greater web impact and consequent increases in content usage.

COUNTER usage grew by 33% and traffic to Strathprints from Google and Google Scholar was found to increase by 63% and 99% respectively. Other insights from the evaluation are also explored.

The results are likely to positively inform the work of repository practitioners and open scientists.


A multidimensional perspective on the citation impact of scientific publications

Authors : Yi Bu, Ludo Waltman, Yong Huang

The citation impact of scientific publications is usually seen as a one-dimensional concept. We introduce a three-dimensional perspective on the citation impact of publications. In addition to the level of citation impact, quantified by the number of citations received by a publication, we also conceptualize and operationalize the depth and dependence of citation impact.

This enables us to make a distinction between publications that have a deep impact concentrated in one specific field of research and publications that have a broad impact scattered over different research fields.

It also allows us to distinguish between publications that are strongly dependent on earlier work and publications that make a more independent scientific contribution.

We present a large-scale empirical analysis of the level, depth, and dependence of the citation impact of publications. In addition, we report a case study focusing on publications in the field of scientometrics.

Our three-dimensional citation impact framework provides a more detailed understanding of the citation impact of a publication than a traditional one-dimensional perspective.
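One illustrative way to operationalise the depth and dependence dimensions described above can be sketched in code. This is a guess at a plausible operationalisation, not the authors' exact definitions: here depth is taken as the share of citing publications from the focal paper's own field, and dependence as the average overlap between each citing paper's references and the focal paper's references.

```python
def depth(citing_fields, focal_field):
    """Share of citing publications from the focal paper's own field.

    Values near 1 indicate a deep, field-concentrated impact; values
    near 0 indicate a broad impact scattered across fields.
    """
    if not citing_fields:
        return 0.0
    return sum(f == focal_field for f in citing_fields) / len(citing_fields)


def dependence(citing_refs, focal_refs):
    """Average overlap between each citing paper's reference list and
    the focal paper's references.

    High values suggest the focal paper's impact is strongly tied to
    the earlier work it builds on; low values suggest a more
    independent contribution.
    """
    focal = set(focal_refs)
    if not citing_refs or not focal:
        return 0.0
    overlaps = [len(focal & set(refs)) / len(focal) for refs in citing_refs]
    return sum(overlaps) / len(overlaps)
```

For example, a paper cited twice from scientometrics and once from physics would have depth 2/3 under this reading, while a paper whose citers all re-cite its references would have dependence close to 1.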


Readership Data and Research Impact

Authors : Ehsan Mohammadi, Mike Thelwall

Reading academic publications is a key scholarly activity. Scholars accessing and recording academic publications online are producing new types of readership data. These include publisher, repository, and academic social network download statistics as well as online reference manager records.

This chapter discusses the use of download and reference manager data for research evaluation and library collection development. The focus is on the validity and application of readership data as an impact indicator for academic publications across different disciplines.

Mendeley is particularly promising in this regard, although none of these data sources are subject to rigorous quality control and all can be manipulated.


Preprints in Scholarly Communication: Re-Imagining Metrics and Infrastructures

Authors : B. Preedip Balaji, M. Dhanamjaya

Digital scholarship and electronic publishing within scholarly communities are changing as metrics and open infrastructures take centre stage in measuring research impact. In scholarly communication, the growth of preprint repositories over the last three decades has emerged as one of the major developments, offering a new model of scholarly publishing.

As it unfolds, the landscape of scholarly communication is in transition: as much is being privatized as is being made open, and evaluation is shifting towards alternative metrics such as social media attention and author-level and article-level metrics. Moreover, the granularity of evaluating research impact through new metrics and social media is changing the objective standards by which research performance is judged.

Using preprint repositories as a case study, this article situates them within the scholarly web, examining their salient features, benefits, and futures. It discusses how preprints, together with open infrastructures, citations, and alternative metrics, advance the building of the web as data and support scholarly publishing on the semantic and social web.

We argue that this will demonstrate the viability of new metrics and enhance research publishing tools in the scholarly commons, facilitating various communities of practice.

However, for the preprint repositories to sustain, scholarly communities and funding agencies should support continued investment in open knowledge, alternative metrics development, and open infrastructures in scholarly publishing.

URL : Preprints in Scholarly Communication: Re-Imagining Metrics and Infrastructures


On the Heterogeneous Distributions in Paper Citations

Authors : Jinhyuk Yun, Sejung Ahn, June Young Lee

Academic papers have long been the principal vehicle for disseminating expertise. Naturally, analysis of paper citation patterns is an efficient and essential way of investigating the knowledge structure of science and technology.

For decades, it has been observed that the citation of scientific literature follows a heterogeneous and heavy-tailed distribution, with many studies suggesting a power-law distribution, a log-normal distribution, or related distributions.

However, many of these studies are limited to small-scale approaches and are therefore hard to generalize. To overcome this problem, we investigate 21 years of citation evolution through a systematic analysis of the entire citation history of 42,423,644 scientific papers published from 1996 to 2016 and indexed in SCOPUS.

We tested six candidate distributions for the scientific literature at three distinct levels of the Scimago Journal & Country Rank (SJR) classification scheme. First, we observe that the raw number of annual citation acquisitions tends to follow the log-normal distribution for all disciplines, except in the first year after publication.

We also find a significant disparity in the number of citations acquired yearly across journals, which suggests that it is essential to remove the citation surplus inherited from the prestige of the journal.

Our simple method for separating the citation preference of an individual article from the inherited citation of the journals reveals an unexpected regularity in the normalized annual acquisitions of citations across the entire field of science.

Specifically, the normalized annual citation acquisitions follow power-law probability distributions with exponents around 2.3 and an exponential cut-off, regardless of publication and citation year.

Our results imply that journal reputation has a substantial long-term impact on citations.
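The abstract's "simple method for separating the citation preference of an individual article from the inherited citation of the journals" can be illustrated with a minimal sketch. This is a simplified reading of the normalisation idea, dividing each article's annual citation count by its journal's mean annual count; the data layout and function name are assumptions, not the paper's actual code.

```python
from collections import defaultdict


def journal_normalised_citations(articles):
    """Remove the citation surplus inherited from journal prestige.

    `articles` is a list of dicts with keys 'journal' and 'citations'
    (annual citation counts). Each article's count is divided by the
    mean count of its journal, so a value above 1 reflects the
    article's own citation preference beyond its journal's baseline.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for a in articles:
        sums[a["journal"]] += a["citations"]
        counts[a["journal"]] += 1

    result = []
    for a in articles:
        mean = sums[a["journal"]] / counts[a["journal"]]
        result.append({**a, "normalised": a["citations"] / mean if mean else 0.0})
    return result
```

Under this scheme, two articles with 2 and 6 annual citations in the same journal (mean 4) would receive normalised values of 0.5 and 1.5, making articles comparable across journals of very different prestige.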


A “basket of metrics”—the best support for understanding journal merit

Authors : Lisa Colledge, Chris James


To survey opinion on the assertion that useful metric-based input requires a “basket of metrics”, allowing more varied and nuanced insights into merit than is possible using one metric alone.


A poll was conducted to survey opinions (N=204; average response rate=61%) within the international research community on using usage metrics in merit systems.


“Research is best quantified using multiple criteria” was the most frequently selected reason (chosen by 40% of respondents) that usage metrics are valuable, and 95% of respondents indicated that they would be likely or very likely to use usage metrics in their assessments of research merit if they had access to them.

There was a similar degree of preference for simple and sophisticated usage metrics, confirming that one size does not fit all and that a one-metric approach to merit is insufficient.


This survey demonstrates a clear willingness and a real appetite to use a “basket of metrics” to broaden the ways in which research merit can be detected and demonstrated.


Do funding applications where peer reviewers disagree have higher citations? A cross-sectional study

Authors : Adrian G Barnett, Scott R. Glisson, Stephen Gallo


Decisions about which applications to fund are generally based on the mean scores of a panel of peer reviewers. As well as the mean, a large disagreement between peer reviewers may also be worth considering, as it may indicate a high-risk application with a high return.


We examined the peer reviewers’ scores for 227 funded applications submitted to the American Institute of Biological Sciences between 1999 and 2006. We examined the mean score and two measures of reviewer disagreement: the standard deviation and range.

The outcome variable was the relative citation ratio, which is the number of citations from all publications associated with the application, standardised by field and publication year.
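The field- and year-standardisation behind this outcome variable can be sketched as follows. This is a simplified illustration of the idea of a relative citation ratio (citations divided by the mean for comparable publications), not the study's actual algorithm; the function name and data layout are assumptions.

```python
from collections import defaultdict


def relative_citations(papers):
    """Standardise citation counts by field and publication year.

    `papers` is a list of dicts with keys 'citations', 'field', and
    'year'. Adds a 'relative_citations' key: the paper's citation
    count divided by the mean count of all papers sharing its field
    and year, so 1.0 means "average for comparable publications".
    """
    totals = defaultdict(lambda: [0, 0])  # (field, year) -> [sum, n]
    for p in papers:
        key = (p["field"], p["year"])
        totals[key][0] += p["citations"]
        totals[key][1] += 1

    for p in papers:
        total, n = totals[(p["field"], p["year"])]
        mean = total / n
        p["relative_citations"] = p["citations"] / mean if mean else 0.0
    return papers
```

Standardising in this way lets citation performance be compared across applications whose publications appeared in different fields and different years.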


There was a clear increase in relative citations for applications with a better mean score. There was no association between relative citations and either of the two measures of disagreement.


We found no evidence that reviewer disagreement was able to identify applications with a higher than average return. However, this is the first study to empirically examine this association, and it would be useful to examine whether reviewer disagreement is associated with research impact in other funding schemes and in larger sample sizes.

URL : Do funding applications where peer reviewers disagree have higher citations? A cross-sectional study