Can Google Scholar and Mendeley help to assess the scholarly impacts of dissertations?

Authors : Kayvan Kousha, Mike Thelwall

Dissertations can be the single most important scholarly output of junior researchers. Whilst sets of journal articles are often evaluated with the help of citation counts from the Web of Science or Scopus, neither database indexes dissertations, so their impact is hard to assess.

In response, this article introduces a new multistage method to extract Google Scholar citation counts for large collections of dissertations from repositories indexed by Google.

The method was used to extract Google Scholar citation counts for 77,884 American doctoral dissertations from 2013-2017 via ProQuest, with a precision of over 95%. Some ProQuest dissertations that were dual indexed with other repositories could not be retrieved with ProQuest-specific searches but could be found with Google Scholar searches of the other repositories.
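The paper's multistage procedure is not reproduced in the abstract, but its core idea of repository-targeted phrase searching in Google Scholar can be sketched. The function name and query template below are illustrative assumptions, not the authors' exact method:

```python
from urllib.parse import urlencode

SCHOLAR_URL = "https://scholar.google.com/scholar"

def build_scholar_query(title: str, repository_domain: str) -> str:
    """Build a Google Scholar search URL for a dissertation by exact
    title, restricted to pages hosted on one repository domain.

    Illustrative sketch of repository-targeted phrase searching; the
    query template is an assumption, not the paper's exact procedure."""
    query = f'"{title}" site:{repository_domain}'
    return f"{SCHOLAR_URL}?{urlencode({'q': query})}"

# Hypothetical dissertation title, searched on the ProQuest domain.
url = build_scholar_query("Citation impact of doctoral theses",
                          "search.proquest.com")
```

As the abstract notes, some dissertations are dual indexed, so the same title would also need to be queried against other repository domains to ensure coverage.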

The Google Scholar citation counts were then compared with Mendeley reader counts, a known source of scholarly-like impact data. A fifth of the dissertations had at least one citation recorded in Google Scholar and slightly fewer had at least one Mendeley reader.

Based on numerical comparisons, the Mendeley reader counts seem to be more useful for impact assessment purposes for dissertations that are less than two years old, whilst Google Scholar citations are more useful for older dissertations, especially in social sciences, arts and humanities.

Google Scholar citation counts may reflect a more scholarly type of impact than that of Mendeley reader counts because dissertations attract a substantial minority of their citations from other dissertations.

In summary, the new method now makes it possible for research funders, institutions and others to systematically evaluate the impact of dissertations, although additional Google Scholar queries for other online repositories are needed to ensure comprehensive coverage.

URL : https://arxiv.org/abs/1902.08746

Improving the discoverability and web impact of open repositories: techniques and evaluation

Author : George Macgregor

In this contribution we experiment with a suite of repository adjustments and improvements performed on Strathprints, the institutional repository of the University of Strathclyde, Glasgow, powered by EPrints 3.3.13.

These adjustments were designed to support improved repository web visibility and user engagement, thereby improving usage. Although the experiments were performed on EPrints it is thought that most of the adopted improvements are equally applicable to any other repository platform.

Following preliminary results reported elsewhere, and using Strathprints as a case study, this paper outlines the approaches implemented, reports on comparative search traffic data and usage metrics, and delivers conclusions on the efficacy of the techniques implemented.

The evaluation provides persuasive evidence that specific enhancements to technical aspects of a repository can result in significant improvements to repository visibility, resulting in a greater web impact and consequent increases in content usage.
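The abstract does not enumerate the specific enhancements, but one widely used visibility technique in this area is embedding Highwire Press `citation_*` meta tags, which Google Scholar's inclusion guidelines ask repositories to provide on item pages. The helper below is an illustrative sketch (its name and the example values are hypothetical), not the article's actual EPrints changes:

```python
from html import escape

def highwire_meta_tags(title, authors, date, pdf_url):
    """Render Highwire Press meta tags of the kind Google Scholar's
    inclusion guidelines recommend for repository item pages.
    Illustrative helper; the article's exact adjustments differ."""
    tags = [f'<meta name="citation_title" content="{escape(title)}">']
    tags += [f'<meta name="citation_author" content="{escape(a)}">'
             for a in authors]
    tags.append(f'<meta name="citation_publication_date" content="{escape(date)}">')
    tags.append(f'<meta name="citation_pdf_url" content="{escape(pdf_url)}">')
    return "\n".join(tags)

html_head = highwire_meta_tags(
    "Improving the discoverability of open repositories",
    ["Macgregor, George"],
    "2019",
    "https://strathprints.strath.ac.uk/example.pdf",
)
```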

COUNTER usage grew by 33% and traffic to Strathprints from Google and Google Scholar was found to increase by 63% and 99% respectively. Other insights from the evaluation are also explored.

The results are likely to positively inform the work of repository practitioners and open scientists.

URL : https://journal.code4lib.org/articles/14180

A multidimensional perspective on the citation impact of scientific publications

Authors : Yi Bu, Ludo Waltman, Yong Huang

The citation impact of scientific publications is usually seen as a one-dimensional concept. We introduce a three-dimensional perspective on the citation impact of publications. In addition to the level of citation impact, quantified by the number of citations received by a publication, we also conceptualize and operationalize the depth and dependence of citation impact.

This enables us to make a distinction between publications that have a deep impact concentrated in one specific field of research and publications that have a broad impact scattered over different research fields.

It also allows us to distinguish between publications that are strongly dependent on earlier work and publications that make a more independent scientific contribution.

We present a large-scale empirical analysis of the level, depth, and dependence of the citation impact of publications. In addition, we report a case study focusing on publications in the field of scientometrics.

Our three-dimensional citation impact framework provides a more detailed understanding of the citation impact of a publication than a traditional one-dimensional perspective.
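The three dimensions can be illustrated with a toy computation. The measures below are simple stand-ins chosen to match the abstract's descriptions (level as citation count, depth as field concentration of citing papers, dependence as overlap with earlier literature); they are not Bu, Waltman, and Huang's exact operationalizations:

```python
from collections import Counter

def citation_profile(citing_fields, citing_refs, focal_refs):
    """Toy three-dimensional citation profile of one focal publication.

    level      : number of citations received
    depth      : share of citations from the single most common citing
                 field (high = deep, field-concentrated impact; low =
                 broad impact scattered over fields)
    dependence : share of the focal paper's references also cited by its
                 citing papers (high = embedded in, and dependent on,
                 the same earlier literature)

    Illustrative stand-ins, not the authors' exact measures."""
    level = len(citing_fields)
    depth = max(Counter(citing_fields).values()) / level if level else 0.0
    cited_by_citers = set().union(*citing_refs) if citing_refs else set()
    dependence = (len(set(focal_refs) & cited_by_citers) / len(focal_refs)
                  if focal_refs else 0.0)
    return level, depth, dependence

# Four citing papers: three from scientometrics, one from sociology.
level, depth, dependence = citation_profile(
    citing_fields=["scientometrics", "scientometrics",
                   "scientometrics", "sociology"],
    citing_refs=[{"r1", "r2"}, {"r1"}, {"r3"}, {"r9"}],
    focal_refs=["r1", "r2", "r3", "r4"],
)
```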

URL : https://arxiv.org/abs/1901.09663

Readership Data and Research Impact

Authors : Ehsan Mohammadi, Mike Thelwall

Reading academic publications is a key scholarly activity. Scholars accessing and recording academic publications online are producing new types of readership data. These include publisher, repository, and academic social network download statistics as well as online reference manager records.

This chapter discusses the use of download and reference manager data for research evaluation and library collection development. The focus is on the validity and application of readership data as an impact indicator for academic publications across different disciplines.

Mendeley is particularly promising in this regard, although none of these data sources is subject to rigorous quality control and all can be manipulated.

URL : https://arxiv.org/abs/1901.08593

Preprints in Scholarly Communication: Re-Imagining Metrics and Infrastructures

Authors : B. Preedip Balaji, M. Dhanamjaya

Digital scholarship and electronic publishing within scholarly communities are changing as metrics and open infrastructures take centre stage in measuring research impact. In scholarly communication, the growth of preprint repositories over the last three decades has emerged as one of the major developments in scholarly publishing.

As this unfolds, the landscape of scholarly communication is in transition: as much is being privatized as is being made open, and evaluation is shifting towards alternative metrics such as social media attention and author- and article-level metrics. Moreover, the granularity with which new metrics and social media can evaluate research impact is changing the objective standards for assessing research performance.

Using preprint repositories as a case study, this article situates them in the scholarly web, examining their salient features, benefits, and futures. It also discusses how preprints advance the building of the web as data, towards scholarly web development and publishing on the semantic and social web with open infrastructures, citations, and alternative metrics.

We argue that this will enable new metrics and enhance research publishing tools in the scholarly commons, facilitating various communities of practice.

However, for the preprint repositories to sustain, scholarly communities and funding agencies should support continued investment in open knowledge, alternative metrics development, and open infrastructures in scholarly publishing.

DOI : https://doi.org/10.3390/publications7010006

On the Heterogeneous Distributions in Paper Citations

Authors : Jinhyuk Yun, Sejung Ahn, June Young Lee

Academic papers are the principal vehicle for disseminating expertise. Naturally, citation pattern analysis is an efficient and essential way of investigating the knowledge structure of science and technology.

For decades, it has been observed that citations to scientific literature follow heterogeneous, heavy-tailed distributions, and many studies suggest a power-law, log-normal, or related distribution.

However, many studies are limited to small-scale approaches and are therefore hard to generalize. To overcome this problem, we investigate 21 years of citation evolution through a systematic analysis of the entire citation history of 42,423,644 scientific publications published from 1996 to 2016 and indexed in Scopus.

We tested six candidate distributions for the scientific literature at three distinct levels of the Scimago Journal & Country Rank (SJR) classification scheme. First, we observe that the raw number of annual citation acquisitions tends to follow a log-normal distribution for all disciplines, except in the first year after publication.

We also find a significant disparity in yearly citation acquisition among journals, which suggests that it is essential to remove the citation surplus inherited from the prestige of a journal.

Our simple method for separating the citation preference of an individual article from the inherited citation of the journals reveals an unexpected regularity in the normalized annual acquisitions of citations across the entire field of science.

Specifically, the normalized annual citation acquisitions follow power-law probability distributions with exponents around 2.3 and an exponential cut-off, regardless of publication and citation year.

Our results imply that journal reputation has a substantial long-term impact on citations.
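The reported functional form, a power law with an exponential cut-off, can be sketched numerically. The exponent 2.3 comes from the abstract; the cut-off scale `c0` and grid are arbitrary illustrative choices:

```python
import numpy as np

def powerlaw_cutoff_pdf(c, alpha=2.3, c0=100.0):
    """Unnormalized power law with exponential cut-off,
    p(c) ~ c^(-alpha) * exp(-c / c0), the form reported for normalized
    annual citation acquisitions (exponent ~2.3). The cut-off scale c0
    is an arbitrary illustrative choice."""
    c = np.asarray(c, dtype=float)
    return c ** (-alpha) * np.exp(-c / c0)

c = np.arange(1, 1001)
p = powerlaw_cutoff_pdf(c)
p /= p.sum()  # normalize over the grid

# In the pure power-law regime (c << c0), the log-log slope is ~ -alpha.
slope = (np.log(p[9]) - np.log(p[0])) / (np.log(c[9]) - np.log(c[0]))
```

On this grid the measured slope sits close to -2.3, with the exponential factor only bending the tail down at large c.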

URL : https://arxiv.org/abs/1810.08809

A “basket of metrics”—the best support for understanding journal merit

Authors : Lisa Colledge, Chris James

Aim

To survey opinion of the assertion that useful metric-based input requires a "basket of metrics" to allow more varied and nuanced insights into merit than is possible by using one metric alone.

Methods

A poll was conducted to survey opinions (N=204; average response rate=61%) within the international research community on using usage metrics in merit systems.

Results

“Research is best quantified using multiple criteria” was selected by the largest share (40%) of respondents as the reason that usage metrics are valuable, and 95% of respondents indicated that they would be likely or very likely to use usage metrics in their assessments of research merit, if they had access to them.

There was a similar degree of preference for simple and sophisticated usage metrics, confirming that one size does not fit all and that a one-metric approach to merit is insufficient.

Conclusion

This survey demonstrates a clear willingness and a real appetite to use a “basket of metrics” to broaden the ways in which research merit can be detected and demonstrated.

URL : http://europeanscienceediting.eu/articles/a-basket-of-metrics-the-best-support-for-understanding-journal-merit/