Do altmetrics assess societal impact in the same way as case studies? An empirical analysis testing the convergent validity of altmetrics based on data from the UK Research Excellence Framework (REF)

Authors : Lutz Bornmann, Robin Haunschild, Jonathan Adams

Altmetrics have been proposed as a way to assess the societal impact of research. Although altmetrics are already in use as impact or attention metrics in different contexts, it is still not clear whether they really capture or reflect societal impact.

This study is based on altmetrics, citation counts, research output and case study data from the UK Research Excellence Framework (REF), and peers’ REF assessments of research output and societal impact. We investigated the convergent validity of altmetrics by using two REF datasets: publications submitted as research output (PRO) to the REF and publications referenced in case studies (PCS).

Case studies, which are intended to demonstrate societal impact, should cite the most relevant research papers. We used the MHq’ indicator, which was introduced for count data with many zeros, to assess impact.
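The MHq’ indicator itself is defined in the paper; as background, it builds on Mantel-Haenszel pooling of stratified 2x2 tables (here: papers with versus without at least one mention, in a focal set versus a reference set, per field). A minimal Python sketch of that pooling, with invented counts, might look like this:

```python
# Minimal sketch: Mantel-Haenszel pooled odds ratio across field strata.
# Each stratum is a 2x2 table: a/b = focal-set papers with >= 1 mention /
# with 0 mentions; c/d = the same for the reference set.
# All counts below are invented for illustration.

strata = {
    "Clinical Medicine": {"a": 120, "b": 380, "c": 900, "d": 9100},
    "Physics":           {"a": 15,  "b": 185, "c": 210, "d": 4790},
    "Social Sciences":   {"a": 60,  "b": 140, "c": 300, "d": 2700},
}

def mantel_haenszel_or(strata):
    """Pooled odds ratio: sum(a*d/n) / sum(b*c/n) over all strata."""
    num = sum(t["a"] * t["d"] / sum(t.values()) for t in strata.values())
    den = sum(t["b"] * t["c"] / sum(t.values()) for t in strata.values())
    return num / den

# A pooled value > 1 means the focal set is mentioned more often than
# expected from the field-specific baseline rates.
print(f"Pooled odds ratio: {mantel_haenszel_or(strata):.2f}")
```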

The results of the first part of the analysis show that mentions in news media, on Facebook, in blogs, in Wikipedia, and in policy-related documents have higher MHq’ values for PCS than for PRO.

Thus, the altmetric indicators seem to have convergent validity for these data. In the second part of the analysis, we correlated altmetrics with REF reviewers’ average scores for PCS. The negative or near-zero correlations call the convergent validity of altmetrics in that context into question.

We suggest that altmetrics may capture a different aspect of societal impact (which can be called unknown attention) from the one reviewers assess (reviewers are interested in the causal link between research and action in society).

URL : https://arxiv.org/abs/1807.03977

An introduction to achieving policy impact for early career researchers

Authors : Megan C Evans, Christopher Cvitanovic

Scientists are increasingly required to demonstrate the real-world, tangible impacts arising from their research. Despite significant advances in scholarship dedicated to understanding and improving the relationships between science, policy and practice, much of the existing literature remains high-level, theoretical, and not immediately accessible to early career researchers (ECRs) who work outside of the policy sciences.

In this paper, we draw on the literature and our own experiences working in the environmental sciences to provide an accessible resource for ECRs seeking to achieve policy impact in their chosen field. First, we describe key concepts in public policy to provide sufficient background for the non-expert.

Next, we articulate a number of practical steps and tools that can help ECRs to identify and enhance the policy relevance of their research, better understand the policy world in practice, and identify a range of pathways to achieving impact.

Finally, we draw on our personal experiences to highlight some of the key individual characteristics and values that are needed to operate more effectively at the interface of science, policy and practice.

Our hope is that the information and tools provided here can help to empower ECRs to create their own pathways to impact that best suit their individual goals, circumstances, interests and strengths.

URL : https://www.nature.com/articles/s41599-018-0144-2

Google Scholar as a data source for research assessment

Authors : Emilio Delgado López-Cózar, Enrique Orduna-Malea, Alberto Martín-Martín

The launch of Google Scholar (GS) marked the beginning of a revolution in the scientific information market. This search engine, unlike traditional databases, automatically indexes information from the academic web. Its ease of use, together with its wide coverage and fast indexing speed, has made it the first tool most scientists currently turn to when they need to carry out a literature search.

Additionally, the fact that its search results were accompanied from the beginning by citation counts, as well as the later development of secondary products which leverage this citation data (such as Google Scholar Metrics and Google Scholar Citations), made many scientists wonder about its potential as a source of data for bibliometric analyses.

The goal of this chapter is to lay the foundations for the use of GS as a supplementary source (and in some disciplines, arguably the best alternative) for scientific evaluation.

First, we present a general overview of how GS works. Second, we present empirical evidence about its main characteristics (size, coverage, and growth rate). Third, we carry out a systematic analysis of the main limitations this search engine presents as a tool for the evaluation of scientific performance.

Lastly, we discuss the main differences between GS and other, more traditional bibliographic databases in light of the correlations found between their citation data. We conclude that Google Scholar presents a broader view of the academic world because it has brought to light a great number of sources that were not previously visible.

URL : https://arxiv.org/abs/1806.04435

Collaboration Diversity and Scientific Impact

Authors : Yuxiao Dong, Hao Ma, Jie Tang, Kuansan Wang

The shift from individual effort to collaborative output has benefited science, with collaborative work increasingly producing more highly impactful research than work pursued individually.

However, understanding of how the diversity of a collaborative team influences the production of knowledge and innovation is sorely lacking. Here, we study this question by breaking down the collaboration process behind 32.9 million papers published over the last five decades.

We find that the probability of producing a top-cited publication increases as a function of the diversity of a team of collaborators, namely the number of distinct institutions represented on the team.
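As an illustration of how such a relationship can be estimated from paper records, the following sketch (with invented data and an assumed top-5% citation threshold; not the authors’ actual pipeline) groups papers by the number of distinct institutions and computes the share of top-cited papers per group:

```python
import pandas as pd

# Hypothetical paper records: citation counts and author institutions.
papers = pd.DataFrame({
    "citations":    [250, 3, 41, 0, 190, 12, 77, 5],
    "institutions": [{"MIT", "Tsinghua"}, {"MIT"}, {"ETH", "MPG", "MIT"},
                     {"ETH"}, {"Tsinghua", "ETH"}, {"MPG"},
                     {"MIT", "MPG"}, {"Tsinghua"}],
})

# Diversity = number of distinct institutions represented on the paper.
papers["diversity"] = papers["institutions"].apply(len)

# "Top-cited" = top 5% of the citation distribution (an assumed cutoff).
threshold = papers["citations"].quantile(0.95)
papers["top_cited"] = papers["citations"] >= threshold

# Probability of a top-cited paper as a function of team diversity.
print(papers.groupby("diversity")["top_cited"].mean())
```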

Strikingly, we find that a smaller but more diverse team is more likely to generate highly innovative work than a relatively larger team within one institution.

We demonstrate that the synergy of collaboration diversity is universal across different generations, research fields, and tiers of institutions and individual authors.

Our findings suggest that collaboration diversity strongly and positively correlates with the production of scientific innovation, with potential consequences for the policies funding agencies and authorities use to fund research projects and, more broadly, for the principles used to organize teams, organizations, and societies.

URL : https://arxiv.org/abs/1806.03694

Analysing researchers’ outreach efforts and the association with publication metrics: A case study of Kudos

Authors : Mojisola Erdt, Htet Htet Aung, Ashley Sara Aw, Charlie Rapple, Yin-Leng Theng

With the growth of scholarly collaboration networks and social communication platforms, members of the scholarly community are experimenting with their approach to disseminating research outputs, in an effort to increase their audience and outreach.

However, from a researcher’s point of view, it is difficult to determine whether efforts to make work more visible are worthwhile (in terms of the association with publication metrics) and within that, difficult to assess which platform or network is most effective for sharing work and connecting to a wider audience.

We undertook a case study of Kudos (https://www.growkudos.com), a web-based service that claims to help researchers increase the outreach of their publications, to examine the most effective tools for sharing publications online, and to investigate which actions are associated with improved metrics.

We extracted a dataset from Kudos of 830,565 unique publications claimed by authors, of which 20,775 had actions taken to explain or share them via Kudos; for 4,867 of these, full-text download data from publishers were available.

Findings show that researchers are most likely to share their work on Facebook, but links shared on Twitter are more likely to be clicked on. A Mann-Whitney U test revealed that a treatment group (publications with actions in Kudos) had a significantly higher median of 149 full-text downloads per publication (23.1% more) than a control group (publications with no actions in Kudos), whose median was 121 full-text downloads per publication.
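A hedged sketch of this kind of comparison with SciPy’s Mann-Whitney U test is shown below; the download counts are synthetic and only roughly mimic the reported medians:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(42)

# Synthetic full-text download counts per publication (invented; the
# study's reported medians were 149 treatment vs. 121 control).
treatment = rng.negative_binomial(n=5, p=0.032, size=500)  # actions in Kudos
control = rng.negative_binomial(n=5, p=0.040, size=500)    # no actions

# Two-sided Mann-Whitney U test on the two download distributions.
stat, p_value = mannwhitneyu(treatment, control, alternative="two-sided")
print(f"U = {stat:.0f}, p = {p_value:.4g}")
print(f"medians: treatment {np.median(treatment):.0f}, "
      f"control {np.median(control):.0f}")
```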

These findings suggest that performing actions on publications, such as sharing, explaining, or enriching, could help to increase the number of full-text downloads of a publication.

DOI : https://doi.org/10.1371/journal.pone.0183217

Citation Count Analysis for Papers with Preprints

Authors : Sergey Feldman, Kyle Lo, Waleed Ammar

We explore the degree to which papers prepublished on arXiv garner more citations, in an attempt to paint a sharper picture of fairness issues related to prepublishing. A paper’s citation count is estimated using a negative-binomial generalized linear model (GLM) that includes a binary variable indicating whether the paper has been prepublished.

We control for author influence (via the authors’ h-index at the time of paper writing), publication venue, and the overall time the paper has been available on arXiv. Our analysis only includes papers that were eventually accepted for publication at top-tier CS conferences and were posted on arXiv either before or after the acceptance notification.
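A minimal sketch of such a model with statsmodels follows; the column names and records are invented for illustration and are not the authors’ dataset:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical records; all column names and values are assumptions.
df = pd.DataFrame({
    "citations":       [12, 3, 45, 0, 7, 22, 1, 16, 9, 30],
    "prepublished":    [1, 0, 1, 0, 0, 1, 0, 1, 0, 1],  # on arXiv pre-acceptance?
    "author_h_index":  [25, 4, 40, 2, 10, 30, 5, 18, 8, 22],
    "months_on_arxiv": [14, 2, 20, 1, 3, 16, 2, 12, 4, 18],
    "venue":           ["ICML", "ICML", "NeurIPS", "NeurIPS", "ACL",
                        "ACL", "ICML", "NeurIPS", "ACL", "ICML"],
})

# Negative-binomial GLM of citation counts on the prepublication flag,
# controlling for author influence, venue, and time on arXiv.
model = smf.glm(
    "citations ~ prepublished + author_h_index + months_on_arxiv + C(venue)",
    data=df,
    family=sm.families.NegativeBinomial(),
).fit()
print(model.summary())
```

Under this kind of model, the exponentiated coefficient on the prepublication flag gives the multiplicative citation effect; exp(beta) of about 1.65 would correspond to the 65% figure reported below.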

We observe that papers submitted to arXiv before acceptance have, on average, 65% more citations in the following year compared to papers submitted after. We note that this finding is not causal, and discuss possible next steps.

URL : https://arxiv.org/abs/1805.05238

Measuring Scientific Broadness

Authors : Tom Price, Sabine Hossenfelder

Who has not read letters of recommendation that comment on a student’s ‘broadness’ and wondered what to make of it?

We here propose a way to quantify scientific broadness by a semantic analysis of researchers’ publications. We apply our methods to papers on the open-access server arXiv.org and report our findings.
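The paper develops its own semantic measure; as a simple illustration of the general idea (our assumption, not the authors’ method), broadness could be proxied by the Shannon entropy of a researcher’s distribution over arXiv primary categories:

```python
import math
from collections import Counter

def broadness_entropy(categories):
    """Shannon entropy (in bits) of a researcher's distribution over
    arXiv primary categories; higher values indicate broader output."""
    counts = Counter(categories)
    total = sum(counts.values())
    return sum(-(c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical publication lists (arXiv primary categories).
narrow = ["hep-th"] * 10
broad = ["hep-th", "gr-qc", "astro-ph.CO", "quant-ph", "hep-th",
         "cs.LG", "gr-qc", "math-ph", "quant-ph", "cs.LG"]

print(f"narrow researcher: {broadness_entropy(narrow):.2f} bits")  # 0.00
print(f"broad researcher:  {broadness_entropy(broad):.2f} bits")
```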

URL : https://arxiv.org/abs/1805.04647