Do altmetrics assess societal impact in the same way as case studies? An empirical analysis testing the convergent validity of altmetrics based on data from the UK Research Excellence Framework (REF)

Authors : Lutz Bornmann, Robin Haunschild, Jonathan Adams

Altmetrics have been proposed as a way to assess the societal impact of research. Although altmetrics are already in use as impact or attention metrics in different contexts, it is still not clear whether they really capture or reflect societal impact.

This study is based on altmetrics, citation counts, research output and case study data from the UK Research Excellence Framework (REF), and peers’ REF assessments of research output and societal impact. We investigated the convergent validity of altmetrics by using two REF datasets: publications submitted as research output (PRO) to the REF and publications referenced in case studies (PCS).

Case studies, which are intended to demonstrate societal impact, should cite the most relevant research papers. We used the MHq’ indicator to assess impact – an indicator introduced for count data with many zeros.
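
The MHq’ itself is not spelled out in this summary. As a hypothetical minimal sketch, the example below assumes the indicator follows the classical Mantel-Haenszel odds-ratio construction over field/year strata, counting per stratum the papers with at least one mention versus none in the study group and in a reference set. The function name and data layout are illustrative assumptions, not the authors’ implementation.

```python
def mh_quotient(strata):
    """Mantel-Haenszel-style quotient over strata (hypothetical sketch).

    strata: iterable of dicts, one per field/year stratum, with counts of
    papers having at least one mention versus none:
      a: group papers with >= 1 mention,  b: group papers with 0 mentions,
      c: reference papers with >= 1 mention, d: reference papers with 0 mentions.
    Returns a pooled odds ratio: values > 1 mean the group's papers have
    higher odds of being mentioned than the reference set, controlling
    for stratum (and thus handling the many-zeros problem via 0/1 coding).
    """
    num = den = 0.0
    for s in strata:
        n = s["a"] + s["b"] + s["c"] + s["d"]
        if n == 0:
            continue
        num += s["a"] * s["d"] / n
        den += s["b"] * s["c"] / n
    return num / den if den else float("inf")

# Toy example: two strata in which the PCS-like group is mentioned more often.
strata = [
    {"a": 30, "b": 70, "c": 10, "d": 90},  # e.g. field X, year 2014
    {"a": 12, "b": 38, "c": 5,  "d": 95},  # e.g. field Y, year 2015
]
print(mh_quotient(strata))  # > 1: higher odds of having at least one mention
```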

The results of the first part of the analysis show that news media as well as mentions on Facebook, in blogs, in Wikipedia, and in policy-related documents have higher MHq’ values for PCS than for PRO.

Thus, the altmetric indicators seem to have convergent validity for these data. In the second part of the analysis, altmetrics were correlated with REF reviewers’ average scores on PCS. The negative or near-zero correlations call the convergent validity of altmetrics in that context into question.

We suggest that they may capture a different aspect of societal impact (which can be called unknown attention) from that seen by reviewers (who are interested in the causal link between research and action in society).
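
As a minimal illustration of the correlation analysis described above – a sketch with invented toy data, not the authors’ code or statistical setup – a rank correlation between an altmetric count and reviewers’ average scores can be computed as follows:

```python
from scipy.stats import spearmanr

# Hypothetical per-unit data: an altmetric count aggregated over PCS and the
# REF reviewers' average impact score for the same unit (toy values).
mention_counts = [4, 0, 12, 3, 0, 7, 1, 25]
reviewer_scores = [3.2, 2.8, 2.9, 3.5, 3.1, 2.7, 3.4, 3.0]

rho, p_value = spearmanr(mention_counts, reviewer_scores)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.2f})")
# A rho near zero or negative, as reported in the study, speaks against
# convergent validity of the altmetric with reviewers' impact judgments.
```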

URL : https://arxiv.org/abs/1807.03977

Creativity in Science and the Link to Cited References: Is the Creative Potential of Papers Reflected in their Cited References?

Authors : Iman Tahamtan, Lutz Bornmann

Several authors have proposed that a large number of unusual combinations of cited references in a paper point to its high creative potential (or novelty). However, it is still not clear whether the number of unusual combinations can really measure the creative potential of papers.

The current study addresses this question on the basis of several case studies from the field of scientometrics. We identified some landmark papers in this field. Study subjects were the corresponding authors of these papers.

We asked them where the ideas for these papers came from and what role the cited publications played. The results revealed that the creative ideas were not necessarily inspired by past publications.

Instead, we found that creative ideas result from finding solutions to practical problems and from discussions with colleagues, and profit from interdisciplinary exchange. The literature seems to be important mainly for contextualizing the idea in the field of scientometrics. The roots of the studied landmark papers are discussed in detail.

URL : https://arxiv.org/abs/1806.00224

Opium in science and society: Numbers

Authors : Julian N. Marewski, Lutz Bornmann

In science and beyond, numbers are omnipresent when it comes to justifying different kinds of judgments. Which scientific author, hiring-committee member, or advisory-board panelist has not been confronted with page-long « publication manuals », « assessment reports », or « evaluation guidelines » calling for p-values, citation rates, h-indices, or other statistics in order to motivate judgments about the « quality » of findings, applicants, or institutions?

Yet, many of those relying on and calling for statistics do not even seem to understand what information those numbers can actually convey, and what they cannot. Focusing on the uninformed usage of bibliometrics as a worrisome outgrowth of the increasing quantification of science and society, we place the abuse of numbers into larger historical contexts and trends.

These are characterized by a technology-driven bureaucratization of science, obsessions with control and accountability, and mistrust in human intuitive judgment. The ongoing digital revolution increases those trends.

We call for bringing sanity back into scientific judgment exercises. Despite all the number crunching, many judgments – be it about scientific output, scientists, or research institutions – will be neither unambiguous, uncontroversial, nor testable by external standards, nor can they be otherwise validated or objectified.

Under uncertainty, good human judgment remains, for the better, indispensable, but it can be aided, we conclude, by a toolbox of simple judgment tools called heuristics.

The research evaluators best positioned to use those heuristics are those who (1) have expertise in the to-be-evaluated area of research, (2) have profound knowledge of bibliometrics, and (3) are statistically literate.

URL : https://arxiv.org/abs/1804.11210

Allegation of scientific misconduct increases Twitter attention

Authors : Lutz Bornmann, Robin Haunschild

The web-based microblogging system Twitter is a very popular altmetrics source for measuring the broader impact of science. In this case study, we demonstrate how problematic the use of Twitter data for research evaluation can be, even when the aspiration is lowered from measuring impact to measuring attention.

We collected the Twitter data for the paper published by Yamamizu et al. (2017), for which an investigative committee found the main figures to be fraudulent.

URL : https://arxiv.org/abs/1802.00606

Betweenness and diversity in journal citation networks as measures of interdisciplinarity—A tribute to Eugene Garfield

Authors : Loet Leydesdorff, Caroline S. Wagner, Lutz Bornmann

Journals were central to Eugene Garfield’s research interests. Among other things, journals are considered as units of analysis for bibliographic databases such as the Web of Science and Scopus. In addition to providing a basis for disciplinary classifications of journals, journal citation patterns span networks across boundaries to variable extents.

Using betweenness centrality (BC) and diversity, we elaborate on the question of how to distinguish and rank journals in terms of interdisciplinarity. Interdisciplinarity, however, is difficult to operationalize in the absence of an operational definition of disciplines; the diversity of a unit of analysis is sample-dependent. BC can be considered as a measure of multi-disciplinarity.

Diversity of co-citation in a citing document has been considered as an indicator of knowledge integration, but an author can also generate trans-disciplinary—that is, non-disciplined—variation by citing sources from other disciplines.

Diversity in the bibliographic coupling among citing documents can analogously be considered as diffusion or differentiation of knowledge across disciplines. Because the citation networks in the cited direction reflect both structure and variation, diversity in this direction is perhaps the best available measure of interdisciplinarity at the journal level.

Furthermore, diversity is based on a summation and can therefore be decomposed; differences among (sub)sets can be tested for statistical significance. In the appendix, a general-purpose routine for measuring diversity in networks is provided.
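
The authors’ own general-purpose routine (see the paper’s appendix) is not reproduced here. As a hedged sketch, the example below computes betweenness centrality on a toy journal citation network with networkx and a Rao-Stirling-style diversity score, a common operationalization of diversity in this literature. All journal names, field shares, and distances are invented for the illustration.

```python
import networkx as nx

def rao_stirling(proportions, distance):
    """Rao-Stirling diversity: sum over category pairs of p_i * p_j * d_ij.

    proportions: dict mapping category -> share of citations (sums to 1).
    distance: dict mapping (category, category) -> dissimilarity in [0, 1];
    missing pairs default to maximal distance 1.0.
    """
    cats = list(proportions)
    return sum(
        proportions[i] * proportions[j] * distance.get((i, j), 1.0)
        for i in cats for j in cats if i != j
    )

# Toy journal citation network (nodes = journals, edges = citation links).
G = nx.Graph([
    ("J. Informetrics", "Scientometrics"),
    ("Scientometrics", "JASIST"),
    ("JASIST", "Soc. Stud. Sci."),
    ("Scientometrics", "Soc. Stud. Sci."),
])
bc = nx.betweenness_centrality(G)   # multi-disciplinarity proxy per journal
print(max(bc, key=bc.get))          # the most "between" journal

# One journal's citations spread over three fields, with pairwise distances.
p = {"LIS": 0.5, "CS": 0.3, "Sociology": 0.2}
d = {("LIS", "CS"): 0.4, ("CS", "LIS"): 0.4,
     ("LIS", "Sociology"): 0.6, ("Sociology", "LIS"): 0.6,
     ("CS", "Sociology"): 0.8, ("Sociology", "CS"): 0.8}
print(round(rao_stirling(p, d), 3))
```

Because the diversity score is a plain summation over category pairs, it can be decomposed: computing the same sum over a (sub)set of categories or documents yields that subset’s contribution, which is what makes significance testing of differences among subsets feasible.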


DOI : https://doi.org/10.1007/s11192-017-2528-2


To what extent does the Leiden Manifesto also apply to altmetrics? A discussion of the manifesto against the background of research into altmetrics

Authors : Lutz Bornmann, Robin Haunschild

Purpose

Hicks, Wouters, Waltman, de Rijcke, and Rafols (2015) formulated the so-called Leiden manifesto, in which they assembled ten principles for the meaningful evaluation of research on the basis of bibliometric data.

Approach

This work attempts to show the relevance of the Leiden manifesto for altmetrics.

Results

As the discussion of the ten principles against the background of current knowledge and research on altmetrics shows, the principles are also of great importance for altmetrics and should be taken into account in their application.

Originality

Altmetrics are already frequently used in research evaluation. It is therefore important that users of altmetrics data are aware that the Leiden manifesto is relevant in this area as well.

URL : https://figshare.com/articles/To_what_extent_does_the_Leiden_Manifesto_also_apply_to_altmetrics_A_discussion_of_the_manifesto_against_the_background_of_research_into_altmetrics/1464981