Social media metrics for new research evaluation

Authors : Paul Wouters, Zohreh Zahedi, Rodrigo Costas

This chapter approaches, from both a theoretical and a practical perspective, the most important principles and conceptual frameworks that can be considered in applying social media metrics to scientific evaluation.

We propose conceptually valid uses for social media metrics in research evaluation. The chapter discusses frameworks and uses of these metrics as well as principles and recommendations for the consideration and application of current (and potentially new) metrics in research evaluation.

URL :

A review of literature on evaluating the scientific, social and political impact of social sciences and humanities research

Authors : Emanuela Reale, Dragana Avramov, Kubra Canhial, Claire Donovan, Ramon Flecha, Poul Holm, Charles Larkin, Benedetto Lepori, Judith Mosoni-Fried, Esther Oliver, Emilia Primeri, Lidia Puigvert, Andrea Scharnhorst, Andràs Schubert, Marta Soler, Sàndor Soòs, Teresa Sordé, Charles Travis, René Van Horik

Recently, the need to contribute to the evaluation of the scientific, social, and political impact of Social Sciences and Humanities (SSH) research has become a demand of policy makers and society.

The international scientific community has made significant advances that have transformed the impact evaluation landscape. This article reviews the existing scientific knowledge on the evaluation tools and techniques applied to assess the scientific impact of SSH research. The changing structure of the social and political impacts of SSH research is investigated through an overarching research question: to what extent do scholars apply methods, instruments, and approaches that take into account the distinctive features of SSH?

The review also includes examples of European Union (EU) projects that demonstrate these impacts. This article culminates in a discussion of the development of the assessment of different impacts and identifies limitations, and areas and topics to explore in the future.

URL : A review of literature on evaluating the scientific, social and political impact of social sciences and humanities research


International journal rankings in the humanities and social sciences (SSH)

Authors : David Pontille, Didier Torny

Although several journal rankings were developed as early as the 1970s, the novelty of those that emerged during the 2000s lies in their status as instruments of public policy. This is the case in Australia, Brazil, France, Flanders, Norway, and the Netherlands, where this evaluation technology is in force for certain fields, notably the humanities and social sciences (SSH).

In this article, we analyse the modes of existence of this specific evaluation technology. Although the generic formula of the "journal ranking" is spreading internationally, different versions are developing in parallel: their modes of production, the values defended by their promoters and users, and their concrete forms vary widely.

We show that the space of variation of SSH journal rankings is always bounded by two options: favouring "good research", which, through cumulative advantage, risks leading to a (hyper)normal science that fosters dispositions of social conformity among researchers; or encouraging the emergence of minority communities (linguistic, disciplinary, interdisciplinary) and promoting the diversity of methods, theories, and objects, at the risk of leading to forms of relativism or an archipelisation of research.


Patent Citations Analysis and Its Value in Research Evaluation: A Review and a New Approach to Map Technology-relevant Research

Authors : Anthony F.J. van Raan


This paper has two aims. First, to review the state of the art in patent citation analysis, particularly the characteristics of patent citations to scientific literature (scientific non-patent references, SNPRs). Second, to present a novel mapping approach that identifies technology-relevant research based on the papers cited by and referring to the SNPRs.


In the review part we discuss the context of SNPRs, such as the time lags between scientific achievements and inventions. Patent-to-patent citation is also addressed, particularly because this type of citation analysis is a major element in assessing the economic value of patents.

We also review the research on the role of universities and researchers in technological development, with important issues such as universities as sources of technological knowledge and inventor-author relations.

We conclude the review part of this paper with an overview of recent research on mapping and network analysis of the science and technology interface and of technological progress in interaction with science.

In the second part we apply new techniques for the direct visualization of the cited and citing relations of SNPRs, the mapping of the landscape around SNPRs by bibliographic coupling and co-citation analysis, and the mapping of the conceptual environment of SNPRs by keyword co-occurrence analysis.
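The keyword co-occurrence step mentioned above can be illustrated with a minimal sketch; the papers and keywords below are invented, and real co-word mapping involves normalization and clustering well beyond this counting step:

```python
from collections import Counter
from itertools import combinations

# Hypothetical keyword lists for a handful of papers around the SNPRs.
papers = [
    ["semiconductor", "lithography", "photoresist"],
    ["semiconductor", "lithography", "etching"],
    ["lithography", "photoresist"],
]

def cooccurrence_counts(keyword_lists):
    """Count how often each unordered keyword pair appears in the same paper."""
    counts = Counter()
    for keywords in keyword_lists:
        for pair in combinations(sorted(set(keywords)), 2):
            counts[pair] += 1
    return counts

counts = cooccurrence_counts(papers)
# The most frequent pairs form the strongest edges of the co-word map.
print(counts[("lithography", "semiconductor")])  # appears in 2 papers
```

The resulting pair counts can then be fed to any network-layout or clustering tool to draw the conceptual environment of the SNPRs.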


We discuss several properties of SNPRs. Only a small minority of publications covered by the Web of Science or Scopus are cited by patents, about 3%–4%. However, for publications based on university-industry collaboration the number of SNPRs is considerably higher, around 15%.

The proposed mapping methodology based on a “second order SNPR approach” enables a better assessment of the technological relevance of research.

Research limitations

The main limitation is that a more advanced merging of patent and publication data, in particular the unification of author and inventor names, is still a necessity.

Practical implications

The proposed mapping methodology enables the creation of a database of technology-relevant papers (TRPs). In a bibliometric assessment, the publications of research groups, research programs, or institutes can be matched against the TRPs, thus measuring the extent to which the work of those groups, programs, or institutes is relevant for technological development.
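The matching step described above amounts to intersecting a unit's publication list with the TRP database; a minimal sketch, with hypothetical publication identifiers rather than the authors' actual data:

```python
# Hypothetical TRP database and group publication list (illustrative IDs only).
trp_database = {"pub-001", "pub-004", "pub-007", "pub-009"}
group_publications = {"pub-001", "pub-002", "pub-003", "pub-004", "pub-005"}

# Publications of the group that appear in the TRP database.
matched = group_publications & trp_database

# Share of the group's output that is technology-relevant.
trp_share = len(matched) / len(group_publications)

print(f"{len(matched)} of {len(group_publications)} papers are TRPs "
      f"({trp_share:.0%})")  # 2 of 5 papers are TRPs (40%)
```

In practice the hard part is not this intersection but the record linkage that builds the TRP database in the first place, as the limitations section notes.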


The review part examines a wide range of findings in the research of patent citation analysis. The mapping approach to identify a broad range of technology-relevant papers is novel and offers new opportunities in research evaluation practices.

URL : Patent Citations Analysis and Its Value in Research Evaluation: A Review and a New Approach to Map Technology-relevant Research



Evaluation of research activities of universities of Ukraine and Belarus: a set of bibliometric indicators and its implementation

Authors : Vladimir Lazarev, Serhii Nazarovets, Alexey Skalaban

Monitoring the bibliometric indicators used in university rankings is considered a task of the university library. To carry out a comparative assessment of the research activities of universities in Ukraine and Belarus, the authors introduced a set of bibliometric indicators.

A comparative assessment of the research activities of the corresponding universities was carried out, and data on the leading universities are presented. One of the indicators is sensitive to rapid changes in university research activity, and the other is normalized across fields of science; together, these properties give the proposed set an advantage over the one used in practice in the corresponding national rankings.
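Field normalization of the kind mentioned above is commonly computed by dividing each paper's citation count by the average for its field; a minimal sketch with invented numbers, not the authors' actual indicator:

```python
# Minimal sketch of a field-normalized citation indicator (MNCS-style).
# Papers and field baselines are invented for illustration.
papers = [
    {"citations": 10, "field": "physics"},
    {"citations": 2,  "field": "history"},
    {"citations": 5,  "field": "physics"},
]

# Average citations per paper in each field (the normalization baseline).
field_average = {"physics": 5.0, "history": 1.0}

def mean_normalized_citation_score(papers, field_average):
    """Average of each paper's citations divided by its field's average."""
    ratios = [p["citations"] / field_average[p["field"]] for p in papers]
    return sum(ratios) / len(ratios)

score = mean_normalized_citation_score(papers, field_average)
print(score)  # (2.0 + 2.0 + 1.0) / 3, i.e. above the baseline value of 1.0
```

Because each ratio is taken against its own field's baseline, a history paper with 2 citations counts the same as a physics paper with 10, which is what makes cross-field comparison defensible.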


Usage Bibliometrics as a Tool to Measure Research Activity

Authors : Edwin A. Henneken, Michael J. Kurtz

Measures for research activity and impact have become an integral ingredient in the assessment of a wide range of entities (individual researchers, organizations, instruments, regions, disciplines).

Traditional bibliometric indicators, such as publication- and citation-based indicators, provide an essential part of this picture, but cannot describe it completely.

Since reading scholarly publications is an essential part of the research life cycle, it is only natural to introduce measures for this activity in attempts to quantify the efficiency, productivity and impact of an entity.

Citations and reads are significantly different signals, so taken together, they provide a more complete picture of research activity. Most scholarly publications are now accessed online, making the study of reads and their patterns possible.

Click-stream logs allow us to follow information access by the entire research community in real time, whereas publication and citation datasets reflect only activity by authors. In addition, download statistics help us identify publications with significant impact that do not attract many citations.

Click-stream signals are arguably more complex than, say, citation signals. For one, they are a superposition of different classes of readers. Systematic downloads by crawlers also contaminate the signal, as does browsing behavior.

We discuss the complexities associated with clickstream data and how, with proper filtering, statistically significant relations and conclusions can be inferred from download statistics.
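As an illustration of such filtering, here is a minimal sketch that drops crawler hits and near-duplicate downloads; the log entries, user-agent markers, and deduplication window are invented, and production filtering is considerably more elaborate:

```python
from datetime import datetime, timedelta

# Hypothetical download log: (timestamp, user_agent, paper_id).
log = [
    (datetime(2024, 1, 1, 9, 0, 0),  "Mozilla/5.0",   "paper-A"),
    (datetime(2024, 1, 1, 9, 0, 1),  "GoogleBot/2.1", "paper-A"),  # crawler
    (datetime(2024, 1, 1, 9, 0, 2),  "Mozilla/5.0",   "paper-A"),  # repeat
    (datetime(2024, 1, 1, 10, 0, 0), "Mozilla/5.0",   "paper-A"),
]

CRAWLER_MARKERS = ("bot", "crawler", "spider")
DEDUP_WINDOW = timedelta(minutes=30)

def filtered_reads(log):
    """Drop crawler hits and repeat downloads of the same paper within a window."""
    reads = []
    last_seen = {}  # (user_agent, paper_id) -> last counted timestamp
    for ts, agent, paper in sorted(log):
        if any(marker in agent.lower() for marker in CRAWLER_MARKERS):
            continue  # systematic downloads by crawlers contaminate the signal
        key = (agent, paper)
        if key in last_seen and ts - last_seen[key] < DEDUP_WINDOW:
            continue  # browsing behavior: same reader, same paper, short gap
        last_seen[key] = ts
        reads.append((ts, agent, paper))
    return reads

print(len(filtered_reads(log)))  # 2 genuine reads survive the filtering
```

Only after this kind of cleaning can download counts be aggregated into the usage indicators discussed below.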

We describe how download statistics can be used to describe research activity at different levels of aggregation, ranging from organizations to countries. These statistics show a correlation with socio-economic indicators.

A comparison will be made with traditional bibliometric indicators. We will argue that astronomy is representative of more general trends.


A scientists’ view of scientometrics: Not everything that counts can be counted

Authors : Ralph Kenna, Olesya Mryglod, Bertrand Berche

Like it or not, attempts to evaluate and monitor the quality of academic research have become increasingly prevalent worldwide. Performance reviews range from the level of individuals, through research groups and departments, to entire universities.

Many of these reviews are informed by, or are functions of, simple scientometric indicators, and the results of such exercises affect careers, funding, and prestige. However, there is sometimes a failure to appreciate that scientometric indicators are, at best, very blunt instruments whose incorrect usage can be misleading.

Rather than accepting the rise and fall of individuals and institutions on the basis of such imprecise measures, calls have been made for indicators to be regularly scrutinised and for improvements to the evidence base in this area.

It is thus incumbent upon the scientific community, especially the physics, complexity-science, and scientometrics communities, to scrutinise metric indicators. Here, we review recent attempts to do this and show that some metrics in widespread use cannot be used as reliable indicators of research quality.