Authors : Vincent Traag, Ludo Waltman
When performing a national research assessment, some countries rely on citation metrics whereas others, such as the UK, primarily use peer review. The influential Metric Tide report found low agreement between metrics and peer review in the UK Research Excellence Framework (REF).
However, earlier studies observed much higher agreement between metrics and peer review in the REF and argued in favour of using metrics. This shows that there is considerable ambiguity in the discussion on agreement between metrics and peer review.
We provide clarity in this discussion by considering four important points: (1) the level of aggregation of the analysis; (2) the use of either a size-dependent or a size-independent perspective; (3) the suitability of different measures of agreement; and (4) the uncertainty in peer review.
In the context of the REF, we argue that agreement between metrics and peer review should be assessed at the institutional level rather than at the publication level. Both a size-dependent and a size-independent perspective are relevant in the REF.
Because the interpretation of correlations may be problematic, we instead use measures of agreement that are based on the absolute or relative differences between metrics and peer review.
To gauge the uncertainty in peer review, we rely on a model to bootstrap peer review outcomes. We conclude that, particularly in Physics, Clinical Medicine, and Public Health, metrics agree quite well with peer review and may offer an alternative to it.
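The bootstrapping of peer review outcomes mentioned above can be illustrated with a minimal sketch. This is not the authors' actual model, only a generic resampling example: hypothetical REF-style star scores for one submission are resampled with replacement to estimate how much the institutional mean score could vary by chance.

```python
import random

def bootstrap_mean_scores(scores, n_resamples=1000, seed=42):
    """Resample peer review scores with replacement and return the
    distribution of resampled mean scores (a measure of uncertainty)."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_resamples):
        sample = [rng.choice(scores) for _ in scores]
        means.append(sum(sample) / len(sample))
    return means

# Hypothetical REF-style scores (0-4 stars) for one submission.
scores = [4, 3, 3, 4, 2, 3, 4, 3, 3, 2]
means = bootstrap_mean_scores(scores)
means_sorted = sorted(means)
low, high = means_sorted[25], means_sorted[-26]  # rough 95% interval
```

The width of the interval between `low` and `high` gives a sense of how much peer review outcomes could fluctuate, which is the kind of uncertainty the comparison with metrics needs to account for.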
URL : https://arxiv.org/abs/1808.03491
Authors : Ari Melo Mariano, Maíra Rocha Santos
Measurement is a complicated but necessary task. Many indices have been created in an effort to define the quality of the knowledge produced, but they have attracted strong criticism, becoming synonymous with individualism, competition and mere productivity. Furthermore, they fail to steer science towards addressing local demands or towards producing international knowledge through collaboration.
Institutions, countries, publishers, governments and authors have a latent need to create quality and productivity indices because they can serve as filters that influence far-reaching decision making and even decisions on the professional promotion of university teachers.
Even so, in the present-day context, the very creators of those indices admit that they were not designed for that purpose, given that the research area, the age of the researcher, the country and the language spoken all influence the index calculations.
Accordingly, this research sets out three indices designed to steer science towards its universal objective by valuing collaboration and the dissemination of knowledge.
It is hoped that the proposed indices may provoke new discussions and the proposal of new, more assertive indicators for the analysis of scientific research quality.
URL : https://arxiv.org/abs/1807.07595
Author : Javier Arias
Open Access has matured for journals, but its uptake in the book market still lags behind, despite the fact that books remain the leading publishing format in the social sciences and humanities.
The 30-month EU-funded project HIRMEOS (High Integration of Research Monographs in the European Open Science infrastructure) tackles the main obstacles to the full integration of five important digital platforms supporting open access monographs.
The content of the participating platforms will be enriched with tools that enable identification, authentication and interoperability (via DOI, ORCID, Fundref), tools for information enrichment and entity extraction (INRIA (N)ERD), the ability to annotate monographs (Hypothes.is), and the collection of usage and alternative metric data.
This paper focuses on the development and implementation of Open Source Metrics Services that enable the collection of OA Metrics and Altmetrics from third-party platforms, and on how the architecture of these tools will allow implementation in any external platform, particularly by start-up Open Access publishers.
URL : https://hal.archives-ouvertes.fr/hal-01816811
Authors : David M. Nichols, Michael B. Twidale
The characterization of scholarly communication is dominated by citation-based measures. In this paper we propose several metrics to describe different facets of open access and open research.
We discuss measures to represent the public availability of articles along with their archival location, licenses, access costs, and supporting information. Calculations illustrating these new metrics are presented using the authors’ publications.
We argue that explicit measurement of openness is necessary for a holistic description of research outputs.
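The kind of openness measurement proposed above can be sketched as follows. The records and field names here are hypothetical illustrations, not the authors' actual schema or metrics: each publication is described by a few openness facets (public availability, archival copy, license, access cost), and each metric is simply the share of publications for which a facet holds.

```python
# Hypothetical publication records; the fields are illustrative only.
publications = [
    {"title": "Paper A", "publicly_available": True,  "license": "CC BY",
     "archival_copy": True,  "access_cost": 0},
    {"title": "Paper B", "publicly_available": True,  "license": None,
     "archival_copy": False, "access_cost": 0},
    {"title": "Paper C", "publicly_available": False, "license": None,
     "archival_copy": False, "access_cost": 35},
]

def openness_share(pubs, field):
    """Share of publications for which a boolean openness facet holds."""
    return sum(1 for p in pubs if p[field]) / len(pubs)

available = openness_share(publications, "publicly_available")  # 2/3
archived = openness_share(publications, "archival_copy")        # 1/3
licensed = sum(1 for p in publications if p["license"]) / len(publications)
```

Reporting several such facet-level shares side by side, rather than a single score, matches the paper's argument that openness has multiple dimensions worth measuring explicitly.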
URL : http://hdl.handle.net/10289/10842
Authors : Qing Ke, Yong-Yeol Ahn, Cassidy R. Sugimoto
Metrics derived from Twitter and other social media—often referred to as altmetrics—are increasingly used to estimate the broader social impacts of scholarship. Such efforts, however, may produce highly misleading results, as the entities that participate in conversations about science on these platforms are largely unknown.
For instance, if altmetric activities are generated mainly by scientists, do they really capture the broader social impacts of science? Here we present a systematic approach to identifying and analyzing scientists on Twitter.
Our method can identify scientists across many disciplines without relying on external bibliographic data, and can easily be adapted to identify other stakeholder groups in science.
We investigate the demographics, sharing behaviors, and interconnectivity of the identified scientists.
We find that Twitter has been employed by scholars across the disciplinary spectrum, with an over-representation of social and computer and information scientists; under-representation of mathematical, physical, and life scientists; and a better representation of women compared to scholarly publishing.
Analysis of the sharing of URLs reveals a distinct imprint of scholarly sites, yet only a small fraction of shared URLs are science-related. We find an assortative mixing with respect to disciplines in the networks between scientists, suggesting the maintenance of disciplinary walls in social media.
Our work contributes to the literature both methodologically and conceptually—we provide new methods for disambiguating and identifying particular actors on social media and describing the behaviors of scientists, thus providing foundational information for the construction and use of indicators on the basis of social media metrics.
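The identification task described above can be illustrated with a deliberately naive sketch. The paper's actual method is more sophisticated; this stand-in merely screens profile bios against a small, hypothetical keyword list to show the basic idea of flagging likely scientists from profile text.

```python
import re

# Illustrative keyword list only; not the terms used by the paper.
SCIENTIST_TERMS = {"professor", "postdoc", "phd", "researcher",
                   "scientist", "lecturer"}

def looks_like_scientist(bio):
    """Very rough screen: does the profile bio mention a research role?"""
    tokens = set(re.findall(r"[a-z]+", bio.lower()))
    return bool(tokens & SCIENTIST_TERMS)

bios = [
    "Professor of physics, coffee enthusiast",
    "Dog photos and football takes",
    "PhD student in computational biology",
]
flagged = [b for b in bios if looks_like_scientist(b)]
```

A real pipeline would need disambiguation and validation steps on top of any such screen, which is precisely the methodological contribution the abstract highlights.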
Title : A systematic identification and analysis of scientists on Twitter
DOI : https://doi.org/10.1371/journal.pone.0175368
« This report presents the findings and recommendations of the Independent Review of the Role of Metrics in Research Assessment and Management. The review was chaired by Professor James Wilsdon, supported by an independent and multidisciplinary group of experts in scientometrics, research funding, research policy, publishing, university management and administration.
This review has gone beyond earlier studies to take a deeper look at potential uses and limitations of research metrics and indicators. It has explored the use of metrics across different disciplines, and assessed their potential contribution to the development of research excellence and impact. It has analysed their role in processes of research assessment, including the next cycle of the Research Excellence Framework (REF). It has considered the changing ways in which universities are using quantitative indicators in their management systems, and the growing power of league tables and rankings. And it has considered the negative or unintended effects of metrics on various aspects of research culture. »
URL : http://microblogging.infodocs.eu/wp-content/uploads/2015/07/2015_metric_tide.pdf
Related URL : http://www.hefce.ac.uk/media/HEFCE,2014/Content/Pubs/Independentresearch/2015/The,Metric,Tide/2015_metric_tide.pdf
« Evaluating and comparing the academic performance of a journal, a researcher or a single paper has long remained a critical, necessary but also controversial issue. Most of existing metrics invalidate comparison across different fields of science or even between different types of papers in the same field. This paper proposes a new metric, called return on citation (ROC), which is simply a citation ratio but applies to evaluating the paper, the journal and the researcher in a consistent way, allowing comparison across different fields of science and between different types of papers and discouraging unnecessary and coercive/self-citation. »
URL : http://arxiv.org/abs/1412.8420