Scholar Metrics Scraper (SMS): automated retrieval of citation and author data

Authors : Yutong Cao, Nicole A. Cheung, Dean Giustini, Jeffrey LeDue, Timothy H. Murphy

Academic departments, research clusters and evaluators analyze author and citation data to measure research impact and to support strategic planning. We created Scholar Metrics Scraper (SMS) to automate the retrieval of bibliometric data for a group of researchers.

The project contains Jupyter notebooks that take a list of researchers as input and export a CSV file of citation metrics from Google Scholar (GS) to visualize the group’s impact and collaboration. A series of graph outputs is also produced. SMS is an open solution for automating the retrieval and visualization of citation data.
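
To make the workflow concrete, here is a minimal sketch of the kind of pipeline SMS automates: a list of researcher names in, a CSV of Google Scholar citation metrics out. It assumes the third-party scholarly Python package as the GS client and illustrative field names; the actual SMS notebooks may query Google Scholar differently or export additional columns.

```python
# Minimal sketch of an SMS-style pipeline: researcher list in, citation-metrics CSV out.
# Assumes the third-party `scholarly` package (pip install scholarly); the real SMS
# notebooks may differ in how they query Google Scholar and which fields they export.
import csv
from scholarly import scholarly

researchers = ["Researcher One", "Researcher Two"]  # hypothetical input list

rows = []
for name in researchers:
    try:
        author = next(scholarly.search_author(name))              # first GS profile match
        author = scholarly.fill(author, sections=["basics", "indices"])
        rows.append({
            "name": author.get("name", name),
            "affiliation": author.get("affiliation", ""),
            "citations": author.get("citedby", 0),
            "h_index": author.get("hindex", 0),
        })
    except StopIteration:                                         # no GS profile found
        rows.append({"name": name, "affiliation": "", "citations": "", "h_index": ""})

with open("citation_metrics.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "affiliation", "citations", "h_index"])
    writer.writeheader()
    writer.writerows(rows)
```

The resulting CSV can then be loaded into the visualization notebooks (for example with pandas) to plot group-level impact and collaboration.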

URL : Scholar Metrics Scraper (SMS): automated retrieval of citation and author data

DOI : https://doi.org/10.3389/frma.2024.1335454

Exploring National Infrastructures to Support Impact Analyses of Publicly Accessible Research: A Need for Trust, Transparency and Collaboration at Scale

Authors : Jennifer Kemp, Charles Watkinson, Christina Drummond

The collection of usage data on research outputs such as books and journals is well established in the scholarly community. However, as research impact is derived from a broader set of scholarly outputs, such as data, code and multimedia, more holistic usage and impact metrics could inform national innovation and research policy.

Usage data reporting standards, such as Project COUNTER, provide the basis for shared statistics reporting practice; however, as mandated access to publicly funded research has increased the demand for impact metrics and analytics, stakeholders are exploring how to scaffold and strengthen shared infrastructure to better support the trusted, multi-stakeholder exchange of usage data across a variety of outputs.

In April 2023, a workshop on Exploring National Infrastructure for Public Access and Impact Reporting, supported by the United States (US) National Science Foundation (NSF), explored these issues. This paper contextualizes the resources shared and recommendations generated in the workshop.

DOI : https://dx.doi.org/10.7302/22166

Judging Journals: How Impact Factor and Other Metrics Differ across Disciplines

Authors : Quinn Galbraith, Alexandra Carlile Butterfield, Chase Cardon

Given academia’s frequent use of publication metrics and the inconsistencies in metrics across disciplines, this study examines how various disciplines are treated differently by metrics systems. We seek to offer academic librarians, university rank and tenure committees, and other interested individuals guidelines for distinguishing general differences between journal bibliometrics in various disciplines.

This study addresses the following questions: How well represented are different disciplines in the indexing of each metrics system (Eigenfactor, Scopus, Web of Science, Google Scholar)? How does each metrics system treat disciplines differently, and how do these differences compare across metrics systems?

For university libraries and academic librarians, this study may increase understanding of the comparative value of various metrics, which hopefully will facilitate more informed decisions regarding the purchase of journal subscriptions and the evaluation of journals and metrics systems.

This study indicates that different metrics systems prioritize different disciplines, and metrics are not always easily compared across disciplines. Consequently, this study indicates that simple reliance on metrics in publishing or purchasing decisions is often flawed.

URL : Judging Journals: How Impact Factor and Other Metrics Differ across Disciplines

DOI : https://doi.org/10.5860/crl.84.6.888

Development and preliminary validation of an open access, open data and open outreach indicator

Authors : Evgenios Vlachos, Regine Ejstrup, Thea Marie Drachen, Bertil Fabricius Dorch

We present the development and preliminary validation of a new person-centered indicator that we propose naming “OADO” after its target concepts: Open Access (OA), Open Data (OD) and Open Outreach (OO).

The indicator comprises two factors: a research factor, indicating the degree of OA articles and OD in research, and a communication factor, indicating the degree of OO in the communication activities in which a researcher has participated. We stipulate that the weighted version of this new indicator, the Weighted-OADO, can be used to assess the openness of researchers in relation to their peers from their own discipline, department, or even group/center.
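
As a rough illustration only, a two-factor openness indicator of this kind could be computed along the following lines; the fractions and weights below are assumptions made for the sake of the example, not the formula defined in the paper.

```python
# Toy sketch of a two-factor openness score; the fractions and weights are illustrative
# assumptions, not the OADO formula defined by the authors.
def openness_score(n_articles, n_oa_articles, n_datasets, n_od_datasets,
                   n_activities, n_oo_activities,
                   w_research=0.5, w_communication=0.5):
    # Research factor: share of articles that are OA and datasets that are OD.
    research = ((n_oa_articles + n_od_datasets) / (n_articles + n_datasets)
                if (n_articles + n_datasets) else 0.0)
    # Communication factor: share of communication activities that are open outreach.
    communication = n_oo_activities / n_activities if n_activities else 0.0
    unweighted = (research + communication) / 2                          # plain combination
    weighted = w_research * research + w_communication * communication   # "weighted" variant
    return unweighted, weighted

# Example: 6 of 10 articles OA, 2 of 4 datasets OD, 3 of 5 activities open outreach.
print(openness_score(10, 6, 4, 2, 5, 3))
```

Scores like these only become meaningful when compared against peers in the same discipline, department, or group, which is how the authors propose the Weighted-OADO be used.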

The OADO is developed and customized to the needs of Elsevier’s Research Information Management System (RIMS) environment, Pure. This offers the advantage of more accurate interpretations and recommendations for action, as well as the possibility of implementation (and further validation) by multiple institutions, allowing disciplinary comparisons of open practices across institutes.

Therefore, the OADO provides recommendations for action and enables institutes to make informed decisions based on the indicator’s outcome. To test the validity of the OADO, we retrieved the Pure publication records from two departments for each of the five faculties of the University of Southern Denmark and calculated the OADO of 995 researchers in total.

We checked for definition validity, actionability, transferability, possibility of unexpected discontinuities of the indicator, factor independence, normality of the indicator’s distributions across the departments, and indicator reliability.

Our findings reveal that the OADO is a reliable indicator for departments whose Weighted-OADO values are normally distributed. Unfortunately, only two departments displayed normal distributions, one from the health sciences and one from engineering.

For departments where the normality assumption is not satisfied, the OADO can still be useful as it can indicate the need for making a greater effort toward openness, and/or act as an incentive for detailed registration of research outputs and datasets.

URL : Development and preliminary validation of an open access, open data and open outreach indicator

DOI : https://doi.org/10.3389/frma.2023.1218213

Metrics and peer review agreement at the institutional level

Authors : Vincent A Traag, Marco Malgarini, Scipione Sarlo

Over the past decades, many countries have started to fund academic institutions based on the evaluation of their scientific performance. In this context, post-publication peer review is often used to assess scientific performance. Bibliometric indicators have been suggested as an alternative to peer review.

A recurrent question in this context is whether peer review and metrics tend to yield similar outcomes. In this paper, we study the agreement between bibliometric indicators and peer review based on a sample of publications submitted for evaluation to the national Italian research assessment exercise (2011–2014).

In particular, we study the agreement between bibliometric indicators and peer review at a higher aggregation level, namely the institutional level. Additionally, we also quantify the internal agreement of peer review at the institutional level. We base our analysis on a hierarchical Bayesian model using cross-validation.
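
As a simplified illustration of what institution-level agreement means here (and not the paper’s hierarchical Bayesian model), the sketch below aggregates hypothetical publication-level scores per institution and then compares the metrics-versus-review agreement with the internal reviewer-versus-reviewer agreement.

```python
# Simplified illustration of institution-level agreement; the paper itself uses a
# hierarchical Bayesian model with cross-validation, not this direct correlation.
import statistics  # statistics.correlation requires Python 3.10+

# Hypothetical records: (institution, bibliometric score, reviewer 1 score, reviewer 2 score)
records = [
    ("inst_A", 0.9, 0.8, 0.7), ("inst_A", 0.6, 0.5, 0.6),
    ("inst_B", 0.4, 0.3, 0.5), ("inst_B", 0.7, 0.6, 0.6),
    ("inst_C", 0.2, 0.4, 0.3), ("inst_C", 0.5, 0.5, 0.4),
]

def institution_means(column):
    """Average the chosen score (tuple index) within each institution."""
    sums, counts = {}, {}
    for rec in records:
        inst = rec[0]
        sums[inst] = sums.get(inst, 0.0) + rec[column]
        counts[inst] = counts.get(inst, 0) + 1
    return {inst: sums[inst] / counts[inst] for inst in sums}

metric, rev1, rev2 = institution_means(1), institution_means(2), institution_means(3)
insts = sorted(metric)

# Agreement of metrics with peer review at the institutional level.
metrics_vs_review = statistics.correlation([metric[i] for i in insts], [rev1[i] for i in insts])
# Internal peer review agreement: reviewer 1 versus reviewer 2.
review_vs_review = statistics.correlation([rev1[i] for i in insts], [rev2[i] for i in insts])
print(metrics_vs_review, review_vs_review)
```

Comparing the two correlations gives a rough sense of the paper’s benchmark: metrics “agree” with peer review well if their institution-level agreement approaches the agreement between two independent reviewers.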

We find that the level of agreement is generally higher at the institutional level than at the publication level. Overall, the agreement between metrics and peer review is on par with the internal agreement between two reviewers for certain fields of science in this particular context.

This suggests that for some fields, bibliometric indicators may be considered as an alternative to peer review for the Italian national research assessment exercise. Although the results do not necessarily generalise to other contexts, they do raise the question of whether similar findings would hold for other research assessment exercises, such as in the United Kingdom.

URL : https://arxiv.org/abs/2006.14830

Deep Impact: A Study on the Impact of Data Papers and Datasets in the Humanities and Social Sciences

Authors : Barbara McGillivray, Paola Marongiu, Nilo Pedrazzini, Marton Ribary, Mandy Wigdorowitz, Eleonora Zordan

The humanities and social sciences (HSS) have recently witnessed an exponential growth in data-driven research. In response, attention has been afforded to datasets and accompanying data papers as outputs of the research and dissemination ecosystem.

In 2015, two data journals dedicated to HSS disciplines appeared in this landscape: Journal of Open Humanities Data (JOHD) and Research Data Journal for the Humanities and Social Sciences (RDJ).

In this paper, we analyse the state of the art in the landscape of HSS data journals, using JOHD and RDJ as exemplars, by measuring the performance and deep impact of data-driven projects, including metrics of data papers (citation counts; altmetrics such as views, downloads, and tweets) in relation to associated research papers and the reuse of associated datasets.

Our findings indicate that data papers are published following the deposit of datasets in a repository and usually following research articles; that data papers have a positive impact both on the metrics of the research papers associated with them and on data reuse; and that Twitter hashtags targeted at specific research campaigns can lead to increases in data papers’ views and downloads.

HSS data papers improve the visibility of datasets they describe, support accompanying research articles, and add to transparency and the open research agenda.

URL : Deep Impact: A Study on the Impact of Data Papers and Datasets in the Humanities and Social Sciences

DOI : https://doi.org/10.3390/publications10040039

Implementing the Declaration on Research Assessment: a publisher case study

Authors : Victoria Gardner, Mark Robinson, Elisabetta O’Connell

There has been much debate around the role of metrics in scholarly communication, with particular focus on the misapplication of journal metrics, such as the impact factor, in the assessment of research and researchers.

Various initiatives have advocated for a change in this culture, including the Declaration on Research Assessment (DORA), which invites stakeholders throughout the scholarly communication ecosystem to sign up and show their support for practices designed to address the misuse of metrics.

This case study provides an overview of the process undertaken by a large academic publisher (Taylor & Francis Group) in signing up to DORA and implementing some of its key practices in the hope that it will provide some guidance to others considering becoming a signatory.

Our experience suggests that research, consultation and flexibility are crucial components of the process. Additionally, approaching signing with a project mindset versus a ‘sign and forget’ mentality can help organizations to understand the practical implications of signing, to anticipate and mitigate potential obstacles and to support cultural change.

URL : Implementing the Declaration on Research Assessment: a publisher case study

DOI : http://doi.org/10.1629/uksg.573