Does Open Access Foster Interdisciplinary Citation? Decomposing Open Access Citation Advantage

Authors : Kai Nishikawa, Akiyoshi Murakami

The existence of an open access (OA) citation advantage, that is, whether OA increases citations, has been a topic of interest for many years. Numerous studies have examined this question, but expectations for OA go beyond raw citation counts: one such expectation is the promotion of knowledge transfer across fields.

This study aimed to clarify whether OA, especially gold OA, increases interdisciplinary citations in various natural science fields. Specifically, we measured the effect of OA on interdisciplinary and within-discipline citation counts by decomposing an existing metric of the OA citation advantage.
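
The abstract does not spell out the metric being decomposed. As a minimal sketch, assume the advantage is measured as the ratio of mean citation counts between OA and non-OA papers, and that each paper's citations have been pre-classified as within-discipline or interdisciplinary; the decomposition then applies the same ratio to each component separately. All names and data below are hypothetical, not the authors' metric:

    from statistics import mean

    def citation_advantage(papers, component):
        """Ratio of mean citation counts, OA vs. non-OA, for one
        citation component ("within" or "inter")."""
        oa = [p[component] for p in papers if p["is_oa"]]
        non_oa = [p[component] for p in papers if not p["is_oa"]]
        return mean(oa) / mean(non_oa)

    # Toy records: per-paper citation counts split by citation origin.
    papers = [
        {"is_oa": True, "within": 12, "inter": 6},
        {"is_oa": True, "within": 8, "inter": 5},
        {"is_oa": False, "within": 10, "inter": 2},
        {"is_oa": False, "within": 6, "inter": 3},
    ]

    print(citation_advantage(papers, "within"))  # within-discipline advantage
    print(citation_advantage(papers, "inter"))   # interdisciplinary advantage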

The results revealed that OA increases both interdisciplinary and within-discipline citations in many fields, and increases only interdisciplinary citations in chemistry, computer science, and clinical medicine. Among these fields, clinical medicine tends to obtain more interdisciplinary citations, and this tendency is not driven by specific journals or papers.

The findings indicate that OA fosters knowledge transfer to different fields, which extends our understanding of its effects.

arXiv : https://arxiv.org/abs/2411.14653

A survey of how biology researchers assess credibility when serving on grant and hiring committees

Authors : Iain Hrynaszkiewicz, Beruria Novich, James Harney, Veronique Kiermer

Researchers who serve on grant review and hiring committees have to make decisions about the intrinsic value of research in short periods of time, and research impact metrics such as the Journal Impact Factor (JIF) exert undue influence on these decisions. Initiatives such as the Coalition for Advancing Research Assessment (CoARA) and the Declaration on Research Assessment (DORA) emphasize responsible use of quantitative metrics and avoidance of journal-based impact metrics for research assessment. Further, our previous qualitative research suggested that assessing the credibility, or trustworthiness, of research is important to researchers not only when they seek to inform their own research but also in the context of research assessment committees.

To confirm our findings from previous interviews in quantitative terms, we surveyed 485 biology researchers who have served on committees for grant review or hiring and promotion decisions, to understand how they assess the credibility of research outputs in these contexts. We found that concepts like credibility, trustworthiness, quality, and impact lack consistent definitions and interpretations among researchers, consistent with what we had observed in our interviews.

We also found that assessing credibility is very important to most (81%) of the researchers serving on these committees, but fewer than half of respondents are satisfied with their ability to assess it. A substantial proportion of respondents (57%) report using journal reputation and the JIF to assess credibility, proxies that research assessment reformers consider inappropriate because they do not reflect intrinsic characteristics of the research.

This gap between the importance of an assessment and satisfaction with the ability to conduct it was reflected across the multiple aspects of credibility we tested, and it was greatest for researchers seeking to assess the integrity of research (such as identifying signs of fabrication, falsification, or plagiarism) and the suitability and completeness of research methods. Non-traditional research outputs associated with Open Science practices, such as shared research data, code, protocols, and preprints, are particularly hard for researchers to assess, despite the potential of Open Science practices to signal trustworthiness.

Our results suggest opportunities to develop better guidance and better signals to support the evaluation of research credibility and trustworthiness, and ultimately to support research assessment reform, away from the use of inappropriate proxies for impact and towards assessing the intrinsic characteristics and values researchers see as important.

DOI : https://doi.org/10.31222/osf.io/ht836

Scholar Metrics Scraper (SMS): automated retrieval of citation and author data

Authors : Yutong Cao, Nicole A. Cheung, Dean Giustini, Jeffrey LeDue, Timothy H. Murphy

Academic departments, research clusters and evaluators analyze author and citation data to measure research impact and to support strategic planning. We created Scholar Metrics Scraper (SMS) to automate the retrieval of bibliometric data for a group of researchers.

The project contains Jupyter notebooks that take a list of researchers as input and export a CSV file of citation metrics from Google Scholar (GS), which can be used to visualize the group’s impact and collaboration. A series of graph outputs is also available. SMS is an open solution for automating the retrieval and visualization of citation data.
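
SMS's own notebooks are the reference implementation; as an illustration of the same list-in, CSV-out workflow, here is a minimal sketch using the third-party scholarly package. This is an assumption for illustration only; SMS may query Google Scholar differently:

    import csv
    from scholarly import scholarly

    researchers = ["Jane Doe", "John Smith"]  # hypothetical input list

    rows = []
    for name in researchers:
        # Take the first Google Scholar profile match, if any.
        author = next(scholarly.search_author(name), None)
        if author is None:
            continue
        author = scholarly.fill(author, sections=["basics", "indices"])
        rows.append({
            "name": author.get("name"),
            "citations": author.get("citedby"),
            "h_index": author.get("hindex"),
        })

    with open("citation_metrics.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["name", "citations", "h_index"])
        writer.writeheader()
        writer.writerows(rows)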

URL : Scholar Metrics Scraper (SMS): automated retrieval of citation and author data

DOI : https://doi.org/10.3389/frma.2024.1335454

Exploring National Infrastructures to Support Impact Analyses of Publicly Accessible Research: A Need for Trust, Transparency and Collaboration at Scale

Authors : Jennifer Kemp, Charles Watkinson, Christina Drummond

The collection of usage data on research outputs such as books and journals is well established in the scholarly community. However, as research impact is derived from a broader set of scholarly outputs, such as data, code, and multimedia, more holistic usage and impact metrics could inform national innovation and research policy.

Usage data reporting standards, such as Project COUNTER, provide the basis for shared statistics reporting practice; however, as mandated access to publicly funded research has increased the demand for impact metrics and analytics, stakeholders are exploring how to scaffold and strengthen shared infrastructure to better support the trusted, multi-stakeholder exchange of usage data across a variety of outputs.

In April 2023, a workshop on Exploring National Infrastructure for Public Access and Impact Reporting, supported by the United States (US) National Science Foundation (NSF), explored these issues. This paper contextualizes the resources shared and the recommendations generated in the workshop.

DOI : https://doi.org/10.7302/22166

Judging Journals: How Impact Factor and Other Metrics Differ across Disciplines

Authors : Quinn Galbraith, Alexandra Carlile Butterfield, Chase Cardon

Given academia’s frequent use of publication metrics and the inconsistency of metrics across disciplines, this study examines how various disciplines are treated by different metrics systems. We seek to offer academic librarians, university rank and tenure committees, and other interested individuals guidelines for distinguishing general differences between journal bibliometrics in various disciplines.

This study addresses the following questions: How well represented are different disciplines in the indexing of each metrics system (Eigenfactor, Scopus, Web of Science, Google Scholar)? How does each metrics system treat disciplines differently, and how do these differences compare across metrics systems?

For university libraries and academic librarians, this study may increase understanding of the comparative value of various metrics, which we hope will facilitate more informed decisions regarding the purchase of journal subscriptions and the evaluation of journals and metrics systems.

This study indicates that different metrics systems prioritize different disciplines and that metrics are not always easily compared across disciplines. Consequently, simple reliance on metrics in publishing or purchasing decisions is often flawed.

URL : Judging Journals: How Impact Factor and Other Metrics Differ across Disciplines

DOI : https://doi.org/10.5860/crl.84.6.888

Development and preliminary validation of an open access, open data and open outreach indicator

Authors : Evgenios Vlachos, Regine Ejstrup, Thea Marie Drachen, Bertil Fabricius Dorch

We present the development and preliminary validation of a new person-centered indicator, which we propose to name “OADO” after its target concepts: Open Access (OA), Open Data (OD), and Open Outreach (OO).

The indicator comprises two factors: a research factor, indicating the degree of OA articles and OD in a researcher’s outputs, and a communication factor, indicating the degree of OO in the communication activities in which a researcher has participated. We stipulate that the weighted version of this new indicator, the Weighted-OADO, can be used to assess the openness of researchers relative to their peers from their own discipline, department, or even group/center.
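
The abstract does not give the indicator's formula. A hedged sketch of how a two-factor, weighted indicator of this shape could be composed, where the proportions and the weights $w_R$ and $w_C$ are assumptions rather than the authors' definition:

    F_R = \frac{n_{\mathrm{OA}} + n_{\mathrm{OD}}}{n_{\mathrm{articles}} + n_{\mathrm{datasets}}}, \qquad
    F_C = \frac{n_{\mathrm{OO}}}{n_{\mathrm{activities}}}, \qquad
    \text{Weighted-OADO} = w_R F_R + w_C F_C, \quad w_R + w_C = 1

Here $F_R$ is the research factor (the share of outputs that are OA or have open data) and $F_C$ is the communication factor (the share of communication activities involving open outreach).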

The OADO is developed and customized to the needs of Elsevier’s Research Information Management System (RIMS) environment, Pure. This offers the advantage of more accurate interpretations and recommendations for action, as well as the possibility to be implemented (and further validated) by multiple institutions, allowing disciplinary comparisons of the open practices across multiple institutes.

Therefore, the OADO provides recommendations for action, and enables institutes to make informed decisions based on the indicator’s outcome. To test the validity of the OADO, we retrieved the Pure publication records from two departments for each of the five faculties of the University of Southern Denmark and calculated the OADO of 995 researchers in total.

We checked for definition validity, actionability, transferability, possibility of unexpected discontinuities of the indicator, factor independence, normality of the indicator’s distributions across the departments, and indicator reliability.
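
As an illustration of the normality check only (the paper's actual test is not named in this abstract), a per-department Shapiro-Wilk test on hypothetical Weighted-OADO scores might look like this:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    departments = {
        "health_dept": rng.normal(0.6, 0.1, 80),      # roughly normal scores
        "humanities_dept": rng.exponential(0.3, 80),  # skewed scores
    }

    for dept, scores in departments.items():
        stat, p = stats.shapiro(scores)
        verdict = "plausibly normal" if p > 0.05 else "non-normal"
        print(f"{dept}: W={stat:.3f}, p={p:.3f} -> {verdict}")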

Our findings reveal that the OADO is a reliable indicator for departments whose Weighted-OADO values are normally distributed. Unfortunately, only two departments displayed normal distributions, one from the health sciences and one from engineering.

For departments where the normality assumption is not satisfied, the OADO can still be useful as it can indicate the need for making a greater effort toward openness, and/or act as an incentive for detailed registration of research outputs and datasets.

URL : Development and preliminary validation of an open access, open data and open outreach indicator

DOI : https://doi.org/10.3389/frma.2023.1218213

Metrics and peer review agreement at the institutional level

Authors : Vincent A. Traag, Marco Malgarini, Scipione Sarlo

In the past decades, many countries have started to fund academic institutions based on the evaluation of their scientific performance. In this context, post-publication peer review is often used to assess scientific performance. Bibliometric indicators have been suggested as an alternative to peer review.

A recurrent question in this context is whether peer review and metrics tend to yield similar outcomes. In this paper, we study the agreement between bibliometric indicators and peer review based on a sample of publications submitted for evaluation to the national Italian research assessment exercise (2011–2014).

In particular, we study the agreement between bibliometric indicators and peer review at a higher aggregation level, namely the institutional level. Additionally, we also quantify the internal agreement of peer review at the institutional level. We base our analysis on a hierarchical Bayesian model using cross-validation.
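
The abstract does not reproduce the model specification. Purely as an illustrative sketch, a hierarchical model in this spirit might posit a latent quality score per institution that underlies both publication-level peer review scores and metric scores; cross-validation would then compare predictions on held-out publications. All structure, priors, and data below are assumptions (PyMC):

    import numpy as np
    import pymc as pm

    rng = np.random.default_rng(42)
    n_inst, pubs_per_inst = 20, 30
    inst_idx = np.repeat(np.arange(n_inst), pubs_per_inst)

    # Synthetic data: one latent institutional quality drives both signals.
    quality = rng.normal(0.0, 1.0, n_inst)
    review = quality[inst_idx] + rng.normal(0.0, 1.0, inst_idx.size)
    metric = quality[inst_idx] + rng.normal(0.0, 1.2, inst_idx.size)

    with pm.Model():
        mu_inst = pm.Normal("mu_inst", 0.0, 1.0, shape=n_inst)  # latent quality
        sigma_r = pm.HalfNormal("sigma_r", 1.0)  # peer review noise
        sigma_m = pm.HalfNormal("sigma_m", 1.0)  # metric noise
        pm.Normal("review", mu=mu_inst[inst_idx], sigma=sigma_r, observed=review)
        pm.Normal("metric", mu=mu_inst[inst_idx], sigma=sigma_m, observed=metric)
        idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)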

We find that the level of agreement is generally higher at the institutional level than at the publication level. Overall, the agreement between metrics and peer review is on par with the internal agreement between two reviewers for certain fields of science in this particular context.

This suggests that for some fields, bibliometric indicators may possibly be considered as an alternative to peer review for the Italian national research assessment exercise. Although the results do not necessarily generalise to other contexts, they raise the question of whether similar findings would be obtained for other research assessment exercises, such as in the United Kingdom.

URL : https://arxiv.org/abs/2006.14830