Implementing the Declaration on Research Assessment: a publisher case study

Authors : Victoria Gardner, Mark Robinson, Elisabetta O’Connell

There has been much debate around the role of metrics in scholarly communication, with particular focus on the misapplication of journal metrics, such as the impact factor, in the assessment of research and researchers.

Various initiatives have advocated for a change in this culture, including the Declaration on Research Assessment (DORA), which invites stakeholders throughout the scholarly communication ecosystem to sign up and show their support for practices designed to address the misuse of metrics.

This case study provides an overview of the process undertaken by a large academic publisher (Taylor & Francis Group) in signing up to DORA and implementing some of its key practices, in the hope that it will provide guidance to others considering becoming a signatory.

Our experience suggests that research, consultation and flexibility are crucial components of the process. Additionally, approaching signing with a project mindset rather than a ‘sign and forget’ mentality can help organizations to understand the practical implications of signing, to anticipate and mitigate potential obstacles, and to support cultural change.

DOI : https://doi.org/10.1629/uksg.573

Toward More Inclusive Metrics and Open Science to Measure Research Assessment in Earth and Natural Sciences

Authors : Olivier Pourret, Dasapta Erwin Irawan, Najmeh Shaghaei, Elenora M. van Rijsingen, Lonni Besançon

The conventional assessment of scientists relies on a set of metrics that are mostly based on the production of scientific articles and their citations. These metrics are primarily established at the journal level (e.g., the Journal Impact Factor), the article level (e.g., times cited), and the author level (e.g., the h-index).
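
For illustration, the short Python sketch below computes two of the metrics named above from toy inputs: an author-level h-index from a list of per-article citation counts, and a two-year Journal Impact Factor. The numbers are invented for the example and are not taken from the paper.

def h_index(citations):
    # Largest h such that at least h articles have h or more citations each.
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, cites in enumerate(ranked, start=1) if cites >= rank)

def two_year_impact_factor(citations_to_last_two_years, citable_items_last_two_years):
    # Citations received in year Y to items published in Y-1 and Y-2,
    # divided by the number of citable items published in Y-1 and Y-2.
    return citations_to_last_two_years / citable_items_last_two_years

print(h_index([10, 8, 5, 4, 3]))        # 4: four articles have at least 4 citations
print(two_year_impact_factor(200, 80))  # 2.5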

These metrics form the basis of criteria that have been widely used to measure institutional reputation, as well as that of authors and research groups. By relying mostly on citations (Langfeldt et al., 2021), however, they are inherently flawed in that they provide only a limited picture of scholarly production. Indeed, citations only count document use within scholarly works and thus provide a very limited view of the use and impact of an article.

These reveal only superficial dimensions of a work’s impact on society. Even within academia, citations are limited, since the link they express carries no inherent value judgement (Tennant et al., 2019). For example, one paper could be cited for the robustness of the work presented, while another could be cited for its main limitation (Aksnes et al., 2019).

As such, two articles could be cited the same number of times for very different reasons, and relying on citations to evaluate scientific work therefore has obvious limitations (Tahamtan et al., 2016). Beyond this issue, the conventional assessment of scientists clearly benefits some scientists more than others and does not reflect or encourage the dissemination of knowledge back to the public that ultimately pays for science.

This is visible in the Earth and natural sciences, which have been organized to solve local community problems relating to the Earth system, such as groundwater hazards (Irawan et al., 2021; Dwivedi et al., 2022).

DOI : https://doi.org/10.3389/frma.2022.850333

RipetaScore: Measuring the Quality, Transparency, and Trustworthiness of a Scientific Work

Authors : Josh Q. Sumner, Cynthia Hudson Vitale, Leslie D. McIntosh

A wide array of existing metrics quantifies a scientific paper’s prominence or an author’s prestige. Many who use these metrics assume that higher citation counts or more public attention must indicate more reliable, better-quality science.

While current metrics offer valuable insight into scientific publications, they are an inadequate proxy for measuring the quality, transparency, and trustworthiness of published research.

Three essential elements in establishing trust in a work are trust in the paper, trust in the author, and trust in the data. To address these elements in a systematic and automated way, we propose the ripetaScore as a direct measurement of a paper’s research practices, professionalism, and reproducibility.

Using a sample of our current corpus of academic papers, we demonstrate the ripetaScore’s efficacy in determining the quality, transparency, and trustworthiness of an academic work.

In this paper, we aim to provide a metric to evaluate scientific reporting quality in terms of transparency and trustworthiness of the research, professionalism, and reproducibility.

DOI : https://doi.org/10.3389/frma.2021.751734

Perspectives on Open Science and The Future of Scholarly Communication: Internet Trackers and Algorithmic Persuasion

Authors : Tiberius Ignat, Paul Ayris, Beatrice Gini, Olga Stepankova, Deniz Özdemir, Damla Bal, Yordanka Deyanov

The current digital content industry is heavily oriented towards building platforms that track users’ behaviour and seek to convince users to stay longer and return sooner. Similarly, authors are incentivised to publish more and to become champions of dissemination.

Arguably, these incentive systems are built around public reputation supported by a system of metrics that is hard to assess. More generally, the digital content industry is permeable to non-human contributors (algorithms able to generate content and reactions), anonymity, and identity fraud. It is therefore pertinent to present a perspective paper on early signs of tracking and persuasion in scholarly communication.

To build our views, we ran a pilot study to determine the opportunity for conducting research on the use of “track and persuade” technologies in scholarly communication. We collected observations on a sample of 148 relevant websites and interviewed 15 experts in the field.

Through this work, we tried to identify: 1) the essential questions that could inspire proper research; 2) good practices to recommend for future research; and 3) whether citizen science is a suitable approach to furthering research in this field.

The findings could contribute to determining a broader solution for building trust and infrastructure in scholarly communication. The principles of Open Science will be used as a framework to see if they offer insights into this work going forward.

DOI : https://doi.org/10.3389/frma.2021.748095

Why does library holding format really matter for book impact assessment?: Modelling the relationship between citations and altmetrics with print and electronic holdings

Author : Ashraf Maleki

Scholarly books are important outputs in some fields, and their many publishing formats offer opportunities to scrutinize their impact. Given the growing, publisher-enforced mass collection of ebooks by libraries over the past decade, this study examined how this influences the relationships that library print holdings (LPH), library electronic holdings (LEH) and total library holdings (TLH) have with other metrics.

As a follow-up to previous research on OCLC library holdings, the relationships between library holdings and twelve other metrics, including Scopus Citations, Google Books (GB) Citations, Goodreads engagements, and Altmetric indicators, were examined for 119,794 Scopus-indexed book titles across 26 fields.

The present study confirms the weak correlations observed between TLH and other indicators in previous studies, and contributes additional evidence that print holdings moderately reflect the research, educational and online impact of books, consistently more efficiently than eholdings and total holdings across fields and over time, except for Mendeley, for which eholdings slightly prevailed.

Regression models indicated that, along with other dimensions, Google Books Citations most frequently best explained LPH (in 14 of 26 fields), whereas Goodreads user counts were a weak but nonetheless the best predictor of both LEH and TLH (in 15 of 26 fields), suggesting a significant association between eholdings and the online uptake of books.

Overall, the findings suggest that including eholdings in the total library holdings metric overrides the more impactful counts of print holdings and therefore weakens the statistical results, whereas print holdings rest on statistically and theoretically promising assumptions for predicting the impact of books and show greater promise than the general library holdings metric for book impact assessment.

Thus, a distinction needs to be made between print and electronic holding counts; otherwise, total library holdings data need to be interpreted with caution.
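
As a minimal sketch of how such relationships can be probed, the Python snippet below computes Spearman correlations between holdings counts and citations. The file and column names (books.csv, print_holdings, eholdings, total_holdings, scopus_citations) are hypothetical placeholders, not the study’s actual data or analysis pipeline.

# Minimal sketch, not the study's analysis pipeline. Assumes a hypothetical
# books.csv with one row per book title and the columns named below.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("books.csv")

for holdings in ("print_holdings", "eholdings", "total_holdings"):
    # Spearman rank correlation between each holdings count and citations.
    rho, p = spearmanr(df[holdings], df["scopus_citations"])
    print(f"{holdings} vs scopus_citations: rho={rho:.2f} (p={p:.3g})")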

DOI : https://doi.org/10.1007/s11192-021-04239-9

The role of metrics in peer assessments

Authors : Liv Langfeldt, Ingvild Reymert, Dag W. Aksnes

Metrics on scientific publications and their citations are easily accessible and are often referred to in assessments of research and researchers. This paper addresses whether metrics are considered a legitimate and integral part of such assessments. Based on an extensive questionnaire survey in three countries, the opinions of researchers are analysed.

We provide comparisons across academic fields (cardiology, economics, and physics) and contexts for assessing research (identifying the best research in their field, assessing grant proposals and assessing candidates for positions).

A minority of the researchers responding to the survey reported that metrics were among their reasons for considering something to be the best research. Still, a large majority in all the studied fields indicated that metrics were important or partly important in their review of grant proposals and in their assessments of candidates for academic positions.

In these contexts, the citation impact of the publications and, particularly, the number of publications were emphasized. These findings hold across all the fields analysed; still, the economists relied more on productivity measures than the cardiologists and the physicists did. Moreover, reviewers with high scores on bibliometric indicators seemed to adhere to metrics in their assessments more frequently than other reviewers.

Hence, when planning and using peer review, one should be aware that reviewers—in particular reviewers who score high on metrics—find metrics to be a good proxy for the future success of projects and candidates, and rely on metrics in their evaluation procedures despite the concerns in scientific communities on the use and misuse of publication metrics.

DOI : https://doi.org/10.1093/reseval/rvaa032