What Is Wrong With the Current Evaluative Bibliometrics?

Author : Endel Põder

Bibliometric data are relatively simple and describe objective processes of publishing articles and citing others. It seems quite straightforward to define reasonable measures of a researcher’s productivity, research quality, or overall performance based on these data. Why do we still have no acceptable bibliometric measures of scientific performance?

Instead, there are hundreds of indicators, and nobody knows how to use them. At the same time, an increasing number of researchers, and some entire research fields, have been excluded from standard bibliometric analysis to avoid manifestly contradictory conclusions.

I argue that the biggest current problem is mainstream bibliometrics' inadequate rule for allocating credit for multi-authored articles. Clinging to this historical choice precludes any systematic and logically consistent bibliometrics-based evaluation of researchers, research groups, and institutions.

During the last 50 years, several authors have called for a change. Apparently, there are no serious methodologically justified or evidence-based arguments in favor of the present system.

However, there are intractable social, psychological, and economic issues that make the adoption of a logically sound counting system almost impossible.

DOI : https://doi.org/10.3389/frma.2021.824518

The role of metrics in peer assessments

Authors :  Liv Langfeldt, Ingvild Reymert, Dag W Aksnes

Metrics on scientific publications and their citations are easily accessible and are often referred to in assessments of research and researchers. This paper addresses whether metrics are considered a legitimate and integral part of such assessments. Based on an extensive questionnaire survey in three countries, the opinions of researchers are analysed.

We provide comparisons across academic fields (cardiology, economics, and physics) and contexts for assessing research (identifying the best research in their field, assessing grant proposals and assessing candidates for positions).

A minority of the researchers responding to the survey reported that metrics were reasons for considering something to be the best research. Still, a large majority in all the studied fields indicated that metrics were important or partly important in their review of grant proposals and assessments of candidates for academic positions.

In these contexts, the citation impact of the publications and, particularly, the number of publications were emphasized. These findings hold across all the fields analysed; still, the economists relied more on productivity measures than the cardiologists and the physicists did. Moreover, reviewers with high scores on bibliometric indicators seemed to adhere to metrics in their assessments more frequently than other reviewers.

Hence, when planning and using peer review, one should be aware that reviewers—in particular reviewers who score high on metrics—find metrics to be a good proxy for the future success of projects and candidates, and rely on metrics in their evaluation procedures despite the concerns in scientific communities on the use and misuse of publication metrics.

DOI : https://doi.org/10.1093/reseval/rvaa032

Evaluation of research activities of universities of Ukraine and Belarus: a set of bibliometric indicators and its implementation

Authors : Vladimir Lazarev, Serhii Nazarovets, Alexey Skalaban

Monitoring the bibliometric indicators used in university rankings is considered here as a task for university libraries. To carry out a comparative assessment of the research activities of the universities of Ukraine and Belarus, the authors introduced a set of bibliometric indicators.

A comparative assessment of the research activities of the corresponding universities was carried out, and data on the leading universities are presented. One indicator's sensitivity to rapid changes in a university's research activity, and the other's normalization across fields of science, give the proposed set an advantage over the one used in practice in the corresponding national rankings.

URL : https://arxiv.org/abs/1711.02059

Open Access Indicators and Scholarly Communications in Latin America

This book is the result of a joint research and development project supported by UNESCO and undertaken in 2013 in partnership with the Public Knowledge Project (PKP), the Scientific Electronic Library Online (SciELO), the Network of Scientific Journals of Latin America, the Caribbean, Spain and Portugal (RedALyC), Africa Journals Online (AJOL), the Latin American Faculty of Social Sciences, Brazil (FLACSO-Brazil), and the Latin American Council of Social Sciences (CLACSO). It aims to contribute to the understanding of scholarly production, use, and reach through measures that are open and inclusive. The book is divided into two sections.

The first section presents a narrative summary of Open Access in Latin America, including a description of the major regional initiatives that are collecting and systematizing data related to Open Access scholarship, and of the available data that can be used to understand the (i) growth, (ii) reach, and (iii) impact of Open Access in developing regions. The first section ends with recommendations for future activities. The second section includes in-depth case studies describing the indicators and methodologies of the peer-reviewed journal portals SciELO and Redalyc, and of a subject digital repository maintained by CLACSO.

URL : https://microblogging.infodocs.eu/wp-content/uploads/2015/08/alperin2014.pdf

Alternative location : http://hdl.handle.net/10760/25122

F1000, Mendeley and Traditional Bibliometric Indicators

This article compares the Faculty of 1000 (F1000) quality filtering results and Mendeley usage data with traditional bibliometric indicators, using a sample of 1397 Genomics and Genetics articles published in 2008 selected by F1000 Faculty Members (FMs). Both Mendeley user counts and F1000 article factors (FFas) correlate significantly with citation counts and associated Journal Impact Factors. However, the correlations for Mendeley user counts are much larger than those for FFas.

It may be that F1000 is good at disclosing the merit of an article from an expert practitioner's point of view, while Mendeley user counts may be more closely related to traditional citation impact. Articles that attract exceptionally many citations are generally disorder- or disease-related, while those with extremely high social-bookmark user counts are mainly historical or introductory.

URL : http://2012.sticonference.org/Proceedings/vol2/Li_F1000_541.pdf

Google Scholar Metrics: an unreliable tool for assessing scientific journals

Google Scholar Metrics: an unreliable tool for assessing scientific journals :

“We introduce Google Scholar Metrics (GSM), a new bibliometric product of Google that aims at providing the H-index for scientific journals and other information sources. We conduct a critical review of GSM showing its main characteristics and possibilities as a tool for scientific evaluation. We discuss its coverage along with the inclusion of repositories, bibliographic control, and its options for browsing and searching. We conclude that, despite Google Scholar’s value as a source for scientific assessment, GSM is an immature product with many shortcomings, and therefore we advise against its use for evaluation purposes. However, the improvement of these shortcomings would place GSM as a serious competitor to the other existing products for evaluating scientific journals.”

URL : http://digibug.ugr.es/handle/10481/21540

Le classement de Leiden : environnement scientifique et configuration

Le classement de Leiden : environnement scientifique et configuration (abstract in the authors’ English translation):

“The Leiden Ranking is considered today as quite a pertinent and valuable alternative to the Shanghai Ranking. A significant number of indicators involve, for instance, field citation scores and data distribution. It is conceived by the CWTS of the University of Leiden, the Netherlands.”

URL : http://archivesic.ccsd.cnrs.fr/sic_00696098