How should evaluation be? Is a good evaluation of research also just? Towards the implementation of good evaluation

Authors : Cinzia Daraio, Alessio Vaccari

In this paper we answer the question of how evaluation should be by proposing a good evaluation of research practices. A good evaluation of research practices, understood as social practices à la MacIntyre, should take into account the stable motivations and character traits (i.e. the virtues) of researchers.

We also show that a good evaluation is also just, beyond mere fairness, since working on good research practices requires taking into account a broader sense of justice. After that, we propose the development of a knowledge base for the assessment of “good” evaluations of research practices, to implement a questionnaire for the assessment of researchers’ virtues.

Although the latter is a challenging task, the use of ontologies and taxonomic knowledge, together with reasoning algorithms that can draw inferences from such knowledge, offers a way to test the consistency of the information reported in the questionnaire and to analyse the data gathered through it correctly and coherently.
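The idea of using taxonomic knowledge plus inference to check questionnaire consistency can be illustrated with a minimal sketch. The taxonomy of virtues below and the rule it encodes are hypothetical, not taken from the paper: a specific virtue cannot be affirmed while a broader category it belongs to is denied.

```python
# Hypothetical taxonomy: each virtue maps to its broader category.
VIRTUE_TAXONOMY = {
    "intellectual_honesty": "epistemic_virtue",
    "rigour": "epistemic_virtue",
    "generosity": "moral_virtue",
    "epistemic_virtue": "virtue",
    "moral_virtue": "virtue",
}

def ancestors(term):
    """Walk up the taxonomy from a term to the root, collecting parents."""
    chain = []
    while term in VIRTUE_TAXONOMY:
        term = VIRTUE_TAXONOMY[term]
        chain.append(term)
    return chain

def consistent(answers):
    """Flag an answer set as inconsistent when a specific virtue is
    affirmed while one of its broader categories is explicitly denied."""
    for term, affirmed in answers.items():
        if affirmed:
            for parent in ancestors(term):
                if answers.get(parent) is False:
                    return False
    return True

print(consistent({"rigour": True, "epistemic_virtue": True}))   # True
print(consistent({"rigour": True, "epistemic_virtue": False}))  # False
```

A real implementation would express such rules in an ontology language and delegate the reasoning to a description-logic reasoner; the structure of the check, however, is the same.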

Finally, we describe the potential application usefulness of our proposal for the reform of current research assessment systems.

URL : How should evaluation be? Is a good evaluation of research also just? Towards the implementation of good evaluation

DOI : https://doi.org/10.1007/s11192-022-04329-2

Academic criteria for promotion and tenure in biomedical sciences faculties: cross sectional analysis of international sample of universities

Authors : Danielle B Rice, Hana Raffoul, John P A Ioannidis, David Moher

Objective

To determine the presence of a set of pre-specified traditional and non-traditional criteria used to assess scientists for promotion and tenure in faculties of biomedical sciences among universities worldwide.

Design

Cross sectional study.

Setting

International sample of universities.

Participants

170 randomly selected universities from the Leiden ranking of world universities list.

Main outcome measure

Presence of five traditional (for example, number of publications) and seven non-traditional (for example, data sharing) criteria in guidelines for assessing assistant professors, associate professors, and professors and the granting of tenure in institutions with biomedical faculties.

Results

A total of 146 institutions had faculties of biomedical sciences, and 92 had eligible guidelines available for review. Traditional criteria of peer reviewed publications, authorship order, journal impact factor, grant funding, and national or international reputation were mentioned in 95% (n=87), 37% (34), 28% (26), 67% (62), and 48% (44) of the guidelines, respectively. Conversely, among non-traditional criteria, only citations (any mention in 26%; n=24) and accommodations for employment leave (37%; 34) were relatively commonly mentioned.

Mention of alternative metrics for sharing research (3%; n=3) and data sharing (1%; 1) was rare, and three criteria (publishing in open access mediums, registering research, and adhering to reporting guidelines) were not found in any guidelines reviewed.

Among guidelines for assessing promotion to full professor, traditional criteria were more commonly reported than non-traditional criteria (traditional criteria 54.2%, non-traditional items 9.5%; mean difference 44.8%, 95% confidence interval 39.6% to 50.0%; P=0.001).
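The mean difference with a 95% confidence interval reported above is a standard comparison of two group means. The sketch below shows the normal-approximation form of that calculation with invented scores; it does not use, and will not reproduce, the study's per-guideline data.

```python
import math
import statistics

def mean_diff_ci(a, b, z=1.96):
    """Difference of two sample means with a normal-approximation
    95% confidence interval (z = 1.96)."""
    diff = statistics.mean(a) - statistics.mean(b)
    se = math.sqrt(statistics.variance(a) / len(a) +
                   statistics.variance(b) / len(b))
    return diff, (diff - z * se, diff + z * se)

# Illustrative percentages only (not the study's data):
d, (lo, hi) = mean_diff_ci([50, 55, 60], [10, 5, 15])
print(round(d, 1), (round(lo, 1), round(hi, 1)))
```

With large samples the z-based interval is close to the t-based one a statistics package would report.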

Notable differences were observed across continents in whether guidelines were accessible (Australia 100% (6/6), North America 97% (28/29), Europe 50% (27/54), Asia 58% (29/50), South America 17% (1/6)), with more subtle differences in the use of specific criteria.

Conclusions

This study shows that the evaluation of scientists emphasises traditional criteria as opposed to non-traditional criteria. This may reinforce research practices that are known to be problematic while insufficiently supporting the conduct of better quality research and open science. Institutions should consider incentivising non-traditional criteria.

URL : Academic criteria for promotion and tenure in biomedical sciences faculties: cross sectional analysis of international sample of universities

DOI : https://doi.org/10.1136/bmj.m2081

 

Google Scholar as a data source for research assessment

Authors : Emilio Delgado López-Cózar, Enrique Orduna-Malea, Alberto Martín-Martín

The launch of Google Scholar (GS) marked the beginning of a revolution in the scientific information market. This search engine, unlike traditional databases, automatically indexes information from the academic web. Its ease of use, together with its wide coverage and fast indexing speed, have made it the first tool most scientists currently turn to when they need to carry out a literature search.

Additionally, the fact that its search results were accompanied from the beginning by citation counts, as well as the later development of secondary products which leverage this citation data (such as Google Scholar Metrics and Google Scholar Citations), made many scientists wonder about its potential as a source of data for bibliometric analyses.

The goal of this chapter is to lay the foundations for the use of GS as a supplementary source (and in some disciplines, arguably the best alternative) for scientific evaluation.

First, we present a general overview of how GS works. Second, we present empirical evidence about its main characteristics (size, coverage, and growth rate). Third, we carry out a systematic analysis of the main limitations this search engine presents as a tool for the evaluation of scientific performance.

Lastly, we discuss the main differences between GS and other, more traditional bibliographic databases in light of the correlations found between their citation data. We conclude that Google Scholar presents a broader view of the academic world because it has brought to light a great number of sources that were not previously visible.
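Correlations between citation counts from different databases are usually computed on ranks (Spearman's rho) because citation distributions are heavily skewed. A self-contained sketch, with made-up citation counts standing in for per-paper data from two sources:

```python
def rank(values):
    """Assign average ranks (1-based), handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Invented citation counts for the same papers in two databases:
gs  = [120, 45, 8, 300, 0, 17]
wos = [80, 30, 5, 210, 1, 12]
print(round(spearman(gs, wos), 3))
```

A high rho here would mean the two sources rank the papers similarly even if their raw counts differ, which is the kind of comparison the chapter discusses.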

URL : https://arxiv.org/abs/1806.04435

Measuring the visibility of the universities’ scientific production using scientometric methods

The paper presents scientometrics as a science and as a fundamental instrument for determining the international standing of a university, as well as for the statistical evaluation of scientific research results. The impact of research, as measurable through scientometric indicators, is analysed. Promoting the scientific production of universities through institutional digital repositories involves the concept of the university’s scientific production and the development of scientific research in the information society. These concepts are approached through the prism of marketing methods and techniques. The digital repository is analysed as a PRODUCT, intended for promoting, archiving, and preserving scientific production.

URL : http://www.wseas.us/e-library/conferences/2010/Tunisia/EDUTE/EDUTE-22.pdf