Open Science Alternatives to Scopus and the Web of Science: A Case Study in Regional Resilience

Authors : Irina D. Turgel, Olga A. Chernova

Recent years have seen increasing support for open science in academic circles. However, the large number of scientometric databases calls into question the comparability of the search and analysis tools they provide.

Using the subject area of regional resilience as an example, this study aimed to analyze the capabilities of widely used databases to serve as alternatives to Scopus and Web of Science in solving research problems.

As alternatives, the present article considers the following open, free scientometric databases: AMiner, Wizdom.ai, the Lens, Dimensions, and OpenAlex. Their capabilities were demonstrated for the subject area under study, and the results obtained were compared.
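As an illustration of how openly accessible databases of this kind can be queried for the subject area under study, the short Python sketch below pulls a few works matching "regional resilience" from the public OpenAlex REST API; the search string, page size, and printed fields are illustrative assumptions, not the search protocol used in the study.

import requests

# Minimal sketch: query the public OpenAlex works endpoint for the subject area.
# Search string, page size, and printed fields are illustrative assumptions.
API_URL = "https://api.openalex.org/works"

resp = requests.get(API_URL, params={"search": "regional resilience", "per-page": 5})
resp.raise_for_status()

for work in resp.json().get("results", []):
    print(
        work.get("publication_year"),
        work.get("cited_by_count"),
        work.get("display_name"),
        work.get("doi"),
    )

Repeating the same free-text query against several of the databases listed above gives a quick, informal sense of how their coverage of the topic differs.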

The study results showed that the alternative databases provide essential data on trends in scientific development. Notably, they largely replicate one another's data, supplementing and expanding it through different types of data sources. However, open databases do not guarantee high-quality materials and offer comparatively limited metadata.

Thus, it is premature to abandon the use of Scopus and Web of Science in research activities. Since scientometric databases were developed in different contexts, they are characterized by structural and functional heterogeneity, which complicates their comparison.

Therefore, a selective approach should be adopted for the choice of scientometric databases, taking into account financial and other constraints, as well as the specifics of research problems.

URL : Open Science Alternatives to Scopus and the Web of Science: A Case Study in Regional Resilience

DOI : https://doi.org/10.3390/publications12040043

Distinguishing articles in questionable and non-questionable journals using quantitative indicators associated with quality

Author : Dimity Stephen

This study investigates the viability of distinguishing articles in questionable journals (QJs) from those in non-QJs on the basis of quantitative indicators typically associated with quality. Subsequently, I examine what can be deduced about the quality of articles in QJs based on the differences observed.

I contrast the length of abstracts and full-texts, prevalence of spelling errors, text readability, number of references and citations, the size and internationality of the author team, the documentation of ethics and informed consent statements, and the presence of erroneous decisions based on statistical errors in 1,714 articles from 31 QJs, 1,691 articles from 16 journals indexed in Web of Science (WoS), and 1,900 articles from 45 mid-tier journals, all in the field of psychology.

The results suggest that QJ articles do diverge from the disciplinary standards set by peer-reviewed journals in psychology on quantitative indicators of quality that tend to reflect the effect of peer review and editorial processes. However, mid-tier and WoS journals are also affected by potential quality concerns, such as under-reporting of ethics and informed consent processes and the presence of errors in interpreting statistics. Further research is required to develop a comprehensive understanding of the quality of articles in QJs.

Arxiv : https://arxiv.org/abs/2405.06308

Beyond views, productivity, and citations: measuring geopolitical differences of scientific impact in communication research

Authors : János József Tóth, Gergő Háló, Manuel Goyanes

Scientometric analyses applying critical sociological frameworks have previously shown that high-prestige research output, with regard to both quantity and impact, is typically clustered in a few core countries and world regions, indicating uneven power relations and systematic biases within global academia.

Although citation count is a common formula in these analyses, only a handful of studies have investigated altmetrics (impact measures beyond citation-based metrics) in communication science. In this paper, we explore geopolitical biases of impact among the most productive scholars in the field of communication from 11 countries and 3 world regions.

Drawing on SCOPUS data, we test three formulas that measure scholarly performance (citations per document; views per document; and citations per view) to investigate how geographical location affects the impact of scholars. Our results indicate a strong US dominance with regard to citation-based impact, emphasizing a further need for de-Westernization within the field.

Moreover, the analysis of altmetric formulas revealed that research published by Eastern European and Spanish scholars, although accessed as often as or even more often than American or Western European publications, is cited less often than those publications. Country-level comparisons are also discussed.
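A minimal sketch of the three formulas named above, assuming per-scholar totals of documents, citations, and views have already been collected (the scholar names and figures below are hypothetical):

# Hypothetical per-scholar totals; the three ratios are the formulas named above:
# citations per document, views per document, and citations per view.
scholars = {
    "scholar_a": {"documents": 40, "citations": 1200, "views": 9000},
    "scholar_b": {"documents": 35, "citations": 300, "views": 11000},
}

for name, s in scholars.items():
    cites_per_doc = s["citations"] / s["documents"]
    views_per_doc = s["views"] / s["documents"]
    cites_per_view = s["citations"] / s["views"]
    print(f"{name}: {cites_per_doc:.1f} citations/doc, "
          f"{views_per_doc:.1f} views/doc, {cites_per_view:.3f} citations/view")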

URL : Beyond views, productivity, and citations: measuring geopolitical differences of scientific impact in communication research

DOI : https://doi.org/10.1007/s11192-023-04801-7

Identification and classification of evaluation indicators for scientific and technical publications and related factors

Authors : Hassan Mahmoudi Topkanlo, Mehrdad CheshmehSohrabi

Introduction

Given the widespread impact of scientific and technical publications in today’s world, and the diversity and multiplicity of indicators for measuring these publications, it is necessary to classify these indicators from different angles and through different tools and methods.

Method

This study used documentary analysis and the Delphi technique. The Delphi panel comprised twenty-one experts in the metric fields of information science, who answered the research questionnaires over several rounds until a consensus was reached.

Analysis

Kendall’s coefficient of concordance and a one-sample t-test were used to measure the agreement of the panel members as raters on the questionnaire items.
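A minimal sketch of both tests on a hypothetical panelists-by-items matrix of questionnaire scores; the simulated data, the 5-point scale, and the test value of 3 (the scale midpoint) are assumptions for illustration, not the study’s actual data or thresholds.

import numpy as np
from scipy import stats

# Hypothetical ratings: 21 panelists scoring 34 candidate indicators on a 5-point scale.
rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(21, 34))

# Kendall's coefficient of concordance W (tie correction omitted for brevity):
# rank items within each rater, then compare the spread of the rank sums with the
# maximum spread possible under perfect agreement.
m, n = ratings.shape
ranks = stats.rankdata(ratings, axis=1)          # ties receive average ranks
rank_sums = ranks.sum(axis=0)
S = ((rank_sums - rank_sums.mean()) ** 2).sum()
W = 12 * S / (m ** 2 * (n ** 3 - n))             # 0 = no agreement, 1 = perfect agreement
print(f"Kendall's W = {W:.3f}")

# One-sample t-test: does the mean score of an item differ from the scale midpoint?
t_stat, p_val = stats.ttest_1samp(ratings[:, 0], popmean=3.0)
print(f"item 1: t = {t_stat:.2f}, p = {p_val:.3f}")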

Results

A total of thirty-four sub-categories of assessment indicators were identified and grouped, according to their similarities and differences, into eight main categories: measurement method, measurement unit, measurement content, measurement purpose, measurement development, measurement resource, measurability, and measurement environment.

Conclusion

Classification of the indicators of evaluation for scientific and technical publications and related factors can lead to improved understanding, critique, modelling and development of indicators. The findings of this study can be considered a basis for further research and help develop evaluative theoretical foundations in scientific and technical publications and related factors.

URL : Identification and classification of evaluation indicators for scientific and technical publications and related factors

DOI : https://doi.org/10.47989/irpaper953

Awareness Mentality and Strategic Behavior in Science

Author : Rafael Ball

Recognition of scientific achievements has been, and continues to be, conferred essentially through the citation of a publication. Increasingly, however, it is no longer just the publication itself that plays an important role, but also the degree of attention that a scientist attracts with this very publication.

Thus, strategic behavior is becoming increasingly important in science, and an awareness mentality is spreading. This paper discusses the causes and background of this development, identifying as its main drivers the use of reductionist, quantitative systems in science management and research funding, the loss of critical judgment and technocratic dominance, quantitative assessments used for decision making, altmetrics and the like as alternative views, the use of perception scores in reference databases and universities, and the ambitions of journals.

In addition, different forms of strategic behavior in science, and their resulting consequences and impacts, are highlighted.

URL : Awareness Mentality and Strategic Behavior in Science

DOI : https://doi.org/10.3389/frma.2021.703159