How faculty define quality, prestige, and impact in research

Authors : Esteban Morales, Erin McKiernan, Meredith T. Niles, Lesley Schimanski, Juan Pablo Alperin

Despite the calls for change, there is significant consensus that when it comes to evaluating publications, review, promotion, and tenure processes should aim to reward research that is of high “quality,” has an “impact,” and is published in “prestigious” journals.

Nevertheless, such terms are highly subjective and make it challenging to ascertain precisely what such research looks like. Accordingly, this article responds to the question: how do faculty from universities in the United States and Canada define the terms quality, prestige, and impact?

We address this question by surveying 338 faculty members from 55 different institutions. This study’s findings highlight that, despite their highly varied definitions, faculty often describe these terms in overlapping ways. Additionally, results show that the marked variance in definitions across faculty does not correspond to demographic characteristics.

This study’s results highlight the need for evaluation regimes that do not rely on such ill-defined concepts.

DOI : https://doi.org/10.1101/2021.04.14.439880

Assessing the Quality of Scientific Papers

Authors : Roman Vainshtein, Gilad Katz, Bracha Shapira, Lior Rokach

A multitude of factors are responsible for the overall quality of scientific papers, including readability, linguistic quality, fluency, semantic complexity, and of course domain-specific technical factors.

These factors vary from one field of study to another. In this paper, we propose a measure and method for assessing the overall quality of scientific papers in a particular field of study.

We evaluate our method in the computer science domain, but it can be applied to other technical and scientific fields. Our method is based on corpus linguistics techniques, which enable the extraction of the information and knowledge associated with a specific domain.

For this purpose, we have created a large corpus, consisting of papers from very high impact conferences. First, we analyze this corpus in order to extract rich domain-specific terminology and knowledge.

Then we use the acquired knowledge to estimate the quality of scientific papers by applying our proposed measure. We examine our measure on high and low scientific impact test corpora.

Our results show a significant difference in the measure scores of the high- and low-impact test corpora. We then develop a classifier based on our proposed measure and compare it to a baseline classifier.

Our results show that the classifier based on our measure outperformed the baseline classifier. Based on these results, the proposed measure and technique can be used for the automated assessment of scientific papers.
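As a rough sketch of the approach described in this abstract, the following Python fragment extracts a domain vocabulary from a corpus of high-impact papers and scores a candidate paper by how much of its text is covered by that vocabulary. All names and the scoring rule are illustrative assumptions; the paper's actual terminology extraction and classifier are not reproduced here.

```python
import re
from collections import Counter

def extract_domain_terms(corpus, top_k=1000):
    """Take the top_k most frequent word types from a corpus of
    high-impact papers, as a stand-in for real terminology extraction."""
    counts = Counter()
    for doc in corpus:
        counts.update(re.findall(r"[a-z]+", doc.lower()))
    return {term for term, _ in counts.most_common(top_k)}

def quality_score(paper, domain_terms):
    """Score a paper by the fraction of its tokens that are domain terms."""
    tokens = re.findall(r"[a-z]+", paper.lower())
    return sum(t in domain_terms for t in tokens) / len(tokens) if tokens else 0.0

# Toy usage: the corpus would really be built from high-impact conference papers.
high_impact_corpus = [
    "we train a deep neural network with stochastic gradient descent",
    "the transformer architecture improves machine translation quality",
]
terms = extract_domain_terms(high_impact_corpus, top_k=50)
print(quality_score("a neural network trained by gradient descent", terms))
```

A classifier like the one described could then threshold or learn over such scores on the high- and low-impact test corpora.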

URL : https://arxiv.org/abs/1908.04200

Quality open access publishing and registration to Directory of Open Access Journals

Authors : Xin Bi (Xi’an Jiaotong-Liverpool University)

With the fast development of open access publishing worldwide, the Directory of Open Access Journals (DOAJ), a community-curated online directory that indexes and provides access to high-quality, open access, peer-reviewed journals, has been recognized for its high criteria for quality open access scholarly publishing and is used as a portal for accessing quality open access journals.

While the number of applications for inclusion in DOAJ from Asia keeps increasing dramatically, many editors of these journals are not very clear about the open access concepts embedded in the application form, which contains 58 questions falling into several criteria categories.

Commonly seen problems include misunderstanding of required items; inaccurate, vague, incomplete, or even missing information; poorly organized websites; non-transparent publishing processes, especially the absence of an open access statement and a copyright statement; and conflicts between policy statements. Such problems cause much additional communication between the reviewer and the editor and delay the completion of the review.

This article gives an in-depth introduction to the DOAJ criteria and a detailed walkthrough of the registration process; suggestions based on application reviews are also given to help journal editors better prepare their applications.

Most important of all, editors should keep in mind that being indexed by DOAJ is not just about filling in a form; it is about truly changing and adapting to best practices in open access publishing.


DOI : https://doi.org/10.6087/kcse.82

Quality Assessment of Studies Published in Open Access and Subscription Journals: Results of a Systematic Evaluation

Authors : Sonja Milovanovic, Jovana Stojanovic, Ljupcho Efremov, Rosarita Amore, Stefania Boccia

Introduction

Along with the proliferation of Open Access (OA) publishing, interest in comparing the scientific quality of studies published in OA journals versus subscription journals has also increased.

With our study we aimed to compare the methodological quality and the quality of reporting of primary epidemiological studies and systematic reviews and meta-analyses published in OA and non-OA journals.

Methods

In order to identify the studies to appraise, we listed all OA and non-OA journals which published, in 2013, at least one primary epidemiological study (case-control or cohort design) and at least one systematic review or meta-analysis in the field of oncology.

For the appraisal, we selected the first studies with a case-control or cohort design published in 2013 in OA journals (Group A; n = 12) and, in the same time period, in non-OA journals (Group B; n = 26); likewise, the first systematic reviews and meta-analyses published in 2013 in OA journals (Group C; n = 15) and, in the same time period, in non-OA journals (Group D; n = 32).

We evaluated the methodological quality of the studies by assessing the compliance of case-control and cohort studies with the Newcastle-Ottawa Scale (NOS), and the compliance of systematic reviews and meta-analyses with the Assessment of Multiple Systematic Reviews (AMSTAR) scale.

The quality of reporting was assessed by considering the adherence of case-control and cohort studies to the STrengthening the Reporting of OBservational studies in Epidemiology (STROBE) checklist, and the adherence of systematic reviews and meta-analyses to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist.

Results

Among case-control and cohort studies published in OA and non-OA journals, we did not observe significant differences in the median value of NOS score (Group A: 7 (IQR 7–8) versus Group B: 8 (7–9); p = 0.5) and in the adherence to STROBE checklist (Group A, 75% versus Group B, 80%; p = 0.1).

The results did not change after adjustment for impact factor. Compliance with AMSTAR and adherence to the PRISMA checklist were comparable between systematic reviews and meta-analyses published in OA and non-OA journals (AMSTAR: Group C, 46.0% versus Group D, 55.0%; p = 0.06; PRISMA: Group C, 72.0% versus Group D, 76.0%; p = 0.1).

Conclusion

Epidemiological studies published in OA journals in the field of oncology approach the same methodological quality and quality of reporting as studies published in non-OA journals.


DOI : http://dx.doi.org/10.1371/journal.pone.0154217

The Changing Publication Practices in Academia: Inherent Uses and Issues in Open Access and Online Publishing and the Rise of Fraudulent Publications

“Open access and online publishing present significant changes to the Australian higher education sector in a climate demanding increasing research outputs from academic staff. Today’s researchers struggle to discern credible journals from a new wave of ‘low credibility,’ counterfeit, and predatory journals. A New York Times article on the issue resulted in hundreds of anonymous posts, having a whistleblower effect. An analysis of reader posts, examined in this paper, demonstrated that fear and cynicism were dominant, and that unscrupulous publishing practices were often rewarded.

A lack of quality control measures to assist researchers to choose reputable journals and avoid fraudulent ones is becoming evident as universities’ funding and workforce development become increasingly dependent on research outputs. Online publishing is also redefining traditional notions of academic prestige. Adapting to the twenty-first century online publishing landscape requires the higher education sector to meet these challenges with a combination of academic rigour and innovative tools that support researchers, so as to maintain quality and integrity within changing academic publishing practice.”

URL : http://dx.doi.org/10.3998/3336451.0018.308

Le JCR facteur d’impact (IF) et le SCImago Journal Rank Indicator (SJR) des revues françaises : une étude comparative [The JCR Impact Factor (IF) and the SCImago Journal Rank Indicator (SJR) of French journals: a comparative study]

Authors : Joachim Schöpfel, Hélène Prost

One of the main functions of scientific journals is to contribute to the evaluation of research and researchers. For more than 50 years, the Impact Factor (IF) of the Institute for Scientific Information (ISI) has been the dominant indicator of journal quality, despite certain weaknesses and criticisms, notably the over-representation of English-language journals. This is a handicap for French researchers and French-language publishers alike; publishing in French carries little reward.

Since 2007, however, a serious alternative to the IF has existed: the new SCImago Journal Rank indicator (SJR), which applies Google's PageRank algorithm to the journals in the SCOPUS bibliographic database, whose coverage is broader than that of the ISI.
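As a rough illustration of the PageRank idea behind the SJR, here is a minimal power-iteration sketch over a made-up three-journal citation matrix. The actual SJR computation adds normalizations of its own, so this shows only the core mechanism of prestige flowing along citation links.

```python
import numpy as np

# Hypothetical citation matrix: entry [i, j] is the number of
# citations that journal j gives to journal i (made-up data).
citations = np.array([[0, 3, 1],
                      [2, 0, 1],
                      [1, 2, 0]], dtype=float)

# Column-normalize so each journal distributes one unit of "vote".
transition = citations / citations.sum(axis=0)

damping = 0.85                # standard PageRank damping factor
n = transition.shape[0]
rank = np.full(n, 1.0 / n)    # start from a uniform prestige vector

# Power iteration: prestige flows along citation links until it stabilizes.
for _ in range(100):
    rank = (1 - damping) / n + damping * transition @ rank

print(rank)  # higher values indicate more "prestigious" journals
```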

The aim of our study is to compare these two indicators with respect to French titles. The objective is to answer three questions: What is the coverage of the French titles indexed by the ISI and by SCOPUS (number of journals, scientific fields)? How do the two indicators, IF and SJR, differ with respect to French journals (ranking)? What is the value of the SJR for evaluation, in terms of the representativeness of French titles?

The results of our analysis of 368 French journals with an IF and/or an SJR are rather encouraging for the use of the new SJR indicator, at least as a complement to the IF:

(1) Coverage: 166 journals are indexed by the ISI (45%), 345 journals by SCOPUS (94%), and 143 journals by both (39%). 82% of the journals come from the STM fields and 18% from the humanities and social sciences. SCOPUS coverage is better above all in medicine and pharmacology.

(2) Ranking: For the titles with both an IF and an SJR, the correlation between the two indicators is significant (0.76; a computation sketch follows at the end of this entry). In terms of ranking, the IF differentiates journals better than the SJR (155 vs. 89 ranks). On the other hand, because of SCOPUS's more exhaustive coverage, the SJR makes more titles visible at the international level.

(3) Representativeness: The value of SCOPUS and the SJR lies in their more representative coverage of French publishing (19% vs. 9% for ISI/IF), notably in STM (38% vs. 19%) and much less so in the humanities and social sciences (6% vs. 2%). The indexed titles come mostly from a few large French or international publishers; most French publishers (80%–90%) have no title in the JCR and/or SCOPUS, although here again SCOPUS is more representative (17% of publishers vs. 10% for the JCR).

Methodological problems and the prospects for a multidimensional evaluation are discussed.
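As an aside, a correlation like the 0.76 reported above is computed from paired indicator values; a minimal sketch with made-up numbers follows. Whether the original study used Pearson or rank correlation is not restated here, so both are shown.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical IF and SJR values for six journals (made-up data).
impact_factor = np.array([0.8, 1.2, 2.5, 0.4, 3.1, 1.9])
sjr = np.array([0.3, 0.5, 1.1, 0.2, 1.4, 0.6])

pearson = np.corrcoef(impact_factor, sjr)[0, 1]  # linear correlation
rho, p_value = spearmanr(impact_factor, sjr)     # rank correlation

print(f"Pearson r = {pearson:.2f}, Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```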

URL : http://archivesic.ccsd.cnrs.fr/sic_00567847/fr/