Judging Journals: How Impact Factor and Other Metrics Differ across Disciplines

Authors : Quinn Galbraith, Alexandra Carlile Butterfield, Chase Cardon

Given academia’s frequent use of publication metrics and the inconsistencies in metrics across disciplines, this study examines how various disciplines are treated differently by metrics systems. We seek to offer academic librarians, university rank and tenure committees, and other interested individuals guidelines for distinguishing general differences between journal bibliometrics in various disciplines.

This study addresses the following questions: How well represented are different disciplines in the indexing of each metrics system (Eigenfactor, Scopus, Web of Science, Google Scholar)? How does each metrics system treat disciplines differently, and how do these differences compare across metrics systems?

For university libraries and academic librarians, this study may increase understanding of the comparative value of various metrics, which hopefully will facilitate more informed decisions regarding the purchase of journal subscriptions and the evaluation of journals and metrics systems.

This study indicates that different metrics systems prioritize different disciplines and that metrics are not always easily compared across disciplines. Consequently, simple reliance on metrics in publishing or purchasing decisions is often flawed.

URL : Judging Journals: How Impact Factor and Other Metrics Differ across Disciplines

DOI : https://doi.org/10.5860/crl.84.6.888

Roles and Responsibilities for Peer Reviewers of International Journals

Author : Carol Nash

There is a noticeable paucity of recently published research on the roles and responsibilities of peer reviewers for international journals. Concurrently, the pool of these peer reviewers is decreasing. Using a narrative research method developed by the author, this study examined these roles and responsibilities through the author’s assessment of reviewing for five publishing houses from July to December 2022, in comparison with two recent studies of peer review and with the guidelines of the five publishing houses.

The author, those judging peer review for these publications, and the five publishing houses are found to disagree about what should be most important in peer review. Furthermore, efforts to increase the pool of peer reviewers are identified as ineffective because they focus on reviewers qua reviewers, rather than on their primary role as researchers.

To improve consistency, authors have regularly called for peer review training. Yet this advice overlooks the efforts journals already make to keep their particular requirements for peer review clear, comprehensive, and readily accessible.

Consequently, rather than peer reviewers being trained and rewarded as peer reviewers, journals are advised to make peer review a requirement for publishing research and to treat their guidelines as required reading that peer reviewers should follow.

URL : Roles and Responsibilities for Peer Reviewers of International Journals

DOI : https://doi.org/10.3390/publications11020032

How do journals deal with problematic articles? Editorial response of journals to articles commented in PubPeer

Authors : José-Luis Ortega, Lorena Delgado-Quirós

The aim of this article is to explore the editorial response of journals to research articles that may contain methodological errors or misconduct. A total of 17,244 articles commented on in PubPeer, a post-publication peer review site, were processed and classified according to several error and fraud categories.

Then, the editorial responses (i.e., editorial notices) to these papers were retrieved from PubPeer, Retraction Watch, and PubMed to obtain the most comprehensive picture. The results show that only 21.5% of the articles that deserved an editorial notice (i.e., those involving honest errors, methodological flaws, publishing fraud, or manipulation) were corrected by the journal. This percentage rises to 34% for 2019 publications.
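A minimal sketch of the kind of cross-source matching described above, written in Python; the data values, column names, and categories below are hypothetical illustrations, not the authors' actual pipeline:

    import pandas as pd

    # Hypothetical inputs: articles flagged on PubPeer, classified by error/fraud
    # category, and editorial notices gathered from PubPeer, Retraction Watch,
    # and PubMed. Real data and column names will differ.
    flagged = pd.DataFrame({
        "doi":      ["10.1/a", "10.1/b", "10.1/c", "10.1/d"],
        "category": ["image manipulation", "honest error",
                     "publishing fraud", "methodological flaw"],
        "year":     [2016, 2019, 2019, 2018],
    })
    notices = pd.DataFrame({
        "doi":    ["10.1/b", "10.1/c"],
        "notice": ["correction", "retraction"],
    })

    # Left-join on DOI: flagged articles without a notice get NaN in "notice".
    merged = flagged.merge(notices, on="doi", how="left")
    merged["has_notice"] = merged["notice"].notna()

    # Share of flagged articles that received any editorial notice, overall
    # and by publication year (cf. the 21.5% and 34% figures above).
    print(merged["has_notice"].mean())
    print(merged.groupby("year")["has_notice"].mean())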

This response differs between journals but is consistent across disciplines. Another interesting result is that high-impact journals suffer more from image manipulation, while plagiarism is more frequent in low-impact journals.

The study concludes that journals need to improve their response to problematic articles.

URL : How do journals deal with problematic articles? Editorial response of journals to articles commented in PubPeer

DOI : https://doi.org/10.3145/epi.2023.ene.18

The Issues with Journal Issues: Let Journals Be Digital Libraries

Author : C. Sean Burns

Science depends on a communication system, and today that system is largely provided by digital technologies such as the internet and the web. Although digital technologies provide this infrastructure, peer-reviewed journals continue to mimic workflows and processes from the print era.

This paper focuses on one artifact from the print era, the journal issue, and describes how this artifact has been detrimental to the communication of science, and therefore, to science itself.

To replace the journal issue, this paper argues that scholarly publishing and journals could more fully embrace digital technologies by creating digital libraries to present and organize scholarly output.

URL : The Issues with Journal Issues: Let Journals Be Digital Libraries

DOI : https://doi.org/10.3390/publications11010007

Do open-access dermatology articles have higher citation counts than those with subscription-based access?

Authors : Fangyi Xie, Sherief Ghozy, David F. Kallmes, Julia S. Lehman

Background

Open-access (OA) publishing is increasingly prevalent in dermatology, and many journals now offer hybrid options: conventional subscription-based access (SA) publishing, or OA (with an author publishing charge) within a subscription journal. An OA citation advantage has been noted in many disciplines, but it has rarely been studied in dermatology.

Methods

Using the Clarivate Journal Citation Report, we compiled a list of English-language hybrid dermatology journals containing more than 5% OA articles. We sampled every OA review or original research article in 4 issues from 2018 to 2019 and matched an equal number of SA articles. For each article, we recorded the citation count, the citation count excluding self-citations, and the view count (found using Scopus), as well as the Altmetric score. Statistical analyses were performed in R using logistic and negative binomial models.
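To make the modelling step concrete, here is a minimal sketch of a negative binomial regression of citation counts on access type, adjusted for time since publication. It is written in Python rather than the R the authors used, and the data and column names are hypothetical:

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical sample: one row per article, mirroring the recorded variables.
    df = pd.DataFrame({
        "citations":   [12, 3, 25, 7, 0, 14, 9, 2],       # Scopus citation count
        "open_access": [1, 0, 1, 0, 0, 1, 1, 0],          # 1 = OA, 0 = SA
        "months_out":  [30, 28, 41, 36, 25, 39, 33, 27],  # time since publication
    })

    # Negative binomial GLM: suited to overdispersed counts such as citations.
    model = smf.glm(
        "citations ~ open_access + months_out",
        data=df,
        family=sm.families.NegativeBinomial(),
    ).fit()
    print(model.summary())

    # exp(coefficient) on open_access estimates the multiplicative citation
    # advantage of OA over SA articles, holding time since publication fixed.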

Results

Twenty-seven hybrid dermatology journals were found, and 538 articles were sampled (269 OA, 269 SA). For both original research and review articles, OA articles had significantly higher mean citation counts (mean 13.2, standard deviation [SD] 17.0) than SA articles (mean 7.9, SD 8.8) (odds ratio [OR] 1.04; 95% CI 1.02–1.05; P < .001), including after adjustment for time since publication.

Original research OA articles had significantly higher citation counts than original research SA articles (excluding self-citations; OR 1.03; 95% CI 1.01–1.05; P = .003), and review OA articles likewise showed a citation advantage over review SA articles (OR 1.06; 95% CI 1.02–1.11; P = .008). There was, however, no significant difference in citation counts between review articles and original research articles (OR 1.00; 95% CI 0.19–5.31; P = 1.000).

There was no significant difference between OA and SA articles in view counts (OA: mean ± SD 17.7 ± 10.8; SA: 17.1 ± 12.4) or Altmetric scores (OA: 13.2 ± 47.8; SA: 6.3 ± 25.0). Potential confounders included the fact that more OA articles were published in Europe than in Asia, and that pharmaceutical-funded articles were more likely to be published OA.

Conclusions

We noted a higher citation count for OA articles than for SA articles in hybrid dermatology journals. However, dermatology researchers should take confounding factors into account when deciding whether selecting OA over SA publishing will increase the impact of their work.

URL : Do open-access dermatology articles have higher citation counts than those with subscription-based access?

DOI : https://doi.org/10.1371/journal.pone.0279265

The Twitter accounts of scientific journals: a dataset

Author : Andreas Nishikawa-Pacher

Twitter harbours dense networks of academics, but to what extent do scientific journals use that platform? This article introduces a dataset of 3,485 Twitter accounts pertaining to a sample of 13,821 journals listed in Web of Science’s three major indices (SCIE, SSCI and AHCI).

The summary statistics indicate that 25.2% of the journals have a dedicated Twitter presence. This number is likely to grow: on average, another journal sets up a new profile every day and a half. The share of journals with a Twitter presence, however, varies strongly by publisher and discipline.

The most active discipline is political science, with almost 75% of its journals on Twitter, while other research categories have none. The median account posts 116 messages a year and interacts with distinct other users once in every two to three tweets. In their profile descriptions, approximately 600 journals describe themselves as ‘peer-reviewed’, while 263 refer to their citation-based impact (such as the impact factor).

All in all, the data convey immense heterogeneity in the Twitter behaviour of scientific journals. As predatory publishers have established numerous deceptive Twitter profile names, it is recommended that journals set up official accounts lest bogus journals mislead the public about scientific findings. The dataset is available for use in further scientometric analyses.
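For readers who want to work with the dataset, a minimal sketch of how the summary statistics above could be reproduced; the file name and column names are hypothetical, not the published dataset's schema:

    import pandas as pd

    # Hypothetical table: one row per Web of Science journal, with the Twitter
    # handle left empty where no account was found.
    journals = pd.read_csv("journal_twitter_accounts.csv")
    # assumed columns: "journal", "publisher", "discipline", "twitter_handle"

    has_account = journals["twitter_handle"].notna()
    print(f"Share of journals on Twitter: {has_account.mean():.1%}")  # cf. 25.2%

    # Share of journals with an account, by discipline (cf. political science
    # at almost 75%); the same groupby works by publisher.
    by_discipline = (journals.assign(on_twitter=has_account)
                             .groupby("discipline")["on_twitter"].mean())
    print(by_discipline.sort_values(ascending=False).head(10))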

URL : The Twitter accounts of scientific journals: a dataset

DOI : https://doi.org/10.1629/uksg.593

Comparison of Clinical Study Results Reported in medRxiv Preprints vs Peer-reviewed Journal Articles

Authors : Guneet Janda, Vishal Khetpal, Xiaoting Shi, Joseph S. Ross, Joshua D. Wallach

Question

What is the concordance among sample size, primary end points, results for primary end points, and interpretations described in preprints of clinical studies posted on medRxiv that are subsequently published in peer-reviewed journals (preprint-journal article pairs)?

Findings

In this cross-sectional study of 547 clinical studies that were initially posted to medRxiv and later published in peer-reviewed journals, 86.4% of preprint-journal article pairs were concordant in terms of sample size, 97.6% in terms of primary end points, 81.1% in terms of results of primary end points, and 96.2% in terms of study interpretations.

Meaning

This study suggests that most clinical studies posted as preprints on medRxiv and subsequently published in peer-reviewed journals had concordant study characteristics, results, and final interpretations.

URL : Comparison of Clinical Study Results Reported in medRxiv Preprints vs Peer-reviewed Journal Articles

Original location : https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2799350