Does Monetary Support Increase Citation Impact of Scholarly Papers?

Authors : Yasar Tonta, Muge Akbulut

One of the main indicators of the scientific development of a given country is the number of papers published in high-impact scholarly journals. Many countries have introduced performance-based research funding systems (PRFSs) to create a more competitive environment in which prolific researchers are rewarded with subsidies, aiming to increase both the quantity and quality of papers.

Yet, subsidies do not always function as leverage to improve the citation impact of scholarly papers. This paper investigates the effect of the publication support system of Turkey (TR) on the citation impact of papers authored by Turkish researchers.

Based on a stratified probabilistic sample of 4,521 TR-addressed papers, it compares citation counts to determine whether supported papers were cited more often than unsupported ones, and whether they were published in journals with relatively higher citation impact in terms of journal impact factors, article influence scores and quartiles.

Both supported and unsupported papers received comparable numbers of citations per paper, and were published in journals with similar citation impact values. The findings suggest that subsidies are not an effective incentive to improve the quality of scholarly papers. Such support programs should therefore be reconsidered.

URL : https://arxiv.org/abs/1909.10068

Meta-Research: Use of the Journal Impact Factor in academic review, promotion, and tenure evaluations

Authors : Erin C McKiernan, Lesley A Schimanski, Carol Muñoz Nieves, Lisa Matthias, Meredith T Niles, Juan P Alperin

We analyzed how often and in what ways the Journal Impact Factor (JIF) is currently used in review, promotion, and tenure (RPT) documents of a representative sample of universities from the United States and Canada. 40% of research-intensive institutions and 18% of master’s institutions mentioned the JIF, or closely related terms.

Of the institutions that mentioned the JIF, 87% supported its use in at least one of their RPT documents, 13% expressed caution about its use, and none heavily criticized it or prohibited its use. Furthermore, 63% of institutions that mentioned the JIF associated the metric with quality, 40% with impact, importance, or significance, and 20% with prestige, reputation, or status.

We conclude that use of the JIF is encouraged in RPT evaluations, especially at research-intensive universities, and that there is work to be done to avoid the potential misuse of metrics like the JIF.

URL : Meta-Research: Use of the Journal Impact Factor in academic review, promotion, and tenure evaluations

DOI : https://doi.org/10.7554/eLife.47338.001

Large publishing consortia produce higher citation impact research but co-author contributions are hard to evaluate

Author : Mike Thelwall

This paper introduces a simple agglomerative clustering method to identify large publishing consortia with at least 20 authors and 80% shared authorship between articles. Based on Scopus journal articles from 1996 to 2018, nearly all (88%) of the large consortia identified under these criteria published research with citation impact above the world average; the exceptions were mainly newer consortia, for which average citation counts are unreliable.
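The consortium-detection criterion can be sketched as a simple agglomerative grouping: link articles that share a high fraction of their authors, keeping only articles with at least 20 authors. The 20-author and 80%-overlap thresholds come from the abstract; measuring overlap against the smaller author list is an assumption made here for illustration, not necessarily the paper's exact definition.

```python
# Hedged sketch of agglomerative grouping by shared authorship.
# Thresholds (20 authors, 80% overlap) are from the abstract; the
# overlap definition (share of the smaller author list) is assumed.

def author_overlap(a, b):
    """Fraction of the smaller author list shared by both articles."""
    a, b = set(a), set(b)
    return len(a & b) / min(len(a), len(b))

def find_consortia(articles, min_authors=20, min_overlap=0.8):
    """articles: dict mapping article id -> list of author names.
    Returns groups (lists of article ids) linked by high author overlap."""
    ids = [i for i, auths in articles.items() if len(auths) >= min_authors]
    groups = []
    for aid in ids:
        merged = False
        for group in groups:
            if any(author_overlap(articles[aid], articles[g]) >= min_overlap
                   for g in group):
                group.append(aid)
                merged = True
                break
        if not merged:
            groups.append([aid])
    # A consortium needs more than one article sharing its author pool.
    return [g for g in groups if len(g) > 1]
```

A single-pass single-linkage merge like this is deliberately minimal; a production version would need author-name disambiguation before overlap is meaningful.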

On average, consortium research had almost double (1.95) the world average citation impact on the log scale used (Mean Normalised Log Citation Score). At least partial alphabetical author ordering was the norm in most consortia.
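The Mean Normalised Log Citation Score (MNLCS) mentioned above is, roughly, a field- and year-normalised average of log-transformed citation counts, so a value of 1.95 means nearly double the world average on that scale. A minimal sketch, assuming the standard ln(1+c) transformation normalised by the world mean for the same field and year (the exact normalisation details in the paper may differ):

```python
import math

def mnlcs(citations, world_citations):
    """Sketch of a Mean Normalised Log Citation Score.
    citations: counts for the group's articles (same field/year assumed);
    world_citations: counts for all world articles in that field/year."""
    group_mean = sum(math.log(1 + c) for c in citations) / len(citations)
    world_mean = sum(math.log(1 + c) for c in world_citations) / len(world_citations)
    return group_mean / world_mean  # 1.0 = world average
```

The log transform dampens the influence of a few very highly cited papers, which is why it is preferred over raw citation means for skewed citation distributions.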

The 250 largest consortia were in nuclear physics and astronomy, organized around expensive equipment, and in predominantly health-related fields such as genomics, medicine, public health, microbiology and neuropsychology.

For the health-related consortia, except for the first and last few authors, authorship seems primarily to indicate contributions to the shared project infrastructure necessary to gather the raw data.

It is impossible for research evaluators to identify the contributions of individual authors in the huge alphabetical consortia of physics and astronomy, and problematic for the middle and end authors of health-related consortia.

For small scale evaluations, authorship contribution statements could be used, when available.

URL : https://arxiv.org/abs/1906.01849

Over-optimization of academic publishing metrics: observing Goodhart’s Law in action

Authors : Michael Fire, Carlos Guestrin

Background

The academic publishing world is changing significantly, with ever-growing numbers of publications each year and shifting publishing patterns. However, the metrics used to measure academic success, such as the number of publications, citation number, and impact factor, have not changed for decades.

Moreover, recent studies indicate that these metrics have become targets and follow Goodhart’s Law, according to which, “when a measure becomes a target, it ceases to be a good measure.”

Results

In this study, we analyzed >120 million papers to examine how the academic publishing world has evolved over the last century, with a deeper look into the specific field of biology. Our study shows that the validity of citation-based measures is being compromised and their usefulness is lessening.

In particular, the number of publications has ceased to be a good metric as a result of longer author lists, shorter papers, and surging publication numbers. Citation-based metrics, such as citation counts and the h-index, are likewise affected by the flood of papers, self-citations, and lengthy reference lists.
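For concreteness, the h-index referred to above is the largest h such that an author has h papers with at least h citations each. Because every citation counts toward it, self-citations and inflated reference lists raise it directly. A minimal computation, with hypothetical citation counts:

```python
def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank  # the rank-th paper still has >= rank citations
        else:
            break
    return h
```

For example, an author whose papers have 10, 8, 5, 4 and 3 citations has an h-index of 4; adding a handful of self-citations to the 3-citation paper would push it to 5.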

Measures such as a journal’s impact factor have also ceased to be good metrics due to the soaring numbers of papers that are published in top journals, particularly from the same pool of authors.

Moreover, by analyzing properties of >2,600 research fields, we observed that citation-based metrics are not beneficial for comparing researchers in different fields, or even in the same department.

Conclusions

Academic publishing has changed considerably; now we need to reconsider how we measure success.

URL : Over-optimization of academic publishing metrics: observing Goodhart’s Law in action

DOI : https://doi.org/10.1093/gigascience/giz053

How to avoid borrowed plumes in academia

Authors : Margit Osterloh, Bruno S. Frey

Publications in top journals today have a powerful influence on academic careers, although there is much criticism of using journal rankings to evaluate individual articles.

We ask why this practice of performance evaluation is still so influential. We suggest this is the case because a majority of authors benefit from the present system due to the extreme skewness of citation distributions. “Performance paradox” effects aggravate the problem.

Three extant suggestions for reforming performance management are critically discussed. We advance a new proposal based on the insight that fundamental uncertainty is symptomatic of scholarly work; it suggests focal randomization using a rationally founded and well-orchestrated procedure.

URL : How to avoid borrowed plumes in academia

Use of the Journal Impact Factor in academic review, promotion, and tenure evaluations

Authors : Erin C. McKiernan​, Lesley A. Schimanski, Carol Muñoz Nieves, Lisa Matthias, Meredith T. Niles, Juan Pablo Alperin

The Journal Impact Factor (JIF) was originally designed to aid libraries in deciding which journals to index and purchase for their collections. Over the past few decades, however, it has become a relied-upon metric used to evaluate research articles based on journal rank. Surveyed faculty often report feeling pressure to publish in journals with high JIFs and mention reliance on the JIF as one problem with current academic evaluation systems.
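As a reminder of what is actually being measured: the standard two-year JIF for year Y is the number of citations received in Y by items the journal published in Y-1 and Y-2, divided by the number of citable items it published in those two years. A minimal sketch with made-up numbers:

```python
def journal_impact_factor(citations_in_year, citable_items_prev_two_years):
    """Two-year JIF: citations received in year Y to items published in
    years Y-1 and Y-2, divided by citable items from those two years."""
    return citations_in_year / citable_items_prev_two_years

# Hypothetical journal: 1200 citations in 2019 to its 2017-2018 articles,
# 300 citable items published in 2017-2018 -> JIF of 4.0.
```

Note that the JIF is a property of the journal's citation distribution, not of any individual article in it, which is the core of the misuse concern discussed in this paper.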

While faculty reports are useful, information is lacking on how often and in what ways the JIF is currently used for review, promotion, and tenure (RPT). We therefore collected and analyzed RPT documents from a representative sample of 129 universities from the United States and Canada and 381 of their academic units.

We found that 40% of doctoral, research-intensive (R-type) institutions and 18% of master’s, or comprehensive (M-type) institutions explicitly mentioned the JIF, or closely related terms, in their RPT documents.

Undergraduate, or baccalaureate (B-type) institutions did not mention it at all. A detailed reading of these documents suggests that institutions may also be using a variety of terms to indirectly refer to the JIF.

Our qualitative analysis shows that 87% of the institutions that mentioned the JIF supported the metric’s use in at least one of their RPT documents, while 13% of institutions expressed caution about the JIF’s use in evaluations.

None of the RPT documents we analyzed heavily criticized the JIF or prohibited its use in evaluations. Of the institutions that mentioned the JIF, 63% associated it with quality, 40% with impact, importance, or significance, and 20% with prestige, reputation, or status.

In sum, our results show that the use of the JIF is encouraged in RPT evaluations, especially at research-intensive universities, and indicate that there is work to be done to improve evaluation processes and avoid the potential misuse of metrics like the JIF.

URL : Use of the Journal Impact Factor in academic review, promotion, and tenure evaluations

DOI : https://doi.org/10.7287/peerj.preprints.27638v2

From closed to open access: A case study of flipped journals

Authors : Fakhri Momeni, Nicholas Fraser, Isabella Peters, Philipp Mayr

In recent years, increased stakeholder pressure to transition research to Open Access has led to many journals “flipping” from a toll access to an open access publishing model. Changing the publishing model can influence the decision of authors to submit their papers to a journal, and increased article accessibility may influence citation behaviour.

The aim of this paper is to show changes in the number of published articles and citations after the flipping of a journal. We analysed a set of 171 journals in the Web of Science (WoS) which flipped to open access.

In addition to comparing article counts, we apply the average relative citation (ARC) and a normalized impact factor (IF) as bibliometric indicators at the article and journal level, respectively, to trace the transformation of the flipped journals.
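An average relative citation of the kind used here can be sketched as follows: each article's citation count is divided by the mean citation count of articles from the same field and publication year, and the ratios are averaged. The exact field classification and baselines in the paper may differ; this is an illustrative sketch of the general indicator.

```python
def average_relative_citation(article_records, field_year_means):
    """Sketch of an ARC indicator.
    article_records: list of (citations, field, year) tuples;
    field_year_means: dict mapping (field, year) -> mean citations of
    all articles in that field and year (the normalisation baseline)."""
    ratios = [c / field_year_means[(f, y)] for c, f, y in article_records]
    return sum(ratios) / len(ratios)  # 1.0 = field/year average
```

Normalising each article against its own field and year is what makes citation counts comparable across journals that flipped at different times and serve different disciplines.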

Our results show that flipping has mostly had positive effects on journals' IFs, but no obvious citation advantage for the articles themselves. We also see a decline in the number of published articles after flipping.

We conclude that flipping to open access can improve the performance of journals, despite a decreased tendency of authors to submit their articles and no clear citation advantage for individual articles.

URL : https://arxiv.org/abs/1903.11682