Over-optimization of academic publishing metrics: observing Goodhart’s Law in action

Authors : Michael Fire, Carlos Guestrin

Background

The academic publishing world is changing significantly, with ever-growing numbers of publications each year and shifting publishing patterns. However, the metrics used to measure academic success, such as the number of publications, citation number, and impact factor, have not changed for decades.

Moreover, recent studies indicate that these metrics have become targets and follow Goodhart’s Law, according to which, “when a measure becomes a target, it ceases to be a good measure.”

Results

In this study, we analyzed >120 million papers to examine how the academic publishing world has evolved over the last century, with a deeper look into the specific field of biology. Our study shows that the validity of citation-based measures is being compromised and their usefulness is lessening.

In particular, the number of publications has ceased to be a good metric as a result of longer author lists, shorter papers, and surging publication numbers. Citation-based metrics, such as citation number and h-index, are likewise affected by the flood of papers, self-citations, and lengthy reference lists.

Measures such as a journal’s impact factor have also ceased to be good metrics due to the soaring numbers of papers that are published in top journals, particularly from the same pool of authors.

Moreover, by analyzing properties of >2,600 research fields, we observed that citation-based metrics are not beneficial for comparing researchers in different fields, or even in the same department.

Conclusions

Academic publishing has changed considerably; now we need to reconsider how we measure success.

URL : Over-optimization of academic publishing metrics: observing Goodhart’s Law in action

DOI : https://doi.org/10.1093/gigascience/giz053

How to avoid borrowed plumes in academia

Authors : Margit Osterloh, Bruno S. Frey

Publications in top journals today have a powerful influence on academic careers although there is much criticism of using journal rankings to evaluate individual articles.

We ask why this practice of performance evaluation is still so influential. We suggest this is the case because a majority of authors benefit from the present system due to the extreme skewness of citation distributions. “Performance paradox” effects aggravate the problem.

Three extant suggestions for reforming performance management are critically discussed. We advance a new proposal based on the insight that fundamental uncertainty is symptomatic of scholarly work. It suggests focal randomization using a rationally founded and well-orchestrated procedure.

URL : How to avoid borrowed plumes in academia

Use of the Journal Impact Factor in academic review, promotion, and tenure evaluations

Authors : Erin C. McKiernan, Lesley A. Schimanski, Carol Muñoz Nieves, Lisa Matthias, Meredith T. Niles, Juan Pablo Alperin

The Journal Impact Factor (JIF) was originally designed to aid libraries in deciding which journals to index and purchase for their collections. Over the past few decades, however, it has become a widely relied-upon metric for evaluating research articles based on journal rank. Surveyed faculty often report feeling pressure to publish in journals with high JIFs and mention reliance on the JIF as one problem with current academic evaluation systems.

While faculty reports are useful, information is lacking on how often and in what ways the JIF is currently used for review, promotion, and tenure (RPT). We therefore collected and analyzed RPT documents from a representative sample of 129 universities from the United States and Canada and 381 of their academic units.

We found that 40% of doctoral, research-intensive (R-type) institutions and 18% of master’s, or comprehensive (M-type) institutions explicitly mentioned the JIF, or closely related terms, in their RPT documents.

Undergraduate, or baccalaureate (B-type) institutions did not mention it at all. A detailed reading of these documents suggests that institutions may also be using a variety of terms to indirectly refer to the JIF.

Our qualitative analysis shows that 87% of the institutions that mentioned the JIF supported the metric’s use in at least one of their RPT documents, while 13% of institutions expressed caution about the JIF’s use in evaluations.

None of the RPT documents we analyzed heavily criticized the JIF or prohibited its use in evaluations. Of the institutions that mentioned the JIF, 63% associated it with quality, 40% with impact, importance, or significance, and 20% with prestige, reputation, or status.

In sum, our results show that the use of the JIF is encouraged in RPT evaluations, especially at research-intensive universities, and indicate that there is work to be done to improve evaluation processes and avoid the potential misuse of metrics like the JIF.

URL : Use of the Journal Impact Factor in academic review, promotion, and tenure evaluations

DOI : https://doi.org/10.7287/peerj.preprints.27638v2

From closed to open access: A case study of flipped journals

Authors : Fakhri Momeni, Nicholas Fraser, Isabella Peters, Philipp Mayr

In recent years, increased stakeholder pressure to transition research to Open Access has led to many journals “flipping” from a toll access to an open access publishing model. Changing the publishing model can influence the decision of authors to submit their papers to a journal, and increased article accessibility may influence citation behaviour.

The aim of this paper is to show changes in the number of published articles and citations after the flipping of a journal. We analysed a set of 171 journals in the Web of Science (WoS) which flipped to open access.

In addition to comparing the number of articles, average relative citation (ARC) and normalized impact factor (IF) are applied, respectively, as bibliometric indicators at the article and journal level, to trace the transformation of flipped journals covered.

Our results show that flipping has mostly had positive effects on a journal's IF, but no obvious citation advantage for individual articles. We also see a decline in the number of published articles after flipping.

We conclude that flipping to open access can improve a journal's performance, even though it appears to reduce authors' willingness to submit articles and confers no clear citation advantage at the article level.

URL : https://arxiv.org/abs/1903.11682

A “basket of metrics”—the best support for understanding journal merit

Authors : Lisa Colledge, Chris James

Aim

To survey opinion of the assertion that useful metric-based input requires a “basket of metrics” to allow more varied and nuanced insights into merit than is possible by using one metric alone.

Methods

A poll was conducted to survey opinions (N=204; average response rate=61%) within the international research community on using usage metrics in merit systems.

Results

“Research is best quantified using multiple criteria” was the most frequently selected reason that usage metrics are valuable (chosen by 40% of respondents), and 95% of respondents indicated that they would be likely or very likely to use usage metrics in their assessments of research merit, if they had access to them.

There was a similar degree of preference for simple and sophisticated usage metrics, confirming that one size does not fit all and that a one-metric approach to merit is insufficient.

Conclusion

This survey demonstrates a clear willingness and a real appetite to use a “basket of metrics” to broaden the ways in which research merit can be detected and demonstrated.

URL : http://europeanscienceediting.eu/articles/a-basket-of-metrics-the-best-support-for-understanding-journal-merit/

Evaluating research and researchers by the journal impact factor: is it better than coin flipping?

Authors : Ricardo Brito, Alonso Rodríguez-Navarro

The journal impact factor (JIF) is the average number of citations received by the papers published in a journal, calculated according to a specific formula; it is extensively used for the evaluation of research and researchers.

The method assumes that all papers in a journal have the same scientific merit, measured by the JIF of the publishing journal. This implies that citation counts measure scientific merit, yet the JIF does not evaluate each individual paper by its own citation count.

Therefore, in the comparative evaluation of two papers, the use of the JIF implies a risk of failure, which occurs when the paper published in the lower-JIF journal has in fact received more citations than the paper published in the higher-JIF journal.

To quantify this risk of failure, this study calculates the failure probabilities, taking advantage of the lognormal distribution of citations. In two journals whose JIFs are ten-fold different, the failure probability is low.

However, in most cases when two papers are compared, the JIFs of the journals are not so different. Then, the failure probability can be close to 0.5, which is equivalent to evaluating by coin flipping.
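The argument above can be sketched numerically. The following is a minimal Monte Carlo illustration, not the authors' actual computation: it assumes citations follow a lognormal distribution whose mean equals the journal's JIF, with an illustrative log-scale sigma of 1.0 (the function name and all parameter values are assumptions).

```python
import numpy as np

def failure_probability(jif_low, jif_high, sigma=1.0, n=200_000, seed=0):
    """Estimate the probability that a random paper from the lower-JIF
    journal has more citations than a random paper from the higher-JIF
    journal, assuming lognormally distributed citations whose mean
    equals the journal's JIF."""
    rng = np.random.default_rng(seed)
    # For a lognormal distribution, mean = exp(mu + sigma^2 / 2),
    # so mu = ln(JIF) - sigma^2 / 2 makes the mean equal the JIF.
    mu_low = np.log(jif_low) - sigma**2 / 2
    mu_high = np.log(jif_high) - sigma**2 / 2
    cites_low = rng.lognormal(mu_low, sigma, n)
    cites_high = rng.lognormal(mu_high, sigma, n)
    return float(np.mean(cites_low > cites_high))

# Ten-fold JIF difference: failures are rare
print(failure_probability(1.0, 10.0))
# Similar JIFs: the probability approaches 0.5, i.e. coin flipping
print(failure_probability(2.0, 2.5))
```

Under these assumed parameters the ten-fold case yields a failure probability of a few percent, while the close-JIF case lands near 0.44, matching the paper's qualitative conclusion that comparing papers from similarly ranked journals by JIF is close to a coin flip.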

URL : https://arxiv.org/abs/1809.10999

Plurality in multi-disciplinary research: multiple institutional affiliations are associated with increased citations

Authors : Paul Sanfilippo, Alex W. Hewitt, David A. Mackey

Background

The institutional affiliations and associated collaborative networks that scientists foster during their research careers are salient in the production of high-quality science. The phenomenon of multiple institutional affiliations and its relationship to research output remains relatively unexplored in the literature.

Methods

We examined 27,612 scientific articles, modelling the normalized citation counts received against the number of authors and affiliations held.

Results

In agreement with previous research, we found that teamwork is an important factor in high-impact papers, with average citations received increasing with the number of co-authors listed.

For articles with more than five co-authors, we noted an increase in average citations received when authors with more than one institutional affiliation contributed to the research.

Discussion

Multiple author affiliations may play a positive role in the production of high-impact science. This increased researcher mobility should be viewed by institutional boards as meritorious in the pursuit of scientific discovery.

URL : Plurality in multi-disciplinary research: multiple institutional affiliations are associated with increased citations

DOI : https://doi.org/10.7717/peerj.5664