Meta-Research: Use of the Journal Impact Factor in academic review, promotion, and tenure evaluations

Authors : Erin C McKiernan, Lesley A Schimanski, Carol Muñoz Nieves, Lisa Matthias, Meredith T Niles, Juan P Alperin

We analyzed how often and in what ways the Journal Impact Factor (JIF) is currently used in review, promotion, and tenure (RPT) documents of a representative sample of universities from the United States and Canada. 40% of research-intensive institutions and 18% of master’s institutions mentioned the JIF or closely related terms.

Of the institutions that mentioned the JIF, 87% supported its use in at least one of their RPT documents, 13% expressed caution about its use, and none heavily criticized it or prohibited its use. Furthermore, 63% of institutions that mentioned the JIF associated the metric with quality, 40% with impact, importance, or significance, and 20% with prestige, reputation, or status.

We conclude that use of the JIF is encouraged in RPT evaluations, especially at research-intensive universities, and that there is work to be done to avoid the potential misuse of metrics like the JIF.
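For readers less familiar with the metric: the JIF of a journal in year Y is conventionally defined as the average number of citations received that year by the items the journal published in the two preceding years. This standard definition (background context, not a result of the study above) can be written as:

```latex
\mathrm{JIF}_{Y} = \frac{C_{Y}(Y-1) + C_{Y}(Y-2)}{N_{Y-1} + N_{Y-2}}
```

where C_Y(Y-k) is the number of citations received in year Y by items published in year Y-k, and N_{Y-k} is the number of citable items published in year Y-k.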

URL : Meta-Research: Use of the Journal Impact Factor in academic review, promotion, and tenure evaluations

DOI : https://doi.org/10.7554/eLife.47338.001

Do Download Reports Reliably Measure Journal Usage? Trusting the Fox to Count Your Hens?

Authors : Alex Wood-Doughty, Ted Bergstrom, Douglas G. Steigerwald

Download rates of academic journals have joined citation counts as commonly used indicators of the value of journal subscriptions. While citations reflect worldwide influence, the value of a journal subscription to a single library is more reliably measured by the rate at which it is downloaded by local users.

If reported download rates accurately measure local usage, there is a strong case for using them to compare the cost-effectiveness of journal subscriptions. We examine data for nearly 8,000 journals downloaded at the ten universities in the University of California system over a six-year period.

We find that, controlling for the number of articles, publisher, and year of download, the ratio of downloads to citations differs substantially across academic disciplines.

After adding academic disciplines to the control variables, there remain substantial “publisher effects”, with some publishers reporting significantly more downloads than would be predicted by the characteristics of their journals.

These cross-publisher differences suggest that the currently available download statistics, which are supplied by publishers, are not sufficiently reliable to allow libraries to make subscription decisions based on price and reported downloads, at least without making an adjustment for publisher effects in download reports.
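The kind of analysis described above can be sketched as a regression of downloads on journal characteristics with discipline, publisher, and year controls. The following is a minimal illustration of that approach; the dataset and column names are hypothetical, and this is not the authors' actual specification:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical journal-year panel with reported downloads, citations,
# article counts, and discipline/publisher/year identifiers.
df = pd.read_csv("journal_downloads.csv")

# Regress log downloads on log citations and log article counts, with
# discipline, publisher, and year fixed effects. A large positive
# coefficient on a publisher dummy would mean that publisher reports
# more downloads than its journals' characteristics predict, i.e. the
# "publisher effects" the authors describe.
model = smf.ols(
    "np.log(downloads) ~ np.log(citations) + np.log(articles)"
    " + C(discipline) + C(publisher) + C(year)",
    data=df,
).fit()
print(model.summary())
```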

URL : Do Download Reports Reliably Measure Journal Usage? Trusting the Fox to Count Your Hens?

DOI : https://doi.org/10.5860/crl.80.5.694

Over-optimization of academic publishing metrics: observing Goodhart’s Law in action

Authors : Michael Fire, Carlos Guestrin

Background

The academic publishing world is changing significantly, with ever-growing numbers of publications each year and shifting publishing patterns. However, the metrics used to measure academic success, such as the number of publications, citation number, and impact factor, have not changed for decades.

Moreover, recent studies indicate that these metrics have become targets and follow Goodhart’s Law, according to which, “when a measure becomes a target, it ceases to be a good measure.”

Results

In this study, we analyzed >120 million papers to examine how the academic publishing world has evolved over the last century, with a deeper look into the specific field of biology. Our study shows that the validity of citation-based measures is being compromised and their usefulness is lessening.

In particular, the number of publications has ceased to be a good metric as a result of longer author lists, shorter papers, and surging publication numbers. Citation-based metrics, such as citation number and the h-index, are likewise affected by the flood of papers, self-citations, and lengthy reference lists.
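As background, the h-index mentioned here is the largest value h such that a researcher has h papers with at least h citations each. A minimal sketch of the standard computation (illustrative, not code from the study):

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    # Sort citation counts in descending order, then find the last rank
    # at which the citation count still meets or exceeds the rank.
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers cited [10, 8, 5, 4, 3] times give an h-index of 4.
print(h_index([10, 8, 5, 4, 3]))  # 4
```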

Measures such as a journal’s impact factor have also ceased to be good metrics due to the soaring numbers of papers that are published in top journals, particularly from the same pool of authors.

Moreover, by analyzing properties of >2,600 research fields, we observed that citation-based metrics are not beneficial for comparing researchers in different fields, or even in the same department.

Conclusions

Academic publishing has changed considerably; now we need to reconsider how we measure success.

URL : Over-optimization of academic publishing metrics: observing Goodhart’s Law in action

DOI : https://doi.org/10.1093/gigascience/giz053

Preprints in Scholarly Communication: Re-Imagining Metrics and Infrastructures

Authors : B. Preedip Balaji, M. Dhanamjaya

Digital scholarship and electronic publishing among scholarly communities are changing as metrics and open infrastructures take centre stage for measuring research impact. In scholarly communication, the growth of preprint repositories over the last three decades, as a new model of scholarly publishing, has emerged as one of the major developments.

As it unfolds, the landscape of scholarly communication is in transition: as much is being privatized as is being made open, and evaluation is shifting towards alternative metrics, such as social media attention and author- and article-level metrics. Moreover, the granularity of evaluating research impact through new metrics and social media is changing the objective standards for evaluating research performance.

Using preprint repositories as a case study, this article situates them in the scholarly web, examining their salient features, benefits, and futures. It discusses how preprints, supported by open infrastructures, citations, and alternative metrics, advance the building of the web as data for scholarly publishing on the semantic and social web.

We argue that this development can viably demonstrate new metrics and enhance research publishing tools in the scholarly commons, facilitating various communities of practice.

However, for preprint repositories to be sustainable, scholarly communities and funding agencies should support continued investment in open knowledge, the development of alternative metrics, and open infrastructures in scholarly publishing.

URL : Preprints in Scholarly Communication: Re-Imagining Metrics and Infrastructures

DOI : https://doi.org/10.3390/publications7010006

A “basket of metrics”—the best support for understanding journal merit

Authors : Lisa Colledge, Chris James

Aim

To survey opinion on the assertion that useful metric-based input requires a “basket of metrics” to allow more varied and nuanced insights into merit than is possible using one metric alone.

Methods

A poll was conducted to survey opinions (N=204; average response rate=61%) within the international research community on using usage metrics in merit systems.

Results

“Research is best quantified using multiple criteria” was the most frequently selected reason that usage metrics are valuable (chosen by 40% of respondents), and 95% of respondents indicated that they would be likely or very likely to use usage metrics in their assessments of research merit if they had access to them.

There was a similar degree of preference for simple and sophisticated usage metrics, confirming that one size does not fit all and that a one-metric approach to merit is insufficient.

Conclusion

This survey demonstrates a clear willingness and a real appetite to use a “basket of metrics” to broaden the ways in which research merit can be detected and demonstrated.

URL : http://europeanscienceediting.eu/articles/a-basket-of-metrics-the-best-support-for-understanding-journal-merit/

How significant are the public dimensions of faculty work in review, promotion, and tenure documents?

Authors : Juan Pablo Alperin, Gustavo E. Fischman, Erin C. McKiernan, Carol Muñoz Nieves, Meredith T. Niles, Lesley Schimanski

Much of the work of universities, even private institutions, has significant public dimensions. Faculty work in particular is often funded by public funds, is aimed at serving the public good, and is subject to public evaluation.

To understand how the public dimensions of faculty work are valued, we analyzed review, promotion, and tenure documents from a representative sample of 129 Canadian and American universities.

We found that terms and concepts related to the public and community are mentioned in a large proportion of documents, but mostly in ways that relate to service, an undervalued aspect of academic careers.

Moreover, we found frequent mentions of traditional research outputs and citation-based metrics. Such outputs and metrics reward faculty work targeted at academics and mostly disregard its public dimensions.

We conclude that institutions that want to live up to their public mission need to work towards systemic change in how faculty work is assessed and incentivized.

URL : How significant are the public dimensions of faculty work in review, promotion, and tenure documents?

URL : https://hcommons.org/deposits/item/hc:21015

Do all citations value the same? Valuing citations by the value of the citing items

Authors : Cristiano Giuffrida, Giovanni Abramo, Ciriaco Andrea D’Angelo

Bibliometricians have long relied on citation counts to measure the impact of publications on the advancement of science. However, since the earliest days of the field, some scholars have questioned whether all citations should be valued the same, and have gone on to weight them by a variety of factors.

However sophisticated the operationalization of the measures, the methodologies used to weight citations still have limits in their underlying assumptions. This work takes an alternative approach to the underlying problem: it proposes to value citations by the impact of the citing articles.
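One illustrative way to formalize this proposal (an assumption made for exposition, not necessarily the authors' exact indicator) is to weight each citation by the field-normalized citation score of the citing article:

```latex
W(p) = \sum_{j \in \mathrm{Cit}(p)} \frac{c_j}{\bar{c}_{f(j)}}
```

where Cit(p) is the set of articles citing paper p, c_j is the citation count of citing article j, and \bar{c}_{f(j)} is the mean citation count of articles in j's field. A plain citation count instead assigns every citing article the same weight of 1.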

As well as conceptualizing a new indicator of impact, the work illustrates its application to the 2004-2012 Italian scientific production indexed in the Web of Science (WoS).

The new indicator appears highly correlated with traditional field-normalized citations; however, shifts between the two measures are frequent, and the number of outliers is not at all negligible. Moreover, the new indicator seems to show greater “sensitivity” when used to identify top-cited papers.

URL : https://arxiv.org/abs/1809.06088