Use of the Journal Impact Factor in academic review, promotion, and tenure evaluations

Authors : Erin C. McKiernan, Lesley A. Schimanski, Carol Muñoz Nieves, Lisa Matthias, Meredith T. Niles, Juan Pablo Alperin

The Journal Impact Factor (JIF) was originally designed to aid libraries in deciding which journals to index and purchase for their collections. Over the past few decades, however, it has become a relied-upon metric used to evaluate research articles based on journal rank. Surveyed faculty often report feeling pressure to publish in journals with high JIFs and mention reliance on the JIF as one problem with current academic evaluation systems.

While faculty reports are useful, information is lacking on how often and in what ways the JIF is currently used for review, promotion, and tenure (RPT). We therefore collected and analyzed RPT documents from a representative sample of 129 universities from the United States and Canada and 381 of their academic units.

We found that 40% of doctoral, research-intensive (R-type) institutions and 18% of master’s, or comprehensive (M-type) institutions explicitly mentioned the JIF, or closely related terms, in their RPT documents.

Undergraduate, or baccalaureate (B-type) institutions did not mention it at all. A detailed reading of these documents suggests that institutions may also be using a variety of terms to indirectly refer to the JIF.

Our qualitative analysis shows that 87% of the institutions that mentioned the JIF supported the metric’s use in at least one of their RPT documents, while 13% of institutions expressed caution about the JIF’s use in evaluations.

None of the RPT documents we analyzed heavily criticized the JIF or prohibited its use in evaluations. Of the institutions that mentioned the JIF, 63% associated it with quality, 40% with impact, importance, or significance, and 20% with prestige, reputation, or status.

In sum, our results show that the use of the JIF is encouraged in RPT evaluations, especially at research-intensive universities, and indicate that there is work to be done to improve evaluation processes and avoid the potential misuse of metrics like the JIF.

DOI : https://doi.org/10.7287/peerj.preprints.27638v2

From closed to open access: A case study of flipped journals

Authors : Fakhri Momeni, Nicholas Fraser, Isabella Peters, Philipp Mayr

In recent years, increased stakeholder pressure to transition research to Open Access has led to many journals “flipping” from a toll access to an open access publishing model. Changing the publishing model can influence the decision of authors to submit their papers to a journal, and increased article accessibility may influence citation behaviour.

The aim of this paper is to show changes in the number of published articles and citations after the flipping of a journal. We analysed a set of 171 journals in the Web of Science (WoS) which flipped to open access.

In addition to comparing the number of articles, the average relative citation (ARC) and the normalized impact factor (IF) are applied as bibliometric indicators at the article and journal level, respectively, to trace the transformation of the flipped journals.
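As a rough illustration only (not the authors' exact normalisation), an article-level indicator like the ARC can be read as each article's citations divided by the average citations of a reference set, assumed here to be articles of the same field and publication year; the sketch below uses a single hypothetical baseline value.

```python
from statistics import mean

def relative_citation(cites: int, baseline: float) -> float:
    """One article's citations divided by the mean citations of its
    reference set (assumed here: same field and publication year)."""
    return cites / baseline

def average_relative_citation(article_cites: list[int], baseline: float) -> float:
    """Mean relative citation over a journal's articles; values above 1
    indicate citation performance above the assumed baseline."""
    return mean(relative_citation(c, baseline) for c in article_cites)

# Hypothetical journal whose articles drew 4, 10, and 1 citations,
# against an assumed field-year average of 5 citations per article.
print(round(average_relative_citation([4, 10, 1], 5.0), 2))  # 1.0
```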

Our results show that flipping has mostly had positive effects on journals' IF, but no obvious citation advantage for the articles. We also see a decline in the number of published articles after flipping.

We conclude that flipping to open access can improve the performance of journals, despite a decrease in authors' tendency to submit their articles and the absence of a clear citation advantage for individual articles.

URL : https://arxiv.org/abs/1903.11682

A “basket of metrics”—the best support for understanding journal merit

Authors : Lisa Colledge, Chris James

Aim

To survey opinion on the assertion that useful metric-based input requires a “basket of metrics” to allow more varied and nuanced insights into merit than is possible by using one metric alone.

Methods

A poll was conducted to survey opinions (N=204; average response rate=61%) within the international research community on using usage metrics in merit systems.

Results

“Research is best quantified using multiple criteria” was selected by the largest share of respondents (40%) as the reason that usage metrics are valuable, and 95% of respondents indicated that they would be likely or very likely to use usage metrics in their assessments of research merit, if they had access to them.

There was a similar degree of preference for simple and sophisticated usage metrics, confirming that one size does not fit all and that a one-metric approach to merit is insufficient.

Conclusion

This survey demonstrates a clear willingness and a real appetite to use a “basket of metrics” to broaden the ways in which research merit can be detected and demonstrated.

URL : http://europeanscienceediting.eu/articles/a-basket-of-metrics-the-best-support-for-understanding-journal-merit/

Evaluating research and researchers by the journal impact factor: is it better than coin flipping?

Authors : Ricardo Brito, Alonso Rodríguez-Navarro

The journal impact factor (JIF) is the average of the number of citations of the papers published in a journal, calculated according to a specific formula; it is extensively used for the evaluation of research and researchers.
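For readers unfamiliar with the formula the authors allude to, the standard two-year JIF is the ratio of citations received in a given year to the citable items the journal published in the two preceding years; below is a minimal sketch with hypothetical numbers.

```python
def two_year_jif(cites_to_prev_two_years: int, citable_items_prev_two_years: int) -> float:
    """Standard two-year Journal Impact Factor for year Y: citations
    received in Y by items published in Y-1 and Y-2, divided by the
    number of citable items published in Y-1 and Y-2."""
    return cites_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 1200 citations in 2019 to 400 citable items
# from 2017-2018 gives a 2019 JIF of 3.0.
print(two_year_jif(1200, 400))  # 3.0
```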

The method assumes that all papers in a journal have the same scientific merit, which is measured by the JIF of the publishing journal. This implies that the number of citations measures scientific merit, yet the JIF does not evaluate each individual paper by its own number of citations.

Therefore, in the comparative evaluation of two papers, the use of the JIF implies a risk of failure, which occurs when the paper in the lower-JIF journal has more citations than the paper it is compared against in the higher-JIF journal.

To quantify this risk of failure, this study calculates the failure probabilities, taking advantage of the lognormal distribution of citations. In two journals whose JIFs are ten-fold different, the failure probability is low.

However, in most cases when two papers are compared, the JIFs of the journals are not so different. Then, the failure probability can be close to 0.5, which is equivalent to evaluating by coin flipping.
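A minimal sketch of the kind of calculation the abstract describes, assuming citations in each journal follow a lognormal distribution whose mean is matched to the journal's JIF and whose log-scale spread `sigma` is a common, assumed value (the authors' exact parameters and formula are not reproduced here):

```python
import math

def jif_failure_probability(jif_low: float, jif_high: float, sigma: float = 1.1) -> float:
    """Probability that a random paper from the lower-JIF journal in fact
    has more citations than a random paper from the higher-JIF journal,
    i.e. the probability that a JIF-based comparison gets it wrong.
    Assumes lognormal citation distributions with common spread `sigma`
    and means matched to the two JIFs."""
    # Matching the lognormal mean exp(mu + sigma^2 / 2) to each JIF gives
    # mu_low - mu_high = ln(jif_low / jif_high).
    delta_mu = math.log(jif_low / jif_high)
    # ln(X_low) - ln(X_high) is normal with mean delta_mu and standard
    # deviation sigma * sqrt(2); we want P(that difference > 0).
    z = delta_mu / (sigma * math.sqrt(2))
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

# Ten-fold JIF difference: the failure probability is small.
print(round(jif_failure_probability(1.0, 10.0), 2))   # ~0.07
# Similar JIFs: close to 0.5, i.e. little better than flipping a coin.
print(round(jif_failure_probability(2.5, 3.0), 2))    # ~0.45
```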

URL : https://arxiv.org/abs/1809.10999

Plurality in multi-disciplinary research: multiple institutional affiliations are associated with increased citations

Authors : Paul Sanfilippo, Alex W. Hewitt, David A. Mackey

Background

The institutional affiliations and associated collaborative networks that scientists foster during their research careers are salient in the production of high-quality science. The phenomenon of multiple institutional affiliations and its relationship to research output remains relatively unexplored in the literature.

Methods

We examined 27,612 scientific articles, modelling the normalized citation counts received against the number of authors and affiliations held.
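The abstract does not spell out the model specification, so the following is only a sketch of one plausible setup: an ordinary least-squares regression of normalized citation counts on the number of authors and affiliations, with hypothetical data and column names.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical article-level data; column names and values are
# assumptions, not the authors' variables.
articles = pd.DataFrame({
    "norm_citations": [0.8, 1.4, 2.1, 0.5, 1.9, 3.2],
    "n_authors":      [2,   4,   6,   1,   5,   8],
    "n_affiliations": [2,   3,   7,   1,   6,  10],
})

# Ordinary least squares: normalized citations modelled against the
# number of co-authors and the number of institutional affiliations.
model = smf.ols("norm_citations ~ n_authors + n_affiliations", data=articles).fit()
print(model.summary())
```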

Results

In agreement with previous research, we found that teamwork is an important factor in high-impact papers, with average citations received increasing with the number of co-authors listed.

For articles with more than five co-authors, we noted an increase in average citations received when authors with more than one institutional affiliation contributed to the research.

Discussion

Multiple author affiliations may play a positive role in the production of high-impact science. This increased researcher mobility should be viewed by institutional boards as meritorious in the pursuit of scientific discovery.

DOI : https://doi.org/10.7717/peerj.5664

Agriculture Journals Covered by Directory of Open Access Journals: An Analytical Study

Author : Muruli Acharya

With the advent of the open access movement, open access journals (OAJs) have become a prodigious source of academic and research information and have gained significant prominence.

The electronic age has made it easier and more convenient than ever to break down barriers to research information. The present study analyses the status of 497 OAJs in Agriculture indexed in the Directory of Open Access Journals.

Traits such as geographic and language-wise distribution, coverage by indexing/abstracting databases, ranking of journals by Impact Factor (IF), the OA licensing model adopted, plagiarism policy, visibility on social media, and related issues of the OAJs in Agriculture are evaluated in the paper.

Results indicated that De Gruyter Open published the highest number of OAJs, English dominated as the content language, Indonesia accounted for the highest number of OAJs, and Google Scholar indexed the most journals.

The study observes an increasing migration of journals from commercial models to OA. Frontiers in Plant Science was found to have the highest Impact Factor among OAJs in Agriculture.

Alternative location : http://publications.drdo.gov.in/ojs/index.php/djlit/article/view/13114

The Journal Impact Factor: A brief history, critique, and discussion of adverse effects

Authors : Vincent Larivière, Cassidy R. Sugimoto

The Journal Impact Factor (JIF) is, by far, the most discussed bibliometric indicator. Since its introduction over 40 years ago, it has had enormous effects on the scientific ecosystem: transforming the publishing industry, shaping hiring practices and the allocation of resources, and, as a result, reorienting the research activities and dissemination practices of scholars.

Given both the ubiquity and impact of the indicator, the JIF has been widely dissected and debated by scholars of every disciplinary orientation. Drawing on the existing literature as well as on original research, this chapter provides a brief history of the indicator and highlights well-known limitations, such as the asymmetry between the numerator and the denominator, differences across disciplines, the insufficient citation window, and the skewness of the underlying citation distributions.

The inflation of the JIF and its weakening predictive power are discussed, as well as the adverse effects on the behaviors of individual actors and the research enterprise. Alternative journal-based indicators are described, and the chapter concludes with a call for responsible application and a commentary on future developments in journal indicators.

URL : https://arxiv.org/abs/1801.08992