Our Study is Published, But the Journey is Not Finished!

Authors : Olivier Pourret, Katsuhiko Suzuki, Yoshio Takahashi

Each June, we receive e-mails from publishers celebrating the evolution of the journal impact factor (JIF) of their journals. The JIF is a controversial metric (Callaway 2016), and it is worth asking, “What’s behind it?”

In this age of “publish or perish” (Harzing 2007), we devote considerable time and effort to writing our papers and getting them published. But how much time and effort do we put into finding readers or ensuring that we reach the right audience? Are metrics such as the JIF good guides for how well we are doing at reaching our target audience?

DOI : https://doi.org/10.2138/gselements.16.4.229

Measuring and Mapping Data Reuse: Findings From an Interactive Workshop on Data Citation and Metrics for Data Reuse

Author : Lisa Federer

Widely adopted standards for data citation are foundational to efforts to track and quantify data reuse. Without the means to track data reuse, and metrics to measure its impact, it is difficult to give researchers who share high-value data meaningful credit for their contribution.

Despite initial work on developing guidelines for data citation and metrics, standards have not yet been universally adopted. This article reports on the recommendations collected from “Measuring and Mapping Data Reuse: An Interactive Workshop on Metrics for Data”, a workshop held at the 2018 Future of Research Communications and e-Scholarship (FORCE11) meeting.

A range of stakeholders were represented among the participants, including publishers, researchers, funders, repository administrators, librarians, and others.

Collectively, they generated a set of 68 recommendations for specific actions that could be taken by standards and metrics creators; publishers; repositories; funders and institutions; creators of reference management software and citation styles; and researchers, students, and librarians.

These specific, concrete, and actionable recommendations would facilitate broader adoption of standard citation mechanisms and make data reuse easier to measure.

DOI : https://doi.org/10.1162/99608f92.ccd17b00

Prevalence of Potentially Predatory Publishing in Scopus on the Country Level

Authors : Tatiana Savina, Ivan Sterligov

We present the results of a large-scale study of potentially predatory journals (PPJ) represented in the Scopus database, which is widely used for research evaluation. Journal metrics as well as country-level and disciplinary data have been evaluated for two groups of PPJ: those listed by Jeffrey Beall and those delisted by Scopus because of “publication concerns”.

Our results show that even after years of delisting, hundreds of potentially predatory journals remain active and highly visible in the Scopus database. PPJ papers are continuously produced by all major countries, though in different shares, and all major subject areas are affected; the largest numbers of PPJ papers are in engineering and medicine.

On average, PPJ have much lower citation metrics than other Scopus-indexed journals. We conclude with a brief survey of the case of Kazakhstan, where the share of PPJ papers at one time amounted to almost half of all Kazakhstani papers in Scopus, and propose a link between PPJ share and national research evaluation policies (in particular, rules for awarding academic degrees).

Research on potentially predatory journals will become increasingly important, because metrics-based evaluation methods are growing more widespread in the era of the “Metric Tide”.

URL : https://arxiv.org/abs/2003.08283

Rethinking the Journal Impact Factor and Publishing in the Digital Age

Authors : Mark S. Nestor, Daniel Fischer, David Arnold, Brian Berman, James Q. Del Rosso

Clinical and experimental literature search has changed significantly over the past few decades, and with it, the way in which we value information. Today, our need for immediate access to relevant and specific literature, regardless of specialty, has led to a growing demand for open access to publications.

The Journal Impact Factor (JIF) has long been the standard for representing the quality or “prestige” of a journal, but it appears to be losing its relevance. Here, we define the JIF and deconstruct its validity as a modern measure of a journal’s quality, discuss the current models of academic publication along with their advantages and shortcomings, and weigh the benefits and drawbacks of a variety of open-access models, including costs to the author.
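For context, the standard two-year JIF calculation (the well-known definition, stated here for reference rather than quoted from the article) can be written as:

```latex
\mathrm{JIF}_Y = \frac{C_Y(Y-1) + C_Y(Y-2)}{N_{Y-1} + N_{Y-2}}
```

where C_Y(y) is the number of citations received in year Y by items the journal published in year y, and N_y is the number of citable items (articles and reviews) the journal published in year y.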

We have quantified a nonsubscribed physician’s access to full articles associated with dermatologic disease and aesthetics cited on PubMed. For some of the most common dermatologic conditions, 23.1% of citations (ranging from 17.2% for melasma to 31.9% for malignant melanoma) were available as free full articles, and for aesthetic procedures, 18.9% of citations (ranging from 11.9% for laser hair removal to 27.9% for botulinum toxin) were available as free full articles.

Finally, we discuss existing alternative metrics for measuring journal impact and propose the adoption of a superior publishing model, one that satisfies modern-day standards for the pursuit and dissemination of scholarly knowledge in dermatology and all of medical science.

URL : https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7028381/

The stability of Twitter metrics: A study on unavailable Twitter mentions of scientific publications

Authors : Zhichao Fang, Jonathan Dudek, Rodrigo Costas

This paper investigates the stability of Twitter counts of scientific publications over time. To do so, we analyzed the availability statuses of over 2.6 million Twitter mentions received by the 1,154 most tweeted scientific publications recorded by Altmetric.com up to October 2017.

Results show that about 14.3% of the Twitter mentions of these highly tweeted publications had become unavailable by April 2019. Deletion of tweets by users is the main reason for unavailability, followed by suspension and protection of Twitter user accounts.

This study proposes two measures for describing the Twitter dissemination structures of publications: Degree of Originality (i.e., the proportion of original tweets received by a paper) and Degree of Concentration (i.e., the degree to which retweets concentrate on a single original tweet).
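As a rough illustration of these two measures, here is a minimal Python sketch assuming a simplified tweet representation (the record format, and the paper’s exact operationalization, are assumptions rather than details taken from the article):

```python
from collections import Counter

def dissemination_measures(tweets):
    """Compute Degree of Originality and Degree of Concentration for one paper.

    `tweets` is a list of dicts with a 'retweet_of' key: None for an original
    tweet, otherwise the ID of the original tweet being retweeted.
    (Hypothetical record format; the paper's data model may differ.)
    """
    originals = [t for t in tweets if t["retweet_of"] is None]
    retweets = [t for t in tweets if t["retweet_of"] is not None]

    # Degree of Originality: proportion of the paper's mentions that are original tweets.
    originality = len(originals) / len(tweets) if tweets else 0.0

    # Degree of Concentration: share of retweets pointing at the single
    # most-retweeted original tweet.
    if retweets:
        top = Counter(t["retweet_of"] for t in retweets).most_common(1)[0][1]
        concentration = top / len(retweets)
    else:
        concentration = 0.0
    return originality, concentration

# Example: 2 originals and 4 retweets, 3 of which point at tweet "a".
sample = [
    {"id": "a", "retweet_of": None},
    {"id": "b", "retweet_of": None},
    {"id": "c", "retweet_of": "a"},
    {"id": "d", "retweet_of": "a"},
    {"id": "e", "retweet_of": "a"},
    {"id": "f", "retweet_of": "b"},
]
print(dissemination_measures(sample))  # (0.333..., 0.75)
```

In this toy example, most mentions are retweets concentrated on one original tweet, so if that tweet were deleted, most of the paper’s Twitter count would disappear with it.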

Twitter metrics of publications with relatively low Degree of Originality and relatively high Degree of Concentration are observed to be at greater risk of becoming unstable due to the potential disappearance of their Twitter mentions.

In light of these results, we emphasize the importance of accounting for the risk of unstable Twitter counts, and of identifying the different Twitter dissemination structures of publications, when studying their Twitter metrics.

URL : https://arxiv.org/abs/2001.07491

Meta-Research: Use of the Journal Impact Factor in academic review, promotion, and tenure evaluations

Authors : Erin C McKiernan, Lesley A Schimanski, Carol Muñoz Nieves, Lisa Matthias, Meredith T Niles, Juan P Alperin

We analyzed how often and in what ways the Journal Impact Factor (JIF) is currently used in the review, promotion, and tenure (RPT) documents of a representative sample of universities from the United States and Canada. We found that 40% of research-intensive institutions and 18% of master’s institutions mentioned the JIF or closely related terms.

Of the institutions that mentioned the JIF, 87% supported its use in at least one of their RPT documents, 13% expressed caution about its use, and none heavily criticized it or prohibited its use. Furthermore, 63% of institutions that mentioned the JIF associated the metric with quality, 40% with impact, importance, or significance, and 20% with prestige, reputation, or status.

We conclude that use of the JIF is encouraged in RPT evaluations, especially at research-intensive universities, and that there is work to be done to avoid the potential misuse of metrics like the JIF.

DOI : https://doi.org/10.7554/eLife.47338.001

Do Download Reports Reliably Measure Journal Usage? Trusting the Fox to Count Your Hens?

Authors : Alex Wood-Doughty, Ted Bergstrom, Douglas G. Steigerwald

Download rates of academic journals have joined citation counts as commonly used indicators of the value of journal subscriptions. While citations reflect worldwide influence, the value of a journal subscription to a single library is more reliably measured by the rate at which it is downloaded by local users.

If reported download rates accurately measure local usage, there is a strong case for using them to compare the cost-effectiveness of journal subscriptions. We examine data for nearly 8,000 journals downloaded at the ten universities in the University of California system during a period of six years.

We find that, controlling for the number of articles, publisher, and year of download, the ratio of downloads to citations differs substantially among academic disciplines.

After adding academic disciplines to the control variables, there remain substantial “publisher effects”, with some publishers reporting significantly more downloads than would be predicted by the characteristics of their journals.
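As a rough sketch of this kind of specification, the following Python snippet fits a log-log regression with discipline, publisher, and year fixed effects; the file name, column names, and exact model are hypothetical, not the authors’ actual specification:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical journal-year panel; file and column names are illustrative only.
# Expected columns: downloads, citations, n_articles, discipline, publisher, year.
df = pd.read_csv("journal_usage.csv")

# Log-log regression with categorical fixed effects. Large, significant
# coefficients on C(publisher) levels, after controlling for journal
# characteristics, would correspond to the "publisher effects" described above.
model = smf.ols(
    "np.log(downloads) ~ np.log(citations) + np.log(n_articles)"
    " + C(discipline) + C(publisher) + C(year)",
    data=df,
).fit()

print(model.summary())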

These cross-publisher differences suggest that the currently available download statistics, which are supplied by publishers, are not sufficiently reliable for libraries to base subscription decisions on price and reported downloads, at least without adjusting for publisher effects in download reports.

DOI : https://doi.org/10.5860/crl.80.5.694