Measuring Scientific Impact Beyond Citation Counts

Authors: Robert M. Patton, Christopher G. Stahl, Jack C. Wells

The measurement of scientific progress remains a significant challenge, exacerbated by the use of many different types of metrics that are often misused, overused, or even explicitly abused.

Several metrics, such as the h-index or journal impact factor (JIF), are often used to assess whether an author, article, or journal creates an "impact" on science. Unfortunately, external forces can manipulate these metrics, diluting the value of their original, intended purpose.

This work highlights these issues, the need to define "impact" more clearly, and the need for better metrics that leverage full-content analysis of publications.


Sharing data increases citations

Authors: Thea Marie Drachen, Ole Ellegaard, Asger Væring Larsen, Søren Bertil Fabricius Dorch

This paper presents indications of the existence of a citation advantage related to sharing data, using astrophysics as a case. Through bibliometric analyses we find a citation advantage for astrophysical papers in core journals.

The advantage arises when indexed papers are associated with data through bibliographical links, and consists of those papers receiving, on average, significantly more citations per paper per year than papers not associated with links to data.


The Post-Embargo Open Access Citation Advantage: It Exists (Probably), It's Modest (Usually), and the Rich Get Richer (of Course)

Author: Jim Ottaviani

Many studies show that open access (OA) articles—articles from scholarly journals made freely available to readers without requiring subscription fees—are downloaded, and presumably read, more often than closed access/subscription-only articles.

Assertions that OA articles are also cited more often generate more controversy. Confounding factors (authors may self-select only the best articles to make OA; absence of an appropriate control group of non-OA articles with which to compare citation figures; conflation of pre-publication vs. published/publisher versions of articles, etc.) make demonstrating a real citation difference difficult.

This study addresses those factors and shows that an open access citation advantage as high as 19% exists, even when articles are embargoed during some or all of their prime citation years. Not surprisingly, better (defined as above median) articles gain more when made OA.

URL: The Post-Embargo Open Access Citation Advantage: It Exists (Probably), It's Modest (Usually), and the Rich Get Richer (of Course)


Research impact of paywalled versus open access papers

Authors: Éric Archambault, Grégoire Côté, Brooke Struck, Matthieu Voorons

This note presents data from the 1science oaIndx on the average of relative citations (ARC) for 3.3 million papers published from 2007 to 2009 and indexed in the Web of Science (WoS).

These data show a decidedly large citation advantage for open access (OA) papers, even though OA papers suffer from a lag in availability compared with paywalled papers.
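
The ARC metric normalizes a paper's citation count by the average for papers in the same field and publication year, so a score above 1.0 means above-average impact for that field-year cohort. A minimal sketch of that normalization, with invented paper records (the 1science oaIndx data and field scheme are not reproduced here):

```python
from collections import defaultdict

def average_relative_citations(papers):
    """Compute the average of relative citations (ARC).

    Each paper is a dict with 'field', 'year', and 'citations'.
    A paper's relative citation score is its citation count divided
    by the mean citation count of papers in the same field and year.
    """
    # Mean citations per (field, year) cohort
    cohorts = defaultdict(list)
    for p in papers:
        cohorts[(p["field"], p["year"])].append(p["citations"])
    means = {k: sum(v) / len(v) for k, v in cohorts.items()}

    # Relative score for each paper, then the overall average
    relative = [p["citations"] / means[(p["field"], p["year"])]
                for p in papers]
    return sum(relative) / len(relative)

# Invented example records, for illustration only
papers = [
    {"field": "astrophysics", "year": 2008, "citations": 30},
    {"field": "astrophysics", "year": 2008, "citations": 10},
    {"field": "medicine", "year": 2008, "citations": 5},
]
print(average_relative_citations(papers))  # -> 1.0
```

Computing the ARC separately for the OA and paywalled subsets of a corpus would then allow the kind of comparison the note reports.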


A simple proposal for the publication of journal citation distributions

Authors: Vincent Larivière, Véronique Kiermer, Catriona J. MacCallum, Marcia McNutt, Mark Patterson, Bernd Pulverer, Sowmya Swaminathan, Stuart Taylor, Stephen Curry

Although the Journal Impact Factor (JIF) is widely acknowledged to be a poor indicator of the quality of individual papers, it is used routinely to evaluate research and researchers. Here, we present a simple method for generating the citation distributions that underlie JIFs.

Application of this straightforward protocol reveals the full extent of the skew of distributions and variation in citations received by published papers that is characteristic of all scientific journals.

Although there are differences among journals across the spectrum of JIFs, the citation distributions overlap extensively, demonstrating that the citation performance of individual papers cannot be inferred from the JIF.

We propose that this methodology be adopted by all journals as a move to greater transparency, one that should help to refocus attention on individual pieces of work and counter the inappropriate usage of JIFs during the process of research assessment.
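
The central observation, that a mean-based indicator conceals a highly skewed citation distribution, can be illustrated in a few lines (the citation counts below are invented for illustration, not drawn from the paper's data):

```python
from statistics import mean, median
from collections import Counter

# Hypothetical citation counts for one journal's papers over a
# JIF window; a single highly cited paper inflates the mean.
citations = [0, 0, 1, 1, 1, 2, 2, 3, 4, 60]

print("JIF-like mean:", mean(citations))  # -> 7.4
print("median:", median(citations))       # -> 1.5

# The full distribution, as the proposal recommends publishing:
print(sorted(Counter(citations).items()))
```

Here the mean is nearly five times the median, so most papers in this hypothetical journal receive far fewer citations than the JIF-like figure suggests, which is exactly why publishing the full distribution is more informative.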

URL: A simple proposal for the publication of journal citation distributions


Open access publishing trend analysis: statistics beyond the perception

Authors: Elisabetta Poltronieri, Elena Bravo, Moreno Curti, Maurizio Ferri, Cristina Mancini

The purpose of this analysis was twofold: to track the number of open access journals acquiring an impact factor, and to investigate the distribution of subject categories pertaining to these journals. As a case study, the journals in which researchers of the Italian National Institute of Health (Istituto Superiore di Sanità) have published were surveyed.

Data were collected by searching open access journals listed in the Directory of Open Access Journals (DOAJ) and then compared with those having an impact factor as tracked by the Journal Citation Reports for the years 2010-2012. Journal Citation Reports subject categories were matched with Medical Subject Headings to provide a broader content classification.

A survey was performed to determine the Directory journals matching the Journal Citation Reports list, and their inclusion in a given subject area.

In the years 2010-2012, an increase in the number of journals was observed for the Journal Citation Reports (+4.93%) and for the Directory (+18.51%). The discipline showing the highest increase was medicine (315 occurrences, 26%).

From 2010 to 2012, the number of open access journals with an impact factor rose steadily, with a prevalence of journals in medicine and the biological sciences, suggesting that authors increasingly prefer to publish in open access journals.



Can Scientific Impact Be Predicted?

Authors: Yuxiao Dong, Reid A. Johnson, Nitesh V. Chawla

Citations are a widely used measure of scientific impact. However, their heavy-tailed distribution makes them fundamentally difficult to predict.

Instead, to characterize scientific impact, we address two analogous questions asked by many scientific researchers: "How will my h-index evolve over time, and which of my previously or newly published papers will contribute to it?" To answer these questions, we perform two related tasks. First, we develop a model to predict authors' future h-indices based on their current scientific impact. Second, we examine the factors that drive papers—either previously or newly published—to increase their authors' predicted future h-indices.
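
For context, an author's h-index is the largest h such that h of their papers have at least h citations each. A minimal sketch of that computation (the definition only, not the authors' prediction model):

```python
def h_index(citations):
    """Largest h such that h papers have >= h citations each.

    `citations` is a list of per-paper citation counts.
    """
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank  # the top `rank` papers all have >= rank citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4
print(h_index([25, 8, 5, 3, 3]))  # -> 3
```

Note that a single highly cited paper barely moves the h-index (second example), which is part of why modeling its evolution requires looking at which papers can still "contribute" to it.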

By leveraging relevant factors, we can predict an author's h-index in five years with an R² value of 0.92, and whether a previously (newly) published paper will contribute to this future h-index with an F1 score of 0.99 (0.77).

We find that topical authority and publication venue are crucial to these effective predictions, while topic popularity is surprisingly inconsequential. Further, we develop an online tool that allows users to generate informed h-index predictions.

Our work demonstrates the predictability of scientific impact, and can help scholars to effectively leverage their position of "standing on the shoulders of giants."