The Post-Embargo Open Access Citation Advantage: It Exists (Probably), It's Modest (Usually), and the Rich Get Richer (of Course)

Author : Jim Ottaviani

Many studies show that open access (OA) articles—articles from scholarly journals made freely available to readers without requiring subscription fees—are downloaded, and presumably read, more often than closed access/subscription-only articles.

Assertions that OA articles are also cited more often generate more controversy. Confounding factors (authors may self-select only the best articles to make OA; absence of an appropriate control group of non-OA articles with which to compare citation figures; conflation of pre-publication vs. published/publisher versions of articles, etc.) make demonstrating a real citation difference difficult.

This study addresses those factors and shows that an open access citation advantage as high as 19% exists, even when articles are embargoed during some or all of their prime citation years. Not surprisingly, better (defined as above median) articles gain more when made OA.

DOI : http://dx.doi.org/10.1371/journal.pone.0159614

Research impact of paywalled versus open access papers

Authors : Éric Archambault, Grégoire Côté, Brooke Struck, Matthieu Voorons

This note presents data from the 1science oaIndx on the average of relative citations (ARC) for 3.3 million papers published from 2007 to 2009 and indexed in the Web of Science (WoS).

These data show a decidedly large citation advantage for open access (OA) papers, even though they suffer from a lag in availability compared to paywalled papers.
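
For context, indicators of this kind are usually field- and year-normalized. The note does not spell out the 1science formula, so the following is an assumption based on standard bibliometric practice:

$$ \mathrm{ARC} = \frac{1}{N} \sum_{i=1}^{N} \frac{c_i}{\bar{c}_{f(i),\,y(i)}} $$

where $c_i$ is the citation count of paper $i$ and $\bar{c}_{f(i),y(i)}$ is the average citation count of all papers published in the same field and year. An ARC above 1 indicates citation performance above the world average.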

URL : http://www.1science.com/oanumbr.html

A simple proposal for the publication of journal citation distributions

Authors : Vincent Larivière, Véronique Kiermer, Catriona J. MacCallum, Marcia McNutt, Mark Patterson, Bernd Pulverer, Sowmya Swaminathan, Stuart Taylor, Stephen Curry

Although the Journal Impact Factor (JIF) is widely acknowledged to be a poor indicator of the quality of individual papers, it is used routinely to evaluate research and researchers. Here, we present a simple method for generating the citation distributions that underlie JIFs.

Application of this straightforward protocol reveals the full extent of the skew of distributions and variation in citations received by published papers that is characteristic of all scientific journals.

Although there are differences among journals across the spectrum of JIFs, the citation distributions overlap extensively, demonstrating that the citation performance of individual papers cannot be inferred from the JIF.

We propose that this methodology be adopted by all journals as a move to greater transparency, one that should help to refocus attention on individual pieces of work and counter the inappropriate usage of JIFs during the process of research assessment.
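
As an illustration of the kind of protocol proposed, here is a minimal sketch using simulated (hypothetical) citation counts rather than real journal data; it is not the authors' code:

```python
# Minimal sketch of a journal citation distribution, using simulated data.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# Stand-in for "citations received in the census year by each citable item
# published in the two preceding years": heavy-tailed, like real data.
citations = rng.negative_binomial(n=1, p=0.15, size=1000)

jif_like_mean = citations.mean()  # the JIF is essentially this arithmetic mean
median = np.median(citations)     # typically far below the mean in skewed data

plt.hist(citations, bins=range(0, int(citations.max()) + 2), edgecolor="black")
plt.axvline(jif_like_mean, linestyle="--", label=f"mean = {jif_like_mean:.2f}")
plt.axvline(median, linestyle=":", label=f"median = {median:.0f}")
plt.xlabel("Citations in census year")
plt.ylabel("Number of items")
plt.legend()
plt.show()
```

Plotting the full distribution in this way makes the skew visible at a glance: the mean (the JIF-like figure) sits well above the median, pulled up by a small number of highly cited papers.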

URL : http://www.biorxiv.org/content/early/2016/07/05/062109.abstract

Open access publishing trend analysis: statistics beyond the perception

Authors : Elisabetta Poltronieri, Elena Bravo, Moreno Curti, Maurizio Ferri, Cristina Mancini

Introduction

The purpose of this analysis was twofold: to track the number of open access journals acquiring impact factor, and to investigate the distribution of subject categories pertaining to these journals. As a case study, journals in which the researchers of the National Institute of Health (Istituto Superiore di Sanità) in Italy have published were surveyed.

Method

Data were collected by searching open access journals listed in the Directory of Open Access Journals (DOAJ), then compared with those having an impact factor as tracked by the Journal Citation Reports for the years 2010-2012. Journal Citation Reports subject categories were matched with Medical Subject Headings to provide a larger content classification.
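
In outline, the comparison amounts to intersecting the two journal lists, typically on ISSN. A minimal sketch with hypothetical placeholder ISSNs (the article does not publish its code):

```python
# Toy sketch of matching DOAJ journals against the JCR list by ISSN.
# All ISSNs below are hypothetical placeholders.
doaj_issns = {"1234-5678", "2345-6789", "3456-7890"}  # from a DOAJ export
jcr_issns = {"2345-6789", "3456-7890", "4567-8901"}   # from JCR 2010-2012

oa_with_if = doaj_issns & jcr_issns  # OA journals that also have an impact factor
print(f"{len(oa_with_if)} open access journals also appear in the JCR")
```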

Analysis

A survey was performed to determine the Directory journals matching the Journal Citation Reports list, and their inclusion in a given subject area.

Results

In the years 2010-2012, an increase in the number of journals was observed for Journal Citation Reports (+4.93%) and for the Directory (+18.51%). The discipline showing the highest increment was medicine (315 occurrences, 26%).

Conclusions

From 2010 to 2012, the number of open access journals with an impact factor rose steadily, with a prevalence of journals in medicine and the biological sciences, suggesting that authors are increasingly choosing to publish in open access journals.

URL : http://www.informationr.net/ir/21-2/paper712.html

Can Scientific Impact Be Predicted?

Authors : Yuxiao Dong, Reid A. Johnson, Nitesh V. Chawla

A widely used measure of scientific impact is citations. However, due to their heavy-tailed distribution, citations are fundamentally difficult to predict.

Instead, to characterize scientific impact, we address two analogous questions asked by many scientific researchers: “How will my h-index evolve over time, and which of my previously or newly published papers will contribute to it?” To answer these questions, we perform two related tasks. First, we develop a model to predict authors’ future h-indices based on their current scientific impact. Second, we examine the factors that drive papers—either previously or newly published—to increase their authors’ predicted future h-indices.
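
Since both tasks revolve around the h-index, a quick refresher on the metric itself may help; this is a generic implementation, not the authors' prediction model:

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that at least h papers have h or more citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, count in enumerate(ranked, start=1) if count >= rank)

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have at least 4 citations each
```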

By leveraging relevant factors, we can predict an author's h-index in five years with an R² value of 0.92, and whether a previously (newly) published paper will contribute to this future h-index with an F1 score of 0.99 (0.77).
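
For reference, R² here is the standard coefficient of determination for the regression task, and the F1 score is the harmonic mean of precision $P$ and recall $R$:

$$ F_1 = \frac{2PR}{P + R} $$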

We find that topical authority and publication venue are crucial to these effective predictions, while topic popularity is surprisingly inconsequential. Further, we develop an online tool that allows users to generate informed h-index predictions.

Our work demonstrates the predictability of scientific impact, and can help scholars to effectively leverage their position of “standing on the shoulders of giants.”

URL : https://arxiv.org/abs/1606.05905

Are Wikipedia Citations Important Evidence of the Impact of Scholarly Articles and Books?

Authors : Kayvan Kousha, Mike Thelwall

Individual academics and research evaluators often need to assess the value of published research. Whilst citation counts are a recognised indicator of scholarly impact, alternative data is needed to provide evidence of other types of impact, including within education and wider society.

Wikipedia is a logical choice for both of these because the role of a general encyclopaedia is to be an understandable repository of facts about a diverse array of topics and hence it may cite research to support its claims.

To test whether Wikipedia could provide new evidence about the impact of scholarly research, this article counted citations to 302,328 articles and 18,735 monographs in English indexed by Scopus in the period 2005 to 2012.

The results show that citations from Wikipedia to articles are too rare for most research evaluation purposes, with only 5% of articles cited across all fields. In contrast, a third of monographs have at least one citation from Wikipedia, with the most in the arts and humanities.

Hence, Wikipedia citations can provide extra impact evidence for academic monographs. Nevertheless, the results may be relatively easily manipulated and so Wikipedia is not recommended for evaluations affecting stakeholder interests.

URL : http://www.scit.wlv.ac.uk/~cm1993/papers/WikipediaCitations.pdf

Quantifying the changing role of past publications

Our current societies increasingly rely on electronic repositories of collective knowledge. An archetype of these databases is the Web of Science (WoS) that stores scientific publications. In contrast to several other forms of knowledge — e.g., Wikipedia articles — a scientific paper does not change after its “birth”.

Nonetheless, from the moment a paper is published it exists within the evolving web of other papers, thus, its actual meaning to the reader changes.

To track how scientific ideas (represented by groups of scientific papers) appear and evolve, we apply a novel combination of algorithms explicitly allowing for papers to change their groups. We (i) identify the overlapping clusters of the undirected yearly co-citation networks of the WoS (1975-2008) and (ii) match these yearly clusters (groups) to form group timelines.
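
A toy sketch of these two steps follows. The overlapping-clustering algorithm (k-clique percolation here) and the Jaccard-based matching rule are illustrative assumptions, since the abstract does not name the exact methods:

```python
# Toy sketch: (i) overlapping clusters of a yearly co-citation network,
# (ii) matching clusters across consecutive years by membership overlap.
import networkx as nx
from networkx.algorithms.community import k_clique_communities

def yearly_clusters(cocitation_edges, k=3):
    """Overlapping communities of one year's co-citation network."""
    graph = nx.Graph(cocitation_edges)
    return [set(c) for c in k_clique_communities(graph, k)]

def jaccard(a, b):
    return len(a & b) / len(a | b)

def match_clusters(year1, year2, threshold=0.3):
    """Link each cluster to its best-overlapping successor in the next year."""
    links = []
    for c1 in year1:
        best = max(year2, key=lambda c2: jaccard(c1, c2), default=None)
        if best is not None and jaccard(c1, best) >= threshold:
            links.append((c1, best))
    return links
```

Chaining such year-to-year links produces the group timelines described above, while clusters with no sufficiently overlapping successor mark the death of a group and unmatched new clusters mark a birth.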

After visualizing the longest-lived groups of the entire data set, we assign topic labels to the groups. We find that across the entire Web of Science, multidisciplinarity is clearly over-represented among cutting-edge ideas. In addition, we provide detailed examples of papers that (i) change their topic labels and (ii) move between groups.

URL : http://arxiv.org/abs/1605.00509