Authors: Thea Marie Drachen, Ole Ellegaard, Asger Væring Larsen, Søren Bertil Fabricius Dorch
This paper presents indications of the existence of a citation advantage related to sharing data, using astrophysics as a case. Through bibliometric analyses we find a citation advantage for astrophysical papers in core journals.
The advantage arises for indexed papers that are associated with data through bibliographic links: such papers receive, on average, significantly more citations per paper per year than papers not associated with links to data.
Many studies show that open access (OA) articles—articles from scholarly journals made freely available to readers without requiring subscription fees—are downloaded, and presumably read, more often than closed access/subscription-only articles.
Assertions that OA articles are also cited more often generate more controversy. Confounding factors (authors may self-select only the best articles to make OA; absence of an appropriate control group of non-OA articles with which to compare citation figures; conflation of pre-publication vs. published/publisher versions of articles, etc.) make demonstrating a real citation difference difficult.
This study addresses those factors and shows that an open access citation advantage as high as 19% exists, even when articles are embargoed during some or all of their prime citation years. Not surprisingly, better (defined as above median) articles gain more when made OA.
Authors: Vincent Larivière, Véronique Kiermer, Catriona J. MacCallum, Marcia McNutt, Mark Patterson, Bernd Pulverer, Sowmya Swaminathan, Stuart Taylor, Stephen Curry
Although the Journal Impact Factor (JIF) is widely acknowledged to be a poor indicator of the quality of individual papers, it is used routinely to evaluate research and researchers. Here, we present a simple method for generating the citation distributions that underlie JIFs.
Application of this straightforward protocol reveals the full extent of the skew of distributions and variation in citations received by published papers that is characteristic of all scientific journals.
Although there are differences among journals across the spectrum of JIFs, the citation distributions overlap extensively, demonstrating that the citation performance of individual papers cannot be inferred from the JIF.
We propose that this methodology be adopted by all journals as a move to greater transparency, one that should help to refocus attention on individual pieces of work and counter the inappropriate usage of JIFs during the process of research assessment.
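The protocol the authors describe amounts to binning per-paper citation counts into a frequency distribution instead of summarizing them with a single mean. A minimal sketch of that idea is below; the citation counts are illustrative, not data from the paper, and the `cap` parameter (pooling the long tail into one open-ended bin for readability) is an assumption about how such distributions are typically displayed.

```python
from collections import Counter

def citation_distribution(citation_counts, cap=100):
    """Bin per-paper citation counts into a frequency distribution.

    Counts above `cap` are pooled into a single open-ended bin so the
    heavy tail does not stretch the plot indefinitely.
    """
    dist = Counter(min(c, cap) for c in citation_counts)
    return {k: dist[k] for k in sorted(dist)}

def mean_citations(citation_counts):
    """Mean citations per paper: the statistic a JIF summarizes."""
    return sum(citation_counts) / len(citation_counts)

# Illustrative skewed data: most papers rarely cited, one highly cited.
counts = [0, 0, 1, 1, 1, 2, 2, 3, 5, 40]
print(citation_distribution(counts))  # {0: 2, 1: 3, 2: 2, 3: 1, 5: 1, 40: 1}
print(mean_citations(counts))         # 5.5, inflated by the single outlier
```

The example makes the paper's point concrete: a mean of 5.5 describes none of the ten papers well, whereas the distribution shows that most receive three citations or fewer.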
The purpose of this analysis was twofold: to track the number of open access journals acquiring an impact factor, and to investigate the distribution of subject categories pertaining to these journals. As a case study, journals in which the researchers of the Italian National Institute of Health (Istituto Superiore di Sanità) have published were surveyed.
Data were collected by searching open access journals listed in the Directory of Open Access Journals, then compared with those having an impact factor as tracked by the Journal Citation Reports for the years 2010-2012. Journal Citation Reports subject categories were matched with Medical Subject Headings to provide a larger content classification.
A survey was performed to determine the Directory journals matching the Journal Citation Reports list, and their inclusion in a given subject area.
In the years 2010-2012, an increase in the number of journals was observed for Journal Citation Reports (+ 4.93%) and for the Directory (+18.51%). The discipline showing the highest increment was medicine (315 occurrences, 26%).
From 2010 to 2012, the number of open access journals with an impact factor rose steadily, with a prevalence of journals in medicine and the biological sciences, suggesting that authors are increasingly choosing to publish in open access journals.
Authors: Yuxiao Dong, Reid A. Johnson, Nitesh V. Chawla
A widely used measure of scientific impact is citations. However, due to their heavy-tailed distribution, citations are fundamentally difficult to predict.
Instead, to characterize scientific impact, we address two analogous questions asked by many scientific researchers: "How will my h-index evolve over time, and which of my previously or newly published papers will contribute to it?" To answer these questions, we perform two related tasks. First, we develop a model to predict authors' future h-indices based on their current scientific impact. Second, we examine the factors that drive papers, whether previously or newly published, to increase their authors' predicted future h-indices.
By leveraging relevant factors, we can predict an author's h-index in five years with an R² value of 0.92, and whether a previously (newly) published paper will contribute to this future h-index with an F1 score of 0.99 (0.77).
We find that topical authority and publication venue are crucial to these effective predictions, while topic popularity is surprisingly inconsequential. Further, we develop an online tool that allows users to generate informed h-index predictions.
Our work demonstrates the predictability of scientific impact, and can help scholars to effectively leverage their position of "standing on the shoulders of giants."
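For reference, the h-index that the model above predicts is itself straightforward to compute: an author has index h if h of their papers each have at least h citations. The sketch below shows only this definition, not the authors' prediction model, and the example citation counts are hypothetical.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    # Walk the counts from most to least cited; the rank i of the last
    # paper with citations >= i is the h-index.
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with >= 4 citations each
print(h_index([25, 8, 5, 3, 3]))  # 3: the fourth paper has only 3 < 4 citations
```

The definition also clarifies the paper's second task: a newly published paper "contributes" to a future h-index exactly when its citation count reaches the author's new threshold h.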
Individual academics and research evaluators often need to assess the value of published research. Whilst citation counts are a recognised indicator of scholarly impact, alternative data is needed to provide evidence of other types of impact, including within education and wider society.
Wikipedia is a logical choice for both of these because the role of a general encyclopaedia is to be an understandable repository of facts about a diverse array of topics and hence it may cite research to support its claims.
To test whether Wikipedia could provide new evidence about the impact of scholarly research, this article counted citations to 302,328 articles and 18,735 monographs in English indexed by Scopus in the period 2005 to 2012.
The results show that citations from Wikipedia to articles are too rare for most research evaluation purposes: across all fields, only 5% of articles are cited. In contrast, a third of monographs have at least one citation from Wikipedia, with the highest proportion in the arts and humanities.
Hence, Wikipedia citations can provide extra impact evidence for academic monographs. Nevertheless, the results may be relatively easily manipulated and so Wikipedia is not recommended for evaluations affecting stakeholder interests.