Authors: Vincent Larivière, Véronique Kiermer, Catriona J. MacCallum, Marcia McNutt, Mark Patterson, Bernd Pulverer, Sowmya Swaminathan, Stuart Taylor, Stephen Curry
Although the Journal Impact Factor (JIF) is widely acknowledged to be a poor indicator of the quality of individual papers, it is used routinely to evaluate research and researchers. Here, we present a simple method for generating the citation distributions that underlie JIFs.
Application of this straightforward protocol reveals the full extent of the skewed distributions and the variation in citations received by published papers that are characteristic of all scientific journals.
Although there are differences among journals across the spectrum of JIFs, the citation distributions overlap extensively, demonstrating that the citation performance of individual papers cannot be inferred from the JIF.
We propose that this methodology be adopted by all journals as a move to greater transparency, one that should help to refocus attention on individual pieces of work and counter the inappropriate usage of JIFs during the process of research assessment.
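The protocol described above amounts to tabulating per-article citation counts for a journal and plotting their frequency distribution, rather than reporting only the mean. A minimal sketch of the idea (the citation counts below are invented placeholder data, not figures from the paper):

```python
from collections import Counter

# Hypothetical per-article citation counts for one journal over a
# two-year census window (placeholder data, not from the paper).
citations = [0, 0, 1, 1, 1, 2, 2, 3, 3, 4, 5, 7, 9, 15, 48]

# The JIF is (roughly) the mean of this distribution...
jif = sum(citations) / len(citations)

# ...but the distribution itself is highly skewed: a few highly
# cited papers pull the mean well above what most papers receive.
distribution = Counter(citations)
median = sorted(citations)[len(citations) // 2]

print(f"mean (JIF-like): {jif:.1f}")   # mean (JIF-like): 6.7
print(f"median: {median}")             # median: 3
print(f"distribution: {dict(sorted(distribution.items()))}")
```

The gap between the mean and the median in even this toy example illustrates why a single summary number conceals the variation the authors want journals to disclose.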
Authors: Molly M. King, Carl T. Bergstrom, Shelley J. Correll, Jennifer Jacquet, Jevin D. West
How common is self-citation in scholarly publication, and does the practice vary by gender? Using novel methods and a dataset of 1.5 million research papers published between 1779 and 2011 in the scholarly database JSTOR, we find that nearly 10% of references are self-citations by a paper’s authors.
We further find that, over this period, men cite their own papers 56% more than women do. In the last two decades of our data, men self-cite 70% more than women. Women are also more than ten percentage points more likely than men not to cite their own previous work at all.
Despite increased representation of women in academia, this gender gap in self-citation rates has remained stable over the last 50 years. We break down self-citation patterns by academic field and number of authors, and comment on potential mechanisms behind these observations.
These findings have important implications for scholarly visibility and likely consequences for academic careers.
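The headline figure above is, in essence, the share of a corpus's references whose author lists overlap with the citing paper's authors. A toy sketch of that computation (the data structure, names, and simple name-overlap rule are illustrative assumptions, not the paper's actual author-matching method):

```python
def self_citation_rate(papers):
    """Fraction of all references that share at least one author
    with the citing paper (a simplified overlap rule)."""
    total_refs = 0
    self_cites = 0
    for paper in papers:
        authors = set(paper["authors"])
        for ref_authors in paper["references"]:
            total_refs += 1
            if authors & set(ref_authors):  # any shared author name
                self_cites += 1
    return self_cites / total_refs

# Tiny illustrative corpus (names are made up).
corpus = [
    {"authors": ["A. Smith", "B. Jones"],
     "references": [["A. Smith"], ["C. Lee"], ["D. Park", "B. Jones"]]},
    {"authors": ["C. Lee"],
     "references": [["A. Smith", "B. Jones"], ["E. Kim"]]},
]

print(self_citation_rate(corpus))  # 2 self-citations out of 5 references -> 0.4
```

Matching real bibliographic records is far harder than this exact-string comparison suggests (name disambiguation is a research problem in itself), which is presumably part of what the authors' "novel methods" address.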
The objective was to examine whether National Institutes of Health (NIH)-funded articles that were archived in PubMed Central (PMC) after the release of the 2008 NIH Public Access Policy show greater scholarly impact than comparable articles not archived in PMC.
A list of journals across several subject areas was developed from which to collect article citation data. Citation information and cited-reference counts for articles published in 2006 and 2009 in 122 journals were obtained from the Scopus database. The articles were separated into categories based on NIH funding status and on whether they were deposited in PubMed Central. An analysis of citation data across a five-year timespan was performed on this set of articles.
A total of 45,716 articles were examined, including 7,960 with NIH funding. An analysis of the number of times these articles were cited found that NIH-funded 2006 articles in PMC were not cited significantly more than NIH-funded non-PMC articles. However, NIH-funded 2009 articles in PMC were cited 26% more than NIH-funded 2009 articles not in PMC, five years after publication. This result is highly significant even after controlling for journal (as a proxy for article quality and topic).
Our analysis suggests that factors arising between 2006 and 2009 produced a subsequent boost in the scholarly impact of articles deposited in PubMed Central. The 2008 Public Access Policy is likely to be one such factor, but others may have contributed as well (e.g., the growing size and visibility of PMC, the increasing availability of full-text linkouts from PubMed, and the indexing of PMC articles by Google Scholar).
We present here evidence for the existence of a citation advantage within astrophysics for papers that link to data. Using simple measures based on publication data from the NASA Astrophysics Data System, we find a citation advantage for papers with links to data, which receive on average significantly more citations per paper than papers without such links. Furthermore, using the INSPEC and Web of Science databases, we investigate whether papers of an experimental or theoretical nature display different citation behavior.
This paper reports on an interview-based citation behaviour study, part of a wider study of trust in information resources, investigating why researchers chose to cite particular references in one of their publications. Their motivations are explored, with an emphasis on whether they regarded the reference as an authoritative and trustworthy source, and, if so, to what extent and why.
Semi-structured critical incident interviews were carried out with eighty-seven researchers from the UK and the USA.
The answers were analysed using qualitative techniques and grouped under descriptive headings according to the types of reasons given for citation. These headings were then used to create a table of reason types and the frequency of their use.
The motivations for citing were found to be complex and multi-faceted but, in nearly all cases, researchers do regard the authority and trustworthiness of the cited source as an important factor in choosing to cite it. This suggests that citation is at least partly an acknowledgement of the intellectual influence of the content of the cited source. It was also found, however, that researchers have strong social networks of trust and collaboration. They make considerable use of these in receiving and gathering information for research and thus context is also important in their eventual citation decisions.
Citing behaviour does include an acknowledgement of useful intellectual content, but this process cannot be separated from the researcher’s position in networks of trusted social and research influence. The digital transition has provided tools to help maintain and develop these social networks and it has also made it easier for researchers to investigate the credentials of the sources of documents. This suggests that the current distinction in the literature between normative and constructionist theories of citation behaviour may not capture the nuanced and complex relationship between these two factors.
The aim of this research was to identify the motivations for citing Wikipedia in scientific papers. The number of citations to Wikipedia, the location of those citations, the type of citing papers, and the subjects of the citing and cited articles were also determined and compared across subject fields. From all English-language articles indexed in Scopus in 2007 and 2012 that cited Wikipedia, 602 articles were selected using stratified random sampling. Content analysis and bibliometric methods were used to carry out the research. Results showed that there are 20 motivations for citing Wikipedia, the most frequent being to provide general information, definitions, and facts and figures. Citations to Wikipedia were most often in the introduction and other introductory sections of papers. Computer science, the internet, and chemistry were the most frequently cited subjects. The use of Wikipedia in articles is increasing both in quantity and in diversity. However, there are disciplinary differences in both the amount and the nature of Wikipedia use.