An experiment run in 2009 could not assess whether making monographs available in open access enhanced scholarly impact. This paper revisits the experiment, drawing on additional citation data and tweets. It attempts to answer the following research question: does open access have a positive influence on the number of citations and tweets a monograph receives, taking into account the influence of scholarly field and language?
The correlation between monograph citations and tweets is also investigated. The number of citations and tweets measured in 2014 reveals a slight open access advantage, though the influence of language and subject must also be taken into account. Twitter usage and citation behaviour, however, hardly overlap.
Multiple studies in the international literature have sought to assess the impact of open access on the citation rate of scientific articles. The present study, written in French, is limited to the 2010 publications of the Ecole des Ponts.
It nevertheless offers a state of the art of previous studies on the subject for a readership of French-speaking professionals, and its originality lies in measuring the mean number of citations per month, before and after the open access "release" of the articles, thereby avoiding most of the biases that can arise in this type of approach.
Beyond confirming, as many others have done before, a clear open access citation advantage in computer science, earth and space sciences, engineering, environmental sciences, mathematics, and physics and astronomy, it also shows that an early "release" can have a more favourable impact than a late one in certain disciplinary fields, such as mathematics and physics/astronomy.
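The before/after protocol described above can be sketched as follows. This is a minimal illustration with invented dates, not the study's actual data: for one article, count citations in the window before the (hypothetical) OA release and in the window after it, and divide each count by the window length in months.

```python
from datetime import date

def citations_per_month(citation_dates, start, end):
    """Mean number of citations per month in the half-open window [start, end)."""
    months = (end.year - start.year) * 12 + (end.month - start.month)
    if months <= 0:
        return 0.0
    n = sum(1 for d in citation_dates if start <= d < end)
    return n / months

# Hypothetical article: published Jan 2010, made OA Jan 2012, observed to Jan 2014.
published = date(2010, 1, 1)
oa_release = date(2012, 1, 1)
observed_until = date(2014, 1, 1)

# Invented citation dates for illustration only.
citation_dates = [date(2011, 6, 1), date(2012, 3, 1), date(2012, 9, 1),
                  date(2013, 2, 1), date(2013, 11, 1)]

before = citations_per_month(citation_dates, published, oa_release)
after = citations_per_month(citation_dates, oa_release, observed_until)
```

Comparing each article with itself in this way sidesteps the self-selection bias discussed below, since no separate control group of non-OA articles is needed.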
Authors: Robert M. Patton, Christopher G. Stahl, Jack C. Wells
The measurement of scientific progress remains a significant challenge, exacerbated by the use of many different types of metrics that are often incorrectly used, overused, or even explicitly abused.
Several metrics, such as the h-index or the journal impact factor (JIF), are often used as a means to assess whether an author, article, or journal creates an "impact" on science. Unfortunately, external forces can manipulate these metrics, diluting the value of their intended, original purpose.
This work highlights these issues and the need to more clearly define "impact", and emphasizes the need for better metrics that leverage full content analysis of publications.
Authors: Thea Marie Drachen, Ole Ellegaard, Asger Væring Larsen, Søren Bertil Fabricius Dorch
This paper presents some indications of the existence of a citation advantage related to data sharing, using astrophysics as a case. Through bibliometric analyses we find a citation advantage for astrophysical papers in core journals.
The advantage arises where indexed papers are associated with data through bibliographic links: such papers receive, on average, significantly more citations per paper per year than papers without links to data.
Many studies show that open access (OA) articles—articles from scholarly journals made freely available to readers without requiring subscription fees—are downloaded, and presumably read, more often than closed access/subscription-only articles.
Assertions that OA articles are also cited more often generate more controversy. Confounding factors (authors may self-select only the best articles to make OA; absence of an appropriate control group of non-OA articles with which to compare citation figures; conflation of pre-publication vs. published/publisher versions of articles, etc.) make demonstrating a real citation difference difficult.
This study addresses those factors and shows that an open access citation advantage as high as 19% exists, even when articles are embargoed during some or all of their prime citation years. Not surprisingly, better (defined as above median) articles gain more when made OA.
Authors: Vincent Larivière, Véronique Kiermer, Catriona J. MacCallum, Marcia McNutt, Mark Patterson, Bernd Pulverer, Sowmya Swaminathan, Stuart Taylor, Stephen Curry
Although the Journal Impact Factor (JIF) is widely acknowledged to be a poor indicator of the quality of individual papers, it is used routinely to evaluate research and researchers. Here, we present a simple method for generating the citation distributions that underlie JIFs.
Application of this straightforward protocol reveals the full extent of the skew of distributions and variation in citations received by published papers that is characteristic of all scientific journals.
Although there are differences among journals across the spectrum of JIFs, the citation distributions overlap extensively, demonstrating that the citation performance of individual papers cannot be inferred from the JIF.
We propose that this methodology be adopted by all journals as a move to greater transparency, one that should help to refocus attention on individual pieces of work and counter the inappropriate usage of JIFs during the process of research assessment.
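The core of the argument above can be illustrated numerically. The sketch below uses invented citation counts (not data from the paper) to show why a highly skewed distribution makes the JIF, which is simply the mean, a poor predictor for individual papers: most papers sit well below it.

```python
from collections import Counter
from statistics import mean, median

# Invented citation counts for papers counted in one journal's JIF window
# (citations in year Y to items published in Y-1 and Y-2). Illustration only.
citations = [0, 0, 0, 1, 1, 2, 2, 3, 5, 8, 12, 14, 40, 120]

jif = mean(citations)       # the JIF is just this mean
dist = Counter(citations)   # the citation distribution underlying it

# The skew: a few highly cited papers pull the mean far above the typical paper.
below_mean = sum(1 for c in citations if c < jif)
typical = median(citations)
```

Here the mean (about 14.9) is driven by two outliers, while the median paper has only 2.5 citations and 12 of 14 papers fall below the mean, which is exactly why overlapping, skewed distributions make per-paper inference from the JIF unreliable.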