Journals were central to Eugene Garfield’s research interests. Among other things, journals are considered as units of analysis for bibliographic databases such as the Web of Science and Scopus. In addition to providing a basis for disciplinary classifications of journals, journal citation patterns span networks across boundaries to variable extents.
Using betweenness centrality (BC) and diversity, we elaborate on the question of how to distinguish and rank journals in terms of interdisciplinarity. Interdisciplinarity, however, is difficult to operationalize in the absence of an operational definition of disciplines; the diversity of a unit of analysis is sample-dependent. BC can be considered as a measure of multi-disciplinarity.
Diversity of co-citation in a citing document has been considered as an indicator of knowledge integration, but an author can also generate trans-disciplinary—that is, non-disciplined—variation by citing sources from other disciplines.
Diversity in the bibliographic coupling among citing documents can analogously be considered as diffusion or differentiation of knowledge across disciplines. Because the citation networks in the cited direction reflect both structure and variation, diversity in this direction is perhaps the best available measure of interdisciplinarity at the journal level.
Furthermore, diversity is based on a summation and can therefore be decomposed; differences among (sub)sets can be tested for statistical significance. In the appendix, a general-purpose routine for measuring diversity in networks is provided.
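The routine in the paper's appendix is not reproduced here, but the family of diversity measures discussed (e.g. Rao-Stirling diversity, the distance-weighted sum over pairs of categories a journal cites) can be sketched as follows. The category shares and pairwise distances below are hypothetical illustrations, not data from the study:

```python
from itertools import combinations

def rao_stirling(proportions, distance):
    """Rao-Stirling diversity: sum over category pairs of p_i * p_j * d_ij.

    proportions: dict mapping category -> share of citations (sums to 1).
    distance: dict mapping (category_i, category_j) -> cognitive distance in [0, 1].
    """
    cats = list(proportions)
    return sum(
        proportions[a] * proportions[b] * distance[(a, b)]
        for a, b in combinations(cats, 2)
    )

# Hypothetical citation shares of one journal over three categories
p = {"A": 0.5, "B": 0.3, "C": 0.2}
# Hypothetical symmetric distances between the categories, keyed by sorted pair
d = {("A", "B"): 0.4, ("A", "C"): 0.9, ("B", "C"): 0.7}

print(rao_stirling(p, d))  # → 0.192
```

Because the measure is a plain summation over pairs, it decomposes naturally over subsets of categories, which is what makes the significance testing mentioned above possible.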
Title : Betweenness and diversity in journal citation networks as measures of interdisciplinarity—A tribute to Eugene Garfield
DOI : https://doi.org/10.1007/s11192-017-2528-2
Authors : Pablo Dorta-González, Yolanda Santana-Jiménez
The potential benefit of open access (OA) in relation to citation impact has been discussed in the literature in depth. The methodology used to test the OA citation advantage includes comparing OA vs. non-OA journal impact factors and citations of OA versus non-OA articles published in the same non-OA journals.
However, one problem with many such studies is that they are narrow in scope, restricted to a single discipline or set of journals. Moreover, conclusions are not entirely consistent across research areas, and ‘early view’ and ‘selection bias’ have been suggested as possible explanations. In the present paper, an analysis of gold OA across all areas of research (the 27 subject areas of the Scopus database) is carried out.
As a novel contribution, this paper takes a journal-level approach to assessing the OA citation advantage, whereas many other studies take a paper-level approach. Data were obtained from Scimago Lab, based on the Scopus database, and journals were tagged as OA or non-OA using the DOAJ list.
Alongside the OA citation advantage, OA prevalence and the differences between access types (OA vs. non-OA) in production and referencing are tested. A total of 3,737 OA journals (16.8%) and 18,485 non-OA journals (83.2%) published in 2015 are considered. The main conclusion is that there is no generalizable gold OA citation advantage at the journal level.
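A journal-level comparison of this kind can be sketched minimally: group journals by access type within a subject area and compare their citation rates. The figures below are invented for illustration; the actual study works with full Scopus subject areas and more elaborate statistics:

```python
from statistics import median

def median_gap(oa_cites, non_oa_cites):
    """Difference in median citations-per-document between OA and non-OA journals."""
    return median(oa_cites) - median(non_oa_cites)

# Hypothetical citations-per-document for a handful of journals in one subject area
oa = [1.2, 0.8, 2.1, 1.5]
non_oa = [1.4, 2.0, 0.9, 1.7, 1.1]

print(median_gap(oa, non_oa))  # negative → no OA advantage in this toy sample
```

Medians are used here rather than means because journal citation rates are heavily skewed; a formal version would add a non-parametric significance test per subject area.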
URL : https://arxiv.org/abs/1708.06242
Authors : Sven E. Hug, Martin P. Brändle
This is the first in-depth study on the coverage of Microsoft Academic (MA). The coverage of a verified publication list of a university was analyzed on the level of individual publications in MA, Scopus, and Web of Science (WoS).
Citation counts were analyzed, and issues related to data retrieval and data quality were examined. A Perl script was written to retrieve metadata from MA. We find that MA covers journal articles, working papers, and conference items to a substantial extent. MA clearly surpasses Scopus and WoS with respect to book-related document types and conference items but falls slightly behind Scopus with regard to journal articles.
MA shows the same biases as Scopus and WoS with regard to the coverage of the social sciences and humanities, non-English publications, and open-access publications. Rank correlations of citation counts are high between MA and the benchmark databases.
We find that the publication year is correct for 89.5% of all publications and the number of authors for 95.1% of the journal articles. Given the fast and ongoing development of MA, we conclude that MA is on the verge of becoming a bibliometric superpower. However, comprehensive studies on the quality of MA data are still lacking.
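The rank correlations of citation counts mentioned above can be illustrated with a self-contained Spearman computation (Pearson correlation on ranks; ties are ignored in this sketch). The citation counts are hypothetical, not values from the study:

```python
def spearman(x, y):
    """Spearman rank correlation, computed as Pearson correlation on ranks.

    Minimal sketch: assumes no tied values, so no midrank correction is applied.
    """
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical citation counts for the same five publications in two databases
ma = [12, 3, 45, 7, 0]
wos = [10, 5, 40, 6, 1]
print(spearman(ma, wos))  # → 1.0 (identical rank orderings)
```

High rank correlation, as reported between MA and the benchmark databases, means the databases order publications similarly even when the absolute counts differ.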
URL : https://arxiv.org/abs/1703.05539
This research diachronically analyzes worldwide scientific production on open access in the academic and scientific context, in order to identify and visualize its main actors.
The method combines bibliographic, descriptive, and analytical research with bibliometric techniques, in particular indicators of production, scientific collaboration, and thematic co-occurrence.
The Scopus database was used as the source for retrieving articles on the subject, yielding a corpus of 1,179 articles. Frequency tables for the variables were constructed with Bibexcel, the collaboration network was visualized with Pajek, and the keyword network was built with VOSviewer.
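The thematic co-occurrence analysis behind the keyword network amounts to counting how often pairs of keywords appear together in the same record. A minimal sketch, with invented keyword lists rather than the study's actual Scopus records:

```python
from itertools import combinations
from collections import Counter

def cooccurrence(docs):
    """Count keyword-pair co-occurrences across documents.

    docs: iterable of keyword lists, one per record.
    Returns a Counter keyed by sorted keyword pairs (edge weights of the network).
    """
    pairs = Counter()
    for keywords in docs:
        for a, b in combinations(sorted(set(keywords)), 2):
            pairs[(a, b)] += 1
    return pairs

# Hypothetical keyword lists from three article records
docs = [
    ["open access", "repositories", "citations"],
    ["open access", "citations"],
    ["repositories", "open access"],
]
print(cooccurrence(docs).most_common(2))
```

The resulting pair counts are exactly the weighted edges a tool like VOSviewer lays out; deduplicating and sorting each record's keywords keeps the pair keys canonical.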
Regarding the results, the most productive researchers come from countries such as the United States, Canada, France, and Spain. The newly constructed knowledge has been disseminated by journals with high impact in the academic community. The collaboration network consists of a few subnetworks in which co-authors come from different countries.
In conclusion, this study identifies the themes of debate that mark the development of open access at the international level, and open access can be regarded as an emerging frontier field of library and information science.
Title : Scientific Production on Open Access: A Worldwide Bibliometric Analysis in the Academic and Scientific Context
URL : http://www.mdpi.com/2304-6775/4/1/1
Journal classification systems play an important role in bibliometric analyses. The two most important bibliographic databases, Web of Science and Scopus, each provide a journal classification system. However, no study has systematically investigated the accuracy of these classification systems. To examine and compare the accuracy of journal classification systems, we define two criteria on the basis of direct citation relations between journals and categories.
We use Criterion I to select journals that have weak connections with their assigned categories, and we use Criterion II to identify journals that are not assigned to categories with which they have strong connections. If a journal satisfies either of the two criteria, we conclude that its assignment to categories may be questionable. Accordingly, we identify all journals with questionable classifications in Web of Science and Scopus.
Furthermore, we perform a more in-depth analysis for the field of Library and Information Science to assess whether our proposed criteria are appropriate and yield meaningful results. According to our citation-based criteria, Web of Science performs significantly better than Scopus in terms of the accuracy of its journal classification system.
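The two criteria are defined on direct citation relations between journals and categories, which can be sketched as share-of-citations checks. The thresholds below (5% for "weak", 25% for "strong") and the example counts are my illustrative assumptions, not the paper's actual parameters:

```python
def category_shares(citations):
    """Fraction of a journal's citation links going to each category."""
    total = sum(citations.values())
    return {cat: n / total for cat, n in citations.items()}

def questionable(assigned, citations, weak=0.05, strong=0.25):
    """Flag a journal's classification as questionable if:
    Criterion I  -- an assigned category receives only a weak citation share;
    Criterion II -- an unassigned category receives a strong citation share.
    Thresholds are illustrative assumptions, not the paper's values.
    """
    shares = category_shares(citations)
    crit1 = [c for c in assigned if shares.get(c, 0.0) < weak]
    crit2 = [c for c, s in shares.items() if c not in assigned and s >= strong]
    return crit1, crit2

# Hypothetical journal with 100 citation links, assigned only to "LIS"
cites = {"LIS": 3, "Computer Science": 60, "Education": 37}
print(questionable({"LIS"}, cites))
```

In this toy case the journal satisfies both criteria: its assigned category receives almost none of its citations, while two unassigned categories receive most of them.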
URL : http://arxiv.org/abs/1511.00735
In the humanities and social sciences, bibliometric methods for the assessment of research performance are (so far) less common. The current study takes a concrete example in an attempt to evaluate a research institute from the area of social sciences and humanities with the help of data from Google Scholar (GS).
In order to use GS for a bibliometric study, we have developed procedures for the normalisation of citation impact, building on the procedures of classical bibliometrics. In order to test the convergent validity of the normalized citation impact scores, we have calculated normalized scores for a subset of the publications based on data from the WoS or Scopus.
Although the scores calculated with GS and with WoS/Scopus are not identical for the different publication types considered here, they are so similar that they lead to the same assessment of the institute investigated in this study: for example, the institute’s papers whose journals are covered in WoS are cited at about an average rate (compared with the other papers in those journals).
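The classical normalization procedure the study builds on divides a paper's citations by the mean citations of its reference set. As a minimal sketch, the reference set here is simply "same journal and year"; the records are invented, and the study's actual procedure for GS data is more involved:

```python
from statistics import mean

def normalized_scores(papers):
    """Citations divided by the mean citations of the paper's reference set
    (sketched here as: papers from the same journal and publication year).
    A score of 1.0 means the paper is cited at exactly the average rate.
    """
    groups = {}
    for p in papers:
        groups.setdefault((p["journal"], p["year"]), []).append(p["cites"])
    baselines = {key: mean(vals) for key, vals in groups.items()}
    return [p["cites"] / baselines[(p["journal"], p["year"])] for p in papers]

# Hypothetical records: two papers from the same journal-year
papers = [
    {"journal": "J1", "year": 2014, "cites": 6},
    {"journal": "J1", "year": 2014, "cites": 2},
]
print(normalized_scores(papers))  # → [1.5, 0.5]
```

"Cited at about an average rate" in the abstract corresponds to normalized scores near 1.0 under a scheme of this kind.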
URL : : https://figshare.com/articles/The_application_of_bibliometrics_to_research_evaluation_in_the_humanities_and_social_sciences_an_exploratory_study_using_normalized_Google_Scholar_data_for_the_publications_of_a_research_institute/1293588
A statistical analysis of full-text downloads of articles in Elsevier’s ScienceDirect, covering all disciplines, reveals large differences in download frequencies, their skewness, and their correlation with Scopus-based citation counts between disciplines, journals, and document types. Download counts tend to be two orders of magnitude higher and less skewedly distributed than citations. A mathematical model based on the sum of two exponentials does not adequately capture monthly download counts.
The degree of correlation at the article level within a journal is similar to that at the journal level in the discipline covered by that journal, suggesting that the differences between journals are to a large extent discipline specific. Despite the fact that in all study journals download and citation counts per article positively correlate, little overlap may exist between the set of articles appearing in the top of the citation distribution and that with the most frequently downloaded ones.
Possible explanations for the differences between download and citation distributions include usage and citation leaks, bulk downloading, differences between reader and author populations in a subject field, the type of document or its content, differences in obsolescence patterns between downloads and citations, and the different functions of reading and citing in the research process.
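The limited overlap between the most-downloaded and most-cited articles can be quantified by comparing the top fractions of both distributions. A minimal sketch with invented per-article counts (the study's data are journal-level ScienceDirect/Scopus records):

```python
def top_overlap(downloads, citations, frac=0.1):
    """Share of articles appearing in both the top-`frac` by downloads
    and the top-`frac` by citations (0.0 = disjoint sets, 1.0 = identical)."""
    k = max(1, int(len(downloads) * frac))
    top_d = set(sorted(range(len(downloads)),
                       key=downloads.__getitem__, reverse=True)[:k])
    top_c = set(sorted(range(len(citations)),
                       key=citations.__getitem__, reverse=True)[:k])
    return len(top_d & top_c) / k

# Hypothetical per-article counts for ten articles in one journal
dl = [900, 30, 45, 800, 20, 15, 60, 25, 10, 5]
ct = [2, 9, 1, 0, 1, 0, 3, 1, 0, 0]
print(top_overlap(dl, ct))  # → 0.0 (disjoint top sets in this toy sample)
```

A positive overall correlation is fully compatible with a near-zero top-set overlap, which is exactly the pattern the abstract describes.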
URL : http://arxiv.org/abs/1510.05129