Applying Librarian-Created Evaluation Tools to Determine Quality and Credibility of Open Access Library Science Journals

Authors : Maggie Albro, Jessica L. Serrao, Christopher D. Vidas, Jenessa M. McElfresh, K. Megan Sheffield, Megan Palmer

This article explores the application of journal quality and credibility evaluation tools to library science publications. The researchers investigate quality and credibility attributes of forty-eight peer-reviewed library science journals with open access components using two evaluative tools developed and published by librarians.

The results identify common positive and negative attributes of library science journals, compare the results of the two evaluation tools, and discuss their ease of use and limitations. Overall, the results show that while library science journals do not exhibit the same concerning characteristics that librarians warn other researchers about, there are several areas in which publishers can improve the quality and credibility of their journals.

URL : https://preprint.press.jhu.edu/portal/sites/default/files/06_24.1albro.pdf

To What Extent is Inclusion in the Web of Science an Indicator of Journal ‘Quality’?

Authors : Diego Chavarro, Ismael Rafols, Puay Tang

The assessment of research based on the journal in which it is published is a widely adopted practice. Some research assessments use the Web of Science (WoS) to identify “high quality” journals, which are assumed to publish excellent research.

The authority of WoS on journal quality stems from its selection of journals based on editorial standards and scientific impact criteria. These can be considered universalistic criteria, meaning that they can be applied to any journal regardless of its place of publication, language, or discipline.

In this article we examine the coverage by WoS of journals produced in Latin America, Spain, and Portugal. We use a logistic regression to model the probability that a journal is covered by WoS given universalistic criteria (editorial standards and scientific impact of the journal) and particularistic criteria (country, language, and discipline of the journal).

We find that it is not possible to predict the inclusion of journals in WoS from universalistic criteria alone, because particularistic variables, such as the journal's country, discipline, and language, are also related to inclusion in WoS.

We conclude that using WoS as a universalistic tool for research assessment can disadvantage science published in journals with adequate editorial standards and scientific merit. We discuss the implications of these findings within the research evaluation literature, specifically for countries and disciplines not extensively covered by WoS.

URL : https://dx.doi.org/10.2139/ssrn.2990653
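The logistic-regression setup described in this abstract can be sketched as follows. This is an illustrative assumption, not the authors' actual code, data, or variable names: the feature names, the synthetic dataset, and the coefficients used to generate the outcome are all hypothetical, chosen only to show how universalistic (numeric) and particularistic (categorical) criteria would enter such a model.

```python
# Hedged sketch of a logistic regression predicting WoS inclusion
# from universalistic criteria (editorial standards, citation impact)
# and particularistic criteria (country, language, discipline).
# All names and data below are hypothetical, for illustration only.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(0)
n = 200
journals = pd.DataFrame({
    # Universalistic (numeric) criteria -- hypothetical 0..1 scores.
    "editorial_standards": rng.uniform(0, 1, n),
    "citation_impact": rng.uniform(0, 1, n),
    # Particularistic (categorical) criteria.
    "country": rng.choice(["BR", "ES", "MX", "PT"], n),
    "language": rng.choice(["en", "es", "pt"], n),
    "discipline": rng.choice(["STEM", "SSH"], n),
})
# Synthetic outcome: inclusion depends on both kinds of criteria,
# mimicking the paper's finding that particularistic variables matter.
logit = (3 * journals["editorial_standards"]
         + 2 * journals["citation_impact"]
         + 1.5 * (journals["language"] == "en") - 3)
journals["in_wos"] = rng.random(n) < 1 / (1 + np.exp(-logit))

# One-hot encode the categorical predictors, pass numeric ones through.
model = Pipeline([
    ("encode", ColumnTransformer(
        [("cat", OneHotEncoder(), ["country", "language", "discipline"])],
        remainder="passthrough")),
    ("logit", LogisticRegression(max_iter=1000)),
])
model.fit(journals.drop(columns="in_wos"), journals["in_wos"])
# Estimated probability of WoS inclusion for each journal.
probs = model.predict_proba(journals.drop(columns="in_wos"))[:, 1]
```

Comparing the fit of a model with only the two numeric predictors against this full model is one way to test whether the particularistic variables add explanatory power, which is the comparison the authors' finding rests on.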


Measuring the Quality of Electronic Journals :

“This paper presents the methodology developed to create a system to evaluate academic electronic journals. This methodology was developed in two stages. In the first stage, a system to evaluate electronic journals was created. The criteria framework and the indicators for assessment for academic electronic journals were selected and defined. According to this framework, several questions were designed to measure each indicator and, as a result, an instrument to evaluate academic electronic journals was built. In the second stage, this instrument was validated by 16 editors of electronic journals from different countries and different areas of knowledge, who served as judges evaluating the clarity, importance, relevance, and coverage of each question, indicator, and criterion. This instrument was distributed by e-mail. The opinions given by the judges were processed and then used to help in the construction of a new instrument that is ready to be presented to the Mexican Council of Scientific Research in order to evaluate Mexican academic electronic journals.”

URL : http://eprints.rclis.org/handle/10760/15699