Applying Librarian-Created Evaluation Tools to Determine Quality and Credibility of Open Access Library Science Journals

Authors : Maggie Albro, Jessica L. Serrao, Christopher D. Vidas, Jenessa M. McElfresh, K. Megan Sheffield, Megan Palmer

This article explores the application of journal quality and credibility evaluation tools to library science publications. The researchers investigate quality and credibility attributes of forty-eight peer-reviewed library science journals with open access components using two evaluative tools developed and published by librarians.

The results identify common positive and negative attributes of library science journals, compare the results of the two evaluation tools, and discuss their ease of use and limitations. Overall, the results show that while library science journals do not fall prey to the same concerning characteristics that librarians warn other researchers about, there are several areas in which publishers can improve the quality and credibility of their journals.

URL : https://preprint.press.jhu.edu/portal/sites/default/files/06_24.1albro.pdf

Systematizing Confidence in Open Research and Evidence (SCORE)

Authors : Nazanin Alipourfard, Beatrix Arendt, Daniel M. Benjamin, Noam Benkler, Michael Bishop, Mark Burstein, Martin Bush, James Caverlee, Yiling Chen, Chae Clark, Anna Dreber Almenberg, Tim Errington, Fiona Fidler, Nicholas Fox, Aaron Frank, Hannah Fraser, Scott Friedman, Ben Gelman, James Gentile, C. Lee Giles, Michael B. Gordon, Reed Gordon-Sarney, Christopher Griffin, Timothy Gulden, et al.

Assessing the credibility of research claims is a central, continuous, and laborious part of the scientific process. Credibility assessment strategies range from expert judgment to aggregating existing evidence to systematic replication efforts.

Such assessments can require substantial time and effort. Research progress could be accelerated if there were rapid, scalable, accurate credibility indicators to guide attention and resource allocation for further assessment.

The SCORE program is creating and validating algorithms to provide confidence scores for research claims at scale. To investigate the viability of scalable tools, teams are creating: a database of claims from papers in the social and behavioral sciences; expert and machine-generated estimates of credibility; and evidence of reproducibility, robustness, and replicability to validate the estimates.

Beyond the primary research objective, the data and artifacts generated from this program will be openly shared and provide an unprecedented opportunity to examine research credibility and evidence.


DOI : https://doi.org/10.31235/osf.io/46mnb

Youth and Digital Media: From Credibility to Information Quality

Building upon a process- and context-oriented information quality framework, this paper seeks to map and explore what we know about the ways in which young users aged 18 and under search for information online, how they evaluate information, and how their related practices of content creation, levels of new literacies, general digital media usage, and social patterns affect these activities.

A review of selected literature at the intersection of digital media, youth, and information quality — primarily works from library and information science, sociology, education, and selected ethnographic studies — reveals patterns in youth’s information-seeking behavior, but also highlights the importance of contextual and demographic factors both for search and evaluation.

Looking at the phenomenon from an information-learning and educational perspective, the literature shows that youth develop competencies for personal goals that sometimes do not transfer to school, and are sometimes not appropriate for school.

Thus far, educational initiatives to teach youth about search, evaluation, or creation have depended greatly on local circumstances for their success or failure.

URL : http://ssrn.com/abstract=2005272