A survey of how biology researchers assess credibility when serving on grant and hiring committees

Authors : Iain Hrynaszkiewicz, Beruria Novich, James Harney, Veronique Kiermer

Researchers who serve on grant review and hiring committees have to make decisions about the intrinsic value of research in short periods of time, and research impact metrics such as the Journal Impact Factor (JIF) exert undue influence on these decisions. Initiatives such as the Coalition for Advancing Research Assessment (CoARA) and the Declaration on Research Assessment (DORA) emphasize responsible use of quantitative metrics and avoidance of journal-based impact metrics for research assessment. Further, our previous qualitative research suggested that assessing the credibility, or trustworthiness, of research is important to researchers not only when they seek to inform their own research but also in the context of research assessment committees.

To confirm our findings from previous interviews in quantitative terms, we surveyed 485 biology researchers who have served on committees for grant review or hiring and promotion decisions, to understand how they assess the credibility of research outputs in these contexts. We found that concepts such as credibility, trustworthiness, quality, and impact lack consistent definitions and interpretations among researchers, consistent with what we had observed in our interviews.

We also found that assessing credibility is very important to most (81%) researchers serving on these committees, but fewer than half of respondents are satisfied with their ability to assess it. A substantial proportion of respondents (57%) report using journal reputation and the JIF to assess credibility – proxies that research assessment reformers consider inappropriate for this purpose because they do not rely on intrinsic characteristics of the research.

This gap between the importance of an assessment and satisfaction with the ability to conduct it was reflected across the multiple aspects of credibility we tested, and it was greatest for researchers seeking to assess the integrity of research (such as identifying signs of fabrication, falsification, or plagiarism) and the suitability and completeness of research methods. Non-traditional research outputs associated with Open Science practices – shared research data, code, protocols, and preprints – are particularly hard for researchers to assess, despite the potential of Open Science practices to signal trustworthiness.

Our results suggest opportunities to develop better guidance and better signals to support the evaluation of research credibility and trustworthiness – and ultimately support research assessment reform, away from the use of inappropriate proxies for impact and towards assessing the intrinsic characteristics and values researchers see as important.

DOI : https://doi.org/10.31222/osf.io/ht836

Applying Librarian-Created Evaluation Tools to Determine Quality and Credibility of Open Access Library Science Journals

Authors : Maggie Albro, Jessica L. Serrao, Christopher D. Vidas, Jenessa M. McElfresh, K. Megan Sheffield, Megan Palmer

This article explores the application of journal quality and credibility evaluation tools to library science publications. The researchers investigate quality and credibility attributes of forty-eight peer-reviewed library science journals with open access components using two evaluative tools developed and published by librarians.

The results identify common positive and negative attributes of library science journals, compare the results of the two evaluation tools, and discuss their ease of use and limitations. Overall, the results show that while library science journals do not exhibit the concerning characteristics that librarians caution other researchers about, there are several areas in which publishers can improve the quality and credibility of their journals.

URL : https://preprint.press.jhu.edu/portal/sites/default/files/06_24.1albro.pdf

Systematizing Confidence in Open Research and Evidence (SCORE)

Authors : Nazanin Alipourfard, Beatrix Arendt, Daniel M. Benjamin, Noam Benkler, Michael Bishop, Mark Burstein, Martin Bush, James Caverlee, Yiling Chen, Chae Clark, Anna Dreber Almenberg, Tim Errington, Fiona Fidler, Nicholas Fox, Aaron Frank, Hannah Fraser, Scott Friedman, Ben Gelman, James Gentile, C Lee Giles, Michael B Gordon, Reed Gordon-Sarney, Christopher Griffin, Timothy Gulden et al.

Assessing the credibility of research claims is a central, continuous, and laborious part of the scientific process. Credibility assessment strategies range from expert judgment to aggregating existing evidence to systematic replication efforts.

Such assessments can require substantial time and effort. Research progress could be accelerated if there were rapid, scalable, accurate credibility indicators to guide attention and resource allocation for further assessment.

The SCORE program is creating and validating algorithms to provide confidence scores for research claims at scale. To investigate the viability of scalable tools, teams are creating: a database of claims from papers in the social and behavioral sciences; expert and machine-generated estimates of credibility; and evidence of reproducibility, robustness, and replicability to validate the estimates.

Beyond the primary research objective, the data and artifacts generated from this program will be openly shared and provide an unprecedented opportunity to examine research credibility and evidence.
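
As a purely illustrative aside (not part of the SCORE abstract and not its actual pipeline), the short Python sketch below shows one plausible way to validate machine-generated confidence scores against replication outcomes, in the spirit of the validation design described above; every claim identifier, score, and outcome here is invented.

```python
# Illustrative sketch only -- not SCORE's actual pipeline. It shows one plausible
# way to check whether machine-generated confidence scores discriminate between
# claims that replicated and claims that did not, using invented data.

# Hypothetical records: each claim carries a machine-generated confidence score
# (0-1) and a binary replication outcome (True = replicated).
claims = [
    {"id": "claim-001", "machine_score": 0.82, "replicated": True},
    {"id": "claim-002", "machine_score": 0.35, "replicated": False},
    {"id": "claim-003", "machine_score": 0.64, "replicated": True},
    {"id": "claim-004", "machine_score": 0.48, "replicated": False},
]

replicated = [c["machine_score"] for c in claims if c["replicated"]]
failed = [c["machine_score"] for c in claims if not c["replicated"]]

# Concordance (equivalent to AUC): the share of replicated/non-replicated pairs
# in which the replicated claim received the higher confidence score.
pairs = [(r, f) for r in replicated for f in failed]
concordance = sum(r > f for r, f in pairs) / len(pairs)
print(f"Concordance between machine scores and replication outcomes: {concordance:.2f}")
```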


DOI : https://doi.org/10.31235/osf.io/46mnb

Youth and Digital Media: From Credibility to Information Quality

Building upon a process- and context-oriented information quality framework, this paper seeks to map and explore what we know about the ways in which young users aged 18 and under search for information online, how they evaluate information, and how their related practices of content creation, levels of new literacies, general digital media usage, and social patterns affect these activities.

A review of selected literature at the intersection of digital media, youth, and information quality — primarily works from library and information science, sociology, education, and selected ethnographic studies — reveals patterns in youth’s information-seeking behavior, but also highlights the importance of contextual and demographic factors both for search and evaluation.

Looking at the phenomenon from an information-learning and educational perspective, the literature shows that youth develop competencies for personal goals that sometimes do not transfer to school, and are sometimes not appropriate for school.

Thus far, educational initiatives to educate youth about search, evaluation, or creation have depended greatly on the local circumstances for their success or failure.

URL : http://ssrn.com/abstract=2005272