Software Curation in Research Libraries: Practice and Promise

Authors : Alexandra Chassanoff, Yasmin AlNoamany, Katherine Thornton, John Borghi

INTRODUCTION

Research software plays an increasingly vital role in the scholarly record. Academic research libraries are in the early stages of exploring strategies for curating and preserving research software, aiming to facilitate support and services for long-term access and use.

DESCRIPTION OF PROGRAM

In 2016, the Council on Library and Information Resources (CLIR) began offering postdoctoral fellowships in software curation. Four institutions hosted the initial cohort of software curation fellows.

This article describes the work activities and research program of the cohort, highlighting the challenges and benefits of doing this exploratory work in research libraries.

NEXT STEPS

Academic research libraries are poised to play an important role in research and development around robust services for software curation. The next cohort of CLIR fellows is set to begin in fall 2018 and will likely shape and contribute substantially to an emergent research agenda.

URL : Software Curation in Research Libraries: Practice and Promise

DOI : https://doi.org/10.7710/2162-3309.2239

A “basket of metrics”—the best support for understanding journal merit

Authors : Lisa Colledge, Chris James

Aim

To survey opinion on the assertion that useful metric-based input requires a “basket of metrics” to allow more varied and nuanced insights into merit than is possible by using one metric alone.

Methods

A poll was conducted to survey opinions (N=204; average response rate=61%) within the international research community on using usage metrics in merit systems.

Results

“Research is best quantified using multiple criteria” was selected by most (40%) respondents as the reason that usage metrics are valuable, and 95% of respondents indicated that they would be likely or very likely to use usage metrics in their assessments of research merit, if they had access to them.

There was a similar degree of preference for simple and for sophisticated usage metrics, confirming that one size does not fit all and that a one-metric approach to merit is insufficient.

Conclusion

This survey demonstrates a clear willingness and a real appetite to use a “basket of metrics” to broaden the ways in which research merit can be detected and demonstrated.

URL : http://europeanscienceediting.eu/articles/a-basket-of-metrics-the-best-support-for-understanding-journal-merit/

Open Science by Design

Contributors : National Academies of Sciences, Engineering, and Medicine; Policy and Global Affairs; Board on Research Data and Information; Committee on Toward an Open Science Enterprise

Openness and sharing of information are fundamental to the progress of science and to the effective functioning of the research enterprise. The advent of scientific journals in the 17th century helped power the Scientific Revolution by allowing researchers to communicate across time and space, using the technologies of that era to generate reliable knowledge more quickly and efficiently.

Harnessing today’s stunning, ongoing advances in information technologies, the global research enterprise and its stakeholders are moving toward a new open science ecosystem.

Open science aims to ensure the free availability and usability of scholarly publications, the data that result from scholarly research, and the methodologies, including code or algorithms, that were used to generate those data.

Open Science by Design is aimed at overcoming barriers and moving toward open science as the default approach across the research enterprise.

This report explores specific examples of open science and discusses a range of challenges, focusing on stakeholder perspectives. It is meant to provide guidance to the research enterprise and its stakeholders as they build strategies for achieving open science and take the next steps.

URL : https://www.nap.edu/catalog/25116/open-science-by-design-realizing-a-vision-for-21st-century

What Value Do Journal Whitelists and Blacklists Have in Academia?

Authors : Jaime A. Teixeira da Silva, Panagiotis Tsigaris

This paper aims to address the issue of predatory publishing, sensu lato. To achieve this, we offer our perspectives, starting initially with some background surrounding the birth of the concept, even though the phenomenon may have already existed long before the popularization of the term “predatory publishing”.

The issue of predation or “predatory” behavior in academic publishing is no longer limited to open access (OA). Many of the mainstream publishers that were exclusively subscription-based are now evolving towards a state of complete OA.

Academics seeking reliable sources of journals to publish their work tend to rely on a journal’s metrics such as citations and indexing, and on whether it is blacklisted or whitelisted.

Jeffrey Beall raised awareness of the risks of “predatory” OA publishing, and his blacklists of “predatory” OA journals and publishers began to be used for official purposes to distinguish valid from perceived invalid publishing venues.

We initially reflect on why we believe the blacklists created by Beall were flawed: their weak set of criteria confused non-predatory with truly predatory journals, producing false positives, while failing to blacklist truly predatory journals, producing false negatives.

Historically, most critiques of “predatory publishing” have relied excessively on Beall’s blacklists for their assumptions and conclusions, but there is a need to look beyond these lists.

There are currently a number of blacklists and whitelists circulating in academia, but they all have imperfections, such as the resurrected Beall blacklists, Crawford’s OA gray list based on Beall’s lists, Cabell’s new blacklist with about 11,000 journals, the DOAJ with about 11,700 OA journals, and UGC, with over 32,600 journals prior to its recent (May 2018) purge of 4305 journals.

The reader is led into a discussion about blacklists’ lack of reliability, using the scientific framework of conducting research to assess whether a journal could be predatory at the pre- and post-study levels. We close our discussion by offering arguments why we believe blacklists are academically invalid.

URL : What Value Do Journal Whitelists and Blacklists Have in Academia?

DOI : https://doi.org/10.1016/j.acalib.2018.09.017

Leveraging Concepts in Open Access Publications

Authors : Andrea Bertino, Luca Foppiano, Laurent Romary, Pierre Mounier

Aim

This paper addresses the integration of a Named Entity Recognition and Disambiguation (NERD) service within a group of open access (OA) publishing digital platforms and considers its potential impact on both research and scholarly publishing.

This application, called entity-fishing, was initially developed by Inria in the context of the EU FP7 project CENDARI (Lopez et al., 2014) and provides automatic entity recognition and disambiguation against Wikipedia and Wikidata. Distributed with an open-source licence, it was deployed as a web service in the DARIAH infrastructure hosted at the French HumaNum.
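To make the idea of named entity recognition and disambiguation concrete, the following is a toy sketch of what a NERD step does: spot known surface forms in text and resolve each to its most probable knowledge-base entry. This is not the entity-fishing implementation or its API; the hard-coded dictionary, prior probabilities, and all Wikidata QIDs except Q90 (Paris, France) are illustrative placeholders.

```python
# Minimal "knowledge base": surface form -> candidate Wikidata entities.
# QIDs other than Q90 are placeholders, not verified Wikidata identifiers.
KB = {
    "Paris": [
        {"qid": "Q90", "label": "Paris (capital of France)", "prior": 0.92},
        {"qid": "Q000001", "label": "Paris (other candidate)", "prior": 0.03},
    ],
    "Inria": [
        {"qid": "Q000002", "label": "Inria (research institute)", "prior": 0.99},
    ],
}

def annotate(text):
    """Recognize known mentions and disambiguate each one by
    picking the candidate with the highest prior probability."""
    annotations = []
    for mention, candidates in KB.items():
        start = text.find(mention)
        if start != -1:
            best = max(candidates, key=lambda c: c["prior"])
            annotations.append({
                "mention": mention,
                "offset": start,
                "wikidata_id": best["qid"],
                "label": best["label"],
            })
    # Return annotations in reading order.
    return sorted(annotations, key=lambda a: a["offset"])
```

A real service ranks candidates with contextual features rather than a fixed prior, but the output shape (mention, offset, resolved Wikidata ID) is the same kind of annotation described above.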

Methods

In this paper, we focus on the specific issues related to its integration on five OA platforms specialized in the publication of scholarly monographs in social sciences and humanities as part of the work carried out within the EU H2020 project HIRMEOS (High Integration of Research Monographs in the European Open Science infrastructure).

Results and Discussion

In the following sections, we give a brief overview of the current status and evolution of OA publications and how HIRMEOS aims to contribute to this.

We then give a comprehensive description of the entity-fishing service, focusing on its concrete applications in real use cases together with some further possible ideas on how to exploit the generated annotations.

Conclusions

We show that entity-fishing annotations can improve both the research and the publishing process. Entity-fishing annotations can be used to achieve a better and quicker understanding of the specific disciplinary language of certain monographs and so encourage non-specialists to use them.

In addition, a systematic implementation of the entity-fishing service can be used by publishers to generate thematic indexes within book collections to allow better cross-linking and query functions.
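The thematic-index idea above amounts to inverting per-chapter annotations into a concept-to-chapters mapping. A minimal sketch, assuming annotations have already been reduced to lists of Wikidata IDs per chapter (the chapter names and IDs here are illustrative):

```python
from collections import defaultdict

def build_thematic_index(annotations_by_chapter):
    """Invert {chapter: [wikidata_id, ...]} into
    {wikidata_id: sorted list of chapters mentioning it}."""
    index = defaultdict(set)
    for chapter, entity_ids in annotations_by_chapter.items():
        for eid in entity_ids:
            index[eid].add(chapter)
    return {eid: sorted(chapters) for eid, chapters in index.items()}

# Illustrative input: entity IDs extracted from three chapters.
annotations = {
    "ch1": ["Q90", "Q000001"],
    "ch2": ["Q90"],
    "ch3": ["Q000001"],
}
index = build_thematic_index(annotations)
# index["Q90"] lists every chapter that mentions the concept Q90,
# which is exactly the cross-linking a thematic back-of-book index needs.
```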

URL : https://hal.inria.fr/hal-01900303/

Sustainable open access for scholarly journals in 6 years – the incubator model at Utrecht University Library Open Access Journals

Authors : Jeroen Sondervan, Fleur Stigter

Key points

  • Humanities and social science journals need flexible funding models.
  • Pragmatism and collaboration are key to transforming traditional publishing initiatives.
  • The Uopen Journals model sets a 6‐year target for developing sustainable journals.
  • Actively involved editors are key to a journal’s success.

The future of global research: A case study on the use of scenario planning in the publishing industry

Authors : Samira Rhoods, Anca Babor

Key points

  • Scenario planning is fun and engaging and is a good opportunity to revisit your company’s core strengths and competitive advantage!
  • Scenario planning should drive long‐term thinking in organizations.
  • It will change the nature of the strategic conversation and can be used to help validate business innovation.
  • Scenarios can help to engage with other organizations in the industry and help people work together to create preferred future outcomes.
  • The complexity of scenario planning should not be underestimated and shortcuts do not work.

URL : The future of global research: A case study on the use of scenario planning in the publishing industry

DOI : https://doi.org/10.1002/leap.1152