Library Publishing Programs at Capacity: Addressing Issues of Sustainability and Scalability

Authors : Johanna Meetz, Jason Boczar

Introduction

This article discusses the changes to overall goals, direction, and services made to two library publishing programs, at Pacific University and the University of South Florida, when the programs could no longer grow due to an inability to hire additional staff and COVID-19-instigated staff reassignments.

Description of Programs

Pacific University’s publishing program grew out of its institutional repository and, at its peak, published seven open access journals. In addition, Pacific University Libraries founded a University Press in 2016, which has published six books as of 2021. The University of South Florida’s publishing program began publishing open access journals in 2008, and it has grown to include over 20 journals.

Lessons Learned

Both the Pacific University and the University of South Florida publishing programs have faced scalability and sustainability issues, which were further exacerbated by COVID-19. The focus of our library publishing programs, like that of many others, has been on continual growth, which is not sustainable without the ability to hire additional staff or to allocate staff time differently.

We argue that standardizing services as well as creating a business plan can help ensure that publishing programs are sustainable and scalable.

Next Steps

We hope to begin a conversation among library publishers about acknowledging limits and creating achievable definitions of success outside of continual growth.

DOI : https://doi.org/10.31274/jlsc.12909

RipetaScore: Measuring the Quality, Transparency, and Trustworthiness of a Scientific Work

Authors : Josh Q. Sumner, Cynthia Hudson Vitale, Leslie D. McIntosh

A wide array of existing metrics quantifies a scientific paper’s prominence or the author’s prestige. Many who use these metrics assume that higher citation counts or greater public attention must indicate more reliable, better-quality science.

While current metrics offer valuable insight into scientific publications, they are an inadequate proxy for measuring the quality, transparency, and trustworthiness of published research.

Three essential elements in establishing trust in a work are trust in the paper, trust in the author, and trust in the data. To address these elements in a systematic and automated way, we propose the ripetaScore as a direct measurement of a paper’s research practices, professionalism, and reproducibility.

Using a sample of our current corpus of academic papers, we demonstrate the ripetaScore’s efficacy in determining the quality, transparency, and trustworthiness of an academic work.

In this paper, we aim to provide a metric to evaluate scientific reporting quality in terms of transparency and trustworthiness of the research, professionalism, and reproducibility.

DOI : https://doi.org/10.3389/frma.2021.751734

What Is Wrong With the Current Evaluative Bibliometrics?

Author : Endel Põder

Bibliometric data are relatively simple and describe objective processes of publishing articles and citing others. It seems quite straightforward to define reasonable measures of a researcher’s productivity, research quality, or overall performance based on these data. Why do we still have no acceptable bibliometric measures of scientific performance?

Instead, there are hundreds of indicators, and nobody knows how to use them. At the same time, an increasing number of researchers and some research fields have been excluded from standard bibliometric analysis to avoid manifestly contradictory conclusions.

I argue that the biggest current problem is the inadequate rule of credit allocation for multi-authored articles in mainstream bibliometrics. Clinging to this historical choice precludes any systematic and logically consistent bibliometrics-based evaluation of researchers, research groups, and institutions.
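
To make the credit-allocation problem concrete, here is a minimal Python sketch contrasting the mainstream whole-counting convention (every co-author receives full credit for a paper’s citations) with fractional counting, one commonly proposed alternative. This is our illustration, not Põder’s method: the sample data and the choice of fractional counting as the alternative rule are assumptions made for the example.

    # Illustrative sketch only: contrasts two credit-allocation rules.
    # The sample papers below are hypothetical.

    papers = [
        {"authors": ["A", "B", "C"], "citations": 30},
        {"authors": ["A"], "citations": 10},
    ]

    def whole_counting(papers):
        """Mainstream convention: every co-author gets full credit."""
        credit = {}
        for p in papers:
            for author in p["authors"]:
                credit[author] = credit.get(author, 0) + p["citations"]
        return credit

    def fractional_counting(papers):
        """One proposed alternative: each co-author gets 1/n of the credit."""
        credit = {}
        for p in papers:
            share = p["citations"] / len(p["authors"])
            for author in p["authors"]:
                credit[author] = credit.get(author, 0) + share
        return credit

    print(whole_counting(papers))       # {'A': 40, 'B': 30, 'C': 30}: 100 credit units from 40 citations
    print(fractional_counting(papers))  # {'A': 20.0, 'B': 10.0, 'C': 10.0}: credit sums to 40

Under whole counting, 40 citations generate 100 units of credit, so totals are not conserved across co-authored papers; this is the kind of logical inconsistency the author attributes to the mainstream rule.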

During the last 50 years, several authors have called for a change. Apparently, there are no serious methodologically justified or evidence-based arguments in favor of the present system.

However, there are intractable social, psychological, and economic issues that make adoption of a logically sound counting system almost impossible.

DOI : https://doi.org/10.3389/frma.2021.824518

Evaluation and Merit-Based Increase in Academia: A Case Study in the First Person

Author : Christine Musselin

This article provides a reflexive account of the process of defining and implementing a mechanism to evaluate a group of academics in a French higher education institution. The situation is rather unusual for France, as the assessed academics are not civil servants but are employed by their university, and this evaluation leads to merit-based salary increases.

Improving and implementing this strategy was one of the author’s tasks when she was vice-president for research at the institution in question.

The article looks at this experience retrospectively, emphasizing three issues of particular relevance to the discussions of valuation studies and management proposed in this symposium: (1) the decision either to distinguish between different types of profiles, and thus categorize, or to apply the same criteria to all; (2) the concrete forms of commensuration to be developed in order to evaluate and rank individuals from different disciplines; (3) the quantification of qualitative appraisals, i.e., their transformation into merit-based salary increases.

DOI : https://doi.org/10.3384/VS.2001-5992.2021.8.2.73-88

The Rise of the Guest Editor—Discontinuities of Editorship in Scholarly Publishing

Authors : Marcel Knöchelmann, Felicitas Hesselmann, Martin Reinhart, Cornelia Schendzielorz

Scholarly publishing lives on traditional terminology that gives meaning to subjects such as authors, in-house editors, and external guest editors; artifacts such as articles, journals, special issues, and collected editions; and practices of acquisition, selection, and review.

These subjects, artifacts, and practices ground the constitution of scholarly discourse. And yet, the meaning ascribed to each of these terms shifts, blurs, or is disguised as publishing culture changes, a change made manifest in new digital publishing technology, new forms of publishing management, and new forms of scholarly knowledge production.

As a result, we may come to over- or underestimate changes in scholarly communication based on traditional but shifting terminology. In this article, we discuss instances of scholarly publishing whose meaning has shifted.

We showcase the cultural shift made manifest in the new, prolific guest editor. Though the term suggests an established subject, this editorial role crystallizes a new cultural setting of loosened discourse communities and temporal structures, a blurring of publishing genres and, ultimately, of the foundations of academic knowledge production.

DOI : https://doi.org/10.3389/frma.2021.748171

Barriers to Full Participation in the Open Science Life Cycle among Early Career Researchers

Authors : Natasha J. Gownaris, Koen Vermeir, Martin-Immanuel Bittner, Lasith Gunawardena, Sandeep Kaur-Ghumaan, Robert Lepenies, Godswill Ntsomboh Ntsefong, Ibrahim Sidi Zakari

Open science (OS) is currently dominated by a small subset of practices that occur late in the scientific process. Early career researchers (ECRs) will play a key role in transitioning the scientific community to more widespread use of OS from pre-registration to publication, but they also face unique challenges in adopting these practices. Here, we discuss these challenges across the OS life cycle.

Our essay relies on the published literature, an informal survey of 32 ECRs from 14 countries, and discussions among members of the Global Working Group on Open Science (Global Young Academy and National Young Academies).

We break the OS life cycle into four stages—study design and tracking (pre-registration, open processes), data collection (citizen science, open hardware, open software, open data), publication (open access publishing, open peer review, open data), and outreach (open educational resources, citizen science)—and map potential barriers at each stage.

The most frequently discussed barriers across the OS life cycle were a lack of awareness and training, prohibitively high time commitments, and restrictions and/or a lack of incentives from supervisors.

We found that OS practices are highly fragmented and that awareness is particularly low for OS practices that occur during the study design and tracking stage, possibly creating ‘path-dependencies’ that reduce the likelihood of OS practices at later stages.

We note that, because ECRs face unique barriers to adopting OS, there is a need for specifically targeted policies such as mandatory training at the graduate level and promotion incentives.

DOI : https://doi.org/10.5334/dsj-2022-002