Avoiding the “Axe”: Advancing Affordable and Open Education Resources at a Midsize University

Authors: Jennifer Bazeley, Carolyn Haynes, Carla S. Myers, Eric Resnis

INTRODUCTION

To address the soaring cost of textbooks, higher education institutions have launched a number of strategies to promote the adoption of affordable and open educational resources (AOER).

Although a few models for promoting and sustaining affordable and open educational resources (AOER) at higher education institutions can be found in the professional literature, additional examples are needed to assist the wide range of universities and colleges in meeting this critical need.

DESCRIPTION OF PROGRAM

In this article, the authors describe Miami University’s ongoing efforts to reduce college textbook costs for students. These initiatives were prompted in part by the state legislature, but were also fueled by evidence of the impact that textbook costs have on the student learning experience.

The authors (university librarians and an associate provost) describe the institutional context and the challenges they faced in implementing AOER initiatives, and chronicle the steps their university has taken to address the rising cost of course materials.

NEXT STEPS

Next steps for growing the programs and recommendations for other institutions looking to develop similar initiatives are also explored.

URL : Avoiding the “Axe”: Advancing Affordable and Open Education Resources at a Midsize University

DOI : https://doi.org/10.7710/2162-3309.2259

Embracing New Trends in Scholarly Communication: From Competency Requirements in the Workplace to LIS Curriculum Presence

Author : Jaya Raju

INTRODUCTION

Scholarly communication has undergone dramatic change in the digital era as a result of rapidly evolving digital technology. It is within this context of evolving scholarly communication that this paper reports on an inquiry into (1) the extent to which university libraries in South Africa are actively embracing new and emerging trends in scholarly communication; and (2) the extent to which LIS school curricula in South Africa are responding to new and emerging scholarly communication competencies required in university libraries.

METHODS

This qualitative study, located within an interpretivist epistemological worldview, was informed by the Operational Elements of Scientific Communication aspect of Khosrowjerdi’s (2011) Viable Scientific Communication Model.

Data were collected using summative content analysis of university library job advertisements over a four-year period; South African university libraries’ organograms; and course descriptions available on the websites of South Africa’s LIS schools.

RESULTS & DISCUSSION

A review of job advertisements and organograms shows that, on the whole, university libraries in South Africa are embracing new and emerging trends in scholarly communication, but some libraries are performing better than others in adopting emerging services such as research data management (RDM), digital humanities, and research landscape analysis.

Analysis of course descriptions provides evidence that LIS school curricula, in line with the global trend reported in the literature, do not seem to be keeping pace with developments in scholarly communication.

CONCLUSION

The ambivalent nature of an evolving scholarly communication field, with unclear definitions and boundaries, requires professional practitioners who are adaptable and open to change, as well as an LIS education curriculum that is kept under constant review so that it can seamlessly embrace an evolving field propelled by advancing digital technologies.

URL : Embracing New Trends in Scholarly Communication: From Competency Requirements in the Workplace to LIS Curriculum Presence

DOI : https://doi.org/10.7710/2162-3309.2291

A Multi-match Approach to the Author Uncertainty Problem

Authors: Stephen F. Carley, Alan L. Porter, Jan L. Youtie

Purpose

The ability to identify the scholarship of individual authors is essential for performance evaluation. A number of factors hinder this endeavor. Common and similarly spelled surnames make it difficult to isolate the scholarship of individual authors indexed on large databases.

Variations in the spelling of individual scholars’ names further complicate matters. Common family names in scientific powerhouses like China make it problematic to distinguish between authors possessing ubiquitous and/or anglicized surnames (as well as the same or similar first names).

The assignment of unique author identifiers provides a major step toward resolving these difficulties. We maintain, however, that in and of themselves, author identifiers are not sufficient to fully address the author uncertainty problem.

In this study we build on the author identifier approach by considering commonalities in fielded data between authors containing the same surname and first initial of their first name. We illustrate our approach using three case studies.

Design/methodology/approach

The approach we advance in this study is based on commonalities among fielded data in search results. We cast a broad initial net—i.e., a Web of Science (WOS) search for a given author’s last name, followed by a comma, followed by the first initial of his or her first name (e.g., a search for ‘John Doe’ would assume the form: ‘Doe, J’).

Results for this search typically contain all of the scholarship legitimately belonging to this author in the given database (i.e., all of his or her true positives), along with a large amount of noise, or scholarship not belonging to this author (i.e., a large number of false positives).

From this corpus we proceed to iteratively weed out false positives and retain true positives. Author identifiers provide a good starting point—e.g., if ‘Doe, J’ and ‘Doe, John’ share the same author identifier, this would be sufficient for us to conclude these are one and the same individual.

We find email addresses similarly adequate—e.g., if two author names which share the same surname and same first initial have an email address in common, we conclude these authors are the same person.

Author identifier and email address data is not always available, however. When this occurs, other fields are used to address the author uncertainty problem.

Commonalities among author data other than unique identifiers and email addresses are less conclusive for name consolidation purposes. For example, if ‘Doe, John’ and ‘Doe, J’ have an affiliation in common, do we conclude that these names belong to the same person?

They may or may not; an affiliation can employ two or more faculty members sharing the same last name and first initial. Similarly, it’s conceivable that two individuals with the same last name and first initial publish in the same journal, publish with the same co-authors, and/or cite the same references.

Should we then ignore commonalities among these fields and conclude they’re too imprecise for name consolidation purposes? It is our position that such commonalities are indeed valuable for addressing the author uncertainty problem, but more so when used in combination.

Our approach makes use of automation as well as manual inspection, relying initially on author identifiers, then commonalities among fielded data other than author identifiers, and finally manual verification.

To achieve name consolidation independent of author identifier matches, we have developed a procedure that is used with bibliometric software called VantagePoint (see www.thevantagepoint.com). While the application of our technique does not exclusively depend on VantagePoint, it is the software we find most efficient in this study.

The script we developed is designed to implement our name disambiguation procedure in a way that significantly reduces manual effort on the user’s part.

Those who seek to replicate our procedure independent of VantagePoint can do so by manually following the method we outline, but we note that the manual application of our procedure takes a significant amount of time and effort, especially when working with larger datasets.

Our script begins by prompting the user for a surname and a first initial (for any author of interest). It then prompts the user to select a WOS field on which to consolidate author names.

After this the user is prompted to point to the name of the authors field, and finally asked to identify a specific author name (referred to by the script as the primary author) within this field whom the user knows to be a true positive (a suggested approach is to point to an author name associated with one of the records that has the author’s ORCID iD or email address attached to it).

The script proceeds to identify and combine all author names that share the primary author’s surname and first initial and that have commonalities in the WOS field on which the user chose to consolidate author names.

This typically results in a significant reduction in the initial dataset size. After the procedure completes, the user is usually left with a much smaller (and more manageable) dataset to manually inspect (and/or apply additional name disambiguation techniques to).
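
As a rough illustration of the matching logic described above, the following is a minimal Python sketch under stated assumptions: the record layout, field names, and the consolidate_authors helper are hypothetical, and do not represent VantagePoint’s API or the authors’ actual script.

```python
# Minimal, hypothetical sketch of the multi-match consolidation step.
# Record layout, field names, and this helper are illustrative assumptions,
# not VantagePoint's API or the authors' actual script.
from collections import defaultdict

def consolidate_authors(records, surname, first_initial, primary_name, match_field):
    """Merge author-name variants that share the target surname and first
    initial and have at least one value of match_field in common with the
    primary author (directly or transitively)."""

    def is_candidate(name):
        last, _, rest = name.partition(",")
        return (last.strip().lower() == surname.lower()
                and rest.strip()[:1].lower() == first_initial.lower())

    # Gather the match-field values observed for each candidate name variant.
    values_by_name = defaultdict(set)
    for rec in records:
        for name in rec["authors"]:
            if is_candidate(name):
                values_by_name[name].update(rec.get(match_field, []))

    # Start from the known true positive and absorb any variant that shares a
    # value with the consolidated set so far (transitive matching).
    consolidated = {primary_name}
    pool = set(values_by_name[primary_name])
    changed = True
    while changed:
        changed = False
        for name, values in values_by_name.items():
            if name not in consolidated and values & pool:
                consolidated.add(name)
                pool |= values
                changed = True
    return consolidated

# Toy usage (purely illustrative data):
records = [
    {"authors": ["Doe, John", "Smith, A"], "email": ["jdoe@uni.edu"]},
    {"authors": ["Doe, J"], "email": ["jdoe@uni.edu"]},
    {"authors": ["Doe, Jane"], "email": ["janedoe@other.org"]},
]
print(consolidate_authors(records, "Doe", "J", "Doe, John", "email"))
# -> {'Doe, John', 'Doe, J'}  ("Doe, Jane" stays separate: no shared email)
```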

Research limitations

Match field coverage can be an issue. When field coverage is paltry, dataset reduction is not as significant, which results in more manual inspection on the user’s part. Our procedure does not lend itself to scholars who have had a legal family name change (after marriage, for example).

Moreover, the technique we advance is (sometimes, but not always) likely to have a difficult time dealing with scholars who have changed careers or fields dramatically, as well as scholars whose work is highly interdisciplinary.

Practical implications

The procedure we advance has the ability to save a significant amount of time and effort for individuals engaged in name disambiguation research, especially when the name under consideration is a more common family name. It is more effective when match field coverage is high and a number of match fields exist.

Originality/value

Once again, the procedure we advance has the ability to save a significant amount of time and effort for individuals engaged in name disambiguation research. It combines preexisting with more recent approaches, harnessing the benefits of both.

Findings

Our study applies the name disambiguation procedure we advance to three case studies. The ideal match fields are not the same for each case study; we find that match field effectiveness is in large part a function of field coverage. The case studies also differ in original dataset size, in the timeframe analyzed, and in the subject areas in which their authors publish.

Our procedure is most effective when applied to our third case study, both in terms of list reduction and 100% retention of true positives. We attribute this to excellent match field coverage, particularly in the more specific match fields, as well as to a more modest and manageable number of publications.

While machine learning is considered authoritative by many, we do not see it as practical or replicable. The procedure advanced here is practical, replicable, and relatively user-friendly.

It might be categorized into a space between ORCID and machine learning. Machine learning approaches typically look for commonalities among citation data, which is not always available, structured or easy to work with.

The procedure we advance is intended to be applied across numerous fields in a dataset of interest (e.g. emails, coauthors, affiliations, etc.), resulting in multiple rounds of reduction. Results indicate that effective match fields include author identifiers, emails, source titles, co-authors and ISSNs.
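
Continuing the hypothetical sketch above, successive rounds over several match fields might look like the following; again, the field names are assumptions rather than the authors’ actual configuration.

```python
# Hypothetical continuation of the earlier sketch: apply the match over several
# fields in turn, then split the corpus into confirmed records and a smaller
# remainder left for manual inspection. Field names are assumptions.
confirmed_names = set()
for field in ("orcid", "email", "source_title", "coauthors", "issn"):
    confirmed_names |= consolidate_authors(records, "Doe", "J", "Doe, John", field)

confirmed = [r for r in records if confirmed_names & set(r["authors"])]
remaining = [r for r in records if not (confirmed_names & set(r["authors"]))]
```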

While the script we present is not likely to result in a dataset consisting solely of true positives (at least for more common surnames), it does significantly reduce manual effort on the user’s part. Dataset reduction (after our procedure is applied) is in large part a function of (a) field availability and (b) field coverage.

URL : A Multi-match Approach to the Author Uncertainty Problem

DOI : https://doi.org/10.2478/jdis-2019-0006

Developing a model for university presses

Authors : Megan Taylor, Kathrine S H Jensen

This article presents a model for developing a university press based around three guiding principles and six key stages of the publishing process, with associated activities.

The model is designed to be applicable to a range of business models, including subscription, open access and hybrid. The guiding principles, publishing stages and strategic points all constitute the building blocks necessary to implement and maintain a sustainable university press.

At the centre of the model are three interconnected guiding principles: strategic alignment, stakeholder relationships and demonstrating impact.

The publishing process outlined in the outer ring of the model is made up of six sections: editorial, production, dissemination, preservation, communication and analytics.

These sections were based on the main stages that a journal article or monograph goes through from proposal or commissioning stage through to publication and beyond.

The model highlights the overall importance of working in partnership and building relationships as key to developing and maintaining a successful press.

URL : Developing a model for university presses

DOI : http://doi.org/10.1629/uksg.469

Bibliodiversity in Practice: Developing Community-Owned, Open Infrastructures to Unleash Open Access Publishing

Authors : Lucy Barnes, Rupert Gatti

Academic publishing is changing. The drive towards open access publishing, which is being powered in the UK by funding bodies (SHERPA Juliet), the requirements of REFs 2021 (UKRI) and 2027 (Hill 2018), and Europe-wide movements such as the recently-announced Plan S (‘About Plan S’), has the potential to shake up established ways of publishing academic research.

Within book publishing, the traditional print formats and the conventional ways of disseminating research, which are protected and promoted by a small number of powerful incumbents, are being challenged.

Academic publishing, and academic book publishing, is at a crossroads: will it find ways to accommodate open access distribution within its existing structures?

Or will new systems of research dissemination be developed? And what might those new systems look like? In this article we look at the main features of the existing monograph publication and distribution ecosystem, and question its suitability for open access monographs.

We look specifically at some of the key economic characteristics of the monograph publishing market and consider their implications for new infrastructures designed specifically to support open access titles.

The key observations are that the production of monographs displays constant returns to scale, and so can (and does) support a large number of publishing initiatives; at the same time, the distribution and discovery systems for monographs display increasing returns to scale, and so naturally lead to the emergence of a few large providers.

We argue that, in order to protect the diversity of players and outputs within the monograph publishing industry during the transition to open access, it is important to create open, community-managed infrastructures and revenue flows that both cater for different business models and production workflows and are resistant to takeover or control by a single player (or small number of players).

URL : https://hal.archives-ouvertes.fr/hal-02175276/

 

The advantages of UK Biobank’s open access strategy for health research

Authors : Megan Conroy, Jonathan Sellors, Mark Effingham, Thomas J. Littlejohns, Chris Boultwood, Lorraine Gillions, Cathie L.M. Sudlow, Rory Collins, Naomi E. Allen

Ready access to health research studies is becoming more important as researchers, and their funders, seek to maximise the opportunities for scientific innovation and health improvements.

Large‐scale population‐based prospective studies are particularly useful for multidisciplinary research into the causes, treatment and prevention of many different diseases. UK Biobank has been established as an open‐access resource for public health research, with the intention of making the data as widely available as possible in an equitable and transparent manner.

Access to UK Biobank’s unique breadth of phenotypic and genetic data has attracted researchers worldwide from across academia and industry. As a consequence, it has enabled scientists to perform world‐leading collaborative research.

Moreover, open access to an already deeply characterized cohort has encouraged both public and private sector investment in further enhancements to make UK Biobank an unparalleled resource for public health research and an exemplar for the development of open access approaches for other studies.

DOI : https://doi.org/10.1111/joim.12955

The Definition of Reuse

Authors : Stephanie van de Sandt, Sünje Dallmeier-Tiessen, Artemis Lavasa, Vivien Petras

The ability to reuse research data is now considered a key benefit for the wider research community. Researchers of all disciplines are confronted with the pressure to share their research data so that it can be reused.

The demand for data use and reuse has implications on how we document, publish and share research in the first place, and, perhaps most importantly, it affects how we measure the impact of research, which is commonly a measurement of its use and reuse.

It is surprising that research communities, policy makers, and others have not yet clearly defined what use and reuse are.

We postulate that a clear definition of use and reuse is needed to establish better metrics for a comprehensive scholarly record of individuals, institutions, organizations, etc.

Hence, this article presents a first definition of reuse of research data. Characteristics of reuse are identified by examining the etymology of the term and the analysis of the current discourse, leading to a range of reuse scenarios that show the complexity of today’s research landscape, which has been moving towards a data-driven approach.

The analysis underlines that there is no reason to distinguish use and reuse. We discuss what that means for possible new metrics that attempt to cover Open Science practices more comprehensively.

We hope that the resulting definition will enable a better and more refined strategy for Open Science.

URL : The Definition of Reuse

DOI : http://doi.org/10.5334/dsj-2019-022