Group authorship, an excellent opportunity laced with ethical, legal and technical challenges

Authors : Mohammad Hosseini, Alex O. Holcombe, Marton Kovacs, Hub Zwart, Daniel S. Katz, Kristi Holmes

Group authorship (also known as corporate, team, or consortium authorship) refers to attribution practices that use the name of a collective (be it a team, group, project, corporation, or consortium) in the authorship byline. Data show that group authorship is on the rise, but thus far it has not gained much specific attention in scholarly discussions about authorship.

Group authorship can minimize tensions within the group about authorship order and the criteria used for inclusion/exclusion of individual authors. However, current uses of group authorship have drawbacks, such as ethical challenges in attributing credit and responsibilities, legal challenges in handling copyright, and technical challenges related to the lack of persistent identifiers (PIDs), such as ORCID, for groups.

We offer two recommendations: 1) Journals should develop and share context-specific and unambiguous guidelines for group authorship, for which they can use the four baseline requirements offered in this paper; 2) Using persistent identifiers for groups and consistent reporting of members’ contributions should be facilitated through devising PIDs for groups and linking these to the ORCIDs of their individual contributors and the Digital Object Identifier (DOI) of the published item.
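The linkage described in the second recommendation can be sketched as a simple metadata record. This is only an illustration: no PID scheme for groups currently exists, so the group identifier and all field names below are hypothetical, while the ORCID iDs are sample-format identifiers and the DOI is that of the paper itself.

```python
# Hypothetical record linking a group PID to its members' ORCID iDs and to
# the DOI of the published item. All field names are illustrative only.

def make_group_record(group_pid, group_name, member_orcids, item_doi):
    """Build a record connecting a group identifier to its contributors
    and to the published output they share credit for."""
    return {
        "group_pid": group_pid,               # hypothetical: no group-PID scheme exists yet
        "group_name": group_name,
        "members": [{"orcid": orcid} for orcid in member_orcids],
        "published_item": {"doi": item_doi},
    }

record = make_group_record(
    "group-pid:0000-example",                  # placeholder identifier
    "Example Consortium",
    ["0000-0002-1825-0097", "0000-0001-5109-3700"],  # sample ORCID iDs
    "10.1080/08989621.2024.2322557",
)
```

A resolver for such records could then answer both directions of the lookup: which individuals stand behind a group byline, and which group outputs an individual contributed to.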

DOI : https://doi.org/10.1080/08989621.2024.2322557

From Code to Tenure: Valuing Research Software in Academia

Authors : Eric A. Jensen, Daniel S. Katz

Research software is a driving force in today’s academic ecosystem, with most researchers relying on it to do their work, and many writing some of their own code. Despite its importance, research software is typically not included in tenure, promotion, and recognition policies and processes.

In this article, we invite discussions on how to value research software, integrate it into academic evaluations, and ensure its sustainability. We build on discussions hosted by the US Research Software Sustainability Institute and by the international Research Software Engineering community to outline a set of possible activities aimed at elevating the role of research software in academic career paths, recognition, and beyond.

One is a study to investigate the role of software contributions in academic promotions. Another is to document and share successful academic recognition practices for research software. A third is to create guidance documents for faculty hiring and tenure evaluations. Each of these proposed activities is a building block of a larger effort to create a more equitable, transparent, and dynamic academic ecosystem.

We’ve assembled 44 such ideas as a starting point and posted them as issues on GitHub. Our aim is to encourage engagement with this effort: readers are invited to add potential activities or to comment on existing ideas to improve them.

The issues page can also inform the community of ongoing activities so that efforts aren’t duplicated. Similarly, readers who know of existing progress in a particular area can point out that work to build collective knowledge.

Finally, the issues page is also intended to allow anyone interested in collaborating on a specific activity to indicate their willingness to do so. This living list serves as a hub for collective action and thought, with the overall aim of recognizing the value of creating and contributing research software.

DOI : https://doi.org/10.21428/6ffd8432.8f39775d

Enforcing public data archiving policies in academic publishing: A study of ecology journals

Authors : Dan Sholler, Karthik Ram, Carl Boettiger, Daniel S Katz

To improve the quality and efficiency of research, groups within the scientific community seek to exploit the value of data sharing. Funders, institutions, and specialist organizations are developing and implementing strategies to encourage or mandate data sharing within and across disciplines, with varying degrees of success.

Academic journals in ecology and evolution have adopted several types of public data archiving policies requiring authors to make data underlying scholarly manuscripts freely available. The effort to increase data sharing in the sciences is one part of a broader “data revolution” that has prompted discussion about a paradigm shift in scientific research.

Yet anecdotes from the community and studies evaluating data availability suggest that these policies have not had the desired effects, in terms of both the quantity and the quality of available datasets.

We conducted a qualitative, interview-based study with journal editorial staff and other stakeholders in the academic publishing process to examine how journals enforce data archiving policies.

We specifically sought to establish who editors and other stakeholders perceive as responsible for ensuring data completeness and quality in the peer review process. Our analysis revealed little consensus with regard to how data archiving policies should be enforced and who should hold authors accountable for dataset submissions.

Themes in interviewee responses included hopefulness that reviewers would take the initiative to review datasets and trust in authors to ensure the completeness and quality of their datasets.

We highlight problematic aspects of these thematic responses and offer potential starting points for improvement of the public data archiving process.

DOI : https://doi.org/10.1177/2053951719836258

Software must be recognised as an important output of scholarly research

Authors : Caroline Jay, Robert Haines, Daniel S. Katz

Software now lies at the heart of scholarly research. Here we argue that as well as being important from a methodological perspective, software should, in many instances, be recognised as an output of research, equivalent to an academic paper.

The article discusses the different roles that software may play in research and highlights the relationship between software and research sustainability and reproducibility. It describes the challenges associated with the processes of citing and reviewing software, which differ from those used for papers.

We conclude that whilst software outputs do not necessarily fit comfortably within the current publication model, there is a great deal of positive work underway that is likely to help address this.

URL : https://arxiv.org/abs/2011.07571

The principles of tomorrow’s university

Authors : Daniel S. Katz, Gabrielle Allen, Lorena A. Barba, Devin R. Berg, Holly Bik, Carl Boettiger, Christine L. Borgman, C. Titus Brown, Stuart Buck, Randy Burd, Anita de Waard, Martin Paul Eve, Brian E. Granger, Josh Greenberg, Adina Howe, Bill Howe, May Khanna, Timothy L. Killeen, Matthew Mayernik, Erin McKiernan, Chris Mentzel, Nirav Merchant, Kyle E. Niemeyer, Laura Noren, Sarah M. Nusser, Daniel A. Reed, Edward Seidel, MacKenzie Smith, Jeffrey R. Spies, Matt Turk, John D. Van Horn, Jay Walsh

In the 21st century, research is increasingly data- and computation-driven. Researchers, funders, and the larger community today emphasize the traits of openness and reproducibility.

In March 2017, 13 mostly early-career research leaders who are building their careers around these traits came together with ten university leaders (presidents, vice presidents, and vice provosts), representatives from four funding agencies, and eleven organizers and other stakeholders in an NIH- and NSF-funded one-day, invitation-only workshop titled “Imagining Tomorrow’s University.”

Workshop attendees were charged with launching a new dialog around open research – the current status, opportunities for advancement, and challenges that limit sharing.

The workshop examined how the internet-enabled research world has changed, and how universities need to change to adapt commensurately, aiming to understand how universities can and should make themselves competitive and attract the best students, staff, and faculty in this new world.

During the workshop, the participants re-imagined scholarship, education, and institutions for an open, networked era, to uncover new opportunities for universities to create value and serve society.

They expressed the results of these deliberations as a set of 22 principles of tomorrow’s university across six areas: credit and attribution, communities, outreach and engagement, education, preservation and reproducibility, and technologies.

Activities that follow on from workshop results take one of three forms. First, since the workshop, a number of workshop authors have further developed and published their white papers to make their reflections and recommendations more concrete.

These authors are also conducting efforts to implement these ideas, and to make changes in the university system.

Second, we plan to organise a follow-up workshop that focuses on how these principles could be implemented.

Third, we believe that the outcomes of this workshop support and are connected with recent theoretical work on the position and future of open knowledge institutions.

DOI : https://doi.org/10.12688/f1000research.17425.1

Journal of Open Source Software (JOSS): design and first-year review

Authors : Arfon M Smith, Kyle E Niemeyer, Daniel S Katz, Lorena A Barba, George Githinji, Melissa Gymrek, Kathryn D Huff, Christopher R Madan, Abigail Cabunoc Mayes, Kevin M Moerman, Pjotr Prins, Karthik Ram, Ariel Rokem, Tracy K Teal, Roman Valls Guimera, Jacob T Vanderplas

This article describes the motivation, design, and progress of the Journal of Open Source Software (JOSS). JOSS is a free and open-access journal that publishes articles describing research software. It has the dual goals of improving the quality of the software submitted and providing a mechanism for research software developers to receive credit.

While designed to work within the current merit system of science, JOSS addresses the dearth of rewards for key contributions to science made in the form of software. JOSS publishes articles that encapsulate scholarship contained in the software itself, and its rigorous peer review targets the software components: functionality, documentation, tests, continuous integration, and the license.

A JOSS article contains an abstract describing the purpose and functionality of the software, references, and a link to the software archive. The article is the entry point of a JOSS submission, which encompasses the full set of software artifacts.

Submission and review proceed in the open, on GitHub. Editors, reviewers, and authors work collaboratively and openly. Unlike other journals, JOSS does not reject articles requiring major revision; while not yet accepted, articles remain visible and under review until the authors make adequate changes (or withdraw, if unable to meet requirements).

Once an article is accepted, JOSS gives it a DOI, deposits its metadata in Crossref, and the article can begin collecting citations in indexers such as Google Scholar. Authors retain copyright of their JOSS article, releasing it under a Creative Commons Attribution 4.0 International License.
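The Crossref metadata mentioned above is publicly retrievable (a live record can be fetched from the real Crossref REST API at `https://api.crossref.org/works/<DOI>`). As a minimal sketch of what that metadata enables, the function below reduces a Crossref-style work record to a one-line citation; the sample record is a hand-written, simplified stand-in, and its DOI is a placeholder rather than a real JOSS article.

```python
# Format a simple citation from a Crossref-style metadata record.
# The sample record below is a simplified, hypothetical stand-in for the
# JSON a real https://api.crossref.org/works/<DOI> request would return.

def format_citation(work):
    """Turn a Crossref-style work record into a one-line citation string."""
    authors = ", ".join(
        f"{a['given']} {a['family']}" for a in work.get("author", [])
    )
    title = work["title"][0]                   # Crossref stores titles as a list
    venue = work.get("container-title", [""])[0]
    return f"{authors}. \"{title}\". {venue}. doi:{work['DOI']}"

sample = {  # hypothetical record; the DOI is a placeholder
    "DOI": "10.21105/joss.00000",
    "title": ["An example research-software article"],
    "container-title": ["Journal of Open Source Software"],
    "author": [{"given": "Ada", "family": "Lovelace"}],
}

print(format_citation(sample))
# → Ada Lovelace. "An example research-software article". Journal of Open Source Software. doi:10.21105/joss.00000
```

Because the metadata is deposited in a standard registry, any downstream service can build such citations without scraping the journal itself.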

In its first year, starting in May 2016, JOSS published 111 articles, with more than 40 additional articles currently under review. JOSS is a sponsored project of the nonprofit organization NumFOCUS and is an affiliate of the Open Source Initiative.

URL : https://arxiv.org/abs/1707.02264