Who is Actually Harmed by Predatory Publishers?

Authors : Martin Paul Eve, Ernesto Priego

“Predatory publishing” refers to conditions under which gold open-access academic publishers claim to conduct peer review and charge for their publishing services but do not, in fact, perform such reviews.

Most prominently exposed in recent years by Jeffrey Beall, the phenomenon garners much media attention. In this article, we acknowledge that such practices are deceptive but then examine, across a variety of stakeholder groups, what the harm is from such actions to each group of actors.

We find that established publishers have a strong motivation to hype claims of predation as damaging to the scholarly and scientific endeavour while noting that, in fact, systems of peer review are themselves already acknowledged as deeply flawed.


Alternative location : http://www.triple-c.at/index.php/tripleC/article/view/867


How to share data for collaboration

Authors : Shannon E Ellis, Jeffrey T Leek

Within the statistics community, a number of guiding principles for sharing data have emerged; however, these principles are not always made clear to collaborators generating the data. To bridge this divide, we have established a set of guidelines for sharing data.

In these, we highlight the need to provide the statistician with the raw data, the importance of consistent formatting, and the necessity of documenting all essential experimental information and any pre-processing steps already carried out. With these guidelines, we hope to avoid errors and delays in data analysis.
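A minimal sketch of what a shared data bundle following these guidelines might look like in practice: untouched raw data, a tidy processed table with every pre-processing step recorded in code, and a codebook describing each variable. All file and column names here are hypothetical, not taken from the paper.

```python
# A minimal sketch of a data-sharing bundle in the spirit of these
# guidelines; all file names and column names are hypothetical.
import pandas as pd

# 1. Keep the raw data untouched and share it exactly as it was generated.
raw = pd.read_csv("raw/measurements.csv")

# 2. Record every pre-processing step in code rather than in untracked
#    spreadsheet edits, so each transformation is visible to the statistician.
tidy = (
    raw.rename(columns=str.lower)                    # consistent formatting
       .dropna(subset=["sample_id"])                 # drop unlabeled rows
       .assign(conc_nm=lambda d: d["conc"] * 1e9)    # explicit unit conversion
)
tidy.to_csv("processed/measurements_tidy.csv", index=False)

# 3. Ship a codebook so every variable's meaning and units are explicit.
codebook = pd.DataFrame({
    "variable": ["sample_id", "conc_nm"],
    "description": ["unique sample identifier", "concentration in nanomolar"],
    "units": ["n/a", "nM"],
})
codebook.to_csv("processed/codebook.csv", index=False)
```

Keeping the pre-processing in a script rather than in manual edits means the statistician can trace every value in the tidy table back to the raw data.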


DOI : https://doi.org/10.7287/peerj.preprints.3139v1


Identifiers for the 21st century: How to design, provision, and reuse persistent identifiers to maximize utility and impact of life science data

Authors : Julie A. McMurry, Nick Juty, Niklas Blomberg, Tony Burdett, Tom Conlin, Nathalie Conte, Mélanie Courtot, John Deck, Michel Dumontier, Donal K. Fellows, Alejandra Gonzalez-Beltran, Philipp Gormanns, Jeffrey Grethe, Janna Hastings, Jean-Karim Hériché, Henning Hermjakob, Jon C. Ison, Rafael C. Jimenez, Simon Jupp, John Kunze, Camille Laibe, Nicolas Le Novère, James Malone, Maria Jesus Martin, Johanna R. McEntyre, Chris Morris, Juha Muilu, Wolfgang Müller, Philippe Rocca-Serra, Susanna-Assunta Sansone, Murat Sariyar, Jacky L. Snoep, Stian Soiland-Reyes, Natalie J. Stanford, Neil Swainston, Nicole Washington, Alan R. Williams, Sarala M. Wimalaratne, Lilly M. Winfree, Katherine Wolstencroft, Carole Goble, Christopher J. Mungall, Melissa A. Haendel, Helen Parkinson

In many disciplines, data are highly decentralized across thousands of online databases (repositories, registries, and knowledgebases). Wringing value from such databases depends on the discipline of data science and on the humble bricks and mortar that make integration possible; identifiers are a core component of this integration infrastructure.

Drawing on our experience and on work by other groups, we outline 10 lessons we have learned about the identifier qualities and best practices that facilitate large-scale data integration. Specifically, we propose actions that identifier practitioners (database providers) should take in the design, provision and reuse of identifiers.

We also outline important considerations for those who reference identifiers in various circumstances, including authors and data generators. While the importance and relevance of each lesson will vary by context, there is a need for increased awareness of how to avoid and manage common identifier problems, especially those related to persistence and web accessibility/resolvability.

We focus strongly on web-based identifiers in the life sciences; however, the principles are broadly relevant to other disciplines.
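To make the resolvability concern concrete, here is a minimal Python sketch of checking that a compact identifier of the form prefix:accession resolves on the web via the identifiers.org meta-resolver. The resolver pattern is real, but the helper function and the example identifier are illustrative, not taken from the paper.

```python
# A minimal sketch of checking web resolvability for a compact identifier
# (prefix:accession) via the identifiers.org meta-resolver. The helper and
# the example identifier are illustrative.
import urllib.error
import urllib.request

def resolves(compact_id: str) -> bool:
    """Return True if the compact identifier resolves over HTTP."""
    url = f"https://identifiers.org/{compact_id}"
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req) as resp:  # follows redirects
            return resp.status == 200
    except urllib.error.HTTPError:
        return False

# A UniProt record referenced as a location-independent compact identifier.
print(resolves("uniprot:P0DP23"))
```

The design point is that the citing document records only the prefix and accession; the meta-resolver owns the mapping to whatever URL currently hosts the record, which is what keeps the reference stable over time.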


DOI : https://doi.org/10.1371/journal.pbio.2001414

Big data is not about size: when data transform scholarship

Authors : Jean-Christophe Plantin, Carl Lagoze, Paul N. Edwards, Christian Sandvig

“Big data” discussions typically focus on scale, i.e. the problems and potentials inherent in very large collections. Here, we argue that the most important consequences of “big data” for scholarship stem not from the increasing size of datasets, but instead from a loss of control over the sources of data.

The breakdown of the “control zone” due to the uncertain provenance of data has implications for data integrity, and can be disruptive to scholarship in multiple ways. A retrospective look at the introduction of larger datasets in weather forecasting and epidemiology shows that more data can at times be counter-productive, or destabilize already existing methods.

Based on these examples, we examine two implications of “big data” for scholarship: how the presence of large datasets transforms the traditional disciplinary structure of the sciences, and how it transforms the infrastructure for scholarly communication.

URL : https://books.openedition.org/editionsmsh/9103

The heteronomy of algorithms: Traditional knowledge and computational knowledge

Author : David M. Berry

If critical approaches are to remain relevant in a computational age, then philosophy must work to critique and understand how the materiality of the modern world is normatively structured using computation and the attendant imaginaries made possible for the reproduction and transformation of society, economy, culture and consciousness.

We need to respond to this call in relation to the contemporary reliance on computational forms of knowledge and practices and the co-constitution of new computational subjectivities. This chapter argues that to comprehend the digital we must, therefore, know it from the inside; we must know its formative processes.

We must materialize the digital and ask about the specific mediations that are made possible in and through computation, and the infrastructural systems which are built from it. This calls for computation and computational thinking to be part of the critical traditions of the arts and humanities, the social sciences and the university as a whole, requiring new pedagogical models that are able to develop new critical faculties in relation to the digital.

URL : https://books.openedition.org/editionsmsh/9091

The legal and policy framework for scientific data sharing, mining and reuse

Author : Mélanie Dulong de Rosnay

Text and Data Mining, the automatic processing of large amounts of scientific articles and datasets, is an essential practice for contemporary researchers. Some publishers are challenging its status as a lawful activity, and the topic is under discussion in the European copyright law reform process.

In order to better understand the underlying debate and contribute to the policy discussion, this article first examines the legal status of data access and reuse, as well as licensing policies. It then presents the available options supporting the exercise of Text and Data Mining: publication under open licenses, open access legislation, and recognition of the legitimacy of the activity.

For that purpose, the paper analyses the scientific rationale for sharing and its legal and technical challenges and opportunities. In particular, it surveys existing open access and open data legislation and discusses implementation in European and Latin American jurisdictions.

Framing Text and Data Mining as an exception to copyright could be problematic, as it de facto denies that this activity is part of a positive right to read and should therefore require neither additional permission nor licensing.

It is crucial that licenses and legislation provide a correct definition of Open Access and address the question of pre-existing copyright agreements. Providing implementation means and technical support is also key; otherwise, legislation may remain a declaration of good principles while repositories act as empty shells.

URL : https://books.openedition.org/editionsmsh/9082

Data-informed Open Education Advocacy: A New Approach to Saving Students Money and Backaches

Authors : Sydney Thompson, Lillian S. Rigling, Will Cross, John Vickery

The North Carolina State University Libraries has long recognized the financial burden textbook costs place on students.

By crosswalking data on the use of our textbook collection with textbook cost and course enrollment data, we have begun to map the environment for textbook use at the university and have identified opportunities for faculty outreach promoting alternatives to traditional textbooks, including our Alt-Textbook program.

This article describes our programs, our investigation of textbook use patterns, and how we are using these data to inform our practice.
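A hypothetical Python sketch of the kind of crosswalk described above: join course enrollment, assigned-textbook cost, and reserve-collection usage by course, then rank courses by the total amount students spend on the assigned text. File names, column names, and the savings heuristic are all assumptions for illustration, not the Libraries' actual workflow.

```python
# A hypothetical sketch of the crosswalk described above; file names,
# column names, and the savings heuristic are assumptions, not the
# Libraries' actual workflow.
import pandas as pd

enroll = pd.read_csv("course_enrollment.csv")  # course_id, enrolled
costs = pd.read_csv("textbook_costs.csv")      # course_id, textbook_price
usage = pd.read_csv("reserve_usage.csv")       # course_id, checkouts

crosswalk = (
    enroll.merge(costs, on="course_id")
          .merge(usage, on="course_id")
          .assign(potential_savings=lambda d: d["enrolled"] * d["textbook_price"])
          .sort_values("potential_savings", ascending=False)
)

# Courses with high costs, large enrollments, and heavy reserve use are
# natural first targets for open-textbook (Alt-Textbook) outreach.
print(crosswalk.head(10))
```

Courses that combine high potential savings with heavy reserve checkouts are natural first targets for faculty outreach.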

URL : http://ir.lib.uwo.ca/wlpub/62/