Should research misconduct be criminalized?

Authors : Rafael Dal-Ré, Lex M Bouter, Pim Cuijpers, Christian Gluud, Søren Holm

For more than 25 years, research misconduct (research fraud) has been defined as fabrication, falsification, or plagiarism (FFP)—although other research misbehaviors have also been added to codes of conduct and legislation.

A critical issue in deciding whether research misconduct should be subject to criminal law is its definition, because not all behaviors labeled as research misconduct qualify as serious crimes. And the assumption that all FFP is fraud and all non-FFP is not is far from obvious.

In addition, new research misbehaviors have recently been described, such as prolific authorship and fake peer review, while others, such as duplication of images, have been boosted. The scientific community has been largely successful in keeping criminal law away from cases of research misconduct.

Alleged cases of research misconduct are typically investigated by committees of scientists, often from the same institution or university as the suspected offender, in a process that frequently lacks transparency.

Few countries have introduced, or plan to introduce, independent bodies to address research misconduct; for the coming years, therefore, most universities and research institutions will continue handling alleged research misconduct cases with their own procedures. A global operationalization of research misconduct with clear boundaries and clear criteria would be helpful.

There is room for improvement in reaching global clarity on what research misconduct is, how allegations should be handled, and which sanctions are appropriate.

URL : Should research misconduct be criminalized?

DOI : https://doi.org/10.1177/1747016119898400

Practices, Challenges, and Prospects of Big Data Curation: a Case Study in Geoscience

Authors : Suzhen Chen, Bin Chen

Open and persistent access to past, present, and future scientific data is fundamental for transparent and reproducible data-driven research. The scientific community now faces both challenges and opportunities arising from increasingly complex disciplinary data systems.

Concerted efforts from domain experts, information professionals, and Internet technology experts are essential to ensure the accessibility and interoperability of big data.

Here we review current practices in building and managing big data within the context of large data infrastructure, using geoscience cyberinfrastructure such as Interdisciplinary Earth Data Alliance (IEDA) and EarthCube as a case study.

Geoscience is a data-rich discipline with a rapid expansion of sophisticated and diverse digital data sets. Having started to embrace the digital age, the community has applied big data and data-mining tools to new types of research.

We also identify current challenges, key elements, and prospects for constructing a more robust and future-proof big data infrastructure for research and publication, as well as the roles, qualifications, and opportunities for librarians and information professionals in the data era.

URL : Practices, Challenges, and Prospects of Big Data Curation: a Case Study in Geoscience

DOI: https://doi.org/10.2218/ijdc.v14i1.669

Developing the Librarian Workforce for Data Science and Open Science

Authors : Lisa Federer, Sarah Clarke, Maryam Zaringhalam

URL : Developing the Librarian Workforce for Data Science and Open Science

How Many Papers Should Scientists Be Reviewing? An Analysis Using Verified Peer Review Reports

Authors : Vincent Raoult

The current peer review system is under stress from ever increasing numbers of publications, the proliferation of open-access journals and an apparent difficulty in obtaining high-quality reviews in due time. At its core, this issue may be caused by scientists insufficiently prioritising reviewing.

Perhaps this low prioritisation is due to a lack of understanding of how many reviews researchers need to conduct to balance the peer review process. I obtained verified peer review data from 142 journals across 12 research fields, for a total of over 300,000 reviews and over 100,000 publications, to estimate the number of reviews required per publication per field.

I then used this value in relation to the mean numbers of authors per publication per field to highlight a ‘review ratio’: the expected minimum number of publications an author in their field should review to balance their input (publications) into the peer review process.

On average, 3.49 ± 1.45 (SD) reviews were required for each scientific publication, and the estimated review ratio across all fields was 0.74 ± 0.46 (SD) reviews per paper published per author. Since these are conservative estimates, I recommend scientists aim to conduct at least one review per publication they produce. This should ensure that the peer review system continues to function as intended.
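The abstract's 'review ratio' follows from simple arithmetic: reviews required per publication divided by the mean number of authors per publication. As a minimal sketch using only the field-wide averages reported above (the implied author count is derived, not stated in the abstract):

```python
def review_ratio(reviews_per_publication: float, mean_authors: float) -> float:
    """Minimum number of reviews an author should perform per paper
    they publish, to balance their input into the peer review system."""
    return reviews_per_publication / mean_authors

# Field-wide averages reported in the abstract:
avg_reviews = 3.49  # reviews required per scientific publication
avg_ratio = 0.74    # reviews per paper published per author

# Implied mean number of authors per publication (a derived figure,
# not one the abstract reports directly):
implied_authors = avg_reviews / avg_ratio
print(round(implied_authors, 2))  # → 4.72
```

So an author on a typical multi-author paper owes somewhat less than one review per publication, which is why the author rounds the recommendation up to at least one review per paper produced.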

URL : How Many Papers Should Scientists Be Reviewing? An Analysis Using Verified Peer Review Reports

DOI : https://doi.org/10.3390/publications8010004

Formalizing Privacy Laws for License Generation and Data Repository Decision Automation

Authors : Micah Altman, Stephen Chong, Alexandra Wood

In this paper, we summarize work-in-progress on expert system support to automate some data deposit and release decisions within a data repository, and to generate custom license agreements for those data transfers.

Our approach formalizes via a logic programming language the privacy-relevant aspects of laws, regulations, and best practices, supported by legal analysis documented in legal memoranda.

This formalization enables automated reasoning about the conditions under which a repository can transfer data, through interrogation of users, and the application of formal rules to the facts obtained from users.

The proposed system takes the specific conditions for a given data release and produces a custom data use agreement that accurately captures the relevant restrictions on data use.

This enables appropriate decisions and accurate licenses, while removing the bottleneck of lawyer effort per data transfer.

The operation of the system aims to be transparent, in the sense that administrators, lawyers, institutional review boards, and other interested parties can evaluate the legal reasoning and interpretation embodied in the formalization, and the specific rationale for a decision to accept or release a particular dataset.
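The abstract describes encoding privacy-relevant rules so that release decisions and license terms follow mechanically from facts gathered from users. As a rough illustration only: the rules, conditions, and rationales below are invented for this sketch, whereas the actual system encodes real legal analysis in a logic programming language, not Python.

```python
# Hypothetical sketch of rule-based data-release decisions, loosely in
# the spirit of the approach the abstract describes. All predicates and
# outcomes here are illustrative assumptions, not the paper's rules.

from dataclasses import dataclass

@dataclass
class Deposit:
    """Facts obtained by interrogating the depositor."""
    contains_personal_data: bool
    consent_obtained: bool
    deidentified: bool

def may_release(d: Deposit) -> tuple[bool, str]:
    """Apply ordered rules to the facts; return a decision and the
    rationale that a lawyer or IRB could audit."""
    if not d.contains_personal_data:
        return True, "no personal data: open license"
    if d.deidentified:
        return True, "de-identified data: restricted-use agreement"
    if d.consent_obtained:
        return True, "consented personal data: custom data use agreement"
    return False, "personal data without consent: cannot release"

decision, rationale = may_release(
    Deposit(contains_personal_data=True, consent_obtained=False, deidentified=True)
)
print(decision, "-", rationale)
```

The design point the abstract stresses is transparency: because each decision is the result of explicit rules applied to recorded facts, the rationale for accepting or releasing a dataset can be inspected and challenged, rather than living in a lawyer's head.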

URL : https://arxiv.org/abs/1910.10096

Open or Ajar? Openness within the Neoliberal Academy

Authors : Kevin Sanders, Simon Bowie

The terms ‘open’ and ‘openness’ are widely used across the current higher education environment, particularly in the areas of repository services and scholarly communications.

Open-access licensing and open-source licensing are two prevalent manifestations of open culture within higher education research environments. As theoretical ideals, open-licensing models aim at openness and academic freedom.

But operating as they do within the context of global neoliberalism, to what extent are these models constructed by, sustained by, and co-opted by neoliberalism?

In this paper, we interrogate the use of open-licensing within scholarly communications and within the larger societal context of neoliberalism. Through synthesis of various sources, we will examine how open access licensing models have been constrained by neoliberal or otherwise corporate agendas, how open access and open scholarship have been reframed within discourses of compliance, how open-source software models and software are co-opted by politico-economic forces, and how the language of ‘openness’ is widely misused in higher education and repository services circles to drive agendas that run counter to actually increasing openness.

We will finish by suggesting ways to resist this trend and to use open-licensing models to push back against neoliberal agendas in open scholarship.

URL : Open or Ajar? Openness within the Neoliberal Academy

Original location : https://www.preprints.org/manuscript/202001.0240/v1

Ouverture des données de la recherche : de la vision politique aux pratiques des chercheurs

Author : Violaine Rebouillat

This thesis focuses on research data in a context of growing incentives for their openness. Research data are information collected by scientists with a view to being used as evidence for a scientific theory.

The notion is complex to define because it is contextual. Since the 2000s, open access to data has occupied an increasingly strategic place in research policies. These issues have been taken up by intermediary professions, which have developed dedicated services to support researchers in applying data management and openness recommendations.

The thesis examines the link between the ideology of openness and research practices. What forms of data management and sharing exist in research communities, and what motivates them? What importance do researchers attach to the service offerings stemming from data management and openness policies?

To address these questions, 57 interviews were conducted with researchers at the Université de Strasbourg across different disciplines. The survey reveals a very wide variety of data management and sharing practices. One point highlighted is that, within scientific logic, data sharing meets a need.

It is an integral part of the researcher's strategy, whose primary objective is to protect their professional interests. Data thus fit into a credibility cycle, which gives them both a use value (for producing new publications) and an exchange value (as a bargaining chip in collaborations with partners).

The survey also shows that only a small proportion of the services developed in the context of data openness correspond to those researchers actually use.

One hypothesis put forward is that the service offering arrives too early to meet researchers' needs. Since the evaluation and recognition of scientific activities are mainly based on the publication of articles and books, researchers do not regard data management and openness as priorities.

A second hypothesis is that data openness services are offered by actors relatively distant from research communities. Researchers are more influenced by networks specific to their research fields (journals, infrastructures, and so on).

These results ultimately invite a reconsideration of the question of mediation in the opening of scientific data.

URL : https://tel.archives-ouvertes.fr/tel-02447653