The article addresses the problem of restricted access to industry-sponsored clinical trial data. In particular, it analyses the intersection of the competing claims that mandatory disclosure of pharmaceutical test data impedes innovation incentives, and that access facilitates new drug development.
These claims are characterised in terms of public-good and common-resource dilemmas. The analysis finds that confidentiality protection of primary research data plays an ambiguous role.
While secrecy as such does not solve the public-good problem in pharmaceutical innovation (given regulatory instruments that protect the originator drug against generic competition), it is likely to exacerbate the common-resource problem, given the role of data as a source of both verified and new knowledge.
It is argued that the research-based industry's claim that disclosure of clinical data impedes innovation incentives is misplaced and should not be leveraged against pro-access policies. The analysis proposes that regulation should adhere to the principle that protection be confined to competition by imitation.
This implies that the rules of access should be designed in such a way that third-party use of data does not interfere with protection against generic competition. At the same time, the long-term collective benefit can be maximised when the ‘cooperative choice’ – i.e. when everyone shares data – becomes the ‘dominant strategy’.
This can be achieved only when access is not subject to authorisation by the initial trial sponsors, and when primary data are aggregated, refined and managed on a collective basis.
With the establishment of large research infrastructures in heritage science such as E-RIHS, a diverse range of actors is brought together, drawn from both the humanities and social sciences and the experimental sciences. The palaeontologist crosses paths with the art historian, and the physicist collaborates with the conservator.
In this context, research data management is a genuine challenge, because it must gather, add value to, and make accessible data produced by very different protagonists using equally varied methods. How, indeed, can one manage and exchange experimental data, digitised images and conservation reports all at once?
The life cycle of research data within this interdisciplinary community, from creation through analysis to dissemination, calls into question the very definition of this type of data and leads us to examine the practices surrounding it.
The purpose of this paper is to propose a personal viewpoint on the development of document supply in the context of the recent European Union (EU) decisions on open science.
The paper offers some answers to the usual questions of service development, concerning business, customers, added value, environment and objectives.
The EU goal for open science is that 100 per cent of research results be available by 2020. To meet this challenge, document supply must change: include more and other content, serve different target groups, apply innovative technology and provide knowledge. Otherwise, document supply will become a marginalised library service.
Basically, open science is not library-friendly, and it does not offer a solution to the current problems of document supply. But it may give document supply an opportunity to become a modern service able to deal with new forms of unequal access and the digital divide.
The way science and research are done is rapidly becoming more open and collaborative. The traditional model of publishing new findings in journals is increasingly outdated and no longer serves the needs of much of science.
Whilst preprints bring the significant benefits of removing delay and selection, they do not go far enough if simply implemented alongside the existing journal system. We propose a new approach, an Open Science Platform, that takes the benefits of preprints but adds formal, invited and transparent post-publication peer review.
This bypasses the problems of the current journal system and, in doing so, moves the evaluation of research and researchers away from the journal-based Impact Factor and towards a fairer system of article-based qualitative and quantitative indicators.
In the long term, it should be irrelevant where a researcher publishes their findings. What is important is that research is shared and made available without delay within a framework that encourages quality standards and requires all players in the research community to work as collaborators.
“Open access” has become a central theme of journal reform in academic publishing. In this article, I examine the consequences of an important technological loophole in which publishers can claim to be adhering to the principles of open access by releasing articles in proprietary or “locked” formats that cannot be processed by automated tools, whereby even simple copy and pasting of text is disabled.
These restrictions will prevent the development of an important infrastructural element of a modern research enterprise, namely, scientific data science, or the use of data analytic techniques to conduct meta-analyses and investigations into the scientific corpus.
I give a brief history of the open access movement, discuss novel journalistic practices, and provide an overview of data-driven investigation of the scientific corpus. I argue that, particularly in an era where the veracity of many research studies has been called into question, scientific data science should be one of the key motivations for open access publishing.
The enormous benefits of unrestricted access to the research literature should prompt scholars from all disciplines to reject publishing models whereby articles are released in proprietary formats or are otherwise restricted from being processed by automated tools as part of a data science pipeline.
The French national, multidisciplinary open archive HAL now hosts research data as well as supplementary data in the form of annexes.
In an attempt to define directions for this infrastructure, this dissertation presents a state of the art of the various actors and issues surrounding the topic of research data. It then describes the different services implemented by research data repositories and the challenges they must meet.
Finally, it proposes an exploratory study of the supplementary data hosted by HAL, seeking to identify which scientific communities use this service and in what forms.
Authors: Nadine Levin, Sabina Leonelli, Dagmara Weckowska, David Castle, John Dupré
This article documents how biomedical researchers in the United Kingdom understand and enact the idea of “openness.”
This is of particular interest to researchers and science policy-makers worldwide in view of the recent adoption of pioneering Open Science and Open Access policies by the U.K. government, policies whose impact on and implications for research practice urgently need evaluation before they are implemented elsewhere.
This study is based on 22 in-depth interviews with U.K. researchers in systems biology, synthetic biology, and bioinformatics, which were conducted between September 2013 and February 2014.
Through an analysis of the interview transcripts, we identify seven core themes that characterize researchers’ understanding of openness in science and nine factors that shape the practice of openness in research.
Our findings highlight the implications that Open Science policies can have for research processes and outcomes and provide recommendations for enhancing their content, effectiveness, and implementation.