Authors: Heidi Carmen Howard, Deborah Mascalzoni, Laurence Mabile, Gry Houeland, Emmanuelle Rial-Sebbag, Anne Cambon-Thomsen
Currently, a great deal of biomedical research in fields such as epidemiology, clinical trials and genetics is reliant on vast amounts of biological and phenotypic information collected and assembled in biobanks.
While many resources are being invested to ensure that comprehensive and well-organised biobanks can provide increased access to, and sharing of, biomedical samples and information, many barriers to such responsible and extensive sharing remain.
Germane to the discussion herein is one such barrier: the lack of proper recognition for the researchers and clinicians who develop bioresources. Indeed, the effort and resources invested to set up and sustain a bioresource can be enormous, and such work should be easily traced and properly recognised.
However, no system currently exists that systematically and accurately traces and attributes recognition to those doing this work, or to the bioresource institution itself. As a first step towards solving this “recognition problem”, the Bioresource Research Impact Factor/Framework (BRIF) initiative was proposed almost a decade and a half ago and is currently under further development.
With the ultimate aim of increasing awareness and understanding of the BRIF, in this article, we contribute the following: (1) a review of the objectives and functions of the BRIF including the description of two tools that will help in the deployment of the BRIF, the CoBRA (Citation of BioResources in journal Articles) guideline, and the Open Journal of Bioresources (OJB); (2) the results of a small empirical study on stakeholder awareness of the BRIF and (3) a brief analysis of the ethical dimensions of the BRIF which allow it to be a positive contribution to responsible biobanking.
Although a number of online platforms exist for sharing patient-level clinical trial data from industry sponsors, they differ considerably in the role that local ethics approval plays in their research proposal review processes.
The first and largest of these platforms is ClinicalStudyDataRequest.com (CSDR), which includes over three thousand trials from thirteen sponsors including GlaxoSmithKline, Novartis, Roche, Sanofi, and Bayer. CSDR asks applicants to state whether they have received ethics approval for their research proposal, but in most cases does not require that they submit evidence of approval.
However, the website does require that applicants without ethical approval state the reason it was not required. In order to examine the perspectives of researchers on this topic, we coded every response to that question received by CSDR between June 2014 and February 2017.
Of 111 applicants who stated they were exempt from ethics approval, 63% mentioned de-identification, 57% mentioned the use of existing data, 33% referred to local or jurisdictional regulations, and 20% referred to the approvals obtained by the original study.
We conclude by examining the experience of CSDR within the broader context of the access mechanisms and policies currently being used by other data sharing platforms, and discuss how our findings might be used to help clinical trial data providers design clear and informative access documents.
While data sharing is becoming increasingly common in quantitative social inquiry, qualitative data are rarely shared. One factor inhibiting data sharing is a concern about human participant protections and privacy.
Protecting the confidentiality and safety of research participants is a concern for both quantitative and qualitative researchers, but it raises specific concerns within the epistemic context of qualitative research.
Thus, emerging protection models from the quantitative realm must be carefully evaluated before being applied to the qualitative realm. At the same time, qualitative scholars already employ a variety of strategies for human-participant protection, implicitly or informally, during the research process.
In this practice paper, we assess available strategies for protecting human participants and how they can be deployed. We describe a spectrum of possible data management options, such as de-identification and applying access controls, including some already employed by the Qualitative Data Repository (QDR) in tandem with its pilot depositors.
Throughout the discussion, we consider the tension between modifying data or restricting access to them, and retaining their analytic value.
We argue that developing explicit guidelines for sharing qualitative data generated through interaction with humans will allow scholars to address privacy concerns and increase the secondary use of their data.
In this paper, the authors address learning analytics and the ways academic libraries are beginning to participate in wider institutional learning analytics initiatives. Since there are moral issues associated with learning analytics, the authors consider how data mining practices run counter to ethical principles in the American Library Association’s “Code of Ethics.”
Specifically, the authors address how learning analytics implicates professional commitments to promote intellectual freedom; protect patron privacy and confidentiality; and balance intellectual property interests between library users, their institution, and content creators and vendors.
The authors recommend that librarians should embed their ethical positions in technological designs, practices, and governance mechanisms.
The risk of scooping is often used as a counterargument to open science, especially open data. In this case study, I examined openness strategies, practices, and attitudes in two open collaboration research projects created by Finnish researchers, in order to understand what made them resistant to the fear of scooping.
The radically open approach of the projects includes open-by-default funding proposals, co-authorship, and community membership. The primary sources used are interviews with the projects’ founding members.
The analysis indicates that openness requires trust in close peers, but not necessarily in the research community or society at large. Based on the case study evidence, focusing on intrinsic goals, such as new knowledge and bringing about ethical reform, rather than extrinsic goals such as publications, supports openness.
Understanding the fundamentals of science, the philosophy of science, and research ethics can also have a beneficial effect on willingness to share. Whether aspects of open sharing make it seem riskier to certain demographic groups within the research community, such as women, could be worth closer inspection.
How do students comment on ethical principles? Which principles are important for their awareness of librarianship? How do they understand the relevance of human rights to their future work?
The case study presents the results of a lecture on information rights and ethics given to 50 Master’s students in library and information sciences (LIS) at the University of Lille (France) in 2014–2015. Students were asked to comment on the core principles of the International Federation of Library Associations (IFLA) Code of Ethics.
The students see the library as a privileged space of access to information, where the librarian takes on the function of a guardian of this specific individual freedom—a highly political role and task.
This opinion is part of a general commitment to open access and free-flowing resources on the Internet. The students emphasise social responsibility toward society as a whole, but above all toward the individual patron as a real person: a member of a cultural community, a social class, or an ethnic group.
With regard to Human Rights, the students interpret the IFLA Code mainly as a code of civil, political, and critical responsibility to endorse the universal right of freedom of expression.
They see a major conflict between ethics and policy. The findings are followed by recommendations for the further development of LIS education, including internships, transversality, a focus on conflicts and on students’ cognitive dissonance, and the teaching of social skills in terms of work-based solidarity and collective choices.
The chapter is a qualitative study based on empirical data from a French LIS Master’s programme.
Poor research design and data analysis encourage false-positive findings. Such poor methods persist despite perennial calls for improvement, suggesting that they result from something more than just misunderstanding.
The persistence of poor methods results partly from incentives that favour them, leading to the natural selection of bad science. This dynamic requires no conscious strategizing, no deliberate cheating or loafing, by scientists, only that publication is a principal factor for career advancement.
Some normative methods of analysis have almost certainly been selected to further publication instead of discovery. In order to improve the culture of science, a shift must be made away from correcting misunderstandings and towards rewarding understanding. We support this argument with empirical evidence and computational modeling.
We first present a 60-year meta-analysis of statistical power in the behavioral sciences and show that power has not improved despite repeated demonstrations of the necessity of increasing power.
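To make the notion of statistical power concrete, the following sketch (my own illustration, not taken from the paper or its meta-analysis) uses a standard normal approximation to compute the power of a two-sided two-sample test at α = 0.05. The function name and the parameter values are illustrative assumptions.

```python
# Illustrative only: normal-approximation power for a two-sided,
# two-sample test at alpha = 0.05 (not the paper's own analysis).
import math

def normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def approx_power(d: float, n_per_group: int) -> float:
    """Approximate power given Cohen's d and a per-group sample size.

    The normal approximation ignores the t-distribution's heavier
    tails, so it slightly overstates power for very small samples.
    """
    z_crit = 1.959963984540054  # Phi^{-1}(0.975), two-sided alpha = 0.05
    noncentrality = d * math.sqrt(n_per_group / 2.0)
    return normal_cdf(noncentrality - z_crit)

# A "medium" effect (d = 0.5) with 20 subjects per group:
print(round(approx_power(0.5, 20), 2))  # ≈ 0.35, well below the 0.8 convention
# Roughly 64 per group are needed to reach the conventional 80%:
print(round(approx_power(0.5, 64), 2))  # ≈ 0.81
```

The gap between these two numbers is exactly what repeated calls for larger samples have targeted for decades.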
To demonstrate the logical consequences of structural incentives, we then present a dynamic model of scientific communities in which competing laboratories investigate novel or previously published hypotheses using culturally transmitted research methods.
As in the real world, successful labs produce more “progeny”, such that their methods are more often copied and their students are more likely to start labs of their own.
Selection for high output leads to poorer methods and increasingly high false discovery rates. We additionally show that replication slows but does not stop the process of methodological deterioration. Improving the quality of research requires change at the institutional level.
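The selection dynamic described above can be sketched as a toy simulation. The code below is my own minimal re-implementation of the general idea, not the authors’ model: labs differ only in a hypothetical “rigor” trait that lowers their false-positive rate but brings no publication benefit, and the least productive lab periodically copies the most productive lab’s methods. All parameter values are illustrative assumptions.

```python
# Toy sketch (not the authors' code) of selection for publication output:
# sloppy methods yield more positive results, so they spread by imitation.
import random

random.seed(1)

N_LABS = 100
GENERATIONS = 200
BASE_RATE = 0.1           # fraction of tested hypotheses that are true
EXPERIMENTS_PER_GEN = 50

def positive_results(rigor: float) -> int:
    """Count 'publishable' positive results for one lab in one generation.

    Higher rigor lowers the false-positive rate (alpha from 0.5 down
    to 0.05) but carries no reward; that asymmetry drives the decline.
    """
    alpha = 0.5 - 0.45 * rigor   # sloppy labs: alpha = 0.5; rigorous: 0.05
    power = 0.8                  # held fixed for simplicity
    count = 0
    for _ in range(EXPERIMENTS_PER_GEN):
        true_hypothesis = random.random() < BASE_RATE
        p_positive = power if true_hypothesis else alpha
        if random.random() < p_positive:
            count += 1
    return count

labs = [random.random() for _ in range(N_LABS)]  # initial rigor values
initial_mean = sum(labs) / N_LABS
for _ in range(GENERATIONS):
    scores = [positive_results(r) for r in labs]
    # Evolutionary step: the least productive lab copies the most
    # productive lab's methods, with slight mutation ("student drift").
    best = max(range(N_LABS), key=lambda i: scores[i])
    worst = min(range(N_LABS), key=lambda i: scores[i])
    labs[worst] = min(1.0, max(0.0, labs[best] + random.gauss(0, 0.02)))
final_mean = sum(labs) / N_LABS

print(f"mean rigor: {initial_mean:.2f} -> {final_mean:.2f}")
```

Running this, mean rigor drifts downward even though no lab ever "decides" to cut corners, which is the point of the abstract's argument: the decline is a population-level consequence of the reward structure.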