Authors : Tania P. Bardyn, Emily F. Patridge, Michael T. Moore, Jane J. Koh
Medical libraries need to actively review their service models and explore partnerships with other campus entities to provide better-coordinated clinical research management services to faculty and researchers. TRAIL (Translational Research and Information Lab), a five-partner initiative at the University of Washington (UW), explores how best to leverage existing expertise and space to deliver clinical research data management (CRDM) services and emerging technology support to clinical researchers at UW and collaborating institutions in the Pacific Northwest.
The initiative offers 14 services and a technology-enhanced innovation lab located in the Health Sciences Library (HSL) to support the University of Washington clinical and research enterprise.
Sharing of staff and resources merges library and non-library workflows, better coordinating data and innovation services to clinical researchers. Librarians have adopted new roles in CRDM, such as providing user support and training for UW’s Research Electronic Data Capture (REDCap) instance.
TRAIL staff are quickly adapting to changing workflows and shared services, including teaching classes on tools used to manage clinical research data. Researcher interest in TRAIL has sparked new collaborative initiatives and service offerings. Marketing and promotion will be important for raising researchers’ awareness of available services.
Medical librarians are developing new skills by supporting and teaching CRDM. Clinical and data librarians better understand the information needs of clinical and translational researchers by being involved in the earlier stages of the research cycle and identifying technologies that can improve healthcare outcomes.
At health sciences libraries, leveraging existing resources and bringing services together is central to how university medical librarians will operate in the future.
Neuroimaging methods such as magnetic resonance imaging (MRI) involve complex data collection and analysis protocols, which necessitate the establishment of good research data management (RDM). Despite efforts within the field to address issues related to rigor and reproducibility, information about the RDM-related practices and perceptions of neuroimaging researchers remains largely anecdotal.
To inform such efforts, we conducted an online survey of active MRI researchers that covered a range of RDM-related topics. Survey questions addressed the type(s) of data collected, tools used for data storage, organization, and analysis, and the degree to which practices are defined and standardized within a research group.
Our results demonstrate that neuroimaging data are acquired in many forms and transformed and analyzed using a wide variety of software tools, and that RDM practices and perceptions vary considerably both within and between research groups, with trainees reporting less consistency than faculty.
Ratings of the maturity of RDM practices, on a scale from ad hoc to refined, were relatively high during the data collection and analysis phases of a project and significantly lower during the data sharing phase.
Perceptions of emerging practices such as open access publishing and preregistration were largely positive, but adoption of these practices into current workflows remains limited.
Authors : Kyle Chard, Eli Dart, Ian Foster, David Shifflett, Steven Tuecke, Jason Williams
We describe best practices for providing convenient, high-speed, secure access to large data via research data portals. We capture these best practices in a new design pattern, the Modern Research Data Portal, that disaggregates the traditional monolithic web-based data portal to achieve orders-of-magnitude increases in data transfer performance, support new deployment architectures that decouple control logic from data storage, and reduce development and operations costs.
We introduce the design pattern; explain how it leverages high-performance data enclaves and cloud-based data management services; review representative examples at research laboratories and universities, including both experimental facilities and supercomputer sites; describe how to leverage Python APIs for authentication, authorization, data transfer, and data sharing; and use coding examples to demonstrate how these APIs can be used to implement a range of research data portal capabilities.
Sample code at a companion web site, https://docs.globus.org/mrdp, provides application skeletons that readers can adapt to realize their own research data portals.
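The core of the pattern described above is the separation of a lightweight control service (which authenticates users and authorizes transfers) from the high-performance endpoints that actually move the bytes. The sketch below illustrates that split using only the standard library; it is a minimal, hypothetical illustration of the idea, not the Globus API itself (production portals would use the Globus Python SDK and Globus Connect endpoints, as described at the companion site).

```python
import hashlib
import hmac
import time

# Illustrative sketch of the Modern Research Data Portal split:
# the portal's control logic only *authorizes* transfers; a separate
# data endpoint (in practice, a data transfer node in a Science DMZ)
# validates the signed request and serves the data. All names and the
# signing scheme here are hypothetical.

SECRET = b"portal-signing-key"  # shared between portal and endpoint

def sign_request(path: str, user: str, expires: int) -> str:
    """Portal side: authorize a transfer without touching the data."""
    msg = f"{path}|{user}|{expires}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def endpoint_accepts(path: str, user: str, expires: int, sig: str) -> bool:
    """Data-endpoint side: verify the portal's authorization."""
    if time.time() > expires:
        return False
    expected = sign_request(path, user, expires)
    return hmac.compare_digest(expected, sig)

expiry = int(time.time()) + 600  # authorization valid for ten minutes
token = sign_request("/data/run42.h5", "alice", expiry)
assert endpoint_accepts("/data/run42.h5", "alice", expiry, token)
assert not endpoint_accepts("/data/other.h5", "alice", expiry, token)
```

Because the endpoint only needs the shared key to verify requests, the data path never passes through the portal web server, which is what enables the order-of-magnitude transfer speedups the pattern targets.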
Authors : Mark D. Wilkinson, Ruben Verborgh, Luiz Olavo Bonino da Silva Santos, Tim Clark, Morris A. Swertz, Fleur D.L. Kelpin, Alasdair J.G. Gray, Erik A. Schultes, Erik M. van Mulligen, Paolo Ciccarese, Arnold Kuzniar, Anand Gavai, Mark Thompson, Rajaram Kaliyaperumal, Jerven T. Bolleman, Michel Dumontier
Data in the life sciences are extremely diverse and are stored in a broad spectrum of repositories ranging from those designed for particular data types (such as KEGG for pathway data or UniProt for protein data) to those that are general-purpose (such as FigShare, Zenodo, Dataverse or EUDAT).
These data have widely different levels of sensitivity and security considerations. For example, clinical observations about genetic mutations in patients are highly sensitive, while observations of species diversity are generally not.
The lack of uniformity in data models from one repository to another, and in the richness and availability of metadata descriptions, makes integration and analysis of these data a manual, time-consuming task with no scalability.
Here we explore a set of resource-oriented Web design patterns for data discovery, accessibility, transformation, and integration that can be implemented by any general- or special-purpose repository as a means to assist users in finding and reusing their data holdings.
We show that by using off-the-shelf technologies, interoperability can be achieved at the level of an individual spreadsheet cell. We note that the behaviours of this architecture compare favourably to the desiderata defined by the FAIR Data Principles, and can therefore represent an exemplar implementation of those principles.
The proposed interoperability design patterns may be used to improve discovery and integration of both new and legacy data, maximizing the utility of all scholarly outputs.
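A minimal illustration of the kind of machine-readable, resource-oriented metadata these design patterns build on is a JSON-LD-style record exposed at a resolvable URL for each data holding. The record below is a hypothetical example (the identifiers and field choices are illustrative, not the paper's specification); it shows the FAIR-relevant ingredients of a persistent identifier, rich descriptive metadata, an explicit licence, and a separately resolvable access URL.

```python
import json

# Hypothetical JSON-LD-style metadata record for one dataset, using
# schema.org vocabulary. A repository could serve this document at the
# dataset's persistent identifier to make the holding findable and
# reusable by machines as well as humans.
record = {
    "@context": {"schema": "http://schema.org/"},
    "@id": "https://example.org/dataset/42",  # persistent identifier
    "@type": "schema:Dataset",
    "schema:name": "Species diversity observations, 2016",
    "schema:license": "https://creativecommons.org/licenses/by/4.0/",
    "schema:distribution": {
        "schema:contentUrl": "https://example.org/files/42.csv",
        "schema:encodingFormat": "text/csv",
    },
}

# Serialize for publication, then parse as a client would.
serialized = json.dumps(record, indent=2)
parsed = json.loads(serialized)
assert parsed["@id"].startswith("https://")
```

Because the metadata is plain JSON using a shared vocabulary, a generic client can discover the licence and download location without any repository-specific code, which is the interoperability property the design patterns aim for.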
Authors : Catharina Wasner, Ingo Barkow, Fabian Odoni
Since 2006, the education authorities in Switzerland have been obliged by the Constitution to harmonize important benchmarks of the educational system throughout Switzerland. The development of national educational objectives in four disciplines created an important basis for implementing this constitutional mandate.
In 2013 the Swiss National Core Skills Assessment Program (in German: ÜGK – Überprüfung der Grundkompetenzen) was initiated to investigate the skills of students, starting with three of the four domains: mathematics, language of teaching, and first foreign language, in grades 2, 6, and 9. ÜGK uses a computer-based test and a sample size of 25,000 students per year.
A huge challenge for computer-based educational assessment is the research data management process: data from several different systems and tools, stored in different formats, must be merged to obtain data products that researchers can use.
Long-term preservation processes have to be adapted as well. In this paper, we describe our current processes and data sources, as well as our ideas for enhancing the data management.
This mixed method study determined the essential tools and services required for research data management to aid academic researchers in fulfilling emerging funding agency and journal requirements. Focus groups were conducted and a rating exercise was designed to rank potential services.
Faculty conducting research at the University of Toronto were recruited; 28 researchers participated in four focus groups from June–August 2016. Two investigators independently coded the transcripts from the focus groups and identified four themes: 1) seamless infrastructure, 2) data security, 3) developing skills and knowledge, and 4) anxiety about releasing data.
Researchers require assistance with the secure storage of data and favour tools that are easy to use. Increasing knowledge of best practices in research data management is necessary and can be supported by the library using multiple strategies.
These findings help our library identify and prioritize tools and services in order to allocate resources in support of research data management on campus.
This paper develops and tests a lifecycle model for the preservation of research data by investigating the research practices of scientists. This research is based on a mixed-method approach.
An initial study was conducted using case study analytical techniques; insights from these case studies were combined with grounded theory in order to develop a novel model of the Digital Research Data Lifecycle.
A broad-based quantitative survey was then constructed to test and extend the components of the model. The major contribution of these research initiatives is the creation of the Digital Research Data Lifecycle, a data lifecycle that provides a generalized model of the research process to better describe and explain both the antecedents of and barriers to preservation.
The antecedents of and barriers to preservation are data management, contextual metadata, file formats, and preservation technologies. The availability of data management support and preservation technologies, the ability to create and manage contextual metadata, and the choice of file formats all significantly affect the preservability of research data.