The advent of open access to peer-reviewed scholarly literature in the biomedical sciences creates an opportunity to examine scholarship in general, and chemistry in particular, to see where and how novel forms of network technology can accelerate the scientific method. This paper examines broad trends in information access and openness with an eye towards their applications in chemistry.
A case study in openness: Salford University:
“A case study in institutional openness has just been published, focused on Salford University. Written by the Vice Chancellor and EOS Board member, Professor Martin Hall, the study describes the drive to openness and the benefits it brings to the University and its public. “The University aims to create economic and social value through innovative ways of working together. A key element of this is openness”, says Professor Hall.
In the paper, he develops the concept of a ‘Generic Open Access University’ and describes how the university repository, USIR, is the core of intermediary agencies and a wide range of networked connections. “The open access repository is at the heart of this model, in the place that the library has occupied from the earliest days of the university”, Professor Hall says.”
URL : http://www.openscholarship.org/jcms/c_7273/a-case-study-in-openness-salford-university
[…] Digitize Me, Visualize Me, Search Me takes as its starting point the so-called ‘computational turn’ to data-intensive scholarship in the humanities.
The phrase ‘the computational turn’ has been adopted to refer to the process whereby techniques and methodologies drawn from (in this case) computer science and related fields – including science visualization, interactive information visualization, image processing, network analysis, statistical data analysis, and the management, manipulation and mining of data – are being used to produce new ways of approaching and understanding texts in the humanities; what is sometimes thought of as ‘the digital humanities’.
The concern in the main has been with either digitizing ‘born analog’ humanities texts and artifacts (e.g. making annotated editions of the art and writing of William Blake available to scholars and researchers online), or gathering together ‘born digital’ humanities texts and artifacts (videos, websites, games, photography, sound recordings, 3D data), and then taking complex and often extremely large-scale data analysis techniques from computing science and related fields and applying them to these humanities texts and artifacts – to this ‘big data’, as it has been called.
Witness Lev Manovich and the Software Studies Initiative’s use of ‘digital image analysis and new visualization techniques’ to study ‘20,000 pages of Science and Popular Science magazines… published between 1872-1922, 780 paintings by van Gogh, 4535 covers of Time magazine (1923-2009) and one million manga pages’ (Manovich, 2011), and Dan Cohen and Fred Gibbs’s text mining of ‘the 1,681,161 books that were published in English in the UK in the long nineteenth century’ (Cohen, 2010).
What Digitize Me, Visualize Me, Search Me endeavours to show is that such data-focused transformations in research can be seen as part of a major alteration in the status and nature of knowledge. It is an alteration that, according to the philosopher Jean-François Lyotard, has been taking place since at least the 1950s.
It involves nothing less than a shift away from a concern with questions of what is right and just, and toward a concern with legitimating power by optimizing the social system’s performance in instrumental, functional terms. This shift has significant consequences for our idea of knowledge.
[…] In particular, Digitize Me, Visualize Me, Search Me suggests that the turn in the humanities toward data-driven scholarship, science visualization, statistical data analysis, etc. can be placed alongside all those discourses that are being put forward at the moment – in both the academy and society – in the name of greater openness, transparency, efficiency and accountability.
This report discusses the current and potential role, in a truly open society, of raw Public Sector Information (PSI) that is fully open, that is, accessible and reusable by everybody. The general characteristics of PSI and the conclusions are based on previous studies and on the analysis of current examples from both the European Union and the rest of the world.
The generation, management and usage of the data constituting what is normally called PSI is a very large topic, and this report focuses on only some parts of it. First, we look only at truly “public” PSI, that is, information (from maps to aggregate health data) that is not tied to any single individual and whose publication therefore raises no privacy issues.
It is also important to distinguish between actual raw data (basic elements of information such as numbers, names, dates, single geographical features like the shape of a lake, addresses…), the results derived from them (more or less complex documents, policies, laws…) and the procedures and chains of command followed to generate and use such results, that is, voting or, inside Public Administrations, taking or implementing decisions.
So far, discussion and research on Open Data at the national level has received relatively more coverage, even though much of the PSI with the most direct impact on the lives of most citizens is generated, managed and used by local, not central, administrations and end users (citizens, businesses or other organizations). The creation of wealth and jobs can be easier, faster and cheaper to stimulate at the local level, especially in times of economic crisis. Finally, open access to public data is much more necessary for small businesses than for big corporations, since the latter can afford to pay for access to data anyway (and high prices for data may also protect them from competition from smaller companies).
For all these reasons, the main focus of this report will be on the raw data that constitute “public” PSI as defined above; this is why, in this report, the terms “raw data” and “PSI” are practically interchangeable. We will also focus on the local dimension of Open PSI, that is, raw data directly produced by, or directly relevant for, local communities (cities and regions), and on its direct impact on local government and the local economy.
Chapters 2 and 3 summarize the importance of data in modern society and some recent developments on the Open Data front in Europe. Chapter 4 explains why raw PSI should be open, while Chapter 5 shows the potential of such data with a few real-world examples from several (mostly EU) countries. Chapter 6 looks at some dangers that should not be ignored when promoting Open Data, and Chapter 7 proposes some general practices for getting the most out of such data. Some conclusions and the next phases of the project are in Chapter 8.
On the Lack of Consensus over the Meaning of Openness: An Empirical Study:
“This study set out to explore the views and motivations of those involved in a number of recent and current advocacy efforts (such as open science, computational provenance, and reproducible research) aimed at making science and scientific artifacts accessible to a wider audience. Using an exploratory approach, the study tested whether a consensus exists among advocates of these initiatives about the key concepts, exploring the meanings that scientists attach to the various mechanisms for sharing their work, and the social context in which this takes place. The study used a purposive sampling strategy to target scientists who have been active participants in these advocacy efforts, and an open-ended questionnaire to collect detailed opinions on the topics of reproducibility, credibility, scooping, data sharing, results sharing, and the effectiveness of the peer review process. We found evidence of a lack of agreement on the meaning of key terminology, and a lack of consensus on some of the broader goals of these advocacy efforts. These results can be explained through a closer examination of the divergent goals and approaches adopted by different advocacy efforts. We suggest that the scientific community could benefit from a broader discussion of what it means to make scientific research more accessible and how this might best be achieved.”
URL : http://goo.gl/pEvoH
Open to All? Case studies of openness in research:
“Since the early 1990s, the open access movement has promoted the concept of openness in relation to scientific research. Focusing initially upon the records of science in the form of the text of articles in scholarly journals, interest has broadened in the last decade to include a much wider range of materials produced by researchers. At the same time, concepts of openness and access have also developed to include various kinds of use, by machines as well as humans.
Academic bodies, including funders and groups of researchers, have set out statements in support of various levels of openness in research. Such statements often focus upon two key dimensions: what is made open, and how; and to whom is it made open, and under what conditions? This study set out to consider the practice of six research groups from a range of disciplines in order to better understand how principles of openness are translated into practice.”
URL : http://www.rin.ac.uk/system/files/attachments/NESTA-RIN_Open_Science_V01_0.pdf
Reproducible Research: Addressing the Need for Data and Code Sharing in Computational Science:
“Roundtable participants identified ways of making computational research details readily available, which is a crucial step in addressing the current credibility crisis.”
URL : http://www.stanford.edu/~vcs/papers/RoundtableDeclaration2010.pdf