Open Source Software for Creation of Digital Library: A Comparative Study of Greenstone Digital Library Software & DSpace :
“Software nowadays has become the lifeline of modern organizations. Organizations cannot do their tasks effectively and efficiently without software. The extremely competitive environment, zero deficiency and enhanced productivity have made it mandatory for organizations to carefully choose the appropriate software after a comprehensive needs assessment. Software simplifies their tasks and saves a lot of precious time, which can be spent managing other important issues. Libraries also need software if they want to create a parallel digital library with features that we may not find in a traditional library. There are several open source software packages available to create a digital library. For this, library professionals should first be aware of the advantages of open source software and should be involved in its development. They should have basic knowledge of selection, installation and maintenance. Open source software requires a greater degree of computing responsibility than commercial software. Digitization involves huge sums of money to create and maintain, and OSS appears to be a means of reducing that cost. Among these packages, DSpace and Greenstone are becoming the most popular in India and abroad. This paper compares these two popular OSS packages from various points of view. The comparative table may help professionals who are planning to create a digital library.”
URL : http://eprints.rclis.org/19924/
The case for open computer programs :
“Scientific communication relies on evidence that cannot be entirely included in publications, but the rise of computational science has added a new layer of inaccessibility. Although it is now accepted that data should be made available on request, the current regulations regarding the availability of software are inconsistent. We argue that, with some exceptions, anything less than the release of source programs is intolerable for results that depend on computation. The vagaries of hardware, software and natural language will always ensure that exact reproducibility remains uncertain, but withholding code increases the chances that efforts to reproduce results will fail.”
URL : http://www.nature.com/nature/journal/v482/n7386/full/nature10836.html
Curation Micro-Services: A Pipeline Metaphor for Repositories :
“The effective long-term curation of digital content requires expert analysis, policy setting, and decision making, and a robust technical infrastructure that can effect and enforce curation policies and implement appropriate curation activities. Since the number, size, and diversity of content under curation management will undoubtedly continue to grow over time, and the state of curation understanding and best practices relative to that content will undergo a similar constant evolution, one of the overarching design goals of a sustainable curation infrastructure is flexibility. In order to provide the necessary flexibility of deployment and configuration in the face of potentially disruptive changes in technology, institutional mission, and user expectation, a useful design metaphor is provided by the Unix pipeline, in which complex behavior is an emergent property of the coordinated action of a number of simple independent components. The decomposition of repository function into a highly granular and orthogonal set of independent but interoperable micro-services is consistent with the principles of prudent engineering practice. Since each micro-service is small and self-contained, they are individually more robust and collectively easier to implement and maintain. By being freely interoperable in various strategic combinations, any number of micro-services-based repositories can be easily constructed to meet specific administrative or technical needs. Importantly, since these repositories are purposefully built from policy neutral and protocol and platform independent components to provide the function minimally necessary for a specific context, they are not constrained to conform to an infrastructural monoculture of prepackaged repository solutions. 
The University of California Curation Center has developed an open source micro-services infrastructure that is being used to manage the diverse digital collections of the ten campus University system and a number of non-university content partners. This paper provides a review of the conceptual design and technical implementation of this micro-services environment, a case study of initial deployment, and a look at ongoing micro-services developments.”
URL : http://journals.tdl.org/jodi/article/view/1605
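The Unix-pipeline metaphor described in this abstract can be sketched in a few lines: each "micro-service" is a small, independent, policy-neutral step, and repository behaviour emerges from chaining them. The service names and fields below (identification, fixity, indexing) are hypothetical illustrations for the metaphor, not the actual UC3 micro-services.

```python
import hashlib

# Each micro-service is a small, independent, composable step that takes
# an object record (a dict) and returns an enriched copy of it.

def identify(obj):
    """Assign a minimal identifier (hypothetical service)."""
    tag = hashlib.md5(obj["path"].encode()).hexdigest()[:8]
    return {**obj, "id": "ark:/" + tag}

def fixity(obj):
    """Compute a checksum for later integrity audits (hypothetical service)."""
    return {**obj, "sha256": hashlib.sha256(obj["content"]).hexdigest()}

def index(obj):
    """Extract simple searchable metadata (hypothetical service)."""
    return {**obj, "size": len(obj["content"])}

def pipeline(objects, *services):
    """Chain independent services over a stream of objects, Unix-pipe style."""
    for obj in objects:
        for service in services:
            obj = service(obj)
        yield obj

# A "repository" is then just a chosen combination of services.
for record in pipeline(
    [{"path": "thesis.pdf", "content": b"%PDF-1.4 ..."}],
    identify, fixity, index,
):
    print(record["id"], record["sha256"][:8], record["size"])
```

As in a shell pipeline, any service can be swapped, reordered, or omitted without touching the others, which is the flexibility property the paper argues for.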
For the past ten years, more and more libraries have been choosing a Free and Open Source Integrated Library System (ILS). Their evolution is very fast, and collaborative development fosters the emergence of innovative features for libraries and their users.
The innovation lies as much in the unique features of free ILSs as in the possibility of developing an ILS in collaboration with other libraries. Establishing a collaborative structure enables the rapid development of innovations.
We present our observations on the innovative aspects of Free and Open Source ILSs, especially Koha. Libraries should take advantage of the opportunities for innovation offered by Free and Open Source ILSs.
URL : http://eprints.rclis.org/15396/
How to choose a free and open source integrated library system :
“Purpose : This paper seeks to present the results of an analysis of 20 free and open source ILS platforms offered to the library community. These software platforms were subjected to a three-step analysis, whereby the results aim to assist librarians and decision makers in selecting an open source ILS, based on objective criteria.
Design/methodology/approach : The methodology applied involves three broad steps. The first step consists of evaluating all the available ILSs and keeping only those that qualify as truly open source or freely-licensed software. During this step, the correlation between the practices within the community and the terms associated with the free or open software license was measured. The second step involves evaluating the community behind each open source or free ILS project, according to a set of 40 criteria in order to determine the attractiveness and sustainability of each project. The third step entails subjecting the remaining ILSs to an analysis of almost 800 functions and features to determine which ILSs are most suited to the needs of libraries. The final score is used to identify strengths, weaknesses and differentiating or similar features of each ILS.
Findings : More than 20 open source ILSs were submitted to this methodology, but only three passed all the steps: Evergreen, Koha, and PMB. The main goal is not to identify the best open source ILS, but rather to highlight from which, of the batch of dozens of open source ILSs, librarians and decision makers can choose without worrying about how perennial or sustainable each open or free project is, as well as understanding which ILS provides them with the functionalities to meet the needs of their institutions.
Practical implications : This paper offers a basic model so that librarians and decision makers can make their own analysis and adapt it to the needs of their libraries.
Originality/value : This methodology meets the best practices in technology selection, with a multiple criteria decision analysis. It can also be easily adapted to the needs of all libraries.”
URL : http://eprints.rclis.org/handle/10760/15387
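The three-step funnel this abstract describes — filter on licence, then on community sustainability, then rank by functional coverage — can be sketched roughly as below. The candidate data, the community threshold, and the scores are invented for illustration; the paper's actual 40 community criteria and nearly 800 functional criteria are far more detailed.

```python
# Illustrative three-step ILS selection funnel (invented scores and
# threshold, loosely following the methodology the abstract describes).

candidates = {
    "Koha":      {"open_licence": True,  "community": 0.9, "functions": 0.85},
    "Evergreen": {"open_licence": True,  "community": 0.8, "functions": 0.80},
    "PMB":       {"open_licence": True,  "community": 0.7, "functions": 0.75},
    "ClosedILS": {"open_licence": False, "community": 0.9, "functions": 0.95},
}

def select_ils(candidates, community_floor=0.6):
    # Step 1: keep only truly open source / freely licensed projects.
    pool = {n: c for n, c in candidates.items() if c["open_licence"]}
    # Step 2: keep projects whose community looks sustainable and attractive.
    pool = {n: c for n, c in pool.items() if c["community"] >= community_floor}
    # Step 3: rank the survivors by functional coverage score.
    return sorted(pool, key=lambda n: pool[n]["functions"], reverse=True)

print(select_ils(candidates))  # → ['Koha', 'Evergreen', 'PMB']
```

Note how the proprietary candidate is eliminated at step 1 despite the highest functional score — which mirrors the paper's point that the goal is not to crown a single "best" ILS but to surface the sustainable open options libraries can safely choose among.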
The Dataverse Network®: An Open-Source Application for Sharing, Discovering and Preserving Data :
“The Dataverse Network is an open-source application for publishing, referencing, extracting and analyzing research data. The main goal of the Dataverse Network is to solve the problems of data sharing through building technologies that enable institutions to reduce the burden for researchers and data publishers, and incentivize them to share their data. By installing Dataverse Network software, an institution is able to host multiple individual virtual archives, called “dataverses” for scholars, research groups, or journals, providing a data publication framework that supports author recognition, persistent citation, data discovery and preservation. Dataverses require no hardware or software costs, nor maintenance or backups by the data owner, but still enable all web visibility and credit to devolve to the data owner.”
URL : http://www.dlib.org/dlib/january11/crosas/01crosas.html
Trends in Large-Scale Subject Repositories :
“Noting a lack of broad empirical studies on subject repositories, the authors investigate subject repository trends that reveal common practices despite their apparent isolated development. Data collected on year founded, subjects, software, content types, deposit policy, copyright policy, host, funding, and governance are analyzed for the top ten most-populated subject repositories. Among them, several trends exist such as a multi- and interdisciplinary scope, strong representation in the sciences and social sciences, use of open source repository software for newer repositories, acceptance of pre- and post-prints, moderated deposits, submitter responsibility for copyright, university library or departmental hosting, and discouraged withdrawal of materials. In addition, there is a loose correlation between repository size and age. Recognizing the diversity of all subject repositories, the authors recommend that tools for assessment and evaluation be developed to guide subject repository management to best serve their respective communities.”
URL : http://www.dlib.org/dlib/november10/adamick/11adamick.html