The Evolution, Approval and Implementation of the U.S. Geological Survey Science Data Lifecycle Model

Authors : John L. Faundeen, Vivian B. Hutchison

This paper details how the U.S. Geological Survey (USGS) Community for Data Integration (CDI) Data Management Working Group developed a Science Data Lifecycle Model, and the role the Model plays in shaping agency-wide policies and data management applications.

After an extensive literature review of existing data lifecycle models, representatives from various backgrounds across the USGS attended a two-day meeting where the basic elements of the Science Data Lifecycle Model were determined.

Refinements and reviews spanned two years, leading to finalization of the Model and its documentation in a formal agency publication.

The Model serves as a critical framework for data management policy, instructional resources, and tools. It helps the USGS address both the Office of Science and Technology Policy (OSTP) memorandum on increased public access to federally funded research and the Office of Management and Budget (OMB) 2013 Open Data directives, serving as the foundation for a series of agency policies related to data management planning, metadata development, data release procedures, and the long-term preservation of data.

Additionally, the agency website devoted to data management instruction and best practices (www2.usgs.gov/datamanagement) is designed around the Model’s structure and concepts. This paper also illustrates how the Model is being used to develop tools for supporting USGS research and data management processes.

URL : http://escholarship.umassmed.edu/jeslib/vol6/iss2/4/

 

Does Peer Review Identify the Best Papers? A Simulation Study of Editors, Reviewers, and the Scientific Publication Process

Author : Justin Esarey

How does the structure of the peer review process, which can vary among journals, influence the quality of papers published in a journal? This article studies multiple systems of peer review using computational simulation. I find that, under any of the systems I study, a majority of accepted papers are evaluated by an average reader as not meeting the standards of the journal.

Moreover, all systems allow random chance to play a strong role in the acceptance decision. Heterogeneous reviewer and reader standards for scientific quality drive both results. A peer review system with an active editor—that is, one who uses desk rejection before review and does not rely strictly on reviewer votes to make decisions—can mitigate some of these effects.

DOI : https://doi.org/10.1017/S1049096517001081

Open Innovation and Large French Companies: From the Urgency of Appropriation to the Opportunity of Transformation: the Case of EDF as a Lens of Study

Author : Céline Repoux

Acceleration permeates every aspect of the social sphere: consumption, transport, leisure, discourse… Everything is a pretext for going faster in order to optimize expected outcomes. With the economic crisis, large companies are affected as well: to remain competitive in an ever more crowded marketplace, they are constantly urged to "innovate".

However, the long timescale of Research & Development, the unit that traditionally manages innovation in these large companies, is not that of the fast-moving new market taking shape, which benefits start-ups, their new direct competitors. How should this upheaval be understood?

This work attempts to show how New Information and Communication Technologies (NICT) have transformed society's relationship to time, and how that transformation plays out in the way large companies manage innovation, in favor of a practice known as "Open Innovation".

A closer study of the EDF case, supported by analysis of material from several other large French companies and start-ups, allows us to examine this phenomenon. Recalling the definitions commonly given to "innovation", we first see how NICT are closely tied to this notion and how their combination creates an "urgency to innovate" among large companies.

We then see how the imaginaries associated with these NICT, internalized by individuals, transform the way large companies actually manage innovation, creating tension between the "openness" and "management" dimensions of Open Innovation.

A final stage of the analysis shows how this paradigm shift affects even the organization of the company itself, to the point of driving its own transformation.

URL : https://dumas.ccsd.cnrs.fr/dumas-01559266

Make researchers revisit past publications to improve reproducibility

Authors : Clare Fiala, Eleftherios P. Diamandis

Scientific irreproducibility is a major issue that has recently attracted increased attention from publishers, authors, funders and other players in the scientific arena. Published literature suggests that 50-80% of all science performed is irreproducible. While various solutions to this problem have been proposed, none of them are quick and/or cheap.

Here, we propose one way of reducing scientific irreproducibility by asking authors to revisit their previous publications and provide a commentary after five years. We believe that this measure will encourage authors not to oversell their results and will promote better planning and execution of their experiments.

We invite scientific journals to adopt this proposal immediately as a prerequisite for publishing.


DOI : http://dx.doi.org/10.12688/f1000research.12715.1

 

Incidence of predatory journals in computer science literature

Authors : Simona Ibba, Filippo Eros Pani, John Gregory Stockton, Giulio Barabino, Michele Marchesi, Danilo Tigano

Purpose

One of the main tasks of a researcher is to properly communicate the results they obtain. The choice of the journal in which to publish the work is therefore very important. However, not all journals have characteristics suitable for the correct dissemination of scientific knowledge.

Some publishers turn out to be unreliable and, for a fee, will publish whatever researchers submit. The authors call these untrustworthy journals "predatory journals".

The purpose of this paper is to analyse the incidence of predatory journals in computer science literature and present a tool that was developed for this purpose.

Design/methodology/approach

The authors focused their attention on editors, universities and publishers that are involved in this kind of publishing process. The starting point of their research is the list of scholarly open-access publishers and open-access stand-alone journals created by Jeffrey Beall.

Specifically, they analysed the presence of predatory journals in the search results obtained from Google Scholar in the engineering and computer science fields. They also studied how this incidence changed over time for articles published between 2011 and 2015.

Findings

The analysis shows that the phenomenon of predatory journals declined somewhat in 2015, probably owing to greater awareness of the risks they pose to authors' reputations.

Originality/value

The authors focused on the computer science field, using a specific sample of queries. They developed a software tool to automatically submit queries to the search engine and detect predatory journals using Beall's list.


DOI : https://doi.org/10.1108/LR-12-2016-0108