Student perceptions of writing with Wikipedia in Australian higher education

Authors : Robert Cummings, Frances DiLauro

The benefits of teaching with Wikipedia in higher education have been investigated for more than a decade and practitioners have claimed a fairly uniform set of outcomes. Although Wikipedia is a global knowledge platform, many studies of the benefits of teaching with Wikipedia have been conducted in U.S. higher education institutions.

The authors taught with Wikipedia in writing classes at the University of Sydney, Australia, surveying and interviewing students to both verify the traditional benefits of teaching with Wikipedia and investigate a new set of perceived benefits.

This study finds evidence that students who worked with Wikipedia in the writing classroom remained neutral in their opinions as to the legitimacy of information on Wikipedia and skeptical as to its utility in mastering writing course outcomes.

DOI : http://firstmonday.org/ojs/index.php/fm/article/view/7488

Assessing the utility of an institutional publications officer: a pilot assessment

Authors : Kelly D. Cobey, James Galipeau, Larissa Shamseer, David Moher

Background

The scholarly publication landscape is changing rapidly. We investigated whether the introduction of an institutional publications officer might help facilitate better knowledge of publication topics and related resources, and effectively support researchers to publish.

Methods

In September 2015, a purpose-built survey about researchers’ knowledge and perceptions of publication practices was administered at five Ottawa area research institutions. Subsequently, we publicly announced a newly hired publications officer (KDC) who then began conducting outreach at two of the institutions.

Specifically, the publications officer gave presentations, held one-to-one consultations, developed electronic newsletter content, and generated and maintained a webpage of resources. In March 2016, we re-surveyed our participants regarding their knowledge and perceptions of publishing.

Mean scores for the perception questions, and the percentage of correct responses to the knowledge questions, were computed for each item in the pre- and post-surveys. The difference between these means or percentages was then examined across the survey measures.

Results

Eighty-two participants completed both surveys. Of this group, 29 indicated that they had been exposed to the publications officer, while the remaining 53 indicated they had not. Interaction with the publications officer led to improvements in half of the knowledge items (7/14 variables).

While improvements in publishing knowledge were also found among those who reported no interaction with the publications officer (9/14), these effects were often smaller in magnitude. Scores for some publication knowledge variables actually decreased between the pre- and post-surveys (3/14).

Effects for researchers’ perceptions of publishing increased for 5/6 variables in the group that interacted with the publications officer.

Discussion

This pilot provides initial indication that, in a short timeframe, introducing an institutional publications officer may improve knowledge and perceptions surrounding publishing.

This study is limited by its modest sample size and by the merely temporal relationship between the introduction of the publications officer and the observed changes in knowledge and perceptions. A randomized trial examining the effectiveness of the publications officer as an intervention is needed.

URL : Assessing the utility of an institutional publications officer: a pilot assessment

DOI : https://doi.org/10.7717/peerj.3294

A Trust Framework for Online Research Data Services

Authors : Malcolm Wolski, Louise Howard, Joanna Richardson

There is worldwide interest in the potential of open science to increase the quality, impact, and benefits of science and research. More recently, attention has been focused on aspects such as transparency, quality, and provenance, particularly in regard to data.

For industry, citizens, and other researchers to participate in the open science agenda, further work needs to be undertaken to establish trust in research environments.

Based on a critical review of the literature, this paper examines the issue of trust in an open science environment, using virtual laboratories as the focus for discussion. A trust framework, which has been developed from an end-user perspective, is proposed as a model for addressing relevant issues within online research data services and tools.

URL : A Trust Framework for Online Research Data Services

DOI : http://dx.doi.org/10.3390/publications5020014

On the “persistency” of scientific publications: introducing an h-index for journals

Author : Roberto Piazza

What do we really mean by a “good” scientific journal? Do we care more about the short-term impact of our papers, or about the chance that they will still be read and cited in the long run?

Here I show that, by regarding a journal as a “virtual scientist” that can be attributed a time-dependent Hirsch h-index, we can introduce a parameter that, arguably, better captures the “persistency” of a scientific publication. Curiously, however, this parameter seems to depend above all on the “thickness” of a journal.

URL : https://arxiv.org/abs/1705.09390

A Bibliometric study of Directory of Open Access Journals: Special reference to Microbiology

Author : K S Savita

The aim of the present study is to determine the number of free e-journals in the field of Microbiology available in DOAJ.

For this study the author adopted a bibliometric method and analyzed the journals on the basis of country-wise, language-wise, and subject-heading-wise distribution.

URL : A Bibliometric study of Directory of Open Access Journals: Special reference to Microbiology

Alternative location : http://ijidt.com/index.php/ijidt/article/view/466

Towards an Ethical Framework for Publishing Twitter Data in Social Research: Taking into Account Users’ Views, Online Context and Algorithmic Estimation

Authors : Matthew L Williams, Pete Burnap, Luke Sloan

New and emerging forms of data, including posts harvested from social media sites such as Twitter, have become part of the sociologist’s data diet. In particular, some researchers see an advantage in the perceived ‘public’ nature of Twitter posts, representing them in publications without seeking informed consent.

While such practice may not be at odds with Twitter’s terms of service, we argue there is a need to interpret these through the lens of social science research methods that imply a more reflexive ethical approach than provided in ‘legal’ accounts of the permissible use of these data in research publications.

To challenge some existing practice in Twitter-based research, this article brings to the fore: (1) views of Twitter users through analysis of online survey data; (2) the effect of context collapse and online disinhibition on the behaviours of users; and (3) the publication of identifiable sensitive classifications derived from algorithms.

URL : Towards an Ethical Framework for Publishing Twitter Data in Social Research: Taking into Account Users’ Views, Online Context and Algorithmic Estimation

DOI : http://dx.doi.org/10.1177/0038038517708140

Replicability and Reproducibility in Comparative Psychology

Author : Jeffrey R. Stevens

Psychology faces a replication crisis. The Reproducibility Project: Psychology sought to replicate the effects of 100 psychology studies. Though 97% of the original studies produced statistically significant results, only 36% of the replication studies did so (Open Science Collaboration, 2015).

This inability to replicate previously published results, however, is not limited to psychology (Ioannidis, 2005). Replication projects in medicine (Prinz et al., 2011) and behavioral economics (Camerer et al., 2016) resulted in replication rates of 25 and 61%, respectively, and analyses in genetics (Munafò, 2009) and neuroscience (Button et al., 2013) question the validity of studies in those fields. Science, in general, is reckoning with challenges in one of its basic tenets: replication.

Comparative psychology also faces the grand challenge of producing replicable research. Though social psychology has borne the brunt of the critique regarding failed replications, comparative psychology suffers from some of the same problems faced by social psychology (e.g., small sample sizes).

Yet, comparative psychology follows the methods of cognitive psychology by often using within-subjects designs, which may buffer it from replicability problems (Open Science Collaboration, 2015). In this Grand Challenge article, I explore the shared and unique challenges of and potential solutions for replication and reproducibility in comparative psychology.

URL : Replicability and Reproducibility in Comparative Psychology

Alternative location : http://journal.frontiersin.org/article/10.3389/fpsyg.2017.00862/full