Improving Open Access Week Events Through Existing Partnerships :

“Oregon State University (OSU) Libraries participated in Open Access (OA) Week in 2009 and 2010. In order to expand the range of events offered, the committee members assigned to program planning looked for opportunities to work with partners beyond the library. The collaborative activities developed through these partnerships created settings for in-depth conversations among librarians, faculty, and students about scholarly communication issues. Subject librarians’ relationships with their departments provided opportunities to host events in venues other than the library, which helped facilitate access to a diverse audience. An established cooperative relationship with the University of Oregon made it possible to provide additional presentations to the OSU community. An evaluation of the quantity and quality of contacts made during OA Week suggests the collaborative activities enriched these outreach activities and that participation in OA Week is worthwhile for OSU Libraries to continue.”

URL : http://collaborativelibrarianship.org/index.php/jocl/article/view/150

Willingness to Share Research Data Is Related to the Strength of the Evidence and the Quality of Reporting of Statistical Results :

“Background : The widespread reluctance to share published research data is often hypothesized to be due to the authors’ fear that reanalysis may expose errors in their work or may produce conclusions that contradict their own. However, these hypotheses have not previously been studied systematically.

Methods and Findings : We related the reluctance to share research data for reanalysis to 1148 statistically significant results reported in 49 papers published in two major psychology journals. We found the reluctance to share data to be associated with weaker evidence (against the null hypothesis of no effect) and a higher prevalence of apparent errors in the reporting of statistical results. The unwillingness to share data was particularly clear when reporting errors had a bearing on statistical significance.

Conclusions : Our findings on the basis of psychological papers suggest that statistical results are particularly hard to verify when reanalysis is more likely to lead to contrasting conclusions. This highlights the importance of establishing mandatory data archiving policies.”

URL : http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0026828
doi:10.1371/journal.pone.0026828

Achieving rigor and relevance in online multimedia scholarly publishing :

“This paper discusses the importance of relevance and rigor in scholarly publishing in a new media–rich world. We defend that scholarship should be useful and engaging to audiences through the use of new media, and at the same time scholarly publishers must develop and maintain methods of ensuring content accuracy and providing quality controls in the production of scholarly multimedia products. We review examples and a case study of existing scholarly publishing venues that attempt to maintain quality control standards while embracing innovative multimedia formats. We also present lessons learned from the case experience and challenges that face us in the scholarly publication of multimedia.”

URL : http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/3762/3119

Understanding collaboration in Wikipedia :

“Wikipedia stands as an undeniable success in online participation and collaboration. However, previous attempts at studying collaboration within Wikipedia have focused on simple metrics like rigor (i.e., the number of revisions in an article’s revision history) and diversity (i.e., the number of authors that have contributed to a given article) or have made generalizations about collaboration within Wikipedia based upon the content validity of a few select articles. By looking more closely at metrics associated with each extant Wikipedia article (N=3,427,236) along with all revisions (N=225,226,370), this study attempts to understand what collaboration within Wikipedia actually looks like under the surface. Findings suggest that typical Wikipedia articles are not rigorous, in a collaborative sense, and do not reflect much diversity in the construction of content and macro–structural writing, leading to the conclusion that most articles in Wikipedia are not reflective of the collaborative efforts of the community but, rather, represent the work of relatively few contributors.”

URL : http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/3613/3117

A Study of Innovative Features in Scholarly Open Access Journals :

“Background: The emergence of the Internet has triggered tremendous changes in the publication of scientific peer-reviewed journals. Today, journals are usually available in parallel electronic versions, but the way the peer-review process works, the look of articles and journals, and the rigid and slow publication schedules have remained largely unchanged, at least for the vast majority of subscription-based journals. Those publishing firms and scholarly publishers who have chosen the more radical option of open access (OA), in which the content of journals is freely accessible to anybody with Internet connectivity, have had a much bigger degree of freedom to experiment with innovations.

Objective: The objective was to study how open access journals have experimented with innovations concerning ways of organizing the peer review, the format of journals and articles, new interactive and media formats, and novel publishing revenue models.

Methods: The features of 24 open access journals were studied. The journals were chosen in a nonrandom manner from the approximately 7000 existing OA journals based on available information about interesting journals and include both representative cases and highly innovative outlier cases.

Results: Most early OA journals in the 1990s were founded by individual scholars and used a business model based on voluntary work close in spirit to open-source development of software. In the next wave, many long-established journals, in particular society journals and journals from regions such as Latin America, made their articles OA when they started publishing parallel electronic versions. From about 2002 on, newly founded professional OA publishing firms using article-processing charges to fund their operations have emerged. Over the years, there have been several experiments with new forms of peer review, media enhancements, and the inclusion of structured data sets with articles. In recent years, the growth of OA publishing has also been facilitated by the availability of open-source software for journal publishing.

Conclusions: The case studies illustrate how a new technology and a business model enabled by new technology can be harnessed to find new innovative ways for the organization and content of scholarly publishing. Several recent launches of OA journals by major subscription publishers demonstrate that OA is rapidly gaining acceptance as a sustainable alternative to subscription-based scholarly publishing.”

URL : http://www.jmir.org/2011/4/e115/

Longitudinal Trends in the Performance of Scientific Peer Reviewers :

“Study objective : We characterize changes in review quality by individual peer reviewers over time.

Methods : Editors at a specialty journal in the top 11% of Institute of Scientific Information journals rated the quality of every review, using a validated 5-point quality score. Linear mixed-effect models were used to analyze rating changes over time, calculating within-reviewer trends plus predicted slope of change in score for each reviewer. Reviewers at this journal have been shown comparable to those at other journals.

Results : Reviews (14,808) were performed by 1,499 reviewers and rated by 84 editors during the 14-year study. Ninety-two percent of reviewers demonstrated very slow but steady deterioration in their scores (mean –0.04 points [–0.8%] per year). Rate of deterioration was unrelated to duration of reviewing but moderately correlated with mean reviewer quality score (R=0.52). The mean score of each reviewer’s first 4 reviews predicted subsequent performance with a sensitivity of 75% and specificity of 47%. Scores of the group stayed constant over time despite deterioration because newly recruited reviewers initially had higher mean quality scores than their predecessors.

Conclusion : This study, one of few tracking expert performance longitudinally, demonstrates that most journal peer reviewers received lower quality scores for article assessment over the years. This could be due to deteriorating performance (caused by either cognitive changes or competing priorities) or, to a partial degree, escalating expectations; other explanations were ruled out. This makes monitoring reviewer quality even more crucial to maintain the mission of scientific journals.”

URL : http://www.annemergmed.com/article/S0196-0644(10)01266-7/fulltext
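As an aside on the screening statistics quoted in this abstract (using a reviewer’s first four reviews to predict later performance), here is a minimal sketch of how sensitivity and specificity are computed. The counts below are invented for illustration and are not taken from the study.

```python
# Hypothetical illustration of sensitivity/specificity for a screen
# that flags reviewers as likely future low performers based on the
# mean quality score of their first 4 reviews.

def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Invented counts: of 100 reviewers who later perform poorly, the
# screen flags 75 (true positives); of 100 who perform well, it
# correctly clears 47 (true negatives).
sens, spec = sensitivity_specificity(tp=75, fn=25, tn=47, fp=53)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}")
# → sensitivity=75%, specificity=47%
```

A screen with 47% specificity clears fewer than half of the reviewers who would in fact perform well, which is why the abstract’s conclusion stresses ongoing monitoring rather than one-time selection.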

Toward a new model of scientific publishing: discussion and a proposal :

“The current system of publishing in the biological sciences is notable for its redundancy, inconsistency, sluggishness, and opacity. These problems persist, and grow worse, because the peer review system remains focused on deciding whether or not to publish a paper in a particular journal rather than providing (1) a high-quality evaluation of scientific merit and (2) the information necessary to organize and prioritize the literature. Online access has eliminated the need for journals as distribution channels, so their primary current role is to provide authors with feedback prior to publication and a quick way for other researchers to prioritize the literature based on which journal publishes a paper. However, the feedback provided by reviewers is not focused on scientific merit but on whether to publish in a particular journal, which is generally of little use to authors and an opaque and noisy basis for prioritizing the literature. Further, each submission of a rejected manuscript requires the entire machinery of peer review to creak to life anew. This redundancy incurs delays, inconsistency, and increased burdens on authors, reviewers, and editors. Finally, reviewers have no real incentive to review well or quickly, as their performance is not tracked, let alone rewarded. One of the consistent suggestions for modifying the current peer review system is the introduction of some form of post-publication reception, and the development of a marketplace where the priority of a paper rises and falls based on its reception from the field (see other articles in this special topics). However, the information that accompanies a paper into the marketplace is as important as the marketplace’s mechanics. 
Beyond suggestions concerning the mechanisms of reception, we propose an update to the system of publishing in which publication is guaranteed, but pre-publication peer review still occurs, giving the authors the opportunity to revise their work following a mini pre-reception from the field. This step also provides a consistent set of rankings and reviews to the marketplace, allowing for early prioritization and stabilizing its early dynamics. We further propose to improve the general quality of reviewing by providing tangible rewards to those who do it well.”
URL : http://www.frontiersin.org/computational_neuroscience/10.3389/fncom.2011.00055/full