Journal Article Publishing: The Review Process, Ethics, Publishing Contracts, Open Access, and the Kitchen Sink

Journal Article Publishing: The Review Process, Ethics, Publishing Contracts, Open Access, and the Kitchen Sink from APECS Webinars on Vimeo.

“Submitting work to peer-reviewed journals is a daunting prospect for many young scientists. In this webinar, Caroline Sutton (co-founder of Co-Action Publishing) and Helle Goldman (Chief Editor of the journal Polar Research) demystify the process by explaining what happens to a manuscript after it’s submitted, focussing on how submissions are evaluated. This webinar introduces a range of topics connected to journal article publishing, including single-blind versus double-blind review, tips for authors submitting manuscripts, ethical issues (plagiarism, salami slicing, duplicate publication), understanding the fine print in publishers’ contracts, open access publishing and how authors benefit from it.”

Open peer review by a selected-papers network :

“A selected-papers (SP) network is a network in which researchers who read, write, and review articles subscribe to each other based on common interests. Instead of reviewing a manuscript in secret for the Editor of a journal, each reviewer simply publishes his review (typically of a paper he wishes to recommend) to his SP network subscribers. Once the SP network reviewers complete their review decisions, the authors can invite any journal editor they want to consider these reviews and initial audience size, and make a publication decision. Since all impact assessment, reviews, and revisions are complete, this decision process should be short. I show how the SP network can provide a new way of measuring impact, catalyze the emergence of new subfields, and accelerate discovery in existing fields, by providing each reader a fine-grained filter for high-impact. I present a three-phase plan for building a basic SP network, and making it an effective peer review platform that can be used by journals, conferences, users of repositories such as arXiv, and users of search engines such as PubMed. I show how the SP network can greatly improve review and dissemination of research articles in areas that are not well-supported by existing journals. Finally, I illustrate how the SP network concept can work well with existing publication services such as journals, conferences, arXiv, PubMed, and online citation management sites.”

URL : http://www.frontiersin.org/Journal/FullText.aspx?ART_DOI=10.3389/fncom.2012.00001&name=Computational_Neuroscience
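
As a rough illustration of the subscribe-and-review flow described in the abstract, here is a toy model. The class, method names, and data are all invented for illustration; the actual proposal is far richer (impact metrics, revision rounds, editor workflows):

```python
# Toy sketch of a selected-papers (SP) network: reviewers subscribe to each
# other, and a review is published openly to the reviewer's subscribers
# instead of being sent privately to a journal editor.
from collections import defaultdict

class SPNetwork:
    def __init__(self):
        self.subscribers = defaultdict(set)   # reviewer -> readers who follow them
        self.reviews = defaultdict(list)      # paper -> published (reviewer, text)

    def subscribe(self, reader, reviewer):
        self.subscribers[reviewer].add(reader)

    def publish_review(self, reviewer, paper, text):
        """Open review: recorded for the whole network, pushed to subscribers."""
        self.reviews[paper].append((reviewer, text))
        return self.subscribers[reviewer]     # the review's initial audience

    def audience_size(self, paper):
        # One rough "initial audience" signal an editor could consult
        # before making a publication decision.
        readers = set()
        for reviewer, _ in self.reviews[paper]:
            readers |= self.subscribers[reviewer]
        return len(readers)

net = SPNetwork()
net.subscribe("alice", "bob")
net.subscribe("carol", "bob")
net.publish_review("bob", "paper-1", "Recommended: solid methods.")
```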

Can Tweets Predict Citations? Metrics of Social Impact Based on Twitter and Correlation with Traditional Metrics of Scientific Impact

Background

Citations in peer-reviewed articles and the impact factor are generally accepted measures of scientific impact. Web 2.0 tools such as Twitter, blogs, or social bookmarking tools provide the possibility to construct innovative article-level or journal-level metrics to gauge impact and influence. However, the relationship of these new metrics to traditional metrics such as citations is not known.

Objective

(1) To explore the feasibility of measuring social impact of and public attention to scholarly articles by analyzing buzz in social media, (2) to explore the dynamics, content, and timing of tweets relative to the publication of a scholarly article, and (3) to explore whether these metrics are sensitive and specific enough to predict highly cited articles.

Methods

Between July 2008 and November 2011, all tweets containing links to articles in the Journal of Medical Internet Research (JMIR) were mined. For a subset of 1573 tweets about 55 articles published between issues 3/2009 and 2/2010, different metrics of social media impact were calculated and compared against subsequent citation data from Scopus and Google Scholar 17 to 29 months later. A heuristic to predict the top-cited articles in each issue through tweet metrics was validated.

Results

A total of 4208 tweets cited 286 distinct JMIR articles. The distribution of tweets over the first 30 days after article publication followed a power law (Zipf, Bradford, or Pareto distribution), with most tweets sent on the day when an article was published (1458/3318, 43.94% of all tweets in a 60-day period) or on the following day (528/3318, 15.9%), followed by a rapid decay. The Pearson correlations between tweetations and citations were moderate and statistically significant, with correlation coefficients ranging from .42 to .72 for the log-transformed Google Scholar citations, but were less clear for Scopus citations and rank correlations. A linear multivariate model with time and tweets as significant predictors (P < .001) could explain 27% of the variation of citations. Highly tweeted articles were 11 times more likely to be highly cited than less-tweeted articles (9/12 or 75% of highly tweeted articles were highly cited, while only 3/43 or 7% of less-tweeted articles were highly cited; rate ratio 0.75/0.07 = 10.75, 95% confidence interval, 3.4–33.6). Top-cited articles can be predicted from top-tweeted articles with 93% specificity and 75% sensitivity.
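
The 2×2 breakdown reported above can be checked with a few lines of arithmetic. The counts are taken from the abstract; the split into 12 highly tweeted and 43 less-tweeted articles follows from the reported fractions:

```python
# 55 articles from the study subset, classified two ways:
tp, fp = 9, 3    # highly tweeted: 9 highly cited, 3 not
fn, tn = 3, 40   # less tweeted:   3 highly cited, 40 not

# Rate ratio: P(highly cited | highly tweeted) / P(highly cited | less tweeted)
rate_ratio = (tp / (tp + fp)) / (fn / (fn + tn))   # 0.75 / (3/43) = 10.75

# Using "highly tweeted" as a predictor of "highly cited":
sensitivity = tp / (tp + fn)   # 9/12  = 75%
specificity = tn / (tn + fp)   # 40/43 ≈ 93%
```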

Conclusions

Tweets can predict highly cited articles within the first 3 days of article publication. Social media activity either increases citations or reflects the underlying qualities of the article that also predict citations, but the true use of these metrics is to measure the distinct concept of social impact. Social impact measures based on tweets are proposed to complement traditional citation metrics. The proposed twimpact factor may be a useful and timely metric to measure uptake of research findings and to filter research findings resonating with the public in real time.

URL : http://www.jmir.org/2011/4/e123/

Longitudinal Trends in the Performance of Scientific Peer Reviewers :

“Study objective : We characterize changes in review quality by individual peer reviewers over time.

Methods : Editors at a specialty journal in the top 11% of Institute for Scientific Information journals rated the quality of every review, using a validated 5-point quality score. Linear mixed-effect models were used to analyze rating changes over time, calculating within-reviewer trends plus predicted slope of change in score for each reviewer. Reviewers at this journal have been shown to be comparable to those at other journals.

Results : Reviews (14,808) were performed by 1,499 reviewers and rated by 84 editors during the 14-year study. Ninety-two percent of reviewers demonstrated very slow but steady deterioration in their scores (mean –0.04 points [–0.8%] per year). Rate of deterioration was unrelated to duration of reviewing but moderately correlated with mean reviewer quality score (R=0.52). The mean score of each reviewer’s first 4 reviews predicted subsequent performance with a sensitivity of 75% and specificity of 47%. Scores of the group stayed constant over time despite deterioration because newly recruited reviewers initially had higher mean quality scores than their predecessors.
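
As a simplified sketch of the within-reviewer trend analysis: the study used linear mixed-effects models, but the core idea can be illustrated by fitting an ordinary least-squares slope per reviewer. The reviewer names and ratings below are invented for illustration only:

```python
# Fit a least-squares slope (quality score vs. years since first review)
# for each reviewer; a negative mean slope corresponds to the slow decline
# the study reports (about -0.04 points per year).

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# years since first review -> editor quality ratings on the 1-5 scale
reviewers = {
    "A": ([0, 2, 5, 9, 12], [4.0, 4.0, 3.8, 3.6, 3.5]),
    "B": ([0, 3, 7, 11],    [3.5, 3.4, 3.3, 3.1]),
}
slopes = {name: ols_slope(xs, ys) for name, (xs, ys) in reviewers.items()}
mean_slope = sum(slopes.values()) / len(slopes)   # slightly negative here
```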

Conclusion : This study, one of few tracking expert performance longitudinally, demonstrates that most journal peer reviewers received lower quality scores for article assessment over the years. This could be due to deteriorating performance (caused by either cognitive changes or competing priorities) or, to a partial degree, escalating expectations; other explanations were ruled out. This makes monitoring reviewer quality even more crucial to maintain the mission of scientific journals.”

URL : http://www.annemergmed.com/article/S0196-0644(10)01266-7/fulltext

Toward a new model of scientific publishing: discussion and a proposal :

“The current system of publishing in the biological sciences is notable for its redundancy, inconsistency, sluggishness, and opacity. These problems persist, and grow worse, because the peer review system remains focused on deciding whether or not to publish a paper in a particular journal rather than providing (1) a high-quality evaluation of scientific merit and (2) the information necessary to organize and prioritize the literature. Online access has eliminated the need for journals as distribution channels, so their primary current role is to provide authors with feedback prior to publication and a quick way for other researchers to prioritize the literature based on which journal publishes a paper. However, the feedback provided by reviewers is not focused on scientific merit but on whether to publish in a particular journal, which is generally of little use to authors and an opaque and noisy basis for prioritizing the literature. Further, each submission of a rejected manuscript requires the entire machinery of peer review to creak to life anew. This redundancy incurs delays, inconsistency, and increased burdens on authors, reviewers, and editors. Finally, reviewers have no real incentive to review well or quickly, as their performance is not tracked, let alone rewarded. One of the consistent suggestions for modifying the current peer review system is the introduction of some form of post-publication reception, and the development of a marketplace where the priority of a paper rises and falls based on its reception from the field (see other articles in this special topics). However, the information that accompanies a paper into the marketplace is as important as the marketplace’s mechanics. 
Beyond suggestions concerning the mechanisms of reception, we propose an update to the system of publishing in which publication is guaranteed, but pre-publication peer review still occurs, giving the authors the opportunity to revise their work following a mini pre-reception from the field. This step also provides a consistent set of rankings and reviews to the marketplace, allowing for early prioritization and stabilizing its early dynamics. We further propose to improve the general quality of reviewing by providing tangible rewards to those who do it well.”

URL : http://www.frontiersin.org/computational_neuroscience/10.3389/fncom.2011.00055/full

Citation and Peer Review of Data: Moving Towards Formal Data Publication

“This paper discusses many of the issues associated with formally publishing data in academia, focusing primarily on the structures that need to be put in place for peer review and formal citation of datasets. Data publication is becoming increasingly important to the scientific community, as it will provide a mechanism for those who create data to receive academic credit for their work and will allow the conclusions arising from an analysis to be more readily verifiable, thus promoting transparency in the scientific process. Peer review of data will also provide a mechanism for ensuring the quality of datasets, and we provide suggestions on the types of activities one expects to see in the peer review of data. A simple taxonomy of data publication methodologies is presented and evaluated, and the paper concludes with a discussion of dataset granularity, transience and semantics, along with a recommended human-readable citation syntax.”

URL : http://www.ijdc.net/index.php/ijdc/article/view/181

Wikis in scholarly publishing :

“Scientific research is a process concerned with the creation, collective accumulation, contextualization, updating and maintenance of knowledge. Wikis provide an environment that allows users to collectively accumulate, contextualize, update and maintain knowledge in a coherent and transparent fashion. Here, we examine the potential of wikis as platforms for scholarly publishing. In the hope to stimulate further discussion, the article itself was drafted on Species ID – http://species-id.net; a wiki that hosts a prototype for wiki-based scholarly publishing – where it can be updated, expanded or otherwise improved.”

URL : http://iospress.metapress.com/content/q42617538838t6j2/