Peer2ref: a peer-reviewer finding web tool that uses author disambiguation

Background: Reviewer and editor selection for peer review is becoming harder for authors and publishers because the progressive growth of the body of knowledge drives specialization into ever-narrower areas of research. Examining the literature helps identify appropriate reviewers, but it is time-consuming and complicated by author name ambiguities.

Results: We have developed a method called peer2ref to support authors and editors in selecting suitable reviewers for scientific manuscripts. Peer2ref works from a text input, usually the manuscript abstract, from which important concepts are extracted as keywords using a fuzzy binary relations approach. The keywords are searched against indexed word profiles built from the bibliography attributed to each author in MEDLINE. These scientists' names have been disambiguated beforehand using coauthors identified across the whole of MEDLINE. The methods have been implemented in a web server that automatically suggests experts for peer review from among scientists who have authored manuscripts published during the last decade in more than 3,800 journals indexed in MEDLINE.
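
For illustration, the profile-matching step described above can be approximated in a few lines of Python. This is only a minimal sketch, not the peer2ref implementation: TF-IDF weighting and cosine similarity stand in for the paper's fuzzy-binary-relations keyword extraction, and the author profiles, names, and the suggest_reviewers function are hypothetical.

```python
# Minimal sketch of the profile-matching idea described above, not the actual
# peer2ref implementation: each (already disambiguated) candidate reviewer is
# represented by a word profile built from the publications attributed to them,
# and a submitted abstract is ranked against those profiles. TF-IDF plus cosine
# similarity stands in for the paper's fuzzy-binary-relations keyword step.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical author profiles: one concatenated text of abstracts per author,
# with homonyms already split by coauthor-based disambiguation.
author_profiles = {
    "Smith J (group A)": "protein folding molecular dynamics simulation energy",
    "Smith J (group B)": "peer review bibliometrics citation analysis journals",
    "Lee K": "author name disambiguation coauthorship networks MEDLINE records",
}

def suggest_reviewers(manuscript_abstract, profiles, top_n=2):
    """Rank candidate reviewers by cosine similarity between the manuscript
    abstract and each author's word profile."""
    names = list(profiles)
    corpus = [profiles[n] for n in names] + [manuscript_abstract]
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(corpus)
    scores = cosine_similarity(tfidf[len(names)], tfidf[:len(names)]).ravel()
    return sorted(zip(names, scores), key=lambda x: x[1], reverse=True)[:top_n]

print(suggest_reviewers(
    "We disambiguate MEDLINE author names using coauthorship networks.",
    author_profiles))
```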

Conclusion: The peer2ref web server is publicly available at http://www.ogic.ca/projects/peer2ref/.

URL: http://www.biodatamining.org/content/5/1/14/abstract

Multi-Stage Open Peer Review: Scientific Evaluation Integrating the Strengths of Traditional Peer Review with the Virtues of Transparency and Self-Regulation

The traditional forms of scientific publishing and peer review do not live up to all demands of efficient communication and quality assurance in today’s highly diverse and rapidly evolving world of science.

They need to be advanced and complemented by interactive and transparent forms of review, publication, and discussion that are open to the scientific community and to the public.

The advantages of open access, public peer review, and interactive discussion can be efficiently and flexibly combined with the strengths of traditional scientific peer review. Since 2001, the benefits and viability of this approach have been clearly demonstrated by the highly successful interactive open access journal Atmospheric Chemistry and Physics (ACP, www.atmos-chem-phys.net) and a growing number of sister journals launched and operated by the European Geosciences Union (EGU, www.egu.eu) and the open access publisher Copernicus (www.copernicus.org).

The interactive open access journals are practicing an integrative multi-stage process of publication and peer review combined with interactive public discussion, which effectively resolves the dilemma between rapid scientific exchange and thorough quality assurance.

Key features and achievements of this approach are: top quality and impact, efficient self-regulation and low rejection rates, high attractiveness and rapid growth, low costs, and financial sustainability.

In fact, ACP and the EGU interactive open access sister journals are by most if not all standards more successful than comparable scientific journals with traditional or alternative forms of peer review (editorial statistics, publication statistics, citation statistics, economic costs, and sustainability).

The high efficiency and predictive validity of multi-stage open peer review have been confirmed in a series of dedicated studies by evaluation experts from the social sciences, and the same or similar concepts have recently also been adopted in other disciplines, including the life sciences and economics.

Multi-stage open peer review can be flexibly adjusted to the needs and peculiarities of different scientific communities. Owing to its flexibility and compatibility with the traditional structures of scientific publishing and peer review, the multi-stage open peer review concept enables efficient evolution in scientific communication and quality assurance.

It has the potential for swift replacement of hidden peer review as the standard of scientific quality assurance, and it provides a basis for open evaluation in science.

URL: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3389610/

Cooperation between Referees and Authors Increases Peer Review Accuracy

“Peer review is fundamentally a cooperative process between scientists in a community who agree to review each other’s work in an unbiased fashion. Peer review is the foundation for decisions concerning publication in journals, awarding of grants, and academic promotion. Here we perform a laboratory study of open and closed peer review based on an online game. We show that when reviewer behavior was made public under open review, reviewers were rewarded for refereeing and formed significantly more cooperative interactions (13% increase in cooperation, P = 0.018). We also show that referees and authors who participated in cooperative interactions had an 11% higher reviewing accuracy rate (P = 0.016). Our results suggest that increasing cooperation in the peer review process can lead to a decreased risk of reviewing errors.”

URL: http://www.plosone.org/article/info:doi/10.1371/journal.pone.0026895

Journal Article Publishing: The Review Process, Ethics, Publishing Contracts, Open Access, and the Kitchen Sink

Webinar video: APECS Webinars on Vimeo.

“Submitting work to peer-reviewed journals is a daunting prospect for many young scientists. In this webinar, Caroline Sutton (co-founder of Co-Action Publishing) and Helle Goldman (Chief Editor of the journal Polar Research) demystify the process by explaining what happens to a manuscript after it’s submitted, focussing on how submissions are evaluated. This webinar introduces a range of topics connected to journal article publishing, including single-blind versus double-blind review, tips for authors submitting manuscripts, ethical issues (plagiarism, salami slicing, duplicate publication), understanding the fine print in publishers’ contracts, open access publishing and how authors benefit from it.”

Open peer review by a selected-papers network

“A selected-papers (SP) network is a network in which researchers who read, write, and review articles subscribe to each other based on common interests. Instead of reviewing a manuscript in secret for the Editor of a journal, each reviewer simply publishes his review (typically of a paper he wishes to recommend) to his SP network subscribers. Once the SP network reviewers complete their review decisions, the authors can invite any journal editor they want to consider these reviews and initial audience size, and make a publication decision. Since all impact assessment, reviews, and revisions are complete, this decision process should be short. I show how the SP network can provide a new way of measuring impact, catalyze the emergence of new subfields, and accelerate discovery in existing fields, by providing each reader a fine-grained filter for high-impact. I present a three phase plan for building a basic SP network, and making it an effective peer review platform that can be used by journals, conferences, users of repositories such as arXiv, and users of search engines such as PubMed. I show how the SP network can greatly improve review and dissemination of research articles in areas that are not well-supported by existing journals. Finally, I illustrate how the SP network concept can work well with existing publication services such as journals, conferences, arXiv, PubMed, and online citation management sites.”
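
The workflow in this abstract (reviewers publish open reviews to their subscribers, and authors later forward the accumulated reviews to an editor of their choice) can be pictured with a small data-model sketch. Everything below is illustrative and assumed, not taken from the paper.

```python
# Hypothetical sketch of the selected-papers (SP) network workflow described
# above: researchers subscribe to one another, a reviewer publishes an open
# review to his or her subscribers rather than sending it confidentially to a
# journal, and the authors can later hand the collected reviews to any editor.
# All class and field names here are illustrative, not taken from the paper.
from dataclasses import dataclass, field

@dataclass
class Review:
    reviewer: str
    paper_id: str
    text: str
    recommend: bool          # SP reviews are typically of papers the reviewer recommends

@dataclass
class Researcher:
    name: str
    subscribers: set = field(default_factory=set)           # who follows this researcher
    published_reviews: list = field(default_factory=list)

    def publish_review(self, review: Review) -> dict:
        """Publish a review openly; it is immediately visible to subscribers."""
        self.published_reviews.append(review)
        return {"review": review, "initial_audience": sorted(self.subscribers)}

# Usage: Alice recommends a paper to her subscribers; the paper's authors can
# later invite an editor to consider this review and the initial audience size.
alice = Researcher("Alice", subscribers={"Bob", "Carol"})
notice = alice.publish_review(Review("Alice", "arXiv:1234.5678", "Solid methods.", True))
print(notice["initial_audience"])   # ['Bob', 'Carol']
```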

URL: http://www.frontiersin.org/Journal/FullText.aspx?ART_DOI=10.3389/fncom.2012.00001&name=Computational_Neuroscience

Can Tweets Predict Citations? Metrics of Social Impact Based on Twitter and Correlation with Traditional Metrics of Scientific Impact

Background

Citations in peer-reviewed articles and the impact factor are generally accepted measures of scientific impact. Web 2.0 tools such as Twitter, blogs, or social bookmarking tools make it possible to construct innovative article-level or journal-level metrics to gauge impact and influence. However, the relationship of these new metrics to traditional metrics such as citations is not known.

Objective

(1) To explore the feasibility of measuring social impact of and public attention to scholarly articles by analyzing buzz in social media, (2) to explore the dynamics, content, and timing of tweets relative to the publication of a scholarly article, and (3) to explore whether these metrics are sensitive and specific enough to predict highly cited articles.

Methods

Between July 2008 and November 2011, all tweets containing links to articles in the Journal of Medical Internet Research (JMIR) were mined. For a subset of 1573 tweets about 55 articles published between issues 3/2009 and 2/2010, different metrics of social media impact were calculated and compared against subsequent citation data from Scopus and Google Scholar 17 to 29 months later. A heuristic to predict the top-cited articles in each issue through tweet metrics was validated.
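
As a rough illustration of the analysis described above, the following sketch correlates a per-article tweet metric with log-transformed citation counts; the data are invented, and the study's actual metrics and preprocessing are more elaborate.

```python
# Illustrative sketch of the correlation analysis described above: compare a
# per-article tweet count ("tweetations") with later citation counts via a
# Pearson correlation on log-transformed citations. The numbers are invented;
# the study used 55 JMIR articles and Scopus / Google Scholar citation data.
import numpy as np
from scipy.stats import pearsonr

tweetations = np.array([3, 12, 45, 7, 90, 5, 22, 60])   # hypothetical per-article tweets
citations   = np.array([1,  4, 20, 2, 35, 1,  9, 25])   # hypothetical later citations

r, p = pearsonr(tweetations, np.log1p(citations))        # log-transform, as in the abstract
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```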

Results

A total of 4208 tweets cited 286 distinct JMIR articles. The distribution of tweets over the first 30 days after article publication followed a power law (Zipf, Bradford, or Pareto distribution), with most tweets sent on the day an article was published (1458/3318, 43.94% of all tweets in a 60-day period) or on the following day (528/3318, 15.9%), followed by a rapid decay. The Pearson correlations between tweetations and citations were moderate and statistically significant, with correlation coefficients ranging from .42 to .72 for the log-transformed Google Scholar citations, but were less clear for Scopus citations and rank correlations. A linear multivariate model with time and tweets as significant predictors (P < .001) could explain 27% of the variation in citations. Highly tweeted articles were 11 times more likely to be highly cited than less-tweeted articles (9/12 or 75% of highly tweeted articles were highly cited, while only 3/43 or 7% of less-tweeted articles were highly cited; rate ratio 0.75/0.07 = 10.75, 95% confidence interval 3.4–33.6). Top-cited articles can be predicted from top-tweeted articles with 93% specificity and 75% sensitivity.
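
The headline figures quoted above follow from the 2×2 counts given in the abstract. The short check below reproduces the rate ratio, its confidence interval (assuming a standard log-rate-ratio interval, since the abstract does not state which method was used), and the 75% sensitivity and 93% specificity.

```python
# Reproducing the headline numbers above from the 2x2 counts in the abstract:
# 9 of 12 highly tweeted articles were highly cited, versus 3 of 43 less-tweeted
# articles. The confidence interval uses a standard log-rate-ratio approximation
# (an assumption; the abstract does not say which method the authors used).
import math

highly_tweeted, cited_in_tweeted = 12, 9    # 9/12 highly tweeted were highly cited
less_tweeted,   cited_in_less    = 43, 3    # 3/43 less tweeted were highly cited

rr = (cited_in_tweeted / highly_tweeted) / (cited_in_less / less_tweeted)
se_log_rr = math.sqrt(1 / cited_in_tweeted - 1 / highly_tweeted
                      + 1 / cited_in_less - 1 / less_tweeted)
low, high = (math.exp(math.log(rr) + z * se_log_rr) for z in (-1.96, 1.96))
print(f"rate ratio = {rr:.2f}, 95% CI {low:.1f}-{high:.1f}")      # 10.75, about 3.4-33.6

# Predicting "highly cited" from "highly tweeted":
tp = cited_in_tweeted                         # highly tweeted and highly cited
fn = cited_in_less                            # highly cited but not highly tweeted
fp = highly_tweeted - cited_in_tweeted        # highly tweeted but not highly cited
tn = less_tweeted - cited_in_less             # neither
print(f"sensitivity = {tp / (tp + fn):.0%}, specificity = {tn / (tn + fp):.0%}")  # 75%, 93%
```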

Conclusions

Tweets can predict highly cited articles within the first 3 days of article publication. Social media activity either increases citations or reflects the underlying qualities of the article that also predict citations, but the true use of these metrics is to measure the distinct concept of social impact. Social impact measures based on tweets are proposed to complement traditional citation metrics. The proposed twimpact factor may be a useful and timely metric to measure uptake of research findings and to filter research findings resonating with the public in real time.

URL: http://www.jmir.org/2011/4/e123/

Longitudinal Trends in the Performance of Scientific Peer Reviewers

Study objective: We characterize changes in review quality by individual peer reviewers over time.

Methods: Editors at a specialty journal in the top 11% of Institute for Scientific Information journals rated the quality of every review using a validated 5-point quality score. Linear mixed-effect models were used to analyze rating changes over time, calculating within-reviewer trends plus the predicted slope of change in score for each reviewer. Reviewers at this journal have been shown to be comparable to those at other journals.
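
To make the modeling approach concrete, here is a minimal sketch of a linear mixed-effects model with a per-reviewer intercept and slope, fitted on simulated data with statsmodels. It is not the authors' actual specification; the simulated drift simply echoes the mean yearly change reported in the Results below.

```python
# Minimal sketch of a linear mixed-effects model of the kind described above:
# review quality regressed on time, with a random intercept and a random slope
# per reviewer so that each reviewer gets an individual trend. The data are
# simulated and this is not the authors' actual model specification; the -0.04
# drift simply echoes the mean yearly change reported in the Results below.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for reviewer in range(50):                     # 50 hypothetical reviewers
    intercept = rng.normal(3.5, 0.5)           # baseline on the 5-point quality scale
    slope = rng.normal(-0.04, 0.05)            # per-year change in quality score
    for year in range(10):
        rows.append({"reviewer": reviewer,
                     "year": year,
                     "score": intercept + slope * year + rng.normal(0, 0.3)})
df = pd.DataFrame(rows)

# Random intercept and random slope for `year`, grouped by reviewer.
model = smf.mixedlm("score ~ year", df, groups=df["reviewer"], re_formula="~year")
result = model.fit()
print(result.params["year"])                   # fixed-effect slope, close to -0.04
```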

Results: A total of 14,808 reviews were performed by 1,499 reviewers and rated by 84 editors during the 14-year study. Ninety-two percent of reviewers demonstrated very slow but steady deterioration in their scores (mean –0.04 points [–0.8%] per year). The rate of deterioration was unrelated to the duration of reviewing but moderately correlated with the reviewer's mean quality score (R=0.52). The mean score of each reviewer's first 4 reviews predicted subsequent performance with a sensitivity of 75% and a specificity of 47%. The group's scores stayed constant over time despite this deterioration because newly recruited reviewers initially had higher mean quality scores than their predecessors.

Conclusion: This study, one of the few tracking expert performance longitudinally, demonstrates that most journal peer reviewers received lower quality scores for article assessment over the years. This could be due to deteriorating performance (caused by either cognitive changes or competing priorities) or, in part, to escalating expectations; other explanations were ruled out. This makes monitoring reviewer quality even more crucial to maintaining the mission of scientific journals.

URL: http://www.annemergmed.com/article/S0196-0644(10)01266-7/fulltext