Open peer review by a selected-papers network :

“A selected-papers (SP) network is a network in which researchers who read, write, and review articles subscribe to each other based on common interests. Instead of reviewing a manuscript in secret for the Editor of a journal, each reviewer simply publishes his review (typically of a paper he wishes to recommend) to his SP network subscribers. Once the SP network reviewers complete their review decisions, the authors can invite any journal editor they want to consider these reviews and initial audience size, and make a publication decision. Since all impact assessment, reviews, and revisions are complete, this decision process should be short. I show how the SP network can provide a new way of measuring impact, catalyze the emergence of new subfields, and accelerate discovery in existing fields, by providing each reader a fine-grained filter for high-impact work. I present a three-phase plan for building a basic SP network, and making it an effective peer review platform that can be used by journals, conferences, users of repositories such as arXiv, and users of search engines such as PubMed. I show how the SP network can greatly improve review and dissemination of research articles in areas that are not well-supported by existing journals. Finally, I illustrate how the SP network concept can work well with existing publication services such as journals, conferences, arXiv, PubMed, and online citation management sites.”
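
The subscription-and-review mechanics described above map naturally onto a small directed graph. Below is a minimal sketch in Python; the class and method names (SPNetwork, publish_review, initial_audience) are hypothetical, since the paper describes the mechanism only in prose:

```python
from collections import defaultdict

class SPNetwork:
    """Minimal selected-papers network: researchers subscribe to each
    other, and a published review reaches the reviewer's subscribers."""

    def __init__(self):
        self.subscribers = defaultdict(set)   # reviewer -> readers following them
        self.reviews = defaultdict(list)      # paper_id -> (reviewer, text) pairs

    def subscribe(self, reader, reviewer):
        # A reader follows a reviewer based on common interests.
        self.subscribers[reviewer].add(reader)

    def publish_review(self, reviewer, paper_id, text):
        # The review is public: it is stored with the paper and, implicitly,
        # pushed to everyone subscribed to this reviewer.
        self.reviews[paper_id].append((reviewer, text))

    def initial_audience(self, paper_id):
        # One crude impact signal: distinct readers reached by all reviews.
        reached = set()
        for reviewer, _ in self.reviews[paper_id]:
            reached |= self.subscribers[reviewer]
        return len(reached)

net = SPNetwork()
net.subscribe("alice", "bob")
net.subscribe("carol", "bob")
net.publish_review("bob", "paper-1", "Recommended: clean methodology.")
print(net.initial_audience("paper-1"))   # 2 readers reached
```

A journal editor invited by the authors could then weigh the initial audience size together with the published reviews themselves, which is why the final publication decision can be short.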

URL : http://www.frontiersin.org/Journal/FullText.aspx?ART_DOI=10.3389/fncom.2012.00001&name=Computational_Neuroscience

Can Tweets Predict Citations? Metrics of Social Impact Based on Twitter and Correlation with Traditional Metrics of Scientific Impact

Background

Citations in peer-reviewed articles and the impact factor are generally accepted measures of scientific impact. Web 2.0 tools such as Twitter, blogs or social bookmarking tools provide the possibility to construct innovative article-level or journal-level metrics to gauge impact and influence. However, the relationship of these new metrics to traditional metrics such as citations is not known.

Objective

(1) To explore the feasibility of measuring social impact of and public attention to scholarly articles by analyzing buzz in social media, (2) to explore the dynamics, content, and timing of tweets relative to the publication of a scholarly article, and (3) to explore whether these metrics are sensitive and specific enough to predict highly cited articles.

Methods

Between July 2008 and November 2011, all tweets containing links to articles in the Journal of Medical Internet Research (JMIR) were mined. For a subset of 1573 tweets about 55 articles published between issues 3/2009 and 2/2010, different metrics of social media impact were calculated and compared against subsequent citation data from Scopus and Google Scholar 17 to 29 months later. A heuristic to predict the top-cited articles in each issue through tweet metrics was validated.
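
A sketch of the core comparison is below; the per-article counts are invented for illustration, and the paper's actual metrics are richer than a single tweet count, but the abstract's headline numbers are Pearson correlations against log-transformed citation counts:

```python
import math

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-article data: tweets shortly after publication vs.
# Google Scholar citations 17 to 29 months later.
tweets    = [40, 3, 12, 7, 25, 1]
citations = [30, 2, 10, 4, 18, 1]

# Log-transform citations before correlating, as the study does for
# Google Scholar citations (adding 1 here to handle zero counts is
# an assumption of this sketch).
log_cites = [math.log(c + 1) for c in citations]
print(round(pearson(tweets, log_cites), 2))
```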

Results

A total of 4208 tweets cited 286 distinct JMIR articles. The distribution of tweets over the first 30 days after article publication followed a power law (Zipf, Bradford, or Pareto distribution), with most tweets sent on the day when an article was published (1458/3318, 43.94% of all tweets in a 60-day period) or on the following day (528/3318, 15.9%), followed by a rapid decay. The Pearson correlations between tweetations and citations were moderate and statistically significant, with correlation coefficients ranging from .42 to .72 for the log-transformed Google Scholar citations, but were less clear for Scopus citations and rank correlations. A linear multivariate model with time and tweets as significant predictors (P < .001) could explain 27% of the variation of citations. Highly tweeted articles were 11 times more likely to be highly cited than less-tweeted articles (9/12 or 75% of highly tweeted articles were highly cited, while only 3/43 or 7% of less-tweeted articles were highly cited; rate ratio 0.75/0.07 = 10.75, 95% confidence interval, 3.4–33.6). Top-cited articles can be predicted from top-tweeted articles with 93% specificity and 75% sensitivity.
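
The rate ratio and the sensitivity/specificity quoted above follow directly from the abstract's own 2 x 2 table (12 highly tweeted vs. 43 less-tweeted articles), so they can be checked with a few lines:

```python
# From the abstract: of 12 highly tweeted articles, 9 were highly cited;
# of 43 less-tweeted articles, 3 were highly cited (40 were not).
rate_hi = 9 / 12                # 0.75
rate_lo = 3 / 43                # ~0.07
print(round(rate_hi / rate_lo, 2))          # rate ratio ~10.75

sensitivity = 9 / (9 + 3)       # highly cited articles that were highly tweeted
specificity = 40 / (40 + 3)     # not-highly-cited articles not highly tweeted
print(round(sensitivity, 2), round(specificity, 2))   # 0.75, 0.93
```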

Conclusions

Tweets can predict highly cited articles within the first 3 days of article publication. Social media activity either increases citations or reflects the underlying qualities of the article that also predict citations, but the true use of these metrics is to measure the distinct concept of social impact. Social impact measures based on tweets are proposed to complement traditional citation metrics. The proposed twimpact factor may be a useful and timely metric to measure uptake of research findings and to filter research findings resonating with the public in real time.
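
The abstract does not define the twimpact factor formally, but a natural reading is a cumulative tweet count within a fixed window after publication. A minimal sketch under that assumption (the window length and function name are illustrative):

```python
from datetime import datetime, timedelta

def twimpact(tweet_times, published, n_days=7):
    """Hypothetical tw_n: tweets linking to an article within n days
    of its publication date (an assumed, not official, definition)."""
    cutoff = published + timedelta(days=n_days)
    return sum(published <= t <= cutoff for t in tweet_times)

pub = datetime(2010, 1, 4)
tweets = [pub + timedelta(hours=h) for h in (1, 5, 30, 200)]
print(twimpact(tweets, pub))   # 3: the 200-hour tweet falls outside 7 days
```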

URL : http://www.jmir.org/2011/4/e123/

Longitudinal Trends in the Performance of Scientific Peer Reviewers :

“Study objective : We characterize changes in review quality by individual peer reviewers over time.

Methods : Editors at a specialty journal in the top 11% of Institute for Scientific Information journals rated the quality of every review, using a validated 5-point quality score. Linear mixed-effect models were used to analyze rating changes over time, calculating within-reviewer trends plus the predicted slope of change in score for each reviewer. Reviewers at this journal have been shown to be comparable to those at other journals.

Results : Reviews (14,808) were performed by 1,499 reviewers and rated by 84 editors during the 14-year study. Ninety-two percent of reviewers demonstrated very slow but steady deterioration in their scores (mean –0.04 points [–0.8%] per year). Rate of deterioration was unrelated to duration of reviewing but moderately correlated with mean reviewer quality score (R=0.52). The mean score of each reviewer’s first 4 reviews predicted subsequent performance with a sensitivity of 75% and specificity of 47%. Scores of the group stayed constant over time despite deterioration because newly recruited reviewers initially had higher mean quality scores than their predecessors.
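
A within-reviewer trend analysis of this kind can be approximated with a linear mixed-effects model; here is a minimal sketch using statsmodels, with a hypothetical long-format table (the column names and toy scores are assumptions, not the study's data):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per rated review.
df = pd.DataFrame({
    "reviewer_id": [1]*4 + [2]*4 + [3]*4 + [4]*4,
    "years":       [0, 2, 4, 6] * 4,
    "score":       [4.2, 4.1, 4.0, 3.8,
                    3.6, 3.5, 3.3, 3.2,
                    4.5, 4.4, 4.4, 4.2,
                    3.9, 3.7, 3.6, 3.4],
})

# Random intercept and slope per reviewer, so each reviewer gets an
# individual predicted rate of change in quality score.
model = smf.mixedlm("score ~ years", df, groups=df["reviewer_id"],
                    re_formula="~years")
fit = model.fit()
print(fit.params["years"])   # fixed-effect slope ~ mean yearly drift
```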

Conclusion : This study, one of few tracking expert performance longitudinally, demonstrates that most journal peer reviewers received lower quality scores for article assessment over the years. This could be due to deteriorating performance (caused by either cognitive changes or competing priorities) or, to a partial degree, escalating expectations; other explanations were ruled out. This makes monitoring reviewer quality even more crucial to maintain the mission of scientific journals.”

URL : http://www.annemergmed.com/article/S0196-0644(10)01266-7/fulltext

Toward a new model of scientific publishing: discussion and a proposal :

“The current system of publishing in the biological sciences is notable for its redundancy, inconsistency, sluggishness, and opacity. These problems persist, and grow worse, because the peer review system remains focused on deciding whether or not to publish a paper in a particular journal rather than providing (1) a high-quality evaluation of scientific merit and (2) the information necessary to organize and prioritize the literature. Online access has eliminated the need for journals as distribution channels, so their primary current role is to provide authors with feedback prior to publication and a quick way for other researchers to prioritize the literature based on which journal publishes a paper. However, the feedback provided by reviewers is not focused on scientific merit but on whether to publish in a particular journal, which is generally of little use to authors and an opaque and noisy basis for prioritizing the literature. Further, each submission of a rejected manuscript requires the entire machinery of peer review to creak to life anew. This redundancy incurs delays, inconsistency, and increased burdens on authors, reviewers, and editors. Finally, reviewers have no real incentive to review well or quickly, as their performance is not tracked, let alone rewarded. One of the consistent suggestions for modifying the current peer review system is the introduction of some form of post-publication reception, and the development of a marketplace where the priority of a paper rises and falls based on its reception from the field (see other articles in this special topics). However, the information that accompanies a paper into the marketplace is as important as the marketplace’s mechanics. Beyond suggestions concerning the mechanisms of reception, we propose an update to the system of publishing in which publication is guaranteed, but pre-publication peer review still occurs, giving the authors the opportunity to revise their work following a mini pre-reception from the field. This step also provides a consistent set of rankings and reviews to the marketplace, allowing for early prioritization and stabilizing its early dynamics. We further propose to improve the general quality of reviewing by providing tangible rewards to those who do it well.”
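
The proposal's pipeline (guaranteed publication, advisory pre-publication review that seeds rankings, then post-publication reception in a marketplace) can be summarized as a simple state flow. The sketch below uses hypothetical names throughout, since the authors describe the model only in prose:

```python
from dataclasses import dataclass, field

@dataclass
class Manuscript:
    title: str
    reviews: list = field(default_factory=list)   # (reviewer, score, text)
    published: bool = False
    priority: float = 0.0                          # marketplace signal

def pre_publication_review(ms, reviews):
    """Reviews are advisory: they score and annotate, never reject."""
    ms.reviews.extend(reviews)

def publish(ms):
    """Publication is guaranteed; the initial priority comes from the
    review scores, stabilizing the marketplace's early dynamics."""
    scores = [s for _, s, _ in ms.reviews]
    ms.priority = sum(scores) / len(scores) if scores else 0.0
    ms.published = True

def post_publication_reception(ms, delta):
    """Priority then rises and falls with reception from the field."""
    ms.priority += delta

ms = Manuscript("A new model of publishing")
pre_publication_review(ms, [("r1", 4.0, "solid"), ("r2", 3.5, "revise fig. 2")])
publish(ms)
post_publication_reception(ms, +0.6)
print(ms.published, round(ms.priority, 2))   # True 4.35
```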

URL : http://www.frontiersin.org/computational_neuroscience/10.3389/fncom.2011.00055/full

Citation and Peer Review of Data: Moving Towards Formal Data Publication

“This paper discusses many of the issues associated with formally publishing data in academia, focusing primarily on the structures that need to be put in place for peer review and formal citation of datasets. Data publication is becoming increasingly important to the scientific community, as it will provide a mechanism for those who create data to receive academic credit for their work and will allow the conclusions arising from an analysis to be more readily verifiable, thus promoting transparency in the scientific process. Peer review of data will also provide a mechanism for ensuring the quality of datasets, and we provide suggestions on the types of activities one expects to see in the peer review of data. A simple taxonomy of data publication methodologies is presented and evaluated, and the paper concludes with a discussion of dataset granularity, transience and semantics, along with a recommended human-readable citation syntax.”
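
As an illustration of what a human-readable dataset citation could look like, here is a small formatter; the field order and punctuation are assumptions for illustration, not the syntax the paper recommends:

```python
def cite_dataset(authors, year, title, version, publisher, identifier):
    """Assemble a human-readable dataset citation (illustrative format)."""
    return (f"{authors} ({year}): {title}, version {version}. "
            f"{publisher}. {identifier}")

print(cite_dataset(
    authors="Smith, A. and Jones, B.",
    year=2011,
    title="Example observational dataset",
    version="1.0",
    publisher="Example Data Centre",
    identifier="doi:10.0000/example",
))
```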

URL : http://www.ijdc.net/index.php/ijdc/article/view/181

Wikis in scholarly publishing :

“Scientific research is a process concerned with the creation, collective accumulation, contextualization, updating and maintenance of knowledge. Wikis provide an environment that allows users to collectively accumulate, contextualize, update and maintain knowledge in a coherent and transparent fashion. Here, we examine the potential of wikis as platforms for scholarly publishing. In the hope of stimulating further discussion, the article itself was drafted on Species ID (http://species-id.net), a wiki that hosts a prototype for wiki-based scholarly publishing, where it can be updated, expanded or otherwise improved.”

URL : http://iospress.metapress.com/content/q42617538838t6j2/

Science and Technology Committee – Eighth Report : Peer review in scientific publications :

“Peer review in scholarly publishing, in one form or another, has always been regarded as crucial to the reputation and reliability of scientific research. In recent years there have been an increasing number of reports and articles assessing the current state of peer review. In view of the importance of evidence-based scientific information to government, it seemed appropriate to undertake a detailed examination of the current peer-review system as used in scientific publications, both to see whether it is operating effectively and to shine a light on new and innovative approaches. We also explored some of the broader issues around research impact, publication ethics and research integrity.

We found that despite the many criticisms and the little solid evidence on the efficacy of pre-publication editorial peer review, it is considered by many as important and not something that can be dispensed with. There are, however, many ways in which current pre-publication peer-review practices can and should be improved and optimised, although we recognise that different types of peer review are suited to different disciplines and research communities. Innovative approaches—such as the use of pre-print servers, open peer review, increased transparency and online repository-style journals—should be explored by publishers, in consultation with their journals and taking into account the requirements of their research communities. Some of these new approaches may help to reduce the burden on researchers, and also help accelerate the pace of publication of research. We encourage greater recognition of the work carried out by reviewers, by both publishers and employers. All publishers need to have in place systems for recording and acknowledging the contribution of those involved in peer review.

Publishers also have a responsibility to ensure that the people involved in the peer-review process are adequately trained for the role that they play. Training for editors, authors and reviewers varies across the publishing sector and across different research institutions. We encourage publishers to work together to develop standards—which could be applied across the industry—to ensure that all editors, whether staff or academic, are fully equipped for the job that they do. Furthermore, we consider that all early-career researchers should be given the option of training in peer review; responsibility for this lies primarily with the funders of research.

Funders of research have an interest in ensuring that the work they fund is both scientifically sound and reproducible. We consider that it should be a fundamental aim of the peer-review process that all publications are scientifically sound. Reproducibility should be the gold standard that all peer reviewers and editors aim for when assessing whether a manuscript has supplied sufficient information to allow others to repeat and build on the experiments. As such, the presumption must be that, unless there is a strong reason otherwise, data should be fully disclosed and made publicly available. In line with this principle, data associated with all publicly funded research should, where possible, be made widely and freely available. The work of researchers who expend time and effort adding value to their data, to make it usable by others, should be acknowledged and encouraged.

While pre-publication peer review (the first records of which date back to the 17th century) continues to play an important role in ensuring that the scientific record is sound, the growth of post-publication peer review and commentary represents an enormous opportunity for experimentation with new media and social networking tools. Online communications allow the widespread sharing of links to articles, ensuring that interesting research is spread across the world, facilitating rapid commentary and review by the global audience. They also have a valuable role to play in alerting the community to potential deficiencies and problems with published work. We encourage the prudent use of online tools for post-publication review and commentary as a means of supplementing pre-publication review.

On the subject of impact, it was clear to us that the publication of peer-reviewed articles, particularly those that are published in journals with high Impact Factors, has a direct effect on the careers of researchers and the reputations of research institutions. Assessing the impact or perceived importance of research before it is published requires subjective judgement. We therefore have concerns about the use of journal Impact Factor as a proxy measure for the quality of individual articles. While we have been assured by research funders that they do not use this as a proxy measure for the quality of research or of individual articles, representatives of research institutions have suggested that publication in a high-impact journal is still an important consideration when assessing individuals for career progression. We consider that research institutions should be cautious about this approach as there is an element of chance in getting articles accepted in such journals. We have heard in the course of this inquiry that there is no substitute for reading the article itself in assessing the worth of a piece of research.

Finally, we found that the integrity of the peer-review process can only ever be as robust as the integrity of the people involved. Ethical and scientific misconduct—such as in the Wakefield case—damages peer review and science as a whole. Although it is not the role of peer review to police research integrity and identify fraud or misconduct, it does, on occasion, identify suspicious cases. While there is guidance in place for journal editors when ethical misconduct is suspected, we found the general oversight of research integrity in the UK to be unsatisfactory. We note that the UK Research Integrity Futures Working Group report recently made sensible recommendations about the way forward for research integrity in the UK, which have not been adopted. We recommend that the Government revisit the recommendation that the UK should have an oversight body for research integrity that provides “advice and support to research employers and assurance to research funders”, across all disciplines. Furthermore, while employers must take responsibility for the integrity of their employees’ research, we recommend that there be an external regulator overseeing research integrity. We also recommend that all UK research institutions have a specific member of staff leading on research integrity.”

URL : http://www.publications.parliament.uk/pa/cm201012/cmselect/cmsctech/856/85602.htm