Does the use of open, non-anonymous peer review in scholarly publishing introduce bias? Evidence from the F1000 post-publication open peer review publishing model

Authors : Mike Thelwall, Verena Weigert, Liz Allen, Zena Nyakoojo, Eleanor-Rose Papas

This study examines whether there is any evidence of bias in two areas of common critique of open, non-anonymous peer review, as used in the post-publication peer review system operated by the open-access scholarly publishing platform F1000Research.

First, is there evidence of bias when a reviewer based in a specific country assesses the work of an author also based in the same country? Second, are reviewers influenced by being able to see the comments, and know the origins, of previous reviewers?

Scrutinising the open peer review comments published on F1000Research, we assess the extent of two frequently cited potential influences on reviewers that may be the result of the transparency offered by a fully attributable, open peer review publishing model: the national affiliations of authors and reviewers, and the ability of reviewers to view previously-published reviewer reports before submitting their own.

The effects of these potential influences were investigated for all first versions of articles published on F1000Research by 8 July 2019. In 16 of the 20 countries with the most articles, reviewers based in the same country as the authors tended to give a more positive review.

The difference was statistically significant in only one country, and only three countries showed the reverse tendency. Second, there is no evidence of a conformity bias: when reviewers mentioned a previous review in their peer review report, they were not more likely to give the same overall judgement.
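As a concrete illustration of the same-country comparison, a minimal sketch (not the authors' analysis code) could test one country's counts with Fisher's exact test in Python; the counts below are purely hypothetical.

# Minimal illustrative sketch; counts are hypothetical, not the study's data.
# For one country, compare how often reviewers based in the authors' country
# give a positive judgement versus reviewers based elsewhere.
from scipy.stats import fisher_exact

same_country = [45, 15]      # [positive reviews, other reviews] from same-country reviewers
other_country = [120, 80]    # the same counts for reviewers based elsewhere

odds_ratio, p_value = fisher_exact([same_country, other_country])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")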

Although reviewers who had longer to potentially read previously published reviewer reports were slightly less likely to agree with previous reviewers' judgements, this could be because these articles were more difficult to judge rather than evidence of deliberate non-conformity.

URL : https://arxiv.org/abs/1911.03379

Large publishing consortia produce higher citation impact research but co-author contributions are hard to evaluate

Author : Mike Thelwall

This paper introduces a simple agglomerative clustering method to identify large publishing consortia with at least 20 authors and 80% shared authorship between articles. Based on Scopus journal articles 1996-2018, under these criteria, nearly all (88%) of the large consortia published research with citation impact above the world average, with the exceptions being mainly the newer consortia for which average citation counts are unreliable.
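To illustrate the kind of agglomerative grouping described (the precise merging rule and overlap definition are assumptions here, not taken from the paper), a minimal Python sketch follows.

# Illustrative sketch: merge articles whose author lists overlap by at least
# 80%, then keep groups whose combined author list has at least 20 names.
def author_overlap(a, b):
    # Overlap measured relative to the smaller author set (one plausible choice).
    return len(a & b) / min(len(a), len(b))

def find_consortia(articles, min_overlap=0.8, min_authors=20):
    clusters = [set(authors) for authors in articles]  # one cluster per article
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                if author_overlap(clusters[i], clusters[j]) >= min_overlap:
                    clusters[i] |= clusters.pop(j)
                    merged = True
                    break
            if merged:
                break
    return [c for c in clusters if len(c) >= min_authors]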

On average, consortium research had almost double (1.95) the world average citation impact on the log scale used (Mean Normalised Log Citation Score). At least partial alphabetical author ordering was the norm in most consortia.
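For reference, the Mean Normalised Log Citation Score is, broadly, the group's average of ln(1 + citations) divided by the corresponding world average for articles in the same field and year (the exact normalisation details are an assumption here; see Thelwall's earlier work on the indicator):

MNLCS = mean over the group's articles of ln(1 + c) / mean over world articles in the same field and year of ln(1 + c)

On this scale, a value of 1.95 means the consortium research attracted almost twice the world-average log citation impact.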

The 250 largest consortia were in nuclear physics and astronomy, organised around expensive equipment, and in predominantly health-related areas of genomics, medicine, public health, microbiology and neuropsychology.

For the health-related consortia, except for the first and last few authors, authorship seems primarily to indicate contributions to the shared project infrastructure needed to gather the raw data.

It is impossible for research evaluators to identify the contributions of individual authors in the huge alphabetical consortia of physics and astronomy, and problematic for the middle and end authors of health-related consortia.

For small-scale evaluations, author contribution statements could be used when available.

URL : https://arxiv.org/abs/1906.01849

Can Google Scholar and Mendeley help to assess the scholarly impacts of dissertations?

Authors : Kayvan Kousha, Mike Thelwall

Dissertations can be the single most important scholarly outputs of junior researchers. Whilst sets of journal articles are often evaluated with the help of citation counts from the Web of Science or Scopus, these do not index dissertations and so their impact is hard to assess.

In response, this article introduces a new multistage method to extract Google Scholar citation counts for large collections of dissertations from repositories indexed by Google.

The method was used to extract Google Scholar citation counts for 77,884 American doctoral dissertations from 2013-2017 via ProQuest, with a precision of over 95%. Some ProQuest dissertations that were dual indexed with other repositories could not be retrieved with ProQuest-specific searches but could be found with Google Scholar searches of the other repositories.
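As a rough illustration of the querying step (an assumption about the mechanics rather than the authors' tooling; Google Scholar has no official API, so any real harvesting needs manual checking, rate limiting and respect for the site's terms), quoted-title queries could be built like this.

# Hypothetical sketch: build quoted-title queries for dissertation records.
# File name, column names and the site: restriction are all assumptions.
import csv
import urllib.parse

def build_query(title, site="search.proquest.com"):
    # Quoting the full title reduces false matches; restricting to the hosting
    # repository's domain is one way to make the search repository-specific.
    return f'"{title}" site:{site}'

with open("dissertations.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        query = build_query(row["title"])
        print("https://scholar.google.com/scholar?q=" + urllib.parse.quote(query))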

The Google Scholar citation counts were then compared with Mendeley reader counts, a known source of scholarly-like impact data. A fifth of the dissertations had at least one citation recorded in Google Scholar and slightly fewer had at least one Mendeley reader.

Based on numerical comparisons, the Mendeley reader counts seem to be more useful for impact assessment purposes for dissertations that are less than two years old, whilst Google Scholar citations are more useful for older dissertations, especially in social sciences, arts and humanities.
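One way to run the kind of numerical comparison referred to (an illustrative sketch with assumed file and column names, not the authors' code) is a year-by-year Spearman correlation, the usual choice for skewed citation-like counts.

# Illustrative sketch: correlate Google Scholar citations with Mendeley readers
# for the same dissertations, split by publication year.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("dissertation_counts.csv")  # assumed columns: year, gs_citations, mendeley_readers
for year, group in df.groupby("year"):
    rho, p = spearmanr(group["gs_citations"], group["mendeley_readers"])
    print(f"{year}: Spearman rho = {rho:.2f} (p = {p:.3g}, n = {len(group)})")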

Google Scholar citation counts may reflect a more scholarly type of impact than that of Mendeley reader counts because dissertations attract a substantial minority of their citations from other dissertations.

In summary, the new method now makes it possible for research funders, institutions and others to systematically evaluate the impact of dissertations, although additional Google Scholar queries for other online repositories are needed to ensure comprehensive coverage.

URL : https://arxiv.org/abs/1902.08746

Readership Data and Research Impact

Authors : Ehsan Mohammadi, Mike Thelwall

Reading academic publications is a key scholarly activity. Scholars accessing and recording academic publications online are producing new types of readership data. These include publisher, repository, and academic social network download statistics as well as online reference manager records.

This chapter discusses the use of download and reference manager data for research evaluation and library collection development. The focus is on the validity and application of readership data as an impact indicator for academic publications across different disciplines.

Mendeley is particularly promising in this regard, although none of these data sources is subject to rigorous quality control and all can be manipulated.

URL : https://arxiv.org/abs/1901.08593

How quickly do publications get read? The evolution of Mendeley reader counts for new articles

Authors : Nabeil Maflahi, Mike Thelwall

Within science, citation counts are widely used to estimate research impact but publication delays mean that they are not useful for recent research. This gap can be filled by Mendeley reader counts, which are valuable early impact indicators for academic articles because they appear before citations and correlate strongly with them.

Nevertheless, it is not known how Mendeley readership counts accumulate within the year of publication, and so it is unclear how soon they can be used. In response, this paper reports a longitudinal weekly study of the Mendeley readers of articles in six library and information science journals from 2016.

The results suggest that Mendeley readers accrue from when articles are first available online and continue to steadily build. For journals with large publication delays, articles can already have substantial numbers of readers by their publication date.

Thus, Mendeley reader counts may even be useful as early impact indicators for articles before they have been officially published in a journal issue. If field normalised indicators are needed, then these can be generated when journal issues are published using the online first date.

URL : http://hdl.handle.net/2436/620522


Are Wikipedia Citations Important Evidence of the Impact of Scholarly Articles and Books?

Authors : Kayvan Kousha, Mike Thelwall

Individual academics and research evaluators often need to assess the value of published research. Whilst citation counts are a recognised indicator of scholarly impact, alternative data is needed to provide evidence of other types of impact, including within education and wider society.

Wikipedia is a logical choice for both of these because the role of a general encyclopaedia is to be an understandable repository of facts about a diverse array of topics and hence it may cite research to support its claims.

To test whether Wikipedia could provide new evidence about the impact of scholarly research, this article counted citations to 302,328 articles and 18,735 monographs in English indexed by Scopus in the period 2005 to 2012.

The results show that citations from Wikipedia to articles are too rare for most research evaluation purposes, with only 5% of articles being cited in all fields. In contrast, a third of monographs have at least one citation from Wikipedia, with the most in the arts and humanities.

Hence, Wikipedia citations can provide extra impact evidence for academic monographs. Nevertheless, the results may be relatively easily manipulated and so Wikipedia is not recommended for evaluations affecting stakeholder interests.

URL : http://www.scit.wlv.ac.uk/~cm1993/papers/WikipediaCitations.pdf