Do funding applications where peer reviewers disagree have higher citations? A cross-sectional study

Authors : Adrian G Barnett, Scott R. Glisson, Stephen Gallo

Background

Decisions about which applications to fund are generally based on the mean scores of a panel of peer reviewers. Beyond the mean, a large disagreement between peer reviewers may also be worth considering, as it may indicate a high-risk application with a potentially high return.

Methods

We examined the peer reviewers’ scores for 227 funded applications submitted to the American Institute of Biological Sciences between 1999 and 2006. We examined the mean score and two measures of reviewer disagreement: the standard deviation and range.

The outcome variable was the relative citation ratio, which is the number of citations from all publications associated with the application, standardised by field and publication year.
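
As a minimal sketch of these three quantities, using hypothetical reviewer scores rather than the study's data:

```python
import statistics

# Hypothetical scores from four peer reviewers for one application
# (illustrative only; not data from the study).
scores = [2.0, 3.5, 1.5, 4.0]

mean_score = statistics.mean(scores)        # basis for funding decisions
sd_score = statistics.stdev(scores)         # disagreement measure 1
score_range = max(scores) - min(scores)     # disagreement measure 2

print(f"mean={mean_score:.2f}, sd={sd_score:.2f}, range={score_range:.2f}")
```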

Results

There was a clear increase in relative citations for applications with a better mean score. There was no association between relative citations and either of the two measures of disagreement.

Conclusions

We found no evidence that reviewer disagreement was able to identify applications with a higher than average return. However, this is the first study to empirically examine this association, and it would be useful to examine whether reviewer disagreement is associated with research impact in other funding schemes and with larger sample sizes.

URL : Do funding applications where peer reviewers disagree have higher citations? A cross-sectional study

DOI : http://dx.doi.org/10.12688/f1000research.15479.2

Sharing health research data – the role of funders in improving the impact

Authors : Robert F. Terry, Katherine Littler, Piero L. Olliaro

Recent public health emergencies, with outbreaks of influenza, Ebola and Zika, revealed that the mechanisms for sharing research data are neither being used nor adequate for the purpose, particularly where data needs to be shared rapidly.

A review of research papers, including completed clinical trials related to priority pathogens, found that only 31% (98 out of 319 published papers, excluding case studies) provided access to all the data underlying the paper; 65% of these papers gave no information on how to find or access the data.

Only two clinical trials out of 58 on interventions for WHO priority pathogens provided any link in their registry entry to the background data.

Interviews with researchers revealed that reluctance to share data stemmed from a lack of confidence in the utility of the data; an absence of academic incentives for rapid dissemination, which can preclude subsequent publication; and a disconnect between those who collect the data and those who wish to use it quickly.

The role of research funders needs to change to address this. Firstly, funders need to engage early with researchers and related stakeholders to understand their concerns, and work harder to define more explicitly the benefits to all stakeholders.

Secondly, sharing data needs to bring a direct benefit to the people who collect and curate the data.

Thirdly, more work needs to be done to realise the intent of making data sharing resources more equitable, ethical and efficient.

Finally, a checklist of the issues that need to be addressed when designing new or revising existing data sharing resources should be created. This checklist would highlight the technical, cultural and ethical issues that need to be considered and point to examples of emerging good practice that can be used to address them.

URL : Sharing health research data – the role of funders in improving the impact

DOI : http://dx.doi.org/10.12688/f1000research.16523.1

Commons-Based Peer Production in the Work of Yochai Benkler

Author : Vangelis Papadimitropoulos

Yochai Benkler defines commons-based peer production as a non-market sector of information, knowledge and cultural production, one that is treated not as private property but according to an ethic of open sharing and co-operation, and that is greatly enhanced by the Internet and free/open-source software.

This paper makes the case that there is a tension between Benkler’s liberal commitments and his anarchistic vision of the commons. Benkler limits the scope of commons-based peer production to the immaterial production of the digital commons, while paradoxically envisaging the control of the world economy by the commons.

This paradox reflects a deeper lacuna in his work, revealing the absence of a concrete strategy as to how the immaterial production of the digital commons can connect to material production and control the world economy.

The paper concludes with an enquiry into some of the latest efforts in the literature to fill this gap.

URL : Commons-Based Peer Production in the Work of Yochai Benkler

DOI : https://doi.org/10.31269/triplec.v16i2.1009

Evaluating research and researchers by the journal impact factor: is it better than coin flipping?

Authors : Ricardo Brito, Alonso Rodríguez-Navarro

The journal impact factor (JIF) is the average number of citations received by the papers published in a journal, calculated according to a specific formula; it is extensively used for the evaluation of research and researchers.
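
For reference, the standard two-year variant of that formula (the one most commonly reported) is:

```latex
\mathrm{JIF}_{y} =
\frac{\text{citations received in year } y \text{ by items published in years } y-1 \text{ and } y-2}
     {\text{citable items published in years } y-1 \text{ and } y-2}
```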

The method assumes that all papers in a journal have the same scientific merit, which is measured by the JIF of the publishing journal. This implies that the number of citations measures scientific merit, yet the JIF does not evaluate each individual paper by its own citation count.

Therefore, in the comparative evaluation of two papers, the use of the JIF carries a risk of failure, which occurs when the paper in the journal with the lower JIF actually has more citations than the paper it is compared with in the journal with the higher JIF.

To quantify this risk of failure, this study calculates the failure probabilities, taking advantage of the lognormal distribution of citations. In two journals whose JIFs are ten-fold different, the failure probability is low.

However, in most cases when two papers are compared, the JIFs of the journals are not so different. Then, the failure probability can be close to 0.5, which is equivalent to evaluating by coin flipping.
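
A minimal Monte Carlo sketch of this failure probability, assuming lognormal citation distributions with illustrative parameters (the sigma and the JIF ratios below are assumptions, not the values fitted in the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 1_000_000

# Illustrative assumption: citations follow a lognormal distribution with
# sigma = 1.0 (not the value fitted in the paper).
SIGMA = 1.0

def failure_probability(jif_ratio):
    """Probability that a random paper from the lower-JIF journal
    outperforms a random paper from the higher-JIF journal."""
    mu_low = 0.0
    mu_high = mu_low + np.log(jif_ratio)  # multiplies the mean by jif_ratio
    cites_high = rng.lognormal(mu_high, SIGMA, N)
    cites_low = rng.lognormal(mu_low, SIGMA, N)
    # "Failure": JIF-based evaluation ranks the lower-JIF paper below the
    # higher-JIF paper, yet it is actually cited more often.
    return float(np.mean(cites_low > cites_high))

for ratio in (10, 2, 1.2):
    print(f"JIF ratio {ratio:>4}: failure probability ~ {failure_probability(ratio):.2f}")
```

With a ten-fold JIF ratio the estimated probability is small, but as the ratio approaches one it climbs towards the coin-flipping benchmark of 0.5.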

URL : https://arxiv.org/abs/1809.10999

Consistency of interdisciplinarity measures

Authors : Qi Wang, Jesper Wiborg Schneider

Assessing interdisciplinarity is an important and challenging task in bibliometric studies. Previous studies tend to emphasize that the nature and concept of interdisciplinarity are ambiguous and uncertain (e.g. Leydesdorff & Rafols, 2010; Rafols & Meyer, 2010; Sugimoto & Weingart, 2014).

As a consequence, various measures of interdisciplinarity have been proposed. However, few studies have examined the relations between these measures. In this context, this paper aims to systematically review these interdisciplinarity measures and explore their inherent relations.
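
One widely cited example of such a measure (a common choice in this literature, not necessarily one the paper endorses) is the Rao-Stirling diversity index, sketched below with hypothetical category proportions and distances:

```python
import numpy as np

def rao_stirling(p, d):
    """Rao-Stirling diversity: sum over pairs of categories i, j of
    p_i * p_j * d_ij, where p are the proportions of a paper's references
    in each subject category and d is a pairwise distance matrix
    (the diagonal is zero, so only cross-category pairs contribute)."""
    p = np.asarray(p, dtype=float)
    d = np.asarray(d, dtype=float)
    return float(p @ d @ p)

# Hypothetical example: references spread over three subject categories.
p = [0.5, 0.3, 0.2]                 # balance across categories
d = np.array([[0.0, 0.8, 0.9],      # hypothetical pairwise distances
              [0.8, 0.0, 0.4],
              [0.9, 0.4, 0.0]])
print(f"Rao-Stirling diversity: {rao_stirling(p, d):.3f}")  # 0.468
```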

We examine these measures in relation to the Web of Science (WoS) journal subject categories (SCs), and also an interdisciplinary research center at Aarhus University.

In line with the conclusion of Digital Science (2016), our results reveal that the current state of interdisciplinarity measurement in science studies is confusing and unsatisfying. We obtained surprisingly dissimilar results from measures that should supposedly capture similar features.

We suggest that interdisciplinarity as a measurement construct should be used and interpreted with caution in future research evaluation and research policies.

URL : https://arxiv.org/abs/1810.00577

Using ORCID, DOI, and Other Open Identifiers in Research Evaluation

Authors : Laurel L. Haak, Alice Meadows, Josh Brown

An evaluator’s task is to connect the dots between a program’s goals and its outcomes. This can be accomplished through surveys, research, and interviews, and is frequently performed post hoc.

Research evaluation is hampered by a lack of data that clearly connect a research program with its outcomes and, in particular, by ambiguity about who has participated in the program and what contributions they have made. Manually making these connections is very labor-intensive, and algorithmic matching introduces errors and assumptions that can distort results.

In this paper, we discuss the use of identifiers in research evaluation—for individuals, their contributions, and the organizations that sponsor them and fund their work. Global identifier systems are uniquely positioned to capture global mobility and collaboration.

By leveraging connections between local infrastructures and global information resources, evaluators can map data sources that were previously unavailable or prohibitively labor-intensive to assemble.
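
As a rough sketch of the kind of mapping this enables, the following uses the public ORCID and Crossref APIs to walk from a person to their works and on to publication metadata (the ORCID iD is ORCID's own example record; endpoints and JSON fields reflect the public documentation and should be verified before use):

```python
import requests

ORCID_ID = "0000-0002-1825-0097"  # ORCID's documented example record

# 1. Pull the works attached to an ORCID record via the public API.
works = requests.get(
    f"https://pub.orcid.org/v3.0/{ORCID_ID}/works",
    headers={"Accept": "application/json"},
    timeout=30,
).json()

# 2. Extract DOIs from the work summaries, where present.
dois = []
for group in works.get("group", []):
    for summary in group.get("work-summary", []):
        ids = (summary.get("external-ids") or {}).get("external-id", [])
        for eid in ids:
            if eid.get("external-id-type") == "doi":
                dois.append(eid.get("external-id-value"))

# 3. Resolve each DOI against Crossref to recover publication metadata.
for doi in dois[:5]:
    meta = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30).json()
    title = meta["message"].get("title", ["(no title)"])[0]
    print(doi, "->", title)
```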

We describe how identifiers, such as ORCID iDs and DOIs, are being embedded in research workflows across science, technology, engineering, arts, and mathematics; how this is affecting data availability for evaluation purposes; and provide examples of evaluations that are leveraging identifiers.

We also discuss the importance of provenance and preservation in establishing confidence in the reliability and trustworthiness of data and relationships, and in the long-term availability of metadata describing objects and their inter-relationships.

We conclude with a discussion on opportunities and risks for the use of identifiers in evaluation processes.

URL : Using ORCID, DOI, and Other Open Identifiers in Research Evaluation

DOI : https://doi.org/10.3389/frma.2018.00028

OpenAPC: a contribution to a transparent and reproducible monitoring of fee-based open access publishing across institutions and nations

Authors : Dirk Pieper, Christoph Broschinski

The OpenAPC initiative releases data sets on fees paid for open access (OA) journal articles by universities, funders and research institutions under an open database licence.

OpenAPC is part of the INTACT project, which is funded by the German Research Foundation and located at Bielefeld University Library.

This article provides insight into OpenAPC’s technical and organizational background and shows how transparent and reproducible reporting on fee-based open access can be conducted across institutions and publishers to draw conclusions on the state of the OA transformation process.
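
A minimal sketch of the kind of cross-publisher reporting this enables, assuming the openly released apc_de.csv file in OpenAPC's GitHub repository (the file path and column names are taken from the published dataset and should be checked against the current repository):

```python
import pandas as pd

# Public OpenAPC dataset; path assumed from the OpenAPC GitHub repository.
URL = "https://raw.githubusercontent.com/OpenAPC/openapc-de/master/data/apc_de.csv"

apc = pd.read_csv(URL)

# Column names ("publisher", "euro") as used in the released data files.
summary = (
    apc.groupby("publisher")["euro"]
       .agg(articles="count", mean_apc="mean", total_paid="sum")
       .sort_values("total_paid", ascending=False)
)
print(summary.head(10))
```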

As part of the INTACT subproject ESAC, the article also shows how OpenAPC workflows can be used to analyse offsetting deals, using the example of Springer Compact agreements.

URL : OpenAPC: a contribution to a transparent and reproducible monitoring of fee-based open access publishing across institutions and nations

DOI : http://doi.org/10.1629/uksg.439