Indicators of research quality, quantity, openness and responsibility in institutional review, promotion and tenure policies across seven countries

Authors : Nancy Pontika, Thomas Klebel, Antonia Correia, Hannah Metzler, Petr Knoth, Tony Ross-Hellauer

The need to reform research assessment processes related to career advancement at research institutions has become increasingly recognised in recent years, especially to better foster open and responsible research practices. Current assessment criteria are believed to focus too heavily on inappropriate criteria related to productivity and quantity as opposed to quality, collaborative open research practices, and the socio-economic impact of research.

However, evidence of the extent of these issues is urgently needed to inform reform efforts. We analyse current practices as revealed by documentation on institutional review, promotion and tenure processes in seven countries (Austria, Brazil, Germany, India, Portugal, United Kingdom and United States of America).

Through systematic coding and analysis of 143 review, promotion and tenure (RPT) policy documents from 107 institutions for the prevalence of 17 criteria (including those related to qualitative or quantitative assessment of research, service to the institution or profession, and open and responsible research practices), we compare assessment practices across a range of international institutions to significantly broaden this evidence-base.

Although prevalence of indicators varies considerably between countries, overall we find that currently open and responsible research practices are minimally rewarded and problematic practices of quantification continue to dominate.


DOI : https://doi.org/10.1162/qss_a_00224

Publishing of COVID-19 preprints in peer-reviewed journals, preprinting trends, public discussion and quality issues

Authors : Ivan Kodvanj, Jan Homolak, Vladimir Trkulja

COVID-19-related (vs. non-related) articles appear to be more expeditiously processed and published in peer-reviewed journals.

We aimed to evaluate: (i) whether COVID-19-related preprints were favored for publication, (ii) preprinting trends and public discussion of the preprints, and (iii) the relationship between the publication topic (COVID-19-related or not) and quality issues.

Manuscripts deposited at bioRxiv and medRxiv between January 1 and September 27, 2020 were assessed for the probability of publication in peer-reviewed journals, and those published were evaluated for submission-to-acceptance time. The extent of public discussion was assessed based on Altmetric and Disqus data.

The Retraction Watch Database and PubMed were used to explore the retraction of COVID-19 and non-COVID-19 articles and preprints. With adjustment for the preprinting server and number of deposited versions, COVID-19-related preprints were more likely to be published within 120 days of deposition of the first version (OR = 1.96, 95% CI: 1.80–2.14), as well as over the entire observed period (OR = 1.39, 95% CI: 1.31–1.48). Submission-to-acceptance time was 35.85 days (95% CI: 32.25–39.45) shorter for COVID-19 articles.
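The odds ratios above come from models adjusted for server and version count, which the abstract does not specify further. As a minimal illustration only (with hypothetical counts, not the study's data), an unadjusted odds ratio with a Wald 95% confidence interval can be computed from a 2×2 publication table:

```python
import math

def odds_ratio_wald(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI from a 2x2 table.

    a: COVID-19 preprints published,  b: COVID-19 preprints unpublished,
    c: other preprints published,     d: other preprints unpublished.
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts for illustration only
or_, lo, hi = odds_ratio_wald(800, 1200, 500, 1500)
print(f"OR = {or_:.2f}, 95% CI: {lo:.2f}-{hi:.2f}")
```

A regression-based analysis like the authors' would instead estimate the OR as an exponentiated coefficient while holding the adjustment covariates fixed.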

Public discussion of preprints was modest and COVID-19 articles were overrepresented in the pool of retracted articles in 2020. Current data suggest a preference for publication of COVID-19-related preprints over the observed period.

URL : https://doi.org/10.1007/s11192-021-04249-7

RipetaScore: Measuring the Quality, Transparency, and Trustworthiness of a Scientific Work

Authors : Josh Q. Sumner, Cynthia Hudson Vitale, Leslie D. McIntosh

A wide array of existing metrics quantifies a scientific paper’s prominence or the author’s prestige. Many who use these metrics make assumptions that higher citation counts or more public attention must indicate more reliable, better quality science.

While current metrics offer valuable insight into scientific publications, they are an inadequate proxy for measuring the quality, transparency, and trustworthiness of published research.

Three essential elements in establishing trust in a work are trust in the paper, trust in the author, and trust in the data. To address these elements in a systematic and automated way, we propose the ripetaScore as a direct measurement of a paper’s research practices, professionalism, and reproducibility.

Using a sample of our current corpus of academic papers, we demonstrate the ripetaScore’s efficacy in determining the quality, transparency, and trustworthiness of an academic work.

In this paper, we aim to provide a metric to evaluate scientific reporting quality in terms of transparency and trustworthiness of the research, professionalism, and reproducibility.


DOI : https://doi.org/10.3389/frma.2021.751734

The role of metrics in peer assessments

Authors : Liv Langfeldt, Ingvild Reymert, Dag W Aksnes

Metrics on scientific publications and their citations are easily accessible and are often referred to in assessments of research and researchers. This paper addresses whether metrics are considered a legitimate and integral part of such assessments. Based on an extensive questionnaire survey in three countries, the opinions of researchers are analysed.

We provide comparisons across academic fields (cardiology, economics, and physics) and contexts for assessing research (identifying the best research in their field, assessing grant proposals and assessing candidates for positions).

A minority of the researchers responding to the survey reported that metrics were reasons for considering something to be the best research. Still, a large majority in all the studied fields indicated that metrics were important or partly important in their review of grant proposals and assessments of candidates for academic positions.

In these contexts, the citation impact of the publications and, particularly, the number of publications were emphasized. These findings hold across all fields analysed, although the economists relied more on productivity measures than the cardiologists and the physicists. Moreover, reviewers with high scores on bibliometric indicators seemed more likely than other reviewers to adhere to metrics in their assessments.

Hence, when planning and using peer review, one should be aware that reviewers—in particular reviewers who score high on metrics—find metrics to be a good proxy for the future success of projects and candidates, and rely on metrics in their evaluation procedures despite the concerns in scientific communities on the use and misuse of publication metrics.


DOI : https://doi.org/10.1093/reseval/rvaa032

Is rapid scientific publication also high quality? Bibliometric analysis of highly disseminated COVID-19 research papers

Authors : Amandeep Khatter, Michael Naughton, Hajira Dambha-Miller, Patrick Redmond

The impact of COVID-19 has underlined the need for reliable information to guide clinical practice and policy. This urgency has to be balanced against disruption to journal handling capacity and the continued need to ensure scientific rigour.

We examined the reporting quality of highly disseminated COVID-19 research papers using a bibliometric analysis examining reporting quality and risk of bias (RoB) amongst 250 top scoring Altmetric Attention Score (AAS) COVID-19 research papers between January and April 2020.
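Selecting the 250 top-scoring papers by Altmetric Attention Score is a simple top-N ranking. A sketch of that selection step (field names and values are hypothetical, not the study's data):

```python
import heapq

# Hypothetical records: each paper has a title and an Altmetric Attention Score
papers = [
    {"title": "Paper A", "aas": 4051},
    {"title": "Paper B", "aas": 1105},
    {"title": "Paper C", "aas": 2015},
]

def top_by_aas(papers, n):
    """Return the n papers with the highest Altmetric Attention Score."""
    return heapq.nlargest(n, papers, key=lambda p: p["aas"])

top2 = top_by_aas(papers, 2)  # Paper A and Paper C
```

`heapq.nlargest` avoids sorting the full corpus when only the top slice is needed.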

Method-specific RoB tools were used to assess quality. After exclusions, 84 studies from 44 journals were included. Forty-three (51%) were case series/studies, and only one was a randomized controlled trial.

Most authors were from institutions based in China (n = 44, 52%). The median AAS and impact factor were 2015 (interquartile range [IQR] 1,105–4,051.5) and 12.8 (IQR 5–44.2), respectively. Nine studies (11%) utilized a formal reporting framework, 62 (74%) included a funding statement, and 41 (49%) were at high RoB.

This review of the most widely disseminated COVID-19 studies highlights a preponderance of low-quality case series with few research papers adhering to good standards of reporting. It emphasizes the need for cautious interpretation of research and the increasingly vital responsibility that journals have in ensuring high-quality publications.


DOI : https://doi.org/10.1002/leap.1403

Methodological quality of COVID-19 clinical research

Authors : Richard G. Jung, Pietro Di Santo, Cole Clifford, Graeme Prosperi-Porta, Stephanie Skanes, Annie Hung, Simon Parlow, Sarah Visintini, F. Daniel Ramirez, Trevor Simard & Benjamin Hibbert

The COVID-19 pandemic began in early 2020 with major health consequences. While a need to disseminate information to the medical community and general public was paramount, concerns have been raised regarding the scientific rigor in published reports.

We performed a systematic review to evaluate the methodological quality of currently available COVID-19 studies compared to historical controls. A total of 9895 titles and abstracts were screened and 686 COVID-19 articles were included in the final analysis.

Comparative analysis of COVID-19 to historical articles reveals a shorter time to acceptance (13.0 [IQR 5.0–25.0] days vs. 110.0 [IQR 71.0–156.0] days in COVID-19 and control articles, respectively; p < 0.0001).

Furthermore, methodological quality scores are lower in COVID-19 articles across all study designs. COVID-19 clinical studies have a shorter time to publication and have lower methodological quality scores than control studies in the same journal. These studies should be revisited with the emergence of stronger evidence.
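The headline comparison above rests on medians and interquartile ranges. A minimal sketch of computing those summaries with the Python standard library (acceptance-time values are made up for illustration, not the study's data):

```python
import statistics

def median_iqr(days):
    """Return (median, Q1, Q3) of submission-to-acceptance times in days."""
    # statistics.quantiles uses the 'exclusive' method by default
    q1, q2, q3 = statistics.quantiles(days, n=4)
    return statistics.median(days), q1, q3

covid = [5, 8, 13, 20, 25, 30]          # hypothetical COVID-19 articles
control = [70, 95, 110, 130, 155, 160]  # hypothetical historical controls

for label, times in [("COVID-19", covid), ("control", control)]:
    med, q1, q3 = median_iqr(times)
    print(f"{label}: median {med} days [IQR {q1}-{q3}]")
```

Medians and IQRs are preferred over means here because time-to-acceptance distributions are typically right-skewed.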


DOI : https://doi.org/10.1038/s41467-021-21220-5

Comparing quality of reporting between preprints and peer-reviewed articles in the biomedical literature

Authors : Clarissa F. D. Carneiro, Victor G. S. Queiroz, Thiago C. Moulin, Carlos A. M. Carvalho, Clarissa B. Haas, Danielle Rayêe, David E. Henshall, Evandro A. De-Souza, Felippe Espinelli, Flávia Z. Boos, Gerson D. Guercio, Igor R. Costa, Karina L. Hajdu, Martin Modrák, Pedro B. Tan, Steven J. Burgess, Sylvia F. S. Guerra, Vanessa T. Bortoluzzi, Olavo B. Amaral

Preprint usage is growing rapidly in the life sciences; however, questions remain on the relative quality of preprints when compared to published articles. An objective dimension of quality that is readily measurable is completeness of reporting, as transparency can improve the reader’s ability to independently interpret data and reproduce findings.

In this observational study, we compared random samples of preprints posted on bioRxiv and articles published in PubMed-indexed journals in 2016 using a quality of reporting questionnaire. We found that peer-reviewed articles had, on average, higher quality of reporting than preprints, although this difference was small.

We found larger differences favoring PubMed in subjective ratings of how clearly titles and abstracts presented the main findings and how easy it was to locate relevant reporting information.

Interestingly, an exploratory analysis showed that preprints with figures and legends embedded within text had reporting scores similar to PubMed articles.

These differences cannot be directly attributed to peer review or editorial processes, as manuscripts might already differ before submission due to greater uptake of preprints by particular research communities.

Nevertheless, our results show that quality of reporting in preprints in the life sciences is within a similar range as that of peer-reviewed articles, albeit slightly lower on average, supporting the idea that preprints should be considered valid scientific contributions.

An ongoing second phase of the project is comparing preprints to their own published versions in order to more directly assess the effects of peer review.


DOI : https://doi.org/10.1101/581892