Rethinking success, integrity, and culture in research (part 1) — a multi-actor qualitative study on success in science

Authors : Noémie Aubert Bonn, Wim Pinxten

Background

Success shapes the lives and careers of scientists. But success in science is difficult to define, let alone to translate into indicators that can be used for assessment. In the past few years, several groups have expressed their dissatisfaction with the indicators currently used for assessing researchers.

But given the lack of agreement on what should constitute success in science, most of these proposals remain unanswered. This paper aims to complement our understanding of success in science and to document areas of tension and conflict in research assessments.

Methods

We conducted semi-structured interviews and focus groups with policy makers, funders, institution leaders, editors or publishers, research integrity office members, research integrity community members, laboratory technicians, researchers, research students, and former researchers who had changed careers, to inquire into the topics of success, integrity, and responsibilities in science.

We used the Flemish biomedical landscape as a baseline to capture the views of interacting and complementary actors in a system setting.

Results

Given the breadth of our results, we divided our findings into a two-paper series, with the current paper focusing on what defines and determines success in science. Respondents depicted success as a multi-factorial, context-dependent, and mutable construct.

Success appeared to be an interaction between characteristics of the researcher (Who), research outputs (What), processes (How), and luck. Interviewees noted that current research assessments overvalue outputs but largely ignore the processes deemed essential for research quality and integrity.

Interviewees suggested that science needs a diversity of indicators that are transparent, robust, and valid, and that also allow a balanced and diverse view of success; that assessment of scientists should not blindly depend on metrics but also value human input; and that quality should be valued over quantity.

Conclusions

The objective of research assessments may be to encourage good researchers, to benefit society, or simply to advance science. Yet we show that current assessments fall short on each of these objectives. Open and transparent inter-actor dialogue is needed to understand what research assessments aim for and how they can best achieve their objective.

DOI : https://doi.org/10.1186/s41073-020-00104-0

Methodological quality of COVID-19 clinical research

Authors : Richard G. Jung, Pietro Di Santo, Cole Clifford, Graeme Prosperi-Porta, Stephanie Skanes, Annie Hung, Simon Parlow, Sarah Visintini, F. Daniel Ramirez, Trevor Simard & Benjamin Hibbert

The COVID-19 pandemic began in early 2020 with major health consequences. While the need to disseminate information to the medical community and the general public was paramount, concerns have been raised regarding the scientific rigor of published reports.

We performed a systematic review to evaluate the methodological quality of currently available COVID-19 studies compared to historical controls. A total of 9895 titles and abstracts were screened and 686 COVID-19 articles were included in the final analysis.

Comparative analysis of COVID-19 versus historical control articles revealed a markedly shorter time to acceptance (13.0 [IQR, 5.0–25.0] days vs. 110.0 [IQR, 71.0–156.0] days, respectively; p < 0.0001).

Furthermore, methodological quality scores were lower in COVID-19 articles across all study designs. In sum, COVID-19 clinical studies reached publication faster, yet with lower methodological quality, than control studies in the same journal. These studies should be revisited as stronger evidence emerges.
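
The summary statistics quoted above are medians with interquartile ranges. As a minimal sketch of how such figures are computed, the snippet below derives a median and IQR from a handful of hypothetical acceptance times (the day counts are invented for illustration; the paper's actual analysis covers 686 COVID-19 articles):

```python
# Compute a median time-to-acceptance with its IQR, in the style of the
# abstract's "13.0 [IQR, 5.0-25.0] days". The day counts are hypothetical.
from statistics import median, quantiles

acceptance_days = [2, 5, 13, 25, 40]  # hypothetical days from submission to acceptance

med = median(acceptance_days)
# "inclusive" interpolates quartiles directly from the sorted sample
q1, _, q3 = quantiles(acceptance_days, n=4, method="inclusive")

print(f"{med:.1f} [IQR, {q1:.1f}-{q3:.1f}] days")  # → 13.0 [IQR, 5.0-25.0] days
```

The `inclusive` method treats the data as the whole population of interest, which is close enough to conventional biomedical IQR reporting for a toy example.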

DOI : https://doi.org/10.1038/s41467-021-21220-5

How is science clicked on Twitter? Click metrics for Bitly short links to scientific publications

Authors : Zhichao Fang, Rodrigo Costas, Wencan Tian, Xianwen Wang, Paul Wouters

To provide some context for the potential engagement behavior of Twitter users around science, this article investigates how Bitly short links to scientific publications embedded in scholarly Twitter mentions are clicked on Twitter.

Based on the click metrics of over 1.1 million Bitly short links referring to Web of Science (WoS) publications, our results show that around 49.5% of them were not clicked by Twitter users. For those Bitly short links with clicks from Twitter, the majority of their Twitter clicks accumulated within a short period of time after they were first tweeted.

Bitly short links to publications in the field of the Social Sciences and Humanities tend to attract more clicks from Twitter than links to publications in other subject fields. This article also assesses the extent to which Twitter clicks are correlated with other impact indicators.

Twitter clicks are weakly correlated with scholarly impact indicators (WoS citations and Mendeley readers), but moderately correlated with other Twitter engagement indicators (total retweets and total likes).

In light of these results, we highlight the importance of paying closer attention to the click metrics of URLs in scholarly Twitter mentions, to improve our understanding of how science information is disseminated and received on Twitter.
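
The weak versus moderate correlations reported above are rank correlations. As a rough illustration, the sketch below computes Spearman's rho in pure Python on invented click, retweet, and citation counts; the simplified formula assumes no tied values, which real click data would of course contain:

```python
# Toy Spearman rank correlation between click counts and other indicators.
# All numbers are invented; real analyses use much larger samples and ties.

def spearman(x, y):
    """Spearman's rho via 1 - 6*sum(d^2)/(n*(n^2-1)); assumes no ties."""
    n = len(x)

    def ranks(values):
        out = [0] * n
        for r, i in enumerate(sorted(range(n), key=lambda i: values[i]), start=1):
            out[i] = r
        return out

    rx, ry = ranks(x), ranks(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

clicks    = [1, 3, 2, 5, 4]  # hypothetical Bitly clicks per link
retweets  = [1, 2, 3, 5, 4]  # hypothetical retweet counts
citations = [2, 1, 4, 3, 5]  # hypothetical WoS citation counts

print(spearman(clicks, retweets))   # strong positive correlation
print(spearman(clicks, citations))  # weaker positive correlation
```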

DOI : https://doi.org/10.1002/asi.24458

Transparency to hybrid open access through publisher-provided metadata: An article-level study of Elsevier

Authors : Najko Jahn, Lisa Matthias, Mikael Laakso

With the growth of open access (OA), the financial flows in scholarly journal publishing have become increasingly complex, but comprehensive data and transparency into these flows are still lacking.

The opaqueness is especially concerning for hybrid OA, where subscription-based journals publish individual articles as OA if an optional fee is paid. This study addresses the lack of transparency by leveraging Elsevier article metadata and provides the first publisher-level study of hybrid OA uptake and invoicing.

Our results show that Elsevier’s hybrid OA uptake grew steadily but slowly from 2015 to 2019, doubling the number of hybrid OA articles published per year and increasing the share of OA articles in Elsevier’s hybrid journals from 2.6% to 3.7% of all articles.

Further, we find that most hybrid OA articles were invoiced directly to authors, followed by articles invoiced through agreements with research funders, institutions, or consortia, with only a few funding bodies driving hybrid OA uptake.

As such, our findings point to the role of publishing agreements and OA policies in hybrid OA publishing. Our results further demonstrate the value of publisher-provided metadata to improve the transparency in scholarly publishing by linking invoicing data to bibliometrics.

URL : https://arxiv.org/abs/2102.04789

Journal policies and editors’ opinions on peer review

Authors : Daniel G Hamilton, Hannah Fraser, Rink Hoekstra, Fiona Fidler

Peer review practices differ substantially between journals and disciplines. This study presents the results of a survey of 322 editors of journals in ecology, economics, medicine, physics and psychology.

We found that 49% of the journals surveyed checked all manuscripts for plagiarism, that 61% allowed authors to recommend both for and against specific reviewers, and that less than 6% used a form of open peer review.

Most journals did not have an official policy on altering reports from reviewers, but 91% of editors identified at least one situation in which it was appropriate for an editor to alter a report. Editors were also asked for their views on five issues related to publication ethics.

A majority expressed support for co-reviewing, reviewers requesting access to data, reviewers recommending citations to their work, editors publishing in their own journals, and replication studies.

Our results provide a window into what is largely an opaque aspect of the scientific process. We hope the findings will inform the debate about the role and transparency of peer review in scholarly publishing.

DOI : https://doi.org/10.7554/eLife.62529

Which aspects of the Open Science agenda are most relevant to scientometric research and publishing? An opinion paper

Authors : Lutz Bornmann, Raf Guns, Michael Thelwall, Dietmar Wolfram

Open Science is an umbrella term that encompasses many recommendations for possible changes in research practices, management, and publishing, with the objective of increasing transparency and accessibility.

This has become an important science policy issue that all disciplines should consider. Many Open Science recommendations may be valuable for the further development of research and publishing but not all are relevant to all fields.

This opinion paper considers the aspects of Open Science that are most relevant for scientometricians, discussing how they can be usefully applied.

DOI : https://doi.org/10.1162/qss_e_00121

Conjoint analysis of researchers’ hidden preferences for bibliometrics, altmetrics, and usage metrics

Authors : Steffen Lemke, Athanasios Mazarakis, Isabella Peters

The number of scholarly articles published annually is growing steadily, as is the number of indicators through which the impact of publications is measured. Little is known about how the increasing variety of available metrics affects researchers’ processes of selecting literature to read.

We conducted ranking experiments embedded in an online survey with 247 participating researchers, most of them from the social sciences. Participants completed a series of tasks in which they were asked to rank fictitious publications by expected relevance, based on their scores on six prototypical metrics.

By applying logistic regression, cluster analysis, and manual coding of survey answers, we obtained detailed data on how prominent metrics for research impact influence our participants’ decisions about which scientific articles to read.

Survey answers revealed a combination of qualitative and quantitative characteristics that researchers consult when selecting literature, while regression analysis showed that, among quantitative metrics, citation counts tend to be of greatest concern, followed by Journal Impact Factors.

Our results suggest that many researchers hold a comparatively favorable view of bibliometrics and widespread skepticism toward altmetrics.

The findings underline the importance of equipping researchers with solid knowledge about specific metrics’ limitations, as they seem to play significant roles in researchers’ everyday relevance assessments.
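
As a loose illustration of the regression idea behind the ranking experiments described above (not the authors' actual model), the sketch below fits a logistic model to synthetic pairwise reading choices generated by a hidden rule that weights citation counts more heavily than the Journal Impact Factor. The fitted weights recover that ordering. All data, weights, and parameters are invented:

```python
# Toy logistic regression: predict which of two fictitious publications a
# reader picks from differences in their metric scores. Synthetic data only.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Feature differences (publication A minus B): citation count, JIF.
# Labels follow a hidden rule in which citations matter more than JIF.
data = []
for d_cit in (-2, -1, 0, 1, 2):
    for d_jif in (-2, -1, 0, 1, 2):
        score = 1.0 * d_cit + 0.3 * d_jif
        if score != 0:
            data.append(((d_cit, d_jif), 1 if score > 0 else 0))

w = [0.0, 0.0]  # learned weights: citations, JIF
lr = 0.1
for _ in range(2000):  # plain batch gradient descent on the logistic loss
    g = [0.0, 0.0]
    for (xc, xj), y in data:
        err = sigmoid(w[0] * xc + w[1] * xj) - y
        g[0] += err * xc
        g[1] += err * xj
    w[0] -= lr * g[0] / len(data)
    w[1] -= lr * g[1] / len(data)

# The model recovers the hidden ordering: citations outweigh JIF.
print(f"citations weight {w[0]:.2f} > JIF weight {w[1]:.2f}")
```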

DOI : https://doi.org/10.1002/asi.24445