Characteristics of Spanish citizen participation practices in science

Authors : Carolina Llorente, Gema Revuelta, Mar Carrió

A new regime of science production is emerging from the involvement of non-scientists. The present study aims to improve understanding of this phenomenon with an analysis of 16 interviews with Spanish coordinators of participatory science practices.

The results indicate a majority of strategic and captive publics and point to communication as a key tool for the development of successful practices.

Five key elements shaping the degree of integration required to develop a citizen participation in science practice were analysed: derived outputs, level of participant contribution, participation assessment, practice replicability, and participant and facilitator training. The study's principal contribution is a set of proposed strategies for removing barriers to citizen participation.

DOI : https://doi.org/10.22323/2.20040205

Science communicators intimidated: researchers’ freedom of expression and the rise of authoritarian populism

Authors : Esa Valiverronen, Sampsa Saikkonen

In this article, we explore scientists’ freedom of expression in the context of authoritarian populism. Our particular case for this analysis is Finland, where the right-wing populist Finns Party entered the government for the first time in 2015.

More recently, after leaving the government in 2017, the party became the most popular party in opinion polls in 2021. We illustrate the current threats to Finnish researchers' freedom of expression using their responses to three surveys, conducted in 2015, 2017 and 2019. We focus on politically motivated disparagement of scientists and experts, and on scientists' experiences of online hate and aggressive feedback.

Further, we relate these findings to recent studies on authoritarian populism and science-related populism. We argue that this development may affect researchers' readiness to communicate their research and expertise in public.

DOI : https://doi.org/10.22323/2.20040208

Article Processing Charges based publications: to which extent the price explains scientific impact?

Authors : Abdelghani Maddi, David Sapinho

The present study aims to analyze the relationship between the normalized citation score (NCS) of scientific publications and the Article Processing Charge (APC) amounts of Gold Open Access publications.
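
For reference, a commonly used field-normalization of this kind (a sketch of the standard mean-normalized citation score; the authors' exact variant may differ) divides a publication's citation count by the expected citation count of comparable publications:

```latex
\mathrm{NCS}_i \;=\; \frac{c_i}{\mathbb{E}\left[\, c \mid \text{field}(i),\ \text{year}(i),\ \text{document type}(i) \,\right]}
```

An NCS of 1 means the publication is cited exactly as often as the world average for its field, publication year, and document type.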

To do so, we use APC information provided by the OpenAPC database and citation scores of publications from the Web of Science (WoS) database. The dataset covers the period from 2006 to 2019, with 83,752 articles published in 4,751 journals belonging to 267 distinct publishers.

Results show that, contrary to the belief that a higher price guarantees greater impact, paying dearly does not necessarily increase the impact of publications. First, the large publishers with high impact are not the most expensive.

Second, the publishers with the highest APCs are not necessarily the best in terms of impact. The correlation between APCs and impact is moderate. Furthermore, the econometric analysis shows that a publication's impact is strongly determined by the quality of the journal in which it is published. International collaboration also plays an important role in citation scores.
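
As a rough illustration of the kind of correlation analysis described (a minimal sketch: the merged OpenAPC/WoS file and its column names are hypothetical, and the paper's actual econometric models are richer):

```python
import pandas as pd
from scipy.stats import pearsonr, spearmanr

# Hypothetical merged dataset: one row per Gold OA article, with the APC paid
# (from OpenAPC) and its field-normalized citation score (from WoS).
df = pd.read_csv("openapc_wos_merged.csv")  # columns "apc_eur", "ncs" are assumptions
df = df.dropna(subset=["apc_eur", "ncs"])

# Pearson captures linear association; Spearman is more robust to the skewed
# distributions typical of both APC amounts and citation counts.
r, r_p = pearsonr(df["apc_eur"], df["ncs"])
rho, rho_p = spearmanr(df["apc_eur"], df["ncs"])
print(f"Pearson r = {r:.2f} (p = {r_p:.3g})")
print(f"Spearman rho = {rho:.2f} (p = {rho_p:.3g})")
```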

URL : https://arxiv.org/abs/2107.07348

From indexation policies through citation networks to normalized citation impacts: Web of Science, Scopus, and Dimensions as varying resonance chambers

Authors : Stephan Stahlschmidt, Dimity Stephen

Dimensions was introduced as an alternative bibliometric database to the well-established Web of Science (WoS) and Scopus; however, all three databases differ fundamentally in coverage and content, as a result of their owners' indexation philosophies.

In light of these differences, we use a citation network analysis and an assessment of the normalised citation impact of "duplicate" publications to explore whether the three databases offer structurally different perspectives on the bibliometric landscape or whether they are essentially homogeneous substitutes.

Our citation network analysis of core and exclusive 2016-2018 publications revealed a large set of core publications indexed in all three databases that are highly self-referential. In comparison, each database selected a set of exclusive publications that appeared to hold similarly low levels of relevance to the core set and to one another, with slightly more internal communication between exclusive publications in Scopus and Dimensions than WoS.
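
A minimal sketch of the kind of core/exclusive citation-flow measurement described above (the node labels, edge list, and helper function are hypothetical toy examples, not the paper's actual data or pipeline):

```python
import networkx as nx

# Directed citation graph: an edge (a, b) means publication a cites publication b.
G = nx.DiGraph()
G.add_edges_from([
    ("core1", "core2"), ("core2", "core3"),  # core -> core (self-referential)
    ("excl_wos1", "core1"),                  # database-exclusive -> core
    ("excl_scopus1", "excl_scopus2"),        # internal communication among exclusives
])
core = {"core1", "core2", "core3"}

def citation_share_into(G, sources, targets):
    """Share of outgoing citations from `sources` that land in `targets`."""
    out = list(G.out_edges(sources))
    if not out:
        return 0.0
    return sum(v in targets for _, v in out) / len(out)

print("core -> core:", citation_share_into(G, core, core))
exclusives = set(G.nodes) - core
print("exclusive -> core:", citation_share_into(G, exclusives, core))
```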

Our comparison of normalised citations for 41,848 publications indexed in all three databases found that German sectors were assessed as more impactful in Scopus and Dimensions than in WoS, particularly sectors with an applied research focus.

We conclude that the databases do present structurally different perspectives, although Scopus and Dimensions, with their additional circle of applied research, vary more from the more basic-research-focused WoS than they do from one another.

URL : https://arxiv.org/abs/2106.01695

Researchers’ attitudes towards the h-index on Twitter 2007–2020: criticism and acceptance

Authors : Mike Thelwall, Kayvan Kousha

The h-index is an indicator of the scientific impact of an academic publishing career. Its hybrid publishing/citation nature and inherent bias against younger researchers, women, people in low resourced countries, and those not prioritizing publishing arguably give it little value for most formal and informal research evaluations.
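
To make the indicator concrete: the h-index is the largest h such that the researcher has h papers with at least h citations each. A minimal computation, not tied to any particular database:

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers each have at least h citations."""
    cited = sorted(citations, reverse=True)
    h = 0
    while h < len(cited) and cited[h] >= h + 1:
        h += 1
    return h

# A career with citation counts [10, 8, 5, 4, 3] has h = 4:
# four papers are each cited at least four times, but not five.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

The "hybrid publishing/citation nature" mentioned above is visible here: the score grows only when both the number of papers and their citation counts grow, which is why careers with fewer publishing years are structurally capped.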

Nevertheless, it is well-known by academics, used in some promotion decisions, and is prominent in bibliometric databases, such as Google Scholar. In the context of this apparent conflict, it is important to understand researchers’ attitudes towards the h-index.

This article used public tweets in English to analyse how scholars discuss the h-index in public: is it mentioned, are tweets about it positive or negative, and has interest decreased since its shortcomings were exposed?

The January 2021 Twitter Academic Research initiative was harnessed to download all English tweets mentioning the h-index from the 2006 start of Twitter until the end of 2020. The results showed a constantly increasing number of tweets.
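
For context, a sketch of the kind of retrieval involved, using the Twitter API v2 full-archive search endpoint that the Academic Research track exposed (the exact query string and pagination the authors used are not given in the abstract, so the query below is an assumption):

```python
import requests

BEARER = "YOUR_ACADEMIC_RESEARCH_TOKEN"  # placeholder credential
URL = "https://api.twitter.com/2/tweets/search/all"  # v2 full-archive search

params = {
    "query": '"h-index" lang:en',          # hypothetical query, not the paper's exact string
    "start_time": "2006-03-21T00:00:00Z",  # Twitter's 2006 start
    "end_time": "2021-01-01T00:00:00Z",
    "max_results": 500,                    # maximum page size for this endpoint
}
headers = {"Authorization": f"Bearer {BEARER}"}

tweets, next_token = [], None
while True:  # paginate until the archive is exhausted (rate limits apply in practice)
    if next_token:
        params["next_token"] = next_token
    resp = requests.get(URL, headers=headers, params=params)
    resp.raise_for_status()
    data = resp.json()
    tweets.extend(data.get("data", []))
    next_token = data.get("meta", {}).get("next_token")
    if not next_token:
        break
```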

Whilst the most popular tweets unapologetically used the h-index as an indicator of research performance, 28.5% of tweets were critical of its simplistic nature, and others (8%) joked about it. The results suggest that interest in the h-index is still increasing online, even though the scientists willing to evaluate it in public tend to be critical.

Nevertheless, in limited situations it may be effective at succinctly conveying the message that a researcher has had a successful publishing career.

DOI : https://doi.org/10.1007/s11192-021-03961-8

Systematizing Confidence in Open Research and Evidence (SCORE)

Authors : Nazanin Alipourfard, Beatrix Arendt, Daniel M. Benjamin, Noam Benkler, Michael Bishop, Mark Burstein, Martin Bush, James Caverlee, Yiling Chen, Chae Clark, Anna Dreber Almenberg, Tim Errington, Fiona Fidler, Nicholas Fox, Aaron Frank, Hannah Fraser, Scott Friedman, Ben Gelman, James Gentile, C Lee Giles, Michael B Gordon, Reed Gordon-Sarney, Christopher Griffin, Timothy Gulden et al.

Assessing the credibility of research claims is a central, continuous, and laborious part of the scientific process. Credibility assessment strategies range from expert judgment to aggregating existing evidence to systematic replication efforts.

Such assessments can require substantial time and effort. Research progress could be accelerated if there were rapid, scalable, accurate credibility indicators to guide attention and resource allocation for further assessment.

The SCORE program is creating and validating algorithms to provide confidence scores for research claims at scale. To investigate the viability of scalable tools, teams are creating: a database of claims from papers in the social and behavioral sciences; expert and machine-generated estimates of credibility; and evidence of reproducibility, robustness, and replicability to validate the estimates.
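
To illustrate what such a claims database might hold, a hypothetical record shape (the program's actual schema is not specified in the abstract; all field names here are assumptions):

```python
from dataclasses import dataclass

@dataclass
class ClaimRecord:
    claim_id: str
    paper_doi: str
    claim_text: str                          # the focal claim extracted from the paper
    expert_score: float | None = None        # elicited expert credibility estimate (0-1)
    machine_score: float | None = None       # algorithmic confidence score (0-1)
    replication_outcome: str | None = None   # e.g. "replicated", "failed", "mixed"

# Example record with placeholder values.
record = ClaimRecord(
    claim_id="C-0001",
    paper_doi="10.xxxx/example",
    claim_text="Intervention X increases outcome Y.",
    expert_score=0.62,
    machine_score=0.58,
)
```

Pairing expert and machine scores with replication evidence in one record is what would allow the machine estimates to be validated against ground truth, as the abstract describes.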

Beyond the primary research objective, the data and artifacts generated from this program will be openly shared and provide an unprecedented opportunity to examine research credibility and evidence.

DOI : https://doi.org/10.31235/osf.io/46mnb