Article Processing Charges based publications: to which extent the price explains scientific impact?

Authors : Abdelghani Maddi, David Sapinho

The present study aims to analyze the relationship between the Normalized Citation Score (NCS) of scientific publications and the Article Processing Charges (APCs) of Gold Open Access publications.

To do so, we use APC data provided by the OpenAPC database and citation scores of publications from the Web of Science (WoS) database. The dataset covers the period from 2006 to 2019, with 83,752 articles published in 4,751 journals belonging to 267 distinct publishers.

Results show that, contrary to the belief that a higher price buys higher visibility, paying more does not necessarily increase the impact of publications. First, the large, high-impact publishers are not the most expensive.

Second, publishers with the highest APCs are not necessarily the best in terms of impact. The correlation between APCs and impact is moderate. Moreover, the econometric analysis shows that a publication's impact is strongly determined by the quality of the journal in which it is published. International collaboration also plays an important role in citation scores.
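As an illustration of the kind of correlation measure behind the "moderate correlation" finding, here is a minimal sketch using made-up APC amounts and citation scores (these numbers are hypothetical, not data from the study):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical APC amounts (EUR) and normalized citation scores
apcs = [1500, 2000, 2500, 3000, 5000]
ncs = [0.9, 1.1, 1.3, 1.2, 1.4]
print(round(pearson(apcs, ncs), 2))
```

A value near 0 would mean price tells us nothing about impact; a value near 1 would support the "pay more, cite more" belief. The paper reports something in between.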

URL : https://arxiv.org/abs/2107.07348

From indexation policies through citation networks to normalized citation impacts: Web of Science, Scopus, and Dimensions as varying resonance chambers

Authors : Stephan Stahlschmidt, Dimity Stephen

Dimensions was introduced as an alternative bibliometric database to the well-established Web of Science (WoS) and Scopus; however, all three databases differ fundamentally in coverage and content as a result of their owners’ indexation philosophies.

In light of these differences, we explore here, using a citation network analysis and an assessment of the normalised citation impact of “duplicate” publications, whether the three databases offer structurally different perspectives on the bibliometric landscape or are essentially homogeneous substitutes.

Our citation network analysis of core and exclusive 2016-2018 publications revealed a large set of core publications indexed in all three databases that are highly self-referential. In comparison, each database selected a set of exclusive publications that appeared to hold similarly low levels of relevance to the core set and to one another, with slightly more internal communication between exclusive publications in Scopus and Dimensions than in WoS.

Our comparison of normalised citations for 41,848 publications indexed in all three databases found that German sectors were evaluated as more impactful in Scopus and Dimensions compared to WoS, particularly sectors with an applied research focus.

We conclude that the databases do present structurally different perspectives, although Scopus and Dimensions, with their additional circle of applied research, vary more from the basic-research-focused WoS than they do from one another.

URL : https://arxiv.org/abs/2106.01695

Researchers’ attitudes towards the h-index on Twitter 2007–2020: criticism and acceptance

Authors : Mike Thelwall, Kayvan Kousha

The h-index is an indicator of the scientific impact of an academic publishing career. Its hybrid publishing/citation nature and inherent bias against younger researchers, women, people in low-resourced countries, and those not prioritizing publishing arguably give it little value for most formal and informal research evaluations.
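For readers unfamiliar with the indicator being debated: a researcher has h-index h if h of their papers have at least h citations each. A minimal sketch (the citation counts below are invented for illustration):

```python
def h_index(citations):
    """Largest h such that the author has h papers with >= h citations."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # the rank-th paper still has >= rank citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4
print(h_index([25, 8, 5, 3, 3]))  # 3
```

The second example shows the "hybrid" nature criticised in the abstract: one highly cited paper (25 citations) barely moves the h-index, while steady mid-level output raises it.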

Nevertheless, it is well-known by academics, used in some promotion decisions, and is prominent in bibliometric databases, such as Google Scholar. In the context of this apparent conflict, it is important to understand researchers’ attitudes towards the h-index.

This article used public tweets in English to analyse how scholars discuss the h-index in public: is it mentioned, are tweets about it positive or negative, and has interest decreased since its shortcomings were exposed?

The January 2021 Twitter Academic Research initiative was harnessed to download all English tweets mentioning the h-index from the 2006 start of Twitter until the end of 2020. The results showed a constantly increasing number of tweets.

Whilst the most popular tweets unapologetically used the h-index as an indicator of research performance, 28.5% of tweets were critical of its simplistic nature and others (8%) joked about it. The results suggest that interest in the h-index is still increasing online, even though the scientists willing to evaluate it in public tend to be critical.

Nevertheless, in limited situations it may be effective at succinctly conveying the message that a researcher has had a successful publishing career.

DOI : https://doi.org/10.1007/s11192-021-03961-8

Systematizing Confidence in Open Research and Evidence (SCORE)

Authors : Nazanin Alipourfard, Beatrix Arendt, Daniel M. Benjamin, Noam Benkler, Michael Bishop, Mark Burstein, Martin Bush, James Caverlee, Yiling Chen, Chae Clark, Anna Dreber Almenberg, Tim Errington, Fiona Fidler, Nicholas Fox, Aaron Frank, Hannah Fraser, Scott Friedman, Ben Gelman, James Gentile, C Lee Giles, Michael B Gordon, Reed Gordon-Sarney, Christopher Griffin, Timothy Gulden et al.

Assessing the credibility of research claims is a central, continuous, and laborious part of the scientific process. Credibility assessment strategies range from expert judgment to aggregating existing evidence to systematic replication efforts.

Such assessments can require substantial time and effort. Research progress could be accelerated if there were rapid, scalable, accurate credibility indicators to guide attention and resource allocation for further assessment.

The SCORE program is creating and validating algorithms to provide confidence scores for research claims at scale. To investigate the viability of scalable tools, teams are creating: a database of claims from papers in the social and behavioral sciences; expert and machine generated estimates of credibility; and, evidence of reproducibility, robustness, and replicability to validate the estimates.

Beyond the primary research objective, the data and artifacts generated from this program will be openly shared and provide an unprecedented opportunity to examine research credibility and evidence.


DOI : https://doi.org/10.31235/osf.io/46mnb

The rise of multiple institutional affiliations in academia

Authors : Hanna Hottenrott, Michael E. Rose, Cornelia Lawson

This study provides the first systematic, international, large-scale evidence on the extent and nature of multiple institutional affiliations on journal publications. Studying more than 15 million authors and 22 million articles from 40 countries, we document that in 2019 almost one in three articles was (co-)authored by authors with multiple affiliations, and that the share of authors with multiple affiliations increased from around 10% to 16% since 1996.

The growth of multiple affiliations is prevalent in all fields and is stronger in high-impact journals. About 60% of multiple affiliations are between institutions from within the academic sector.

International co-affiliations, which account for about a quarter of multiple affiliations, most often involve institutions from the United States, China, Germany and the United Kingdom, suggesting a core-periphery network. Network analysis also reveals a number of communities of countries that are more likely to share affiliations.

We discuss potential causes and show that the timing of the rise in multiple affiliations can be linked to the introduction of more competitive funding structures such as “excellence initiatives” in a number of countries. We discuss implications for science and science policy.


DOI : https://doi.org/10.1002/asi.24472

The Impact of the German ‘DEAL’ on Competition in the Academic Publishing Market

Authors : Justus Haucap, Nima Moshgbar, Wolfgang Benedikt Schmal

The German DEAL agreements between German universities and research institutions on the one side and Springer Nature and Wiley on the other side facilitate easy open access publishing for researchers located in Germany.

We use a dataset of all publications in chemistry from 2016 to 2020 and apply a difference-in-differences approach to estimate the impact on eligible scientists’ choice of publication outlet.

We find that even in the short period following the conclusion of these DEAL agreements, publication patterns in the field of chemistry have changed, as eligible researchers have increased their publications in Wiley and Springer Nature journals at the cost of other journals.
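The difference-in-differences logic behind this estimate compares the change in publishing behaviour of DEAL-eligible researchers with the change for a non-eligible control group over the same period. A minimal sketch with invented shares (not the paper's data):

```python
# Illustrative difference-in-differences on made-up publication shares:
# the outcome is the share of a group's chemistry output published in
# DEAL (Wiley / Springer Nature) journals, before vs. after the agreements.

def did(treated_pre, treated_post, control_pre, control_post):
    """DiD estimate: change in the treated group minus change in the control."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# Hypothetical mean shares of output in DEAL journals
effect = did(treated_pre=0.30, treated_post=0.36,
             control_pre=0.31, control_post=0.32)
print(round(effect, 2))  # 0.05 -> a 5 percentage point shift toward DEAL journals
```

Subtracting the control group's change nets out field-wide trends (e.g. journals simply growing), isolating the shift attributable to DEAL eligibility, under the usual parallel-trends assumption.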

From this, two related competition concerns emerge: First, academic libraries may, at least in the long run, be left with fewer funds and incentives to subscribe to non-DEAL journals published by smaller publishers or to fund open access publications in these journals.

Second, eligible authors may prefer to publish in journals included in the DEAL agreements, thereby giving DEAL journals a competitive advantage over non-DEAL journals in attracting good papers.

Given the two-sided market nature of the academic journal market, these effects may both further spur the concentration process in this market.

URL : https://ssrn.com/abstract=3815451