Open science in Sámi research: Researchers’ dilemmas

Author : Coppélie Cocq

This article discusses the challenges of Indigenous research in relation to open science, more particularly in relation to Sámi research in Sweden. Based on interviews with scholars active in the multidisciplinary field of Sámi studies, and on policy documents from Sámi organizations, it identifies key challenges as well as the practices and strategies researchers adopt or suggest in response.

Topics addressed include ownership, control, sensitivity and accessibility of data, the consequences of the limitations researchers experience, the role of the historical context, and community-groundedness.

This article aims to contribute to the discussion of the tensions between data management/open science standards and data sovereignty in Indigenous contexts. It does so by bringing in perspectives from Indigenous methodologies (the 4 Rs) and by situating research practices and forms of data colonialism within our contemporary surveillance culture.

Research, in relation to ethics and social sustainability, is an arena where tensions between various agendas become obvious. This article illustrates these tensions through the dilemmas researchers face when working with open science and the advancement of Indigenous research.

Efforts toward ethically valid and culturally sensitive modes of data use are taking shape in Indigenous research, calling for increased awareness of the topic. In the context of Sámi research, the role of academia in such a transformation is also essential.

URL : Open science in Sámi research: Researchers’ dilemmas

DOI : https://doi.org/10.3389/frma.2023.1095169

What constitutes equitable data sharing in global health research? A scoping review of the literature on low-income and middle-income country stakeholders’ perspectives

Authors : Natalia Evertsz, Susan Bull, Bridget Pratt

Introduction

Despite growing consensus on the need for equitable data sharing, there has been very limited discussion about what this should entail in practice. As a matter of procedural fairness and epistemic justice, the perspectives of low-income and middle-income country (LMIC) stakeholders must inform concepts of equitable health research data sharing.

This paper investigates published perspectives in relation to how equitable data sharing in global health research should be understood.

Methods

We undertook a scoping review (2015 onwards) of the literature on LMIC stakeholders’ experiences and perspectives of data sharing in global health research and thematically analysed the 26 articles included in the review.

Results

We report LMIC stakeholders’ published views on how current data sharing mandates may exacerbate inequities, what structural changes are required in order to create an environment conducive to equitable data sharing and what should comprise equitable data sharing in global health research.

Conclusions

In light of our findings, we conclude that data sharing under existing mandates to share data (with minimal restrictions) risks perpetuating a neocolonial dynamic. To achieve equitable data sharing, adopting best practices in data sharing is necessary but insufficient. Structural inequalities in global health research must also be addressed.

It is thus imperative that the structural changes needed to ensure equitable data sharing are incorporated into the broader dialogue on global health research.

URL : What constitutes equitable data sharing in global health research? A scoping review of the literature on low-income and middle-income country stakeholders’ perspectives

DOI : http://dx.doi.org/10.1136/bmjgh-2022-010157

Attending to the Cultures of Data Science Work

Author : Lindsay Poirier

This essay reflects on the shifting attention to the “social” and the “cultural” in data science communities. Although the “social” and the “cultural” have recently been prioritized in data science discourse, the social and cultural concerns raised are almost always outwardly focused: they apply to the communities that data scientists seek to support rather than to the more computationally focused data science communities themselves.

I argue that data science communities have a responsibility to attend not only to the cultures that orient the work of domain communities, but also to the cultures that orient their own work.

I describe how ethnographic frameworks such as thick description can be enlisted to encourage more reflexive data science work, and I conclude with recommendations for documenting the cultural provenance of data policy and infrastructure.

URL : Attending to the Cultures of Data Science Work

DOI : http://doi.org/10.5334/dsj-2023-006

Metrics and peer review agreement at the institutional level

Authors : Vincent A Traag, Marco Malgarini, Scipione Sarlo

In the past decades, many countries have started to fund academic institutions based on the evaluation of their scientific performance. In this context, post-publication peer review is often used to assess scientific performance. Bibliometric indicators have been suggested as an alternative to peer review.

A recurrent question in this context is whether peer review and metrics tend to yield similar outcomes. In this paper, we study the agreement between bibliometric indicators and peer review based on a sample of publications submitted for evaluation to the national Italian research assessment exercise (2011–2014).

In particular, we study the agreement between bibliometric indicators and peer review at a higher aggregation level, namely the institutional level. Additionally, we also quantify the internal agreement of peer review at the institutional level. We base our analysis on a hierarchical Bayesian model using cross-validation.
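
As an illustration only (not the paper’s actual hierarchical Bayesian model), the sketch below shows why the aggregation level matters: scores are first compared paper by paper, then averaged per institution and compared again. All column names and data are hypothetical.

```python
# Illustrative sketch: publication-level vs. institution-level agreement
# between a bibliometric indicator and peer review scores.
# This is NOT the paper's hierarchical Bayesian model; the data are invented.
import pandas as pd

pubs = pd.DataFrame({
    "institution":  ["A", "A", "A", "B", "B", "C", "C", "C"],
    "metric_score": [0.9, 0.4, 0.7, 0.2, 0.5, 0.8, 0.6, 0.3],  # bibliometric indicator
    "review_score": [0.8, 0.6, 0.5, 0.3, 0.4, 0.9, 0.5, 0.4],  # peer review score
})

# Publication-level agreement: rank correlation across individual papers.
pub_level = pubs["metric_score"].corr(pubs["review_score"], method="spearman")

# Institution-level agreement: average both scores per institution first,
# then correlate the institutional averages.
inst = pubs.groupby("institution")[["metric_score", "review_score"]].mean()
inst_level = inst["metric_score"].corr(inst["review_score"], method="spearman")

print(f"publication-level agreement: {pub_level:.2f}")
print(f"institution-level agreement: {inst_level:.2f}")
```

Averaging over an institution’s publications smooths out much of the paper-level disagreement, which is the basic statistical intuition for why agreement can be higher at that level.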

We find that the level of agreement is generally higher at the institutional level than at the publication level. Overall, in this particular context, the agreement between metrics and peer review is on par with the internal agreement between two reviewers for certain fields of science.

This suggests that, for some fields, bibliometric indicators could be considered as an alternative to peer review in the Italian national research assessment exercise. Although the results do not necessarily generalise to other contexts, they raise the question of whether similar findings would hold for other research assessment exercises, such as the one in the United Kingdom.

URL : https://arxiv.org/abs/2006.14830

Hybrid Gold Open Access Citation Advantage in Clinical Medicine: Analysis of Hybrid Journals in the Web of Science

Authors : Chompunuch Saravudecha, Duangruthai Na Thungfai, Chananthida Phasom, Sodsri Gunta-in, Aorrakanya Metha, Peangkobfah Punyaphet, Tippawan Sookruay, Wannachai Sakuludomkan, Nut Koonrungsesomboon

Biomedical fields have seen a remarkable increase in hybrid Gold open access articles. However, it is uncertain whether the hybrid Gold open access option contributes to a citation advantage, that is, an increase in the citations of articles made immediately available as open access, regardless of the article’s quality or whether it involves a trending topic of discussion.

This study compared the citation counts of hybrid Gold open access articles with those of subscription articles published in the same hybrid journals, in order to ascertain whether hybrid Gold open access publication yields a citation advantage.

This cross-sectional study included the list of hybrid journals under 59 categories in the ‘Clinical Medicine’ group from Clarivate’s Journal Citation Reports (JCR) during 2018–2021. The number of citable items with ‘Gold Open Access’ and ‘Subscription and Free to Read’ in each journal, as well as the number of citations of those citable items, were extracted from JCR.

A hybrid Gold open access citation advantage was computed by dividing the number of citations per citable item with hybrid Gold open access by the number of citations per citable item with a subscription.
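
In symbols (our notation, not the authors’), the ratio just described is:

```latex
\[
\mathrm{CA} =
\frac{\text{citations to Gold OA items} \;/\; \text{number of Gold OA citable items}}
     {\text{citations to subscription items} \;/\; \text{number of subscription citable items}}
\]
```

A value above 1 therefore means that, per citable item, hybrid Gold open access articles attracted more citations than subscription articles in the same set of journals.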

A total of 498, 636, 1009, and 1328 hybrid journals in the 2018 JCR, 2019 JCR, 2020 JCR, and 2021 JCR, respectively, were included in this study. The citation advantage of hybrid Gold open access articles over subscription articles in 2018 was 1.45 (95% confidence interval (CI), 1.24–1.65); in 2019, it was 1.31 (95% CI, 1.20–1.41); in 2020, it was 1.30 (95% CI, 1.20–1.39); and in 2021, it was 1.31 (95% CI, 1.20–1.42).

In the ‘Clinical Medicine’ discipline, articles published in hybrid journals as hybrid Gold open access received more citations than those published under the subscription, self-archived, or otherwise free-to-read option.

URL : Hybrid Gold Open Access Citation Advantage in Clinical Medicine: Analysis of Hybrid Journals in the Web of Science

DOI : https://doi.org/10.3390/publications11020021

Biases in scholarly recommender systems: impact, prevalence, and mitigation

Authors : Michael Färber, Melissa Coutinho, Shuzhou Yuan

With the remarkable increase in the number of scientific entities such as publications, researchers, and scientific topics, and the associated information overload in science, academic recommender systems have become increasingly important for millions of researchers and science enthusiasts.

However, it is often overlooked that these systems are subject to various biases. In this article, we first break down the biases of academic recommender systems and characterize them according to their impact and prevalence. In doing so, we distinguish between biases originally caused by humans and biases induced by the recommender system.

Second, we provide an overview of methods that have been used to mitigate these biases in the scholarly domain.

Based on this, third, we present a framework that can be used by researchers and developers to mitigate biases in scholarly recommender systems and to evaluate recommender systems fairly.

Finally, we discuss open challenges and possible research directions related to scholarly biases.

URL : Biases in scholarly recommender systems: impact, prevalence, and mitigation

DOI : https://doi.org/10.1007/s11192-023-04636-2

Do altmetric scores reflect article quality? Evidence from the UK Research Excellence Framework 2021

Authors : Mike Thelwall, Kayvan Kousha, Mahshid Abdoli, Emma Stuart, Meiko Makita, Paul Wilson, Jonathan Levitt

Altmetrics are web-based quantitative impact or attention indicators for academic articles that have been proposed to supplement citation counts. This article reports the first assessment of the extent to which mature altmetrics from Altmetric.com and Mendeley associate with individual article quality scores.

It exploits expert norm-referenced peer review scores from the UK Research Excellence Framework 2021 for 67,030+ journal articles published across all fields in 2014–2017/2018, split into 34 broadly field-based Units of Assessment (UoAs). Altmetrics correlated more strongly with research quality than previously found, although less strongly than raw and field-normalized Scopus citation counts.

Surprisingly, field-normalizing citation counts can reduce their strength as a quality indicator for articles within a single field. For most UoAs, Mendeley reader counts are the best altmetric (e.g., three Spearman correlations with quality scores above 0.5). Tweet counts are also a moderate-strength indicator in eight UoAs (Spearman correlations with quality scores above 0.3), ahead of news citations (eight correlations above 0.3, but generally weaker), blog citations (five correlations above 0.3), and Facebook citations (three correlations above 0.3), at least in the United Kingdom.
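
As a purely illustrative sketch (not the authors’ code or data), a correlation of this kind could be computed per Unit of Assessment along the following lines; the variable names and values are assumptions.

```python
# Illustrative only: Spearman correlation between a hypothetical altmetric
# (Mendeley reader counts) and peer-review quality scores within one UoA.
from scipy.stats import spearmanr

readers = [12, 45, 3, 80, 27, 5, 60, 33]   # Mendeley readers per article (invented)
quality = [2, 3, 1, 4, 3, 2, 4, 3]         # REF-style quality scores 1-4 (invented)

rho, p_value = spearmanr(readers, quality)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```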

In general, altmetrics are the strongest indicators of research quality in the health and physical sciences and weakest in the arts and humanities.

URL : Do altmetric scores reflect article quality? Evidence from the UK Research Excellence Framework 2021

Original location : https://asistdl.onlinelibrary.wiley.com/doi/full/10.1002/asi.24751