The credibility of ethnographic materials in the face of the open research data movement

Authors : Alix Levain, Florence Revelin, Anne-Gaëlle Beurier, Marianne Noël

Open research data policies rest on arguments of transparency, innovation and the democratization of knowledge. This article seeks to make their implications intelligible for communities working with ethnographic data, which are confronted with a transformation of the criteria by which the credibility of the knowledge they produce is recognized.

While researchers who practice ethnography are engaged in situated forms of sharing their materials with peers, with other disciplines and with "source communities", the tightening of external control over the conditions under which this sharing takes place destabilizes the credibility economies that structure these practices.

More than a reluctance towards the opening process, ethnographers' withdrawal from the movement appears, in light of our analysis, to result both from the existence of alternative ecologies of empirical materials and from an ethics of the margins embedded in often implicit professional norms.

DOI : https://doi.org/10.4000/rac.30291

Reconfigurations of the vectors of scientific credibility at the interface between social worlds

Authors : Fabrizio Li Vigni, Séverine Louvel, Benjamin Raimbault

The credibility of scientists is the subject of debates over the risks of discredit that would follow from a loss of researchers' autonomy with respect to economic interests, activist logics or political agendas.

These situations, in which scientific credibility is put to the test before society and the community of peers, raise a more general question: how do researchers engaged in collectives positioned across several social worlds build their credibility with their colleagues?

Do their activities strengthen or weaken the classic vectors of scientific credibility? Concomitantly, do we observe the emergence of new vectors of credibility?

The articles in this thematic issue examine contemporary reconfigurations of credibility along four axes of transformation of the sciences: the opening up and banking of research data; science-industry relations; interdisciplinarity; and researchers' public engagements.

In this introductory article, we revisit the history of the notion of scientific credibility in Science & Technology Studies, as first proposed by Bruno Latour and Steve Woolgar and then by Steven Shapin and Thomas Gieryn, and the ways in which it has been taken up since; we then present the five articles of the issue and draw out their cross-cutting contributions.

We emphasize that, far more than the advent of new vectors of scientific credibility, these articles reveal transformations at the margins, situated and contradictory.

DOI : https://doi.org/10.4000/rac.30365

"Patents barely rank as a publication": Valorization projects and the credibility cycle at the CNRS

Author : Victoria Brun

This article seeks to clarify the place that valorization activities occupy in the careers of public research staff and the ways in which they do or do not work to internalize them within the academic credibility cycle (Latour & Woolgar, 1979).

Drawing on an investigation conducted within valorization projects linked to the CNRS, the analysis shows that valorization activities are conceived jointly with academic activities. While researchers most often fail to convert them into recognition without a detour through publication, they can reinject this investment in the form of funding and equipment for other work.

Others decide to externalize them, making valorization a sideline to the career. Doctoral candidates and engineers, who nonetheless help fuel researchers' credibility cycle, pursue parallel professional paths. Finally, involvement in valorization projects carries risks of discredit, which researchers defuse by defending a conception of scientific disinterestedness that is compatible with an applied perspective.

The transformation of the credibility economy thus takes place at the margins, despite the many incentive schemes put in place by research institutions.

DOI : https://doi.org/10.4000/rac.30214

Academia should stop using Beall's lists and review their use in previous studies

Authors : Jaime A. Teixeira da Silva, Graham Kendall

Academics (should) strive to submit to journals that are academically sound and scholarly. To achieve this, they can either restrict their submissions to journals that appear on safelists or avoid submitting to journals that appear on watchlists (the older terms "whitelists" and "blacklists", respectively, are now generally avoided).

The most well-known of these lists was curated by Jeffrey Beall. Beall’s Lists (there are two, one for stand-alone journals and one for publishers) were taken offline by Beall himself in January 2017.

Prior to 2017, Beall’s Lists were widely cited and utilized, including to make quantitative claims about scholarly publishing. Even after Beall’s Lists became obsolete (they have not been maintained for the past six years), they continue to be widely cited and used. This paper argues that the use of Beall’s Lists, pre- and post-2017, may constitute a methodological error and, even if papers carry a disclaimer or limitations section noting this weakness, their conclusions cannot always be relied upon.

This paper also argues for the need to conduct a detailed post-publication assessment of reports in the literature that used Beall’s Lists to validate their findings and conclusions, assuming that it becomes accepted that Beall’s Lists are not a reliable resource for scientific investigation.

Finally, this paper contends that any papers that have identified methodological errors should be corrected. Several lists that were cloned from Beall’s Lists have also emerged and are also being cited. These should also be included in any post-publication investigation that is conducted.

URL : Academia should stop using Beall's lists and review their use in previous studies

DOI : https://doi.org/10.47316/cajmhe.2023.4.1.04

The rise of preprints in earth sciences

Authors : Olivier Pourret, Daniel Enrique Ibarra

The pace at which scientific information spreads has accelerated in recent years. In this context, it appears that many scientific disciplines are beginning to recognize the value and possibility of sharing open access (OA) manuscripts online in preprint form.

Preprints are academic manuscripts that are made publicly available before they have been peer reviewed. They have existed in research since at least the 1960s, well before the creation of arXiv for physics and mathematics in 1991. Since then, preprint platforms (which can be publisher- or community-driven, for-profit or not-for-profit, and built on proprietary or free and open source software) have gained popularity in many fields, for example bioRxiv for the biological sciences.

Today, many platforms exist, either discipline-specific or cross-domain, and their number has grown exponentially over the past ten years. Preprints as a whole still make up a very small portion of scholarly publishing, but a large group of early adopters is testing out these value-adding tools across a much wider range of disciplines than in the past.

In this opinion article, we provide perspective on the three main options available for earth scientists, namely EarthArXiv, ESSOAr/ESS Open Archive and EGUsphere.

Reflecting on motivations: How reasons to publish affect research behaviour in astronomy

Author : Julia Heuritsch

Recent research in the field of reflexive metrics, which analyses the effects of the use of performance indicators on scientific conduct, has studied the emergence and consequences of evaluation gaps in science.

The concept of evaluation gaps captures potential discrepancies between what researchers value about their research, in particular research quality, and what metrics measure. In the language of rational choice theory, an evaluation gap persists if motivational factors arising out of the internal component of an actor’s situation are incongruent with those arising out of the external components.

The aim of this research is therefore to study and compare autonomous and controlled motivations to become an astronomer, to do research in astronomy and to publish scientific papers. The study is based on a comprehensive quantitative survey of academic and non-academic astronomers worldwide, which received 3,509 responses.

By employing verified instruments to measure perceived publication pressure, distributive and procedural justice, overcommitment to work and the observation of scientific misconduct, this paper also investigates how these different motivational factors affect research output and behaviour.

I find evidence for an evaluation gap, and I find that controlled motivational factors arising from evaluation procedures based on publication records drive up publication pressure, which in turn increases the perceived frequency of misbehaviour.

URL : Reflecting on motivations: How reasons to publish affect research behaviour in astronomy

DOI : https://doi.org/10.1371/journal.pone.0281613

Biomedical supervisors’ role modeling of open science practices

Authors : Tamarinde L Haven, Susan Abunijela, Nicole Hildebrand

Supervision is one important way to socialize Ph.D. candidates into open and responsible research. We hypothesized that open science practices (here, open access publishing and data sharing) would be more likely to be found in the empirical publications that form part of a Ph.D. thesis when the candidate's supervisor engaged in these practices than when the supervisor did not, or did so less often.

Starting from thesis repositories at four Dutch university medical centers, we included 211 supervisor-candidate pairs, resulting in a sample of 2,062 publications. We determined open access status using UnpaywallR and open data using Oddpub, and we manually screened publications with potential open data statements. Eighty-three percent of our sample was published open access, and 9% had open data statements.
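
As a point of reference for how open access status can be checked programmatically, here is a minimal Python sketch that queries the public Unpaywall REST API for a single DOI. It is an illustrative analogue only, not the authors' UnpaywallR/Oddpub pipeline; the endpoint, the required email parameter and the is_oa / oa_status fields come from Unpaywall's documented API, and the example DOI and email address are placeholders.

    import requests

    UNPAYWALL_API = "https://api.unpaywall.org/v2/"

    def oa_status(doi: str, email: str) -> dict:
        """Return the open access status of one DOI according to Unpaywall."""
        resp = requests.get(UNPAYWALL_API + doi, params={"email": email}, timeout=30)
        resp.raise_for_status()
        record = resp.json()
        return {
            "doi": doi,
            "is_oa": record.get("is_oa", False),             # was any OA copy found?
            "oa_status": record.get("oa_status", "closed"),  # gold / hybrid / green / bronze / closed
        }

    # Example call (the email is a placeholder required by the Unpaywall API):
    print(oa_status("10.7554/eLife.83484", email="you@example.org"))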

Having a supervisor who published open access more often than the national average was associated with 1.99 times the odds of publishing open access; however, this effect became nonsignificant when correcting for institution. Having a supervisor who shared data was associated with 2.22 (CI: 1.19–4.12) times the odds of sharing data compared to having a supervisor who did not.

This odds ratio increased to 4.6 (CI: 1.86–11.35) after removing false positives. The prevalence of open data in our sample was comparable to that found in international studies; open access rates were higher. Whilst Ph.D. candidates spearhead initiatives to promote open science, this study adds value by investigating the role supervisors play in promoting it.
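
To make the reported effect sizes concrete, the short Python sketch below computes an odds ratio with a 95% Wald confidence interval from a 2x2 table. The counts are invented for illustration only (they are not the study's data), and the study itself reports model-based estimates corrected for institution, so this is merely a back-of-the-envelope analogue of the kind of figure quoted above (2.22, CI 1.19–4.12).

    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """Odds ratio and 95% Wald CI for a 2x2 table:
        a = candidates who shared data, supervisor shared data
        b = candidates who did not share, supervisor shared data
        c = candidates who shared data, supervisor did not share
        d = candidates who did not share, supervisor did not share
        """
        or_ = (a * d) / (b * c)
        se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        lower = math.exp(math.log(or_) - z * se_log)
        upper = math.exp(math.log(or_) + z * se_log)
        return or_, (lower, upper)

    # Hypothetical counts, not the study's data:
    print(odds_ratio_ci(30, 170, 15, 185))  # roughly OR 2.2, CI (1.1, 4.2)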

URL : Biomedical supervisors’ role modeling of open science practices

DOI : https://doi.org/10.7554/eLife.83484