Economie et organisation éditoriale des plateformes et des agrégateurs de revues scientifiques françaises : Analyse comparative de huit plateformes étrangères de diffusion de revues scientifiques

Carried out on behalf of the Comité de suivi de l'édition scientifique (CSES), this study presents a comparative analysis of eight foreign platforms with two objectives: to describe their main characteristics, and to enrich the study of French scientific journal platforms and aggregators with an analysis of the competitive potential and complementarities of these foreign platforms and aggregators.

The panel comprises eight platforms: three commercial actors (EBSCO, ProQuest, Cambridge University Press) and five public or non-profit actors (JSTOR, Project MUSE, Érudit, SciELO, Open Library of Humanities).

For each platform, the study presents its business model, services and features, positioning with respect to Open Access, development prospects, and share of French content.

It also describes the trajectories, particularities, and future developments of several platforms, notably Project MUSE, JSTOR, and Érudit, and examines noteworthy functional and technical aspects such as text and data mining (TDM) and artificial intelligence.

All these platforms have in common that they distribute scientific journals online using web technologies, that they follow a two-sided business model (with two distinct customer bases: journal publishers and readers), and that they offer services to publishers (content producers) as well as to institutions, libraries, and individuals (consumers of scientific and technical information).

The study nevertheless reveals a wide diversity of economic models (revenue, share of sales and subsidies, payments to publishers, open access) and offers a comparison between these foreign platforms and the French panel, highlighting in particular the proximity between CAIRN, JSTOR, and Project MUSE.

For a French journal, the main interest of a partnership with one of these international platforms lies in distribution by a commercial aggregator with an international, English-speaking customer base that is nevertheless open to non-English-language journals.

These platforms thus represent a complementary opportunity rather than an alternative to journals' own distribution channels. The study adds some information for assessing the impact of these platforms on the French market.

Being able to create conditions (technical, financial, organizational) favorable to innovation may be one of the criteria that will differentiate these platforms over the next five to ten years.

So will the capacity to guarantee long-term preservation of (and access to) content, the degree of standardization of systems and formats, and integration into scientific communities and institutions, including research projects.

URL : Economie et organisation éditoriale des plateformes et des agrégateurs de revues scientifiques françaises : Analyse comparative de huit plateformes étrangères de diffusion de revues scientifiques

Original location : https://www.enseignementsup-recherche.gouv.fr/cid149053/analyse-comparative-de-huit-plateformes-etrangeres-de-diffusion-de-revues-scientifiques.html

The stability of Twitter metrics: A study on unavailable Twitter mentions of scientific publications

Authors : Zhichao Fang, Jonathan Dudek, Rodrigo Costas

This paper investigates the stability of Twitter counts of scientific publications over time. To this end, we analyzed the availability statuses of over 2.6 million Twitter mentions received by the 1,154 most tweeted scientific publications recorded by Altmetric.com up to October 2017.

Results show that about 14.3% of the Twitter mentions of these highly tweeted publications had become unavailable by April 2019. Deletion of tweets by users is the main reason for unavailability, followed by the suspension and protection of Twitter user accounts.

This study proposes two measures for describing the Twitter dissemination structures of publications: Degree of Originality (i.e., the proportion of original tweets received by a paper) and Degree of Concentration (i.e., the degree to which retweets concentrate on a single original tweet).
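
Both measures are simple proportions over a publication's set of mentions. As a purely illustrative sketch (the paper publishes no code; the `mentions` record structure and field names below are hypothetical), they could be computed as follows:

```python
from collections import Counter

def dissemination_measures(mentions):
    """Compute the two proposed measures for one publication.

    `mentions` is a list of dicts with hypothetical fields:
      "is_retweet"  -- True if the mention is a retweet
      "retweet_of"  -- id of the original tweet being retweeted (or None)
    """
    if not mentions:
        return None, None
    retweets = [m for m in mentions if m["is_retweet"]]
    # Degree of Originality: proportion of mentions that are original tweets.
    originality = (len(mentions) - len(retweets)) / len(mentions)
    # Degree of Concentration: share of retweets that point to the single
    # most retweeted original tweet (taken as 0 when there are no retweets).
    if retweets:
        counts = Counter(m["retweet_of"] for m in retweets)
        concentration = counts.most_common(1)[0][1] / len(retweets)
    else:
        concentration = 0.0
    return originality, concentration

# A paper with 2 original tweets and 8 retweets, all of tweet "t1":
sample = ([{"is_retweet": False, "retweet_of": None}] * 2
          + [{"is_retweet": True, "retweet_of": "t1"}] * 8)
print(dissemination_measures(sample))  # (0.2, 1.0): low originality, high concentration
```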

Twitter metrics of publications with relatively low Degree of Originality and relatively high Degree of Concentration are observed to be at greater risk of becoming unstable due to the potential disappearance of their Twitter mentions.

In light of these results, we emphasize the importance of paying attention to the potential risk of unstable Twitter counts, and the significance of identifying the different Twitter dissemination structures when studying the Twitter metrics of scientific publications.

URL : https://arxiv.org/abs/2001.07491

Peer review and preprint policies are unclear at most major journals

Authors : Thomas Klebel, Stefan Reichmann, Jessica Polka, Gary McDowell, Naomi Penfold, Samantha Hindle, Tony Ross-Hellauer

Clear and findable publishing policies are important for authors to choose appropriate journals for publication. We investigated the clarity of policies of 171 major academic journals across disciplines regarding peer review and preprinting.

31.6% of journals surveyed do not provide information on the type of peer review they use. Information on whether preprints can be posted or not is unclear in 39.2% of journals. 58.5% of journals offer no clear information on whether reviewer identities are revealed to authors.

Around 75% of journals have no clear policy on co-reviewing, citation of preprints, or publication of reviewer identities. Information regarding Open Peer Review practices is even scarcer, with fewer than 20% of journals providing clear information.

Having found a lack of clear information, we conclude by examining the implications this has for researchers (especially early career) and the spread of open research practices.

URL : Peer review and preprint policies are unclear at most major journals

DOI : https://doi.org/10.1101/2020.01.24.918995

Altmetrics data providers: A meta-analysis review of the coverage of metrics and publication

Author : José-Luis Ortega

The aim of this paper is to review the current and most relevant literature on the use of altmetric data providers since 2012. This review is supported by a meta-analysis of the coverage and metric counts reported in more than 100 publications that have used these bibliographic platforms for altmetric studies.
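
The abstract does not specify how coverage figures are pooled across studies. One simple way to do so, shown below purely as an assumption-laden sketch (function name and data layout are hypothetical, not taken from the review), is a sample-size-weighted average:

```python
def pooled_coverage(studies):
    """Sample-size-weighted pooled coverage for one altmetric provider.

    `studies` is a list of (sample_size, covered) pairs, where `covered`
    is how many sampled publications had at least one recorded event.
    (Hypothetical layout; the review does not publish aggregation code.)
    """
    total_n = sum(n for n, _ in studies)
    total_covered = sum(c for _, c in studies)
    return total_covered / total_n

# Three hypothetical studies of the same provider:
studies = [(1000, 620), (500, 210), (500, 290)]
print(f"pooled coverage: {pooled_coverage(studies):.1%}")  # pooled coverage: 56.0%
```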

The article offers the most comprehensive analysis to date of altmetric data providers (Lagotto, Altmetric.com, ImpactStory, Mendeley, PlumX, Crossref Event Data), exploring the coverage of publications, social media sources, and events from a longitudinal perspective. Disciplinary differences are also analysed.

The results show that most studies are based on Altmetric.com data, the service that captures the most mentions from social media sites, blogs, and news outlets. PlumX has broader publication coverage, counting more Mendeley readers, but captures fewer events.

Crossref Event Data (CED) stands out for its coverage of Wikipedia mentions, while Lagotto and ImpactStory are falling into disuse because of their limited reach.

URL : Altmetrics data providers: A meta-analysis review of the coverage of metrics and publication

Original location : https://recyt.fecyt.es/index.php/EPI/article/view/epi.2020.ene.07

The role of geographic bias in knowledge diffusion: a systematic review and narrative synthesis

Authors : Mark Skopec, Hamdi Issa, Julie Reed, Matthew Harris

Background

Descriptive studies examining publication rates and citation counts demonstrate a geographic skew toward high-income countries (HIC), with research from low- or middle-income countries (LMICs) generally underrepresented. This has been suggested to be due in part to reviewers' and editors' preference for HIC sources; however, in the absence of controlled studies, it is impossible to say whether this reflects bias or whether variations in the quality or relevance of the articles under review explain the geographic divide. This study synthesizes the evidence from randomized and controlled studies that explore geographic bias in the peer review process.

Methods

A systematic review was conducted to identify research studies that explicitly explore the role of geographic bias in the assessment of the quality of research articles.

Only randomized and controlled studies were included in the review. Five databases were searched to locate relevant articles. A narrative synthesis of included articles was performed to identify common findings.

Results

The systematic literature search yielded 3501 titles from which 12 full texts were reviewed, and a further eight were identified through searching reference lists of the full texts. Of these articles, only three were randomized and controlled studies that examined variants of geographic bias.

One study found that abstracts attributed to HIC sources elicited higher review scores for the relevance of the research and the likelihood of recommending it to a colleague than did abstracts attributed to low-income country (LIC) sources.

Another study found that the predicted odds of acceptance for a submission to a computer science conference were statistically significantly higher for submissions from a “Top University.” Two of the studies showed the presence of geographic bias between articles from “high” or “low” prestige institutions.

Conclusions

Two of the three included studies identified that geographic bias in some form was impacting on peer review; however, further robust, experimental evidence is needed to adequately inform practice surrounding this topic.

Reviewers and researchers should nonetheless be aware of whether author and institutional characteristics are interfering in their judgement of research.

URL : The role of geographic bias in knowledge diffusion: a systematic review and narrative synthesis

DOI : https://doi.org/10.1186/s41073-019-0088-0

Should research misconduct be criminalized?

Authors : Rafael Dal-Ré, Lex M Bouter, Pim Cuijpers, Christian Gluud, Søren Holm

For more than 25 years, research misconduct (research fraud) has been defined as fabrication, falsification, or plagiarism (FFP), although other research misbehaviors have also been added to codes of conduct and legislation.

A critical issue in deciding whether research misconduct should be subject to criminal law is its definition, because not all behaviors labeled as research misconduct qualify as serious crimes. And the assumption that all FFP is fraud, and that all non-FFP misbehavior is not, is far from obvious.

In addition, new research misbehaviors, such as prolific authorship and fake peer review, have recently been described, while others, such as the duplication of images, have been boosted. The scientific community has been largely successful in keeping criminal law away from cases of research misconduct.

Alleged cases of research misconduct are usually investigated by committees of scientists, often from the same institution or university as the suspected offender, in a process that frequently lacks transparency.

Few countries have, or plan to introduce, independent bodies to address research misconduct; for the coming years, therefore, most universities and research institutions will continue to handle alleged research misconduct cases through their own procedures. A global operationalization of research misconduct with clear boundaries and clear criteria would be helpful.

There is room for improvement in reaching global clarity on what research misconduct is, how allegations should be handled, and which sanctions are appropriate.

URL : Should research misconduct be criminalized?

DOI : https://doi.org/10.1177/1747016119898400

Practices, Challenges, and Prospects of Big Data Curation: a Case Study in Geoscience

Authors : Suzhen Chen, Bin Chen

Open and persistent access to past, present, and future scientific data is fundamental for transparent and reproducible data-driven research. The scientific community now faces both challenges and opportunities arising from increasingly complex disciplinary data systems.

Concerted efforts from domain experts, information professionals, and Internet technology experts are essential to ensure the accessibility and interoperability of big data.

Here we review current practices in building and managing big data within the context of large data infrastructure, using geoscience cyberinfrastructure such as Interdisciplinary Earth Data Alliance (IEDA) and EarthCube as a case study.

Geoscience is a data-rich discipline undergoing a rapid expansion of sophisticated and diverse digital data sets. Having embraced the digital age, the community has applied big data and data-mining tools to new types of research.

We also identify the current challenges, key elements, and prospects for building a more robust, future-proof big data infrastructure for research and publication, as well as the roles, qualifications, and opportunities for librarians and information professionals in the data era.

URL : Practices, Challenges, and Prospects of Big Data Curation: a Case Study in Geoscience

DOI: https://doi.org/10.2218/ijdc.v14i1.669