Authors: Sanja Gidakovic, Heather Moulaison-Sandy, Jenny Bossalle
Introduction
Freely available standalone AI research assistants such as Elicit and Consensus are used by academics to find relevant literature. These systems rely extensively on freely available sources, including open access journal content. No baseline study of the quality of the journals used by these assistants has yet been carried out.
Method
A sample of 807 English-language journals from the Directory of Open Access Journals that became open access before 2021 was investigated for quality metrics using SCImago rankings and other defining characteristics, and analyzed in conjunction with the Directory data.
Analysis
SCImago Journal Rank quartile scores were recorded for each journal. Descriptive statistics were produced using Excel, and visualizations were created using Tableau Public.
Results
Over half of the journals in our sample were ranked in Scopus, and many of those were in quartile 1. Many journals published by universities or small associations were unranked.
Conclusions
AI research assistants may miss some high-quality open access content because of their reliance on ranking metrics. Commercial enterprises play a large role in determining which sources are used to produce content, effectively gatekeeping the process and potentially shaping the creation of new knowledge.