Authors : Daniel Torres-Salinas, Juan Gorraiz, Nicolas Robinson-Garcia
The purpose of this paper is to analyze the capabilities, functionalities and appropriateness of Altmetric.com as a data source for the bibliometric analysis of books in comparison to PlumX.
We perform an exploratory analysis of the metrics the Altmetric Explorer for Institutions platform offers for books. We use two distinct datasets of books, the Book Collection included in Altmetric.com and Clarivate's Master Book List, to analyze Altmetric.com's capability to download data and merge it with external databases.
Finally, we compare our findings with those obtained in a previous study of PlumX. Altmetric.com systematically tracks a set of data sources linked by DOI identifiers to retrieve book metadata, with Google Books as its main provider. It also retrieves information from commercial publishers and from some Open Access initiatives, including those led by university libraries, such as Harvard Library.
We find issues with the linkage between records and mentions, as well as ISBN discrepancies. Furthermore, we find that automated bots greatly affect Wikipedia mentions of books. Our comparison with PlumX suggests that neither tool provides a complete picture of the social attention generated by books, and that the two are complementary rather than comparable.
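The merge with Clarivate's Master Book List described above amounts to joining two record sets on a shared DOI key. A minimal sketch, using hypothetical records and field names (the actual export formats are not given in the abstract):

```python
# Hedged sketch: inner-joining an altmetrics export with an external
# book list on DOI. All records and field names here are illustrative.
altmetric_records = [
    {"doi": "10.1000/a", "mentions": 12},
    {"doi": "10.1000/b", "mentions": 3},
]
master_book_list = [
    {"doi": "10.1000/a", "title": "Book A"},
    {"doi": "10.1000/c", "title": "Book C"},
]

# Index the external list by DOI for O(1) lookups.
by_doi = {r["doi"]: r for r in master_book_list}

# Inner join: keep only DOIs present in both sources.
merged = [
    {**rec, **by_doi[rec["doi"]]}
    for rec in altmetric_records
    if rec["doi"] in by_doi
]
print(merged)  # only 10.1000/a appears in both sources
```

Records that fail to match on either side are exactly the linkage gaps (missing or discrepant identifiers) that the study reports.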
URL : https://arxiv.org/abs/1809.10128
Authors : Elea Giménez-Toledo, Jorge Mañana-Rodríguez
Although evaluation systems play an important role in supporting assessment processes, criticism of these systems and of the categorizations they use is frequent. Given that acceptance by the scientific community is essential if rankings or categorizations are to be used in research evaluation, the aim of this paper is to test the results of a ranking of scholarly book publishers' prestige, Scholarly Publishers Indicators (hereafter SPI).
SPI is a public, survey-based ranking of scholarly publishers’ prestige (among other indicators). The latest version of the ranking (2014) was based on an expert consultation with a large number of respondents.
To validate and refine the results for the Humanities fields proposed by the assessment agencies, a Delphi technique was applied to the initial rankings with a panel of randomly selected experts.
The results show an equalizing effect of the technique on the initial rankings, as well as a high degree of concordance between its theoretical aim (consensus among experts) and its empirical results (summarized with the Gini index).
The resulting categorization is understood as more conclusive and more likely to be accepted by those under evaluation.
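The Gini index used here to summarize consensus measures how unevenly scores are spread: 0 indicates perfect equality (full agreement), values near 1 indicate high concentration. A minimal sketch of the standard formula, with illustrative score vectors (not the paper's data):

```python
# Hedged sketch: Gini index of a set of non-negative scores,
# via the standard rank-weighted formulation. Inputs are illustrative.
def gini(values):
    """Return the Gini index (0 = perfect equality) of non-negative values."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # G = (2 * sum(i * x_i)) / (n * sum(x)) - (n + 1) / n, with x sorted ascending
    rank_weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return (2.0 * rank_weighted) / (n * total) - (n + 1.0) / n

print(round(gini([1, 1, 1, 1]), 3))   # identical expert scores -> 0.0
print(round(gini([0, 0, 0, 10]), 3))  # one dominant score -> 0.75
```

Under this reading, a low Gini over the panel's scores for a publisher category reflects the consensus the Delphi rounds aim to produce.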
URL : https://arxiv.org/abs/1705.04517
Author : Clifford Lynch
I outline a possible future system of many distributed university presses mainly focused on the editorial production of scholarly monographs, supported by a very small number of digital platforms for managing and delivering these monographs as a database rather than transactionally to academic and research libraries. I also touch on the ongoing evolution of various types of scholarly books into (often much more costly) networked information resources and the implications this has for the overall dissemination of scholarship and the roles of university presses.
DOI : http://dx.doi.org/10.3998/3336451.0013.207