The use of ChatGPT for identifying disruptive papers in science: a first exploration

Authors : Lutz Bornmann, Lingfei Wu, Christoph Ettl

ChatGPT has arrived in quantitative research evaluation. With the exploration in this Letter to the Editor, we would like to broaden the spectrum of possible uses of ChatGPT in bibliometrics by applying it to the identification of disruptive papers.

The identification of disruptive papers using publication and citation counts has become a popular topic in scientometrics. The disadvantage of this quantitative approach is its computational complexity. ChatGPT might be an easy-to-use alternative.
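
For context, the quantitative approach typically builds on a disruption measure such as the CD index, which relates papers citing only the focal paper to papers citing both the focal paper and its references. Below is a minimal sketch of that computation; the function name and input format are our own illustration and are not taken from the Letter.

```python
def disruption_index(citers_of_focal, refs_of_focal, refs_of_citers):
    """Sketch of a CD-type disruption index.

    citers_of_focal: set of papers citing the focal paper.
    refs_of_focal: set of the focal paper's cited references.
    refs_of_citers: dict mapping every paper in the citation window
    to the set of its own cited references.
    """
    n_f = n_b = 0
    for paper in citers_of_focal:
        if refs_of_focal & refs_of_citers.get(paper, set()):
            n_b += 1  # cites the focal paper AND its references (consolidating)
        else:
            n_f += 1  # cites the focal paper only (disruptive signal)
    # papers citing the focal paper's references but not the focal paper itself
    n_r = sum(1 for paper, refs in refs_of_citers.items()
              if paper not in citers_of_focal and refs & refs_of_focal)
    total = n_f + n_b + n_r
    return (n_f - n_b) / total if total else 0.0
```

The index ranges from -1 (fully consolidating) to +1 (fully disruptive); assembling the three citation sets from a complete bibliographic database is what makes the approach computationally demanding.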

DOI : https://doi.org/10.1007/s11192-024-05176-z

Growth rates of modern science: a latent piecewise growth curve approach to model publication numbers from established and new literature databases

Authors : Lutz Bornmann, Robin Haunschild, Rüdiger Mutz

Growth of science is a prevalent issue in science of science studies. In recent years, two new bibliographic databases have been introduced that can be used to study growth processes in science reaching back centuries: Dimensions from Digital Science and Microsoft Academic.

In this study, we used publication data from these new databases and added publication data from two established databases (Web of Science from Clarivate Analytics and Scopus from Elsevier) to investigate scientific growth processes from the beginning of the modern science system until today.

We estimated regression models that simultaneously included the publication counts from all four databases. The results of the unrestricted growth calculations show that the overall growth rate amounts to 4.10%, with a doubling time of 17.3 years. As the comparison of various segmented regression models in the current study revealed, models with four or five segments fit the publication data best.
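
The reported doubling time follows directly from the growth rate under compound growth; the following one-line check (our own illustration, not code from the study) reproduces the figure up to rounding:

```python
import math

annual_growth_rate = 0.0410  # overall growth rate reported in the study
doubling_time = math.log(2) / math.log(1 + annual_growth_rate)
print(f"doubling time ≈ {doubling_time:.2f} years")  # ≈ 17.25, i.e., roughly 17.3
```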

We demonstrated that these segments with different growth rates can be interpreted very well, since they are related to phases of economic (e.g., industrialization) and/or political development (e.g., the Second World War).

In this study, we additionally analyzed scientific growth in two broad fields (Physical and Technical Sciences as well as Life Sciences) and the relationship between scientific and economic growth in the UK.

The comparison between the two fields revealed only slight differences. The comparison of the British economic and scientific growth rates showed that the economic growth rate is slightly lower than the scientific growth rate.

DOI : https://doi.org/10.1057/s41599-021-00903-w

Which aspects of the Open Science agenda are most relevant to scientometric research and publishing? An opinion paper

Authors : Lutz Bornmann, Raf Guns, Michael Thelwall, Dietmar Wolfram

Open Science is an umbrella term that encompasses many recommendations for possible changes in research practices, management, and publishing, with the objective of increasing transparency and accessibility.

This has become an important science policy issue that all disciplines should consider. Many Open Science recommendations may be valuable for the further development of research and publishing but not all are relevant to all fields.

This opinion paper considers the aspects of Open Science that are most relevant for scientometricians, discussing how they can be usefully applied.

DOI : https://doi.org/10.1162/qss_e_00121

Altmetrics and societal impact measurements: Match or mismatch? A literature review

Authors : Iman Tahamtan, Lutz Bornmann

Can alternative metrics (altmetrics) data be used to measure societal impact? We wrote this overview of the empirical literature in order to answer this question. The overview consists of two parts.

The first part, “societal impact measurements”, discusses possible methods of and problems in measuring the societal impact of research, case studies of societal impact measurement, societal impact considerations at funding organizations, and the societal problems that science is expected to solve.

The second part of the review, “altmetrics”, addresses a major question in research evaluation, which is whether altmetrics are proper indicators for measuring the societal impact of research. In the second part we explain the data sources used for altmetrics studies and the importance of field-normalized indicators for impact measurements.

This review suggests that impact measurements should be oriented towards pressing societal problems. Case studies in which the societal impact of particular pieces of research is explained seem to provide a legitimate method for measuring societal impact.

When altmetrics are used, field-specific differences should be considered by applying field normalization (in cross-field comparisons). Altmetrics data such as social media counts might mainly reflect public interest in and discussion of scholarly works rather than their societal impact.
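
To illustrate what field normalization means in practice, here is a minimal sketch (our own simplification, not the procedure of any particular study): each paper's altmetric count is divided by the mean count of all papers in the same field and publication year.

```python
from collections import defaultdict

def field_normalize(papers):
    """papers: list of dicts with keys 'field', 'year', and 'count'
    (an altmetric count). Adds a 'normalized' key: the paper's count
    divided by the mean count of its field/year reference set."""
    totals = defaultdict(float)
    sizes = defaultdict(int)
    for p in papers:
        key = (p["field"], p["year"])
        totals[key] += p["count"]
        sizes[key] += 1
    for p in papers:
        key = (p["field"], p["year"])
        mean = totals[key] / sizes[key]
        p["normalized"] = p["count"] / mean if mean else 0.0
    return papers
```

A normalized score of 1.0 then means “average for the field and year”, which makes counts comparable across fields with very different social media uptake.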

Altmetrics (e.g., Twitter data) might be employed especially fruitfully for research evaluation purposes when used in the context of network approaches. Conclusions based on altmetrics data in research evaluation should be drawn with caution.

Original location : https://recyt.fecyt.es/index.php/EPI/article/view/epi.2020.ene.02

Do altmetrics assess societal impact in the same way as case studies? An empirical analysis testing the convergent validity of altmetrics based on data from the UK Research Excellence Framework (REF)

Authors : Lutz Bornmann, Robin Haunschild, Jonathan Adams

Altmetrics have been proposed as a way to assess the societal impact of research. Although altmetrics are already in use as impact or attention metrics in different contexts, it is still not clear whether they really capture or reflect societal impact.

This study is based on altmetrics, citation counts, research output and case study data from the UK Research Excellence Framework (REF), and peers’ REF assessments of research output and societal impact. We investigated the convergent validity of altmetrics by using two REF datasets: publications submitted as research output (PRO) to the REF and publications referenced in case studies (PCS).

Case studies, which are intended to demonstrate societal impact, should cite the most relevant research papers. We used the MHq’ indicator, which was introduced for count data with many zeros, to assess impact.

The results of the first part of the analysis show that news media as well as mentions on Facebook, in blogs, in Wikipedia, and in policy-related documents have higher MHq’ values for PCS than for PRO.

Thus, the altmetric indicators seem to have convergent validity for these data. In the second part of the analysis, altmetrics were correlated with REF reviewers’ average scores on PCS. The negative or close-to-zero correlations call the convergent validity of altmetrics in that context into question.

We suggest that they may capture a different aspect of societal impact (which can be called unknown attention) from that seen by reviewers (who are interested in the causal link between research and action in society).
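
The convergent-validity test in the second part of the analysis is, at its core, a rank correlation between two sets of scores. A minimal sketch with hypothetical numbers (ours, not data from the study):

```python
from scipy.stats import spearmanr

# Hypothetical values: altmetric counts and REF reviewers' average
# scores for the same set of case-study publications.
altmetric_counts = [0, 3, 12, 1, 0, 25, 4]
reviewer_scores = [3.1, 2.5, 2.8, 3.6, 3.0, 2.2, 3.3]

rho, p_value = spearmanr(altmetric_counts, reviewer_scores)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.2f})")
# Correlations near zero or negative, as found in the study, speak
# against convergent validity in this context.
```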

URL : https://arxiv.org/abs/1807.03977

Creativity in Science and the Link to Cited References: Is the Creative Potential of Papers Reflected in their Cited References?

Authors : Iman Tahamtan, Lutz Bornmann

Several authors have proposed that a large number of unusual combinations of cited references in a paper point to its high creative potential (or novelty). However, it is still not clear whether the number of unusual combinations can really measure the creative potential of papers.
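
One common way to operationalize “unusual combinations” is to count how rarely pairs of cited sources co-occur across a corpus. The sketch below is a deliberate simplification (published operationalizations typically use z-scores against randomized citation networks); all names and the input format are our own illustration.

```python
from collections import Counter
from itertools import combinations

def atypical_pairs(corpus_refs, focal_refs, rare_threshold=1):
    """corpus_refs: list of reference lists, one per paper in the corpus.
    focal_refs: the focal paper's reference list. Returns the number of
    reference pairs in the focal paper that co-occur at most
    rare_threshold times in the corpus, and the total number of pairs."""
    pair_counts = Counter()
    for refs in corpus_refs:
        pair_counts.update(combinations(sorted(set(refs)), 2))
    focal_pairs = list(combinations(sorted(set(focal_refs)), 2))
    rare = sum(1 for pair in focal_pairs if pair_counts[pair] <= rare_threshold)
    return rare, len(focal_pairs)
```

A high share of rare pairs would then be read as a signal of novelty; whether such counts really measure creative potential is exactly the question the study examines.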

The current study addresses this question on the basis of several case studies from the field of scientometrics. We identified some landmark papers in this field. Study subjects were the corresponding authors of these papers.

We asked them where the ideas for the papers came from and what role the cited publications played. The results revealed that the creative ideas might not necessarily have been inspired by past publications.

Instead, we found that creative ideas result from finding solutions to practical problems, emerge from discussions with colleagues, and profit from interdisciplinary exchange; the literature seems to be important mainly for contextualizing an idea within the field of scientometrics. The roots of the studied landmark papers are discussed in detail.

URL : https://arxiv.org/abs/1806.00224

Opium in science and society: Numbers

Authors : Julian N. Marewski, Lutz Bornmann

In science and beyond, numbers are omnipresent when it comes to justifying different kinds of judgments. Which scientific author, hiring committee member, or advisory board panelist has not been confronted with page-long “publication manuals”, “assessment reports”, or “evaluation guidelines” calling for p-values, citation rates, h-indices, or other statistics to motivate judgments about the “quality” of findings, applicants, or institutions?

Yet many of those relying on and calling for statistics do not even seem to understand what information those numbers can actually convey, and what they cannot. Focusing on the uninformed use of bibliometrics as a worrisome outgrowth of the increasing quantification of science and society, we place the abuse of numbers in larger historical contexts and trends.

These are characterized by a technology-driven bureaucratization of science, obsessions with control and accountability, and mistrust in human intuitive judgment. The ongoing digital revolution intensifies these trends.

We call for bringing sanity back into scientific judgment exercises. Despite all the number crunching, many judgments (be they about scientific output, scientists, or research institutions) will not be unambiguous, uncontroversial, or testable by external standards, nor can they be otherwise validated or objectified.

Under uncertainty, good human judgment remains indispensable, for the better; we conclude that it can be aided by a toolbox of simple judgment tools called heuristics.

In the best position to use those heuristics are research evaluators (1) who have expertise in the to-be-evaluated area of research, (2) who have profound knowledge in bibliometrics, and (3) who are statistically literate.

URL : https://arxiv.org/abs/1804.11210