The financial maintenance of social science data archives: Four case studies of long-term infrastructure work

Authors : Kristin R. Eschenfelder, Kalpana Shankar, Greg Downey

Contributing to the literature on knowledge infrastructure maintenance, this article describes a historical longitudinal analysis of revenue streams employed by four social science data organizations: the Roper Center for Public Opinion Research, the Inter-university Consortium for Political and Social Research (ICPSR), the UK Data Archive (UKDA), and the LIS Cross-National Data Center in Luxembourg (LIS).

Drawing on archival documentation and interviews, we describe founders’ assumptions about revenue, changes to revenue streams over the long term, practices for developing and maintaining revenue streams, the importance of financial support from host organizations, and how the context of each data organization shaped revenue possibilities.

We extend conversations about knowledge infrastructure revenue streams by showing the types of change that have occurred over time and how they occurred. We provide examples of the types of flexibility needed for data organizations to remain sustainable over 40–60 years of revenue changes.

We distinguish between Type A flexibilities, or development of new products and services, and Type B flexibilities, or continuous smaller adjustments to existing revenue streams. We argue that Type B flexibilities are as important as Type A, although they are easily overlooked. Our results are relevant to knowledge infrastructure managers and stakeholders facing similar revenue challenges.

DOI : https://doi.org/10.1002/asi.24691

More journal articles and fewer books: Publication practices in the social sciences in the 2010’s

Authors : William E. Savage, Anthony J. Olejniczak

The number of scholarly journal articles published each year is growing, but little is known about the relationship between journal article growth and other forms of scholarly dissemination (e.g., books and monographs).

Journal articles are the de facto currency of evaluation and prestige in STEM fields, but social scientists routinely publish books as well as articles, representing a unique opportunity to study increased article publications in disciplines with other dissemination options.

We studied the publishing activity of social science faculty members in 12 disciplines at 290 Ph.D.-granting institutions in the United States between 2011 and 2019, asking: (1) have publication practices changed such that more or fewer books and articles are written now than in the recent past? (2) has the percentage of scholars actively participating in a particular publishing type changed over time? and (3) do different age cohorts evince different publication strategies?

In all disciplines, journal articles per person increased between 3% and 64% between 2011 and 2019, while books per person decreased by at least 31% and as much as 54%. All age cohorts show increased article authorship over the study period, and early career scholars author more articles per person than the other cohorts in eight disciplines.

The article-dominated literatures of the social sciences are becoming increasingly similar to those of STEM disciplines.

DOI : https://doi.org/10.1371/journal.pone.0263410

The state of social science research on COVID-19

Authors : Yan-Li Liu, Wen-Juan Yuan, Shao-Hong Zhu

Research on COVID-19 has proliferated rapidly since the outbreak of the pandemic at the end of 2019. Many articles have aimed to provide insight into this fast-growing theme. The social sciences have also put effort into research on problems related to COVID-19, with numerous documents having been published.

Some studies have evaluated the growth of scientific literature on COVID-19 based on scientometric analysis, but most of these analyses focused on medical research while ignoring social science research on COVID-19.

This is the first scientometric study of the performance of social science research on COVID-19. It provides insight into the landscape, the research fields, and international collaboration in this domain. Data obtained from the Social Sciences Citation Index (SSCI) on the Web of Science platform were analyzed using VOSviewer.

The overall performance of the documents was described, and then keyword co-occurrence and co-authorship networks were visualized. The six main research fields with highly active topics were confirmed by analysis and visualization. Mental health and psychology were clearly shown to be the focus of most social science research related to COVID-19.

The USA made the most contributions, with the most extensive collaborations globally, with Harvard University as the leading institution. Collaborations throughout the world were strongly related to geographical location.

Considering the social impact of the COVID-19 pandemic, this scientometric study is significant for identifying the growth of literature in the social sciences and can help researchers within this field gain quantitative insights into the development of research on COVID-19.

The results are useful for finding potential collaborators and for identifying the frontier and gaps in social science research on COVID-19 to shape future studies.

DOI : https://doi.org/10.1007/s11192-021-04206-4

Systematizing Confidence in Open Research and Evidence (SCORE)

Authors : Nazanin Alipourfard, Beatrix Arendt, Daniel M. Benjamin, Noam Benkler, Michael Bishop, Mark Burstein, Martin Bush, James Caverlee, Yiling Chen, Chae Clark, Anna Dreber Almenberg, Tim Errington, Fiona Fidler, Nicholas Fox, Aaron Frank, Hannah Fraser, Scott Friedman, Ben Gelman, James Gentile, C Lee Giles, Michael B Gordon, Reed Gordon-Sarney, Christopher Griffin, Timothy Gulden, et al.

Assessing the credibility of research claims is a central, continuous, and laborious part of the scientific process. Credibility assessment strategies range from expert judgment to aggregating existing evidence to systematic replication efforts.

Such assessments can require substantial time and effort. Research progress could be accelerated if there were rapid, scalable, accurate credibility indicators to guide attention and resource allocation for further assessment.

The SCORE program is creating and validating algorithms to provide confidence scores for research claims at scale. To investigate the viability of scalable tools, teams are creating: a database of claims from papers in the social and behavioral sciences; expert and machine generated estimates of credibility; and, evidence of reproducibility, robustness, and replicability to validate the estimates.

Beyond the primary research objective, the data and artifacts generated from this program will be openly shared and provide an unprecedented opportunity to examine research credibility and evidence.

DOI : https://doi.org/10.31235/osf.io/46mnb

What Constitutes Authorship in the Social Sciences?

Author : Gernot Pruschak

Authorship is a much-discussed topic in today's academia. The share of co-authored papers has increased substantially in recent years, allowing scientists to specialize and focus on specific tasks.

Arising from this, the social science literature has discussed author order and the distribution of publication and citation credits among co-authors in depth. Yet only a small fraction of the authorship literature has addressed the underlying question of what actually constitutes authorship.

To identify social scientists’ motives for assigning authorship, we conduct an empirical study surveying researchers around the globe. We find that social scientists tend to distribute research tasks among (individual) research team members. Nevertheless, they generally adhere to the universally applicable Vancouver criteria when distributing authorship.

More specifically, participation in every research task with the exceptions of data work as well as reviewing and remarking increases scholars’ chances to receive authorship. Based on our results, we advise journal editors to introduce authorship guidelines that incorporate the Vancouver criteria as they seem applicable to the social sciences.

We further call upon research institutions to emphasize data skills in hiring and promotion processes as publication counts might not always depict these characteristics.

DOI : https://doi.org/10.3389/frma.2021.655350

An empirical assessment of transparency and reproducibility-related research practices in the social sciences (2014–2017)

Authors : Tom E. Hardwicke, Joshua D. Wallach, Mallory C. Kidwell, Theiss Bendixen, Sophia Crüwell, John P. A. Ioannidis

Serious concerns about research quality have catalysed a number of reform initiatives intended to improve transparency and reproducibility and thus facilitate self-correction, increase efficiency and enhance research credibility.

Meta-research has evaluated the merits of some individual initiatives; however, this may not capture broader trends reflecting the cumulative contribution of these efforts.

In this study, we manually examined a random sample of 250 articles in order to estimate the prevalence of a range of transparency and reproducibility-related indicators in the social sciences literature published between 2014 and 2017.

Few articles indicated availability of materials (16/151, 11% [95% confidence interval, 7% to 16%]), protocols (0/156, 0% [0% to 1%]), raw data (11/156, 7% [2% to 13%]) or analysis scripts (2/156, 1% [0% to 3%]), and no studies were pre-registered (0/156, 0% [0% to 1%]).

Some articles explicitly disclosed funding sources (or lack of; 74/236, 31% [25% to 37%]) and some declared no conflicts of interest (36/236, 15% [11% to 20%]). Replication studies were rare (2/156, 1% [0% to 3%]).

Few studies were included in evidence synthesis via systematic review (17/151, 11% [7% to 16%]) or meta-analysis (2/151, 1% [0% to 3%]). Fewer than half of the articles were publicly available (101/250, 40% [34% to 47%]).

Minimal adoption of transparency and reproducibility-related research practices could be undermining the credibility and efficiency of social science research. The present study establishes a baseline that can be revisited in the future to assess progress.

DOI : https://doi.org/10.1098/rsos.190806

Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015

Authors : Colin F. Camerer, Anna Dreber, Felix Holzmeister, Teck-Hua Ho, Jürgen Huber, Magnus Johannesson, Michael Kirchler, Gideon Nave, Brian Nosek, Thomas Pfeiffer, Adam Altmejd, Nick Buttrick, Taizan Chan, Yiling Chen, Eskil Forsell, Anup Gampa, Emma Heikensten, Lily Hummer, Taisuke Imai, Siri Isaksson, Dylan Manfredi, Julia Rose, Eric-Jan Wagenmakers, Hang Wu

Being able to replicate scientific findings is crucial for scientific progress. We replicate 21 systematically selected experimental studies in the social sciences published in Nature and Science between 2010 and 2015.

The replications follow analysis plans reviewed by the original authors and pre-registered prior to the replications. The replications are high powered, with sample sizes on average about five times higher than in the original studies.

We find a significant effect in the same direction as the original study for 13 (62%) studies, and the effect size of the replications is on average about 50% of the original effect size. Replicability varies between 12 (57%) and 14 (67%) studies for complementary replicability indicators.

Consistent with these results, the estimated true positive rate is 67% in a Bayesian analysis. The relative effect size of true positives is estimated to be 71%, suggesting that both false positives and inflated effect sizes of true positives contribute to imperfect reproducibility.

Furthermore, we find that peer beliefs of replicability are strongly related to replicability, suggesting that the research community could predict which results would replicate and that failures to replicate were not the result of chance alone.

DOI : https://doi.org/10.1038/s41562-018-0399-z