Collaborative design to bridge theory and practice in science communication

Authors :

The science communication field strives to connect theory and practice. This essay delves into the potential of collaborative design to bridge this gap. Collaborative design in science communication can involve scientists, science communication researchers, designers, and other stakeholders in developing new science communication solutions.

By incorporating diverse perspectives and expertise, it can help create more effective and evidence-based communication strategies that cater to the needs of audiences. To integrate these demands, a structured approach is necessary. This paper discusses two established frameworks, Design-Based Research and Design Thinking, and applies practical insights to envision the impact of collaborative design on the future of science communication.

URL : Collaborative design to bridge theory and practice in science communication

DOI : https://doi.org/10.22323/2.23020401

May contain English – The assessment effect on language in publications and how this has manifested over a decade of DOAB titles

Authors : Danny Kingsley, Ronald Snijder

Research assessment is a major driver of research behaviour. The current emphasis on citations in a limited number of English-focused journals has multiple effects. The need to publish in English even when it is not the local language affects the type of research undertaken and further consolidates a global North-centric view of science.

The bibliometric databases on which assessments of universities and journals are based are owned by two large corporate organisations, and this concentration of the market has in turn concentrated the research environment. Open infrastructure offers an alternative option for the research endeavour.

The OAPEN online open access library and the Directory of Open Access Books form part of this infrastructure and we consider the pattern of languages present in the directories over time.

DOI : https://doi.org/10.31235/osf.io/c8yq3

A survey of how biology researchers assess credibility when serving on grant and hiring committees

Authors : Iain Hrynaszkiewicz, Beruria Novich, James Harney, Veronique Kiermer

Researchers who serve on grant review and hiring committees have to make decisions about the intrinsic value of research in short periods of time, and research impact metrics such as the Journal Impact Factor (JIF) exert undue influence on these decisions. Initiatives such as the Coalition for Advancing Research Assessment (CoARA) and the Declaration on Research Assessment (DORA) emphasize responsible use of quantitative metrics and avoidance of journal-based impact metrics for research assessment. Further, our previous qualitative research suggested that assessing the credibility, or trustworthiness, of research is important to researchers not only when they seek to inform their own research but also in the context of research assessment committees.

To confirm our findings from previous interviews in quantitative terms, we surveyed 485 biology researchers who have served on committees for grant review or hiring and promotion decisions, to understand how they assess the credibility of research outputs in these contexts. We found that concepts like credibility, trustworthiness, quality and impact lack consistent definitions and interpretations by researchers, which had already been observed in our interviews.

We also found that assessment of credibility is very important to most (81%) researchers serving on these committees, but fewer than half of respondents are satisfied with their ability to assess credibility. A substantial proportion of respondents (57%) report using journal reputation and the JIF to assess credibility – proxies that research assessment reformers consider inappropriate because they do not rely on intrinsic characteristics of the research.

This gap between the importance of an assessment and satisfaction with the ability to conduct it was reflected in multiple aspects of credibility we tested, and it was greatest for researchers seeking to assess the integrity of research (such as identifying signs of fabrication, falsification, or plagiarism) and the suitability and completeness of research methods. Non-traditional research outputs associated with Open Science practices – the sharing of research data, code, protocols, and preprints – are particularly hard for researchers to assess, despite the potential of Open Science practices to signal trustworthiness.

Our results suggest opportunities to develop better guidance and better signals to support the evaluation of research credibility and trustworthiness – and ultimately support research assessment reform, away from the use of inappropriate proxies for impact and towards assessing the intrinsic characteristics and values researchers see as important.

DOI : https://doi.org/10.31222/osf.io/ht836

Reporting of interventional clinical trial results in a French academic center: a survey of completed studies

Authors : Anne Sophie Alix Doucet, Constant Vinatier, Loïc Fin, Hervé Léna, Hélène Rangé, Clara Locher, Florian Naudet

Background: The dissemination of clinical trial results is an important scientific and ethical endeavour. This survey of completed interventional studies in a French academic center describes their reporting status.

Methods: We explored all interventional studies sponsored by Rennes University Hospital identified on the French Open Science Monitor which tracks trials registered on EUCTR or clinicaltrials.gov, and provides an automatic assessment of the reporting of results. For each study, we ascertained the actual reporting of results using systematic searches on the hospital internal database, bibliographic databases (Google Scholar, PubMed), and by contacting all principal investigators (PIs). We describe several features (including total budget and numbers of trial participants) of the studies that did not report any results.

Results: The French Open Science Monitor identified 93 interventional studies, among which 10 (11%) reported results. In contrast, our survey identified 36 studies (39%) reporting primary analysis results and an additional 18 (19%) reporting results for secondary analyses (without results for their primary analysis). The overall budget for studies that did not report any results was estimated to be €5,051,253 for a total of 6,735 trial participants. The most frequent reasons for the absence of results reported by PIs were lack of time for 18 (42%), and logistic difficulties (e.g. delay in obtaining results or another blocking factor) for 12 (28%). An association was found between non-publication and negative results (adjusted Odds Ratio = 4.70, 95% Confidence Interval [1.67;14.11]).

Conclusions: Even allowing for the fact that automatic searches underestimate the number of studies with published results, the level of reporting was disappointingly low. This amounts to a waste of trial participants’ involvement and of money. Corrective actions are needed.

URL : Reporting of interventional clinical trial results in a French academic center: a survey of completed studies

DOI : https://doi.org/10.21203/rs.3.rs-3782467/v1

Hidden Inequities of Access: Document Accessibility in an Aggregated Database

Authors : Amanda Hovious, Congwen Wang

Despite ongoing efforts to improve database accessibility, aggregated database vendors concede that they do not have complete control over document accessibility. Instead, they point to the responsibility of journal publishers to deliver articles in an accessible format. This may increase the likelihood that users with disabilities will encounter articles that are not compatible with a screen reader.

To better understand the extent of the problem, a document accessibility audit was conducted of randomly selected articles from EBSCO’s Library & Information Source database. Full-text articles from 12 library science journals were evaluated against two measures of screen reader compatibility: HTML format (the optimal format for screen readers) and PDF accessibility conformance.

Findings showed inconsistencies in the availability of HTML-format articles across the selected journals. Additionally, the entire sample of PDF articles failed to meet the minimum PDF Universal Accessibility standard of containing a tagged structure. However, all PDF articles passed accessibility permissions tests, and so could be made accessible retroactively by a third party.

URL : Hidden Inequities of Access: Document Accessibility in an Aggregated Database

DOI : https://doi.org/10.5860/ital.v43i1.16661

To preprint or not to preprint: A global researcher survey

Authors : Rong Ni, Ludo Waltman

Open science is receiving widespread attention globally, and preprinting offers an important way to implement open science practices in scholarly publishing. To develop a systematic understanding of researchers’ adoption of and attitudes toward preprinting, we conducted a survey of authors of research papers published in 2021 and early 2022. Our survey results show that the United States and Europe led the way in the adoption of preprinting.

United States and European respondents reported greater familiarity with and a stronger commitment to preprinting than their colleagues elsewhere in the world. The adoption of preprinting is much stronger in physics and astronomy, as well as mathematics and computer science, than in other research areas. Respondents identified the free accessibility of preprints and the acceleration of research communication as the most important benefits of preprinting.

The low reliability and credibility of preprints, the sharing of results before peer review, and premature media coverage are the most significant concerns about preprinting, emphasized in particular by respondents in the life and health sciences. According to respondents, the most crucial strategies to encourage preprinting are integrating preprinting into journal submission workflows and providing recognition for posting preprints.

URL : To preprint or not to preprint: A global researcher survey

DOI : https://doi.org/10.1002/asi.24880

Societal and scientific impact of policy research: A large-scale empirical study of some explanatory factors using Altmetric and Overton

Authors: Pablo Dorta-González, Alejandro Rodríguez-Caro, María Isabel Dorta-González

This study investigates how scientific research influences policymaking by analyzing citations of research articles in policy documents (policy impact) for nearly 125,000 articles across 434 public policy journals. We reveal distinct citation patterns between policymakers and other stakeholders like researchers, journalists, and the public.

News and blog mentions, social media engagement, and open access publication (excluding the fully open access model) significantly increase the likelihood of a research article being cited in policy documents. Conversely, articles locked behind paywalls and those published under the fully open access model (based on Altmetric data) have a lower chance of being policy-cited. Publication year and policy type show no significant influence. Our findings emphasize the crucial role of science communication channels such as news media and social media in bridging the gap between research and policy.

Interestingly, academic citations exert a weaker influence on policy citations than news mentions do, suggesting a potential disconnect between how researchers reference research and how policymakers utilize it. This highlights the need for improved communication strategies to ensure that research informs policy decisions more effectively.

This study provides valuable insights for researchers, policymakers, and science communicators. Researchers can tailor their dissemination efforts to reach policymakers through media channels. Policymakers can leverage these findings to identify research with higher policy relevance. Science communicators can play a critical role in translating research for policymakers and fostering dialogue between the scientific and policymaking communities.

arXiv : https://arxiv.org/abs/2403.06714