Why is it important to implement meta-research in universities and institutes with medical research activities

Authors : Ivan David Lozada-Martinez, Dionicio Neira-Rodado, Darly Martinez-Guevara, Hary Salome Cruz-Soto, Maria Paula Sanchez-Echeverry, Yamil Liscano

In recent years, there has been a growing concern over questionable practices and a lack of rigor in scientific activities, particularly in health and medical sciences.

Universities and research institutes are key players in the development of science, technology, and innovation. Academic institutions, whose primary mission is to generate and disseminate knowledge, in many parts of the world also bear the responsibility of acting as consultants on, and guardians of, scientific integrity in health research.

Universities and research institutes must therefore act as guardians of the research and technological development process, using methodological and operational evaluation tools to validate the rigor and quality of medical research.

Meta-research is defined as research on research itself. Its most important specific objectives include assessing the relevance of research, evaluating the validity of evidence, and examining scientific integrity.

A significant portion of evidence in the medical and health sciences literature has been found to be redundant, misleading, or inconsistent. Although this issue is of great importance in global health, discussions about practical and tangible solutions remain fragmented and limited.

The aim of this manuscript is to highlight the significance of employing meta-research within universities and research institutes as a tool to monitor scientific rigor and promote responsible practices in medical research.

DOI : https://doi.org/10.3389/frma.2025.1497280

Research on Research Visibility

Authors : Enrique Orduña-Malea, Cristina I. Font-Julián

This editorial explores the significance of research visibility within the evolving landscape of academic communication, mainly focusing on the role of search engines as online meta-markets shaping the impact of research. With the rapid expansion of scientific output and the increasing reliance on algorithm-driven platforms such as Google and Google Scholar, the online visibility of scholarly work has become an essential factor in determining its reach and influence.

The editorial also argues for more rigorous research into academic search engine optimization (A-SEO), a field still in its infancy despite its growing relevance, and highlights key challenges: the lack of robust research methodologies, skepticism within the academic community toward the commercialization of science, and the need for standardized reporting and measurement techniques.

This editorial thus invites a multidisciplinary dialogue on the future of research visibility, with significant implications for academic publishing, science communication, research evaluation, and the global scientific ecosystem.

DOI : https://doi.org/10.1344/bid2024.53.01

Systematizing Confidence in Open Research and Evidence (SCORE)

Authors : Nazanin Alipourfard, Beatrix Arendt, Daniel M. Benjamin, Noam Benkler, Michael Bishop, Mark Burstein, Martin Bush, James Caverlee, Yiling Chen, Chae Clark, Anna Dreber Almenberg, Tim Errington, Fiona Fidler, Nicholas Fox, Aaron Frank, Hannah Fraser, Scott Friedman, Ben Gelman, James Gentile, C. Lee Giles, Michael B. Gordon, Reed Gordon-Sarney, Christopher Griffin, Timothy Gulden, et al.

Assessing the credibility of research claims is a central, continuous, and laborious part of the scientific process. Credibility assessment strategies range from expert judgment to aggregating existing evidence to systematic replication efforts.

Such assessments can require substantial time and effort. Research progress could be accelerated if there were rapid, scalable, accurate credibility indicators to guide attention and resource allocation for further assessment.

The SCORE program is creating and validating algorithms to provide confidence scores for research claims at scale. To investigate the viability of scalable tools, teams are creating a database of claims from papers in the social and behavioral sciences; expert- and machine-generated estimates of credibility; and evidence of reproducibility, robustness, and replicability to validate those estimates.
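
To make the validation step concrete, here is a minimal, hypothetical sketch (my own construction, not SCORE's actual pipeline or data) of scoring credibility estimates against observed replication outcomes using the Brier score; the claim IDs, scores, and outcomes are invented:

    # Illustrative only: toy claim records standing in for a claims database
    # paired with expert/machine credibility scores and replication outcomes.
    claims = {
        "claim-001": {"credibility": 0.82, "replicated": True},
        "claim-002": {"credibility": 0.35, "replicated": False},
        "claim-003": {"credibility": 0.60, "replicated": True},
        "claim-004": {"credibility": 0.55, "replicated": False},
    }

    # Brier score: mean squared gap between each confidence score and the
    # binary replication outcome (lower is better; a constant, uninformative
    # score of 0.5 would earn about 0.25).
    brier = sum(
        (c["credibility"] - float(c["replicated"])) ** 2
        for c in claims.values()
    ) / len(claims)
    print(f"Brier score: {brier:.3f}")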

Beyond the primary research objective, the data and artifacts generated from this program will be openly shared and provide an unprecedented opportunity to examine research credibility and evidence.

DOI : https://doi.org/10.31235/osf.io/46mnb

The Natural Selection of Bad Science

Authors : Paul E. Smaldino, Richard McElreath

Poor research design and data analysis encourage false-positive findings. Such poor methods persist despite perennial calls for improvement, suggesting that they result from something more than just misunderstanding.

The persistence of poor methods results partly from incentives that favor them, leading to the natural selection of bad science. This dynamic requires no conscious strategizing—no deliberate cheating nor loafing—by scientists, only that publication is a principal factor for career advancement.

Some normative methods of analysis have almost certainly been selected to further publication instead of discovery. In order to improve the culture of science, a shift must be made away from correcting misunderstandings and towards rewarding understanding. We support this argument with empirical evidence and computational modeling.

We first present a 60-year meta-analysis of statistical power in the behavioral sciences and show that power has not improved despite repeated demonstrations of the necessity of increasing power.
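
For readers unfamiliar with the quantity being tracked, below is a minimal sketch of a statistical power calculation for a two-sample test using the standard normal approximation; the function name and parameter values are illustrative and are not taken from the paper's meta-analysis:

    # Approximate power of a two-sided, two-sample test for a standardized
    # mean difference (Cohen's d), via the normal approximation.
    from scipy.stats import norm

    def power_two_sample(effect_size, n_per_group, alpha=0.05):
        z_crit = norm.ppf(1 - alpha / 2)               # two-sided critical value
        ncp = effect_size * (n_per_group / 2) ** 0.5   # noncentrality under H1
        return norm.sf(z_crit - ncp) + norm.cdf(-z_crit - ncp)

    # For a "medium" effect (d = 0.5), n = 20 per group yields only ~0.35
    # power, far below the conventional 0.8 target reached near n = 64.
    for n in (20, 64, 128):
        print(n, round(power_two_sample(0.5, n), 2))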

To demonstrate the logical consequences of structural incentives, we then present a dynamic model of scientific communities in which competing laboratories investigate novel or previously published hypotheses using culturally transmitted research methods.

As in the real world, successful labs produce more “progeny”, such that their methods are more often copied and their students are more likely to start labs of their own.

Selection for high output leads to poorer methods and increasingly high false discovery rates. We additionally show that replication slows but does not stop the process of methodological deterioration. Improving the quality of research requires change at the institutional level.
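
As a rough illustration of this dynamic, here is a minimal replicator-style simulation sketch; the functional forms, parameter values, and names are my own assumptions, not the authors' published model specification:

    import random

    B_TRUE = 0.1   # share of tested hypotheses that are actually true (assumption)
    POWER = 0.8    # probability a true effect is detected (assumption)

    def alpha(effort):
        """False-positive rate: 0.05 at full effort, rising as effort falls."""
        return 0.05 + 0.45 * (1.0 - effort)

    def productivity(effort):
        """Hypotheses tested per cycle: careful work is slower."""
        return 1 + int(4 * (1.0 - effort))

    def step(efforts, rng):
        # Payoff = count of positive (publishable) results this cycle.
        p_pos = [B_TRUE * POWER + (1 - B_TRUE) * alpha(e) for e in efforts]
        payoffs = [
            sum(rng.random() < p for _ in range(productivity(e)))
            for e, p in zip(efforts, p_pos)
        ]
        # Payoff-proportional imitation with small mutation: students copy
        # successful labs' methods and found labs of their own.
        parents = rng.choices(efforts, weights=[p + 1e-9 for p in payoffs],
                              k=len(efforts))
        return [min(1.0, max(0.0, e + rng.gauss(0, 0.02))) for e in parents]

    def fdr(effort):
        # False discovery rate implied by a given effort level.
        fp = (1 - B_TRUE) * alpha(effort)
        return fp / (fp + B_TRUE * POWER)

    rng = random.Random(0)
    efforts = [rng.random() for _ in range(100)]
    for _ in range(300):
        efforts = step(efforts, rng)
    mean_e = sum(efforts) / len(efforts)
    print(f"mean effort {mean_e:.2f}, implied FDR {fdr(mean_e):.2f}")

In this toy version, mean effort collapses over generations because low-effort labs publish more positives per cycle, and the implied false discovery rate climbs well above one half, matching the paper's qualitative conclusion.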

DOI : https://doi.org/10.1098/rsos.160384