Altmetrics and societal impact measurements: Match or mismatch? A literature review

Authors : Iman Tahamtan, Lutz Bornmann

Can alternative metrics (altmetrics) data be used to measure societal impact? We wrote this literature overview of empirical studies in order to find an answer to this question. The overview includes two parts.

The first part, “societal impact measurements”, covers possible methods of, and problems in, measuring the societal impact of research, case studies of societal impact measurement, societal impact considerations at funding organizations, and the societal problems that science is expected to solve.

The second part of the review, “altmetrics”, addresses a major question in research evaluation, which is whether altmetrics are proper indicators for measuring the societal impact of research. In the second part we explain the data sources used for altmetrics studies and the importance of field-normalized indicators for impact measurements.

This review indicates that impact measurements should be oriented towards pressing societal problems. Case studies explaining the societal impact of specific pieces of research appear to provide a legitimate method for measuring societal impact.

In the use of altmetrics, field-specific differences should be considered by applying field normalization (in cross-field comparisons). Altmetrics data such as social media counts might mainly reflect the public interest and discussion of scholarly works rather than their societal impact.

Altmetrics (Twitter data) might be employed especially fruitfully for research evaluation purposes if they are used in the context of network approaches. Conclusions based on altmetrics data in research evaluation should be drawn with caution.

URL : Altmetrics and societal impact measurements: Match or mismatch? A literature review

Original location : https://recyt.fecyt.es/index.php/EPI/article/view/epi.2020.ene.02

Results dissemination of registered clinical trials across Polish academic institutions: a cross-sectional analysis

Authors : Karolina Strzebonska, Mateusz T Wasylewski, Lucja Zaborowska, Nico Riede, Susanne Wieschowski, Daniel Strech, Marcin Waligora

Objectives

To establish the rates of publication and reporting of results for interventional clinical trials across Polish academic medical centres (AMCs) completed between 2009 and 2013. We also aim to compare the publication and reporting success of adult and paediatric trials.

Design

Cross-sectional study.

Setting

AMCs in Poland.

Participants

AMCs with interventional trials registered on ClinicalTrials.gov.

Main outcome measure

Results reporting on ClinicalTrials.gov and publication in a journal.

Results

We identified 305 interventional clinical trials registered on ClinicalTrials.gov, completed between 2009 and 2013 and affiliated with at least one AMC. Overall, 243 of the 305 trials (79.7%) had been published as articles or posted their summary results on ClinicalTrials.gov.

Results were posted within a year of study completion and/or published within 2 years of study completion for 131 trials (43.0%). Dissemination by both posting and publishing results in a timely manner was achieved by four trials (1.3%).

Conclusions

Our cross-sectional analysis revealed that Polish AMCs fail to meet the expectation of timely dissemination of the findings of all interventional clinical trials. Delayed dissemination and non-dissemination of trial results negatively affect decisions in healthcare.

URL : Results dissemination of registered clinical trials across Polish academic institutions: a cross-sectional analysis

DOI : http://dx.doi.org/10.1136/bmjopen-2019-034666

The role of geographic bias in knowledge diffusion: a systematic review and narrative synthesis

Authors : Mark Skopec, Hamdi Issa, Julie Reed, Matthew Harris

Background

Descriptive studies examining publication rates and citation counts demonstrate a geographic skew toward high-income countries (HIC), and research from low- or middle-income countries (LMICs) is generally underrepresented. This has been suggested to be due in part to reviewers’ and editors’ preference for HIC sources; however, in the absence of controlled studies, it is impossible to assert whether there is bias or whether variations in the quality or relevance of the articles being reviewed explain the geographic divide. This study synthesizes the evidence from randomized and controlled studies that explore geographic bias in the peer review process.

Methods

A systematic review was conducted to identify research studies that explicitly explore the role of geographic bias in the assessment of the quality of research articles.

Only randomized and controlled studies were included in the review. Five databases were searched to locate relevant articles. A narrative synthesis of included articles was performed to identify common findings.

Results

The systematic literature search yielded 3501 titles, from which 12 full texts were reviewed; a further eight were identified by searching the reference lists of those full texts. Of these articles, only three were randomized and controlled studies that examined variants of geographic bias.

One study found that abstracts attributed to HIC sources elicited higher review scores for the relevance of the research and the likelihood of recommending it to a colleague than did abstracts attributed to low-income country (LIC) sources.

Another study found that the predicted odds of acceptance for a submission to a computer science conference were statistically significantly higher for submissions from a “Top University.” Two of the studies showed geographic bias in the assessment of articles from “high”- versus “low”-prestige institutions.

Conclusions

Two of the three included studies found that geographic bias in some form was affecting peer review; however, further robust experimental evidence is needed to adequately inform practice in this area.

Reviewers and researchers should nonetheless be aware of whether author and institutional characteristics are interfering with their judgement of research.

URL : The role of geographic bias in knowledge diffusion: a systematic review and narrative synthesis

DOI : https://doi.org/10.1186/s41073-019-0088-0

Should research misconduct be criminalized?

Authors : Rafael Dal-Ré, Lex M Bouter, Pim Cuijpers, Christian Gluud, Søren Holm

For more than 25 years, research misconduct (research fraud) has been defined as fabrication, falsification, or plagiarism (FFP), although other research misbehaviors have also been added to codes of conduct and legislation.

A critical issue in deciding whether research misconduct should be subject to criminal law is its definition, because not all behaviors labeled as research misconduct qualify as serious crimes. Moreover, the assumption that all FFP is fraud and all non-FFP behavior is not is far from obvious.

In addition, new research misbehaviors have recently been described, such as prolific authorship and fake peer review, or have been boosted, such as the duplication of images. The scientific community has been largely successful in keeping criminal law away from cases of research misconduct.

Alleged cases of research misconduct are usually examined by committees of scientists, often from the same institution or university as the suspected offender, in a process that frequently lacks transparency.

Few countries have, or plan to introduce, independent bodies to address research misconduct, so for the coming years most universities and research institutions will continue to handle alleged research misconduct cases with their own procedures. A global operationalization of research misconduct with clear boundaries and clear criteria would be helpful.

There is room for improvement in reaching global clarity on what research misconduct is, how allegations should be handled, and which sanctions are appropriate.

URL : Should research misconduct be criminalized?

DOI : https://doi.org/10.1177/1747016119898400

Practices, Challenges, and Prospects of Big Data Curation: a Case Study in Geoscience

Authors : Suzhen Chen, Bin Chen

Open and persistent access to past, present, and future scientific data is fundamental for transparent and reproducible data-driven research. The scientific community is now facing both challenges and opportunities caused by increasingly complex disciplinary data systems.

Concerted efforts from domain experts, information professionals, and Internet technology experts are essential to ensure the accessibility and interoperability of big data.

Here we review current practices in building and managing big data within the context of large data infrastructure, using geoscience cyberinfrastructure such as Interdisciplinary Earth Data Alliance (IEDA) and EarthCube as a case study.

Geoscience is a data-rich discipline with a rapid expansion of sophisticated and diverse digital data sets. Having started to embrace the digital age, the community has applied big data and data mining tools to new types of research.

We also identify current challenges, key elements, and prospects for constructing a more robust and future-proof big data infrastructure for research and publication, as well as the roles, qualifications, and opportunities for librarians and information professionals in the data era.

URL : Practices, Challenges, and Prospects of Big Data Curation: a Case Study in Geoscience

DOI : https://doi.org/10.2218/ijdc.v14i1.669

How Many Papers Should Scientists Be Reviewing? An Analysis Using Verified Peer Review Reports

Authors : Vincent Raoult

The current peer review system is under stress from ever-increasing numbers of publications, the proliferation of open-access journals and an apparent difficulty in obtaining high-quality reviews in due time. At its core, this issue may be caused by scientists insufficiently prioritising reviewing.

Perhaps this low prioritisation is due to a lack of understanding of how many reviews researchers need to conduct to balance the peer review process. I obtained verified peer review data from 142 journals across 12 research fields, for a total of over 300,000 reviews and over 100,000 publications, to determine an estimate of the number of reviews required per publication in each field.

I then used this value in relation to the mean number of authors per publication per field to derive a ‘review ratio’: the expected minimum number of reviews an author in a given field should contribute for each paper they publish, in order to balance their input (publications) into the peer review process.

On average, 3.49 ± 1.45 (SD) reviews were required for each scientific publication, and the estimated review ratio across all fields was 0.74 ± 0.46 (SD) reviews per paper published per author. Since these are conservative estimates, I recommend scientists aim to conduct at least one review per publication they produce. This should ensure that the peer review system continues to function as intended.
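As a rough illustration of the arithmetic behind the review ratio (a minimal sketch in Python; the per-field author count below is an assumption for illustration, not a figure from the study), the ratio is simply the mean number of reviews required per publication divided by the mean number of authors per publication:

# Sketch of the review-ratio arithmetic described above.
# The author count is an illustrative assumption, not data from the study.
reviews_per_publication = 3.49   # mean reviews needed per published paper (from the abstract)
authors_per_publication = 4.7    # hypothetical mean number of co-authors per paper in a field

# Each author's fair share of reviewing per paper they co-author:
review_ratio = reviews_per_publication / authors_per_publication
print(f"Review ratio: {review_ratio:.2f} reviews per publication per author")
# With these example numbers the ratio comes out near 0.74, i.e. roughly one review
# (rounding up, as the author recommends) for every paper an author publishes.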

URL : How Many Papers Should Scientists Be Reviewing? An Analysis Using Verified Peer Review Reports

DOI : https://doi.org/10.3390/publications8010004