Compliance with the first funder open access policy in Australia

Authors : Noreen Kirkman, Gaby Haddow

Introduction

In 2012, the National Health and Medical Research Council introduced Australia’s first national open access policy for funded journal articles. This study investigated the extent of compliance during the first two full years of the mandate.

Method

Funded articles were identified using the funding acknowledgment fields in Web of Science. Data about open access status were gathered from Google Scholar, the Directory of Open Access Journals, publishers' websites, Trove, and Australian institutional repositories.

Analysis

Quantitative analysis of the records of 3,190 articles, published across 1,137 journal titles, produced descriptive statistics characterizing the sample.

Results

Over two-thirds (67.3%) of the articles were open access: 56.24% in journals and 11.06% in repositories. Of the journal-based open access, hybrid journals accounted for 25.58%, fully open access journals for 20.85%, and delayed open access journals for 8.75%.

Author accepted manuscripts in Australian institutional repositories (7.24%) and PubMed Central (3.82%) contributed to overall compliance but represented a small proportion of the non-open access articles.

Conclusions

As the first comprehensive study to measure compliance with Australia’s National Health and Medical Research Council Open Access Policy, this study found a relatively high level of open access in journals alongside a low level of author accepted manuscripts in repositories.

Recommendations include better guidelines, procedures, and programs for grant recipients and a coordinated approach aimed at improving institutional repository deposit rates to achieve higher levels of open access and increased compliance with funder mandates.

URL : http://www.informationr.net/ir/25-2/paper857.html

The relationship between bioRxiv preprints, citations and altmetrics

Authors : Nicholas Fraser, Fakhri Momeni, Philipp Mayr, Isabella Peters

A potential motivation for scientists to deposit their scientific work as preprints is to enhance its citation or social impact. In this study we assessed the citation and altmetric advantage of bioRxiv, a preprint server for the biological sciences.

We retrieved metadata of all bioRxiv preprints deposited between November 2013 and December 2017, and matched them to articles that were subsequently published in peer-reviewed journals.

Citation data from Scopus and altmetric data from Altmetric.com were used to compare citation and online sharing behavior of bioRxiv preprints, their related journal articles, and nondeposited articles published in the same journals. We found that bioRxiv-deposited journal articles had sizably higher citation and altmetric counts compared to nondeposited articles.

Regression analysis reveals that this advantage is not explained by multiple explanatory variables related to the articles’ publication venues and authorship. Further research will be required to establish whether such an effect is causal in nature.

bioRxiv preprints themselves are being directly cited in journal articles, regardless of whether the preprint has subsequently been published in a journal. bioRxiv preprints are also shared widely on Twitter and in blogs, but remain relatively scarce in mainstream media and Wikipedia articles, in comparison to peer-reviewed journal articles.

DOI : https://doi.org/10.1162/qss_a_00043

A systematic examination of preprint platforms for use in the medical and biomedical sciences setting

Authors : Jamie J Kirkham, Naomi Penfold, Fiona Murphy, Isabelle Boutron, John PA Ioannidis, Jessica K Polka, David Moher

Objectives

The objective of this review is to identify all preprint platforms with biomedical and medical scope and to compare and contrast the key characteristics and policies of these platforms. We also aim to provide a searchable database to enable relevant stakeholders to compare platforms.

Study Design and Setting

Preprint platforms that were launched up to 25th June 2019 and have a biomedical and medical scope according to MEDLINE’s journal selection criteria were identified using existing lists, web-based searches and the expertise of both academic and non-academic publication scientists.

A data extraction form was developed, pilot-tested, and used to collect data from each preprint platform's webpage(s). Data were collected on: scope and ownership; content-specific characteristics and information relating to submission, journal transfer options, and external discoverability; screening, moderation, and permanence of content; and usage metrics and metadata.

Where possible, all online data were verified by the platform owner or representative by correspondence.

Results

A total of 44 preprint platforms were identified as having biomedical and medical scope: 17 (39%) were hosted by the Open Science Framework preprint infrastructure, six (14%) were provided by F1000 Research Ltd (the Open Research Central infrastructure), and 21 (48%) were other independent preprint platforms. Preprint platforms were either owned by non-profit academic groups, scientific societies, or funding organisations (n=28; 64%); owned or partly owned by for-profit publishers or companies (n=14; 32%); or owned by individuals or small communities (n=2; 5%).

Twenty-four (55%) preprint platforms accepted content from all scientific fields, although some of these had restrictions relating to funding source, geographical region, or an affiliated journal's remit.

Thirty-three (75%) preprint platforms provided details about article screening (basic checks), and 14 (32%) of these actively involved researchers with content expertise in the screening process.

The three most common screening checks related to the scope of the article, plagiarism, and legal/ethical/societal issues and compliance. Almost all preprint platforms allow submission to any peer-reviewed journal following posting and have a preservation plan for read access, and most have a policy regarding reasons for retraction and the sustainability of the service.

Forty-one (93%) platforms currently have usage metrics, with the most common metric being the number of downloads presented on the abstract page.

Conclusion

A large number of preprint platforms exist for use in biomedical and medical sciences, all of which offer researchers an opportunity to rapidly disseminate their research findings onto an open-access public server, subject to scope and eligibility.

However, the process by which content is screened before online posting and withdrawn or removed after posting varies between platforms, which may be associated with platform operation, ownership, governance and financing.

DOI : https://doi.org/10.1101/2020.04.27.063578

Community curation in PomBase: enabling fission yeast experts to provide detailed, standardized, sharable annotation from research publications

Authors : Antonia Lock, Midori A Harris, Kim Rutherford, Jacqueline Hayles, Valerie Wood

Maximizing the impact and value of scientific research requires efficient knowledge distribution, which increasingly depends on the integration of standardized published data into online databases.

To make data integration more comprehensive and efficient for fission yeast research, PomBase has pioneered a community curation effort that engages publication authors directly in FAIR-sharing of data representing detailed biological knowledge from hypothesis-driven experiments.

Canto, an intuitive online curation tool that enables biologists to describe their detailed functional data using shared ontologies, forms the core of PomBase’s system.

With 8 years’ experience, and as the author response rate reaches 50%, we review community curation progress and the insights we have gained from the project.

We highlight incentives and nudges we deploy to maximize participation, and summarize project outcomes, which include increased knowledge integration and dissemination as well as the unanticipated added value arising from co-curation by publication authors and professional curators.

DOI : https://doi.org/10.1093/database/baaa028

Open Access and Altmetrics in the pandemic age: Forecast analysis on COVID-19 literature

Authors : Daniel Torres-Salinas, Nicolas Robinson-Garcia, Pedro A. Castillo-Valdivieso

We present an analysis of the uptake of open access in COVID-19-related literature, as well as the social media attention these papers gather compared with non-OA papers.

We use a dataset of publications curated by Dimensions and analyze articles and preprints. Our sample includes 11,686 publications of which 67.5% are openly accessible.

OA publications tend to receive the largest share of social media attention, as measured by the Altmetric Attention Score. 37.6% of OA publications are bronze OA, meaning toll-access journals are providing free access to them.

MedRxiv contributes 36.3% of the documents in repositories, but papers in bioRxiv exhibit a higher AAS on average. We predict the growth of COVID-19 literature over the following 30 days by estimating ARIMA models for the overall publication set, for OA versus non-OA publications, and by location of the document (repository vs. journal).

We estimate that COVID-19 publications will double in the next 20 days, but non OA publications will grow at a higher rate than OA publications. We conclude by discussing the implications of such findings on the dissemination and communication of research findings to mitigate the coronavirus outbreak.
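The reported 20-day doubling time implies a constant daily exponential growth rate, which can be sketched with back-of-the-envelope arithmetic (illustrative only; the paper's actual forecasts come from fitted ARIMA models, and none of the numbers below are drawn from its dataset):

```python
import math

def growth_rate_from_doubling(doubling_days: float) -> float:
    """Daily exponential growth rate implied by a given doubling time."""
    return math.log(2) / doubling_days

def projected_factor(rate: float, days: float) -> float:
    """Multiplicative growth over `days` at a constant daily rate."""
    return math.exp(rate * days)

# A 20-day doubling time corresponds to a daily growth rate of ~3.5%.
rate = growth_rate_from_doubling(20)
# Over a 30-day horizon the literature would grow by 2^(30/20) ≈ 2.83x.
factor_30d = projected_factor(rate, 30)
```

Under constant exponential growth, doubling every 20 days means roughly a 2.8-fold increase over 30 days; an ARIMA forecast need not follow this simple pattern, which is why the paper fits separate models per publication subset.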

DOI : https://doi.org/10.1101/2020.04.23.057307

Did awarding badges increase data sharing in BMJ Open? A randomized controlled trial

Authors : Anisa Rowhani-Farid, Adrian Aldcroft, Adrian G. Barnett

Sharing data and code is an important component of reproducible research. Data sharing in research is widely discussed in the literature; however, there are no well-established, evidence-based incentives that reward data sharing, nor randomized studies demonstrating the effectiveness of data sharing policies at increasing data sharing.

A simple incentive, such as an Open Data Badge, might provide the change needed to increase data sharing in health and medical research. This study was a parallel group randomized controlled trial (protocol registration: doi:10.17605/OSF.IO/PXWZQ) with two groups, control and intervention, with 80 research articles published in BMJ Open per group, with a total of 160 research articles.

The intervention group received an email offering an Open Data Badge if they shared their data along with their final publication, and the control group received an email with no badge offer if they shared their data with their final publication.

The primary outcome was the data sharing rate. Badges did not noticeably motivate researchers who published in BMJ Open to share their data; the odds of awarding badges were nearly equal in the intervention and control groups (odds ratio = 0.9, 95% CI [0.1, 9.0]). Data sharing rates were low in both groups, with just two datasets shared in each of the intervention and control groups.
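With two shared datasets out of 80 articles in each arm, the unadjusted odds ratio can be recovered from the 2×2 table. This is a back-of-the-envelope check using the counts stated in the abstract; the reported OR of 0.9 (95% CI [0.1, 9.0]) comes from the paper's own analysis:

```python
def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """Unadjusted odds ratio for a 2x2 table:
    a = intervention, shared;  b = intervention, not shared;
    c = control, shared;       d = control, not shared.
    """
    return (a * d) / (b * c)

# Two of 80 articles shared data in each group.
or_unadjusted = odds_ratio(2, 78, 2, 78)  # = 1.0
```

The unadjusted odds ratio is exactly 1.0; the wide confidence interval around the paper's estimate reflects how little information four data-sharing events carry.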

The global movement towards open science has made significant gains with the development of numerous data sharing policies and tools.

What remains to be established is an effective incentive that motivates researchers to take up such tools to share their data.

DOI : https://doi.org/10.1098/rsos.191818

No raw data, no science: another possible source of the reproducibility crisis

Author : Tsuyoshi Miyakawa

A reproducibility crisis is a situation where many scientific studies cannot be reproduced. Inappropriate practices of science, such as HARKing, p-hacking, and selective reporting of positive results, have been suggested as causes of irreproducibility. In this editorial, I propose that a lack of raw data or data fabrication is another possible cause of irreproducibility.

As an Editor-in-Chief of Molecular Brain, I have handled 180 manuscripts since early 2017 and have made 41 editorial decisions categorized as “Revise before review,” requesting that the authors provide raw data.

Surprisingly, 21 of those 41 manuscripts were withdrawn without the raw data being provided, indicating that requiring raw data drove away more than half of the manuscripts. I rejected 19 of the remaining 20 manuscripts because of insufficient raw data.

Thus, more than 97% of the 41 manuscripts (40 of 41) did not present the raw data supporting their results when requested by an editor, suggesting the possibility that, at least in some of these cases, the raw data did not exist from the beginning.
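The headline figure follows directly from the counts given in the editorial (an arithmetic check only, using only numbers stated above):

```python
withdrawn = 21   # withdrawn without providing raw data
rejected = 19    # rejected for insufficient raw data
requested = 41   # manuscripts asked to provide raw data

no_raw_data = withdrawn + rejected        # 40 manuscripts
share = no_raw_data / requested           # ≈ 0.976, i.e. over 97%
```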

Considering that any scientific study should be based on raw data, and that data storage space should no longer be a challenge, journals should, in principle, have their authors make raw data publicly available in a public database or on the journal site upon publication, to increase the reproducibility of published results and public trust in science.

DOI : https://doi.org/10.1186/s13041-020-0552-2