International Collaboration in Open Access Publications: How Income Shapes International Collaboration

Authors : Michael Cary, Taylor Rockwell

Does the rise of open access journals change the way researchers collaborate? Specifically, since publishing in open access journals requires a publication fee, does income affect how researchers form international collaborations?

To answer this question, we create a new data set by scraping bibliographic data from Multidisciplinary Digital Publishing Institute (MDPI) journals. Using the four income group classifications from the World Bank Analytical Classifications, we find that researchers from low-income nations are more likely to form international collaborations than researchers from wealthier nations.

This result is verified to be significant using a series of pairwise Kolmogorov–Smirnov tests. We then study which nations most frequently form international collaborations with other nations and find that the USA, China, Germany, and France are the most preferred nations for forming international collaborations.

While most nations prefer to form international collaborations with high-income nations, there are exceptions in which a nation most often collaborates with a nearby upper-middle-income or lower-middle-income nation.

We further this analysis by showing that these results are apparent across the six different research categories established in the Frascati Manual. Finally, trends in publications in MDPI journals mirror trends seen in all journals, such as the continued increase in the percentage of published papers involving international collaboration.
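For readers who want to see the mechanics, below is a minimal sketch of the kind of pairwise two-sample Kolmogorov–Smirnov comparison described above, run across the four World Bank income groups. The group labels follow the World Bank classification, but the collaboration-rate figures are invented purely for illustration and are not the paper's data.

```python
# Minimal sketch: pairwise two-sample Kolmogorov-Smirnov tests across the
# four World Bank income groups. The numbers below are made up for
# illustration; the paper uses collaboration data scraped from MDPI journals.
from itertools import combinations
from scipy.stats import ks_2samp

# Hypothetical international-collaboration rates, grouped by income class
samples = {
    "low income": [0.62, 0.71, 0.55, 0.68, 0.66],
    "lower-middle income": [0.48, 0.52, 0.44, 0.57, 0.50],
    "upper-middle income": [0.39, 0.41, 0.35, 0.46, 0.43],
    "high income": [0.30, 0.33, 0.28, 0.37, 0.31],
}

# Run a KS test for every pair of income groups
for (name_a, a), (name_b, b) in combinations(samples.items(), 2):
    stat, p = ks_2samp(a, b)
    print(f"{name_a} vs {name_b}: D = {stat:.3f}, p = {p:.3f}")
```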

URL : International Collaboration in Open Access Publications: How Income Shapes International Collaboration

DOI : https://doi.org/10.3390/publications8010013

Do Download Reports Reliably Measure Journal Usage? Trusting the Fox to Count Your Hens?

Authors : Alex Wood-Doughty, Ted Bergstrom, Douglas G. Steigerwald

Download rates of academic journals have joined citation counts as commonly used indicators of the value of journal subscriptions. While citations reflect worldwide influence, the value of a journal subscription to a single library is more reliably measured by the rate at which it is downloaded by local users.

If reported download rates accurately measure local usage, there is a strong case for using them to compare the cost-effectiveness of journal subscriptions. We examine data for nearly 8,000 journals downloaded at the ten universities in the University of California system during a period of six years.

We find that controlling for number of articles, publisher, and year of download, the ratio of downloads to citations differs substantially among academic disciplines.

After adding academic disciplines to the control variables, there remain substantial “publisher effects”, with some publishers reporting significantly more downloads than would be predicted by the characteristics of their journals.

These cross-publisher differences suggest that the currently available download statistics, which are supplied by publishers, are not sufficiently reliable to allow libraries to make subscription decisions based on price and reported downloads, at least without making an adjustment for publisher effects in download reports.
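The specification described above amounts to regressing download counts on journal characteristics plus publisher indicators and then inspecting the publisher coefficients. The sketch below shows one way such a model could be set up; the variable names and the synthetic data are assumptions for illustration, not the authors' dataset or exact specification.

```python
# Illustrative sketch: log-downloads regressed on journal characteristics
# plus publisher fixed effects. Any leftover "publisher effect" shows up in
# the C(publisher) coefficients. The data here are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "downloads": rng.integers(50, 5000, n),
    "citations": rng.integers(1, 500, n),
    "n_articles": rng.integers(10, 300, n),
    "year": rng.choice([2013, 2014, 2015, 2016, 2017, 2018], n),
    "discipline": rng.choice(["biology", "economics", "physics"], n),
    "publisher": rng.choice(["Publisher A", "Publisher B", "Publisher C"], n),
})

model = smf.ols(
    "np.log(downloads) ~ np.log(citations) + np.log(n_articles)"
    " + C(year) + C(discipline) + C(publisher)",
    data=df,
).fit()

# Publisher coefficients measure how far each publisher's reported downloads
# sit above or below what its journals' characteristics would predict.
print(model.summary())
```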

URL : Do Download Reports Reliably Measure Journal Usage? Trusting the Fox to Count Your Hens?

DOI : https://doi.org/10.5860/crl.80.5.694

The advantages of UK Biobank’s open access strategy for health research

Authors : Megan Conroy, Jonathan Sellors, Mark Effingham, Thomas J. Littlejohns, Chris Boultwood, Lorraine Gillions, Cathie L.M. Sudlow, Rory Collins, Naomi E. Allen

Ready access to health research studies is becoming more important as researchers, and their funders, seek to maximise the opportunities for scientific innovation and health improvements.

Large‐scale population‐based prospective studies are particularly useful for multidisciplinary research into the causes, treatment and prevention of many different diseases. UK Biobank has been established as an open‐access resource for public health research, with the intention of making the data as widely available as possible in an equitable and transparent manner.

Access to UK Biobank’s unique breadth of phenotypic and genetic data has attracted researchers worldwide from across academia and industry. As a consequence, it has enabled scientists to perform world‐leading collaborative research.

Moreover, open access to an already deeply characterized cohort has encouraged both public and private sector investment in further enhancements to make UK Biobank an unparalleled resource for public health research and an exemplar for the development of open access approaches for other studies.

DOI : https://doi.org/10.1111/joim.12955

AccessLab: Workshops to broaden access to scientific research

Authors : Amber G. F. Griffiths, Ivvet Modinou, Clio Heslop, Charlotte Brand, Aidan Weatherill, Kate Baker, Anna E. Hughes, Jen Lewis, Lee de Mora, Sara Mynott, Katherine E. Roberts, David J. Griffiths, Iain Hrynaszkiewicz, Natasha Simons, Azhar Hussain, Simon Goudie

AccessLabs are workshops with two simultaneous motivations, achieved through direct citizen-scientist pairings: (1) to decentralise research skills so that a broader range of people are able to access/use scientific research, and (2) to expose science researchers to the difficulties of using their research as an outsider, creating new open access advocates.

Five trial AccessLabs have taken place for policy makers, media/journalists, marine sector participants, community groups, and artists. The act of pairing science academics with local community members helps build understanding and trust between groups at a time when this relationship appears to be under increasing threat from different political and economic currents in society.

Here, we outline the workshop motivations, format, and evaluation, with the aim that others can build on the methods developed.

URL : AccessLab: Workshops to broaden access to scientific research

DOI : https://doi.org/10.1371/journal.pbio.3000258

Scientific misconduct and accountability in teams

Authors : Katrin Hussinger, Maikel Pellens

Increasing complexity and multidisciplinarity make collaboration essential for modern science. This, however, raises the question of how to assign accountability for scientific misconduct among larger teams of authors. Biomedical societies and science associations have put forward various sets of guidelines. Some state that all authors are jointly accountable for the integrity of the work.

Others stipulate that authors are only accountable for their own contribution. Alternatively, there are guarantor-type models that assign accountability to a single author. We contribute to this debate by analyzing the outcomes of 80 scientific misconduct investigations of biomedical scholars conducted by the U.S. Office of Research Integrity (ORI).

We show that the position of authors on the byline of 184 publications involved in misconduct cases correlates with responsibility for the misconduct. Based on a series of binary regression models, we show that first authors are 38% more likely to be responsible for scientific misconduct than authors listed in the middle of the byline (p<0.01). Corresponding authors are 14% more likely (p<0.05).

These findings suggest that a guarantor-like model, in which first authors are ex-ante accountable for misconduct, is very unlikely to miss the author responsible while implicating relatively few bystanders.
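The binary regression models mentioned above can be illustrated with a simple logistic regression of misconduct responsibility on byline position. The sketch below uses synthetic data and assumed variable names; it is not the ORI case data analyzed in the paper.

```python
# Illustrative sketch: logistic regression of misconduct responsibility on
# byline position, in the spirit of the models described above. The data are
# synthetic and the effect sizes are arbitrary.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
cases = pd.DataFrame({
    "first_author": rng.integers(0, 2, n),
    "corresponding_author": rng.integers(0, 2, n),
})
# Synthetic outcome loosely tied to byline position, for illustration only
logits = -1.0 + 1.5 * cases["first_author"] + 0.6 * cases["corresponding_author"]
cases["responsible"] = rng.binomial(1, 1 / (1 + np.exp(-logits)))

model = smf.logit(
    "responsible ~ first_author + corresponding_author", data=cases
).fit()
print(model.summary())

# Average marginal effects express the coefficients as changes in the
# probability of being responsible, comparable in spirit to the percentage
# differences quoted in the abstract.
print(model.get_margeff().summary())
```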

URL : Scientific misconduct and accountability in teams

DOI : https://doi.org/10.1371/journal.pone.0215962

Intellectual contributions meriting authorship: Survey results from the top cited authors across all science categories

Authors : Gregory S. Patience, Federico Galli, Paul A. Patience, Daria C. Boffito

Authorship is the currency of an academic career: the number of papers researchers publish demonstrates their creativity, productivity, and impact. To discourage coercive authorship practices and inflated publication records, journals require authors to affirm and detail their intellectual contributions, but this strategy has been unsuccessful, as authorship lists continue to grow.

Here, we surveyed close to 6000 of the top cited authors in all science categories with a list of 25 research activities that we adapted from the National Institutes of Health (NIH) authorship guidelines.

Responses varied widely from individuals in the same discipline, same level of experience, and same geographic region. Most researchers agreed with the NIH criteria and grant authorship to individuals who draft the manuscript, analyze and interpret data, and propose ideas.

However, thousands of the researchers also value supervision and contributing comments to the manuscript, whereas the NIH recommends discounting these activities when attributing authorship.

People value the minutiae of research beyond writing and data reduction: researchers in the humanities value it less than those in the pure and applied sciences, and individuals from Far East Asia and the Middle East and Northern Africa value these activities more than anglophones and northern Europeans.

When developing national and international collaborations, researchers must recognize these differences in people's values when assigning authorship.

URL : Intellectual contributions meriting authorship: Survey results from the top cited authors across all science categories

DOI : https://doi.org/10.1371/journal.pone.0198117

“No comment”?: A study of commenting on PLOS articles

Authors : Simon Wakeling, Peter Willett, Claire Creaser, Jenny Fry, Stephen Pinfield, Valerie Spezi, Marc Bonne, Christina Founti, Itzelle Medina Perea

Article commenting functionality allows users to add publicly visible comments to an article on a publisher's website. As well as facilitating forms of post-publication peer review, for publishers of open-access mega-journals (large, broad-scope OA journals that seek to publish all technically or scientifically sound research) comments are also thought to serve as a means for the community to discuss and communicate the significance and novelty of the research, factors which are not assessed during peer review.

In this paper we present the results of an analysis of commenting on articles published by the Public Library of Science (PLOS), publisher of the first and best-known mega-journal PLOS ONE, between 2003 and 2016.

We find that while overall commenting rates are low, and have declined since 2010, there is substantial variation across different PLOS titles. Using a typology of comments developed for this research we also find that only around half of comments engage in an academic discussion of the article, and that these discussions are most likely to focus on the paper’s technical soundness.

Our results suggest that publishers have yet to encourage significant numbers of readers to leave comments, with implications for the effectiveness of commenting as a means of collecting and communicating community perceptions of an article’s importance.

DOI : https://doi.org/10.1177/0165551518819965