Factors Influencing Cities’ Publishing Efficiency

Author : Csomós György

Purpose

Recently, a vast number of scientific publications have been produced in cities in emerging countries. It has long been observed that the publication output of Beijing has exceeded that of any other city in the world, including such leading centres of science as Boston, New York, London, Paris, and Tokyo.

Researchers have suggested that, instead of focusing on cities’ total publication output, the quality of the output in terms of the number of highly cited papers should be examined. However, in the period from 2014 to 2016, Beijing produced as many highly cited papers as Boston, London, or New York.

In this paper, another method is proposed to measure cities’ publishing performance by focusing on cities’ publishing efficiency (i.e., the ratio of highly cited articles to all articles produced in that city).
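The efficiency indicator above is a simple ratio. As a minimal illustration (the paper itself publishes no code, and the city names and counts below are hypothetical), it can be computed as:

```python
def publishing_efficiency(highly_cited: int, total: int) -> float:
    """Ratio of highly cited articles to all articles produced in a city."""
    if total == 0:
        raise ValueError("a city with no articles has undefined efficiency")
    return highly_cited / total

# Hypothetical counts, for illustration only
cities = {"City A": (120, 4000), "City B": (90, 1500)}
for name, (highly_cited, total) in cities.items():
    print(f"{name}: {publishing_efficiency(highly_cited, total):.3f}")
```

Note that, unlike raw publication counts, this ratio is largely size-independent: a small city with a high share of highly cited papers can outrank a much larger producer.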

Design/methodology/approach

First, 554 cities are ranked based on their publishing efficiency, then some general factors influencing cities’ publishing efficiency are revealed. The general factors examined in this paper are as follows: the linguistic environment of cities, cities’ economic development level, the location of excellent organisations, cities’ international collaboration patterns, and their scientific field profile.

Furthermore, the paper examines the fundamental differences between the general factors influencing the publishing efficiency of the top 100 most efficient cities and the bottom 100 least efficient cities.

Findings

Based on the research results, the conclusion can be drawn that a city’s publishing efficiency will be high if it meets the following general conditions: it is in a country in the Anglosphere–Core; it is in a high-income country; it is home to top-ranked universities and/or world-renowned research institutions; researchers affiliated with that city collaborate most intensely with researchers affiliated with cities in the United States, Germany, England, France, Canada, Australia, and Italy; and its highly cited articles appear most productively in high-impact multidisciplinary journals, in health sciences disciplines (especially general internal medicine and oncology), and in natural sciences disciplines (especially physics, astronomy, and astrophysics).

Research limitations

It is always problematic to demarcate the boundaries of cities (e.g., New York City vs. Greater New York), and regarding this issue there is no consensus among researchers.

The Web of Science presents the names of cities in the addresses reported by the authors of publications. In this paper, cities correspond to the spatial units between the country/state level and the institution level as indicated in the Web of Science.

Furthermore, it is necessary to highlight that the Web of Science is biased towards English-language journals and journals published in the field of biomedicine. These facts may influence the outcome of the research.

Practical implications

Publishing efficiency, as an indicator, shows how successful a city is at the production of science. Naturally, cities have limited opportunities to compete for components of the science establishment (e.g., universities, hospitals).

However, cities can compete to attract innovation-oriented companies, high-tech firms, and the R&D facilities of multinational companies, for example by establishing science parks. The positive effect of this process on a city’s performance in science can be observed in the example of Beijing, whose publishing efficiency has increased rapidly.

Originality/value

Previous scientometric studies have examined cities’ publication output in terms of the number of papers or the number of highly cited papers, which are largely size-dependent indicators; this paper, however, attempts to present a more quality-based approach.

URL : Factors Influencing Cities’ Publishing Efficiency

DOI : https://doi.org/10.2478/jdis-2018-0014

Biomedical authors’ awareness of publication ethics: an international survey

Authors : Sara Schroter, Jason Roberts, Elizabeth Loder, Donald B Penzien, Sarah Mahadeo, Timothy T Houle

Objective

The extent to which biomedical authors have received training in publication ethics, and their attitudes and opinions about the ethical aspects of specific behaviours, have been understudied. We sought to characterise the knowledge and attitudes of biomedical authors about common issues in publication ethics.

Design

Cross-sectional online survey.

Setting and participants

Corresponding authors of research submissions to 20 journals.

Main outcome measure(s)

Perceived level of unethical behaviour (rated 0 to 10) presented in five vignettes containing key variables that were experimentally manipulated on entry to the survey and perceived level of knowledge of seven ethical topics related to publishing (prior publication, author omission, self-plagiarism, honorary authorship, conflicts of interest, image manipulation and plagiarism).

Results

4043/10 582 (38%) researchers responded. Respondents worked in 100 countries and reported varying levels of publishing experience. 67% (n=2700) had received some publication ethics training from a mentor, 41% (n=1677) a partial course, 28% (n=1130) a full course and 55% (n=2206) an online course; only a small proportion rated training received as excellent.

There was a full range (0 to 10 points) in ratings of the extent of unethical behaviour within each vignette, illustrating a broad range of opinion about the ethical acceptability of the behaviours evaluated; these opinions, however, were little altered by the context in which the behaviour occurred.

Participants reported substantial variability in their perceived knowledge of seven publication ethics topics; one-third perceived their knowledge to be less than ‘some knowledge’ for the sum of the seven ethical topics and only 9% perceived ‘substantial knowledge’ of all topics.

Conclusions

We found a large degree of variability in espoused training and perceived knowledge, and variability in views about how ethical or unethical the scenarios were. Ethical standards need to be better articulated and taught to improve consistency of training across institutions and countries.

URL : Biomedical authors’ awareness of publication ethics: an international survey

DOI : http://dx.doi.org/10.1136/bmjopen-2017-021282

Confused about copyright? Assessing Researchers’ Comprehension of Copyright Transfer Agreements

Authors: Alexandra Kohn, Jessica Lange

INTRODUCTION

Academic authors’ confusion about copyright and publisher policy is often cited as a challenge to the effective sharing of their own published research: it can have a chilling effect on self-archiving in institutional and subject repositories, and can lead to the posting of article versions on social networking sites in contravention of publisher policy.

This study seeks to determine the extent to which authors understand the terms of these policies as expressed in publishers’ copyright transfer agreements (CTAs), taking into account such factors as the authors’ disciplines and publishing experience, as well as the wording and structure of these agreements.

METHODS

We distributed an online survey experiment to corresponding authors of academic research articles indexed in the Scopus database. Participants were randomly assigned to read one of two copyright transfer agreements and were subsequently asked to answer a series of questions about these agreements to determine their level of comprehension.

The survey was sent to 3,154 participants, with 122 responding, a 4% response rate. Basic demographic information, as well as information about participants’ previous publishing experience, was also collected. We analyzed the survey data using Ordinary Least Squares (OLS) regressions and probit regressions.
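As a rough illustration of the regression approach (the authors’ actual models, variables, and data are not reproduced here; the variable names and numbers below are hypothetical), a simple one-predictor OLS fit can be written in closed form in a few lines of pure Python:

```python
def ols_fit(x, y):
    """Closed-form simple OLS: fit y = intercept + slope * x."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
             / sum((xi - mean_x) ** 2 for xi in x))
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Hypothetical data: years in academia vs. CTA comprehension score (%)
experience = [1, 3, 5, 10, 15, 20]
score = [30, 32, 35, 33, 31, 37]
intercept, slope = ols_fit(experience, score)
```

A probit regression, the study’s other method, models a binary outcome and has no closed-form solution; in practice both models are typically fitted with a statistics library rather than by hand.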

RESULTS AND DISCUSSION

Participants demonstrated a low rate of understanding of the terms of the CTAs they were asked to read, averaging a score of 33% on the survey and indicating low comprehension of author rights.

This figure did not vary significantly, regardless of the respondents’ discipline, time in academia, level of experience with publishing, or whether or not they had published previously with the publisher whose CTA they were administered. Results also indicated that participants did equally poorly on the survey regardless of which of the two CTAs they received.

However, academic authors do appear to have a greater chance of understanding a CTA when a specific activity is explicitly outlined in the text of the agreement.

URL : Confused about copyright? Assessing Researchers’ Comprehension of Copyright Transfer Agreements

DOI : http://doi.org/10.7710/2162-3309.2253

Marketing via Email Solicitation by Predatory (and Legitimate) Journals: An Evaluation of Quality, Frequency and Relevance

Authors: Warren Burggren, Dilip K. Madasu, Kevin S. Hawkins, Martin Halbert

INTRODUCTION

Open access (OA) journals have proliferated in recent years. Many are highly reputable, delivering on the promise of open access to research as an alternative to traditional, subscription-based journals.

Yet some OA journals border on, or clearly fall within, the realm of so-called “predatory journals.” Most discussion of such journals has focused on the quality of articles published within them.

Considerably less attention has been paid to the marketing practices of predatory journals—primarily their mass e-mailing—and to the impact that this practice may have on recipients’ perception of OA journals as a whole.

METHODS

This study analyzed a subset of the 1,816 e-mails received by a single university biology faculty member during a 24-month period (2015 and 2016) with an update from December 2017 and January 2018.

RESULTS

Of those e-mails sent in 2015, approximately 37% were copies or near-copies of previous e-mail messages sent to the recipient, less than 25% of e-mails from predatory journals mentioned publication fees, only about 30% of soliciting journals were listed in DOAJ, and only about 4% had an identifiable impact factor.

Although most e-mails professed familiarity with, and respect for, the recipient, more than two thirds showed no actual evidence of such familiarity, implying the use of mass-e-mailing methodologies.

Almost 80% of the e-mail solicitations contained grammar and/or spelling mistakes. Finally, and perhaps most importantly, only 4% of e-mails were judged highly relevant to the recipient’s area of expertise.

DISCUSSION AND CONCLUSION

In light of the marketing practices of many predatory journals, we advocate specific instructions for librarians, faculty mentors, and administrators of legitimate OA journals as they interact with new researchers, junior faculty, and other professionals learning how to discern the quality of journals that send direct e-mail solicitations.

URL : Marketing via Email Solicitation by Predatory (and Legitimate) Journals: An Evaluation of Quality, Frequency and Relevance

DOI : https://doi.org/10.7710/2162-3309.2246

“No comment”?: A study of commenting on PLOS articles

Authors : Simon Wakeling, Peter Willett, Claire Creaser, Jenny Fry, Stephen Pinfield, Valerie Spezi, Marc Bonne, Christina Founti, Itzelle Medina Perea

Article commenting functionality allows users to add publicly visible comments to an article on a publisher’s website. As well as facilitating forms of post-publication peer review, comments are thought to serve publishers of open-access mega-journals (large, broad-scope OA journals that seek to publish all technically or scientifically sound research) as a means for the community to discuss and communicate the significance and novelty of the research, factors which are not assessed during peer review.

In this paper we present the results of an analysis of commenting on articles published by the Public Library of Science (PLOS), publisher of the first and best-known mega-journal PLOS ONE, between 2003 and 2016.

We find that while overall commenting rates are low, and have declined since 2010, there is substantial variation across different PLOS titles. Using a typology of comments developed for this research we also find that only around half of comments engage in an academic discussion of the article, and that these discussions are most likely to focus on the paper’s technical soundness.

Our results suggest that publishers have yet to encourage significant numbers of readers to leave comments, with implications for the effectiveness of commenting as a means of collecting and communicating community perceptions of an article’s importance.

DOI : https://doi.org/10.1177/0165551518819965

Unethical aspects of open access

Authors : David Shaw, Bernice Elger

In this article we identify and discuss several ethical problematic aspects of open access scientific publishing.

We conclude that, despite some positive effects, open access is unethical for at least three reasons: it discriminates against researchers, creates an editorial conflict of interest and diverts funding from the actual conduct of research. To be truly open access, all researchers must be able to access its benefits.

DOI : https://doi.org/10.1080/08989621.2018.1537789

What Value Do Journal Whitelists and Blacklists Have in Academia?

Authors : Jaime A. Teixeira da Silva, Panagiotis Tsigaris

This paper aims to address the issue of predatory publishing, sensu lato. To achieve this, we offer our perspectives, starting initially with some background surrounding the birth of the concept, even though the phenomenon may have already existed long before the popularization of the term “predatory publishing”.

The issue of predation or “predatory” behavior in academic publishing is no longer limited to open access (OA). Many of the mainstream publishers that were exclusively subscription-based are now evolving towards a state of complete OA.

Academics seeking reliable sources of journals to publish their work tend to rely on a journal’s metrics such as citations and indexing, and on whether it is blacklisted or whitelisted.

Jeffrey Beall raised awareness of the risks of “predatory” OA publishing, and his blacklists of “predatory” OA journals and publishers began to be used for official purposes to distinguish valid from perceived invalid publishing venues.

We initially reflect on why we believe the blacklists created by Beall were flawed, primarily because their weak criteria confused non-predatory with truly predatory journals, producing false positives, and failed to blacklist truly predatory journals, producing false negatives.

Historically, most critiques of “predatory publishing” have relied excessively on Beall’s blacklists for their assumptions and conclusions, but there is a need to look beyond these.

There are currently a number of blacklists and whitelists circulating in academia, but they all have imperfections, such as the resurrected Beall blacklists, Crawford’s OA gray list based on Beall’s lists, Cabell’s new blacklist with about 11,000 journals, the DOAJ with about 11,700 OA journals, and UGC, with over 32,600 journals prior to its recent (May 2018) purge of 4305 journals.

The reader is led into a discussion about blacklists’ lack of reliability, using the scientific framework of conducting research to assess whether a journal could be predatory at the pre- and post-study levels. We close our discussion by offering arguments why we believe blacklists are academically invalid.

URL : What Value Do Journal Whitelists and Blacklists Have in Academia?

DOI : https://doi.org/10.1016/j.acalib.2018.09.017