What Value Do Journal Whitelists and Blacklists Have in Academia?

Authors : Jaime A. Teixeira da Silva, Panagiotis Tsigaris

This paper aims to address the issue of predatory publishing, sensu lato. To achieve this, we offer our perspectives, starting with some background on the birth of the concept, even though the phenomenon may have existed long before the term “predatory publishing” was popularized.

The issue of predation or “predatory” behavior in academic publishing is no longer limited to open access (OA). Many of the mainstream publishers that were exclusively subscription-based are now evolving towards a state of complete OA.

Academics seeking reliable sources of journals to publish their work tend to rely on a journal’s metrics such as citations and indexing, and on whether it is blacklisted or whitelisted.

Jeffrey Beall raised awareness of the risks of “predatory” OA publishing, and his blacklists of “predatory” OA journals and publishers began to be used for official purposes to distinguish valid from perceived invalid publishing venues.

We first reflect on why we believe the blacklists created by Beall were flawed, primarily because their weak criteria confused non-predatory with truly predatory journals, producing false positives, while also failing to blacklist truly predatory journals, producing false negatives.

Historically, most critiques of “predatory publishing” have relied excessively on Beall’s blacklists as the basis for their assumptions and conclusions, but there is a need to look beyond these.

There are currently a number of blacklists and whitelists circulating in academia, but they all have imperfections: the resurrected Beall blacklists; Crawford’s OA gray list, based on Beall’s lists; Cabell’s new blacklist, with about 11,000 journals; the DOAJ, with about 11,700 OA journals; and the UGC list, with over 32,600 journals prior to its recent (May 2018) purge of 4,305 journals.

The reader is then led into a discussion of blacklists’ lack of reliability, using the scientific framework of conducting research to assess, at the pre- and post-study levels, whether a journal could be predatory. We close our discussion by offering arguments for why we believe blacklists are academically invalid.

URL : What Value Do Journal Whitelists and Blacklists Have in Academia?

DOI : https://doi.org/10.1016/j.acalib.2018.09.017

Leveraging Concepts in Open Access Publications

Authors : Andrea Bertino, Luca Foppiano, Laurent Romary, Pierre Mounier

Aim

This paper addresses the integration of a Named Entity Recognition and Disambiguation (NERD) service within a group of open access (OA) publishing digital platforms and considers its potential impact on both research and scholarly publishing.

This application, called entity-fishing, was initially developed by Inria in the context of the EU FP7 project CENDARI (Lopez et al., 2014) and provides automatic entity recognition and disambiguation against Wikipedia and Wikidata. Distributed under an open-source licence, it was deployed as a web service in the DARIAH infrastructure hosted by the French Huma-Num.
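As an illustration only (not part of the HIRMEOS integration itself), the sketch below shows how such a NERD service could be queried over HTTP; the endpoint URL, payload shape and response fields are assumptions to be checked against the entity-fishing documentation.

# Minimal sketch (Python) of querying an entity-fishing instance over HTTP.
# The endpoint path, payload shape and response keys are assumptions based on
# the public entity-fishing documentation, not a description of the HIRMEOS setup.
import json
import requests

ENTITY_FISHING_URL = "https://cloud.science-miner.com/nerd/service/disambiguate"  # assumed endpoint

def annotate(text, lang="en"):
    """Send raw text to the service and return the recognized entities."""
    query = {"text": text, "language": {"lang": lang}}
    response = requests.post(ENTITY_FISHING_URL, files={"query": (None, json.dumps(query))})
    response.raise_for_status()
    return response.json().get("entities", [])

if __name__ == "__main__":
    for entity in annotate("Open access monographs in the humanities are published across Europe."):
        # Each entity carries the matched surface form and, when disambiguated,
        # a Wikidata identifier and a confidence score.
        print(entity.get("rawName"), entity.get("wikidataId"), entity.get("nerd_selection_score"))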

Methods

In this paper, we focus on the specific issues related to its integration on five OA platforms specialized in the publication of scholarly monographs in social sciences and humanities as part of the work carried out within the EU H2020 project HIRMEOS (High Integration of Research Monographs in the European Open Science infrastructure).

Results and Discussion

In the following sections, we give a brief overview of the current status and evolution of OA publications and how HIRMEOS aims to contribute to this.

We then give a comprehensive description of the entity-fishing service, focusing on its concrete applications in real use cases together with some further possible ideas on how to exploit the generated annotations.

Conclusions

We show that entity-fishing annotations can improve both research and the publishing process. Entity-fishing annotations can be used to achieve a better and quicker understanding of the specific, disciplinary language of certain monographs and so encourage non-specialists to use them.

In addition, a systematic implementation of the entity-fishing service can be used by publishers to generate thematic indexes within book collections to allow better cross-linking and query functions.

URL : https://hal.inria.fr/hal-01900303/

The future of global research: A case study on the use of scenario planning in the publishing industry

Authors : Samira Rhoods, Anca Babor

Key points

  • Scenario planning is fun and engaging and is a good opportunity to revisit your company’s core strengths and competitive advantage!
  • Scenario planning should drive long‐term thinking in organizations.
  • It will change the nature of the strategic conversation and can be used to help validate business innovation.
  • Scenarios can help to engage with other organizations in the industry and help people work together to create preferred future outcomes.
  • The complexity of scenario planning should not be underestimated and shortcuts do not work.

URL : The future of global research: A case study on the use of scenario planning in the publishing industry

DOI : https://doi.org/10.1002/leap.1152

Do funding applications where peer reviewers disagree have higher citations? A cross-sectional study

Authors : Adrian G Barnett, Scott R. Glisson, Stephen Gallo

Background

Decisions about which applications to fund are generally based on the mean scores of a panel of peer reviewers. As well as the mean, a large disagreement between peer reviewers may also be worth considering, as it may indicate a high-risk application with a high return.

Methods

We examined the peer reviewers’ scores for 227 funded applications submitted to the American Institute of Biological Sciences between 1999 and 2006. We examined the mean score and two measures of reviewer disagreement: the standard deviation and range.

The outcome variable was the relative citation ratio, which is the number of citations from all publications associated with the application, standardised by field and publication year.
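To make these measures concrete, the sketch below (not the authors’ code) computes the mean and the two disagreement measures per application and relates them to the relative citation ratio; the column names and values are illustrative.

# Minimal sketch (Python/pandas) of the three reviewer-score summaries used in
# the study: the mean and two disagreement measures (standard deviation, range),
# related to the relative citation ratio. All names and values are illustrative.
import pandas as pd

scores = pd.DataFrame({
    "application_id": [1, 1, 1, 2, 2, 2],
    "score": [4.0, 4.5, 3.5, 2.0, 4.5, 3.0],
})
outcomes = pd.DataFrame({
    "application_id": [1, 2],
    "relative_citation_ratio": [1.8, 0.9],
})

summary = scores.groupby("application_id")["score"].agg(
    mean_score="mean",                         # panel mean
    sd_score="std",                            # disagreement measure 1
    range_score=lambda s: s.max() - s.min(),   # disagreement measure 2
).reset_index()

merged = summary.merge(outcomes, on="application_id")
# Correlate each summary with the citation outcome.
print(merged.corr(numeric_only=True)["relative_citation_ratio"])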

Results

There was a clear increase in relative citations for applications with a better mean score. There was no association between relative citations and either of the two measures of disagreement.

Conclusions

We found no evidence that reviewer disagreement was able to identify applications with a higher than average return. However, this is the first study to empirically examine this association, and it would be useful to examine whether reviewer disagreement is associated with research impact in other funding schemes and with larger sample sizes.

URL : Do funding applications where peer reviewers disagree have higher citations? A cross-sectional study

DOI : http://dx.doi.org/10.12688/f1000research.15479.2

Sharing health research data – the role of funders in improving the impact

Authors : Robert F. Terry, Katherine Littler, Piero L. Olliaro

Recent public health emergencies with outbreaks of influenza, Ebola and Zika revealed that the mechanisms for sharing research data are neither being used nor adequate for the purpose, particularly where data need to be shared rapidly.

A review of research papers, including completed clinical trials related to priority pathogens, found that only 31% (98 out of 319 published papers, excluding case studies) provided access to all the data underlying the paper – 65% of these papers gave no information on how to find or access the data.

Only two clinical trials out of 58 on interventions for WHO priority pathogens provided any link in their registry entry to the background data.

Interviews with researchers revealed that the reluctance to share data included a lack of confidence in the utility of the data; an absence of academic incentives for rapid dissemination, which prevents subsequent publication; and a disconnect between those who collect the data and those who wish to use it quickly.

The role of the funders of research needs to change to address this. Funders need to engage early with researchers and related stakeholders to understand their concerns and work harder to define more explicitly the benefits to all stakeholders.

Secondly, there needs to be a direct benefit from sharing data for the people who collect and curate the data.

Thirdly, more work needs to be done to realise the intent of making data sharing resources more equitable, ethical and efficient.

Finally, a checklist of the issues that need to be addressed when designing new or revising existing data sharing resources should be created. This checklist would highlight the technical, cultural and ethical issues that need to be considered and point to examples of emerging good practice that can be used to address them.

URL : Sharing health research data – the role of funders in improving the impact

DOI : http://dx.doi.org/10.12688/f1000research.16523.1

Evaluating research and researchers by the journal impact factor: is it better than coin flipping?

Authors : Ricardo Brito, Alonso Rodríguez-Navarro

The journal impact factor (JIF) is the average number of citations received by the papers published in a journal, calculated according to a specific formula; it is extensively used for the evaluation of research and researchers.

The method assumes that all papers in a journal have the same scientific merit, which is measured by the JIF of the publishing journal. This implies that the number of citations measures scientific merit, but the JIF does not evaluate each individual paper by its own number of citations.

Therefore, in the comparative evaluation of two papers, the use of the JIF carries a risk of failure, which occurs when the paper in the lower-JIF journal is in fact more cited than the paper it is compared with in the higher-JIF journal.

To quantify this risk of failure, this study calculates failure probabilities, taking advantage of the lognormal distribution of citations. For two journals whose JIFs differ ten-fold, the failure probability is low.

However, in most cases when two papers are compared, the JIFs of the journals are not so different. Then, the failure probability can be close to 0.5, which is equivalent to evaluating by coin flipping.
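To illustrate the idea, the sketch below (not the paper’s own calculation) estimates such a failure probability by Monte Carlo simulation, assuming lognormally distributed citation counts whose means equal the journals’ JIFs; the dispersion parameter and JIF values are illustrative assumptions.

# Minimal sketch (Python/NumPy) of the failure probability: the chance that a
# randomly drawn paper from the lower-JIF journal collects more citations than
# one from the higher-JIF journal, under lognormal citation distributions.
# The sigma value and JIF pairs are illustrative assumptions, not the paper's data.
import numpy as np

rng = np.random.default_rng(42)

def failure_probability(jif_low, jif_high, sigma=1.1, n=1_000_000):
    # For a lognormal with parameters (mu, sigma), the mean is exp(mu + sigma**2 / 2),
    # so choose mu so that the mean citation count equals the journal's JIF.
    mu_low = np.log(jif_low) - sigma**2 / 2
    mu_high = np.log(jif_high) - sigma**2 / 2
    low = rng.lognormal(mu_low, sigma, n)
    high = rng.lognormal(mu_high, sigma, n)
    return np.mean(low > high)

print(failure_probability(2.0, 20.0))  # ten-fold JIF difference: failure probability stays low
print(failure_probability(2.0, 3.0))   # similar JIFs: failure probability approaches 0.5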

URL : https://arxiv.org/abs/1809.10999

OpenAPC: a contribution to a transparent and reproducible monitoring of fee-based open access publishing across institutions and nations

Authors : Dirk Pieper, Christoph Broschinski

The OpenAPC initiative releases data sets on fees paid for open access (OA) journal articles by universities, funders and research institutions under an open database licence.

OpenAPC is part of the INTACT project, which is funded by the German Research Foundation and located at Bielefeld University Library.

This article provides insight into OpenAPC’s technical and organizational background and shows how transparent and reproducible reporting on fee-based open access can be conducted across institutions and publishers to draw conclusions on the state of the OA transformation process.
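As an illustration of the kind of reporting this enables, the sketch below aggregates the openly licensed OpenAPC dataset with pandas; the raw-file URL and column names are assumptions to be checked against the OpenAPC repository on GitHub, and the snippet is not part of OpenAPC’s own tooling.

# Minimal sketch (Python/pandas) of reproducible APC reporting on the OpenAPC data.
# The raw-file URL and the column names ("institution", "period", "euro",
# "publisher", "is_hybrid") are assumptions to be verified against the repository.
import pandas as pd

OPENAPC_CSV = "https://raw.githubusercontent.com/OpenAPC/openapc-de/master/data/apc_de.csv"

apc = pd.read_csv(OPENAPC_CSV, usecols=["institution", "period", "euro", "publisher", "is_hybrid"])

# Mean and total fee per publisher, split into fully OA and hybrid articles.
report = (
    apc.groupby(["publisher", "is_hybrid"])["euro"]
       .agg(articles="count", mean_apc="mean", total_spend="sum")
       .round(2)
       .sort_values("total_spend", ascending=False)
)
print(report.head(10))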

As part of the INTACT subproject ESAC, the article also shows how OpenAPC workflows can be used to analyse offsetting deals, using the example of Springer Compact agreements.

URL : OpenAPC: a contribution to a transparent and reproducible monitoring of fee-based open access publishing across institutions and nations

DOI : http://doi.org/10.1629/uksg.439