Mega-journals are a new kind of scholarly journal made possible by electronic publishing. They are open access (OA) and funded by article processing charges, which authors pay for the publishing services. What distinguishes mega-journals from other OA journals in particular is a peer review process that focuses only on scientific trustworthiness.
The journals can easily publish thousands of articles per year, since there is no need to filter articles to fit a restricted number of slots in the publishing schedule. This study updates earlier longitudinal studies of the evolution of mega-journals and their publication volumes.
After very rapid growth in 2010–2013, the increase in overall article volumes has slowed down. Mega-journals have also become increasingly dependent on Chinese authors for sustained growth; Chinese authors now contribute 25% of all articles in such journals.
There has also been an internal shift in market shares. PLOS ONE, which totally dominated mega-journal publishing in the early years, currently publishes around one-third of all articles. Scientific Reports has grown rapidly since 2014 and is now the biggest journal.
In the last couple of years, the role of Open Access (OA) publishing has become central in science management and research policy. In the UK and the Netherlands, national OA mandates require the scientific community to seriously consider publishing research outputs in OA forms.
At the same time, other elements of Open Science are also becoming part of the debate, extending it beyond the publishing of research outputs to related aspects of the chain of scientific knowledge production, such as open peer review and open data.
From a research management point of view, it is important to keep track of the progress made in the OA publishing debate. Until now, this has been quite problematic, because OA as a topic is hard to capture with bibliometric methods: most databases supporting bibliometric analysis lack exhaustive and accurate open access labelling of scientific publications.
In this study, we present a methodology that systematically creates OA labels for large sets of publications processed in the Web of Science database. The methodology is based on the combination of diverse data sources that provide evidence of publications being OA.
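The combination of evidence sources described above can be sketched as a simple label-merging step. This is a hypothetical illustration, not the study's actual pipeline: the source names and DOIs below are invented, and the rule applied here (label a publication OA if at least one source provides evidence) is one plausible way to combine such data.

```python
# Hypothetical sketch: combine OA evidence for each DOI from several
# independent sources and keep the union of supporting sources as the label.
# Source names and records are illustrative, not the study's data.

def label_open_access(records):
    """records: iterable of (doi, source, is_oa) tuples from evidence sources."""
    labels = {}
    for doi, source, is_oa in records:
        if is_oa:
            labels.setdefault(doi, set()).add(source)
    # A publication is labelled OA if at least one source provides evidence.
    return {doi: sorted(sources) for doi, sources in labels.items()}

evidence = [
    ("10.1000/xyz1", "doaj", True),        # journal listed in DOAJ
    ("10.1000/xyz1", "pmc", True),         # full text in PubMed Central
    ("10.1000/abc2", "doaj", False),
    ("10.1000/abc2", "repository", True),  # green OA copy found
]
print(label_open_access(evidence))
# {'10.1000/xyz1': ['doaj', 'pmc'], '10.1000/abc2': ['repository']}
```

Keeping the list of supporting sources, rather than a bare boolean, also makes it possible to distinguish gold from green OA evidence downstream.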
This work is an exploratory study of how we could progress a step towards an AI-assisted peer-review system. The proposed approach is an ambitious attempt to automate desk rejection, a phenomenon prevalent in academic peer review.
In this investigation, we first attempt to decipher the possible reasons why a scientific manuscript is rejected from the editor's desk. To address those causes, we combine a range of information extraction techniques, clustering, and citation analysis to formulate a supervised solution to the identified problems.
The proposed approach addresses two important causes of rejection: (i) a paper rejected as out of scope, and (ii) a paper rejected for poor quality. We extract several features to quantify a paper's quality and its degree of in-scopeness, exploring keyword search, citation analysis, the reputations of authors and affiliations, and similarity to accepted papers.
The features are then fed to standard machine-learning classifiers to develop an automated system. On a sizable test set, our generic approach yields promising results across three different journals.
The study demonstrates the potential for renewed interest from the research community in the study of rejected papers and encourages progress towards an automated peer-review system.
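The feature-to-classifier pipeline described in this abstract can be sketched in a few lines. This is an illustrative stand-in, not the paper's implementation: the feature names and values are invented, and a minimal nearest-centroid classifier substitutes for the standard machine-learning models the authors mention.

```python
# Illustrative sketch (not the paper's implementation): papers are reduced
# to numeric features such as in-scope keyword overlap, author reputation,
# and similarity to accepted papers, then fed to a classifier. A simple
# nearest-centroid classifier stands in for the standard ML models here.

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(features, labels):
    """features: list of feature vectors; labels: 'accept' or 'desk-reject'."""
    by_label = {}
    for x, y in zip(features, labels):
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(model, x):
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda y: dist(model[y], x))

# Hypothetical vectors: [keyword overlap, author reputation, similarity]
X = [[0.9, 0.8, 0.85], [0.8, 0.6, 0.9], [0.1, 0.2, 0.15], [0.2, 0.1, 0.2]]
y = ["accept", "accept", "desk-reject", "desk-reject"]
model = train(X, y)
print(predict(model, [0.85, 0.7, 0.8]))  # → accept
```

In practice, any standard classifier (logistic regression, SVM, random forest) could be trained on such feature vectors; the sketch only shows the shape of the supervised setup.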
Authors: Misha Teplitskiy, Daniel Acuna, Aida Elamrani-Raoult, Konrad Kording, James Evans
Personal connections between creators and evaluators of scientific works are ubiquitous, and the possibility of bias ever-present. Although connections have been shown to bias prospective judgments of (uncertain) future performance, it is unknown whether such biases occur in the much more concrete task of assessing the scientific validity of already completed work, and if so, why.
This study presents evidence that personal connections between authors and reviewers of neuroscience manuscripts are associated with biased judgments and explores the mechanisms driving the effect.
Using reviews from 7,981 neuroscience manuscripts submitted to the journal PLOS ONE, which instructs reviewers to evaluate manuscripts only on scientific validity, we find that reviewers favored authors close in the co-authorship network by ~0.11 points on a 1.0–4.0 scale for each step of proximity.
PLOS ONE’s validity-focused review and the substantial amount of favoritism shown by distant vs. very distant reviewers, both of whom should have little to gain from nepotism, point to the central role of substantive disagreements between scientists in different “schools of thought.”
The results suggest that removing bias from peer review cannot be accomplished simply by recusing the closely-connected reviewers, and highlight the value of recruiting reviewers embedded in diverse professional networks.
Utilizing the Project COUNTER Release 4 JR1-GOA report, two librarians explore gold open access usage data in comparison to journal package subscription usage represented in the JR1 reports. This paper outlines the methodology and study undertaken at the Portland State University Library and the University of Nebraska Medical Center Library using these reports for the first time.
The initial outcomes of the study are provided in tables for 2014 and 2015. The intent of the study was to give both institutions a baseline for further study. In addition, some ideas are offered for how these reports can be used in future vendor negotiations.
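The kind of baseline comparison described here can be sketched as a simple ratio of gold OA full-text requests (JR1-GOA) to total full-text requests (JR1) per package. This is an assumed illustration: the package names and counts below are invented, and the libraries' actual analysis may have been structured differently.

```python
# Illustrative computation (not the paper's actual data): the share of
# successful full-text requests that were gold OA, per journal package,
# using hypothetical JR1 and JR1-GOA totals.

def goa_share(jr1_total, jr1_goa_total):
    """Fraction of successful full-text requests that were gold OA."""
    return jr1_goa_total / jr1_total if jr1_total else 0.0

# Hypothetical packages: (JR1 total requests, JR1-GOA requests)
packages = {"Package A": (120000, 3000), "Package B": (50000, 4500)}
for name, (total, goa) in packages.items():
    print(f"{name}: {goa_share(total, goa):.1%} gold OA usage")
# Package A: 2.5% gold OA usage
# Package B: 9.0% gold OA usage
```

A high gold OA share for a subscribed package is the kind of signal that could inform the vendor negotiations the abstract mentions.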
Research data services promise to advance many academic libraries’ strategic goals of becoming partners in the research process and integrating library services with modern research workflows. Academic librarians are well positioned to make an impact in this space due to their expertise in managing, curating, and preserving digital information, and a history of engaging with scholarly communications writ large.
Some academic libraries have quickly developed infrastructure and support for activities ranging from data storage and curation to project management and collaboration, while others are just beginning to address the data needs of their researchers.
Regardless of which end of the spectrum they identify with, libraries are still seeking to understand the research landscape and define their role in the process.
This article blends a general perspective on these issues with case studies from three institutions: the University of Cincinnati, Oklahoma State University, and Florida State University, all of which are at different levels of implementation, maturity, and campus involvement.
Although librarians initially hoped institutional repositories (IRs) would grow through researcher self-archiving, practice shows that growth is much more likely through library-directed deposit. Libraries must then find efficient ways to ingest material into their IR to ensure growth and relevance.
DESCRIPTION OF PROGRAM
Valparaiso University developed and implemented a semiautomated workflow to reduce the time needed to ingest articles into its IR, ValpoScholar. The workflow, which continues to be refined, draws on practices and ideas used by other repositories to more efficiently collect metadata for items and upload them to the repository.
The article discusses the pros and cons of this workflow and areas of ingesting that still need to be addressed, including adding full-text items, checking copyright policies, managing student staffing, and dealing with hurdles created by the repository’s software.