Authors : Daniel Paul O’Donnell, Carey Viejou, Sylvia Chow, Rumi Graham, Jarret McKinnon, Dorothea Morrison, Reed Parsons, Courtney Rieger, Vanja Spirić, Elaine Toth
The Meeting of the Minds graduate student journal is edited primarily by students from our Masters programme. Our editorial board is therefore subject to high annual turnover, so our technological infrastructure and workflow must be easy to train for, accommodate differing levels of technological skill and editorial interest, and provide archiving that does not depend on future generations of students maintaining an interest in the journal.
This article provides a detailed and comparative account of the “off-the-shelf” systems and software used in developing the journal, with an explanation of the rationale behind our choices.
Conclusion and implications
The choices we made can be adopted by other journals interested in a low-cost, “future-proof” approach to developing a publishing infrastructure.
A simulation model based on parallel systems is established to explore the relation between the number of submissions and the overall quality of academic journals within a similar discipline under peer review.
The model simulates the submission, review, and acceptance behaviors of academic journals in a distributed manner. The simulation experiments indicate that the overall standard of academic journals may deteriorate when submissions become excessive.
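The submission-overload effect described in this abstract can be illustrated with a minimal toy model. This is a sketch under stated assumptions, not the authors' parallel model: a single journal whose review noise grows once submissions exceed reviewing capacity; the quality threshold, noise scale, and capacity figures are illustrative choices.

```python
import random

def mean_accepted_quality(n_submissions, capacity, threshold=0.6, seed=0):
    """Toy single-journal model: papers have true quality in [0, 1];
    reviewers estimate quality with noise that grows once submissions
    exceed reviewing capacity, and accept estimates above `threshold`."""
    rng = random.Random(seed)
    load = max(0.0, n_submissions / capacity - 1.0)
    sigma = 0.3 * load                      # review noise under overload
    accepted = [q for q in (rng.random() for _ in range(n_submissions))
                if q + rng.gauss(0.0, sigma) >= threshold]
    return sum(accepted) / len(accepted)

light = mean_accepted_quality(500, capacity=500)    # careful reviewing
heavy = mean_accepted_quality(5000, capacity=500)   # overloaded reviewing
# Under overload, noisy review lets weaker papers through, so the
# mean quality of accepted papers drops (heavy < light).
```

Even this crude mechanism reproduces the abstract's qualitative finding: when review capacity is saturated, acceptance decisions become noisier and the average quality of the accepted pool declines.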
Several authors have proposed that a large number of unusual combinations of cited references in a paper point to its high creative potential (or novelty). However, it is still not clear whether the number of unusual combinations can really measure the creative potential of papers.
The current study addresses this question on the basis of several case studies from the field of scientometrics. We identified some landmark papers in this field. Study subjects were the corresponding authors of these papers.
We asked them where the ideas for the papers came from and what role the cited publications played. The results revealed that the creative ideas were not necessarily inspired by past publications; the literature seems to be important mainly for contextualizing an idea within the field of scientometrics. Instead, we found that creative ideas arise from finding solutions to practical problems, result from discussions with colleagues, and profit from interdisciplinary exchange. The roots of the studied landmark papers are discussed in detail.
Authors : Asheley R. Landrum, Joseph Hilgard, Robert B. Lull, Heather Akin, Kathleen Hall Jamieson
Public trust in agricultural biotechnology organizations that produce so-called ‘genetically-modified organisms’ (GMOs) is affected by misinformed attacks on GM technology and by worry that producers’ concern for profits overrides their concern for the public good.
In an experiment, we found that reporting that the industry engages in open and transparent research practices increased the perceived trustworthiness of university and corporate organizations involved with GMOs.
Universities were considered more trustworthy than corporations overall, supporting prior findings in other technology domains.
The results suggest that commitment to, and communication of, open and transparent research practices should be part of the process of implementing agricultural biotechnologies.
In this exploratory case study, the interests, attitudes, and opinions of participants in the National Conference on Race and Ethnicity (NCORE) in American Higher Education are presented.
This case study sought to understand how college and university administrators and faculty perceived the need to create a peer-reviewed journal that aimed to support and create opportunities to publish research, policy, practices, and procedures within the context of race and ethnicity in American higher education.
The findings of this study reflect that the vast majority of those surveyed (n = 605) and interviewed (n = 5) support, and are interested in, having a peer-reviewed journal that focuses on race and ethnicity in American higher education.
Authors : Diego Chavarro, Ismael Rafols, Puay Tang
The assessment of research based on the journal in which it is published is a widely adopted practice. Some research assessments use the Web of Science (WoS) to identify “high quality” journals, which are assumed to publish excellent research.
The authority of WoS on journal quality stems from its selection of journals based on editorial standards and scientific impact criteria. These can be considered as universalistic criteria, meaning that they can be applied to any journal regardless of its place of publication, language, or discipline.
In this article we examine the coverage by WoS of journals produced in Latin America, Spain, and Portugal. We use a logistic regression to estimate the probability that a journal is covered by WoS given universalistic criteria (the journal's editorial standards and scientific impact) and particularistic criteria (its country, language, and discipline).
We find that inclusion of journals in WoS cannot be predicted from the universalistic criteria alone, because particularistic variables such as the journal's country, discipline, and language are also related to inclusion in WoS.
We conclude that using WoS as a universalistic tool for research assessment can disadvantage science published in journals with adequate editorial standards and scientific merit. We discuss the implications of these findings within the research evaluation literature, specifically for countries and disciplines not extensively covered by WoS.
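The kind of logistic-regression analysis this abstract describes can be sketched on synthetic data. Everything below is illustrative: the data are generated, not the authors' WoS sample, and the single "universalistic" feature and "particularistic" dummy stand in for their richer set of covariates.

```python
import math
import random

def fit_logistic(X, y, lr=0.1, epochs=300):
    """Plain stochastic-gradient logistic regression (no libraries)."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - yi                          # gradient of the log-loss
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

# Synthetic journals: coverage depends on editorial standards AND on a
# particularistic trait (a stand-in for language/country effects).
rng = random.Random(1)
X, y = [], []
for _ in range(500):
    standards = rng.random()                    # universalistic criterion
    particular = 1.0 if rng.random() < 0.5 else 0.0
    p_true = 1.0 / (1.0 + math.exp(-(3.0 * standards + 2.0 * particular - 3.0)))
    X.append([standards, particular])
    y.append(1 if rng.random() < p_true else 0)

w, b = fit_logistic(X, y)
# A clearly positive fitted coefficient on the particularistic dummy is
# the signature the paper reports: universalistic criteria alone do not
# determine coverage.
```

In the real study the interesting question is whether the particularistic coefficients remain significant after controlling for the universalistic ones; in this toy setup they do by construction, which is exactly the pattern the authors report for WoS.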
Since the 17th century, scientific knowledge has been produced through a collective process, involving specific technologies used to perform experiments, to regulate modalities for participation of peers or lay people, and to ensure validation of the facts and publication of major results.
In such a world, guided by the quest for a new kind of truth against previous beliefs, various forms of misconduct – from subtle plagiarism to the outright fabrication of data and results – have largely been considered minimal, if not nonexistent.
Yet “betrayers of the truth” have been alleged in many fraudulent cases from at least the 1970s onward, and the phenomenon is now a growing concern in many academic quarters. Faced with numerous alerts, journals have adopted dedicated editorial formats to notify their readers of emerging doubts about articles they have published.
This short piece focuses exclusively on these formats, which consist in “flagging” certain articles to mark their problematic status. The visibility given to these flags and policies undermines a basic component of the economy of science: how long can we collectively pretend that peer-reviewed knowledge should be the anchor against a “post-truth” world?