« The internet is greatly improving the impact of scholarly journals, but it also poses new threats to their quality. Publishers have arisen that abuse the Gold Open Access model, in which the author pays a fee to have an article published, to make money with so-called predatory journals. These publishers falsely claim to conduct peer review, which makes them more prone to publishing fraudulent and plagiarised research. This thesis looks at three possible methods to stop predatory journals: blacklists and whitelists, open peer review systems, and new metrics. Blacklists and whitelists set out rules and regulations that credible publishers and journals should follow. Open peer review systems should make it harder for predatory publishers to make false claims about their peer review process. Metrics should measure more aspects of research impact and become less liable to gaming. The question is which of these three methods is the best candidate to stop predatory journals. As all three methods have drawbacks, especially for new but high-quality journals, none of them can stop predatory journals on its own. Rather, we need a system in which researchers, publishers and reviewers communicate more openly about the research they create, disseminate and read. But above all, we need to find a way to take away the incentives for researchers and publishers to engage in fraudulent practices. »
New data, new possibilities: Exploring the insides of Altmetric.com
« This paper analyzes Altmetric.com, one of the most important altmetric data providers in use today. We analyzed a set of publications with a DOI indexed in the Web of Science during the period 2011-2013 and collected their data with the Altmetric API. For 19% of the original set of papers, Altmetric.com returned at least some altmetric data. We identified 16 different social media sources from which Altmetric.com retrieves data; however, five of them cover 95.5% of the total set. Twitter (87.1%) and Mendeley (64.8%) have the highest coverage. We conclude that Altmetric.com is a transparent, rich and accurate tool for altmetric data. Nevertheless, there are still potential limitations to its exhaustiveness, as well as to its selection of social media sources, that need further research. »
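For readers who want to see what such a collection step looks like in practice, here is a minimal sketch of querying the public Altmetric API for a single DOI. The endpoint and response field names follow Altmetric's public v1 documentation and are assumptions to verify, not the paper's actual pipeline:

```python
# Minimal sketch: fetch the Altmetric.com record for one DOI via the public
# v1 API. Endpoint and field names are assumptions based on the public docs.
import requests

def fetch_altmetric(doi):
    """Return the Altmetric record for a DOI, or None if it is not tracked."""
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)
    if resp.status_code == 404:  # Altmetric.com has no data for this DOI
        return None
    resp.raise_for_status()
    return resp.json()

# Example: the PLOS ONE DOI cited later in this digest.
record = fetch_altmetric("10.1371/journal.pone.0100647")
if record is None:
    print("no altmetric data for this DOI")
else:
    # Per-source counts; the exact keys depend on the API version.
    print("tweets:", record.get("cited_by_tweeters_count", 0))
    print("Mendeley readers:", record.get("readers", {}).get("mendeley", 0))
```

A 404 response for untracked DOIs mirrors the coverage gap the paper reports: only 19% of the sampled Web of Science papers had any altmetric data at all.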
Using Crowdsourcing to Evaluate Published Scientific Literature: Methods and Example
« Systematically evaluating scientific literature is a time-consuming endeavor that requires hours of coding and rating. Here, we describe a method to distribute these tasks across a large group through online crowdsourcing. Using Amazon’s Mechanical Turk, crowdsourced workers (microworkers) completed four groups of tasks to evaluate the question, “Do nutrition-obesity studies with conclusions concordant with popular opinion receive more attention in the scientific community than do those that are discordant?” 1) Microworkers who passed a qualification test (19% passed) evaluated abstracts to determine if they were about human studies investigating nutrition and obesity. Agreement between the first two raters’ conclusions was moderate (κ = 0.586), with consensus being reached in 96% of abstracts. 2) Microworkers iteratively synthesized free-text answers describing the studied foods into one coherent term. Approximately 84% of foods were agreed upon, with only 4% and 8% of ratings failing manual review at different steps. 3) Microworkers were asked to rate the perceived obesogenicity of the synthesized food terms. Over 99% of responses were complete and usable, and the opinions of the microworkers qualitatively matched the authors’ expert expectations (e.g., sugar-sweetened beverages were thought to cause obesity and fruits and vegetables were thought to prevent obesity). 4) Microworkers extracted citation counts for each paper through Google Scholar. Microworkers reached consensus or unanimous agreement for all successful searches. To answer the example question, data were aggregated and analyzed, and showed no significant association between popular opinion and the attention a paper received as measured by SCImago Journal Rank and citation counts. Direct microworker costs totaled $221.75 (estimated cost at minimum wage: $312.61). We discuss important points to consider to ensure good quality control and appropriate pay for microworkers. With good reliability and low cost, crowdsourcing has the potential to evaluate published literature in a cost-effective, quick, and reliable manner using existing, easily accessible resources. »
DOI: 10.1371/journal.pone.0100647
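The agreement figure quoted above (κ = 0.586) is Cohen's kappa, which corrects the raw agreement rate for the agreement two raters would reach by chance alone. A small self-contained sketch, with invented rating vectors, shows the computation:

```python
# Sketch: Cohen's kappa for two raters' binary screening decisions
# (e.g., "is this abstract a human nutrition-obesity study?").
# The label vectors below are invented for illustration only.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Observed agreement corrected for agreement expected by chance."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(freq_a) | set(freq_b)
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
b = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]
print(round(cohens_kappa(a, b), 3))  # -> 0.583, "moderate" on the usual scale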
Where to publish? Development of a recommender system for academic publishing
« This thesis, based on a research-design method, is about creating a journal recommendation system for authors. Existing systems like JANE or whichjournal.com offer recommendations based on content similarity. This study investigates how more sophisticated factors, such as openness, price (subscription or article processing charge) and speed of publication, can be included in the ranking of a recommendation system. The recommendations should also consider the expectations of other stakeholders, such as libraries or funders. »
URL : http://eprints.rclis.org/23523/
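The abstract does not specify a ranking formula, but a toy sketch conveys the kind of multi-factor scoring the thesis proposes: start from a JANE-style content-similarity score and weigh in openness, article processing charge and publication speed. All journal records, weights and normalization caps below are invented for illustration:

```python
# Toy sketch of a multi-factor journal ranking in the spirit of the thesis:
# a content-similarity score plus openness, price and speed factors.
# Every journal record, weight and cap here is invented for illustration.
from dataclasses import dataclass

@dataclass
class Journal:
    name: str
    similarity: float    # content match with the manuscript, 0..1
    open_access: bool
    apc_usd: float       # article processing charge (0 for subscription venues)
    days_to_publish: int

def score(j, w_sim=0.5, w_open=0.2, w_price=0.15, w_speed=0.15):
    price_score = 1 - min(j.apc_usd / 3000, 1)         # cheaper is better, capped
    speed_score = 1 - min(j.days_to_publish / 365, 1)  # faster is better, capped
    return (w_sim * j.similarity + w_open * j.open_access
            + w_price * price_score + w_speed * speed_score)

candidates = [
    Journal("Journal A", 0.82, False, 0, 220),
    Journal("Journal B", 0.74, True, 1500, 90),
]
for j in sorted(candidates, key=score, reverse=True):
    print(f"{j.name}: {score(j):.3f}")
```

In a real system the weights would not be hard-coded: they could be exposed to the author or set from stakeholder preferences, for instance a funder's open access mandate shifting weight onto the openness factor.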
Rise of the Rest: The Growing Impact of Non-Elite Journals
« In this paper, we examine the evolution of the impact of non-elite journals. We attempt to answer two questions. First, what fraction of the top-cited articles are published in non-elite journals, and how has this changed over time? Second, what fraction of the total citations are to non-elite journals, and how has this changed over time?
We studied citations to articles published in 1995-2013. We computed the 10 most-cited journals and the 1000 most-cited articles each year for all 261 subject categories in Scholar Metrics. We marked the 10 most-cited journals in a category as the elite journals for the category and the rest as non-elite.
There are two conclusions from our study. First, the fraction of top-cited articles published in non-elite journals increased steadily over 1995-2013. While the elite journals still publish a substantial fraction of high-impact articles, many more authors of well-regarded papers in diverse research fields are choosing other venues.
The number of top-1000 papers published in non-elite journals for the representative subject category went from 149 in 1995 to 245 in 2013, a growth of 64%. Looking at broad research areas, 4 out of 9 areas saw at least one-third of the top-cited articles published in non-elite journals in 2013. For 6 out of 9 areas, the fraction of top-cited papers published in non-elite journals for the representative subject category grew by 45% or more.
Second, now that finding and reading relevant articles in non-elite journals is about as easy as finding and reading articles in elite journals, researchers are increasingly building on and citing work published everywhere. Considering citations to all articles, the percentage of citations to articles in non-elite journals went from 27% in 1995 to 47% in 2013. Six out of nine broad areas had at least 50% of citations going to articles published in non-elite journals in 2013. »
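The study's bookkeeping is easy to restate in code: within one subject category and year, mark the 10 most-cited journals as elite, then measure the share of top-cited articles, and of all citations, that falls outside that set. The sketch below runs both measurements on synthetic data:

```python
# Sketch of the paper's bookkeeping for one subject category and year:
# the 10 most-cited journals are "elite"; everything else is "non-elite".
# Articles are (journal, citations) pairs; the data below is synthetic.
import random
from collections import defaultdict

random.seed(0)
articles = [(f"journal_{random.randrange(40)}", random.randrange(1, 500))
            for _ in range(5000)]

# Total citations per journal -> the top 10 journals form the elite set.
per_journal = defaultdict(int)
for journal, cites in articles:
    per_journal[journal] += cites
elite = set(sorted(per_journal, key=per_journal.get, reverse=True)[:10])

# Share of the 1000 most-cited articles published outside the elite set.
top_articles = sorted(articles, key=lambda a: a[1], reverse=True)[:1000]
non_elite_share = sum(j not in elite for j, _ in top_articles) / len(top_articles)

# Share of all citations going to non-elite journals.
total = sum(per_journal.values())
non_elite_cites = sum(c for j, c in per_journal.items() if j not in elite) / total

print(f"top-1000 articles in non-elite journals: {non_elite_share:.0%}")
print(f"citations to non-elite journals: {non_elite_cites:.0%}")
```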
Serendipity on the Internet: Documentary Wandering or Creative Research?
« Characterized by an apparent break in causality and by seemingly random workings, serendipity, the gift of making unexpected discoveries, raises questions for logic, semiotics and information retrieval. Fostered by searching on the Internet and by non-linear hypertextual reading, serendipity is gradually finding a legitimate place within information retrieval. This interdisciplinary reflection is rooted in semiotics, logic, documentation and digital cultures. Starting from a Peircean reading of serendipity that assimilates the phenomenon to the concept of abduction, I analyze search engines and, more generally, hypertextuality on the Web 2.0. The examples discussed are four Internet search tools: the online encyclopedia Wikipédia, the Amazon website, and the search engines Google and Oamos. Building on these analyses, I consider the impacts and the limits of serendipity in the field of information retrieval on the Internet. Fostered by the structure of the network, serendipity thus appears as an openness of the mind to the improbable, and it also raises the risk of unfortunate documentary searches through the opposite phenomenon, zemblanity. »
URL : http://www.revuecygnenoir.org/numero/article/la-serendipite-sur-internet
Open access in South Africa: a case study and reflections
« In this paper, we locate open access in the South African higher education research context where it is, distinctively, not shaped by the policy frameworks that are profoundly changing research dissemination behaviour in other parts of the world. We define open access and account for its rise by two quite different routes. We then present a case study of journal publishing at one South African university to identify existing journal publishing practices in terms of open access. This case provides the springboard for considering the implications – both positive and negative – of global open access trends for South African – and other – research and researchers. We argue that academics’ engagement with open access and scholarly communication debates is in their interests as global networked researchers whose virtual identities and online scholarship are now a critical aspect of their professional engagement. »