« This paper explores the origins of the 1:1 Principle within the Dublin Core Metadata Initiative (DCMI). It finds that the need for the 1:1 Principle emerged from prior work among cultural heritage professionals responsible for describing reproductions and surrogate resources using traditional cataloging methods. As the solutions to these problems encountered new ways of modeling semantic data that emerged outside of libraries, archives, and museums, tensions arose within the DCMI community. This paper aims to fill the gaps in our understanding of the 1:1 Principle by outlining the conceptual foundations that led to its inclusion in DCMI documentation, how the Principle has been (mis)understood in practice, how violations of the Principle have been operationalized, and how the fundamental issues raised by the Principle continue to challenge us today. This discussion situates the 1:1 Principle within larger discussions about cataloging practice and emerging Linked Data approaches. »
« MELIBEA is a Spanish database that uses a composite formula with eight weighted conditions to estimate the effectiveness of Open Access mandates (registered in ROARMAP). We analyzed 68 mandated institutions for publication years 2011-2013 to determine how well the MELIBEA score and its individual conditions predict what percentage of published articles indexed by Web of Knowledge is deposited in each institution’s OA repository, and when. We found a small but significant positive correlation (0.18) between MELIBEA score and deposit percentage. We also found that for three of the eight MELIBEA conditions (deposit timing, internal use, and opt-outs), one value of each was strongly associated with deposit percentage or deposit latency (immediate deposit required, deposit required for performance evaluation, unconditional opt-out allowed for the OA requirement but no opt-out for the deposit requirement). When we updated the initial values and weights of the MELIBEA formula for mandate effectiveness to reflect the empirical association we had found, the score’s predictive power doubled (0.36). There are not yet enough OA mandates to test further mandate conditions that might contribute to mandate effectiveness, but these findings already suggest that it would be useful for future mandates to adopt these three conditions so as to maximize their effectiveness, and thereby the growth of OA. »
« The internet is greatly improving the impact of scholarly journals, but it also poses new threats to their quality. Publishers have arisen that abuse the Gold Open Access model, in which the author pays a fee to have an article published, to make money with so-called predatory journals. These publishers falsely claim to conduct peer review, which makes them more prone to publishing fraudulent and plagiarised research. This thesis looks at three possible methods to stop predatory journals: black- and whitelists, open peer review systems, and new metrics. Black- and whitelists set up rules and regulations that credible publishers and journals should follow. Open peer review systems should make it harder for predatory publishers to make false claims about their peer review process. Metrics should measure more aspects of research impact and become less liable to gaming. The question is which of these three methods is the best candidate to stop predatory journals. As all three methods have drawbacks, especially for new but high-quality journals, none of them can stop predatory journals on its own. Rather, we need a system in which researchers, publishers and reviewers communicate more openly about the research they create, disseminate and read. But above all, we need to find a way to take away the incentives for researchers and publishers to engage in fraudulent practices. »
« This paper analyzes Altmetric.com, one of the most important altmetric data providers in current use. We analyzed a set of publications with a DOI indexed in the Web of Science during the period 2011-2013 and collected their data with the Altmetric API. 19% of the original set of papers was retrieved from Altmetric.com with at least some altmetric data. We identified 16 different social media sources from which Altmetric.com retrieves data; however, five of them cover 95.5% of the total set. Twitter (87.1%) and Mendeley (64.8%) have the highest coverage. We conclude that Altmetric.com is a transparent, rich and accurate tool for altmetric data. Nevertheless, there are still potential limitations in its exhaustiveness, as well as in its selection of social media sources, that require further research. »
« Systematically evaluating scientific literature is a time-consuming endeavor that requires hours of coding and rating. Here, we describe a method to distribute these tasks across a large group through online crowdsourcing. Using Amazon’s Mechanical Turk, crowdsourced workers (microworkers) completed four groups of tasks to evaluate the question, “Do nutrition-obesity studies with conclusions concordant with popular opinion receive more attention in the scientific community than do those that are discordant?” 1) Microworkers who passed a qualification test (19% passed) evaluated abstracts to determine if they were about human studies investigating nutrition and obesity. Agreement between the first two raters’ conclusions was moderate (κ = 0.586), with consensus being reached in 96% of abstracts. 2) Microworkers iteratively synthesized free-text answers describing the studied foods into one coherent term. Approximately 84% of foods were agreed upon, with only 4 and 8% of ratings failing manual review in different steps. 3) Microworkers were asked to rate the perceived obesogenicity of the synthesized food terms. Over 99% of responses were complete and usable, and opinions of the microworkers qualitatively matched the authors’ expert expectations (e.g., sugar-sweetened beverages were thought to cause obesity and fruits and vegetables were thought to prevent obesity). 4) Microworkers extracted citation counts for each paper through Google Scholar. Microworkers reached consensus or unanimous agreement for all successful searches. To answer the example question, data were aggregated and analyzed, and showed no significant association between popular opinion and the attention a paper received as measured by Scimago Journal Rank and citation counts. Direct microworker costs totaled $221.75 (estimated cost at minimum wage: $312.61). We discuss important points to consider to ensure good quality control and appropriate pay for microworkers.
With good reliability and low cost, crowdsourcing has potential to evaluate published literature in a cost-effective, quick, and reliable manner using existing, easily accessible resources. »
« Using a research-design methodology, this thesis is about creating a journal recommendation system for authors. Existing systems such as JANE or whichjournal.com offer recommendations based on content similarity. This study investigates how more sophisticated factors, such as openness, price (subscription or article processing charge), and speed of publication, can be included in the ranking of a recommendation system. The recommendation should also take into account the expectations of other stakeholders such as libraries and funders. »
Alternative URL: http://eprints.rclis.org/23523/
« In this paper, we examine the evolution of the impact of non-elite journals. We attempt to answer two questions. First, what fraction of the top-cited articles are published in non-elite journals, and how has this changed over time? Second, what fraction of the total citations go to non-elite journals, and how has this changed over time?
We studied citations to articles published in 1995-2013. We computed the 10 most-cited journals and the 1000 most-cited articles each year for all 261 subject categories in Scholar Metrics. We marked the 10 most-cited journals in a category as the elite journals for the category and the rest as non-elite.
There are two conclusions from our study. First, the fraction of top-cited articles published in non-elite journals increased steadily over 1995-2013. While the elite journals still publish a substantial fraction of high-impact articles, many more authors of well-regarded papers in diverse research fields are choosing other venues.
The number of top-1000 papers published in non-elite journals for the representative subject category went from 149 in 1995 to 245 in 2013, a growth of 64%. Looking at broad research areas, 4 out of 9 areas saw at least one-third of the top-cited articles published in non-elite journals in 2013. For 6 out of 9 areas, the fraction of top-cited papers published in non-elite journals for the representative subject category grew by 45% or more.
Second, now that finding and reading relevant articles in non-elite journals is about as easy as finding and reading articles in elite journals, researchers are increasingly building on and citing work published everywhere. Considering citations to all articles, the percentage of citations to articles in non-elite journals went from 27% in 1995 to 47% in 2013. Six out of nine broad areas had at least 50% of citations going to articles published in non-elite journals in 2013. »
« Characterized by an apparent break in causality and a seemingly random mode of operation, serendipity, or the gift of making an unexpected discovery, raises questions in the fields of logic, semiotics, and information retrieval. Encouraged by internet searching and non-linear hypertextual reading, serendipity is gradually finding a legitimate place within information retrieval. This interdisciplinary reflection is grounded in the fields of semiotics, logic, documentation, and digital cultures. Starting from a Peircean reading of serendipity, which assimilates this phenomenon to the concept of abduction, I analyze search engines and, more generally, hypertextuality on Web 2.0. The examples discussed are four documentary search tools on the internet: the online encyclopedia Wikipédia, the Amazon website, and the search engines Google and Oamos. From these analyses, the aim is to observe the impacts and limits of serendipity in the field of information retrieval on the internet. Favoured by the structure of the network, serendipity then appears as an openness of the mind to the improbable, and it also raises the risk of unfortunate documentary searches through the inverse phenomenon of zemblanity. »
Alternative URL: http://www.revuecygnenoir.org/numero/article/la-serendipite-sur-internet
« In this paper, we locate open access in the South African higher education research context where it is, distinctively, not shaped by the policy frameworks that are profoundly changing research dissemination behaviour in other parts of the world. We define open access and account for its rise by two quite different routes. We then present a case study of journal publishing at one South African university to identify existing journal publishing practices in terms of open access. This case provides the springboard for considering the implications – both positive and negative – of global open access trends for South African – and other – research and researchers. We argue that academics’ engagement with open access and scholarly communication debates is in their interests as global networked researchers whose virtual identities and online scholarship are now a critical aspect of their professional engagement. »
« Public research institutions and scientists are principal actors in the production and transfer of scientific knowledge, technologies and innovations for application in industry as well as for social and economic development. Given the relevance of science and technology actors, the aim of this study was to identify and explain factors in research governance that influence scientific knowledge production, and to contribute to empirical discussions on the impact of different governance models and structures. These discussions, although ongoing, remain limited and mixed in the literature. No previous study has examined the possible contribution of the scientific committee model of research governance to scientific performance at the level of the individual scientist. In this context, this study contributes to these discussions, first, by suggesting that scientific committee structures with significant research steering autonomy could contribute not only directly to scientific output but also indirectly through moderating effects on research practices. Second, it argues that autonomous scientific committee structures tend to play a better steering role than management-centric models and structures of research governance. »