Commons-Based Peer Production in the Work of Yochai Benkler

Author : Vangelis Papadimitropoulos

Yochai Benkler defines commons-based peer production as a non-market sector of information, knowledge and cultural production in which output is treated not as private property but according to an ethic of open sharing and co-operation, and which is greatly facilitated by the Internet and free/open source software.

This paper makes the case that there is a tension between Benkler’s liberal commitments and his anarchistic vision of the commons. Benkler limits the scope of commons-based peer production to the immaterial production of the digital commons, while paradoxically envisaging the control of the world economy by the commons.

This paradox reflects a deeper lacuna in his work, revealing the absence of a concrete strategy as to how the immaterial production of the digital commons can connect to material production and control the world economy.

The paper concludes with an enquiry into some of the latest efforts in the literature to fill this gap.

URL : Commons-Based Peer Production in the Work of Yochai Benkler

DOI : https://doi.org/10.31269/triplec.v16i2.1009

Evaluating research and researchers by the journal impact factor: is it better than coin flipping?

Authors : Ricardo Brito, Alonso Rodríguez-Navarro

The journal impact factor (JIF) is the average number of citations received by the papers published in a journal, calculated according to a specific formula; it is extensively used for the evaluation of research and researchers.

The method assumes that all papers in a journal have the same scientific merit, measured by the JIF of the publishing journal. This implies that, although the number of citations is taken to measure scientific merit, the JIF does not evaluate each individual paper by its own citation count.

Therefore, in the comparative evaluation of two papers, using the JIF carries a risk of failure, which occurs when the paper in the lower-JIF journal actually has more citations than the paper in the higher-JIF journal to which it is compared, yet is ranked below it.

To quantify this risk of failure, the study calculates failure probabilities, taking advantage of the lognormal distribution of citations. For two journals whose JIFs differ ten-fold, the failure probability is low.

However, in most cases when two papers are compared, the JIFs of the journals are not so different. Then, the failure probability can be close to 0.5, which is equivalent to evaluating by coin flipping.
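The authors' argument can be illustrated with a small simulation. The sketch below is a minimal illustration, not taken from the paper: it assumes, as a modelling convenience, that citation counts in each journal follow a lognormal distribution whose mean equals the journal's JIF, and the parameter values (sigma, sample size, example JIFs) are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)

def failure_probability(jif_low, jif_high, sigma=1.0, n=100_000):
    """Estimate the probability that a random paper from the lower-JIF
    journal has more citations than a random paper from the higher-JIF
    journal, assuming lognormal citation distributions whose means equal
    the JIFs. sigma and n are illustrative assumptions, not values from
    the paper."""
    # For a lognormal distribution, mean = exp(mu + sigma**2 / 2),
    # so choose mu to make the mean equal to the JIF.
    mu_low = np.log(jif_low) - sigma**2 / 2
    mu_high = np.log(jif_high) - sigma**2 / 2
    cites_low = rng.lognormal(mu_low, sigma, n)
    cites_high = rng.lognormal(mu_high, sigma, n)
    return np.mean(cites_low > cites_high)

print(failure_probability(2.0, 20.0))  # ten-fold JIF gap: about 0.05
print(failure_probability(2.0, 2.5))   # similar JIFs: about 0.44, close to coin flipping
```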

URL : https://arxiv.org/abs/1809.10999

Consistency of interdisciplinarity measures

Authors : Qi Wang, Jesper Wiborg Schneider

Assessing interdisciplinarity is an important and challenging task in bibliometric studies. Previous studies tend to emphasize that the nature and concept of interdisciplinarity are ambiguous and uncertain (e.g., Leydesdorff & Rafols, 2010; Rafols & Meyer, 2010; Sugimoto & Weingart, 2014).

As a consequence, various measures of interdisciplinarity have been proposed. However, few studies have examined the relations between these measures. In this context, this paper aims to systematically review these interdisciplinarity measures and explore their inherent relations.

We examine these measures in relation to the Web of Science (WoS) journal subject categories (SCs), and also in relation to an interdisciplinary research center at Aarhus University.
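To make the kind of measure under review concrete, the sketch below computes Rao-Stirling diversity, one commonly used interdisciplinarity indicator, from the shares of a paper's references across WoS subject categories and the pairwise dissimilarities between those categories. The proportions and dissimilarity values are made up purely for illustration and do not come from the study.

```python
import numpy as np

def rao_stirling(proportions, dissimilarity):
    """Rao-Stirling diversity: the sum over category pairs (i != j) of
    p_i * p_j * d_ij, where p_i is the share of references in subject
    category i and d_ij the dissimilarity between categories i and j."""
    p = np.asarray(proportions, dtype=float)
    d = np.asarray(dissimilarity, dtype=float)
    pairwise = p[:, None] * p[None, :] * d  # p_i * p_j * d_ij for all pairs
    np.fill_diagonal(pairwise, 0.0)         # drop the i == j terms
    return pairwise.sum()

# Hypothetical paper citing three WoS subject categories.
shares = [0.5, 0.3, 0.2]                     # share of references per category
dists = np.array([[0.0, 0.8, 0.6],           # illustrative dissimilarities
                  [0.8, 0.0, 0.4],
                  [0.6, 0.4, 0.0]])
print(rao_stirling(shares, dists))           # higher value = more interdisciplinary
```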

In line with the conclusion of Digital Science (2016), our results reveal that the current state of interdisciplinarity measurement in science studies is confusing and unsatisfactory. We obtained surprisingly dissimilar results from measures that are supposed to capture similar features.

We suggest that interdisciplinarity as a measurement construct should be used and interpreted with caution in future research evaluation and research policies.

URL : https://arxiv.org/abs/1810.00577

Using ORCID, DOI, and Other Open Identifiers in Research Evaluation

Authors : Laurel L. Haak, Alice Meadows, Josh Brown

An evaluator’s task is to connect the dots between a program’s goals and its outcomes. This can be accomplished through surveys, research, and interviews, and is frequently performed post hoc.

Research evaluation is hampered by a lack of data that clearly connect a research program with its outcomes and, in particular, by ambiguity about who has participated in the program and what contributions they have made. Manually making these connections is very labor-intensive, and algorithmic matching introduces errors and assumptions that can distort results.

In this paper, we discuss the use of identifiers in research evaluation—for individuals, their contributions, and the organizations that sponsor them and fund their work. Global identifier systems are uniquely positioned to capture global mobility and collaboration.

By leveraging connections between local infrastructures and global information resources, evaluators can map data sources that were previously either unavailable or prohibitively labor-intensive to access.

We describe how identifiers, such as ORCID iDs and DOIs, are being embedded in research workflows across science, technology, engineering, arts, and mathematics; how this is affecting data availability for evaluation purposes; and we provide examples of evaluations that are leveraging identifiers.
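As one illustration of how such identifiers can be connected programmatically, the sketch below queries the public Crossref REST API for the metadata of a DOI (here, the DOI of this very article) and lists the ORCID iDs attached to its author records. It is a minimal example assuming network access; a production evaluation workflow would add error handling, caching and polite rate limiting.

```python
import requests

def orcids_for_doi(doi):
    """Fetch Crossref metadata for a DOI and return (author name, ORCID iD)
    pairs for every author record that carries an ORCID iD."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    resp.raise_for_status()
    authors = resp.json()["message"].get("author", [])
    return [
        (f"{a.get('given', '')} {a.get('family', '')}".strip(), a["ORCID"])
        for a in authors
        if a.get("ORCID")
    ]

# DOI of the article summarised above.
print(orcids_for_doi("10.3389/frma.2018.00028"))
```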

We also discuss the importance of provenance and preservation in establishing confidence in the reliability and trustworthiness of data and relationships, and in the long-term availability of metadata describing objects and their inter-relationships.

We conclude with a discussion on opportunities and risks for the use of identifiers in evaluation processes.

URL : Using ORCID, DOI, and Other Open Identifiers in Research Evaluation

DOI : https://doi.org/10.3389/frma.2018.00028

OpenAPC: a contribution to a transparent and reproducible monitoring of fee-based open access publishing across institutions and nations

Authors : Dirk Pieper, Christoph Broschinski

The OpenAPC initiative releases data sets on fees paid for open access (OA) journal articles by universities, funders and research institutions under an open database licence.

OpenAPC is part of the INTACT project, which is funded by the German Research Foundation and located at Bielefeld University Library.

This article provides insight into OpenAPC’s technical and organizational background and shows how transparent and reproducible reporting on fee-based open access can be conducted across institutions and publishers to draw conclusions on the state of the OA transformation process.
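As a sketch of the kind of reproducible reporting the initiative enables, the snippet below loads the openly licensed OpenAPC core dataset and summarises the fees paid per publisher. The file URL and the column names (publisher, euro) are assumptions based on the layout of the initiative's public GitHub repository and may change over time.

```python
import pandas as pd

# Core OpenAPC dataset; URL and column names are assumptions based on the
# initiative's public GitHub repository (OpenAPC/openapc-de) and may change.
URL = ("https://raw.githubusercontent.com/OpenAPC/openapc-de/"
       "master/data/apc_de.csv")

apc = pd.read_csv(URL)

# Article counts and mean APC (in EUR) per publisher, one example of
# transparent, cross-institutional reporting on fee-based open access.
summary = (apc.groupby("publisher")["euro"]
              .agg(["count", "mean"])
              .sort_values("count", ascending=False))
print(summary.head(10))
```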

As part of ESAC, a subproject of INTACT, the article also shows how OpenAPC workflows can be used to analyse offsetting deals, using the example of Springer Compact agreements.

URL : OpenAPC: a contribution to a transparent and reproducible monitoring of fee-based open access publishing across institutions and nations

DOI : http://doi.org/10.1629/uksg.439

Predatory Open Access Journals Publishing: What, Why and How?

Author : Shamprasad M. Pujar

The Internet has transformed scholarly publishing and made online resources available under both subscription and open access models. Open access has enabled wider access to the scholarly literature, thus reducing the digital divide between the haves and the have-nots.

In the case of journal articles, although the ‘Gold’ (author-pays) and ‘Green’ access models have risen to the occasion, some journal publishers have turned the ‘Gold’ model to their advantage, earning a profit by charging publication fees and adopting unethical publishing practices.

An effort has been made here to explore what ‘predatory’ open access journal publishing is, why this kind of publishing is flourishing, and how the model works.

URL : http://hdl.handle.net/10760/32032

On the role of openness in education: A historical reconstruction

Authors : Sandra Peter, Markus Deimann

In the context of education, “open(ness)” has become the watermark for a fast-growing number of learning materials and associated platforms and practices from a variety of institutions and individuals. Open Educational Resources (OER), Massive Open Online Courses (MOOC), and more recently, initiatives such as Coursera are just some of the forms this movement has embraced under the “open” banner.

Yet, ongoing calls to discuss and elucidate the “meaning” and particularities of openness in education point to a lack of clarity around the concept. “Open” in education is currently mostly debated in the context of the technological developments that allowed it to emerge in its current forms.

More in-depth explorations of its philosophical underpinnings are pushed into the background. Therefore, this paper proposes a historical approach to bring clarity to the concept and to unmask the tensions that have played out in the past.

It will then show how this knowledge can inform current debates around different open initiatives.

URL : https://journals.openedition.org/dms/2491