Authors : Giovanni Abramo, Ciriaco Andrea D’Angelo, Flavia Di Costa
The incidence of extramural collaboration in academic research is increasing as a result of various factors. These include policy measures intended to foster partnership and networking among the components of the research system, justified in turn by the idea that knowledge sharing can increase the system's effectiveness.
Over the last two decades, the scientific community has also stepped up activities to assess the actual impact of collaboration intensity on the performance of research systems.
This study draws on a number of empirical analyses to measure the effects of extramural collaboration on research performance and, indirectly, to verify the legitimacy of policies that support this type of collaboration.
The analysis focuses on the Italian academic research system. The aim of the work is to assess the level of correlation, at institutional level, between scientific productivity and collaboration intensity as a whole, both internationally and with private organizations.
This is carried out using a bibliometric approach, which equates collaboration with the co-authorship of scientific publications.
URL : https://arxiv.org/abs/1812.07847
“The Web has greatly reduced the barriers to entry for new journals and other platforms for communicating scientific output, and the number of journals continues to multiply. This leaves readers and authors with the daunting cognitive challenge of navigating the literature and discerning contributions that are both relevant and significant. Meanwhile, measures of journal impact that might guide the use of the literature have become more visible and consequential, leading to “impact gamesmanship” that renders the measures increasingly suspect. The incentive system created by our journals is broken. In this essay, I argue that the core technology of journals is not their distribution but their review process. The organization of the review process reflects assumptions about what a contribution is and how it should be evaluated. Through their review processes, journals can certify contributions, convene scholarly communities, and curate works that are worth reading. Different review processes thereby create incentives for different kinds of work. It’s time for a broader dialogue about how we connect the aims of the social science enterprise to our system of journals.”
URL : http://asq.sagepub.com/content/59/2/193.full
The quantity and quality of the scientific output of the top 50 countries in the four basic sciences (agricultural and biological sciences, chemistry, mathematics, and physics and astronomy) are studied over the 12-year period 1996-2007. To rank the countries, a novel two-dimensional method is proposed, inspired by the h-index and other methods based on quality and quantity measures.
The countries' data are plotted in a "quantity-quality diagram" and partitioned by a conventional statistical algorithm (k-means) into three clusters, whose membership is largely the same across all four basic sciences. The results offer a new perspective on the global positions of countries with regard to their scientific output.
URL : http://arxiv.org/abs/1304.2698
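The clustering step the abstract describes can be sketched as plain Lloyd's k-means on two-dimensional (quantity, quality) points. The data values below are hypothetical, and the unweighted Euclidean distance is an illustrative assumption; the paper's actual indicators and any normalization would differ.

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Basic Lloyd's k-means on 2-D (quantity, quality) points."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest centroid (squared Euclidean distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centroids[c][0]) ** 2
                                + (p[1] - centroids[c][1]) ** 2)
            clusters[i].append(p)
        # Recompute each centroid as the mean of its cluster.
        new = [(sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
               if c else centroids[i]
               for i, c in enumerate(clusters)]
        if new == centroids:  # converged
            break
        centroids = new
    return centroids, clusters

# Hypothetical (publication count, quality score) pairs for six countries:
data = [(1200, 0.90), (1100, 0.85), (400, 0.60),
        (450, 0.55), (50, 0.20), (60, 0.25)]
centroids, clusters = kmeans(data, k=3)
print(len(clusters))  # 3 clusters, e.g. high / medium / low output groups
```

Note that with raw counts on one axis and a 0-1 score on the other, the distance is dominated by the quantity axis; a real analysis would standardize both dimensions before clustering.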
Publication of NIH funded trials registered in ClinicalTrials.gov: cross sectional analysis:
“Objective To review patterns of publication of clinical trials funded by US National Institutes of Health (NIH) in peer reviewed biomedical journals indexed by Medline.
Design Cross sectional analysis.
Setting Clinical trials funded by NIH and registered within ClinicalTrials.gov (clinicaltrials.gov), a trial registry and results database maintained by the US National Library of Medicine, after 30 September 2005 and updated as having been completed by 31 December 2008, allowing at least 30 months for publication after completion of the trial.
Main outcome measures Publication and time to publication in the biomedical literature, as determined through Medline searches, the last of which was performed in June 2011.
Results Among 635 clinical trials completed by 31 December 2008, 294 (46%) were published in a peer reviewed biomedical journal, indexed by Medline, within 30 months of trial completion. The median period of follow-up after trial completion was 51 months (25th-75th centiles 40-68 months), and 432 (68%) were published overall. Among published trials, the median time to publication was 23 months (14-36 months). Trials completed in either 2007 or 2008 were more likely to be published within 30 months of study completion compared with trials completed before 2007 (54% (196/366) v 36% (98/269); P<0.001).
Conclusions Despite recent improvement in timely publication, fewer than half of trials funded by NIH are published in a peer reviewed biomedical journal indexed by Medline within 30 months of trial completion. Moreover, after a median of 51 months after trial completion, a third of trials remained unpublished.”
URL : http://www.bmj.com/content/344/bmj.d7292