Bibliometric methods for detecting and analysing emerging research topics

This study gives an overview of the process of clustering scientific disciplines using hybrid methods, detecting and labelling emerging topics, and analysing the results with bibliometric methods.

The hybrid clustering techniques are based on bibliographic coupling and text mining; ‘core documents’ and cross-citation links are used to identify emerging fields.
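As a rough illustration of how bibliographic coupling and text similarity can feed a hybrid document-similarity measure, the sketch below combines the two signals with a simple weighted average. The weighting, the normalisation cap, and all names here are illustrative assumptions, not the study's actual method.

```python
def coupling_strength(refs_a, refs_b):
    """Bibliographic coupling: number of references two documents share."""
    return len(set(refs_a) & set(refs_b))

def jaccard(terms_a, terms_b):
    """Text similarity as Jaccard overlap of the documents' term sets."""
    a, b = set(terms_a), set(terms_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def hybrid_similarity(doc_a, doc_b, alpha=0.5, max_coupling=10):
    """Weighted average of (capped, normalised) coupling and text overlap."""
    c = min(coupling_strength(doc_a["refs"], doc_b["refs"]), max_coupling) / max_coupling
    t = jaccard(doc_a["terms"], doc_b["terms"])
    return alpha * c + (1 - alpha) * t

doc1 = {"refs": ["r1", "r2", "r3"], "terms": ["topic", "cluster", "emerging"]}
doc2 = {"refs": ["r2", "r3", "r4"], "terms": ["cluster", "emerging", "field"]}
print(hybrid_similarity(doc1, doc2))  # coupling 2/10 = 0.2, Jaccard 2/4 = 0.5 → 0.35
```

A clustering algorithm can then run on the resulting pairwise similarity matrix.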

The collaboration network of the countries that proved most active in the underlying disciplines, in combination with a set of standard indicators, forms the groundwork for the bibliometric analysis of the detected emerging research topics.

URL : http://hdl.handle.net/10760/16947

Trends in Research Librarianship Literature: A Social Network Analysis of Articles :

“The purpose of this article is to identify the bibliometric characteristics of research librarianship literature and to visualize relationships in research librarianship by means of social network analysis. It was found out that the majority (66%) of the articles had single authorship and College & Research Libraries is the prominent actor among the research librarianship journals. It was also observed that Peter Hernon is the most productive and cited author in the field. The findings of this study can be used by the research librarianship community to better understand their core literature.”

URL : http://liber.library.uu.nl/publish/issues/2011-3_4/index.html?000554

Which alternative tools for bibliometrics in a research institute? :

“Nowadays, bibliometrics is a frequently used tool in scientific and technical information: it can be used to quantify scientific output and for collective or individual evaluations. The Web of Science (Thomson ISI) and the impact factor calculated from the JCR are the best-known references. We will underline the limits and drawbacks of these overused indicators, in particular the biases of the h-index. Alternative tools are emerging today. Our presentation will focus on comparing these products, and we will examine their value for librarians and researchers.”

“Today, bibliometrics is a frequently used tool for quantifying scientific output and also for evaluating researchers and institutions. The WoK and the JCR (for the impact factor) are the reference tools. We wish to underline the limits of these indicators, and in particular the biases of the h-index. Alternative tools are emerging today. This paper will analyse other tools that can be used in bibliometrics, and we will review their advantages and disadvantages for information specialists and researchers.”

URL : http://archivesic.ccsd.cnrs.fr/sic_00668741

Can Tweets Predict Citations? Metrics of Social Impact Based on Twitter and Correlation with Traditional Metrics of Scientific Impact

Background

Citations in peer-reviewed articles and the impact factor are generally accepted measures of scientific impact. Web 2.0 tools such as Twitter, blogs or social bookmarking tools provide the possibility to construct innovative article-level or journal-level metrics to gauge impact and influence. However, the relationship of these new metrics to traditional metrics such as citations is not known.

Objective

(1) To explore the feasibility of measuring social impact of and public attention to scholarly articles by analyzing buzz in social media, (2) to explore the dynamics, content, and timing of tweets relative to the publication of a scholarly article, and (3) to explore whether these metrics are sensitive and specific enough to predict highly cited articles.

Methods

Between July 2008 and November 2011, all tweets containing links to articles in the Journal of Medical Internet Research (JMIR) were mined. For a subset of 1573 tweets about 55 articles published between issues 3/2009 and 2/2010, different metrics of social media impact were calculated and compared against subsequent citation data from Scopus and Google Scholar 17 to 29 months later. A heuristic to predict the top-cited articles in each issue through tweet metrics was validated.

Results

A total of 4208 tweets cited 286 distinct JMIR articles. The distribution of tweets over the first 30 days after article publication followed a power law (Zipf, Bradford, or Pareto distribution), with most tweets sent on the day when an article was published (1458/3318, 43.94% of all tweets in a 60-day period) or on the following day (528/3318, 15.9%), followed by a rapid decay. The Pearson correlations between tweetations and citations were moderate and statistically significant, with correlation coefficients ranging from .42 to .72 for the log-transformed Google Scholar citations, but were less clear for Scopus citations and rank correlations. A linear multivariate model with time and tweets as significant predictors (P < .001) could explain 27% of the variation of citations. Highly tweeted articles were 11 times more likely to be highly cited than less-tweeted articles (9/12 or 75% of highly tweeted articles were highly cited, while only 3/43 or 7% of less-tweeted articles were highly cited; rate ratio 0.75/0.07 ≈ 10.75, 95% confidence interval, 3.4–33.6). Top-cited articles can be predicted from top-tweeted articles with 93% specificity and 75% sensitivity.
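The 2×2 counts reported above can be checked directly. The sketch below recomputes the rate ratio, its 95% confidence interval (assuming the standard log-based method for risk ratios, which does reproduce the reported 3.4–33.6), and the sensitivity and specificity of "highly tweeted" as a predictor of "highly cited":

```python
import math

# Counts from the Results: of 12 highly tweeted articles, 9 were highly
# cited; of 43 less-tweeted articles, 3 were highly cited.
a, n1 = 9, 12   # highly tweeted: highly cited / total
b, n2 = 3, 43   # less tweeted:  highly cited / total

rate_ratio = (a / n1) / (b / n2)  # 0.75 / 0.0698 = 10.75

# 95% CI via the log method (an assumption about how the authors
# computed it; it matches the reported interval).
se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)
ci_low = math.exp(math.log(rate_ratio) - 1.96 * se)
ci_high = math.exp(math.log(rate_ratio) + 1.96 * se)

# Treating "highly tweeted" as the test and "highly cited" as the outcome:
sensitivity = a / (a + b)                        # 9/12  = 0.75
specificity = (n2 - b) / ((n1 - a) + (n2 - b))   # 40/43 ≈ 0.93

print(f"rate ratio {rate_ratio:.2f} (95% CI {ci_low:.1f}-{ci_high:.1f})")
print(f"sensitivity {sensitivity:.0%}, specificity {specificity:.0%}")
```

The exact ratio 0.75/(3/43) is 10.75; the abstract's "0.75/0.07" is simply the same figure with a rounded denominator.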

Conclusions

Tweets can predict highly cited articles within the first 3 days of article publication. Social media activity either increases citations or reflects the underlying qualities of the article that also predict citations, but the true use of these metrics is to measure the distinct concept of social impact. Social impact measures based on tweets are proposed to complement traditional citation metrics. The proposed twimpact factor may be a useful and timely metric to measure uptake of research findings and to filter research findings resonating with the public in real time.

URL : http://www.jmir.org/2011/4/e123/

Do age and professional rank influence the order of authorship in scientific publications? Some evidence from a micro-level perspective

Scientific authorship has important implications in science, since it reflects the contribution of individual scientists to research and is considered by evaluation committees in research assessment processes.

This study analyses the order of authorship in the scientific output of 1,064 permanent scientists at the Spanish CSIC (WoS, 1994–2004).

The influence of age, professional rank and bibliometric profile of scientists over the position of their names in the byline of publications is explored in three different research areas: Biology and Biomedicine, Materials Science and Natural Resources. There is a strong trend for signatures of younger researchers and those in the lower professional ranks to appear in the first position (junior signing pattern), while more veteran or highly-ranked ones, who tend to play supervisory functions in research, are proportionally more likely to sign in the last position (senior signing pattern).

Professional rank and age have an effect on authorship order in the three fields analysed, but there are inter-field differences. Authorship patterns are especially marked in the most collaboration-intensive field (i.e. Biology and Biomedicine), where professional rank seems to be more significant than age in determining the role of scientists in research as seen through their authorship patterns, while age has a more significant effect in the least collaboration-intensive field (Natural Resources).

URL : http://www.springerlink.com/content/e713j65334v77037/

Cite Datasets and Link to Publications :

“This guide will help you create links between your academic publications and the underlying datasets, so that anyone viewing the publication will be able to locate the dataset and vice versa. It provides a working knowledge of the issues and challenges involved, and of how current approaches seek to address them. This guide should interest researchers and principal investigators working on data-led research, as well as the data repositories with which they work.”

URL : http://www.dcc.ac.uk/resources/how-guides/cite-datasets

New forms of scientific evaluation: what developments in science, technology and medicine?

As early as 1960, the Institute for Scientific Information (ISI) in Philadelphia, driven by Eugene Garfield, implemented the Science Citation Index (SCI) for the evaluation of authors, followed in 1975 by the Journal Citation Reports (JCR) for the evaluation of journals.

Following a critical analysis of this model, we consider several newer approaches: the algorithm developed for the Citebase service by the teams of S. Harnad at Southampton (UK) and T. Brody at Cornell (Ithaca, NY), running on the British mirror of ArXiv.org (one of the world's largest open scientific archive sites); Google Scholar, an offshoot of the standard general-purpose search engine, launched on the web in November 2004; two recent alternatives to the impact factor, proposed by J. E. Hirsch (the h-index, tied to a researcher's individual output) and by J. Bollen's team (Journal Status), bearing respectively on authors and on sources; and the model of the Faculty of 1000 “collective” in the biomedical fields.
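The h-index mentioned above (Hirsch, 2005) is the largest h such that a researcher has h publications each cited at least h times. A minimal sketch, with invented citation counts for the example:

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # this paper still clears the threshold
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # → 4 (four papers with at least 4 citations)
```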

Its originality, compared with the preceding approaches, lies in the primacy of qualitative “human” evaluation over the statistical principle of citation counting. After sketching a typology of peer-review committees and of current practices in the various scientific disciplines, we conclude on the need to rapidly explore the path towards a new, freely accessible evaluation tool whose rules would be clearly defined, both in terms of coverage and in terms of its criteria for qualitative and statistical analysis.

URL : http://archivesic.ccsd.cnrs.fr/sic_00260459/fr/