Authors: Ralph Kenna, Olesya Mryglod, Bertrand Berche
Like it or not, attempts to evaluate and monitor the quality of academic research have become increasingly prevalent worldwide. Performance reviews range from the level of individuals, through research groups and departments, to entire universities.
Many of these are informed by, or are functions of, simple scientometric indicators, and the results of such exercises affect careers, funding and prestige. However, there is sometimes a failure to appreciate that scientometrics are, at best, very blunt instruments, and their incorrect usage can be misleading.
Rather than accepting the rise and fall of individuals and institutions on the basis of such imprecise measures, calls have been made for indicators to be regularly scrutinised and for improvements to the evidence base in this area.
It is thus incumbent upon the scientific community, especially the physics, complexity-science and scientometrics communities, to scrutinise metric indicators. Here, we review recent attempts to do this and show that some metrics in widespread use cannot be regarded as reliable indicators of research quality.