
Every summer, journal editors and publishers anxiously await the publication of the Journal Impact Factors, hoping that their journal’s IF will have gone up and that their competitors’ will have gone down. This used to be the only game in town but, in our heart of hearts, we know that Impact Factors are merely one, unsatisfactory, measure of the impact that publications in the biomedical sciences have on practice, policy, and society, and that better metrics are needed. An IF of 3, for example, means that, on average, each of the peer-reviewed research articles published in a journal over a specified 2-year window has been cited three times in the mainstream, peer-reviewed literature. At the same time, however, there will probably have been hundreds of thousands, if not millions, of website visits and full-text downloads of articles published in that journal. The IF is an aggregate measure of ‘journal impact’ and tells us nothing about the impact of an individual paper. It is also extremely susceptible to manipulation.
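The arithmetic behind that example can be sketched in a few lines. This is a minimal illustration of the ratio described above, not Clarivate’s exact methodology, and all figures are invented:

```python
def impact_factor(citations, citable_items):
    """Citations received in a given year to articles published in the
    preceding 2-year window, divided by the number of citable items
    published in that window."""
    return citations / citable_items

# A journal whose 200 articles from the 2-year window drew 600 citations
# would report an IF of 3 -- three citations per article, on average.
print(impact_factor(600, 200))  # 3.0
```

The same calculation also hints at why the measure is manipulable: either raising the numerator (e.g. through self-citation) or shrinking the denominator (reclassifying articles as non-citable items) inflates the result.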
The limits of bibliometrics supplemented by peer review have been recognised for some time by the UK funding councils, which is why ‘impact case studies’ are an important component of universities’ research quality submissions. There have been many calls to ditch Impact Factors altogether, but researchers still try to get their work published in the most prestigious journals, which in practice means those with the highest Impact Factors. This can be problematic for primary care research because some of the top-quality clinical research in primary care has found its way into specialist, subject-specific journals rather than the more mainstream general practice and primary care literature.
This short book is a welcome guide to an increasingly complex and important area. The authors, from Indiana and Montréal, present a UK-friendly account of the research landscape, considered under the three general themes of input, output, and impact. There is a series of excellent descriptions of important components of the research measurement jigsaw — the Web of Science, Google Scholar, definitions of authorship and interdisciplinarity, the measurement of citations, and the calculation of the Impact Factor. Slightly more obscure aspects of bibliometrics are also lucidly described, including the Eigenfactor score, the SCImago Journal rankings, and the h-index, which increasingly appears at the top of CVs. There is an excellent section on alternative metrics, not just the widely used Altmetric — you will have seen the Altmetric donuts on the BJGP webpage — but also Plum Analytics and ImpactStory, which set out to capture article impact in different ways, all of which depend on extracting data from various social media platforms, online repositories, and social reference managers. The origins and trajectories of these innovative, leading-edge companies are extremely interesting, but the best ways of making use of them as research impact metrics remain somewhat unclear. The excellent list of references includes a useful literature review on the scholarly use of social media and altmetrics, to which both authors of this book contributed.
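The h-index mentioned above is simple enough to state in code: it is the largest number h such that an author has h papers each cited at least h times. A minimal sketch, with invented citation counts:

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    counts = sorted(citations, reverse=True)  # most-cited first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still clears the bar
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4, and 3 times: four of them have at
# least 4 citations each, but not five with at least 5, so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

The definition rewards sustained citation across a body of work rather than one highly cited paper, which is part of why it has become a fixture at the top of CVs.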
The final section, ‘The Big Picture’, reflects on important questions about the control of research measurement: the grip held by the big publishers on some of these metrics; the responsibility of researchers and research administrators to understand and use the available metrics appropriately; and the potential adverse effects of some of these tools, along with the potential for their misuse.
Biomedical science publishing and university research funding are both big businesses, and we are still some way from having perfect measures of their quality, impact, benefits, and harms.
- © British Journal of General Practice 2018