The BJGP has for many years operated an open peer review system, in which a minimum of two peer reviewers report on each original research article considered for publication, and in which the identities of authors and reviewers are known to each other. Although peer review remains the ‘gatekeeper’ to research publication, its efficacy and reliability remain controversial. There is concern about variation in the quality of peer review, both within and between journals.1,2 Editorial decisions, such as the choice of reviewers, the interpretation of their comments, and the need to navigate between reviews offering divergent advice, add to the difficulties. Formal training for reviewers is rare, and the ability of the system to identify fraud and plagiarism has recently been questioned. A 2007 Cochrane review highlighted the urgent need for high-quality research into the outcomes of peer review.3
One place to focus improvement efforts is at the level of the individual reviewer. Until now, BJGP reviewers have not routinely received feedback on their performance, although they do receive a copy of the other review(s) and the editor’s comments sent to the manuscript authors. While the quality of reviews carried out for the BJGP is almost uniformly good, we are now committed to implementing a more formal feedback system to help new reviewers, support existing reviewers, and further improve the quality of future reviews and publications.
EXISTING TOOLS
We examined the literature to identify existing tools used to assess the quality of peer reviews. These tools have often been devised to provide a quantitative measure of quality for comparison in research studies. Most comprise a numerical scoring system, with scales ranging from 4 to 100 points; some provide a single global score, while others give separate scores for subcategories. …