The Medical Journal of Australia internet peer-review study
Introduction
Peer review aims to identify the best scientific reports and to correct deficiencies in scientific reporting before publication.1, 2 In the process, peer review informs journal editors about the state of expert opinion and provides advice to authors about how they can improve their work and reporting.
At The Medical Journal of Australia (MJA), peer review is “masked”, with the identities of authors and reviewers concealed. Irrespective of the reviewers' recommendations, the editor exercises absolute discretion over what to publish. This model of peer review is typical of most biomedical journals, although many journals do not conceal authors' identities from reviewers. The main limitations of peer review are delay, inadequacy, bias, and corruption.
Delay arises from the time taken to secure and assess reviews and for the authors to revise their articles in response to reviewers' comments. Inadequacy has been shown, for example, in a study in which 420 BMJ reviewers were sent a paper that had been deliberately prepared to contain eight serious flaws. Of the 221 reviewers who gave reviews, none spotted more than five of these flaws, and the median number of flaws identified by reviewers was two.3, 4 Bias can be for or against established opinion, new ideas, particular hypotheses, research groups, institutions, or nationalities.5, 6 Corruption occurs when reviewers use their position to plagiarise the work under review (intellectual theft)7 or block its progress.
The confidential nature of peer review makes it difficult to detect these limitations. Most referees do not know whose paper they reviewed, what other reviews were received, what use the editor made of their reviews, whether all or only part of their report was passed to the author, what response the author made (usually reviewers are told only whether the article was accepted or rejected), or what other factors influenced the editor's decision. Similarly, authors do not know the identity of their reviewers, why or how the reviewers were chosen, whether the reviews were passed on in full, or (despite the editor's covering remarks) how closely they must follow the reviewers' recommendations to achieve publication. If reviewers or authors have a conflict of interest or a history that might cast doubt on their objectivity, this bias is obscured by their anonymity. Similarly, if a reviewer “borrows” ideas from an author, the cloak of anonymity may cover the theft.
Anonymous reviewers and authors are unable to assess or comment on the fairness and accuracy of the peer-review process as a whole. Only editors are in a position to make this assessment—or are they? Editors cannot be expected to have the specialist knowledge of authors and reviewers and may not be able to detect biased or inadequate reviewing. Many editors rely on authors to protest against unfair or poor reviews. Moreover, neither reviewers nor authors are able to protest if an editor makes bad or biased use of peer review, because they do not have access to all the evidence.
As things stand, the scientific community cannot be sure whether or how frequently peer review meets the objectives of fair and accurate assessment, or how often that assessment is properly reflected in editorial decisions.
In the internet peer-review study, the MJA exposed some of its peer-review processes to public scrutiny to see whether authors and reviewers were amenable to a more open peer-review process and whether wider participation would improve the quality of reviews or published articles.
Study design
Since biomedical journals have not established conventions for peer review on the internet, we designed a study to conserve the current strengths of peer review and create new opportunities for wider participation and critical insight. The study procedure is shown in figure 1 and is also outlined on our website (http://www.mja.com.au, accessed on June 4, 1998).
Between March, 1996, and May, 1997, all reviewers of articles were informed about the study and told that the MJA might seek their …
Acceptability of open review
Of 146 eligible articles, 72 were excluded: 25 were to be published with an accompanying editorial; 11 were newsworthy pieces to be used to publicise the issue of the journal; and 36 were excluded because the workflow of the journal limited the number of articles that could be entered into the trial at that time.
The authors of 60 (81%) of the remaining 74 articles agreed to take part. Our telephone conversations with authors suggested that most were keen to take part, and some had high …
Discussion
We found that use of the internet to open and extend peer-review processes was accepted by authors and reviewers and was well received by readers. Given that the study exposed reviewers to unprecedented scrutiny, we were surprised by the high participation rate among reviewers, and by the fact that nearly two-thirds chose to be named. Perhaps they would have been less willing if we had asked to publish reviews of articles that had been rejected on their advice, but even reviewers who had …
References (11)
- et al. Readers' evaluation of effect of peer review and editing on quality of articles in the Nederlands Tijdschrift voor Geneeskunde. Lancet (1996)
- et al. Peer review: crude and understudied, but indispensable. JAMA (1994)
- Peer review: reform or revolution? BMJ (1997)
- Godlee F, Gale CR, Martyn CN. The effect on the quality of peer review of blinding reviewers and asking them to sign...