
The Lancet

Volume 352, Issue 9126, 8 August 1998, Pages 441-445

Articles
The Medical Journal of Australia internet peer-review study

https://doi.org/10.1016/S0140-6736(97)11510-0

Summary

Background

Peer review of medical papers is a confidential consultancy between the reviewer and the journal editor, and has been criticised for its potential bias and inadequacy. We explored the potential of the internet for open peer review to see whether this approach improved the quality and outcome of peer review.

Methods

Research and review articles that had been accepted for publication in The Medical Journal of Australia (MJA) were published, together with the reviewers' reports, on the worldwide web, with the consent of authors and referees. Selected readers' e-mailed comments were electronically published as additional commentary; authors could reply or revise their paper in response to readers' comments. Articles were edited and published in print after this open review.

Findings

60 (81%) of 74 authors agreed to take part in the study, together with 150 (92%) of 162 reviewers. There was no significant difference in the performance of commissioned reviewers before and during the study. Four articles were not included because of insufficient time before print publication. Of the remaining 56 papers, 28 received 52 comments from 42 readers (2% of readers submitted comments). Most readers' comments were short and specific, and seven articles were changed by the authors in response.

Interpretation

Open peer review is acceptable to most authors and reviewers. Postpublication review by readers on the internet is no substitute for commissioned prepublication review, but can provide editors with valuable input from individuals who would not otherwise be consulted. Readers also gain insight into the processes of peer review and publication.

Introduction

Peer review aims to identify the best scientific reports and to correct deficiencies in scientific reporting before publication.1, 2 In the process, peer review informs journal editors about the state of expert opinion and provides advice to authors about how they can improve their work and reporting.

At The Medical Journal of Australia (MJA), peer review is “masked”, with concealment of the identity of authors and reviewers. Irrespective of the recommendations of the reviewers, the editor exercises an absolute discretion in deciding what to publish. This model of peer review is typical of most biomedical journals, although many journals do not conceal authors' identity from reviewers. The main limitations of peer review are delay, inadequacy, bias, and corruption.

Delay arises from the time taken to secure and assess reviews and for the authors to revise their articles in response to reviewers' comments. Inadequacy has been shown, for example, in a study in which 420 BMJ reviewers were sent a paper that had been deliberately prepared to contain eight serious flaws. Of the 221 reviewers who returned reviews, none spotted more than five of these flaws, and the median number identified was two.3, 4 Bias can be for or against established opinion, new ideas, particular hypotheses, research groups, institutions, or nationalities.5, 6 Corruption occurs when reviewers use their position to plagiarise the work under review (intellectual theft)7 or block its progress.

The confidential nature of peer review makes it difficult to detect these limitations. Most referees do not know whose paper they reviewed, what other reviews were received, what use the editor made of their reviews, whether all or only part of their report was passed to the author, what response the author made (usually reviewers are told only whether the article was accepted or rejected), or what other factors influenced the editor's decision. Similarly, authors do not know the identity of their reviewers, why or how the reviewers were chosen, whether the reviews were passed on in full, or (despite the editor's covering remarks) how closely they must follow the reviewers' recommendations to achieve publication. If reviewers or authors have a conflict of interest or a history that might cast doubt on their objectivity, this bias is obscured by their anonymity. Similarly, if a reviewer “borrows” ideas from an author, the cloak of anonymity may cover the theft.

Anonymous reviewers and authors are unable to assess or comment on the fairness and accuracy of the peer-review process as a whole. Only editors are in a position to make this assessment—or are they? Editors cannot be expected to have the specialist knowledge of authors and reviewers and may not be able to detect biased or inadequate reviewing. Many editors rely on authors to protest against unfair or poor reviews. Moreover, neither reviewers nor authors are able to protest if an editor makes a bad or biased use of peer review, because they do not have access to all the evidence.

As things stand, the scientific community cannot be sure whether or how frequently peer review meets the objectives of fair and accurate assessment, or how often that assessment is properly reflected in editorial decisions.

In the internet peer-review study, the MJA exposed some of its peer-review processes to public scrutiny to see whether authors and reviewers were amenable to a more open peer-review process and whether wider participation would improve the quality of reviews or published articles.


Study design

Since biomedical journals have not established conventions for peer review on the internet, we designed a study to conserve the current strengths of peer review and create new opportunities for wider participation and critical insight. The study procedure is shown in figure 1 and is also outlined on our website (http://www.mja.com.au, accessed on June 4, 1998).

Between March, 1996, and May, 1997, all reviewers of articles were informed about the study and told that the MJA might seek their

Acceptability of open review

Of 146 eligible articles, 72 were excluded: 25 were to be published with an accompanying editorial; 11 were newsworthy pieces to be used to publicise the issue of the journal; and 36 were excluded because the workflow of the journal limited the number of articles that could be entered into the trial at that time.

The authors of 60 (81%) of the remaining 74 articles agreed to take part. Our telephone conversations with authors suggested that most were keen to take part, and some had high

Discussion

We found that use of the internet to open and extend peer-review processes was accepted by authors and reviewers and was well received by readers. Given that the study exposed reviewers to unprecedented scrutiny, we were surprised by the high participation rate among reviewers, and by the fact that nearly two-thirds chose to be named. Perhaps they would have been less willing if we had asked to publish reviews of articles that had been rejected on their advice, but even reviewers who had
