Braun Tibor, Schubert András (eds.): Szakértői bírálat (peer review) a tudományos kutatásban: Válogatott tanulmányok a téma szakirodalmából [Peer Review in Scientific Research: Selected Studies from the Literature of the Field] (MTAK Informatics and Science Analysis Series 7, 1993)

DOMENIC V. CICCHETTI: The Reliability of Peer Review for Manuscript and Grant Submissions: A Cross-Disciplinary Investigation

… expertise. For example, a report on a randomized trial of a new drug for the control of hypertension might be sent to a cardiologist, a pharmacologist, and a statistician. They would, and should, be alert to quite different kinds of strengths and problems, and there is no reason to expect either their detailed reports or their summary judgments to agree. Too much agreement is in fact a sign that the review process is not working well, that reviewers are not properly selected for diversity, and that some are redundant. Without this negative point, measures of inter-referee agreement are of no value in assessing peer review mechanisms. (A computational sketch of what such agreement measures compute appears below.)

Cicchetti refers to the role of the reviewer in informing the judgment of the editor or grants manager, but does not adequately stress the point that reviewers are no more than sources of relevant information. I know of no leading journal where decisions about publication are made by a "vote" of the reviewers. As a former editor (of JNCI [Journal of the National Cancer Institute], 1974-1980) I had a section on the reviewers' form asking for a judgment about publication (publish as submitted, publish with minor revisions, etc.) and regularly found that it was of little value in sorting out the merits of a paper. There is no substitute for careful study of specific comments, integrated with the wisdom of editorial board members and, sometimes, special consultants. As a result, it was not unusual for us to publish papers that three reviewers had recommended for disapproval, and vice versa.

A further point is that editors can adjust for (or sometimes deliberately use) reviewer bias. There have been few studies of the comments of peer reviewers to date, and all have focused on what reviewers write, not on the critical issue of how they have affected the information base on which a decision was made. I knew and regularly used reviewers who could never bring themselves to criticize a colleague directly, though their detailed comments were full of insight. And I used others who could never find a paper good enough to publish; with appropriate interpretation, their comments, too, were helpful. On rare occasions, when it appeared that an editorial decision might be challenged on the basis of the position or prestige of an author rather than scientific merit, I deliberately chose reviewers from one or the other camp to ensure that a strong and balanced review would be on the record. Some other editors do the same, and our journals have been the stronger for it.

The paper by Peters and Ceci (1982) is a weak reed. Shortly after it was published, I wrote to Peters with some specific questions about their work. I made at least two telephone calls to verify his address at the time, but received no reply to my letter. Folklore to the contrary, few first-class letters are really lost by the Postal Service. I must assume that I received no reply because their answers would have undercut the strength of the conclusions in their paper. I cannot find my copy of the letter at this late date, but I recall that two points of special interest were how they "randomly" chose the papers they resubmitted (in more detail than was given in their paper), and how (also in detail) they revised titles and content to reduce the likelihood of detection of their own fraud.
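[To make concrete what the inter-referee agreement measures discussed above actually quantify, here is a minimal sketch, in Python, of Cohen's kappa for two reviewers' accept/reject recommendations. The ratings are invented for illustration and are not drawn from any study discussed in the commentary.]

def cohens_kappa(ratings_a, ratings_b):
    # Chance-corrected agreement between two raters on the same items.
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    # Observed proportion of exact agreement.
    p_obs = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Agreement expected by chance, from each rater's marginal rates.
    p_exp = sum((ratings_a.count(c) / n) * (ratings_b.count(c) / n)
                for c in categories)
    return (p_obs - p_exp) / (1 - p_exp)

# Invented recommendations for six manuscripts (not data from any study).
reviewer_1 = ["accept", "reject", "reject", "accept", "reject", "reject"]
reviewer_2 = ["reject", "reject", "accept", "accept", "reject", "reject"]
print(cohens_kappa(reviewer_1, reviewer_2))  # 0.25: only modestly above chance

[On these invented ratings kappa is 0.25, agreement only modestly above chance, of the low order the target article reports; the commentary's point is that such a number, taken alone, says nothing about whether the review process is working.]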
Most long-time editors have had the experience of publishing papers and almost immediately regretting the decision to publish, so that biased selection of winners or losers is simply not informative about practice in general.

I am concerned that Cicchetti accepts without comment the appropriateness of studies carried out without the consent of the subjects, whether journals (and editors) or reviewers. Substantial investments of time, and direct financial investments as well, have been requested under false pretenses in the name of "science." I know and understand the arguments that some research cannot be carried out if the subject is properly informed, but reject any notion that such research thereby becomes ethical.

Editors do, and should, base their editorial decisions in part on results. Many negative studies are never properly completed; others are presented in slapdash fashion. Some are trivial because few knowledgeable investigators would have expected anything other than negative results; still others have samples too small to have much chance of showing a real effect even if one should be present. Many other negative studies are indeed published in the sense of "made public," but not as full-length original contributions. Instead, their results may be disseminated as abstracts, in short sections of later papers that extend the work, or even by word of mouth. Arching over all of this is the proper concern of editors about their readers' interests. I know of no evidence that readers are harmed by editorial decisions that depend in part on results. Many fewer people, and different people, may need to know that something did not work than would need to know what did work. A good editor must be even more concerned about readers' legitimate interests than about authors' complaints, and the "need to know" is chief among these. Thus, some kinds of bias against publication of negative results in the usual full form are entirely appropriate and should be encouraged.

Cicchetti's section 7, on improving the reliability of peer review, tacitly takes improved reliability as an important goal. But the fundamental objective of peer review, and of the manuscript selection process in general, is not "fairness" to authors (though that may be a welcome byproduct). It is to improve decisions. Will larger numbers of reviewers, better training, or instructions for reviewing improve decisions? (What larger numbers do for reliability, as distinct from decisions, is sketched below.) The matter has not been studied, perhaps because no one has yet devised a good measure of the quality of decisions to publish or disapprove. I know of no good statistical evidence that blinding reviewers to authors, or authors to reviewers, affects editorial decisions in generally good or bad ways. There is substantial anecdotal evidence, however, that both the strengths and the weaknesses of a paper are appraised more accurately when reviewers know who the authors are, but not vice versa.

I find no recognition here that editorial decisions can, do, and should make use of criteria other than abstract scientific/technical merit. Such criteria include originality, the suitability of the topic for a given journal, readability and the appropriateness of length and style, the need for a balance of topics in journals with broad coverage, the importance of findings to readers, and even whether there is reason to suspect unconscious bias or deliberate error in the data or the analysis.
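[On the question of whether larger numbers of reviewers help, the standard psychometric answer for reliability, as opposed to the decision quality that is the commentator's real concern, is the Spearman-Brown prophecy formula. A minimal sketch follows; the single-reviewer reliability of 0.30 is an invented figure chosen to lie in the low range the target article reports for manuscript review.]

def spearman_brown(r_single, k):
    # Reliability of the mean of k reviewers, given one reviewer's reliability.
    return (k * r_single) / (1 + (k - 1) * r_single)

# 0.30 is an assumed single-reviewer reliability, not a figure from the text.
for k in (1, 2, 3, 5):
    print(k, round(spearman_brown(0.30, k), 2))

[The formula takes composite reliability from 0.30 with one reviewer to roughly 0.68 with five. Whether that gain translates into better editorial decisions is precisely the question the commentary says has not been studied.]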
Overall, I believe that Cicchetti's paper shows a misunderstanding of the role of peer review as an aid to editorial decisions and grants management.

The predictive validity of peer review: A neglected issue

Robert F. Bornstein
Department of Psychology, Gettysburg College, Gettysburg, PA 17325

Cicchetti's analysis of inter-reviewer reliability in manuscript and grant proposal assessments is both timely and valuable, and will help to resolve a number of unsettled issues in this area. Cicchetti, like most researchers investigating aspects of the peer review process, focuses mainly on reliability issues in peer review. His analysis confirms that inter-reviewer reliability in manuscript and grant proposal assessments is generally quite low. An important question remains unanswered, however: What do we know about the validity of peer review?

Peer review is, at least in part, an assessment tool designed to identify the best research efforts in a given sample of manuscripts (see Bornstein 1990; Eichorn & VandenBos 1985). Thus, we should be able to demonstrate empirically that peer reviews have predictive validity and that reviews can discriminate high-quality from low-quality research.

Unfortunately, designing studies to investigate the predictive […]
