Braun Tibor, Schubert András (eds.): Peer Review in Scientific Research: Selected Studies from the Literature (MTAK Informatics and Science Analysis Series 7, 1993)

DOMINIC V. CICCHETTI: The Reliability of Peer Review for Manuscript and Grant Submissions: A Cross-Disciplinary Investigation

serve on grant review panels for such funding agencies as NIH, DVA, and NSF. In this manner scientists would be rewarded on the basis of their own scholarly contributions to peer review rather than on the basis of potentially more subjective criteria.

4.7. Allowing authors multiple manuscript submission. Agreement is voiced by Mahoney with the position taken in section 7.7 of the target article that, for a number of cogent reasons, the option of multiple manuscript submissions is not a viable one. His citation of Epstein (1990) adds confirming empirical support for the position. In my informal discussions with colleagues, I have yet to find one who would endorse such a practice.

4.8. Developing peer review appeals systems. Whereas an argument for a more formal appeals system for rejected manuscripts is made by Zentall, Cole, citing the finding by Stinchcombe and Ofshe (1969) that many acceptable articles are falsely rejected, opts for editors gradually to increase publication rates for submitted articles, even at the risk of levying page charges on authors. Although this is an interesting suggestion, I am not sure what specific criteria editors would apply to justify increasing their acceptance rates.

Commentators were in agreement, however, that the unfair disapproval of a grant submission is far more serious in its consequences than the unfair rejection of a journal article (e.g., Adams, Cole, Kiesler, Mahoney, Salzinger, Zentall). The problem is especially serious for funding in the social and behavioral sciences. Thus, as Mahoney notes, the National Research Council (1988) emphasized the need for a 30% increase in funding in these areas, where funding dropped 25% between 1972 and 1987 while it increased by 36% in other areas of science over the same period.

Greene strongly advocates the need for an appeals system for any funding agency, because the peer review system is an imperfect one. He notes that the DVA has had an "effective" system for more than a decade. He also admits that appeal is a "sensitive" and "complex" phenomenon, so the ground rules on which it is based require periodic assessment. I agree with Greene's position.

Rather than a formal appeal process per se, Cole recommends that granting foundations admit publicly that many of their rejected proposals are as fundable as many that are approved. He advocates specifically that the approval of such previously declined proposals should be undertaken even at the expense of reducing funding levels for the ensuing round of new grant proposals. My concern with Cole's recommendation is that once a grant proposal receives the official federal stamp of "disapproval," it becomes more and more difficult to convince such lay persons as members of Congress that the submission should really have been funded in the first place.

My solution would be to assign high priorities (no number attached) to the best considered proposals quite independently of whether there is funding available to support them. One could then request from Congress whatever additional funds may be required to support all the high-priority grants. I believe that the way the system works today - assigning arbitrary funding cutoffs based on arbitrary numbers - creates the dilemma of funding a proposal with a priority score of, say, 112 and declining one with a score of 113, when in fact no reasonable peer reviewer can be expected to make a reliable differentiation of this minute degree of magnitude.
To paraphrase Delcomyn's analogy, the task that grant reviewers face is one of being asked to measure the dimensions of a nerve cell with a yardstick. My recommendation is intended to help obviate that measurement problem.

4.9. Training reviewers. The important issue of training reviewers was mentioned, in varying degree, by several commentators (i.e., Adams, Crandall, Delcomyn, Kiesler, Rourke, and Zentall).

Adams describes the typical "haphazard" and "uncertain" manner in which reviewers eventually learn to become "constructive" evaluators. Adams's previously mentioned support of reviewers disclosing their identity to authors is one way of producing such constructive reviewer reports. With a somewhat similar purpose in mind, Zentall proposes that editors send to reviewers a list of recommended guidelines for avoiding potential biases in the evaluation of a given submission. The same general strategy can be used with grant proposals.

Crandall, Delcomyn, and Rourke write of the importance of reviewers sharing others' reviews of the same manuscripts. Unfortunately, some granting agencies (e.g., NSF) have a policy forbidding such a learning experience. The DVA, on the other hand, does provide this valuable service to its reviewers.

Crandall and Delcomyn note that the ability to write a useful review improves with experience. Crandall laments the fact that this experience is often gained at authors' expense. To help remedy the situation, Delcomyn provides a useful set of guidelines for reviewers that, though it derives from physiology, is pitched at a level general enough to be of cross-disciplinary use. The advantage of his guidelines over many others I have examined is that they contain within them the message that it is the task of neither reviewers nor editors to settle differing points of view in a given area of inquiry. Thus, if important questions raised in the introduction are answered through carefully controlled, well-executed experiments, and the conclusions spring from the data, then the article should be accepted quite apart from whose particular theory or hypothesis is or is not being supported.

Crandall addresses more formally the notion of training reviewers by introducing the provocative idea of using prototype "ideal" reviews as guides. Filling in some of the required details, I would imagine that editors could locate in their files appropriate prototypic reviews that could be reliably rated as evidence for: "Accept/as is"; "Accept/Revise"; "Reject/Revise/Resubmit"; and "Reject/Unconditionally." With the necessary identifying information removed and the content disguised, these can be sent to authors to use in the same general manner that, for example, prototypic stages of cataract have been used to train ophthalmologists to classify cataract stages (i.e., Cicchetti et al. 1982).

The need for such formal training of reviewers may have been implicit, though it appeared via a different route, to Nelson. In a thoughtful commentary, she raises the issue of the specific process by which reviewers use information to arrive at publication or funding recommendations. She is right that very little is known about this process in peer review. Some findings reported a few
