Braun Tibor, Schubert András (eds.): Peer Review in Scientific Research: Selected Studies from the Literature on the Subject (MTAK Informatics and Science Analysis Series 7, 1993)

MARTIN RUDERFER: The Fallacy of Peer Review: Judgement without Science and a Case History

3. Periodically publish summaries of the data gathered from authors and reviewers to enable public analysis of review precision, its variation over time, and interjournal comparisons. Such data provide the essential feedback needed to test and further improve peer review.

4. Request essential data on all important time factors from reviewers on properly designed forms, and periodically publish analyses of these.

These tentative conclusions from one case history are cost effective for the journals to the extent that they reduce the values of p and m. Additional measures for improving peer review and its study are possible which require investment by society; however, these are justifiable by the long-term benefits that may accrue.

The probability that ⟨p⟩ = 0, and correspondingly that n = ∞, is remote, so there undoubtedly must remain some contested manuscripts for which there is no author-reviewer agreement within a reasonable period. (The cut-off point that defines "reasonable" must be set by practical considerations, such as the value of R, duration, number of reviews, space limitations and/or cost.) Because these contested remnants may involve fundamental issues, it is not fitting to relegate them, as heretofore, to the oblivion of journal files. Some may be suitable for mandatory publication with comments, but it may be appropriate to form an independent council, akin to an appeals court in jurisprudence, to openly review all remaining unresolved cases. Publication of the council's proceedings, perhaps in a special journal, would provide a reference that may be indispensable for the preservation of unfalsifiable dissident views that occasionally erupt into major paradigm shifts. Such an appeals function also ensures that no inequity of the review process need ever be ignored or denied; it provides the missing negative feedback for closing the loop in the process of peer review.
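The periodic summaries proposed in point 3 amount to simple grouped statistics over review records. A minimal sketch of such an interjournal comparison is shown below; the record fields, journal names, and numbers are all invented for illustration and are not taken from the article.

```python
from statistics import mean, median

# Hypothetical review records: (journal, days to first decision, review rounds).
# All values here are illustrative placeholders, not data from the case history.
records = [
    ("Journal A", 45, 2),
    ("Journal A", 60, 3),
    ("Journal B", 30, 1),
    ("Journal B", 90, 4),
    ("Journal B", 50, 2),
]

def summarize(records):
    """Group records by journal and compute simple time statistics --
    the kind of periodic summary the text proposes publishing."""
    by_journal = {}
    for journal, days, rounds in records:
        by_journal.setdefault(journal, []).append((days, rounds))
    summary = {}
    for journal, entries in sorted(by_journal.items()):
        days = [d for d, _ in entries]
        rounds = [r for _, r in entries]
        summary[journal] = {
            "n": len(entries),
            "mean_days": mean(days),
            "median_days": median(days),
            "mean_rounds": mean(rounds),
        }
    return summary

for journal, stats in summarize(records).items():
    print(journal, stats)
```

Published regularly, tables of this kind would supply the feedback loop the text describes: authors and editors could compare journals and track drift in review duration over time.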
The efficacy of an appeals council is likely to be intimately related to the choice of peers. This raises the perennial problem of the selection of peers in general. The problem is made perspicuous by the revelation that an elite group of 10 to 20 percent of scientists accounts for 80 to 90 percent of published research [34-35]. It is obviously impractical for the large output of this small group to be reviewed only by itself. For a random assignment of reviewers based only on professional knowledge, as approximated in current practice, how is it then possible to provide a proper peer match for the evident creativity of this indispensable elite group (or any other subset)? The resulting mismatch is undoubtedly responsible for much of the discontent with peer review. For a start, it is already known that general intelligence as measured by IQ is not significantly correlated with success in research [34], that IQ and creativity are not significantly correlated, and that the factors involved in IQ (creativity) are mainly determined by left (right) brain function. Since creativity is measurable with a reliability equivalent to that of IQ [36], it appears feasible to begin to understand and investigate at least one important factor involved in peer matching other than professional expertise and, eventually, to extend this to other measurable attributes that may be involved. With the development of suitable tests, the ultimate establishment of a core of relatively few properly trained professional reviewers may be the most cost-effective and ideal way to solve a large part of the review problem.
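The idea of matching on more than professional expertise can be made concrete with a toy scoring function. The sketch below is purely illustrative: the two-axis score, the weights, and all names and numbers are assumptions of this example, not a method given in the article.

```python
# Toy reviewer-author matching on two axes: field overlap (expertise)
# and closeness of a hypothetical 0-1 creativity score, echoing the
# text's suggestion to look beyond expertise alone. Weights are arbitrary.

def match_score(author, reviewer, w_expertise=0.6, w_creativity=0.4):
    """Weighted sum of Jaccard field overlap and creativity closeness."""
    shared = len(author["fields"] & reviewer["fields"])
    total = len(author["fields"] | reviewer["fields"])
    expertise = shared / total if total else 0.0
    creativity = 1.0 - abs(author["creativity"] - reviewer["creativity"])
    return w_expertise * expertise + w_creativity * creativity

author = {"fields": {"physics", "biophysics"}, "creativity": 0.9}
reviewers = [
    {"name": "R1", "fields": {"physics"}, "creativity": 0.3},
    {"name": "R2", "fields": {"physics", "biophysics"}, "creativity": 0.8},
]

best = max(reviewers, key=lambda r: match_score(author, r))
print(best["name"])  # R2: closer on both field overlap and creativity
```

Any real instrument would of course need validated creativity measures of the kind cited in [36]; the point of the sketch is only that once such attributes are measurable, matching becomes a tractable scoring problem.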
