Tibor Braun, András Schubert (eds.): Peer Review in Scientific Research: Selected Papers from the Literature on the Topic (MTAK Informatics and Science Analysis Series 7, 1993)
ALAN L. PORTER and FREDERICK A. ROSSINI: Peer Review of Interdisciplinary Research Proposals
Science, Technology & Human Values, 10 (1985) 33-38

The peer review of research results submitted for journal publication raises elementary issues of fairness and reliability. Peer review of proposals to perform research in the future, however, is even more problematic. For many reasons, judging untested ideas is inherently more uncertain than evaluating completed work. Research has a way of evolving in directions that are unchartable in advance: certain data may prove unattainable, new discoveries by the researchers or by others may point to reorientation, or personnel may change. In a 1974 study, Grace Carter found that ratings of National Institutes of Health (NIH) initial grant applications correlated with independent ratings of their later reapplications at only 0.4 (i.e., only 16% of the variance was accounted for by the other rating).[1] While this figure includes unreliability due to an independent rerating, it also reflects changes in the perception of the value of specific projects as research progresses. One National Science Foundation (NSF) proposal reviewer made special note of the "exemplary staffing plans" of a proposal he evaluated.[3] Ironically, that same project changed staff repeatedly, to the extent that it was not possible even to identify a project leader. In essence, then, peer review of proposals is a difficult business. Peer review as a process has engendered strong charges and defenses.[4]

[Author note] Professor Porter is in the School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA 30332. Professor Rossini is in the School of Social Sciences, Georgia Institute of Technology, Atlanta, GA 30332. This research was supported by the National Science Foundation, Office of Interdisciplinary Research, Grant OIR-8209893. The views expressed in this paper are those of the authors alone.
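Carter's variance figure above follows from a standard statistical identity: the proportion of variance in one rating accounted for by the other is the square of their correlation coefficient. Shown here only as a worked check of the arithmetic in the text:

```latex
r^2 = (0.4)^2 = 0.16 \approx 16\% \text{ of variance accounted for}
```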
Rustum Roy, for example, argues that peer review of proposals has no conceptual basis, wastes resources, and impedes innovative research.[5] Setting aside such total objections to the process, Harvey Brooks makes a well-reasoned case that peer review is better in some respects than in others, and for some tasks than for others.[6] Peer review of proposals is better for evaluating within defined fields than across fields; better for collecting expert opinion on what Brooks calls the "truth" dimension (the pursuit of knowledge for its own sake) than for differentiating along a "utility" dimension (research to be applied toward a specific end). According to this view, it follows that peer review is less satisfactory for applied or policy research than for basic research. Furthermore, "the broader the intellectual territory covered, the less consensus there will be on the ranking."

These attributes of peer review point to potential difficulty in its use to evaluate crossdisciplinary research proposals. Such proposals involve multiple skills focused on a scientific research problem, and thus present substantial difficulties in identifying an appropriate "peer" group. It is likely to be difficult to identify peers whose expertise fully encompasses the proposed crossdisciplinary research. If such peers are located, they are apt to have a strong personal stake in the outcome of the evaluation when the number of researchers concentrating in the area is small. If such a peer group cannot be gathered, review by persons not fully familiar with the domain can prove especially perilous. Recognizing such issues, program managers may hesitate to undertake review of proposals that lack an established peer group. Crossdisciplinary proposals may truly "fall between the cracks" of the disciplinary programs.