Braun Tibor, Schubert András (eds.): Peer Review in Scientific Research: Selected Studies from the Literature on the Subject (MTAK Informatics and Science Analysis Series 7, 1993)
DOMENIC V. CICCHETTI: The Reliability of Peer Review for Manuscript and Grant Submissions: A Cross-Disciplinary Investigation
publishers, as well as those of the agencies and individuals responsible for funding research. Mahoney (1985) identified a number of factors responsible for the surge in published research and in the number of journals, including employment practices, requirements for funding, and fraud. It is possible that such factors also lead to a tendency to downplay editorial practices and commercial needs so as to ensure the promotion and funding of research within the community of scientists. The number of new journals in many areas of science is currently increasing rapidly (Broad 1988). This increase is accompanied by concerns about the expansion of the scientific literature and a possible decline in overall quality. There is presently so much journal space that the only recourse left to many editors (and publishers) may be to include research that would not pass muster either from peer review or from common sense. In this sense, the unreliability of peer review can be used as a basis for accepting manuscripts. Journals survive either through subscriptions (particularly to institutions) or through affiliation with an organized group that provides a ready pool of subscribers and other more direct sources of support, both financially and through submissions. Journals with the latter sources of support can generally afford higher rejection rates and can use peer review in a manner likely to support many submissions and rejections. The research cited by Cicchetti, however, is based primarily on general journals with higher rejection rates. These journals may use a single negative review as a basis for rejection; other journals that need manuscripts (and subscribers) may use a single positive review as a basis for acceptance. While these phenomena have apparently not been adequately studied, it would seem important to begin to establish mechanisms for periodically reviewing the quality of various journals.
It would be interesting to know rejection rates, numbers of solicited reviews, the results of these reviews, and other principally empirical outcomes of the peer review process across journals. Evaluations of quality by scientific polling of professional groups and subscribers would represent another source of information. Without more pressure on journals, editors, and publishers, Cicchetti's implicit warning that the unreliability of peer review leads to a failure to publish good research on a timely basis is somewhat diluted. If virtually all research can be matched with a publication outlet, why should the scientific community worry about the unreliability of peer review? There is apparently some type of implicit evaluative system among authors that leads them to choose and rank journals for their submissions. It would seem important to begin to try to make this evaluative system more explicit. The failure to do so may reflect a tendency among researchers to operate in ways that will keep journal outlets open for publication and so promote the system criticized by Mahoney (1985). The extent to which a similar problem influences funding mechanisms is not well understood and is certainly not parallel to journal practices. It is simplistic, however, to lament the government's inability to fund high-quality research on the basis of ever lower priority scores for unfunded research. Project officers are encouraged to solicit applications in order to demonstrate to program directors and funding sources (i.e., Congress) the need for more funds in their area. Peer review study groups are generally informed as to the priority scores (or percentiles) necessary to ensure funding of the committee's highest rated proposals. There is a tendency to give weak approval to lower-quality research so that the percentiles for better research will be improved. More telling, however, is the question of what happens to research quality when more funds become available.
In the past decade, the federal government has placed substantially more emphasis on the war on drugs and on the AIDS problem. I am not in any sense disagreeing with these priorities; both problems clearly represent national emergencies. The question I am raising is simply whether increases in funding lead to greater availability of quality research addressing these problems. As with increasing the number of journal outlets, we may be producing a glut of information, much of which will be of questionable significance. One unintentional implication of Cicchetti's target article is that when peer review is unreliable, the only safe solution to scientific problems of national importance is to spend freely. The fact that mechanisms for evaluating the quality of funding agencies and of journals are generally ineffective, nonexistent, or self-serving should be of great concern to the scientific community, particularly if the absence of such mechanisms in part reflects the community's need to publish, promote, and fund its membership. Current proposals for reform tend to focus on individual levels of responsibility. The unreliability of peer review extends beyond the peers. Additional focus on the responsible institutions (e.g., journals) would also seem warranted.

Peer review is not enough: Editors must work with librarians to ensure access to research

Steve Fuller
Science Studies Center, Virginia Polytechnic Institute and State University, Blacksburg, VA 24061
Electronic mail: fuller@vtvm2.bitnet

I would like to propose that peer review systems in academia function like markets in society at large, and that the "rationality" (if you will) of such systems be evaluated in much the same way as markets are. It is clear from Cicchetti's target article that peer review is such a vexed issue because the variety of views on how and why the system ought to work does not match up neatly with the equally wide range of ways in which the system in fact does work.
Do we therefore conclude that peer review does not promote the growth of knowledge, or that we have yet to fathom the "invisible hand" principle by which it does promote such growth? The author himself seems to be struck most by the variety of peer review practices across the disciplines, but ultimately he is noncommittal as to whether any one practice ought to be used as the model for all the disciplines. The closest I could find to an explicit, normative commitment in the target article was a concern (in sect. 8, para. 3) that good scholarship not be lost to the world because of selection standards that are more stringent than reliable. I would guess that, given a chance to reform the system, Cicchetti would try to get other disciplines to approximate the peer review practices of cross-disciplinary research fields, in which most submitted articles eventually get published somewhere. I would like to subject this easy liberalism to the cold scrutiny of market analysis, however. Is peer review supposed to promote the growth of knowledge or the careers of scientists? It is not obvious that the two goals can be jointly maximized, though Cicchetti seems to presume that the fairer the system is to the individual scientist, the higher the quality of science that is likely to result. But why presume this? Here is why not. Peer review is only one of several selection mechanisms, or markets, that operate in the production and consumption of knowledge. For example, the differences that Cicchetti found between the rejection rates of specialties (low) and interdisciplinary fields (high) suggest that the specialists withheld submissions until they could anticipate acceptance, while interdisciplinarians failed to do this. This prior difference in self-selection probably arises because specialists have been trained to write for certain target journals, whereas interdisciplinarians have not.
Moreover, the specialists learned to associate quality work with acceptance in those journals, whereas interdisciplinarians learned to be more flexible (or cynical?) in their journal aspirations. Thus, the