NEWS FOCUS
Misconduct in Research
It may be more widespread than chemists like to think
Pamela S. Zurer, C&EN Washington (April 13, 1987)

It was with a mixture of fascination and horror that chemists heard Ronald Breslow's news. Rumors that had been circulating for a few weeks were confirmed in his letter to the editor and an article in the Dec. 8, 1986, issue of C&EN. The Columbia University chemistry professor would be withdrawing three communications from the Journal of the American Chemical Society because parts of the work could not be reproduced. After a preliminary inquiry, Columbia set up a committee to investigate the possibility of scientific fraud.

Fraud—the deliberate falsification or fabrication of data—is considered science's greatest sin. "Scientists are aware that honesty is the foundation of the system of science," says Joseph F. Bunnett, chemistry professor at the University of California, Santa Cruz. Bunnett headed the group of American Chemical Society journal editors who developed the society's ethical guidelines for publishing chemical research. "If we couldn't trust each other, the whole system would crumble," he says.

Stories about fudged data scare all group leaders who rely on students and postdoctoral associates to carry out their research ideas. And it is unusual to find an academic chemist who actually works at the bench after the first five or six years on the tenure track. "What happened to Ron Breslow could happen to most people who have large research groups," says Carl Djerassi, chemistry professor at Stanford University. "Anyone who read about it must say 'here but for the grace of God go I' if they are honest at all."

The Breslow incident is far from resolved. Monica P. Mehta, Breslow's former graduate student and coauthor of the papers describing the now-suspect work, continues to insist her results are valid. In the interests of fairness, the investigating committee has offered her a chance to confirm her work at another university at Columbia's expense. According to established procedures at Columbia, once the panel has completed its inquiry it will prepare a report that will be sent to the National Science Foundation, which funded the research. NSF may decide to investigate further or impose some sort of sanction.

Whatever conclusions Columbia and NSF reach, whether there is a finding of outright falsification of data or of something less sinister, grave damage has been done. Time, money, and effort have been wasted. The career of Mehta—whom Philip Pechukas, chairman of the Columbia chemistry department, has described as an extraordinarily bright student—has been indelibly marked and perhaps cut short. Breslow, like other established chemists with distinguished research records who have faced such situations, may emerge relatively unscathed.

Despite the vulnerability that Djerassi recognizes,
many chemists believe research misconduct is rare in their discipline, much less common than in biomedicine, where most of the well-publicized cases of scientific fraud have occurred.

"Out and out fraud in chemistry is a very, very small problem," says Allen J. Bard, professor of chemistry at the University of Texas, Austin, and editor of JACS. "I'm troubled that such a fuss is made. That casts a pall on the field that's unwarranted, given that these cases are so isolated. It attracts the attention of outsiders who start thinking about putting in protective devices that will bother the 999 out of 1000 people who are trustworthy."

For good or ill, science already is receiving the attention from "outsiders" that Bard is leery of. Investigations of faulty research make headlines in major newspapers. Congress has held hearings on the subject of misconduct in research. NSF has drafted formal procedures for handling allegations of misconduct, similar to guidelines the National Institutes of Health already have in place.

Professional societies, too, are stepping up their roles in encouraging ethical conduct among their members. An American Association for the Advancement of Science conference in September will focus on research institutions' procedures for dealing with misconduct. Sigma Xi, the scientific honor society, has published a booklet called "Honor in Science" intended as practical advice for young scientists encountering ethical problems in research. The Association of American Universities has issued a report on maintaining the integrity of research. The editors of ACS journals in 1985 adopted ethical guidelines for the publication of chemical research.
The ACS Committee on Professional Relations' Subcommittee on Professional Standards & Ethics—which just a year ago added the term "ethics" to its name—plans soon to republish those guidelines in a collection containing the society's Chemist's Creed and other position statements related to ethical matters. Last December, the ACS Board approved a plan for that subcommittee to serve as a clearinghouse for ethical issues and concerns of members, including problems of research misconduct.
What would you do in these situations?

From time to time, situations arise in research that demand hard ethical choices. How would you respond in the following scenarios? The way the chemists involved handled them is described at the end.

1. A graduate student reports extremely interesting results to you, his thesis adviser. Another grad student in your lab is unable to get analogous chemical reactions to work and a postdoc cannot reproduce the original results. The first student says he has decided his research interests lie elsewhere and wants to switch to another professor in your department. Do you say anything to your fellow professor about the irreproducible results? The reaction in question has a radical mechanism, a type that often is sensitive to impurities.

2. After several years of work, your research team succeeds in synthesizing and characterizing a novel compound. After publishing your multistep sequence, one of your graduate students finds a simple shortcut to the final product. You also publish this new, more efficient route. Later you discover the shortcut does not work after all; your graduate student had misidentified an isomer as the desired product. What do you do?

3. Your research adviser has a hypothesis that the data from a certain experiment should show periodic oscillation at a certain frequency. When you, his graduate student, plot the scattershot data you have collected, he exclaims: "Eureka! I can see the beginnings of the confirmation of our theory." You go back to the lab and try again, but the data you gather still look like random noise to you. What do you do?

4. A member of your research group tells you he has seen one of your graduate students altering analytical data. What do you do?

Here is what the chemists involved in these situations did:

1. The thesis adviser did not tell his departmental colleague about the suspect work because he did not want to prejudice the student's new adviser without evidence that the student had done anything wrong. The student eventually obtained his Ph.D. and went on to postdoctoral work. Later, neither his Ph.D. thesis nor the work he did as a postdoc could be reproduced. He is still active in science in another country.

2. The research professor published a retraction of the paper describing the shortcut.

3. The graduate student tested his research professor by showing him data produced by a random-number generator. The professor still insisted the data supported his hypothesis. When told the newest points were simply random numbers, not experimental results, the professor agreed he had been influenced by wishful thinking. The student eventually obtained his Ph.D. and went on to a successful career in chemistry.

4. The research adviser confronted the student who had altered his analytical results. The student admitted it and resigned. The professor helped the student get a job in the chemical industry, but made sure the company was fully aware of the incident. The student eventually got a Ph.D. from another university, which the professor also had informed.

The ultimate goal of much of this recent flurry of activity is to prevent misconduct in research from occurring in the first place. "Although these cases may be rare, they reflect very badly not only on a particular research area but on science in general," says Daryl E. Chubin, a sociologist who has studied and written about scientific fraud. Chubin is a senior analyst with Congress' Office of Technology Assessment. "They bring to scientists all those negative perceptions that we associate with lawyers and physicians—people who are known for malpractice."

"There's too much to lose if the public is mistrustful of scientists," adds Mark S. Frankel, head of AAAS's committee on scientific freedom and responsibility. "Science has to be open both within the scientific community and to the public. Who pays for most of science, after all?"

The actual frequency with which scientists fabricate or otherwise falsify their data is not known for science as a whole, much less for individual disciplines—and probably never will be. "There are two ways of gathering data on the frequency of deviance: self-reporting and surveillance," says Patricia K. Woolf, a sociologist of science at Princeton University. "Neither is very satisfactory among professionals."

Despite the lack of data, chemists like to think of chemistry as somehow purer than other branches of science, particularly biomedicine. "People who do chemistry are people who love to solve puzzles," says Columbia chemistry professor Gilbert Stork. "It's no fun if you cheat. The potential damage is not worth it. Your reputation would be destroyed."

"We're different because our experiments are reproducible," says Sir Derek H. R. Barton, professor of chemistry at Texas A&M University. "Fraud is rare in chemistry as opposed to biomedicine."

Barton's opinion is echoed by almost every chemist C&EN spoke with in preparing this article. Yet every one had at least heard of cases where chemists had fudged their results, and several had had direct experience of such behavior themselves. "I've had several cases in my career where students cheated—three instances out of several hundred," Barton says. Stork, too, had a student in his lab who altered data.

Does the fact that chemists are aware of cases of fraud mean they are fooling themselves about how often it occurs within the community? Richard N. Zare, professor of chemistry at Stanford, argues they are not. "Do the stars fall?" he asks. "No. But everyone knows about shooting stars. They make a very vivid impression."

Outright fraud may be rare in chemistry. Nobody can say for sure, as Woolf points out. But even chemists who see little or no problem with fraud admit
there are problems with subtler, less conspicuous forms of misconduct in scientific research: sloppiness, self-deception, smoothing or omission of data that don't conform to a preset hypothesis, failure to acknowledge the source of an idea, multiple publication of the same data.

The line that divides error, sloppiness, and self-deception from deliberate deceit can be difficult to draw. The seemingly minor transgressions undoubtedly are much more common than actual falsified results. Even though most scientists would not consider them fraud, they can be just as damaging to the overall integrity of the scientific effort. Weak, incomplete, and incorrect data can work their way into the literature, causing other researchers to waste time, money, and energy attempting to separate the wheat from the chaff.

"Most dangerous is self-deception," says Zare. "There can be lots of wishful thinking that the effect you're looking for is there. Take polywater, for example. I believe there was no fraud but it was not a happy chapter in chemistry. A lot of people wanted to see certain things."

Djerassi poses a hypothetical example of subtle misconduct. "You get enamored of a hypothesis of yours," he says. "You are absolutely convinced it is right. Then you prove it experimentally. Let's take a physical chemical paper. You plot your data and you need to get a straight line. You have eight points: Six of them fall on the straight line and two do not. You think of all kinds of reasons why these two can be ignored, why they are not relevant. In fact, your theory is right. People have subsequently confirmed it in other ways. So your having ignored these two points, your not having admitted this, in fact has done no damage. What about the morality of that issue? Once you find that that type of intellectual sloppiness does pay off, you would tend to do it again.

"Furthermore," Djerassi continues, "no one does research alone these days.
So that sloppiness is certainly obvious to the rest of your colleagues and frequently to the younger ones."

"There is a lot more soft cheating than hard cheating," says Rustum Roy, chemistry professor at Pennsylvania State University. "The present mechanisms of science are making it worse. The pressure for tenure, to publish, leads to cutting corners."

Djerassi notes that there are more checks and balances in industrial research than in academic research and that these tend to reduce the incidence of misconduct. "The pressure is also there to succeed," he says, "but success in industry is defined as a demonstration that something really works and pays off. In much academic work, success need be demonstrated only once, then it's published, and then you go on to something else."

As Stork says, cheating may take the "fun" out of chemistry—the internal satisfaction that comes from untangling a tough intellectual knot—but it produces results. And results are what bring the external rewards at every stage of a scientist's career.
The pressure to continually have something new to report is experienced by all academic researchers. A graduate student needs to investigate 10 more cases of a new reaction she has developed before her research adviser will allow her to write up her work for her Ph.D. thesis. An assistant professor has only a few years to churn out enough publications to make himself a viable candidate for tenure at a prestigious university. A tenured professor needs evidence of satisfactory progress to support her grant renewal application. Or—as in "Castor's Dilemma," an intriguing short story by Djerassi about a scientific fraud—an established scientist needs certain results to support a pet theory that will secure his place among the elite of his profession.

Sociologists have pointed out the potential conflict between scientists' professed goal of searching for truth and their desire for the recognition and rewards that come with producing tangible results. That systemic environmental pressure doesn't lead just to outright fraud by a few aberrant individuals; it also leads to the more subtle "soft cheating" that Roy refers to.

"There's an urgency in the environment," says sociologist Chubin. "That's why research misconduct is not just a matter of crackpots. The common reaction is that the person who cheats is deranged. Anybody who would think that he or she could get away with it has got to be mad, because the system is going to find them out, which means exile for that person; that person is gone; the career is over. If scientists can point the finger at a psychopath, that's tantamount to
saying: 'We can cut the cancer out of the body but the body's intact and we'll just keep on working the way we did before, no harm done.' Of course, it's not that simple. That view de-emphasizes the research environment in which the misconduct is going on."

An attempt by two NIH scientists to find out how often the more subtle forms of misconduct occur was recently published in Nature [325, 207 (1987)]. Walter W. Stewart and Ned Feder minutely dissected 109 publications by John Darsee (a physician who in 1981 confessed to committing fraud) and 47 coworkers. Largely because of the threat of libel suits by some of Darsee's coauthors, it took more than three years for their study to be published.

"The distinction between fraud, sloppiness, and self-deception goes to the intent of the author and is hard to judge," says Stewart, a chemist. "We decided to simply study the accuracy of the scientific literature. How often does this whole range of things occur?"

Stewart and Feder looked not at Darsee's forgery but at the parts of the publications the coauthors were responsible for. They found that more than half the authors had engaged in questionable practices that in some cases weakened or even invalidated their reported results. Most common were careless errors, but Stewart and Feder also found statements they thought the coauthors knew or should have known were inaccurate.

Although the papers he and Feder examined were in the field of cardiology, Stewart thinks there are lessons for all scientists in what they found. "We chemists shouldn't conclude that we are better than the rest," Stewart says. "The point is not to be vigilant in shunning bad apples. We need debate and discussion about what we're accomplishing and how. What are the appropriate standards and are we following them?"

Scientists point to three major stages in the dissemination of new knowledge when safeguards are supposed to catch misconduct in research.
The first is in the laboratory itself, where high standards ought to prevent the transmission of unreliable or incomplete data. The second is at the publication stage, when editors and reviewers judge the quality of a contribution. The third safeguard is replication, by which scientists attempt to confirm or disprove the work of others. However, there are holes in each of these dams through which unreliable data can slip.

The first line of defense against misconduct in research has to be the laboratory, where the results are produced. In theory, no data should leave the lab until the researchers themselves have confirmed and scrutinized them with extreme care. In practice, results are often put out without having had to pass an exhaustive, critical examination.

One contributing factor may be that chemists very seldom talk explicitly about what standards are expected in the research lab. Students don't attend seminars or hear lectures on the necessity to watch out for self-deception or to avoid the temptation to "improve" their results. Scientists just assume the proper standards are self-evident.

"Science treats ethics as something you pick up as
part of the culture," says Terrence R. Russell, a sociologist of science who is manager of ACS's office of professional relations. "You pick up the way to do research by doing research."

"When someone works for you—student or postdoc—you don't expect them to cheat," says Alex Nickon, chemistry professor at Johns Hopkins University, speaking from firsthand experience. A postdoctoral research associate, whom Nickon had hired on the basis of good recommendations, managed to have several papers published that later proved to be fabricated. Nickon doesn't see what he could have done to prevent the incident, as the researcher submitted the fraudulent work totally on his own.

Nickon notes that letters of recommendation are not the safeguards they might be. After his unhappy experience, Nickon wrote the people who had recommended the offending postdoc to him. One answered he had noted some incidents in the postdoc's personal affairs that should have made him suspicious, but had not wanted to mention them in a professional recommendation.

Another incident casts further doubt on how well recommendations work as a quality-control mechanism. A year or so after the postdoc had left Johns Hopkins, Nickon heard from a scientist in France that the postdoc had given Nickon's name as a reference, apparently thinking his new employer would never check. "Few people take the time to check recommendations," Nickon says.
"Part of scientific training is to teach people to be objective, not subjective," Nickon says. "The research director's duty is to be critical and expect high standards. But a research director can't check everything a student or postdoc does. There has to be some element of trust."

Penn State's Roy thinks there is less checking today than there once was. "Fifteen years ago I would look at all the raw data myself," he says. "Now I'm not close enough. I wouldn't catch it if someone were deliberately trying to falsify results. Every head of a large lab is in the same position. I can't even run half of my machines."

"It's impossible in a research group of 20 or 30 people for the senior person to know precisely what is going on," agrees Djerassi. "A certain amount has to be taken on faith."

Sociologist Chubin thinks that senior researchers often may be let off too easily in cases of research misconduct. Laboratory heads most often emerge embarrassed but not permanently damaged by incidents of misconduct in their labs. Barton's reputation, for example, certainly hasn't suffered because he had to correct some falsified results.

"Lab heads have to be much more closely in touch with the work that is going on at the bench," Chubin says. "They should be able to vouch for the validity or the accuracy or the procedures by which the data have been gathered and analyzed and reported just as much as if they themselves were doing the work. If so much distance has grown up between the head of the lab and the people who are doing the work at the bench, then maybe big science—big in the sense of number of collaborators—is getting too big."

If heads of large research groups can't always vouch for the work coming out of their labs, should they receive credit for it? "It seems to me that if someone is going to accept credit, then he or she has got to be prepared to accept blame when things don't go right," Chubin says.
The chemist's creed

As a chemist, I have a responsibility:

To the public: To propagate a true understanding of chemical science, avoiding premature, false, or exaggerated statements, to discourage enterprises or practices inimical to the public interest or welfare, and to share with other citizens a responsibility for the right and beneficent use of scientific discoveries.

To my science: To search for its truths by use of the scientific method, and to enrich it by my own contributions for the good of humanity.

To my profession: To uphold its dignity as a foremost branch of learning and practice, to exchange ideas and information through its societies and publications, to give generous recognition to the work of others, and to refrain from undue advertising.

To my employer: To serve him undividedly and zealously in mutual interest, guarding his concerns and dealing with them as I would my own.

To myself: To maintain my professional integrity as an individual, to strive to keep abreast of my profession, to hold the highest ideals of personal honor, and to live an active, well-rounded, and useful life.

To my employees: To treat them as associates, being ever mindful of their physical and mental well-being, giving them encouragement in their work, as much freedom for personal development as is consistent with the proper conduct of work, and compensating them fairly, both financially and by acknowledgment of their scientific contributions.

To my students and associates: To be a fellow learner with them, to strive for clarity and directness of approach, to exhibit patience and encouragement, and to lose no opportunity for stimulating them to carry on the great tradition.

To my clients: To be a faithful and incorruptible agent, respecting confidence, advising honesty, and charging fairly.

Approved by the council of the American Chemical Society, Sept. 14, 1965.

Stewart and Feder concluded that one of the most serious ethical shortcomings revealed by their study of the Darsee papers was honorary authorship—the practice of scientists with no real involvement in a research project coauthoring the resulting publications. The ACS guidelines to the ethical publication of research state, "The coauthors of a paper should be all those persons who have made significant scientific contributions to the work reported and who share responsibility and accountability for the results."

Among chemists it is standard practice for a group leader who has had no hands-on contact with a research project to be the senior author of the resulting papers. The principal investigator provides intellectual guidance, lab space, and funds for carrying out the work. "In a team of three or four, if one screws up, are all guilty?" asks Roy. "I didn't really do the work but I conceived the idea, got the money, and wrote it up. If I put my name on the paper, I am responsible."

Chemistry is a sort of feudal system, however, with many independent fiefdoms. Standards differ from research group to research group. The recognition that not every young scientist may learn the same high standards was part of the motivation of the group of ACS editors who formulated the guidelines to publishing research that the society adopted in 1985. The group's chairman was Bunnett, founding editor of Accounts of Chemical Research.

"Whether young scientists learned the general consensus about what's good ethical behavior in publication would depend a lot on the particular milieu in which they had developed, largely the Ph.D. research group," Bunnett says. "If in that milieu there was a fair amount of discussion of ethical questions, perhaps of occasional transgressions that occurred, the young people are likely to be cognizant of some of the ethical principles that are generally accepted to prevail. But I'm not sure that's always the case. It may be that in some research groups there is not much discussion of such things."

Stanford's Zare, for one, continuously tries to prevent ethical problems. "Every day that I'm in town, I'm walking in the lab, talking to people who are involved in the work, and seeing what they're doing. (Incidentally, almost all the time something's not working. That's the normal state of affairs, to be honest.) I have a good sense of what data look like. I urge people to keep good lab notebooks. I try to foster a sense of being critical. I try to explain to students how I see the problem of genuine self-deception as being the most serious."

Bunnett points out how important it is for a research director not to get too excited about positive results or to be too negative when the unexpected occurs. "I think in some cases this may cause people who are near some kind of an ethical borderline to switch over towards reporting results that are more favorable than they really are. In the worst case, it would be a complete fabrication. So I personally bend over backwards to compliment my coworkers when they report a result that is negative or contrary to my expectations. 'Now we're learning something interesting,' I say."

Once a piece of research is submitted for publication, journal editors and referees have the opportunity to screen out substandard material. Yet it is unlikely they will catch fraudulent data. Reviewers are asked to judge the logical consistency, novelty, and import of a paper, not replicate every experiment or calculation.
"There have been cases where a person who has fudged data has gone to more work than it would have taken to do the experiment," Bard says. "If a guy wants to, he's going to put it over. Most reviewers start with the assumption that the author is honest and ethical. It's very, very rare that people suspect something improper is going on. If people suspect anything, it's not fraud, but the author misinterpreting results. It's a lot to expect for reviewers to redo calculations, etc.; we put enough of a burden on them anyway."

Some chemists see a growing tendency to publish short communications rather than full papers as a potential threat to the integrity of the scientific literature. "A lot of leaders in my field—synthetic organic chemistry—have quit writing full papers," says Clayton H. Heathcock, chairman of the chemistry department at the University of California, Berkeley. "If your experimental procedures are never published, it's that much easier for either outright fraud or honest self-deception to slip by. If you don't have to hang your laundry in the neighborhood where everybody can
see it, you don't have to be so careful about removing all the spots."

Stork disagrees that communications need be less than complete. He thinks all the details necessary to judge or use a piece of research can be included in short notes and that full papers are often packed with uninteresting filler. "It's an editorial problem if the essentials are not there," he says. "The editors and referees have a duty to ask for more details. If you are cheating, it doesn't matter whether you write a telegram or a long letter."

"I think communications are necessary," says Bard, noting how long it can take to publish a full paper. "We expect when a communication is submitted that the experimental details will be made available to the reviewers, or they may be in supplementary material. It's rare that they are nowhere. The danger in putting restraints on communications is that there will be a loss of excitement."

The ultimate safeguard in science is supposed to be replication. If other researchers can't reproduce a result, the work is tossed out. Material that is wrong—whether because of fraud, sloppiness, self-deception, or whatever—is thus weeded out of the body of knowledge. Bunnett asserts that Breslow's withdrawal of his results when they could not be reproduced is evidence that the corrective mechanism works. "Breslow's letter is like a person having a high fever," Bunnett says. "It's an indication that the body's own defenses are working."

Results that fly in the face of expected views, have potential commercial value, or have wide implications indeed are likely to be tested or extended by other researchers. But if fraudulent or erroneous work is in a minor area, it won't be picked up by the community. "It's a myth that all experiments will be redone," Zare says. "Many are left to molder."

"If someone publishes a multistep synthesis of a natural product, it is not likely someone else would try to reproduce the 37th step," Heathcock says.
"And even if it didn't work, there might be a nagging worry that you are not doing it right."

Forces within the scientific system itself work against routine reproduction of other researchers' work. Scientists don't want to spend the time: There are no papers to publish or awards given for excellence in replication. Nor do funding agencies give money for redoing what someone else has done already. As a consequence, replication of results is not a foolproof system for error detection. "Anything that's fraudulent that's of significance will be quickly picked up," Nickon says. "But maybe the literature is filled with trivial, insignificant things that are all wrong."

The ACS ethical guidelines to publication of research state that editors should publish reports and corrections of erroneous material that has appeared in their journals. Such corrections are not currently included in Chemical Abstracts, however. That flaw in the system means that corrections—whether of fraud or less deliberate errors—may never be seen by researchers trying to use the original paper.
"It's a problem we've wrestled with for some time," says John T. Dickman, senior assistant for editorial operations at Chemical Abstracts Service. "We're in the process of examining a system to handle it. In our proposal a person will be able to get to the original or to the correction no matter which he or she retrieves."

The problem of handling corrections is not unique to CAS but is one all the scientific abstracting services are facing. The National Library of Medicine, for example, recently began flagging articles in its on-line MEDLINE system that have been retracted.

Before the abstracting services can correct their records, however, there must be a correction in print. Unfortunately, not all researchers who find they've published erroneous material step forward to correct it. "If something slips through, you publish a correction or you can forget the whole thing," says Penn State's Roy. "I suspect there's a lot of the latter."

Sometimes incorrect data are retracted in a kind of backdoor fashion by publishing a new paper that supersedes the flawed one. Or sometimes, if the error hasn't been noticed outside the research group, nothing at all is done. In such cases, the unofficial correction mechanism of chemistry—the rumor mill—kicks in. Once chemists find that someone continually reports higher yields, or greater stereospecificity, or cleaner data sets than the rest of the community can manage in similar experimental work, they become suspicious. "There is a kind of corrective action in the form of gossip," Bunnett says. "Scientists talk to each other. When ethical transgressions occur, they become known. Channels of gossip are effective, but they may not be fair."

Maintaining the integrity of science while remaining fair to the individual is the subject of the AAAS
conference on fraud to be held in September. The invitation-only meeting is being sponsored by the joint AAAS/American Bar Association National Conference of Lawyers & Scientists. "We will review what has happened in the area of scientific misconduct," says Albert H. Teich, head of AAAS's public sector programs. "Then we hope to use that experience to develop guidelines for universities to handle such cases in a way that protects the rights of all the parties involved."

An incentive for universities to put such guidelines in place is that in 1985 Congress passed legislation requiring institutions receiving NIH funds to develop procedures for investigating possible misconduct and to inform the agency when they initiate a formal investigation. NIH may subsequently conduct its own investigation. NIH has drafted rules to enforce the new requirements and is encouraging universities to get their procedures established in the meantime. New grant application forms, which will be out shortly, will have a check-off box for reporting whether or not the institutions have set up such policies, according to Mary L. Miers, NIH's misconduct policy officer.

NIH has had its own formal policy for handling suspected research misconduct since 1983. The agency gets two or three allegations a month that require follow-up, although not all are found to involve misconduct, Miers says. NIH has a confidential information system called NIH ALERT that lists individuals and institutions that are under investigation. "We are very cautious in extending support during an investigation," Miers says. "The Constitution doesn't guarantee anybody a grant or a contract." If misconduct is found to have occurred, sanctions can range from a reprimand, to suspension of a research grant, to individual or institutional exclusion from even applying for funding.

NSF has drafted very similar rules for handling cases of alleged misconduct among its grant recipients.
The proposed regulations were published in the Federal Register in February and were open for comment through April 13. Although the guidelines are not formally in place, NSF plans to follow their general principles in handling the Breslow incident, according to Jerome H. Frigeau, director of the audit and oversight division.

Ultimately, scientists themselves—including chemists—must take responsibility for maintaining the integrity of their research. "One has to be careful not to use a remedy that's worse than the problem," Stork cautions. "Just imagine an operation where all spectra are repeated, where every experiment is done under observation. It would kill science in this country."

Such Draconian measures are not necessary. The chemical community, however, could be more self-critical. It would be healthy to admit that research misconduct does occur and that there are pressures within the environment in which chemists work that give rise to unethical behavior. Of course, as long as scientists are human beings, not robots, there will be cases of fraud and self-deception. But facing them with eyes open may reduce their incidence.