ANALYSES, RISKS, AND AUTHORITATIVE MISINFORMATION
W. E. Harris
Department of Chemistry, University of Alberta, Edmonton, Alberta, Canada T6G 2E1
"We seem to be reaching the point of having to regulate the regulators. " —H. A. Laitinen, Analytical Chemistry, 1979, 51, 593. The huge subject area involving chemical analysis, risks, and benefits is an important and complex one that has scientific, political, sociological, and legal ramifications. When the risks associated with exposures to low doses of harmful substances are under consideration, the public must be made aware of the scientific limitations. Risk estimation may involve hard science or scientific speculation based on sensible extensions of data. In some cases it is impossible for scientists to provide reliable answers. Thus they face the daunting challenge of disseminating to the public information that is not fragmentary or easily distorted. Analytical chemists play a central role in the reliable interpretation of modern analytical information. The focus of this REPORT is on chemical analysis and how it impinges on the risk part of risk-benefit analysis, with an emphasis on the limitations and possible distortions of science. 0003 - 2700/92/0364 -665 A/$02.50/0 © 1992 American Chemical Society
To put this material into context, brief background information is given concerning recent developments in analytical chemistry that contribute to public perceptions. Also, the problem of "zero" has become more important in recent years, and the subject of toxicity deserves summary comment.

Modern analysis
An analytical revolution has taken place. Analytical methods are now so sensitive that tiny amounts of virtually any substance can be detected almost anywhere.
Limits of detectability have improved by ~3 orders of magnitude each decade for the past 30 years. In 1960 mercury could be detected and measured at a concentration of ~1 ppm, in 1970 the mercury detection limit was ~1 ppb, and in 1980 the limit was ~1 pptr (part per trillion). Detection limits for some substances are now as low as parts per quadrillion and as small as attomoles and zeptomoles (1). A few decades ago, when food, water, air, or soil was tested for toxic substances and none were detected, the tested samples were presumed to pose no threat to health or the environment. Today, most materials contain detectable amounts of numerous toxic substances. When news headlines state that a particular undesired substance is present, detected, or found in a sample, the information is unsettling to the public, and regulatory bodies are expected to take corrective action.
The public and even some scientists have been slow to recognize the significance of the dramatic changes in detection limits brought about by the analytical revolution. Analytical chemists should take the lead in responsibly interpreting analytical information because environmental assessment should be based on sound science (2).
In some respects, the interpretation and application of modern analytical results have extended beyond the realm of science. Misinterpreting the significance of lowered detection limits has resulted in unwarranted fears among the public and less than optimal use of resources to rectify perceived problems. Often, any detectable level of a toxic substance is deemed an unacceptable threat, and proof of elimination is presumed to be a reasonable demand. It is widely believed—erroneously—that the threat of toxic substances at minuscule levels can be assessed quantitatively by scientific methods. The analytical revolution has inadvertently led to extravagant quantitative extrapolations that are presented to the public in the name of valid science.
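The roughly three-orders-of-magnitude-per-decade improvement cited above amounts to a simple exponential trend. The short sketch below (Python) shows the arithmetic, using the mercury figures quoted in the text as anchors; the value computed for 1990 is an extrapolation of the trend for illustration, not a measured detection limit.

```python
# Illustrative sketch: detection limits falling ~3 orders of magnitude per decade,
# anchored to the mercury figures quoted in the text (1 ppm in 1960, 1 ppb in 1970,
# 1 pptr in 1980). The 1990 value is an extrapolation of the trend, not data.

def detection_limit_ppm(year, start_year=1960, start_limit_ppm=1.0, orders_per_decade=3):
    """Approximate detection limit (in ppm) assuming a steady exponential trend."""
    decades = (year - start_year) / 10.0
    return start_limit_ppm * 10 ** (-orders_per_decade * decades)

for year in (1960, 1970, 1980, 1990):
    print(year, f"{detection_limit_ppm(year):.0e} ppm")
# 1960 -> 1e+00 ppm (1 ppm), 1970 -> 1e-03 ppm (1 ppb),
# 1980 -> 1e-06 ppm (1 pptr), 1990 -> 1e-09 ppm (~1 ppq, extrapolated)
```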
The "zero" problem Although we can expect that analyti cal detection limits will continue to fall, measuring zero in a chemical analysis will always be impossible. However, there is a widespread im pression t h a t undesired substances can be eliminated or t h a t their for mation can be prevented; in other words, their concentrations can be zero. Even if the last trace of an un desired m a t e r i a l were eliminated, e x p e r i m e n t a l confirmation of the zero incidence level would be impos sible. The most accurate statement would be t h a t the concentration of any toxic substance present is below current detection limits. The zero problem is far from trivial and cannot simply be ignored. Al though the goal of zero concentration might be desired, it is not a realistic objective. Misconceptions about com plete elimination of undesirable ma terials may result in squandered re sources by governments t r y i n g to reach this unattainable goal. An outstanding example of the be lief in zero c o n c e n t r a t i o n is t h e Delaney Amendment to the Federal Food, Drug, and Cosmetic Act. This a m e n d m e n t r e q u i r e s r e d u c t i o n of many m a n - m a d e additives to con centration levels of zero. (Natural carcinogens in foods are omitted from the a m e n d m e n t and therefore are considered safe, regardless of their concentration.) This law was passed about t h r e e decades ago during a more innocent time. Attempts have recently been made to interpret zero in t e r m s of de minimis ("the law takes no account of trifles" such as one-in-a-million risks) (3). Never theless, the belief in zero persists. For example, the proposed regula t i o n s of t h e C a n a d a - U . S . G r e a t Lakes Water Quality Agreement state, in part, "The philosophy
"The philosophy adopted for control of inputs of persistent toxic substances shall be zero discharge." (The presumption must be that zero will be measurable.) The following recommendation was recently made to Canadian federal regulators concerning dioxin and furan discharges from pulp and paper mills (4): "Zero discharge shall be exactly that—zero—with any measurable amount in the effluent being in violation of the act."
Toxic substances
To the question "Is a particular substance toxic?" the answer is "Yes, certainly." All substances are toxic at some level (although not necessarily every substance to every species). Toxicity may be acute, chronic, or bioaccumulative. Mere recognition of the toxicity of materials continues to generate headlines.
The sixteenth-century Swiss physician Paracelsus pointed out that all substances are toxic and that the difference between a remedy and a poison lies in the amount. Hence, "The dose makes the poison." Water, with an LD50 (the dose at which half of test organisms die) of ~500 g/kg of body weight, is among the least toxic substances, but it is toxic nevertheless (an exception is fish). Even oxygen may be a carcinogen.
Toxicities vary enormously. Sodium chloride (LD50 of ~4 g/kg) and glycol (1.5 g/kg) are ~2 orders of magnitude more toxic than water; sodium cyanide has an LD50 of ~0.015 g/kg and is 2 orders of magnitude more toxic than sodium chloride and glycol. Aflatoxins are 1 order of magnitude more toxic than sodium cyanide, and botulism toxin is more toxic than aflatoxins by several orders of magnitude. Even "supertoxins" at low enough doses do not cause perceptible negative effects. Poisons, such as cyanide, are usually considered highly toxic.
Many chemicals that are essential to good health (such as sodium chloride) are toxic at high levels, and dysfunctions can also result when they are present at levels that are too low. Copper compounds that are highly toxic at high doses are, in small concentrations, essential to some human enzyme functions. Similarly, chromium and vitamin A are carcinogens at high doses but are essential in trace quantities. The effects of deficiencies of essential substances are an important field of study.
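Because LD50 values span many orders of magnitude, such comparisons are clearest on a logarithmic scale. The short sketch below (Python) ranks the approximate LD50 values quoted above and expresses each as orders of magnitude more toxic than water; the numbers are the rough figures from the text, used only for illustration, not authoritative toxicological data.

```python
import math

# Approximate LD50 values (g per kg of body weight) as quoted in the text;
# rough, species-dependent figures used only for illustration.
ld50_g_per_kg = {
    "water": 500,
    "sodium chloride": 4,
    "glycol": 1.5,
    "sodium cyanide": 0.015,
}

# Orders of magnitude more toxic than water = log10(LD50_water / LD50_substance).
for name, ld50 in sorted(ld50_g_per_kg.items(), key=lambda kv: -kv[1]):
    orders = math.log10(ld50_g_per_kg["water"] / ld50)
    print(f"{name:16s} LD50 ~{ld50:g} g/kg   ~{orders:.1f} orders of magnitude more toxic than water")
```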
The concern
The concern is that risk estimates for low-level exposures are often given an unwarranted air of scientific authority. This impression is attributable partly to the people in science, industry, and government who desire credibility for handling technical problems that affect public welfare.
Questions concerning the credibility of information have reached the point at which the public may be losing confidence in scientific experts. This situation is only partly the fault of the media; often scientists imply scientific validity for the results of unreasonable extrapolations. To a large extent such results go unchallenged by others in the scientific community. Many examples of specious quantitation could be cited; the following are a few such examples.
• In connection with recommendations of the International Commission on Radiological Protection, it is certified (5) that no dose of radiation is risk-free. Such a definite statement from a reputable group can be expected to be accepted by the public as a scientific fact.
• Six different technical groups made risk estimates (6) of the annual excess death rates from a lifetime exposure to 0.00024 fibers of asbestos per milliliter of air breathed for an average life expectancy of 75 years. The estimates range from 0.005 to 0.093 excess deaths per million deaths. The risk estimates come from extrapolations of high-level exposure data. Quantitative estimates that are precise to one or two significant figures at tiny fractions of parts per million imply to the public that sound science is involved.
• Tables (7) that compare one-in-a-million risks of death include such items as traveling 60 miles by car, rock climbing for 1.5 min, and the excess death risks from cancer resulting from visiting in Denver for two months (cosmic rays) or smoking one to three cigarettes. For the first two activities, death rates on average have been based on experience, and these risks are no doubt reasonably accurate expectations. However, inclusion of items such as the last two represents authoritative misinformation resulting from gross extrapolation of high-level exposure data. These data are given a veneer of authenticity by their association with items whose validity is not in question.
• Conversions of low-dose data to risks have often been taken well beyond reasonable extensions of scientific data. The U.S. Environmental Protection Agency (EPA) has extrapolated data on 2,3,7,8-tetrachlorodibenzo-p-dioxin, which for humans is
either a noncarcinogen or a weak one (8-12). The agency's conclusion is that this carcinogen is so potent that the tolerable daily intake is only 6 × 10^-15 g/kg (13, 14). According to the EPA's reasoning, ingestion of that amount of dioxin will cause one excess fatal cancer per million deaths.
Although such examples of risk conclusions cannot be proved wrong or right, too often they are perceived as scientific facts. The problem of lack of scientific credibility is starting to be recognized in both technical and news media (15-18). For example, according to Ames and Gold (18), "Chemicals are rarely tested at doses below the MTD (maximum tolerated dose) and half the MTD. Moreover, about half of the positive sites in animal cancer tests are not statistically significant at half the MTD."

Dose-response relation
A fundamental principle in toxicology is that exposures to high concentrations of a substance have more pronounced effects than exposures to low concentrations. Another important fact is that when a group of individuals from a single species is exposed to a substance, they experience varying levels of sensitivity. The result for a single type of response is that the observable portions of dose-response curves have a general sigmoid shape, as shown schematically in Figure 1. A Gaussian (normal probability) distribution is one example of a distribution that is consistent (although not exclusively) with an S-shaped curve.
Figure 1. Dose-response relation for a group of individuals exposed to various levels of a stressor.
S-shaped toxicity curves are expected whether one is dealing with the responses of bacteria to a disinfectant, cockroaches to malathion, or rats to rat poison.
Figure 2 shows an example of a relatively complete dose-response curve for the response of a group of rats to various doses of a hypnotic drug (19). Few rats responded to the lowest dose shown, but almost all responded to the highest dose. When approaching the low-dose portion of a curve such as that in Figure 2, a point is reached where the uncertainties in the observations eventually swamp attempted measurements of possible effects. The question then arises as to how far science can go in providing valid quantitative estimates of the likely responses at doses below those that produce observable responses—keeping in mind that a zero toxic effect is not provable.
The linear percent scale on the response (risk) axis of a plot of response versus dose can be transformed to agree with Gaussian probability values by a probit transformation. In such a transformation, the response numbers are compressed around the 50% mark and expanded at either high or low probabilities. When the dose-response data are plotted with dose on a logarithmic scale and response on the probit scale, a straight line ordinarily is obtained.
Figure 3 shows the same data as in Figure 2 but in the form of the log-probit plot. As expected, this plot is in reasonable agreement with a straight line.
Note that zero cannot be reached on either the vertical or the horizontal axis; thus, the diagram cannot provide information on zero dose or zero response.
It seems clear that a modest quantitative extrapolation of the log-probit plot to lower levels can be viewed as realistic and scientifically sensible. Thus a small extrapolation of the straight line fitted to the data for a dose of 3 mg/kg predicts a response of ~0.2%. (For this particular extension, this response rate might be confirmed by more experiments, but in any case one can be confident that the estimate is fairly accurate.) Extrapolation to a dose of 2 mg/kg gives a response rate of ~0.01%, a result that would be difficult to validate experimentally but seems logical. Quantitative extrapolation of the dose by 1 order of magnitude below the lowest value tested—to 0.4 mg/kg and with a result substantially below 0.0001%—enters the realm of questionable science. Even under favorable circumstances, quantitative extrapolations beyond 1 or 2 orders of magnitude are not scientifically valid, both because of subjective errors in establishing the slope of the straight line and because of the assumption concerning the response at low doses. The results of large extrapolations should be expressed qualitatively; the uncertainties are too large for credible quantitative estimates.
Figure 2. Response of a group of rats to various doses of a hypnotic drug.
Figure 3. Response of a group of rats to various doses of a hypnotic drug with the data plotted in the log-probit form. The probability scale is on the vertical axis. Log-probit plots are conveniently carried out by using probability graph paper.
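The probit transformation and straight-line fit described above can also be carried out numerically rather than on probability graph paper. The sketch below (Python with NumPy and SciPy) is a minimal illustration using made-up dose-response pairs of roughly the shape discussed in the text, not the actual data of Figure 2; the probit is taken here as the inverse of the standard normal cumulative distribution.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical dose-response data of roughly sigmoidal shape (doses in mg/kg,
# responses as fractions of the group responding). Illustrative numbers only,
# not the rat data of Figure 2.
doses = np.array([5.0, 7.0, 10.0, 14.0, 20.0])
responses = np.array([0.03, 0.20, 0.50, 0.80, 0.97])

# Probit transformation: inverse of the standard normal CDF applied to the
# response fraction. Plotted against log10(dose), sigmoid data become ~linear.
log_dose = np.log10(doses)
probit = norm.ppf(responses)

# Fit the straight line (slope, intercept) on the log-probit scale.
slope, intercept = np.polyfit(log_dose, probit, 1)

def predicted_response(dose_mg_per_kg):
    """Response fraction predicted by extending the fitted log-probit line."""
    return norm.cdf(slope * np.log10(dose_mg_per_kg) + intercept)

# Extrapolations below the lowest tested dose; how far such numbers can be
# trusted is exactly the question raised in the text.
for d in (3.0, 2.0, 0.4):
    print(f"dose {d:>4} mg/kg -> predicted response {predicted_response(d):.2e}")
```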
Assumptions
In general, when risk estimates are made, an assumption about the nature of dose-response relations for particular responses that will permit straight-line extrapolations is preferred. The more experimental points that can be used, the more precisely the slope of the straight line can be established. As indicated in the preceding section, sigmoid data can be plotted on a linearized probability scale to give a probit-versus-logarithm (log-probit) plot of the dose. For toxicity information, the log-probit assumption has much scientific justification. Linear probit and log-linear assumptions are somewhat similar.
In attempts to grapple with the question of possible adverse effects of exposures to tiny amounts of toxic substances, other assumptions related to the nature of the initial part of the response curve have been used. A common assumption is that if a substance is toxic at any level, it is toxic at all concentration levels. This assumption is the basis of the popular linear-no-threshold hypothesis. An adverse effect is assumed to exist at all levels all the way to zero, and the response is assumed to be directly proportional to dose. The size of the effect is then given quantitatively by a linear extrapolation to zero from a region where observations have been made or are assumed to be known.
The no-threshold hypothesis assumes that responses such as excess cancer deaths can be quantitatively estimated by linear extrapolation (e.g., the one-hit model) (20). Although this assumption initially seems attractive, it has not been and cannot be proved. Nevertheless, speculation continues about being able to prove "once and for all" whether thresholds exist. This subject can be discussed philosophically, but an answer cannot be found because that kind of no-effect zero cannot be proved.
Another (and counter) assumption is that of a threshold below which there is no effect (such as with cancer)—a level exists below which a toxic effect disappears. Again, this assumption cannot be proved. The possibility of such a threshold is often not given much consideration when risk estimates are made, particularly with respect to cancer risks.
In relation to overall well-being, the possibility of hormesis at low doses can be considered and conceivably can be demonstrated experimentally. Hormesis is the positive stimulatory effect of low levels of a stressor—for example, vaccination.
Most scientists seem to be uncomfortable with the possibility of hormesis, and they dismiss most evidence out of hand.
Estimating risk
Comments relative to the subject of risk and useful philosophical background can be found in many sources (21-25) and in the journals Science and Risk Analysis. For doses and dose rates lower than those that produce perceptible effects, and particularly when there are little or no data concerning either doses or responses, making a dose-to-risk conversion involves the application of unverifiable assumptions. The prediction of health effects rests upon extrapolation of an assumed relation between a dose and a particular type of response.
Observational uncertainties become excessive at low doses and dose rates. When uncertainties are sizable, almost any hypothesis can be supported. For example, the dose-response data for the relationship between lung cancer and cigarette use have been fitted (25) to a linear-quadratic assumption (essentially linear at low doses) as well as to a best-fit straight line. Further examination of these data indicates that they can be judged to have an even better fit to a log-probit assumption than to the other two assumptions. The uncertainties are such that there is plenty of leeway to support a claim for a fit to almost any hypothesis. Risk estimates made in the face of unverifiable assumptions therefore should be interpreted with caution, and conclusions should be reported with frank caveats.
Health protection agencies, scientific advisory boards, and panels of experts often favor the simple linear-no-threshold hypothesis for estimating the risk of developing excess cancers. That assumption, by whatever name, has been implicitly affirmed in terms of many reported risk estimates. Part of the rationalization for the linear assumption involves the fact that a no-effect threshold has not been shown experimentally. A more significant reason to support the no-threshold hypothesis is that it is prudent to err on the side of caution; hence, a conclusion based on a scientifically unreliable extrapolation nevertheless deserves support because it is conservative. Still another rationalization is that the conclusions are drawn from reasoned argument and good scientific judgment.
A more objective approach would be to consider the science as realistically as possible—to obtain estimates of risk along with an estimate of the uncertainty in those estimates and then include a carefully reasoned and explicitly stated safety factor such as 2, 5, 10, or 100. With such an approach, both the science and the safety factors would be clearly and separately visible, and each could be judged on its own merits.
Even though the linear-no-threshold assumption cannot be proved, scientists make extrapolations over wide ranges with this general kind of assumption to produce quantitative estimates (6, 7) of risk from low-dose exposures. A common attitude seems to be "We are dealing with an important question and therefore we must furnish answers." Scientists are often uncomfortable admitting that science does not or cannot always provide quantitative answers. They imply or state that extrapolations to zero have a sound technical basis and thereby reinforce this misinformation.
With respect to excess cancers, many scientists not only appear to favor the linear-no-threshold assumption but also speak and write as though it had a scientific basis. Some scientists, and certainly much of the public, believe in the validity of the results of such extrapolations. Understandably, news media reinforce the misinformation. Thus, when speculating about the possible adverse effects of exposures to tiny amounts of toxic materials, some scientists manipulate numbers to lend an aura of credibility and integrity to their comments.
Questions about low-exposure effects that are far outside the observable range certainly can be asked in the language of science. Many questions can be formulated in technical language that scientists cannot answer even with more research. The question of possible health effects of insignificant amounts of substances is beyond the realm of science to confirm now or in the future. Extrapolations should be made by using realistic assumptions, but risk estimates that go beyond a modest extrapolation and are given in quantitative terms are pseudoscience.
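One way to read the "more objective approach" suggested above is to keep the scientific estimate, its uncertainty, and the policy safety factor as three separately reported quantities. The fragment below (Python) is only a schematic illustration of that bookkeeping; the function name and the example values are hypothetical and are not taken from any agency's procedure.

```python
# Schematic separation of a scientific risk estimate from an explicitly stated
# policy safety factor, as suggested in the text. Names and numbers are hypothetical.

def reported_limit(scientific_estimate, uncertainty_factor, safety_factor):
    """Keep the science and the policy choices visible as separate quantities.

    scientific_estimate -- best-estimate tolerable dose from a modest extrapolation
    uncertainty_factor  -- multiplicative uncertainty attached to that estimate
    safety_factor       -- explicitly chosen conservatism (e.g., 2, 5, 10, or 100)
    """
    return {
        "scientific_estimate": scientific_estimate,
        "uncertainty_factor": uncertainty_factor,
        "safety_factor": safety_factor,
        "regulatory_limit": scientific_estimate / safety_factor,
    }

# Example: a hypothetical tolerable dose of 1.0 mg/kg, known to within a factor
# of 3, reduced by an explicitly stated safety factor of 10.
print(reported_limit(1.0, 3.0, 10.0))
```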
Science and conservatism
Risks should be neither overblown nor played down. The question is how regulations should be set with respect to the analytical detection limit. In the past, regulation was easier; for many materials, the regulatory limit was set at the analytical detection limit. If something was not detectable, it presumably was not there, and the threat was zero.
The practice, conscious or otherwise, of being wedded to the detection limit seems to continue for some materials. For example, the Times Beach and 2,3,7,8-tetrachlorodibenzo-p-dioxin affair arose when the detection limit for that dioxin was ~1 ppb. The regulatory limit was set (26) for Times Beach at 1 ppb. The relationship between a 1-ppb exposure limit and the level of human threat is not obvious. Does 1 ppb fail to protect at an adequate level, is it overblown, or is it just right? Later the detection limit was lowered to a few tens of parts per trillion. When dioxin was found in concentrations of typically 10-20 pptr in many products (such as fish), the regulatory limit—presumably for other reasons—was then set in that range. According to Brookes (24), "The trouble is that both Congress and the EPA have powerful political incentives to run away from any alleged danger."
Because detection limits are already low (and getting lower), and because detection limits have no fundamental relationship to toxic levels of threat, regulations and detection limits must be consciously divorced. If modern analytical sensitivities demonstrate regulatory violations, no matter how irrelevant these violations are to human welfare, public dread increases, and the public's confidence in governing bodies is destroyed. When regulatory limits track detection limits, the effects are counterproductive because resources may be squandered on trivial possible risks rather than being used to reduce risks. Moreover, as Wildavsky has pointed out (27), minimizing the risk for a particular situation is not necessarily an activity without risk itself.
A few decades ago, only moderate extrapolations were needed to go from the observable-effect level of a substance to the detection limit for that substance. For example, in 1960, when the toxicity and detection limits for mercury were in the part-per-million range, only modest extrapolations were needed to go from the observable toxicity limit to the detection limit. As detectability improves, going from the observable dysfunctions to the detection limit requires greater extrapolations. Extrapolations of about 6 orders of magnitude are typically required to reach current detection limits from the observed-effect levels.
One-in-a-million cancer risks
ceptable risk" and stated that there is no sound scientific, social, economic, or other basis for selecting the one-in-a-million criterion of acceptable risk. This number appears to have been pulled out of a hat and evidently is intended to symbolize a lifetime risk of essentially zero. A one-in-a-million involuntary level of risk is assumed to be acceptable to the public. The EPA appears to be the leader in adopting a policy of making extrapolations to risk values of one in a million (0.0001%). EPA spokesmen have stated (21) unequivocally: "We would defend our approach as good science and good public policy." Science certainly has a role in risk estimation, and that role should be clarified. T h e E P A u n d e r s t a n d a b l y wishes to enjoy public confidence in its regulatory policies and wants to appear accurate in its conclusions. Bad science, however, should not be used to meet objectives concerning good public policy. Nichols (29) questions the wisdom of the EPA in the matter of one-in-a-million hypothetical risks: "Everything in life is full of one-in-a-hundred risks. If the EPA is spending all its time trying to protect the public against one-in-a-million h y p o t h e t i c a l r i s k s , t h e n it's spending its time on trivia." Risk estimates of one excess cancer death in a million require extrapolations over many orders of magnitude. If the quality of the information were h i g h enough t h a t e x t r a p o l a t i o n s comparable to the log-probit type
If the quality of the information were high enough that extrapolations comparable to the log-probit type could be performed, even these log-probit extrapolations would have to be carried into the realm of pseudoscience to reach the one-in-a-million risk level. If quantitative risk estimates for excess cancers resulting from low doses are based on research findings at the molecular-mechanism level or are derived from high-dose rodent data, the extrapolations are largely deceptive quantification. As Abelson (30) points out, "Are humans to be regarded as behaving biochemically like huge, obese, inbred cancer-prone rodents? Sooner or later Congress must recognize a new flood of scientific information that renders suspect the Delaney clause and procedures for determining carcinogenicity of substances."
Linear-no-threshold vs. log-probit extrapolations
The results of extended extrapolations can be compared for the data shown in Figure 2. Figure 4 depicts two kinds of extrapolations. In Figure 4a the linear-no-threshold extrapolation is shown as a short straight line to zero from the lowest point on the sigmoid curve. From the direct proportionality between the lowest point on the curve and zero, the calculated response of 0.0001% (one in a million) corresponds to a dose of ~0.0001 mg/kg.
Figure 4. Comparison of linear-no-threshold and log-probit extrapolations. (a) Extrapolation from the lowest point of Figure 2 to zero as shown by the dotted straight line. (b) First segment of the straight-line extension of the log-probit plot of Figure 3 to tiny doses.
The extrapolation can also be performed from the straight-line log-probit graph (Figure 4b). Such an extrapolation (again with the conservative no-threshold assumption) to the same dose of 0.0001 mg/kg yields a response estimate that is smaller than the 0.0001% obtained with the linear-no-threshold extrapolation by an astronomical factor of at least 10^20-10^40, or 20-40 orders of magnitude. The purist could claim that a level of threat persists because even the estimated response value is never zero.
Beyond the tiniest direct-proportionality extrapolations, the resulting safety factor explodes in comparison with log-probit values. Forget about the one-in-a-million risks. Even a small linear-no-threshold extrapolation (Figure 4) to give a one-in-a-hundred (i.e., 1%) risk appears to overestimate the risk by a huge factor (>100,000). These comparisons illustrate the importance of making sound scientific judgments from reasonable extensions of observational data and defining prudent safety factors. Huge safety factors would not seem to be reasonable under any circumstances, but nonetheless they should be clearly visible and not concealed by a dubious extrapolation procedure.
Although the comparisons made in this section are for the specific example of Figure 2, the realistically harsh conclusion concerning linear-no-threshold extrapolations has broader validity because dose-response curves are generally sigmoidal. Although the slope of the log-probit straight line would be expected to vary for other response systems, the main thrust of the conclusion concerning the invalidity of linear-no-threshold extrapolations would not be expected to be profoundly affected.
It seems reasonable to consider the science associated with cancer risk estimates at three clearly defined levels: the direct observational data, reasonable extensions of the data, and further extensions into speculative hypotheses. Without experimental data that clearly show direct proportionality between cancer incidence and low-dose exposures, linear-no-threshold extrapolations are not credible for risk estimates because even modest extrapolations quickly lead to the inclusion of unreasonable safety factors. Such one-in-a-million or even one-in-a-thousand estimates should not be considered credible. More realistic models should be used, and they should clearly separate the two functions of making common-sense modest scientific extrapolations and incorporating cautious conservatism.
Devising valid models will not be easy, but for the sake of the reputation of science, sound and explicitly stated assumptions, as well as the limitations of those assumptions, are essential. Regulatory limits may best be prescribed in terms of dose without making a questionable conversion to risk, including a possibly huge and unknown safety factor. At the same time, dose selection is a judgment call, and the selected dose should not be accompanied by a claim that it is scientific.
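The size of the gap between the two extrapolation procedures compared above can be illustrated numerically. The sketch below (Python with NumPy and SciPy) reuses the hypothetical log-probit fit from the earlier example and contrasts it with a linear-no-threshold extrapolation anchored at the lowest observed point, carried down to the same tiny dose. Because the data are invented for illustration, the resulting ratio only indicates the general behavior, not the specific 10^20-10^40 figure quoted for the data of Figure 2.

```python
import numpy as np
from scipy.stats import norm

# Same hypothetical sigmoid data as in the earlier log-probit sketch
# (illustrative values only, not the rat data of Figure 2).
doses = np.array([5.0, 7.0, 10.0, 14.0, 20.0])        # mg/kg
responses = np.array([0.03, 0.20, 0.50, 0.80, 0.97])  # fraction responding

# Log-probit straight line fitted to the observed points.
slope, intercept = np.polyfit(np.log10(doses), norm.ppf(responses), 1)

def log_probit_response(dose):
    """Response predicted by extending the fitted log-probit line to a low dose."""
    return norm.cdf(slope * np.log10(dose) + intercept)

def linear_no_threshold_response(dose):
    """Response assumed directly proportional to dose, anchored at the lowest
    observed point (the linear-no-threshold construction of Figure 4a)."""
    return responses[0] * dose / doses[0]

low_dose = 1e-4  # mg/kg, far below the observed range
lnt = linear_no_threshold_response(low_dose)
lp = log_probit_response(low_dose)
print(f"linear-no-threshold estimate: {lnt:.1e}")
print(f"log-probit estimate:          {lp:.1e}")
print(f"ratio (orders of magnitude):  {np.log10(lnt / lp):.0f}")
```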
Limitations of science
Limitations to reaching definitive conclusions concerning risk estimates can be grouped under the following five headings.
Human epidemiological database. Only rarely are the available observational data for doses and responses complete enough to reveal high-to-low response sensitivities and to define a detailed dose-response curve. The paucity of data presents a problem that is unlikely to be resolved because humans cannot always be used as experimental subjects. However, available information for humans should be given precedence over other sources of information when assessing human risks.
Animal epidemiological database. Animal toxicology studies include too few examples of information that is complete enough to define the detailed dose-response curve. Usually, the crucial information is from experiments at high exposure levels. In other words, the data are likely to be from near the top of a sigmoidal response curve.
Modeling hypotheses. At one extreme, the dose-response information would be complete enough to define a dose-response curve. Moderate extensions of the data to low doses could be carried out directly. There would be little need to dwell on the merits of a model for extrapolation. For modest extensions, the risk or response estimates obtained from such extrapolations would have a high level of scientific credibility. The precision of the slope of the straight line would be expected to improve with increased amounts of data. With the log-probit model, the slope of a straight line could be chosen with only two data points, but the result would be a higher level of uncertainty. If there were but a single data point, the slope would have to be inferred from other dose-response systems and with still more uncertainty.
The less that is known about the dose-response relation, the greater the tendency to fill the gaps in knowledge with assumptions and the greater the inclination to use a model on which to base subjective decisions. When the response data are limited and are primarily at lower doses, terms such as nonlinear, quadratic, and sublinear are useful in describing the initial portions of response curves. Low-dose risk estimates then strongly depend on the validity of the model. Extrapolations involving direct proportionality with a no-threshold assumption include huge unknown safety factors that overwhelm the information concerning response. In terms of validity, most other models probably lie between the extremes of the log-probit and linear-no-threshold models. In general, speculative model hypotheses concerning the risks of excess cancer deaths at doses below which there are observable effects cannot be validated. However, once a number—no matter how spurious—is calculated, it often serves as a platform for argument and ratcheting.
Extrapolations. Extrapolations are always dangerous, and extrapolation of information from another species to humans is an especially serious limitation (30). Another limitation relates to how far scientific validity can be assumed to apply when an extrapolation is more than moderate. The possibility of a threshold below which there is no effect must be considered a reasonable assumption, even though a zero response can never be proved.
Communication. This refers to a different kind of limitation. Analytical chemists and scientists in general often fail to transfer their information into the wider sphere. Rarely do they state that the actual risks may be the unprovable zero, in the range between a numerical upper-bound estimate and zero, or between negligible and zero. Risk estimates at low doses should include caveats that indicate the limit to which quantitative estimates can be made and where qualitative judgment takes over.
An overall problem is the possible or likely adverse effects resulting from exposure to harmful substances at levels below which the effects are observable. Analytical chemists must help to define the limits to which quantitative conclusions can be drawn, provide estimates of the uncertainty levels for the conclusions, and provide informed commentary on reasonable safety factors.
Scientists are better equipped than most people to indicate when the limitations of science apply and when speculations, hypotheses, consensus, or reasoned arguments take over. None of the latter estimation methods can be deemed scientific, and thus they cannot provide quantitative (e.g., one-in-a-million) risk estimates. Finkel (22) has stated, "The debate over whether risk numbers are credible has begun to resolve itself. . . . These (risk) numbers are systematically skewed in the direction of overestimating risk—so overly conservative as to be a caricature of itself."
Most scientists conducting research on risk assessment are no doubt aware of the scientific limitations of attempting to move into areas that involve not only science but also social and political values. The media may sometimes misuse and distort judgments that have scientific aspects. However, the media cannot be blamed too much in view of statements made by reputable scientists concerning quantitative risk estimates that may result from unscientific extrapolations.
Summary
All substances are toxic at some concentration, and toxicities vary enormously. Because individual sensitivities vary, the observable portions of dose-response toxicity curves are expected to be sigmoidal in shape. For trace exposures, a zero toxic effect can never be proved. Uncertainties concerning exposure levels and lack of knowledge about excess cancer responses make it difficult to obtain sound scientific low-exposure risk estimates. Committees of experts and health protection agencies commonly assume direct proportionality between dose and response with the rationalization that it is a straightforward and conservative assumption.
Extrapolations that are made to obtain estimates of low-dose risks should be based on sound assumptions. The technical database used as the foundation for low-dose extrapolations should be clearly specified. Even the most credible of extrapolations to lower doses should be restricted to ~1 order of magnitude outside the observable range, beyond which quantitative dose-to-risk conversions should not be attempted. Prudent safety factors should be included, but their presence should be stated explicitly and should be independent of scientific estimates.
A realistic hypothesis may be that as doses are lowered, a threshold may be reached.
Neither the assumption of adverse effects such as cancer that continue at all levels down to zero nor the assumption of a threshold below which there is no such effect can be scientifically proved. The consequence of exposures to negligible doses may well be zero, but this result cannot be proved. In the qualitative sense, using words such as "insignificant," "not worth considering," or "immeasurably low" becomes appropriate even though the public expects black and white numerical answers.
When uncertainties are sizable, no hypothesis can be considered uniquely verified. Obtaining compelling observational information that would confirm a linear dose-response relation is improbable. Thus scientific credibility should not routinely be given to even modest direct-proportionality extrapolations for estimating low-dose risks. Simple linear-no-threshold extrapolations of dose-response information to generate quantitative risk estimates of even one in a thousand are pseudoscience and represent numerical rhetoric.
The advice, suggestions, and encouragement of the late L. B. Rogers are especially appreciated. The advice and suggestions of many other colleagues in reviewing drafts of the material are also appreciated. The following individuals offered comments, which may or may not be reflected in the final manuscript: M. A. Armour, J. E. Bertie, F. Cantwell, C. Carr, R. Coutts, P. Harris, S. Hrudey, B. Kratochvil, J. MacGregor, B. Mitchell, J. Shortreed, K. J. Simpson, K. Simpson, and R. Uffen.

References
(1) Dovichi, N. J. et al. Anal. Chem. 1991, 63, 2835-41.
(2) Mendelsohn, R. Am. Sci. 1991, 79, 178 (March-April).
(3) Chem. Eng. News 1991, March 4, p. 14.
(4) Rawson Academy of Aquatic Science, Ottawa, Ontario, Canada; personal communication, June 1, 1990.
(5) Atomic Energy Control Board; Reporter 1991 (spring issue).
(6) Mossman, B. T.; Bignon, J.; Corn, M.; Seaton, A.; Gee, J.B.L. Science 1990, 247, 294-300.
(7) Upton, A. C. Sci. Am. 1982, 246, 41-49.
(8) Hiremath, C.; Bayliss, D.; Bayard, S. Chemosphere 1986, 15, 1815-23.
(9) Appel, K. E.; Hildebrandt, A. G.; Lingk, W.; Kunz, H. W. Chemosphere 1986, 15, 1825-34.
(10) Gough, M. Resources for the Future 1988 (summer issue), 2-5.
(11) Bertazzi, M. et al. Am. J. Epidemiol. 1989, 129, 1187.
(12) Chem. Eng. News 1991, Oct. 28, p. 6.
(13) Chem. Eng. News 1991, Aug. 12, p. 8.
(14) Schneider, K. Sunday Oregonian 1991, Aug. 28, p. A5.
(15) Olive, D. Toronto Globe and Mail 1991, Nov. 23. (See also Thompson, D. "The Danger of Doomsaying"; Time, March 9, 1992; p. 50.)
(16) Benarde, M. A. Chem. Eng. News 1989, Dec. 11, pp. 47-48.
(17) Koshland, D. Science 1990, 249, 1357.
(18) Ames, B. N.; Gold, L. S. Science 1990, 250, 1645.
(19) Tallarida, R. J.; Jacob, L. S. The Dose-Response Relation in Pharmacology; Springer: New York, 1979; p. 108.
(20) White, M. G.; Infante, P. F.; Chu, K. C. Risk Analysis 1982, 2, 195-204.
(21) Chem. Eng. News Forum 1991, Jan. 7, 27-55.
(22) Finkel, A. M. Resources for the Future 1989 (summer issue), 11-13.
(23) Freeman, A. M.; Portney, P. R. Resources for the Future 1989 (spring issue), 1-4.
(24) Brookes, W. T. Forbes 1990, April 30, 151-72.
(25) Day, N. E. In Toxicological Risk Assessment; Clayson, D. B.; Krewski, D.; Munro, I., Eds.; CRC Press: Boca Raton, FL, 1985; p. 7.
(26) Taylor, J. K., NIST, personal communication.
(27) Wildavsky, A. Am. Sci. 1979, 67, 32-37.
(28) Kelly, K. E. Presented at the Air and Waste Management Association's 84th Annual Meeting, Vancouver, B.C., Canada, June 1991; Presentation no. 91-175.4.
(29) Nichols, A. B. Water Environment and Technology 1991, May, 57-71.
(30) Abelson, P. Science 1992, 255, 141.

W. E. Harris obtained his Ph.D. under the direction of I. M. Kolthoff at the University of Minnesota. Now a Professor Emeritus, he taught at the University of Alberta for more than 40 years. After retiring, he continued his service to the university as Chairman of the President's Advisory Committee on Campus Reviews. He has published approximately 100 papers and has authored five books, including the second edition of Chemical Analysis with H. A. Laitinen. As a consultant to Alberta Environment, he was involved in the successful siting of the Alberta hazardous waste facility.