W. E. Harris, Department of Chemistry, University of Alberta, Edmonton, Alberta, Canada T6G 2E1
"We seem to be reaching the point of having to regulate the regulators." -H. A. Laitinen, Analytical Chemistry, 1979, 51, 593.

The huge subject area involving chemical analysis, risks, and benefits is an important and complex one that has scientific, political, sociological, and legal ramifications. When the risks associated with exposures to low doses of harmful substances are under consideration, the public must be made aware of the scientific limitations. Risk estimation may involve hard science or scientific speculation based on sensible extensions of data. In some cases it is impossible for scientists to provide reliable answers. Thus scientists face the daunting challenge of disseminating to the public information that is neither fragmentary nor easily distorted. Analytical chemists play a central role in the reliable interpretation of modern analytical information. The focus of this REPORT is on chemical analysis and how it impinges on the risk part of risk-benefit analysis, with an emphasis on the limitations and possible distortions of science.
To put this material into context, brief background information is given concerning recent developments in analytical chemistry that contribute to public perceptions. Also, the problem of “zero” has become more important in recent years, and the subject of toxicity deserves summary comment.
Modern analysis
An analytical revolution has taken place. Analytical methods are now so sensitive that tiny amounts of virtually any substance can be detected almost anywhere. Limits of detectability have improved by ~3 orders of magnitude each decade for the past 30 years. In 1960 mercury could be detected and measured at a concentration of ~1 ppm, in 1970 the mercury detection limit was ~1 ppb, and in 1980 the limit was ~1 pptr (part per trillion). Detection limits for some substances are now as low as parts per quadrillion and as small as attomoles and zeptomoles (1).
A few decades ago, when food, water, air, or soil was tested for toxic substances and none were detected, the tested samples were presumed to pose no threat to health or the environment. Today, most materials contain detectable amounts of numerous toxic substances. When news headlines state that a particular undesired substance is present, detected, or found in a sample, the information is unsettling to the public, and regulatory bodies are expected to take corrective action. The public and even some scientists have been slow to recognize the significance of the dramatic changes in detection limits brought about by the analytical revolution. Analytical chemists should take the lead in responsibly interpreting analytical information because environmental assessment should be based on sound science (2).
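The arithmetic of that trend can be sketched in a few lines. The Python fragment below is purely illustrative; the function name and the assumption of a smooth improvement of 3 orders of magnitude per decade are hypothetical, anchored only to the mercury figures quoted above, and it is not a model of any particular analytical method.

```python
# Illustrative only: detection limits improving ~3 orders of magnitude per decade,
# anchored to the mercury figures quoted in the text (~1 ppm in 1960, ~1 ppb in
# 1970, ~1 pptr in 1980).
def approximate_detection_limit_ppm(year, limit_1960_ppm=1.0, orders_per_decade=3.0):
    """Rough detection limit (in ppm) implied by the stated trend."""
    decades_elapsed = (year - 1960) / 10.0
    return limit_1960_ppm * 10.0 ** (-orders_per_decade * decades_elapsed)

for year in (1960, 1970, 1980, 1990):
    print(year, f"{approximate_detection_limit_ppm(year):.0e} ppm")
    # 1960: 1 ppm; 1970: 1e-3 ppm (1 ppb); 1980: 1e-6 ppm (1 pptr); 1990: 1e-9 ppm
```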
In some respects, the interpretation and application of modern analytical results have extended beyond the realm of science. Misinterpreting the significance of lowered detection limits has resulted in unwarranted fears among the public and less than optimal use of resources to rectify perceived problems. Often, any detectable level of a toxic substance is deemed an unacceptable threat, and proof of elimination is presumed to be a reasonable demand. It is widely believed, erroneously, that the threat of toxic substances at minuscule levels can be assessed quantitatively by scientific methods. The analytical revolution has inadvertently led to extravagant quantitative extrapolations that are presented to the public in the name of valid science.

The "zero" problem
Although we can expect that analytical detection limits will continue to fall, measuring zero in a chemical analysis will always be impossible. However, there is a widespread impression that undesired substances can be eliminated or that their formation can be prevented; in other words, that their concentrations can be zero. Even if the last trace of an undesired material were eliminated, experimental confirmation of the zero incidence level would be impossible. The most accurate statement would be that the concentration of any toxic substance present is below current detection limits. The zero problem is far from trivial and cannot simply be ignored. Although the goal of zero concentration might be desired, it is not a realistic objective. Misconceptions about complete elimination of undesirable materials may result in squandered resources by governments trying to reach this unattainable goal.
An outstanding example of the belief in zero concentration is the Delaney Amendment to the Federal Food, Drug, and Cosmetic Act. This amendment requires reduction of many man-made additives to concentration levels of zero. (Natural carcinogens in foods are omitted from the amendment and therefore are considered safe, regardless of their concentration.) This law was passed about three decades ago during a more innocent time. Attempts have recently been made to interpret zero in terms of de minimis ("the law takes no account of trifles," such as one-in-a-million risks) (3). Nevertheless, the belief in zero persists. For example, the proposed regulations of the Canada-U.S. Great Lakes Water Quality Agreement state, in part, "The philosophy
adopted for control of inputs of persistent toxic substances shall be zero discharge." (The presumption must be that zero will be measurable.) The following recommendation was recently made to Canadian federal regulators concerning dioxin and furan discharges from pulp and paper mills (4): "Zero discharge shall be exactly that – zero – with any measurable amount in the effluent being in violation of the act."

Toxic substances
To the question "Is a particular substance toxic?" the answer is "Yes, certainly." All substances are toxic at some level (although not necessarily every substance to every species). Toxicity may be acute, chronic, or bioaccumulative. Mere recognition of the toxicity of materials continues to generate headlines. The sixteenth-century Swiss physician Paracelsus pointed out that all substances are toxic and that the difference between a remedy and a poison lies in the amount. Hence, "The dose makes the poison." Water, with an LD50 (the dose at which half of test organisms die) of ~500 g/kg of body weight, is among the least toxic substances, but it is toxic nevertheless (an exception is fish). Even oxygen may be a carcinogen.
Toxicities vary enormously. Sodium chloride (LD50 of ~4 g/kg) and glycol (1.5 g/kg) are ~2 orders of magnitude more toxic than water; sodium cyanide has an LD50 of ~0.015 g/kg and is 2 orders of magnitude more toxic than sodium chloride and glycol. Aflatoxins are 1 order of magnitude more toxic than sodium cyanide, and botulism toxin is more toxic than aflatoxins by several orders of magnitude. Even "supertoxins" at low enough doses do not cause perceptible negative effects. Poisons, such as cyanide, are usually considered highly toxic. Many chemicals that are essential to good health (such as sodium chloride) are toxic at high levels, and dysfunctions can also result when they are present at levels that are too low. Copper compounds that are highly toxic at high doses are, in small concentrations, essential to some human enzyme functions. Similarly, chromium and vitamin A are carcinogens at high doses but are essential in trace quantities. The effects of deficiencies of essential substances are an important field of study.

The concern
The concern is that risk estimates for low-level exposures are often given
an unwarranted air of scientific authority. This impression is attributable partly to the people in science, industry, and government who desire credibility for handling technical problems that affect public welfare. Questions concerning the credibility of information have reached the point at which the public may be losing confidence in scientific experts. This situation is only partly the fault of the media; often scientists imply scientific validity for the results of unreasonable extrapolations. To a large extent such results go unchallenged by others in the scientific community. Many examples of specious quantitation could be cited; the following are a few.
In connection with recommendations of the International Commission on Radiological Protection, it is certified (5) that no dose of radiation is risk-free. Such a definite statement from a reputable group can be expected to be accepted by the public as scientific fact.
Six different technical groups made risk estimates (6) of the annual excess death rates from a lifetime exposure to 0.00024 fibers of asbestos per milliliter of air breathed for an average life expectancy of 75 years. The estimates range from 0.005 to 0.093 excess deaths per million deaths. The risk estimates come from extrapolations of high-level exposure data. Quantitative estimates that are precise to one or two significant figures at tiny fractions of parts per million imply to the public that sound science is involved.
Tables (7) that compare one-in-a-million risks of death include such items as traveling 60 miles by car, rock climbing for 1.5 min, and the excess death risks from cancer resulting from visiting Denver for two months (cosmic rays) or smoking one to three cigarettes. For the first two activities, death rates on average have been based on experience, and these risks are no doubt reasonably accurate expectations. However, inclusion of items such as the last two represents authoritative misinformation resulting from gross extrapolation of high-level exposure data. These data are given a veneer of authenticity by their association with items whose validity is not in question.
Conversions of low-dose data to risks have often been taken well beyond reasonable extensions of scientific data. The U.S. Environmental Protection Agency (EPA) has extrapolated data on 2,3,7,8-tetrachlorodibenzo-p-dioxin, which for humans is
either a noncarcinogen or a weak one (8-12). The agency's conclusion is that this carcinogen is so potent that the tolerable daily intake is only 6 × 10⁻¹⁵ g/kg (13, 14). According to the EPA's reasoning, ingestion of that amount of dioxin will cause one excess fatal cancer per million deaths. Although such examples of risk conclusions cannot be proved wrong or right, too often they are perceived as scientific facts. The problem of lack of scientific credibility is starting to be recognized in both technical and news media (15-18). For example, according to Ames and Gold (18), "Chemicals are rarely tested at doses below the MTD (maximum tolerated dose) and half the MTD. Moreover, about half of the positive sites in animal cancer tests are not statistically significant at half the MTD."
Dose-response relation
A fundamental principle in toxicology is that exposures to high concentrations of a substance have more pronounced effects than exposures to low concentrations. Another important fact is that when a group of individuals from a single species is exposed to a substance, the individuals show varying levels of sensitivity. The result, for a single type of response, is that the observable portions of dose-response curves have a general sigmoid shape, as shown schematically in Figure 1. A Gaussian (normal probability) distribution is one example of a distribution that is consistent (although not exclusively so) with an S-shaped curve. S-shaped toxicity curves are expected whether one is
dealing with the responses of bacteria to a disinfectant, cockroaches to malathion, or rats to rat poison. Figure 2 shows an example of a relatively complete dose-response curve for the response of a group of rats to various doses of a hypnotic drug (19). Few rats responded to the lowest dose shown, but almost all responded to the highest dose.
When approaching the low-dose portion of a curve such as that in Figure 2, a point is reached where the uncertainties in the observations eventually swamp attempted measurements of possible effects. The question then arises as to how far science can go in providing valid quantitative estimates of the likely responses at doses below those that produce observable responses, keeping in mind that a zero toxic effect is not provable.
The linear percent scale on the response (risk) axis of a plot of response versus dose can be transformed to agree with Gaussian probability values by a probit transformation. In such a transformation, the response scale is compressed around the 50% mark and expanded at high and low probabilities. When the dose-response data are plotted with dose on a logarithmic scale and response on the probit scale, a straight line ordinarily is obtained. Figure 3 shows the same data as in Figure 2 but in the form of the log-probit plot. As expected, this plot is in reasonable agreement with a straight line. Note that zero cannot be reached on either the vertical or the horizontal axis; thus, the diagram cannot provide information on zero dose or zero response.
It seems clear that a modest quantitative extrapolation of the log-probit plot to lower levels can be viewed as realistic and scientifically sensible. Thus a small extrapolation of the straight line fitted to the data predicts a response of 0.2% for a dose of 3 mg/kg. (For this particular extension, the response rate might be confirmed by more experiments, but in any case one can be confident that the estimate is fairly accurate.) Extrapolation to a dose of 2 mg/kg gives a response rate of ~0.01%, a result that would be difficult to validate experimentally but seems logical. Quantitative extrapolation of the dose by 1 order of magnitude below the lowest value tested (to 0.4 mg/kg, with a predicted response substantially below 0.0001%) enters the realm of questionable science. Even under favorable circumstances, quantitative extrapolations beyond 1 or 2 orders of magnitude are not scientifically valid, both because of subjective errors in establishing the slope of the straight line and because of the assumption concerning the response at low doses. The results of large extrapolations should be expressed qualitatively; the uncertainties are too large for credible quantitative estimates.
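The probit arithmetic is easy to reproduce. The short Python sketch below is a minimal illustration, assuming NumPy and SciPy are available; the dose-response numbers are hypothetical values chosen only to mimic the general shape and the extrapolated responses quoted above, not the actual data of Figure 2. It applies the probit transformation, fits a straight line in log-dose versus probit coordinates, and extends that line to lower doses.

```python
# Minimal log-probit sketch with hypothetical dose-response data
# (not the data of Figure 2).
import numpy as np
from scipy.stats import norm

doses = np.array([4.0, 6.0, 8.0, 12.0, 20.0, 30.0])            # mg/kg, hypothetical
fraction_responding = np.array([0.01, 0.07, 0.20, 0.50, 0.86, 0.97])

# Probit transformation: response fractions -> standard normal quantiles.
probits = norm.ppf(fraction_responding)

# Straight-line fit in (log10 dose, probit) coordinates.
slope, intercept = np.polyfit(np.log10(doses), probits, 1)

def extrapolated_response(dose_mg_per_kg):
    """Response fraction predicted by extending the fitted straight line."""
    return norm.cdf(slope * np.log10(dose_mg_per_kg) + intercept)

for dose in (3.0, 2.0, 0.4):
    print(f"{dose:4.1f} mg/kg -> predicted response {extrapolated_response(dose):.1e}")
```

With numbers of this general shape, the predicted response falls from roughly 0.2% at 3 mg/kg to an immeasurably small value at 0.4 mg/kg, and small changes in the fitted slope swing the low-dose predictions by orders of magnitude, which is exactly why such extended extrapolations should be treated qualitatively.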
Assumptions
In general, when risk estimates are made, an assumption about the
Figure 1. Dose-response relation for a group of individuals exposed to various levels of a stressor.
Figure 2. Response of a group of rats to various doses of a hypnotic drug.
Figure 3. Response of a group of rats to various doses of a hypnotic drug with the data plotted in the log-probit form. The probability scale is on the vertical axis. Log-probit plots are conveniently carried out by using probability graph paper.
nature of dose-response relations for particular responses that will permit straight-line extrapolations is preferred. The more experimental points that can be used, the more precisely the slope of the straight line can be established. As indicated in the preceding section, sigmoid data can be plotted on a linearized probability scale to give a probit-versus-logarithm-of-dose (log-probit) plot. For toxicity information, the log-probit assumption has much scientific justification. Linear-probit and log-linear assumptions are somewhat similar.
In attempts to grapple with the question of possible adverse effects of exposures to tiny amounts of toxic substances, other assumptions related to the nature of the initial part of the response curve have been used. A common assumption is that if a substance is toxic at any level, it is toxic at all concentration levels. This assumption is the basis of the popular linear-no-threshold hypothesis. An adverse effect is assumed to exist at all levels all the way to zero, and the response is assumed to be directly proportional to dose. The size of the effect is then given quantitatively by a linear extrapolation to zero from a region where observations have been made or are assumed to be known. The no-threshold hypothesis assumes that responses such as excess cancer deaths can be quantitatively estimated by linear extrapolation (e.g., the one-hit model) (20). Although this assumption initially seems attractive, it has not been and cannot be proved. Nevertheless, speculation continues about being able to prove "once and for all" whether thresholds exist. This subject can be discussed philosophically, but an answer cannot be found because that kind of no-effect zero cannot be proved.
Another (and counter) assumption is that of a threshold below which there is no effect (such as with cancer); that is, a level exists below which a toxic effect disappears. Again, this assumption cannot be proved. The possibility of such a threshold is often not given much consideration when risk estimates are made, particularly with respect to cancer risks. In relation to overall well-being, the possibility of hormesis at low doses can be considered and conceivably can be demonstrated experimentally. Hormesis is the positive stimulatory effect of low levels of a stressor; vaccination is one example. Most scientists seem to be
uncomfortable with the possibility of hormesis, and they dismiss most evidence out of hand.

Estimating risk
Comments relative to the subject of risk and useful philosophical background can be found in many sources (21-25) and in the journals Science and Risk Analysis. For doses and dose rates lower than those that produce perceptible effects, and particularly when there are little or no data concerning either doses or responses, making a dose-to-risk conversion involves the application of unverifiable assumptions. The prediction of health effects rests upon extrapolation of an assumed relation between a dose and a particular type of response. Observational uncertainties become excessive at low doses and dose rates. When uncertainties are sizable, almost any hypothesis can be supported. For example, the dose-response data for the relationship between lung cancer and cigarette use have been fitted (25) to a linear-quadratic assumption (essentially linear at low doses) as well as to a best-fit straight line. Further examination of these data indicates that they can be judged to fit a log-probit assumption even better than either of the other two. The uncertainties are such that there is plenty of leeway to support a claim for a fit to almost any hypothesis. Risk estimates made in the face of unverifiable assumptions therefore should be interpreted with caution, and conclusions should be reported with frank caveats.
Health protection agencies, scientific advisory boards, and panels of experts often favor the simple linear-no-threshold hypothesis for estimating the risk of developing excess cancers. That assumption, by whatever name, has been implicitly affirmed in many reported risk estimates. Part of the rationalization for the linear assumption is that a no-effect threshold has not been shown experimentally. A more significant reason given to support the no-threshold hypothesis is that it is prudent to err on the side of caution; hence, a conclusion based on a scientifically unreliable extrapolation nevertheless deserves support because it is conservative. Still another rationalization is that the conclusions are drawn from reasoned argument and good scientific judgment.
A more objective approach would be to consider the science as realistically as possible: to obtain
estimates of risk along with an estimate of the uncertainty in those estimates and then to include a carefully reasoned and explicitly stated safety factor such as 2, 5, 10, or 100. With such an approach, both the science and the safety factors would be clearly and separately visible, and each could be judged on its own merits.
Even though the linear-no-threshold assumption cannot be proved, scientists make extrapolations over wide ranges with this general kind of assumption to produce quantitative estimates (6, 7) of risk from low-dose exposures. A common attitude seems to be "We are dealing with an important question and therefore we must furnish answers." Scientists are often uncomfortable admitting that science does not or cannot always provide quantitative answers. They imply or state that extrapolations to zero have a sound technical basis and thereby reinforce this misinformation. With respect to excess cancers, many scientists not only appear to favor the linear-no-threshold assumption but also speak and write as though it had a scientific basis. Some scientists, and certainly much of the public, believe in the validity of the results of such extrapolations. Understandably, news media reinforce the misinformation. Thus, when speculating about the possible adverse effects of exposures to tiny amounts of toxic materials, some scientists manipulate numbers to lend an aura of credibility and integrity to their comments.
Questions about low-exposure effects that are far outside the observable range certainly can be asked in the language of science. Many questions can be formulated in technical language that scientists cannot answer even with more research. The question of possible health effects of insignificant amounts of substances is beyond the realm of science to confirm now or in the future. Extrapolations should be made by using realistic assumptions, but risk estimates that go beyond a modest extrapolation and are given in quantitative terms are pseudoscience.

Science and conservatism
Risks should be neither overblown nor played down. The question is how regulations should be set with respect to the analytical detection limit. In the past, regulation was easier; for many materials, the regulatory limit was set at the analytical detection limit. If something was not detectable, it presumably was not there, and the threat was zero.
The practice, conscious or otherwise, of being wedded to the detection limit seems to continue for some materials. For example, the Times Beach and 2,3,7,8-tetrachlorodibenzo-p-dioxin affair arose when the detection limit for that dioxin was ~1 ppb. The regulatory limit for Times Beach was set (26) at 1 ppb. The relationship between a 1-ppb exposure limit and the level of human threat is not obvious. Does 1 ppb fail to protect at an adequate level, is it overblown, or is it just right? Later the detection limit was lowered to a few tens of parts per trillion. When dioxin was found at concentrations of typically 10-20 pptr in many products (such as fish), the regulatory limit (presumably for other reasons) was then set in that range. According to Brookes (24), "The trouble is that both Congress and the EPA have powerful political incentives to run away from any alleged danger."
Because detection limits are already low (and getting lower), and because detection limits have no fundamental relationship to toxic levels of threat, regulations and detection limits must be consciously divorced. If modern analytical sensitivities demonstrate regulatory violations, no matter how irrelevant these violations are to human welfare, public dread increases, and the public's confidence in governing bodies is destroyed. When regulatory limits track detection limits, the effects are counterproductive because resources may be squandered on trivial possible risks rather than being used to reduce more substantial ones. Moreover, as Wildavsky has pointed out (27), minimizing the risk for a particular situation is not necessarily an activity without risk itself.
A few decades ago, only moderate extrapolations were needed to go from the observable-effect level of a substance to the detection limit for that substance. For example, in 1960, when the toxicity and detection limits for mercury were both in the part-per-million range, only modest extrapolations were needed to go from the observable toxicity limit to the detection limit. As detectability improves, going from the observable dysfunctions to the detection limit requires greater extrapolations. Extrapolations of about 6 orders of magnitude are typically required to reach current detection limits from the observed-effect levels.
One-in-a-million cancer risks
Kelly (28) recently examined "the myth of 10⁻⁶ as a definition of acceptable risk" and stated that there is no sound scientific, social, economic, or other basis for selecting the one-in-a-million criterion of acceptable risk. This number appears to have been pulled out of a hat and evidently is intended to symbolize a lifetime risk of essentially zero. A one-in-a-million involuntary level of risk is assumed to be acceptable to the public. The EPA appears to be the leader in adopting a policy of making extrapolations to risk values of one in a million (0.0001%). EPA spokesmen have stated (21) unequivocally: "We would defend our approach as good science and good public policy." Science certainly has a role in risk estimation, and that role should be clarified. The EPA understandably wishes to enjoy public confidence in its regulatory policies and wants to appear accurate in its conclusions. Bad science, however, should not be used to meet objectives concerning good public policy. Nichols (29) questions the wisdom of the EPA in the matter of one-in-a-million hypothetical risks: "Everything in life is full of one-in-a-hundred risks. If the EPA is spending all its time trying to protect the public against one-in-a-million hypothetical risks, then it's spending its time on trivia."
Risk estimates of one excess cancer death in a million require extrapolations over many orders of magnitude. If the quality of the information were high enough that extrapolations comparable to the log-probit type could be performed, even these log-probit extrapolations would have to be carried into the realm of pseudoscience to reach the one-in-a-million risk level. If quantitative risk estimates for excess cancers resulting from low doses are based on research findings at the molecular-mechanism level or are derived from high-dose rodent data, the extrapolations are largely deceptive quantification. As Abelson (30) points out, "Are humans to be regarded as behaving biochemically like huge, obese, inbred cancer-prone rodents? Sooner or later Congress must recognize a new flood of scientific information that renders suspect the Delaney clause and procedures for determining carcinogenicity of substances."

Linear-no-threshold vs. log-probit extrapolations
The results of extended extrapolations can be compared for the data shown in Figure 2. Figure 4 depicts two kinds of extrapolations. In Figure 4a the linear-no-threshold extrapolation is shown as a short straight line to zero from the lowest point on the sigmoid curve. From the direct proportionality between the lowest point on the curve and zero, the calculated response of 0.0001% (one in a million) corresponds to a dose of ~0.0001 mg/kg. The extrapolation can also be performed from the straight-line log-probit graph (Figure 4b).
Figure 4. Comparison of linear-no-threshold and log-probit extrapolations. (a) Extrapolation from the lowest point of Figure 2 to zero, shown by the dotted straight line. (b) First segment of the straight-line extension of the log-probit plot of Figure 3 to tiny doses.
Such an extrapolation (again with the conservative no-threshold assumption) to the same dose of 0.0001 mg/kg yields a response estimate that is astronomically smaller than the 0.0001% obtained with the linear-no-threshold extrapolation: smaller by a factor of at least 10²⁰ to 10⁴⁰, that is, by 20 to 40 orders of magnitude. The purist could claim that a level of threat persists because even the estimated response value is never zero. Beyond the tiniest direct-proportionality extrapolations, the resulting safety factor explodes in comparison with log-probit values. Forget about the one-in-a-million risks: even a small linear-no-threshold extrapolation (Figure 4) to give a one-in-a-hundred (i.e., 1%) risk appears to overestimate the risk by a huge factor (>100,000).
These comparisons illustrate the importance of making sound scientific judgments from reasonable extensions of observational data and defining prudent safety factors. Huge safety factors would not seem to be reasonable under any circumstances, but nonetheless they should be clearly visible and not concealed by a dubious extrapolation procedure. Although the comparisons made in this section are for the specific example of Figure 2, the realistically harsh conclusion concerning linear-no-threshold extrapolations has broader validity because dose-response curves are generally sigmoidal. Although the slope of the log-probit straight line would be expected to vary for other response systems, the main thrust of the conclusion concerning the invalidity of linear-no-threshold extrapolations would not be expected to be profoundly affected.
It seems reasonable to consider the science associated with cancer risk estimates at three clearly defined levels: the direct observational data, reasonable extensions of the data, and further extensions into speculative hypotheses. Without experimental data that clearly show direct proportionality between cancer incidence and low-dose exposures, linear-no-threshold extrapolations are not credible for risk estimates because even modest extrapolations quickly lead to the inclusion of unreasonable safety factors. Such one-in-a-million or even one-in-a-thousand estimates should not be considered credible. More realistic models should be used, and they should clearly separate the two functions of making common-sense modest scientific extrapolations and incorporating cautious conservatism. Devising valid models will not be easy, but for the sake of the reputation of science, sound and explicitly stated assumptions, as well as the limitations of those assumptions, are essential. Regulatory limits may best be prescribed in terms of dose without making a questionable conversion to risk, including a possibly huge and unknown safety factor. At the same time, dose selection is a judgment call, and the selected dose should not be accompanied by a claim that it is scientific.
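To make the size of that disparity concrete, the sketch below compares the two extrapolations at a single low dose. The slope, intercept, and lowest observed point are the hypothetical values carried over from the earlier illustrative log-probit sketch, not the actual data of Figure 2, so the printed ratio is illustrative only.

```python
# Compare linear-no-threshold and log-probit extrapolations at one low dose.
# All numbers are hypothetical, carried over from the earlier illustrative fit.
import numpy as np
from scipy.stats import norm

slope, intercept = 4.8, -5.2              # assumed log-probit line (probit vs. log10 dose)
lowest_dose, lowest_response = 4.0, 0.01  # lowest observed point: 1% response at 4 mg/kg
dose = 1.0e-4                             # mg/kg, the low dose of interest

# Linear-no-threshold: response assumed proportional to dose, through the origin.
linear_estimate = lowest_response * dose / lowest_dose

# Log-probit: extend the fitted straight line in (log10 dose, probit) coordinates.
log_probit_estimate = norm.cdf(slope * np.log10(dose) + intercept)

print(f"linear-no-threshold estimate: {linear_estimate:.1e}")
print(f"log-probit estimate:          {log_probit_estimate:.1e}")
print(f"linear/log-probit ratio:      {linear_estimate / log_probit_estimate:.1e}")
```

With these assumed numbers the two estimates differ by well over a hundred orders of magnitude; the exact figure is meaningless, but the comparison shows how completely the choice of extrapolation model, rather than the data, determines a low-dose risk number.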
Limitations of science
Limitations to reaching definitive conclusions concerning risk estimates can be grouped under the following five headings.
Human epidemiological database. Only rarely are the available observational data for doses and responses complete enough to reveal high-to-low response sensitivities and to define a detailed dose-response curve. The paucity of data presents a problem that is unlikely to be resolved because humans cannot always be used as experimental subjects. However, available information for humans should be given precedence over other sources of information when assessing human risks.
Animal epidemiological database. Animal toxicology studies include too few examples of information that is complete enough to define the detailed dose-response curve. Usually, the crucial information is from experiments at high exposure levels. In other words, the data are likely to be from near the top of a sigmoidal response curve.
Modeling hypotheses. At one extreme, the dose-response information would be complete enough to define a dose-response curve. Moderate extensions of the data to low doses could be carried out directly. There would be little need to dwell on the merits of a model for extrapolation. For modest extensions, the risk or response estimates obtained from such extrapolations would have a high level of scientific credibility. The precision of the slope of the straight line would be expected to improve with increased amounts of data. With the log-probit model, the slope of a straight line could be chosen with only two data points, but the result would carry a higher level of uncertainty. If there were but a single data point, the slope would have to be inferred from other dose-response systems, with still more uncertainty.
The less that is known about the
dose-response relation, the greater the tendency to fill the gaps in knowledge with assumptions and the greater the inclination to use a model on which to base subjective decisions. When the response data are limited and are primarily at lower doses, terms such as nonlinear, quadratic, and sublinear are useful in describing the initial portions of response curves. Low-dose risk estimates then depend strongly on the validity of the model. Extrapolations involving direct proportionality with a no-threshold assumption include huge unknown safety factors that overwhelm the information concerning response. In terms of validity, most other models probably lie between the extremes of the log-probit and linear-no-threshold models. In general, speculative model hypotheses concerning the risks of excess cancer deaths at doses below those that produce observable effects cannot be validated. However, once a number is calculated, no matter how spurious, it often serves as a platform for argument and ratcheting.
Extrapolations. Extrapolations are always dangerous, and extrapolation of information from another species to humans is an especially serious limitation (30). Another limitation relates to how far scientific validity can be assumed to apply when an extrapolation is more than moderate. The possibility of a threshold below which there is no effect must be considered a reasonable assumption, even though a zero response can never be proved.
Communication. This is a different kind of limitation. Analytical chemists, and scientists in general, often fail to transfer their information into the wider sphere. Rarely do they state that the actual risks may be the unprovable zero, in the range between a numerical upper-bound estimate and zero, or between negligible and zero. Risk estimates at low doses should include caveats that indicate the limit to which quantitative estimates can be made and where qualitative judgment takes over.
An overall problem is the possible or likely adverse effects resulting from exposure to harmful substances at levels below which the effects are observable. Analytical chemists must help to define the limits to which quantitative conclusions can be drawn, provide estimates of the uncertainty levels for the conclusions, and provide informed commentary on reasonable safety factors.
Scientists are better equipped than most people to indicate when the limitations of science apply and when speculations, hypotheses, consensus, or reasoned arguments take over. None of the latter estimation methods can be deemed scientific, and thus they cannot provide quantitative (e.g., one-in-a-million) risk estimates. Finkel (22) has stated, "The debate over whether risk numbers are credible has begun to resolve itself. . . . These (risk) numbers are systematically skewed in the direction of overestimating risk – so overly conservative as to be a caricature of itself."
Most scientists conducting research on risk assessment are no doubt aware of the scientific limitations of attempting to move into areas that involve not only science but also social and political values. The media may sometimes misuse and distort judgments that have scientific aspects. However, the media cannot be blamed too much in view of statements made by reputable scientists concerning quantitative risk estimates that may result from unscientific extrapolations.
Summary
All substances are toxic at some concentration, and toxicities vary enormously. Because individual sensitivities vary, the observable portions of dose-response toxicity curves are expected to be sigmoidal in shape. For trace exposures, a zero toxic effect can never be proved. Uncertainties concerning exposure levels and lack of knowledge about excess cancer responses make it difficult to obtain sound scientific low-exposure risk estimates. Committees of experts and health protection agencies commonly assume direct proportionality between dose and response with the rationalization that it is a straightforward and conservative assumption.
Extrapolations that are made to obtain estimates of low-dose risks should be based on sound assumptions. The technical database used as the foundation for low-dose extrapolations should be clearly specified. Even the most credible of extrapolations to lower doses should be restricted to ~1 order of magnitude outside the observable range, beyond which quantitative dose-to-risk conversions should not be attempted. Prudent safety factors should be included, but their presence should be stated explicitly and should be independent of the scientific estimates.
A realistic hypothesis may be that as doses are lowered, a threshold may be reached. Neither the assumption of adverse effects, such as cancer, that continue at all levels down to zero nor the assumption of a threshold below which there is no such effect can be scientifically proved. The consequence of exposures to negligible doses may well be zero, but this result cannot be proved. In the qualitative sense, using words such as "insignificant," "not worth considering," or "immeasurably low" becomes appropriate even though the public expects black and white numerical answers.
When uncertainties are sizable, no hypothesis can be considered uniquely verified. Obtaining compelling observational information that would confirm a linear dose-response relation is improbable. Thus scientific credibility should not routinely be given to even modest direct proportionality extrapolations for estimating low-dose risks. Simple linear-no-threshold extrapolations of dose-response information to generate quantitative risk estimates of even one in a thousand are pseudoscience and represent numerical rhetoric.
The advice, suggestions, and encouragement of the late L. B. Rogers are especially appreciated. The advice and suggestions of many other colleagues in reviewing drafts of the material are also appreciated. The following individuals offered comments, which may or may not be reflected in the final manuscript: M. A. Armour, J. E. Bertie, F. Cantwell, C. Carr, R. Coutts, P. Harris, S. Hrudey, B. Kratochvil, J. MacGregor, B. Mitchell, J. Shortreed, K. J. Simpson, K. Simpson, and R. Uffen.
References
(1) Dovichi, N. J. et al. Anal. Chem. 1991, 63, 2835-41.
(2) Mendelsohn, R. Am. Sci. 1991, 79, 178 (March-April).
(3) Chem. Eng. News 1991, March 4, p. 14.
(4) Rawson Academy of Aquatic Science, Ottawa, Ontario, Canada; personal communication, June 1, 1990.
(5) Atomic Energy Control Board; Reporter 1991 (spring issue).
(6) Mossman, B. T.; Bignon, J.; Corn, M.; Seaton, A.; Gee, J. B. L. Science 1990, 247, 294-300.
(7) Upton, A. C. Sci. Am. 1982, 246, 41-49.
(8) Hiremath, C.; Bayliss, D.; Bayard, S. Chemosphere 1986, 15, 1815-23.
(9) Appel, K. E.; Hildebrandt, A. G.; Lingk, W.; Kunz, H. W. Chemosphere 1986, 15, 1825-34.
(10) Gough, M. Resources for the Future 1988 (summer issue), 2-5.
(11) Bertazzi, M. et al. Am. J. Epidemiol. 1989, 129, 1187.
(12) Chem. Eng. News 1991, Oct. 28, p. 6.
(13) Chem. Eng. News 1991, Aug. 12, p. 8.
(14) Schneider, K. Sunday Oregonian 1991, Aug. 28, p. A5.
(15) Olive, D. Toronto Globe and Mail 1991, Nov. 23. (See also Thompson, D. "The Danger of Doomsaying"; Time, March 9, 1992; p. 50.)
(16) Benarde, M. A. Chem. Eng. News 1989, Dec. 11, pp. 47-48.
(17) Koshland, D. E. Science 1990, 249, 1357.
(18) Ames, B. N.; Gold, L. S. Science 1990, 250, 1645.
(19) Tallarida, R. J.; Jacob, L. S. The Dose-Response Relation in Pharmacology; Springer: New York, 1979; p. 108.
(20) White, M. C.; Infante, P. F.; Chu, K. C. Risk Analysis 1982, 2, 195-204.
(21) Chem. Eng. News Forum 1991, Jan. 7, 27-55.
(22) Finkel, A. M. Resources for the Future 1989 (summer issue), 11-13.
(23) Freeman, A. M.; Portney, P. R. Resources for the Future 1989 (spring issue), 1-4.
(24) Brookes, W. T. Forbes 1990, April 30, 151-72.
(25) Day, N. E. In Toxicological Risk Assessment; Clayson, D. B.; Krewski, D.; Munro, I., Eds.; CRC Press: Boca Raton, FL, 1985; p. 7.
(26) Taylor, J. K., NIST, personal communication.
(27) Wildavsky, A. Am. Sci. 1979, 67, 32-37.
(28) Kelly, K. E. Presented at the Air and Waste Management Association's 84th Annual Meeting, Vancouver, B.C., Canada, June 1991; Presentation no. 91175.4.
(29) Nichols, A. B. Water Environment and Technology 1991, May, 57-71.
(30) Abelson, P. Science 1992, 255, 141.
W. E. Harris obtained his Ph.D. under the direction of I. M. Kolthoff at the University of Minnesota. Now a Professor Emeritus, he taught at the University of Alberta for more than 40 years. After retiring, he continued his service to the university as Chairman of the President's Advisory Committee on Campus Reviews. He has published approximately 100 papers and has authored five books, including the second edition of Chemical Analysis with H. A. Laitinen. As a consultant to Alberta Environment, he was involved in the successful siting of the Alberta hazardous waste facility.