Critical Review

Biomarkers: Coming of Age for Environmental Health and Risk Assessment

ANTHONY P. DECAPRIO*

ChemRisk Division, McLaren/Hart, Inc., 28 Madison Avenue Extension, Albany, New York 12203

* Corresponding author telephone: (518)869-6192; e-mail: [email protected].

Biological markers, or biomarkers, reflect molecular and cellular alterations that occur along the temporal and mechanistic pathways connecting exposure to toxic chemicals or physical agents and the presence or risk of clinical disease. Biomarkers include a vast array of measurements that reflect exposure, effect, and/or susceptibility. The development and validation of potential biomarkers is a long-term endeavor that proceeds from basic research to pilot human studies to full-scale epidemiological investigations. The past decade has seen extensive research on biomarkers and the beginnings of their practical application for risk assessment and environmental health management. These potential applications include improved exposure characterization and dose-response assessment, measurement of interindividual variability, and evaluation of causation in toxic tort and environmental litigation. This paper reviews the biomarker paradigm as applied to xenobiotic exposure in humans, discusses progress toward applying biomarker technology in environmental epidemiology, and summarizes recent biomarker data for three important environmental and occupational agents: benzo[a]pyrene, 1,3-butadiene, and acrylamide.

Introduction

The past decade has witnessed a dramatic increase in the level of research activity, derivation of theoretical constructs, and development of practical applications for the direct measurement of biological events or responses that result from human exposure to xenobiotics (1-5). These measurements, conveniently grouped under the descriptor “biological markers” or “biomarkers”, reflect molecular and/or cellular alterations that occur along the temporal and mechanistic pathways connecting ambient exposure to a toxicant and eventual disease. As such, an almost limitless array of biomarkers is theoretically available for assessment, and only a minute fraction of these has been recognized and investigated to date. Some phenomena that can technically be classified as biomarkers of chemical exposure (e.g., hematological changes accompanying high levels of exposure to lead or benzene, acetylcholinesterase inhibition by organophosphates) have been measured for decades. However, the recent surge of interest in this field has been driven by technical advances in analytical and molecular genetic
techniques and by the recognition that “classical” toxicology and epidemiology may not, alone, be able to solve critical questions regarding causation of environmentally induced disease. Some of the many recognized potential uses of biomarkers in various environmentally related fields are illustrated in Figure 1. Biomarkers are an important component of the emerging discipline of molecular epidemiology, which seeks to expand the capabilities and overcome the limitations of classical epidemiology by incorporating biological measurements collected in exposed humans (6-8). Early efforts to use biomarkers for quantitative estimation of exposure and prediction of human cancer risk were made by Ehrenberg and Osterman-Golkar (9-11). Using ethylene oxide as a model xenobiotic, these investigators explored the use of macromolecular reaction products (i.e., hemoglobin adducts) as internal dosimeters. By employing hemoglobin adduction data, they predicted the level of ambient ethylene oxide that would correspond to a tumorigenic dose of γ-radiation, which they termed the “rad-equivalent dose”. Seminal work in the area of biomarkers as applied to the molecular epidemiology of cancer was performed by Perera and Weinstein (12), who proposed the use of such techniques in identifying environmental contributors to human cancer incidence. Important early applications of biomarkers for characterizing environmental and occupational exposure were also explored by several other groups (13-23).

The original U.S. National Academy of Sciences (NAS) framework for quantitative risk assessment (QRA) and its use in regulatory decision-making did not specifically address the issue of biomarkers (24). Similarly, biomarkers were not recognized by the U.S. EPA as potentially important tools for QRA in its 1986 carcinogen risk assessment guidelines (25). The first comprehensive discussion of biomarkers in this context was made by Hattis, who described their value in dose-response characterization, estimation of internal dose, interspecies extrapolation, and assessment of interindividual variability (26). In 1987, the National Research Council (NRC) of the NAS issued a report examining the current state of the science for the use of biomarkers in environmental health research and risk assessment (27). The NRC report discussed definitions, validation of biomarkers, and ethical issues, in addition to presenting the now well-known conceptual paradigm shown in Figure 2. This model has since been expanded and/or modified in numerous articles (1, 5, 28-35). Following the 1987 NRC report, U.S. regulatory agencies became more actively involved in biomarker research, and reports addressing agency-specific efforts were published by the EPA and the Agency for Toxic Substances and Disease Registry (36, 37).


The NAS subsequently issued several reports detailing the use of biomarkers for elucidating exposure and effects in specific organ systems (38-40), and biomarkers figured prominently in the 1992 EPA guidelines for exposure assessment (41). Their increasing importance in environmental policy is further indicated by the explicit inclusion of the biomarker concept in the 1996 draft of the proposed EPA Carcinogen Risk Assessment Guidelines (42). Although a formal regulatory mandate does not yet exist for incorporating biomarker data into QRA, the confluence of recent scientific advances and regulatory agency acceptance indicates that biomarkers will evolve into important components of this process.

This paper reviews the biomarker paradigm as applied to human exposure to xenobiotics, the potential applications of biomarkers in risk assessment and environmental health management, and the biomarker database for three important xenobiotics (benzo[a]pyrene, 1,3-butadiene, and acrylamide). Although aspects of all three biomarker categories will be addressed, the use of exposure biomarkers in molecular epidemiology and risk assessment will be emphasized. The reader is referred to other reviews for additional discussion of methodology, statistics, validation, and related considerations in the use of biological markers (1, 4, 30, 31, 43-52).

FIGURE 1. Potential uses of biomarker data in various environmental and health assessment disciplines. The solid box indicates the area where major applications of biomarker data have been made, dashed boxes indicate preliminary and research-based efforts, and dotted boxes illustrate disciplines where efforts remain primarily conceptual.

Biomarker Paradigm


For all manifestations of xenobiotic-induced toxicity, a series or “cascade” of events must occur between ambient exposure and the observation of clinical disease. While this concept is most clear in the field of carcinogenesis, it applies equally to non-cancer end points such as neurotoxicity and immunotoxicity. Prior to the last two decades, classical toxicological methods were not sufficiently sensitive to identify and characterize these intermediate events; they were instead considered part of the “black box” (Figure 2) linking exposure and disease (53, 54). Consequently, they could not be directly exploited for predicting risk associated with exposure or for identification of potential toxicity. Taken together, these intermediate events (proceeding from left to right in the biomarker paradigm) constitute the toxicokinetics, toxicodynamics, and mechanism of action for a given xenobiotic under specified exposure conditions. The NRC construct divided this sequence of events into several discrete, temporally linked stages: internal dose, biologically effective dose, early biological effect, and altered structure/function (27). These stages were grouped into two classes, with the first two parameters representing biomarkers of exposure and the latter two representing those of effect. These assignments are not mutually exclusive, and the distinctions between adjacent stages are frequently blurred. The rate of progression between adjacent stages is governed by kinetic factors, both stochastic and deterministic. The third class of biomarker, biomarkers of susceptibility, reflects these kinetics and applies to all transition steps in the sequence (Figure 2).

The 1987 NRC report emphasized that, in actuality, the link between exposure and disease is likely to be a continuum rather than a series of distinct events. Other workers have since argued that although the NRC paradigm is a useful conceptual framework for biomarker research and application, causal pathways and links are likely to be substantially more complex (5, 28).

FIGURE 2. Classic epidemiological “black box” model linking exposure with disease (top) and the NRC biomarker paradigm that expands the black box to reveal discrete measurable stages in the exposure/disease continuum (bottom). Adapted from ref 27.

For purposes of molecular epidemiology and risk assessment, all three classes of biomarkers are relevant. Exposure biomarkers are employed to measure actual absorbed dose (internal dose) and the extent of delivery of xenobiotic (or active metabolite) to the putative target site (biologically effective dose). Such measurements are superior to questionnaire data and ambient monitoring for dose reconstruction in individuals. Effect biomarkers measure early biochemical or cellular responses in target or non-target tissue (early biological effect), frank structural or functional changes in affected cells or tissues (altered structure/function), or actual clinical disease. Ideally, these data facilitate assignment of subjects to affected or unaffected groups and allow estimation of disease progression. Susceptibility biomarkers reveal individuals with genetically (or otherwise) mediated predisposition to xenobiotic-induced toxicity. As discussed later, all of these measurements have the potential to improve the accuracy, reliability, and scientific basis for quantitative risk assessment.
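The staged structure of the paradigm invites a simple quantitative reading. As a purely illustrative sketch (the stage names follow the NRC construct, but the linear first-order chain and every rate constant below are hypothetical choices made for this example, not a validated model), the continuum can be simulated as mass flowing through successive compartments, with a susceptibility factor acting as a multiplier on a transition rate rather than as a stage of its own:

```python
# Minimal sketch of the NRC staged paradigm as a linear kinetic chain.
# Stage names follow the NRC (1987) construct; all rate constants are
# hypothetical and chosen only to illustrate the concept.

STAGES = ["exposure", "internal dose", "biologically effective dose",
          "early biological effect", "altered structure/function", "disease"]

# First-order transition rates (per day) between adjacent stages.
base_rates = [0.5, 0.2, 0.05, 0.01, 0.002]

def progression(rates, t, dt=0.1):
    """Euler integration of a linear chain: d s_i/dt = k_{i-1} s_{i-1} - k_i s_i."""
    s = [1.0] + [0.0] * len(rates)          # all mass starts at "exposure"
    for _ in range(int(t / dt)):
        flux = [k * s[i] for i, k in enumerate(rates)]
        s[0] -= flux[0] * dt
        for i in range(1, len(s) - 1):
            s[i] += (flux[i - 1] - flux[i]) * dt
        s[-1] += flux[-1] * dt
    return dict(zip(STAGES, s))

# A susceptibility biomarker (e.g., a bioactivating-enzyme polymorphism)
# is modeled as a multiplier on one transition rate, not as a new stage.
susceptible = base_rates[:]
susceptible[1] *= 3.0                       # hypothetical 3-fold bioactivation

print(progression(base_rates, t=365)["disease"])
print(progression(susceptible, t=365)["disease"])
```

Run for a year of simulated time, the “susceptible” parameter set delivers measurably more mass to the disease compartment, mirroring the role assigned to susceptibility biomarkers in Figure 2.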

Biomarkers of Exposure

As presented in the original NRC report, biomarkers of internal dose indicate the absorbed fraction of a xenobiotic, i.e., the amount of material that has successfully crossed physiological barriers to enter the organism. Consequently, they reflect bioavailability and are influenced by numerous parameters such as route of exposure, physiological characteristics of the receptor, and chemical characteristics of the xenobiotic. Although technically considered “biomonitoring”, simple measurement of xenobiotic levels in biological media (blood, tissue, urine) can provide data on internal dose (51). A more sophisticated and relevant biomarker of internal dose (in terms of proximity to downstream events in the sequence) is the measurement of a metabolite in selected biological media, particularly if the metabolite is active or critical to the toxic effects seen.

Macromolecular Adducts. Very useful exposure biomarkers for reactive xenobiotics or their activated (i.e., electrophilic) metabolites are macromolecular reaction products. Substantial research effort has been devoted to the use of protein and DNA adducts as molecular dosimeters (45, 55-57). Over two decades ago, Ehrenberg and co-workers first proposed using hemoglobin (Hb) adducts to monitor the internal dose of alkenes and epoxides such as ethylene oxide (9). This methodology has since evolved into a widely used and highly sensitive technique for quantitating N-terminal Hb adducts of a variety of xenobiotic metabolites in human blood (58-60). Hb adducts have been employed as internal exposure biomarkers for aromatic amines, nitrosamines, polycyclic aromatic hydrocarbons (PAHs), and other compounds (61-70). An advantage in using Hb as a dosimeter is its relatively extended physiological lifetime. Once synthesized and incorporated into red blood cells (RBCs), Hb is only removed as senescent erythrocytes are cleared from the circulation (after about 120 days in man) (71). Consequently, Hb can act as a cumulative dosimeter for reactive xenobiotics, and chronic exposure conditions will result in a steady-state level of adducts dependent upon dose level (72-74). In addition, Hb adducts are not repaired, although a decreased lifespan for RBCs containing adducted Hb has occasionally been reported (20, 75).
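The cumulative-dosimeter behavior described above can be made concrete with a small calculation. The sketch below assumes a constant adduct formation rate and strictly age-based RBC removal at 120 days (so the circulating population is uniform in age); under those assumptions the mean adduct level climbs for one RBC lifespan and then plateaus. The formation rate used is an arbitrary placeholder:

```python
# Sketch: hemoglobin adducts as a cumulative dosimeter under chronic exposure.
# Assumes a constant adduct formation rate k (adduct units per day) and strict
# removal of RBCs at a fixed lifespan T = 120 days, so the circulating RBC
# population is uniformly distributed in age. Parameter values are illustrative.

T = 120.0            # human erythrocyte lifespan, days
k = 1.0              # hypothetical adduct formation rate, units/day

def hb_adduct_level(t, k=k, T=T):
    """Mean adduct level after t days of constant exposure.

    A cell of age a has accumulated adducts for min(a, t) days; averaging
    over the uniform age distribution (0..T) gives a rise to a k*T/2 plateau.
    """
    if t >= T:
        return k * T / 2.0                  # steady state
    # cells younger than t have been exposed their whole age;
    # older cells only since the exposure began
    return k * (t * t / 2.0 + (T - t) * t) / T

for day in (30, 60, 120, 240):
    print(day, round(hb_adduct_level(day), 1))
```

This rise-to-plateau behavior is why, under chronic exposure, a single Hb adduct measurement can in principle be read back to an average daily dose, as the steady-state references above discuss.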

Due to the large amounts of Hb available in human blood specimens, a high degree of sensitivity can often be achieved in these assays. While Hb has many advantages as an internal dosimeter, one important disadvantage is the requirement that the reactive xenobiotic have a sufficient half-life and appropriate physicochemical characteristics to cross the RBC membrane. In contrast, serum albumin (SA) is synthesized in hepatocytes and released directly to the circulation. SA adducts have been employed as biomarkers of exposure to short-lived metabolites such as those produced from aflatoxin metabolism (76-79). Unfortunately, the substantially shorter lifetime of SA (∼20 days in humans) limits the useful temporal range of this technique. Other proteins, including histones and collagen, have also been proposed for use as substrates to monitor in vivo exposure to reactive chemicals (69, 80).

Covalent DNA adducts are also employed as exposure biomarkers (14, 56, 70, 81, 82). As with protein, DNA adduction is limited to reactive agents (83, 84). Lymphocyte DNA is typically utilized as the macromolecular substrate, and numerous techniques are available for DNA adduct detection and quantitation. One of these, ³²P-postlabeling, is exquisitely sensitive and can detect one adduct in 10¹⁰ bases, although qualitative characterization is not yet possible with this method (85). Other methods, such as immunoassay, fluorescence line narrowing spectroscopy, and GC-MS analysis, are also quite sensitive and have been employed in human molecular epidemiological investigations (4, 86-88). Unlike protein, adducted DNA can undergo various degrees of repair over a relatively short time frame following reaction (84, 89). Despite this phenomenon, a certain fraction of adducts will generally persist, sometimes for long periods of time. In addition, replication errors caused by the presence of adducts can result in fixation of DNA damage in the form of a mutation. As discussed below, such mutations are considered biomarkers of either exposure or effect. Detection and characterization of adducted DNA bases excreted in urine has also become an important dosimetric technique (70, 90, 91), as has measurement of covalent protein-DNA cross-links induced by certain reactive xenobiotics and metals (92, 93).

Protein and DNA adducts can be considered biomarkers of either internal or biologically effective dose, depending upon how closely they are related to actual disease occurrence. The NRC report defined biologically effective dose as dose at the site of action, dose at the receptor site, or dose to target macromolecules (27). This definition is troublesome since, strictly speaking, complete characterization of the molecular site and mechanism of action for a given xenobiotic would be necessary in order to assign a particular measured end point as a marker of biologically effective dose. For example, protein adducts cannot be considered effective dose biomarkers for carcinogens, since they do not satisfy the above criteria. Ambiguity exists even for DNA adducts, since in no reported instance has xenobiotic-induced adduction of a specific base within a particular DNA sequence in a target cell type been unequivocally linked to a specific clinical outcome in humans.
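To put the ³²P-postlabeling detection limit quoted above in perspective, a back-of-the-envelope conversion to adducts per cell is instructive (the genome size is rounded, and the calculation is illustrative rather than a description of any particular assay protocol):

```python
# Back-of-the-envelope: what "1 adduct in 1e10 bases" means per cell.
# The human diploid genome is rounded to ~6e9 bases; values are illustrative.

diploid_bases = 6e9          # approximate bases per human diploid cell
detection_limit = 1 / 1e10   # adducts per base, 32P-postlabeling (ref 85)

adducts_per_cell = diploid_bases * detection_limit
print(f"~{adducts_per_cell:.1f} adducts per cell at the detection limit")
# -> ~0.6: the assay can register adduct burdens below one lesion per cell,
# which is why it is described as exquisitely sensitive.
```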
Despite the uncertainties discussed above, adducts in total lymphocyte DNA are considered appropriate biologically effective dose biomarkers for carcinogens, based upon the postulated mechanism of chemical carcinogenesis and limited experimental data indicating correlations between DNA adducts in lymphocytes and target tissues (1, 82, 87, 89, 94-96). In addition, studies have demonstrated concordance between levels of protein and DNA adducts for various carcinogens, indicating that in some cases protein adducts are appropriate surrogates for measuring DNA adduction (6, 21, 56, 97-99).

Temporal and Toxicokinetic Considerations. Temporal issues strongly influence the design and interpretation of molecular epidemiology studies employing exposure biomarkers (72, 74, 89).


For example, using Hb adducts, it is in theory possible to detect an acute exposure to a reactive xenobiotic that occurred 1 month, but not 4 months, in the past. Measurement of urinary metabolites or albumin adducts would not provide useful dosimetry under either scenario because of their shorter lifetimes. In contrast, continuing chronic (particularly if relatively constant) exposure could be assessed using any of these three markers. Retrospective dose reconstruction therefore requires a battery of exposure indicators with a range of half-lives or life spans in order to provide maximum information about the time frame of the exposure event(s). The most challenging aspect of dose reconstruction involves characterizing acute exposures that occurred, or chronic exposures that ended, more than 120 days in the past. At the present time, only the measurement of persistent DNA adducts and (possibly) adducted DNA bases in urine can provide relevant information in such cases. Thus, there is a great need for readily accessible, long-lived protein targets that can provide dosimetric data beyond the 120-day limit available with Hb.

Internal dose and biologically effective dose are seldom, if ever, equivalent. Toxicokinetics and the physicochemical characteristics of the toxicant govern this relationship and determine what fraction of the absorbed dose reaches the target tissue, the target macromolecule, and ultimately the critical receptor site, as well as the time frame over which these events occur. Physiologically based pharmacokinetic (PBPK) models are of substantial value in predicting toxicokinetics in humans based upon animal data and, when available, in vitro data using human cells or tissues (100-102). Efforts to explicitly incorporate biomarker data into these models are currently underway, with the goal of extending the usefulness of PBPK modeling techniques.
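The temporal arguments in this section can be collapsed into one small table of residual signal versus time since an acute exposure. The decay rules below are deliberately crude approximations chosen for illustration: linear washout of Hb adducts over the 120-day RBC lifespan, roughly first-order SA turnover with a ~20-day lifetime, and a hypothetical 2-day urinary metabolite half-life:

```python
import math

# Residual fraction of initial biomarker signal t days after an acute exposure.
# Decay rules are deliberately simplified illustrations, not validated kinetics.

def hb_fraction(t, lifespan=120.0):
    return max(0.0, 1.0 - t / lifespan)     # linear cohort removal of RBCs

def sa_fraction(t, lifetime=20.0):
    return math.exp(-t / lifetime)          # approximate first-order turnover

def urinary_fraction(t, half_life=2.0):
    return 0.5 ** (t / half_life)           # hypothetical metabolite half-life

for days in (7, 30, 120, 240):
    print(f"day {days:3d}:  Hb {hb_fraction(days):.2f}   "
          f"SA {sa_fraction(days):.3f}   urine {urinary_fraction(days):.2e}")
```

Consistent with the example above, only the Hb adduct retains appreciable signal at 1 month, and none of the three markers is informative at 240 days, which is precisely the regime where persistent DNA adducts (or long-lived protein targets, if found) are needed.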

Biomarkers of Effect

Biomarkers of effect are defined as any change that is qualitatively or quantitatively predictive of health impairment or potential impairment resulting from exposure (27). As these changes become more persistent and/or serious, the marker becomes more relevant for disease prediction (i.e., it shifts to the right in the paradigm) (1, 46). A vast array of measurable biological end points can constitute biomarkers of effect. While these are by nature and definition more predictive of ultimate toxicity, they tend to be less clearly associated with exposure to specific chemical agents. Thus, a given effect biomarker can be highly predictive of hepatotoxicity, for example, yet be associated with a number of different chemical exposures. Examples of biomarkers of early biological effect include sister chromatid exchange (SCE), DNA single-strand breakage (SSB), chromosomal aberration (CA), micronuclei, enzyme induction, and enzyme inhibition. Mutational events can also be regarded as effect biomarkers (103). They can be measured in somatic cells by various means, including hypoxanthine guanine phosphoribosyltransferase (hprt) mutant frequency and mutational spectra determinations and glycophorin A (gpa) assays (104). Oncogene activation is an additional effect biomarker. Further along the cause-effect continuum are biomarkers of altered structure/function, including organ-specific enzymatic changes, tissue hyperplasia, and functional test abnormalities. These markers are considered the immediate precursors of frank clinical disease.

In the case of macromolecular adducts, the distinction between a biomarker of exposure and one of effect is not always straightforward. DNA adduct formation can ultimately result in mutations; thus, this end point is predictive of potential toxicity resulting from exposure. In this situation, DNA adducts would qualify as effect biomarkers. While protein adducts are clearly not direct biomarkers of effect for carcinogens, they may be highly correlated with more relevant markers.


In addition, for non-cancer end points where protein adduction is involved in the molecular mechanism of action, protein adducts may be highly relevant indicators of toxicity. For example, covalent modification of neuronal proteins is believed to be critical to the mechanism of certain reactive neurotoxicants, such as carbon disulfide, acrylamide, and 2,5-hexanedione (105-107). Blood protein adducts formed by these compounds would therefore be useful as either exposure dosimeters or effect biomarkers.
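As a concrete example of the bookkeeping behind one mutational effect biomarker mentioned above, the clonal hprt assay reports a mutant frequency corrected for cloning efficiency. The sketch below uses entirely hypothetical colony counts; plating schemes and selection conditions vary among laboratories:

```python
# Sketch of the hprt mutant-frequency calculation (hypothetical counts).
# Mutant frequency = (mutant colonies per cell plated under 6-thioguanine
# selection) corrected by the cloning efficiency measured without selection.

cells_selected   = 2_000_000   # cells plated with 6-thioguanine
mutant_colonies  = 18          # colonies surviving selection
cells_unselected = 200         # cells plated without selection
colonies_unsel   = 60          # colonies formed without selection

cloning_efficiency = colonies_unsel / cells_unselected          # 0.30
mutant_frequency = (mutant_colonies / cells_selected) / cloning_efficiency
print(f"MF = {mutant_frequency:.1e} per clonable cell")         # 3.0e-05
```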

Biomarkers of Susceptibility

In contrast to the indicators discussed above, biomarkers of susceptibility do not represent stages along the dose-effect continuum; rather, they are conditions that increase the rate of transition between one or more steps (1, 27). For example, differences in Hb adduct levels between individuals with similar ambient exposure to a xenobiotic may be attributed to differences in the activity of a bioactivating enzyme. Alternatively, decreased DNA repair enzyme activity could result in increased mutation rates and enhanced tumor formation. The activities of enzyme systems such as cytochrome P-450 (CYP), glutathione-S-transferase (GST), and N-acetyltransferase (NAT) represent important types of susceptibility markers. Genetic polymorphism in enzymatic activity is a common basis for interindividual differences in toxicity (1). This phenomenon complicates the use of exposure and effect biomarkers in molecular epidemiological studies, since the resultant increase in within-group variability decreases study power and offsets some of the gain in sensitivity over classical epidemiological techniques. However, the assessment of such polymorphisms may ultimately provide intervention strategies to modify disease risk in susceptible individuals (1, 46, 108), a goal that is being aggressively pursued by researchers.
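The power penalty from unrecognized susceptibility polymorphisms can be illustrated with a standard two-sample power calculation (normal approximation). In this sketch the effect size, variances, and sample size are all hypothetical; the point is only that genotype-driven spread within exposure groups inflates the effective standard deviation and erodes power:

```python
from math import sqrt
from statistics import NormalDist

# Two-sample power under a normal approximation, two-sided alpha = 0.05.
# A susceptibility polymorphism that spreads biomarker response within each
# exposure group inflates sigma and erodes power; all numbers hypothetical.

def power(delta, sigma, n_per_group, alpha=0.05):
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)
    ncp = delta / (sigma * sqrt(2.0 / n_per_group))   # noncentrality
    return 1 - z.cdf(z_crit - ncp)

delta = 1.0                    # true exposed-vs-control biomarker difference
print(power(delta, sigma=1.0, n_per_group=25))   # homogeneous groups: ~0.94
print(power(delta, sigma=1.8, n_per_group=25))   # polymorphism-inflated: ~0.50
```

With the same 25 subjects per group, inflating the within-group standard deviation from 1.0 to 1.8 drops the power from roughly 0.94 to roughly 0.50, which is the motivation for stratifying subjects by susceptibility genotype.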

Validation of Biomarkers

Ideally, a biomarker should be biologically relevant, sensitive, and specific (i.e., valid). In addition, it should be readily accessible, inexpensive, and technically feasible. This combination of requirements is rarely achieved, and some tradeoff is inevitable in order to obtain useful biomarker data in a timely manner. The validation process for a biomarker involves determining the relationship between the biological parameter measured and both upstream and downstream events in the continuum, i.e., the dose-response curve must be characterized (5, 28, 47). For example, a Hb adduct considered for use as an exposure biomarker for a xenobiotic should exhibit a predictable relationship to ambient exposure level. In addition, if used as a surrogate for DNA adduction, a reproducible correlation between Hb and DNA adducts must be demonstrated.

Biological relevance refers to the nature of the phenomenon being measured and its mechanistic involvement in the pathway from exposure to disease. For biomarkers of exposure, disease relevance is not as critical a requirement as is a predictable exposure-response relationship; the opposite is true for biomarkers of effect. Sensitivity reflects the ambient exposure level that can be detected by means of the biomarker. Highly sensitive markers are necessary to quantitate the low ambient levels typical of environmental exposures in advanced Western nations. Specificity is the probability that the biomarker is indicative of actual exposure to the specific xenobiotic that it is designed to detect. Certain macromolecular adducts can be derived from exposure to a number of chemical species and are thus less specific than an adduct unique to a single compound. Biomarkers must also be reasonably accessible; invasive sampling procedures are generally unacceptable. Thus, with the exception of occasional tissue biopsies, samples for use in exposure biomarker studies generally consist of blood, urine, milk, or other readily obtainable biological media. Since these are rarely target tissues for toxicological or carcinogenic effects, such studies are almost invariably conducted with surrogate biological materials. Finally, cost and technical feasibility are important considerations in the selection of appropriate biomarkers for applied studies.
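Sensitivity and specificity interact with the background prevalence of exposure in the usual Bayesian fashion, which matters when a biomarker validated in heavily exposed occupational groups is applied to general-population screening. A minimal sketch with hypothetical operating characteristics:

```python
# Positive predictive value of an exposure biomarker via Bayes' rule.
# Operating characteristics and exposure prevalences are hypothetical.

def ppv(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# A marker that looks excellent in an occupational cohort (high prevalence)...
print(ppv(0.95, 0.95, prevalence=0.50))   # ~0.95
# ...yields mostly false positives when ambient exposure is rare.
print(ppv(0.95, 0.95, prevalence=0.01))   # ~0.16
```

The same 95%/95% marker yields a positive predictive value near 0.95 in a half-exposed cohort but only about 0.16 when 1% of the screened population is truly exposed.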

Biomarkers and the Science of Risk Assessment

Application of biomarkers in both qualitative and quantitative aspects of risk assessment has been eagerly anticipated for over a decade, since Hattis first proposed their use in this process (26). Numerous refinements to and expansions of these early discussions have appeared (2, 3, 5, 109-112). As formulated in the original NAS paradigm, risk assessment involves four components: hazard identification, exposure assessment, dose-response assessment, and risk characterization (24). Hazard identification encompasses the qualitative determination that a particular xenobiotic has the potential to cause harm (hazard) at levels encountered in the environment. This determination can be based upon human occupational or epidemiological data but more typically depends upon the literature database from standardized animal toxicity bioassays. Until recently, such studies generally employed end points located at the extreme right of the exposure-disease continuum, i.e., frank disease, tumor formation, or biochemical and physiological measurements indicating gross organ and tissue dysfunction. Since biomarkers measure events along the entire continuum, they offer promise for earlier and more sensitive detection of toxicity in animal bioassays. Even more significantly, sensitive and non-invasive biomarkers of exposure or effect can be employed to assess early indicators of harm in human populations with suspected environmental chemical exposure.

Biomarkers are also expected to shed light on qualitative aspects of the hazard identification phase of risk assessment. Information on molecular mechanisms of action and structure/activity relationships can be invaluable for indicating the potential relevance to humans of certain toxic effects detected in animal species. For example, the presence of α2u-globulin associated with non-covalently bound metabolites of various branched hydrocarbons in the serum of rats exposed to these compounds is a biomarker for renal toxicity (113). However, this marker has not been detected in human serum, indicating that these compounds are unlikely to present a similar hazard in humans. In addition, human biomarker data can play an important role in the regulatory classification of suspected carcinogens. This is evidenced by the recent International Agency for Research on Cancer (IARC) decision to classify ethylene oxide as a Group 1 human carcinogen based in part on biomarker information.

The uses of biomarkers in the exposure assessment segment of QRA are many and obvious, and the development of these applications is currently driving research efforts in the field. The questions potentially addressed in this phase of QRA include whether or not detectable exposure occurred, what the relationship between ambient levels and internal dose is, what portion of the dose reaches target tissues, how persistent the xenobiotic is within the organism, and what the temporal characteristics of the internal exposure are (5, 109). All of these data can assist in decreasing misclassification error in human studies, a significant problem with classical epidemiological methods (114, 115). Highly specific biomarkers can be employed to identify the chemical nature of the exposure, especially under mixed exposure conditions. In addition, technological advances in mass spectrometry-based methods of adduct analysis will eventually result in sensitive, generic methods for exposure biomarker analysis.
Thus, screening of human populations for multiple chemical exposures using single biological samples will become possible. Such technology has the potential to revolutionize epidemiological investigation of environmentally induced disease and the methods employed for QRA. The development of very long-lived exposure (or effect) indicators may someday provide direct evidence of a link between distant past exposure and current disease, thus overcoming the problem of latency in epidemiology and risk assessment.
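The misclassification point above can be quantified with the classical measurement-error attenuation result: nondifferential error in an exposure measure biases a fitted exposure-response slope toward the null by the reliability ratio var(X)/(var(X) + var(error)). The variance figures below are hypothetical stand-ins contrasting questionnaire-based and biomarker-based exposure assessment:

```python
# Classical nondifferential measurement error attenuates a regression slope:
# E[beta_observed] = beta_true * var(X) / (var(X) + var(error)).
# Variance figures are hypothetical illustrations.

def attenuated_slope(beta_true, var_true, var_error):
    reliability = var_true / (var_true + var_error)
    return beta_true * reliability

beta_true = 2.0
print(attenuated_slope(beta_true, var_true=1.0, var_error=1.5))  # noisy questionnaire: 0.8
print(attenuated_slope(beta_true, var_true=1.0, var_error=0.1))  # precise biomarker: ~1.8
```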

Many aspects of dose-response assessment and extrapolation methodology can be improved by the incorporation of biomarker data. One of the most important questions involves animal-to-human extrapolation in QRA. Quantitative differences in the toxic potency of a xenobiotic between species can be marked. This phenomenon complicates the use of animal data in risk assessment and forces the assessor to assume that humans are at least as susceptible as the most sensitive animal species tested and to use “uncertainty factors” to adjust for possible differences. Interspecies variations are typically mediated by differences in metabolism, physiological and anatomic characteristics, and molecular receptor structure. Since many biomarkers directly or indirectly measure such parameters, they can provide information regarding the basis for observed differences. For example, a biomarker of biologically effective dose implicitly accounts for all of the toxicokinetic processes occurring from external exposure to arrival of the xenobiotic (or metabolite) at the target site. Utilizing this measure rather than external dose as input for dose-response models effectively “normalizes” the data, allowing more direct interspecies comparisons. This approach removes the need for uncertainty factors and empirically derived allometric conversions for species-to-species extrapolation.

High-to-low dose extrapolation is the cornerstone of QRA as it is presently conducted. This is because animal carcinogenesis bioassays are, at best, generally capable of detecting no less than a 10% increase (p = 0.1) in tumor incidence over background. In contrast, QRA seeks to predict dose levels associated with tumor incidence increments of, say, one in 1 million (p = 0.000001). Statistical considerations indicate that defining the shape of the dose-response curve at such low response rates would require millions of animal test subjects per experiment. Consequently, data acquired at high dose levels are “extrapolated” by means of mathematical modeling procedures to yield usable information. This extrapolation is complicated by uncertainties associated with the choice of modeling procedure, the underlying biological nature of the response (threshold vs nonthreshold), the validity of animal data usually generated at dose levels where significant toxicity and/or non-linear toxicokinetics are present, and the low “dynamic range” of the typical bioassay (i.e., high dose to low dose ratio generally