Applying Mechanisms of Chemical Toxicity to Predict Drug Safety


Chem. Res. Toxicol. 2007, 20, 344-369

Reviews

Applying Mechanisms of Chemical Toxicity to Predict Drug Safety

F. Peter Guengerich*,† and James S. MacDonald‡

Department of Biochemistry and Center in Molecular Toxicology, Vanderbilt University School of Medicine, Nashville, Tennessee 37232-0146, and Drug Safety and Metabolism, Schering-Plough Research Institute, Kenilworth, New Jersey 07033-0530

Received October 2, 2006

Toxicology can no longer be used only as a science that reacts to problems but must be more proactive in predicting potential human safety issues with new drug candidates. Success in this area must be based on an understanding of the mechanisms of toxicity. This review summarizes and extends some of the concepts of an American Chemical Society ProSpectives meeting on the title subject held in June 2006. One important area is the discernment of the exact nature of the most common problems in drug toxicity. Knowledge of chemical structure alerts and relevant biological pathways is important. Biological activation to reactive products and off-target pharmacology are considered to be major contexts of drug toxicity, although defining exactly what the contributions are is not trivial. Some newer approaches to screening for both have been developed. A goal in predictive toxicology is the use of in vitro methods and database development to make predictions concerning potential modes of toxicity and to stratify drug candidates for further development. Such predictions are desirable for several economic and other reasons but are certainly not routine yet. However, progress has been made using several approaches. Some examples of the application of studies of wide-scale biological responses are now available, with incorporation into development paradigms.

Contents: Introduction, 344; Potential Causes of Drug Toxicity, 346; Contributions of Individual Contexts of Drug Toxicity, 346; Approaches to Predicting Drug Toxicity, 350; Consideration of Structural Alerts, 351; Covalent Binding Assays, 351; Consideration of the Knowledge of Biological Pathways of Toxicity, 353; Off-Target Toxicology, 354; Transcriptomics, 356; Metabolomics, 357; Proteomics, 361; Logical Uses of Systems, 362; Conclusions, 363

Introduction

This review follows an American Chemical Society ProSpectives Series meeting on the subject Applying Mechanisms of Chemical Toxicity to Predict Drug Safety, organized by the authors and held 4-6 June 2006 in Washington, D.C. This is not intended to be a meeting report in the typical sense but rather

* To whom correspondence should be addressed. Tel: (615) 322-2261. Fax: (615) 322-3141. E-mail: [email protected]. † Vanderbilt University. ‡ Schering-Plough Research Institute.

a synopsis of discussions on some of the topics covered there, including others relevant to the overall theme. We have written the article with not only experts in mind but also a more general audience of toxicologists, medicinal chemists, drug metabolism specialists, academic chemists, and others interested in the progress in the field of pharmaceutical discovery and development. We have freely used much of the information presented at this meeting and, rather than citing all individuals' specific contributions, thank all of the speakers collectively (listed alphabetically, with affiliations): Cindy Afshari (Amgen), Eric Blomme (Abbott), Christopher Bradfield (University of Wisconsin), Bruce Car (Bristol-Myers Squibb), Michael Cunningham (National Institute of Environmental Health Sciences (NIEHS1)), David Evans (Johnson and Johnson), Thomas Kensler (Johns Hopkins University), Lois Lehman-McKeeman (Bristol-Myers Squibb), John Leighton (Food and Drug Administration (FDA)), Timothy Macdonald (University of Virginia), Sidney Nelson (University of Washington), Gilbert Omenn (University of Michigan), Timothy Ryan (Eli Lilly), Donald Robertson (Pfizer), Byoung-Joon Song (National Institute of Alcohol Abuse), James Stevens (Eli Lilly), and David Thompson (Pfizer). We also thank the other discussants in the

1 Abbreviations: AhR, aryl hydrocarbon receptor; ALT, alanine aminotransaminase; ARE, antioxidant response element; AST, aspartate transaminase; e-QTL, expression-quantitative trait loci; FDA, Food and Drug Administration (U.S.); HCA, hierarchical cluster algorithm; hERG, human ether-a-go-go related gene; JNK, Jun kinase; MTT, 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide; NIEHS, National Institute of Environmental Health Sciences; NIH, National Institutes of Health; PC, principal component; PCA, principal component analysis; PPAR, peroxisome proliferation activation receptor; TCDD, 2,3,7,8-tetrachlorodibenzo-p-dioxin.

10.1021/tx600260a CCC: $37.00 © 2007 American Chemical Society Published on Web 02/16/2007


Figure 1. The path of drug discovery and development. From ref 1.

Figure 2. Investment escalation per successful compound. From ref 1.

audience for their input as well as the American Chemical Society for sponsoring this timely discussion. The scheme shown in Figure 1 is intended as an introduction to the pharmaceutical industry, for those who are not familiar with the process. A key point is the distinction between the discovery and development phases. These two terms have specific meanings in the pharmaceutical industry, although in recent years more overlap between the two has occurred in most companies. Safety issues occur throughout the process (1). Much has been written about the present state of the pharmaceutical industry (2). The past decade has yielded an explosion of biological information, and chemical methods have

also been considerably improved. However, the number of new drugs reaching the market has not increased (1, 3, 4). The cost of bringing a new drug to market has risen (5), regardless of how one does the accounting (Figure 2), and only 3 of 10 drugs that do reach the market recover the original investment made in them (3). We will discuss the safety assessment of pharmaceuticals in scientific terms, although we must realize that resources are an issue (Figure 2). Clearly, pharmaceutical research has become more competitive and more costly at the same time. The nature of the research has also changed. We have moved from a set of ∼500 potential targets to >5,000, and a world in which newer methods of biology are being applied to target prediction and validation at a rapid pace. The paucity of information about many of the new targets is probably a frequent reason for failure, through lack of efficacy or detrimental off-target effects. There is a general feeling that many of today's disease targets are more difficult than those in the past; for example, animal models for many of the neurobehavioral systems are few and difficult to validate. Another general point is that there is less room for "me-too" drugs and a premium on new-concept or blockbuster drugs because of the economic reality (Figure 2). One can also consider the issue of why drug candidates fail to reach the market. Two decades ago, a major problem was unpredictable metabolism and pharmacokinetics in humans (6). More knowledge about basic and practical aspects of human metabolism/disposition has helped a great deal. In vitro systems and in vitro/in vivo extrapolations have helped considerably, although some cases still require a reasonable amount of effort. Nevertheless, the number of failures due to totally unexpected metabolism (particularly poor pharmacokinetics) today is small

Figure 3. Reasons for the termination of drug candidates in development (2000) (3).


compared to that of a relatively short time ago (3), although the prediction of drug interactions due to metabolism can often still be an issue. Of course, "pro-drug" projects would be expected to continue to have more problems. Advances in our knowledge from research focused in this area have had a significant positive impact in reducing failures due to metabolic issues; it is clear that we have the capability to address these issues with advances in our basic sciences. A main reason why drugs fail today is toxicity, as indicated by the sum of the sections of Figure 3 labeled animal toxicity and human adverse events (3). These percentages are probably similar to those of most pharmaceutical companies today, with some variations. One of the real issues is reducing late-phase attrition and successfully moving more drug candidates to patients by identifying the candidates less likely to have toxicity problems in humans. The goal is not to make perfect predictions because that would probably involve eliminating too many candidates. Even a 2-fold improvement in predicting toxicities early would be a major advance; therefore, the goal is not an impossible one. A need clearly exists to move from a coincident to a predictive mode of toxicology and safety assessment. The same also applies to the selection of which biomarkers to use.

Table 1. Tools, Technologies, and Other Needs Identified to Help Improve Preclinical Toxicity Predictions: An NIH Workshop Summary(a)

Greater understanding of species differences with respect to drug action and drug-induced toxicities to permit the prediction of drug effects in humans
Transgenic mouse models to understand the function of gene targets in specific organs and to permit specific hypothesis testing for individual gene products in target organs, which may help predict how drugs will affect humans
Improved primary cell culture procedures to create stable cell lines for drug metabolism and toxicity testing
Biomarkers of toxicity based on genomic, proteomic, and metabolomic studies to understand toxicity in humans
Improved proteomic platforms to permit the use of mice, dogs, and monkeys to improve predictive toxicity testing in humans
Improved tools to link mRNA levels, protein expression levels, protein activities, and metabolite profiles with chemical scaffolds to permit hypothesis development and testing
Quantitative in vitro assays to identify toxicities in target organs (brain, GI tract, heart, kidney, liver, lung, and other organs)
Methods to determine the concentration of drugs or drug metabolites in target organs
Additional information on drug transport processes and metabolism in target organs, particularly the central nervous system
Highly specific inhibitors and specific identified substrates to permit the characterization of various drug transporters
Greater information on reactive metabolite formation by drug-metabolizing enzymes in different species, especially humans, and the effects of these metabolites on cellular processes
In silico modeling data of drug transport and drug metabolism to understand drug action and drug-induced toxicities
Additional structure studies of different drug-metabolizing enzymes to permit more effective 3D QSAR modeling
Improved data sets and QSAR modeling tools to improve predictive toxicity modeling
Accurate databases of structure-pharmacokinetic-pharmacodynamic models of drug toxicities in humans

A 2004 National Institutes of Health (NIH) workshop (4) developed an extensive list of goals for the better understanding of metabolism and toxicity (Table 1). This set may be too extensive, in light of the resources available, the fact that some of the issues are already addressed reasonably well (e.g., metabolism prediction), and the limited usefulness of some of these approaches to date. What might be more useful is to consider the science necessary to address the list of questions in Table 2 for new drug candidates. Key issues in predictive toxicology are (i) whether we can predict in vivo responses from in vitro systems and (ii) whether we can predict long-term effects from short-term ones. These challenges are considerable. Although we will review some aspects of the chemistry of certain functional groups later, the point should be made that chemical similarity is not very predictive of biological responses, particularly in the area of toxicology. An axiom of medicinal chemistry is that there are usually multiple groups of molecules that can serve as leads for a drug target. A necessary corollary is that different groups of structures can also generate the same toxicity. One way of looking at the problem of predicting toxicity is with a paradigm developed by Zimmerman (7, 8) for hepatotoxicity (Figure 4). We are able to make the best predictions in situations where the inherent toxicity is either high or the host influence on susceptibility is low (the variation can be either between species or within a species, e.g., polymorphism). The current models do not work as well in cases where metabolism varies among species or where immunological hypersensitivity is an issue, and idiosyncrasies are a potential problem. The point of addressing toxicity issues earlier in the drug discovery/development process was already mentioned, and more issues are raised in Figure 5. In addition to the logic of reducing the cost of doing clinical trials on drugs that will be toxic and thus saving resources (Figure 2), some other issues require consideration. Predicting toxicity earlier in the drug development paradigm has three other advantages, which may not have been obvious (Figure 5). One is that costs are lower for simpler experiments, particularly in vitro. The other issues

(a) Obtained from ref 4.

Table 2. Applying Mechanisms of Toxicity to Human Safety
What is toxic (parent drug or metabolite(s))?
How is it toxic?
What is the dose-response relationship?
Does toxicity occur in humans?
Can a screen be developed to assess the liability?
Can the liability be eliminated?

are the time involved and the amount of drug used. The latter point may not be widely appreciated, but scaling up the synthesis of what are often complex molecules is not trivial, particularly in the early stages, when there is pressure to prepare many analogues. It should be noted that although in vitro predictive screens are the ultimate goal, the screens must be robust and decisional. Overscreening, especially with marginal screens, can lead to large opportunity costs because there is always the choice in the discovery stage to discard compounds and do more synthetic chemistry, seeking the optimal compound, or to take a compound that does not pass every test forward into development.

Potential Causes of Drug Toxicity

No drug is safe at all doses, a concept first invoked by Paracelsus roughly 500 years ago (9). We can never completely rule out hazards associated with patients overdosing or ignoring


Figure 4. Hypothetical, idealized relationship between the inherent toxicity of chemicals and the variability of the response among hosts (e.g., test animals and humans). In principle, the dose is not a consideration in this treatment, which is adapted from Zimmerman (7, 8). At toxic doses, the most readily understood compounds are those with higher toxicity in all animal species. Variation among the host (test species) introduces more uncertainty in extrapolation. Predictions can be made with problem cases if the issue is metabolism, but idiosyncratic problems are difficult to understand with animal models.

Table 3. Categories of Drug Toxicity(a)
cell death/tissue injury
altered phenotype/function
immunological hypersensitivity
cancer

Figure 5. Significance of being able to make assessments of toxicity earlier in the drug discovery/development process. Three major issues are time, the amount of drug that must be synthesized, and cost.

warnings about contraindications or recognized interactions. An example of a problem here is acetaminophen (known as paracetamol in Europe), which is the most widely used drug in the U.S., U.K., Denmark, and probably other countries. One estimate is that 23% of the population uses one (or more) of the >75 different formulations of acetaminophen for periods of greater than 1 week (acetaminophen is added to a number of different formulations, making it difficult to determine the total dose). More than half of the cases of primary liver failure in the U.S. are drug-induced, and a majority of these are due to acetaminophen (10, 11). However, acetaminophen is safely tolerated by the vast majority of people who use it at the prescribed level. A key issue here is the benefit/risk ratio. This is also an important consideration in the development of drugs that can be used to treat serious, life-threatening diseases, particularly if no good alternatives are available. Thus, there is more tolerance of some safety issues with drugs used to treat

(a) From ref 12.

life-threatening conditions such as antibiotic-resistant infections, AIDS, and cancer. One classification of categories of drug toxicity is shown in Table 3 (12). Most of the attention in toxicology studies is given to cell death and tissue injury. We have mechanistic knowledge of the systems in only a few cases, but the outcomes can be scored in model animal systems and, to some extent, in clinical trials. Some aspects of altered phenotype and function are also observable, although the effects are often not obvious unless researchers are looking for a specific outcome or the phenotype has an overt nature that expresses itself as tissue injury. However, as we will see, a goal is to use data in this area (e.g., transcriptomics and proteomics) to predict toxic responses in the other categories. Immunological hypersensitivity is not trivial to predict, except perhaps in cases where a strong effect is seen in vitro or in some animal models. There is a deficiency in the availability of approaches to reliably predict idiosyncratic reactions (vide infra) (13, 14). Cancer can be considered a specialized type of toxicity, and extensive screening is done with drug candidates. The point should be made that the animal models used in cancer bioassays have a known history of over-responding to several kinds of carcinogens (Table 4) (15-18). In today's regulatory climate, problems in these areas are generally recognized not to be indicative of an issue for humans. The record of keeping carcinogenic drugs off the market appears to be remarkably good. Some earlier drugs with questionable carcinogenicity (e.g., phenacetin) have been withdrawn, although in these cases, the


Table 4. Rodent Carcinogenic Responses Not Likely to Apply to Humans

tumor site | illustrative chemical agents
male rat kidney | d-limonene, unleaded gasoline
male bladder | saccharin, nitrilotriacetic acid
rat thyroid | goitrogens, some alkylcarbamates, fungicides
forestomach | butylated hydroxyanisole, propionic acid, ethyl acrylate
mouse liver | barbiturates, peroxisome proliferators

science has been questionable. Some controversial drugs exist on the market today. One example is tamoxifen (19); an issue is the benefit/risk ratio (vide supra) in the treatment of cancer; that is, tamoxifen can prevent the recurrence of breast cancer in high-risk individuals but may increase the risk of endometrial cancer. Another way to consider drug toxicity is within the contexts used previously (Table 5) (12). On-target toxicity involves toxicity through the same biological system used in treatment. An example of this mechanism-based toxicity is the statins. Statins elicit their therapeutic effects mainly by inhibiting 3-hydroxy-3-methylglutaryl CoA reductase in the liver, blocking

Table 5. Contexts of Drug Toxicity(a)

type | example
on-target (mechanism-based) | statins
hypersensitivity and immunological | penicillins
off-target | terfenadine
biological activation | acetaminophen
idiosyncratic | halothane

(a) From ref 12.

Table 6. Clinical Signs of Hepatotoxicity(a)

issue | aspects
serum biomarkers | hepatocellular damage: transaminases (ALT, aspartate transaminase (AST)); cholestatic injury: bilirubin, alkaline phosphatase
physiological mechanism vs tissue injury | Are serum ALT levels simply a result of cell turnover/injury? adaptive response (tolerance)
current dogma | (Hy) Zimmerman's rule: ALT ≥ 3× plus bilirubin ≥ 2×: high mortality rate; Rezulin rule: ALT ≥ 3× in ≥ 2% in a clinical trial: trouble

(a) From refs 7 and 8.
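The two serum-chemistry rules above ((Hy) Zimmerman's rule and the "Rezulin rule") are simple threshold tests. As a hedged sketch, they could be encoded as follows; the function names and the convention of expressing values as multiples of the upper limit of normal (ULN) are ours, not part of the review.

```python
# Illustrative encoding of the two hepatotoxicity screening rules.
# Inputs are expressed as multiples of the upper limit of normal (ULN).

def hys_law_flag(alt_x_uln, bilirubin_x_uln):
    """(Hy) Zimmerman's rule: ALT >= 3x ULN together with
    bilirubin >= 2x ULN is associated with a high mortality rate."""
    return alt_x_uln >= 3.0 and bilirubin_x_uln >= 2.0

def rezulin_flag(n_alt_over_3x, n_subjects):
    """"Rezulin rule": ALT >= 3x ULN in >= 2% of a clinical trial
    population signals trouble."""
    return n_subjects > 0 and n_alt_over_3x / n_subjects >= 0.02

print(hys_law_flag(4.0, 2.5))   # True: both thresholds exceeded
print(rezulin_flag(25, 1000))   # True: 2.5% of subjects affected
```

Note that these flags are screening heuristics, not diagnoses; the review's point is that such thresholds only stratify risk.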

cholesterol biosynthesis. However, if the statin levels in muscle increase, then inhibition of the same reductase will block geranylgeranylation, at least in a rat model (20), and lead to rhabdomyolysis (and possibly kidney damage associated with the overload of heme from the muscle) (21). Hypersensitivity and immunological reactions are associated with the penicillins and some other classes of drugs. As mentioned earlier, the animal models are limited. Felbamate (Figure 6) has been postulated to exert its effects through such a mechanism after bioactivation (Figure 7). Off-target pharmacology simply means that the drug is exerting an effect on another biological system, in addition to the target. In principle, such effects can be screened for, although our knowledge of the biological events is still too limited to

Figure 6. Pathways of metabolism of felbamate relevant to toxicity (22). A number of comparisons can be made between the pathways in rats and humans. The two hydroxylation pathways are ∼5-fold higher in rats than in humans. Rats also accumulate considerably less (4- to 5-fold) of the glucuronide, the carboxylic acid, and the mercapturate derived from the GSH conjugate. These differences may explain, at least in part, why rats are not very prone to felbamate toxicity. In humans, the aldehyde dehydrogenase (ALDH 1A1) and GSH transferase (GST M1-1) pathways are catalyzed by polymorphic enzymes, and interindividual variations may be important in predisposing patients to toxicity.


Figure 7. Interaction of metabolism and immunology in toxicity. See ref 13 regarding the hapten hypothesis and also the danger and pharmacological interaction hypotheses (23). In some cases, autoantibodies are produced, but whether or not these are causal in the hepatotoxicity has yet to be established (24).

Table 9. Drugs Withdrawn for Hepatotoxicity (U.S.)(a)

drug | date | dose (mg/day) | reactive products
cincophen | 1930 | 300 | no
iproniazid | 1959 | 25-150 | yes
pipamazine | 1969 | 15 | no
fenclozic acid | 1970 | 300 | yes
oxyphenisatin | 1973 | 50 | no
nialamide | 1974 | 200 | yes
tienilic acid | 1980 | 250-500 | yes
benoxaprofen | 1982 | 300-600 | yes
nomifensine | 1986 | 125 | yes
chlormezanone | 1996 | 600 | no
bromfenac | 1998 | 25-50 | yes
troglitazone | 2000 | 400 | yes
nefazodone | 2004 | 200 | yes
pemoline | 2005 | 38-110 | no

predictable (?): relatively common occurrence; detected pre-clinically; dose dependent; acute or sub-acute onset; no immune component
idiosyncratic: unpredictable; rare occurrence
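The dose column of Table 9 can be screened mechanically. In the sketch below, the records transcribe the table, while the parsing helper and field layout are ours; it illustrates the observation made later in the text that none of these withdrawn drugs were used at ≤10 mg/day.

```python
# Table 9 records as (drug, year withdrawn, dose in mg/day); dose ranges
# are kept as strings and reduced to their low end for a conservative check.

withdrawn = [
    ("cincophen", 1930, "300"), ("iproniazid", 1959, "25-150"),
    ("pipamazine", 1969, "15"), ("fenclozic acid", 1970, "300"),
    ("oxyphenisatin", 1973, "50"), ("nialamide", 1974, "200"),
    ("tienilic acid", 1980, "250-500"), ("benoxaprofen", 1982, "300-600"),
    ("nomifensine", 1986, "125"), ("chlormezanone", 1996, "600"),
    ("bromfenac", 1998, "25-50"), ("troglitazone", 2000, "400"),
    ("nefazodone", 2004, "200"), ("pemoline", 2005, "38-110"),
]

def min_daily_dose_mg(dose):
    # "25-150" -> 25.0: take the low end of a dose range
    return float(dose.split("-")[0])

lowest = min(withdrawn, key=lambda rec: min_daily_dose_mg(rec[2]))
print(lowest[0], min_daily_dose_mg(lowest[2]))  # pipamazine 15.0

# Every entry exceeds 10 mg/day, even at the low end of its dose range:
print(all(min_daily_dose_mg(rec[2]) > 10 for rec in withdrawn))  # True
```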

to see a larger fraction of drug candidates being dropped because of off-target effects if a full analysis of the data were possible. The issue of what fraction of drugs are toxic because of reactive metabolites is important. Getting an exact estimate of the fraction is difficult for two reasons: (i) such information would have to come from proprietary data within a (relatively large) company, and (ii) the reason for toxicity is often not delineated when drug candidates die early in the process; that is, mechanistic studies are not done because of resource issues. However, the information in Table 12, provided by B. D. Car, reflects the experience at Bristol-Myers Squibb and DuPont-Merck over the past 13 years. Biotransformation-based mechanisms appeared to contribute in about 27% of the cases of toxicology attrition (although the point should be made that these are not necessarily all reactive metabolite cases). Target-based toxicity was at the same level. Ion channel inhibition (e.g., human "ether-a-go-go-related gene" (hERG)) is a special kind of off-target toxicity; the fraction may or may not be reflective of the nature of the current therapeutic targets of the company. All other mechanisms (or undefined) accounted for 36% (and immune-mediated reasons for 7%). The Bristol-Myers Squibb list of tissue/organ/system sites of toxicity is given in Table 13. Interestingly, the leading area, ahead of the liver, was cardiovascular, which may be related to the choice of therapeutic areas and the unexpectedly high fraction of ion channel toxicity issues (Table 12). In considering Tables 9-12, some conclusions may be made. Bioactivation is strongly represented in toxicity problems, but the exact contribution is hard to assess. Moreover, no one has proven that protein binding caused the toxicity in most of these cases. Nevertheless, reactive metabolites do seem to be an issue in at least one-fourth of the toxicities. The actual fraction may


Table 13. Sites for Toxicology Attrition(a)

target organ or tissue | percent of all advanced molecules(b)
cardiovascular | 27.3
liver | 14.8
teratogenicity | 8.0
hematologic | 6.8
central and peripheral nervous system | 6.8
retina | 6.8
mutagenicity and clastogenicity | 4.5
male and female reproductive toxicity | 4.5
gastrointestinal and pancreatic | 3.4
muscle | 3.4
carcinogenicity | 3.4
lung | 2.3
acute death | 2.3
renal | 2.3
irritant | 2.3
skeletal (arthritis/bone development) | 1.1

(a) Based on experience (in animal models) from DuPont-Merck and Bristol-Myers Squibb, 1993-2006. This information was kindly provided by B. D. Car, Bristol-Myers Squibb. (b) n = 88. Because categories are partially overlapping, the total is >100%.

Figure 8. Bromfenac and structural alerts (aryl bromide, aniline, and carboxylic acid).

be higher, but proving this will be difficult, at least at this time. Another striking feature of the analyses of the withdrawn and Black Box drugs, which has already been noted by others (13, 48, 70), is that all of these were used at high doses. None of the drugs causing unexplained hepatotoxicity were used at doses ≤10 mg/day. Further consideration of the literature seems to bear this fact out. Thus, the development of drugs with high potency is one possible step toward reducing toxicity. The fact that high-dose drugs account for most, if not all, of the cases of hepatotoxicity may be consistent with a dominant role for bioactivation and covalent binding but does not prove the hypothesis. That is, the overload of biological systems with reactive metabolites may trigger rather irreversible problems. For instance, the toxic drug troglitazone (withdrawn) and the much safer replacement rosiglitazone both produce protein binding, but the dose of the latter is far lower. The case of cerivastatin (Baycol), which was withdrawn, may be considered again (vide supra). Rhabdomyolysis was seen even with a dose of

Table 14. Structural Alerts(a)
… Cl > F)
nitroaromatics
moieties that form α,β-unsaturated enol-like structures
thiols, thiono compounds, thiazolidinediones, thioureas(b)
aminothiazoles(b)

(a) Courtesy of S. D. Nelson. (b) Added at the suggestion of a reviewer.

shown in Figure 9. The pathways that can lead to reactive products have not all been elucidated, but these could make good test questions for a toxicology or drug metabolism course. For consideration of the myriad of possible P450-catalyzed and other reactions see refs 56 and 71-73. Acetaminophen has already been mentioned. A summary of the pathways involved in metabolism is shown in Figure 10A. A comparison with the much less toxic meta congener is presented in Figure 10B. Both compounds (acetaminophen and

Table 15. Toxicity as a Function of the Rate of Metabolism(a)

                          halothane   enflurane   isoflurane   desflurane
year approved             1956        1972        1981         1993
metabolism                20%         2.5%        0.2%         0.02%
number of cases           hundreds    50          6            1
idiosyncratic reactions   1/10^4      nd(b)       nd           nd

(a) From ref 76. (b) nd: not detected.

3-hydroxyacetanilide) generate similar levels of covalent binding in microsomal reactions and in vivo (74). However, the adducts derived from acetaminophen in vivo are preferentially localized in the mitochondria, whereas those from 3-hydroxyacetanilide are preferentially found in the cytosol and endoplasmic reticulum. The quinoneimine (Figure 10A) is hypothesized to be more stable than the ortho and para quinones (Figure 10B) and to be able to reach and enter mitochondria. This general concept of a balance of reactivity versus stability determining toxicity is an old one (75), although more remains to be learned about specific compounds. One final point should be made in consideration of this comparison (Figure 10): The meta and

Figure 10. Metabolic pathways involved in acetaminophen bioactivation and detoxication. (A) Acetaminophen pathways. (B) Pathways for 3-hydroxyacetanilide, a relatively non-toxic congener that produces a similar level of total protein adducts in vivo (74).


Table 16. Approaches to the Estimation of Covalent Binding

radioactive drug
  analysis: wash and scintillation counting
  advantages: quantitative, can be used in vivo
  disadvantages: need labeled drug; expensive; need to wait for label

"peptide"
  analysis: do reaction with peptide, bind to chip, wash, SELDI-TOF MS
  advantages: inexpensive, fast
  disadvantages: quantitation not simple

GSH methods
  GSH: LC-MS/MS (e.g., negative ion precursor scanning, m/z 272) (77); quantitative; need LC-MS
  [glycine-13C2,15N]GSH: LC-MS/MS (78, 79)
  N-dansyl GSH: LC-fluorescence (80); need LC

trapping agents
  K13C15N: LC-MS (81); simple; need LC-MS, selective for hard electrophiles
  K14CN: LC-scintillation counting (82)
  semicarbazide: LC-MS (83)
  fluorescent thiols: fluorescent plate reader; high throughput; not reduced to practice
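The radioactive-drug approach in Table 16 amounts to a unit conversion from scintillation counts to an adduct level (pmol drug equivalents bound per mg protein), which can then be compared to the oft-cited 50 pmol/mg benchmark (33). The sketch below is illustrative only; the function name and all numeric inputs are invented, not taken from any cited study.

```python
# Hedged sketch: converting radiolabel binding data to pmol adduct/mg protein.

DPM_PER_CI = 2.22e12  # disintegrations per minute per curie (by definition)

def pmol_bound_per_mg(dpm_bound, specific_activity_ci_per_mmol, mg_protein):
    """DPM retained in washed protein -> pmol drug equivalents per mg protein."""
    # Ci/mmol * DPM/Ci = DPM/mmol; * 1e-9 mmol/pmol = DPM per pmol of drug
    dpm_per_pmol = specific_activity_ci_per_mmol * DPM_PER_CI * 1e-9
    return dpm_bound / dpm_per_pmol / mg_protein

# Invented example: 14C drug at 0.05 Ci/mmol (= 50 mCi/mmol, typical for 14C),
# 13,320 DPM retained on 2 mg of microsomal protein.
level = pmol_bound_per_mg(13_320, 0.05, 2.0)
print(round(level, 2), level > 50.0)  # 60.0 True: above the 50 pmol/mg benchmark
```

As the text notes, any such number depends on dose or concentration, so the comparison is a guidepost rather than a go/no-go criterion.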

Table 17. Optimization of a Series of Aryloxy Derivatives(a)

para congeners have been studied in animal models and provide an interesting story, but we have no solid evidence as to whether a similar pattern would occur in humans. The structural alerts (Table 14) are worthy of note; companies, and even units within companies, vary in terms of how actively they avoid these. There are examples of safe thiophenes, halides, and so forth. However, even a phenyl group is always potentially only 1-3 steps away from a reactive product. These guidelines (Table 14) can be used wisely, at least in considering a group of possible alternatives. Attenuating metabolism can be very beneficial, as shown in the example of the improvement of anesthetics in Table 15. Attention to structural alerts has been useful, at least within the context of a family of compounds. Although some commercial efforts have been made to develop software to predict toxicity de novo from chemical structures, these systems have not been successful with new compounds in industry, in our experience and consultation.

Covalent Binding Assays

Covalent binding experiments have been done with many compounds, both in vitro and in vivo, and reported in the literature. In recent years, several pharmaceutical companies have begun to use such assays on a broader scale as an approach to the prediction of toxicity problems, as opposed to a follow-up on problems encountered with pathology. A list of methods that have been proposed or published is presented in Table 16. The in vitro systems are designed with the hope of supplanting radioactive methods because of the cost and complexity of radiosynthesis. A few points are in order. The LC-MS assays require a mass spectrometer and a chromatography step. The CN- trapping method appears selective for hard electrophiles. The in vivo assays still require the use of the 14C (or otherwise radiolabeled) drug. A major issue is the use of the data generated by these methods. Much has been made of the "50 pmol (adduct)/mg protein" level mentioned by the Merck group in 2004 (33). This review, which incidentally provides an excellent industry perspective on drug development and covalent adducts in general, does not really advocate the 50 pmol/mg parameter as "go" or "no-go". Actually, any number is somewhat arbitrary because the level of binding will depend on the drug dose or concentration, and the point has been made that 50 pmol adduct/mg protein is a goal, not a cutoff value (33). Covalent binding data are also often used for interspecies comparisons, in order to ensure that the values for human tissue (e.g., microsomes) are no greater than the values obtained in the toxicology test species. A practical and useful application of the concept is shown in Table 17, where the initial

(a) From ref 84.

lead candidate yielded a high level of covalent binding in both rat and human liver microsomes and was considered a risk. A series of substitutions led to an attenuation of reactive metabolite production, to the point at which the new compound could be considered a reasonable lead (and it still remains proprietary, to our knowledge). Another issue is whether doing these assays is really helpful if pathology (and transcriptomic) analyses do not show toxicity. That issue must be dealt with on a case-by-case basis. However, one point to consider is that some of the immunological toxicity phenomena that we do not understand well seem to be initiated by covalent binding. A major portion of this review has dealt with reactive metabolites. At this time, regulatory agencies do not have a general policy on considering covalent binding and reactive metabolites. One of the issues here is species extrapolation, particularly from test animals to humans. If a drug yielded considerably more covalent binding in human microsomes or hepatocytes than in rodent preparations, there would be cause for concern. A related issue is human-selective stable metabolites, where the FDA has issued draft guidance (85). Previous work by industry had proposed that consideration of these should not be an issue when they are

Figure 11. Generalized scheme of biological events associated with the toxicity of drugs and other chemicals. Adapted from ref 12.