
Chem. Res. Toxicol. 2000, 13, 436-440

Articles

A Quantitative Structure-Activity Relationships Model for the Acute Toxicity of Substituted Benzenes to Tetrahymena pyriformis Using Bayesian-Regularized Neural Networks

Frank R. Burden*
Chemistry Department, Monash University, Clayton, Victoria 3168, Australia

David A. Winkler
CSIRO Division of Molecular Science, Private Bag 10, Clayton South MDC, Clayton, Victoria 3169, Australia

Received April 13, 1999

We have used a new, robust structure-activity mapping technique, a Bayesian-regularized neural network, to develop a quantitative structure-activity relationships (QSAR) model for the toxicity of 278 substituted benzenes toward Tetrahymena pyriformis. The independent variables used in the modeling were derived solely from the molecular structure, and the model was tested on 20% of the data set, selected from the whole set by cluster analysis, which had not been used in training the network. The results show that the method is robust and reliable and gives results for mixed-class compounds that are comparable to earlier QSAR work on single-chemical-class subsets of the 278 compounds, work that employed measured physicochemical parameters as independent variables. Comparisons of the Bayesian neural net models with those derived by classical PLS analysis showed the superiority of our method. The method appears to be able to model more diverse chemical classes and more than one mechanism of toxicity.

* To whom all correspondence should be addressed.

¹ Abbreviations: SAR, structure-activity relationships; QSAR, quantitative structure-activity relationships; BRANNs, Bayesian-regularized artificial neural networks; IC50, concentration to produce a response in 50% of the organisms; MLR, multiple linear regression analysis; PLS, partial least-squares analysis; PCA, principal components analysis; NN/ANN, (artificial) neural net; SEE, standard error of estimation; SEP, standard error of prediction; NPC, number of principal components; R2, square of the correlation coefficient for training; Q2, square of the correlation coefficient for testing.

Introduction

Since Hansch and Fujita (1) developed the quantitative structure-activity relationships (QSAR)¹ method, it has been successfully applied to drug and agrochemical design as well as to the prediction of toxicological endpoints. Finding structure-activity relationships is essentially a regression or pattern recognition process, and historically, linear regression methods such as MLR (multiple linear regression) and PLS (partial least squares) have been used to develop QSAR models. Regression is an "ill-posed" problem in statistics, which sometimes results in QSAR models exhibiting instability when trained with noisy data. In addition, traditional regression techniques often require subjective decisions on the part of the investigator as to the likely functional (e.g., nonlinear) relationships between structure and activity. It is important that QSAR methods be

efficient, give unambiguous models, not rely on any subjective decisions about the functional relationships between structure and activity, and be easy to validate. Recently (2), regression methods based on neural networks have been shown to overcome some of these problems, as they can account for nonlinear structure-activity relationships and can deal with the linear dependencies that sometimes appear in real SAR problems. Neural network training can be regularized, a mathematical process that converts the regression into a well-behaved, "well-posed" problem. The mathematics of well-posedness and regularization can be found in the papers by Hadamard and Tikhonov (3, 23). Regression methods, including back-propagation neural nets, still present some problems, principal among these being overtraining, overfitting, and selection of the best QSAR model from the number of models obtained in the validation process. Overtraining results from running the neural network training for too long and leads to a loss of the trained net's ability to generalize. Overtraining can be avoided by use of a validation set. However, the effort to validate (e.g., by cross validation) QSAR models scales as O(N²P²) (4), where N is the number of data points and P is the number of input parameters, which becomes onerous for large data sets. Validation procedures also produce a family of similar QSAR models, and it is not clear which of these models is preferred, or how they may be combined to give the "best" model.
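As a generic illustration of the regularization mentioned above (our example, not taken from refs 3 or 23): adding a penalty on the coefficient norm to the least-squares objective makes the normal-equations matrix nonsingular for any λ > 0, so the regression is well-posed even when the descriptors are noisy or linearly dependent. Bayesian regularization, discussed below, can be viewed as setting the strength of such a penalty probabilistically rather than by cross validation.

```latex
\hat{\mathbf{w}}
=\arg\min_{\mathbf{w}}\;\lVert\mathbf{y}-\mathbf{X}\mathbf{w}\rVert^{2}+\lambda\lVert\mathbf{w}\rVert^{2}
=\bigl(\mathbf{X}^{\mathsf{T}}\mathbf{X}+\lambda\mathbf{I}\bigr)^{-1}\mathbf{X}^{\mathsf{T}}\mathbf{y},
\qquad \lambda>0.
```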

Overfitting results from the use of too many adjustable parameters in modeling the training data and is avoided by the use of test sets of data not used in the training and validation. However, QSAR using neural networks has the advantages of speed, simplicity, and flexibility in dealing with complex response surfaces. As neural nets are superb pattern recognition methods, they are intrinsically capable of accounting for the multiple mechanisms of toxicity that different classes of chemical compounds may exhibit in a particular organism [e.g., cytotoxicity (5) or mutagenicity (6)]. In this paper, we use a Bayesian-regularized neural network (BRANN) to overcome the remaining deficiencies of neural networks. The network is trained using atomistic and topological indices that are independent of measured properties. The main purpose of the paper is to apply the BRANN method to modeling the toxicity of substituted benzenes toward Tetrahymena pyriformis, to illustrate the usefulness of this method in toxicological QSAR problems. Such general toxicological prediction methods may find use in the screening of large numbers of chemicals for toxic effects such as endocrine disruption, as well as in the generation of QSAR models to help correlate and interpret mechanisms of toxicity.

Materials and Methods

We carried out QSAR analyses of four data sets of substituted benzenes for which toxicity data in T. pyriformis have been reported. The T. pyriformis data have been the subject of several QSAR analyses by Cronin et al. (7-10), in which traditional physicochemical indices were used to model the data using multiple linear regression, and it was necessary to remove some outliers to obtain good models. We combined the four data sets into one set containing all of the reported compounds, without removal of outliers, to determine whether a model with low errors of prediction and high correlation coefficients could be produced using our method even though several mechanisms of toxicity are involved (7-10). We developed QSAR models using our new BRANN method and compared these with models derived using classical PLS analysis.

Table 1. Molecular Indices Used in the QSAR Studies

Atomistic indices (A)
  index   element          no. of connections   atom type
  A1      molecular mass
  A2      H                1                    H1
  A3      C                2                    C2 (sp)
  A4      C                3                    C3 (sp2)
  A5      C                4                    C4 (sp3)
  A6      N                1                    N1
  A7      N                2                    N2
  A8      N                3                    N3
  A9      N                4                    N4
  A10     O                1                    O1
  A11     O                2                    O2
  A12     F                1                    F1
  A13     Si               2                    Si2
  A14     Si               3                    Si3
  A15     Si               4                    Si4
  A16     P                2                    P2
  A17     P                3                    P3
  A18     P                4                    P4
  A19     P                5                    P5
  A20     S                1                    S1
  A21     S                2                    S2
  A22     S                3                    S3
  A23     S                4                    S4
  A24     Cl               1                    Cl1
  A25     Br               1                    Br1
  A26     I                1                    I1

Randic indices (R): R1 = 0χ, R2 = 1χ, R3 = 2χ, R4 = 3χ, R5 = 4χ
Kier and Hall indices (K): K1 = 0χv, K2 = 1χv, K3 = 2χv, K4 = 3χv, K5 = 4χv
Fragment indices (F): F1 = H-O-C, F2 = H-O-N, F3 = C-O-C, F4 = N-O-C, F5 = N-O-N, F6 = C=O, F7 = O=C-N, F8 = O=C-O, F9 = N=O, F10 = O=N=O

Substituted Benzene Data Sets. Cronin et al. reported QSAR analyses on data sets of 166 phenols (7), 47 nitrobenzenes (8), 34 benzonitriles (9), and a further 43 nitrobenzenes (10). These papers report the IC50 (millimolar) toxicity of the compounds to T. pyriformis, and we have used these values as log 1/IC50 as the dependent variable in our BRANN modeling. The reader is referred to these papers for details of the toxicity assessment. We have combined these 290 compounds into a single data set of 278 compounds (with some repeats removed, as well as pentachlorobenzene, for which the measurement was numerically undefined) and used a subset of 20% (56 compounds) as a test set.

Molecular Indices. In this study, we employed four molecular indices: the well-studied Randic index (R) (11), the valence modification of the Randic index by Kier and Hall (K) (12), an atomistic counting index (A) (2), and a fragment index (F). The R and K indices are produced from the path lengths and valence electron counts in the molecule. The atomistic index (A) counts the number of each type of atom present in the molecule, where type is defined by the element and the number of connections from it to other atoms. The fragment index (F) counts the occurrence of common functional groups such as COOH, COC, and NO2 per molecule; it was derived from the work of Andrews et al. (13). The indices that were used are summarized in Table 1. As we have shown previously (2), these indices implicitly account for molecular hydrophobicity, an important factor in many toxicological models. The indices are simple to comprehend and fast to compute and do not rely on any measured values. However, the methodology we describe can be applied equally well to produce QSAR models from the physicochemical descriptors (e.g., log P) more commonly employed in toxicity prediction, which may offer advantages in mechanistic interpretation.

The four types of index (R, K, A, and F) are complementary; however, the proliferation of indices may lead to overfitting, so principal component analysis (PCA) is used to reduce redundant information. When principal components, which result from a linear transformation, are used prior to a nonlinear regression, the usual criterion of ignoring those components with small variance is inappropriate. The number of principal components that gives the lowest standard error of prediction is a better measure. While the Bayesian method provides the best SAR model for a given set of indices, the use of a test set allows selection of the combination of indices with the best predictivity.
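To make the descriptor definitions in Table 1 concrete, the sketch below (our illustration, not code from the paper) computes the zeroth- and first-order Randic indices and the atomistic counts for phenol from a hand-coded connectivity list. The Kier and Hall valence (K) and fragment (F) indices would follow the same pattern but require valence-electron counts and substructure matching, so they are omitted here.

```python
import math
from collections import Counter

# Phenol heavy-atom skeleton: six ring carbons (1-6) and the hydroxyl oxygen (7).
bonds = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 1), (1, 7)]
elements = {1: "C", 2: "C", 3: "C", 4: "C", 5: "C", 6: "C", 7: "O"}
implicit_h = {1: 0, 2: 1, 3: 1, 4: 1, 5: 1, 6: 1, 7: 1}  # attached hydrogens

# Heavy-atom degrees (hydrogen-suppressed graph, as used for the Randic indices).
degree = Counter()
for a, b in bonds:
    degree[a] += 1
    degree[b] += 1

# Randic connectivity indices: 0chi sums 1/sqrt(d_i) over vertices,
# 1chi sums 1/sqrt(d_i * d_j) over bonds.
chi0 = sum(1.0 / math.sqrt(degree[i]) for i in elements)
chi1 = sum(1.0 / math.sqrt(degree[a] * degree[b]) for a, b in bonds)

# Atomistic index: count atoms by element and total connection count
# (heavy neighbours plus hydrogens), e.g. C3 = sp2 carbon, O2 = hydroxyl oxygen.
atomistic = Counter(f"{elements[i]}{degree[i] + implicit_h[i]}" for i in elements)
atomistic["H1"] = sum(implicit_h.values())

print(f"0chi = {chi0:.3f}, 1chi = {chi1:.3f}")  # 0chi ~ 5.11, 1chi ~ 3.39
print(dict(atomistic))                          # {'C3': 6, 'O2': 1, 'H1': 6}
```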

Bayesian-Regularized Artificial Neural Networks (BRANNs). Bayesian methods are optimal methods for solving learning problems. They are very useful for the comparison of data models (something orthodox statistics cannot do well), as they automatically and quantitatively embody "Occam's razor": complex models are automatically self-penalizing under Bayes' rule (14, 24, 25). Bayesian methods are complementary to neural networks, as they overcome the tendency of an overflexible network to discover nonexistent or overly complex data models; any method that does not approximate them should not, on average, perform as well (15). Unlike standard feedforward neural network training, where a single set of parameters (weights, biases, etc.) is used, the Bayesian approach to neural network modeling considers all possible values of the network parameters, weighted by the probability of each set of weights. Where orthodox statistics provide several models with several different criteria for deciding which model is best, Bayesian statistics offer only one answer to a well-posed problem. The Bayesian method is summarized in the papers by MacKay (14, 24, 25) and Buntine and Weigend (15), and only a brief summary is provided here. Bayesian inference is used to determine the posterior probability distribution P(w|D,Hi) of the weights, and related properties, from a prior probability distribution P(w|Hi) according to updates provided by the training set D under the BRANN model Hi. Bayesian methods can simultaneously optimize the regularization constants in neural nets, a process that is very laborious using cross validation. There is no better method for reliably finding and identifying better models using only the training set (14, 24, 25).

Advantages of Bayesian Regularization. The advantages of Bayesian methods are that they produce models that are robust, well matched to the data, and make optimal predictions. No test or validation sets are involved, so all available data can be devoted to training the model, and the potentially lengthy validation process discussed above is avoided (14, 24, 25). The Bayesian objective function is not noisy, in contrast to the cross validation measure. At the end of training, a Bayesian-regularized neural network has optimal generalization qualities: there is no need for a test set, since the application of Bayesian statistics provides a network with maximum generalization (14, 24, 25). The theory pertaining to this particularly desirable feature of Bayesian-regularized neural nets is discussed by MacKay (14, 24, 25). Very recently, Husmeier, Penny, and Roberts (16) have shown theoretically and by example that in Bayesian-regularized neural nets the training and test set performance do not differ significantly, and the generalization performance can be estimated from the training error. However, it may still be prudent to use a test set in cases where training sets are small. We have used a test set in this study to give a clearer picture of the ability of the method to predict data not used in training, and to choose the optimum molecular representations. The Bayesian neural net has the potential to give models that are relatively independent of the neural network architecture, above a minimum architecture, and the Bayesian regularization method estimates the number of effective parameters.
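The quantities referred to above can be summarized compactly (our restatement of the evidence framework described by MacKay (14, 24, 25), not a formulation specific to this paper). With a Gaussian prior on the weights and Gaussian noise on the targets, training minimizes a weighted sum of the data error and the weight norm, and the regularization constants α and β and the number of effective parameters γ are re-estimated from the trained network:

```latex
P(\mathbf{w}\mid D,H_i)=\frac{P(D\mid\mathbf{w},H_i)\,P(\mathbf{w}\mid H_i)}{P(D\mid H_i)},\qquad
F(\mathbf{w})=\beta E_D+\alpha E_W,\qquad
E_D=\tfrac{1}{2}\sum_{n=1}^{N}\bigl(t_n-y(\mathbf{x}_n;\mathbf{w})\bigr)^2,\qquad
E_W=\tfrac{1}{2}\sum_{j=1}^{k}w_j^2,
\\[4pt]
\gamma=k-\alpha\,\operatorname{tr}\bigl(\mathbf{A}^{-1}\bigr),\qquad
\alpha\leftarrow\frac{\gamma}{2E_W(\mathbf{w}_{\mathrm{MP}})},\qquad
\beta\leftarrow\frac{N-\gamma}{2E_D(\mathbf{w}_{\mathrm{MP}})},\qquad
\mathbf{A}=\nabla\nabla F(\mathbf{w}_{\mathrm{MP}}),
```

where k is the total number of weights, w_MP denotes the most probable weights, and γ is the number of well-determined ("effective") parameters reported by the BRANN.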
This method also removes the concerns about overfitting and overtraining, so that a definitive and reproducible model is obtained. It has been found that networks which converge to finite weights produce near-identical answers. We have made use of the Bayesian-regularized neural network package included in the MATLAB (17) Neural Network Toolbox for the work described here.

Characteristics of the Neural Network. Our Bayesian neural networks are classical back-propagation neural nets that incorporate the Bayesian regularization algorithm for finding the optimum weights. We used three-layer, fully connected feedforward networks with four neurodes in the hidden layer and one in the output layer. Each neurode in the hidden and output layer uses a sigmoidal transfer function.

The basic method used in the network training is derived from the Levenberg-Marquardt algorithm (18), and the MATLAB implementation of the algorithm uses an automatic adjustment of the learning rate and the Marquardt µ parameter. The Bayesian regularization takes place within the Levenberg-Marquardt algorithm and uses back-propagation to minimize the linear combination of squared errors and weights. The training is stopped if the maximum number of epochs is reached, the performance has been minimized to a suitably small goal, the performance gradient falls below a suitable target, or the Marquardt µ parameter exceeds a suitable maximum. Each of these targets and goals was set at the default values of the MATLAB implementation. The training was carried out many times, and the final model was chosen with reference to the test set to assess robustness. An expanded description of the properties of Bayesian neural nets, their ability to optimize neural net architecture, and their training can be found in our recent work (19).

Production Procedure Used in Forming the Model. For each data set, the following steps were taken (steps c-e are sketched in code after this list).
(a) The data set was consolidated into a single file [MDL structure-data format (sdf)] containing structural and activity data for each molecule.
(b) For partially ordered sets, the sdf file was used to construct a coding file containing the values of the selected indices and the activity data for each molecule.
(c) The order of the molecules in the data set was shuffled to remove ordering effects.
(d) The data set was divided into a training set and a test set chosen by a K-means clustering algorithm, clustering on the X (molecular indices) and Y (toxicity data) values taken together. The clustering was done at a level that allowed selection of a test set of 20% of the total number of compounds in the data set by randomly choosing one member of each cluster. The training set data were mean centered, and the means that were obtained were subtracted from the test set data.
(e) Several training sessions were carried out with different neural net architectures using different numbers of principal components (PCs) derived from the X data. Since the modeling procedure using neural networks is nonlinear, the number of PCs used was determined by the standard error of prediction for the test set rather than by the minimum variance described by the PCs. It was found that an architecture comprising one hidden layer with four neurodes was sufficient in all cases reported here. The number of effective parameters is given by the BRANN.
(f) With the optimal number of PCs and architecture, the BRANN was trained independently 30 times to eliminate spurious effects caused by the random set of initial weights.
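A sketch of the data handling in steps c-e (our illustration using scikit-learn; a Bayesian ridge model stands in for the BRANN, which was run in the MATLAB Neural Network Toolbox and is not reproduced here):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.linear_model import BayesianRidge  # placeholder for the BRANN

rng = np.random.default_rng(0)

def split_by_clustering(X, y, test_fraction=0.20):
    """Step (d): pick one member of each K-means cluster as the test set,
    clustering on the X (descriptor) and Y (toxicity) values taken together."""
    n_test = int(round(test_fraction * len(y)))
    labels = KMeans(n_clusters=n_test, n_init=10, random_state=0).fit_predict(
        np.column_stack([X, y]))
    test_idx = np.array([rng.choice(np.flatnonzero(labels == k)) for k in range(n_test)])
    train_idx = np.setdiff1d(np.arange(len(y)), test_idx)
    return train_idx, test_idx

def sep_for_n_components(X, y, train_idx, test_idx, n_pc):
    """Step (e): standard error of prediction on the test set for a given number
    of PCs; only training-set means are used for centring, as in step (d)."""
    mean = X[train_idx].mean(axis=0)
    pca = PCA(n_components=n_pc).fit(X[train_idx] - mean)
    t_train = pca.transform(X[train_idx] - mean)
    t_test = pca.transform(X[test_idx] - mean)
    model = BayesianRidge().fit(t_train, y[train_idx])
    resid = y[test_idx] - model.predict(t_test)
    return float(np.sqrt(np.mean(resid ** 2)))

# Demonstration with random placeholder data; the real X would hold the Table 1
# descriptors for the 278 compounds and y the log 1/IC50 values.
X, y = rng.normal(size=(278, 36)), rng.normal(size=278)
train_idx, test_idx = split_by_clustering(X, y)
best_n_pc = min(range(2, 21), key=lambda n: sep_for_n_components(X, y, train_idx, test_idx, n))
print(len(test_idx), best_n_pc)
```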

Results and Discussion

The results for the Bayesian neural net models are given in Table 2, together with models from PLS calculations and those of Cronin et al. (7-10).

Robustness of the Models. Each BRANN entry is the result of 30 independent calculations; the one that gave the lowest SEP is reported, although all of the training sessions gave very similar results. This contrasts with the common experience with neural networks used in QSAR studies, where the use of initial randomized weights often leads to different models with different weights, though often with similar SEPs. The use of BRANNs overcomes this shortcoming: models giving similar SEPs have similar weights, and the models are more robust (19). The BRANN also calculates the number of effective parameters (essentially the number of nontrivial weights in the trained neural network). It was found that the number of effective parameters converges when the number of hidden layer nodes is increased beyond a minimum value, and four nodes were found to be sufficient (19).


Table 2. Comparison of BRANN QSAR Models of the Toxicity of Substituted Benzenes on T. pyriformis with Those of Cronin et al. (7-10)

  data set                   size    method           NPC(d)   SEE train(b)   R2      SEP val/test(c)   Q2(j)
  phenols (7), eq 1          166     MLR              N/A(a)   N/A            0.749   0.108             0.733
  nitrobenzenes (8), eq 9    47      MLR              N/A      N/A            0.858   0.072             0.826
  benzonitriles (9), eq 4    33(e)   MLR              N/A      N/A            0.752   0.121             N/A
  nitrobenzenes (10), eq 7   42(f)   MLR              N/A      (0.229)        0.897   0.082             0.888
  combined data set(g)       278     PLS (RKA)(h)     14       0.110          0.650   0.117             0.601
  combined data set          278     BRANN (RKA)(h)   12       0.076          0.829   0.096             0.631
  combined data set          278     PLS (RKAF)(i)    10       0.104          0.688   0.112             0.631
  combined data set          278     BRANN (RKAF)(i)  18       0.064          0.943   0.106             0.808

(a) N/A, not applicable or available.
(b) SEE, standard error of estimation (data scaled from 0 to 1).
(c) SEP, standard error of prediction (data scaled from 0 to 1).
(d) Number of principal components from PCA.
(e) Thirty-four in data set, less 1-cyanonaphthalene, removed as an outlier.
(f) Forty-three in data set, less 2,4-dinitrotoluene, removed as an outlier.
(g) Two hundred ninety in data set, less 11 repeated or updated entries and pentachloronitrobenzene, which was not assigned a numerical measurement.
(h) The 5 × Randic + 5 × Kier and Hall + 13 × atomistic indices.
(i) The 5 × Randic + 5 × Kier and Hall + 13 × atomistic + 13 × fragment indices.
(j) Note that Cronin et al.'s predictive statistics are based on a less rigorous leave-one-out cross validation procedure, not a test set independent of the training set.

Comparison with Previous Work. The previous work (7-10) reported toxicological QSAR models obtained using physicochemical parameters and MLR. Each of the previous papers reported several MLR models in which different physicochemical parameters were used and/or outliers were removed. The MLR models reported in Table 2 are those from which a minimal number of outliers were removed. Although the QSAR models using MLR were highly satisfactory, it must be noted that they were all obtained from subsets of the total data, each of which contained a single chemical class (e.g., phenols). It should also be noted that the predictive statistics quoted by Cronin et al. are based on cross validation (leave-one-out) procedures. This is a less stringent test of the predictive quality of the models than statistics based on a test set not used in the training, as we report here.

When the separate chemical class data sets are combined into one large set, it is recognized that more than one mechanism of toxicity will be operative [e.g., polar narcosis, nonpolar narcosis, and uncoupling of oxidative phosphorylation or reactive phosphorylation (20)]. In this paper, the data set was analyzed by two methods, PLS and BRANN, using first the atomistic and topological descriptors (RKA) and then all four indices (RKAF), all of which are described above. The relative predictive quality of the models is indicative of their ability to accommodate nonlinearities in the activity surface and to account for more than one mechanism of toxicity. The PLS calculations show that a good model, relative to those of Cronin et al., can be obtained using the RKA indices. As PLS is a linear method, the quality of the model must be ascribed to the indices, all of which can be calculated simply from a knowledge of the connectivity matrix and atomic hybridization. The addition of the fragment indices adds some extra information and provides a better model. The BRANN calculations show that the relationship between the indices and toxicity is nonlinear, since the BRANN model is clearly better than that obtained with PLS. It is very satisfying that the RKAF model can produce a toxicological QSAR model with an SEP of 0.106 (data scaled from 0 to 1) and a Q2 of 0.808 for the independent test set of 56 compounds, when the training set contains multiple chemical classes. Attempts to analyze mixed classes of chemicals [2-, 3-, and 4-nitrobenzenes (10)], also reported by Cronin et al., yielded lower-quality QSAR models.
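As an illustration of how the PLS baseline and the test-set statistics in Table 2 can be computed (ours, using scikit-learn rather than the software used in the paper; Q2 is taken here as the squared correlation between observed and predicted test-set values, following the abbreviation list, and the 14-component choice simply mirrors the PLS (RKA) row):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Illustration only: random placeholder data stand in for the descriptor matrix
# and toxicity values; substitute the real train/test split from the procedure above.
rng = np.random.default_rng(1)
X_train, y_train = rng.normal(size=(222, 23)), rng.normal(size=222)
X_test, y_test = rng.normal(size=(56, 23)), rng.normal(size=56)

pls = PLSRegression(n_components=14).fit(X_train, y_train)
y_pred = pls.predict(X_test).ravel()

sep = np.sqrt(np.mean((y_test - y_pred) ** 2))   # standard error of prediction on the test set
q2 = np.corrcoef(y_test, y_pred)[0, 1] ** 2      # squared correlation (Q2) on the test set
print(f"SEP = {sep:.3f}, Q2 = {q2:.3f}")
```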

We also looked at the ability of our model to predict the activity of the separate structural subclasses (phenols, nitrobenzenes, and benzonitriles). The SEP values (scaled) for the four data sets were 0.090 for phenols (7), 0.127 for nitrobenzenes (first set) (8), 0.064 for benzonitriles (9), and 0.063 for nitrobenzenes (second set) (10). The slightly worse prediction of the first nitrobenzene data set (8) can be attributed to the lower accuracy of measurement of toxicity in this data set as can be seen when the values for equivalent compounds in this and the second nitrobenzene paper (10) are compared. This was confirmed by the authors (21).

Conclusions

The results of this and our previous studies (19) indicate that Bayesian-regularized artificial neural networks possess several properties that are useful in the analysis of structure-activity data where the relationships are nonlinear or multiple mechanisms are present: (1) the method provides a unique SAR model that is essentially independent of the neural network architecture beyond a minimum (see also ref 19); (2) the number of effective parameters used in the model is lower than the number of weights, as some weights do not contribute to the model, which minimizes the likelihood of overfitting; (3) multiple training runs on a given data set/index/architecture combination result in models that are very similar, suggesting that the method is robust (see also ref 19); and (4) various modes of toxicity are encompassed within the one model, which makes it more generally applicable when predicting the toxicity of untested compounds.

Our work also shows that simple atom properties and connectivity indices are capable of forming a useful model of the toxicity data, thereby removing the need for measured or predicted physicochemical values for compounds when making QSAR models. The indices used in producing the model are simple to comprehend and fast to compute and do not rely on any measured values. In addition, the model is simple to apply to untested substituted benzenes to obtain an estimate of their toxicity. However, the methodology can be applied equally well to models formed from more traditional physicochemical descriptors. We have found that our methodology can form a model with low errors of prediction and high correlation coefficients when the four data sets are combined into one set containing all of the reported compounds, without removal of outliers, even though several mechanisms of toxicity are involved (7-10).


These and our previous studies on the application of BRANNs to the development of QSAR models suggest that the method has the potential to become a "universal", robust method that can be applied to a wide range of problems, and we feel that the method merits consideration by others developing QSAR models in the drug, agrochemical, and toxicological research areas. We are now investigating (22) the use of automatic relevance determination (ARD) for the input variables to eliminate the need for PCA variable reduction prior to training the BRANN. This technique, reviewed by MacKay (14, 24, 25), allows all input parameters to be used in the neural net, with Bayesian inference eliminating those that contain no, or redundant, information. Successful application of ARD will further simplify, and increase the robustness of, QSAR models developed by BRANNs.
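For background, the standard ARD construction (our paraphrase of the form reviewed by MacKay, not a detail taken from ref 22) replaces the single weight-decay constant with one hyperparameter per input: the weights fanning out from each input c receive their own Gaussian prior, and inputs whose α_c is driven to large values during training contribute negligibly and are effectively switched off.

```latex
P(\mathbf{w}\mid\{\alpha_c\},H)=\prod_{c}\left(\frac{\alpha_c}{2\pi}\right)^{k_c/2}\exp\!\left(-\frac{\alpha_c}{2}\,\lVert\mathbf{w}_c\rVert^{2}\right),
```

where w_c denotes the k_c weights attached to input c; the α_c are re-estimated from the evidence in the same way as a single regularization constant.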

References

(1) Hansch, C., and Fujita, T. (1964) ρ-σ-π Analysis. A Method for the Correlation of Biological Activity and Chemical Structure. J. Am. Chem. Soc. 86, 1616.
(2) Burden, F. R. (1996) Using Artificial Neural Networks to Predict Biological Activity from Simple Molecular Structural Considerations. Quant. Struct.-Act. Relat. 15, 7-11.
(3) Hadamard, J. (1902) Sur les problèmes aux dérivées partielles et leur signification physique. Bulletin of the University of Princeton, 49-52.
(4) Goutte, C. (1997) Statistical Learning and Regularization for Regression. Ph.D. Thesis, University of Paris, Paris.
(5) Weinstein, J. N., Kohn, K. W., Grever, M. R., Viswanadhan, V. N., Rubenstein, L. V., Monks, A. P., Scudiero, D. A., Welsch, L., Koutsoukos, A. D., Chiausa, A. J., and Paull, K. D. (1992) Neural Computing in Cancer Drug Development: Predicting Mechanism of Action. Science 258, 447-451.
(6) Brinn, M. W., Walsh, P. T., Payne, M. P., and Bott, B. (1993) Neural Network Prediction of Mutagenicity Using Structure-Property Relationships. SAR QSAR Environ. Res. 1, 169-210.
(7) Cronin, M. T. D., and Schultz, T. W. (1996) Structure-toxicity relationships for phenols to Tetrahymena pyriformis. Chemosphere 32, 1453-1468.
(8) Dearden, J. C., Cronin, M. T. D., Schultz, T. W., and Lin, D. T. (1995) QSAR study of the toxicity of nitrobenzenes to Tetrahymena pyriformis. Quant. Struct.-Act. Relat. 14, 427-432.
(9) Cronin, M. T. D., Bryant, S. E., Dearden, J. C., and Schultz, T. W. (1995) Quantitative structure-activity study of the toxicity of benzonitriles to the ciliate Tetrahymena pyriformis. SAR QSAR Environ. Res. 3, 1-13.
(10) Cronin, M. T. D., Gregory, B. W., and Schultz, T. W. (1998) Quantitative Structure-Activity Analyses of Nitrobenzenes to Tetrahymena pyriformis. Chem. Res. Toxicol. 11, 902-908.
(11) Randic, M. (1975) On Characterization of Molecular Branching. J. Am. Chem. Soc. 97, 6609-6615.
(12) Kier, L. B., and Hall, L. H. (1995) The Molecular Connectivity Chi Indexes and Kappa Shape Indexes in Structure-Property Modelling. In Reviews in Computational Chemistry (Lipkowitz, K. B., and Boyd, D. B., Eds.) Vol. 2, pp 367-422, VCH Publishers, New York.
(13) Andrews, P. R., Craik, D. J., and Martin, J. L. (1984) Functional Group Contributions to Drug-Receptor Interactions. J. Med. Chem. 27, 1648-1657.
(14) MacKay, D. J. C. (1992) A Practical Bayesian Framework for Backprop Networks. Neural Comput. 4, 415-447.
(15) Buntine, W. L., and Weigend, A. S. (1991) Bayesian Back-Propagation. Complex Syst. 5, 603-643.
(16) Husmeier, D., Penny, W. D., and Roberts, S. J. (1999) An Empirical Evaluation of Bayesian Sampling with Monte Carlo for Training Neural Net Classifiers. Neural Networks 12, 677-705.
(17) MATLAB (1998) The MathWorks, Inc., Natick, MA.
(18) Hagan, M. T., and Menhaj, M. (1994) Training Feedforward Networks with the Marquardt Algorithm. IEEE Trans. Neural Networks 5, 989-993.
(19) Burden, F. R., and Winkler, D. A. (1999) Robust QSAR Models Using Bayesian Regularized Artificial Neural Networks. J. Med. Chem. 42, 3183-3187.
(20) Karche, W., and Karabunarliev, S. (1996) The Use of Computer Based Structure-Activity Relationships in Risk Assessment of Industrial Chemicals. J. Chem. Inf. Comput. Sci. 36, 672-677.
(21) Cronin, M. T. D. Unpublished/private communication.
(22) Burden, F. R., Ford, M., Whitley, D., and Winkler, D. A. (2000) The Use of Automatic Relevance Determination in QSAR Studies Using Bayesian Neural Networks. J. Chem. Inf. Comput. Sci. (submitted for publication).
(23) Tikhonov, A., and Arsenin, V. (1977) Solution of Ill-Posed Problems, Winston, Washington, DC.
(24) MacKay, D. J. C. (1995) Probable Networks and Plausible Predictions: A Review of Practical Bayesian Methods for Supervised Neural Networks. Comput. Neural Syst. 6, 469-505.
(25) MacKay, D. J. C. (1992) Bayesian Interpolation. Neural Comput. 4, 415-447.
