
Simplex Pattern Recognition Applied to Carbon-13 Nuclear Magnetic Resonance Spectrometry

Thomas R. Brunner, Charles L. Wilkins,* T. Fai Lam, Leonard J. Soltzberg, and Steven L. Kaberline

Department of Chemistry, University of Nebraska-Lincoln, Lincoln, Neb. 68588

Linear discriminant functions for recognition and prediction of three common organic structural features via examination of proton noise-decoupled carbon-13 nmr spectra have been developed using a modified simplex algorithm. The functions, designed to be used routinely by an nmr spectroscopist, were derived from training sets containing several hundred spectra. Subsequently, the functions were used to interpret approximately 2000 spectra in order to predict the presence or absence of each of the three features for the compounds whose spectra were examined. It is shown that the simplex-based functions are superior to linear learning machine functions for prediction. These results indicate the potential of the simplex method for generating threshold logic units to be incorporated in an on-line pattern recognition system for nmr.

Previous investigations (1-3) have shown that the linear learning machine method (4) can be used to develop a spectral interpretation system for proton noise-decoupled carbon-13 high resolution nmr spectra. Furthermore, multiple discriminant function analysis, using a committee consensus and various preprocessing algorithms, increased the reliability of the method in predicting the presence or absence of various functional groups (5). The linear learning machine approach is a computationally economical and convenient technique, and the resulting functions are well-suited for incorporation in an on-line interpretation system. Thus, spectroscopists can use the method without significant increases in either experiment time or cost. Complete discussions of the merits of the method, its operational principles, and comparisons with other pattern recognition methods are contained in several recent review articles (6-9).

However, the linear learning machine technique also has certain disadvantages. One major disadvantage is the inability of the method to yield optimum pattern classifiers in cases when the data are not linearly separable or not sufficiently representative of the classes. As the classification problem becomes more difficult (e.g., for subtle spectral interpretation questions) the condition that data be linearly separable becomes increasingly unrealistic. Reliability of threshold logic units computed from inseparable data in this way is highly dependent upon the conditions immediately prior to terminating computation. Most often, the computation is terminated after the expenditure of a predetermined arbitrary amount of computer time or after the completion of an arbitrary number of error correction feedback iterations. A second disadvantage is the lack of any convenient means of assuring that, for separable data, the linear discriminant is the best


possible one, which might be defined as that which gives the most accurate results for unknowns (best prediction, as opposed to best recognition). A previous study of mass spectral interpretation (10) has shown that a modified sequential simplex method (11, 12) offers promise of overcoming these problems. In this paper, the application of simplex pattern recognition to nmr data is reported.
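For reference, the linear learning machine training loop referred to above can be sketched briefly. This is not the code of refs. 1-3; it is a minimal Python rendering of the standard error-correction feedback rule for a threshold logic unit, and the starting vector and boundary-case handling are choices made only for the example.

```python
import numpy as np

def train_llm(Xa, y, max_sweeps=1000):
    """Error-correction feedback training of a threshold logic unit.

    Xa: pattern matrix with an appended constant component.
    y:  class labels, +1 or -1.
    Whenever a pattern is misclassified, the decision surface is reflected so
    that the pattern is classified correctly (dot product of opposite sign).
    """
    w = Xa[y == 1].mean(axis=0) - Xa[y == -1].mean(axis=0)   # starting vector (a choice)
    for _ in range(max_sweeps):                              # arbitrary iteration limit
        errors = 0
        for x, t in zip(Xa, y):
            s = w @ x
            if s * t <= 0:                                   # misclassified or on the boundary
                target = -s if s != 0 else t
                w = w + (target - s) * x / (x @ x)           # error-correction feedback step
                errors += 1
        if errors == 0:                                      # converges only for separable data
            break
    return w
```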

EXPERIMENTAL

Data Bases. Three data bases were employed. These were: a published collection of 500 carbon-13 nmr spectra (A) (13); a collection of 99 13C spectra determined in our laboratories (B); and a recent collection of 1767 13C spectra obtained from the literature (C) (14). Chemical shifts were referenced to internal tetramethylsilane and covered a range of approximately 200 ppm. All spectra were proton noise-decoupled. Only collections A and B contained intensity information, which was digitized to integer values between 1 and 100. In each spectrum, the most intense peak was assigned an intensity of 100 and the remaining peaks were encoded relative to that peak. We call this representation absolute intensity encoding (AI). An alternate coding wherein each spectrum within a training set has its intensities normalized to sum to 100 was called normalized absolute intensity coding (NAI). Binary coding (designated PNP, peak-no-peak) was also used by assigning the value "1" to each resolution element possessing a peak and "0" to those not containing peaks. Because Collection C contained only chemical shifts, PNP coding was the only form used for that set. Collection C contained a large number (ca. 15%) of spectra containing fewer peaks than the theoretical number expected.

Collection A included 80 spectra measured in the continuous-wave mode and 420 spectra obtained in the Fourier transform mode. Collection B spectra were measured in the Fourier transform mode using a Varian XL-100-15 spectrometer equipped with a 16K word Varian 620/i computer and a Sykes cassette tape unit for mass storage. Collection C contained spectra measured in both continuous-wave and Fourier transform modes. Duplicate spectra were removed from the three data sets. Since Collection C contained no intensities, when duplicates were found, Collection C spectra were eliminated. Preprocessing of NAI data via Fourier transformation to produce simulated free induction decay data for simplex analysis was as described previously (3). Training sets were drawn from Collection A and contained 400 compounds (200 with methyl) for the methyl functional group question, 340 compounds (167 with carbonyl) for the carbonyl determination, and 268 compounds (130 with phenyl) for the phenyl functional group determination. Prediction sets were obtained by using all of Collections B and C, together with any spectra from Collection A not used for training. This procedure resulted in a 2098 spectrum set (with 545 phenyl spectra) for the phenyl question, a 2026 spectrum set containing 471 carbonyl (C=O attached to anything) spectra for the carbonyl question, and 1966 spectra containing 1456 spectra from methyl compounds for the methyl question.

Computations. Programs for both linear learning machine and modified sequential simplex calculations were written in FORTRAN IV and all computations were performed using an IBM 360/66 computer.
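The three encodings lend themselves to a short illustration. The sketch below is not the authors' FORTRAN IV code; it is a minimal Python version that assumes a spectrum arrives as (chemical shift in ppm, intensity) pairs and uses the 1-ppm resolution elements described here and in Appendix A. The function name and example peak list are invented for the illustration.

```python
import numpy as np

def encode_spectrum(peaks, mode="PNP", n_features=200):
    """Encode a 13C spectrum given as (shift_ppm, intensity) pairs.

    mode = "PNP": 1 if the 1-ppm resolution element contains a peak, else 0.
    mode = "AI" : intensities scaled so the largest peak is 100.
    mode = "NAI": intensities normalized so all peaks sum to 100.
    """
    x = np.zeros(n_features)
    for shift, intensity in peaks:
        # Feature 1 collects shifts below 0 ppm, feature 200 shifts above 199 ppm,
        # and the rest are consecutive 1-ppm intervals (cf. Appendix A).
        j = min(max(int(np.floor(shift)) + 1, 0), n_features - 1)
        x[j] = max(x[j], intensity)            # keep the strongest peak per element
    if mode == "PNP":
        return (x > 0).astype(float)
    if mode == "AI":
        return 100.0 * x / max(x.max(), 1e-12)
    if mode == "NAI":
        return 100.0 * x / max(x.sum(), 1e-12)
    raise ValueError("mode must be PNP, AI, or NAI")

# Example: a toluene-like peak list (shift in ppm, arbitrary relative intensity)
peaks = [(21.3, 60), (125.3, 80), (128.2, 100), (129.0, 100), (137.8, 30)]
pnp = encode_spectrum(peaks, "PNP")
```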

Simplex Method. Although the simplex optimization method has been extensively applied to experiment optimization (15), it is a new approach to chemical pattern recognition (10). Because of this, the details of the method will be briefly reviewed here. The sequential simplex method used was that originally proposed by Spendley, Hext, and Himsworth (11), as later modified by Nelder and Mead (12). A simplex is a geometric figure which is used in the optimization procedure. If the optimization is to be done over d variables, the simplex will contain d + 1 vertices in a d-dimensional variable space. For example, a two-dimensional simplex is a triangle and a three-dimensional simplex is a tetrahedron. A response function is evaluated for each of the vertices. Then the simplex is moved along the response surface in weight vector space to find an optimum. This optimum is approached by movement away from the least desirable response. In its original form, the simplex moved only by a direct reflection away from the worst response across the other d vertices (11). Modifications by Nelder and Mead allow the simplex to follow more closely the contours of the response surface (12).

[Table I. Coordinates for Initial Weight Vectors: d + 1 coordinates for each weight vector. The numerical scheme is not recoverable from this scan.]

For chemistry pattern recognition, the optimization problem has been defined as finding a weight vector which gives the maximum recognition for a training set of patterns. The search variables are the components of the weight vector and the response is the number of members of the training set correctly recognized by the weight vector. Whenever a new weight vector is defined, the dot product of the weight vector, W, with each pattern vector must be calculated. The recognition is the number of the patterns correctly classified by W. Since the (d + 1)st term of the weight vector is defined to be -100, one half of all possible solutions of the problem have been excluded. That is, if a complete search is to be made through weight space, possibilities (a) and (b) both must be considered.

    (a)  W_u · X > 0 implies category 1
         W_u · X < 0 implies category 2          (1)

    (b)  W_d · X < 0 implies category 1
         W_d · X > 0 implies category 2          (2)

The transformation W_d = -W_u is required to change from (a) to (b). However, if in W_u the (d + 1)st component is -100, then W_d will never be searched. Two solutions are evident. First, a search may be run through weight space with W_(d+1) = -100 and another search with W_(d+1) = +100. The relative optima can then be compared and the largest chosen as the true optimum. Or, alternately, the recognition is not allowed to fall below 50%. Whenever the recognition falls below 50%, the decision hyperplane is inverted (W_u transformed to W_d) and recognition taken as the complement of the value below 50%. In this procedure, not only must the recognition be recorded, but also whether the decision surface has been inverted.

The original simplex method was based on the assumption that the response surface is continuous and has a unique maximum in the region of the search. However, by its specific nature, the response surface in the recognition problem is discontinuous; that is, only integral numbers of patterns may be classified. Therefore, the response surface is a series of plateaus with heights proportional to the number of patterns correctly classified, and it is possible for the simplex to become stranded on such a plateau. To force the response surface to be continuous, a second optimization criterion is added. For each vertex of the simplex, not only is the recognition tabulated but also the perceptron criterion function is computed. The response then has the form of a continuous variable. The best vertex (weight vector) will have the minimum perceptron function value with maximum recognition. The perceptron function (16) is the sum of the absolute values of the dot products of all misclassified patterns with the weight vector (represented by the vertex in question),

    perceptron criterion = Σ_{X ∈ S_W} |W · X|          (3)

where S_W is the set of all patterns misclassified by W. In general, the perceptron criterion will be small if few patterns are misclassified and those misclassified lie close to the decision hyperplane. This criterion is used only as a smoothing function so that the ultimate solution will be minimum perceptron value for the weight vector giving maximum recognition.

The initial vertices of the simplex should be located in a region in the weight vector space where the optimum is likely to occur. The pattern vectors are the only guides to choosing this location. A reasonable approach to calculating the initial vectors is to find some point which is typical of category 1 and some other point typical of category 2. An easily calculated typical or representative point for a given category is the center of mass or mean pattern of the category. Given that the mean pattern for category i is X̄_i, then

    X̄_i = (x̄_i1, x̄_i2, . . . , x̄_id, 100)          (4)

where x̄_ij is the average value of feature j for members of class i. When the two class means have been calculated, the decision boundary may be approximated as the hyperplane bisecting these two means. Once the initial boundary has been approximated, the starting simplex is chosen so as to span the weight vector space which is to be searched (10). This is ensured by selecting coordinates for the initial weight vectors according to the scheme outlined in Table I. The initial values w_d are obtained by selecting values mid-way between the average data values for each of the two categories being classified (Equation 4). The spanning constant, c (Table I), is an empirically determined constant which had the value 10 000 in this study. To proceed with optimization using, for example, a 102 feature pattern set, one would form 103 initial weight vectors (the vertices of a starting simplex) as described above and generate new simplexes following the protocols described here and previously (10) until a weight vector giving maximum recognition and minimum perceptron is located.

Feature Selection. In order to reduce the number of features to a computationally convenient number while maintaining acceptable recognition levels, the Fisher ratio (17) was used. The procedure employed was to rank the features according to the Fisher ratios calculated and to use those having the largest ratios. For feature K, in a 2-class problem, the Fisher ratio is given by (17):

    F_K = (x̄_1K - x̄_2K)^2 / (V_1K + V_2K)          (5)

where x̄_IK = the average value of feature K for members of class I and V_IK = the variance in the values of feature K for members of class I. Thus, only features having the favorable properties of disparate means between the categories combined with small scatter within each category will have large values for the Fisher ratio and be selected. For feature selection, the 200-ppm shift range was divided into 1-ppm resolution elements.
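The training procedure of this section can be compressed into a brief sketch. It is not the authors' FORTRAN IV program: scipy's built-in Nelder-Mead routine stands in for the modified Spendley-Hext-Himsworth simplex, the two-level criterion (maximum recognition first, then minimum perceptron value) is collapsed into a single scalar objective, and the helper names, the starting vector, and the small weighting constant are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import minimize

def fisher_ratios(X, y):
    """Fisher ratio of Equation 5 for each feature (2-class problem, y in {0, 1})."""
    m1, m2 = X[y == 1].mean(axis=0), X[y == 0].mean(axis=0)
    v1, v2 = X[y == 1].var(axis=0), X[y == 0].var(axis=0)
    return (m1 - m2) ** 2 / (v1 + v2 + 1e-12)

def response(w, Xa, y):
    """Scalar response to be minimized: misclassifications plus a small perceptron term.

    Xa is the pattern matrix augmented with a constant last component, so that
    w[-1] plays the role of the (d + 1)st weight (initialized near -100 below).
    """
    s = Xa @ w
    wrong = (s > 0) != (y == 1)                 # misclassified patterns
    perceptron = np.abs(s[wrong]).sum()         # Equation 3
    return wrong.sum() + 1e-6 * perceptron      # recognition first, perceptron as smoothing

def train_simplex(X, y, n_keep=102, maxiter=20000):
    keep = np.argsort(fisher_ratios(X, y))[::-1][:n_keep]    # largest Fisher ratios
    Xa = np.hstack([X[:, keep], np.ones((len(X), 1))])       # append (d + 1)st component
    # Initial weights mid-way between the two class means; bias started at -100
    # (the paper fixes the (d + 1)st weight; here it is left free for simplicity).
    w0 = np.append((Xa[y == 1, :-1].mean(0) + Xa[y == 0, :-1].mean(0)) / 2, -100.0)
    res = minimize(response, w0, args=(Xa, y), method="Nelder-Mead",
                   options={"maxiter": maxiter, "xatol": 1e-3, "fatol": 1e-3})
    return keep, res.x
```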

RESULTS AND DISCUSSION

Determination of the presence or absence of three functional groups (carbonyl, phenyl, and methyl) was chosen for this investigation. These groups occur frequently in spectra Collection A. Table II summarizes the results for both simplex and linear learning machine analysis of PNP encoded data. Several trials, employing successively fewer features, were carried out for each functional group question and each method. The same features and training sets were used for each of the methods. For computational convenience, an arbitrary upper limit of 102 of the 200 possible features was imposed. For these data, the values of the 200 Fisher ratios ranged from 0.00 to 2.39. Feature selection was accomplished by selecting those features having the largest Fisher ratios. If the Fisher ratios for all 200 features are ranked according to magnitude and summed, the largest 102 of those features account for 95% of the sum. Study of the PNP data (no peak intensity information used) contained in Table II reveals a number of significant facts.


Table II. Comparison of PNP Analyses Using Simplex and Linear Learning Machine

                               Recognition (%)    Prediction (%)    (Recognition - Prediction) (%)
Functional group    Features   Simplex    LLM     Simplex    LLM       Simplex    LLM
Phenyl(a)             102        95       100       80.0     78.8        15        21
                       85        95       100       78.8     79.8        16        20
                       68        94       100       81.3     80.0        13        20
                       52        93        83       76.6     66.4        16        17
Carbonyl(b,c)         102        81       100       74.0     64.7         7        35
                       85        79       100       76.8     62.2         2        38
                       68        82        81       78.4     69.9         4        11
                       52        80        79       75.3     56.6         5        22
Methyl(d)             100        87        77       71.9     69.4        15         8
                       80        84        71       72.4     62.1        12         9
                       70        81        70       71.0     68.5        10         2

(a) Training set was 268 spectra from Collection A, containing 130 phenyl compound spectra. Prediction set was 2098 spectra comprised of Collections B and C and all spectra from set A not used for training; 545 spectra were those of phenyl compounds. (b) Training set was 340 spectra from Collection A, containing 167 carbonyl compound spectra. Prediction set was 2026 spectra comprised of Collections B and C and all spectra from set A not used for training; 471 spectra were those of carbonyl compounds. (c) Carbonyl was defined as the C=O feature, irrespective of what was attached. (d) Training set was 400 spectra from Collection A, containing 200 methyl compound spectra. Prediction set was 1966 spectra comprised of Collections B and C and all spectra from set A not used for training; 1456 spectra were those of methyl compounds.

Table III. Comparison of Simplex and Linear Learning Machine for Various Data Representations

                                              Recognition (%)(a)    Prediction (%)(b)
Functional group    Preprocessing   Features   Simplex    LLM        Simplex    LLM
Phenyl              AI(c)             102         96      100          98       83
                                       85         91      100          95       80
                                       68         91      100         100       80
                                       52         88       81          84       76
                    NAI(d)            102         94      100          93       81
                                       85         93      100          94       80
                                       68         95      100          95       80
                                       52         81       84          90       75
                    FT(e)             102         94       83          84       72
Carbonyl            AI                102         80      100          68       56
                                       85         79       76          57       56
                                       68         79       79          67       57
                    NAI               102         79      100          68       55
                                       85         79       75          56       56
                                       68         78       78          66       56
                    FT                102         74       55          60       52
Methyl              AI                100         83       68          69       68
                                       80         80       62          64       56
                                       70         82       60          65       60
                    NAI               100         81       68          71       64
                                       80         80       63          68       60
                                       70         83       61          69       60
                    FT                102         61       48          59       45

(a) Training sets were those used for PNP analyses. (b) Prediction for the 99 compounds in Collection B. (c) Absolute intensity, largest peak in spectrum assigned 100; others relative to that. (d) Absolute intensities normalized so all peaks in spectrum sum to 100. (e) Fourier transformed NAI data. The original table also tabulates the (Recognition - Prediction) difference for each entry.

First, for the separable data sets (linear learning machine recognition 100%), the simplex procedure employed converged to a local maximum in the recognition response surface. This disadvantage is difficult to avoid since results can be highly dependent on the location of the initial simplex chosen. In this regard, it should be noted the simplex method shares with the linear learning machine method its dependence on choice of starting conditions and details of the optimization (or for LLM, training) procedure. In spite of this, in all but one of the eleven training sets examined prediction accuracy was better for the simplex-derived weight vectors. Furthermore, when


sufficient feature elimination had taken place to make the data no longer separable by the linear learning machine method, within the computational constraints imposed, recognition performance of the simplex weight vectors was better in every case. Equally important, there appears to be relatively little degradation of either recognition or prediction performance for the simplex weight vectors as separability of the data sets is reduced by feature elimination. For two of the three questions explored, the parallelism between recognition and prediction percentages is more pronounced for simplex weight vectors. These results clearly support the previous

conclusion (10) that the near-optimal weight vectors obtained by the simplex method appear to perform equally well for separable or inseparable data.

The observed parallelism between recognition and prediction is a desirable property of the functions, since it is further evidence that the weight vectors obtained are near-optimum in the desired sense of not being highly dependent on the specific choice of a limited training set. In this connection, it should be emphasized that correct prediction performance averaging 76% for close to 2000 unknowns is a level which should be of substantial help in spectral interpretation by trained spectroscopists when combined with ancillary information (e.g., infrared and mass spectral data). Accordingly, it seems reasonable to extend the present approach to determination of weight vectors for a number of the most common structural questions and to incorporate those results in an on-line minicomputer system. In this way, the functions can be applied routinely to all spectra obtained.

Because of the preponderance of literature spectra which do not include carbon-13 nmr intensity information, it was of interest to determine for the same functional group questions and data sets how important this information is. In order to examine this question, as well as to determine the relative merits of various intensity preprocessing algorithms within the present context, it was necessary to use a much smaller prediction test set (Collection B) which contained intensity data. However, training for both simplex and linear learning machine weight vectors could be carried out on the same preprocessed training set that was used for the PNP study. The expected result of using a 99 compound prediction set, instead of the 2000 compounds used to obtain the data in Table II, would be to raise the performance percentages somewhat, since the Collection B data are clearly more complete and error-free. Thus, the results reported in Table III are, if anything, biased in favor of the intensity encoded approach. Yet only for the phenyl question do any of the preprocessing methods yield substantial improvements over the PNP results. Irrespective of whether absolute intensities (AI), normalized absolute intensities (NAI), or Fourier transformed NAI data were used, superior prediction performance of simplex weight vectors over linear learning machine vectors was found. As with the PNP data, a closer parallelism between recognition and prediction performance was found for simplex weight vectors derived from the various intensity data. Consistent recognition performance for both separable and inseparable data sets is once more found and, again, it is observed that the simplex converged on local maxima during the training process in those cases where those data were known, from the success of the linear learning machine, to be separable. When the data in Table II and Table III are compared, it does not appear that lack of intensity information seriously compromises the usefulness of resulting weight vectors. Accordingly, in future work to develop simplex weight vectors for incorporation into an on-line nmr system, it is planned to use PNP data exclusively.

SUMMARY

It has been demonstrated that the modified sequential simplex method can be applied successfully to analysis of peak/no peak-encoded carbon-13 nmr data. Furthermore, when resulting weight vectors for the three questions examined are applied to large sets of literature data, the correct prediction percentages are sufficiently high that the predictions should be of aid to spectroscopists in interpreting the spectra of unknown compounds. Therefore, the best simplex PNP vectors are included in Appendix A. A particularly useful finding is that the performance of simplex-derived weight vectors is approximately as good for inseparable data as for

separable. The method therefore shows promise as the basis for elaboration of an on-line interpretation system. Further work directed toward development of such a system is under way.

Appendix A

This appendix contains the best PNP weight vectors for each functional group investigated in this study. They were scaled by dividing by 10^3. Included with the weight vectors are their corresponding features. Chemical shifts less than tetramethylsilane (0.00 ppm) are included in feature one. Chemical shifts greater than 199.00 ppm are included in feature 200. The other features correspond to consecutive 1-ppm intervals from 0.00 to 198.99 ppm. Thus, feature 2 contains chemical shifts between 0.00 and 0.99 ppm, feature 3 between 1.00 and 1.99 ppm, and so on. Finally, the weight for the d + 1 feature is 0.1000. To make a prediction, the summation

    Σ_j x_j w_j + w_(d+1)

over all features used is carried out. A positive sum predicts the presence of the group in question. In the listings below, the nth entry in each "Weights" line is the weight for the nth entry in the corresponding "Features" line.

Phenyl weight vector (first part; continued below)
Features: 14 15 18 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 41 43 44 53 56 61 65 70 71 113 114 115 116 117 118 119 121 122 123 124 125 126 127 128 129
Weights:  -10.51 -12.43 -10.00 -9.542 -6.913 2.500 -7.943 -6.604 -11.44 -10.53 -8.221 -11.41 -8.621 -2.729 -7.965 -6.459 -12.41 0.06042 -2.530 -1.491 -7.758 0.09517 -3.893 12.96 0.7765 1.167 7.360 -6.836 -2.795 -1.612 -8.292 -3.759 30.21 9.494 13.06 39.83 -0.3488 -0.5844 6.507 -6.755 25.56 7.770 9.331 37.34 13.22 16.30 39.73

Carbonyl weight vector (first part; continued below)
Features: 2 10 11 31 34 38 42 46 49 52 62 63 64 66 67 71 72 73 75 76 77 78 94 97 102 103 105 109 119 120 121 122 123 132 136 138 142 147 148 149 150 151 157 158 159 161 162
Weights:  -6.455 -2.379 -3.432 -0.7807 -0.4803 -1.112 -1.021 -2.603 -0.1225 -0.3728 -0.4543 -0.3981 0.1012 -0.1677 -0.2024 -0.7510 -1.202 -0.2634 -0.2873 -0.4200 -2.492 -1.610 -1.798 -2.601 -0.5012 -0.6517 -1.001 -3.034 -0.771 -0.9819 -1.232 -0.7800 -0.5810 5.435 1.754 -0.5164 -0.2524 -2.472 -1.659 -0.7015 0.6421 0.8002 2.059 0.3876 0.1078 0.1563 0.4015

Methyl weight vector (first part; continued below)
Features: 1 3 7 8 11 12 13 14 15 16 18 19 20 21 23 25 30 31 32 33 34 39 40 42 43 53 55 57 59 61 64 65 72 74 75 76 77 78 80 83 86 94 98 105 109 116 118
Weights:  10.26 12.85 7.700 10.97 10.17 10.07 6.359 8.414 0.8846 16.22 8.863 9.347 13.29 9.701 2.549 6.233 0.4992 0.8420 9.954 2.414 3.489 4.313 7.131 0.9565 -3.846 5.449 4.024 4.215 2.970 6.751 -6.609 10.14 -3.979 -1.654 -5.142 2.132 -0.2139 -0.9361 0.1182 1.557 -4.507 -2.592 -2.538 -1.517 -5.325 -2.136 -1.896


Phenyl weight vector (continued)
Features: 130 131 132 133 134 135 136 137 138 139 140 141 142 144 148 149 150 153 162 171 200
Weights:  23.78 26.66 22.70 28.59 36.57 12.00 2.800 8.127 15.62 14.20 2.630 3.491 -6.131 -4.112 11.92 -3.392 -11.55 9.232 5.815 -6.166 -3.936

Carbonyl weight vector (continued)
Features: 163 167 168 170 171 173 175 176 178 179 180 181 184 185 192 194 195 197 198 199 200
Weights:  0.2277 0.4329 0.5748 3.574 2.611 0.7677 1.764 2.032 1.007 0.8176 1.329 1.923 2.260 2.065 1.669 2.680 1.262 2.601 2.530 1.287 2.133

Methyl weight vector (continued)
Features: 120 122 125 126 127 128 129 131 133 134 137 138 140 141 144 145 150 151 152 155 161 162 163 165 168 169 177 180 184 188 190 195 197
Weights:  -9.088 -2.136 -7.652 0.7552 -0.6582 -1.417 -2.151 0.8216 -1.419 -0.9908 -3.082 -7.751 6.587 -1.121 -9.418 0.1361 -0.09909 -0.8252 -6.921 11.14 -0.9111 -1.805 9.893 0.5382 -1.200 10.08 -5.838 -3.813 7.441 -3.041 7.482 13.75 -1.905
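Applying one of these weight vectors to an encoded unknown follows the prediction rule stated at the head of this appendix. The Python sketch below is an illustration only: the truncated feature and weight lists are placeholders for the full lists printed above, and encode_spectrum refers to the earlier encoding sketch.

```python
def predict(features, weights, x, w_d_plus_1=0.1000):
    """Appendix A prediction rule: sum x_j * w_j over the listed features,
    add the (d + 1)st weight (0.1000 after scaling), and call the group
    "present" if the total is positive.

    `features` are the 1-based feature numbers printed above; `x` is the
    200-element PNP vector from encode_spectrum (0-based indexing).
    """
    s = sum(x[f - 1] * w for f, w in zip(features, weights)) + w_d_plus_1
    return s > 0

# First three phenyl entries only, to keep the example short;
# the full feature and weight lists are given above.
phenyl_features = [14, 15, 18]
phenyl_weights = [-10.51, -12.43, -10.00]

x = encode_spectrum(peaks, "PNP")        # from the earlier encoding sketch
print(predict(phenyl_features, phenyl_weights, x))
```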

LITERATURE CITED

(1) C. L. Wilkins, R. C. Williams, T. R. Brunner, and P. J. McCombie, J. Am. Chem. Soc., 96, 4182 (1974).
(2) T. R. Brunner, R. C. Williams, C. L. Wilkins, and P. J. McCombie, Anal. Chem., 46, 1798 (1974).
(3) T. R. Brunner, C. L. Wilkins, R. C. Williams, and P. J. McCombie, Anal. Chem., 47, 662 (1975).
(4) N. J. Nilsson, "Learning Machines", McGraw-Hill, New York, N.Y., 1965.
(5) C. L. Wilkins and T. L. Isenhour, Anal. Chem., 47, 1849 (1975).
(6) T. L. Isenhour and P. C. Jurs, Anal. Chem., 43 (10), 20 A (1971).
(7) T. L. Isenhour, B. R. Kowalski, and P. C. Jurs, Crit. Rev. Anal. Chem., 5 (3), 1-44 (1974).
(8) P. C. Jurs and T. L. Isenhour, "Chemical Applications of Pattern Recognition", John Wiley and Sons, New York, N.Y., 1975.
(9) B. R. Kowalski, Comput. Chem. Biochem. Res., 2, 1-76 (1974).
(10) G. L. Ritter, S. R. Lowry, C. L. Wilkins, and T. L. Isenhour, Anal. Chem., 47, 1951 (1975).
(11) W. Spendley, G. R. Hext, and F. R. Himsworth, Technometrics, 4, 441 (1962).
(12) J. A. Nelder and R. Mead, Comput. J., 7, 308 (1965).
(13) L. F. Johnson and W. C. Jankowski, "Carbon-13 NMR Spectra", John Wiley and Sons, New York, N.Y., 1972.
(14) B. Jezl and D. L. Dalrymple, Anal. Chem., 47, 203 (1975). In addition to the collection reported here, spectra collected by the Environmental Protection Agency are included in this set.
(15) S. N. Deming and S. L. Morgan, Anal. Chem., 45, 278A (1973).
(16) R. O. Duda and P. E. Hart, "Pattern Classification and Scene Analysis", Wiley-Interscience, New York, N.Y., 1973, p 141.
(17) R. O. Duda and P. E. Hart, Ref. 16, p 116.

RECEIVED for review February 13, 1976. Accepted April 5, 1976. Support of this research by the National Science Foundation under Grant MPS-74-01249 is gratefully acknowledged. Partial support for the spectrometer and data systems was provided by NSF Grants GP-10293 and GP-18383. LJS is Visiting Associate Professor, 1975-76, from Simmons College, Boston, Mass.

Normal-to-Sequency-Ordered Hadamard Matrix Conversion

Robert C. Williams* and F. D. Crary

3M Central Research Laboratories, St. Paul, Minn. 55133, and Department of Computer Sciences, University of Wisconsin, Madison, Wis. 53706

A new algorithm for the interconversion of Kronecker Normal and Sequency-Ordered Hadamard Matrices is reported. The mathematical proof (by induction) of this algorithm is given. The generalized schematic of an electronic circuit which implements this algorithm is also included.

The increasing use of the Hadamard-Walsh transform in spectral signal processing applications, in general (1-9), and in pattern recognition studies in particular (10-12), has drawn our attention to the Kronecker Normal to Sequency-Ordered Hadamard Matrix conversion (13). In this paper, we report 1) a conversion algorithm more concise than that previously discussed (13); 2) a proof of both algorithms; and 3) the successful development of a prototype device which performs the desired conversion electronically. This algorithm should significantly reduce the time and tedium required to preprocess spectral data, when a sequency-ordered Hadamard transform is desired. Similarly, the hardwired circuit complements presently existing (cyclic) Hadamard transform spectrometers


(4-9), and should facilitate the on-line conversion between Kronecker normal and sequency-ordered data in real-time applications.

DISCUSSION

Rather than the complex sequence of bit-complement and bit-zeroing operations reported previously (13), the present conversion algorithm requires only a series of "exclusive or" operations, followed by a bit reversal of the number obtained. As in the previous case, this algorithm is designed for use with Hadamard matrices of order N = 2^n, whose indices are positive integers ranging from zero to 2^n - 1. Using the notation of the previous paper, let M be the n-bit binary number representation for a given row in the Kronecker Normal Hadamard Matrix:

    M = Σ_{i=0}^{n-1} 2^i m_i

where m_i are the individual bits in M. Let P be the binary number of the corresponding row in the sequency-ordered matrix (i.e., the number of level crossings in the Mth row):
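The conversion described above, a cascade of exclusive-or operations followed by a bit reversal, can be sketched in a few lines of Python. This is not the authors' algorithm statement or circuit: the cumulative-XOR reading of the "series of exclusive or operations" and the row-index convention are assumptions, so the sketch checks itself against a brute-force count of level crossings in the Kronecker-ordered matrix.

```python
import numpy as np

def natural_to_sequency(m, n):
    """Convert row index m of the Kronecker (natural) ordered Hadamard matrix of
    order 2**n to its sequency-ordered index: cumulative XOR of the bits of m
    from the least significant bit upward, then a bit reversal of the result."""
    bits = [(m >> i) & 1 for i in range(n)]          # LSB first
    acc, xored = 0, []
    for b in bits:                                   # series of exclusive-or operations
        acc ^= b
        xored.append(acc)
    # Bit reversal: the cumulative bit for position i becomes bit (n - 1 - i)
    return sum(b << (n - 1 - i) for i, b in enumerate(xored))

def sequency_by_counting(m, n):
    """Brute force: count sign changes (level crossings) in row m of the
    Kronecker-ordered Hadamard matrix H2 (x) H2 (x) ... (x) H2."""
    H = np.array([[1]])
    for _ in range(n):
        H = np.kron(np.array([[1, 1], [1, -1]]), H)
    row = H[m]
    return int(np.sum(row[1:] != row[:-1]))

n = 4
assert all(natural_to_sequency(m, n) == sequency_by_counting(m, n)
           for m in range(2 ** n))
```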