10 The Universe Is Stochastic and Nonlinear
LARRY M. STURDIVAN and BARBARA A. B. SEIDERS
Chemical Research and Development Center, Aberdeen Proving Ground, MD 21010
It is perhaps in the nature of man to look for determinism in the universe. In a primitive society, necessarily very close to nature, sound and movement in inanimate objects were associated with unseen, and potentially dangerous, animal life: a wolf brushing past the undergrowth, or a snake slithering between the rocks. It was only natural that man would feel compelled to invent invisible spirits to move the wind and water. Today, while we know what moves the wind and water, their apparent randomness continues to trouble us. We build in the paths of flood and hurricane, and continue to blame the damage they cause on their unpredictable nature. The implication is that if we knew all the factors involved, we could predict the weather long enough in advance to do something about it.

After millennia of debating whether nature is in its essence deterministic, and therefore predictable, the answer is still not known. Even Albert Einstein, who pioneered work in statistical mechanics, was quoted (1) as saying: "Quantum mechanics is very impressive. But an inner voice tells me that this is not the real Jacob. The theory has much to offer, but it does not bring us closer to the secret of the Old One. At least I am convinced that He does not throw dice." As long as our knowledge of the nature of the universe is obtained by observation rather than by inspiration, we would argue that the question cannot be answered. Limits on our ability to observe details at the atomic and subatomic levels, as expressed by Heisenberg's Uncertainty Principle, put the answer permanently beyond our grasp.

However, in the context of the everyday laboratory the question is moot. It may be that the immutable laws of a deterministic universe dictated that, in the middle of a star at the edge of the universe millions of years ago, an atom was stripped of its electrons and sent our way at just less than the speed of light. But when that "cosmic ray" crashes through our cloud chamber, ruining our experiment, we have no choice but to regard it as a chance event. Even if we had an infinite capacity to store facts about the present state of the universe and had them all in place (a universal data base), and if we had an infinite processing rate (the ultimate computer), we would still need an exact model of the universe
(method of combining the facts) to predict the future unerringly. However, it is likely that the only exact model of the universe is the universe itself. It is inevitable that any model we construct of some piece of the universe will have artifacts that have no analog in reality. Since we cannot find exact models, the best we can do is to build models which are useful in the context within which we wish to employ them. It may be that the concept of probability is itself an artifact which man has invented to express the uncertainty arising from excluding relevant factors from the model, factors which are infeasible or impossible to measure or whose influence is unsuspected. If so, it is an artifact which is often very useful when employed properly.

Another artifact whose appealing simplicity has resulted in its overuse is the concept of linearity. In fact, the term is used to express several closely related concepts. Originally, the term linear meant a straight line. Mathematically, the equation for a straight line is the same as for a strictly proportional relationship:

y = a + bx

This was
extended to include additive proportionality models:

y = a + bx + cz

or, most generally,
y = Σᵢ bᵢ xᵢ
where the bᵢ are (unknown) proportionality constants and the xᵢ are independent variables or functions of independent variables and known constants. The latter equation includes polynomials as well as functions of several variables. It is termed linear with respect to the unknown bᵢ's. In space, such functions are no longer equivalent to straight lines but to planes and hyperplanes in n-dimensional space.

When we say the universe is nonlinear we mean that such equations are seldom very useful models for extrapolating or interpolating natural systems. In a very real sense, the phrase could also be applied to space itself. In relativistic terms, because of the curvature of space-time, the geodesics that light follows are not straight lines. Thus, both literally and figuratively, the universe is nonlinear. If one is dealing with a small enough piece of it, a linear approximation might be useful. If one is lucky, the variance might be small enough that it could be considered deterministic. If one is very lucky, it might be adequate to model it as both linear and deterministic. But what do we do when we aren't so lucky? Most of the models that physical scientists have proposed acknowledge the nonlinear nature of nature, but consider it deterministic. Statisticians acknowledge the stochastic nature of the universe, but have taken only a few tentative steps beyond the "general linear hypotheses". Not much has been done in that difficult area where a clearly nonlinear phenomenon involves a significant element of chance.
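To make the "linear with respect to the unknown bᵢ's" distinction concrete, here is a minimal Python sketch (data invented purely for illustration) fitting a quadratic y = b₀ + b₁x + b₂x². The model is a curve in x, but because it is linear in the unknown bᵢ it can be fitted by ordinary least squares in a single step:

```python
import numpy as np

# Invented sample data, purely for illustration.
x = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
y = np.array([1.1, 1.9, 3.4, 5.2, 7.8, 10.9])

# Basis functions g_i(x) for the model y = b0 + b1*x + b2*x^2:
# nonlinear in x, but linear in the unknown coefficients b_i.
G = np.column_stack([np.ones_like(x), x, x**2])

# One direct solve; no iteration or starting guesses required.
b, *_ = np.linalg.lstsq(G, y, rcond=None)
print("fitted b_i:", b)
```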
However, avoiding the problem is like the drunk who looks for his keys under the streetlight because the light is better there than in the middle of the block where he lost them. There are some techniques that are clearly useful, and principles which can be generally applied to such problems. These can probably best be introduced by means of some examples.

The first is from our experience in personal protection. The penetration of fabric body armor by a ballistic projectile is clearly a stochastic phenomenon. For any particular vest-projectile combination, there is a span of velocity in which penetrations and nonpenetrations are mixed. The probability of penetration within this zone is influenced by at least the following factors:

V = velocity of the projectile
M = mass of the projectile
A = size of the projectile
t = thickness of the vest
T = tensile strength of the vest material

Because the vest is made of several layers of cloth, the measurement of thickness is a difficult task. (How much air do you attempt to squeeze out from between and within the layers?) The most consistent method is to calculate the thickness equivalent to that which the vest would have if it were a single solid layer, i.e.,

t = (mass per unit area of cloth) / (density of the material)

The size of the projectile is well represented by its mean presented area, i.e., the mean area of its shadow cast on a plane, averaged over all possible orientations. An appropriate scaling model can be derived for penetration, although the derivation is beyond the scope of this paper. The result may be expressed as

x = ½MV² / (A t T)

where the ratio x is dimensionless. According to the principle of similitude (2), scaling laws must consist of combinations of dimensionless ratios. Not all dimensionless ratios constitute legitimate scaling laws, but all legitimate scaling laws can be expressed as a combination of dimensionless ratios. It is postulated that equal values of the variable x give rise to equal probability of vest penetration.

Figure 1 shows a plot of penetration data from a number of projectiles fired against vests of various thicknesses. The data points represent mean penetration velocities, derived from a number of impacts, for each vest/projectile combination. It is plotted on logarithmic axes to equalize variance. Energy (½MV²) is plotted against the other variables to show the spread of data in energy. Because logs are plotted against logs, a line of slope 1 represents a contour of equal probability (equal values of x). The tensile strength T is missing from these axes because the data were scaled for a constant T before plotting (i.e., it is included implicitly).

After an appropriate scaling model is found, a probability function may be fitted to the data. The function fitted should be appropriate for the situation. If there is no theoretical basis for choosing one over another, then the choice can be made on convenience.
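A minimal sketch of the scaling computation defined above, with invented but plausible values; in consistent SI units the numerator and denominator are both energies, so x reduces to a pure number:

```python
# Dimensionless penetration variable x = (1/2) M V^2 / (A t T).
# All numbers are invented for illustration, in SI units.
M = 0.008        # projectile mass, kg
V = 400.0        # projectile velocity, m/s
A = 3.0e-5       # mean presented area, m^2
rho = 1440.0     # density of the vest material, kg/m^3
areal = 5.0      # mass per unit area of cloth, kg/m^2
T = 3.0e9        # tensile strength of vest material, N/m^2

t = areal / rho  # equivalent single-solid-layer thickness, m

# Numerator: kg m^2/s^2 = J.  Denominator: m^2 * m * N/m^2 = N m = J.
x = 0.5 * M * V**2 / (A * t * T)
print(f"t = {t:.4f} m, dimensionless x = {x:.3f}")
```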
[Figure 1. Penetration data for fabric armor: energy (½MV²) plotted against the remaining model variables for various vest/projectile combinations, on logarithmic axes.]
For fitting dichotomous data to the Logistic function, a mathematically tractable distribution, the method of Walker and Duncan (3) is convenient. Figure 2 shows the typical S-shaped probability distribution that results from fitting the Logistic function

P = 1 / (1 + e^−(a + b ln x))

to the dichotomous data on penetration. The straight line of slope 1 in Figure 1 is actually the 50% probability contour from the equation fitted to the raw data. It is not a least-squares fit to the means plotted on the figure.

The second example is from a mixed biological/physical problem. It deals with the probability that blunt trauma to the chest or abdomen would be lethal to man. It has been used to assess the hazard of large ballistic projectiles moving at moderate velocity, the hazard behind body armor which has stopped a handgun bullet, etc. The scaling model, which again is too lengthy to derive, is (4)

x = ½MV² / (W^(1/3) t d)

where

M = mass of the projectile
V = velocity of the projectile
W = mass of the individual
t = thickness of the body wall over the vulnerable organ
d = √(4A/π) = the effective diameter of the projectile
A = mean presented area
Notice that if the constants

ρ = mean density of the individual
T = tensile strength of the tissue

were included, the product would be a dimensionless ratio comparable to that of the previous example; i.e.,

x′ = ½MV² ρ^(1/3) / (T W^(1/3) t d) ∝ x

As in the previous model, the factors assumed to remain constant, ρ and T, are assumed to be absorbed in the curve-fitting constants when fitted to the probability function. Figure 3 shows how well the model fits the mean data. A plot of the probability curve would be exactly like Figure 2 with a change in scale.
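A parallel sketch for the blunt trauma variable, with values again invented for illustration; a comment notes the residual units that the constant factor ρ^(1/3)/T would cancel:

```python
import math

# Blunt trauma variable x = (1/2) M V^2 / (W^(1/3) t d), with
# d = sqrt(4A/pi), the effective diameter of the projectile.
# All numbers are invented for illustration, in SI units.
M = 0.14        # projectile mass, kg
V = 60.0        # projectile velocity, m/s
W = 75.0        # mass of the individual, kg
t = 0.025       # body-wall thickness over the vulnerable organ, m
A = 1.5e-3      # mean presented area, m^2

d = math.sqrt(4.0 * A / math.pi)   # effective diameter, m

# Unlike the armor example, this ratio carries residual units
# (kg^(2/3)/s^2); rho^(1/3)/T would cancel them, but those factors
# are constant and are absorbed into the fitted coefficients.
x = 0.5 * M * V**2 / (W ** (1.0 / 3.0) * t * d)
print(f"d = {d:.4f} m, model variable x = {x:.1f}")
```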
Figure 2. The Probability of Penetrating Fabric Armor as a Function of the Model Variable, x.

Figure 3. Vulnerability of the Thorax to Blunt Trauma (see text for a definition of variables).
Given these introductory examples of applied stochastic models, we can discuss in more detail some of the techniques and principles which are particularly useful in deriving and fitting this type of model.

One of the most useful modeling and scaling techniques to be found is Dimensional Analysis, embodying the principle of similitude (5,6), so named by Galileo in the 17th century and given a formal framework in 1822 by Fourier. The rules for manipulating the fundamental units of measure which Fourier proposed have evolved into the modern technique of dimensional analysis. The major addition in modern times is the Buckingham Pi Theorem, by means of which dimensionless ratios of the type used above may be derived. It should be noted that in each of the examples the model shown was not the first tried. For dimensional analysis to produce useful results, the whole set of relevant variables must be included, the proper dimensionless ratios must be found, and, finally, the best method of employing those dimensionless ratios in a model must be determined.

Dimensional analysis is just one method of normalizing the data, i.e., making it independent of the units of measure. It is, however, the best. Another method which is widely employed is to subtract a known or inferred population mean from the individual datum and to divide by the population standard deviation.

Once a scaling model has been found, the scaled data should be examined carefully to ascertain that the variance is equal over the domain of the data. If not, then a suitable transform must be found to equalize the variance. Otherwise, no single stochastic model will accurately reflect the probability of an occurrence of the "event" in question over the data domain, much less for an extrapolated prediction. For example, if the standard deviation is proportional to the mean, a very common situation in nature, the variance is equalized by taking the log of the model variable. This is the case for both of the above examples, where the probability model was fitted to ln x rather than to x itself. Suitable transformations for other common situations, as well as a general method for finding transforms, are given by Johnson & Leone (7).
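The variance check just described might be sketched as follows, with invented replicate data: the raw standard deviation grows roughly in proportion to the group mean, while the standard deviation of the logged data stays roughly constant, signaling that the model should be fitted to ln x.

```python
import numpy as np

# Invented replicate measurements at three conditions, illustrating
# a standard deviation roughly proportional to the mean.
groups = [
    np.array([9.0, 10.5, 11.2, 9.6]),
    np.array([48.0, 55.0, 61.0, 44.0]),
    np.array([230.0, 305.0, 260.0, 210.0]),
]

for g in groups:
    print(f"mean {g.mean():8.1f}  sd {g.std(ddof=1):7.2f}  "
          f"sd of log {np.log(g).std(ddof=1):.3f}")
# The raw sd scales with the mean; the sd of ln(data) is nearly
# constant across groups, so the log transform equalizes variance.
```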
When a suitable scaling model has been found and equal variance confirmed or obtained, a probability function is fitted to the data. For dichotomous data, the Gaussian (probit) or Logistic (logit) functions are the most commonly used mathematical functions. The Central Limit Theorem has been used to justify assuming normality (Gaussian) in far too many cases; for a reasonable sample size drawn from a distribution quite different from the Gaussian, this is a bad assumption. If one knows, or has reason to believe, that a certain probability function prevails, then that is the function to use. An argument can be made for not assuming any "standard" distribution, but using a non-parametric distribution based on the data itself. This is fine for large amounts of data and for prediction within the central portion (say 0.2 to 0.8) of the distribution. However, such distributions are not usually well defined in the tails, especially with small sample sizes, so some assumption must be made concerning a distribution function appropriate for these regions. The Logistic function is often used because of its mathematical tractability. For dichotomous (0-1 or pass/fail) data, the method of Walker and Duncan (3) is convenient.
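As an illustration (not the Walker-Duncan procedure itself), the sketch below fits the two-parameter Logistic in ln x to invented pass/fail data by maximum likelihood, using the statsmodels library as a modern convenience:

```python
import numpy as np
import statsmodels.api as sm

# Invented dichotomous data: model variable x and a 0/1 outcome
# (e.g., nonpenetration = 0, penetration = 1).
x = np.array([0.4, 0.6, 0.8, 1.0, 1.2, 1.5, 1.9, 2.4, 3.0, 3.8])
y = np.array([0,   0,   0,   1,   0,   1,   1,   1,   1,   1])

# P = 1 / (1 + exp(-(b0 + b1 ln x))): a logit that is linear in ln x.
X = sm.add_constant(np.log(x))
fit = sm.Logit(y, X).fit(disp=False)
print(fit.params)       # fitted b0, b1
print(fit.predict(X))   # fitted S-shaped probabilities
```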
Notice, however, that Walker and Duncan disregard physical units in their example of its application. Ignoring the principle of dimensional homogeneity is a dangerous oversight in any model used for extrapolation.

If the data to be fit are continuous, there are general nonlinear methods which can be used to fit almost any probability function (8), including a variety of so-called probit analyses for (assumed) Gaussian data (9). For many of these methods, convergence is slow or nonexistent if the values initially selected for the fitted parameters are not sufficiently close to the final values. If the function may be made linear with respect to its unknown parameters by a suitable transformation, then it may be fitted by the Linearized Least Squares method (10) so as to minimize the root mean square error in the original (untransformed) space. The essence of this technique is to use weighted (linear) least squares to effect a nonlinear least squares fit. Assume that the equation has been transformed into an equal variance space and let

y = the resulting dependent variable
x = the vector of independent variables
b = the vector of parameters to be fitted
a = the vector of known constants

then

y = f(x, b, a)    (1)
The function (1) may be linearized if, through any set of mathematical operations, equation 1 may be transformed into

h(y) = Σᵢ bᵢ gᵢ(a, x)    (2)

The usual procedure is to employ least squares directly on equation 2. However, this results in minimizing the squared error in h, not y. That is, the procedure finds the set bᵢ such that the quantity

Σⱼ (Δhⱼ)² = Σⱼ [h(yⱼ) − Σᵢ bᵢ gᵢ(a, xⱼ)]²    (3)

is minimized. What is desired is the minimum of Σⱼ (Δyⱼ)². This may be achieved by iteratively conducting a least squares procedure on equation 2 with weights

wⱼ² = (Δyⱼ)² / (Δhⱼ)²    (4)

where the Δ's are from the previous iteration. Starting weights are obtained from the differential approximation to the ratio of differences of equation 4; i.e.,

wⱼ² = (dh/dy)⁻²

where the derivative is evaluated at the jth data point to provide
the weight appropriate at that point. Unlike most nonlinear methods, therefore, Linearized Least Squares does not require initial guesses, but derives good starting values from the data and the derivative. A simple example is found in the Logistic function discussed above:

P = 1 / (1 + e^−(b₀ + b₁ ln x))

In the original space the dependent variable is the probability, P. The equation may be linearized as

h(P) = ln[P / (1 − P)] = b₀ g₀(x) + b₁ g₁(x)

where

g₀(x) = 1
g₁(x) = ln x

and x is the only independent variable. For the first iteration,

wⱼ² = (dP/dh)ⱼ² = [Pⱼ(1 − Pⱼ)]²

and we minimize Σⱼ wⱼ² (Δhⱼ)², which results in the usual weighted least squares:

min Σⱼ wⱼ² (hⱼ − b₀ − b₁ ln xⱼ)²
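A minimal numpy sketch of this iteration as we have described it; the data are invented observed proportions P in (0,1) (the purely dichotomous case requires the Walker-Duncan treatment discussed next):

```python
import numpy as np

# Invented data: model variable x and observed proportions P in (0,1),
# e.g., penetration fractions observed at each test condition.
x = np.array([0.5, 0.8, 1.0, 1.3, 1.8, 2.5, 3.5])
P = np.array([0.05, 0.15, 0.30, 0.50, 0.72, 0.88, 0.97])

lnx = np.log(x)
G = np.column_stack([np.ones_like(lnx), lnx])  # g0 = 1, g1 = ln x
h = np.log(P / (1.0 - P))                      # linearizing transform

# Starting weights from the derivative: w = dP/dh = P(1 - P).
w = P * (1.0 - P)

for _ in range(10):
    # Weighted linear least squares on h = b0 + b1 ln x,
    # minimizing sum_j w_j^2 (h_j - b0 - b1 ln x_j)^2.
    W = np.diag(w**2)
    b = np.linalg.solve(G.T @ W @ G, G.T @ W @ h)
    # Update weights from the previous iteration's deviations,
    # w_j = |dy_j / dh_j| (falling back to the derivative if dh ~ 0).
    P_fit = 1.0 / (1.0 + np.exp(-(G @ b)))
    dh = h - G @ b
    dy = P - P_fit
    w = np.where(np.abs(dh) > 1e-8, np.abs(dy / dh), P_fit * (1 - P_fit))

print("fitted b0, b1:", b)
```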
In their dichotomous fit, Walker and Duncan transform the Logistic function to an equal variance space by dividing each data point by its variance. The variance of a probability value, P, is P(1 − P). For the first iteration, P(1 − P) is equal to w. This suggests minimizing the function

Σⱼ wⱼ (Δhⱼ)²    (5)

In linear least squares (unweighted), where Σⱼ (Δyⱼ)² is minimized, it can be shown that the sum of the deviations, Σⱼ Δyⱼ, is zero. With weighted least squares, the corresponding sum,

Σⱼ Δyⱼ ≈ Σⱼ wⱼ Δhⱼ,    (6)

is not in general zero. However, if equation 5 is used (weights wⱼ rather than wⱼ²), then expression 6 does equal zero. When a zero sum of deviations is desirable, function 5 may be minimized, often without increasing the root-mean-square error by an undue amount.

In conclusion, the following principles may be of some help in modeling in a nonlinear, stochastic universe:
- Model first. Propose as many reasonable models as you
can, then design experiment(s) to discriminate among them.
- For maximum applicability (extrapolation), be consistent with physical laws, including the principle of similitude.
- Whenever none of the proposed models is acceptable, amend the model to fit the data.
- Specific to probability models:
  - Model on means, then fit on all data.
  - Normalization is strongly advisable, preferably by dimensional analysis.
  - Transform, if necessary, to equalize variance over the domain of definition.
- Stochastic models often require larger data bases than deterministic models.
- Be prepared to seek a nonlinear, stochastic model until it is demonstrated that a linear or deterministic approximation is acceptable.
Literature Cited

1. "Albert Einstein - Hedwig und Max Born: Briefwechsel 1916-1955"; Nymphenburger: Munich, 1969.
2. Rosen, R. Am. J. Physiol. 1983, 244, R591-R599; "Role of Similarity Principles in Data Extrapolation".
3. Walker, S.; Duncan, D. Biometrika 1967, 54 (1 and 2), 167-179; "Estimation of the Probability of an Event as a Function of Several Independent Variables".
4. Sturdivan, L. M. "Modeling in Blunt Trauma Research"; Second Annual Soft Body Armor Symposium, Miami Beach, FL, Sept 1976.
5. Bridgman, P. "Dimensional Analysis"; Yale University Press: New Haven, CT, 1922.
6. Langhaar, H. "Dimensional Analysis and Theory of Models"; Wiley: New York, 1951.
7. Johnson, N.; Leone, F. "Statistics and Experimental Design in Engineering and the Physical Sciences"; Wiley: New York, Vol. II, 1964; pp 54-56.
8. Marquardt, D. J. Soc. Ind. Appl. Math. 1963, 11, 431-441; "An Algorithm for Least Squares Estimation of Nonlinear Parameters".
9. Finney, D. "Probit Analysis"; Cambridge University Press: New York, 1952.
10. Sturdivan, L. M.; Jameson, J. "Linearized Least Squares"; Proceedings of the 1976 Army Numerical Analysis and Computer Conference, ARO Report 76-3; US Army Research Office, 1976.

RECEIVED August 6, 1984