NEW TIME-DOMAIN TECHNIQUE FOR IDENTIFICATION OF PROCESS DYNAMICS

MICHAEL HEYMANN,¹ M. L. McGUIRE, AND C. M. SLIEPCEVICH

Process Dynamics and Control Laboratory, School of Chemical Engineering and Materials Science, University of Oklahoma, Norman, Okla.

A technique is developed for determination of process dynamic equations from experimental test data and simultaneous evaluation of the reliability of the equations. The mathematical analysis is carried out entirely in the time domain. Identification of time constants was tested by using analog computer simulations of systems to which random noise was added. The technique appears to be a powerful tool for the direct experimental determination of linear approximations to chemical process dynamics.

¹ Present address, Mobil Oil Corp., Princeton, N.J.

A MAJOR problem in applying modern design, analysis, and optimum control techniques to chemical processes is the difficulty of obtaining satisfactory mathematical models. For existing plants, modeling by experimental methods has been investigated extensively. Direct sinusoidal process testing has been in use for over two decades and is still being utilized on a laboratory scale (Cohen and Johnson, 1956; Fanning and Sliepcevich, 1959; Mozley, 1956; Stainthorp and Axom, 1965; Stermole and Larson, 1963). This method, however, possesses serious limitations in actual plant testing (Clements and Schnelle, 1963; Hougen and Walsh, 1961). Random noise testing and spectral analysis possess limitations that render this method impractical (Gallier, 1962; Gallier, Sliepcevich, and Puckett, 1961; Tierney and Homan, 1960). Most of the limitations in these techniques were overcome by the pulse testing and Fourier analysis method, which was extensively investigated by Hougen et al. (Dreifke and Hougen, 1963; Dreifke, Hougen, and Mesmer, 1962; Hougen and Lees, 1956; Hougen and Walsh, 1961) and has found wide application in industry. However, all of these modeling techniques which rely on frequency-domain analysis have serious limitations due to low signal-to-noise ratio, particularly at high frequencies (Heymann, 1965).

Linear mathematical models are often adequate to describe chemical processes over limited ranges of operating conditions. These processes are generally multivariable and can be described by a set of first-order ordinary differential equations called a vector model. While this description is convenient from a mathematical viewpoint, in practice it is frequently sufficient to know only the relationship between a single output variable of principal interest and the various input variables. This description is called a scalar model. The experimental identification of the vector model is often difficult because of the inability to measure all of the formulated state variables and the high sensitivity of the numerical manipulations. In the identification of the scalar model the first difficulty is eliminated and, because of the reduced number of parameters to be determined, the latter is minimized.

In this work the scalar model is identified through a pseudo-state-space formulation. The scalar model is represented by a hypothetical vector model that permits direct evaluation of the system's parameters from the process responses. This operation offers a unique mathematical advantage: by proper manipulation of the equations, a differentiation procedure is replaced by integration of the responses. Thus, the inherent smoothing effect of integration contributes immediately to the stability of the identification results.

Model reliability is frequently ignored in process identification, despite its importance in the design of modern control systems. In this work the modeling technique not only determines the system parameters, but also simultaneously evaluates confidence limits for these parameters.

The identification is separated into two parts. First, the system's poles, which are common to all outputs, are evaluated. Then the entire model for a specific input-output pair, which consists of both poles and zeros and any associated time lags, is determined. The system poles are obtained from analysis of the output responses relaxing from an excited state following pulsing of process inputs. For the identification of an nth-order model, n linearly independent response functions are required. In practice, this linear independence can be secured by pulsing different process inputs, combinations of inputs, or even process parameters, for each response. For any one set of tests the order of the system to be identified is assumed to be known. However, with sequential testing, if necessary, the correct order of the system is determined through interpretation of the error analysis. For the evaluation of the complete model (including the zeros) both the forcing functions and responses are utilized, along with a weighting function that is constructed from the system poles. Through a weighted integration of the forcing functions the gain factor and the values of the zeros are determined.

Derivation of the Identification Method


A set of n first-order linear differential equations with constant parameters describing a chemical process can be written in vector form as

\dot{\bar{x}}(t) = A\,\bar{x}(t) + B\,\bar{u}(t) \qquad (1)

where A and B are constant matrices of order n × n and n × m, respectively. x̄(t) = col. [x₁(t), ..., xₙ(t)] and ū(t) = col. [u₁(t), ..., uₘ(t)] are the state and input vectors, respectively, of the process to be identified. Normally, the state vector cannot be directly observed and is related to the output vector by the relation

\bar{y}(t) = P\,\bar{x}(t) \qquad (2)

where ȳ(t) = col. [y₁(t), ..., yₛ(t)] is the output vector and P is an s × n constant matrix.


It is possible to transform the relationship between one of the output variables y_i(t) and the inputs u_j(t) in Equations 1 and 2 in terms of an nth-order linear differential equation of the form
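Equation 3 is referenced repeatedly below; a form consistent with the definitions of a_i, g_{ji}, and M_j^{(k_j)}[ ] in the Nomenclature (the index conventions used here are an assumption, not the paper's typeset equation) is

\frac{d^{n}y_i}{dt^{n}} + a_1\frac{d^{n-1}y_i}{dt^{n-1}} + \cdots + a_n\,y_i \;=\; \sum_{j=1}^{m} M_j^{(k_j)}\bigl[u_j(t)\bigr] \qquad (3)

where each M_j^{(k_j)}[\,\cdot\,] is a constant-coefficient differential operator of order k_j built from the parameters g_{jr}.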


where the coefficients a_i and g_{jr} are related to the elements of A, B, and P. Equation 3 can be rewritten in the form
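In lumped operator form, a reconstruction consistent with the definitions that immediately follow is

L^{(n)}\bigl[y_i(t)\bigr] = f(t) \qquad (4)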

where L^{(n)}[ ] is a linear differential operator and f(t) is a time function representing in lumped form the contribution of the various inputs. While Equation 3 describes the exact relationship between y_i(t) and any of the inputs u_j(t), Equation 4 describes only the system response to an "over-all" input disturbance. Thus, it is only a partial description of the process, but it can be utilized for investigation of stability and other transient characteristics. The solution of Equation 4 can be expressed as an integral equation of the form

y(t) = \int_0^{t} W(t - \lambda)\, f(\lambda)\, d\lambda \qquad (5)

where for simplicity in notation the subscript i is henceforth dropped. The kernel W(t − λ) is called the weighting function for the system and uniquely determines its dynamic characteristics.

Determination of the System Poles

In principle, the poles of the system can be determined by direct evaluation of the weighting function W(t − λ) of Equation 5 (Heymann, 1965). This procedure is undesirable, however, for a number of reasons:

1. Because of noise inherent in the process, numerical differentiation of the response function y(t) is virtually impossible.
2. The weighting function is obtained in numerical form and is therefore of limited practical usefulness.
3. Only a very small part of the information contained in the response functions is utilized for determination of the dynamic equations.

To eliminate the above difficulties, Equation 4 is retransformed into an equivalent set of n first-order differential equations:

\dot{\bar{z}}(t) = \bar{A}\,\bar{z}(t) + \bar{f}(t) \qquad (6)

where z̄(t) is a column vector z̄(t) = col. [z₁(t), ..., zₙ(t)], f̄(t) is a column vector f̄(t) = col. [0, ..., 0, f(t)], and Ā is the (n × n) matrix of coefficients

\bar{A} = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -a_n & -a_{n-1} & -a_{n-2} & \cdots & -a_1 \end{bmatrix} \qquad (7)

The output variables, y(t) in Equation 4 and z₁(t) in Equation 6, are the same quantities, but the variables z_i(t), i ≠ 1, are artificial. Equation 6 is obtainable directly from Equations 1 and 2 by a linear transformation, and from a mathematical viewpoint the two are similar systems.

The homogeneous part of Equation 6,

\dot{\bar{z}}(t) = \bar{A}\,\bar{z}(t) \qquad (8)

has the general solution (Coddington and Levinson, 1955)

\Phi(t) = e^{t\bar{A}}\,C \qquad (9)


where C is an arbitrary nonsingular constant matrix determined by the initial conditions. The fundamental solution matrix Φ(t) is given by

\Phi(t) = \bigl[\,\bar{\varphi}_1(t)\;\;\bar{\varphi}_2(t)\;\cdots\;\bar{\varphi}_n(t)\,\bigr] \qquad (10)

where the φ̄_j(t) are linearly independent solutions to the homogeneous equation (Coddington and Levinson, 1955). The solution of Equation 6 to an arbitrary forcing function is given in terms of an integral equation similar to Equation 5 as

\bar{z}(t) = \int_0^{t} \bar{W}(t - \lambda)\,\bar{f}(\lambda)\, d\lambda \qquad (11)

where W̄(t − λ) is a matrix weighting function given by

\bar{W}(t - \lambda) = \Phi(t)\,\Phi^{-1}(\lambda) = \bigl[e^{t\bar{A}}C\bigr]\bigl[e^{\lambda\bar{A}}C\bigr]^{-1} = e^{(t-\lambda)\bar{A}} \qquad (12)

Although Equation 12 could in principle be utilized directly for determination of the process parameters, numerical differentiation of the actual response records would be required. This is avoided by the following procedure, which is of central importance to the present technique. Equation 12 can be rewritten as

\Phi(t) = e^{(t-\lambda)\bar{A}}\,\Phi(\lambda) \qquad (13)

Letting δ = t − λ, with δ taken to be a constant, and integrating Equation 13 on both sides with respect to t between the limits t = t₁ and t = t₁ + δ yields

\int_{t_1}^{t_1+\delta} \Phi(t)\, dt = e^{\delta\bar{A}} \int_{t_1}^{t_1+\delta} \Phi(t - \delta)\, dt \qquad (14)

Changing the variable of integration on the right-hand side of Equation 14 and noting that, since δ is constant, dλ = dt, gives

\int_{t_1}^{t_1+\delta} \Phi(t)\, dt = e^{\delta\bar{A}} \int_{t_1-\delta}^{t_1} \Phi(\lambda)\, d\lambda \qquad (15)

This integration procedure can be repeated, leading finally to the result of Equation 16, in which Φ(t) is integrated n − 1 times over successive intervals of width δ (the limits t₁, ..., t_{n−1} and δ introduced there are discussed below), or in compact notation

\bar{\Phi}_1 = e^{\delta\bar{A}}\,\bar{\Phi} \qquad (17)

where the definitions of Equation 16 are implied. The above is the central equation of this identification procedure. The advantage of integrating the solution matrix can now be recognized: the repeated n − 1 integrations of Φ(t) eliminate the need for numerical differentiation of the response functions, thus smoothing the results and cancelling out random noise effects. (Since nothing has been assumed to be known about the noise statistics, there is no reason to expect other filtering methods to be superior to direct integration.) Further, because of the integration, all of the available information over a preselected interval nδ of the response curves is utilized, thereby obtaining average values. This averaging is important if drifts in steady state, or nonlinear effects, are present. Equation 17 can be rewritten to evaluate e^{δĀ}:

\bar{\Phi}_1\,\bar{\Phi}^{-1} = e^{\delta\bar{A}} = H \qquad (18)

The characteristic roots ν_i of H are found, from which the natural frequencies (poles) p_i of the system are determined through the expression
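Since H = e^{δĀ} (Equation 18), the relationship implied for each root, written here as a reconstruction with the principal logarithm understood for complex roots, is

p_i = \frac{1}{\delta}\,\ln \nu_i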

Equation 19 holds strictly only when matrix H possesses distinct characteristic roots (Heymann, 1965). However, from a numerical viewpoint, H can always be assumed to possess distinct roots, since by infinitesimally small perturbations of the matrix elements multiplicity of the roots is eliminated.
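As a concrete illustration of the pole-determination step, the Python sketch below (not from the paper; all names and numerical values are hypothetical) exercises the relation Φ̄₁Φ̄⁻¹ = e^{δĀ} numerically. It makes one simplifying assumption: the full state vector of the equivalent canonical system is treated as measurable for each test, so a single window integration (Equation 15) suffices, whereas the actual procedure uses only the recorded outputs together with the repeated integrations of Equation 16.

```python
import numpy as np
from scipy.linalg import expm

def identify_poles(responses, times, t1, delta):
    """Estimate system poles from n linearly independent relaxation records.

    responses : list of n arrays of shape (len(times), n), the state
                trajectories of the equivalent canonical system (a
                simplifying assumption; the paper needs only the outputs).
    """
    def window_integral(traj, a, b):
        # trapezoidal integration of every state component over [a, b]
        mask = (times >= a) & (times <= b)
        return np.trapz(traj[mask], times[mask], axis=0)

    # integrated fundamental matrices over the two windows of Equation 15
    phi_lo = np.column_stack([window_integral(z, t1 - delta, t1) for z in responses])
    phi_hi = np.column_stack([window_integral(z, t1, t1 + delta) for z in responses])

    H = phi_hi @ np.linalg.inv(phi_lo)   # estimate of exp(delta * A_bar), Equation 18
    nu = np.linalg.eigvals(H)            # characteristic roots nu_i
    return np.log(nu) / delta            # poles p_i (principal logarithm)

# demonstration on a simulated second-order system with poles -0.1 and -0.2 sec^-1
A_bar = np.array([[0.0, 1.0], [-0.02, -0.3]])
times = np.arange(0.0, 60.0, 0.05)
starts = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]   # two independent excitations
records = [np.array([expm(A_bar * t) @ z0 for t in times]) for z0 in starts]

print(identify_poles(records, times, t1=5.0, delta=5.0))  # about -0.1 and -0.2
```

With noisy records the window integrals act as the smoothing step discussed above, and the quality of the estimate depends on the choice of t₁ and δ exactly as described in the error-propagation analysis that follows.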

Determination of the Complete Model

Associated with each of the input variables u_j(t) there is a particular weighting function W_j(t − λ) which relates u_j(t) with its contribution to the output response y(t). Denoting the right-hand-side differential operators of Equation 3 by M_j^{(k_j)}[ ] (k_j ≤ n − 1, since some elements g_{ji} may be zero), Equation 5 can be rewritten to derive W_j(t − λ) as

where it is implied that y(t) = u_j(t) = 0 for t ≤ 0. Equation 20 can be expressed in a more convenient form as

where W_j(t − λ) is related to W(t − λ) by the expression (Laning and Battin, 1956)

The right-hand-side operator is the adjoint of M_j^{(k_j)}[ ]. If all the forcing functions u_j(t) except for a particular u_e(t) are kept zero, Equation 21 becomes

where β_{ej} = (−1)^{(k_e − j)} g_{ej}, j = 0, ..., k_e. To evaluate the particular model associated with the input u_e(t) it is necessary to determine the parameters g_{ej}. To accomplish this, the particular input u_e(t) is forced k_e times with linearly independent forcing functions while all other inputs u_j(t) are kept at steady state. Integration of the terms on the right of Equation 24 for each of the forcing inputs leads to a set of k_e linear algebraic equations which can be solved simultaneously for the coefficients β_{ej}. The time derivatives of the weighting function W(t − λ) are determined analytically, since the weighting function is explicitly known from the previous determination of the poles.

Implementation of the Technique

In Equation 16, two arbitrary parameters, t_{n−1} and δ, have been introduced as the limits of integration. These permit the analysis to be performed over preselected regions of the response functions. Figure 1 shows the physical meaning of t₁ and δ for a typical response of a second-order system: t₁ determines the location on the response functions where the fundamental matrices Φ̄ and Φ̄₁ are to be constructed, i.e., the time at which the integration is to be initiated.

Figure 1. Typical response of a second-order system showing physical significance of t₁ and δ

Parameter δ determines the width over which the integration is to be performed. Similar results hold for nth-order systems in the relationships of t_{n−1} and δ. It is desirable to utilize the initial parts of the response functions, where the signal-to-noise ratio is generally largest. Thus t_{n−1} is selected to equal δ. The optimal selection of δ is not obvious; δ should be chosen so as to minimize the sensitivity of the identification to error. Therefore an error propagation analysis is required, which can serve a twofold purpose:

1. A tool for the actual execution of the identification procedure.
2. A provision for estimating the model reliability.

The errors of the response records and their integrals are estimated, and for any particular choice of δ, the error propagation through each of the steps of the analysis is computed. In the analysis, the extreme error bound associated with each of the system's poles is determined as a function of the uncertainty of the response records. Two kinds of primary error sources affect the results of the numerical computations: first, there is uncertainty in the data; second, roundoff and truncation error occurs. The latter was found in the present analysis (Heymann, 1965) to be totally insignificant compared to the error due to data uncertainty and for this reason was neglected. The error propagation analysis (Heymann, 1965) shows that two factors contribute to the error of an eigenvalue ν_i: (1) a distortion coefficient, C_i, which is a function of the data only and not of the initial error estimates, and (2) the norm of an error matrix, D_h, which is a function of both the data and the error estimates of the original responses. Thus the error of an eigenvalue ν_i can be expressed as
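A product of the two factors just named is the natural reading of this relation (the product form is an assumption in this presentation):

\Delta\nu_i = C_i\,N(D_h)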

The distortion coefficient constitutes a sensitivity measure for the identification of the eigenvalue ν_i. It may be considered a weighting factor for N(D_h) with respect to ν_i and therefore provides a quantitative measure of how well a particular pole is represented in the response data. The norm, N(D_h), provides a measure of uncertainty associated with the entire identification but does not act as a discriminating factor between the individual eigenvalues. If, for example, the response vectors are "weakly" independent, the elements of the inverse of the matrix Φ̄ are very sensitive to errors. The resulting norm N(D_h) will be large, and a large error will be indicated for all the eigenvalues. Thus, the error estimate establishes a criterion for acceptance or rejection of a set of response functions. Equation 26 describes the relative error associated with real poles (Heymann, 1965):

If Δν_i is considered to be independent of δ, Equation 26 can be written as

where K is a proportionality constant. There is an optimal value of δ which minimizes the error in the final results. A plot of Equation 27 as a function of δ for K = 1 in Figure 2 shows that the relative error associated with the pole p_i is minimized when δ = −1/p_i = T_i. In general, there does not exist one unique δ which should be selected for the entire identification; rather, a different value of δ minimizes the error associated with each of the various time constants. Actually, K is not strictly constant, and the error for p_i will be minimized near, but not necessarily at, δ = T_i. When the poles are complex, the error associated with the real part will behave like the errors of real poles. The relative error associated with the imaginary part is given (Heymann, 1965) by

Because of the sinusoidal term in the denominator of Equation 28, the relative error associated with the imaginary part possesses a periodic oscillation. Thus the relative error curve of the imaginary part of the pole possesses a set of relative minima. The absolute minimum will occur close to the minimum error of the real part of the pole.
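The minimization at δ = T_i can be recovered by propagating an eigenvalue perturbation through p_i = (1/δ) ln ν_i; the following is a sketch of that argument, assuming Δν_i equal to a constant K (the exact form of Equations 26 and 27 may differ):

\left|\frac{\Delta p_i}{p_i}\right| = \frac{|\Delta \nu_i|}{|p_i|\,\delta\,\nu_i} = \frac{K}{|p_i|\,\delta}\,e^{-p_i\delta}, \qquad \frac{d}{d\delta}\!\left(\frac{e^{-p_i\delta}}{\delta}\right) = 0 \;\Longrightarrow\; \delta = -\frac{1}{p_i} = T_i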

Figure 2. Theoretical error propagation for real poles as functions of δ for various natural frequencies


A linear stepping technique was utilized to locate the values of δ corresponding to minimal error. The limit of integration δ was initially selected as a small value not larger than a fraction of the estimated lowest time constant to be identified in the system. Then the identification procedure was repeated for increasing values of δ until the experimental records were completely exhausted. Each pole was identified at that value of δ at which its smallest relative error was obtained.

The system poles are computed from n linearly independent responses recorded during the relaxation of the system following a forcing input. Linear independence can be ensured by forcing a different input variable or combination of variables for each response. Alternatively, process parameters (which mathematically are not inputs) can also be forced to generate independent responses, since during the recording period these are restored to their nominal values. In practice, the correct order of the system may not be known a priori, and the suspected or desired order of the model will dictate the number of responses to be generated. The error propagation analysis then indicates the true order of the system if the order was incorrectly estimated.

When accurate estimates of the errors associated with the identification are desired, good estimates of the uncertainty of the data are required. However, if the error analysis is to be utilized primarily as a tool for the identification, no stringent requirements are necessary when estimating the error in the data. The error minimization characteristics depend only indirectly on the magnitude of this error.
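A minimal sketch of the stepping search, in Python with illustrative numbers only and using the simplified error form K e^{−pδ}/(|p|δ) from the note above rather than the full error-propagation computation:

```python
import numpy as np

poles = np.array([-0.002, -0.1])        # nominal natural frequencies, sec^-1
K = 0.005                               # assumed (illustrative) data-uncertainty factor
deltas = np.arange(1.0, 601.0, 1.0)     # linear stepping of the integration width, sec

# predicted relative error of each pole at each candidate delta
rel_err = K * np.exp(-np.outer(poles, deltas)) / (np.abs(poles)[:, None] * deltas)

# each pole is accepted at the delta where its own predicted error is smallest
best = deltas[np.argmin(rel_err, axis=1)]
for p, d in zip(poles, best):
    print(f"pole {p:+.3f} sec^-1: minimum predicted error near delta = {d:.0f} sec")
```

In practice the sweep terminates when the experimental records are exhausted, and the predicted errors come from the full analysis of C_i and N(D_h) rather than this simplified model.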

Results and Discussion

The advantages and limitations of the time-domain identification technique were investigated by simulation of second- and third-order linear systems on an analog computer. Responses from arbitrary pulses were used as data for the identification procedure. Noise from a random pulse generator was superimposed on the simulated systems. A typical segment of a recording of a set of linearly independent response functions and the corresponding forcing pulses of a simulated noise-free second-order system is shown in Figure 3. The two forcing functions were trapezoidal pulses and forced u₁(t) and u₂(t) individually, thereby securing linear independence of the responses. For this system the two nominal natural frequencies were −0.002 and −0.1 sec.⁻¹ The results of identification of this system are shown in Figure 4. In Figure 4A the error prediction as a function of δ is presented, and in Figure 4B the corresponding identification results are shown. The error prediction is based on an estimated uncertainty of 0.5% of maximum deviation of the data, corresponding to the estimated accuracy of the analog computer. Comparison of Figure 4A with the simplified theoretical curves in Figure 2 indicates that the relative error propagation patterns are in close agreement. The steep slopes of the error curves are an advantage because they clearly indicate the error minimization region even when noise is present. The predicted relative error curve of the low natural frequency flattens out at relatively low values of δ compared to the value of the corresponding time constant (in agreement with the theoretical curves shown in Figure 2). This behavior allows proper identification over relatively short records and eliminates the need to perform tests that are as long as the highest time constant. In effect, the identification procedure can be considered as a process of decoupling the various natural frequencies, each of which is identified in the region of δ where its effect is most significant. In Figure 4B, the points marked "identified value" are the results of the identification procedure where the relative errors were minimized. The arrows marked "calculated value" are the true values corresponding to the process which was simulated. The apparent identification value of the high-frequency pole was stable at low values of δ, which is also the range where the relative error is minimized. For larger values of δ, where the predicted error increases, the identification becomes unstable and begins to drift. This drift is expected, since as δ increases the high-frequency contribution to the response is insignificant and its reliability doubtful. The identification of the low-frequency pole was stable over the

Figure 3. Typical set of linearly independent response records and forcing pulses for identification of a second-order system with real poles p₁ = −0.002 sec.⁻¹ and p₂ = −0.1 sec.⁻¹. No superimposed noise

Figure 4A. Error prediction (estimated error in the response data = 0.5%)

about 10 seconds; the second terminates before reaching its minimum value. The relative error of the imaginary part was least at the minimum of the branch closest to the value of δ at which the relative error of the real part was minimized. As before, the noise caused a significant narrowing of the bandwidth of δ over which the system could be identified. The cause of this breakdown can be appreciated by observing in Figure 7 that beyond about t = 15 seconds the small significant response was totally masked by noise. Within the identifiable region the error curves were very stable, even for the system with 8% noise level. In Figure 6B, it is seen that in the absence of noise the identification was extremely stable; the results agree exactly with the calculated values. As noise was introduced into the system, a drift in the identified values (as a function of δ) became evident and increased as the noise level was raised.


This effect was not a result of the presence of random noise but was a consequence of bias in the steady state. This bias was not subtracted from the responses prior to the identification calculations. This difficulty illustrates that correct knowledge of steady-state operating conditions is needed when the present identification method is utilized. However, this phenomenon also suggests the means for elimination of the difficulty: when the steady-state conditions are not accurately known, a simple correction procedure can be established for elimination of the error, based on the drift of the identification as a criterion.

Responses from various simulated third-order systems and of an actual complex operating chemical plant were successfully identified by this technique (Heymann, 1965). Also, a laboratory-scale stirred tank heat exchanger was identified by the

Figure 6. Identification of second-order system with complex poles p₁,₂ = −0.1 ± j 0.1732 sec.⁻¹. Effect of noise magnitude on quality of identification (real and imaginary parts shown for no superimposed noise, 4%, and 8% estimated noise levels in responses and integrals)

present technique, and optimal controls designed on the basis of the identified model gave excellent performance (Luecke and McGuire, 1967). For identification of higher-order systems, two principal conclusions can be drawn from these studies:

1. Since the error analysis is based on an estimate of the maximum error, the deviation between the predicted and the real discrepancies increases with the order of the system. This suggests that for high-order systems an error analysis based on expected error would be more appropriate.
2. Generation of independent responses becomes difficult with an increase of the system order.

The error analysis, being a very sensitive gage of the reliability of the identification, indicates whenever difficulties in the identification are present. Invariably, the cause for difficulty of identification is absence of independence between the responses. The error analysis indicates which of the apparent natural frequencies is poorly represented in these responses and thereby, implicitly, determines the order of the system.

Conclusions

The time-domain technique appears to be a useful method for linear system identification. It permits accurate determination of the system parameters and overcomes most of the difficulties associated with frequency-domain identification methods. The associated error propagation analysis is a very powerful tool, both as a convergence criterion for actual identification and as a device for prediction of the degree of reliability of the identified model. The computational requirements are modest, and the identification can be carried out effectively on a medium-size digital computer.

Figure 7. Noisy responses of second-order system with complex poles p₁,₂ = −0.1 ± j 0.1732 sec.⁻¹. Estimated noise level, 8 volts (8%)

Nomenclature

A = n × n constant coefficient transition matrix
Ā = n × n coefficient transition matrix of equivalent canonical system
a_i = constant parameter of the (n − i)th derivative of the dependent variable in an nth-order scalar linear differential system
a_{ij} = ijth element of matrix A
B = n × m constant coefficient input matrix
b_{ij} = ijth element of matrix B
C = general constant matrix
C_i = distortion coefficient of error
D_h = predicted error of matrix H
f(t) = general time function (general forcing function of a process)
f̄(t) = forcing function vector of equivalent canonical linear system
g_{ji} = coefficient of the (n − i)th derivative of the jth independent variable of a scalar differential system
H = constant matrix defined by the product Φ̄₁Φ̄⁻¹
i = index
Im = imaginary part of complex number
j = index
k_j = order of operator M_j^{(k_j)}[ ]
L^{(n)}[ ] = linear differential operator of order n
m = number of process inputs
M_j^{(k_j)}[ ] = linear differential operator of order k_j of the jth forcing function
N( ) = norm of a matrix
n = order of the system
P = s × n constant coefficient output matrix
s = number of process outputs
t = time
t_i = limit of ith integration of response record
u_i(t) = ith input function to a process
ū(t) = input vector of process
W(t − λ) = weighting function for scalar constant-parameter linear system
W̄(t − λ) = weighting function for constant-parameter equivalent canonical system
W_j(t − λ) = weighting function for input-output pair (complete model)
x_i(t) = ith state of a process
x̄(t) = state vector of a linear process
y(t) = output variable of a scalar process
y_i(t) = ith output variable of a linear process
ȳ(t) = output vector of a linear process
z_i(t) = ith state variable of equivalent canonical linear process
z̄(t) = state vector of equivalent linear system

GREEK LETTERS
δ = time interval of integration of response records
λ = particular value of time, parameter of W(t − λ)
ν_i = ith eigenvalue of matrix H
p_i = ith characteristic root or pole of linear system
Σ = summation symbol
Φ(t) = fundamental solution matrix of linear system
Φ̄₁ = higher integrated fundamental solution matrix
Φ̄ = lower integrated fundamental solution matrix
φ̄_j(t) = jth response function (or solution) of linear system

Acknowledgment

The authors thank the Phillips Petroleum Co. and the National Science Foundation (Grant No. GK-98) for financial support in the form of research grants.

Literature Cited

Clements, W. C., Jr., Schnelle, K. B., Jr., Ind. Eng. Chem. Process Design Develop. 2, 94 (1963).
Coddington, E. A., Levinson, N., "Theory of Ordinary Differential Equations," McGraw-Hill, New York, 1955.
Cohen, W. C., Johnson, E. F., Ind. Eng. Chem. 48, 1031 (1956).
Dreifke, G. E., Hougen, J. O., "Experimental Determination of System Dynamics by Pulse Methods," Preprint XXII-3, Joint Automatic Control Conference, University of Minnesota, June 1963.
Dreifke, G. E., Hougen, J. O., Mesmer, G., I.S.A. Trans. 1, 353 (1962).

Fanning, I