Nonlinear System Identification

Yonathan Bard, IBM Corp., New York Scientific Center, New York, N. Y. 10017 (present address: IBM Corp., Cambridge Scientific Center, Cambridge, Mass. 02139)

Leon Lapidus, Department of Chemical Engineering, Princeton University, Princeton, N. J. 08540 (to whom correspondence should be sent)

The Volterra series in discrete form forms the basis for an algorithm for identifying unknown nonlinear processes in the presence of contaminating input fluctuations. Simulation of the model of an ore-crushing rod mill is used to test the feasibility of the algorithm. The method is extremely promising for this type of nonlinear identification or model building.

The recent interest in on-line digital control of reaction systems presupposes two basic areas of technology: the development of suitable system models (identification) and of algorithms for controlling the system. The latter phase is currently in excellent shape, with many algorithms available for lumped-parameter systems (Lapidus and Luus, 1967) and even for distributed-parameter systems (Koppel, 1969). However, the identification of nontrivial nonlinear systems still remains a vexing problem subject to extensive difficulties. The work of Harris and Lapidus (1967), Hsieh (1965), and Roy and Sherman (1967) is representative of attempts to develop theories and associated computational algorithms which treat a nonlinear system as an essentially unknown black box subject to noise on the inputs and on the experimental measurements. In each case, the results of these and other approaches are subject to severe restrictions when the noise is large and the number of variables is moderate. However, work along these lines continues in the hope that an efficient procedure can be developed.

In the present paper we develop the use of the Volterra series in discrete form as a system representation or identification technique. The result is an algorithm containing a large number of unknown parameters, which are identified in a suitable sense in the presence of contaminating fluctuations. Simulation results are presented for a single system, the behavior of an ore-crushing rod mill. The results seem most promising and should have application in many areas of identification.

Identification Algorithm

The usual linear relationship between an output, y(t), and an input vector, x(t), is expressed by means of the impulse response function vector, k(τ), as

y(t) = ∫_0^∞ kᵀ(τ) x(t - τ) dτ = ∫_0^∞ Σ_i k_i(τ) x_i(t - τ) dτ    (1)

where superscript T indicates a vector transpose. By contrast, certain nonlinear systems can be represented in Volterra series form as

y(t) = ∫_0^∞ Σ_i k_i(τ) x_i(t - τ) dτ + ∫_0^∞ ∫_0^∞ Σ_i Σ_j k_ij(τ_1, τ_2) x_i(t - τ_1) x_j(t - τ_2) dτ_1 dτ_2 + ...    (2)

In this infinite Volterra series the first term is simply the convolution of the input and the impulse response of the linear portion of the over-all system. The second term is a two-dimensional convolution, the third term is a three-dimensional convolution, etc. The functions k are usually termed the kernels of a specific order of the series. As such, Equation 2 is a generalization of the convolution integral used in linear system analysis, but applied to nonlinear systems. Further details are given by Bansal (1969) and Ku and Wolf (1966).

The discretized version of Equation 2, to be used when measured values of the variables are available only at a discrete set of times t_p, t_{p-1}, ..., is given as

y(t_p) = Σ_i Σ_{r=0}^{∞} k_i(r) x_i(t_p - r Δt) + Σ_i Σ_j Σ_{r_1=0}^{∞} Σ_{r_2=0}^{∞} k_ij(r_1, r_2) x_i(t_p - r_1 Δt) x_j(t_p - r_2 Δt) + ...    (3)

As these formulas are too complicated for any practical use, certain variations or truncations may be suggested which increase the feasibility of the use of Equations 1 to 3:

Consider only a small number of back-time periods in the data analysis. In this case ∞ is replaced by M, with M the number of time periods back from t_p whose data are to be included in the identification process.

Restrict the maximum number of indices. This is equivalent to restricting the order of any term.

Even after the inclusion of these assumptions, it is probably correct to state that many, if not most, of the terms left in the revised version of Equation 3 will be insignificant and can be dropped in any meaningful fit to the experimental data. As a result, the algorithm developed here concentrates on a means for retaining only the important terms in the series expansion.
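To make the truncated form concrete, the following minimal sketch evaluates a single-input predictor that keeps M back periods and terms up to second order. It is our own illustration of the structure of Equation 3 after truncation, with invented names, not code from the original work.

import numpy as np

def volterra2_predict(x_hist, h0, h1, h2):
    # Truncated discrete Volterra prediction for a single input.
    # x_hist: the last M samples x(t_p), x(t_p - dt), ..., x(t_p - (M-1) dt)
    # h0: zero-order (constant) term
    # h1: first-order kernel values, length M
    # h2: second-order kernel values, M x M array
    x = np.asarray(x_hist, dtype=float)
    h1 = np.asarray(h1, dtype=float)
    h2 = np.asarray(h2, dtype=float)
    return h0 + h1 @ x + x @ h2 @ x

Even in this truncated single-input case the number of kernel values grows as M + M², which motivates the term-selection scheme developed next.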

At time t_p = pΔt we measure an input vector, x_p, and an output scalar, y_p. The input vectors from the previous time periods are joined together to form the augmented input vector z_p, whose elements are the components of x_p, x_{p-1}, ..., x_{p-M}.

We attempt to predict y_p as a polynomial in the elements of z_p, i.e.,

ŷ_p = Σ_j θ_j u_pj = u_pᵀ θ

is taken as the predicted value of y_p. Here θ is a vector of coefficients and the terms u_pj are products of powers of the z_pi. Not all possible terms are included in the prediction equation at one time. In fact, our major task is to select what subset u_p of all possible terms is to be included and what values are to be taken for θ. Before we go into the selection process, we describe the manner in which all possible u_pj can be generated.

We assume that the vector z_p has K elements. A term u_pj is specified as an expression of the form

u_pj = Π_{i=1}^{K} z_pi^{a_ji}

Thus, to define a term we must specify the vector of exponents a_j. The order of a term is defined as the sum of its exponents. The terms are generated in the following sequence: first the term of order zero, i.e., u_p1 = 1, then all terms of first order, then of second order, etc. The terms of order m are generated recursively as follows:

Start with the term a_j = [m, 0, ..., 0]. Given an exponent vector a_j, generate the vector a_{j+1} as follows:

If a_j1 ≠ 0, make
a_{j+1,1} = a_j1 - 1
a_{j+1,2} = a_j2 + 1
a_{j+1,s} = a_js    (s = 3, 4, ..., K)

If a_j1 = 0, let r be the lowest index for which a_jr ≠ 0. Then make
a_{j+1,1} = a_jr - 1
a_{j+1,r} = 0
a_{j+1,r+1} = a_{j,r+1} + 1
a_{j+1,s} = a_js    (all other s)

The terms of order m are exhausted when the vector [0, 0, ..., 0, m] is reached. After each exponent vector is generated, it is transformed into a list of its nonzero elements. As an illustration, Table I shows all the terms up to order 3 in the sequence in which they are generated for a four-dimensional z_p.

Table I. Terms to Third Order, K = 4 (exponent vectors a_j in order of generation)

Order 0: (0,0,0,0)
Order 1: (1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1)
Order 2: (2,0,0,0), (1,1,0,0), (0,2,0,0), (1,0,1,0), (0,1,1,0), (0,0,2,0), (1,0,0,1), (0,1,0,1), (0,0,1,1), (0,0,0,2)
Order 3: (3,0,0,0), (2,1,0,0), (1,2,0,0), (0,3,0,0), (2,0,1,0), (1,1,1,0), (0,2,1,0), (1,0,2,0), (0,1,2,0), (0,0,3,0), (2,0,0,1), (1,1,0,1), (0,2,0,1), (1,0,1,1), (0,1,1,1), (0,0,2,1), (1,0,0,2), (0,1,0,2), (0,0,1,2), (0,0,0,3)

The number of possible terms up to the Rth order is

(K + R)!/(K! R!)

For practical reasons, however, we can consider at one time only a limited number, say k, of terms. These terms are divided into two sets: u_p, a set of l terms which are "In" the equation, and v_p, a set of k - l terms which are "Out" but are being considered for inclusion in the equation. We then proceed in cycles. In each cycle, the coefficients θ for the In terms are evaluated, as well as the correlation between the Out terms and the regression residuals. In terms which seem superfluous are dropped, and promising Out terms are turned into In terms. The remaining Out terms are dropped, and new Out terms introduced from the list of as yet untested terms. The current equation is tested for its prediction ability. If there is no improvement over the previous cycle, terms dropped in that cycle are restored.
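The recursion above is easy to mechanize. The sketch below is our own illustration, with invented names; it generates the exponent vectors in the sequence of Table I. For the 24-element augmented vector used later in this paper, terms up to third order already number (24 + 3)!/(24! 3!) = 2925, which is why a selection scheme is essential.

def generate_order(m, K):
    # Yield all exponent vectors of total order m for a K-element z_p,
    # in the generation sequence described in the text.
    a = [m] + [0] * (K - 1)                    # start with [m, 0, ..., 0]
    while True:
        yield tuple(a)
        if a[-1] == m:                         # [0, ..., 0, m] closes the order
            return
        if a[0] != 0:                          # shift one unit from slot 1 to slot 2
            a[0] -= 1
            a[1] += 1
        else:                                  # r = lowest index with a nonzero entry
            r = next(i for i, value in enumerate(a) if value != 0)
            a[0] = a[r] - 1
            a[r + 1] += 1
            a[r] = 0

def all_terms(K, R):
    # All exponent vectors of order 0 through R, in the sequence of Table I.
    terms = []
    for m in range(R + 1):
        terms.extend(generate_order(m, K))
    return terms

For K = 4 and R = 3, all_terms(4, 3) returns the 35 exponent vectors of Table I in the order shown; for K = 24 and R = 3 it returns 2925 vectors.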


The first cycle is started with the first l terms In and the succeeding k - l terms Out. The detailed cycle procedure is then as follows:

Step A. Regression Run. For p = 1, 2, ..., n: obtain x_p, y_p, and form z_p. Compute u_p and v_p. Note that u_p1 = 1. Accumulate

B = Σ_p u_p u_pᵀ
C = Σ_p u_p v_pᵀ
b = Σ_p y_p u_p
c = Σ_p y_p v_p
c_y = Σ_p y_p²
Σ_p v_pi²    (for each Out term i)

Step B. Matrix Pseudoinversion. Let D = diag [B_ii^{1/2}] and compute E = D⁻¹BD⁻¹. E has unit elements on the main diagonal. Find the eigenvalues, λ_i, and the eigenvectors, p_i, of E. Let

μ_i = λ_i⁻¹ if λ_i > ε_1
μ_i = 0 if λ_i ≤ ε_1

where ε_1 is a preset tolerance factor. Form

B⁺⁺ = D⁻¹ (Σ_i μ_i p_i p_iᵀ) D⁻¹

Matrix B⁺⁺ is an approximation to the pseudoinverse of B. The scaling accomplished by the multiplication by D⁻¹ improves the accuracy of the computations.

Step C. Regression. Compute

θ = B⁺⁺ b    (regression coefficients)
Q = c_y - θᵀb    (sum of squares of residuals)
σ = [Q/(n - l)]^{1/2}    (standard deviation of the residuals)
t_i = θ_i/[σ (B⁺⁺_ii)^{1/2}]    (t-statistics for the regression coefficients)
ρ_i = (c_i - C_iᵀθ)/[Q^{1/2} (Σ_p v_pi²)^{1/2}]    (correlations between residuals and Out terms)

where C_i denotes the ith column of C.

Step D. Prediction Run. For p = n + 1, n + 2, ..., n + N: obtain x_p, y_p, form z_p, and compute u_p. Then

ŷ_p = u_pᵀθ    (predicted value of y_p)
r_p = y_p - ŷ_p

Accumulate S = Σ_p r_p².

Step E. Prediction Evaluation. Compute s = [S/N]^{1/2}, which is the root-mean-square prediction error. Compare this value to the value of s in the previous cycle. If it is larger, restore to u all terms dropped at the end of the previous cycle and proceed to Step G. Otherwise, proceed to Step F.

Step F. Remove from u all terms for which t_i < ε_2, where ε_2 is a predetermined constant. u_1 = 1 is always retained.

Step G. Transfer from v to u all terms for which ρ_i > ε_3, where ε_3 is a predetermined constant.

Step H. Let l* be the new dimension of u. Bring in k - l* new terms to constitute the new vector, v. Proceed to the next cycle.

In carrying out this cycle of computations, the quantities listed below must be specified as input. The numbers in parentheses are the actual values used in the computer runs described below.

K = dimension of augmented input vector (24)
k = number of terms considered at one time (56)
l ≤ k = number of terms in first-cycle equation (25)
n = number of regression time periods (41)
N = number of prediction time periods (1000)
ε_1 = inversion threshold for eigenvalues
ε_2 = omission threshold for In terms (0.5)
ε_3 = inclusion threshold for Out terms (0.2)
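As a concrete illustration of Steps A to C, the sketch below (Python with NumPy; all names and the tolerance value are ours, not the authors') accumulates the regression sums for one cycle and computes the thresholded pseudoinverse, the coefficients, the t-statistics, and the Out-term correlations.

import numpy as np

def regression_cycle(u, v, y, eps1=1e-8):
    # u: n x l matrix of In-term values, v: n x (k - l) matrix of Out-term
    # values, y: length-n output vector, all formed from the augmented z_p.
    n, l = u.shape
    B = u.T @ u                              # Step A: accumulated sums
    C = u.T @ v
    b = u.T @ y
    c = v.T @ y
    cy = float(y @ y)

    d = np.sqrt(np.diag(B))                  # Step B: scale so E has a unit diagonal
    E = B / np.outer(d, d)
    lam, P = np.linalg.eigh(E)
    mu = np.zeros_like(lam)
    mu[lam > eps1] = 1.0 / lam[lam > eps1]   # invert only well-conditioned directions
    Bpp = (P * mu) @ P.T / np.outer(d, d)    # approximate pseudoinverse of B

    theta = Bpp @ b                          # Step C: regression coefficients
    Q = cy - theta @ b                       # sum of squares of residuals
    sigma = np.sqrt(Q / (n - l))             # standard deviation of residuals
    t = theta / (sigma * np.sqrt(np.diag(Bpp)))                   # t-statistics
    rho = (c - C.T @ theta) / np.sqrt(Q * (v * v).sum(axis=0))    # Out-term correlations
    return theta, t, rho, sigma

Steps D to H then amount to scoring a held-out block of data with theta, dropping In terms with small t, and promoting Out terms with large rho, following the cycle description above.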

System Simulation

To test the method, we have chosen a particular physical system, namely, the flow of slurry through an ore-crushing rod mill. The system is simulated by means of a series of q stirred tank reactors. Using Figure 1 as a guide, we define the following quantities associated with the ith tank:

A_i = cross-sectional area
L_i(t) = slurry level above bottom of tank
S_i(t) = volume of solids in tank
V_i(t) = volumetric flow rate of liquid out of tank i, into tank i + 1
Q_i(t) = volumetric flow rate of solids out of tank i, into tank i + 1
H_i = level difference between bottom of tank i and bottom of tank i + 1

Figure 1. Sequence of stirred tank reactors as a representation of rod mill

We assume that the total flow rate out of tank i is proportional to the slurry level difference between tanks i and i + 1, but no backflow is permitted. Hence

V_i + Q_i = max [0, β(L_i - L_{i+1} + H_i)]    (7)

where β is a constant. A balance on the volume of slurry in tank i gives

A_i dL_i/dt = V_{i-1} + Q_{i-1} - V_i - Q_i    (8)

whereas a balance on the solids alone yields

dS_i/dt = Q_{i-1} - Q_i    (9)

Finally, the composition of the effluent from tank i equals the composition of the slurry in the tank, i.e.,

Q_i/(V_i + Q_i) = S_i/(A_i L_i)    (10)

These equations can be integrated to give L_i(t), S_i(t), V_i(t), Q_i(t) (i = 1, 2, ..., q), provided their initial values at t = 0 are given and the inputs to the process, V_0(t) and Q_0(t), are specified. The outputs are V_q(t) and Q_q(t). In the present discretized version of the process, the inputs are the average values of V_0(t) and Q_0(t) over each sampling period, and the output is the average of Q_q(t) over the sampling period. The process is nonlinear because of Equation 10.

Equation 7 requires that H_i - L_{i+1} be specified for i = q. An arbitrary value was chosen and maintained constant throughout the simulation. The following values were used in the simulation runs:

q = 4 (number of tanks)
L_i(0) = 50 ft (initial levels; i = 1, 2, ..., 5)
S_i(0) = 0 ft³ (initial solids; i = 1, 2, ..., 4)
H_i = 1 ft (i = 1, 2, ..., 4)
A_i = 1 ft² (i = 1, 2, ..., 4)
β (flow coefficient), ft³/ft sec
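The model of Equations 7 to 10 is straightforward to integrate directly. The sketch below is our own minimal Euler-integration version of it; the flow coefficient beta and the constant downstream level L_out are placeholder values (the paper's values are not reproduced here), and V0 and Q0 are to be supplied as functions of time, e.g., Equations 11 and 12 below.

import numpy as np

def simulate(V0, Q0, t_end, dt=0.1, q=4, A=1.0, H=1.0, beta=1.0,
             L_init=50.0, L_out=50.0):
    # Euler integration of the q-tank slurry model, Equations 7 to 10.
    # dt = 0.1 corresponds to 20 subintervals per sampling period of 2.0.
    L = np.full(q, float(L_init))            # slurry levels
    S = np.zeros(q)                          # solids volumes
    V = np.zeros(q)
    Q = np.zeros(q)
    t = 0.0
    while t < t_end:
        L_next = np.append(L[1:], L_out)                   # level of tank i + 1
        total = np.maximum(0.0, beta * (L - L_next + H))   # V_i + Q_i, Equation 7
        frac = S / np.maximum(A * L, 1e-12)                # solids fraction, Equation 10
        Q = total * frac
        V = total - Q
        V_in = np.concatenate(([V0(t)], V[:-1]))           # inflow from tank i - 1
        Q_in = np.concatenate(([Q0(t)], Q[:-1]))
        L = L + dt * (V_in + Q_in - V - Q) / A             # slurry balance, Equation 8
        S = S + dt * (Q_in - Q)                            # solids balance, Equation 9
        t += dt
    return L, S, V, Q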


The input streams were generated using the following formulas

V_0(t) = 10 + α Σ_{j=1}^{4} sin (c_j t + d_j)    (11)

Q_0(t) = 5 + α Σ_{j=1}^{4} sin (e_j t + f_j)    (12)

where α is a constant varied from run to run, c_j and e_j are numbers chosen randomly in the range 0 to 1, and d_j and f_j are numbers chosen randomly in the range 0 to 2π.
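For instance, inputs of the form of Equations 11 and 12 can be generated and fed to the simulation sketch given earlier as follows; the seed and the value of alpha are arbitrary choices of ours.

import numpy as np

rng = np.random.default_rng(0)             # arbitrary seed, for illustration only
c, d = rng.uniform(0, 1, 4), rng.uniform(0, 2 * np.pi, 4)
e, f = rng.uniform(0, 1, 4), rng.uniform(0, 2 * np.pi, 4)
alpha = 1.0                                # varied from run to run in the paper

def V0(t):
    return 10 + alpha * np.sin(c * t + d).sum()    # Equation 11

def Q0(t):
    return 5 + alpha * np.sin(e * t + f).sum()     # Equation 12

levels, solids, V_out, Q_out = simulate(V0, Q0, t_end=40.0)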

Figure 2. Schematic of procedure used in identification

Table II. Details of Typical Run, α = 1.0

Cycle  Prediction Standard Deviation  In Terms Followed by (Out Terms)
1      0.0465   1-25 (26-56)
2      0.0330   1, 3, 4, 6-10, 12, 14-20, 23, 24, 29, 31, 34, 36, 38-40, 47, 49-51, 53, 56 (57-81)
3      0.0390   1, 3, 4, 6-9, 12, 14-20, 23, 24, 34, 38, 40, 47, 49, 50, 53, 79 (82-112)
4      0.0209   1, 3, 4, 6-10, 12, 14-20, 23, 24, 29, 31, 34, 36, 38-40, 47, 49-51, 53, 56, 79, 82, 84, 88, 94, 96, 105, 107, 111 (113-128)
5      0.0237   1, 3, 4, 6-10, 12, 14-20, 24, 29, 31, 34, 38-40, 47, 49, 50, 53, 56, 79, 82, 88, 96, 105, 107, 111, 116 (129-148)
6      0.0190   1, 3, 4, 6-10, 12, 14-20, 23, 24, 29, 31, 34, 36, 38-40, 47, 49-51, 53, 56, 79, 82, 84, 88, 94, 96, 105, 107, 111, 116 (149-163)
7      0.0206   1, 3, 4, 6-10, 12, 14, 15, 17-20, 23, 24, 29, 31, 34, 36, 38-40, 47, 49-51, 56, 79, 82, 84, 88, 94, 107, 116 (164-183)
8      0.0217   1, 3, 4, 6-10, 12, 14-20, 23, 24, 29, 31, 34, 36, 38-40, 47, 49-51, 53, 56, 79, 82, 84, 88, 94, 96, 105, 107, 111, 116 (184-198)
9      0.0321   1, 3, 4, 6-10, 12, 14-20, 23, 24, 29, 31, 34, 36, 38-40, 47, 49-51, 53, 56, 79, 82, 84, 88, 96, 105, 107, 111, 116 (199-214)
10     0.0163   1, 3, 4, 6-10, 12, 14-20, 23, 24, 29, 31, 34, 36, 38-40, 47, 49-51, 53, 56, 79, 82, 84, 88, 94, 96, 105, 107, 111, 116, 211 (215-228)
11     0.0191   1, 3, 4, 6-10, 12, 14, 15, 17-19, 23, 24, 34, 39, 40, 47, 49-51, 53, 56, 79, 82, 88, 94, 96, 105, 107, 111, 211 (229-250)
12     0.0160   1, 3, 4, 6-10, 12, 14-20, 23, 24, 29, 31, 34, 36, 38-40, 47, 49-51, 53, 56, 79, 82, 84, 88, 94, 96, 105, 107, 111, 116, 211

Table II (continued). Key to In Terms (no further improvement from here on)

Term numbers: 1-25, 29, 31, 34, 36, 38, 39, 40, 47, 49, 50, 51, 53, 56, 79, 82, 84, 88, 94, 96, 105, 107, 111, 116, 211

Coefficient values in cycle 12 ± standard deviation, in the order tabulated:
4.996 ± 0.007; 0.152 ± 0.052; 0.162 ± 0.044; ...; 0.141 ± 0.030; 0.255 ± 0.047; 0.109 ± 0.0377; -0.286 ± 0.018; -0.0102 ± 0.0109; ...; -0.175 ± 0.021; -0.182 ± 0.009; -0.340 ± 0.0174; -0.0195 ± 0.019; 0.112 ± 0.0186; 0.0915 ± 0.051; -0.280 ± 0.0481; 0.0746 ± 0.039; -0.0827 ± 0.0544; -0.190 ± 0.031; ...; -0.00215 ± 0.0057; 0.00310 ± 0.00973; -0.00891 ± 0.00263; 0.000545 ± 0.004094; -0.00842 ± 0.01669; 0.00797 ± 0.00269; 0.00939 ± 0.01067; 0.00485 ± 0.00544; -0.0126 ± 0.0050; -0.00595 ± 0.00170; -0.00777 ± 0.00859; 0.00237 ± 0.00276; 0.00514 ± 0.00236; 0.00346 ± 0.00094; -0.00260 ± 0.00185; 0.000171 ± 0.002878; -0.00659 ± 0.00269; 0.00217 ± 0.00225; 0.00116 ± 0.00186; 0.0000489 ± 0.0017788; -0.0131 ± 0.0031; 0.00152 ± 0.00264; -0.000659 ± 0.0010487; 0.00725 ± 0.00121

Table III. Summary of Results and Prediction of Simulated Rod Mill Output

α     RMS Fluctuation, y    RMS Prediction Error Using Terms up to Order
                            1         2         3
0.1   0.05                  0.0005    a         a
1     0.5                   0.05      0.03      a
2     1                     0.18      0.09      0.08

a Not tried.

The sampling period was Δt = 2.0. Within each period, the equations were integrated numerically by Euler's method, using 20 subintervals. The system was run for a period of t = 40 to allow it to reach a pseudo-steady state before sampling was started. The average value of Q_0 is 5, and thus Q_4 must also have this average value. Hence the output was defined as

y_p = (1/Δt) ∫_{t_p - Δt}^{t_p} Q_4(τ) dτ - 5    (13)

since a zero mean value is desirable for reasons of numerical accuracy. The inputs were also defined so as to have zero mean values, i.e., the deviations V_0(t) - 10 and Q_0(t) - 5 were used as the input variables x_1 and x_2.

Figure 3. Comparison of true and predicted values of y, α = 1.0

Parameter α appearing in Equations 11 and 12 has a double significance. It is a measure of the departure from steady state and of the nonlinearity of the system. The latter statement follows because when α is small, Equation 10 can be approximated by a linear expression.

Simulation Results

Simulation runs were made using values of α = 0.1, 1.0, and 2.0. In each case, data for 12 time periods were used to form the input vector. The input vector at time t therefore consisted of the 24 components x_1(t), x_2(t), x_1(t - 2), x_2(t - 2), ..., x_1(t - 22), x_2(t - 22). Experimentation with as many as 25 time periods indicated that increasing the length of the input vector beyond 12 time periods did not yield any new information.

Figure 2 is a schematic outline of the calculation procedure, and Table II lists data from a typical simulation run. Here we see how the prediction standard deviation decreases through 12 cycles of selecting the In and Out terms. Also shown are the coefficients and their standard deviations associated with the In terms at the 12th cycle.

Table III gives the principal results of the study. Listed are the root-mean-square value of the output y and of the error of predicting the output using terms of up to the third order. The errors of prediction are 1, 6, and 8% of the fluctuations of y from steady state for α = 0.1, 1, and 2, respectively, but only 0.01, 0.6, and 1.6% of the total output, Q_4(t). The actual results vary somewhat from one cycle to the next. Typically, the prediction errors are much larger in early cycles. As the appropriate terms are selected, the prediction errors diminish and reach more or less a steady state. The values in the table are averages over several cycles after steady state has been reached. At this steady state, the predicting equation contained about 30 terms which changed little from one cycle to the next.

Although the same terms (with slight variations) were selected consistently in different runs, no rational explanation could be found why those specific terms appeared. Thus, though the "identification" process produces fairly good predictors, it does not shed much light on the nature of the process being identified. This is not, of course, unexpected in the type of identification being used here. To show the excellent behavior of the identified system, Figure 3 compares the predicted and observed values of y for α = 1.0. The agreement is all that one could ask for.

Extensions

Many extensions can be made in the present procedure to make the algorithm more versatile. We have considered here a single-output system. Multiple outputs can be easily treated by assigning a separate polynomial to each. Further, by a suitable redefinition of the input vector, we can introduce dependencies of an output on previous values of itself and of other outputs. Such a model is called autoregressive.

When our algorithm rejects a term, it will never be reconsidered after the following cycle. In this form the algorithm is suitable only for time-invariant systems. We can easily modify the algorithm so that the v vector consists of two parts: new terms never before considered and terms previously rejected. This will give the model a chance to readjust itself to changes in the nature of the system.

As implemented, the algorithm requires prespecification of the number of back-time periods considered. It would be desirable to have an automatic procedure for determining this parameter. In contrast, no prespecification of the maximum order is required. Further, one would like to adapt the algorithm to predict more than a single time period ahead.

Acknowledgment

The authors acknowledge the extensive programming assistance by Alex Brown during the initial phases of this work.

Literature Cited

Bansal, V. S., Proc. IEEE 116, 1957 (1969).
Harris, G. H., Lapidus, L., A.I.Ch.E. J. 13, 291 (1967).
Hsieh, H. C., "Advances in Control Systems," Vol. 2, Academic Press, New York, 1965.
Koppel, L., "Introduction to Control Theory," Prentice-Hall, Englewood Cliffs, N. J., 1969.
Ku, Y. H., Wolf, A. A., J. Franklin Inst. 281, 9 (1966).
Lapidus, L., Luus, R., "Optimal Control of Engineering Processes," pp. 231-320, Blaisdell Publishing Co., Waltham, Mass., 1967.
Roy, R. J., Sherman, J., IEEE Auto. Control 12, 761 (1967).

RECEIVED for review September 15, 1969
ACCEPTED July 30, 1970