PROCESS DESIGN AND CONTROL

Blocking and Condensing Design for Quadratic Dynamic Matrix Control Using Wavelets

Srinivas Palavajjhala, Rodolphe L. Motard, and Babu Joseph*

Process Control Systems Laboratory, Department of Chemical Engineering, Washington University, St. Louis, Missouri 63130-4899

Ind. Eng. Chem. Res. 1994, 33, 1159-1173

* Author to whom correspondence should be addressed. E-mail: [email protected].

Quadratic dynamic matrix control consists of the on-line solution of a quadratic programming (QP) problem. Blocking and condensing (B & C) are techniques that consider a reduced set of future manipulated variable and predicted output values in the QP problem by assuming them to be constant over intervals. B & C improve the controller robustness and reduce the QP computation time. There is a need for a systematic design methodology for the selection of B & C intervals. Wavelet transforms provide a suitable framework for developing a B & C design methodology. B & C in the time domain and in the wavelet domain are compared. It is shown that time domain B & C is edge driven, while that using wavelets is interval driven. A new design procedure for interval-driven B & C using information theory and sensitivity analysis is developed. The design procedure is applied to the Shell process control problem. Simulation results are discussed and some design guidelines are provided. B & C are also examined in the context of using move suppression factors.

1. Introduction

Model predictive control (MPC) is a generic name for a widely used class of process control algorithms. The philosophy of MPC techniques is based on the on-line use of a process model to predict the effect of changes in the future control actions (manipulated variable changes) on the controlled variables (outputs) over a finite horizon. The predictions are used to obtain the future control actions that minimize the deviation of the future controlled variables from a desired trajectory. The deviation is measured using a metric norm. A few or all of the future control actions computed are then actually used for control. Of the numerous variations of MPC algorithms, dynamic matrix control (DMC) (Cutler and Ramaker, 1979, 1980) is the most widely used in the chemical process industry, and is used in this paper.

DMC controllers, for many industrial applications, have to handle a large number of manipulated variables and output variables. Applications with 20-40 controlled variables and 10-15 manipulated variables are often encountered. For reasonable prediction and control horizons this results in large dynamic matrices that are ill-conditioned. The DMC controllers that use such dynamic matrices are sensitive to measurement and modeling errors since these controllers use the inverse of the process model for control. The methods proposed to improve DMC controller robustness include ridge regression (Prett and Gillette, 1979; Marchetti et al., 1983), blocking and condensing (Reid et al., 1980; Mehra et al., 1981; Ricker, 1985), and principal component analysis (Maurath et al., 1988). In ridge regression, the changes in manipulated variables are penalized using move suppression parameters in the objective function. This is done so that the manipulated variable changes are gradual, and thus a more robust control is achieved. Principal component analysis, on the other hand, uses singular value decomposition to retain components that have a significant contribution to the controller performance. Blocking and condensing (Reid et al., 1980; Mehra et al., 1981; Ricker, 1985) improve the controller robustness by considering a reduced set of manipulated variable changes and predicted output values over the prediction horizon. In doing so, the number of optimizing variables is also reduced. Of these methods, ridge regression and blocking and condensing are most widely used in practice, often in conjunction with each other. Design guidelines to select move suppression parameters are available in the literature (Garcia and Morari, 1982, 1985a,b; Marchetti et al., 1983). However, there is a need for a design procedure for selecting blocking and condensing intervals (Ricker, 1985; Lee et al., 1992). Furthermore, there have been no studies aimed at understanding the combined effect of move suppression and blocking and condensing on DMC controller performance and robustness.

The objective of this paper is to develop a design methodology for blocking and condensing (B & C). To do this, working in the wavelet domain provides a suitable framework. Using scaling functions and wavelets, we decompose the future manipulated variable values and the predicted output values to obtain a hierarchical multiresolution representation (Daubechies, 1989, 1992; Mallat, 1989; Joseph and Motard, 1994). This decomposition can be thought of as the projection of a signal onto a family of orthonormal bases. Wavelet domain B & C is implemented by selecting a subset of the projections that are significant. The error introduced by B & C influences the QDMC controller performance and robustness. Our design procedure interprets the error in approximation as an information criterion and selects B & C intervals without degrading the controller performance significantly. The new design procedure is explained and illustrated using examples. For Haar wavelets, we show that the wavelet domain B & C is interval driven, while the time domain B & C is edge driven. The advantages of the interval-driven over the edge-driven approach are demonstrated.

The paper is organized as follows: section 2 explains time domain B & C. Section 3 gives an overview of wavelet transforms. The overview is restricted to concepts used later in the paper. Section 4 explains the differences between the time domain and the wavelet domain B & C. Section 5 describes the new design methodology for B & C using information theory and sensitivity analysis. Section 6 contrasts principal component analysis with blocking and condensing. Section 7 illustrates the application of this design methodology to the Shell process control problem (Prett and Morari, 1987). The simulation results are discussed and some design guidelines are provided.

2. Blocking and Condensing

Consider the quadratic dynamic matrix control (QDMC) problem with linear constraints on the output and manipulated variables. The QDMC problem can be stated as the following quadratic programming (QP) problem (Garcia and Morshedi, 1986):

$$\min_{\Delta u}\; F = \frac{1}{2}[A\Delta u - e(k+1)]^{\mathrm T}\Gamma^{\mathrm T}\Gamma[A\Delta u - e(k+1)] + \frac{1}{2}\Delta u^{\mathrm T}\Lambda^{\mathrm T}\Lambda\,\Delta u \qquad (1)$$

subject to

$$C\Delta u \geq c(k+1)$$

$$\Delta u_{\min} \leq \Delta u \leq \Delta u_{\max} \qquad (2)$$

A is the dynamic matrix, e(k+1) is the difference between the desired output trajectory and the predicted output trajectory over the prediction horizon P, Δu(k) is the vector of manipulated variable changes over the control horizon M, Γ is the matrix of output weights and is usually chosen to be a diagonal matrix, Λ is a diagonal matrix of manipulated variable weights, also referred to as the move suppression parameters, and C is the constraint matrix obtained by translating the output and manipulated variable constraints to constraints on the manipulated variable changes. The QP problem is solved for Δu(k) at every sample time, and the values corresponding to the current time are implemented.

Blocking stipulates manipulated variable changes over the prediction horizon by considering only a few changes. The manipulated variable is assumed constant over blocked intervals; the manipulated variable changes in these intervals will therefore be zero. Condensing stipulates changes in the predicted output over the prediction horizon by considering only a few values. The predicted output values are assumed constant over the condensed intervals.

In practice, blocking is used to reduce the number of optimizing variables in the QP problem that is solved on-line. Reid et al. (1980) and Ricker (1985) have shown, using examples, that blocking influences the robustness of QDMC controllers. For the same number of optimizing variables, Ricker shows that heavier blocking yields larger robustness. The controller robustness is obtained at the cost of controller performance. Reid et al. and Ricker have considered equally spaced blocking intervals only; the use of unequal length intervals has not been considered in their work. Condensing is used to reduce the number of constraints on the output variable in the optimization (Lee et al., 1992). An additional advantage of B & C is that a more compact and better conditioned augmented dynamic matrix [A^T Λ^T]^T is obtained. The inverse of this matrix is used for control in the unconstrained case.

B & C can be considered as methods that approximate a higher dimensional dynamic matrix, future manipulated variable changes vector, and predicted output vector by lower dimensional matrices and vectors. Row-wise orthogonal (i.e., P_1P_1^T = I) projection matrices P_1 (referred to as the condensing projection matrix) and P_2 (referred to as the blocking projection matrix) are used to obtain the lower dimensional dynamic matrix Ā = P_1AP_2^T. The lower dimensional manipulated variable changes vector Δū = P_2Δu and the lower dimensional predicted error vector ē(k+1) = P_1e(k+1) are used in the optimization problem defined in eqs 1 and 2. B & C design involves the proper selection of these projection matrices so that the transformed optimization problem does not degrade the controller performance significantly. We classify the B & C methods based on the type of projection matrices into (a) time domain B & C (section 2.1) and (b) wavelet domain B & C (section 4).
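In the unconstrained case, eq 1 is a linear least-squares problem in Δu, which is where the augmented matrix [A^T Λ^T]^T mentioned above appears. The following minimal numpy/scipy sketch (an illustration with hypothetical values for A, Γ, Λ, and e; not the authors' implementation) shows the unconstrained solution and a bound-constrained solve; the general inequality constraints of eq 2 would require a full QP solver.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Sketch of eq 1: minimize 0.5*||Gamma (A du - e)||^2 + 0.5*||Lambda du||^2,
# i.e., the stacked least-squares problem ||[Gamma A; Lambda] du - [Gamma e; 0]||^2.
A = np.array([[0.5, 0.0],
              [1.2, 0.5]])              # hypothetical 2x2 dynamic matrix
e = np.array([1.0, 1.0])                # hypothetical predicted error vector e(k+1)
Gamma = np.eye(2)                       # output weights
Lam = 0.1 * np.eye(2)                   # move suppression weights

A_aug = np.vstack([Gamma @ A, Lam])     # augmented dynamic matrix
b_aug = np.concatenate([Gamma @ e, np.zeros(2)])

# Unconstrained minimizer: du = (A^T G^T G A + L^T L)^{-1} A^T G^T G e
du_unconstrained = np.linalg.solve(A_aug.T @ A_aug, A_aug.T @ b_aug)

# With the move-size bounds of eq 2 (scalar bounds broadcast over all moves)
du_bounded = lsq_linear(A_aug, b_aug, bounds=(-0.5, 0.5)).x

print(du_unconstrained, du_bounded)
```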

2.1. Time Domain Blocking and Condensing. Consider the QDMC problem defined in eqs 1 and 2. Let u(k) denote a vector containing the manipulated variable values in the prediction horizon P. Then

$$u(k) = [u(k)\;\; u(k+1)\;\; \ldots\;\; u(k+P-1)]^{\mathrm T} \qquad (3)$$

Usually, a control horizon M < P is used. In this case, the manipulated variable is assumed to be held constant, or is blocked, in the interval [k + M − 1, k + P − 1], i.e.,

$$u(k) = [u(k)\;\; u(k+1)\;\; \ldots\;\; u(k+M-1)\;\; \ldots\;\; u(k+M-1)]^{\mathrm T} \qquad (4)$$

or

$$\Delta u(k) = [\Delta u(k)\;\; \Delta u(k+1)\;\; \ldots\;\; \Delta u(k+M-1)]^{\mathrm T} \qquad (5)$$

and Δu(k+M) = ... = Δu(k+P−1) = 0.

In a more general case, the manipulated variable values are assumed to be held constant in blocks of different lengths spanning the prediction horizon. Suppose that the prediction horizon P is divided into B such blocks of unequal length. Denote the starting time of each block by b_0, b_1, b_2, b_3, ..., b_{B−1}. The blocked manipulated variable vector ū_{B,t}(k) is then obtained by retaining only the manipulated variable values at the start of each block, i.e.,

$$\bar u_{B,t}(k) = [\bar u_t(k)\;\; \bar u_t(k+b_1)\;\; \bar u_t(k+b_2)\;\; \ldots\;\; \bar u_t(k+b_{B-1})]^{\mathrm T} \qquad (6)$$

where

$$\bar u_t(k+b_j) = \bar u_t(k+b_j+1) = \ldots = \bar u_t(k+b_{j+1}-1) = u(k+b_j) \qquad (7)$$

and j = 0, 1, 2, ..., B − 1, b_0 = 0, b_B = P. The manipulated variable changes are given by

$$\Delta\bar u_{B,t}(k) = [\Delta\bar u_t(k)\;\; \Delta\bar u_t(k+b_1)\;\; \ldots\;\; \Delta\bar u_t(k+b_{B-1})]^{\mathrm T} = P_2\Delta u \qquad (8)$$

where Δū_t(k+b_j+1) = ... = Δū_t(k+b_{j+1}−1) = 0. The blocking projection matrix P_2 is constructed by dropping rows corresponding to blocked manipulated variables from a P × P identity matrix.

The interval between the starting times of two consecutive blocks forms a blocking interval. Blocking intervals are denoted using a row vector with the starting time of each block as its elements; the prediction horizon is the last element of this vector. The blocking interval vector BI associated with the blocked manipulated variable vector defined in eq 6 is

$$BI = [0\;\; b_1\;\; b_2\;\; \ldots\;\; b_{B-1}\;\; P] \qquad (9)$$

Condensing stipulates changes in the predicted output and the setpoint trajectory over the prediction horizon by considering only a few values. The predicted output values and the setpoint values are assumed constant over the condensed intervals. The predicted output vector Y^P(k+1) is defined as

$$Y^P(k+1) = [y(k+1)\;\; y(k+2)\;\; \ldots\;\; y(k+P)]^{\mathrm T} \qquad (10)$$

Let the prediction horizon P be condensed into C intervals of unequal length. Denote the time corresponding to the start of each interval by c_0, c_1, ..., c_{C−1}. The condensed predicted output vector Ȳ^P_{C,t}(k+1) is given by

$$\bar Y^P_{C,t}(k+1) = [\bar y_t(k+1)\;\; \bar y_t(k+c_1)\;\; \ldots\;\; \bar y_t(k+c_{C-1})]^{\mathrm T} \qquad (11)$$

where

$$\bar y_t(k+c_j) = \bar y_t(k+c_j+1) = \ldots = \bar y_t(k+c_{j+1}-1) = y(k+c_j) \qquad (12)$$

j = 0, 1, 2, ..., C − 1, c_0 = 1, and c_C = P + 1. Condensing intervals are denoted as a row vector with the starting time of each interval as its elements. The last element of this vector is P + 1. The condensing interval vector CI for the condensed output vector in eq 11 is

$$CI = [1\;\; c_1\;\; c_2\;\; \ldots\;\; c_{C-1}\;\; P+1] \qquad (13)$$

The condensed predicted error vector computed using the condensed output is

$$\bar e_t(k+1) = [\bar e_t(k+1)\;\; \bar e_t(k+c_1)\;\; \ldots\;\; \bar e_t(k+c_{C-1})]^{\mathrm T} = P_1 e(k+1) \qquad (14)$$

where

$$\bar e_t(k+c_j) = \bar e_t(k+c_j+1) = \ldots = \bar e_t(k+c_{j+1}-1) = e(k+c_j) \qquad (15)$$

The condensing projection matrix P_1 is constructed by dropping rows corresponding to condensed output variables from a P × P identity matrix.
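As a sketch of this construction (hypothetical helper functions, not code from the paper), P_1 and P_2 can be formed by keeping the rows of a P × P identity matrix that correspond to the starting times collected in the blocking and condensing interval vectors BI (eq 9) and CI (eq 13); the interval vectors of Example 1 below are used for illustration.

```python
import numpy as np

def blocking_projection(BI):
    """P2 of eq 8: keep the identity rows at the block start times b_0, ..., b_{B-1} (0-based offsets)."""
    P = BI[-1]                        # the last element of BI is the prediction horizon P
    return np.eye(P)[BI[:-1], :]

def condensing_projection(CI):
    """P1 of eq 14: keep the identity rows at the interval start times c_0, ..., c_{C-1} (1-based, c_0 = 1)."""
    P = CI[-1] - 1                    # the last element of CI is P + 1
    return np.eye(P)[[c - 1 for c in CI[:-1]], :]

# Interval vectors used in Example 1 below
P2 = blocking_projection([0, 1, 2, 4, 8])           # 4 x 8
P1 = condensing_projection([1, 3, 5, 6, 7, 8, 9])   # 6 x 8
print(P2 @ P2.T)   # identity matrices: the rows are orthonormal
print(P1 @ P1.T)
```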

Substituting the transformed variables (from eqs 8 and 14) and the transformed dynamic matrix into eqs 1 and 2, we obtain a transformed optimization problem with a smaller number of optimizing variables and fewer predicted error values. The transformed problem is

$$\min_{\Delta\bar u_t}\; \bar F = \frac{1}{2}[\bar A_t\Delta\bar u_t - \bar e_t(k+1)]^{\mathrm T}\bar\Gamma^{\mathrm T}\bar\Gamma[\bar A_t\Delta\bar u_t - \bar e_t(k+1)] + \frac{1}{2}\Delta\bar u_t^{\mathrm T}\bar\Lambda^{\mathrm T}\bar\Lambda\,\Delta\bar u_t \qquad (16)$$

subject to

$$\Delta u_{\min} \leq \Delta\bar u_t(k) \leq \Delta u_{\max} \qquad (17)$$

In time domain B & C, all the blocked manipulated variable changes in the above optimization problem are affected by the lost degrees of freedom and by the fewer points specified along the predicted output trajectory. Since the manipulated variable (output variable) value over blocked (condensed) intervals is assumed to be the same as the value at the start of the interval, we call this approach edge driven B & C (see eqs 7 and 12).

Example 1: Time Domain Blocking and Condensing. Consider the dynamic matrix for the transfer function 5.72e^{−14s}/(60s + 1). The dynamic matrix obtained using a sample time of 5, a prediction horizon P = 8, and a control horizon M = 8 is

$$A = \begin{bmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0.0945 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0.5443 & 0.0945 & 0 & 0 & 0 & 0 & 0 & 0\\
0.9582 & 0.5443 & 0.0945 & 0 & 0 & 0 & 0 & 0\\
1.3387 & 0.9582 & 0.5443 & 0.0945 & 0 & 0 & 0 & 0\\
1.6892 & 1.3387 & 0.9582 & 0.5443 & 0.0945 & 0 & 0 & 0\\
2.0115 & 1.6892 & 1.3387 & 0.9582 & 0.5443 & 0.0945 & 0 & 0
\end{bmatrix} \qquad (18)$$

Parts a and b of Figure 1 show the future manipulated variable values and the predicted output variable values with no B & C. The values of these variables are constant between sample times. The B & C interval vectors are [0 1 2 3 4 5 6 7 8] and [1 2 3 4 5 6 7 8 9], respectively. Now, let us block {Δu(k+3), Δu(k+5), Δu(k+6), Δu(k+7)} and condense {y(k+2), y(k+4)}. The manipulated variable values in the blocked intervals and the output variable values in the condensed intervals are assumed constant (see Figure 1c,d). The B & C interval vectors are [0 1 2 4 8] and [1 3 5 6 7 8 9], respectively. The transformed manipulated variable vector is (from eqs 7 and 8)

$$\Delta\bar u_t(k) = [\Delta u(k)\;\; \Delta u(k+1)\;\; \Delta u(k+2)\;\; \Delta u(k+4)]^{\mathrm T} \qquad (19)$$

and the transformed predicted output variable vector is (from eq 11)

$$\bar Y^P_{C,t}(k+1) = [y(k+1)\;\; y(k+3)\;\; y(k+5)\;\; y(k+6)\;\; y(k+7)\;\; y(k+8)]^{\mathrm T} \qquad (20)$$

From eqs 7 and 12, the following projection matrices P_1 and P_2 are obtained:

$$P_1 = \begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\end{bmatrix} \qquad (21)$$

$$P_2 = \begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0
\end{bmatrix} \qquad (22)$$

The lower dimension dynamic matrix Ā_t after B & C, obtained by premultiplying and postmultiplying the dynamic matrix by the projection matrices (i.e., Ā_t = P_1AP_2^T), is

$$\bar A_t = \begin{bmatrix}
0 & 0 & 0 & 0\\
0.0945 & 0 & 0 & 0\\
0.9582 & 0.5443 & 0.0945 & 0\\
1.3387 & 0.9582 & 0.5443 & 0\\
1.6892 & 1.3387 & 0.9582 & 0.0945\\
2.0115 & 1.6892 & 1.3387 & 0.5443
\end{bmatrix} \qquad (23)$$

Substituting the transformed vectors and matrices into eqs 16 and 17, the number of independent variables in the optimization reduces from 8 to 4.
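Equations 18-23 can be reproduced numerically. The sketch below (an illustration, not the authors' code) builds the step-response dynamic matrix for 5.72e^{−14s}/(60s + 1) with a sample time of 5 and P = M = 8, forms P_1 and P_2 for the interval vectors [0 1 2 4 8] and [1 3 5 6 7 8 9], and recovers Ā_t = P_1AP_2^T of eq 23.

```python
import numpy as np

# Step-response coefficients a_i of 5.72*exp(-14s)/(60s + 1) at t = 5, 10, ..., 40
K, tau, theta, Ts, P = 5.72, 60.0, 14.0, 5.0, 8
t = Ts * np.arange(1, P + 1)
a = np.where(t > theta, K * (1.0 - np.exp(-(t - theta) / tau)), 0.0)

# Dynamic matrix (eq 18): A[i, j] = a_{i-j+1} for i >= j, zero otherwise
A = np.zeros((P, P))
for i in range(P):
    for j in range(i + 1):
        A[i, j] = a[i - j]

# Projection matrices for BI = [0 1 2 4 8] and CI = [1 3 5 6 7 8 9] (eqs 21 and 22)
P2 = np.eye(P)[[0, 1, 2, 4], :]          # keep Delta-u(k), (k+1), (k+2), (k+4)
P1 = np.eye(P)[[0, 2, 4, 5, 6, 7], :]    # keep y(k+1), (k+3), (k+5), (k+6), (k+7), (k+8)

A_bar = P1 @ A @ P2.T                    # lower dimensional dynamic matrix (eq 23)
print(np.round(A, 4))
print(np.round(A_bar, 4))                # 6 x 4 matrix
```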

[Figure 1. Comparison of predicted outputs and manipulated variable changes before and after time domain blocking and condensing: (a) manipulated variable values over each interval in the control horizon (no blocking); (b) output variable values over each interval in the prediction horizon (no condensing); (c) manipulated variable values over each interval in the control horizon (with blocking); (d) output variable values over each interval in the prediction horizon (with condensing).]

3. An Overview on Wavelet Transforms

The discrete wavelet transform (DWT) decomposes an absolutely square integrable function f ∈ L²(R) into a hierarchy of signals at different resolutions or scales. Scale, denoted as m, can be defined as the inverse of frequency. In DWT, the scale is usually restricted to a discrete lattice m ∈ Z ∩ [0, L]. Scale m = 0 corresponds to the finest scale at which the function f is sampled, and let m = L be the coarsest scale. The wavelet representation of the function f is defined as

$$f(t) = \sum_{m=1}^{L}\sum_{n\in Z}\langle f,\psi_{m,n}\rangle\,\psi_{m,n}(t) + \sum_{n\in Z}\langle f,\phi_{L,n}\rangle\,\phi_{L,n}(t) \qquad (24)$$

The family of functions ψ_{m,n}, with m, n ∈ Z, are called wavelets; m is called the dilation parameter (scale) and n the translation parameter. Wavelets are generated by the dilation and the translation of a single prototype function called the mother wavelet, i.e.,

$$\psi_{m,n}(t) = 2^{-m/2}\psi(2^{-m}t - n) \qquad (25)$$

φ_{m,n}(t), with m, n ∈ Z, are called the scaling functions. For dilation m and translation n they are defined as

$$\phi_{m,n}(t) = 2^{-m/2}\phi(2^{-m}t - n) \qquad (26)$$

In eq 24, the inner products ⟨f, ψ_{m,n}⟩ for various values of the dilation and translation parameters and the inner products ⟨f, φ_{L,n}⟩ are called the wavelet coefficients and are defined, respectively, as

$$\langle f,\psi_{m,n}\rangle = 2^{-m/2}\int_{-\infty}^{\infty} f(t)\,\psi(2^{-m}t-n)\,\mathrm{d}t \qquad (27)$$

$$\langle f,\phi_{L,n}\rangle = 2^{-L/2}\int_{-\infty}^{\infty} f(t)\,\phi(2^{-L}t-n)\,\mathrm{d}t \qquad (28)$$

A hierarchical decomposition using the pyramid algorithm is used to compute the wavelet coefficients efficiently (Mallat, 1989). Let the sequence c_0 denote the projections of the function f on the scaling functions at scale m = 0 and n ∈ Z. The samples of a continuous function f at the finest scale are often used instead of c_0. The projections of the function f on the scaling functions at the next scale can be computed using

$$c_m(n) = \sum_{k} h(k-2n)\,c_{m-1}(k) \qquad (29)$$

where

$$h(n) = 2^{-1/2}\int_{-\infty}^{\infty}\phi(t/2)\,\phi(t-n)\,\mathrm{d}t$$

Thus, the sequence c_m, which is the projection of the function f on the scaling functions φ_{m,n} for all n ∈ Z, can be computed using

$$c_m = Hc_{m-1} \qquad (30)$$

Operator H can be interpreted as the convolution of the sequence c_{m−1} with the filter coefficients h(n) followed by decimation (dropping even terms). The projection on the wavelets is computed using

$$d_m(n) = \sum_{k} g(k-2n)\,c_{m-1}(k) \qquad (31)$$

where

$$g(n) = 2^{-1/2}\int_{-\infty}^{\infty}\psi(t/2)\,\phi(t-n)\,\mathrm{d}t$$

The sequence d_m, which is the projection of the function f on the wavelets ψ_{m,n} for all n ∈ Z, can be computed using

$$d_m = Gc_{m-1} \qquad (32)$$

Operator G can be interpreted as the convolution of the sequence c_{m−1} with the filter coefficients g(n) followed by decimation. From eqs 29 and 31, the following recursive wavelet decomposition algorithm is obtained:

$$c_j = Hc_{j-1} \quad \text{and} \quad d_j = Gc_{j-1} \qquad (33)$$

Figure 2a depicts two stages in this algorithm. The projections on the scaling functions are recursively decomposed to obtain coarser approximations. The projections on the wavelets at each scale contain the finer details. The reconstruction algorithm is similar to the decomposition algorithm, except that the steps are now reversed:

$$c_{j-1} = H^{*}c_{j} + G^{*}d_{j} \qquad (34)$$

Figure 2b depicts the reconstruction from the two-stage decomposition shown in Figure 2a. Finer details captured by the wavelets are added to the coarser approximations to reconstruct the original function. Operators H* and G* are the adjoints of operators H and G, respectively. The adjoint operators can be interpreted as upsampling (padding even terms with zeros) followed by convolution with the respective conjugate filter coefficients.

[Figure 2. Discrete wavelet decomposition and reconstruction: (a) block diagram of decomposition of the signal, where H and G acting on a sequence x of size N are defined as in eqs 29 and 31 for i = 1, 2, ..., N/2; (b) block diagram of reconstruction of the signal (h̄(n) is the complex conjugate of h(n); ↑2 means upsample by 2, i.e., pad even terms with 0).]

[Figure: Wavelet packet decomposition of a signal, levels 1-3 (Wickerhauser, 1991).]

Operators H and G used in the wavelet decomposition are low-pass and band-pass filters, respectively (Mallat, 1989). Filter coefficients for these filters, h(n) and g(n), with different properties, such as orthogonality, symmetry, smoothness, support in the time domain, support in the frequency domain, and number of vanishing moments, are available in the literature (e.g., Beylkin et al., 1991; Daubechies, 1992). In this paper, the Haar wavelet and scaling function are used.

Example 2: The Haar Wavelet. The Haar wavelet is an example of an orthonormal wavelet that is piecewise constant (Haar, 1910; Daubechies, 1992). The scaling function for this wavelet is a pulse:

$$\phi(t) = \begin{cases} 1, & 0 \leq t < 1\\ 0, & \text{otherwise} \end{cases}$$
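For the Haar scaling function and wavelet, the filter coefficients reduce to h = (1/√2, 1/√2) and g = (1/√2, −1/√2) (the standard values, assumed here). The following minimal numpy sketch (an illustration, not the authors' implementation) performs one stage of the pyramid algorithm of eqs 29-34 with these filters and checks perfect reconstruction.

```python
import numpy as np

SQRT2 = np.sqrt(2.0)

def haar_decompose(c):
    """One stage of eq 33: c_j = H c_{j-1}, d_j = G c_{j-1} (length of c must be even)."""
    c = np.asarray(c, dtype=float)
    approx = (c[0::2] + c[1::2]) / SQRT2    # low-pass H: convolution with h, then decimation
    detail = (c[0::2] - c[1::2]) / SQRT2    # band-pass G: convolution with g, then decimation
    return approx, detail

def haar_reconstruct(approx, detail):
    """Eq 34: c_{j-1} = H* c_j + G* d_j (upsample, then filter with the conjugate coefficients)."""
    c = np.empty(2 * len(approx))
    c[0::2] = (approx + detail) / SQRT2
    c[1::2] = (approx - detail) / SQRT2
    return c

c0 = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])   # samples at the finest scale
c1, d1 = haar_decompose(c0)          # coarser approximation and detail at scale m = 1
c2, d2 = haar_decompose(c1)          # scale m = 2
back = haar_reconstruct(c1, d1)
print(c1, d1)
print(np.allclose(back, c0))         # True: perfect reconstruction
```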