Ind. Eng. Chem. Res. 1990, 29, 374-382


Model Predictive Control with State Estimation

N. Lawrence Ricker

Department of Chemical Engineering, BF-10, University of Washington, Seattle, Washington 98195

A state-space formulation of the multivariable model-predictive controller (MPC) with provisions for state estimation is developed. Hard constraints on the manipulated variables and outputs are accommodated, as in Quadratic Dynamic Matrix Control (QDMC) and related algorithms. For unconstrained problems, a low-order analytical form of the controller is obtained. The potential benefits of MPC with state estimation are demonstrated for the case of dual-composition, LV control of the high-purity distillation column problem studied previously by Skogestad and Morari, which is an especially challenging problem for MPC-type algorithms. It is shown that the use of the state estimator with a single tuning parameter (beyond that required for standard MPC) provides robust performance equivalent to the best μ-optimal controller designed by Skogestad and Morari. The method is also applied to a process that includes integration and thus is not asymptotically stable in the open loop.

Over the past 10 years, there has been a growing interest in the use of control structures in which a model is placed in parallel with the plant. The advantages of this from the point of view of theoretical analysis and controller design have been explored thoroughly by Morari and co-workers (e.g., Garcia and Morari, 1982, 1985; Morari and Doyle, 1986; Morari and Zafiriou, 1989), Rouhani and Mehra (1982), and others. Its advantages in industrial applications (e.g., explicit handling of inequality constraints) have been discussed by Richalet and Rault (1978), Cutler et al. (1983), Ricker et al. (1986), Prett and Garcia (1988), and others. It is also beginning to be used as the basis for adaptive control algorithms (e.g., Clarke et al., 1987; De Keyser et al., 1988). In the present work, however, such controllers are assumed to be nonadaptive and will be referred to by the generic term "model-predictive control" (MPC).

A disadvantage of the typical MPC formulation (DMC, QDMC, etc.) is that it is designed for step changes in the setpoints and the output disturbances (Prett and Garcia, 1988). This may be a good assumption in the case of the setpoints but is rarely so in the case of the disturbances. Performance then depends on the degree to which the real disturbances are "steplike" and on how well shortcomings in the design assumptions can be overcome by controller tuning. In the extreme case of a ramp disturbance at the output of a nonminimum-phase process, the performance can be very poor (see the examples).

Zafiriou and Morari (1987) and Astrom and Wittenmark (1984) have pointed out that the disadvantages cited above can be overcome through the use of a "2-degree-of-freedom" structure, i.e., one in which the controller treats the effects of disturbances (as seen at the plant output) differently from the way it handles setpoint changes. Morari and Zafiriou (1989) cover the design of robust 2-degree-of-freedom controllers but do not show how inequality constraints can be incorporated. For problems without inequality constraints, the LQG design procedure (in which state estimation is combined with a linear state feedback controller) leads naturally to a controller with a 2-degree-of-freedom structure (e.g., Astrom and Wittenmark, 1984). Prett and Garcia (1988) have shown that for the unconstrained problem the DMC disturbance assumptions are, in fact, equivalent to the use of a specific state estimator gain matrix in an LQG controller. Navratil et al. (1989) and Li et al. (1989) show that state estimation can be incorporated in MPC but do not consider constraints. Marquis and Broustail (1989) have also described a combination of MPC and state estimation which has been operating on industrial problems (with constraints) for more than 6 years.

Other methods have been proposed to deal with the problem of disturbance rejection in an IMC (or Smith Predictor) structure. For example, Wellons and Edgar (1985), Yuan and Seborg (1986), and Svoronos (1986) use a form of observer to estimate the magnitude of a hypothetical load disturbance. The estimated disturbance is then incorporated into the prediction of future outputs. The philosophy is similar to the state estimation approach, but their work is intended for SISO systems without constraints.

The main goal of this paper is to show that it can be advantageous to use state-estimation techniques other than the approach used in DMC. The next section gives the details of the algorithm. The final section is a demonstration of its properties for two example problems.

Derivation of the Algorithm

Plant. The states of a plant are never fully measurable. In fact, one would not know the "parameters" or even the "order" of the plant, since it is generally a time-varying, nonlinear, distributed-parameter system. The values of the signals leaving the controller are known, and one can measure the plant outputs, but nothing else in the plant is accessible. As shown in Figure 1, an unmeasured disturbance corrupts the control signal before it reaches the plant, so even the values of the plant inputs are never known exactly. The distinction between the known internal model and the unknown plant is an essential element of past research, and it is reemphasized here.

Suppose that we approximate the plant by a discrete-time, linear, time-invariant, state-space model:

x̃(k+1) = Φ̃ x̃(k) + Γ̃ ũ(k)   (1)

ỹ(k) = C̃ x̃(k) + D̃ ũ(k)   (2)

where k is the current sampling period, x̃ is a vector of ñ states, ũ is a vector of n_u plant inputs, ỹ is a vector of p measured outputs, and Φ̃, Γ̃, C̃, and D̃ are constant matrices of appropriate size. The plant inputs can be considered to be of various types, as shown in Figure 1:

ũᵀ(k) = [mᵀ(k)  vᵀ(k)  w̃ᵀ(k)  z̃ᵀ(k)  ẽᵀ(k)]ᵀ   (3)

where m(k) is a vector of m manipulated variables (i.e., control signals), v(k) is a vector of m_v measured disturbances, and w̃(k), z̃(k), and ẽ(k) are vectors of unmeasured disturbances of length m, ñ_z, and p, respectively. Of course, w̃(k), z̃(k), and ẽ(k) could be combined into a single unmeasured disturbance variable entering at the plant output; they are treated separately here for illustrative purposes only.


Figure 1. Block diagram of the plant and control system.

The Γ̃ and D̃ matrices are partitioned accordingly:

Γ̃ = [Γ̃_m  Γ̃_v  Γ̃_w  Γ̃_z  0]   (4)

D̃ = [0  D̃_v  0  D̃_z  I]   (5)

Note that ẽ(k) does not affect the states and there is no direct transfer from m(k) to ỹ(k). The system may be nonsquare (with either more manipulated variables than outputs or vice versa). This plant model is not part of the control system. Its only purpose is to represent the plant in simulations and analytical work.
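As an aside not found in the paper, the plant description of eqs 1-5 is straightforward to exercise numerically. The following sketch (Python/NumPy) steps such a plant one sample at a time with the input partitioning of eq 3; the matrices, dimensions, and the choice Γ̃_w = Γ̃_m are placeholders chosen only for illustration.

```python
import numpy as np

# Hypothetical 2-state, 1-output plant used only to illustrate eqs 1-5.
# All numerical values are placeholders, not taken from the paper.
Phi = np.array([[0.9, 0.1],
                [0.0, 0.8]])
# Columns of Gamma partitioned as in eq 4: [Gamma_m, Gamma_v, Gamma_w, Gamma_z, 0]
Gam_m = np.array([[0.5], [0.2]])   # manipulated variable m(k)
Gam_v = np.array([[0.1], [0.0]])   # measured disturbance v(k)
Gam_w = Gam_m                      # illustrative choice: w(k) corrupts the control signal
Gam_z = np.array([[0.0], [0.3]])   # unmeasured disturbance z(k)
Gam_e = np.zeros((2, 1))           # e(k) does not affect the states (eq 4)
Gamma = np.hstack([Gam_m, Gam_v, Gam_w, Gam_z, Gam_e])

C = np.array([[1.0, 0.0]])
# D partitioned as in eq 5: [0, D_v, 0, D_z, I]; no direct transfer from m(k) to y(k)
D = np.hstack([np.zeros((1, 1)), np.array([[0.05]]),
               np.zeros((1, 1)), np.array([[0.0]]), np.eye(1)])

def plant_step(x, m, v, w, z, e):
    """One sampling period of eqs 1-2: returns (x_next, y)."""
    u = np.concatenate([m, v, w, z, e])   # input vector of eq 3
    x_next = Phi @ x + Gamma @ u
    y = C @ x + D @ u
    return x_next, y
```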

Internal Model. The internal model provides a prediction of future plant outputs as a function of contemplated adjustments in the manipulated variables and estimated disturbances. The controller chooses the values of u to send to the plant such that the predicted plant outputs are optimal according to a specified criterion. The internal model is part of the control system, and its states are all known exactly. Furthermore, one knows the structure and parameters of the internal model. In the present work, the internal model has the form

x(k+1) = Φ x(k) + Γ u(k)   (6)

y(k) = C x(k) + D u(k)   (7)

The number of states is n. The input vector definition is

uᵀ(k) = [mᵀ(k)  vᵀ(k)  wᵀ(k)  zᵀ(k)]ᵀ   (8)

(compare with eq 3). Variables w and z are unmeasured disturbances of length m and n_z, respectively. As for the plant model, the Γ and D matrices are partitioned:

Γ = [Γ_m  Γ_v  Γ_w  Γ_z]   (9)

D = [0  D_v  0  D_z]   (10)
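A corresponding sketch (again illustrative only, with hypothetical matrices) shows how the internal model of eqs 6-10 can be stored in this partitioned form and used to propagate an output prediction when only m(k) and v(k) are known, the unmeasured disturbances w and z being taken as zero.

```python
import numpy as np

# Hypothetical internal model (n = 2 states, p = 1 output); placeholder values.
Phi = np.array([[0.9, 0.1],
                [0.0, 0.8]])
Gam_m = np.array([[0.5], [0.2]])
Gam_v = np.array([[0.1], [0.0]])
C = np.array([[1.0, 0.0]])
D_v = np.array([[0.05]])

def predict_outputs(x0, m_seq, v_seq):
    """Open-loop output prediction from eqs 6-7 with w = z = 0."""
    x = x0.copy()
    y_pred = []
    for m, v in zip(m_seq, v_seq):
        y_pred.append(C @ x + D_v @ v)          # eq 7 with the unmeasured-input columns zeroed
        x = Phi @ x + Gam_m @ m + Gam_v @ v     # eq 6 with w = z = 0
    return np.array(y_pred)
```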

State Estimation. In an analogy to the LQG procedure (e.g., Astrom and Wittenmark, 1984), we assume that future unmeasured disturbances will be zero and use the internal model to estimate the future state of the plant:

x̂(k+1|k) = Φ x̂(k|k−1) + Γ_m m(k) + Γ_v v(k) + K d(k|k)   (11)

ŷ(k|k−1) = C x̂(k|k−1) + D_v v(k)   (12)

where x̂(k+1|k) is the estimate of the state at future sampling period k + 1 based on information available at period k, ŷ(k|k−1) is the estimate of the plant outputs at period k based on information at period k − 1, K is a constant estimator gain matrix (n by p), and d(k|k) is the current value of the estimator error:

d(k|k) = y(k) − ŷ(k|k−1)   (13)

The variable d(k|k) plays the same role as the feedback signal in IMC (see, e.g., Morari and Zafiriou, 1989). If we set K = 0 in eq 11, we get the same state estimator used in the DMC/QDMC formulation and in IMC. In that case, the state estimate depends only on the values of m(k) and v(k). The use of K ≠ 0 provides additional flexibility for reduction of the estimator errors via feedback from the measured plant output. Specific approaches for the selection of K are given in a later section.
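For concreteness, the estimator recursion of eqs 11-13 can be written as the following sketch (illustrative only; the function and variable names are not from the paper).

```python
# Sketch of the estimator recursion, eqs 11-13.  All arguments are NumPy arrays
# with the dimensions defined in the text (x_hat: n, y_meas: p, K: n-by-p, etc.).
def estimator_step(x_hat, y_meas, m, v, Phi, Gam_m, Gam_v, C, D_v, K):
    """One estimator update: returns (x_hat_next, d)."""
    y_hat = C @ x_hat + D_v @ v        # eq 12: output predicted from the old estimate
    d = y_meas - y_hat                 # eq 13: estimator error (the IMC feedback signal)
    x_hat_next = Phi @ x_hat + Gam_m @ m + Gam_v @ v + K @ d   # eq 11
    return x_hat_next, d
```

With K = 0 the measured output never enters the state update, which reproduces the open-loop DMC/QDMC prediction; a nonzero K (for example, a Kalman-type gain designed for an assumed disturbance model) lets the measurement correct the estimate at every sample.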

Reference Model. The reference model represents the desired closed-loop response of the plant to a change in the setpoints:

x_r(k+1) =