Ind. Eng. Chem. Res. 2000, 39, 2981-2991


Robust Model Predictive Control for Integrating Linear Systems with Bounded Parameters

Sameer Ralhan† and Thomas A. Badgwell*,†

Chemical Engineering Department, Rice University, MS-362, 6100 Main Street, Houston, Texas 77005-1892

This paper presents two robust model predictive control (MPC) algorithms for linear integrating plants described by a state-space model. The first formulation focuses on steady-state offset, whereas the second minimizes output deviations over the entire prediction horizon. The input matrix parameters of the plant are assumed to lie in a set defined by an ellipsoidal bound. Robustness is achieved through the addition of constraints that prevent the sequence of optimal controller costs from increasing for the true plant. The resulting optimization problems solved at each time step are convex and highly structured. A simulation example compares the performance of these algorithms with those based on minimizing the worst-case controller cost.

1. Introduction

The problem of robustly stabilizing a linear system subject to input and output constraints has received much attention in the recent control literature. Various modifications of model-based control algorithms have been proposed to ensure stability in the presence of modeling errors; these can be grouped into four general categories:
(1) Detuning of the controller by suppressing input movement (Vuthandam et al.,1 Sarimveis et al.,2 and Genceli and Nikolaou3).
(2) Minimization of the worst-case controller cost (Campo and Morari,4 Allwright and Papavasiliou,5 Lau et al.,6 Lee and Yu,7 and Scokaert and Mayne8).
(3) State contraction or terminal state constraints (Kothare et al.,9 Zheng,10 and Michalska and Mayne11).
(4) Cost function constraints (Badgwell12 and Ralhan13,14).
The first method is motivated by the fact that one can always stabilize an open-loop stable system by making the controller less aggressive; in the limit of a completely passive controller the system reverts to its stable open-loop behavior.
Methods based on detuning the controller share two fundamental limitations: appropriate tuning factors must be computed or determined from closed-loop simulation, and closed-loop performance may suffer unnecessarily when the model is accurate.

One of the most heavily studied robust stability methods relies on minimizing the worst-case controller cost. Lee and Yu7 summarize the development of so-called Min-Max algorithms, pointing out that the usual open-loop implementation may perform poorly. They propose a closed-loop formulation capable of better performance and discuss computationally tractable approximations. The main disadvantage of the Min-Max approach is that control performance may be too conservative for some cases.

A third approach to the robust stability problem involves adding a constraint that forces all possible system states to contract on a finite horizon, or to terminate in a robustly stabilizable region. If the horizon is short, state contraction constraints may have feasibility problems or may be so conservative that they cause an unacceptable performance loss.

The authors have recently presented an alternative method for achieving robust stability using cost function constraints. This approach involves adding constraints that prevent the sequence of controller costs from increasing for every system in the uncertainty description. A significant advantage of this method is that good performance may still be possible if the nominal system optimized in the controller objective is close to the true system under control. Previous work has shown that the use of cost function constraints leads to robust stability for a finite set of stable, multivariable state-space systems, subject to hard input and soft state constraints.12 This paper illustrates how the same basic approach can be used to robustly stabilize linear integrating systems in which the model parameters are bounded by a hyperellipsoid. Two control algorithms have been developed by building upon ideas first presented by Lee and Cooley.15 As in their paper, our first algorithm focuses on steady-state plant offset whereas the second minimizes output deviations over the entire prediction horizon. While we make use of the basic one-stage and two-stage structures laid out by Lee and Cooley, the particular structure of the cost function constraints that we employ, coupled with an elliptic uncertainty domain, leads to highly structured convex semi-infinite programs, allowing the development of efficient numerical solutions. The algorithms described here can be extended in a straightforward way to control linear integrating systems subject to hard input and soft output constraints.14

The paper is organized as follows. The first section defines the control problem to be solved.

* To whom correspondence should be addressed. Tel.: (281) 504-3497. Fax: (281) 584-4329. E-mail: tom.badgwell@aspentech.com.
† Both authors are now with Aspen Technology, Inc., 1293 Eldridge Parkway, Houston, TX 77077.
This is followed by a description of the cost functions used by each algorithm. The two robust controllers are then presented and analyzed in separate sections. This is followed by a comparison of the algorithms with Min-Max controllers for a simulated integrating system.

2. Plant Model and Control Problem

Consider a discrete-time, linear, time-invariant integrating system described by the following state-space model:

10.1021/ie990688s CCC: $19.00 © 2000 American Chemical Society. Published on Web 07/15/2000.

Table 1. Controller Cost Function Definitions

x_{k+1} = x_k + B̄u_k    (1)

where x_k ∈ R^n is the plant state and u_k ∈ R^m is the plant input at time k. Assume that the matrix B̄ is not known exactly but its elements b̄ are bounded and lie in the convex set B defined as

B ≡ {b | (b - b_c)^T W (b - b_c) ≤ 1};  W = W^T > 0    (2)

B(b) = [b_ij], 1 ≤ i ≤ n, 1 ≤ j ≤ m    (3)

b = [b_11 b_12 ... b_1m ... b_n1 b_n2 ... b_nm]^T    (4)

The uncertainty set B describes an ellipsoid in the parameter space with its center at b_c. The matrix W defines the size and orientation of the ellipsoid; the semiaxis lengths are the square roots of the reciprocals of the eigenvalues of W, and the semiaxis directions are defined by the eigenvectors of W. The goal of the control system is to generate a sequence of inputs {u_k} that brings the state of system (1) from an initial value x_0 to the origin when the input matrix B̄ is not known exactly. It is assumed for simplicity that the process state x_k at each time step is measured perfectly. As the controller brings the output to the origin, it should respect input and state constraints:

u_k ∈ U;  x_k ∈ X    (5)
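As a concrete illustration of the uncertainty description in (2)-(4), the sketch below tests membership in B and recovers the semiaxis geometry from the eigendecomposition of W. The two-parameter example data (W and b_c) are our invention, not values from the paper:

```python
import numpy as np

# Hypothetical example data: a 2-parameter ellipsoidal uncertainty set
# B = {b : (b - bc)^T W (b - bc) <= 1} with W = W^T > 0.
W = np.diag([1/16.0, 1/4.0])   # assumed weight matrix
bc = np.array([8.0, 2.0])      # assumed center

def in_uncertainty_set(b, bc, W):
    """True if the stacked parameter vector b lies in the ellipsoid B."""
    d = b - bc
    return bool(d @ W @ d <= 1.0)

# Semiaxis lengths are the square roots of the reciprocals of the
# eigenvalues of W; the eigenvectors give the semiaxis directions.
eigvals, eigvecs = np.linalg.eigh(W)
semiaxes = 1.0 / np.sqrt(eigvals)
```

With this W the semiaxes are 4 and 2, so for example b = b_c + (4, 0) lies exactly on the boundary of B.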

For the sake of simplicity, the algorithms described here do not consider input and state constraints. Modifications required to incorporate hard input and soft state constraints are straightforward and described in detail in Ralhan’s thesis.14 Assume that we have selected B(b̃) as the input matrix most likely to correspond to the true plant. We will refer to the integrating system formed with this input matrix as the nominal model. We will show that it is possible to design an asymptotically convergent controller given B(b̃) and B.

3. Controller Cost Functions

MPC controllers typically regulate the plant state by minimizing a cost function that penalizes deviations of the state and input away from the origin. Muske and Rawlings16 defined the cost function Φ: R^n × R^{Nm} × B → R for stable plants:

Φ(x,π,b) ≡ Σ_{j=0}^{∞} (z_j^T Q z_j + v_j^T R v_j);  Q, R > 0    (6)

z_j = z_{j-1} + B(b) v_{j-1};  z_0 = x;  v_j = 0 ∀ j ≥ N    (7)

π = [v_0^T v_1^T ... v_{N-1}^T]^T    (8)

Here, x is the current state of the plant, z_j are the predicted future states, and v_j are the future inputs considered by the controller. The weight matrices Q and R are chosen positive definite to ensure that Φ remains nonnegative. As shown in (7), it is assumed that the input is set to the origin at time j = N and remains there for all future times.
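The finite part of this cost is easy to evaluate by rolling the integrating model forward; the sketch below (our illustration, not code from the paper) computes the summed stage costs and the terminal state z_N. For an integrating plant the infinite tail of Φ is finite only if z_N reaches the origin, which is exactly the difficulty discussed below:

```python
import numpy as np

def finite_horizon_cost(x, pi, B, Q, R, N):
    """Sum of stage costs z_j^T Q z_j + v_j^T R v_j for j = 0..N-1.

    x  : current state (length-n array)
    pi : stacked input sequence [v_0; ...; v_{N-1}] (length N*m)
    Returns the cost and the terminal state z_N.
    """
    n, m = B.shape
    z, cost = x.copy(), 0.0
    for j in range(N):
        v = pi[j*m:(j+1)*m]
        cost += z @ Q @ z + v @ R @ v
        z = z + B @ v              # integrating update z_{j+1} = z_j + B v_j
    return cost, z
```

For the scalar data x = 1, B = 1, Q = R = 1, N = 2, the sequence π = [-1, 0] gives cost 2 and terminal state 0.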

controller            cost function
single stage          Υ(x,π,b) ≡ z_N^T Q z_N + Σ_{j=0}^{N-1} v_j^T R v_j
two stage, stage I    Ψ(x,ω,b) ≡ z_N^T Q z_N
two stage, stage II   Π(x,π,b) ≡ Σ_{j=1}^{N-1} z_j^T Q z_j + Σ_{j=0}^{N-1} v_j^T R v_j

In the stable plant case, if we set the input to the origin for j = N, ..., ∞, then the plant state converges to the origin and we can easily compute the infinite horizon controller cost. For integrating plants, however, setting the input to the origin does not imply that the plant state will converge to the origin. Hence, for the integrating plant case we cannot, in general, bound the infinite horizon controller cost. A possible solution to this problem is to force the state to the origin at time j = N. However, it is generally impossible to force the state to go to the origin for all models in the uncertainty domain with the same input sequence. To get around this problem, we borrow an idea from Lee and Cooley15 and redefine the cost function to minimize the state at the end of the horizon only. There may be some extra degrees of freedom, however, and the optimal solution may not be unique. So, as suggested by Lee and Cooley,15 we can either add an input penalty term to the objective function (single-stage optimization method) or perform a second optimization to minimize an objective over the rest of the horizon (two-stage optimization method) to ensure that the algorithm is well-posed. The cost functions used in the two formulations described here are listed in Table 1. The term ω in the two-stage algorithm represents the total control action computed in Stage I:

ω = v_0 + v_1 + ... + v_{N-1}    (9)

As shown in Appendix A, these cost functions can be written as

Υ(x,π,b) = π^T(R + B_I^T Q B_I)π + 2x^T Q B_I π + x^T Q x    (10)

Ψ(x,ω,b) = ω^T B^T Q B ω + 2x^T Q B ω + x^T Q x    (11)

Π(x,π,b) = π^T(R + B_II^T 𝐐 B_II)π + 2(A_II x)^T 𝐐 B_II π + x^T A_II^T 𝐐 A_II x    (12)

Since the matrices Q, 𝐐, and R are positive definite, the cost functions Υ, Ψ, and Π are convex in π, ω, and π, respectively, for all b ∈ B. MPC algorithms typically minimize a cost function that is based on a specific choice for the system dynamics. For this purpose we define nominal model cost functions based on the nominal model b̃:

Υ̃(x,π) ≡ Υ(x,π,b̃)    (13)

Ψ̃(x,ω) ≡ Ψ(x,ω,b̃)    (14)

Π̃(x,π) ≡ Π(x,π,b̃)    (15)

However, the true cost of an input sequence must be measured by its effect on the actual plant. For this purpose we define the plant cost functions based on the actual plant b̄:

Ῡ(x,π) ≡ Υ(x,π,b̄)    (16)

Ψ̄(x,ω) ≡ Ψ(x,ω,b̄)    (17)

Π̄(x,π) ≡ Π(x,π,b̄)    (18)

4. Single-Stage Robust LQR We now define a single-stage robust controller that penalizes the state error at the end of the horizon.


Definition: IRLQR-I. At time step k, the one-stage robust linear quadratic regulator is defined as

π́_k ≡ arg min[Υ̃(x_k,π) | Υ(x_k,π,b) ≤ Υ(x_k,π̂_k,b) ∀ b ∈ B]    (19)

where

π̂_k ≡ Sπ́_{k-1};  π̂_0 = [0 ... 0]^T    (20)

with S the block upshift matrix (identity blocks on the first superdiagonal, zeros elsewhere), and

u_k = [I 0 ... 0]π́_k    (21)

As defined above, the IRLQR-I algorithm is a semi-infinite program that optimizes the nominal model cost function subject to an infinite-dimensional cost function constraint. For each system in the set B, the constraint prevents the controller cost from increasing relative to the cost computed using the remainder of the previous optimal solution, denoted π̂_k. As shown in (21), the system input is set to the first element of the optimal sequence π́_k.

An important property of the cost function constraint in (19) is that it is feasible at each time step for the choice π = π̂_k. Further, at each sample time k, the right side of the constraint is constant and the left side is a strictly convex function of the input π. The objective function in (19) is also strictly convex, which means that the IRLQR-I algorithm is a strictly convex program. Existence of the feasible input π̂_k therefore implies the existence of a unique optimal input π́_k.17 The IRLQR-I defined here extends the approach presented previously by Badgwell12 in two ways: the plant is assumed to be integrating, and the set B is infinite in dimension, giving an infinite-dimensional cost function constraint.

4.1. Numerical Solution. The IRLQR-I algorithm defined above requires the solution of a semi-infinite program at each time step k. Kassmann18 discusses several solution algorithms for this problem class, including local reduction and discretization methods. For some very simple cases the infinite set of constraints can be replaced by a small number of constraints;12 a system of this kind is presented later with the simulation examples. For the general case we use a local reduction method, in which the infinite-dimensional cost function constraint in (19) is replaced at each time step with a single constraint based on the most limiting model in the uncertainty domain. We will develop a conservative approximation for the cost function constraint that allows the limiting model to be found using an eigenvalue decomposition.

First, we reformulate the cost function constraint in (19) as

max_{b∈B} ΔΥ(x_k,π,π̂_k,b) ≤ 0    (22)

ΔΥ(x,π,π̂,b) ≡ Υ(x,π,b) - Υ(x,π̂,b)    (23)

Substituting for Υ(x,π,b) and Υ(x,π̂,b) using the results in Appendix A, we can write ΔΥ as

ΔΥ(x,π,π̂,b) = ΔΥ_I(π,π̂) + ΔΥ_II(x,π,π̂,b) + ΔΥ_III(x,π,π̂,b)

where

ΔΥ_I(π,π̂) = π^T R π - π̂^T R π̂    (24)

ΔΥ_II(x,π,π̂,b) = 2x^T Q(V - V̂)b    (25)

ΔΥ_III(x,π,π̂,b) = b^T(V^T Q V - V̂^T Q V̂)b    (26)

V = V(π);  V̂ = V(π̂)    (27)

In general, it is not possible to find analytically the b ∈ B for which the maximum in the cost function constraint (22) is obtained. However, the following conservative approximation can be used:

max_{b∈B} ΔΥ(x_k,π,π̂_k,b) ≤ ΔΥ_I(π,π̂_k) + max_{b∈B} ΔΥ_II(x_k,π,π̂_k,b) + max_{b∈B} ΔΥ_III(x_k,π,π̂_k,b)    (28)
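The price of splitting the maximization term by term, as in (28), is conservatism: maximizing each piece separately can only overestimate the joint maximum. A small numerical sketch (all data invented for illustration) over sampled points of an ellipsoid:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented toy data: an ellipsoid and fixed linear/quadratic terms standing in
# for the dUpsilon_II and dUpsilon_III pieces of the constraint.
bc = np.array([2.0, 1.0])
W = np.diag([1.0, 4.0])
g = np.array([1.0, -2.0])                  # linear term
M = np.array([[0.5, 0.2], [0.2, -1.0]])    # symmetric quadratic term

# Sample points inside the ellipsoid: b = bc + W^{-1/2} s with ||s|| <= 1.
s = rng.normal(size=(5000, 2))
s /= np.maximum(1.0, np.linalg.norm(s, axis=1))[:, None]
bs = bc + s / np.sqrt(np.diag(W))          # diagonal W, so W^{-1/2} is elementwise

joint = max(g @ b + b @ M @ b for b in bs)                    # max of the sum
split = max(g @ b for b in bs) + max(b @ M @ b for b in bs)   # sum of the maxes
# split >= joint always holds: term-wise maximization is an upper bound.
```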

It is possible to find the maximum of ΔΥ_II analytically, and the maximum of ΔΥ_III can be found through an eigenvalue decomposition, so we can use the following conservative constraint:

H(x_k,π,π̂_k) ≤ 0    (29)

H(x,π,π̂) ≡ ΔΥ_I(π,π̂) + ΔΥ_II(x,π,π̂,b́_1) + ΔΥ_III(x,π,π̂,b́_2)    (30), (31)

b́_1 = arg max_{b∈B} ΔΥ_II(x,π,π̂,b)    (32)

b́_2 = arg max_{b∈B} ΔΥ_III(x,π,π̂,b)    (33)

The vector b́_1 can be found easily since it comes from maximizing a linear function subject to a quadratic constraint. The computation of b́_2 is similar to the problem of minimizing a least-squares objective subject to a quadratic constraint. This class of problems has been investigated in detail by Spjotvoll,19 Gander,20 and Gander et al.21 Solution details are provided in Appendix B. Thus, a conservative solution for the algorithm can be found by solving the nonlinear program that follows.

Definition: CIRLQR-I. At time step k, the conservative one-stage integrating robust linear quadratic regulator is defined as the IRLQR-I algorithm with the optimization replaced by

π́_k ≡ arg min[Υ̃(x_k,π) | H(x_k,π,π̂_k) ≤ 0]    (34)
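The b́_1 subproblem — maximizing a linear form g^T b over the ellipsoid B — has the closed-form solution b* = b_c + W⁻¹g/√(g^T W⁻¹ g), obtained from the first-order conditions. A minimal sketch (the function name and example data are ours):

```python
import numpy as np

def argmax_linear_over_ellipsoid(g, bc, W):
    """Maximizer of g^T b over {b : (b - bc)^T W (b - bc) <= 1}, for g != 0."""
    Winv_g = np.linalg.solve(W, g)
    return bc + Winv_g / np.sqrt(g @ Winv_g)
```

With W = I and b_c = 0 the maximizer is simply the unit vector in the direction of g, and the ellipsoid constraint is active at the optimum.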

Note that the robustness constraint is feasible for π_k = π̂_k. The conservative algorithm CIRLQR-I has the same desirable features as those described earlier for the IRLQR-I algorithm and shares the same theoretical properties discussed below. While selecting an optimization technique to solve this problem, it is important to note that although the constraint H is convex in π, it is not necessarily differentiable with respect to π, and standard gradient-based methods may perform poorly unless special precautions are taken. Lau et al.6 and Boyd and Barratt22 discuss this issue and offer alternative solution


methods. For the simulations presented here we used a standard sequential quadratic programming (SQP) algorithm but computed the gradients analytically using a value of b́_2 based on the most recent iterate of π.

4.2. Theoretical Properties. Theorem 1: Robust Convergence of the IRLQR-I Algorithm. Assume that for all models in the uncertainty domain B, the input matrix B has full row rank and the individual matrix elements have the same sign. Then, when the input is computed using the IRLQR-I algorithm, the plant state converges asymptotically to the origin for all x_0 ∈ R^n.

Proof. Let us assume that we have found the optimal solution at the initial time step k = 0, given by π́_0. Let us denote this optimal open-loop input sequence as

π́_0 ≡ [v́_0^T ... v́_{N-1}^T]^T    (35)

At time step k = 0, the optimal plant cost is given by

Ῡ́_0 = Υ(x_0, π́_0, b̄)    (36)
    = Σ_{j=0}^{N-1} v́_j^T R v́_j + z_N^T Q z_N    (37)
    = π́_0^T R π́_0 + (x_0 + V̄_0 b̄)^T Q (x_0 + V̄_0 b̄)    (38)

where

z_j = z_{j-1} + B̄ v́_{j-1};  z_0 = x_0    (39)

V̄_0 = U(v́_0) + U(v́_1) + ... + U(v́_{N-1})    (40)

Also, the feasible plant cost at time k = 1 can be written as

Ῡ̂_1 = Υ(x_1, π̂_1, b̄)    (41)
    = π̂_1^T R π̂_1 + (x_1 + V̂_1 b̄)^T Q (x_1 + V̂_1 b̄)    (42)

where

π̂_1 = S π́_0    (43)
    = [v́_1^T ... v́_{N-1}^T 0^T]^T    (44)

and

V̂_1 = U(v́_1) + ... + U(v́_{N-1})    (45)

The state of the plant at time k = 1 is given as

x_1 = x_0 + B̄ v́_0 = x_0 + U(v́_0) b̄    (47), (48)

Substituting for x_1 in eq 42, we get

Ῡ̂_1 = π̂_1^T R π̂_1 + (x_0 + U(v́_0) b̄ + V̂_1 b̄)^T Q (x_0 + U(v́_0) b̄ + V̂_1 b̄)    (49), (50)
    = π̂_1^T R π̂_1 + (x_0 + V̄_0 b̄)^T Q (x_0 + V̄_0 b̄)    (51)

Subtracting eq 51 from eq 38, we get

Ῡ̂_1 - Ῡ́_0 = -u_0^T R u_0    (52)

Since the actual plant lies in the family B, the cost function constraint must be satisfied for the actual plant at time k = 1, so the optimal plant cost cannot exceed the feasible plant cost:

Ῡ́_1 - Ῡ̂_1 ≤ 0    (53)

Combining eqs 52 and 53, we get

Ῡ́_1 - Ῡ́_0 ≤ -u_0^T R u_0    (54)

The same argument can be repeated at subsequent time steps to show that

Ῡ́_{k+1} - Ῡ́_k ≤ -u_k^T R u_k    (55)

This shows that the sequence of optimal plant cost values {Ῡ́_k} is nonincreasing. The plant cost is bounded below by zero and thus has a nonnegative limit. Therefore, as k → ∞, the left-hand side of (55) approaches zero, and as a result the input must converge to the origin, since R > 0:

u_k → 0 as k → ∞

Now we will show that as u_k goes to the origin, x_k also goes to the origin. The proof is by contradiction. Let us assume that x_k does not go to the origin. Further assume that u_k = 0 for all k > k*. Then for all k > k* + N

π́_k = [0^T 0^T ... 0^T]^T;  Υ(x_k, π́_k, b) = x_k^T Q x_k ∀ b ∈ B    (56)

Since π́_k is optimal, there does not exist any other feasible π_k that can further decrease the cost functions for all the models in the uncertainty domain. Now, the gradient of the cost function at any k > k* + N for any model is

∇_{π_k} Υ_k |_{π_k = [0 ... 0]} = 2 [B B ... B]^T Q x_k    (57), (58)

Since B has full row rank and its elements have the same sign for all the plants in the uncertainty domain, and x_k is nonzero for all k > k* + N and Q > 0, there exists π_k ≠ π́_k such that

(∇_{π_k} Υ_k |_{π_k = [0 ... 0]})^T π_k = (2 [B B ... B]^T Q x_k)^T π_k < 0 ∀ b ∈ B    (59)

But this contradicts the fact that π́_k is optimal; hence, as k → ∞ both the state and the input converge to the origin. Q.E.D.

5. Two-Stage Robust LQR

The one-stage formulation developed in the previous section focuses primarily on state offset at steady state. In this section we present a two-stage algorithm that minimizes state deviations over the whole horizon and hence gives better performance. The first stage optimization finds the best overall total control action; this is


passed to a second-stage optimization that optimizes dynamic performance subject to the total control action constraint.

Definition: IRLQR-II. The two-stage integrating robust linear quadratic regulator is defined as follows:

Stage I: Steady-State Error Minimization

ώ_k ≡ arg min[Ψ̃(x_k,ω) | Ψ(x_k,ω,b) ≤ Ψ(x_k,ω̂_k,b) ∀ b ∈ B]    (60)

ω̂_k ≡ I_1^T S π́_{k-1}    (61)

ω̂_0 = [0 ... 0]^T    (62)

I_1 ≡ [1 ... 1]^T    (63)

Stage II: Dynamic Error Minimization

π́_k ≡ arg min[Π̃(x_k,π) | Π(x_k,π,b) ≤ Π(x_k,π̂_k,b) ∀ b ∈ B;  ώ_k = I_1^T π]    (64)-(66)

π̂_k ≡ Sπ́_{k-1} + P(ώ_k - ω̂_k);  π̂_0 = [0 ... 0]^T    (67)

P ≡ [0 ... 0 1]^T    (68)

u_k = [I 0 ... 0]π́_k    (69)

As defined above, the IRLQR-II algorithm consists of two semi-infinite programs. In stage I the overall control action ώ_k is determined so as to minimize steady-state offset. The optimal input vector π́_k is then determined in stage II by minimizing state and input errors over the entire prediction horizon. The variables ω̂_k and π̂_k represent feasible solutions for both stages. Equation 67 is defined so that π̂_k satisfies constraint (66). Both semi-infinite programs are strictly convex, so the IRLQR-II algorithm has a unique optimal solution at each time step k.

5.1. Numerical Solution. The IRLQR-II algorithm defined above requires the solution of two semi-infinite programs at each time step k. To develop a reliable solution method, we follow the same path used for the IRLQR-I algorithm: we develop a conservative approximation for the cost function constraint that can be computed using an eigenvalue decomposition, and we use a local reduction method to solve the conservative semi-infinite program.

Stage I: Cost Function Constraint. We reformulate the cost function constraint in (60) as

max_{b∈B} ΔΨ(x_k,ω,ω̂_k,b) ≤ 0    (70)

ΔΨ(x,ω,ω̂,b) ≡ Ψ(x,ω,b) - Ψ(x,ω̂,b)    (71)

Substituting for Ψ(x,ω,b) and Ψ(x,ω̂,b) using the results in Appendix A, we can write ΔΨ as

ΔΨ(x,ω,ω̂,b) = ΔΨ_I(x,ω,ω̂,b) + ΔΨ_II(x,ω,ω̂,b)    (72)

where

ΔΨ_I(x,ω,ω̂,b) = 2x^T Q(U - Û)b    (73)

ΔΨ_II(x,ω,ω̂,b) = b^T(U^T Q U - Û^T Q Û)b    (74)

U = U(ω);  Û = U(ω̂)    (75)

In general, it is not possible to find analytically the b ∈ B that maximizes the cost function constraint (70). However, the following conservative approximation can be used:

max_{b∈B} ΔΨ(x_k,ω,ω̂_k,b) ≤ max_{b∈B} ΔΨ_I(x_k,ω,ω̂_k,b) + max_{b∈B} ΔΨ_II(x_k,ω,ω̂_k,b)    (76)

It is possible to find the maximum of ΔΨ_I analytically, and the maximum of ΔΨ_II can be found with an eigenvalue decomposition, so we can use the following conservative constraint:

H_1(x_k,ω,ω̂_k) ≤ 0    (77)

H_1(x,ω,ω̂) ≡ ΔΨ_I(x,ω,ω̂,b́_3) + ΔΨ_II(x,ω,ω̂,b́_4)    (78), (79)

b́_3 = arg max_{b∈B} ΔΨ_I(x,ω,ω̂,b)    (80)

b́_4 = arg max_{b∈B} ΔΨ_II(x,ω,ω̂,b)    (81)

Stage II: Cost Function Constraint. We reformulate the cost function constraint in (64) as

max_{b∈B} ΔΠ(x_k,π,π̂_k,b) ≤ 0    (82)

ΔΠ(x,π,π̂,b) ≡ Π(x,π,b) - Π(x,π̂,b)    (83)

Substituting for Π(x,π,b) and Π(x,π̂,b) using the results in Appendix A, we can write ΔΠ as

ΔΠ(x,π,π̂,b) = ΔΠ_I(π,π̂) + ΔΠ_II(x,π,π̂,b) + ΔΠ_III(x,π,π̂,b)    (84)

ΔΠ_I(π,π̂) = π^T R π - π̂^T R π̂    (85)

ΔΠ_II(x,π,π̂,b) = 2(A_II x)^T 𝐐(V - V̂)b    (86)

ΔΠ_III(x,π,π̂,b) = b^T(V^T 𝐐 V - V̂^T 𝐐 V̂)b    (87)

V = V(π)    (88)

V̂ = V(π̂)    (89)

In general, it is not possible to find analytically the b ∈ B for which the maximum in the cost function constraint (82) is obtained. However, the following conservative approximation can be used:

max_{b∈B} ΔΠ(x_k,π,π̂_k,b) ≤ ΔΠ_I(π,π̂_k) + max_{b∈B} ΔΠ_II(x_k,π,π̂_k,b) + max_{b∈B} ΔΠ_III(x_k,π,π̂_k,b)    (90)

It is possible to find the maximum of ΔΠ_II analytically, and the maximum of ΔΠ_III can be found using an eigenvalue decomposition, so we can use the following conservative constraint:

H_2(x_k,π,π̂_k) ≤ 0    (91)


H_2(x,π,π̂) ≡ ΔΠ_I(π,π̂) + ΔΠ_II(x,π,π̂,b́_5) + ΔΠ_III(x,π,π̂,b́_6)    (92), (93)

b́_5 = arg max_{b∈B} ΔΠ_II(x,π,π̂,b)    (94)

b́_6 = arg max_{b∈B} ΔΠ_III(x,π,π̂,b)    (95)
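For intuition about the eigenvalue computation behind b́_4 and b́_6, consider the special case of an ellipsoid centered at the origin, where maximizing b^T M b subject to b^T W b ≤ 1 reduces to the largest eigenvalue of W^{-1/2} M W^{-1/2}. The general b_c ≠ 0 case is the quadratically constrained least-squares problem treated in Appendix B; the sketch below covers only the centered case and is our illustration:

```python
import numpy as np
from scipy.linalg import eigh, sqrtm

def max_quadratic_centered(M, W):
    """Maximize b^T M b subject to b^T W b <= 1 (ellipsoid centered at 0).

    With b = W^{-1/2} s the problem becomes max of s^T (W^{-1/2} M W^{-1/2}) s
    over ||s|| <= 1, i.e. an ordinary symmetric eigenvalue problem.
    Returns (value, argmax).
    """
    Wih = np.linalg.inv(sqrtm(W).real)   # W^{-1/2}, W symmetric positive definite
    vals, vecs = eigh(Wih @ M @ Wih)
    lam, q = vals[-1], vecs[:, -1]       # eigh sorts eigenvalues in ascending order
    if lam <= 0:                         # maximum attained at b = 0
        return 0.0, np.zeros(M.shape[0])
    return float(lam), Wih @ q
```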

The vectors b́_3 and b́_5 can be found easily since they come from maximizing a linear function subject to a quadratic constraint. The vectors b́_4 and b́_6 are found by minimizing a least-squares objective subject to a quadratic constraint, using the same method described earlier for b́_2. Solution details are provided in Appendix B. As before, we now define a conservative version of the algorithm.

Definition: CIRLQR-II. At time step k, the conservative two-stage integrating robust linear quadratic regulator is defined as the IRLQR-II algorithm with the optimizations replaced by

ώ_k ≡ arg min[Ψ̃(x_k,ω) | H_1(x_k,ω,ω̂_k) ≤ 0]
π́_k ≡ arg min[Π̃(x_k,π) | H_2(x_k,π,π̂_k) ≤ 0;  ώ_k = I_1^T π]    (96)

Note that the cost function constraints are still feasible for ω_k = ω̂_k and π_k = π̂_k. The conservative approximation has the same desirable features as described earlier for the IRLQR-II algorithm and shares the same theoretical properties discussed below. While selecting an optimization technique to solve this problem, it is important to note that it shares the same difficulty pointed out earlier for the CIRLQR-I algorithm; namely, the constraints H_1 and H_2 are not necessarily differentiable with respect to π, and standard gradient-based methods may perform poorly unless special precautions are taken. For the simulations presented here we used a standard SQP algorithm but computed the gradients analytically using a value of b́ based on the most recent iterate of π.

5.2. Theoretical Properties. Theorem 2: Steady-State Behavior of the IRLQR-II Algorithm. Assume that for all models in the uncertainty domain B, the input matrix B has full row rank and the individual matrix elements have the same sign. Then, when the input is computed using the IRLQR-II algorithm, if the plant reaches a steady state, there is no steady-state offset, i.e., x_k → 0.

Proof. Let us assume that we have found the optimal solution at the initial time step k = 0, given by ώ_0 for the first stage and π́_0 for the second stage:

π́_0 ≡ [v́_0^T ... v́_{N-1}^T]^T    (97)

At time step k = 0 the optimal stage I plant cost is given by

Ψ̄́_0 = Ψ̄(x_0, ώ_0)    (98)
    = z_N^T Q z_N    (99)
    = (x_0 + B̄ώ_0)^T Q (x_0 + B̄ώ_0)    (100)

Also, the feasible plant cost at time k = 1 can be written as

Ψ̄̂_1 = Ψ̄(x_1, ω̂_1)    (101)
    = (x_1 + B̄ω̂_1)^T Q (x_1 + B̄ω̂_1)    (102)

where

ω̂_1 = ώ_0 - u_0    (103)

The state of the plant at time k = 1 is given as

x_1 = x_0 + B̄u_0    (104)

Substituting for x_1 in eq 102, we get

Ψ̄̂_1 = (x_0 + B̄u_0 + B̄ω̂_1)^T Q (x_0 + B̄u_0 + B̄ω̂_1)    (105)
    = (x_0 + B̄ώ_0)^T Q (x_0 + B̄ώ_0)    (106)

Combining eqs 100 and 106, we get

Ψ̄̂_1 = Ψ̄́_0    (107)

Since the robustness constraint holds for all the models in the uncertainty domain,

Ψ̄́_1 ≤ Ψ̄̂_1    (108)

Combining eqs 107 and 108,

Ψ̄́_1 ≤ Ψ̄́_0    (109)

The same argument can be repeated at subsequent times to show that

Ψ̄́_k ≤ Ψ̄́_{k-1}    (110)

This shows that the sequence of optimal plant cost values {Ψ̄́_k} is nonincreasing. The plant cost is bounded below by zero and thus has a nonnegative limit. Therefore,

Ψ̄́_{k+1} - Ψ̄́_k → 0 as k → ∞    (111)

Thus, the stage I plant cost converges to a constant value. Given this fact, a proof by contradiction similar to that given for Theorem 1 can be used to show that if the state converges to a steady state, there will be no steady-state offset. Q.E.D.

6. Simulation Example

In this section we apply the robust MPC algorithms developed in the previous sections to a simulated single-input/single-output (SISO) integrating plant. We compare the performance of the IRLQR-I, CIRLQR-I, IRLQR-II, and CIRLQR-II algorithms with single-stage and two-stage Min-Max algorithms.

Definition: Min-Max-I. The single-stage Min-Max algorithm finds the input vector π́_k that minimizes the maximum controller cost over the entire uncertainty domain:

π́_k ≡ arg min[max_{b∈B} Υ(x_k,π,b)]    (112)
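For the scalar plant of this example, the inner maximization in (112) is attained at an endpoint of the parameter interval, so Min-Max-I reduces to an ordinary (nonsmooth) minimization. The paper's simulations used Matlab's constr routine; the sketch below is our own simplified Python reconstruction with the settings Q = 1, R = 10⁻³, N = 2 and b ∈ [4, 12]:

```python
import numpy as np
from scipy.optimize import minimize

Q, R, N = 1.0, 1e-3, 2
bmin, bmax = 4.0, 12.0

def cost(pi, x, b):
    """Single-stage cost: terminal-state penalty plus input penalty."""
    zN = x + b * np.sum(pi)        # terminal state of the scalar integrator
    return Q * zN**2 + R * float(pi @ pi)

def minmax1_step(x):
    """One Min-Max-I move: minimize the worst cost over the interval endpoints."""
    worst = lambda pi: max(cost(pi, x, bmin), cost(pi, x, bmax))
    # The max of two quadratics is nonsmooth, so a derivative-free method is used.
    return minimize(worst, np.zeros(N), method="Nelder-Mead").x
```

From x = 1 the worst-case optimum balances the two extreme models, giving a total move of about -1/8 (so the two candidate terminal states are roughly ±1/2).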


Figure 1. Single-stage controllers with γ = 3: IRLQR-I (solid); Min-Max-I (dashed); CIRLQR-I (dash-dot).

Definition: Min-Max-II. The two-stage Min-Max algorithm finds the input vector π́_k as the solution of a two-stage optimization problem; the first stage focuses on steady-state offset, and the second stage optimizes dynamic performance over the entire horizon:

Stage I: Steady-State Error Minimization

ώ_k ≡ arg min[max_{b∈B} Ψ(x_k,ω,b)]    (113)

Stage II: Dynamic Error Minimization

π́_k ≡ arg min[max_{b∈B} Π(x_k,π,b) | I_1^T π = ώ_k]    (114)

All simulations reported here were performed using Matlab (version 5.2) running on a Sun workstation. The default SQP routine constr from the Matlab optimization toolbox was used for all calculations. Consider the case of a scalar integrating plant:

x_{k+1} = x_k + b̄u_k    (115)

The input parameter b̄ is known only to lie in the range

0 < b_min ≤ b̄ ≤ b_max    (116)

Assume the nominal model is given by b̃. The error in b̃ leads to a plant-model mismatch. Let us define the mismatch/uncertainty parameter γ as

γ = b̄/b̃    (117)

The parameter γ can be interpreted as the ratio of the true plant's integrating gain to that of the nominal model. For this simple system it can be shown that the limiting model at any time step is either the one based on b_min or the one based on b_max. This allows the IRLQR-I and IRLQR-II algorithms to be solved directly using constraints based on both of these settings.

To compare performance for a specific case, let us assume b_min = 4 and b_max = 12. The uncertainty ellipse thus has b_c = 8 and W = 1/16. Assume that the nominal model b̃ is chosen as b_min. The following controller settings are used for all the simulations:

Q = 1;  R = 10^{-3};  N = 2    (118)
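A simplified reconstruction of the single-stage cost-constrained controller for this scalar example can be written directly as a small constrained NLP; as noted above, the infinite constraint set reduces to the two interval endpoints here. Everything below (optimizer choice, shifted-sequence handling) is our sketch, not the paper's Matlab code:

```python
import numpy as np
from scipy.optimize import minimize

Q, R, N = 1.0, 1e-3, 2
bmin, bmax, b_nom = 4.0, 12.0, 4.0         # nominal model b~ chosen as bmin

def cost(pi, x, b):
    zN = x + b * np.sum(pi)                 # terminal state of the integrator
    return Q * zN**2 + R * float(pi @ pi)

def irlqr1_step(x, pi_hat):
    """Minimize the nominal cost subject to cost nonincrease for b in {bmin, bmax}."""
    cons = [{"type": "ineq",
             "fun": lambda pi, b=b: cost(pi_hat, x, b) - cost(pi, x, b)}
            for b in (bmin, bmax)]
    res = minimize(cost, pi_hat, args=(x, b_nom), method="SLSQP", constraints=cons)
    return res.x

# Closed loop against the most responsive true plant, b = bmax (gamma = 3):
x, pi_hat = 1.0, np.zeros(N)
for k in range(10):
    pi = irlqr1_step(x, pi_hat)
    x = x + bmax * pi[0]                    # apply only the first input
    pi_hat = np.append(pi[1:], 0.0)         # shifted previous optimal sequence
```

In this sketch the binding constraint at b = b_max limits the total move to about -1/6 split over the two inputs, so the first step already takes the state close to the origin, consistent with the Figure 1 response.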

Figure 1 illustrates control responses using the IRLQR-I, CIRLQR-I, and Min-Max-I algorithms for the worst possible (most responsive) plant on the ellipse, b̄ = b_max = 12.0 (γ = 3). All three algorithms bring the state to the origin from an initial state of 1. The IRLQR-I controller gives the best performance, bringing the state to the origin in a single time step. The Min-Max-I algorithm responds more slowly, and the CIRLQR-I controller is the most conservative of the three.

Figure 2 shows the same test for the case when b̄ = b_min = 4.0 (γ = 1). All three responses are slower, but the relative performance remains the same: IRLQR-I is the fastest, followed by Min-Max-I, with CIRLQR-I the slowest of the three.

Now we repeat the same simulations using the two-stage controllers IRLQR-II, CIRLQR-II, and Min-Max-II. Figure 3 shows the responses for b̄ = b_max = 12.0 (γ = 3). The two-stage algorithms respond much more quickly than the corresponding single-stage algorithms; this is because they optimize dynamic performance over the entire horizon. The IRLQR-II version gives the best performance, consistent with the single-stage controllers. However, the Min-Max-II controller oscillates violently, so the CIRLQR-II algorithm gives the next best performance for this case.

Figure 4 shows the two-stage controller responses for b̄ = b_min = 4.0 (γ = 1). Again, the two-stage algorithms give much better performance than the corresponding single-stage algorithms. For this case the relative


Figure 2. Single-stage controllers with γ = 1: IRLQR-I (solid); Min-Max-I (dashed); CIRLQR-I (dash-dot).

Figure 3. Two-stage controllers with γ = 3: IRLQR-II (solid); Min-Max-II (dashed); CIRLQR-II (dash-dot).

performance is the same as that for the single-stage case; IRLQR-II gives slightly better results than Min-Max-II, with CIRLQR-II giving a much slower response.

7. Conclusions

Two robust model predictive control algorithms, IRLQR-I and IRLQR-II, have been presented for integrating linear systems. Model uncertainty is parameterized by ellipsoidal bounds on the input matrix of the system model. Robustness is achieved by adding cost function constraints that restrict the future behavior of the controller cost function for all the plants in the uncertainty description. The one-stage formulation provides asymptotic convergence of the state to the origin but may give poor performance because it considers only steady-state behavior. The two-stage formulation provides better performance because it optimizes state behavior over the entire horizon. However, the two-stage algorithm comes with a weaker performance guarantee; we can say only that if the two-stage algorithm brings

Ind. Eng. Chem. Res., Vol. 39, No. 8, 2000 2989

Figure 4. Two-stage controllers with γ ) 1: IRLQR-II (s); Min-Max-II (- - -); CIRLQR-II (‚-‚).

the plant to a steady state, then there will be no state offset. The resulting semi-infinite programs are convex and highly structured, greatly simplifying their numerical solution. Controllers CIRLQR-I and CIRLQR-II were developed specifically to exploit this structure through the definition of conservative constraints that can be solved with an eigenvalue decomposition. Simulation results demonstrate that the two-stage algorithms indeed outperform the one-stage algorithms for the examples considered here. Performance comparisons consistently show the original integrating algorithms (IRLQR-I and IRLQR-II) to have the best performance, usually followed by the Min-Max controllers (Min-Max-I and Min-Max-II), with the conservative controllers (CIRLQR-I and CIRLQR-II) generally giving the slowest responses. The algorithms presented here can be generalized in a straightforward manner to control linear integrating multivariable systems subject to hard input and soft state constraints.14

Two significant barriers must be overcome before these results (and the results of other robust MPC researchers) can be implemented in practice. The most serious limitation of this body of research is the assumption that the true plant steady state is known (setpoint at the origin). As soon as a disturbance enters the plant or the setpoint changes to a value away from the origin, the steady-state target for the true plant must be estimated in some manner. Kassmann presented a solution18 based on searching the direction given by the nominal gain while ensuring that constraints are not violated for any gain in the uncertainty description. Ralhan14 presented another solution based on estimating different step disturbances for each model in the uncertainty description. Other solutions may be possible. A second limitation is the assumption of perfect state measurement. This is clearly a rarity in practice, and some form of robust state estimator will be required before robust MPC becomes practical. These problems ensure that robust MPC will be an active research area for the near future.

Acknowledgment

The authors gratefully acknowledge the financial support of Aspen Technology, Inc., the National Science Foundation, and the Texas Higher Education Coordinating Board.

Appendix A: Cost Functions

This appendix shows how to write the cost functions for the single-stage and two-stage optimization algorithms in a compact form. The cost functions are defined as

\Upsilon(x,\pi,b) \triangleq z_N^T Q z_N + \sum_{j=0}^{N-1} v_j^T R v_j    (119)

\Psi(x,\omega,b) \triangleq z_N^T Q z_N    (120)

\Pi(x,\pi,b) \triangleq \sum_{j=0}^{N-1} (z_j^T Q z_j + v_j^T R v_j)    (121)

where

z_j = z_{j-1} + B v_{j-1}, \quad z_0 = x, \quad v_j = 0 \;\; \forall\; j \ge N    (122)

\pi = [v_0^T \; v_1^T \; \cdots \; v_{N-1}^T]^T    (123)

To simplify these expressions, let us define the following:

\xi \triangleq [z_0^T \; z_1^T \; \cdots \; z_{N-1}^T]^T    (124)

\mathbf{Q} \triangleq \mathrm{diag}[Q \; Q \; \cdots \; Q]    (125)

\mathbf{R} \triangleq \mathrm{diag}[R \; R \; \cdots \; R]    (126)
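Before compacting these expressions, it may help to see the definitions (119)-(121) evaluated directly from the state recursion (122). The following is a minimal numerical sketch assuming a scalar plant (single state, single input), so Q, R, and all signals are scalars; the values of b, the weights, and the input sequence are illustrative assumptions, not taken from the paper's example.

```python
# Hedged sketch: evaluate the cost functions (119)-(121) for a scalar
# integrating plant z_j = z_{j-1} + b*v_{j-1} (eq 122). All numeric values
# here are illustrative assumptions.
def costs(x, v, b, Q=1.0, R=0.1):
    """Return (Upsilon, Psi, Pi) for the input sequence v = [v_0, ..., v_{N-1}]."""
    z = [x]                                  # z_0 = x
    for vj in v:                             # state recursion, eq 122
        z.append(z[-1] + b * vj)
    zN = z[-1]
    Upsilon = Q * zN**2 + sum(R * vj**2 for vj in v)                   # eq 119
    Psi = Q * zN**2                                                    # eq 120
    Pi = sum(Q * zj**2 for zj in z[:-1]) + sum(R * vj**2 for vj in v)  # eq 121
    return Upsilon, Psi, Pi

# A dead-beat move for the nominal plant: v_0 = -x/b drives z_1 to the origin.
U, P, PI = costs(x=1.0, v=[-0.125, 0.0], b=8.0)
```

For this dead-beat input sequence the terminal state z_N is zero, so the steady-state cost Ψ vanishes, while the full-horizon cost Π still counts the transient z_0 term.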

Using eqs 7 and 8, the state predictions can be written as

z_N = x + B_I \pi    (127)
    = x + B \omega    (128)

\xi = A_{II} x + B_{II} \pi    (129)

where

B_I = [B \; B \; \cdots \; B]    (130)

B_{II} = \begin{bmatrix} 0 & 0 & \cdots & 0 \\ B & 0 & \cdots & 0 \\ \vdots & \ddots & \ddots & \vdots \\ B & \cdots & B & 0 \end{bmatrix}    (131)

A_{II} = [I \; I \; \cdots \; I]^T    (132)

and I is an identity matrix with dimensions m × m. So the controller cost functions can be written as

\Upsilon(x,\pi,b) = \pi^T(\mathbf{R} + B_I^T Q B_I)\pi + 2 x^T Q B_I \pi + x^T Q x    (133)

\Psi(x,\omega,b) = \omega^T B^T Q B \omega + 2 x^T Q B \omega + x^T Q x    (134)

\Pi(x,\pi,b) = \pi^T(\mathbf{R} + B_{II}^T \mathbf{Q} B_{II})\pi + 2 (A_{II} x)^T \mathbf{Q} B_{II} \pi + x^T A_{II}^T \mathbf{Q} A_{II} x    (135)

Further, the state predictions depend on the input-matrix parameter vector b as

z_j = z_{j-1} + B v_{j-1} = z_{j-1} + U(v_{j-1}) b    (136)

where

U(v) = \begin{bmatrix} v^T & 0 & \cdots & 0 \\ 0 & v^T & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & v^T \end{bmatrix}    (137)

Working backward to the first input v_0 gives

z_j = x + (U(v_0) + \cdots + U(v_{j-2}) + U(v_{j-1})) b    (138)

So the steady-state offset z_N and the prediction vector \xi can be written as

z_N = x + (U(v_0) + \cdots + U(v_{N-2}) + U(v_{N-1})) b    (139)
    = x + U(\omega) b    (140)

where

\omega = v_0 + v_1 + \cdots + v_{N-1}    (141)

Similarly,

\xi = A_{II} x + V(\pi) b    (142)

where

V(\pi) = \begin{bmatrix} 0 \\ U(v_0) \\ U(v_0) + U(v_1) \\ \vdots \\ U(v_0) + \cdots + U(v_{N-2}) \end{bmatrix}    (143)

Substituting for z_N and \xi in (119), (120), and (121) gives

\Upsilon(x,\pi,b) = (x + U(\omega) b)^T Q (x + U(\omega) b) + \pi^T \mathbf{R} \pi    (147)

\Psi(x,\omega,b) = (x + U(\omega) b)^T Q (x + U(\omega) b)    (148)

\Pi(x,\pi,b) = (A_{II} x + V(\pi) b)^T \mathbf{Q} (A_{II} x + V(\pi) b) + \pi^T \mathbf{R} \pi    (149)

Appendix B: Solution of Inner Maximization

In this appendix we discuss the solution of the problem

b^* = \arg\max \{ b^T H b \; | \; b \in \mathcal{B} \}    (150)

where

H = U^T Q U - \hat{U}^T Q \hat{U}    (151)

\mathcal{B} \triangleq \{ b \; | \; (b - b_c)^T W (b - b_c) \le 1 \}    (152)

Since W is symmetric and positive definite, it can be diagonalized by a unitary matrix,

W = V \Sigma V^T    (153)

where \Sigma has the eigenvalues of W as its diagonal elements and the columns of V are the eigenvectors of W. We can now transform \mathcal{B} in (152) into the unit ball

\mathcal{B}_x \triangleq \{ x \; | \; x^T x \le 1 \}    (154)

where

x = \Sigma^{1/2} V^T (b - b_c)    (155)

b = b_c + V \Sigma^{-1/2} x    (156)

Substituting (156) into (150), we get

x^* = \arg\max \{ \Psi(x) \; | \; x \in \mathcal{B}_x \}    (157)

\Psi(x) = (x - \hat{x})^T A (x - \hat{x})    (158)

where

A = (\Sigma^{-1/2})^T V^T H V \Sigma^{-1/2}    (159)

\hat{x} = -\Sigma^{1/2} V^T b_c    (160)

Let \lambda_1 \le \lambda_2 \le \cdots \le \lambda_n be the eigenvalues of A and w_1, w_2, ..., w_n a corresponding orthonormal set of eigenvectors with

A w_i = \lambda_i w_i, \quad i = 1, ..., n    (161)

Let \hat{x} = \sum_{i=1}^n \hat{x}_i w_i and x = \sum_{i=1}^n x_i w_i. Then, from (158) we get

\Psi(x) = \sum_{i=1}^n \lambda_i (x_i - \hat{x}_i)^2    (162)
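The change of variables (153)-(156) can be sanity-checked numerically. The sketch below assumes, for simplicity, a diagonal weight W (so V is the identity and \Sigma = W); the center b_c and the boundary test point are illustrative assumptions only. A point on the boundary of the ellipsoid (152) maps onto the unit sphere bounding (154), and the map inverts exactly.

```python
# Hedged sketch of the ellipsoid-to-unit-ball transform (153)-(156),
# assuming a diagonal W = diag(4, 1) so that V = I and Sigma = W.
# The center b_c and the test point are illustrative assumptions.
import math

W = [4.0, 1.0]            # diagonal entries of W (eigenvalues, eq 153)
bc = [8.0, 0.5]           # ellipsoid center b_c

def b_to_x(b):            # eq 155: x = Sigma^(1/2) V^T (b - b_c)
    return [math.sqrt(W[i]) * (b[i] - bc[i]) for i in range(2)]

def x_to_b(x):            # eq 156: b = b_c + V Sigma^(-1/2) x
    return [bc[i] + x[i] / math.sqrt(W[i]) for i in range(2)]

# A point on the ellipsoid boundary (b - b_c)^T W (b - b_c) = 1 ...
b = [bc[0] + 0.5 / math.sqrt(W[0]), bc[1] + math.sqrt(0.75) / math.sqrt(W[1])]
x = b_to_x(b)             # ... lands on the unit sphere x^T x = 1 (eq 154)
b_back = x_to_b(x)        # and eq 156 recovers b exactly
```

With a general symmetric positive-definite W, one would first compute the eigendecomposition (153) and apply V and \Sigma as above; the diagonal case simply makes that step trivial.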

The problem (157) can be further simplified by using the following results given by Spjotvoll.19

Theorem. The point x^* at which \Psi attains its maximum value depends on \lambda_n = \max_i(\lambda_i), and there are three distinct cases:
(1) If \lambda_n > 0, then the maximum is attained at some point on the boundary x^T x = 1.
(2) If \lambda_n \le 0 and \sum_{i=1}^n |\hat{x}_i|^2 \le 1, then from (162) the maximum value of \Psi is 0, and it is attained if x_i^* = \hat{x}_i for all i \in \{ i \; | \; \lambda_i < 0 \}.
(3) If \lambda_n \le 0 and \sum_{i=1}^n |\hat{x}_i|^2 > 1, then the maximum is attained for some x such that x^T x = 1.

For cases 1 and 3, the maximum lies on the boundary of the unit ball, and the solution can be found using the algorithm described by Lau et al.6 Similar results and applications can be found in the articles of Marquardt,23 Forsythe and Golub,24 Rutishauser,25 and Golub.26

Literature Cited

(1) Vuthandam, P.; Genceli, H.; Nikolaou, M. AIChE J. 1995, 41 (9), 2083-2097.
(2) Sarimveis, H.; Genceli, H.; Nikolaou, M. AIChE J. 1996, 42 (9), 2582-2593.
(3) Genceli, H.; Nikolaou, M. AIChE J. 1995, 41 (9), 2098-2107.
(4) Campo, P. J.; Morari, M. Robust Model Predictive Control. In Proceedings of the 1987 American Control Conference, 1987; pp 1021-1026.
(5) Allwright, J. C.; Papavasiliou, G. C. Syst. Control Lett. 1992, 18, 159-164.
(6) Lau, M. K.; Boyd, S.; Kosut, R. L.; Franklin, G. F. A Robust Control Design for FIR Plants with Parameter Set Uncertainty. In Proceedings of the 1991 American Control Conference, 1991; pp 83-88.
(7) Lee, J. H.; Yu, Z. Automatica 1997, 33, 763-781.
(8) Scokaert, P. O. M.; Mayne, D. Q. IEEE Trans. Autom. Control 1998, 43 (8), 1136-1142.
(9) Kothare, M. V.; Balakrishnan, V.; Morari, M. Automatica 1996, 32, 1361-1379.
(10) Zheng, Z. Q. Robust Control of Systems Subject to Constraints. Ph.D. Thesis, California Institute of Technology, 1995.
(11) Michalska, H.; Mayne, D. Q. IEEE Trans. Autom. Control 1993, 38 (11), 1623-1633.
(12) Badgwell, T. A. Int. J. Control 1997, 68, 797-818.
(13) Ralhan, S. Robust Model Predictive Control of Linear Finite Impulse Response Plants. Master's Thesis, Rice University, 1998.
(14) Ralhan, S. Robust Model Predictive Control of Stable and Integrating Linear Systems. Ph.D. Thesis, Rice University, 1999.
(15) Lee, J. H.; Cooley, B. L. Automatica 2000, 36, 463-473.
(16) Muske, K. R.; Rawlings, J. B. AIChE J. 1993, 39 (2), 262-287.
(17) Gill, P. E.; Murray, W.; Wright, M. H. Practical Optimization; Academic Press: London, 1981.
(18) Kassmann, D. E. Robust Model Predictive Control as a Class of Semi-Infinite Programming Problems. Ph.D. Thesis, Rice University, 1999.
(19) Spjotvoll, E. SIAM J. Appl. Math. 1972, 23 (3), 307-311.
(20) Gander, W. Numer. Math. 1981, 36, 291-307.
(21) Gander, W.; Golub, G.; von Matt, U. Linear Algebra Appl. 1989, 815-839.
(22) Boyd, S. P.; Barratt, C. H. Linear Controller Design: Limits of Performance; Prentice Hall: Englewood Cliffs, NJ, 1991.
(23) Marquardt, D. W. J. Soc. Ind. Appl. Math. 1963, 11 (2), 431-441.
(24) Forsythe, G. E.; Golub, G. H. J. Soc. Ind. Appl. Math. 1965, 13 (4), 1050-1068.
(25) Rutishauser, H. Linear Algebra Appl. 1968, 1, 479-488.
(26) Golub, G. H. SIAM Rev. 1973, 15 (2), 318-334.

Received for review September 20, 1999
Revised manuscript received February 3, 2000
Accepted May 23, 2000

IE990688S