Ind. Eng. Chem. Res. 2005, 44, 7814-7822

On the Tuning of Predictive Controllers: The Minimum Back-Off Operating Point Selection Problem

Jui-Kun Peng, Amit M. Manthanwar, and Donald J. Chmielewski*

Department of Chemical and Environmental Engineering, Illinois Institute of Technology, Chicago, Illinois 60616

* To whom correspondence should be addressed. E-mail: [email protected].

In this work we propose a new formulation of the stochastic-based minimum back-off operating point selection problem. It is shown that this formulation has a convex/reverse-convex form and is thus readily solved globally via a branch and bound search scheme. The formulation is then extended to the partial state information case as well as the discrete-time framework. The formulation is unique in that the controller feedback gain is not specified a priori but rather is determined by the proposed optimization. It is further shown that the obtained feedback gain is such that an LQR inverse optimal feedback is guaranteed to exist within the set of feasible feedback gains. A postprocessing procedure is proposed for the synthesis of such a feedback as well as the corresponding LQR objective function weights. This connection between the economics of operating point selection and the dynamics of predictive controller tuning (i.e., the selection of objective function weights) is expected to significantly enhance the synergy of modern hierarchical control structures. The proposed method is illustrated, first through a simple mass-spring-damper example and then via a 3-input, 4-state furnace/reactor system.

1. Introduction

It has long been recognized that the primary advantage of implementing a Model Predictive Controller (MPC) is the ability to move the steady-state operating point closer to operational constraints, which typically harbor the greatest profit. Unfortunately, operation at this Optimal Steady-State Operating Point (OSSOP) is typically precluded by the likelihood of constraint violations in the face of expected disturbances. Thus, the notion of a Backed-off Operating Point (BOP) was introduced,1 to allow for disturbance-induced variations while preserving much of the economic advantage of the OSSOP. These notions are illustrated in Figure 1, where the conservative operating point is desirable from a constraint observance perspective but is unattractive economically. Using the notion of an Expected Dynamic Operating Region (EDOR), the BOP should be selected to balance economic and constraint observance objectives. (The key aspect is to ensure the EDOR does not extend beyond the constraint polytope.) Zooming in on the OSSOP, Figure 2 illustrates the impact of tuning parameters on the selection of the BOP. In particular, it is shown that controller design changes can modify the size and shape of the EDOR. Thus, it is proposed that appropriate controller design will allow the BOP to be moved closer to the OSSOP. Then, by defining a simultaneous controller and BOP selection problem, we arrive at the Minimum Back-off Operating Point (MBOP) selection problem advocated in this work.

The proposed algorithm addresses just one component of a much larger class of operational system design problems. These operational systems, typically denoted Real-Time Optimization (RTO), look to maximize operating profit but must do so in the face of uncertainties. In ref 2, these uncertainties are classified as market, model, measurement, and process uncertainties.

Figure 1. The constraint polytope and a backed-off operating point.

For the case of parametric-type uncertainties (i.e., market changes and model mismatch), redesigns of the RTO formulation based on robust or stochastic optimization methods have been proposed (see ref 2 for a steady-state formulation and ref 3 for a combined RTO/MPC formulation). To address disturbance-type uncertainties (measurement error and process disturbances), a post-RTO back-off processing stage is usually suggested. As stated above, Narraway et al.1 first advocated the notion of a BOP. This deterministic framework was then extended by Romagnoli et al.,4,5 Contreras-Dordelly and Marlin,6 and Figueroa et al.7,8 Stochastic versions of the problem were presented in refs 9-11. These define the EDOR based upon the covariance ellipsoid generated by a given confidence interval. As a further illustration of the stochastic-based MBOP selection problem, consider the following motivating example.



Figure 2. Impact of controller tuning on the expected dynamic operating region.

Figure 3. Mass-spring-damper system of example 1.

Example 1. Consider the mass-spring-damper system of Figure 3. The system dynamics are described by $dr/dt = v$ and $dv/dt = -3r - 2v - g + f + w$, where $r$ is the mass position, $v$ is the velocity, $g$ is the gravitational force, $f$ is the manipulated input force, and $w$ is a disturbance force. We will further assume the system is constrained by the following inequalities: $r_{min} \le r \le r_{max}$ and $f_{min} \le f \le f_{max}$. The economic objective is to bring the mass as close as possible to the upper bound on position. Thus, it is easily concluded that the OSSOP is $r^* = r_{max}$, $v^* = 0$, and $f^* = 3r_{max} + g$ (assuming $f_{max} \ge 3r_{max} + g$). Defining deviation variables with respect to the OSSOP ($\tilde{r} = r - r^*$, $\tilde{v} = v - v^*$, and $\tilde{f} = f - f^*$), we find the following constraints, which define the set of possible BOPs (i.e., the triple $\tilde{r}_{ss}$, $\tilde{v}_{ss}$, $\tilde{f}_{ss}$): $\tilde{f}_{ss} = 3\tilde{r}_{ss}$, $\tilde{v}_{ss} = 0$, $\tilde{r}_{min} \le \tilde{r}_{ss} \le \tilde{r}_{max}$, $\tilde{f}_{min} \le \tilde{f}_{ss} \le \tilde{f}_{max}$ (where $\tilde{r}_{min} = r_{min} - r_{max}$, $\tilde{r}_{max} = 0$, $\tilde{f}_{min} = f_{min} - 3r_{max} - g$, and $\tilde{f}_{max} = f_{max} - 3r_{max} - g$). Since the implemented controller will be operating around the BOP, we must define new deviation variables $x_1 = \tilde{r} - \tilde{r}_{ss}$, $x_2 = \tilde{v} - \tilde{v}_{ss}$, and $u = \tilde{f} - \tilde{f}_{ss}$, which are from the perspective of the controller. Thus, the system dynamics are described as $\dot{x} = Ax + Bu + Gw$ where

$$A = \begin{bmatrix} 0 & 1 \\ -3 & -2 \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \quad \text{and } G = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \quad (1)$$

Additionally, the following inequality constraints must be communicated to the controller: $x_1^{min} \le x_1 \le x_1^{max}$ and $u^{min} \le u \le u^{max}$, where $x_1^{min} = \tilde{r}_{min} - \tilde{r}_{ss}$, $x_1^{max} = \tilde{r}_{max} - \tilde{r}_{ss}$, $u^{min} = \tilde{f}_{min} - \tilde{f}_{ss}$, and $u^{max} = \tilde{f}_{max} - \tilde{f}_{ss}$. These constraints are illustrated in Figure 4. Let us now assume that the

Figure 4. Feasible steady-state operating points; pointwise in time perspective.

controller will take the form of a full state information linear feedback $u(t) = Lx(t)$, where the constant, but stabilizing, matrix $L$ has yet to be determined. This will yield a closed-loop system $\dot{x} = (A + BL)x + Gw$. Assuming the disturbance $w$ to be a white, Gaussian process with zero mean and covariance $\Sigma_w$, we find the steady-state variance of the position to be $\sigma_{x_1}^2 = \phi_1 \Sigma_x \phi_1^T$, where $\phi_1 = [1 \;\; 0]$ and $\Sigma_x \ge 0$ satisfies $(A + BL)\Sigma_x + \Sigma_x(A + BL)^T + G\Sigma_w G^T = 0$. Additionally, the steady-state variance of the manipulated variable is given by $\sigma_u^2 = L\Sigma_x L^T$. If we now superimpose the $(x_1, u)$ single standard deviation ellipse around the point $(\tilde{r}_{ss}, \tilde{f}_{ss})$, we find Figure 5. If we define this ellipse as the EDOR, then the following inequality constraints must be imposed to ensure the EDOR is within the constraint polytope: $\sigma_{x_1}^2 \le (x_1^{min})^2$, $\sigma_{x_1}^2 \le (x_1^{max})^2$, $\sigma_u^2 \le (u^{max})^2$, and $\sigma_u^2 \le (u^{min})^2$. We now have enough information to define our MBOP selection problem:

$$\chi = \min_{\substack{\tilde{r}_{ss}, \tilde{f}_{ss}, L, \Sigma_x \ge 0, \\ \sigma_{x_1} \ge 0, \sigma_u \ge 0}} \; -\tilde{r}_{ss} \quad (2)$$

$$\text{s.t.} \quad \tilde{f}_{ss} = 3\tilde{r}_{ss} \quad (3)$$

$$\tilde{r}_{min} \le \tilde{r}_{ss} \le \tilde{r}_{max}, \quad \tilde{f}_{min} \le \tilde{f}_{ss} \le \tilde{f}_{max} \quad (4)$$

$$\sigma_{x_1}^2 \le (\tilde{r}_{ss} - \tilde{r}_{max})^2, \quad \sigma_{x_1}^2 \le (\tilde{r}_{ss} - \tilde{r}_{min})^2 \quad (5)$$

$$\sigma_u^2 \le (\tilde{f}_{ss} - \tilde{f}_{max})^2, \quad \sigma_u^2 \le (\tilde{f}_{ss} - \tilde{f}_{min})^2 \quad (6)$$

$$\sigma_{x_1}^2 = \phi_1 \Sigma_x \phi_1^T, \quad \sigma_u^2 = L \Sigma_x L^T \quad (7)$$

$$(A + BL)\Sigma_x + \Sigma_x(A + BL)^T + G\Sigma_w G^T = 0 \quad (8)$$
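To make the role of constraints (7) and (8) concrete, the short sketch below (illustrative only, not from the paper; the gain L and the noise intensity Sigma_w are assumed values) computes the steady-state variances of the position and the input for a candidate feedback by solving the Lyapunov equation with SciPy. These are exactly the quantities that must fit inside the back-off squares of constraints (5) and (6).

```python
# Illustrative sketch (not the authors' code): steady-state variances of example 1
# for a candidate gain L, obtained from the Lyapunov equation (8).
# Sigma_w and L below are assumed values.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-3.0, -2.0]])   # eq (1)
B = np.array([[0.0], [1.0]])
G = np.array([[0.0], [1.0]])
Sigma_w = np.array([[1.0]])                # assumed disturbance intensity
L = np.array([[-5.0, -2.0]])               # assumed stabilizing feedback gain

Acl = A + B @ L
# Solve (A + BL) Sigma_x + Sigma_x (A + BL)^T + G Sigma_w G^T = 0
Sigma_x = solve_continuous_lyapunov(Acl, -G @ Sigma_w @ G.T)

phi1 = np.array([[1.0, 0.0]])
var_x1 = (phi1 @ Sigma_x @ phi1.T).item()  # eq (7): position variance
var_u = (L @ Sigma_x @ L.T).item()         # eq (7): input variance
print(var_x1, var_u)  # must satisfy (5)-(6) for a candidate (r_ss, f_ss) to be admissible
```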

In the context of Figures 4 and 5, we see that constraint (4) describes the set of available BOPs within the constraint polytope, together with the steady-state operating line (constraint (3)). Additionally, constraints (7) and (8) define the standard deviation ellipse around a candidate BOP. Finally, constraints (5) and (6) ensure that the ellipse does not extend beyond the constraint polytope. It should be noted that the formulation of ref 10 does not include explicit constraints on the steady-state variables. The formulation of ref 10 does, however, minimize with respect to standard deviation variables, which distinguishes it from that of refs 12 and 13, which



minimizes with respect to variance. (Most economic-based objective functions are linear with respect to the process variables. Constructing a physically meaningful objective function with respect to variances would be a challenge, if not impossible.) Although the formulation of ref 9 includes explicit constraints on the steady-state variables and minimizes with respect to standard deviation, it assumes a fixed set of tuning parameter values (i.e., the linear feedback $L$ is a constant). van Hessem et al.,10 however, do not make this assumption. In contrast, the current formulation possesses all three attributes: it contains explicit steady-state operating line constraints, minimizes with respect to standard deviations (or more precisely the actual steady-state values), and finally allows/requires total freedom with respect to MPC tuning values ($L$ is unspecified).

In the next section we present the generalized formulation of our MBOP selection problem. This section also illustrates how to convert the resulting nonlinear programming problem into a convex/reverse-convex form. Section 2 then concludes by discussing how to extend the basic formulation to the measurement and discrete-time cases. In section 3, we illustrate the global branch and bound search scheme proposed for solving the general formulation. In section 4, we discuss the selection of tuning parameters based on the identified MBOP solution.

Figure 5. Feasible steady-state operating points; stochastic perspective.

2. Problem Formulation

The general MBOP formulation starts by assuming the existence of a steady-state economic optimizer which operates on a nonlinear dynamic system $\dot{s} = f(s, m, p)$, where $s$ is the state, $m$ is the manipulated input, and $p$ is the disturbance. The steady-state optimizer will employ a nonlinear objective along with a set of linear inequality constraints of the form $d_{min} \le q \le d_{max}$, where $q = D_x s + D_u m + D_w p$. Thus, the steady-state optimization problem is stated as

$$\min_{s,m} \; g(s, m, \bar{p}) \quad (9)$$

$$\text{s.t.} \quad 0 = f(s, m, \bar{p}), \quad q = D_x s + D_u m + D_w \bar{p}, \quad d_{min} \le q \le d_{max} \quad (10)$$

where $\bar{p}$ is the measured/expected value of the disturbance. The solution to problem (9) (the OSSOP) is denoted as $s^*$ and $m^*$. Next we assume that the actual operation of the plant will occur near the point $(s^*, m^*, \bar{p})$ and develop a linearized dynamic model, $\dot{\tilde{s}} = A\tilde{s} + B\tilde{m} + G\tilde{p}$, where $(\tilde{s}, \tilde{m}, \tilde{p})$ are deviation variables with respect to $(s^*, m^*, \bar{p})$ and $A$, $B$, and $G$ are partial derivatives of $f(s, m, p)$ evaluated at $(s^*, m^*, \bar{p})$. If we assume $p$ to be equal to $\bar{p}$ at steady state (i.e., $\tilde{p} = 0$), then the following equality limits the set of available BOPs: $0 = A\tilde{s}_{ss} + B\tilde{m}_{ss}$, where $\tilde{s}_{ss} = s_{ss} - s^*$ and $\tilde{m}_{ss} = m_{ss} - m^*$ ($s_{ss}$, $m_{ss}$ being the actual steady-state point). A similar development with respect to the constraints yields the following additional limitation of available BOPs: $\tilde{d}_{min} \le \tilde{q}_{ss} \le \tilde{d}_{max}$, where $\tilde{d}_{min} = d_{min} - q^*$, $\tilde{d}_{max} = d_{max} - q^*$, $q^* = D_x s^* + D_u m^* + D_w \bar{p}$, and $\tilde{q}_{ss} = q_{ss} - q^* = D_x \tilde{s}_{ss} + D_u \tilde{m}_{ss}$. Since dynamic operation will occur around the point $(\tilde{s}_{ss}, \tilde{m}_{ss}, \bar{p})$, we define new deviation variables and constraints

$$\dot{x} = Ax + Bu + Gw, \quad z = D_x x + D_u u + D_w w, \quad \tilde{d}_{min} - \tilde{q}_{ss} \le z \le \tilde{d}_{max} - \tilde{q}_{ss} \quad (11)$$

where $x = \tilde{s} - \tilde{s}_{ss}$, $u = \tilde{m} - \tilde{m}_{ss}$, $w = \tilde{p}$, and $z = \tilde{q} - \tilde{q}_{ss}$ (or equivalently $x = s - s_{ss}$, $u = m - m_{ss}$, $w = p - \bar{p}$, and $z = q - q_{ss}$). If we now assume a linear feedback of the form $u(t) = Lx(t)$ along with a stochastic framework ($w$ being a Gaussian white process with zero mean and covariance $\Sigma_w$), then the steady-state covariance of the signal $z$ is given by14

$$\Sigma_z = (D_x + D_u L)\Sigma_x(D_x + D_u L)^T + D_w \Sigma_w D_w^T \quad (12)$$

where $\Sigma_x$ is the positive semidefinite solution to $(A + BL)\Sigma_x + \Sigma_x(A + BL)^T + G\Sigma_w G^T = 0$. (Please note the distinction between the steady-state covariance and the steady-state operating point.) Defining $\phi_i$ as the $i$th row of an appropriately sized identity matrix, we define the steady-state variance of the $i$th output $z_i$ as $\zeta_i = \phi_i \Sigma_z \phi_i^T$. Contrasting this with the pointwise-in-time constraints $\tilde{d}_{min,i} - \tilde{q}_{ss,i} \le z_i(t) \le \tilde{d}_{max,i} - \tilde{q}_{ss,i}$, $i = 1...n_z$, we propose the constraints $\zeta_i < (\tilde{q}_{ss,i} - \tilde{d}_{max,i})^2$ and $\zeta_i < (\tilde{q}_{ss,i} - \tilde{d}_{min,i})^2$, both for $i = 1...n_z$. These reverse-convex constraints serve to ensure that the single standard deviation ellipse defined by $\Sigma_z$ (and centered at $q_{ss}$) is completely contained in the box defined by $(d_{min}, d_{max})$. It should be noted that each constraint imposes a 63% confidence level with respect to the corresponding output. If one desires a greater confidence level (say 95%), then a larger ellipse would be more appropriate. In this case, one could simply impose constraints such as $\zeta_i R^2 < (\tilde{q}_{ss,i} - \tilde{d}_{max,i})^2$, where $R$ is a constant representing the desired confidence level (if 95% is desired, then set $R = 2$). The MBOP selection problem is now formulated as

$$\chi = \min_{\substack{\tilde{s}_{ss}, \tilde{m}_{ss}, \tilde{q}_{ss}, \\ L, \Sigma_x \ge 0, \zeta_i \ge 0}} \; g_s^T \tilde{s}_{ss} + g_m^T \tilde{m}_{ss} \quad (13)$$

$$\text{s.t.} \quad 0 = A\tilde{s}_{ss} + B\tilde{m}_{ss}, \quad \tilde{q}_{ss} = D_x \tilde{s}_{ss} + D_u \tilde{m}_{ss}, \quad \tilde{d}_{min} \le \tilde{q}_{ss} \le \tilde{d}_{max} \quad (14)$$

$$\zeta_i < (\tilde{q}_{ss,i} - \tilde{d}_{min,i})^2, \quad \zeta_i < (\tilde{q}_{ss,i} - \tilde{d}_{max,i})^2, \quad i = 1...n_z \quad (15)$$

$$\zeta_i = \phi_i[(D_x + D_u L)\Sigma_x(D_x + D_u L)^T + D_w \Sigma_w D_w^T]\phi_i^T, \quad i = 1...n_z \quad (16)$$

$$(A + BL)\Sigma_x + \Sigma_x(A + BL)^T + G\Sigma_w G^T = 0 \quad (17)$$

where $g_s$ and $g_m$ represent partial derivatives of $g(s, m, p)$ with respect to $s$ and $m$ evaluated at the point $s^*$,


$m^*$, and $\bar{p}$. Using the following theorem (a slight generalization of theorem 3.1 of ref 13) we convert a portion of the above nonlinear constraints into convex form.

Theorem 2.1. $\exists$ stabilizing $L$, $\Sigma_x \ge 0$, and $\zeta_i$ s.t. $(A + BL)\Sigma_x + \Sigma_x(A + BL)^T + G\Sigma_w G^T = 0$, $\phi_i[(D_x + D_u L)\Sigma_x(D_x + D_u L)^T + D_w \Sigma_w D_w^T]\phi_i^T = \zeta_i$, $i = 1...n_z$, and $\zeta_i < \bar{z}_i^2$, $i = 1...n_z$, if and only if $\exists$ $Y$, $X > 0$, and $\zeta_i'$ s.t.

$$(AX + BY) + (AX + BY)^T + G\Sigma_w G^T < 0 \quad (18)$$

$$\begin{bmatrix} \zeta_i' - \phi_i D_w \Sigma_w D_w^T \phi_i^T & \phi_i(D_x X + D_u Y) \\ (D_x X + D_u Y)^T \phi_i^T & X \end{bmatrix} > 0, \quad i = 1...n_z \quad (19)$$

$$\zeta_i' < \bar{z}_i^2, \quad i = 1...n_z \quad (20)$$

Figure 6. Reverse-convex portion of the feasible region for example 2.

Thus, constraints (16) and (17) can be replaced by the Linear Matrix Inequalities (LMIs) (18) and (19). (For additional details concerning LMIs see refs 15-18.) The result is a linear objective problem possessing a set of convex constraints (all except (15)) along with a set of reverse-convex constraints (i.e., the complement of the feasible region is convex). Example 2. To put the problem of example 1 into the generalized form, all that is required is to define

$$D_x = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, \quad D_u = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \quad \text{and } D_w = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$

and replace $\sigma_{x_1}^2$ with $\zeta_1$ and $\sigma_u^2$ with $\zeta_2$. If we then assume $r_{min} = -1$, $r_{max} = 1$, $f_{min} = 0$, $f_{max} = 15$, $g = 9.8$, and $\Sigma_w = 10$, and then apply theorem 2.1, we arrive at the formulation

$$\chi = \min_{\substack{\tilde{r}_{ss}, \tilde{f}_{ss}, \tilde{q}_{ss,1}, \tilde{q}_{ss,2}, Y, \\ X > 0, \zeta_1 > 0, \zeta_2 > 0}} \; -\tilde{r}_{ss} \quad (21)$$

$$\text{s.t.} \quad \tilde{f}_{ss} = 3\tilde{r}_{ss}, \quad \tilde{q}_{ss,1} = \tilde{r}_{ss}, \quad \tilde{q}_{ss,2} = \tilde{f}_{ss} \quad (22)$$

$$-2 \le \tilde{q}_{ss,1} \le 0, \quad -12.8 \le \tilde{q}_{ss,2} \le 2.2 \quad (23)$$

$$\zeta_1 < (\tilde{q}_{ss,1} + 2)^2, \quad \zeta_1 < (\tilde{q}_{ss,1})^2 \quad (24)$$

$$\zeta_2 < (\tilde{q}_{ss,2} + 12.8)^2, \quad \zeta_2 < (\tilde{q}_{ss,2} - 2.2)^2 \quad (25)$$

$$\begin{bmatrix} \zeta_1 & \phi_1 X \\ X\phi_1^T & X \end{bmatrix} > 0, \quad \begin{bmatrix} \zeta_2 & Y \\ Y^T & X \end{bmatrix} > 0 \quad (26)$$

$$(AX + BY) + (AX + BY)^T + 10GG^T < 0 \quad (27)$$

Figure 7. MBOP solution for example 2.
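For a fixed candidate back-off position, constraints (22)-(27) reduce to a convex (LMI) feasibility problem, and the bisection idea discussed in the text below can then push the candidate toward the OSSOP. The following sketch illustrates this with CVXPY; it is not the authors' implementation, and the small margin eps, the SCS solver choice, and the assumed feasible starting point r = -1.0 are ours.

```python
# Illustrative sketch: convex feasibility of (24)-(27) for a fixed back-off r_ss,
# followed by the bisection search mentioned in the text.  Not the authors' code.
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0], [-3.0, -2.0]])
B = np.array([[0.0], [1.0]])
G = np.array([[0.0], [1.0]])
phi1 = np.array([[1.0, 0.0]])

def feasible(r_ss, eps=1e-6):
    """Check the convex constraints of (21)-(27) with q1 = r_ss and q2 = 3*r_ss fixed."""
    q1, q2 = r_ss, 3.0 * r_ss
    X = cp.Variable((2, 2), symmetric=True)
    Y = cp.Variable((1, 2))
    z1 = cp.Variable((1, 1), nonneg=True)   # zeta_1 (position variance)
    z2 = cp.Variable((1, 1), nonneg=True)   # zeta_2 (input variance)
    cons = [
        X >> eps * np.eye(2),
        cp.bmat([[z1, phi1 @ X], [(phi1 @ X).T, X]]) >> 0,                           # (26), first LMI
        cp.bmat([[z2, Y], [Y.T, X]]) >> 0,                                           # (26), second LMI
        (A @ X + B @ Y) + (A @ X + B @ Y).T + 10.0 * (G @ G.T) << -eps * np.eye(2),  # (27)
        z1 <= min((q1 + 2.0) ** 2, q1 ** 2) - eps,                                   # (24)
        z2 <= min((q2 + 12.8) ** 2, (q2 - 2.2) ** 2) - eps,                          # (25)
    ]
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve(solver=cp.SCS)
    return prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)

# Bisection: lo is assumed feasible, hi = 0 is infeasible (zeta_1 < 0 is impossible there).
lo, hi = -1.0, 0.0
for _ in range(30):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if feasible(mid) else (lo, mid)
print("approximate minimum back-off position:", lo)
```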

Figure 8. Impact of constraint polytope changes for example 2.

where A, B, and G are given in eq 1. The reverse-convex constraints of (24) are depicted in Figure 6. In this particular example, we can overcome the reverse-convex issue by substituting the equalities of (22), everywhere, and obtain an optimization problem possessing a single degree of freedom with respect to the backed-off operating point. Thus, a simple bisection search can be used to find the largest value of r˜ ss such that the remaining convex constraints are feasible. (In section 3, we discuss the general, global search method assuming the above simplification is not possible.) The MBOP for this example along with its corresponding EDOR are depicted in Figure 7 (case A). Now consider the impact of slight changes to the constraint polytope. The first is to change fmax from 15

to 18 (case B), and the second is to change $f_{min}$ from 0 to 9.5 (case C). Figure 8 illustrates how the MBOP solution will adjust the shape of the EDOR to balance constraint and steady-state objectives. In case B, the expanded polytope allows the manipulated variable ($f$) a greater range of motion. This increase in available force is then exploited by the controller to reduce the standard deviation in the mass position direction, which in the end allows the MBOP to be moved closer to the OSSOP. In case C, a smaller range of motion for the input force requires the EDOR shape to move in the other direction and thus forces the MBOP farther from the OSSOP. It is important to remember that all changes to the EDOR shape stem from changes in the feedback element, $u(t) = Lx(t)$.



Figure 9. MBOP solution for example 3.

Figure 10. Convex partitioning of the feasible region.

(For the three cases the gains are $L_A^* = [-6.445 \;\; -2.110]$, $L_B^* = [-1.645 \;\; -0.700]$, and $L_C^* = [-22.91 \;\; -5.058]$.) Thus, in addition to selecting the MBOP, our formulation provides valuable tuning guidance with respect to the feedback controller.

Based on the results of ref 13, we see that the continuous-time full state information (FSI) MBOP selection problem (defined by eqs 13-17) is easily extended to the partial state information (PSI) and discrete-time cases. In the continuous-time PSI case, all that is required is to replace eqs 16 and 17 with

$$\zeta_i = \phi_i[(D_x + D_u L)\Sigma_{\hat{x}}(D_x + D_u L)^T + D_w \Sigma_w D_w^T + D_x \Sigma_e D_x^T]\phi_i^T, \quad i = 1...n_z \quad (28)$$

$$(A + BL)\Sigma_{\hat{x}} + \Sigma_{\hat{x}}(A + BL)^T + \Sigma_e C^T \Sigma_v^{-1} C \Sigma_e = 0 \quad (29)$$

where $\Sigma_e$, $\Sigma_v$, and $C$ are fixed constant matrices defined in ref 13. Then reapplication of theorem 2.1, using appropriate variable name changes, will yield convex versions of (28) and (29). In the FSI discrete-time case, eq 17 should be replaced with

$$(A + BL)\Sigma_x(A + BL)^T + G\Sigma_w G^T = \Sigma_x \quad (30)$$
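As a side note, eq (30) is a standard discrete-time Lyapunov equation, so for any fixed stabilizing gain the corresponding covariance can be evaluated directly; a minimal sketch (with placeholder matrices, not the furnace/reactor data) is given below.

```python
# Illustrative only: evaluating the discrete-time steady-state covariance of eq (30),
# Sigma_x = (A + BL) Sigma_x (A + BL)^T + G Sigma_w G^T, for placeholder matrices.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[1.0, 0.1], [-0.3, 0.8]])   # hypothetical discrete-time model
B = np.array([[0.0], [0.1]])
G = np.array([[0.0], [0.1]])
Sigma_w = np.array([[1.0]])
L = np.array([[-1.0, -2.0]])              # gain chosen so that A + BL is Schur stable

Acl = A + B @ L
Sigma_x = solve_discrete_lyapunov(Acl, G @ Sigma_w @ G.T)  # solves Acl X Acl^T - X + Q = 0
print(Sigma_x)
```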

In addition, in (14) the equality $0 = A\tilde{s}_{ss} + B\tilde{m}_{ss}$ should be replaced with $\tilde{s}_{ss} = A\tilde{s}_{ss} + B\tilde{m}_{ss}$ (where $A$, $B$, and $G$ are now with respect to the discrete-time model $x(k+1) = Ax(k) + Bu(k) + Gw(k)$). Then application of a slightly generalized theorem 6.1 of ref 13 will convexify the new constraints. Using similar methods, the discrete-time PSI case is easily deduced and convexified using a generalized version of theorem A.1 of ref 13.

Example 3. Reconsider example 2, case A, but this time rather than assume an FSI structure, we assume that the only information available for feedback is a velocity sensor with zero bias and random noise characterized by a standard deviation of 0.5 m/s. Figure 9 compares the new PSI MBOP with the corresponding FSI solution. It is interesting to note that the constraint $\tilde{f}_{ss} = 3\tilde{r}_{ss}$ is easily found by connecting the MBOPs of this and the previous example.

3. Global Solution Scheme

In this section we outline a branch and bound type procedure for obtaining global solutions to the general problem. Motivation for this approach comes from the following observation. Consider a partitioning of the reverse-convex region such that the union of the convex subregions contains the original (for the variable $\tilde{q}_{ss,1}$

this is depicted in Figure 10). If we now solve the corresponding MBOP selection problem under the restriction of each subregion, then the minimum of the subproblem optimal costs will be an underestimate of the globally optimal cost associated with the original problem (this is an underestimate because the union of the subregions is larger than the original feasible region). If we had selected smaller (and more accurate) subregions, then the above procedure would provide a better underestimate of the true solution. It is easily seen that all of the above can be recast in terms of overestimates by considering subregions such that their union is contained within the original. Clearly, a direct implementation of the above concepts is intractable due to the large number of subproblems one would need to solve, especially for the multidimensional case. Although the proposed branch and bound scheme uses similar concepts, the key to its efficiency is to use previous subproblem solutions to eliminate large regions of the search space.

Before describing the proposed procedure we must first introduce some terminology. The first important concept is the set of branching variables. In the general case, each of the $\tilde{q}_{ss,i}$ variables should be identified as a branching variable. (One could reduce the number of branching variables down to the degrees of freedom found in the equality constraints, as was done in example 2. Unfortunately, the required substitutions will complicate the form of the reverse-convex constraints and thus will not be applied here. Investigation of this important computational question will be the subject of future research.) Next is the concept of a node, which has three components. The first component is a set of interval-defining inequalities for each branching variable: $\tilde{q}_{ss,i}^{min} \le \tilde{q}_{ss,i} \le \tilde{q}_{ss,i}^{max}$ for all $i$. Based on this interval, an appropriate convex underestimate subproblem should be defined and solved. This will generate the second component of a node, namely the underestimate cost of the node (denoted as $\chi_j^{ue}$ for node $j$). Finally, define a convex overestimate subproblem to generate the node overestimate cost ($\chi_j^{oe}$). (If either of these problems is infeasible, set the appropriate subproblem cost equal to infinity.) Summarizing the key elements of a node, we have the interval definition, the underestimate cost, and the overestimate cost. As described previously, a primary goal is to eliminate intervals from the search space. Thus, any node which has had its interval eliminated will be termed inactive. The set of all active nodes will then be denoted as the table. The final definition is that of node offspring (or children). Given an active node,


children are created by dividing the interval into two parts and finding the under- and overestimate costs for each new node (i.e., with respect to the new intervals). Additionally, since the interval of the parent is now covered by the intervals of the children, the parent becomes inactive (i.e., removed from the table).

The node elimination procedure is now stated as follows. Clearly, any node with an infeasible underestimate problem should be eliminated. (However, infeasibility with respect to the overestimate is immaterial.) Next, we note that the overestimate cost $\chi_j^{oe}$ of any node $j$ is greater than $\chi$, the global optimum of the original problem. Thus, any node with an underestimate cost greater than $\chi_{min}^{oe} = \min_j\{\chi_j^{oe}\}$ should be eliminated. Finally, at any point in the search one can bound the globally optimal cost with the inequality $\chi_{min}^{ue} \le \chi \le \chi_{min}^{oe}$, where $\chi_{min}^{ue}$ and $\chi_{min}^{oe}$ are defined as the minimum costs over the set of active nodes.

The branch and bound procedure for the general MBOP selection problem begins by defining the first node interval as the entire branching variable region ($\tilde{d}_{min} \le \tilde{q}_{ss} \le \tilde{d}_{max}$). Solving for the under- and overestimate costs of this node will generate the first entry in the table. The branch and bound procedure is now stated as the following:

1. Select an active node to be investigated.
2. Create offspring nodes. Add the children to the table and remove the parent.
3. Find $\chi_{min}^{ue}$ and $\chi_{min}^{oe}$ based on the new table.
4. Remove from the table all nodes for which $\chi_j^{ue} > \chi_{min}^{oe}$.
5. If the table is empty, stop; the problem is infeasible.
6. If $\chi_{min}^{oe} - \chi_{min}^{ue} > \epsilon$, go to 1. Otherwise stop.

Once the stopping criterion of step 6 has been reached, we will know that the globally optimal cost is within the interval $[\chi_{min}^{ue}, \chi_{min}^{oe}]$. However, since there will be at least one active node, it is unclear which point in the $(\tilde{s}_{ss}, \tilde{m}_{ss})$ space we should select as the $\epsilon$-approximation of the global solution. In our examples we selected the underestimate solution point corresponding to $\chi_{min}^{ue}$. If $\epsilon$ has been selected sufficiently small, then there should only be one active node, and the above is clearly a reasonable choice. If there is more than one active node, then there are multiple local optima all within the $[\chi_{min}^{ue}, \chi_{min}^{oe}]$ value interval (or there could be multiple global optima). In this case the above point is still reasonable in the sense that it is at worst $\epsilon$ away from the globally optimal cost.

As with all branch and bound schemes, there is some freedom regarding the execution of steps 1 and 2. In particular, one must decide which node to investigate (step 1) and how to partition the parent node interval (step 2). Although there are many approaches, we found the following logic to provide a reasonable convergence rate. In step 1, we simply selected the node containing the smallest underestimate cost. In step 2 there are two decisions that need to be made: the point of partition and the branching direction. Regarding the partition point, one approach is to select the center of the node interval. Another approach is to utilize the underestimate solution point of the parent node. In the following examples we found that a weighted average of the two gave a reasonable convergence rate (20% center, 80% underestimate solution). Regarding the split direction, let $\tilde{q}_{ss,i}$, $i = 1...n_z$, be the set of branching variables (i.e., possible directions), each with a pair of reverse-convex constraints $\zeta_i < (\tilde{q}_{ss,i} - \tilde{d}_{min,i})^2$ and $\zeta_i < (\tilde{q}_{ss,i} - \tilde{d}_{max,i})^2$.

Figure 11. Preheating furnace/reactor system of example 4.

Figure 12. Furnace temperature vs reactor temperature for example 4.

Then let $(\zeta_i^*, \tilde{q}_{ss,i}^*)$ be the underestimate solution point of the parent and define $\hat{q}_{ss,i}^{min} = \tilde{d}_{min,i} + \sqrt{\zeta_i^*}$, $\hat{q}_{ss,i}^{max} = \tilde{d}_{max,i} - \sqrt{\zeta_i^*}$, and $\Delta q_i = \max\{\hat{q}_{ss,i}^{min} - \tilde{q}_{ss,i}^*, \; \tilde{q}_{ss,i}^* - \hat{q}_{ss,i}^{max}\}$ for each $i$. The terms $\Delta q_i$ indicate how far the underestimate solution point is from the original feasible region (a negative value indicates that the point is feasible with respect to the original constraint) and thus the direction in which improved approximations are needed. As such, we selected the split direction to be with respect to the variable $\tilde{q}_{ss,i^*}$, $i^* = \arg\max_i\{\Delta q_i\}$.
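The procedure of steps 1-6, together with the selection heuristics just described, can be organized as a compact loop. The sketch below is schematic only (solve_underestimate, solve_overestimate, and split are hypothetical callbacks standing for the convex subproblems and the partition rule; each subproblem returns an infinite cost when infeasible).

```python
# Schematic of the branch-and-bound loop of section 3 (illustrative, not the authors' code).
# solve_underestimate / solve_overestimate build and solve the convex subproblems on a
# branching-variable interval and return (cost, solution_point), with cost = inf if infeasible.
# split(interval, point) returns the two child intervals.
import math

def branch_and_bound(root_interval, solve_underestimate, solve_overestimate, split,
                     eps=1e-3, max_iter=1000):
    """Return (lower_bound, upper_bound, point) bracketing the global MBOP cost chi."""
    ue, pt = solve_underestimate(root_interval)
    oe, _ = solve_overestimate(root_interval)
    table = [{"interval": root_interval, "ue": ue, "oe": oe, "point": pt}]
    for _ in range(max_iter):
        if not table:
            raise RuntimeError("problem infeasible")               # step 5
        parent = min(table, key=lambda n: n["ue"])                 # step 1
        table.remove(parent)                                       # step 2: parent becomes inactive
        for child in split(parent["interval"], parent["point"]):   # step 2: offspring nodes
            ue, pt = solve_underestimate(child)
            if math.isfinite(ue):                                  # infeasible underestimate: eliminate
                oe, _ = solve_overestimate(child)
                table.append({"interval": child, "ue": ue, "oe": oe, "point": pt})
        if not table:
            raise RuntimeError("problem infeasible")               # step 5
        oe_min = min(n["oe"] for n in table)                       # step 3
        table = [n for n in table if n["ue"] <= oe_min]            # step 4: prune
        best = min(table, key=lambda n: n["ue"])
        if oe_min - best["ue"] <= eps:                             # step 6: gap closed
            return best["ue"], oe_min, best["point"]
    raise RuntimeError("iteration limit reached")
```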

Example 4. We return to the preheating furnace/reactor system of ref 13 (see Figure 11). The system model is $\dot{s} = As + Bu + Gw$, where $A$, $B$, and $G$ are the linearized model matrices of this 4-state, 3-input system,

and the constraint polytope is the same as in ref 13. However, the variance of the disturbance input is this time assumed to be $\Sigma_w = (0.13975)^2$. The resulting open-loop dynamic operating region is depicted in Figure 12. Following again the example of ref 13, the nominal operating point is assumed to be at the center of the constraint polytope,



Figure 13. O2 concentration vs reactor temperature for example 4.

Figure 14. CO concentration vs reactor temperature for example 4.

Figure 15. Reactor feed rate vs reactor temperature for example 4.

Figure 16. Fuel feed rate vs reactor temperature for example 4.

Figure 17. Vent position vs reactor temperature for example 4.

$s_{nom} = [375 \;\; 500 \;\; 4 \;\; 100]^T$ and $m_{nom} = [10000 \;\; 10 \;\; 0.1]^T$, with units as indicated in Figures 12-17. To apply the MBOP selection procedure, we start by defining a process profit function as $-(g_s^T s + g_m^T m)$, where $g_s^T = [0 \;\; 0 \;\; 0 \;\; 0.01]$ and $g_m^T = [-10 \;\; 30 \;\; 0]$ (we assume all coefficients to be defined such that the profit function has units of $/day). If we now solve an LP problem similar to (9), with negative profit as the objective function and constraints $0 = A(s - s_{nom}) + B(m - m_{nom})$, we find the OSSOP to be $s^* = [372 \;\; 495 \;\; 4.79 \;\; 70]^T$ and $m^* = [10100 \;\; 9.83 \;\; 0.103]^T$, which properly reflects a

desire to maximize the reactant feed rate. Under these steady-state assumptions the process profit is $100 700/day. As a point of reference, the profit associated with the nominal operating point is $99 699/day. Next, we define $D_x = [I_{4\times4} \,|\, 0]^T$, $D_u = [0 \,|\, I_{3\times3}]^T$, and $D_w = 0$ and find $\tilde{d}_{min} = [-17.5 \;\; 0.0 \;\; -2.0 \;\; -57.8 \;\; -200 \;\; -1.83 \;\; -0.0142]$ and $\tilde{d}_{max} = [22.5 \;\; 10.0 \;\; 0.0 \;\; 72.2 \;\; 0.0 \;\; 2.17 \;\; 0.0058]$. Then application of the branch and bound scheme yields the MBOP as $\tilde{s}_{ss}^* = [0.614 \;\; 0.818 \;\; -0.621 \;\; 30.0]$ and $\tilde{m}_{ss}^* = [0.0 \;\; 0.065 \;\; -0.003]$ (also depicted in Figures 12-17 as the FSI case). The minimum amount of lost profit due to the input disturbance and system dynamics is found as $g_s^T \tilde{s}_{ss}^* + g_m^T \tilde{m}_{ss}^*$ = $2.25/day, which gives the MBOP profit as $100 698/day. To address the PSI case, we assume that the only measurement available is of the exit reactor temperature. This sensor is assumed to have a measurement error variance of $\Sigma_v = (0.1)^2$. The resulting MBOP is depicted in Figures 12-17. The profit is found to be $100 693/day. The loss in profit is only $5/day as compared with the FSI case. This is despite a significant enlargement of the EDOR. In this particular example, the loss in profit with respect to the OSSOP is fairly small for both the FSI and PSI cases. This feature can be attributed to our formulation's ability to select a feedback controller, $L$, such that the reactant feed rate need not be utilized as a manipulated variable (this is observed as the nearly zero variance in the reactant feed rate direction of Figure 15). However, if one were to reduce the availability of the other manipulated variables, then the MBOP solution would be forced into greater utilization


of the reactant feed rate input. Consider, for example, a change in the fuel feed constraints from 10 ± 2 to 10 ± 0.25 (case B of Figure 18). In this case, the only way to satisfy the output constraints is to select a controller, $L$, that utilizes the reactant feed rate. The net result is a movement of the MBOP off of the reactant feed rate constraint (10 100 bbl/day) to a value of about 10 075 bbl/day (the resulting profit is $100 499/day, which corresponds to a loss of about $200/day as compared with case A). It is also interesting to note that similar impacts on the selected controller will result from changes to the output constraints. Consider again case A, but this time change the bound on O2 concentration from 3% to 4% (case C of Figure 18). In this case the identified profit is $100 599/day.

Figure 18. Impact of constraint polytope changes for example 4.

This fairly simple example illustrates that modifications in the design of the controller may have an important impact on the economics of operating point selection. Conversely, it is proposed that the greatest value obtained from the MBOP selection problem is that it suggests an unconstrained linear feedback, $L$, that is motivated and guided by the economics and constraints of the overall control objective. In the next section, we illustrate how the obtained feedback can be applied to a constrained receding horizon controller such as those presented in refs 19-21.

4. Inverse Optimality

In ref 13 it was shown that the Minimum Variance Covariance Constrained Control (MVC3) problem would generate a linear feedback gain, $L$, that is coincident with some LQR optimal feedback. (A key aspect of this result is that the EDOR generated by the MVC3 problem is minimal in some sense.) Then, through application of an LQR inverse optimal synthesis routine,13 one can generate a predictive controller whose objective function is aligned with the imposed constraints. Unfortunately, in the MBOP selection problem the inverse optimality property may be disrupted by the equality constraints defining the steady-state operating line (i.e., $0 = A\tilde{s}_{ss} + B\tilde{m}_{ss}$). Roughly speaking, this constraint may require some directions of the BOP to be farther from the constraints than is required by the covariance terms. The net result will be the possibility of generating an $L$ corresponding to a nonminimal EDOR. Fortunately, all that is required to find an $L$ corresponding to a minimal EDOR (and thus be inverse optimal) is to solve one additional convex program

$$\min_{\substack{X > 0, Y, \\ \xi_x > 0, \xi_u > 0, \zeta_i > 0}} \; \{\tilde{g}_s^T \xi_x + \tilde{g}_m^T \xi_u\} \quad (31)$$

$$\text{s.t.} \quad (AX + BY) + (AX + BY)^T + G\Sigma_w G^T < 0$$

$$\phi_i X \phi_i^T < \xi_{x,i}, \quad i = 1...n_x$$

$$\begin{bmatrix} \xi_{u,i} & \phi_i Y \\ Y^T \phi_i^T & X \end{bmatrix} > 0, \quad i = 1...n_u$$

$$\begin{bmatrix} \zeta_i - \phi_i D_w \Sigma_w D_w^T \phi_i^T & \phi_i(D_x X + D_u Y) \\ (D_x X + D_u Y)^T \phi_i^T & X \end{bmatrix} > 0, \quad i = 1...n_z$$

$$\zeta_i < (\tilde{q}_{ss,i}^* - \tilde{d}_{min,i})^2, \quad \zeta_i < (\tilde{q}_{ss,i}^* - \tilde{d}_{max,i})^2, \quad i = 1...n_z$$

where $\tilde{q}_{ss,i}^*$ is the solution point of the MBOP selection problem. Although $\tilde{g}_{s,i}$ and $\tilde{g}_{m,i}$ can be selected as any positive numbers, we suggest $\tilde{g}_{s,i} = \pm g_{s,i}/(2\tilde{s}_{ss,i}^*)$ and $\tilde{g}_{m,i} = \pm g_{m,i}/(2\tilde{m}_{ss,i}^*)$, which is a linearization of a variance form of the original standard deviation based objective function. If any of the original weights ($g_{s,i}$ or $g_{m,i}$) or solution values ($\tilde{s}_{ss,i}^*$ or $\tilde{m}_{ss,i}^*$) are zero, then replace them with any small number of appropriate sign.

Example 5. Let us return to the solution found in example 4. In the FSI case the linear feedback generated by the MBOP selection problem is

$$L^* = \begin{bmatrix} 0.0000 & 0.0000 & 0.0000 & 0.0000 \\ -0.0965 & -0.3370 & 0.7464 & -0.0004 \\ -0.0003 & -0.0020 & -0.0139 & 0.0004 \end{bmatrix} \quad (32)$$

where $L^* = Y^* X^{*-1}$ and $Y^*$ and $X^*$ are from the MBOP solution. Application of the inverse optimal synthesis scheme of ref 13 yields the following LQR objective function weights:

$$Q = \begin{bmatrix} 0.0242 & 0.0291 & 0.0793 & 0.0022 \\ 0.0291 & 0.0790 & -0.111 & 0.0010 \\ 0.0793 & -0.111 & 1.2819 & 0.0154 \\ 0.0022 & 0.0010 & 0.0154 & 0.0003 \end{bmatrix}, \quad M = \begin{bmatrix} 0.0001 & 0.0546 & 0.0674 \\ 0.0001 & 0.2187 & 0.0432 \\ 0.0010 & -0.5532 & 0.3400 \\ 0.0000 & -0.0007 & 0.0068 \end{bmatrix}, \quad R = \begin{bmatrix} 0.6667 & 0.0000 & 0.0000 \\ 0.0000 & 0.6640 & 0.0071 \\ 0.0000 & 0.0071 & 0.9003 \end{bmatrix}$$
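One way to confirm that a given gain is LQR optimal for a reported weight set (Q, M, R) is to solve the associated Riccati equation and compare the resulting gain. The sketch below is illustrative only: it is not the authors' synthesis code, it uses the mass-spring-damper model of example 1 with arbitrary weights, and it assumes the objective convention J = integral of (x'Qx + 2x'Mu + u'Ru) dt with the feedback written as u = Lx.

```python
# Illustrative check (not the authors' code): recover the LQR-optimal gain for given
# weights (Q, R, M) by solving the continuous-time Riccati equation with cross term,
# then compare it with a candidate gain such as L*.  Model and weights are placeholders.
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R, M):
    """Gain L such that u = L x minimizes the integral of x'Qx + 2x'Mu + u'Ru."""
    P = solve_continuous_are(A, B, Q, R, s=M)     # generalized CARE with cross weight M
    K = np.linalg.solve(R, B.T @ P + M.T)         # u = -K x is the optimal control
    return -K                                     # written in the paper's convention u = L x

# Example-1 model with arbitrary illustrative weights.
A = np.array([[0.0, 1.0], [-3.0, -2.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
M = np.zeros((2, 1))
print(lqr_gain(A, B, Q, R, M))   # compare against the gain returned by the MBOP procedure
```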

These weights indicate that $L^*$ is in fact inverse optimal, and application of problem (31) is not required.

5. Conclusion

In this work we have proposed a new formulation of the stochastic-based MBOP selection problem, which is unique in the sense that steady-state equality constraints are enforced while simultaneously allowing design freedom with respect to the controller tuning values. The resulting optimization was found to possess a convex/reverse-convex form which was shown to be



readily solved globally via a branch and bound procedure. It was also illustrated that the formulation is easily extended to the discrete-time and partial state information cases and that LQR inverse optimality is readily achieved. It should be noted that the resulting MBOP is highly dependent on the disturbance model (i.e., the matrices $G$ and $\Sigma_w$). It is further noted that the formulation is only valid for the regulation mode of the controller, in the sense that only zero-mean white disturbance inputs, $w$, can be considered. In the case of persistent step-like disturbances, although repeated application of the economic optimizer (with $\bar{p}$ identified) will reduce long-term cost, the current formulation is ill-prepared to address the transient aspects of a sharp step change in $\bar{p}$. Questions concerning the validity of the proposed model in these situations are clearly an important subject in need of additional research.

Acknowledgment

This work was supported by the Department of Chemical and Environmental Engineering and the Armour College of Engineering and Science, Illinois Institute of Technology, Chicago.

Notation

r, v = position and velocity state variables for example 1
f, w, g = manipulated, disturbance, and gravitational force inputs
rmin, rmax = position bounds for example 1
fmin, fmax = force bounds for example 1
r*, v*, f* = Optimal Steady-State Operating Point (OSSOP) for example 1
r̃, ṽ, f̃ = deviation variables wrt. the OSSOP
r̃min/max, ṽmin/max = bounds wrt. the OSSOP
r̃ss, ṽss, f̃ss = backed-off operating point (BOP)
x1, x2, u = deviation variables wrt. the BOP
A, B, G = matrices describing the linear dynamic system
L = linear feedback gain matrix
Σx, Σu, Σz = covariance matrices of signals x, u, and z
σx, σx² = standard deviation and variance of a scalar signal x
φi = ith row of an identity matrix
s, m, q, p = state, manipulated, output, and disturbance signals
Dx, Du, Dw = matrices defining the output signal
dmin, dmax = bounds on the output signal
s*, m*, q*, p̄ = Optimal Steady-State Operating Point (OSSOP)
s̃, m̃, q̃, p̃ = deviation variables wrt. the OSSOP
s̃ss, m̃ss, q̃ss, p̃ss = backed-off operating point (BOP)
d̃min, d̃max = bounds wrt. the OSSOP
d̃min,i, d̃max,i = ith elements of d̃min and d̃max
x, u, z, w = deviation variables wrt. the BOP
nx, nu, nz = dimensions of the state, manipulated, and output vectors
ζi, ξx,i, ξu,i = dummy variables representing the variances of zi, xi, and ui
χ = global solution value of the MBOP selection problem
χj^oe, χj^ue = overestimate and underestimate costs for node j
χmin^oe, χmin^ue = minimum costs over all active nodes (also bounds on χ)

Literature Cited

(1) Narraway, L. T.; Perkins, J. D.; Barton, G. W. Interaction between process design and process control: Economic analysis of process dynamics. J. Process Control 1991, 1, 243.
(2) Zhang, Y.; Monder, D.; Forbes, J. F. Real-time optimization under parametric uncertainty: a probability constrained approach. J. Process Control 2002, 12, 373.
(3) Kassmann, D. E.; Badgwell, T. A.; Hawkins, R. B. Robust steady-state target calculation for model predictive control. AIChE J. 2000, 46, 1007.
(4) Bahri, P. A.; Bandoni, J. A.; Barton, G. W.; Romagnoli, J. A. Back-off calculations in optimising control: a dynamic approach. Comput. Chem. Eng. 1995, 19, s699.
(5) Figueroa, J. L.; Bahri, P. A.; Bandoni, J. A.; Romagnoli, J. A. Economic impact of disturbances and uncertain parameters in chemical processes - a dynamic back-off analysis. Comput. Chem. Eng. 1996, 20, 453.
(6) Contreras-Dordelly, J. L.; Marlin, T. E. Control design for increased profit. Comput. Chem. Eng. 2000, 24, 267.
(7) Figueroa, J. L.; Desages, A. C. Dynamic 'back-off' analysis: use of piecewise linear approximations. Optimal Control Appl. Methods 2003, 24, 103.
(8) Arbiza, M. J.; Bandoni, J. A.; Figueroa, J. L. Use of back-off computations in multilevel MPC. Lat. Am. Appl. Res. 2003, 33, 251.
(9) Loeblein, C.; Perkins, J. D. Structural design for on-line process optimization: I. Dynamic economics of MPC. AIChE J. 1999, 45, 1018.
(10) van Hessem, D. H.; Scherer, C. W.; Bosgra, O. H. LMI-based closed-loop economic optimization of stochastic process operation under state and input constraints. In Proceedings of the 40th IEEE Conference on Decision and Control, Orlando, December 2001; p 4228.
(11) van Hessem, D. H.; Bosgra, O. H. Closed-loop stochastic model predictive control in a receding horizon implementation on a continuous polymerization reactor example. In Proceedings of the American Control Conference, 2004; pp 914-919.
(12) Chmielewski, D. J. Tuning and steady-state target selection in predictive control: a covariance assignment approach. AIChE Annual Meeting, Los Angeles, November 2000; Paper 257g (available at: http://www.chee.iit.edu/~chmielewski/).
(13) Chmielewski, D. J.; Manthanwar, A. M. On the tuning of predictive controllers: Inverse optimality and the minimum variance covariance constrained control problem. Ind. Eng. Chem. Res. 2004, 43, 7807.
(14) Burl, J. B. Linear Optimal Control: H2 and H∞ Methods; Addison-Wesley: Menlo Park, 1999.
(15) Boyd, S. P.; El Ghaoui, L.; Feron, E.; Balakrishnan, V. Linear Matrix Inequalities in System and Control Theory; Society for Industrial and Applied Mathematics (SIAM): 1994.
(16) Skelton, R. E.; Iwasaki, T.; Grigoriadis, K. M. A Unified Algebraic Approach to Linear Control Design; Taylor & Francis: New York, 1999.
(17) Dullerud, G. E.; Paganini, F. A Course in Robust Control Theory: A Convex Approach; Springer-Verlag: New York, 1999.
(18) VanAntwerp, J. G.; Braatz, R. B. A tutorial on linear and bilinear matrix inequalities. J. Process Control 2000, 10, 363.
(19) Sznaier, M.; Damborg, M. J. Suboptimal control of linear systems with state and control inequality constraints. In Proceedings of the 26th IEEE Conference on Decision and Control, Los Angeles, 1987; p 761.
(20) Chmielewski, D. J.; Manousiouthakis, V. I. On constrained infinite-time quadratic optimal control. Syst. Control Lett. 1996, 29, 121.
(21) Scokaert, P. O. M.; Rawlings, J. B. Constrained linear quadratic regulation. IEEE Trans. Autom. Control 1998, 43, 1163.

Received for review February 14, 2005
Revised manuscript received August 4, 2005
Accepted August 5, 2005

IE050175N