Ind. Eng. Chem. Res. 2001, 40, 4555-4569
Low-Order Controller Design Using Global Optimization

David E. Schmidt and Dennis D. Sourlas*

Department of Chemical Engineering and Intelligent Systems Center, University of Missouri-Rolla, Rolla, Missouri 65409-1230
The paper presents an optimization-based fixed-order controller design method. A minimization problem is formulated where the objective function is an l1 - l∞ performance measure that quantifies time-domain performance. To achieve an optimization search over all stabilizing fixed-order controllers, a controller parametrization is used that is based on the Youla parametrization and also includes a quadratic equality constraint. The resulting infinite-dimensional optimization problem is nonconvex and is solved by identifying converging sequences of upper and lower bounds. The algorithm is demonstrated through an example. Finally, the paper introduces a global optimization method for the solution of the aforementioned optimization problem. The method is based on the branch-and-bound technique and employs interval analysis to achieve range and dimension reduction in the branching space. Two implementations of interval computations are presented, and the most efficient one consists of a modified interval Newton method that capitalizes on the structure of the problem's equations. The performance of this hybrid method as well as its scaling characteristics are demonstrated through a PI controller optimization example.

1. Introduction

The practice of chemical engineering process control greatly depends on PI/PID control loops. Such control algorithms are simple because they include a small number of tuning parameters, they are well understood, and technicians and industrial control engineers can easily maintain them. For most chemical processes, these simple controllers have demonstrated good operational characteristics if properly tuned.1,2 In general, low-order control is computationally more efficient and has also been applied to other engineering systems such as space structures. Most standard optimal controller design methods (H∞, l1, etc.) typically identify high-order controllers.
* Author to whom all correspondence should be addressed. Address: 143 Schrenk Hall, Department of Chemical Engineering, University of Missouri-Rolla, Rolla, MO 65409-1230. Tel.: (573) 341-6331. Fax: (573) 341-4377. E-mail: [email protected].

The development of specialized low-order controller design methods has attracted significant interest. Depending on the target audience, one can categorize these methods in two groups: methods that are process-control-oriented and focus on PI/PID tuning, and more general methods that address fixed-order controller design without any inherent order limitations. The early work of Ziegler and Nichols3 led to the development of tuning rules for P, PI, and PID controllers based on critical system parameters (gain, frequency). Since then, both industrial and academic control engineers and researchers have pursued the development of improved tuning rules. For first-order plus deadtime models, Cohen and Coon4 proposed tuning rules based on a one-quarter decay ratio requirement. Refined Ziegler-Nichols (Z-N) rules have been proposed that reduce overshoot in setpoint tracking.5 PI and PID controller tuning for processes with delays and/or integrators has been studied using frequency response methods.6,7 Alternative approaches pursue closed-loop model identification and subsequent PID tuning
using frequency-domain methods.8,9 Using the ideas of IMC analysis, tuning rules for low-order controllers have been proposed10,11 and later amended for improved load disturbance rejection.12 Low-order controllers via the IMC methodology have also been obtained as a result of prior model reduction.13 Aguirre reviewed some model approximation techniques that are also applicable to reduced-order controller design and compared their performance.14

Since the early 1980s, several computational tools have been developed that integrate process identification and PI/PID tuning. Åström and Hägglund15 proposed relay autotuning for closed-loop determination of the critical parameters and subsequently applied the Z-N rules to tune PID controllers. The autotuning methods have evolved to encompass optimal tuning based on integral performance criteria,16 multiloop systems,17,18 and integrating processes.19

Considerable research effort has been expended on the design of fixed-order controllers and on controller order reduction. Here, the objective is to obtain low-order controllers, but not necessarily of PI or PID type. Within a robust control framework, model reduction ideas20 have been adapted to the controller order reduction problem so that robust stability and/or performance is maintained.21-24 These methods can be characterized as a posteriori reduction because the low-order controller is designed once the high-order optimal controller has been identified. An iterative model reduction and controller reduction method has also been proposed within a µ-optimization framework.25

The design of low-order controllers based on optimization of a performance measure leads to nonconvex optimization problems. For SISO PI controllers, Åström and co-workers proposed solution of a nonlinear nonconvex problem that optimizes the response to load disturbances subject to frequency-domain robustness conditions.26 Bao et al.
proposed a nonconvex optimization problem formulation for the design of decentralized PID controllers based on the minimization of the H∞
10.1021/ie0006561 CCC: $20.00 © 2001 American Chemical Society Published on Web 09/25/2001
norm of the closed loop.27 For the more general problem of low-order controller design via solution of the nonconvex suboptimal H∞ problem, nonconvex LMI formulations have been proposed and solved.28,29 For an H2 performance measure, the nonlinear necessary conditions for optimality have been developed.30 Convex formulations of the optimal low-order controller design problem are possible provided that the control set is appropriately restricted.31

The present work considers the problem of fixed-order controller design, which encompasses PI and PID design. For a given process and a given controller order η, an η-order dynamic output feedback controller is sought that stabilizes the closed loop and optimally satisfies time-domain performance specifications. The problem is solved in the discrete-time framework. The design methodology is optimization-based and off-line in nature. It requires a process model, which separates it from autotuning approaches. The optimization problem that is solved performs a search over the (possibly nonconvex) set of η-order stabilizing controllers. This search is possible because of an implicit, quadratic parametrization of all η-order stabilizing controllers that is developed. The present approach thus differs from controller reduction methods because no controller reduction is required. Solution of the aforementioned optimization problem identifies performance limitations associated with η-order control and thus can help generate engineering justification for using low-order control systems. The current presentation focuses on the SISO case and expands upon previous results. Extensions to the decentralized case have been discussed in earlier work,32 and, as will also be demonstrated in subsequent sections, they are the natural evolution of the ideas presented herein.
The nonconvex nature of the fixed-order controller design problem has also been discussed by other researchers, who focused on solving the feasibility problem (finding a stabilizing low-order controller) instead of performing optimization over the nonconvex set.29

To obtain the global optimum of the considered low-order optimal controller design problem, a new optimization algorithm is proposed that is based on the branch-and-bound (BB) technique accelerated by the use of interval computations. Capitalizing on the structure of the problem at hand, the interval computations (i) achieve feasibility-based range reduction at each step of the BB and (ii) reduce the dimension of the branching subspace within BB. As a result, this hybrid BB/interval global optimization strategy results in significant acceleration of convergence over the standard BB method. This original use of interval calculations for nonlinear global optimization problems can potentially impact global optimization for process design and synthesis. Relevant details are given in section 4 of this paper.

2. Low-Order Controller Design

To accomplish the controller design objectives outlined in the previous section, a low-order design methodology is presented that consists of three steps: (1) a complete implicit parametrization of stabilizing fixed-order controllers is developed to allow for searches over an entire class of controllers (section 2.1), (2) the l1 - l∞ time-domain performance measure is introduced to construct the objective function that will quantify closed-loop performance (section 2.2), and (3) the optimization problem is formulated and solved (sections 2.3 and 2.4).
Figure 1. Standard feedback control loop.
For the sake of simplicity, the ensuing presentation focuses on the SISO case. Extension of the SISO results to the MIMO case is straightforward, and appropriate comments on this topic are provided at the pertinent places. The standard feedback interconnection of Figure 1 is considered in this work, where the map P represents process dynamics, Pd represents disturbance dynamics, and C is the feedback controller. Also, H(P, C) is the closed-loop map with the process P and the controller C, H(P, C): d → y. All symbols used in the following discussion are defined in the notation section (Appendix B).

2.1. Stabilizing Low-Order Controllers. The work in this paper considers the class of discrete, linear time-invariant (LTI) processes and the feedback interconnection of Figure 1. For this class of systems, the set of stabilizing unity feedback controllers, S(P), is given via the Youla parametrization in terms of a stable parameter Q33
S(P) = {C ∈ R: C = (Y - QÑP)^-1(X + QD̃P) = (X̃ + DPQ)(Ỹ - NPQ)^-1, Q ∈ S}   (1)
where NP, DP, ÑP, D̃P, X, X̃, Y, and Ỹ are known and correspond to a doubly coprime fractional representation of P.33 Closed-loop stability requires that Pd be stable. For stable processes P, this parametrization can be reduced to the well-known IMC control structure,10 i.e., C = Q(1 - PQ)^-1. Any fixed-order (say ηth-order) discrete LTI controller can be described via its transfer function and an appropriate factorization (NC, DC) as
C(z) = (α0 z^η + α1 z^(η-1) + ... + αη-1 z + αη)/(z^η + β1 z^(η-1) + ... + βη-1 z + βη) = NC(z)/DC(z)   (2)
where

NC(z) ≜ (α0 z^η + α1 z^(η-1) + ... + αη-1 z + αη)/z^η
DC(z) ≜ (z^η + β1 z^(η-1) + ... + βη-1 z + βη)/z^η   (3)
One can easily recast eq 2 in an equivalent form that contains classical quantities such as controller gain. For example, the PI controller transfer function is given as34
C(z) = K[1 + (A/K)/(z - 1)] = (Kz + (A - K))/(z - 1)   (4)
where K is the gain and A is the discrete equivalent of the reset rate. A comparison of eqs 2 and 4 indicates that PI controllers correspond (as expected) to η = 1 and β1 = -1, while the parameters α0 and α1 in eq 2 represent the PI controller's two degrees of freedom with

α0 = K and α1 = A - K   (5)
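The reparametrization in eqs 4 and 5 can be checked numerically: the classical pair (K, A) and the coefficient pair (α0, α1) describe the same first-order controller. The values below are illustrative only.

```python
# Sketch of the PI reparametrization in eqs 4-5: the gain/reset form
# and the coefficient form C(z) = (alpha0*z + alpha1)/(z - 1) agree.
def pi_to_coeffs(K, A):
    return K, A - K                  # eq 5: alpha0 = K, alpha1 = A - K

def C_classical(z, K, A):            # eq 4, gain/reset form
    return K * (1.0 + (A / K) / (z - 1.0))

def C_coeffs(z, a0, a1):             # eq 2 with eta = 1, beta1 = -1
    return (a0 * z + a1) / (z - 1.0)

K, A = 0.901, 0.196                  # illustrative values only
a0, a1 = pi_to_coeffs(K, A)
for z in (2.0, -1.5, 0.5 + 2.0j):    # sample points away from z = 1
    assert abs(C_classical(z, K, A) - C_coeffs(z, a0, a1)) < 1e-12
print("eq 4 and eq 2 forms agree")
```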
Similarly, a general PID controller corresponds to η = 2 with the constraint (1 + β1 + β2) = 0 that is required by the integral mode.

The controller C in eq 2 is stabilizing for the process P if and only if it can also be expressed in terms of a stable parameter Q according to eq 1. Equivalently, the controller factors NC and DC (or the coefficients αi, i = 0, ..., η, and βi, i = 1, ..., η) should satisfy

(Y - QÑP)^-1(X + QD̃P) = NCDC^-1  ⇒  YNC - QÑPNC - XDC - QD̃PDC = 0   (6)

Hence, the controller given by eq 2 is stabilizing if and only if there exists a stable parameter Q such that the maps NC, DC, and Q satisfy the quadratic equality in eq 6. The quadratic equality will ensure that the optimization search over the parameter space of eq 2 will consider all parameter combinations that yield stabilizing controllers. This is one reason for introducing an infinite-dimensional variable (Q) in a problem that is finite-dimensional (only 2η + 1 controller parameters). The closed-loop map H(P, C) is nonlinear in C

H(P, C): d → y, H(P, C) = (1 + PC)^-1 Pd   (7)
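The stability condition embedded in eq 6 can be checked numerically. For a stable plant with the simple factorization NP = ÑP = P, D̃P = 1, X = 0, Y = 1 (the one used later in eq 28), eq 6 yields Q = C(1 + PC)^-1, so C is stabilizing exactly when the poles of Q, the roots of Dc·Dp + Nc·Np with P = Np/Dp and C = Nc/Dc, lie inside the unit circle. The plant and controller below are illustrative, not the paper's example.

```python
# Hedged sketch: test whether a candidate low-order controller is
# stabilizing by locating the poles of the stable parameter
# Q = C(1 + PC)^(-1) implied by eq 6 for a stable plant.
import numpy as np

Np, Dp = [0.5], [1.0, -0.5]          # P(z) = 0.5/(z - 0.5), stable (assumed)
Nc, Dc = [0.5, -0.3], [1.0, -1.0]    # PI controller: K = 0.5, A = 0.2 (assumed)

# Poles of Q (and of the closed loop): roots of Dc*Dp + Nc*Np
char_poly = np.polyadd(np.polymul(Dc, Dp), np.polymul(Nc, Np))
poles = np.roots(char_poly)
print("stabilizing:", bool(np.all(np.abs(poles) < 1.0)))   # True here
```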
If C is expressed according to the parametrization in eq 1, then H(P, C) can be equivalently expressed as an affine function of the stable parameter Q (see Vidyasagar,33 p 110)
H(P, C): d → y, H(P, C) = (Ỹ - NPQ)D̃PPd   (8)

The transformation of the nonlinear and potentially singular form of eq 7 to the linear form of eq 8 is another benefit of using the parametrization in eq 1 and the associated stable parameter Q.

Remark 2.1. For the general multivariable case, a polynomial equality similar to eq 6 can also be developed to guarantee stability. In the general case where the DC factors for the entries of the multivariable matrix C are different from each other, the equality is of order greater than 2. On the other hand, if the scalar entries, Cij, of matrix C are such that

Cij(z) = NCij(z)/DCii(z) or Cij(z) = NCij(z)/DCjj(z)   (9)

then the resulting equality remains quadratic. In this case, Q will be a stable transfer function matrix, i.e., Q ∈ M(S).

Remark 2.2. Decentralized fixed-order controllers can be parametrized in an analogous manner

C = diag{Cii(z)} = diag{NCii(z)/DCii(z)} = diag{NCii(z)}·[diag{DCii(z)}]^-1 = NC(z)·DC^-1(z), NC, DC ∈ M(S)   (10)

Hence, a matrix equality constraint similar to that in eq 6 can be obtained.32

2.2. Dynamic Performance. The l1 - l∞ performance measure for discrete systems is employed in this work.35,36 This type of multiobjective dynamic performance quantifier takes into account time-domain closed-loop performance specifications of two types: (a) uniform bounds for the outputs for all bounded, persistent disturbances and (b) a time-varying envelope that contains the output for a fixed disturbance. The use of a multiobjective performance measure is motivated by the practical need to design control systems capable of operating under a variety of conditions. In this case, the performance measure (a) addresses robust performance over a class of allowable disturbances and (b) quantifies settling time and steady-state requirements in the presence of typical disturbances. It is worth noting that any l1 - l∞ performance measure can contain several quantifiers of the second type to also address setpoint tracking objectives (setpoint-to-error map). Consideration of only the first type of specifications leads to the well-known l1 system gain that was proposed as a performance quantifier by Vidyasagar37 and was employed in controller design by Dahleh, Pearson, and co-workers (see the text by Dahleh and Diaz-Bobillo38 for a review of key results in this area).

Within the l1 - l∞ performance framework, time-domain specifications lead to performance weights in a straightforward manner. All signals are represented as infinite sequences. Bounded disturbances are considered

d = {d(k)}k=0..∞ ∈ l∞ and ||d||∞ = sup(k≥0) |d(k)| ≤ w1 < ∞   (11)

where w1 ≥ 0 is known a priori. A practicing engineer can utilize historical process data or other process information to determine w1. It is desired to design a controller C that will maintain the output y = {y(k)}k=0..∞ within certain bounds defined by the parameter w2 (w2 > 0) for all times k and all disturbances d given by eq 11. Specifically, it is desired to satisfy

y = {y(k)}k=0..∞, -w2^-1 ≤ y(k) ≤ w2^-1, ∀k ≥ 0, ∀d ∈ {d ∈ l∞, ||d||∞ ≤ w1}   (12)

By definition, H(P, C): d → y = H(P, C)*d. Thus, eq 12 amounts to the weighted i∞ norm of H(P, C) being less than 1

{y(k) ∈ [-w2^-1, +w2^-1], ∀k ≥ 0, ∀d ∈ {d ∈ l∞, ||d||∞ ≤ w1}}  ⇔  sup(||w1^-1·d||∞ ≤ 1) ||w2·y||∞ = ||w2H(P, C)w1||i∞ ≤ 1   (13)

The weight w2 is related to the desirable performance envelope that should contain the process output. Process safety, uniform quality, and similar operating and economic constraints guide the selection of values for w2. Within the l1 - l∞ framework, the control engineer can also shape the output, ys, for any fixed disturbance ds = {ds(k)}k=0..∞ through an associated time-varying envelope defined by the sequence w3 = {w3(k)}k=0..∞, w3^-1(k) > ε > 0, ∀k ≥ 0, for some constant ε. The closed-loop output ys that is the result of ds should satisfy

ys(k) ∈ [-w3^-1(k), w3^-1(k)], ∀k ≥ 0  ⇒  ||w3·ys||∞ ≤ 1
ys = {ys(k)}k=0..∞ = H(P, C)*ds  ⇒  ||w3·[H(P, C)*ds]||∞ ≤ 1   (14)
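For scalar weights, the gain in eq 13 reduces to ||w2 H w1||i∞ = w2·(Σk |h(k)|)·w1, where h is the impulse response of H. A minimal sketch with a map whose l1 norm is known in closed form, H(z) = 0.5/(z - 0.5) with h = (0, 0.5, 0.25, ...) summing to 1 (an illustrative map, not the paper's):

```python
# Sketch: the weighted i_inf gain of eq 13 computed as the l1 norm of
# the (truncated) impulse response of a discrete transfer function.
def impulse_response(num, den, n):
    # num, den: coefficients in descending powers of z, len(den) >= len(num)
    num = [0.0] * (len(den) - len(num)) + list(num)
    h, u = [], [1.0] + [0.0] * (n - 1)          # unit impulse input
    for k in range(n):
        acc = sum(b * u[k - i] for i, b in enumerate(num) if k - i >= 0)
        acc -= sum(a * h[k - i] for i, a in enumerate(den) if i >= 1 and k - i >= 0)
        h.append(acc / den[0])
    return h

w1, w2 = 1.0, 1.0
h = impulse_response([0.5], [1.0, -0.5], 200)   # H(z) = 0.5/(z - 0.5)
gain = w2 * sum(abs(x) for x in h) * w1         # eq 13 for scalar weights
print(round(gain, 6))                           # close to 1 (geometric series)
```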
The i∞ gain in eq 13 is inherently conservative because
it only accounts for the worst-case behavior, which might be due to a rare disturbance. On the other hand, the weighted-∞ norm in eq 14 accounts for the speed of response and settling time that correspond to the fixed disturbance scenario ds. The selection of w3 is dictated by the desirable disturbance rejection characteristics (e.g., peak of the response, settling time within 5% of steady state) for a particular disturbance ds that often affects the system. The l1 - l∞ performance measure is then defined as
Φ(P, C) ≜ max{||w2H(P, C)w1||i∞, ||w3·[H(P, C)*ds]||∞}   (15)

This type of multiobjective performance quantifier permits dynamic performance tradeoff analysis during controller design. By changing the relative magnitudes of w2 and w3, one can affect the significance of each type of performance specification in quantifying closed-loop performance. For example, increasing the value of w3 "tightens" the envelope that should contain the response ys and puts more emphasis on this aspect of the closed-loop behavior. This is desired when there is a high level of confidence that ds-type disturbances will mainly affect the closed loop.

One can further expand Φ(P, C) to encompass additional performance objectives/specifications such as control effort, setpoint tracking, rejection of additional fixed disturbances, etc. For example, setpoint tracking specifications in the form of a time-varying envelope analogous to w3 can be expressed in terms of the weighted-∞ norm of the error in a manner similar to eq 14. This norm can subsequently be included in the expression that defines Φ(P, C). In a similar manner, one can address control effort concerns. This can be accomplished by incorporating into the max operator in eq 15 additional arguments that quantify the worst-case magnitude and/or the rate of change of the control variable.38 In this way, one can "penalize" the excessive control effort that could result from high-gain controllers. An alternative approach to moderating control effort is the formulation of so-called mixed optimization problems that consist of l1 - l∞ objectives similar to eq 15 and H∞ performance constraints on the control effort.38

Instead of the l1 - l∞ performance measure Φ(P, C), one can formulate performance measures by using frequency-domain performance weights. In this case, the emphasis is placed on the frequency content of the external disturbance and the disturbance rejection properties of the closed loop over low frequencies versus control effort.
In this case, however, time-domain disturbance rejection characteristics are addressed in an indirect manner. The next sections consider l1 - l∞ performance defined by time-domain weights as in eq 15.

2.3. Low-Order Controller Design as an Optimization Problem. The low-order controller design in this work aims at two objectives: (i) stability of the resulting closed loop and (ii) optimal closed-loop performance in terms of the l1 - l∞ performance measure and the associated weights (section 2.2). To achieve these goals for given η, w1, w2, and w3, a minimization problem is formulated and solved where the objective function is Φ(P, C) (see eq 15) and the domain is the set of all stabilizing ηth-order controllers C
ν = inf{Φ(P, C): C ∈ S(P), C given by eq 2}   (16)
The controller C that is a solution to eq 16 is such that the output y = H(P, C)*d satisfies

||y||∞ ≤ ν·w2^-1, ∀d ∈ l∞, ||d||∞ ≤ w1   (17)

and, given ds ∈ l∞,

|ys(k)| = |[H(P, C)*ds](k)| ≤ ν·w3^-1(k), ∀k ≥ 0   (18)
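Given a candidate controller, the objective Φ(P, C) of eq 15 can be evaluated directly from the closed-loop impulse response: the first argument is the weighted gain of eq 13 and the second the weighted peak of the response to the fixed disturbance ds of eq 14. The response h, the disturbance, and the weights below are illustrative only, chosen so both terms are known in closed form.

```python
# Sketch of evaluating the l1 - l_inf measure Phi of eq 15 from a
# closed-loop impulse response h (illustrative data, not the paper's).
import numpy as np

def phi(h, w1, w2, w3, ds):
    term1 = w2 * np.sum(np.abs(h)) * w1      # i_inf gain, eq 13
    ys = np.convolve(h, ds)[: len(h)]        # ys = h * ds (truncated)
    term2 = np.max(w3 * np.abs(ys))          # weighted peak, eq 14
    return max(term1, term2)

n = 200
h = 0.5 ** np.arange(n)                      # sum |h| = 2 (geometric series)
ds = np.ones(n)                              # unit step disturbance
w3 = np.full(n, 0.5)                         # constant envelope weight
val = phi(h, w1=1.0, w2=1.0, w3=w3, ds=ds)
print(round(val, 6))                         # 2.0: the gain term dominates
```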
If the value, ν, of the optimization problem in eq 16 is less than 1, then the performance envelopes defined in eqs 13 and 14 are attainable. If ν > 1, then the specified envelopes cannot be achieved by an η-order stabilizing controller. In this case, ν represents the minimum required magnification of both output envelopes defined via w2 and w3 so as to contain the response, y, of the closed loop. Given η, the optimization problem in eq 16 also quantifies the performance degradation associated with finite-order control. Ignoring the constraint imposed by eq 2 on eq 16 (i.e., that C should be given by eq 2), the best achievable performance by any linear controller can be computed. A comparison with ν (the η-order achievable performance) can provide guidance to the control engineer to either increase η, redesign the performance envelopes, or accept the low-order controller as satisfactory.

The solution of eq 16 presents two challenges. First, one should establish that a complete search of the feasible domain is possible. This issue is addressed by using the parametrization of stabilizing controllers in eq 1. Then, an optimal search algorithm should be developed. In this case, the algorithm will perform a search over the stable parameters Q that satisfy the equality constraint in eq 6. Based on eqs 1, 6, and 8, the optimization problem in eq 16 is equivalent to
ν = inf(Q stable; NC, DC) max{||w2[(Ỹ - NPQ)D̃PPd]w1||i∞, ||w3·{[(Ỹ - NPQ)D̃PPd]*ds}||∞}   (19)

subject to

YNC - QÑPNC - XDC - QD̃PDC = 0

The optimization problem in eq 19 is infinite-dimensional, as there are no a priori bounds on the order of Q. The solution of this problem follows the solution of similar quadratically constrained problems that have been considered in the context of decentralized39 and simultaneous control.36 For this reason, only the main elements of the solution procedure are outlined here. Mathematical proofs can be found in the aforementioned papers.

The solution procedure for eq 19 consists of first expressing the optimization problem in terms of the corresponding impulse response sequences. Because only stable maps are considered in eq 19, the corresponding sequences are absolutely summable (belong to l1). Also, by eq 3, NC and DC have finite impulse responses, i.e., nC, dC ∈ φ0,η, nC ≜ {nC(k)}k=0..∞, and dC ≜ {dC(k)}k=0..∞ such that
nC(k) = {αk, 0 ≤ k ≤ η; 0, k > η},  dC(k) = {1, k = 0; βk, 1 ≤ k ≤ η; 0, k > η}   (20)

The optimization problem in eq 19 is thus transformed into an equivalent form where the optimization variables are the infinite sequence q and the FIRs nC and dC, and the quadratic operator constraint is replaced by an infinite sequence of real-scalar equality constraints, each corresponding to an element of the impulse response of the stable operator (transfer function) (YNC - QÑPNC - XDC - QD̃PDC)

ν = inf(q ∈ l1; nC, dC ∈ φ0,η) max{||w2[(ỹ - nP*q)*d̃P*pd]w1||1, ||w3·{[(ỹ - nP*q)*d̃P*pd]*ds}||∞}   (21)

subject to

[y*nC - q*ñP*nC - x*dC - q*d̃P*dC](k) = 0, k = 0, ..., ∞

This infinite-dimensional problem is solved via an asymptotic approximation technique.

Remark 2.3. In the above formulation, integral action can be enforced by adding the following finite-dimensional real-scalar constraint

DC(z=1) = 0  ⇔  Σ(k=1..η) dC(k) = -1   (22)

In the context of numerical optimization, this constraint will often be employed to eliminate a degree of freedom. To simplify the exposition, eq 22 will not be explicitly displayed in the remaining portions of the paper.

2.4. Solution of Equation 21 via Asymptotic Approximation. A lower bound, νM, to ν can be obtained if one considers eq 21 with only its first M + 1 constraints (i.e., constraint index k = 0, ..., M). If M > η, the solution of the resulting optimization problem will specify all parameters of a finite-order controller CM. Given the controller, one can then calculate the objective Φ(P, CM) of eq 16. If CM is stabilizing, then Φ(P, CM) will be finite and will be a global upper bound to ν

νM ≤ ν ≤ Φ(P, CM)   (23)

As a result, computation of a valid lower bound, νM, leads to a valid upper bound, Φ(P, CM). For M2 > M1, it follows that νM1 ≤ νM2 ≤ ν, and under mild conditions,36 one can show that lim(M→∞) νM = ν. This limit can be computed within a prespecified accuracy ε through computation of νM for increasing values of M until [Φ(P, CM) - νM] ≤ ε.

The computation of νM requires the introduction of a finite-dimensional version of the optimization problem in eq 21. In this new optimization problem, N represents the order of q, and K the horizon over which the objective is calculated

νN,K^M = inf(q ∈ φ0,N; nC, dC ∈ φ0,η) max{w2 Σ(k=0..K) |h(k)| w1, sup(0≤k≤K) |w3(k)·Σ(i=0..k) h(k-i)ds(i)|}   (24)

subject to

[y*nC - q*ñP*nC - x*dC - q*d̃P*dC](k) = 0, k = 0, ..., M

where h denotes the impulse response of the map (ỹ - nP*q)*d̃P*pd in eq 21. For any combination of values for K, N, and M, eq 24 is finite-dimensional and can be transformed into an NLP. Furthermore, under a boundedness condition, utilizing denseness in l1 arguments, it can be established that39

νM = lim(N→∞) lim(K→∞) νN,K^M   (25)

Hence, the computation of νM proceeds through iterative solution of eq 24 for increasing values of N and K in the order that the iterated limit in eq 25 dictates. If the coprime factors of P have been chosen as FIRs, then there exists an a priori known value K = K*(N) such that νN,K*(N)^M = lim(K→∞) νN,K^M, which further simplifies eq 25. To obtain the global optimum of the nonconvex optimization problem in eq 16, eq 24 should always be solved to global optimality. This part of the problem is the subject of section 4.

3. Example

Consider a feedback loop such as the one in Figure 1 where the process model is
P = Pd = (-0.2z + 0.3)/(z² - 1.257z + 0.3951)   (26)
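The model in eq 26 can be simulated directly to confirm the inverse response claimed below: the first nonzero impulse-response sample is negative, while the steady-state step-response gain P(1) = 0.1/0.1381 is positive.

```python
# Simulate the impulse response of P in eq 26 by its difference equation.
def impulse_response(num, den, n):
    num = [0.0] * (len(den) - len(num)) + list(num)
    h, u = [], [1.0] + [0.0] * (n - 1)
    for k in range(n):
        acc = sum(b * u[k - i] for i, b in enumerate(num) if k - i >= 0)
        acc -= sum(a * h[k - i] for i, a in enumerate(den) if i >= 1 and k - i >= 0)
        h.append(acc / den[0])
    return h

h = impulse_response([-0.2, 0.3], [1.0, -1.257, 0.3951], 300)
print(h[1])                      # -0.2: initial move is downward
print(round(sum(h), 3))          # 0.724: positive steady-state gain
```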
This model exhibits inverse response and is representative of typical chemical engineering systems such as chemical reactors. Closed-loop performance is quantified in terms of an l1 - l∞ measure (section 2.2) according to the following specifications, which lead to the definition of the performance weights w1, w2, and w3: (i) Disturbances d = {d(k)}k=0..∞ are bounded in the interval [-1, +1] for all times, i.e., ||d||∞ ≤ 1. Thus, according to eq 11, w1 = 1. (ii) For any disturbance d, ||d||∞ ≤ 1, it is desired that the closed-loop output, y = {y(k)}k=0..∞, remains in the uniform envelope [-1, +1] [i.e., y(k) ∈ [-1, 1] ∀k ≥ 0 or ||y||∞ ≤ 1]. Hence, from eq 12, it follows that w2 = 1. (iii) For a unit step disturbance (ds = {ds(k)}k=0..∞, ds(k) = +1, ∀k ≥ 0), it is desired that the output remains within the following funnel-like envelope
ys(k) ∈ {
[-0.8, 0.8], 0 ≤ k ≤ 3
[0.25k - 1.55, -0.25k + 1.55], 3 < k ≤ 5
[0.02k - 0.4, -0.02k + 0.4], 5 < k ≤ 15
[0.009k - 0.235, -0.009k + 0.235], 15 < k ≤ 25
[-0.01, 0.01], k > 25
}   (27)
Table 1. Convergence Results for PI Controller^a

M    α0      α1      νM      upper bound
5    1.258   -1.249  2.751   36.364
10   1.197   -1.099  3.086   19.138
15   0.942   -0.788  3.269   9.673
20   0.948   -0.756  3.357   4.797
25   0.901   -0.705  3.408   3.408
35   0.901   -0.705  3.408   3.408

^a All values are truncated to three decimal digits.
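The bound behavior reported in Table 1 matches eq 23: the lower bounds νM are nondecreasing in M, the upper bounds Φ(P, CM) are nonincreasing here, and the two meet at M = 25. The values below are copied from the table.

```python
# Sanity check of the converging bound sequences from Table 1.
lower = [2.751, 3.086, 3.269, 3.357, 3.408, 3.408]    # nu_M
upper = [36.364, 19.138, 9.673, 4.797, 3.408, 3.408]  # Phi(P, C_M)

assert all(a <= b for a, b in zip(lower, lower[1:]))  # nu_M1 <= nu_M2
assert all(a >= b for a, b in zip(upper, upper[1:]))  # upper bounds shrink
assert all(l <= u for l, u in zip(lower, upper))      # nu_M <= Phi(P, C_M)
print("gap at M = 25:", upper[4] - lower[4])          # 0.0: converged
```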
Then, according to eq 14, the corresponding weight, w3 = {w3(k)}k=0..∞, is given by

w3(k) = {
1.25, 0 ≤ k ≤ 3
(-0.25k + 1.55)^-1, 3 < k ≤ 5
(-0.02k + 0.4)^-1, 5 < k ≤ 15
(-0.009k + 0.235)^-1, 15 < k ≤ 25
100, k > 25
}
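The weight sequence w3 is simply the reciprocal of the envelope half-width in eq 27 at each sample, and can be tabulated directly:

```python
# Build w3(k) from the funnel-like envelope of eq 27.
def w3(k):
    if k <= 3:
        return 1.0 / 0.8                    # half-width 0.8
    if k <= 5:
        return 1.0 / (-0.25 * k + 1.55)
    if k <= 15:
        return 1.0 / (-0.02 * k + 0.4)
    if k <= 25:
        return 1.0 / (-0.009 * k + 0.235)
    return 1.0 / 0.01                       # steady-state band +/- 0.01

print(w3(0))    # 1.25
print(w3(30))   # 100.0
```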
The controller design problem with the above performance weights but no constraint on the order of the controller has a value equal to 1.870. Because 1.870 > 1, it is concluded that, for this optimal controller, the output of the closed loop will be contained in envelopes that are 1.870 times the size of those defined by w2 and w3. Controller optimality in this setting depends on the weight selection.

Two instances of the low-order controller design problem are solved. First, the feedback controller is restricted to be of PI type, and in the second case, a general second-order controller with integral action is considered. In both instances, the same performance weights, w1, w2, and w3, are used, as well as the same fractional representation of P, namely
ÑP = NP = P, X = 0, Ỹ = D̃P = 1   (28)
MINOS 5.4 (ref 40) was used to solve all finite-dimensional optimization problems.

3.1. Proportional-Integral Controller Design. Table 1 summarizes the results of the iteration of section 2.4 for the PI controller. In particular, the value of νM that corresponds to the identified ε-approximate limit in eq 25 with ε = 10^-5 is given. The value νM is a lower bound to the objective value ν. The corresponding controller parameters, α0 and α1 (see eqs 4 and 5), are also reported for each value of M. Using these parameters, the upper bound, Φ(P, CM), is calculated and reported. Based on the difference [Φ(P, CM) - νM], it is declared that the algorithm converges for M = 25, and the identified optimal PI controller is
C = (0.901z - 0.705)/(z - 1)   (29)
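The closed-loop step-disturbance response with the PI controller of eq 29 can be reproduced by simulating y = [Pd/(1 + PC)]*ds; here Pd/(1 + PC) = Np·Dc/(Dp·Dc + Np·Nc), and the integral action (Dc(1) = 0) drives the step response to zero offset. A minimal simulation sketch:

```python
# Simulate the step-disturbance response of the loop in Figure 1 with
# the plant of eq 26 and the optimal PI controller of eq 29.
import numpy as np

Np, Dp = [-0.2, 0.3], [1.0, -1.257, 0.3951]   # plant, eq 26
Nc, Dc = [0.901, -0.705], [1.0, -1.0]         # optimal PI, eq 29

num = np.polymul(Np, Dc)                       # numerator of Pd/(1 + PC)
den = np.polyadd(np.polymul(Dp, Dc), np.polymul(Np, Nc))

n = 300
num_p = np.concatenate([np.zeros(len(den) - len(num)), num])
y, d = [], np.ones(n)                          # unit step disturbance
for k in range(n):
    acc = sum(b * d[k - i] for i, b in enumerate(num_p) if k - i >= 0)
    acc -= sum(a * y[k - i] for i, a in enumerate(den) if i >= 1 and k - i >= 0)
    y.append(acc / den[0])

print(round(max(abs(v) for v in y), 3))        # stays inside the magnified envelope
print(abs(y[-1]) < 1e-3)                       # offset-free: True
```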
Because the objective value is ν = 3.408 > 1, it is concluded that the performance specifications imposed above cannot be satisfied by a PI controller. This result indicates that the specified envelopes contain the response of the system provided that they are magnified by a factor equal to ν = 3.408. Hence, the closed-loop output is guaranteed to remain in the range [-3.408, 3.408] for all times and for all bounded persistent disturbances, d (d ∈ l∞, ||d||∞ ≤ 1). Similarly, the unit step response ys = {ys(k)}k=0..∞ is contained in the envelope

ys(k) ∈ {
[-2.726, 2.726], 0 ≤ k ≤ 3
[0.852k - 5.282, -0.852k + 5.282], 3 < k ≤ 5
[0.068k - 1.363, -0.068k + 1.363], 5 < k ≤ 15
[0.031k - 0.801, -0.031k + 0.801], 15 < k ≤ 25
[-0.034, 0.034], k > 25
}
which is 3.408 times the specification envelope of eq 27. These results are verified through simulations shown in Figures 2 and 3. First, the step response of the closed loop is plotted in Figure 2, which also depicts the enclosing envelope. Figure 2 indicates that the performance bottleneck for the step response is primarily due to the steady-state offset bound for sample numbers greater than 25 and, to a lesser extent, to the offset bound for sample numbers between 5 and 15.

The worst-case response to a persistent bounded disturbance is shown next in Figure 3. The outer uniform envelope represents the guaranteed range of output values for all times and for all considered disturbances that was predicted solely on the basis of the value of the optimization problem. However, the maximum peak of the output is computed to be 1.873 and corresponds to the value of the closed-loop i∞ norm defined in eq 13. This result indicates that the actual envelope that contains the output for all times and for all disturbances is [-1.873, +1.873] (the narrow uniform envelope in Figure 3), which is 45% smaller than the one anticipated from the value of the optimization problem. This mismatch is an indication that the uniform envelope specifications are "loose" relative to the steady-state specifications, which represent the major performance bottleneck for the PI controller. The uniform envelope can also be employed as a simple process monitoring tool: if the output y is measured to exceed the envelope bounds, then this is an indication that either the model description or the disturbance description has changed. Appropriate model identification can thus be initiated.

Judging from the optimization problem values, the PI performance represents 82% degradation relative to the best achievable by any linear feedback controller (3.408 vs 1.870). However, closer examination of the results indicates that negligible real penalty is realized in terms of robust performance over all disturbances in a class (1.873 vs 1.870), while significant performance degradation is realized when the step response is considered (3.408 vs 1.870). This suggests that an envelope redesign or an increase in controller order might be appropriate.

3.2. Second-Order Controller Design. The results of the design procedure for this case are summarized in Table 2. The columns for M, νM, and upper bound have the same meaning as in Table 1. The remaining columns give the values of the four controller parameters (three independent because of integral action). From the lower bound, νM, and the upper bound, the value of the controller design problem is found to be equal to ν = 1.936. This corresponds to 3.5% performance degradation relative to the best achievable by any linear controller (1.936 vs 1.870). The identified optimal controller is
C = \frac{7.339 z^2 - 9.911 z + 3.357}{z^2 + 1.252 z - 2.252} \quad (30)
Figure 2. Unit step disturbance response using the optimal PI controller.
Figure 3. Worst-case disturbance and response using the optimal PI controller.

Table 2. Convergence Results for Second-Order Controller

M    α0     α1       α2     β1     β2      νM     upper bound
5    7.237  -9.861   3.275  1.266  -2.266  1.880  5.398
10   7.447  -10.107  3.344  1.352  -2.352  1.889  5.020
15   7.384  -10.001  3.332  1.303  -2.303  1.901  3.898
20   7.352  -9.936   3.344  1.267  -2.267  1.921  2.602
25   7.342  -9.914   3.357  1.252  -2.252  1.934  1.937
30   7.340  -9.912   3.357  1.252  -2.252  1.935  1.936
35   7.339  -9.911   3.357  1.252  -2.252  1.936  1.936
40   7.339  -9.911   3.357  1.252  -2.252  1.936  1.936
55   7.339  -9.911   3.357  1.252  -2.252  1.936  1.936
60   7.339  -9.911   3.357  1.252  -2.252  1.936  1.936
Because the optimization value, ν, is greater than 1, the specifications cannot be satisfied in this case either. Hence, the specification envelopes should be magnified by a factor equal to ν = 1.936 to enclose the system's response. Simulation results using the optimal 2nd-order controller in eq 30 for both the unit step disturbance and the worst-case disturbance are shown in Figures 4 and 5, respectively. In these figures, the actual performance
Figure 4. Unit step disturbance response using the optimal 2nd-order controller.
Figure 5. Worst-case disturbance and response using the optimal 2nd-order controller.
envelopes are also shown, namely, the time-varying envelope that encloses the step response and is 1.936 times the specification envelope in eq 27 (Figure 4) and the uniform envelope, [-1.936, +1.936], that contains the output for all times and for all disturbances (Figure 5). Compared to the PI case, the step response with the controller in eq 30 exhibits less overshoot and better damping characteristics, which eventually result in a tighter enclosing envelope. To achieve this improved step response, a small penalty (a 3.4% increase) is paid in terms of the maximum peak (the l∞ closed-loop norm). Overall, this second-order controller, which can be implemented as an appropriate PID with a filter, is a good compromise between simplicity and performance.

4. Global Optimization

In this section, the global solution of eq 24 is discussed. Starting with the structure of eq 24, a new implementation of branch-and-bound (BB) is presented for the solution of this problem. An overview of the BB global optimization approach that is the foundation of the present approach is given in section 4.1. A discussion of the main issues associated with the implementation of BB is given in section 4.2. Then, the use of interval calculations is discussed as a means of improving the rate of convergence of BB, modifications to the BB method are introduced, and computational results are given (section 4.3). It should be noted that the ideas in section 4.3 can be employed in the context of alternative BB-based global optimization methodologies that are different from the algorithm discussed in section 4.1.

4.1. Branch-and-Bound Global Optimization. The optimization problem in eq 24 is nonconvex because of the bilinear terms in its M + 1 quadratic equality constraints. Upon expanding the convolution sums, these constraints become

\sum_{i=0}^{k} \Bigl[ y(i)\, n_C(k-i) - x(i)\, d_C(k-i) - q(k-i) \sum_{j=0}^{i} \bigl( \tilde{n}_P(j)\, n_C(i-j) - \tilde{d}_P(j)\, d_C(i-j) \bigr) \Bigr] = 0 \quad (31)
for k = 0, ..., M. The nonlinearities are due to the products of the variables q and nC and the variables q and dC. In this work, global solution of eq 24 is pursued according to the BB optimization method for general rational nonlinear optimization problems used by Manousiouthakis and Sourlas.41 This method consists of a problem transformation to an equivalent optimization problem and the subsequent solution of this problem using BB.

Problem Transformation. The nonlinear optimization problem is algebraically transformed into an equivalent optimization problem where all constraints are convex except for a few that are quadratic, reverse convex (RC), and separable. Each constraint in eq 31 is transformed into a difference of convex functions by rewriting each bilinear term as a difference of squares.

Illustration. Let x represent a generic optimization variable vector, with x1 and x2 two of the involved variables. Also, let g(x) = 0 be a generic equality constraint with bilinear nonlinearities of the form x1·x2. Then

x_1 x_2 = \frac{1}{4}\bigl[(x_1 + x_2)^2 - (x_1 - x_2)^2\bigr]

\Leftrightarrow \quad g(x) = g^a(x) - g^b(x) = 0 \quad \Leftrightarrow \quad
\begin{cases} g^a(x) - w \le 0 \\ g^b(x) - w \le 0 \\ -g^a(x) - g^b(x) + 2w \le 0 \end{cases}

where w is real and g^a and g^b are convex. The last constraint (-g^a(x) - g^b(x) + 2w ≤ 0) is RC, quadratic, and separable, as the nonlinear terms are of the form -λ_i x_i^2 with λ_i > 0. When the above transformation is applied to eq 24, the number of the resulting RC constraints is, in general, M + 1. The variables that appear in the RC terms of those constraints are the following 2η + M + 2 variables:

\{q(k)\}_{k=0}^{M}, \quad \{n_C(k)\}_{k=0}^{\eta}, \quad \{d_C(k)\}_{k=1}^{\eta} \quad (\text{since } d_C(0) = 1)
These variables, and the corresponding RC terms, capture the nonconvex aspects of the optimization problem and will be called nonconvex variables or NCVs. One can also combine the M + 1 RC constraints into a single RC constraint with the same total number of NCVs. This does not affect the applicability of the solution procedure that is outlined next.41

Solution of the Transformed Problem. The solution is pursued according to the BB algorithm proposed by Soland.42 First, convex underestimation of each of the RC terms in the RC constraints of the transformed optimization problem is performed: given initial ranges for all NCVs, the quadratic RC terms are underestimated using linear underestimators. These ranges represent a "rectangular domain" (hyper-rectangle) in the space of NCVs. The resulting minimization problem is convex and is a global underestimator of its nonconvex precursor over the specified hyper-rectangle of the NCVs. The gap (relaxation) between these two problems reduces as the range of the NCVs becomes smaller. Each iteration of the BB algorithm consists of two steps: (i) identification of a lower bound to the global optimum of the original problem (bounding) and (ii) for the hyper-rectangle that corresponds to the lower bound from the first step, the range of one of the NCVs is split into two, thus generating two smaller hyper-rectangles (branching) over which the convex underestimating problem is solved. Over a compact domain, this algorithm converges to an ε-optimal solution in a finite number of iterations. In the present case, convergence is declared if, for the box selected in the first step of an iteration, the violation of the RC constraints does not exceed a prespecified tolerance ε. Other numerical global optimization methods can also be employed.
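The two building blocks above can be made concrete. A minimal sketch (illustrative only, not the authors' implementation): the difference-of-squares split of a bilinear term, and the secant line that underestimates a reverse convex term -λx² over a given interval (because -λx² is concave for λ > 0, its chord lies below it between the interval endpoints).

```python
# Illustrative sketch: difference-of-squares split of a bilinear term and
# the linear (secant) underestimator of the reverse convex term -lam*x^2
# over [lo, hi]. Values below are arbitrary test points, not from the paper.

def bilinear_split(x1, x2):
    """x1*x2 written as the difference of two convex (squared) terms."""
    ga = 0.25 * (x1 + x2) ** 2   # convex part g^a
    gb = 0.25 * (x1 - x2) ** 2   # convex part g^b
    return ga - gb               # equals x1*x2 exactly

def secant_underestimator(lam, lo, hi):
    """Slope and intercept of the line matching -lam*x^2 at lo and hi.
    Since -lam*x^2 (lam > 0) is concave, this line underestimates it on [lo, hi]."""
    slope = -lam * (lo + hi)
    intercept = lam * lo * hi
    return slope, intercept

# The split reproduces the product exactly:
assert abs(bilinear_split(3.0, -2.0) - (3.0 * -2.0)) < 1e-12

# The secant stays below -lam*x^2 throughout the box:
lam, lo, hi = 2.0, -1.0, 3.0
m, b = secant_underestimator(lam, lo, hi)
for i in range(101):
    x = lo + (hi - lo) * i / 100
    assert m * x + b <= -lam * x * x + 1e-9
```

As the interval [lo, hi] shrinks during branching, the gap between the secant and -λx² shrinks as well, which is exactly why the relaxation tightens as the hyper-rectangles become smaller.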
Manousiouthakis and Sourlas41 discuss the application of generalized Benders decomposition for the same family of transformed problems, and Floudas and Visweswaran43 present a primal-dual decomposition-based method applicable to the original problem. Alternative BB methods that employ underestimating/bounding approaches other than the one considered in this paper are presented in Ryoo and Sahinidis,44 Androulakis et al.,45 and Zamora and Grossmann.46 For a more complete overview of work in the area of global optimization, the reader is advised to consult the above-cited journal articles. The emphasis of the ensuing discussion is on the implementation of the BB method (section 4.2) and on a systematic, computationally efficient methodology that speeds convergence by discarding infeasible regions of the optimization problem domain (section 4.3.2). Because all of the aforementioned BB global optimization methods employ a domain refinement step, they can potentially benefit from the developments described in this work.

4.2. Implementation of the BB Algorithm. There are two issues associated with the practical implementation of BB algorithms: the identification of the initial hyper-rectangle for the NCVs (the variables involved in underestimation) and the computational complexity that is inherent to all global optimization algorithms.

Identification of the Initial Hyper-rectangle. The identification of an initial range for all of the NCVs is very important in the context of the BB algorithm because it defines the closed subset of the space of NCVs that the BB algorithm will search and determines the initial relaxation that the underestimation induces. Hence, the
initial intervals for the NCVs should include the global optimum, but at the same time, they should not be unnecessarily large so as to adversely affect the quality of the underestimation. For the optimization problem in eq 24, given an upper bound to the global optimum (e.g., the value of the local optimum), it is possible to compute bounds on the candidate global solution for the M + 1 q variables through linear programming. This has been demonstrated in similar problems.36,39,47 To obtain bounds on the remaining 2η + 1 NCVs (i.e., the nC and dC variables), necessary closed-loop stability conditions can be employed (e.g., the Routh-Hurwitz criterion; see Åström and Wittenmark34), combined with interval calculations that use the quadratic equalities of eq 31.

Computational Complexity and Range Reduction. Global optimization algorithms suffer from the "curse of dimensionality," as the cost of the required computations increases exponentially with the size of the problem48 and, in particular, the number of NCVs. In the case of the standard BB algorithm, this corresponds to the dimensionality of the branching subspace, which, in the case of eq 24, is (M + 2η + 2). The dimensionality of this subspace does not depend on the parameters N and K of eq 24, but it increases linearly with M. According to Table 1, to identify the global solution to the PI controller design problem considered in section 3, one would need to perform BB over a 28-dimensional subspace (because of integral action, the single dC variable is eliminated). For highly coupled problems such as those considered here, the dimension of the branching subspace is considered large. One way to accelerate BB methods is to incorporate some form of range reduction. These methods are designed to eliminate regions of the branching subspace that are not promising for further investigation (i.e., that do not contain the global optimum). Range reduction is implemented within the branching stage of the BB.
When the interval of one of the NCVs is split into two, range reduction takes this into account and, if appropriate, modifies ("shrinks") the intervals of all remaining NCVs. Such branch-and-contract methods, based on local optimum information and duality (Lagrange multipliers), have been discussed by Ryoo and Sahinidis44 and Zamora and Grossmann.46 Feasibility-based range reduction utilizes the (possibly nonconvex) equality constraints of the original problem: as the range of one of the NCVs changes, one can solve the equalities and obtain the corresponding feasible ranges for all remaining NCVs. This method has been mentioned in the continuous optimization literature44,46 in an ad hoc manner. Sourlas and Manousiouthakis39 utilized linear equalities and implemented feasibility-based range reduction, with simultaneous reduction of the effective dimensionality of the branching space, for a controller design problem. In the present case, interval analysis is used to systematically implement feasibility-based range reduction when nonlinear equalities are present. This turns out to be considerably more involved than our previous work.39 The feasibility-based range reduction methods discussed here can be integrated with duality-based range reduction to further accelerate BB convergence.

4.3. Feasibility-Based Range Reduction. 4.3.1. Overview. According to the standard BB algorithm presented in section 4.1, for any value of M, N, and K, M ≥ η, the optimization problem in eq 24 has M + 2η + 2 NCVs and an equal number of branching directions. Because the solution of the low-order problem requires M → ∞, the dimensionality of the branching space continuously increases in the process of implementation of the asymptotic solution method presented in section 2.4. On the other hand, the number of degrees of freedom depends only on the order, η, of the controller, which is typically small: only 2η + 1 degrees of freedom (2η in the case of integral action) exist for any combination of M, N, and K values in eq 24. If integral action is required, the number of NCVs and the number of degrees of freedom both reduce by 1 because eq 22 is used to eliminate one variable from the outset. In the context of BB, once intervals for the 2η + 1 degrees of freedom have been specified, one should be able to solve eq 31 and compute intervals for all remaining variables so that the equality constraints of eq 24 (expanded in eq 31) are satisfied for some combination of variable values within these intervals. These interval calculations are detailed in section 4.3.2. Hence, the branching step of the BB can, in principle, be limited to a subspace that only contains a number of variables equal to the (typically few) degrees of freedom instead of one that considers all NCVs. Because BB complexity seems to depend on the dimension of the branching subspace, this apparent ability to solve problems with increasing numbers of NCVs over an effective branching subspace that is dimensionally fixed provides an intriguing opportunity for BB acceleration.

Compared to the BB algorithm described in section 4.1, the modified BB algorithm differs only in the branching step. Upon initialization, the 2η + 1 degree-of-freedom variables (DFVs) are selected. Consider iteration k. Once a new lower bound and the corresponding (M + 2η + 2)-dimensional hyper-rectangle, B_k, of the NCVs have been selected in the bounding step, the interval of one of the DFVs (say x*) is split into two. This partitions B_k into two smaller rectangles B^a_{k,1} and B^a_{k,2} such that B_k = B^a_{k,1} ∪ B^a_{k,2}. For B^a_{k,1}, interval analysis is used to compute feasible ranges for the M + 1 NCVs that are not among the DFVs so that the equalities of eq 31 are satisfied. This generates a new box B^c_{k,1}. Then, the box B_{k,1} is defined as B^a_{k,1} ∩ B^c_{k,1}, and the convex underestimating problem is formulated and solved over B_{k,1}. This process is repeated for B^a_{k,2}, and the underestimating problem is solved over the new hyper-rectangle B_{k,2}.

4.3.2. Range Reduction via Interval Analysis. Interval calculations were introduced to overcome a computer's inability to exactly represent irrational numbers and the associated truncation errors that accumulate after a long sequence of calculations. Since interval arithmetic was first introduced by Moore,49 it has evolved into a complete algebraic framework in which operations are defined in terms of intervals instead of numbers. For a more detailed discussion of the foundations and applications of interval analysis, the interested reader can consult the texts by Moore,49 Alefeld and Herzberger,50 Neumaier,51 Ratschek and Rokne,52 and Hansen.53 Recently, interval analysis has found many engineering applications in the solution of nonlinear problems, including process simulation54 and analysis of thermodynamic equilibria.55 A brief overview of interval analysis is given in Appendix A. This includes definitions of the basic arithmetic operations and a discussion of the estimation of the range of values of a
function. Readers unfamiliar with interval analysis should review Appendix A before proceeding.

In the context of BB global optimization, interval calculations are used to achieve feasibility-based range reduction within the branching step of BB. All computations associated with the solution of convex optimization problems within the BB are performed using standard computer arithmetic. The computational details of the feasibility-based range reduction are discussed next, and two implementations of interval computations are presented. The independent variables (DFVs) that make up the effective branching subspace of the modified BB (section 4.3.1) for M ≥ η are selected as {q(k)}_{k=0}^{η} and {d_C(k)}_{k=1}^{η} (d_C(0) = 1). The computational task within the branching step then becomes: given intervals {I_{qk}}_{k=0}^{η} and {I_{dk}}_{k=1}^{η} for {q(k)}_{k=0}^{η} and {d_C(k)}_{k=1}^{η}, respectively, use interval analysis tools to estimate intervals for {q(k)}_{k=η+1}^{M} and {n_C(k)}_{k=0}^{η} so that the resulting inclusion of the range of the left-hand side of eq 31 contains zero. To demonstrate the use of interval analysis, the constraints of eq 31 are first rewritten for the coprime factorization of eq 28 and for the case of strictly proper process models [i.e., p(0) = 0]:

n_C(k) = q(k) + \sum_{i=0}^{k-1} p(k-i) \sum_{j=0}^{i} n_C(j)\, q(i-j) + \sum_{i=1}^{k} d_C(i)\, q(k-i), \quad k \le \eta \quad (32)

and

q(k) = -\sum_{i=0}^{k-1} p(k-i) \sum_{j=0}^{\min(i,\eta)} n_C(j)\, q(i-j) - \sum_{i=1}^{\eta} d_C(i)\, q(k-i), \quad \eta + 1 \le k \le M \quad (33)
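These two recursions can be evaluated sequentially in interval arithmetic, replacing each variable by its interval. The following is a minimal sketch only: the Interval class, the impulse-response values p(k), and the DFV intervals below are hypothetical illustrations, not taken from the paper's example.

```python
# Sketch of the sequential natural-interval-extension pass over eqs 32-33
# (the starting point of method 1, before any redundancy reduction).
# All numeric data below are hypothetical.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)
    def __add__(self, o):  return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):  return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = (self.lo*o.lo, self.lo*o.hi, self.hi*o.lo, self.hi*o.hi)
        return Interval(min(p), max(p))
    def __repr__(self): return f"[{self.lo:.4g}, {self.hi:.4g}]"

def scale(c, I):  # scalar multiplication c*I
    return Interval(c*I.lo, c*I.hi) if c >= 0 else Interval(c*I.hi, c*I.lo)

def propagate(p, Iq_dfv, Id, eta, M):
    """Given intervals for q(0..eta) and dC(1..eta) (dC(0)=1), compute
    inclusions for nC(0..eta) via eq 32 and for q(eta+1..M) via eq 33."""
    Iq = dict(enumerate(Iq_dfv))            # q(0..eta)
    In = {}
    for k in range(eta + 1):                # eq 32: k <= eta
        acc = Iq[k]
        for i in range(k):
            inner = Interval(0.0, 0.0)
            for j in range(i + 1):
                inner = inner + In[j] * Iq[i - j]
            acc = acc + scale(p[k - i], inner)
        for i in range(1, k + 1):
            acc = acc + Id[i] * Iq[k - i]
        In[k] = acc
    for k in range(eta + 1, M + 1):         # eq 33: eta+1 <= k <= M
        acc = Interval(0.0, 0.0)
        for i in range(k):
            inner = Interval(0.0, 0.0)
            for j in range(min(i, eta) + 1):
                inner = inner + In[j] * Iq[i - j]
            acc = acc - scale(p[k - i], inner)
        for i in range(1, eta + 1):
            acc = acc - Id[i] * Iq[k - i]
        Iq[k] = acc
    return In, Iq

# Hypothetical data: eta = 1 (PI), strictly proper p (p(0) = 0), dC(1) = -1.
p = [0.0, 0.5, 0.25, 0.125, 0.0625]
In, Iq = propagate(p, [Interval(1.0, 1.2), Interval(-0.4, -0.2)],
                   {1: Interval(-1.0, -1.0)}, eta=1, M=4)
```

Because intervals computed for small k reappear on the right-hand sides for larger k, each pass compounds the redundancy-related overestimation, which is precisely the difficulty discussed next.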
Neither of the two conditions that led to eqs 32 and 33 is restrictive. However, as a result of those conditions, the exposition that follows is greatly simplified.

Notation. Given a variable x, its interval will be denoted by I_x, the corresponding reference point (typically the midpoint of I_x) will be denoted by x̄, and the interval (I_x - x̄) will be denoted by Ī_x. Finally, if f(x) is a real-valued function of x and x ∈ I_x, then f(I_x) represents the natural interval extension of f(x) that is computed by interval arithmetic when x is replaced by I_x in the formula of f(x) (section A.1).

Method 1: Natural Interval Extensions and Algebraic Manipulations. The first method consists of computing the natural interval extensions of eqs 32 and 33 in a sequential manner for increasing k. The interval values for the DFVs are used in the right-hand sides of eqs 32 and 33 to compute ranges for the other variables one at a time. A quick inspection of eqs 32 and 33 reveals that variable redundancy (see section A.1) will be an important factor. For each value of k, the corresponding equation from eqs 32 and 33 contains, on its right-hand side, multiple occurrences of independent variable intervals and all dependent variable intervals already computed for smaller values of k. Hence, the computed ranges will overestimate the actual ranges of the variables. This will result in poor underestimation. To
overcome these difficulties, factorization and other algebraic manipulations were employed in an attempt to reduce variable redundancy in eqs 32 and 33. For example, in the PI case (η = 1) and for k = 0, eq 32 gives

n_C(0) = q(0) \quad (34)

Similarly, for k = 1,

n_C(1) = p(1)\, n_C(0)\, q(0) + q(1) + d_C(1)\, q(0) \quad (35)

In all subsequent steps, n_C(0) can be eliminated from eqs 32 and 33 and replaced by q(0). For the PI controller structure, d_C(1) = -1. In eq 35, multiple occurrences of q(0) will create redundancy-related overestimation. By substituting eq 34 into eq 35 and completing the square, all redundancy of q(0) can be eliminated:

n_C(1) = p(1)\left[q(0) - \frac{1}{2p(1)}\right]^2 - \frac{1}{4p(1)} + q(1) \quad (36)
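The payoff of this rearrangement can be checked numerically. In the PI case, substituting nC(0) = q(0) and dC(1) = -1 into eq 35 gives p(1)q(0)² - q(0) + q(1), in which q(0) occurs twice; completing the square leaves a single occurrence of q(0). A minimal sketch (the Interval class, p(1), and the intervals are hypothetical illustration values, not the authors' code):

```python
# Comparing the direct interval extension of eq 35 with the factored
# (completed-square) form of eq 36 for the PI case. Hypothetical data.

class Interval:
    def __init__(self, lo, hi): self.lo, self.hi = min(lo, hi), max(lo, hi)
    def __add__(self, o): return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o): return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = (self.lo*o.lo, self.lo*o.hi, self.hi*o.lo, self.hi*o.hi)
        return Interval(min(p), max(p))
    def shift(self, c): return Interval(self.lo + c, self.hi + c)
    def square(self):   # exact range of x^2 over the interval
        if self.lo <= 0.0 <= self.hi:
            return Interval(0.0, max(self.lo**2, self.hi**2))
        return Interval(min(self.lo**2, self.hi**2), max(self.lo**2, self.hi**2))
    def width(self): return self.hi - self.lo

def scale(c, I):
    return Interval(c*I.lo, c*I.hi) if c >= 0 else Interval(c*I.hi, c*I.lo)

p1 = 0.5
Iq0, Iq1 = Interval(-1.0, 1.0), Interval(0.2, 0.4)

# Eq 35 directly: q(0) occurs twice -> redundancy overestimates the range.
naive = scale(p1, Iq0 * Iq0) - Iq0 + Iq1

# Eq 36: single occurrence of q(0) inside the square -> exact range.
tight = scale(p1, Iq0.shift(-1.0/(2*p1)).square()).shift(-1.0/(4*p1)) + Iq1
```

With these values, the direct extension yields [-1.3, 1.9] while the factored form yields [-0.3, 1.9], which is the exact range of p(1)q(0)² - q(0) + q(1) over the given box.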
The natural interval extension of eq 36 (using the nonnegativity of the square) is computed to obtain I_{n1}. As k increases, the symbolic manipulations required to simplify and factorize eq 33, so as to obtain q(k) as a function of the DFVs only through a formula that does not exhibit redundancy, quickly become unmanageable. Thus, complete redundancy elimination is intractable. However, interval computations based on eqs 32 and 33 remain a simple and computationally inexpensive alternative.

Method 2: Interval Newton Method Applied to Equation 31. Given intervals for the DFVs, eq 31 (or eqs 32 and 33) is handled as a system of nonlinear equations with interval coefficients and is solved using a modified implementation of the interval Newton method (Appendix A.2). The manipulations that precede the actual computation of the intervals consist of (i) construction of first-order interval Taylor expansions for each of the M + 1 equations, which leads to a linear lower triangular system of inclusions with interval coefficients, and (ii) elimination of the dependent variables by capitalizing on this lower triangular structure, thus reducing the level of redundancy.

To illustrate the sequence of manipulations involved, the above-described method is implemented for η = 2. First, the degrees of freedom (DFVs) are selected from the available optimization variables. For this implementation, the DFVs are {q(k)}_{k=0}^{2} and {d_C(k)}_{k=1}^{2}. Let {I_{qk}}_{k=0}^{2} and {I_{dk}}_{k=1}^{2} be known intervals for {q(k)}_{k=0}^{2} and {d_C(k)}_{k=1}^{2}, respectively. For the purpose of this illustration, n_C(1) and n_C(2) are treated as independent variables (i.e., DFVs). Then, let I_{n1} and I_{n2} be the corresponding intervals for n_C(1) and n_C(2), respectively. For η = 2, it is possible to obtain good estimates of I_{n1} and I_{n2} via method 1 using eq 36 and an analogous expression that can be derived for n_C(2). At this stage, it is assumed that intervals for all remaining branching variables are initially known.
This is generally true in the context of the proposed BB methodology. Given intervals for the DFVs and for n_C(1) and n_C(2), it is desirable to obtain updated intervals for the remaining branching variables (i.e., {q(k)}_{k=η+1}^{M}). For this reason, eq 33 is rewritten as
f(k) \triangleq -q(k) - \sum_{i=0}^{k-1} p(k-i) \sum_{j=0}^{\min(i,\eta)} n_C(j)\, q(i-j) - \sum_{i=1}^{\eta} d_C(i)\, q(k-i) = 0, \quad \eta + 1 \le k \le M \quad (37)

Then, an interval Taylor expansion is performed on eq 37 around the midpoints of the intervals of all variables:

0 \in \bar{f}_k + \sum_{i=0}^{k} a_{k,i}\,\bar{I}_{qi} + b_{k,1}\bar{I}_{n1} + b_{k,2}\bar{I}_{n2} + c_{k,1}\bar{I}_{d1} + c_{k,2}\bar{I}_{d2} \quad (38)

where \bar{f}_k is given by

\bar{f}_k = -\bar{q}(k) - \sum_{i=0}^{k-1} p(k-i) \sum_{j=0}^{\min(i,\eta)} \bar{n}_C(j)\,\bar{q}(i-j) - \sum_{i=1}^{\eta} \bar{d}_C(i)\,\bar{q}(k-i) \quad (39)

and {a_{k,i}}_{i=0}^{k-1}, {b_{k,i}}_{i=1}^{2}, and {c_{k,i}}_{i=1}^{2} correspond to the partial derivatives defined according to the interval Taylor expansion of eq 37 (see sections A.1 and A.2). To calculate these partial derivatives, the vector of optimization (branching) variables is first defined as (n_C(1), n_C(2), d_C(1), d_C(2), q(0), q(1), ..., q(M)). Then, these partial derivatives are, in general, obtained as (also see eq 49)

b_{k,1} = \frac{\partial f(k)}{\partial n_C(1)}\bigl(I_{n1}, \bar{n}_C(2), \bar{d}_C(1), \bar{d}_C(2), \bar{q}(0), \bar{q}(1), \ldots, \bar{q}(M)\bigr)
b_{k,2} = \frac{\partial f(k)}{\partial n_C(2)}\bigl(I_{n1}, I_{n2}, \bar{d}_C(1), \bar{d}_C(2), \bar{q}(0), \ldots, \bar{q}(M)\bigr)
c_{k,1} = \frac{\partial f(k)}{\partial d_C(1)}\bigl(I_{n1}, I_{n2}, I_{d1}, \bar{d}_C(2), \bar{q}(0), \ldots, \bar{q}(M)\bigr)
c_{k,2} = \frac{\partial f(k)}{\partial d_C(2)}\bigl(I_{n1}, I_{n2}, I_{d1}, I_{d2}, \bar{q}(0), \ldots, \bar{q}(M)\bigr)
a_{k,i} = \frac{\partial f(k)}{\partial q(i)}\bigl(I_{n1}, I_{n2}, I_{d1}, I_{d2}, I_{q0}, \ldots, I_{qi}, \bar{q}(i+1), \ldots, \bar{q}(M)\bigr), \quad 0 \le i \le M

for η + 1 ≤ k ≤ M (40).

In actuality, b_{k,1}, b_{k,2}, c_{k,1}, c_{k,2}, and a_{k,i} depend only on a small number of midpoint values or intervals, as the following equations indicate:

b_{k,i} = \frac{\partial f(k)}{\partial n_C(i)}\bigl(\bar{q}(0), \bar{q}(1), \ldots, \bar{q}(k-i-1)\bigr), \quad i = 1, 2
c_{k,i} = \frac{\partial f(k)}{\partial d_C(i)}\bigl(\bar{q}(k-i)\bigr), \quad i = 1, 2
a_{k,0} = \frac{\partial f(k)}{\partial q(0)}\bigl(I_{n1}, I_{n2}, I_{q0}, \bar{q}(1), \bar{q}(2), \ldots, \bar{q}(k-1)\bigr)
a_{k,i} = \frac{\partial f(k)}{\partial q(i)} \begin{cases} (I_{q0}) & k \le i \\ (I_{q0}, I_{d1}) & k = i+1 \\ (I_{q0}, I_{n1}, I_{d2}) & k = i+2 \\ (I_{q0}, I_{n1}, I_{n2}) & k \ge i+3 \end{cases}

for η + 1 ≤ k ≤ M (41).

Hence, {a_{k,i}}_{i=0}^{k-1} will represent interval quantities, whereas {b_{k,i}}_{i=1}^{2} and {c_{k,i}}_{i=1}^{2} are real numbers that depend only on the midpoints of the intervals {I_{qi}}_{i=0}^{k-2}. Indeed, this will be the case for any value of η. Then, q(k) satisfies eq 37 if q(k) ∈ I_{qk}, with

\bar{I}_{qk} = I_{qk} - \bar{q}(k) = \bar{f}_k + \sum_{i=0}^{k-1} a_{k,i}\,\bar{I}_{qi} + b_{k,1}\bar{I}_{n1} + b_{k,2}\bar{I}_{n2} + c_{k,1}\bar{I}_{d1} + c_{k,2}\bar{I}_{d2} \quad (42)

In the context of the iterative interval Newton method, the interval I_{qk} on the left-hand side of eq 42 is a first estimate of the "updated" range of feasible q(k) values. The updated range of q(k) is the intersection of I_{qk} and the range obtained in the last iteration. The midpoint, q̄(k), of q(k), as well as the intervals and midpoints used on the right-hand side of eq 42, correspond to the feasible ranges obtained in the last iteration. The system of equations in eq 42, for k = η + 1, η + 2, ..., M, can be explicitly solved as illustrated next. In this way, each interval I_{qk} is obtained in terms of the independent variables. For k = 3 and k = 4, eq 42 gives

k = 3: \quad \bar{I}_{q3} = \bar{f}_3 + a_{3,0}\bar{I}_{q0} + a_{3,1}\bar{I}_{q1} + a_{3,2}\bar{I}_{q2} + b_{3,1}\bar{I}_{n1} + b_{3,2}\bar{I}_{n2} + c_{3,1}\bar{I}_{d1} + c_{3,2}\bar{I}_{d2} \quad (43)

k = 4: \quad \bar{I}_{q4} = \bar{f}_4 + a_{4,0}\bar{I}_{q0} + a_{4,1}\bar{I}_{q1} + a_{4,2}\bar{I}_{q2} + a_{4,3}\bar{I}_{q3} + b_{4,1}\bar{I}_{n1} + b_{4,2}\bar{I}_{n2} + c_{4,1}\bar{I}_{d1} + c_{4,2}\bar{I}_{d2} \quad (44)

Notice that eqs 43 and 44 can be combined to eliminate \bar{I}_{q3} from eq 44 and thus reduce the level of redundancy:

\bar{I}_{q4} = (\bar{f}_4 + a_{4,3}\bar{f}_3) + (a_{4,0} + a_{4,3}a_{3,0})\bar{I}_{q0} + (a_{4,1} + a_{4,3}a_{3,1})\bar{I}_{q1} + (a_{4,2} + a_{4,3}a_{3,2})\bar{I}_{q2} + (b_{4,1} + a_{4,3}b_{3,1})\bar{I}_{n1} + (b_{4,2} + a_{4,3}b_{3,2})\bar{I}_{n2} + (c_{4,1} + a_{4,3}c_{3,1})\bar{I}_{d1} + (c_{4,2} + a_{4,3}c_{3,2})\bar{I}_{d2} \quad (45)

In this manner, each successive \bar{I}_{qk} can be computed as a linear combination of intervals of only the independent variables (DFVs) and the n_C's:

\bar{I}_{qk} = F_k + \sum_{i=0}^{2} A_{k,i}\,\bar{I}_{qi} + B_{k,1}\bar{I}_{n1} + B_{k,2}\bar{I}_{n2} + C_{k,1}\bar{I}_{d1} + C_{k,2}\bar{I}_{d2} \quad (46)

where F_k, {A_{k,i}}_{i=0}^{2}, {B_{k,i}}_{i=1}^{2}, and {C_{k,i}}_{i=1}^{2} can be computed recursively, thus having low memory requirements, by

F_{k+1} = \bar{f}_{k+1} + \sum_{j=3}^{k} a_{k+1,j} F_j

A_{k+1,i} = a_{k+1,i} + \sum_{j=3}^{k} a_{k+1,j} A_{j,i}, \quad i = 0, 1, 2

B_{k+1,i} = b_{k+1,i} + \sum_{j=3}^{k} a_{k+1,j} B_{j,i}, \quad i = 1, 2
C_{k+1,i} = c_{k+1,i} + \sum_{j=3}^{k} a_{k+1,j} C_{j,i}, \quad i = 1, 2

where

F_3 = \bar{f}_3, \quad A_{3,i} = a_{3,i}, \quad B_{3,i} = b_{3,i}, \quad C_{3,i} = c_{3,i} \quad (47)

In other words, the recursion begins with k = η + 1. The recursive relations implement the solution of the interval linear equations required at each step of the interval Newton iteration. Upon convergence, new intervals that contain the feasible ranges of the branching variables are obtained. In the above discussion, I_{nk}, k = 1, 2, are treated as intervals of independent variables because relatively tight bounds can be found for these variables using the substitution and factorization approach described in method 1. For η ≥ 3, it becomes necessary to expand the interval Newton formulation in order to also calculate ranges for the n_C variables {n_C(k)}_{k=0}^{η}. The successive solve-and-eliminate strategy described above (eqs 43-45) can be expanded in a straightforward manner so that the intervals {I_{nk}}_{k=0}^{η} are computed before the intervals {I_{qk}}_{k=η+1}^{M} in each iteration. It should be noted that the transformation from eq 44 to eq 45 using eq 43 does not completely eliminate redundancy, as the calculated interval gradients a_{3,i}, i ≥ 0, still depend on the intervals of the independent variables. Furthermore, interval computations will always identify a solution in the form of a hyper-rectangle. One can construct simple examples, even with linear systems of equations, where the actual solution set is not a rectangular set (box) but has an irregular shape. The best that interval computations can do is identify the smallest hyper-rectangle that contains the actual solution. In the present case, this is not an issue because of the underestimating scheme, which is defined over hyper-rectangular domains.

BB/Interval Computational Results. This section reports on the application of methods 1 and 2 above to the PI controller optimization problem from the example, with a BB convergence tolerance of ε = 10⁻⁵. The effective dimension of the branching subspace is 2. The results of Table 1 were verified as globally optimal.

Table 3. Iterations for PI Controller Global Optimization

NCVs = M + 3    method 1    method 2
8               189         134
13              749         266
18              8326        589
23              >62 500     966
28              not solved  1395

Figure 6. Iteration results for PI controller optimization (NCVs = M + 3).

Table 3 and Figure 6 show the number of branching iterations necessary to obtain convergence for methods 1 and 2. For both methods, the globally optimal controller parameters obtained agree with those reported in Table 1 within 0.5 × 10⁻³. Results from the application of the standard BB method, without any range reduction, are not given in the table: the standard BB algorithm underperforms both method 1 and method 2, requiring more than 30 000 iterations to globally solve the problem that corresponds to six NCVs within the same convergence tolerance ε. Method 2 achieves faster convergence and exhibits much better scaling characteristics than method 1; this is primarily attributed to the reduction of variable redundancy in the underlying interval computations. In comparison, method 2 requires 1395 iterations to solve the problem with 28 NCVs, which corresponds to the largest problem that must be solved to identify the optimal PI controller for the example.
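The low-memory bookkeeping of eq 47 is easy to express in code. In the sketch below, plain floats stand in for the interval/midpoint coefficients of eqs 38-45 (a full implementation would carry interval objects); all numeric values are hypothetical, and the function name is illustrative.

```python
# Bookkeeping sketch of the solve-and-eliminate recursion of eq 47
# (floats stand in for interval coefficients; names mirror eqs 38-46).

def eliminate(fbar, a, b, c, eta=2, M=6):
    """Express each I_qk (k = eta+1..M) in terms of the DFV and nC
    intervals only, returning the coefficients F, A, B, C of eq 46."""
    F, A, B, C = {}, {}, {}, {}
    for k in range(eta + 1, M + 1):
        # The j-sum runs over the already-eliminated rows 3..k-1;
        # it is empty at k = eta+1, giving F_3 = fbar_3, A_3i = a_3i, etc.
        F[k] = fbar[k] + sum(a[k][j] * F[j] for j in range(eta + 1, k))
        A[k] = [a[k][i] + sum(a[k][j] * A[j][i] for j in range(eta + 1, k))
                for i in range(3)]
        B[k] = [b[k][i] + sum(a[k][j] * B[j][i] for j in range(eta + 1, k))
                for i in range(2)]
        C[k] = [c[k][i] + sum(a[k][j] * C[j][i] for j in range(eta + 1, k))
                for i in range(2)]
    return F, A, B, C

# Hypothetical coefficient data for eta = 2, M = 6.
fbar = {3: 0.1, 4: -0.2, 5: 0.05, 6: 0.0}
a = {k: {j: 0.1 * (k - j) for j in range(k)} for k in range(3, 7)}
b = {k: [0.01 * k, -0.02 * k] for k in range(3, 7)}
c = {k: [0.03, -0.01 * k] for k in range(3, 7)}
F, A, B, C = eliminate(fbar, a, b, c, eta=2, M=6)
```

Only the rows already processed are referenced, so the coefficients for each new k can overwrite nothing and are built in a single forward sweep, which is the low-memory property noted above.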
5. Conclusions

Given controller order and dynamic performance specifications according to the l1 - l∞ performance measure, an optimization problem was formulated and solved to identify a stabilizing controller of the specified order that minimizes the value of the specification-related performance measure. To ensure that a search over all candidate low-order controllers is possible, a parametrization of the set of fixed-order stabilizing controllers was developed and used. This step introduced a new, infinite-dimensional optimization variable and resulted in a quadratically constrained optimization problem that was solved iteratively. Converging sequences of upper and lower bounds are generated, and the ε-optimal solution can be identified. The proposed formulation of the fixed-order controller design problem can be adapted to solve the problems of fixed-order simultaneously stabilizing controller design and fixed-order approximately decoupling controller design. This is possible because the fixed-order controller design problem is based on the general parametrization in eq 1.

The particular problem structure, i.e., a small number of degrees of freedom relative to the total number of variables, leads to a hybrid BB/interval global optimization method. The equality constraints are used to reduce the dimension of the branching subspace. As a result, the number of BB iterations required for convergence is reduced by several orders of magnitude. This establishes the benefits arising from the integration of interval analysis tools within traditional BB global optimization methods for problems with a large number of equality constraints. Further research has demonstrated that the global solution of problems in process synthesis and optimization, such as heat exchanger network optimization, can greatly benefit from such hybrid interval/BB implementations.56

Acknowledgment

Support through a University of Missouri Research Board award is gratefully acknowledged. David Schmidt thanks the OURE program at UMR and the Intelligent Systems Center for support provided during this project.

Appendix A. Introduction to Interval Analysis

This exposition of interval analysis is based primarily on the texts by Moore49 and Ratschek and Rokne.52 The operations of addition, subtraction, multiplication, and scalar multiplication can be defined for intervals. Then, for real numbers a, b, c ∈ R, a ∈ I_a ≜ [a1, a2], and b ∈ I_b ≜ [b1, b2], we have
a + b \in I_a + I_b = [a_1 + b_1,\; a_2 + b_2]

a - b \in I_a - I_b = [a_1 - b_2,\; a_2 - b_1]

ab \in I_a \cdot I_b = [\min(a_1b_1, a_1b_2, a_2b_1, a_2b_2),\; \max(a_1b_1, a_1b_2, a_2b_1, a_2b_2)]

ca \in c \cdot I_a = \begin{cases} [ca_1, ca_2], & c \ge 0 \\ [ca_2, ca_1], & c < 0 \end{cases}
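These four operations translate directly into code. The following minimal class (an illustrative sketch, not part of the paper) transcribes the rules above and checks the defining containment property on sample points:

```python
# The four interval operations above, transcribed directly.

class Interval:
    def __init__(self, lo, hi): self.lo, self.hi = lo, hi
    def __add__(self, o): return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o): return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = (self.lo*o.lo, self.lo*o.hi, self.hi*o.lo, self.hi*o.hi)
        return Interval(min(p), max(p))
    def __rmul__(self, c):  # scalar multiplication c * Ia
        return Interval(c*self.lo, c*self.hi) if c >= 0 else Interval(c*self.hi, c*self.lo)
    def contains(self, x): return self.lo <= x <= self.hi

Ia, Ib = Interval(-3.0, 2.0), Interval(1.0, 4.0)
a, b = -1.0, 3.0                       # sample points: a in Ia, b in Ib
assert (Ia + Ib).contains(a + b)       # Ia + Ib = [-2, 6]
assert (Ia - Ib).contains(a - b)       # Ia - Ib = [-7, 1]
assert (Ia * Ib).contains(a * b)       # Ia * Ib = [-12, 8]
assert (-2.0 * Ia).contains(-2.0 * a)  # -2 * Ia = [-4, 6]
```

A production implementation would additionally round the lower endpoint down and the upper endpoint up at each operation so that containment survives floating-point rounding; that detail is omitted here.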
A.1. Range of a Function. Given a real-valued function f(x) and an interval Ix, x ∈ Ix, it is often desired to estimate the range of values of f, i.e., the set R(f; Ix) = {f(x) | x ∈ Ix}. Using the interval arithmetic operations presented above and direct interval-for-variable substitution in the formula of f(x), an estimate f(Ix) of R(f; Ix) is obtained. In general, R(f; Ix) ⊆ f(Ix), i.e., direct substitution identifies an overestimate of the actual range, a so-called "inclusion for the range".

Illustration. Consider f(a) = a², a ∈ Ia = [-3, 2]. If one calculates the range of f(a) according to the rule of multiplication, the following is obtained

f(a) = a·a ⇒ f(Ia) = Ia·Ia = [-6, 9]   (48)

However, based on the nonnegativity of the square, one can establish that the actual range is [0, 9]. The overestimation of the range in eq 48 is due to the multiple occurrences of Ia (or a) in f(Ia). According to the interval calculations, each occurrence of Ia is "interpreted" as an independent interval [-3, 2]. In general, the more occurrences of the variables of the function, the larger the overestimation of the actual range when these variables are substituted with intervals. This is the variable redundancy issue, which can be overcome by taking into account the properties of f(·) (nonnegativity of the square in the above illustration) or via appropriate manipulation designed to eliminate variable redundancy, as can be seen in the following illustration.

Illustration. Consider the function f(x, y, z) = xy + xz, where x ∈ Ix = [-2, 2], y ∈ Iy = [3, 5], and z ∈ Iz = [-4, -1]. Using direct substitution, f(Ix, Iy, Iz) = [-18, 18] is obtained. However, f(x, y, z) can be equivalently rewritten as fa(x, y, z) = x(y + z), in which each variable occurs only once. Then, fa(Ix, Iy, Iz) = [-8, 8] ⊂ f(Ix, Iy, Iz), and fa(Ix, Iy, Iz) turns out to be the actual range R(f; Ix, Iy, Iz) of f. By eliminating redundancy, the inclusion of the range can be greatly tightened. In fact, when all redundancy is eliminated, as was the case in the above example, the exact range of the function can be calculated. Many functions are not amenable to such manipulations. Nevertheless, any manipulation that reduces the redundancy will typically produce a tighter inclusion of
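Both illustrations above can be reproduced numerically. The sketch below uses plain endpoint tuples (the helper names `imul` and `iadd` are ours); it shows the overestimate [-6, 9] for a·a, the overestimate [-18, 18] for xy + xz, and the exact range [-8, 8] recovered by the rewritten form x(y + z).

```python
# Variable redundancy demonstration on the two illustrations above.
def imul(a, b):
    """Interval multiplication: min/max of the four endpoint products."""
    ps = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(ps), max(ps))

def iadd(a, b):
    """Interval addition: endpoint-wise sum."""
    return (a[0] + b[0], a[1] + b[1])

Ia = (-3, 2)
# a*a: the two copies of Ia are treated as independent intervals
print(imul(Ia, Ia))                       # (-6, 9), actual range is (0, 9)

Ix, Iy, Iz = (-2, 2), (3, 5), (-4, -1)
print(iadd(imul(Ix, Iy), imul(Ix, Iz)))   # (-18, 18): x occurs twice
print(imul(Ix, iadd(Iy, Iz)))             # (-8, 8): x(y + z), exact range
```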
the range of a function. To that end, much research effort has been expended on developing systematic computational tools for the calculation of tight inclusions for the range of a function. Such methods include the centered form approach, the interval slope approach, and the interval Taylor method, which is discussed next (see Neumaier51 and Ratschek and Rokne52 for a detailed presentation). Rokne and Bao57 presented an interval form of the well-known Taylor expansion. The reader is referred to Rokne and Bao57 or Hansen53 for an in-depth analysis of this formulation. The illustration that follows describes this expansion.

Illustration. Consider a function of three variables f(x1, x2, x3) and the intervals Ix1, Ix2, Ix3, where x1 ∈ Ix1, x2 ∈ Ix2, and x3 ∈ Ix3. Also, let x̄i ∈ Ixi, i = 1, 2, 3. In most cases, x̄i will be taken as the midpoint of the interval. Let gi be the partial derivative of f with respect to xi [i.e., gi(x1, x2, x3) = ∂f/∂xi, i = 1, 2, 3]. Then, the range of f over the cube (Ix1, Ix2, Ix3) is contained in the natural interval extension of the first-order Taylor expansion of f at the midpoint (x̄1, x̄2, x̄3), where the partial derivatives are appropriately calculated, i.e., given (x1, x2, x3) ∈ (Ix1, Ix2, Ix3)
f(x1, x2, x3) ∈ f(x̄1, x̄2, x̄3) + [g1(Ix1, x̄2, x̄3)  g2(Ix1, Ix2, x̄3)  g3(Ix1, Ix2, Ix3)] · [Īx1, Īx2, Īx3]^T   (49)

where Īxi = Ixi - x̄i, i = 1, 2, 3. Note that, in eq 49, the arguments of each gi are a combination of intervals and numbers. This follows from the underlying derivation.57 Hence, the vector of partial derivatives will, in general, be an interval vector.

A.2. Systems of Interval Equations. Interval analysis is used to find all solutions of systems of equations. Nonlinear systems of equations are solved via the iterative interval Newton method.53 The first step is to linearize the system of nonlinear equations according to eq 49. For a 3 × 3 system [fi(x1, x2, x3) = 0, i = 1, 2, 3] and initial ranges I¹x1, I¹x2, I¹x3 for the three variables, this linearization produces the following inclusion relation for the nth interval Newton iteration
[0, 0, 0]^T ∈ [f1(x̄1, x̄2, x̄3), f2(x̄1, x̄2, x̄3), f3(x̄1, x̄2, x̄3)]^T +

  [ g11(I^n_x1, x̄2, x̄3)  g12(I^n_x1, I^n_x2, x̄3)  g13(I^n_x1, I^n_x2, I^n_x3) ]   [ Ī^(n+1)_x1 ]
  [ g21(I^n_x1, x̄2, x̄3)  g22(I^n_x1, I^n_x2, x̄3)  g23(I^n_x1, I^n_x2, I^n_x3) ] · [ Ī^(n+1)_x2 ]   (50)
  [ g31(I^n_x1, x̄2, x̄3)  g32(I^n_x1, I^n_x2, x̄3)  g33(I^n_x1, I^n_x2, I^n_x3) ]   [ Ī^(n+1)_x3 ]
where the partial derivatives are gij(x1, x2, x3) = ∂fi/∂xj, i, j = 1, 2, 3. Then, at the nth interval Newton iteration, eq 50 is solved via the iterative interval Gauss-Seidel method54,58,59 to determine the intervals I^(n+1)_xi, i = 1, 2, 3. Upon convergence, the interval Newton method will
identify intervals I^∞_xi, i = 1, 2, 3, that contain all solutions of the original system that were contained in the original hyper-cube (I¹x1, I¹x2, I¹x3). In the case of systems of equations with uncertain coefficients that are bounded (i.e., the coefficients are intervals), the interval Newton approach can be used to identify the range of possible solutions as the coefficients vary arbitrarily within their intervals. This is the case in the present work.

Appendix B. Nomenclature

B.1. Symbols Introduced in Section 2

· = scalar multiplication or element-by-element multiplication of sequences
* = composition of maps and sequences or convolution of sequences
C = controller transfer function
d = disturbance sequence
d(k) = kth element of d
ds = known disturbance in l1 - l∞ performance specifications
H∞ = set of complex functions analytic and bounded in the open right half plane
H(P, C) = closed-loop transfer function
i∞ = induced infinity operator norm
l1 = space of absolutely summable sequences
l∞ = space of bounded sequences
M(R) = set of causal and rational transfer function matrices
M(S) = set of causal, rational, and stable transfer function matrices
(NC, DC) = controller factorization (C = NCDC⁻¹)
NP, DP, ÑP, D̃P, X, Y, X̃, Ỹ = doubly coprime factorization of P
nP, dP, ñP, d̃P, x, y, x̃, ỹ = impulse response sequences of the doubly coprime factors
P = plant transfer function
Pd = disturbance dynamics
p = impulse response sequence of P
p(k) = kth impulse response coefficient of P
φ0,N = sequences with first N + 1 elements nonzero
Φ(P, C) = closed-loop performance measure
Q = stable parameter (transfer function)
q = impulse response sequence of Q
q(k) = kth impulse response coefficient of Q
R = set of rational transfer functions
S = set of stable rational transfer functions
wi = performance weights, i = 1, 2, 3
y = output sequence
y(k) = kth element of y
ys = closed-loop response to ds

Greek Symbols

αi, βi = controller parameters (degrees of freedom)
η = order of the controller
ν = value of the infinite-dimensional optimal performance problem
νM = value of the truncated optimal performance problem with a finite number of equations (M)
ν^M_N,K = value of the finite-dimensional optimal performance problem with a finite number of equations (M), a finite number of variables (N), and a finite objective horizon (K)

B.2. Symbols Introduced in Section 4

NCV = nonconvex variable
DFV = degree of freedom variable
Ak,i = coefficient of Īqi in the general interval Taylor expansion
ai,j, bi,j, ci,j = interval expansion coefficients
Bk,i = coefficient of Īni in the general interval Taylor expansion
Bk = hyper-rectangle selected in the kth bounding iteration of BB
B^a_k,i (i = 1, 2) = two hyper-rectangles generated from Bk during standard BB splitting in DFV variables
B^c_k,i (i = 1, 2) = hyper-rectangles generated from Bk via interval computations
B̂k,i (i = 1, 2) = hyper-rectangles after feasibility-based range reduction, B̂k,i = B^a_k,i ∩ B^c_k,i
Ck,i = coefficient of Īdi in the general interval Taylor expansion
Fk = term in the general interval Taylor expansion
fi(x) = 0 = generic equality constraint
Iqi = interval for the variable q(i)
Ini = interval for the variable nC(i)
Idi = interval for the variable dC(i)
Īxi = interval Ixi - x̄(i), x(i) = nC(i), dC(i), q(i)
x̄(i) = midpoint of the interval Ixi, x(i) = nC(i), dC(i), q(i)

B.3. Symbols Introduced in the Appendix

fi(·), gi(·) = generic functions
a, b, x, y, z = variables
Ia, Ib, Ix, Iy, Iz = intervals
c = constant
R(f; Ix) = range of values of f(x)
Literature Cited

(1) Chien, I.; Fruehauf, P. S. Consider IMC Tuning to Improve Controller Performance. Chem. Eng. Prog. 1990, 33.
(2) Swallow, J. P. The best of the best in advanced process control. Presented as the keynote speech of the 4th International Conference on Chemical Process Control, South Padre Island, TX, Feb 17-22, 1991.
(3) Ziegler, J. G.; Nichols, N. B. Optimum Settings for Automatic Controllers. Trans. ASME 1942, 64, 759.
(4) Cohen, G. H.; Coon, G. A. Theoretical Consideration of Retarded Control. Trans. ASME 1953, 827.
(5) Hang, C. C.; Åström, K. J.; Ho, W. K. Refinements of the Ziegler-Nichols Tuning Formula. IEE Proc. D 1991, 111.
(6) Tyreus, B. D.; Luyben, W. L. Tuning PI Controllers for Integrator/Dead Time Processes. Ind. Eng. Chem. Res. 1992, 31, 2625.
(7) Luyben, W. L. Tuning Proportional-Integral-Derivative Controllers for Integrator/Deadtime Processes. Ind. Eng. Chem. Res. 1996, 35, 3480.
(8) Lee, J.; Cho, W.; Edgar, T. F. An Improved Technique for PID Controller Tuning from Closed-Loop Tests. AIChE J. 1990, 36, 1891.
(9) Huang, C.; Chou, C.; Chen, L. An Automatic PID Controller Tuning Method by Frequency Response Techniques. Can. J. Chem. Eng. 1997, 75, 596.
(10) Rivera, D. E.; Morari, M.; Skogestad, S. Internal Model Control. 4. PID Controller Design. Ind. Eng. Chem. Process Des. Dev. 1986, 25, 252.
(11) Morari, M.; Zafiriou, E. Robust Process Control; Prentice Hall: Englewood Cliffs, NJ, 1989.
(12) Horn, I. G.; Arulandu, J. R.; Gombas, C. J.; VanAntwerp, J. G.; Braatz, R. D. Improved Filter Design in Internal Model Control. Ind. Eng. Chem. Res. 1996, 35, 3437.
(13) Rivera, D. E.; Morari, M. Low-order SISO controller tuning methods for the H2, H∞ and µ objective functions. Automatica 1990, 26 (2), 361.
(14) Aguirre, L. A. Computer-Aided Analysis and Design of Control Systems using Model Approximation Techniques. Comput. Meth. Appl. Mech. Eng. 1994, 114, 273.
(15) Åström, K. J.; Hägglund, T. Automatic Tuning of Simple Regulators with Specifications on Phase and Amplitude Margins. Automatica 1984, 20, 645.
(16) Zhuang, M.; Atherton, D. P. Automatic Tuning of Optimum PID Controllers. IEE Proc. D 1993, 216.
(17) Friman, M.; Waller, K. V. Autotuning of Multiloop Control Systems. Ind. Eng. Chem. Res. 1994, 33, 1708.
(18) Lee, J.; Cho, W.; Edgar, T. Multiloop PI Controller Tuning for Interacting Multivariable Processes. Comput. Chem. Eng. 1998, 22, 1711.
(19) Kwak, H. J.; Sung, S. W.; Lee, I. On-Line Process Identification and Autotuning for Integrating Processes. Ind. Eng. Chem. Res. 1997, 36, 5329.
(20) Enns, D. F. Model Reduction with Balanced Realizations: An Error Bound and a Frequency Weighted Generalization. In Proceedings of the 23rd Conference on Decision and Control; IEEE: Piscataway, NJ, 1984; p 127.
(21) Yang, X. H.; Packard, A. A Low Order Controller Design Method. In Proceedings of the IEEE Conference on Decision and Control; IEEE: Piscataway, NJ, 1995; p 3068.
(22) Goddard, P. J.; Glover, K. Performance-preserving Frequency Weighted Controller Approximation: A Coprime Factorization Approach. In Proceedings of the IEEE Conference on Decision and Control; IEEE: Piscataway, NJ, 1994; p 2720.
(23) Zhou, K.; D'Souza, C.; Cloutier, J. R. Structurally Balanced Controller Order Reduction with Guaranteed Closed Loop Performance. Syst. Control Lett. 1995, 24, 235.
(24) Figueroa, J. L.; Desages, A. C.; Romagnoli, A. C.; Palazoglu, A. Controller Order Reduction for Process Control Systems. Comput. Chem. Eng. 1992, 16, 593.
(25) Rivera, D. E.; Morari, M. Plant and Controller Reduction Problems for Closed-Loop Performance. IEEE Trans. Autom. Control 1992, 37 (3), 398.
(26) Åström, K. J.; Panagopoulos, H.; Hägglund, T. Design of PI Controllers Based on Non-Convex Optimization. Automatica 1998, 34 (5), 585.
(27) Bao, J.; Forbes, J. F.; McLellan, P. J. Robust Multiloop PID Controller Design: A Successive Semidefinite Programming Approach. Ind. Eng. Chem. Res. 1999, 38, 3407.
(28) Iwasaki, T.; Skelton, R. E. The XY-centering algorithm for the dual LMI problem: A new approach to fixed-order control design. Int. J. Control 1995, 62 (6), 1257.
(29) Grigoriadis, K. M.; Skelton, R. E. Low-order Control Design for LMI Problems Using Alternating Projection Methods. Automatica 1996, 32, 1117.
(30) Zhou, K.; Doyle, J. C.; Glover, K. Robust and Optimal Control; Prentice Hall: Englewood Cliffs, NJ, 1996.
(31) Wang, S.; Chow, J. H. Coprime Factors, Strictly Positive Real Functions, LMI and Low-Order Controller Design for SISO Systems. In Proceedings of the IEEE Conference on Decision and Control; IEEE: Piscataway, NJ, 1998; p 2738.
(32) Sourlas, D.; Edgar, T.; Manousiouthakis, V. Best Achievable Low-Order Decentralized Performance. In Proceedings of the American Control Conference; IEEE: Piscataway, NJ, 1994; p 3364.
(33) Vidyasagar, M. Control System Synthesis: A Factorization Approach; MIT Press: Cambridge, MA, 1985.
(34) Åström, K. J.; Wittenmark, B. Computer Controlled Systems: Theory and Design; Prentice Hall: Englewood Cliffs, NJ, 1984.
(35) Manousiouthakis, V.; Sourlas, D. On simultaneously optimal decentralized performance. In Proceedings of the IEEE Conference on Decision and Control; IEEE: Piscataway, NJ, 1993; Vol. 4, p 3768.
(36) Sourlas, D.; Manousiouthakis, V. Best Achievable Performance: Nonswitching Compensation for Multiple Models. Int. J. Robust Nonlinear Control 1999, 9, 521.
(37) Vidyasagar, M. Optimal rejection of persistent bounded disturbances. IEEE Trans. Autom. Control 1986, AC-31, 527.
(38) Dahleh, M. A.; Diaz-Bobillo, I. J. Control of Uncertain Systems: A Linear Programming Approach; Prentice Hall: Englewood Cliffs, NJ, 1995.
(39) Sourlas, D.; Manousiouthakis, V. Best Achievable Decentralized Performance. IEEE Trans. Autom. Control 1995, 40 (11), 1858.
(40) Murtagh, B. A.; Saunders, M. A. MINOS 5.4 User's Guide; Technical Report SOL 83-20R; Department of Operations Research, Stanford University: Stanford, CA, 1995.
(41) Manousiouthakis, V.; Sourlas, D. A global optimization approach to rationally constrained rational programming. Chem. Eng. Commun. 1992, 115, 127.
(42) Soland, R. M. An algorithm for separable nonconvex programming problems II: Nonconvex constraints. Manage. Sci. 1971, 17, 759.
(43) Floudas, C. A.; Visweswaran, V. A global optimization algorithm (GOP) for certain classes of nonconvex NLPs-I. Theory. Comput. Chem. Eng. 1990, 14 (12), 1397.
(44) Ryoo, H. S.; Sahinidis, N. V. Global Optimization of Nonconvex NLPs and MINLPs with Applications in Process Design. Comput. Chem. Eng. 1995, 19 (5), 551.
(45) Androulakis, I. P.; Maranas, C. D.; Floudas, C. A. αBB: A Global Optimization Method for General Constrained Nonconvex Problems. J. Global Optim. 1995, 7, 337.
(46) Zamora, J. M.; Grossmann, I. E. Continuous Global Optimization of Structured Process Systems Models. Comput. Chem. Eng. 1998, 22 (12), 1749.
(47) Sourlas, D.; Choi, J.; Manousiouthakis, V. On l1 and H∞ optimal decentralized performance. In Proceedings of the IEEE Conference on Decision and Control; IEEE: Piscataway, NJ, 1992; p 2202.
(48) Vavasis, S. A. Nonlinear Optimization: Complexity Issues; Oxford University Press: New York, 1991.
(49) Moore, R. E. Interval Analysis; Prentice Hall: Englewood Cliffs, NJ, 1966.
(50) Alefeld, G.; Herzberger, J. Introduction to Interval Computations; Academic Press: New York, 1983.
(51) Neumaier, A. Interval Methods for Systems of Equations; Cambridge University Press: Cambridge, U.K., 1990.
(52) Ratschek, H.; Rokne, J. Computer Methods for the Range of Functions; Ellis Horwood Ltd.: Chichester, U.K., 1984.
(53) Hansen, E. Global Optimization Using Interval Analysis; Marcel Dekker: New York, 1992.
(54) Schnepper, C. A.; Stadtherr, M. A. Robust Process Simulation Using Interval Methods. Comput. Chem. Eng. 1996, 20, 187.
(55) Hua, Z.; Brennecke, J. F.; Stadtherr, M. A. Enhanced Interval Analysis for Phase Stability: Cubic Equation of State Models. Ind. Eng. Chem. Res. 1998, 37, 1519.
(56) Schmidt, D. E.; Sourlas, D. Interval Analysis in Branch and Bound Global Optimization. Presented at the AIChE Annual Meeting, Los Angeles, CA, Nov 11-17, 2000; Poster 235c.
(57) Rokne, J. G.; Bao, P. Interval Taylor Forms. Computing 1987, 39, 247.
(58) Kearfott, R. B. Decomposition of Arithmetic Expressions to Improve the Behavior of Interval Iteration for Nonlinear Systems. Computing 1991, 47, 169.
(59) Kearfott, R. B. Preconditioners for the Interval Gauss-Seidel Methodology. SIAM J. Numer. Anal. 1990, 3, 804.
Received for review July 12, 2000
Revised manuscript received April 23, 2001
Accepted July 17, 2001

IE0006561