Ind. Eng. Chem. Res. 2007, 46, 3629-3634


Optimal Measurement Combination for Local Self-Optimizing Control

Vinay Kariwala*
Division of Chemical & Biomolecular Engineering, Nanyang Technological University, Singapore 637459

Self-optimizing control is a useful method for finding the appropriate set of controlled variables. Recently, local methods were introduced for finding the locally optimal subset or linear combination of measurements that can be used as controlled variables [Halvorsen et al. Ind. Eng. Chem. Res. 2003, 42 (14), 3273-3284]. In this paper, we present a method for finding the optimal combination of measurements for local self-optimizing control and, hence, for nearly optimal steady-state operation. The proposed method only requires the use of singular value decomposition, followed by the determination of the eigenvectors of a matrix. In comparison to the available nonlinear optimization-based and approximate null space methods, the proposed results are computationally more efficient and are guaranteed to find the optimal solution.

1. Introduction

A key step in the design of control systems for physical processes is finding the appropriate set of controlled variables (CVs). Several methods for the selection of CVs have appeared in the process control literature over the past few decades; a collection of many of these methods is available in the survey article by Van de Wal and de Jager.1 Recently, Skogestad2 introduced the concept of self-optimizing control, which is useful for selecting CVs. Similar ideas were explored earlier,3,4 but the formulation presented by Skogestad2 is more general. Engell5 provides an alternate explanation and some refinements of self-optimizing control. The usefulness of the concept of self-optimizing control for the selection of CVs has been demonstrated through several case studies.6-8

The choice of CVs based on the general nonlinear formulation of self-optimizing control can be time-consuming. To quickly pre-screen the alternatives, Halvorsen et al.9 presented local methods that include the approximate minimum singular value rule and the exact method for nearly optimal steady-state operation.
A preliminary version of the former result was presented earlier by Skogestad and Postlethwaite.10 These methods are useful for finding the locally optimal subset or linear combination of measurements, which can be used as CVs.

This paper focuses on finding the optimal linear combination of measurements. Halvorsen et al.9 suggested the use of nonlinear optimization for finding the optimal linear combination. This approach can be computationally expensive and may fail to find the optimal solution. Later, Alstad and Skogestad11 utilized the problem structure and proposed the use of the null space method. This method is computationally more efficient than nonlinear optimization, but it ignores the implementation error (see Section 2 for details). Thus, the null space method can only provide a suboptimal solution. Another serious drawback of the null space method is that it is applicable only when the number of measurements is at least the sum of the numbers of manipulated variables and disturbances.

This work is motivated by the limitations of the nonlinear optimization-based and null space methods. In this paper, we present an exact and computationally efficient solution to the problem of finding the optimal linear combination of measurements for locally optimal (static) self-optimizing control. As shown in Section 3, the method only requires the use of singular value decomposition, followed by determination of the eigenvectors of a matrix. In comparison to the nonlinear optimization-based method of Halvorsen et al.,9 the proposed method scales well with the problem dimensions. The results presented in this paper also overcome the limitations of the null space method,6,11 in regard to the number of available measurements. We demonstrate the usefulness of the results using the example of an exothermic reactor12 presented in Section 4.

* To whom correspondence should be addressed. Tel.: +65-6316-8746. Fax: +65-6794-7553. E-mail: [email protected].

2. Local Self-Optimizing Control

In this section, we briefly introduce the exact local method for self-optimizing control. For this purpose, we assume that the economics of the plant are characterized by the scalar objective functional J(u,d), where u and d represent the inputs or manipulated variables and the disturbances, respectively. For the nominal disturbances d*, let uopt(d*) denote the optimal inputs. When the disturbances change from d*, the optimal operation policy is to update uopt(d) according to d. This usually requires the use of an online optimizer, which provides the optimal value of the objective functional, denoted as Jopt(d).13

A simpler strategy results when u can be indirectly adjusted using a feedback controller. In this case, the feedback controller manipulates u to hold the CVs (c) close to their specified setpoints. Here, in addition to d, J is also affected by the error (n) in implementing the constant setpoint policy, which results from uncertainty and measurement noise. Let us denote the objective functional value for the second strategy as Jc(n,d). In this case, the worst-case loss due to the use of this suboptimal strategy is given as

Lc = max_{d∈D} max_{n∈N} [Jc(n,d) − Jopt(d)]   (1)

where D and N represent the sets of allowable disturbances and implementation errors, respectively. The average loss can be calculated similarly. Self-optimizing control is said to occur when an acceptable loss Lc can be achieved by holding the CVs close to their setpoints, without the need to reoptimize when disturbances occur.2 Based on this concept, the appropriate CVs can be selected by comparing the losses for different alternatives.

Remark 1. Determination of the economically optimal operation policy actually requires the solution of a constrained optimization problem. The constraints include equality constraints that arise because of the model equations and inequality constraints that arise because of factors such as the physical

10.1021/ie0610187 CCC: $37.00 © 2007 American Chemical Society Published on Web 04/13/2007


limits on variables. These constraints also depend on the states, in addition to u and d. Typically, the set of active constraints at the nominal operating conditions is controlled, which consumes some of the available degrees of freedom. The states can be determined by implicitly solving the model equations and the set of active constraints.9 This gives rise to the unconstrained optimization problem, in terms of the remaining degrees of freedom (u), as used in this paper.

Finding the best CVs based on the general nonlinear formulation of self-optimizing control can be very time-consuming. To prescreen the alternatives quickly, Halvorsen et al.9 presented a local method. This method is based on a quadratic expansion of the objective functional around the nominal operating point and the assumption that the set of active constraints for the nonlinear optimization problem does not change with d. Cao14 has considered the case when the set of active constraints changes with the disturbances. In comparison to self-optimizing control, however, the use of an online optimizer may be advantageous in the case of frequent changes in the active constraint set. Some other limitations of self-optimizing control have been discussed by Alstad and Skogestad.11

To present the local method, let the linearized model of the process, obtained around the nominally optimal operating point, be given as

y = Gy u + Gdy Wd d + Wn n   (2)

where y denotes the process measurements, and the diagonal matrices Wd and Wn contain the magnitudes of the expected disturbances and of the implementation errors associated with the individual measurements, respectively. We have y, n ∈ R^ny, u ∈ R^nu, and d ∈ R^nd. The CVs (c) are given as

c = H y = G u + Gd Wd d + H Wn n   (3)

where

G = H Gy   (4a)
Gd = H Gdy   (4b)

It is assumed that the dimension of c is the same as that of u and that G = HGy is invertible. The second assumption is necessary for integral control and implies that the rank of Gy is at least nu. Under these assumptions, Halvorsen et al.9 have shown that the worst-case loss is given as

L = max_{||[d^T n^T]^T||2 ≤ 1} (1/2) σ̄([Md Mn])²   (5)

where

Md = Juu^1/2 (Juu^-1 Jud − G^-1 Gd) Wd   (6)
Mn = Juu^1/2 G^-1 H Wn   (7)

Here, Juu and Jud represent ∂²J/∂u² and ∂²J/(∂u ∂d), respectively, evaluated at the nominally optimal operating point. Note that Juu is positive-definite and, thus, Juu^1/2 always exists. The selection of CVs involves finding the matrix H such that the loss is minimal. Halvorsen et al.9 used nonlinear optimization for finding H, which suffers from the following drawbacks: (1) similar to any nonconvex optimization problem, the nonlinear optimization method can converge to a local optimum; and (2) the nonlinear optimization method can be very time-consuming. The implications of the nonconvexity of the optimization problem are well-known. The latter issue may seem irrelevant, because the optimization problem needs to be solved only offline. However, as shown in Section 4, it is usually not necessary to combine all the measurements, and the use of a subset of measurements suffices. To find a suitable subset of variables, the optimization problem must be solved many times. In this sense, the use of a faster method is clearly advantageous.

As an alternative to the nonlinear optimization approach, Alstad and Skogestad11 proposed the use of the null space method. In this method, the implementation error is ignored and H is selected such that σ̄(Md) is minimized. To be precise, H is selected such that

H (Gy Juu^-1 Jud − Gdy) = 0   (8)

i.e., H is in the null space of (Gy Juu^-1 Jud − Gdy). It can be verified that, when eq 8 holds, σ̄(Md) = 0. Clearly, the assumption of ignoring the implementation error is limiting and provides only a suboptimal solution. More importantly, for eq 8 to hold, it is necessary that

ny ≥ nu + nd   (9)

When the requirement in eq 9 is not satisfied, the null space method cannot be applied. Alstad6 presents some techniques for finding the dominant disturbance variables or directions such that eq 9 holds. Clearly, this may not be possible in general without sacrificing optimality further.

3. Optimal Combination of Measurements

In this section, we present an exact and computationally efficient method for finding the optimal linear combination of measurements for local self-optimizing control. We also show that the null space method6,11 is a special case of the proposed method. In the following discussion, as a shorthand notation, we use

Y = [(Gy Juu^-1 Jud − Gdy) Wd   Wn]   (10)

The following lemma establishes the basis for finding the optimal combination of measurements.

Lemma 1. The matrix H minimizing the loss in eq 5 can be found by solving

min_H γ
s.t. H (γ² Gy Juu^-1 (Gy)^T − Y Y^T) H^T ≥ 0   (11)
rank(H) = nu   (12)

Proof: We note that the loss is minimized when σ̄([Md Mn]) is minimized. Furthermore,

[Md Mn] = Juu^1/2 G^-1 [(G Juu^-1 Jud − Gd) Wd   H Wn] = Juu^1/2 (H Gy)^-1 H Y   (13)

Based on eq 13, there exists a positive scalar γ and H such that


σ̄([Md Mn]) ≤ γ
⇔ Juu^1/2 (H Gy)^-1 (H Y)(H Y)^T (H Gy)^-T Juu^1/2 ≤ γ² I
⇔ (H Gy)^-1 (H Y)(H Y)^T (H Gy)^-T ≤ γ² Juu^-1
⇔ H Y Y^T H^T ≤ γ² H Gy Juu^-1 (Gy)^T H^T
⇔ H (γ² Gy Juu^-1 (Gy)^T − Y Y^T) H^T ≥ 0

We further note that an H that solves the optimization problem does not necessarily render HGy invertible. Hence, the rank constraint in eq 12 must also be imposed on H. ∎

The optimization problem posed in Lemma 1 is bilinear in H and, thus, is difficult to solve. The rank constraint on H further complicates solving the optimization problem. In the following proposition, we propose a solution to this problem using eigenvalue decomposition.

Proposition 1. Let λ1, λ2, ..., λny be the eigenvalues of (γ² Gy Juu^-1 (Gy)^T − Y Y^T), arranged in decreasing order. The minimal loss then is given as

L = 0.5 γ²   (14)

where γ > 0 is the smallest scalar satisfying

λnu(γ² Gy Juu^-1 (Gy)^T − Y Y^T) = 0   (15)

Let ν1, ν2, ..., νny be the mutually orthogonal eigenvectors of (γ² Gy Juu^-1 (Gy)^T − Y Y^T) such that eq 15 holds. The matrix H then can be chosen as

H = C [ν1 ν2 ... νnu]^T   (16)

where C ∈ R^{nu×nu} is any nonsingular matrix.

Proof: To prove eqs 14 and 15, we need to show that there exists a nonsingular matrix H such that eq 11 holds if and only if γ is selected such that

λnu(γ² Gy Juu^-1 (Gy)^T − Y Y^T) ≥ 0   (17)

The fact that eq 17 is necessary for eq 11 to hold can be established by considering the special case of nu = 1. In this case, when eq 17 does not hold, (γ² Gy Juu^-1 (Gy)^T − Y Y^T) is a negative-definite matrix, so that H(γ² Gy Juu^-1 (Gy)^T − Y Y^T)H^T < 0 for all H ∈ R^{1×ny}, and the necessity follows. To show sufficiency, let eq 17 hold and let (γ² Gy Juu^-1 (Gy)^T − Y Y^T) be decomposed as

[ν1 ν2 ... νny] Λ [ν1 ν2 ... νny]^T   (18)

where Λ is a diagonal matrix containing the eigenvalues and ν1, ν2, ..., νny are the mutually orthogonal eigenvectors. The matrix Λ can be further written as Λ = diag(Λ+, Λ−), such that Λ+ ∈ R^{nu×nu} and Λ− ∈ R^{(ny−nu)×(ny−nu)} contain the positive (including zero) and negative eigenvalues, respectively. If H is selected as given in eq 16, then rank(H) = nu and

H(γ² Gy Juu^-1 (Gy)^T − Y Y^T)H^T = [C 0] diag(Λ+, Λ−) [C 0]^T = C Λ+ C^T ≥ 0

and, thus, eq 11 holds. Now, we recall that15

∂λnu/∂α = νnu^T (Gy Juu^-1 (Gy)^T) νnu   (evaluated at α = γ²)

Because Gy Juu^-1 (Gy)^T is nonsingular, ∂λnu/∂α > 0. Thus, λnu(γ² Gy Juu^-1 (Gy)^T − Y Y^T) is an increasing function of γ, and the γ value that satisfies eq 15 represents the minimal value of γ such that eq 17 holds. Using the nonsingular H in eq 16, eq 11 then holds and the minimal value of the loss is given as 0.5γ². ∎

With the results of Proposition 1, determining the smallest γ value that satisfies eq 15 remains the only hurdle in finding the optimal combination of measurements. It is possible to find the optimal γ using a bisection search. However, the computational efficiency of this method depends on the initialization parameters, and the quality of the solution depends on the chosen convergence tolerance. Next, we present an alternate approach for finding the optimal γ value, using singular value decomposition.

Proposition 2. Let the matrix [Gy Juu^-1/2   Y] be decomposed as

[Gy Juu^-1/2   Y] = U [Σ 0] V^T   (19)

such that Σ ∈ R^{ny×ny}. The γ value that solves eq 15 then is given as

γ = [1/σnu²(V11) − 1]^{1/2}   (20)

where σnu denotes the nu-th largest singular value and V11 ∈ R^{nu×ny} is the (1,1) block of V in eq 19.

Proof: With the singular value decomposition of [Gy Juu^-1/2   Y] in eq 19,

Gy Juu^-1/2 = U [Σ 0] [V11 V12; V21 V22]^T [I; 0] = U Σ V11^T

Similarly, it can be shown that Y = U Σ V21^T. Let the eigenvalue decomposition of V11^T V11 be given as Z Λ Z^T. Because V^T V = I, we have V11^T V11 + V21^T V21 = I. Now,

γ² Gy Juu^-1 (Gy)^T − Y Y^T = U Σ (γ² V11^T V11 − V21^T V21) Σ U^T
  = U Σ ((γ² + 1) V11^T V11 − I) Σ U^T
  = U Σ Z ((γ² + 1) Λ − I) Z^T Σ U^T

The above expression shows that the eigenvalues of γ² Gy Juu^-1 (Gy)^T − Y Y^T are the same as the eigenvalues of (γ² + 1)Λ − I, because of the similarity transformation. Because (γ² + 1)Λ − I is diagonal, eq 15 is satisfied if and only if

(γ² + 1) λnu = 1   (21)

Now, eq 20 follows by rearranging eq 21 and recognizing that λnu(V11^T V11) = σnu²(V11). ∎

Based on Propositions 1 and 2, the method for finding the optimal combination of measurements is given by the following algorithm.

Algorithm 1. The optimal combination of measurements that minimizes the loss in eq 5 can be found using the following steps:

(1) Find γ ≥ 0, using the singular value decomposition in Proposition 2, such that eq 15 holds.
(2) Perform an eigenvalue decomposition of (γ² Gy Juu^-1 (Gy)^T − Y Y^T) and find the eigenvectors corresponding to the largest (and non-negative) nu eigenvalues.
(3) Choose any nonsingular matrix C ∈ R^{nu×nu} and set H as given in eq 16.

In Algorithm 1, C is an arbitrary nonsingular matrix, which does not affect the loss incurred. In this sense, the matrix C can be treated as a degree of freedom. It can be used to satisfy some other criterion, for example, minimizing the condition number of HGy.

Remark 2. In the previous discussion, we have inherently assumed that the number of available measurements exceeds the number of available degrees of freedom (nu). When this requirement is not satisfied, Algorithm 1 requires finding a γ value such that the smallest eigenvalue of (γ² Gy Juu^-1 (Gy)^T − Y Y^T) is non-negative. This, in turn, implies that the combination matrix H can be trivially chosen to be any arbitrary nonsingular matrix. Thus, when nu = ny, the loss remains the same with the use of any measurement combination, including H = I.

Finally, we show that the null space method6,11 can be seen as a special case of Proposition 1.

Corollary 1. Let the implementation error be ignored (Wn = 0). If eq 9 holds, zero loss can be achieved by selecting H as given by eq 8.

Proof: We note that, when Wn = 0 and eq 9 holds, Y in eq 10 is rank-deficient and Y Y^T has at least (ny − nd) zero eigenvalues. As ny − nd ≥ nu, this implies that at least nu eigenvalues of Y Y^T are zero. With γ = 0,

λnu(γ² Gy Juu^-1 (Gy)^T − Y Y^T) = λnu(−Y Y^T) = 0

and eq 15 holds, providing zero loss. Now, ν1, ν2, ..., νnu represent the eigenvectors of Y Y^T corresponding to zero eigenvalues and, hence, H can be selected as given by eq 8. ∎

4. Case Study: Exothermic Reactor

We demonstrate the usefulness of the proposed results using the realistic example of an exothermic reactor, shown in Figure 1.

Figure 1. Schematic of a simple exothermic reactor.

Self-optimizing control of this reactor, by controlling the measurement combinations obtained using the null space method, has been previously studied by Alstad.6 The reactor is fed with reactant A and product B, having concentrations CAi and CBi, respectively, at temperature Ti. In the reactor, A undergoes a first-order reversible exothermic reaction to produce B. The outlet stream has a temperature T, with the concentrations of the product B and the unused reactant A being CB and CA, respectively. The reactor is assumed to be well-mixed, i.e., the concentrations and temperature inside the reactor are the same as in the outlet stream. The steady-state mass balance equations for this process are given as

(1/τ)(CAi − CA) − r = 0   (22)
(1/τ)(CBi − CB) + r = 0   (23)

where τ is the residence time (taken as 60 s) and r denotes the rate of reaction, which is given as

r = 5000 e^(−10000/(1.987 T)) CA − 10^6 e^(−15000/(1.987 T)) CB   (24)

The energy balance equation is

(1/τ)(Ti − T) + 5r = 0   (25)

The economic objective function to be minimized is

J = −2.009 CB + (0.001657 Ti)²   (26)
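The steady-state model can be checked numerically. The following is a minimal numpy sketch that solves eqs 22-25 at the nominal conditions used below in the case study (Ti = 424.292 K, CAi = 1 mol/L, CBi = 0 mol/L); eliminating CA and CB through the mass balances and solving the energy balance by bisection are our own devices for illustration, not steps taken in the paper:

```python
import numpy as np

# Steady state of the exothermic reactor, eqs 22-25 (tau = 60 s).
# Adding eqs 22 and 23 gives CA = CAi - tau*r and CB = CBi + tau*r,
# which makes r explicit for a given T; only the energy balance
# (eq 25) is then left to solve for T, here by bisection.
TAU = 60.0

def rate(T, CAi, CBi):
    """Reaction rate r (eq 24) with CA, CB eliminated via the mass balances."""
    k1 = 5000.0 * np.exp(-10000.0 / (1.987 * T))
    k2 = 1e6 * np.exp(-15000.0 / (1.987 * T))
    return (k1 * CAi - k2 * CBi) / (1.0 + TAU * (k1 + k2))

def steady_state(Ti, CAi, CBi, lo=350.0, hi=500.0):
    """Solve the energy balance (eq 25): Ti - T + 5*tau*r(T) = 0."""
    g = lambda T: Ti - T + 5.0 * TAU * rate(T, CAi, CBi)
    for _ in range(100):                      # bisection on the bracket [lo, hi]
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    T = 0.5 * (lo + hi)
    r = rate(T, CAi, CBi)
    return CAi - TAU * r, CBi + TAU * r, T    # CA, CB, T

CA, CB, T = steady_state(Ti=424.292, CAi=1.0, CBi=0.0)
J = -2.009 * CB + (0.001657 * 424.292) ** 2   # objective, eq 26
```

At the nominal point, this reproduces the optimal measurement values reported in the case study (CA ≈ 0.498, CB ≈ 0.502, T ≈ 426.8 K).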

The inlet temperature Ti is the manipulated variable and the inlet concentrations CAi and CBi are unmeasured disturbances. The outlet concentrations CA and CB and the temperatures T and Ti are measured. In summary, the sets of measured, manipulated, and disturbance variables are given as

y = [CA CB T Ti]^T   (27a)
u = Ti   (27b)
d = [CAi CBi]^T   (27c)

The allowed set of disturbances is D = {0.7 ≤ CAi ≤ 1.3, CBi ≤ 0.3}. The case with CAi = 1 mol/L and CBi = 0 mol/L is considered the nominal operating point. At this point, the optimal values for the measurement set are given as

y* = [0.498 0.502 426.803 424.292]^T   (28)

which is obtained using the Conopt solver available with GAMS.16 Because the number of model equations is small, the following linear model at the nominal operating point is obtained analytically:

      [−0.001169]     [0.494719   0.277996]
y  =  [ 0.001169] u + [0.505281   0.722004] d + Wn n   (29)
      [ 1.005843]     [2.526403  −1.389978]
      [ 1       ]     [0          0       ]

The implementation or measurement errors for the individual measurements are taken to be 0.01 mol/L for the concentration measurements (y1, y2) and 0.5 K for the two temperature measurements (y3, y4). Hence, Wn is a diagonal matrix given as Wn = diag(0.01, 0.01, 0.5, 0.5). Note that the last measurement, y4, is not affected by the disturbances, because it is the same as the manipulated variable u. Based on the set of allowable disturbances, the diagonal matrix Wd is given as Wd = diag(0.3, 0.3). The second-order derivatives Juu and Jud are also obtained analytically. Evaluated at the nominal operating point, these derivatives are

Juu = 0.000234   and   Jud = [−0.00177 0.008734]
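With all the local data now in hand, the loss formula of eqs 5-7 can be evaluated directly for any candidate combination matrix H. A minimal numpy sketch (values transcribed from eq 29 and the derivatives above; nu = 1 here, so G = HGy is a 1 × 1 matrix):

```python
import numpy as np

# Linearized reactor data, transcribed from eq 29 and the derivatives above.
Gy  = np.array([[-0.001169], [0.001169], [1.005843], [1.0]])   # 4 x 1
Gdy = np.array([[0.494719,  0.277996],
                [0.505281,  0.722004],
                [2.526403, -1.389978],
                [0.0,       0.0]])                             # 4 x 2
Wd  = np.diag([0.3, 0.3])
Wn  = np.diag([0.01, 0.01, 0.5, 0.5])
Juu, Jud = 0.000234, np.array([[-0.00177, 0.008734]])          # 1x1, 1x2

def local_loss(H):
    """Worst-case local loss (eq 5) for H (nu x ny), with Md, Mn of eqs 6-7."""
    G, Gd = H @ Gy, H @ Gdy                            # eqs 4a, 4b
    Ginv = np.linalg.inv(G)
    Md = np.sqrt(Juu) * (Jud / Juu - Ginv @ Gd) @ Wd   # eq 6
    Mn = np.sqrt(Juu) * Ginv @ H @ Wn                  # eq 7
    # sigma_bar of [Md Mn] is the matrix 2-norm (largest singular value).
    return 0.5 * np.linalg.norm(np.hstack([Md, Mn]), 2) ** 2   # eq 5

# Controlling a single measurement: H picks one row of y.
losses = {name: local_loss(np.eye(4)[[i]])
          for i, name in enumerate(["CA", "CB", "T", "Ti"])}
```

For the single measurements, this reproduces the losses quoted in the text below (e.g., about 0.0153 for c = Ti and 2.62 for c = CA).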

Table 1. Local Losses with the Control of the Optimal Combination of Different Sets of Measurements for the Exothermic Reactor

  Two Measurements            Three Measurements
  measurements  local loss    measurements  local loss
  CA, CB        0.108256      CA, CB, T     0.000297
  CA, T         0.015847      CA, CB, Ti    0.000264
  CA, Ti        0.013886      CA, T, Ti     0.007195
  CB, T         0.009747      CB, T, Ti     0.003608
  CB, Ti        0.008068
  T, Ti         0.011793

Table 2. Comparison of Losses for the Most-Promising Alternatives for Control of the Exothermic Reactor

                                            Loss
  rank   CV    set point    n          local      worst-case   average
  1      c3    2.199446     0.016752   0.000264   0.004196     0.000467
  2      c4    2.195052     0.016746   0.000264   0.004201     0.000468
  3      c2    14.248465    0.026194   0.008068   0.024919     0.003941
  4      c̃2    55.025686    0.703951   0.011793   0.028019     0.005196
  5      Ti    424.292      0.5        0.015320   0.030531     0.005840
  6      T     426.803      0.5        0.016893   0.034484     0.006415
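Algorithm 1 can be exercised end-to-end on the numbers of this case study. The following numpy sketch chooses C = I in step 3 (the data are transcribed from eq 29 and the derivatives above); the computed loss should match the 0.000264 local loss reported for c4 in Table 2:

```python
import numpy as np

# Data transcribed from eq 29 and the derivatives above.
Gy  = np.array([[-0.001169], [0.001169], [1.005843], [1.0]])
Gdy = np.array([[0.494719,  0.277996],
                [0.505281,  0.722004],
                [2.526403, -1.389978],
                [0.0,       0.0]])
Wd, Wn = np.diag([0.3, 0.3]), np.diag([0.01, 0.01, 0.5, 0.5])
Juu, Jud = 0.000234, np.array([[-0.00177, 0.008734]])
ny, nu = 4, 1

# Shorthand Y of eq 10.
Y = np.hstack([(Gy @ Jud / Juu - Gdy) @ Wd, Wn])

# Step 1 (Proposition 2): gamma from the SVD of [Gy Juu^-1/2  Y], eqs 19-20.
M = np.hstack([Gy / np.sqrt(Juu), Y])
U, s, Vt = np.linalg.svd(M)                 # Vt = V^T, full 7 x 7
V11 = Vt[:ny, :nu].T                        # (1,1) block of V, nu x ny
sigma_nu = np.linalg.svd(V11, compute_uv=False)[nu - 1]
gamma = np.sqrt(1.0 / sigma_nu**2 - 1.0)
loss = 0.5 * gamma**2                       # eq 14

# Step 2: eigenvectors of A = gamma^2 Gy Juu^-1 (Gy)^T - Y Y^T for the
# largest nu eigenvalues; Step 3: H = C [nu_1 ... nu_nu]^T with C = I (eq 16).
A = gamma**2 * (Gy @ Gy.T) / Juu - Y @ Y.T
w, v = np.linalg.eigh(A)                    # eigenvalues in ascending order
H = v[:, -nu:].T
```

Up to sign (and the arbitrary scaling C), the resulting row H should agree with the unit-norm coefficient vector of the full-measurement combination c4 discussed in the text.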

Next, we compute the local loss given in eq 5 when the individual measurements are controlled. The local losses for c = CA, CB, T, and Ti are 2.623395, 5.58784, 0.016893, and 0.01532, respectively. When the concentration measurements are controlled, the losses are large because of the low steady-state gains. In the following discussion, we focus on finding combinations of measurements whose control can provide lower losses. Using Algorithm 1, we find that controlling the relation

c4 = Hy = −0.761515 CA + 0.648127 CB + 0.000113 T + 0.005187 Ti   (30)

provides a local loss of only 0.000264, which is significantly less than that obtained using any of the individual measurements. Generally, however, the use of a combination of all the measurements is unnecessary, and an acceptable loss can be obtained by combining a subset of measurements. To get a nontrivial H, we need only two measurements. The local losses obtained by controlling the optimal combinations of different sets of two or three measurements are shown in Table 1. The best combinations of two and three measurements are obtained as

c2 = 0.999475 CB + 0.032399 Ti   (31)

c3 = −0.761323 CA + 0.648351 CB + 0.00531 Ti   (32)

We note that, when the two concentration measurements are combined, the loss is large because of the low static gain. Alternatively, one may consider combining the two temperature measurements, which have high gains; their optimal combination is given as

c̃2 = −0.637221 T + 0.770681 Ti   (33)

When c̃2 is controlled, however, the loss is still larger than that obtained by controlling c2. This happens because holding the CV constant indirectly provides information about the disturbances, and combining CB with one of the temperature measurements provides better information about the disturbances. The use of c3 provides a loss similar to that from controlling the optimal combination of all available measurements. This happens because of the similar coefficients of CA, CB, and Ti in c3 and c4, and the low weight of the measurement T in c4.

It is also possible to use the null space method to obtain suboptimal combinations as CVs. The null space method requires the use of at least three measurements. In this case, the best combination of three measurements provides a local loss of 0.00027, which is similar to the loss obtained by controlling c3. Somewhat surprisingly, when the combination of all the measurements is obtained using the null space method, the local loss increases to 0.00405. This demonstrates a drawback of ignoring the implementation error, as also noted by Alstad.6
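The null space method itself reduces to a few lines of linear algebra. A sketch applying eq 8 to the three-measurement subset {CA, CB, Ti} (the subset choice is our assumption for illustration; with ny = 3 = nu + nd, the left null space is one-dimensional, so H is unique up to scaling):

```python
import numpy as np

# Subset {CA, CB, Ti}: rows 1, 2, and 4 of the linearized model (eq 29).
Gy  = np.array([[-0.001169], [0.001169], [1.0]])
Gdy = np.array([[0.494719, 0.277996],
                [0.505281, 0.722004],
                [0.0,      0.0]])
Wd  = np.diag([0.3, 0.3])
Juu, Jud = 0.000234, np.array([[-0.00177, 0.008734]])

# H must lie in the left null space of M = Gy Juu^-1 Jud - Gdy (eq 8).
M = Gy @ Jud / Juu - Gdy                  # 3 x 2, rank 2
U, s, Vt = np.linalg.svd(M)
H = U[:, -1:].T                           # left-singular vector spanning the null space

# With eq 8 satisfied, the disturbance term Md of eq 6 vanishes;
# the implementation-error term Mn is simply ignored by the method.
G = H @ Gy
Md = np.sqrt(Juu) * (Jud / Juu - np.linalg.inv(G) @ (H @ Gdy)) @ Wd   # eq 6
```

Up to sign and scaling, the resulting direction comes out close to c3, consistent with the observation above that the null space combination of three measurements performs similarly to c3.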

We also compare the losses obtained by controlling the most promising alternatives using the actual nonlinear model in eqs 22-25. For this purpose, a set of 1331 equally spaced feasible disturbances and implementation errors is produced, and the average and worst-case losses are compared. The setpoints for the measurement combinations are taken as Hy*, where y* is given in eq 28. The implementation errors for the measurement combinations are computed as ∑|HWn|, where the absolute value of the product is taken to account for the worst case. For example, when c̃2 is controlled, the implementation error is taken as (0.637221 + 0.770681) × 0.5 = 0.703951.

Based on the comparison in Table 2, we note that the losses computed from the nonlinear model follow the local analysis closely, except for c4. The local analysis suggests that the control of c4 can provide a minor advantage over the control of c3. However, the actual losses for these two candidates are almost the same, with c3 being slightly better than c4. This happens because of the plant-model mismatch that arises from model linearization and demonstrates a limitation of the local approaches. In comparison to frequent online optimization, the worst-case and average relative losses with the control of c3 are only 0.5% and 0.06%, respectively. This shows that nearly optimal steady-state operation is possible using feedback. Finally, we recommend the use of c3 if both composition measurements are available online; otherwise, control of the combination of the two temperature measurements, i.e., c̃2, is suggested.

5. Conclusions and Future Work

In this paper, we have provided a simple method for finding the locally optimal measurement combinations, which can be used as controlled variables.
The proposed algorithm provides the optimal solution efficiently, in comparison to the nonlinear optimization-based and null space methods.6,9,11 The combination of measurements obtained using the proposed results or the approximate null space method6,11 may not have a physical meaning, and its use as a controlled variable may seem restrictive. However, it should be emphasized that there is no fundamental limitation to controlling combinations or functions of measurements.

We note that the use of combinations of all the available measurements is not always necessary. It is often possible to obtain almost the same economic performance by combining only a subset of measurements. In this paper, the best subset of measurements, whose locally optimal combination can be used as controlled variables, is found by evaluating all possible alternatives. However, this approach is computationally intractable for large-dimensional problems, as the number of subsets of measurements increases rapidly with the problem size. In the future, we will focus on finding useful search methods (e.g., branch and bound methods) to solve this combinatorial problem efficiently.

A shortcoming of the results of this paper, and of local methods generally, is that the selection of the controlled variables is


performed based on an approximate linear model. Because of the resulting plant-model mismatch, the locally optimal subset or combination of measurements may not provide an acceptable loss for the actual nonlinear plant in some cases. Furthermore, the available results on local self-optimizing control only involve the steady-state case. When the economics of the plant depend strongly on disturbance dynamics, the use of these static methods can be limiting. Future work will focus on improving the results of this paper and the various available results on local self-optimizing control by considering the process dynamics and the plant-model mismatch that arises from model linearization.

Literature Cited

(1) Van de Wal, M.; De Jager, B. A review of methods for input/output selection. Automatica 2001, 37 (4), 487-510.
(2) Skogestad, S. Plantwide control: The search for the self-optimizing control structure. J. Process Control 2000, 10 (5), 487-507.
(3) Morari, M.; Arkun, Y.; Stephanopoulos, G. Studies in the synthesis of control structures for chemical processes. Part I: Formulation of the problem, process decomposition and the classification of the controller task. Analysis of the optimizing control structures. AIChE J. 1980, 26 (2), 220-232.
(4) Luyben, W. L. The concept of eigenstructure in process control. Ind. Eng. Chem. Res. 1988, 27 (1), 206-208.
(5) Engell, S. Feedback control for optimal process operation. J. Process Control 2007, 17 (3), 203-219.
(6) Alstad, V. Studies on Selection of Controlled Variables. Ph.D. Thesis, Norwegian University of Science and Technology, 2005. Available at http://www.nt.ntnu.no/users/skoge/publications/thesis/2005_alstad/.

(7) Govatsmark, M. S. Integrated Optimization and Control. Ph.D. Thesis, Norwegian University of Science and Technology, 2003. Available at http://www.nt.ntnu.no/users/skoge/publications/thesis/2003_govatsmark.
(8) Skogestad, S. Near-optimal operation by self-optimizing control: From process control to marathon running and business systems. Comput. Chem. Eng. 2004, 29 (1), 127-137.
(9) Halvorsen, I. J.; Skogestad, S.; Morud, J. C.; Alstad, V. Optimal selection of controlled variables. Ind. Eng. Chem. Res. 2003, 42 (14), 3273-3284.
(10) Skogestad, S.; Postlethwaite, I. Multivariable Feedback Control: Analysis and Design, 1st Edition; Wiley: Chichester, U.K., 1996.
(11) Alstad, V.; Skogestad, S. Null space method for selecting optimal measurement combinations as controlled variables. Ind. Eng. Chem. Res. 2007, 46 (3), 846-853.
(12) Economou, C. G.; Morari, M.; Palsson, B. O. Internal model control: Extension to nonlinear systems. Ind. Eng. Chem. Process Des. Dev. 1986, 25 (2), 403-411.
(13) Marlin, T.; Hrymak, A. N. Real-time operations optimization of continuous processes. In Proceedings of the 5th International Conference on Chemical Process Control; Tahoe City, NV, 1996.
(14) Cao, Y. Constrained self-optimizing control via differentiation. In Proceedings of the 7th International Symposium on ADCHEM; Hong Kong, 2004.
(15) Harville, D. A. Matrix Algebra from a Statistician's Perspective; Springer-Verlag: New York, 1997.
(16) Brooke, A.; Kendrick, D.; Meeraus, A.; Raman, R. GAMS: A User's Guide; GAMS Development Corporation: Washington, DC, 2005.

Received for review August 3, 2006
Revised manuscript received March 11, 2007
Accepted March 12, 2007

IE0610187