Modifier-Adaptation Methodology for Real-Time Optimization


Ind. Eng. Chem. Res. 2009, 48, 6022–6033

A. Marchetti, B. Chachuat,† and D. Bonvin*
Laboratoire d'Automatique, École Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland

The ability of a model-based real-time optimization (RTO) scheme to converge to the plant optimum relies on the ability of the underlying process model to predict the plant's necessary conditions of optimality (NCO). These include the values and gradients of the active constraints, as well as the gradient of the cost function. Hence, in the presence of plant-model mismatch or unmeasured disturbances, one could use (estimates of) the plant NCO to track the plant optimum. This paper shows how to formulate a modified optimization problem that incorporates such information. The so-called modifiers, which express the difference between the measured or estimated plant NCO and those predicted by the model, are added to the constraints and the cost function of the modified optimization problem and are adapted iteratively. Local convergence and model-adequacy issues are analyzed. The modifier-adaptation scheme is tested experimentally via the RTO of a three-tank system.

1. Introduction

The chemical industry is characterized by a large number of continuously operating plants, for which optimal operation is of economic importance. However, optimal operation is particularly difficult to achieve when the plant models are inaccurate or in the presence of process disturbances. In response to these difficulties, real-time optimization (RTO) has become the rule for a large number of chemical and petrochemical plants.1

In highly automated plants, optimal operation is typically addressed by a decision hierarchy involving several levels that include plant scheduling, RTO, and process control.2-4 At the RTO level, medium-term decisions are made on a time scale of hours to a few days by considering economic objectives explicitly. This step typically relies on an optimizer that determines the optimal operating point under slowly changing conditions such as catalyst decay or changes in raw-material quality.
The optimal operating point is characterized by setpoints that are passed to lower-level controllers. Model-based RTO typically involves nonlinear first-principles models that describe the steady-state behavior of the plant. Because these models are often relatively large, model-based optimization may be computationally intensive. RTO emerged in the chemical process industry in the late 1970s, at the time when online computer control of chemical plants became available. There has been extensive research in this area since then, and numerous industrial applications have been reported.3

* To whom correspondence should be addressed. E-mail address: [email protected].
† Present address: Department of Chemical Engineering, McMaster University, Hamilton, Ontario L8S 4L8, Canada.

Because accurate models are rarely available in industrial applications, RTO typically proceeds using an iterative two-step approach,3,5,6 namely, an identification step followed by an optimization step. The idea is to repeatedly estimate selected uncertain model parameters and use the updated model to generate new inputs via optimization. This way, the model is expected to yield a better description of the plant at its current operating point. The classical two-step approach works well provided that (i) there is little structural plant-model mismatch7 and (ii) the changing operating conditions provide sufficient excitation for estimating the uncertain model parameters. Unfortunately, such conditions are rarely met in practice. Regarding (i), in the presence of structural plant-model

mismatch, it is typically not possible to satisfy the necessary conditions of optimality (NCO) of the plant simply by estimating the model parameters that predict the plant outputs well. Some information regarding plant gradients must be incorporated into the RTO scheme. In the so-called "integrated system optimization and parameter estimation" (ISOPE) method,8,9 a gradient-modification term is added to the cost function of the optimization problem to force the iterates to converge to a point that satisfies the plant NCO. Regarding (ii), the use of multiple datasets has been suggested to increase the number of identifiable parameters.10

In response to both (i) and (ii), methods that do not rely on model-parameter update have recently gained in popularity. These methods encompass the two classes of model-free and fixed-model methods that are discussed next.

Model-free methods do not use a process model online to implement the optimization. Two approaches can be distinguished. In the first one, successive operating points are determined by "mimicking" iterative numerical optimization algorithms. For example, evolutionary-like schemes have been proposed that implement the Nelder-Mead simplex algorithm to approach the optimum.11 To achieve faster convergence rates, techniques based on gradient-based algorithms have also been developed.13 The second approach to model-free methods consists of recasting the nonlinear programming (NLP) problem into that of choosing outputs whose optimal values are approximately invariant to uncertainty. The idea is then to use measurements to bring these outputs to their invariant values, thereby rejecting the uncertainty. In other words, a feedback law is sought that implicitly solves the optimization problem, as is done in self-optimizing control14 and NCO tracking.15

Fixed-model methods utilize both a nominal process model and appropriate measurements to guide the iterative scheme toward the optimum.
The process model is embedded within an NLP problem that is solved repeatedly. However, instead of refining the parameters of a first-principles model from one RTO iteration to the next, the measurements are used to update the cost and constraint functions in the optimization problem so as to yield a better approximation of the plant cost and constraints at the current point. In a recent work,16 a modified optimization problem has been formulated wherein both the constraint values and the cost and constraint gradients are corrected. This way, an operating point that satisfies the plant NCO is obtained

10.1021/ie801352x CCC: $40.75  2009 American Chemical Society Published on Web 03/20/2009


upon convergence. Note that one can choose to not include the gradient-modification terms, in which case the approach reduces to a simple constraint-adaptation scheme.17,18 The term "modifier adaptation" has recently been coined for those fixed-model methods that adapt correction terms (or modifiers) based on the observed difference between actual and predicted functions or gradients.19 Modifier-adaptation methods exhibit the nice feature that they use a model parametrization and an update criterion that are tailored to the tracking of the NCO.

This paper formalizes the idea of using plant measurements to adapt the optimization problem in response to plant-model mismatch, following the paradigm of modifier adaptation. It is organized as follows. The formulation of the RTO problem and a number of preliminary results are given in section 2. Section 3 describes the modifier-adaptation approach. Model-adequacy requirements and local convergence conditions are discussed. Two filtering strategies are proposed to achieve stability and reduce sensitivity to measurement noise. Links between modifier adaptation and related work in the literature are highlighted. The modifier-adaptation method is tested on an experimental three-tank setup in section 4, and conclusions are presented in section 5.

2. Preliminaries

2.1. Problem Formulation. The objective of RTO is the minimization or maximization of some steady-state operating performance (e.g., minimization of operating cost or maximization of production rate), while satisfying a number of constraints (e.g., bounds on process variables or product specifications). In the context of RTO, because it is important to distinguish between the plant and the model, we will use the notation (·)_p for the variables that are associated with the plant. The steady-state optimization problem for the plant can be formulated as follows:

    min_u  Φ_p(u) := φ(u, y_p)
    s.t.   G_p(u) := g(u, y_p) ≤ 0
           u^L ≤ u ≤ u^U                                              (1)

where u ∈ R^{n_u} denotes the decision (or input) variables, and y_p ∈ R^{n_y} are the measured (or output) variables; φ: R^{n_u} × R^{n_y} → R is the scalar cost function to be minimized; g_i: R^{n_u} × R^{n_y} → R (with i = 1, ..., n_g) is a set of constraint functions; and u^L and u^U are the lower and upper bounds on the decision variables, respectively. These latter bounds are considered separately from the constraints g, because they are not affected by uncertainty and, therefore, do not require adaptation.

An important assumption throughout this paper is that the cost function φ as well as the constraint functions g are known and can be evaluated directly from the measurements. While these assumptions are often met in practice, there also are many RTO applications in which not all output variables in the cost function and inequality constraints can be measured; that is, the RTO scheme is open-loop with respect to unmeasured outputs. Note that the lack of measurements requires the introduction of some conservatism, e.g., in the form of constraint backoffs.20 This will be addressed in future work.

In any practical application, the plant mapping y_p(u) is not known accurately. However, an approximate model is often available in the form

    f(u, x, θ) = 0
    y = h(u, x, θ)

where f ∈ R^{n_x} is a set of process model equations including mass and energy balances and thermodynamic relationships, x ∈ R^{n_x} are the model states, y ∈ R^{n_y} are the output variables predicted by the model, and θ ∈ R^{n_θ} is a set of adjustable model parameters. For simplicity, we shall assume that the outputs y can be expressed explicitly as functions of u and θ, y(u, θ). Using one such model, the solution to the original problem (1) can be approached by solving the following NLP problem:

    u* = arg min_u  Φ(u, θ)
    s.t.   G(u, θ) ≤ 0
           u^L ≤ u ≤ u^U                                              (2)

with Φ(u, θ) := φ(u, y(u, θ)) and G(u, θ) := g(u, y(u, θ)). Assuming that the feasible set U := {u ∈ [u^L, u^U]: G(u, θ) ≤ 0} is nonempty and the cost function Φ is continuous for given θ, a minimizing solution of Problem (2) is guaranteed to exist; the set of active constraints at u* is denoted by A(u*) := {i: G_i(u*, θ) = 0, i = 1, ..., n_g}.

2.2. Necessary Conditions of Optimality. Assuming that the required constraint qualification holds at the solution point u* and the functions Φ and G are differentiable at u*, there exist unique Lagrange multiplier vectors µ ∈ R^{n_g} and ζ^U, ζ^L ∈ R^{n_u} such that the following first-order Karush-Kuhn-Tucker (KKT) conditions hold at u*:21

    G ≤ 0,   u^L ≤ u ≤ u^U                                            (3)
    µ^T G = 0,   ζ^{U,T}(u - u^U) = 0,   ζ^{L,T}(u^L - u) = 0         (4)
    µ ≥ 0,   ζ^U ≥ 0,   ζ^L ≥ 0                                       (5)
    ∂L/∂u = ∂Φ/∂u + µ^T ∂G/∂u + ζ^{U,T} - ζ^{L,T} = 0                 (6)

with L being the Lagrangian function, L := Φ + µ^T G + ζ^{U,T}(u - u^U) + ζ^{L,T}(u^L - u). The NCO (3) are called the primal feasibility conditions, those given in (4) are called the complementarity slackness conditions, and those given in (5) and (6) are called the dual feasibility conditions.

2.3. Model Adequacy for Two-Step RTO Schemes. The problem of model selection in the classical two-step approach of RTO has been discussed by Forbes and Marlin.22 A process model is called adequate if values for its parameters, say θ̄, can be found such that a fixed point of the RTO scheme coincides with the plant optimum u_p*. Note that the parameter values θ̄ may not represent the true values of the model parameters, especially in the case of structural plant-model mismatch, for which the concept of "true values" has no meaning. Forbes and Marlin22 proposed the following model-adequacy criterion.

Criterion 1 (Model Adequacy for Two-Step RTO Schemes). Let u_p* be the unique plant optimum and let there exist (at least) one set of values θ̄ for the model parameters such that

    G_i(u_p*, θ̄) = 0,   i ∈ A(u_p*)                                  (7)
    G_i(u_p*, θ̄) < 0,   i ∉ A(u_p*)                                  (8)
    ∇_r Φ(u_p*, θ̄) = 0                                               (9)
    ∇_r² Φ(u_p*, θ̄) > 0   (positive definite)                        (10)
    ∂J_id/∂θ (y_p, u_p*, θ̄) = 0                                      (11)
    ∂²J_id/∂θ² (y_p, u_p*, θ̄) > 0   (positive definite)              (12)

where ∇_r Φ and ∇_r² Φ are the reduced gradient and reduced Hessian of the cost function, respectively,23 and J_id stands for the performance index in the (unconstrained) parameter estimation problem. Then, the process model is adequate.

Conditions (7)-(10) represent sufficient conditions for u_p* to be a strict local minimum of (2) with the parameters chosen as θ̄, whereas conditions (11) and (12) are sufficient for θ̄ to be a strict local minimum of the estimation problem at u_p*. Hence, if all these conditions hold, the plant optimum u_p* corresponds to a (local) model optimum for θ = θ̄, and conditions (7)-(12) are sufficient for model adequacy. However, these conditions are not necessary for model adequacy. Indeed, it may be that u_p* corresponds to the model optimum for θ = θ̄ (i.e., the model is adequate) and yet conditions (7)-(12) are not met. One such situation is when the reduced Hessian is only positive semidefinite, in which case (10) does not hold. Noting that the equalities (11) alone yield n_θ conditions, it becomes clear that the full set of adequacy conditions (7)-(12) is overspecified. In other words, these model-adequacy conditions are often not satisfied in the presence of plant-model mismatch.
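Before moving on, the model-based problem (2) and the KKT conditions (3)-(6) can be made concrete with a small numerical sketch. The quadratic cost and single convex constraint below are hypothetical stand-ins (not taken from this paper); with the bounds inactive at the solution, condition (6) reduces to ∂Φ/∂u + µ ∂G/∂u = 0, which is checked by finite differences.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical model: quadratic cost Phi(u, theta) and one convex constraint G(u, theta)
theta = np.array([2.0, 1.5])
Phi = lambda u: (u[0] - theta[0])**2 + 4.0*(u[1] - 2.5)**2
G = lambda u: (u[0] - theta[1])**2 + 2.0*u[1] - 3.0

# Solve problem (2): min Phi(u)  s.t.  G(u) <= 0,  0 <= u <= 4
res = minimize(Phi, x0=np.array([1.0, 1.0]), method="SLSQP",
               constraints=[{"type": "ineq", "fun": lambda u: -G(u)}],
               bounds=[(0.0, 4.0), (0.0, 4.0)])
u_star = res.x

# Finite-difference gradients at the solution
def fd_grad(f, u, h=1e-6):
    return np.array([(f(u + h*e) - f(u - h*e)) / (2.0*h) for e in np.eye(len(u))])

gPhi, gG = fd_grad(Phi, u_star), fd_grad(G, u_star)

# With one active constraint and inactive bounds, eq (6) is gPhi + mu*gG = 0;
# the least-squares multiplier should make the stationarity residual vanish
mu = -(gPhi @ gG) / (gG @ gG)
kkt_residual = gPhi + mu*gG
```

Here the constraint is active at u*, so complementarity (4) holds with µ > 0, primal feasibility (3) holds, and the stationarity residual (6) is near zero.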

3. Modifier Adaptation

Modifier adaptation is presented and analyzed in this section. The idea behind modifier adaptation is introduced in subsection 3.1. The modifier-adaptation algorithm is described in subsection 3.2, followed by discussions on KKT matching, local convergence, and model adequacy in subsections 3.3, 3.4, and 3.5, respectively. Alternative modifier-adaptation schemes are considered in subsection 3.6, and links to previous work are indicated in subsection 3.7. A discussion on the estimation of experimental gradients closes the section.

3.1. Idea of Modifier Adaptation. In the presence of uncertainty, the constraint values and the cost and constraint gradients predicted by the model do not match those of the plant. Therefore, open-loop implementation of the model-based solution results in suboptimal and possibly infeasible operation. The idea behind modifier adaptation is to use measurements for correcting the cost and constraint predictions between successive RTO iterations in such a way that a KKT point for the model coincides with the plant optimum.16 Unlike two-step RTO schemes, the model parameters θ are not updated. Instead, a linear modification of the cost and constraint functions is implemented, which relies on so-called modifiers representing the difference between the plant and predicted values of some KKT-related quantities. At a given operating point ū, the modified constraint functions read as

    G_m(u, θ) := G(u, θ) + ε^G + λ^{G,T}(u - ū)                       (13)

where the modifiers ε^G ∈ R^{n_g} and λ^G ∈ R^{n_u × n_g} are given by

    ε^G := G_p(ū) - G(ū, θ)                                           (14)
    λ^{G,T} := ∂G_p/∂u (ū) - ∂G/∂u (ū, θ)                             (15)

A graphical interpretation of the modifiers in the jth input direction for the constraint G_i is depicted in Figure 1. The modifier ε^{G_i} corresponds to the gap between the plant and predicted constraint values at ū, whereas λ^{G_i} represents the difference between the slopes of G_{p,i} and G_i at ū. Likewise, the cost function is corrected as

    Φ_m(u, θ) := Φ(u, θ) + λ^{Φ,T} u                                  (16)

where the modifier λ^Φ ∈ R^{n_u} is given by

    λ^{Φ,T} := ∂Φ_p/∂u (ū) - ∂Φ/∂u (ū, θ)                             (17)

Observe that the cost modification comprises only a linear term in u, as adding the constant term (Φ_p(ū) - Φ(ū, θ) - λ^{Φ,T} ū) to the cost function would not affect the solution point. The modifiers and KKT-related quantities in (14), (15), and (17) can be denoted collectively as n_K-dimensional vectors,

    Λ^T := (ε^{G_1}, ..., ε^{G_{n_g}}, λ^{G_1,T}, ..., λ^{G_{n_g},T}, λ^{Φ,T})
    C^T := (G_1, ..., G_{n_g}, ∂G_1/∂u, ..., ∂G_{n_g}/∂u, ∂Φ/∂u)

with n_K = n_g + n_u(n_g + 1). This way, (14), (15), and (17) can be rewritten as

    Λ(ū) = C_p(ū) - C(ū, θ)                                           (18)

Implementation of these modifications requires the cost and constraint gradients of the plant, ∂Φ_p/∂u (ū) and ∂G_p/∂u (ū), to be available at ū. These gradients can be inferred from the measured plant outputs y_p(ū) and the estimated output gradients ∂y_p/∂u (ū):

    ∂Φ_p/∂u (ū) = ∂φ/∂u (ū, y_p(ū)) + ∂φ/∂y (ū, y_p(ū)) ∂y_p/∂u (ū)
    ∂G_p/∂u (ū) = ∂g/∂u (ū, y_p(ū)) + ∂g/∂y (ū, y_p(ū)) ∂y_p/∂u (ū)

A discussion on how to estimate the output gradients for the plant is deferred to subsection 3.8.

Figure 1. Linear modification of the constraint function G_i so that the value and gradient of the modified function G_{m,i} match those of G_{p,i} at ū.
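The modifier computations (14), (15), and (17) can be sketched numerically. The quadratic plant/model pair below is illustrative (it mirrors the structure of the paper's later Example 1 with the Table 1 parameters, but any cost/constraint pair would do), and the plant gradients are estimated by central finite differences; the check confirms that the modified constraint (13) matches the plant value and slope at the current point.

```python
import numpy as np

# Illustrative "plant" (unknown to the optimizer) and model functions
cost_plant = lambda u: (u[0] - 3.5)**2 + 4.0*(u[1] - 2.5)**2
cost_model = lambda u: (u[0] - 2.0)**2 + 4.0*(u[1] - 2.5)**2
con_plant = lambda u: (u[0] - 2.5)**2 + 1.6*(u[1] - 1.0) - 2.0
con_model = lambda u: (u[0] - 1.5)**2 + 2.0*(u[1] - 0.5) - 2.0

def fd_grad(f, u, h=1e-5):
    # central finite-difference gradient, standing in for estimated plant gradients
    return np.array([(f(u + h*e) - f(u - h*e)) / (2.0*h) for e in np.eye(len(u))])

u_k = np.array([2.0, 2.0])  # current operating point

eps_G = con_plant(u_k) - con_model(u_k)                        # eq (14)
lam_G = fd_grad(con_plant, u_k) - fd_grad(con_model, u_k)      # eq (15)
lam_Phi = fd_grad(cost_plant, u_k) - fd_grad(cost_model, u_k)  # eq (17)

# Modified constraint (13): matches the plant value and slope at u_k
G_mod = lambda u: con_model(u) + eps_G + lam_G @ (u - u_k)
```

Because both functions are quadratic with identical curvature, the gradient modifiers here are constant in u: λ^G = (-2, -0.4) and λ^Φ = (-3, 0).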


Figure 2. Modifier-adaptation algorithm for real-time optimization (RTO).

3.2. Modifier-Adaptation Algorithm. The proposed modifier-adaptation algorithm is depicted in Figure 2. It consists of applying the foregoing modification procedure to determine the new operating point. In the kth iteration, the next point u_{k+1} is obtained as

    u_{k+1} := u*_{k+1}                                               (19)

where

    u*_{k+1} = arg min_u  Φ_m(u, θ) := Φ(u, θ) + λ_k^{Φ,T} u
    s.t.   G_m(u, θ) := G(u, θ) + ε_k^G + λ_k^{G,T}(u - u_k) ≤ 0
           u^L ≤ u ≤ u^U                                              (20)

Here, u_k is the current operating point; ε_k^G and λ_k^G are the constraint-value and constraint-gradient modifiers in the current iteration; and λ_k^Φ is the cost-gradient modifier in the current iteration. These modifiers are adapted at each iteration, using (estimates of) the constraint values and cost and constraint gradients of the plant at u_k. The simplest adaptation strategy is to implement the full modification given by (18) at each iteration:

    Λ_{k+1} = C_p(u_{k+1}) - C(u_{k+1}, θ)                            (21)

However, this simple strategy may lead to excessive correction when operating far away from the optimum, and it may also make modifier adaptation very sensitive to measurement noise. A better strategy consists of filtering the modifiers, e.g., with a first-order exponential filter:

    Λ_{k+1} = (I - K)Λ_k + K[C_p(u_{k+1}) - C(u_{k+1}, θ)]            (22)

where K ∈ R^{n_K × n_K} is a gain matrix. A possible choice for K is the block-diagonal matrix

    K := diag(b_1, ..., b_{n_g}, q_1 I_{n_u}, ..., q_{n_g} I_{n_u}, d I_{n_u})

where the gain entries b_1, ..., b_{n_g}, q_1, ..., q_{n_g}, d are taken in (0, 1]. A block-diagonal gain matrix has the advantage that it naturally decouples the modifier-adaptation laws as

    ε_{k+1}^{G_i} = (1 - b_i) ε_k^{G_i} + b_i [G_{p,i}(u_{k+1}) - G_i(u_{k+1}, θ)],   i = 1, ..., n_g      (23)
    λ_{k+1}^{G_i,T} = (1 - q_i) λ_k^{G_i,T} + q_i [∂G_{p,i}/∂u (u_{k+1}) - ∂G_i/∂u (u_{k+1}, θ)],   i = 1, ..., n_g      (24)
    λ_{k+1}^{Φ,T} = (1 - d) λ_k^{Φ,T} + d [∂Φ_p/∂u (u_{k+1}) - ∂Φ/∂u (u_{k+1}, θ)]                        (25)

Of course, more-general choices of the gain matrix are possible, but they are typically more difficult to make. The condition for local convergence introduced in subsection 3.4 can be used to check a posteriori that a proposed gain matrix is indeed appropriate.

It may happen that the constraint and cost gradients cannot be reliably estimated because of particular process characteristics or a high noise level (see subsection 3.8). In this case, one may decide to not adapt the gradient modifiers, e.g., by setting q_1 = ··· = q_{n_g} = d = 0; the modifier-adaptation algorithm then reduces to a simple constraint-adaptation scheme.18

The computational complexity of the modifier-adaptation algorithm is dictated by the solution of the NLP subproblems. That is, modifier adaptation exhibits a complexity similar to that of the conventional two-step approach (it is actually less computationally demanding, in that the solution of a parameter estimation problem is no longer needed at each iteration).

3.3. KKT Matching. Perhaps the most attractive property of modifier-adaptation schemes is that, upon convergence (under noise-free conditions), the resulting KKT point u_∞ for the modified model-based optimization problem (20) is also a KKT point for the plant optimization problem (1). This is formalized in the following theorem.

Theorem 1 (KKT Matching). Let the gain matrix K be nonsingular and assume that the modifier-adaptation algorithm described by (19), (20), and (22) converges, with u_∞ := lim_{k→∞} u_k being a KKT point for the modified problem (20). Then, u_∞ is also a KKT point for the plant optimization problem (1).

Proof. Because K is nonsingular, letting k → ∞ in (22) gives

    Λ_∞ = C_p(u_∞) - C(u_∞, θ)                                        (26)

That is,

    ε_∞^{G_i} = G_{p,i}(u_∞) - G_i(u_∞, θ),   i = 1, ..., n_g         (27)
    λ_∞^{G_i,T} = ∂G_{p,i}/∂u (u_∞) - ∂G_i/∂u (u_∞, θ),   i = 1, ..., n_g      (28)
    λ_∞^{Φ,T} = ∂Φ_p/∂u (u_∞) - ∂Φ/∂u (u_∞, θ)                        (29)

It is then readily seen that, upon convergence, the KKT elements C_m for the modified problem (20) match the corresponding elements C_p for the plant,

    C_m(u_∞, θ) := C(u_∞, θ) + Λ_∞ = C_p(u_∞)                         (30)

or, considering the individual terms,

    G_m(u_∞, θ) := G(u_∞, θ) + ε_∞^G = G_p(u_∞)                       (31)
    ∂G_m/∂u (u_∞, θ) := ∂G/∂u (u_∞, θ) + λ_∞^{G,T} = ∂G_p/∂u (u_∞)    (32)
    ∂Φ_m/∂u (u_∞, θ) := ∂Φ/∂u (u_∞, θ) + λ_∞^{Φ,T} = ∂Φ_p/∂u (u_∞)    (33)

Table 1. Values of the Uncertain Parameters θ in Problem (34) Corresponding to the Plant and the Model

              θ_1     θ_2     θ_3     θ_4
    plant     3.5     2.5    -0.4     1.0
    model     2.0     1.5    -0.5     0.5

Because, by assumption, u_∞ is a KKT point for the modified problem (20), it satisfies eqs (3)-(6) with the associated Lagrange

multipliers µ_∞, ζ_∞^U, and ζ_∞^L. Hence, from (31)-(33), u_∞ is also a KKT point for the original problem (1), with the same set of Lagrange multipliers. ∎

A direct consequence of Theorem 1 is that, at u_∞, the active constraints and the corresponding Lagrange multipliers are the same for the modified problem (20) and the plant problem (1). Furthermore, note that the equalities (31)-(33) represent more than what is strictly needed for KKT matching: indeed, because the Lagrange multipliers µ_∞ that correspond to inactive constraints are zero, one simply needs to match the values and gradients of the active constraints. However, adaptation of the inactive constraints and their gradients is required to detect the active set, which is not known a priori.

Also note that no guarantee can be given that a global optimizer for the plant has been determined, even if the successive operating points u_{k+1} correspond to global solutions of the modified problem (20). Indeed, the converged operating point u_∞ may correspond to any stationary point for the plant (e.g., a local minimum). A special case where modifier adaptation guarantees a global optimum for the plant is when the optimization problem (1) is convex, although this condition can never be verified in practice. The following numerical example illustrates the convergence of modifier adaptation to a KKT point in the convex case.

Example 1. Consider the following convex optimization problem:

    min_{u ≥ 0}  Φ(u, θ) := (u_1 - θ_1)² + 4(u_2 - 2.5)²
    s.t.   G := (u_1 - θ_2)² - 4(u_2 - θ_4)θ_3 - 2 ≤ 0                (34)

which comprises two decision variables (u = [u_1 u_2]^T), four model parameters (θ = [θ_1 θ_2 θ_3 θ_4]^T), and a single inequality constraint (G). The parameter values θ for the plant (simulated reality) and the model are reported in Table 1.

Figure 3 illustrates the convergence of several implementations of the modifier-adaptation scheme. Note that, because of parametric errors, the plant optimum (point P) and the model optimum (point M) are quite different. Starting from the model optimum, constraint adaptation alone is first applied, i.e., with d = 0 and q = 0; the a iterations with b = 0.8 converge to the feasible, yet suboptimal, operating point C. Enabling the correction of the cost-function gradient (b = d = 0.8; q = 0) yields the b iterations, while enabling the correction of the constraint gradient (b = q = 0.8; d = 0) yields the c iterations. These two intermediate cases show how the plant optimum can be approached, relative to case a, by correcting the different gradients involved in the KKT conditions. Finally, the full modifier-adaptation algorithm applied with b = d = q = 0.8 produces the d iterations, which converge to the plant optimum. ∎

Figure 3. Modifier adaptation applied to Problem (34). Legend: thick solid line, plant constraints; thick dash-dotted line, model constraints; dotted lines, cost function contours; point P, plant optimum; point M, model optimum; and point C, optimum found using only constraint adaptation.

For some problems, the iterations may converge by following an infeasible path (i.e., with violation of the constraints), even if the modifier-adaptation algorithm starts at a feasible point. One way of reducing the violation of a constraint is to reduce the gain coefficients in the matrix K, although this comes at the expense of a slower convergence rate. Constraint violation can also be prevented by combining the modifier-adaptation scheme with a constraint controller.24 Although the iterations may follow an infeasible path, a straightforward consequence of Theorem 1 is that a convergent modifier-adaptation scheme yields feasible operation after a finite number of RTO iterations upon backing off the constraints in the original problem.

Theorem 1 establishes that, under mild conditions, a convergent implementation of the scheme described by (19), (20), and (22) finds a KKT point for the plant optimization problem (1). Yet, convergence is not ensured. It may happen, for instance, that the modified NLP problem (20) becomes infeasible because a modifier is too large, or that the modifier sequence exhibits undamped oscillations when some gain coefficients in the matrix K are too large. Some guidance regarding the choice of the gain matrix K is given subsequently, based on a local convergence analysis.

3.4. Local Convergence Analysis. This subsection derives necessary conditions for the modifier-adaptation algorithm described by (19), (20), and (22) to converge. To conduct the analysis, an auxiliary constraint modifier ε̂^G is introduced, which corresponds to the sum of the constant terms in the constraint modification (20):

    ε̂_k^G := ε_k^G - λ_k^{G,T} u_k

The corresponding vector of modifiers is denoted by Λ̂^T := (ε̂^{G_1}, ..., ε̂^{G_{n_g}}, λ^{G_1,T}, ..., λ^{G_{n_g},T}, λ^{Φ,T}) ∈ R^{n_K}, and it is related to Λ as

    Λ_k = T(u_k) Λ̂_k

with the matrix T(u) ∈ R^{n_K × n_K} given by the block upper-triangular matrix

             [ I_{n_g}    D(u)              0        ]
    T(u) :=  [ 0          I_{n_u n_g}       0        ]
             [ 0          0                 I_{n_u}  ]

where D(u) := blockdiag(u^T, ..., u^T) contains n_g copies of the row vector u^T, so that each constraint modifier satisfies ε_k^{G_i} = ε̂_k^{G_i} + u_k^T λ_k^{G_i}. Moreover, the modified optimization problem (20), expressed in terms of the auxiliary modifiers Λ̂, reads as follows:

    u*_{k+1} = arg min_u  Φ_m(u, θ) := Φ(u, θ) + λ_k^{Φ,T} u
    s.t.   G_m(u, θ) := G(u, θ) + ε̂_k^G + λ_k^{G,T} u ≤ 0
           u^L ≤ u ≤ u^U                                              (35)

Next, it is assumed that this problem has a unique global solution point for each Λ̂_k, given by u*_{k+1} = U*(Λ̂_k). This uniqueness assumption precludes multiple global solution points, but it assumes nothing regarding the existence of local optima. Uniqueness of the global optimum is required to establish convergence: if the model-based optimization problem exhibited several global solutions, the optimizer could randomly pick any one of these global solution points at any iteration, thereby making convergence hopeless.

Consider the map Γ(Λ̂) that represents the difference between the plant and predicted KKT quantities C for a given set of auxiliary modifiers Λ̂:

    Γ(Λ̂) := C_p(u) - C(u, θ),   u = U*(Λ̂)                            (36)

In terms of the auxiliary modifiers, the filtered adaptation law (22) becomes

    T(u_{k+1}) Λ̂_{k+1} = (I - K) T(u_k) Λ̂_k + K Γ(Λ̂_k)

Noting that T is invertible, with (T(u))^{-1} = T(-u), this latter law can be written in the generic form

    Λ̂_{k+1} = M(Λ̂_k, Λ̂_{k-1})                                        (37)

with

    M(Λ̂_k, Λ̂_{k-1}) := T(-U*(Λ̂_k))(I - K) T(U*(Λ̂_{k-1})) Λ̂_k + T(-U*(Λ̂_k)) K Γ(Λ̂_k)

where u_k = U*(Λ̂_{k-1}) and u_{k+1} = U*(Λ̂_k).

Let Λ̂_∞ be a fixed point of the algorithm M, and let u*_∞ denote the corresponding optimal inputs. The map U* is differentiable in the neighborhood of Λ̂_∞, provided that (i) regularity conditions, (ii) second-order sufficient conditions for a strict local minimum, and (iii) strict complementarity slackness conditions hold at u*_∞ for the modified optimization problem (35) (or (20)).25 Under these conditions, a first-order approximation of M around Λ̂_∞ is obtained as

    δΛ̂_{k+1} = ∂M/∂Λ̂_k (Λ̂_∞) δΛ̂_k + ∂M/∂Λ̂_{k-1} (Λ̂_∞) δΛ̂_{k-1}      (38)

where δΛ̂_k := Λ̂_k - Λ̂_∞.

Clearly, a necessary condition for the modifier-adaptation algorithm to converge to Λ̂_∞ is that the gain matrix K be chosen such that the 2n_K × 2n_K matrix

             [ ∂M/∂Λ̂_k (Λ̂_∞)    ∂M/∂Λ̂_{k-1} (Λ̂_∞) ]
    Υ_∞ :=   [ I_{n_K}           0_{n_K × n_K}      ]

has a spectral radius smaller than one. For Problem (34) of Example 1, the converged values are

    u_∞ = [2.8726  2.1632]^T,   Λ̂_∞ = [3.4  -2  -0.4  -3  0]^T

At that particular point, the matrix Υ_∞ is calculated and its eigenvalues are determined to be 0.427, -0.395, 0.2 (with multiplicity 4), and 0 (with multiplicity 4). Therefore, the spectral radius of Υ_∞ is 0.427, which is smaller than one, so the local convergence condition is satisfied.
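To close the loop, the scheme of eqs (19)-(25) can be sketched for Problem (34) with the Table 1 parameters. The sketch below is a simplified stand-in for the paper's Example 1 (case d): it uses analytic plant gradients in place of estimated ones, scalar filter gains b = q = d = 0.8, and SciPy's SLSQP solver for the modified subproblem (20); under these assumptions the iterates approach the plant optimum u_∞ ≈ (2.8726, 2.1632) and the constant gradient modifiers λ^G = (-2, -0.4), λ^Φ = (-3, 0).

```python
import numpy as np
from scipy.optimize import minimize

th_p = np.array([3.5, 2.5, -0.4, 1.0])  # "plant" (simulated reality), Table 1
th_m = np.array([2.0, 1.5, -0.5, 0.5])  # model, Table 1

cost = lambda u, th: (u[0] - th[0])**2 + 4.0*(u[1] - 2.5)**2
con = lambda u, th: (u[0] - th[1])**2 - 4.0*(u[1] - th[3])*th[2] - 2.0
d_cost = lambda u, th: np.array([2.0*(u[0] - th[0]), 8.0*(u[1] - 2.5)])
d_con = lambda u, th: np.array([2.0*(u[0] - th[1]), -4.0*th[2]])

def solve_modified(eps, lamG, lamPhi, u_k):
    # modified problem (20): corrected cost and constraint around the current point u_k
    obj = lambda u: cost(u, th_m) + lamPhi @ u
    gm = lambda u: con(u, th_m) + eps + lamG @ (u - u_k)
    res = minimize(obj, u_k, method="SLSQP",
                   constraints=[{"type": "ineq", "fun": lambda u: -gm(u)}],
                   bounds=[(0.0, None), (0.0, None)])
    return res.x

b = q = d = 0.8                           # filter gains of eqs (23)-(25), Example 1 case d
eps, lamG, lamPhi = 0.0, np.zeros(2), np.zeros(2)
u = solve_modified(eps, lamG, lamPhi, np.array([2.0, 2.5]))  # start at the model optimum

for k in range(30):
    # filtered modifier update, eqs (23)-(25), with analytic "plant" gradients
    eps = (1 - b)*eps + b*(con(u, th_p) - con(u, th_m))
    lamG = (1 - q)*lamG + q*(d_con(u, th_p) - d_con(u, th_m))
    lamPhi = (1 - d)*lamPhi + d*(d_cost(u, th_p) - d_cost(u, th_m))
    u = solve_modified(eps, lamG, lamPhi, u)
```

Because plant and model share the same curvature here, the gradient modifiers converge to constants and the fixed point of the iteration is the plant KKT point, consistent with Theorem 1.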