Ind. Eng. Chem. Res. 2006, 45, 8270-8281
Novel Nonmonotone Line-Search Method for Constrained Nonlinear Programming: Algorithmic Concepts and Preliminary Computational Studies

Vassilios S. Vassiliadis* and Intan S. Ahamad
Department of Chemical Engineering, University of Cambridge, Pembroke Street, Cambridge CB2 3RA, United Kingdom

* To whom correspondence should be addressed. Tel.: +44-(0)1223-330142. Fax: +44-(0)1223-334796. E-mail: vsv20@cheng.cam.ac.uk.

Raúl Conejeros
Escuela de Ingeniería Bioquímica, Pontificia Universidad Católica de Valparaíso, Av. Brasil N° 2147, Valparaíso, Chile

A new nonmonotone line-search procedure is presented for the generally constrained case of nonlinear programming problems. The new algorithm is based on the use of standard penalty methods for the definition of merit functions used during line search to find the next iterate in algorithms generating a search direction iteratively. The key concept is the discretization of the penalty parameter over a finite range of orders of magnitude and the provision of a memory list for each such order, as in standard nonmonotone line-search procedures used for unconstrained optimization. Nonmonotonicity helps in escaping from local minima, while the discretized penalty parameters overcome the difficulty of choosing a single varying penalty parameter: the merit function retains the same minimizer as the original problem without underpenalizing the constraints, so that the iterates arrive at the desired KKT point. An implementation within a customized logarithmic barrier algorithm for the handling of bounds is presented, with capabilities for very large scale applications; the algorithm uses exact first and second derivative information, derived symbolically, and the search direction is generated by solution of the Lagrange-Newton equations. The case studies presented demonstrate the capabilities of the new line-search procedure, and comparisons with other methods are discussed. It is noted that we found a significantly better solution in case study 5. The new nonmonotone line-search procedure is, at present, a heuristic evaluated from the computational point of view: future work will focus on the investigation of both the theoretical properties of the method and new implementation aspects.

1. Introduction

In this work, we are interested in the solution of equality constrained problems of the form

min_x f(x)  (1a)

subject to h(x) = 0  (1b)

where x ∈ R^n, f(x): R^n → R^1, and h(x): R^n → R^m. Any inequalities present will be transformed to equalities by use of appropriate slack variables and new bounds. Any bounds of the form x^L ≤ x ≤ x^U are treated by logarithmic barrier terms, leading to

min_x Φ_B(x) = f(x) − µ Σ_{i=1}^{n} [ln(x_i^U − x_i) + ln(x_i − x_i^L)]  (2a)

subject to h(x) = 0  (2b)

Thus, any nonlinear programming (NLP) problem is then rendered into a sequence of equality-constrained problems as considered in eqs 2a and 2b. The Lagrangian function of problem 2 is

L(x,λ) = Φ_B(x) + λ^T h(x)  (3)

where λ ∈ R^m is the vector of Lagrange multipliers associated with equality constraints h(x). The necessary conditions of optimality are derived as

∂L/∂x = ∂Φ_B/∂x + Σ_{i=1}^{m} λ_i ∂h_i/∂x = 0  (4a)

∂L/∂λ = h(x) = 0  (4b)

The Newton iteration subproblem for the above system of nonlinear equalities is

[ H_L  J^T ] ( p_x^(k) )        ( ∇_x L(x,λ) )
[ J     0  ] ( p_λ^(k) )  =  −  ( h(x)       )    evaluated at (x^(k), λ^(k))  (5)

with H_L the Hessian matrix of the Lagrangian function (eq 3),

H_L = ∂²Φ_B/∂x² + Σ_{i=1}^{m} λ_i ∂²h_i/∂x²  (6)

and J the Jacobian matrix of the equality constraints evaluated at x^(k),

J = ∂h/∂x  (7)
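For illustration, the following is a minimal sketch (not the NLPOPT1 implementation described later) of one Lagrange-Newton iteration, eqs 4-7, on a toy equality-constrained problem without bounds, so that Φ_B reduces to f; the model functions and the dense linear solve are illustrative assumptions of ours.

```python
import numpy as np

# Toy problem (illustrative, not from the paper):
#   min f(x) = x1^2 + x2^2   subject to   h(x) = x1 + x2 - 1 = 0
def grad_f(x):  return np.array([2 * x[0], 2 * x[1]])
def hess_f(x):  return np.array([[2.0, 0.0], [0.0, 2.0]])
def h(x):       return np.array([x[0] + x[1] - 1.0])
def jac_h(x):   return np.array([[1.0, 1.0]])           # J, eq 7

def newton_step(x, lam):
    """One Lagrange-Newton iteration (eq 5)."""
    J = jac_h(x)
    H = hess_f(x)                                       # eq 6 (linear h: no curvature term)
    KKT = np.block([[H, J.T], [J, np.zeros((1, 1))]])   # eq 5 coefficient matrix
    grad_L = grad_f(x) + J.T @ lam                      # eq 4a residual
    sol = np.linalg.solve(KKT, -np.concatenate([grad_L, h(x)]))
    return sol[:2], sol[2:]                             # p_x, p_lambda

x, lam = np.array([2.0, 0.0]), np.array([0.0])
px, plam = newton_step(x, lam)
print(x + px, lam + plam)   # -> [0.5 0.5] [-1.]: the KKT point of this toy problem
```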


Updating of the solution is

x^(k+1) = x^(k) + α^(k) p_x^(k)  (8a)

λ^(k+1) = λ^(k) + α^(k) p_λ^(k)  (8b)

where the step size α^(k) is determined by line search such that there is a sufficient decrease in some measure of the objective function and constraint norm values. For example, a merit function φ(x) balances the dual aims of reducing the objective function value and satisfying the constraints. The idea is to choose α^(k) such that φ(x^(k) + α^(k) p_x^(k)) is improved over φ(x^(k)). To this end, merit functions are used to decide on acceptance or rejection of a trial step size. For example, if a step size α^(k) is not acceptable, then it is reduced in simple backtracking schemes by α_new = α_old·θ, where 0 < θ < 1. The use of merit functions, which invariably require some penalty parameter, is not exclusive.

1.1. Merit Function. A very crucial feature of line search is choosing the appropriate representation to balance the objective function against the feasibility of the constraints. This is necessary because constrained nonlinear optimization methods are, in general, unable to generate a search direction that maintains feasibility at every iteration.

1.1.1. Exact l1-Penalty Function. In this work, we are considering the exact l1-penalty function,

φ(x, r) = f(x) + r Σ_{i=1}^{m} |h_i(x)|  (9)

where r is a positive constant. The drawback of this function is that it is nondifferentiable at feasible points; on the other hand, it does not require the penalty parameter to grow to infinity.1 For r larger than a threshold r̄, which is problem dependent, this penalty function has the same minimum as the original problem given in eqs 1a and 1b. This condition is met when r ≥ ‖λ*‖∞, where λ_i*, i = 1, 2, ..., m, are the optimal Lagrange multiplier values associated with each equality constraint.2,3

1.1.2. Problems with Penalty Functions Used as Merit Functions. Many problems are associated with penalty merit functions, mainly because of the unknown value of the penalty parameters. For instance, for the l1-exact penalty function, even though a finite r̄ exists, the value that achieves convergence is unknown a priori. A wrong estimate will lead to a wrong "balance" of objective and constraints, with the penalty function not sharing the same solution as the given problem. Even if r̄ is known, its value can still be unreliable, as it may be inadequate at any given iteration unless x is in the neighborhood of x*.4 Furthermore, not only is it theoretically required to have r ≥ ‖λ*‖∞, but too big a value of r beyond this threshold will overpenalize constraint satisfaction, biasing an optimization algorithm toward feasibility. This invariably will severely restrict the line-search step size, causing too many rejections. Apart from that, nonsmooth merit functions suffer from a problem known as the Maratos effect, for which a number of effective remedies have been found (Section 1.3). However, even if the Maratos effect is overcome, nonsmooth merit functions can still interfere with good Newton steps.4 Effectively, the problem is that the objective function and the constraints differ in meaning and units, making it harder to choose the initial r^(0).5 Realizing these difficulties, a new idea on nonmonotone line search using a filter has been put forward for constrained problems, as outlined next.

1.2. Recent Developments: Filter Methods. In recent years, filter methods have been introduced as an alternative to penalty functions.6 Filter methods do not require any penalty parameters and allow some nonmonotonicity, viewing optimality and feasibility as two competing aims, where one is allowed to increase in favor of the other. Filter methods can also suffer from the Maratos effect, where both the objective function and the constraint violation increase. To deal with this, a second-order correction has been employed to reduce infeasibility.6 The search-direction generating subproblem can sometimes become infeasible, such as when the linearizations of the nonlinear constraints are inconsistent. As a remedy, the method is forced to enter a restoration phase to correct the problem, which is achieved by minimizing the constraints' norm to regain feasibility.6 Gomes5 questioned the usage of the filter because it is too tolerant (requiring either infeasibility or optimality to be improved, or both) and noted that the inequality in the sufficient condition of the acceptance criteria still has a merit-function flavor. We also note here that, contrary to the claims that the filter method is a nonmonotone method, it is actually not when the constraints are all linear equalities, as solving linear equality constrained problems can be viewed as solving an unconstrained problem.2 In this type of problem, the constraints will always be satisfied, and only the objective value reduction is of concern. In such a case, the filter method will behave like a monotone line search, requiring only sufficient reduction in the objective at successive iterations.

1.3. Problems during Line Search: The Maratos Effect. Maratos, in his Ph.D. thesis,7 observed a phenomenon associated with constrained optimization problems solved via penalty methods. With Newton or quasi-Newton optimization methods, as points get closer to a critical point, quadratically or superlinearly convergent steps are expected. However, this is not always true, because a full line-search step can sometimes be rejected by any merit function of the type

φ(x; r) = f(x) + r P(h(x))  (10)

where P(·) is a nonnegative function; this class includes the exact l1-merit function. The rejection is caused by the failure of the linearization of the constraints to estimate their nonlinear curvature. Therefore, step reduction will block superlinear convergence. Because of this effect, the algorithm will converge much more slowly than anticipated if no safeguards or checks are applied.8,9 However, there are ways to overcome the Maratos effect. One way is to use a Maratos-effect-free merit function, for example, the augmented Lagrangian merit function.10 Another way is to use a second-order correction (SOC) step (as reviewed in ref 11). In this method, an extra step is taken to ensure a further decrease in the constraints if the Maratos effect occurs at α^(k) = 1. It has been proven that, for a sufficiently large number of iterations k, α^(k) = 1 will always be accepted. A final way is to use a nonmonotone strategy where the merit function is allowed to increase in certain iterations, such as the Watchdog strategy12 or other methods. A later method, by Panier and Tits,13 combines the SOC with a nonmonotone line search. It has been observed that, after a few initial iterations, α^(k) = 1 will be accepted whenever the line search is performed.

1.4. Nonmonotone Line Search: The Unconstrained Optimization Case. Grippo et al.14 described the earliest nonmonotone line-search technique for unconstrained optimization. The technique, which satisfies the generalized Armijo condition,15 is to find the smallest nonnegative integer c such that


f(x^(k) + θ^c α p_x^(k)) ≤ max_{0≤j≤m_k} [f(x^(k−j))] + γ θ^c α (∂f(x)/∂x)|_{x^(k)} p_x^(k)  (11)

is fulfilled. The step size then will be α^(k) = θ^{c_k} α, where c_k is the first nonnegative integer c satisfying eq 11, effectively describing a simple backtracking line search. The parameters are set to α > 0, θ ∈ (0, 1), γ ∈ (0, 1), and M ≥ 0. m_k is the number of function values kept in memory, where m_0 = 0 and 0 ≤ m_k ≤ min[m_{k−1} + 1, M], k ≥ 1.
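For concreteness, the following is a minimal sketch of this nonmonotone backtracking rule (our own illustration; the names and the usage example are not from ref 14): the trial point is tested against the worst of the last m_k stored function values instead of f(x^(k)) alone.

```python
import numpy as np

def nonmonotone_armijo(f, grad_f, x, p, history, alpha=1.0,
                       theta=0.5, gamma=1e-4, max_backtracks=30):
    """Find alpha^(k) = theta^c * alpha satisfying eq 11.

    history: list of the last m_k function values f(x^(k-j)).
    Returns the accepted step size (or None on failure).
    """
    f_ref = max(history)              # max_{0<=j<=m_k} f(x^(k-j))
    slope = float(grad_f(x) @ p)      # directional derivative at x^(k)
    step = alpha
    for _ in range(max_backtracks):
        if f(x + step * p) <= f_ref + gamma * step * slope:
            return step
        step *= theta                 # backtrack
    return None

# Usage sketch on f(x) = ||x||^2 along the steepest-descent direction:
f = lambda x: float(x @ x)
g = lambda x: 2 * x
x = np.array([3.0, -1.0])
hist = [f(x)]                         # m_0 = 0: only the current value stored
a = nonmonotone_armijo(f, g, x, -g(x), hist)
print(a, f(x + a * -g(x)))            # -> 0.5 0.0
```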

Han et al.16 provided and proved the global convergence of a generalized nonmonotone line search with a flexible forcing function, where

m_0 = 0, 0 ≤ m_k ≤ min[m_{k−1} + 1, M], k ≥ 1  (12a)

and

f(x^(k) + α^(k) p_x^(k)) ≤ max_{0≤j≤m_k} [f(x^(k−j))] − σ(t^(k))  (12b)

must be satisfied for m_k and α^(k), with α^(k) being nonnegative and bounded above. σ(·) is the forcing function, t^(k) = −g^(k)T p_x^(k)/‖p_x^(k)‖, and M ≥ 0. The nonmonotone Armijo rule by Grippo et al.14 is one of its special cases. Some numerical drawbacks of Grippo's nonmonotone line search in terms of computational expense have been observed, as there exist situations where a monotone line search is preferable.17 Therefore, a modification was proposed that combines it with a standard Armijo line search, to be used when eq 11 is not satisfied by the prior trial step length. Furthermore, the maximum in eq 11 discards some probably good function values.18 In connection with that, the proposed method removes the term max_{0≤j≤m_k}[f(x^(k−j))] from eq 11 and replaces it with an averaging operation over all the previous function values with a controllable weight.

The nonmonotone idea can be extended directly to cases where the feasibility of the constraints can be guaranteed at every iteration. For example, for the linear equality constrained case mentioned earlier, an initial feasible point is computed, and all iterates thereafter remain feasible. This is possible because the search direction can be constructed so that the constraints are automatically satisfied at all trial points computed during the iteration.2 The question that remains open is how a nonmonotone line search can be constructed for general nonlinear programming (NLP) problems, given that the use of a penalty merit function requires a varying penalty parameter. Our approach in this paper starts from the work by Panier and Tits.13 It is presented in the following section.

2. Nonmonotone Penalty Function Based on a Vector of Coarse-Grained Parameter Level Values: The "Wildcat" Line-Search Method

The crucial problem encountered with the use of penalty merit functions, focusing here on the l1-penalty merit function, is the fact that the penalty parameter choice is fundamental to the success of the method. Furthermore, to implement a nonmonotone scheme, it seems rather crucial that the penalty parameter remain constant; otherwise, the values of varying-parameter merit functions are, in effect, dissimilar under changes of the parameter value. The selection of its value is left open, as by Panier and Tits.13

On the other hand, it is also important within a penalty merit function scheme to be able to update the parameter so as to better reflect the local and past history of the Lagrange multipliers of the constraints. A delay mechanism in the updating of the parameter is usually employed if it is to be reduced. Similarly, conservative schemes increase the penalty parameter immediately if the Lagrange multiplier magnitudes have increased, so as not to underpenalize the constraints; underpenalization would cause the exact penalty function not to have the same minimizer as that of the original optimization problem.

Taking the above observations together, we propose the following coarse-grained nonmonotone line-search scheme.

1. Discretize the possible range of penalty parameters. Since any number has limited precision in its representation in a finite-precision computer, it stands to reason that it is possible to find upper and lower values on a meaningful number related to the maximum/minimum value a Lagrange multiplier would normally be expected to acquire in magnitude. An order-of-magnitude approach is proposed in this work to coarse-grain the values of the allowable penalty parameters, although intermediate values may be considered in the scheme. For example, we consider 10^−16 as the minimum penalty parameter and 10^+16 as the maximum; thus, the vector of coarse-grained values can be, for example,

P = {10^−16, 10^−15, ..., 10^−1, 1, 10, ..., 10^+15, 10^+16}  (13)

In our computational experiments, we have used a more refined parametrization given by

P_i = 10^(−16+0.1i), i = 0, 1, 2, ..., n_P  (14)

The number of penalty parameter levels is denoted by n_P, with n_P = 320. A dynamic automated updating scheme is to be implemented and experimented with in the near future.

2. Create for each penalty parameter level a different history list with up to n_H past values stored in it:

H_ij, j = 1, 2, ..., n_H, i = 1, 2, ..., n_P  (15)

At initialization, a relaxation is used, with the initial values relaxed upward by setting

H_ij = Φ_B(x) + P_i Σ_{k=1}^{m} |h_k(x)| + ω max[1, Φ_B(x) + P_i Σ_{k=1}^{m} |h_k(x)|],
  j = 1, 2, ..., n_H, i = 1, 2, ..., n_P  (16)

The value of ω used throughout this work is set to ω = 0.001. The above indices indicate that, at initialization, all values within a row i are set to be equal.

3. At each iteration, the level i of the penalty parameter is selected according to some rule which takes into account the values of the Lagrange multipliers before the step is taken at the beginning of each Newton iteration given in eq 5. In our experimental implementation, we use

î = arg min_{i∈{1,2,...,n_P}} { P_i : max{‖λ^(k)‖∞, ‖λ^(k) + p_λ^(k)‖∞} < P_i }  (17)

The infinity norm ‖λ‖∞ is used to measure the Lagrange multiplier values, as it is the dual norm of the l1 norm used in the exact penalty merit function we consider in this work.


The "operating" merit function becomes

φ(x; P_î) = Φ_B(x) + P_î Σ_{k=1}^{m} |h_k(x)|  (18)

4. Acceptance of the step is based on an Armijo condition applied in a nonmonotone fashion to the operating history row î,

φ(x^(k) + α^(k) p_x^(k); P_î) ≤ list_comp(H_î) + γ α^(k) (Dφ/Dx)|_{x^(k); P_î} p_x^(k)  (19)

where H_î is the history list at the operating penalty parameter level and list_comp(·) extracts the nonmonotone comparison (reference) value from it. The step is also accepted only under the condition that the primal search direction is a descent direction for the merit function,

(Dφ/Dx)|_{x; P_î} p_x ≤ −ν  (20)

with the directional derivative of the nonsmooth merit function19 given by

Dφ/Dx = ∂Φ_B/∂x + P_î Σ_{k=1}^{m} sgn(h_k(x)) ∂h_k/∂x  (21)

In our implementation, we use ν = ε_F, where ε_F is the feasibility tolerance for satisfaction of the constraints at the solution.

5. Second-order correction steps before backtracking are implemented as a remedy for the Maratos effect, correcting for the curvature of the constraints. The scheme we implement is identical to that in ref 20. If a step size is not accepted directly, then up to N^SOC_max second-order corrections are allowed to correct for the violation of the constraints. If this is found to yield an acceptable value of the merit function in eq 18, then the step is accepted; otherwise, backtracking takes place as described in the previous step of the line-search algorithm.

6. At present, failure of the line-search procedure is guarded by an upper number of backtracking iterations, N^LS_max. For the moment, N^LS_max = 20 is used during the evaluation phase of the new algorithm. In the future, this should be complemented by an appropriate computation of a minimum step size α_min ≤ α during each major iteration (search direction generation), which, if reached or violated, should trigger a suitable feasibility restoration algorithm. More on future development issues will be examined in Section 5.

7. Upon acceptance of a step, the history lists are all potentially updated, for all parameter values. The updating rule follows the same principle as the step-acceptance rule in eq 19. In particular, the implementation is as shown in Figure 1; a simplified sketch of the overall bookkeeping is given after Figure 1.

Figure 1. Parallel history row-vector updating after successful step acceptance in the Wildcat line search.
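To make steps 1-7 concrete, the following is a minimal sketch of the Wildcat bookkeeping under simplifying assumptions of ours: list_comp is taken as the row maximum, the parallel history update of Figure 1 is reduced to an unconditional shift, and the descent test of eq 20 and the second-order corrections are omitted.

```python
import numpy as np

# Coarse-grained penalty levels, eq 14 (n_P = 320 in the paper)
P = np.array([10.0 ** (-16 + 0.1 * i) for i in range(321)])

def merit(phiB, hnorm1, Pi):
    """Exact l1 merit function value at penalty level Pi, eq 18."""
    return phiB + Pi * hnorm1

def init_history(phiB, hnorm1, nH=5, omega=1e-3):
    """History rows relaxed upward at initialization, eq 16."""
    rows = np.array([merit(phiB, hnorm1, Pi)
                     + omega * max(1.0, merit(phiB, hnorm1, Pi)) for Pi in P])
    return np.tile(rows[:, None], (1, nH))     # H[i, j]: all columns equal

def select_level(lam, lam_trial):
    """Smallest level dominating the multiplier norms, eq 17."""
    bound = max(np.linalg.norm(lam, np.inf), np.linalg.norm(lam_trial, np.inf))
    return min(int(np.searchsorted(P, bound, side="right")), len(P) - 1)

def accept(H, i_hat, phiB_new, hnorm1_new, dir_deriv, alpha, gamma=1e-4):
    """Nonmonotone Armijo test on the operating row i_hat, eq 19
    (list_comp taken here as the row maximum)."""
    return (merit(phiB_new, hnorm1_new, P[i_hat])
            <= H[i_hat].max() + gamma * alpha * dir_deriv)

def update_history(H, phiB_new, hnorm1_new):
    """Step 7, simplified: shift every row and store the accepted point's
    merit value at that row's own penalty level (the paper updates each
    row according to the eq 19 principle; see Figure 1)."""
    H[:, 1:] = H[:, :-1]
    H[:, 0] = merit(phiB_new, hnorm1_new, P)
```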

The algorithm, at present, is evaluated computationally and is treated as a heuristic. In the future, we would expect to see some theoretical results regarding its global convergence properties and the conditions under which it might be guaranteed to succeed. Its key feature is that it can jump from one penalty parameter to another, spanning orders of magnitude, and yet is able to maintain memory for all lists kept in parallel.

3. Primal-Dual Logarithmic Barrier Solver Implementation: NLPOPT1

The new line-search method has been implemented within the scope of a sparse primal-dual logarithmic barrier nonlinear programming optimizer, NLPOPT1, programmed in Mathematica 5.0 for rapid development and evaluation. It is noted that, from the inception of the new line-search method, the total programming effort required ∼3 weeks of work to develop a fully debugged version of the software. (The software is being maintained and standardized for several future research directions by the corresponding author in the Process Optimization Group at Chemical Engineering in Cambridge University.) Key features of the current version of NLPOPT1 are outlined briefly in the following subsections.

3.1. Modeling Interface and Performance. The modeling interface follows an intuitive, equation-based optimization model description. Parsing and translation are handled within the capabilities of Mathematica. First and second derivatives are evaluated exactly in sparse format using symbolic incidence-driven differentiation, both for the objective and the constraints. Ideally, this should be evolved in the future into an automatic differentiation (AD) procedure. Regarding computational efficiency, because Mathematica is an interpretive environment, very slow processing would be implied for large realistic problems. Fortunately, there exists a feature within the platform that allows symbolic vector expressions to be compiled on-the-fly in memory. With this approach, we have obtained numerical function evaluations at speeds approaching those of FORTRAN or C++.

3.2. Line-Search and Descent-Direction Correction. The line-search procedure is central to our development in NLPOPT1. In addition to the details provided in Section 2, we briefly discuss here the Levenberg-Marquardt scheme we implemented.


Table 1. Parameter Settings for NLPOPT1

parameter                               symbol       value
no. of past values stored               n_H          5
feasibility tolerance                   ε_F          10^−8
Lagrangian gradient norm tolerance      ε_L          10^−8
complementarity tolerance               ε_C          10^−6
line-search history relaxation factor   ω            10^−3
maximum number of SOC                   N^SOC_max    2
initial barrier parameter               µ_start      0.01
barrier parameter reducing factor       γ_µ          10

Ideally, one should modify directly the diagonal entries of the Lagrangian Hessian block when using a modified Cholesky factorization. Since we wished to evaluate the procedure quickly, the available sparse routines in Mathematica were used, which at present only cater for the simple factorization of general matrices. For this reason, we employ a check on the descent condition of the primal search direction on the penalty function, as suggested by Gajulapalli and Lasdon,21 and correct using repeated application of diagonal increments on the Lagrangian Hessian. Our scheme begins by adding 10^−6 to the diagonal and increases the increment by consecutive orders of magnitude (i.e., using a factor of 10) until the descent condition is met.
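A minimal sketch of this correction loop follows, under simplifying assumptions of ours: dense linear algebra stands in for the sparse Mathematica routines, and the merit-function directional derivative of eq 21 is supplied as a callback.

```python
import numpy as np

def corrected_direction(H, J, grad_L, h_res, merit_dir_deriv,
                        delta0=1e-6, factor=10.0, max_tries=20):
    """Add increasing multiples of the identity to the Lagrangian Hessian
    block until the primal step is a descent direction for the merit
    function (cf. Gajulapalli and Lasdon, ref 21)."""
    n, m = H.shape[0], J.shape[0]
    delta = 0.0
    for _ in range(max_tries):
        KKT = np.block([[H + delta * np.eye(n), J.T],
                        [J, np.zeros((m, m))]])
        sol = np.linalg.solve(KKT, -np.concatenate([grad_L, h_res]))
        px = sol[:n]
        if merit_dir_deriv(px) < 0.0:      # descent test, eq 20
            return px, sol[n:], delta
        delta = delta0 if delta == 0.0 else delta * factor
    raise RuntimeError("no descent direction found")
```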

3.3. Barrier Formulation. NLPOPT1 uses three ways to implement logarithmic barriers on upper and lower bounds (including bounds for automatically generated slack variables, used to render potentially double-sided inequalities into equality constraints). The first approach simply adds the primal barrier Hessian terms into the overall primal Lagrangian Hessian,22 which is known to be very inefficient. The second approach uses quadratic extrapolations of the log-barrier function a small distance from the asymptote, which can be adjusted iteratively.22,23 It is noted that this will eventually lead to control of the magnitude of the diagonal terms added. The third approach uses the primal-dual equations as introduced by Wächter and Biegler,20 ensuring that the primal-dual barrier Hessian does not deviate by an arbitrary amount from the true primal barrier Hessian. The latter approach is almost always superior. The first and third methods require safeguarding of the primal search direction, with a usual fraction-to-the-boundary rule. The quadratic extrapolation does not require such safeguarding and is left for future experimentation; it is noted that it is not applicable for cases where nonlinear functions in the model have domain restrictions, as it invariably leads to violation of the bounds during iterations. In the numerical experiments presented in the following sections, we use the third option of the software as a default.

3.4. Termination Criteria, Iteration Control. The barrier parameter µ is reduced by a factor γ_µ > 1, such that µ_new = µ_old/γ_µ. Termination of the inner Lagrange-Newton iterations follows the progress in satisfying feasibility of the equality constraints, with an absolute target tolerance of ε_F for their norm and a target relative tolerance of ε_L for the Lagrangian gradient norm, which is normalized by the objective function value (using a factor max[1, |f(x^(k))|]). Finally, the outer iteration loop (the continuation of the barrier parameter µ) terminates when an absolute complementarity tolerance ε_C is met.

4. Computational Case Studies

In this section, we present a number of selected benchmark case studies to test the capability and reliability of NLPOPT1 using the new line-search algorithm. All the problems were solved using a Pentium IV computer with 224 MB RAM and a 1.19 GHz CPU clock. The main parameter settings are as in Table 1; changes to the values listed will be highlighted where appropriate.

4.1. Case Study 1: Maratos Effect Handling. Consider the following problem, which is known to suffer from the Maratos effect,9

min_{u,v} f(u,v) = 3v² − 2u  (22a)

subject to

h(u,v) = u − v² = 0  (22b)

The optimal solution is at (u*, v*)^T = (0, 0)^T with f(u*,v*) = 0. At (u, v)^T = (ε², ε)^T, the above problem suffers from the Maratos effect.9 Applied to our case, the primal-dual function is

L(u,v,λ) = 3v² − 2u + λ(u − v²)  (23)

The Newton step is

[ 0      0      1  ] ( p_u )        ( −2 + λ    )
[ 0   6 − 2λ  −2v ] ( p_v )  =  −  ( 6v − 2λv  )   (24)
[ 1    −2v     0  ] ( p_λ )        ( u − v²    )

At (u, v, λ)^T = (ε², ε, 2)^T, the search direction is

( p_u )     ( −2ε² )
( p_v )  =  ( −ε   )   (25)
( p_λ )     (  0   )

for any ε > 0. Therefore, the new point is (u_new, v_new)^T = (−ε², 0)^T, with the new objective function and constraint values being

f(u_new, v_new) = 2ε²  (26a)

h(u_new, v_new) = −ε²  (26b)
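The rejection can be reproduced numerically. Below is a minimal sketch (our own check, not part of NLPOPT1) that solves the Newton system of eq 24 at (ε², ε, 2)^T and evaluates the exact l1 merit function of eq 9 with an illustrative penalty parameter r = 10.

```python
import numpy as np

eps, r = 0.01, 10.0                          # perturbation; illustrative penalty
f = lambda u, v: 3 * v**2 - 2 * u
h = lambda u, v: u - v**2
merit = lambda u, v: f(u, v) + r * abs(h(u, v))   # exact l1 merit, eq 9

u, v, lam = eps**2, eps, 2.0
# Lagrange-Newton system of eq 24
KKT = np.array([[0.0, 0.0, 1.0],
                [0.0, 6.0 - 2 * lam, -2 * v],
                [1.0, -2 * v, 0.0]])
rhs = -np.array([-2.0 + lam, 6 * v - 2 * lam * v, h(u, v)])
pu, pv, plam = np.linalg.solve(KKT, rhs)

print(pu, pv, plam)                               # -> -2*eps**2, -eps, 0
print(f(u, v), abs(h(u, v)))                      # eps**2, 0 at the start
print(f(u + pu, v + pv), abs(h(u + pu, v + pv)))  # 2*eps**2, eps**2: both worse
print(merit(u + pu, v + pv) > merit(u, v))        # True: full step rejected
```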

However, the calculation at the starting point gives us

f(u,v) = ε²  (27a)

h(u,v) = 0  (27b)

where, in comparison, both the objective function and the absolute constraint value are worse at the new point. The line search will reject the new step, and this will lead to a backtracking procedure which slows the convergence even when ε is very close to 0; this constitutes the Maratos effect. From the experiment, it is found that NLPOPT1 was able to solve the problem effectively. The starting points used were ε = 0.01 and λ_start = 2.0, with the solver parameter settings reported in Table 1. The convergence plots are shown in Figures 2-4, and the details of the run are given in Tables 6 and 7. In the tables to follow, the feasibility norm used is ‖·‖∞. The line-search iteration numbers in the convergence plots include all line searches, across all NLP and Newton subproblems, along with the NLP initial values. It is also important to note that, for the penalty parameter evolution plots (e.g., Figure 4), a value of 0 for the penalty parameter does not mean that the penalty parameter is actually zero at the specific iteration; it is a way to indicate the initialization of a new NLP cycle.

Figure 2. Objective function against constraint norm plot for case study 1.
Figure 3. Penalty function evolution plot for case study 1.
Figure 4. Penalty parameter evolution plot for case study 1.

4.2. Case Study 2: Alkylation Plant. The next case study is a simplified model of an alkylation plant.22,24,25 The aim here is to find the optimal operating conditions for a medium-size problem with nonlinear constraints. The problem formulation is as shown in Appendix A.1, with lower and upper bounds on each variable as shown in Table 2, along with starting values, optimal values, and reported values found in the literature. The problem was solved using the preset solver settings given in Table 1, initial variables as in Table 2, and initial Lagrange multipliers for each constraint set as λ_i,start = 1.0. Comparatively, the minimum objective value found with our NLPOPT1 solver, F = −1.771, is as good as that obtained by Chen22 and slightly better than that of Kaijaluoto's steady-state process simulator25 with F = −1.765. The convergence plots are displayed in Figures 5-7, while the runs and computational requirements are collectively reported in Tables 6 and 7.

Figure 5. Objective function against constraint norm plot for the alkylation plant problem.
Figure 6. Penalty function evolution plot for the alkylation plant problem.
Figure 7. Penalty parameter evolution plot for the alkylation plant problem.

Table 2. Summary of the Alkylation Plant Test Problem

variable   lower bound   upper bound   starting value   optimum value (this paper)   reported value25   reported value22
F                                      -0.900           -1.771                       -1.765             -1.771
x1         0.0           2.00          1.745            1.698                        1.704              1.698
x2         0.0           16.0          12.0             15.80                        15.85              15.80
x3         0.0           0.12          0.11             0.0539                       0.0543             0.0539
x4         0.0           5.00          3.048            3.031                        3.036              3.031
x5         0.0           2.00          1.974            2.000                        2.000              2.000
x6         85.0          93.0          89.2             90.1                         90.1               90.1
x7         90.0          95.0          92.8             95.0                         95.0               95.0
x8         3.0           12.0          8.0              10.483                       10.476             10.482
x9         1.2           4.0           3.6              1.562                        1.562              1.561
x10        145.0         162.0         145.0            153.5                        153.5              153.5

4.3. Case Study 3: Williams-Otto Process. The Williams-Otto process22,25,26 is an optimization study based on a flowsheet of a reaction-separation process. The problem formulation is as presented in Appendix A.2. The problem was solved using the solver settings given in Table 1. The initial variables are as in Table 3, and the initial Lagrange multipliers for each constraint are λ_i,start = 1.0. The optimal results in this study, along with results from previous studies,22,25 are shown in Table 3. The results produced by the solver are as good as the previously reported results, with optimal objective function value F = 121.54. The problem size is summarized in Table 6, and computational details are in Table 7.

Table 3. Summary of the Williams-Otto Process Test Problem

variable   lower bound   upper bound   starting value   optimum value (this paper)   reported value25   reported value22
F                                      820.73           121.54                       121.54             121.54
T          322.2         377.8         365.0            374.69                       374.67             374.69
ṁA         100.0                       5 000            6 128                        6 128              6 128
ṁB         100.0                       10 000           13 966                       13 966             13 966
ṁG         100.0                       1 000            1 457.8                      1 396.6            1 457.8
ṁRA        100.0                       20 000           21 530                       29 156             21 529
ṁRB        100.0                       50 000           66 748                       53 830             66 747
ṁRC        100.0                       3 000            3 528.6                      3 528.9            3 528.5
ṁRP        100.0                       8 000            8 767.3                      8 768.0            8 767.2
ṁRE        100.0                       50 000           66 062                       66 070             66 062
β          0.01          0.99          0.1              0.100 17                     0.100 16           0.100 17

4.4. Case Study 4: Unconstrained Nonlinear Minimax Approximation. This is another medium-size case study with nonlinear constraints, a nonlinear minimax approximation by Luksan.27 The problem formulation is given in Appendix A.3. This problem was solved successfully by using the preset settings of NLPOPT1 (Table 1), without any further modifications. The optimal solution found for this minimax problem is F(x) = 133.730. This objective function value is close to the one calculated from the optimal values found by Luksan, which is F(x) = 133.728, the one reported in this work being a slightly worse solution. The starting values (as used by Luksan27) and the optimal values for each variable, as well as the convergence plots, size details, and solution statistics involved in this case study, are collectively presented in Table 4, Figures 8-10, and Tables 6 and 7, respectively. The dual variable starting points used were λ_i,start = 1.0. The zeros observed in Figure 10 are initialization markers for a new NLP cycle, not actual penalty values.

Figure 8. Objective function against constraint norm plot for the unconstrained nonlinear minimax problem.
Figure 9. Penalty function evolution plot for the unconstrained nonlinear minimax problem.
Figure 10. Penalty parameter evolution plot for the unconstrained nonlinear minimax problem.

Table 4. Starting and Optimum Values for the Unconstrained Nonlinear Minimax Problem

                  x1     x2     x3     x4     x5     x6     x7     x8     x9     x10
starting value    2      3      5      5      1      2      7      3      6      10
optimum value     2.176  2.351  8.766  5.066  0.988  1.431  1.331  9.837  8.289  8.369

                  x11    x12    x13    x14     x15    x16    x17    x18    x19    x20
starting value    2      2      6      15      1      2      1      2      1      3
optimum value     2.275  1.358  6.079  14.171  0.996  0.654  1.466  2.000  1.072  2.088

4.5. Case Study 5: Constrained Nonlinear Minimax Approximation. Another case study to be discussed is a linearly constrained nonlinear minimax approximation problem by Luksan,27 as given in Appendix A.4. The optimal solution found for this problem by using NLPOPT1 is F(x) = 0.040 138 2, whereas the solution found by Luksan was F(x) = 0.101 830 9, with NLPOPT1 faring significantly better. Again, the preset parameter settings (Table 1) were used. The starting values (as used by Luksan27) and optimum values for each variable are summarized in Table 5. The iterations were initialized with a constant Lagrange multiplier for all constraint equations, at λ_i,start = 1.0. The summary of the sizes and the solution details can be found in Tables 6 and 7, respectively.

Table 5. Starting and Optimum Values for the Constrained Nonlinear Minimax Problem

                  x1      x2      x3      x4      x5      x6      x7
starting value    0.5     1.0     1.5     2.0     2.5     3.0     3.5
optimum value     0.5949  0.9949  1.4671  1.8671  2.3598  2.8671  3.5000

4.6. Case Study 6: Competing Reaction Optimal Control Problem. This is a competing reaction problem in a batch reactor.28,29 The objective is to find the optimal control profile to maximize the yield of the product while satisfying the reaction equations, which include two differential equations, as described in Appendix A.5. The problem is discretized using the orthogonal collocation technique,30 with 100 elements and 3 interior collocation points per element. Continuity is enforced for the differential state variables across element boundaries (4 collocation points used per element).
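As an illustration of the general collocation technique of ref 30 (not the authors' discretization code), the following minimal sketch assembles the collocation residuals on a single element, assuming three shifted Gauss-Legendre interior points; continuity across elements would equate each element's interpolated end-point state with the next element's initial node.

```python
import numpy as np

def lagrange_diff_matrix(tau):
    """D[j, i] = derivative of the i-th Lagrange basis (nodes tau) at tau[j]."""
    n = len(tau)
    D = np.zeros((n, n))
    for i in range(n):
        others = [m for m in range(n) if m != i]
        denom = np.prod([tau[i] - tau[m] for m in others])
        for j in range(n):
            D[j, i] = sum(np.prod([tau[j] - tau[m] for m in others if m != k])
                          for k in others) / denom
    return D

# Element nodes: tau_0 = 0 plus 3 interior shifted Gauss-Legendre points on (0, 1)
tau = np.array([0.0, 0.1127016654, 0.5, 0.8872983346])
D = lagrange_diff_matrix(tau)

def element_residuals(y_nodes, u_nodes, dt, rhs):
    """Collocation residuals dy/dtau - dt * f(y, u) = 0 at the interior points."""
    return np.array([D[j] @ y_nodes - dt * rhs(y_nodes[j], u_nodes[j])
                     for j in range(1, len(tau))])

# Check on dy/dt = -y with the exact trajectory sampled at the element nodes:
dt = 0.01
y = np.exp(-dt * tau)
r = element_residuals(y, np.zeros_like(tau), dt, lambda yj, uj: -yj)
print(r)   # near zero: a cubic interpolant of exp over a short element is accurate
```

In a simultaneous approach, such residuals for all elements, together with the continuity equations, become the equality constraints h(x) = 0 of problem 2.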

The total number of variables and constraints resulting from the transformation are 1201 and 802, respectively. The size of each element is initialized uniformly at 0.01 and allowed to fluctuate by 10% above and below. The control profile and the state variables are all initially set to a uniform value of 0.001; the control variable u is bounded by 0 and 5, with no bounds on the two state variables. The Lagrange multipliers were initialized at λ_i,start = 1.0. The NLPOPT1 settings are as in Table 1, except for the initial barrier parameter and its reducing factor, where µ_start = 0.0001 and γ_µ = 2 were used, as too aggressive a reduction and a larger µ were found not to converge. The optimal objective function value found is 0.5735, which is slightly worse than the 0.5738 reported by Tanartkit and Biegler.29 The optimal control profile is shown in Figure 11, and the convergence plots are presented in Figures 12-14. Tables 6 and 7 feature the details of the run and the computations involved.

Figure 11. Optimal control profile for the competing reaction problem.
Figure 12. Objective function against constraint norm plot for the competing reaction problem.
Figure 13. Penalty function evolution plot for the competing reaction problem.
Figure 14. Penalty parameter evolution plot for the competing reaction problem.

4.7. Case Study 7: Consecutive Reaction Optimal Control Problem. This case study considers a consecutive reaction in a plug-flow reactor.29,31 In the reactor, the rates of reaction are determined by the temperature profile. Therefore, it is desired to find the optimal temperature profile, within the reactor temperature limits, that obtains the maximum concentration of the intermediate product. The problem is presented in Appendix A.6. This is again solved using the orthogonal collocation method, with 100 elements and 3 interior collocation points (as outlined in Section 4.6), yielding 1201 variables and 802 constraints. To initialize the calculations, the elements are equally spaced at 0.25 time units each and bounded above and below by 10%. Furthermore, the control profile, the state variables, and the dual variables are all initially set to 800 K, 0.01, and 1, respectively. The NLPOPT1 settings are as in Table 1, except for the initial barrier parameter and its reducing factor, where µ_start = 0.0001 and γ_µ = 2 were used, as too aggressive a reduction and a larger µ were found not to converge. The optimal objective function value found is 0.542, which is slightly better than the 0.541 reported by Tanartkit and Biegler32 and as good as the 0.542 reported by Vassiliadis.31 The optimal control profile is shown in Figure 15. Further computational details are given in Tables 6 and 7.

Figure 15. Optimal control profile for the consecutive reaction problem.

4.8. Case Study 8: Catalyst Mixing Optimal Control Problem. This problem is to determine the optimal bifunctional catalyst mixing ratio in a plug-flow isothermal reactor.29,33 Different mixing policies along the reactor length affect the rates of activation differently. Therefore, the objective is to find the best catalyst ratio to obtain the maximum yield of the final product C. The reaction in the reactor can be described as

A → B ⇌ C  (28)

The model for the optimization is given in Appendix A.7. This results in a bigger optimization problem: using 100 elements and 3 interior collocation points, the orthogonal collocation method produced 1502 variables and 1103 constraints. To initialize the calculations, the elements are equally spaced at 0.01 time units each, bounded above and below by 10%. Both the control and state variables are initialized to constant profiles of 0.001, while the Lagrange multipliers λ_i are initialized at 1.0. The bounds are 0 and 1 for the control variable u, and there are no bounds on the state variables. All NLPOPT1 settings of Table 1 were applied to this problem, except the initial barrier parameter and its reducing factor, where µ_start = 0.0001 and γ_µ = 2 were used (too aggressive a reduction and a larger µ were found not to converge). The optimal objective function value found is 0.047 98, which is slightly worse than the 0.048 07 reported by Tanartkit and Biegler.29 The optimal control profile is shown in Figure 16. However, oscillations are observed in the region where the optimal solution is known to have a singular-arc control segment. All computational requirements are presented in Tables 6 and 7.

Figure 16. Optimal control profile for the catalyst mixing problem.

4.9. Problem Sizes and Solution Statistics. Tables 6 and 7 summarize the problem sizes and the solution statistics, respectively. As can be seen, NLPOPT1, at this stage of development, is capable of handling a wide range of problems, from small to big and from simple to complicated nonlinear problems.

Table 6. Summary of the Problem Sizes

                                         case study number
number of                          1    2    3    4    5     6      7      8
variables^a                        2    12   10   39   178   1201   1201   1502
constraints                        1    13   6    18   172   802    802    1103
equality constraints               1    5    6    0    2     802    802    1103
inequality constraints             0    8    0    18   170   0      0      0
lower bounds^a                     0    18   10   0    8     401    401    401
upper bounds^a                     0    10   2    18   163   401    401    401
linear constraints                 0    9    0    0    9     202    202    503
nonlinear constraints              1    4    6    18   163   600    600    600
nonzeros in Jacobian^a             2    41   55   396  1490  4996   4996   6200
nonzeros in Hessian^a              2    32   98   41   178   3397   3997   4298
nonzeros in Lagrange-Newton
  overall matrix^a                 6    114  208  833  3158  13389  13989  16698

^a Inclusive of the automatically generated slack variables for inequality constraints.

Table 7. Summary of the Solution Statistics

                                      case study number
                                      1     2       3       4        5         6       7       8
translation time (s)                  0.00  0.01    2.16    0.99     9.91      245.72  409.09  267.30
solution time (s)                     0.12  1.37    0.68    1.34     18.44     372.02  480.34  863.72
optimal objective                     0     -1.771  1.2154  132.614  0.040138  0.5735  0.542   0.04798
reported best objective               0     -1.771  1.2154  133.728  0.101831  0.5738  0.541   0.04807
NLP cycles^a                          2     5       5       5        5         8       8       8
Newton's cycles                       2     38      16      29       34        49      70      44
line-search cycles                    2     42      16      34       34        49      71      44
SOC cycles                            0     18      5       12       9         12      5       26
Hessian corrections                   0     0       0       0        3         0       0       0
function evaluations^b                4     66      26      51       48        69      84      78
Jacobian evaluations^c                2     56      21      41       43        61      75      70
Hessian evaluations^d                 2     38      16      29       34        49      70      44
Hessian factorization evaluations^e   2     56      21      41       46        61      75      70

^a Equality-constrained (inner) problem solutions. ^b Calculated as number of NLP cycles + line searches + SOC cycles. ^c Calculated as number of Newton's cycles + SOC cycles. ^d Calculated as number of Newton's cycles. ^e Calculated as number of Newton's cycles + SOC cycles + Hessian correction cycles.

5. Conclusions and Future Work

In this paper, we have presented a novel line-search idea for constrained NLP. Our "Wildcat" line-search technique introduces a nonmonotone, multilevel-parameter, exact l1-penalty merit function scheme. The implementation ensures that the penalty parameter choice overestimates the local maximal Lagrange multiplier, while utilization of a history of past merit function values yields nonmonotone behavior in line-search step acceptance.

This new technique has been implemented within a primal-dual interior point NLP method in our solver NLPOPT1, programmed in Mathematica 5.0. A Levenberg-Marquardt correction scheme was included to force a descent direction for the merit function in cases where it did not occur. Second-order correction steps have been included as well, to address the Maratos effect.

To verify that the method works satisfactorily, we tested it on a number of case studies. We showed that it is able to overcome the Maratos effect, as in case study 1. To show its versatility, we also tested the algorithm on different types of case studies: the alkylation plant and the Williams-Otto process problems to reflect small-scale applications in chemical engineering, case studies 4 and 5 to handle minimax problems, and the optimal control problems of case studies 6-8. Different problem sizes were experimented with as well: from problems with small numbers of variables and constraints to large nonlinear problems in the optimal control cases. The results obtained confirmed that the scheme is robust, that it works well computationally, and that good quality solutions are found with very acceptable computational effort, especially in case study 5, where a significantly better solution was found.

However, apart from the Maratos effect, there is another well-known problem for primal-dual methods with line search, which is demonstrated, for example, in the paper by Wächter and Biegler34 using a simple model. Because of this effect, these methods can fail to converge to a feasible region even when applied to well-posed problems. In runs on the example given in ref 34, our solver was also unable to find a descent direction for the search. In the paper by Ulbrich et al.35 on their primal-dual interior point filter algorithm, it is mentioned that a type of feasibility restoration is able to overcome such a problem. So far, our NLPOPT1 solver does not include a restoration procedure to counteract this type of failure; however, we plan to include one in the future.

For future work, we will apply an automatic generation of new penalty levels P_i dynamically during runs. Furthermore, since too rapid a reduction in µ causes some difficulties, a scheme similar to the one based on complementarity improvement by Wächter and Biegler20 is to be explored. At the same time, there are theoretical concepts to be resolved: the conditions and the proof of convergence for the newly proposed line-search method. We expect also to investigate other exact penalty merit functions within the same nonmonotone framework.

In conclusion, there are many issues that will require investigation and refinement. Nonetheless, we believe that, with our present work, we are proposing a new idea that has not been presented before, namely, the discretization of the penalty parameter range for line-search merit functions.


Acknowledgment

We would like to thank the Department of Public Services and University of Putra, Malaysia, for the scholarship awarded to the second author (I.S.A.).

Appendix

A.1. Alkylation Plant.

min F = −(0.063 x4 x7 − 5.04 x1 − 0.035 x2 − 10 x3 − 3.36 x5)  (29a)

subject to

1.22 x4 − x1 − x5 = 0  (29b)
98 000 x3 − x6 (x4 x9 + 1 000 x3) = 0  (29c)
x2 + x5 − x1 x8 = 0  (29d)
[x1 (1.12 + 0.131 67 x8 − 0.006 67 x8²)] − 0.99 x4 ≥ 0  (29e)
−[x1 (1.12 + 0.131 67 x8 − 0.006 67 x8²)] + x4/0.99 ≥ 0  (29f)
[86.36 + 1.098 x8 − 0.038 x8² + 0.325 (x6 − 89)] − 0.99 x7 ≥ 0  (29g)
−[86.36 + 1.098 x8 − 0.038 x8² + 0.325 (x6 − 89)] + x7/0.99 ≥ 0  (29h)
[35.82 − 0.222 x10] − 0.9 x9 ≥ 0  (29i)
−[35.82 − 0.222 x10] + x9/0.99 ≥ 0  (29j)
[−133 + 3 x7] − 0.99 x10 ≥ 0  (29k)
−[−133 + 3 x7] + x10/0.99 ≥ 0  (29l)
x_i^lower ≤ x_i ≤ x_i^upper, i = 1, ..., 10  (29m)

A.2. Williams-Otto Process.

min F = −ROI/100  (30a)

subject to

(ṁ_A − r1 − β ṁ_RA)/6 000 = 0  (30b)
(ṁ_B − r1 − r2 − β ṁ_RB)/14 000 = 0  (30c)
(2 r1 − 2 r2 − r3 − β ṁ_RC)/3 000 = 0  (30d)
(2 r2 − β ṁ_RE)/66 000 = 0  (30e)
(1.5 r3 − ṁ_G)/1 500 = 0  (30f)
(r2 − ṁ_P − 0.5 r3 − β(ṁ_RP − ṁ_P))/2 000 = 0  (30g)
322.2 ≤ T ≤ 377.8  (30h)

A.3. Unconstrained Nonlinear Minimax Approximation.

min F(x) = max_{1≤i≤18} f_i(x)  (31a)

subject to

f1(x) = x1² + x2² + x1 x2 − 14 x1 − 16 x2 + (x3 − 10)² + 4 (x4 − 5)² + (x5 − 3)² + 2 (x6 − 1)² + 5 x7² + 7 (x8 − 11)² + 2 (x9 − 10)² + (x10 − 7)² + (x11 − 9)² + 10 (x12 − 1)² + 5 (x13 − 7)² + 4 (x14 − 14)² + 27 (x15 − 1)² + x16⁴ + (x17 − 2)² + 13 (x18 − 2)² + (x19 − 3)² + x20² + 95  (31b)
f2(x) = f1(x) + 10 [3 (x1 − 2)² + 4 (x2 − 3)² + 2 x3² − 7 x4 − 120]  (31c)
f3(x) = f1(x) + 10 [5 x1² + 8 x2 + (x3 − 6)² − 2 x4 − 40]  (31d)
f4(x) = f1(x) + 10 [0.5 (x1 − 8)² + 2 (x2 − 4)² + 3 x5² − x6 − 30]  (31e)
f5(x) = f1(x) + 10 [x1² + 2 (x2 − 2)² − 2 x1 x2 + 14 x5 − 6 x6]  (31f)
f6(x) = f1(x) + 10 [4 x1 + 5 x2 − 3 x7 + 9 x8 − 105]  (31g)
f7(x) = f1(x) + 10 [10 x1 − 8 x2 − 17 x7 + 2 x8]  (31h)
f8(x) = f1(x) + 10 [−3 x1 + 6 x2 + 12 (x9 − 8)² − 7 x10]  (31i)
f9(x) = f1(x) + 10 [−8 x1 + 2 x2 + 5 x9 − 2 x10 − 12]  (31j)
f10(x) = f1(x) + 10 [x1 + x2 + 4 x11 − 21 x12]  (31k)
f11(x) = f1(x) + 10 [x1² + 15 x11 − 8 x12 − 28]  (31l)
f12(x) = f1(x) + 10 [4 x1 + 9 x2 + 5 x13² − 9 x14 − 87]  (31m)
f13(x) = f1(x) + 10 [3 x1 + 4 x2 + 3 (x13 − 6)² − 14 x14 − 10]  (31n)
f14(x) = f1(x) + 10 [14 x1² + 35 x15 − 79 x16 − 92]  (31o)
f15(x) = f1(x) + 10 [15 x2² + 11 x15 − 61 x16 − 54]  (31p)
f16(x) = f1(x) + 10 [5 x1² + 2 x2 + 9 x17⁴ − x18 − 68]  (31q)
f17(x) = f1(x) + 10 [x1² − x9 + 19 x19 − 20 x20 + 19]  (31r)
f18(x) = f1(x) + 10 [7 x1² + 5 x2² + x12² − 30 x20]  (31s)

A.4. Constrained Nonlinear Minimax Approximation.

min F(x) = max_{1≤i≤163} f_i(x)  (32a)

with

f_i(x) = 1/15 + (2/15) Σ_{j=1}^{7} cos(2π x_j sin θ_i), 1 ≤ i ≤ 163  (32b)

and

θ_i = (π/180)(8.5 + 0.5 i), 1 ≤ i ≤ 163  (32c)

subject to

x1 ≥ 0.4  (32d)
−x1 + x2 ≥ 0.4  (32e)
−x2 + x3 ≥ 0.4  (32f)
−x3 + x4 ≥ 0.4  (32g)
−x4 + x5 ≥ 0.4  (32h)
−x5 + x6 ≥ 0.4  (32i)
−x6 + x7 ≥ 0.4  (32j)
−x4 + x6 = 1  (32k)
x7 = 3.5  (32l)

A.5. Competing Reaction Optimal Control Problem.

max y2(1)  (33a)

subject to

ẏ1 = −(u + u²/2) y1  (33b)
ẏ2 = u y1  (33c)
y(0) = [1, 0]^T  (33d)
u(t) ∈ [0, 5]  (33e)

A.6. Consecutive Reaction Optimal Control Problem.

max y2(25)  (34a)

subject to

ẏ1 = −65.5 e^(−5027.7/u) y1  (34b)
ẏ2 = 65.5 e^(−5027.7/u) y1 − 1970 e^(−8044.3/u) y2  (34c)
y(0) = [1, 0]^T  (34d)
u(t) ∈ [600, 1000]  (34e)

A.7. Catalyst Mixing Optimal Control Problem.

max y3(1)  (35a)

subject to

ẏ1 = u (10 y2 − y1)  (35b)
ẏ2 = −u (10 y2 − y1) − (1 − u) y2  (35c)
y3 = 1 − y1 − y2  (35d)
y(0) = [1, 0, 0]^T  (35e)
u(t) ∈ [0, 1]  (35f)

Nomenclature

Latin Symbols

c, c_k = standard counter used in eq 11
F = objective function
H = Hessian matrix; history list at each penalty parameter level, defined by eq 16
H_L = Hessian matrix of the Lagrangian function, defined by eq 6
h = equality constraints as defined in the context
i = standard index
î = operating level of the "Wildcat" line-search penalty parameter
J = Jacobian matrix of the equality constraints, defined by eq 7
j = standard index
k = iteration index
L = Lagrangian function, defined by eq 3
M = maximum number of function values kept in memory, used in eqs 11 and 12a
m = number of equality constraints
m_0 = minimum number of function values kept in memory, used in eq 12a
m_k = number of function values kept in memory, used in eq 11
ṁ_A, ṁ_B, ṁ_G, ṁ_RA, ṁ_RB, ṁ_RC, ṁ_RP, ṁ_RE = primal variables, used in Appendix A.2
N^LS_max = maximum number of line-search steps
N^SOC_max = maximum number of second-order corrections
n = number of variables
n_H = number of past values stored
n_P = number of penalty parameter levels
P = "Wildcat" line-search penalty parameter; penalty function
p_λ = search direction for the Lagrange multipliers λ
p_u, p_v = search directions for the primal variables u and v, used in Section 4.1
p_x = search direction for the primal variables x
r = penalty function parameter
T = primal variable in Appendix A.2
t = time; constant as used in eq 12b
u = primal variable in Section 4.1 and vector of control variables from Section 4.6 onward
v = primal variable, used in Section 4.1
x = primal variables
x^L = lower bounds on the variables x
x^U = upper bounds on the variables x
y = state variables

Abbreviations

CPU = CPU time
KKT = Karush-Kuhn-Tucker
LS = line search
NLP = nonlinear programming problem
sgn = sign
SOC = second-order correction

Greek Symbols

‖·‖∞ = infinity norm
α = line-search step size
α_min = minimum step size
β = primal variable, used in Appendix A.2
ε = tolerance, used in Section 4.1
ε_C = complementarity tolerance
ε_F = feasibility tolerance
ε_L = Lagrangian gradient norm tolerance
ν = violation tolerance, used in eq 20
γ = line-search relaxation parameter used in eqs 11 and 19
γ_µ = barrier parameter reduction factor
λ = Lagrange multipliers
‖λ‖∞ = maximal Lagrange multiplier magnitude
λ_start = initial Lagrange multipliers
µ = barrier parameter
µ_new = new barrier parameter
µ_old = previous barrier parameter
ω = line-search history relaxation factor, used in eq 16
Φ_B = barrier function, defined by eq 2a
φ(·) = merit function
σ(t^(k)) = forcing function used in eq 12b
θ = step-size reduction parameter used in the backtracking scheme

Literature Cited

(1) Han, S.; Mangasarian, O. Math. Program. 1979, 17, 251-269.
(2) Gill, P. E.; Murray, W.; Wright, M. H. Practical Optimization; Academic Press: San Diego, CA, 1988.
(3) Herskovits, J. A View on Nonlinear Optimization. In Advances in Structural Optimization; Herskovits, J., Ed.; Kluwer: Norwell, MA, 1996.
(4) Byrd, R. H.; Nocedal, J.; Waltz, R. A. Steering Exact Penalty Methods; Technical Report; Northwestern University: Evanston, IL, 2004.
(5) Gomes, F. Optim. Online Dig. 2004, 44-59.
(6) Fletcher, R.; Leyffer, S. Math. Program. 2002, 91, 239-269.
(7) Maratos, N. Exact Penalty Function Algorithms for Finite Dimensional and Control Optimization Problems. Ph.D. Thesis, University of London, Imperial College, London, U.K., 1978.
(8) Fletcher, R. Practical Methods of Optimization, 2nd ed.; John Wiley & Sons: New York, 1987.
(9) Sun, W.; Yuan, Y. Optimization Theory and Methods; Science Press: Beijing, 1997.
(10) Biegler, L.; Cuthrell, J. Comput. Chem. Eng. 1985, 9, 257-267.
(11) Nocedal, J.; Wright, S. Numerical Optimization; Springer Series in Operations Research; Springer: New York, 1999.
(12) Chamberlain, R.; Lemarechal, C.; Pedersen, H.; Powell, M. Math. Program. Stud. 1982, 16, 1-17.
(13) Panier, E.; Tits, A. SIAM J. Numer. Anal. 1991, 28, 1183-1195.
(14) Grippo, L.; Lampariello, F.; Lucidi, S. SIAM J. Numer. Anal. 1986, 23, 707-716.
(15) Armijo, L. Pac. J. Math. 1966, 16, 1-3.
(16) Han, J.; Sun, J.; Sun, W. J. Comput. Appl. Math. 2002, 146, 89-98.
(17) Dai, Y. J. Optim. Theory Appl. 2002, 112, 315-330.
(18) Zhang, H.; Hager, W. SIAM J. Optim. 2004, 14, 1043-1056.
(19) Mahdavi-Amiri, N.; Bartels, R. ACM Trans. Math. Software 1989, 15, 220-242.
(20) Wächter, A.; Biegler, L. On the Implementation of an Interior-Point Filter Line-Search Algorithm for Large-Scale Nonlinear Programming; Technical Report RC 23149; IBM T. J. Watson Research Center: Yorktown Heights, NY, 2004.
(21) Gajulapalli, R.; Lasdon, L. Comput. Optim. Appl. 2001, 19, 107-120.
(22) Chen, T. W. C. The Modified Barrier Method for Large Scale Nonlinear Steady State and Dynamic Optimization. Ph.D. Thesis, Chemical Engineering Department, University of Cambridge, Cambridge, U.K., 2002.
(23) Breitfeld, M.; Shanno, D. Computational Experience with Penalty-Barrier Methods for Nonlinear Programming; Technical Report RRR 17-93 Revised; Rutgers Center for Operations Research and Graduate School of Management, Rutgers University: New Brunswick, NJ, 1994.
(24) Bracken, J.; McCormick, G. P. Selected Applications of Nonlinear Programming; Wiley: New York, 1968.
(25) Kaijaluoto, P. I. S. Process Optimisation by Flowsheet Simulation. Ph.D. Thesis, Chemical Engineering Department, University of Cambridge, Cambridge, U.K., 1984.
(26) Williams, T.; Otto, R. AIEE Trans., Pt. I, Commun. Elect. 1960, 79.
(27) Luksan, L. Kybernetika 1985, 21, 22-40.
(28) Ray, W. Advanced Process Control; McGraw-Hill: New York, 1981.
(29) Tanartkit, P.; Biegler, L. Comput. Chem. Eng. 1997, 21, 1365-1388.
(30) Cuthrell, J.; Biegler, L. AIChE J. 1987, 33, 1257-1270.
(31) Vassiliadis, V. S. Computational Solution of Dynamic Optimization Problems with General Differential-Algebraic Constraints. Ph.D. Thesis, Imperial College, University of London, London, U.K., 1993.
(32) Tanartkit, P.; Biegler, L. Ind. Eng. Chem. Res. 1995, 34, 1253-1266.
(33) Gunn, D. J.; Thomas, W. J. Chem. Eng. Sci. 1965, 20, 89-100.
(34) Wächter, A.; Biegler, L. Math. Program., Ser. A 2000, 88, 565-574.
(35) Ulbrich, M.; Ulbrich, S.; Vicente, L. N. Math. Program. 2004, 100, 379-410.

Received for review July 8, 2005. Revised manuscript received May 4, 2006. Accepted May 10, 2006.

IE050804T