Ind. Eng. Chem. Res. 1994, 33, 2111-2122


PROCESS DESIGN AND CONTROL

Solution of a Class of Multistage Dynamic Optimization Problems. 1. Problems without Path Constraints

V. S. Vassiliadis, R. W. H. Sargent, and C. C. Pantelides*

Centre for Process Systems Engineering, Imperial College of Science, Technology and Medicine, London SW7 2BY, United Kingdom

This paper considers the optimization of transient systems consisting of a fixed number of stages, each of which is described by an index-1 system of differential-algebraic equations (DAEs). General initial conditions at the start of the first stage and junction conditions between stages are allowed, as well as point equality and inequality constraints at the end of each stage. A control vector parametrization approach is used to convert the above problem to a finite dimensional nonlinear programming (NLP) problem. The function gradients required for the solution of the NLP are calculated through the solution of a multistage DAE system in the variable sensitivities.

1. Introduction

The transient behavior of many chemical engineering systems is described by mixed sets of differential and algebraic equations (DAEs). The optimization of such systems has received significant attention over the past decade (Morison, 1984; Cuthrell and Biegler, 1987; Morison and Sargent, 1986; Logsdon and Biegler, 1989; Gritsis, 1990; Vasantharajan and Biegler, 1990).

One assumption underlying much of the earlier work is that the physical system under consideration is described by a single continuous set of DAEs over the entire time domain of interest. However, most real systems have both discrete and continuous characteristics (see, e.g., Pantelides and Barton, 1993). The discrete component arises from discontinuities caused by either intrinsic physical changes (e.g., phase or flow-regime transitions) or external actions and disturbances affecting the system (e.g., the opening and closing of valves). Thus many practical process problems involve a sequence of distinct operations, forming a multistage system in which the duration and the operating conditions of each stage must be chosen to achieve some overall optimal result for the process as a whole.
To the authors' knowledge, there has been only one previous publication on this type of problem, by Morison and Sargent (1986), who considered some of the theoretical issues for a class of multistage optimal control problems but did not describe an implementable algorithm. This paper generalizes the work of Morison and Sargent to deal with a broader class of initial conditions and interstage transitions, and describes in detail an algorithm implementing the techniques presented.

The formulation of the problem is set out in section 2. Some issues relating to the degrees of freedom available to the optimization are also discussed there. Section 3 addresses the secondary issue of the choice of algorithm for solving the resulting dynamic optimization problem and gives our reasons for choosing the control vector parametrization approach, followed by details of the implementation. This approach converts the optimal

* Author to whom correspondence should be addressed. Electronic mail address: [email protected].

0888-5885/94/2633-2111$04.50/0

control problem into a standard nonlinear programming problem, and section 4 gives the derivation of expressions for the gradients of the objective and constraint functions. Section 5 presents two examples of simple multistage systems, both to illustrate the type of problem which can be solved and to indicate the numerical performance of the proposed algorithm. Finally, section 6 gives some further discussion of the choices made in the light of these results, and makes some concluding remarks.

Another long-standing problem has been a satisfactory technique for dealing with inequality path constraints in optimal control algorithms. These are constraints on state variables which must be satisfied at every instant of time throughout the period of operation. Vassiliadis et al. (1994) discuss earlier approaches to this problem and present a new efficient technique for dealing with it. Again this technique has been implemented in our algorithm, and numerical examples are presented to indicate its performance.

2. Mathematical Problem Statement

We consider a multistage system involving a sequence of stages k = 1, 2, ..., each of which is described by a DAE system of the form

$$f_k(x(t), \dot{x}(t), y(t), u(t), v, t) = 0, \quad t \in [t_{k-1}, t_k) \qquad (1)$$

where t is time, x(t) ∈ X ⊆ R^n and y(t) ∈ Y ⊆ R^m are the sets of differential and algebraic variables, respectively, and ẋ(t) denotes the time derivatives of x(t). Also u(t) ∈ U ⊆ R^r and v ∈ V ⊆ R^q are sets of time-varying control variables and time-invariant parameters (e.g., relating to equipment characteristics), respectively. The time horizon of interest [0, t_f] is partitioned into NS stages, with t_0 = 0 and t_NS = t_f. The equations describing the system behavior over the kth stage are denoted by f_k : X × R^n × Y × U × V × [t_{k-1}, t_k) → R^{n+m}.

For the purposes of this paper, the following are assumed:
(a) The number of stages NS is known a priori.
(b) The nature of each stage (i.e., the describing equations) is known a priori.


© 1994 American Chemical Society


(c) The index (Brenan et al., 1989) of the DAE systems (1) with respect to the variables ẋ, y is at most 1, with the matrix

$$\left[ \frac{\partial f_k}{\partial \dot{x}} \;\;\; \frac{\partial f_k}{\partial y} \right]$$

being nonsingular (here the notation ∂f/∂x is used to denote the Jacobian matrix of a set of functions f with respect to a set of variables x).
(d) The DAE system (1) for stage k is solvable for x(t), y(t), t ∈ [t_{k-1}, t_k] for any u(t) ∈ U and any v ∈ V, given consistent values x(t_{k-1}), y(t_{k-1}) at the start of the stage.

The values of the variables x, ẋ, and y at the start of the first stage may be related through a number of general nonlinear equalities of the form

$$J_0(x(t_0), \dot{x}(t_0), y(t_0), u(t_0), v) = 0 \qquad (2)$$

This general form of initial conditions includes as a special case the specification of fixed initial values for a subset of {x(t_0), ẋ(t_0), y(t_0)}. The values of the variables at the beginning of each subsequent stage may be related to each other and to the values of these variables at the end of the previous stage through equalities of the form

$$J_k(x(t_k), \dot{x}(t_k), y(t_k), u(t_k), x(t_k^-), \dot{x}(t_k^-), y(t_k^-), u(t_k^-), v, t_k) = 0, \quad k = 1, \ldots, NS-1 \qquad (3)$$

where t_k⁻ denotes the end time of stage k. The precise form of these junction conditions depends on the physical situation. In many cases, physical reasoning dictates that the differential variables x(t) (often representing conserved quantities such as material and energy holdups) are continuous across the stage boundaries, i.e., that

$$x(t_k) = x(t_k^-) \qquad (4)$$

On the other hand, this may not always be true if the system is subject to impulsive inputs, such as instantaneous additions of mass and/or energy.

Equality and inequality constraints may also be imposed at various points during the horizon. Without loss of generality, we assume that these points coincide with the ends of individual stages. The constraints can then be written in the form

$$r_k^E(x(t_k), \dot{x}(t_k), y(t_k), u(t_k), v, t_k) = 0, \quad k = 1, \ldots, NS \qquad (5a)$$

and

$$r_k^I(x(t_k), \dot{x}(t_k), y(t_k), u(t_k), v, t_k) \le 0, \quad k = 1, \ldots, NS \qquad (5b)$$

Physically, such point constraints may be used to define conditions that must be satisfied for a stage of the system operation to be considered complete, thus implicitly defining the stage duration. Of course, they may also take the explicit form of fixing the end time of one or more stages, or the length of the entire time horizon. We can also have inequality constraints at the initial time, t_0:

$$r_0^I(x(t_0), \dot{x}(t_0), y(t_0), u(t_0), v) \le 0 \qquad (6)$$

It should be noted that equality constraints at the initial time can be considered as initial conditions (2).

In addition to the point constraints (5) and (6), in some situations it may be desirable to impose constraints that are valid over the entire duration of a stage. These path constraints may express conditions relating to safe or proper (e.g., with respect to product quality) system operation, and are of the form

$$c_k^E(x(t), \dot{x}(t), y(t), u(t), v, t) = 0, \quad t \in [t_{k-1}, t_k), \; k = 1, \ldots, NS-1; \;\; t \in [t_{NS-1}, t_f], \; k = NS \qquad (7)$$

and

$$c_k^I(x(t), \dot{x}(t), y(t), u(t), v, t) \le 0, \quad t \in [t_{k-1}, t_k), \; k = 1, \ldots, NS-1; \;\; t \in [t_{NS-1}, t_f], \; k = NS \qquad (8)$$

As already stated in the Introduction, it is assumed in this paper that the actual number of stages and the nature of each stage are known a priori. Therefore, a minimum, nonzero duration is imposed on each stage; an upper bound on the stage durations may also be imposed if necessary:

$$\Delta_k^{\min} \le t_k - t_{k-1} \le \Delta_k^{\max}, \quad k = 1, \ldots, NS \qquad (9)$$

If the duration of a certain stage k is known a priori, then the above constraint may be converted to an equality by setting Δ_k^min = Δ_k^max. The control variables u(t) may be subject to lower and upper bounds, both of which may depend on the individual stage and may vary over its duration:

$$u_k^L(t) \le u(t) \le u_k^U(t), \quad t \in [t_{k-1}, t_k), \; k = 1, \ldots, NS-1; \;\; t \in [t_{NS-1}, t_f], \; k = NS \qquad (10)$$

General inequality constraints of the form

$$v^L \le g(v) \le v^U \qquad (11)$$

may also be imposed on the time-invariant parameters v.

The objective function for such dynamic optimization problems is normally a performance measure that relates to the system operation, the imposed control actions, and the selected time-invariant parameters. Here we consider objective functions expressed entirely in terms of the final values of the variables, i.e.,

$$\min_{x(t), \dot{x}(t), y(t), u(t), v, t_k, \; k = 1, \ldots, NS} \; \phi(x(t_f), \dot{x}(t_f), y(t_f), u(t_f), v, t_f) \qquad (12)$$

It can readily be shown that this form of the objective function also covers the common special case of objective functions involving instantaneous performance measures integrated over the duration of one or more stages.

Problems involving path constraints (7) and (8) are dealt with in the second part of this paper (Vassiliadis et al., 1994). The present paper is therefore concerned with the minimization of the objective function (12) subject to constraints (1)-(3), (5), (6), and (9)-(11). We assume that these constraints define a nonempty feasible region.

It is instructive at this point to establish the degrees of freedom for the problem described by the DAE systems (1) and the initial and junction conditions (2) and (3). We have already assumed system solvability from consistent initial variable values in each stage. Furthermore, for many optimal control problems, the set of relations (2) is a complete and nonredundant set of initial conditions that can be solved together with f_1(·) = 0 at t = t_0 to compute

x(t_0), ẋ(t_0), y(t_0) for given u(t_0) and v. Similarly, the junction conditions (3) may be complete and nonredundant in the sense that they can be solved together with f_{k+1}(·) = 0 at time t_k to compute x(t_k), ẋ(t_k), y(t_k) for given values of the other variables occurring in (3). In such cases, the only degrees of freedom are the control variables u(t), the time-invariant parameters v, and the stage times t_k: once these are specified, a unique solution of the problem (1)-(3) can be determined.

However, there is, in general, no need for either (2) or (3) to be complete and nonredundant. If they are underdetermined, then there is scope for optimizing the initial conditions at the first or later stages. On the other hand, if they are overdetermined, then they imply some restriction on u(t), v, t_k.

We consider first the set of initial conditions (2), where J_0 : X × R^n × Y × U × V → R^{p_0}. Normally, a set of consistent initial values x(t_0), ẋ(t_0), y(t_0) would satisfy both the first-stage equations at t = t_0 and the conditions (2). We therefore consider the Jacobian matrix for this system

$$A_0 = \begin{bmatrix} \partial f_1/\partial x & \partial f_1/\partial \dot{x} & \partial f_1/\partial y \\ \partial J_0/\partial x & \partial J_0/\partial \dot{x} & \partial J_0/\partial y \end{bmatrix}_{t_0} \qquad (13)$$

which is of size (n + m + p_0) × (2n + m). If the rank of A_0 is ρ_0, then

$$\rho_0 \le \min(n + m + p_0, \; 2n + m) \qquad (14)$$

Then there exists a lower triangular matrix L_0 with nonzero diagonal, and row and column permutation matrices P_0 and Q_0, respectively, such that

$$L_0 P_0 A_0 Q_0 = \begin{bmatrix} U_0 & B_0 \\ 0 & 0 \end{bmatrix} \qquad (15)$$

where U_0 is an upper triangular ρ_0 × ρ_0 matrix with unit diagonal, B_0 is a general ρ_0 × (2n + m − ρ_0) matrix, and 0 is an (n + m + p_0 − ρ_0) × (2n + m − ρ_0) null matrix. Clearly, the equations corresponding to the null rows of (15), i.e., the last (n + m + p_0 − ρ_0) elements of the permuted residual vector

$$L_0 P_0 \begin{bmatrix} f_1 \\ J_0 \end{bmatrix} \qquad (16)$$

cannot be used for determining the values of x(t_0), ẋ(t_0), y(t_0). Instead, they can be viewed as imposing some implicit restrictions on the allowable u(t_0) and v, and must be treated as point constraints at time t_0 in the context of the dynamic optimization problem. Of course, these constraints could also be inconsistent with the rest of the initial conditions, but this possibility is excluded by the assumption of a nonempty feasible region for the overall optimization problem.

On the other hand, the equations corresponding to the first ρ_0 rows of the permuted vector (16) are clearly of full rank, and they can be taken into account in determining a consistent set of initial values for x(t_0), ẋ(t_0), y(t_0). However, these values will not be unique unless

$$\rho_0 = 2n + m \qquad (17)$$

If this is not the case, then the initial values of the variables corresponding to the last (2n + m − ρ_0) columns of P_0 A_0 Q_0, i.e., the corresponding elements of the permuted variable vector

$$Q_0^T \begin{bmatrix} x(t_0) \\ \dot{x}(t_0) \\ y(t_0) \end{bmatrix} \qquad (18)$$

can be given arbitrary values. In the context of the dynamic optimization problem, these must be treated as optimization decision variables. This is equivalent to introducing an additional vector, ṽ, of time-invariant parameters defined by the additional initial conditions

$$[\,0 \;\; I\,]\, Q_0^T \begin{bmatrix} x(t_0) \\ \dot{x}(t_0) \\ y(t_0) \end{bmatrix} = \tilde{v} \qquad (19)$$

where 0 is a (2n + m − ρ_0) × ρ_0 null matrix and I a (2n + m − ρ_0) × (2n + m − ρ_0) unit matrix. In this case, the Jacobian involved in the DAE initialization is

$$\begin{bmatrix} U_0 & B_0 \\ 0 & I \end{bmatrix} \qquad (20)$$

which is nonsingular. This permits the use of a Newton-type iteration to establish a consistent set of initial variable values.

In summary, the procedure described above partitions the constraint set (2) into two parts. The first, of cardinality ρ_0, is the maximal subset of the initial conditions (2) that can be accommodated in the context of the DAE system (1). The second part comprises (p_0 − ρ_0) point constraints that must be satisfied by the solution of the optimization problem at t_0. Furthermore, the procedure determines a subset of (2n + m − ρ_0) of the variables {x, ẋ, y}, the initial values of which must also be treated as optimization decision variables.

A similar procedure may be applied to the junction conditions (3), again potentially converting some of them to point constraints at time t_k to be satisfied by the optimization and/or indicating that a subset of the variable values {x(t_k), ẋ(t_k), y(t_k)} be treated as optimization decision variables.

3. Solution Algorithm

A number of different solution approaches to dynamic optimization problems for systems described by ordinary differential equations (ODEs) or DAEs have been proposed in the literature. One such approach involves the use of dynamic programming techniques (Bellman, 1957; Jacobson and Mayne, 1970; Luus, 1990a,b, 1993; Luus and Rosen, 1991). A second class is based on the solution of the necessary optimality conditions expressed as a two-point boundary value problem; these methods include the quasilinearization approach proposed by Miele and his co-workers (Miele et al., 1970, 1974; Miele, 1975) and the use of multiple shooting algorithms (Bulirsch, 1971), as well as other shooting algorithms such as that proposed by Dixon and Bartholomew-Biggs (1981).

A different approach is based on converting the dynamic optimization problem to a finite dimensional nonlinear program (NLP) through the discretization of all variables. Initial work was based on finite difference approximations to the system constraints (e.g., Canon et al., 1970), but later global orthogonal collocation (Tsang et al., 1975; Biegler, 1984), global approximation using Chebyshev series expansions (Vlassenbroeck and van Dooren, 1988), and orthogonal collocation on finite elements (Renfro, 1986; Renfro et al., 1987; Cuthrell and Biegler, 1987) were also used. In particular, the use of approximations over finite elements, the size and number of which can be determined automatically, allows some control over the error of discretization (Logsdon and Biegler, 1989; Vasantharajan and Biegler, 1990).

The key characteristic of the complete discretization approach outlined above is the fact that the optimization is carried out in the full space of the discretized variables, and the discretized constraints are, in general, satisfied at

[Figure 1. Subdivision of stages into control intervals.]

the solution of the optimization problem only. This is therefore often called an "infeasible path" approach.

An alternative approach is to carry out the optimization in the space of the decision variables only (see section 2). In this case, it is necessary to discretize only the control variables u(t). For given u(t) and values of the other decision variables, it is then possible to integrate the underlying DAE system using standard integration algorithms so as to evaluate the objective function (12) and the other constraints that have to be satisfied by the solution. This control vector parametrization approach (Pollard and Sargent, 1970; Sullivan, 1977; Sargent and Sullivan, 1977, 1979; Morison, 1984; Morison and Sargent, 1979; Gritsis, 1990) corresponds to a "feasible path" approach, since the DAEs are satisfied at each step of the optimization algorithm. In addition to the smaller size of the optimization problem, this approach has the advantage of efficiently controlling the discretization error by adjusting the size and order of the integration steps using well-established ODE/DAE integration techniques. Also, the number of such steps need not be known in advance, nor does it have to remain constant during the course of the optimization.

The complete discretization approach has the advantage of not wasting computational effort in obtaining feasible DAE solutions away from the solution of the optimization problem. This is counterbalanced by the size of the NLP problem that has to be addressed, which may exceed the capabilities of currently available NLP algorithms and codes. This difficulty has led to the consideration of decomposition schemes operating in the reduced space resulting from the elimination of the discretized equality constraints. This approach was advocated by Polak (1971) and more recently by Logsdon (1990) and Logsdon and Biegler (1992).
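The feasible-path idea can be sketched in a few lines. The following toy is not the authors' DAEOPT implementation: the scalar system ẋ = −x + u, the target value, the bounds, and the use of a generic SciPy NLP solver (in place of SRQPD) are all illustrative assumptions.

```python
# Minimal "feasible path" control vector parametrization sketch: a
# piecewise-constant control is optimized while the model equations are
# satisfied (by integration) at every optimizer iteration.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

T_F = 1.0
N_INT = 4  # number of control intervals (piecewise-constant control)

def simulate(u_params):
    """Integrate xdot = -x + u(t) over [0, T_F] stage by stage."""
    x = 1.0  # fixed initial condition (an assumption of this toy)
    edges = np.linspace(0.0, T_F, N_INT + 1)
    for k in range(N_INT):
        sol = solve_ivp(lambda t, x, uk=u_params[k]: -x + uk,
                        (edges[k], edges[k + 1]), [x],
                        rtol=1e-8, atol=1e-10)
        x = sol.y[0, -1]
    return x

def objective(u_params):
    # Point objective in terms of final values only, as in eq (12).
    return (simulate(u_params) - 0.8) ** 2

res = minimize(objective, x0=np.zeros(N_INT),
               bounds=[(0.0, 2.0)] * N_INT)  # control bounds, cf. eq (10)
print(round(simulate(res.x), 3))
```

Note that the integrator's step selection, not the optimizer, controls the discretization error of the dynamics, which is the principal advantage claimed above for this class of methods.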
In fact, the use of decomposition techniques for the solution of the large NLPs arising from the complete discretization approach leads to methods that are, in general, very similar to the control parametrization approach. The main remaining difference between the two approaches is the precise method used for approximating the infinite dimensional DAEs by a finite number of constraints. It can readily be shown that the collocation methods on finite elements (that are primarily used in the context of the complete discretization approach) are equivalent to fully implicit Runge-Kutta single-step integration methods. On the other hand, most implementations of control vector parametrization rely on multistep backward-difference formula (BDF) integration methods (Gear, 1971). However, in principle, there is no reason why either integration method could not be used in either approach, although one would expect multistep methods to lead to more efficient (i.e., smaller) approximations, especially at high accuracies.

The algorithm presented in this paper employs a control parametrization approach coupled with a BDF method for the integration of the DAEs. The duration of each stage is divided into a number of control intervals, as shown in Figure 1. In the interests of notational simplicity, we treat each element as a separate stage, and enforce continuity of the differential variables x(t) at the boundaries between the elements through simple junction

conditions of the form (4). Therefore we shall henceforth use NS to denote the total number of stages (real and artificial), and the range of the stage subscript k will be expanded accordingly.

A low-order polynomial form is assumed for the control variables u(t) over each stage k. We assume that the same polynomial order M_j is employed for a given control variable u_j(t) over all stages. However, different orders may be used for different control variables. For convenience, the control variables are expressed in terms of a set of Lagrange polynomials. Thus, over stage k, the jth control variable can be written as

$$u_j(t) = \sum_{i=1}^{M_j} u_{ijk} \, \phi_i^{(M_j)}(\tau^{(k)}), \quad t \in [t_{k-1}, t_k) \qquad (21)$$

where τ^(k) is the normalized time over stage k given by

$$\tau^{(k)} = \frac{t - t_{k-1}}{t_k - t_{k-1}} \qquad (22)$$

and the Lagrange polynomials φ_i^(M) of order M, defined in the standard manner, are

$$\phi_i^{(M)}(\tau) = 1 \quad \text{if } M = 1 \qquad (23a)$$

$$\phi_i^{(M)}(\tau) = \prod_{\substack{i'=1 \\ i' \ne i}}^{M} \frac{\tau - \tau_{i'}}{\tau_i - \tau_{i'}} \quad \text{if } M \ge 2 \qquad (23b)$$

Overall, (21) expresses the control variables in terms of a finite vector of parameters u_{ijk} to be determined by the optimization. It should be noted that the choice of the set of normalized time points τ_i used for the construction of the polynomials does not affect the solution obtained. However, to a certain extent the choice of some points may be dictated by the need to enforce the control variable bounds (10). For instance, for piecewise constant (M_j = 1) and piecewise linear (M_j = 2) controls, it is useful to have τ_1 = 0 and τ_{M_j} = 1, in which case the bounds on variable u_j(t) over a stage k can be enforced simply through

$$u_{jk}^L \le u_{ijk} \le u_{jk}^U, \quad i = 1, \ldots, M_j \qquad (24)$$

In fact, bounds can also be enforced for piecewise quadratic or cubic controls through inequality constraints expressed in terms of the parameters u_{ijk}. However, such bounding is problematic for polynomials of higher order.

For some applications, it may be desirable to enforce some degree of continuity in the control variable profiles across stage boundaries. C⁰ continuity can be achieved simply by constraints of the form (taking τ_1 = 0, τ_{M_j} = 1)

$$u_{M_j jk} = u_{1j,k+1}, \quad k = 1, \ldots, NS-1 \qquad (25)$$

Higher-order continuity can be obtained by linear constraints derived by differentiating the polynomials defined by (21)-(23) with respect to time the requisite number of times and enforcing equality of the first and higher-order time derivatives of u(t) at the stage boundaries. Examples of control variable profiles of various degrees and continuity orders are shown in Figure 2.
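The representation of eqs (21)-(23) can be sketched as follows. The equally spaced normalized points with τ_1 = 0 and τ_{M_j} = 1 are an assumed choice (the one suggested above for enforcing bounds), and the function names are ours, not DAEOPT's.

```python
# Evaluation of a control variable over one stage via the Lagrange
# polynomial parametrization of eqs (21)-(23).
import numpy as np

def lagrange_basis(i, tau, tau_pts):
    """phi_i^{(M)}(tau) from eq (23): 1 if M == 1, else the product form."""
    M = len(tau_pts)
    if M == 1:
        return 1.0
    num = [tau - tau_pts[ip] for ip in range(M) if ip != i]
    den = [tau_pts[i] - tau_pts[ip] for ip in range(M) if ip != i]
    return np.prod(num) / np.prod(den)

def control_value(t, t_km1, t_k, u_coeffs):
    """u_j(t) over stage k, eqs (21)-(22); u_coeffs holds the u_{ijk}."""
    tau = (t - t_km1) / (t_k - t_km1)  # eq (22), normalized time
    tau_pts = np.linspace(0.0, 1.0, len(u_coeffs))  # assumed point set
    return sum(u_coeffs[i] * lagrange_basis(i, tau, tau_pts)
               for i in range(len(u_coeffs)))

# Piecewise-linear stage (M_j = 2) interpolates its endpoint coefficients:
print(control_value(0.5, 0.0, 1.0, [1.0, 3.0]))
```

Because the basis interpolates the coefficients at the chosen points, bounding the u_{ijk} as in (24) bounds the profile itself for the constant and linear cases.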

[Figure 2. Control variable profile examples.]

By applying the degree of freedom analysis presented in the previous section, any variable values at the start of the first or subsequent stages that must be treated as optimization decision variables can be identified, and new time-invariant parameters and initial or junction conditions can be introduced (cf. eq 19). Similarly, any of the original initial and/or junction conditions that are redundant in the context of the DAE integration are identified and treated as point equality constraints to be enforced by the optimization at the start of individual stages.

We can therefore assume that the optimization problem now comprises (a) a multistage DAE system (1) with a complete and nonredundant set of initial and junction conditions; (b) a set of point equality and inequality constraints (5) and (6) at the end of individual stages; (c) a set of point equality constraints at the start of individual stages, arising in the manner detailed above; (d) bound constraints (9) on the stage durations, constraints (11) on the time-invariant parameters, and bounds on the control variable discretization parameters (e.g., (24)); (e) continuity relations expressed in terms of the control variable discretization parameters (e.g., (25)); and (f) an objective function (12).

Given values of the optimization parameters

$$\{u_{ijk}, \; v, \; t_k\} \qquad (26)$$

the initialization problem at time t_0 involves solving simultaneously the DAE equations for k = 1 and the initial conditions (cf. eq 20). We can then integrate the first-stage DAEs from t_0 to t_1. The values of the variables at the start of the second stage can be determined by simultaneous solution of the DAE equations for k = 2 and the first set of junction conditions. By proceeding in the same manner, we can reach the final time t_f, thus obtaining the solution x(t), ẋ(t), y(t) over the entire time domain [t_0, t_f]. From this solution and the values of the optimization parameters (26), we can readily evaluate the constraints and the objective function (b)-(f) above. Overall, then, the original infinite dimensional optimization problem has been converted to a finite dimensional NLP.

4. Function Gradient Calculation

In addition to values of the objective function and the constraints, most efficient NLP algorithms require their gradients with respect to the optimization decision variables. There are several possible approaches for computing gradients for systems described by ODEs or DAEs, including finite difference perturbations, adjoints, and sensitivity equations. Recently, Rosen and Luus (1991) presented a thorough comparative study of these, supported by both theoretical considerations and computational experiments. They concluded that the trajectory sensitivity and finite difference approaches can be computationally more efficient than the adjoint system approach, especially on computers with nonconventional architectures. The use of finite differences suffers from the need for careful selection of the perturbation size, taking account of the relative magnitudes of the truncation and integration errors. For these reasons, and also because of its conceptual simplicity, our algorithm employs the sensitivity equation approach.

4.1. Sensitivity Equations for Multistage Systems. We start by deriving the sensitivity equations for the multistage DAE system (1). By differentiating (1) with respect to p, one of the optimization parameters (26), we obtain

$$\frac{\partial f_k}{\partial x} x_p + \frac{\partial f_k}{\partial \dot{x}} \dot{x}_p + \frac{\partial f_k}{\partial y} y_p + \frac{\partial f_k}{\partial u} \frac{\partial u}{\partial p} + \frac{\partial f_k}{\partial v} \frac{\partial v}{\partial p} = 0, \quad k = 1, \ldots, NS \qquad (27)$$

where the variable sensitivities are defined by

$$x_p \equiv \frac{\partial x}{\partial p}, \qquad y_p \equiv \frac{\partial y}{\partial p} \qquad (28)$$

$$\dot{x}_p \equiv \frac{\partial}{\partial t}\left(\frac{\partial x}{\partial p}\right) \qquad (29)$$
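As a minimal sketch of the sensitivity-equation idea, consider a single-stage ODE (the full DAE case adds the algebraic rows of (27)). The system ẋ = −px and all numerical values below are illustrative assumptions, not taken from the paper.

```python
# Forward sensitivity sketch: augment the state with x_p and integrate the
# original equation and its p-derivative (the eq (27) analogue) together.
import numpy as np
from scipy.integrate import solve_ivp

P = 0.7  # the parameter with respect to which we differentiate

def augmented(t, z, p):
    x, x_p = z
    dx = -p * x            # original model equation
    dx_p = -x - p * x_p    # differentiate the RHS with respect to p
    return [dx, dx_p]

# x_p(0) = 0 because the initial condition does not depend on p
# (cf. the differentiated initial conditions).
sol = solve_ivp(augmented, (0.0, 1.0), [1.0, 0.0], args=(P,),
                rtol=1e-10, atol=1e-12)
x_f, sens = sol.y[0, -1], sol.y[1, -1]

# Analytic check: x(t) = exp(-p t), so dx(1)/dp = -exp(-p).
print(abs(sens - (-np.exp(-P))) < 1e-7)
```

The augmented system shares the Jacobian of the original equations, which is the observation exploited in the text to reduce the extra factorization cost.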

The above equations are linear with respect to x_p and y_p. The major step in terms of the computational cost of their integration using implicit methods is the formation and factorization of the matrices

$$\left[ \alpha \frac{\partial f_k}{\partial \dot{x}} + \frac{\partial f_k}{\partial x} \;\;\; \frac{\partial f_k}{\partial y} \right]$$

(where α is a scalar determined by the integration formula and step length). These are exactly the same matrices as those involved in the integration of the original DAE system, and this fact can be exploited to reduce the additional cost incurred in evaluating sensitivities (Leis and Kramer, 1985; Caracotsios and Stewart, 1985).

The initial conditions (2) can also be differentiated with respect to p, yielding

$$\frac{\partial J_0}{\partial x} x_p(t_0) + \frac{\partial J_0}{\partial \dot{x}} \dot{x}_p(t_0) + \frac{\partial J_0}{\partial y} y_p(t_0) + \frac{\partial J_0}{\partial u} \frac{\partial u}{\partial p} + \frac{\partial J_0}{\partial v} \frac{\partial v}{\partial p} = 0 \qquad (30)$$

Assuming that a complete and nonredundant set of initial conditions is available (obtained, if necessary, by the application of the degree of freedom analysis of section 2), the initial values x_p(t_0), ẋ_p(t_0), y_p(t_0) can be calculated by solving the nonsingular linear system

$$\begin{bmatrix} \partial f_1/\partial x & \partial f_1/\partial \dot{x} & \partial f_1/\partial y \\ \partial J_0/\partial x & \partial J_0/\partial \dot{x} & \partial J_0/\partial y \end{bmatrix} \begin{bmatrix} x_p(t_0) \\ \dot{x}_p(t_0) \\ y_p(t_0) \end{bmatrix} = - \begin{bmatrix} \dfrac{\partial f_1}{\partial u}\dfrac{\partial u}{\partial p} + \dfrac{\partial f_1}{\partial v}\dfrac{\partial v}{\partial p} \\ \dfrac{\partial J_0}{\partial u}\dfrac{\partial u}{\partial p} + \dfrac{\partial J_0}{\partial v}\dfrac{\partial v}{\partial p} \end{bmatrix} \qquad (31)$$

In a similar fashion, differentiation of the junction conditions (3) leads to the linear system

$$\begin{bmatrix} \partial f_{k+1}/\partial x & \partial f_{k+1}/\partial \dot{x} & \partial f_{k+1}/\partial y \\ \partial J_k/\partial x & \partial J_k/\partial \dot{x} & \partial J_k/\partial y \end{bmatrix} \begin{bmatrix} x_p(t_k) \\ \dot{x}_p(t_k) \\ y_p(t_k) \end{bmatrix} = - \begin{bmatrix} \dfrac{\partial f_{k+1}}{\partial u}\dfrac{\partial u}{\partial p} + \dfrac{\partial f_{k+1}}{\partial v}\dfrac{\partial v}{\partial p} \\ \dfrac{\partial J_k}{\partial x^-} x_p^- + \dfrac{\partial J_k}{\partial \dot{x}^-} \dot{x}_p^- + \dfrac{\partial J_k}{\partial y^-} y_p^- + \dfrac{\partial J_k}{\partial u}\dfrac{\partial u}{\partial p} + \dfrac{\partial J_k}{\partial u^-}\dfrac{\partial u^-}{\partial p} + \dfrac{\partial J_k}{\partial v}\dfrac{\partial v}{\partial p} + \dfrac{\partial J_k}{\partial t_k}\dfrac{\partial t_k}{\partial p} \end{bmatrix} \qquad (32)$$

where x⁻, ẋ⁻, y⁻, and u⁻ denote x(t_k⁻), ẋ(t_k⁻), y(t_k⁻), and u(t_k⁻), respectively. The derivative ∂t_k/∂p is 1 if the parameter p with respect to which the sensitivities are being evaluated is t_k, and 0 otherwise.

The values obtained by solving (32) are the correct sensitivities at a fixed time t_k. However, if the parameter p is t_k, then the true gradients required by the optimization are the total derivatives of the variables. We note that, for any t ∈ [t_k, t_{k+1}), we have

$$\frac{\mathrm{d}x(t)}{\mathrm{d}p} = x_p(t) \qquad (33)$$

whereas at the moving end point of stage k the chain rule gives

$$\frac{\mathrm{d}x(t_k^-)}{\mathrm{d}p} = x_p(t_k^-) + \dot{x}(t_k^-)\frac{\partial t_k}{\partial p} \qquad (34)$$

In many practical applications, the junction conditions are of a less general nature than that implied by (3). A particularly common form is

$$J_k(x(t_k), x(t_k^-), v, t_k) = 0, \quad k = 1, \ldots, NS-1 \qquad (35)$$

where the Jacobian matrix ∂J_k/∂x is nonsingular. In such cases, it is possible to solve (35) in isolation to obtain the values of the differential variables x(t_k) at the start of each new stage solely in terms of the values of the same variables at the end of the previous stage, x(t_k⁻), and the time-invariant parameters v. Equations 32, 34, and 35 can then be combined to yield

$$x_p(t_k) = -\left(\frac{\partial J_k}{\partial x}\right)^{-1}\left[\frac{\partial J_k}{\partial x^-}\left(x_p(t_k^-) + \dot{x}(t_k^-)\frac{\partial t_k}{\partial p}\right) + \frac{\partial J_k}{\partial v}\frac{\partial v}{\partial p} + \frac{\partial J_k}{\partial t_k}\frac{\partial t_k}{\partial p}\right] - \dot{x}(t_k)\frac{\partial t_k}{\partial p} \qquad (36)$$

An even more specialized form of junction condition is (4). It can be seen that, for this case, (36) simplifies to

$$x_p(t_k) = x_p(t_k^-) + \left[\dot{x}(t_k^-) - \dot{x}(t_k)\right]\frac{\partial t_k}{\partial p} \qquad (37)$$

which is the same result as that reported by Morison (1984) and Rosen and Luus (1991).

In summary, the general equations (27) and (31)-(32) themselves define a multistage DAE system, the solution of which allows the calculation of the sensitivities x_p, ẋ_p, and y_p over the time domain of interest [t_0, t_f]. It should be noted that the matrices on the left-hand sides of (31) and (32) are the same as those involved in the initialization and reinitialization of the original DAE system (1) at t_k, k = 0, ..., NS − 1, and therefore the additional cost incurred beyond the solution of the original system (1) can be reduced.

The solution of the sensitivity equations necessitates the evaluation of the quantities ∂u/∂p and ∂v/∂p, where the control variables u(t) are understood to be replaced by the appropriate parametrized polynomials (21) over each stage k. Below we consider each of the three possible types of optimization parameter p (cf. (26)) separately.

4.2. Derivatives with Respect to Parameters u_ijk. From (21), it can be seen that

$$\frac{\partial u_j}{\partial u_{i'j'k'}}(t) = \delta_{kk'}\,\delta_{jj'}\,\phi_{i'}^{(M_j)}(\tau^{(k)}), \quad t \in [t_{k-1}, t_k), \; k = 1, \ldots, NS \qquad (38a)$$

where δ_kk' and δ_jj' are Kronecker deltas. Also, obviously,

$$\frac{\partial v}{\partial u_{ijk}} = 0 \quad \text{for all } i, j, k \qquad (38b)$$

4.3. Derivatives with Respect to Parameters v. Clearly

$$\frac{\partial u}{\partial v}(t) = 0 \qquad (39)$$

and

$$\frac{\partial v}{\partial v} = I \qquad (40)$$
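The junction sensitivity result (37) can be checked numerically on a toy two-stage system; the piecewise-constant dynamics below are an assumption chosen so that the exact gradient is known in closed form.

```python
# Numerical illustration of the continuity-junction sensitivity transfer,
# eq (37): x_p(t_k) = x_p(t_k^-) + [xdot(t_k^-) - xdot(t_k)] dt_k/dp.
import numpy as np
from scipy.integrate import solve_ivp

A1, A2, TF = 2.0, 0.5, 3.0  # stage rates and final time (illustrative)

def x_final(t1, x0=1.0):
    """Integrate xdot = A1 on [0, t1], then xdot = A2 on [t1, TF]."""
    s1 = solve_ivp(lambda t, x: [A1], (0.0, t1), [x0], rtol=1e-10)
    s2 = solve_ivp(lambda t, x: [A2], (t1, TF), [s1.y[0, -1]], rtol=1e-10)
    return s2.y[0, -1]

t1 = 1.2
# With p = t1: x_p(t1^-) = 0 inside stage 1, dt1/dp = 1, so the jump picked
# up at the junction is the flux mismatch A1 - A2; stage-2 dynamics do not
# depend on x, so it is carried unchanged to TF.
sens_37 = 0.0 + (A1 - A2) * 1.0

h = 1e-4  # central finite-difference check
fd = (x_final(t1 + h) - x_final(t1 - h)) / (2.0 * h)
print(abs(sens_37 - fd) < 1e-3)
```

Here x(TF) = x0 + A1 t1 + A2 (TF − t1) exactly, so both estimates should agree with the analytic gradient A1 − A2.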

4.4. Derivatives with Respect to Parameters t_k. By differentiating (21) and (22) with respect to t_k and combining the two, we obtain

$$\frac{\partial u_j}{\partial t_k}(t) = -\frac{t - t_{k-1}}{(t_k - t_{k-1})^2} \sum_{i=1}^{M_j} u_{ijk} \frac{\mathrm{d}\phi_i^{(M_j)}}{\mathrm{d}\tau}(\tau^{(k)}), \quad t \in [t_{k-1}, t_k], \; k = 1, \ldots, NS \qquad (41)$$

Similarly, by differentiating (21) and (22) with respect to t_{k-1}, we obtain

$$\frac{\partial u_j}{\partial t_{k-1}}(t) = -\frac{t_k - t}{(t_k - t_{k-1})^2} \sum_{i=1}^{M_j} u_{ijk} \frac{\mathrm{d}\phi_i^{(M_j)}}{\mathrm{d}\tau}(\tau^{(k)}), \quad t \in [t_{k-1}, t_k], \; k = 1, \ldots, NS \qquad (42)$$

The derivatives of u(t) over stage k with respect to all other stage end times t_l, l ≠ k − 1, k are zero. The derivatives of v with respect to all t_k are also zero.

Our implementation treats the stage durations

$$h_k \equiv t_k - t_{k-1}, \quad k = 1, \ldots, NS \qquad (43)$$

as optimization parameters instead of the stage end times t_k. This has been found to lead to better performance, as it is easier to ensure correct bounding of the stage durations (cf. eq 9) by enforcing bounds directly on the h_k. Since

$$t_k = \sum_{k'=1}^{k} h_{k'} \qquad (44)$$

the gradients of any quantity ψ with respect to h_k can be computed from the gradients with respect to the t_k (derived in section 4) through the simple relation

$$\frac{\mathrm{d}\psi}{\mathrm{d}h_k} = \sum_{k'=k}^{NS} \frac{\mathrm{d}\psi}{\mathrm{d}t_{k'}} \qquad (45)$$

5. Implementation and Numerical Experiments

The algorithm presented in this paper has been implemented as DAEOPT, a code for the solution of large-scale dynamic optimization problems. The solution of the DAE system is performed using the DASOLV code (Jarvis and Pantelides, 1992), which implements a BDF algorithm for large sparse systems of DAEs and incorporates efficient sensitivity evaluation and discontinuity handling. The optimization is carried out using the SRQPD code (Chen and Macchietto, 1989), which implements a reduced quadratic programming algorithm.

DAEOPT allows control variables of up to fifth polynomial order, with or without continuity at the control interval boundaries. The lengths of the control intervals may be fixed or variable. However, the choice of the number of control intervals and the order of the polynomial representation of the controls is left to the user, as these are essentially engineering decisions. For example, the way the controls are to be implemented on the real process may dictate the form of control and the degree of continuity required (e.g., piecewise constant). More generally, the polynomial representation may be merely an approximation to some more general nonlinear control function, and only the user can make a judgment on the number and order of the polynomials required.

More details on the DAEOPT code are given by Vassiliadis (1993). We now consider the application of this code to two chemical engineering problems.

5.1. Optimization of a Two-Stage Reactor System. We consider a system comprising two batch reactors as shown in Figure 3. The first reactor is initially loaded with 0.1 m³ of an aqueous solution of component A of concentration 2000 mol/m³. In the presence of a solid catalyst, this reacts to form components B and C according to the consecutive reaction scheme

2A → B → C

The first reactor is fitted with a heating coil which can be used to manipulate the reactor temperature over time. After completion of the first reaction, an amount of dilute aqueous solution of component B of concentration 600 mol/m³ is added instantaneously to the products of the first reactor, and the mixture is loaded into a second reactor where the three parallel reactions

B → D,  B → E,  2B → F

take place under isothermal conditions at a fixed temperature. The objective of the optimization is to maximize the amount of component D in the product of the second reactor while keeping the combined processing time for both reaction steps under 180 min. The decision variables are the temperature profile of the first reaction step, the durations of the two steps, and the amount of B added at the mixing step.

The above defines a two-stage system, the key variables being the concentrations of the six components A, B, C, D, E, and F. The equations describing the two stages are as follows.

Stage 1:

$$\dot{c}_A = -2 k_1(T) c_A^2$$

$$\dot{c}_B = k_1(T) c_A^2 - k_2(T) c_B$$

$$\dot{c}_C = k_2(T) c_B$$

c, = 0 e, = 0 e, = 0 where the kinetic constants are given by k , ( T ) = 0.0444e-2m/Tm3/(mol-min) and k,(T) = 6889.0e-IT

min-'

Stage 2 c A CB

=0

-

= -0.02CB 0.05CB - 2 X 4.0 X 106C$

c, = 0

2118 Ind. Eng. Chem. Res., Vol. 33, No. 9, 1994

tD= 0.O2CB

(48d)

cE= 0.05CB

(4W

*

e, = 4.0 X 10dCBz

(480
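As a numerical check on the stage 2 kinetics, the sketch below integrates the three parallel reactions with a classical fourth-order Runge-Kutta scheme (pure Python; the initial concentration of 1000 mol/m³ is a hypothetical value, not taken from the paper). Stoichiometry implies that c_B + c_D + c_E + 2c_F is constant along the solution, and Runge-Kutta methods preserve such linear invariants exactly, so the final assertion should hold to round-off.

```python
# Stage 2 kinetics: B -> D, B -> E, 2B -> F (isothermal).
# Rate constants as printed in the text; units of mol/m^3 and min.
def stage2_rhs(c):
    cB, cD, cE, cF = c
    r1 = 0.02 * cB          # B -> D
    r2 = 0.05 * cB          # B -> E
    r3 = 4.0e-6 * cB**2     # 2B -> F
    return [-r1 - r2 - 2.0 * r3, r1, r2, r3]

def rk4(rhs, c, dt, nsteps):
    # classical fourth-order Runge-Kutta on a list-valued state
    for _ in range(nsteps):
        k1 = rhs(c)
        k2 = rhs([ci + 0.5 * dt * k for ci, k in zip(c, k1)])
        k3 = rhs([ci + 0.5 * dt * k for ci, k in zip(c, k2)])
        k4 = rhs([ci + dt * k for ci, k in zip(c, k3)])
        c = [ci + dt / 6.0 * (a + 2 * b + 2 * g + d)
             for ci, a, b, g, d in zip(c, k1, k2, k3, k4)]
    return c

c0 = [1000.0, 0.0, 0.0, 0.0]                    # hypothetical cB(t1+), mol/m^3
cB, cD, cE, cF = rk4(stage2_rhs, c0, 0.1, 740)  # 74 min in 0.1 min steps
# B consumed must equal D + E + 2F (per-reaction stoichiometry)
assert abs((c0[0] - cB) - (cD + cE + 2.0 * cF)) < 1e-6
```

The invariant check exercises only the structure of the rate expressions, not the printed rate constants themselves, so it remains valid even if the constants are read differently.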


The mass balances on E and F are not actually required for the purposes of the optimization calculations, but they are nevertheless included in the model for completeness. The mixing operation at time t₁ is described by the following equations:

V₂c_A(t₁⁺) = V₁c_A(t₁⁻)    (49a)

V₂c_B(t₁⁺) = V₁c_B(t₁⁻) + S c_B^in    (49b)

V₂c_C(t₁⁺) = V₁c_C(t₁⁻)    (49c)

where V₁ is the volume of material loaded in the first reactor (0.1 m³), and V₂ is the volume in the second reactor given by

V₂ = V₁ + S    (50)

where S is the volume of solution of B added. The concentration of the latter is denoted by c_B^in (=600 mol/m³).

Figure 3. Two-stage reactor system.

The optimization problem to be solved can then be written as

max over T(t), S, t₁, t₂ of V₂c_D(t₂)    (51)

subject to the various constraints already mentioned, the bounds

298 ≤ T(t) ≤ 398 (K),  t ∈ [0, t₁]    (52a)

0 ≤ S ≤ 0.1 (m³)    (52b)

on the control variable T(t) and the time-invariant parameter S, and the end-point constraint

c_D(t₂) ≥ 150 (mol/m³)    (52c)

on the final concentration of component D.

For the solution of this problem, the first stage was divided into five elements, with a piecewise constant temperature profile T(t) over them. As the second stage does not involve any time-varying control variables, it was not necessary to subdivide it into more than one element. Thus, a total of six elements were considered, and their lengths were initially set at 15 min and bounded between 10 and 100 min. The temperature profile was initialized to a uniform 350 K throughout the first stage, and an initial value of 0.1 m³ was assumed for the injected volume S.

The optimal durations for the two stages are found to be 106.0 and 74.0 min, respectively, thus taking up the entire available time of 180 min. The optimal injected volume S is 0.070 m³, and the maximum attainable yield of component D is 25.55 mol. Figure 4 shows the optimal temperature profile for the first reaction step, while concentration profiles for the key components during the two stages are shown in Figure 5. The solution of this problem using DAEOPT required the solution of 19 quadratic programming (QP) subproblems by the SRQPD code, as well as 25 line-search evaluations of the objective function and the constraints.
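The duration-based gradient transformation of eq 45 (gradients with respect to the h_k assembled from gradients with respect to the t_k) can be verified numerically on a toy function. In the sketch below, the function ψ is a hypothetical stand-in for the DAE-based objective, and the durations 106 and 74 min are simply the optimal values quoted above used as an evaluation point.

```python
import math

# Toy objective psi(t1, t2) of the stage end-times (hypothetical form,
# standing in for the simulation-based objective of the paper).
def psi_of_t(t):
    t1, t2 = t
    return math.sin(t1) + t1 * t2 + t2**2

def grad_t(t, eps=1e-5):
    # central finite differences with respect to the stage end-times
    g = []
    for k in range(len(t)):
        tp, tm = list(t), list(t)
        tp[k] += eps
        tm[k] -= eps
        g.append((psi_of_t(tp) - psi_of_t(tm)) / (2 * eps))
    return g

def grad_h_from_grad_t(gt):
    # eq 45: d psi / d h_k = sum over l >= k of d psi / d t_l
    return [sum(gt[k:]) for k in range(len(gt))]

h = [106.0, 74.0]            # stage durations (min)
t = [h[0], h[0] + h[1]]      # t_k = sum_{l<=k} h_l  (eq 44)
gh = grad_h_from_grad_t(grad_t(t))

# direct check: differentiate psi with respect to the durations themselves
def psi_of_h(h):
    return psi_of_t([h[0], h[0] + h[1]])

for k in range(2):
    hp, hm = list(h), list(h)
    hp[k] += 1e-5
    hm[k] -= 1e-5
    direct = (psi_of_h(hp) - psi_of_h(hm)) / 2e-5
    assert abs(direct - gh[k]) < 1e-3
```

Because t₂ depends on both h₁ and h₂ while t₁ depends only on h₁, perturbing h₁ shifts every later end-time, which is exactly what the cumulative sum in eq 45 captures.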

Figure 4. Optimal temperature profile for first reaction step.

The optimization and integration tolerances were set at 10⁻⁶ and 10⁻⁷, respectively. Each QP subproblem necessitates the solution of the original DAE system and the corresponding sensitivity equations, while only the DAE equations need to be solved during line-search evaluations. The overall computational cost was under 2 min of CPU time on a SUN SPARCstation 10/41 computer.

5.2. Optimization of a Batch Distillation System. This example is concerned with the separation of a 10-component mixture using batch distillation. The equipment used for this purpose consists of a 20-tray column, a total condenser, and a reboiler, as shown in Figure 6. The column is operated in three stages, producing a light cut rich in component 1, an off-cut, and a heavy cut rich in component 6. An amount of 10 kmol of feed with the composition shown in Table 1 is charged to the reboiler at the start of the operation. It is required that both the light and the heavy cuts have a purity of at least 90% in their main component. Furthermore, at least 2 kmol of light cut and 1.4 kmol of heavy cut must be recovered from each batch. Subject to these constraints, the objective of the optimization is to minimize the overall processing time for the batch by manipulating the internal column reflux ratio, R(t). For the purposes of this paper, this is

defined as the ratio of the reflux to the vapor stream entering the condenser.

The behavior of the distillation column is modeled assuming fixed molar holdups of 0.005 kmol for each tray and 0.05 kmol in the condenser. Constant relative volatilities for the various components (see Table 1) are used for modeling phase equilibrium. The initial tray and condenser compositions are assumed to be the same as that of the feed. Junction conditions are introduced to reset the holdup in the accumulator tank to zero at the end of the first and second stages, while enforcing continuity of the reboiler holdup and column compositions.

Figure 5. Concentration profiles for two-stage reactor system: (a) reaction stage 1; (b) reaction stage 2.

Figure 6. Batch distillation column.

Table 1. Data for Batch Distillation Example

component   feed mole fraction   relative volatility
    1             0.30                  8.0
    2             0.06                  6.0
    3             0.06                  5.8
    4             0.05                  5.2
    5             0.04                  5.0
    6             0.30                  4.0
    7             0.05                  3.0
    8             0.04                  2.5
    9             0.05                  2.0
   10             0.05                  1.0

Table 2. Optimal Cut Characteristics for Batch Distillation Problem

                        mole fraction
component       light cut    off-cut    heavy cut
    1            0.89997     0.24589     0.00101
    2            0.04926     0.09884     0.00770
    3            0.03503     0.10354     0.01106
    4            0.00727     0.09013     0.02656
    5            0.00339     0.07054     0.03023
    6            0.00398     0.32114     0.90000
    7            0.00038     0.03263     0.02266
    8            0.00025     0.02006     0.00076
    9            0.00027     0.01681     0.00001
   10            0.00020     0.00042     0.00000
amount (kmol)      2.00        5.06        1.40
duration (h)       4.50        6.50        4.02

For the purposes of the optimization, each of the three stages is subdivided into three elements, with a piecewise constant reflux ratio R(t) over them. Lower and upper

bounds of 0.2 and 0.99 are imposed on R(t). The corresponding bounds on the element durations are 0.5 and 3.5 h.

The problem described above results in a system of 444 equations comprising 232 differential and 212 algebraic variables. There are also 18 optimization parameters, namely the level of the control over each of the 9 (=3 × 3) elements, plus the durations of the latter. The optimization of this problem using DAEOPT was initialized assuming a duration of 3 h for each of the three stages, with corresponding reflux ratios of 0.9, 0.6, and 0.9.

The minimum batch processing time in the optimal solution obtained is 15.0 h. Table 2 shows the characteristics of the three cuts in this solution. It can be seen that the purity constraints on both the light and the heavy cuts are active, and so is the production constraint on the heavy cut. Note that the recovery of the first three components exceeds 100% of the amount charged initially in the reboiler. This is due to the additional initial holdup on the trays and the condenser. The initial and optimal reflux policies are shown in Figure 7, while Figure 8 illustrates the profiles of the holdups in

Figure 7. Initial and optimal reflux ratio policies.

Figure 8. Molar holdups in column reboiler and accumulator.

the reboiler and the accumulator in the optimal solution. The profiles of the mole fractions of the two main components in the liquid collected in the accumulator are shown in Figure 9.

The optimization run (with optimization and integration tolerances of 10⁻⁴ and 10⁻⁵, respectively) required the solution of 68 QP subproblems and 116 evaluations of the objective function and the constraints during line searches. The overall computation took approximately 12 h on a Silicon Graphics IRIS workstation.

6. Concluding Remarks

This paper has considered the formulationand solution of multistage dynamic optimization problems, each stage of which is described by an index-1 DAE system. The form of the problem considered allows point equality and inequality constraints to be imposed on the system.
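The control vector parametrization strategy underlying this formulation can be illustrated in miniature. The sketch below is a toy problem, not the DAEOPT algorithm or either example above: a scalar ODE x' = -x + u(t) with a piecewise constant control on two equal intervals, an explicit Euler integrator in place of the BDF code, and a crude grid search standing in for the SQP solver. All names and numbers here are illustrative assumptions.

```python
# Miniature control vector parametrization: drive x(2) toward a
# set-point of 1 while penalizing control effort, with the two
# piecewise-constant control levels as the only decision variables.
def simulate(u_levels, nsteps=400, T=2.0):
    dt = T / nsteps
    x = 0.0
    for i in range(nsteps):
        t = i * dt
        u = u_levels[0] if t < T / 2 else u_levels[1]
        x += dt * (-x + u)          # explicit Euler step of x' = -x + u
    return x

def objective(u_levels):
    xT = simulate(u_levels)
    effort = sum(ui**2 for ui in u_levels)
    return (xT - 1.0) ** 2 + 0.01 * effort

# crude NLP surrogate: exhaustive search on a coarse grid of levels
best = (float("inf"), None)
grid = [0.1 * i for i in range(21)]  # levels in [0, 2]
for u1 in grid:
    for u2 in grid:
        f = objective((u1, u2))
        if f < best[0]:
            best = (f, (u1, u2))

assert best[0] < objective((0.0, 0.0))   # better than doing nothing
```

The key feature mirrored here is that the dynamic system is solved by simulation for each trial parameter vector, so the optimizer only ever sees a small number of decision variables regardless of how finely the state trajectory is resolved.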

Figure 9. Mole fractions of components 1 and 6 in accumulator.

General initial and junction conditions can also be imposed, and a method for extracting complete and nonredundant subsets of these is presented, identifying additional degrees of freedom for the optimization where appropriate.

The problem is solved using a control vector parametrization technique employing piecewise polynomial control variables. The main attraction of this approach is its ability to handle large systems without the need to solve excessively large optimization problems. Consider, for instance, the example presented in section 5.2 of this paper, which was described by a system of 444 differential and algebraic variables. Even for a relatively loose integration tolerance of 10⁻², the integration of this system using a state-of-the-art BDF-based DAE integrator with the optimization parameters set at their optimal values requires 960 integration steps. It is therefore not unreasonable to infer that a complete discretization of the differential and algebraic variables would require the generation of hundreds of thousands (=(number of steps) × (system size)) of variables and constraints in order to achieve comparable solution accuracy.

Furthermore, it is worth noting that nonnegligible differences exist between the DAE solution obtained at the 10⁻² tolerance and that reported in section 5.2 (obtained at a tolerance of 10⁻⁵). In particular, the mole fractions of the main components in the accumulator calculated at the looser tolerance exceed by more than 5% those calculated at the tighter one. Thus using the former may lead to "optimal" reflux policies that do not actually satisfy the purity requirements. On the other hand, the use of the tighter tolerance increases the required number of steps by almost a factor of 6, and this would appear to accentuate even further the disadvantages of complete discretization.
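The size estimate quoted above can be made concrete with the numbers given in the text (960 BDF steps at the loose tolerance, 444 variables, and roughly 6 times more steps at the tighter tolerance):

```python
steps_loose = 960        # BDF integration steps at the loose tolerance
system_size = 444        # differential + algebraic variables

# full discretization carries roughly one copy of every variable per step
unknowns_loose = steps_loose * system_size
print(unknowns_loose)    # 426240, i.e. hundreds of thousands of unknowns

# the tighter tolerance multiplies the step count by almost 6
unknowns_tight = 6 * unknowns_loose
print(unknowns_tight)    # 2557440, roughly 2.6 million unknowns
```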
Despite the relative generality of the problem considered in this paper, a number of restrictive assumptions were made regarding the form of the underlying multistage system. The assumption of solvability of the DAE system given values of the control variables and the time-invariant parameters is a reasonable one for most systems of practical interest, provided u(t) and v are maintained within realistic bounds. In particular, for index-1 DAE systems satisfying condition c in section 2, the existence of the solution can be guaranteed provided an initial point exists. However, it is possible in principle that such a point may not exist for certain control and time-invariant parameter values, and judicious bounds must be imposed on these quantities to ensure that this is avoided. From a more practical viewpoint, DAE codes may fail to establish an initial point

even if one does exist. The DASOLV code that we used employs a robust algebraic equation solver for this initialization, which, in our experience, renders such failures very rare.

As is well-known, the modeling of some practical engineering systems may lead to DAE systems of index exceeding unity (see, e.g., Pantelides et al., 1988). These are clearly outside the scope of the present paper. The optimization of multistage DAE systems of index higher than unity has recently been considered by Pantelides et al. (1994).

The assumption that all variables may occur in all stages was made for notational and implementational simplicity and allows complete generality. However, for large-scale problems it may be better to provide for specific nonredundant formulations for each stage, with appropriate specific junction conditions.

Similarly, the assumption of a piecewise polynomial representation of the controls can be relaxed. In fact, any piecewise continuous function defined in terms of a finite set of parameters could be used for this purpose. Of course, the user would then have to provide subroutines for the computation of the function and its derivatives with respect to these parameters and time.

It is also worth emphasizing that the conversion of the optimal control problem to a nonlinear program solved by a standard NLP code implies that only a local optimum is obtained, with no guarantee of global optimality. This limitation is, of course, not specific to the control parametrization approach, but applies to all the approaches discussed in section 3.

The really limiting assumptions made in this work are those relating to the number and nature of the individual stages being known a priori. For some practical applications this is clearly not true. For instance, individual batch processes (e.g., reaction, distillation, filtration) can be described in terms of sets of DAEs of the type considered in this paper.
However, scheduling the operation of entire batch plants over a given time horizon involves the determination of the number of batches that have to undergo each type of process, as well as their relative ordering with respect to time. Such problems may have important applications (e.g., in batch process synthesis), but their optimal solution would necessitate the introduction of discrete decisions, resulting in a mixed integer nonlinear programming problem (MINLP) rather than a simple nonlinear program. The solution of these problems must therefore await the evolution of effective algorithms for large-scale MINLPs. Finally, general path equality and inequality constraints of the form (7) and (8) were excluded from the solution algorithm presented here. Their effective handling is considered in the second part of this paper (Vassiliadis et al., 1994).

Acknowledgment

This research was supported by a joint SERC/AFRC grant.

Nomenclature

A0 = matrix defined by eq 13
B0 = matrix defined by eq 15
c_k^e = equality path constraints over stage k
c_k^i = inequality path constraints over stage k
C = objective function
c_A = concentration of component A
c_B = concentration of component B
c_C = concentration of component C
c_D = concentration of component D
c_E = concentration of component E
c_F = concentration of component F
f_k = equations for stage k
h_k = duration of stage k
i = standard index for Lagrange polynomials
I = unit matrix
j = standard index for control variables
J0 = initial conditions
J_k = junction conditions at end of stage k
k = standard stage index
k₁ = rate constant for reaction 1
k₂ = rate constant for reaction 2
L0 = lower triangular matrix defined by eq 15
m = number of algebraic variables y
M_j = polynomial order of jth control variable profile
n = number of differential variables x
NS = number of stages
p = standard symbol for an optimization parameter (cf. eq 26)
P0 = row permutation matrix defined by eq 15
q = number of time-invariant parameters v
Q0 = column permutation matrix defined by eq 15
r_k^e = equality constraints at end of stage k
r_0^i = inequality constraints at initial time
r_k^i = inequality constraints at end of stage k
S = amount of material added between first and second reactors
t = time
t_0 = initial time
t_f = final time
t_k = end-time of stage k
T = temperature
u = control variables
u^L = lower bounds on control variables
u^U = upper bounds on control variables
u_ijk = value of control variable j at time τ_i in stage k
U = domain of u
U0 = upper triangular matrix defined by eq 15
v = time-invariant parameters
V = domain of v
V₁ = volume of material loaded in first reactor
V₂ = volume of material in second reactor
x = differential variables
ẋ = time derivatives of differential variables x
x_p = sensitivities of x with respect to p
ẋ_p = time derivatives of sensitivities of x with respect to p
X = domain of x
y = algebraic variables
y_p = sensitivities of y with respect to p
Y = domain of y

Greek Letters

Δ_k^max = upper bound on duration of stage k
Δ_k^min = lower bound on duration of stage k
μ = number of initial conditions J0
ρ = rank of matrix A0
σ = number of control variables u
τ^(k) = normalized time over stage k
τ_i = point at which φ_i^(M_j)(τ_i) = 1
φ_i^(M_j) = Lagrange polynomial of order M_j
Ω = constraints on time-invariant parameters v

Literature Cited

Bellman, R. Dynamic Programming; Princeton University Press: Princeton, NJ, 1957.
Biegler, L. T. Solution of Dynamic Optimization Problems by Successive Quadratic Programming and Orthogonal Collocation. Comput. Chem. Eng. 1984, 8, 243-248.
Brenan, K. E.; Campbell, S. L.; Petzold, L. R. Numerical Solution of Initial-Value Problems in Differential-Algebraic Equations; North-Holland, Elsevier Scientific Publishing Co.: New York, 1989.
Bulirsch, R. Die Mehrzielmethode zur numerischen Lösung von nichtlinearen Randwertproblemen und Aufgaben der optimalen Steuerung; Deutsche Forschungs- und Versuchsanstalt für Luft- und Raumfahrt, Oberpfaffenhofen, Federal Republic of Germany, Carl-Cranz Gesellschaft, 1971.
Canon, M. D.; Cullum, C. D.; Polak, E. Theory of Optimal Control and Mathematical Programming; McGraw-Hill: New York, 1970.
Caracotsios, M.; Stewart, W. E. Sensitivity Analysis of Initial Value Problems with Mixed ODEs and Algebraic Equations. Comput. Chem. Eng. 1985, 9, 359-365.
Chen, C. L.; Macchietto, S. Successive Reduced Quadratic Programming Code - User's Guide; Technical Report, Centre for Process Systems Engineering, Imperial College: London, 1989.
Cuthrell, J. E.; Biegler, L. T. On the Optimization of Differential-Algebraic Process Systems. AIChE J. 1987, 33, 1257-1270.
Dixon, L. C. W.; Bartholomew-Biggs, M. C. Adjoint-Control Transformations for Solving Practical Optimal Control Problems. Optim. Control Appl. Methods 1981, 2, 365-381.
Gear, C. W. Simultaneous Numerical Solution of Differential-Algebraic Equations. IEEE Trans. Circuit Theory 1971, CT-18, 89-95.
Gritsis, D. M. The Dynamic Simulation and Optimal Control of Systems Described by Index Two Differential-Algebraic Equations. Ph.D. Thesis, University of London, 1990.
Jacobson, D. H.; Mayne, D. Q. Differential Dynamic Programming; American Elsevier: New York, 1970.
Jarvis, R. B.; Pantelides, C. C. DASOLV - a Differential-Algebraic Equation Solver, v. 1.2; Technical Report, Centre for Process Systems Engineering, Imperial College: London, 1992.
Leis, J. R.; Kramer, M. A. Sensitivity Analysis of Systems of Differential and Algebraic Equations. Comput. Chem. Eng. 1985, 9, 93-96.
Logsdon, J. S. Efficient Determination of Optimal Control Profiles for Differential Algebraic Systems. Ph.D. Thesis, Carnegie-Mellon University, 1990.
Logsdon, J. S.; Biegler, L. T. Accurate Solution of Differential-Algebraic Optimization Problems. Ind. Eng. Chem. Res. 1989, 28, 1628-1639.
Logsdon, J. S.; Biegler, L. T. Decomposition Strategies for Large-Scale Dynamic Optimization Problems. Chem. Eng. Sci. 1992, 47, 851-864.
Luus, R. Optimal Control by Dynamic Programming Using Systematic Reduction in Grid Size. Int. J. Control 1990a, 51, 995-1013.
Luus, R. Application of Dynamic Programming to High-Dimensional Non-Linear Optimal Control Problems. Int. J. Control 1990b, 52, 239-250.
Luus, R. Piecewise Linear Continuous Optimal Control by Iterative Dynamic Programming. Ind. Eng. Chem. Res. 1993, 32, 859-865.
Luus, R.; Rosen, O. Application of Dynamic Programming to Final State Constrained Optimal Control Problems. Ind. Eng. Chem. Res. 1991, 30, 1525-1530.
Miele, A.; Pritchard, R. E.; Damoulakis, J. N. Sequential Gradient-Restoration Algorithm for Optimal Control Problems. JOTA 1970, 5, 235-282.
Miele, A.; Damoulakis, J. N.; Cloutier, J. R.; Tietze, J. L. Sequential Gradient-Restoration Algorithm for Optimal Control Problems with Nondifferential Constraints. JOTA 1974, 13, 218-255.
Miele, A. Recent Advances in Gradient Algorithms for Optimal Control Problems. JOTA 1975, 17, 361-430.
Morison, K. R. Optimal Control of Processes Described by Systems of Differential-Algebraic Equations. Ph.D. Thesis, University of London, 1984.
Morison, K. R.; Sargent, R. W. H. Optimization of Multistage Processes Described by Differential-Algebraic Equations. Lect. Notes Math. 1986, 1230, 86-102.
Pantelides, C. C.; Barton, P. I. Equation-Oriented Dynamic Simulation: Current Status and Future Perspectives. Comput. Chem. Eng. 1993, 17S, S263-S285.
Pantelides, C. C.; Gritsis, D.; Morison, K. R.; Sargent, R. W. H. The Mathematical Modelling of Transient Systems Using Differential-Algebraic Equations. Comput. Chem. Eng. 1988, 12, 449-454.
Pantelides, C. C.; Sargent, R. W. H.; Vassiliadis, V. S. Optimal Control of Multistage Systems Described by High-Index Differential-Algebraic Equations. In Computational Optimal Control; Bulirsch, R., Kraft, D., Eds.; Int. Ser. Numer. Math. 115; Birkhäuser Publishers: Basel, 1994; pp 177-191.
Polak, E. Computational Methods in Optimization: A Unified Approach; Academic Press: New York, 1971.
Pollard, G. P.; Sargent, R. W. H. Off Line Computation of Optimum Controls for a Plate Distillation Column. Automatica 1970, 6, 59-76.
Renfro, J. G. Computational Studies in the Optimization of Systems Described by Differential/Algebraic Equations. Ph.D. Thesis, University of Houston, 1986.
Renfro, J. G.; Morshedi, A. M.; Asbjornsen, O. A. Simultaneous Optimization and Solution of Systems Described by Differential/Algebraic Equations. Comput. Chem. Eng. 1987, 11, 503-517.
Rosen, O.; Luus, R. Evaluation of Gradients for Piecewise Constant Optimal Control. Comput. Chem. Eng. 1991, 15, 273-281.
Sargent, R. W. H.; Sullivan, G. R. The Development of an Efficient Optimal Control Package. Proc. IFIP Conf. Optim. Tech. 1977, 8th, part 2.
Sargent, R. W. H.; Sullivan, G. R. Development of Feed Change-over Policies for Refinery Distillation Units. Ind. Eng. Chem. Process Des. Dev. 1979, 18, 113-124.
Sullivan, G. R. Development of Feed Changeover Policies for Refinery Distillation Units. Ph.D. Thesis, University of London, 1977.
Tsang, T. H.; Himmelblau, D. M.; Edgar, T. F. Optimal Control via Collocation and Nonlinear Programming. Int. J. Control 1975, 21, 763-768.
Vasantharajan, S.; Biegler, L. T. Simultaneous Strategies for the Optimization of Differential-Algebraic Systems with Enforcement of Error Criteria. Comput. Chem. Eng. 1990, 14, 1083-1100.
Vassiliadis, V. S. DAEOPT - A Differential-Algebraic Dynamic Optimization Code, v. 2.0; Technical Report, Centre for Process Systems Engineering, Imperial College: London, 1993.
Vassiliadis, V. S.; Sargent, R. W. H.; Pantelides, C. C. Solution of a Class of Multistage Dynamic Optimization Problems. 2. Problems with Path Constraints. Ind. Eng. Chem. Res. 1994, following paper in this issue.
Vlassenbroeck, J.; van Dooren, R. A Chebyshev Technique for Solving Nonlinear Optimal Control Problems. IEEE Trans. Autom. Control 1988, 33, 333-340.

Received for review October 11, 1993
Revised manuscript received May 26, 1994
Accepted June 15, 1994

Abstract published in Advance ACS Abstracts, August 1, 1994.