SENSITIVITY ANALYSIS IN OPTIMAL SYSTEMS BASED ON THE MAXIMUM PRINCIPLE

T. M. CHANG¹ AND C. Y. WEN
Chemical Engineering Department, West Virginia University, Morgantown, W. Va.

¹ Present address, Goodyear Tire & Rubber Co., Akron, Ohio

The sensitivity equations, necessary for determining the effects of first-order variations in parameters on the optimal objective function and decisions of a continuous process, are derived based on Pontryagin's maximum principle. Methods for optimal design of parameter-sensitive systems are presented. The algorithm of the maximum principle is extended to problems with sensitivity constraints. The sensitivities of the optimal temperature and the maximum conversion for a single exothermic reaction taking place in a tubular reactor are calculated.

The solution of an optimization problem is obtained based on a set of numerical values of the parameters. Usually, the values of the parameters are subject to change owing either to uncertainties in the experimental evaluation or to variations of the operating and surrounding conditions. Therefore, it is desirable to know how the optimal solution changes if some of the parameters are changed. To answer this question, it is necessary to make a sensitivity study on an optimal system. So far, the sensitivity analysis has been made in a few classes of optimization problems, such as the linear programming problem (Garvin, 1960; Hadley, 1962) and the convex quadratic programming problem (Boot, 1963, 1964). Since the maximum principle has been shown to be powerful in optimizing continuous processes, a sensitivity study on problems solved by the maximum principle is reported here.

If the sensitivity analysis shows that the optimal performance of a system is strongly dependent on the selected parameter values which are subject to variation, the actual system performance may substantially deviate from the specifications. To ensure better system performance over a range of parameter values, the parametric sensitivity must be taken into consideration in optimal system design. In other words, the decision variables of a system should be determined on the basis of not only optimality but also sensitivity of the system. It is the purpose of this paper to derive sensitivity equations and show how they may be used to develop methods for the design of parameter-sensitive systems.

Definition of Sensitivity

Let x₁ = f(w₁). The sensitivity of x₁ to the variation of w₁ near w₁ = w̄₁ is defined as the ratio of the percentage change in x₁ to the percentage change in w₁ at w₁ = w̄₁ (Bode, 1945). Let Δx₁ = x₁ − x̄₁ and Δw₁ = w₁ − w̄₁, where x̄₁ = f(w̄₁). The sensitivity of x₁ to the variation of w₁ at w̄₁ is expressed as

S_w₁^x₁ = (Δx₁/x̄₁)/(Δw₁/w̄₁) = (w̄₁/x̄₁)(Δx₁/Δw₁)   (1)

where Δx₁/Δw₁ is called the sensitivity coefficient.

If the change Δw₁ is infinitesimal, Equation 1 can be written as

S_w₁^x₁ = (w̄₁/x̄₁)(dx₁/dw₁)   (2)

where dx₁/dw₁ is the sensitivity coefficient for the first-order variation.

The definition of sensitivity can be readily extended to a function of several variables. Let x₁ = f(w₁, w₂, ..., w_p); the sensitivity of x₁ to the first-order variation of wᵢ, where i = 1, 2, ..., p, at a given set w̄ = {w̄₁, w̄₂, ..., w̄_p} can be defined as

S_wᵢ^x₁ = (w̄ᵢ/x̄₁)(∂x₁/∂wᵢ)   (3)

where S_wᵢ^x₁ is the sensitivity of x₁ to wᵢ. If x is a column vector having components x₁, x₂, ..., x_n, and each component is a function of a column vector, w = {w₁, w₂, ..., w_p}, then all the sensitivity coefficients of x with respect to w can be expressed by the following (n × p) matrix,

x_w = [∂xᵢ/∂wⱼ],   i = 1, 2, ..., n;  j = 1, 2, ..., p

which is called the sensitivity coefficient matrix. Here, x_w represents the matrix of partial derivatives of x with respect to w.

The simultaneous influence of the variations of several parameters w on x₁ can be expressed by the Taylor expansion

Δx₁ = Σᵢ (∂x₁/∂wᵢ)Δwᵢ + higher order terms   (4)

where all the partial derivatives are evaluated at w = w̄. The change in x₁ may be approximated by the first-order term in Equation 4 if only the first-order variation is important. However, if a more accurate estimate is required, or in the case of a larger variation in parameters, the higher order terms in Equation 4 must be considered. Since the changes in the system performance caused by the variation in parameters can in most cases be closely approximated by the first-order term, consideration of the first-order variation is sufficient.

As can be seen from the above definition, the main task in the sensitivity analysis is to find the partial derivatives given in Equation 3, namely, the sensitivity coefficients. A method of finding the sensitivity coefficients in the problem of solving a system of differential equations by differential analyzers has been developed by Miller and Murray (1953) and used in the sensitivity analysis of control systems (Chang, 1961; Tomovic, 1964).

Sensitivity Analysis

In the following, the sensitivity analysis in a problem solved by Pontryagin's maximum principle is considered. The algorithm of the maximum principle is given as follows (Fan, 1966; Pontryagin et al., 1962).

Figure 1. Simple process

The performance equations of a continuous simple process, as shown in Figure 1, have the form

dx/dt = f(x, θ, w)   (5)

with the initial condition x(0) = α, where x(t) and α are s-dimensional column vectors representing the state of the process, θ(t) is an r-dimensional column vector representing the decision, and w is a p-dimensional column vector of the system parameters. An optimization problem associated with such a process is to find a decision vector, θ(t), subject to the constraints

ψᵢ[θ(t)] ≤ 0,   i = 1, 2, ..., m   (6)

such that the objective function

J = Σᵢ₌₁ˢ cᵢxᵢ(T) = c x(T)   (7)

is maximized (or minimized), where c is a constant row vector with s components. To solve a free right-end problem, an s-dimensional row vector, z(t), of the adjoint variables, and a Hamiltonian function, H, are introduced which satisfy the following relations:

H = z f(x, θ, w)   (8)

dz/dt = −∂H/∂x = −z(∂f/∂x)   (9)

and

z(T) = c   (10)

where ∂H/∂x is a row vector with components ∂H/∂xᵢ (i = 1, 2, ..., s), and ∂f/∂x is an (s × s) matrix.

The necessary condition for the objective function, J, to be a maximum (or minimum) with respect to θ(t) is

H[z(t), x(t), θ̂(t), w] = max (or min) H[z(t), x(t), θ(t), w]   for 0 ≤ t ≤ T   (11)

the maximum (or minimum) being taken over the admissible θ(t), or, if θ̂(t) lies in the interior of the decision domain,

∂H/∂θ = 0   (12)

where ∂H/∂θ is a row vector with components ∂H/∂θᵢ, i = 1, 2, ..., r. The sensitivity analysis associated with such an optimization problem includes the study of the effects of the variations in system parameters, w, initial conditions, α, and constants, c, on the optimal objective function and the optimal decisions.
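To make Equations 1 to 3 concrete, the short sketch below (an added illustration, not part of the original paper; the function f and the nominal values are hypothetical) compares the finite-change sensitivity of Equation 1 with the first-order sensitivity of Equation 2 for a scalar example.

```python
# Minimal sketch of Equations 1-3 for a scalar example (hypothetical f and values).
def f(w):
    return 2.0 * w ** 2 + 1.0          # x1 = f(w1), an arbitrary illustrative function

w_bar = 3.0                            # nominal parameter value
x_bar = f(w_bar)                       # nominal response x1_bar = f(w_bar)

# Equation 1: ratio of percentage changes for a finite perturbation in w1.
dw = 0.01 * w_bar                      # a 1% change in w1
S_finite = ((f(w_bar + dw) - x_bar) / x_bar) / (dw / w_bar)

# Equation 2: first-order sensitivity using the derivative dx1/dw1 (analytic here).
dxdw = 4.0 * w_bar                     # dx1/dw1 for this f
S_first_order = (w_bar / x_bar) * dxdw

print(S_finite, S_first_order)         # the two agree to first order in the perturbation
```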


A. Effect of Variation in w on J with Fixed Optimal Decisions. This analysis deals with the sensitivity of the system performance with respect to the variation in parameters when the system is operated at the optimal conditions. The sensitivity coefficient, ∂J/∂w, can be obtained by differentiating Equation 7 with respect to w,

∂J/∂w = c ∂x(T)/∂w   (13)

or, in short form, J_w = c x_w(T), where J_w is a row vector with components ∂J/∂wᵢ, i = 1, 2, ..., p, and x_w is the sensitivity coefficient matrix defined above.

The Jacobian x_w can be obtained from the method proposed by Miller and Murray (1953). This method is based upon the differentiation of Equation 5 with respect to w, by assuming that the functions in the equation are differentiable and that the change of w does not change the order of the differential equation. Following this method and noting that the decision variable is fixed at its optimal value, the following sensitivity differential equation is obtained:

d(x_w)/dt = f_x x_w + f_w   (14)

with initial conditions x_w(0) = 0, where

f_x = ∂f/∂x   and   f_w = ∂f/∂w

Equation 14 is a system of linear differential equations with respect to x_w in which f_x and f_w are evaluated at w = w̄ and θ = θ̂. From Equations 13 and 14 it is seen that the problem of finding the sensitivity coefficient, J_w, is reduced to that of solving a system of linear differential equations. The above result is similar to that obtained by Dorato (1963) in the analysis of an optimal control system, except that the objective function is of a different form.

The value of J_w can be obtained from the adjoint system. Equations 9 and 14 can be combined to give

d(z x_w)/dt = z f_w

Integrating the above equation from 0 to T, one obtains

c x_w(T) = ∫₀ᵀ z f_w dt

Hence, Equation 13 becomes

J_w = ∫₀ᵀ z f_w dt   (15)

B. Effect of Variation in w on θ̂(t). Changes in the optimal solution (optimal decisions) based on the maximum principle due to changes in the system parameters are investigated. In terms of sensitivity, this is a problem of finding ∂θ/∂w at the optimal conditions. The necessary condition given by the maximum principle requires that θ̂(t) satisfies Equation 12 if it lies in the interior of the admissible region of θ(t). Let ∂H(z, x, θ, w)/∂θ = h(z, x, θ, w); then θ̂ must satisfy

h(z, x, θ̂, w) = 0   (16)

where h is an r-dimensional row vector with components ∂H/∂θᵢ (i = 1, 2, ..., r). Differentiating Equation 16 with respect to w, one obtains

h_z z_w + h_x x_w + h_θ θ_w + h_w = 0

The orders of the matrices h_z, h_x, h_θ, h_w, z_w, and θ_w are (r × s), (r × s), (r × r), (r × p), (s × p), and (r × p), respectively. If the square matrix h_θ is nonsingular, then its inverse, denoted by h_θ⁻¹, exists. Multiplying the above equation by h_θ⁻¹ and solving for θ_w, one obtains

θ_w = −(h_θ⁻¹h_z)z_w − (h_θ⁻¹h_x)x_w − (h_θ⁻¹h_w)   (17)

Here, h_z, h_x, h_θ, and h_w are evaluated at the optimal conditions. In order to obtain θ_w from Equation 17 it is necessary to know x_w and z_w. Differentiating Equation 5 with respect to w, one obtains

d(x_w)/dt = f_x x_w + f_θ θ_w + f_w   (18)

with initial conditions x_w(0) = 0. Substitution of Equation 17 into the above equation yields

d(x_w)/dt = (f_x − f_θh_θ⁻¹h_x)x_w − (f_θh_θ⁻¹h_z)z_w + (f_w − f_θh_θ⁻¹h_w)   (19)

with initial conditions x_w(0) = 0. For simplicity in notation, let the differential equation in Equation 9 be expressed as

dz/dt = g(z, x, θ, w)   (20)

where g is an s-dimensional row vector. Differentiating Equation 20 with respect to w, one obtains

d(z_w)/dt = g_z z_w + g_x x_w + g_θ θ_w + g_w   (21)

with boundary conditions z_w(T) = 0. Substituting Equation 17 into the above equation, one obtains

d(z_w)/dt = (g_z − g_θh_θ⁻¹h_z)z_w + (g_x − g_θh_θ⁻¹h_x)x_w + (g_w − g_θh_θ⁻¹h_w)   (22)

with boundary conditions z_w(T) = 0. The orders of the matrices g_z, g_x, g_θ, g_w, and f_θ are (s × s), (s × s), (s × r), (s × p), and (s × r), respectively, and they are all evaluated at the optimal conditions. It is seen that Equations 19 and 22 constitute a system of first-order linear differential equations in x_w and z_w with given initial values of x_w and final values of z_w. This is known as a linear two-point boundary value problem, whose numerical solution has been well established. Once z_w and x_w are known, θ_w can be easily obtained from Equation 17.

The combined effect of the changes in both the parameters and the optimal decisions on the optimal objective function can be obtained from J_w′. For this case, J is a function of w and θ̂, where θ̂ is also a function of w. It can be shown that J_w′ = J_θ θ_w + c x_w(T). Since J_θ = ∂J/∂θ = 0 according to the necessary condition for optimality, J_w′ = c x_w(T), where x_w(T) is obtained from Equations 19 and 22.
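As a numerical check on the Case A results, the sketch below (an added illustration using a hypothetical scalar process, not the paper's example) integrates the state equation (5) and the sensitivity equation (14) forward and compares J_w = c x_w(T) of Equation 13 with the adjoint form J_w = ∫₀ᵀ z f_w dt of Equation 15. For this linear example the adjoint z(t) is available in closed form; in general it is obtained by integrating Equation 9 backward from z(T) = c.

```python
# Sketch of Case A on a hypothetical scalar process dx/dt = -w*x + theta:
# forward sensitivity (Equations 13-14) versus the adjoint form of Equation 15.
import numpy as np
from scipy.integrate import solve_ivp

w, theta, alpha, T, c = 2.0, 1.0, 0.3, 1.0, 1.0   # hypothetical nominal values

def rhs(t, y):
    x, xw = y
    f   = -w * x + theta        # Equation 5 for this toy process
    f_x = -w                    # df/dx
    f_w = -x                    # df/dw
    return [f, f_x * xw + f_w]  # Equation 14: d(xw)/dt = f_x*xw + f_w, xw(0) = 0

t = np.linspace(0.0, T, 501)
sol = solve_ivp(rhs, (0.0, T), [alpha, 0.0], t_eval=t, rtol=1e-9, atol=1e-12)
x, xw = sol.y

J_w_forward = c * xw[-1]                      # Equation 13: J_w = c * x_w(T)

# Adjoint route: dz/dt = -z*f_x = w*z with z(T) = c gives z(t) = c*exp(w*(t - T)).
z = c * np.exp(w * (t - T))
integrand = z * (-x)                          # z * f_w along the nominal trajectory
J_w_adjoint = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))  # Equation 15

print(J_w_forward, J_w_adjoint)               # the two values agree to integration accuracy
```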

C. Effect of Variation in α on J with Fixed θ̂(t). This is a problem of finding J_α. From Equation 7, we have

J_α = c x_α(T)   (23)

Differentiating Equation 5 with respect to α and noting that θ(t) is fixed at θ̂(t), one obtains

d(x_α)/dt = f_x x_α   (24)

with initial conditions x_α(0) = I, where I is an (s × s) unit matrix.

Combination of Equations 9 and 24 gives

d(z x_α)/dt = 0

Integration of the above equation from 0 to T yields

c x_α(T) = z(0)

Hence, Equation 23 becomes

J_α = z(0)   (25)

D. Effect of Variation in α on θ̂(t). The procedure of obtaining θ_α is similar to that in Case B. Differentiating Equation 16 with respect to α and solving for θ_α, one obtains

θ_α = −(h_θ⁻¹h_z)z_α − (h_θ⁻¹h_x)x_α   (26)

Differentiating Equations 5 and 20 with respect to α and substituting Equation 26 into the resultant equations, we have

d(x_α)/dt = (f_x − f_θh_θ⁻¹h_x)x_α − (f_θh_θ⁻¹h_z)z_α   (27)

with initial conditions x_α(0) = I, and

d(z_α)/dt = (g_z − g_θh_θ⁻¹h_z)z_α + (g_x − g_θh_θ⁻¹h_x)x_α   (28)

with boundary conditions z_α(T) = 0. The linear two-point boundary value system of Equations 27 and 28 is then solved for x_α and z_α. Then θ_α can be obtained from Equation 26. The corresponding change in J can be obtained from J_α′ = c x_α(T).

E. Effect of Variation in c on J with Fixed θ̂(t). The sensitivity coefficient, J_c, can be obtained by the following procedure. Differentiating Equation 7 with respect to c, we have

J_c = [x(T)]′ + c x_c(T)   (29)

where J_c is a row vector with components ∂J/∂cᵢ, i = 1, 2, ..., s, and [x(T)]′ is the transpose of x(T). Here x_c(T) is obtained by solving the following linear differential equation:

d(x_c)/dt = f_x x_c

with initial conditions x_c(0) = 0, where x_c is an (s × s) matrix. The solution of the above equation is x_c = 0. Hence, Equation 29 becomes

J_c = [x(T)]′   (30)

F. Effect of Variation in c on θ̂(t). Following the same procedure, the necessary equations for evaluating θ_c are obtained as

θ_c = −(h_θ⁻¹h_z)z_c − (h_θ⁻¹h_x)x_c   (31)

d(x_c)/dt = (f_x − f_θh_θ⁻¹h_x)x_c − (f_θh_θ⁻¹h_z)z_c   (32)

with initial conditions x_c(0) = 0, and

d(z_c)/dt = (g_z − g_θh_θ⁻¹h_z)z_c + (g_x − g_θh_θ⁻¹h_x)x_c   (33)

with boundary conditions z_c(T) = I.

So far the sensitivity of the optimal solution to the changes in w, α, and c has been studied for the case where θ̂(t) lies in the interior of the admissible region of θ(t). In some cases, θ̂(t) may lie on the boundary of the constraints, for which the condition in Equation 12 is no longer satisfied. In the following, modifications of the sensitivity analysis for such cases are presented.

Suppose that the optimal decisions θ̂(t), which are obtained by maximizing (or minimizing) H subject to the constraints in Equation 6, lie on the boundary of some of the constraints. In terms of mathematical expressions, θ̂(t) satisfies

ψⱼ(θ̂) = 0,   j = 1, 2, ..., q
ψₖ(θ̂) < 0,   k = q + 1, q + 2, ..., m   (34)

where ψⱼ or ψₖ does not necessarily correspond to ψᵢ in Equation 6, and q may depend on t. If a Lagrangian function Φ(θ, λ) is introduced such that

Φ(θ, λ) = H(θ) − Σᵢ₌₁^q λᵢψᵢ(θ)

then θ̂ and λ must satisfy the following r + q conditions

∂Φ(θ̂, λ)/∂θ = h(θ̂) − u(θ̂, λ) = 0   (35)

and

ψ(θ̂) = 0   (36)

where u(θ̂, λ) = λ[∂ψ(θ̂)/∂θ], λ is a q-dimensional row vector of Lagrange multipliers, and ψ is a q-dimensional column vector.


If we exclude degenerate cases, Equations 34 and 35 are the necessary conditions for θ̂ to satisfy Equations 6 and 11, and these conditions are not affected by the infinitesimal changes in the parameters w, α, and c (Boot, 1963). However, the values of θ̂ may be influenced by the changes in the parameters, as shown in the following. Differentiation of Equations 35 and 36 with respect to w yields

h_z z_w + h_x x_w + (h_θ − u_θ)θ_w − u_λ λ_w + h_w = 0

and ψ_θ θ_w = 0. The above two equations, together with Equations 18 and 21, can be solved for θ_w. The effects of α and c on θ̂(t) can also be obtained in the same manner.
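The toy sketch below (added for illustration; it is not the paper's system) shows why the interior condition of Equation 12 must be abandoned on the boundary: when θ̂ is pinned by an active constraint, its first-order sensitivity is governed by the constraint of Equation 34 rather than by the stationarity of H.

```python
# Toy illustration (not the paper's system): sensitivity of a boundary optimum.
# Maximize H(theta) = -(theta - a)^2 subject to theta <= b, with the bound active (b < a).
from scipy.optimize import minimize_scalar

def theta_hat(a, b):
    # minimize -H on the admissible interval; the maximizer sits at theta = b when b < a
    res = minimize_scalar(lambda th: (th - a) ** 2, bounds=(-10.0, b),
                          method="bounded", options={"xatol": 1e-10})
    return res.x

a, b, eps = 2.0, 1.0, 1e-4
d_theta_da = (theta_hat(a + eps, b) - theta_hat(a - eps, b)) / (2 * eps)  # ~0: Equation 12 no longer governs
d_theta_db = (theta_hat(a, b + eps) - theta_hat(a, b - eps)) / (2 * eps)  # ~1: the active constraint does
print(d_theta_da, d_theta_db)
```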

Example

Consider a first-order, reversible, and exothermic reaction with the reaction rate constants, k, and the activation energies, E, as indicated in the following:

A ⇌ B   (forward: k₁, E₁;  reverse: k₂, E₂)

Let x denote the mass fraction of B. A steady-state differential material balance for B in a plug-flow tubular reactor gives

dx/dt = k₁(θ)(1 − x) − k₂(θ)x   (37)

The optimal temperature profile for such a reaction has been obtained (Aris, 1961; Fan, 1966). The optimization problem considered here is to find an optimal temperature profile along the reactor such that the concentration of B at the outlet of the reactor is maximized, namely, max [J = x(T)]. The initial mass fraction of B is given as x(0) = α₁. According to the maximum principle, the Hamiltonian function in Equation 8 for this problem becomes

H = −z₁{[k₁(θ) + k₂(θ)]x − k₁(θ)}   (38)

where z₁ satisfies

dz₁/dt = z₁[k₁(θ) + k₂(θ)]   (39)

Since the objective function is J = x(T), c₁ = 1 and z₁(T) = 1. The optimal solution, θ̂, must satisfy Equation 12. From Equations 12, 16, and 38, one can obtain

θ̂ = (E₂ − E₁) / {R ln[E₂k₂₀x / (E₁k₁₀(1 − x))]}   (40)

The optimal solution is then obtained by solving Equations 37 and 40 with x(0) = α₁. A numerical computation of the optimal conditions is based on the following data (Fan, 1966):

u = 1,000 ft./hr.
k₁₀ = 2.51 × 10⁵/hr.
k₂₀ = 1.995 × 10⁷/hr.
E₁ = 10,000 B.t.u./lb.-mole
E₂ = 20,000 B.t.u./lb.-mole
R = 1.987 B.t.u./(lb.-mole)(°R.)
α₁ = 0.1
L′ = 10 ft.
T = 0.01 hr.
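A minimal numerical sketch of this computation is given below. It assumes the reconstructed forms of Equations 37 and 40 above (Arrhenius rate constants, with the interior optimum solved explicitly for θ̂) and the data as read in the list above; it is an added illustration, not the authors' original program.

```python
# Sketch: optimal temperature profile for A <-> B (Equations 37 and 40 as reconstructed above).
import numpy as np
from scipy.integrate import solve_ivp

k10, k20 = 2.51e5, 1.995e7          # frequency factors, 1/hr (as read from the data list)
E1, E2   = 10_000.0, 20_000.0       # activation energies, B.t.u./lb.-mole
R        = 1.987                    # B.t.u./(lb.-mole)(deg R)
alpha1, T = 0.1, 0.01               # feed mass fraction of B and total holding time, hr

def k1(theta): return k10 * np.exp(-E1 / (R * theta))
def k2(theta): return k20 * np.exp(-E2 / (R * theta))

def theta_opt(x):
    # Equation 40 (reconstructed): E1*k1*(1 - x) = E2*k2*x solved for the temperature
    return (E2 - E1) / (R * np.log(E2 * k20 * x / (E1 * k10 * (1.0 - x))))

def rhs(t, y):
    x = y[0]
    th = theta_opt(x)
    return [k1(th) * (1.0 - x) - k2(th) * x]   # Equation 37

sol = solve_ivp(rhs, (0.0, T), [alpha1],
                t_eval=np.linspace(0.0, T, 201), rtol=1e-9, atol=1e-12)
x_out = sol.y[0, -1]
print("optimal outlet mass fraction of B:", x_out)
print("inlet/outlet optimal temperature, deg R:",
      theta_opt(alpha1), theta_opt(x_out))
```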

The optimal temperature profile and the corresponding optimal concentration profile of B are shown in Figure 2.

Figure 2. Optimal temperature and concentration profiles (plotted against reactor length, ft.)

The sensitivity analysis in this problem is to find the effects of changes in the present set of parameter values on the optimal temperature profile and the optimal outlet concentration of B. The system parameters considered are E₁, E₂, k₁₀, and k₂₀. The effect of the initial condition is also considered.

J_w, J_α, S_w^J, and S_α^J with Fixed θ̂(t). For the present problem Equations 13 and 25 become, respectively,

J_w = x_w(T)   (41)

and

J_α = z₁(0)

and the sensitivity Equation 14 becomes

d(x_w)/dt = −[k₁(θ̂) + k₂(θ̂)]x_w + f_w   (42)

where w is a vector with components E₁, E₂, k₁₀, and k₂₀, and f_x and f_w are evaluated at the optimal conditions. The analytical expressions of these derivatives are not given here. Equation 42 is solved simultaneously with Equations 37 and 40 to give x_w by the Runge-Kutta integration method on a digital computer. Also, the solution of Equations 37, 39, and 40 gives z₁(0). The sensitivity coefficients in Equation 41 are then obtained, and the corresponding sensitivities can be calculated from the definition in Equation 3 (Table I).

Table I. Sensitivity and Sensitivity Coefficient of Optimal Outlet Concentration of B to Variation in System Parameters and Initial Condition

Parameter, w    Sensitivity Coefficient, J_w    Sensitivity, S_w^J
E₁              −0.896 × 10⁻⁴                   −0.983
E₂               0.448 × 10⁻⁴                    0.983
k₁₀              0.538 × 10⁻⁶                    0.148
k₂₀             −0.338 × 10⁻⁸                   −0.074
α₁               0.106 × 10⁻¹                    0.116 × 10⁻²

The optimal objective function (the outlet concentration of B) is insensitive to the change in the feed concentration of B. If both E₁ and E₂ have the same percentage increase or decrease, there will be no change in the optimal outlet concentration of B. In addition, corresponding to a 1% increase in each of E₁, E₂, k₁₀, and k₂₀, the total percentage change in x(T) is (−0.983 + 0.983 + 0.148 − 0.074) = 0.074.

Effect of Variations in w and α on the Optimal Temperature Profile, θ_w and θ_α. Since h in Equation 40 is independent of z₁ and h_θ⁻¹ = −1, Equations 17 and 26 become, respectively,

θ_w = h_x x_w + h_w
and
θ_α = h_x x_α   (43)

where h_x and h_w are evaluated at the optimal conditions. Their analytical expressions are not shown here. For the present problem, f_θ = 0, because H = z₁f and (∂H/∂θ) at θ = θ̂ is zero. Therefore, the sensitivity equations for this case are the same as those in Equation 42. The sensitivity coefficients θ_w and θ_α are obtained by solving Equations 37, 40, 42, and 43 simultaneously. Figures 3 and 4 show the sensitivity coefficients and the sensitivities of the optimal temperature profile along the reactor, respectively. The sensitivities indicated represent the adjustment necessary in the optimal temperature profile in order to keep the system at the optimal conditions when the variations of system parameters take place. A relatively larger adjustment must be made at the section near the inlet of the tubular reactor.

Figure 3. Sensitivities of optimal temperature profile along a tubular reactor

Figure 4. Sensitivity coefficients of optimal temperature profile along a tubular reactor

Since only the first-order variation is considered, the effect of the change in θ̂ on J, J_θ, vanishes as a result of the necessary condition for optimality. Also, f_θ = 0 for the present problem. Therefore, the combined effect of the changes in both the system parameters and the optimal temperature profile on the optimal concentration is the same as that discussed above, shown in Table I.
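The following sketch (again an added illustration, using the same reconstructed Equations 37, 40, and 42 and the same reading of the rate data) augments the state with the four components of x_w and with the integral needed for z₁(0), so that J_w = x_w(T), J_α = z₁(0), and the sensitivities of Equation 3 can be compared with Table I.

```python
# Sketch: first-order sensitivity coefficients of the optimal outlet concentration
# (Case A, Equations 41-42 as reconstructed), to be compared with Table I.
import numpy as np
from scipy.integrate import solve_ivp

k10, k20 = 2.51e5, 1.995e7          # as read from the data list
E1, E2   = 10_000.0, 20_000.0
R        = 1.987
alpha1, T = 0.1, 0.01

def rates(x):
    th = (E2 - E1) / (R * np.log(E2 * k20 * x / (E1 * k10 * (1.0 - x))))  # Eq 40 (reconstructed)
    return th, k10 * np.exp(-E1 / (R * th)), k20 * np.exp(-E2 / (R * th))

def rhs(t, y):
    x, xw = y[0], y[1:5]                       # nominal state and its four sensitivities
    th, k1, k2 = rates(x)
    f   = k1 * (1.0 - x) - k2 * x              # Eq 37
    f_x = -(k1 + k2)                           # df/dx at fixed temperature
    f_w = np.array([-k1 * (1.0 - x) / (R * th),    # df/dE1
                     k2 * x / (R * th),            # df/dE2
                     k1 * (1.0 - x) / k10,         # df/dk10
                    -k2 * x / k20])                # df/dk20
    dxw = f_x * xw + f_w                       # Eq 42 (Eq 14 for this problem)
    dq  = k1 + k2                              # accumulates the integral giving z1(0) = exp(-q(T))
    return np.concatenate(([f], dxw, [dq]))

y0 = np.zeros(6); y0[0] = alpha1
sol = solve_ivp(rhs, (0.0, T), y0, rtol=1e-9, atol=1e-12)
xT, xwT, qT = sol.y[0, -1], sol.y[1:5, -1], sol.y[5, -1]

J_w, J_alpha = xwT, np.exp(-qT)                # Eq 41 and Eq 25 for this problem
w_bar = np.array([E1, E2, k10, k20])
print("J_w     =", J_w)
print("S_w     =", w_bar / xT * J_w)           # compare with Table I: about (-0.98, 0.98, 0.15, -0.07)
print("J_alpha =", J_alpha, " S_alpha =", alpha1 / xT * J_alpha)
```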

Optimization with Sensitivity Constraints

This is a problem of finding a decision vector, θ, subject to the sensitivity constraints

|S_wᵢ^J| ≤ δᵢ,   i = 1, 2, ..., p   (44)

such that the objective function, J(x, θ, w), is minimized (or maximized), where δᵢ is a preassigned sensitivity tolerance, which is a positive quantity. The problem is equivalent to that of optimization with constraints on state variables. Methods of treating problems with constrained state variables have been presented (Bryson et al., 1963). The optimal solution, θ̂, to the problem without the constraints represented by Equation 44 is obtained first. If the resultant sensitivity satisfies Equation 44, then θ̂ is the solution to the problem with sensitivity constraints. If θ̂ does not satisfy Equation 44, an optimal solution is obtained by solving

min over θ of J(x, θ, w)

subject to

σᵢ = |(w̄ᵢ/J) J_wᵢ| = bᵢ,   i = 1, 2, ..., p   (45)

The solution so obtained will result in a system which is less sensitive to parameter variation and yet close to the optimum.

An optimal design of parameter-sensitive systems can also be achieved by the method recently presented by Tuel et al. (1966): choose decision variables by minimizing a modified objective function which is formulated by adding a sensitivity function to the original objective function. While Tuel et al. (1966) did not suggest any specific form of the sensitivity function, here we consider the function as a linear combination of absolute values of sensitivity coefficients. Accordingly, a modified objective function may be formulated as

J̃ = J + λJ_w′   (46)

where λ is a weighted multiplier whose component λᵢ is assigned to have the same sign as that of J_wᵢ, such that λᵢJ_wᵢ is a positive quantity. The prime indicates the transpose of the matrix. The solution to the problem with sensitivity constraints is obtained from

min over θ of J̃   (47)
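A toy illustration of Equations 46 and 47 is sketched below (added; the scalar objective is hypothetical and unrelated to the reactor problem): adding λ|J_w| to the objective trades a small loss in optimality for a smaller sensitivity coefficient, and the trade sharpens as λ grows.

```python
# Toy sketch of the modified objective of Equations 46-47 (hypothetical scalar problem).
# J(theta, w) = (theta - 1)^2 + w*theta^2, so the sensitivity coefficient is J_w = theta^2.
from scipy.optimize import minimize_scalar

w_bar = 0.5

def J(theta, w=w_bar):
    return (theta - 1.0) ** 2 + w * theta ** 2

def J_w(theta):
    return theta ** 2              # dJ/dw at any theta

for lam in (0.0, 0.5, 2.0):        # lam = 0 recovers the unmodified optimum
    res = minimize_scalar(lambda th: J(th) + lam * abs(J_w(th)))  # Equation 47 applied to Eq 46
    th = res.x
    print(f"lambda={lam:3.1f}  theta_hat={th:.3f}  J={J(th):.4f}  J_w={J_w(th):.4f}")
# As lambda increases the optimal decision shifts, J rises slightly, and |J_w| drops.
```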

The value of λ is related to the degree of reduction in sensitivity. The corresponding value of the sensitivity is not known until the problem is solved. However, the resulting sensitivity decreases as λ increases. Suppose that a given λ will result in a reduction of sensitivity σ, as given in Equation 45. Then it can be shown that Equation 46 becomes an expression in which the coefficient of J is a constant (Equation 48). Therefore, the minimization of J̃ is equivalent to that of J. This shows that the solution of Equation 45 and that of Equations 46 and 47 are identical for the same resultant sensitivity. In general, it is more difficult to solve an optimization problem with constraints, as given in Equation 45, than one without constraints. Hence, the method of minimizing a modified objective function presents no additional difficulty, except that the number of state variables is increased by the introduction of sensitivity coefficients.

Maximum Principle with Sensitivity Constraints

The algorithm of the maximum principle is extended to handle problems with sensitivity constraints based on the method given in Equations 46 and 47. This is to find a decision vector for a system described by the performance equations given in Equation 5 and the sensitivity equations given in Equation 14, such that the modified objective function given in Equation 46 is minimized. Sensitivity equations obtained in the previous section can be used to simplify the system equations. The parameters considered here are w, α, and c.

Constraint on J_w. The modified objective function is

J̃ = c x(T) + λJ_w′

which can be transformed into a general form as

J̃ = c x(T) + λx*(T)   (49)

by introducing x*(T) = J_w′. From Equation 15, we notice that

x*(T) = ∫₀ᵀ f_w′z′ dt

so that x*(t) can be generated from

d(x*)/dt = f_w′z′,   x*(0) = 0   (50)

Accordingly, the system performance equations are

dx/dt = f(x, θ, w),   dz/dt = g(z, x, θ, w)   (51)

and Equation 50. The adjoint vectors, v and y, and a Hamiltonian function are introduced to satisfy the corresponding adjoint relations (52) and

H = vf + yg′ + λf*

where f* denotes the right-hand side of Equation 50. The necessary conditions for the optimal solution are ∂H/∂θ = 0 and y(0) = 0 (Fan, 1966).

Constraint on J_α. The modified objective function is expressed as

J̃ = c x(T) + λJ_α′ = c x(T) + λz′(0)   (53)

by using the result given in Equation 25. The above equation can be transformed into the standard form as

J̃ = x*(T)   (54)

by introducing

x*(t) = c x(t) − λz′(t) + λ[c′ + z′(0)]

and

d(x*)/dt = cf − λg′   (55)

with initial conditions x*(0) = cα + λc′. The system performance equations are given in Equations 51 and 55. The adjoint vectors and a Hamiltonian function for the present system can be introduced to satisfy the corresponding adjoint relations (56) and

H = (v + c)f + (y − λ)g′

The necessary conditions for the optimal solution are ∂H/∂θ = 0 and y(0) = 0.

Constraint on J_c. The modified objective function can be written as

J̃ = c x(T) + λJ_c′   (57)

which, by the result in Equation 30, can be rewritten as

J̃ = (c + λ) x(T)   (58)

The performance equations, the adjoint vector, and the Hamiltonian are the same as those in Equations 5, 9, and 8, respectively, except that the end point condition of z is changed to z(T) = c + λ. The necessary condition for the optimal solution is given in Equation 12.

Discussion

Using the methods described above, analogous sensitivity equations can be obtained for stagewise processes based on the discrete version of the maximum principle (Chang, 1967). The discrete version (Fan and Wang, 1964; Katz, 1962) results only in a weak form of the maximum principle; that is, the extremum of an objective function corresponds only to a stationary point of the Hamiltonian function (Horn and Jackson, 1965). Therefore, the nature of the objective function corresponding to the stationary points of the Hamiltonian function must be carefully examined. However, in the sensitivity analysis of a discrete case, the sensitivity information of an optimal system can always be obtained from the discrete version of the maximum principle as long as the optimum solution is known.

As shown in the previous derivation of sensitivity equations, the sensitivity analysis in Cases A, C, and E is independent of optimization techniques, while in Cases B, D, and F the necessary condition of optimality is needed.

As an alternative to the detailed computation of sensitivity results along the lines discussed above, a direct approach by repeated calculations of optimal policies based on small changes of the data can be used to study sensitivity problems. However, it may be impractical to obtain sensitivity information by this direct approach, because, in order to be fully informative, the values at several points around a specific one must be investigated. Such sensitivity information is provided through the sensitivity coefficients developed here, since they represent the slopes of the changes. Sensitivity information will be helpful to those who want to know the sensitivity of the optimal solution but do not have the original optimization program or do not want to rework the problem. If the sensitivity information is obtained simultaneously with the optimal solution, the users of the results can quickly locate the new optimal policy corresponding to new values of the parameters. In some cases, the sensitivity information is obtained as soon as the optimization problem is solved, without additional effort; for example, the sensitivity results in Case C were obtained at the same time the problem was solved.

Acknowledgment

The authors are indebted to Rutherford Aris, University of Minnesota, for calling to our attention the adjointness in the sensitivity equations. The financial support of the Office of Coal Research, Department of the Interior, Washington, D. C., is acknowledged.

Nomenclature

A, B = chemical species
c = constant in objective function
E₁, E₂ = activation energy, B.t.u./lb.-mole
f = performance function defined in Equation 5
g = function defined in Equation 20
h = function defined in Equation 16
H = Hamiltonian function
I = unit matrix
J = objective function
J̃ = modified objective function
k₁, k₂ = reaction rate constant, hr.⁻¹
k₁₀, k₂₀ = frequency factor, hr.⁻¹
L = axial position in tubular reactor
L′ = length of tubular reactor, ft.
m = dimension of constraint vector on decisions
p = dimension of parameter vector, w
q = dimension of equality constraint vector
r = dimension of decision vector, θ
R = gas constant, B.t.u./(lb.-mole)(°R.)
s = dimension of state vector, x
S_w^J = sensitivity of J to change in w
t = time; holding time
T = final value of time; total holding time, hr.
u = linear velocity, ft./hr.
u = λ(∂ψ/∂θ)
v = adjoint vector
w = parameter vector
x = state vector; mass fraction of B
y, z = adjoint vectors

GREEK LETTERS

α = initial value vector
Δ = increment
θ = decision vector; temperature
θ̂ = optimal value of θ
Φ = Lagrangian function
λ = Lagrange multiplier
ψ = constraint vector
σ = sensitivity

Literature Cited

Aris, R., "The Optimal Design of Chemical Reactors," Academic Press, New York, 1961.
Bode, H. W., "Network Analysis and Feedback Amplifier Design," Chap. 4, 5, Van Nostrand, New York, 1945.
Boot, J. C. G., Operations Res. 11, 771 (1963).
Boot, J. C. G., "Quadratic Programming," Rand McNally, Chicago, 1964.
Bryson, A. E., Jr., Denham, W. F., Dreyfus, S. E., AIAA J. 1, 2544 (1963).
Chang, S. S. L., "Synthesis of Optimum Control Systems," Chap. 8, McGraw-Hill, New York, 1961.
Chang, T. M., "Sensitivity Analysis in Optimum Process Design," Ph.D. dissertation, West Virginia University, Morgantown, W. Va., 1967.
Dorato, P., IEEE Trans. Automatic Control AC-8, 256 (1963).
Fan, L. T., "The Continuous Maximum Principle," Wiley, New York, 1966.
Fan, L. T., Wang, C. S., "The Discrete Maximum Principle," Wiley, New York, 1964.
Garvin, W. W., "Introduction to Linear Programming," Chap. 4, McGraw-Hill, New York, 1960.
Hadley, G., "Linear Programming," Chap. 11, Addison-Wesley, Reading, Mass., 1962.
Horn, F., Jackson, R., Ind. Eng. Chem. Fundamentals 4, 110 (1965).