INEQUALITY METHODS FOR COMPUTATION OF OPTIMAL SYSTEMS

Problems in Multistage Reactor Analysis

CARL D. EBEN¹ AND JOHN R. FERRON²

Department of Chemical Engineering, University of Delaware, Newark, Del. 19711

¹ Present address, New Enterprise Division, Monsanto Co., St. Louis, Mo.
² Present address, Department of Chemical Engineering, University of Rochester, Rochester, N. Y. 14627

Quantities which are invariant from stage to stage of a multistage reactor system during optimal operation are identified by use of a new algebraic inequality. These invariance conditions, together with normalization and orthogonalization conditions, fully describe the optimal states, including points at which derivatives do not exist. A computational scheme based on successive substitutions is employed to solve the describing equations. The procedure handles constraints readily, and it converges rapidly for suitable ranges of starting conditions. The method is illustrated for representative chemical systems. Computing times are reduced significantly from those observed previously with several other optimization schemes.

RECENTLY we presented a series of new "conjugate" inequalities and illustrated their application to the solution of problems in nonlinear programming (Eben and Ferron, 1968). The inequalities provide a revealing format in which to phrase optimization problems, but their greatest utility appears to be in suggesting simple computational schemes for relatively complicated problems. Computational utility is exploited in the present series of papers, and we extend consideration to discrete and continuous variational problems. In this first paper, we examine design of optimal, multistage, chemical reaction systems having several representative chemical and mathematical features. These problems fit the form susceptible to solution by geometric programming (Duffin et al., 1967), as modified to permit negative terms (Eben and Ferron, 1968; Passy and Wilde, 1967). The present computational schemes have advantages of simplicity over those of geometric programming; and in the cases illustrated, they also have advantages over other well-tested methods in handling constraints and in computational speed.

Conjugate-Inequality Methods

Strong Form. The simplest and most useful conjugate-inequality pair is that involving conjugate arithmetic means (A and A*) and geometric means (G and G*): If y_i and x_i are positive, real numbers, with i = 1, 2, ..., n, and if

\sum_{i=1}^{n} y_i = 1 \qquad (1)

then

A = \sum_{i=1}^{n} y_i x_i \geq \prod_{i=1}^{n} x_i^{y_i} = G \qquad (2)

Furthermore, if

y_1 - \sum_{i=2}^{n} y_i = 1 \qquad (3)

then

A^* = y_1 x_1 - \sum_{i=2}^{n} y_i x_i \leq x_1^{y_1} \prod_{i=2}^{n} x_i^{-y_i} = G^* \qquad (4)
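As a quick numerical check of the two bounds (this snippet and its function name are ours, not part of the original paper, and it assumes the forms of Inequalities 2 and 4 as displayed above), the following Python fragment samples random positive data, builds weights satisfying Equations 1 and 3, and verifies the inequalities:

```python
import numpy as np

rng = np.random.default_rng(0)

def check_conjugate_pair(n=5, trials=1000):
    """Verify Inequalities 2 and 4 for randomly sampled positive x_i, y_i."""
    for _ in range(trials):
        x = rng.uniform(0.1, 10.0, n)

        # Inequality 2: weights normalized as in Equation 1.
        y = rng.uniform(0.1, 1.0, n)
        y = y / y.sum()
        A = float(y @ x)                      # arithmetic mean A
        G = float(np.prod(x ** y))            # geometric mean G
        assert A >= G * (1.0 - 1e-12)

        # Inequality 4: one positive term, weights satisfying Equation 3.
        y = rng.uniform(0.1, 1.0, n)
        y[0] = 1.0 + y[1:].sum()
        A_star = y[0] * x[0] - float(y[1:] @ x[1:])
        G_star = x[0] ** y[0] * float(np.prod(x[1:] ** (-y[1:])))
        assert A_star <= G_star * (1.0 + 1e-12)

check_conjugate_pair()
print("Inequalities 2 and 4 hold for all sampled cases")
```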

Inequalities 2 and 4 are equations if and only if all of the x_i are equal (invariance condition). When the invariance condition holds, A of Inequality 2 takes on its minimum value and A* assumes its maximum value. Optimization schemes can be based on these properties. Thus, geometric programming originally utilized Inequality 2; and we have previously shown how the conjugate form, Inequality 4, can be used similarly when the objective function and constraints each contain one positive term and the rest negative. We refer to the latter procedure as the strong form of the conjugate-inequality method. This method determines maxima and minima, not just stationary points, of objective functions having the forms of A* and A, respectively. Weak Form. When the objective function has any mixture of positive and negative terms, we employ the weak form of the conjugate-inequality method. This can be phrased appropriately for use in this paper as follows. If y_i, c_i, and x_i, i = 1, 2, ..., n, and g_j, j = 1, 2, ..., m, are positive, real numbers and if σ_i is either +1 or -1, with

\sum_{i=1}^{n} \sigma_i y_i = 1 \qquad (5)

and

x_i = c_i g_1^{a_{i1}} g_2^{a_{i2}} \cdots g_m^{a_{im}} \qquad (6)

where the a_{ij} are real constants, then the stationary points of \sum_i \sigma_i x_i are those for which x_i/y_i is invariant for i = 1, 2, ..., n, and are given by solution of

\bar{S} = \sum_{i=1}^{n} \sigma_i c_i \bar{g}_1^{a_{i1}} \bar{g}_2^{a_{i2}} \cdots \bar{g}_m^{a_{im}} = \prod_{i=1}^{n} \left( c_i \bar{g}_1^{a_{i1}} \bar{g}_2^{a_{i2}} \cdots \bar{g}_m^{a_{im}} / y_i \right)^{\sigma_i y_i} = P(\bar{y}) \qquad (7)

where the bar on g signifies the value at the stationary point and where the exponents of each g_j on the right-hand side are


chosen to sum to zero; that is,

\sum_{i=1}^{n} \sigma_i a_{ij} y_i = 0, \qquad j = 1, 2, \ldots, m \qquad (8)

The last equations comprise the orthogonality conditions. If the normality condition, Equation 5, is not imposed, normalization of the products σ_i y_i can be obtained explicitly by writing the right-hand side as

P(y) = \left( \sum_{i=1}^{n} \sigma_i y_i \right) \prod_{i=1}^{n} \left( c_i g_1^{a_{i1}} g_2^{a_{i2}} \cdots g_m^{a_{im}} / y_i \right)^{\sigma_i y_i / \sum_k \sigma_k y_k} \qquad (9)

or, upon substitution of the orthogonality conditions,

P(y) = \left( \sum_{i=1}^{n} \sigma_i y_i \right) \prod_{i=1}^{n} \left( c_i / y_i \right)^{\sigma_i y_i / \sum_k \sigma_k y_k} \qquad (10)

The polynomial form, S(y), is like that associated with geometric programming, except that we allow the σ_i to be either positive or negative. The stationary values of S(y) are given by those of P(y). We choose to evaluate these by simultaneous solution of the n invariance conditions

(c_i / y_i) \, \bar{g}_1^{a_{i1}} \bar{g}_2^{a_{i2}} \cdots \bar{g}_m^{a_{im}} = \lambda, \qquad i = 1, 2, \ldots, n \qquad (11)

together with the m orthogonality conditions (Equation 8) and the normality condition (Equation 5). The unknowns are the n values of the auxiliary variables, y_i, the m values of the state variables, ḡ_j, and the value of the invariant, λ. Whereas the weak form cannot be expressed as an inequality, the algebraic solution procedure, using Equations 5, 8, and 11, also applies to the strong form and determines the minimum of A and the maximum of A*. (As with the weak form, we can use Equation 6 to generalize the definition of x_i in Inequalities 2 and 4.) We state, therefore, that a problem has strong form if objective function and constraints each have either all terms positive or all but one term negative. All other problems are said to be of weak form.

Solution by Successive Substitutions

Consider a one-dimensional, unconstrained problem. Without regard to the signs of σ_i, we can say that the stationary points of

S(y) = \sum_{i=1}^{n} \sigma_i c_i y^{a_i} \qquad (12)

are given by

\sum_{i=1}^{n} \sigma_i c_i \bar{y}^{a_i} = \prod_{i=1}^{n} \left( c_i \bar{y}^{a_i} / y_i \right)^{\sigma_i y_i} = P(\bar{y}) \qquad (13)

This will be a minimum if all σ_i are +1. It will be a maximum if all but one of the σ_i are equal to -1. In all other cases Equation 13 locates stationary points. The character of each, whether maximum, minimum, or inflection, must be determined separately. In any case the orthogonality, normality, and invariance conditions, by means of which we determine ȳ, y_1, y_2, ..., y_n, and λ, are, respectively,

\sum_{i=1}^{n} \sigma_i a_i y_i = 0 \qquad (14)

\sum_{i=1}^{n} \sigma_i y_i = 1 \qquad (15)

(c_i / y_i) \, \bar{y}^{a_i} = \lambda, \qquad i = 1, 2, \ldots, n \qquad (16)

together with the identity

\lambda = P(\bar{y}) = S(\bar{y}) \qquad (17)

Of the common methods for solving sets of nonlinear algebraic equations, that of successive substitutions (Hildebrand, 1956; Scarborough, 1958) fits most conveniently here. It is not necessarily the most efficient method. We use it here for simplicity and reserve consideration of alternatives for later discussion. In the method of successive substitutions a given set of equations

\varphi_i(w_1, w_2, \ldots, w_n) = 0, \qquad i = 1, 2, \ldots, n

is inverted to some (nonunique) implicit form

w_i = \zeta_i(w_1, w_2, \ldots, w_i, \ldots, w_n) \qquad (18)

We then choose a set of initial guesses, w_i^{(0)}, and compute successively the set of approximate solutions of Equation 18

w_i^{(j+1)} = \zeta_i[w_1^{(j)}, \ldots, w_i^{(j)}, \ldots, w_n^{(j)}] \qquad (19)

where j = 0, 1, 2, .... Provided w_i^{(0)} is fairly close to the correct solution of Equation 18, we can state that a sufficient condition for asymptotically stable convergence of the scheme of Equation 19 is

\sum_{k=1}^{n} \left| \partial \zeta_i / \partial w_k \right| < 1, \qquad i = 1, 2, \ldots, n \qquad (20)

This is stringent and not generally necessary (Scarborough, 1958). Equations 14 to 16 can be treated by this scheme. We note that, upon assumption of a value for ȳ, we obtain an overdetermined set of linear equations in y_i and 1/λ (with λ ≠ 0). If now we temporarily ignore one member of Equations 16, let us say that for i = r, 1 ≤ r ≤ n, we can solve for the y_i, i ≠ r, and for λ in terms of the assumed value of ȳ. (We suppose that our choice of ȳ does not give rise to linear dependence.) We refer to the quotient c_r ȳ^{a_r} / y_r as the "undetermined invariant." The problem can be simplified further by defining

v_i = \lambda y_i \qquad (21)

We can now take advantage of the linearity of Equations 14 to 16 and express the various unknowns explicitly in terms of ȳ. This provides a successive substitutions calculation analogous to Equation 19 having the form

\zeta_i(\bar{y}^{(j)}) = c_i [\bar{y}^{(j)}]^{a_i} = v_i^{(j+1)}, \qquad i \neq r

\zeta_r(\bar{y}^{(j)}) = - \sum_{i=1, i \neq r}^{n} (\sigma_i a_i / \sigma_r a_r) \, c_i [\bar{y}^{(j)}]^{a_i} = v_r^{(j+1)} \qquad (22)

together with

\bar{y}^{(j+1)} = [v_r^{(j+1)} / c_r]^{1/a_r} \qquad (23)
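A minimal Python sketch of this calculation, written against the forms of Equations 22 and 23 given above, is shown below; the function name, argument layout, and stopping rule are our own choices rather than anything specified in the paper.

```python
def successive_substitutions(c, a, sigma, r, y0, tol=1e-8, max_iter=200):
    """Iterate Equations 22 and 23 for S(y) = sum_i sigma_i * c_i * y**a_i.

    c, a, sigma -- term coefficients, exponents, and signs (+1 or -1)
    r           -- index of the invariance condition left undetermined
    y0          -- starting estimate of the stationary point
    """
    y_bar = y0
    for _ in range(max_iter):
        # Equation 22: v_i for i != r from the invariance conditions,
        # v_r from the orthogonality condition (Equation 14).
        v = [ci * y_bar**ai for ci, ai in zip(c, a)]
        v_r = -sum(sigma[i] * a[i] * v[i]
                   for i in range(len(c)) if i != r) / (sigma[r] * a[r])
        if v_r <= 0.0:
            raise ValueError("iterate left the feasible region (v_r must be > 0)")
        # Equation 23: new estimate of y_bar from the ignored invariance member.
        y_new = (v_r / c[r]) ** (1.0 / a[r])
        if abs(y_new - y_bar) <= tol * max(1.0, abs(y_bar)):
            return y_new
        y_bar = y_new
    return y_bar
```

For instance, with c = [1/8, 28/5, 480], a = [8, 5, 1], sigma = [-1, -1, +1], and r = 1 (the data of the unconstrained example below), the iteration started at y0 = 1.0 settles near 1.9229 in our runs.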

To use the scheme for computation, one assumes a value for ȳ^{(0)} and employs only the middle member of Equation 22 along with Equation 23 to carry out the iterations. (This can be done because there are no constraints on this problem and no danger of infeasible solutions.) Indeed, we can combine these two equations explicitly. The result is nearly the same as a successive substitutions procedure for finding a root of dS/dy = 0, where S(y) is defined by Equation 12. There is a major difference, however, in that the inequality scheme does not require the existence of a derivative of S(y). As a result the combination of Equations 22 and 23 permits determination of a stationary point at ȳ = 0 when one or more

of the a_i of Equation 12 are in the range 0 < a_i < 1. Such a root of dS/dy = 0 cannot be found because the derivative is undefined under such conditions (Eben and Ferron, 1968, Equations 46 et seq.). The combination of Equations 22 and 23 has the further advantage that one can often better identify feasible choices of ȳ^{(0)} and of the index, r, of the undetermined invariant. This is of considerable importance in multidimensional problems. Finally, in the presence of inequality constraints, values of the y_i are needed to determine boundary extrema. In such cases the classical condition of calculus, dS/dy = 0, is not often satisfied. The inequality procedure is unhampered by the lack of a derivative and readily determines boundary extrema (Eben and Ferron, 1969). There is a precise analogy with the variational calculus: classical procedures are useful for determining extremal solutions, but maximum principles must be employed when boundary extrema exist. The conjugate-inequality method serves both purposes. To choose a suitable index for r, we may apply the sufficiency conditions for asymptotic stability, Equation 20. The proper independent variable for this purpose is v_r^{(j)}. The most informative conditions are those resulting from Equations 22, which now take the forms

(c_i / c_r)(a_i / a_r) \, |\bar{y}|^{a_i - a_r} \leq 1, \qquad i \neq r \qquad (24a)

\sum_{i=1, i \neq r}^{n} (c_i / c_r)(a_i / a_r)^2 \, |\bar{y}|^{a_i - a_r} \leq 1 \qquad (24b)

\left| 1 + \sum_{i=1, i \neq r}^{n} (\sigma_i / \sigma_r)(c_i / c_r)(a_i / a_r)^2 \, \bar{y}^{a_i - a_r} \right| \leq 1 \qquad (24c)
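The following short calculation (ours, not the paper's) evaluates the left-hand sides of Inequalities 24a and 24b as written above for the data of the unconstrained example in the next section; the values it returns agree with the 0.2539, 1.2539, and 0.6571 quoted there.

```python
def stability_left_hand_sides(c, a, y_bar, r):
    """Left-hand sides of Inequalities 24a and 24b at a stationary point y_bar,
    with the r-th invariance condition treated as undetermined."""
    lhs_24a = {i: (c[i] / c[r]) * (a[i] / a[r]) * abs(y_bar) ** (a[i] - a[r])
               for i in range(len(c)) if i != r}
    lhs_24b = sum((c[i] / c[r]) * (a[i] / a[r]) ** 2 * abs(y_bar) ** (a[i] - a[r])
                  for i in range(len(c)) if i != r)
    return lhs_24a, lhs_24b

# F(x) = -x**8/8 - 28*x**5/5 + 480*x, positive maximum near x = 1.92288415,
# second invariant (index 1) undetermined:
print(stability_left_hand_sides([1/8, 28/5, 480.0], [8, 5, 1], 1.92288415, r=1))
# prints approximately ({0: 0.254, 2: 1.254}, 0.657)
```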

Example of an Unconstrained Problem

A brief numerical example is given here to illustrate the procedure and test the efficiency of the successive substitutions method. The extreme points of

F(x) = -x^8/8 - (28/5)x^5 + 480x \qquad (25)

as determined from the roots of dF/dx = 0, that is,

-x^7 - 28x^4 + 480 = 0 \qquad (26)

are interesting because of a weak, relative maximum at x = -2.5778 (with F = -843.64), an adjacent, weak relative minimum at x = -2.4591 (F = -842.94), and a global maximum at x = 1.9229 (F = 752.4055) (Stanton, 1961). For x positive there is just one positive term in F(x), and the problem of determining the global maximum is in strong form. We can write

F \leq (x^8/8y_1)^{-y_1} (28x^5/5y_2)^{-y_2} (480x/y_3)^{y_3} \qquad (27)

Orthogonality, normality, and invariance conditions are

-8y_1 - 5y_2 + y_3 = 0
-y_1 - y_2 + y_3 = 1
x^8/8y_1 = 28x^5/5y_2 = 480x/y_3 = \lambda \qquad (28)

Equations 27 and 28 could be treated by duality methods, as

We find that choosing 0 < x^{(0)} < 2.42 gives fairly rapid convergence to the global maximum at the positive stationary point, x̄ = 1.92288415. In Figure 1 the rate of convergence is compared with that obtained with Newton-Raphson iteration and with a form of successive substitutions applied directly to Equation 26. These two schemes are, respectively,

[Figure 1. Convergence rate for successive substitutions, Equation 29 (A), compared with Newton-Raphson iteration (B) and with Equation 31 (C). All iterations begin at x^{(0)} = 1.0.]
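Equations 29 through 31 are not reproduced in this excerpt. As an illustration of what the combined scheme of Equations 22 and 23 reduces to for this F(x), the Python sketch below iterates it with the second invariant of Equation 28 taken as undetermined, side by side with Newton-Raphson applied to Equation 26; the function names, iteration count, and the choice of undetermined invariant are our assumptions, not the authors'.

```python
def inequality_step(x):
    # Equations 22 and 23 specialized (by us) to F(x) = -x**8/8 - 28*x**5/5 + 480*x
    # with the second invariant, 28*x**5/(5*y2), left undetermined:
    # v2 = (480*x - x**8)/5 from orthogonality, then x_new = (5*v2/28)**(1/5).
    return ((480.0 * x - x**8) / 28.0) ** 0.2

def newton_step(x):
    # Newton-Raphson applied to Equation 26: -x**7 - 28*x**4 + 480 = 0.
    f = -x**7 - 28.0 * x**4 + 480.0
    df = -7.0 * x**6 - 112.0 * x**3
    return x - f / df

x_ineq = x_newton = 1.0    # both sequences start at x = 1.0, as in Figure 1
for j in range(1, 13):
    x_ineq, x_newton = inequality_step(x_ineq), newton_step(x_newton)
    print(f"{j:2d}  {x_ineq:.8f}  {x_newton:.8f}")
# The first column approaches 1.92288415 within a few iterations; the second
# overshoots at first and then closes in on the same root.
```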


If 0 < x^{(0)} < 2, the Newton-Raphson iterations initially diverge to give a large value of x^{(1)}, and then slowly converge to the positive root. From Figure 1 we conclude that near the root the Newton-Raphson procedure is efficient while the method of successive substitution is superior farther from the root. One should not expect in general that the successive substitution method will be as efficient as Newton-Raphson iteration, but in this case the comparison is favorable. If x^{(0)} is chosen in the range 0 > x^{(0)} > -2.5778 (location of the relative maximum), Equations 29 converge fairly rapidly to the minimum at x = -2.45808917. When both F and x are negative, Equation 25 takes the weak form; one is no longer assured of locating maximum points. For x^{(0)} < -2.5778 the iterations of Equations 29 diverge. The scheme is unstable near the local maximum. For negative values of x^{(0)} we find that Newton-Raphson iterations converge to x̄ = -2.4581 for -2.52 < x^{(0)} < -1.62 and to x̄ = -2.5778 otherwise. Again convergence is most rapid near the eventual root. Evidently the Newton-Raphson procedure is somewhat more certain of success in this case, despite its relatively slow convergence to the positive stationary point. Although the successive substitutions algorithm converges to two of the three stationary points, sufficiency conditions for asymptotic stability are not fully satisfied. Thus, at the positive maximum the left-hand side of Inequality 24a for i = 1 is 0.2539; for i = 3 it is 1.2539. The left-hand side of 24b is 0.6571, and that of 24c is 1.1555. Despite these large values, the iterations are well behaved, as Figure 1 illustrates. For comparison, the iterative equations based on choice of the first and third invariants of Equation 28 as "undetermined" can be constructed. For the latter choice one sees immediately that for x^{(0)} positive, the iterations diverge unless x^{(0)} = 2. Only negative values in the range -(28)^{1/3}
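The behavior described here for negative starting values can be probed with a small scan (our own, with an arbitrary grid and iteration budget) over x^{(0)}, recording the root reached by Newton-Raphson on Equation 26:

```python
def newton_root(x, iters=200):
    """Run Newton-Raphson on Equation 26 from x and return the final iterate."""
    for _ in range(iters):
        f = -x**7 - 28.0 * x**4 + 480.0
        df = -7.0 * x**6 - 112.0 * x**3
        if df == 0.0:
            break
        x -= f / df
    return x

# Scan negative starting values and report where each one ends up.
for i in range(1, 30):
    x0 = -0.1 * i                      # x0 = -0.1, -0.2, ..., -2.9
    print(f"x0 = {x0:5.2f}  ->  {newton_root(x0):.4f}")
```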
> 0, \qquad T_i \leq T_{max} \qquad (60)

= 0, \qquad T_i > T_{max} \qquad (61)

y_{11i-3}(0) = [\bar{y}_i(0)/\bar{P}_i(n)] / [y_{11i-6}(0)/y_{11i-5}(n)] \qquad (64)

When either or both constraints are met, the corresponding undetermined invariants are no longer free; and the values of weighting factors y_{11i-3} or y_{11i-4} as well as y_{11i-9} or y_{11i-10} are fixed to satisfy the invariance condition. A digital computer (cycle time of 3.64 microseconds) was used to apply this scheme to the calculation of optimal policies for 2- to 12-stage systems. Some results are shown in Table I. In each case initial conditions were taken at the constraint T = 394° K., u_i(0) = 2100. Successive estimates of the ḡ_j converged to five significant figures within six to 14 iterations. Each iteration required 0.3 second per stage; as expected, the computing time increased linearly with the number of stages. The total computing time required for the 3-stage system was 11 seconds, which compares with 2 hours required by Fan and Wang's maximum-principle solution for two stages (using a computer with cycle time = 20 microseconds) and approximately 14 minutes for Storey's hill-climbing technique (on a Ferranti-Mercury computer). Fan and Wang estimate that the 3-stage problem could be solved in 30 minutes with a slight decrease in accuracy. In any case, the procedure used here appears to be at least an order of magnitude faster than either previous solution for this problem. The 2- and 3-stage policies of Table I are slightly better than Fan and Wang's results (in terms of yield obtained). They are substantially improved over Aris' dynamic programming results. Evidently very little increase in yield is realized by including more than approximately six reactors. Also, the total volume of the reactors is essentially independent of the number of stages. Increasing the volume of the final stage beyond the constraint would effect a modest increase in yield (