DESIGN OF AN OPTIMAL BRANCHED ALLOCATION SYSTEM

Charles S. Beightler and William L. Meier, Jr. Ind. Eng. Chem., 1968, 60 (2), pp 44-49.
OPERATIONS RESEARCH SYMPOSIUM

In recent years, serial multistage optimization (dynamic programming) has been used in the solution of a wide variety of engineering management and design problems which have come to be known as allocation problems. A schematic representation (called a functional diagram) of a serial multistage allocation system is given in Figure 1. In this diagram, the stages (analogous to physical facilities or activities) are represented by squares numbered in reverse order by computational convention. State and decision variables are indicated by horizontally and vertically directed lines and arrows, respectively. State variables carry information on the condition of the system and are not within the direct control of the decision maker. Decision variables are those over which the decision maker can exercise direct control. Return functions in terms of the state and decision variables describe the economic utility resulting from the decision made at each stage. In Figure 1, d_n, r_n, s_n, and s̃_n represent, respectively, the decision, return, input state, and output state at a particular stage n. The multistage optimization problem is to maximize the sum of all of the returns while satisfying the given constraints on the decision variables. The transition function at each stage n describes how the decision, d_n, maps the input state, s_n, onto the output state, s̃_n.

Problems such as this arise in storage systems in which the storage reservoirs or depots are represented by stages. This type of problem also occurs in process industries where the stages correspond to processing units, the output from each of which forms the input to the next succeeding unit.

Many optimization problems have been solved in which the multistage system was composed only of a single chain of stages. While dynamic programming has provided useful solutions to such serial systems, it has not been able to cope with problems in which the stages form a nonserial structure. Recently, however, methods have been developed which extend the usefulness of multistage optimization techniques. These methods (1, 2) have permitted the optimization of nonserial multistage systems using a modification of the dynamic programming approach. In general, a nonserial multistage structure exists whenever at least one stage in the system receives input from more than one stage or provides input to more than one stage.


Design of an Optimum Branched Allocation System

CHARLES S. BEIGHTLER
WILLIAM L. MEIER, JR.

A modified form of dynamic programming optimizes multistage, nonserial systems. Analytical and numerical solution procedures illustrate the method.

Optimization of Multistage Systems

There are basically two types of optimization techniques useful in the analysis of a complex macrosystem. One type exploits the structure of the system so as to decompose it into more manageable parts. The partial optimization procedure of dynamic programming exemplifies this approach. The other approach to optimization ignores the structure of the problem and adjusts all decision variables simultaneously in the quest for an optimum. This second approach may be referred to as simultaneous optimization. As the complexity of the system is increased and the number of decision variables grows, the usefulness of simultaneous optimization methods is diminished. It then becomes advantageous to employ methods which permit decomposition of the large, complex problem into more manageable subproblems. Multistage optimization methods provide this ability. By use of multistage analysis, each facility is analyzed independently while interaction between adjacent facilities is accounted for.

The technique used to optimize serial multistage systems is known as dynamic programming (3). It effectively decomposes the multistage problem, in which each decision affects all succeeding decisions, into a sequence of single-stage optimizations. Dynamic programming is directly applicable only to problems exhibiting a serial structure, however. In such problems, each decision variable may affect only the return and transition for just one stage of the system.

Recently, however, methods have been developed by Aris, Nemhauser, and Wilde (1) for decomposing a nonserial system into equivalent serial systems solvable by serial dynamic programming methods. Use of these methods involves introduction of a cut state at the junction between the branch and the serial chain. The introduction of the cut state allows the separation of the problem into P serial problems, where P is the number of junctions or cut states. In the simplified example shown in Figure 2, two serial problems will be created. Functional diagrams for these two serial problems are shown in Figure 3 and will be designated as serial problem 1 and serial problem 2. Serial problem 1, the main chain, would be a typical dynamic programming problem similar to the one previously described if the branch input were removed.

[Figure 3. Functional diagrams for the two serial problems derived from the nonserial system of Figure 2]

The branch must be solved as a two-point boundary value problem where s_{BN} and c are fixed. Hence, ordinary dynamic programming methodology cannot be used on the branch. Instead, the cut state, c, is carried along as a parameter in the optimization of the entire branch. Optimization of serial problem 1 is carried out by dynamic programming up through stage j - 1. At stage j the input, s_j, is no longer equal only to the output, s̃_{j+1}, from the previous stage, but is given by the sum of s̃_{j+1} and c, the input from the branch. When the optimization of serial problem 2 has been accomplished, f_{BN}(c), the optimal return from the entire branch, will be available either in functional or tabular form as a function only of the cut state c. With this information it is possible to write the functional equation to guide the optimization at stage j:

f_{j+B}(s̃_{j+1}) = max_{d_j} [ r_j(d_j) + f_{j-1}(s_{j-1}) + f_{BN}(c) ]

Here f_{j+B} represents the optimal return from all stages in the branch plus the last j stages of the main chain, r_j(d_j) is the return at stage j, and f_{j-1}(s_{j-1}) is the optimal return from the last j - 1 stages in the main chain. The term f_{BN}(c) is the optimal return from the N-stage branch as a function of the cut state. Because the cut state can be chosen freely, in this case to maximize f_{j+B}, it is actually a decision variable. Substitution of the transition function for s_{j-1} yields

f_{j+B}(s̃_{j+1}) = max_{d_j, c} [ r_j(d_j) + f_{j-1}(s̃_{j+1} + c - d_j) + f_{BN}(c) ]

Optimization of this equation involves determining values of d_j and c which will render f_{j+B}(s̃_{j+1}) maximal, subject to the constraint that 0 ≤ d_j ≤ s̃_{j+1} + c. This results in a two-decision, one-state optimization problem. When f_{j+B}(s̃_{j+1}) has been obtained for all s̃_{j+1}, optimization of the remaining stages in serial problem 1 proceeds using ordinary dynamic programming techniques. It is important to carry out the optimization with respect to c at stage j, for otherwise c would have to be carried along throughout the entire main chain to be examined exhaustively at each stage.
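To make the junction-stage computation concrete, the sketch below implements this two-decision, one-state optimization for a grid of input states. It is a minimal illustration written for this presentation, not the authors' program: the function names, the grid resolution, and the brute-force scan over (d_j, c) in place of a direct search are all assumptions introduced here.

```python
import numpy as np

def junction_stage(s_grid, r_j, f_prev, f_branch, c_grid, n_d=101):
    """Two-decision, one-state optimization at the junction stage j:
    f_{j+B}(s) = max over (d_j, c) of r_j(d_j) + f_{j-1}(s + c - d_j) + f_BN(c),
    subject to 0 <= d_j <= s + c, scanned by brute force on a grid."""
    f_opt, d_opt, c_opt = [], [], []
    for s in s_grid:
        best = (-np.inf, 0.0, 0.0)
        for c in c_grid:
            d = np.linspace(0.0, s + c, n_d)     # feasible junction decisions
            vals = r_j(d) + f_prev(s + c - d) + f_branch(c)
            k = int(np.argmax(vals))
            if vals[k] > best[0]:
                best = (vals[k], d[k], c)
        f_opt.append(best[0]); d_opt.append(best[1]); c_opt.append(best[2])
    return np.array(f_opt), np.array(d_opt), np.array(c_opt)
```

Here f_prev and f_branch stand for f_{j-1} and f_{BN}, supplied either as closed forms or as interpolations of tabulated values.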

Solution of Example Problem

The solution technique for branched multistage systems will be illustrated for a highly simplified example. The functional diagram for this problem is shown in Figure 4. System input is assumed to enter the system above stages 3 and B2, in the known amounts s_3 and s_{B2}, respectively. Individual return functions for each stage are listed below; the problem is to maximize the sum of these returns.

r_1 = 20 d_1 - d_1^2        r_B1 = 10 d_B1 - d_B1^2
r_2 = 10 d_2 - d_2^2        r_B2 = 12 d_B2 - 2 d_B2^2
r_3 = 30 d_3 - d_3^2

System inputs s_3 and s_{B2} are equal to 15 and 10 units, respectively, and the transition function at each stage is given by: input state minus decision variable equals output state.

Stage 1 of the main chain of stages is optimized using standard dynamic programming. The resulting optimal return at stage 1 in terms of the input state, s_1, is found to be

f_1(s_1) = 100;            d_1* = 10;   if s_1 ≥ 10
f_1(s_1) = 20 s_1 - s_1^2; d_1* = s_1;  if 0 ≤ s_1 < 10

As indicated previously, the branch must be optimized as a two-point boundary value problem. Because the branch receives a fixed input, it is possible to determine the optimal branch return as a function of the cut state c. The optimal branch return is found to be

f_B2(c) = 40.4 + 2.67 c - 0.67 c^2

The cut state is subject to the constraint 0 ≤ c ≤ 9.5.

To perform the optimization at stage 2, the junction stage, it is necessary to solve a two-decision, one-state optimization problem guided by the functional equation:

f_{2+B}(s̃_3) = max_{d_2, c} [ 10 d_2 - d_2^2 + f_1(s̃_3 + c - d_2) + f_B2(c) ]

with 0 ≤ d_2 ≤ s̃_3 + c and 0 ≤ c ≤ 9.5. The optimal values of d_2 and c thus can be obtained as functions of s̃_3:

f_{2+B}(s̃_3) = 168;  d_2* = 5;  c* = 1.99;  if s̃_3 ≥ 13
f_{2+B}(s̃_3) = 119.6 + 7.45 s̃_3 - 0.287 s̃_3^2;  d_2* = 1.29 + 0.286 s̃_3;  c* = 7.56 - 0.428 s̃_3;  if 0 ≤ s̃_3 < 13

Thus at stage 2 the cut state c has been "optimized out." This is accomplished by treating the cut state as a decision variable, a valid operation because it may be chosen freely to maximize the total return of stages 1 and 2 plus the branch.

AUTHORS Charles S. Beightler is Associate Professor of Mechanical Engineering at The University of Texas, Austin, Tex. William L. Meier, Jr., is Assistant Professor of Industrial Engineering at Texas A&M University, College Station, Tex. This work was supported by the U.S. Department of the Interior under Federal Water Pollution Control Administration Grant No. WP4?786-07 and OWRR Grant B4.Z-TEX.

When f_{2+B}(s̃_3) has been found, the optimal return for stage 3 in terms of the fixed input s_3 can be determined. This is accomplished using standard dynamic programming, resulting in:

f_3(s_3) = 354; when s_3 = 15

The optimal decision policy now can be defined using the appropriate transition functions. Thus, the optimal return is obtained when:

d_1* = 6.10    d_B1* = 2.12
d_2* = 3.12    d_B2* = 1.56
c* = 6.32      d_3* = 12.10
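As a cross-check on the analytical results above, the short script below solves the same branched example by brute-force discretized dynamic programming. It is a sketch written for this presentation, not the authors' FORTRAN program; the grid spacings and exhaustive scans are assumptions, and the scan should land close to the reported f_3(15) = 354 (grid resolution and the rounding used in the piecewise analytical formulas account for small differences).

```python
import numpy as np

# Return functions of the example (main chain stages 1-3, branch stages B1, B2)
r1 = lambda d: 20*d - d**2
r2 = lambda d: 10*d - d**2
r3 = lambda d: 30*d - d**2
rB1 = lambda d: 10*d - d**2
rB2 = lambda d: 12*d - 2*d**2

def f_branch(c, sB2=10.0):
    """Optimal branch return f_B2(c): split sB2 - c between d_B2 and d_B1."""
    dB2 = np.linspace(0.0, sB2 - c, 501)
    return float(np.max(rB2(dB2) + rB1(sB2 - c - dB2)))

def f2B(s_out, c_grid=np.linspace(0.0, 9.5, 96)):
    """Junction stage: maximize over d_2 and the cut state c jointly.
    Note f_1(s) = r1(min(s, 10)), since r1 increases up to d_1 = 10."""
    best = -np.inf
    for c in c_grid:
        fb = f_branch(c)
        d2 = np.linspace(0.0, s_out + c, 201)
        s1 = s_out + c - d2
        best = max(best, float(np.max(r2(d2) + r1(np.minimum(s1, 10.0)) + fb)))
    return best

# Stage 3 with the fixed system input s3 = 15
d3 = np.linspace(0.0, 15.0, 151)
totals = [r3(x) + f2B(15.0 - x) for x in d3]
i = int(np.argmax(totals))
print(f"f3(15) ~ {totals[i]:.1f} at d3* ~ {d3[i]:.2f}")  # compare: 354 and 12.10
```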

The next section will outline the numerical solution method analogous to the analytical technique just presented.

Numerical Optimization Procedure

While analytical solutions provide valuable insight into the solution method, most real problems must be solved numerically with the aid of a digital computer. Use of the numerical solution method involves computation of the optimal decision policy at each stage for each of a finite number of possible input state values. Consider the functional equation used to guide the optimization at stage 1 in a serial system as given below:

f_1(s_{1j}) = max_{d_1} r_1(d_1),  0 ≤ d_1 ≤ s_{1j}

Based on knowledge of the system being analyzed, a range of possible input states is selected. This range is divided into a finite number of grid points (s_{11}, s_{12}, ..., s_{1n}). For each of these possible input state values, the maximization must be carried out. This can be accomplished by establishing a decision grid d_{1k} corresponding to the input state grid. Then for each input state grid point s_{1j} the return r_1[d_{1k}(s_{1j})] is evaluated for all d_{1k}, beginning with d_{11} until d_{1k} = s_{1j}. The value of d_{1k} (which may not be unique) producing the maximum return is chosen as d_{1j}*. A table of values such as those given in Table I is formed. Corresponding to each of the discrete values of the input state, the value of the optimal return and the decision producing that return is recorded. This procedure is repeated at each succeeding stage by completing a table similar to Table I. Thus, for a particular value of input state s_{nj} at stage n (2 ≤ n ≤ N), the decision optimization is guided by

f_n(s_{nj}) = max_{d_{nk}} [ r_n(d_{nk}) + f_{n-1}(s_{nj} - d_{nk}) ]

When a value for d_{nk} is chosen, r_n(d_{nk}) can be evaluated and the input (s_{nj} - d_{nk}) to the next stage may be computed. Then f_{n-1}(s_{nj} - d_{nk}) can be obtained from the table for stage n - 1. This is accomplished by using the value of f_{n-1}(s_{n-1,k}) corresponding to the value s_{n-1,k} = s_{nj} - d_{nk}. A similar procedure is followed at each stage up to stage N, where the optimal return is found only for the given total system input, s_N (a constant). Once the tabulated results have been obtained for each stage in the system, the optimal decision policy may be determined by inspection of the tables.
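A minimal sketch of this tabular procedure, assuming unit-spaced state and decision grids and the example's transition rule (output state = input state minus decision); the function tabulate_stage and the grid choices are illustrative, not taken from the paper.

```python
def tabulate_stage(returns_fn, f_prev, state_grid, step=1.0):
    """Build a Table I style dictionary: state -> (optimal return, decision).
    f_prev maps a state grid point to the optimal return of the downstream
    stages; for stage 1, pass f_prev = lambda s: 0.0."""
    table = {}
    for s in state_grid:
        best_r, best_d = float("-inf"), None
        d = 0.0
        while d <= s + 1e-9:                 # scan d from 0 up to d = s
            r = returns_fn(d) + f_prev(s - d)
            if r > best_r:
                best_r, best_d = r, d
            d += step
        table[s] = (best_r, best_d)
    return table

# Stage 1 of the example: r1(d) = 20 d - d^2, states 0..15
grid = [float(s) for s in range(16)]
stage1 = tabulate_stage(lambda d: 20*d - d*d, lambda s: 0.0, grid)
# Stage 2 (serial, no branch): look up f1 from the stage-1 table
stage2 = tabulate_stage(lambda d: 10*d - d*d, lambda s: stage1[s][0], grid)
```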

At stage N the optimal decision for the given input s_N is determined directly from the one entry in the table for stage N in the last step of the optimization process. The input state to stage N - 1 can be obtained then by applying the transition function s_{N-1} = s_N - d_N. Subsequently, the optimal decision at stage N - 1 can be found in the tabulation for that stage by selecting the optimal decision corresponding to the input state s_{N-1}. Applying the transition function at stage N - 1 produces the input to stage N - 2. When the stage input is known, it is possible to locate the optimal decision corresponding to this input from the table for stage N - 2. This process is repeated for each stage, progressing in the direction opposite to the stage numbering. The result of these computations is the optimal system policy. Note that the dynamic programming algorithm requires that the system stages be traversed in both directions. During the first traversal, the optimal decisions are found as functions of the system input, while in the second traversal, the numerical values of these optimal decisions are obtained.
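The second traversal might be outlined as follows; this sketch assumes the tables produced by tabulate_stage above and the transition s_{n-1} = s_n - d_n, and is illustrative rather than the authors' code.

```python
def recover_policy(tables, s_N):
    """tables[0] is stage N, tables[-1] is stage 1 (reverse stage numbering).
    The first pass (building the tables) found optimal decisions per input
    state; this second pass reads off numerical decisions from the fixed
    system input s_N by repeated table lookup and transition."""
    policy, s = [], s_N
    for table in tables:
        _, d = table[s]          # optimal decision for this stage's input
        policy.append(d)
        s = s - d                # transition to the next stage's input
    return policy

tables = [stage2, stage1]        # stage order N, ..., 1 for a two-stage chain
print(recover_policy(tables, 15.0))
```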

Optimum-Seeking Methods

Optimum-seeking (direct search) methods can be used profitably in the decision optimization carried out at each stage of a multistage decision process. In direct search, successive moves are made toward an optimum by continually incrementing decision variables using information generated along the way. There is a fundamental difference between state and decision variables which is important when computations are performed using tabular data. At each stage, the optimal decision policy must be determined for all possible values of the input states. However, for a particular value of an input state, the optimal value of the decision variable can be obtained without examining all possible values of the decision variable. In the case where the returns and transitions are given as analytic functions, optimization can be carried out by differentiating the functional equation with respect to the decision variables while taking constraints into account. This procedure involves solving a mathematical programming problem for each possible value of the ith input state. In the numerical procedure just described, the optimal decisions may usually be found quickly and precisely with the aid of direct search methods. This is true because accelerating steps can be taken in the direction of the optimum. For stages with one-dimensional, unimodal return functions, the Fibonacci search method is exceedingly powerful.

When we use this method, only 11 points need be evaluated to reduce the interval within which the true optimal decision value lies to less than 1% of the original interval of uncertainty (5).

TABLE I. TABULAR MULTISTAGE OPTIMIZATION RESULTS AT STAGE 1

Input State, s_{1j} | Optimal Return, r_{1j}* | Decision, d_{1j}*
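For reference, a minimal Fibonacci search over a unimodal function on an interval; the 11-evaluation figure cited above corresponds to a reduction factor of F_12 = 144. This sketch is illustrative and not from the paper.

```python
def fibonacci_search(f, a, b, n=11):
    """Locate the maximizer of a unimodal f on [a, b] with n evaluations.
    The final interval is (b - a)/F_{n+1}; F_12 = 144, so n = 11 gives < 1%."""
    fib = [1, 1]                          # fib[k] = F_{k+1}
    while len(fib) < n + 1:
        fib.append(fib[-1] + fib[-2])
    x1 = a + fib[n - 2] / fib[n] * (b - a)
    x2 = a + fib[n - 1] / fib[n] * (b - a)
    f1, f2 = f(x1), f(x2)
    for k in range(n, 2, -1):             # one comparison, one new point per pass
        if f1 < f2:                       # maximum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + fib[k - 2] / fib[k - 1] * (b - a)
            f2 = f(x2)
        else:                             # maximum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = a + fib[k - 3] / fib[k - 1] * (b - a)
            f1 = f(x1)
    if f1 < f2:                           # final comparison, no new evaluation
        a = x1
    else:
        b = x2
    return 0.5 * (a + b)

# Example: the stage-1 return r1(d) = 20 d - d^2 on [0, 15] peaks at d = 10
print(round(fibonacci_search(lambda d: 20*d - d*d, 0.0, 15.0), 2))
```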

In the storage reservoir system studied in connection with this paper, it was desired to use a direct search technique which would permit optimization of one or more decision variables at each stage. The cut state, which may be chosen freely to maximize system returns, also can be evaluated as an additional decision variable using this approach. The particular search technique employed was the "pattern search" method of Hooke and Jeeves (4). This method possesses the ability to locate local optima even when sharp ridges are present. It is important to consider the difficulties imposed by ridges when considering direct searches along hypersurfaces. A ridge may be defined as the locus of points where the "one-at-a-time" or "sectioning" search method will terminate before reaching an optimum (6). This method adjusts one variable at a time, holding all other variables constant during the adjustment. The pattern search technique has the capacity to climb ridges and is particularly adapted to functions with long, straight crests. Therefore, it is useful in the location of optima for a wide variety of hypersurfaces. The name "pattern" is derived from the way in which moves accelerate toward an optimum as a result of successes in previous moves. This continued acceleration of the size of moves made toward the optimum is called "establishing a pattern." A further advantage of this type of search is that computation time usually increases roughly as the first power of the number of variables being searched. For classical indirect search methods, computation time usually increases as at least the cube of the number of decision variables (6).

To use the technique, one begins at an initial feasible point. Figure 5 shows the contours for a return function, r, to be maximized. In this figure, (a, b) is the beginning point for the search and will be called the initial base point p_{10}. The first step involves determining the location of the second point p_{11}. The first subscript on p denotes the pattern number and the second indicates the number of moves made in that pattern. The relations governing selection of a new point are given below:

p_{i,j} = p_{i,j-1} + δ_j,  if r(p_{i,j-1} + δ_j) > max[ r(p_{i,j-1}), r(p_{i,j-1} - δ_j) ]
p_{i,j} = p_{i,j-1} - δ_j,  if r(p_{i,j-1} - δ_j) > max[ r(p_{i,j-1}), r(p_{i,j-1} + δ_j) ]
p_{i,j} = p_{i,j-1},        if r(p_{i,j-1}) ≥ max[ r(p_{i,j-1} + δ_j), r(p_{i,j-1} - δ_j) ]

The quantity δ_j represents the step size by which the point p_{i,j-1} is perturbed. Geometrically, an individual move p_{10} to p_{12} can be thought of as defining a vector with the base at p_{10} and the head at p_{12}. When an improved point p_{12} is found, a pattern is established. It is assumed then that a further move in the direction of the improved point will yield an even greater return. Thus, the vector p_{10} p_{12} is extended immediately by doubling its length to obtain point p_{20}, a new or second base point for a second pattern move. This extension will be referred to as the accelerating move of the pattern search. If it is then established that the point p_{20} is better than p_{12}, the process of perturbing point p_{20} is carried out in a manner similar to that followed in the perturbation of point p_{10} indicated above. The only difference is that the subscript i is changed from 1 to 2. As long as the accelerating moves produce improved points, the pattern moves continue to grow. This is indicated by comparing the length of move p_{20} p_{22} with the previous move p_{10} p_{12}. If an accelerating move fails to produce a better point, the perturbation takes place around the previous successful point. If a better point is not located in the vicinity of that point, the step size is reduced. The step size is continually reduced until either a better point is found or the step size is reduced below some preset minimum value. If a new, improved point is found, a new series of pattern moves is made. The search terminates when no improvement can be obtained. It should be observed that this technique converges only to a local optimum. If the surface to be searched is not unimodal, several different starting positions will need to be tried. In this case the absolute optimum is taken to be the best point obtained in all of the trials. The pattern search method described above was modified for use in optimizing the storage reservoir system studied to include a constraint stating that the sum of the decision variables at any stage i cannot exceed the input state, i.e., Σ_k d_{ik} ≤ s_i.
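A compact sketch of the pattern search logic just described, for maximization. This is an illustrative reimplementation under the stated doubling and step-halving rules, not the authors' modified routine; the constraint handling (simply rejecting infeasible points) and the example objective are assumptions.

```python
import numpy as np

def explore(r, p, delta, feasible):
    """Exploratory moves: perturb each coordinate by +/- delta, keeping gains."""
    p = p.copy()
    for j in range(len(p)):
        for step in (+delta, -delta):
            q = p.copy()
            q[j] += step
            if feasible(q) and r(q) > r(p):
                p = q
                break
    return p

def pattern_search(r, p0, delta=0.5, delta_min=1e-3, feasible=lambda p: True):
    """Hooke-Jeeves style pattern search (maximization). Accelerating moves
    double the base-to-trial vector while improvement continues; the step
    size is halved when no improvement is found, down to delta_min."""
    base = np.asarray(p0, dtype=float)
    while delta > delta_min:
        trial = explore(r, base, delta, feasible)
        if r(trial) > r(base):
            while True:                       # pattern (accelerating) moves
                move = trial + (trial - base)
                if not feasible(move):
                    break
                probe = explore(r, move, delta, feasible)
                if r(probe) > r(trial):
                    base, trial = trial, probe
                else:
                    break
            base = trial
        else:
            delta *= 0.5                      # reduce step size and retry
    return base

# Hypothetical two-decision stage: maximize r2(d2) + r1(d1) with d1 + d2 <= 20
r = lambda p: 10*p[0] - p[0]**2 + 20*p[1] - p[1]**2
ok = lambda p: bool(np.all(p >= 0)) and float(p.sum()) <= 20.0
print(pattern_search(r, [1.0, 1.0], feasible=ok))    # approaches (5, 10)
```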

Computer Solution and Results

[Figure 5. Illustration of pattern search moves]

A computer program was developed to perform the optimization of the branched multistage storage reservoir system. This program utilizes the numerical multistage optimization procedure discussed previously, augmented with a pattern search routine for decision optimization. The flow diagram for the nonserial multistage optimization program is shown in Figure 6. This program is designed to perform computations analogous to those made in the analytical solution given previously. At each stage, the optimal decision or decisions are found for each value of the input state in the state grid using the pattern search technique. As discussed previously, the stages in the main chain to the right of the junction stage are optimized recursively in reverse order beginning with the last (lowest numbered; see Figure 2) stage. Thus, until a junction stage (one at which a branch enters) is contacted, each stage is analyzed by finding the optimal values of the decision variable for each input state in the state grid using the pattern search technique. The pattern search subroutine utilizes other subroutines to evaluate return functions at individual stages and interpolate for the resulting optimal return at succeeding stages. When a stage has been processed, the values of the optimal returns corresponding to the various input state values are retained in core memory for use at the next stage.

When a junction stage is encountered, the number of decision variables is increased by one since the cut state is considered an additional decision variable. The trial input state (state grid) is increased by the grid for the cut state to obtain the total input grid to the junction stage. When the cut state is perturbed during the search process, the optimal return for the branch is evaluated. Thus, the cut state and the decision variable or variables at the junction stage are directed to their optimal values by the pattern search method. In this way the branch is absorbed into the main chain and the analysis at the junction stage is completed.
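Since the tabulated returns live on a discrete state grid while the search perturbs states continuously, an interpolation step is needed. A minimal version using linear interpolation is sketched below; numpy.interp is my choice here, not necessarily what the original FORTRAN program used, and the tabulated values are taken from the example above.

```python
import numpy as np

# Branch returns f_B2(c) tabulated on a coarse cut-state grid (from the example)
c_grid = np.linspace(0.0, 9.5, 20)
f_branch_table = 40.4 + 2.67*c_grid - 0.67*c_grid**2

def f_branch(c):
    """Optimal branch return at an off-grid cut state, by linear interpolation."""
    return np.interp(c, c_grid, f_branch_table)

print(f_branch(1.99))   # compare with the analytical value near 43
```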

Following the optimization procedure at the junction stage, all stages in the main chain to the left of this stage are optimized sequentially, moving one stage to the left at each step, using serial dynamic programming techniques. This process is continued until another junction stage is encountered or the final stage in the main chain is reached. Each time another branch is encountered it is analyzed as described previously. When the final stage is reached, the decision optimization is carried out only for the one fixed value of the input state at that stage.

The computer program was written in the FORTRAN language (Chippewa-CDC) and run on the Control Data 6600 computer at The University of Texas Computation Center. For illustrative purposes, this program was used to solve the analytic-function nonserial problem which was given earlier in this paper. The computations were performed with a state input grid consisting of positive integers. The type of state input grid used is important mainly for its effects upon the accuracy of interpolation for the returns from all succeeding stages. With an initial search step size of 0.5, a total of 175 seconds of central processor computation time was required in the optimization of this illustrative branched system.