Ind. Eng. Chem. Res. 2006, 45, 1708-1721
Supply Chain Management through Dynamic Model Parameters Optimization

Fernando D. Mele, Antonio Espuña, and Luis Puigjaner*

Chemical Engineering Department, Universitat Politècnica de Catalunya, ETSEIB, Av. Diagonal 647, E-08028 Barcelona, Spain
Supply chain management based on a discrete event-driven model is proposed. The model treats each supply chain entity as an agent whose activity is described by a collection of states and transitions. This activity is characterized by a set of parameters whose values can be optimized to achieve better system performance. Genetic algorithms are incorporated as a useful tool for optimizing large and complex problems, even under uncertainty, through an oriented search, thus avoiding the exhaustive inspection of all possible solutions. The results obtained demonstrate that the proposed methodology has appreciable practical importance, as it may lead to substantial economic improvements. The way in which the proposed approach works is illustrated using several examples.

1. Introduction

Supply chain management (SCM) involves the decision-making related to the administration of resources through the entire supply chain (SC), from the initial suppliers to the final consumers, passing through the manufacturing, storage, and distribution centers. The main objective is to achieve suitable economic results while taking into account the large number of constraints featuring this kind of problem.1

Previous work on computer-aided SCM has mainly adopted two types of approaches: methods based on operational research, such as mathematical programming, statistical analysis, and so on (this type includes most of the contributions found in the literature), and methods based on control theory and simulation. This work belongs to the second group and addresses some specific problems found in the chemical process industry (CPI), such as inventory management. It presents a solution approach for SCM that considers the event-driven nature of real SCs through a dynamic simulation using a set of independent agents.

Supply chains, with CPI SCs being no exception, are too complex to allow realistic models to be optimized using mathematical programming. Thus, an approach that considers the SC by means of agents and discrete event-based simulation2 is better suited to analyze such systems. Compared to mathematical programming techniques, simulation provides the flexibility to accommodate arbitrary stochastic elements and generally allows modeling the complexities and dynamics of real-world SCs without excessive simplifying assumptions.

Research in dynamic simulation dates back to the 1960s3 and has since evolved to incorporate object-oriented and agent-oriented paradigms. However, as occurs in any descriptive methodology, the lack of optimization capabilities is a major shortcoming of simulation models, which can only be used to perform optimization through a large number of what-if case studies driven by external optimization algorithms. Much of the recent work on SC simulation has focused on modeling the dynamics of individual components as well as on the emergent interacting behavior.4-7 In addition to being used as a tool for solving a variety of operational research problems,8,9 agent-based simulation has been specifically applied as a technique to analyze cooperation in SC systems10 and negotiation techniques.11

* To whom all correspondence should be addressed. E-mail: [email protected]. Tel.: +34-93-401.66.78. Fax: +34-93-401.09.79.
In this work, a systematic methodology based on simulation is proposed to support SC managers' decision-making at the operational/tactical level, assuming that the chain operates under a perfect pull mechanism. Under these circumstances, the SC works as a completely reactive system in which nothing is done until some event arrives. Consequently, these systems usually implement a number of logical or heuristic rules to react to arriving events. These rules can be parametrized, thus making it possible to invoke an optimization algorithm that, acting on such parameters, produces an improvement in some objective function. The purpose of this approach is to obtain the parameter values associated with the inventory control of the entities involved under different situations, in such a way that an operational SC objective function is optimized over a given time horizon. Nevertheless, the proposed approach is general enough to deal with any other operating parameter influencing the optimization of specific SC attributes, such as the transport policy or the policy for managing backlogged orders.

The paper is organized as follows. First, the SCM problem is presented, and then a methodology based on genetic algorithms (GA) is introduced as a suitable tool for improving SC performance. Next, some motivating case studies are presented to illustrate the application and flexibility of the proposed model and methodology for SC analysis. Finally, the results obtained are discussed, and the main guidelines for generalization, as well as some ideas for future work, are indicated.

2. Supply Chain Management Problem

The key SCM problem consists of making the appropriate decisions regarding the performance of the entire SC network. Therefore, the models that support these decisions should be built and used from this wide perspective.12 In general, the actions taken by the manager in order to meet the SC goals may be as diverse as design (setting up a new SC), redesign (adding or removing products, as well as contraction or expansion of the chain), and planning, scheduling, and control. In carrying out these actions, the following issues appear: different temporal scales, multiple levels of organization, multiple sites, and a high degree of uncertainty.

Task planning is essential for a coordinated SC operation and improved performance, but the dynamics and uncertainty associated with business and market fluctuations make this difficult: bank rates change overnight, political situations change, materials do not arrive on time, production facilities
fail, workers become ill, customers change or cancel orders, etc., causing deviations from the plan. The agility with which the SC is managed at the tactical and operational levels, enabling timely dissemination of information, accurate coordination of decisions, and management of activities, is what ultimately determines the efficiency of the SC.

Many of the approaches found in the literature look for solutions that give a detailed operations schedule for all the manufacturing plants, distribution centers, etc. belonging to the SC network. In practice, this outcome results in inappropriate decisions, because the aforementioned uncertainty of the environment rapidly renders the proposed solution useless. Moreover, at this level, it does not make much sense to make detailed decisions about the charge and discharge of specific vessels in facilities dispersed in a worldwide context. Instead, the decisions and activities contemplated in SCM are often classified into three levels: operational, tactical, and strategic, depending on both the time horizon of the analysis and the degree of detail of the information involved. These decisions have to be made in a carefully separated manner and then appropriately integrated and coordinated. This work focuses on the tactical and operational levels, which include decisions typically updated between once every week and once every few months: purchasing and production decisions, inventory policies, and transport strategies.13

Given the complexity of SCM, one of the most critical issues is the definition of the objective function. Usually, in a SC, there is not a well-defined global objective, since the different network nodes are usually blind to the impact their decisions have on the entire system. Thus, a central entity that has a global view and tries to smooth out the stress created among the SC nodes gains major prominence. Related to this issue, it is also important to distinguish the degree of information sharing and the decision-making mode.14 For instance, local information implies that each location sees the demand only in terms of the orders that arrive from the other SC nodes it directly supplies; it has visibility of only its own inventory status, costs, and so on. Instead, global information means that the decision maker has visibility of the demand, costs, and inventory status of all the locations in the system. With regard to the decision-making mode, centralized control implies that attempts are made to jointly optimize the entire system, usually by the decisions of one individual or group; for instance, a central decision maker pushes stock to the locations that need it most. One of the earliest attempts in this direction was reported by Forrester.3 More recent contributions are those by Backx et al.,15 Bose and Pekny,16 Ettl et al.,17 Perea-López et al.,18 and Mele et al.19 On the contrary, decentralized control entails decisions that are made independently by separate locations; independent decision makers pull stock from their suppliers. Previous studies following the decentralized approach include the works by Towill,20,21 Perea-López et al.,22 and Sivakumar.23 Moreover, there is some work in the area of coordination between partners without sharing too much information.24 The best solutions will likely be obtained by using global information and centralized control, while local information with centralized control does not make sense.

Therefore, two different situations are posed in order to demonstrate the validity of the proposed methodology: (1) centralized approach with global information and (2) decentralized approach with partially shared information. Because a pure pull mechanism is assumed for the chain operation, production and distribution planning is not employed in these case studies. The resulting model has been implemented in an agent-based simulator, as described in the Appendix.

2.1. Centralized Approach, Global Information. The first question to be solved is how to define the boundaries of the decision scope. In some cases, there is an actual agreement among the different partners to grant the general coordination task to a central entity. Thus, partners having agreed to it belong to the SC, while the others remain outside. This is the case studied here, in which the analysis has been made assuming a working mode characterized by centralized control with global information. An omniscient agent can see all the variables associated with the SC and can also select suitable values for these variables according to an adequately chosen objective function. The objective of this approach is to obtain the parameter values associated with the operation of each node in the chain in such a way that the SC profit is maximized over a given time horizon.

2.2. Decentralized Approach, Partially Shared Information. The case in which the management is decentralized and global information is not completely available has also been analyzed. This situation is very common in practice and occurs when the SC entities belong to different organizations. In this approach, each SC node looks for its own profit, and the information regarding the rest of the SC is partially known through a model available to each individual entity. The operation of the rest of the SC may be simulated by using a variety of models. The only common feature of these models is that the information they provide is assumed to be incomplete and uncertain. Therefore, the predicted behavior of the SC will be more or less accurate according to the quality of the model and the availability of the information used. The objective of this approach is to calculate the parameter values associated with the operation of a particular node in the chain, looking for the maximum profit of this node over the time horizon of the study.

3. Simulation-Based Optimization

Using a simulation model, like the one to be described, to represent the SC, a preliminary study has shown the nonconvex nature of different objective functions in the solution space of the SC model input variables. In addition, the unavailability of explicit equations has encouraged the use of a framework integrating discrete event-based simulation with a GA-based optimization technique. Optimization based on simulation models deals with the problem of finding which of the possible sets of input parameters leads to optimal performance of the system represented. Thus, a simulation model can be understood as a function of unknown explicit form that turns input parameters into performance measures.25 Simulation-based optimization is an active area in the field of stochastic optimization. A review of the current research in this area can be found in the paper by Fu.26 In the process systems engineering literature, simulation-based optimization approaches for SCM have received little attention but are currently a subject of further study. The works of Subramanian et al.27 and Jung et al.28 are appreciated contributions to this field.
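To make this black-box view concrete, the following Python sketch shows the interface such a framework assumes: a simulator evaluated under random scenarios and a Monte Carlo estimate of its expected output. The function names and the quadratic stand-in dynamics are illustrative assumptions, not the authors' model.

```python
import random

def simulate(params, seed):
    """Placeholder for one run of the discrete-event SC simulator.

    `params` would hold the inventory parameters (e.g., R and s values);
    the body shown here is a stand-in for the unknown explicit function,
    not the actual agent-based model."""
    random.seed(seed)
    return -sum((p - 10.0) ** 2 for p in params) + random.gauss(0.0, 50.0)

def expected_profit(params, n_runs=30):
    """Monte Carlo estimate E[f(x)] over n_runs random scenarios."""
    outputs = [simulate(params, seed) for seed in range(n_runs)]
    return sum(outputs) / n_runs
```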
There are four main approaches to optimizing simulations:29 stochastic optimization (e.g., random search and stochastic approximation), statistical procedures (e.g., sequential response surface methodology, ranking and selection procedures, and multiple comparison procedures), metaheuristics (e.g., simulated annealing, tabu search, and GA), and others (e.g., ordinal optimization and sample path optimization).
Except for the metaheuristic approaches, these methodologies account for most of the literature on simulation optimization; however, they have not been used to develop optimization-for-simulation software, because they often require a considerable amount of technical sophistication on the part of the user, as well as a substantial amount of computation time.30 This is closely related to the fact that some of these techniques are local search strategies and may be strongly problem dependent. On the contrary, the success of the metaheuristics lies perhaps in their design to seek global optimality and in their apparently robust properties in practice, even if these are not yet completely supported theoretically.

The metaheuristic approaches to simulation optimization are based on viewing the simulation model as a black-box function evaluator. The optimizer chooses a set of input parameter values (decision variables) and uses the responses generated by the simulation to make decisions regarding the selection of the next trial solution according to some criterion. The most popular metaheuristic approaches are the evolutionary systems. They search the solution space by building and then evolving a population of solutions. The evolution is achieved by means of mechanisms that create new trial solutions out of the modification or combination of one, two, or more solutions in the current population. The main advantage of evolutionary approaches over other metaheuristics, e.g., simulated annealing, is that they are capable of exploring a larger area of the solution space with a smaller number of objective function evaluations. Since, in the context of simulation optimization, evaluating the objective function entails running the simulation model, being able to find high-quality solutions early in the search is of critical importance.

Evolutionary algorithms, among them GA, have been used to optimize multimodal, discontinuous, and nondifferentiable functions. In the field of SCM, these approaches are supported by the research on GA combined with mathematical programming.31-33 However, the works in the literature mostly deal with strategic decisions rather than tactical or operational ones, and even though the progress made in this area is considerable, the problem addressed is usually different from the one undertaken here. Research on GA for SCM aims to solve, for instance, combinatorial operations research problems such as the multistage facility location/allocation problem, in which the decisions are the facilities to be opened or the distribution network to be used. The motivation for using GA in this work is to take advantage of the features of this technique, applying it to problems as complex as those described in the previous section. An important advantage of GA-based approaches is the availability of a large number of operators, but this is also the source of their main drawback: the tuning of the different parameters required to obtain good results. The sensitivity of the algorithm to variations of these parameters has to be analyzed in the different cases studied. Therefore, several runs using different values of these parameters have to be carried out before using the algorithm as a decision-making tool.

Viewing a simulation model as a function has motivated a family of simulation optimization approaches, such as those based on metamodels, which approximate the response surface of the system. In this work, the GA strategy includes metamodels as filters with the goal of screening out solutions that are predicted to be inferior to the current best known solution. Since simulations are computationally expensive, the optimization process can search the solution space more extensively if it is capable of quickly eliminating from consideration low-quality solutions, where quality is based on the performance measure being optimized.34
Figure 1. Implementation framework.
The general procedure used is presented in Figure 1. A given real-world SC problem is modeled by means of an agent-based SC simulator. A set of parameters of interest (e.g., the replenishment frequency of a warehouse) and an objective function (e.g., the total profit over one year) are then identified, and these parameters are subjected to a GA-based optimization algorithm that produces a set of improved values for the decision variables. Although this scheme may look rather obvious, it includes a number of open questions that have been only partially treated and that need further study. This work focuses on the optimization-simulation interaction in SCM.

3.1. Proposed Methodology. The flow diagram of the GA-based strategy implemented in this work is shown in Figure 2. The first step is to initialize the population. The initial population is created using a random number generator that produces random values for each variable between its lower and upper bounds. Within each individual (chromosome), variables (genes) are represented by means of real-valued encoding, since this presents a number of advantages: e.g., there is no need to convert chromosomes to real values before each evaluation, there is no loss of precision due to binary discretization, etc.35 However, real-valued encoding requires an adequate selection of the lower and upper bounds for the variables. This range has to be determined through several experiments using different choices and looking for the best system performance. The validity of this assumption has to be further confirmed by the results obtained. Additionally, the influence of the population size on the GA performance also has to be studied through a sensitivity analysis. For instance, for a population of N individuals composed of V variables, VN random numbers uniformly distributed between the corresponding lower and upper bounds are produced.

The second step is to run a number of simulations for each individual in order to evaluate the initial population and then calculate the objective function value. In general, in all the case studies, the objective function is the expected value of the SC profit. From this objective function, a fitness function is derived to be used by the GA. In this work, the fitness function for the GA is the result of a linear ranking, in which the fitness of an individual, F(xind), is computed as the objective function value of this individual, E[f(xind)], relative to the whole population, as eq 1 shows.

F(xind) = E[f(xind)] / Σind=1..N E[f(xind)]   (1)
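A minimal Python sketch of these two steps is given below. The function names and the two-gene bounds are illustrative assumptions (based on the subcase I bounds reported later), not code from the authors' implementation.

```python
import random

def init_population(n_ind, bounds):
    """Real-valued encoding: one chromosome is a list of genes drawn
    uniformly between the lower and upper bound of each variable."""
    return [[random.uniform(lo, hi) for (lo, hi) in bounds]
            for _ in range(n_ind)]

def fitness(expected_profits):
    """Eq 1: each individual's fitness is its expected objective value
    relative to the population total (assumes positive values, as in
    the profit-maximization cases considered here)."""
    total = sum(expected_profits)
    return [e / total for e in expected_profits]

# Illustrative bounds: 0 <= R <= 20 (review period), 0 <= S <= 100 (units)
population = init_population(10, [(0.0, 20.0), (0.0, 100.0)])
```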
The selection method used is stochastic universal sampling,36 a method with zero bias and minimum spread that is able to select a number of individuals in a single-phase procedure. Regarding the crossover, the simplest single-point method has been used. This operator is applied to the population with a probability Px when the pairs are chosen for breeding. The mutation operator takes the current population and mutates each element of an individual with a low probability Pm. Px and Pm are parameters whose values have to be set before the GA execution, and experiments have to be made to test the sensitivity of the results to these parameters.

Figure 2. Implemented GA-based methodology flowchart.

After a given number of generations, a metamodel of the represented system is constructed on the basis of the simulator outputs by using neural network (NN) technology. The main issues to be resolved in such an implementation are the architecture of the NN used to construct the metamodel, the data collection and training frequency, and the filtering rules. In this work, a radial-basis-type NN37 is used, taking advantage of its self-training features. At the beginning of the optimization process, during the first generations, there are not enough data to train the NN. However, as the search progresses,
data becomes available because new trial solutions are evaluated by running the simulation model. Hence, the algorithm must decide when the training process has to be started and the frequency of the retraining process.

Once the NN is trained, it is used for filtering purposes. Figure 3 helps to explain the filtering rule applied. Suppose that xind is a new trial solution, and let x*ind be the best solution found so far in the search. Since the simulation involves several stochastic parameters emulating the sources of uncertainty, for a given input value xind, different outputs f(xind) will be obtained when running the simulation several times; an expected value of the simulation output, E[f(xind)], can then be computed. In addition, let f̂(xind) be the value obtained by evaluating the metamodel at solution xind. The filtering rule is based on the following test (for a minimization problem):34

if f̂(xind) > E[f(x*ind)] + α, then discard xind   (2)
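In code, the test of eq 2 is a one-line predicate; the sketch below is an assumed form of the filter, with the maximization variant noted in a comment.

```python
def discard(f_hat, best_expected, alpha):
    """Eq 2 (minimization form): drop a trial solution whose metamodel
    prediction f_hat exceeds the best expected objective found so far
    by more than the threshold alpha."""
    return f_hat > best_expected + alpha

# For the profit-maximization cases, the inequality is reversed:
# discard when f_hat < best_expected - alpha.
```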
Figure 3. Filtering rule.

The value of α accounts for the variability of the objective function values and for the error derived from the training process; it is another algorithm parameter to be set in order to ensure effective elimination of xind from further consideration. If the training error is small enough, E[f(xind)] ≈ f̂(xind), then the values of xind whose E[f(xind)] fall above the line representing E[f(x*ind)] + α in Figure 3 will be discarded. Laguna and Martí34 propose α = 2σ̂f(xind), where σ̂f(xind) is the sample standard deviation of f(xind) calculated over the entire space of solutions xind. This definition is moderate with respect to the rejection criterion: while a conservative definition would add a factor to account for the training error, a more aggressive definition might reject a new best solution of the simulation-based optimization problem before its evaluation. Using a similar criterion, the definition applied in this work is α = 2σf(xind), where σf(xind) is the mean standard deviation of the function f(xind), taking into account the standard deviation of f(xind) at different points of the space of solutions xind. We consider this definition more representative for our purposes.

After that, the new individuals are inserted into the population by means of a fitness-based approach, i.e., offspring replace the least-fit parents. In all the cases, the criterion for terminating the GA has been a fixed maximum number of generations (MaxGen). It should be emphasized that this value has to be chosen after careful monitoring of the algorithm evolution so as to verify that the number of generations is sufficient for convergence. Obviously, this parameter value depends on the complexity of the problem and the desired accuracy of the optimization results and should be determined for each particular system.

The computational load associated with the proposed algorithm depends on the simulation-model complexity, regardless of the number of decision variables manipulated; that is to say, the number and kind of equations evaluated during a simulation run are responsible for the computational time required. Thus, the performance of the algorithm should be inferred from the number of model simulations. The maximum number of model simulations, n, is calculated as eq 3 indicates.
n = Ns·N·(Rep·MaxGen + 1)   (3)
where MaxGen is the maximum number of generations, Rep is the number of individuals in a population that are replaced at each generation, N is the population size, and Ns is the number of simulations performed for each individual in order to calculate an expected value of the simulation output.
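For reference, eq 3 can be evaluated directly; the helper below is an illustrative sketch (with the metamodel filter active, the actual numbers of runs reported later are well below this bound).

```python
def max_simulations(n_s, n_pop, rep, max_gen):
    """Eq 3: upper bound on simulation runs for one optimization run;
    rep is the number of individuals replaced at each generation."""
    return n_s * n_pop * (rep * max_gen + 1)
```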
4. Case Studies: Motivating Example

4.1. Centralized Approach, Global Information. The SC scheme first considered consists of six entities connected as Figure 4 shows: a supplier S, a plant F, two distribution centers (DA and DB), and two retailers (RA and RB). The only raw material, P, enters plant F, and the manufactured products, A and B, are distributed by the rest of the chain. The plant supplies the distribution centers, whereas S supplies raw material to the plant. Therefore, the material flow moves from entity S toward the customers, and the information flow (ordering flow) moves in the opposite direction.

Figure 4. Simple supply chain structure.

Details about the operational scheduling at plant F are not considered, in accordance with the tactical level of the analysis; instead, production is characterized by a normal distribution of production capacities. With regard to the inventory policy, a periodic-review strategy is implemented in all the entities i for all the products j, with Rij being the time period between two consecutive inventory reviews. Every Rij time units, the inventory position Invijt is checked. If Invijt is below the reorder point sij, a replenishment quantity uijt = sij − Invijt is ordered; that is, the replenishment quantity is the difference between the reorder point and the current inventory level. If the position is above sij, nothing is done until the next review. Thus, the strategy has two parameters whose values have to be determined (Rij and sij) for each entity.

Demand is considered as a set of events distributed over the time horizon of the study, with each event having an associated amount of material and time of occurrence. The material amounts are normally distributed around a mean value, and the interarrival time intervals are uniformly distributed. The values of the stochastic parameters for a given simulation run are selected from the corresponding probability distributions in the Monte Carlo manner.

Within this approach, two subcases are considered in order to better illustrate the features and results obtained by using the proposed methodology. The first one, subcase I, deals with the determination of two inventory parameters at retailer RB: sRBj and RRBj. The parameters of the other entities are assumed to be known. This setting, with only two decision variables, gives useful insight into the problem structure, since the results can be analyzed more easily. In the second situation, subcase II, it is assumed that each of the six entities i in Figure 4 has an inventory control policy with two parameters to tune, sij and Rij for all products j, so there are 12 variable values to determine. Each chromosome in the GA coding consists of the 2 or 12 concatenated variables, according to the subcase.

The influence of the population size (N) on the algorithm performance, as well as that of the crossover and mutation probabilities, has been studied. The termination criterion consists of reaching a prefixed maximum number of generations, MaxGen. Related to this, there is a tradeoff between N and MaxGen, because a given level of convergence can be achieved by increasing either of these parameters. A relatively large MaxGen has been chosen instead of a large N, since increasing N increases the amount of data stored; keeping the population smaller thus saves virtual memory during the execution of the algorithm.
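The periodic-review rule just described maps directly to a small function. This is a sketch under the paper's notation (names are illustrative), implementing the (R, s) policy of this section rather than the (R, s, S) variant used later in the complex case.

```python
def periodic_review(t, last_review, inv_position, R, s):
    """(R, s) policy of Section 4.1: every R time units, if the inventory
    position is below the reorder point s, order u = s - Inv; otherwise
    wait until the next review. Returns (order_qty, new_last_review)."""
    if t - last_review < R:
        return 0.0, last_review          # not a review instant
    if inv_position < s:
        return s - inv_position, t       # u_ijt = s_ij - Inv_ijt
    return 0.0, t                        # position above s: do nothing
```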
The goal in this centralized approach is to obtain the parameter values associated with the inventory control (e.g., Rij and sij) for the products j at the nodes i involved in each subcase, in such a way that the expected SC operation profit over a given time horizon (the objective function) is maximized. Although any other operating parameter might be selected for optimization, the following case studies deal with inventory parameters as decision variables because of the interest they entail in pure pull systems. Quantitative issues related to inventory management are addressed in a large body of literature known as stochastic inventory theory. On the basis of foundational works such as the classical EOQ (economic order quantity) model, research in inventory theory has been extending these analytical results to the conditions experienced in day-to-day SC operation.14,38,39 A key limitation of all these studies, however, is that the effects that production and distribution tasks may have on inventories are not considered.

The objective function, the total SC profit (Profit), is calculated as

Profit = Revenues − C   (4)
The revenues, Revenues, are obtained according to eq 5.
Revenues = Σt Σj∈Prod Σi Qijt·priceij   (5)
where Qijt is the quantity (in units) of final product j leaving node i at time t, and priceij represents the unit price of j at node i. Prod is the subset of materials managed by the SC that are sold to external customers. The global cost, C, is defined as
C = TrnC + InvC + OrdC + PrdC   (6)
In this expression, TrnC is the transport cost, InvC is the inventory cost, OrdC is the penalization for cumulated nonsatisfied orders, and PrdC is the production cost, in the time horizon θ. The transport cost has two terms, as eq 7 shows,
TrnC = Σi Ntrip,i·ftci + Σi Σj Qtrip,ij·vtcij   (7)
where Ntrip,i is the number of trips from node i in the time horizon θ, ftci is the fixed transportation cost per trip, Qtrip,ij is the transported quantity of product j (units) from node i in time horizon θ, and vtcij is the variable transportation cost per transported unit. InvC is calculated as the sum of fixed and variable inventory costs,
InvC = Σt Σj Σi ficijt + Σt Σj Σi Invijt·vicijt   (8)
where ficijt is the fixed inventory cost for j at node i at time unit t, Invijt is the stored quantity of j (in units) at node i at time unit t, and vicijt is the variable inventory cost at node i per stored unit of j and per simulation step t. The penalty for cumulated nonsatisfied orders is obtained according to eq 9.
OrdC = Σt Σj Σi Ordijt·ocijt   (9)
where Ordijt is the orders quantity of j (units) backlogged at node i and time t, and ocijt represents the cumulated orders cost at node i per unit of j and per simulation step t. Finally, the production cost is obtained according to eq 10.

PrdC = Σt Σj Σi Qprod,ijt·vpcij + Σt Σj∈ERM Σi Qrm,ijt·prmij   (10)

where Qprod,ijt is the quantity of product j (units) produced at node i at time t, vpcij represents the unit variable production cost for j at node i, Qrm,ijt is the quantity (units) of raw material j consumed at node i at time t, and prmij represents the price of raw material j at node i. ERM is the subset of materials circulating in the SC that are raw materials.
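A compact way to read eqs 4-10 is as a single pass over the simulation output. The following Python sketch assumes illustrative dictionary-based records (keyed by node i, product j, and time t) rather than the simulator's actual data structures; the cost coefficients are taken as time-invariant, as in Table 3.

```python
def total_profit(sales, trips, shipped, inventory, backlog,
                 produced, raw_used, cost):
    """Eqs 4-10. Quantity dicts are keyed by (i, j, t) or, for transport,
    by i or (i, j); `cost` holds Table 3-style coefficients keyed by
    (i, j). `sales` covers only products in Prod, `raw_used` only ERM."""
    revenues = sum(q * cost['price'][i, j]
                   for (i, j, t), q in sales.items())                  # eq 5
    trn = (sum(n * cost['ftc'][i] for i, n in trips.items())
           + sum(q * cost['vtc'][i, j]
                 for (i, j), q in shipped.items()))                    # eq 7
    inv = sum(cost['fic'][i, j] + q * cost['vic'][i, j]
              for (i, j, t), q in inventory.items())                   # eq 8
    ordc = sum(q * cost['oc'][i, j]
               for (i, j, t), q in backlog.items())                    # eq 9
    prd = (sum(q * cost['vpc'][i, j] for (i, j, t), q in produced.items())
           + sum(q * cost['prm'][i, j]
                 for (i, j, t), q in raw_used.items()))                # eq 10
    return revenues - (trn + inv + ordc + prd)                         # eqs 4, 6
```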
Tables 1-3 show the data used in this work for all the case studies. The simulation time is set to 4 months (1000 simulation steps). Thirty simulation runs (Ns = 30) are performed for each individual in order to compute the objective function (E[Profit]). The SC model has been built in MATLAB by means of the Stateflow and Simulink toolboxes. It is an agent-based model in which discrete events are used to represent the system dynamics instead of solving differential equations. A detailed explanation of how this model works can be found in the Appendix.

Table 1. Production Rates and Demand for the Case Studies

                 production rate                customer demand
product   mean (unit/h)   std. dev. (%)   mean quantity (unit/event)   quantity std. dev. (%)   interarrival time (h)
A         0.07            1               10                           20                       2.5
B         0.07            1               3                            20                       2.5

Table 2. Transport Capacity

link           mean (unit/h)   std. dev. (%)
S-F            47.6            4.2
F-DA           119.0           1.0
F-DB           23.8            8.4
DA-RA          23.8            4.2
DB-RB          23.8            4.2
RB-customers   23.8            4.2
RA-customers   23.8            4.2

Table 3. Cost and Price Data for the Case Studies

parameter                                    scope                  value
fixed transport cost, ftci ($/trip)          all nodes and prods    1
variable transport cost, vtcij ($/unit)      all nodes and prods    0.5
fixed inventory cost, ficijt ($/h)           all nodes and prods    1.19
variable inventory cost, vicijt ($/unit/h)   all nodes and prods    0.12
cumulated order cost, ocijt ($/unit/h)       all nodes and prods    0.01
variable production cost, vpcij ($/unit)     A and B at plant F     1
raw material price, prmij ($/unit)           P at plant F           0.05
product price, priceij ($/unit)              A and B at retailers   50

4.2. Decentralized Approach, Partially Shared Information. The procedure used to illustrate this approach is similar to the centralized one. It is supposed that the parameters to tune, Ri and si, belong to the distribution center DB, the only location at which the optimization results can be applied by the manager (see Figure 5).
Figure 5. Supply chain structure for the decentralized approach.
The operation of the rest of the SC has been emulated by using a discrete-event model (a number of different models may be used without any loss of generality). The settings for the implementation of the methodology are similar to those for the centralized approach. The objective function is a local one, similar to that utilized in the centralized approach, as given by eqs 11-17. In these equations, the subscript DB indicates that the analysis is carried out over location DB of the SC network.
ProfitDB = RevenuesDB − CDB   (11)

RevenuesDB = Σt Σj∈Prod QDBjt·priceDBj   (12)

CDB = TrnCDB + InvCDB + OrdCDB + PrdCDB   (13)

TrnCDB = Ntrip,DB·ftcDB + Σj Qtrip,DBj·vtcDBj   (14)

InvCDB = Σt Σj ficDBjt + Σt Σj InvDBjt·vicDBjt   (15)

OrdCDB = Σt Σj OrdDBjt·ocDBjt   (16)

PrdCDB = Σt Σj Qprod,DBjt·vpcDBj + Σt Σj∈ERM Qrm,DBjt·prmDBj   (17)
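Since the external entities are only partially known, one simple way to emulate them (used in the uncertainty analysis of Section 5.2) is to perturb their policy parameters around nominal values. The sketch below assumes normal perturbations and hypothetical dictionaries of nominal values; it is an illustration, not the authors' external-SC model.

```python
import random

def sample_external_parameters(mean_R, mean_s, var_R, var_s):
    """Draw the R and s values of every node except DB from normal
    distributions around their nominal values (clipped at zero to
    stay feasible), one sample per simulated scenario."""
    R = {k: max(0.0, random.gauss(m, var_R ** 0.5)) for k, m in mean_R.items()}
    s = {k: max(0.0, random.gauss(m, var_s ** 0.5)) for k, m in mean_s.items()}
    return R, s
```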
4.3. Complex Case. This is a more complex case in terms of size and of the product-exchange policy between the different nodes. It is presented with the purpose of demonstrating that the power of the approach is not limited to small examples. The SC layout considered in this case consists of twelve entities connected as Figure 17 shows. This SC network involves three manufacturing plants (F1-F3) that consume raw material P. One of them has its own supplier (S), and the others get P from external sources. The manufactured products (A, B, and C) are distributed to the markets through a network of five retailers (R1-R5) and three distribution centers (D1-D3).

A periodic-review strategy similar to that described in Section 4.1 is implemented in the supplier S, the plants, and the distribution centers. Under this policy, every Rij time units, the inventory position Invijt is checked. If Invijt is below the reorder point sij, a replenishment quantity uijt = Sij − Invijt is ordered; that is, the replenishment quantity is the difference
between the desired level Sij and the current inventory level. If the position is above sij, nothing is done until the next review. Therefore, this strategy has three parameters to determine (Rij, Sij, and sij) for each node i and manipulated product j. The number of decision variables is 3 for the supplier, 36 for the plants, and 27 for the distribution centers. The retailers have a continuous inventory review strategy, in which the review is done every time material leaves the retailer; the parameters to be determined for the retailers are then Sij and sij for each handled product and node (30 decision variables). Demand is again considered as a set of events distributed over the time horizon of the study, with each event having an associated amount of material and time of occurrence. The material amounts are normally distributed around a mean value, and the interarrival time intervals are uniformly distributed.

The goal in this centralized approach is to obtain the parameter values associated with the inventory control policy for the 12 nodes and for each product manipulated at each node (the total number of variables involved is 96). The objective function used and the settings for the simulation are the same as in the case described in Section 4.1. To avoid extending the presentation of this case unnecessarily, the data are omitted; they are available from the authors upon request.

5. Results and Analysis

A number of situations have been investigated using different tunings in a sensitivity analysis to choose the most appropriate values for an acceptable algorithm performance. The algorithm performance is evaluated by taking into account the CPU time, which is related to the number of simulations performed during an algorithm run, and the final maximum profit achieved (expected value and variance). This tuning procedure is omitted from the article for the sake of conciseness, even though it is a key stage in the application of the proposed methodology. Only the final values of the tuning parameters used in the cases presented in this work are shown in Table 4. It is important to note that the computational effort of the tuning stage is several times that of applying the solving methodology, owing to the larger number of simulations that have to be made during the tuning stage with different choices of the tuning parameter values.

The tuning values include the following: the population size, N; the lower and upper bounds for the decision variables; the number of simulations for each individual, Ns; the percentage of new individuals, Rep; the probabilities of crossover, Px, and mutation, Pm; the maximum number of generations, MaxGen; the training frequency of the NN, φ; and the threshold value for filtering, α. Concerning the lower and upper bounds for the decision variables, the following values have been considered: 0 ≤ Rij ≤ 20 (simulation steps) and 0 ≤ Sij ≤ 100 (units), for all entities i and products j. Once the algorithm is tuned, the manager can use it systematically as a tool for SC decision-making.

5.1. Centralized Approach, Global Information. Regarding the first subcase, 10 runs of the proposed strategy are performed after tuning the algorithm, i.e., 10 different seeds or starting points are considered. Figure 6 shows the evolution of the objective function with the generations in this subcase. Four curves are depicted in the figure.

Figure 6. Centralized approach, subcase I: Objective function evolution.
The curves above and below represent the maximum and minimum objective function values, respectively, obtained at each generation. In a way, these curves delimit the most probable region of objective function values to which a given tuning of the applied strategy can lead.
Figure 7. Centralized approach, subcase I: Inventory levels in the SC network before and after the optimization process.
Figure 8. Centralized approach, subcase I: Objective function evolution after a subsequent evolution.
Figure 10. Centralized approach, subcase I: Approximation to the response surface.
Figure 9. Centralized approach, subcase I: Standard deviation evolution.
Figure 11. Centralized approach, subcase I: Obtained solutions increasingly sorted.
The curves in the middle represent the arithmetic average objective function value at each generation, Ē[Profit], and the objective function value of one particular run, respectively.

Figure 7 shows the inventory levels at some of the nodes of the SC network for the initial and final solutions found. In general, the inventory levels corresponding to the final solution are lower than the initial ones. Moreover, in the final solution, inventory shortages are avoided, since the levels never fall to zero in this case.

As a tactical decision, the parameter setting for the inventory policies is made once every four months. Therefore, when the initial values of the decision variables are fixed in a random manner, the algorithm produces a large change in the objective function (from approximately $10,300 to $12,700).
Figure 12. Centralized approach, subcase II: Objective function evolution.
Figure 15. Decentralized approach: Objective function evolution.
Figure 13. Centralized approach, subcase II: Standard deviation evolution.
Figure 16. Decentralized Approach: Analysis of the model uncertainty.
Figure 17. Complex supply chain structure.
Figure 14. Comparison between the deterministic and the stochastic solution.
On the contrary, if the initial population includes the last solution found and the uncertain variables (i.e., demand) do not experience a drastic change, the objective function changes more slightly and the number of generations necessary for convergence is reduced. This can be seen in Figure 8, which represents the case in which the system is optimized starting from a previous solution (in this case, experiment IX in Table 5), with the mean value of the demand increased by 2% for both products. The final values of the decision variables are RRBj = 10.6 and SRBj = 60.3, and the expected value of Profit is equal to $13,053. In Figure 9, it is possible to see that the standard deviation of E[Profit] at each generation tends to decrease.

Figure 10 represents an approximation of the average response surface of this problem as a function of the two decision variables. It has been obtained by uniformly sampling 157 pairs of values in the space of the decision variables and then running a simulation several times for each point.
Table 4. Settings for the GA-Based Optimization Strategy

              centralized approach
parameter   subcase I   subcase II   decentr. approach   complex case
V           2           12           2                   96
Rep (%)     90          90           90                  90
MaxGen      100         200          100                 200
N           10          10           10                  10
Ns          30          30           30                  30
Pm (%)      10          10           10                  10
Px (%)      80          80           80                  80
α ($)       400         1000         500                 2000
φ           1/20        1/50         1/30                1/50

Table 5. Centralized Approach, Subcase I Results

experiment   RRB    SRB    E[Profit] ($)   Profit ($) (exp. IX)
I            3.7    51.3   12,549          12,534
II           0.5    40.0   12,361          12,347
III          1.1    58.9   12,574          12,619
IV           2.7    58.9   12,603          12,644
V            4.2    28.0   12,461          12,426
VI           7.8    59.0   12,120          12,781
VII          15.5   60.6   12,479          12,781
VIII         0.3    58.7   12,594          12,594
IX           11.6   61.2   12,681          12,650
X            3.4    29.1   12,404          12,627
average                    12,483          12,600
std. dev.                  161             137
avg. sim. runs: 2,730; CPU time (min): 71.0
best individual: RS = 10, SS = 100, RF = 10, SF = 100, RDA = 5, SDA = 50, RDB = 5, SDB = 50, RRA = 5, SRA = 50, RRB = 11.6, SRB = 61.2
As anticipated in previous sections, this surface gives an idea of the ill-behaved characteristics of this optimization problem and justifies the application of the proposed methodology. Finally, Figure 11 shows these results sorted in increasing order, to give an idea of the E[Profit] range for this subcase as well as of the place that the results obtained with the methodology occupy in this range. This figure shows, as three horizontal lines, the best value obtained by applying the methodology (Ē[Profit] = $12,681) and the upper and lower confidence bounds corresponding to the simulation output standard deviation, locally calculated at the point found with the solving procedure. This solution corresponds to the following parameter values (experiment IX): RS = 10, SS = 100, RF = 10, SF = 100, RDA = 5, SDA = 50, RDB = 5, SDB = 50, RRA = 5, SRA = 50, RRB = 11.6, and SRB = 61.2. It can be seen that this solution represents a significant improvement and corresponds to the parameter values at the bottom of Table 5. The last column of this table contains 10 profit values obtained with the parameter values taken from experiment IX.

With regard to subcase II, Figure 12 shows the evolution of the total SC profit expectation with the generations. As in the previous subcase, the curves represent the maximum and minimum objective function values of each generation, the average objective function value, and one particular result for one of the strategy runs. Figure 13 represents the standard deviation of E[Profit] at each generation. The results can be seen in Tables 6 and 7.

The average number of simulation runs resulting from the 10 experiments is 8,430 (see Table 7). By replacing this value in eq 3, it is possible to calculate the generation at which the number of simulation runs would be 8,430 if the methodology were used without any filtering procedure. This calculation indicates that this number approximately corresponds to an optimization up to the 30th generation. The Ē[Profit] at this generation has been calculated, giving a value of $13,928, which is ∼11% lower than the value obtained using the filtering rules ($15,650).

Moreover, a verification has been done in order to demonstrate the value of including uncertainty in this approach. Figure 14 shows the evolution of the mean value of the objective function Ē[Profit] (as in Figure 12) together with a curve representing the evolution of the mean value of the objective function, Profit (with an overbar in the figure), for the nominal case. The overbar indicates that this value is the mean of 10 different values of Profit taken from 10 different executions of the optimization strategy. In the deterministic case, the objective function is directly the Profit obtained only once, with the nominal values of the uncertain parameters. This value has been obtained by using the same scheme depicted in Figure 2 but without the Monte Carlo sampling loop of Ns random scenarios used to get the value of E[Profit] in the stochastic case. Afterward, the assessment of the obtained solution over all the Ns scenarios used in the stochastic case gives a value of E[Profit] = $13,991, which is ∼10% less than the stochastic solution. This result shows the usefulness of the stochastic approach for a case with such a level of uncertainty in the parameters.

5.2. Decentralized Approach, Partially Shared Information. The settings and results for this case are shown in Table 8. As in the centralized approach subcases, 10 runs of the algorithm are performed. The parameter values for the entities in the SC are as follows: RS = 14, SS = 26, RF = 13, SF = 30, RDA = 5, SDA = 21, RRA = 9, SRA = 19, RRB = 12, and SRB = 19, while RDB and SDB are now the decision variables. Figure 15 shows the evolution of the local objective function with the generations. The final values of the DB parameters after optimization are RDB = 5.7 and SDB = 11.3. The number of simulations made to reach these values is 811.

In addition, the approach described here can be used to perform another kind of analysis, related to the uncertainty in the model utilized to represent the external SC, i.e., all the entities except DB. To represent the uncertainty due to the incomplete knowledge of the rest of the chain, the Rij and sij values used by the model can be expressed as uncertain, belonging to a normal distribution characterized by respective mean and variance values. The mean values are chosen to be equal to those previously used in a deterministic manner, and two situations have been considered regarding two different degrees of uncertainty in the model used: first, all the variance values, σ²Rij and σ²sij, are set equal to 1, and second, all these values are set equal to 2. The parameters for DB are set equal to those obtained before (experiment VIII). To analyze each situation, 100 simulations have been made. Table 9 shows some results for both cases, including the average value of the DB profit over the 100 simulation runs. With this information, it is possible to analyze both situations and draw some conclusions. To do this, the definition of risk given by Barbaro and Bagajewicz40 is used.
According to this definition, if the manager considers a profit target Ω as the value of the total profit ProfitDB at distribution center DB that he/she is willing to achieve, the risk associated with this profit target, Risk(Ω), can be evaluated as the probability P of encountering scenarios whose ProfitDB values are smaller than the target; then,
Risk(Ω) = P(ProfitDB < Ω)   (18)
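From a sample of simulated scenarios, eq 18 reduces to an empirical frequency; the sketch below assumes a plain list of per-scenario profits (an illustration, not the authors' code).

```python
def risk(profits, omega):
    """Eq 18: fraction of simulated scenarios whose profit at DB falls
    below the target omega (empirical cumulative distribution at omega)."""
    return sum(p < omega for p in profits) / len(profits)

# E.g., for the 100 runs of Section 5.2, risk(profits, 1900.0) would
# reproduce the ~68%/72% figures quoted for the two uncertainty levels.
```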
Table 6. Centralized Approach, Subcase II Results

exp.   RSP    SSP    RF     SF     RDA    SDA    RDB    SDB    RRA    SRA    RRB    SRB
I      5.5    33.4   0.6    62.7   2.9    23.3   0.1    36.2   2.5    11.4   2.5    57.1
II     3.9    54.0   1.7    55.0   10.9   13.0   1.7    52.6   15.6   46.8   8.7    33.5
III    4.8    47.2   0.2    59.6   6.1    34.7   10.3   36.4   14.1   12.7   5.3    43.6
IV     12.6   46.3   14.0   55.5   4.5    43.8   4.8    26.1   6.6    10.0   15.4   60.9
V      7.2    33.1   5.0    60.7   6.2    21.9   6.5    54.5   13.6   25.0   11.5   49.7
VI     15.7   55.5   15.2   50.6   8.8    19.8   15.2   49.6   15.2   54.9   3.2    28.4
VII    4.7    29.0   14.6   62.9   13.8   30.7   5.7    47.0   12.6   27.1   4.9    36.7
VIII   3.6    40.6   11.0   58.9   5.8    12.7   11.1   42.0   13.3   35.6   10.3   53.4
IX     14.8   27.0   12.2   62.0   6.9    10.0   15.0   49.9   10.8   48.9   2.0    31.4
X      1.3    31.6   2.1    37.0   1.6    47.0   6.0    37.4   11.8   19.3   1.3    14.7
Table 7. Centralized Approach, Subcase II Results (Continued)

experiment   E[Profit] ($)   Profit ($) (exp. IV)
I            17,053          17,899
II           15,427          18,392
III          14,332          17,682
IV           18,691          19,179
V            14,790          17,926
VI           14,478          18,062
VII          16,621          18,576
VIII         15,468          18,032
IX           15,563          17,948
X            14,073          17,551
average      15,650          18,125
std. dev.    1,437           477
avg. sim. runs: 8,430; CPU time (min): 219.2
Table 8. Decentralized Approach Results

experiment   RDB    SDB    E[Profit] ($)   Profit ($) (exp. VIII)
I            3.7    29.7   1,895           1,823
II           12.2   25.3   1,769           1,772
III          6.4    11.8   1,775           1,904
IV           9.9    14.1   1,786           2,001
V            11.2   26.3   1,770           1,801
VI           7.9    36.8   1,623           1,772
VII          5.5    14.8   1,838           1,831
VIII         5.7    11.3   2,011           1,920
IX           5.5    32.5   1,703           1,771
X            14.2   18.8   1,726           1,737
average                    1,790           1,833
std. dev.                  107             83
avg. sim. runs: 6,390; CPU time (min): 166.1
Table 9. Results of the Uncertainty Analysis for the Decentralized Approach: Profit at Distribution Center DB

case                       average ($)   standard deviation ($)   minimum ($)   maximum ($)
σ²Rij = 1 and σ²sij = 1    1,817         191                      1,339         2,268
σ²Rij = 2 and σ²sij = 2    1,809         205                      1,228         2,253
Then, Risk(Ω) can easily be represented by the cumulative probability distribution curve obtained through the evaluation of all the generated scenarios. Figure 16 represents the risk curves associated with each of the analyzed situations. For instance, if the profit target is Ω = $1,900, there is a risk Risk(Ω) = 68% of not reaching this target in the less uncertain case, while this risk increases to 72% in the more uncertain one. This study is an example of the versatility and potential profitability offered by the proposed methodology to the decision maker.

5.3. Complex Case. After tuning the algorithm, 20 runs of the proposed strategy are performed. Figure 18 shows the evolution of the objective function through the generations in this case. As before, four curves are depicted in the figure, with the same meaning as those in Figure 6. The algorithm converges more slowly than in the other cases.
Figure 18. Complex case: Objective function evolution.

Table 10. Complex Case Results

experiment   E[Profit] (M$)   experiment   E[Profit] (M$)
I            8.50             XI           9.17
II           8.57             XII          8.23
III          8.24             XIII         8.96
IV           9.16             XIV          10.52
V            9.29             XV           9.22
VI           9.75             XVI          9.14
VII          10.29            XVII         10.00
VIII         10.18            XVIII        8.57
IX           8.96             XIX          8.57
X            9.33             XX           9.70
average: 9.22; std. dev.: 0.68; avg. sim. runs: 25,047; CPU time (min): 4,175
Because of the complexity of the model, one simulation run takes ∼10 s (versus 1.5 s in the previous case). Therefore, the execution of the simulation-optimization algorithm takes approximately 4,175 min (Table 10). All the CPU times reported in this article correspond to the execution of the methodology on an AMD Athlon computer at 1.6 GHz.

6. Conclusions

The event-driven model is a useful tool for representing real-world SCM problems. Key issues in SCM are therefore a good model and a well-fitted stochastic search optimization method, which together allow the complexity and the uncertainty to be managed effectively. For instance, the risk of stock-outs is reduced while unnecessary amounts of stored materials are avoided, leading to substantial savings.

In this work, the SC behavior has been simulated by using a dynamic discrete-event model. Some of the parameters that characterize this behavior have been obtained by using GA as an
improving/optimization technique. Although system managers have long been able to play what-if scenarios with input data and simulation models to evaluate alternative solutions, the proposed methodology helps to eliminate the need for random trial and error and allows the automation of the optimization procedure. It is important to highlight that this approach has been applied here to inventory variables but is flexible enough to accept any other kinds of variables, even at different levels. Even though there is no guarantee that the resulting solutions are optimal, this strategy obtains high-quality solutions in reasonable computing times with little or no user supervision, thus enhancing the usefulness of the simulation models.

An important contribution of this study is the methodology applied, because the integration of simulation with optimization techniques is a field that can still offer promising opportunities for improvement. Unfortunately, comparisons are difficult because of the use of algorithms strongly dependent on an appropriate tuning. Tuning is a key aspect of the methodology and perhaps the most effort-intensive task when dealing with real-world cases.

Although the uncertainty aspects of the model have been mentioned in this work, integral uncertainty management requires further study and is a subject of our present research. The objective is to obtain a sufficiently robust SC operation that allows for dealing with internal and external uncertainty.

Acknowledgment

Financial support received from the Spanish "Ministerio de Educación, Cultura y Deporte" (FPU programs), the "Generalitat de Catalunya" (FI programs), and the GICASA-D (I0353) and OCCASION (DPI2002-00856) projects is gratefully acknowledged. Additionally, partial support from the European Community (MRTN-CT-2004-512233) is very much appreciated.

Appendix: Implementation of the Discrete-Event SC Simulator

The discrete-event SC simulator enables one to analyze the behavior and response of the system under different conditions
in order to make decisions about the real system. A model is constructed on a computer and then executed to simulate the system behavior; the model specifies, in a given way (e.g., by logic rules), the behavior of the individual participants in the system, instead of solving a set of differential equations. In this paper, it is proposed to use a model that includes all the entities belonging to the SC as independent and well-defined agents, each of them represented by a collection of states and transitions, that is, as an event-driven system. Agent modeling is based on communication between the coexisting entities in a system.

In this work, the model has been constructed using two toolboxes of MATLAB: Stateflow and Simulink. Stateflow allows the system to be represented by means of state-transition diagrams, and Simulink offers the simulation environment. In these systems, transitions from one state to another occur if the condition defining the change is true. In this model, each real-world SC node is represented externally by a generic unit in Simulink; thus, a SC consisting of six entities corresponds to a network of six interconnected blocks in Simulink. This generic unit operates as a discrete, event-driven system. The basic model for this entity is a set of four states that work in parallel.

Figure 19. Generic unit state-transition diagram.

Figure 19 shows an example of the internal representation of the generic unit, as it is viewed in the Stateflow environment. The boxes represent each of the states among which the system switches. Each circle is a transition node where logical conditions are evaluated, and every line represents a logical condition that must be satisfied to go from one state to another.

The state Order_reception is driven by the ORin event. This event makes the diagram leave the state Idle in order to analyze whether there is enough inventory to satisfy the order. If the order can be satisfied, state Inv1 is activated; if it cannot, the system returns to the state Idle, creating a note in a list. In state Inv1, the ordered quantity is subtracted from the inventory level I and transferred to a repository H, from which it will be ready for delivery. The state Material_reception depends on the event MAin, that is, on material arriving. This event carries the system from the state Idle2 to state Inv2, in which material is added to the inventory level I.
Table 11. Summary of the Results

                          case I,         case I,
                          subcase I ($)   subcase II ($)   case II ($)   complex case (M$)
best E[Profit]            12,681          18,691           2,011          9.22
worst E[Profit]           12,120          14,073           1,623          8.23
Ē[Profit]                 12,483          15,650           1,790         10.52
std. dev. of E[Profit]       161           1,437             107          0.68
This event carries the system from the state Idle2 to the state Inv2, in which the arriving material is added to the inventory level I. The state Delivering, upon detecting that the content of H exceeds a given threshold, emits the event MAout to the outside of the generic unit. Both the time between two consecutive material deliveries and the threshold magnitude are parameters that can be modified. The state Ordering implements a control law over the inventory level, that is, the criterion on which the request of materials from the suppliers is based. This state generates the event ORout, associated with a material quantity. Uncertainty is easily included, since the simulator accepts the definition of parameters and the occurrence of events on the time line as samples from probability distributions. Since this generic unit may represent a plant, a warehouse, or any other SC component, it can be connected with other units to represent the entire SC in each of the case studies tackled above. Note that the proposed methodology is unaffected if other, more complex methods, such as MILP, are used within the simulation to model production. (A minimal code sketch of the generic unit appears at the end of this appendix.)

Demand Model. To perform simulations, it is necessary to establish the way in which demand is expressed. In this approach, demand has been modeled as a set of events distributed over the time horizon of the study, each event having an associated amount of material and a time of occurrence. In the general case, both the amounts and the interarrival intervals between events are random variables. The amounts can follow a uniform or a normal distribution around a given value, whereas a Poisson process is used to model the interarrival times between orders, as the literature prescribes.25 Customer demands are fully backlogged by the retailers. (A sketch of such a demand generator is also given below.)

Transport Policies. The transport policy accounts for the way in which the repository H of "material ready for delivery" is depleted. A number of policies can be used, with their parameters being a source of uncertainty, e.g., number of vehicles, delivery time, material handling time, etc.

Inventory Policies. The inventory policy gives rules to answer two questions: "when should a replenishment order be placed?" and "how large should it be?" There are a number of possible inventory-control systems, all of which can be classified as continuous-review or periodic-review strategies. In a continuous review, each transaction (shipment, receipt, etc.) triggers an analysis of the inventory status in order to update it. With periodic review, the stock status is determined only every R time units; R is then the time period between two consecutive inventory reviews. The simulator is able to implement different strategies, such as the order-point (s, k) system, the order-point order-quantity (s, Q) system, the periodic-review order-up-to-level (R, S) system, and so on. It is, of course, possible to devise or implement other, more complex strategies, including forecasting information and cost-optimization models. (The last sketch below illustrates the (s, Q) and (R, S) rules.)
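To make the event-driven behavior of the generic unit concrete, the following is a minimal Python sketch of its four parallel activities. It illustrates the logic only and is not the authors' Stateflow implementation; the class GenericUnit, its method names, and the callback-based ordering policy are assumptions introduced for this example.

```python
# Minimal, illustrative sketch of the generic event-driven SC unit.
# Names are assumptions; the paper's implementation is a Stateflow diagram.
class GenericUnit:
    def __init__(self, inventory, threshold, order_policy):
        self.I = inventory       # inventory level I
        self.H = 0.0             # repository H: material ready for delivery
        self.backlog = []        # notes for orders that could not be met
        self.threshold = threshold
        self.order_policy = order_policy  # control law for replenishment

    def on_order_in(self, qty):
        """ORin event (state Order_reception)."""
        if self.I >= qty:        # enough inventory: activate state Inv1
            self.I -= qty        # subtract the ordered quantity from I
            self.H += qty        # transfer it to the repository H
        else:                    # back to Idle, noting the unmet order
            self.backlog.append(qty)

    def on_material_in(self, qty):
        """MAin event (state Material_reception, Idle2 -> Inv2)."""
        self.I += qty            # add arriving material to the inventory

    def delivering(self):
        """State Delivering: emit MAout once H exceeds the threshold."""
        if self.H >= self.threshold:
            shipped, self.H = self.H, 0.0
            return ("MAout", shipped)   # event sent outside the unit
        return None

    def ordering(self, t):
        """State Ordering: apply the control law over the inventory."""
        qty = self.order_policy(self.I, t)
        return ("ORout", qty) if qty > 0 else None
```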
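The demand model can be sketched in the same spirit. Since a Poisson arrival process has exponentially distributed interarrival times, the generator below draws interarrival intervals from random.expovariate and order amounts from a normal distribution; the function name and the parameter values in the usage line are illustrative assumptions, not taken from the paper.

```python
import random

def demand_events(horizon, rate, mean_qty, std_qty):
    """Generate (time, amount) demand events over the study horizon.

    Interarrival times are exponential, i.e., arrivals form a Poisson
    process; amounts are normally distributed around mean_qty.
    """
    t, events = 0.0, []
    while True:
        t += random.expovariate(rate)            # next interarrival gap
        if t > horizon:
            break
        qty = max(0.0, random.gauss(mean_qty, std_qty))
        events.append((t, qty))                  # one demand event
    return events

# Example: on average two orders per time unit over 30 time units.
orders = demand_events(horizon=30.0, rate=2.0, mean_qty=100.0, std_qty=15.0)
```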
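Likewise, the continuous-review (s, Q) and periodic-review (R, S) rules described above can be written as small policy functions compatible with the order_policy callback of the first sketch; the closures and the uniform (inventory, t) signature are design choices made here for illustration.

```python
def s_Q_policy(s, Q):
    """Continuous review: whenever the inventory position is at or
    below the reorder point s, request a fixed quantity Q."""
    def policy(inventory, t):
        return Q if inventory <= s else 0.0
    return policy

def R_S_policy(R, S):
    """Periodic review: every R time units, order up to the level S."""
    state = {"next_review": R}
    def policy(inventory, t):
        if t >= state["next_review"]:
            state["next_review"] += R            # schedule next review
            return max(0.0, S - inventory)       # order-up-to-level rule
        return 0.0
    return policy

# Example: a generic unit operating under a continuous (s, Q) review.
unit = GenericUnit(inventory=500.0, threshold=200.0,
                   order_policy=s_Q_policy(s=150.0, Q=300.0))
```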
Nomenclature

C = total SC cost in the time horizon θ
E[f(x_ind)] = expected value of the simulation outputs (objective function)
Ē[f(x_ind)] = expected value of the objective function computed over several seeds
ERM = set of raw materials
fic_ijt = fixed inventory cost of product j at node i and time t
ftc_i = fixed transportation cost from node i per trip
f(x_ind) = simulation output for the individual x_ind
f̂(x_ind) = output of the metamodel for the individual x_ind
F(x_ind) = fitness function of the individual x_ind
InvC = inventory cost in θ
Inv_ijt = inventory position of product j at entity i and time t
MaxGen = maximum number of generations
n = number of simulation runs
N = number of individuals in each population
N_s = number of simulations performed for each individual
N_trip,i = number of trips from node i in θ
oc_ijt = cumulated orders cost for product j at node i and time t
OrdC = penalization for nonsatisfied orders in θ
Ord_ijt = orders backlogged of product j at node i and time t
P = probability
PrdC = production cost in θ
prm_ij = price of raw material j at node i
price_ij = price of product j at node i
Prod = set of final products
Profit = total SC profit in θ
Ē[Profit] = expected value of the Profit computed over several seeds
P_m = mutation probability
P_x = crossover probability
Q_ijt = units of final product j leaving node i at time t
Q_prod,ijt = units produced of j at node i and time t
Q_rm,ijt = units of raw material j consumed at node i and time t
Q_trip,ij = units of j transported from node i in θ
Rep = percentage of individuals replaced per population at each generation
Revenues = total SC revenues in θ
R_ij = inventory review period of product j at entity i
Risk(Ω) = risk at a target value Ω
s_ij = inventory reorder point for product j at entity i
TrnC = transportation cost in θ
u_ijt = inventory replenishment quantity at entity i and time t
V = number of variables in each chromosome
vic_ijt = variable inventory cost per unit of j at node i and time t
vpc_ij = variable production cost per unit of j at node i
vtc_ij = variable transportation cost per unit of j from node i
x_ind = individual or chromosome
x*_ind = best individual found

Greek Characters

α = threshold for discarding individuals using the metamodel
φ = training frequency for the metamodel
σ_f(x_ind) = mean standard deviation of f(x_ind) over the whole response surface
σ̂_f(x_ind) = sample standard deviation of f(x_ind)
σ²_Rij = variance of R_ij
σ²_sij = variance of s_ij
θ = time horizon
Ω = profit target

Subscripts

i = SC entity
j = SC product
t = simulation time unit
Acronyms

GA = genetic algorithms
NN = neural networks
SC = supply chain
SCM = supply chain management

Literature Cited

(1) Kafoglis, C. C. Maximize competitive advantage with a supply chain vision. Hydrocarbon Process. 1999, 2, 47-50.
(2) Banks, J., Ed. Handbook of Simulation: Principles, Methodology, Advances, Applications, and Practice; John Wiley & Sons: New York, 1998.
(3) Forrester, J. W. Industrial Dynamics; MIT Press: Cambridge, MA, 1961.
(4) Swaminathan, J. M.; Smith, S. F.; Sadeh, N. M. Modelling supply chain dynamics: A multi-agent approach. Decis. Sci. 1998, 29, 607-632.
(5) Parunak, H. V.; Savit, R.; Riolo, R. L.; Clark, S. J. DASCh: Dynamic Analysis of Supply Chains; Final Report; Industrial Technology Institute: Ann Arbor, MI, 1999. http://www.erim.org/-vparunak/dasch99.pdf (accessed January 30, 2006).
(6) Gjerdrum, J.; Shah, N.; Papageorgiou, L. G. A combined optimisation and agent-based approach for supply chain modelling and performance assessment. Prod. Plann. Control 2000, 12, 81-88.
(7) García-Flores, R.; Wang, X. Z. A multi-agent system for chemical supply chain simulation and management support. OR Spectrum 2002, 24, 343-370.
(8) Pendharkar, P. C. A computational study on design and performance issues of multi-agent intelligent systems for dynamic scheduling systems. Expert Syst. Appl. 1999, 16, 121-133.
(9) Knotts, G.; Dror, M.; Hartman, B. C. Agent-based project scheduling. IIE Trans. 2000, 32, 387-401.
(10) Fox, M. S.; Barbuceanu, M.; Teigen, R. Agent-oriented supply chain management. Int. J. Flexible Manuf. Syst. 2000, 12, 165-188.
(11) Chen, Y.; Peng, Y.; Finin, T.; Labrou, Y.; Cost, S.; Chu, B.; Yao, J.; Sun, R.; Wilhelm, B. A negotiation-based multi-agent system for supply chain management. Working Notes of the Agents '99 Workshop on Agents for Electronic Commerce and Managing the Internet-Enabled Supply Chain, Seattle, WA, 1999.
(12) Applequist, G. E.; Pekny, J. F.; Reklaitis, G. V. Risk and uncertainty in managing manufacturing supply chains. Comput. Chem. Eng. 2000, 24, 2211-2222.
(13) Shapiro, J. F. Modeling the Supply Chain; Duxbury: Pacific Grove, CA, 2001.
(14) Silver, E. A.; Pyke, D. F.; Peterson, R. Inventory Management and Production Planning and Scheduling; John Wiley & Sons: New York, 1998.
(15) Backx, T.; Bosgra, O.; Marquardt, W. Towards intentional dynamics in supply chain conscious process operation. In Proceedings of the Third International Conference on Foundations of Computer-Aided Process Operations; Pekny, J. F., Blau, G. E., Eds.; American Institute of Chemical Engineers: New York, 1998; p 5.
(16) Bose, S.; Pekny, J. F. A model predictive framework for planning and scheduling problems: a case study of consumer goods supply chain. Comput. Chem. Eng. 2000, 24, 329-335.
(17) Ettl, M.; Feigin, G. E.; Lin, G.; Yao, D. D. A supply network model with base stock control and service requirements. Oper. Res. 2000, 48, 216-232.
(18) Perea-López, E.; Ydstie, B. E.; Grossmann, I. E. A model predictive control strategy for supply chain optimisation. Comput. Chem. Eng. 2003, 27, 1201-1218.
(19) Mele, F. D.; Bagajewicz, M.; Espuña, A.; Puigjaner, L. Financial risk control in a discrete event supply chain. In Proceedings of ESCAPE-13; Kraslawski, A., Turunen, I., Eds.; Elsevier: Amsterdam, The Netherlands, 2003; pp 479-484.
(20) Towill, D. R. Dynamic analysis of an inventory and order based production control system. Int. J. Prod. Res. 1982, 6, 671-687.
(21) Towill, D. R. Industrial dynamics modelling of supply chains. Logistics Inf. Manage. 1996, 4, 43-56.
(22) Perea-López, E.; Grossmann, I. E.; Ydstie, B. E.; Tahmassebi, T. Dynamic modeling and decentralized control of supply chains. Ind. Eng. Chem. Res. 2001, 40, 3369-3383.
(23) Sivakumar, A. L. Multiobjective dynamic scheduling using discrete event simulation. Int. J. Comput. Integr. Manuf. 2001, 14, 154-167.
(24) Dudek, G. Collaborative planning in supply chains: A negotiation-based approach. In Lecture Notes in Economics and Mathematical Systems; Springer: Berlin, 2004; Vol. 533.
(25) Law, A. M.; Kelton, W. D. Simulation Modeling & Analysis, 3rd ed.; McGraw-Hill: New York, 2000.
(26) Fu, M. C. Optimization for simulation: Theory vs practice. INFORMS J. Comput. 2002, 14, 192-215.
(27) Subramanian, D.; Pekny, J. F.; Reklaitis, G. V. A simulation-optimization framework for addressing combinatorial and stochastic aspects of an R&D pipeline management problem. Comput. Chem. Eng. 2000, 24, 1005-1011.
(28) Jung, J. Y.; Blau, G.; Pekny, J. F.; Reklaitis, G. V.; Eversdyk, D. A simulation based optimization approach to supply chain management under demand uncertainty. Comput. Chem. Eng. 2004, 28, 2087-2106.
(29) Fu, M. C. Simulation optimization. In Proceedings of the 2001 Winter Simulation Conference; Peters, B. A., Smith, J. S., Medeiros, D. J., Rohrer, M. W., Eds.; Arlington, VA, 2001; pp 53-61.
(30) Andradóttir, S. A review of simulation optimization techniques. In Proceedings of the 1998 Winter Simulation Conference; Medeiros, D. J., Watson, E. F., Carson, J. S., Manivannan, M. S., Eds.; Washington, DC, 1998; pp 151-158.
(31) Syarif, A.; Yun, Y.; Gen, M. Study of multi-stage logistic chain network: a spanning tree-based algorithm approach. Comput. Ind. Eng. 2002, 43, 299-314.
(32) Vergara, F. E.; Khouja, M.; Michalewicz, Z. An evolutionary algorithm for optimizing material flow in supply chains. Comput. Ind. Eng. 2002, 43, 407-421.
(33) Zhou, G.; Min, H.; Gen, M. The balanced allocation of customers to multiple distribution centers in the supply chain network: A genetic algorithm approach. Comput. Ind. Eng. 2002, 43, 251-261.
(34) Laguna, M.; Martí, R. Neural network prediction in a system for optimizing simulations. IIE Trans. 2002, 34, 273-282.
(35) Michalewicz, Z. Genetic Algorithms + Data Structures = Evolution Programs, 3rd ed.; Springer-Verlag: Berlin, 1996.
(36) Baker, J. E. Reducing bias and inefficiency in the selection algorithm. In Proceedings of ICGA 2; Lawrence Erlbaum Associates: Mahwah, NJ, 1987; pp 14-21.
(37) Chen, S.; Cowan, C. F. N.; Grant, P. M. Orthogonal least squares learning algorithm for radial basis function networks. IEEE Trans. Neural Networks 1991, 2, 302-309.
(38) Diks, B.; de Kok, A. G.; Lagodimos, A. G. Multi-echelon systems: A service measure perspective. Eur. J. Oper. Res. 1996, 95, 241-263.
(39) Tayur, S., Ganeshan, R., Magazine, M. J., Eds. Quantitative Models for Supply Chain Management; Kluwer Academic Publishers: Norwell, MA, 1998.
(40) Barbaro, A. F.; Bagajewicz, M. J. Managing financial risk in planning under uncertainty. AIChE J. 2004, 50 (5), 963-989.
Received for review February 17, 2005
Revised manuscript received November 25, 2005
Accepted January 9, 2006

IE050189T