Ind. Eng. Chem. Res. 2005, 44, 2405-2415


Multiobjective Decision Processes under Uncertainty: Applications, Problem Formulations, and Solution Strategies

Lifei Cheng, Eswaran Subrahmanian, and Arthur W. Westerberg*

Department of Chemical Engineering, Institute for Complex Engineered Systems, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213-3890

We consider the decision-making problems that firms face when operating in a changing and uncertain environment. Problems of this type arise in many important decision contexts in various industries and pose challenges for both practitioners and researchers. This paper is a contribution to the development of a general framework for formulating and solving this class of problems through an investigation of current applications and solution methodologies. We consider a general class of problems, multiobjective decision processes under uncertainty, concentrating on its application areas, problem formulations, and solution strategies. A morphology classifies the relevant literature by projecting the problems reported onto a multidimensional problem space. A problem of coordinated capacity planning and inventory control serves as an example of this problem class to illustrate the issues related to formulations and solutions throughout the paper. We develop an approximation architecture that decomposes the planning horizon in time into several subhorizons and constructs a distinct decision model based on the differences in information available for each subhorizon. We link these models through states at boundaries and solve them sequentially backward in time based on the principle of optimality. An iterative solution process, i.e., proposing states forward and propagating Pareto fronts backward, searches for the optimal first stage(s) decisions. We investigate and compare combinations of different approaches in solving the example problem. Numerical results demonstrate the advantages of the proposed approximation scheme and emphasize the importance of tailoring solution strategies to specific problems.

1. Introduction

We explore a type of problem known as sequential decision making under uncertainty. We specifically add the complication of objectives that compete with each other.
Manufacturing firms routinely face such problems when they must make annual decisions on whether and by how much to expand manufacturing capacity and monthly decisions on how much product to make. They have to trade off possible profit against the risk they may be taking, which, in the worst case, could cause them to go out of business. In this paper, we examine a general class of decision problems that possess all of the following characteristics: (i) The decision context, or the external environment, changes rapidly over time and involves a substantial amount of uncertainty. (ii) The decision makers need to coordinate decisions made at different times and at both the design (strategic) and operational levels, because of the interconnections between them. (iii) Decision makers have multiple conflicting goals. The difficulty lies in the (at least partial) conflict or incommensurability among these objectives. We would like to stress that this paper deals only with decision-making problems that have both multiple objectives and multiple stages. It is not intended to cover all problems of decision under uncertainty. There already has been extensive research on specific problems with a single objective and/or two stages, which this paper will not address individually. We refer readers to other literature for those special problems.1-3

* To whom correspondence should be addressed. Tel.: 412-268-2344. Fax: 412-268-7139. E-mail: [email protected].

A vast literature, ranging from theoretical work to "expedient" applications, exists on decision making under uncertainty, as it is important for a variety of problems such as capacity planning, product development, vehicle allocation, and portfolio management. Our goal in this paper is to develop a general framework for formulating this class of problems and for tailoring efficient approximate solution strategies. Figure 1 illustrates an overview of this framework. We review a broad range of applications of this problem class in various areas from the literature, along with our investigations into specific contexts.4-7 We show how one can formulate this type of problem as a generic multiobjective decision process using an example in capacity planning and inventory control. We develop and tailor different solution methods for specific problems and, building on them, propose an approximation architecture that combines different methods to solve problems efficiently by exploiting the advantages of each. This paper is the final piece of a series of papers dedicated to formulating and solving this class of problems. It specifically focuses on a general framework that decomposes the problems and combines different methods. It also serves as an overview of our previous related work, which elaborates on individual perspectives of theory and computation, specifically, optimality of the multiobjective problems,4 the simulation-based optimization approach,5 and a comparison between dynamic programming and stochastic programming.6 The organization of the rest of the paper is as follows. In section 2, we present a morphological scheme to

10.1021/ie049622+ CCC: $30.25 © 2005 American Chemical Society Published on Web 02/17/2005


Figure 1. Overview of the framework presented in this paper.

Figure 2. A morphological scheme for literature classification.

classify the relevant literature along several dimensions. We review the application of this class of problems to various areas and solution approaches from different research streams. In section 3, we present an example problem on coordinated capacity planning and production-inventory control to illustrate the issues in model formulations and solution strategies. With an understanding of different solution approaches and the underlying problem, in section 4 we propose a decomposition and approximation architecture to search for the optimal first stage(s) decisions or policies. We demonstrate these approximation and decomposition strategies on the example problem in section 5 and briefly compare different solution approaches. Finally, we conclude the paper in section 6 and suggest directions for further research.

2. Classification and Literature Review

Decision making inherently involves the consideration of multiple objectives and uncertain outcomes, and in many situations we have to take into account both the outcomes of current decisions and future decision opportunities. This class of problems encompasses a wide range of applications and motivates advances in theoretical properties, algorithmic developments, and computational capabilities. In this section, we review the literature on multistage decision processes under uncertainty, with a focus on the applications in various areas and on different solution methodologies. To organize the vast literature related to this class of problems in a meaningful manner, we propose a morphological scheme that classifies the relevant literature along the dimensions of application areas, problem formulations, and solution strategies. Figure 2 illustrates the classification scheme, which places seven research papers chosen as examples (with no prejudice implied in their choice).

2.1. Application Areas. Decision processes under uncertainty deal with optimizing decisions over time in an uncertain environment. Problems of this type have found applications in a variety of decision contexts in different industries, including manufacturing, R&D management, finance, transportation, power systems, and water management. We shall review the application of this class of problems in different fields and emphasize the diversity and importance of these applications.

2.1.1. Production and Inventory Control. Manufacturing firms operate in an environment in which such factors as product demand and technology evolution inevitably involve uncertainty. Production planning and
inventory control are operational-level decisions that firms must make on a regular basis. Effective inventory control is important to managing cost by properly balancing competing costs such as inventory carrying costs and demand shortage penalties. Sequential decision models applied to production and inventory control represent one of the earliest areas of application. The scope of these applications ranges from determining reorder points for a single product to controlling a complex multiproduct multiechelon supply chain. Much of the earliest and most noteworthy research on inventory control concerns the form of the optimal policy under various assumptions about the economic parameters.9-11 For example, Scarf9 shows that the optimal policy in each period is always of the (s, S) type (i.e., replenish the inventory level up to S when it falls below s) for a fixed-charge linear cost structure. Building on these structural results, recent research computes the values of the parameters in the structural policies and solves inventory control problems numerically.12-14 Specifically, Kapuscinski and Tayur13 study a single-product capacitated production-inventory system with stochastic periodic demand and propose a simulation-based optimization procedure using infinitesimal perturbation analysis (IPA) to compute the optimal parameters. Recently, there have been a few research efforts that attempt to apply practical solution strategies, e.g., moving horizon control schemes, to production and inventory control problems in industry.8,15

2.1.2. Capacity Planning. Capacity planning and technology adoption are a crucial part of strategic-level decision making in the manufacturing and service industries. Complications arise in decisions on the timing and size of capacity investments because of uncertainty in both the demand for capacity, e.g., customer demand, and the availability of capacity, e.g., technology development.
Other factors that one has to take into consideration include economies of scale, discounting of future costs, and so on. All of these factors, along with the significant and long-term impact of capacity decisions, make capacity planning one of the most important yet complex decisions for most industries. There is a growing literature on capacity planning based on stochastic programming, which uses scenarios to model uncertainties such as product demands.16-19 For example, Eppen et al.16 describe a model developed for General Motors to aid in capacity planning. The model maximizes the present value of the expected discounted cash flows subject to a constraint on the expected downside risk. In contrast, approaches based on optimal control place more emphasis on analytical solutions and structural results.20-26 Eberly and Van Mieghem23 and later Harrison and Van Mieghem24 present a framework for studying multiresource investment under uncertainty in a dynamic environment. They show that the optimal investment strategy follows a control limit policy at each point in time. Recent work has attempted to integrate capacity planning and inventory control and to optimize these decisions simultaneously.5,27-29

2.1.3. R&D Management. R&D management has far-reaching economic implications for product-development-driven and highly regulated industries, such as the pharmaceutical and agrochemical industries. Systematic and effective decision making, ranging from long-term portfolio selection to short-term test scheduling,

is increasingly critical to optimizing such performance measures as the time to market and the cost of development in the R&D pipeline. Firms must make all of these strategic and operational decisions in the presence of significant uncertainty, e.g., product test failure, and ever-constrained resources, e.g., limited testing facilities.30-33 For example, Maravelias and Grossmann32 consider the simultaneous optimization of resource-constrained scheduling of testing tasks in new product development and the design/planning of batch manufacturing facilities. They propose a multiperiod mixed-integer linear programming model to maximize the expected net present value of multiple projects.

2.1.4. Finance. Financial problems inherently involve decision making under uncertainty. Stochastic programming and related techniques, as powerful modeling paradigms, have found uses in various applications in financial management. Applications in this field include portfolio optimization34 and asset-liability management for banks,35 insurance companies,36-38 pension funds,39,40 and fixed-income portfolios.41 Ziemba and Mulvey42 report on a collection of financial models for asset and liability management. For example, Carino et al.36 develop an asset/liability management model using multistage stochastic programming to allocate funds among available assets to maximize expected wealth less expected penalized shortfalls over a planning horizon. The firm integrated this model into its financial planning process; during the first two years of use, the investment strategy devised by the model yielded an extra income of $79 million.

2.1.5. Transportation. Transportation, as an important logistics operation, has to deal with fleet and crew management on a daily basis. For example, dynamic vehicle allocation and routing problems arise in industries that need to manage a fleet of vehicles over time in response to demands.
Transportation has remained a prominent application area for stochastic optimization since the early work on aircraft allocation by Ferguson and Dantzig.43 Typical decisions in this industry must be made in the presence of uncertainty in customer demand,44 vehicle travel time,45 or both.7 Kleywegt et al.44 consider the inventory routing problem, which addresses the coordination of inventory management and transportation. The authors formulate the inventory routing problem as a discrete-time Markov decision process to maximize the expected discounted value over an infinite horizon. For more details, we refer readers to two surveys on location and routing problems.46,47

2.1.6. Power and Water Systems. Power and water systems have been a common area of application as well as a source of developments in the formulation and solution of stochastic models. A typical example is the unit commitment of hydrothermal power systems, which must determine which generating units are to be in use at each point in time over a scheduling horizon.48 Decisions in practical situations have to be made with imperfect information about problem data, such as future dam inflows, power prices, and demand. Some practical application examples include optimal stochastic scheduling for a 39-reservoir system in Brazil,49 unit commitment of a Michigan power system under demand uncertainty48 and of a German hydrothermal generation system under uncertain load,50-52 and water management of a real-life water resource system in Eastern Czechoslovakia53 and of the Highland Lakes in central Texas.54
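To make the unit commitment problem concrete, the following toy sketch enumerates all on/off schedules for a two-unit, two-period deterministic instance and dispatches committed units cheapest-first. All unit data and demands are invented for illustration; they have no connection to the cited studies, and brute-force enumeration is viable only at this tiny scale.

```python
from itertools import product

units = [  # fixed running cost per period, capacity (MWh), variable cost per MWh
    {"fix": 100.0, "cap": 50.0, "var": 2.0},
    {"fix": 300.0, "cap": 120.0, "var": 1.0},
]
demand = [60.0, 150.0]  # MWh demanded in each period

def period_cost(on_flags, load):
    """Cost of serving `load` with the committed units, dispatched cheapest-first."""
    committed = sorted((u for u, on in zip(units, on_flags) if on),
                       key=lambda u: u["var"])
    if sum(u["cap"] for u in committed) < load:
        return float("inf")  # committed units cannot cover the demand
    cost, remaining = sum(u["fix"] for u in committed), load
    for u in committed:
        served = min(u["cap"], remaining)
        cost += served * u["var"]
        remaining -= served
    return cost

# Brute-force enumeration of all commitment schedules (tiny instances only).
schedules = product(product([0, 1], repeat=len(units)), repeat=len(demand))
best = min(schedules, key=lambda s: sum(period_cost(f, d) for f, d in zip(s, demand)))
print(best)  # ((0, 1), (1, 1)): the small unit is committed only in the peak period
```

Real unit commitment adds ramp limits, minimum up/down times, and (as in the cited stochastic models) uncertain loads and inflows, which is why the practical literature relies on decomposition rather than enumeration.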


2.2. Problem Formulations. Problem formulation is critical for constructing an appropriate model of the underlying problem that we intend to solve. The most critical components in the formulation of this class of problems are the handling of multiple objectives and multistage decisions and the modeling of uncertainty and integer requirements. We shall elaborate on each of these formulation issues. We emphasize that it is crucial to formulate the right model with an understanding of the true underlying problem, e.g., what the decision makers' objectives are, when decisions are made, and what information is available.

2.2.1. Single Objective to Multiobjectives. We strongly believe that most decision problems, especially decision making under uncertainty, inherently involve multiple objectives. For instance, even if profit is the only concern of decision makers, regardless of other consequences such as environmental and social effects, the profit itself is a random variable, and thus one cannot characterize it by its expected value alone. In particular, the variability of the profit is also of great concern to decision makers, who are usually risk averse. Therefore, decision making under uncertainty is naturally a multiobjective problem: "In any decision under risk, expected profit is not the only objective."16 The difficulty lies in the possible conflicts among these objectives. It is common for the individual optima corresponding to the distinct objective functions to be very different. Therefore, one cannot optimize the conflicting objectives simultaneously and must make tradeoffs among them. For instance, a decision that only maximizes expected profit could lead to an unacceptable risk level.4 Other examples in the process design literature include maximizing expected profit while achieving goals of design flexibility.55 Most models in the literature use a single-objective formulation to maximize/minimize the expected value of a performance measure.
One of the reasons is that a complete theory and numerous algorithms are available for solving this type of problem. Exceptions include the capacity planning model that Eppen et al.16 constructed for General Motors, which maximizes expected profit while satisfying an appended constraint limiting the expected downside risk; they generate a series of efficient solutions by successively tightening the risk constraint. Lasdon et al.54 develop a two-objective model for the management of the Highland Lakes to maximize expected revenue and recreational benefits. The objective function is the weighted sum of the two objectives, where the weight represents the value of recreation relative to the profit.

2.2.2. Two-Stage to Multistage Decisions. A two-stage stochastic program with recourse is a special case of a multistage stochastic program. Schultz et al.56 present a survey of recent research on two-stage stochastic integer programming. A two-stage stochastic program coarsely divides time into "now" and "the future." The decision maker makes the first-stage decision prior to the realization of the uncertainty and then makes the second-stage recourse decision(s) contingent on the information revealed upon resolution of the uncertainty. Note that the stages do not necessarily correspond to periods in time. Each stage represents a decision epoch at which decision makers have an opportunity to revise decisions based on the additional available information. For example, one can formulate a two-stage stochastic

program for a multiperiod problem in which the second stage represents a group of periods in the remaining future.57 Multistage models extend two-stage models by allowing revised decisions in each stage based upon the uncertainty realized so far. The use of models with multiple stages is less prevalent than that of models with two stages because of their large size and complexity. However, multistage models have drawn an increasing amount of attention, and we expect even more theoretical and computational developments because of their vast range of potential applications. In this work, we concentrate on general multistage decision processes that involve sequential decisions at different times and/or levels.

2.2.3. Linear to Mixed Integer Models. The majority of research effort on stochastic optimization has been devoted to linear models, mainly because of their well-developed theory and the convexity/duality properties they inherit. The incorporation of integer variables not only increases the complexity, because of the NP-hardness of integer programming, but also destroys structural properties such as convexity and duality. Analytical solutions and results based on convexity properties are no longer valid or require further extensions.9,27 Similarly, efficient decomposition approaches that were developed for linear models based on these properties can no longer be formally justified,58,59 although some empirical exceptions have been observed for specific applications.48,60 Nevertheless, in practical optimization, integer requirements are indispensable. Typical examples are the fixed expansion cost in capacity planning18 and the fixed setup cost in inventory control,9 which require the use of integer variables in mathematical programming. In this work, we do not restrict ourselves to linear models. However, we are aware that the introduction of integrality further complicates the computation.

2.2.4. Models of Underlying Uncertainty.
Modeling the uncertainty itself is an actively developing research area. In the optimal control area, for the purpose of analytical tractability and convenience, one usually uses simple stochastic processes, e.g., Brownian motion, to model the uncertainty.21,23 In the mathematical programming area, because of the numerical complexity of multivariate integration, one usually replaces the continuous distributions by a small finite set of discrete outcomes, or scenarios. Scenarios can be generated by discretizing the continuous probability distributions61 or by Monte Carlo type simulation techniques.62,63 The goal of modeling uncertainty is not simply to approximate the probability distributions but to construct a tractable optimization problem that provides an acceptable approximate solution to the true underlying problem.64 To achieve this goal, we often need to compromise between the precision of the approximation and the size of the approximate problem. The modeling procedure should take into account both the problem-specific requirements and the available information about the underlying probability distributions. Dupacova64 classifies the levels of available information into four types: full knowledge of distributions, known parametric family, sample information, and low information level. For each type of information level, one can choose the origin of the scenarios consistent with the knowledge of the uncertainty.
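As a concrete sketch of the two origins of scenarios mentioned above, the fragment below builds a small scenario set for a continuously distributed demand, once by weighting a fixed grid with the renormalized density and once by Monte Carlo sampling. The normal(100, 20) demand model, the grid, and the sample size are illustrative assumptions, not taken from any cited application.

```python
import random
from math import exp, pi, sqrt

def gauss_pdf(x, mu, sigma):
    """Density of the normal(mu, sigma) distribution at x."""
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2.0 * pi))

def discretize(mu, sigma, points):
    """Scenarios on a fixed grid, weighted by the renormalized density."""
    weights = [gauss_pdf(x, mu, sigma) for x in points]
    total = sum(weights)
    return [(x, w / total) for x, w in zip(points, weights)]

def monte_carlo(mu, sigma, n, seed=0):
    """Equally weighted scenarios obtained by sampling."""
    rng = random.Random(seed)
    return [(rng.gauss(mu, sigma), 1.0 / n) for _ in range(n)]

grid = discretize(100.0, 20.0, [60.0, 80.0, 100.0, 120.0, 140.0])
mc = monte_carlo(100.0, 20.0, 1000)
mean_grid = sum(x * w for x, w in grid)
mean_mc = sum(x * w for x, w in mc)
print(mean_grid, mean_mc)  # both close to the true mean of 100
```

Matching a few low-order moments, as here, is only the crudest consistency check; the scenario-generation literature cited above is concerned with how such approximations affect the optimal value and solution of the resulting program.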


2.3. Solution Strategies. Two major research streams formulate and solve this class of problems: stochastic optimal control and multistage stochastic programming. These two streams encompass a variety of solution methods that solve the same class of problems with different purposes and emphases, e.g., analytical versus numerical, rigorous versus approximate, etc. These approaches are essentially equivalent in the sense that they find the same solutions to the same problem. However, they exhibit differences both in the formulation and in the solution process, which give them favorable and unfavorable features for specific problems.

2.3.1. Stochastic Optimal Control. Stochastic optimal control, or a Markov decision process, characterizes a sequential decision problem in which the decision maker chooses an action in the state occupied at each decision epoch according to a decision rule or policy. Dynamic programming provides a framework for studying such problems, as well as for devising algorithms to compute an optimal control policy. Several comprehensive textbooks have been written on this subject.65,66 Based on the principle of optimality,67 a dynamic programming algorithm decomposes the optimal control problem into a sequence of single-period subproblems that are solved recursively backward in time. Research extended to multiobjective cases has enabled the development of decomposition methodologies for separable multiobjective dynamic optimization problems.4,68,69 A dynamic programming algorithm provides a general approach to sequential optimization under uncertainty. However, since one has to carry out an optimization for each state in the state space, which is usually very large for most problems, dynamic programming suffers numerically from the "curse of dimensionality" and is prohibitively expensive for large-scale problems.
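The backward recursion just described can be sketched on a toy inventory problem with a fixed ordering charge, the setting of Scarf's (s, S) result discussed in section 2.1.1. All costs, the demand distribution, and the truncated state space below are illustrative assumptions chosen so that full enumeration of states is trivial; real instances are exactly where the curse of dimensionality bites.

```python
K, c, h, p = 5.0, 1.0, 1.0, 4.0        # fixed-order, unit, holding, shortage costs
demand = {0: 0.25, 1: 0.5, 2: 0.25}    # one-period demand distribution
LO, HI, T = -5, 10, 3                  # inventory bounds (negative = backorders), horizon
states = range(LO, HI + 1)

cost_to_go = {x: 0.0 for x in states}  # terminal condition: V_T = 0
policy = {}
for t in reversed(range(T)):           # backward in time, per the principle of optimality
    new_cost = {}
    for x in states:                   # one small subproblem per state
        best_cost, best_y = None, None
        for y in range(x, HI + 1):     # y: inventory level after ordering
            expected = sum(
                pr * (h * max(y - d, 0) + p * max(d - y, 0)
                      + cost_to_go[max(min(y - d, HI), LO)])
                for d, pr in demand.items())
            total = (K if y > x else 0.0) + c * (y - x) + expected
            if best_cost is None or total < best_cost:
                best_cost, best_y = total, y
        new_cost[x] = best_cost
        policy[(t, x)] = best_y
    cost_to_go = new_cost

# The first-period policy exhibits the (s, S) structure: below some reorder
# point s, the order brings inventory up to a constant level S; above s, no order.
orders = {x: policy[(0, x)] for x in states}
print(orders)
```

The nested loops make the cost explicit: work grows with the product of horizon, state-space size, action-space size, and scenario count, which is why approximation methods are needed beyond toy scales.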
The recent emergence of neuro-dynamic programming offers a possibility of averting the curse of dimensionality through suboptimal methods that center on approximating the optimal "objective-to-go" function.70-74 Despite its computational limitations, dynamic programming provides a framework for obtaining analytical results and structural properties of optimal solutions under simplifying assumptions. For instance, Scarf9 shows that an (s, S) type policy is optimal for a dynamic inventory problem with an ordering cost composed of a unit cost plus a reorder cost. Eberly and Van Mieghem23 show that the optimal investment strategy in a multiresource investment problem with convex cost functions follows a control limit policy at each point in time. These structural results allow the development of numerical approaches to find the values of the parameters that define the optimal policies, including gradient-based optimization techniques using perturbation analysis12,75 and nongradient direct search approaches.5

2.3.2. Multistage Stochastic Programming. Multistage stochastic programming deals with problems that involve a sequence of decisions reacting to outcomes that evolve over time. At each stage, one makes decisions based on currently available information, i.e., past observations and decisions, prior to the realization of future events. There are two standard textbooks in this area, which is developing rapidly thanks to advances in algorithms and computation.1,2 Birge,3 in a survey paper, describes the basic methodology of stochastic programming, recent developments in computation, and some examples of practical applications.
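A deliberately small two-stage example may help fix ideas: capacity is chosen here and now, demand is then revealed, and the recourse is simply to produce up to the smaller of capacity and demand. Solving it by brute force over a grid shows how the stochastic solution hedges across scenarios instead of planning for the mean demand; all prices, costs, and scenarios are illustrative assumptions.

```python
scenarios = [(80.0, 0.25), (100.0, 0.50), (140.0, 0.25)]  # (demand, probability)
price, cap_cost = 10.0, 6.0

def expected_profit(x):
    """First-stage capacity x; the recourse sells min(x, d) in each scenario."""
    return sum(pr * price * min(x, d) for d, pr in scenarios) - cap_cost * x

grid = [float(x) for x in range(0, 161, 5)]
mean_d = sum(d * pr for d, pr in scenarios)                   # = 105
x_stoch = max(grid, key=expected_profit)                      # hedges across scenarios
x_mean = max(grid, key=lambda x: price * min(x, mean_d) - cap_cost * x)
print(x_stoch, x_mean)  # 100.0 105.0: the mean-value plan overbuilds
print(expected_profit(x_stoch), expected_profit(x_mean))
```

Replacing the grid search by a linear program and the three scenarios by a scenario tree over several stages gives the deterministic equivalent program discussed next, whose size is what motivates the decomposition methods below.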

For most problems in which the random variables follow multidimensional continuous distributions, computation is numerically difficult or even intractable, as it requires multivariate integration. To avoid this problem, one usually generates a finite set of scenarios, from sampling or from a discrete approximation of the given distributions, to represent the probability space. Generating scenarios to approximate the underlying problem is a research subject in itself.61,76 With the scenarios or scenario tree specified, we can reformulate the stochastic program as a deterministic equivalent program. The size of the deterministic program can easily grow out of hand for a large number of scenarios, which renders direct solution approaches numerically intractable and thus necessitates special methods, such as decomposition and aggregation. Decomposition methods, including primal and dual decompositions, exploit the structure of the problem to split it into manageable pieces and coordinate their solutions.77 Primal decomposition approaches78-82 assign a small local optimization problem to every node and treat the coupling between stages iteratively. Dual decomposition approaches82-84 and the progressive hedging algorithm85 optimize individual scenarios and iterate on the nonanticipativity conditions.

2.3.3. Stochastic Programming versus Optimal Control. These two methodological approaches address the same class of problems, but with different perspectives and emphases; indeed, the solutions obtained by the two approaches to the same problem are equivalent. Stochastic programming approaches search for an optimal decision tree that hedges against the scenario tree representing the underlying uncertainty, while optimal control is more interested in optimal policies that map each state into the optimal action. Both approaches suffer numerically from the curse of dimensionality, because of the large state space in optimal control and the large sample space in stochastic programming.
Both methods require approximation approaches to solve large problems. The differences in formulation and solution between them lead to distinctive favorable and unfavorable features for a specific problem. We shall demonstrate with an example problem in section 5 that it is possible to combine the advantages of both approaches to solve large problems. The two research streams have developed in parallel, with different areas of application depending on the problem-specific requirements. For instance, stochastic programming is more suitable for long-term strategic planning problems, such as capacity planning,18,19 with relatively few periods and scenarios. Stochastic optimal control, on the other hand, works better for operational control problems, such as production and inventory control, where there are relatively many periods and scenarios but a state space of modest size.12,13 However, this distinction is sometimes ambiguous and problem dependent. For example, capacity planning draws attention from both research streams: scenario-based stochastic programming aims at finding numerical solutions, while optimal control provides a theory for obtaining analytical results, such as the structure of the optimal solutions.

2.3.4. Analytical versus Numerical Solutions. Analytical and numerical solutions are usually not separable and should complement each other. Comparatively speaking, the literature in the area of optimal control9,20,27 pays more attention to analytical results than
numerical solutions, compared to research in the mathematical programming area,16,19,28 and vice versa. We believe that it is often useful to obtain analytical results from a simplified model and then generalize the results to develop an efficient numerical approach for solving realistic problems.5 For example, Rajagopalan et al.26 develop an efficient regeneration-point-based dynamic programming algorithm that solves moderate-size capacity planning problems based on structural results pertaining to the optimal solutions. Kapuscinski and Tayur13 characterize some structures and properties of the optimal policy for simple inventory control models and then propose a simulation-based optimization procedure to compute the optimal parameters according to their analytical findings. Analytical solutions of decision making under uncertainty problems are possible only in exceptional cases, e.g., simple stochastic processes and linear cost functions. However, neither are numerical solutions immune from difficulties in solving general problems, which usually do not possess such nice mathematical properties as convexity and differentiability. For instance, introducing a fixed cost to model economies of scale can render analytical solutions intractable or require further extensions.9,27 It also introduces integrality into stochastic programming and destroys special properties such as convexity and duality in linear models. Therefore, efficient numerical solution approaches developed on the basis of these properties cannot simply be adapted to solve general problems, and one usually has to resort to problem-specific approximations and/or heuristic approaches.18,60

3. An Illustrative Example

We illustrate our framework with an example problem of coordinated capacity planning and inventory control. We formulate the problem as a discrete-time decision process and propose an approximation framework to find Pareto optimal strategies.
One can generalize the results obtained with this problem to other decision contexts, such as the application areas reviewed in the previous section, by incorporating the problem-specific characteristics. For details on the mathematical formulation and solution, we refer readers to another paper that addresses this example problem in detail, with the focus on computational and numerical aspects.6

3.1. Problem Formulation. In formulating the problem, we emphasize the two most critical characteristics of this class of problems: sequential decisions and multiple objectives. We consider a firm that employs multiple resources with different technology types and equipment sizes to produce multiple products. The firm has the option to change the capacity level of each resource, if applicable, at the beginning of each year. During each year, the firm periodically reviews inventory levels and makes production plans at the beginning of each month. Capacity and production planning together constitute a decision process in which decision makers choose capacity and production decisions at each stage based on past observations and decisions. This decision process is nonanticipative in the sense that current decisions cannot depend on specific future events that have not yet been realized. We refer to a sequence of decisions in the decision process as a decision strategy.
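The decision process just described can be sketched in miniature. The fragment below evaluates a handful of fixed (capacity, base-stock) strategies on the two criteria introduced in sections 3.1.1 and 3.1.2, expected discounted profit and expected downside risk, and keeps the nondominated ones. It is a stand-in for the paper's model, not a reproduction of it: the single resource, the base-stock policy form, and every number are assumptions made purely for illustration.

```python
from itertools import product

price, hold, cap_cost, target, beta = 8.0, 1.0, 10.0, 40.0, 0.95
demands, T = [(5.0, 0.5), (15.0, 0.5)], 3      # per-period demand outcomes

def evaluate(capacity, base_stock):
    """Expected discounted profit and expected downside risk of one strategy."""
    exp_profit = exp_risk = 0.0
    for path in product(demands, repeat=T):    # enumerate all demand paths
        prob, inv, profit = 1.0, 0.0, -cap_cost * capacity
        for t, (d, pr) in enumerate(path):
            prob *= pr
            make = min(capacity, max(base_stock - inv, 0.0))
            inv += make
            sold = min(inv, d)
            inv -= sold
            profit += (beta ** t) * (price * sold - hold * inv)
        exp_profit += prob * profit
        exp_risk += prob * max(target - profit, 0.0)   # shortfall below target
    return exp_profit, exp_risk

strategies = [(c, s) for c in (10.0, 15.0) for s in (5.0, 10.0, 15.0)]
scored = {st: evaluate(*st) for st in strategies}
pareto = [st for st, (p, r) in scored.items()
          if not any(p2 >= p and r2 <= r and (p2, r2) != (p, r)
                     for p2, r2 in scored.values())]
print(sorted(pareto))
```

Even in this toy, strategies that pay for capacity they never use are dominated on both criteria, while the surviving strategies trade expected profit against downside risk; the approximation framework of section 3.2 is concerned with computing such Pareto fronts without exhaustive enumeration.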

At each stage of the decision process, upon selection and implementation of a decision, the capacity and inventory levels change, and the firm receives a profit or incurs a cost. After choosing and implementing the decision strategy, the decision maker observes the outcomes for the system performance, such as returned profit and customer service. The research problem is to find a decision strategy such that these outcomes are optimal with respect to certain decision criteria. In this work, we consider two optimality criteria: expected total discounted profit and expected downside risk. One can extend the results of this paper to cases with more than two objectives.

3.1.1. Expected Profit. The expected total discounted profit is the probability-weighted average of the time-discounted profits, which assumes decision makers are risk-neutral.

3.1.2. Downside Risk. We use expected downside risk to measure the downside variability of the profit; it denotes the expected amount by which total profit falls below a target profit.16 Selection of a risk measure deserves a research paper in itself and is outside the scope of this paper. For thorough definitions of various risk measures, such as variance, semivariance, and downside risk, we refer readers to the literature.86,87

The problem then becomes a multiobjective decision problem: find a decision strategy that optimizes multiple objectives, i.e., maximizes expected profit and minimizes expected downside risk. Because of the conflicting nature of the objectives, e.g., decisions yielding higher expected profit may lead to higher risk exposure, one must inevitably make tradeoffs among the competing objectives. The goal of multiobjective optimization is to find a set of Pareto optimal solutions, which are optimal in the sense that no other solutions in the search space are superior to them when all of the objectives are considered.88

3.2. Approximation Solution Framework.
We propose a real-time decision scheme that assists decision making in a changing and stochastic decision context. The decision scheme updates and solves the models based on currently available information and recommends only the first stage(s) decisions. We develop an approximation architecture to find the Pareto optimal first stage(s) decisions. It decomposes the decision problem in time into several linked subproblems based on their distinctive characteristics. We present an iterative solution process that solves these subproblems to compute and propagate the Pareto optimal fronts and corresponding policies backward in time, based on the principle of optimality.

3.2.1. A Real-Time Decision Scheme. In the real-time decision scheme, at each fixed decision epoch or at the occurrence of an event, we use current and historical measurements to build a model of the remaining future. We then select a sequence of decisions or policies to optimize relevant objectives while satisfying certain constraints. We implement the decisions for the first few periods and repeat the solution process with updated system information at the next decision epoch. This decision scheme is essential for decision making in a changing and uncertain context, especially when information is incomplete, as only the decision "now and here" is most relevant and reliable. The decision model and solution structure should also depend on problem-specific requirements. For a strategic planning

Ind. Eng. Chem. Res., Vol. 44, No. 8, 2005 2411

Table 1. Summary of the Characteristics of Different Subhorizons

              time frame   information knowledge                model detail   decision epoch   update frequency
subhorizon 1  0-1 year     full knowledge of the distributions  detailed       every month      online, monthly
subhorizon 2  1-5 years    low information level                aggregated     every year       offline, yearly
subhorizon 3  5-10 years   little or no knowledge               crude/fixed    once             offline, yearly
problem, e.g., capacity expansion, one should solve the optimization to find the optimal first decision each time a decision is needed, e.g., once a year. For operational planning problems where one has to make decisions on a real-time basis, e.g., inventory control, the computational requirement of updating and re-solving the model at each decision epoch could be too expensive to be feasible. Therefore, it may be more practical, from a computation and implementation point of view, to find optimal or suboptimal control policies, e.g., order-up-to policies, for the first several periods. The research problem then is to develop an approximation architecture that finds the optimal first stage(s) decisions or policies with reasonable computational effort.

3.2.2. An Approximation Architecture. The approximation architecture involves two major stages: decomposing the problem in time into subproblems and solving the subproblems iteratively to find the optimal first decisions or policies. At any given point in time, the decision maker faces an uncertain future, during which information at different stages exhibits distinctive characteristics. The detail of the information, as well as its impact on current decisions, diminishes further into the future. We therefore propose to decompose the planning horizon into several subhorizons, each characterized by a different level of information detail, frequency of decisions, and length of time periods. For instance, as Table 1 shows, one can decompose a planning horizon of 10 years into three subhorizons. Using a consistent partitioning scheme of the planning horizon, we decompose the decision problem into three corresponding subproblems. For example, the decision subproblem in the first subhorizon is a detailed model that coordinates the yearly capacity decisions and monthly production decisions.
The decision model in the second subhorizon could be a capacity planning problem with aggregated production planning.5 The decision model for the last subhorizon can be a simplified capacity planning model based on deterministic demands. We formulate the subproblems as either deterministic mathematical programs or optimal control problems and solve these decision models sequentially backward in time, propagating the Pareto optimal frontiers.6 However, this decomposition approach must tackle the numerical challenges that arise in linking models in neighboring subhorizons. In particular, it must overcome two difficulties: (1) one does not know in advance which states could be reached at the boundaries, and (2) a continuous state space requires discretization and/or interpolation. We propose an iterative solution process, consisting of the following steps, to overcome the difficulty at the boundaries: (1) Solve an approximate problem for the current subhorizon, e.g., assuming a deterministic future. The purpose of this run is twofold: to propose potential states to the subsequent problem and to guess the solutions of the detailed model. (2) Based on the states proposed by the approximate ancestor problem, discretize and partition the state space with the improved knowledge of the range and density of the state distributions; refine and/or expand the "look-up" table accordingly. (3) Solve a detailed problem for the current subhorizon, incorporating the updated look-up table at the boundary, to compute the Pareto optimal objectives and corresponding solutions. We repeat the above steps until certain stopping criteria are met, for example, when the proposed states are already at or close to existing states in the look-up table. This approximation scheme, which refines the approximation, e.g., discretization and aggregation, during the course of the solution, resembles a learning process. It is especially useful for a complex system about which we have limited a priori knowledge. By continuously evaluating and refining the Pareto optimal objectives-to-go at each future state, we obtain an increasingly accurate approximation of the future problem, which allows us to find an optimal or near optimal "now and here" decision efficiently.

3.2.3. Solution Method Selection. We consider several candidate solution approaches for the problems in each subhorizon, for example, (1) multistage stochastic programming, (2) dynamic programming recursion, and (3) simulation-based optimization. The specific nature and requirements of each problem should determine which method is preferable. This paper does not attempt to advocate any single solution method but rather proposes a tailored approach that combines different methods.
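Returning to the iterative process of section 3.2.2, its propose-forward/propagate-backward structure can be sketched under strong simplifying assumptions: a tiny discrete set of boundary states and invented profit/risk values stand in for the actual subproblem solves, and solve_future_subproblem is a hypothetical placeholder for solving the downstream subhorizon model:

```python
def dominated(a, b):
    """True if (profit, risk) pair a is dominated by b
    (profit maximized, risk minimized)."""
    return b[0] >= a[0] and b[1] <= a[1] and b != a

def pareto(points):
    return sorted(p for p in points if not any(dominated(p, q) for q in points))

def solve_future_subproblem(state):
    # Stand-in for the downstream subhorizon model: hypothetical
    # (profit-to-go, risk-to-go) candidates per boundary state.
    table = {0: [(100.0, 5.0)],
             1: [(160.0, 12.0), (140.0, 8.0)],
             2: [(200.0, 25.0), (170.0, 15.0)]}
    return pareto(table[state])

lookup = {}   # look-up table: boundary state -> Pareto objectives-to-go
# Immediate (profit, risk) of the first-subhorizon decision reaching each state.
first_stage = {0: (10.0, 1.0), 1: (5.0, 2.0), 2: (-5.0, 4.0)}

for _ in range(3):        # iterate until proposed states are already tabulated
    # (1) An approximate run proposes candidate boundary states.
    proposed = [0, 1, 2]
    new = [s for s in proposed if s not in lookup]
    if not new:
        break
    # (2) Refine the look-up table at the newly proposed states.
    for s in new:
        lookup[s] = solve_future_subproblem(s)
    # (3) Detailed first-subhorizon problem: combine immediate and
    #     future objectives, then keep the nondominated pairs.
    combined = [(first_stage[s][0] + p, first_stage[s][1] + r)
                for s in lookup for (p, r) in lookup[s]]
    front = pareto(combined)

print(front)
# → [(110.0, 6.0), (145.0, 10.0), (165.0, 14.0), (195.0, 29.0)]
```

In the actual architecture the proposed states come from solving an approximate (e.g., deterministic) model rather than being enumerated, and each look-up entry requires a full downstream solve, which is why reusing and refining the table across iterations pays off.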
We refer readers to another paper that specifically addresses the equivalence of and differences between stochastic optimal control and multistage stochastic programming.6 For example, we believe that dynamic programming is less advantageous for this particular problem than the other two methods because of the large number of states and small number of periods in each subhorizon. The selection of solution methods for the first two subhorizons can follow similar guidelines: (1) Subhorizon 1. If the purpose of the first problem is to find the first stage decision, a mathematical programming approach is probably preferred, as it can guarantee the optimality of the first decision. However, if the purpose is to find the operational decisions, a simulation-based approach works better because it is able to find a sequence of operating policies. (2) Subhorizon 2. One can efficiently solve, for each initial state, a mathematical program built upon a small scenario tree over a small number of periods. Compared to stochastic programming, a simulation-based approach is less justified here because of the small number of scenarios and large number of states.

3.3. Results and Discussion. We apply the approximation architecture proposed in the previous section to solve this problem, with the primary goal of finding the optimal first stage decisions. We construct and solve a multiobjective optimization problem for each beginning state in the second subhorizon to find the maximal expected profit-to-go for a given risk level. We then build a scenario-based mixed integer linear program for the first year to search for the optimal initial capacity level

and productions for each quarter. We find 50 points in the Pareto optimal set using the epsilon constraint method, as Figure 3 shows.6

Figure 3. Pareto optimal frontier for the example problem.

The Pareto curve consists of three isolated segments, each of which represents an alternative option for the initial capacity decision. Different points on the same segment correspond to the same first stage decision but different future decisions. Table 2 summarizes the capacity decision and the range of objective values for each option.

Table 2. Decision and Objectives of Different Options

          capacity decision        expected profit   expected downside risk
option 1  one large                487.1-714.9       23.1-34.8
option 2  one large, one medium    854.8-855.2       35.4-39.0
option 3  two large                905               50.6

After generating the Pareto optimal set, decision makers make the final selection among these Pareto optimal solutions based on their preferences. As can be seen, option 2 leads to 30% less risk than option 3 with only a 5.5% deterioration in expected profit. If decision makers are risk averse, i.e., willing to sacrifice expected return in order to reduce risk, they are likely to prefer option 2 to option 3. This example also illustrates the importance of formulating the problem as a multiobjective optimization in the first place; otherwise, one would find only the single point corresponding to option 3, which is only one extreme point of the complete solution space that should be of concern to the decision maker. A single objective optimization would likely find a solution that maximizes the expected profit but results in an unacceptable risk exposure.

We also combine different approaches to exploit fully their synergy based on the specific nature of the problems. We employ a simulation-based optimization strategy5 to solve the subproblem for the first year. Assuming that the company manages its production-inventory system using a base-stock policy, the simulation-based optimization finds the operating parameters, including the number and size of equipment to install and the base-stock inventory levels for each quarter. We utilize a multiobjective evolutionary algorithm, the elitist nondominated sorting genetic algorithm developed by Deb et al.,89 to find a diverse set of Pareto optimal solutions in a single run. Figure 4 shows the population of 50 solutions at generations 100, 200, and 300. The solutions converge to a nondominated frontier after about 300 generations and maintain a widely spread solution set.6

Figure 4. Convergence to the Pareto optimal frontier.

We compare the Pareto optimal frontiers found by multistage stochastic programming and by simulation-based optimization, respectively (Figure 5).

Figure 5. Comparison between two solution approaches.

Both
approaches find Pareto optimal frontiers with the same shape, i.e., three segments with different first stage decisions. The simulation-based approach restricts the search space to solutions of a certain form, i.e., base-stock policies. Therefore, its solutions are suboptimal in general and slightly inferior to those from a stochastic program. Nevertheless, the simulation-based approach, which often requires far less computation, finds the same Pareto optimal first stage decisions and a consistent shape of the Pareto frontier, the primary goal of a multiobjective decision problem. In addition, we find simulation-based optimization considerably more efficient in solving the first subproblem, especially when the problem involves a large number of periods and scenarios. Further comparison between the approaches will require additional theoretical developments as well as numerical experiments.6

4. Conclusions

We consider in this paper a general class of problems with the following characteristics: (1) decision making in the presence of a changing and uncertain environment, (2) sequential decisions at different times and at different levels, and (3) multiple conflicting objectives.


The class of problems, which we refer to as multiobjective decision processes under uncertainty, has enormous application in various fields. At the same time, because of the tremendous complexity and prohibitive computations involved in solving this class of problems, it has drawn increasing attention from the research community for theoretical, algorithmic, and computational developments. Our contribution in this paper is twofold: a review of the existing literature on applications and solutions of this broad class of problems and the development of a general framework for formulating and approximately solving the problems. Specifically, we have developed a morphological scheme to classify and review the relevant literature along the three dimensions of application areas, problem formulations, and solution strategies. Based on an understanding of the existing solution approaches and the underlying problem, we develop a general approximation framework to find the optimal first stage(s) decisions or policies efficiently. We propose an iterative solution process that passes potential states forward and propagates objectives backward. We demonstrate the solution procedure and computational efficiency of the approximation architecture on an example problem of coordinated capacity planning and inventory control. We show that the approximation scheme is able to find Pareto optimal first stage solutions with modest computational requirements. We also select and compare different solution approaches, i.e., multistage stochastic programming and simulation-based optimization, for solving the subproblems. Because of the vast range of potential applications of, and the tremendous complexity involved in, this class of problems, we expect many more developments in practical applications, model formulations, and solution strategies.

Literature Cited

(1) Kall, P.; Wallace, S. W. Stochastic Programming; John Wiley & Sons: New York, 1994.
(2) Birge, J. R.; Louveaux, F. Introduction to Stochastic Programming; Springer: New York, 1997. (3) Birge, J. R. Current Trends in Stochastic Programming Computation and Applications; Technical Report; Dept of Industrial and Operations Engineering, University of Michigan: Ann Arbor, MI, 1995. (4) Cheng, L.; Subrahmanian, E.; Westerberg, A. W. Design and Planning under Uncertainty: Issues on Problem Formulations and Solutions. Comput. Chem. Eng. 2003, 27, 781-801. (5) Cheng, L.; Subrahmanian, E.; Westerberg, A. W. MultiObjective Decisions on Capacity Planning and Inventory Control. Ind. Eng. Chem. Res. 2004, 43, 2192-2208. (6) Cheng, L.; Subrahmanian, E.; Westerberg, A. W. A Comparison of Optimal Control and Stochastic Programming from a Formulation and Computation Perspective. Comput. Chem. Eng. 2004, 29, 149-164. (7) Cheng, L.; Duran, M. A. Logistics for World Wide Crude Oil Transportation based on Discrete Event Simulation and Optimal Control. Foundations of Computer-Aided Process Operations 2003 Conference, January, 2003. Comput. Chem. Eng. 2004, 28, 897-911. (8) Cheng, L.; Duran, M. A. Just-in-Time Refinery Operation Strategy: A Hybrid Make-to-Order and Make-to-Stock System; Technical Report; ExxonMobil Research and Engineering Company, in preparation for publication, 2004. (9) Scarf, H. The Optimality of (s, S) Policies in the Dynamic Inventory Problem. In Mathematical Methods in the Social Sciences; Stanford University Press: Palo Alto, CA, 1960; pp 196-202. (10) Porteus, E. L. On the Optimality of Generalized (s, S) Policies. Manage. Sci. 1971, 17 (7), 411-426.

(11) DeCroix, G. A.; Arreola-Risa, A. Optimal Production and Inventory Policy for Multiple Products under Resource Constraints. Manage. Sci. 1998, 44 (7), 950-961. (12) Glasserman, P.; Tayur, S. Sensitivity Analysis for Base-stock Levels in Multiechelon Production-inventory Systems. Manage. Sci. 1995, 41 (2), 263-281. (13) Kapuscinski, R.; Tayur, S. A Capacitated Production-Inventory Model with Periodic Demand. Oper. Res. 1998, 46 (6), 899-911. (14) Bashyam, S.; Fu, M. C.; Kaku, B. K. Application of Perturbation Analysis to Multiproduct Capacitated Production-Inventory Control. Int. J. Operations Quantitative Manage., submitted. (15) Braun, M. W.; Rivera, D. E.; Flores, M. E.; Carlyle, W. M.; Kempf, K. G. A Model Predictive Control Framework for Robust Management of Multi-Product, Multi-Echelon Demand Networks. Annu. Rev. Control 2003, 27 (2), 229-245. (16) Eppen, G. D.; Martin, R. K.; Schrage, L. A Scenario Approach to Capacity Planning. Oper. Res. 1989, 37 (4), 517-527. (17) Barahona, F.; Bermon, S.; Gunluk, O.; Hood, S. Robust Capacity Planning in Semiconductor Manufacturing; IBM report RC22196, 2001. (18) Ahmed, S.; Sahinidis, N. V. An Approximation Scheme for Stochastic Integer Programs Arising in Capacity Expansion. Oper. Res. 2003, 51, 461-471. (19) Ahmed, S.; King, A. J.; Parija, G. A Multi-Stage Stochastic Integer Programming Approach for Capacity Expansion under Uncertainty. J. Global Optimization 2002, 26, 3-24. (20) Manne, A. S. Capacity Expansion and Probabilistic Growth. Econometrica 1961, 29 (4), 632-649. (21) Bean, J. C.; Higle, J. L.; Smith, R. L. Capacity Expansion under Stochastic Demands. Oper. Res. 1992, 40 (2), S210-S216. (22) Davis, M. H. A.; Dempster, M. A. H.; Sethi, S. P.; Vermes, D. Optimal Capacity Expansion under Uncertainty. Adv. Appl. Prob. 1987, 19, 156-176. (23) Eberly, J. C.; Van Mieghem, J. A. Multi-factor Dynamic Investment under Uncertainty. J. Economic Theory 1997, 75, 345-387. (24) Harrison, J. M.; Van Mieghem, J. A.
Multi-resource Investment Strategies: Operational Hedging under Demand Uncertainty. Eur. J. Oper. Res. 1999, 113, 17-29. (25) Angelus, A.; Porteus, E. L.; Wood, S. C. Optimal Sizing and Timing of Capacity Expansions with Implications for Modular Semiconductor Wafer Fabs; Research Paper No. 1479R; Graduate School of Business, Stanford University: Stanford, CA, 1999. (26) Rajagopalan, S.; Singh, M. R.; Morton, T. E. Capacity Expansion and Replacement in Growing Markets with Uncertain Technological Breakthroughs. Manage. Sci. 1998, 44 (1), 12-30. (27) Angelus, A.; Porteus, E. L. Simultaneous Production and Capacity Management under Stochastic Demand for Produced to Stock Goods; Research Paper No. 1419R; Graduate School of Business, Stanford University: Stanford, CA, 2000. (28) Rajagopalan, S.; Swaminathan, J. M. A Coordinated Production Planning Model with Capacity Expansion and Inventory Management. Manage. Sci. 2001, 47 (11), 1562-1580. (29) Bradley, J. R.; Glynn, P. W. Managing Capacity and Inventory Jointly in Manufacturing Systems. Manage. Sci. 2002, 48 (2), 273-288. (30) Schmidt, C. W.; Grossmann, I. E. Optimization Models for the Scheduling of Testing Tasks in New Product Development. Ind. Eng. Chem. Res. 1996, 35, 3498-3510. (31) Jain, V.; Grossmann, I. E. Resource-Constrained Scheduling of Tests in New Product Development. Ind. Eng. Chem. Res. 1999, 38, 3013-3026. (32) Maravelias, C. T.; Grossmann, I. E. Simultaneous Planning for New Product Development and Batch Manufacturing Facilities. Ind. Eng. Chem. Res. 2001, 40, 6147-6164. (33) Subramanian, D.; Pekny, J. F.; Reklaitis, G. V. A Simulation-Optimization Framework for Addressing Combinatorial and Stochastic Aspects of an R&D Pipeline Management Problem. Comput. Chem. Eng. 2000, 24, 1005-1011. (34) Dantzig, G. B.; Infanger, G. Multi-stage stochastic linear programs for portfolio optimization. Ann. Oper. Res. 1993, 45, 59-76. (35) Kusy, M. I.; Ziemba, W. T. A bank asset and liability management model. Oper. Res. 1986, 34, 356-376.


(36) Carino, D. R.; Kent, T.; Meyers, D. H.; Stacy, C.; Sylvanus, M.; Turner, A. L.; Watanabe, K.; Ziemba, W. T. The Russell-Yasuda Kasai Model: An Asset/Liability Model for a Japanese Insurance Company using Multi-stage Stochastic Programming. Interfaces 1994, 24 (1), 29-49. (37) Hoyland, K. Asset liability management for a life insurance company: A stochastic programming approach. Ph.D. Dissertation, Norwegian University of Science and Technology, Trondheim, Norway, 1998. (38) Mulvey, J. M.; Gould, G.; Morgan, C. An Asset and Liability Management System for Towers Perrin-Tillinghast. Interfaces 2000, 30 (1), 96-114. (39) Dert, C. Asset liability management for pension funds: A multistage chance constrained programming approach. Ph.D. Thesis, Erasmus University Rotterdam, Department of Mathematics, Rotterdam, The Netherlands. (40) Consigli, G.; Dempster, M. A. H. Dynamic Stochastic Programming for Asset-liability Management. Ann. Oper. Res. 1998, 81, 131-161. (41) Zenios, S.; Homer, M.; McKendall, R.; Vassiadou-Zeniou, C. Dynamic Models for Fixed-Income Portfolio Management Under Uncertainty. J. Econ. Dyn. Control 1998, 22 (10), 1517-1541. (42) Ziemba, W. T.; Mulvey, J. M. Worldwide Asset and Liability Modeling; Cambridge University Press: New York, 1998. (43) Ferguson, A.; Dantzig, G. B. The Allocation of Aircraft to Routes: An Example of Linear Programming under Uncertain Demands. Manage. Sci. 1956, 3, 45-73. (44) Kleywegt, A. J.; Nori, V. S.; Savelsbergh, M. W. P. The Stochastic Inventory Routing Problem with Direct Deliveries. Transport. Sci. 2002, 36, 94-118. (45) Laporte, G.; Louveaux, F. V.; Mercure, H. The Vehicle Routing Problem with Stochastic Travel Times. Transport. Sci. 1992, 26, 161-170. (46) Kenyon, A.; Morton, D. P. A Survey on Stochastic Location and Routing Problems. Central Eur. J. Oper. Res. 2002, 9, 277-328. (47) Powell, W. B. A Comparative Review of Alternative Algorithms for the Dynamic Vehicle Allocation Problem.
Vehicle Routing: Methods and Studies; Golden, B., Assad, A., Eds.; North-Holland: Amsterdam, 1988; pp 249-291. (48) Takriti, S.; Birge, J.; Long, E. A Stochastic Model for the Unit Commitment Problem. IEEE Trans. Power Syst. 1996, 11 (3), 1497-1508. (49) Pereira, M. V. F.; Pinto, L. M. V. G. Multi-stage Stochastic Optimization Applied to Energy Planning. Math. Programming 1991, 52, 359-375. (50) Nowak, M. P.; Römisch, W. Stochastic Lagrangian Relaxation Applied to Power Scheduling in a Hydro-Thermal System under Uncertainty. Ann. Oper. Res. 2001, 100, 251-272. (51) Gollmer, R.; Nowak, M. P.; Römisch, W.; Schultz, R. Unit Commitment in Power Generation: A Basic Model and Some Extensions. Ann. Oper. Res. 2000, 96, 167-189. (52) Gröwe-Kuska, N.; Römisch, W. Stochastic Unit Commitment in Hydro-thermal Power Production Planning. Preprint 023, Institut für Mathematik, Humboldt-Universität Berlin, 2002; also in Applications of Stochastic Programming; Wallace, S. W., Ziemba, W. T., Eds.; MPS-SIAM Series in Optimization. (53) Dupacova, J.; Gaivoronski, A.; Kos, Z.; Szantai, T. Stochastic programming in water management: A case study and a comparison of solution techniques. Eur. J. Oper. Res. 1991, 52, 28-44. (54) Lasdon, L. S.; Watkins, D.; McKinney, D.; Nielsen, S.; Martin, Q. A Scenario-Based Stochastic Programming Model for Water Supplies from the Highland Lakes. Int. Trans. Oper. Res. 2000, 7 (3), 211-230. (55) Pistikopoulos, E. N.; Ierapetritou, M. G. A Novel Approach for Optimal Process Design under Uncertainty. Comput. Chem. Eng. 1995, 19, 1089-1110. (56) Schultz, R.; Stougie, L.; van der Vlerk, M. H. Two-stage Stochastic Integer Programming: A Survey. Statistica Neerlandica 1996, 50 (3), 404-416. (57) Caroe, C. C.; Schultz, R. A Two-stage Program for Unit Commitment under Uncertainty in a Hydro-Thermal Power System. Preprint SC 98-11, Konrad-Zuse-Zentrum für Informationstechnik, Berlin, 1998. (58) Römisch, W.; Schultz, R.
Multi-stage Stochastic Integer Programs: An Introduction. In Online Optimization of Large Scale Systems; Grötschel, M., Krumke, S. O., Rambau, J., Eds.; Springer-Verlag: Berlin, 2001; pp 579-598.

(59) Klein Haneveld, W. K.; van der Vlerk, M. H. Stochastic Integer Programming: General Models and Algorithms. Ann. Oper. Res. 1999, 85, 39-57. (60) Lokketangen, A.; Woodruff, D. L. Progressive Hedging and Tabu Search Applied to Mixed Integer (0, 1) Multi-stage Stochastic Programming. J. Heuristics 1996, 2, 111-123. (61) Hoyland, K.; Wallace, S. W. Generating Scenario Trees for Multi-stage Decision Problems. Manage. Sci. 2001, 47 (2), 295-307. (62) Shapiro, A.; Homem-de-Mello, T. A Simulation-based Approach to Two-stage Stochastic Programming with Recourse. Math. Programming 1998, 81, 301-325. (63) Shapiro, A.; Homem-de-Mello, T. On the Rate of Convergence of Optimal Solutions of Monte Carlo Approximations of Stochastic Programs. SIAM J. Optimization 2000, 11, 70-86. (64) Dupacova, J. Stochastic Programming: Approximation via Scenarios; 3rd International Conference on Approximation, Puebla, 1995. (65) Bertsekas, D. P. Dynamic Programming and Optimal Control, Vols. I and II; Athena Scientific: Belmont, MA, 2000. (66) Puterman, M. L. Markov Decision Processes: Discrete Stochastic Dynamic Programming; John Wiley & Sons: New York, 1994. (67) Bellman, R. E. Dynamic Programming; Princeton University Press: Princeton, NJ, 1957. (68) Li, D.; Haimes, Y. Y. Multi-objective Dynamic Programming: The State of the Art. Control-Theory Adv. Technol. 1989, 5 (4), 471-483. (69) Li, D. Multiple Objectives and Non-Separability in Stochastic Dynamic Programming. Int. J. Syst. Sci. 1990, 21 (5), 933-950. (70) Bertsekas, D. P.; Tsitsiklis, J. N. Neuro-Dynamic Programming; Athena Scientific: Belmont, MA, 1996. (71) Tsitsiklis, J. N.; Van Roy, B. Feature-Based Method for Large Scale Dynamic Programming. Machine Learning 1996, 22, 59-94. (72) Tesauro, G. J. Practical Issues in Temporal-Difference Learning. Machine Learning 1992, 8, 257-277. (73) Gordon, G. J. Approximate Solutions to Markov Decision Processes. Ph.D.
Thesis, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, 1999. (74) Marbach, P. Simulation-Based Methods for Markov Decision Processes. Ph.D. Thesis, Department of Electrical Engineering and Computer Science, MIT, Cambridge, MA, 1998. (75) Dupacova, J.; Consigli, G.; Wallace, S. W. Generating Scenarios for Multi-stage Stochastic Programs. Ann. Oper. Res. 2000, 100, 25-53. (76) Fu, M. C. Sample Path Derivatives for (s, S) Inventory Systems. Oper. Res. 1994, 42 (2), 351-364. (77) Ruszczyński, A. Decomposition Methods in Stochastic Programming. Math. Programming 1997, 79, 333-353. (78) Van Slyke, R.; Wets, R. J. B. L-shaped linear programs with applications to optimal control and stochastic programming. SIAM J. Appl. Math. 1969, 17 (4), 638-663. (79) Birge, J. R. Decomposition and Partitioning Methods for Multi-stage Stochastic Linear Programs. Oper. Res. 1985, 33, 989-1007. (80) Gassman, H. I. MSLiP: A Computer Code for the Multistage Stochastic Linear Programming Problem. Math. Programming 1990, 47, 407-423. (81) Ruszczyński, A. Parallel Decomposition of Multi-stage Stochastic Programming Problems. Math. Programming 1993, 58 (2), 201-228. (82) Rosa, C. H.; Ruszczyński, A. On Augmented Lagrangian Decomposition Methods for Multi-stage Stochastic Programs. Ann. Oper. Res. 1996, 64, 289-309. (83) Mulvey, J. M.; Ruszczyński, A. A New Scenario Decomposition Method for Large Scale Stochastic Optimization. Oper. Res. 1995, 43, 477-490. (84) Caroe, C. C.; Schultz, R. Dual Decomposition in Stochastic Integer Programming. Oper. Res. Lett. 1999, 24, 37-45. (85) Rockafellar, R. T.; Wets, R. J.-B. Scenario and Policy Aggregation in Optimization under Uncertainty. Math. Oper. Res. 1991, 16, 119-147. (86) Markowitz, H. M. Portfolio Selection, 1st ed.; John Wiley and Sons: New York, 1959.

(87) Sortino, F. A.; Van Der Meer, R. Downside Risk. J. Portfolio Manage. 1991, 17 (4), 27-32. (88) Chankong, V.; Haimes, Y. Y. Multi-objective Decision Making: Theory and Methodology; North-Holland Series in System Science and Engineering; North-Holland: Amsterdam, 1983. (89) Deb, K.; Agrawal, S.; Pratap, A.; Meyarivan, T. A Fast Elitist Non-Dominated Sorting Genetic Algorithm for Multi-Objective Optimization: NSGA-II. In Proceedings of the Parallel Problem Solving from Nature VI; Springer: Berlin, 2000; pp 849-858.

Received for review May 7, 2004 Revised manuscript received November 29, 2004 Accepted December 2, 2004 IE049622+