
Ind. Eng. Chem. Res. 2007, 46, 7758-7779

Dealing with Risk in Development Projects for Chemical Products and Processes

Gerald Bode,*,† Reinhard Schomäcker,† Konrad Hungerbühler,‡ and Gregory J. McRae§

Department of Chemical Engineering, Technische Universität Berlin, Sekretariat TC 8, Strasse des 17. Juni 124, D-10623 Berlin, Germany; Safety and Environmental Technology Group, Institute for Chemical and Bioengineering, Swiss Federal Institute of Technology (ETH), CH-8093 Zürich, Switzerland; and Department of Chemical Engineering, The Massachusetts Institute of Technology, Cambridge, Massachusetts 02139

Decision-making and optimization approaches in chemical product/process design often incorporate probability distributions to express uncertainty about the underlying data. We will show in this paper that probabilities are subjective, that they are caused by the decision makers' lack of confidence in the underlying knowledge, and that the shape of the distribution is affected by psychological input factors. Furthermore, distributions do not necessarily express the range of possible values but rather the values that are perceived to be possible. Decisions based on distributions can, therefore, be wrong. Screening the literature, however, reveals that the origin of risk and uncertainty is not fully understood and should be discussed. To close this gap, we explain where risk, distributions, and probabilities come from. On the basis of these insights, we provide a new optimization framework that describes a process of systematically identifying the optimum alternative by stepwise resolving uncertainty. Finally, we apply the optimization methodology to a case study.

1. Introduction

Companies must take the risk of developing new and innovative products and processes to stay competitive. Managers, scientists, and engineers who work in development projects, however, face a high degree of uncertainty. Therefore, it is important to understand where risk and uncertainty come from and what development teams have to do in order to reduce uncertainty. Research addresses the challenges of new product/process development (NPD) in a vast number of publications, and more and more of these concepts use distributions of values and/or probabilities to express uncertainty. We will show in this paper that dealing with probabilities is a psychological rather than a mathematical challenge: the theoretical concepts help explain how decision makers decide, but applying probabilities does not ensure that the decisions are correct. The first goal of this paper is, therefore, to provide a thorough understanding of the origin of risk, uncertainty, and variability, as well as of the challenges of efficiently reducing uncertainty. On the basis of these findings, we provide a new optimization approach, which describes a way to systematically analyze the decision problem and then to efficiently resolve uncertainty. The second goal is to explain the current theoretical approaches to decision making and optimization with regard to economic and ecological goals. This provides the reader with sufficient knowledge to understand all relevant ideas, and it allows us to discuss similarities and differences between the concepts and to explain how the different points of view can be integrated. Since the risk definition developed in this paper differs from the usual understanding, we also explain how it affects the underlying theoretical concepts. Finally, we show in a case study how the new optimization approach can be used in real-life problems.

2. Subjective Risk and Probabilities

Before developing a framework that helps development teams deal with risk and uncertainty in projects, we first need to explain where risk, uncertainty, and variability come from. To do this, we look at the basic principles behind uncertainty from different perspectives, and we draw conclusions from these insights.

2.1. Subjective and Objective Probabilities. Decision-making and optimization approaches use probability distributions to express the uncertainty involved in the decision problem. A look into the literature reveals that statistics theory1 makes a distinction between two major concepts: objective and subjective probabilities. The frequentistic probability definition is a typical example of the objective group: the realization probability of an outcome A, p_f(A), is defined as the limit of the number of actual realizations of A, n_f^A, divided by the total number of experiments, n_f:

* To whom correspondence should be addressed. Phone: +49-(0)30-314-25644. Fax: +49-(0)30-314-21595. E-mail: Gerald.Bode@alumni.tu-berlin.de. † Technische Universität Berlin. ‡ Swiss Federal Institute of Technology (ETH). § The Massachusetts Institute of Technology.

p_f(A) = \lim_{n_f \to \infty} \frac{n_f^A}{n_f}    (1)
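As a numerical illustration of eq 1 (our own sketch, not part of the original study; the "true" frequency of 0.3 is an assumed value), the following Python snippet estimates a realization probability as a running relative frequency and shows that the estimate stabilizes only as the number of experiments grows:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
p_true = 0.3            # assumed "true" frequency of outcome A (illustrative only)
n_f = 10_000            # total number of experiments
outcomes = rng.random(n_f) < p_true          # True where outcome A is realized
n_fA = np.cumsum(outcomes)                   # running count of realizations of A
p_f = n_fA / np.arange(1, n_f + 1)           # running relative frequency, eq 1

for n in (10, 100, 1_000, 10_000):
    print(f"n_f = {n:>6d}   p_f(A) = {p_f[n - 1]:.3f}")
# The estimate approaches p_true only in the limit of many experiments; for small
# n_f it can deviate substantially, which is one reason why measured frequencies
# remain approximations of nature's behavior rather than objective probabilities.
```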

The Bayesian approach is a typical example of a subjective probability definition. The probability p_B(θ|Information) that an outcome θ is realized is subjectively set, and it depends on the observer's experiences and knowledge. This concept is often used to model probability-update processes.2 These processes describe how distributions change when new information is received, and they are expressed mathematically by iteratively applying the Bayes theorem,

p_B(\theta \mid \mathrm{Information}) = \frac{p_B(\mathrm{Information} \mid \theta)\, p(\theta)}{\int_{-\infty}^{\infty} p_B(\mathrm{Information} \mid \theta)\, p(\theta)\, \mathrm{d}\theta}    (2)

where p(θ) denotes the probability that the outcome θ is realized, while p_B(Information|θ) expresses the probability that the information would be realized if the "true state of nature" were θ (the likelihood). In this update process, the decision maker starts by setting initial probability distributions. The next steps are collecting new information, assessing the resulting situation,


and adjusting the distribution based on the observer’s assessment of the information. This process is started all over again when new information becomes apparent. The frequentistic and Bayesian approaches describe ways to determine probabilities. Another way to understand how people deal with uncertainties is the possibility theory.3 People are assumed to determine the degree of possibility for each possible value, based on uncertain information. In this sense, a value that is perceived to be highly possible receives a value of 1, while an unlikely value is assigned a value of 0. This logic can be used to determine possibility distributions of variables. 2.2. Experimentation and Subjective Probability Distributions. Scientists perform experiments to derive distributions of measurement values, which are then used to calculate probability distributions. This is basically what’s described in the “objective” frequentistic probability definition. However, using measurement data does not mean that the probabilities are objective: measurement data may differ between scientists, data received by other people may not be trusted, and measured data may be rejected because of a “bad gut feeling”. Distributions derived from experimentation are, therefore, subjective, and they depend on the scientist’s knowledge and experience. This is exactly what the Bayesian definition says. The frequentistic approach on defining probabilities is, therefore, a special case of the Bayesian understanding, in which scientists perform experiments. Although science aims at understanding what nature objectively does, objective probabilities do not exist. Possibility theory may be understood as an extension to the Bayesian probability definitions, because it takes all kinds of information into account, and not just measurements, to describe what an observer perceives to be possible. But since frequentistic and Bayesian probabilities express the values that are subjectively perceived to be possible, possibilities and probabilities describe the same psychological phenomenon. Dealing with distributions and probabilities is, therefore, much more a psychological than a mathematical problem. Anyway, scientific working helps reduce the impact of psychological effects: developing a theoretical concept, measuring data, assessing if the concept explains the measurement results, and adapting the theory to the new insights has two advantages over “unqualified guessing”: it provides scientific working with a strong knowledge-acquisition and quality-control mechanism, and it helps increase the chance to derive reasonable values and distributions. Distributions of measurement values in frequentistic experiments are due to lack of knowledge about uncertain known and unknown underlying factors: if the scientist is not able to keep all input factors constant, the measured value seems to fluctuate randomly. Therefore, single values out of the distribution cannot systematically be realized unless the scientists understand what factors are responsible for the fluctuations and how they can be controlled. If this would be possible, there would be no uncertainty, no distribution, no surprises, and it would simply be possible to choose the best alternative given the known realization. But if uncertainty is not completely resolved, scientists may know the “true” distribution of values, but it is impossible to forecast which of these values will be realized. 
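The possibility-theory view introduced above can be illustrated with a minimal sketch (our own, with assumed corner points): values the observer judges fully possible receive a degree of 1, values judged impossible receive 0, and intermediate values are graded linearly.

```python
import numpy as np

def trapezoidal_possibility(x, a, b, c, d):
    """Degree of possibility for value x: 0 outside [a, d], 1 inside [b, c],
    linearly graded on the flanks. The corner points are subjective choices."""
    x = np.asarray(x, dtype=float)
    rising = np.clip((x - a) / (b - a), 0.0, 1.0)
    falling = np.clip((d - x) / (d - c), 0.0, 1.0)
    return np.minimum(rising, falling)

# Assumed example: an observer judges a rate constant of 3e-3 to 4e-3 kg mol^-1 min^-1
# fully possible and anything below 2e-3 or above 5e-3 impossible.
k_values = np.array([1.5e-3, 2.5e-3, 3.5e-3, 4.5e-3, 5.5e-3])
print(trapezoidal_possibility(k_values, 2e-3, 3e-3, 4e-3, 5e-3))
# -> [0.  0.5 1.  0.5 0. ]
```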
Nature works deterministically in such a way that the true probability of realizing the true outcome is literally 1, while all other values' true realization probabilities are 0. Because the decision maker's subjective perception may be different from nature's true realization, "objective" distributions do not necessarily express the values that can be realized in reality but rather
values that are believed to be realizable. If the true realization lies outside the subjective distribution, decision makers are overconfident, and even applying sophisticated decision-making methods, such as the stochastic-dominance principle, can lead to wrong decisions. Being confident does not mean that the assessment is correct, but it helps make decisions. Confident decision makers assign high probabilities to few outcomes (narrow distribution). This allows a decision maker to decide easily between several alternatives, since the outcome of the decision seems to be clear. An insecure decision maker, by contrast, assigns relatively low probabilities to a lot of realizations (broad distribution). A decision maker who does not know which of the outcomes from the perceived distribution will be realized is much more reluctant to decide and needs a much higher incentive to make a decision. This is, for example, one of the reasons why the cost of capital increases as the investors’ uncertainty grows, and why managers are reluctant to invest in ecological impact analysis techniques. 2.3. Bayesian Information Update by using Measured Distributions. Decision makers go through Bayesian information-update processes, since they change their opinion when new and trusted information becomes apparent: the scientist collects new measurement data, assesses the information, and changes the probability distribution. But the information-update process is far less mathematical than the Bayes formula looks like, since scientists usually have more possibilities to deal with information: they delete values, ignore results, reject outliers, start the experiments from the beginning, and forget all previous results; and they use “soft”, right or wrong, information to adjust the probability distribution. Since new information is unknown before it is revealed, the Bayesian update process does not tell what a decision maker should do in a specific situation; it can just be used to simulate what decision makers would do if they would receive assumed new information. Anyway, looking on the Bayes formula helps understand what is meant by information updates. At first, the scientist starts with a personal initial a priori probability distribution based on subjective knowledge and experience. It actually does not matter, from a theoretical point of view, how the distribution looks and how it is derived: the scientist may use a single value whose realization probability is 1 or a distribution of values with a probability assigned to each value; it can furthermore be based on guessing, on earlier measurements, or on values published in literature. In a second step, the scientist receives new information such as measurement data and soft information, for example, “We made a mistake! Let us delete all previous measurements, and start the experiments from the beginning!” The more the scientist trusts the new information, the higher is its impact on the probability distribution. Trust is expressed by the likelihood probability: a scientist who is confident that the new information is correct assigns a high likelihood probability to it, and the probability distribution’s shape may change significantly. If the scientist does not trust the information, the likelihood probability is 0, and the distribution will not be affected. 
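A minimal numerical sketch of this update step, in the discrete form of eq 2, is given below; the candidate values, the prior, and the Gaussian error model expressing trust in the measurement are all assumed for illustration.

```python
import numpy as np

# Candidate values theta of an uncertain parameter and a subjective prior p(theta).
theta = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
prior = np.array([0.1, 0.2, 0.4, 0.2, 0.1])        # assumed initial belief

# New information: a measurement y = 3.8, trusted according to an assumed Gaussian
# error model with standard deviation 0.5. The likelihood p(y | theta) expresses
# how strongly the observer trusts the information for each candidate value.
y, sigma = 3.8, 0.5
likelihood = np.exp(-0.5 * ((y - theta) / sigma) ** 2)

# Discrete form of eq 2: posterior proportional to likelihood times prior.
posterior = likelihood * prior
posterior /= posterior.sum()

print(np.round(posterior, 3))
# Most of the belief mass shifts toward theta = 4. A candidate the prior already
# excluded (p = 0) would stay at 0 no matter what is measured, which is the
# conceptual problem with discrete priors discussed in the following paragraphs.
```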
The update procedure is repeated when new information becomes available: the former a posteriori distribution is used as the next a priori distribution, and the information-update sequence starts from the beginning. Even though psychological input factors affect the scientist’s probability distribution, one could, of course, use the Bayes theorem to iteratively adjust the probability distribution to new measurement results. But this may lead to conceptual problems, too, if the measurement data is discrete: all values appearing


in the a priori distribution have a positive probability, while the probabilities of achieving a value that has never been measured before is 0. Adding a new measurement result to the scientist’s data sheet may, therefore, require multiplying an a priori probability of 0 with some likelihood probability. The result is an a posteriori probability of 0: the value would not be added to the distribution. This problem can be avoided by fitting continuous distributions to the measurements, since their probabilities of deriving realizations are never 0. But these distributions contain values that have never been measured, and nobody knows for sure if they can be realized. 2.4. Limitations of Using Probabilities in Forecasts. Accidents happen or not, the oil price at a given date is $50 or not, and a certain reaction rate constant is realized or not. In this sense, nature behaves deterministically: the true probability that the true value will be realized is 1, while all other values have the true realization probability of 0. The problem is that the decision maker does not exactly know which value will be realized. This has two major implications. First, probabilities are independent from the true realization. A decision maker who is very confident to be right just takes one possible value into account whose realization probability is 1, while the probabilities of realizing other values are set to 0. Since the decision maker is not necessarily right with the assessment, this value may be different from the true realization. Even confident decision makers’ assessments can, therefore, be wrong. Since the true value will certainly be realized, and since the assigned probabilities express the decision maker’s degree of confidence, the insights from using probabilities in forecasts is limited. An analyst, for example, may assign a probability of 95% to realize an oil price of $20 at a specific day and a probability 5% to realize $50. But if the true realization that day is $50, then the true probabilities of realizing $50 and $20 are literally 100% and 0%, respectively. A probability of 95% therefore means that the analyst is very confident that $20 will be realized, and not that the true value will almost certainly be $20. What makes the situation even worse is the fact that forecasts cannot be wrong as long as the probability assigned to the true realization is not set to 0: the analyst can claim that the prediction was correct, since the true realization was not excluded from the list of possible outcomes, and that there was still a small probability of realizing this value. 2.5. Optimization under Uncertainty. Literature offers a vast number of publications about optimization problems, which, in general, include the following steps: defining the goals and modeling uncertain input variables, as well as design variables that are determined before the process is installed, and control variables that can be adjusted afterward. The next steps are to identify the feasible alternatives and, finally, to choose the option that brings the decision maker closest to the desired goal, using some sort of algorithm. A typical optimization problem under uncertainty is modeled as follows,4

\max_{d,c,u} E[O(d,c,u)]    (3)

P(E[h(d,c,u)] = 0) \geq 1 - a, \qquad P(E[g(d,c,u)] \leq 0) \geq 1 - b

where g and h denote technical and economic constraints, E(...) represents the expected value, and P(...) represents a probability, while the vectors a and b denote confidence limits with which the constraints g and h must be met. u characterizes uncertain input values, and d is the vector of design variables. The idea behind the optimization approach is to choose the design

variables d in a way that the variation of control variables c will yield the best expected result given the uncertainty in u. Developers of optimization procedures often make a distinction between state and control variables.5-7 State variables are fixed before the process is installed, while control variables can be adjusted afterward. This distinction apparently makes sense, but decision makers fix variables every time when they decide: the variable “reaction pathway” is set when the development team chooses one reaction pathway alternative, and the variable “heat input at one point of time” is defined when heating power is set by the measurement and control equipment in order to realize the optimum reaction medium temperature. Decisions about fixing “state variables” are, furthermore, not necessarily the most important ones: although a wrong reaction pathway selection can significantly affect the economic and ecological performance of the process, a wrong “heat input” selection may result in an accident, which may have even worse consequences. Finally, state variables are not fixed and can be reversed after installing the process: the process capacity can be altered, the process may be shut down, and process equipment may be exchanged. Literature offers approaches on dealing with almost every problem that scientists and engineers face during new product/ process development projects for chemical processes, e.g., optimizing different process types, such as batch,8 semibatch,9 and multiproduct batch plants,10 as well as combinations of continuous and discontinuous operations,11 continuous processes,12,13 and heat-exchanger networks.14 The underlying process models may include supply chains and packing operations,15 stock policy,10 and wastewater treatment.16 The design and control variables reach from process capacity,17 maintenance policy,18 production schedule,19 solvent selection,20 process concepts,4 over process flexibility,21 to two- and threedimensional layout planning22,23 for the process and the plant structure.24 The optimization goals usually involve one of the following: maximizing net present value (NPV)7 or profit,25 or minimizing the expected operating costs, energy consumption,26 amount of waste,27 or environmental impact.20 Some of the approaches are deterministic, others are stochastic. Another way of incorporating uncertainty into optimization problems is the use of fuzzy techniques.3,28 In normal optimization approaches, the goals and constraints are defined by strict mathematical rules or constants. The problem with this approach is that people usually communicate in a less mathematical and well-defined way, especially when information is uncertain. For example, a constraint may be formulated as “the value should lie below three, a little bit more would be OK, but not too much”. Fuzzy optimization approaches allow for such vague descriptions, and they use possibility distributions rather than probabilities. But as we have seen, there are no conceptual differences between these approaches. Furthermore, fuzzy optimization may help people express their perceived uncertainty and, thus, distributions, but otherwise they are not different from other distribution-based optimization approaches. 2.6. Stochastic-Dominance Principle. 
One possibility for distinguishing alternatives under uncertainty is the stochastic-dominance principle.1,4 The basic idea is that, if alternatives are perfectly correlated, a change in the input values' realizations would cause all alternatives' output values to make the same shift. The difference or the quotient of the alternatives' output values would stay the same, regardless of each alternative's output value uncertainty: one alternative would always be the better choice. Even if the correlation is not perfect, the output


Figure 1. Applying the stochastic-dominance principle. The distribution of the differences between the uncertain outcomes (A - B) indicates that the distributions A and B are correlated and that B will yield better outcomes in 80% of all cases. Even though uncertainty about the outcomes (e.g., cash flow) is high, the probability of making a wrong decision is low.

Figure 2. Two subprojects yielded different realizations of the second-order reaction rate constant (k). This, however, does not mean that all of the values can be realized. If the true realization, for example, would be 4.50 × 10^-3 kg mol^-1 min^-1, neither of the measured values, nor the expected value, would replicate the true realization. The major challenge of reducing uncertainty is, therefore, to find out how nature works and how the true values can be controlled.
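To translate the spread shown in Figure 2 into engineering terms, the following sketch (our own; the initial concentration and target conversion are assumed values) computes the batch reaction time implied by the two measured rate constants for a second-order reaction with equal initial reactant concentrations:

```python
# Second-order batch kinetics with equal initial concentrations c0 of both
# reactants: dX/dt = k*c0*(1 - X)^2, which integrates to t(X) = X / (k*c0*(1 - X)).
def batch_time(k, c0=1.0, X=0.90):
    """Time (min) to reach conversion X for rate constant k (kg mol^-1 min^-1);
    c0 (mol/kg) and X are assumed illustrative values, not taken from refs 29/30."""
    return X / (k * c0 * (1.0 - X))

for k in (4.03e-3, 1.34e-3):
    print(f"k = {k:.2e} kg mol^-1 min^-1:  t(90% conversion) = {batch_time(k):6.0f} min")
# The required batch time, and hence the reactor size for a given production rate,
# scales with 1/k, so the factor-of-three spread between the two subprojects maps
# directly onto a factor-of-three uncertainty in reactor sizing.
```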

value difference or quotient distributions can help identify the alternative that would be better in most of all possible cases. Figure 1 shows such a situation. Alternative B has a higher mean than alternative A. Since both alternatives’ outcomes are highly uncertain, directly choosing the better alternative is impossible. Anyway, looking on the distribution of the outcome differences (B - A) shows that the alternatives are correlated and that alternative B is superior to A in 80% of all possible cases. Even though the initial uncertainty is very high, the probability of choosing alternative B, and making a wrong decision, is low. 2.7. Limitations of Mathematically Optimizing Expectations. Developers of mathematical optimization approaches in new product/process design use distributions of input variables, and they look for the set of design and control variables that bring the process closest to the optimum, measured as the expected value of the objective distribution. The drawback of this approach is, at first, that nature will realize the true and not the expected value. Second, in order to successfully finish the project, it is necessary to optimize outcomes instead of expectations. Because decision makers (and computers) do not know the true state before uncertainty is resolved, it is impossible to

choose the set of variable settings that yields the optimum result. As we have shown earlier, uncertainty resolution means to identify the factors that are responsible for apparently random fluctuations and to find out how nature behaves. This question, however, usually cannot be solved by mathematically optimizing the problem once. In most cases, the scientists and decision makers need to solve physical, chemical, and engineering challenges, and thus, optimizing the outcome requires analyzing partially unknown systems, stepwise resolving uncertainty, and decision making under uncertainty. Such a procedure is presented in Section 3 of this paper. 2.8. Illustrating Example. The goal of a recent research project was to prove the second-order rate law for the reaction of synthesizing 1-phenoxyoctane from 1-bromooctane and sodium phenoxide in a microemulsion, given that the temperature is kept constant. This project was subdivided into two smaller subprojects: the first one29 has proven that the linear relationship exists, and the second subproject30 has shown that this finding is not affected by the existence of excess phases. Since the project’s theoretical background and goals are not relevant for this paper’s argument, we refer interested readers to the original papers for further information. What’s important is that both sets of experiments included the same experiment in terms of reaction medium composition and temperature. The second experiment’s setup and measurement equipment were slightly improved, and the people doing these experiments were different. Anyway, the same experiments yielded the following values: a second-order reaction rate constant of 4.03 × 10-3 kg mol-1 min-1 in the first subproject and of 1.34 × 10-3 kg mol-1 min-1 in the second. Because of this difference, the second project’s team members performed additional measurements to discover any flaws in the experimental setup, but the search only revealed one minor problem with analyzing the samples in the gas chromatograph. Solving the problem made the distributions narrower for experiments within the subproject, but the absolute value difference between the subprojects did not change and the uncertainty about the difference was not resolved. Since the linear relationship has been proven in both sets of experiments, and the difference in absolute values seemed to be systematic, it was possible to conclude that the relationship really exists, which was the goal of the project. But if the project would have aimed at developing an industry-size reactor, the situation would have been much more complicated. A second-


order reaction rate constant that is about three times higher would require a much smaller reactor for producing the same amount of product in a given time span. This is appealing, but it is impossible to choose the “best” alternative in this situation. Uncertainty analysis does not contribute to improving decision quality. Since both values were observed once, the corresponding realization probabilities are 50% each, and the expected value is 2.69 × 10-3 kg mol-1 min-1. However, the chance that either of the measured values or the expected value will be the true value is rather small. If the true realization would, for example, be 4.50 × 10-3 kg mol-1 min-1, then the true probability of realizing this value would be 100%. The measured and calculated values’ realization probabilities would be 0, and assigning realization probabilities of 50% to each value would be basically meaningless. Even if the second team would have rejected the first team’s results because of a bad gut feeling and assigned a probability of 1 to realizing values close to 1.34 × 10-3 kg mol-1 min-1, the forecast would still have been wrong. The different distributions and experiments are presented illustratively in Figure 2. Optimizing the process means to choose the reactor concept that maximizes the reaction rate constant, but although the sets of experiments were quite similar in terms of reaction conditions and setup, it was not possible to replicate the absolute results. The reason for the differences was that known or unknown factors cannot be kept constant because of lack of knowledge. Developers are, therefore, not able to choose the concept that promises the highest reaction rate constant, unless they understand what factors are responsible for the fluctuations and how they can be controlled. Building an industry-size reactor, and realizing the optimum value, would be luck. Therefore, it is necessary to understand why the values are different and which true value would be realized in an industry-size reactor. 3. General Concept for Dealing with Uncertainty in New Product/Process Development Projects Optimizing the outcome is impossible unless uncertainty is fully resolved. A single application of mathematical stochastic optimization routines that aim at optimizing expectations, therefore, does not help identify the optimum outcome. What’s needed is a procedure of systematically applying methods of uncertainty resolution and decision making under uncertainty. In this section, we explain how such a procedure looks, and we explain how decision makers can stepwise reach the optimum solution. Even though the challenges of risk management in new product/process development projects are much bigger, the following insights concentrate on the “technical” optimization and decision-making challenges. Organizational and psychological aspects are discussed elsewhere.31 3.1. General Approach on Understanding the Origin of Uncertainty. Nature’s realization is determined in a way that one outcome will certainly be realized, depending on known (fn) and unknown (Fn) input factor realizations. Nature can, therefore, be understood as a function (N) that converts the input factors’ realizations into the output value realization (R):

R = N(f_1, f_2, \ldots, f_n, F_1, \ldots, F_n)    (4)

The word function, however, should not be confused with its mathematical meaning, since it represents hidden rules whose origin and character are unknown. Mathematical functions, by contrast, are models that aim at replicating relationships between input factor and outcome realizations. We also do not distinguish between true realizations and measurement values, because the

true realizations are unknown, and measurements are the best available approximation. Measurement uncertainty is, therefore, included into the nature function. Researchers (as well as normal decision makers) build mathematical and nonmathematical models to forecast nature’s realization. These models include rules (M) trying to describe nature’s behavior on factor realizations, as well as a set of variables (vn) that replicates the set of known factors. A model, therefore, looks like

OC = M(v_1, v_2, \ldots, v_n)    (5)

Uncertainty stems from not forecasting nature’s realization correctly when using a model: if the outcome (OC) is different from nature’s realization (R), relevant input factors are not included into the model or the model does not perfectly describe nature’s behavior. In other words, the forecast is wrong and surprising to the observer. If the same experiment yields distributions of values, nature appears to behave randomly. To understand the origin of uncertainty and the different impacts on random fluctuations, we look on the no-uncertainty situation and the three different sources of uncertainty separately. 3.1.1. Case 0: No Uncertainty at All. This is the optimum case in which all variables perfectly replicate the underlying factors, all relevant input factors are known, their realizations are predictable, and the model (M) perfectly replicates nature’s behavior (N). In this case, it is always possible to forecast nature’s realization by applying the model on the known corresponding variable realizations. If decisions on such perfect forecasts would always be correct, there would be no uncertainty at all. 3.1.2. Case 1: Unknown Input Factors. The second case is that the observer does not know all relevant input factors. What’s happening is that the model’s output on the known input factors’ realization is often different from nature’s realization. Although one cannot rule out that the model may luckily yield to nature’s realization, performing experiments usually results in apparently random differences between the model’s outcome and nature’s realization. To reduce uncertainty, the development team has to identify the unknown factors and to properly model the corresponding variables’ relationship with the model’s outcome. 3.1.3. Case 2: Fluctuating Input Factor Realizations. In the third case, all factors are known, the model perfectly predicts nature’s behavior, but an input factor’s realization appears to fluctuate randomly. In this case, the factor’s realization is affected (N1) by unknown input factors (F11-F1n), so that

f_1 = N_1(F_{11}, \ldots, F_{1n})    (6)

Reducing uncertainty requires identifying factors F11-F1n, analyzing the relationships between factor realizations and the model’s outcome, and including them into the model. Additionally, if the possible factor realizations are known, the outcome distribution can be determined by applying the model on the input factors’ distributions. It should also be possible to replicate nature’s realizations by applying single-input factor realizations on the model’s corresponding variables. 3.1.4. Case 3: Wrong Model. The last case is that, although the development team is able to keep all input factors constant, so that nature’s realization does not fluctuate, the model’s outcome on the realizations is different from nature’s realization. In this case, the model’s link between the input variables’ realizations and the outcome is wrong. Thus, improving forecast accuracy means to update the underlying model.
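Cases 1-3 can be made concrete with a small simulation sketch (entirely our own construction, with made-up coefficients): a hidden "nature" rule depends on a known factor f1 and on an unknown factor F1 that fluctuates between runs, so the observer's incomplete model produces apparently random residuals until the unknown factor is identified and added.

```python
import numpy as np

rng = np.random.default_rng(seed=7)

def nature(f1, F1):
    """Hidden rule N (eq 4): the true outcome depends on a known factor f1 and an
    unknown factor F1 that the experimenter cannot hold constant."""
    return 2.0 * f1 + 1.5 * F1

def model_case1(v1):
    """Observer's model M (eq 5) in case 1: the unknown factor is missing."""
    return 2.0 * v1

f1 = 1.0                                   # known factor, kept constant
F1 = rng.normal(0.4, 0.1, size=8)          # unknown factor, fluctuates between runs
R = nature(f1, F1)                         # nature's realizations
print("measured outcomes:", np.round(R, 3))        # appears to scatter randomly
print("model forecast   :", model_case1(f1))       # single value, systematically off

# Remedy (cases 1 and 2): once F1 is identified, measured, and included in the
# model, the forecast reproduces each realization and the apparent randomness
# vanishes; a remaining systematic deviation would indicate a wrong model (case 3).
def model_updated(v1, v2):
    return 2.0 * v1 + 1.5 * v2

print("residuals        :", np.round(R - model_updated(f1, F1), 3))   # all zero
```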


These cases show that random fluctuations of realizations and differences between the model's outcomes and nature's realizations are due to lack of knowledge about the underlying factors or about nature's behavior. To reduce uncertainty, researchers have to identify the unknown factors, describe their impact on the outcome realization, and compare the model's forecasts with measurement values to identify deviations. This procedure has to be repeated until the model is accurate and uncertainty is resolved. What makes the uncertainty-resolution process difficult is that addressing the underlying problems requires understanding how nature behaves, why certain values are realized, and how the development team can systematically realize the optimum solution. Doing this means applying creative, smart, theoretical, and practical problem-solving capabilities to the challenges, which are seldom mathematical in nature. Although the underlying problems are challenging, the benefits from resolving uncertainty are manifold. First, the decision maker is much more confident in his or her knowledge than without systematically resolving uncertainty. Therefore, his or her personal distributions become narrower and the decisions become easier. Second, the problem understanding becomes better, and the updated models become more accurate in predicting nature's behavior. These effects increase the decision-making quality. Third, developing an understanding of how to control the underlying factors and realizations often allows the development team to realistically limit the space of possible outcomes or to extend the space of truly feasible alternatives toward a better optimum. Although we use mathematics in this paper, the reader should also keep in mind that smartly addressing uncertainty does not necessarily have to involve mathematics. It is often also appropriate to build nonmathematical models, to use nonmathematical uncertainty-resolution procedures, and to come to decisions without even considering any probability distribution. In fact, this is done in almost every daily decision-making process and also in professional analyses: e.g., resolving uncertainty by asking senior project managers, building models, and considering what-if chains in order to find out what could happen; flexibly managing a project by keeping "options" open, without calculating option values; or by techniques such as exploratory modeling.32 Some researchers33 recognize the subjectivity of uncertainty but make a distinction between epistemic uncertainty, which could be resolved by uncertainty resolution, and inherent system-variability uncertainty (e.g., about human systems), in which further research may not improve knowledge. Our approach is broader, since it deals with uncertainty in general. It does not matter why the decision maker is insecure in a certain situation and how uncertainty is perceived; it is just important to recognize that uncertainty is present. The question of whether uncertainty resolution is possible depends very much on the decision maker's capability and willingness to pay in order to resolve uncertainty.

3.2. General Procedure of Resolving Uncertainty. Uncertainty resolution should be understood as a procedure rather than as a single decision-making method or metric. This procedure is presented in Figure 3 in a general way. Later in this paper, we will use a case study to show how it should be applied in practice. The first step in solving optimization problems is to generate the alternatives.
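The overall loop of Figure 3 can also be sketched schematically before the individual steps are discussed in detail. The toy problem below is our own construction (all numbers, the common market factor, and the halving of an alternative's uncertainty per resolution step are assumptions); it is meant only to show how dominated alternatives are excluded and how uncertainty is reduced stepwise until one alternative remains or the analysis budget is exhausted, as described in the following steps.

```python
import numpy as np

rng = np.random.default_rng(seed=3)

# Toy decision problem (all numbers assumed): three process alternatives with an
# uncertain outcome (e.g., value added), modeled as mean +/- a half-width that
# shrinks whenever the team invests in resolving the corresponding uncertainty.
alternatives = {"A": [10.0, 6.0], "B": [12.0, 6.0], "C": [11.0, 1.0]}  # [mean, width]
resolution_cost, budget, n_samples = 1.0, 5.0, 20_000

def sample(alts):
    """Monte Carlo draws for each alternative; a common 'market' factor makes the
    outcomes correlated, so that stochastic dominance can be detected (Step 2)."""
    market = rng.uniform(-1.0, 1.0, n_samples)
    return {name: mean + width * market for name, (mean, width) in alts.items()}

while True:
    draws = sample(alternatives)
    # Step 2: exclude alternatives that are worse than another one in all samples.
    dominated = {k for k in alternatives for j in alternatives
                 if j != k and np.all(draws[j] >= draws[k])}
    for k in dominated:
        del alternatives[k]
    if len(alternatives) == 1 or budget < resolution_cost:
        break
    # Step 3: resolve the uncertainty of the most uncertain remaining alternative
    # (a stand-in for lab work, better models, or expert interviews).
    target = max(alternatives, key=lambda name: alternatives[name][1])
    alternatives[target][1] *= 0.5
    budget -= resolution_cost

print("remaining alternative(s):", list(alternatives), "| budget left:", budget)
```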
Then, decision makers should try to exclude inferior alternatives from the set of all known alternatives by applying appropriate methods of decision making under uncertainty. The problem, however, is that uncertainty is often too

Figure 3. Efficiently optimizing a decision problem requires first identifying alternatives and then systematically reducing uncertainty as well as the number of alternatives. This procedure should be repeated until one optimum alternative is realized, until the impact of choosing the best possible alternatives is negligible, or until uncertainty cannot further be reduced.

high to directly identify all inferior alternatives, or, in other words, the best alternative, so that the next steps are to efficiently reduce uncertainty and to reassess the decision situation. This process should be repeated until the optimum alternative is identified, uncertainty cannot further be reduced, or the impact of choosing the best alternative is sufficiently low. 3.2.1. Step 1: Generating and Modeling Alternatives. Alternative generation starts when the development team identifies a possibility to do something that promises to be better than the alternative to do nothing. The process of developing ideas is often very creative, and sometimes good ideas show up rather randomly, when nobody expects them to appear. Although the idea-generation process may be supported by appropriate methods and techniques, we will not go deeper into details. What’s important is that quality of the alternatives is crucial for the optimization result, and the development team should include them into the optimization process when they become apparent and not only once during the process. Thus, idea generation should be understood as an ongoing task. Next, the development team should develop models to forecast the decision’s impact on the relevant system. When doing this, the modelers should not forget modeling the “doing nothing” alternative. This alternative describes the case in which everything is left unchanged, and the realizations would be unaffected. In fact, many decision-making methodologies, e.g., the NPV and Eco-Indicator ‘99 analyses, require modeling this alternative. However, in practice, and in theoretical approaches, decision makers and researchers often use perceived distributions instead of the difference distributions between the alternatives under investigation and the alternative to do nothing to incorporate uncertainty. This prevents decision makers from applying the stochastic-dominance principle, and the decision often looks more uncertain than it actually is. Furthermore, the decision makers may oversee cases in which the best alternative is actually worse than the doing nothing alternative. The model’s system boundaries can be limited to the part of the “world” that would be affected by the decision. However, the scope of the relevant system should depend on the decision maker’s goals. While the project’s value added, for instance, is restricted to the company, analyzing the ecological impact may require looking on all up- and downstream processes from raw material extraction through waste disposal, as well as on the


cause-effect chains between releasing chemicals and the resulting adverse effects. The initial models should be accurate, but they do not necessarily have to be very detailed. To start with the analyses, the development team can quickly develop appropriate models suited to accurately forecast nature's realization. The less work is necessary to reach acceptable model accuracy, the better. The high degree of initial uncertainty can be incorporated by applying broad distributions. This procedure makes sure that the models contain all relevant information and that the development team avoids spending more time than necessary on developing too-detailed models. If uncertainty about parts of the model turns out to be relevant for decision making, model accuracy can be increased where required. The goals and the decision metrics should be kept constant throughout the procedure. Decision makers often use shortcut methods or proxy measures to deal with uncertainty, because resolving uncertainty and developing detailed models require a lot of time and effort. However, the problem is not that the analysis takes time but rather that applying shortcut methods may lead to wrong decisions. In fact, calculating the project's value added or the environmental impact is little work, and it does not become more or less difficult throughout the uncertainty-resolution process. What changes is that the models become more accurate, forecast quality becomes better, and distributions become narrower. Decision makers should, therefore, directly start with applying the appropriate metrics, incorporating uncertainty by using broad input-value distributions, and then using the uncertainty-resolution procedure to stepwise improve model accuracy.

3.2.2. Step 2: Decision Making about Excluding Alternatives. After developing and modeling the alternatives, the decision maker should check if some of the alternatives can be excluded from the optimization problem. We have explained that the stochastic-dominance principle appears to be a strong tool for comparing different alternatives under uncertainty. If the alternatives' outputs are perfectly correlated, the difference or the quotient stays the same, so that it becomes easy to choose the alternative that always promises to yield the better realization or, if the alternatives are not perfectly correlated, to do so at least with a known probability of being right. Stochastic dominance works well if the applied distributions cover the true realization and if one alternative's realizations are always the best, but it may lead to wrong decisions if a low-probability case turns out to be the true realization and if the modeled distributions do not cover the true realization (overconfidence). Therefore, dealing with distributions requires not only solving mathematical optimization challenges but also making sure that the forecasts are reasonable and that the modeled distributions are not too far away from the true distributions. Stochastic approaches usually exclude alternatives that are worse in most cases, depending on given confidence limits.4,34 As explained earlier, doing this does not make sure that the decision will be correct with respect to the true realization. To avoid a situation in which a low-probability case changes the decision, decision makers should exclude only alternatives that are worse in all cases, not in most cases.
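The exclusion rule can be sketched with a small Monte Carlo example (our own, with assumed outcome models): alternative A is excluded only if B is better in all sampled cases, and the comparison is repeated with deliberately broadened distributions, as suggested below, to test whether overconfidence in too-narrow distributions could hide possible decision changes.

```python
import numpy as np

rng = np.random.default_rng(seed=11)
n = 100_000

def compare(widen=1.0):
    """Correlated toy outcomes of two alternatives; `widen` inflates the
    alternative-specific uncertainty to test the decision for overconfidence."""
    common = rng.normal(0.0, 1.0, n)                 # shared factor (e.g., feed price)
    a = 10.0 + 4.0 * common + widen * rng.normal(0.0, 0.1, n)
    b = 11.0 + 4.0 * common + widen * rng.normal(0.0, 0.1, n)
    diff = b - a
    return (diff > 0).mean(), bool((diff > 0).all())

for widen in (1.0, 2.0, 4.0):
    p_better, always_better = compare(widen)
    print(f"widen x{widen:.0f}:  P(B > A) = {p_better:.4f},  "
          f"B better in all sampled cases: {always_better}")
# A is excluded only when B is better in (practically) all sampled cases; if
# broadening the distributions produces cases in which A would be better, those
# realizations should be checked for plausibility before A is dropped.
```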
The authors of this paper, however, are aware that continuous distributions assign some probability to all realizations so that one can never be 100% sure to make the right decision. However, for simplicity reasons, we assume that “almost 100%” situations are 100% certain. Overconfidence can be avoided by analyzing if broader distributions could lead to possible decision changes. If it turns

out that adverse realizations could become possible, the decision maker can check if these realizations are reasonable, if reducing uncertainty is necessary, or if the risk should be accepted. The difference between this approach and the standard stochasticdominance principle is that the decision-making process aims at identifying possible problems before they appear rather than being, for example, 95% sure to be right at a given uncertainty level; it also requires measuring the sensitivity of the decision’s change probability on variations in the uncertainty level and not only the probability of being right. 3.2.3. Step 3: Decision Making about Resolving Uncertainty. Uncertainty resolution should aim at efficiently improving the chance to decide correctly. This basically requires answering two questions: which uncertainty to address and how to address uncertainty? One possibility for answering the first question is to reduce the uncertainty about the most uncertain factors. The drawback of this approach is that not all uncertain input factors’ realizations would lead to decision changes. Therefore, it is necessary to vary one variable’s uncertainty level (V(v1)) while keeping all other variables’ (U(vn)) uncertainty levels constant, so that

OC = M(V(v_1), U(v_2), U(v_3), \ldots, U(v_n))    (7)

A decision rule (DR) applied to the possible outcomes yields a distribution of decision results (D):

D = DR(OC)    (8)

If the variable under investigation is solely responsible for the outcome uncertainty, then resolving uncertainty about the corresponding nature’s input factor should eliminate the risk of making a wrong decision. Otherwise, if resolving uncertainty about a single factor decreases but does not eliminate risk, the outcome uncertainty is caused by joint effects of multiple factors. Joint effects can also be discovered by using Monte Carlo simulation to derive the outcome distributions and by identifying combinations of input factor realizations that might be responsible for the adverse outcomes. Using simulations on reduced factor uncertainty helps identify the relevant factors at a given uncertainty level, but it does not prevent the decision maker from being overconfident about a factor’s realization. We have already proposed the procedure of checking if broader distributions could lead to possible wrong decisions. The drawback of this method is that sensitivity analyses on uncertainty levels are computationally expensive. Since factor analysis should prevent the decision maker from making the wrong decision and not from assessing the probabilities correctly, one can apply “normal” sensitivity analyses on the decision problem for simplicity reasons, by varying the value under investigation (v1) and keeping the expected values (E(...)) of the other variables constant:

D(OC) = M(v_1, E(v_2), E(v_3), \ldots, E(v_n))    (9)

If the decision reacts very sensitively to small changes in factor realizations, even though the current uncertainty level would not imply possible decision changes, the factor should be considered important: the decision maker should carefully listen to contradicting opinions and measurement results to reduce the risk of being overconfident. The last step of decision making about resolving uncertainty is to choose an appropriate uncertainty-reduction technique. We will not discuss this in general, since uncertainty reduction depends very much on the underlying problem, and it requires a smart approach to understanding nature's behavior.
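The screening idea behind eqs 7-9 can be sketched as follows (our own toy model; all distributions and the decision rule "invest if the outcome exceeds zero" are assumed): one input at a time is left uncertain while the others are held at their expected values, and the fraction of sampled outcomes that would flip the decision indicates whose uncertainty is worth resolving.

```python
import numpy as np

rng = np.random.default_rng(seed=5)
n = 50_000

# Assumed toy model M: outcome of an alternative relative to "doing nothing".
def outcome(price, yield_, cost):
    return price * yield_ - cost

# Perceived distributions of the uncertain inputs (means and half-widths assumed).
dists = {"price": (10.0, 3.0), "yield_": (0.9, 0.05), "cost": (7.0, 0.5)}

def decision_change_fraction(varied):
    """Eqs 7 and 9: vary one input over its distribution, hold the others at their
    expected values, and apply the decision rule (eq 8: invest if outcome > 0)."""
    inputs = {k: (rng.uniform(m - w, m + w, n) if k == varied else m)
              for k, (m, w) in dists.items()}
    decisions = outcome(**inputs) > 0
    # Fraction of samples whose decision differs from the majority decision.
    return 1.0 - max(decisions.mean(), 1.0 - decisions.mean())

for name in dists:
    print(f"{name:7s} alone could flip the decision in "
          f"{decision_change_fraction(name):5.1%} of sampled cases")
```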


One way to efficiently determine responses of systems to varying known factors is the design-of-experiments technique.35 This methodology aims at minimizing the experimental effort to determine important factors and the correlations between factors by statistically analyzing sets of experiments and by applying a technique called “blocking” to decrease uncertainty by taking out the effects of single factors. Our approach identifies the important factors by simulation before doing the experiments. This helps further reduce the experimental effort. One drawback of the design-of-experiments technique is that the scientist needs an idea of which factors might be the reason for the random fluctuations, but it does not help if the factors are unknown and the realizations fluctuate apparently randomly. In the illustrating example, the team did not know why the experiments yielded different measurement values, what factors were responsible for the fluctuations, and what they should have done in order to resolve uncertainty. It is often not necessary to completely eliminate uncertainty. The goal is to avoid possible decision changes: if the uncertainty level and the chance of being overconfident are low enough to avoid mistakes, the uncertainty-resolution procedure can be stopped. The development team can, therefore, choose the cheapest (and fastest) possible uncertainty-resolution technique that promises achieving the required level of uncertainty and not necessarily the method that reduces uncertainty most. 3.2.4. Step 4: The Decision to Stop Analyzing the Problem. The last step in the general uncertainty-resolution procedure is to decide whether to stop analyzing the project or to return to the uncertainty-resolution step. The decision maker now faces the challenge to compare the possible gain from getting new information with the effort necessary to receive the information. The problem with this trade-off is that the true realization, and, thus, the gain, is unknown before uncertainty is completely resolved. This also means that the true impact of investing in uncertainty resolution is unknown at the decisionmaking point. Decision makers, however, can use some rules of thumb to assess the decision situation. The simplest case is that one alternative stochastically dominates all other alternatives. In this situation, one should stop analyzing the problem and choose the optimum alternative. Second, if uncertainty resolution is less expensive than the maximum mistake, the decision maker should invest in gaining knowledge, even though the impact of deciding to resolve uncertainty is unknown. Increasing knowledge quality helps avoid decision mistakes, and the worst result is that uncertainty resolution would not change the decision. Anyway, the development team learns why the decision is correct and can use the knowledge in future projects. Third, if uncertainty resolution would cost more than making the biggest possible mistake, choosing one alternative, and observing what happens, would be the most efficient known uncertainty-resolution technique. In this case, the decision maker could accept the risk of making a wrong decisionsand choose one alternative or wait until somebody develops or presents a cheaper uncertainty-resolution technique. Some authors suggest using the “value of information” concept to rank factors1 and to apply shareholder value-oriented decision methods to decide whether investing in uncertainty resolution makes sense or not. 
The value of information is the value added that is expected to be realized by investing in uncertainty resolution, and it is measured by discounting the expected difference between the broader distribution before uncertainty resolution and the expected narrower distribution afterward. If the value of information is higher than the costs

of deriving the information, the decision maker should invest; otherwise, he or she should not. This, however, is what’s typically done when applying shareholder value-oriented decision-making methods on research and development (R&D) projects. Decision makers may use these methods to value the potential of uncertainty-resolution techniques. In fact, it is required by theory when using shareholder value-oriented decision making. However, it is important to keep in mind that this procedure measures the price that decision makers would pay for receiving new information and not the true value of understanding reality. The price is high if the decision maker/investor is insecure about the current state of knowledge and trusts the informationgathering method; the price is low if the decision maker is very confident to be right and insecure or pessimistic about the method’s uncertainty-reduction potential. 3.3. Fast Short-Term Optimization under Uncertainty. Although stochastic routines are often said to be computationally expensive, and the time requirements are extensively discussed,7,36 decision makers should use the uncertainty-resolution procedure as provided in this paper. In long-term optimization problems (e.g., product/process development projects), the decision makers have months or years to derive an optimum solution. This is enough time to run stochastic-dominance analyses and to identify the relevant input factors on uncertainty. The Monte Carlo simulations, for instance, used to draw the graphs in this paper took 2.5 h each on a normal laptop computer (Intel Centrino Processor, 1.40 GHz): each simulation run can be done within the time frame of an extended lunch break. This is short compared to the time necessary to do the lab and brain work required to resolve uncertainty. Even in many short-term problems, this time frame would be enough: batch tasks often run for several hours, and the optimization in this case needs to be started, at the latest, 2.5 h before the next decision point. Right now, the time frame to stochastically analyze decision situations is already quite short, and it will be further reduced by advances in computer and algorithm technology. If the time frame is really too short to stochastically analyze the problem, one can basically face two situations. First, if a decision maker is almost certain about the input values and the resulting outcomes, he or she can consider setting “safe” decision values with certain outcomes and optimizing the problem deterministically (e.g., safety measures that need to be applied immediately). The second case is that uncertainty is too high to decide, so that it makes sense to consider postponing the decision in order to have more time for uncertainty resolution. In uncertain safety-related situations, this can also mean to stay on the “safe” side, to get out of an uncertain, but unsafe, situation immediately, and then to take the time to resolve uncertainty on what to do in such a situation. 4. Combining Concepts for Economic and Ecological Decision Making Developing a new and interdisciplinary approach on decision making and optimization under uncertainty requires a thorough understanding of the concepts that are already available. We therefore provide a broad literature review that represents the current state of knowledge about optimizing the economic and ecological impacts when developing new chemical products and processes. 
We also explain the differences and similarities between the approach developed in Section 3 of this paper and the established concepts, and we identify research needs.


4.1. Measuring the Economic Impact: The Project's Value Added. The economic impact is usually measured by some kind of shareholder value-oriented method, such as the net present value (NPV) rule, real-options, and decision-tree analyses. To explain the methods' application and their limitations, we provide an overview of, and the background behind, these decision-making concepts, and we draw conclusions.

4.1.1. Uncertainty, Decisions, and Discount Rates. People make decisions under uncertainty in daily life situations, and researchers have developed models to explain how people decide. The simplest, and most prominent, approach is the concept of expected values:1 decision makers are assumed to multiply each alternative's possible outcomes by the corresponding realization probabilities and to sum up the values to calculate the distribution's expected value. A decision maker who aims at maximizing the outcome chooses the alternative that promises the highest expected value. Otherwise, if the goal is to minimize the outcome, the decision maker chooses the lowest-expected-value alternative. This rule, however, does not explain what decision makers would do if the alternatives' expected values are the same while the risk levels are different. This problem is overcome by adding a second decision rule: the decision maker chooses the lowest-risk alternative if the expected values are the same.37

The latter rule implies that the decision maker is risk-averse. This behavior is explained by the idea that decision makers want to maximize their personal "utility value", which is a proxy measure including everything a decision maker wants to achieve.1 Every input value is transformed into a corresponding utility value by using the decision maker's personal transformation function. This function is assumed to be concave if people are risk-averse; risk-neutral people are assumed to have a linear transformation function, and the transformation function of risk-seeking people is assumed to be convex. If the decision maker is risk-averse, the concave function causes the expected utility value to decrease as the uncertain input value's distribution becomes broader. Maximizing the personal utility value, therefore, requires the risk-averse decision maker to choose the lowest-risk alternative. This effect is shown in Figure 4.

Figure 4. A typical explanation for risk-averse decision making is that the decision maker aims at maximizing the personal expected utility value. Since the transformation function between the input value and the utility value is concave, the expected utility value decreases as the degree of uncertainty about the input value increases. The decision maker, therefore, chooses the low-risk over the high-risk alternative.

Capital market theory, for example, uses this concept to explain the existence of interest rates.38 Since the effect of getting more money on the utility value is assumed to decline as wealth grows, the investor's transformation function is assumed to be concave. The investor is, therefore, risk-averse, and he or she asks for a risk premium when investing money. This argument is used to determine the theoretical cost of capital by assuming that investors in capital markets invest in stocks that promise the highest expected return and the lowest standard deviation. Assuming further that they properly apply portfolio theory37 and use the possibility of investing or borrowing risk-free money at the risk-free rate of return finally yields the capital asset pricing model (CAPM) formula,39-41

r_i = r_f + \beta_i (r_M - r_f)    (10)

where r_i is the required return for the investment i, r_f is the risk-free rate of return, r_M is the market return, and the \beta_i value is the stock price's risk measure, which is empirically derived as the sensitivity of the stock (i) price to changes in the market index level. 4.1.2. Project's Net Present Value (NPV). The economic impact of investing in a new process is measured by some kind of shareholder value-oriented method such as the net present value (NPV) rule or the more sophisticated decision-tree and real-options analyses.42 Whatever method is chosen, the procedure for deriving the value first requires forecasting the incremental cash-flow distributions over the project's economic life span. The next steps are to discount the expected values of the cash-flow distributions (E(CF_t)) at the risk-adjusted cost of capital (q) to derive their present value (PV) and, finally, to sum up all the PVs and deduct the expected initial investment outlay (E(I_0)), which results in the NPV:

NPV = -E(I_0) + \sum_{t=1}^{T} \frac{E(CF_t)}{(1 + q)^t}    (11)
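As a minimal numerical illustration of eqs 10 and 11, the following Python sketch computes a CAPM-based cost of capital and discounts a short series of expected cash flows; all input figures (risk-free rate, beta, market return, cash flows, and outlay) are hypothetical and are not taken from this paper.

```python
# Minimal sketch of eqs 10 and 11 with hypothetical numbers.

def capm_rate(rf, beta, rm):
    """Eq 10: required return r_i = r_f + beta_i * (r_M - r_f)."""
    return rf + beta * (rm - rf)

def npv(expected_cash_flows, q, initial_outlay):
    """Eq 11: NPV = -E(I0) + sum_t E(CF_t) / (1 + q)**t, t = 1..T."""
    pv = sum(cf / (1.0 + q) ** t for t, cf in enumerate(expected_cash_flows, start=1))
    return -initial_outlay + pv

if __name__ == "__main__":
    q = capm_rate(rf=0.04, beta=1.2, rm=0.09)   # hypothetical market data
    cash_flows = [1.0, 1.1, 1.2, 1.3, 1.4]      # expected E(CF_t), e.g., mil. euro
    print(f"cost of capital q = {q:.3f}")
    print(f"NPV = {npv(cash_flows, q, initial_outlay=3.5):.2f}")
```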

A project whose NPV is positive contributes to the company’s shareholder value and should be undertaken, while investing in a negative NPV project would destroy value, and the project should be rejected. Capital market theory does not distinguish between valuing projects and companies: the sum of all projects’ NPVs equals the company’s gross value. The company’s shareholder value is calculated by deducting the value of all cash flows going to bondholders. Dividing the shareholder value by the number of stocks issued equals the theoretical fair stock price. 4.1.3. Real-Options Pricing. Decision makers who apply the NPV method implicitly assume that investors are aware of uncertainty when valuing stocks but that managers would not react to negative outcomes. This shortcoming is overcome by real-options43 and decision-tree analyses.42 Real-options pricing stems from the idea of valuing options in the stock market.44 The most simple option type, the European call option, for example, gives the option holder the right, with no obligation, to buy a stock at a given date (maturity) for a given price (strike price). Since the investors earn the price difference (P - E), they exercise the option if the stock price (P) at maturity is higher than the strike price (E). If, by contrast, the strike price is higher than the market price, the same transaction would result in a loss, which in turn can be avoided by not exercising the option. In this case, the option value at the date of expiry is 0. The option value before the date of expiry can be calculated by using the Black/Scholes equation (eq 13).45 If one assumes that the option’s possible payoffs can be replicated by borrowing money at the risk-free rate of return and buying some amount of the underlying stock, and that the stock price follows a geometric Brownian motion process,


\frac{dP}{P} = r_i \, dt + \sigma_i \epsilon \sqrt{dt}    (12)

where \epsilon is a normally distributed variable that has a mean of 0 and a standard deviation of 1 and r_i is the risk-adjusted discount rate, then the option value (V) is given by

V = P N(d_1) - E e^{-r_f (T - t)} N(d_2)    (13)

with

d_1 = \frac{\ln(P/E) + (r_f + \frac{1}{2}\sigma_i^2)(T - t)}{\sigma_i \sqrt{T - t}}

and

d_2 = \frac{\ln(P/E) + (r_f - \frac{1}{2}\sigma_i^2)(T - t)}{\sigma_i \sqrt{T - t}}

where P and E denote the present stock price and the exercise price, respectively; N(...) is the cumulative normal distribution; \sigma_i is the standard deviation of the rates of return of the underlying stock; T - t is the time span to the exercise date; and r_f is the risk-free rate of return.
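For readers who want to reproduce eq 13, here is a small, self-contained Python sketch of the European call value; the stock price, strike, volatility, and maturity in the example are hypothetical, and the cumulative normal distribution is built from the standard library's error function.

```python
# Sketch of the Black/Scholes value of a European call (eq 13), hypothetical inputs.
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Cumulative standard normal distribution N(x)."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(P, E, rf, sigma, tau):
    """Eq 13: V = P*N(d1) - E*exp(-rf*tau)*N(d2), with tau = T - t."""
    d1 = (log(P / E) + (rf + 0.5 * sigma ** 2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)   # equivalent to the expression for d2 above
    return P * norm_cdf(d1) - E * exp(-rf * tau) * norm_cdf(d2)

if __name__ == "__main__":
    # e.g., stock price 100, strike 105, 5% risk-free rate, 30% volatility, 1 year
    print(f"call value = {bs_call(P=100.0, E=105.0, rf=0.05, sigma=0.30, tau=1.0):.2f}")
```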

The idea of real-options pricing is to use capital market options pricing theory to value similar problems in real investment projects. Management, for example, has the flexibility to stop an R&D project if it turns out to be unsuccessful.46 One can, therefore, understand investing in a development stage as buying a European call option on continuing with the project's next stage. The project's value can then be calculated by using the Black/Scholes equation (eq 13). The decision rule is the same as in NPV analyses: invest in the project if the initial investment in the first development stage (initial investment outlay) is lower than the option's value. The same kind of analysis can be done by using decision-tree analyses. While real-options pricing uses continuous mathematics, the decision-tree approach simplifies the challenge to a discrete-time decision problem, but the basic idea is the same: both methods model the decision maker's flexibility by cutting away negative outcomes. Furthermore, options pricing overcomes a theoretical problem with discounting truncated cash-flow distributions, but explaining this would go beyond the scope of this paper. 4.1.4. Challenge to Make the "Right" Decision. If the goal is to maximize shareholder value, decision makers should use shareholder value-oriented decision methods, such as NPV, real-options, or decision-tree analyses, as required by theory, but they should not use proxy goals or shortcut methods just because information is uncertain. Using cost-comparison techniques, for example, can lead to a wrong decision if sales are affected by the decision and the decision is affected by the discount rate. To make sure that shareholder value is increased, one would have to verify that both methods would certainly lead to the same decision, but this is much more complicated than directly calculating the project's value added. The project's value is subjective and, thus, affected by the investors' perceived risk level and expectations: it is the price that investors would pay for owning the project. It is either a unique "value" to a single investor or the average of personal value assessments by many investors in capital markets: stock markets, for example, set prices that maximize sales, depending on all investors' bids. Since the stock market price is always

assumed to be in equilibrium, it adjusts to the average of all investors' personal values. Because of this, expected cash flows, the cost of capital, and, thus, the value are set externally by investors in capital markets and not internally by the management or analysts. Since the value added replicates the price that investors would pay for owning the project, investing in positive NPV projects may increase the current shareholder value. But it is obvious that "paying a high price" and "making the project a success" are two different approaches. The first one optimizes expectations, but if the goal were to make the right decision in terms of maximizing the outcome, shareholder value-oriented methods are not the right choice. The question that remains here is as follows: would other decision-making methods or metrics be more appropriate to choose a potentially successful project? We leave answering this question to further discussion. 4.2. Measuring the Ecological Impact: The Eco-Indicator '99. The second goal under consideration is the ecological impact. In this section, we explain the basics of ecological evaluations by first addressing typical cause-effect chains between emissions and adverse effects and then explaining ways to describe and to measure ecological impact. 4.2.1. Ecological Cause-Effect Chain. Humans build cities and streets, they travel, and they consume products. These actions contribute to mankind's quality of life and wealth, but they are also the reason for negative impacts on the environment, for example, adverse health effects, reduced biodiversity and natural resources, and increased land use. The major goals of managing the environmental impact are to discover negative effects, to identify the cause-effect chains, and to solve the underlying problems. Because discussing all cause-effect chains would go far beyond the scope of this paper, we concentrate on the effects of emitting chemicals that cause negative health effects as an example, but since the theoretical challenges are basically the same for all impact categories, the insights can be applied to ecological assessments in general. We furthermore refer to the more specialized literature presented in this paper for additional information. The first step of damaging the environment by emitting chemicals is releasing the substance under investigation.47 Then, the chemical is spread throughout nature,48 which results in a distribution of concentrations of the chemical. The concentration at one location depends on wind speed, river flow rate, rainfall, surrounding buildings and terrain, and chemical and biological degradation reactions. The latter also lead to concentrations of the reactions' products. If the speed of emitting and producing a certain chemical is higher than the speed of degradation reactions, the chemical accumulates in the environment. Chemicals are taken in by humans, typically through membranes in the lungs, the skin, and the digestive system. The concentration within the body depends on the chemicals' concentrations at the relevant membranes, their capability to move through the barriers, and the duration of intake.49 Within the human body, the chemical either does nothing, contributes to health, or causes negative effects ranging from minor irritations to fatalities. The magnitude of the adverse effect, however, depends on the chemical's potential to do harm, the mechanism causing the damage, and the chemical's local concentration within the body.
Research, for example, indicates that people usually do not show adverse effects up to a certain threshold value.47 This may be due to repair mechanisms within the body. Carcinogenic chemicals, by contrast, appear to harm the body on a rather stochastic basis: even one molecule of the chemical can cause cancer. There are basically three possibilities for reducing adverse health effects: to stop emitting the chemical, to reduce the personal exposure by protection measures, and to provide medical treatment to the affected people. Stopping emissions, however, requires a very accurate understanding of the underlying problem: it is necessary to know the adverse effects, to understand where they come from, and to find out which emitters release the chemicals into the environment. Reducing personal exposure may work well if the emission source and the emitted chemicals are known and the protection is needed only temporarily. Examples are workers who protect themselves in labs or production facilities when working with chemicals or neighbors of the plants who shut their windows in the case of an accident. However, protecting people against unknown chemicals, on a large scale and over a long time frame, is at least inconvenient and, in many cases, impossible. The third possibility, providing medical treatment, requires knowing how to treat the affected people, but it is not necessary to understand the cause-effect chain. Since usually the communities, and not the emitter, pay for reducing exposure and adverse health effects,50 these costs are said to be "external". To deal with this effect, researchers have developed the concept of "internalizing" external costs, which means to give the external costs back to the emitter. Adding them to the production costs would make problematic products more expensive and/or provide management with a better problem understanding. Research has also developed a number of methodologies to measure external costs: e.g., to ask how much restoring nature51 or reducing the emissions would cost and to ask people how much they would pay for restoring a certain state of nature. 4.2.2. Perception and Assessment of Ecological Impacts. About 40 years ago, researchers and managers started assessing the ecological impacts of human action.52 Since then, companies have reduced emissions, the production processes became more energy efficient, the products became less problematic, and companies started using the expression "sustainability" for marketing products.53 In recent years, however, the perception of ecological impacts appeared to be declining on the one hand.54 On the other hand, the Brent Spar incident,55 and negative public reactions to potentially harmful chemicals contained in food,50 show that companies are expected to behave in an environmentally benign fashion and that the public reacts to violations of the rules of environmentally conscious behavior: the company may be obliged to pay for restoring the environment,56 the company may suffer from reputation losses,57 and governments may impose emission taxes and additional laws.58 Researchers have developed concepts for evaluating and managing the company's ecological impact. These include ecological incentive systems,59 portfolio presentations,60 and eco-efficiency and ecological cost analyses.61 Empirical research indicates that just a few companies actually assess their ecological impact,62 although ecological analyses help understand the ecological risks.63 The analyses are perceived to be expensive,47 and the management does not expect to benefit from implementing ecological management tools.64 It was also shown that management accounting departments do not accept ecological cost-accounting techniques,65 even though the analyses are similar to management accounting methods.

4.2.3. Analyzing the Environmental Impact of Investing in a Chemical Process. The ecological impact analysis starts with determining the specific damage factors, which, at first, requires collecting emission data for all chemicals, as well as data about the damages under investigation. The next step is to identify the cause-effect chains and the chemicals' relative contribution to the total damage. Given these data, one can calculate the specific damage factors (F_EI,i), which describe the contribution of emitting a specific amount, say 1 kg, of a chemical (i) to the total damage. The second major step is to measure the mass flow changes at the balance boundaries for each relevant chemical due to the decision under investigation (Δṁ_i). The resulting change in the ecological impact (EI) is calculated by summing up all chemicals' damage values, which are derived by multiplying the incremental mass flow changes by the corresponding specific damage factors:

EI = \sum_i \Delta \dot{m}_i F_{EI,i}    (14)

4.2.4. Determining the Mass Flow Changes due to an Investment. Measuring or forecasting the mass flow changes at the system boundaries requires assessing the mass flows of products, raw materials, and mass flows leaving as emissions, in all processes that are affected by the decision. Raw material and product mass flows are relatively easy to detect since the transportation routes and the composition of the chemical flows are often known for existing processes. For new processes, they can be simulated during the development project, or existing processes can be used as a benchmark.47 Emissions are more difficult to analyze. Even though some emission points, e.g., smoke stacks, are obvious, the emission sources are often unknown: chemicals may be released during handling, filling, and emptying operations, as well as through leaking valves, piping, and equipment. Research indicates that such diffuse emissions add up to 6% of the product mass flows during normal plant operations.66 Diffuse emission sources can be detected47 by applying some kind of detection technique, such as wipe tests on potentially contaminated surfaces, analyzing the air within the plant building, or determining the amount of emitted chemicals by pressure drop and weight-loss measurements.67 Emissions are released within (direct) and outside (indirect) the process under investigation,47 during routine operations and, once in a while, in cases of accidents.68 These nonroutine emissions are due to some kind of accident, ranging from improper handling of chemicals, through releases of chemicals via pressure-relief valves, to major accidents, such as explosions. Accidents can be the reason for severe damage and major chemical releases.69 The mass flows' compositions are often unknown, because accidents are often the result of unforeseen reactions, and secondary reactions, such as fires, can transform the released chemicals into other more or less dangerous, but unknown, chemicals. 4.2.5. Eco-Indicator '99. One of the more recently developed methods for assessing the ecological impact is the Eco-Indicator '99.20,70 This methodology aims at describing the potential damages caused by the emissions, based on the ISO 14000 life cycle assessment method. It follows a four-step procedure. First, all extractions of resources and emissions are collected to get a picture of the mass flows that are causing the damages. The second step is to use fate analyses to determine the concentrations of the chemicals in air, water, and soil, as well as the availability of natural resources. Step three, the exposure and


effect analysis, then determines how much of a substance is actually taken in by life forms, such as plants, animals, and humans, and it determines the effects of the substance intake and resource extraction. Depending on each chemical's impact on the environment, the Eco-Indicator '99 framework first sets specific impact values with regard to the effect classes. These values are then aggregated to factors describing the effect on the three major damage categories: human health, ecosystem quality, and damage to fossil fuels and minerals. The methodology provides tables with damage factors for the most common pollutants. Finally, the three main categories are put on a common dimensionless basis, the Eco-Indicator '99, through normalization and then by assigning weighting factors that express the subjective perception of three different groups of people: hierarchists, egalitarians, and individualists, based on the cultural theory of risk perception. Thus, the Eco-Indicator is calculated using

Eco-Indicator '99 = \sum_k \frac{w_k}{N_k} \sum_j \sum_i D_{ij} m_i    (15)

where w_k and N_k are the weighting and normalization factors attributed to damage category k; D_ij characterizes the damage factor that a substance i has on impact category j; and m_i denotes the mass flow of substance i.
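To make the aggregation in eq 15 concrete, the following Python sketch walks through the weighting and normalization steps; the substances, damage factors (D_ij), normalization factors (N_k), and weights (w_k) are placeholders and are not values from the published Eco-Indicator '99 tables.

```python
# Sketch of the Eco-Indicator '99 aggregation (eq 15) with placeholder factors.

# Damage factors D[i][j]: contribution of 1 kg of substance i to impact category j.
D = {
    "substance_A": {"respiratory": 2.0e-5, "climate": 5.0e-6},
    "substance_B": {"respiratory": 1.0e-6, "climate": 8.0e-5},
}
# Mapping of impact categories j to damage categories k.
category_of = {"respiratory": "human_health", "climate": "human_health"}
# Normalization (N_k) and weighting (w_k) factors per damage category k (placeholders).
N = {"human_health": 1.5e-2}
w = {"human_health": 0.4}

def eco_indicator(mass_kg):
    """Eq 15: sum_k (w_k / N_k) * sum_j sum_i D_ij * m_i."""
    damage_per_k = {k: 0.0 for k in N}
    for i, m in mass_kg.items():
        for j, dij in D[i].items():
            damage_per_k[category_of[j]] += dij * m
    return sum(w[k] / N[k] * damage_per_k[k] for k in damage_per_k)

print(eco_indicator({"substance_A": 120.0, "substance_B": 40.0}))
```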

4.2.6. Challenge to Cost Efficiently Assess the Uncertain Ecological Impact. Analyzing the environmental impact is a form of risk management. If emitted chemicals turn out to be harmful, the company may be asked to pay for compensation, the company's reputation may be damaged, and the government may impose emission taxes and additional laws that affect the company. Introducing environmental impact analyses helps identify potential hazards to the company's balance sheets before they become a problem. Dealing with ecological impacts is important for all participating parties: researchers, policymakers, and managers. However, since the underlying natural systems are complex, and the cause-effect chains are not well-understood, the assessments involve a great deal of uncertainty. Therefore, decision makers should learn how to deal with uncertain information and how to decide under uncertainty. Because decision quality depends on the quality of information, it is very important to efficiently reduce uncertainty regarding environmental impacts. The main reasons why environmental management tools have rarely been implemented are that managers are uncertain about the benefits from investing in environmental management tools and that applying the tools is perceived to be expensive. Research should, therefore, aim at reducing the costs and the manual work that go along with environmental-impact assessments. One option is to look closely at the highly automated processes used in management accounting and to apply these insights in order to reduce analysis effort and costs. 4.2.7. Internal versus External Costs. Costs are external when the damages cannot be directly linked to the emitter; in such cases, it is impossible to internalize external costs on the basis of the emissions' contribution to the damages. If the link is very obvious, it should be no problem to directly internalize external costs, for example, by asking the emitter to pay for compensation, by suing the company or the person causing the damage, or by imposing emission taxes and paying the money to the people suffering. Then, the external costs would show up on the companies' balance sheets and on profit-and-loss

statements. Thorough NPV and risk analyses have to take these payments into account, as well as other soft impacts such as reputation losses. The Brent Spar incident is often used as an example of failing to internalize soft factors into the decision problem. This understanding neglects the fact that soft factors, e.g., reputation, also have a significant impact on a company's sales and value. In this special case, it was not impossible to calculate a monetary value. Of course, it would have been uncertain, but the problem was that management believed that disposing of the platform in the planned way would not cause damage to its reputation. This is a problem of overconfidence and not of metrics. The major problem with internalizing external costs is that the relationship between emissions and damage is usually not that obvious. Research, for example, indicates that CO2 emissions contribute to global warming. The magnitude of the true adverse effects, however, is known for sure only when they are realized. We do not make an argument for emitting CO2, but we state that the uncertainty about the true realization is too high to honestly measure what restoring a "previous state" of nature would cost. It is not even clear whether that would be possible. What becomes clear is that, even in extensively discussed cases, it is impossible to calculate the true external costs. There are basically two approaches to determine external costs: measuring the costs to repair the damages or determining the willingness to pay for reducing the perceived damages. The first one deals with the true but highly uncertain damages, while the second measure expresses the people's subjective problem understanding. If emissions and damages are perceived to be irrelevant, people would not pay for their reduction. Since the perceived relevance may be very different from the true damages and costs, both types of external costs can differ significantly. 4.2.8. Challenge to Determine the True Environmental Impact. The major goal of reducing the environmental impact is to reduce adverse effects. This sounds simple, but the environmental impact-analysis tools actually confuse adverse effects with opinions about adverse effects. Even though decisions are always affected by subjective knowledge and probabilities, developers of ecological impact-assessment methodologies should avoid basing the metrics on apparently subjective input parameters such as the problem perception of different groups of people or on discount rates. These values explain why people decide, but they do not help determine the true realization and, thus, the magnitude of the damage. Ecological management tools usually use lists of harmful chemicals. This is a good start, but the process of declaring a chemical to be harmful takes time, and the chemicals are added to the list only when the damage potential is almost proven. The problem is that decision makers can become overconfident when trusting the lists. Decision makers in companies and politics should, therefore, closely follow the ongoing discussion about the damage potentials and not just rely on one information source. To allow an open discussion about the chemicals' damage potential, research should consider developing a public database that also provides broad information, contrary opinions, and measurement results before any committee agrees on the damage potential.
Such an information source would not only add valuable scientific information to the "lists" but also provide decision makers with much more useful information for the decision-making process. Since environmental damages are a global problem, those lists should be accessible to all interested parties.


4.3. Combining Economic and Ecological Decision Making. Decision-making approaches for maximizing the economic impact and minimizing the ecological impact share almost the same challenges: input and output values are uncertain, they are both based on mass and energy flow changes, and the decision makers aim at choosing the best alternative. Because of these similarities, it is possible to develop an integrated methodology that deals with ecological and economic goals simultaneously. 4.3.1. Added Value, Ecological Impact, and Stochastic Dominance. The stochastic-dominance principle appears to be a strong decision-making tool, which should be used independently of economic and ecological decision making. But, in fact, stochastic dominance is already used when determining the investment decision's impact on the company's value added and on the ecological impact: modelers need to determine the difference between the values under consideration with and without investing. In a stochastic setting, this leads to difference distributions of cash flows and ecological impact values, as required in stochastic-dominance analyses. After determining these difference distributions, decision makers have to calculate the decision's value added by discounting the expected value of the future cash-flow distributions to the current date in order to derive a single current value of the project. This is because an investor would only pay one subjective price to buy the project, which is replicated by applying capital market theory. The idea behind ecological impact analysis, by contrast, is to determine the uncertain true realization and not the willingness to pay. Therefore, distributions of ecological impact values should not be discounted. 4.3.2. Multiobjective Decision Making. Looking at just one objective requires choosing the alternative that contributes most to achieving the single desired goal. If, however, more than one objective is involved in the decision-making process, the situation becomes more complicated: goals can be conflicting, and the alternatives' contributions to reaching the goals may differ. This is why it is important to prioritize goals. Research has developed two different approaches to handling this challenge.71 The first is to assign the goals' weights at the beginning of the analysis, then to analyze the alternatives, and finally to choose the alternative that promises the highest overall goal achievement. This procedure is, for example, used in scoring methods. The second possibility, used in Pareto plots for instance, is to start with analyzing the alternatives first, then to find the set of noninferior alternatives that are best with regard to one goal but worse with regard to the other goals, and finally to let the decision maker choose the best alternative for achieving the personal goals. Setting weights before doing the analysis requires discussing what should be achieved by doing the project. This helps understand the problem, work in the same direction, and make decisions. This part of the decision-making process is, therefore, very important, and the development goals should be discussed at the project's start. Furthermore, this procedure assures that the goals' weights are independent of the available alternatives.
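As an illustration of the second approach, the short Python sketch below filters the noninferior (Pareto) set from a handful of hypothetical alternatives, each characterized by a value added (to be maximized) and an ecological impact (to be minimized).

```python
# Sketch: find the noninferior (Pareto) set for hypothetical alternatives.
# Each alternative: (name, value added in mil. euro [maximize], ecological impact [minimize]).
alternatives = [
    ("A", 4.0, 120.0),
    ("B", 6.0, 150.0),
    ("C", 3.0, 100.0),
    ("D", 5.5, 160.0),
]

def dominates(a, b):
    """a dominates b if it is at least as good in both goals and strictly better in one."""
    _, va, ea = a
    _, vb, eb = b
    return va >= vb and ea <= eb and (va > vb or ea < eb)

pareto_set = [a for a in alternatives
              if not any(dominates(b, a) for b in alternatives if b is not a)]
print([name for name, _, _ in pareto_set])   # noninferior alternatives
```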
Some approaches use minimizing risk as an objective; others do not. Investors in capital markets, on the one hand, are assumed to choose the lowest-risk alternative if the expected values are the same. The idea is similar to that used in Pareto plots. On the other hand, we have explained that innovative projects are always risky. If minimizing risk were placed on the same goal level as maximizing shareholder value, risk-averse decision makers would simply not take the risk to invest, and

the company would lose a huge potential for earning money. Therefore, one should start with uncertain and risky projects and resolve uncertainty as efficiently as possible. 4.3.3. Time-Dependent and Time-Independent Distributions. Real-options pricing uses one stochastic variable to describe the uncertainty about the future value development. Real-life decision problems are affected by many uncertain input values, so the method should not be applied exactly as explained above; however, the idea of cutting away bad outcomes can be used in more sophisticated models and also for nonfinancial problems. In real-options pricing, developers distinguish between time-dependent and time-independent probability distributions. This idea can be used to model growing uncertainty as the forecast horizon moves toward the future. Other variables, such as reaction rate constants, are usually modeled by time-independent variables: the distribution, as well as the degree of being wrong in the assessment, will not change unless the development team does something to reduce uncertainty. Basically, the same problem applies to time-dependent variables: as long as uncertainty is not resolved, the distribution stays the same. Thus, time-dependent distributions are not resolved because time goes by but because the decision maker looks at the realization. 4.3.4. Increasing Process Flexibility to Reduce Impact. Negative impacts due to overconfidence can be avoided by incorporating flexibility into a process, for example, by using measurement and control equipment that can cope with a broad range of possible variable realizations, such as temperature or pressure fluctuations. Flexibility can also be increased by flexibly managing the project, reacting to unforeseen events, and keeping options open by first resolving uncertainty and then deciding about important settings. We have also discussed the challenges of dealing with high-impact, low-probability cases. To deal with such incidents, and to help reduce the effect of being overconfident, the process designers can, for example, implement pressure-relief valves and measures to collect spilled chemicals, even if they do not expect problems to appear. In the case that something happens, these safety measures help reduce the magnitude of the effect and, thus, also make the perceived distributions narrower. 4.3.5. Decision-Making Metrics. We have discussed using the project's value added, based on net present value and real options, to analyze the economic impact of investing, as well as the Eco-Indicator '99 framework to analyze the ecological impact. We have seen in the discussion that both methods show weaknesses in their current state of development, but they are also the most appropriate valuation techniques currently available. Because this paper's intention is to develop a new optimization procedure, we do not solve the methods' inherent problems and leave this task to future research. That is why we apply the metrics as proposed in the literature. The decision-making methods are summarized in Figure 5. The general project's value added (ΔV) equals the result of NPV, decision-tree, or real-options analyses, as required by the situation. The value added is calculated by discounting all expected values of the forecasted cash flow and initial investment outlay distributions (E(CF_t), E(I_0)) at the appropriate risk-adjusted cost of capital (q),

\Delta V = -E(I_0) + \sum_{t=1}^{T} \frac{E(CF_t(\Delta \dot{m}_s; P_s))}{(1 + q)^t}    (16)


Figure 5. Model for simultaneously deriving economic and ecological decision-making metrics. Starting with different models representing the alternatives, and based on uncertain market forecasts, the decision maker calculates mass-flow difference distributions. These are then valued by specific ecological impact factors and market prices. The resulting cash-flow forecast distributions’ expected values are discounted and aggregated to derive the NPV, while the ecological impact factor distributions are just aggregated.

where T is the last period under consideration. The incremental cash flows are calculated from the mass-flow changes of the relevant streams (Δṁ_s) in period t multiplied by the corresponding market prices (P_s). The Eco-Indicator '99 is calculated, given that a chemical i is understood as a component C of a mass flow, by multiplying the incremental mass flow changes (Δṁ_C,t) by specific ecological impact factors (F_EI,C); the framework provides lists of damage (D_C,j), weighting (w_k), and normalization (N_k) factors for each damage category k and impact category j, so that the environmental impact becomes:

EI = \sum_C F_{EI,C} \Delta \dot{m}_{C,t} = \sum_k \frac{w_k}{N_k} \sum_j \sum_C D_{C,j} \Delta \dot{m}_{C,t}    (17)
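A minimal Python sketch of eqs 16 and 17 shows how both metrics can be derived from the same incremental mass-flow changes, with only the cash flows being discounted; all flows, prices, and impact factors below are hypothetical.

```python
# Sketch of eqs 16 and 17: both metrics from the same incremental mass-flow
# changes (all numbers hypothetical; flows in kg per period, prices in euro/kg).

delta_m = {"raw_material": [-50000.0, -52000.0, -54000.0],  # less material purchased
           "emission_X":   [-1200.0,  -1250.0,  -1300.0]}   # less of X emitted
price = {"raw_material": -0.45, "emission_X": 0.0}  # euro/kg; purchased streams carry a negative price
F_EI  = {"raw_material": 0.02,  "emission_X": 0.5}  # indicator points per kg

q, E_I0 = 0.07, 40000.0   # risk-adjusted cost of capital, expected initial outlay (euro)

# Eq 16: Delta V = -E(I0) + sum_t E(CF_t(delta_m; P)) / (1 + q)**t
T = len(delta_m["raw_material"])
cash_flows = [sum(delta_m[s][t] * price[s] for s in delta_m) for t in range(T)]
delta_V = -E_I0 + sum(cf / (1.0 + q) ** (t + 1) for t, cf in enumerate(cash_flows))

# Eq 17: EI = sum_C F_EI,C * delta_m_C  (ecological impacts are not discounted)
EI = sum(F_EI[s] * dm for s, flows in delta_m.items() for dm in flows)

print(f"value added = {delta_V:.0f} euro, ecological impact = {EI:.1f} points")
```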

Both metrics are based on the incremental mass-flow change distributions, so that they can be used simultaneously. As discussed, the economic decision-making metric incorporates the cost of capital as a typical subjective input value, while the ecological impact is calculated without discounting. This helps reduce the confusion between the perceived and the true ecological impacts. 5. Modeling Chemical Processes for Optimizing Economic and Ecological Impacts and a Case Study The procedure of resolving uncertainty, and making models more accurate to improve the forecast accuracy, requires using adjustable models. In this section, we first explain how the model structure of chemical processes changes during uncertainty resolution, and we provide very basic models for reaction and separation steps that can be used in initial, simple, and accurate process models. Finally, we provide a case study and show how the optimization procedure can be applied to real-life problems. 5.1. General Process Model. As explained earlier, optimizing a chemical process should be understood as a procedure that involves repeatedly applying the stochastic-dominance principle

and uncertainty resolution to exclude stochastically inferior alternatives until the optimum alternative is identified. Therefore, the parts of the model that contribute most to decision uncertainty have to be replaced by more accurate models, which leads to a hierarchical model that is detailed where necessary and rudimentary where uncertainty does not contribute to the risk of making a wrong decision. When doing this, one should keep in mind that more-detailed models are not necessarily better. That is why it is important to develop and use more accurate models in terms of predicting nature's behavior. Figure 6 shows how uncertainty resolution and model updating change the model throughout the uncertainty-resolution procedure. In this example, the development team has identified the "process 1" step to be relevant for possible decision changes. The team has then worked on developing deeper knowledge about this step, and it has introduced a more-detailed model consisting of "reaction" and "separation" steps. Finally, it turned out that the separation step contributed significantly to decision uncertainty, so that the separation model was substituted by two "treatment" steps. The rest of the model was kept the same throughout the process since uncertainty was irrelevant. Every step (s) in a hierarchical model can be described by the general mass and energy balances, so that

\sum \dot{m}_{s,out} = \sum \dot{m}_{s,in} - \sum \dot{m}_{s,storage}    (18)

\sum \dot{E}_{s,out} = \sum \dot{E}_{s,in} - \sum \dot{E}_{s,storage}    (19)

where the mass (energy) flows ṁ_s,out (Ė_s,out) leaving the object's balance system equal the mass (energy) inflows ṁ_s,in (Ė_s,in) minus the mass (energy) flows that stay in the balance system ṁ_s,storage (Ė_s,storage).


Figure 6. Reducing uncertainty and improving model accuracy. This figure shows how the process model becomes more detailed in every uncertainty-resolution iteration and better suited to predict the process's behavior. In the last step, it is easy to choose the alternative that promises to be best in all possible cases.

The steps are connected through mass and energy flows, where each mass flow may contain more than one component (C), so that it equals

\dot{m}_s = \sum_C \dot{m}_{s,C}    (20)

Every step is an aggregation of all the more-detailed steps within its balance boundaries. One can, therefore, substitute a rudimentary step by a set of more-detailed and accurate step models, which helps increase forecast accuracy. In this paper, however, we just provide rudimentary models for reaction and separation steps that are suited to describe every chemical process. For more-detailed process models, we refer to more-specialized literature (e.g., for distillation and fractionation columns72-74).

5.1.1. Reaction Step. The mass outflow of a product (ṁ_P) is determined by the mass inflow of the major reactant (ṅ_{i,0}M_i), its conversion (X_i), and the selectivity toward the product (S_P),

\dot{m}_P = \dot{n}_P M_P = \dot{n}_{i,0} M_i X_i S_P    (21)

The outgoing components' mass flows equal the mass inflows plus the mass flow changes due to one or more reactions. The reaction step's energy balance is usually dominated by its heat balance, in which the heat outflow (Q̇_s,out) equals the inflow (Q̇_s,in) plus the chemical heat contribution (Q̇_s,reaction) and additional heat intakes (Q̇_s,else) minus the energy that stays in the step (Q̇_s,storage):

\sum \dot{Q}_{s,out} = \sum \dot{Q}_{s,in} + \sum_i \dot{Q}_{s,reaction\,i} + \sum \dot{Q}_{s,else} - \sum \dot{Q}_{s,storage}    (22)

where Q̇_s,reaction i is calculated from the reaction enthalpy (ΔH_R) and from the amount of the raw material consumed during the reaction:

\dot{Q}_{s,reaction\,i} = \sum_P \frac{\dot{m}_C X_C S_P}{\nu_C M_C} \Delta H_R    (23)

5.1.2. Separation Step. Separation steps change the ingoing mass flows' compositions and split them into outgoing streams. This behavior can be described by applying split factors (x_{s,C,out}) to the ingoing components' mass flows, so that the outgoing mass flows (ṁ_s,C,out) are determined by

\dot{m}_{s,C,out} = x_{s,C,out} \sum_{C,in} \dot{m}_{s,C,in}    (24)

while the step's energy consumption (Ė_s,treatment) depends on a specific energy consumption factor multiplied by the sum of the ingoing mass flows (ṁ_s,C,in):

\dot{E}_{s,treatment} = e_{s,treatment} \sum_C \dot{m}_{s,C,in}    (25)

The specific energy consumption factor (e_{s,treatment}) depends on the treatment technology. It can be approximated for a rectification step, for example, by calculating the energy necessary to heat (c_p ΔT), and to vaporize (n ΔH_v), the mixture n times:

e_{s,treatment} = c_p \Delta T + n \Delta H_v    (26)

An alternative approach for deriving the specific energy consumption is to use benchmark values from existing processes, or estimates. Research,75 for example, indicates that the specific energy consumption of treating polluted solvents can be very accurately predicted by assuming that

e_{s,treatment} = 1-2 kg of steam (0.9 kWh/kg of polluted solvent)    (27)
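To show how these rudimentary step models can be chained, the following Python sketch implements eqs 21, 24, and 25; the conversion, selectivity, split factors, and the specific energy consumption (taken as the 0.9 kWh/kg benchmark of eq 27) are illustrative assumptions only.

```python
# Sketch of the rudimentary reaction and separation step models (eqs 21, 24, 25),
# using made-up conversion, selectivity, split factors, and energy data.

def reaction_step(n_i0_kmol_h, M_i, X_i, S_P):
    """Eq 21: product mass flow m_P = n_i0 * M_i * X_i * S_P (kg/h)."""
    return n_i0_kmol_h * M_i * X_i * S_P

def separation_step(m_in_kg_h, split_out, e_treatment_kwh_kg):
    """Eqs 24 and 25: split the total ingoing flow by component-specific split
    factors and estimate the treatment energy from the total ingoing flow."""
    total_in = sum(m_in_kg_h.values())
    m_out = {c: split_out[c] * total_in for c in split_out}   # eq 24
    energy_kwh_h = e_treatment_kwh_kg * total_in              # eq 25
    return m_out, energy_kwh_h

if __name__ == "__main__":
    m_product = reaction_step(n_i0_kmol_h=10.0, M_i=50.5, X_i=0.9, S_P=0.95)
    m_out, energy = separation_step({"product": m_product, "solvent": 200.0},
                                    split_out={"product": 0.7, "solvent": 0.3},
                                    e_treatment_kwh_kg=0.9)   # cf. benchmark in eq 27
    print(m_product, m_out, energy)
```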

5.2. Model Development: Problem Statement, Balance System, and Assumptions. In this paper, we use a short version of a case study to show the challenges of decision making under uncertainty. A longer version discussing the difficulties of analyzing the investment proposal is presented elsewhere.76 The company under investigation produced 36 000 tons of methylcellulose in 2004. This product is used as wallpaper glue and as adhesive for cement and mortar, and the basic raw materials are cellulose and methyl chloride. Producing methylcellulose yields a byproduct stream consisting of dimethyl ether, methyl chloride, and some organic condensate. Right now, the waste stream (3 100 tons in 2004) is burned in a company-owned incineration plant. The idea behind the investment proposal is to decompose dimethyl ether to methyl chloride using HCl. The proposed process and the 2004 mass flows are shown in Figure 7.

Figure 7. Process scheme. The idea behind the project proposal is to decompose dimethyl ether (DME) to methyl chloride (MeCl), which could then be recycled as raw material to the methylcellulose production process (OrgC = organic condensate).

The questions are whether investing in the dimethyl ether decomposition process would increase the company's shareholder value and reduce the ecological impact. Economic impact analyses are restricted to the company's boundaries, while ecological assessments consider the mass flow changes in all up- and downstream processes that are affected by the decision. Analyzing the economic impact, therefore, requires looking at the impacts on the two affected business units (methylcellulose production and incineration) and on the company to derive the total NPV.

The ecological analysis additionally has to deal with the upstream processes of producing methyl chloride, methanol, HCl, chlorine, and energy. The cellulose production process and the product's disposal are irrelevant since the amount of methylcellulose produced is unaffected by the investment decision. The systems under investigation are shown in Figure 8. The process model includes all relevant processes, and the two alternatives are investing in the decomposition process or leaving everything as it is. In the second case, the byproduct stream would still be burned, while investing in the decomposition process would mean recycling the byproduct stream to the methylcellulose production process. All process steps are modeled by using eqs 18-27. The specific impact factors are taken from the published tables. The economic life span of the process is assumed to be 20 years. To incorporate uncertainty into the model, we have assumed that uncertainty about the economic data, such as methylcellulose demand, market prices, currency exchange rates, and inflation, grows as the forecast horizon moves toward the future. All technological uncertainties (e.g., heating values) are modeled by time-independent distributions. Since uncertainty about specific ecological impact values is assumed to be high, we have cut away negative outcomes so that chemicals whose adverse impacts are almost proven will not appear to contribute to nature's health. The technological feasibility of the process is taken for granted so that the success probability of the development project is 100%. We have used Monte Carlo simulation1 (software: @risk, Microsoft Excel, Latin-Hypercube sampling, 10 000 iterations) to propagate uncertainty through the process models. The algorithm picks one value out of every input value distribution and uses the underlying models to calculate one outcome value during each iteration. Repeating this step many times yields outcome value distributions that can be used for analyses and decision-making purposes.
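The propagation procedure described above can be reproduced in a few lines of Python; the sketch below uses plain random sampling rather than Latin-Hypercube sampling and an arbitrary toy model, so the resulting numbers have nothing to do with the case-study data.

```python
# Toy Monte Carlo propagation: sample uncertain inputs, push them through a model,
# and collect the outcome distribution (plain random sampling, not Latin-Hypercube).
import random

random.seed(1)
N = 10_000

def toy_model(price, demand, conversion):
    """Arbitrary placeholder model returning a single outcome value."""
    return price * demand * conversion - 1.0e6   # e.g., contribution margin minus fixed cost

outcomes = []
for _ in range(N):
    price = random.triangular(0.9, 1.3, 1.1)               # uncertain market price, euro/kg
    demand = random.normalvariate(36_000_000, 2_000_000)   # uncertain demand, kg/a
    conversion = random.uniform(0.05, 0.08)                # uncertain technical parameter
    outcomes.append(toy_model(price, demand, conversion))

outcomes.sort()
print("P5 :", round(outcomes[int(0.05 * N)]))
print("P50:", round(outcomes[int(0.50 * N)]))
print("P95:", round(outcomes[int(0.95 * N)]))
```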

Figure 8. Case study's relevant balance systems. The economic analysis deals with the investment's impact on the two affected business units (methylcellulose and incineration) and on the company, while the ecological analyses have to consider all affected processes from raw material extraction to methylcellulose production (NG = natural gas, Emiss. = emissions, and MeCell = methylcellulose).


Table 1. Project's Economic Impact on the Methylcellulose Business Unit, Which Shows the Major Expected Cash Flows and Costs and the NPV Calculations, Suggesting That the Investment Project's NPV Is Significantly Positive and That the Project Should Be Undertaken

methylcellulose business unit (mil. euro)     2004    2005    2006    2007    ...    2023
Cost Savings
  methyl chloride purchase                    1.86    1.96    2.08    2.20    ...    5.44
  + incineration                              1.32    1.39    1.48    1.56    ...    3.89
  + others                                    0.02    0.02    0.02    0.02    ...    0.06
  total                                       3.20    3.36    3.58    3.78    ...    9.39
Additional Costs
  raw materials                               0.99    1.04    1.10    1.16    ...    2.85
  + energy                                    0.57    0.60    0.64    0.67    ...    1.67
  + depreciation                              0.43    0.43    0.43    0.43    ...    0.43
  + others                                    0.31    0.31    0.32    0.32    ...    0.90
  total                                       2.30    2.38    2.49    2.58    ...    5.43
Net Present Value
  profit before taxes                         0.90    0.99    1.09    1.19    ...    3.97
  - taxes (tax rate = 40%)                    0.36    0.39    0.43            ...    1.59
  profit after taxes                          0.54    0.60    0.65    0.71    ...    2.38
  + depreciation                              0.43    0.43    0.43    0.43    ...    0.43
  = cash flow total                           0.97    1.03    1.08    1.14    ...    2.81
  sum of discounted cash flows (at q = 7%)   16.18
  - initial investment outlay (I0)            9.70
  NPV (eq 16)                                 6.48

Table 2. Project's Economic Impact on the Incineration Business Unit; Since This Business Unit Would Lose Sales from Burning the Byproduct Stream, and Since It Would Have to Buy Additional Natural Gas, Investing in the Decomposition Process Would Result in a Significant Negative Economic Impact for the Incineration Business Unit

incineration business unit (mil. euro)        2004    2005    2006    2007    ...    2023
Change in Sales
  burning byproduct stream                   -1.32   -1.39   -1.48   -1.56    ...   -3.89
  + selling HCl (30%)                        -0.18   -0.20   -0.21   -0.22    ...   -0.56
  total                                      -1.50   -1.59   -1.69   -1.78    ...   -4.45
Additional Costs
  natural gas purchase                        0.16    0.17    0.18    0.19    ...    0.47
  + others                                    0.02    0.02    0.02    0.02    ...    0.06
  total                                       0.18    0.19    0.20    0.21    ...    0.53
Net Present Value
  profit before taxes                        -1.68   -1.78   -1.88   -1.99    ...   -4.98
  - taxes (tax rate = 40%)                   -0.67   -0.71   -0.75            ...   -1.99
  = cash flow total                          -1.01   -1.07   -1.13   -1.20    ...   -2.98
  sum of discounted cash flows (q = 7%, eq 16) = NPV   -17.06

The results of the simulations are difference distributions of mass and cash flows, as well as of ecological impact values. This allows us to apply stochastic-dominance analyses to the outcome distributions and to calculate the project's value added as well as the ecological impact. The project's value added is calculated by using eq 16, and the ecological impact is determined by using the Eco-Indicator '99 framework as given by eq 17. The result of the model development is a hierarchical model that can be used to make a first impact assessment. Since we incorporated all relevant process steps into the model, we reduce the risk that wrong balance boundaries would affect the decision, and the distributions reflect the high uncertainty about input values in general and ecological impact factors in particular. We furthermore assume that the two goals, maximizing shareholder value and minimizing the ecological impact, are equally important, but we will see that the final decision is independent of this setting. 5.3. Analyzing the Economic Impact. The first step in this analysis is to determine the economic impact of the investment decision. We have calculated difference distributions of all expected future cash flows, which were then discounted by using the risk-adjusted cost of capital. For analysis purposes, we determine the economic impact on the company and on both affected business units. The results are given in Tables 1-3. These calculations show that, even though the project appears to be valuable from the methylcellulose production unit's point of view (NPV = +6.48 million euro), the impact on the incineration plant's contribution to the company's shareholder value is significantly negative (-17.06 million euro). The reason is that investing in the process would reduce the transfer payments for burning the byproduct stream and, thus, lower the costs and cash outflows of the methylcellulose business unit. However, at the same time, the incineration plant would lose the same amount in sales, which results in the business unit's negative impact. Additionally, the incineration plant right now uses the high-heating-value byproduct stream to burn low-heating-value chemicals. The byproduct stream would, therefore, have to be substituted by natural gas (NG), which in turn adds to the incineration plant's costs.


Table 3. Project's Economic Impact on the Company. The Total Impact Equals the Sum of the Impact on All Business Units, as Summarized in This Table; Since the Company Would Lose Value (-10.59 million euro), the Project Should Not Be Undertaken

aggregated economic impact                methylcellulose      incineration plant     company
on the company (mil. euro)                business unit        business unit
  present value of cash flows                 +16.18               -17.06               -0.88
  - initial investment outlay (I0)             -9.70                                    -9.70
  NPV (at q = 7%, eq 16)                       +6.48               -17.06              -10.59

Finally, summing up the effects on both business units results in the impact on the company's shareholder value. Since the NPV is significantly negative (-10.59 mil. euro), the process should not be installed. The total impact on the company is significantly negative, and the chance that the investment's true impact is positive is very low. However, to make sure that this assessment is correct, we have analyzed the risk of making a wrong decision. By analyzing whether broader distributions could affect the decision, we found that increasing input value uncertainty would still lead to a negative NPV, and the chance that future discounted cash flows could be high enough to pay for the initial investment outlay is negligible. This finding is supported by applying a standard sensitivity analysis to the decision problem, which shows that the total NPV is very insensitive to changes in the most relevant input factors. Some of the analyses are shown in Figure 9. Even though changes in the input factor realizations would affect both business units' contributions to the company's value, the total NPV stays almost constant. The result of the economic analysis is that the NPV of the investment project is significantly negative. By applying the NPV decision rule (invest only in a project whose value added is positive), we should not invest in the project. We furthermore found, by applying the uncertainty analysis provided in this paper, that the risk of making a wrong decision is very low. The decision maker can, therefore, be very confident in this finding, and no further model quality improvement is necessary to make the decision.
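Such a one-factor-at-a-time sensitivity check can be scripted directly; the Python sketch below varies a single input (a hypothetical market growth rate) and recomputes a toy NPV, only to illustrate the procedure.

```python
# One-factor-at-a-time sensitivity sketch: vary one input, recompute a toy NPV
# (hypothetical model and numbers, only to illustrate the procedure).

def toy_npv(market_growth, q=0.07, cf0=1.0, years=20, outlay=9.7):
    """Toy NPV: a first-year cash flow cf0 growing with market_growth, discounted at q."""
    pv = sum(cf0 * (1 + market_growth) ** (t - 1) / (1 + q) ** t for t in range(1, years + 1))
    return pv - outlay

for g in (0.00, 0.02, 0.04, 0.06):
    print(f"market growth {g:.0%}: NPV = {toy_npv(g):5.2f} mil. euro")
```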


Figure 9. Sensitivity analyses on the project’s added value. The graphs show that varying the growth rate of the methylcellulose market, the cost of capital, and the transfer price between the business units affects the business units’ results but leaves the company’s total value unaffected.

Figure 10. Impact of investing in the decomposition process on the process network’s environmental performance. All three Eco-Indicator metrics indicate that uncertainty about the ecological impacts is too high to decide which alternative would be best. Only the primary energy and the natural gas (NG) consumption would be significantly reduced by investing in the process.

If maximizing shareholder value were the only objective, it would be possible to stop the analysis. 5.4. Analyzing the Ecological Impact. We use stochastic dominance to analyze the investment's effect on the environmental impact. The graphs in Figure 10 show relative distributions of the two alternatives' effects on the ecological impact:

if all values would lie entirely >1, the decomposition process would be the better alternative, and if the distributions would lie entirely