Ind. Eng. Chem. Res. 2002, 41, 1826-1839

Comprehensive Design of a Sensor Network for Chemical Plants Based on Various Diagnosability and Reliability Criteria. 1. Framework

Mani Bhushan and Raghunathan Rengaswamy*

Department of Chemical Engineering, Indian Institute of Technology Bombay, Powai, Mumbai 400 076, India

Fault diagnosis is an important area in the chemical process industry and has attracted considerable attention from researchers in the recent past. All approaches for fault diagnosis depend critically on the sensors measuring the important process variables in the system. In this paper, a reliability maximization based optimization framework for sensor location from a fault diagnosis perspective is presented. The formulation is aimed toward maximizing the reliability of the fault monitoring system while satisfying the constraints imposed on the system. A minimum-cost model which minimizes the cost of the fault monitoring system while ensuring that the solution provides a minimum threshold reliability is also presented. A one-step optimization formulation which maximizes reliability and, among the various solutions with the same reliability, chooses the one with minimum cost is discussed in this paper. A methodology for obtaining the “best” sensor location irrespective of the single/multiple fault assumption is also presented. In the first part of this two-part series of papers, the sensor location framework is discussed. In the second part, the sensor location procedure is applied to a large flowsheet, the Tennessee-Eastman flowsheet. Various issues involved in the application of the reliability maximization based optimization procedure are explained using this case study.

* To whom correspondence should be addressed. Current address: Department of Chemical Engineering, Clarkson University, Potsdam, NY 13699-5705. E-mail: [email protected].

1. Introduction and Literature Survey

For safe and optimal operation of a chemical plant, it is essential to quickly detect and identify faults when they occur. Hence, an efficient fault diagnosis methodology is very useful for modern day complex chemical plants. The increasing importance of fault diagnosis in the chemical process industry has led several researchers to work in this area. Whenever a process encounters a fault, the effect of the fault is propagated to all or some of the process variables. The main objective of the fault diagnosis step is to observe these fault symptoms and determine the root cause for the observed behavior. The fault detection step involves a comparison of the observed behavior of the process to a reference model. This observed fault-symptom pattern forms the basis for the fault identification step. Thus, the efficiency of the diagnostic system depends critically on the location of the sensors monitoring important process variables. With hundreds of process variables available for measurement in any chemical plant, selection of crucial and optimum sensor positions poses a unique problem. Hence, there is a need for an automated procedure to design a cost-optimum, fool-proof, and highly reliable fault monitoring system for the safe operation of the chemical processes.

In our previous work, we have developed procedures to locate sensors for fault diagnostic observability based on digraph (DG)1 and signed digraph (SDG)2 representations of the process. However, these approaches were largely qualitative and hence did not use the quantitative information that might be available about the system. Further, the notion of reliability as discussed

in this paper was not discussed. Hence, the aim of the first part of this two-part series of papers is the development of a comprehensive design strategy for the design of sensor locations that takes into consideration the available quantitative information such as fault occurrence and sensor failure probabilities while handling the various constraints that might be imposed on the design problem.

There have been a few researchers who have worked on the problem of sensor location. Lambert3 used probabilistic importance of events in fault trees to decide optimal sensor locations. Ali and Narasimhan4 introduced the concept of reliability of a variable. They described a graph-theoretic procedure for maximizing the reliability of linear processes in the presence of sensor failures. The reliability of the process was defined as the smallest reliability among all of the variables. They also extended this procedure for the optimal design of a redundant sensor network for linear processes.5 A design procedure for a nonredundant sensor network for bilinear processes was also discussed.6 Sen et al.7 presented a genetic algorithm based approach that can be applied for the design of nonredundant sensor networks using different objective functions. Bagajewicz8 proposed an optimization formulation to obtain cost optimal sensor networks for linear systems subject to constraints on precision, residual precision, and error detectability. Bagajewicz and Sanchez9 merged the concepts of the degree of redundancy for measurements and the degree of observability for unmeasured variables into a single concept, the degree of estimatibility of a variable. They presented optimization formulations for the design of sensor networks to achieve different degrees of estimatibility of key variables. A minimum-cost model and a generalized reliability model for the design of a reliable sensor network was also presented by them.10 The connection of the minimum-



Figure 1. Two-level strategy.

cost model to the maximum-reliability model,10 and to the maximum-precision model,11 was also established. Alheritiere et al.12 dealt with the optimization of resources allocated to various sensors for improving the precision of a parameter. Recently, Bagajewicz and Sanchez13 presented a framework to perform reallocation and upgradation of existing instrumentation to achieve maximum precision of selected parameters.

The optimization approaches summarized here are for locating sensors to maximize objectives such as precision, estimatibility of variables, and so on. In this paper, in contrast, the sensor location strategy is presented from a fault diagnosis perspective. The sensor location problem is formulated as an integer programming optimization problem which maximizes the system reliability from a fault diagnosis perspective while satisfying the constraints imposed on the system. The reliability is defined in terms of the probability of a fault occurring and remaining undetected. After presenting the reliability formulation, a cost minimization model is also discussed. A one-step optimization procedure which generates the most reliable sensor network and, among the multiple solutions, chooses the one with minimum cost is then presented. Use of these formulations by incorporating various process-specific constraints is also discussed. Various other related formulations are also presented.

2. General Solution Philosophy

Most of the previous work in the literature poses the sensor location problem as an optimization problem. One of the key ideas in our approach to solve the sensor location problem from the fault diagnosis perspective is the decoupling of the cause-effect modeling from the optimization formulation. This is done to facilitate the use of various techniques for cause-effect modeling in conjunction with various optimization formulations to solve a particular sensor network design problem of interest. This two-level strategy is illustrated in Figure 1.

The cause-effect modeling is integrated with the optimization formulation through the use of the idea of fault sets. These are sets of sensors that are generated based on cause-effect modeling which then form the basis for sensor location. For example, if an observability problem is solved, these sets are simply sets of variables that are affected by the faults; if a resolution problem is being solved, then the sets consist of variables that can discriminate between the various faults. The generation of these fault sets based on a DG model was discussed by us in work by Raghuraj et al.,1 and generation of these sets based on a SDG model was discussed by us in work by Bhushan and Rengaswamy.2 In general, in DG and SDG methods, given a process

with its faults and measurable variables (where sensors may be placed), the cause-effect information is represented in a matrix, which will be referred to as the bipartite matrix, D. The rows of this matrix correspond to faults, and the columns correspond to measurable nodes (or sensor nodes). The (i,j)th entry (dij) of this matrix is 1 if fault i affects node j and is zero otherwise. This bipartite matrix forms the basis for the generation of the fault sets. However, these are not the only methods for the generation of fault sets. The fault sets may also be generated by, say, querying an experienced process engineer or operator. Also, modifications to the process SDG may be performed to remove spurious effects. The order of magnitude approach14 is one way of achieving this. For the case study presented in part 2 of this series, we have used a modified SDG representation of the process to generate this information. The process SDG was modified by not only considering the signs of arcs but also associating a gain with each arc. This enables use of order of magnitude arguments to achieve better resolution of the faults. This will be discussed in part 2 of this series. These fault sets could also be generated based on a completely quantitative model of the process.

The next important advantage of this approach is that the generation of the fault sets can be tailored to the problem that is being solved. Hence, various diagnostic criteria such as observability, single-fault resolution, and multiple-fault resolution can be used in the optimization formulation. Optimization formulations can also be based on the reliability of the sensor network designed, the cost of the fault monitoring system, and/or combinations thereof. Further, various information such as fault probability and sensor failure probability can be incorporated in our framework. Hence, the proposed approach provides a transparent framework that can be used to solve various sensor location problems while being generally applicable for a wide variety of cause-effect models.

In this paper, we will focus on the optimization formulations for the sensor network design problem. As mentioned before, cause-effect modeling for fault diagnosis has been discussed in detail in our previous work and hence will not be dealt with in this part of the series of papers. However, in part 2 of this series, where the application of the proposed approach is demonstrated on the Tennessee-Eastman (TE) case study, we will discuss the cause-effect modeling and the generation of fault sets for the case study in detail. Hence, in this part, we will discuss various optimization formulations and the use of the fault sets in these formulations for a comprehensive solution to the sensor network design problem.

2.1. Optimization Formulation for Reliability Maximization.

The aim of any sensor network design is to maximize the system reliability. A sensor network is highly reliable if the probability of any fault occurring without being detected is low. The formulation that we propose is based on maximizing the minimum reliability among all of the faults. The system reliability is defined as the lowest reliability among all faults. This is based on the philosophy that a chain can be no stronger than its weakest link.4

For a given process, the faults of that process have certain occurrence probabilities. The various available sensors also have certain failure probabilities, which depend on the type of the sensor and the variable being



Figure 2. Example to explain unobservability.

measured. The only way in which a fault can occur without being detected is that the fault occurs and simultaneously the sensors covering that fault fail. The probability of such an event taking place is the product of the fault occurrence and corresponding sensor failure probabilities. This product, Ui, is referred to as the unobservability value of that fault:

Ui = fi ∏_{j=1}^{n} sj^(dij xj)     (1)
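To make eq 1 concrete, the short Python sketch below evaluates the unobservability values for the small system of example 1 that follows; the fault and sensor probabilities are inferred from the products quoted in that example, so they should be read as illustrative values rather than as data taken directly from Figure 2.

```python
import math

def unobservability(f, s, D, x):
    """U_i = f_i * prod_j s_j**(d_ij * x_j), i.e., eq 1."""
    return [fi * math.prod(s[j] ** (D[i][j] * x[j]) for j in range(len(s)))
            for i, fi in enumerate(f)]

# Data as implied by example 1 (probabilities inferred from the quoted products).
f = [0.01, 0.02, 0.01]          # fault occurrence probabilities for F1-F3
s = [0.01, 0.1]                 # failure probabilities of sensors at S1, S2
D = [[1, 0], [1, 1], [0, 1]]    # D[i][j] = 1 if fault i affects node j

print(unobservability(f, s, D, [1, 1]))  # sensors at S1 and S2 -> [1e-4, 2e-5, 1e-3]
print(unobservability(f, s, D, [1, 0]))  # only S1 measured     -> [1e-4, 2e-4, 1e-2]
```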

The concept of unobservability is illustrated in the following example.

Example 1. Consider Figure 2, which is a cause-effect (bipartite) representation of a process with three faults and two measurable nodes. In this process, if sensors are placed at both nodes S1 and S2, then the unobservability values of faults F1, F2, and F3 are 0.0001 (0.01 × 0.01), 0.00002 (0.02 × 0.01 × 0.1), and 0.001 (0.01 × 0.1), respectively. On the other hand, if only node S1 is measured, the unobservability values of faults F1, F2, and F3 are 0.0001 (0.01 × 0.01), 0.0002 (0.02 × 0.01), and 0.01, respectively.

The reliability of detecting a fault is inversely proportional to the unobservability value of that fault. Maximizing the reliability of the system is then equivalent to minimizing the unobservability of the system. Because we want to maximize the minimum reliability of the system, this is equivalent to minimizing the maximum unobservability of the system. With this aim, the optimization formulation for sensor network design to maximize the system reliability is as follows:

Problem Ia.

min_{xj} [max_{∀i} Ui]     (2)

subject to

∑_{j=1}^{n} cj xj ≤ C*     (3)

xj ∈ Z+, j = 1, ..., n     (4)

In the above formulation, Ui is as given by eq 1. Constraint (3) ensures that the cost of the fault monitoring system is not more than the available resource, C*, where cj is the cost of placing a sensor at node j. For the problems considered in this work, cj's will be considered to be positive constants. The decision variables (xj) are allowed to take nonnegative integer values which may be greater than 1. In other words, hardware redundancy is allowed. For example, xk = 2 means that two sensors have to be placed on node k. This feature makes the approach practical because it makes sense to use more than one sensor to measure a variable if that particular sensor has a high failure probability or if the covered fault has a high occurrence probability. Another important point to note is that in the formulations presented in this paper (including the one presented above), we are considering only one type of measurement for a given variable. Cases where a variable may be measured using different types of sensors (with possibly different costs and failure probabilities) can be easily incorporated in the formulations presented here.

The optimal solution of problem Ia will give sensor locations which maximize the system reliability. As formulated above, problem Ia is a nonlinear (objective function is nonlinear) integer programming (decision variables are nonnegative integers) problem which is not easy to solve exactly. It turns out that, by a suitable transformation, the problem can be converted to a linear integer programming problem. This is discussed below.

Equivalent Linear Objective Function. The objective function (2) can be replaced by a linear objective function:

min_{xj} [max_{∀i} ln(Ui)]     (5)

where

ln(Ui) = ln(fi) + ∑_{j=1}^{n} dij xj ln(sj)     (6)

ln(Ui) is linear in the decision variables xj and is obtained by taking the natural log on both sides of eq 1.

Claim I. Objective function (5) is equivalent to objective function (2).

Proof of Claim I. The equivalence of the two objective functions is based on the fact that the natural log, ln(x), is a monotonically increasing function of x for x > 0. This follows from the fact that the derivative of ln(x) is a positive quantity for positive x:

d ln(x)/dx = 1/x, which is > 0, ∀ x > 0     (7)

Hence,

x > y ⇒ ln(x) > ln(y), ∀ x, y > 0     (8)

The unobservability of a fault i, Ui = fi ∏_{j=1}^{n} sj^(dij xj), is always nonnegative, i = 1, ..., m. Given a set of selected sensors, the fault i for which Ui is maximum will also give the maximum value of ln(Ui). Hence, minimizing the maximum Ui is the same as minimizing the maximum ln(Ui), i = 1, ..., m. Therefore, objective function (5) is equivalent to objective function (2).

The objective as given by eq 5 is still not in the standard integer linear programming (ILP) form because it involves minimization of the maximum value. By a simple modification, the problem is converted to the standard ILP form.

Problem I.

min_{xj} U     (9)


subject to

U ≥ ln(Ui), i = 1, ..., m     (10)

∑_{j=1}^{n} cj xj ≤ C*     (11)

xj ∈ Z+, j = 1, ..., n     (12)

Claim II. Objective function (5) is equivalent to objective function (9) with the constraint (10).

Proof of Claim II. The proof is obvious because constraint (10) ensures that the quantity U being minimized in objective function (9) is ≥ {max (ln(Ui), i = 1, ..., m)}. Because U is being minimized, it will be equal to {max (ln(Ui), i = 1, ..., m)}. This is equivalent to using objective function (5).

Based on claims I and II, the equivalence of problems I and Ia is proved next.

Claim III. Problem I is equivalent to problem Ia.

Proof of Claim III. By claim II, the objective function (9) together with constraint (10) of problem I is equivalent to objective function (5). Claim I establishes the equivalence of objective function (5) to objective function (2). Hence, objective function (9) along with constraint (10) is equivalent to objective function (2). The cost constraint (3) and the integer requirements (xj ∈ Z+) are present in both problems I and Ia. Therefore, problem I is equivalent to problem Ia.

Problem I is now in standard ILP form. The solution to the sensor location design problem, as posed here, gives the optimal set of sensors for maximizing the system reliability (minimizing the system unobservability), subject to the cost constraint. It is also important to note that the cost constraint can be written in terms of the total number of available sensors. Also, depending on the problem at hand, different constraints for different sensors can be imposed. A minimum-cost model is presented next.

3. Minimum-Cost Model

The minimum-cost model is posed as the following optimization problem:

Problem II.

min_{xj} ∑_{j=1}^{n} cj xj     (13)

subject to

U ≤ U*     (14)

U ≥ ln(Ui), i = 1, ..., m     (15)

xj ∈ Z+, j = 1, ..., n     (16)

In the above formulation, U is the system unobservability (maximum unobservability among all faults), Ui is the unobservability value of fault i as defined in eq 1, and U* is the threshold value for the system unobservability. This formulation selects the minimum-cost sensor network which achieves the required system reliability. Note that, instead of having a bound on the system unobservability as above, the designer may choose to have bounds on the individual fault unobservabilities. In the next section, some issues related to the application of the proposed formulations for a given process are discussed.

4. Application of the Proposed Formulation

Two main issues are discussed in this section: (i) use of the optimization framework to come up with the “best” design and (ii) reduction in the number of constraints for the optimization problem.

4.1. Obtaining the “Best” Sensor Location.

An important aspect of the sensor location formulations presented above is that their application is not restricted to just the observability (of faults) case. As shown by Raghuraj et al.,1 any other problem (single-fault resolution, double-fault resolution, etc.) can be converted to a suitable observability problem. This is briefly discussed below:

(i) Single-fault resolution: For each pair (i, j) of faults, the fault set

Bij = Ai ∪ Aj - Ai ∩ Aj     (17)

is generated. In the above expression, Ai and Aj are the sets of measurable nodes affected by faults i and j, respectively. Bij represents the set of nodes which can be used for differentiating between faults i and j. Each Bij is treated as a pseudofault and added to the bipartite matrix D, discussed earlier in this paper. Now, the observability of Bij refers to the ability of the fault monitoring system to distinguish between faults i and j. The probability fij of occurrence of the pseudofault Bij is defined as the minimum of the probabilities of faults i and j:

fij = min (fi, fj)     (18)

The above definition of fij is based on an intuitive understanding of the concept of the pseudofault Bij. This fault affects only those sensors which may be used for resolving faults i and j and, hence, represents the probability with which the two original faults would need to be resolved. The use of the “min” function in this definition conveys the idea that the probability of requiring resolution between faults fi and fj is no more than the minimum probability among the two faults, and hence its “weightage” (as given by its probability) in the sensor location framework is less than or equal to that of the two individual faults themselves. This makes sense because the sensor network designer's aim would be to first observe the individual faults and then worry about resolving them from each other. This probability could also be defined strictly from probabilistic arguments, and a similar analysis could be carried out using the same formulations as proposed in this paper.

(ii) Multiple-fault resolution: For illustration, consider the specific case when a maximum of two faults can occur at a time. For each pair (i, j) of faults, the set Aij = Ai ∪ Aj is formed. Aij represents the set of nodes which are affected when both of the faults i and j occur together. The set Aij is treated as a pseudofault and added to the original set of faults. The probability of occurrence of this fault is fi × fj, the product of the occurrence probabilities of faults i and j. To generate the sets for resolution, the single-fault resolution algorithm as discussed above is applied to this extended set of faults.
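As a rough illustration of how these pseudofaults can be built from the affected-node sets Ai, the sketch below constructs the single-fault resolution sets Bij with probabilities fij (eqs 17 and 18) and the double-fault sets Aij with probabilities fi × fj. The function names are ours, and the data used in the check anticipate example 2 below.

```python
from itertools import combinations

def resolution_pseudofaults(A, f):
    """Single-fault resolution: B_ij = (A_i U A_j) - (A_i n A_j), f_ij = min(f_i, f_j)."""
    B, fB = {}, {}
    for i, j in combinations(range(len(A)), 2):
        B[(i, j)] = A[i] ^ A[j]          # symmetric difference of the affected-node sets
        fB[(i, j)] = min(f[i], f[j])
    return B, fB

def double_fault_pseudofaults(A, f):
    """Double-fault pseudofaults: A_ij = A_i U A_j with occurrence probability f_i * f_j."""
    A2, f2 = {}, {}
    for i, j in combinations(range(len(A)), 2):
        A2[(i, j)] = A[i] | A[j]
        f2[(i, j)] = f[i] * f[j]
    return A2, f2

# Affected-node sets and fault probabilities of example 2 (see below).
A = [{"S1", "S2"}, {"S2"}, {"S2", "S3"}]
f = [0.01, 0.02, 0.03]
print(resolution_pseudofaults(A, f))    # B12 = {S1}, B13 = {S1, S3}, B23 = {S3}; f12 = f13 = 0.01, f23 = 0.02
print(double_fault_pseudofaults(A, f))  # A12 = {S1, S2}, A13 = {S1, S2, S3}, A23 = {S2, S3}; 2e-4, 3e-4, 6e-4
```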



The same methodology can be applied to cases when more than two faults can occur simultaneously. Raghuraj et al.1 have given the above procedures in detail for the DG representation of the process. Bhushan and Rengaswamy2 have given the procedures when the process SDG is used.

Depending on the scenario being considered (single fault, double fault, etc.), the bipartite matrix D is appropriately generated as discussed above. The reliability maximization and cost minimization formulations can then be applied to obtain the optimal sensor location for the case considered. For a given available cost, for different scenarios, one may obtain different reliabilities and different sensor networks. The question that naturally arises then is, given a process and available resource, is it possible to design a “best” sensor network irrespective of the assumed scenario (single fault, double fault, etc.)? It turns out that, by a systematic application of the maximum-reliability formulation, it is possible to achieve such a design. The procedure to obtain the best design for a given process is explained through the following example.

Figure 3. Bipartite graph for example 2.

Example 2. Consider the bipartite graph shown in Figure 3. The process represented by the graph consists of three faults (F1, F2, F3) and three measurable nodes (S1, S2, S3). The probabilities of occurrence of the faults are f1 = 0.01, f2 = 0.02, and f3 = 0.03. The probabilities of failure of the sensors available to measure the three measurable nodes are s1 = 0.1, s2 = 0.2, and s3 = 0.3. The sets of nodes affected by the three faults are A1 = [S1, S2], A2 = [S2], and A3 = [S2, S3]. For the purpose of illustration, we will assume all of the sensors to have the same cost. The resource available can then be stated in terms of the number of available sensors. Consider the specific case of two available sensors. Depending on the scenario considered, different sensor locations may be obtained:

(i) Only observability of faults is considered: For this case, the optimal sensor network is [S2, S2], that is, placing two sensors on node S2. The unobservabilities of the faults then are U1 = 4 × 10^-4, U2 = 8 × 10^-4, and U3 = 1.2 × 10^-3. The system unobservability U is the maximum unobservability. Hence, U = max (U1, U2, U3) = 1.2 × 10^-3.

(ii) Resolution for the single-fault case is also considered: For this case, three pseudofaults are constructed. They are

B12 = A1 ∪ A2 - A1 ∩ A2 = [S1]
B13 = A1 ∪ A3 - A1 ∩ A3 = [S1, S3]
B23 = A2 ∪ A3 - A2 ∩ A3 = [S3]

The probabilities of occurrence of these pseudofaults are

f12 = min (f1, f2) = 0.01
f13 = min (f1, f3) = 0.01
f23 = min (f2, f3) = 0.02

These three pseudofaults are added to the original set of faults. Performing sensor location on this extended system gives [S2, S3] as the optimal sensor network. With these sensors, the unobservabilities of the faults are U1 = 2 × 10^-3, U2 = 4 × 10^-3, U3 = 1.8 × 10^-3, U12 = 0.01, U13 = 3 × 10^-3, and U23 = 6 × 10^-3. The system unobservability then is U = max (U1, U2, U3, U12, U13, U23) = 0.01.

(iii) Simultaneous occurrence of two faults is also considered: The probabilities of the simultaneous occurrence of faults are (F1, F2) = 2 × 10^-4, (F1, F3) = 3 × 10^-4, and (F2, F3) = 6 × 10^-4. All of these probabilities are less than the system unobservability obtained for the single-fault resolution case ii above. Hence, even if double faults are considered, the optimal sensor location obtained in the previous case will not change. It is also clear that consideration of any other scenario where more than two faults can occur simultaneously will also not change the optimal sensor location obtained by solving the reliability maximization problem for the single-fault resolution case. Hence, for the given cost (the number of sensors available), the sensor location obtained by solving for the single-fault resolution case is the “best” sensor location irrespective of the scenario being considered. For a different (higher) given cost, the double-fault case (or other scenarios) may result in a different sensor location.

The above example illustrates the methodology to be followed to obtain the best design for a given process and a given total resource. This approach is summarized below:

1. Solve the problem for the first level (single-fault assumption). Calculate the unobservability value of the system.

2. Generate the pseudofaults for the next level (say up to simultaneous occurrence of r faults at a time). If the probability of occurrence of all of the new faults generated at this level is less than the system unobservability obtained by solving the reliability maximization problem at the previous level, then stop. Otherwise, generate sets corresponding to resolution of faults for the current level, and solve the maximum-reliability problem. Then go to the next level (simultaneous occurrence of up to r + 1 faults at a time), and continue this procedure.

It is possible that while generating new faults, some of the faults (corresponding to resolution of faults) may not affect any sensor node1 (the A set corresponding to these faults will be empty). These faults will not be considered for the unobservability calculation. In the next section, techniques to reduce the number of constraints in the optimization problems to be solved are discussed.

4.2. Reduction in the Number of Constraints.

A drawback of the approach discussed above is that the size of the problem to be solved increases rapidly as simultaneous occurrence of faults is considered. To illustrate this, consider a system with 10 faults. If only one fault at a time is assumed, then corresponding to these 10 original faults, 10C2 = 45 Bij sets are generated.


The system for which the reliability maximization problem has to be solved therefore now consists of 10 + 45 = 55 faults. If the double-fault scenario is assumed, then 10C2 = 45 new faults corresponding to the simultaneous occurrence of any two original faults are constructed. The system, hence, now consists of 10 + 45 = 55 faults. Applying the single-fault resolution algorithm to this system generates 55C2 = 1485 new faults. The problem, therefore, now consists of 1485 + 55 = 1540 faults on which the reliability maximization algorithm has to be applied. Because each fault corresponds to a constraint in the reliability maximization problem (problem I), this involves solving a problem with 1541 constraints (one cost constraint). For the triple-fault case, the number of constraints would be 15401. It may not be computationally feasible to solve a problem with so many constraints using commonly available ILP packages.

It turns out that the number of constraints can be substantially reduced by removing redundant constraints. A systematic procedure to achieve this is given below. This reduction is possible at two levels.

(i) At the first level, some faults in the original process may be redundant. If two faults, say i and j, affect the same sensors, that is, Ai = Aj, then the one with the lower probability is not considered.

(ii) Once redundant faults in the original process have been removed, then for the scenario considered (double fault, triple fault, etc.), appropriate sets are generated (for example, for the double-fault case, Aij sets are constructed) and added to the original system. This now becomes the system on which the resolution algorithm is applied (generation of Bij sets). The second level of reduction is performed now. Some of the Bij sets may be empty. These are not considered. Also, some other redundant faults are removed. If any Bij ⊇ Bkl and the probability of Bij ≤ the probability of Bkl, then fault Bij can be removed from the problem being solved without affecting the optimal solution. The reason for this is that whenever a sensor is selected to reduce the unobservability of fault Bkl, the unobservability of fault Bij also decreases because that sensor is affected by fault Bij as well. Also, to start with (when no sensors have been selected), the unobservability of fault Bij ≤ the unobservability of fault Bkl. Hence, fault Bij can be removed from the problem without affecting the optimal solution. For example, while locating sensors for the observability of all faults for the bipartite graph of Figure 3, fault F1 need not be considered because A1 ⊃ A2 and f1 < f2.

After the above reductions are performed, the number of constraints in the problem being solved decreases considerably. The reliability maximization algorithm is then applied to this reduced system.

5. Drawback of Maximum-Reliability and Minimum-Cost Models

The maximum-reliability model as presented in the previous sections will give the most reliable sensor network, but it may not give a cost optimal result. The result may not be cost optimal in the sense that there may be some other network with a lower cost which might yield the same reliability. This is explained by the following example.

Figure 4. Bipartite graph for example 3.

Example 3. Consider the bipartite graph of Figure 4. Both of the sensors have the same probability of failure (0.1) but have different costs (75 and 100 units, respectively). The fault occurrence probability of the given fault is 0.01. For a given available amount of 100 units, one sensor can be placed at either S1 or S2. The unobservability value in both of the cases is 0.001, and the cost constraint is also satisfied for both of the cases. Hence, both of the solutions are optimal solutions of the reliability maximization problem. In other words, the problem has multiple solutions, all of which yield optimal results from the point of view of the achieved reliability but have different costs. Similar cases can occur in the cost minimization model, where one may obtain a solution which is least costly and also satisfies the constraint on reliability being greater than the threshold value but does not result in the most reliable network.

In such cases, where multiple objectives are present, one would like to obtain a solution which optimizes the objectives in an ordered way. As a first step, the first objective is optimized. Given the optimal value for the first objective, the second objective is optimized, subject to the first objective being equal to its optimal value. Given the optimal values for the first and second objectives, the third objective is optimized, and so on. This is popularly known as lexicographic optimization.15 For the maximum-reliability model presented in this paper, maximizing the reliability is the first objective and minimizing the cost is the next objective. On the other hand, for the minimum-cost model, minimizing the cost is the first objective and maximizing the reliability is the second objective.

One way for achieving the desired solution will be to solve the problems sequentially. For the maximum-reliability model, this would involve (i) solving the maximum-reliability problem and then (ii) with the reliability fixed at its optimal value, solving the minimum-cost problem. This would imply solving two ILPs, which may not be desirable for large problems. In the next section we present a modified maximum-reliability model, which will directly yield a solution that is optimal in the lexicographic sense.

6. One-Step Optimization

Consider the following modified, maximum-reliability optimization problem.

Problem III.

min_{xj} [max_{∀i} {ln(Ui)} - R xs]     (19)

subject to

∑_{j=1}^{n} cj xj + xs = C*     (20)

xj ∈ Z+, j = 1, ..., n     (21)

xs ∈ R+     (22)



In the above formulation, xs is the slack in the cost constraint, which takes nonnegative real values. R is a positive constant which has to be chosen such that the primary objective (maximizing reliability) still attains its earlier optimal value. In such a case, the negative contribution of the second term (R xs) in the objective function will ensure that the solution is also cost optimal in the lexicographic sense. Among all solutions which yield maximum reliability, the one which has the highest xs will be chosen. Thus, if the constant R is appropriately chosen, the solution to problem III will give a sensor network which will have the least cost among all of the networks which yield the maximum reliability. The proof of existence of such an R for the maximum-reliability model is presented next.

The basic idea behind the selection of the constant R is as follows: Consider the original maximum-reliability problem (problem I). Let the solutions which yield the maximum reliability be elements of a set S. Let all of the solutions which yield the optimal value of the modified problem (problem III) be elements of a set S′. The idea is to choose R such that (i) S′ ⊆ S, that is, no extra optimal solution is introduced as a result of the modification, and (ii) only the least cost elements of S are elements of S′. Note that once condition i is ensured, the negative contribution of the term R xs in the objective function ensures condition ii.

To ensure condition i, it has to be ensured that, for any two feasible solutions y and z of problem I, if the objective function value with y is greater (lesser) than the objective function value with z, then the same relationship should hold between their objective function values for the modified problem. That is,

if   ln(Uy) > ln(Uz)     (23)
then   ln(Uy) - R xsy > ln(Uz) - R xsz     (24)

Rearranging eq 24,

R(xsy - xsz) < ln(Uy) - ln(Uz)     (25)

Now, consider the two cases:

(i) xsy - xsz < 0. Then,

R > [ln(Uz) - ln(Uy)]/(xsz - xsy)     (26)

where the right-hand side is a negative quantity (from eq 23). This condition is trivially satisfied because we are considering R to be a positive constant.

(ii) xsy - xsz > 0. Then,

R < [ln(Uy) - ln(Uz)]/(xsy - xsz)     (27)

Hence, an R which satisfies

R < min {ln(Uy) - ln(Uz)}/max (xsy - xsz)     (28)

will satisfy inequality (27). An upper bound on xsy - xsz is C*, the available resource:

max (xsy - xsz) ≤ C*     (29)

To get a lower bound on ln(Uy) - ln(Uz), consider ln(Uy):

ln(Uy) = ln(fi) + ∑_{j=1}^{n} dij xj ln(sj),   for some fault i     (30)

It is reasonable to assume that the constants ln(fi) and ln(sj) are rational numbers and, hence, can be expressed as

ln(fi) = pi/qi     (31)

ln(sj) = hj/ej     (32)

where pi, qi, hj, and ej are all integers. The above equation for ln(Uy) can then be rewritten as

ln(Uy) = pi/qi + ∑_{j=1}^{n} dij xj (hj/ej),   for some fault i     (33)

Let

a = LCM (q1, q2, ..., qm, e1, e2, ..., en)     (34)

Then a ln(Uy) and a ln(Uz) are integers, so that

min {a ln(Uy) - a ln(Uz)} ≥ 1     (35)

However,

min {a ln(Uy) - a ln(Uz)} = a min {ln(Uy) - ln(Uz)}     (36)

From eqs 35 and 36, we get a lower bound on ln(Uy) - ln(Uz) as

min {ln(Uy) - ln(Uz)} ≥ 1/a     (37)

From eqs 28, 29, and 37, an R which satisfies the following relation will satisfy inequality (27):

R < 1/(a C*)
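To show how problem III looks in practice, here is a minimal sketch that builds the one-step ILP of eqs 19-22 for the observability case of example 2. It assumes the open-source PuLP package is available, and it simply sets R to a small positive value instead of computing it from the bound derived above.

```python
import math
import pulp

# Observability case of example 2: three faults, three measurable nodes.
f = [0.01, 0.02, 0.03]            # fault occurrence probabilities
s = [0.1, 0.2, 0.3]               # sensor failure probabilities
D = [[1, 1, 0],                   # D[i][j] = 1 if fault i affects node j
     [0, 1, 0],
     [0, 1, 1]]
c = [1, 1, 1]                     # equal sensor costs
C_star = 2                        # available resource: two sensors
R = 1e-6                          # small positive weight (assumed, not derived from the bound)

m, n = len(f), len(c)
prob = pulp.LpProblem("problem_III", pulp.LpMinimize)
x = [pulp.LpVariable(f"x{j}", lowBound=0, cat="Integer") for j in range(n)]
xs = pulp.LpVariable("slack", lowBound=0)   # slack in the cost constraint, eq 20
U = pulp.LpVariable("U")                    # system log-unobservability

prob += U - R * xs                                                    # objective, eq 19
prob += pulp.lpSum(c[j] * x[j] for j in range(n)) + xs == C_star      # eq 20
for i in range(m):                                                    # U >= ln(Ui), as in problem I
    prob += U >= math.log(f[i]) + pulp.lpSum(D[i][j] * math.log(s[j]) * x[j] for j in range(n))

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([int(v.value()) for v in x], math.exp(U.value()))  # expected: [0, 2, 0] and ~1.2e-3
```

For this data set the solver places both sensors on node S2, reproducing case i of example 2; dropping the R xs term recovers problem I itself.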


R
1), while some other node is not selected as a sensor node at all. Let the selected sensor be the jth node. Then carry out the following updates: current solution, CS = CS ∪ xj (note that xj may already be present in CS; in that case, increase the count of xj in CS by 1); cost utilized, C = C + cj; currently available cost = C* - C. After this sensor selection, recalculate the unobservabilities of all faults and the system unobservability (maximum unobservability). If this system unobservability is less than the previous value, then update the optimal solution to be the current solution: optimal solution, OS = CS; optimal cost, OC = C. If the system unobservability does not decrease, the optimal solution and optimal cost are not updated.

4. This procedure (from step 2 onward) is continued until the set Q turns out to be empty. This indicates that no more sensors can be selected to decrease the


system unobservability. The set of sensors in OS is the selected sensor location.

Steps 2 and 3 ensure minimization of the system unobservability. The procedure of updating the optimal solution only if there is a decrease in the system unobservability ensures that the solution is optimal in the lexicographic sense. The above algorithm is for the one-step lexicographic optimization problem (problem III). The solution of problem III is an optimal solution to problem I also. Hence, no separate heuristics are needed for solving problem I. For the minimum cost problem (problem II), heuristics similar to the ones given above may be developed.

8. Additional Constraints for Sensor Location

In the previous sections, a reliability maximization problem with cost constraints and a cost minimization problem with reliability constraints were presented. In this section various other types of problem specific constraints that may be incorporated while performing sensor location are briefly discussed.

(i) Different availability of different types of sensors: Constraints such as the availability of only four temperature sensors, five pressure sensors, etc., may be incorporated in place of/along with the cost constraint. For example, consider a specific case where, besides the cost constraint, the one-step sensor location optimization is restricted by the availability of only up to four temperature sensors. The formulation for this case then is

min_{xj} [max_{∀i} {ln(Ui)} - R xs]     (55)

subject to

∑_{j=1}^{n} cj xj + xs = C*     (56)

∑_{t∈Temp.Variables} xt ≤ 4     (57)

xj ∈ Z+, j = 1, ..., n     (58)

xs ∈ R+     (59)

In the above formulation, constraint (57) ensures that the total number of sensors selected to measure the temperature variables in the process is not more than 4. These specific constraints would be especially relevant while performing a sensor network reallocation study on an existing process.

(ii) Constraints requiring that a particular (critical) fault be observed by more than a fixed number of sensors, or two faults should be resolvable by more than a given number of sensors, may also be considered while performing sensor location. These constraints would be useful when the underlying causal information (based on which the fault sensor matrix is constructed) has uncertainties associated with it. Hence, one would like some redundancy (more than one sensor) while selecting sensors to resolve various faults. This aspect is highlighted in the case study in part 2 of the series, where SDG with gains is used to perform fault modeling for the TE case study.

A related constraint might be to ensure that the unobservability of a critical fault is less than some threshold value. Another important set of constraints may be to ensure that all pairs of faults can be resolved by the selected sensor location. This particular case may be formulated as

min_{xj} [max_{∀i} {ln(Ui)} - R xs]     (60)

subject to

∑_{j=1}^{n} cj xj + xs = C*     (61)

∑_{xkl∈Bkl} xkl ≥ 1,   k = 1, ..., m-1; l = k + 1, ..., m     (62)

where

Bkl = Ak ∪ Al - Ak ∩ Al = set of all sensors which can be used to resolve between fault k and fault l     (63)

xj ∈ Z+, j = 1, ..., n     (64)

xs ∈ R+     (65)

In the above formulation, constraint (62) ensures that the selected sensors are able to resolve all resolvable pairs of faults. An important point to note here is that constraint (62) is not written for pairs of faults which are topologically unresolvable (the Bkl set is empty).

9. Lexicographic Optimization with Other Criteria

In this paper, sensor location with reliability maximization and cost minimization as optimization criteria has been presented. A lexicographic, one-step optimization procedure to combine these two objectives was also presented. Cost minimization was used as an objective to select the minimum cost network from the various solutions which resulted in maximum system reliability. When this procedure is extended, other criteria may also be used, in a lexicographic manner, to select an optimal solution if there are still multiple sensor networks which minimize the total cost.

One such criterion may be to obtain a “distributed sensor network”. A sensor network may be said to be more “distributed” than some other network if the total number of variables being measured is more in the former case. A more distributed network would be preferable for cases where the underlying cause-effect modeling approach has uncertainties associated with it. Use of a more distributed network would impart robustness to the selected sensors even in the presence of uncertainties in the cause-effect modeling approach.

Another criterion to select from among the multiple solutions to the one-step optimization problem may be based on the use of sensitivity of the selected sensor location with respect to the data used in the problem. The data on fault occurrence and sensor failure probabilities may not be precisely available. Different sensor locations, while being optimal with respect to one set of data, may result in different system unobservabilities when



a different data set is used. This performance in the presence of a different data set may then be used to screen from among the various multiple solutions obtained earlier. This idea is illustrated in the case study in part 2 of the series. Choosing a more distributed sensor network and using sensitivity analysis are just a few possible criteria which can be integrated in the lexicographic optimization framework. Depending on the specific requirements, the designer may come up with other criteria which may be used in place of/along with the criteria presented in this paper. We are currently investigating some of these issues.
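One simple way to operationalize these two screening criteria is sketched below: among candidate networks that tie on the primary objectives, prefer the one measuring more distinct variables, and break remaining ties by the system unobservability recomputed with a perturbed data set. The data, the candidate networks, and the particular tie-break ordering are illustrative assumptions, not taken from the paper.

```python
import math

def system_unobservability(f, s, D, x):
    """max_i U_i for sensor allocation x, with U_i as in eq 1."""
    return max(fi * math.prod(s[j] ** (D[i][j] * x[j]) for j in range(len(s)))
               for i, fi in enumerate(f))

def screen(candidates, f, D, s_perturbed):
    """Order tied candidates: more distinct measured variables first,
    then lower unobservability under the perturbed sensor-failure data."""
    def key(x):
        distinct = sum(1 for xj in x if xj > 0)
        return (-distinct, system_unobservability(f, s_perturbed, D, x))
    return sorted(candidates, key=key)

# Illustrative data: two alternative networks for a 3-fault, 3-node system.
f = [0.01, 0.02, 0.03]
D = [[1, 1, 0], [0, 1, 0], [0, 1, 1]]
s_perturbed = [0.2, 0.2, 0.2]                             # a second, perturbed data set
print(screen([[0, 2, 0], [0, 1, 1]], f, D, s_perturbed))  # the more distributed [0, 1, 1] ranks first
```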

10. Optimization Formulations for Observability and Resolution of Faults

Raghuraj et al.1 (for DG-based modeling) and Bhushan and Rengaswamy2 (for SDG-based modeling) discussed sensor location design methodologies based on criteria such as fault observability and resolution. It was mentioned in those papers that the formulated problems were set cover problems. In this section, those set cover problem formulations are presented, and different criteria for solving those problems are also discussed.

Problem IV: Set Cover Problem with a Minimum Number of Selected Sensors.

min ∑_{j=1}^{n} xj     (66)

such that

∑_{j=1}^{n} dij xj ≥ 1,   i = 1, ..., m     (67)

xj ∈ {0, 1},   j = 1, ..., n     (68)

where xj's are the binary decision variables. xj = 1 indicates that a sensor has to be placed at node j, and a value of 0 indicates that the corresponding variable need not be measured. dij is the (i, j)th element of the bipartite matrix D; it is 1 if fault i affects node j and is 0 otherwise. This bipartite matrix D has faults as rows and nodes as columns and is generated from the process DG/SDG during the fault modeling step as explained earlier in this paper. The constraint (67) ensures that all of the faults are covered. Minimization of the objective function then ensures that a minimum number of sensors are selected to cover all faults.

The observability problem as presented above is a set cover problem. Also, any other resolution problem (even for multiple-fault cases) is converted to an appropriate observability problem. So, the sensor location problem finally involves solving an appropriate set cover problem of type problem IV. Raghuraj et al.1 gave a greedy search-based heuristic to solve such set cover problems. The formulation presented in problem IV involves minimizing the total number of sensors. It gives equal weightage to all sensors. Instead of this objective function, another objective function which minimizes the total cost of the sensors can also be chosen as shown in problem V:

Problem V: Set Cover Problem with Minimum Cost.

min ∑_{j=1}^{n} cj xj     (69)

such that

∑_{j=1}^{n} dij xj ≥ 1,   i = 1, ..., m     (70)

xj ∈ {0, 1},   j = 1, ..., n     (71)

where cj is the cost of sensor j. A greedy search heuristic to solve the above problem is given by Bhushan and Rengaswamy.16 They have also presented a methodology for reducing the number of constraints in the above problems by systematically removing the redundant constraints. An important difference in the formulations presented in this section as compared to the reliability maximization based formulations presented earlier in this paper is that the decision variables (xj) are allowed to take only binary values in the set cover formulations, and also no fault occurrence or sensor failure probability data are used in these formulations. Some results for the formulations presented in this section are discussed in part 2 of this series.
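The greedy heuristics referred to above are described in the cited papers; as a rough stand-in, the sketch below applies the standard greedy rule for weighted set cover (pick the sensor covering the most still-uncovered faults per unit cost). It solves problems IV and V approximately but is not necessarily the authors' exact procedure.

```python
def greedy_set_cover(D, c):
    """Greedy cover: repeatedly pick the sensor that covers the most
    still-uncovered faults per unit cost, until every fault is covered."""
    m, n = len(D), len(D[0])
    uncovered, chosen = set(range(m)), set()
    while uncovered:
        best, best_score = None, 0.0
        for j in range(n):
            if j in chosen:
                continue
            gain = sum(1 for i in uncovered if D[i][j])
            if gain and gain / c[j] > best_score:
                best, best_score = j, gain / c[j]
        if best is None:          # some fault is not covered by any available sensor
            break
        chosen.add(best)
        uncovered -= {i for i in uncovered if D[i][best]}
    return chosen

D = [[1, 1, 0], [0, 1, 0], [0, 1, 1]]   # bipartite matrix of example 2
c = [1, 1, 1]
print(greedy_set_cover(D, c))            # {1}: a single sensor at node S2 covers all three faults
```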

11. Use of Fault Measurement in Sensor Location

In the various formulations presented so far, fault measurements were not considered. So, even if there are faults (such as flow rates, temperatures, etc.) which can themselves be measured, these were not considered in the sensor location formulations. In general, a better sensor location may be obtained by use of these fault measurements. In this section, the one-step optimization problem (problem III) is reformulated to include fault measurements:

Problem III with Fault Measurements.

min_{xj,yk} [max_{∀i} {ln(Ui)} - R xs]     (72)

subject to

∑_{j=1}^{n} cj xj + ∑_{k=1}^{p} ck yk + xs = C*     (73)

xj ∈ Z+, j = 1, ..., n     (74)

yk ∈ Z+, k = 1, ..., p     (75)

xs ∈ R+     (76)

where, as before, there are n original measurable variables and the index j runs over these measurable variables. The measurable faults are represented by the index k, the variable yk represents the number of sensors used to measure fault k, ck is the cost of measuring fault k, and p faults are measurable. The fault-sensor bipartite matrix D is appropriately modified to take the


fault measurements into consideration. This matrix is generated from the process SDG which represents the causal information among process variables. In the SDG framework, this corresponds to the addition of new nodes corresponding to fault measurements. For each measurable fault, a new node is created, and it has an arc from the corresponding fault. The sign of the arc represents the relationship between the fault and the corresponding measurement. For example, if the fault is “increase in flow rate”, then a new variable corresponding to this flow rate measurement is added with a positive arc from the fault node to this new node. On the other hand, if the fault is “decrease in flow rate”, then a negative arc to this new node is added from the fault. For cases where both positive and negative deviations in the same variable are represented as separate faults, the new node will have an arc each from both of these faults, with the signs of the arcs being + and -, respectively. As with other measurable nodes, information about the cost and failure probability is required for these new nodes corresponding to fault measurements.

The fault-sensor bipartite matrix is then generated from this modified SDG. The ith row of this matrix corresponds to Ai, the set of sensors affected by fault i. This matrix is then used for calculation of ln(Ui) values in the sensor location formulation. Also, as before, this matrix is then used to generate the appropriate bipartite matrix for the scenario under consideration (single-fault assumption, double-fault assumption, etc.). Similar to the above formulation, other formulations (problems I, II, IV, and V) can also be modified to incorporate fault measurements. Some comparisons of sensor locations with and without consideration of fault measurements, for problems IV and V, will be presented in part 2 of this series.

12. Importance of the Proposed Formulations

The formulations presented in this paper are important mainly because of their ability to generate optimal sensor locations while handling various constraints of fault diagnosis and variable measurements.

(i) The cost constraint was explicitly mentioned in the proposed formulations. Other constraints which may be incorporated are the following: (1) Different availability of different types of sensors. Thus, constraints such as the availability of only four temperature sensors, five pressure sensors, etc., may be incorporated in place of/along with the cost constraint. (2) Constraints requiring that a particular (critical) fault be observed by more than a fixed number of sensors, or two faults should be resolvable by more than a given number of sensors, may also be considered while performing sensor location. A related constraint might be to ensure that the unobservability of a critical fault is less than some threshold value. Another important set of constraints may be to ensure that all pairs of faults can be resolved by the selected sensor location.

(ii) The optimization framework gives flexibility to the designer to perform sensor location for various scenarios.

(iii) The framework presented here may be modified to take into account different desirable characteristics of the fault monitoring system, such as quick detection of faults. This may be achieved, for example, by modify-

ing the sensor failure probabilities to penalize sensors which involve offline analysis or have low sampling rates.

(iv) The concept of system unobservability gives a quantitative measure to compare different sensor locations.

(v) Depending on the process requirements, various optimization criteria may be incorporated in the sensor location framework.

13. Conclusions

An attempt has been made to come up with a comprehensive design strategy for the problem of finding optimum sensor locations. Depending on the requirements of the fault monitoring system, appropriate optimization problems have been posed. Models for maximum-reliability and minimum-cost sensor networks have been posed. A one-step optimization formulation which performs lexicographic optimization with reliability as the primary objective and cost as the secondary objective has also been presented. Methodology to use the proposed formulations to obtain the best design for a given process, independent of the single/multiple-fault assumption, has also been discussed. Ways of reducing the size of the problem being solved, by reducing the number of constraints, have been discussed. Working heuristics have been provided to solve the formulated reliability maximization problems. These heuristics provide quick, near-optimal solutions even for large problems.

The use of other optimization criteria, such as distributed networks, objectives from process controllability requirements, and formulations for performing sensor network retrofit studies on existing plants based on fault diagnostic criteria, is currently being investigated. The aim of this research is to seamlessly incorporate various criteria for sensor location, so that the resulting sensor location is optimal in a much broader sense and not just from a fault diagnosis perspective.

In the second part of this series of papers, the application of the proposed formulations to a large flowsheet, the TE flowsheet, is presented. Various issues discussed in this paper are demonstrated in this application, and results for different cases are presented and compared.

Appendix: Lexicographic Optimization for the General Case

Consider the following optimization problem:

Problem P1.

max F(x1, x2, ..., xn)     (77)

(F takes integral values) such that

∑_{j=1}^{n} cj xj ≤ C*     (78)

xj ∈ Z+, j = 1, ..., n     (79)

where the cost coefficients cj are nonnegative constants and C* is the total available cost. The problem P1 may have several optimal solutions, each with possibly different costs. To obtain an optimal solution in the lexicographic sense, one needs to solve



two problems in sequence: the first being problem P1 and the second involving minimization of the cost subject to the primary objective F being at its optimal value. The second problem P2 is posed below.

Problem P2.

min ∑_{j=1}^{n} cj xj     (80)

such that

F(x1, x2, ..., xn) ≥ F*     (81)

xj ∈ Z+, j = 1, ..., n     (82)

where F* is the optimal value of problem P1. Consider another problem which combines the objective functions of problems P1 and P2 in a weighted sense.

Problem P3.

max F(x1, x2, ..., xn) + R xs     (83)

such that

∑_{j=1}^{n} cj xj + xs = C*     (84)

xj ∈ Z+, j = 1, ..., n     (85)

xs ∈ R+     (86)

where R is a nonnegative constant. The following theorem states sufficient conditions on R for which solving problems P1 and P2 in sequence is equivalent to solving problem P3.

Theorem 1. Solving problem P3 is equivalent to performing lexicographic optimization on problem P1 (solving problems P1 and P2 in sequence), if

0 < R < 1/C*     (87)

Proof of Theorem 1. For any two feasible solutions y and z of problem P1, it has to be ensured that

if   F(y) > F(z)     (88)
then   F(y) + R xsy > F(z) + R xsz     (89)

where F(y) and F(z) are the objective function values for problem P1 for solutions y and z, respectively, and xsy and xsz are the slacks in the cost constraint for y and z, respectively. From eq 89 we get

R(xsz - xsy) < F(y) - F(z)     (90)

Now, consider the two cases:

(i) xsz - xsy < 0. Then,

R > [F(y) - F(z)]/(xsz - xsy)     (91)

where the right-hand side is a negative quantity (from eq 88), so this condition is trivially satisfied for any R > 0.

(ii) xsz - xsy > 0. Then,

R < [F(y) - F(z)]/(xsz - xsy)     (92)