
Ind. Eng. Chem. Res. 2007, 46, 8026-8032

PROCESS DESIGN AND CONTROL

Sensor Network Design via Observability Analysis and Principal Component Analysis

Jeremy Brewer,† Zuyi Huang,‡ Abhay K. Singh,‡ Manish Misra,† and Juergen Hahn*,‡

† Department of Chemical Engineering, University of South Alabama, EGLB 248, Mobile, Alabama 36688
‡ Artie McFerrin Department of Chemical Engineering, Texas A&M University, College Station, Texas 77843-3122

* Corresponding author. Tel.: (979) 845 3568. Fax: (979) 845 6446. E-mail: [email protected].

This paper extends a recently developed technique for sensor network design such that interactions between individual sensors are taken into account via principal component analysis. In past work, the trace of the empirical observability gramian was determined to be the most promising measure for determining the location of a single sensor. The extension to placing multiple sensors was performed by interpreting the trace of the gramian as the sum of the diagonal elements and defining interactions between sensors by comparing the magnitude of diagonal entries of empirical observability gramians computed for different sensors. However, the diagonal entries of the gramian only represent the variance of the output measurements for perturbations in a state. The covariances, which are given by the off-diagonal entries of the gramian, are neglected by such an approach. The presented work remedies this situation, as all of the information contained in the empirical observability gramian is considered. Principal component analysis is used to extract the contribution that a sensor placed at a specific location makes to the overall sensor network. Two approaches are presented for designing sensor networks. The first technique places sensors sequentially, such that each new sensor maximizes the amount of new information that can be gained from the system. This technique is straightforward to implement with the newly developed principal component analysis-based technique for evaluating the system’s empirical observability gramian. The second methodology designs the entire sensor network by using genetic algorithms to solve an optimization problem that maximizes observability of the system. The first technique has the advantage that it is easier to implement, whereas the second method will generally result in a larger amount of information that can be gained about the system. Both techniques are illustrated with a case study representing a distillation column.

1. Introduction

Economical and safe operation of chemical plants is one of the most important aspects of modern chemical engineering. Given the complexity of the phenomena involved in regular plant operation, it becomes necessary to rely on sensor information to determine whether operation is within proper limits. Accordingly, it is important to gain as much information as possible from sensors, while at the same time the sensor cost needs to stay within a reasonable range. A delicate balance therefore exists between the information that can be obtained from the sensors and the cost associated with the number of sensors placed. Placing sensors at appropriate locations plays a key role in this trade-off, as the same sensor installed at one location can provide significantly more information about plant operation than it would at a less informative location. A typical example is a distillation column: installing sensors on adjacent trays will not provide a significant amount of new information but will instead yield largely redundant process information.

A multitude of research exists on sensor placement for chemical systems.

Initial techniques for sensor network design were based on the steady-state behavior of systems.1-3 Research on sensor placement for dynamic systems includes the evaluation of matrices used for Kalman filter design4,5 or is based on observability gramians.6,7 However, these methods are generally restricted to linear systems. Research on sensor location for nonlinear dynamic systems has been performed by using methods for distributed systems,8 involving geometric approaches,9 and making use of nonlinear observability functions.10 Unfortunately, these methods involve a high computational cost.11

Singh and Hahn12 proposed using empirical observability gramians and observability covariance matrices for state and parameter estimation in order to determine optimal sensor locations. The advantages of their method are a low computational burden and that the results are directly tied to fundamental properties of the system. However, the fitness measure used in their work considered only the variance aspects of the observability gramians, and any redundancies were determined solely from these variances. This work employs principal component analysis (PCA)13 to also include the covariances of the observability gramians. Although the original work12,14 was relatively straightforward to implement and compute, taking the covariances into account in addition to the variances is expected to reflect more accurately the information that can be gained from a given sensor placement.



This paper is structured as follows: Section 2 presents the preliminaries necessary for understanding the presented work. Section 3 presents the new formulation, which uses principal component analysis of the observability analysis results for sensor placement. A case study is included in section 4, which demonstrates the feasibility of implementing a sequential placement approach as well as a simultaneous placement approach. Conclusions are drawn in section 5.

2. Preliminaries

2.1. Empirical Observability Gramian. Observability refers to the property of a system that allows the reconstruction of the state variables based on data collected at the outputs.15 The empirical observability gramian has been used to describe conditions for the observability of nonlinear systems.16 Even though the empirical observability gramian cannot provide global information about the observability of a nonlinear system, it nevertheless provides more accurate information than if the system were linearized and a linear observability gramian computed. For systems with n states of the form

ẋ = f(x, p, u)    (1a)

y = h(x, p, u)    (1b)

where x is a vector of the states, p represents a vector of the parameters, u is a vector of the inputs, and y is a vector of the outputs, the empirical observability gramian, W_O, can be computed by

W_O = Σ_{l=1}^{r} Σ_{m=1}^{s} [1/(r s c_m²)] ∫_0^∞ T_l Ψ^{lm}(t) T_l^T dt    (2)

where Ψ^{lm}(t) ∈ R^{n×n} corresponds to Ψ_{ij}^{lm}(t) = (y^{ilm}(t) − y_{ss})^T (y^{jlm}(t) − y_{ss}); y^{ilm}(t) is the output of the system corresponding to the initial condition x(0) = c_m T_l e_i + x_{ss}; and y_{ss} is the steady-state output of the system.16 The remaining quantities are defined by

T^n = {T_1, ..., T_r; T_i ∈ R^{n×n}, T_i^T T_i = I, i = 1, ..., r}    (3a)

M = {c_1, ..., c_s; c_i ∈ R, c_i > 0, i = 1, ..., s}    (3b)

E^n = {e_1, ..., e_n; standard unit vectors in R^n}    (3c)

where r is the number of matrices for the perturbation directions, s is the number of different perturbation sizes for each direction, and n is the number of states in the system. The T matrices are usually chosen to contain positive and negative perturbations of the state variables, and the set M contains the different perturbation sizes.16

The empirical observability gramian is a square matrix that is symmetric and positive semidefinite or positive definite. Because of this, it is possible to use principal component analysis to determine which directions in the initial conditions of the states have the largest effect on the outputs of the system. This concept is used in this work, as the measurement structure should be determined such that each measurement significantly increases the information that can be extracted about the states.

2.2. Measures of Observability. Muller and Weber6 presented several measures to determine the degree of observability of a system. These included the minimum eigenvalue of the observability gramian, the inverse of the trace of the observability gramian, and the determinant of the observability gramian.6 A summary of several of these measures, as well as the application of some of them to empirical observability gramians, was presented by Singh and Hahn.12
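To make the computation concrete, the following sketch (in Python/NumPy, not part of the original paper) shows one way eqs 2 and 3 and the maximum-eigenvalue measure used later in eq 5 could be approximated numerically. The simulation routine simulate_output, the steady state x_ss and y_ss, the time grid, and the perturbation sets are placeholders that must be supplied for a particular model and candidate sensor.

```python
import numpy as np

def empirical_obs_gramian(simulate_output, x_ss, y_ss, t_grid, T_list, c_list):
    """Approximate the empirical observability gramian of eq 2.

    simulate_output(x0, t_grid) must return the candidate sensor output(s)
    y(t) as an array of shape (len(t_grid), n_outputs); x_ss and y_ss are
    the steady state; T_list holds the orthogonal perturbation-direction
    matrices of eq 3a and c_list the perturbation sizes of eq 3b.
    """
    n = x_ss.size
    r, s = len(T_list), len(c_list)
    W = np.zeros((n, n))
    for T in T_list:
        for c in c_list:
            # Output response to a perturbation along each column of T
            # (x(0) = c * T e_i + x_ss, cf. eq 3c).
            Y = [simulate_output(x_ss + c * T[:, i], t_grid) - y_ss
                 for i in range(n)]
            # Psi_ij(t) = (y^i(t) - y_ss)^T (y^j(t) - y_ss)
            Psi = np.empty((len(t_grid), n, n))
            for i in range(n):
                for j in range(n):
                    Psi[:, i, j] = np.sum(Y[i] * Y[j], axis=1)
            # Time integral approximated on the finite grid t_grid.
            W += T @ np.trapz(Psi, t_grid, axis=0) @ T.T / (r * s * c**2)
    return W

def observability_measure(W):
    """Maximum eigenvalue of the symmetric gramian (the measure of eq 5)."""
    return np.linalg.eigvalsh(W).max()
```

For the sensor placement problem considered below, such a routine would be called once per candidate measurement (i.e., with simulate_output returning only the output of the state considered as a sensor location), yielding the n gramians used in section 3.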

2.3. Principal Component Analysis. Principal component analysis was originally presented by Hotelling.13 It is a method of decomposing a data matrix R according to the form

R = TP′ + E    (4)

where T, not to be confused with T^n from eq 3a, represents a matrix of score vectors and P is a matrix of loading vectors. E represents the residuals not described by the principal component decomposition. The loadings represent the principal components of the matrix, whereas the scores are the projections of the matrix onto the loadings. The ability of PCA to determine correlation among variables is a prime motivation for its use in this work.

2.4. Genetic Algorithms. A genetic algorithm is an optimization technique that is motivated by the mechanism of natural selection. To apply genetic algorithms, the independent-variable space needs to be represented by a chromosome consisting of a binary string. Each entry in the chromosome is called a gene, and a gene takes a value of either 1 or 0. Figure 1a illustrates a chromosome with its genes. The user must provide the algorithm with a fitness function that describes the overall fitness of each possible solution. Once the algorithm has been provided with a fitness function, it evaluates each chromosome in the initial population, which is referred to as the first generation. Based on the fitness scores, a certain percentage of the population is selected for reproduction as parents. Reproduction is accomplished by crossing over the genes of two fit parents to create two new offspring that will hopefully reflect the fitness of the parents. Figure 1b illustrates the idea of crossover. The next generation is formed from the chromosomes of the parents of the current generation and the offspring they create through crossover. Several variations of genetic algorithms exist that retain the fittest individual from one generation to the next, and the algorithms usually contain other operators, e.g., mutation, as well.17 The genetic algorithm evaluates each successive population until it determines that it has converged to an appropriate solution.

Figure 1. (a) Illustration of a genetic algorithm chromosome. (b) Illustration of reproduction with genetic algorithms.
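As a brief illustration of the encoding in Figure 1 (a sketch added here, not from the original paper), a binary chromosome can be decoded into sensor locations and recombined by single-point crossover as follows; the ten-gene example anticipates the chromosome used in section 3.2.

```python
import numpy as np

rng = np.random.default_rng(0)

def decode(chromosome):
    """Map a binary chromosome to 1-based sensor locations,
    e.g. [0,1,0,0,1,1,0,0,0,1] -> [2, 5, 6, 10]."""
    return [i + 1 for i, gene in enumerate(chromosome) if gene == 1]

def crossover(parent_a, parent_b):
    """Single-point crossover of two parent chromosomes (cf. Figure 1b)."""
    point = int(rng.integers(1, len(parent_a)))
    child_1 = parent_a[:point] + parent_b[point:]
    child_2 = parent_b[:point] + parent_a[point:]
    return child_1, child_2

print(decode([0, 1, 0, 0, 1, 1, 0, 0, 0, 1]))  # [2, 5, 6, 10]
```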


Figure 2. Procedure for sequential placement of sensors.

3. Sensor Network Design via Observability Analysis and Principal Component Analysis

The goal of the described sensor placement procedure is to design a sensor network such that the sensors provide as much information about plant operation as possible. Since empirical gramians are used for the analysis, moderate changes in the operating conditions are reflected in the sensor network design, even though drastic changes cannot be covered by such an approach. Although other goals, e.g., ensuring a certain degree of information redundancy for the case of instrument failure, are also important, they are not treated in this work.

3.1. Sequential Placement of Sensors. Sequential placement of sensors requires a measure that describes the amount of information that can be extracted by placing a sensor at a specific location. This measure also has to ensure that redundant information is not counted more than once when quantifying new information: if the information gained is to be maximized by placing sensors judiciously, the redundant information must be kept to a minimum.

To determine the initial sensor location, the empirical observability gramian is calculated for a single sensor placed in the system that measures the first state. This procedure is repeated with the next state of the system measured, until n gramians have been computed, each referring to a system in which one state is measured. A measure is assigned to each gramian based upon the maximum of its eigenvalues. The measure for the mth gramian is given by

measure(W_{O,m}) = max(σ_i(W_{O,m})),   i = 1, 2, ..., n    (5)

where n is the number of states of the model and σ_i represents the ith eigenvalue of the gramian. The sensor is then placed according to the maximum measure obtained over all gramians

sensor location = max(measure(W_{O,m})),   m = 1, 2, ..., n    (6)

This chosen location reflects the best possible placement of a sensor in the network because the sensor will be able to capture a maximum amount of information.

In order to remove redundancies and ensure that the next location can be chosen properly, the difference between the information captured by the gramian chosen by eq 6 and the information captured by the other gramians, resulting from different sensor locations, needs to be calculated.


The gramian which contains the largest amount of information that is different from the already chosen ones corresponds to the best location for placing the next sensor. The procedure used for determining the amount of new information that a gramian contains is described in the following. The gramian matrix W_{O,max}, as determined by eq 6, is diagonalized by a matrix P_{O,max} containing the loading vectors as described in eq 4

W̄_{O,max} = P_{O,max}^T W_{O,max} (P_{O,max}^T)^{-1}    (7)

The resulting matrix W̄_{O,max} is a diagonal matrix whose diagonal entries represent the singular values of W_{O,max}. The same transformation is then applied to all other gramian matrices

W̄_m = P_{O,max}^T W_m (P_{O,max}^T)^{-1},   m = j + 1, ..., n    (8)

where j is the number of sensors that have been placed so far. Now that all remaining gramian matrices have been transformed, the amount of redundant information can be extracted by subtracting W̄_{O,max} from each W̄_m

W′_{O,m} = W̄_m − W̄_{O,max},   m = j + 1, ..., n    (9)

If any diagonal entries of the resulting matrices W′_{O,m} are less than zero, they are set to zero, as the matrix W̄_{O,max} contained more information in those directions than W̄_m did. In a final step, W′_{O,m} replaces W_{O,m}. The procedure can then be repeated for placing an additional sensor by going through eqs 5-9 again. Figure 2 illustrates the method for sequential placement.
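A compact sketch of this sequential procedure (added for illustration, not from the original paper) is given below. It assumes that the per-sensor empirical observability gramians have already been computed, e.g., with the gramian routine sketched in section 2, and it exploits the fact that P_{O,max} is orthogonal, so (P_{O,max}^T)^{-1} = P_{O,max}.

```python
import numpy as np

def sequential_placement(gramians, n_sensors):
    """Sequentially choose sensor locations from the per-sensor empirical
    observability gramians, following eqs 5-9 (Figure 2)."""
    W = [np.array(Wm, dtype=float) for Wm in gramians]
    chosen = []
    for _ in range(n_sensors):
        # Eqs 5 and 6: place the sensor where the maximum eigenvalue is largest.
        measures = [-np.inf if m in chosen else np.linalg.eigvalsh(Wm).max()
                    for m, Wm in enumerate(W)]
        best = int(np.argmax(measures))
        chosen.append(best)
        # Eq 7: diagonalize the chosen gramian with its loading vectors.
        _, P = np.linalg.eigh(W[best])
        W_bar_max = P.T @ W[best] @ P
        for m in range(len(W)):
            if m in chosen:
                continue
            # Eq 8: transform the remaining gramians with the same loadings.
            W_bar = P.T @ W[m] @ P
            # Eq 9: remove already-captured information; negative diagonal
            # entries are clipped at zero, then W' replaces W.
            W_prime = W_bar - W_bar_max
            np.fill_diagonal(W_prime, np.clip(np.diag(W_prime), 0.0, None))
            W[m] = W_prime
    return [m + 1 for m in chosen]  # 1-based state (tray) indices
```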

3.2. Simultaneous Placement of Sensors. The previous subsection presented a sequential method based upon principal component analysis of empirical observability gramians for placing one sensor at a time in a system. This sequential method is relatively simple to implement, as the procedure for placing each subsequent sensor is the same as for placing the first sensor once the information obtained from the already chosen sensors has been removed from the remaining gramians. However, a sequential approach cannot account for the possibility that two sensors placed simultaneously at two different locations may be better than choosing one sensor first and then placing the second sensor afterward to complement the measurement location that has already been chosen. To address this point, an optimization problem is formulated in which a tradeoff is computed between the amount of information gained from the system by placing the sensors and the sensor cost. As this optimization problem is a binary programming problem with no constraints, it is straightforward to use genetic algorithms for the solution.

Concerning this specific work, if a sensor is present at a specific location, then the gene corresponding to this location is represented as a "1", whereas a "0" represents that no sensor is placed at that location. All the genes are combined into a chromosome representing the entire network configuration. For example, in a system with ten states, the genetic algorithm solution chromosome 0100110001 would correspond to sensors being placed at locations 2, 5, 6, and 10.

To simultaneously place a given number of sensors, the genetic algorithm is initialized with a random population. Empirical observability gramians are calculated for each state, just as for the sequential approach explained in section 3.1. Based upon each solution chromosome provided by the genetic algorithm, gramians are used only for states that are present in the chromosome, as states that are not measured in a chromosome do not contribute to the observability. Using the above example of the chromosome 0100110001, only the gramians corresponding to states 2, 5, 6, and 10 will be considered. This is one of two primary differences from the sequential approach, where the gramians for all states were considered in the analysis. Although the genetic algorithm search space includes all possible sensor locations, only the sensors present in each chromosome are evaluated for the fitness of the configuration. The other major difference is that this evaluation has to be performed for each member of the population in each generation, whereas the procedure shown in section 3.1 only had to be performed as often as there were sensors to be placed.

The algorithm described in Figure 2 is used for simultaneous placement with the following three alterations: (i) only the gramians that correspond to the sensors present in the genetic algorithm chromosome are considered in the first step in the flowchart, (ii) an overall measure that describes the fitness of the entire sensor arrangement must be determined, and (iii) the procedure has to be repeated until the solution converges. This modified algorithm is shown in Figure 3.

Figure 3. Procedure for simultaneous placement of sensors.

For k sensors present in the chromosome, the computations shown in eqs 5-9 are carried out for each sensor in the system.


Figure 4. Observability measure with (a) no sensor, (b) one sensor placed, (c) two sensors placed, and (d) three sensors placed.

The information about these individual sensors is then analyzed with regard to redundancy in the information, and an overall measure is calculated based upon the sum of each measure calculated in eq 5. If W_{O,i} represents the chosen gramian in the ith iteration, then the total information is given by

total information = Σ_{i=1}^{k} measure(W_{O,i})    (10)

A penalty term for sensor cost needs to be included in the fitness function to prevent the genetic algorithm from placing sensors at all states. This penalty term is a constant, α, representing the cost per sensor, multiplied by the number of sensors in the chromosome, as shown in eq 11.

fitness index = αk − (total information)    (11)

The genetic algorithm seeks to minimize this fitness index, which allows for an optimal tradeoff between sensor cost and observability.
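A sketch of the fitness evaluation (not from the original paper) is given below; it applies the redundancy-removal steps of eqs 5-9 to the gramians selected by a chromosome, sums the resulting measures as in eq 10, and penalizes the sensor count as in eq 11. The value assigned to alpha here is only a placeholder; any binary genetic algorithm, for example one built from the crossover operator sketched at the end of section 2, can then minimize fitness_index over the chromosomes.

```python
import numpy as np

ALPHA = 1.0  # assumed per-sensor cost weight (the weighting factor of eq 11)

def total_information(gramians, chromosome):
    """Eq 10: sum of the eq 5 measures of the selected gramians after the
    redundancy-removal steps of eqs 5-9 have been applied to the subset."""
    subset = [np.array(gramians[i], dtype=float)
              for i, gene in enumerate(chromosome) if gene == 1]
    info = 0.0
    while subset:
        measures = [np.linalg.eigvalsh(W).max() for W in subset]
        best = int(np.argmax(measures))
        info += measures[best]
        _, P = np.linalg.eigh(subset[best])
        W_bar_max = P.T @ subset[best] @ P
        remaining = []
        for m, W in enumerate(subset):
            if m == best:
                continue
            W_prime = P.T @ W @ P - W_bar_max
            np.fill_diagonal(W_prime, np.clip(np.diag(W_prime), 0.0, None))
            remaining.append(W_prime)
        subset = remaining
    return info

def fitness_index(gramians, chromosome, alpha=ALPHA):
    """Eq 11: alpha * k - total information; lower is better."""
    return alpha * sum(chromosome) - total_information(gramians, chromosome)
```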

4. Case Study

The methods presented in sections 3.1 and 3.2 are illustrated by applying them to a binary distillation column model with 32 states describing the separation of a binary mixture in a column with 30 trays.

The operating parameters are identical to those used by Singh and Hahn.12,14 The column separates a binary mixture of cyclohexane and n-heptane. A constant relative volatility of 1.6 is assumed. The feed stream has a composition of x_F = 0.5, and the distillate and bottoms purities are x_D = 0.935 and x_B = 0.065, respectively. The boiling points of cyclohexane and n-heptane are 353 and 371 K, respectively. The reflux ratio is constant at 3.0. The model is described by a set of 32 nonlinear ordinary differential equations with temperatures as the state variables and 33 explicit algebraic equations. The feed to the column is located at the 17th tray.

Figure 4a shows the initial observability measures for one possible measurement at any of the states of the model. The observability measures display a bimodal distribution, with one maximum in each of the stripping and rectification sections, and the region around the feed tray shows the lowest values of the observability measures. These observations are expected, as the region around the feed tray is least sensitive to perturbations18 and the most sensitive regions in such distillation columns are located approximately one-quarter of the column length from the top of the rectification section and from the bottom of the stripping section.19 Consequently, the first sensor should be placed at the point with the largest observability measure, which is tray 6 in Figure 4a.

Application of the sequential placement method to the column resulted in the following order of sensor placement: trays 6, 25, 3, 11, and 21.

Figures 4b-d show the observability measures for a measurement at any of the states after 1, 2, and 3 sensors have been placed. From Figure 4, it is apparent that there are two ideal locations for the first sensor, with the sixth tray being slightly better than the 25th. Figure 4b shows the observability measures after a sensor has been placed at the sixth tray. It is evident that the remaining observability measure for the sixth tray is now essentially zero, as all of the information about this state is captured by the measurement. Furthermore, the measures for the surrounding trays have decreased significantly, as the information provided by possible measurements on these trays is similar to that captured by the sensor placed at the sixth tray. It becomes apparent that the 25th tray will be the next choice. Figure 4c shows the measures after a second sensor has been placed at tray 25. The remaining magnitudes of observability around tray 25 have now also been greatly reduced. Finally, Figure 4d illustrates that, after the third sensor is placed, there are no longer distinct choices for the next location.

Simultaneous placement will provide slightly different results than the sequential method for a larger number of sensors. A genetic algorithm was used to solve the optimization problem resulting from simultaneous sensor placement. The algorithm was run using a population of 30 individuals, 500 generations, a crossover margin of 0.8, and an elite count of 2. Table 1 lists the results for placing up to six sensors simultaneously. An exhaustive search was employed to verify the results for up to five sensors, and it was determined that the results represent the global optimum. The value of α was determined by a trial-and-error procedure in which the number of sensors placed for a given value of α was recorded. For this task, the number of generations of the genetic algorithm was set to a low number to minimize computation time. Although this would not allow for an appropriate sensor network design, it was observed that the genetic algorithm consistently settled on a certain number of sensors within a very low number of generations, i.e., ten generations in this case. As this type of problem can be solved very quickly, it is possible to adjust α by trial and error with just a few guesses.

The results for placing one and two sensors simultaneously are the same as for sequential placement. Figure 4a shows that there are two prime choices (trays 6 and 25) for the initial sensor placement, so a difference from the sequential method is not expected. However, when three sensors are placed, Table 1 shows that the optimal locations are trays 5, 9, and 25. Although tray 6 is the best location for a single sensor, higher overall observability can be attained by placing two sensors around tray 6 along with a third sensor at tray 25. Likewise, when placing five sensors, it is better to place two of the sensors around tray 25 instead of placing one at this tray and another at a much less desirable position. By placing the sensors simultaneously, higher overall observability may be attained than by sequential placement.

A comparison between the results presented here and those obtained with the procedure described by Singh and Hahn14 is made and summarized in Table 2. The results are identical for placement of up to two sensors.

Table 1. Simultaneous Sensor Placement for a Binary Distillation Column

no. of sensors    sensor location on trays    weighting factor, α
1                 6                           1.5
2                 6, 25                       1.25
3                 5, 9, 25                    1.1
4                 4, 8, 12, 25                1.0
5                 6, 10, 12, 23, 26           0.9
6                 4, 9, 13, 19, 22, 27        0.8

genetic algorithm parameters: no. of individuals = 30; 500 generations; crossover margin = 0.8; elite count = 2; mutation = gaussian

Table 2. Comparison of Results for Binary Distillation Column

no. of sensors    sensor location on trays    sensor location on trays
                  (current work)              (results of Singh and Hahn14)
1                 6                           6
2                 6, 25                       6, 25
3                 5, 9, 25                    4, 7, 25
4                 4, 8, 12, 25                4, 7, 24, 26
5                 6, 10, 12, 23, 26           2, 5, 7, 24, 26
6                 4, 9, 13, 19, 22, 27        2, 4, 6, 8, 24, 26
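For small sensor counts, the exhaustive-search verification mentioned above can be reproduced by brute-force enumeration, as in the following sketch (not from the original paper). It reuses the hypothetical total_information helper sketched in section 3.2 and is only practical for a modest number of candidate locations.

```python
from itertools import combinations

def best_by_exhaustive_search(gramians, k):
    """Evaluate eq 10 for every k-sensor subset and return the best one."""
    n = len(gramians)
    best_subset, best_info = None, float("-inf")
    for subset in combinations(range(n), k):
        chromosome = [1 if i in subset else 0 for i in range(n)]
        info = total_information(gramians, chromosome)
        if info > best_info:
            best_subset, best_info = subset, info
    return tuple(i + 1 for i in best_subset), best_info  # 1-based locations
```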

Given the underlying structure of this system, this is not unexpected. Comparing the results for placing three sensors, the presented work places sensors at locations that are more spread out around tray 6 than was reported by Singh and Hahn.14 A similar trend is also observed when five sensors are placed. For the sensors around tray 25, the presented work places sensors at trays 23 and 26, compared to trays 24 and 26 as reported by Singh and Hahn. Although these differences may be minor, they serve as an indicator that the procedure presented here accounts for the fact that sensors capture more information from nearby states than was computed using the earlier approach. As a result, the presented algorithm places the sensors more evenly along the height of the column. Several additional case studies, which are not shown here due to space limitations, have been performed for variations of the system, and each of them showed the same trend in the comparison of the procedures.

5. Summary and Conclusions

This work presented an extension of the techniques by Singh and Hahn12,14 for sensor location for nonlinear dynamic systems. The previous methods only used the variances of the empirical observability gramians to determine measures for the amount of information that a sensor network can extract about a system. The work presented here uses principal component analysis to also include the covariances of the empirical observability gramians. Including these covariances can result in slightly different sensor locations. This became especially apparent for the simultaneous sensor placement approach, where sensors were placed more evenly throughout the column than the locations suggested by Singh and Hahn's method.

Similar to the work presented by Singh and Hahn, this methodology is applicable to any stable nonlinear dynamic system. However, the technique presented here is more computationally expensive. Sequential computations can be performed within a few seconds; however, the computation times for solving the optimization problem resulting from simultaneous sensor placement can be on the order of hours for problems of the size of the case study presented in this work. For example, the genetic algorithm approach required approximately 1 h of computation time for a six-sensor arrangement when 500 generations were used. In comparison, an exhaustive search for placing five sensors required over 50 h of computation time on a high-end PC workstation.

Acknowledgment

The authors gratefully acknowledge partial financial support from the ACS Petroleum Research Fund (Grant PRF# 43229G9) and from the National Science Foundation (Grant CBET# 0706792; EEC# 0552655).

Literature Cited

(1) Vaclavek, V.; Loucka, M. Selection of Measurements Necessary to Achieve Multicomponent Mass Balances in Chemical Plant. Chem. Eng. Sci. 1976, 31, 1199.


(2) Kretsovalis, A.; Mah, R. S. H. Effect of Redundancy on Estimation Accuracy in Process Data Reconciliation. Chem. Eng. Sci. 1987, 31, 1199.
(3) Madron, F.; Veverka, V. Optimal Selection of Measuring Points in a Complex Plant by Linear Methods. AIChE J. 1995, 41, 2237.
(4) Omatu, S.; Koide, S.; Soeda, T. Optimal Sensor Location for a Linear Distributed Parameter System. IEEE Trans. Autom. Control 1978, 23, 665.
(5) Kumar, S.; Seinfeld, J. H. Optimal Location of Measurements in Tubular Reactors. Chem. Eng. Sci. 1978, 33, 1507.
(6) Muller, P. C.; Weber, H. I. Analysis and Optimization of Certain Quantities of Controllability and Observability for Linear Dynamic Systems. Automatica 1972, 8, 237.
(7) Dochain, D.; Tali-Maamar, N.; Babary, J. P. On Modeling, Monitoring, and Control of Fixed Bed Bioreactors. Comput. Chem. Eng. 1997, 21, 1255.
(8) Wouwer, A. V.; Point, N.; Porteman, S.; Remy, M. An Approach to the Selection of Optimal Sensor Locations in Distributed Parameter Systems. J. Process Control 2000, 10, 291-300.
(9) Lopez, T.; Alvarez, J. On the Effect of the Estimation Structure in the Functioning of a Nonlinear Copolymer Reactor Estimator. J. Process Control 2004, 14, 99.
(10) Georges, D. The Use of Observability and Controllability Gramians or Functions for Optimal Sensor and Actuator Location in Finite-Dimensional Systems. Proceedings of the 34th Conference on Decision and Control; New Orleans, LA, 1995; p 3319.
(11) Damak, T.; Babary, J. P.; Nihtila, M. T. Observer Design and Sensor Location in Distributed Parameter Bioreactors. Proceedings of DYCORD; Maryland, 1992; p 87.

(12) Singh, A. K.; Hahn, J. Determining Optimal Sensor Locations for State and Parameter Estimation for Stable Nonlinear Systems. Ind. Eng. Chem. Res. 2005, 44 (15), 5645.
(13) Hotelling, H. Analysis of a Complex of Statistical Variables into Principal Components. J. Educ. Psychol. 1933, 24, 417.
(14) Singh, A. K.; Hahn, J. Sensor Location for Stable Nonlinear Dynamic Systems: Multiple Sensor Case. Ind. Eng. Chem. Res. 2006, 45 (10), 3615.
(15) Brockett, R. W. Finite Dimensional Linear Systems; Wiley: New York, 1970.
(16) Lall, S.; Marsden, J. E.; Glavaski, S. A Subspace Approach to Balanced Truncation for Model Reduction of Nonlinear Control Systems. Int. J. Robust Nonlinear Control 2002, 12, 519.
(17) Goldberg, D. E. Genetic Algorithms in Search, Optimization, and Machine Learning; Addison-Wesley: Reading, MA, 1989.
(18) Luyben, W. L. Practical Distillation Control; Van Nostrand Reinhold: New York, 1992.
(19) Bequette, B. W.; Edgar, T. F. Non-Interacting Control System Design Methods in Distillation. Comput. Chem. Eng. 1989, 13, 641-650.

Received for review April 18, 2007
Revised manuscript received August 29, 2007
Accepted August 29, 2007

IE070547N