Ind. Eng. Chem. Res. 1998, 37, 2741-2760
A Partially Decentralized State Observer and Its Parallel Computer Implementation

Nabil Abdel-Jabbar, Costas Kravaris,* and Brice Carnahan

Department of Chemical Engineering, The University of Michigan, Ann Arbor, Michigan 48109

* To whom all correspondence should be addressed. E-mail: [email protected]. Fax: (734) 763-0459. Phone: (734) 764-2383.
This paper proposes a partially decentralized state observer that can be implemented on network-based parallel computers (multicomputers). The state estimation is decentralized in the sense of processing only local measurements, but is centralized in the sense that the interaction terms are kept in the model simulation. The state estimator is coupled with a fully decentralized state feedback to give rise to the overall model-based controller. The proposed implementation involves parallel simulation of the process model using a Jacobi-like iteration scheme. This is important in reducing the computational effort in model-based control. The proposed scheme is based on partitioning of the overall dynamic system into a number of loosely coupled interconnected subsystems of smaller dimension and then designing a local observer for each subsystem, taking into account the interaction effects among the subsystems. A new state observer design methodology is addressed that accounts for the parallel nature of the implementation and also guarantees stability and optimal performance of the parallel observer. Important issues that are studied include the effect of observer gains on convergence of the state estimation and the computational speedup that can be attained with a different number of computer nodes (processors). Simulation results on a message-passing multicomputer for a class of chemical engineering applications demonstrate the potential of parallel processing for state estimation in the context of model-based control.

1. Introduction

Many control strategies are based on state-space analysis. The control law is expressed as a function of the state variables. If it is not feasible or economically convenient to measure all the states on-line, a state observer is needed to estimate the unmeasurable state variables of the process. An accurate dynamic simulator can act as an open-loop observer, providing the controller with an estimate of the process states that cannot be directly measured. In order to account for model inaccuracies, the dynamic model outputs must be reconciled with the set of available output process measurements. Such observers are called closed-loop observers.

In real-life applications, process model equations are often nonlinear, sparse, and stiff (they have widely separated time constants or different dynamics). Therefore, application of traditional numerical solution techniques can be time consuming. This is most evident when the dynamic system is stiff and, as a consequence, implicit stiff numerical integration algorithms are required. Stiff systems involve the solution of large systems of nonlinear algebraic equations at each integration time step. Thus, a rigorous direct integration of the process model can be computationally intensive, and as a result, model-based control for large-scale processes has not been widely adopted in industry.

Centralized observer design theory has been studied extensively in the literature, especially for linear systems. Available centralized approaches, in the case of linear systems, include the classical constant-gain Luenberger observer (Luenberger, 1971) and the Kalman
filter algorithm (Kalman, 1960). In the field of nonlinear systems, numerous attempts have been made to develop nonlinear observer design methods. Of these methods, one can mention the popular extended Kalman filter (EKF), whose design is based on a local linear approximation of the system around a reference trajectory and uses tunable parameters (weights of a quadratic performance index) to calculate the time-varying observer gain (see Jazwinski (1970) for the background and description of the method). For applications of the EKF see, for example, Seinfeld (1970) and Lee and Ricker (1994). Despite its weak theoretical properties and its use of linear approximations of the process model, the EKF has been implemented successfully in many practical applications. Theoretical approaches to the problem of nonlinear state estimation were proposed by Krener and Isidori (1983) and Hammouri and Gauthier (1988), who use a nonlinear state transformation to linearize the original system. However, this linearization approach suffers from a very restrictive set of conditions that can hardly be met by any physical system. Recent work of Quintero-Marmol et al. (1991) and Gauthier et al. (1992) focuses on high-gain Luenberger-like observers, which are in the same spirit as the extended Kalman filter and are based upon local linearization around the reconstructed states. Kazantzis and Kravaris (1995) propose a new approach for the design of nonlinear Luenberger-like observers by solving a system of first-order linear differential equations and applying Lyapunov's auxiliary theorem. Their approach does not suffer from the restrictions of the other approaches but can be computationally very complex in large-scale systems. Tatiraju and Soroush (1997) proposed a constant-gain nonlinear observer for a certain class of nonlinear systems.

Decentralized control is known to be more reliable for
large-scale systems than its centralized counterpart because of its robustness and simplicity of implementation. Traditional decentralized control design methods use input/output process models to synthesize alternative manipulated-input/controlled-output pairs based on the relative gain array (RGA) (Bristol, 1966) and Niederlinski's index (NI) (Niederlinski, 1971). In nonlinear process control theory, Mijares et al. (1985) and Manousiouthakis and Nikolaou (1989) presented some results concerning the calculation of nonlinear gains to be used in the above design approaches. Decentralized control structures based on decomposition of the process state-space model have been suggested by Stephanopoulos (1983) and Samyudia et al. (1994). In these structures, multiunit processes are decomposed into subsystems (or units), and subsequently a controller is designed for each subsystem. Charos and Arkun (1993) proposed a decentralized version of the quadratic dynamic matrix control algorithm that decomposes the original problem into smaller subproblems to be solved independently, with possible savings in computation time.

Decentralized control requires that the overall system model be partitioned into a number of low-order subsystems; then local state feedback with local state observers utilizing local measurements is designed for each subsystem. In general, there will be interactions among subsystems that must be accounted for. This is crucial in order to obtain stable and optimal performance for the overall system. Work in this direction was initiated by Siljak and Vukcevic (1978) and Bachmann and Konik (1984). Their schemes use high-gain feedback to ensure that the decentralized observer is asymptotically stable, based upon the concept of vector Lyapunov functions (Siljak, 1978). High-gain feedback makes the controller less robust, since it becomes more sensitive to modeling errors and/or measurement noise. Furthermore, these schemes require the transmission of local state estimates to all other observers, so that the separation property (Kailath, 1980) of the observer-controller design holds. Because of the infeasibility of such communication between subsystems in a serial (uniprocessor) implementation, an unknown-input observer scheme was proposed by Viswanadham and Ramakrishna (1982), Guan and Saif (1991), and Hou and Muller (1992); this approach considers interconnections among subsystems as unknown inputs and rejects their influence on the estimation error. The advantage of this scheme is that no information about the interconnection terms in the model is required. However, the unknown-input (or disturbance-decoupling) observer can only work under a quite restrictive condition on the interconnections. To remedy the restrictive existence condition of the unknown-input observer, Ikeda and Willems (1986) considered the interconnections as unmeasurable disturbances and proposed applying almost-disturbance-decoupling observers decentrally. These observers estimate approximately the local control inputs generated by decentralized state feedback. The major drawback of this approach is that the design of the observers is not separable from the design of the state feedback control laws; i.e., the separation property does not hold for the overall decentralized control system.

The aforementioned considerations motivate the need to develop a structure that uses local output measurements for each subsystem while including the
interconnections in the state estimation. The advent of new network-based parallel computers (multicomputers) and networked workstations offers the possibility of such an implementation, with inherent advantages of computational speed. Parallel state estimation algorithms, which are suitable for implementation on multiprocessor computer systems, were formulated by Hodzic and Siljak (1986) in the spirit of the classical Jacobi and Gauss-Seidel iterative methods for solving linear algebraic equations. These algorithms, however, are restricted to a certain class of interconnected subsystems with a hierarchical (block-triangular) structure. Moreover, convergence issues for the iterative estimation method, as well as issues in parallel computing such as achievable speedup and communication overhead, have not been addressed.

Here, we propose a partially decentralized model-based control strategy for interconnected dynamical systems. First, a partially decentralized state observer that can be implemented on multicomputers is presented. The state observer is decentralized in the sense of processing only local measurements, but is centralized in the sense that the interaction terms are kept in the model simulation. The state observer is coupled with fully decentralized state feedback to give rise to the overall model-based controller. The proposed implementation involves parallel simulation of the process model using the dynamic block Jacobi-like iteration scheme described in Abdel-Jabbar et al. (1998). This is important in reducing the computational effort and thus enables model-based control of large-scale systems. The proposed scheme is based on appropriate partitioning of the overall dynamic system into a number of loosely coupled interconnected subsystems of smaller dimension and then design of a local observer for each subsystem to take the interaction effects into account. The resulting parallel observer, which is a union of the local observers, is stable and multirate. We study alternative design approaches for decentralized observers that are dual to the problem of decentralized stabilization. To illustrate the theoretical formulation of partially decentralized model-based control, the first application involves control of a double-effect evaporator. To illustrate the parallel implementation, the second application is a study of partially decentralized estimation for a binary distillation column. Important issues that are considered include the effect of observer gains on convergence of the iterative estimation scheme and the possible computational speedup that can be achieved from the parallel processing of state estimation.

2. A Decentralized Structure

Consider a system S that is composed of N interconnected subsystems Si, i ∈ N. Each subsystem Si, i = 1, ..., N, is described by
$$S_i:\quad \dot{x}_i = A_{ii}x_i + B_{ii}u_i + \sum_{j=1,\,j\neq i}^{N} A_{ij}x_j + \sum_{j=1,\,j\neq i}^{N} B_{ij}u_j$$

$$y_i = C_{ii}x_i \qquad (1)$$
where xi ∈ Rni is the state vector, ui ∈ Rmi is the input vector, and yi ∈ Rpi is the output vector of subsystem Si. Aii, Bii, and Cii are constant matrices of appropriate dimensions corresponding to the ith subsystem; Aij ∈ Rni×nj and Bij ∈ Rni×mj are matrices representing the interactions between the subsystems. A similar structure can be written for nonlinear dynamic systems as follows:

$$S_i:\quad \dot{x}_i = f_i(x_1, \ldots, x_{i-1}, x_i, x_{i+1}, \ldots, x_N, u)$$

$$y_i = h_i(x_i), \qquad i = 1, \ldots, N \qquad (2)$$

where the fi and the hi are smooth vector functions describing the subsystem Si. By examining eqs 1 or 2, the following observations may be made: (1) the interactions among subsystems occur on the states and/or the inputs of the system; (2) the subsystem Si is output-decentralized, but not necessarily input-decentralized. A decentralized state feedback law for each subsystem Si is of the form

$$u_i = -K_i x_i \qquad (3)$$

for linear systems, or

$$u_i = -K_i(x_i) \qquad (4)$$

for nonlinear systems. Ki is a constant (or state-dependent) gain matrix. In practical situations, the states xi are not all measurable, and for the control laws of eqs 3 or 4 the estimated states (x̂i) must be used instead of the actual (unavailable) ones. The state reconstruction by state observers should not destroy the decentralized feedback structure. As such, the interconnection variables between the subsystems must be handled in a different way than for the centralized controller.

To design a closed-loop observer for a multivariable large-scale system that can be implemented on multicomputers, one must first decide the extent to which decentralization is desirable. The fully decentralized structure uses only local measurements for each subsystem and ignores the interactions among subsystems. As a result, it would defeat the purpose of coordination (communication), unless of course no interactions exist among subsystems. The fully centralized structure, which uses all measurements and accounts for all interactions, would definitely be an option, but at the expense of excessive data exchange and more design effort. Therefore, a compromise structure that uses only local measurements, while still accounting for interactions, will be beneficial. We call this structure partially decentralized.

Figure 1. Partially decentralized observer structure.

Once the overall dynamic system is appropriately partitioned into a number of low-order loosely coupled subsystems, a partially decentralized observer structure can be constructed, without loss of generality, as shown in Figure 1. Each local observer only receives a subset of the output vector y and all (or a subset of) the input vector u to calculate the local subsystem state estimates x̂i. The ith observer (Ŝi, i ∈ N) for linear dynamic systems may be represented in the state-space form as follows:

$$\hat{S}_i:\quad \dot{\hat{x}}_i = A_{ii}\hat{x}_i + L_i(y_i - C_{ii}\hat{x}_i) + B_{ii}u_i + \sum_{j=1,\,j\neq i}^{N} A_{ij}\hat{x}_j + \sum_{j=1,\,j\neq i}^{N} B_{ij}u_j \qquad (5)$$

Here, Li is a constant-gain matrix of suitable dimension. If a nonlinear model is used in the observer to estimate the state variables, the ith observer may be represented by

$$\hat{S}_i:\quad \dot{\hat{x}}_i = f_i(\hat{x}, u) + L_i(y_i - h_i(\hat{x}_i)) \qquad (6)$$

Here, the local gain Li can be constant or, more generally, a function of the state estimates, Li(x̂), both the local ones and those for other subsystems. Higher-order observers of the form

$$\dot{\hat{x}}_i = f_i(\hat{x}, u) + L_i(\hat{x}, P_i)(y_i - h_i(\hat{x}_i)), \qquad \dot{P}_i = \Phi_i(\hat{x}, P, u) \qquad (7)$$

could also be used, like the EKF, which will be discussed in section 4.2.

3. Parallel Simulation Scheme for the Observer

The observer equations, (5), (6), or (7), will, in general, be large-scale and must be numerically solved over the length of the sampling period. Assuming zero-order hold (ZOH) for both inputs and outputs, one must simulate an autonomous system of nonlinear ordinary differential equations of the form
$$\frac{dX_i}{dt} = F_i(X_1, \ldots, X_{i-1}, X_i, X_{i+1}, \ldots, X_N), \qquad i = 1, \ldots, N \qquad (8)$$

where Xi will be x̂i in the case of observer (5) or (6), or [x̂i, Pi]^T if a higher-order observer (7) is used, and Fi will be the right-hand side of the observer equations. If there are no interconnections among the subsystems (∂Fi/∂Xj = 0 for j ≠ i), the subsystems can be integrated in a completely independent parallel fashion. In general, there will be interaction among the subsystems that must be accounted for. This can be accomplished in an iterative fashion by employing a dynamic block Jacobi-like scheme, which can be applied to the interconnected subsystems in eq 8 as follows:

$$\frac{dX_i^{(k+1)}}{dt} = F_i(X_1^{(k)}, \ldots, X_{i-1}^{(k)}, X_i^{(k+1)}, X_{i+1}^{(k)}, \ldots, X_N^{(k)}), \qquad X_i^{(0)} = X_{i0}, \quad 0 \le t \le T \qquad (9)$$

Here, k denotes the iteration index and T denotes the time horizon over which the solution is computed in parallel. Equation 9 is a functional iteration, analogous to the predictor-corrector integration methods. At k = 0, it represents the prediction step, for which the interconnection variables Xj^(k), j ≠ i, are assumed constant at their initial values over the given time horizon (i.e., a zeroth-order extrapolation); for subsequent iterations (k > 0), eq 9 represents the corrector steps, which use the updated functional values of Xj^(k).
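As an illustration only (not part of the original implementation, which used LSODES and MPI), the following Python sketch shows one possible realization of the iteration in eq 9 for two coupled linear subsystems; the matrices, horizon, and tolerance are hypothetical placeholders.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import CubicSpline

# Hypothetical two-subsystem model: dx1/dt = A11 x1 + A12 x2, dx2/dt = A21 x1 + A22 x2.
A11, A12 = np.array([[-2.0, 1.0], [0.0, -3.0]]), np.array([[0.1, 0.0], [0.0, 0.2]])
A21, A22 = np.array([[0.05, 0.1], [0.0, 0.1]]), np.array([[-1.0, 0.5], [0.0, -2.0]])

T = 0.5                                   # coordination horizon (placeholder value)
t_grid = np.linspace(0.0, T, 21)
x10, x20 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Zeroth-order extrapolation at k = 0: hold each neighbor trajectory at its initial value.
X1 = np.tile(x10, (t_grid.size, 1))
X2 = np.tile(x20, (t_grid.size, 1))
for k in range(50):
    # Freeze the previous iterates and interpolate them (cubic splines, as in the text).
    s1, s2 = CubicSpline(t_grid, X1), CubicSpline(t_grid, X2)
    # In a parallel run, these two integrations would execute on different nodes.
    sol1 = solve_ivp(lambda t, x: A11 @ x + A12 @ s2(t), (0.0, T), x10, t_eval=t_grid)
    sol2 = solve_ivp(lambda t, x: A22 @ x + A21 @ s1(t), (0.0, T), x20, t_eval=t_grid)
    X1_new, X2_new = sol1.y.T, sol2.y.T
    err = max(np.abs(X1_new - X1).max(), np.abs(X2_new - X2).max())
    X1, X2 = X1_new, X2_new
    if err <= 1e-8:                       # the "until" test of the pseudocode below
        break

In an actual multicomputer implementation, each of the two integrations would run on its own node, and only the interconnection trajectories would be exchanged at the end of the horizon.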
Figure 2. Dynamic Jacobi-like iteration scheme for coordination on multicomputers.
The dynamic block Jacobi-like iteration scheme can be summarized in pseudocode form as follows:

    k ← 0
    Initialize X^(0)(t) = X0,  t ∈ [0, T]
    Repeat {
        k ← k + 1
        for each i ∈ N {
            solve  dXi^(k+1)/dt = Fi(X1^(k), X2^(k), ..., X(i-1)^(k), Xi^(k+1), X(i+1)^(k), ..., XN^(k))
            for (Xi^(k+1)(t); t ∈ [0, T]), given Xi^(k+1)(0) = Xi0
        }
    } until (max over 1 ≤ i ≤ N of max over t ∈ [0, T] of |Xi^(k+1)(t) - Xi^(k)(t)| ≤ ε)

Note that the differential equation (9) has only one unknown variable, Xi^(k+1). The variables X1^(k), ..., X(i-1)^(k), X(i+1)^(k), ..., XN^(k) are known from the previous iteration.

The above parallel simulation scheme maps well onto message-passing parallel computers, so-called multicomputers. Figure 2 depicts a pictorial representation of the dynamic Jacobi-like iteration scheme for simulation on multicomputers. Such an architecture consists of a number of individual processors (nodes), each with its own local memory and connected by a high-performance network. To share information and to synchronize their program executions, the nodes explicitly send and receive messages over the network. An IBM SP2 multicomputer at the University of Michigan Center for Parallel Computing was utilized in this work. The current SP2 is composed of a cluster of 48 processor nodes interconnected by a high-bandwidth (40 Mbytes/s), low-latency (40 µs) communication switch. Each node is equivalent to an IBM RS/6000 model 390 workstation, is equipped with a superscalar 66.7 MHz Power2 CPU and 256 MB of RAM, provides a peak of 266 MFLOPS (million floating-point operations per second) of processing power, and operates under the AIX operating system (IBM's version of Unix). MPI (Gropp et al., 1994) is used as the message-passing interface library on the IBM SP2 nodes. The mapping of the above simulation algorithm for the parallel observer onto message-passing multicomputers will be discussed in a subsequent section.

One important advantage of the parallel simulation scheme is the flexibility of using different numerical integration methods (stiff or nonstiff, with an appropriate method order and step-size sequence) for different subsystems, each best suited to its dynamic behavior (Liu and Brosilow, 1987). In order to account for the fact that different subsystems may have different integration time mesh points (not known a priori) and to eliminate the need for internodal communication during the integration of any single subsystem, the functional values of the interconnection variables can be obtained by interpolation, e.g., using cubic splines. As a result, internodal communications are activated only at the end of a selected common time horizon. In general, the size of the time horizon T is set by the speed of convergence of the iterations, and it is assumed that T ≤ Ts, where Ts is the sampling period.

It is essential in the proposed parallel implementation to establish convergence conditions for the iterative scheme. Abdel-Jabbar et al. (1998) derived sufficient conditions for guaranteed convergence of the block Jacobi-like iteration scheme (9), for both linear and nonlinear dynamic systems. They are based on a contraction mapping analysis. The main results are summarized below. Let us first define the norms used in the convergence theorems.

Definition 1: Let x ∈ Rn be decomposed as x = (x1, x2, ..., xN), with xi ∈ Rni, where ∑_{i=1}^{N} ni = n. Given any vector norm ||·||_{R^{n_i}} for Rni, the quantity

$$\|x\| = \max_i \|x_i\|_{R^{n_i}} \qquad (10)$$
is a norm on Rn, which is called the block-maximum norm.

Definition 2: Let Cn[a, b] denote the space of all continuous functions mapping the interval [a, b] into Rn, where [a, b] is a bounded interval in R. The quantity ||·||_C, defined as
$$\|x\|_C = \max_{t \in [a,b]} \|x(t)\| \qquad (11)$$
is a norm on Cn[a, b], with respect to which it is complete. The following theorems provide sufficient conditions for both linear and nonlinear dynamic systems. They are based on a contraction mapping analysis. Theorem 1: Consider a linear system composed of N interconnected subsystems represented as
$$\frac{dx_i}{dt} = A_{ii}x_i + \sum_{j=1,\,j\neq i}^{N} A_{ij}x_j, \qquad i = 1, \ldots, N \qquad (12)$$
where Aii ∈ Rni×ni is the diagonal block corresponding to the ith subsystem; Aij ∈ Rni×nj are the off-diagonal blocks representing the state-variable interactions between subsystems. Then, the dynamic block Jacobi-like iteration
$$\frac{dx_i^{(k+1)}}{dt} = A_{ii}x_i^{(k+1)} + \sum_{j=1,\,j\neq i}^{N} A_{ij}x_j^{(k)}, \qquad 0 \le t \le T, \quad i = 1, \ldots, N \qquad (13)$$

is convergent in the Banach space (Cn[0, T], ||·||_C) over the time interval [0, T], if

$$\max_{1\le i\le N}\left(T\,e^{\|A_{ii}\|T}\,\|A_{Ci}\|\right) < 1 \qquad (14)$$

where

$$A_{Ci} = [A_{i1}\,|\,A_{i2}\,|\,\cdots\,|\,A_{i,i-1}\,|\,0\,|\,A_{i,i+1}\,|\,\cdots\,|\,A_{iN}] \qquad (15)$$
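As a brief illustration (not from the paper), condition 14 can be checked numerically before a horizon T is selected; the Python sketch below uses hypothetical blocks and the induced 2-norm as one possible choice of norm.

import numpy as np

# Hypothetical 2-subsystem partition of A; the blocks are placeholders.
blocks = {(1, 1): np.array([[-2.0, 1.0], [0.0, -3.0]]), (1, 2): np.array([[0.1, 0.0], [0.0, 0.2]]),
          (2, 1): np.array([[0.05, 0.1], [0.0, 0.1]]), (2, 2): np.array([[-1.0, 0.5], [0.0, -2.0]])}

def bound(T, blocks, N=2):
    # Evaluate max_i T * exp(||A_ii|| T) * ||A_Ci||, cf. eq 14.
    vals = []
    for i in range(1, N + 1):
        A_Ci = np.hstack([blocks[(i, j)] for j in range(1, N + 1) if j != i])
        vals.append(T * np.exp(np.linalg.norm(blocks[(i, i)], 2) * T) * np.linalg.norm(A_Ci, 2))
    return max(vals)

T = 1.0
while bound(T, blocks) >= 1.0:   # shrink the horizon until eq 14 guarantees convergence
    T *= 0.5
print(T, bound(T, blocks))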
Theorem 2: Consider a nonlinear system composed of N interconnected subsystems represented by (8). Then the dynamic block Jacobi-like iteration (9) is convergent in the Banach space (Cn[0, T], ||·||_C) over the time interval [0, T] if, for all i, the Fi(X) satisfy the following conditions:

1. ||Fi(X0)|| ≤ hi
2. ||Fi(X) - Fi(X′)|| ≤ Li1 ||Xi - X′i|| + Li2 [max_{j≠i} ||Xj - X′j||], ∀ (X, X′) ∈ B

where B is the ball defined by B = {X ∈ Rn: ||X - X0|| ≤ r}, and the positive constants hi, Li1, Li2, and r are related via the inequalities

$$\max_i\left(T\,e^{L_{i1}T}\,L_{i2}\right) < 1 \qquad (16)$$

$$\max_i\left(\frac{T\,e^{L_{i1}T}\,h_i}{1 - T\,e^{L_{i1}T}\,L_{i2}}\right) \le r \qquad (17)$$
Remark 1: Condition 2 is a Lipschitz condition. The constants Li1 and Li2 are called Lipschitz constants and can be computed from
$$L_{i1} = \sup_{X\in B}\left\|\frac{\partial F_i(X)}{\partial X_i}\right\| \qquad (18)$$

$$L_{i2} = \sup_{X\in B}\left\|\left[\frac{\partial F_i(X)}{\partial X_1}\,\Big|\,\cdots\,\Big|\,\frac{\partial F_i(X)}{\partial X_{i-1}}\,\Big|\,0\,\Big|\,\frac{\partial F_i(X)}{\partial X_{i+1}}\,\Big|\,\cdots\,\Big|\,\frac{\partial F_i(X)}{\partial X_N}\right]\right\| \qquad (19)$$

Also, condition 2 implies that F is locally Lipschitz-continuous around X0 and is therefore differentiable (Vidyasagar, 1993).

Remark 2: Note that, given any finite constants hi, Li1, Li2, and r, inequalities (16) and (17) can always be satisfied by choosing T sufficiently small.

Remark 3: In the linear case, Li1 = ||Aii||, Li2 = ||ACi||, and B = Rn; thus, (16) becomes (14) and (17) is automatically satisfied.

Remark 4: The contraction mapping arguments are equally valid if the initial time is changed from 0 to an arbitrary time t0 with X(t0) as the initial state. Exactly the same conditions are valid when the iteration is applied over the interval [t0, t0 + T]. In a parallel simulation, the dynamic block Jacobi-like iteration is repeated over subsequent time intervals of length T.

Corollary: At every step of the Jacobi-like iteration,

$$\|X - X^{(k+1)}\|_C \le \rho\,\|X - X^{(k)}\|_C \qquad (20)$$

where ρ = max_i(T e^{L_{i1}T} L_{i2}); in other words, ρ is a guaranteed contraction factor.
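In practice, the constants in eqs 18 and 19 (and hence an admissible horizon T from eq 16) can be estimated by sampling the Jacobian of a given Fi over the ball B; the following Python sketch, with a user-supplied Fi and hypothetical interfaces, is one such Monte Carlo estimate and is not part of the original work.

import numpy as np

def jacobian_fd(F, X, eps=1e-6):
    # Finite-difference Jacobian of F: R^n -> R^m at X.
    f0 = F(X)
    J = np.empty((f0.size, X.size))
    for j in range(X.size):
        dX = np.zeros_like(X)
        dX[j] = eps
        J[:, j] = (F(X + dX) - f0) / eps
    return J

def lipschitz_estimates(F_i, idx_i, X0, r, n_samples=200, seed=0):
    # Sample B = {||X - X0|| <= r} and take the largest observed block norms,
    # approximating the suprema in eqs 18-19 (only a sampled lower bound).
    rng = np.random.default_rng(seed)
    L1 = L2 = 0.0
    for _ in range(n_samples):
        d = rng.standard_normal(X0.size)
        X = X0 + r * rng.uniform() * d / np.linalg.norm(d)
        J = jacobian_fd(F_i, X)
        L1 = max(L1, np.linalg.norm(J[:, idx_i], 2))
        L2 = max(L2, np.linalg.norm(np.delete(J, idx_i, axis=1), 2))
    return L1, L2

# Condition (16) can then be checked for a candidate horizon T as
#   T * np.exp(L1 * T) * L2 < 1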
In order to assist the reading of the present paper, the proofs of the above convergence theorems are included as Supporting Information. The effect of the convergence conditions on the design of state observers will be deferred to section 5, after alternative decentralized design approaches are described in the next section.

4. Decentralized Observer Design Methods

The principle of designing feedback controllers by assigning the eigenvalues of the closed-loop system is well-known from multivariable control theory. The gains are obtained by placing the eigenvalues of the closed-loop system within the left-half complex plane at prescribed locations. The use of optimization methods in the design of feedback controllers is also well-known for multivariable systems. As dynamic systems become more complex and large-scale, the need for "optimal" designs becomes imperative. For a given model of the process, controller parameters (gains) have to be found for which the closed-loop system optimizes an appropriate performance index.

The design of state observers is dual to the design of feedback controllers. The gains of the observer can be obtained by placing the eigenvalues of the closed-loop observer in the left half of the complex plane, in order to make the error (the difference between the measured process outputs and the output of the model used in the observer) converge to zero exponentially (for linear systems). The observer eigenvalues are completely arbitrary, and we have no means of selecting them apart from trial-and-error or past plant experience. Therefore, the optimal design approach is more useful and more systematic.

In principle, two alternative optimal decentralized control structures can be distinguished: (1) a structure with a block diagonal gain matrix that can be selected by global optimization of the system performance or (2) a structure with local gain vectors for individual subsystems selected by local optimization of each subsystem performance and a subsequent check of the global performance. Although the former approach is more rigorous, it involves a bigger optimization problem, whose solution for large-scale systems is computationally expensive. Furthermore, the structure with a block diagonal gain matrix is impractical for serial implementation, since no significant computational speedup may be realized when compared with the centralized structure. Parallel implementation of this structure, however, is more meaningful. In contrast, the second approach is computationally less intensive, as all calculations refer merely to the isolated subsystems and involve manipulations with matrices of relatively low order when compared to the order of the overall system matrices. Nevertheless, it involves constraint conditions under which stability and optimal performance are satisfied for the overall system. Since the design of observers is dual to the design of feedback controllers, the previously mentioned alternative decentralized design methods can be carried over to an optimal design of decentralized observers.

4.1. Linear-Quadratic (LQ) Case. Consider a linear process modeled by
$$\dot{x} = Ax + Bu + Gv, \qquad y = Cx + w \qquad (21)$$
Here x is the n × 1 state vector, u is the m × 1 manipulated input vector, y is the p × 1 output vector, v is the nv × 1 process noise vector, and w is the p × 1 measurement noise vector. A, B, C, and G are constant matrices of appropriate dimensions. It is assumed that v and w are white noise with zero means and have covariance matrices (known or assumed) of Q and R, respectively. Suppose we try to reconstruct the process states via a state observer of the form
$$\dot{\hat{x}} = A\hat{x} + Bu + L(y - C\hat{x}) \qquad (22)$$
Here, because of the presence of the process and measurement noises, the actual state x and the estimated state xˆ will not agree, even if the observer is correctly initialized. In fact, if one defines the estimation error as
$$e = x - \hat{x} \qquad (23)$$
then, from eqs 21-23, the observer error dynamics can be obtained as follows:
$$\dot{e} = (A - LC)e + Gv - Lw \qquad (24)$$
Intuitively speaking, an observer will be optimal if it corresponds to the smallest possible estimation error induced by the process and measurement noises. The most rigorous approach in formulating an observer optimization problem is the stochastic approach (Friedland, 1986). It has been shown (Skelton, 1988) that an alternative deterministic (nonstochastic) approach can be formulated that leads to the same result as the stochastic approach. In what follows, an outline of the deterministic formulation of decentralized state estimation is presented.

Alternative 1: Global Optimization. Referring to the above error dynamics (24), let ev(t) be the estimation error induced by a unit impulse process noise (v(t) = δ(t)), and let ew(t) be the estimation error induced by a unit impulse measurement noise (w(t) = δ(t)). Then, a reasonable optimization criterion would be a weighted sum of the "sizes" of ev and ew. For the above process model, the following quadratic performance index is selected:
$$J = \int_0^\infty \left(e_v^T Q e_v + e_w^T R e_w\right) dt \qquad (25)$$
Here, in the deterministic framework, Q and R are diagonal positive definite matrices, which can be chosen arbitrarily as tunable weights that reflect the relative importance of the process and measurement noises in the performance index J of (25) for which the observer is optimal. As previously mentioned, if statistical information is available, Q and R are the covariance matrices of the process noise and the measurement noise, respectively. Postulating a block diagonal gain matrix L = diag(L1, L2, ..., LN) to design a decentralized observer, the following constrained optimization problem can be solved:
$$\min J = \mathrm{Tr}(P) \qquad (26)$$

subject to

$$(A - LC)P + P(A - LC)^T = -(GQG^T + LRL^T) \qquad (27)$$

$$\begin{vmatrix} p_{11} & \cdots & p_{1k} \\ \vdots & & \vdots \\ p_{k1} & \cdots & p_{kk} \end{vmatrix} > 0, \qquad k = 1, \ldots, n \qquad (28)$$
where P is a positive definite symmetric matrix that is the solution of the Lyapunov equation (27). Positive definiteness is imposed by the stability constraint (28).

Alternative 2: Local Optimization. Here we will see how to design local gains for the decentralized observer, which is obtained from the partitioned subsystems Si, i = 1, ..., N, and application of local optimization to each individual subsystem. In this context, the following quadratic performance index is optimized:
$$J_o = \sum_{i=1}^{N} J_{oi} \qquad (29)$$
with
$$J_{oi} = \int_0^\infty \left(e_{v_i}^T Q_i e_{v_i} + e_{w_i}^T R_i e_{w_i}\right) dt, \qquad i = 1, \ldots, N \qquad (30)$$
The solution of the local optimization problem
$$\min J_{oi} = \mathrm{Tr}(P_i) \qquad (31)$$

is given by

$$L_i = P_iC_{ii}^T R_i^{-1} \qquad (32)$$
where Pi is a positive definite matrix that satisfies the local algebraic Riccati equation
$$A_{ii}P_i + P_iA_{ii}^T - P_iC_{ii}^T R_i^{-1} C_{ii}P_i + G_iQ_iG_i^T = 0 \qquad (33)$$
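For the linear subsystems considered here, the local design (31)-(33) is a standard Kalman-type computation; a minimal Python sketch using SciPy's continuous-time algebraic Riccati solver (via the control/estimation duality) is given below, with placeholder subsystem matrices. It is an illustration only, not the MATLAB LQE routine used later in the paper.

import numpy as np
from scipy.linalg import solve_continuous_are

def local_observer_gain(Aii, Cii, Gi, Qi, Ri):
    # solve_continuous_are(a, b, q, r) solves a^T X + X a - X b r^{-1} b^T X + q = 0,
    # so a = Aii^T and b = Cii^T recover the filter Riccati equation (33) with X = Pi.
    Pi = solve_continuous_are(Aii.T, Cii.T, Gi @ Qi @ Gi.T, Ri)
    Li = Pi @ Cii.T @ np.linalg.inv(Ri)           # eq 32
    return Li, Pi

# Placeholder data for one subsystem (illustration only).
Aii = np.array([[-0.5, 0.2], [0.0, -1.0]])
Cii = np.array([[1.0, 0.0]])
Gi = np.eye(2)
Li, Pi = local_observer_gain(Aii, Cii, Gi, Qi=0.01 * np.eye(2), Ri=0.01 * np.eye(1))
Joi = np.trace(Pi)                                # local performance index, eq 31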
The question now is: under what conditions is the interconnected observer with only local decentralized gains (32) stable with respect to the overall system S? In other words, if the local state estimates for the isolated subsystems Si are stable, under what conditions on the interactions does the local stability guarantee the global stability for the overall system S? It turns out that this is dual to the problem of decentrally stabilizable composite systems treated by Siljak (1978) and Ikeda and Siljak (1982a). In the present work, scalar Lyapunov functions (Lunze, 1992) will be constructed to establish stability conditions for the design of decentralized observers. This analysis leads to the following theorem.

Theorem 3: Suppose that the local observer gains (Li) have been designed by (32) and (33), for a given pair of weighting matrices Qi and Ri. Define an N × N matrix W̃ = (w̃ij) according to
$$\tilde{w}_{ii} = \lambda_{\min}\!\left(G_iQ_iG_i^T + P_iC_{ii}^T R_i^{-1} C_{ii}P_i\right), \qquad \tilde{w}_{ij} = -2\lambda_{\max}(P_i)\,\|A_{ij}\| \qquad (34)$$
where λmin(·) and λmax(·) denote the minimum and maximum eigenvalues of a matrix, and ||·|| is the matrix norm induced by the Euclidean vector norm. If there exist positive numbers d1, d2, ..., dN such that the inequalities
$$d_i|\tilde{w}_{ii}| > \sum_{j=1,\,j\neq i}^{N} d_j|\tilde{w}_{ij}|, \qquad i = 1, \ldots, N \qquad (35)$$
hold or, equivalently, if the real parts of the eigenvalues of W̃ are all positive, then the overall observer (22) is stable.

The proof of this theorem and a brief review of stability analysis methods with scalar Lyapunov functions are provided in the Supporting Information. Theorem 3 suggests the following design algorithm.

Algorithm 1: Decentralized Design of Decentralized Observers.
1. Solve the local LQ problem according to eqs 31-33.
2. Calculate the test matrix W̃ according to eq 34. Check whether W̃ satisfies the condition of Theorem 3. If the stability condition fails, change the weighting matrices Qi and Ri of the performance indices and continue with step 1; otherwise, stop.

Remark 5: The advantage of applying Theorem 3 is the simplicity of the stability test. Only an N × N matrix W̃ has to be checked for an n-dimensional system.

Remark 6: In the special cases when a block-diagonal or lower block-triangular (hierarchical) structure of the subsystems is established by appropriate partitioning, then, respectively, w̃ij = 0 ∀ i ≠ j or w̃ij = 0 ∀ i < j, and hence the stability condition (35) is automatically satisfied. This means that the interconnected observer with decentralized gains can be made stable for the overall system by stabilizing decentrally each of the local observers. This result is dual to the decentralized stabilization of hierarchical subsystems that has been proven by Michel and co-workers (1978) and Vidyasagar (1980).

In regard to the optimality of the local design approach, the performance index Jo is generally expected to be greater than or equal to the performance index J, which is calculated by the global optimization approach. Certainly, they can be equal if there are no interactions among subsystems. Global optimality conditions for decentralized control of large-scale systems have been derived by Ikeda and Siljak (1982b) and Yang et al. (1989) and are found to be very similar to the ones given by Ikeda and Siljak (1982a) for global stability. As dual to the decentralized control problem, we expect a similar condition for global optimality of the decentralized observer as the one given in Theorem 3 for global stability. Therefore, one can find Qi and Ri that satisfy the stability condition given in Theorem 3 and also achieve optimal performance.

It should be mentioned here that a dynamic matrix Riccati equation of the form
$$\dot{P}_i = A_{ii}P_i + P_iA_{ii}^T - P_iC_{ii}^T R_i^{-1} C_{ii}P_i + G_iQ_iG_i^T \qquad (36)$$

arises if the above optimization problem is defined over a finite time interval instead of the infinite time interval given in eq 30. Note that the steady-state version of the above differential matrix Riccati equation is the algebraic Riccati equation (33). The finite-time optimization formulation gives rise to a higher-order observer, which is not linear because eq 36 is not linear.

4.2. Nonlinear Case. In principle, one could follow a nonlinear analogue to global optimization (Alternative 1 above). For example, an observer of the form of eq 6 with a constant gain Li could be optimized, but only numerically, as there exists no nonlinear analog to (26)-(28). In general, global optimization can be very
difficult and expensive for large-scale nonlinear systems, even with a constant-gain observer (6). Along the lines of Alternative 2 (local optimization), one could apply a local nonlinear extended Kalman filter (EKF) algorithm. This algorithm requires an on-line integration of ni(ni + 3)/2 ordinary differential equations (ODEs) for each subsystem Si in real time (ni for the model equations and ni(ni + 1)/2 for the matrix Riccati equation), where ni is the number of unknown states. Therefore, as the system is partitioned into a larger number of subsystems, the number of differential equations is reduced; the equations could then be integrated with less computational effort. Consider a nonlinear process described by
$$\dot{x} = f(x, u) + G(v), \qquad x(0) = x_0$$

$$y = h(x) + w \qquad (37)$$
where f, h, and G are smooth vector functions, and v and w are the process and measurement noises, respectively. The set of EKF differential equations for the ith subsystem (Seinfeld, 1970; Jang et al., 1986) is given by
$$\dot{\hat{x}}_i = f_i(\hat{x}, u) + P_ih_{x_i}^T R_i^{-1}(y_i - h_i(\hat{x}_i)), \qquad \hat{x}_i(0) = \hat{x}_{i0} \qquad (38)$$
$$\dot{P}_i = f_{x_i}P_i + P_if_{x_i}^T - P_ih_{x_i}^T R_i^{-1} h_{x_i}P_i + G_{v_i}Q_iG_{v_i}^T, \qquad P_i(0) = P_{i0} \qquad (39)$$

where
$$f_{x_i} = \left[\frac{\partial f_i}{\partial \hat{x}_i}\right] \qquad (40)$$

$$h_{x_i} = \left[\frac{\partial h_i}{\partial \hat{x}_i}\right] \qquad (41)$$

$$G_{v_i} = \left[\frac{\partial G_i}{\partial v_i}\right] \qquad (42)$$
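A minimal Python sketch of integrating eqs 38 and 39 jointly for one subsystem is given below; it is illustrative only. The model functions fi and hi, the neighbor interpolant x_other, and the finite-difference Jacobians are assumptions made for this sketch, not the LSODES-based implementation used in the paper.

import numpy as np
from scipy.integrate import solve_ivp

def ekf_subsystem_rhs(t, z, fi, hi, yi, Qi, Ri, Gvi, ni, x_other):
    # z stacks the local estimate and covariance: z = [x_hat_i, vec(P_i)].
    x_hat = z[:ni]
    P = z[ni:].reshape(ni, ni)
    eps = 1e-6
    f0, h0 = fi(x_hat, x_other(t), t), hi(x_hat)
    # Finite-difference Jacobians standing in for eqs 40 and 41 along the trajectory.
    fx = np.column_stack([(fi(x_hat + eps * e, x_other(t), t) - f0) / eps for e in np.eye(ni)])
    hx = np.column_stack([(hi(x_hat + eps * e) - h0) / eps for e in np.eye(ni)])
    K = P @ hx.T @ np.linalg.inv(Ri)                         # time-varying local gain
    dx = f0 + K @ (yi(t) - h0)                               # eq 38
    dP = fx @ P + P @ fx.T - K @ hx @ P + Gvi @ Qi @ Gvi.T   # eq 39
    return np.concatenate([dx, dP.ravel()])

# One coordination horizon [t0, t0 + T] is then integrated per node, e.g.
# sol = solve_ivp(ekf_subsystem_rhs, (t0, t0 + T), np.concatenate([x_hat0, P0.ravel()]),
#                 args=(fi, hi, yi, Qi, Ri, Gvi, ni, x_other), method="LSODA")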
The partial derivatives in eqs 40-42 can be evaluated analytically or numerically (by numerical differentiation) along the solution trajectory. Here, Qi and Ri are positive definite weighting matrices in the same sense as for the linear case. Pi is symmetric, and Pi0 is an estimate of the covariance in x0. In this study, the above ODE system is solved by the integrator LSODES (Hindmarsh, 1983), which employs the Adams-Moulton (AM) methods as the nonstiff option and Gear's BDF methods as the stiff option. The parallel solution of the EKF equations (38)-(39) is performed by the block Jacobi-like iteration method described in section 3. Each subsystem's equations (the model equations and the matrix Riccati equations) can be assigned to a computer node to be solved over a common time horizon T, concurrently with the solution of the other subsystems' equations on different computer nodes. The interactions among the subsystems are exchanged as interpolating spline functions, which makes the estimation scheme more accurate than simply holding the interconnection values constant over the horizon.

5. Effect of Parallelization on Observer Design

As mentioned earlier, the observers would be of the form given in eq 8, since a zero-order hold is assumed
for inputs and outputs over each time horizon, and the derived convergence conditions of Theorems 1 and 2 can be applied here. The following propositions provide sufficient convergence conditions for the parallel observer, both for the linear and nonlinear cases. Here, the same Banach space (Cn[t0, t0 + T], ||·||_C) over the time interval [t0, t0 + T] is used. Convergence is understood in the sense that ||x̂^(k+1) - x̂^(k)||_C tends to zero as k → ∞.

Proposition 1: Linear Case. Consider the linear observer (5). The dynamic block Jacobi-like iteration
$$\dot{\hat{x}}_i^{(k+1)} = (A_{ii} - L_iC_{ii})\hat{x}_i^{(k+1)} + \sum_{j=1,\,j\neq i}^{N} A_{ij}\hat{x}_j^{(k)} + L_iy_i(t_0) + B_iu(t_0) \qquad (43)$$

is convergent in the Banach space (Cn[t0, t0 + T], ||·||_C) over the time horizon [t0, t0 + T], if

$$\max_{1\le i\le N}\left(T\,e^{\|A_{ii}-L_iC_{ii}\|T}\,\|A_{Ci}\|\right) < 1 \qquad (44)$$

where

$$A_{Ci} = [A_{i1}\,|\,A_{i2}\,|\,\cdots\,|\,A_{i,i-1}\,|\,0\,|\,A_{i,i+1}\,|\,\cdots\,|\,A_{iN}] \qquad (45)$$

Proposition 2: Nonlinear Case. Consider the constant-gain nonlinear observer (6). The dynamic block Jacobi-like iteration scheme

$$\dot{\hat{x}}_i^{(k+1)} = f_i(\hat{x}_1^{(k)}, \ldots, \hat{x}_{i-1}^{(k)}, \hat{x}_i^{(k+1)}, \hat{x}_{i+1}^{(k)}, \ldots, \hat{x}_N^{(k)}, u(t_0)) + L_i(y_i(t_0) - h_i(\hat{x}_i^{(k+1)})) \qquad (46)$$

is convergent in the Banach space (Cn[t0, t0 + T], ||·||_C) over the time horizon [t0, t0 + T], if

$$\max_{1\le i\le N}\left(T\,e^{L_{i1}T}\,L_{i2}\right) < 1 \qquad (47)$$

where the Lipschitz constants Li1 and Li2 can be computed as

$$L_{i1} = \sup_{\hat{x}\in B}\left\|\frac{\partial f_i(\hat{x}, u)}{\partial \hat{x}_i} - L_i\frac{\partial h_i(\hat{x}_i)}{\partial \hat{x}_i}\right\| \qquad (48)$$

$$L_{i2} = \sup_{\hat{x}\in B}\left\|\left[\frac{\partial f_i(\hat{x})}{\partial \hat{x}_1}\,\Big|\,\cdots\,\Big|\,\frac{\partial f_i(\hat{x})}{\partial \hat{x}_{i-1}}\,\Big|\,0\,\Big|\,\frac{\partial f_i(\hat{x})}{\partial \hat{x}_{i+1}}\,\Big|\,\cdots\,\Big|\,\frac{\partial f_i(\hat{x})}{\partial \hat{x}_N}\right]\right\| \qquad (49)$$

Remark 7: The convergence conditions of Theorem 2 can also be applied to more general nonlinear observers, like (6) with state-dependent gains or a higher-order observer of the form (7). However, the specific conditions that will arise for these cases will be more complicated.

The above convergence conditions explicitly indicate the effect of the observer gain on the convergence rate of the parallel solution scheme. Higher observer gains, which lead to faster error dynamics, will result in higher Li1, and therefore a smaller time horizon must be used in the Jacobi-like iteration scheme. Small time horizons, however, can lead to excessive communication overhead, thereby degrading the possible attainable speedup in the computations. Further discussion appears with the example, where the above issue can be better quantified.

Figure 3. Model-based controller: partially decentralized observer with decentralized state feedback.

6. Model-Based Control Structure and Mapping to Multicomputers
When a partially decentralized parallel observer with the structure of Figure 1 (e.g., an observer of the form (5), (6), or (7)) is coupled with a fully decentralized state feedback (3), it gives rise to an overall model-based controller of the structure shown in Figure 3. The analysis of the resulting closed-loop system is particularly easy and insightful in the linear case. This will be briefly outlined in the following paragraphs. Consider the ith subsystem given in eq 1. When the fully decentralized state feedback (3) is applied, the following compact form of the overall system can be written:
$$\dot{x} = A_Dx - B_DK_D\hat{x} + A_Cx - B_CK_D\hat{x}, \qquad y = C_Dx \qquad (50)$$
Here, AD = diag(A11, A22, ..., ANN), BD = diag(B11, B22, ..., BNN), and CD = diag(C11, C22, ..., CNN) are block diagonal matrices with blocks having appropriate dimensions. AC = (Aij) and BC = (Bij) are the block off-diagonal matrices representing the interactions. A fully decentralized state feedback with only a block diagonal gain matrix KD = diag(K1, K2, ..., KN) is being employed. Similarly, if the fully decentralized state feedback (3) is applied to the ith observer (i ∈ N), the following compact form of the observer may be obtained:
$$\dot{\hat{x}} = (A_D - L_DC_D - B_DK_D)\hat{x} + L_Dy + A_C\hat{x} - B_CK_D\hat{x} \qquad (51)$$

Here, LD = diag(L1, L2, ..., LN) is a constant observer gain matrix with blocks Li of appropriate dimension. From eqs 50 and 51, the error dynamics ė = ẋ - \dot{\hat{x}} are given by
$$\dot{e} = (A_D - L_DC_D + A_C)e \qquad (52)$$
The overall closed-loop system, which involves the closed-loop system (50) and the error system (52), is given by
$$\begin{bmatrix}\dot{x}\\ \dot{e}\end{bmatrix} = \begin{bmatrix} A_D - B_DK_D + A_C - B_CK_D & (B_D + B_C)K_D \\ 0 & A_D - L_DC_D + A_C \end{bmatrix}\begin{bmatrix} x\\ e\end{bmatrix} \qquad (53)$$
and can be split into two independent systems. Therefore, for linear systems, the separation property does hold for the overall partially decentralized control system. That is, the spectrum of eigenvalues σ of the closed-loop system consists of the spectrum of the state feedback system (50) and the spectrum of the observer (51), i.e.
$$\sigma = \sigma(A_D - B_DK_D + A_C - B_CK_D) \cup \sigma(A_D - L_DC_D + A_C) \qquad (54)$$
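The separation property can be verified numerically on any partitioned model by comparing the spectrum of the composite matrix in eq 53 with the union in eq 54; the short Python sketch below does this with placeholder block matrices (not the evaporator data of section 7).

import numpy as np

AD = np.diag([-1.0, -2.0]); AC = np.array([[0.0, 0.1], [0.2, 0.0]])
BD = np.eye(2);             BC = np.zeros((2, 2))
CD = np.eye(2)
KD = np.diag([2.0, 1.0]);   LD = np.diag([3.0, 4.0])

top = np.hstack([AD - BD @ KD + AC - BC @ KD, (BD + BC) @ KD])
bot = np.hstack([np.zeros_like(AD), AD - LD @ CD + AC])
closed_loop = np.vstack([top, bot])                      # composite matrix of eq 53

spec = np.sort_complex(np.linalg.eigvals(closed_loop))
union = np.sort_complex(np.concatenate([
    np.linalg.eigvals(AD - BD @ KD + AC - BC @ KD),      # state-feedback spectrum
    np.linalg.eigvals(AD - LD @ CD + AC)]))              # observer-error spectrum
print(np.allclose(spec, union))                          # separation property, eq 54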
In other words, the above optimal decentralized design methods can be applied to calculate the state feedback gains Ki and the observer gains Li independently of each other.

The proposed model-based control structure enables the building of multiple control stations for complex systems such as a chemical plant, which can be assigned to multiprocessors or to multicomputers with their inherent advantages of size and speed. A mapping of this structure onto the IBM SP2 multicomputer has been performed with the master-worker paradigm using the MPI message-passing software (Gropp et al., 1994). A description of the message-passing hardware and software used in this study appears in Abdel-Jabbar et al. (1998). In our parallel implementation, the master computer node i = 0 corresponds to the process and all its sensors, which provide the values of the inputs and outputs at a given sampling rate. Each worker computer node i (i = 1, ..., N) corresponds to the ith partially decentralized observer coupled with the ith state feedback. The communication network switch is utilized to exchange information between the master and the worker nodes (to receive the measured outputs and inputs) and between the worker nodes to exchange the interconnections (observed states). The sampling rates can be assumed to be the same for all sensors, in order to keep all nodes running in a synchronized mode and to reduce waiting time. However, multirate sampling is also feasible, as nodes can run asynchronously, but at the expense of using outdated values for the interconnection variables. Figure 4 shows a pictorial representation of the proposed model-based control mapping on multicomputers.

7. Implementation Examples

7.1. Linear Application: Double-Effect Evaporator. In this example, the problem of decentralized parallel state estimation and regulatory control will be considered. The process is a double-effect evaporator unit, shown schematically in Figure 5. The primary control objective is regulation of the product concentration C2 at a setpoint of approximately 10%, despite disturbances (d) in the feed flow rate F, the feed concentration CF, and the feed temperature TF. The liquid holdups in the first and second effects, W1 and W2, must also be maintained within acceptable operating limits. The primary manipulated variables u are the inlet steam flow rate S, the bottom flow rate from the first effect, B1, and the product flow rate from the second effect, B2.
Figure 4. Mapping of the partially decentralized model-based control on multicomputers with the master-worker paradigm.
A fifth-order state-space model of the evaporator was derived by Newell and Fisher (1972) based on linearized material and energy balances. The state-space model equations, of the form given in eq 21, are expressed in terms of normalized perturbation variables (see Appendix A). The definitions of the process variables and their normal steady-state values are given in Table 5. It will be assumed that not all five state variables are measured; hence, a full-order observer is considered here. First, the overall system is partitioned into two hierarchical (lower block-triangular) subsystems, as suggested by Hodzic and Siljak (1986). This partitioning was shown to have the least interaction among the subsystems and to guarantee the controllability and observability of the subsystems. A detailed discussion of this partitioning is given by Abdel-Jabbar (1996).
Subsystem 1 (S1):  x1 = {H1, C1, C2},  u1 = {S},  y1 = {C2}    (55)

and

Subsystem 2 (S2):  x2 = {W1, W2},  u2 = {B1, B2},  y2 = {W1, W2}    (56)
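A small Python sketch of how such a partition is represented in practice is given below; the state ordering and the numerical matrix are placeholders, since the actual evaporator matrices appear in Appendix A.

import numpy as np

A = np.arange(25.0).reshape(5, 5)   # stand-in values; the real 5 x 5 state matrix is in Appendix A
idx1 = [0, 1, 2]                    # assumed positions of {H1, C1, C2}
idx2 = [3, 4]                       # assumed positions of {W1, W2}

A11 = A[np.ix_(idx1, idx1)]; A12 = A[np.ix_(idx1, idx2)]
A21 = A[np.ix_(idx2, idx1)]; A22 = A[np.ix_(idx2, idx2)]
# For the hierarchical (lower block-triangular) partition used here, A12 is the zero block,
# so subsystem 1 can be integrated without iterating on subsystem 2.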
This hierarchical structure ensures the stability of the overall system by decentralized design methods (see Remark 6 above) and makes the implementation of the parallel iteration scheme easier, since no iterations will be needed for the first subsystem at the top level of the hierarchical structure. The process noise vector v in eq 21 is assumed to consist of six elements corresponding to the three disturbances d and the three manipulated variables u. In all simulations, the white noises v and w have not been included (noise-free case), and the performance of the observer is examined with a 20% error in all initial state estimates above the actual (true) values.
Figure 5. Schematic diagram of a double-effect evaporator.
As mentioned earlier, Q and R are chosen in a deterministic sense to reflect the weights of the process and measurement noises in the performance index. For the observer design, the matrices Q and R are assumed to have equal diagonal elements. That is,
$$Q = qI_s, \qquad R = rI_m \qquad (57)$$
where Is and Im are identity matrices of dimension s and m. Here, s = 6, and the value of m depends on the number of available measurements of the output variables y.

7.1.1. Performance of the Partially Decentralized Observer. The performance of the proposed partially decentralized observer structure shown in Figure 1 is tested. Different design methods are considered, namely, the two decentralized LQ alternatives outlined in section 4.1 and the fully centralized design. The responses obtained from the decentralized design and the partially decentralized implementation with the block Jacobi-like iteration scheme (43) are compared with the responses obtained from the centralized design and centralized implementation (with no iterations) applied to the overall system. Alternative 1 (global optimization) of the decentralized design has been performed using the optimization toolbox of MATLAB. An M-file was written that defines the objective function (26) in terms of the elements of the symmetric matrix P, and the function is minimized under the equality constraint (27) and the inequality constraints (28). The LQ design calculations for Alternative 2 (local optimization, with a separate Riccati equation for each subsystem) of the decentralized design have been performed with the linear-quadratic estimator design routine (LQE) available in MATLAB. The LQE procedure has also been applied to the overall system to calculate the centralized state observer gains. In all cases, r = q = 0.01 is chosen in (57). Table 1 summarizes the results obtained with the above design methods. It can be seen that both alternatives 1 and 2 of the decentralized design lead to essentially the same results in terms of the observer
gains and eigenvalues for the subsystems, as well as nearly identical optimality indices J = Jo = Jo1 + Jo2. Also, the centralized design gives a similar optimal performance index. The stability condition for the decentralized design with LQE (Alternative 2) is verified. The 2 × 2 matrix W̃ was calculated for the two subsystems as given in (34):
$$\tilde{W} = \begin{bmatrix} 3.4\times10^{-5} & 0 \\ -4.21\times10^{-4} & 9.81\times10^{-5} \end{bmatrix} \qquad (58)$$
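The test of Theorem 3 for this matrix reduces to an eigenvalue check; a one-line verification in Python (illustrative only) is:

import numpy as np

W_tilde = np.array([[3.4e-5, 0.0],
                    [-4.21e-4, 9.81e-5]])                 # eq 58
print(np.all(np.linalg.eigvals(W_tilde).real > 0))        # True: both eigenvalues are positive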
Clearly, the stability condition (35) is satisfied according to Theorem 3. The stability and optimality of the parallel observer are attributed, as stated in Remark 6, to the fact that the original system is partitioned into hierarchical subsystems (i.e., A12 = 0, which makes w̃12 zero in (58)).

The responses in Figures 6 and 7 illustrate the effect of inaccurate initial state estimates on the performance of the proposed partially decentralized observer, comparing it with the more rigorous centralized observer. The partially decentralized parallel observer provides satisfactory state estimates for the five state variables that were initialized with values 20% above the actual ones. The centralized observer, however, converged to the actual states faster than the partially decentralized one, especially for the levels W1 and W2 in the second subsystem. The simulation runs verified the stability of the proposed observer, as predicted by the theoretical results from inspection of the W̃ matrix.

7.1.2. Performance of Model-Based Control. Closed-loop simulation runs have been performed in order to evaluate the effectiveness of the partially decentralized observer in multivariable state feedback control systems. The state estimates produced by the observer are used in the state feedback control laws defined by eq 3. In this example, the regulatory control problem is considered, where it is desirable to restore the perturbed controlled output variables (caused by unknown disturbances) to their initial steady-state values. The optimal feedback matrices K1 and K2 used in this work were calculated by the LQ approach (routine LQR in MATLAB). Again, due to the hierarchical structure of the subsystems, global and local decentralized optimization approaches lead to the same gains.
Table 1. Observer Design Parameters with Different Design Methods

Centralized:
L = [  0.006  -0.007   -0.006
       0.020  -0.0251  -0.008
       0.036  -0.001   -0.033
      -0.001   0.139   -0.019
      -0.033  -0.019    0.090 ]
λ(A - LC) = {-0.77, -0.149, -0.914 ± 0.017i, -0.052}
P = [  0.0004   0.0001   0.0001  -0.0001  -0.0001
       0.0001   0.0005   0.0002  -0.0003  -0.0001
       0.0001   0.0002   0.0004   0.0000  -0.0003
      -0.0001  -0.0003   0.0000   0.0014  -0.0002
      -0.0001  -0.0001  -0.0003  -0.0002   0.0009 ]
J = Tr(P) = 0.0035

Decentralized (Global Optimization):
L = [ 0.006   0       0
      0.023   0       0
      0.044   0       0
      0       0.131  -0.014
      0      -0.039   0.156 ]
λ(A - LC) = {-0.77, -0.08 ± 0.017i, -0.144, -0.07}
P = 10^-2 × [  0.0357   0.0055   0.0061  -0.0071  -0.0056
               0.0055   0.0518   0.0228  -0.0261  -0.0082
               0.0061   0.0228   0.0440   0.0001  -0.0224
              -0.0071  -0.0261   0.0001   0.1399  -0.0163
              -0.0056  -0.0082  -0.0224  -0.0163   0.1083 ]
J = Tr(P) = 0.0038

Decentralized (Local Optimization):
L1 = [0.006  0.023  0.044]^T,   λ(A11 - L1C11) = {-0.77, -0.08 ± 0.03i}
L2 = [  0.131  -0.028
       -0.028   0.0835 ],       λ(A22 - L2C22) = {-0.144, -0.07}
P1 = 10^-2 × [ 0.0357  0.0055  0.0061
               0.0055  0.0518  0.0228
               0.0061  0.0228  0.0444 ],   Jo1 = Tr(P1) = 0.0013
P2 = [  0.0013  -0.0003
       -0.0003   0.0008 ],                 Jo2 = Tr(P2) = 0.0021
The centralized state feedback gains were also calculated and used in the simulation of model-based control with a centralized observer. The design parameters and the state feedback gain matrices are listed in Table 2. The product concentration C2 is of prime concern and hence has been assigned the highest weight, as compared with the liquid levels W1 and W2, which need to be controlled less tightly, within physical limits. The model-based control structure shown in Figure 3 is used in the simulations with the above gain specifications under the following:
1. Centralized structure (centralized state feedback coupled with a centralized observer).
2. Partially decentralized structure (decentralized state feedback coupled with a partially decentralized, i.e., interconnected, observer).
3. Fully decentralized structure (decentralized state feedback coupled with a fully decentralized observer that ignores the interactions).

Figure 6. Observer estimates for subsystem 1 for incorrect initial estimates.

Figure 8 depicts the closed-loop responses of the above model-based control structures in the regulation of the controlled outputs.
Table 2. State Feedback Design Parameters with Different Design Methods

Centralized:
K = [ 11.36    8.08    54.1   -25.8    0.098
      -1.19   -0.943  -74.62  -16.77  12.48
      -0.017  -1.34   -30.8    -7.3  -29.1  ]
λ(A - BK) = {-5.4, -1.98, -1.28, -1.04, -0.09}
P = [ 0.526   0.37    2.5    -1.2    0.005
      0.374  49.4   -23.2    13.04   0.352
      2.51  -23.2    72.6   -21.1    8.1
     -1.2    13.04  -21.1    15.6    1.9
      0.005   0.35    8.09    1.91   7.6  ]
J = Tr(P) = 145.7

Decentralized:
K1 = [12.25  9.01  94.9],      λ(A11 - B11K1) = {-1.71 ± 0.55i, -0.106}
K2 = [ -25.99  18.01
       -18.01 -25.99 ],        λ(A22 - B22K2) = {-3.6, -0.81}
P1 = [ 0.57    0.42    4.4
       0.42   45.1   -30.2
       4.4   -30.2   131.0 ],  Jo1 = Tr(P1) = 176.7
P2 = [ 8.3  4.7
       4.7  6.8 ],             Jo2 = Tr(P2) = 15.1
Figure 7. Observer estimates for subsystem 2 for incorrect initial estimates.
Clearly, the proposed partially decentralized structure results in satisfactory regulatory properties compared with the centralized one. Also, it outperforms the fully decentralized scheme that ignores the interactions. To achieve a performance for the fully decentralized scheme comparable to that of the centralized or partially decentralized control structure, one would have to use higher gains for the observer and/or the state feedback controllers. This, however, may cause robustness problems, as the control system becomes more sensitive to process and measurement noises.

7.2. Nonlinear Application: Binary Distillation Column. To illustrate the parallel implementation of the partially decentralized observer on multicomputers, the problem of state estimation for a binary distillation column is studied. The column has 30 trays, a reboiler, and a total condenser. Typical dynamic binary distillation model equations are used (Luyben, 1990) with the following assumptions (a brief sketch of the resulting composition balances follows the list):
1. The liquid holdups are assumed constant; that is, the flow dynamics are neglected.
2. An ideal liquid-vapor equilibrium relationship is assumed, with a constant relative volatility α.
3. The vapor pressure of the pure components is calculated by the Antoine equation, leading to a one-to-one relationship between the mole fraction and the temperature on each tray for the binary mixture at constant pressure.
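The following Python sketch indicates the form of the composition balances implied by assumptions 1-3 (a Luyben-type constant-holdup, constant-relative-volatility model); the holdups, the feed-tray location, and the additional constant-molal-overflow and saturated-liquid-feed simplifications are assumptions made for this sketch, since the actual model and column data are given in Appendix B.

import numpy as np

def column_rhs(t, x, L, V, F, zF, alpha=2.0, NT=30, NF=15, MB=10.0, MD=10.0, M=0.5):
    # x = [xB, x1, ..., xNT, xD]: reboiler, tray, and reflux-drum light-component mole fractions.
    y = alpha * x / (1.0 + (alpha - 1.0) * x)      # constant-relative-volatility equilibrium
    D, B = V - L, F + L - V                        # distillate and bottoms rates
    Ls = L + F                                     # liquid rate in the stripping section
    dx = np.zeros_like(x)
    dx[0] = (Ls * x[1] - V * y[0] - B * x[0]) / MB                   # reboiler
    for n in range(1, NT + 1):                                       # trays, numbered bottom-up
        Lin = Ls if n < NF else L                                    # liquid entering from above
        Lout = Ls if n <= NF else L                                  # liquid leaving the tray
        feed = F * zF if n == NF else 0.0
        dx[n] = (Lin * x[n + 1] - Lout * x[n] + V * (y[n - 1] - y[n]) + feed) / M
    dx[NT + 1] = V * (y[NT] - x[NT + 1]) / MD                        # total condenser / reflux drum
    return dx

# e.g.  sol = solve_ivp(column_rhs, (0.0, 900.0), x0, args=(L, V, F, zF))  with scipy.integrate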
The state variables are liquid compositions x ) [xB, x1, ..., x30, xD]T. Accordingly, the column model has 32 ordinary differential equations. The detailed model equation system and the column data are given in Appendix B. Generally speaking, in distillation, the primary control objective is to control the product compositions xB and xD in face of disturbances in feed flow rate F and feed composition zF. In many practical cases, the controlled variables and disturbances cannot be measured directly. Therefore, there is a need for a state observer that provides the controllers with an estimate of the states to calculate the control moves. Temperatures and flows are often used as secondary measurements to estimate (infer) the product compositions in distillation columns. Several researchers have applied inferential estimation in the control of distillation columns. Most of these workers considered linear estimation methods (e.g., Joseph and Brosilow, 1978; Yu and Luyben, 1987; Mejdell and Skogestad, 1991) based on the steady-state analysis and using a constant gain matrix L, in order to avoid the computational complexity of dealing with nonlinearity and dynamics of the process. Quintero-Marmol et al. (1991) have studied application of an extended Luenberger-like observer where the dynamic model used to simulate the observer was nonlinear, while the design of the observer gain was based on the linear model, linearized around a nominal operating point. However, this design approach may not be suitable if the nonlinearity and dynamic changes are significant. It has been shown by Mejdell and Skogestad (1991) that a single temperature measurement is not adequate to estimate the product compositions accurately. On the other hand, the use of multiple temperature measure-
Ind. Eng. Chem. Res., Vol. 37, No. 7, 1998 2753
Figure 8. Model-based control performance and effect of ignoring interactions.
measurements, in order to get more accurate estimation, usually involves more processing of information and leads to excessive calculations and poorly conditioned problems (Morari and Stephanopoulos, 1980; Moore et al., 1987). Motivated by the above considerations, it is desirable to partition the overall system into a number of subsystems, where each subsystem has its own measurement subset (output decentralized) and an observer is assigned to each subsystem, taking into account the interactions among the subsystems. From the tridiago-
nal structure of the distillation model, each subsystem (observer) needs to exchange its data (here, liquid compositions) with its neighboring subsystems (the one below and the one above). A control scheme based on the LV configuration (Skogestad and Morari, 1988) and the proposed partially decentralized state observer applied to the partitioned column system is shown in Figure 9. In this case study, the nonlinear extended Kalman filter (EKF) algorithm (cf. section 4.2) is used. The EKF seems more appealing than the previously mentioned approaches because of its inherent dynamic model and its flexibility in tuning the weights. In addition, the linearization is performed along the trajectory instead of around a nominal point. As such, the gains obtained by the EKF are dynamic and account for process nonlinearity. Despite these attractive features, the EKF requires on-line integration of the process model and the matrix Riccati differential equations (ODEs). For example, our case study requires integration of 560 ODEs. Such requirements can be prohibitive for a serial implementation. Thus, the key to the success of this approach is the solution of the EKF equations on-line within the sampling period. The use of parallel computing with partitioned subsystems is expected to bring about an order of magnitude computational speedup, which facilitates the implementation of the EKF for distillation columns. The EKF equation system and the pertinent numerical integration methods for its solution on multicomputers have been discussed in section 4.2 and will be shown below for the case study.
7.2.1. Results and Discussion.
All simulation runs were made with constant manipulated inputs L and V. The temperature sensors that measure the tray temperatures are located at different trays throughout the stripping and enriching sections of the column, but not at the column base or the reflux drum. Seven temperature sensors located at trays 1, 5, 10, 15, 20, 25, and 30 were found to be a reasonable choice. To study the performance of the parallel EKF, the states are initialized with a 5% error relative to the actual ones. The weighting matrices are chosen to be diagonal with constant entries. In all cases, R = 0.01Im, where Im is the identity matrix of size m, equal to the number of measurements. In this example, the process noise vector is v = [F, zF, L, V]T. Its covariance matrix, Q = qI4, was varied in order to tune the filter. The sampling period Ts was taken to be 1 s. The total process time was 15 min. The parallel solution of the EKF is executed on 3 and 6 computer nodes of the IBM SP2 multicomputer, and the results are compared with a single-node (serial) run that solves the equation system as a whole (simultaneously). For both the serial and parallel runs, the master-worker paradigm was followed to map the calculations onto the SP2. The real process is assigned to the master node, while the overall system model equations with the matrix Riccati equations (in the serial run), or the subsystem model equations with the separate Riccati equations (in the parallel runs), are assigned to a number of worker nodes equal to the number of subsystems. MPI is used as the message-passing software library; it utilizes the high-performance communication switch on the IBM SP2 to exchange the necessary information among the subsystem nodes and to coordinate/synchronize integration of the subsystem equations. The configurations of the subsystems with different numbers of computational nodes are listed in Table 3.
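To make this mapping concrete, the following is a minimal sketch (not the authors' code) of the master-worker coordination just described, written with mpi4py. The plant (master) node sends the temperature measurements once per sampling period; each worker advances its subsystem EKF over the time horizon, exchanging boundary compositions with its neighbors and iterating, Jacobi-like, until every local update falls below the 10⁻³ tolerance. The local integration routine, the measurement values, and the state dimensions are placeholders; an actual implementation would integrate the subsystem model and Riccati equations (e.g., with LSODES).

```python
import numpy as np
from mpi4py import MPI

world = MPI.COMM_WORLD
rank, size = world.Get_rank(), world.Get_size()
# rank 0 plays the "real process"; ranks 1..N are subsystem observer nodes
workers = world.Split(0 if rank == 0 else 1, rank)

TS, TOL, NSAMPLES = 1.0, 1e-3, 5        # sampling period [s], Jacobi tolerance, demo length

def local_ekf_step(x_i, x_left, x_right, y):
    # Placeholder for integrating one subsystem's model + Riccati ODEs over the
    # horizon, with the neighbors' boundary compositions frozen at their last iterate.
    return 0.5 * x_i + 0.1 * (x_left + x_right) + 0.01 * y.mean()

if rank == 0:
    for k in range(NSAMPLES):           # master: supply measurements every Ts
        y = np.full(7, 350.0)           # e.g., the seven tray temperatures (dummy values)
        for dest in range(1, size):
            world.send(y, dest=dest, tag=k)
else:
    left_nb = workers.rank - 1 if workers.rank > 0 else MPI.PROC_NULL
    right_nb = workers.rank + 1 if workers.rank < workers.size - 1 else MPI.PROC_NULL
    x_i = np.zeros(1)                   # local state estimate (placeholder size)
    for k in range(NSAMPLES):
        y = world.recv(source=0, tag=k)
        while True:                     # block Jacobi-like iteration over [k*TS, (k+1)*TS]
            from_left = workers.sendrecv(x_i, dest=right_nb, source=left_nb)
            from_right = workers.sendrecv(x_i, dest=left_nb, source=right_nb)
            x_left = from_left if from_left is not None else x_i
            x_right = from_right if from_right is not None else x_i
            x_new = local_ekf_step(x_i, x_left, x_right, y)
            delta = float(np.max(np.abs(x_new - x_i)))
            x_i = x_new
            # global synchronization: no subsystem proceeds to the next horizon
            # until the largest local change over all workers is below TOL
            if workers.allreduce(delta, op=MPI.MAX) < TOL:
                break
```

The script would be launched under mpirun with one master plus as many workers as subsystems (e.g., four processes for the 3-node configuration).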
Figure 9. Control scheme based on LV configuration and partially decentralized observer.

Table 3. Subsystem Construction

node no.          physical subsystem                   total no. of ODEs   temp trays
(1-node run)
overall system    reboiler + trays 1-30 + condenser    560                 1, 5, 10, 15, 20, 25, 30
(3-node run)
1                 reboiler + trays 1-14                135                 1, 5, 10
2                 trays 15-20                          27                  15, 20
3                 trays 21-30 + condenser              77                  25, 30
(6-node run)
1                 reboiler + trays 1-4                 20                  1
2                 trays 5-9                            20                  5
3                 trays 10-14                          20                  10
4                 trays 15-19                          20                  15
5                 trays 20-25                          20                  20, 25
6                 trays 26-30 + condenser              27                  30
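The "total no. of ODEs" column can be reproduced from the subsystem sizes if one assumes, as discussed in section 4.2, that each node integrates its ni states plus the ni(ni + 1)/2 distinct entries of its symmetric covariance matrix, i.e., ni(ni + 3)/2 ODEs per node. A quick check (illustrative only):

```python
def ekf_ode_count(n):
    # n states + n(n+1)/2 distinct entries of the symmetric covariance matrix
    return n * (n + 3) // 2

print(ekf_ode_count(32))                                 # overall system: 560
print([ekf_ode_count(n) for n in (15, 6, 11)])           # 3-node run: [135, 27, 77]
print([ekf_ode_count(n) for n in (5, 5, 5, 5, 5, 6)])    # 6-node run: [20, 20, 20, 20, 20, 27]
```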
Notice that all the subsystems are constructed to be controllable and observable: they are reachable from the inputs u = [L, V]T, and each has at least one temperature sensor, so that it is output reachable. Analytical expressions for the partial derivatives in the EKF Riccati equations were derived from the model equation system given in Appendix B. Numerical integration was done with the LSODES package, employing either the nonstiff (Adams-Moulton) or the stiff (Gear's BDF) option. To check for convergence of the block Jacobi-like iterations, an absolute error tolerance of 10⁻³ was assigned. The time horizon T is either set equal to the sampling period Ts or calculated from the above con-
servative bounds from the contraction mapping analysis (cf. section 5). The transient responses of the parallel partially decentralized observer are compared to those for centralized and classical decentralized (with a diagonal gain matrix) state estimation that are implemented serially (see Figures 10 and 11). The proposed observer performed satisfactorily and in very good agreement with the more rigorous, but also more expensive, centralized observer. As q increases, a better response is obtained, as is clearly shown in Figure 10. As mentioned earlier, the weight q is varied to tune the gain such that as q increases, the gain L increases, and therefore, faster error dynamics for the observer are noticed.
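The trend can already be seen in a scalar analogue. The sketch below (an illustration, not part of the paper's design) computes the steady-state Kalman gain for dx/dt = ax + w, y = cx + v with Var(w) = q and Var(v) = r = 0.01, matching R = 0.01Im above; the system parameters a and c are assumed for the example. As q grows, the gain L and the speed of the error pole a - Lc grow with it.

```python
import numpy as np

a, c, r = -0.5, 1.0, 0.01            # illustrative scalar system; r matches R = 0.01 I_m
for q in (0.1, 1.0, 10.0, 100.0):
    # steady-state Riccati equation 2aP + q - (Pc)^2/r = 0, positive root
    P = (a * r + np.sqrt((a * r) ** 2 + q * r * c ** 2)) / c ** 2
    L = P * c / r
    print(f"q = {q:6.1f}   L = {L:8.2f}   error pole a - L*c = {a - L * c:9.2f}")
```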
Figure 10. Observed transient profile of bottom product liquid composition: (a) q = 0.1; (b) q = 10.
To study the effect of parallelization on the observer design, some numerical experiments were conducted. Figure 12 shows the effect of increasing the observer gain (by increasing q) on the size of the time horizon T. Here T is calculated from the conservative bound (47) derived from the contraction mapping analysis, for different contraction factors F. The Lipschitz constants are calculated, over each time horizon, from the analytical expressions for the partial derivatives that appear in eqs 48 and 49. Clearly, the faster the error dynamics, the slower the convergence, and the smaller the time horizon that must be used to ensure convergence of the block Jacobi-like iterations in the parallel solution scheme. The size of the time horizon T was found to be a fraction of the sampling period Ts. The fraction f = T/Ts is ≤ 1, and it increases as F increases. As will be seen later, these results are very conservative, and the achievable computational speedup is impacted by the small T. Table 4 lists the computational statistics for the solution of the EKF equations on one node with different values of q. Here, the stiff option (Gear's BDF) is used for numerical integration. The results include the CPU time, the number of function (derivative) evaluations (NFE), the number of Jacobian evaluations (NJE), and the number of integration steps (NSTEP). In view of these results, it is obvious that as q increases, the stiffness of the model equations increases, which can easily be seen from the increase in NSTEP, NFE, and NJE. As a result, the serial CPU time increases drastically with increasing
Figure 11. Observed transient profile of distillate product liquid composition: (a) q = 0.1; (b) q = 10.
Figure 12. Calculation of time horizon T on the basis of conservative bounds from contraction mapping analysis.
q, and state estimation on one computer node becomes infeasible at q ≥ 10, for which the simulation time exceeds the process time limit (= 15 min). Figure 13 shows the variation of the total elapsed execution (wall-clock) time (including communication and coordination/synchronization times) with the number of subsystem nodes at different values of q when stiff methods are used for all subsystem integrations. For each parallel run, note that all worker nodes have the same concurrent wall-clock time due to the global
Figure 13. Total execution time (wall-clock) with different q as a function of number of nodes for the column simulation. Stiff method with f = T/Ts = 1.
Figure 14. Total execution time (wall-clock) with different q as a function of number of nodes for the column simulation. Nonstiff method with f = T/Ts = 1.

Table 4. Computational Statistics for the Serial Solution of the Overall EKF Equations with Different Values of q

q        CPU (s)   NFE     NJE   NSTEP
0.1      592.6     8762    29    1204
1.0      857.1     12462   40    1759
10.0     2091.5    33898   119   3240
100.0    3671.0    67600   255   4031
synchronization of the coordination algorithm (no subsystem is allowed to proceed to the next time horizon before all subsystem iterations have converged). In these parallel runs, the time horizon T is set equal to the sampling period Ts. The Jacobi iterations converged with only one corrector iteration, which implies the conservative nature of the contraction mapping analysis. As seen from Figure 13, a drastic reduction in the execution time has been attained from the parallel computations for the EKF algorithm. This is because of the low cost of integrating ∑_{i=1}^{N} n_i(n_i + 3)/2 ODEs in the parallel case compared to the cost of integrating n(n + 3)/2 ODEs, with n = ∑_{i=1}^{N} n_i, in the serial case. In particular, the evaluation/inversion cost of the Jacobian matrix (when the stiff method is used) is much smaller (of order ∑_{i=1}^{N} n_i³ ≪ n³). In addition, the number of floating point operations involved in the matrix Riccati equations (e.g., matrix multiplications, additions, etc.)
is drastically reduced as the size of the subsystems is reduced. The flexibility of the parallel approach in selecting the integration stepsizes and method orders best-suited to each subsystem (multirate integration) contributes to this speedup. In contrast to the serial implementation (single node run), the use of high observer gains (by increasing q), in order to get faster error dynamics, becomes feasible, since the execution time of the calculations over each time horizon is an order of magnitude smaller than the sampling period. Figure 14 shows the variation of total elapsed execution (wall-clock) time with the number of nodes for different values of q when nonstiff methods are used for all subsystem integrations. Here, the time horizon T is also set equal to Ts. Apparently, since the model at hand is stiff, the CPU times for the nonstiff methods exceed those when stiff methods are employed. However, the parallel runs still lead to less execution time compared to the serial time. This is because of the multirate nature of the parallel solution scheme and the fact that stiffness may be reduced as the overall stiff system is partitioned into less stiff subsystems. Again, a satisfactory convergence rate is achieved with the choice of T larger than the sufficient convergence criterion suggests, as indicated in Figure 12. In order to study the impact of using small theoretical T on the parallel solution, the simulations were performed with time horizons that are calculated from the
Figure 15. Total execution time (wall-clock) with different q as a function of number of nodes for the column simulation. Variable f = T/Ts.
convergence condition at F = 0.5 (see Figure 12). Figure 15 shows the variation of the wall-clock time with the number of nodes for different values of q, with different values of f = T/Ts. The parallel state estimation run with a conservative T has a higher wall-clock execution time than the less restricted parallel runs shown previously. The trend, and the trade-off between using high gains with a smaller T and the achievable computational speedup, is clear: as T gets smaller, there is more communication between the nodes, and the calculations become more fine-grained. Despite that, an order of magnitude speedup is still achieved as the number of computer nodes is increased.
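For readers who want to experiment with this convergence behavior, the following serial toy (not the column model, and not the authors' code) emulates the block Jacobi-like iteration over a single time horizon for a small coupled linear system: each "subsystem" re-integrates its own equation while the other subsystem's trajectory is frozen at the previous sweep, and the sweeps stop when the largest change falls below an absolute tolerance of 10⁻³, as in the case study. The coupling matrix and horizon are assumptions for illustration; shrinking T reduces the number of sweeps needed, which is exactly the trade-off noted above.

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[-2.0, 0.3],
              [0.4, -1.0]])           # two weakly coupled "subsystems" (illustrative)
T = 1.0                               # time horizon
ts = np.linspace(0.0, T, 101)
x0 = np.array([1.0, -1.0])

ref = solve_ivp(lambda t, x: A @ x, (0.0, T), x0, t_eval=ts, rtol=1e-8).y   # monolithic solution

x_prev = np.tile(x0[:, None], (1, ts.size))   # initial guess: hold the initial condition
for sweep in range(1, 21):
    x_new = np.empty_like(x_prev)
    for i in (0, 1):                  # each subsystem integrates with the other one frozen
        j = 1 - i
        rhs = lambda t, xi, i=i, j=j, xj=x_prev[j]: A[i, i] * xi + A[i, j] * np.interp(t, ts, xj)
        x_new[i] = solve_ivp(rhs, (0.0, T), [x0[i]], t_eval=ts, rtol=1e-8).y[0]
    delta = float(np.max(np.abs(x_new - x_prev)))
    x_prev = x_new
    if delta < 1e-3:                  # same absolute tolerance used in the case study
        break

print(f"{sweep} sweeps, max deviation from monolithic solution: "
      f"{np.max(np.abs(x_prev - ref)):.2e}")
```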
Table 5. Evaporator Process Model Variables and Their Nominal Values

variable   description                                  steady-state value
W1         first-effect holdup (kg)                     20.8
C1         first-effect concentration (wt % glycol)     4.59
H1         first-effect enthalpy (kJ/kg)                441
T1         first-effect temperature (K)                 310
W2         second-effect holdup (kg)                    19.0
C2         second-effect concentration (wt % glycol)    10.11
S          steam flow rate (kg/min)                     0.91
B1         first-effect bottoms flow rate (kg/min)      1.58
B2         second-effect bottoms flow rate (kg/min)     0.72
F          feed flow rate (kg/min)                      2.26
CF         feed concentration (wt % glycol)             3.2
HF         feed enthalpy (kJ/kg)                        365
8. Conclusions
A partially decentralized model-based control structure for interconnected dynamical systems, which can be implemented on network-based parallel computers, is proposed. Design considerations, including closed-loop stability and performance, have been studied in detail. The theoretical formulation of the proposed structure is illustrated in a control application for a double-effect evaporator. A comparison with the classical centralized and fully decentralized structures is also included. The parallel implementation is illustrated for a binary distillation column using a nonlinear state estimation algorithm. Important issues studied include the effect of state observer gains on convergence of the parallel iterative solution scheme and the achievable computational speedup. It has been shown that parallel implementation of state estimation for large-scale systems can be very beneficial for real-time control system applications. Complex systems can be handled more efficiently and in a timely manner using available hardware and software technology. For a large-scale system composed of interconnected subsystems, a decentralized observer with only local decentralized gains, which optimizes a quadratic performance cost function with respect to the individual subsystems, is shown to result in globally stable and optimal performance, provided that the condition of Theorem 3 on the interactions is satisfied. The design of a partially decentralized state observer using a linear quadratic formulation with local optimization for isolated subsystems can be modified to include the interconnections in the performance index. This enhances
the global optimality of the observer without the need to solve the costly global optimization problem.
Appendix A. State-Space Evaporator Model
The evaporator has been modeled (Newell and Fisher, 1972) by a fifth-order, linear, time-invariant, state-space model in the form of
ẋ = Ax + Bu + Dd
y = Cx          (A.1)
where
x = [W̄1, C̄1, H̄1, W̄2, C̄2]T
u = [S̄, B̄1, B̄2]T
y = [W̄1, W̄2, C̄2]T
d = [F̄, C̄F, H̄F]T
such that all the elements of the above vectors are defined as normalized perturbation variables. For example,
W̄1 = (W1 - W1s)/W1s
where W1s is the nominal steady-state value of W1 (see Table 5). The coefficient matrices are
A = [ 0   -0.0011   -0.1255   0    0
      0   -0.0755    0.1255   0    0
      0   -0.0060   -0.7741   0    0
      0   -0.0012   -0.1448   0    0.0001
      0    0.0393    0.1448   0   -0.0380 ]

B = [ 0        -0.0766    0
      0         0         0
      0.2160    0         0
      0         0.0795   -0.0381
      0        -0.0414    0 ]

C = [ 1   0   0   0   0
      0   0   0   1   0
      0   0   0   0   1 ]

D = [  0.1098   0        0
      -0.0333   0.0766   0
      -0.0188   0        0.0911
       0        0        0
       0        0        0 ]
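As a small illustration (using the nominal values of Table 5 and the output matrix above; the absolute operating point in the example is hypothetical), the normalized perturbation state and the measured outputs y = Cx can be formed as follows:

```python
import numpy as np

x_s = np.array([20.8, 4.59, 441.0, 19.0, 10.11])    # nominal steady states of W1, C1, H1, W2, C2 (Table 5)

def to_perturbation(x_abs):
    # e.g. W1_bar = (W1 - W1s)/W1s, applied elementwise to [W1, C1, H1, W2, C2]
    return (x_abs - x_s) / x_s

C = np.array([[1, 0, 0, 0, 0],        # y = [W1_bar, W2_bar, C2_bar]
              [0, 0, 0, 1, 0],
              [0, 0, 0, 0, 1]], dtype=float)

x_bar = to_perturbation(np.array([21.0, 4.59, 441.0, 18.5, 10.3]))   # hypothetical operating point
print(dict(zip(["W1_bar", "W2_bar", "C2_bar"], C @ x_bar)))
```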
Appendix B. Binary Distillation Model Description
The model we consider is the classical (L, V) model of a binary distillation column based on material balances:
MB dxB/dt = (F + L)(x1 - xB) + V(xB - yB)                        (column base and reboiler)
M1 dx1/dt = (F + L)(x2 - x1) + V(yB - y1)                        (1st tray)
Mj dxj/dt = (F + L)(xj+1 - xj) + V(yj-1 - yj)                    (j = 2, ..., NF - 1)
MNF dxNF/dt = F(zF - xNF) + L(xNF+1 - xNF) + V(yNF-1 - yNF)      (feed stage)
Mj dxj/dt = L(xj+1 - xj) + V(yj-1 - yj)                          (j = NF + 1, ..., NT - 1)
MNT dxNT/dt = L(xD - xNT) + V(yNT-1 - yNT)                       (top tray)
MD dxD/dt = V(yNT - xD)                                          (reflux drum and condenser)   (B.1)

Here,
Mj: liquid holdup on the jth tray
MB: liquid holdup in the column base
MD: liquid holdup in the reflux drum
xj: liquid composition on the jth tray
yj: vapor composition on the jth tray
zF: feed composition
xB: bottom product liquid composition
xD: distillate product liquid composition
NF: feed tray number
NT: number of trays
F: feed flow rate
V: vapor flow rate
L: reflux flow rate
The state vector is x = [xB, x1, ..., xNT, xD]T, the input vector is u = [L, V]T, and the disturbance vector is d = [F, zF]T. Note that the state variables include the composition of only one component; the second component's composition is obtained from the requirement that the mole fractions must sum to 1 on every stage. On each stage, the liquid and vapor compositions, xj and yj, are related by the liquid-vapor equilibrium constant
yj = K(xj) = Rxj / (1 + (R - 1)xj)          (B.2)
where R is the constant relative volatility, assumed to be known. According to Raoult’s law, K(xj) is defined as the ratio of the vapor pressure P* to the total column pressure P. The vapor pressure P* is calculated using the Antoine equation of the form (Henley and Seader, 1981)
ln(P*/Pc) = a1 - a2/(T + a3)                (B.3)
Here, a1, a2, and a3 are the Antoine equation parameters and Pc is the critical pressure. The relationship between the composition and the temperature on each tray for the binary mixture at constant pressure is derived from Raoult’s law and Antoine’s equation and is given as
Tj = h(xj) = a2 / ( a1 - ln[ RP / (Pc(1 + (R - 1)xj)) ] ) - a3          (B.4)
The partial derivatives fx involved in the extended Kalman filter equations are evaluated by analytical differentiation of the equation system (B.1) with respect to the states x defined above. Also, Ev is obtained by analytical differentiation of (B.1) with respect to the noise vector v = [F, zF, L, V]T, while hx is calculated by differentiating eq B.4 with respect to the state x. The column data are given in Table 6; flow rates are in (lb mol)/h, holdups in lb mol, and pressure in psia. The physical properties for the light component are given in Table 7.
Table 6. Data for Distillation Column Example

NT   NF   V     F     zF    xB     xD     MB    Mj   MD    P
30   15   178   100   0.5   0.02   0.98   100   10   100   14.7
Table 7. Physical Properties Data

R     Pc (psia)   a1     a2       a3
2     550.7       5.74   4126.4   409.52
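The model of Appendix B is compact enough to be coded directly. The sketch below (an illustration, not the authors' simulator) implements eqs B.1, B.2, and B.4 with the data of Tables 6 and 7 and integrates it briefly with SciPy's LSODA, standing in for LSODES. The reflux L is not listed in Table 6 and is set here, as an assumption, from an approximate overall balance (L ≈ V - D with D ≈ 50); the analytical derivative dTj/dxj shown is the kind of hx entry used in the EKF measurement update.

```python
import numpy as np
from scipy.integrate import solve_ivp

NT, NF = 30, 15
V, F, zF, P = 178.0, 100.0, 0.5, 14.7
MB, Mj, MD = 100.0, 10.0, 100.0
R_alpha, Pc, a1, a2, a3 = 2.0, 550.7, 5.74, 4126.4, 409.52
L = 128.0                              # assumed reflux, approx. V - D with D ~ 50 (not in Table 6)

def equilib(x):                        # eq B.2
    return R_alpha * x / (1.0 + (R_alpha - 1.0) * x)

def tray_T(x):                         # eq B.4
    return a2 / (a1 - np.log(R_alpha * P / (Pc * (1.0 + (R_alpha - 1.0) * x)))) - a3

def dT_dx(x):                          # analytical derivative of eq B.4 (an entry of h_x)
    denom = a1 - np.log(R_alpha * P / (Pc * (1.0 + (R_alpha - 1.0) * x)))
    return -a2 * (R_alpha - 1.0) / ((1.0 + (R_alpha - 1.0) * x) * denom ** 2)

def rhs(t, x):                         # eq B.1; x = [xB, x1, ..., x30, xD]
    y = equilib(x)
    dx = np.zeros_like(x)
    dx[0] = ((F + L) * (x[1] - x[0]) + V * (x[0] - y[0])) / MB        # column base and reboiler
    for j in range(1, NT + 1):
        if j < NF:                     # trays below the feed
            dx[j] = ((F + L) * (x[j + 1] - x[j]) + V * (y[j - 1] - y[j])) / Mj
        elif j == NF:                  # feed stage
            dx[j] = (F * (zF - x[j]) + L * (x[j + 1] - x[j]) + V * (y[j - 1] - y[j])) / Mj
        else:                          # trays above the feed (x[NT+1] = xD above the top tray)
            dx[j] = (L * (x[j + 1] - x[j]) + V * (y[j - 1] - y[j])) / Mj
    dx[NT + 1] = V * (y[NT] - x[NT + 1]) / MD                          # reflux drum and condenser
    return dx

x0 = np.linspace(0.02, 0.98, NT + 2)   # crude initial composition profile from xB to xD
# model time is in hours (flows in lb mol/h, holdups in lb mol); integrate for half an hour
sol = solve_ivp(rhs, (0.0, 0.5), x0, method="LSODA", rtol=1e-6)
sensor_trays = [1, 5, 10, 15, 20, 25, 30]
print(tray_T(sol.y[sensor_trays, -1])) # "measured" temperatures at the sensor locations
```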
Acknowledgment The authors thank the University of Michigan Center for Parallel Computing, which is sponsored in part by the NSF Grant CDA-92-14296, for use of their IBM SP2 multicomputer. Special thanks to Dr. Hal Marshall for his help in using the IBM SP2 and the MPI message passing library. The first author acknowledges Jordan University of Science and Technology for its partial financial support.
Supporting Information Available: Proofs of the convergence theorems and stability analysis with scalar Lyapunov functions (10 pages). Ordering information is given on any current masthead page.
Literature Cited Abdel-Jabbar, N. M. Parallel Simulation and State Estimation Algorithms for Dynamic Systems on Multicomputers. Ph.D. Dissertation, University of Michigan, Ann Arbor, MI, 1996. Abdel-Jabbar, N.; Carnahan, B.; Kravaris, C. A Multirate ParallelModular Algorithm for Dynamic Process Simulation Using Distributed Memory Multicomputers. To appear in Comput. Chem. Eng. Bachman, W.; Konik, D. On Stabilization of Decentralized Dynamic Output Feedback Systems. Syst. Control Lett. 1984, 5, 89. Bristol, E. H. On a New Measure of Interaction for Multivariable Process Control. IEEE Trans. Autom. Control 1966, 11, 133. Charos, G. N.; Arkun, Y. A Decentralized Quadratic Dynamic Matrix Control Algorithm. J. Proc. Control 1993, 3, 75. Friedland, B. Control System Design. An Introduction to StateSpace Methods; McGraw-Hill: New York, 1986. Gauthier, J. P.; Hammouri, H.; Othman, S. A Simple Observer for Nonlinear Systems: Applications to Bioreactors. IEEE Trans. Automat. Control 1992, 37, 875. Gropp, W.; Lusk, E.; Skjellum, A. Using MPI Portable Parallel Programming with the Message-Passing Interface; The MIT Press: Cambridge, MA, 1994. Guan, Y. P.; Saif, M. A Novel Approach to the Design of Unknown Input Observers. IEEE Trans. Automat. Control 1991, 36, 632. Hammouri, H.; Gauthier, J. P. Bilinearization up to Output Injection. Syst. Control Lett. 1988, 11, 139. Henley, E. J.; Seader, J. D. Equilibrium Stage Separation Operations in Chemical Engineering; Wiley: New York, 1981. Hindmarsh, A. C. ODEPACK, a Systematized Collection of ODE Solvers. In Scientific Computing; Stepleman, R. S., Eds.; NorthHolland: Amsterdam, 1983. Hodzic, M.; Siljak, D. D. A Parallel Estimation Algorithm. In Parallel Processing Techniques for Simulation; Singh, M. G., Allidina, A. Y., Daniels, B. K., Eds.; Plenum Press: New York, 1986. Hou, M.; Muller, P. C. On the Design of Decentralized Observers. In Proceedings of the IFAC Conference on Large Scale Systems; Beijing, PRC, 1992. Ikeda, M.; Siljak, D. On Robust Stability of Large-Scale Control Systems. Control Lett. 1982a, 3, 155. Ikeda, M.; Siljak, D. When is a Linear Decentralized Control Optimal? In Proceedings of the 5th International Conference on Analysis and Optimization of Systems, Versailles, France, 1982b.
Ikeda, M.; Willems, J. C. An Observer Theory for Decentralized Control of Large-Scale Interconnected Systems. In Proceedings of the IFAC Conference on Large Scale Systems: Theory and Applications; Zurich, 1986. Jang, S.; Joseph, B.; Mukal, H. Comparison of Two Approaches to On-line Parameter and State Estimation of Nonlinear Systems. Ind. Eng. Chem. Process Des. Dev. 1986, 25, 809. Jazwinski, A. H. Stochastic Processes and Filtering Theory; Academic Press: New York, 1970. Joseph, B.; Brosilow, C. B. Inferential Control of Processes. Part I. Steady State Analysis and Design. AIChE J. 1978, 485, 1978. Kailath, T. Linear Systems; Prentice-Hall: Englewood Cliffs, NJ, 1980. Kalman, R. E. A New Approach to Linear Filtering and Prediction Problems. Trans. ASME, J. Basic Eng. 1960, 82, 35. Kazantzis, N.; Kravaris, C. A Nonlinear Luenberger-Type Observer with Application to Catalyst Activity Estimation. Proc. ACC 1995, 1756. Krener, A. J.; Isidori, A. Linearization by Output Injection and Nonlinear Observers. Syst. Control Lett. 1983, 3, 47. Lee, J. H.; Ricker, N. L. Extended Kalman Filter Based Nonlinear Model Predictive Control. Ind. Eng. Chem. Res. 1994, 33, 1530. Liu, Y. C.; Brosilow, C. B. Simulation of Large Scale Dynamic Systems-I. Modular Integration Methods. Comput. Chem. Eng. 1987, 11, 241. Luenberger, D. G. An Introduction to Observers. IEEE Trans. Automat. Control 1971, 16, 596. Lunze, J. Feedback Control of Large-Scale Systems; PrenticeHall: Englewood Cliffs, NJ, 1992. Luyben, W. L. Process Modeling, Simulation, and Control for Chemical Engineers; McGraw-Hill: New York, 1990. Manousiouthakis, V.; Nikolaou, M. Analysis of Decentralized Control Structures for Nonlinear Systems. AIChE J. 1989, 35, 549. Mejdell, T.; Skogestad, S. Estimators for Ill-Conditioned Plants: High-Purity Distillation. IFAC Adv. Control Chem. Processes 1991, 227. Michel, A. N.; Miller, R. K.; Tang, W. Lyapunov Stability of Interconnected Systems: Decomposition into Strongly Connected Subsystems. IEEE Trans. Circuits Syst. 1978, 25, 799. Mijares, G.; Holland, C. D.; McDaniel, R. C.; Dollar, C. R.; Gallun, S. E. Analysis and Evaluation of the Relative Gains for Nonlinear Systems. Comput. Chem. Eng. 1985, 9, 61. Moore, C.; Hackney, J.; Canter, D. Selecting Sensor Location and Type for Multivariable Processes. Shell Process Control Workshop; Butterworth Publishers: Boston, 1987. Morari, M.; Stephanopoulos, G. Optimal Selection of Secondary Measurements within the Framework of State Estimation in the Presence of Persistent Unknown Disturbances. AIChE J. 1980, 26, 247. Niderlinski, A. A Heuristic Approach to the Design of Linear Multivariable interacting control systems. Automatica 1971, 7, 691. Newell, R. B.; Fisher, D. G. Model Development, Reduction, and Experimental Evaluation for an Evaporator. Ind. Eng. Chem. Process Des. Dev. 1972, 11, 213. Quintero-Marmol, E.; Luyben, W. L.; Georgakis, C. Application of an Extended Luenberger Observer to the Control of Multicomponent Batch Distillation. Ind. Eng. Chem. Res. 1991, 30, 1870. Samyudia, Y.; Lee, P. L.; Cameron, I. T. A Methodology for MultiUnit Control Design. Chem. Eng. Sci. 1994, 49, 3871. Seinfeld, J. H. Optimal Stochastic Control of Nonlinear Systems. AIChE J. 1970, 16, 1016. Siljak, D. D. Large-Scale Dynamic Systems: Stability and Structure; North-Holland: New York, 1978. Siljak, D. D. Decentralized Control of Complex Systems; Academic Press: New York, 1991. Siljak, D. D.; Vukevic, M. On Decentralized Estimation. Int. J. Control 1978, 27, 113. Skelton, R. E. 
Dynamic Systems Control. Linear Systems Analysis and Synthesis; Wiley: New York, 1988. Skogestad, S.; Morari, M. LV-Control of a High Purity Distillation Column. Chem. Eng. Sci. 1988, 43, 33. Stephanopoulos, G. Synthesis of Control Systems for Chemical Plants. Comput. Chem. Eng. 1983, 7, 330. Tatiraju, S.; Soroush, M. Nonlinear State Estimation in a Polymerization Reactor. Ind. Eng. Chem. Res. 1997, 36, 2679.
Vidyasagar, M. Decomposition Technique for Large-Scale Systems with Nonadditive Interactions: Stability and Stabilizability. IEEE Trans. Automat. Control 1980, 26, 439. Vidyasagar, M. Nonlinear Systems Analysis; Prentice Hall: Englewood Cliffs, NJ, 1993. Viswanadham, N.; Ramakrishna, A. Decentralized Estimation and Control for Interconnected Systems. Large Scale Syst. 1982, 3, 255. Yang, T.; Munro, N.; Brameller, A. Improved Condition for the Optimality of Decentralized Control for Large-Scale Systems. IEE Proc. 1989, 136, 44.
Yu, C. C.; Luyben, W. L. Control of Multicomponent Distillation Columns using Rigorous Composition Estimators. IChem. E. Symp. Ser. 1987, 104, 29.
Received for review August 11, 1997 Revised manuscript received March 24, 1998 Accepted March 25, 1998 IE970553R