Balancing Approach to Minimal Realization and Model Reduction of Stable Nonlinear Systems

Juergen Hahn and Thomas F. Edgar*
Department of Chemical Engineering, The University of Texas at Austin, Austin, Texas 78712-1062

* Corresponding author. Phone: +1 (512) 471 3080. Fax: +1 (512) 471 7060. E-mail: [email protected].

This paper presents a computational approach to determine a reduced order model of a nonlinear system. The procedure is closely related to balanced model reduction and introduces the concept of covariance matrices for local controllability and observability analysis of a nonlinear system. These covariance matrices are an extension of gramians of a linear system and are used to determine unobservable and uncontrollable parts of the system for a given operating region. Additionally, an algorithm is introduced that eliminates these nonminimal parts of the model and can further reduce the model, i.e., the number of state variables. This minimal realization/model reduction procedure is simple to implement and can be applied locally to any stable system without making any assumptions about observability and controllability. Examples are presented to demonstrate the procedure. When the algorithm is applied to linear systems, it reduces to well-known techniques for minimal realization and balanced model reduction.

1. Introduction

Simulations of nonlinear models for control purposes have become increasingly popular in recent years. This is due to increased availability of powerful optimization algorithms as well as advances in computing technology. However, many processes are modeled without regard to the control relevance of the model, and often the parts of the model that are hard to solve numerically cannot be influenced by control action or cannot be measured with the available instrumentation. Extracting the part of the model that contributes to the input-output behavior of the system is the aim of minimal realization and balanced model reduction. The former determines the smallest possible model that exhibits input-output behavior identical to that of the original system, whereas the aim of the latter is to find a model of a specified size that approximates the input-output behavior of the minimal system.

For linear systems, both procedures play an important part in a model reduction framework. In a first step the minimal realization of a model can be found by eliminating the unobservable and uncontrollable parts of a model. The input-output behavior of the resulting model will be identical to that of the original model. Balanced model reduction can then be performed on this minimal model. Because of the minimality of the model, it is guaranteed that all state variables are observable and controllable, as is required for a balancing procedure. The balanced reduction will result in a model whose input-output behavior closely approximates the behavior of the original model, but it will not be identical. As the model is reduced further, larger deviations are expected between the behavior of the original model and the reduced one for a given input.

The method presented in this work can be used for both minimal realization and model reduction without any modifications. The procedure forms a generalization of balanced model reduction for linear systems in that the method can deal with systems that are not completely observable and controllable and that it is applicable to nonlinear systems as well. It should be pointed out that, for nonlinear systems, the procedure will result in a model of reduced size that describes the input-output behavior locally, but no conclusions can be drawn about the global behavior of the reduced system.

The algorithm presented in this work determines covariance matrices from data collected in the operating region of the process. These covariance matrices have the advantage over empirical gramians1,2 in that they can be computed for different input types and are applicable to systems that are not control-affine. Furthermore, a procedure is introduced that balances covariance matrices which contain non-control-relevant parts, i.e., uncontrollable/unobservable subsystems. The parts of the model that correspond to Hankel singular values of zero, i.e., parts that do not influence the input-output behavior of the system, can be truncated without affecting the input-output behavior of the system. Additionally, the states corresponding to small, but nonzero Hankel singular values can be truncated in order to obtain a further reduced model.

Methods and supporting theories for minimal realization and model reduction of linear systems are well-established. However, this is not the case for nonlinear systems. The approach presented here combines the ease of use of the linear system approach with the flexibility required for nonlinear systems, leading to an efficient algorithm that can be used to find reduced-order models of nonlinear systems.

2. Previous Work: Minimal Realization and Balanced Model Reduction

Kalman3,4 investigated the minimal realization of linear systems in state-space form. He proposed the Kalman canonical decomposition in which the system is split up into its controllable/uncontrollable and observable/unobservable parts. The minimal realization is given by the subsystem that is both controllable and observable. Moore5 recast Kalman's minimal realization


theory in terms of responses to input signals. This was done in order to extend the notion of controllability/ observability of states to a measure of the importance of each of the states to the input-output behavior of a system. In his work he proposed the balanced realization based upon making the gramians of the system equal to a diagonal matrix in order to determine and reduce the states that are close to being unobservable or uncontrollable. The matrix calculations were performed using Laub’s algorithm.6 The main motivation of his paper was to be able to show that slightly perturbed models will effectively have the same minimal realization as the original model. The main drawback of this approach was that the algorithm required the original model to be completely observable and controllable and thus could not be applied for minimal realization defined in the sense of Kalman’s work. Furthermore, this algorithm could lead to numerical difficulties if the system is close to being nonminimal. Tombs and Postlethwaite7 proposed a numerically reliable algorithm that can be applied to systems that contain unobservable/uncontrollable parts. It can be applied for minimal realization of linear systems as well as for linear model reduction. Analogous to the linear case, the minimal realization of a nonlinear system is given by the subsystem that is both controllable and observable.8 Nevertheless, the development of the model reduction and minimal realization for nonlinear systems is not as complete as that for the linear case, partly because of a lack of numerically reliable tools for observability/controllability analysis as well as algorithms for performing nonlinear transformations. Scherpen and Gray9 developed a set of sufficient conditions in terms of controllability and observability functions under which a given state-space realization of a formal power series is minimal. However, the paper does not mention how these conditions can be checked numerically in the form of an algorithm. Because gramians contain information about the input-output behavior of a system, most work on nonlinear systems has focused on finding nonlinear equivalents of gramians, called energy functions. Scherpen10 described how to obtain energy functions for certain classes of stable nonlinear systems and what conditions are required for a balanced realization to exist. However, her procedures present computational difficulties, and in general a closed-form solution is not guaranteed. The only numerical implementation of Scherpen’s approach is given by Newman and Krishnaprasad,11 who used a Monte Carlo approach. They tested their algorithm on a pendulum with two states to compute approximations to the energy functions as well as a balancing transformation for the nonlinear system. However, after the coordinate transformation is applied and even without reduction of the model, the transformed system does not exhibit the same inputoutput behavior as the original system because of the approximations that were applied during the computational procedure. Because of the problems encountered with general nonlinear balancing procedures, several methods that perform a Galerkin projection, based upon a linear coordinate transformation, have been developed. Pallaske12 investigated a procedure where the linear transformation is found from a covariance matrix that is computed from data collected along system trajectories.

These trajectories represent the system behavior under a constant input but starting from different initial conditions. Löffler and Marquardt13 extended this model reduction approach to models described by differential-algebraic equation systems. They investigated the case of trajectories generated by different initial conditions under constant inputs as well as the case where the trajectories start at the steady-state operating point and are generated by step changes in the inputs to the system. While similarities exist between these methods12,13 and balanced model reduction, it should be pointed out that the emphasis of these two methods is not to retain or approximate the input-output behavior of the system. Lee et al.14 computed the linear coordinate transformation by balancing a system that was generated using subspace identification. This identified system was generated from data collected along system trajectories. Lall et al.1,2 introduced the concept of empirical gramians, which are an extension of the gramians for linear systems. These empirical gramians can be computed for control-affine nonlinear systems. The computation procedure is based upon system trajectories that include changes in the system inputs as well as in the initial conditions. Based upon the empirical gramians, a linear coordinate transformation can be computed and the model reduced via a Galerkin projection. Hahn and Edgar15 showed that the procedure presented by Lall et al.1,2 is limited to control-affine systems and requires modifications when the steady state of the system is different from zero.

3. Review of Minimal Realization and Model Reduction for Linear Systems

A variety of techniques for minimal realization and model reduction of linear systems exist. Pole-zero cancellation is a common method for single-input single-output models in the frequency domain, whereas the Kalman canonical decomposition and different balancing-based methods are frequently applied for state-space models. State-space minimal realization and model reduction procedures are reviewed in this section because they can be extended to give local results for nonlinear systems.

3.1. Kalman Canonical Decomposition. When a system is in Kalman canonical form, the state variable model has been decomposed into four subsystems: (i) controllable and observable states; (ii) controllable but unobservable states; (iii) observable but uncontrollable states; (iv) uncontrollable and unobservable states. The states that are both controllable and observable make up the minimal realization of a linear system. Details on the computation required for the Kalman canonical decomposition can be found in textbooks such as the one by Brogan.16

3.2. Balancing for Linear Systems. A linear time-invariant system

\dot{x}(t) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)    (1)

that is open-loop stable has controllability and observability gramians. These gramians can be used to analyze the controllability and observability of a system. For stable and controllable/observable systems, the controllability/observability gramian will have full rank.17
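As a brief illustration of this review, the gramians of eq 1 can be obtained by solving two continuous Lyapunov equations. The sketch below assumes A, B, and C are available as NumPy arrays with A Hurwitz; the function names and rank tolerance are illustrative choices, not part of the paper.

```python
# Sketch: gramians of the stable LTI system in eq 1 via the Lyapunov equations
# A Wc + Wc A^T + B B^T = 0 and A^T Wo + Wo A + C^T C = 0 (assumes A Hurwitz).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def lti_gramians(A, B, C):
    Wc = solve_continuous_lyapunov(A, -B @ B.T)    # controllability gramian
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)  # observability gramian
    return Wc, Wo

def is_minimal(Wc, Wo, tol=1e-10):
    # Full rank of Wc (Wo) indicates a controllable (observable) realization.
    n = Wc.shape[0]
    return (np.linalg.matrix_rank(Wc, tol) == n and
            np.linalg.matrix_rank(Wo, tol) == n)
```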


A system whose gramians are identical and equal to a positive definite diagonal matrix with decreasing values along the diagonal is called balanced or is in balanced form, and the entries along the diagonal are referred to as Hankel singular values. It can be shown that for completely controllable and observable systems there exists a state-space transformation

\bar{x} = Tx    (2)

such that the transformed system given by

\dot{\bar{x}} = TAT^{-1}\bar{x} + TBu = \bar{A}\bar{x} + \bar{B}u
y = CT^{-1}\bar{x} + Du = \bar{C}\bar{x} + Du    (3)

is in balanced form. The new system given by eq 3 is then called a balanced realization. A novel technique for the computation of T that guarantees that its inverse exists even for systems that are not completely observable and controllable and that only requires the gramian matrices for its computation will be introduced in section 4.4. If a linear system is in balanced form, the Hankel singular values provide a measure for the importance of the states because the state with the largest singular value is the one which is affected the most by control moves and the output is most affected by a change of this state. Therefore, the new states corresponding to the largest singular values influence the input-output behavior the most. For minimal realization, the states that correspond to Hankel singular values of zero can be eliminated, and the reduced system retains the input-output behavior of the full-order system. Once the system is in balanced form, the state vector can be partitioned into two parts: important states (\bar{x}_1) and components (\bar{x}_2) that do not influence the input-output behavior of the system, as shown in eq 4.

\begin{pmatrix} \dot{\bar{x}}_1 \\ \dot{\bar{x}}_2 \end{pmatrix} = \begin{pmatrix} \bar{A}_{11} & \bar{A}_{12} \\ \bar{A}_{21} & \bar{A}_{22} \end{pmatrix} \begin{pmatrix} \bar{x}_1 \\ \bar{x}_2 \end{pmatrix} + \begin{pmatrix} \bar{B}_1 \\ \bar{B}_2 \end{pmatrix} u, \qquad y = (\bar{C}_1 \ \ \bar{C}_2) \begin{pmatrix} \bar{x}_1 \\ \bar{x}_2 \end{pmatrix} + Du    (4)

One approach to minimal realization is balanced truncation of the states that correspond to singular values of zero (or close to zero within working precision). Reduction by truncation leads to the balanced reduced system given by eq 5.

\dot{\bar{x}}_1 = \bar{A}_{11}\bar{x}_1 + \bar{B}_1 u, \qquad y = \bar{C}_1\bar{x}_1 + Du    (5)

The result is a system of ordinary differential equations (ODEs) which contains fewer states than the original system. The number of states that can be truncated depends on the system. Skogestad and Postlethwaite17 and Zhou et al.18 provide a bound for the error for balanced truncation of linear systems. They show that the maximum error due to balanced truncation is twice the sum of the Hankel singular values of the eliminated states. Because minimal realization only reduces states corresponding to singular values of zero, no error is introduced because of the truncation. The method developed in this work can be used for minimal realization by reducing states corresponding to Hankel singular values of zero as well as for balanced model reduction by eliminating small, but nonzero Hankel singular values.
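The balanced truncation of eqs 2-5 can be sketched as follows for a stable, minimal linear system, given the gramians (for example, from the previous sketch). The square-root construction of T used here is one common choice and is an assumption of this sketch, not the algorithm of section 4.4; the truncation order k is an illustrative parameter chosen from the Hankel singular values.

```python
# Sketch: balanced truncation of a stable, minimal LTI system given its gramians.
import numpy as np
from scipy.linalg import cholesky, svd

def balanced_truncation(A, B, C, D, Wc, Wo, k):
    # Square-root balancing: requires Wc, Wo > 0, i.e., a minimal realization.
    Lc = cholesky(Wc, lower=True)
    Lo = cholesky(Wo, lower=True)
    U, hsv, Vt = svd(Lo.T @ Lc)            # hsv holds the Hankel singular values
    S = np.diag(hsv ** -0.5)
    T = S @ U.T @ Lo.T                     # balancing transformation, x_bar = T x (eq 2)
    Tinv = Lc @ Vt.T @ S
    Ab, Bb, Cb = T @ A @ Tinv, T @ B, C @ Tinv    # transformed system of eq 3
    # Keep the k states with the largest Hankel singular values (eqs 4-5).
    return Ab[:k, :k], Bb[:k, :], Cb[:, :k], D, hsv
```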

4. Minimal Realization and Model Reduction of Nonlinear Systems via Covariance Matrices

It has been shown that balancing for linear systems is a powerful technique that is simple to implement. In contrast, a general nonlinear balancing procedure cannot be implemented because of the lack of an available algorithm. In this section an efficient algorithm for the reduction of nonlinear models is developed. This new approach combines the ease of use of the procedure for linear systems with the flexibility required for nonlinear systems. The approach is an extension of balancing of linear systems. This is accomplished by defining controllability and observability covariance matrices which can be computed for any stable system of ODEs. These covariance matrices are computed from data along system trajectories and are then balanced by a procedure similar to that used for gramians of linear systems. However, the balancing algorithm presented in this paper allows systems that are not completely observable and controllable to be balanced and does not require any additional knowledge about the system other than the covariance matrices. The computed transformation is then used within a Galerkin projection in order to transform the nonlinear system into balanced form. The resulting nonlinear equations can be reduced to a local minimal realization using balanced truncation of the states that correspond to Hankel singular values of zero. It is also possible to further reduce the model by truncating states that contribute little to the input-output behavior of the system. The reason this method does not approximate the global behavior of the model is that the covariance matrices are only locally defined. However, the information captured in the covariance matrices gives a better reflection of the system dynamics in this operating region than would result from linearizing the model at the operating point.

The presented approach belongs to the class of order reduction techniques for nonlinear model reduction. When order reduction techniques are used for reduction of nonlinear systems, they will reduce the number of differential equations, but each of the remaining equations will be more complex than any of the original equations. However, the ultimate goal of this minimal realization/model reduction is to create a model that is used for nonlinear controller design. Because the computation time for linear model predictive control is related to the number of states taken to the third power, a reduction in the number of states is important for potential online application, even if the equations of each state are somewhat harder to solve than those for the original model.

4.1. Controllability Covariance Matrix. Because it is unsatisfactory to reduce nonlinear systems based on linear gramians and nonlinear energy functions10 are too computationally intensive to compute, a hybrid method is required. This can be done by introducing the new concept of covariance matrices, which are an extension of empirical gramians to different input types.


For any stable nonlinear system given by eq 6

\dot{x}(t) = f(x(t), u(t))
y(t) = h(x(t), u(t))    (6)

the following sets are required for the definition of the controllability covariance matrix:

T^p = \{T_1, ..., T_r;\ T_i \in \mathbb{R}^{p \times p},\ T_i^T T_i = I,\ i = 1, ..., r\}
M = \{c_1, ..., c_s;\ c_i \in \mathbb{R},\ c_i > 0,\ i = 1, ..., s\}
E^p = \{e_1, ..., e_p;\ \text{standard unit vectors in } \mathbb{R}^p\}

where r = number of matrices for excitation directions, s = number of different excitation sizes for each direction, and p = number of inputs to the system.

Definition 1: Controllability Covariance Matrix. Let T^p, E^p, and M be given sets as described above. The controllability covariance matrix is defined by

W_C = \sum_{i=1}^{p} \sum_{l=1}^{r} \sum_{m=1}^{s} \frac{1}{r s c_m^2} \int_0^{\infty} \Phi^{ilm}(t)\, dt    (7)

where \Phi^{ilm}(t) \in \mathbb{R}^{n \times n} is given by \Phi^{ilm}(t) = (x^{ilm}(t) - x_{ss}^{ilm})(x^{ilm}(t) - x_{ss}^{ilm})^T, and x^{ilm}(t) is the state of the nonlinear system corresponding to the input u(t) = c_m T_l e_i v(t) + u_{ss}(0). Because it is usually not possible to find a closed-form solution for the integral in eq 7, a good approximation of the controllability covariance matrix is given by the computation of \Phi^{ilm}(t) from data along system trajectories. The input to the system u(t) is defined as above, where c_m describes the input size, T_l e_i determines the input direction, and v(t) is the shape of the input. The x_{ss}^{ilm} represents the desired system trajectory, and u_{ss}(0) refers to the input at the original steady state. The input shape should be chosen in such a way that it is consistent with typical input behavior of the plant. Based upon the shape of v(t), controllability covariance matrices can be put into one of the following categories.

1. Impulse input: v(t) = \delta(t). The desired system trajectory x_{ss}^{ilm} for this special case is time-invariant and equal to the steady-state value of the system's states. For impulse inputs, the controllability covariance matrix reduces to the empirical controllability gramian if the trajectories resulting from the excitation by the input do not leave the region of attraction of the original operating point. Because general nonlinear operations are not defined for impulses, the empirical controllability gramian can only be computed for control-affine systems. Because the controllability covariance matrix is equal to the empirical controllability gramian, which in turn reduces to the linear controllability gramian for linear systems, it is guaranteed that the covariance matrix is independent of the input size c_m.1,2

2. A series of steps in v(t). The desired system trajectory x_{ss}^{ilm} produced by a series of step changes in the input will consist of a series of steps as well. This type of input includes single steps as well as rectangular pulse inputs, which can be viewed as a series of two steps of equal size and different sign. The size of the steps as well as their occurrence can be varied. This input type can be applied to the computation of the controllability covariance matrix for any stable nonlinear system. However, it is important to keep in mind that even these input sequences will have to be scaled by a factor of c_m. The series of steps is given by

v(t) = \sum_{k=1}^{z} \alpha_k S(t - t_k^{\mathrm{step}}), \qquad -1 \le \alpha_k \le 1    (8)

where t_k^{\mathrm{step}} is the time when the kth step change occurs, \alpha_k is the size and direction of this step relative to the largest step in the sequence, and z is the number of step changes in the sequence. Using this input, the state vector x^{ilm}(t) and its desired trajectory x_{ss}^{ilm} can be computed as

x^{ilm}(t) = x_{ss} + e^{At}(x^{ilm}(0) - x_{ss}) + \int_0^t e^{A(t-\tau)} B c_m T_l e_i \sum_{k=1}^{z} \alpha_k S(\tau - t_k^{\mathrm{step}})\, d\tau    (9)

x_{ss}^{ilm} = -A^{-1} B c_m T_l e_i \sum_{k=1}^{z} \alpha_k S(t - t_k^{\mathrm{step}}) + x_{ss}    (10)

for linear time-invariant systems in deviation variables. It is now possible to show that the controllability covariance matrix is independent of the input magnitude c_m for linear systems under the assumption that x^{ilm}(0) is equal to x_{ss}. In particular,

\Phi^{ilm}(t) = c_m^2 \left( \int_0^t e^{A(t-\tau)} B T_l e_i \sum_{k=1}^{z} \alpha_k S(\tau - t_k^{\mathrm{step}})\, d\tau + A^{-1} B T_l e_i \sum_{k=1}^{z} \alpha_k S(t - t_k^{\mathrm{step}}) \right) \times \left( \int_0^t e^{A(t-\tau)} B T_l e_i \sum_{k=1}^{z} \alpha_k S(\tau - t_k^{\mathrm{step}})\, d\tau + A^{-1} B T_l e_i \sum_{k=1}^{z} \alpha_k S(t - t_k^{\mathrm{step}}) \right)^T    (11)

W_C = \sum_{i=1}^{p} \sum_{l=1}^{r} \frac{1}{r} \int_0^{\infty} \left( \int_0^t e^{A(t-\tau)} B T_l e_i \sum_{k=1}^{z} \alpha_k S(\tau - t_k^{\mathrm{step}})\, d\tau + A^{-1} B T_l e_i \sum_{k=1}^{z} \alpha_k S(t - t_k^{\mathrm{step}}) \right) \left( \int_0^t e^{A(t-\tau)} B T_l e_i \sum_{k=1}^{z} \alpha_k S(\tau - t_k^{\mathrm{step}})\, d\tau + A^{-1} B T_l e_i \sum_{k=1}^{z} \alpha_k S(t - t_k^{\mathrm{step}}) \right)^T dt    (12)

For z = 1, t_1^{\mathrm{step}} = 0, and \alpha_1 = 1, this reduces to a single step input to the system. The initial states of the system have been chosen equal to their steady states because only the influence of the input on the system should be reflected in the controllability covariance matrix.

It is important to note that all of the chosen inputs will result in covariance matrices that are independent of the input magnitude c_m for a linear plant. However, this is usually not the case for nonlinear models; hence, it is important to compute the covariance matrices for a specific operating region.
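A minimal sketch of how the controllability covariance matrix of eq 7 might be approximated from step-response data (using the sum that is formalized later in eq 19) is shown below. The integrator `simulate`, the time grid, and the argument names are hypothetical stand-ins for whatever simulation code is used; only the summation structure follows the definitions above.

```python
# Sketch: approximating W_C of eq 7 from step-response trajectories (category 2),
# assuming `simulate(u, x0, t_grid)` integrates the model of eq 6 under a constant
# (held) input and returns the states at the grid points, shape (len(t_grid), n).
import numpy as np

def controllability_covariance(simulate, x_ss, u_ss, T_list, c_list, t_grid):
    n = x_ss.size
    Wc = np.zeros((n, n))
    r, s = len(T_list), len(c_list)
    dt = np.gradient(t_grid)                    # quadrature weights, Delta t_k
    p = T_list[0].shape[0]                      # number of inputs
    for Tl in T_list:                           # excitation directions
        for cm in c_list:                       # excitation sizes
            for i in range(p):
                # single step of size cm in direction T_l e_i, held over the horizon
                u_step = u_ss + cm * Tl[:, i]
                x = simulate(u_step, x_ss, t_grid)
                x_des = x[-1]                   # steady state reached after the step
                dev = x - x_des                 # x^{ilm}(t_k) - x_ss^{ilm}
                Wc += (dev * dt[:, None]).T @ dev / (r * s * cm ** 2)
    return Wc
```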

4.2. Observability Covariance Matrix. To define the observability covariance matrix, the following quantities are required:

T^n = \{T_1, ..., T_r;\ T_i \in \mathbb{R}^{n \times n},\ T_i^T T_i = I,\ i = 1, ..., r\}
M = \{c_1, ..., c_s;\ c_i \in \mathbb{R},\ c_i > 0,\ i = 1, ..., s\}
E^n = \{e_1, ..., e_n;\ \text{standard unit vectors in } \mathbb{R}^n\}

where r = number of matrices for perturbation directions, s = number of different perturbation sizes for each direction, and n = number of states of the full-order system.

Definition 2: Observability Covariance Matrix. Let T^n, E^n, and M be given sets as described above. The observability covariance matrix is defined by

W_O = \sum_{l=1}^{r} \sum_{m=1}^{s} \frac{1}{r s c_m^2} \int_0^{\infty} T_l \Psi^{lm}(t) T_l^T\, dt    (13)

where \Psi^{lm}(t) \in \mathbb{R}^{n \times n} is given by \Psi_{ij}^{lm}(t) = (y^{ilm}(t) - y_{ss}^{ilm})^T (y^{jlm}(t) - y_{ss}^{jlm}), y^{ilm}(t) is the output of the system corresponding to the initial condition x(0) = c_m T_l e_i + x_{ss}, and y_{ss}^{ilm} is the steady state that the system will reach after this perturbation. By this definition the observability covariance matrix is equivalent to the empirical observability gramian1,2 if y_{ss}^{ilm} is equal to the measurement at the steady-state operating point and if the perturbations of the initial conditions of the system will remain within the region of attraction of the operating point. Therefore, for this case it will reduce to the linear observability gramian if the system under investigation is linear and is independent of the size of the perturbation c_m for linear systems. These covariance matrices have to be determined from simulation data, collected within a region where the process is to be controlled. The covariance matrices capture part of the nonlinear behavior within the region of operation and are more suitable for determining a reduced-order model than gramians of the linearized system or empirical gramians of the nonlinear system.

4.3. State Truncation via the Galerkin Projection. The Galerkin projection is based upon the idea that the dynamics of a system can be replaced with the dynamics based upon a subspace of the original system. This technique has been used extensively to find sets of ODEs to replace a partial differential equation (PDE) as well as to construct mathematical models of lower order. Given an autonomous system, it is possible to approximate the system with a lower order model of the form

\dot{\bar{x}}(t) = \bar{f}(\bar{x}(t))    (14)

To apply the Galerkin projection, x(t) has to be rewritten as

x(t) = \tilde{P}^T \bar{x}(t) + r(t)    (15)

where r(t) is the residual. This is then substituted into the original equation for the autonomous system. If \tilde{P} is a square matrix of full rank and its dimension is equal to the number of states of the system, then the residual is zero. However, to obtain a model of reduced dimension, \tilde{P} needs to be chosen in such a way that the residual is orthogonal to the reduced subspace. The projection results in the following system of equations that exactly represents the original system:

\dot{\bar{x}}(t) = \tilde{P} f(\tilde{P}^T \bar{x}(t) + r(t))    (16)

When the residual r(t) is deleted, some of the system behavior is lost and the reduced model can only approximate the behavior of the original model. To achieve a good approximation, \tilde{P} has to be chosen in such a way that r(t) is minimized. The resulting reduced-order system is then given by

\dot{\bar{x}}(t) = \tilde{P} f(\tilde{P}^T \bar{x}(t))    (17)

Special attention has to be given to the choice of \tilde{P}. For minimal realization/model reduction, it has to be chosen as the product of two matrices: the transformation matrix T that balances the covariance matrices and a reduction matrix P that eliminates the part of the subspace that contributes little or nothing to the input-output behavior of the system. To obtain a local minimal realization, P has to be chosen with rank equal to the number of states that are observable and controllable. For further model reduction, P is chosen as having even smaller rank. It should also be noted that T is required to be invertible. The algorithm presented in the next section will guarantee that the matrices have these properties. This choice of \tilde{P}, with a modification for the case where the steady states are different from zero,15 will result in a balanced truncation as given by eq 18 for any stable original model of the form of eq 6

\dot{\bar{x}}_1(t) = P T f(T^{-1}\bar{x}(t), u(t)) = \bar{f}(\bar{x}(t), u(t))
\bar{x}_2(t) = \bar{x}_{2,ss}(0)
y(t) = h(T^{-1}\bar{x}(t), u(t)) = \bar{h}(\bar{x}(t), u(t))    (18)

where

\bar{x} = \begin{pmatrix} \bar{x}_1 \\ \bar{x}_2 \end{pmatrix}, \qquad P = [I \ \ 0], \qquad \mathrm{rank}(P) = n_{\mathrm{reduced}}
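A sketch of the projection in eq 18 is given below, assuming a balancing transformation T (from section 4.4) and a truncation order k are already available; `f`, `h`, and `xbar_ss` (the steady state in the transformed coordinates) are placeholders for the user's model functions and data, not names from the paper.

```python
# Sketch: reduced right-hand side of eq 18 obtained by a Galerkin projection,
# holding the truncated states at their transformed steady-state values.
import numpy as np

def reduced_model(f, h, T, xbar_ss, k):
    Tinv = np.linalg.inv(T)
    P = np.eye(T.shape[0])[:k]          # P = [I 0], rank(P) = n_reduced

    def lift(xbar1):
        # reconstruct the full transformed state: kept states + frozen tail
        xbar = np.concatenate([xbar1, xbar_ss[k:]])
        return Tinv @ xbar

    def f_red(xbar1, u):                # d/dt xbar_1 = P T f(T^-1 xbar, u)
        return P @ T @ f(lift(xbar1), u)

    def h_red(xbar1, u):                # y = h(T^-1 xbar, u)
        return h(lift(xbar1), u)

    return f_red, h_red
```

The returned functions f_red and h_red can then be handed to any ODE integrator in place of the original model of eq 6.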

4.4. Computational Procedure. Because covariance matrices are by their nature based upon discrete measurements of system properties, like states and outputs, it is advantageous to reformulate them in a discrete form. Therefore, the following quantities are defined:

Discrete Controllability Covariance Matrix. Let T^p, E^p, and M be given sets as described in section 4.1, where p is the number of inputs. The discrete controllability covariance matrix is defined by

W_C = \sum_{l=1}^{r} \sum_{m=1}^{s} \sum_{i=1}^{p} \frac{1}{r s c_m^2} \sum_{k=0}^{q} \Phi_k^{ilm}\, \Delta t_k    (19)

where \Phi_k^{ilm} \in \mathbb{R}^{n \times n} is given by \Phi_k^{ilm} = (x_k^{ilm} - x_{ss}^{ilm})(x_k^{ilm} - x_{ss}^{ilm})^T and x_k^{ilm} is the state of the nonlinear system at time step k corresponding to the input u_k = c_m T_l e_i v_k + u_{ss}(0). The controllability covariance matrix can be viewed as the sum of the covariance matrices of the states over time and for different combinations of the input variables. These combinations include different input directions as well as magnitudes and should include all relevant cases for a specific model.

Discrete Observability Covariance Matrix. Let T^n, E^n, and M be given sets as described in section 4.2, where n is the number of states. The discrete observability covariance matrix is defined by

W_O = \sum_{l=1}^{r} \sum_{m=1}^{s} \frac{1}{r s c_m^2} \sum_{k=0}^{q} T_l \Psi_k^{lm} T_l^T\, \Delta t_k    (20)

where \Psi_k^{lm} \in \mathbb{R}^{n \times n} is given by (\Psi_k^{lm})_{ij} = (y_k^{ilm} - y_{ss}^{ilm})^T (y_k^{jlm} - y_{ss}^{jlm}), y_k^{ilm} is the output of the system at time step k corresponding to the initial condition x_0 = c_m T_l e_i + x_{ss}, and y_{ss}^{ilm} is the steady state that the system will reach after this perturbation. The observability covariance matrix can be interpreted as the sum of the covariance matrices of the outputs, corresponding to different initial conditions, and over time.

Although these matrices are computed from discrete data points, the data reflect the states and output measurements of a continuous system as given by eq 6 at a specified time. The variable q refers to the number of points that are chosen for the approximation of the integrals in eqs 7 and 13, and \Delta t_k refers to the time interval between two points. It has to be ensured that the system will have reached steady state at some k < q for any kind of input (controllability) or state perturbation (observability) in order for the covariance matrices to represent the dynamics of the system. For the purpose of this paper, the value of q was set to over 1000 in the examples presented later in order to ensure that the system behavior is correctly reflected in the data. The size of the time steps was fixed for each problem and the value determined from the longest settling time computed for the excitations/perturbations.
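The discrete observability covariance matrix of eq 20 could be assembled from perturbed-initial-condition simulations roughly as follows; `simulate_output` and the argument names are hypothetical stand-ins for the plant model integrator, and the quadrature weights simply mirror the Δt_k factors above.

```python
# Sketch: discrete observability covariance matrix of eq 20, assuming
# `simulate_output(x0, t_grid)` returns the model outputs at the grid points
# for initial state x0 under the nominal input, shape (len(t_grid), n_outputs).
import numpy as np

def observability_covariance(simulate_output, x_ss, T_list, c_list, t_grid):
    n = x_ss.size
    Wo = np.zeros((n, n))
    r, s = len(T_list), len(c_list)
    dt = np.gradient(t_grid)                     # Delta t_k
    for Tl in T_list:                            # perturbation directions
        for cm in c_list:                        # perturbation sizes
            devs = []
            for i in range(n):                   # one run per perturbed direction e_i
                x0 = x_ss + cm * Tl[:, i]        # x(0) = c_m T_l e_i + x_ss
                y = simulate_output(x0, t_grid)
                devs.append(y - y[-1])           # deviation from reached steady state
            D = np.stack(devs)                   # shape (n, len(t_grid), n_outputs)
            # (Psi_k)_{ij} = (y^i_k - y^i_ss)^T (y^j_k - y^j_ss), summed over k with dt_k
            Psi = np.einsum('ikm,jkm,k->ij', D, D, dt)
            Wo += Tl @ Psi @ Tl.T / (r * s * cm ** 2)
    return Wo
```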

To use the information from the covariance matrices to reduce the order of the model, a new balancing algorithm needs to be used that is applicable to systems that are not completely controllable and observable. Because this algorithm should be applicable to linear as well as nonlinear systems, it cannot use any system properties other than the covariance matrices for the computation of the balancing transformation. The task of the algorithm is to find an invertible state transformation that makes two real, symmetric, positive semidefinite matrices diagonal and equal in the states that are both controllable and observable. The mathematical formulation of the algorithm is given below. The proof that such a transformation always exists for positive semidefinite matrices is given in Zhou et al.18 The balancing algorithm consists of four steps that are required to balance the states of the system that are observable and controllable.

(1) A Schur decomposition of the controllability covariance matrix has to be determined; the unitary matrices obtained from the decomposition are transformed by a weighting matrix such that

T_1 W_C T_1^T = \begin{bmatrix} I & 0 \\ 0 & 0 \end{bmatrix}    (21)

The rows and columns that contain only zeros in eq 21 refer to the rank deficiency of the controllability covariance matrix. If the matrix has full rank, then there will be no rows or columns with only zero entries.

(2) The transformation found in the previous step can then be applied to the observability covariance matrix

(T_1^T)^{-1} W_O (T_1)^{-1} = \begin{bmatrix} \tilde{W}_{O,11} & \tilde{W}_{O,12} \\ \tilde{W}_{O,21} & \tilde{W}_{O,22} \end{bmatrix}    (22)

and a Schur decomposition can be found for the matrix \tilde{W}_{O,11}, as is shown in eq 23. The unitary matrix of this decomposition is required for the second part of the transformation and is given by eq 24.

U_1 \tilde{W}_{O,11} U_1^T = \begin{bmatrix} \Sigma_1^2 & 0 \\ 0 & 0 \end{bmatrix}    (23)

(T_2^T)^{-1} = \begin{bmatrix} U_1 & 0 \\ 0 & I \end{bmatrix}    (24)

Now the rows and columns that contain only zeros in eq 23 refer to the rank deficiency of the controllable states in the observability covariance matrix. If the matrix has full rank, then there will be no rows or columns with only zero entries.

(3) A transformation consisting of both T_1 and T_2 can be applied to the original observability covariance matrix in order to obtain the third transformation T_3 as given by eq 26.

(T_2^T)^{-1}(T_1^T)^{-1} W_O (T_1)^{-1}(T_2)^{-1} = \begin{bmatrix} \Sigma_1^2 & 0 & \hat{W}_{O,12} \\ 0 & 0 & 0 \\ \hat{W}_{O,12}^T & 0 & \hat{W}_{O,22} \end{bmatrix}    (25)

(T_3^T)^{-1} = \begin{bmatrix} I & 0 & 0 \\ 0 & I & 0 \\ -\hat{W}_{O,12}^T \Sigma_1^{-2} & 0 & I \end{bmatrix}    (26)

(4) A transformation involving T1 through T3 is applied to the original observability covariance matrix, and a Schur decomposition is found for the square matrix consisting of the last columns and rows of that transformed system. The size of this square matrix is equal to the number of states minus the rank of the controllability covariance matrix.

(T_3^T)^{-1}(T_2^T)^{-1}(T_1^T)^{-1} W_O (T_1)^{-1}(T_2)^{-1}(T_3)^{-1} = \begin{bmatrix} \Sigma_1^2 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & \hat{W}_{O,22} - \hat{W}_{O,12}^T \Sigma_1^{-2} \hat{W}_{O,12} \end{bmatrix}    (27)

U_2 (\hat{W}_{O,22} - \hat{W}_{O,12}^T \Sigma_1^{-2} \hat{W}_{O,12}) U_2^T = \begin{bmatrix} \Sigma_3 & 0 \\ 0 & 0 \end{bmatrix}    (28)

Based upon this, the fourth transformation can be determined, and the transformation that balances the states that are observable and controllable is given in eq 30. The resulting transformed covariance matrices are shown in eqs 31 and 32.

(T_4^T)^{-1} = \begin{bmatrix} \Sigma_1^{-1/2} & 0 & 0 \\ 0 & I & 0 \\ 0 & 0 & U_2 \end{bmatrix}    (29)

T = T_4 T_3 T_2 T_1    (30)

T W_C T^T = \begin{bmatrix} \Sigma_1 & 0 & 0 & 0 \\ 0 & I & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}    (31)

(T^{-1})^T W_O T^{-1} = \begin{bmatrix} \Sigma_1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & \Sigma_3 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}    (32)

The Hankel singular values of the states of the balanced system that are controllable and observable are given by \Sigma_1. The Hankel singular values for all other states are defined as being zero because these states are either unobservable, uncontrollable, or both unobservable and uncontrollable. The identity matrix in eq 31 has the same rank as the number of directions that are contained in the controllability covariance matrix but not in the observability covariance matrix. Similarly, the matrix \Sigma_3 refers to directions that are contained in the observability covariance matrix but not in the controllability covariance matrix.

The algorithm described by eqs 21-32 results in a coordinate transformation T that decomposes a model into its observable/unobservable and controllable/uncontrollable subsystems, similar to the Kalman canonical decomposition. Additionally, it also balances the subsystem that is both controllable and observable. It does not require any knowledge about the system other than the two covariance matrices, which makes the procedure applicable to linear as well as nonlinear systems without any modification.
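A sketch of the four-step balancing computation of eqs 21-30 is given below, assuming W_C and W_O are symmetric positive semidefinite NumPy arrays; the rank tolerance `tol` and all function names are illustrative choices not specified in the paper. Because the covariance matrices are symmetric, each Schur decomposition is computed here as an eigendecomposition.

```python
# Sketch: balancing transformation T of eq 30 from the two covariance matrices.
import numpy as np

def _schur_psd(W, tol):
    # For a symmetric PSD matrix the Schur decomposition is an eigendecomposition;
    # eigenvalues are returned in decreasing order together with the numerical rank.
    lam, V = np.linalg.eigh(W)
    order = np.argsort(lam)[::-1]
    lam, V = lam[order], V[:, order]
    return lam, V, int(np.sum(lam > tol))

def balance_covariances(Wc, Wo, tol=1e-10):
    n = Wc.shape[0]
    # Step 1 (eq 21): scale the controllable directions so that T1 Wc T1^T = diag(I, 0).
    lam_c, Vc, nc = _schur_psd(Wc, tol)
    d1 = np.ones(n)
    d1[:nc] = lam_c[:nc] ** -0.5
    T1 = np.diag(d1) @ Vc.T
    Wo1 = np.diag(1.0 / d1) @ Vc.T @ Wo @ Vc @ np.diag(1.0 / d1)   # (T1^T)^-1 Wo T1^-1
    # Step 2 (eqs 22-24): rotate the controllable block of the transformed Wo.
    sig1_sq, Q1, n_co = _schur_psd(Wo1[:nc, :nc], tol)
    T2 = np.eye(n)
    T2[:nc, :nc] = Q1.T                          # U1 of eq 24
    Wo2 = T2 @ Wo1 @ T2.T                        # T2 is orthogonal
    # Step 3 (eqs 25-26): decouple the balanced block from the uncontrollable states.
    K = Wo2[:n_co, nc:].T @ np.diag(sig1_sq[:n_co] ** -1.0)   # W_O,12^T Sigma_1^-2
    L3 = np.eye(n)
    L3[nc:, :n_co] = -K                          # (T3^T)^-1 of eq 26
    T3 = np.eye(n)
    T3[:n_co, nc:] = K.T
    Wo3 = L3 @ Wo2 @ L3.T
    # Step 4 (eqs 27-29): balance the controllable/observable block, rotate the rest.
    sig3, Q2, _ = _schur_psd(Wo3[nc:, nc:], tol)
    T4 = np.eye(n)
    T4[:n_co, :n_co] = np.diag(sig1_sq[:n_co] ** 0.25)   # Sigma_1^{1/2}
    T4[nc:, nc:] = Q2.T                          # U2 of eq 29
    T = T4 @ T3 @ T2 @ T1                        # eq 30
    hankel = np.sqrt(sig1_sq[:n_co])             # Sigma_1: Hankel singular values
    return T, hankel, nc, n_co
```

The returned T can be used with the Galerkin projection sketch of section 4.3, and the returned Hankel singular values indicate how many states to retain.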

5. Properties of the Proposed Algorithm

In this section, some properties of the reduction procedure are investigated and the findings are illustrated by two examples. These properties are the application of the procedure to linear as well as stable nonlinear systems. In both cases the procedure can be used to eliminate unobservable and uncontrollable subsystems as well as for further reduction of the model.

5.1. Application to Linear Systems. One important aspect of the proposed scheme is that it should be applicable to linear systems as well. The proposed procedure guarantees this because the covariance matrices reduce to empirical gramians for impulse inputs, and these in turn reduce to linear gramians for linear systems, as shown by Lall et al.1,2 When the balancing algorithm proposed in section 4.4 is used, the procedure decomposes the model into the controllable/uncontrollable as well as observable/unobservable subsystems, similar to the results obtained by the Kalman canonical decomposition. This decomposition leads to a minimal realization of the system by truncation of the states that are unobservable or uncontrollable. In addition to obtaining a minimal realization for the system, it is also guaranteed that the minimal system is in balanced form, which facilitates further reduction of the model by truncation.

5.2. Decomposition and Reduction of Stable Nonlinear Systems. The reduction procedure can be applied to any stable system of ODEs over a specified operating region. However, because of the approximation made by using covariance matrices instead of nonlinear gramians, as well as the approximation made by only allowing linear state transformations, it is not always possible to compute a minimal realization that results in the minimum number of states for the description of the input-output behavior. The covariance matrices will determine which states contribute most to the input-output behavior of the system. However, it is likely that the algorithm will identify extra directions as being important because the original system is nonlinear and the covariance matrices can only approximate nonlinear behavior over an operating region. When all of the directions that are identified as being controllable and observable are included in the model, the system will exhibit the same input-output behavior as the original full-order model. However, it will most likely consist of more states than would be required for a true minimal realization. It is also possible to further reduce the model by truncating states that provide only minor contributions to the input-output behavior.

Example 1: Application to a Distillation Column Model. Consider a distillation column with 30 trays for the separation of a binary mixture as used by Benallou et al.21 and Horton et al.22 The column has 32 states and is assumed to have a constant relative volatility of 1.6 and symmetric product compositions. The feed stream is introduced at the middle of the column on stage 17 and has a composition of x_F = 0.5. Distillate and bottoms purities are x_D = 0.935 and x_B = 0.065, respectively. The reflux ratio (manipulated variable) is set to 3.0, and the purity of the distillate is measured. The operating region for this problem was chosen to be ±10% around the steady-state values. The covariance matrices for this example were computed by solving the system from time 0 to 125 min for step input changes of up to 10% and sampling data every 0.1 min from the simulation. The computation results in two matrices with 32 rows and columns each, which are not shown because of their size. The Hankel singular values in decreasing order are 0.171 × 10^-1, 0.851 × 10^-3, 0.21 × 10^-3, 0.107 × 10^-4, 0.216 × 10^-5, 0.336 × 10^-6, 0.762 × 10^-7, 0.945 × 10^-8, 0.169 × 10^-8, and 0.25 × 10^-9, and all remaining singular values are smaller than 1 × 10^-10. It can be concluded from the Hankel singular values that a system containing 10 states will exhibit the same input-output behavior as the full-order system for the chosen numerical precision. This 10-state system represents the input-output behavior of the full-order model perfectly for the chosen working precision. However, most of these 10 states are only required because of the approximation introduced by the covariance matrices. It can be assumed from the Hankel singular values (also confirmed with simulations) that a system containing three states gives a very close approximation to the original system. This reduced-order system can be computed using the same procedure as was used for the elimination of the unobservable/uncontrollable states, with the only modification being that now all but the first three states are truncated. Although this smaller system cannot be considered a minimal realization by the definition of Kalman,3,4 it serves as a de facto minimal realization as stated by Moore.5 The computation times required for the integration of each model from time 0 to 125 min for a 10% change in the reflux ratio are shown in Table 1. The time required for the solution of the reduced model is about 40% smaller than that for the original model. The trajectories generated by the models for a 10% change in the reflux ratio are given in Figure 1. Because the curves for the full-order model and the one with 10 states are identical for the chosen working precision, only one of them is shown in the figure.

Figure 1. Trajectories for a 10% change in the reflux ratio of the column.

Table 1. Comparison of Computation Times for the Distillation Column in CPU Seconds
full-order model: 14.15
reduced model with 10 states: 9.043
approximation consisting of three states: 8.692

Example 2: Application to a Polymerization Reactor Model. A model describing a terpolymerization reactor is given by Ogunnaike23 and Cozewith.24 The reactor is assumed to operate under isothermal conditions and produces a terpolymer from three monomers in the presence of a catalyst, a co-catalyst, and an inhibitor. The model consists of 29 states describing the component balances as well as the zeroth, first, and second moments of the reactor. For the purpose of this example, it is assumed that the inlet flow rate of one of the monomers is manipulated and the model should be able to describe how the manipulated variable affects the number-average molecular weight, which is the output of the model. The operating region for this problem was chosen to be ±10% around the steady-state values. The covariance matrices for this example were computed by solving the system from time 0 to 60 min for step input changes of up to 10% and sampling data every 0.1 min from the simulation. The model exhibits nonlinear characteristics, and the states are highly interconnected because of the complex behavior of this process. The Hankel singular values for this model in decreasing order are given by 0.352, 0.766 × 10^-1, 0.569 × 10^-1, 0.716 × 10^-2, 0.259 × 10^-5, 0.160 × 10^-8, and 0.158 × 10^-8, and all remaining values have magnitudes of less than 1 × 10^-8. However, the 8th-22nd singular values have magnitudes between 1 × 10^-8 and 1 × 10^-10. As a result, when only the states corresponding to singular values of less than 1 × 10^-10 are reduced, the model still contains 22 states. However, the model can be further reduced to four states by model truncation without losing much of the input-output behavior of the system. Again, the only difference between the elimination of the nonminimal parts of the model and further model reduction is the choice of the projection matrix P. For truncation of the nonminimal states, the rank of P is equal to the rank of Σ_1 in eqs 31 and 32, whereas for further reduction, the rank of P is chosen to be smaller than the rank of Σ_1, where the exact number of states to be truncated is determined from the Hankel singular values of the covariance matrices. The trajectories generated by the models for a 10% change in the inlet flow rate of one of the monomers are given in Figure 2. Only one graph is shown for the response of the original model and the reduced model with 22 states because they exhibit identical input-output behavior. Because a reduced model with identical input-output behavior for the chosen working precision still requires most of the states of the original model as well as the computation of the balancing coordinate transformation, there is little difference in the computation times between the original model and the model containing 22 states. However, a further reduced model with only four states provides a good approximation to the input-output behavior of the original model, while it is much faster to solve. The reason for this is that some of the behavior that contributes little to the input-output behavior is numerically hard to solve and is truncated in this reduced model. Table 2 summarizes the computation times of the three models. It can be seen that reducing the model to 22 states for this example only slightly decreases the computation time, whereas a further reduced model requires more than two orders of magnitude less computation time.

Figure 2. Trajectories for a 10% change in the inlet flow rate of one of the monomers.

Table 2. Comparison of Computation Times for the Polymerization Reactor in CPU Seconds
full-order model: 837.1
reduced model with 22 states: 803.2
approximation consisting of four states: 1.328


6. Conclusions

This paper presents a new approach for order reduction of nonlinear models over a given operating region. The procedure combines elements from observability and controllability analysis with Galerkin projections and is closely related to balanced model reduction techniques. Covariance matrices are introduced as an extension of empirical gramians to deal with non-control-affine systems and different input types. Furthermore, a novel procedure is introduced for the computation of the balancing transformation that does not require any assumptions about observability and controllability of a system. When the resulting procedure is applied to linear nonminimal systems using impulse inputs, it reduces to a numerically robust version of the well-known method for minimal realization via the Kalman canonical decomposition. However, when the procedure is applied to linear minimal systems, then the algorithm is equivalent to balanced model reduction. These findings have been illustrated with examples of chemical processes.

Literature Cited

(1) Lall, S.; Marsden, J. E.; Glavaski, S. Empirical model reduction of controlled nonlinear systems. 14th IFAC World Congress, Beijing, China, 1999.
(2) Lall, S.; Marsden, J. E.; Glavaski, S. A subspace approach to balanced truncation for model reduction of nonlinear control systems. Int. J. Robust Nonlinear Control 2002, submitted for publication.
(3) Kalman, R. E. Mathematical description of linear systems. SIAM J. Control 1965, 1, 152.
(4) Kalman, R. E. Irreducible realizations and the degree of a rational matrix. J. Soc. Ind. Appl. Math. 1965, 13, 520.
(5) Moore, B. C. Principal component analysis in linear systems: controllability, observability, and model reduction. IEEE Trans. Autom. Control 1981, 26, 17.
(6) Laub, A. J. On computing balancing transformations. Proceedings of the Joint Automatic Control Conference, San Francisco, CA, 1980.
(7) Tombs, M. S.; Postlethwaite, I. Truncated balanced realization of a stable non-minimal state-space system. Int. J. Control 1987, 46, 1319.
(8) Isidori, A. Nonlinear Control Systems: An Introduction, 2nd ed.; Springer-Verlag: New York, 1989.
(9) Scherpen, J. M. A.; Gray, W. S. Minimality and local state decomposition of a nonlinear state space realization using energy functions. IEEE Trans. Autom. Control 2000, 45, 2079.
(10) Scherpen, J. M. A. Balancing for nonlinear systems. Syst. Control Lett. 1993, 21, 143.
(11) Newman, A.; Krishnaprasad, P. S. Computing balanced realizations for nonlinear systems. 14th International Symposium on Mathematical Theory of Networks and Systems, Perpignan, France, 2000.
(12) Pallaske, U. Ein Verfahren zur Ordnungsreduktion mathematischer Prozessmodelle (An Order Reduction Algorithm for Mathematical Process Models). Chem. Ing. Tech. 1987, 604.
(13) Löffler, H.-P.; Marquardt, W. Order reduction of nonlinear differential-algebraic process models. J. Process Control 1991, 32.
(14) Lee, K. S.; Eom, Y.; Chung, J. W.; Choi, J.; Yang, D. A control-relevant model reduction technique for nonlinear systems. Comput. Chem. Eng. 2000, 24, 309.
(15) Hahn, J.; Edgar, T. F. An improved method for nonlinear model reduction using balancing of empirical gramians. Comput. Chem. Eng. 1999, submitted for publication.
(16) Brogan, W. L. Modern Control Theory, 3rd ed.; Prentice Hall: Englewood Cliffs, NJ, 1991.
(17) Skogestad, S.; Postlethwaite, I. Multivariable Feedback Control; John Wiley & Sons: New York, 1997.
(18) Zhou, K.; Doyle, J. C.; Glover, K. Robust and Optimal Control; Prentice Hall: Englewood Cliffs, NJ, 1996.
(19) Marquardt, W. Nonlinear model reduction for optimization based control of transient chemical processes. Chemical Process Control VI, Tucson, AZ, 2001.
(20) Henson, M. A.; Seborg, D. E. Nonlinear Process Control; Prentice Hall: Englewood Cliffs, NJ, 1997.
(21) Benallou, A.; Seborg, D. E.; Mellichamp, D. A. Dynamic compartmental models for separation processes. AIChE J. 1986, 32, 1067.
(22) Horton, R. R.; Bequette, B. W.; Edgar, T. F. Improvements in dynamic compartmental modeling for distillation. Comput. Chem. Eng. 1991, 15, 197.
(23) Ogunnaike, B. A. On-line modelling and predictive control of an industrial terpolymerization reactor. Int. J. Control 1994, 59, 711.
(24) Cozewith, C. Transient response of continuous-flow stirred-tank polymerization reactors. AIChE J. 1988, 34, 272.

Received for review July 19, 2001
Revised manuscript received December 12, 2001
Accepted March 1, 2002

IE0106175