Ind. Eng. Chem. Res. 2001, 40, 2886-2896

Continuous-Time Subspace System Identification Method

Su Whan Sung*
Department of Chemical Engineering, Korea Advanced Institute of Science and Technology, 373-1 Kusong-dong, Yusong-ku, Taejon 305-701, Korea

Seong Young Lee, Hee Jin Kwak, and In-Beum Lee
Department of Chemical Engineering, Pohang University of Science and Technology, San 31, Hyoja Dong, Pohang 790-784, Korea

* To whom all correspondence should be addressed. E-mail: [email protected]. Tel.: +82-42-869-3992. Fax: +82-42-869-3910.

We analyze practical problems of discrete-time subspace system identification methods from the control point of view and develop a new continuous-time subspace system identification method. We use a new transform to obtain the derivative information numerically from the measured process output and the process input. Using the transform, we convert the continuous-time process model composed of differential equations into corresponding algebraic equations. We estimate the process order by inspecting singular values, and subsequently, we determine the continuous-time state space model using the least-squares method. We compare the proposed method with the discrete-time subspace system identification method for a small sampling period.

1. Introduction

Discrete-time subspace system identification methods1-3 have attracted much attention during the past few years because of their ability to identify multivariable processes directly from the input-output data. Compared with the classical prediction error method (PEM) and the instrumental variable method (IVM),4-6 these subspace methods do not suffer from parametrization and nonlinear optimization. Also, their properties and common features have been well-analyzed,7-11 and their extensions to closed-loop process data have been developed.12,13

On the other hand, many continuous-time system identification methods have been proposed, such as Walsh functions,14 block pulse functions,15 Legendre polynomials,16 and multiple integrators.17,18 These functions produce initial value problems. Thus, the above-mentioned methods cannot incorporate unknown initial conditions effectively. To overcome this difficulty, linear integral transforms or macro-difference expressions,19,20 digital filtering approaches,21 and ball-shaped weighting integral transforms22 have been developed. Several papers discussing these methods have justified the necessity of continuous-time system identification from the viewpoint of estimating the physical parameters of the physical continuous-time linear system. However, this justification is not as convincing from the process control point of view because the process model is needed to design the controller rather than to obtain the physical parameters and also because obtaining physical parameters using continuous-time linear estimators is useless for nonlinear systems. Therefore, the necessity for continuous-time system identification must again be justified from the control point of view.

In this research, practical problems in applying discrete-time subspace system identification methods are analyzed from the control point of view, and a new continuous-time subspace system identification method is developed.

A new simple transform is used to obtain the derivative information numerically from the measured process output and the process input. The transform converts the continuous-time process model composed of differential equations into corresponding algebraic equations. The process order is estimated by inspecting the singular values, and subsequently, the continuous-time state space model is obtained using the least-squares method. The proposed method is compared with the discrete-time subspace system identification method for a small sampling period.

This paper is organized as follows. Section 2 discusses the practical problems of discrete-time identification methods. In section 3, we propose a new transform that converts the differential equation into an algebraic equation while reducing the effects of noises. Section 4 introduces the continuous-time state space model and the transformed state space model obtained by the proposed transform. In section 5, we present a continuous-time subspace system identification method. Section 6 gives numerical examples to confirm the identification performance of the proposed method and to compare it with an existing discrete-time subspace system identification method. Section 7 summarizes the main results.

2. Practical Problems of Discrete-Time Identification

All discrete-time identification methods identify the process dynamics using a discrete-time model. Considering that most processes are controlled by digital computers with a zero-order holder, the discrete-time identification approach is a good choice for identifying the dynamic behavior of actual processes. However, this is not always the case. We analyze the practical problems of discrete-time identification methods as follows:

(P1) Frequently, the process input, such as temperature or pressure, is not a discrete (piece-wise constant) signal but rather a continuous signal. For example, the output of process 1 is the input of process 2 in the cascade control system of Figure 1.



Figure 1. Cascade control system.

Figure 2. Convergence of system parameter and corresponding uncertainty bound as the sampling period is decreased.

In this case, the output of process 1 is a continuous signal (that is, the signal between samples is not constant). Discrete-time system identification with a typical sampling period (approximately one-fourth of the time constant or one-half of the time delay) might then give a poor model because of neglect of the intersample dynamics. On the other hand, if we reduce the sampling period enough to guarantee an acceptable accuracy, the following problem (P2) occurs.

(P2) System parameters of discrete-time models are functions of the sampling period, but those of continuous-time models are always constant. This difference results in completely different modeling error patterns in terms of the sampling period. For example, consider the following stable (τ > 0) first-order process

τ dz(t)/dt + z(t) = u(t)
y(t) = z(t) + e(t)    (1)

where e(t) is a discrete-time (piece-wise constant) Gaussian noise and the sampling period is ∆t. u(t) and y(t) represent the process input and the measured process output in a discrete way, respectively. Also, assume that the following discrete-time first-order model is identified by a discrete-time identification method such as the output error identification method

ẑ(k) = â ẑ(k - 1) + (1 - â) u(k - 1)
ŷ(k) = ẑ(k)    (2)

where â = exp(-∆t/τ̂) is the estimate of the actual parameter a = exp(-∆t/τ). The estimate would be corrupted by a modeling error of δ, as in â = a + δ. This means that the estimated time constant is

τ̂ = -∆t / ln(δ + a)    (3)
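As a quick numerical illustration (ours; the numbers are illustrative, not from the paper) of how sensitive eq 3 becomes when ∆t is small, take τ = 1 and a fixed parameter error δ = 0.001:

```python
import math

tau = 1.0        # true time constant
delta = 0.001    # fixed error on the discrete parameter: a_hat = a + delta

for dt in (0.1, 0.01, 0.001):
    a = math.exp(-dt / tau)              # true discrete-time parameter
    tau_hat = -dt / math.log(a + delta)  # eq 3 evaluated at the corrupted parameter
    print(f"dt = {dt:6.3f}  a = {a:.6f}  tau_hat = {tau_hat:.3f}")
# For dt = 0.001, a + delta exceeds 1, so tau_hat comes out negative
# (an unstable identified model), even though delta itself is tiny.
```

For ∆t = 0.001 the perturbed parameter a + δ exceeds 1, so eq 3 returns a negative time constant, that is, an unstable identified model; this is exactly the failure mode analyzed in the following paragraphs.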

Here, it should be noted that the actual parameter a is a function of the sampling period. Figure 2 shows graphically the movement of the system parameter a and the corresponding uncertainty bound δ in terms of the sampling period. As the sampling period is decreased, the discrete-time system parameter converges to 1.0. Thus, to guarantee invariant identification accuracy of the estimated time constant with respect to the sampling period, the uncertainty bound should approach 0 at the same convergence rate as the discrete-time system parameter approaches 1.0. Otherwise, different accuracy levels for the estimated time constant are obtained in terms of the sampling period.

The problem of discrete-time identification methods with a small sampling period comes from trying to identify a discrete-time system parameter of almost 1.0. In this case, even a very small parameter error δ can make the estimated time constant of eq 3 completely unacceptable. We show below that the identified discrete-time model becomes poor or even unstable as the sampling period is decreased. Note that the system parameter converges to 1.0 at the convergence rate 1 - a = 1 - exp(-∆t/τ) ≤ ∆t/τ. If we assume that the covariance of the noise does not change with respect to the sampling time and that the total time of the test signal is fixed, the expected discrete-time parameter error is approximately inversely proportional to the square root of the total number of data points (∝ 1/√N)4 or, equivalently, proportional to the square root of the sampling period (∝ √∆t). Thus, the expected discrete-time parameter error can be represented approximately by δ = δd√∆t, where δd is a constant. We realize that there always exists a sampling period small enough that δ = δd√∆t becomes larger than ∆t/τ. In such cases, discrete-time identification with the small sampling period might result in an unstable model. It is important to realize that the convergence rate (∆t/τ) of the actual parameter a to 1.0 and that (δd√∆t) of the expected modeling error δ to 0.0 are different. This mismatch results in different accuracy levels in terms of the sampling period and even in an unstable model for a small sampling period.

On the other hand, let us assume that the time constant of the continuous-time model is estimated from the same data set as in the discrete-time case by a continuous-time identification method, such as a continuous-time prediction error method that gives a convergence rate of 1/√N.23

τ̂ dẑ(t)/dt + ẑ(t) = u(t)
ŷ(t) = ẑ(t)    (4)

In this continuous-time approach, there is no such sampling period problem because the system parameter τ is not a function of the sampling period. We can more systematically confirm these characteristics by comparing the frequency responses of the relative modeling errors of the two approaches. To inspect the effects of the sampling period, assume that the covariance functions of the noise and the process input do not change with respect to the sampling time and that the total time is fixed. Then, we can assume the following expected parameter error of a discrete-time approach with respect to the total number of data points.4



â = a + δd/√N    (5)

Ĝ(z) = (1 - â)/(z - â) = (1 - a - δd/√N)/(z - a - δd/√N)    (6)

G(z) = (1 - a)/(z - a)    (7)

where N is the total number of data points (N = tf/∆t, where tf is the total time of the measured data) and δd is a constant. Equation 7 is the exact process transfer function, and eq 6 is an estimated transfer function determined using a discrete-time identification method. Then, the frequency response of the relative modeling error of the identified discrete-time model is

Ed(z) = |[G(z) - Ĝ(z)]/G(z)|,  z = exp(jω∆t)    (8)

In the same way, we can obtain the following for the continuous-time approach

τ̂ = τ + δc/√N    (9)

Ĝ(s) = 1/[(τ + δc/√N)s + 1]    (10)

G(s) = 1/(τs + 1)    (11)

where δc is a constant. Then, the relative frequency error of the identified continuous-time model is

Ec(s) = |[G(s) - Ĝ(s)]/G(s)|,  s = jω    (12)
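The following sketch (ours, not from the paper) evaluates the relative-error expressions of eqs 8 and 12 for the settings used for Figure 3 in the next paragraph (tf = 1, δc = δd = 0.01); the time constant τ = 1 is our assumption, since it is not stated. It reproduces the trend that the discrete-time error grows while the continuous-time error shrinks as ∆t is reduced.

```python
import numpy as np

tau = 1.0                          # assumed time constant (not stated in the text)
tf, delta_d, delta_c = 1.0, 0.01, 0.01

for dt in (0.1, 0.02, 0.004):
    N = tf / dt                                   # total number of data points
    omega = np.logspace(-1, np.log10(np.pi / dt), 200)   # up to the Nyquist frequency
    # discrete-time case, eqs 5-8
    a = np.exp(-dt / tau)
    a_hat = a + delta_d / np.sqrt(N)
    z = np.exp(1j * omega * dt)
    Ed = np.abs(((1 - a) / (z - a) - (1 - a_hat) / (z - a_hat)) / ((1 - a) / (z - a)))
    # continuous-time case, eqs 9-12
    tau_hat = tau + delta_c / np.sqrt(N)
    s = 1j * omega
    Ec = np.abs((1 / (tau * s + 1) - 1 / (tau_hat * s + 1)) / (1 / (tau * s + 1)))
    print(f"dt = {dt:5.3f}  max Ed = {Ed.max():.4f}  max Ec = {Ec.max():.5f}")
```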

Figure 3 shows the frequency responses of the relative modeling errors when tf = 1 and δc = δd = 0.01, for example. Here, we see remarkably different frequency error patterns with respect to the sampling period. As the sampling period is decreased, the frequency error of the continuous-time model (eq 12) decreases, but the frequency error of the discrete-time model (eq 8) increases. This is because the actual parameter and the parameter error of the discrete-time case converge at different rates. In contrast, in the continuous-time case the parameter error decreases as the sampling period is decreased, whereas the actual parameter is constant.

From a more general point of view, we can explain the sampling period problem of the discrete-time model as follows: In discrete-time identification methods, all left-half-plane poles in the Laplace transform domain converge to 1.0 in the z-transform domain, and the corresponding parameter errors also converge to 0 as the sampling period is decreased. There is no such sampling problem if the convergence rates of the discrete-time system poles and the corresponding estimation errors are the same. However, this might not be the case in realistic situations. The frequency modeling error might increase as the sampling period is decreased for a certain noise. In contrast, in continuous-time identification methods, all left-half-plane poles are constant, and the parameter error converges to 0. Then, the frequency modeling error becomes smaller as the sampling period is decreased.

For a numerical example, consider the following process

Figure 3. Relative errors of (a) discrete-time approach and (b) continuous-time approach with respect to the sampling period (∆t = 0.1, solid line; ∆t = 0.02, dotted line; ∆t = 0.004, dashed line).

y(s) = [1/(s³ + 3s² + 3s + 1)] u(s) + e(s)    (13)

where e(t) is a discrete-time (piece-wise constant) Gaussian noise and the sampling period is 0.01. Figure 4 shows the activated process. Here, the process input is a random binary sequence (RBS) with a switching time of 0.5, and the variance of the noise is 0.0001. The worst result of a discrete-time subspace identification method (N4SID in Matlab) among 30 experiments is shown in Figure 5. N4SID gives an almost-perfect model when there are no uncertainties (except numerical error). However, N4SID shows unacceptable performance for a very small noise with a 0.0001 variance because of the small sampling period problem. We also applied various PEMs to the above system and obtained the same conclusion. This supports the fact that all discrete-time identification methods have the same small sampling period problem.

Actually, the variance of the discrete-time noise might vary with respect to the sampling period rather than be constant. However, it cannot change the relative advantage of the continuous-time approach compared to the discrete-time approach because the change of the noise variance affects the two identification methods in the same manner.23


Figure 6. Mapping between continuous-time domain and discrete-time domain for a small sampling period.

Figure 4. Activated process input (solid line) and process output (dotted line).

(P3) When the sampling period is small enough compared to the process time constant, the numerical sensitivity of the discrete-time identification method increases as all left-half-plane poles (p1 and p2) in the Laplace transform domain converge to one point on the unit circle in the z-transform domain ({z1 = exp(p1∆t)} ≈ {z2 = exp(p2∆t)} ≈ 1), as shown in Figure 6. In this case, the truncation error in the z-transform domain (estimates of the obtained discrete-time model) should be treated carefully.24 However, if we use the continuous-time model, this situation does not arise because the continuous-time system parameters are not functions of the sampling period.

On the basis of the above-mentioned problems, we can justify the necessity of continuous-time subspace system identification methods. If we reduce the sampling period to increase the model accuracy for processes having a continuous input, problem P2 occurs. On the other hand, if we increase the sampling period to overcome problem P2, we cannot apply a discrete-time identification method to processes having a continuous input because of problem P1. Also, when the sampling period is small compared to the process time constant, the truncation error should be treated carefully because of problem P3. For example, when we use the estimates identified by discrete-time identification methods to design the controller, we should be sure to carry all possible digits. The proposed continuous-time subspace system identification method does not suffer from these problems if the sampling period is small enough to guarantee acceptable accuracy in the numerical integration.

Figure 5. Identification results of a discrete-time system identification method (real, solid line; model for noise case, dashed line; model for no-noise case, dotted line).

3. New Transform for Extracting Derivative Information

The proposed transform is a weighted integral of the kth derivative of the signal as follows

yT(i,j)(k) = ∫_{T(i,j-1)}^{T(i,j)} w(t) [d^k y(t)/dt^k] dt    (14)

where y(t) represents the measured signal. The constants T(i,j) and T(i,j - 1) will be explained later. The given weight w(t) should satisfy the following conditions

d^l w(t)/dt^l |_{t=T(i,j-1)} = d^l w(t)/dt^l |_{t=T(i,j)} = 0,  l = 0, 1, ..., k - 1    (15)

Proposition: If the weight satisfies eq 15, then eq 14 can be rewritten as

yT(i,j)(k) = (-1)^k ∫_{T(i,j-1)}^{T(i,j)} [d^k w(t)/dt^k] y(t) dt    (16)

Proof: We can rewrite eq 16 using integration by parts and the conditions of eq 15



yT(i,j)(k) = (-1)^k ∫_{T(i,j-1)}^{T(i,j)} [d^k w(t)/dt^k] y(t) dt
          = (-1)^k [d^(k-1) w(t)/dt^(k-1)] y(t) |_{t=T(i,j)} - (-1)^k [d^(k-1) w(t)/dt^(k-1)] y(t) |_{t=T(i,j-1)} - (-1)^k ∫_{T(i,j-1)}^{T(i,j)} [d^(k-1) w(t)/dt^(k-1)] [dy(t)/dt] dt
          = (-1)^(k-1) ∫_{T(i,j-1)}^{T(i,j)} [d^(k-1) w(t)/dt^(k-1)] [dy(t)/dt] dt    (17)

By repeating this process k times, we obtain eq 14. Q.E.D.

In this research, we propose the following weight function

f_w(t,a,b) = {[-(t - T(i,j-1))/T(i,1)]^a / a!} {[-(t - T(i,j))/T(i,1)]^b / b!}
w(t) = f_w(t,n,n)    (18)

where T(i,j - 1) ≤ t ≤ T(i,j). The kth derivative of the weight can be obtained easily as

(-1)^k d^k w(t)/dt^k = T(i,1)^(-k) Σ_{i=0}^{k} kCi f_w(t, n - k + i, n - i)    (19)
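To make eqs 16, 18, and 19 concrete, here is a minimal numerical sketch (our construction; all function and variable names are ours) that evaluates the transformed output from sampled data by rectangular integration of eq 16 and checks it against a direct evaluation of eq 14 on a signal whose derivative is known:

```python
import numpy as np
from math import comb, factorial

def f_w(t, a, b, t0, t1, T):
    """Basis function of eq 18 on [t0, t1] = [T(i,j-1), T(i,j)], with scale T = T(i,1)."""
    return ((-(t - t0) / T) ** a / factorial(a)) * ((-(t - t1) / T) ** b / factorial(b))

def signed_weight_deriv(t, k, n, t0, t1, T):
    """(-1)^k d^k w(t)/dt^k from eq 19 (valid for k <= n), with w(t) = f_w(t, n, n)."""
    return T ** (-k) * sum(comb(k, i) * f_w(t, n - k + i, n - i, t0, t1, T)
                           for i in range(k + 1))

def transform(y, t, k, n, t0, t1, T):
    """yT(i,j)(k) via eq 16: (-1)^k * integral of (d^k w/dt^k) * y(t), rectangular rule."""
    m = (t >= t0) & (t <= t1)
    dt = t[1] - t[0]
    return np.sum(signed_weight_deriv(t[m], k, n, t0, t1, T) * y[m]) * dt

# Check on y(t) = sin(t): the k = 1 transform of y (eq 16) should match the weighted
# integral of dy/dt = cos(t) with the weight itself (eq 14).
dt = 0.01
t = np.arange(0.0, 20.0, dt)
t0, t1, T, n = 5.0, 10.0, 5.0, 3
lhs = transform(np.sin(t), t, k=1, n=n, t0=t0, t1=t1, T=T)
m = (t >= t0) & (t <= t1)
rhs = np.sum(f_w(t[m], n, n, t0, t1, T) * np.cos(t[m])) * dt
print(lhs, rhs)   # the two values agree to within the rectangular-integration error
```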

where kCi = k!/[(k - i)! i!] and k ≤ n. It is self-evident that the proposed weight satisfies the conditions in eq 15, as w(t) is 0 at T(i,j - 1) and T(i,j).

Remarks: (1) The transformed output yT(i,j)(k) corresponds to the kth-derivative information of the signal y(t) between t = T(i,j - 1) and t = T(i,j).
(2) Note that we have difficulty in integrating eq 14 numerically because of noises or uncertainties. In contrast, the numerical calculation of eq 16 is acceptable because the given weight w(t) is exactly known. Thus, we should use eq 16 to calculate the transform of eq 14 numerically.
(3) As the sampling period becomes smaller or T(i,j) - T(i,j - 1) becomes larger, the noise effects on the transformed output can be reduced significantly. Assume that the following uncertainties are included in the signal

y(t) = yo(t) + v(t)

where y(t) and yo(t) are the measured discrete signal and the corresponding noise-free signal, respectively. v(t) is a zero-mean, constant-variance noise (not necessarily a white noise). Then, the transformed output will be

yT(i,j)(k) = (-1)^k ∫_{T(i,j-1)}^{T(i,j)} [d^k w(t)/dt^k] yo(t) dt + (-1)^k ∫_{T(i,j-1)}^{T(i,j)} [d^k w(t)/dt^k] v(t) dt    (20)

For example, assume that we use the rectangular numerical integration method to calculate these integrals. Then

(-1)^k ∫_{T(i,j-1)}^{T(i,j)} [d^k w(t)/dt^k] v(t) dt ≈ (-1)^k [(T(i,j) - T(i,j-1))/N] Σ_{l=0}^{N-1} [d^k w(T(i,j-1) + l∆t)/dt^k] v(T(i,j-1) + l∆t)    (21)

where N∆t = T(i,j) - T(i,j - 1). Then, eq 21 decreases to 0 in proportion to 1/N because v(t) is a zero-mean, constant-variance random variable uncorrelated with the weight.
(4) Roughly speaking, larger values of T(i,j) - T(i,j - 1) filter out the higher-frequency information, and vice versa. Thus, larger T(i,j) - T(i,j - 1) values should be chosen to extract slower dynamics, and vice versa. We choose T(i,j) as follows in this research

T(i,1) = Tmin + (i - 1)(Tmax - Tmin)/(NT - 1),  i = 1, ..., NT
T(i,j) = jT(i,1),  j = 0, 1, 2, ..., Ni

where Ni is the largest integer satisfying T(i,Ni) ≤ tf and tf is the final time of the measured process data. This choice of T(i,j) can be drawn as in Figure 7. As mentioned, Tmin should be large enough to suppress the high-frequency uncertainties. However, one should also consider that Tmin values that are too large filter out valuable low-frequency information about the process input and output.

Up to now, many functions or transforms have been proposed for continuous-time system identification, such as Walsh functions,14 block pulse functions,15 Legendre polynomials,16 linear integral filters or macro-difference expressions,19,20 digital filtering approaches,21 and ball-shaped weighting integral transforms.22 All methods other than linear integral filters, digital filters, and ball-shaped weighting integral transforms suffer from initial value problems. Also, it is not convenient to initialize the filter of the digital filtering approaches. Linear integral filters and ball-shaped weighting integral transforms are close to the proposed transform. These methods can overcome initial value problems easily and can also be integrated over finite time intervals. Compared to our approach, linear integral filters require a somewhat more complicated numerical integration because they use multiple integrals. Also, they use a single time interval [that is, T(i,j), i = 1], differing from our approach. Our experience indicates that using various time intervals [that is, T(i,j), i = 1, 2, ..., NT] as in our approach gives better robustness than using a single time interval. Remarks 3 and 4 can justify this. Ball-shaped weighting integral transforms set almost-zero weights on the latter part of the signal compared with those set by the proposed transform, so that, roughly speaking, the former transform loses more information. Also, it is somewhat ambiguous to determine the time at which the integration is stopped.

4. Continuous-Time State Space Model


The following continuous-time state space model is considered in this research


Figure 7. Schematic diagram for the choice of T(i,j).

dx(t)/dt = Ax(t) + Bu(t) + Ke(t)    (22)

y(t) = Cx(t) + e(t)    (23)

where u(t) and y(t) denote the process input and output, respectively. x(t) is the state, and e(t) is assumed to be a zero-mean high-frequency noise. The problem in this research is to estimate the matrices A, B, and C and the dimension of the state from the discrete measurements of the process input and process output. We leave the problem of estimating K for future work. We can convert the differential equation of eq 22 to the following algebraic equations using the new transform

xT(i,j)(k + 1) = A xT(i,j)(k) + B uT(i,j)(k) + K eT(i,j)(k)    (24)

yT(i,j)(k) = C xT(i,j)(k) + eT(i,j)(k)    (25)
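A brief check of the step from eq 22 to eq 24 (our restatement; it follows directly from the definition in eq 14 and the linearity of the transform): applying the order-k transform to dx(t)/dt gives

∫_{T(i,j-1)}^{T(i,j)} w(t) [d^k/dt^k][dx(t)/dt] dt = ∫_{T(i,j-1)}^{T(i,j)} w(t) [d^(k+1) x(t)/dt^(k+1)] dt = xT(i,j)(k + 1)

so transforming each term of eq 22 at order k yields eq 24, and eq 25 follows from eq 23 in the same way.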

5. Continuous-Time Subspace System Identification Method

In most discrete-time subspace system identification methods, the state vector is estimated first and then the system matrices. The same procedure can be applied to continuous-time subspace system identification.

5.1. Estimating States. In this section, we explain how to estimate the state vector. We can obtain the following equations from eqs 24 and 25.

yT(i,j)(k) = C xT(i,j)(k) + eT(i,j)(k)    (26)
yT(i,j)(k + 1) = CA xT(i,j)(k) + CB uT(i,j)(k) + CK eT(i,j)(k) + eT(i,j)(k + 1)    (27)
yT(i,j)(k + 2) = CA² xT(i,j)(k) + CAB uT(i,j)(k) + CB uT(i,j)(k + 1) + CAK eT(i,j)(k) + CK eT(i,j)(k + 1) + eT(i,j)(k + 2)    (28)
⋮
yT(i,j)(k + p - 1) = CA^(p-1) xT(i,j)(k) + CA^(p-2)B uT(i,j)(k) + ... + CB uT(i,j)(k + p - 2) + CA^(p-2)K eT(i,j)(k) + ... + CK eT(i,j)(k + p - 2) + eT(i,j)(k + p - 1)    (29)

Here, it is worth noting that if uT(i,j)(k) = ... = uT(i,j)(k + p - 2) = 0 and eT(i,j)(k) = ... = eT(i,j)(k + p - 1) = 0, then we can obtain eq 30

[ŷT(i,j)(k|k-1); ŷT(i,j)(k+1|k-1); ŷT(i,j)(k+2|k-1); ...; ŷT(i,j)(k+p-1|k-1)] = [C; CA; CA²; ...; CA^(p-1)] xT(i,j)(k)    (30)

where ŷT(i,j)(·|k - 1) represents the transformed process output yT(i,j)(·) when uT(i,j)(k) = ... = uT(i,j)(k + p - 2) = 0 and eT(i,j)(k) = ... = eT(i,j)(k + p - 1) = 0. The following augmented equations in terms of T(i,j), i = 1, ..., NT, j = 1, ..., Ni can be obtained

[ŷT(1,1)(k|k-1)      ŷT(1,2)(k|k-1)      ... ŷT(NT,Ni)(k|k-1);
 ŷT(1,1)(k+1|k-1)    ŷT(1,2)(k+1|k-1)    ... ŷT(NT,Ni)(k+1|k-1);
 ŷT(1,1)(k+2|k-1)    ŷT(1,2)(k+2|k-1)    ... ŷT(NT,Ni)(k+2|k-1);
 ⋮
 ŷT(1,1)(k+p-1|k-1)  ŷT(1,2)(k+p-1|k-1)  ... ŷT(NT,Ni)(k+p-1|k-1)]
= [C; CA; CA²; ...; CA^(p-1)] [xT(1,1)(k) xT(1,2)(k) xT(1,3)(k) ... xT(NT,Ni)(k)]    (31)



[ŷT(1,1)(k+1|k)  ŷT(1,2)(k+1|k)  ... ŷT(NT,Ni)(k+1|k);
 ŷT(1,1)(k+2|k)  ŷT(1,2)(k+2|k)  ... ŷT(NT,Ni)(k+2|k);
 ŷT(1,1)(k+3|k)  ŷT(1,2)(k+3|k)  ... ŷT(NT,Ni)(k+3|k);
 ⋮
 ŷT(1,1)(k+p|k)  ŷT(1,2)(k+p|k)  ... ŷT(NT,Ni)(k+p|k)]
= [C; CA; CA²; ...; CA^(p-1)] [xT(1,1)(k+1) xT(1,2)(k+1) xT(1,3)(k+1) ... xT(NT,Ni)(k+1)]    (32)

where ŷT(i,j)(·|k) represents the transformed process output yT(i,j)(·) when uT(i,j)(k + 1) = ... = uT(i,j)(k + p - 1) = 0 and eT(i,j)(k + 1) = ... = eT(i,j)(k + p) = 0. Now, consider the following singular value decomposition (SVD)

[ŷT(1,1)(k|k-1)      ŷT(1,2)(k|k-1)      ... ŷT(NT,Ni)(k|k-1);
 ŷT(1,1)(k+1|k-1)    ŷT(1,2)(k+1|k-1)    ... ŷT(NT,Ni)(k+1|k-1);
 ŷT(1,1)(k+2|k-1)    ŷT(1,2)(k+2|k-1)    ... ŷT(NT,Ni)(k+2|k-1);
 ⋮
 ŷT(1,1)(k+p-1|k-1)  ŷT(1,2)(k+p-1|k-1)  ... ŷT(NT,Ni)(k+p-1|k-1)] = U S V^T    (33)

The singular value matrix S will be

S = [σ1 0  0  ... 0  0 ... 0;
     0  σ2 0  ... 0  0 ... 0;
     0  0  σ3 ... 0  0 ... 0;
     ⋮
     0  0  0  ... σp 0 ... 0]    (34)

where σ1 ≥ σ2 ≥ ... ≥ σp and σi ≈ 0, i = norder + 1, ..., p. Thus, we can infer the process order (norder) by inspecting the singular values. Considering eqs 32 and 33, we can choose the extended observability matrix and the state as shown below because the norder-dimensional state vector can be chosen arbitrarily under a similarity transform.

[C; CA; CA²; ...; CA^(p-1)] = U(:,1:norder) S(1:norder,1:norder)^0.5    (35)

where U(:,1:norder) denotes the part of U from the first column to the norderth column. From eqs 31 and 32, we obtain

[xT(1,1)(k) xT(1,2)(k) xT(1,3)(k) ... xT(NT,Ni)(k)] =
  [C; CA; CA²; ...; CA^(p-1)]^+
  × [ŷT(1,1)(k|k-1)      ŷT(1,2)(k|k-1)      ... ŷT(NT,Ni)(k|k-1);
     ŷT(1,1)(k+1|k-1)    ŷT(1,2)(k+1|k-1)    ... ŷT(NT,Ni)(k+1|k-1);
     ŷT(1,1)(k+2|k-1)    ŷT(1,2)(k+2|k-1)    ... ŷT(NT,Ni)(k+2|k-1);
     ⋮
     ŷT(1,1)(k+p-1|k-1)  ŷT(1,2)(k+p-1|k-1)  ... ŷT(NT,Ni)(k+p-1|k-1)]    (36)

and

[xT(1,1)(k+1) xT(1,2)(k+1) xT(1,3)(k+1) ... xT(NT,Ni)(k+1)] =
  [C; CA; CA²; ...; CA^(p-1)]^+
  × [ŷT(1,1)(k+1|k)  ŷT(1,2)(k+1|k)  ... ŷT(NT,Ni)(k+1|k);
     ŷT(1,1)(k+2|k)  ŷT(1,2)(k+2|k)  ... ŷT(NT,Ni)(k+2|k);
     ŷT(1,1)(k+3|k)  ŷT(1,2)(k+3|k)  ... ŷT(NT,Ni)(k+3|k);
     ⋮
     ŷT(1,1)(k+p|k)  ŷT(1,2)(k+p|k)  ... ŷT(NT,Ni)(k+p|k)]    (37)

where the superscript + denotes the pseudoinverse of the extended observability matrix.

Therefore, once we estimate the left-hand side matrices of eqs 31 and 32, we can estimate the extended observability matrix of eq 35 and the states of eqs 36 and 37 using the singular value decomposition (SVD).
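A compact sketch (ours; it assumes the stacked prediction matrices of eqs 31 and 32 have already been formed, and it uses a simple threshold where the paper inspects the singular values visually) of how eqs 33-37 are applied:

```python
import numpy as np

def estimate_order_and_states(Yhat_past, Yhat_next, tol=1e-3):
    """Eqs 33-37: order estimate, extended observability matrix, and states.

    Yhat_past : (p, M) left-hand-side matrix of eq 31
    Yhat_next : (p, M) left-hand-side matrix of eq 32
    tol       : our heuristic threshold on sigma_i / sigma_1
    """
    U, s, Vt = np.linalg.svd(Yhat_past, full_matrices=False)   # eq 33
    n_order = int(np.sum(s / s[0] > tol))                      # eq 34: significant singular values
    Obs = U[:, :n_order] * np.sqrt(s[:n_order])                # eq 35: extended observability matrix
    X_k  = np.linalg.pinv(Obs) @ Yhat_past                     # eq 36: states at index k
    X_k1 = np.linalg.pinv(Obs) @ Yhat_next                     # eq 37: states at index k + 1
    return n_order, Obs, X_k, X_k1
```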


Now, let us see how to calculate ŷT(i,j)(·|k - 1) and ŷT(i,j)(·|k). The transformed linear optimal estimator for the states is constructed as

xT(i,j)(k) = Σ_{l=1}^{∞} Ĝy(l) yT(i,j)(k - l) + Σ_{l=1}^{∞} Ĝu(l) uT(i,j)(k - l)    (38)

where Ĝy(l) and Ĝu(l) represent the appropriate matrices. Then, eq 26 can be rewritten as

yT(i,j)(k) = Σ_{l=1}^{∞} Gy(l) yT(i,j)(k - l) + Σ_{l=1}^{∞} Gu(l) uT(i,j)(k - l) + eT(i,j)(k)    (39)

In practice, the infinite regression vectors of eq 39 must be truncated as follows:

yT(i,j)(k) = Σ_{l=1}^{nARX} Gy(l) yT(i,j)(k - l) + Σ_{l=1}^{nARX} Gu(l) uT(i,j)(k - l) + eT(i,j)(k)    (40)

Now, we apply the least-squares (or instrumental variable) method to eq 40, which results in the predictor

yT(i,j)^LS(k + j) = Σ_{l=1}^{nARX} Gy^LS(l) yT(i,j)(k + j - l) + Σ_{l=1}^{nARX} Gu^LS(l) uT(i,j)(k + j - l)    (41)
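A sketch (ours; single-input, single-output case for brevity, whereas Gy and Gu are matrices in general) of fitting the truncated ARX model of eq 40 by least squares and running the zero-input predictor of eq 41; stacking rows from several time scales, each multiplied by √(wz^f(i)), gives the weighted problem of section 5.2:

```python
import numpy as np

def fit_arx(yT, uT, n_arx):
    """Least-squares fit of eq 40 (scalar case):
    yT[k] ~ sum_l Gy[l] yT[k-l] + sum_l Gu[l] uT[k-l],  l = 1..n_arx."""
    rows, targets = [], []
    for k in range(n_arx, len(yT)):
        rows.append(np.concatenate([yT[k - n_arx:k][::-1], uT[k - n_arx:k][::-1]]))
        targets.append(yT[k])
    theta, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(targets), rcond=None)
    Gy, Gu = theta[:n_arx], theta[n_arx:]     # Gy[l-1] ~ Gy^LS(l), Gu[l-1] ~ Gu^LS(l)
    return Gy, Gu

def predict_zero_input(Gy, y_init, n_steps):
    """Eq 41 iterated with future transformed inputs set to zero, as needed for eqs 30-32.
    y_init: the last n_arx transformed outputs, oldest first."""
    hist = list(y_init)
    out = []
    for _ in range(n_steps):
        y_next = sum(Gy[l] * hist[-1 - l] for l in range(len(Gy)))   # uT(k+l) = 0
        out.append(y_next)
        hist.append(y_next)
    return out
```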

From eq 41, we can calculate ŷT(i,j)(l|k - 1) = yT(i,j)^LS(l), l = k, k + 1, ..., k + p - 1, with the initial settings yT(i,j)^LS(k - l) = yT(i,j)(k - l), l = 1, 2, ..., nARX, and uT(i,j)(k + l) = 0, l = 0, 1, ..., p - 1; and ŷT(i,j)(l|k) = yT(i,j)^LS(l), l = k + 1, k + 2, ..., k + p, with the initial settings yT(i,j)^LS(k - l) = yT(i,j)(k - l), l = 0, 1, ..., nARX - 1, and uT(i,j)(k + l) = 0, l = 1, 2, ..., p.

5.2. Frequency-Weighted Least-Squares Method. As mentioned above, we use the least-squares method to solve eq 40. Consider the following residual:

yT(i,j)(k) = Σ_{l=1}^{nARX} Gy^LS(l) yT(i,j)(k - l) + Σ_{l=1}^{nARX} Gu^LS(l) uT(i,j)(k - l) + rT(i,j)(k)    (42)

It should be noted that the high-frequency part of the data should be suppressed in estimating the parameters. However, because the residual corresponding to small T(i,·) is bigger than that corresponding to large T(i,·) (that is, usually rT(1,·) ≥ rT(2,·) ≥ ... ≥ rT(NT,·), as mentioned in remark 3 of section 3), the estimate of the least-squares method would be adjusted to fit the higher-frequency region to reduce the larger residual. This is not what we want. Therefore, we use the following frequency-weighted multistage least-squares method to prevent weighting the high-frequency data more than the low-frequency data:

(1) Set z = 0 and wz^f(i) = 1.0, i = 1, 2, ..., NT.
(2) Solve the following weighted-least-squares problem

    min_{Gy^LS(l|z), Gu^LS(l|z)} { Σ_{i=1}^{NT} wz^f(i) [ Σ_{j=1}^{Ni} rT(i,j)(k|z)^T rT(i,j)(k|z) ] }

(3) Update the weights as w_{z+1}^f(i) = 0.5 wz^f(i) + 0.5 w_{z+1}(i), where

    w_{z+1}(i) = [ (1/N_{NT}) Σ_{j=1}^{N_{NT}} rT(NT,j)(k|z)^T rT(NT,j)(k|z) ] / [ (1/Ni) Σ_{j=1}^{Ni} rT(i,j)(k|z)^T rT(i,j)(k|z) ]

(4) Set z = z + 1, and repeat steps 2-4 until convergence; in practice, 3 or 4 iterations give almost the same performance.

In the above, z is the stage index; rT(i,j)(k|z), Gy^LS(l|z), and Gu^LS(l|z) denote the residual and the estimates at the zth stage; and step 3 is included only for a smoothing effect. In step 2, we weight the low-frequency region more by choosing wz^f(i) inversely proportional to the residual corresponding to T(i,·). The optimization of step 2 using the weighted-least-squares method can be done easily by applying the usual least-squares method after multiplying the regression vector corresponding to T(i,·) by the square root of the weight, √(wz^f(i)).

Remarks: (1) Larger values of nARX gave us better results. However, the transformed noise term eT(i,j)(k) is nearly negligible compared to the transformed process output and process input (yT(i,j)(k), uT(i,j)(k)), as discussed in section 2. Then, the transformed state space system of eqs 24 and 25 is very close to a deterministic system. Thus, a small nARX ≥ norder is sufficient for high-frequency noises.
(2) The transformed noise term eT(i,j)(k) is correlated with the regression vector (yT(i,j)(k - l), l = 1, 2, ..., nARX) because the transform is an integral of the signal between T(i,j - 1) and T(i,j). As a result, the least-squares method gives biased estimates. On the other hand, instrumental variable methods can overcome the biased-estimation problem. However, we prefer to use the least-squares method for simplicity because we can increase the accuracy in estimating ŷT(i,j)(·|k - 1) by choosing a larger value of nARX (that is, a high-order ARX model), and we can simply choose a larger process order. From the control point of view, we think that obtaining a slightly higher-order model is not important.

5.3. Estimating System Matrices. Assume that we have obtained the state vectors. Then, we can use the least-squares method to estimate the system matrices of the state space system of eqs 24 and 25. First, we can estimate C and eT(i,j)(k) from eq 25 and then A, B, and K from eq 24. The procedure for the continuous-time subspace system identification method is summarized as follows:

(1) Calculate the transformed process output and input (yT(i,j)(k) and uT(i,j)(k), respectively) using numerical integration.
(2) Construct the predictor in eq 41 by solving the weighted-least-squares problem.



Figure 8. Logarithms of singular values of the proposed method.

(3) Calculate ŷT(i,j)(·|k - 1) and ŷT(i,j)(·|k).
(4) Determine the process order by inspecting the singular value matrix in eq 34 and estimate the states using eqs 35-37.
(5) Estimate the system matrices A, B, C, and K from eqs 24 and 25 using the least-squares method (a sketch of this step follows below).

Remarks: Note that eT(i,j)(k) is nearly negligible compared to yT(i,j)(k) and uT(i,j)(k), as discussed in section 2, if e(t) is a high-frequency noise. Thus, we cannot obtain K with acceptable accuracy in the transformed state space of eq 24. We leave the study of estimating K and defining the relation between the noise dynamics and the accuracy of K as a future research topic.
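A sketch (ours) of step 5: with the states of eqs 36 and 37 in hand, C follows from eq 25 and A and B from eq 24 by ordinary least squares; K is omitted here, consistent with the remark above.

```python
import numpy as np

def estimate_matrices(X_k, X_k1, U_k, Y_k):
    """Least-squares estimates of A, B, C from eqs 24 and 25 (K omitted).

    X_k, X_k1 : states at indices k and k+1 (n_order x M), from eqs 36 and 37
    U_k, Y_k  : transformed inputs/outputs at index k (n_u x M, n_y x M)
    """
    # eq 25:  Y_k ~ C X_k            ->  C = Y_k X_k^+
    C = Y_k @ np.linalg.pinv(X_k)
    # eq 24:  X_k1 ~ A X_k + B U_k   ->  [A B] = X_k1 [X_k; U_k]^+
    Z = np.vstack([X_k, U_k])
    AB = X_k1 @ np.linalg.pinv(Z)
    n = X_k.shape[0]
    A, B = AB[:, :n], AB[:, n:]
    return A, B, C
```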

Figure 9. Identification results of the proposed method and a discrete-time subspace system identification method (real, solid line; N4SID, dashed line; proposed, dotted line).

6. Simulation Results

Two processes are simulated to confirm the identification performance of the proposed method and to compare it with that of an existing discrete-time subspace system identification method (N4SID in Matlab).

Example 1. Consider the input time-delay system (infinite-order deterministic system)

y(t) = u(t - 0.5)    (43)
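For concreteness, a sketch (ours; the sampling period and minimum switching time are taken from the text below, while the final time and random seed are our choices) of the RBS excitation and the delayed output of eq 43:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, t_final = 0.01, 100.0                # sampling period 0.01; t_final is our choice
switch = int(round(0.2 / dt))            # minimum switching time of 0.2
n = int(round(t_final / dt))

# random binary sequence: hold an independent level of +1 or -1 over each switching interval
levels = rng.choice([-1.0, 1.0], size=n // switch + 1)
u = np.repeat(levels, switch)[:n]

# eq 43: y(t) = u(t - 0.5), i.e., a pure delay of 0.5/dt = 50 samples
d = int(round(0.5 / dt))
y = np.concatenate([np.zeros(d), u[:-d]])
```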

The sampling period (∆t) is 0.01, and the process input is a random binary sequence (RBS) with a minimum switching time of 0.2. The proposed approach (with p = 6, Tmin = 10.0, Tmax = 50.0, and NT = 50) and N4SID in Matlab (with six-step-ahead prediction) identified a sixth-order model. Figures 8 and 9 show the logarithms of the singular values (log(σl/σ1)) of the proposed method and the identified results, respectively. The proposed method shows much better results compared with the discrete-time subspace system identification. The discrete-time subspace identification method shows results that are very sensitive to the structural mismatch between the delay process and the sixth-order model because the sampling period is small.

Example 2. Consider the following third-order plus time delay process with measurement noise

Figure 10. Activated process input (solid line) and process output (dotted line).

d³yo(t)/dt³ + 3 d²yo(t)/dt² + 3 dyo(t)/dt + yo(t) = u(t - 0.1) + 0.2e(t)    (44)

y(t) = yo(t) + e(t)    (45)

where the sampling period (∆t) is 0.01, and the process input is a random binary sequence (RBS) with a minimum switching time of 0.5. The variance of e(t) is 0.01. The proposed approach (with p = 6, Tmin = 10.0, Tmax = 50.0, and NT = 50) and N4SID in Matlab (with six-step-ahead prediction) identified a fourth-order model.

Figure 10 shows the activated process output, and Figures 11 and 12 show the logarithms of the singular values (log(σl/σ1)) of the proposed method and the identified results of the proposed method and N4SID. N4SID with the small sampling period shows very poor identification results. In contrast, the proposed method exhibits good robustness to the high-frequency noise and the structural plant/model mismatch because the proposed transform significantly reduces the noise effects and does not suffer from the small sampling period problem.



Figure 11. Logarithms of singular values of the proposed method.

Figure 12. Identification results of the proposed method and a discrete-time subspace system identification method (real, solid line; N4SID, dashed line; proposed, dotted line).

7. Conclusions

We discussed the practical problems of existing discrete-time subspace system identification methods. We developed a new transform to convert the differential equations into corresponding algebraic equations. Subsequently, we proposed a continuous-time subspace system identification method. Because the transform significantly reduces the effects of high-frequency uncertainties and because the continuous-time approach does not suffer from the sampling problem, the proposed method gives good identification results. In addition, discrete-time subspace system identification methods exhibit poor robustness to structural uncertainties and noises compared with the proposed method when the sampling period is small.

Acknowledgment

This work was supported by the Post-Doctorial Fellowship Program of Korea Science and Engineering Foundation (KOSEF) and the Brain Korea 21 Program. The authors are grateful to Professor Jay H. Lee for valuable comments on continuous-time system identification.

Literature Cited

(1) Larimore, W. E. Canonical Variate Analysis in Identification, Filtering, and Adaptive Control. 29th IEEE Conference on Decision and Control, Honolulu, Hawaii, 1990; p 596.
(2) Verhaegen, M. Identification of the Deterministic Part of MIMO State Space Model Given in Innovations Form from Input-Output Data. Automatica 1994, 30, 1877.
(3) Van Overschee, P.; De Moor, B. N4SID: Subspace Algorithms for the Identification of Combined Deterministic-Stochastic Systems. Automatica 1994, 30, 75.
(4) Ljung, L. System Identification: Theory for the User; Prentice Hall: Englewood Cliffs, NJ, 1987.
(5) Söderström, T.; Stoica, P. System Identification; Prentice Hall: Englewood Cliffs, NJ, 1989.
(6) Forssell, U.; Ljung, L. Closed-Loop Identification Revisited. Automatica 1999, 35, 1215.
(7) Van Overschee, P.; De Moor, B. A Unifying Theorem for Three Subspace System Identification Algorithms. Automatica 1995, 31, 1855.
(8) Viberg, M. Subspace-based Methods for the Identification of Linear Time-Invariant Systems. Automatica 1995, 31, 1835.
(9) Bauer, D.; Deistler, M.; Scherrer, W. Consistency and Asymptotic Normality of Some Subspace Algorithms for Systems without Observed Inputs. Automatica 1999, 35, 1243.
(10) Larimore, W. E. Statistical Optimality and Canonical Variate Analysis System Identification. Signal Process. 1996, 52, 131.
(11) Deistler, M.; Peternell, K.; Scherrer, W. Consistency and Relative Efficiency of Subspace Methods. Automatica 1996, 31, 1865.
(12) Chou, C. T.; Verhaegen, M. Subspace Algorithms for the Identification of Multivariable Dynamic Errors-in-Variables Models. Automatica 1997, 33, 1857.
(13) Ljung, L.; McKelvey, T. Subspace Identification from Closed-Loop Data. Signal Process. 1996, 52, 209.
(14) Rao, G. P.; Palanisamy, K. R. Improved Algorithm for Parameter Identification in Continuous Systems via Walsh Functions. Proc. IEEE D 1983, 130, 9.
(15) Cheng, B.; Hsu, N. Analysis and Parameter Estimation of Bilinear Systems via Block-Pulse Functions. Int. J. Control 1982, 36, 53.
(16) Hwang, C.; Guo, T. Transfer-Function Matrix Identification in MIMO Systems via Shifted Legendre Polynomials. Int. J. Control 1984, 39, 807.
(17) Mathew, A. V.; Fairman, F. W. Transfer Function Matrix Identification. IEEE Trans. Circuits Syst. 1974, CAS-21, 584.
(18) Whitfield, A. H.; Messali, N. Integral-Equation Approach to System Identification. Int. J. Control 1987, 45, 1431.
(19) Sagara, S.; Zhao, Z. Recursive Identification of Transfer Function Matrix in Continuous Systems via Linear Integral Filter. Int. J. Control 1989, 50, 457.
(20) Eitelberg, E. Continuous-Time System Representation with Exact Macro-Difference Expressions. Int. J. Control 1988, 47, 1207.
(21) Sagara, S.; Yang, Z.; Wada, K. Recursive Identification Algorithms for Continuous Systems Using an Adaptive Procedure. Int. J. Control 1991, 53, 391.



(22) Sung, S. W.; Lee, I.; Lee, J. New Process Identification Method for Automatic Design of PID Controllers. Automatica 1998, 34, 513.
(23) Sung, S. W.; Lee, I. Prediction Error Identification Method for Continuous-Time Processes with Time Delay. Ind. Eng. Chem. Res., in revision.
(24) Ninness, B. M.; Goodwin, G. C. The Relationship Between Discrete-Time and Continuous-Time Linear Estimation. In Identification of Continuous-Time Systems; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1991; p 79.

Received for review January 18, 2000
Revised manuscript received September 7, 2000
Accepted April 17, 2001

IE000069F