Ind. Eng. Chem. Res. 2001, 40, 4403-4414


Reconstruction-Based Fault Identification Using a Combined Index

H. Henry Yue
Tokyo Electron America, Inc., 2400 Grove Boulevard, Austin, Texas 78741

S. Joe Qin*
Department of Chemical Engineering, The University of Texas at Austin, Austin, Texas 78712

* Author to whom all correspondence should be addressed. Tel.: (512) 471-4417. Fax: (512) 471-7060. E-mail: [email protected].

Process monitoring and fault diagnosis are crucial for efficient and optimal operation of a chemical plant. This paper proposes a reconstruction-based fault identification approach using a combined index for multidimensional fault reconstruction and identification. Fault detection is conducted using a new index that combines the squared prediction error (SPE) and T2. Necessary and sufficient conditions for fault detectability are derived. The combined index is used to reconstruct the fault along a given fault direction. Faults are identified by assuming that each fault in a candidate fault set is the true fault and comparing the reconstructed indices with the control limits. Fault reconstructability and identifiability on the basis of the combined index are discussed. A new method to extract fault directions from historical fault data is proposed. The dimension of the fault is determined on the basis of the fault detection indices after fault reconstruction. Several simulation examples and one practical case are presented. The method proposed here is compared with two existing methods in the literature for the identification of single-sensor and multiple-sensor faults. We analyze the reasons that the other two methods lead to erroneous identification results. Finally, the proposed approach is applied to a rapid thermal annealing process for fault diagnosis. Fault subspaces of several typical process faults are extracted from the data and then used to identify new faults.

1. Introduction

Process monitoring is important for the safe, efficient operation of a chemical plant. As more sensors are installed in a chemical process to ensure that the process is operating normally, not only are large amounts of sensor data generated and collected, but the chance of sensor faults increases. Whereas sensor faults are localized at the faulty sensors, process faults usually cause several sensors to violate the normal correlations among the sensors, which leads to the detection of the fault. After detection, the cause of the fault is identified. In the case of sensor faults, the faulty sensors are identified and reconstructed from other normal sensors. In the case of a process fault, the fault type needs to be classified or identified from a set of faulty cases. Knowledge of the fault type greatly assists engineers in correcting the fault and quickly returning the process to normal operation. If the fault is classified as something that has happened before, immediate actions can be taken to correct the fault.

Because the process variables are often strongly correlated, a multivariate method is preferred for fault detection and diagnosis. Principal component analysis (PCA) is a multivariate statistical method that has been intensively studied and widely applied. A number of papers have illustrated the use of PCA in process monitoring and statistical process control,1,2 fault diagnosis,3-7 sensor validation,8,9 sensor fault identification,9,10 and process control.11,12 Wise and Gallagher13 give a review of PCA and several other chemometrics methods used in process monitoring. Whereas fault detection using PCA is relatively well established, fault identification based on PCA models is quite complicated, and few results have been reported. A widely used fault diagnosis method is the contribution plot approach proposed by Miller et al.14

The contribution to the squared prediction error (SPE) from each variable is calculated to determine which variable is most likely to be faulty. The contribution to T2 is calculated for each variable and each principal component, with no overall measure for all of the components. Westerhuis et al.15 provided an approximate overall T2 contribution calculation and established confidence limits for the contributions. Although this method is easy to use, it is difficult to identify process faults using contribution plots because a process fault tends to affect multiple sensors simultaneously. In this case, the contribution plots give the effect, but not the cause, of the fault. Another problem is that a fault in one sensor could lead to an increase in the contributions of other sensors,8 thereby increasing the chance of false identification. To identify multiple-sensor faults, Stork et al.10 proposed a backward elimination sensor identification (BESI) algorithm that requires a time-consuming recalculation of the PCA model every time one sensor is removed from the model. Another drawback of BESI is that T2 is not considered. Gertler et al.16 proposed an isolation-enhanced PCA similar to the structured residual approach.17 Multiple partial PCA models are built to isolate different sensor or actuator faults. Dunia et al.5-7,9 proposed a reconstruction-based approach for fault identification that involves minimization of the SPE along a fault direction. Faults are identified using a sensor validity index (SVI) or an identification index, which is the ratio of the reconstructed SPE to the faulty SPE. Both unidimensional and multidimensional faults are considered. Although this approach is straightforward for sensor faults, the derivation of fault directions for process faults remains an open issue. The occurrence of a fault usually leads to changes in either the SPE or T2 or both. The faulty SPE and T2 can exceed the control limits if the fault is large enough. Fault detection, reconstruction, and identification should therefore be formulated in terms of both indices. In this paper,



we extend the reconstruction-based method of Dunia and Qin6 to incorporate the two indices. We first derive the necessary and sufficient conditions for fault detectability based on the SPE and T2 and then formulate a new criterion by combining the SPE and T2 for fault detection and identification. The combined index is applied for fault reconstruction to remove the effect of the fault on the SPE and T2 simultaneously. A fault is identified if the indices after reconstruction are within the normal region. To derive the fault directions or subspaces, we propose that singular value decomposition (SVD) be applied to historical fault data. Data averaging is used to improve the accuracy of the method. The proposed methods are used in several simulation and industrial case studies. The organization of the paper is as follows. We discuss fault detection based on the SPE and T2 in section 2 and derive the necessary and sufficient conditions for fault detectability. In section 3, we propose a combined index for fault detection and compare it with other quadratic indices. In section 4, fault reconstruction is conducted by minimizing the combined index subject to constraints. A quadratic constrained quadratic programming (QCQP) problem is solved for fault reconstruction. Section 5 discusses the procedure for fault identification. Section 6 proposes using SVD to extract fault subspaces. In section 7, we first illustrate the proposed methods in two simulation cases, including single-sensor and multiple-sensor fault identification, and then apply the approach to a rapid thermal annealing (RTA) process. The proposed method is compared with two other approaches: the contribution plot and the sensor validity index. We analyze the reasons that these two approaches lead to erroneous identification. Section 8 concludes the paper. 2. Fault Detection and Detectability 2.1. Fault Detection Based on the SPE and T2. Let X be an n × m data matrix in which the rows are samples and the columns are variables. It is assumed that X is at least scaled to zero mean. PCA decomposes X into

X = TP^T + E = TP^T + T̃P̃^T    (1)

where T ∈ R^{n×l} and P ∈ R^{m×l} are the score and loading matrices, respectively, and l is the number of principal components (PCs) retained in the model P. The residual matrix E can be further decomposed into the product of a residual score matrix T̃ and a residual loading matrix P̃. The range space of P is the principal component subspace (PCS), S, with dimension l. The range space of P̃ is the residual subspace (RS), S̃, with dimension m − l. P and P̃ can also be interpreted as the eigenvectors of the covariance matrix S = X^T X/(n − 1) associated with the principal eigenvalues Λ = T^T T/(n − 1) and the residual eigenvalues Λ̃ = T̃^T T̃/(n − 1), respectively.
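For concreteness, this decomposition can be computed directly from a zero-mean data matrix X. The following minimal sketch (NumPy assumed; the function name is illustrative and not part of the original paper) returns P, Λ, P̃, and Λ̃:

import numpy as np

def fit_pca_model(X, n_pc):
    """Eigendecompose S = X^T X/(n-1) and split it into principal and residual parts."""
    n, m = X.shape
    S = X.T @ X / (n - 1)                       # sample covariance matrix
    eigval, eigvec = np.linalg.eigh(S)          # eigenvalues in ascending order
    order = np.argsort(eigval)[::-1]            # reorder to descending
    eigval, eigvec = eigval[order], eigvec[:, order]
    P, Lam = eigvec[:, :n_pc], np.diag(eigval[:n_pc])          # PCS loadings and eigenvalues
    P_res, Lam_res = eigvec[:, n_pc:], np.diag(eigval[n_pc:])  # RS loadings and eigenvalues
    return P, Lam, P_res, Lam_res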

A new sample vector x is partitioned into two portions

x = x̂ + x̃    (2)

where

x̂ = PP^T x ∈ S    (3)

and

x̃ = (I − PP^T)x ∈ S̃    (4)

are the projections on the PCS and the RS, respectively. When a fault occurs, the faulty sample vector can be represented as

x = x* + Ξ_i f    (5)

where Ξ_i ∈ R^{m×l_i} is an orthonormal matrix that spans the fault subspace with dimension l_i and ||f|| represents the magnitude of the fault. Consequently, the projections of x onto the PCS and/or the RS increase. Two indices are used to detect deviations in the PCS and RS. One is the SPE, which is a measure of variations in the RS

SPE ≡ ||x̃||² = ||(I − PP^T)x||² ≡ ||C̃x||²    (6)

The other is Hotelling’s T2, which measures variations in the PCS

T² = x^T P Λ^{-1} P^T x    (7)

The process is considered normal if

SPE ≤ δ²    (8)

T² ≤ χ_l²    (9)

where δ² and χ_l² denote the confidence limits for the SPE and T2, respectively. Jackson and Mudholkar18 developed a confidence limit expression for the SPE assuming that x follows a normal distribution. Under the same assumption, T2 follows a χ² distribution with l degrees of freedom.

2.2. Fault Detectability Based on the SPE and T2. Fault detectability represents the capability of a model to detect the presence of a fault. Traditional univariate SPC methods can detect a fault in a single variable, but it is difficult for them to detect a multivariate fault. Dunia and Qin6 derived necessary and sufficient conditions for fault detectability using only the SPE. Here, we derive detectability conditions using both the SPE and T2. First, we examine how a fault represented by eq 5 affects the two indices. From eqs 5 and 6, the SPE can be represented as

SPE = ||x̃* + Ξ̃_i f||²    (10)

where Ξ̃_i = (I − PP^T)Ξ_i is the projection of Ξ_i on the RS. From eq 7, T2 can be represented by

T² = ||Λ^{-1/2} P^T PP^T x||² = ||Λ^{-1/2} P^T (PP^T x* + PP^T Ξ_i f)||² = ||Λ^{-1/2} P^T (x̂* + Ξ̂_i f)||²    (11)

where Ξ̂_i = PP^T Ξ_i is the projection of Ξ_i on the PCS. From the above equations, the contributions to the SPE and T2 have two parts: the projections of the normal process vector (x*) and the fault displacement. Therefore, whether a fault is detectable depends on the magnitude of the fault displacement. Dunia and Qin6 classified the detectability conditions into necessary detectability and sufficient detectability.


Necessary detectability examines the angle or direction of the fault, whereas the sufficient condition requires the magnitude of the fault. Dunia and Qin6 further classified necessary detectability into partial detectability and complete detectability based on the rank of Ξ̃_i. When Ξ̃_i has full rank, the fault is referred to as completely detectable, whereas when Ξ̃_i has reduced rank, it is partially detectable. In this section, we examine these fault detectability conditions based on the SPE and T2 jointly.

Although the fault matrix Ξ_i has full column rank, Ξ̂_i and Ξ̃_i might not have full column rank. An extreme case is when Ξ̂_i or Ξ̃_i becomes a zero matrix. If Ξ̂_i = 0, the fault is not detectable by T2 from eq 11. If Ξ̃_i = 0, the fault is not detectable by the SPE from eq 10. Because Ξ̂_i and Ξ̃_i cannot simultaneously be zero, at least one index, either the SPE or T2, will detect the fault if its magnitude is large enough. Therefore, necessary detectability is always satisfied when the SPE and T2 are used jointly. From eqs 10 and 11, we have the following partial detectability conditions:
(1) If rank(Ξ̂_i) < l_i, then Ξ_i is partially detectable by T2, i.e., all f ∈ N(Ξ̂_i) are not detectable and all f ∉ N(Ξ̂_i) are detectable by T2, where N(·) denotes the null space of a matrix.
(2) If rank(Ξ̃_i) < l_i, then Ξ_i is partially detectable by the SPE, i.e., all f ∈ N(Ξ̃_i) are not detectable and all f ∉ N(Ξ̃_i) are detectable by the SPE.
Notice that Ξ̂_i and Ξ̃_i are in the two complementary subspaces S and S̃, respectively. If f ∈ N(Ξ̂_i), then f ∉ N(Ξ̃_i), and vice versa. Therefore, a fault is always necessarily detectable by either the SPE or T2. Similarly, complete detectability conditions can be derived as follows:
(1) If rank(Ξ̂_i) = l_i, then Ξ_i is completely detectable by T2 for all possible f with a large enough fault magnitude ||f||.
(2) If rank(Ξ̃_i) = l_i, then Ξ_i is completely detectable by the SPE for all possible f with a large enough fault magnitude ||f||.
A fault might be completely detectable by both the SPE and T2. Alternatively, if a fault is completely detectable by the SPE, it might be undetectable, partially detectable, or completely detectable by T2, and vice versa. Note that partial detectability is applicable to multidimensional faults only. For simple unidimensional faults, only complete detectability applies. If a fault is necessarily detectable, its magnitude must be large enough for it to be sufficiently detected. Dunia and Qin6 give the sufficient condition for fault detection using the SPE: a fault is guaranteed to be detectable by the SPE if

||Ξ̃_i f|| > 2δ    (12)

This means that the magnitude of the fault projected on the RS has to be larger than the diameter of the confidence region. Similarly, we can derive the sufficient condition for T2 to detect a fault with a given confidence limit. From eq 11, we have

T = ||Λ^{-1/2} P^T (x̂* + Ξ̂_i f)|| ≥ ||Λ^{-1/2} P^T Ξ̂_i f|| − ||Λ^{-1/2} P^T x̂*||    (13)

Because ||Λ^{-1/2} P^T x̂*||² ≤ χ_l², to guarantee that T² > χ_l², we must require that ||Λ^{-1/2} P^T Ξ̂_i f|| ≥ 2χ_l. Therefore, a fault is sufficiently detectable by T2 if

||Λ^{-1/2} P^T Ξ̂_i f|| ≥ 2χ_l    (14)
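As an illustration of eqs 12 and 14, the two magnitude conditions can be checked numerically as sketched below (illustrative names; P, Lam, delta2, and chi2_l are assumed to come from the PCA model and the control limits of section 2.1):

import numpy as np

def sufficiently_detectable(P, Lam, Xi_i, f, delta2, chi2_l):
    """Check the sufficient detectability conditions of eqs 12 and 14."""
    m = P.shape[0]
    Xi_res = (np.eye(m) - P @ P.T) @ Xi_i              # projection of Xi_i on the RS
    spe_norm = np.linalg.norm(Xi_res @ f)              # ||Xi~_i f||
    Lam_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(Lam)))
    # note P^T Xi^_i = P^T (P P^T Xi_i) = P^T Xi_i, so Xi_i can be used directly
    t2_norm = np.linalg.norm(Lam_inv_sqrt @ P.T @ Xi_i @ f)
    by_spe = spe_norm > 2.0 * np.sqrt(delta2)          # eq 12
    by_t2 = t2_norm > 2.0 * np.sqrt(chi2_l)            # eq 14
    return by_spe, by_t2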

In summary, we have the following cases for sufficient detectability:
(1) A fault is guaranteed to be detectable by the SPE if ||Ξ̃_i f|| > 2δ.
(2) A fault is guaranteed to be detectable by T2 if ||Λ^{-1/2} P^T Ξ̂_i f|| > 2χ_l.
(3) A fault is not guaranteed to be detectable if ||Ξ̃_i f|| ≤ 2δ and ||Λ^{-1/2} P^T Ξ̂_i f|| ≤ 2χ_l.

3. Fault Detection Using a Combined Index

We note from the above derivation and also from practical experience that the SPE and T2 indices behave in a complementary manner. Therefore, it is possible to combine the two indices into one to simplify the fault detection task. In this section, we propose such an index, which incorporates the SPE and T2 in a balanced way. The confidence limit is also established using the distribution of quadratic forms.

3.1. A Combined Index. In practice, one index, rather than two, is preferred for monitoring a process. Yue and Qin19 proposed a combined index for fault reconstruction that combines the SPE and T2 as follows

φ = SPE(x)/δ² + T²(x)/χ_l² = x^T Φ x    (15)

where

Φ = PΛ^{-1}P^T/χ_l² + (I − PP^T)/δ² = PΛ^{-1}P^T/χ_l² + P̃P̃^T/δ²    (16)

Notice that Φ is symmetric and positive definite. Application of this index for fault detection requires that the statistical distribution and alarm thresholds for given confidence levels be derived. From eq 15, the index φ is a quadratic function of vector x. Box20 discussed the distribution of quadratic forms assuming that x has a multinormal distribution. Both an exact and a fairly good approximate distribution are derived, with the latter having the same first two moments as the former. We use the approximate distribution to calculate the confidence limits of the combined index. The distribution of φ can be approximated using gχ2(h)

φ = x^T Φ x ~ gχ²(h)    (17)

where the coefficient g is given by

g = ∑_j ν_j λ_j² / ∑_j ν_j λ_j = tr[(SΦ)²] / tr(SΦ)    (18)

and the degree of freedom for the χ2 distribution is

h = (∑_j ν_j λ_j)² / ∑_j ν_j λ_j² = [tr(SΦ)]² / tr[(SΦ)²]    (19)

Here λ_j is the jth eigenvalue of the matrix SΦ with multiplicity ν_j, where S = cov(x). If all of the eigenvalues are distinct, then ν_j = 1. After g and h are calculated,


the confidence limit of φ can be obtained for a given confidence level. For a confidence level of γ, a fault is detected by φ if

φ ≥ gχ_γ²(h) ≡ ζ²    (20)
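The quantities δ², χ_l², Φ, g, h, and ζ² of eqs 15-20 can be estimated from normal training data as sketched below (NumPy and SciPy assumed; function and variable names are illustrative). For the SPE limit δ², this sketch substitutes the simpler χ²-type approximation based on the residual eigenvalues for the Jackson-Mudholkar expression cited in section 2.1:

import numpy as np
from scipy.stats import chi2

def combined_index_model(X, n_pc, alpha=0.99):
    """Return Phi and the control limits (delta2, chi2_l, zeta2) from zero-mean training data X."""
    n, m = X.shape
    S = X.T @ X / (n - 1)
    eigval, eigvec = np.linalg.eigh(S)
    order = np.argsort(eigval)[::-1]
    eigval, eigvec = eigval[order], eigvec[:, order]
    P, lam = eigvec[:, :n_pc], eigval[:n_pc]
    lam_res = eigval[n_pc:]

    chi2_l = chi2.ppf(alpha, n_pc)                                 # T^2 limit
    theta1, theta2 = lam_res.sum(), (lam_res ** 2).sum()
    delta2 = (theta2 / theta1) * chi2.ppf(alpha, theta1 ** 2 / theta2)   # SPE limit (chi^2 approx.)

    Phi = P @ np.diag(1.0 / lam) @ P.T / chi2_l \
        + (np.eye(m) - P @ P.T) / delta2                           # eq 16

    SPhi = S @ Phi
    tr1, tr2 = np.trace(SPhi), np.trace(SPhi @ SPhi)
    g, h = tr2 / tr1, tr1 ** 2 / tr2                               # eqs 18 and 19
    zeta2 = g * chi2.ppf(alpha, h)                                 # eq 20
    return Phi, delta2, chi2_l, zeta2

A new sample x (scaled in the same way as the training data) is then flagged as faulty when x^T Φ x exceeds ζ².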

It is worth noting that Raich and Cinar4 suggested the following combined statistic

c·SPE(x)/δ² + (1 − c)·T²(x)/χ_l²

where c ∈ (0, 1) is a constant. They further gave a rule that a value of this statistic less than 1 is considered normal. This, however, can lead to erroneous results because it is possible to have either SPE(x) > δ² or T²(x) > χ_l² even if the above statistic is less than 1.

Figure 1. Comparison of confidence regions for four statistical tests.

3.2. Comparison with Other Quadratic-Form Indices. Tong and Crowe8 compared several univariate and collective statistical tests, including PCA and global χ-square tests. In this subsection, we compare the combined index with related tests available in the literature. The global χ-square test, which is the Mahalanobis distance, is defined as

D = x^T S^{-1} x ~ χ²(m)

For singular S with rank(S) = r < m, Mardia21 discussed the use of the pseudoinverse of S, leading to the Mahalanobis distance for the reduced-rank covariance matrix

D_r = x^T S^+ x ~ χ²(r)

where S^+ is the Moore-Penrose pseudoinverse. Mardia21 further showed that the distance D_r is independent of the type of pseudoinverse used. Hence, we choose the Moore-Penrose pseudoinverse. Denoting

t = [P P̃]^T x    (21)

as the principal components and

Λ = diag{λ_1, λ_2, ..., λ_m}    (22)

as the eigenvalues of S in descending order, the above fault detection indices can be unified as follows

d = t^T A^+ t = t^T diag{a_1, a_2, ..., a_m}^+ t = ∑_{i=1}^{m} a_i^+ t_i²    (23)

where A = diag{a_1, a_2, ..., a_m} and the diagonal weights a_i are chosen differently for each index.
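The unified form of eq 23 differs between the indices only in the choice of the weights a_i^+, which follow directly from the definitions of T², the SPE, D, and φ given above. The sketch below (illustrative names; P_full = [P P̃] and lam holds all m eigenvalues of S) restates those choices:

import numpy as np

def unified_index(x, P_full, lam, n_pc, delta2, chi2_l, which="phi"):
    """Evaluate d = sum_i a_i^+ t_i^2 (eq 23) with t = [P P~]^T x."""
    m = len(lam)
    t = P_full.T @ x                              # all m principal components, eq 21
    a_plus = np.zeros(m)
    if which == "T2":
        a_plus[:n_pc] = 1.0 / lam[:n_pc]
    elif which == "SPE":
        a_plus[n_pc:] = 1.0
    elif which == "D":                            # Mahalanobis distance (requires nonsingular S)
        a_plus[:] = 1.0 / lam
    elif which == "phi":                          # combined index, eqs 15 and 16
        a_plus[:n_pc] = 1.0 / (chi2_l * lam[:n_pc])
        a_plus[n_pc:] = 1.0 / delta2
    return float(np.sum(a_plus * t ** 2))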

Two major differences among these indices are the scaling of the principal components and the subspace covered by each index. The SPE, T2, and Dr indices cover subspaces of the entire measurement space. The Mahalanobis distance, D, if S is nonsingular, covers the entire space. The combined index always covers the entire space whether S is singular or not, thus providing a complete measure of the variability in the entire space. Early work by Takemura22 shows that, in practice, the Mahalanobis distance often gives confidence limits that are too wide. The T2 index in the PCS is better because it avoids inversion of small but nonzero eigenvalues. These aforementioned four indices (the SPE in the RS, T2 in the PCS, the Mahalanobis distance, and the combined index) are all collective statistical tests. They are preferable to univariate statistical tests because they take into account the correlation in the data. One common feature is that they are all quadratic-form indices. The confidence regions of the four indices are illustrated in Figure 1 for an example with two variables. The horizontal axis represents the PCS, while the vertical axis represents the RS. The two dashed lines represent the confidence region defined by the SPE in the RS, while the dotted lines represent the confidence region defined by T2 in the PCS. The confidence regions of the combined index and the Mahalanobis distance are represented by the two ellipses. From Figure 1, we observe that the region defined by the SPE and T2 jointly is quite similar to that of the combined index. However, the combined index defines an elliptical region that is more compatible with the assumption that the data are multinormal. Tong and Crowe8 point out two types of errors that occur when using the Mahalanobis distance for fault detection: the first is the false alarm, shown by point B, and the second is an undetected fault, shown by point A. Therefore, using the Mahalanobis distance will increase the chance not only of type I errors, but also of type II errors. This is because the Mahalanobis distance makes no distinction between noise in the data and the latent states. If the latent dimension of the data is equal to the dimension of the data space or if the number of principal components is equal to the data dimension, the SPE is no longer necessary, and T2, the Mahalanobis distance, and the combined index are equivalent. 4. Fault Reconstruction and Reconstructability 4.1. Fault Reconstruction Based on the Combined Index. The task of fault reconstruction is to estimate the normal values x* by eliminating the effect of a fault Fi on both the SPE and T2. If the fault direction or the fault subspace is already known, the normal portion (x*) can be reconstructed from the corrupted x using the PCA model. A reconstruction xj


is an adjustment of the sample vector along a given fault direction Ξ_j

x_j = x − Ξ_j f_j    (24)

where f_j is the estimated fault magnitude such that x_j is closest to the normal region. For the moment, we assume that the reconstruction direction Ξ_j is different from the true fault direction Ξ_i. To eliminate the effects of a fault on the SPE and T2, we propose the following criterion for fault reconstruction

min_{f_j} { SPE(x_j)/w_1 + T²(x_j)/w_2 }    (25)

where w_1 and w_2 are weighting factors that determine the relative importance between the SPE and T2. A natural selection of the weighting factors is the confidence limits, i.e., w_1 = δ², w_2 = χ_l², which leads to the index in eq 15. This weighting scheme using δ² and χ_l² makes sure that both indices are minimized to the same extent with respect to their control limits. If a fault mainly influences the SPE, then the SPE term in the above formula will be larger, and the T2 term will be smaller. The minimization tends to reduce the SPE more than T2. If a fault mainly influences T2, then the minimization will reduce the T2 term more than it does the SPE term. The reconstructed SPE and T2 should be within the normal control limits, i.e.

SPE(x_j) = ||x̃_j||² ≤ δ²    (26)

T²(x_j) = ||Λ^{-1/2} P^T x_j||² ≤ χ_l²    (27)

The fault estimate can be obtained by solving the optimization problem

f_j = arg min_{f_j} {φ_j}    (28)

subject to the constraints in eqs 26 and 27, where

φ_j = SPE(x_j)/δ² + T²(x_j)/χ_l² = x_j^T Φ x_j = (x − Ξ_j f_j)^T Φ (x − Ξ_j f_j)    (29)

Equation 28 is a quadratically constrained quadratic programming (QCQP) problem. Several software packages are available to solve this particular type of problem. Mittelmann23 gives a benchmark on several algorithms that solve the QCQP problem. We use sequential quadratic programming to solve this constrained problem. Because the nonlinear programming problem involves intensive computation, we examine several cases in which the computational load can be reduced. If φ_j ≤ 1 after reconstruction, then the constraints are naturally satisfied, so that the problem in eq 28 becomes unconstrained and has the following analytical solution

f_j = (Ξ_j^T Φ Ξ_j)^{-1} Ξ_j^T Φ x ≡ B_j x = B_j(x* + Ξ_i f)    (30)

where

B_j = (Ξ_j^T Φ Ξ_j)^{-1} Ξ_j^T Φ    (31)

Note that

B_j Ξ_j = I    (32)

From eq 30, the reconstructed x_j and φ_j can be calculated. If φ_j ≥ 2 after reconstruction, either the SPE or T2 violates its control limit, so the constraints cannot be satisfied. This is the case in which the fault cannot be reconstructed. If φ_j is between 1 and 2, the SPE and T2 need to be examined against their confidence limits. If both are within the normal region, then the unconstrained solution is feasible, and there is no need to solve the constrained problem. Otherwise, we need to solve the QCQP problem using the unconstrained solution as a starting point. Because Φ is positive definite and Ξ_j has full column rank, Ξ_j^T Φ Ξ_j is also positive definite and thus invertible. Therefore, the fault is always reconstructable using eq 30. If the combined index φ is used to detect and reconstruct a fault and the individual SPE and T2 limits need not be respected, the reconstruction problem is always unconstrained, and the solution is given in eq 30.
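The unconstrained solution of eqs 30-32 amounts to a small linear solve; a minimal sketch (illustrative names) is:

import numpy as np

def reconstruct(x, Xi_j, Phi):
    """Unconstrained reconstruction along the subspace Xi_j (eqs 24 and 30-32)."""
    B_j = np.linalg.solve(Xi_j.T @ Phi @ Xi_j, Xi_j.T @ Phi)   # eq 31
    f_j = B_j @ x                                              # eq 30
    x_j = x - Xi_j @ f_j                                       # eq 24
    phi_j = float(x_j @ Phi @ x_j)                             # reconstructed combined index
    return f_j, x_j, phi_j

If φ_j is between 1 and 2 and either individual index is still out of its limit, the constrained problem of eq 28 is solved instead, e.g., by a sequential quadratic programming routine, using this unconstrained f_j as the starting point.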

4.2. Reconstruction Error. The reconstruction error is obtained from eqs 5 and 24

x* − x_j = Ξ_j f_j − Ξ_i f    (33)

Substituting eq 30 into eq 33, we have

x* − x_j = Ξ_j B_j(x* + Ξ_i f) − Ξ_i f = Ξ_j B_j x* + (Ξ_j B_j − I) Ξ_i f    (34)

If Ξ_j = Ξ_i, i.e., the true fault subspace is used in reconstruction, then from eq 32

x* − x_j = Ξ_j B_j x*    (35)

and from eqs 30 and 32

f_j = B_j x* + f    (36)

From the above equations, we conclude the following:
(1) If Ξ_j = Ξ_i, then from eq 35, E(x* − x_j) = 0 because E(x*) = 0, i.e., x_j is an unbiased estimate of x*. From eq 36, f_j is an unbiased estimate of f.
(2) If Ξ_j = Ξ_i, then the reconstruction error is independent of the fault magnitude f but dependent on the fault direction or subspace Ξ_i. From eq 35, we obtain

x_j = (I − Ξ_j B_j)x*    (37)

Therefore

φ_j = x_j^T Φ x_j = x*^T (I − Ξ_j B_j)^T Φ (I − Ξ_j B_j) x* = x*^T (Φ − B_j^T Ξ_j^T Φ) x* = x*^T Φ x* − x*^T Φ Ξ_j B_j x*    (38)

Because


x*^T Φ x* = φ ≤ ζ²    (39)

and

Φ Ξ_j B_j = Φ Ξ_j (Ξ_j^T Φ Ξ_j)^{-1} Ξ_j^T Φ > 0    (40)

we have

φ_j < φ ≤ ζ²    (41)

Therefore, if the true fault direction is reconstructed, the reconstruction can always bring the process back to the normal region.
(3) If Ξ_j ≠ Ξ_i, then the second term in eq 34 cannot be canceled. The reconstruction error depends on the fault magnitude and the fault subspace Ξ_j. A large enough fault f will make φ_j > ζ², which can be used for fault identification.

5. Fault Identification

Following detection, we wish to identify the fault from a set of possible faults {F_j, j = 1, ..., J} and reconstruct the normal process condition. The task of fault identification is to find the true fault subspace from a set of possible fault subspaces, {F_j}. Different faults are classified by their subspaces. If a faulty observation can be reconstructed back to the normal region in a fault subspace, that fault subspace is identified as the true fault. It is possible for several fault subspaces to be identical. In this case, additional information beyond the subspace information is needed to further narrow down the root cause.

Dunia and Qin6 proposed a fault identification method by assuming each of the faults in {F_j} in turn and performing reconstruction. If the actual fault F_i is assumed, then the reconstructed sample vector x_i is closest to the model, and the SPE of x_i (SPE(x_i) = ||x̃_i||²) is brought within the control limit. However, the use of the SPE alone, which neglects the T2 index, can sometimes lead to erroneous identification. On the other hand, if multiple fault candidates satisfy the SPE limit after reconstruction, the use of T2 in conjunction with the SPE for identification can help to further narrow down the fault candidates. For instance, a reconstruction that minimizes the SPE can result in a T2 value outside the confidence limit. Consider the simple case illustrated in Figure 2. The original data set has only two variables, x1 and x2, and only one PC is needed in the PCA model. A fault in the direction of x2 shifts the operating point to the point denoted by the asterisk. Two single-sensor fault directions are ξ_1 = [1 0]^T and ξ_2 = [0 1]^T. The reconstruction along either direction results in zero SPE, making both faults valid candidates. However, the reconstruction along ξ_2 results in an out-of-limit T2. By considering T2 in conjunction with the SPE, ξ_1 can be uniquely identified as the true fault.

In this section, we identify faults on the basis of whether a reconstruction can bring the process back to the normal region defined by both the SPE and T2. If the reconstruction in a fault subspace leads to a feasible solution in the normal region, the fault subspace is considered the true fault. The normal process region can be defined by the SPE and T2 jointly or by φ alone. If the SPE and T2 are used, both need to be smaller than their control limits; the two constraints in eqs 26 and 27 have to be satisfied. If the combined index φ is used, Ξ_j is considered the true fault if φ_j ≤ ζ².

Figure 2. Reconstruction based on the SPE along either ξ1 or ξ2 results in zero SPE, but ξ2 leads to an out-of-limit T2 value.
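The check behind Figure 2 can be written generically as follows (a sketch with illustrative names): the fault magnitude is chosen to minimize the SPE along a single direction ξ, and the T2 of the reconstructed sample is then compared with its limit:

import numpy as np

def spe_reconstruct_and_check_t2(x, xi, P, Lam, chi2_l):
    """Minimize the SPE along direction xi, then test whether T2 of the result is in control."""
    m = P.shape[0]
    C_res = np.eye(m) - P @ P.T                     # projection onto the RS
    # SPE-minimizing magnitude (assumes xi has a nonzero projection on the RS)
    f = (xi @ C_res @ x) / (xi @ C_res @ xi)
    x_rec = x - f * xi                              # reconstructed sample
    spe = float(x_rec @ C_res @ x_rec)
    t2 = float(x_rec @ P @ np.linalg.inv(Lam) @ P.T @ x_rec)
    return spe, t2, t2 <= chi2_l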

There are two major advantages of the identification criterion based jointly on the SPE and T2. First, the use of the SPE and T2 helps identify faults that cannot be identified using the SPE alone, as shown in Figure 2. Second, the reconstructed indices have the same control limits as the detection indices. This method is superior to both the contribution plot and the identification index6 in that neither of them has well-defined confidence limits. The fault identification procedure can be described as follows (a computational sketch is given after the list).
(1) If a fault is detected using the SPE and T2, each fault in a set of candidate faults {F_j, j = 1, ..., J} is assumed and reconstructed using eq 28 with the constraints in eqs 26 and 27. If SPE(x_j) ≤ δ² and T²(x_j) ≤ χ_l², then F_j is identified as the true fault; otherwise, it is not.
(2) If the combined index φ is used in fault detection instead, then F_j is identified as the true fault if φ_j ≤ ζ² after reconstruction. Otherwise, F_j is not the true fault.
(3) If there is only one identified fault F_j, then the true fault is uniquely identified. If more than one fault satisfies the identification criterion, then a subset of faults is identified, but unique identification is not possible.
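A minimal sketch of criterion (2) above (illustrative names; candidates is a list of fault direction matrices Ξ_j, and Phi and ζ² come from section 3) is:

import numpy as np

def identify_fault(x, candidates, Phi, zeta2):
    """Return the indices of all candidate subspaces whose reconstruction brings
    the combined index back under its control limit."""
    identified = []
    for j, Xi_j in enumerate(candidates):
        B_j = np.linalg.solve(Xi_j.T @ Phi @ Xi_j, Xi_j.T @ Phi)   # eq 31
        x_j = x - Xi_j @ (B_j @ x)                                 # eqs 24 and 30
        phi_j = float(x_j @ Phi @ x_j)
        if phi_j <= zeta2:
            identified.append(j)
    return identified   # unique identification when exactly one index is returned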


6. Fault Subspace Extraction

In the preceding sections, we assume that the fault direction matrices {Ξ_j, j = 1, 2, ..., J} are known. Whereas for sensor faults the fault direction matrix Ξ_j is easily derived,9 such a derivation is not straightforward for process faults. In this section, we propose a new method for extracting fault directions from historical fault data. For each type of fault that occurred in the past, we collect a number of samples from the fault period and extract the fault direction using singular value decomposition. The extracted fault directions are used for fault identification when a new fault is detected on-line. If the fault has occurred in the past, it will be identified using the set of fault directions. If the fault has never occurred in the past, none of the faults in the set will be feasible for fault identification. In this case, the data for this new type of fault are used to extract the fault direction, which is then added to the fault set for future fault identification.

To explain the extraction of the fault subspace, we take a portion of the fault data X_j that are collected under fault Ξ_j. Denoting the kth sample of X_j as x(k), we have

X_j = [x(1), x(2), ..., x(k)]    (42)

According to eq 5

x(k) = x*(k) + Ξ_j f(k)    (43)

Because x*(k) is zero-mean, it can be neglected for a large enough f(k), i.e.

x(k) ≈ Ξjf(k)

(44)

Therefore

X_j^T = Ξ_j [f(1), f(2), ..., f(p)]    (45)

which shows that Ξ_j has the same column space as X_j^T. Note that the fault magnitude is only required in the extraction of the fault subspace. Once an accurate subspace is derived, this information is not needed in the on-line identification of faults with small magnitudes. Performing singular value decomposition on X_j^T, we obtain

X_j^T = U_j D_j V_j^T    (46)

where D_j contains the nonzero singular values in descending order, and we can choose Ξ_j = U_j. In practice, however, because of measurement noise and a limited number of samples, it is often difficult to differentiate between zero and nonzero singular values. Here, we propose a new method of determining the dimension of the fault subspace. We start from the singular vector of the largest singular value, U_j(:, 1), as the fault direction and perform reconstruction. If the reconstructed sample x_j is within the normal region, i.e., SPE(x_j) ≤ δ² and T²(x_j) ≤ χ_l², or φ_j ≤ ζ², then U_j(:, 1) can adequately represent and reconstruct the fault. Otherwise, the next singular vector is added until the reconstructed sample x_j is within the normal region. If, overall, l_j dimensions are required, the extracted fault direction for fault j is

Ξ_j = U_j(:, 1:l_j)    (47)

which is the first l_j columns of U_j. Note that l_j ≤ p. If the fault magnitude is not large enough for the normal process variation to be ignored, eq 44 is no longer valid. Because the normal process vector x*(k) has a zero mean, the faulty data set can be averaged so that

x̄*(k) + Ξ_j f̄(k) ≈ Ξ_j f̄(k)    (48)

Among many averaging schemes, two examples are given below.
(1) For real-time data, a moving-window average technique can be applied. Within each window, the samples are averaged to generate a new sample. The averaged data are used to extract the fault subspace.
(2) X_j can be resampled randomly to generate a number of subsets, and each subset is averaged. The averaged samples are then used to extract the fault subspace. This is similar to the resampling technique used in bootstrapping.
The SVD technique can also be applied to residual data to extract the fault subspace (Ξ̃_i) in the residual subspace. Because of the projection onto the residual subspace, the residual part (x̃*) is negligible compared with the fault displacement in the residual subspace, and there is no need to average the data.24 The residual fault subspace obtained by applying SVD provides an accurate estimate of the true residual fault subspace. If the reconstruction in the residual subspace is sufficient to identify the fault, then knowledge of Ξ̃_j is sufficient. However, it is impossible to obtain the fault subspace (S_i) from the residual subspace (S̃) because Ξ̃_j is a projection of Ξ_j to the RS.
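Putting the pieces of this section together, the subspace of a historical fault can be extracted roughly as follows. This is a sketch under stated assumptions only: NumPy available, rows of X_fault are faulty samples, Phi and zeta2 are the combined-index quantities of section 3, and moving-window averaging (scheme 1) is used:

import numpy as np

def extract_fault_subspace(X_fault, Phi, zeta2, window=5, max_dim=None):
    """Estimate Xi_j from faulty data X_fault following eqs 42-48."""
    n, m = X_fault.shape
    # moving-window averaging to suppress the normal variation x*(k), eq 48
    k = n // window                                            # number of complete windows (assumes n >= window)
    X_avg = X_fault[: k * window].reshape(k, window, m).mean(axis=1)
    U, svals, Vt = np.linalg.svd(X_avg.T, full_matrices=False)  # SVD of X_j^T, eq 46
    max_dim = max_dim or U.shape[1]
    for lj in range(1, max_dim + 1):
        Xi_j = U[:, :lj]                                        # eq 47: first lj left singular vectors
        B_j = np.linalg.solve(Xi_j.T @ Phi @ Xi_j, Xi_j.T @ Phi)
        recon = X_fault - (X_fault @ B_j.T) @ Xi_j.T            # reconstruct every faulty sample
        phis = np.einsum("ij,jk,ik->i", recon, Phi, recon)      # combined index of each sample
        if np.all(phis <= zeta2):
            return Xi_j           # smallest dimension that brings the faulty data back to normal
    return U[:, :max_dim]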

7. Simulation and Application Studies

In this section, we apply the proposed methods to two simulated systems and compare them with two existing methods, the contribution plot and the sensor validity index. Then, we apply the proposed methods to the monitoring of a RTA process in microelectronics manufacturing.

7.1. Single-Sensor Faults. We simulate a multivariate system with strong correlation in the sensors. The normal sensor data are generated using the equation

x = Gt + e    (49)

where x is the sensor vector, t is the vector of independent intrinsic states of the process, G is the measurement matrix from the intrinsic states to the sensors, and e is the noise. Five sensors and two intrinsic states are chosen in this simulation. G is randomly selected as

G = [ -0.1670  -0.1352
      -0.5671  -0.3695
      -0.1608  -0.1019
       0.7574  -0.0563
      -0.2258   0.9119 ]

The intrinsic state vector t is assumed to have an independent normal distribution with zero mean and unit variance. The sensor data x are then generated for building a PCA model. A constant bias sensor fault is created on sensor 4. Three methods are used to identify the fault after it is detected by the SPE and T2: the contribution plot, the sensor validity index, and the combined index. The contribution plot method is applied first, which calculates the contribution to the SPE from each sensor (Figure 3a). It is obvious that the first and third sensors make little contribution and thus can be excluded from the potential faulty sensors. The fourth sensor contributes the most to the fault (42.5%), and the second sensor contributes 41.2%, leading to erroneous identification of the second sensor as faulty. The reason is that sensors 2 and 4 are highly correlated; the fault in sensor 4 causes sensor 2 to show a large residual through the relation x̃ = C̃x. The matrix C̃ in this example is

C̃ = [  0.9554  -0.1419  -0.0418   0.1134   0.0888
       -0.1419   0.5408  -0.1357   0.4080   0.2082
       -0.0418  -0.1357   0.9598   0.1234   0.0561
        0.1134   0.4080   0.1234   0.4257   0.2235
        0.0888   0.2082   0.0561   0.2235   0.1183 ]
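The per-sensor SPE contributions behind Figure 3a follow one common convention, cont_i = x̃_i², i.e., the squared entries of C̃x; a minimal sketch is:

import numpy as np

def spe_contributions(x, P):
    """Per-sensor contributions to the SPE: squared entries of the residual x~ = C~ x."""
    C_res = np.eye(P.shape[0]) - P @ P.T
    x_res = C_res @ x
    return x_res ** 2     # a fault on one sensor is smeared over the others through C~

Because the off-diagonal entries of C̃ couple the sensors, a bias on sensor 4 inflates the contribution of sensor 2 as well, which is the effect discussed next.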

We can see that the fourth column has large values at the second (0.4080) and fourth (0.4257) elements, which

explains why the contribution to sensor 2 is large. Similarly, a sensor fault in sensor 2 will also strongly affect sensor 4. This is a case where a nondiagonal term in C̃ is of the same magnitude as the diagonal term in the same column, and the contribution plot approach leads to erroneous identification.

We then apply the SVI method9 to this simulated problem. The SVI is the ratio of the reconstructed SPE to the faulty SPE, which is calculated and plotted in Figure 3b. The SVI is always between 0 and 1. If the SVI is less than a given threshold, the assumed sensor is identified as the true faulty sensor because a significant reduction is achieved by reconstruction.9 From Figure 3b, the first and third sensors can be excluded because they show little reduction in the SPE. Both sensor 4 and sensor 5 reduce the SPE dramatically and are identified as faulty even if the threshold is chosen as 0.02. Consequently, the SVI cannot uniquely identify the true fault. The reason that the SVI fails to work in this example is that T2 is not considered. After reconstruction using the SPE, we observe that the reconstructed T2 for sensor 5 is out of the limit, and therefore, fault reconstruction along the direction of sensor 5 cannot bring T2 to the normal region. Another kind of misidentification might occur when the magnitude of the fault is large, leading to a very large SPE. When a fault direction is used to reconstruct the fault, the SVI might be smaller than the threshold because the faulty SPE is so large, but the reconstructed SPE would still be outside the control limit. In this case, the fault direction is erroneously identified.

The third method that we tested is the proposed combined index, in which the SPE and T2 are used for fault detection, and φ is used for fault reconstruction. Each sensor is assumed to be faulty in turn, and the reconstructed combined indices are plotted in Figure 3c. The combined indices are much larger than 2 for all sensors except sensor 4. Only reconstruction along sensor 4 brings both the SPE and T2 back to the normal region. Therefore, the faulty sensor is uniquely identified as sensor 4. Clearly, this method is superior for correctly identifying the fault.

Figure 3. Comparison of three fault identification methods for a single-sensor fault.

7.2. Multiple-Sensor Faults. The following example simulates a system with 10 sensors and four intrinsic states according to eq 49, i.e., G is a 10 × 4 matrix. A PCA model is built from the simulated data. We consider six possible faults, each a combination of two sensors, as shown in Table 1.

Table 1. Six Multiple-Sensor Faults

fault number    faulty sensors
1               1, 2
2               4, 5
3               7, 8
4               9, 10
5               4, 8
6               3, 6

A multiple-sensor fault is created on sensors 4 and 8, which is indexed as fault number 5. The corresponding fault direction matrix is

Ξ_5 = [ 0 0 0 1 0 0 0 0 0 0
        0 0 0 0 0 0 0 1 0 0 ]^T
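Direction matrices for sensor-fault candidates such as those in Table 1 are simply selections of identity columns; a small sketch (hypothetical helper name) is:

import numpy as np

def sensor_fault_directions(faulty_sensors, m):
    """Orthonormal fault direction matrix for a set of (1-based) sensor indices."""
    Xi = np.zeros((m, len(faulty_sensors)))
    for col, sensor in enumerate(faulty_sensors):
        Xi[sensor - 1, col] = 1.0
    return Xi

# the six candidates of Table 1; e.g., fault 5 corresponds to sensors 4 and 8
candidates = [sensor_fault_directions(s, 10)
              for s in ([1, 2], [4, 5], [7, 8], [9, 10], [4, 8], [3, 6])]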

Similarly, we have the fault subspace matrices for the other fault types. Again, the three methods are used. Figure 4a shows the contribution to the SPE from each sensor. Instead of identifying sensors 4 and 8 as faulty, the contribution plot shows that sensors 5 and 8 are faulty. Figure 4b shows identification using the SVI method. Although fault 5 reduces the SPE most dramatically, fault 3 reduces the SPE by 90%, which leads to ambiguous identification results. Further examination of the reconstructed SPE for fault 3 indicates that it is still outside the control limit, although a significant reduction in the SPE is achieved. Figure 4c shows the results using our method, where the combined index for fault 5 is less than 1 (i.e., the fault is reconstructed back to the normal region) whereas those for the other faults are much larger than 2. Therefore, the combined index uniquely identifies the true fault.

Whereas the above results are obtained for only one sample, we now conduct a test by randomly changing the fault magnitudes for sensors 4 and 8 to compare the success rates of the three methods in correctly identifying sensors 4 and 8 as faulty. In each of 10 simulations, the fault magnitude vector is normally distributed, and 100 faulty samples are generated. Different simulations use different fault magnitude mean and standard deviation values to cover the full fault subspace. All of these generated faults are successfully detected by the combined index. Then, the three methods are used to identify the faults. The threshold for the contribution plot is set at 0.2, and that for the SVI is set at 0.05, values that were adjusted to obtain the largest number of successful identifications. The success rates of the three methods in the 10 simulations are 30-55% for the contribution plot, 50-85% for the SVI, and


Figure 6. Seven variables monitored in the RTA process.

Figure 4. Comparison of three fault identification methods for a multiple-sensor fault.

80-95% for the combined index, making it the best method for fault identification. 7.3. Application to a RTA Process. In this subsection, we apply the proposed fault extraction and identification approach to a rapid thermal annealing process in semiconductor manufacturing. RTA is widely used to thermally treat semiconductor wafers after ion implantation. The resistance of the treated layer is thus determined in this process. The RTA process diagram is shown in Figure 5. Many process and operational faults occur in RTA and result in abnormally processed wafers, making it necessary to apply process monitoring techniques to detect and identify faults.

Figure 5. Diagram of a RTA tool.

Because the final quality data are not available until all processing steps are finished, we use the RTA tool variables such as wafer temperature and lamp power for process monitoring. Multiway PCA that considers time instants in a batch as variables is used for process monitoring because RTA is a batch process. We consider post-batch analysis because the batch time is only 2 min. Earlier process monitoring applications using multiway PCA include batch polymerization reactors,25 autoclave batch processes,26,27 and plasma etching.28 Figure 6 shows the seven monitored variables in one batch. Because the first variable is the set point of the wafer temperature, which is fixed for every wafer, we use the other six variables for fault detection and identification: top and bottom lamp powers, oxygen flow, nitrogen flow, wafer temperature, and tube temperature. The set point is used to align the data for different batches. Five different types of faults are


Figure 7. Fault subspace extraction for fault type 1. The top two plots are the SPE and T2 of the faulty data. The middle plots are the reconstructed SPE and T2 using the first fault dimension. The bottom two plots are the reconstruction using the first two dimensions, which reconstruct the faulty data back to the normal region.

Figure 8. Extracted fault subspace for fault type 1.

historically detected and classified. It is desirable to identify a new fault after it is detected. Data from 1800 normal wafers are obtained, and a multiway PCA model is built for on-line fault detection. In the meantime, we collect faulty data for the five types of faults. The method proposed in section 6 is used to extract the fault subspace for each fault type. Figure 7 shows the fault subspace extraction of fault type 1. The top two plots show the SPE and T2 of the faulty batches with fault type 1. SVD is applied to the faulty data to extract the fault subspace. The first and second fault dimensions, U(:, 1) and U(:, 2), are shown in Figure 8a and b, respectively. The reconstructed SPE and T2 indices for each batch using the first direction U(:, 1) are shown in the middle two plots in Figure 7. Because this reconstruction does not bring the process to the normal region as indicated by the T2 index, the second direction U(:, 2) is added for reconstruction. The reconstruction results along the subspace spanned by U(:,1: 2) are shown in the bottom two plots of Figure 7. The reconstructed SPE and T2 are completely within the normal region. Therefore, the fault subspace of fault type 1 has a dimension of 2 and the basis matrix Ξ1 ) U(:, 1:2), which will be used for future fault identification. Similarly, the subspaces of the other four types of faults are extracted and added to the candidate fault set. After a new fault is detected, fault identification is conducted by reconstructing each of the five fault subspaces. Figure 9 shows the identification result for a known fault of type 2. The reconstructed combined index for fault 2 is less than 1, whereas those for the other faults are much larger than 2. Therefore, only reconstruction along the subspace of fault 2 can bring

Figure 9. Fault identification result for a new fault of type 2.

the process back to the normal region, which uniquely identifies the fault. Figure 10 shows that fault type 3 is also successfully identified using the combined index. Similar results for the other fault types are also obtained.

8. Conclusions

In this paper, we have developed a new combined index for fault detection, reconstruction, and identification. We first discussed fault detection and detectability using the SPE and T2 and showed that a fault is always necessarily detectable by the SPE and T2 jointly. The fault magnitude required for a fault to be sufficiently detected is also established. The confidence limit of the combined index φ is derived for fault detection. This index, φ, is compared with other collective quadratic


Figure 10. Fault identification result for a new fault of type 3.

statistical tests. Because φ provides a complete measure of the variability in the entire space, it is convenient in practice to use φ for fault detection. Fault reconstruction using the combined index eliminates the effects of a fault on the SPE and T2 in a balanced way. If the SPE and T2 are used to detect and reconstruct faults, a QCQP problem has to be formulated and solved. In most cases, the problem can be simplified to a unconstrained problem. If φ is used for fault detection and reconstruction, an unconstrained problem is solved. It is shown that (i) the reconstruction leads to an unbiased estimate of the normal process vector and fault magnitude; (ii) the reconstructed φ is within the normal region if the true fault direction is assumed; and (iii) a fault is always reconstructable by using φ. A new procedure for fault identification is proposed. The approach is superior to the other existing methods in two simulation examples: single-sensor and multiplesensor faults. A fault subspace extraction approach is proposed that applies singular value decomposition to fault data. Two averaging methods are used to reduce contributions due to normal process variation. The approach is successfully applied to the monitoring of a RTA process. Fault subspaces are extracted for five typical process faults. New faults are successfully detected and identified using the reconstruction-based method. It should be noted that this method requires a number of faulty samples to obtain an accurate estimate of the fault subspaces, similar to the excitation requirement in system identification. Acknowledgment Financial support for this project from the National Science Foundation under Grant CTS-9814340, from the Texas Higher Education Cordinating Board, and from Advanced Micro Devices is gratefully acknowledged. Notation a, b ) scalars a, b ) vectors

A, B ) matrices ai ) the diagonal elements of A C ˜ ) projection matrix to the residual subspace D ) Mahalanobis distance Dr ) Mahalanobis distance for reduced-rank covariance matrix D ) the matrix whose diagonal elements contain the singular values e ) noise vector E{.} ) expectation E ) residual matrix F ) fault f ) fault magnitude scalar f ) fault magnitude vector G ) measurement matrix from intrinsic states to the sensors I ) identity matrix l ) number of principal components li ) dimension of the ith fault subspace m ) number of sensors n ) number of samples N ) null space P ) loading matrix r ) rank of a covariance matrix R ) real S ) covariance matrix S ) principal component subspace Si ) the ith fault subspace S˜ ) residual subspace t ) score vector T ) score matrix U, V ) singular vectors w1, w2 ) weighting factors x ) sample vector xj ) reconstructed sample vector when Fj is assumed Greek Letters and Symbols χl2 ) confidence limit for T2 with l degrees of freedom δ2 ) confidence limit for the SPE λ ) eigenvalue Λ ) diagonal matrix of eigenvalues Φ ) matrix used to calculate the combined index φ ) combined index ζ2 ) confidence limit for φ νj ) multitude of an eigenvalue ξ ) fault direction (one-dimensional) Ξ ) fault direction matrix (multidimensional) ∈ ) belongs to ∼ ) has the distribution of ||.|| ) Euclidean norm {.} ) set Superscripts and Subscripts ˜ ) projected onto the residual subspace ˆ ) projected onto the principal component subspace - ) average * ) uncorrupted portion + ) Moore-Penrose pseudoinverse i ) subscript for the (actual) fault j ) subscript for the (assumed) fault T ) transpose Abbreviations max ) maximum min ) minimum PCA ) principal component analysis PCS ) principal component subspace RS ) residual subspace RTA ) rapid thermal annealing SPE ) squared prediction error SVD ) singular value decomposition


SVI ) sensor validity index tr ) trace

Literature Cited (1) Kresta, J.; MacGregor, J. F.; Marlin, T. E. Multivariable Statistical Monitoring of Process Operating Performance. Can. J. Chem. Eng. 1991, 69, 35-47. (2) De Veaux, R. D.; Ungar, L. H.; Vinson, J. M. Statistical Approaches to Fault Analysis in Multivariate Process Control. In Proceedings of the American Control Conference; Institute of Electrical and Electronics Engineers: New York, 1995; pp 12741276. (3) Kourti, T.; MacGregor, J. F. Multivariable SPC Methods for Monitoring and Diagnosing of Process Performance. In Proceedings of the Process Systems Engineering (PSE) Conference; CACHE: Austin, TX, 1995; pp 739-746. (4) Raich, A.; Cinar, A. Statistical process monitoring and disturbance diagnosis in multivariate continuous processes. AIChE J. 1996, 42, 995-1009. (5) Dunia, R.; Qin, S. J. Joint diagnosis of process and sensor faults using principal component analysis. Control Eng. Pract. 1998, 6, 457-469. (6) Dunia, R.; Qin, S. J. Subspace approach to multidimensional fault identification and reconstruction. AIChE J. 1998, 44, 18131831. (7) Dunia, R.; Qin, S. J. A unified geometric approach to process and sensor fault identification: the unidimensional fault case. Comput. Chem. 1998, 22, 927-943. (8) Tong, H.; Crowe, C. M. Detection of Gross Errors in Data Reconciliation by Principal Component Analysis. AIChE J. 1995, 41, 1712-1722. (9) Dunia, R.; Qin, J.; Edgar, T. F.; McAvoy, T. J. Identification of Faulty Sensors Using Principal Component Analysis. AIChE J. 1996, 42, 2797-2812. (10) Stork, C.; Veltkamp, D.; Kowalski, B. Identification of multiple sensor disturbances during process monitoring. Anal. Chem. 1997, 69, 5031-5036. (11) Piovoso, M.; Kosanovich, K. Applications of multivariate statistical methods to process monitoring and controller design. Int. J. Control 1994, 59, 743-765. (12) Chen, G.; McAvoy, T. J.; Piovoso, M. Multivariate statistical controller for on-line quality improvement. J. Process Control 1998, 8, 139-149. (13) Wise, B.; Gallagher, N. The process chemometrics approach to process monitoring and fault detection. J. Process Control 1996, 6, 329-348. (14) Miller, P.; Swanson, R.; Heckler, C. Contribution plots: The missing link in multivariate quality control. Presented at the Fall Conference of the ASQC and ASA, Milwaukee, WI, Nov 1993.

(15) Westerhuis, J. A.; Gurden, S.; Smilde, A. Generalized contribution plots in multivariate statistical process monitoring. Chemom. Intell. Lab. Syst. 2000, 51, 95-114. (16) Gertler, J.; Li, W.; Huang, Y.; McAvoy, T. IsolationEnhanced Principal Component Analysis. AIChE J. 1999, 45 (2), 323-334. (17) Gertler, J.; Singer, D. A New Structural Framework for Parity Equation Based Failure Detection and Isolation. Automatica 1990, 26, 381-388. (18) Jackson, J. E.; Mudholkar, G. S. Control procedures for residuals associated with principal component analysis. Technometrics 1979, 21, 341-349. (19) Yue, H.; Qin, S. Fault reconstruction and identification for industrial processes. Presented at the AIChE Annual Meeting, Miami Beach, FL, Nov 15-20, 1998. (20) Box, G. Some theorems on quadratic forms applied in the study of analysis of variance problems, I. Effect of inequality of variance in the one-way classification. Ann. Math. Stat. 1954, 25, 290-302. (21) Mardia, K. Mahalanobis Distances and Angles. In Multivariate Analysis IV; Krishnaiah, P., Ed.; North-Holland: Amsterdam, The Netherlands, 1977; pp 495-511. (22) Takemura, A. A principal decomposition of Hotelling’s T2 statistic. In Multivariate Analysis VI; Krishnaiah, P., Ed.; NorthHolland: Amsterdam, The Netherlands, 1985; pp 583-597. (23) Mittelmann, H. Benchmarking Interior Point LP/QP Solvers. Opt. Math. Software 1999, 12, 655-670. (24) Valle-Cervantes, S.; Qin, S.; Piovoso, M. Extracting fault subspaces for fault identification of a polyester film process. In Proceedings of the American Control Conference; Institute of Electrical and Electronics Engineers: New York, 2001, pp 44664471. (25) Nomikos, P.; MacGregor, J. Multivariate SPC charts for monitoring batch processes. Technometrics 1995, 37 (1), 41-59. (26) Nomikos, P. Statistical monitoring of batch processes. In Preprints of the Joint Statistical Meeting; Anaheim, CA, 1997. (27) Kosanovich, K.; Dahl, K.; Piovoso, M. Improved Process Understanding Using Multiway Principal Component Analysis. Ind. Eng. Chem. Res. 1996, 35, 138. (28) Yue, H.; Qin, S.; Markle, R.; Nauert, C.; Gatto, M. Fault detection of plasma etchers using optical emission spectra. IEEE Trans. Semicond. Manuf. 2000, 13, 374-385.

Received for review February 1, 2000 Accepted July 4, 2001 IE000141+