Article
Loading-based principal component selection for PCA integrated with support vector data description. Bei Wang, Xuefeng Yan, and Qingchao Jiang. Ind. Eng. Chem. Res., Just Accepted Manuscript. DOI: 10.1021/ie503618r. Publication Date (Web): January 12, 2015.
Loading-based principal component selection for PCA integrated with support vector data description
Bei Wang, Xuefeng Yan*, Qingchao Jiang
(Key Laboratory of Advanced Control and Optimization for Chemical Processes of Ministry of Education, East China University of Science and Technology, Shanghai 200237, P. R. China)
*Corresponding author: Xuefeng Yan. Email: [email protected]. Tel/Fax: +86-21-64251036. Address: P.O. Box 293, MeiLong Road No. 130, Shanghai 200237, P. R. China.
Abstract
Given that numerous variables exist in industrial processes, it is difficult to identify the real relationships among the variables. In the principal component analysis (PCA) approach, the loading matrix reveals the inner relations between variables and components, and different components contain different amounts of information about a given variable. Therefore, this study proposes a novel method that selects principal components (PCs) for each variable separately according to the loadings. The PCs containing more information about a given variable are selected to construct the subspace for that variable, and the support vector data description (SVDD) technique is then adopted to examine the variations in all subspaces. Additionally, a corresponding contribution plot is developed to identify the root cause. Finally, two case studies, a numerical example and the Tennessee Eastman (TE) system, demonstrate the effectiveness of the proposed method, with other PCA-based methods listed for comparison.
Keywords: principal component analysis; loading matrix; multi-block; support vector data description
Introduction
With the development of the processing and manufacturing industries, high-quality products with low rejection rates are needed to meet product standards and requirements. To guarantee the normal operation of industrial processes, fault detection and diagnosis play significant roles in finding and correcting faults.1,2 Owing to improvements in data storage capability, multivariate statistical process monitoring (MSPM), a class of data-based methods, has made rapid progress in monitoring a range of industrial processes.3-8 Among the various MSPM methods, principal component analysis (PCA) is considered the most fundamental one for industrial process monitoring,9,10 because PCA provides a good explanation and description of the process. Through projecting the original data onto a low-dimensional space and preserving the main correlation structure, PCA accomplishes dimension reduction and characterizes the variations of the process. Based on the PCA technique, many improvements and applications have been proposed to deal with different operating conditions. For nonlinear systems, Diamantaras and Kung originally constructed a nonlinear PCA model integrated with neural networks.11 Later, Schölkopf proposed kernel PCA, which uses integral operator kernel functions,12 and this approach is still widely used today.13,14 For dynamic systems, the original PCA model was extended to dynamic PCA (DPCA),15,16 which uses several observations before the current sample to supplement the information at the present moment. For example, Ku et al.17 proposed introducing the well-known 'time lag shift' method into PCA, generating a simple dynamic PCA model. Besides, multiway PCA, which extracts information from multivariate trajectory data, was developed to monitor batch processes.18,19 Many other PCA-based methods have been developed to solve various monitoring problems.20-24
Most of these methods integrated PCA with other approaches to improve performance; however, few studies have focused on the PCA method itself. PCA still has a major weakness, namely, the selection of the retained PCs. In the past decades, several strategies have been proposed to help choose adequate PCs. The cumulative percent variance (CPV)25,26 is the most commonly used one, which preserves PCs according to the percent variance: when the cumulative percent variance is greater than a set value, the corresponding first several PCs are retained. Cross-validation determines the optimal PCs through comparing predicted and previous models, which are respectively constructed from two parts of the training samples.27,28 The variance of the reconstruction error (VRE) criterion holds that the PCs are optimal only when the fault reconstruction error is at a minimum.29,30 The fault signal-to-noise ratio (SNR) method selects the optimal PCs through maximizing the fault SNR.31 Most of these classical methods simply select the first several PCs with larger variance, ignoring the rest. However, the discarded PCs also contain information about the monitored process. Jolliffe demonstrated that PCs with small variance can be as important as those with large variance.32 In addition, top-down PC regression has been shown to produce lower-quality models compared with other methods.33 Therefore, how to select PCs and utilize the PC information for monitoring remains an open problem.

In industrial processes, the multi-block strategy has gained increasing popularity for simplifying the monitoring task. MacGregor et al. proposed a multi-block projection method that established monitoring charts for individual process subsections in order to detect events earlier.34 Smilde et al. selected PCs for each block separately and generalized this method to incorporate multiway blocks.35 Then Qin et al. constructed four kinds of statistics, block and total T^2 and Q, for decentralized monitoring.36 Based on this, a new fault detection and identification approach based on
multi-block partial least squares was given by Choi and Lee.37 Ge and Song suggested a two-level multi-block model combining the ICA method with the PCA method to improve performance.38 Several years later, Ge et al. proposed another novel monitoring method (BSPCA), which divides the nonlinear space into several approximately linear subspaces and combines them through Bayesian inference.39 Meanwhile, Tong et al. divided the variables into four subspaces according to their relevance or irrelevance to the principal component subspace and the residual subspace, developing the FSCB method.40 Wang et al. developed a KL-MBPCA method that separates the variables based on a mathematical statistics measure, the Kullback-Leibler divergence.41 These multi-block methods usually decompose the process, the original variable space, or the extracted component space to simplify the monitoring task, but these division objects are complicated and massive, especially in plant-wide processes, so the division procedures are not easy. In addition, some division processes also need prior knowledge, which is difficult to obtain and is not always available in practical systems. Therefore, choosing a suitable division object is necessary for the application of the multi-block strategy.

The aim of process monitoring is to differentiate normal samples from faulty ones; thus, some classification methods have been applied to the monitoring task, such as neural-network-based approaches,42 discriminant analysis,43 and the self-organizing map method.44 Support vector data description (SVDD),45,46 proposed by Tax and Duin, is a relatively new data description method, which constructs an SVDD model with normal data samples to discriminate faulty ones. Recently, it has been used in process monitoring.47,48 Ge et al. utilized SVDD to obtain tight confidence limits for associated monitoring statistics.49 Liu et al. applied SVDD to transform nonlinear PCs into a feature space.50 Jiang et al. used SVDD to examine the variations in all subspaces.51 Later, they developed a
method (JIR-PCA-SVDD) that selects PCs online based on kernel density estimation and then utilizes the SVDD strategy to generate a new statistic for process monitoring.52

In the PCA model, the generated loading matrix is an important part that describes the inner relations between the variables and the components. Each element of the loadings indicates the significance of the corresponding variable, and this significance is ultimately reflected in the statistics for process monitoring. In addition, the loading matrix has a simpler construction and a lower dimension compared with the mass of original data samples. However, relatively little research has focused on the loadings in the PCA model, even though they contain a great deal of information. In the current article, the loadings are utilized to select PCs for each variable, and the selected PCs containing more information about the corresponding variable are termed particular PCs. Herein, this article proposes a PCA-based multi-block method integrating the particular PCs with support vector data description (PPCA-SVDD) in order to monitor the process in a comprehensive way. In the proposed method, first, the loading vectors with comparatively large (unsigned) elements corresponding to the variable in question are selected to construct the subspace. Each variable thus has its own loading subspace as well as its own PC subspace. The selection of loading vectors is based on the significance of the vectors to the variables rather than on the variance; thus, loading vectors with small variance but more information are preserved. For monitoring purposes, T^2 statistics for each subspace are generated, and SVDD is adopted to examine the variations in all subspaces. Given that the essence of process monitoring is to monitor the variation of the variables, detecting the state of each variable can provide more timely and accurate monitoring results.
After a fault is detected, the corresponding fault diagnosis should be performed to correct the process. The contribution plot is a widely used method for finding root causes.1 Based on the idea of PPCA-SVDD, the traditional contribution plot is constructed with the particular PCs.

The rest of this article is organized as follows. Section 2 briefly reviews PCA theory and the SVDD technique, followed by a numerical example illustrating the motivation of the proposed PPCA-SVDD method. Section 3 gives a comprehensive introduction to the proposed method. The implementation procedures are presented in section 4. In section 5, the superiority of PPCA-SVDD is demonstrated through two case studies. Finally, conclusions are drawn in section 6.

Preliminaries
This section briefly reviews PCA theory and the SVDD technique, followed by a numerical example illustrating the motivation of the proposed PPCA-SVDD.

2.1. Principal component analysis
PCA is a powerful dimension-reduction technique that projects original data onto a low-dimensional space and generates new components to represent the process state.1,9 Assume a data matrix X ∈ R^(N×M) with N samples and M variables, in which each column is scaled to zero mean and unit variance. The loading matrix P ∈ R^(M×M) is obtained through singular value decomposition (SVD) of the covariance matrix S:

S = X^T X / (n − 1) = P Λ P^T    (1)

where Λ = diag(λ_1, λ_2, ..., λ_M) is the diagonal matrix with eigenvalues arranged in descending order and P = [p_1, p_2, ..., p_M] is the loading matrix. According to the CPV method, defined as4
(Σ_{i=1}^{k} λ_i / Σ_{i=1}^{M} λ_i) × 100% ≥ 85%    (2)
the first k PCs are preserved to construct the PCA model. Through projecting the data matrix X onto the reduced loading matrix P̂ ∈ R^(M×k), the PC score matrix T ∈ R^(N×k) is produced as9

T = X P̂    (3)

where the i-th column of T is the i-th PC. The data matrix X can then be decomposed as39

X = T P̂^T + E    (4)

where E is a residual matrix. For an observed sample X_f that has been scaled, the statistic corresponding to the PC space is calculated for process monitoring as

T^2 = X_f^T P̂ Λ_k^{-1} P̂^T X_f    (5)

where Λ_k = diag(λ_1, λ_2, ..., λ_k). The confidence limit determining whether the process is normal is defined as

T^2 ≤ [k(n − 1)/(n − k)] F_{k,(n−k),α}    (6)

where F_{k,(n−k),α} is an F-distribution with k and n − k degrees of freedom at significance level
α.53

2.2. Support vector data description
The main idea of SVDD is to construct a spherically shaped decision boundary described by a set of support vectors (SVs). The produced hypersphere has minimal volume (minimal radius) and contains all the data objects. For the original training data {y_u, u = 1, 2, ..., n} ⊂ R^M, a nonlinear function Φ: y → F is employed to transform the data into a new feature space. The n here is the
number of samples. The sphere, described by its center a and radius R, is then obtained by solving the following optimization problem45

min_{R,a,ξ}  R^2 + C Σ_{u=1}^{n} ξ_u
s.t.  ||Φ(y_u) − a||^2 ≤ R^2 + ξ_u    (7)
where C gives the trade-off between the volume of the sphere and the number of target objects rejected, and ξ_u is a slack variable allowing some data points to lie outside the sphere. The optimization problem can be expressed in dual form as46

max_{α_u}  Σ_{u=1}^{n} α_u K(y_u, y_u) − Σ_{u=1}^{n} Σ_{v=1}^{n} α_u α_v K(y_u, y_v)
s.t.  0 ≤ α_u ≤ C,  Σ_{u=1}^{n} α_u = 1    (8)

where α_u is a Lagrange multiplier and K(y_u, y_v) = ⟨Φ(y_u), Φ(y_v)⟩ is a kernel function for computing the inner product in the feature space, given as

K(y_u, y_v) = exp(−||y_u − y_v||^2 / s^2)    (9)

For very large values of s, exp(−||y_u − y_v||^2 / s^2) ≈ 1; for very small s, the result approximates zero whenever u ≠ v. The samples y_u with α_u > 0 are termed SVs. The squared distance from the center a to the boundary is written as46

R^2 = K(y_k, y_k) − 2 Σ_{u=1}^{n} α_u K(y_u, y_k) + Σ_{u=1}^{n} Σ_{v=1}^{n} α_u α_v K(y_u, y_v)    (10)

where y_k is any SV. For a new sample z, the squared distance to the center of the sphere can be calculated as52

||z − a||^2 = K(z, z) − 2 Σ_{u=1}^{n} α_u K(z, y_u) + Σ_{u=1}^{n} Σ_{v=1}^{n} α_u α_v K(y_u, y_v)    (11)
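To make Eqs. (7)–(11) concrete, the dual problem can be handed to a generic constrained optimizer. The sketch below is not the authors' implementation: the synthetic 2-D data, the kernel width s, and the value of C are all illustrative choices (the TE study later uses a much larger s), and SciPy's SLSQP solver stands in for a dedicated QP routine.

```python
import numpy as np
from scipy.optimize import minimize

def rbf(A, B, s):
    # K(y_u, y_v) = exp(-||y_u - y_v||^2 / s^2), Eq. (9)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / s ** 2)

def svdd_train(Y, C=1.0, s=4.0):
    """Solve the dual of Eq. (8) for the multipliers alpha and radius R^2."""
    n = Y.shape[0]
    K = rbf(Y, Y, s)
    # maximize sum_u a_u K_uu - a' K a  <=>  minimize a' K a - sum_u a_u K_uu
    obj = lambda a: a @ K @ a - a @ np.diag(K)
    res = minimize(obj, np.full(n, 1.0 / n), bounds=[(0.0, C)] * n,
                   constraints={"type": "eq", "fun": lambda a: a.sum() - 1.0})
    alpha = res.x
    k = int(np.argmax(alpha))                  # a support vector (alpha_k > 0)
    # squared radius, Eq. (10), evaluated at the support vector y_k
    R2 = K[k, k] - 2.0 * alpha @ K[:, k] + alpha @ K @ alpha
    return alpha, R2

def sq_dist(z, Y, alpha, s=4.0):
    """Squared distance ||z - a||^2 of a new sample to the center, Eq. (11)."""
    Kz = rbf(Y, z[None, :], s).ravel()
    K = rbf(Y, Y, s)
    return 1.0 - 2.0 * alpha @ Kz + alpha @ K @ alpha  # K(z, z) = 1 for RBF

rng = np.random.default_rng(0)
Y = rng.normal(size=(60, 2))                    # "normal" training data
alpha, R2 = svdd_train(Y)
print(sq_dist(np.zeros(2), Y, alpha) <= R2)     # sample near the data
print(sq_dist(np.full(2, 8.0), Y, alpha) > R2)  # sample far from the data
```

Because the RBF kernel gives K(z, z) = 1, that constant replaces the first term of Eq. (11) in `sq_dist`; shrinking s tightens the boundary around the training data, at the interpretability cost noted later in the article.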
2.3. Problem statement and motivational example
In the traditional PCA model, the PCs are usually selected in two steps: first, the components are sorted according to the eigenvalues, which are arranged in descending order; second, the PCs are determined according to some rule, such as CPV or VRE. These rules always take the first several PCs as dominant ones to represent the state of the process, while the remaining PCs are discarded. However, each PC has a different degree of monitoring performance for a given fault. The retained PCs do not necessarily capture the maximum process variation, which may be reflected by the discarded ones. Thus, the traditional way of PC selection is inappropriate. The loadings in PCA can effectively reveal the inner relations between the variables and the PCs, with the detailed relationship written as
T = XP = [x_1, x_2, ..., x_i, ..., x_M] ×
| p_{1,1}  p_{1,2}  ...  p_{1,M} |
| p_{2,1}  p_{2,2}  ...  p_{2,M} |
|   ...      ...  p_{i,j}  ...   |
| p_{M,1}  p_{M,2}  ...  p_{M,M} |    (12)

As seen from this formula, the elements (p_{i,1}, p_{i,2}, ..., p_{i,M}) show the significance of variable x_i in each PC. Since the loading matrix is obtained from SVD, its column vectors [p_1, p_2, ..., p_M] are unit vectors orthogonal to one another, and each element p_{i,j} lies between −1 and 1. When p_{i,j} is close to −1 or 1, it represents a high significance of the corresponding variable x_i. In addition, the diagonal matrix Λ also influences the final T^2 statistic according to Eq. (5). Therefore, the characteristics of the loadings and the diagonal matrix can be utilized for selecting PCs. In this way, the selected PCs can accurately monitor the variation of each variable.
For further analysis of PCA performance and of the necessity of the proposed selection strategy, a simple numerical example with five variables is given as follows:

[x_1]   [0.57  0.85  1.20]           [e_1]
[x_2]   [0.62  0.71  0.84]  [r_1]    [e_2]
[x_3] = [0.95  0.75  0.65]  [r_2]  + [e_3]    (13)
[x_4]   [1.10  0.92  0.64]  [r_3]    [e_4]
[x_5]   [0.25  0.87  1.50]           [e_5]

where [r_1, r_2, r_3]^T follow a Gaussian distribution with zero mean and a standard deviation of 1.2, and the noise terms [e_1, e_2, ..., e_5]^T follow a zero-mean normal distribution with a standard deviation of 0.4. First, 500 samples under normal conditions are collected for PCA construction. After generating the loading matrix, the first two PCs, occupying >95% cumulative variance, are deemed the dominant components for monitoring the process based on the CPV rule. Two simulated faults are prepared as follows:

Case 1: a step change of 2 is added to x_2 from sample 151 to the end;
Case 2: a ramp change of 0.015 × (i − 150) is added to x_4 from sample 151 to 350.

Each case generates 500 samples for evaluating the performance of the PCA method, with 99% confidence limits set for fair judgment. Meanwhile, a normal dataset containing 500 samples is also collected as testing data. The monitoring results for the two fault cases by PCA are displayed in Fig. 1 (a) and (b), respectively. The monitoring statistics in both figures stay below the confidence limits even after the faults occur; obviously, traditional PCA can hardly detect these faults. To analyze the fundamental cause, the loadings and eigenvalues are tabulated in Table 1, and the T^2 statistic for each individual PC is constructed as

T_i^2 = x^T p_i λ_i^{-1} p_i^T x    (14)
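This motivational experiment is straightforward to reproduce. The following sketch is a re-implementation under the stated distributional assumptions, not the authors' code (the random seed is arbitrary): it generates data from Eq. (13), builds the PCA model on normal data, injects the case 1 step fault, and evaluates the per-PC statistic of Eq. (14).

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[0.57, 0.85, 1.20],
              [0.62, 0.71, 0.84],
              [0.95, 0.75, 0.65],
              [1.10, 0.92, 0.64],
              [0.25, 0.87, 1.50]])

def simulate(n):
    # x = A r + e, Eq. (13): r ~ N(0, 1.2^2), e ~ N(0, 0.4^2)
    r = rng.normal(0.0, 1.2, size=(n, 3))
    e = rng.normal(0.0, 0.4, size=(n, 5))
    return r @ A.T + e

X = simulate(500)                          # normal data for model building
mu, sd = X.mean(axis=0), X.std(axis=0)
Xs = (X - mu) / sd
S = Xs.T @ Xs / (len(Xs) - 1)              # covariance matrix, Eq. (1)
lam, P = np.linalg.eigh(S)
lam, P = lam[::-1], P[:, ::-1]             # eigenvalues in descending order

# Fault case 1: step change of 2 on x2 from sample 151 onward
Xf = simulate(500)
Xf[150:, 1] += 2.0
Xfs = (Xf - mu) / sd

# Per-PC statistic of Eq. (14): column i of T2 holds T_i^2 for every sample
T2 = (Xfs @ P) ** 2 / lam

# Ratio of post-fault to pre-fault mean T_i^2 along each PC
ratios = T2[150:].mean(axis=0) / T2[:150].mean(axis=0)
print(np.round(ratios, 2))
```

Consistent with Fig. 2, the ratio is large only for a low-variance PC (the one whose loading element for x_2 dominates), while the first PC barely responds to the step.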
where p_i and λ_i are the i-th loading vector and the i-th eigenvalue in the PCA model. The T_i^2 statistic directly shows the ability of each PC to monitor variation of the variables, with the detailed monitoring results shown in Fig. 2 and Fig. 3, respectively. It can be found from Fig. 2 that the third PC detects the first fault case successfully, with statistics exceeding the confidence limit after sample 151. This fault is caused by the step change of variable x_2. As seen from Table 1, variable x_2 corresponds to the elements (0.4589, 0.0510, 0.8779, 0.0505, 0.1165) in the five PCs, of which the third element has the highest value (0.8779). That is why PC 3 shows good monitoring performance for this fault case. However, PC 1, whose corresponding element has the second largest value (0.4589), does not provide any fault information; considering the expression of the T^2 statistic (Eq. (14)), the eigenvalue also affects the monitoring result.

Similarly, PC 4 in Fig. 3 can detect fault case 2 because p_{4,4} has the largest magnitude among (p_{4,1}, p_{4,2}, ..., p_{4,5}), the elements corresponding to the fault variable x_4. Meanwhile, the fact that PC 3, with a small loading value, also has detection ability is attributed to the small eigenvalue λ_3, while PC 1, corresponding to p_{4,1} with a comparatively larger magnitude, cannot detect the fault. Hence, it is wise to find particular PCs for every variable in the process based on the loadings and eigenvalues. Such PC selection helps reveal more accurate and otherwise submerged information. This motivational example will be further analyzed in the subsequent sections.
Table 1. The data in the loading matrix and eigenvalues

Row    p1        p2        p3        p4        p5        Eigenvalue λ
1      0.4569    0.2824   -0.3396   -0.2324    0.7363    4.5266
2      0.4589    0.0510    0.8779    0.0505    0.1165    0.3324
3      0.4463   -0.4578   -0.2503    0.7270    0.0126    0.0588
4      0.4421   -0.5245   -0.1202   -0.6374   -0.3298    0.0471
5      0.4312    0.6581   -0.1921    0.0935   -0.5791    0.0351
Figure 1. PCA monitoring results for (a) case 1; (b) case 2
Figure 2. Monitoring results for fault case 1 along each PC
Figure 3. Monitoring results for fault case 2 along each PC

3. PPCA-SVDD scheme
A detailed description of the proposed PPCA-SVDD method is presented in this section.
3.1 PPCA-SVDD
In an industrial process, the occurrence of a fault inevitably changes the concerned variables. That is to say, fault detection amounts to capturing the variation of the changed variables. As mentioned above, each variable has its own particular PCs, which contain more information on the corresponding variable and can accurately capture its variation. However, when the traditional PCA method selects PCs, it simply retains the first several PCs in the dominant subspace according to the cumulative variance, without any consideration for the different variables involved in different faults. Selecting particular PCs for each variable can realize comprehensive monitoring of the industrial process. On the other hand, the loading matrix describes the inner relation between PCs and variables: each element of the loadings indicates the significance of a PC to a variable. This significance is not fully reflected in the final statistic, which is also related to the diagonal matrix. Therefore, both the loadings and the
eigenvalues in the diagonal matrix can be employed to select particular PCs for each variable. The resulting particular PCs make fault detection and analysis more accurate and efficient.

Suppose the loading matrix P ∈ R^(M×M) and the diagonal matrix Λ are generated from the dataset X ∈ R^(N×M) through PCA model construction. The particular PCs are selected as follows. First, a weight index inspired by the formula of the T^2 statistic is defined to measure the effect of the j-th PC on the i-th variable:

w_{i,j} = |p_{i,j}| / λ_j    (15)

where i, j = 1, 2, ..., M. This weight value is greater than or equal to zero, and a large value represents a great weight. The absolute value of the loading element is used so that the index reflects the authentic degree of influence; moreover, this processing reduces the complexity of the subsequent analysis. Second, the PCs with large weight indices are selected for each variable separately. Given that the weight indices corresponding to each variable differ, setting a uniform threshold for PC selection would leave some variables with redundant PCs while others lose significant ones; thus, it is better to set a different selection threshold for each variable. In addition, the number of PCs selected for each variable is not easy to determine: too many or too few PCs would cause undesirable monitoring results. Considering these two factors, two relatively reasonable principles are used for the determination. One is an averaging method that guarantees an adequate quantity of PCs; the other is a median method that guarantees the selection of the significant PCs. These two methods are compared in order to find a suitable way of PC selection.
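As a sketch of how Eq. (15) and the two threshold principles operate, the weight indices can be computed directly from the Table 1 loadings and eigenvalues; the code below is illustrative, not the authors' implementation.

```python
import numpy as np

# Loading matrix P (rows = variables, columns = PCs) and eigenvalues, Table 1
P = np.array([[0.4569,  0.2824, -0.3396, -0.2324,  0.7363],
              [0.4589,  0.0510,  0.8779,  0.0505,  0.1165],
              [0.4463, -0.4578, -0.2503,  0.7270,  0.0126],
              [0.4421, -0.5245, -0.1202, -0.6374, -0.3298],
              [0.4312,  0.6581, -0.1921,  0.0935, -0.5791]])
lam = np.array([4.5266, 0.3324, 0.0588, 0.0471, 0.0351])

# Weight index of Eq. (15): w_ij = |p_ij| / lam_j
W = np.abs(P) / lam

# Averaging principle, Eq. (16): keep PC j for variable i if w_ij > row mean
sel_mean = [np.flatnonzero(W[i] > W[i].mean()) for i in range(W.shape[0])]
# Median principle: keep PC j if w_ij > row median
sel_median = [np.flatnonzero(W[i] > np.median(W[i])) for i in range(W.shape[0])]

for i, pcs in enumerate(sel_mean, start=1):
    print(f"variable x{i}: particular PCs {pcs + 1}")
```

For variable x_4, the weights of PCs 4 and 5 exceed the row mean, matching the subspace discussed in Remark 1; the median principle, by contrast, always retains a fixed number (here two of the five PCs) per variable.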
In the averaging method, a PC is regarded as a particular one only when its weight index is greater than the mean value, defined as

w̄_i = (1/M) Σ_{j=1}^{M} w_{i,j}    (16)

The qualified loading vectors, together with the corresponding eigenvalues, are then picked out to construct the subspace for variable x_i:

P_i* = [p_1, p_2, ..., p_m],  Λ_i* = [λ_1, λ_2, ..., λ_m]    (17)

where m = m_1, m_2, ..., m_M is the number of chosen loading vectors in each subspace. Through projecting the original dataset onto each subspace, the particular PCs are generated as T_i* = X P_i*. The other principle is to select the loadings whose weight indices are greater than the median of (w_{i,1}, w_{i,2}, ..., w_{i,M}); such a selection can likewise construct PC subspaces for each variable.
The two principles are put forward because they have different characteristics. Since only a few weight indices are significantly greater than the others, the mean is greater than most indices, leading to relatively few selected particular PCs; the dimension and computational complexity are significantly reduced, but the extraction of particular PCs may be insufficient. The median principle, on the contrary, fully selects the particular PCs with a constant number, although futile ones may also be admitted into the subspaces. In brief, it is difficult to judge which selection principle is better; further discussion is given in the following study. The selected particular PCs provide more information about the variables so that extraneous information can be discarded, reducing the influence of noise.

After constructing the subspaces for each variable, the corresponding T^2 statistics can be calculated by projecting the monitored sample x onto each subspace as
T -1 Tˆi 2 = xT ( Pi∗ ) ( Λi ) ( Pi ∗ ) x
(18)
where the dataset x is scaled. Since each variable has its own subspace, the generated M statistics 2 2 2 can be denoted as z = Tˆ1 , Tˆ2 ,..., TˆM and the SVDD is employed to examine the variations of these
subspaces. According to Eqs.(7) to (10), the center a and the radius R in SVDD can be produced 2 2 2 through inputting Y = [ y1 , y2 ,..., yM ] = tˆ1 , tˆ2 ,..., tˆM which is calculated using normal testing
dataset. The parameter s here is set as 165, referred to other experiment. After that, the monitored dataset x are projected onto each subspace, generating corresponding statistics, so the distance of
Tˆi 2 statistics to the center can be estimated using Eq. (11). For the purpose of fault detection, a radio between measured distance and the radius R is defined as follows
$DR = \dfrac{\left\| z - a \right\|^2}{R^2}$  (19)
The confidence limit is then set to 1; that is, when the squared distance exceeds $R^2$, the sample z is regarded as a fault point; otherwise, it is deemed normal.

Remark 1. The idea of monitoring the process from the perspective of individual variables motivates the construction of a subspace for each variable. PPCA-SVDD can then monitor variations of the variables in all directions. When a fault occurs, the affected variables change and the statistics in the corresponding subspaces behave abnormally. To illustrate this idea, the numerical simulation above is adopted. Fig. 4 presents the monitoring results of all subspaces for fault case 1. The first fault is caused by a step change of $x_2$, so subspace 2 should detect the abnormality. As expected, subspace 2 performs well: the values of the $\hat{T}_2^2$ statistic increase rapidly after sample 151, while the other subspaces indicate normal operation. However, the fact that a subspace detects the fault does not guarantee that the corresponding variable has changed. As seen from the
monitoring results of case 2, shown in Fig. 5, both subspaces 3 and 4 can detect the ramp change, although the fault happens only on variable $x_4$. Table 1 explains this phenomenon. For variable $x_4$, the unsigned elements in the fourth and fifth loading vectors are comparatively large, so the corresponding PCs are selected as its particular PCs; the fifth PC is also selected by subspace 3 according to the mean selection principle. That is why the performance of subspace 3 is similar to that of subspace 4. Meanwhile, subspaces 1 and 5 behave exceptionally around sample 320 because PC 5 is shared by these two subspaces. Fortunately, these responses all indicate the occurrence of the fault to different degrees, helping detect it in time. Due to the number of variables, monitoring each subspace directly for fault detection is impractical. Therefore, the SVDD technique is applied to differentiate the abnormal subspaces from the originally normal ones. In addition, the fault information in the subspaces corresponding to normal variables is also reflected in the monitoring result, ultimately enhancing monitoring performance. However, the SVDD-based model still has some disadvantages arising from the feature-space construction: first, increased computational complexity; second, the difficulty of analyzing and interpreting the SVDD-based method.
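The per-subspace statistics and the DR ratio above can be sketched as follows. This is a minimal illustration assuming the particular loading vectors and eigenvalues of each subspace are already available from PCA and that the SVDD center a and radius R have been trained; the function and variable names are hypothetical, and a kernelized SVDD would replace the plain Euclidean distance with the kernel expansion of Eq. (11).

```python
import numpy as np

def subspace_t2(x, P_sub, lam_sub):
    """T^2 of a scaled sample x in one variable's subspace (Eq. 18).

    P_sub   -- (M, k) matrix whose columns are the particular loading vectors
    lam_sub -- (k,) eigenvalues of the particular PCs
    """
    t = P_sub.T @ x                   # scores on the particular PCs
    return float(t @ (t / lam_sub))   # t^T diag(lam)^{-1} t

def dr_statistic(z, center, R):
    """DR = ||z - a||^2 / R^2 (Eq. 19), with a Euclidean distance stand-in."""
    return float(np.sum((z - center) ** 2) / R ** 2)
```

With $z = [\hat{T}_1^2, \ldots, \hat{T}_M^2]$ assembled from all subspaces, `dr_statistic(z, a, R) > 1` flags the sample as faulty, matching the confidence limit of 1 used above.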
Figure 4. Monitoring results of each subspace for case 1
Figure 5. Monitoring results of each subspace for case 2

3.2 Fault diagnosis
After the monitored process triggers an alarm, fault identification should be carried out to find the root cause so that the industrial process can be corrected. The contribution plot is a commonly used method for the PCA model, and its applications can be found in many references.54,55 However, only the PCs that accurately represent the variables' variation should be employed in making the contribution plot. In the current study, the particular PCs for each variable have been selected, and here they are used to calculate the contribution rates of the variables, with the formula expressed as
$\mathrm{cont}_{i,s} = \dfrac{t_s^i}{\lambda_s^i}\, p_{i,s}^i\, x_i$  (20)

where $t_s^i$ and $\lambda_s^i$ are the s-th PC score and the s-th eigenvalue in the i-th subspace ($s = 1, 2, \ldots, m^i$, and $m^i$ is the number of particular PCs in that subspace); $p_{i,s}^i$ is the (i, s)-th element of the loading matrix $P_i^*$. The total contribution of variable $x_i$ is calculated as

$\mathrm{CONT}_i = \sum_{s=1}^{m^i} \mathrm{cont}_{i,s}$  (21)
Given that the particular PCs for each variable provide more fault information about the process, the contribution plot made with these PCs gives a more precise diagnosis result.
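As a concrete sketch of Eqs. (20) and (21), the total contribution of a single variable within its own subspace can be computed as below; the function name and argument layout are illustrative assumptions, not the authors' code.

```python
import numpy as np

def variable_contribution(x, P_sub, lam_sub, i):
    """Total contribution CONT_i of variable x_i (Eqs. 20-21).

    x       -- scaled sample vector of length M
    P_sub   -- (M, m_i) particular loading matrix P_i* of subspace i
    lam_sub -- (m_i,) eigenvalues of the particular PCs
    i       -- index of the monitored variable
    """
    t = P_sub.T @ x                            # particular PC scores t_s^i
    cont = (t / lam_sub) * P_sub[i, :] * x[i]  # cont_{i,s}, Eq. (20)
    return float(np.sum(cont))                 # CONT_i, Eq. (21)
```

Repeating this for every variable and plotting the resulting CONT_i values gives the contribution plot used for diagnosis.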
4 PPCA-SVDD Process Monitoring
The detailed implementation procedures of the proposed method are presented in this section, followed by the steps of fault diagnosis.

4.1 Fault detection based on PPCA-SVDD
A schematic diagram of the proposed loading-based PPCA-SVDD method is shown in Fig. 6, and the detailed steps are summarized as follows:
Step 1: Collect the training dataset X under normal conditions and scale it to zero mean and unit variance.
Step 2: Generate the loading matrix using SVD and obtain the PCs.
Step 3: Select the particular PCs for each variable based on the loadings according to the proposed selection manner, and then develop the corresponding subspaces.
Step 4: Project the normal testing dataset into each subspace and obtain the statistics $Y = [\hat{t}_1^2, \hat{t}_2^2, \ldots, \hat{t}_M^2]$ from these subspaces.
Step 5: Build the SVDD model with the statistics Y and determine the center a and radius R using Eqs. (7) to (9).
Step 6: Project the normalized monitored dataset x into all subspaces and generate the statistics $z = [\hat{T}_1^2, \hat{T}_2^2, \ldots, \hat{T}_M^2]$.
Step 7: Estimate the distance to the hypersphere's center and obtain the statistic DR through Eqs. (11) and (19). If DR exceeds the confidence limit of 1, a fault has occurred; otherwise, the process is considered normal.
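Steps 2 and 3 can be sketched as follows. The weight index here is taken as the unsigned loading element of each PC for the monitored variable, with the mean or median of those weights as the selection threshold; this weight definition is an assumption for illustration, and the names are hypothetical.

```python
import numpy as np

def select_particular_pcs(P, var_idx, rule="mean"):
    """Indices of the particular PCs for one variable (Step 3).

    P       -- (M, M) loading matrix from SVD of the scaled training data
    var_idx -- row index of the monitored variable
    rule    -- "mean" or "median" selection principle
    """
    w = np.abs(P[var_idx, :])                          # unsigned loadings
    thresh = np.mean(w) if rule == "mean" else np.median(w)
    return np.flatnonzero(w > thresh)                  # particular PCs
```

The median rule always keeps a constant number of PCs (about half of them), while the mean rule keeps only the few PCs whose loadings stand out, matching the trade-off between the two principles discussed in Section 3.
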
Figure 6. Flowchart of the PPCA-SVDD monitoring scheme.

4.2 Fault diagnosis
Step 1: For a detected faulty dataset, calculate the contribution rates for each variable using Eq. (20).
Step 2: Estimate the total contribution of every variable and make the contribution plot using Eq. (21).
Step 3: Determine the root cause of the current fault according to the contribution plot.

5 Examples and applications
In this section, a numerical example and the TE benchmark process are employed to evaluate the performance of the proposed PPCA-SVDD method. Some other methods are included for comparison.

5.1 Numerical example
The numerical system used as the motivational example is employed in this section to examine the performance of the proposed method. The monitoring results for the two constructed faults using traditional PCA have been shown in Fig. 1 (a) and (b), where the PCA method can hardly detect any abnormality in the process. Now the PPCA-SVDD method is applied to this simulation, with monitoring results displayed in Fig. 7 (a) and (b). Here, the particular PCs are selected according to the mean principle, and the monitoring results of each subspace have been shown in Figs. 4 and 5. For fault 1, the DR statistic exceeds the confidence limit once the fault happens at sample 151, so this fault is detected promptly and effectively. Because the variable changes only slightly at first, case 2 cannot be detected until after sample 220 on average, but this monitoring result still outperforms that of traditional PCA. In a word, the proposed PPCA-SVDD is sensitive to the variations of each variable and shows good monitoring performance. Once a fault is detected, the contribution plot is made to test the fault diagnosis ability. Since the contribution plot depends on the selected sample, the mean value of the contribution index over several consecutive samples is used to make the plot. The first fault is caused by the step change of variable 2. As expected, the contribution plot for case 1, shown in Fig. 8 (a), indicates that the second variable has the highest contribution. Similarly, the diagnosis result in Fig. 8 (b) accords with the fact that fault case 2 has a ramp change in the fourth variable. Obviously, the contribution plot provides an accurate identification result.
Figure 7. Monitoring results of PPCA-SVDD for (a) case 1; (b) case 2
Figure 8. Contribution plot for (a) case 1; (b) case 2

5.2 TE benchmark process
The Tennessee Eastman (TE) process, created by Downs and Vogel,56 is a classic industrial chemical process for studying plant-wide and multivariable control problems. It has been widely used to test the performance of various monitoring methods.57,58 In this simulation, there are five major unit operations: a reactor, a product condenser, a recycle compressor, a vapor-liquid separator, and a product stripper, with the detailed control structure displayed in Supporting Information Figure 1. The system has 41 measured variables and 12 manipulated variables, but only 33 of them, shown in Supporting Information Table 1, are discussed in the current study. First, a dataset containing 960 samples is produced as the training data for model construction. Then, 21 programmed fault datasets are generated to test the method. Each fault dataset consists of 960 samples, and the faults, listed in Supporting Information Table 2, are introduced from sample 161. In addition, a normal dataset comprising 500 samples is produced as testing data. More details regarding this control system can be found in the literature, while the simulation code can be downloaded from http://brahms.scs.uiuc.edu.

5.2.1 Monitoring results
The training data is employed to generate the loading matrix for PPCA-SVDD model construction. Then the particular PCs for each variable are first selected based on the mean principle. These particular PCs are listed in Table 2, where the number of chosen PCs for each variable varies from 3 to 12, less than half of the total. Not all PCs are selected in this method (the first PC, for example, is never chosen), so irrelevant information is discarded. By projecting the testing dataset into each subspace, the corresponding statistics are generated, and then both the center and the radius of the hypersphere are determined using SVDD. After constructing the PPCA-SVDD model, the 21 fault datasets are applied to it to generate the missed detection rates tabulated in Table 3. At the same time, the median principle for PC selection is also used to construct another PPCA-SVDD model for comparison. Since the number of selected particular PCs is constant and larger than that selected by the mean principle, the detailed particular PCs are not listed here, but the missed detection rates for the 21 faults are shown in Table 3. It is easy to see that the mean principle selects fewer particular PCs, reducing computational complexity, but brings slightly worse monitoring performance compared with the median principle. Besides, another five methods are employed to compare against the proposed method. The first is traditional PCA with CPV > 85%, followed by DPCA,1 which extracts dynamic information of the process. Three multi-block methods, BSPCA,39 FSCB,40 and KL-MBPCA,41 are used to demonstrate the rationality of the blocking strategy. Last, the monitoring results of both PCA-SVDD and the JIT-PCA-SVDD method,52 which adopt the SVDD strategy, are listed as well. The difference between these two methods is whether they employ the just-in-time (JIT) strategy, which helps select PCs online. The missed detection rates of the 21 faults for these five methods are all calculated and tabulated in Table 3. Among these faults, faults 3, 9, and 15 are difficult for most methods to detect, so they are ignored in this study. For the remaining faults, the proposed
PPCA-SVDD models based on the mean and median principles both show superiority, with the missed detection rates of all faults declining, especially those marked with shading. Almost all the missed detection rates of PPCA-SVDD are below 0.1, except for faults 11 and 21, and even for these two faults the rates are halved compared with traditional PCA. For further analysis, the detailed monitoring results for faults 5, 11, 19, and 21 obtained by PPCA-SVDD using the median principle and by traditional PCA are discussed. Herein, a 97.5% confidence limit is set for PCA monitoring. Fault 5 is caused by a step change of the condenser cooling water inlet temperature, inducing a step change in the condenser cooling water flow rate. Moreover, the increase of the outlet stream flow rate from the condenser to the vapor/liquid separator leads to a rise in temperature. Due to the compensation of the control system, this fault is difficult for traditional PCA to detect throughout the process. As seen from Fig. 9 (a), the monitoring statistic goes far beyond the confidence limit when the fault occurs at the 161st sample, but this trend changes after 200 samples, with the statistic returning to the normal level. However, the fault still exists after sample 350 because of the excessive condenser cooling water inlet temperature.1 The proposed PPCA-SVDD gives a good monitoring result, displayed in Fig. 9 (b), where the DR statistic obviously exceeds the confidence limit from sample 161 to the end. In the case of fault 11, there is a random variation in the reactor cooling water inlet temperature. This slight fault cannot be found by traditional PCA, as shown in Fig. 10 (a), where the statistic fluctuates up and down around the confidence limit. The monitoring performance is greatly improved by PPCA-SVDD, with the monitoring result presented in Fig. 10 (b). The DR statistic goes above the
confidence limit after the fault happens, and the missed detection rate falls significantly from 0.516 to 0.148. Fault 19 is an unknown fault in the TE process. Traditional PCA shows poor monitoring performance, displayed in Fig. 11 (a), where most statistics stay below the confidence limit; its missed detection rate is as high as 0.854. This changes with the proposed method: every small change of the variables is detected and reflected in the final statistic, shown in Fig. 11 (b). Thus PPCA-SVDD can detect this fault, and the missed detection rate is reduced to 0.06. The last fault is fault 21, caused by a constant valve position (stream 4). Fig. 12 (a) provides the monitoring result of PCA, which cannot find the fault until about 400 samples after it happens. With PPCA-SVDD, the detection time is moved up to around sample 400, on average, with the monitoring result given in Fig. 12 (b). This improvement leads to the decrease of the missed detection rate.

Remark 2. Many PCA-based methods are employed here for comparison in order to demonstrate the efficiency of the proposed PPCA-SVDD. The JIT-PCA-SVDD, as the dynamic form of PCA-SVDD, adopts the JIT strategy to capture key information of the dynamic process. It is easy to find that JIT-PCA-SVDD outperforms PCA-SVDD, since more variation information is obtained. Thus, there is reason to believe that the monitoring performance of a dynamic PPCA-SVDD, which extends PPCA-SVDD to dynamic processes, would be further improved; this work is in progress.

5.2.2 Fault analysis
It is necessary to find the root causes so that the process can be corrected when the system is deemed abnormal. The contribution plots for PCA and PPCA-SVDD are made for fault 5 using samples 251 to 270, and the diagnosis results are presented in Fig. 13 (a) and (b), respectively. As analyzed above, fault 5 is caused by a step change in the condenser cooling water inlet temperature, so the condenser cooling water flow (variable 33) and the stripper temperature (variable 18) must be related to this fault. The contribution plot in Fig. 13 (a) accords with this analysis. Fault 11 is another case employed for the contribution plot. This fault occurs because of the random variation in the reactor cooling water inlet temperature. Referring to Supporting Information Figure 1 and Supporting Information Table 1, the liquid flow valves in both the separator and the stripper can be influenced; consequently, the product separator level and the stripper level will vary. Thus, there are changes in variables 12, 15, 29, and 30, which have high contributions in Fig. 13 (b). The diagnosis results indicate that the contribution plot makes a reasonable and efficient decision on fault identification.
Table 2. Particular PCs selected by the mean principle

Variable  Particular PCs                       Variable  Particular PCs
1         29,30,31                             18        26,27,28,29,31,32
2         4,9,13,16,17,18,19,20,21,22,23,28    19        24,26,27,28,31,32
3         9,12,13,14,16,17,20,21,23            20        20,21,23,24,25,26,27,28,29,31
4         9,10,16,18,19,20,21,28,31            21        2,12,17,20,21,22,23,28,30,31
5         11,12,14,15,18,19,24,26,27,28        22        8,9,12,15,19,20,21,23,26,27,31
6         9,10,11,12,13,14,15,17,19,22,24,27   23        4,9,17,18,19,20,21,23
7         27,28,29,30                          24        11,12,13,14,16,17,19,21,24
8         11,12,13,14,15,16,17,23              25        29,30,31
9         3,4,12,13,17,21,22,23                26        8,9,10,11,13,15,16,17,18,19,20
10        8,24,25,28,31,32                     27        19,22,23,24,26,27,28,30
11        9,12,20,21,22,23,26,27,28            28        8,20,24,25,28,31,32
12        32,33                                29        32,33
13        28,29,30,32                          30        32,33
14        6,9,10,11,14,15,17,19                31        26,27,28,29,31
15        32,33                                32        2,3,4,12,13,16,17,19,21,22,28
16        20,26,27,28,29,30,31,33              33        31,32
17        31,32
Table 3. Missed detection rates of each method in TE

Fault  PCA    DPCA   BSPCA   FSCB    KL-MBPCA  PCA-   JIT-PCA-  PPCA-SVDD  PPCA-SVDD
       T2     T2     BIC_T2  BIC(D)  BIC_T2    SVDD   SVDD      (mean)     (median)
1      0.008  0.006  0.008   0.003   0.003     0.008  0.001     0.001      0
2      0.018  0.019  0.015   0.018   0.014     0.016  0.010     0.011      0.010
3      0.933  0.991  0.988   /       0.964     0.915  0.873     0.939      0.884
4      0.688  0.939  0.849   0       0.669     0.569  0         0          0
5      0.720  0.758  0.769   0       0.718     0.719  0.678     0          0
6      0.006  0.013  0       0       0.006     0.006  0.005     0          0
7      0      0.159  0       0       0         0      0         0          0
8      0.026  0.028  0.029   0.021   0.023     0.026  0.014     0.021      0.016
9      0.948  0.995  0.980   /       0.995     0.938  0.883     0.954      0.911
10     0.543  0.580  0.659   0.186   0.564     0.521  0.441     0.095      0.106
11     0.516  0.801  0.570   0.280   0.509     0.493  0.176     0.294      0.148
12     0.015  0.010  0.011   0.003   0.014     0.014  0.009     0.001      0.001
13     0.058  0.049  0.058   0.053   0.053     0.056  0.043     0.046      0.045
14     0.005  0.061  0       0.001   0         0.004  0         0.001      0
15     0.911  0.964  0.970   /       0.915     0.901  0.805     0.851      0.784
16     0.696  0.783  0.750   0.135   0.758     0.673  0.558     0.068      0.048
17     0.198  0.240  0.110   0.056   0.100     0.199  0.028     0.031      0.028
18     0.101  0.111  0.106   0.102   0.101     0.099  0.088     0.099      0.091
19     0.854  0.993  0.850   0.168   0.895     0.908  0.769     0.100      0.060
20     0.571  0.644  0.728   0.196   0.493     0.548  0.311     0.116      0.086
21     0.594  0.644  0.611   0.528   0.555     0.574  0.390     0.349      0.330
Figure 9. Monitoring results of fault 5 (a) PCA; (b) PPCA-SVDD
Figure 10. Monitoring results of fault 11 (a) PCA; (b) PPCA-SVDD
Figure 11. Monitoring results of fault 19 (a) PCA; (b) PPCA-SVDD
Figure 12. Monitoring results of fault 21 (a) PCA; (b) PPCA-SVDD

Figure 13. Contribution plot of (a) fault 5; (b) fault 11
6. Conclusions
This paper develops the PPCA-SVDD method, which selects PCs for each variable in terms of the loadings and distinguishes faulty samples from normal ones using SVDD. The useful information in the loadings is utilized, and the constructed PC subspaces show sensitivity to their
corresponding variables. The traditional PC selection rule has been changed in this method, which shows significant improvement in monitoring performance compared with other PCA-based methods. Meanwhile, the feasibility and superiority of the proposed PPCA-SVDD method have been demonstrated by two case studies. This study is an extension of, and a discussion about, the loadings in PCA. The loadings can reveal a great deal of information about the process; thus, further research could focus on extracting information directly from the loadings and generating monitoring statistics.
Acknowledgments
The authors gratefully acknowledge the support of the following foundations: the 973 Project of China (2013CB733605), the National Natural Science Foundation of China (21176073), and the Fundamental Research Funds for the Central Universities.
Supporting Information
The control scheme for the Tennessee Eastman process and detailed information about the faults and variables are shown in the Supporting Information. This information is available free of charge via the Internet at http://pubs.acs.org/.
Notes
The authors declare no competing financial interest.

References
(1) Chiang, L. H.; Braatz, R. D.; Russell, E. L. Fault Detection and Diagnosis in Industrial Systems; Springer, 2001. (2) Venkatasubramanian, V.; Rengaswamy, R.; Yin, K.; Kavuri, S. N. A review of process fault detection and diagnosis: Part I: Quantitative model-based methods. Comput. Chem. Eng. 2003, 27,
293. (3) MacGregor, J.; Kourti, T. Statistical process control of multivariate processes. Control Eng. Pract. 1995, 3, 403. (4) Raich, A.; Cinar, A. Statistical process monitoring and disturbance diagnosis in multivariable continuous processes. AIChE J. 1996, 42, 995. (5) Joe Qin, S. Statistical process monitoring: basics and beyond. J. Chemom. 2003, 17, 480. (6) Kano, M.; Nakagawa, Y. Data-based process monitoring, process control, and quality improvement: Recent developments and applications in steel industry. Comput. Chem. Eng. 2008, 32, 12. (7) Padilla, M.; Perera, A.; Montoliu, I.; Chaudry, A.; Persaud, K.; Marco, S. Fault detection, identification, and reconstruction of faulty chemical gas sensors under drift conditions, using Principal Component Analysis and Multiscale-PCA. In Neural Networks (IJCNN), The 2010 International Joint Conference on; IEEE, 2010; pp 1.
(8) Ge, Z.; Song, Z.; Gao, F. Review of recent research on data-based process monitoring. Ind. Eng. Chem. Res. 2013, 52, 3543.
(9) Jolliffe, I. Principal Component Analysis; Wiley Online Library, 2005. (10) Abdi, H.; Williams, L. J. Principal component analysis. Wiley Interdiscip. Rev.: Comput. Stat. 2010, 2, 433.
(11) Diamantaras, K. I.; Kung, S. Y. Principal Component Neural Networks: Theory and Applications; John Wiley & Sons, Inc., 1996. (12) Schölkopf, B.; Smola, A.; Müller, K.-R. Kernel principal component analysis. In Artificial Neural Networks—ICANN'97; Springer, 1997; pp 583.
(13) Alcala, C. F.; Qin, S. J. Reconstruction-based contribution for process monitoring with kernel principal component analysis. Ind. Eng. Chem. Res. 2010, 49, 7849. (14) Cheng, C.-Y.; Hsu, C.-C.; Chen, M.-C. Adaptive kernel principal component analysis (KPCA) for monitoring small disturbances of nonlinear processes. Ind. Eng. Chem. Res. 2010, 49, 2254. (15) Chen, J.; Liu, K.-C. On-line batch process monitoring using dynamic PCA and dynamic PLS models. Chem. Eng. Sci. 2002, 57, 63. (16) Dobos, L.; Abonyi, J. On-line detection of homogeneous operation ranges by dynamic principal component analysis based time-series segmentation. Chem. Eng. Sci. 2012, 75, 96. (17) Ku, W.; Storer, R. H.; Georgakis, C. Disturbance detection and isolation by dynamic principal component analysis. Chemom. Intell. Lab. Syst. 1995, 30, 179. (18) Nomikos, P.; MacGregor, J. F. Monitoring batch processes using multiway principal component analysis. AIChE J. 1994, 40, 1361. (19) Majid, N. A. A.; Taylor, M. P.; Chen, J. J.; Stam, M. A.; Mulder, A.; Young, B. R. Aluminium process fault detection by multiway principal component analysis. Control Eng. Pract. 2011, 19, 367. (20) Lu, N.; Gao, F.; Wang, F. Sub-PCA modeling and on-line monitoring strategy for batch processes. AIChE J. 2004, 50, 255. (21) Nguyen, V. H.; Golinval, J.-C. Fault detection based on kernel principal component analysis. Eng. Struct. 2010, 32, 3683.
(22) De Leeuw, J. Nonlinear principal component analysis and related techniques. Department of Statistics, UCLA 2011.
(23) Gokgoz, E.; Subasi, A. Effect of multiscale PCA de-noising on EMG signal classification for diagnosis of neuromuscular disorders. J. Med. Syst. 2014, 38, 1.
(24) Grbovic, M.; Li, W.; Xu, P.; Usadi, A. K.; Song, L.; Vucetic, S. Decentralized fault detection and diagnosis via sparse PCA based decomposition and Maximum Entropy decision fusion. J. Process Control 2012, 22, 738.
(25) Johnson, R. A.; Wichern, D. W. Applied Multivariate Statistical Analysis; Prentice Hall: Englewood Cliffs, NJ, 1992; Vol. 4. (26) Jackson, J. E. A User's Guide to Principal Components; John Wiley & Sons, 2005; 587. (27) Wold, S. Cross-validatory estimation of the number of components in factor and principal components models. Technometrics 1978, 20, 397. (28) Camacho, J.; Ferrer, A. Cross-validation in PCA models with the element-wise k-fold (ekf) algorithm: theoretical aspects. J. Chemom. 2012, 26, 361. (29) Mnassri, B.; El Adel, E.-M.; Ananou, B.; Ouladsine, M. A generalized variance of reconstruction error criterion for determining the optimum number of Principal Components. In Control & Automation (MED), 2010 18th Mediterranean Conference on; IEEE, 2010; pp 868.
(30) Qin, S. J.; Dunia, R. Determining the number of principal components for best reconstruction. J. Process Control 2000, 10, 245.
(31) Li, Y.; Tang, X.-C. Improved performance of fault detection based on selection of the optimal number of principal components. Acta Automatica Sinica 2009, 35, 1550. (32) Jolliffe, I. T. A note on the use of principal components in regression. Applied Statistics 1982, 300. (33) Togkalidou, T.; Braatz, R. D.; Johnson, B. K.; Davidson, O.; Andrews, A. Experimental design and inferential modeling in pharmaceutical crystallization. AIChE J. 2001, 47, 160. (34) MacGregor, J. F.; Jaeckle, C.; Kiparissides, C.; Koutoudi, M. Process monitoring and diagnosis
by multiblock PLS methods. AIChE J. 1994, 40, 826. (35) Smilde, A. K.; Westerhuis, J. A.; Boque, R. Multiway multiblock component and covariates regression models. J. Chemom. 2000, 14, 301. (36) Qin, S. J.; Valle, S.; Piovoso, M. J. On unifying multiblock analysis with application to decentralized process monitoring. J. Chemom. 2001, 15, 715. (37) Choi, S. W.; Lee, I.-B. Multiblock PLS-based localized process diagnosis. J. Process Control 2005, 15, 295. (38) Ge, Z.; Song, Z. Process monitoring based on independent component analysis-principal component analysis (ICA-PCA) and similarity factors. Ind. Eng. Chem. Res. 2007, 46, 2054. (39) Ge, Z.; Zhang, M.; Song, Z. Nonlinear process monitoring based on linear subspace and Bayesian inference. J. Process Control 2010, 20, 676. (40) Tong, C.; Song, Y.; Yan, X. Distributed statistical process monitoring based on four-subspace construction and Bayesian inference. Ind. Eng. Chem. Res. 2013, 52, 9897. (41) Wang, B.; Jiang, Q.; Yan, X. Fault detection and identification using a Kullback-Leibler divergence based multi-block principal component analysis and Bayesian inference. Korean J. Chem. Eng. 2014, 1.
(42) Sorsa, T.; Koivo, H. N. Application of artificial neural networks in process fault diagnosis. Automatica 1993, 29, 843.
(43) He, Q. P.; Qin, S. J.; Wang, J. A new fault diagnosis method using fault directions in Fisher discriminant analysis. AIChE J. 2005, 51, 555.
(44) Chai, Y.; Dai, W.; Guo, M.; Li, S.; Zhang, Z. A self-organizing map method for optical fiber fault detection and location. In Advances in Neural Networks–ISNN 2005; Springer, 2005; pp 470.
(45) Tax, D. M.; Duin, R. P. Support vector domain description. Pattern Recognit. Lett. 1999, 20, 1191.
(46) Tax, D. M.; Duin, R. P. Support vector data description. Mach. Learn. 2004, 54, 45.
(47) Liu, X.; Xie, L.; Kruger, U.; Littler, T.; Wang, S. Statistical-based monitoring of multivariate non-Gaussian systems. AIChE J. 2008, 54, 2379.
(48) Pan, Y.; Chen, J.; Guo, L. Robust bearing performance degradation assessment method based on improved wavelet packet–support vector data description. Mech. Syst. Signal Pr. 2009, 23, 669.
(49) Ge, Z.; Xie, L.; Song, Z. A novel statistical-based monitoring approach for complex multivariate processes. Ind. Eng. Chem. Res. 2009, 48, 4892.
(50) Liu, X.; Li, K.; McAfee, M.; Irwin, G. W. Improved nonlinear PCA for process monitoring using support vector data description. J. Process Control 2011, 21, 1306.
(51) Jiang, Q.; Yan, X.; Lv, Z.; Guo, M. Independent component analysis-based non-Gaussian process monitoring with preselecting optimal components and support vector data description. Int. J. Prod. Res. 2014, 52, 3273.
(52) Jiang, Q.; Yan, X. Just-in-time reorganized PCA integrated with SVDD for chemical process monitoring. AIChE J. 2014, 60, 949.
(53) Cheng, C.; Chiu, M.-S. Nonlinear process monitoring using JITL-PCA. Chemom. Intell. Lab. Syst. 2005, 76, 1.
(54) Miller, P.; Swanson, R. E.; Heckler, C. E. Contribution plots: a missing link in multivariate quality control. Appl. Math. Comput. Sci. 1998, 8, 775.
(55) Alcala, C. F.; Dunia, R.; Qin, S. J. Monitoring of dynamic processes with subspace identification and principal component analysis. In Fault Detection, Supervision and Safety of Technical Processes, 2012; Vol. 8; pp 684.
(56) Downs, J. J.; Vogel, E. F. A plant-wide industrial process control problem. Comput. Chem. Eng. 1993, 17, 245.
(57) Lau, C.; Ghosh, K.; Hussain, M.; Che Hassan, C. Fault diagnosis of Tennessee Eastman process with multi-scale PCA and ANFIS. Chemom. Intell. Lab. Syst. 2013, 120, 1.
(58) Fan, J.; Qin, S. J.; Wang, Y. Online monitoring of nonlinear multivariate industrial processes using filtering KICA–PCA. Control Eng. Pract. 2014, 22, 205.