Robust Multivariate Statistical Process Monitoring via Stable Principal Component Pursuit
Zhengbing Yan, Chun-Yu Chen, Yuan Yao, and Chien-Ching Huang
Ind. Eng. Chem. Res., Just Accepted Manuscript. DOI: 10.1021/acs.iecr.5b02913. Publication Date (Web): March 24, 2016.
Robust Multivariate Statistical Process Monitoring via Stable Principal Component Pursuit
Zhengbing Yan,a Chun-Yu Chen,b Yuan Yao,b,* and Chien-Ching Huangb
a College of Physics and Electronic Information Engineering, Wenzhou University, Wenzhou 325035, China
b Department of Chemical Engineering, National Tsing Hua University, Hsinchu 30013, Taiwan
Abstract: In process industries, multivariate statistical process monitoring (MSPM) has become an important technique for enhancing product quality and operation safety. Among the family of MSPM methods, principal component analysis (PCA) may be the most commonly used one, owing to its capabilities of dimensionality reduction and of highlighting variation in a dataset. However, the performance of a PCA model often degrades significantly when gross sparse errors, i.e., outliers, are contained in the training data, since PCA assumes that the training data matrix only contains an
underlying low-rank structure corrupted by dense noise. In this paper, a robust matrix recovery method named stable principal component pursuit (SPCP) is adopted to solve this problem, and a process modeling and online monitoring procedure is developed based on it. Such a method inherits the benefits of PCA while being robust to outliers. The effectiveness of the SPCP-based monitoring method is illustrated using the benchmark Tennessee Eastman process.
Keywords: robust process monitoring, stable principal component pursuit, singular value thresholding, matrix recovery, principal component analysis.
* Corresponding author. Tel: 886-3-5713690. Fax: 886-3-5715408. Email: [email protected]
1. Introduction
In the field of process monitoring, multivariate statistical methods have been widely accepted in recent years for two main reasons. First, owing to advances in manufacturing technology, modern production processes have become more complex than before; as a consequence, it is usually very time-consuming, and sometimes impossible, to model them based on first principles or process knowledge. Second, thanks to the rapid development of electronic and computer techniques, large amounts of process data, which contain useful process information, are collected by the distributed control systems (DCS) widely utilized in today's industries. In recent decades, multivariate statistical process monitoring (MSPM) models, which require little process knowledge, have therefore attracted increasing attention.1-3 Among them, principal component analysis (PCA)4 is the most representative. During the step of process modeling, PCA implements a linear projection of a training dataset consisting of normal observations from the high-dimensional variable space into a low-dimensional latent space, while maximizing the captured variance. Let X denote the normalized training data matrix. By conducting singular value decomposition (SVD), PCA decomposes X into two parts: a low-rank matrix L containing most of the systematic variation and a matrix N mainly comprising small dense noise, i.e., X = L + N. In the situation that the noise is
independently and identically Gaussian distributed, PCA gives a statistically optimal estimate of the low-rank subspace. Such a superior property makes PCA a standard method in MSPM, and many extensions of PCA have been developed for monitoring different types of processes.5-10 Nevertheless, a major shortcoming of PCA is that its performance degrades significantly when the training data are contaminated by gross sparse errors. Even with a small number of grossly corrupted entries in X, PCA often fails to recover the low-rank structure correctly; in other words, PCA is not robust to process outliers. There are two possible ways of dealing with this situation:11 (1) detecting and eliminating the outliers before applying classical PCA, or (2) eliminating the influence of the outliers within the PCA decomposition itself. The first way is commonly adopted in practice, where the modeling and monitoring steps are usually conducted iteratively to delete the outliers in a stepwise manner. Specifically, Cummins and Andrews12 proposed an iterative reweighted partial least squares (PLS) algorithm for outlier detection; after slight modification, this algorithm is also applicable to PCA. Further developments of this algorithm have been proposed by Pell.13 Kruger et al.14-16 provided a detailed algorithmic analysis of the associated techniques, showing that the iterative formulation of the algorithm by Cummins and Andrews12 does not guarantee convergence, and then introduced a new conceptual outlier detection algorithm that
overcomes the deficiencies of the existing work. Another robust PCA (RPCA) method, proposed by Hubert and Rousseeuw,17 combines ideas of both projection pursuit and robust covariance estimation. In their method, the projection pursuit approach is used for the initial dimension reduction; then, the minimum covariance determinant (MCD) estimator is applied in this lower-dimensional data space, and two statistics called the score distance and the orthogonal distance are used to identify outliers. There are also a number of algorithms belonging to the second category of robust methods. Huber18 proposed an M-estimator to reduce the effect of outliers based on the analysis of squared model residuals. Rousseeuw19 developed a least median technique which achieves a robust model by minimizing the median of the squared residuals. Other methods pertaining to this theme include influence function techniques,20 multivariate trimming,21 alternating minimization,22 random sampling techniques,23 etc. Instead of discarding the outliers, these methods keep all of the observations for analysis and thus better utilize the information contained in the data. However, as indicated in the literature,24,25 none of the above-mentioned approaches yields a polynomial-time algorithm with strong performance guarantees. For more details, a good survey of robust PCA methods is given in Chapter 6 of the textbook by Kruger and Xie.26 Recently, another robust version of PCA, named principal component pursuit (PCP), was proposed.24 Based on the main assumption that X = L + S, PCP aims to
recover a low-rank matrix L from a high-dimensional data matrix X, where S is an unknown sparse matrix consisting of gross sparse errors. Unlike classical PCA, which requires the entries in the noise term to be small, PCP can handle entries in S with arbitrarily large magnitudes. Although PCP is robust to outliers, it does not consider the dense noise commonly existing in process measurements; therefore, such a method is unsuited to MSPM. In 2010, Zhou et al.27 generalized PCP to recovering L from X in the presence of both small entry-wise noise and gross sparse errors. Their method is named stable principal component pursuit (SPCP). To the best of our knowledge, although this method has been utilized for image and video processing, an SPCP-based robust process modeling and monitoring algorithm is still lacking. In this study, the SPCP method is extended to the field of MSPM. The rest of this paper is organized as follows. Section 2 introduces the related methodologies and the motivations of this research, followed by an SPCP-based statistical process modeling and online monitoring algorithm developed in Section 3. To illustrate the effectiveness of the proposed method, case studies are provided in Section 4 using the benchmark Tennessee Eastman (TE) process. Finally, Section 5 summarizes and concludes the paper.
2. Methodologies and Motivations
2.1. Singular Value Decomposition and Principal Component Analysis
As is well known, PCA can be achieved by applying SVD to a centered data matrix. For any normalized matrix X with dimensions n × m, SVD results in a product of three matrices, i.e.,
X = UΣV^T, (1)
where U is an n × n orthogonal matrix consisting of the left singular vectors of X, V is an m × m orthogonal matrix consisting of the right singular vectors of X, and Σ is an n × m rectangular diagonal matrix with the non-negative singular values of X on the diagonal, sorted in descending order, and zeros off the diagonal. In MSPM applications, X is usually a training data matrix comprised of historical operation data, n is the number of collected observations, and m is the number of process variables. The right singular vectors derived by SVD are called "loading vectors" in PCA, which transform the original data matrix X into a number of principal components (PCs) T. Hence, PCA can be expressed as
T = XV, (2)
where T = XV = UΣ. The magnitude of each singular value indicates the variance captured by the corresponding PC. Therefore, the first several PCs extract most
systematic variation information contained in X, while the remainder is often noisy. Accordingly, (1) and (2) can be reformulated as
X = L + N, (3)
where L = U_1 Σ_1 V_1^T, N = U_2 Σ_2 V_2^T, U = [U_1 U_2], and V = [V_1 V_2]. U_1 is an n × k matrix consisting of the first k columns of U, U_2 is comprised of the remaining part of U, and k is the number of PCs retained in the score space. Similarly, V_1 is an m × k matrix consisting of the first k columns of V, and V_2 is comprised of the remaining part of V. The k × k diagonal matrix Σ_1 is a submatrix of Σ, whose diagonal consists of the first k singular values in Σ, and Σ_2 is a rectangular diagonal matrix with the remaining singular values on its diagonal. Thus, the low-rank structure is captured by the score space T_1 = U_1 Σ_1, while the dense noise occupies the residual space N.
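The decomposition in (1)-(3) can be sketched numerically as follows; this is a minimal illustration assuming numpy, where the matrix sizes and the retained dimension `k` are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 6))
X = X - X.mean(axis=0)                   # mean-center the data (normalization)

U, s, Vt = np.linalg.svd(X, full_matrices=False)

k = 2                                    # number of retained PCs
T = X @ Vt[:k].T                         # scores of the first k PCs, T_1 = X V_1
L = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]   # low-rank part: systematic variation
N = X - L                                # residual part: mainly dense noise

# variance captured by each PC is proportional to the squared singular value
explained = s**2 / np.sum(s**2)
```

As expected from (2), the retained scores equal `U[:, :k] @ np.diag(s[:k])`, and the two parts sum back to X exactly.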
2.2. Singular Value Thresholding and Stable Principal Component Pursuit
SVD/PCA is not the only way of recovering a low-rank data matrix from a data matrix corrupted by small dense noise on each entry. Recently, a singular value thresholding (SVT) approach28 was proposed for a similar target, which can be formulated as
D_τ(X) = U D_τ(Σ) V^T, (4)
where the definitions of U, Σ and V are the same as those in SVD, and D_τ is a soft-thresholding operator, which is the main difference from SVD. For any specified τ ≥ 0, D_τ(Σ) is a rectangular diagonal matrix with the non-negative values σ_i(τ) on the diagonal and zeros off the diagonal, where the i-th entry on the diagonal is calculated as
σ_i(τ) = (σ_i − τ)_+, (5)
where σ_i is the i-th largest singular value of X, i.e., the i-th diagonal element in Σ, and the subscript '+' means taking the positive part, i.e., t_+ = max(0, t). Thus, the operator D_τ applies a thresholding rule to the singular values, shrinking them toward zero. It is easy to understand that, if a large number of the singular values are lower than the threshold τ, the rank of D_τ(X) is considerably lower than that of X. In such a way, D_τ(X) effectively recovers the low-rank matrix L from X. It has been proven in the literature28 that
D_τ(X) = argmin_L { τ‖L‖_* + (1/2)‖L − X‖_F² }, (6)
where ‖·‖_* is the nuclear norm of a matrix, i.e., the sum of its singular values, and ‖·‖_F represents the Frobenius norm, i.e., the root sum of squares of all the entries. Consequently, after conducting SVT, the original data matrix can be separated into two parts:
X = L + N, (7)
where L = D_τ(X) describes the low-rank structure, and N = X − D_τ(X) represents the matrix of dense noise. For calculating D_τ(X) efficiently, a state-of-the-art algorithm has been developed.29 The above discussion indicates that SVT may serve as an alternative approach for computing PCA, since SVT and SVD lead to the same singular vectors U and V. Due to the similar properties of SVT and SVD/PCA, however, SVT is unable to deal with sparse errors either. To cope with this problem, SPCP27 was developed, which can be regarded as an integration of SVT and PCP. In SPCP, the data matrix X is assumed to consist of three parts:
X = L + S + N, (8)
where S is a sparse matrix with most of its entries being zero. The following optimization problem should be solved to estimate the unknown matrices L, S and N:
min_{L,S} { ‖L‖_* + λ‖S‖₁ + (1/2μ)‖X − L − S‖_F² }. (9)
In (9), ‖S‖₁ denotes the l1 norm of S viewed as a vector, which equals the sum of the absolute values of all entries in the matrix, and the parameters λ and μ assign weights to the sparse error term and the dense noise term, respectively. The optimization problem described in (9) can be solved efficiently by applying a fast
proximal gradient algorithm named the accelerated proximal gradient (APG) method,30 while a discussion of parameter selection can be found in ref 27. Interested readers may refer to the references for more details. As stated in the literature,27 in the case that S is fixed to be 0, the problem in (9) has the same solution as that of (6) with the threshold τ = μ. In other words, SPCP leads to exactly the same outcome as SVT if there is no sparse error contained in the data matrix X. In more general cases where outliers exist, SPCP can provide stable estimates of both L and N. Since the recovered low-rank matrix L captures the same principal subspace as the underlying clean data, it is possible to obtain the PCA loadings by calculating the right singular vectors of L, even if the original data set is grossly corrupted by outliers. In this sense, SPCP can be viewed as a robust version of PCA. These findings motivate this research.
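To make the operators concrete, the sketch below implements D_τ from (4)-(5) and a deliberately simple alternating minimization of (9), assuming numpy. The alternating scheme is an illustrative stand-in for the APG solver of ref 30, not the algorithm used in this work, and the values of λ, μ and the iteration count are arbitrary.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding D_tau(X): soft-threshold the singular
    values of X, cf. eqs. (4)-(5)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    """Entry-wise soft thresholding, the proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def spcp_alternating(X, lam=0.1, mu=0.5, n_iter=200):
    """Minimize ||L||_* + lam*||S||_1 + 1/(2*mu)*||X - L - S||_F^2 by
    alternating exact minimization: the L-update is an SVT step with
    threshold mu (cf. eq. (6)), the S-update a soft-thresholding step
    with threshold lam*mu."""
    L = np.zeros_like(X)
    S = np.zeros_like(X)
    for _ in range(n_iter):
        L = svt(X - S, mu)
        S = soft(X - L, lam * mu)
    N = X - L - S          # the remaining dense-noise part
    return L, S, N
```

Note that with S fixed at zero the L-update is exactly the SVT problem (6) with τ = μ, which is the correspondence pointed out in the text above.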
3. Process Modeling and Online Monitoring Using SPCP
In spite of the outstanding properties of SPCP, to our knowledge no previous attempt has been made to extend this method to the field of MSPM. In this section, a statistical process modeling and online monitoring algorithm is proposed based on SPCP, with which robust MSPM can be achieved.
3.1. Normalization with Robust Statistics
Before statistical process modeling, data normalization is usually a necessary step for eliminating the effects of engineering units and measurement ranges, reducing nonlinearity, and emphasizing correlations among variables. Conventionally, the most common way to normalize process data is autoscaling, which involves both mean centering and variance scaling. After autoscaling, all process variables are centered around a mean of 0 and have a unit variance of 1. The mathematical expression of autoscaling is
x̃_{i,j} = (x_{i,j} − x̄_j) / s_j, (10)
where x_{i,j} is the i-th observation of the j-th variable, and x̄_j and s_j are the corresponding sample mean and standard deviation. However, both statistics are easily distorted by outliers; hence, robust statistics such as the sample median and the median absolute deviation (MAD) are used instead.

After solving (9), SVD is conducted on the recovered low-rank matrix L:
L = UΣV^T, (11)
where only the first k = rank(L) singular values in Σ are nonzero. The matrix U_k is obtained as a submatrix of U, which involves all the columns corresponding to the nonzero singular values; in other words, U_k is comprised of the left k columns of U. Defining
Λ = Σ_k^T Σ_k / (n − 1),
(12)
where Σ_k is a k × k submatrix of Σ and contains all the nonzero singular values in Σ, the T² monitoring statistic can be calculated for each process observation to summarize the information in the score space:
T² = t Λ⁻¹ t^T. (13)
Here, t is a row vector in the score matrix T_k = U_k Σ_k. For process monitoring, the control limit of T² can be derived from the F-distribution:32
T²_lim = [k(n − 1)/(n − k)] · F_{k, n−k, α}, (14)
where α is the significance level, n is the size of the training dataset, i.e., the total number of rows in X, and F_{k,n−k,α} is the critical value of the F-distribution with significance level α and degrees of freedom k and n − k, whose value can be found in a statistical table. Another statistic, SPE, can be used to monitor the residual space. For each observation in the training dataset, the SPE value is computed as
SPE = e e^T, (15)
where e is a row vector in the matrix N. The corresponding control limit at significance level α can be estimated as5
SPE_α = (v/2b) · χ²_{2b²/v, α}, (16)
where b and v are the sample mean and variance of the SPE sample, and χ²_{2b²/v,α} is the critical value of the χ² distribution with 2b²/v degrees of freedom at significance
level α. For online monitoring, the T² and SPE statistics should be calculated for each new observation collected online and compared with the corresponding control limits. If either of the statistics falls outside its control limit, an alarm should be issued to the process engineers.
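The control limits in (14) and (16) can be evaluated with standard distribution functions. Below is a sketch assuming scipy, where `spe_train` stands for the SPE values of the training observations:

```python
import numpy as np
from scipy import stats

def t2_control_limit(k, n, alpha=0.05):
    """T^2 control limit from the F-distribution, eq. (14)."""
    return k * (n - 1) / (n - k) * stats.f.ppf(1 - alpha, k, n - k)

def spe_control_limit(spe_train, alpha=0.05):
    """SPE control limit from the weighted chi-square approximation, eq. (16):
    SPE_alpha = (v / 2b) * chi2_{2b^2/v, alpha}, where b and v are the sample
    mean and variance of the training SPE values."""
    b = float(np.mean(spe_train))
    v = float(np.var(spe_train))
    dof = 2.0 * b**2 / v            # degrees of freedom of the chi-square
    return v / (2.0 * b) * stats.chi2.ppf(1 - alpha, dof)
```

For example, with k = 9 retained PCs and n = 500 training samples, `t2_control_limit(9, 500)` gives the 95% limit against which each online T² value is compared.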
Denote the normalized values of a new observation as x_new. To compute T², the scores of the retained PCs, t_new, should be calculated, where the row vector t_new is determined by the first k entries of u_new, and
u_new = x_new V Σ⁻¹. (17)
However, a difficulty is that the inverse of Σ cannot be obtained directly, since the matrix recovered from (11) is rank-deficient, i.e., only its first k singular values are nonzero. Hence, a new matrix Σ̃ is defined to cope with this problem:
Σ̃ = Σ + εI, (18)
where ε is a small positive constant that makes Σ̃ invertible. Accordingly, the following vector is obtained:
ũ_new = x_new V Σ̃⁻¹. (19)
Please note that the first k entries in ũ_new are essentially the same as those in u_new. Thus, the score vector t_new is identified from ũ_new, based on which the T² value is calculated as
T² = t_new Λ⁻¹ t_new^T. (20)
Then, the value of the SPE statistic is derived from the residual vector e_new, where
e_new = x_new − x̂_new, (21)
x̂_new = t_new V_k^T, (22)
and V_k is comprised of the left k columns of V. Finally,
SPE = e_new e_new^T. (23)
3.3. Summary of Procedures
The procedures of SPCP-based process modeling and online monitoring are summarized below.
I. Procedure of offline modeling
1. Collect the historical process data, and normalize the data matrix using robust statistics, e.g., the sample median and MAD.
2. Utilize SPCP to decompose the normalized data set X into three parts, i.e., L, S and N, by solving the optimization problem described in (9).
3. Calculate k = rank(L).
4. Obtain the T² control limit using (14).
5. Calculate the SPE statistic for each observation in the training data set using (15).
6. Obtain the SPE control limit using (16).
7. Conduct SVD on L and obtain the matrices U, Σ, and V as introduced in (11).
8. Calculate the matrix Λ according to (12).
9. Obtain the matrix Σ̃ using (18).
II. Procedure of online monitoring
1. Collect the online process measurements, and conduct normalization in the same way as for the training data.
2. Based on (19), calculate the vector ũ_new using the normalized sample x_new.
3. Identify the score vector t_new, which is determined by the first k entries of ũ_new.
4. Calculate the T² statistic using (20).
5. Obtain the vector x̂_new using (22).
6. Obtain the residual vector e_new using (21).
7. Calculate the SPE statistic using (23).
8. Compare the T² and SPE statistics with the corresponding control limits. If either of the monitoring statistics is outside its control limit, an alarm is issued; otherwise, go back to step 1.
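Assuming the SPCP decomposition of the robustly normalized training data has already produced L (via any SPCP solver), the offline steps I.1-I.9 and the online steps II.1-II.8 can be sketched as follows. The helper names and the regularization constant `eps` (playing the role of Σ̃ in (18)) are illustrative:

```python
import numpy as np

def robust_normalize(X, med=None, mad=None):
    """Step I.1 / II.1: scale with the sample median and MAD so that gross
    outliers do not distort the normalization (the factor 1.4826 makes MAD
    consistent with the standard deviation for Gaussian data)."""
    if med is None:
        med = np.median(X, axis=0)
        mad = 1.4826 * np.median(np.abs(X - med), axis=0)
    return (X - med) / mad, med, mad

def build_model(L, n, eps=1e-6, tol=1e-8):
    """Steps I.3 and I.7-I.9: SVD of the recovered low-rank matrix L,
    eq. (11); k = rank(L); the score covariance diagonal of eq. (12)-(13);
    and regularized singular values standing in for Sigma-tilde of (18)."""
    U, s, Vt = np.linalg.svd(L, full_matrices=False)
    k = int(np.sum(s > tol * s[0]))          # numerical rank of L
    lam = s[:k] ** 2 / (n - 1)               # diagonal of Lambda
    return {"Vt": Vt, "s_reg": s + eps, "lam": lam, "k": k}

def monitor(x_new, model):
    """Steps II.2-II.7: T^2 and SPE for one normalized observation."""
    k, Vt = model["k"], model["Vt"]
    u = (x_new @ Vt.T) / model["s_reg"]      # eq. (19): x V Sigma_tilde^{-1}
    t = u[:k] * model["s_reg"][:k]           # scores of the k retained PCs
    t2 = float(np.sum(t**2 / model["lam"]))  # eq. (20)
    e = x_new - t @ Vt[:k]                   # eqs. (21)-(22): residual vector
    return t2, float(e @ e)                  # eq. (23)
```

A new observation is first normalized with the training median and MAD (`robust_normalize(x, med, mad)`), after which `monitor` returns the pair of statistics to be compared against the control limits from (14) and (16).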
4. Case Study
4.1. Tennessee Eastman Process
In what follows, the Tennessee Eastman (TE) process simulation,33 which was developed based on a real industrial process and has been widely used as a benchmark for testing process control and MSPM algorithms, is utilized to further illustrate the proposed method. The TE process consists of five main units, i.e., a reactor, a condenser, a separator, a stripper, and a compressor. In the streams of the plant there are eight components, including four reactants, two products, a major by-product, and an inert component. For statistical process modeling and monitoring, a total of 41 measured variables and 11 manipulated variables are recorded, covering both normal operation and abnormal situations triggered by 20 different types of disturbances. The normal operation data stored in the training set are usually used for model building, while in each test data set a fault is introduced into the process at the 161st sampling time point. More details about the TE process, including the variable list, the descriptions of the faults, etc., can be found in the original paper.33 In the following sections, training data without and with outliers are both adopted in process modeling to compare the robustness of different methods. In addition, the test data sets containing Fault 5 and Fault 6 are used to demonstrate the performance of online monitoring, where the process disturbance causing Fault 5 is a step change
in the condenser cooling water inlet temperature, and Fault 6 is due to a feed loss in stream 1.
4.2. Process Monitoring Based on Training Data without Outliers
In the first case study, training data without outliers, collected under normal operating conditions, are utilized for model training. Two statistical process models are built based on PCA and SPCP, respectively. For PCA, 9 PCs are retained in the score space, explaining about 50% of the variation contained in the data. For SPCP, the parameter λ is set to 0, since there is no sparse error to deal with, while the other parameter μ is specified so as to extract about 50% of the variation into L. Thus, in this case, the PCA and SPCP models reflect similar process information. For all the monitoring statistics, the control limits are determined at a significance level of 0.05; in other words, 95% control limits are used to monitor the T² and SPE statistics derived from both the PCA and SPCP models. Then, both models are used to detect Faults 5 and 6. The monitoring results are shown in Fig. 1 to Fig. 4. Each figure displays the monitoring results of 250 sampling points. The logarithms of the monitoring statistics are plotted to facilitate demonstration, while the dash-dot lines in the figures represent the control limits. Fig. 1 shows the monitoring results of Fault 5 based on the PCA model. The T² control
chart indicates that this statistic detects Fault 5 with a slight delay, since the fault is identified at the 173rd sampling point although it occurs at the 161st time point. The SPE control chart, however, is quite efficient. For comparison, the monitoring results based on SPCP are shown in Fig. 2, which implies that SPCP and PCA achieve similar fault detection in this case. The T² control chart of SPCP provides an immediate detection of Fault 5 soon after its occurrence, while the SPE chart alarms after the 164th time point. Similar results are observed in Fig. 3 and Fig. 4. The PCA model detects Fault 6 at the 167th sampling time point with T² and at the 161st sampling point with SPE, while the two control charts based on the SPCP model detect the fault at the 163rd and 161st sampling time points, respectively. Both cases show that, when the training data are not corrupted by outliers, the performance of SPCP is comparable to, or even slightly better than, that of PCA.
4.3. Process Monitoring Based on Training Data with Outliers
For further comparison, outliers are added to the training data before process modeling. Five process variables out of the total of 52 are randomly chosen to be faulty with a probability of 0.05. The values of the selected measurements are adjusted in a random manner within a certain range.
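One reading of this contamination scheme might be reproduced as in the sketch below, assuming numpy; since the text only states that values are adjusted randomly "within a certain range", the corruption magnitude `scale` is a hypothetical choice.

```python
import numpy as np

def inject_outliers(X, n_faulty_vars=5, p=0.05, scale=5.0, seed=0):
    """Randomly choose n_faulty_vars columns and corrupt each of their
    entries with probability p by adding a gross sparse error; 'scale'
    is an illustrative assumption, not the setting used in the paper."""
    rng = np.random.default_rng(seed)
    Xc = X.copy()
    cols = rng.choice(X.shape[1], size=n_faulty_vars, replace=False)
    for j in cols:
        mask = rng.random(X.shape[0]) < p       # ~5% of samples per column
        Xc[mask, j] += scale * rng.standard_normal(int(mask.sum()))
    return Xc, cols
```

Applied to a training matrix with 52 columns, this corrupts roughly 5% of the entries in five randomly chosen variables, while all remaining entries stay untouched.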
SPCP directly divides the training data matrix into three parts. With properly selected parameters, the effects of the outliers are captured by the sparse matrix S, and the noise is mainly contained in N. Therefore, a monitoring model is obtained by analyzing the low-rank matrix L. For comparison, two types of PCA models are built. The first PCA model (PCA1) simply assumes that there are no outliers in the data; accordingly, all the training data are used to train the model without any modification or selection. The second PCA model (PCA2) tries to delete the observations corrupted by the outliers before arriving at the final monitoring model, where the abnormal observations are removed from the training set in an iterative way. In detail, an overall PCA model is first built using the entire set of training data, and the control limits of T² and SPE are calculated. Based on these two monitoring statistics, all the data in the training set are checked, and the observations with either statistic outside the corresponding control limit are regarded as abnormal and removed from the training set. A new PCA model is then built based on the updated training data. The above procedure is conducted iteratively until the proportion of out-of-control observations is less than the significance level of the control limits. In addition, the robust methods proposed in refs 17 and 16 are also implemented for outlier detection. After eliminating the identified outliers from the training data, PCA models are constructed based on the remaining observations in the dataset, which are used for monitoring the
test data. In the following paragraphs, these two methods are denoted as Hubert's RPCA and Kruger's RPCA, respectively. All five models are used to monitor the test sets containing Faults 5 and 6. Fig. 5 shows the monitoring results of Fault 5 using PCA1. Compared with Fig. 1, it is observed that the control limits in Fig. 5 are higher, causing some faulty samples to appear in statistical control. The results based on PCA2 are displayed in Fig. 6. Although there are no significant missing alarms in these control charts, the number of false alarms before the 161st sampling time point increases significantly. Fig. 7 shows the monitoring results of Fault 5 using Hubert's RPCA. Although Hubert's RPCA model has a lower false alarm rate compared with PCA2, the rate is still too high. To build Kruger's RPCA model, the number of δ-vectors is selected as 10 for fast computation, according to the discussion in ref 16. The results are shown in Fig. 8 and are similar to those in Fig. 7. In contrast, the outliers have no significant influence on the SPCP model: Fig. 9 indicates that the SPCP model detects Fault 5 efficiently, with no increase in false alarms or missing alarms. The monitoring results of Fault 6 show similar patterns, i.e., PCA1 results in more missing alarms (Fig. 10), PCA2 leads to more false alarms (Fig. 11), and SPCP (Fig. 14) outperforms Hubert's RPCA (Fig. 12) and Kruger's RPCA (Fig. 13). To better compare the performance of PCA2, Hubert's RPCA, Kruger's RPCA
ACS Paragon Plus Environment
Page 22 of 46
Page 23 of 46
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60
Industrial & Engineering Chemistry Research
and SPCP, the four methods dealing with process outliers, the above experiments are repeated for 10 times and the average false alarm rates of these methods are summarized in Table 1. From this table, it is noted that the false alarm rates of the PCA2 model are much higher than the specified significance level 0.05. In comparison, Hubert's RPCA and Kruger's RPCA perform much better. However, their false alarm rates are still too high, especially when dealing with Fault 5. SPCP model gives the best performance out of the four models, which provides reasonable false alarm rates in spite of the outliers in the training set. In our experiments, it is noticed that, when there are more severe outliers existing in the training data, the performance of PCA1 and PCA2 degrades even further, while SPCP maintains its robustness and good detection efficiency. In order to keep the length of the paper reasonable, these results are not shown here.
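The iterative trimming used to build PCA2 can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the control limits here use the standard F-distribution limit for T2 and the Box (weighted chi-squared) approximation for SPE, which may differ in detail from the limits used in the paper, and the parameter names `n_pc`, `alpha`, and `max_iter` are assumptions of this sketch.

```python
import numpy as np
from scipy import stats

def pca_stats(X, n_pc):
    """T^2 and SPE of each row of X under a PCA model fitted to X."""
    Xc = X - X.mean(axis=0)
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_pc].T                             # loading matrix
    lam = s[:n_pc] ** 2 / (len(X) - 1)          # variances of retained PCs
    T = Xc @ P                                  # score matrix
    t2 = np.sum(T ** 2 / lam, axis=1)
    spe = np.sum((Xc - T @ P.T) ** 2, axis=1)
    return t2, spe

def control_limits(t2, spe, n, n_pc, alpha):
    """F-distribution limit for T^2; Box chi-squared approximation for SPE."""
    t2_lim = (n_pc * (n - 1) * (n + 1) / (n * (n - n_pc))
              * stats.f.ppf(1 - alpha, n_pc, n - n_pc))
    g = spe.var() / (2.0 * spe.mean())
    h = 2.0 * spe.mean() ** 2 / spe.var()
    spe_lim = g * stats.chi2.ppf(1 - alpha, h)
    return t2_lim, spe_lim

def iterative_trim(X, n_pc, alpha=0.05, max_iter=20):
    """PCA2-style training: fit PCA, drop out-of-limit samples, refit."""
    keep = np.ones(len(X), dtype=bool)
    for _ in range(max_iter):
        t2, spe = pca_stats(X[keep], n_pc)
        t2_lim, spe_lim = control_limits(t2, spe, keep.sum(), n_pc, alpha)
        out = (t2 > t2_lim) | (spe > spe_lim)
        keep[np.flatnonzero(keep)[out]] = False
        if out.mean() <= alpha:                 # stopping rule from the text
            break
    return keep
```

On data with a few gross outliers, the first pass typically flags and removes them, after which the loop keeps trimming borderline samples; this over-trimming is one way to understand the inflated false alarm rates that PCA2 exhibits above.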
5. Conclusions
In this paper, a robust process modeling and monitoring method is developed based on a matrix recovery technique named SPCP. Despite its outstanding mathematical properties, SPCP had previously been applied only to image and video processing; this is the first attempt to extend SPCP to the field of MSPM. The detailed steps of SPCP-based robust modeling are described, including the calculation of the monitoring statistics, and an online monitoring procedure is also proposed. The applications to the TE process show that the proposed method performs similarly to PCA when the training data are clean and preserves its effectiveness when the training data are contaminated by outliers.
In future research, an important problem deserving further study is parameter selection. Ref 27 suggests that the parameters in an SPCP model can be specified as
λ = 1/√p,  (24)
and
μ = √(2p) σ,  (25)
where p is the larger of the row number n and the column number m of the matrix to be decomposed, and σ is the standard deviation of the noise. However, our experiments showed that parameters chosen in this way do not always perform well, owing to the variety of possible outliers. Although it is not easy to choose optimal parameters in an unsupervised problem, it is believed that even superficial knowledge about the possible outliers would be helpful.
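The default choices of eqs 24 and 25 can be sketched as below. Since σ is usually unknown in practice, this sketch estimates it with a median-absolute-deviation (MAD) scale per column; that estimator is an assumption of this illustration, not a prescription of ref 27.

```python
import numpy as np

def spcp_default_params(X, sigma=None):
    """Default SPCP parameters (lambda, mu) following eqs 24-25 of the text.

    If sigma (the noise standard deviation) is not supplied, it is
    estimated here via a per-column MAD scale -- an assumption of this
    sketch, made because sigma is rarely known in advance.
    """
    n, m = X.shape
    p = max(n, m)                       # larger of row and column counts
    lam = 1.0 / np.sqrt(p)              # eq 24
    if sigma is None:
        med = np.median(X, axis=0)
        mad = np.median(np.abs(X - med), axis=0)
        sigma = np.mean(1.4826 * mad)   # consistency factor for Gaussian noise
    mu = np.sqrt(2.0 * p) * sigma       # eq 25
    return lam, mu
```

As the text notes, these defaults are a starting point rather than a guarantee: when the outliers are atypical, λ and μ may need to be tuned away from these values.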
Acknowledgment: This work was supported in part by the Ministry of Science and Technology, R.O.C., under Grant No. MOST 104-2221-E-007-129.
References:
(1) Qin, S. J. Survey on data-driven industrial process monitoring and diagnosis. Annual Reviews in Control 2012, 36, 220-234.
(2) Ge, Z.; Song, Z.; Gao, F. Review of recent research on data-based process monitoring. Industrial & Engineering Chemistry Research 2013, 52, 3543-3562.
(3) Yao, Y.; Gao, F. A survey on multistage/multiphase statistical modeling methods for batch processes. Annual Reviews in Control 2009, 33, 172-183.
(4) Jolliffe, I. Principal Component Analysis, 2nd ed.; Springer: New York, 2002.
(5) Nomikos, P.; MacGregor, J. Multivariate SPC charts for monitoring batch processes. Technometrics 1995, 37, 41-59.
(6) Ku, W.; Storer, R.; Georgakis, C. Disturbance detection and isolation by dynamic principal component analysis. Chemometrics and Intelligent Laboratory Systems 1995, 30, 179-195.
(7) Lee, J.; Yoo, C.; Choi, S.; Vanrolleghem, P.; Lee, I. Nonlinear process monitoring using kernel principal component analysis. Chemical Engineering Science 2004, 59, 223-234.
(8) Lu, N.; Gao, F.; Wang, F. Sub-PCA modeling and on-line monitoring strategy for batch processes. AIChE Journal 2004, 50, 255-259.
(9) Yao, Y.; Chen, T.; Gao, F. Multivariate statistical monitoring of two-dimensional dynamic batch processes utilizing non-Gaussian information. Journal of Process Control 2010, 20, 1187-1197.
(10) Zhao, C.; Gao, F. Fault-relevant principal component analysis (FPCA) method for multivariate statistical modeling and process monitoring. Chemometrics and Intelligent Laboratory Systems 2014, 133, 1-16.
(11) Chen, J.; Bandoni, J. A.; Romagnoli, J. A. Robust PCA and normal region in multivariate statistical process monitoring. AIChE Journal 1996, 42, 3563-3566.
(12) Cummins, D. J.; Andrews, C. W. Iteratively reweighted partial least squares: A performance analysis by Monte Carlo simulation. Journal of Chemometrics 1995, 9, 489-507.
(13) Pell, R. J. Multiple outlier detection for multivariate calibration using robust statistical techniques. Chemometrics and Intelligent Laboratory Systems 2000, 52, 87-104.
(14) Kruger, U.; Zhou, Y.; Wang, X.; Rooney, D.; Thompson, J. Robust partial least squares regression: Part I, algorithmic developments. Journal of Chemometrics 2008, 22, 1-13.
(15) Kruger, U.; Zhou, Y.; Wang, X.; Rooney, D.; Thompson, J. Robust partial least squares regression: Part II, new algorithm and benchmark studies. Journal of Chemometrics 2008, 22, 14-22.
(16) Kruger, U.; Zhou, Y.; Wang, X.; Rooney, D.; Thompson, J. Robust partial least squares regression: Part III, outlier analysis and application studies. Journal of Chemometrics 2008, 22, 323-334.
(17) Hubert, M.; Rousseeuw, P. J.; Vanden Branden, K. ROBPCA: A new approach to robust principal component analysis. Technometrics 2005, 47, 64-79.
(18) Huber, P. J. Robust Statistics; Wiley: New York, 1981.
(19) Rousseeuw, P. J. Least median of squares regression. Journal of the American Statistical Association 1984, 79, 871-880.
(20) De la Torre, F.; Black, M. A framework for robust subspace learning. International Journal of Computer Vision 2003, 54, 117-142.
(21) Gnanadesikan, R.; Kettenring, J. R. Robust estimates, residuals, and outlier detection with multiresponse data. Biometrics 1972, 81-124.
(22) Ke, Q.; Kanade, T. Robust L1 norm factorization in the presence of outliers and missing data by alternative convex programming. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), San Diego, CA, USA, 2005; pp 739-746.
(23) Fischler, M. A.; Bolles, R. C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM 1981, 24, 381-395.
(24) Candès, E. J.; Li, X.; Ma, Y.; Wright, J. Robust principal component analysis? Journal of the ACM 2011, 58, 1-37.
(25) Chen, T.; Martin, E.; Montague, G. Robust probabilistic PCA with missing data and contribution analysis for outlier detection. Computational Statistics & Data Analysis 2009, 53, 3706-3716.
(26) Kruger, U.; Xie, L. Statistical Monitoring of Complex Multivariate Processes: With Applications in Industrial Process Control; John Wiley & Sons, Ltd, 2012.
(27) Zhou, Z.; Li, X.; Wright, J.; Candès, E.; Ma, Y. Stable principal component pursuit. In 2010 IEEE International Symposium on Information Theory (ISIT 2010), Austin, TX, USA, 2010; pp 1518-1522.
(28) Cai, J.-F.; Candès, E.; Shen, Z. A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization 2010, 20, 1956-1982.
(29) Cai, J.-F.; Osher, S. Fast singular value thresholding without singular value decomposition. Methods and Applications of Analysis 2013, 20, 335-352.
(30) Lin, Z.; Ganesh, A.; Wright, J.; Wu, L.; Chen, M.; Ma, Y. Fast convex optimization algorithms for exact recovery of a corrupted low-rank matrix. In 3rd IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP 2009), Aruba, Dutch Antilles, 2009.
(31) Rousseeuw, P. J.; Croux, C. Alternatives to the median absolute deviation. Journal of the American Statistical Association 1993, 88, 1273-1283.
(32) Montgomery, D. C.; Runger, G. C.; Hubele, N. F. Engineering Statistics; John Wiley & Sons, 2009.
(33) Downs, J.; Vogel, E. A plant-wide industrial process control problem. Computers & Chemical Engineering 1993, 17, 245-255.
Figure list:
Figure 1. Monitoring results of Fault 5 using the PCA model trained with uncontaminated data
Figure 2. Monitoring results of Fault 5 using the SPCP model trained with uncontaminated data
Figure 3. Monitoring results of Fault 6 using the PCA model trained with uncontaminated data
Figure 4. Monitoring results of Fault 6 using the SPCP model trained with uncontaminated data
Figure 5. Monitoring results of Fault 5 using the PCA1 model trained with contaminated data
Figure 6. Monitoring results of Fault 5 using the PCA2 model trained with contaminated data
Figure 7. Monitoring results of Fault 5 using Hubert's RPCA model trained with contaminated data
Figure 8. Monitoring results of Fault 5 using Kruger's RPCA model trained with contaminated data
Figure 9. Monitoring results of Fault 5 using the SPCP model trained with contaminated data
Figure 10. Monitoring results of Fault 6 using the PCA1 model trained with contaminated data
Figure 11. Monitoring results of Fault 6 using the PCA2 model trained with contaminated data
Figure 12. Monitoring results of Fault 6 using Hubert's RPCA model trained with contaminated data
Figure 13. Monitoring results of Fault 6 using Kruger's RPCA model trained with contaminated data
Figure 14. Monitoring results of Fault 6 using the SPCP model trained with contaminated data
[Figures 1-14 appear here. Each figure comprises two panels for the fault, model, and training condition given in the corresponding caption of the figure list above: (a) a T2 control chart plotting log(T2) against sampling time points (0-240), and (b) an SPE control chart plotting log(SPE) against sampling time points (0-240).]
Table 1. False alarm rates of different monitoring models in the TE case

Fault Type   Monitoring Model   False Alarm Rate (%)
Fault 5      PCA2               25.00
Fault 5      SPCP               6.25
Fault 5      Hubert's RPCA      14.37
Fault 5      Kruger's RPCA      18.75
Fault 6      PCA2               12.50
Fault 6      SPCP               2.50
Fault 6      Hubert's RPCA      5.63
Fault 6      Kruger's RPCA      5.63
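The false alarm rates reported above can be computed as sketched below: the rate is the fraction of pre-fault samples that raise an alarm on either chart. The helper name `false_alarm_rate` and the default onset index are assumptions of this sketch; the text places the fault onset at the 161st sampling time point, i.e., index 160 when counting from zero.

```python
import numpy as np

def false_alarm_rate(t2, spe, t2_lim, spe_lim, fault_onset=160):
    """Percentage of pre-fault samples exceeding either control limit.

    t2, spe       -- monitoring statistics for the test set, one per sample
    t2_lim, spe_lim -- control limits of the T^2 and SPE charts
    fault_onset   -- index of the first faulty sample (161st point -> 160)
    """
    alarms = (t2[:fault_onset] > t2_lim) | (spe[:fault_onset] > spe_lim)
    return 100.0 * alarms.mean()    # percent, as reported in Table 1
```

Averaging this quantity over the 10 repeated experiments yields the entries of Table 1.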