Article pubs.acs.org/IECR
Statistical Process Monitoring Based on the Kernel Mean Discrepancy Test

Jiusun Zeng,†,‡ Jinhui Cai,† Lei Xie,*,‡ Jianming Zhang,*,‡ and Yong Gu‡

† College of Metrology & Measurement Engineering, China Jiliang University, Hangzhou 310018, China
‡ National Laboratory of Industrial Control Technology, Zhejiang University, Hangzhou 310027, China
ABSTRACT: A fault detection method based on the kernel mean discrepancy test is proposed that addresses the fault detection problem from the perspective of nonparametric statistical testing. The basic idea of kernel mean discrepancy is to project the training data and the test data into a reproducing kernel Hilbert space, in which a mean discrepancy test is performed. Based on the kernel mean discrepancy test, a quantitative analysis of the sensitivity of the test statistic is presented, and a fault detection strategy is developed. The confidence limit is obtained by a moving-window approach, and process faults are then detected sequentially. Simulation and application results show that the method is sensitive to process faults.
1. INTRODUCTION

Thanks to advances in sensor and computer technology, large amounts of data are collected and accumulated during the operation of industrial systems, leading to the rapid development of data-driven fault detection and diagnosis (FDD) methods. Because of their ability to handle data collinearity, multivariate statistical methods, such as techniques based on principal component analysis (PCA),1 partial least-squares (PLS),2 and independent component analysis (ICA),3 have received significant attention. More recently, multivariate statistical methods have been extended to deal with processes with multiple operating conditions.4−6 The general procedure of fault detection using multivariate statistical methods consists of a feature-extraction step and a subsequent step of constructing monitoring statistics for process monitoring. If the data are Gaussian-distributed, the T² and SPE statistics are appropriate. However, if the variables are non-Gaussian, the T² and SPE statistics can produce significant numbers of type 1 and type 2 errors. To deal with non-Gaussian processes, different methods have been developed to design monitoring statistics, such as kernel density estimation,7 Gaussian mixture models,8 and methods based on ICA.9−12 It has been reported that these methods can be problematic if the data distribution is sparse or clustered7 and that they can face singularity problems,8 among other issues. Provided that the probability density function (PDF) of the normal data can be obtained, the task of process monitoring reduces to inspecting whether the test data are generated from the same PDF as the normal data. However, direct estimation of the PDF from data is computationally intensive, and available methods such as kernel density estimation have difficulty handling high-dimensional data sets.
As an alternative, various two-sample test approaches have been developed in machine learning and statistics to test whether two sets of samples are drawn from the same distribution, among which kernel mean discrepancy13 is a state-of-the-art approach. The basic idea of kernel mean discrepancy is to project the normal and test data into the reproducing kernel Hilbert space (RKHS), in which the mean discrepancy test is performed. If the test data share the same distribution as the normal data, the expectation of the mean discrepancy is zero. Hence, the mean discrepancy in the kernel space can be used as a test statistic to determine whether the normal and test data sets are sampled from the same distribution. In this article, a quantitative analysis of the sensitivity of the kernel mean discrepancy is presented, and a fault detection method based on the test statistic is proposed. A moving-window approach is first used to divide the test data into consecutive subsets, and the kernel mean discrepancy between each subset and the normal data set is computed. If the discrepancy is large, it is likely that the subset contains faulty data. Hence, the key step is to determine the critical value (confidence limit) of the mean discrepancy in the RKHS. To determine the confidence limit, the moving-window approach is again performed on the normal data set, and the 95th or 99th percentile of the test statistic is used as the confidence limit when developing the normal operation model. With the confidence limit obtained, an online monitoring strategy is then proposed. The proposed monitoring strategy is nonparametric and distribution-free, so it is applicable to both Gaussian- and non-Gaussian-distributed data, and it is easy to implement because it does not involve direct estimation of the PDF. Simulation and application studies of the blast-furnace ironmaking process show that the proposed method is more sensitive to process faults than monitoring strategies based on support vector data description (SVDD)14 and multidimensional mutual information (MMI).15
Received: September 27, 2012
Revised: January 9, 2013
Accepted: January 9, 2013
Published: January 9, 2013

dx.doi.org/10.1021/ie3026345 | Ind. Eng. Chem. Res. 2013, 52, 2000−2007

2. PRELIMINARIES

This section introduces some basic concepts related to the reproducing kernel Hilbert space (RKHS). The RKHS is a Hilbert space of functions in which pointwise evaluation is a continuous linear functional. The concept of the RKHS is widely used
in many subjects, such as approximation theory, machine learning theory, and statistics. It is commonly defined over the field of real numbers, ℝ, or of complex numbers, ℂ. For convenience, we consider only the case of ℝ. For a given set 𝒳 ⊆ ℝ^d, the set ℱ(𝒳, ℝ) of all functions from 𝒳 to ℝ is a vector space over ℝ when equipped with the usual operations of addition, (f + g)(x) = f(x) + g(x), and scalar multiplication, (λf)(x) = λf(x), for f, g ∈ ℱ and x ∈ 𝒳. A space ℋ is a reproducing kernel Hilbert space on 𝒳 over ℝ provided that the following conditions hold: (1) ℋ is a vector subspace of ℱ, so that ℋ is a function space. (2) ℋ is endowed with an inner product ⟨·,·⟩ and the associated norm ‖f‖ = ⟨f, f⟩^(1/2) for f ∈ ℋ, making it a complete Hilbert space. (3) For every x ∈ 𝒳, the linear evaluation functional Q_x: ℋ → ℝ defined by Q_x(f) = f(x) is bounded. For an RKHS ℋ, there exists a unique function k(x, ·) ∈ ℋ such that f(x) = ⟨f, k(x, ·)⟩ holds for all f ∈ ℋ and each x ∈ 𝒳,16 which is the reproducing property of the inner product ⟨·,·⟩. The function k(x, ·) is called the reproducing kernel for the point x, and the two-variable function k(y, x) = k_x(y) = ⟨k(y, ·), k(x, ·)⟩ is called the reproducing kernel for ℋ.
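As a brief numerical illustration of these definitions (our own sketch, not part of the original text; all names are illustrative), the reproducing property and the boundedness of the evaluation functional can be checked for a Gaussian kernel and a function of the form f = Σ_i α_i k(x_i, ·):

```python
import numpy as np

def kernel(x, y, sigma=1.0):
    """Gaussian reproducing kernel k(x, y) = exp(-||x - y||^2 / sigma)."""
    d = np.asarray(x, float) - np.asarray(y, float)
    return np.exp(-(d @ d) / sigma)

rng = np.random.default_rng(0)
xs = rng.normal(size=(5, 2))    # expansion points x_i
alpha = rng.normal(size=5)      # coefficients of f = sum_i alpha_i k(x_i, .)

# Pointwise evaluation via the reproducing property f(x) = <f, k(x, .)>
x = np.array([0.3, -0.7])
f_x = alpha @ np.array([kernel(xi, x) for xi in xs])

# RKHS norm ||f||^2 = alpha^T K alpha, computed through the Gram matrix
K = np.array([[kernel(a, b) for b in xs] for a in xs])
f_norm = np.sqrt(alpha @ K @ alpha)

# Boundedness of the evaluation functional (condition 3):
# |f(x)| <= ||f|| * k(x, x)^(1/2) by the Cauchy-Schwarz inequality
assert abs(f_x) <= f_norm * np.sqrt(kernel(x, x))
```

The Gram matrix of a Gaussian kernel is positive semidefinite, so the norm is well-defined, and the Cauchy–Schwarz bound above is exactly the boundedness of Q_x.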
3. KERNEL MEAN DISCREPANCY AS A TEST STATISTIC

3.1. Rationale of Kernel Mean Discrepancy. Kernel mean discrepancy was first proposed to deal with the two-sample test problem,17 which tests whether two samples are drawn from the same distribution. Let p and q be two probability distributions defined on a compact metric space ℝ^d (closed and bounded), and let X = {x_i}, x_i ∈ ℝ^d, i = 1, 2, ..., N, and Y = {y_j}, y_j ∈ ℝ^d, j = 1, 2, ..., M, be observations drawn from p and q, respectively. The kernel mean discrepancy investigates the difference between p and q by exploiting the difference between the samples mapped into the universal reproducing kernel Hilbert space (RKHS) ℋ, that is, the difference between the expectations of k(x, ·) and k(y, ·). The rationale is that, for a compact metric space ℝ^d, p = q holds if and only if E_p[f(x)] = E_q[f(y)] for all continuous bounded functions f on ℝ^d. As a result, the difference between the expectations of the kernel function in the RKHS ℋ can be used to test whether p = q without direct estimation of the density functions, which significantly simplifies the distribution test. When the maximum difference of the mean function is significant, the two samples are likely to be drawn from different distributions. More formally, let ℱ be a class of functions f: 𝒳 → ℝ. Then, the maximum mean discrepancy (MMD) is defined as

MMD(p, q, ℱ) = sup_{f∈ℱ} (E_p[f(x)] − E_q[f(y)])  (1)

The empirical estimate of the MMD is

MMD(X, Y, ℱ) = sup_{f∈ℱ} [(1/N) Σ_{i=1}^N f(x_i) − (1/M) Σ_{j=1}^M f(y_j)]  (2)

where the supremum operator denotes the least upper bound of a set of values. From eqs 1 and 2, it can be seen that the value of the MMD relies heavily on the class of functions that it employs. Theoretically, any class of continuous bounded functions on ℝ^d that allows p = q to be identified uniquely can be used. However, such a rich function class is impractical to work with in the case of finite samples. In practice, the function class should be rich enough to uniquely identify whether p and q are equal, yet restrictive enough to provide useful finite-sample estimates. An ideal choice for the function class ℱ is the unit ball in the RKHS ℋ with associated kernel k(·,·), that is, the set of functions f ∈ ℋ with ‖f‖ ≤ 1. With this function class, the empirical MMD can be computed efficiently, and MMD(p, q, ℱ) = 0 if and only if p = q.13 A common choice of the kernel function k(·,·) is the Gaussian kernel

k(x_i, x_j) = exp[−(x_i − x_j)ᵀ(x_i − x_j)/σ]  (3)

where σ > 0 is the kernel parameter. In an RKHS, a function value can be written in the form of a dot product as f(x) = ⟨k(x, ·), f⟩. Hence, the MMD can be rewritten as

MMD(p, q, ℱ) = sup_{‖f‖≤1} ⟨μ_p − μ_q, f⟩ = ‖μ_p − μ_q‖_ℋ  (4)

where μ_p = E_p[k(x, ·)] is the expectation of k(x, ·), so that E_p[f(x)] = E_p[⟨k(x, ·), f⟩] = ⟨μ_p, f⟩. In practice, the following squared form of the MMD is commonly employed

MMD²(p, q, ℱ) = ‖μ_p − μ_q‖²_ℋ = ⟨μ_p, μ_p⟩_ℋ − 2⟨μ_p, μ_q⟩_ℋ + ⟨μ_q, μ_q⟩_ℋ  (5)

For two sets of samples X = {x_i}_{i=1}^N and Y = {y_j}_{j=1}^M drawn from p and q, the empirical estimates of μ_p and μ_q can be determined as

μ_X = (1/N) Σ_{i=1}^N k(x_i, ·)  (6)

μ_Y = (1/M) Σ_{j=1}^M k(y_j, ·)  (7)

and the squared MMD can be determined by its empirical estimate as

MMD²(X, Y, ℱ) = (1/N²) Σ_{i,j=1}^N k(x_i, x_j) − (2/MN) Σ_{i=1}^N Σ_{j=1}^M k(x_i, y_j) + (1/M²) Σ_{i,j=1}^M k(y_i, y_j)  (8)

In the two-sample test problem, MMD² is used as the test statistic to determine whether X and Y are sampled from the same distribution. It is shown in ref 13 that, under the assumption that M = N, the empirical estimate of MMD²(p, q, ℱ) in eq 8 is asymptotically unbiased. Because the MMD statistic involves kernelization, we call it the kernel mean discrepancy (KMD) here. The advantages of kernelization are obvious: in the RKHS, even linear statistics can capture potential nonlinear relationships in the original space.18 Moreover, the KMD test directly compares the data points and is applicable to arbitrary data distributions because no density estimation is involved.
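The empirical statistic of eq 8 is straightforward to compute from Gram matrices. The following NumPy sketch is our own illustration (function and variable names are hypothetical, not from the paper); it computes the biased empirical estimate of eq 8 for the Gaussian kernel of eq 3 and includes the common median-distance heuristic for choosing σ mentioned later in section 5:

```python
import numpy as np

def gaussian_gram(A, B, sigma):
    """Gram matrix K[i, j] = exp(-||a_i - b_j||^2 / sigma), as in eq 3."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / sigma)

def mmd2(X, Y, sigma):
    """Biased empirical MMD^2 of eq 8 between samples X (N x d) and Y (M x d)."""
    N, M = len(X), len(Y)
    return (gaussian_gram(X, X, sigma).sum() / N**2
            - 2.0 * gaussian_gram(X, Y, sigma).sum() / (M * N)
            + gaussian_gram(Y, Y, sigma).sum() / M**2)

def median_sigma(X):
    """Median squared pairwise distance, a common bandwidth heuristic."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    return np.median(d2[d2 > 0])

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
Y_same = rng.normal(size=(200, 3))           # same distribution as X
Y_fault = rng.normal(size=(200, 3)) + 0.8    # mean-shifted (a sensor bias)

sigma = median_sigma(X)
stat_same = mmd2(X, Y_same, sigma)
stat_fault = mmd2(X, Y_fault, sigma)
```

For two samples from the same distribution, the biased statistic stays close to zero (it is a squared RKHS norm and hence nonnegative), while a mean shift of the kind analyzed in section 3.2 inflates it markedly.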
3.2. Sensitivity Analysis. To successfully detect a fault, the sensitivity of the monitoring statistic is an important issue. In fact, the MMD is an integral probability metric defined as

MMD(p, q, ℱ) = sup_{f∈ℱ} [∫ f(x) p(x) dx − ∫ f(x) q(x) dx]  (9)

We limit the MMD to the unit ball of the RKHS ℋ. The reproducing property of the RKHS allows the value of the function f to be extracted as f(x) = ⟨f, k(x, ·)⟩. Hence, the MMD can be described in terms of the kernel function as

MMD(p, q, ℋ) = sup_{‖f‖_ℋ ≤ 1} [∫ ⟨f, k(x, ·)⟩ p(x) dx − ∫ ⟨f, k(x, ·)⟩ q(x) dx]
             = sup_{‖f‖_ℋ ≤ 1} ⟨f, ∫ k(x, ·) p(x) dx − ∫ k(x, ·) q(x) dx⟩
             = ‖∫ k(x, ·) p(x) dx − ∫ k(x, ·) q(x) dx‖_ℋ  (10)

Correspondingly, eq 9 can be rewritten in the form

MMD²(p, q, ℋ) = ∬ k(x, x′) p(x) p(x′) dx dx′ + ∬ k(x, x′) q(x) q(x′) dx dx′ − 2 ∬ k(x, x′) p(x) q(x′) dx dx′  (11)

A more detailed derivation can be found in ref 19. According to the definition of the MMD, when {x_i}_{i=1}^N and {y_j}_{j=1}^M are sampled from the same distribution, E{MMD²(X, Y, ℱ)} = 0. Now assume that {x_i}_{i=1}^N and {y_j}_{j=1}^M are from the same distribution, so that p = q, and consider the following sensor fault

ỹ_i = y_i + Δy  (12)

The MMD² between X and Ỹ = {ỹ_j}_{j=1}^M can be computed as

MMD²(X, Ỹ, ℱ) = (1/N²) Σ_{i,j=1}^N k(x_i, x_j) − (2/MN) Σ_{i=1}^N Σ_{j=1}^M k(x_i, ỹ_j) + (1/M²) Σ_{i,j=1}^M k(ỹ_i, ỹ_j)
             = (1/N²) Σ_{i,j=1}^N k(x_i, x_j) − (2/MN) Σ_{i=1}^N Σ_{j=1}^M k(x_i, y_j + Δy) + (1/M²) Σ_{i,j=1}^M k(y_i + Δy, y_j + Δy)
             = (1/N²) Σ_{i,j=1}^N k(x_i, x_j) − (2/MN) Σ_{i=1}^N Σ_{j=1}^M k(x_i, y_j + Δy) + (1/M²) Σ_{i,j=1}^M k(y_i, y_j)  (13)

where the last equality holds because the Gaussian kernel depends only on the difference of its arguments, so that k(y_i + Δy, y_j + Δy) = k(y_i, y_j). Hence, the bias enters only through the cross term of eq 8, and

MMD²(X, Ỹ, ℱ) − MMD²(X, Y, ℱ) = (2/MN) Σ_{i=1}^N Σ_{j=1}^M [k(x_i, y_j) − k(x_i, y_j + Δy)]  (14)

For the sake of simplicity, consider the case of N = M, for which the change of MMD² due to the sensor fault becomes

MMD²(X, Ỹ, ℱ) − MMD²(X, Y, ℱ)
= (2/N²) Σ_{i,j=1}^N {exp[−(x_i − y_j)ᵀ(x_i − y_j)/σ] − exp[−(x_i − y_j − Δy)ᵀ(x_i − y_j − Δy)/σ]}
= (2/N²) Σ_{i,j=1}^N {1 − exp(−ΔyᵀΔy/σ) exp[2(x_i − y_j)ᵀΔy/σ]} exp[−(x_i − y_j)ᵀ(x_i − y_j)/σ]  (15)

In this equation, exp[2(x_i − y_j)ᵀΔy/σ] can be approximated by a second-order Taylor expansion as

exp[2(x_i − y_j)ᵀΔy/σ] ≈ 1 + 2(x_i − y_j)ᵀΔy/σ + 2[(x_i − y_j)ᵀΔy]²/σ²  (16)

Substituting eq 16 into eq 15 gives
MMD²(X, Ỹ, ℱ) − MMD²(X, Y, ℱ)
≈ (2/N²) Σ_{i,j=1}^N [1 − exp(−ΔyᵀΔy/σ)] exp[−(x_i − y_j)ᵀ(x_i − y_j)/σ]
− (2/N²) Σ_{i,j=1}^N [2(x_i − y_j)ᵀΔy/σ] exp(−ΔyᵀΔy/σ) exp[−(x_i − y_j)ᵀ(x_i − y_j)/σ]
− (2/N²) Σ_{i,j=1}^N {2[(x_i − y_j)ᵀΔy]²/σ²} exp(−ΔyᵀΔy/σ) exp[−(x_i − y_j)ᵀ(x_i − y_j)/σ]  (17)

Because x_i and y_j are sampled from the same distribution, the difference x_i − y_j is symmetrically distributed, so the first-order term of eq 17 has zero expectation

E{(2/N²) Σ_{i,j=1}^N [2(x_i − y_j)ᵀΔy/σ] exp(−ΔyᵀΔy/σ) exp[−(x_i − y_j)ᵀ(x_i − y_j)/σ]} = 0  (18)

In expectation, eq 17 therefore reduces to

MMD²(X, Ỹ, ℱ) − MMD²(X, Y, ℱ) ≈ (2/N²) Σ_{i,j=1}^N {[1 − exp(−ΔyᵀΔy/σ)] − 2[(x_i − y_j)ᵀΔy]² exp(−ΔyᵀΔy/σ)/σ²} exp[−(x_i − y_j)ᵀ(x_i − y_j)/σ]  (19)

To see that this change is positive, note from eq 14 that, for p = q, E[k(x_i, y_j + Δy)] = g(Δy), where g(t) = E[k(x, y + t)] is the kernel-smoothed correlation function of the common density; g is a positive-definite function and therefore attains its maximum at t = 0. Consequently, E{MMD²(X, Ỹ, ℱ) − MMD²(X, Y, ℱ)} = 2[g(0) − g(Δy)] > 0 for any Δy ≠ 0, and the MMD² statistic is sensitive to the fault.

4. FAULT DETECTION STRATEGY BASED ON KERNEL MEAN DISCREPANCY

Considering the advantages of KMD in comparing data distributions, we now develop a monitoring strategy using the KMD test. The monitoring strategy consists of two steps, namely, modeling of the normal operating conditions and online monitoring.

4.1. Developing the Normal Operating Condition (NOC) Model. Let X = {x_i}_{i=1}^N denote the normal sample set collected and stored in the database, and assume that the test sample set has M samples, denoted Y = {y_j}_{j=1}^M. The MMD² statistic can then be computed using eq 8. Theoretically, the value of MMD² is zero if and only if X and Y are sampled from the same distribution; otherwise, it is greater than zero.13 This can be translated into the following hypothesis test:

H0: X and Y are sampled from the same distribution, so that MMD² = 0
H1: X and Y are sampled from different distributions, so that MMD² ≠ 0

In practice, however, the MMD² statistic is not exactly zero even when X and Y are drawn from the same distribution, because of sampling effects. Hence, a confidence limit should be determined before the hypothesis test can be used for fault detection. Reference 13 gave confidence limits for the case of M = N under the assumption that the data are Gaussian; for the case of M ≠ N, however, no confidence limit is available. Here, we propose a moving-window approach that obtains the confidence limit from the normal data set by sampling the distribution of the test statistic under the null hypothesis, as follows:
(1) Divide the normal data set X = {x_i}_{i=1}^N into two subsets: the first N/2 samples form the reference set, denoted X̂, and the remaining N/2 samples form the set X̃.
(2) Select the first through Mth samples of X̃, denote this set as Ỹ1, and compute the MMD² statistic between X̂ and Ỹ1 using eq 8.
(3) Construct a new test set by the moving-window approach; that is, select the second through (M + 1)th samples of X̃, denote this set as Ỹ2, and compute the MMD² statistic between X̂ and Ỹ2 using eq 8.
(4) Repeat the moving-window procedure until (N/2) − M + 1 MMD² statistics have been obtained.
(5) Determine the critical value CVδ as the 100 × δ percentile of the (N/2) − M + 1 values of the MMD² statistic, where δ is commonly selected as 95% or 99%.
It should be noted that plenty of normal samples can usually be collected, so that N/2 ≫ M and the 95th or 99th percentile of the (N/2) − M + 1 values of the MMD² statistic can be regarded as a reasonable confidence limit.

4.2. Online Monitoring. In the context of fault detection, a large number of normal samples can be collected and stored, whereas the test samples are collected online sequentially (i.e., one by one). Ideally, each sample collected online would be compared with the normal set and judged normal or faulty by checking the MMD² statistic. However, this is not feasible, because KMD deals with the fault detection problem by comparing the distributions of two sample sets, which requires that both the normal and test sample sets contain enough samples. In this article, we use a moving-window approach to construct the test set by including the latest sample and its past M − 1 samples. The MMD² statistic can then be computed through eq 8 and compared with the confidence limit; if MMD² is greater than the confidence limit, the test set contains faulty samples. More specifically, assume that the confidence limit has been obtained from the normal sample set X = {x_i}_{i=1}^N. Then, online monitoring consists of the following steps:
(1) Collect the new sample x_{N+1} and construct the corresponding test set X_{N+1} = {x_k}_{k=N−M+2}^{N+1}.
(2) Compute the MMD² statistic between X̂ and X_{N+1} using eq 8 and compare it with the confidence limit CVδ. If the statistic is larger than CVδ, a faulty condition is indicated.
(3) Collect the sample at the next time instant and construct the corresponding test set by including the latest sample and removing the earliest sample.
(4) Compute the MMD² statistic and compare it with the confidence limit.
(5) Repeat steps 3 and 4.
Determination of the window length M (the number of samples in the test set) is an important issue. As shown in section 3, the empirical estimate of MMD² converges asymptotically to the true value as N and M tend to infinity. Hence, increasing the window length yields a more accurate estimate and fewer false alarms; however, it can also decrease the sensitivity of the monitoring statistic in fault detection, as shown in refs 20 and 21. Conversely, a smaller window length produces more false alarms but is more sensitive to process faults. In practice, a tradeoff is made between the false alarm rate and the sensitivity of the monitoring statistic.
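The confidence-limit procedure of section 4.1 and the online monitoring steps above can be sketched together as follows. This is our own illustration under assumed parameter values (σ, M, and the data are hypothetical); the mmd2 helper re-implements eq 8:

```python
import numpy as np

def mmd2(X, Y, sigma):
    """Biased empirical MMD^2 (eq 8) with a Gaussian kernel of width sigma."""
    def gram(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / sigma)
    # gram(.,.).mean() divides the double sum by the product of sample sizes
    return gram(X, X).mean() - 2.0 * gram(X, Y).mean() + gram(Y, Y).mean()

def confidence_limit(X, M, sigma, delta=0.99):
    """Steps 1-5 of section 4.1: moving-window percentile of MMD^2
    computed on the normal data only."""
    half = len(X) // 2
    Xref, Xrest = X[:half], X[half:]
    stats = [mmd2(Xref, Xrest[s:s + M], sigma)
             for s in range(len(Xrest) - M + 1)]
    return np.quantile(stats, delta)

def monitor(Xref, stream, M, sigma, cv):
    """Section 4.2: slide a length-M window over the online samples and
    flag windows whose MMD^2 exceeds the confidence limit cv."""
    alarms = []
    for t in range(M, len(stream) + 1):
        alarms.append(mmd2(Xref, stream[t - M:t], sigma) > cv)
    return np.array(alarms)

rng = np.random.default_rng(2)
normal = rng.normal(size=(600, 2))
sigma, M = 4.0, 50                      # kernel width fixed by hand here
cv = confidence_limit(normal, M, sigma)

# Online data: 100 normal samples followed by 100 with a sensor bias
stream = np.vstack([rng.normal(size=(100, 2)),
                    rng.normal(size=(100, 2)) + 1.0])
alarms = monitor(normal[:300], stream, M, sigma, cv)
```

Windows dominated by biased samples exceed the 99th-percentile limit, while fully normal windows rarely do, mirroring the tradeoff between window length and sensitivity discussed above.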
5. SIMULATION EXAMPLE

In this section, a simulation example involving a non-Gaussian process is considered. For comparison, monitoring strategies based on support vector data description (SVDD) and multidimensional mutual information (MMI) are used. SVDD determines whether a test sample is normal or faulty by projecting it into a kernel space and inspecting the distance between the sample and a center point in the kernel space,14 whereas MMI is a moving-window approach that inspects a fault detection index (DMMI) combining multidimensional mutual information with an ICA-based statistic.15

Consider a process with five variables that are linear combinations of three non-Gaussian source signals. The three source signals are generated as follows

s1(k) = 0.2 cos(0.08k) sin(0.006k)
s2(k) = 0.8 cos(0.1k) − 0.6 sin(0.05k)
s3(k) ~ N(0, 0.25)  (21)

where k is the sampling time. Based on the three source signals, five process variables are generated as z = As, with s = [s1 s2 s3]ᵀ. The mixing matrix A is given by

A = [  0.86   0.79   0.67
      −0.55   0.65   0.46
       0.17   0.32  −0.28
      −0.33   0.12   0.27
       0.89  −0.97  −0.74 ]  (22)

The process data are contaminated by Gaussian-distributed noise, ẑ = As + e, with E(e) = 0 and E(eeᵀ) = 0.0025I. Using this model, 2000 samples representing normal operating conditions were generated. ICA was performed to extract three non-Gaussian components, and the NOC model was then developed for both the extracted non-Gaussian components and the residual components using the KMD test.

The KMD test relies on the choice of kernel function and, hence, the kernel parameter in eq 3. A heuristic method that sets σ to the median distance between samples was proposed and used in ref 13. More recently, the idea of choosing the σ that produces the maximum MMD² statistic was proposed in ref 22 and confirmed in ref 19, yielding better results than the heuristic method. In this work, we used a grid search to find the σ value maximizing the average MMD². The window length was set as M = 50, and the 99th percentile was used as the confidence limit. To test the performance of the proposed monitoring strategy on the non-Gaussian process, a test set with 2000 samples was generated, with the first 1000 samples from normal conditions and the remaining 1000 samples representing a faulty condition, namely, a sensor bias of magnitude 0.4 in the first process variable. Figure 1 shows the monitoring results for the extracted non-Gaussian components using KMD under the sensor fault.

Figure 1. Monitoring results for the non-Gaussian components using KMD.

As shown in Figure 1, the fault was successfully detected by the MMD² statistic: no false alarms were observed for the first 1000 samples, and very significant violations were observed for the 1000 faulty samples. In contrast, Figure 2 presents the monitoring results using SVDD, for which the kernel width was set as σ = 1000 by trial and error and the significance level was also set as 99%. Figure 2 shows that the sensor fault was also detected by SVDD. However, several false alarms and a large number of missed detections can be observed, in sharp contrast to the monitoring results of KMD.
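For reference, the simulated process of eqs 21 and 22 and the sensor-bias fault can be reproduced in a few lines (our own sketch under the stated noise and fault assumptions; variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
K = 2000                                  # number of samples
k = np.arange(1, K + 1)                   # sampling time

# Three non-Gaussian source signals (eq 21)
s1 = 0.2 * np.cos(0.08 * k) * np.sin(0.006 * k)
s2 = 0.8 * np.cos(0.1 * k) - 0.6 * np.sin(0.05 * k)
s3 = rng.normal(0.0, 0.5, size=K)         # N(0, 0.25): standard deviation 0.5
S = np.vstack([s1, s2, s3])               # 3 x K source matrix

# Mixing matrix A (eq 22): five process variables z = A s
A = np.array([[ 0.86,  0.79,  0.67],
              [-0.55,  0.65,  0.46],
              [ 0.17,  0.32, -0.28],
              [-0.33,  0.12,  0.27],
              [ 0.89, -0.97, -0.74]])

# Measurement noise with covariance 0.0025 I (standard deviation 0.05)
E = rng.normal(0.0, 0.05, size=(5, K))
Z = A @ S + E                             # 5 x K process data

# Sensor bias of magnitude 0.4 in the first process variable,
# applied here to the second half of the record
Z_fault = Z.copy()
Z_fault[0, K // 2:] += 0.4
```

In the paper, the bias is applied to the last 1000 samples of a separate 2000-sample test set; the sketch above applies it to the second half of one record purely for illustration.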
Figure 2. Monitoring results for the non-Gaussian components using SVDD.
On the other hand, Figure 3 presents the monitoring results of MMI. For MMI, the window length was set as 50, and the confidence limit was also set as 99%.
Figure 3. Monitoring results for the non-Gaussian components using MMI.

Figure 3 shows that significant violations were observed for the DMMI statistic of the MMI-based technique, although some false alarms appear around sample 150 and a few missed detections can be found. For monitoring of the residual components, the MMD² statistic, the SPE statistic of SVDD, and the DMMI statistic were tested and compared, as shown in Figures 4−6.

Figure 4. Monitoring results for the residual components using KMD.

Figure 5. Monitoring results for the residual components using the SPE statistic.

Figure 6. Monitoring results for the residual components using the MMI-based monitoring statistic.

Figures 4−6 show a similar picture. Again, there are no false alarms for the first 1000 normal samples, and significant violations can be observed for the faulty samples with KMD. In contrast, the SPE statistic of SVDD presents several false alarms and many missed detections. The DMMI statistic successfully detects the fault, but some obvious false alarms can be observed, which can be explained by the fact that the estimation of mutual information requires a sufficient number of samples. The simulation example clearly shows that KMD outperforms the SVDD- and MMI-based monitoring strategies, with fewer false alarms and higher detection sensitivity.

6. APPLICATION STUDY

This section examines the performance of the proposed monitoring strategy by applying it to fault detection in the blast-furnace ironmaking process. A blast furnace is a kind of shaft furnace used to produce molten iron from iron ore. In a blast furnace, raw materials such as iron ore and coke are charged from the top layer by layer, whereas hot air and coal powder are blasted into the furnace from the bottom. The raw materials are heated during their downward movement, and a series of chemical reactions take place throughout the furnace. The end products include molten iron and slag, tapped from the bottom, and flue gases exiting from the top. During the operation of the blast-furnace ironmaking process, it is important to maintain stable gas flows to facilitate the smooth production of molten iron. A typical fault involving the gas flow is the hanging fault, which usually results when the upward gas flow is restricted by excessive fines or when melted iron ore causes the bosh to bridge over.23 If a hang is permitted to continue uncorrected, the top pressure and top heat of the furnace increase, and eventually the top structure can be seriously damaged. In this section, we consider a hanging fault that occurred in a real blast furnace in China with an inner volume of 2500 m³. A total of eight variables are considered, as listed in Table 1. During a hanging fault, the quantity of blast and the H2 concentration in the top gas decrease, and the top pressure and the CO2 and CO concentrations in the top gas increase. For the purpose of fault detection, a training set with 2000 samples was collected under normal operation, and a test set with 1080 samples was considered, with the last 580 samples consisting of faulty samples. As in the simulation example, ICA was first used to extract five non-Gaussian sources from the process data. The NOC model was then constructed for both the source signals and the residuals.
Table 1. Process Variables Selected for Monitoring of the Hanging Fault

variable   description
u1         quantity of blast
u2         blast temperature
u3         blast pressure
u4         top pressure
u5         oxygen enrichment
u6         CO2 concentration in the top gas
u7         CO concentration in the top gas
u8         H2 concentration in the top gas
Again, the kernel width was determined by a grid search, the window length was set as M = 50, and the 99th percentile was used as the confidence limit. Figure 7 shows the
monitoring results for the non-Gaussian components extracted using KMD.

Figure 7. Monitoring results for the non-Gaussian components using KMD for the hanging fault.

It can be seen that, for the first 500 normal samples, no violation of the confidence limit was observed, whereas for the last 580 faulty samples, significant numbers of violations were observed; hence, the fault was successfully detected. Figures 8 and 9 present the monitoring results obtained using SVDD and MMI. For SVDD, the kernel width was determined as σ = 10 by trial and error, and the confidence level was set as 99%; for MMI, the window length was set as 50, and the confidence level was set as 99%.

Figure 8. Monitoring results for the non-Gaussian components using SVDD for the hanging fault.

Figure 9. Monitoring results for the non-Gaussian components using MMI for the hanging fault.

Figure 8 shows that the hanging fault was also successfully detected by SVDD, but with a significant number of missed alarms. Comparing Figures 7 and 8, it can be seen that the proposed KMD-based monitoring strategy had a higher sensitivity than SVDD. On the other hand, the MMI-based monitoring technique also detected significant violations for the last 580 samples, but with many missed detections. This is in accordance with the simulation example; that is, the fault was detected by both MMI and KMD, but KMD had a higher detection sensitivity and fewer false alarms.

Figure 10. Monitoring results for the residual components using KMD for the hanging fault.

Figure 11. Monitoring results for the residual components using SVDD for the hanging fault.

Figures 10−12 present a similar picture. It can be seen that the fault was successfully detected by KMD, and a significant number of violations can be observed. In contrast, SVDD detects the fault, but with lower sensitivity and many missed alarms. MMI is also capable of detecting the fault, but with
some obvious false alarms. This further confirms the effectiveness of the proposed monitoring strategy.

Figure 12. Monitoring results for the residual components using MMI for the hanging fault.

7. CONCLUDING SUMMARY

This article proposes monitoring process faults using the kernel mean discrepancy approach, which compares the test samples and the normal samples in terms of their PDFs. The normal and test samples are projected into the RKHS, and their mean functions are then compared. If the difference between the mean functions is large, the test samples are likely to contain faulty samples. A quantitative analysis of the sensitivity of the KMD statistic was presented, and a moving-window approach was developed to obtain the confidence limit of the statistic. Based on this confidence limit, a moving-window fault detection strategy was then developed. The monitoring statistic does not rely on any distribution assumption and, hence, can be applied to both Gaussian and non-Gaussian processes. Application studies involving a simulation example and a blast-furnace fault were used to test the performance of the proposed monitoring strategy. The results show that the proposed approach has higher sensitivity and fewer false alarms than the SVDD- and MMI-based approaches.

■ AUTHOR INFORMATION

Corresponding Author
*E-mail: [email protected] (L.X.), [email protected] (J.Z.).

Notes
The authors declare no competing financial interest.

■ ACKNOWLEDGMENTS

The authors acknowledge sponsorship and financial support from the Natural Science Foundation of China under Grants 61203088, 60904039, and 60974100; the Zhejiang Provincial Natural Science Foundation under Grant LQ12F03015; and the Fundamental Research Funds for the Central Universities.

■ REFERENCES

(1) Tharrault, Y.; Mourot, G.; Ragot, J.; Maquin, D. Int. J. Appl. Math. Comput. Sci. 2008, 18, 429−442.
(2) Kruger, U.; Dimitriadis, G. AIChE J. 2008, 54, 2581−2596.
(3) Ge, Z. Q.; Song, Z. H. Ind. Eng. Chem. Res. 2007, 46, 2054−2063.
(4) Ge, Z. Q.; Song, Z. H. AIChE J. 2010, 56, 2838−2849.
(5) Yu, J. AIChE J. 2011, 57, 1817−1828.
(6) Yu, J. Ind. Eng. Chem. Res. 2011, 50, 3390−3402.
(7) Chen, Q.; Kruger, U.; Leung, A. Y. T. Control Eng. Pract. 2004, 12, 267−274.
(8) Yu, J.; Qin, S. J. AIChE J. 2008, 54, 1811−1829.
(9) Kano, S.; Tanaka, S.; Hasebe, S.; Hashimoto, I.; Ohno, H. AIChE J. 2003, 49, 969−976.
(10) Lee, J. M.; Yoo, C. K.; Lee, I. B. J. Process Control 2004, 14, 467−485.
(11) Lee, J. M.; Qin, S. J.; Lee, I. B. AIChE J. 2006, 52, 3501−3514.
(12) Zhang, Y. W.; Zhang, Y. Chem. Eng. Sci. 2010, 65, 4630−4639.
(13) Gretton, A.; Borgwardt, K. M.; Rasch, M. J.; Schölkopf, B.; Smola, A. J. Mach. Learn. Res. 2012, 13, 723−743.
(14) Liu, X. Q.; Xie, L.; Kruger, U.; Littler, T.; Wang, S. Q. AIChE J. 2008, 54, 2379−2391.
(15) Rashid, M. M.; Yu, J. Chemom. Intell. Lab. Syst. 2012, 115, 44−58.
(16) Steinwart, I. J. Mach. Learn. Res. 2001, 2, 67−93.
(17) Devore, J. L. Probability and Statistics for Engineering and the Sciences, 8th ed.; Brooks/Cole: Pacific Grove, CA, 2011.
(18) Joachims, T. Support Vector and Kernel Methods; SIGIR 2003 Tutorial; Cornell University: Ithaca, NY, 2003.
(19) Sugiyama, M.; Suzuki, T.; Itoh, Y.; Kanamori, T.; Kimura, M. Neural Networks 2011, 24, 735−751.
(20) Wang, X.; Kruger, U.; Irwin, G. W. Ind. Eng. Chem. Res. 2005, 44, 5691−5702.
(21) Zhang, Q.; Basseville, M.; Benveniste, A. Automatica 1994, 30, 95−114.
(22) Sriperumbudur, B. K.; Fukumizu, K.; Gretton, A.; Lanckriet, G. R. G.; Schölkopf, B. Kernel Choice and Classifiability for RKHS Embeddings of Probability Distributions. In Advances in Neural Information Processing Systems (NIPS); Bengio, Y., Schuurmans, D., Lafferty, J., Williams, C. K. I., Culotta, A., Eds.; Neural Information Processing Systems Foundation: La Jolla, CA, 2009; pp 1750−1758.
(23) Strassburger, J. H. Blast Furnace Theory and Practice; Taylor and Francis: London, 1969.