Adaptive Gaussian Mixture Model-based relevant sample selection for JITL soft sensor development

Miao Fan, Zhiqiang Ge∗, Zhihuan Song
State Key Laboratory of Industrial Control Technology, Institute of Industrial Process Control, Department of Control Science and Engineering, Zhejiang University, Hangzhou, 310027, P. R. China
Abstract

Just-in-time learning (JITL) has recently been used for online soft sensor modeling. Unlike traditional global modeling methods, the JITL-based method builds a local model from historical samples that are similar to a query sample, so that both nonlinearity and changes of the process characteristics can be well handled. A key issue in JITL is to establish a suitable similarity criterion to select relevant samples. Conventional JITL methods, which use distance-based similarity measures for local modeling, may be inappropriate for many industrial processes exhibiting time-varying and non-Gaussian behaviors. In this paper, a GMM-based similarity measure is proposed to improve the prediction accuracy of the JITL soft sensor. By taking the non-Gaussianity of the process data and the characteristics of the query sample into account, a more suitable similarity criterion is defined for sample selection of the JITL soft sensor, and better modeling performance can be achieved. Case studies on a numerical example as well as an industrial process are presented to evaluate the feasibility and effectiveness of the proposed method.
Keywords: Just-In-Time-Learning; Soft sensor; Gaussian mixture model; Similarity criterion; Non-Gaussian data
∗ Corresponding author. Tel.: +86-87951442; E-mail address: [email protected]
1. Introduction

In the process industry, data-based methods have been widely used for monitoring and soft sensing of variables that are difficult to measure online.1,2 Conventionally used data-based soft sensor modeling methods include principal component regression (PCR), partial least squares (PLS), artificial neural networks (ANN), support vector machines (SVM), etc.3-6 However, the construction of high-performance soft sensors is a laborious task, as the input variables and the training data samples for model construction have to be selected carefully and the parameters must be tuned appropriately. Even when a good soft sensor is obtained, its prediction performance will deteriorate gradually due to changes in process characteristics, such as catalyst deactivation, process drift and changes in the state of the chemical plant. To update models automatically when process characteristics change, the moving window (MW) model and the recursive model were developed, which update models with new samples that reflect the process changes.7-9 For example, an MW model is reconstructed with the most recently measured data.10 However, although those methods can adapt soft sensor models to a new operating condition, they have difficulty coping with rapid or abrupt process changes. Besides, recursive soft sensors may not function well in a new operating region until sufficient time has passed for them to adapt to the new operating condition. To address these shortcomings, the just-in-time learning (JITL) method was proposed as an attractive alternative that copes with nonlinearity as well as changes in process characteristics.11-13 In the JITL framework, a local model is built from the historical data around a query sample whenever an estimate for that sample is required. Different from traditional global modeling methods, the JITL-based model exhibits a local model structure. By using local models, the current operating condition can be well tracked, and the model is able to cope with process nonlinearity as well as changes of process characteristics.
A key issue in JITL is to establish a suitable similarity criterion to select the relevant training data samples.14 In general, the similarity is defined on the basis of the Euclidean distance or the Mahalanobis distance between data samples. Distance-based similarity measures do not take the correlation among variables into account. Cheng and Chiu11 proposed a similarity criterion that selects samples on the basis of not only the distance but also the angle between two samples. However, the angle does not always describe the correlation among variables adequately, because some samples are orthogonal to each other. Another popular correlation-based similarity criterion, introduced by Fujiwara et al., is based on the Q and T2 statistics of PCA.15 While the aforementioned methods are suitable for processes whose variables are Gaussian-distributed, the soft sensing performance may deteriorate when non-Gaussian variables are incorporated for soft sensor modeling. In fact, assuming an inappropriate distribution can have an undesired effect on the selection of samples for local modeling. In this case, a similarity criterion that handles both the correlation among variables and non-Gaussian data characteristics would be a better choice. On the other hand, most conventional methods are global similarity measures, which may not be reliable for selecting similar samples in processes with obvious local features. A more appropriate way is to combine local similarities in different Gaussian components through a weighted function. For example, when dealing with a multimode process, the operating mode with a higher posterior probability is given a larger weight in the combination, so relevant samples can be selected more properly. In this paper, a novel relevant sample selection strategy based on the Gaussian Mixture Model (GMM) is proposed for JITL soft sensor development. Usually, a non-Gaussian signal can be approximated by a mixture of several Gaussian ones. In the first step of the proposed method, a Gaussian Mixture Model is constructed on the basis of the training samples in order to capture the non-Gaussian
information in the process dataset. The parameters such as the mean, covariance and prior probability of each Gaussian component are obtained in this offline modeling stage. Then, for online quality estimation of each query sample, Mahalanobis distance-based similarities between the query sample and the samples within each Gaussian component are calculated. At the same time, the posterior probability of the query sample belonging to each local region is obtained through the Bayesian rule. Finally, all similarities in the different local regions are combined through a weighted function, which is termed the new GMM-based similarity criterion in this paper. Compared to traditional JITL modeling methods, the proposed GMM-based method takes both the non-Gaussian data characteristics and the local data structure of the query sample into account; it performs a kind of local linearization over the whole data space, which copes well with process nonlinearity. Thus, a more suitable similarity criterion is defined for sample selection of the JITL soft sensor, and better modeling performance can be expected. The rest of this paper is organized as follows. In Section 2, preliminaries of just-in-time learning and the Gaussian Mixture Model are briefly introduced, followed by a detailed description of the proposed GMM-based relevant sample selection method in the next section. In Section 4, the validity and effectiveness of the proposed approach are demonstrated through both numerical and industrial application examples. Finally, conclusions are drawn at the end of the paper.
2. Preliminaries

2.1. Just-in-time-learning (JITL)

Just-in-time-learning (JITL), also known as lazy learning, has been used to develop adaptive process models.16,17 In contrast to conventional adaptive modeling methods, where the initial model is constructed offline in a global manner, the JITL model is built online when the query sample arrives. Such a modeling approach can cope with nonlinearity as well as changes of process characteristics
successfully. There are three main steps in JITL to predict the model output for each query sample18: (1) select relevant data samples from the historical data based on a suitable similarity criterion, such as the Euclidean distance, a similarity that combines both distance and angle, or a correlation-based similarity; (2) build a local model based on the relevant samples; (3) predict the output with the local model. The local model is then discarded after the predicted output of the current data sample is obtained. Relevant samples for local modeling should be selected carefully, because an unsuitable training set may lead to inaccurate prediction of the output. How to establish a suitable similarity criterion for sample selection is a key issue in JITL.
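As a reference point for the similarity criteria discussed later, the following is a minimal sketch of one conventional distance-based JITL cycle; the Euclidean similarity, the local model choice and all names are illustrative assumptions rather than the paper's exact procedure:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def jitl_predict(x_q, X_hist, y_hist, L=10):
    """One JITL cycle: select the L most similar samples, fit a local
    model, predict, then discard the model."""
    dist = np.linalg.norm(X_hist - x_q, axis=1)               # Euclidean distance
    idx = np.argsort(dist)[:L]                                # step 1: relevant samples
    model = LinearRegression().fit(X_hist[idx], y_hist[idx])  # step 2: local model
    return model.predict(x_q.reshape(1, -1))[0]               # step 3: prediction
```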
2.2. Gaussian Mixture Model (GMM)

The Gaussian mixture model is widely used as a probabilistic modeling approach to address unsupervised learning problems. An arbitrary probability density of an m-dimensional sample $x \in R^m$ can be approximated by a mixture of Gaussian density functions as follows:

$$p(x \mid \theta) = \sum_{i=1}^{K} \pi_i \, p(x \mid \theta_i) \qquad (1)$$

where $K$ is the number of Gaussian components and $\pi_k$ represents the prior probability of the data point having been generated from the $k$th component $C_k$, satisfying $\sum_{k=1}^{K} \pi_k = 1$ and $0 \le \pi_k \le 1$. $\theta = \{\theta_1, \ldots, \theta_K\} = \{\mu_1, \Sigma_1, \ldots, \mu_K, \Sigma_K\}$ is the vector of Gaussian model parameters, where $\mu_k$ is the mean vector and $\Sigma_k$ is the covariance matrix of the $k$th Gaussian distribution. The corresponding probability density function $p(x \mid \theta_k)$ is given by

$$p(x \mid \theta_k) = \frac{1}{(2\pi)^{m/2} |\Sigma_k|^{1/2}} \exp\left[-\frac{1}{2}(x - \mu_k)^T \Sigma_k^{-1} (x - \mu_k)\right] \qquad (2)$$

The mixture density function $p(x \mid \theta)$ is thus a weighted sum of local Gaussian components. The complete Gaussian mixture density is parameterized by the prior probabilities, mean vectors and covariance
matrices of all component densities. Assuming that the number of components is known, the parameters $\Omega = \{\{\pi_1, \mu_1, \Sigma_1\}, \ldots, \{\pi_K, \mu_K, \Sigma_K\}\}$ can be calculated by maximizing the log-likelihood:

$$\hat{\Omega} = \arg\max_{\Omega} \, \log p(X \mid \Omega) \qquad (3)$$

where the log-likelihood function can be expressed as

$$\log p(X \mid \Omega) = \sum_{j=1}^{n} \log\left(\sum_{i=1}^{K} \pi_i \, p(x_j \mid \theta_i)\right) \qquad (4)$$
The Expectation-Maximization (EM) algorithm is commonly used for Gaussian mixture model optimization.21 The algorithm repeats the expectation step (E-step) and the maximization step (M-step) in an iterative procedure to calculate the posterior probabilities until a convergence criterion on the log-likelihood function is satisfied. Since the likelihood of the observations increases after each E-step and M-step, the maximum likelihood estimator is asymptotically obtained. Given the training data $X = \{x_1, x_2, \ldots, x_n\}$ and the initial parameters $\Omega^{(0)} = \{\{\pi_1^{(0)}, \mu_1^{(0)}, \Sigma_1^{(0)}\}, \ldots, \{\pi_K^{(0)}, \mu_K^{(0)}, \Sigma_K^{(0)}\}\}$:
E-step: Using the current parameter estimates $\theta_k^{(l)}$, the posterior probability of the $i$th training sample belonging to the $k$th Gaussian component in the $l$th iteration is calculated according to the Bayesian rule22

$$p^{(l)}(C_k \mid x_i) = \frac{\pi_k^{(l)} \, p(x_i \mid \theta_k^{(l)})}{\sum_{j=1}^{K} \pi_j^{(l)} \, p(x_i \mid \theta_j^{(l)})}, \quad i = 1, \ldots, n; \; k = 1, \ldots, K \qquad (5)$$
M-step: Using the posterior probabilities, the parameters are re-estimated to maximize the log-likelihood:

$$\mu_k^{(l+1)} = \frac{\sum_{i=1}^{N} p^{(l)}(C_k \mid x_i) \, x_i}{\sum_{i=1}^{N} p^{(l)}(C_k \mid x_i)} \qquad (6)$$

$$\Sigma_k^{(l+1)} = \frac{\sum_{i=1}^{N} p^{(l)}(C_k \mid x_i)(x_i - \mu_k^{(l+1)})(x_i - \mu_k^{(l+1)})^T}{\sum_{i=1}^{N} p^{(l)}(C_k \mid x_i)} \qquad (7)$$

$$\pi_k^{(l+1)} = \frac{\sum_{i=1}^{N} p^{(l)}(C_k \mid x_i)}{N} \qquad (8)$$

where $\mu_k^{(l+1)}$, $\Sigma_k^{(l+1)}$ and $\pi_k^{(l+1)}$ are the mean, covariance and prior probability of the $k$th Gaussian component calculated in the $(l+1)$th iteration, respectively.
A major limitation of the basic EM algorithm is that the initial parameters need to be given, and their proper selection is of great importance for reducing the computational load. To overcome this drawback, the F-J algorithm was developed. It adopts the minimum message length (MML) criterion within a variant of EM, where Eq. (8) in the M-step is modified as follows:

$$\pi_k^{(l+1)} = \frac{\max\left\{0, \; \sum_{j=1}^{n} p^{(s)}(C_k \mid x_j) - \frac{V}{2}\right\}}{\sum_{k=1}^{K} \max\left\{0, \; \sum_{j=1}^{n} p^{(s)}(C_k \mid x_j) - \frac{V}{2}\right\}}$$

where $V = \frac{1}{2}m^2 + \frac{3}{2}m$ denotes the total number of free parameters specifying each component. By annihilating any insignificant component whose weight falls to zero in each iteration step, the number of effective components can be determined adaptively. In this paper, the K-means clustering method, which is commonly used to automatically partition a data set into k groups, is employed for initialization.23
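As an illustration, a sketch of this modified weight update is shown below; the function name and the posterior-matrix input are assumptions, and a full F-J implementation also interleaves the component annihilation with the E- and M-steps:

```python
import numpy as np

def mml_weight_update(post, m):
    """MML-modified M-step weight update: components whose accumulated
    posterior support falls below V/2 are annihilated (weight -> 0).
    post is the n-by-K posterior matrix from the E-step."""
    V = 0.5 * m ** 2 + 1.5 * m                       # free parameters per component
    raw = np.maximum(0.0, post.sum(axis=0) - V / 2.0)
    return raw / raw.sum()                           # renormalized mixing weights
```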
3. GMM-based relevant sample selection for JITL soft sensor development

The Euclidean distance (ED) and the Mahalanobis distance (MD) have been widely used to define the similarity. With those sample selection methods, the estimation accuracy of conventional JITL soft sensors is not always high, because such similarity measures may not be appropriate for characterizing the similarity among non-Gaussian process data. On the other hand, conventional methods are
global similarity measures. When the process data have obvious local features, a local measure is more reliable for selecting similar samples for JITL modeling. In practice, the assumption that the data follow a unimodal Gaussian distribution is usually invalid; the Gaussian mixture model (GMM) is more appropriate for characterizing the data, and it has been widely applied in data modeling and clustering to deal with non-Gaussianity.19,20 In this paper, the GMM is used to define a new similarity criterion for JITL soft sensor modeling. In this model, the non-Gaussian density is approximated by several Gaussian components. Having calculated the posterior probabilities of the query sample with respect to the different Gaussian components, we can extract local features through these posterior probabilities. By introducing the Bayesian inference strategy, the characteristics of the query sample are taken into consideration to define a more appropriate similarity. With the GMM-based similarity, the relevant samples used to build the local model can be selected more properly, so better modeling performance can be achieved. In the offline training stage, a historical dataset of normal operation is utilized to train the GMM with the algorithms presented above. In the online stage, the probability density function may change because of time-varying process characteristics, so the model parameters should be updated in order to track these changes. The following update equations, due to Zivkovic and van der Heijden,26 can be used to update the GMM parameters:
$$\mu_k^{(m+1)} = \mu_k^{(m)} + \frac{p^{(m)}(C_k \mid x_{m+1})}{\pi_k^{(m)}} \, \frac{1}{m+1} \, (x_{m+1} - \mu_k^{(m)}) \qquad (9)$$

$$\Sigma_k^{(m+1)} = \Sigma_k^{(m)} + \frac{p^{(m)}(C_k \mid x_{m+1})}{\pi_k^{(m)}} \, \frac{1}{m+1} \left[(x_{m+1} - \mu_k^{(m)})(x_{m+1} - \mu_k^{(m)})^T - \Sigma_k^{(m)}\right] \qquad (10)$$

$$\pi_k^{(m+1)} = \pi_k^{(m)} + \frac{1}{m+1} \left(p^{(m)}(C_k \mid x_{m+1}) - \pi_k^{(m)}\right) \qquad (11)$$
where $x_{m+1}$ is the sample acquired at the $(m+1)$th sampling instant. Through these update equations with real-time samples, the GMM adapts itself to match the latest operating conditions.
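A minimal sketch of this recursive adaptation is given below; the function name, the in-place updates and the posterior computation are illustrative assumptions consistent with Eqs. (9)-(11):

```python
import numpy as np
from scipy.stats import multivariate_normal

def recursive_gmm_update(x_new, m, pi, mu, Sigma):
    """Recursively adapt the GMM to the (m+1)-th sample, Eqs. (9)-(11)."""
    K = len(pi)
    dens = np.array([pi[k] * multivariate_normal.pdf(x_new, mu[k], Sigma[k])
                     for k in range(K)])
    post = dens / dens.sum()           # p^(m)(C_k | x_{m+1})
    lr = 1.0 / (m + 1)
    for k in range(K):
        w = post[k] / pi[k]
        d = x_new - mu[k]              # deviation from the old mean, as in Eq. (10)
        mu[k] = mu[k] + w * lr * d                                  # Eq. (9)
        Sigma[k] = Sigma[k] + w * lr * (np.outer(d, d) - Sigma[k])  # Eq. (10)
    pi[:] = pi + lr * (post - pi)      # Eq. (11)
    return pi, mu, Sigma
```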
The posterior probability of the query sample $x_q$ belonging to each Gaussian component is calculated according to the Bayesian rule:

$$p(x_q \in C_k \mid x_q) = \frac{p(x_q \in C_k) \, p(x_q \mid x_q \in C_k)}{\sum_{j=1}^{K} p(x_q \in C_j) \, p(x_q \mid x_q \in C_j)} = \frac{\pi_k \, p(x_q \mid \mu_k, \Sigma_k)}{\sum_{j=1}^{K} \pi_j \, p(x_q \mid \mu_j, \Sigma_j)} \qquad (12)$$

where $\pi_k$ is the prior probability that an arbitrary sample is generated from the $k$th component $C_k$, and $\sum_{k=1}^{K} p(x_q \in C_k \mid x_q) = 1$ is guaranteed by the scaling factor $\sum_{j=1}^{K} \pi_j \, p(x_q \mid \mu_j, \Sigma_j)$.
For the query sample $x_q$, a local Mahalanobis distance-based similarity between $x_q$ and the training samples within each Gaussian component $C_k$ can be calculated as follows:

$$MD(x_q, x_i, C_k) = e^{-(x_q - x_i)^T \Sigma_k^{-1} (x_q - x_i)}, \quad i = 1, 2, \ldots, n \qquad (13)$$

where $\Sigma_k$ is the covariance matrix of the $k$th Gaussian distribution.
Considering that the query sample may come from different Gaussian components, the GMM-based similarity is defined as

$$GMMD(x_q, x_i) = \sum_{k=1}^{K} p(x_q \in C_k \mid x_q) \, MD(x_q, x_i, C_k) \qquad (14)$$
By introducing the GMM and the Bayesian inference strategy, the proposed similarity takes both the non-Gaussianity of the process data and the characteristics of the query sample into account. Thus, a more suitable similarity criterion is defined for sample selection of the JITL soft sensor, and the prediction accuracy can be improved. In this paper, the widely used PLS model is employed for local regression modeling between the process variables and the quality variable. Suppose the local input and output process datasets are given as
$\{X, y\}$. PLS decomposes X and y into a score matrix T, a loading matrix P and loading vector q, and a weight matrix W:8

$$X = TP^T + E \qquad y = Tq + f \qquad (15)$$

where E and f are the residual matrix and residual vector, respectively. The soft sensor prediction for a new data sample $x_q$ is calculated as

$$\hat{y}_q = x_q W (P^T W)^{-1} q \qquad (16)$$
As illustrated in Fig. 1, the proposed GMM-based modeling method consists of an offline training stage and an online modeling stage.

Offline modeling stage:
1. Use the F-J algorithm to learn the best number of Gaussian components K.
2. Calculate the initial parameters $\Omega^{(0)} = \{\{\pi_1^{(0)}, \mu_1^{(0)}, \Sigma_1^{(0)}\}, \ldots, \{\pi_K^{(0)}, \mu_K^{(0)}, \Sigma_K^{(0)}\}\}$ by the K-means method.
3. Construct the Gaussian Mixture Model offline on the basis of the training samples to utilize the non-Gaussian information in the process data. The model parameters $\Omega = \{\{\pi_1, \mu_1, \Sigma_1\}, \ldots, \{\pi_K, \mu_K, \Sigma_K\}\}$ are obtained by the iterative steps in Eqs. (6)-(8).
4. Store the local model and the GMM parameters $\Omega$.

Online modeling stage (see the sketch after this list):
1. For a new online sample $x_q$, update the GMM adaptively according to Eqs. (9)-(11). Determine the relevant sample size L.
2. For each query sample $x_q$, calculate the posterior probability of belonging to each Gaussian component, $p(x_q \in C_k \mid x_q)$, according to the Bayesian rule in Eq. (12).
3. Compute the local Mahalanobis distance-based similarity between the query sample and each training sample within each Gaussian component $C_k$ by Eq. (13).
4. Combine the local Mahalanobis distance-based similarities with the posterior probabilities of each component to obtain the GMM-based similarity in the weighted function of Eq. (14).
5. Sort the samples in descending order of similarity; only the L samples with the largest similarity values are selected for PLS modeling.

[Figure 1 about here]
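A compact sketch of this online stage is shown below, reusing the illustrative gmm_similarity and local_pls_predict helpers defined earlier; all names are assumptions, and the GMM parameters and training data come from the offline stage:

```python
import numpy as np

def jitl_gmm_predict(x_q, X_train, y_train, pi, mu, Sigma, L=10):
    """Online steps 2-5: GMM-based similarity, top-L selection, local PLS."""
    sim = gmm_similarity(x_q, X_train, pi, mu, Sigma)  # steps 2-4, Eqs. (12)-(14)
    idx = np.argsort(sim)[::-1][:L]                    # step 5: L most similar
    return local_pls_predict(X_train[idx], y_train[idx], x_q)
```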
4. Case studies

In this section, both numerical and real industrial examples are provided to verify the effectiveness of the proposed method. The numerical study is a simulation example that includes seven inputs and one output and contains three operation modes. In the second example, a debutanizer column is used as an industrial case study. To compare the accuracy of the different methods, the root mean squared error (RMSE) is defined as follows:

$$RMSE = \sqrt{\frac{\sum_{i=1}^{n} (\hat{y}_i - y_i)^2}{n}}, \quad i = 1, 2, \ldots, n \qquad (17)$$

where n represents the number of test samples, and $y_i$ and $\hat{y}_i$ are the real and predicted values, respectively.
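For reference, Eq. (17) in code form (a one-function sketch; names are illustrative):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error, Eq. (17)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.sqrt(np.mean((y_pred - y_true) ** 2))
```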
The configuration of the computer is as follows: OS: Windows 7 (64-bit); CPU: Intel(R) Pentium(R) CPU [email protected]; RAM: 6.00 GB; MATLAB version: 2012b.
4.1. Numerical example

The numerical case described in ref 24 is used to give insight into the relevant sample selection algorithm. The 7 predictor variables are simulated as linear combinations of 5 source variables:
$$s_1(k) = 2\cos(0.08k)\sin(0.06k)$$
$$s_2(k) = \sin(0.3k) + 3\cos(0.1k)$$
$$s_3(k) = \sin(0.4k) + 3\cos(0.1k)$$
$$s_4(k) = \cos(0.1k) - \sin(0.05k)$$
$$s_5(k) = \text{uniformly distributed noise in } [-1, 1]$$

1500 samples belonging to 3 different modes are generated as follows:

The first 500 samples (Mode 1):
$$x = As + e, \quad y = 0.8x_1 + 0.6x_2 + 1.5x_3$$

The second 500 samples (Mode 2):

$$x = ABs + e, \quad y = 2.4x_2 + 1.6x_3 + 4x_4$$

The last 500 samples (Mode 3):

$$x = AB^2 s + e, \quad y = 1.2x_1 + 0.4x_2 + x_4$$

where

$$A = \begin{bmatrix} 0.8 & 0.86 & -0.55 & 0.17 & -0.33 \\ 0.89 & 0.2 & 0.79 & 0.65 & 0.32 \\ 0.12 & -0.97 & 0.4 & 0.5 & 0.67 \\ 0.46 & -0.28 & 0.27 & -0.74 & -0.3 \\ -0.45 & 0.23 & 0.15 & 0.56 & 0.84 \\ 0.23 & 0.13 & 0.14 & 0.34 & 0.95 \\ 0.12 & 0.47 & 0.92 & 0.19 & 0.56 \end{bmatrix}, \quad B = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 1 & 1 & 0 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 & 1 & 0 & 0 \\ 1 & 1 & 1 & 1 & 1 & 1 & 0 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 \end{bmatrix}$$

$$e \sim N(0, 0.01)$$
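For concreteness, a minimal sketch of the Mode 1 data generation is given below (Modes 2 and 3 follow analogously by inserting B). The row ordering of A is reconstructed from the extracted text and should be treated as an assumption; the second parameter of N(·,·) is read as a variance; the output noise h from the next step is included for completeness:

```python
import numpy as np

rng = np.random.default_rng(0)
k = np.arange(1, 501)
s = np.vstack([2 * np.cos(0.08 * k) * np.sin(0.06 * k),
               np.sin(0.3 * k) + 3 * np.cos(0.1 * k),
               np.sin(0.4 * k) + 3 * np.cos(0.1 * k),
               np.cos(0.1 * k) - np.sin(0.05 * k),
               rng.uniform(-1.0, 1.0, k.size)])         # (5, 500) source signals
A = np.array([[0.80,  0.86, -0.55,  0.17, -0.33],
              [0.89,  0.20,  0.79,  0.65,  0.32],
              [0.12, -0.97,  0.40,  0.50,  0.67],
              [0.46, -0.28,  0.27, -0.74, -0.30],
              [-0.45, 0.23,  0.15,  0.56,  0.84],
              [0.23,  0.13,  0.14,  0.34,  0.95],
              [0.12,  0.47,  0.92,  0.19,  0.56]])      # (7, 5) mixing matrix (assumed ordering)
X = (A @ s).T + rng.normal(0.0, np.sqrt(0.01), (500, 7))    # x = As + e
y = 0.8 * X[:, 0] + 0.6 * X[:, 1] + 1.5 * X[:, 2]
y = y + rng.normal(0.0, np.sqrt(0.1), 500)              # output noise h ~ N(0, 0.1)
```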
Finally, a normally distributed noise is added to the output:

$$y = y + h, \quad h \sim N(0, 0.1)$$

For each mode, 250 samples are selected as the training data and the remaining 250 samples are used as the testing data of the JITL soft sensor; thus there are 750 samples in each of the training and testing datasets. The overall 750 training samples are used to construct the GMM. The data characteristics of the output are shown in Fig. 2. The output variable is normalized to zero mean and unit standard deviation. It can easily be seen from the figure that the process changes dynamically.

[Figure 2 about here]

The component number of the PLS model is chosen as 4, which explains most of the process data information. For JITL soft sensor modeling, the size of the modeling dataset is an important parameter, which can strongly affect the estimation performance. In this example, different L values from 5 to 50 are tried for JITL soft sensor development. To examine the effectiveness of the proposed approach for non-Gaussian processes, the proposed GMM-based similarity criterion is compared with two conventional methods based on the Mahalanobis distance (MD) and the Euclidean distance (ED), respectively. In the GMM-based method, the probability distribution of the historical dataset is approximated as a mixture of three Gaussian components by implementing the F-J algorithm on the training samples. The root mean squared error values for the test samples under different lengths L of the different relevant sample selection strategies are shown in Fig. 3. The results show that the conventional relevant sample selection methods do not function very well. The reason for their poor performance may be that the ED and MD similarity measures take account of neither the non-Gaussianity nor the characteristics of the query sample when the local model is built.

[Figure 3 about here]
The proposed method outperforms the conventional methods under different L values, since both the time-varying and non-Gaussian data features can be efficiently accommodated. Based on the observed results, it can be noticed that the prediction accuracy deteriorates when L is too large, because weakly correlated samples are inappropriately selected as the sample size grows. The best prediction result is obtained when the L value is set to 10. In this case, the prediction errors of the three different soft sensors are provided in Fig. 4. When the prediction results for test samples 400 to 450 are magnified in Fig. 5, it can clearly be seen that the proposed method best tracks the non-stationary behavior of the process data.

[Figures 4-5 about here]
4.2. Debutanizer column

The debutanizer is part of a desulfurizing and naphtha splitter plant, in which propane and butane are removed as overheads from the naphtha stream. The debutanizer is required to minimize the butane content in the debutanizer bottoms as well as maximize the stabilized gasoline content in the debutanizer overheads. Real-time estimation of the butane content is of great importance for improving the control quality. A number of sensors are installed on the plant to monitor product quality. The objective variable y is the concentration of the bottom product. The explanatory variables X comprise 7 variables, including temperatures and pressure. The description of the explanatory variables is listed in Table 1. Fig. 6 shows the data characteristics of the objective variable y. As can be seen, the process changes dynamically.

[Figure 6 about here]

[Table 1 about here]

To verify the non-Gaussian characteristics, normal probability plots of the process data are presented in Fig. 7. The purpose of a normal probability plot is to graphically assess whether the data come from a
Gaussian distribution. If the data are Gaussian, the plot will be linear; other distribution types introduce curvature. It can be seen that the inputs x3, x4, x6 and x7 are close to a Gaussian distribution, while x1, x2, x5 and y are non-Gaussian. Hence, the process is non-Gaussian. The Gaussian mixture model is a simple linear superposition of Gaussian components, aimed at providing a richer class of density models than the single Gaussian. By using a sufficient number of Gaussians, and by adjusting their means and covariances as well as the coefficients in the linear combination, almost any continuous density can be approximated to arbitrary accuracy.27 It is therefore reasonable to use the GMM to describe the debutanizer column.

[Figure 7 about here]

Similarly, the three different sample selection methods are compared in this example. 1000 samples are used for model training, and the remaining 1000 samples are used as the testing set. The component number of the PLS model is chosen as 4, which again explains most of the process data information. By implementing the F-J algorithm on the training samples, the probability distribution of the historical dataset can be approximated as a mixture of seven Gaussian components. The algorithm of the proposed method involves considerable computation, so its computation time is larger than that of the traditional methods, as shown in Table 2. Besides, the computation time increases significantly as the number of Gaussian components becomes larger. For example, when L is set to 10 and the number of Gaussian components varies from 3 to 7, the computation time increases by 130% while the RMSE increases by 2.3% (Figs. 8 and 9). Considering the accuracy and computational complexity, it is reasonable to set K to 3.

[Figures 8-9 about here]
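The normality check of Fig. 7 can be reproduced with a few lines; this is a sketch in which X is assumed to be the n-by-7 debutanizer input matrix with variables ordered as in Table 1:

```python
import matplotlib.pyplot as plt
from scipy import stats

fig, axes = plt.subplots(2, 4, figsize=(12, 6))
for j, ax in enumerate(axes.ravel()[:7]):
    stats.probplot(X[:, j], dist="norm", plot=ax)  # a linear plot => Gaussian
    ax.set_title(f"x{j + 1}")
axes.ravel()[-1].axis("off")                       # hide the unused panel
plt.tight_layout()
plt.show()
```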
Fig. 10 shows the root mean squared error of the three methods for the test samples under different lengths L (from 10 to 100), where the GMMD is a weighted function of three local Mahalanobis distances. The best prediction result on the test dataset is obtained by the GMM-based JITL method: as the value of L varies, the proposed GMM-based method always achieves the minimum RMSE value and is superior to the conventional methods. When L is set to 10, the estimation errors are shown in Fig. 11, while the detailed estimation results from sample 850 to 950 are given in Fig. 12. Additionally, the posterior probability values within each Gaussian component for the testing samples are provided in Fig. 13.

[Figures 10-13 about here]

Although all three methods are able to track the time-varying characteristics of the debutanizer process, the proposed method performs the best by virtue of the new similarity criterion it uses. With the proposed GMM-based similarity criterion, the RMSE is decreased by about 23% and 27% in comparison with the ED-based and MD-based methods, respectively. These results clearly show that the proposed method functions well for time-varying processes in which the data are non-Gaussian.
5. Conclusion

In this paper, a new GMM-based similarity criterion has been proposed for relevant sample selection in JITL soft sensor modeling. Compared to the conventional ED-based and MD-based similarity measures, the proposed method performs better when dealing with non-Gaussian and time-varying processes. To test the quality prediction performance of the soft sensor based on the new similarity measure, both numerical and industrial application case studies have been carried out, through which both the feasibility and the superiority of the new soft sensor have been evaluated.
Acknowledgement

This work was supported in part by the National Natural Science Foundation of China (NSFC) (61370029), the National 973 Project (2012CB720500), and the Fundamental Research Funds for the Central Universities (2013QNA5016).
References

(1) Kadlec, P.; Gabrys, B.; Strandt, S. Data-driven soft sensors in the process industry. Computers & Chemical Engineering 2009, 33 (4), 795-814.
(2) Kadlec, P.; Grbić, R.; Gabrys, B. Review of adaptation mechanisms for data-driven soft sensors. Computers & Chemical Engineering 2011, 35 (1), 1-24.
(3) Lin, B.; Recke, B.; Knudsen, J. K.; Jørgensen, S. B. A systematic approach for soft sensor development. Computers & Chemical Engineering 2007, 31 (5), 419-425.
(4) Wold, S.; Ruhe, A.; Wold, H.; Dunn, W. J., III. The collinearity problem in linear regression. The partial least squares (PLS) approach to generalized inverses. SIAM Journal on Scientific and Statistical Computing 1984, 5 (3), 735-743.
(5) Gonzaga, J.; Meleiro, L.; Kiang, C.; Maciel Filho, R. ANN-based soft-sensor for real-time process monitoring and control of an industrial polymerization process. Computers & Chemical Engineering 2009, 33 (1), 43-49.
(6) Desai, K.; Badhe, Y.; Tambe, S. S.; Kulkarni, B. D. Soft-sensor development for fed-batch bioreactors using support vector regression. Biochemical Engineering Journal 2006, 27 (3), 225-239.
(7) Li, W.; Yue, H. H.; Valle-Cervantes, S.; Qin, S. J. Recursive PCA for adaptive process monitoring. Journal of Process Control 2000, 10 (5), 471-486.
(8) Qin, S. J. Recursive PLS algorithms for adaptive data modeling. Computers & Chemical Engineering 1998, 22 (4), 503-514.
(9) Qing, Y.; Feng, T.; Dazhi, W.; Dongsheng, W.; Anna, W. Real-time fault diagnosis approach based on lifting wavelet and recursive LSSVM. Chinese Journal of Scientific Instrument 2011, 32 (3), 596-602.
(10) Kaneko, H.; Funatsu, K. Classification of the degradation of soft sensor models and discussion on adaptive models. AIChE Journal 2013, 59 (7), 2339-2347.
(11) Cheng, C.; Chiu, M.-S. A new data-based methodology for nonlinear process modeling. Chemical Engineering Science 2004, 59 (13), 2801-2810.
(12) Ge, Z.; Song, Z. A comparative study of just-in-time-learning based methods for online soft sensor modeling. Chemometrics and Intelligent Laboratory Systems 2010, 104 (2), 306-317.
(13) Liu, Y.; Huang, D.; Li, Y. Development of interval soft sensors using enhanced just-in-time learning and inductive confidence predictor. Industrial & Engineering Chemistry Research 2012, 51 (8), 3356-3367.
(14) Ge, Z.; Song, Z. Online monitoring of nonlinear multiple mode processes based on adaptive local model approach. Control Engineering Practice 2008, 16 (12), 1427-1437.
(15) Fujiwara, K.; Kano, M.; Hasebe, S.; Takinami, A. Soft-sensor development using correlation-based just-in-time modeling. AIChE Journal 2009, 55 (7), 1754-1765.
(16) Atkeson, C. G.; Moore, A. W.; Schaal, S. Locally weighted learning for control. Artificial Intelligence Review 1997, 11 (1-5), 75-113.
(17) Yuan, X.; Ge, Z.; Song, Z. Locally weighted kernel principal component regression model for soft sensing of nonlinear time-variant processes. Industrial & Engineering Chemistry Research 2014, 53 (35), 13736-13749.
(18) Cheng, C.; Chiu, M.-S. Nonlinear process monitoring using JITL-PCA. Chemometrics and Intelligent Laboratory Systems 2005, 76 (1), 1-13.
(19) McLachlan, G.; Peel, D. Finite Mixture Models; John Wiley & Sons, 2004.
(20) Fraley, C.; Raftery, A. E. Model-based clustering, discriminant analysis, and density estimation. Journal of the American Statistical Association 2002, 97 (458), 611-631.
(21) Moon, T. K. The expectation-maximization algorithm. IEEE Signal Processing Magazine 1996, 13 (6), 47-60.
(22) MacKay, D. J. Probable networks and plausible predictions - a review of practical Bayesian methods for supervised neural networks. Network: Computation in Neural Systems 1995, 6 (3), 469-505.
(23) MacQueen, J. Some methods for classification and analysis of multivariate observations. In Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability, 1967; pp 281-297.
(24) Zeng, J.; Xie, L.; Gao, C.; Sha, J. Soft sensor development using non-Gaussian just-in-time modeling. In Proceedings of the 50th IEEE Conference on Decision and Control and European Control Conference (CDC-ECC); IEEE, 2011; pp 5868-5873.
(25) Fortuna, L. Soft Sensors for Monitoring and Control of Industrial Processes; Springer, 2007.
(26) Zivkovic, Z.; van der Heijden, F. Recursive unsupervised learning of finite mixture models. IEEE Transactions on Pattern Analysis and Machine Intelligence 2004, 26 (5), 651-656.
(27) Bishop, C. M. Pattern Recognition and Machine Learning (Information Science and Statistics); Springer-Verlag: New York, 2006.
Figure Captions

Figure 1: Schematic diagram of the proposed GMM-based relevant sample selection
Figure 2: Data characteristics of the output variable
Figure 3: Prediction results of three similarity criteria under different L values
Figure 4: Estimation errors of three similarity criteria
Figure 5: Prediction results of three similarity criteria from samples 400 to 450
Figure 6: Data characteristics of the objective variable in the debutanizer column
Figure 7: Normal probability plot of the debutanizer data
Figure 8: CPU time under different K values for 1000 testing data samples when L=10
Figure 9: RMSE under different K values for 1000 testing data samples when L=10
Figure 10: Prediction results of three similarity criteria under different L values
Figure 11: Estimation errors of three similarity criteria
Figure 12: Prediction results of three similarity criteria from samples 850 to 950
Figure 13: Posterior probability within each Gaussian component for testing samples when L=10
Table Captions

Table 1. Input variables in the debutanizer column
Table 2. CPU time of three different methods when L=10
[Figure 1: Schematic diagram of the proposed GMM-based relevant sample selection]
[Figure 2: Data characteristics of the output variable]
[Figure 3: Prediction results of three similarity criteria under different L values]
[Figure 4: Estimation errors of three similarity criteria]
[Figure 5: Prediction results of three similarity criteria from samples 400 to 450]
[Figure 6: Data characteristics of the objective variable in the debutanizer column]
[Figure 7: Normal probability plot of the debutanizer data]
[Figure 8: CPU time under different K values for 1000 testing data samples when L=10]
[Figure 9: RMSE under different K values for 1000 testing data samples when L=10]
[Figure 10: Prediction results of three similarity criteria under different L values]
[Figure 11: Estimation errors of three similarity criteria]
[Figure 12: Prediction results of three similarity criteria from samples 850 to 950]
[Figure 13: Posterior probability within each Gaussian component for testing samples when L=10]
Table 1. Input variables in the debutanizer column

Input variable    Description
x1                Top temperature
x2                Top pressure
x3                Reflux flow
x4                Flow to next process
x5                6th tray temperature
x6                Bottom temperature 1
x7                Bottom temperature 2
Table 2. CPU time of three different methods when L=10

Method          CPU time/s
ED              2.57
MD              6.72
GMMD (K=3)      25.49
GMMD (K=7)      61.92