Process Systems Engineering

Data driven modeling using optimal principle component analysis based neural network and its application to nonlinear coke furnace. Ridong Zhang, Qiang Lv, Jili Tao, and Furong Gao. Ind. Eng. Chem. Res., Just Accepted Manuscript. DOI: 10.1021/acs.iecr.8b00071. Publication Date (Web): 18 Apr 2018. Downloaded from http://pubs.acs.org on April 19, 2018.





Data driven modeling using optimal principle component analysis based neural network and its application to nonlinear coke furnace

Ridong Zhang*,a, Qiang Lva, Jili Taob, Furong Gaoc

a The Belt and Road Information Research Institute, Automation College, Hangzhou Dianzi University, Hangzhou 310018, P.R. China
b Ningbo Institute of Technology, Zhejiang University, Ningbo 315100, P.R. China
c Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong

*Corresponding author: Ridong Zhang. Email: [email protected]

Abstract—To fully exploit the data information among process variables, an optimization-based principal component analysis (PCA) using a neural network is proposed. First, a PCA variable selection method based on the RV similarity criterion is developed to select the main variables of the nonlinear industrial process. Second, a radial basis function neural network (RBFNN) is used to construct the nonlinear process model, where the modeling accuracy and the RV criterion are optimized jointly by an improved multi-objective evolutionary algorithm, NSGA-II. Encoding, prolong and pruning operators are designed to optimize the structure and parameters of the RBFNN. An RBFNN with good generalization capability is then obtained from the Pareto-optimal solutions based on the root mean squared errors of the training and testing data. The proposed approach efficiently selects the main disturbance of the chamber pressure control loop in a coke furnace, and the resulting RBFNN achieves satisfactory data extraction accuracy compared with three other typical methods.

Index Terms—Principal component analysis; data feature extraction; multi-objective evolutionary algorithm; RBF neural network; coke furnace



1. Introduction
In the process industry, models built from the fewest, most relevant process variables are often of interest, since they make modeling, control, optimization and monitoring for quality improvement much easier 1-3. By using a variable subset instead of the whole data set, variable selection techniques are anticipated to improve prediction accuracy, reduce model complexity and better capture the nature of industrial processes 4-7. Owing to the nonlinear nature of industrial processes, linear model based data feature extraction can hardly achieve satisfactory accuracy. Artificial neural networks (ANNs) are advanced nonlinear feature extraction methods that can describe complex nonlinear industrial processes and are widely applied to nonlinear system modeling 8-10. Several typical variable selection methods for ANNs have been developed. Huang et al. utilized the least absolute shrinkage and selection operator to select the input variables of a multilayer perceptron neural network for nonlinear industrial processes 11. A sequential backward selection multilayer perceptron (SBS-MLP) was proposed 12. Souza et al. considerably reduced the computational cost and improved the model accuracy by variable selection compared with SBS-MLP 13. By introducing the average normalized mutual information as a measure of redundancy, Estévez et al. proposed an improved variable selection method 14. The variable selection problem in principal component analysis (PCA) has also been studied by numerous authors 15-17. However, the principal components are usually linear combinations of all variables, which makes the interpretation of results and variable analysis quite difficult.
A variety of criterion functions for principal-component subset selection, such as similarity indices and the RM, RV and Generalized Coefficient of Determination (GCD) criteria, have been proposed 18. Moreover, heuristic algorithms 19, simulated annealing 4, stochastic approximation iteration 20, genetic algorithms 21, etc., have been applied to select the variable subset. Though these variable selection methods are efficient 15-21, they are not integrated with system modeling, while variable selection in neural networks has only considered the modeling accuracy 11. In this paper, PCA variable selection is combined with an ANN for nonlinear system modeling. The RV criterion function of PCA is used to select the variables because of its effectiveness 22. Since two objectives, i.e., the RV criterion and the modeling accuracy, need to be considered, a multi-objective


evolutionary algorithm (MOEA) is adopted. Among MOEAs, NSGA-II is chosen for its popularity and efficiency 23 and has been used to solve ANN optimization and modeling problems 24-25. Here, it is adopted to solve the variable selection and ANN modeling jointly. Among the ANN family, the RBFNN is very effective because of its excellent global approximation capability and simple local responses 26. However, the modeling accuracy of an RBFNN is affected by the input layer, the number of hidden nodes, the parameters of the hidden functions and the connection weights 27. An effective RBFNN for the complex practical problem is obtained using NSGA-II, where the main process variables are selected in terms of the RV criterion of PCA, and the structure of the hidden layer and the parameters of the radial basis functions are optimized according to the modeling accuracy. The proposed method is applied to the nonlinear chamber pressure process in a coke furnace. Process operation data from the coke furnace are used, and generalization capability is achieved by including the testing errors in the optimization. The paper is organized as follows: Section 2 deals with the PCA variable selection criterion. Nonlinear RBFNN modeling using the improved NSGA-II is detailed in Section 3. Section 4 presents the application to disturbance selection and chamber pressure modeling. Conclusions are given in Section 5.

2. PCA based RBFNN modeling
2.1 RV criterion in PCA variable selection
To determine an appropriate process model, the number of latent variables should be decided properly. Here, the most important variable or variables are to be found for system modeling, e.g., selecting fewer than 3 variables from the 6 candidate disturbances of 1000 samples in Section 4. The RV criterion in principal component analysis is then utilized to measure the similarity of the selected subset: if the RV criterion value of a subset is the largest among all possible subsets, that subset is optimal. To calculate the RV criterion, the notation listed in Table 1 is used.


Table 1: Notation

Parameter   Description
X           an N×M data matrix for N objects measured on M variables
X(P)        the N×p submatrix of X for the selected variables in P
S           the covariance matrix of the full data matrix X
S²          the product of the covariance matrix with itself, S² = SS
S_P         the p×p submatrix of S corresponding to the selected variables in P
[S²]_P      the p×p submatrix of S² corresponding to the selected variables in P

The optimal solution for a given subset P is obtained by maximizing the RV criterion as follows 18:

f1 = tr( (S_P⁻¹[S²]_P)² ) / tr(S²)    (1)

If all variables are selected, f1 reaches its maximum value of 1. Since f1 is to be maximized, the objective is converted into a minimization problem:

J1 = 1 / f1    (2)
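As a concrete illustration of Eqs. (1)-(2), the RV criterion of a candidate subset can be computed directly from the covariance matrix. The sketch below uses NumPy; the function name and interface are ours, not the paper's.

```python
import numpy as np

def rv_criterion(X, P):
    """RV similarity criterion (Eq. 1) for a subset P of column indices
    of the N x M data matrix X.  Illustrative helper, assumed names."""
    S = np.cov(X, rowvar=False)       # covariance matrix of the full data
    S2 = S @ S                        # S^2 = S S
    SP = S[np.ix_(P, P)]              # p x p submatrix of S
    S2P = S2[np.ix_(P, P)]            # p x p submatrix of S^2
    A = np.linalg.solve(SP, S2P)      # S_P^{-1} [S^2]_P without explicit inverse
    return np.trace(A @ A) / np.trace(S2)

# J1 = 1/f1 is then minimized; selecting all variables gives f1 = 1.
```

With the full variable set, S_P = S and [S²]_P = S², so the numerator reduces to tr(S²) and f1 = 1, consistent with the statement after Eq. (1).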

Once the selected subset is determined in terms of J1, the corresponding variables are used as the inputs of the process model.
2.2 RBFNN model
A three-layer RBFNN is shown in Fig. 1.

Fig.1. The structure of RBF network.

Here x(k) = [y(k−1), …, y(k−n), u(k−1), …, u(k−m)] is the input vector, u contains the disturbances selected by the RV criterion of PCA, ŷ is the output of the RBFNN, ω = [ω1, …, ωnh] is the vector of weights between the hidden layer and the output layer, and nh is the number of nodes in the hidden layer. φi(x) is the output of the ith neuron in the hidden layer with the Gaussian function:


φi(x) = exp( −‖x − ci‖² / σi² ),  i = 1, 2, …, nh    (3)

where ‖x − ci‖ is the Euclidean distance between x and ci, and ci ∈ ℜ^(n+m) and σi ∈ ℜ are the center vector and the width of the Gaussian function, respectively. The prediction of the RBFNN, ŷ(k), can be expressed as a linear weighted sum of the nh hidden functions:

ŷ(x(k)) = Σ_{i=1}^{nh} ωi φi(x(k)) = ωΦ(k)    (4)

where ω = [ω1, …, ωnh], Φ = [φ1, …, φnh]^T. Given N1 samples of training data, Y1 = [y1(1), …, y1(N1)]
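The forward pass described by Eqs. (3)-(4) is a Gaussian hidden layer followed by a linear output layer. A minimal sketch follows; the function and variable names, and the squared-distance form of the exponent, are our assumptions.

```python
import numpy as np

def rbf_forward(x, centers, widths, weights):
    """Output of the three-layer RBFNN (Eqs. 3-4).
    centers: (nh, n+m) Gaussian centers c_i; widths: (nh,) sigma_i;
    weights: (nh,) output weights omega_i."""
    d2 = np.sum((centers - x) ** 2, axis=1)   # squared distances ||x - c_i||^2
    phi = np.exp(-d2 / widths ** 2)           # hidden-layer outputs (Eq. 3)
    return weights @ phi                      # y_hat = sum_i omega_i phi_i (Eq. 4)
```

When x coincides with a center, the corresponding hidden unit responds with 1 and its influence decays locally, which is the "simple local response" property cited in the introduction.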

and U = [u(1), …, u(N1)], the weight coefficients are calculated by the recursive least squares (RLS) method 28:

ω(k) = ω(k−1) + K(k)[y(k) − Φ^T(k)ω(k−1)]
K(k) = P(k−1)Φ(k)[Φ^T(k)P(k−1)Φ(k) + μ]⁻¹
P(k) = (1/μ)[I − K(k)Φ^T(k)]P(k−1)    (5)

where 0 < μ < 1 is the forgetting factor, P(k) is a positive definite covariance matrix with P(0) = α²I, I is an (n+m)×(n+m) identity matrix, α is a sufficiently large real number set to 10⁵, ω(0) = ε, where ε is a sufficiently small (n+m)-dimensional real vector set to 10⁻³, and K(k) is the gain matrix. Once the RBFNN is trained, its modeling accuracy is evaluated by the root mean square error (RMSE) over the training and testing data:

J2 = √( (1/N1) Σ_{k=1}^{N1} |y1(k) − ŷ1(k)|² ) + √( (1/N2) Σ_{k=1}^{N2} |y2(k) − ŷ2(k)|² )    (6)

where y2(k), k = 1, …, N2, is the testing data.
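The RLS recursion of Eq. (5) and the accuracy objective of Eq. (6) can be sketched as follows. This is the standard RLS update with a forgetting factor; the function and parameter names are ours, and the covariance is sized to the number of weight coefficients being estimated.

```python
import numpy as np

def train_rls(Phi, y, mu=0.99, alpha=1e5):
    """Recursive least squares (Eq. 5) for the output weights, given
    hidden-layer responses Phi (N x nh) and targets y (N,)."""
    nh = Phi.shape[1]
    w = np.full(nh, 1e-3)             # omega(0) = epsilon, a small vector
    P = alpha ** 2 * np.eye(nh)       # P(0) = alpha^2 I
    for k in range(Phi.shape[0]):
        phi = Phi[k]
        K = P @ phi / (phi @ P @ phi + mu)            # gain vector
        w = w + K * (y[k] - phi @ w)                  # weight update
        P = (np.eye(nh) - np.outer(K, phi)) @ P / mu  # covariance update
    return w

def j2_rmse(y1, yhat1, y2, yhat2):
    """Modeling-accuracy objective (Eq. 6): training RMSE plus testing RMSE."""
    return (np.sqrt(np.mean((y1 - yhat1) ** 2))
            + np.sqrt(np.mean((y2 - yhat2) ** 2)))
```

On noise-free data the recursion recovers the true linear output weights, which is what allows Eq. (6) to measure only the structural adequacy of the hidden layer.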

3. NSGA-II based variable selection and RBFNN modeling
The two objectives, J1 and J2, are optimized simultaneously. The encoding method and the operators of NSGA-II for variable selection and for RBFNN structure and parameter optimization are designed to solve this multi-objective problem.
3.1 Encoding
For simplicity, the output order n of the input layer is set to 2, while the order m of each input variable is set to 1 according to prior knowledge. Once the key variables are selected, the input nodes are determined. The number of neurons (nh) and the parameters of the Gaussian functions ci, σi, i = 1, …, nh,


where 1 ≤ nh ≤ H, and H is the predefined maximum number of hidden nodes, are to be optimized. The encodings for the different variable selections and RBFNNs are then designed, and the ith chromosome is:

Ci = [ c11   c21   c31   …  c81   σ1
        ⋮     ⋮     ⋮         ⋮    ⋮
       c1nh  c2nh  c3nh  …  c8nh  σnh
        0     0     0    …   0    0
        0     0     1    …   1    0  ]    (7)

Here 1 ≤ i ≤ Np, where Np is the population size, and the elements in rows 1 to nh are generated as:

cij = ymin + r(ymax − ymin),  1 ≤ i ≤ 2, 1 ≤ j ≤ nh
cij = umin + r(umax − umin),  3 ≤ i ≤ 8, 1 ≤ j ≤ nh
σj = r·wmax,  1 ≤ j ≤ nh    (8)

where r is randomly generated in [0.01, 1], umin and umax are the minimum and maximum of the inputs, ymin and ymax are the minimum and maximum of the outputs, respectively, and wmax is the maximum width of the Gaussian basis function, set to max(umax, ymax). The last row of Ci indicates which variables are selected, encoded in binary with the valid bits located in columns 3 to 8, for example:

cH+1 = [0 0 0 0 1 1 0 1 0]    (9)

This means that u3, u4 and u6 are selected, and columns c5, c6, c8 are valid centers of the Gaussian functions. Once Ci is obtained, both the structure and the parameters of the RBFNN can be determined, and the weights ω can further be calculated by RLS from the training data according to Eq.(5).
3.2 Operators of NSGA-II
(1) Selection operator
The fast non-dominated sorting of NSGA-II is implemented, where the rank and crowding distance of each individual are obtained. The individuals with rank 1 are regarded as elitists and chosen as parents. Moreover, individuals of one rank with identical values of J1 and J2 are treated as the same individual in order to keep the population diverse. The individuals at each front, starting from rank 1, are selected into the parent population front by front until the population size would be exceeded. Then the crowding distances in the current front are sorted in descending order, and the individuals with larger crowding distance are selected into the

It means that u3 , u4 , u6 are selected, and columns c5 , c6 , c8 are valid centers of Gaussian functions. Once Ci is obtained, both the structure and the parameters of RBFNN can be determined and the weight ω can be further calculated by RLS based on the training data in terms of Eq.(5). 3.2 Operators of NSGA-II (1) Selection operator The fast NSGA-II is implemented, where the rank and crowding distance are obtained. The individuals with rank 1 are regarded as elitists and chosen as the parents. Moreover, the individuals in one rank with the same values of J1 and J2 are regarded as the same individual in order to keep the population diversity. The individuals at each front from rank 1 are selected into the parent population one by one until exceeding the population size. Then, the crowing distance in the current front is compared by sorting in descend and the individuals with larger crowing distance are selected into the

ACS Paragon Plus Environment

Page 7 of 18 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60

Industrial & Engineering Chemistry Research

parent population. If the size is still less than the set population size, the roulette wheel selection operator is applied to select half of the remaining population in terms of J1 and the other half in terms of J2. The crossover and mutation operators are then carried out on the selected population to produce the offspring.
(2) Crossover and mutation operators

Fig.2. Example of the crossover operator.

In Fig.2, the crossover operator with probability pc is executed between individuals Ci and Ci′, with the crossover location randomly generated in [1, 9]. The parameters of the basis functions are changed and the selected variables also change in the offspring; note that the number of hidden nodes is not changed by crossover. Each element of Eq.(7) is mutated with probability pm: a mutated parameter element is regenerated according to Eq.(8), while a mutated element of the selection row in Eq.(9) undergoes a logical NOT operation, i.e., 1 to 0 and 0 to 1. A new RBFNN structure and different key variables can thus be obtained. In addition to crossover and mutation, prolong and pruning operators are designed to improve the search capability of NSGA-II and the rationality of the RBFNN.
(3) Prolong and pruning operators
Because the number of hidden nodes is not changed by crossover and some irrational structures may be produced by the random operators, prolong and pruning operators are designed. If the number of hidden neurons is less than 2, the prolong operator is executed: a random number of neurons in [1, H−2] is added, with the elements of each new neuron calculated according to Eq.(8). If a neuron has only one nonzero element in ci, the pruning operator is

ACS Paragon Plus Environment

Industrial & Engineering Chemistry Research 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60

implemented: the neuron is deleted and the number of hidden neurons is decreased.
3.3 The procedure of the improved NSGA-II
The whole process of the proposed NSGA-II for RBFNN optimization is as follows:
Step 1: Initialize the population size Np, the maximum number of generations G, the operator probabilities pc and pm, and the system parameters umin, umax, ymin and ymax; then generate Np chromosomes randomly.
Step 2: Select variables in terms of Eq.(9) and calculate J1.
Step 3: Construct the RBFNN according to Eqs.(3)-(5) and obtain the value of J2.
Step 4: Implement NSGA-II and select the parent population in terms of front rank, crowding distance and the roulette wheel selection operator.
Step 5: Implement the crossover and mutation operators with pc and pm, then the prolong and pruning operators, on the offspring.
Step 6: Repeat Steps 2-5 until G is reached.
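As an illustration of the encoding in Eqs. (7)-(9), the binary selection row of a chromosome can be decoded into the chosen disturbance indices. The helper below is hypothetical, with our own names; it assumes the 9-column row layout of Eq. (7) with valid bits in columns 3 to 8 mapping to u1..u6.

```python
def decode_selection(bits):
    """Decode the binary selection row (Eq. 9): entries in 1-based
    columns 3..8 flag which of the six candidate disturbances u1..u6
    enter the model.  Illustrative helper, not from the paper."""
    valid = bits[2:8]                                   # columns 3..8 (0-based slice)
    return [i + 1 for i, b in enumerate(valid) if b == 1]  # indices of selected u
```

For the example row of Eq. (9), [0 0 0 0 1 1 0 1 0], this decoding yields u3, u4 and u6, matching the text.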

4. Case Study
The coke furnace is important equipment for coking heavy oil for fuel and petrochemical needs, and its chamber pressure operation is critical to guarantee burning security. However, building its system model for advanced control is a highly complex task because of nonlinear characteristics, time delay and many disturbances, such as the fuel volume and the coupling of pressures, in the unit 29. The input variable of the main channel is known, but the main disturbance model is especially difficult to obtain because of the various disturbances mentioned above. How to select the disturbance variables and construct the disturbance model is still a challenge.
4.1 The coke furnace system
As shown in Fig.3, the main job of the unit is to coke residual oil (FRC8103, FRC8105) and send it to the fractionating tower (T102) for heat exchange. After this is done, the oil is resent into the furnace (FRC8107, FRC8108) and heated again. Finally, it goes to the coke towers (T101/5,6) for coke removal.


Fig.3. Process flow of coke furnace (F101/3).

Here, the chamber pressure, outlet temperature, liquid level and oxygen content have to be controlled with small fluctuations. However, many disturbances, such as load changes, unsteady flames in the coke furnace, the input fuel volume and the coupling of two-side variables, influence the process.
4.2 Experimental setup
The outlet pressure and relevant variables are sampled by a CENTUM CS3000 Distributed Control System (DCS), as shown in Fig. 4. The DCS has a process database, namely the π database, for process data acquisition. The sampling time is 5 s.

Fig.4. Data-acquisition configuration for the outlet pressure system.

4.3 Experimental results
The pressure PRC8112A coupled with the pressure PRC8112B, the chamber temperatures TR8109A and TR8109B, the oxygen contents AR8102 and ARC8101, and the external flow XLF103 are


collected. Meanwhile, the other-side pressure PRC8112B with similar disturbance variables is also stored in the π database. The two-side pressures and their relevant variables are shown in Fig.5 and Fig.6, respectively. Since the ranges of the different variables in Fig.5(a) and Fig.6(a) vary considerably, they are normalized to [0, 1], as shown in Fig.5(b) and Fig.6(b). It is obvious that the dynamic response is complex and accurate modeling is difficult.
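The min-max normalization to [0, 1] applied above can be sketched column-wise; a minimal NumPy version (function name ours):

```python
import numpy as np

def normalize_01(X):
    """Column-wise min-max scaling to [0, 1], as applied to the sampled
    variables before modeling (Fig. 5(b), Fig. 6(b))."""
    xmin = X.min(axis=0)
    xmax = X.max(axis=0)
    return (X - xmin) / (xmax - xmin)
```

Scaling each variable independently keeps pressures, temperatures and flows on a common range, which also matches the [umin, umax] = [0, 1] and [ymin, ymax] = [0, 1] bounds used later in the encoding.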

Fig.5. (a) Original variables sampled in the π database for PRC8112A, (b) normalized variables.

Fig.6. (a) Original variables sampled in the π database for PRC8112B, (b) normalized variables.

The proposed NSGA-II is used to select the main disturbances and to optimize both the structure and the parameters of the RBFNN so that the nonlinear dynamic behavior of the chamber pressure can be captured. The population size Np is set to 60, the maximum number of generations G is 1000, and the operator probabilities pc and pm are set to 0.9 and 0.1, respectively. N1 and N2, the sizes of the training data (Y1) and the testing data (Y2), are both set to 400, and N and M in X are 800 and 6, respectively. The maximum number of hidden nodes H is set to 30, [umin, umax] is [0, 1] and [ymin, ymax] is [0, 1]. The proposed method is run in Matlab 2012a on a personal computer with an Intel Core i5-3470 @ 3.2 GHz and 4 GB RAM. The optimization results for PRC8112A and PRC8112B are groups of Pareto solutions, shown in Fig.9 and Fig.10, respectively.
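Each reported Pareto front consists of the individuals that are non-dominated under simultaneous minimization of (J1, J2). A minimal dominance test (a sketch, not the full fast non-dominated sort of NSGA-II; names ours):

```python
def pareto_front(points):
    """Return indices of non-dominated individuals for minimization of
    two objectives (J1, J2).  Brute-force O(n^2) dominance check."""
    front = []
    for i, (a1, a2) in enumerate(points):
        # i is dominated if some j is no worse in both objectives
        # and strictly better in at least one
        dominated = any(
            (b1 <= a1 and b2 <= a2) and (b1 < a1 or b2 < a2)
            for j, (b1, b2) in enumerate(points) if j != i)
        if not dominated:
            front.append(i)
    return front
```

Applied to the (J1, J2) values of a population, this yields the rank-1 set from which the final solution is later picked by its J2 value.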


Fig.9. Pareto front for PRC8112A.

Fig.10. Pareto front for PRC8112B.

Fig.9 and Fig.10 show that the RMSE of the RBF model, J2, becomes larger as the RV criterion value J1 approaches 1, with the number of selected variables changing from 1 to 6. Though the RMSE becomes larger, the difference between the maximum and minimum values of J1 on the Pareto front is not large, i.e., 0.18 for PRC8112A and 0.35 for PRC8112B. J2 is therefore used to select the final solution: the individual with the minimum value of J2 is chosen, i.e., the individual with (J1, J2) = (1.18, 0.071) in Fig.9 and the corresponding minimum-J2 individual in Fig.10. In the selected individuals, cH+1 is [0 0 1 0 0 0 0 0 0], which means PRC8112B is the main disturbance for PRC8112A and PRC8112A is the main disturbance for PRC8112B. Therefore, the input vector of the RBFNN is [y(k−1), y(k−2), u1(k)]. The model outputs and modeling errors for PRC8112A and PRC8112B are shown in Fig.11 and Fig.12.

Fig.11. Outputs and errors of RBF disturbance model for PRC8112A by the proposed method.


Fig.12. Outputs and errors of RBF disturbance model for PRC8112B by the proposed method.

Fig.13. The modeling errors of PRC8112A and PRC8112B by the PCAGA RBF method.

Fig.14. The modeling errors of PRC8112A and PRC8112B by the SAPCA RBF method.

Fig.15. The modeling errors of PRC8112A and PRC8112B by the LASSO NN method.

Three methods are used for comparison: the PCA variable selection method 30 with an RBFNN optimized by an improved GA 31 aimed at minimizing J2 (PCAGA RBF); simulated annealing for variable selection in terms of the RV criterion 4 with RBFNN modeling (SAPCA RBF); and a multilayer perceptron (MLP) neural network with the least absolute shrinkage and selection operator to select the input variables 32 (LASSO NN). In the PCAGA RBF method, a component with a small eigenvalue, usually less than 0.7, is of less importance, and the variable that dominates it is considered redundant. Here, the eigenvalues are [19.6760, 9.3106, 6.1678, 3.8346, 3.3994, 1.2042] for PRC8112A and [17.7668, 10.5847, 8.3911, 6.6600, 4.7317, 3.2693] for PRC8112B; obviously, all of the disturbances need to be kept. The parameters of the hidden functions and the number of hidden nodes are derived after optimization, and the errors of the resulting RBFNN disturbance models for PRC8112A and PRC8112B are shown in Fig.13. Using the SAPCA RBF method, the third and the fifth disturbances are selected as the main disturbances for PRC8112A and PRC8112B, respectively; the input vectors of the RBFNN for the two-side disturbance models are [y(k−1), y(k−2), u3(k)] and [y(k−1), y(k−2), u5(k)], and the modeling errors are illustrated in Fig.14. Using the LASSO NN method, the MLP neural network is first trained by the Levenberg-Marquardt learning algorithm, then LASSO is applied to select the variables. Three main disturbances [u1(k), u2(k), u5(k)] for PRC8112A and four variables [u1(k), u2(k), u4(k), u5(k)] for PRC8112B are obtained; their modeling errors are shown in Fig.15.

Fig.11 to Fig.15 show that the proposed method obtains the best modeling accuracy. For convenient comparison of the different methods, the RMSEs of the training and testing data, denoted RMSE1 and RMSE2, the parameters of the RBFNN, the running time and the RV criterion are listed in Table 2.


Table 2: The comparison results of the 4 methods

Method      Plant     No. of        No. of input  No. of hidden  RMSE1   RMSE2   Run time (s)  RV
                      disturbances  nodes         nodes
Proposed    PRC8112A  1             3             7              0.019   0.020   5035          0.847
Proposed    PRC8112B  1             3             6              0.0154  0.0213  5106          0.739
PCAGA RBF   PRC8112A  6             8             20             0.032   0.043   2096          1
PCAGA RBF   PRC8112B  6             8             22             0.0145  0.0258  2198          1
SAPCA RBF   PRC8112A  1             3             6              0.0235  0.0356  606           0.953
SAPCA RBF   PRC8112B  1             3             4              0.0427  0.0477  609           0.767
LASSO NN    PRC8112A  3             5             15             0.0542  0.0599  3.92          0.991
LASSO NN    PRC8112B  4             6             15             0.0829  0.0422  3.89          0.996

Table 2 shows that the proposed method selects only one disturbance, with a smaller RV similarity value, while the PCA eigenvalue analysis method keeps all of the disturbances. The SAPCA RBF method obtains a larger RV value than the proposed method because the RV criterion is optimized independently by SA; however, the final RMSE1 and RMSE2 of its disturbance models are inferior to those of the proposed method. Besides, the number of hidden nodes of PCAGA RBF is the largest because of its eight inputs. Since the computation of NSGA-II is complex, its running time is the longest among the four methods, while that of the LASSO NN method, which uses no random search algorithm, is the shortest. In summary, the proposed multi-objective optimization method, which coordinates the RV criterion with the modeling accuracy, gives a group of solutions that can be selected according to specific purposes. The proposed method achieves the smallest RMSE1 and RMSE2 with fewer input and hidden nodes, and is efficient in disturbance selection and disturbance model construction.

5. Conclusion
Disturbance selection using the RV criterion of principal component analysis and RBFNN modeling for nonlinear processes are solved using an improved NSGA-II, where the RV criterion and the modeling error are optimized simultaneously. Moreover, encoding, prolong and pruning operators are adopted to make NSGA-II suitable for RBFNN optimization. Among the group of Pareto solutions, the RMSE value is used to choose the final result. The main disturbance is selected successfully and the RBFNN has

ACS Paragon Plus Environment

Page 15 of 18 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60

Industrial & Engineering Chemistry Research

constructed the disturbance model with satisfactory accuracy. Only the measured data of disturbances and output is used and the preference is easy to be included in the Pareto optimization solution selection. The proposed method can be applied to other industrial processes easily; however, the structure of the inputs is partly fixed, which is to be modified in the future.
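The selection step summarized above — keep the non-dominated (Pareto) solutions of the two objectives, then pick the one with the smallest RMSE — can be sketched as follows. The objective pairs below are hypothetical, and the first objective is written as 1 − RV so that both objectives are minimized; this is an illustration of the selection logic, not the paper's NSGA-II implementation:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (1 - RV, RMSE) pairs for candidate disturbance models
candidates = [(0.15, 0.021), (0.05, 0.034), (0.30, 0.015), (0.20, 0.025)]
front = pareto_front(candidates)   # (0.20, 0.025) is dominated and dropped

# Final model: the Pareto solution with the smallest RMSE (2nd objective)
best = min(front, key=lambda p: p[1])
print(best)  # -> (0.3, 0.015)
```

Inside NSGA-II proper, this non-dominated sorting is applied repeatedly (with crowding distance) to rank the whole population; the snippet only shows the final one-shot selection among the surviving trade-off solutions.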

Acknowledgements
This work was partially supported by the National Natural Science Foundation of China (Grant Nos. 61673147 and 61433005).


[TOC graphic: schematic of coke furnace 101/3, showing the residual oil and circulating oil feeds, flow loops FRC8103, FRC8105, FRC8107, and FRC8108, temperature loops TRC8103 and TRC8105, temperature indicators TR8129, TR8155, and TR8156, the fuel valves, and outlets to towers T101/5,6 and T102.]
