Energy Fuels 2010, 24 (5), 3066–3071. DOI: 10.1021/ef100138f. Published on Web 04/23/2010.

Prediction of Coal Ash Fusion Temperature by Least-Squares Support Vector Machine Model

Bingtao Zhao,* Zhongxiao Zhang, and Xiaojiang Wu

School of Energy and Power Engineering, University of Shanghai for Science and Technology, 516 Jungong Road, Shanghai 200093, China

Received February 3, 2010. Revised Manuscript Received April 6, 2010

Coal ash fusion temperature (AFT) is an important parameter for coal-fired boiler design and evaluation in a power plant. The relationship between coal AFT and the chemical composition of coal ash is complex in nature, which makes the modeling of AFT difficult. In this work, a least-squares support vector machine (LS-SVM) model, based on a dynamically optimized search technique with cross-validation, is developed to predict the coal ash softening temperature (ST). The accuracy of this LS-SVM model was verified by comparison with experimental AFT data for different types of coal. Furthermore, a comparison of the present LS-SVM model with traditional models, for example, multiple linear regression (MLR) and multiple nonlinear regression (MNR) models as well as artificial neural network (ANN) models, showed that the LS-SVM model provided the best generalization accuracy, with a mean squared error of 0.0128 and a correlation coefficient of 0.9272. Finally, based on the LS-SVM model, the correlation between coal ash composition and ST was analyzed, which helps clarify how these parameters influence the fusion behavior of coal ash.

Introduction

Coal ash fusion temperature (AFT) is very important for the design of any kind of coal-fired boiler, because ash fusion behavior governs slagging on wall and tube surfaces and the accompanying decline in heat-transfer efficiency. However, AFT is rather difficult to predict with models because of its complex nature and its dependence on the chemical and mineralogical composition of the coal ash.1-6 Usually, a bench-scale experiment in a high-temperature furnace is conducted to obtain the AFT of a specific coal by viewing a molded specimen of coal ash through an observation window. Different methods are then applied to develop empirical correlations or models that calculate and predict AFT from the experimental data.

*To whom correspondence should be addressed. Tel.: +86 21 55272740. Fax: +86 21 55273704. E-mail address: zhaobingtao@usst.edu.cn.
(1) Vassilev, S. V.; Kitano, K.; Takeda, S.; Tsurue, T. Influence of mineral and chemical composition of coal ashes on their fusibility. Fuel Process. Technol. 1995, 45 (1), 27–51.
(2) Huang, L. Y.; Norman, J. S.; Pourkashaniana, M.; Williams, A. Prediction of ash deposition on superheater tubes from pulverized coal combustion. Fuel 1996, 75 (3), 271–279.
(3) Kahraman, H.; Reifenstein, A. P.; Coin, C. D. A. Correlation of ash behavior in power stations using the improved ash fusion test. Fuel 1999, 78 (12), 1463–1471.
(4) Thompson, D.; Argent, B. B. Prediction of coal ash composition as a function of feedstock composition. Fuel 1999, 78, 539–548.
(5) Zevenhoven-Onderwater, M.; Blomquist, J. P.; Skrifvars, B. J.; Backman, R.; Hupa, M. The prediction of behaviour of ashes from five different solid fuels in fluidised bed combustion. Fuel 2000, 79 (11), 1353–1361.
(6) Seggiani, M. Empirical correlations of the ash fusion temperatures and temperature of critical viscosity for coal and biomass ashes. Fuel 1999, 78, 1121–1125.
(7) Jak, E. Prediction of coal ash fusion temperatures with the F*A*C*T thermodynamic computer package. Fuel 2002, 81, 1655–1668.
(8) Chen, L.; Zhang, Z.; Wu, X.; Chen, G. An experiment study on ash fusibility under weak reducing and oxidation atmosphere. Power System Eng. (In Chinese) 2007, 23 (1), 22–24.

(9) Ozbayoglu, G.; Ozbayoglu, M. E. A new approach for the prediction of ash fusion temperatures: A case study using Turkish lignites. Fuel 2006, 85, 545–552.
(10) Lolja, S. A.; Haxhi, H.; Dhimitri, R.; Drushku, S.; Malja, A. Correlation between ash fusion temperatures and chemical composition in Albanian coal ashes. Fuel 2002, 81 (17), 2257–2261.
(11) Seggiani, M.; Pannocchia, G. Prediction of coal ash thermal properties using partial least-squares regression. Ind. Eng. Chem. Res. 2003, 42 (20), 4919–4926.
(12) Yin, C.; Luo, Z.; Ni, M.; Cen, K. Predicting coal ash fusion temperature with a backpropagation neural network model. Fuel 1998, 77 (15), 1777–1782.
(13) Zhou, H.; Zheng, L.; Fan, J.; Cen, K. Application of general regression neural network in prediction of coal ash fusion temperature. J. Zhejiang Univ. (Engineering Science) (In Chinese) 2004, 38 (11), 1479–1482.
(14) Liu, Y.; Wu, M.; Qian, J. Predicting coal ash fusion temperature based on its chemical composition using ACO-BP neural network. Thermochim. Acta 2007, 454, 64–68.
(15) Chen, N.; Lu, W.; Yang, J.; Li, G. Support Vector Machine in Chemistry; World Scientific Publishing Company: Singapore, 2004.
(16) Vapnik, V. N. The Nature of Statistical Learning Theory; Springer-Verlag: New York, 1995.


The AFT models for coal ash developed so far include the equilibrium phase diagram (EPD) approach,7,8 multiple regression, including multiple linear regression (MLR) and multiple nonlinear regression (MNR),9-11 and artificial neural networks (ANNs).12-14 Although these models have been successful in calculating the AFT of certain coals, they cannot provide highly accurate predictions over a wide range of coal types. Therefore, it is necessary to explore new and effective methods to build a generalized AFT model for most types of coal. Recently, a new artificial intelligence (AI) method known as the support vector machine (SVM) has been developed and successfully applied in many fields, for example, pattern recognition, phase diagram assessment, molecular and materials design, trace element analysis, cancer diagnosis, and chemical engineering and technology.15 SVM is based on the statistical learning theory developed by Vapnik16 and differs from ANNs. It offers a number of theoretical and computational advantages in modeling highly coupled linear or nonlinear systems and has marked the beginning of a new era in the learning-from-examples paradigm.17 However, few published reports have adopted SVM to predict coal AFT. This paper presents the authors' work on coal AFT modeling based on the SVM method.

As the temperature increases, coal ash undergoes a series of physical and chemical changes, and the corresponding characteristic temperatures are called the initial deformation temperature (IDT or DT), the softening temperature (ST, also called the spherical temperature), the hemispherical temperature (HT), and the fluid temperature (FT), according to the fusion phase. Among these temperatures, the ST of coal ash is most strongly related to slagging behavior in a combustor. Therefore, the present work mainly focuses on ST prediction by SVM.

Theory

SVM is a universal approximator based on statistical learning and optimization theory. Originally, SVM was developed for classification tasks.18 With the introduction of Vapnik's ε-insensitive loss function, SVM has been extended to solve linear and nonlinear regression and time-series prediction problems.19-24 SVM is now regarded as a powerful methodology for solving problems in linear and nonlinear classification, density estimation, and function estimation. Compared with the standard SVM, the least-squares SVM (LS-SVM)25,26 can be trained much more efficiently: after constructing the Lagrangian, the solution is obtained from the linear Karush-Kuhn-Tucker (KKT) system

\begin{bmatrix} 0 & \mathbf{1}^{\mathrm{T}} \\ \mathbf{1} & \Omega + \gamma^{-1} I \end{bmatrix} \begin{bmatrix} b \\ \alpha \end{bmatrix} = \begin{bmatrix} 0 \\ y \end{bmatrix} \qquad (1)

where b is the bias term, α the vector of support values, y the vector of training outputs, Ω the kernel matrix, I the identity matrix, and γ the regularization parameter.

Figure 1. Schematic procedure of LS-SVM training and testing.

According to Mercer's theorem,27 the relationship between the mapping function and the kernel function is expressed as

\Omega_{kl} = \varphi(x_k)^{\mathrm{T}} \varphi(x_l) = K(x_k, x_l), \quad k, l = 1, \ldots, N \qquad (2)

The LS-SVM regression model can then be rearranged as

\hat{y}(x) = \sum_{k=1}^{N} \alpha_k K(x, x_k) + b \qquad (3)

Several functions, including the linear, polynomial, radial basis function (RBF), and multilayer perceptron (MLP) kernels, can be used as the kernel function in LS-SVM. Based on a comparison of their availability and adaptability, the RBF (Gaussian) kernel, eq 4, is selected here because of its good performance under general smoothness assumptions:

K(x, x_k) = \exp\left( -\|x - x_k\|^{2} / \sigma^{2} \right) \qquad (4)
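To make eqs 1-4 concrete, the following minimal Python/NumPy sketch (not the authors' code; all function and variable names are illustrative) builds the RBF kernel matrix of eqs 2 and 4, solves the linear KKT system of eq 1 for the bias b and the support values α, and evaluates the regression model of eq 3:

```python
import numpy as np

def rbf_kernel(A, B, sigma2):
    """RBF kernel of eq 4: K(x, x_k) = exp(-||x - x_k||^2 / sigma^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / sigma2)

def lssvm_train(X, y, gamma, sigma2):
    """Solve the linear KKT system of eq 1 for the bias b and support values alpha."""
    n = X.shape[0]
    Omega = rbf_kernel(X, X, sigma2)          # Omega_kl = K(x_k, x_l), eq 2
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0                            # top row: [0, 1^T]
    A[1:, 0] = 1.0                            # left column: 1
    A[1:, 1:] = Omega + np.eye(n) / gamma     # Omega + gamma^{-1} I
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]                    # b, alpha

def lssvm_predict(X_train, b, alpha, sigma2, X_new):
    """Evaluate eq 3: y_hat(x) = sum_k alpha_k K(x, x_k) + b."""
    return rbf_kernel(X_new, X_train, sigma2) @ alpha + b

# Example usage with random placeholder data (9 normalized oxide fractions -> scaled ST)
rng = np.random.default_rng(0)
X, y = rng.random((48, 9)), rng.random(48)
b, alpha = lssvm_train(X, y, gamma=9.49, sigma2=19.82)
y_hat = lssvm_predict(X, b, alpha, sigma2=19.82, X_new=X[:5])
```

The hyperparameter values γ = 9.49 and σ² = 19.82 used in the example are the optimal values reported later in Figure 3; the data in the example are placeholders.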

(17) Jin, L.; Nikiforuk, P.; Gupta, M. Direct adaptive output tracking control using multilayered neural networks. IEEE Proc. D 1993, 140 (6), 393–398. (18) Schmidt, M. S. Identifying speakers with support vector networks. Proceedings of the 28th Symposium on the Interface (Interface 96 Proceedings), Sydney, Australia, 1996. (19) Vapnik, V. N.; Golowich, S.; Smola, A. Support Vector Method for Function Approximation, Regression Estimation and Signal Processing. Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, 1997; Vol. 9, pp 281-287. (20) Vapnik, V. N. An Overview of Statistical Learning Theory. IEEE Trans. Neural Networks 1999, 10 (5), 988–999. (21) Smola, A. J. Learning with Kernels. Ph.D. Thesis, GMD, Birlinghoven, Germany, 1998. (22) Muller, K. R.; Smola, J. A.; Ratsch, G.; Scholkopf, B.; Kohlmorgen, J. Prediction time series with support vector machines. Advances in Kernel Methods; The MIT Press: London, U.K., 1999. (23) Tay, F. E. H.; Cao, L. J. e-Descending Support Vector Machines for Financial Time Series Forecasting. Neural Process. Lett. 2002, 15, 179–195. (24) Cao, L. J.; Lee, H. P.; Seng, C. K.; Gu, Q. Saliency Analysis of Support Vector Machines for Gene Selection in Tissue Classification. Neural Comp. Appl. 2003, 11 (3-4), 244–249. (25) Suykens, J. A. K.; Vandewalle, J. Least squares support vector machine classifiers. Neural Process. Letters 1999, 9 (3), 293–300. (26) Suykens, J. A. K.; Vandewalle, J.; Moor, B. D. Optimal control by least squares support vector machines. Neural Networks 2001, 14 (1), 23–35. (27) Mercer, T. Functions of Positive and Negative Type and Their Connection with the Theory of Integral Equations. Philos. Trans. R. Soc. London, Ser. A 1909, 415–446.


Method

Usually, the LS-SVM modeling procedure can be divided into five steps: variable selection, sample division, parameter optimization, training and testing simulation, and performance evaluation. The general flowchart of the LS-SVM procedure used in the present work is illustrated in Figure 1. The calculations are carried out on a computer with the following hardware configuration: processor, AMD Athlon(TM) 64 X2 Dual Core Processor 4400+ (2.31 GHz); memory, 2.00 GB (DDR2-667, 1 GB × 2); hard drive, 120 GB (5400 rpm).

Variable and Sample. Analysis of the chemical composition indicates that coal ash mainly contains SiO2, Al2O3, Fe2O3, CaO, MgO, TiO2, SO3, Na2O, K2O, and very small amounts of other oxides such as P2O5, NiO, MnO, and so forth. These oxides directly affect the fusibility behavior of coal ash. In previous work by other authors, five to seven main oxides, excluding SO3, were frequently employed to establish predictive models for AFT.12-14 However, Zhang et al.28 found that SO3 also has a significant effect on coal AFT.

(28) Zhang, D.; Long, Y.; Gao, J.; Zheng, B. Relationship between the coal ash fusibility and its chemical composition. J. East China Univ. Sci. Technol. (In Chinese) 2003, 29 (6), 590–594.


Table 1. Input and Output Variables for LS-SVM Models

                 input parameters                                            output parameter
variables        x1    x2     x3     x4   x5   x6    x7   x8    x9           ŷ
specification    SiO2  Al2O3  Fe2O3  CaO  MgO  TiO2  SO3  Na2O  K2O          ST

In this work, the nine oxides including SO3 listed in Table 1 are selected as the input variables, and the ST is selected as the output, to establish the LS-SVM model. These oxides essentially cover the factors that affect coal AFT. Sixty representative sample data sets of coal ash composition and fusion temperature are used in the present investigation to evaluate the prediction performance of the LS-SVM model. These coals were collected from different regions of China and, in terms of coal classification, include lignitic coal (sample size = 8), bituminous coal (36), lean coal (5), and anthracitic coal (11). The ash composition and the ash fusibility were experimentally measured according to the standard test methods GB/T 1574-200730 and GB/T 219-2008,29 respectively. To simulate the actual combustion environment in a boiler more accurately, a weak reducing atmosphere was used for the AFT testing. Table 2 lists the physical and chemical properties (on a dry basis) of these coal ashes.

To enhance the reliability of the LS-SVM model, a normalization method and the simple random sampling (SRS) technique are applied for data processing in this work. Because the variables differ greatly in order of magnitude, the available data set is transformed or scaled into the 0-1 interval using the normalization of eq 5 to avoid solution divergence,

x_{Ni} = \frac{x_i - x_{\min}}{x_{\max} - x_{\min}} \qquad (5)

where x_{Ni}, x_i, x_{min}, and x_{max} are the scaled value of the observed variable, the actual value of the observed variable, and the minimum and maximum observed values of the data set, respectively. Correspondingly, the final simulation results are postprocessed by inverting eq 5 (denormalization). In addition, to obtain a more accurate evaluation of the generalization and robustness of the LS-SVM model, all samples are divided into two nonoverlapping groups, a training sample and a testing sample, using the SRS technique with a size ratio of 80%:20%.
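As an illustration of this preprocessing, the sketch below (an outline under stated assumptions rather than the authors' implementation; the layout of nine oxide columns plus ST is assumed) applies the min-max scaling of eq 5 column by column, keeps the bounds needed for later denormalization, and draws a simple random 80%/20% split:

```python
import numpy as np

def minmax_scale(col):
    """Eq 5: x_Ni = (x_i - x_min) / (x_max - x_min), scaling a column into [0, 1]."""
    return (col - col.min()) / (col.max() - col.min()), col.min(), col.max()

def minmax_unscale(col_scaled, cmin, cmax):
    """Inverse of eq 5, used to postprocess (denormalize) the simulated results."""
    return col_scaled * (cmax - cmin) + cmin

def srs_split(n_samples, train_ratio=0.8, seed=0):
    """Simple random sampling (SRS) into nonoverlapping training/testing index sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = int(round(train_ratio * n_samples))
    return idx[:n_train], idx[n_train:]

# Example: scale a hypothetical data matrix (60 samples x 10 columns: 9 oxides + ST)
data = np.random.default_rng(1).random((60, 10))
scaled = np.empty_like(data)
bounds = []
for j in range(data.shape[1]):
    scaled[:, j], lo, hi = minmax_scale(data[:, j])
    bounds.append((lo, hi))
train_idx, test_idx = srs_split(len(data), train_ratio=0.8)
```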

Parameters Optimization. Two key parameters, the regularization parameter γ and the kernel parameter (the square of the spread factor) σ², play a critical role in establishing a good LS-SVM regression model with high prediction accuracy and stability. Usually, these parameters are assigned as constants using semiempirical correlations; however, it is difficult to know whether such an assigned constant is optimal for the LS-SVM model. Therefore, these parameters need to be determined correctly to optimize the prediction performance. For this purpose, a multistep search (MSS) technique is used to dynamically seek the optimal values of the two key parameters. This technique includes two steps: a coarse search to identify a promising region of the search field according to the contour lines of the error, followed by a fine search over that region (a brief sketch of this search is given at the end of the Method section). Ten-fold cross-validation is employed to avoid overfitting during the parameter optimization process. The network efficiency is first compared according to the evaluation parameters E² and R for the output of the training data to obtain the optimal algorithm parameters for the model configuration. The present LS-SVM model employs a two-dimensional multistep grid search with 10-fold cross-validation to seek the optimal hyperparameters, that is, the regularization parameter γ and the kernel parameter σ². The initial values of γ and σ² are set to 10 and 0.1, and the optimal search is then performed in the intervals (1, 1000) and (0.01, 10), respectively. The effect of the regularization and kernel parameters on the 10-fold cross-validation MSE at this stage is shown in Figure 2.

Training and Testing Simulation. Once the optimal values of the key parameters of the LS-SVM model are obtained using the optimized search technique, the configured model is trained on the randomly selected training data until the convergence conditions are met. Subsequently, the trained model is used to predict the outputs corresponding to the inputs of the testing data, and these predictions are compared with the outputs of the testing data. Finally, the model validity is assessed according to the evaluation parameters.

Evaluation Parameters. To comprehensively compare model performance, the following evaluation parameters are employed: the normalized mean squared error (MSE) E² and the correlation coefficient (CC) R,

E^2 = \frac{1}{n} \sum_{i=1}^{n} \left( y_{Ni} - \hat{y}_{Ni} \right)^2 \qquad (6)

R = \frac{\sum_{i=1}^{n} (y_{Ni} - \bar{y}_N)(\hat{y}_{Ni} - \bar{\hat{y}}_N)}{\sqrt{\sum_{i=1}^{n} (y_{Ni} - \bar{y}_N)^2 \sum_{i=1}^{n} (\hat{y}_{Ni} - \bar{\hat{y}}_N)^2}} \qquad (7)

The simulation time (CPU time) t is also considered, to assess the computational efficiency of each model.
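The two-step grid search with 10-fold cross-validation described above can be sketched as follows; the grids, the fold construction, and the helper names are assumptions, and `lssvm_train`/`lssvm_predict` refer to the earlier LS-SVM sketch. The cross-validation score is the MSE of eq 6.

```python
import numpy as np

def mse(y, y_hat):
    """Normalized mean squared error E^2 of eq 6."""
    return np.mean((y - y_hat) ** 2)

def cv_mse(X, y, gamma, sigma2, k=10, seed=0):
    """Average validation MSE over a k-fold cross-validation."""
    idx = np.random.default_rng(seed).permutation(len(y))
    folds = np.array_split(idx, k)
    scores = []
    for i in range(k):
        val = folds[i]
        trn = np.concatenate([folds[j] for j in range(k) if j != i])
        b, alpha = lssvm_train(X[trn], y[trn], gamma, sigma2)
        scores.append(mse(y[val], lssvm_predict(X[trn], b, alpha, sigma2, X[val])))
    return np.mean(scores)

def grid_search(X, y, gammas, sigma2s):
    """Return the (gamma, sigma^2) pair with the lowest cross-validation MSE."""
    results = {(g, s): cv_mse(X, y, g, s) for g in gammas for s in sigma2s}
    return min(results, key=results.get)

# Coarse pass over the stated intervals, gamma in (1, 1000) and sigma^2 in (0.01, 10);
# a finer pass would then be run around the best coarse point.
# (Assumes lssvm_train / lssvm_predict from the earlier sketch are in scope.)
rng = np.random.default_rng(2)
X_demo, y_demo = rng.random((48, 9)), rng.random(48)
best_gamma, best_sigma2 = grid_search(X_demo, y_demo,
                                      gammas=np.logspace(0, 3, 7),
                                      sigma2s=np.logspace(-2, 1, 7))
```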

Results and Discussion

Comparison of the LS-SVM Model with Experimental Data. The experimental data versus the configured LS-SVM predictions for ST are shown in Figure 3. The square data points show the ST simulated by the LS-SVM model for the training samples. From this figure, it can be seen that the LS-SVM model achieves high training accuracy, with a training MSE of 0.0058 and a correlation coefficient of 0.9624. To check the generalization and robustness of the LS-SVM model, which are of particular concern in practical applications, the open circular data points show the ST simulated by the LS-SVM model for the testing samples. The LS-SVM still agrees well with the experimental values, with a testing MSE of 0.0128 and a correlation coefficient of 0.9272. These results demonstrate that the LS-SVM model can be successfully used for modeling the ST of coal ash and is helpful for estimating slagging characteristics and developing slag control technologies.

Comparison of the LS-SVM Model with the Multiple Regression Models. Figure 4 compares the results of the LS-SVM model with those of the conventional MLR models of Chen and Jiang,31 Fan and Pan,32 and the present work, as well as with the MNR model of the present work.

(31) Chen, W.; Jiang, N. Relation between the coal ash composition and fusibility. Clean Coal Technol. (In Chinese) 1996, 2 (2), 34–37.
(32) Fan, Q.; Pan, P. The correlativity on single chemistry component melting temperature and coal ash melting temperature. Boiler Technol. (In Chinese) 2007, 38 (6), 10–13, 19.

(29) GB/T 219-2008, Determination of fusibility of coal ash, National Standards of the People’s Republic of China, 2008. (30) GB/T 1574-2007, Test method for analysis of coal ash, National Standards of the People’s Republic of China, 2007.


The relations for the MLR and MNR models are given by eqs 8 and 9, respectively:

\hat{y} = \alpha_0 + \alpha_1 x_1 + \alpha_2 x_2 + \cdots + \alpha_n x_n \qquad (8)

\hat{y} = \beta_0 \, x_1^{\beta_1} x_2^{\beta_2} \cdots x_n^{\beta_n} \qquad (9)

where α_0 to α_n and β_0 to β_n are the equation parameters of the MLR and MNR models, x_1 to x_n are the independent variables, and ŷ is the dependent variable. Because of the complexity of the nonlinear relation, eq 9 is linearized to solve for its parameters (a brief fitting sketch for these baselines appears after the ANN comparison below). The calculated training MSEs are 0.2039, 0.0516, 0.0198, and 0.0212, and the testing MSEs are 0.1449, 0.0358, 0.0189, and 0.0480, for the models of Chen and Jiang, Fan and Pan, the present MLR, and the present MNR, respectively. Although the conventional MLR and MNR models can be used to predict the ST, they remain inferior to the LS-SVM model because of their inherent algorithmic limitations. This indicates that the LS-SVM model, which is based on the principle of structural risk minimization and a universal statistical and optimization algorithm, has superior nonlinear fitting capability compared with the traditional models. It also demonstrates that the relationship between the chemical composition and the fusion temperature of coal ash is complex.

Comparison of the LS-SVM Model with ANN Models. Figure 5 compares the generalization performance of the LS-SVM with that of the most popular feed-forward neural networks, including the back-propagation neural network (BPNN), the radial basis function neural network (RBFNN), and the generalized regression neural network (GRNN). In this work, all ANNs employ a three-layer architecture with a single hidden layer, which can theoretically approximate a function with arbitrary accuracy. In these networks, the key parameters, namely the number of hidden-layer neurons NH for the BPNN and the spread factor (the smoothing parameter σ of the Gaussian kernel) for the RBFNN and GRNN, are also determined optimally by the multistep dynamic search with 10-fold cross-validation. According to the comparison illustrated in Figure 5, the simulated results for the testing samples show that the optimized BPNN, RBFNN, and GRNN yield larger prediction errors than the LS-SVM, with testing MSEs of 0.0365, 0.0348, and 0.0514, respectively, although they may give small prediction errors for the training samples, with training MSEs of 4.22 × 10⁻⁵, 2.80 × 10⁻³², and 0.049. This demonstrates that the LS-SVM has superior generalization capability. By using the multistep dynamic search technique with 10-fold cross-validation, the ANN model configurations can be improved effectively, for example by overcoming local minima and optimizing the spread parameter. However, the BPNN still does not display good generalization performance because of its sensitivity to the initial weights and network structure. In addition, the BPNN has the longest simulation time in terms of CPU time because of its iterative back-propagation cycle. As for the RBFNN and GRNN, although they learn quickly because they do not require an iterative procedure, the random selection of the hidden-layer neuron centers for the RBFNN and the dimensionality of the variables for the GRNN still partially limit their ability to predict ST accurately.
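For reference, the two regression baselines of eqs 8 and 9 can be fitted as in the following sketch (placeholder data and names; not the authors' implementation): the MLR coefficients come from ordinary least squares, and the MNR power-law model is linearized with logarithms before fitting.

```python
import numpy as np

def fit_mlr(X, y):
    """Eq 8: y_hat = a0 + a1*x1 + ... + an*xn, fitted by ordinary least squares."""
    A = np.hstack([np.ones((len(X), 1)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef                                   # [a0, a1, ..., an]

def fit_mnr(X, y, eps=1e-6):
    """Eq 9: y_hat = b0 * x1^b1 * ... * xn^bn, linearized by taking logarithms."""
    A = np.hstack([np.ones((len(X), 1)), np.log(X + eps)])   # eps guards zero contents
    coef, *_ = np.linalg.lstsq(A, np.log(y + eps), rcond=None)
    return np.exp(coef[0]), coef[1:]              # b0, [b1, ..., bn]

def predict_mnr(X, b0, betas, eps=1e-6):
    return b0 * np.exp(np.log(X + eps) @ betas)

# Example with placeholder ash-composition data (columns x1..x9) and ST targets
rng = np.random.default_rng(3)
Xd, yd = rng.random((60, 9)) + 0.1, 1100 + 400 * rng.random(60)
a = fit_mlr(Xd, yd)
b0, betas = fit_mnr(Xd, yd)
```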

Table 2. Properties of the Coal Ash Samples Used in the Present Work

                        lignite coal   bituminous coal   lean coal     anthracitic coal   summary
sample size             8              36                5             11                 60
source region           region A(a)    region B(b)       region C(c)   region D(d)

Physical and chemical parameters of coal ash (dry basis):
SiO2 (%)                41.69-67.27    23.04-65.57       45.80-56.55   30.18-59.04        23.04-67.27
Al2O3 (%)               9.35-21.92     2.24-37.57        27.65-36.56   18.10-33.72        22.40-37.57
Fe2O3 (%)               2.66-15.00     1.35-41.88        2.66-12.00    4.42-23.29         1.35-41.88
CaO (%)                 1.44-14.06     0.28-22.92        1.58-5.48     1.56-12.40         0.28-22.92
MgO (%)                 0.40-4.23      0.39-9.65         0.42-2.11     0.52-7.36          0.39-9.65
TiO2 (%)                0.78-3.04      0.15-2.15         1.00-1.66     0.70-2.73          0.15-3.04
SO3 (%)                 0.25-7.35      0.25-26.12        0.46-2.80     0.90-4.49          0.25-26.12
Na2O (%)                0.30-3.10      0.06-1.62         0.90-4.52     0.23-5.41          0.06-5.41
K2O (%)                 0.35-2.40      0.04-3.80         0.37-1.49     0.19-3.07          0.04-3.80
ST (°C)                 1115-1400      1127-1460         1420-1500     1150-1500          1115-1500

(a) Region A: Northeast and Northern China, including the local areas in the provinces (autonomous regions or municipalities) of Heilongjiang and Inner Mongolia. (b) Region B: Northeast, Northern, Northwest, Southwest, Southern, and Southeast China, including the local areas in the provinces (autonomous regions or municipalities) of Heilongjiang, Liaoning, Inner Mongolia, Beijing, Shandong, Shanxi, Henan, Shaanxi, Ningxia, Jiangxi, Hunan, Guizhou, and Jiangsu. (c) Region C: Northern China, including the local areas in the provinces (autonomous regions or municipalities) of Shanxi and Shandong. (d) Region D: Northern, Southwest, and Northwest China, including the local areas in the provinces (autonomous regions or municipalities) of Beijing, Shanxi, Henan, Ningxia, Sichuan, Jiangxi, and Guizhou.



Figure 2. Effect of algorithm parameters of LS-SVM on cross-validation MSE for ST.

Figure 5. ANN simulation for ST: (a) BPNN with the optimal parameter NH = 14; (b) RBFNN with the optimal parameter σ = 0.31; and (c) GRNN with the optimal parameter σ = 0.65.

Figure 3. LS-SVM simulation for ST with the optimal parameters (γ, σ²) = (9.49, 19.82).

Figure 4. Multiple regression simulation for ST.

Correlativity between Coal Ash Composition and Fusion Temperature. In practice, it is even more important to reveal how the fusion temperature is related to the coal ash composition. For this purpose, the correlation coefficients of the individual input variables (SiO2, Al2O3, Fe2O3, CaO, MgO, TiO2, SO3, Na2O, and K2O) with the output variable (ST) are obtained from the experimental data and from the configured LS-SVM model, respectively, as shown in Table 3. At this stage, the full data set is trained and simulated to obtain more comprehensive and accurate results.
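The entries of Table 3 can be reproduced in spirit with a short sketch such as the one below, which evaluates eq 7 between each oxide column and the measured ST ("actual") and between each oxide column and the model output ("predicted"); the variable names and placeholder data are assumptions, not the authors' code.

```python
import numpy as np

def corr(a, b):
    """Correlation coefficient of eq 7 between two vectors."""
    ac, bc = a - a.mean(), b - b.mean()
    return np.sum(ac * bc) / np.sqrt(np.sum(ac ** 2) * np.sum(bc ** 2))

oxides = ["SiO2", "Al2O3", "Fe2O3", "CaO", "MgO", "TiO2", "SO3", "Na2O", "K2O"]

def correlation_table(X, st_measured, st_predicted):
    """Return {oxide: (CC with measured ST, CC with LS-SVM-predicted ST)}."""
    return {name: (corr(X[:, j], st_measured), corr(X[:, j], st_predicted))
            for j, name in enumerate(oxides)}

# Example with placeholder data; in the paper, st_predicted comes from the trained LS-SVM
rng = np.random.default_rng(4)
X_all = rng.random((60, 9))
st_meas = 1100 + 400 * rng.random(60)
st_pred = st_meas + rng.normal(0, 20, 60)
table3_like = correlation_table(X_all, st_meas, st_pred)
```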


Table 3. Correlation Coefficient (CC) between Ash Composition and ST of Coal Ash

CC           SiO2      Al2O3     Fe2O3      CaO        MgO        TiO2      SO3        Na2O       K2O
actual       0.3603    0.6602    -0.5089    -0.5607    -0.3305    0.0557    -0.4036    -0.0289    -0.0671
predicted    0.3577    0.6594    -0.5072    -0.5584    -0.3283    0.0558    -0.4020    -0.0278    -0.0682

As seen from Table 3, the correlation coefficients based on the experimental measurements and on the model predictions agree well, indicating that the LS-SVM model provides sufficient stability and reliability for the prediction of coal ash ST. In addition, for the given experimental samples, the influence of the individual explanatory variables on ST exhibits a complex mapping relationship. Among the nine explanatory variables, the oxides SiO2, Al2O3, Fe2O3, CaO, MgO, and SO3 are significantly correlated with ST, whereas TiO2, Na2O, and K2O show weak or even negligible correlation with ST. Usually, the oxides in coal ash are classified as acidic (SiO2, Al2O3, TiO2, and SO3) or basic (Fe2O3, CaO, MgO, Na2O, and K2O). According to Table 3, among the significantly correlated oxides, the main acidic oxides SiO2 and Al2O3 (with the exception of SO3) contribute positively to ST, indicating that increasing the SiO2 and Al2O3 content increases ST. ST increases significantly with increasing SiO2 content, although ST may depend on other oxides such as Al2O3, Fe2O3, and CaO when the SiO2 content is low.32 Usually, the Al2O3 content in coal ash is lower than the SiO2 content; however, Al2O3 shows the largest effect on ST, with the largest correlation coefficient among all oxides in the present coal ash samples. The main reason is that Al2O3 is an ionic crystal with a high fusion temperature (2050 °C) and can play a "skeleton" role in the fusion process of coal ash. As a result, the positive correlation appears, indicating that ST increases as the Al2O3 content increases. On the contrary, the main basic oxides Fe2O3, CaO, and MgO show a negative correlation with ST, which indicates that the ST of coal ash decreases as the contents of these basic oxides increase. The general trend shows that ST decreases with increasing Fe2O3 content, indicating that Fe2O3 is a good fluxing agent. In a weak reducing atmosphere, Fe2O3 can be transformed into FeO, which further reacts with SiO2 and Al2O3 to form the SiO2-Al2O3-FeO eutectic with a low fusion temperature. It should be noted that if the Fe2O3 content is extremely low, ST will depend mainly on the properties of the other oxides. In addition, a high pyrite content in the coal leads to high contents of both Fe2O3 and SO3, which are shown to have a negative correlation with ST. Another basic oxide, CaO, can also react with SiO2 to form a silicate eutectic with a low fusion temperature. The experimental results indicate that its fluxing action is related to both the CaO content and the SiO2/Al2O3 mass ratio in the coal ash.33 When the CaO content is low, the eutectic forms easily, which helps reduce the coal AFT. However, when the CaO content exceeds the amount required to form the eutectic, the excess CaO in the coal ash exists in monomer form, which has a fusion temperature as high as 2580 °C. Therefore, a very high CaO content can lead to an increase of ST, as reported by Yao.33 Compared to CaO, MgO generally has a very low content (