
Boiling Points of Ternary Azeotropic Mixtures Modeled with the Use of the Universal Solvation Equation and Neural Networks

Alexander A. Oliferenko,‡ Polina V. Oliferenko,‡ José S. Torrecilla,§ and Alan R. Katritzky*,‡

‡Department of Chemistry, University of Florida, Gainesville, Florida 32611, United States
§Department of Chemical Engineering, Complutense University of Madrid, 28040 Madrid, Spain


ABSTRACT: Azeotropic mixtures, an important class of technological fluids, constitute a challenge to theoretical modeling of their properties. The number of possible intermolecular interactions in multicomponent systems grows combinatorially as the number of components increases. Ab initio methods are barely applicable, because rather large clusters would need to be calculated, which is prohibitively time-consuming. The quantitative structure−property relationships (QSPR) method, which is efficient and extremely fast, could be a viable alternative approach, but the QSPR methodology requires adequate modification to provide a consistent treatment of multicomponent mixtures. We now report QSPR models for the prediction of normal boiling points of ternary azeotropic mixtures based on a training set of 78 published data points. A limited set of meticulously designed descriptors, together comprising the Universal Solvation Equation (J. Chem. Inf. Model. 2009, 49, 634), was used to provide input parameters for multiple regression and neural network models. The multiple regression model thus obtained is good for explanatory purposes, while the neural network model provides a better quality of fit, which is as high as 0.995 in terms of squared correlation coefficient. This model was also properly validated and analyzed in terms of parameter contributions and their nonlinearity characteristics.



INTRODUCTION
Green, ecologically benign solvents such as supercritical fluids and ionic liquids are currently available, but the great majority of the chemical industry still relies on individual organic solvents and mixed solvent systems. Classical solvents are available commercially on a large scale, cost much less, and can be finely tuned to address a wide variety of technological needs. Thus, innumerable multicomponent solvent mixtures are currently used in the manufacture of industrial and consumer products.1 Such mixtures serve as specialty solvents for separation, electrochemistry,2 catalytic systems,3 and cleaning, to name just a few applications. Azeotropic mixtures, or azeotropes, constitute a special class of multicomponent solvent systems that is highly relevant industrially and rather challenging from the scientific point of view. The principal distinctive feature of an azeotrope is that it boils at a constant temperature and its components cannot be separated by simple distillation. This happens because the ratio of the constituent solvents in the vapor phase is the same as that in the liquid phase. Binary azeotropic mixtures, made up of two liquid components, occur most frequently; very often the components are highly polar liquids with a pronounced capacity for hydrogen bonding. Ternary azeotropes are also common; for example, a mixture of acetonitrile (44%), methanol (52%), and water (4%) is often used in HPLC analysis. The intermolecular interactions in such mixtures are complex and cannot be satisfactorily modeled by quantum chemistry or molecular dynamics, because the CPU demand is still prohibitively high (taking many days or even weeks of computation time). Molecular modeling of mixtures consisting of more than two components is even more complicated, as the number of pairwise interactions N grows

with the number of components n according to the binomial coefficient formula of eq 1:

N = \binom{n}{2} = \frac{n!}{2!\,(n-2)!}    (1)

For example, for a ternary mixture the number of pairwise interactions is three, and for a quaternary mixture it is six. To the best of our knowledge, the very few previous attempts to model properties of azeotropes4,5 were based on experimentally derived parameters. Thus, the development of a purely theoretical approach, not requiring any empirical data, is highly desirable. Recently, we reported quantitative structure−property relationship (QSPR) studies of binary azeotropes using the CODESSA (Comprehensive Descriptors for Structural and Statistical Analysis) approach,6 in which a large pool of theoretical descriptors was calculated and the best multiple regression models were then derived with a sophisticated automated procedure implemented in the CODESSA-Pro software.7 Because of the two-component nature of binary azeotropes, special mixing schemes for the descriptors had to be used to establish a one-to-one correspondence between descriptors and structures, in accordance with the CODESSA methodology.8 For two components such mixing schemes worked well, but when the number of components is three or more, combining three or more descriptors into one becomes less definitive, and the information content of the result degenerates. If the size of the training set allows (the rule of thumb is 5−6 experimental data points per descriptor), one can explicitly include descriptors for all of the molecules constituting an azeotropic mixture. However, the number of descriptors of each sort multiplied by n (by 3 for ternary mixtures) can quickly exceed what the rule of thumb permits. Such a situation normally requires a choice between two alternatives: (i) to develop a multiple regression model with a limited number (3−5) of high-quality descriptors, or (ii) to reduce a large descriptor pool using an appropriate mathematical tool such as principal component analysis, partial least squares, or neural networks. In the present work, we advantageously utilize a combination of these alternatives. We used the "Universal Solvation Equation", a computational analogue of Abraham's well-known General Solvation Equation,9 previously developed by us,10,11 as a source of high-quality descriptors and, in addition, used artificial neural networks as a powerful prediction tool widely applied in QSPR studies.12 Very recently, a similar approach was successfully applied by us to the prediction of gas solubilities in ionic liquids, which can be viewed as typical two-component mixtures.13 Extending this approach to ternary azeotropic mixtures is a natural next step toward a new era of QSPR, the QSPR of n-component systems, where n can be arbitrarily high. This is the ultimate aim of the present line of research. The specific aim of the present contribution is to test the methodology and to develop a practical and reliable tool for the prediction of boiling points of ternary azeotropic mixtures.



MATERIALS AND METHODS
All of the normal boiling point data utilized were taken from a single literature source.4 All azeotropes reported in this source have compositions characterized by mole fractions. Mole fraction data may or may not be used in the QSPR analysis, depending on the particular method; if multiple regression analysis is used, the relative component loadings are implicitly accounted for by the regression coefficients. Molecular structures of all molecules constituting the azeotropes were drawn using ChemBio3D software (version 11)14 and subsequently optimized at the AM1 (Austin Model 1)15 semiempirical level of theory with the eigenvector-following algorithm, as implemented in the MOPAC (Molecular Orbital PACkage) version 6.0 semiempirical program.16 A gradient norm criterion of 0.01 kcal/Å was enforced, together with the keyword PRECISE, to locate stationary points. Polarizability was calculated from the same AM1 wave function using the finite-field method.17
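For illustration, a minimal MOPAC input deck of the kind used in this step can be generated with a short script. The sketch below is schematic rather than our actual workflow: the keyword line combines standard MOPAC keywords (AM1, EF for eigenvector following, PRECISE, GNORM=0.01, POLAR for the finite-field polarizability), and the function name, file name, and coordinates are placeholders.

    def write_mopac_input(path, title, atoms):
        """Write a schematic MOPAC (ver. 6) input deck: AM1 geometry optimization
        with eigenvector following, a tight gradient criterion, and finite-field
        polarizability. `atoms` is a list of (symbol, x, y, z) in angstroms."""
        keywords = "AM1 EF PRECISE GNORM=0.01 POLAR"
        lines = [keywords, title, ""]  # keyword line plus two comment lines
        for symbol, x, y, z in atoms:
            # The "1" flags mark each Cartesian coordinate as optimizable.
            lines.append(f"{symbol:2s} {x:10.5f} 1 {y:10.5f} 1 {z:10.5f} 1")
        with open(path, "w") as fh:
            fh.write("\n".join(lines) + "\n")

    # Example: a single water molecule (rough placeholder coordinates).
    write_mopac_input("water.dat", "water AM1 optimization",
                      [("O", 0.000, 0.000, 0.000),
                       ("H", 0.757, 0.586, 0.000),
                       ("H", -0.757, 0.586, 0.000)])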

Universal Solvation Equation. For the purposes of the current study, the original equation10,11 was modified to include all three molecular components:

T_b = C_0 + \sum_{i=1}^{3} C_i \alpha_i + \sum_{j=1}^{3} C_j A_j + \sum_{k=1}^{3} C_k B_k + \sum_{l=1}^{3} C_l P_l    (2)

where αi are the polarizabilities of the constituent molecules, Aj are the hydrogen-bond acidity values, Bk are the hydrogen-bond basicity values, Pl are the polarity parameters, C0 is the intercept, and the Cijkl are the multiple regression coefficients found by fitting to experimental boiling points. The Universal Solvation Equation formalism and the details of the descriptor calculations are described in our previous publications.10,11 The semiempirically calculated molecular polarizability serves as the main molecular volume/cavity-formation term, whereas the polarity P is an auxiliary polarization term calculated over the bonds adjacent to heteroatoms only. The hydrogen-bond acidity A and basicity B scales are rather complex functions of molecular geometry and valence-state ionization energies, resembling some equations of density functional theory. Graphical images of acetic acid and pyridine, two constituent molecules in the azeotropes under study, are shown in Figure 1 to facilitate understanding of the physical meaning of these descriptors. The full data, containing experimental and predicted boiling points as well as all relevant descriptor values, are given in Table 1S of the Supporting Information. Statistica software18 was used for the evaluation of basic statistics and for building multilinear correlations.

Figure 1. Graphical illustration of the hydrogen-bond acidity A (red areas) and basicity B (blue areas) for acetic acid (both acidity and basicity) and pyridine (basicity only). The intensity and size of the blue and red areas qualitatively reflect the relative basicity and acidity, respectively.
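To make the bookkeeping behind eq 2 concrete, the sketch below assembles the 12-element descriptor vector (three polarizabilities, three acidities, three basicities, three polarities) for a single ternary azeotrope. It is illustrative only: the class and function names are arbitrary, and the numerical values are hypothetical placeholders rather than descriptors from Table 1S.

    from dataclasses import dataclass

    @dataclass
    class SolvationDescriptors:
        """Universal Solvation Equation descriptors for one pure component."""
        alpha: float  # AM1 molecular polarizability (cavity-formation term)
        A: float      # hydrogen-bond acidity
        B: float      # hydrogen-bond basicity
        P: float      # polarity term over bonds adjacent to heteroatoms

    def ternary_descriptor_row(c1, c2, c3):
        """Concatenate the component descriptors in the grouping of eq 2:
        alpha_1..3, A_1..3, B_1..3, P_1..3 (12 values per azeotrope)."""
        comps = (c1, c2, c3)
        return ([c.alpha for c in comps] + [c.A for c in comps] +
                [c.B for c in comps] + [c.P for c in comps])

    # Hypothetical, illustrative descriptor values only.
    water   = SolvationDescriptors(alpha=1.4,  A=1.0, B=0.6, P=0.8)
    ethanol = SolvationDescriptors(alpha=5.1,  A=0.5, B=0.7, P=0.6)
    benzene = SolvationDescriptors(alpha=10.4, A=0.0, B=0.1, P=0.0)

    # One row of the regression (or ANN) input matrix; eq 2 then models Tb as
    # C0 plus the weighted sum of these 12 descriptors.
    row = ternary_descriptor_row(water, ethanol, benzene)
    print(row)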

Neural Network Model. The back-propagation feed-forward artificial neural network (ANN) is one of the most widely used neural network architectures. This design has been particularly successful in many research fields because of its capability to estimate a dependent variable from an adequate number of independent variables in a relatively simple nonlinear way. ANNs are composed of interconnected units called neurons. The neurons are arranged in layers, and each neuron is furnished with a linear activation function and a nonlinear (sigmoidal or hyperbolic tangent) transfer function. Connections between neurons are weighted by self-adjustable parameters called weights. Normally there is one weight (wij) for each connection between neurons of the ith and jth layers, as displayed in the flow chart (Figure 2).19 The neural network model used here consists of neurons arranged in three layers: an input, a hidden, and an output layer (Figure 2). The input layer feeds data into the network; the linear and nonlinear calculations mentioned above are carried out in the neurons of the other two layers. The whole calculation process consists of four steps per iteration: (i) the estimation (forward) step, in which the training set served as input to the ANN so that the output response to the input vector was as close as possible to the desired response; each time an estimate was made, the result was compared to the corresponding desired value; (ii) the learning (back-propagation) step, in which the estimation error (the difference between the estimated and desired values) was distributed back across the ANN in a manner that allowed the interconnection weights to be optimized; (iii) once the weights were modified, the next data set was fed to the network and a new estimation was made; and (iv) simultaneously, a verification test using the verification set (vide infra) was carried out to monitor ANN overfitting.20 To avoid overfitting, the training process was repeated only as long as the verification error kept decreasing.21 These calculations were carried out in MATLAB.22
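Purely as an illustration of this architecture, a three-layer back-propagation regressor with an early-stopping verification split can be sketched in Python with scikit-learn as follows; the hidden-layer size, activation, scaling, and random placeholder data are assumptions for demonstration, not the MATLAB settings or data used in this work.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Placeholder stand-ins for the 78 x 12 descriptor matrix and Tb values (K).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(78, 12))
    y = 350.0 + X @ rng.normal(size=12) + rng.normal(scale=2.0, size=78)

    # Three layers: 12 inputs, one hidden layer of sigmoidal neurons, one output.
    # early_stopping holds out a verification fraction and stops training once
    # the verification error no longer improves (the overfitting check above).
    ann = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(9,), activation="logistic",
                     solver="adam", early_stopping=True,
                     validation_fraction=0.15, max_iter=5000, random_state=0),
    )
    ann.fit(X, y)

    mpe = 100.0 * np.mean(np.abs(y - ann.predict(X)) / y)  # mean prediction error (eq 3 below)
    print(f"training MPE = {mpe:.2f}%")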




Figure 2. Architecture and flow-chart of the three-layer neural network.

Optimization of the ANN Architecture. The number of input neurons equals the number of independent variables (12), and the number of output neurons is one, because a single property is being modeled. The number of hidden neurons, however, had to be optimized; it was expected to lie between 5 and 15.13 On the basis of the learning-sample size, and bearing in mind the generalization that performance improves when the network is kept small, the smallest number of hidden neurons giving an adequate estimation of boiling points was found to be nine. The resulting ANN (12 input neurons, nine hidden neurons, and one output neuron) estimated Tb with an almost negligible mean prediction error (MPE) of 0.22%. The MPE is calculated according to eq 3:

\mathrm{MPE} = \frac{100}{N} \sum_{k} \frac{|r_k - y_k|}{r_k}    (3)

In eq 3, N is the number of observations, and yk and rk are the neural-network-estimated and the original Tb values, respectively.

Interpretation of ANN Models. Analysis of multiple regression models is straightforward, as the relative contributions of the parameters are identifiable from the values and signs of the regression coefficients. Extracting such information from neural network models is intrinsically less straightforward. Baskin et al.23 proposed a useful technique for analyzing ANN models, in which the influence of single and binary combinations of independent variables on the output value is estimated by special statistical scales: Mx, Dx, Mxx, and Mxy. The statistic Mx is the mean value of the first partial derivative of the output function f(x, y, ...) with respect to the independent variable x; Mx serves as a neural network analogue of the regression coefficient at variable x in a multilinear model. Dx is the variance of the same first partial derivative taken over the data set, and it expresses the degree of nonlinearity of the output with respect to the independent variable x. Mxx is the mean second partial derivative of the output function f(x, y, ...) with respect to the independent variable x, while Mxy is the second mixed partial derivative with respect to two independent variables x and y; Mxy is a measure of the nonlinear interaction between the variables. This methodology is described in detail in the original paper by Baskin and co-workers.23

RESULTS AND DISCUSSION
Multiple Linear Regression Model. Application of eq 2 to the present boiling point data set provides the following general structure−property model, based on all 78 data points, with 10 descriptors selected out of the 12 in eq 2. This multiple linear regression (MLR) model is given in tabular form in Table 1, where columns 2−4 list the regression coefficients Cijkl, the uncertainties of the coefficients ΔCijkl, and the Student's t-statistics, respectively.

Table 1. Multiple Regression Model

term    Cijkl     ΔCijkl    t-crit.
C0      140.16    18.34     7.64
α1       13.81     2.88     4.80
A1        2.42     0.41     5.98
B1        2.23     0.33     6.75
α2        2.45     0.91     2.69
B2        1.17     0.25     4.75
α3        3.94     0.52     7.50
A3        0.82     0.21     3.87
P1        7.96     1.06     7.49
P2        1.43     0.48     2.99
P3        3.76     0.80     4.71

Statistically speaking, the fit of observed Tb versus Tb predicted by eq 2 is quite good, with a correlation coefficient R = 0.934 (R2 = 0.873) and a mean prediction error (calculated by eq 3) MPE = 1.53%. The mean unsigned error of this correlation is 5.2 °C, which is quite low. The scatter plot of this correlation is displayed in Figure 3, which shows that the boiling point range is quite large, extending from 315 to 410 K. The full data set in tabular form is given in Table 1S of the Supporting Information. The azeotropes boiling at the highest temperatures are uniformly those containing both acetic acid and pyridine; this is not surprising, as these two molecules form a strong hydrogen bond with each other. Looking at the MLR model given in Table 1, one immediately sees that, without exception, all of the descriptors contribute positively to the property. Each of these descriptors reflects a certain type of intermolecular interaction, and all such interactions promote aggregation, that is, higher boiling points. It is also seen that the descriptors referring to the first component contribute the most to the QSPR model. This may result from the ordering of the azeotrope data: in each triple, the more polar and more strongly hydrogen-bonded compounds come first, followed by the less polar ones at positions 2 and 3. The relative statistical importance of the descriptors is commonly characterized by the Student's t-statistics. As shown in Table 1, the highest t-criterion values are assigned to the polarizability of the third component as well as to the basicity, polarity, and acidity of the first component.
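A Table 1-style summary (coefficients, their uncertainties, and t-values), together with the fit statistics quoted above, can be reproduced from the descriptor table with standard least-squares tooling. The sketch below is illustrative only; the file name and column labels are hypothetical stand-ins for the layout of Table 1S.

    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical layout: one row per azeotrope, the 10 retained descriptors,
    # and the observed normal boiling point in kelvin.
    data = pd.read_csv("table_1S_descriptors.csv")
    cols = ["alpha1", "A1", "B1", "alpha2", "B2", "alpha3", "A3", "P1", "P2", "P3"]

    X = sm.add_constant(data[cols])          # adds the intercept C0
    y = data["Tb_obs_K"]
    fit = sm.OLS(y, X).fit()

    # The three numeric columns of Table 1: C, its uncertainty, and Student's t.
    table1 = pd.DataFrame({"C": fit.params, "dC": fit.bse, "t": fit.tvalues})
    print(table1.round(2))

    # Fit statistics quoted in the text: R2 and the MPE of eq 3.
    mpe = 100.0 * ((y - fit.fittedvalues).abs() / y).mean()
    print(f"R2 = {fit.rsquared:.3f}, MPE = {mpe:.2f}%")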


Figure 3. Scatter plot of boiling points observed versus predicted by the MLR model.

Figure 4. Scatter plot of boiling points observed versus predicted by the ANN model.

This is in line with chemical common sense: large polarizability values occur in heavy, high-boiling compounds, hence the significance of the third component's polarizability, whereas for lighter, low-boiling compounds (the first components in the azeotropic mixtures) hydrogen-bond capacity and polarity become important, which is reflected in the relatively high significance of B1 and P1. The two distinct outliers clearly seen in Figure 3 are both azeotropes containing carbon disulfide; they are also the most volatile of the set, with boiling points of 311.2 K (carbon disulfide/acetone/water) and 314.4 K (carbon disulfide/ethanol/water). Carbon disulfide, with a boiling point of 319 K, is the lowest-boiling component in the data set under study. The molecular structure of CS2 indicates a nonpolar molecule with a low propensity for hydrogen bonding: zero acidity, rather low basicity, and only moderate polarizability.

Removal of the two CS2-containing data points results in an MLR model with slightly better statistics, R = 0.953 (R2 = 0.909) and MPE = 1.33%. Undoubtedly, the MLR model given in Table 1 is a consistent QSPR model, both physically and statistically. Its explanatory power, expressed in the descriptor values and the regression coefficients, is quite satisfactory, and the model can be used for making qualitative judgments and even semiquantitative predictions. However, the very nature of linear models means that the strong nonlinearities that certainly occur in ternary azeotropic mixtures cannot be properly accounted for, even by a multiparameter linear model. It was therefore expected that artificial neural network methodology would provide higher predictive accuracy.
Artificial Neural Network Model. The intrinsically nonlinear nature of the calculations performed inside a neural network provides a higher quality of fit than that obtained from linear correlations.24


Table 2. Validation Results for the ANN Model

no.   component 1         component 2         component 3       Tb obs, K   Tb calc, K   PE, %
7     water               carbon disulfide    acetone           311.2       314.7        1.12
9     water               chloroform          ethanol           328.4       329.7        0.39
63    chloroform          methanol            acetone           330.6       333.3        0.81
10    water               chloroform          acetone           333.5       334.9        0.43
37    water               ethanol             benzene           338.0       335.3        0.79
76    isopropyl alcohol   2-butanone          cyclohexane       342.0       350.4        2.46
17    water               nitromethane        heptane           343.7       343.1        0.17
75    acetic acid         benzene             cyclohexane       350.3       364.3        3.98
12    water               nitromethane        propyl alcohol    355.4       356.0        0.17
56    water               butanol             nonane            363.1       363.2        0.03
62    water               pyridine            decane            365.4       375.2        2.69
78    butanol             pyridine            toluene           381.8       398.6        4.40
72    acetic acid         pyridine            octane            388.8       408.4        5.05
69    acetic acid         pyridine            ethylbenzene      402.2       407.6        1.34
71    acetic acid         pyridine            p-xylene          402.3       412.8        2.61
MPE, %                                                                                   1.76

Unlike a multilinear model, the learning process of a neural network utilizes various polynomial, factorial, and exponential combinations of the descriptors to reproduce the original data. The artificial neural network was first trained over the whole data set of 78 azeotropes, as described in the Materials and Methods. This provided a quality of fit of 0.995 in terms of the squared correlation coefficient, with an MPE as low as 0.22%. The scatter plot is shown in Figure 4, which demonstrates that almost all of the data points lie very close to the trend line; the two CS2-containing azeotropes are no longer outliers. A slightly more pronounced scatter is seen only in the upper right part of the plot, where the highest-boiling azeotropes, all containing acetic acid and pyridine, are located. The good fit suggests that this ANN model may be used for prediction purposes, for example, when the boiling points of some azeotropic mixtures of known composition are not available. Such missing boiling points can be predicted by calculating the descriptors described above and feeding them into the trained neural network; the predicted boiling points are then read from the network output. The large chemical space spanned by the training set should broaden the applicability of this ANN model to cover virtually all solvents and solvent combinations typically used in the chemical industry.
To make sure that the ANN model developed is statistically robust and not overtrained, an external validation procedure was performed in three steps: (i) 15 ternary azeotrope data points were randomly selected and removed from the training set; (ii) the descriptors were calculated for each of the three components and fed into the ANN input layer; and (iii) the predicted boiling points were read from the ANN output layer and compared to the experimental values. The comparison is given in Table 2; the identification numbers in the first column are the same as those in the master Table 1S (Supporting Information). The individual prediction errors (PE) and the mean prediction error (MPE) are in general very low, 1% or less for 8 of the 15 azeotropic mixtures, with only one strong outlier (acetic acid/pyridine/octane), for which the PE exceeds 5%. The low prediction errors and the high validation coefficient (R2v = 0.969) of the external validation demonstrate the success of the ANN model.
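The hold-out validation described above is straightforward to script. The sketch below mimics it with scikit-learn on placeholder arrays: 15 mixtures are removed at random, the network is retrained on the remaining 63, and the per-mixture prediction errors, the MPE, and R2v are computed for the held-out set. The settings and data are illustrative assumptions, not those of the original MATLAB implementation.

    import numpy as np
    from sklearn.metrics import r2_score
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Placeholder stand-ins for the 78 x 12 descriptor matrix and Tb values (K).
    rng = np.random.default_rng(1)
    X = rng.normal(size=(78, 12))
    y = 350.0 + X @ rng.normal(size=12) + rng.normal(scale=2.0, size=78)

    # Randomly hold out 15 azeotropes; train on the remaining 63.
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=15, random_state=1)
    ann = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(9,), activation="logistic",
                     max_iter=5000, random_state=0),
    )
    ann.fit(X_train, y_train)
    y_pred = ann.predict(X_test)

    pe = 100.0 * np.abs(y_test - y_pred) / y_test   # individual prediction errors, %
    print(np.round(pe, 2))
    print(f"MPE = {pe.mean():.2f}%, R2_v = {r2_score(y_test, y_pred):.3f}")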

Artificial neural networks are designed to provide a close fit; this is why it is not surprising that the correlation shown in Figure 4 is of such good statistical quality. This situation is not rare for neural network applications in chemistry, because almost all published correlations are close to perfect. By contrast, the precise meaning of an ANN model, that is, its interpretation, is only rarely discussed in the literature. A meaningful interpretation, rather than a "black box" solution, can be obtained using the algorithm developed by Baskin and co-workers23 to extract information from neural network data. As described in the Materials and Methods, we calculated the statistical scales useful for analyzing ANN models; they are listed in Table 3 for each of the input parameters used for the neural network training.

Table 3. Nonlinearity Parameters of the ANN Model

x     Mx        Dx          Mxy (y)
α1     24.18     1005.96     75.10 (P1)
A1     65.11    14826.80    667.0 (P1)
B1     −2.77      471.84    −45.80 (P1)
α2      7.35     9573.60     32.50 (A1)
A2      0.24      230.47     56.10 (α2)
B2      6.56      852.52     10.40 (P2)
α3     10.01      529.83      3.49 (B2)
A3     −0.51       43.60    −37.80 (α3)
B3      4.38      480.77      7.46 (α3)
P1    100.71    35756.60     62.70 (P2)
P2     19.46     3768.01    −19.70 (A1)
P3     −0.18       34.01     −4.22 (α3)

As mentioned earlier, Mx can serve as an ANN analogue of a regression coefficient. In Table 3, one can see that the Mx values are generally consistent with the coefficients of the MLR model (eq 2). Molecular polarizability, polarity, and hydrogen-bond acidity of the first azeotropic component make the most significant impact. These parameters also show the strongest nonlinearities, as reflected by the higher values of the Dx scale, with P1 and P2 identified as the most nonlinear. Here, nonlinearity means that a particular parameter is transformed by the neural network from its original form into nonlinear functions such as polynomials, exponentials, or logarithms. Such transformations are quite relevant, because in reality it is barely possible to decompose a physical property into linear orthogonal components; this is why an ANN provides a better description of complex phenomena. Consistent with the linear model, all of the high-impact descriptors bear a positive sign. The acidity parameters A2 and A3, as well as the basicity parameters B1 and B3, contribute less, and their nonlinearity values Dx are accordingly smaller. The pairwise mutual influence of the parameters on one another is reflected by the cross term Mxy; the last column of Table 3 contains only the highest Mxy value for each parameter. It is seen that A1 and P1 are the most entangled descriptors, reiterating their relative importance. These results suggest that the cavity-formation term (polarizability), polarity, and hydrogen-bond acidity are the most important determinants of the liquid−vapor phase equilibria of ternary azeotropic mixtures.
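For any trained regressor with a predict method, the Mx and Dx scales of the Baskin scheme23 can be approximated by central finite differences of the predicted Tb with respect to each input descriptor, averaged (Mx) and spread (Dx) over the data set. The sketch below illustrates the idea using the placeholder model and matrix from the earlier sketches; it is not the implementation used in this work.

    import numpy as np

    def mx_dx(model, X, rel_step=1e-3):
        """Approximate Mx (mean) and Dx (variance) of the first partial derivative
        of the model output with respect to each descriptor, over the data set."""
        n_samples, n_features = X.shape
        derivs = np.empty((n_samples, n_features))
        for j in range(n_features):
            h = rel_step * (X[:, j].std() + 1e-12)   # step scaled to the descriptor
            X_plus, X_minus = X.copy(), X.copy()
            X_plus[:, j] += h
            X_minus[:, j] -= h
            derivs[:, j] = (model.predict(X_plus) - model.predict(X_minus)) / (2.0 * h)
        return derivs.mean(axis=0), derivs.var(axis=0)

    # Usage with the placeholder pipeline `ann` and matrix `X` from the sketches above:
    # Mx, Dx = mx_dx(ann, X)
    # print(np.round(Mx, 2), np.round(Dx, 2))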



CONCLUSIONS
A novel approach to the modeling of multicomponent mixtures is applied in this Article to the prediction of the normal boiling points of ternary azeotropic mixtures. Quantitative structure−property relationship models were developed using a limited number of high-quality descriptors, each encoding a specific type of intermolecular interaction in the liquid state. Multiple linear regression and artificial neural networks were used as the statistical tools for building explanatory and predictive models, respectively. The statistical quality of these ternary azeotrope models is high, with a squared correlation coefficient of 0.995 for the neural network model. Molecular polarizability, polarity, and hydrogen-bond acidity are found to be the most important parameters for modeling the boiling points of ternary azeotropes. The validated model can be used for the prediction of boiling points of ternary azeotropic mixtures and should also be broadly applicable to the modeling of complex fluid phases.



ASSOCIATED CONTENT

Supporting Information
Table 1S contains compositions, boiling points, and descriptor values for all 78 azeotropic mixtures. This material is available free of charge via the Internet at http://pubs.acs.org.



AUTHOR INFORMATION

Corresponding Author

*Tel.: (352) 392-0554. Fax: (352) 392-9199. E-mail: [email protected]fl.edu.
Notes

The authors declare no competing financial interest.



REFERENCES

(1) Industrial Solvents Handbook; Flick, E. W., Ed.; Noyes Data Corp.: Westwood, NJ, 1998.
(2) Sivakolundu, S. G.; Mabrouk, P. A. J. Am. Chem. Soc. 2000, 122, 1513.
(3) Wang, P.; Chen, D.; Tang, F.-Q. Langmuir 2006, 22, 4832.
(4) Demirel, Y. Thermochim. Acta 1999, 339, 79.
(5) Wang, Q.; Chen, G.-H.; Han, S.-J. J. Chem. Eng. Data 1996, 41, 49.
(6) Katritzky, A. R.; Stoyanova-Slavova, I. B.; Tämm, K.; Tämm, T.; Karelson, M. J. Phys. Chem. A 2011, 115, 3475.
(7) CODESSA-Pro: www.codessa-pro.com.
(8) Katritzky, A. R.; Kuanar, M.; Slavov, S.; Hall, C. D. Chem. Rev. 2010, 110, 5714.
(9) Abraham, M. H. Chem. Soc. Rev. 1993, 22, 73.
(10) Oliferenko, A. A.; Oliferenko, P. V.; Hudleston, J. G.; Rogers, R. D.; Palyulin, V. A.; Zefirov, N. S.; Katritzky, A. R. J. Chem. Inf. Comput. Sci. 2004, 44, 1042.
(11) Oliferenko, P. V.; Oliferenko, A. A.; Poda, G.; Palyulin, V. A.; Zefirov, N. S.; Katritzky, A. R. J. Chem. Inf. Model. 2009, 49, 634.
(12) Zupan, J.; Gasteiger, J. Neural Networks in Chemistry and Drug Design, An Introduction; Wiley-VCH: Weinheim, 1999.
(13) Oliferenko, A. A.; Oliferenko, P. V.; Seddon, K. R.; Torrecilla, J. S. Phys. Chem. Chem. Phys. 2011, 13, 17262.
(14) ChemBio3D Ultra, ver. 11; www.chembridgesoft.com.
(15) Dewar, M.; Zoebisch, E. G.; Healy, E.; Stewart, J. J. P. J. Am. Chem. Soc. 1985, 107, 3902.
(16) Stewart, J. J. P. MOPAC Program Package; QCPE: Bloomington, IN, 1989; No. 455.
(17) Kurtz, H. A.; Stewart, J. J. P.; Dieter, K. M. J. Comput. Chem. 1990, 11, 82.
(18) Statistica, ver. 6; StatSoft: Tulsa, OK, 2001.
(19) Fine, T. L. Feed-forward Neural Network Methodology; Springer-Verlag: New York, 1999.
(20) Ghaffari, A.; Abdollahi, H.; Khoshayand, M. R.; Soltani Bozchalooi, I.; Dadgar, A.; Rafiee-Tehrani, M. Int. J. Pharm. 2006, 327, 126.
(21) Izadifar, M.; Abdolahi, F. J. Supercrit. Fluids 2006, 38, 37.
(22) Demuth, H.; Beale, M.; Hagan, M. Neural Network Toolbox for Use with MATLAB, User's Guide, ver. 5.1 (Release 2007b); 2007 (online only).
(23) Baskin, I. I.; Ait, A. O.; Halberstam, N. M.; Palyulin, V. A.; Zefirov, N. S. SAR QSAR Environ. Res. 2002, 13, 35.
(24) Torrecilla, J. S.; Palomar, J.; García, J.; Rojo, E.; Rodríguez, F. Chemom. Intell. Lab. Syst. 2008, 93, 149.
