Article pubs.acs.org/IECR

Approximating Necessary Conditions of Optimality as Controlled Variables

Lingjian Ye,*,† Yi Cao,‡ Yingdao Li,† and Zhihuan Song§

†Ningbo Institute of Technology, Zhejiang University, 315100, Ningbo, Zhejiang, People's Republic of China
‡School of Engineering, Cranfield University, Cranfield, Bedford MK43 0AL, United Kingdom
§Institute of Industrial Process Control, Zhejiang University, 310027, Hangzhou, Zhejiang, People's Republic of China

ABSTRACT: Selection of controlled variables (CVs) has recently gained wide attention because of its paramount importance in real-time optimization (RTO) of plant operation. The so-called self-optimizing control (SOC) strategy aims to select appropriate CVs such that, when they are maintained at constant setpoints, the overall plant operation is optimal or near optimal despite various disturbances and uncertainties. Recent progress in the SOC methodology has focused on finding linear combinations of measurements as CVs via linearization of the process around its nominal operating point, which results in plant operation that is only locally optimal. In this work, the concept of necessary conditions of optimality (NCO) is incorporated into CV selection to overcome the "local" shortcoming of existing SOC methods. Theoretically, the NCO should be selected as the optimal CV, although this may not be practical because the NCO is not measurable. To address this issue, in this work, CVs are selected to approximate the unmeasured NCO over the entire operation region, with zero setpoints, to achieve near-optimal operation globally. The NCO-approximating CVs can be obtained through any existing regression approach. Two particular regression methods, namely, least-squares and neural networks, are adopted in this work as an illustration of the proposed methodology. The effectiveness and advantages of the new approach are demonstrated through two case studies, and the results are compared with those obtained using existing SOC methods and an NCO tracking technique.

linear measurement combinations, Kariwala7 and Kariwala et al.6 developed eigenvalue decomposition approaches to minimize the worst-case loss and the average loss, respectively. A null space method for linear measurement combination design was initiated by Alstad and Skogestad8 and later refined by Alstad et al.9 Similar to other control structure design problems, CV selection is a combinatorial problem.
Finding a globally optimal measurement subset is infeasible when the number of measurements and the number of CVs to select are large. To address this problem, novel bidirectional branch-and-bound algorithms were developed for the minimum singular value rule,10 worst-case loss minimization,11 and average loss minimization.12 Recently, a mixed-integer quadratic programming approach was proposed for CV selection based on average loss minimization.13 In NCO tracking, the two components of the NCO, namely, the active constraints and the reduced gradients, are computed online and chosen as controlled variables; the inputs are then adjusted directly to satisfy the optimality conditions.14,15 The NCO tracking technique was originally developed for batch processes, utilizing batch-to-batch repeatability to estimate the NCO so that the plant operation can eventually converge to true RTO.16 Recently, NCO tracking was extended to continuous processes.17 The major difference between SOC and NCO tracking can be described as follows. In SOC, some off-line

1. INTRODUCTION
A competitive market, energy shortages, and environmental protection make real-time optimization (RTO) of plant operation a requirement. However, uncertainties and disturbances make RTO difficult to implement through purely model-based approaches. To tackle uncertainties and disturbances, feedback optimizing control was first suggested.1 An intuitive solution for feedback optimizing control would be to control the necessary conditions of optimality (NCO) directly.2−4 However, this is impractical, because many NCO components cannot be measured online. To address this issue, over the past decade, feedback optimizing control has developed in two parallel directions, namely, self-optimizing control (SOC)5 and NCO tracking.4 The concept of SOC was proposed by Skogestad5 to select the most appropriate controlled variables (CVs) such that, when these CVs are kept at their nominal constant set-points through feedback control, the plant operation is optimal or near optimal. Since integral control is able to maintain a selected CV at the desired set-point in steady state, selecting CVs is the main focus of SOC. In the past decade, various CV selection approaches have been developed. In the SOC framework, CVs are selected such that the economic loss caused by adopting constant CV control, compared to the true optimal operation, is minimized. Two criteria, namely, the worst-case loss3 and the average loss,6 have been derived for CV selection. By evaluating these criteria, CVs can be selected either directly from individual measurements or as measurement combinations. Normally, adopting measurement combinations as CVs results in lower economic losses than using individual measurements directly. For

© 2012 American Chemical Society

Received: November 7, 2011
Revised: November 28, 2012
Accepted: December 13, 2012
Published: December 13, 2012

dx.doi.org/10.1021/ie300654d | Ind. Eng. Chem. Res. 2013, 52, 798−808

Industrial & Engineering Chemistry Research

Article

(ANN)are proposed to approximate unmeasured NCO as CVs. In section 5, two case studies are provided to demonstrate the usage and advantages of the proposed method. Finally, some conclusions are drawn in section 6.

calculations are carried out primarily to select CVs based on output measurements for a given set of disturbances, while the online task is left to any feedback controller with integral action to track the selected CVs at their predetermined set-points. In NCO tracking, the active constraints are obtained online from measurements and the reduced gradients are computed appropriately for tracking the NCO; the optimal input is then calculated explicitly according to certain adaptation laws. Clearly, SOC and NCO tracking focus on different aspects of RTO: the former seeks the optimal CVs, while the latter solves for the optimal input explicitly, although this difference is not fundamental, because inputs can also be selected as CVs or included in measurement combinations as CVs for SOC. For these two RTO schemes, an important issue is how the gradients are computed online for control. In SOC, the gradient approximations are implicitly embedded in the CV models, which are determined off-line with the aforementioned SOC techniques. Therefore, online, SOC is fast-converging and hence can reject fast disturbances. However, since the CVs are determined based on a linearization of the process model around the nominal operating point, the RTO performance is only locally optimal. In NCO tracking, the gradients can be evaluated through either model-free or model-based approaches.18 In model-free approaches, the gradients are evaluated online experimentally, e.g., by finite differences via input perturbations.4,17 Generally, such an online estimation procedure is globally optimal but slow, because the controller must wait until the plant operation reaches steady state before making any adjustment to the inputs.
To improve the convergence speed, the gradients can also be evaluated through model-based approaches, such as neighboring-extremal control (NEC),19 where the gradients are computed from the output measurements and the optimal input is accordingly derived as a function of the output measurements. The derivation is carried out off-line, so the online control can act very quickly. However, because the optimal input of the NEC is derived around a nominal operating point, it suffers from the same "local" shortcoming as existing SOC approaches. More recently, detailed comparisons between SOC and NCO tracking have become available in the literature.18,20 In summary, currently available RTO approaches suffer from either local optimality, as in the existing SOC approaches and the NEC, or slow convergence, as in model-free NCO tracking approaches. In order to integrate both merits, namely, fast convergence and global optimality, this work presents a novel CV selection approach for feedback optimizing control, which follows the spirit of SOC to "achieve an acceptable loss with constant set-point values for the CVs".5 Therefore, the new approach inherits the fast convergence of SOC. Moreover, in the new approach, CVs are selected to approximate the NCO over the entire operation space, so that when such CVs are kept at zero, the NCO is approximately satisfied and the plant operation is globally near optimal. The remainder of this paper is organized as follows: After a brief overview of the SOC methodology in section 2, the concept of selecting CVs to approximate the NCO is proposed in section 3. A general procedure for selecting such CVs is then devised in section 4, where two commonly used regression methods, namely, least-squares (LS) regression and feed-forward artificial neural networks

2. SELF-OPTIMIZING CONTROL FOR CV SELECTION
Consider the static optimization problem for continuous processes with parametric uncertainties represented as disturbances, which can generally be formulated as

min_u J(u, d)   (1)

subject to

g(u, d) ≤ 0

where J is an objective function that typically represents the operating cost or negative economic profit, u ∈ R^nu are the manipulated variables, d ∈ R^nd are the uncertain disturbances, and g: R^nu × R^nd → R^ng are the constraints, which are usually related to safety assurances or product qualities. Under the influence of a varying d, the RTO problem theoretically must be solved repeatedly to determine the optimal inputs uopt(d) corresponding to every change of the disturbances. However, the unknown disturbance d is the main obstacle to solving this optimization problem, although for the nominal disturbance d* the problem can be readily solved. Suppose c ∈ R^nu are the CVs in the control loops, with their nominally optimal set-points at copt(d*). SOC theory5 shows that, for appropriately selected c, if the values of c are maintained at copt(d*) through feedback control, then the process inputs u automatically approach uopt(d) despite the variation of d. The best CVs for self-optimizing performance can be obtained by solving a complicated nonlinear optimization problem.3 To simplify the problem, a linearized model between the output measurements and the independent variables u and d at the nominal point is derived as

y = Gy u + Gyd Wd d + Wn ny   (2)

where Gy and Gyd are the steady-state gain matrices of the inputs and disturbances, respectively, ny ∈ R^ny is the implementation error vector associated with the individual measurements, and Wd and Wn are diagonal magnitude matrices that normalize d and ny, respectively. The CVs, as linear combinations of all measurements y, can generally be expressed as

c = Hy   (3)

where H is the combination matrix, with full row rank nu to square the control system. In particular, zero columns in H imply that the corresponding measurements are unused. The combination matrix H can be determined to minimize the loss in the objective function incurred by holding the selected CVs at their set-points. Two definitions of this loss have been devised for CV selection, namely, the worst-case loss3 and the average loss6 for uniformly distributed disturbances:

Lworst = (1/2)σmax²(M)   (4)

Laverage = (1/(6(ny + nd)))‖M‖F²   (5)


where σmax(·) and ‖·‖F are the maximum singular value and the Frobenius norm of a matrix, respectively. The matrix M is defined as

M = [Juu^(1/2)(Juu⁻¹Jud − G⁻¹Gd)Wd   Juu^(1/2)G⁻¹HWn]   (6)

This means that the CVs for SOC must change when constraints alternate between active and inactive under different disturbances. To circumvent this problem, the development of SOC must assume that the set of active constraints does not change, although this assumption is not practical for most processes. Recently, this issue has been addressed through several approaches, including split-range control,22 a parametric programming approach,23 a cascade control approach,2 and an explicit constraint-handling approach.24 In this work, for simplicity and to focus on the main contribution, we assume that the set of active constraints does not change with the disturbances. Concerning the two parts of the NCO, controlling ga has traditionally gained common consensus, whereas the compressed reduced gradients ∇crJ were overlooked, mainly because the compressed reduced gradient is an abstract mathematical concept and generally is immeasurable, although, intuitively, its components should be adopted as CVs, as suggested by many researchers.2,3,14 To overcome the measurability problem, Cao2 suggested indirect gradient control, which selects CVs that make the compressed reduced gradients less sensitive to disturbances when these CVs are perfectly controlled. However, the sensitivity measure was derived based on linearization at the nominal operating point; hence, the approach is still local. In the following analysis, it is assumed that the active constraints ga = 0 are measurable and perfectly controlled. For the remaining nu − na degrees of freedom, the selected CVs c are perfectly controlled at their corresponding set-points cs, i.e., c − cs = 0. Therefore, it is convenient to redefine the CVs as c̃ = c − cs with zero set-points. The aim of indirect gradient control is to keep the compressed reduced gradients at zero. Hence, the indirect control error is the difference between the compressed reduced gradients and the CVs, as shown in expression 10:

where G = HGy and Gd = HGyd, while Juu = ∂²J/∂u² and Jud = ∂²J/∂u∂d are the Hessian matrices of J with respect to the inputs and the input-disturbance cross terms, respectively, evaluated at the nominal point. Based on the above derivations, CV selection is characterized as minimizing expression 4 or 5 with respect to H, and explicit expressions for H have been derived.6,7,9 Since these formulations are based on a linearization of the nonlinear process, the methods are local and may result in large losses when the actual disturbances are far from the nominal one; such phenomena have already been noted by Kariwala et al.6 through an evaporator case study. In the next section, we develop a new approach that selects CVs to approximate the NCO over the entire operation space, so that the solution is globally sound.
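As a concrete illustration of eqs 4−6, the sketch below evaluates Lworst and Laverage for a hypothetical process with four measurements, two inputs, and two disturbances. Every matrix value here is an invented placeholder, not data from the paper's case studies.

```python
# Sketch: evaluating the local loss criteria of eqs 4 and 5 for a candidate
# combination matrix H via the M matrix of eq 6. All values are illustrative.
import numpy as np

def sqrtm_spd(A):
    """Symmetric square root of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.sqrt(w)) @ V.T

ny, nu, nd = 4, 2, 2
Gy  = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])  # dy/du
Gyd = np.array([[0.5, 0.0], [0.0, 0.5], [0.2, 0.1], [0.1, 0.2]])   # dy/dd
Juu = np.array([[2.0, 0.5], [0.5, 1.0]])     # Hessian wrt inputs
Jud = np.array([[0.3, 0.1], [0.2, 0.4]])     # input-disturbance Hessian
Wd  = np.diag([0.5, 0.5])                    # disturbance magnitudes
Wn  = np.diag([0.01, 0.01, 0.5, 0.5])        # measurement noise magnitudes
H   = np.array([[1.0, 0.0, 0.1, 0.0], [0.0, 1.0, 0.0, 0.1]])

G, Gd = H @ Gy, H @ Gyd                      # G = HGy, Gd = HGyd
J12 = sqrtm_spd(Juu)                         # Juu^(1/2)
Md = J12 @ (np.linalg.solve(Juu, Jud) - np.linalg.solve(G, Gd)) @ Wd
Mn = J12 @ np.linalg.solve(G, H) @ Wn
M = np.hstack([Md, Mn])                      # eq 6

Lworst = 0.5 * np.linalg.svd(M, compute_uv=False)[0] ** 2   # eq 4
Laverage = np.linalg.norm(M, "fro") ** 2 / (6 * (ny + nd))  # eq 5
print(Lworst, Laverage)
```

Given a candidate set of H matrices, the one minimizing either loss would be preferred under the corresponding criterion.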

3. CONTROLLED VARIABLES (CVS) AND NECESSARY CONDITIONS OF OPTIMALITY (NCO)
Suppose uopt(d) is the solution of problem 1; then, at uopt(d), the following Karush−Kuhn−Tucker (KKT) conditions,21 also known as the first-order NCO, must hold:

g(uopt, d) ≤ 0
μk ≥ 0, μk gk(uopt, d) = 0, k = 1, 2, ..., ng
∂J/∂u(uopt, d) + Σ_{k=1}^{ng} μk ∂gk/∂u(uopt, d) = 0   (7)

where μk, k = 1, 2, ..., ng, are the Lagrange multipliers. The constraints gk that hold with strict equality are the active constraints, denoted ga; they correspond to nonzero Lagrange multipliers and can be eliminated from the above equations if ga ∈ R^na is known. The NCO can then be represented in two parts:15 the active constraints ga and the reduced gradients ∇rJ.

ga = 0
∇rJ = (∂J/∂u)[I − (∂ga/∂u)⁺(∂ga/∂u)] = 0   (8)

In expression 8, the reduced gradient ∇rJ still has nu components. Note that the matrix inside the square brackets is a null-space projection matrix of ∂ga/∂u. The reduced gradient can therefore be compressed to nu − na dimensions using the singular value decomposition. Let ∂ga/∂u = USVᵀ and V = [V1 V2], where V2 contains the nu − na right singular vectors corresponding to the nu − na zero singular values. Then, the NCO with the compressed reduced gradient is

ga = 0, ga ∈ R^na
∇crJ = (∂J/∂u)V2 = 0, ∇crJ ∈ R^(nu−na)   (9)

The above equations indicate that the optimal inputs uopt(d) can be sought by satisfying a set of NCO with exactly nu components. According to expression 9, the NCO will vary depending on which subset of constraints is active.

With c̃ perfectly controlled at zero, the indirect control error is

ε = ∇crJ|c̃=0 = ∇crJ − c̃   (10)

Furthermore, it can be shown that the economic loss caused by the indirect control error can be expressed as follows:20

LIGC = (1/2)εᵀ∇cr⁻²Jε ≤ (α/2)‖ε‖₂²   (11)

where ∇cr⁻²J is the inverse of the compressed reduced Hessian of J under the active constraints ga = 0, and the constant α ≥ λ̄(∇cr⁻²J). According to expression 11, it is desirable to select CVs that are as close to the compressed reduced gradients as possible; in other words, CVs can be selected to approximate the compressed reduced gradients, and the approximation should hold over the entire operation space so that the solution is globally sound. Numerical algorithms to solve this NCO approximation problem are presented in the next section. The difference between the global approach and existing local SOC is illustrated in Figure 1, where the true compressed reduced gradient (solid line) is approximated by a linear combination of measurements (dash-dotted line) and by a nonlinear combination of measurements (dashed line), while the dotted line is the CV determined by local SOC using a linear combination of measurements. As shown in Figure 1, the local linear-combination CV matches the gradient perfectly at the nominal point, marked y*, but produces large errors, and hence large losses as indicated by expression 11, when the operating point moves far from the nominal point. In contrast,


numerically using existing regression or function-fitting software tools. A general procedure is proposed as follows:
(1) Determine the operation space and sample the entire space using independently generated disturbances, inputs, and random measurement noises.
(2) Calculate the immeasurable NCO components, zi,j, for j = 1, 2, ..., nu − na, and the measurements, yi, at each sampling point i using a process simulation model.
(3) Fit each immeasurable NCO component zi,j using linear or nonlinear functions of all measurements, ẑi,j = f(yi, θj), by adjusting the fitting parameters θj to minimize the error sum of squares, SE,j = Σ_{i=1}^{N}(zi,j − ẑi,j)².
The goodness of a regression can be evaluated with the R² statistic, defined as follows. Let N samples xi, i = 1, 2, ..., N, be approximated by x̂i, i = 1, 2, ..., N, and let x̄ = Σ_{i=1}^{N} xi/N. Then,

R² = (ST − SE)/ST   (12)

where ST = Σ_{i=1}^{N}(xi − x̄)² is the total sum of squares and SE = Σ_{i=1}^{N}(xi − x̂i)² is the error sum of squares. The closer the R² index is to one, the better the regression. Generally, step 3 can be solved using any regression algorithm. Two particular algorithms, namely, least-squares regression and neural networks, are illustrated in detail as follows.

Figure 1. CV comparison of local and global SOC: compressed reduced gradient (solid line), local linear combination (dotted line), global linear combination (dash-dotted line), and nonlinear combination (dashed line).

4.1. Least-Squares Regression (Method I). The least-squares (LS) method is the most commonly used regression approach and can be used for linear or nonlinear fitting. For linear fitting, the jth component of the approximated NCO is represented as

the global approach treats the entire operation space uniformly. Although the approximation by a linear-combination CV may not be perfect at the nominal point, the average error over the whole space is minimized. The minimum loss is expected for the nonlinear-combination CV, which approximates the gradient globally better than the other CVs. It is worth highlighting that, although the NCO is used to select CVs off-line, it is not directly tracked online. Instead, the off-line selected CVs are maintained at zero to indirectly track the NCO, as illustrated in Figure 2b. As a comparison, traditional NCO tracking can be classified as a direct approach, which relies on an online estimator for direct NCO tracking, as shown in Figure 2a.
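The relation between the projected reduced gradient of expression 8 and the compressed form of expression 9 can be checked numerically. The sketch below uses invented numbers (nu = 3 inputs, na = 1 active constraint) and verifies that the two formulations carry the same information:

```python
# Sketch: the compressed reduced gradient of eq 9 versus the projected
# reduced gradient of eq 8, on illustrative numbers.
import numpy as np

A = np.array([[1.0, 2.0, -1.0]])        # d(ga)/du, shape (na, nu)
gradJ = np.array([[0.4, -0.3, 0.2]])    # dJ/du, shape (1, nu)

# eq 8: project dJ/du onto the null space of d(ga)/du.
nu = A.shape[1]
red = gradJ @ (np.eye(nu) - np.linalg.pinv(A) @ A)

# eq 9: keep only the nu - na null-space directions V2 from the SVD of A.
U, s, Vh = np.linalg.svd(A)
na = np.linalg.matrix_rank(A)
V2 = Vh.T[:, na:]                        # right singular vectors for zero singular values
cred = gradJ @ V2                        # compressed reduced gradient, shape (1, nu - na)

# Expanding cred back along V2 recovers the projected reduced gradient of eq 8.
assert np.allclose(red, cred @ V2.T)
```

Driving the nu − na components of cred to zero is thus equivalent to zeroing the full projected reduced gradient.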

ẑi,j = [1 yiᵀ]θj,  i = 1, 2, ..., N,  j = 1, 2, ..., nu − na   (13)

where N is the number of samples. Define

Y = [1 y1ᵀ; ⋮; 1 yNᵀ],   zj = [z1,j; ⋮; zN,j]

Then, minimizing the least-squares error εj = Σ_{i=1}^{N}(zi,j − ẑi,j)², the fitting parameters for the jth component of the NCO are

θj = (YᵀY)⁻¹Yᵀzj,   j = 1, 2, ..., nu − na   (14)
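A minimal sketch of the fit of eqs 13 and 14, checked with the R² index of eq 12; the samples here are synthetic, generated from a known linear map of the measurements so that the recovered parameters can be verified:

```python
# Sketch: linear least-squares fit of one NCO component (eqs 13-14),
# evaluated with the R^2 statistic of eq 12. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
N, ny = 200, 3
ys = rng.uniform(-1.0, 1.0, size=(N, ny))           # sampled measurements y_i
Y = np.hstack([np.ones((N, 1)), ys])                # regressor rows [1 y_i^T] of eq 13
theta_true = np.array([0.5, -1.2, 0.8, 2.0])        # known coefficients (bias first)
z = Y @ theta_true + 0.01 * rng.standard_normal(N)  # NCO samples with small noise

# eq 14: theta_j = (Y^T Y)^{-1} Y^T z_j (solve is preferred over an explicit inverse)
theta = np.linalg.solve(Y.T @ Y, Y.T @ z)

# eq 12: R^2 = (ST - SE)/ST
zhat = Y @ theta
ST = np.sum((z - z.mean()) ** 2)                    # total sum of squares
SE = np.sum((z - zhat) ** 2)                        # error sum of squares
R2 = (ST - SE) / ST
print(R2)
```

Since the synthetic map is linear, R² comes out very close to one and the fitted θ matches the generating coefficients.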

Figure 2. Comparison of (a) direct and (b) indirect NCO tracking.

4. NCO APPROXIMATION
According to the analysis presented in the previous section, CVs can be selected to approximate the NCO over the entire operation space. In general, such an approximation problem is difficult to solve analytically but is readily solved

Nonetheless, the linear fitting may not be accurate enough, because almost all real processes are nonlinear in nature. To improve the fit, a polynomial approximation with higher-order terms can be adopted. Typically, a second-order polynomial approximation to the jth component at sampling point i is

ẑi,j = [1 yiᵀ]θj + vecᵀ(yi yiᵀ)vec[αq,r]j   (15)

where vec(·) converts a matrix to a vector and [αq,r] is a matrix whose (q,r)th element is αq,r. This is a linear function of all the fitting parameters, which can therefore be obtained through the linear least-squares approach. To facilitate the derivation, define the following matrices:


Figure 3. Feed-forward networks: (a) network structure and (b) connection between neurons.
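As a sketch of how the second-order regressor of eq 15 can be assembled and the polynomial fit solved, consider the following; the target values are synthetic, generated from a known quadratic map. Note that vec(y yᵀ) duplicates the cross terms, so YpᵀYp is singular and a minimum-norm least-squares solver is used here instead of the explicit inverse of eq 16:

```python
# Sketch: second-order polynomial regressor of eq 15 and its least-squares
# fit. The target z is synthetic, from a known quadratic map.
import numpy as np

rng = np.random.default_rng(2)
N, ny = 300, 2
ys = rng.uniform(-1.0, 1.0, size=(N, ny))

def features(y):
    # One row of Yp: [1, y^T, vec^T(y y^T)]
    return np.concatenate(([1.0], y, np.outer(y, y).ravel()))

Yp = np.vstack([features(y) for y in ys])
z = 0.3 - 0.5 * ys[:, 0] + ys[:, 1] + 0.7 * ys[:, 0] * ys[:, 1]  # synthetic NCO

# vec(y y^T) repeats the cross terms, so Yp is rank-deficient; lstsq returns
# the minimum-norm solution instead of (Yp^T Yp)^{-1} Yp^T z.
theta, *_ = np.linalg.lstsq(Yp, z, rcond=None)
zhat = Yp @ theta
assert np.allclose(zhat, z)   # the quadratic map lies in the feature space
```

In practice, keeping only the upper-triangular part of y yᵀ removes the duplicated columns and restores the full-rank form assumed by eq 16.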

Yp = [1 y1ᵀ vecᵀ(y1y1ᵀ); ⋮; 1 yNᵀ vecᵀ(yNyNᵀ)],   θ̃j = [θj; vec[αq,r]j]

Then, the least-squares solution for the polynomial fitting parameters is

θ̃j = (YpᵀYp)⁻¹Ypᵀzj,   j = 1, 2, ..., nu − na   (16)

Remark 1: Even with linear combinations of measurements to approximate the unmeasured NCO, the new approach still differs from existing local SOC methods, because the approximations are made for samples generated over the entire operation space and hence are global, as shown in Figure 1. The differences and advantages of the new approach will be illustrated through case studies.

4.2. Feed-Forward Neural Networks (Method II). Besides polynomial regression, artificial neural networks (ANNs) are also commonly used for nonlinear regression. In an ANN, layers are connected by nodes called "neurons" to mimic biological neural networks. The feed-forward neural network is the simplest and most widely used type of ANN in practice; the data are transferred in one direction, from the input neurons, through the hidden neurons, to the output neurons. The structure of feed-forward neural networks is shown in Figure 3. Applying the ith sample measurement yi to the feed-forward ANN input, the jth output of the ANN can be represented as

ẑi,j = W2σ(W1yi + b1) + b2   (17)

where W1, b1, W2, and b2 are the input weights, input biases, output weights, and output biases, respectively, which are tuned to minimize the fitting error between ẑi,j and zi,j. Each element of the vector function σ is an activation function; a commonly used choice is the sigmoid function

σi(x) = (eˣ − e⁻ˣ)/(eˣ + e⁻ˣ),  i = 1, 2, ..., nh   (18)

where nh is the number of hidden nodes. It has been proven that, for a large enough nh, the nonlinear map of expression 17 can approximate any nonlinear function.25 The fitting parameters W1, b1, W2, and b2 are determined through training. Back-propagation is the most commonly adopted learning technique for training ANNs. The learning procedure is as follows: the network outputs are compared with the target values, the differences between them are computed through a predefined error function, and this error information is then propagated backward through the network to adjust the weights and reduce the error. These steps are repeated iteratively until a sufficiently large number of training cycles is reached or the error falls below an acceptable threshold. Details of the various algorithms can be found elsewhere (e.g., ref 25).

Remark 2: Both the LS regression and the ANN approximation proposed above have their own advantages and disadvantages, just as in other regression applications. For example, in the LS regression method, the number of cross-product terms


grows dramatically as the polynomial order increases, so a compromise between affordable effort and fitting accuracy is critical in determining the polynomial order. In the ANN method, the fitting problem is not convex; hence, training may become trapped at a local minimum, with the consequence that the generalization ability is not always guaranteed. Many measures have been proposed in the literature to alleviate this problem; for example, adopting the simplest structure with the minimum number of hidden nodes normally improves generalization.

Remark 3: It is worth mentioning that the use of LS regression and ANNs is not compulsory. Any other regression approach, such as partial least-squares or support vector machines, is also applicable for approximating the NCO as CVs.

Remark 4: Another valuable advantage of the proposed method is that it needs only first-order derivative information, whereas other SOC methods require second-order derivative information, which may sometimes cause numerical problems. This feature makes the proposed method numerically more robust and reliable.
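To make the training procedure of section 4.2 concrete, here is a bare-bones sketch of a one-hidden-layer network (eqs 17 and 18, with the tanh form of the sigmoid) trained by plain gradient descent on synthetic data; the sizes, data, and learning rate are all arbitrary choices, not values from the paper:

```python
# Sketch: one-hidden-layer feed-forward network of eqs 17-18, trained with
# back-propagated gradients of the mean squared error. Data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
ys = rng.uniform(-2.0, 2.0, size=(200, 1))       # sampled measurements
z = np.sin(ys)                                   # stand-in NCO component to fit

nh = 10                                          # hidden nodes
W1 = 0.5 * rng.standard_normal((nh, 1)); b1 = np.zeros(nh)
W2 = 0.5 * rng.standard_normal((1, nh)); b2 = np.zeros(1)

def forward(y):
    h = np.tanh(y @ W1.T + b1)                   # sigma of eq 18 (tanh form)
    return h, h @ W2.T + b2                      # zhat of eq 17

losses, lr = [], 0.1
for _ in range(3000):
    h, zhat = forward(ys)
    err = zhat - z
    losses.append(float(np.mean(err ** 2)))
    # back-propagation: gradients of the mean squared error
    dz = 2.0 * err / len(ys)
    dW2 = dz.T @ h;          db2 = dz.sum(axis=0)
    dh = dz @ W2
    dpre = dh * (1.0 - h ** 2)                   # tanh derivative
    dW1 = dpre.T @ ys;       db1 = dpre.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

assert losses[-1] < losses[0]                    # training error decreased
```

A production fit would add a validation split and early stopping, in line with the generalization caveat of Remark 2.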

dCA/dt = (1/τ)(CAi − CA) − r   (19)

dCB/dt = (1/τ)(CBi − CB) + r   (20)

dT/dt = (1/τ)(Ti − T) + 5r   (21)

where τ is the residence time (τ = 60 s) and r is the rate of reaction, which depends on the process variables:

r = 5000 exp(−10000/(1.987T))CA − 10⁶ exp(−15000/(1.987T))CB   (22)
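At steady state, eqs 19−21 collapse to a single equation in CA: adding eqs 19 and 20 gives CA + CB = CAi + CBi, and eq 21 gives T = Ti + 5(CAi − CA). A pure-Python sketch solving this equation by bisection for the nominal disturbances should land near the nominal point reported in Table 1:

```python
# Sketch: steady state of eqs 19-21 for the nominal disturbances, reduced to
# one equation in CA and solved by bisection.
import math

tau = 60.0
CAi, CBi, Ti = 1.0, 0.0, 424.292           # nominal values (Table 1)

def rate(CA):
    # eq 22, with CB and T expressed through CA at steady state
    CB = CBi + (CAi - CA)
    T = Ti + 5.0 * (CAi - CA)
    return (5000.0 * math.exp(-10000.0 / (1.987 * T)) * CA
            - 1.0e6 * math.exp(-15000.0 / (1.987 * T)) * CB)

def residual(CA):
    return (CAi - CA) / tau - rate(CA)     # eq 19 with dCA/dt = 0

lo, hi = 1e-6, CAi + CBi - 1e-6            # residual changes sign on this interval
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if residual(lo) * residual(mid) <= 0.0:
        hi = mid
    else:
        lo = mid

CA = 0.5 * (lo + hi)
print(CA, Ti + 5.0 * (CAi - CA))           # should be close to 0.498, 426.803
```

The recovered CA and T agree with the nominal operating point of Table 1, which provides a basic consistency check of the reconstructed rate expression.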

The classifications of the manipulated variable, available measured variables, and disturbances are

u = [Ti]
y = [CA CB T Ti]ᵀ
d = [CAi CBi]ᵀ

5. CASE STUDIES
In this section, two case studies are presented to demonstrate the effectiveness of the proposed new approach. First, an exothermic reactor example is studied to compare the new approach with existing local SOC approaches and to highlight the advantage of global approximation. Then, a more general evaporator process is investigated, in which both an active constraint and a reduced gradient appear in the KKT conditions. In addition, the dynamic performances of the new approach and of NCO tracking with online gradient evaluation are also compared there. We demonstrate that the proposed approach responds much faster than the NCO tracking scheme, owing to the way the gradient is estimated. 5.1. Exothermic Reactor. This exothermic reactor example was previously studied by Alstad26 and Kariwala.7 The reactant A is fed into a continuous stirred-tank reactor (CSTR) and undergoes a reversible exothermic reaction, as shown in Figure 4. The inlet temperature, as well as

The anticipated noises for the measured variables are ±0.01 mol/L for the concentrations CA and CB and ±0.5 K for the temperatures T and Ti. The allowable disturbance sets are 0.5 ≤ CAi ≤ 1.5 and 0 ≤ CBi ≤ 0.5. Note that the variation ranges of the disturbances have been enlarged in this study from the original specification7,26 in order to highlight the difference between the global and local approaches. The operational objective is to maximize the economic profit, which is equivalent to minimizing the cost function

J = −20090CB + (0.1657Ti)²   (23)

where the first term of J is the negative profit of product B and the second represents the cost of heating the input stream. Here, a multiplicative factor of 10⁴ dollars has been imposed on J relative to the original studies,7,26 because the magnitude of the original J is too small for comparison. Minimizing J with respect to u means increasing the concentration of product B while reducing the energy consumption via the manipulated inlet stream temperature Ti. The effect of Ti on J is contradictory: increasing Ti accelerates the reaction rate and produces more B, but consumes more energy, and vice versa. Therefore, there exists an optimal value of Ti that minimizes J. The nominal values of the process variables are given in Table 1. The operational degree of freedom for this case is one; that is, only one CV needs to be selected to square the control system. To achieve optimal control, the gradient dJ/du should be maintained at zero. The curve-fitting methods presented in

Table 1. Process Variables and Nominal Values for the Exothermic Reactor

Figure 4. Exothermic reactor process.

concentrations of A and product B in the feed are denoted as Ti, CAi, and CBi, respectively, while the outlet temperature and the concentrations of unreacted A and product B in the outlet stream are denoted as T, CA, and CB, respectively. The first-principles model is composed of differential equations for the mass and energy balances:

variable | physical description       | nominal value
CA       | outlet A concentration     | 0.498 mol L−1
CB       | outlet B concentration     | 0.502 mol L−1
T        | outlet stream temperature  | 426.803 K
Ti       | inlet stream temperature   | 424.292 K
CAi      | inlet A concentration      | 1.0 mol L−1
CBi      | inlet B concentration      | 0 mol L−1
J        | economic objective         | −5149.3 × 10−4 $


Table 2. Average Economic Loss Comparison

                 Without Noise                            With Noise
CV          average   maximum   std deviation     average   maximum   std deviation
cLR         3.085     15.824    3.139             6.284     41.888    7.789
ĉLR         3.928     17.747    3.480             5.139     35.140    5.723
cPR         0.0919    0.974     0.116             1.918     51.010    4.461
ĉPR         0.2418    2.925     0.358             1.164     10.592    1.611
cANN        0.00561   0.094     0.0102            5.852     80.494    7.986
cKariwala   10.014    76.358    14.619            10.821    98.112    15.860
cAlstad     10.077    76.719    14.834            10.893    99.324    16.074

section 4 are then used to construct the CV using available measurements. Simulations for generating samples are conducted with the factorial design method, where each of the allowable variation ranges of the independent variables, 0.5 ≤ CAi ≤ 1.5, 0 ≤ CBi ≤ 0.5, and 380 ≤ Ti ≤ 450, is divided equally into 10 parts, so that a total of 11³ = 1331 samples are generated. The CV using linear regression Method I is found to be

cLR = −0.9552 − 0.2279y1 + 0.188y2 − 0.0091y3 + 0.0115y4    (24)

By adding uniformly distributed measurement noise to the samples, we obtain a biased solution, which gives the CV as follows:

ĉLR = −0.9479 − 0.252y1 + 0.1946y2 − 0.0043y3 + 0.0066y4    (25)

The R2 regression indices for the above two CVs are 0.9464 and 0.9436, respectively, indicating that the regression accuracies are not perfect but acceptable. A second-order polynomial regression is further performed, resulting in the CV

cPR = 0.6419 + 0.1841y1 − 0.7371y2 − 0.0599y3 + 0.0569y4 + 0.0068y1y2 − 0.0254y1y3 + 0.0246y1y4 + 0.0089y2y3 − 0.0068y2y4 − 0.004y3y4 + 0.0461y1² − 0.0173y2² + 0.0021y3² + 0.0019y4²    (26)

A biased solution obtained by adding noise to the samples is

ĉPR = 0.7938 + 0.1297y1 − 0.5927y2 − 0.0247y3 + 0.0210y4 + 0.0213y1y2 − 0.0023y1y3 + 0.0017y1y4 − 1.38 × 10−4y2y3 + 0.0018y2y4 − 2.37 × 10−4y3y4 + 0.004y1² − 0.014y2² + 1.5 × 10−4y3² + 9.17 × 10−5y4²    (27)

The corresponding R2 indices for cPR and ĉPR are 0.9997 and 0.9974, respectively, indicating that the second-order regression is good enough and no higher-order polynomial is required. Applying Method II, a neural network regression model is obtained with an R2 index of 0.9999; the CV is simply denoted here as

cANN = z(y)    (28)

We do not add noise to the samples for neural network fitting, because the many adjustable parameters in the model may cause overfitting and degrade its predictive ability. For comparison purposes, CVs using the methods proposed by Kariwala et al.,6 to minimize the average local loss, and by Alstad et al.,9 using the extended null space method, are also calculated. The two CVs are very similar:

cKariwala = 0.3205y1 − 0.2741y2 − 2.78 × 10−5y3 − 0.0022y4 + 0.9067

cAlstad = 0.3245y1 − 0.2744y2 − 1.57 × 10−5y3 − 0.0022y4 + 0.9052

To evaluate the performances of different CVs numerically, the steady-state loss function is defined as

L = J(ufb, d) − Jopt(d)    (29)

where J(ufb, d) is the objective function J obtained by implementing the feedback control law to maintain the CV at zero, while Jopt(d) is the true optimal J. A small value of L indicates that the control system is capable of maintaining the process near its optimum. A Monte Carlo experiment is conducted through simulation to evaluate the different CVs: 1000 sets of disturbances d, uniformly distributed in their variation ranges, are randomly generated, and the steady-state losses are computed using expression 29. The average losses, with and without measurement noise, are listed in Table 2.

From Table 2, we can see that the average losses for cKariwala and cAlstad are relatively large (more than 10). Note that these losses have already been significantly reduced compared to traditional CVs using individual measurements.6,9 The losses are further reduced by applying the methods proposed in this paper. The losses for cLR, ĉLR, cPR, ĉPR, and cANN without measurement noise are much smaller than those resulting from the local CVs. The loss for ĉLR (3.928) is higher than the loss for cLR (3.085), which is anticipated because ĉLR is a biased solution with measurement noise added to the sample data. The same situation can be observed for cPR (0.0919) and ĉPR (0.2418). Selecting cANN (0.00561) as the CV further reduces the loss by several orders of magnitude, indicating that the process is almost optimally controlled over the entire operation space. When measurement noise is considered, selecting ĉLR as the CV (5.139) performs better than cLR (6.284), which is expected because the noise is not included when calculating the latter. cPR and ĉPR perform best, with average losses of 1.918 and 1.164, respectively. This indicates that the polynomial combination is the most robust and appropriate combination form for this particular example under measurement noise. We also note that the performance of cANN is not as good as in the noise-free case, which indicates that the robustness of the ANN approximation is not satisfactory.

5.2. Evaporator. To further demonstrate the effectiveness of the new approach, a "forced circulation" evaporator,27 shown in Figure 5, is studied. The concentration

of dilute liquor is increased by evaporating solvent from the feed stream through a vertical heat exchanger with circulated liquor.

Figure 5. Schematic of an evaporator system.

Model equations are given in the Appendix. The process model contains 3 state variables and involves 20 process variables. The manipulated variables and disturbances are

u = [ F2 P100 F3 F200 ]T
d = [ F1 X1 T1 T200 ]T

where the variation ranges for the disturbances are defined as ±20% of their nominal values. The objective of operation is to minimize the operational cost (in $ h−1) related to steam, cooling water, and pump work:

J = 600F100 + 0.6F200 + 1.009(F2 + F3)    (30)

The constraints related to product specification, safety, and design limits are as follows:

X2 ≥ 35 + 0.5%    (31)
40 kPa ≤ P2 ≤ 80 kPa    (32)
P100 ≤ 400 kPa    (33)
F200 ≤ 400 kg min−1    (34)
0 kg min−1 ≤ F3 ≤ 100 kg min−1    (35)

Note that a 0.5% backoff has been imposed on X2 to ensure that the variable remains feasible under all circumstances. The nominal values of the process variables are listed in Table 3.

Table 3. Process Variables and Nominal Values for the Evaporator Process

variable   physical description               nominal value
F1         feed flow rate                     10 kg min−1
F2         product flow rate                  1.4085 kg min−1
F3         circulating flow rate              28.0 kg min−1
F4         vapor flow rate                    8.5915 kg min−1
F5         condensate flow rate               8.5915 kg min−1
X1         feed composition                   5%
X2         product composition                35.5%
T1         feed temperature                   40 °C
T2         product temperature                91.216 °C
T3         vapor temperature                  83.608 °C
L2         separator level                    1 m
P2         operating pressure                 56.426 kPa
F100       steam flow rate                    10.017 kg min−1
T100       steam temperature                  151.52 °C
P100       steam pressure                     400 kPa
Q100       heat duty                          366.626 kW
F200       cooling water flow rate            230.525 kg min−1
T200       inlet cooling water temperature    25 °C
T201       outlet cooling water temperature   45.498 °C
Q200       condenser duty                     330.775 kW
J          operational cost                   $6178.2 h−1

At the nominal point, two process constraints are active: X2 − 35.5 = 0 and P100 − 400 = 0. These two constraints remain active over the entire disturbance region; physical explanations are given in the work of Cao.2 Hence, these two NCO components are controlled for optimizing purposes. Note that P100 is a manipulated variable, so P100 − 400 = 0 can easily be achieved in a feed-forward manner by keeping P100 at its maximum. In addition, the separator level L2, which has no steady-state effect, is open-loop unstable and, hence, must be stabilized, consuming one degree of freedom. Therefore, only one component remains in the compressed reduced gradient, which must be controlled. Without loss of generality, F200 is chosen as the manipulated variable to control the gradient. When the two active constraints are perfectly controlled, the compressed reduced gradient function is analytically derived as2

∇crJ = 0.6 − 0.5538 ((T201 − T200)/F200) [6.306 (0.16(F1 + F2) + 0.07F1)/(T100 − T2) + 42F1/36.6]    (36)

However, expression 36 relies on the unmeasured disturbances; hence, it cannot be used as a CV for feedback control. Instead, the new approaches proposed in this work are applied to determine CVs that approximate the compressed reduced gradient. The following measurements are considered as predictors of the gradient:

y = [ F2 F100 T201 F3 ]T

This subset is chosen because earlier investigations6 have shown that using four measurements gives a good tradeoff between optimizing performance and the economic investment in hardware sensors, and the above subset is identified as one of the best four-measurement subsets. For this example, measurement noise is not considered, for simplicity. Applying the same full factorial design method as in the previous case to generate data, each disturbance variable is divided into five equal parts within its expected range. The manipulated variable, F200, is also partitioned into five equal parts within [180, 300], a roughly estimated interval around the nominal point.

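The sampling and Method I fit described above can be sketched as follows. The grid construction mirrors the paper's design (each range split into five equal parts, i.e., six levels per variable; ±20% disturbance ranges taken from the Table 3 nominal values; F200 spanning [180, 300]). The steady-state simulation that would supply y = [F2, F100, T201, F3] and the reduced gradient at each grid point is replaced here by a synthetic quadratic, so that the least-squares step runs end to end; the coefficient vector `theta` plays the role of the polynomial CV coefficients.

```python
import itertools
import numpy as np

# Six grid points per variable: disturbances at +/-20% of nominal (Table 3),
# manipulated variable F200 on [180, 300].
nominal = {"F1": 10.0, "X1": 5.0, "T1": 40.0, "T200": 25.0}
levels = [np.linspace(0.8 * v, 1.2 * v, 6) for v in nominal.values()]
levels.append(np.linspace(180.0, 300.0, 6))            # F200 grid
grid = np.array(list(itertools.product(*levels)))      # 6**5 = 7776 points
print(grid.shape)

# Synthetic stand-in for the steady-state simulations: measurements Y and a
# quadratic "gradient" g that lies in the span of the second-order basis.
rng = np.random.default_rng(1)
Y = rng.uniform(0.0, 1.0, size=(grid.shape[0], 4))
g = 1.0 + Y @ np.array([0.5, -0.3, 0.2, 0.1]) + 0.05 * Y[:, 0] * Y[:, 1]

def design(Y):
    """Second-order polynomial basis: 1, y_i, y_i*y_j, y_i**2 (Method I)."""
    cols = [np.ones(len(Y))] + [Y[:, i] for i in range(4)]
    cols += [Y[:, i] * Y[:, j] for i in range(4) for j in range(i, 4)]
    return np.column_stack(cols)

# Least-squares fit of the gradient samples; c_PR(y) = design(y) @ theta.
theta, *_ = np.linalg.lstsq(design(Y), g, rcond=None)
resid = design(Y) @ theta - g
print(float(np.abs(resid).max()))  # essentially zero: g lies in the basis span
```

Controlling the fitted combination c(y) = design(y) @ theta to a setpoint of zero is then the self-optimizing policy; in the paper the targets g are the simulated reduced gradients rather than this synthetic stand-in.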

A total of 6⁵ = 7776 data points are generated for the measurement values and the compressed reduced gradient. Applying Method I, linear and second-order polynomial regressions are performed, and two CVs are obtained:

c̃LR = 2.7655 − 0.0528F2 − 0.1205F100 − 0.0415T201 + 0.0124F3

c̃PR = −2.2597 + 0.1944F2 + 0.2518F100 + 0.1165T201 + 0.2235F3 + 0.0327F2F100 − 0.0029F2T201 − 0.0046F2F3 − 0.0130F100T201 − 2.95 × 10−4F100F3 + 3.69 × 10−5T201F3 − 0.0608F2² − 0.0319F100² − 9.97 × 10−4T201² − 0.0011F3²

with R2 indices of 0.7122 and 0.9681, respectively, indicating that the second-order polynomial approximates the gradient much better than the linear regression. An ANN approximation through Method II is also obtained, with an R2 index of 0.9991; the resulting CV is denoted as

c̃ANN = z̃(y)    (37)

For comparison, the local SOC null space method8,9 is applied, and the CV is found to be

c̃Alstad = −0.9951 + 0.0020F2 − 0.0976F100 − 0.0081T201 + 0.0125F3    (38)

A Monte Carlo simulation experiment is conducted with 1000 sets of randomly generated disturbances. The economic losses from this test are compared in Table 4.

Table 4. Average Economic Loss Comparison for the Evaporator Process

CV         average loss   maximum loss   standard deviation
c̃LR           4.775         30.888            6.467
c̃PR           0.198          1.569            0.246
c̃ANN          0.431          6.473            0.874
c̃Alstad      10.539         39.017            9.634

Results show that c̃LR, c̃PR, and c̃ANN all outperform c̃Alstad, whose average economic loss is 10.539. The nonlinear combinations c̃PR (0.198) and c̃ANN (0.431) result in much smaller economic losses than the global linear combination c̃LR (4.775), because of the nonlinear nature of the evaporator process. However, the performance of c̃ANN is not as good as that of c̃PR, which differs from the results observed in the CSTR case. This suggests that the essential relationship between y and ∇crJ may be close to a polynomial function, and that the generalization ability of c̃ANN is not good enough.

Furthermore, the proposed global SOC approach is compared with NCO tracking through dynamic simulation. First, an experiment is carried out to determine the approximate time for the system to settle from one state into the next steady state. It is found that a transient time of approximately 300 min is needed to reach a new steady state when the disturbances change to their maximum or minimum values. The disturbance scenarios are arranged as follows: in the first 4000 min, all disturbances remain at their nominal values and the system is operated at the nominal point. Then, in each subsequent 4000 min interval, the four disturbances are simultaneously changed to their minimum, nominal, or maximum values at random, as shown in Figure 6.

Figure 6. Disturbance trajectories.

For the proposed approach, c̃PR is taken as the CV, because it results in the minimum loss, as discussed above. In addition, L2, X2, and P100 are controlled. A decentralized control structure is adopted using single-input single-output (SISO) proportional-integral (PI) controllers, where L2 is paired with F3, X2 is paired with F2, and c̃PR is paired with F200, while P100 is set directly in a feed-forward manner by keeping it at its maximum value. For NCO tracking, we are interested in how the gradient is evaluated and its influence on the dynamic performance. Therefore, instead of providing adaptation laws for all manipulated variables, we first control the active constraints using SISO PI controllers on a small timescale, because active constraints usually have a much larger impact on the objective function and must be satisfied first. F200 is used as the degree of freedom to satisfy the gradient part of the NCO and is adapted on a much larger timescale, after the active constraints have been satisfied. A period of 400 min for perturbing or adapting F200 is chosen, to ensure that the process and the other control loops can reach steady state. The reduced gradient of J with respect to F200 is evaluated by the finite difference method, perturbing F200 with a magnitude of 2 kg min−1. To implement NCO tracking, perturbation and adaptation actions are carried out alternately, to detect whether a disturbance has entered the process and to achieve optimal control under the new circumstances. In addition, if the difference between two successive inputs is less than the perturbation magnitude, the gradient is calculated directly from these two operating points, without an additional perturbation. The dynamic simulation results for F200 and the gradient ∇crJ of expression 36 are shown in Figure 7. Furthermore, a snapshot of the first 8000 min of the simulation is shown in Figure 8. From the simulation results, we can clearly see how the two control strategies achieve optimizing control. Because the proposed method does not need to wait for the system to reach steady state before taking an action, the gradient control loop works simultaneously with the other control loops. After disturbances enter the system, F200 reaches its ultimate value very quickly. Although its ultimate value does not necessarily


converge to the optimal value, because of the approximation errors between the CV and the gradient function, the losses are acceptable, as indicated in Table 4. As a comparison, NCO tracking converges to the true optimal point under every disturbance scenario after several iterative updates of F200. However, quite a long time is required to determine the optimal value. Even when the system is already operating at the optimal point, the algorithm still perturbs F200 to detect whether a disturbance has entered. On the other hand, when a disturbance occurs, the algorithm does not adapt F200 immediately, but waits until a perturbation has been performed to evaluate the gradient, and then waits for the next time instant to update F200, as indicated in Figure 8. More importantly, it is clear that if the disturbances occur faster than the time the system needs to reach steady state, the NCO tracking algorithm will not achieve optimizing control. The total cost for the proposed global SOC method is $4,156,022.89, while the total cost for NCO tracking is $4,157,079.28, indicating that an extra $1056.39 can be saved over the 4 × 10⁴ min ≈ 666.7 h ≈ 27.8 days of operation by using the proposed method.

Figure 7. Dynamic performance comparison: (a) F200 response and (b) response of the compressed reduced gradient.

Figure 8. Snapshot of dynamic simulations for the first 8000 min: (a) F200 response and (b) response of the compressed reduced gradient.

6. CONCLUSIONS

This paper presented a new approach to overcome the local feature of self-optimizing control (SOC) by incorporating the concept of necessary conditions of optimality (NCO), which it was suggested should be selected as controlled variables (CVs) with their setpoints at zero. However, the main difficulty of this strategy lies in the fact that not all NCO are measurable online. This difficulty is overcome by using measurement combinations as CVs to approximate the unmeasured NCO, typically the reduced gradients. The new approach is advantageous over existing SOC formulations in the sense that (1) it is not based on linearization around the nominal point, so the selected CVs are globally sound over the entire operation space; (2) the selected CVs are not restricted to linear measurement combinations but extend to any possible functions of the measurements, by converting the challenge into a curve-fitting task; and (3) the algorithm is numerically more robust and reliable, because it does not require second-order derivative information. Meanwhile, compared to NCO tracking with experimentally evaluated gradients, the proposed approach is easier to implement and faster in rejecting disturbances, although it does not necessarily converge to the true optimum. Two case studies were investigated to demonstrate the advantages of the proposed approach. In both cases, the losses of the cost function were significantly reduced in comparison to local SOC methods. In the evaporator case, we additionally compared the proposed method with NCO tracking through dynamic simulations. Determining a proper approximating function is critical to ensuring good optimizing control performance. In this paper, two commonly used curve-fitting methods were adopted, as an illustration, to model the gradients. Numerous regression methodologies and data-driven algorithms have been developed in the literature. These tools are powerful and provide a rich library for globally optimal CV selection to achieve satisfactory RTO performance, even for processes much more complicated than those presented in this work.

APPENDIX: MODEL EQUATIONS

dL2/dt = (F1 − F4 − F2)/20    (39)
dX2/dt = (F1X1 − F2X2)/20    (40)
dP2/dt = (F4 − F5)/4    (41)
T2 = 0.5616P2 + 0.3126X2 + 48.43    (42)
T3 = 0.507P2 + 55.0    (43)
F4 = (Q100 − 0.07F1(T2 − T1))/38.5    (44)
T100 = 0.1538P100 + 90.0    (45)
Q100 = 0.16(F1 + F3)(T100 − T2)    (46)
F100 = Q100/36.6    (47)
Q200 = 0.9576F200(T3 − T200)/(0.14F200 + 6.84)    (48)
T201 = T200 + 13.68(T3 − T200)/(0.14F200 + 6.84)    (49)
F5 = Q200/38.5    (50)

AUTHOR INFORMATION

Corresponding Author
*E-mail: [email protected].

Notes
The authors declare no competing financial interest.

ACKNOWLEDGMENTS

The first, third, and fourth authors gratefully acknowledge the financial support from the National Basic Research Program of China (973 Program, Grant No. 2012CB720500).

REFERENCES

(1) Morari, M.; Arkun, Y.; Stephanopoulos, G. Studies in the synthesis of control structures for chemical processes, Part I: Formulation of the problem. Process decomposition and the classification of the control tasks. Analysis of the optimizing control structures. AIChE J. 1980, 26 (2), 220−232.
(2) Cao, Y. Direct and indirect gradient control for static optimization. Int. J. Autom. Comput. 2005, 2 (1), 13−19.
(3) Halvorsen, I. J.; Skogestad, S.; Morud, J. C.; Alstad, V. Optimal selection of controlled variables. Ind. Eng. Chem. Res. 2003, 42 (14), 3273−3284.
(4) Srinivasan, B.; Bonvin, D.; Visser, E.; Palanki, S. Dynamic optimization of batch processes: II. Role of measurements in handling uncertainty. Comput. Chem. Eng. 2003, 27 (1), 27−44.
(5) Skogestad, S. Plantwide control: The search for the self-optimizing control structure. J. Process Control 2000, 10 (5), 487−507.
(6) Kariwala, V.; Cao, Y.; Janardhanan, S. Local self-optimizing control with average loss minimization. Ind. Eng. Chem. Res. 2008, 47 (4), 1150−1158.
(7) Kariwala, V. Optimal measurement combination for local self-optimizing control. Ind. Eng. Chem. Res. 2007, 46 (11), 3629−3634.
(8) Alstad, V.; Skogestad, S. Null space method for selecting optimal measurement combinations as controlled variables. Ind. Eng. Chem. Res. 2007, 46 (3), 846−853.
(9) Alstad, V.; Skogestad, S.; Hori, E. S. Optimal measurement combinations as controlled variables. J. Process Control 2009, 19 (1), 138−148.
(10) Cao, Y.; Kariwala, V. Bidirectional branch and bound for controlled variable selection: Part I. Principles and minimum singular value criterion. Comput. Chem. Eng. 2008, 32 (10), 2306−2319.
(11) Kariwala, V.; Cao, Y. Bidirectional branch and bound for controlled variable selection: Part II. Exact local method for self-optimizing control. Comput. Chem. Eng. 2009, 33 (8), 1402−1412.
(12) Kariwala, V.; Cao, Y. Bidirectional branch and bound for controlled variable selection: Part III. Local average loss minimization. IEEE Trans. Ind. Inf. 2010, 6 (1), 54−61.
(13) Yelchuru, R.; Skogestad, S. Convex formulations for optimal selection of controlled variables and measurements using mixed integer quadratic programming. J. Process Control 2012, 22, 995−1007.
(14) Chachuat, B.; Marchetti, A.; Bonvin, D. Process optimization via constraints adaptation. J. Process Control 2008, 18 (3−4), 244−257.
(15) Chachuat, B.; Srinivasan, B.; Bonvin, D. Adaptation strategies for real-time optimization. Comput. Chem. Eng. 2009, 33 (10), 1557−1567.
(16) Francois, G.; Srinivasan, B.; Bonvin, D. Use of measurements for enforcing the necessary conditions of optimality in the presence of constraints and uncertainty. J. Process Control 2005, 15 (6), 701−712.
(17) Srinivasan, B.; Biegler, L. T.; Bonvin, D. Tracking the necessary conditions of optimality with changing set of active constraints using a barrier-penalty function. Comput. Chem. Eng. 2008, 32 (3), 572−579.
(18) Francois, G.; Srinivasan, B.; Bonvin, D. Comparison of six implicit real-time optimization schemes. J. Eur. Syst. Autom. 2012, 46 (2−3), 291−305.
(19) Gros, S.; Srinivasan, B.; Bonvin, D. Optimizing control based on output feedback. Comput. Chem. Eng. 2009, 33, 191−198.
(20) Jaschke, J.; Skogestad, S. NCO tracking and self-optimizing control in the context of real-time optimization. J. Process Control 2011, 21 (10), 1407−1416.
(21) Edgar, T. F.; Himmelblau, D. M.; Lasdon, L. S. Optimization of Chemical Processes, 2nd ed.; McGraw-Hill: New York, 2001.
(22) Lersbamrungsuk, V.; Srinophakun, T.; Narasimhan, S.; Skogestad, S. Control structure design for optimal operation of heat exchanger networks. AIChE J. 2008, 54 (1), 150−162.
(23) Manum, H. Simple Implementation of Optimal Control of Process Systems. Ph.D. Thesis, Norwegian University of Science and Technology, Trondheim, Norway, 2010.
(24) Hu, W.; Umar, L. M.; Kariwala, V.; Xiao, G. Local self-optimizing control with input and output constraints. In Proceedings of the 18th IFAC World Congress, Milan, Italy, 2011.
(25) Bishop, C. M. Neural Networks for Pattern Recognition; Clarendon Press: Oxford, U.K., 1995.
(26) Alstad, V. Studies on Selection of Controlled Variables. Ph.D. Thesis, Norwegian University of Science and Technology, Trondheim, Norway, 2005.
(27) Newell, R. B.; Lee, P. Applied Process Control: A Case Study; Prentice-Hall: Englewood Cliffs, NJ, 1989.
