A Moving Horizon Estimation Algorithm Based on Carleman Approximation: Design and Stability Analysis

Negar Hashemian and Antonios Armaou

Ind. Eng. Chem. Res., Just Accepted Manuscript • DOI: 10.1021/acs.iecr.6b03044 • Publication Date (Web): 01 Aug 2017
A Moving Horizon Estimation Algorithm Based on Carleman Approximation: Design and Stability Analysis

Negar Hashemian† and Antonios Armaou∗,†,‡,¶

†Department of Chemical Engineering, The Pennsylvania State University, University Park, PA 16802
‡Department of Mechanical Engineering, Wenzhou University, Wenzhou, China
¶Corresponding author E-mail:
[email protected]

Abstract

Moving Horizon Estimation (MHE) is an optimization-based technique for estimating the unmeasurable state variables of a nonlinear dynamic system subject to noise in transition and measurement. One advantage of MHE over the Extended Kalman Filter, the alternative approach in this area, is that it accounts for physical constraints in its formulation. To offer this feature, however, MHE must solve a constrained nonlinear dynamic optimization problem, which slows down the estimation process. In this paper, we introduce and employ the Carleman approximation method in the MHE design to accelerate the solution of the optimization problem. The Carleman method approximates the nonlinear system with a polynomial system at a desired accuracy level and recasts it in a bilinear form. With this approximation, the Karush-Kuhn-Tucker (KKT) matrix required to solve the optimization problem becomes analytically available. Additionally, we perform a stability analysis for the proposed MHE design. As a result
of this analysis, we derive a criterion for choosing the order of the Carleman approximation procedure that ensures convergence of the scheme. Finally, simulation results are included that show a significant reduction in the estimation time when the proposed method is employed.
1 Introduction
In control engineering, some state variables, which describe the mathematical status of a dynamic system, are not easily measurable. In some cases these states even lack a physical interpretation, such as the moments of particle distributions (e.g., in coagulation and crystallization processes). However, knowledge of these state variables is critical in order to effectively control the system and reach the desired output performance. The literature offers many observer design techniques for estimating the unknown system states from measurements of the input and output trajectories. 1,2 These techniques range from the Luenberger observer for linear systems to more complex designs for nonlinear systems, such as the extended Luenberger observer, 3–5 linearization by output injection, 6 the high-gain observer 7 and reduced-order observers. 8–10 State estimation becomes even harder when the measurements are corrupted with noise.

Moving Horizon Estimation (MHE) is a state estimation method that accounts for noise and/or disturbances on the system output and system transition. 11 MHE employs an optimization method to find the most probable current state variables given the past and current output measurements. Although this method has many advantages over the alternatives, it is computationally demanding. In this paper, to deal with this issue, we propose a new approach to MHE design and investigate its stability requirements.

The idea of the MHE method originates from the receding horizon philosophy, which was introduced for Model Predictive Control (MPC) design. 12–16 As a result, the history of the evolution of MHE is intertwined with that of MPC. Both designs are successful in handling physical constraints on inputs and states. In order to offer this feature, the two methods formulate a dynamic constrained nonlinear optimization problem. This nonlinear
optimization does not usually have an analytical solution. In effect, simplicity is sacrificed to gain more flexibility in these two methods. However, the advantages of the receding horizon philosophy have motivated many researchers to look for new approaches that accelerate the scheme. In MHE design, for example, to bound the dimensions of the optimization problem, Rao et al. summarize the past data in a function called the arrival cost. 14 The arrival cost is similar to the cost-to-go concept in the backward dynamic optimization context. Also, Zavala et al. present an algorithm in an attempt to speed up MHE. 17 This algorithm, initially exploited for MPC design in Ref. 18, uses the coefficient matrix of the optimality conditions, called the KKT matrix, obtained at the previous step to estimate the current state.

Researchers have also been studying MHE from a different yet related aspect, i.e., stability. Rao et al. proved the stability of MHE for a constrained linear system using the Kalman covariance matrix. 14 Later, Rao et al. presented a stability proof for constrained nonlinear problems assuming boundedness of the noise signals. 16 These proofs assume the corresponding optimization problem is ideal, in the sense that the exact global minimum of the cost function can be found online. Alessandri et al. accounted for the existence of an error in the solution of the optimization problem at every time step and proved that the estimation error remains bounded. 19 Also, Zavala proposed a method called the approximate MHE algorithm, which minimizes the cost function in between the sampling times. 20 The solution can then be instantaneously updated at the next sampling time. This correction is based on an approximation of the MHE scheme, which may cause an error; however, Zavala developed a criterion to guarantee the stability of MHE with the proposed structure.
Carleman introduced a method to represent a finite dimensional system by an infinite dimensional linear system, commonly referred to as Carleman linearization. Thirty years later, it gained popularity as an approximation method to solve sets of nonlinear ordinary differential equations (ODE). 21 The Carleman method approximates the nonlinear functions of the dynamic system by their Taylor expansions around a desired point in the state space. It
then augments the original state vector by defining new states that represent the higher-order deviation terms away from the desired point. This results in a bilinear dynamic system of a higher dimension that approximates the original nonlinear system more precisely than classic linearization. Mare and Dona employed the Carleman method to approximate the original nonlinear system by linear models. 22 Armaou and Ataei used the Carleman method to approximate nonlinear systems by a bilinear form. 23 Fang and Armaou also performed a sensitivity analysis for a general cost function in an MPC design. 24 In previous results, we employed the Carleman approximation and a sensitivity analysis to design an MHE for a system subject to an unmodeled disturbance signal. 25–27 However, those papers do not discuss the stability of the proposed MHE or the upper bound of the estimation error. This manuscript extends our previous work by considering the effects of noise signals. The Carleman approximation introduces a dynamic error which affects the optimization computations. Therefore, this article provides a stability proof for the combined MHE and Carleman approximation (CMHE) scheme. This proof also gives intuition on how to tune the MHE parameters. The estimation error can be reduced by using a higher order of Carleman approximation; in this work, using the ideas in Refs. 19 and 20, we prove the error remains bounded in a desired range when a proper order of Carleman approximation is used. We should emphasize that the goal of the proof presented in this manuscript is different from that of the above-mentioned works.

MHE and the Extended Kalman Filter (EKF) are both widely used in the estimation world, and discussions on the pros and cons of EKF and MHE are still ongoing. 11,28 In fact, EKF is a simple form of MHE that considers only the latest measurement data. Moreover, EKF linearizes the nonlinear system in the neighborhood of the estimated states. 29,30 Our interpretation is that EKF and MHE stand at the two ends of a range of potentially existing estimators. At one end the problem is simplified to obtain an analytical solution, while the other end treats the problem more meticulously by considering the entire history of the system, the constraints and the nonlinearities.
Motivated by the above issues, in this paper we focus our attention on a "middle ground" between the extremes of the aforementioned range of estimator types. The estimators we are interested in use MHE's structure, but are required to have a constant horizon. Moreover, similar to the EKF design, these estimators approximate nonlinearities, but they choose a polynomial approximation based on the Carleman method over the first-order Taylor expansion used for linearization. In this way, the proposed estimators, which may still be called MHE, strike a balance between accuracy and simplicity. This balance can be achieved by choosing a proper order of Taylor expansion, based on the nature of the system and the performance requirements. The truncated expansion is then used to derive the approximate bilinear system employed in the MHE formulation. In the next section, we represent a nonlinear system in a bilinear form by employing the Carleman approximation. Further, we introduce the MHE method using these results in section 3, and present a stability proof for this approach in section 4. Section 5 provides an analytical expression for the derivatives of the cost function defined in section 3, which are required to obtain the KKT matrix. Finally, section 6 gives a practical example and compares the simulation run-times required for a regular MHE and the CMHE presented in this paper.
2 Carleman Approximation
Consider the following nonlinear system:

$$\dot{x} = f(x) + g(x)u + w \qquad (1)$$
$$y = h(x) + \nu$$

where $x \in \mathbb{R}^n$, $u \in \mathbb{R}^m$ and $y \in \mathbb{R}^p$ denote the system state, input and output vectors, respectively. Also, $w = [w_1\; w_2\; \cdots\; w_n]^T \in \mathbb{R}^n$ and $\nu \in \mathbb{R}^p$ are unknown noise vectors on the system states and output measurements. We assume the vector functions $f$, $h$ and $g$ are
known and locally analytic. To facilitate the formulation we introduce $\Gamma_l \in \mathbb{R}^{n\times 1}$, a vector with zero elements except the $l$th element, which is equal to unity. Thus the noise vector is expanded into $n$ independent noise terms $\sum_{l=1}^{n} \Gamma_l w_l$, and system (1) can be rewritten as:

$$\dot{x} = f(x) + \sum_{j=1}^{m} g_j(x)u_j + \sum_{l=1}^{n} \Gamma_l w_l \qquad (2)$$
$$y = h(x) + \nu$$

Employing the Carleman approximation method, explained in detail in Appendix A, the system is represented in the following bilinear format with extended state variables:
$$\dot{x}_\otimes \simeq A x_\otimes + A_r + \sum_{j=1}^{m} \left[B_j x_\otimes u_j + B_{j0} u_j\right] + \sum_{l=1}^{n} \left[D_l x_\otimes w_l + D_{l0} w_l\right]$$
$$y \simeq C x_\otimes + \nu$$

This formulation is then used in the MHE structure in the following section.
3 Moving Horizon Estimation
MHE uses the history of the output trajectory to update the states estimated in the past and obtain the most probable values for the current states. However, considering all the past data leads to an optimization problem whose dimension grows very fast. Consequently, MHE restricts the state estimation at the sampling times to a sliding time window, using the
following cost function:
$$J(\hat{x}_0, \{\hat{w}_i\}_{i=1}^{N}) = \Lambda(\hat{x}_0) + \sum_{i=1}^{N} L_i(\hat{w}_i, \hat{\nu}_i)$$

subject to:

$$\hat{w}_i \in \mathcal{D}, \quad \hat{x}_i \in \mathcal{X} \quad \text{and} \quad \hat{\nu}_i \in \mathcal{V} \qquad (3)$$

where $\hat{\nu}_i = y_i - \hat{y}_i$, $L_i(\hat{w}_i, \hat{\nu}_i)$ is the stage penalty function, and $\Lambda(\hat{x}_0)$ summarizes the past data beyond the sliding time window. Additionally, $\mathcal{D}$, $\mathcal{X}$ and $\mathcal{V}$ are assumed to be compact sets. In contrast to most of the efforts in this field, 16,19 this manuscript focuses on continuous models, which are more realistic for chemical engineering processes. However, MHE is a discrete method, and a transition from the continuous model to a discrete-time model is necessary. To obtain this discrete model, we solve the approximated system.

Assumption 1. The noise signals $w_i$ and $\nu_i$ remain constant in the time interval between any two consecutive sampling times, $[t_i, t_{i+1})$. Also, an uncertain prior knowledge about the initial state, $x_0$, is available, which we here denote by $\bar{x}_0(t=0)$.

The above assumption is not restrictive. If the original noise signal, denoted by $w_j$, is a continuous function of time, it can be discretized at every sampling time. This article approximates this signal with a piecewise constant signal. More specifically, to solve ODE (A-4) over the $i$th sampling time interval, we require the analytical solution to the following integral:
$$\int_0^{T} \exp(\tilde{A}(t-\tau))\, D_{j0}\, w_j(\tau)\, d\tau = \hat{w}_{i,j} \int_0^{T} \exp(\tilde{A}(t-\tau))\, D_{j0}\, d\tau = \hat{w}_{i,j}\, \tilde{A}^{-1}\left(\exp(\Delta T\, \tilde{A}) - I\right) D_{j0} \qquad (4)$$
where the constant value $\hat{w}_{i,j}$ reflects the weighted average effect of the noise signal $w_j(t)$ during the $i$th sampling time interval. Note that this assumption should be considered when determining the constraints on the noise distribution using Eq. (4).
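The closed form in Eq. (4) can be checked numerically. The sketch below uses small illustrative matrices ($\tilde{A}$, $D_{j0}$, the interval length and the noise level are arbitrary choices, not values from the paper) and compares the closed-form expression against a direct trapezoidal quadrature of the integral.

```python
import numpy as np
from scipy.linalg import expm

# Numerical check of the closed form in Eq. (4): with the noise held constant
# over one sampling interval, the integral collapses to
#   w_hat * A^{-1} (exp(dT*A) - I) D_j0.
# A_tilde, D_j0, dT and w_hat are small illustrative values, not from the paper.
A_tilde = np.array([[-1.0, 0.5],
                    [0.0, -2.0]])
D_j0 = np.array([1.0, 0.3])
dT, w_hat = 0.1, 0.7

closed_form = w_hat * np.linalg.solve(
    A_tilde, (expm(dT * A_tilde) - np.eye(2)) @ D_j0)

# Trapezoidal quadrature of  w_hat * int_0^dT exp(A*(dT - tau)) D_j0 dtau.
taus = np.linspace(0.0, dT, 2001)
vals = np.array([expm(A_tilde * (dT - t)) @ D_j0 for t in taus])
quadrature = w_hat * ((vals[0] + vals[-1]) / 2 + vals[1:-1].sum(axis=0)) * (dT / 2000)
```

The two results agree to quadrature accuracy, which is the property Assumption 1 exploits to obtain a discrete-time model analytically.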
In this paper, we define the functions $L_i$ and $\Lambda$ in the following quadratic form:

$$L_i = \|\hat{\nu}_i\|_Q^2 + \|\hat{w}_i\|_R^2, \qquad \Lambda = \|\bar{x}_0 - \hat{x}_0\|_{\Pi_0}^2$$

where $\|z\|_M^2 = z^T M z$ and $Q$, $R$ and $\Pi_0^{-1}$ are tuning parameters with a symmetric positive-definite structure. In this formulation, $L_i$ penalizes the estimation error of the output, namely $\hat{\nu}_i$, and the deviation of the estimated noises from their average, which is assumed to be zero. Moreover, $\Lambda$, also called the arrival cost, accounts for the deviation of the initial values from the prior knowledge, $\bar{x}_0 - \hat{x}_0$. We use the estimated state $\hat{x}_1$ at $t-1$ as the prior knowledge, namely $\bar{x}_0$. Note that $\hat{x}_1$ at $t-1$ will be out of the sliding time window at $t$, i.e., the next sampling time. For control purposes, the estimate signals are required to be smooth. As a result, the weight of the deviation from the prior knowledge should be large enough in comparison with the stage penalties to avoid fluctuations in the closed-loop system. Then, the system dynamics over the $i$th interval are given by:
$$\dot{x}_{i,\otimes} = A x_{i,\otimes} + A_r + \sum_{j=1}^{m} \left[B_j x_\otimes u_j + B_{j0} u_j\right] + \sum_{l=1}^{n} \left[D_l x_\otimes w_{i,l} + D_{l0} w_{i,l}\right] \qquad (5)$$
$$y_i = C x_{i,\otimes} + \nu_i, \qquad x_{i,\otimes}(i=0) = x_{0,\otimes}$$

The proposed CMHE, then, estimates $x_{0,\otimes}$ and $\{w_i\}_{i=1}^{N}$ by solving the dynamic optimization problem in (3). Consequently, the estimated system states, $\hat{x}_i$, and measurement noise, $\hat{\nu}_i$, are obtained:

$$\dot{\hat{x}}_{i,\otimes} = A \hat{x}_{i,\otimes} + A_r + \sum_{j=1}^{m} \left[B_j \hat{x}_\otimes u_j + B_{j0} u_j\right] + \sum_{l=1}^{n} \left[D_l \hat{x}_\otimes \hat{w}_{i,l} + D_{l0} \hat{w}_{i,l}\right] \qquad (6)$$
$$\hat{y}_i = C \hat{x}_{i,\otimes} + \hat{\nu}_i$$

In the following section, we derive the conditions required for the estimation error using
Carleman approximation to be bounded.
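To make the structure of problem (3) concrete, here is a minimal MHE sketch on a hypothetical scalar linear system $x_{i+1} = 0.9x_i + w_i$, $y_i = x_i + v_i$ (not the paper's model), with quadratic arrival and stage costs and decision variables $(\hat{x}_0, \{\hat{w}_i\})$. A general-purpose optimizer stands in for the analytical KKT-based solution developed later in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Toy MHE in the spirit of Eqs. (3)-(6) on a hypothetical scalar linear system
# x_{i+1} = 0.9*x_i + w_i,  y_i = x_i + v_i  (illustrative, not from the paper).
rng = np.random.default_rng(0)
N, a = 10, 0.9
Q, R, Pi0 = 10.0, 1.0, 1.0      # output, noise and arrival-cost weights

# Simulate the "true" system to generate a window of measurements.
x_true = [1.0]
for _ in range(N):
    x_true.append(a * x_true[-1] + 0.05 * rng.standard_normal())
y = np.array(x_true[1:]) + 0.05 * rng.standard_normal(N)

x_bar0 = 0.8                     # uncertain prior knowledge of the initial state

def rollout(z):
    """Propagate the model from x0_hat = z[0] with noise sequence w = z[1:]."""
    xs = [z[0]]
    for wi in z[1:]:
        xs.append(a * xs[-1] + wi)
    return np.array(xs[1:])

def cost(z):
    """Arrival cost + stage penalties, as in the quadratic form above."""
    v_hat = y - rollout(z)       # estimated measurement noise
    return Pi0 * (z[0] - x_bar0) ** 2 + R * np.sum(z[1:] ** 2) + Q * np.sum(v_hat ** 2)

sol = minimize(cost, np.zeros(N + 1))   # decision vector: (x0_hat, w_1..w_N)
x_hat = rollout(sol.x)                  # estimated state trajectory
```

Because the toy model is linear, the cost is convex quadratic and the optimizer recovers a trajectory that tracks the true states closely; the paper's contribution is making the analogous bilinear problem cheap by supplying the KKT matrix analytically.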
4 Moving Horizon Estimation Stability
In the Carleman approximation method, truncating the infinite bilinear system to a finite dimension causes a dynamic error. This error may deteriorate the optimization procedure, when the order of the finite system is not large enough. In this section, we obtain a bound on the optimization error caused by the approximation in the Carleman method.
4.1 Carleman Approximation Error
In this section, to find the dynamic error, we denote the system states and the states of the Carleman approximation by $x$ and $x_c$, respectively. The exact dynamics of the first and higher orders of the system states are given by:

$$\frac{dx^{[1]}}{dt} = e_1 + a_{1,0} + a_{1,1}x + a_{1,2}x^{[2]} + \dots + a_{1,\eta}x^{[\eta]} + \sum_{j=1}^{m}\left(b_{j,1,0} + b_{j,1,1}x + b_{j,1,2}x^{[2]} + \dots + b_{j,1,\eta}x^{[\eta]}\right)u_j + \sum_{l=1}^{n} d_{l,1}\, w_l$$

$$\frac{dx^{[2]}}{dt} = e_1\otimes x + x\otimes e_1 + a_{2,0}x + a_{2,1}x^{[2]} + a_{2,2}x^{[3]} + \dots + a_{2,\eta-1}x^{[\eta]} + a_{2,\eta}x^{[\eta+1]} + \sum_{j=1}^{m}\left(b_{j,2,0}x + b_{j,2,1}x^{[2]} + b_{j,2,2}x^{[3]} + \dots + b_{j,2,\eta}x^{[\eta+1]}\right)u_j + \sum_{l=1}^{n} d_{l,2}\, x\, w_l$$

$$\vdots$$

$$\frac{dx^{[\eta]}}{dt} = e_1\otimes x^{[\eta-1]} + x\otimes e_1\otimes x^{[\eta-2]} + \dots + x^{[\eta-1]}\otimes e_1 + a_{\eta,0}x^{[\eta-1]} + a_{\eta,1}x^{[\eta]} + a_{\eta,2}x^{[\eta+1]} + \dots + a_{\eta,\eta}x^{[2\eta-1]} + \sum_{j=1}^{m}\left(b_{j,\eta,0}x^{[\eta-1]} + b_{j,\eta,1}x^{[\eta]} + b_{j,\eta,2}x^{[\eta+1]} + \dots + b_{j,\eta,\eta}x^{[2\eta-1]}\right)u_j + \sum_{l=1}^{n} d_{l,\eta}\, x^{[\eta-1]} w_l \qquad (7)$$
where $e_1$ is the remainder of the Taylor series in Eq. (A-1). This remainder can be expressed by Lagrange's formula:

$$e_1 = \frac{1}{(\eta+1)!}\left(\partial f_{[\eta+1]}\big|_{x=z_0} + \sum_{j=1}^{m} \partial g_{j,[\eta+1]}\big|_{x=z_j}\, u_j\right)x^{[\eta+1]}$$

for some $z_k \in (0, x)$, $k = 0, 1, 2, \dots$. Therefore,

$$\frac{d\xi_1}{dt} = e_1 + a_{1,0} + a_{1,1}\xi_1 + a_{1,2}\xi_2 + \dots + a_{1,\eta}\xi_\eta + \sum_{j=1}^{m}\left(b_{j,1,0} + b_{j,1,1}\xi_1 + b_{j,1,2}\xi_2 + \dots + b_{j,1,\eta}\xi_\eta\right)u_j + \sum_{l=1}^{n} d_{l,1}\, w_l$$

$$\frac{d\xi_2}{dt} = e_1\otimes x + x\otimes e_1 + a_{2,0}\xi_1 + a_{2,1}\xi_2 + a_{2,2}\xi_3 + \dots + a_{2,\eta-1}\xi_\eta + a_{2,\eta}x^{[\eta+1]} + \sum_{j=1}^{m}\left(b_{j,2,0}x + b_{j,2,1}x^{[2]} + b_{j,2,2}x^{[3]} + \dots + b_{j,2,\eta}x^{[\eta+1]}\right)u_j + \sum_{l=1}^{n} d_{l,2}\,\xi_1 w_l$$

$$\vdots \qquad (8)$$

$$\frac{d\xi_\eta}{dt} = e_1\otimes x^{[\eta-1]} + x\otimes e_1\otimes x^{[\eta-2]} + \dots + x^{[\eta-1]}\otimes e_1 + a_{\eta,0}\xi_{\eta-1} + a_{\eta,1}\xi_\eta + a_{\eta,2}x^{[\eta+1]} + \dots + a_{\eta,\eta}x^{[2\eta-1]} + \sum_{j=1}^{m}\left(b_{j,\eta,0}\xi_{\eta-1} + b_{j,\eta,1}\xi_\eta + b_{j,\eta,2}x^{[\eta+1]} + \dots + b_{j,\eta,\eta}x^{[2\eta-1]}\right)u_j + \sum_{l=1}^{n} d_{l,\eta}\,\xi_{\eta-1} w_l$$
where $\xi_l = x^{[l]} - x_c^{[l]}$. We present the ordinary differential equation set (8) in a compact form:

$$\dot{\xi} = A\xi + A_r + \sum_{j=1}^{m} \left[B_j \xi + B_{j0}\right] u_j + \sum_{l=1}^{n} \left[D_l \xi w_l + D_{l0} w_l\right] + e(t) \qquad (9)$$

where $e \in \mathbb{R}^r$ and the $l$th element of $e$, denoted by $e_l$, is:
$$e_l = \sum_{\alpha=2}^{l}\left(a_{l,\eta-l+\alpha}\, x^{[\eta+\alpha-1]} + \sum_{j=1}^{m} b_{j,l,\eta-l+\alpha}\, x^{[\eta+\alpha-1]}\right) + \sum_{\alpha=0}^{l-1} x^{[\alpha]}\otimes e_1\otimes x^{[l-1-\alpha]}$$
As a result, $\xi^i$ at the $i$th sampling time is given by:

$$\xi^i = \int_0^{i\Delta T} \exp\left(\bar{A}_i(t-\tau)\right)\left(A_r + \sum_{j=1}^{m} B_{j0} u_j + \sum_{j=1}^{n} D_{j0} w_j + e(t)\right) d\tau \qquad (10)$$

where $\bar{A}_i = A + \sum_{j=1}^{m} B_j u_j + \sum_{j=1}^{n} D_j w_j$ and $\Delta T = t_i - t_{i-1}$. This equation in conjunction with Young's inequality provides an upper bound on the absolute value of the dynamic error:
$$\|\xi^i\|_\infty \le \left\|\exp(i\bar{A}_i\Delta T)\right\| \cdot \left\|A_r + \sum_{j=1}^{m} B_{j0} u_j + \sum_{j=1}^{n} D_{j0} w_j + e(t)\right\| \le \left\|\exp(i\bar{A}_i\Delta T)\right\| e_m \qquad (11)$$
where $\|\cdot\|$ and $\|\cdot\|_\infty$ return the Euclidean and infinity norms, respectively. Since we assumed $x$ lies in a compact set and $e_1$ is the Taylor expansion error, these two signals are bounded. Therefore, we represent the upper bound of $\|A_r + \sum_{j=1}^{m} B_{j0} u_{i,j} + \sum_{j=1}^{n} D_{j0} w_{i,j} + e(t)\|$ by $e_m$. As a result, there exists a constant scalar, denoted by $\epsilon$, such that:

$$|x_i - x_{c,i}| \le \epsilon$$

where $\epsilon = \|\exp(i\bar{A}_i\Delta T)\|\, e_m$, and $x_i$ and $x_{c,i}$ are the system states and the states approximated by the Carleman method at the $i$th sampling time. In the next part, we use the value of $\epsilon$ to find a bound on the optimization error at each sampling time.
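The message of this subsection can be checked empirically on a toy problem: for a hypothetical scalar ODE $\dot{x} = -x + 0.1x^2$ (an illustrative choice, not from the paper), the gap between the exact solution and its truncated Carleman approximation shrinks as the truncation order $\eta$ grows.

```python
import numpy as np
from scipy.linalg import expm

# Empirical check of the truncation-error behavior on the hypothetical scalar
# ODE x' = a*x + b*x^2: the Carleman approximation error falls as eta grows.
a, b, x0, T = -1.0, 0.1, 0.5, 1.0
# Exact solution of this Bernoulli ODE via the substitution u = 1/x.
exact = 1.0 / ((1.0 / x0 + b / a) * np.exp(-a * T) - b / a)

def carleman_error(eta):
    """|x(T) - x_c(T)| for the order-eta truncated Carleman system."""
    A = np.zeros((eta, eta))
    for k in range(1, eta + 1):
        A[k - 1, k - 1] = k * a
        if k < eta:
            A[k - 1, k] = k * b      # the x^(eta+1) coupling is dropped at k = eta
    X0 = np.array([x0 ** k for k in range(1, eta + 1)])
    return abs((expm(A * T) @ X0)[0] - exact)

errors = [carleman_error(eta) for eta in (1, 2, 3, 4)]
```

For this contractive system the error decreases with each added order, mirroring the role of $\eta$ in the bound $\epsilon$ above.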
4.2 Optimization Error and MHE Algorithm Stability
Using the Carleman approximation method in CMHE causes an error in the optimization procedure. In this part, we investigate the maximum error in the estimated optimal cost function by employing the following lemma.
Lemma 1. Consider two different bounded functions $f_1$ and $f_2$ defined on a domain $D$, with:

$$x_1 \in \arg\min_x f_1(x), \qquad x^* \in \arg\min_x \left[f_1(x) + f_2(x)\right] \qquad (12)$$

Then, $f_1(x^*) - f_1(x_1) \le 2\max_{x\in D} \|f_2(x)\|$.

Proof. We use the following inequality:

$$f_1(x^*) + f_2(x^*) \le f_1(x_1) + f_2(x_1) \qquad (13)$$

By rearranging the above inequality, we obtain:

$$f_1(x^*) - f_1(x_1) \le f_2(x_1) - f_2(x^*) \le 2\max_{x\in D} \|f_2(x)\|$$
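Lemma 1 can be illustrated numerically with arbitrary toy functions ($f_1$ and $f_2$ below are hypothetical choices, not from the paper): minimizing the perturbed objective $f_1 + f_2$ degrades the attained $f_1$-value by at most $2\max|f_2|$.

```python
import numpy as np

# Numeric illustration of Lemma 1 on hypothetical toy functions: the f1-value
# at the minimizer of the perturbed objective f1 + f2 exceeds the true minimum
# of f1 by at most 2 * max|f2|.
xs = np.linspace(-3.0, 3.0, 6001)       # discretized domain D
f1 = (xs - 1.0) ** 2                    # function of interest
f2 = 0.2 * np.sin(5.0 * xs)             # bounded perturbation (|f2| <= 0.2)

i1 = np.argmin(f1)                      # minimizer of f1 alone
i_star = np.argmin(f1 + f2)             # minimizer of the perturbed objective

gap = f1[i_star] - f1[i1]               # degradation of the f1-value
bound = 2.0 * np.max(np.abs(f2))        # Lemma 1 bound
```

This is exactly the mechanism used next: the Carleman error plays the role of $f_2$, perturbing the MHE objective by a bounded amount.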
Due to the Carleman method, the model used in our CMHE formulation has a bounded error, $\delta x$, in approximating the true state variables. This $\delta x$, in turn, leads to a corresponding error in the calculation of the objective function (3). The objective function including this calculation error is given by $J'$:

$$J' = \|x_0 - \bar{x}_0\|_{\Pi_0}^2 + \sum_{i=1}^{N} \|w_i\|_R^2 + \sum_{i=1}^{N} \|y_i - h(x_i + \delta x_i)\|_Q^2 = J + \sum_{i=1}^{N}\left(\|y_i - h(x_i + \delta x_i)\|_Q^2 - \|y_i - h(x_i)\|_Q^2\right) \qquad (14)$$
The above functions are bounded. Therefore, using Lemma 1, we obtain:

$$J(\hat{x}_0, \hat{w}) - J^*_{\mathrm{real}} \le 2\max_{\delta x}\left\|\sum_{i=1}^{N} \|y_i - h(x_i + \delta x_i)\|_Q^2 - \|y_i - h(x_i)\|_Q^2\right\| \le 2\max_{\delta x}\sum_{i=1}^{N}\left\|\left(h(x_i) - h(x_i + \delta x_i)\right)^T Q\left(2y_i - h(x_i + \delta x_i) - h(x_i)\right)\right\| \qquad (15)$$
where $J^*_{\mathrm{real}}$ is the optimal cost function and $J(\hat{x}_0, \hat{w})$ denotes the cost function value for the $\hat{x}_0$ and $\hat{w}$ obtained by minimizing the cost function $J'$ in Eq. (14). Since we assumed $h$ is an analytic function, it is Lipschitz continuous as well:

$$\|h(x_i + \delta x) - h(x_i)\| \le K\|\delta x\| \le K\|\epsilon\|$$

Incorporating the Lipschitz continuity property into Eq. (15) yields:

$$J(\hat{x}_0, \hat{w}) - J^*_{\mathrm{real}} \le N K \|\epsilon\|\, \|Q\|\, \|2y_i - h(x_i + \delta x) - h(x_i)\| \le N K \|\epsilon\|\, \|Q\|\, \|\hat{\nu}_i + \nu_i\| \qquad (16)$$

Since $\hat{\nu}_i, \nu_i \in \mathcal{V}$, from inequality (16) we conclude there exists a scalar $\varepsilon$ such that:

$$J(\hat{x}_0, \hat{w}) - J^*_{\mathrm{real}} \le \varepsilon \qquad (17)$$
So far, we have derived an upper bound on the optimization error caused by the Carleman approximation. In the rest of this section, we focus on the stability of the presented CMHE design. First, we need to define the following function:

$$F(x_0, \{w_i\}_{i=1}^{N}) = \begin{bmatrix} h(x_{t-N}) \\ h \circ F^{w_{t-N}}(x_{t-N}) \\ \vdots \\ h \circ F^{w_{t-1}} \circ \dots \circ F^{w_{t-N}}(x_{t-N}) \end{bmatrix} \qquad (18)$$
where $F^{w_i}(x_i) = F(x_i, w_i)$ is the exact solution for $x_i$, assuming $x_0$ to be the initial point, and $h \circ F(x) = h(F(x))$. Additionally, the subscript $t-N$ refers to $N$ time intervals before time $t$. This definition is useful for presenting the cost function in a compact form. Expressing the stability conditions and their proof requires some mathematical terms to be defined:

Definition 1. A function $\varphi(x): \mathbb{R}^+ \mapsto \mathbb{R}^+$ is a $\mathcal{K}$-function if it is continuous, strictly
increasing and $\varphi(0) = 0$.

Definition 2. The nonlinear system is observable in $N+1$ steps if for every $x_1, x_2 \in \mathcal{X}$ there exists a $\mathcal{K}$-function $\varphi$ such that: 16,19

$$\varphi(\|x_1 - x_2\|^2) \le \|F(x_1, \bar{0}) - F(x_2, \bar{0})\|^2 \qquad (19)$$

where $\bar{0}$ is the zero matrix with the same dimension as $\{w_i\}_{i=1}^{N}$.

Let $\Delta F = J_F(w_1, w_2, \dots, w_N) \in \mathbb{R}^{(N+1)\times(Nn)}$ be the Jacobian matrix of $F$ with respect to the noise sequence $\{w_i\}_{i=1}^{N}$ at the origin. Note, $w_{i=1}^{N} = [w_1^T\; w_2^T\; \cdots\; w_N^T]^T \in \mathbb{R}^{(Nn)\times 1}$, $w_i \in \mathbb{R}^{n\times 1}$ and $\Delta F_{p,q} = \left[\frac{\partial F_p}{\partial w_{q,1}}\; \frac{\partial F_p}{\partial w_{q,2}}\; \cdots\; \frac{\partial F_p}{\partial w_{q,n}}\right]$. Using this notation, we can write:

$$F(x, \{w_i\}_{i=1}^{N}) - F(x, \bar{0}) = \Delta F\, w_{i=1}^{N} + \psi \qquad (20)$$

where $\psi$ is the remainder of the linearization approximation and $\|\Delta F\|$ denotes the operator norm of $\Delta F$:

$$\|\Delta F\| = \min\{c \ge 0 : \|\Delta F\, w_{i=1}^{N}\| \le c\, \|w_{i=1}^{N}\|, \;\; \forall x \in \mathcal{X},\; w \in \mathcal{W}\} \qquad (21)$$
Theorem 1. Assume the following conditions are satisfied by the nonlinear system (1):

1. $\mathcal{W}$ and $\mathcal{V}$ are compact sets containing the origin.

2. For every $\hat{w} \in \mathcal{W}$ and $\hat{x}_0 \in \mathcal{X}$, the state trajectory remains in the compact set $\mathcal{X}$.

3. System (1) is $\mathcal{X}$-observable in $N+1$ steps with a $\mathcal{K}$-function $\varphi$, where

$$\delta = \inf_{x_1, x_2 \in \mathcal{X}} \frac{\varphi(\|x_1 - x_2\|^2)}{\|x_1 - x_2\|^2} > 0 \qquad (22)$$

Then, the error of the estimation obtained with the objective function (3) is bounded.
Proof. Let $x_0$ and $\{w_i\}_{i=1}^{N}$ denote the true state vector at time $t-N$ and the true noise sequence, respectively. Then every $\bar{x}_0 \in \mathcal{X}$ and $\{w_i\}_{i=1}^{N} \subset \mathcal{D}$ satisfy the following inequality:

$$J^*_{\mathrm{real}} \le \|x_0 - \bar{x}_0\|_{\Pi_0}^2 + \|y_{i=1}^{N} - F(x_0)\|_Q^2 + \sum_{i=1}^{N} \|w_i\|_R^2 \le \lambda_{\Pi_0,2}\|x_0 - \bar{x}_0\|^2 + N(r_\nu^2 \lambda_{Q,2} + r_w^2 \lambda_{R,2}) \qquad (23)$$

where $y_{i=1}^{N} = [y_1^T\; y_2^T\; \dots\; y_N^T]^T$, $r_\nu = \max_{\nu\in\mathcal{V}}\|\nu\|$ and $r_w = \max_{w\in\mathcal{W}}\|w\|$. Also, $\lambda_{A,1}$ and $\lambda_{A,2}$ are the minimum and maximum eigenvalues of a square matrix $A$, respectively. By incorporating (23) in (17), we obtain the upper bound for the cost function:

$$J(\hat{x}_0, \hat{w}) \le \lambda_{\Pi_0,2}\|x_0 - \bar{x}_0\|^2 + N(r_\nu^2 \lambda_{Q,2} + r_w^2 \lambda_{R,2}) + \varepsilon \qquad (24)$$

For two arbitrary vectors $a, b$ of the same dimension, the following inequality holds:

$$\|a\|^2 \ge \frac{1}{2}\|a + b\|^2 - \|b\|^2 \qquad (25)$$
To obtain a lower bound on the cost function, we rewrite the second term of the objective function using Eq. (25) as follows:

$$\|y_{i=1}^{N} - F(\hat{x}_0, \{\hat{w}_i\}_{i=1}^{N})\|_Q^2 \ge \lambda_{Q,1}\left(\frac{1}{2}\|F(\hat{x}_0, \{\hat{w}_i\}_{i=1}^{N}) - F(x_0, \{w_i\}_{i=1}^{N})\|^2 - \|y_{i=1}^{N} - F(x_0, \{w_i\}_{i=1}^{N})\|^2\right) \qquad (26)$$
By using Eqs. (20)-(21), we derive:

$$\begin{aligned}
\|F(x_0, \{w_i\}_{i=1}^{N}) - F(\hat{x}_0, \{\hat{w}_i\}_{i=1}^{N})\|^2 &= \left\|F(x_0, \bar{0}) + \Delta F(x_0)\{w_i\}_{i=1}^{N} + \psi(x_0, \{w_i\}_{i=1}^{N}) - \left(F(\hat{x}_0, \bar{0}) + \Delta F(\hat{x}_0)\{\hat{w}_i\}_{i=1}^{N} + \psi(\hat{x}_0, \{\hat{w}_i\}_{i=1}^{N})\right)\right\|^2 \\
&\ge \frac{1}{2}\|F(x_0, \bar{0}) - F(\hat{x}_0, \bar{0})\|^2 - \left\|\left(\Delta F(x_0)\{w_i\}_{i=1}^{N} - \Delta F(\hat{x}_0)\{\hat{w}_i\}_{i=1}^{N}\right) + \left(\psi(x_0, \{w_i\}_{i=1}^{N}) - \psi(\hat{x}_0, \{\hat{w}_i\}_{i=1}^{N})\right)\right\|^2 \\
&\ge \frac{1}{2}\|F(x_0, \bar{0}) - F(\hat{x}_0, \bar{0})\|^2 - 8\|\Delta F\|^2 r_w^2 - 2\|\psi(x_0, \{w_i\}_{i=1}^{N})\|^2 - 2\|\psi(\hat{x}_0, \{\hat{w}_i\}_{i=1}^{N})\|^2
\end{aligned} \qquad (27)$$

and then, employing the observability condition in the above inequality, we obtain:

$$\|F(x_0, \{w_i\}_{i=1}^{N}) - F(\hat{x}_0, \{\hat{w}_i\}_{i=1}^{N})\|^2 \ge \frac{1}{2}\varphi(\|x_0 - \hat{x}_0\|^2) - 8\|\Delta F\|^2 r_w^2 - 2\|\psi(x_0, \{w_i\}_{i=1}^{N})\|^2 - 2\|\psi(\hat{x}_0, \{\hat{w}_i\}_{i=1}^{N})\|^2 \qquad (28)$$

Moreover,

$$\|y_{i=1}^{N} - F(x_0, \{w_i\}_{i=1}^{N})\|^2 \le N r_\nu^2 \qquad (29)$$

Therefore, substituting Eqs. (28)-(29) into the right side of inequality (26) yields:

$$\|y_{i=1}^{N} - F(\hat{x}_0, \{\hat{w}_i\}_{i=1}^{N})\|_Q^2 \ge \lambda_{Q,1}\left(\frac{1}{4}\varphi(\|x_0 - \hat{x}_0\|^2) - \|\psi(x_0, \{w_i\}_{i=1}^{N})\|^2 - \|\psi(\hat{x}_0, \{\hat{w}_i\}_{i=1}^{N})\|^2 - 4\|\Delta F\|^2 r_w^2 - N r_\nu^2\right) \qquad (30)$$
Applying the observability assumption of Eq. (22) in Eq. (30), we obtain:

$$\|y_{i=1}^{N} - F(\hat{x}_0, \{\hat{w}_i\}_{i=1}^{N})\|_Q^2 \ge \lambda_{Q,1}\left(\frac{1}{4}\delta\|x_0 - \hat{x}_0\|^2 - \|\psi(x_0, \{w_i\}_{i=1}^{N})\|^2 - \|\psi(\hat{x}_0, \{\hat{w}_i\}_{i=1}^{N})\|^2 - 4\|\Delta F\|^2 r_w^2 - N r_\nu^2\right) \qquad (31)$$

Additionally, in Ref. 19 it is shown that there exist constant scalars $k$ and $\hat{k}$ satisfying the following inequalities:

$$\|\psi(x_0, \{w_i\}_{i=1}^{N})\|^2 \le \frac{1}{24} k^2 N(N+1)(2N+1) r_w^4, \qquad \|\psi(\hat{x}_0, \{\hat{w}_i\}_{i=1}^{N})\|^2 \le \frac{1}{24} \hat{k}^2 N(N+1)(2N+1) r_w^4 \qquad (32)$$

Besides, exploiting Eq. (25), we obtain:

$$\|\bar{x}_0 - \hat{x}_0\|^2 \ge \frac{1}{2}\|x_0 - \hat{x}_0\|^2 - \|x_0 - \bar{x}_0\|^2 \qquad (33)$$

Substituting Eqs. (31)-(33) into the cost function (3) yields:

$$J(\hat{x}_0, \hat{w}) \ge \lambda_{Q,1}\left(\frac{1}{4}\delta\|x_0 - \hat{x}_0\|^2 - \alpha - 4\|\Delta F\|^2 r_w^2 - N r_\nu^2\right) + \lambda_{\Pi_0,1}\left(\frac{1}{2}\|x_0 - \hat{x}_0\|^2 - \|x_0 - \bar{x}_0\|^2\right) + N r_w^2 \lambda_{R,1} \qquad (34)$$

where $\alpha = \frac{1}{24}(k^2 + \hat{k}^2) N(N+1)(2N+1) r_w^4$.
By combining the lower and upper bounds of the cost function in Eqs. (24) and (34), we obtain:

$$\lambda_{Q,1}\left(\frac{1}{4}\delta\|e_0\|^2 - N r_\nu^2 - \alpha - 4\|\Delta F\|^2 r_w^2\right) + \lambda_{\Pi_0,1}\left(\frac{1}{2}\|e_0\|^2 - \|x_0 - \bar{x}_0\|^2\right) + N r_w^2 \lambda_{R,1} \le \lambda_{\Pi_0,2}\|x_0 - \bar{x}_0\|^2 + N(r_\nu^2 \lambda_{Q,2} + r_w^2 \lambda_{R,2}) + \varepsilon \qquad (35)$$
where $e_0 = x_0 - \hat{x}_0$. As a result, the estimation error $e_0$ is bounded by:

$$\|e_0\|^2 \le \frac{4}{2\lambda_{\Pi_0,1} + \lambda_{Q,1}\delta}\Big[(\lambda_{\Pi_0,1} + \lambda_{\Pi_0,2})\|x_0 - \bar{x}_0\|^2 + N[r_\nu^2(\lambda_{Q,1} + \lambda_{Q,2}) + r_w^2(\lambda_{R,2} - \lambda_{R,1})] + \varepsilon + \lambda_{Q,1}(\alpha + 4\|\Delta F\|^2 r_w^2)\Big] \qquad (36)$$
Note that $F(x_{i-1})$ is the exact solution of the ODE given in Eq. (1). As a result, $F(\cdot)$ is continuously differentiable and, therefore, also locally Lipschitz continuous. Let $k_f$ be the local Lipschitz constant of this function:

$$\|x_0 - \bar{x}_0\|^2 = \|F(x_{-1}, w_{-1}) - F(\hat{x}_{-1}, \hat{w}_{-1})\|^2 \le k_f^2\left(\|x_{-1} - \hat{x}_{-1}\|^2 + \|\hat{w}_{-1} - w_{-1}\|^2\right) \qquad (37)$$

where the subscript '$-1$' refers to the time $t-N-1$, which is now outside the time window. Then, using the Cauchy-Schwarz inequality leads to:

$$\|x_0 - \bar{x}_0\|^2 \le k_f^2\left(\|x_{-1} - \hat{x}_{-1}\|^2 + \|\hat{w}_{-1} - w_{-1}\|^2\right) \le k_f^2\left(\|e_{-1}\|^2 + 4r_w^2\right) \qquad (38)$$
Applying Eq. (38) in Eq. (36):

\[
\|e_0\|^2 \le \frac{4}{2\lambda_{\Pi_0,1}+\lambda_{Q,1}\delta}\Big[(\lambda_{\Pi_0,1}+\lambda_{\Pi_0,2})\big(k_f^2\|e_{-1}\|^2 + 4k_f^2 r_w^2\big) + N\big[r_\nu^2(\lambda_{Q,1}+\lambda_{Q,2}) + r_w^2(\lambda_{R,2}-\lambda_{R,1})\big] + \varepsilon + \lambda_{Q,1}\big(\alpha + 4\|\Delta F\|^2 r_w^2\big)\Big] \tag{39}
\]
Therefore, the evolution of the estimation error's upper bound is governed by:

\[
\xi_t = a\,\xi_{t-1} + b \tag{40}
\]
where $\xi_t$ is the upper bound of $\|e_0\|^2$ and

\[
a = \frac{4(\lambda_{\Pi_0,1}+\lambda_{\Pi_0,2})k_f^2}{2\lambda_{\Pi_0,1}+\lambda_{Q,1}\delta}, \qquad
b = \frac{4\Big[4k_f^2(\lambda_{\Pi_0,1}+\lambda_{\Pi_0,2})r_w^2 + N\big[r_\nu^2(\lambda_{Q,1}+\lambda_{Q,2})+r_w^2(\lambda_{R,2}-\lambda_{R,1})\big] + \varepsilon + \lambda_{Q,1}\big(\alpha+4\|\Delta F\|^2 r_w^2\big)\Big]}{2\lambda_{\Pi_0,1}+\lambda_{Q,1}\delta}
\]

Note that $a$ is a positive scalar. Therefore, for $a < 1$, the dynamics of the upper bound are asymptotically stable and converge to $b/(1-a)$.
Corollary 1. We can choose the eigenvalues of $\Pi_0$, $R$ and $Q$ to ensure $a$ is less than 1. Also, when there is no noise, $b$ can be made arbitrarily small by increasing $\eta$ (the order of the Carleman approximation); in this case the estimation error can be driven to zero. This result enables a sharper comparison of CMHE and EKF. More specifically, the EKF is a first order CMHE with no constraints on the state variables other than the system dynamics. Therefore, the EKF cannot eliminate the estimation error even when there is no process or measurement noise. Even though in real cases $r_\nu$, $r_w$ and $\varepsilon$ are nonzero, increasing $\eta$ still decreases $b$ and consequently the upper bound of the estimation error.

Remark 1. When the order of the Carleman approximation is large enough, $\varepsilon \to 0$ and the upper bound of the error is reduced. This result also holds when the original nonlinear system, instead of the approximated system, is employed in the MHE formulation. For further discussion of the stability of nonlinear MHE, we refer the interested reader to Refs. 16 and 19.

Remark 2. For control purposes, MHE should typically estimate the state variables faster than the control-loop response; as a result, the MHE sampling sometimes needs to be faster than the control loop. Also, for larger eigenvalues of the matrix $Q$, the estimation error in Eq. (40) converges faster. However, the signals are required to be sufficiently smooth; the weight on deviation from the prior knowledge should therefore be large enough in comparison with the stage penalties to avoid fluctuations in the closed-loop system.
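The contraction argument above can be checked numerically. The following is a minimal sketch of the scalar recursion of Eq. (40); the values of $a$ and $b$ are illustrative assumptions, not derived from any particular system:

```python
# Scalar bound recursion of Eq. (40): xi_t = a*xi_{t-1} + b.
# a < 1 makes the bound contract toward the fixed point b/(1-a).
a, b = 0.6, 0.02             # illustrative values with a < 1
xi = 1.0                     # initial upper bound on ||e_0||^2
for _ in range(200):
    xi = a * xi + b
fixed_point = b / (1.0 - a)  # limit of the bound as t -> infinity
```

With these values the bound settles at $b/(1-a) = 0.05$ regardless of the initial value, which is the convergence property used in Corollary 1.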
5 Sensitivity Analysis
This section derives analytical expressions for the first and second order derivatives of the objective function $J$ with respect to the decision variables $(\hat{x}_0, \{\hat{w}_i\}_{i=1}^N)$ in Eq. (3). To this purpose, Eq. (6) is first solved analytically over the interval $[t_i, t_{i+1}]$:
\[
\begin{aligned}
\hat{x}_{i,\otimes} &= \exp(\Delta T\tilde{A}_i)\,\hat{x}_{i-1,\otimes} + \int_0^{\Delta T}\exp(\tilde{A}_i\tau)\,d\tau\,\Big(A_r + \sum_{j=1}^{m}B_{j0}u_{i,j} + \sum_{j=1}^{n}D_{j0}\hat{w}_{i,j}\Big) \\
&= \exp(\Delta T\tilde{A}_i)\,\hat{x}_{i-1,\otimes} + \tilde{A}_i^{-1}\big(\exp(\Delta T\tilde{A}_i) - I\big)\Big(A_r + \sum_{j=1}^{m}B_{j0}u_{i,j} + \sum_{j=1}^{n}D_{j0}\hat{w}_{i,j}\Big)
\end{aligned} \tag{41}
\]

where $\tilde{A}_i = A + \sum_{j=1}^{m}B_j u_{i,j} + \sum_{j=1}^{n}D_j\hat{w}_{i,j}$ is nonsingular. In the case $\tilde{A}_i$ is singular, the expression $\int_0^{\Delta T}\exp(\tilde{A}_i\tau)\,d\tau$ can be computed numerically; the integral can also be obtained using the definition of the matrix exponential and Padé series, which are likewise stable. The remainder of the analysis follows similarly. As discussed before, the vectors $\hat{x}_{0,\otimes}$ and $\hat{w}_i$ have the following forms:

\[
\hat{x}_{0,\otimes} = \begin{bmatrix} \hat{x}_{0,1} & \hat{x}_{0,2} & \cdots & \hat{x}_{0,r} \end{bmatrix}^T, \qquad
\hat{w}_i = \begin{bmatrix} \hat{w}_{i,1} & \hat{w}_{i,2} & \cdots & \hat{w}_{i,m} \end{bmatrix}^T
\]
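The closed-form step of Eq. (41) can be sanity-checked against a numerical evaluation of the integral term (the fallback mentioned above for a singular $\tilde{A}_i$). The sketch below uses small random matrices as stand-ins for the Carleman matrices; sizes and values are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from scipy.linalg import expm

# One step of Eq. (41): x_next = E x + A~^{-1} (E - I) c, with E = expm(dT*A~),
# where c stands for the constant forcing Ar + sum_j Bj0*u_j + sum_j Dj0*w_j.
rng = np.random.default_rng(1)
r, dT = 4, 0.05
A_t = rng.normal(size=(r, r))   # stand-in for A~_i, nonsingular with prob. 1
c = rng.normal(size=r)          # stand-in constant forcing
x = rng.normal(size=r)          # current extended state

E = expm(dT * A_t)
x_next = E @ x + np.linalg.solve(A_t, (E - np.eye(r)) @ c)

# Same integral term evaluated numerically (composite trapezoid rule):
taus = np.linspace(0.0, dT, 2001)
vals = np.array([expm(A_t * t) @ c for t in taus])
h = taus[1] - taus[0]
integral = h * (vals[1:-1].sum(axis=0) + 0.5 * (vals[0] + vals[-1]))
```

Both routes give the same propagated state to numerical precision, which is why the analytical form can replace ODE integration inside the CMHE.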
Thus, the first derivative of the estimation error, $v_i$, with respect to these vector elements is given by:

\[
\frac{\partial v_i}{\partial \hat{x}_{0,k}} = -C\,\frac{\partial \hat{x}_{i,\otimes}}{\partial \hat{x}_{i-1,\otimes}}\frac{\partial \hat{x}_{i-1,\otimes}}{\partial \hat{x}_{i-2,\otimes}}\cdots\frac{\partial \hat{x}_{1,\otimes}}{\partial \hat{x}_{0,\otimes}}\frac{\partial \hat{x}_{0,\otimes}}{\partial \hat{x}_{0,k}} = -C E_i E_{i-1}\cdots E_1\,\frac{\partial \hat{x}_{0,\otimes}}{\partial \hat{x}_{0,k}} \tag{42}
\]
and for $p \le i$,

\[
\begin{aligned}
\frac{\partial v_i}{\partial \hat{w}_{p,k}} &= -C\,\frac{\partial \hat{x}_{i,\otimes}}{\partial \hat{x}_{i-1,\otimes}}\cdots\frac{\partial \hat{x}_{p+1,\otimes}}{\partial \hat{x}_{p,\otimes}}\frac{\partial \hat{x}_{p,\otimes}}{\partial \hat{w}_{p,k}} \\
&= -C E_i E_{i-1}\cdots E_{p+1}\Bigg[\frac{\partial E_p}{\partial \hat{w}_{p,k}}\hat{x}_{p-1,\otimes}
+ \Big(\tilde{A}_p^{-1}\frac{\partial E_p}{\partial \hat{w}_{p,k}} + \frac{\partial \tilde{A}_p^{-1}}{\partial \hat{w}_{p,k}}(E_p - I)\Big)\Big(A_r + \sum_{j=1}^{m}B_{j0}u_{i,j} + \sum_{j=1}^{n}D_{j0}\hat{w}_{i,j}\Big)
+ \tilde{A}_p^{-1}(E_p - I)D_{k0}\Bigg]
\end{aligned} \tag{43}
\]
where $E_i = \exp(\Delta T\tilde{A}_i)$. Note that, to derive the Hessian matrix and the gradient vector, we need the derivatives with respect to the original estimated states, i.e., $[\hat{x}_{0,1}\ \hat{x}_{0,2}\ \cdots\ \hat{x}_{0,n}]^T$ (not $\hat{x}_{0,\otimes}$). Also, it is obvious that $\partial v_i/\partial\hat{w}_{p,k} = 0$ for $p > i$. Similarly, the second order derivatives of $v_i$ are available analytically:
\[
\frac{\partial^2 v_i}{\partial \hat{x}_{0,l}\partial \hat{x}_{0,k}} = -C E_i E_{i-1}\cdots E_1\,\frac{\partial^2 \hat{x}_{0,\otimes}}{\partial \hat{x}_{0,l}\partial \hat{x}_{0,k}} \tag{44}
\]
\[
\frac{\partial^2 v_i}{\partial \hat{x}_{0,l}\partial \hat{w}_{p,k}} = -C E_i E_{i-1}\cdots E_{p+1}\,\frac{\partial E_p}{\partial \hat{w}_{p,k}}\,E_{p-1}\cdots E_1\,\frac{\partial \hat{x}_{0,\otimes}}{\partial \hat{x}_{0,l}} \tag{45}
\]
\[
\begin{aligned}
\frac{\partial^2 v_i}{\partial \hat{w}_{p,l}\partial \hat{w}_{p,k}} = -C E_i E_{i-1}\cdots E_{p+1}\Bigg[&\frac{\partial^2 E_p}{\partial \hat{w}_{p,l}\partial \hat{w}_{p,k}}\hat{x}_{p-1,\otimes} \\
&+ \Big(\frac{\partial \tilde{A}_p^{-1}}{\partial \hat{w}_{p,l}}\frac{\partial E_p}{\partial \hat{w}_{p,k}} + \frac{\partial \tilde{A}_p^{-1}}{\partial \hat{w}_{p,k}}\frac{\partial E_p}{\partial \hat{w}_{p,l}} + \frac{\partial^2 \tilde{A}_p^{-1}}{\partial \hat{w}_{p,l}\partial \hat{w}_{p,k}}(E_p - I) + \tilde{A}_p^{-1}\frac{\partial^2 E_p}{\partial \hat{w}_{p,l}\partial \hat{w}_{p,k}}\Big)\Big(A_r + \sum_{j=1}^{m}B_{j0}u_{i,j} + \sum_{j=1}^{n}D_{j0}\hat{w}_{i,j}\Big) \\
&+ \Big(\tilde{A}_p^{-1}\frac{\partial E_p}{\partial \hat{w}_{p,k}} + \frac{\partial \tilde{A}_p^{-1}}{\partial \hat{w}_{p,k}}(E_p - I)\Big)D_{l0}
+ \Big(\tilde{A}_p^{-1}\frac{\partial E_p}{\partial \hat{w}_{p,l}} + \frac{\partial \tilde{A}_p^{-1}}{\partial \hat{w}_{p,l}}(E_p - I)\Big)D_{k0}\Bigg]
\end{aligned} \tag{46}
\]
and

\[
\begin{aligned}
\frac{\partial^2 v_i}{\partial \hat{w}_{p,l}\partial \hat{w}_{q,k}} = -C E_i E_{i-1}\cdots E_{p+1}\,\frac{\partial E_p}{\partial \hat{w}_{p,l}}\,E_{p-1}\cdots E_{q+1}\Bigg[&\frac{\partial E_q}{\partial \hat{w}_{q,k}}\hat{x}_{q-1,\otimes} \\
&+ \Big(\tilde{A}_q^{-1}\frac{\partial E_q}{\partial \hat{w}_{q,k}} + \frac{\partial \tilde{A}_q^{-1}}{\partial \hat{w}_{q,k}}(E_q - I)\Big)\Big(A_r + \sum_{j=1}^{m}B_{j0}u_{i,j} + \sum_{j=1}^{n}D_{j0}\hat{w}_{i,j}\Big)
+ \tilde{A}_q^{-1}(E_q - I)D_{k0}\Bigg]
\end{aligned} \tag{47}
\]
where $q < p \le i$. By incorporating Eqs. (42)-(47) in the aforementioned objective function (3), the gradient elements of $J$ are:

\[
\frac{\partial J}{\partial \hat{x}_{0,k}} = 2(\hat{x}_0-\bar{x}_0)^T\Pi_0\Gamma_k + \sum_{i=1}^{N} 2 v_i^T Q\,\frac{\partial v_i}{\partial \hat{x}_{0,k}} \tag{48}
\]

and

\[
\frac{\partial J}{\partial \hat{w}_{p,k}} = 2\hat{w}_p^T R\,\Gamma_k + \sum_{i=p}^{N} 2 v_i^T Q\,\frac{\partial v_i}{\partial \hat{w}_{p,k}} \tag{49}
\]
where $\Gamma_k$ is a vector with zero elements except the $k$th element, which is unity. Also, by differentiating Eqs. (48) and (49), the Hessian matrix elements are given by the following equations:

\[
\frac{\partial^2 J}{\partial \hat{x}_{0,l}\partial \hat{x}_{0,k}} = 2\Gamma_l^T\Pi_0\Gamma_k + 2\sum_{i=1}^{N}\Big(\frac{\partial v_i}{\partial \hat{x}_{0,l}}\Big)^T Q\Big(\frac{\partial v_i}{\partial \hat{x}_{0,k}}\Big) + 2\sum_{i=1}^{N} v_i^T Q\,\frac{\partial^2 v_i}{\partial \hat{x}_{0,l}\partial \hat{x}_{0,k}} \tag{50}
\]

\[
\frac{\partial^2 J}{\partial \hat{x}_{0,l}\partial \hat{w}_{p,k}} = 2\sum_{i=1}^{N}\Big(\frac{\partial v_i}{\partial \hat{x}_{0,l}}\Big)^T Q\Big(\frac{\partial v_i}{\partial \hat{w}_{p,k}}\Big) + 2\sum_{i=1}^{N} v_i^T Q\,\frac{\partial^2 v_i}{\partial \hat{x}_{0,l}\partial \hat{w}_{p,k}} \tag{51}
\]

and

\[
\frac{\partial^2 J}{\partial \hat{w}_{q,l}\partial \hat{w}_{p,k}} = 2\Gamma_l^T R\,\Gamma_k + 2\sum_{i=1}^{N}\Big(\frac{\partial v_i}{\partial \hat{w}_{q,l}}\Big)^T Q\Big(\frac{\partial v_i}{\partial \hat{w}_{p,k}}\Big) + 2\sum_{i=1}^{N} v_i^T Q\,\frac{\partial^2 v_i}{\partial \hat{w}_{q,l}\partial \hat{w}_{p,k}} \tag{52}
\]
From Clairaut's theorem, we can conclude that the second derivatives are symmetric. Therefore, all elements of the Hessian matrix are now available from Eqs. (48)-(52). Note that this sensitivity analysis formulation is not tied to a specific local point; moreover, the Carleman approximation is valid over a larger region than a linearized system. As a result, the formulation can be used throughout the evolution of the system in the corresponding nonlinear programming problem. Providing these sensitivities of the objective function with respect to the decision variables to the search algorithm significantly accelerates the CMHE. To show the efficiency of the CMHE structure, we investigate the results of the following example.
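To illustrate why supplying analytical sensitivities helps, the sketch below feeds an analytical gradient and Hessian of a toy objective (a hypothetical stand-in for $J$, not the MHE cost itself) to SciPy's trust-region solver, analogous to supplying them to MATLAB's fmincon:

```python
import numpy as np
from scipy.optimize import minimize

z_prior = np.array([1.0, -2.0, 0.5])   # hypothetical prior estimate

def J(z):
    # toy objective: quadratic arrival-cost-like term plus a quartic term
    return np.sum((z - z_prior) ** 2) + 0.1 * z[0] ** 4

def grad(z):
    # analytical gradient (playing the role of Eqs. 48-49)
    g = 2.0 * (z - z_prior)
    g[0] += 0.4 * z[0] ** 3
    return g

def hess(z):
    # analytical Hessian (playing the role of Eqs. 50-52)
    H = 2.0 * np.eye(3)
    H[0, 0] += 1.2 * z[0] ** 2
    return H

res = minimize(J, np.zeros(3), jac=grad, hess=hess, method="trust-constr")
```

With exact derivatives the solver needs no finite-difference evaluations of the objective, which is the source of the speedup reported for the CMHE.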
6 Simulation

Consider a continuous stirred tank reactor (CSTR) in which the exothermic first-order reaction A → B occurs. The nonlinear model of the process is given by:

\[
\begin{aligned}
\frac{dC_A}{dt} &= \frac{q}{V}(C_{Af} - C_A) - k_0\exp\Big(-\frac{E}{RT_R}\Big)C_A \\
\frac{dT_R}{dt} &= \frac{q}{V}(T_f - T_R) - \frac{\Delta H}{\rho C_p}\,k_0\exp\Big(-\frac{E}{RT_R}\Big)C_A + \frac{UA}{V\rho C_p}(T_c - T_R)
\end{aligned} \tag{53}
\]

where $C_A$ and $T_R$ are the states of the system. This system is highly nonlinear, and small changes in the coil temperature or the feed flow rate of the CSTR cause strong fluctuations in the tank's concentration and temperature. The parameters and variables of the reactor at the desired operating point are provided in Table 1.
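A simulation of the model (53) can be sketched with a standard ODE solver. The parameter values below are common benchmark values for this reactor, assumed here purely for illustration; the paper's Table 1 lists the actual operating point used in the study:

```python
import numpy as np
from scipy.integrate import solve_ivp

# CSTR model of Eq. (53). Parameter values are assumed benchmark values,
# not the paper's Table 1 entries.
q, V, CAf, Tf = 100.0, 100.0, 1.0, 350.0     # L/min, L, mol/L, K
k0, E_R = 7.2e10, 8750.0                     # 1/min, E/R in K
dH, rhoCp, UA, Tc = -5.0e4, 239.0, 5.0e4, 300.0

def cstr(t, x):
    CA, TR = x
    rate = k0 * np.exp(-E_R / TR) * CA       # Arrhenius reaction rate
    dCA = q / V * (CAf - CA) - rate
    dTR = q / V * (Tf - TR) - dH / rhoCp * rate + UA / (V * rhoCp) * (Tc - TR)
    return [dCA, dTR]

# integrate from (approximately) the nominal operating point
sol = solve_ivp(cstr, (0.0, 1.0), [0.5, 350.0], rtol=1e-8, atol=1e-10)
```

Starting near the nominal point, the trajectory stays close to it over this short horizon; larger perturbations of $q$ or $T_c$ drive the strong nonlinear excursions the estimator must track.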
Carleman Approximation

To test the accuracy of the Carleman approximation method, we compare the second and third order Carleman models against the original nonlinear model. For simplicity, we translate the steady-state point to the origin through a change of variables. Fig. 1 depicts the reactor's temperature and concentration versus time as obtained from the Carleman models and the original nonlinear model (53). In this simulation there is no noise, and the initial concentration is only slightly above the equilibrium concentration of the linearization point. As the figure shows, the third order Carleman model tracks the system more closely than the second order one.
Table 1: Reactor parameters and nominal operational point
State Estimation

In this illustrative process, the reactor's temperature is measurable, while the concentrations of the species in the tank are not. The goal is to estimate the concentrations from the measured output temperature presented in Fig. 2. In addition, we assume the feed flow rate, $q$, is affected by a white noise signal uniformly distributed in $[-10, 10]$ lit/min. There is also a zero-mean white temperature measurement noise that has a uniform distribution with upper bound 0.02 K. The sampling time and the estimation horizon are chosen as $\Delta T = 0.05$ and $N = 3$, respectively. We consider the following constraints on the system:
\[
C_A \ge 0, \qquad -10 \le w_j \le 10, \qquad -0.02 \le v_j \le 0.02, \qquad \forall j = 1, 2, 3
\]
Finally, the MHE design parameters are:

\[
Q = 1, \qquad R = \begin{bmatrix} 100 & 0 \\ 0 & 100 \end{bmatrix}, \qquad \Pi_0 = \begin{bmatrix} 10 & 0 \\ 0 & 10 \end{bmatrix}
\]

Figure 3 shows the performance of the CMHE using the second and third order Carleman approximations; we refer to these two estimators as the 2nd and 3rd order CMHE. As discussed before, the third order model is more precise. Nonetheless, the 2nd order CMHE compensates this error to some extent through the optimal choice of $\hat{x}_0$ and $\{\hat{w}_i\}_{i=1}^N$, and the trajectories of the 2nd and 3rd order CMHE are virtually identical. To compare the efficiency of these two estimators more quantitatively, we define the mean squared error (MSE) as follows:

\[
\mathrm{MSE} = \sum_{t=0}^{t=2.5}\big(\hat{C}_A - C_A\big)^2
\]

The MSE value for the nonlinear MHE is $4.46\times10^{-6}$; for the 2nd and 3rd order CMHE, this value decreases to $3.77\times10^{-6}$ and $3.67\times10^{-6}$, respectively. We also employ the EKF approach to estimate the state variables for the same process and noise signals. We examine different values to tune the parameters of the EKF; the best result is achieved using the variances of the noise signals. In this approach, we approximate the noise signals with Gaussian distributions and then use the estimated variances as the initial values of the covariance matrices in the EKF formulation. The optimum MSE obtained using the EKF is $6\times10^{-4}$, shown in Fig. 4, which is more than 100 times larger than that of the MHE approaches. The simulations are performed using a 3.4 GHz Intel Core i7-3770 processor. We use the fmincon function in MATLAB to find the optimal parameters at each time step. The simulation run-times, including the optimization and the integration of the ODEs, for the three different MHE structures with analytical and numerical gradients and Hessians are reported in Table 2. Note that, to have a fair comparison, we used the same noise signals in all the different MHE
simulations to obtain the MSE and run-time results.

Table 2: Run Time in the Simulations
To solve the optimization problem, the 'interior-point' algorithm in MATLAB is used for the cases where the gradient vector is not supplied; for the other cases, the 'trust-region-reflective' algorithm is employed. Also, to solve the ODE set of the original nonlinear system and of the nonlinear MHE, a third order Runge-Kutta method from standard integration packages is used. In the 2nd and 3rd order CMHE, the analytical solution is obtained from Eq. (41). Table 2 shows this method accelerates the simulation by a factor of up to nearly 50, without endangering the stability of the CMHE. Overall, the results suggest that the Carleman approximation accelerates the online computations while maintaining the same performance level as the nonlinear MHE. The method works even more efficiently when both the Hessian and the gradient of the cost function are supplied analytically.
7 Conclusions and Future Work

This work presented a new approach to MHE design using the Carleman approximation method, which provides an analytical solution of the system. Consequently, the sensitivities with respect to the initial state and the unknown noise vectors at the sampling times become available analytically. This information about the objective function accelerates the constrained nonlinear dynamic optimization. However, due to the approximation applied in the Carleman method, there exists an error in the optimization; this error can be reduced by using a higher order of the Carleman approximation. In this work, we proved the error can be kept bounded within a desired range by choosing a proper order. In a numerical example,
we investigated the performance and stability in the presence of noise in the transition and the output. The simulation results show the proposed approach tracks the nonlinear system states successfully, while the run-time is reduced significantly. The estimation of state variables is usually employed toward controlling them. As future work, we are interested in extending the stability proof to a closed-loop system comprising a model predictive controller together with the estimator proposed in the current manuscript. The task will be to design a stable controller/observer pair and is beyond the scope of the current work.
Appendix A

This appendix explains the Carleman approximation employed in the proposed MHE structure. At the desired nominal operating point, a Taylor expansion approximates the nonlinear functions in (2) as follows:

\[
f(x) \simeq f(x_0) + \sum_{k=1}^{\eta}\frac{1}{k!}\,\partial f_{[k]}\big|_{x=x_0}\,(x-x_0)^{[k]} \tag{A-1}
\]
where $x^{[k]}$ denotes the $k$th Kronecker product, defined as $x^{[k]} = x\otimes x^{[k-1]}$ with $x^{[0]} = 1$. Also, $\partial f_{[k]}$ is a matrix whose element in the $i$th row and $j$th column is $\frac{\partial^k f_i}{\partial x_1^{k_1}\partial x_2^{k_2}\cdots\partial x_n^{k_n}}$, where $\sum_{l=1}^{n}k_l = k$. In this notation, $k_l$ is the order of $x_l$ in the $j$th element of $x^{[k]}$. For example, for a second order model and $k = 2$, $\partial f_{[k]}$ is given by:

\[
\partial f_{[k]} = \begin{bmatrix}
\dfrac{\partial^2 f_1}{\partial x_1^2} & \dfrac{\partial^2 f_1}{\partial x_1\partial x_2} & \dfrac{\partial^2 f_1}{\partial x_2\partial x_1} & \dfrac{\partial^2 f_1}{\partial x_2^2} \\[8pt]
\dfrac{\partial^2 f_2}{\partial x_1^2} & \dfrac{\partial^2 f_2}{\partial x_1\partial x_2} & \dfrac{\partial^2 f_2}{\partial x_2\partial x_1} & \dfrac{\partial^2 f_2}{\partial x_2^2}
\end{bmatrix}
\]
Industrial & Engineering Chemistry Research
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60
Page 28 of 40
In addition, we can define $\partial f_{[k]}$ in a recursive fashion as follows:

\[
\partial f_{[k]} = \begin{cases}
\dfrac{\partial f}{\partial x} & \text{if } k = 1 \\[8pt]
\dfrac{\partial}{\partial x}\otimes \partial f_{[k-1]} & \text{otherwise}
\end{cases}
\]

where $\frac{\partial f}{\partial x}$ is the Jacobian of the vector-valued function $f$.
The desired nominal operating point $x_0$ and the corresponding noise-free output are at the origin. This is not a limiting assumption, since proper substitutions of $x$ and $y$ transform the desired nominal operating point and the corresponding noise-free output to the origin. Therefore, applying (A-1) in the system (2), in conjunction with Assumption ??, we obtain:

\[
\dot{x} \simeq \sum_{k=0}^{\eta} A_k x^{[k]} + \sum_{j=1}^{m}\sum_{k=0}^{\eta} B_{jk} x^{[k]} u_j + \sum_{l=1}^{n}\Gamma_l w_l, \qquad
y \simeq \sum_{k=1}^{\eta} C_k x^{[k]} + \nu \tag{A-2}
\]

where for $k = 0$: $A_0 = f(0)$, $B_{j0} = g_j(0)$ and $C_0 = h(0)$, and for $k > 0$: $A_k = \frac{1}{k!}\partial f_{[k]}\big|_{x=0}$, $B_{jk} = \frac{1}{k!}\partial g_{j[k]}\big|_{x=0}$ and $C_k = \frac{1}{k!}\partial h_{[k]}\big|_{x=0}$. Differentiating the higher orders of the original states, $x^{[i]}$, with respect to time and retaining the terms up to order $\eta$, we obtain the following ODEs:

\[
\frac{d(x^{[i]})}{dt} \simeq \sum_{k=0}^{\eta-i+1} a_{i,k}\,x^{[k+i-1]} + \sum_{j=1}^{m}\sum_{k=0}^{\eta-i+1} b_{j,i,k}\,x^{[k+i-1]} u_j + \sum_{l=1}^{n} d_{l,i}\,x^{[i-1]} w_l \tag{A-3}
\]

where

\[
a_{i,k} = \sum_{\alpha=0}^{i-1} I_n^{[\alpha]}\otimes A_k\otimes I_n^{[i-1-\alpha]}, \qquad
b_{j,i,k} = \sum_{\alpha=0}^{i-1} I_n^{[\alpha]}\otimes B_{j,k}\otimes I_n^{[i-1-\alpha]}, \qquad
d_{l,i} = \sum_{\alpha=0}^{i-1} I_n^{[\alpha]}\otimes \Gamma_l\otimes I_n^{[i-1-\alpha]}
\]
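The Kronecker-sum construction of the coefficients in Eq. (A-3) can be sketched directly with NumPy; the helper names below are ours, introduced for illustration:

```python
import numpy as np

# a_{i,k} = sum_alpha I^[alpha] (x) A_k (x) I^[i-1-alpha], from Eq. (A-3).
def kron_pow(M, p):
    """p-th Kronecker power of M; the 0-th power is the 1x1 identity."""
    out = np.eye(1)
    for _ in range(p):
        out = np.kron(out, M)
    return out

def a_coeff(Ak, i, n):
    """Coefficient a_{i,k} for a system with n states (Ak maps x^[k] -> x)."""
    I = np.eye(n)
    return sum(np.kron(np.kron(kron_pow(I, al), Ak), kron_pow(I, i - 1 - al))
               for al in range(i))
```

For example, with $i=2$ and $k=1$ this yields $A_1\otimes I + I\otimes A_1$, which is exactly the generator of $d(x\otimes x)/dt$ when $\dot{x}=A_1 x$ (product rule on the Kronecker product).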
Note that when Eq. (A-2) is used to find the time-derivative of $x^{[i]}$, the result is a polynomial of order $\eta+i-1$. However, in order to have a fixed-order polynomial for every $i \in \{1, 2, \ldots, \eta\}$, we truncate the polynomial at $x^{[\eta]}$. Because of this truncation, the relationship between the extended and the original states is no longer exactly the original algebraic one. As time increases, errors accumulate in the higher order states and then propagate to the original states after multiple time steps. As a result, in the Carleman approximation, resetting the extended states and recomputing them from the original state variables is generally necessary to retain accuracy over extended periods of time; in the CMHE structure, however, the Carleman model's state variables are reset automatically. For simplicity, we employ the Kronecker product operator to present the bilinear system (A-3) in a compact form:
\[
\begin{aligned}
\dot{x}_\otimes &\simeq A x_\otimes + A_r + \sum_{j=1}^{m}\big[B_j x_\otimes u_j + B_{j0} u_j\big] + \sum_{l=1}^{n}\big[D_l x_\otimes w_l + D_{l0} w_l\big] \\
y &\simeq C x_\otimes + \nu
\end{aligned} \tag{A-4}
\]

where $x_\otimes = \big[\,x^T\ \ (x^{[2]})^T\ \ \cdots\ \ (x^{[\eta]})^T\,\big]^T$, and the $A$, $A_r$, $B_j$, $B_{j0}$, $D_l$ and $D_{l0}$ matrices have the following structure:
\[
A = \begin{bmatrix}
a_{1,1} & a_{1,2} & \cdots & a_{1,\eta} \\
a_{2,0} & a_{2,1} & \cdots & a_{2,\eta-1} \\
0 & a_{3,0} & \cdots & a_{3,\eta-2} \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & a_{\eta,1}
\end{bmatrix}, \qquad
A_r = \begin{bmatrix} a_{1,0} \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix},
\]

\[
B_j = \begin{bmatrix}
b_{j,1,1} & b_{j,1,2} & \cdots & b_{j,1,\eta} \\
b_{j,2,0} & b_{j,2,1} & \cdots & b_{j,2,\eta-1} \\
0 & b_{j,3,0} & \cdots & b_{j,3,\eta-2} \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & b_{j,\eta,1}
\end{bmatrix}, \qquad
B_{j0} = \begin{bmatrix} b_{j,1,0} \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix},
\]

\[
D_l = \begin{bmatrix}
0 & 0 & \cdots & 0 \\
d_{l,2} & 0 & \cdots & 0 \\
0 & d_{l,3} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & d_{l,\eta}
\end{bmatrix}, \qquad
D_{l0} = \begin{bmatrix} d_{l,1} \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}
\]
expression when performing this reduction. Even though the system of (A-4) appears bilinear, it is still a nonlinear system with respect to the original state vector x of (2).
Appendix B

Here, we present a method to obtain the first and second derivatives of the matrix $E_i$ with respect to the decision variables, which are used in Eqs. (43)-(47). This method employs the definition of the matrix exponential:

\[
E_i = \sum_{\gamma=0}^{\infty}\frac{\Delta T^{\gamma}}{\gamma!}\,\tilde{A}_i^{\gamma} \tag{A-1}
\]
Differentiating Eq. (A-1), the first derivative of $E_i$ is:

\[
\frac{\partial E_i}{\partial \hat{w}_{i,j}} = \Delta T\,\frac{\partial\tilde{A}_i}{\partial \hat{w}_{i,j}} + \sum_{\gamma=2}^{\infty}\frac{\Delta T^{\gamma}}{\gamma!}\sum_{\lambda=1}^{\gamma}\tilde{A}_i^{\lambda-1}\,\frac{\partial\tilde{A}_i}{\partial \hat{w}_{i,j}}\,\tilde{A}_i^{\gamma-\lambda} \tag{A-2}
\]
and

\[
\begin{aligned}
\frac{\partial^2 E_i}{\partial \hat{w}_{i,q}\partial \hat{w}_{i,j}} = \sum_{\gamma=2}^{\infty}\frac{\Delta T^{\gamma}}{\gamma!}\Bigg\{
&\sum_{\kappa=1}^{\gamma-1}\Big[\tilde{A}_i^{\kappa-1}\,\frac{\partial\tilde{A}_i}{\partial \hat{w}_{i,j}}\,\tilde{A}_i^{\gamma-\kappa-1}\,\frac{\partial\tilde{A}_i}{\partial \hat{w}_{i,q}} + \tilde{A}_i^{\kappa-1}\,\frac{\partial\tilde{A}_i}{\partial \hat{w}_{i,q}}\,\tilde{A}_i^{\gamma-\kappa-1}\,\frac{\partial\tilde{A}_i}{\partial \hat{w}_{i,j}}\Big] \\
&+ \sum_{\lambda=2}^{\gamma-1}\sum_{\kappa=1}^{\lambda-1}\tilde{A}_i^{\kappa-1}\,\frac{\partial\tilde{A}_i}{\partial \hat{w}_{i,q}}\,\tilde{A}_i^{\lambda-\kappa-1}\,\frac{\partial\tilde{A}_i}{\partial \hat{w}_{i,j}}\,\tilde{A}_i^{\gamma-\lambda}
+ \sum_{\lambda=2}^{\gamma-1}\sum_{\kappa=1}^{\gamma-\lambda}\tilde{A}_i^{\lambda-1}\,\frac{\partial\tilde{A}_i}{\partial \hat{w}_{i,j}}\,\tilde{A}_i^{\kappa-1}\,\frac{\partial\tilde{A}_i}{\partial \hat{w}_{i,q}}\,\tilde{A}_i^{\gamma-\lambda-\kappa}\Bigg\}
\end{aligned} \tag{A-3}
\]
Moreover, the derivatives of $\tilde{A}_i^{-1}$ used in Eqs. (A-2) and (A-3) are available analytically:

\[
\frac{\partial(\tilde{A}_i^{-1})}{\partial \hat{w}_{i,j}} = -\tilde{A}_i^{-1}\,\frac{\partial\tilde{A}_i}{\partial \hat{w}_{i,j}}\,\tilde{A}_i^{-1}, \qquad
\frac{\partial^2(\tilde{A}_i^{-1})}{\partial \hat{w}_{i,q}\partial \hat{w}_{i,j}} = -\frac{\partial(\tilde{A}_i^{-1})}{\partial \hat{w}_{i,q}}\,\frac{\partial\tilde{A}_i}{\partial \hat{w}_{i,j}}\,\tilde{A}_i^{-1} - \tilde{A}_i^{-1}\,\frac{\partial\tilde{A}_i}{\partial \hat{w}_{i,j}}\,\frac{\partial(\tilde{A}_i^{-1})}{\partial \hat{w}_{i,q}} \tag{A-4}
\]
Note that in (A-2) and (A-3), as the index $\gamma$ increases, the contribution of the corresponding terms in the sums decreases. As a result, a finite upper bound on $\gamma$ is sufficient for the efficient computation of the sensitivities.
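A minimal sketch of this truncated-series computation of the first derivative of $E_i$, checked against a central finite difference; the matrix values and the truncation level `gmax` are illustrative assumptions:

```python
import math
import numpy as np
from scipy.linalg import expm

def dexpm_series(A, dA, dT, gmax=30):
    """Derivative of E = expm(dT*A(w)) w.r.t. a scalar parameter w,
    with dA = dA/dw, via the truncated power series described above."""
    n = A.shape[0]
    dE = dT * dA                       # gamma = 1 term
    for g in range(2, gmax + 1):
        term = np.zeros((n, n))
        for lam in range(1, g + 1):    # insert dA at each position in A^g
            term += (np.linalg.matrix_power(A, lam - 1) @ dA
                     @ np.linalg.matrix_power(A, g - lam))
        dE += dT ** g / math.factorial(g) * term
    return dE

A0 = np.array([[0.5, 1.0], [0.0, -0.3]])   # stand-in for A~_i
D = np.array([[0.1, 0.0], [0.2, 0.1]])     # stand-in for dA~/dw (e.g. a D_j)
dT, h = 0.05, 1e-6
fd = (expm(dT * (A0 + h * D)) - expm(dT * (A0 - h * D))) / (2 * h)
```

Because $\Delta T\|\tilde{A}\|$ is small at typical sampling times, the series converges rapidly and a modest truncation already matches the finite-difference reference.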
Acknowledgement

Financial support from the National Science Foundation, CBET Award 12-634902, the Natural Sciences Foundation of Zhejiang Province, and the Ministry of Science and Technology of the People's Republic of China is gratefully acknowledged.
References

(1) Tatiraju, S.; Soroush, M. Nonlinear state estimation in a polymerization reactor. Industrial & Engineering Chemistry Research 1997, 36, 2679-2690.
(2) Soroush, M. State and parameter estimations and their applications in process control. Computers & Chemical Engineering 1998, 23, 229-245.
(3) Ciccarella, G.; Dalla Mora, M.; Germani, A. A Luenberger-like observer for nonlinear systems. International Journal of Control 1993, 57, 537-556.
(4) Kazantzis, N.; Kravaris, C. A nonlinear Luenberger-type observer with application to catalyst activity estimation. Proceedings of the 1995 American Control Conference, 1995; pp 1756-1761.
(5) Hajizadeh, I.; Shahrokhi, M. Observer-Based Output Feedback Linearization Control with Application to HIV Dynamics. Industrial & Engineering Chemistry Research 2015, 54, 2697-2708.
(6) Krener, A. J.; Isidori, A. Linearization by output injection and nonlinear observers. Systems & Control Letters 1983, 3, 47-52.
(7) Van Dootingh, M.; Viel, F.; Rakotopara, D.; Gauthier, J.; Hobbes, P. Nonlinear deterministic observer for state estimation: application to a continuous free radical polymerization reactor. Computers & Chemical Engineering 1992, 16, 777-791.
(8) Daoutidis, P.; Kravaris, C. Dynamic Output Feedback Control of Minimum-phase Nonlinear Systems. Proceedings of the 1991 American Control Conference, 1991; pp 1806-1811.
(9) Soroush, M. Nonlinear state-observer design with application to reactors. Chemical Engineering Science 1996, 52, 387-404.
(10) Soroush, M.; Kravaris, C. Nonlinear control of a polymerization CSTR with singular characteristic matrix. AIChE Journal 1994, 40, 980-990.
(11) Haseltine, E. L.; Rawlings, J. B. Critical evaluation of extended Kalman filtering and moving-horizon estimation. Industrial & Engineering Chemistry Research 2005, 44, 2451-2460.
(12) Jazwinski, A. H. Stochastic Processes and Filtering Theory; Courier Corporation, 2007.
(13) Jang, S. S.; Joseph, B.; Mukai, H. Comparison of two approaches to on-line parameter and state estimation of nonlinear systems. Industrial & Engineering Chemistry Process Design and Development 1986, 25, 809-814.
(14) Rao, C. V.; Rawlings, J. B.; Lee, J. H. Constrained linear state estimation: a moving horizon approach. Automatica 2001, 37, 1619-1628.
(15) Rao, C. V.; Rawlings, J. B. Constrained process monitoring: Moving-horizon approach. AIChE Journal 2002, 48, 97-109.
(16) Rao, C. V.; Rawlings, J. B.; Mayne, D. Q. Constrained state estimation for nonlinear discrete-time systems: Stability and moving horizon approximations. IEEE Transactions on Automatic Control 2003, 48, 246-258.
(17) Zavala, V. M.; Laird, C. D.; Biegler, L. T. A fast moving horizon estimation algorithm based on nonlinear programming sensitivity. Journal of Process Control 2008, 18, 876-884.
(18) Zavala, V. M.; Biegler, L. T. The advanced-step NMPC controller: Optimality, stability and robustness. Automatica 2009, 45, 86-93.
(19) Alessandri, A.; Baglietto, M.; Battistelli, G. Moving-horizon state estimation for nonlinear discrete-time systems: New stability results and approximation schemes. Automatica 2008, 44, 1753-1765.
(20) Zavala, V. M. Stability analysis of an approximate scheme for moving horizon estimation. Computers & Chemical Engineering 2010, 34, 1662-1670.
(21) Kowalski, K.; Steeb, W.-H. Nonlinear Dynamical Systems and Carleman Linearization; World Scientific, 1991.
(22) Mare, J. B.; De Dona, J. A. Moving horizon estimation of constrained nonlinear systems by Carleman approximations. Proceedings of the 45th IEEE Conference on Decision and Control, 2006; pp 2147-2152.
(23) Armaou, A.; Ataei, A. Piece-wise constant predictive feedback control of nonlinear systems. Journal of Process Control 2014, 24, 326-335.
(24) Fang, Y.; Armaou, A. Carleman approximation based quasi-analytic model predictive control for nonlinear systems. AIChE Journal 2016, 62, 3915-3929.
(25) Hashemian, N.; Armaou, A. Fast moving horizon estimation of nonlinear processes via Carleman linearization. 2015 American Control Conference (ACC), 2015; pp 3379-3385.
(26) Hashemian, N.; Armaou, A. Simulation, model-reduction, and state estimation of a two-component coagulation process. AIChE Journal 2016, 62, 1557-1567.
(27) Hashemian, N.; Armaou, A. Moving horizon estimation using Carleman linearization and sensitivity analysis. AIChE Annual Meeting, 2014.
(28) Schneider, R.; Georgakis, C. How to NOT make the extended Kalman filter fail. Industrial & Engineering Chemistry Research 2013, 52, 3354-3362.
(29) Cinar, A.; Turksoy, K.; Hajizadeh, I. Multivariable artificial pancreas method and system. US Patent App. 15/171,355, 2016.
(30) Hajizadeh, I. The Extended and Unscented Kalman Filtering Methods for Real-Time Plasma Insulin Concentration Estimation in an Artificial Pancreas Control System for Patients with Type 1 Diabetes.
[Figure 1: Temporal profiles of reactor (a) temperature and (b) concentration. At the initial time, TR = 385.02 K and CA = 10.3413 × 10−2 mol/lit. Legend: numerical integration; 2nd order Carleman linearization; 3rd order Carleman linearization.]
[Figure 2: Temporal profile of the temperature measurement used by CMHE and EKF.]
[Figure 3: Concentration estimation using the Carleman approximation in the presence of noise on the feed flow rate and temperature measurement. Legend: true CA; 2nd order MHE, MSE = 3.77 × 10−6; 3rd order MHE, MSE = 3.67 × 10−6.]
[Figure 4: Concentration estimation using the Extended Kalman Filter in the presence of noise on the feed flow rate and temperature measurement. Legend: true CA; estimated CA, MSE = 6 × 10−4.]