Constrained Nonlinear State Estimation Using Ensemble Kalman Filters


Ind. Eng. Chem. Res. 2010, 49, 2242–2253

J. Prakash,† Sachin C. Patwardhan,‡ and Sirish L. Shah*,§

† Department of Instrumentation Engineering, Madras Institute of Technology Campus, Anna University, Chennai 600044, India
‡ Department of Chemical Engineering, Indian Institute of Technology, Bombay, Powai, Mumbai 400076, India
§ Department of Chemical and Materials Engineering, University of Alberta, Edmonton T6G 2G6, Canada

* To whom correspondence should be addressed. Fax: (780) 4929388. E-mail: [email protected].

Recursive estimation of the states of constrained nonlinear dynamic systems has attracted the attention of many researchers in recent years. In this work, we propose a constrained recursive formulation of the ensemble Kalman filter (EnKF) that retains the advantages of the unconstrained EnKF while systematically dealing with bounds on the estimated states. The EnKF belongs to the class of particle filters that are increasingly being used for solving state estimation problems associated with nonlinear systems. A highlight of our approach is the use of truncated multivariate distributions for systematically solving the estimation problem in the presence of state constraints. The efficacy of the proposed constrained state estimation algorithm using the EnKF is illustrated by application to two benchmark problems from the literature (a simulated gas-phase reactor and an isothermal batch reactor) involving constraints on estimated state variables, and to another example problem, which involves constraints on the process noise.

1. Introduction

The area of Bayesian nonlinear state estimation is very rich and has been very well researched over the past four decades. Among the variety of nonlinear state estimation approaches available in the literature, the extended Kalman filter (EKF) is arguably the most popular state estimation tool used in industry. The covariance propagation step in the EKF requires linearization of the nonlinear system dynamics around the mean. When the system dimension is large, computing derivatives of the nonlinear state transition functions and measurement functions at each time step can prove to be a computationally demanding exercise. Alleviating the difficulties arising from the computation of the Jacobian has been the main motivation behind a new class of derivative free Kalman filters, such as the unscented Kalman filter (UKF) proposed by Julier and Uhlmann,1 that have appeared recently in the literature. When compared to the EKF, the derivative free filters can be used for state estimation in a much wider class of nonlinear systems. For inputs with non-Gaussian distributions, Julier and Uhlmann1 have shown that the UKF results in approximations that are accurate to at least the second order, while the third and higher order moments can also be approximated with good accuracy by appropriately choosing the tuning parameters.

More recently, a new class of filtering techniques, called particle filtering, has attracted the attention of many researchers. A particle filter (PF) approximates the multidimensional integration involved in the prediction (or propagation) and update steps of the general Bayesian state estimation scheme using Monte Carlo sampling. The ensemble Kalman filter (EnKF), originally proposed by Evensen,2 belongs to the class of particle filters. In the EnKF formulation, similar to the EKF or UKF, the observer gain is computed using the covariance matrix of the innovations and the cross covariance matrix between the predicted state estimation errors and the innovations.

In most physical systems, states and/or parameters are generally not unbounded, which introduces constraints on their estimates. While the EnKF formulation appears to be a promising approach for dealing with a wider class of nonlinear state estimation problems, it cannot handle bounds on the state variables that are being estimated. The nonlinear dynamic data reconciliation (NDDR)3 and moving horizon estimation (MHE)4 formulations proposed in the literature ensure that the state and parameter estimates satisfy bounds. However, the MHE and NDDR techniques are nonrecursive and computationally intensive. Recursive nonlinear dynamic data reconciliation (RNDDR) has recently been developed to take into account constraints on the state estimates.5 This approach combines the computational advantages of recursive estimation with the ability to handle constraints on the state variables. Vachhani et al.6 later proposed unscented recursive nonlinear dynamic data reconciliation (URNDDR) to estimate the state variables of a nonlinear system by combining the advantages of the UKF and RNDDR. Most recently, Kandepu et al.7 have proposed an alternate formulation for a constrained UKF. While this modification can handle constraints on state variables, these approaches (RNDDR and URNDDR) cannot easily handle systems subject to unmeasured stochastic inputs with non-Gaussian distributions. It should be noted that, when the states are constrained, the initial state vector is a random vector with a truncated distribution, and therefore samples (sigma points) for the initial state estimate should be drawn only from the truncated probability distributions. In the URNDDR approach, on the other hand, the sigma points are drawn only from nontruncated multivariate normal distributions. In addition, it is not clear from the URNDDR or constrained UKF7 formulations how the weights on the sigma points should be adjusted by systematically taking into consideration the truncated distributions.

In another recent development, Lang et al.8 have proposed a version of the sequential importance sampling (SIS) based particle filter that can handle constraints. An acceptance/rejection based algorithm has been developed for imposing constraints within the sequential Monte Carlo (SMC) algorithm. This approach is equivalent to using a truncated distribution (prior or likelihood) to satisfy the constraints, which ensures that the posterior also satisfies the constraints. The acceptance/rejection based algorithm, however, may require a large number of samples to be drawn at each SMC step.



The main contributions of this work are as follows. We propose a constrained recursive formulation of the ensemble Kalman filter (C-EnKF) that retains the advantages of the unconstrained EnKF while systematically dealing with bounds on the estimated state variables. In the proposed C-EnKF formulation, it becomes necessary to generate initial samples (particles) from the truncated distributions of the initial state variables. We present a systematic approach for drawing samples from truncated distributions. In the update step of the EnKF, similar to the URNDDR formulation, a constrained optimization problem is formulated over one sampling interval to compute the updated state estimate for each particle. The updated state estimate is then computed using the ensemble mean of these constrained state estimates. It may be noted that the EnKF is a derivative free nonlinear state estimator and can be used for handling state estimation problems associated with systems involving discontinuities. The efficacy of the proposed approaches is demonstrated using the following benchmark problems from the literature:

1. constraints on process noise9
2. constraints on estimated states: (a) gas-phase reactor10 and (b) isothermal batch reactor11

The organization of the paper is as follows. Section 2 discusses recursive Bayesian state estimation and presents the unconstrained EnKF formulation. The proposed constrained state estimation formulation based on the ensemble Kalman filter is presented in section 3. Simulation results are presented in section 4, followed by the main conclusions drawn through the analysis of these results in section 5.

2. Ensemble Kalman Filter

Consider a nonlinear system represented by the following nonlinear state space equations:

x(k) = x(k-1) + ∫_{(k-1)T}^{kT} F[x(t), u(k-1), d̄ + w(k-1)] dt    (1)

y(k) = H[x(k), v(k)]    (2)

In the above process model, x(k) is the system state vector (x ∈ R^n), u(k) is the known system input (u ∈ R^m), w(k) is the state noise (w ∈ R^p) with known distribution, y(k) is the measured state variable (y ∈ R^r), and v(k) is the measurement noise (v(k) ∈ R^r) with known distribution. The index k represents the sampling instant; F[·] and H[·] are the nonlinear process and measurement models, respectively. It is further assumed that the initial state of the system x(0) is a random vector with known probability distribution. The random state noise term is due to random fluctuations in the unmeasured input variables (d̄ ∈ R^p).

2.1. Recursive Bayesian Estimation

The objective of the recursive Bayesian state estimation problem is to find the mean and variance of a random variable x(k) using the conditional probability density function p[x(k)|Y(k)] under the following assumptions: (i) the states follow a first-order Markov process, and (ii) the observations are conditionally independent given the state variables. Here Y(k) denotes the set of all the available measurements, i.e., Y(k) ≡ {y(k), y(k-1), ...}. The posterior density p[x(k)|Y_k] is estimated in two steps: (a) the prediction step, which is computed before obtaining an observation, and (b) the update step, which is computed after obtaining an observation.12

In the prediction step, the posterior density p[x(k-1)|Y_{k-1}] at the previous time step is propagated to the next time step through the transition density p[x(k)|x(k-1)] as follows:

p[x(k)|Y_{k-1}] = ∫ p[x(k)|x(k-1)] p[x(k-1)|Y_{k-1}] dx(k-1)    (3)

The update stage involves the application of Bayes' rule:

p[x(k)|Y_k] = p[y(k)|x(k)] p[x(k)|Y_{k-1}] / p[y(k)|Y_{k-1}]    (4)

where

p[y(k)|Y_{k-1}] = ∫ p[y(k)|x(k)] p[x(k)|Y_{k-1}] dx(k)    (5)

Combining eqs 3, 4, and 5 gives

p[x(k)|Y_k] = p[y(k)|x(k)] [ ∫ p[x(k)|x(k-1)] p[x(k-1)|Y_{k-1}] dx(k-1) ] / [ ∫ p[y(k)|x(k)] p[x(k)|Y_{k-1}] dx(k) ]    (6)

Equation 6 describes how the conditional posterior density function propagates from p[x(k-1)|Y_{k-1}] to p[x(k)|Y_k]. It should be noted that the properties of the state transition equation (1) are accounted for through the transition density function p[x(k)|x(k-1)], while p[y(k)|x(k)] accounts for the nonlinear measurement model (2). The prediction and update strategy provides an optimal solution to the state estimation problem, which, unfortunately, involves high-dimensional integration. The solution is extremely general, and aspects such as multimodality, asymmetries, and discontinuities can be incorporated.1 The exact analytical solution to the recursive propagation of the posterior density is difficult to obtain. However, when the process model is linear and the noise sequences are zero mean Gaussian white noise sequences, the Kalman filter describes the optimal recursive solution to the sequential state estimation problem in the maximum likelihood sense.

2.2. Unconstrained State Estimation Using the Ensemble Kalman Filter

While dealing with nonlinear systems, it becomes necessary to develop approximate and computationally tractable suboptimal solutions to the above sequential Bayesian estimation problem. The particle filter is a numerical method for implementing an optimal recursive Bayesian filter through Monte Carlo simulation. The ensemble Kalman filter, originally proposed by Evensen,2 belongs to the class of particle filters. In this section we describe the most general form of the EnKF as available in the literature.13 The EnKF formulation is based on the premise that, to carry out the prediction and update steps, it is sufficient to estimate the first two moments of p[x(k)|Y_{k-1}] and p[x(k)|Y_k] using sample statistics. Thus, the filter is initialized by drawing N particles {x^(i)(0|0)} from a given distribution. At each time step, N samples {w^(i)(k-1), v^(i)(k): i = 1, ..., N} for {w(k)} and {v(k)} are drawn randomly using the distributions of the state noise and measurement noise. These sample points, together with the particles {x̂^(i)(k-1|k-1): i = 1, ..., N}, are then propagated through the system dynamics to compute a cloud of transformed sample points (particles) as follows:

x̂^(i)(k|k-1) = x̂^(i)(k-1|k-1) + ∫_{(k-1)T}^{kT} F[x(τ), u(k-1), d̄ + w^(i)(k-1)] dτ,   i = 1, 2, ..., N    (7)

These particles are then used to estimate the sample means and covariances as follows:

x̄(k|k-1) = (1/N) Σ_{i=1}^{N} x̂^(i)(k|k-1)    (8)

ȳ(k|k-1) = (1/N) Σ_{i=1}^{N} H[x̂^(i)(k|k-1), v^(i)(k)]    (9)

P_{ε,e}(k) = (1/(N-1)) Σ_{i=1}^{N} [ε^(i)(k)][e^(i)(k)]^T    (10)

P_{e,e}(k) = (1/(N-1)) Σ_{i=1}^{N} [e^(i)(k)][e^(i)(k)]^T    (12)

where

ε^(i)(k) = x̂^(i)(k|k-1) - x̄(k|k-1)    (11)

e^(i)(k) = H[x̂^(i)(k|k-1), v^(i)(k)] - ȳ(k|k-1)    (13)

The Kalman gain and the samples of updated particles are then computed as follows:

L(k) = P_{ε,e}(k) [P_{e,e}(k)]^{-1}    (14)

Ψ^(i)(k|k-1) = y(k) - H[x̂^(i)(k|k-1), v^(i)(k)]    (15)

x̂^(i)(k|k) = x̂^(i)(k|k-1) + L(k) Ψ^(i)(k|k-1)    (16)

Ψ^(i)(k|k) = y(k) - H[x̂^(i)(k|k), v^(i)(k)]    (17)

where i = 1, 2, ..., N. The updated state estimate is computed as the mean of the updated cloud of particles, i.e.,

x̂(k|k) = (1/N) Σ_{i=1}^{N} x̂^(i)(k|k)    (18)

The covariance of the updated estimates can be computed as

P(k|k) = (1/(N-1)) Σ_{i=1}^{N} [γ^(i)(k)][γ^(i)(k)]^T    (19)

γ^(i)(k) = x̂^(i)(k|k) - x̂(k|k)    (20)

While P(k|k) is not required in subsequent computations of the EnKF, it can serve as a measure of the uncertainty associated with the updated estimates. It may be noted that, in eq 15, the predicted observations have been perturbed by drawing samples from the measurement noise distribution. The random perturbations have to be carried out so that the updated sample covariance matrix of the ensemble Kalman filter matches the updated error covariance matrix of the Kalman filter.2 This step helps in creating a new ensemble of states having correct error statistics for the update step.14 It may be noted that the accuracy of the estimates depends on the number of data points (N). Gillijns et al.13 have indicated that an ensemble size between 50 and 100 suffices even for large dimensional systems. Thus, the EnKF can be viewed as a statistical Monte Carlo method where the ensemble of the model states evolves in the state space with the mean as the best estimate and the spread of the ensemble as the error covariance.14

2.3. Comparison with Conventional Approaches

At this point, we would like to compare the EnKF formulation described in the previous subsection with the EKF, UKF, and PF-SIR formulations. The main limitation of the EKF formulation is that the propagation step is equivalent to approximating the expected value of a nonlinear function of a random variable by propagating the mean of the random variable through the nonlinear function, i.e., E[g(x)] ≈ g[E(x)]. The predicted mean of the estimate is defined as follows:

x̂(k|k-1) = x̂(k-1|k-1) + E[ ∫_{(k-1)T}^{kT} F[x(τ), u(k-1), d̄ + w(k-1)] dτ ]    (21)

In the EKF formulation, this is approximated by propagating only the previous mean as follows:

x̂(k|k-1) ≈ x̂(k-1|k-1) + ∫_{(k-1)T}^{kT} F[x(τ), u(k-1), d̄] dτ;   x(k-1) = x̂(k-1|k-1)    (22)

On the other hand, in the EnKF formulation the predicted mean is computed as follows:

x̄(k|k-1) ≈ (1/N) Σ_{i=1}^{N} x̂^(i)(k|k-1)    (23)

In addition, the state covariance propagation in the EKF is carried out using the Taylor series expansion of the nonlinear state transition operator and neglecting terms higher than second order. This step requires analytical computation of the Jacobian at each time step. The EnKF approach, on the other hand, provides a derivative free method for estimating the covariances required in the update step. Moreover, this also implies that the nonlinear function vectors F[·] and H[·] need not be smooth and differentiable. Thus, a major advantage of the EnKF is that it can be applied for state estimation in systems with discontinuous nonlinear transformations. The particles straddle the discontinuity and, hence, can approximate the effect of the discontinuity while estimating the covariances. Evensen14 provides a comprehensive review of the theoretical properties and applications of the EnKF.

The main differences between the UKF and the EnKF are as follows. (1) In the UKF formulation, the method for drawing samples {w^(i)(k-1), v^(i)(k): i = 1, ..., [2(n+p+r)+1]} and {x̂^(i)(k-1|k-1): i = 1, ..., [2(n+p+r)+1]} has been derived based on the assumption that the underlying distributions are multivariate Gaussian.1 Also, in the case of the UKF, the method of selecting particles (also known as sigma points) is deterministic, and these sample points form a symmetric set of 2(n+r+p) points that lie on the [2(n+p+r)]^(1/2)th covariance contour.1 In the EnKF, by contrast, at each time step N samples {w^(i)(k-1): i = 1, ..., N} for {w(k)} are drawn randomly from the state noise distribution, irrespective of whether that distribution is Gaussian or non-Gaussian. (2) In the UKF formulation, the Kalman gain is used to compute the updated mean and covariance matrix (similar to the EKF), and thereafter, at the next time step, a fresh set of deterministic sample points is drawn using the updated mean and the updated covariance matrix. In the update step of the EnKF, the Kalman gain is applied to each particle individually to create an ensemble of updated particles. This particle cloud is propagated to the next step. Thus, in both the prediction step and the update step of the EnKF, the particles are treated individually.
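As an illustration of the recursions in eqs 7-18, the following Python/NumPy sketch implements one EnKF predict/update cycle in which, as described above, the gain is applied to each particle individually. The function and argument names (enkf_step, f, h, draw_w, draw_v) are our own illustrative choices and are not taken from the paper.

```python
import numpy as np

def enkf_step(particles, y, f, h, draw_w, draw_v, rng):
    """One unconstrained EnKF cycle (a sketch of eqs 7-18).

    particles : (N, n) array of updated particles from time k-1
    y         : measurement vector at time k, shape (r,)
    f         : discretized state transition, f(x, w) -> next state (eqs 1, 7)
    h         : measurement map, h(x, v) -> predicted measurement (eq 2)
    draw_w, draw_v : callables returning one process/measurement noise sample
    """
    N = particles.shape[0]

    # Prediction: propagate each particle with its own noise draw (eq 7)
    x_pred = np.array([f(x, draw_w(rng)) for x in particles])

    # Perturbed predicted measurements (used in eqs 9, 13, and 15)
    y_pred = np.array([h(x, draw_v(rng)) for x in x_pred])

    eps = x_pred - x_pred.mean(axis=0)      # eq 11
    e = y_pred - y_pred.mean(axis=0)        # eq 13
    P_xy = eps.T @ e / (N - 1)              # cross covariance, eq 10
    P_yy = e.T @ e / (N - 1)                # innovation covariance, eq 12

    L = P_xy @ np.linalg.inv(P_yy)          # Kalman gain, eq 14

    # Update each particle individually with its own innovation (eqs 15-16)
    x_upd = x_pred + (y - y_pred) @ L.T

    return x_upd, x_upd.mean(axis=0)        # updated ensemble and estimate (eq 18)
```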


The particle filter is a numerical method for implementing an optimal recursive Bayesian filter by sequential Monte Carlo (SMC) simulation. Furthermore, a particle filter can deal with state estimation problems arising from multimodal and non-Gaussian distributions.10,12 These filters can also be classified as derivative free nonlinear filters. As pointed out by Arulampalam et al.,12 the sequential importance sampling (SIS) particle filter formulation suffers from the degeneracy problem; that is, after a few iterations, all but one particle will have negligible weight. To avoid the problems associated with the particle filter, the ensemble Kalman filter has been proposed by Evensen and co-workers2 as a hybrid approach between the particle filter and the Kalman filter. Thus, the prediction step of the EnKF is identical to that of the SIS based particle filter. However, the update step of the EnKF is similar to that of the Kalman filter. Therefore, the performance of the EnKF may be suboptimal when compared to the SIS based particle filter in the presence of non-Gaussian distributions.

3. Constrained State Estimation Using the Ensemble Kalman Filter

In many practical problems of interest in the process industry, it becomes necessary to account for bounds on the state variables being estimated. If it is desired to apply the ensemble Kalman filter for state estimation when the state variables are bounded, then it becomes necessary to modify the ensemble Kalman filter described in section 2. To deal with such bounds in the EnKF framework, we have to address the following issues: (1) generating initial particles {x^(i)(0|0)} that are consistent with the bounds on the states, and (2) reformulating the update step of the EnKF algorithm to account for the bounds on the state variables. Drawing motivation from the RNDDR formulation, we propose a constrained version of the EnKF, which is referred to as C-EnKF15 in the rest of the text. To begin with, we describe the procedure for generating particles that are consistent with the bounds using the concept of truncated distributions. We then proceed to present the C-EnKF algorithm.

3.1. Generation of Truncated Distribution

When the states have bounds or when there are constraints on the process noise, it becomes necessary to generate particles that are consistent with these bounds. This can be achieved by using the concept of truncated distributions. A truncated distribution is a conditional distribution that is conditioned on the bounds on the random variable. For example, given a probability density function f(ζ) and the corresponding cumulative distribution function φ(ζ) defined over (-∞, ∞), the truncated density function can be defined as follows:

f(ζ | a < ζ ≤ b) = f(ζ) / [φ(b) - φ(a)]    (24)

such that

∫_a^b f(ζ | a < ζ ≤ b) dζ = (1/[φ(b) - φ(a)]) ∫_a^b f(ζ) dζ = 1

In particular, the truncated univariate normal distribution can be obtained as follows:

N_T[ζ̄, σ | a < ζ ≤ b] = (1/[φ(η_b) - φ(η_a)]) · exp(-η²/2) / [σ(2π)^(1/2)]    (25)

η = (ζ - ζ̄)/σ,   η_a = (a - ζ̄)/σ,   η_b = (b - ζ̄)/σ
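As a quick numerical illustration of eq 25, the standardized limits η_a and η_b are exactly the shape parameters expected by scipy.stats.truncnorm. The short sketch below, with an arbitrarily chosen nonnegativity constraint, is our own example and not part of the original paper.

```python
import numpy as np
from scipy.stats import truncnorm

# Draw from N(zeta_bar, sigma^2) truncated to (a, b], cf. eq 25.
zeta_bar, sigma = 0.0, 1.0
a, b = 0.0, np.inf                      # example: nonnegativity constraint
eta_a, eta_b = (a - zeta_bar) / sigma, (b - zeta_bar) / sigma
samples = truncnorm.rvs(eta_a, eta_b, loc=zeta_bar, scale=sigma,
                        size=1000, random_state=0)
assert samples.min() >= a               # all draws respect the lower bound
```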

Now, consider a situation in which the distribution of the random variable vector x is approximated by a truncated multivariate Gaussian density function, denoted as N_T[x̄, P] and defined over the bounds (x_L, x_H). In this work, we make use of the following approach for the generation of samples from the truncated normal distribution.16 Since P is a symmetric and positive definite matrix, its Cholesky factorization yields the lower-triangular factor

P^(1/2) = [ s_11   0     ...   0
            s_21   s_22  ...   0
            ...    ...   ...   ...
            s_n1   s_n2  ...   s_nn ]

Using P^(1/2), we define a transformed random vector T such that x = x̄ + P^(1/2) T. The bounds x_L ≤ x ≤ x_H then translate, element by element, into

(x_{L,1} - x̄_1)/s_11 ≤ t_1 ≤ (x_{H,1} - x̄_1)/s_11
(x_{L,2} - x̄_2 - s_21 t_1)/s_22 ≤ t_2 ≤ (x_{H,2} - x̄_2 - s_21 t_1)/s_22
...
(x_{L,n} - x̄_n - Σ_{r=1}^{n-1} s_nr t_r)/s_nn ≤ t_n ≤ (x_{H,n} - x̄_n - Σ_{r=1}^{n-1} s_nr t_r)/s_nn    (26)

With this transformation, we can define n truncated univariate normal distributions N_T^(j)[0, 1 | t_{L,j} ≤ t_j ≤ t_{H,j}], where the limits of the jth truncated distribution are defined as follows:

t_{L,j} = (x_{L,j} - x̄_j - Σ_{r=1}^{j-1} s_jr t_r) / s_jj;    t_{H,j} = (x_{H,j} - x̄_j - Σ_{r=1}^{j-1} s_jr t_r) / s_jj

The above transformation requires that we draw the samples recursively. Thus, t_1 is first drawn from N_T^(1)[0, 1 | t_{L,1} ≤ ζ ≤ t_{H,1}]. Then, after setting the limits (t_{L,2}, t_{H,2}), t_2 can be drawn from N_T^(2)[0, 1 | t_{L,2} ≤ ζ ≤ t_{H,2}], and so on. Thus, a sample from the truncated normal distribution N[x̄, P] can be expressed as follows:

x = x̄ + P^(1/2) T;   T = [t_1  t_2  ...  t_n]^T    (27)
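The recursion in eqs 26 and 27 (the Geweke-Hajivassiliou-Keane construction cited in Remark 1 below) can be coded directly. The following Python sketch is our illustrative implementation for the box-constrained Gaussian case, not the authors' code; each univariate truncated draw uses inverse-CDF sampling.

```python
import numpy as np
from scipy.stats import norm

def sample_truncated_mvn(x_bar, P, x_lo, x_hi, rng):
    """One draw from N(x_bar, P) restricted to the box [x_lo, x_hi],
    following the recursive construction of eqs 26-27 (sketch only)."""
    S = np.linalg.cholesky(P)          # lower-triangular factor P^{1/2}
    n = len(x_bar)
    t = np.zeros(n)
    for j in range(n):
        # Limits t_{L,j}, t_{H,j} given the previously drawn t_1, ..., t_{j-1}
        shift = x_bar[j] + S[j, :j] @ t[:j]
        t_lo = (x_lo[j] - shift) / S[j, j]
        t_hi = (x_hi[j] - shift) / S[j, j]
        # Draw t_j from N(0, 1) truncated to [t_lo, t_hi] via the inverse CDF
        u = rng.uniform(norm.cdf(t_lo), norm.cdf(t_hi))
        t[j] = norm.ppf(u)
    return x_bar + S @ t               # eq 27
```

Repeated calls of such a routine can be used to build the initial ensemble {x̂^(i)(0|0)} required by the constrained formulation of section 3.2.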

Remark 1. The above procedure to draw samples from a truncated distribution can be used if the initial state estimate can be adequately represented using a truncated multivariate Gaussian density function. However, in general, the distribution associated with the initial state can be non-Gaussian and multimodal. Similarly, the probability distribution associated with the state noise can be non-Gaussian. Given an initial state (or state noise) with a known but non-Gaussian distribution, the truncation procedure (i.e., the Geweke-Hajivassiliou-Keane (GHK) simulator) applicable to the multivariate Gaussian case can be extended to the non-Gaussian case. The proposed extension is based on the fact that any arbitrary probability density function can be approximated by a convex combination of Gaussian distributions.17 For example, given an initial estimate with non-Gaussian distribution p[x(0)], a Gaussian sum approximation to this density function can be developed as follows:

p[x(0)] ≈ Σ_{j=1}^{N_0} ω_j N^(j)[x(0) - x^(j)(0), P^(j)(0)];   0 < ω_j < 1 and Σ_{j=1}^{N_0} ω_j = 1

Then, each Gaussian distribution N^(j)[x(0) - x^(j)(0), P^(j)(0)] can be truncated by following the above procedure and then used for drawing samples.
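A minimal sketch of this Gaussian-sum route is given below: a mixture component is selected according to the weights ω_j, and a truncated draw is then taken from that component, as Remark 1 describes. For brevity we assume diagonal component covariances here so that scipy's truncnorm can be used coordinate-wise; this simplification and all names are ours, not the paper's.

```python
import numpy as np
from scipy.stats import truncnorm

def sample_truncated_gaussian_sum(weights, means, stds, x_lo, x_hi, rng):
    """Draw one sample from a Gaussian-sum approximation of a non-Gaussian
    density, truncating the selected component to the box [x_lo, x_hi].

    weights : (m,) mixture weights, summing to 1 (used as given, per Remark 1)
    means   : (m, n) component means
    stds    : (m, n) component standard deviations (diagonal covariances)
    """
    j = rng.choice(len(weights), p=weights)       # pick a mixture component
    mu, sd = means[j], stds[j]
    a = (x_lo - mu) / sd                          # standardized lower limits
    b = (x_hi - mu) / sd                          # standardized upper limits
    return truncnorm.rvs(a, b, loc=mu, scale=sd, random_state=rng)
```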

3.2. Constrained EnKF Formulation

The filter is initialized by drawing N particles {x̂^(i)(0|0)} from a given distribution. The procedure described in section 3.1 for drawing samples from a truncated distribution is used for generating the initial ensemble {x̂^(i)(0|0): i = 1, ..., N}. At each time step, N samples {w^(i)(k-1), v^(i)(k): i = 1, ..., N} for {w(k)} and {v(k)} are drawn randomly using the distributions of the state noise and measurement noise. These sample points, together with the particles {x̂^(i)(k-1|k-1): i = 1, ..., N}, are then propagated through the system dynamics to compute a cloud of transformed sample points (particles) as follows:

x̂^(i)(k|k-1) = x̂^(i)(k-1|k-1) + ∫_{(k-1)T}^{kT} F[x(τ), u(k-1), d̄ + w^(i)(k-1)] dτ,   i = 1, 2, ..., N    (28)

As suggested by Kandepu et al.,7 the transformed sample points (particles) that lie outside the feasible region are projected onto the boundary to obtain the constrained sample points x̂_c^(i)(k|k-1), as given below:

x̂_c^(i)(k|k-1) = Pr[x̂^(i)(k|k-1)],   i = 1, 2, ..., N    (29)

These particles x̂_c^(i)(k|k-1) are then used to estimate the sample mean and covariance as follows:

x̄(k|k-1) = (1/N) Σ_{i=1}^{N} x̂_c^(i)(k|k-1)    (30)
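For the simple bound constraints considered here, the projection in eq 29 reduces to an element-wise clipping of each predicted particle onto [x_L, x_U], after which the constrained sample mean of eq 30 is an ordinary ensemble average. A small illustrative sketch (our naming, not the paper's):

```python
import numpy as np

def project_and_average(x_pred, x_lo, x_hi):
    """Project predicted particles onto the feasible box (eq 29) and
    return them with their sample mean (eq 30). x_pred has shape (N, n)."""
    x_c = np.clip(x_pred, x_lo, x_hi)      # constrained particles x_c^(i)(k|k-1)
    return x_c, x_c.mean(axis=0)           # x_bar(k|k-1)
```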

The modifications necessary in the update step of the constrained formulation are as follows. To begin with, the covariance of the predicted estimates is estimated as

P(k|k-1) = (1/(N-1)) Σ_{i=1}^{N} [ε^(i)(k)][ε^(i)(k)]^T    (31)

ε^(i)(k) = x̂_c^(i)(k|k-1) - x̄(k|k-1)    (32)

The updated state estimates are then obtained by solving a set of constrained optimization problems formulated over one sampling interval. The optimization problem for the ith particle is formulated as follows:

min_{x^(i)(k)}  [ξ^(i)(k)^T P(k|k-1)^{-1} ξ^(i)(k) + e^(i)(k)^T R^{-1} e^(i)(k)]    (33)

e^(i)(k) = y(k) - H[x^(i)(k), v^(i)(k)]
ξ^(i)(k) = x̂_c^(i)(k|k-1) - x^(i)(k)

subject to x_L ≤ x^(i)(k) ≤ x_U

The solutions of the above set of optimization problems yield the updated particles {x̂^(i)(k|k): i = 1, 2, ..., N}. The updated state estimate is then computed using eq 18. It is important to note that, in the absence of bounds on the states and when the dynamic model given by eqs 1 and 2 is linear, the above formulation reduces to the linear EnKF formulation. The details of the derivation are presented in the Appendix.

Remark 2. When the measurement model is linear, i.e., y(k) = Cx(k) + v(k), the set of constrained optimization problems defined by eq 33 reduces to the following set of quadratic programming problems:

min_{x^(i)(k)}  [x^(i)(k)^T H(k) x^(i)(k) + f(k)^T x^(i)(k)]

H(k) = P(k|k-1)^{-1} + C^T R^{-1} C
f(k)^T = -2[x̂_c^(i)(k|k-1)^T P(k|k-1)^{-1} + (y(k) - v^(i)(k))^T R^{-1} C]

subject to x_L ≤ x^(i)(k) ≤ x_U

This step considerably reduces the computational complexity in the update step.
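As a concrete illustration of eq 33, the sketch below solves the per-particle bound-constrained problem with a generic quasi-Newton solver (scipy's L-BFGS-B). This is only one possible implementation; the paper does not prescribe a particular optimizer, and the names used here are ours. When the measurement model is linear, the same objective is the quadratic program of Remark 2.

```python
import numpy as np
from scipy.optimize import minimize

def constrained_update(x_pred_c, P_inv, R_inv, y, v_i, h, x_lo, x_hi):
    """Updated particle x_hat^(i)(k|k) from the constrained problem of eq 33.

    x_pred_c : constrained predicted particle x_c^(i)(k|k-1)
    P_inv    : inverse of P(k|k-1) from eq 31
    R_inv    : inverse of the measurement noise covariance
    h        : measurement map, h(x, v) -> predicted measurement
    """
    def objective(x):
        xi = x_pred_c - x                 # xi^(i)(k)
        e = y - h(x, v_i)                 # e^(i)(k)
        return xi @ P_inv @ xi + e @ R_inv @ e

    res = minimize(objective, x0=x_pred_c, method="L-BFGS-B",
                   bounds=list(zip(x_lo, x_hi)))
    return res.x
```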

4. Simulation Studies

In this section, we present simulation case studies to illustrate the efficacy of the proposed constrained EnKF scheme. We evaluate the efficacy of the C-EnKF on the following benchmark problems:

1. a system with constraints on the process noise9
2. systems with constraints on the process state variables: (a) a gas-phase reactor with an irreversible reaction system10 and (b) an isothermal batch reactor11

The performance of the proposed nonlinear state estimation scheme is assessed through stochastic simulation studies. Thus, for each case that is analyzed, a simulation run consisting of N_T trials, with the length of each simulation trial equal to L, is conducted. The sum of squared estimation errors (SSEE), defined as

SSEE = Σ_{k=1}^{L} [x^(i)(k) - x̂^(i)(k|k)]^2

is used as a performance index, where x^(i) denotes the ith entry of the vector x. Statistics of the SSEE computed for each simulation run are used to assess the efficacy of the estimation scheme.

4.1. System with Constraints on Process Noise9

This example has been included to demonstrate the importance of using a truncated distribution to account for constraints on the process noise. As there are no constraints on the states, we consider the unconstrained version of the EnKF for state estimation together with the concept of truncated distributions. The performance of the unconstrained EnKF in the presence of additive non-Gaussian measurement error (Figure 4) is also investigated. It may be noted that, if there are constraints on the process noise and/or the measurement noise is non-Gaussian, then the conventional unconstrained versions of the UKF and EKF cannot be employed to carry out state estimation. Consider the following nonlinear dynamic system with a nonnegative constraint on the process noise:9

x1(k+1) = 0.99 x1(k) + 0.2 x2(k)    (34)

x2(k+1) = -0.1 x1(k) + 0.5 x2(k)/(1 + [x2(k)]^2) + w(k)    (35)

y(k) = x1(k) - 3 x2(k) + v(k)    (36)
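To show how this example can be set up, the following Python sketch simulates eqs 34-36 using the noise and initial-state assumptions stated in the next paragraph (half-normal process noise w(k) = |z(k)|, Gaussian measurement noise with variance 0.01, and x(0) drawn from N(0, I)); the function name is our own illustrative choice.

```python
import numpy as np

def simulate_example(n_steps=100, seed=0):
    """Generate true states and measurements for eqs 34-36 (illustrative)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(2)                 # x(0) ~ N(0, I)
    xs, ys = [], []
    for _ in range(n_steps):
        w = abs(rng.standard_normal())         # w(k) = |z(k)|, half-normal
        v = np.sqrt(0.01) * rng.standard_normal()
        x = np.array([0.99 * x[0] + 0.2 * x[1],                         # eq 34
                      -0.1 * x[0] + 0.5 * x[1] / (1 + x[1] ** 2) + w])  # eq 35
        xs.append(x)
        ys.append(x[0] - 3.0 * x[1] + v)                                # eq 36
    return np.array(xs), np.array(ys)
```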

We assume that {v(k)} is a sequence of independent, zero mean, normally distributed random variables with covariance 0.01 and w(k) = |z(k)|, where {z(k)} is a sequence of independent, zero mean, normally distributed random variables with unit covariance. The probability density function of the state noise is shown in Figure 1. We also assume that the initial state is normally distributed with zero mean and covariance equal to the identity. It may be noted that only one linear combination of the state variables is measured and that the samples {w^(i)(k): i = 1, ..., N} have been drawn from the truncated normal distribution. The performance of the EnKF (N = 150) in the presence of constraints on the process noise is shown in Figure 2. From the response (Figure 2), it can be concluded that the state estimates obtained using the EnKF in the presence of constraints on the process noise are quite satisfactory. The average sum of squared estimation errors based on 25 trials is reported in Table 1.

4.1.1. Performances of Unconstrained EnKF in the Presence of Constraints on Process Noise and Non-Gaussian Measurement Noise

The performance of the EnKF in the presence of additive non-Gaussian measurement error is shown in Figure 3. The probability density function of the non-Gaussian measurement noise is as follows:

p[v(k)] = (P1/[(2π)^(1/2) σ1]) exp[-0.5(x - µ1)^2/σ1^2] + ((1 - P1)/[(2π)^(1/2) σ2]) exp[-0.5(x - µ2)^2/σ2^2]

The distribution of the measurement noise is a mixture (P1 represents the mixture coefficient) of two normally distributed random variables. P1 has been chosen to be 0.6. The distributions of the random variables that are used to generate the non-Gaussian measurement noise are N[0.1, 0.01] and N[-0.1, 0.01], respectively. The probability density function of the measurement noise is shown in Figure 4. The average sum of squares of estimation errors based on N_T (=25) simulation trials in the presence of additive non-Gaussian measurement noise is reported in Table 2. From Figure 3 and Table 2, it can be concluded that, when the ensemble size is sufficiently large, the unconstrained EnKF formulation performs equally well for non-Gaussian measurement noise, except for the increase in the average value of the sum of squares of estimation error (Table 2).

4.2. Gas-Phase Reactor10

Consider the gas-phase irreversible reaction in a well-mixed, constant-volume, isothermal batch reactor. The governing equations for the isothermal batch reactor are as follows:

dpA/dt = -2 k1 pA^2    (37)

dpB/dt = k1 pA^2    (38)

P = [1  1] [pA  pB]^T    (39)

2A → B,   k1 = 0.16    (40)

where x = [pA; pB] denotes the partial pressures of A and B. We have assumed that random errors (Gaussian white noise) are present in the measurement (total pressure) as well as in the state variables. The covariance matrices of the state noise and measurement noise are assumed as follows:

Q = [ 10^-6  0 ;  0  10^-6 ]   and   R = [(0.1)^2]

The sampling time has been chosen as 0.1. The initial state error covariance matrix has been chosen as

P(0|0) = [ 36  0 ;  0  36 ]

The initial state for the process and the state estimator are chosen as x(0|0) = [3  1]^T and x̂(0|0) = [0.1  4.5]^T, respectively. In all simulation runs, the process is simulated using the nonlinear first principles model (eqs 37 and 38), and the true state variables are computed by solving the nonlinear differential equations using the differential equation solver in Matlab 7.7.

4.2.1. Performances of Unconstrained and Constrained EnKF

The problem at hand is to generate nonnegative estimates of the partial pressures starting from the given initial estimates. Thus, the lower bound and upper bound values imposed on the state variables are x_L = [0  0]^T and x_U = [100  100]^T, respectively.
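For reference, the gas-phase reactor model of eqs 37-39, with the sampling time and parameters listed above, can be written as the following short Python sketch (our own illustrative, noise-free rendering, not code from the paper):

```python
import numpy as np
from scipy.integrate import solve_ivp

k1 = 0.16                                        # rate constant, eq 40

def reactor_rhs(t, x):
    pA, pB = x
    return [-2.0 * k1 * pA ** 2, k1 * pA ** 2]   # eqs 37 and 38

def propagate(x, dt=0.1):
    """Advance the partial pressures over one sampling interval and return
    the total-pressure measurement of eq 39 (without noise)."""
    sol = solve_ivp(reactor_rhs, (0.0, dt), x, rtol=1e-8)
    x_next = sol.y[:, -1]
    return x_next, x_next.sum()                  # P = pA + pB
```

For example, propagate(np.array([3.0, 1.0])) advances the true state one step from the initial condition x(0|0) given above.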

Figure 1. Non-Gaussian state noise probability density function (PDF) for a system with nonnegative constraints on process noise.

It should be noted that the initial samples for the state variables (x̂^(i)(0|0)) have been drawn from the truncated normal distribution.


Figure 2. System with constraints on process noise: evolution of true states and estimated states using unconstrained EnKF (Gaussian measurement noise).

Table 1. System with Constraints on Process Noise: Mean SSEE Values for 25 Trials (Unconstrained EnKF)

no. of sample points (N)    state x1    state x2
50                           96.9979    10.9573
150                          71.9879     8.1511

In Figure 5 we have compared the performance of the constrained EnKF (C-EnKF) with that of RNDDR5 and C-UKF7 in the presence of deliberately introduced large plant-model mismatch in the initial state variables. The performance of the unconstrained EnKF (or EnKF) under identical conditions is shown in Figure 6. The average sum of squares of estimation errors obtained based on N_T (=25) trials is reported in Table 3. From Figure 5 it can be concluded that reasonably accurate estimates of the partial pressures of A and B are obtained using C-EnKF, whereas the estimated partial pressures of A and B are found to be significantly biased in the case of the unconstrained EnKF for the sample points (particles) 10 and 100, respectively (see Figure 6). It should be noted that the estimated value of the partial pressure of A has been found to be negative in the case of the unconstrained EnKF. On the other hand, the constraints are never violated when the proposed C-EnKF method is employed for state estimation. The estimates stay far from the constraint boundaries and converge to the true values. Moreover, compared to the behavior of the estimates generated using the RNDDR and C-UKF formulations, the proposed C-EnKF algorithm is much quicker in reducing the estimation error. This is also reflected in the significantly smaller average SSEE obtained using C-EnKF when compared to those obtained using the RNDDR and C-UKF formulations. In addition, as would be expected in a statistical Monte Carlo method, the average value of the sum of squares of estimation error reduced as the ensemble size was increased. The average computation time per iteration (Matlab 7.7.0-R 2008(b), Intel Core 2 Duo Processor, 2.13 GHz) of the C-EnKF algorithm is reported in Table 3. However, it should be noted that the average computation time per sampling instant increases as we use more sample points (particles) in the C-EnKF algorithm.

4.3. Isothermal Batch Reactor11

Consider the following two gas-phase reversible reactions in a well-mixed, isothermal batch reactor:

A ⇌ B + C   (forward rate constant k1, reverse rate constant k2)
2B ⇌ C      (forward rate constant k3, reverse rate constant k4)

k = [k1  k2  k3  k4]^T = [0.5  0.05  0.2  0.01]^T

The governing equations for the isothermal batch reactor are as follows:

ẋ = f(x) = ν^T r

r = [ k1 cA - k2 cB cC ;  k3 cB^2 - k4 cC ],    ν = [ -1  1  1 ;  0  -2  1 ]

y = [32.84  32.84  32.84] x

where x = [cA; cB; cC] denotes the concentrations of components A, B, and C. We have assumed that random errors (Gaussian white noise) are present in the measurement as well as in the state variables. The covariance matrices of the state noise and measurement noise are assumed11 as follows:

Q = [ 10^-6  0  0 ;  0  10^-6  0 ;  0  0  10^-6 ]   and   R = [(0.25)^2]

The sampling time is chosen as 0.25. The initial state error covariance matrix is chosen as

P(0|0) = [ 0.25  0  0 ;  0  0.25  0 ;  0  0  0.25 ]
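With these parameters, the batch reactor model (ẋ = ν^T r with the measurement y = 32.84(cA + cB + cC)) can be sketched in Python as follows; this is our illustrative, noise-free rendering of the model, not code from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

k1, k2, k3, k4 = 0.5, 0.05, 0.2, 0.01       # rate constants k

nu = np.array([[-1.0, 1.0, 1.0],
               [0.0, -2.0, 1.0]])           # stoichiometric matrix

def batch_rhs(t, x):
    cA, cB, cC = x
    r = np.array([k1 * cA - k2 * cB * cC,
                  k3 * cB ** 2 - k4 * cC])  # reaction rates r
    return nu.T @ r                         # x_dot = nu^T r

def propagate(x, dt=0.25):
    """Advance the concentrations over one sampling interval and return
    the measurement y = 32.84 (cA + cB + cC), noise-free."""
    sol = solve_ivp(batch_rhs, (0.0, dt), x, rtol=1e-8)
    x_next = sol.y[:, -1]
    return x_next, 32.84 * x_next.sum()
```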


Figure 3. System with constraints on process noise: evolution of true states and estimated states using unconstrained EnKF in the presence of non-Gaussian measurement noise.

Figure 4. System with constraints on process noise: probability density function of non-Gaussian measurement noise.

The initial state for the process and the state estimator are chosen as x(0|0) = [0.5  0.05  0]^T and x̂(0|0) = [0  0  1]^T, respectively. The problem at hand is to generate nonnegative estimates of the concentrations starting from the given initial estimates. Thus, the lower bound and upper bound values imposed on the state variables are x_L = [0  0  0]^T and x_U = [10  10  10]^T, respectively. The number of sample points used to estimate the statistics of the estimated state of the model has been chosen to be 10, i.e., N = 10.


Figure 5. Gas-phase reactor: evolution of true states and estimated states.

Figure 6. Gas-phase reactor: evolution of true states and estimated states using unconstrained EnKF.

It may be noted that the initial samples for the state variables (x̂^(i)(0|0)) are drawn from the truncated normal distribution. The performances of C-EnKF (N = 10), RNDDR, C-UKF, and (unconstrained) EnKF (N = 10, N = 25) in the presence of a deliberately introduced large initial estimation error are shown in Figures 7 and 8, respectively. From Figure 7, it can be seen that the estimates generated by C-EnKF smoothly converge to their respective true values before the 80th sampling instant. On the other hand, the estimated concentrations are found to be significantly biased in the case of the EnKF (unconstrained; see Figure 8). The estimated values of the concentrations of A and B become negative in the case of the unconstrained EnKF (see Figure 8). Increasing the ensemble size does not change this scenario significantly. It may also be noted that the constraints are never violated when the proposed C-EnKF method is employed for state estimation. The estimates stay far from the constraint boundaries and converge to the true values. The average sum of squares of estimation errors based on N_T (=25) trials is reported in Table 4. As can be seen from this table, increasing the number of samples increases the rate of convergence and significantly reduces the SSEE values. Moreover, the average SSEE of C-EnKF (for all sample points) is found to be significantly less than those of RNDDR and C-UKF. The average computation time per iteration (Matlab 7.7.0-R 2008(b), Intel Core 2 Duo Processor, 2.13 GHz) of the C-EnKF algorithm is reported in Table 4. However, it should be noted that the average computation time per sampling instant increases as we use more sample points (particles) in the C-EnKF algorithm.

Figure 7. Isothermal batch reactor: evolution of true states and estimated states.

Figure 8. Isothermal batch reactor: evolution of true states and estimated states using unconstrained EnKF.

Table 2. System with Constraints on Process Noise: Mean SSEE Values for 25 Trials (Non-Gaussian Measurement Noise, Unconstrained EnKF)

no. of sample points (N)    state x1    state x2
50                          116.5863    14.1268
150                          71.6254     8.8459

Table 3. Gas-Phase Reactor: Mean SSEE for 25 Trials (Gaussian Measurement Noise)

state estimation scheme    no. of sample points (N)    partial press. of A    partial press. of B    average computation time per iteration (seconds)
C-EnKF                     25                          2.5309                 2.9062                 0.584
C-EnKF                     50                          2.2964                 2.6629                 1.164
C-EnKF                     150                         1.9382                 2.2034                 3.476
RNDDR                      -                           37.1644                40.6176                0.019
C-UKF                      -                           8.3219                 4.3609                 0.034

Table 4. Isothermal Batch Reactor: Mean SSEE for 25 Trials (Gaussian Measurement Noise)

state estimation scheme    concn A    concn B    concn C    average computation time per iteration (seconds)
C-EnKF (N = 25)            0.1605     0.1836     0.4385     0.904
C-EnKF (N = 100)           0.1401     0.1345     0.3308     3.533
RNDDR                      0.2105     0.3370     0.7401     0.021
C-UKF                      0.4427     0.2741     0.8176     0.047

5. Concluding Remarks

In this work, we have proposed a constrained recursive formulation of the ensemble Kalman filter (EnKF) that retains the advantages of the unconstrained EnKF while systematically dealing with bounds on the estimated state variables. We have highlighted the use of truncated multivariate distributions for systematically solving the estimation problem in the presence of state and process noise constraints. The performance of the proposed constrained EnKF is compared with the performances obtained using the recursive constrained formulations available in the literature (i.e., RNDDR5 and C-UKF7) using two benchmark examples from the literature (a gas-phase reactor and an isothermal batch reactor), which involve constraints on the estimated state variables. In both these examples, the conventional unconstrained EnKF clearly violated the bounds (i.e., generated negative concentration estimates and negative partial pressure estimates) and produced unacceptable performance. The estimates generated by the proposed C-EnKF scheme, on the other hand, remained far from the constraint boundaries and quickly converged to the true values. Moreover, when compared on the basis of SSEE values, the performance of the proposed C-EnKF formulation was found to be significantly better than those obtained using the RNDDR and C-UKF formulations. Furthermore, the performance of the proposed C-EnKF scheme was found to be satisfactory when employed for state estimation in a system having constraints on the process noise.

Acknowledgment

J. Prakash would like to thank the Department of Science and Technology, Government of India, for providing support in the form of the BOYSCAST fellowship tenured at the University of Alberta. The authors would also like to acknowledge financial support through the NSERC-MATRIKON-SUNCORE-ICORE Industrial Research Chair program in Computer Process Control at the University of Alberta.

Appendix: Derivation of Linear Ensemble Kalman Filter

Consider a discrete linear stochastic system of the form

x(k) = Ax(k-1) + w(k-1);   y(k) = Cx(k) + v(k)    (A.1)

where w(k-1) and v(k) are independent normally distributed random variables with covariance matrices Q and R, respectively. Assume that particles x^(i)(k-1|k-1): i = 1, 2, ..., N are available at the kth sampling instant and the measurement y(k) has been obtained. The prediction step becomes

x̂^(i)(k|k-1) = A x̂^(i)(k-1|k-1) + w^(i)(k-1),   i = 1, 2, ..., N    (A.2)

In the absence of any state constraint violations, the uncertainty in the predicted state estimates, P(k|k-1), can be computed using eqs 8 and 12. With these covariance estimates, the correction step can be formulated as shown below:

min_{x(k)}  [ξ^(i)(k)^T P(k|k-1)^{-1} ξ^(i)(k) + e^(i)(k)^T R^{-1} e^(i)(k)]    (A.3)

ξ^(i)(k) = x(k) - x̂^(i)(k|k-1)
e^(i)(k) = y(k) - Cx(k) - v^(i)(k)

An unconstrained solution of the above optimization problem is obtained as follows:

x̂^(i)(k|k) = x̂^(i)(k|k-1) + [P(k|k-1)^{-1} + C^T R^{-1} C]^{-1} C^T R^{-1} [y^(i)(k) - C x̂^(i)(k|k-1)]    (A.4)

y^(i)(k) = y(k) - v^(i)(k)

Using the matrix inversion lemma, it can be shown that

[P(k|k-1)^{-1} + C^T R^{-1} C]^{-1} C^T R^{-1} = P(k|k-1) C^T [R + C P(k|k-1) C^T]^{-1} = L(k)    (A.5)

x̂^(i)(k|k) = x̂^(i)(k|k-1) + L(k)[y^(i)(k) - C x̂^(i)(k|k-1)]    (A.6)

For the additive noise case, it can be further shown that

P_{ε,e}(k) = P(k|k-1) C^T,   P_{e,e}(k) = R + C P(k|k-1) C^T,   L(k) = P_{ε,e}(k)[P_{e,e}(k)]^{-1}

The ensemble mean of the above particles {x̂^(i)(k|k): i = 1, 2, ..., N} will be identical to the ensemble mean of the unconstrained EnKF developed for the system described by eq A.1. The fact that the estimates of the ensemble Kalman filter developed for a system described by eq A.1 tend to the Kalman filter estimates when the ensemble size N becomes large has been shown by Evensen.14

Literature Cited

(1) Julier, S. J.; Uhlmann, J. K. Unscented Filtering and Nonlinear Estimation. Proc. IEEE 2004, 92 (3), 401–422.
(2) Burgers, G.; Leeuwen, P. J. V.; Evensen, G. Analysis Scheme in the Ensemble Kalman Filter. Mon. Weather Rev. 1998, 126, 1719–1724.

(3) Liebman, M. J.; Edgar, T. F.; Lasdon, L. S. Efficient Data Reconciliation and Estimation for Dynamic Processes Using Nonlinear Programming Techniques. Comput. Chem. Eng. 1992, 16 (11), 963–986.
(4) Michalska, H.; Mayne, D. Q. Moving horizon observers and observer-based control. IEEE Trans. Autom. Control 1995, 40 (6), 995–1006.
(5) Vachhani, P.; Rengaswamy, R.; Gangwal, V.; Narasimhan, S. Recursive Estimation in Constrained Nonlinear Dynamical Systems. AIChE J. 2004, 946–959.
(6) Vachhani, P.; Narasimhan, S.; Rengaswamy, R. Robust and Reliable Estimation via Unscented Recursive Nonlinear Dynamic Data Reconciliation. J. Process Control 2006, 16, 1075–1086.
(7) Kandepu, R.; Foss, B.; Imsland, L. Applying the Unscented Kalman Filter for Nonlinear State Estimation. J. Process Control 2008, 18, 753–768.
(8) Lang, L.; Chen, W.; Bakshi, B. R.; Goel, P. K.; Ungarala, S. Bayesian estimation via sequential Monte Carlo sampling - Constrained dynamic systems. Automatica 2007, 43, 1615–1622.
(9) Rao, C. V.; Rawlings, J. B.; Mayne, D. Q. Constrained State Estimation for Nonlinear Discrete-time Systems: Stability and Moving Horizon Approximations. IEEE Trans. Autom. Control 2003, 48 (2), 246–258.
(10) Rawlings, J. B.; Bakshi, B. R. Particle Filtering and Moving Horizon Estimation. Comput. Chem. Eng. 2006, 30, 1529–1541.


(11) Haseltine, E. L.; Rawlings, J. B. Critical evaluation of extended Kalman filtering and moving horizon estimation. Ind. Eng. Chem. Res. 2005, 44 (8), 2451–2460.
(12) Arulampalam, S.; Maskell, S.; Gordon, N.; Clapp, T. A Tutorial on Particle Filters for Online Nonlinear/non-Gaussian Bayesian Tracking. IEEE Trans. Signal Process. 2002, 50 (2), 174–188.
(13) Gillijns, S.; Mendoza, O. B.; Chandrasekar, J.; De Moor, B. L. R.; Bernstein, D. S.; Ridley, D. S. What is the Ensemble Kalman Filter and How Well Does it Work. Proc. Am. Control Conf. 2006, 4448–4453.
(14) Evensen, G. The Ensemble Kalman Filter: Theoretical Formulation and Practical Implementation. Ocean Dyn. 2003, 53, 343–367.
(15) Prakash, J.; Patwardhan, S. C.; Shah, S. Constrained State Estimation using Ensemble Kalman Filter. Proc. Am. Control Conf. 2008, 3542–3547.
(16) Shum, M. Notes on GHK Simulator. Available at www.econ.jhu.edu/people/shum/e2901/ghk_desc.pdf.
(17) Alspach, D. L.; Sorenson, H. W. Nonlinear Bayesian Estimation Using Gaussian Sum Approximations. IEEE Trans. Autom. Control 1972, 4, 439–448.

Received for review February 4, 2009. Revised manuscript received October 14, 2009. Accepted October 14, 2009.

IE900197S