Ind. Eng. Chem. Res. 2000, 39, 1683-1693


Industrial Application of a Large-Scale Dynamic Data Reconciliation Strategy

Tyler A. Soderstrom,† Thomas F. Edgar,† Louis P. Russo,‡ and Robert E. Young*,‡

Department of Chemical Engineering, The University of Texas at Austin, Austin, Texas 78712-1062, and Baytown Chemical Plant, ExxonMobil Chemical Company, P.O. Box 4004, Baytown, Texas 77522-4004

In a modern chemical plant, distributed control systems provide access to a wide variety of process data that typically contain both random and gross errors. Although data reconciliation has been an often-studied approach for reducing the amount of error in measurements, applications of dynamic data reconciliation to problems in industry are virtually nonexistent. This is because standard formulations of the dynamic data reconciliation problem result in large nonlinear programs, which are thought to be too difficult to solve in real time. With increases in computing speed and improvements in optimization technology, these concerns are diminishing. This paper describes the implementation of a dynamic data reconciliation application at an ExxonMobil Chemical Company plant. The specifics of the application are discussed, and some solutions to the practical problems encountered in an industrial setting are presented.

Introduction

Data reconciliation is a well-documented approach for estimating values of measured process variables that are consistent with their constraining mass and energy balances. All measurements contain error, and data reconciliation reduces this error by making use of redundancies in process data. A great deal of research has been done on steady-state data reconciliation, but far less has focused on dynamic data reconciliation, especially nonlinear dynamic data reconciliation. The dynamic problem shares many features with state estimation, and many advances have come from applying its methodology. Although there is a large body of literature on data reconciliation, the application of these techniques to large operating industrial processes is virtually nonexistent. This article demonstrates the successful application of dynamic data reconciliation techniques to a large-scale industrial process. A real-time dynamic data reconciliation strategy is implemented to improve inventory calculations at an ExxonMobil Chemical manufacturing plant.
Dynamic data reconciliation has often been thought of as too impractical to be used in an industrial setting despite the possible benefits it could provide. In general, the chemical process industry has been slow to take advantage of many advances in modern control and monitoring. Recently, Wilson et al. (1998) discussed this problem and concluded that, due to model uncertainty and other problems faced in an industrial situation, the implementation of an extended Kalman filter produced only minor improvements over an open-loop model with no corrections for measurement error.1 Another concern with standard moving-horizon formulations of the dynamic data reconciliation problem is that they result in a nonlinear program (NLP), which can be very large when applied to any problem of industrial importance. The prospect of solving this NLP in real time could exclude dynamic reconciliation as a viable option. One of the goals of this paper is to demonstrate that this need not always be the case. Systematic improvements in numerical optimization algorithms and the continual increase in computing speeds have allowed the solution of the large NLPs encountered in problems of this type.

*To whom correspondence should be addressed.
† The University of Texas at Austin.
‡ ExxonMobil Chemical Company.

The data reconciliation problem is not a new one. Kuehn and Davidson (1961) recognized the benefit of using estimates of process measurements that are consistent with constraining mass and energy balances.2 Their paper contains the classic formulation of the steady-state data reconciliation problem. This method involved solving an optimization problem that minimizes a weighted least-squares objective function of the difference between measured and estimated values of the process variables. The balance equations were included as constraints to ensure the estimates were consistent, and the problem was solved analytically using Lagrange multipliers. As problems become larger, nonlinear systems are considered, and inequality constraints are included, analytical solutions become less practical or nonexistent. Knepper and Gorman (1980) suggested that the solution to these problems can be adequately obtained using successive linearization approaches; however, Liebman (1991) showed that nonlinear programming techniques offered significant improvements, especially when errors were large and the system sufficiently nonlinear.3,4 The benefits of steady-state data reconciliation are widely recognized. These include improved confidence in measurements, fault identification, higher level production planning and optimization, and steady-state identification.5-7 Currently, many steady-state data reconciliation applications are running on-line in operating refineries.8 When examining particular systems, some of the measurements may be redundant within a data set, while some process variables are unmeasured. The estimation of the values of unmeasured variables from measured process data is termed coaptation by Mah et al. (1976).9 Classification of variables to determine which unmeasured ones can be estimated and which measured ones are available for improvement by reconciliation is shown by Stanley and Mah (1981).10
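The classic linear steady-state formulation admits a closed-form Lagrange-multiplier solution. The sketch below applies it to a hypothetical three-stream splitter; the flows, standard deviations, and function name are invented for illustration.

```python
import numpy as np

def reconcile_linear(y, A, sigma):
    """Weighted least-squares reconciliation for linear balances A x = 0.

    Closed-form Lagrange-multiplier solution:
        x_hat = y - V A^T (A V A^T)^-1 A y,   V = diag(sigma^2)
    """
    V = np.diag(np.asarray(sigma, dtype=float) ** 2)
    lam = np.linalg.solve(A @ V @ A.T, A @ y)  # Lagrange multipliers
    return y - V @ A.T @ lam

# Hypothetical splitter balance: F1 - F2 - F3 = 0
A = np.array([[1.0, -1.0, -1.0]])
y = np.array([10.1, 6.2, 3.4])       # measured flows
sigma = np.array([0.2, 0.15, 0.1])   # assumed meter standard deviations
x_hat = reconcile_linear(y, A, sigma)
print(x_hat, A @ x_hat)              # reconciled flows close the balance
```

Because the balance is linear, no iteration is required; the nonlinear problems discussed below require nonlinear programming techniques instead.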

10.1021/ie990798z CCC: $19.00 © 2000 American Chemical Society Published on Web 05/11/2000


Several good examples of simultaneous estimation and reconciliation of process variables are shown by Tamhane and Mah (1985).11 This demonstrates the close connection between data reconciliation and estimation problems. One of the major focuses for research in data reconciliation has been to use data or equation residuals for the detection of gross errors in process data. Mah et al. (1976) used reconciled data to identify gross errors such as process leaks or biased instruments. Most of these methods are based on statistical hypothesis testing of residuals. These methods include the measurement test (Tamhane and Mah, 1985), the modified iterative measurement test (Serth and Heenan, 1986), the maximum power test (Crowe, 1992), and more recently the principal component test (Tong and Crowe, 1995).11-14 Recently, Kim et al. (1997) extended the modified iterative measurement test by using nonlinear programming techniques.15 Other researchers have proposed methods to detect outliers and eliminate their impact on reconciliation results.16,17 For dynamic systems, the data reconciliation problem shares many characteristics with state and parameter estimation. This was shown by Robertson et al. (1996), where a typical formulation of the dynamic data reconciliation problem was presented as a special case of a more general moving-horizon state estimation formulation.18 State estimation problems have often been handled by filtering and moving-horizon optimization approaches. Jang et al. (1986) compared these two approaches and concluded that nonlinear programming methods were more robust and provided superior results, at the expense of increased computation time.19 Almasy (1990) described a technique for dynamic data reconciliation using linear balancing equations to reconcile measured states, and Liebman et al. (1992) demonstrated that nonlinear balance equations could be reconciled efficiently using nonlinear programming techniques.20,21 The use of a moving-horizon approach, such as the one taken by Liebman et al. (1992), makes the problem size computationally manageable.21 A similar moving-horizon data reconciliation strategy was shown by Ramamurthi et al. (1993) to improve closed-loop nonlinear model predictive control performance.22 Moving-horizon methods for state estimation are discussed by Muske and Edgar (1997) and were implemented industrially for a polymerization process by Russo and Young (1999); the theory behind such methods is still being actively developed.23-25 As in steady-state data reconciliation, techniques to identify gross errors in dynamic systems are also emerging. Dynamic data reconciliation techniques have been used to detect gross errors, identify bias in measurements, and detect outliers.17,26-28 McBrayer et al. (1998) applied bias detection and nonlinear dynamic data reconciliation to a single vessel in a process.29 Although dynamic data reconciliation has received much attention recently, its application to large operating processes is conspicuously absent.

The first section discusses the motivation for the research and why nonlinear dynamic data reconciliation (NDDR) was chosen. The next section defines the scope of the problem, followed by a discussion of how the dynamic data reconciliation was formulated. This section provides a detailed treatment of the way the problem was set up, describes how this particular implementation fits within the framework of a more general state estimation problem, and presents the equations used in the model for the plant. This is followed by a treatment of the implementation of the problem, including practical items such as how certain systematic errors in the data were compensated for and how this contributed to improved results. Also discussed are changes made to improve the robustness of the application and how the application was interfaced with the distributed control system to increase its utility. Results of the improvements achieved by the application are presented next, followed by some concluding remarks.

Motivation

In the manufacturing plant, the inventory of a diluent has been tracked for nearly four decades. Since the diluent mass is constantly being shifted during plant operation, tracking its inventory is an inherently non-steady-state problem. Tracking the inventory permits both manufacturing cost reduction and improved environmental performance. Accurate inventory tracking allows material to be reordered before supplies become low without exceeding safe operating inventory limits. Inventory is also used in environmental loss calculations. When losses are computed long-term by summing purchases over a quarter or a year and adding the working inventory change, any errors in the inventory change are small compared to the sum of the purchases. However, short-term losses are computed from differences in average inventory. For example, daily losses are calculated by subtracting the average inventory for 1 day from the daily average of the previous day. Since losses are calculated directly from inventory values, any errors in the inventory calculation cause subsequent errors in the short-term loss calculations. The current inventory calculation swings ±15% of the nominal plant inventory, which is an order of magnitude greater than the long-term loss calculation. This significant difference motivates further investigation into the current calculation method.

Figure 1. Diluent losses (normalized to 1962).
The earliest methods of calculation assumed constant densities and compositions for most drums. Even with these gross assumptions, tracking the daily and 3-day average losses based on these inventory calculations effected major reductions in emissions by determining dominant “loss sources” and targeting them for improvements. Figure 1 shows these reductions as a percentage of a base year. The diluent is not localized to a single unit operation in the process, so tracking the inventory requires considering a large section of the plant. Prior to the implementation of the data reconciliation-based application for estimating inventory, the amount of diluent present in the plant was computed by summing the inventory of process vessels within the distributed control system (DCS) using the following assumptions: (1) the hold-up in all purification towers is constant; (2) the hold-up in the compression and drying sections (all vapor phase) is small and constant; (3) the hold-up in the piping and process exchangers is constant. These assumptions can be quite restrictive and largely contribute to the systematic inaccuracies of inventories calculated by this method. It was confirmed via process simulation that the distillation train of the plant was responsible for a large and variable portion of the diluent inventory and that, when production is switched between different products, the filling and emptying of exchangers becomes significant. The presence of poor or unavailable measurements and the fact that large sections of the plant were not taken into account by this treatment led to the search for an improved alternative. Vessel inventories, m, are computed as

m = ρ·V·x_a   (1)

The volume, V, is a calculated function of a level measurement. The volume calculation is nonlinear for horizontal drums and typically involves a cubic polynomial. The density, ρ, is a nonlinear function of composition and temperature measurements. Compositions, x_a, are analyzed by gas chromatography, which causes these measurements to be available asynchronously and at a much lower frequency than the other measurements. Although the DCS tracking method has been improved by using all available data to compensate for density and composition variation, the measurement errors generate inventory variation that is larger than the current loss levels. This is demonstrated via an analysis of the errors. For a single horizontal drum, the expected root-mean-square (rms) error in the inventory is given by the following equation:

Δm_rms = sqrt[ (∂m/∂L)^2·ΔL^2 + (∂m/∂T)^2·ΔT^2 + (∂m/∂x_a)^2·Δx_a^2 ]   (2)

where L is the level in the drum, T is the temperature in the drum, and the partial derivatives are evaluated at the nominal operating conditions. The assumption of volume additivity was used to derive a composition-dependent expression for density. This assumption has been validated via comparison with rigorous thermodynamic properties over the conditions observed in the actual plant. Given the following expressions for volume and densities (mixture and pure component),

V = c3·L^3 + c2·L^2 + c1·L + c0   (3)

ρ = [x_a/ρ_a + (1 − x_a)/ρ_b]^(−1)   (4)

ρ_a = m1·T + b1   (4a)

ρ_b = m2·T + b2   (4b)

The partial derivatives can be expressed as

∂m/∂L = ρ·x_a·(3c3·L^2 + 2c2·L + c1)   (5)

∂m/∂T = x_a·V·ρ^2·[m1·x_a/ρ_a^2 + m2·(1 − x_a)/ρ_b^2]   (6)

∂m/∂x_a = V·[ρ + x_a·ρ^2·(1/ρ_b − 1/ρ_a)]   (7)

For a typical horizontal drum at nominal process conditions, the term associated with level measurement errors is much greater than the contributions of the density and composition terms. For a specific vessel in this process, the ratio of contributions to the squared error is 550:2:1 (level/temperature/composition measurements, respectively). These relative contributions are the three terms inside the square root operator in eq 2. The partial derivatives were evaluated at the nominal operating conditions for this specific drum. The measurement error terms (ΔL, ΔT, and Δx_a) were selected based on the expected error in the measurement device. The error in the overall inventory calculation is computed by expanding this error analysis to include all of the measurements for all vessels in the calculated inventory. As expected, the current losses are at or below the expected error in the calculation. When inventory from one vessel fills an empty pipe, exchanger, or vessel that is not in the DCS inventory calculations, that inventory is hidden or apparently consumed. Later, that inventory will reappear in a vessel that is included and appear to be generated. A major improvement can be made to the inventory tracking calculations by taking into account inconsistencies in the measurements and connections between vessels. This information about the connectivity of process units is very important and should be taken advantage of. For example, consider the system shown in Figure 2.

Figure 2. System of two interconnected vessels.

This system has two types of functional redundancies in the measurements. If the vessels contain incompressible liquids at constant temperature, not only is each vessel overspecified, but so is the interaction between vessels. Equations can be formulated to describe the level in each individual vessel, but since they are connected, a decrease in the level in the first vessel must be accompanied by an increase in the level in the second vessel.
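The error propagation of eqs 2-7 can be sketched numerically with finite differences in place of the analytical partials. All coefficients below are illustrative placeholders, not plant values; with these numbers the level term dominates the temperature and composition terms, mirroring the behavior described above.

```python
import math

# Hypothetical coefficients -- placeholders, not plant values.
c3, c2, c1, c0 = -0.002, 0.6, 5.0, 0.0   # V = c3*L^3 + c2*L^2 + c1*L + c0 (eq 3)
m1, b1 = -0.05, 60.0                      # rho_a = m1*T + b1 (eq 4a)
m2, b2 = -0.04, 55.0                      # rho_b = m2*T + b2 (eq 4b)

def volume(L):
    return c3 * L**3 + c2 * L**2 + c1 * L + c0

def density(T, xa):
    rho_a, rho_b = m1 * T + b1, m2 * T + b2
    return 1.0 / (xa / rho_a + (1.0 - xa) / rho_b)   # eq 4

def mass(L, T, xa):
    return density(T, xa) * volume(L) * xa            # eq 1

def rms_error(L, T, xa, dL, dT, dxa, h=1e-6):
    # central finite differences approximate the partials in eq 2
    dm_dL = (mass(L + h, T, xa) - mass(L - h, T, xa)) / (2 * h)
    dm_dT = (mass(L, T + h, xa) - mass(L, T - h, xa)) / (2 * h)
    dm_dx = (mass(L, T, xa + h) - mass(L, T, xa - h)) / (2 * h)
    terms = ((dm_dL * dL) ** 2, (dm_dT * dT) ** 2, (dm_dx * dxa) ** 2)
    return math.sqrt(sum(terms)), terms

dm, terms = rms_error(L=50.0, T=30.0, xa=0.8, dL=0.5, dT=0.5, dxa=0.005)
print(dm, [t / terms[2] for t in terms])   # rms error and term ratios
```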
It is exactly these types of physical relationships that are enforced when a model of the plant is derived, connections are stated explicitly, and adjustments to the data are made so they are consistent with the model. This is precisely why data reconciliation was chosen to improve the tracking of diluent inventory.

Problem Scope

A dynamic data reconciliation strategy was chosen to eliminate the shortcomings of the current distributed control system (DCS) approach to inventory calculations. To offer improvements over the DCS calculations,


more units where diluent is used must be considered. The ExxonMobil unit makes a variety of products and product grades, and even during a long product run, vessel inventories are moved between different sections of the plant. During product or grade switches, these changes in the location of inventory are even more dramatic. This makes the tracking of vessel inventory an inherently non-steady-state process; therefore, these dynamics must be accounted for in the inventory tracking process. This precludes the use of the more mature steady-state techniques for data reconciliation. To use a dynamic data reconciliation strategy, a dynamic model of the plant is needed. The available measurements from the distributed control system (DCS) were investigated in order to find units that had enough information about their operation available for complete mass and component balances, as well as which variables could be calculated from others, but not reconciled. This yielded all of the units considered in the original calculations, as well as the series of distillation towers. The available measurements allow for dynamic mass balances to be performed on approximately 20 vessels, including process vessels, reactors, flash vessels, distillation columns, and one piping section where the flow is split and the flow rates in all branches are measured. One vessel has enough measurements to make a component balance possible. Reconciliation of the dynamic mass balance using this vessel has previously been done off-line by McBrayer et al. (1998), but for this application, the composition data are used for complete component balances as well.29 The equations used in the model result in a large system of differential and algebraic equations (DAEs). A simplified flow sheet of the plant is shown in Figure 3. The large box, delimited by the dashed line, indicates the portion of the plant modeled for the data reconciliation application.

Figure 3. Simplified plant flow sheet.

The only sections of the plant remaining unmodeled are the recycle and drying units. These sections are omitted due to a lack of available measurements, simplified models, and a relatively small contribution to the total plant diluent inventory. In the diagram, the small box outlined with the dotted line shows the boundary used for the mass balance on an aggregate of vessels. This aggregation is required wherever there are insufficient measurements or where manual lineups are made that cannot be detected via measurements. All of the vessels in the aggregation are treated as a single large vessel. As an alternative, it is often possible to use engineering judgment or logic to discriminate among alternative manual lineups. In this specific problem, both aggregation and logic were used to define the mass balance envelopes.

Problem Formulation

As was noted previously, nonlinear dynamic data reconciliation was chosen to improve the confidence in the calculated diluent inventory. This section describes the details of the approach as well as the form of the equations used in the plant model. Dynamic data reconciliation is used to reduce the error in the measurements by forcing the estimated values of the measurements used in the calculations to be consistent with their constraining dynamic mass and component balances. This approach closely follows that used by Liebman et al. (1992), with a few minor changes in the exact problem formulation.21 Inventory data are desired continuously, so the reconciliation is performed in real time using a moving horizon. This approach permits a trade-off between the accuracy of a classic batch data reconciliation and the low computation time required for filtering approaches such as the extended Kalman filter.18 The nonlinear dynamic data reconciliation problem involves optimizing a weighted least-squares objective function subject to both differential and algebraic equation constraints as well as simple bounds on process variables. The constraints in this application fall into one of four classes. The first type is the dynamic overall and component balances, represented by differential equations. The second type is general algebraic relationships between variables. These commonly arise from expressing the state variables in terms of measured or other calculated quantities. The third type is the connection constraints. These ensure that the flow out of one connected vessel is the same as the flow into another. This type of constraint helps to eliminate hidden inventory caused by miscalibrated level instruments and reduces the effects of inventory used to fill pipes and exchangers. The final type of constraint is simple variable bounds that ensure all variables have physically meaningful values at the problem solution.
These inequality constraints include bounds such as nonnegative flow rates, vessel levels between 0 and 100%, and flows between the upper and lower limits of the flow meter used for the measurement. A discussion in Rao and Rawlings (1998) explains how a moving-horizon estimator may not produce estimates that converge to the true values of the states if each horizon of data is treated independently of the others. Their discussion showed that, by bringing in past information on the estimated values of the states, the stability and convergence of the estimates can be controlled.25 Although the stability of the estimates was not explicitly considered in this application, the idea of utilizing past information to improve the convergence of the current run was used in the formulation of this problem. In this application, in addition to minimizing the difference between the measurement and current estimate of a variable, the difference between current and previous estimates of certain variables is also included in the objective function. In the mathematical statement of the problem, the use of inputs to differential equations is not specified explicitly. If an input is measured, it is included with the measurements, and if it is unmeasured, it appears as an auxiliary problem variable to be calculated. In mathematical notation, the nonlinear dynamic data reconciliation problem used in


this application is as follows:

min over (ŷ, x̂, z):  Φ = Σ_{i=1..n_v} (1/2)·[(ŷ_i − y_i)/w_i]^2 + Σ_{j=1..m(H−1)} (1/2)·[(x̂_j − x̄_j)/w̄_j]^2   (8)

subject to

f(dx̂/dt, ŷ, x̂, z) = 0   (9)

h(ŷ, x̂, z) = 0   (10)

g(ŷ, x̂, z) ≥ 0   (11)

Here ŷ denotes the estimates of the measured variables y, x̂ the estimated states, x̄ the state estimates retained from the previous execution, z the unmeasured auxiliary variables, and w and w̄ the corresponding weighting factors.
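As a concrete miniature of eqs 8-11, the sketch below reconciles one hypothetical tank over a single horizon: the objective is the weighted least squares of eq 8, the mass balance (eq 9) is discretized by a simple finite difference for illustration (the actual application uses orthogonal collocation on finite elements, as discussed later), and eq 11 appears as nonnegativity bounds. All numbers and names are invented.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical single-tank horizon: 5 mass samples, dt = 2 min.
dt = 2.0
m_meas = np.array([100.0, 104.5, 107.8, 112.4, 115.9])   # mass readings
fin_meas = np.array([3.0, 2.8, 3.1, 2.9])                # inflow, mass/min
fout_meas = np.array([1.0, 1.1, 0.9, 1.0])               # outflow, mass/min
w = np.concatenate([np.full(5, 0.5), np.full(8, 0.1)])   # meter accuracies
y = np.concatenate([m_meas, fin_meas, fout_meas])

def unpack(v):
    return v[:5], v[5:9], v[9:13]

def objective(v):
    # eq 8: weighted least squares of estimate minus measurement
    return 0.5 * np.sum(((v - y) / w) ** 2)

def balance(v):
    # eq 9 discretized: m[k+1] - m[k] - dt*(Fin[k] - Fout[k]) = 0
    m, fin, fout = unpack(v)
    return m[1:] - m[:-1] - dt * (fin - fout)

bounds = [(0, None)] * 13                                 # eq 11
res = minimize(objective, y.copy(), method="SLSQP", bounds=bounds,
               constraints=[{"type": "eq", "fun": balance}])
m_hat, fin_hat, fout_hat = unpack(res.x)
print(res.success, np.max(np.abs(balance(res.x))))
```

The reconciled masses and flows now satisfy the discretized balance exactly (to solver tolerance), trading small adjustments in each measurement against its weight.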

The weighting factors in this application are selected to correspond to the estimated accuracy of the instrument taking the reading. For example, mass flow meters are accurate to ±0.25% of full scale for flows above 2% of scale. Therefore, for a meter ranged 0-10,000 lb/h, the weight is 25. Experience of the plant control engineers is used to modify these weights on the basis of past measurement performance. Although the weights selected using these heuristics do not possess the typical statistical interpretation presented in other estimation or data reconciliation literature, they represent a consistent way to deal with the relative accuracy of measurements. One source of error that is difficult to capture with the statistical assumption of random noise is the use of a constant density to calibrate a transmitter. This is especially problematic for level and flow meters when the actual process density varies. The use of a constant density in the transmitter calibration results in an error that increases with increasing mass holdup or flow. The current application does explicitly handle this issue; however, it is not done directly by including these equations in the plant model. Rather, the data are corrected for this type of systematic error on the DCS, and the corrected data are then used in the reconciliation application. Details on how this is accomplished and how substantially it affects the results are deferred to a later section.

Most of the units in the data reconciliation application are simply modeled using dynamic mass and component balance equations. These balances take the following form:

Mass balance

V = P(L)   (12)

m = V·ρ   (13)

dm/dt = Σ_i a_i·S_i   (14)

Component balance

dm_A/dt = Σ_i x_A_i·a_i·S_i   (15)

m_A = x_A·m   (16)

Explicit form

m·dx_A/dt + x_A·dm/dt = Σ_i x_A_i·a_i·S_i   (17)

The equations used to model the mass holdup in the tower section of the plant are substantially more complicated and are presented below. These equations are a slightly modified form of the equations shown in Smith (1963).30 In some cases, equations are rearranged and variables are substituted so that tower mass holdup can be calculated from a measured pressure drop and external flow rates. In the case of the aeration factor, a curve was fit to approximate the values read from a chart. These equations take the following form:

Mass to volume flow

Q = V/ρ_V   (18)

Per tray pressure drop

h_t = (408.6/n_trays)·(Δp/14.696)·(ρ_water/ρ_L)   (19)

Bubble cap drop

h_cd = K_c·(ρ_V/ρ_L)·(Q/A_r)^2   (20)

Slot opening drop

Q = 2.36·A_s·h_sh·sqrt[(ρ_L − ρ_V)/ρ_V]·[(2/3)·(R_s/(R_s + 1))·(h_so/h_sh)^(3/2) + (4/15)·((1 − R_s)/(1 + R_s))·(h_so/h_sh)^(5/2)]   (21)

Flow parameter

F_va = (Q/A_a)·sqrt(ρ_V)   (22)

Aeration factor

β = −0.0418·F_va^3 + 0.2435·F_va^2 − 0.5108·F_va + 1   (23)

Liquid holdup per tray

h_l = (h_t − h_cd − h_so + h_w − h_st)/β   (24)

Total tray holdup

m_tray = n_trays·h_l·A_n·ρ_L/12^3   (25)

Tower total mass

m_total = m_tray + m_tank   (26)

Tower mass balance

dm_total/dt = Σ_i a_i·S_i   (27)

(The exact meaning of all of the above symbols can be found in Smith (1963).) These equations provide a “grey-box” model relating pressure drop to liquid holdup. The diluent inventory in the vapor phase is insignificant compared to the errors in the liquid holdup calculation.
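The chain from measured pressure drop to tray liquid holdup (eqs 18-20 and 22-26) can be sketched as a forward calculation. Every number below is a hypothetical placeholder, not plant data; the slot opening h_so is simply taken as a given value here rather than solved from eq 21 (which would require a root find), and units are assumed consistent with Smith (1963).

```python
import math

# Hypothetical tower data -- placeholders, not plant values.
dp = 5.0                          # measured tower pressure drop, psi
n_trays = 40
rho_v, rho_l = 0.2, 40.0          # vapor/liquid densities, lb/ft^3
rho_water = 62.4
V = 60.0                          # vapor mass flow, lb/s
A_r, A_a, A_n = 100.0, 150.0, 120.0   # riser, active, net areas, ft^2
K_c = 0.6
h_so, h_w, h_st = 1.0, 2.0, 0.5   # slot opening, weir, static seal heights, in
m_tank = 5000.0                   # liquid in the tower sump, lb

Q = V / rho_v                                                   # eq 18
h_t = (408.6 / n_trays) * (dp / 14.696) * (rho_water / rho_l)   # eq 19
h_cd = K_c * (rho_v / rho_l) * (Q / A_r) ** 2                   # eq 20
F_va = (Q / A_a) * math.sqrt(rho_v)                             # eq 22
beta = -0.0418 * F_va**3 + 0.2435 * F_va**2 - 0.5108 * F_va + 1 # eq 23
h_l = (h_t - h_cd - h_so + h_w - h_st) / beta                   # eq 24
m_tray = n_trays * h_l * A_n * rho_l / 12**3                    # eq 25
m_total = m_tray + m_tank                                       # eq 26
print(round(beta, 3), round(h_l, 2), round(m_total, 1))
```

With these placeholder values the aeration factor lands between 0 and 1 and the holdup is positive, as the grey-box model requires; real geometry and fluid data would come from the plant and from Smith (1963).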


Applying a rigorous mass and energy balance model of the tower is unlikely to improve the estimate of the liquid mass holdup in the tower, and obtaining the numerical solution is computationally expensive. We have observed that when the explicit form of the component balances is used (eq 17), the SQP solver converges to a solution much faster. When the explicit form of the component balance is used, the SQP solver does not have to calculate a consistent value for either m_A or dm_A/dt. This eliminates variables and constraints and makes the equation structure more efficient for calculations. A weighted least-squares objective function is used to minimize the squared difference between the measured and predicted values of all measured variables within the horizon. Also included in the objective function is a term that adds a penalty for estimating values of state variables that are much different from the previous estimates obtained from the last time the application was run. The appropriate mass and component balances are treated as constraints and are converted from differential equations to algebraic equations by orthogonal collocation on finite elements. The problem is posed as a nonlinear program and is solved using the NOVA optimization system, a commercial successive quadratic programming (SQP) solver optimized for large-scale systems.31 For this application, the model consisted of approximately 50 differential and algebraic equations with 100 measured variables. After collocation, this results in optimizing around 2700 variables subject to 1300 constraint equations. A horizon length of 60 min was chosen, with data sampled at 2 min intervals. The application was designed to run at 10 min intervals with 50 min of overlap from the previous execution. Besides including past information and unmeasured states, there are important differences between this data reconciliation application and the one described by Liebman et al. (1992).
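The horizon bookkeeping implied by these numbers (60 min window, 2 min samples, 10 min execution interval, 50 min overlap) can be sketched as simple index arithmetic; the constant and function names below are illustrative only.

```python
HORIZON_MIN = 60   # moving-horizon window length, minutes
SAMPLE_MIN = 2     # measurement sampling interval, minutes
RUN_MIN = 10       # execution interval of the application, minutes

samples_per_horizon = HORIZON_MIN // SAMPLE_MIN   # 30 samples per window
step = RUN_MIN // SAMPLE_MIN                      # window slides 5 samples

def horizon_indices(run):
    """Sample indices reconciled by the given execution of the application."""
    start = run * step
    return range(start, start + samples_per_horizon)

r0, r1 = set(horizon_indices(0)), set(horizon_indices(1))
overlap_min = len(r0 & r1) * SAMPLE_MIN
estimates_per_sample = samples_per_horizon // step
print(overlap_min, estimates_per_sample)   # -> 50 6
```

Each sample point is therefore re-estimated by six consecutive executions once past startup, which is what makes the horizon-by-horizon variance comparison discussed later possible.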
In the problem formulation proposed by Liebman et al. (1992), the inputs to the differential equations are assumed to be constant over the horizon.21 In this application, mass and concentration variables can be thought of as states, and flows as inputs to the differential equations. The application is intended to run in real time, so data must be sampled at a rate that makes this feasible. Since the measurements are sampled at 2 min intervals, the flow rates are expected to change several times within one horizon; therefore, they are not treated as constant.

Implementation

The very act of model development aids in the identification of areas where measurements are poor or nonexistent or where manual lineup of equipment is an issue, but implementation is where these problems are overcome. To transform the statement of the problem into an on-line application, a large number of details had to be worked out. The first problem that needed to be overcome was to address the systematic errors present in the data. Because many flow and level measurements depend on a calibrated density for their reading, they become inaccurate when the density of the process fluid differs from the density for which the instrument was calibrated. This is especially a problem for the plant considered here, since the same process lines are used to make many different products, and the instruments are calibrated for an average fluid density. Equations 28 and 30 show the relationship between the field measurements (typically a change in pressure) and what is read out on the DCS.32 If the actual density of the fluid is known or can be calculated, these level and flow readings can be corrected. How this correction can be done is shown in eqs 29 and 31.

Orifice plate equation

ṁ = C_o·S_o·sqrt[2·g_c·(p_a − p_b)·ρ]   (28)

Flow correction

ṁ_actual = ṁ_measured·sqrt(ρ_actual/ρ_calibrated)   (29)

Vessel level

L = Δp_measured/Δp_range = (ρ_actual/ρ_calibrated)·(h_actual/h_taps)   (30)

Level correction

L_actual = L_measured·(ρ_calibrated/ρ_actual)   (31)
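Equations 29 and 31 are simple multiplicative corrections; a minimal sketch follows, with invented function names and readings (the density figures are placeholders).

```python
import math

def correct_flow(mdot_measured, rho_actual, rho_calibrated):
    """Eq 29: correct an orifice flow reading for the actual fluid density."""
    return mdot_measured * math.sqrt(rho_actual / rho_calibrated)

def correct_level(level_measured, rho_actual, rho_calibrated):
    """Eq 31: correct a differential-pressure level reading likewise."""
    return level_measured * rho_calibrated / rho_actual

# Hypothetical readings: actual density 17% below the calibration density.
rho_cal = 40.0
rho_act = 0.83 * rho_cal
flow = correct_flow(1000.0, rho_act, rho_cal)    # lb/h
level = correct_level(50.0, rho_act, rho_cal)    # percent of span
print(round(flow, 1), round(level, 1))           # -> 911.0 60.2
```

Note the asymmetry: a density mismatch enters the flow reading under a square root but enters the level reading linearly, so level instruments are proportionally more sensitive to it.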

When the process fluid density is equal to the density for which the instrument was calibrated, the measured and actual values become equal. After the available measurements were assessed, all streams that offered enough information to apply a flow or level density correction were noted. A physical property package was used to devise formulas for the density of the fluids in each stream as functions of temperature, pressure, and composition. Actual stream densities were calculated from these relationships, whenever complete information was available, using calculations residing on the DCS. Streams where the actual density could not be inferred were not modified. Since the goal of this application is to obtain accurate values of fluid masses in different vessels, the impact of the density correction on the reconciliation results must be assessed. For the process vessels and flows in this system, typical corrections for density changed calculated vessel masses by several thousand pounds and process flows by 2-15%, often more than 1000 lb/h. In a previous paper by McBrayer et al. (1998), one of the process vessels in this system was examined for measurement bias and the calculation of an unmeasured flow.29 It is certain that some of the bias identified was attributable to this density issue and that this greatly influenced the estimated value of the unmeasured flow. Simulations were run to illustrate the effect that density differences in flow measurements have on the data reconciliation. Figure 4 shows the results for an idealized version of one of the process tanks used in the actual application. Data were simulated over 24 h by integrating smooth flow data to obtain a smooth true level trajectory. Gaussian noise with variance similar to that observed in actual data was added to all measurements. These data represent the case when no density error is present or the error has been compensated for.
These data were then reconciled off-line with the same moving-horizon technique discussed previously. The results are labeled "Reconciled Level - Compensated Density" in the figure. Next, a set of data was generated representing the case where the actual density of the inlet flow was 17% different from the calibrated density.
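A simplified version of this simulation experiment can be sketched as follows: integrate smooth flows to get a true level trajectory, add Gaussian noise, and reconcile one horizon window by least squares with the integrated mass balance enforced by construction. The tank area, flow profiles, and noise levels are purely illustrative, and this single-window solve stands in for the full moving-horizon NLP described in the paper.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# --- hypothetical single tank: flows in ft^3/h, level in ft ---
A = 200.0                                            # cross-sectional area, ft^2
dt = 0.1                                             # sample interval, h
t = np.arange(0.0, 24.0, dt)
f_in = 50.0 + 10.0 * np.sin(2 * np.pi * t / 24.0)    # smooth true inlet flow
f_out = np.full_like(t, 50.0)                        # true outlet flow
L_true = 10.0 + np.cumsum((f_in - f_out) / A) * dt   # integrated true level

# noisy "measurements"
y_L = L_true + rng.normal(0.0, 0.05, t.size)
y_in = f_in + rng.normal(0.0, 1.0, t.size)
y_out = f_out + rng.normal(0.0, 1.0, t.size)

def reconcile_window(yL, yin, yout):
    """Reconcile one horizon window. Decision variables are the initial
    level plus the flows at each sample; the level trajectory is computed
    from the flows, so the mass balance holds exactly."""
    n = yL.size
    def resid(z):
        L0, fin, fout = z[0], z[1:1 + n], z[1 + n:]
        L = L0 + np.concatenate(([0.0], np.cumsum((fin - fout)[:-1]) * dt / A))
        return np.concatenate([(L - yL) / 0.05,      # weighted by noise std
                               (fin - yin) / 1.0,
                               (fout - yout) / 1.0])
    sol = least_squares(resid, np.concatenate([[yL[0]], yin, yout]))
    L0, fin, fout = sol.x[0], sol.x[1:1 + n], sol.x[1 + n:]
    return L0 + np.concatenate(([0.0], np.cumsum((fin - fout)[:-1]) * dt / A))

H = 20                                               # horizon length, samples
L_rec = reconcile_window(y_L[:H], y_in[:H], y_out[:H])
```

In the moving-horizon scheme the window slides one sample at a time, so each time point is re-estimated many times; comparing those repeated estimates is what Figures 6 and 7 exploit.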

Figure 4. Comparison of simulation results

Figure 5. Comparison of results using process data

Figure 6. Detailed horizon-by-horizon look

Figure 7. Reduced variance of estimate when density correction is applied
Noise was added to the data, and the reconciliation was performed again; the result is labeled "Reconciled Level - Density Induced Error" in the figure. The raw data and true solution are also shown for comparison. As expected, when this density issue is not accounted for, the reconciliation results can be quite skewed. When there are density errors in both flow and level instruments, correcting for them yields even more obvious improvements. This can be seen in Figure 5, which shows the reconciliation results from a single tank when raw measurements and density-corrected measurements were used. In this case, the application was run off-line on actual process data. Because these are actual process data, the true values are unknown, but the results obtained with corrected data most likely represent the actual state of the process more accurately.

Because a moving-horizon approach is taken, a great deal more insight into the performance of the data reconciliation application can be gained by looking at the full results of each horizon instead of just the most recent point. This can be seen in Figure 6, which shows reconciliation results from another set of actual process data. As the horizon moves across the data, the same value is estimated multiple times, and it is clear from Figure 6 that the variance in the estimate of the measurement is significantly reduced when the data are corrected for density. Although the true value of the measured variable is unknown in this case, the fact that the estimates from one horizon to the next are more consistent indicates that the corrected data better fit the model of the unit. This is quantified in Figure 7, which shows the variance of the estimate both when density effects were and were not accounted for prior to reconciliation.

Other systematic problems that needed to be resolved during implementation were insufficient measurements and the possible diversion of certain streams via manual lineups. Additional difficulties stemmed from the fact that some vapor streams contained water vapor, while only the hydrocarbon portion is used in the model. If not taken into account, the presence of water would add unnecessary bias to the measurements used by the application. The problem of insufficient measurements was handled within the application by modifying the envelope over which a balance is performed: when certain measurements were unavailable, units were grouped and an overall balance was calculated for the group. The other types of difficulties were more easily handled using the DCS. Occasionally, a process line can be manually diverted after a flow meter used in one of the balance equations

so it appears that the flow is entering a vessel when in fact it is moving to another location. At other times, a metered line can serve multiple purposes depending on the lineup of manual valves, and only one of these lineups may be relevant to the mass balances being performed. Manual lineup problems were resolved by using temperature measurements from other locations to heuristically determine the origin or destination of the flow. In these cases, small pieces of code running on the DCS read the actual measurements from the meters, performed simple calculations, and stored these "virtual" measurements until they were needed by the application. Other special-purpose calculations were built to perform such functions as computing the flow rate of certain vapor streams on a dry basis. This involved reading a header temperature and pressure and then performing a vapor pressure calculation to estimate the water content of the stream. Solutions such as these were ad hoc but were kept to a minimum. Certainly, all of these corrections and special cases could have been handled internally in the data reconciliation application through the addition of extra equations and logic; however, the decision was made early on to keep these calculations on the DCS. The reason was to keep the number of equations to be solved as small as possible so that the application would run in real time; this also allowed certain changes to be made without interrupting the execution of the application. In keeping with the idea of ease of maintenance, a special set of multipurpose DCS calculations was constructed to handle all interactions between the DCS and the application. These interface calculations handle all preconditioning of raw data, storage of the vectors of data used as input to the application, and storage of all solution data returned by the application.
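A dry-basis flow calculation of the kind described can be sketched as below. The saturation assumption, the assumed average molecular weight of the hydrocarbon, and the Antoine constants for water (valid roughly 1-100 degrees C, mmHg/degrees C units) are illustrative choices, not the plant's actual DCS code.

```python
def water_psat_mmHg(T_C):
    """Antoine equation for the vapor pressure of water, mmHg."""
    A, B, C = 8.07131, 1730.63, 233.426
    return 10.0 ** (A - B / (C + T_C))

def dry_basis_flow(m_wet, T_C, P_mmHg, MW_dry=58.0):
    """Assume the vapor header is water-saturated, estimate the water
    mole fraction from the vapor pressure, convert to a mass fraction,
    and remove the water mass from the measured flow.  MW_dry is an
    assumed average molecular weight of the hydrocarbon."""
    y_w = min(water_psat_mmHg(T_C) / P_mmHg, 1.0)   # water mole fraction
    MW_w = 18.015
    w_w = y_w * MW_w / (y_w * MW_w + (1.0 - y_w) * MW_dry)
    return m_wet * (1.0 - w_w)

# at 25 C and atmospheric pressure, only about 1% of the mass is water
flow_dry = dry_basis_flow(1000.0, 25.0, 760.0)
```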
Any measurement used in the reconciliation application has these calculations associated with it. This also provides a concise format for viewing results from any console of the DCS: trend plots of any variable in the solution can be examined from any operator console, and residual values and flags signifying the use of a suboptimal solution are also available. Before the program was put on-line in the plant, it was rigorously tested off-line. Proprietary software available at ExxonMobil allowed the collection of time-stamped data at the required time intervals from the distributed control system. Using these data, each process unit could be tested individually and, when its performance was satisfactory, added to the larger system. An application that runs in real time must be robust enough to continue even under difficult conditions. For efficiency, the application is always warm-started with the previous solution as the initial guess. This works well because of the amount of overlap in the solution from one horizon to the next. However, if one execution of the application takes an inordinate amount of time to arrive at a solution, the state of the plant will be quite different at the beginning of the next execution, and the previous solution will no longer be an acceptable starting point. For this reason, the on-line application has been set up to take at most a specified number of iterations to find the optimal solution to the data reconciliation problem; if this limit is reached, the nearest feasible point is used instead.
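The warm-start-with-fallback strategy can be sketched generically; the paper's plant application uses the NOVA optimization system, so this scipy-based wrapper is only an illustration of the logic, with all names hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def solve_with_fallback(objective, constraints, x_prev, maxiter=50):
    """Warm-started solve with an iteration cap: start from the previous
    horizon's solution; if the cap is hit without convergence, fall back
    to the nearest feasible point (minimum constraint violation)."""
    sol = minimize(objective, x_prev, constraints=constraints,
                   method="SLSQP", options={"maxiter": maxiter})
    if sol.success:
        return sol.x, True
    def violation(x):
        # squared violation: eq residuals, plus negative parts of ineqs
        v = 0.0
        for c in constraints:
            r = np.atleast_1d(c["fun"](x))
            v += np.sum(r ** 2) if c["type"] == "eq" \
                 else np.sum(np.minimum(r, 0.0) ** 2)
        return v
    feas = minimize(violation, x_prev, method="SLSQP",
                    options={"maxiter": maxiter})
    return feas.x, False

# toy problem: minimize distance to (2, 2) subject to x0 + x1 = 1
obj = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 2.0) ** 2
cons = [{"type": "eq", "fun": lambda x: x[0] + x[1] - 1.0}]
x_star, converged = solve_with_fallback(obj, cons, np.zeros(2))
```

Returning the "suboptimal solution" flag alongside the result mirrors the DCS flag mentioned above.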

Figure 8. Comparison of results during stable operation

On-Line Results

The nonlinear dynamic data reconciliation application described in the previous sections is fully interfaced with the DCS and runs continuously on-line and in real time. Several aspects of its performance will be discussed. This section shows not only that this type of application is effective and an improvement over the DCS calculations during long stable periods of operation, but also that it offers improvements even through plant upsets, when unmodeled effects decrease performance. The first comparison of results is shown in Figure 8, which compares the total plant diluent inventory as calculated by the DCS and by the NDDR application over a period of days when the plant was operating smoothly. There is a gap between the inventories calculated by the two methods, caused by two factors. The first is that the NDDR application includes units, the distillation columns, that are not present in the DCS calculations, and these hold a significant mass of diluent. The second is that using calculated densities to correct level measurements in the major vessels present in both calculations has a large effect on the total mass calculated for each individual vessel. Because some of the tower equations have no closed-form solution, they cannot be performed on the DCS; thus, an exact comparison is not possible. Despite these differences, the results show that the inventory calculated by the NDDR application does not exhibit the large fluctuations present in the inventory calculated on the DCS with raw measurements. In the long term, both lines show an overall downward trend, reflecting the actual losses; however, the inventory calculated by the NDDR application is far more consistent.
This overall smoothing of the total calculated inventory is partially due to adjustments made to the measurements around the large vessels included in the DCS calculations, but the greatest benefit came from being able to use the measurements to estimate the mass of diluent residing in the towers at any given time. This can be seen in Figure 9, which makes a direct comparison of inventories over the period of 1 day. These data are mean-centered to aid the comparison, but the scaling is identical. In this figure, one line for the NDDR results includes the same vessels used in the DCS calculations, and the other NDDR line also includes the towers. The figure shows that the reconciled mass in the major vessels exhibits some smoothing and attenuation of peaks, but combining this with the estimates of the mass in the towers has a much larger effect in providing a smooth estimate of total plant diluent mass. This demonstrates how important model development is to this process.

The previous results were obtained when the operation of the plant was fairly stable. The most important test of the application is how it performs during more difficult circumstances. The next two figures demonstrate that, although large disturbances in plant operation degrade the results of the NDDR application, it still performs better overall than the DCS application by measures important to operations personnel. Figure 10 shows a comparison of NDDR and DCS results during a period of unstable operation. Large disturbances of this type can occur for a variety of reasons, and if they are not modeled, the application will have a difficult time dealing with them. In this figure, it can be seen that the total calculated inventory shows significant swings in both the NDDR and DCS calculations; however, these fluctuations are less pronounced in the inventory calculated by the NDDR application.

Figure 9. Comparison of different calculated inventories

Figure 10. Comparison of results during particularly unstable plant operation

Figure 11. Comparison of relative daily loss rates calculated from raw and reconciled data during a period of unstable operation

Figure 12. Convergence of inventory estimates

As noted previously, differences of averaged data are used to determine diluent loss rates, a measure that is very important to operations personnel. Average loss rates are tracked using control charts, and if the values go out of range, a great deal of effort goes into determining whether there is a real problem in the plant and losses are out of control, or whether the excursion is just an artifact of a plant disturbance on the calculations. Figure 11 shows the scaled results of these calculated loss rates for the inventory data of Figure 10. For this figure, loss rates are calculated by differencing consecutive 3-day inventory averages. When DCS inventory data are used for these loss rate calculations, the results show large deviations from day to day, alternating between positive and negative loss rates. When the same measure is calculated using the inventory data from the NDDR application, the results are much more reasonable: the loss rate is always positive (reflecting a loss of inventory), and there is significantly less variation in its estimate.

The last aspect of the NDDR application that warrants discussion is its performance upon initialization or restart. The first time the application is run on-line, or when it is switched on again after being idle for a long period, estimates of calculated states from previous runs may be very inaccurate or nonexistent; therefore, it is important to examine how the application handles this situation. There is a tradeoff between speed of convergence and stability of the estimates, which is tuned by adjusting the weighting of the past estimates in the objective function. Figure 12 shows how the application performed upon startup after being off-line for a number of days. Because of the length of time since the application was last run on-line, no previous estimates of diluent in certain vessels were available, so the application was initialized with a cold start. The estimates quickly rise and begin to converge, but this takes place over multiple runs of the application. This is a result of strongly weighting previous estimates at the cost of speed of convergence. The tuning of the application to favor stability was done mostly by trial and error. Since the application is designed to run continuously for long periods and is set up to recover gracefully from problems involving the solution of the resulting NLP, it was determined that long-term stability of the estimates was more important.

Conclusions

This paper demonstrated the large-scale application of nonlinear dynamic data reconciliation for improving process monitoring. This relatively simple treatment of a large interconnected system succeeded in improving monitoring without the expense of additional monitoring equipment or plant downtime. Because of the way this application was developed, a framework for further improvements has already been created: new process units and modeling improvements can easily be added. Many other benefits have also resulted from this work. The NDDR application uses the available measurements more effectively and has provided insight into where additional sensors could be located to increase the accuracy of the units modeled. The development process enabled the identification of large error sources; for example, the level measurements contribute much more to the error than compositions or densities. The impact of manual lineups and insufficient measurements brought new challenges, but consistent ways of working around such problems were found. The reconciliation application generates significantly more information than just adjusted measurements.
The residual of each measurement carries information about how much that variable had to be changed to satisfy the balance equations. These residuals can be examined to identify bias in measurements or to detect other process faults. This effort has not attempted to mine the information contained in the residual data; future research will focus on making use of it. Other possibilities include applying neural networks or statistical approaches such as principal component analysis to identify faults.

When formulating a data reconciliation problem for an actual process, several issues arise that are not encountered in computer simulations. These include not only how to precondition the data but also how to transfer data to and from the application so that the information is available to plant engineers and operators. By using process simulators and physical property packages to improve the modeling of a system, by removing persistent density-related errors from the data, and by using optimization options to improve robustness, dynamic data reconciliation can be a viable option even for large systems. If implemented carefully, the results can be made widely available to other plant personnel or to other applications.

The performance of a data reconciliation application relies heavily on the quality of the process model and data. When more redundancy is present in the system, data reconciliation gives better results, and the inclusion of even simplified models contributes to this effect. Because of the types of instruments used in an industrial setting, much of the available data contains more than the random error that is effectively removed by data reconciliation. Failing to account for this, either by preconditioning the data passed to the reconciliation application or by modeling the effect within the application, can degrade performance. The strategy used at the ExxonMobil Chemical Company's Baytown Chemical Plant has demonstrated that dynamic data reconciliation can improve confidence in calculated chemical inventories, be implemented on a large scale, and operate in real time. Further extensions of the current application include using the reconciliation results in a fault/gross error detection strategy and possibly reformulating the objective function. The L1 norm may be a more suitable choice for the objective, especially when the results will be used to identify abnormal data. Because it is less affected by outliers, the L1 norm produces a result in which suspect data have a larger residual than if the standard L2 norm were used. This may prove important for a fault detection strategy, allowing higher resolution and fewer false alarms.

Acknowledgment

The authors acknowledge the support of ExxonMobil Chemical Company for this research.
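The robustness of the L1 norm can be shown with a toy illustration (not the plant application): estimating a single flow from five redundant measurements, one of which contains a gross error. The L2 estimate is pulled toward the outlier, while the L1 estimate leaves the outlier with a large residual.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# five repeated measurements of the same flow; the last is a gross error
y = np.array([100.2, 99.8, 100.1, 99.9, 140.0])

# L2 objective -> the mean; pulled strongly toward the outlier
l2 = minimize_scalar(lambda x: np.sum((y - x) ** 2)).x

# L1 objective -> the median; barely affected by the outlier
l1 = minimize_scalar(lambda x: np.sum(np.abs(y - x))).x

# under L1, the suspect measurement stands out with a larger residual
r_l2 = y - l2
r_l1 = y - l1
```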
Nomenclature

Aa = active area of tray
ai = flow coefficient (+ for flows into a vessel, - for outflows)
An = net cross-sectional area for vapor flow above the tray
Ar = total riser area per tray
β = aeration factor
Co, So = orifice constants
f = differential equation balance constraint
Fva = vapor flow parameter
g = simple bound or other inequality constraints
gc = gravitational constant
H = horizon length
h = algebraic equality constraints
hactual, hcalibrated = heights
hcd = head losses through bubble caps
hl = equivalent height of clear liquid on tray
hso = head losses through slot openings
hsh = slot height
hss = static slot seal
ht = total height of liquid on tray
hw = weir height
Kc = dry-cap head-loss coefficient
L = level
m = number of variables utilizing past information (eq 8)
m = total mass in each process unit (eqs 13-17)
mtotal, mtray, mtank = mass of fluid
ṁactual, ṁmeasured = mass flow rates (lb/h)
mA = mass of the diluent
nv = number of measured variables
ntrays = number of column trays
P(L) = polynomial strapping equation
p = pressure (at taps a and b)
Δp = total column pressure drop (eq 19)
Δp = differential pressure (eq 30)
Q = column vapor volumetric flow rate
Rs = trapezoidal slot shape ratio
ρ = density
Si = ith process flow
V = vessel volume, calculated from an empirical polynomial in % level (eqs 12 and 13)
V = column vapor mass flow rate (eq 18)
wi = weighting factor of the ith measurement
w̄j = weighting factor for past information
xA = mass fraction of the diluent
x̂j = estimate of jth state variable
x̄j = estimate of jth state variable from previous run of the application
yi = ith process measurement
ŷi = estimate of ith measurement, consistent with constraints
z = vector of auxiliary model variables

Subscripts
l = liquid
v = vapor
taps = height difference in the placement of the measurement taps
range, calibrated = refer to that of the instrument

Literature Cited

(1) Wilson, D. I.; Agarwal, M.; Rippin, D. W. T. Experiences implementing the extended Kalman filter on an industrial batch reactor. Comput. Chem. Eng. 1998, 22 (11), 1653-1672.
(2) Kuehn, D. R.; Davidson, H. Computer Control II: Mathematics of Control. Chem. Eng. Prog. 1961, 57 (6), 44-47.
(3) Knepper, J. C.; Gorman, J. W. Statistical Analysis of Constrained Data Sets. AIChE J. 1980, 26 (2), 260-264.
(4) Liebman, M. J. Reconciliation of Process Measurements using Statistical and Nonlinear Programming Techniques. Ph.D. Thesis, The University of Texas at Austin, 1991.
(5) Cleaves, G. W.; Baker, T. E. Data Reconciliation Improves Quality for Higher Level Control. Tappi J. 1987, 70 (3), 75-78.
(6) Lawrence, P. J. Data Reconciliation: Getting Better Information. Hydrocarbon Process. 1989, 68 (6), 55-60.
(7) Albers, J. E. Online Data Reconciliation and Error Detection. Hydrocarbon Process. 1997, 76 (7), 101-104.
(8) Chiari, M.; Bussani, G.; Grottoli, M. G.; Pierucci, S. Online Data Reconciliation and Optimisation: Refinery Applications. Comput. Chem. Eng. 1997, 21 (Suppl.), S1185-S1190.
(9) Mah, R. S.; Stanley, G. M.; Downing, D. M. Reconciliation and Rectification of Process Flow and Inventory Data. Ind. Eng. Chem. Process Des. Dev. 1976, 15 (1), 175-183.
(10) Stanley, G. M.; Mah, R. S. H. Observability and Redundancy in Process Data. Chem. Eng. Sci. 1981, 36, 259-272.
(11) Tamhane, A. C.; Mah, R. S. H. Data Reconciliation and Gross Error Detection in Chemical Process Networks. Technometrics 1985, 27 (4), 409-422.
(12) Heenan, W. A.; Serth, R. W. Gross Error Detection and Data Reconciliation in Steam-Metering Systems. AIChE J. 1986, 32 (5), 733-742.
(13) Crowe, C. M. The Maximum-Power Test for Gross Errors in the Original Constraints in Data Reconciliation. Can. J. Chem. Eng. 1992, 70 (10), 1030-1036.
(14) Tong, H.; Crowe, C. M. Detection of Gross Errors in Data Reconciliation by Principal Component Analysis. AIChE J. 1995, 41 (7), 1712-1722.
(15) Kim, I. W.; Kang, M. S.; Park, S.; Edgar, T. F. Robust Data Reconciliation and Gross Error Detection: The Modified MIMT Using NLP. Comput. Chem. Eng. 1997, 21 (7), 775-782.
(16) Chen, J.; Bandoni, A.; Romagnoli, J. A. Outlier Detection in Process Plant Data. Comput. Chem. Eng. 1998, 22 (4/5), 641-646.
(17) Chen, J.; Romagnoli, J. A. A Strategy for Simultaneous Dynamic Data Reconciliation and Outlier Detection. Comput. Chem. Eng. 1998, 22 (4/5), 559-562.
(18) Robertson, D. G.; Lee, J. H.; Rawlings, J. B. A Moving Horizon-Based Approach for Least-Squares Estimation. AIChE J. 1996, 42 (8), 2209-2224.
(19) Jang, S. S.; Joseph, B.; Mukai, H. Comparison of Two Approaches to On-line Parameter and State Estimation of Nonlinear Systems. Ind. Eng. Chem. Process Des. Dev. 1986, 25, 809-814.
(20) Almasy, G. A. Principles of Dynamic Balancing. AIChE J. 1990, 36 (9), 1321-1330.
(21) Liebman, M. J.; Edgar, T. F.; Lasdon, L. S. Efficient Data Reconciliation and Estimation for Dynamic Processes Using Nonlinear Programming Techniques. Comput. Chem. Eng. 1992, 16 (10/11), 963-986.
(22) Ramamurthi, Y.; Sistu, P. B.; Bequette, B. W. Control-Relevant Dynamic Data Reconciliation and Parameter Estimation. Comput. Chem. Eng. 1993, 17 (1), 41-59.
(23) Muske, K. R.; Edgar, T. F. Nonlinear State Estimation. In Nonlinear Process Control; Henson, M. A., Seborg, D. E., Eds.; Prentice Hall: New Jersey, 1997; pp 311-370.
(24) Russo, L. P.; Young, R. E. Moving Horizon State Estimation Applied to an Industrial Polymerization Process. Proceedings of the American Control Conference, June 1998.
(25) Rao, C. V.; Rawlings, J. B. Moving Horizon State Estimation. In Nonlinear Predictive Control; Allgower, F., Zheng, A., Eds.; Progress in Systems and Control Theory Series; Birkhauser Verlag: Basel, 2000; Vol. 26.
(26) Albuquerque, J. S.; Biegler, L. T. Data Reconciliation and Gross-Error Detection for Dynamic Systems. AIChE J. 1996, 42 (10), 2841-2856.
(27) McBrayer, K. F. Detection and Identification of Bias in Nonlinear Dynamic Processes. Ph.D. Thesis, The University of Texas at Austin, 1996.
(28) McBrayer, K. F.; Edgar, T. F. Bias Detection and Estimation in Dynamic Data Reconciliation. J. Proc. Cont. 1995, 5 (4), 285-289.
(29) McBrayer, K. F.; Soderstrom, T. A.; Edgar, T. F.; Young, R. E. The Application of Nonlinear Dynamic Data Reconciliation to Plant Data. Comput. Chem. Eng. 1998, 22 (12), 1907-1911.
(30) Smith, B. D. Design of Equilibrium Stage Processes; McGraw-Hill: New York, 1963.
(31) NOVA Optimization System Version 3.10 Users Manual; Dynamic Optimization Technology Products, Inc., 1995.
(32) McCabe, W. L.; Smith, J. C.; Harriott, P. Unit Operations of Chemical Engineering; McGraw-Hill: New York, 1993.

Received for review November 8, 1999
Accepted February 22, 2000

IE990798Z