Ind. Eng. Chem. Res. 1987, 26, 2482-2489

Fault Detection in a Single-Stage Evaporator via Parameter Estimation Using the Kalman Filter

David T. Dalle Molle and David M. Himmelblau*

Department of Chemical Engineering, University of Texas, Austin, Texas 78713

Fault detection and diagnosis via estimating the parameters in a process model can be successful if (a) the model represents the process, (b) the measurements of the states can be processed to obtain unbiased estimates of the model coefficients, and (c) the coefficients (for diagnosis of the causes of faults) can be tied to physical features of the equipment or process fluids. We show how the Kalman filter can be applied to a single-stage evaporator and present results from simulations indicating that faults can be detected successfully for a given acceptable level of false alarms.

Because process faults and degradation may lead to inefficient and even unsafe process operation, fault diagnosis has received considerable attention in the literature. Here, by a fault we mean degradation of some process characteristic from its acceptable range. The presence of a fault does not imply that the process is inoperative but rather that the process can only operate at a suboptimal level with respect to efficiency and safety. Failure, on the other hand, refers to the inability to operate the process at any performance level. Since faulty operation may lead to the failure of some part of the process, early and accurate detection of faults has substantial benefits.

A variety of methods and applications for fault detection have been developed and implemented with various degrees of success. Recently, Watanabe and Himmelblau (1982, 1983) have demonstrated fault detection strategies for nonlinear chemical reactors. Geiger (1984) used parameter estimation and pattern classification to identify faults in a motor/pump system. Shiozaki et al. (1985) diagnosed faults in pipeline systems by using signed directional graphs. Palowitch and Kramer (1985) have applied knowledge-based approaches and expert systems to malfunction diagnosis in a recycle reactor. Details of many techniques and applications can be found in the review by Himmelblau (1986).
Here we are concerned strictly with parameter estimation as a tool for fault detection and diagnosis, with a single-effect evaporator serving as an example of an application. Fault detection by estimating the states of a process is not of major interest here. The principle involved is that possible faults in the process can be associated with specific parameters (or states) of a mathematical model of the process, and the parameters can be related to physical features of the process (that may or may not be controllable). For example, fouling of a heat exchanger can be related to the heat-transfer coefficient in a process model, and catalyst activity can be determined from a reaction rate constant and the process temperature. Hence, to apply parameter estimation to time-dependent chemical processes, it is necessary to have a dynamic model of the process. In addition, a minimum number of independent state measurements must be available to estimate the various states and parameters. For linear systems, specific conditions exist on the model for state observability (Chen, 1970). Parameter estimation, however, is inherently a nonlinear procedure (when coupled with state estimation) since the model parameters multiply the states. For a nonlinear model, the criteria for observability are different from those for linear systems, and some parameters may not be observable and may be estimated erroneously (Astrom and Eykhoff, 1971). Nevertheless, a variety of nonlinear estimation techniques have been established and applied in various fields for parameter estimation. The question is under what circumstances they can be applied

0888-5885/87/2626-2482$01.50/0 © 1987 American Chemical Society

for fault detection and the diagnosis of the causes of the faults in a process?

Parameter estimates can be made via a nonlinear state observer combined with a least-squares parameter estimation scheme (Gelb, 1974; Sorenson, 1980; Young, 1969). For nonlinear systems, either you must observe all of the states or, with a limited number of state measurements, you must first obtain estimates of all of the states of the process and then, using these estimates, estimate the parameters of the model. In this approach, however, care must be taken in reconstructing the unobserved states in the presence of model parameters that may be changing in time due to process faults. If the states are not estimated correctly, parameter estimation based on improper state estimates may lead to meaningless results. For special cases (Watanabe and Himmelblau, 1982), though, this approach can be applied to nonlinear systems with parameters that vary in time.

Most recursive least-squares algorithms provide estimates that converge to an average value as the number of measurements increases. Beyond a certain point, new measurements no longer provide any significant information for the estimate. Hence, if a fault were to develop after the algorithm has been in operation for some time, the estimate of the fault parameter would not show any change, as the algorithm would be insensitive to new measurements. This problem can be resolved in several ways. One method is to restart the algorithm when the gains have fallen below a prespecified value. This approach often involves adding large values to the covariance matrix (thus increasing the gains) to restart the algorithm. Another common technique for keeping least-squares algorithms sensitive to new measurements is to incorporate a "forgetting factor".
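The forgetting-factor idea can be sketched as a small recursive least-squares update (an illustrative sketch with assumed names, applied to a generic regression rather than the evaporator model):

```python
import numpy as np

# Recursive least squares with forgetting factor lam (0 < lam <= 1):
# old data are down-weighted geometrically, so the estimate stays
# sensitive to new measurements even after long operation.
def rls_forgetting(phis, ys, lam=0.95):
    n = phis.shape[1]
    theta = np.zeros(n)                          # parameter estimate
    P = 1e3 * np.eye(n)                          # large initial covariance
    for phi, y in zip(phis, ys):
        k = P @ phi / (lam + phi @ P @ phi)      # gain
        theta = theta + k * (y - phi @ theta)    # correct with new residual
        P = (P - np.outer(k, phi) @ P) / lam     # covariance "forgets"
    return theta

# The estimate tracks a slope that changes halfway through the data.
rng = np.random.default_rng(3)
x = rng.uniform(0.0, 1.0, 400)
slope = np.where(np.arange(400) < 200, 2.0, 3.0)
y = slope * x + 0.01 * rng.standard_normal(400)
theta = rls_forgetting(x[:, None], y)
```

With lam = 1 this reduces to ordinary recursive least squares, which would average the two slopes instead of tracking the change.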
The purpose of this factor is to weight the current measurements more heavily relative to the old measurements so that the new estimate becomes a weighted average of current and former estimates. The weighting-factor approach is essentially a first-order filter that smooths the estimates as they are calculated.

Jang et al. (1986) claim that the "horizon" method (essentially the solution of a nonlinear programming problem of least squares subject to the dynamic and measurement equations) provides better tracking of the response of lumped systems than does Kalman filtering. However, the Kalman filter can be successfully tuned as explained below and requires less computational effort. Hence, here we focus on the use of the continuous extended Kalman filter (Kalman and Bucy, 1961) as a tool for fault detection and diagnosis. (Discrete models of the evaporator were not used, but analogous characteristics have been developed.)

If the model parameters to be estimated are constants, then the parameters themselves can be modeled by the differential equation

dp/dt = 0    (1)

Hence, for linear or nonlinear systems, the extended Kalman filter (Gelb, 1974; Jazwinski, 1970) can be applied to parameter estimation by treating the parameters as additional states augmented to the original state vector. The state vector, augmented with p, can be substituted into the extended Kalman filter equations to obtain parameter estimates. Although it may seem a bit incongruous to treat the parameters as constants in the process model and yet expect, under abnormal operating conditions, to detect faults by tracking how the parameter values change, the concept is not actually inconsistent. Under normal operating conditions, confidence limits can be placed on

Figure 1. Evaporator configuration and flow chart.

parameter (and state) values. Abnormal conditions of a sufficient degree that they lead to changes in one or more parameter values can be detected by noting the violation of the confidence limits. Specific problems with this technique are (1) a cause of a fault that would change a parameter value may be confounded because the parameter estimation scheme may result in an apparent change in more than one parameter value beyond the prespecified limits, (2) a parameter estimate may be biased because of noise, interaction, or other factors, and (3) the tuning of the filter (selection of the noise covariance matrices) has a significant influence on the magnitude of the confidence limits themselves and also affects the time of detection of the fault. We first summarize the evaporator model, then examine how the filter responses are related to filter parameters, and lastly provide some guidelines based on statistics for detecting abnormal performance for both discrete and continuous measurements.
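The parameter-augmentation idea can be illustrated with a minimal sketch. The paper uses the continuous extended Kalman filter; for brevity, this sketch uses a discrete-time variant on an assumed scalar system (x[k+1] = a*x[k] + u[k]), with the unknown parameter a appended to the state and propagated as a constant per eq 1. All names and numbers are illustrative, not the evaporator model:

```python
import numpy as np

def ekf_augmented(ys, us, a0=0.5, q_param=1e-4):
    z = np.array([ys[0], a0])          # augmented state [x, a]
    P = np.diag([1.0, 1.0])            # initial error covariance
    Q = np.diag([1e-3, q_param])       # q_param tunes how fast `a` can move
    R = 1e-2                           # measurement noise variance
    H = np.array([[1.0, 0.0]])         # only x is measured
    est = []
    for y, u in zip(ys[1:], us):
        x, a = z
        z = np.array([a * x + u, a])   # predict: parameter held constant
        F = np.array([[a, x],          # Jacobian of the augmented dynamics
                      [0.0, 1.0]])
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R            # innovation covariance (1x1)
        K = (P @ H.T) / S              # Kalman gain (2x1)
        z = z + (K * (y - z[0])).ravel()
        P = (np.eye(2) - K @ H) @ P
        est.append(z[1])
    return np.array(est)

# Simulate data with true a = 0.8 and recover it from noisy measurements.
rng = np.random.default_rng(0)
a_true, x = 0.8, 1.0
ys, us = [x], []
for _ in range(400):
    u = 1.0
    x = a_true * x + u + 0.01 * rng.standard_normal()
    ys.append(x + 0.1 * rng.standard_normal())
    us.append(u)
a_hat = ekf_augmented(np.array(ys), np.array(us))
```

The element of Q assigned to the parameter (q_param here) plays the same role as q44 and q55 in the evaporator filter discussed below: larger values track faster changes at the price of a noisier estimate.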

Two-State Evaporator Model

We consider here a single-effect evaporator so that the model is easy to analyze and interpret. In multieffect evaporators, the model equations for each unit become coupled through the material and energy balances. As the model becomes more complex, so does the task of parameter estimation, and the interpretation of the results requires the use of pattern recognition techniques. Figure 1 is a schematic of the evaporator. Quite a few articles in the literature treat the design, operation, and maintenance of evaporators (Guerreri, 1965; Esplugas and Mata, 1983; Newman, 1968; Yundt, 1984). Several authors have specifically addressed the problem of modeling evaporators for the purpose of process control (Andre and Ritter, 1968; Burdett and Holland, 1971; Harvey and Fowler, 1976). Ritter and Andre (1970) and Newell and Fisher (1972) have reduced high-order nonlinear models for a double-effect evaporator to low-order linear models for simulation and process control. Hamilton et al. (1973) applied Kalman filtering to Newell and Fisher's linearized model to estimate unmeasured states needed for a control algorithm. However, to date, no studies have been reported on parameter estimation of evaporators for fault detection. The evaporator model used here is similar to the models proposed by Andre and Ritter (1968) and Burdett and Holland (1971). Dalle Molle (1985) outlines the assumptions and equations that led to the following two-state model (four states in the augmented equations) used in this study:

where V = [UA(Ts - T) - FCp(T - TF) - QL]/ΔHv. Noise


Table I. Values of Model Parameters and Steady-State Values of States Together with Standard Deviations for the Simulated Noise

                                              noise std dev
  symbol   value                        input        measurement
  F        2.27 kg/min                  0.025        0.05
  TF       88 °C                        0.5          0.1
  xF       0.032 mass fraction
  Ts       135 °C                       0.5          0.1
  A        0.93 m2
  U        43.6 kJ/(min)(m2)(°C)
  QL       400.0 kJ/min
  ΔHv      2240 kJ/kg
  Cp       4.18 kJ/(kg)(°C)
  TB       100 °C
  β        83.3 °C/mass fraction
  Ec       0.0454 kg/min
  δ        0.06 (kg/min)/kg holdup
  W        13.82 kg                                  0.22
  T        107 °C                                    0.1
  E        0.88 kg/min
  V        1.39 kg/min
  x        0.083 mass fraction

was added to the variables as described in the next section. The parameters used in this work were based on those used by Andre and Ritter (1968) for a pilot-scale evaporator. Table I lists the parameters and the steady-state values of the two states. Because a pressure of 1 atm (101.3 kPa) was assumed for evaporator operation, TB of the solvent (water) was 100 °C. The boiling point elevation, β, was chosen to be 8.33 °C per 10% solute. A steam temperature of 135 °C was chosen to correspond to a typical low-pressure steam line of about 200 kPa gauge. A heat loss rate of 400 kJ/min corresponded to approximately 10% of the total heat transferred from the steam. Ec and δ were chosen to give reasonable holdup dynamics. At steady state, the holdup of the unit was 13.8 kg. The steady-state exit composition, determined from the steady-state temperature of 107 °C, was 8.3%, while that of the feed was 3.2% at a temperature of 88 °C.
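The tabulated steady state can be spot-checked against the overall mass balance F = E + V, assuming the exit-flow relation E = Ec + δW implied by the nomenclature (a consistency check, not part of the paper):

```python
# Steady-state values from Table I (flows in kg/min, holdup W in kg).
F, E_c, delta, W, V = 2.27, 0.0454, 0.06, 13.82, 1.39

E = E_c + delta * W         # liquid product flow; close to the tabulated 0.88
balance_gap = F - (E + V)   # near zero if the tabulated values are consistent
```

The gap works out to about 0.005 kg/min, i.e., the tabulated steady state closes the mass balance to within rounding.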

Simulations and Tuning the Filter

Detection of changes in the model parameters is a more sensitive method of fault detection than detection of changes in the states (W and T). To demonstrate the application of Kalman filtering for the evaporator, the parameters UA and xF were deemed to be the fault parameters and changed as follows:

  fault parameter               UA        xF
  % change in value             -10.0     -20.0
  type of change                ramp      square
  start time of change, min     75.0      165.0
  stop time of change, min      375.0     285.0
  noise levels                  as in Table I
Since the extended Kalman filter involves the solution of differential equations for both the state and error covariance propagation, initial conditions must be supplied for the states, parameters, and the error covariance matrix of the augmented state vector. Table II lists those values. For the states and parameters, the design or normal operating values can be used as the initial conditions. The initial error covariance matrix P(0) is usually assumed to be diagonal, with large elements to express uncertainty in the initial values of the states and parameters. We carried out simulations (not shown here) to demonstrate that the initial values of the state vector and the error covariance matrix P(0) had no effect on the discrimination ability of the algorithm. Even with poor initial guesses, the estimates converged to unbiased values.

Table II. Initial Values and "Optimal" R(t) and Q(t) for the Extended Kalman Filter

Initial Values: W(0) = 13.8 kg; the remaining states and parameters start at their normal operating values, and P(0) is diagonal.

"Optimal" Noise Covariances:

  R(t) = diag(0.05, 0.013)          (measurement)
  Q(t) = diag(0, 0, 0, 0.003, q55)  (input)

q11 corresponds to the feed rate, F; q22, to the feed temperature, TF; q33, to the steam temperature, Ts; q44, to UA; q55, to xF.

To represent a real process in the simulations, noise (w) is added to the process inputs and noise (v) to the process measurements. Because the evaporator is a lumped system, the input noise is smoothed to some degree. Usually measurement noise varies more rapidly than process input noise because of the mechanisms generating the respective noises. For the purpose of simulating a real process, noise was added to a measurement or input at a frequency representative of the rate at which it would enter a real system. Thus, measurement noise was added continuously to the process outputs to generate the values of the measured variables that were used in the estimation. Measurement noise was generated by running pseudorandom numbers through a Gaussian shaping function with zero mean and a specified standard deviation. Random noise with slower variations was added to the process inputs (feed rate, feed temperature, and steam temperature) to simulate input fluctuations. These noise sequences were generated by adding random noise samples every 2 min and smoothing the intermediate values. Uncorrelated random noises were added to each of the inputs starting at different points in time so that none of the inputs was in phase with any other input. This noise was for simulating the fluctuations in the process inputs. Since these variables were measured (and later have pure white measurement noise added), the assumptions about the noise in the filter were not violated. Table I lists the values of the standard deviations used for the process input noise and for the measurement noise of the outputs and inputs in the evaporator example.

Normally, the measurement noise covariance matrix, R(t), is assumed to be diagonal. The variances of each measurement can be guessed or estimated from sample output values. The input noise covariance matrix, Q(t), is also assumed to be diagonal. For parameter estimation, the diagonal elements for the process inputs (qii, i = 1, ..., d) are taken to be 0.
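The noise generation just described can be sketched as follows (the function names and the 0.1-min step are assumptions; the paper does not give its integration step):

```python
import numpy as np

def measurement_noise(n, std, rng):
    # Gaussian white noise, added to every measured output sample.
    return std * rng.standard_normal(n)

def input_noise(n, std, dt, redraw_every=2.0, phase=0.0, rng=None):
    # Draw a random value every `redraw_every` minutes and smooth the
    # intermediate points by linear interpolation, so the input noise
    # varies more slowly than the measurement noise.
    t = np.arange(n) * dt + phase
    knots = np.arange(t[0], t[-1] + redraw_every, redraw_every)
    return np.interp(t, knots, std * rng.standard_normal(knots.size))

rng = np.random.default_rng(1)
v = measurement_noise(1000, std=0.1, rng=rng)     # e.g. temperature output
w = input_noise(1000, std=0.5, dt=0.1, rng=rng)   # e.g. feed temperature
```

Starting each input's noise sequence at a different `phase` keeps the simulated inputs out of phase with one another, as described above.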
The diagonal elements for the fault parameters (qii, i = d + 1, ..., d + s) are usually assigned values that correspond to the rate of change expected for the parameter. Gelb (1974) suggests that if a parameter, pi, is expected to change by Δpi over a time interval, Δt, then as a first guess

qii = (Δpi)^2/Δt    (3)
With properly tuned Q(t) and R(t) matrices, the fault parameter estimates are, over a reasonable range, most affected by the size of the sampling interval. In practice, the values of R(t) and Q(t) are best determined by trial and error ("tuning the filter") using the guidelines cited above. The "optimal" values of R(t) and

Figure 2. UA estimate as a function of q44: q44/0.34 = 10^-2 (-), 10^-3 (--), 10^-4 (- -); true value (- - -).

Figure 3. xF estimate as a function of q44: q44/0.34 = 10^-2 (-), 10^-3 (--), 10^-4 (- -); true value (- - -).
Q(t) are ones that give reasonably fast and smooth estimates for the states and parameters. The initial values of the augmented state vector, xa(0), the error covariance matrix, P(0), and the "optimal" R(t) and Q(t) values used in this work are listed in Table II. The values for R(t) were chosen to be the variances used in the generation of the output noise. Q(t) was chosen to give reasonable trajectories and variances for the fault parameter estimates for this particular set of faults. The values of the elements of Q(t) and R(t) influence the biases, variances, and speed of convergence of the state and parameter estimates, i.e., the performance of the filter. For a given amount of parameter change, Δpi, large qii will track fast changes (Δt small), and smaller qii values (Δt large) will track slower changes. This effect can be demonstrated by varying one qii value (while holding the other filter parameters constant) and observing the estimates. Figures 2 and 3 illustrate the effect, for the case of continuous measurements, of varying q44 from 0.34 × 10^-2 to 0.34 × 10^-4 on the estimates of the fault parameters UA and xF for a ramp change in UA and two step changes in xF. The other filter parameters are as shown in Table II. Figure 2 shows that the speed of convergence of the estimate of UA decreases as q44 decreases. However, the variance of the estimate also decreases when q44 decreases. Thus, there exists a trade-off between the speed of tracking and the size of the variance of the estimate for the extended Kalman filter (as there is also in recursive least-squares filtering). Figure 3 shows that q44 has no effect on the speed of convergence or variance of the

Figure 4. Holdup estimate as a function of r11: r11 = 0.05 (-), 0.50 (--), 5.0 (- -); measured value (- - -).

Figure 5. Temperature estimate as a function of r11: r11 = 0.05 (-), 0.50 (--), 5.0 (- -); measured value (- - -).
estimate of xF, but the estimate does appear to be increasingly biased downward as q44 decreases during the period of time that UA is changing. This bias is due to the dependence of the estimate of xF on the estimate of UA in the filter calculations. Thus, for small q44, the estimate of xF will appear to be biased since the estimate of UA has not reached its true value during the period of change in UA. As UA returns to its correct value, the estimate of xF is no longer biased. The estimates of the holdup in the evaporator show a slight bias during the period of time when the estimate of UA is slow to converge (q44 = 0.34 × 10^-4) because the estimate of the holdup is based not only on the measured value but also on a model-based prediction of the state, which involves a calculation using a value of UA that has not yet converged to its proper value. However, the estimate of the temperature shows no bias. The different outcomes with respect to bias are due to the relative influence of each parameter on each state estimate. The effect of the value of q55 on the state and parameter estimates is similar to that of q44.

The effect of the measurement noise covariance, R(t), on the state and parameter estimates can also be examined by varying r11 or r22 individually. Since the values of r11 and r22 reflect the certainty in the measurements of the holdup and temperature, respectively, varying r11 or r22 will directly affect the estimates of the holdup and temperature and, consequently, indirectly affect the estimates of the fault parameters UA and xF. The measurement noise covariance matrix, R(t), appears in the gain of the filter equations as R(t)^-1. Hence, increasing an element of R(t), say rii, will decrease the weight given to the ith measurement.

Figure 7. xF estimate as a function of q55 with τ = 8: true value (- - -).
defined and representative of the process and (2) the probability density function (or its characteristics) for the states is known under both normal and abnormal operating conditions. Tests for faults might involve (1) violations of assigned error bounds associated with each measurement based on experience, (2) analysis of the noise statistics of the individual measurements, (3) violation of confidence limits based on the normal operating conditions, and (4) analysis by nonparametric methods of the trajectory of states and parameters. Among the process features that have been suggested as useful tools for fault detection are residuals, innovations, states, parameters, autocorrelation, and zero crossings, and the statistical tests that might be applied include equality of covariance, bias, spectral analysis, runs, whiteness, and likelihood ratio. For the evaporator, we are interested in detecting and diagnosing the causes of faults from online measurements with (1) a very low level of false alarms and (2) a high probability of detection of faults when they occur. Specific techniques to accomplish these goals for dynamic processes are reviewed by Willsky (1976), Isermann (1984, 1985), Pau (1981), and Himmelblau (1978).

We focus here on applying rules established under normal operating conditions to the process parameters of the evaporator to determine if operating conditions have become abnormal. As indicated by the simulations described in the previous section, model parameters can be estimated in real time subject to delays and possible biases. A classical way to detect shifts in the value of a parameter is to set confidence limits, based on normal operating conditions, about its expected value by assuming that a certain probability density represents the parameter (which is a stochastic variable). You usually assume (but only rarely test), as we did, that a parameter is ergodic so that samples in time can be treated as samples at one time. You also usually assume (but also only rarely check) that the parameter estimates, which are themselves stochastic variables, are normally distributed, as we have done here, but other procedures mentioned below can be used in the face of evidence of nonnormality. Confidence limits, set at P = 0.99 or 0.997 (to achieve a low false alarm rate) and based on the known normal operating period, can be used to decide whether a fault has occurred by noting when a parameter violates a confidence limit. To determine the confidence limits for a parameter estimate, it is necessary first to obtain an estimate of the variance of the parameter. If the estimation procedure has been running under normal operating conditions, then the variance can be calculated from a series of

Figure 8. Extended Kalman filter estimate of (a, top) UA and (b, bottom) xF with confidence intervals: (-) estimate, (--) true value, (- - -) confidence limits.

Figure 9. Extended Kalman filter estimate of (a, top) UA and (b, bottom) xF for example III: (-) estimate, (--) true value, (- - -) confidence limits.
the estimates, or it may be taken directly from the covariance matrix P in the filter. In principle, a joint confidence region should be employed for testing the parameters, such as the relation used for normally distributed parameter estimates at a particular time,

(Π - p)^T X^T X (Π - p) = σp^2 q F(1-α)

but this is only an approximate relation in any case, and individual (though even more approximate) confidence limits are easier to graph and visualize. Another possibility for setting limits for fault detection is the Cramer-Rao bound (Cramer, 1951), which uses the estimated standard deviation of the error in the parameter. There is a geometric relation between the Cramer-Rao bound and the insensitivity because the former is the largest change that can be made in a parameter value while still remaining within the confidence ellipsoid with all other parameters free to change at the same time.

The Kalman filter is said to be nonrobust (in the sense that a spurious observation Yt can adversely affect the estimate of a state and/or parameter) because in the estimation equations the estimated state/parameter is an unbounded function of Yt, and the covariance matrix does not depend directly on the measurements. By using a Bayesian approach and examining what happens to the posterior probability distribution of the estimates when a discrepancy between the prior probability distribution and the measurements arises, Meinhold and Singpurwalla (1985) concluded that for small residuals the posterior probability density is unimodal and can be approximated by a Student t-density centered at the mode with n degrees of freedom, where n is the number of measurements. This analysis forms another basis for selecting confidence limits if n is small.

Because the probability distribution of the parameter estimates may be unknown, a third way to make tests or fix confidence limits developed under normal operating conditions might be to employ a nonparametric method such as the signs test, the number or length of runs up and down, and similar "distribution-free" tests (see Hollander and Wolfe (1973) and Bradley (1968)). These nonparametric methods, of course, are much less efficient than methods based on an assumed or known distribution for the parameters.

Dalle Molle (1985) suggested taking into account the length of time and the amount by which a parameter estimate falls outside the individual confidence limits. For example, if it is known from experience that a parameter occasionally exceeds the 0.99 confidence limits by 3 units for a period of 10 min (and the process otherwise still operates normally), then the integrated deviation over that window of 10 min is 30 unit·min. Thus, an integrated deviation beyond the 0.99 confidence limits that is less than 30 is considered normal, and any integrated deviation over 30 would be considered a fault.

To use confidence intervals to detect faults via changes in parameter estimates, the Kalman filter (or any other algorithm) must be properly tuned to avoid false alarms. Delays in the estimation of one parameter can lead to a bias in the estimate of another. Figure 8 shows how a properly tuned filter leads to estimates that are not biased or delayed for the simultaneous fault parameter changes discussed previously. The filter parameters for this case are those in Table II, and the resulting parameter variances were 0.07 and 2.5 × 10^-5 for UA and xF, respectively. Simultaneous
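The confidence-limit test and the integrated-deviation criterion described in this section can be sketched as follows (all numbers are illustrative assumptions, not the paper's values):

```python
import numpy as np

def confidence_limits(normal_est, z=2.576):          # z for P = 0.99
    # Limits from a window of estimates taken under known normal
    # operation, assuming the estimates are normally distributed.
    mu = normal_est.mean()
    sd = normal_est.std(ddof=1)
    return mu - z * sd, mu + z * sd

def integrated_deviation(est, dt, lo, hi):
    # Area (unit*min) of the excursion beyond the confidence limits;
    # brief, small excursions give a small area and are ignored.
    excess = np.maximum(est - hi, 0.0) + np.maximum(lo - est, 0.0)
    return excess.sum() * dt

rng = np.random.default_rng(2)
normal = 40.5 + 0.3 * rng.standard_normal(200)       # e.g. UA, normal window
lo, hi = confidence_limits(normal)

est = 40.5 + 0.3 * rng.standard_normal(400)          # 400 min of estimates
est[150:300] -= np.linspace(0.0, 4.0, 150)           # ramp fault in UA
area = integrated_deviation(est, dt=1.0, lo=lo, hi=hi)
fault_declared = area > 30.0                         # threshold in unit*min
```

The threshold (30 unit·min here, echoing the example in the text) is what separates tolerable excursions from declared faults.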

having tuning parameters is the ability to tune the response of each state and parameter to any desired speed independently of how the others may be tuned.

The parameter estimates for the evaporator example were tested to determine whether a fault was or was not present. Criteria for determining the presence of a fault were suggested, taking into account the noise in the estimates and properly selected filter parameters. These criteria would also be useful in rejecting parameter drifts beyond the confidence limits due to unmeasured disturbances. The evaporator examples demonstrated that there may be a need for some heuristics in the analysis of the estimates to avoid misdiagnosing nonexistent faults.

Nomenclature

A = area of heat transfer
Cp = heat capacity of solvent in evaporator
d = number of inputs
E = flow rate exiting the evaporator
Ec = constant flow rate exiting the evaporator
F = feed flow rate to evaporator
F(1-α) = variance ratio for confidence coefficient of 1 - α
ΔHv = heat of vaporization of solvent
pi = parameter used to estimate qii
p = vector of parameters of model
P(t) = error covariance matrix
P = probability
q = degrees of freedom
qii = element of Q(t)
QL = rate of heat loss to surroundings by evaporator
Q(t) = input noise covariance matrix
rii = element of the matrix R(t)
R(t) = measurement noise covariance matrix
s = number of parameters
t = time
T = temperature of holdup in evaporator
TB = temperature of normal boiling point of solvent
TF = temperature of feed to evaporator
Ts = temperature of steam in calandria of evaporator
U = heat-transfer coefficient
v = vector of uncorrelated white measurement noise added to process measurements
V = vapor flow rate from evaporator
w = vector of uncorrelated white noise added to parameters or to inputs
W = mass of fluid holdup in evaporator
x = mass fraction of solute in solvent in holdup of evaporator
xF = mass fraction of solute in solvent in feed to evaporator
x = vector of states of system
xa = augmented vector of states of system
X = vector of measurements of independent variables
Yt = spurious observation

Greek Symbols

α = significance level (1 - α is confidence coefficient)
β = boiling point elevation of solvent per mass fraction solute
Δ = change
δ = fraction proportional to holdup
σ = standard deviation
τ = sampling interval
Π = true parameter value vector

Subscripts

F = feed
p = parameter
1-4 = component number

Literature Cited

Andre, H.; Ritter, R. A. Can. J. Chem. Eng. 1968, 46, 259.
Astrom, K. J.; Eykhoff, P. Automatica 1971, 7, 123.
Bradley, J. V. Distribution-Free Statistical Tests; Prentice-Hall: Englewood Cliffs, NJ, 1968.
Burdett, J. W.; Holland, C. D. AIChE J. 1971, 17, 1080.
Chen, C. T. Introduction to Linear System Theory; Holt, Rinehart and Winston: New York, 1970.
Cramer, H. Mathematical Methods of Statistics; Princeton University Press: Princeton, NJ, 1951.
Dalle Molle, D. T. "Fault Detection via Parameter Estimation in a Single Effect Evaporator", M.S. Thesis, University of Texas, Austin, 1985.
Esplugas, S.; Mata, J. Chem. Eng. 1983, Feb, 59.
Geiger, G. Presented at the IFAC Congress, July 1984.
Gelb, A. Applied Optimal Estimation; MIT Press: Cambridge, MA, 1974.
Guerreri, G. Br. Chem. Eng. 1965, 13, 524.
Hamilton, J. C.; Seborg, D. E.; Fisher, D. G. AIChE J. 1973, 19, 901.
Harvey, D. J.; Fowler, J. R. Chem. Eng. Prog. 1976, April, 47.
Himmelblau, D. M. Fault Detection and Diagnosis in Chemical and Petrochemical Processes; Elsevier: Amsterdam, 1978.
Himmelblau, D. M. "Fault Detection and Diagnosis-Today and Tomorrow", presented at the Kyoto Meeting, International Federation of Automatic Control, Oct 1986.
Hollander, M.; Wolfe, D. A. Nonparametric Statistical Methods; Wiley: New York, 1973.
Isermann, R. Automatica 1984, 20, 387.
Isermann, R. Presented at the 7th IFAC Conference, Sept 17-20, 1985.
Jang, S.-S.; Joseph, B.; Mukai, H. Ind. Eng. Chem. Process Des. Dev. 1986, 25, 809.
Jazwinski, A. H. Stochastic Processes and Filtering Theory; Academic: New York, 1970.
Kalman, R. E.; Bucy, R. S. J. Basic Eng. 1961, 83, 95.
Meinhold, R. J.; Singpurwalla, N. D. Report TR-85/8, Sept 16, 1985; George Washington University, Washington, DC.
Newell, R. B.; Fisher, D. G. Ind. Eng. Chem. Process Des. Dev. 1972, 11, 213.
Newman, H. H. Chem. Eng. Prog. 1968, 64, 33.
Palowitch, B. L.; Kramer, M. A. Presented at the AIChE National Meeting, Chicago, Nov 10-15, 1985.
Pau, L. F. Failure Diagnosis and Performance Monitoring; Marcel Dekker: New York, 1981.
Ritter, R. A.; Andre, H. Can. J. Chem. Eng. 1970, 48, 696.
Shiozaki, J.; Matsuyama, H.; Tano, K.; O'shima, E. Int. Chem. Eng. 1985, 25, 561.
Sorenson, H. W. Parameter Estimation; Marcel Dekker: New York, 1980.
Watanabe, K.; Himmelblau, D. M. Int. J. Systems Sci. 1982, 13, 137.
Watanabe, K.; Himmelblau, D. M. AIChE J. 1983, 29, 243.
Willsky, A. S. Automatica 1976, 12, 601.
Young, P. C. Control Eng. 1969, Oct, 119.
Yundt, B. Chem. Eng. 1984, 91 (Dec), 46.

Received for review December 23, 1986
Revised manuscript received August 3, 1987
Accepted August 9, 1987