Ind. Eng. Chem. Res. 1988, 27, 294-303
PROCESS ENGINEERING AND DESIGN

Sequential and Nonsequential Process Data Coaptation

Burugupalli V. R. K. Prasad and James L. Kuester*
Department of Chemical Engineering, Arizona State University, Tempe, Arizona 85287
Two process coaptation approaches, sequential and nonsequential, for data adjustment and estimation were investigated for a microcomputer implementation for an indirect liquefaction process. The steady-state Kalman filter, a sequential estimator, was found to be suitable for on-line process flow data estimation. The technique also serves as a means to carry out integration of the process data. The integrated data, free of systematic errors, could be used by the nonsequential technique. A reduced balance approach, using optimization routines, was needed for the nonsequential data coaptation problem. Adjustment of the composition variables and calculation of the process model parameters that satisfy material balance constraints are the principal benefits of the optimization approach. Several gross measurement error detection schemes, reported in the literature, were also tested for their suitability to the two techniques.

A thermochemical conversion process to convert various biomass materials to diesel-type fuels has been under development at Arizona State University since 1975 (Kuester, 1984). An indirect liquefaction approach is used, i.e., gasification to synthesis gas (pyrolysis) followed by liquefaction of the synthesis gas (Fischer-Tropsch). A supervisory control system for this biomass indirect liquefaction process pilot plant was developed by utilizing a multitasking microcomputer, the IBM 9000. The control system features data acquisition, reduction, data coaptation, model development, simulation, and off-line/on-line optimization. Process data coaptation involves adjustment of the erroneous measurements and estimation of the unmeasured and unknown process variables. This is an
important link in the data acquisition and control system setup, as the estimated variables are the foundation for modeling, design, process optimization, control system development, and cost accounting. Reliability of data is of importance in pilot-plant operation, for the data are used in the selection of alternate process designs and operating conditions.

The general data adjustment problem can be classified into data reconciliation, data coaptation, and fault detection and rectification (Mah et al., 1976). In process plants, all of the above three problems occur to a varying degree. Practical considerations limit the number of sensors in a process plant, and very few situations exist where all the process variables are measured, i.e., where data reconciliation alone is needed. Data coaptation is generally the most practiced method of data adjustment. The calculation of the unmeasured variables and adjustment of the measured variables can also be termed estimation or state estimation, following Kalman filtering theory (Sage and Melsa, 1971). The error between the measured variables and the estimated variables, i.e., the measurement error residual, provides a criterion to qualify the estimates and to detect any instrument errors. Thus, data adjustment and rectification problems are inherently tied together.

Kuehn and Davidson (1961) were the first to propose a least-squares minimization procedure for the process data adjustment problem when all the process variables are measured. Vaclavek (1969) and Mah et al. (1976) utilized a reduced balance scheme to simplify the estimation/adjustment problem in single-component flow problems. Hlavacek (1977) discusses in detail the various techniques used for data adjustment in multicomponent streams.
These are broadly classified as (1) direct substitution of the variables from process constraints (Beckman, 1982), (2) minimization via Lagrangian multipliers (Vaclavek et al., 1976), (3) direct solution by nonlinear programming (Umeda et al., 1971), (4) minimization by linear programming (Malthiesen, 1974), and (5) the Chebyshev min-max criterion. Crowe et al. (1983) used a matrix projection method to reduce a linear multicomponent balance problem to a form similar to the reduced balance scheme. Murthy (1973, 1974) and Madron et al. (1977) used the least-squares criterion to calculate the extents of reactions, in addition to data adjustment. Umeda et al. (1971) used a nonlinear programming code, the Complex method of Box, to adjust process data in a chemical plant, in addition to finding unknown parameters of a chemical reaction. Himmelblau (1984) proposed the use of interval arithmetic for data reconciliation problems.

Very few industrial-scale data adjustment applications have been reported. Clair et al. (1975) report one such application in a large 300,000 ton/year chemical plant. Data over a 24-h period were collected and used in the data adjustment problem carried out off-line on a mainframe. Woodward (1984) reported details about a process monitoring and control system under development at Du Pont. The system, in addition to performing data acquisition and control, also integrates the data. The integrated data were then used for data adjustment. The adjustment was, however, limited to process flow variables that are linearly constrained. The use of process simulators for data adjustment was proposed by Stephenson and Shewchuk (1984).

All of the above nonsequential data adjustment applications treated the case of only one set of measurements being available. But with the use of on-line computers for data acquisition, large amounts of process data are acquired in a short time. The process flows and temperatures
are typically sampled at a few seconds to 5-min intervals. Composition data from chromatographs can typically be obtained at 15-min intervals. Stanley and Mah (1977) demonstrated that Kalman filtering techniques can be used to estimate the flows and temperatures of a process network, using one set of process data at a time on a continuous basis. Rooney et al. (1979) proposed the use of this sequential technique for data reconciliation, parameter estimation, and fault detection. The difference between the measured and the estimated variables, i.e., the measurement error residuals, is applied in the detection of leaks and instrument faults. Development of the estimation theory and a discussion of gross measurement error detection schemes will be followed by the application of the sequential data estimation and the nonsequential estimation (general nonlinear optimization approach) of the process flows/compositions in the Indirect Biomass Liquefaction Process.
Theory

The general weighted least-squares objective function for correcting a set of measurements is

F = \sum_{i=1}^{N} W_i (\hat{x}_i - x_i)^2    (1)

where W_i is the weighting term, \hat{x}_i is the adjusted process variable, x_i is the raw measured process data, and N is the total number of measurements that are to be smoothed. Based on mass and energy balance considerations, the process constraints are of the form

f(\hat{x}) = 0    (2)
where f_k is a general nonlinear function of \hat{x} and \hat{x} is the state vector, N × 1. By use of Lagrangian multipliers, \lambda_k, the K constraint equations (eq 2) can be augmented to form a general nonlinear unconstrained minimization problem of the form

\Phi(\hat{x}, \lambda) = \sum_{i=1}^{N} W_i (\hat{x}_i - x_i)^2 + \sum_{j=1}^{K} \lambda_j f_j(\hat{x})    (3)
where \Phi is the augmented function that can be minimized by using any nonlinear unconstrained minimization method. When the process constraints are linear, eq 2 is of the form

A\hat{x} = 0    (4)
A direct analytical solution to the least-squares objective function (eq 1) with the above linear constraints is given by Kuehn and Davidson (1961) as

\hat{x} = [I - R A^T (A R A^T)^{-1} A] x    (5)
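Equation 5 lends itself to a direct NumPy implementation. The sketch below (not from the paper) adjusts three flows around a single node; the incidence matrix, measured values, and noise covariances are illustrative assumptions.

```python
import numpy as np

# Illustrative one-node network with balance F1 + F2 - F3 = 0 (assumed)
A = np.array([[1.0, 1.0, -1.0]])   # process incidence matrix
x = np.array([10.2, 5.1, 14.7])    # raw measured flows (assumed values)
R = np.diag([0.25, 0.09, 0.36])    # noise covariance, R_ii = 1/W_i (assumed)

# eq 5: x_hat = [I - R A^T (A R A^T)^-1 A] x
x_hat = (np.eye(3) - R @ A.T @ np.linalg.inv(A @ R @ A.T) @ A) @ x

print(x_hat)        # adjusted flows
print(A @ x_hat)    # ~0: the node balance is now satisfied exactly
```

Note that the stream with the largest variance (here stream 3) absorbs the largest share of the imbalance, which is exactly the weighting behavior that eq 1 encodes.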
where A is the process incidence matrix, N × N; I is a unit matrix, N × N; and R is the noise covariance matrix with diagonal elements R_ii = 1/W_i. For energy and multicomponent stream composition adjustment problems, the process constraints f(x) are in general nonlinear (quadratic) equations. Kuehn and Davidson (1961) suggested a Taylor series approximation to linearize the nonlinear constraints (eq 2) to get an approximate linear solution. Others (Hlavacek, 1977) have used different nonlinear programming techniques to solve the problem.

When the process and measurement noise are considered to be normal Gaussian variables, it has been shown that a discrete Kalman filter represents essentially a recursive solution of the linear least-squares problem (Sorenson, 1980). It was also shown to be a maximum likelihood
estimate and an unbiased minimum variance estimate (Sage and Melsa, 1971). A linear steady-state system with linear measurement equations and driven by normal Gaussian process and measurement noise (i.e., a quasi-steady-state system) is represented as

x(k+1) = x(k) + w(k)    (6)

z(k+1) = H x(k+1) + v(k+1)    (7)
The linear Kalman filtering algorithm for the above steady-state process is given by

\hat{x}(k+1) = \hat{x}(k+1/k) + K(k+1)[z(k+1) - H\hat{x}(k+1/k)]    (8)

\hat{x}(k+1/k) = \hat{x}(k)    (9)

K(k+1) = P(k+1/k) H^T [H P(k+1/k) H^T + R]^{-1}    (10)

P(k+1/k+1) = [I - K(k+1) H] P(k+1/k)    (11)

P(k+1/k) = P(k/k) + Q    (12)
where w is the process noise vector, N × 1; Q is the process noise covariance matrix, N × N; z is the measurement vector, M × 1; H is the measurement matrix that relates z with x, M × N; K is the filter gain matrix, N × M; P is the covariance matrix of x, N × N; and R is the covariance matrix of the measurement noise. The measurement matrix H and the noise covariance matrices Q and R were assumed to be time invariant.

The Kalman filter equations are considered to be of a predictor-corrector form. The correction in eq 8 is proportional to the error residual, or innovation, r, where

r = z(k+1) - H\hat{x}(k+1/k)    (13)

The covariance of the innovations is given by [H P H^T + R] (Sage and Melsa, 1971). This property of the residuals can be used as a criterion for detecting filter divergence, faulty measurements, instrument biases, or process leaks. The application of these two functions is discussed in detail in the later sections.

In the development of the steady-state discrete Kalman filter algorithms for process data estimation, the process was considered to be at a steady state, and a quasi-steady model (eq 6) was considered to suffice. Goldman and Sargent (1971) discuss the validity of such an approximation and the application of quasi-steady-state models. For any process data estimation problem, this model is valid, as the problem here is to determine the state variables that satisfy the mass balance constraints, which is a steady-state phenomenon. Any deviation from the steady state can result in the divergence of the estimates and can be detected. Such a check is a much needed criterion for the data adjustment problem.
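Equations 8-13 can be exercised with a small sketch. The two-flow system, noise levels, and measurement sequence below are illustrative assumptions, not the plant model from the paper.

```python
import numpy as np

def steady_state_kf(x_hat, P, H, Q, R, measurements):
    """Run eqs 8-12 over a measurement sequence (quasi-steady-state model)."""
    for z in measurements:
        P_pred = P + Q                                          # eq 12
        x_pred = x_hat                                          # eq 9
        K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)  # eq 10
        r = z - H @ x_pred                                      # eq 13 (innovation)
        x_hat = x_pred + K @ r                                  # eq 8
        P = (np.eye(len(x_hat)) - K @ H) @ P_pred               # eq 11
    return x_hat, P

# Illustrative: two flows, both measured directly, true values [10, 5]
rng = np.random.default_rng(0)
H = np.eye(2)
zs = [np.array([10.0, 5.0]) + rng.normal(0.0, 0.5, 2) for _ in range(200)]
x_hat, P = steady_state_kf(np.zeros(2), np.eye(2), H,
                           Q=1e-4 * np.eye(2), R=0.25 * np.eye(2),
                           measurements=zs)
print(x_hat)   # settles close to [10, 5]
```

With a small Q/R ratio, the filter effectively averages the noisy measurements over many samples, which is the "integration of the process data" role noted in the abstract.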
Measurement Error Detection

The least-squares data adjustment problems, both sequential and nonsequential, are based on the assumption that the measurement errors are of a normal Gaussian distribution. When gross errors occur, due to instrument failures, biases, or leaks in the process equipment, the least-squares procedure distributes the errors among all the variables, rendering the data as a whole unreliable. Ripps (1965) and Malthiesen (1974) suggested a time-consuming sequential procedure wherein one measurement at a time is discarded to find the ones causing the most trouble. Hlavacek (1977) recommends testing the residuum, which in a reduced balance scheme at node i is

\Delta u_i = \sum_{j=1}^{N} a_{ij} x_j    (14)

When there are no leaks and no gross measurement errors, the expected mean of the residuum is zero; i.e.,

E[\Delta u] = 0    (15)

Nogita (1972) proposed a test function that was proved to be deficient by Mah et al. (1976). These authors in turn proposed using a test function similar to the one used by Hlavacek (1977). Clair et al. (1975) qualified the process data estimates based on the corrections to the measurements carried out to balance the process constraint equations. The test criterion utilized, i.e., the "standardized square error", is

S.S.E. = (corrected data - raw data)^2 / (uncertainty domain)^2    (16)

where the uncertainty domain is a value determined from a knowledge of the process instrument errors. A measurement is considered to be suspect, in their scheme, when the above error ratio is greater than 1. Such a measurement is marked to help operators cross-check the estimates, find leaks, and schedule instrument maintenance.

Since the measurement noise was considered to have a normal Gaussian distribution, the chi-square (\chi^2) distribution can be used as a test for the goodness of fit, i.e., to test whether the measurement errors are normal Gaussian errors (Sorenson, 1980). This criterion has been proposed by several investigators to locate gross errors in process measurements, but the test functions used varied to a certain extent. Romagnoli and Stephanopoulos (1981) proposed the use of an h test function,

h = r^T [H P(k+1/k) H^T + R]^{-1} r    (17)

where the terms on the right-hand side are composed of the error residuals and their covariance matrix. The calculated value of h is tested against a chosen critical value to accept or reject a measurement data set as good. When there are M measurements, h will have a \chi^2 distribution with M degrees of freedom. Thus, at a specified level of significance \alpha,

prob(h > \chi^2_{1-\alpha}(M)) = \alpha    (18)

This means that the probability that a particular h exceeds the critical \chi^2 value belonging to the 1 - \alpha level of confidence for M degrees of freedom is \alpha. The test criterion for finding whether a gross error exists is to see whether h exceeds the \chi^2 value picked for a level of confidence 1 - \alpha. They also proposed that a subset containing one or more measurements (m) can be screened with a simpler test function (P test) of the form

P = \sum_{i=1}^{m} r_i^2 / \sigma_i^2    (19)

where r_i is the individual error residual and \sigma_i is the standard deviation of measurement i. Mah and Tamhane (1982) proposed using a similar test statistic for detecting the gross errors. Rooney et al. (1979) proposed the use of the \chi^2 distribution test function to detect gross measurement errors and process leaks. A measurement is considered to be good if the statistic S does not exceed a chosen \chi^2 value (for the required level \alpha) with 1 degree of freedom. The test function S is

S = r_i^2 / a_i    (20)

where a_i is the ith diagonal term in the innovation covariance matrix [H P(k+1/k) H^T + R]. The actual implementation of these detection schemes and the criterion for the choice of the best test will be discussed in the sections dealing with sequential and nonsequential estimation.

Table I. Initial Process Stream Information (stream topology and flow/composition measurement flags for streams 1-23: flows are measured on all streams except 4, 11, 15, 18, and 20; compositions are measured on all streams except 4 and 18)

Figure 1. Biomass indirect liquefaction process graph for data coaptation study.
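The h test of eq 17 reduces to a few lines of code. The sketch below uses illustrative residuals and an illustrative innovation covariance; 7.815 is the tabulated 95% chi-square point for 3 degrees of freedom.

```python
import numpy as np

CHI2_95_3DOF = 7.815   # chi-square critical value, alpha = 0.05, M = 3

def h_test(r, S, critical=CHI2_95_3DOF):
    """eq 17: h = r^T S^-1 r; flag the data set if h exceeds the critical value."""
    h = float(r @ np.linalg.inv(S) @ r)
    return h, h > critical

S = np.diag([0.04, 0.09, 0.01])        # innovation covariance [H P H^T + R] (assumed)
good = np.array([0.1, -0.15, 0.02])    # residuals consistent with the noise
bad = np.array([1.0, -0.15, 0.02])     # stream 1 carries a gross bias

print(h_test(good, S))   # small h, not flagged
print(h_test(bad, S))    # large h, flagged as a gross error
```

Because the covariance is diagonal here, h is just the sum of squared standardized residuals, so this same sketch also illustrates the P test of eq 19.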
Indirect Liquefaction Application

(a) Process Stream Classification. Figure 1 depicts the process graph for the indirect liquefaction process. The process mainly consists of a gasification system (pyrolyzer) and a liquefaction system (Fischer-Tropsch reactor). The gasification system is comprised of a dual fluidized bed reactor system with connecting circulating solid transfer loops. One fluidized bed is used as a feedstock pyrolyzer, while the second bed operates in a combustion mode to heat the circulating solid media. Cellulosic materials are continuously fed to the pyrolyzer and flashed to a synthesis gas consisting of paraffins, olefins, carbon monoxide, hydrogen, and carbon dioxide. The gas passes through a cyclone-scrubber system to a compressor. From the compressor, the gas can be distributed to the pyrolyzer and/or the liquefaction reactor. A mixture of steam and recycle pyrolysis product gas is used as the fluidizing medium. The liquefaction system consists of a catalytic reactor to produce paraffinic liquid fuel. The reactive components in the pyrolysis product gas are converted to a primary paraffinic hydrocarbon phase and a secondary alcohol-water phase.

The topology of the streams and other initial information, i.e., which of the streams have measured/unmeasured flows and/or compositions, is listed in Table I. Since not all of the stream variables, i.e., flows and compositions, are measured, it is necessary to evaluate the observability of the problem. When the process constraints are nonlinear, the observability matrix rank depends on the choice of x. Stanley and Mah (1981) termed this local observability and showed that unless this observability is satisfied, the estimates calculated by any method, sequential or nonsequential, will not be unique. Vaclavek and Loucka (1976)
Table II. Categorization of Stream Measurements

category  flow      composition  streams
1         measd     measd        1-3, 5-10, 12-14, 16, 17, 19, 21-23
2         unmeasd   measd        11, 15, 20
3         measd     unmeasd      (none)
4         unmeasd   unmeasd      4, 18

Figure 2. Reduced balance scheme of the first degree.
have proposed a classification scheme based on the process digraph. Such a scheme was used in this work to determine the observability of the process variables and also to determine the redundancy of the measured variables.

Nodes 1 and 6 in the process graph (Figure 1) are reactor nodes representing the pyrolyzer and the Fischer-Tropsch reactors. Stream 23 is a fictitious stream added to follow the changes at node 4, which represents a storage tank. Vaclavek and Loucka (1976) first classified the process streams into four categories based on initial information about the stream measurements (Table I). By use of this stream classification method, all 23 streams were sorted into 1 of the 4 categories and are listed in Table II. A reduced balance scheme (Vaclavek, 1969) was carried out to eliminate all the streams with unmeasured and not overdetermined flow variables. The resulting balance graph is called a reduced balance scheme (RBS) of the first degree. Figure 2 depicts the reduced graph that represents the reduced balance of the first degree. All the measured flows, grouped as overdetermined or not overdetermined, are listed in Table III. The classification of the unmeasured flows is a much more complicated procedure in which each of the unmeasured flows is considered to be a measured flow in a sequential, one at a time procedure. The whole RBS procedure was repeated at each step, and if the resulting reduced graph was greater by a node, then the unmeasured flow is determinable.

For classification of the compositions, Vaclavek and Loucka (1976) carried out a further reduction of the RBS of the first degree to eliminate all the remaining streams of the third category. The resulting reduced graph is then considered to be an RBS of the second degree. The principal deviation from their categorization scheme was not to have fictitious streams to account for the species changes taking place in the reactor nodes. If the fictitious streams were present, a wrong classification of these streams could have eliminated the reactor nodes from the RBS of the first degree. For example, if five fictitious streams were chosen to represent the five reactions occurring in the pyrolysis reactor, and if these streams belonged to the second category, where the flows are unmeasured but stream compositions are known, the node with these fictitious streams would have been eliminated in the RBS of the first degree. With the proposed modifications, however, all the overdetermined flows can be properly identified and the flow data adjusted. The reactor nodes, due to the presence of the unknowns, were eliminated in the RBS of the second degree procedure. Figure 3 represents the process digraph of the reduced balance scheme of the second degree. All the measured compositions were thus classified as overdetermined or not overdetermined and are listed in Table III.

Table III. Classification of Stream Variables as Measured and Unmeasured (flows; compositions). Measured, overdetermined: flows 1-3, 5-7, 10, 12-14, 16, 19, 21-23; compositions 3, 7-10, 12-14, 23. Measured, not overdetermined: compositions 1, 2, 5, 6, 11, 15-17, 19-22. Unmeasured, determinable: compositions 4, 18.

Figure 3. Reduced balance scheme of the second degree.
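The first classification step, sorting streams into the four Vaclavek-Loucka categories from the initial measurement information, is mechanical. A sketch follows, with flags mirroring Tables I and II (the flag sets are taken from those tables as reconstructed here).

```python
# Vaclavek-Loucka categories from initial measurement flags (Table I):
# 1 = flow and composition measured, 2 = composition only,
# 3 = flow only, 4 = neither measured.
flow_measured = set(range(1, 24)) - {4, 11, 15, 18, 20}
comp_measured = set(range(1, 24)) - {4, 18}

categories = {1: [], 2: [], 3: [], 4: []}
for s in range(1, 24):
    f, c = s in flow_measured, s in comp_measured
    if f and c:
        categories[1].append(s)
    elif c:
        categories[2].append(s)
    elif f:
        categories[3].append(s)
    else:
        categories[4].append(s)

print(categories[2], categories[4])   # [11, 15, 20] [4, 18], as in Table II
```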
The unmeasured compositions were found to be determinable by carrying out the sequential tests as described above for the flows. Thus, the reduced balance schemes of the first and second degree of Vaclavek and Loucka (1976), along with a modification based on a physical understanding of the system, were used to classify all the process streams.

(b) Process Constraints Setup. Process constraint equations around a node can be either a total mass, a molecular species, or an atomic species balance (Clair et al., 1975). The total mass balance at any node i is

\sum_{j=1}^{N} a_{ij} F_j = 0    (21)

where a_{ij} is an element of the process incidence matrix at node i, F_j is the flow in and out of node i, and N is the total number of measured streams. The molecular species balance around node i is

\sum_{j=1}^{N} \sum_{m=1}^{L} x_{km} \delta_{mj} a_{ij} F_j + \sum_{j=1}^{N} a_{ij} \delta_{kj} F_j = 0  for all k = 1, K    (22)

where x_{km} is the mass fraction of component k in composition stream m, \delta_{mj} is 0 if composition m does not relate to flow stream j or 1 if it relates, \delta_{kj} is the Dirac delta, which is 1 if flow j is a single-species stream (of component k) or 0 if not, K is the total number of components, and L is the total number of independent composition streams. Additional constraints exist on the stream compositions from the requirement that the sum of the mass fractions of each multicomponent stream is unity; i.e.,

\sum_{k=1}^{K} x_{km} - 1 = 0  for all m = 1, L    (23)

For a species balance around a reactor node, where changes in species are involved, an atomic species balance is required to account for all the species. The balance is of the form

\sum_{j=1}^{N} \sum_{m=1}^{L} x_{km} \delta_{mj} a_{ij} F_j + \sum_{j=1}^{N} a_{ij} \delta_{kj} F_j + \sum_{r=1}^{R} S_{kr} \xi_r = 0  for k = 1, K    (24)

where R is the number of independent reactions involved in the reactor, S_{kr} is the coefficient of the stoichiometric matrix, and \xi_r is the extent of reaction r. When a stream contains only a single species, it is considered as a separate entity and not generalized as a multicomponent stream. Changes due to the reactions were accounted for by incorporating the extent-of-reaction terms.

A major departure from other investigators (e.g., Romagnoli and Stephanopoulos (1980), Vaclavek et al. (1976)) was that when different streams (e.g., from a splitter) have the same composition, only a single set of composition variables was used. In all the other studies, separate variables for the composition of each stream were used, and extra constraints were imposed to equate the compositions between streams. This can result in a very large increase in the number of stream composition variables. When splitters were involved, e.g., node 3 in Figure 1, since all the streams entering or leaving have the same composition, only a total mass balance around the node was needed. The above modified nonlinear form of the balance equations, i.e., eq 22-24, was chosen to minimize the number of state variables and to simplify the data coaptation problem.

The matrix rank determination method (Smith and Missen, 1982) was used to determine a set of independent reactions (stoichiometric matrices) for the two reactors, namely, the pyrolyzer and the Fischer-Tropsch reactor. While five independent stoichiometric reactions were sufficient to model the conversion in the pyrolyzer, the Fischer-Tropsch reactor needed a minimum of six independent reactions to account for all the species. The stoichiometric matrices for the two reaction schemes are presented in Tables IV and V. When a particular species is not present or does not participate in the reaction (e.g., nitrogen), all the elements of the row corresponding to that species are zero.
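For the nonsequential (optimization) route, constraints of the form of eqs 21 and 23 are typically coded as residual functions handed to the solver. A minimal sketch with a toy one-node, two-composition-stream system follows; the numbers are illustrative, not the plant's.

```python
import numpy as np

def constraint_residuals(F, X, A):
    """Stack eq 21 (node mass balances) and eq 23 (mass fractions sum to 1).

    F : stream flows, shape (N,)
    X : mass fractions, shape (L, K) for L multicomponent streams
    A : incidence matrix, shape (nodes, N)
    """
    mass_balance = A @ F                   # eq 21: sum_j a_ij F_j = 0
    fraction_sum = X.sum(axis=1) - 1.0     # eq 23: sum_k x_km - 1 = 0
    return np.concatenate([mass_balance, fraction_sum])

# Toy example: one mixing node, two multicomponent streams of three species
A = np.array([[1.0, 1.0, -1.0]])
F = np.array([2.0, 3.0, 5.0])
X = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3]])
print(constraint_residuals(F, X, A))   # all zeros: constraints satisfied
```

An optimizer would drive this residual vector to zero (e.g., via the Lagrangian of eq 3) while keeping the adjusted F and X close to their measured values.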
In the indirect liquefaction process, there are a total of eight mass balance nodes, four molecular balance nodes, and two atomic balance nodes in the process constraint setup.

Table IV. Stoichiometric Matrix Coefficients for the Pyrolysis Reaction Set

species     rxn 1     rxn 2     rxn 3     rxn 4     rxn 5
H2         -4.032    -4.032    -2.016    -6.048    -1.481
CO       -112.040   -56.020    28.010  -112.040   -35.262
CO2        88.020    44.010   -44.010    88.020    11.394
CH4         0.000    16.042     0.000     0.000     0.000
C2H4       28.052     0.000     0.000     0.000     0.000
C2H6        0.000     0.000     0.000    30.068     0.000
N2          0.000     0.000     0.000     0.000     0.000
H2O         0.000     0.000    18.016     0.000     0.000
biomass     0.000     0.000     0.000     0.000    25.349
diesel      0.000     0.000     0.000     0.000     0.000
propanol    0.000     0.000     0.000     0.000     0.000

Table V. Stoichiometric Matrix Coefficients for the Fischer-Tropsch Reaction Set

species     rxn 1     rxn 2     rxn 3     rxn 4     rxn 5     rxn 6
H2          0.000     2.016     0.000     0.000     0.000     0.000
CO         28.010     0.000     0.000     0.000     0.000     0.000
CO2       -22.005     0.000     0.000   -22.005   -22.005    -0.658
CH4        24.063   -32.084    32.084   -56.147    -8.021   149.509
C2H4        0.000     0.000    28.052     0.000     0.000     0.000
C2H6      -30.068    30.068   -60.136    60.136   -30.068  -297.577
N2          0.000     0.000     0.000     0.000     0.000     0.000
H2O         0.000     0.000     0.000    18.016     0.000     0.000
biomass     0.000     0.000     0.000     0.000     0.000     0.000
diesel      0.000     0.000     0.000     0.000     0.000   148.726
propanol    0.000     0.000     0.000     0.000    60.094     0.000

There are 9 independent multicomponent streams with a total of 11 components in each. In this study, since gas chromatographs are used to measure the gas stream compositions, either all or none of the components in a stream are measured. (Note: For compatibility with the GC measurements, all of the streams are considered to have 11 components, and when a stream does not have a particular species present, its measured mass fraction is zero.) The total number of process constraint equations is therefore 83, composed of eq 21-24. Eighteen of the flows along with seven of the independent stream compositions are measured. The total number of state variables to be adjusted and calculated is 133, while the total number of measurements is 95.

Testing of the coaptation techniques was carried out using simulated data to gauge the performance and the computational requirements of the two methods. Process data for the data adjustment problem were obtained by adding normal Gaussian noise to a deterministic data set. The sample deterministic process data set was obtained by steady-state process (FLOWTRAN) simulation. A 5% standard deviation for flows and a 0.5% standard deviation for composition data were used for this generation.

(c) Sequential Estimation. The sheer size of the data adjustment/estimation problem, with the number of state variables N being 133 and the number of measurements plus constraints being 178, prohibits the use of any nonlinear filtering techniques. Stanley and Mah (1977) discuss the application of such a nonlinear filter and the problems encountered in the data adjustment problem of a process network. In their method, the nonlinear constraints are linearized by a Taylor series approximation at each time step. In addition, the implementation of this filter, also called an extended Kalman filter, requires the inversion of an M × M matrix, where M is 178 for the problem of interest. This matrix inversion procedure, at each sampling step, poses a severe computational load, is subject to numerical problems, and as such is not feasible for a mainframe application, let alone for the IBM 9000 microcomputer. The reduced problem of a linear Kalman filter for process flow estimation was considered in this study. The practical advantages realized in using such a reduction are discussed in the later sections.

Since a total of 23 flows are present in the process (Figure 1), the number of state variables N is 23. Of these
flows, a total of 18 are measured. There are eight process constraint equations (see eq 21) or pseudomeasurements (Rooney et al., 1979). The overall measurement equation for the process flows and pseudomeasurements is

z = [z_1; z_2] = [H_1; H_2] x + [v_1; v_2] = H x + v    (25)
where z_1 represents the measurement vector, M_1 × 1; H_1 is the measurement matrix, M_1 × N; v_1 is the noise vector, M_1 × 1; M_1 is the number of flow measurements; z_2 is the pseudomeasurement vector, M_2 × 1; H_2 is the matrix composed of the process incidence matrix terms, M_2 × N; and M_2 is the number of process nodes. The process coaptation problem was determined to be observable, with the rank of the measurement matrix H being equal to 23.

With a set of initial conditions \hat{x}(0/0) and the initial covariance P(0/0), the Kalman filter quantities \hat{x}(k+1/k), P(k+1/k), K(k+1), \hat{x}(k+1), and P(k+1/k+1) can be estimated sequentially for each z(k+1) for k = 0, 1, 2, etc. Tuning of the filter to avoid divergence after a large number of iterations can be achieved by a proper choice of the noise covariance matrices Q and R. The filter gains and the state covariance matrices are independent of the actual process states, x, and can be precomputed off-line. Precomputation vs on-line computation depends on the trade-off between storage space and computational speed limitations.

Since the Kalman filter equations are based on a quasi-steady-state model, i.e., the process is driven by small disturbances and does not have any other dynamics, the changes in the state variables are small. A very good estimate of the initial \hat{x}(0/0) is available from the process control set-point information and/or other instrument settings. Under these conditions, the terms in the initial covariance of the state variables P(0/0) are quite small (i.e., there is more confidence in the initial x), and the Kalman filter gains become essentially constant after a few iterations. In all of the initial trials, the filter gains reached a constant value after only three to four iterations. This suggests that a constant-gain filter is sufficient in data adjustment problems. The constant filter gains can be precalculated at the start of a run with the chosen noise matrices Q and R.
However, when sudden large disturbances occur or when steady-state set points are changed (i.e., the process changes from one steady state to another), the filter estimates diverge due to the presence of incorrect tuning parameters. When the filter starts to diverge, the tuning parameters Q and R can be updated, via operator interface and/or automatic error detection schemes, and the constant filter gains recalculated. The process and measurement noise were assumed to be noninteracting Gaussian noise; i.e., errors in one variable do not affect the other measurements/states. The resulting measurement and process error covariance matrices are diagonal. The error covariance matrix R was assumed to correspond to a 5% standard deviation. Kalman filter estimation does not depend very strongly on the actual values of these terms, but the chosen values should signify the maximum expected error. Rooney et al. (1979) recommend that the values be larger than the best guesses, as the Kalman filter estimates are less sensitive on the high side. Process noise, through Q, is used to account for the disturbances entering the system (modeling errors, numerical errors, etc.) and to avoid divergence in the filter estimates. The covariance terms in Q are also chosen to tune the filter. The ratio of the process noise to the measurement noise (q/r) affects the filter estimates through the filter gains. When this ratio is large, process disturbances are relatively large and the estimates tend to depend more on the measurements. For a low q/r ratio, measurement noise is more significant and the estimates depend more on the process model/past measurements. Very low ratios make the filter sluggish, and very large ratios make the filter depend only on the measurements (resulting in a repeated application of reconciliation without carryover of information from the past). Both of these conditions are undesirable. The best q/r ratio can be determined by trials that give the optimum filter performance (Prasad and Prasad, 1984).
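The influence of the q/r ratio on the gain can be seen even in a scalar sketch (all numbers here are assumed for illustration):

```python
# Scalar sketch of how the q/r ratio sets the steady-state Kalman gain:
# a large ratio gives a gain near 1 (estimates track the measurements),
# a small ratio gives a small gain (estimates lean on the model / past data).
def scalar_steady_gain(q, r, n_iter=50):
    p = 0.01                       # small initial state variance
    k = 0.0
    for _ in range(n_iter):
        p_pred = p + q             # predict (state transition = 1)
        k = p_pred / (p_pred + r)  # gain
        p = (1.0 - k) * p_pred     # update
    return k

for ratio in (0.25, 1.0, 4.0):
    print(ratio, round(scalar_steady_gain(q=ratio, r=1.0), 3))
```

For q/r = 4 this toy gain comes out near 0.83, the same order as the gain quoted for the high-noise case below; the agreement is illustrative only, not a reconstruction of the actual filter.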
Three initial values of 4, 1, and 0.25 were chosen in the initial study to determine the above ratio. With the values of R fixed, the diagonal terms of Q were calculated for each of the q/r ratios. The steady-state Kalman filter gains were calculated with the initial process state covariance. The filter gains after 20 iterations were chosen as the constant filter gains. Process measurements, for this study, were generated with a standard deviation of 10%. These large values were used to simulate the presence of gross errors in some of the process measurements. Three test criteria, the h test (eq 17), the S test (eq 20), and the standardized square error (eq 16), were investigated as the possible best candidates for gross measurement error detection. The optimum process noise ratio was considered to be the one that provides good Kalman filter estimates and with which the error detection criteria work best. For each randomly generated measurement set z(k + 1), the state estimates x̂(k + 1) are calculated by using the precalculated steady-state filter gains (eq 8). The calculated process flow estimates were tabulated along with the calculated values of the three test functions. Figure 4 depicts the measured and the estimated process flows for flow 1 (state variable 1) for the three noise ratios. With high process noise values, the estimates follow the measurements closely, due to high filter gains (e.g., K11 = 0.81). For low process noise values, i.e., the low q/r ratio, the estimates depend on the past measurements (model) and the corrections are of smaller magnitude. These estimates also represent an integrated value of the past measurements.
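A single constant-gain update step, as described above, then amounts to the following (the gain matrix and measurements here are assumed for illustration, not the computed pilot-plant values):

```python
import numpy as np

# One constant-gain measurement update:
# xhat(k+1) = xhat(k+1/k) + K [z(k+1) - H xhat(k+1/k)].
# Gain, measurements, and dimensions are assumed for illustration.
H = np.eye(2)                    # states measured directly
K = 0.3 * np.eye(2)              # assumed precomputed steady-state gain
x_pred = np.array([10.0, 20.0])  # x(k+1/k); equals x(k/k) when A = I
z = np.array([11.0, 19.0])       # new measurement set z(k+1)
x_new = x_pred + K @ (z - H @ x_pred)
print(x_new)                     # moves 30% of the way toward z
```

With a gain of 0.3 the estimate moves 30% of the innovation toward the measurement, which is the "low q/r" behavior the text describes.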
Figure 4. Effect of process noise to measurement noise.

Table VI. Effect of Process Noise to Measurement Noise (error reduction ratio)

data set   q/r = 0.25   q/r = 1    q/r = 4.00
 1          67.0         26.44      14.58
 2          12.845        5.67       3.507
 3          63.77        31.5        9.99
 4          14.57         4.755      2.662
 5          14.070        7.1636     5.3072
 6           5.495        3.028      2.191
 7           4.137        3.105      3.128
 8          27.988        5.1658     2.011
 9           7.652        3.2557     2.9801
10           3.452        1.743      1.28
11          23.37       137.53       5.42
12           7.386        4.77       2.91
13           7.442        5.6        4.857
14          22.52         4.219      1.6865
15          12.47         6.978      3.1264
16           5.406        2.718      1.62
17          22.738       65.43      36.12
18           9.19         6.656      4.344
19          44.987        6.369      2.656
20          12.08         5.195      2.23
A measurement error reduction function was also calculated to find the best noise parameters and is given by

error reduction ratio = (measured - actual)² / (estimated - actual)²    (26)
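As a small sketch of eq 26 on made-up numbers, a flow with true value 100, measured as 108, and filtered to 102 gives a reduction ratio of 16:

```python
# Eq 26 on illustrative numbers: a ratio above 1 means the filtered
# estimate is closer to the true value than the raw measurement was.
def error_reduction_ratio(measured, estimated, actual):
    return (measured - actual) ** 2 / (estimated - actual) ** 2

r = error_reduction_ratio(108.0, 102.0, 100.0)
print(r)  # 8^2 / 2^2 = 16.0
```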
The reduction ratios obtained with the three different q/r ratios are listed in Table VI. These results indicate that the best results, i.e., the largest reduction, were obtained with the lowest q/r noise ratio. This can be attributed to the integrated values of the estimates being closer to the actual process data. An analysis of the gross error detection tests indicates that the h test function can only be used as an overall index to qualify the estimates of the entire measurement set at the kth instant. Very large values of the calculated χ² reflect a bad measurement set or an unsteady-state process condition. Individual measurement errors cannot be identified by the h test. Even when the h test passes a measurement set, individual measurement errors could still exist. The S test, however, is suited to identifying errors in specific measurements. Table VII presents the calculated values of these and the standardized square error (S.S.E.) at the three q/r ratios for flow variable 1. Measurements with an S-test value greater than the χ² critical value of 3.84 and an S.S.E. greater than 1 can be suspected in the two tests. Actual gross errors, errors greater than the 5% error limit,
Table VII. Calculated Error Test Functions for Flow 1

             q/r = 4.0           q/r = 1.0
data set   S.S.E.   S test      S.S.E.
 1         1.506    9.264       7.735
 2         0.400    2.505       0.731
 3         0.113    0.815       0.792
 4         0.110    0.448       0.216
 5         0.001    0.008       0.003
 6         0.388    1.558       1.563
 7         0.129    0.808       1.128
 8         0.005    0.029       0.188
 9         0.331    1.595       1.334
10         0.009    0.038       0.001
11         0.001    0.022       0.024
12         0.111    0.694       0.4978
13         0.849    4.089       4.3619
14         1.167    6.247       3.624
15         0.283    1.045       0.551
16         0.427    2.01        1.642
17         0.006    0.002       0.122
18         0.043    0.161       0.087
19         0.116    0.867       0.728
20         0.108    0.461       0.250
are measurements 1, 2, 4, 5, 7, 8, 13, 15, 16, and 19. The calculated S.S.E. ratios for the high q/r ratio are quite small, and the test failed to detect most of the gross errors. This is due to the close proximity of the measured and the estimated variables. The calculated values of the S test were also small due to a large Q, and hence this test also fails to detect the gross errors. For the low process noise case, both of the test criteria identify the gross errors. The chosen α level (0.05) is, however, barely sufficient, and to have a tighter error identification scheme, a level of significance α equal to 0.1 is recommended. A close examination of the calculated test function values revealed two interesting phenomena: (1) even a good measurement after the occurrence of a gross error sometimes failed the test, and (2) successive gross errors of the same measurement and with equal magnitude could not be detected with these tests (see measurements 2 and 5). On the basis of the above initial trials, the low process noise ratio, i.e., q/r equal to 0.25, was chosen as the best choice for the process data estimation problem. The calculated values of the two individual test criteria, i.e., the S and S.S.E. tests, used in the above trials are recommended to facilitate operator diagnosis. The above process data estimation scheme, using the above tuning and test parameters, was tested with normal Gaussian measurements. The measurements were generated with a 5% standard deviation. Due to the inherent nature of the random generation scheme, a few of the measurements still contained gross errors. The filter estimates and the flow measurements for flow 1 are presented in Figure 5. The gross errors in measurements 1 and 13 were identified successfully by the S test. All the other measurements were within the normal error range. The Kalman filter estimates were smooth and were distributed close to the actual value.
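The S-test comparison against a χ² critical value can be sketched as follows; the threshold for α = 0.05 with 1 degree of freedom is about 3.84, and the sample S-test values below are a few entries taken from the q/r = 0.25 column of Table VII:

```python
from scipy.stats import chi2

# Flagging suspect measurements with the chi-square critical value used by
# the S test. The sample S-test values are a few Table VII entries.
alpha = 0.05
threshold = chi2.ppf(1.0 - alpha, df=1)    # ~3.84 for alpha = 0.05, 1 dof
s_values = [33.006, 0.206, 2.189, 7.467]   # sample S-test values
flags = [s > threshold for s in s_values]
print(round(threshold, 3), flags)
```

Raising α to 0.1, as recommended above, lowers the threshold (to about 2.71) and tightens the error identification.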
The measurement error residuals of the pseudomeasurements were quite small in all the tests. When gross errors are present, the calculated values of the unmeasured variables are erroneous, but the pseudomeasurement error residuals never reflected this. Hence, these cannot be used as a criterion for detection of leaks. Any leaks in the system can, however, be traced by off-line testing of the tabulated test functions. The steady-state filter gain calculation (matrix inversion and calculation) took about 15 min of computer time for 20 iterations. State estimation took about 5 s for every iteration. With the current sampling time of 30 s, the data
Table VII (continued)

             q/r = 1.0      q/r = 0.25
data set     S test        S.S.E.    S test
 1           20.658        20.018    33.006
 2            1.965         0.119     0.206
 3            2.209         1.306     2.189
 4            0.433         0.2835    0.402
 5            0.003         0.003     0.000
 6            3.436         0.569     5.493
 7            2.995         4.409     7.223
 8            0.471         1.730     2.819
 9            3.491         1.705     2.795
10            0.005         0.023     0.042
11            0.089         0.120     0.223
12            1.3717        1.060     1.7688
13           10.563        11.967    19.026
14            9.06          4.643     7.467
15            1.033         0.848     1.185
16            3.945         3.523     5.625
17            0.206         0.731     1.083
18            0.186         0.017     0.025
19            1.783         2.072     3.300
20            0.545         0.155     0.226
Figure 5. Filter estimates with normal measurement errors.
acquisition and adjustment steps would take only about 20% of the computer time. The filter gain calculations are needed at the start-up of the control system or for fine-tuning the filter at intermediate times. The computational times for calculating the filter gains can be reduced by relaxing the number of iterations used for the steady-state gains (from the current 20 to about 5). These studies with a steady-state Kalman filter demonstrated that this technique could be used for data adjustment, estimation of the unmeasured variables, and detection of gross errors. The application of the general nonlinear optimization technique for nonsequential data coaptation, which can adjust the measured and estimate the unmeasured flow and composition variables, is discussed in the next section.

(a) Nonsequential Estimation. The nonlinear optimization code VF01AD from the Harwell subroutine library was chosen to solve the data coaptation problem by using a nonsequential approach. This code is reported to be one of the best codes that can handle nonlinear optimization problems with nonlinear equality and inequality constraints (Schittkowski, 1980). The code is based on the Fletcher-Powell penalty technique (Fletcher, 1975). The data adjustment problem of adjusting the redundant measurements and calculating the unmeasured or unknown variables (e.g., the extents of reaction) can be carried out by using the nonlinear programming approach. The weighted least-squares objective function is given by
eq 1, and the process constraints are given by eq 21-24. However, practical implementation of this minimization task with equality constraints poses severe problems due to the large number of variables and constraints involved. Equality constraints are typically treated by augmenting the objective function with the constraints using Lagrangian multipliers and solving the unconstrained minimization problem. This augmentation, however, increases the size of the problem, and in methods that require first and second derivatives (or the Hessian), a very large amount of space is needed to solve the problem. It was found that with the chosen code, the problem would not fit in the memory of the IBM 9000 or even in the 2-Mbyte virtual memory of the IBM 4341. Hence, a reduced balance approach (Vaclavek and Loucka, 1976) was used to simplify the problem. In the first step, all the overdetermined flows and compositions (Table III) were adjusted to minimize the least-squares objective function and to satisfy the corresponding process constraint equations. In the second step, the unmeasured flows, compositions, and the unknown extents of reaction were calculated from the measured/adjusted process flows and compositions. The weighting terms in the objective function (eq 1) were chosen to be the inverse of the square of the standard deviation, i.e.,
wi = 1/σi²

In the indirect liquefaction pilot-plant process setup, some of the stream flows are stopped at times; i.e., these flows have zero values. For such zero flows and compositions, small values must be assigned to the variables and to their standard deviations to avoid numerical overflow/underflow problems. The small values for the standard deviation impose a severe penalty on the objective function when these variables are inadvertently changed, thus forcing the estimates to be near zero. In the reduced balance setup, 17 equality constraints (eq 21-24) and 50 inequality constraints were needed to solve the problem. The inequality constraints were imposed to force nonnegativity of the estimated variables. A number of trials, with 5% flow and 0.5% composition errors, indicated that the measurement errors were reduced substantially and the estimates satisfy the process constraints. The S.S.E. error test was found to be suitable for the detection of gross errors. The study also revealed that this particular estimation method was subject to some glaring deficiencies. When more than one parallel external stream was present at a node, the corrections to the measurements were distributed over all these flows based on the relative weights. These corrections were applied even when a particular flow measurement did not have any errors. When one of these flows was large and had a correspondingly large standard deviation, most of the correction occurred on that stream flow. All the other stream flows were not corrected to the extent needed. This can be attributed to the fact that most of these external streams, in this particular setup, do not have any composition redundancy; i.e., they are not involved in the composition balance constraint. The internal streams, on the other hand, with composition redundancy, were found to be adjusted well.
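The first-step reconciliation can be sketched with a general-purpose constrained optimizer. The single 3-stream mixing node and the numbers below are a made-up example, not the pilot-plant flowsheet, and SLSQP stands in for the Harwell code:

```python
import numpy as np
from scipy.optimize import minimize

# First-step reconciliation sketch: minimize sum w_i (xhat_i - x_i)^2 with
# w_i = 1/sigma_i^2, subject to a linear balance constraint A*xhat = 0 and
# nonnegativity. A hypothetical 3-stream mixing node is used.
A = np.array([[1.0, 1.0, -1.0]])          # node balance: F1 + F2 - F3 = 0
measured = np.array([10.2, 5.1, 14.6])    # raw flow measurements
sigma = 0.05 * measured                   # 5% standard deviations
w = 1.0 / sigma ** 2                      # weights, w_i = 1/sigma_i^2

res = minimize(lambda x: np.sum(w * (x - measured) ** 2),
               x0=measured, method="SLSQP",
               constraints=[{"type": "eq", "fun": lambda x: A @ x},
                            {"type": "ineq", "fun": lambda x: x}])
print(res.x, (A @ res.x)[0])              # adjusted flows satisfy the balance
```

Because every flow in this toy node appears in a balance, each measurement receives a correction proportional to its variance; streams that appear in no constraint, like the parallel external streams discussed above, would pass through unadjusted.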
Process data estimates were also not good in the presence of systematic errors, i.e., when measurement errors of opposite signs nullify each other. The process constraint deviations calculated in the presence of such systematic errors are small and lead to smaller measurement corrections and poor estimates. All the adjusted process flows and compositions, along with the measured but not overdetermined variables, are saved for calculating the unmeasured/unknown variables (using eq 21-24). A generalized least-squares nonlinear optimization code, LMCHOL from the AMDLIB library (Hillstrom, 1976), was used for this purpose. The code was found to converge rapidly to the global minimum, independent of the initial starting point. Computational times for these two steps were about 10 and 16 min on the IBM 9000 computer. Hence these calculations are more suited for off-line or low-priority on-line implementation.
Conclusions

All the process flow and composition streams in the indirect liquefaction process were categorized as measured/unmeasured and as overdetermined, not overdetermined, or determinable variables. Two different data coaptation approaches, the sequential and the nonsequential estimation techniques, were investigated. Kalman filter estimation was the sequential technique investigated for on-line application. The Kalman filter estimation technique, with steady-state filter gains, was shown to provide adjusted process data from measured flows and to calculate the unmeasured flows. The technique also serves as a means to carry out the integration of the process data. The integrated data, free of systematic errors, could be used further for the general nonlinear adjustment problem. Current industrial systems use the least-squares (nonsequential) approach for flow data coaptation. The Kalman filter estimator is, however, more suitable for on-line flow data estimation since it combines data estimation and integration of process data: data at each sampling instant are adjusted, rather than hourly integrated data, at comparable computational times. The general optimization approach for data adjustment was found to be suitable for off-line application. The process constraint equations were set up to satisfy total, molecular, and atomic species balances. A two-step procedure, or reduced balance approach, was chosen to handle the data coaptation problem. In the first step, all the overdetermined/adjustable variables, i.e., flows and compositions, were adjusted. A nonlinear constrained optimization routine was used to minimize the weighted least-squares objective function and to obtain the adjusted data. The unmeasured and unknown variables were then computed from the adjusted process data in the second step using a general nonlinear unconstrained optimization code.
The extents of reaction for the pyrolysis and Fischer-Tropsch reactions were computed in the second step of the reduced balance scheme. These calculated values satisfy the material balance constraints and are useful for process modeling and analysis. The nonlinear optimization path for data coaptation was found to be sensitive to gross measurement errors. Estimation of some flows representing parallel external streams was found to be erroneous due to the lack of composition redundancy for these flows. Several gross measurement error detection schemes reported in the literature were investigated. Two test criteria based on χ² values were found suitable for detecting gross errors in the sequential Kalman estimation problem: the h test was suitable for the whole measurement data set and the S test for individual measurements. The standardized square error criterion was inadequate for detecting gross errors in Kalman estimation but was suitable for the nonsequential optimization-type estimation problems. The two coaptation approaches supplement each other, and when implemented together, they would provide reliable process data and parameters. Implementation of these techniques is to be an integral part of the overall supervisory control system under development for the biomass indirect liquefaction process.
Acknowledgment

The research described in this paper was sponsored by the U.S. Department of Energy (Office of Industrial Programs). Implementation of the IBM equipment was part of a joint study program with Arizona State University and International Business Machines Corp.

Nomenclature
ai = term in innovation covariance matrix, eq 20
aij = element of process incidence matrix, eq 21
A = process incidence matrix, eq 4
E = expectation operator, eq 15
f = nonlinear function of the constraints, eq 2
F = nonlinear function of x, eq 1
Fj = flow of stream j, eq 21
h = test function, eq 17
H = measurement matrix relating z to x, eq 7
H1, H2 = measurement matrices, eq 25
I = identity matrix, eq 5
K = number of constraint equations, eq 3; total number of components, eq 22
K = Kalman filter gain matrix, eq 10
L = total independent composition streams
m, M = total number of measurements
M1, M2 = total number of measurements of categories 1 and 2
N = total number of streams, eq 22; total number of adjusted streams, eq 1; total number of state variables
P = test function, eq 19
P = covariance matrix of x̂
Q = covariance matrix of process noise w
r = vector of measurement error residuals ri or innovations, eq 13
R = number of independent reactions, eq 24
R = measurement error covariance matrix, eq 10
R = matrix with diagonal terms = 1/Wi, eq 5
skr = element of stoichiometric matrix, eq 24
S = test function, eq 20
Δu = residuum of process constraints, eq 14
v = measurement noise vector, eq 7
Wi = weighting terms, eq 1
w = process noise vector, eq 6
xi = raw measured process data, eq 1
x̂i = estimated or adjusted variable
x = state variable vector
x(k+1) = state variable vector at the (k+1)th sampling instant
x̂(k+1/k) = predicted estimate, eq 8
Xkm = composition variable, eq 23
z = measurement vector, eq 7
Greek Symbols
α = significance level = 1 - confidence coefficient
δk, δri = Dirac delta functions, eq 22 and 24
λ = Lagrangian multiplier
ξr, ξj = extents of reactions r and j, eq 24
σi = standard deviation of measurement i
Σ = summation
φ = optimization function, eq 3
χ² = chi-square statistic
Abbreviations
RBS = reduced balance scheme
SSE = sum of squared error

Literature Cited

Beckman, J. R. Chem. Eng. Commun. 1982, 15, 357-365.
Clair, R.; Lebourgeois, F.; Caujolle, J. P.; Bornard, G. Proceedings of the 6th IFAC World Congress, Boston, 1975.
Crowe, C. M.; Garcia Campos, Y. A.; Hrymak, A. AIChE J. 1983, 29, 881-888.
Fletcher, R. Nonlinear Programming 2; Mangasarian, O. L., et al., Eds.; Academic: New York, 1975.
Goldman, S. F.; Sargent, R. W. F. Chem. Eng. Sci. 1971, 26, 1535-1553.
Hillstrom, K. E. Nonlinear Optimization Routines in AMDLIB; Argonne National Laboratory: Chicago, 1976.
Himmelblau, D. M. Paper presented at the American Institute of Chemical Engineers Meeting, Anaheim, CA, 1984.
Hlavacek, V. Comput. Chem. Eng. 1977, 1, 75-100.
Kuehn, D. R.; Davidson, H. Chem. Eng. Prog. 1961, 55(6), 44-46.
Kuester, J. L. Energy Applications of Biomass; Lowenstein, M. Z., Ed.; Elsevier: New York, 1984.
Madron, F.; Veverka, V.; Vanecek, V. AIChE J. 1977, 23, 482-486.
Mah, R. S. H.; Stanley, G. M.; Downing, D. M. Ind. Eng. Chem. Process Des. Dev. 1976, 15, 175-183.
Mah, R. S. H.; Tamhane, A. C. AIChE J. 1982, 28, 828-830.
Malthiesen, N. L. Automatica 1974, 10, 431-435.
Murthy, A. K. S. Ind. Eng. Chem. Process Des. Dev. 1973, 12, 246-248.
Murthy, A. K. S. Ind. Eng. Chem. Process Des. Dev. 1974, 13, 347-349.
Nogita, S. Ind. Eng. Chem. Process Des. Dev. 1972, 11, 197-200.
Prasad, C. C.; Prasad, B. V. R. K. Chem. Eng. Sci. 1984, 39, 185-187.
Ripps, D. L. Chem. Eng. Prog. Symp. Ser. 1965, 55, 8-13.
Romagnoli, J.; Stephanopoulos, G. Chem. Eng. Sci. 1980, 35, 1067-1081.
Romagnoli, J.; Stephanopoulos, G. Chem. Eng. Sci. 1981, 36, 1849-1863.
Rooney, T. B.; Mehra, R. K.; Kramerich, G. L.; Evans, L. B. A Link Between Science and Application of Automatic Control; Niemi, A., Ed.; Pergamon: New York, 1979.
Sage, A. P.; Melsa, J. L. Estimation Theory with Applications to Communications and Control; McGraw-Hill: New York, 1971.
Schittkowski, K. Nonlinear Programming Codes; Springer-Verlag: New York, 1980.
Smith, W. R.; Missen, R. W. Chemical Reaction Equilibrium Analysis: Theory and Algorithms; Wiley: New York, 1982.
Sorenson, H. W. Parameter Estimation: Principles and Problems; Marcel Dekker: New York, 1980.
Stanley, G. M.; Mah, R. S. H. AIChE J. 1977, 23, 642-650.
Stanley, G. M.; Mah, R. S. H. Chem. Eng. Sci. 1981, 36, 259-272.
Stephenson, C. R.; Shewchuk, C. F. Paper presented at the American Institute of Chemical Engineers Meeting, Anaheim, CA, 1984.
Umeda, T.; Nishia, M.; Komatsu, S. Ind. Eng. Chem. Process Des. Dev. 1971, 10, 236-243.
Vaclavek, V. Chem. Eng. Sci. 1969, 24, 947-955.
Vaclavek, V.; Kubicek, M.; Loucka, M. Theor. Found. Chem. Eng. 1976, 9, 242-245.
Vaclavek, V.; Loucka, M. Chem. Eng. Sci. 1976, 31, 1199-1205.
Woodward, J. W. Paper presented at the American Institute of Chemical Engineers Meeting, Anaheim, CA, 1984.
Superscripts
T = transpose
^ = estimate
-1 = inverse
Received for review June 4, 1986 Revised manuscript received August 13, 1987 Accepted September 2, 1987