Ind. Eng. Chem. Res. 1990, 29, 2037-2044
PROCESS ENGINEERING AND DESIGN

Measurement Selection and Detection of Measurement Bias in the Context of Model Based Control and Optimization

Kazuya Kage† and Babu Joseph*‡

Department of Systems Science and Mathematics and Department of Chemical Engineering, Washington University, St. Louis, Missouri 63130
An important issue in model based control and optimization is the adaptation of the model to account for differences between the model predictions and the actual measurements obtained from sensors embedded in the process. Among the important questions faced by a control system designer are what kind of measurement sensors to use, where to place these sensors, how to detect possible errors in the sensor outputs, and how to evaluate the quality of these measurements. In this article, we propose and demonstrate the use of a measure of the information content in the sensor data to address these questions. This measure of information is based on the uncertainty associated with the model parameter estimates. Its use is illustrated with both simulated and experimental data on a heated-bar process. The results obtained indicate that the measure can be used to effectively screen out biased measurements and to select the measurements that contain the most information about the model.
Introduction
Process models are increasingly utilized in both on-line optimization and regulatory control of processes. The computing power available today with microprocessor based control systems allows us to use complex process models in the implementation of control systems. Since the publication of the papers on Dynamic Matrix Control (Cutler and Ramaker, 1979) and Model Algorithmic Control (Richalet et al., 1978), a large number of researchers have investigated the subject of model based control. One major, but often overlooked, problem in the application of model based techniques for on-line optimization and control is the discrepancy between the actual process behavior and the model prediction. In the process control literature, this is frequently referred to as the "robustness" issue; i.e., we want the controller to remain stable in the presence of variations in plant characteristics,
* Author to whom correspondence should be addressed.
† Department of Systems Science and Mathematics. On leave from Nippon Steel Inc., Himeji, Japan.
‡ Department of Chemical Engineering.
which might cause disagreement between the model and the actual response characteristics of the plant. One frequently followed approach to this problem is to design the controller in such a way that the system remains stable over a certain expected range of parameter variations. This usually requires that the control system be detuned; i.e., some performance is sacrificed to achieve robustness. In the case of on-line optimization, this results in lost revenues and is detrimental. Another approach is to design the system to adapt to the changing plant environment. This adaptation may be based on adjustments to the process model (model adaptive control) or adjustments to the controller (self-tuning control (Astrom and Wittenmark, 1973)). This paper is concerned with the former, i.e., control and optimization systems that adjust model parameters continuously to follow changes in plant operating characteristics. Jang et al. (1986, 1987a,b) proposed a two-phase approach to model based control. In the first phase, selected parameters in the process model are adjusted such that some measure of the discrepancy between model predictions and actual measurements is minimized. In the second phase, the model is used to compute the changes necessary
in the input manipulated variables such that some measure of the performance (economic or otherwise) is optimized over the immediate horizon. This method has been shown to yield good results in a number of application studies. An important assumption in this approach is that the measurements are already decided upon and that they are correct and error free. In practice, the control system design engineer has some choice in the location and type of measurements, and most often, redundant measurements are available. The two issues investigated in this article are (i) given a set of possible measurements, how does one select the set that gives the maximum accuracy for the process model (the measurement selection problem); and (ii) given a set of measurements, how does one evaluate their quality; in particular, how does one detect bias (gross errors) in the measurements (the measurement error detection problem). It should be emphasized that the basic issue in measurement selection is an economic one: What are the economics of adding a measurement to a measurement set? Clearly, adding measurements increases the information content and hence should improve the performance of the control system. There are two difficulties in the direct application of this criterion. First, it is difficult to evaluate the economic impact of adding a measurement to the measurement set (assuming that the control system is capable of dealing with additional measurements). Second, it is difficult to put a quantitative measure on the information content of an additional measurement. The objective of this article is to provide a partial solution to this second problem in the limited context of model based control and optimization. The question of measurement selection from the model based control point of view was first discussed by Weber and Brosilow (1972) in their paper on inferential control.
They introduced the concept of selecting measurements that contain the maximum information about the process being controlled yet are not sensitive to modeling errors. These ideas were further explored by Joseph and Brosilow (1978) and Joseph et al. (1976) in the context of selecting temperature sensor locations in a distillation column. These studies were done in the context of linear input/ output models. In the first half of this paper, we extend these concepts and ideas to the more general nonlinear model based control and optimization problems. The second issue addressed in this paper, that of measurement error detection, has been of interest to process engineers in a larger context of data reconciliation. There is a wealth of literature on the topic of gross error detection and data reconciliation (see for example Mah et al. (1976)). The main difference in the problems treated in this article is that in the control context, the measurements are available over a period of time and in a recurring fashion. The ability to repeat measurements aids the elimination of zero mean random errors using conventional noise filters. However, the non-zero mean error (bias) is difficult to detect without some a priori knowledge. This problem is treated in the second half of the paper.
A Criterion for Measurement Selection
The criterion proposed in this paper is based on the two-phase approach to nonlinear model based control and optimization (Jang et al., 1986, 1987a,b). The problem there can be stated as follows: Given a process model of the form

ẋ = f(x, θ, m)   (1)

and output measurements

y = g(x, θ, m)   (2)
Figure 1. Schematic diagram of the two-phase approach: unmeasured disturbances and process measurements enter the identification phase, which produces an updated model (estimated states and parameters); the updated model and the control objectives enter the optimization phase, which computes the manipulated inputs.
determine the inputs m such that some objective function is maximized. This objective could be that the output remains close to a specified set point (in the case of regulatory control), or it could be an economic measure of the plant performance. Figure 1 shows the schematic diagram of the two-phase approach. In this approach, the problem is split into an identification phase and an optimization phase. In the identification phase, the differences between model predictions y(t) and actual measurements s(t) are minimized by adjusting some of the parameters θ in the process model. This could also include such items as unmeasured states and input disturbances. In the optimization phase, the control policy m(t) is computed to achieve the desired objective. The updated process model is used to forecast the expected plant behavior over a fixed horizon into the future. In the case of regulatory control, the desired objective may be to minimize the deviation of some output variables from their set points. In the case of optimizing control, this objective may be expressed in economic terms.
Consider the steady-state case with perfect measurements. Here the problem is to adjust the unknown parameters in the model to obtain a better fit to the observed data. This is accomplished by first defining an objective function (e.g., the sum of the squared error residuals) and then computing a θ that minimizes this objective. To evaluate the quality of the measurements, we can look at the quality of the resulting parameters, expressed in terms of their statistics, namely, the covariance matrix. There are several methods to compute the covariance matrix. Here we follow a method suggested in Bard (1974). Let Φ(θ,y) be an objective function defined for the identification phase. The simplest such function is the sum of the squares of the differences between predictions and measurements. Let y_μ, μ = 1, ..., n, be a set of observations.
We seek to determine the θ that minimizes Φ. Let θ* be the unconstrained minimum. This means that ∂Φ(θ*,y)/∂θ = 0. By varying the data slightly, we have

∂Φ(θ* + δθ*, y + δy)/∂θ = 0   (3)

Expanding to first order,

(∂²Φ/∂θ²)δθ* + (∂²Φ/∂θ∂y)δy = 0   (4)

so that approximately

δθ* = −(H*)⁻¹(∂²Φ/∂θ∂y)δy   (5)

where H is the Hessian matrix

H = ∂²Φ/∂θ²   (6)
The covariance matrix of the change in parameter estimates due to the change in observations is given by

V_θ = E(δθ* δθ*ᵀ)   (7)

so that

V_θ = (H*)⁻¹(∂²Φ/∂θ∂y) V_y (∂²Φ/∂θ∂y)ᵀ (H*)⁻¹   (8)

where

V_y = E(δy δyᵀ)   (9)

Figure 2. Schematic diagram of the heated-bar process (heat input at one end of the bar, an uninsulated heated section, insulation, and the locations of the measurements along the bar).

Assuming that y_μ has covariance V_μ and is independent of y_τ (μ ≠ τ), (8) reduces to

V_θ = (H*)⁻¹ { Σ_{μ=1}^{n} (∂²Φ/∂θ∂y_μ) V_μ (∂²Φ/∂θ∂y_μ)ᵀ } (H*)⁻¹   (10)
Bard has shown that if we assume that Φ depends only on a moment M of the residuals (the deviations between the model predictions and the measurements), then we can apply the Gauss approximation to this term and hence simplify the computation of V_θ. In particular, in the case of single-equation least squares, where the model is expressed in the form

y = f(x, θ)   (11)

V_θ can be approximated as

V_θ ≈ σ² ( Σ_{μ=1}^{n} (∂f_μ/∂θ)(∂f_μ/∂θ)ᵀ )⁻¹
lichamp and Moore (1976). Figure 2 shows a schematic diagram of the process. The steady-state temperature profile along the heated length of the aluminum bar depends on the heat conduction within the bar and heat losses to the surroundings due to natural convection. For the uninsulated portion of the bar, neglecting radial gradients, the model can be written as

d²T/dξ² = θ₁(T − θ₂)   (16)

with boundary conditions

T = T₀ at ξ = 0 (the hot end of the bar)
dT/dξ = 0 at ξ = L (the cold end of the bar)

where T is the temperature at ξ, θ₁ = 4h/kD, h is the convective heat-transfer coefficient, k is the thermal conductivity of the bar, D is the diameter of the bar, L is the uninsulated length of the bar, and θ₂ is the surrounding temperature. The resulting temperature profile is given by

T(ξ) = θ₂ + (T₀ − θ₂) cosh[θ₁^{1/2}(L − ξ)]/cosh(θ₁^{1/2}L)   (17)
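As a numerical check on the closed-form profile, the short sketch below evaluates it and verifies both boundary conditions. The parameter values θ₁ = 3.8 and θ₂ = 70 are the "true" values used in the simulation study later in the paper; T₀ = 100 and L = 6.5 in. are assumed here purely for illustration.

```python
import numpy as np

def bar_profile(xi, theta1, theta2, T0, L):
    """Steady-state temperature T(xi) solving d2T/dxi2 = theta1*(T - theta2)
    with T(0) = T0 (hot end) and dT/dxi = 0 at xi = L (insulated cold end)."""
    s = np.sqrt(theta1)
    return theta2 + (T0 - theta2) * np.cosh(s * (L - xi)) / np.cosh(s * L)

# Illustrative values (theta1, theta2 from the later simulation study;
# T0 and L assumed for demonstration).
theta1, theta2, T0, L = 3.8, 70.0, 100.0, 6.5

# Hot-end boundary condition: T(0) = T0.
assert abs(bar_profile(0.0, theta1, theta2, T0, L) - T0) < 1e-9

# Cold-end boundary condition: finite-difference slope at xi = L is ~0.
h = 1e-6
slope = (bar_profile(L, theta1, theta2, T0, L)
         - bar_profile(L - h, theta1, theta2, T0, L)) / h
assert abs(slope) < 1e-3
```

The profile decays monotonically from T₀ toward θ₂, which matches the physical picture of conduction with convective losses.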
= σ²(XᵀX)⁻¹   (12)

V_θ can be related to the confidence level in the parameter estimates by using Neyman's theory as follows:

Pr[(θ − θ*)ᵀ V_θ⁻¹ (θ − θ*) ≤ c] = γ   (13)

where c is a constant and γ depends on the choice of c. To improve the accuracy of the estimates, we must minimize the volume of the region in θ space determined by the inequality in (13). For l parameters, this volume is given by

V(c) = (cπ)^{l/2} det^{1/2}(V_θ)/Γ((l/2) + 1)   (14)
Thus, we have the criterion that to maximize accuracy we must minimize the quantity

ψ = det (V_θ)   (15)
Note that V_θ is a positive definite matrix (Draper and Smith, 1966), and hence its determinant is non-zero. With no economic constraints, the obvious choice to reduce the uncertainty in θ is to use as many measurements as possible. Economics might dictate that we choose as few measurements as possible. Because we are not able to establish a quantitative relationship between improved accuracy and improved control system performance, it is not possible to create an absolute criterion for determining the added value of a measurement. However, by using the above criterion, it is possible to compare two measurements and select between them. In the following sections, we illustrate the use of this criterion in measurement selection and measurement error detection. Two studies are presented. The first uses a simulation model of a heated-bar process, which allows the criterion to be exercised under controlled conditions. In the second, a laboratory setup of the heated-bar process is used.
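As a concrete illustration of the criterion, ψ = det(V_θ) can be computed numerically from eq 12 by building the sensitivity matrix X with finite differences. The sketch below does this for the heated-bar model introduced in the next section; the profile formula is the closed-form steady-state solution, and T₀ = 100, σ = 1, and the nominal θ are illustrative assumptions, not values from the paper.

```python
import numpy as np

def bar_profile(xi, theta, T0=100.0, L=6.5):
    # Steady-state heated-bar profile (solution of eq 16 with its BCs).
    t1, t2 = theta
    s = np.sqrt(t1)
    xi = np.asarray(xi, dtype=float)
    return t2 + (T0 - t2) * np.cosh(s * (L - xi)) / np.cosh(s * L)

def psi(xi_set, theta, sigma=1.0, eps=1e-6):
    """psi = det(V_theta) with V_theta ~ sigma^2 (X'X)^-1 (eqs 12 and 15).
    X holds finite-difference sensitivities dT/dtheta_j at each sensor."""
    theta = np.asarray(theta, dtype=float)
    X = np.empty((len(xi_set), theta.size))
    for j in range(theta.size):
        tp, tm = theta.copy(), theta.copy()
        tp[j] += eps
        tm[j] -= eps
        X[:, j] = (bar_profile(xi_set, tp) - bar_profile(xi_set, tm)) / (2 * eps)
    return np.linalg.det(sigma**2 * np.linalg.inv(X.T @ X))

theta_guess = (3.8, 70.0)
locations = [0.8125, 1.625, 2.438, 3.25, 4.063, 4.875, 5.688, 6.5]

# More sensors can only add information, so psi shrinks:
assert psi(locations, theta_guess) < psi(locations[:3], theta_guess)

# Ranking two-sensor alternatives: the smaller psi, the better the pair.
pairs = [(a, b) for i, a in enumerate(locations) for b in locations[i + 1:]]
best_pair = min(pairs, key=lambda p: psi(list(p), theta_guess))
print(best_pair)
```

Because XᵀX only gains a positive semidefinite term with each added row, ψ is monotonically nonincreasing as sensors are added, which is the quantitative version of "more measurements cannot hurt."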
Description of the Heated-Bar Process
The above criterion was applied to evaluate temperature measurements in a heated-bar process described in Mel-
To simulate experiments, observations T_μ^(i) (i = 0, 1, ..., m) are generated by using the formula

T_μ^(i) = T^(0)(θ̂, ξ_μ)(1 + ε_μ(σ))  (μ = 1, 2, ...)   (18)

where θ̂ is a specific set of parameter values (θ₁ = 3.8, θ₂ = 70; the true parameter values), ξ_μ denotes the operating conditions, ε_μ(σ) is a pseudorandom number with distribution N(0,σ), and σ is the standard deviation of the relative error in T.

A Priori Selection of Measurements
The first problem posed was as follows: Given a process model (but no operating data on the process), determine the best set of measurements to use such that the accuracy of the model can be maximized by using the information contained in these sensors. This is typical of the problem faced by a control system design engineer during the plant design stage. The constraints are the accuracy, the possible sensor locations, and the cost of the sensors. We will consider only the problem of determining the relative merit of alternative measurement sensors. One problem here is that the available process model is not fully validated, and the parameters may have a range of uncertainty associated with them. For this application, we assume that the parameters are restricted to the ranges 1.0 ≤ θ₁ ≤ 4.0, 60 ≤ θ₂ ≤ 80, and 80 ≤ T₀ ≤ 120. The possible measurement locations are ξ ∈ {0.8125, 1.625, 2.438, 3.25, 4.063, 4.875, 5.688, 6.5} = {location 1, location 2, ..., location 8}. By using the criterion developed earlier, the measurement selection problem was posed as an optimization problem

min_{ξ₁, T₀} det {V_θ}|_{θ=θ°}
where ξ₁ is the measurement location (assuming only one measurement is to be selected), T₀ is the operating condition, and θ° is an arbitrary choice of the parameters. Ob-
The Measurement Selection Problem
The problem of measurement selection can be stated as follows: "Given a process and an approximate model of the process, select the operating conditions and the measurements such that the accuracy of the model is maximized". The problem is similar to that faced by an experimental designer who wants to conduct a set of experiments to validate a process model that he has developed. The classical approach to this problem utilizes the factorial design technique, in which the experiments are conducted at the extreme values of the independent variables. For the heated-bar problem, the two independent variables are bounded by 80 ≤ T₀ ≤ 120 and 0.8125 ≤ ξ ≤ 6.5. Hence, factorial design would choose values in the set (ξ, T₀) ∈ {(0.8125, 80), (0.8125, 120), (6.5, 80), (6.5, 120)} in a cyclic manner. If we apply the criterion developed earlier (min det V_θ), then the operating condition and the measurement sensor can be selected as follows (we assume that only one measurement can be made at any one time, although this restriction can be relaxed). Start with initial guesses for the parameters in the model. Use this model to predict the measurement sensor to be selected and the operating condition to be used that will minimize det V_θ. Conduct the experiment and use the results to update the parameters θ. Repeat the selection process as before using the newly determined values of θ. We call this sequential design. Simulated experiments were conducted by using measurements corrupted by a known amount of random noise as described earlier. Both factorial design and sequential design were exercised. The results are shown in Figures 4 and 5. Figure 4 shows that the estimated parameter
Figure 4. Sequential design versus factorial design: convergence of the estimated parameters θ₁ and θ₂ with the number of experiments, at noise levels of 5% and 10%.
Figure 5. Comparison of det (XᵀX) between sequential and factorial design, at noise levels of 5% and 10%.
values approach the true values as more and more experiments are performed. The rate of approach depends on the noise level in the measurements. The important point to note is that the sequential design converges faster to the true values in both cases shown. The rate of convergence is seen more clearly in Figure 5, which shows the growth of det (XᵀX). Note that, from eq 12, det V_θ is inversely proportional to det (XᵀX). While this study is not complete, it supports our earlier derivation, which showed that the uncertainty in the parameter estimates can serve as a guide to selecting measurements and operating conditions. These results have some significance in the context of optimization, where one must choose the input manipulated variables to optimize an objective function. The accuracy of the optimum depends on the accuracy of the model. Hence, one must also strive to find the conditions that improve the accuracy of the model. This suggests that process optimization should be coupled with maximizing the accuracy of the parameter estimates.
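The sequential design loop described above (pick the sensor and operating condition that minimize det V_θ under the current parameter estimate, run the experiment, refit θ, repeat) can be sketched as follows. This is a minimal simulation, not the authors' code: the heated-bar profile, the 5% relative noise model of eq 18, the candidate conditions, and the damped, clipped Gauss-Newton refit are all assumptions chosen for illustration, and maximizing det(XᵀX) stands in for minimizing det V_θ via eq 12.

```python
import numpy as np

rng = np.random.default_rng(0)

def bar_profile(xi, theta, T0, L=6.5):
    # Steady-state solution of d2T/dxi2 = theta1*(T - theta2), T(0)=T0, T'(L)=0.
    t1, t2 = theta
    s = np.sqrt(t1)
    return t2 + (T0 - t2) * np.cosh(s * (L - xi)) / np.cosh(s * L)

def sens_row(xi, theta, T0, eps=1e-6):
    # One row of X: finite-difference sensitivities [dT/dtheta1, dT/dtheta2].
    row = np.empty(2)
    for j in range(2):
        tp, tm = np.array(theta, float), np.array(theta, float)
        tp[j] += eps
        tm[j] -= eps
        row[j] = (bar_profile(xi, tp, T0) - bar_profile(xi, tm, T0)) / (2 * eps)
    return row

def det_xtx(design, theta):
    # Maximizing det(X'X) is equivalent to minimizing det V_theta (eq 12).
    X = np.array([sens_row(xi, theta, T0) for xi, T0 in design])
    return np.linalg.det(X.T @ X)

locations = [0.8125, 1.625, 2.438, 3.25, 4.063, 4.875, 5.688, 6.5]
hot_end = [80.0, 120.0]
theta_true = (3.8, 70.0)
theta = np.array([2.0, 60.0])   # deliberately poor initial guess
xs, ys = [], []                 # accumulated (location, T0) designs and readings

for _ in range(30):
    # 1. Select the next experiment that most improves the information content.
    cand = [(xi, T0) for xi in locations for T0 in hot_end]
    best = max(cand, key=lambda c: det_xtx(xs + [c], theta))
    xs.append(best)
    # 2. "Conduct" the experiment: true model plus 5% relative noise.
    xi, T0 = best
    ys.append(bar_profile(xi, theta_true, T0) * (1 + 0.05 * rng.standard_normal()))
    # 3. Refit theta by damped Gauss-Newton on all data gathered so far.
    if len(xs) >= 2:
        for _ in range(25):
            X = np.array([sens_row(x, theta, t) for x, t in xs])
            r = np.array([y - bar_profile(x, theta, t) for (x, t), y in zip(xs, ys)])
            theta = theta + 0.5 * np.linalg.lstsq(X, r, rcond=None)[0]
            theta[0] = np.clip(theta[0], 0.5, 10.0)   # keep sqrt(theta1) real
            theta[1] = np.clip(theta[1], 40.0, 100.0)

print(theta)   # should drift toward (3.8, 70) as experiments accumulate
```

The same loop with the candidate set frozen to the four factorial corners reproduces the "factorial design" baseline that Figures 4 and 5 compare against.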
Gross Error (Bias) Detection
In the experiments described above, we used zero-mean Gaussian random noise in the measurements, but actual plant data are likely to contain non-zero-mean noise due to instrument drift and calibration errors. Measurement errors that are zero-mean random can be reduced by time-series filtering techniques such as the ones described by Box and Jenkins (1970). However, such filters do not remove bias in the measurements. Detection and removal of bias in process data have been studied extensively in the data reconciliation literature.
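The point that averaging-type filters remove zero-mean noise but not bias can be seen in a few lines; all numbers here are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
true_value = 70.0
n = 2000

noisy = true_value + rng.normal(0.0, 2.0, n)           # zero-mean noise only
biased = true_value + 3.0 + rng.normal(0.0, 2.0, n)    # same noise + 3-deg bias

# Averaging (the simplest "filter") shrinks the zero-mean error as 1/sqrt(n)...
assert abs(noisy.mean() - true_value) < 0.2
# ...but the filtered biased signal still sits about 3 degrees off.
assert abs(biased.mean() - true_value) > 2.5
```

No amount of additional filtering of the second signal recovers the true value; only a priori knowledge or a model-based consistency check, as developed below, can expose the offset.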
Figure 6. Convergence of parameter estimates with biased measurements (sequential design).
Figure 7. Convergence of parameter estimates with biased measurements (selective elimination).
Figure 8. Frequency of selection of sensors (sequential design).
Figure 9. Frequency of selection of sensors (selective elimination).

Table I. Final Converged Parameter Estimates (value/standard deviation) and det V_θ

sequential design
  case 1-1 (5% bias):  θ₁ = 4.01/0.60, θ₂ = 76.65/2.64, det V_θ = 2.33 × 10⁻ⁿ
  case 1-2 (10% bias): θ₁ = 4.92/0.87, θ₂ = 80.87/2.26, det V_θ = 4.20 × 10⁻ⁿ
  case 1-3 (20% bias): θ₁ = 6.91/1.46, θ₂ = 87.09/2.10, det V_θ = 11.10 × 10⁻ⁿ
selective elimination
  case 2-1 (5% bias):  θ₁ = 3.10/0.31, θ₂ = 73.69/2.53, det V_θ = 1.03 × 10⁻ⁿ
  case 2-2 (10% bias): θ₁ = 3.13/0.35, θ₂ = 75.93/2.66, det V_θ = 1.17 × 10⁻ⁿ
  case 2-3 (20% bias): θ₁ = 2.96/0.38, θ₂ = 67.43/4.04, det V_θ = 1.32 × 10⁻ⁿ
true value: θ₁ = 3.80, θ₂ = 70.00
4. Update the parameters by using the selected measurement. Return to step 1.
Figure 7 shows the results of applying this algorithm to the heated-bar process. For this test, a measurement bias of 5%, 10%, or 20% was added to the measurement at ξ = 6.5. At low bias levels, there is no significant difference between the results here and those shown in Figure 6. But as the bias level increases, selective elimination begins to recognize the problematic measurement and eliminate it, leading to more accurate estimates. This is clearer from Table I, which compares the final converged values of the parameters obtained by using sequential design with those obtained by using the successive elimination procedure. The parameters are closer to the true values, and their expected standard deviations are smaller.
In both Figures 6 and 7, the estimates start out low initially. This is merely a consequence of the starting guesses for the parameters, and no other significance should be attached to this fact. Some anomaly may be observed in Table I, where the standard deviation of θ₂ (column 2) shows a slight decrease with increased bias in the measurement. This is a purely random phenomenon; additional measurements would have led to a greater decrease in σ for the less biased case. It is more meaningful to compare det V_θ, which shows a substantial decrease when the successive elimination algorithm is used, especially as the bias gets larger. The reason for this improvement can be seen from Figures 8 and 9, which show the number of times a particular measurement is chosen. The selective elimination procedure detects the bias at location 8 and shifts the selection to location 7. Although not shown in the figures, this shift becomes more evident as the bias level increases. One might ask why location 8 is still chosen sometimes even when it is biased. One reason is that the measurements are corrupted by noise, and hence it is not clear at first which sensor is biased. Another reason is that, from the point of view of information content (det V_θ), location 8 is still preferred, as seen from the results of the sequential design procedure. Finally, this algorithm can be used to detect biased measurements by noting the frequency of selection. A measurement that is rejected often is suspect and should
Figure 10. Experimentally measured temperature profiles: temperature (°F) versus location of measurements (in.), for several values of the temperature at the hot end.
be checked for possible bias in the measurement.
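Only step 4 of the selective elimination algorithm survives legibly in the text above, so the sketch below is one plausible residual-based reading of the idea, not the authors' algorithm: each round, readings whose residual against the current model exceeds a 3σ threshold are eliminated, one of the remaining measurements is selected, the estimate is updated, and the selection frequency then exposes the biased sensor. The constant "process", the bias on the fourth sensor, and the thresholds are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
true_theta, sigma = 70.0, 1.0
bias = np.array([0.0, 0.0, 0.0, 14.0])   # fourth sensor reads ~20% high

theta_hat = 60.0              # deliberately poor initial estimate
accepted = []                 # readings actually used for the fit
chosen = np.zeros(4, dtype=int)

for _ in range(200):
    readings = true_theta + bias + rng.normal(0.0, sigma, 4)
    resid = np.abs(readings - theta_hat)
    ok = np.flatnonzero(resid < 3.0 * sigma)     # eliminate gross outliers
    if ok.size == 0:                             # fall back: trust the closest
        ok = np.array([np.argmin(resid)])
    pick = ok[np.argmin(resid[ok])]              # select one measurement per round
    chosen[pick] += 1
    accepted.append(readings[pick])
    theta_hat = np.mean(accepted)                # "step 4": update the estimate

print(chosen)      # the biased sensor is selected rarely, if ever
print(theta_hat)   # the estimate converges near the true value
```

As in the paper's Figures 8 and 9, the diagnostic output is the selection-frequency vector: a sensor that is persistently rejected is the one to send for recalibration.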
Experimental Results
The selective elimination procedure was applied to the operation of a laboratory-scale heated-bar setup. Figure 10 shows typical temperature profiles measured on this setup. Judging from these and other measured profiles, the following conclusions were drawn: 1. The temperature measurement at location 2 shows a slightly higher value than expected. 2. The temperatures measured at locations 3 and 8 always show slightly lower values than expected. These conclusions are drawn from a physical understanding of the system and from the theoretical model results obtained earlier. What we would like to show is that the methodology outlined in this paper can be used to demonstrate, quantitatively, that the measurement at location 8 is indeed biased. It should also be noted that the apparent bias was not artificially introduced but is caused by poor sensor calibration and/or small errors in the location of the sensor. The problem now posed was to find a good fit of the theoretical model developed earlier to data measured on the experimental setup. Two parameters were treated as unknown, namely, θ₁ and θ₂. Note that θ₂ is the ambient temperature, which can actually be measured independently. It was included as an unknown parameter to provide a check on the accuracy of the model fit to the data. The exact value of θ₁ is not known, and there is no way to verify this quantity other than seeing how closely the model fits the observations. Another point to note is that we also imposed the restriction that only one measurement be used in any one experiment. This is merely to demonstrate the utility of the measure of accuracy, ψ, proposed in this paper. In practice, of course, one would use all available measurements. Two successive sets of experiments were carried out. The first set used a sequential design to pick the operating inlet temperature and the sensor location to be used.
The second set of experiments followed the selective elimination procedure to select among the sensors. Ten runs were made by using each procedure. The results of these runs are shown in Table II. Note that sequential design invariably selects location 8 (the same result was obtained when working with the simulated runs shown in the previous section). Selective elimination uses both locations 7 and 8, probably because of the bias at location 8 (noted earlier in this section). The parameter values estimated by using both algorithms are also shown in Table II. The correct value of θ₂
Table II. List of Parameter Estimates and Experimental Condition at Each Experiment

          sequential design           selective elimination
run       θ₁      θ₂      Ex. Co.ᵃ    θ₁      θ₂      Ex. Co.
run 0     1.92    63.64   (8, 80)     1.92    63.64   (7, 80)
run 1     1.99    64.88   (8, 90)     2.10    66.36
run 2     2.01    65.00   (8, 80)     2.11    66.47
run 3     1.95    64.12   (8, 90)     2.25    68.09
run 4     1.91    63.74   (8, 80)     2.20    67.86
run 5     1.89    63.24   (8, 90)     2.27    68.60
run 6     1.82    62.41   (8, 80)     2.20    68.16
run 7     1.87    63.38   (8, 90)     2.17    67.86
run 8     1.84    63.14   (8, 80)     2.15    67.72
run 9     1.82    62.74   (8, 90)     2.18    68.10
run 10    1.79    62.31   (8, 90)     2.14    67.86

ᵃ Ex. Co. (experimental condition) = (location no., temperature at the hot end).
is 67 (obtained by using an independent measurement). The successive elimination procedure gives an estimate of 67.86 after 10 runs, whereas the sequential design gives a value of 62.31. Clearly, the bias at location 8 contributed to this significant error in the sequential design case. These results give further support to our approach of using the estimated accuracy of the parameters as a guide to selecting measurements and assessing the quality of the measurements in the context of model fitting.
Conclusions
The purpose of this article was to demonstrate the use of a measure of the accuracy of parameter estimates as a guide for selecting measurements and operating conditions in the context of model based control and optimization. The measure that we propose is the determinant of the covariance matrix of the error in the parameter estimates. This matrix is computed easily from the model equations. The use of this measure was illustrated for three distinct applications: (i) as a measure for quantitatively evaluating the information content in alternative sensors and hence for selecting the appropriate sensor a priori (before the plant is built); (ii) as a means of deciding at what conditions to operate a process to yield maximum information regarding model accuracy; (iii) as a quantitative measure for detecting biased (poor quality) measurements and eliminating them from further consideration. In all three applications, the measure proved successful in its stated intent. Studies on an experimental heated-bar setup further confirmed the benefits of using this measure. Finally, some comments on the limitations of this measure are in order. The measure as proposed can be used only in a relative sense, i.e., to compare alternatives. Another limitation is that in this approach the model was assumed to be structurally correct. We have not yet studied the problems that would arise if the model is inherently deficient in some way.

Acknowledgment
Partial support provided by the National Science Foundation through Grant CBT 87-13469 is gratefully acknowledged.
Nomenclature
E = expected value
f = process model
g = measurement model
H = Hessian matrix
L = uninsulated length of the heated bar (in.)
Ind. Eng. Chem. Res. 1990, 29, 2044-2053
M = moment matrix of the residuals
m = a set of measurements
N = normal distribution
p = probability density
Pr = probability
t = time
V = covariance matrix
Var = variance
X = derivative matrix with respect to the parameters (see eq 12)
y = predicted value
W = weighting matrix

Greek Letters
Φ = objective function
γ = confidence level
μ = index used to indicate experimental runs
η = expected value
τ = probability density
ψ = det V_θ
θ = vector of parameters
θ* = true parameter values
σ = standard deviation
ξ = vector of independent variables
Literature Cited
Astrom, K. J.; Wittenmark, B. On Self Tuning Regulators. Automatica 1973, 9, 185.
Bard, Y. Nonlinear Parameter Estimation; Academic Press: New York, 1974.
Box, G. E. P.; Jenkins, G. Time-Series Analysis; Holden-Day: New York, 1970.
Cutler, C. M.; Ramaker, B. Dynamic Matrix Control Algorithm. 86th National Meeting of AIChE, Houston, TX; AIChE: New York, 1979.
Draper, N. R.; Smith, H. Applied Linear Regression Analysis; Wiley: New York, 1966.
Jang, S. S.; Mukai, H.; Joseph, B. A Comparison of Two Approaches to On-Line Parameter Estimation and State Estimation of Nonlinear Systems. Ind. Eng. Chem. Process Des. Dev. 1986, 25, 809.
Jang, S. S.; Mukai, H.; Joseph, B. On-Line Optimization of Constrained Multivariable Processes. AIChE J. 1987a, 33, 26.
Jang, S. S.; Mukai, H.; Joseph, B. Control of Constrained Multivariable Processes Using a Two-Phase Approach. Ind. Eng. Chem. Res. 1987b, 26, 2106.
Joseph, B.; Brosilow, C. B. Inferential Control of Processes. AIChE J. 1978, 24, 485.
Joseph, B.; Brosilow, C. B.; Howell, J.; Kerr, N. R. Improved Control of Distillation Columns Using Multiple Temperature Measurements. Hydrocarbon Process. 1976, 3, 127.
Mah, R. S. H.; Stanley, G. B.; Downing, D. M. Reconciliation and Rectification of Process Flow and Inventory Data. Ind. Eng. Chem. Process Des. Dev. 1976, 15, 175.
Mellichamp, D. A.; Moore, T. W. The UCSB Real-Time Computing Laboratory: Description of the Heated Bar Experiment. Chemical Engineering Department, University of California, Santa Barbara, 1976.
Richalet, J.; Rault, A.; Papon, J. Model Predictive Heuristic Control: Application to Industrial Processes. Automatica 1978, 14, 43.
Serth, R. W.; Heenan, W. A. Gross Error Detection and Data Reconciliation in Steam Metering Systems. AIChE J. 1986, 32, 733.
Weber, R.; Brosilow, C. B. The Use of Secondary Measurements to Improve Control. AIChE J. 1972, 18, 614.

Received for review May 24, 1990
Accepted June 5, 1990
Potential Pitfalls in Ratio Control Schemes
Heleni S. Papastathopoulou and William L. Luyben*
Department of Chemical Engineering, 111 Research Drive, Lehigh University, Bethlehem, Pennsylvania 18015
The objective of this paper is to study the behavior of various ratio control schemes implemented on distillation columns with many trays. The results show that controller structures using ratios plus direct material balance manipulation (e.g., RR-B, D-BR) can produce underdamped open-loop systems (level loops closed, composition loops open). This underdamped behavior is due to the hydraulic lags in these large columns. Because of this very unusual behavior, it is difficult to derive the transfer function matrix of such ratio control structures by using standard identification techniques. A method is presented for finding the transfer function matrix by appropriately transforming the open-loop transfer function matrices of other conventional control structures (e.g., D-V). Once the necessary transfer function matrix is found, controller tuning is feasible, and the performance of the closed-loop system is very good when the two-product composition control scheme is in effect.

1. Introduction

Ratio control schemes have been used in practice for many years. They are feedforward control structures in which one variable is controlled in ratio to another in order to satisfy some higher level objective (Shinskey, 1988). For example, additives are controlled in ratio to the principal component in a blend to keep the composition of the blend constant. The primary controlled variable is then composition, which is a function of the ratio of the components. In a distillation column, keeping the ratio L/V (L and V are the internal liquid and vapor flow rates) constant in a column section sometimes reduces variations in the
* Author to whom correspondence concerning this paper should be addressed.
concentration profile (Roffel and Rijnsdorp, 1982) and results in a slow and smooth propagation of composition changes from tray to tray. Suppose, for example, that the reflux flow (R) is adjusted in ratio to the top vapor flow from the column and that the composition controller adjusts the ratio. In addition to slow and smooth composition changes from tray to tray, the level controller of such a ratio control system exhibits increased rangeability. An increase in the vapor (V) gives an immediate proportional increase of the reflux, resulting in a decreased influence on the level. Thus, the level controller can work well even with large reflux ratios. Similar arguments hold for ratio control schemes at the column base. However, there are certain problems associated with ratio control schemes. These problems are related to ad-
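The rangeability argument above can be sketched numerically. In the fragment below, the composition controller's ratio setpoint k, the flow values, the drum area, and the time step are all hypothetical illustrative numbers, not data from the paper; the point is only to show how slaving the reflux R to the vapor rate V shrinks the load that a vapor step imposes on the reflux-drum level.

```python
# Hypothetical sketch of the reflux-to-vapor ratio scheme described above:
# the composition controller sets the ratio k, and the reflux R is slaved
# to the vapor rate V.  All numbers are illustrative, not from the paper.

def reflux_ratio_control(V, k):
    """Ratio station: reflux R is set proportional to the vapor rate V."""
    return k * V

def drum_level_change(V, R, D, dt=1.0, area=2.0):
    """Reflux-drum material balance over one time step: condensed vapor
    in, reflux and distillate out, divided by the drum cross-section."""
    return (V - R - D) * dt / area

k, D = 0.8, 20.0                  # ratio setpoint and distillate draw
for V in (100.0, 120.0):          # vapor rate before and after a step load
    R_ratio = reflux_ratio_control(V, k)
    R_fixed = 80.0                # reflux held at its pre-step value
    print(f"V={V:5.1f}  ratioed reflux: {drum_level_change(V, R_ratio, D):+5.1f}"
          f"  fixed reflux: {drum_level_change(V, R_fixed, D):+5.1f}")
```

With the ratio in effect, only a fraction (1 - k) of the vapor step reaches the drum material balance, so for k = 0.8 the level controller sees one-fifth of the disturbance it would see with fixed reflux, which is the increased rangeability noted in the text.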
0888-5885/90/2629-2044$02.50/0 © 1990 American Chemical Society