
Jul 1, 1986 - Sandro Macchietto, Eldred H. Chimowitz, Thomas F. Anderson, L. F. Stutzman. Ind. Eng. Chem. Process Des. Dev., 1986, 25 (3), pp 674-682.


Local Models for Representing Phase Equilibria in Multicomponent Nonideal Vapor-Liquid and Liquid-Liquid Systems. 3. Parameter Estimation and Update

Sandro Macchietto,† Eldred H. Chimowitz,‡ Thomas F. Anderson,* and L. F. Stutzman

Department of Chemical Engineering, The University of Connecticut, Storrs, Connecticut 06268

Simple, local models are used in a two-tier structure for incorporating thermodynamic and physical properties into dynamic process calculations. To maintain the accuracy of the approximated models as the process conditions vary, the parameters in the local models are adjusted periodically by using the rigorously calculated property values and a recursive, least-squares update. The sampling time for the rigorous evaluations is variable and is selected automatically. The resulting algorithms yield a substantial reduction in the number of rigorous evaluations and associated computer time. Examples are presented for the dynamic simulation of small-scale, vapor-liquid equilibrium problems.

The problem of efficiently incorporating thermodynamic and physical (TP) properties in process design calculations was considered in the first two parts of this series (Chimowitz et al., 1983, 1984). Several fundamental concepts and a number of applications to typical design problems were presented there. Those basic ideas can be summarized as follows:
1. Simple ("local") models are considered which locally approximate more rigorous and expensive TP property models. The local models contain adjustable parameters so that they can be fit to a particular region in the temperature, pressure, and composition space.
2. A given design or simulation problem is reformulated in terms of the local models. This produces a more manageable mathematical problem whose solution becomes easier and more economical.
3. Since the local models, by their very definition, do not have global validity, using them outside of the region where the parameters were fit may result in some inaccuracy. However, the model parameters can be recalculated ("updated") in the new region. This procedure is repeated until the problem is solved.
These ideas resulted in a solution procedure, two-tier in structure (Rosen, 1980), which is schematically represented in Figure 1 and which we named the P-DELTA method. P-DELTA is an acronym for Process DEsign by Limiting Thermodynamic Approximations. An important advantage of this two-tier structure is the decoupling of the thermodynamic package (data base and rigorous TP estimation routines) from the process solution procedure. In the first paper (part 1), local models were developed to represent distribution coefficients (K values) in vapor-liquid and ternary liquid-liquid systems. Several simplifications, based on pseudobinary interactions between the components of a mixture, led to composition-dependent local models which can accurately represent ideal and nonideal solutions, including azeotropic mixtures. Typical local models are shown in Scheme I.
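The two-tier structure can be sketched in a few lines of code. All function names below are hypothetical placeholders for illustration, not the actual P-DELTA routines: an inner tier solves the process problem using only the cheap local models, and an outer tier checks them against a rigorous evaluation and refits the local-model parameters.

```python
import numpy as np

def solve_two_tier(solve_process, rigorous_props, fit_local, p0,
                   tol=1e-6, max_outer=50):
    """Sketch of a two-tier (P-DELTA-style) loop. The inner tier solves the
    process model with the local property model only; the outer tier makes
    one rigorous property evaluation per pass and refits the local model."""
    p = p0
    for _ in range(max_outer):
        state = solve_process(p)           # inner tier: local models only
        exact = rigorous_props(state)      # outer tier: rigorous evaluation
        local = state["props"]             # local-model property estimates
        if np.max(np.abs(exact - local)) < tol:
            return state                   # local model agrees: converged
        p = fit_local(p, state, exact)     # update local-model parameters
    raise RuntimeError("outer tier did not converge")
```

The point of the structure is visible in the loop: the expensive `rigorous_props` call happens once per outer pass, regardless of how many property evaluations the inner solve consumes.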
The second paper (part 2) illustrated the application of local models to fundamental problems in separation processes such as

*Author to whom correspondence should be addressed.
†Current address: Department of Chemical Engineering, Imperial College, London SW7, England.
‡Current address: Department of Chemical Engineering, University of Rochester, Rochester, NY 14627.

Scheme I. Typical Local Models for Vapor-Liquid K Values^a

ln (K_i·P) = a_i + b_i ln f_r°        i = 1, ..., nc; r = reference component

ln (K_i·P) = a_i + b_i ln f_i° + c_i(1 - x_i)^2        i = 1, ..., nc

ln (K_r·P) = a_r + b_r ln f_r° + c_r(1 - x_r)^2
ln (K_i/K_r) = a_i + b_i/T + c_i(1 - x_i)^2 + d_i(1 - x_r)^2        i = 1, ..., nc; i ≠ r; r = reference component

^a In all models, f° is the standard-state fugacity (at zero pressure and system temperature).
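As an illustration, the second model of Scheme I can be evaluated directly once its parameters are known. The numeric values passed to this function in any example are invented for illustration; only the model form comes from Scheme I.

```python
import numpy as np

def k_value(a, b, c, f0, x, P):
    """Second local model of Scheme I:
        ln(K_i * P) = a_i + b_i ln f_i0 + c_i (1 - x_i)^2
    where f0 is the standard-state fugacity, x the liquid mole fraction,
    and P the pressure. Returns K_i."""
    return np.exp(a + b * np.log(f0) + c * (1.0 - x) ** 2) / P
```

Fitting a_i, b_i, and c_i to a handful of rigorous evaluations is exactly the parameter-estimation problem this paper addresses.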

the computation of dew points and the optimal design of vapor-liquid and liquid-liquid flash units. Emphasis was placed on the application to rather simple problems to illustrate the reduction of rigorous TP property evaluations that results from the use of local models. Since TP evaluations can account for up to 80% of the total computer time spent in simulating a process, this reduction remains the main objective of the method. It is the purpose of the present work to discuss a third aspect of the P-DELTA method, that is, the method for evaluating and updating the parameters in the local models. The updating procedure has important effects on the efficiency, accuracy, and stability of the overall two-tier algorithm. We first examine three updating methods which have been used in the past with other tiered-solution algorithms. Their relative merits and shortcomings are discussed in some detail. A new method is then proposed, which overcomes most of those shortcomings and is at the same time economical to use. The new updating method is used with local models to dynamically simulate vapor-liquid separation processes; this represents an extension and a novel application of the local-model idea.

Parameter Update Methods

Algorithms which incorporate TP expressions in process calculations with a two-tier structure have been proposed in the past. The earliest workers to use local models (Leesley and Heyen, 1977) only briefly discussed the problem of estimating the model parameters. Their K-value model was a function of temperature and pressure, and it contained two adjustable parameters (a third could be added for high-pressure applications). To illustrate their method, let f(x1,x2) be a rigorous property model and

g(x1,x2,p1,p2) be a local approximation to it, with one adjustable parameter, p_i, for each independent variable in the model. To evaluate p1 and p2, at least two values of f must be known at two independent points in the thermodynamic space. These represent the "measured" values, to which the function g is fit. When g is linear in the parameters, p1 and p2 are obtained by solving a linear system of two equations. If the parameters appear nonlinearly in the local model, their determination requires solving a nonlinear, algebraic system. If more than two measured values are available, the best values of the parameters can be defined in the least-squares sense, which results in either a linear or nonlinear least-squares problem. There are two aspects of Leesley and Heyen's work which are relevant to any two-tiered approach using local models. First, initial values of the parameters are needed. Second, the range of validity of the local models must be established for a set of parameters to avoid large extrapolations. Leesley and Heyen obtained initial values for the parameters by using the rigorous TP models for the first two iterations of the simulation. These two "measured" values were sufficient to calculate the two parameters in the local K-value model. To avoid excessive extrapolations of the local models, Leesley and Heyen fixed upper and lower values of the two independent variables, T and P. The bounds defined an interval, which included the two data points used in calculating the parameters. Subsequent calculations would be performed by using the local model, provided the current temperature and pressure lay within those bounds. When the range of variables was too wide to be covered accurately by a single set of parameters, they recommended using more than one set of parameters for different temperature and pressure regions.

Figure 1. Schematic representation of the P-DELTA method (P-DELTA is an acronym for Process DEsign by Limiting Thermodynamic Approximations).

Parameters in each region were kept constant for all iterations, thus leading to a "piecewise" approximation over the required temperature and pressure range. No updating was performed. With this method, some inaccuracies may result. Leesley and Heyen considered only ideal mixtures and found the error in the K values at a solution to be generally less than 5%. Therefore, they cleaned up the small error with one final iteration in which the rigorous property models were again used. Overall, they concluded that using local models did reduce the amount of rigorous calculations and computer time; however, they remarked that the method had to be bypassed for units requiring too many "piecewise" approximations. This excluded, in principle and in practice, nonideal solutions. Barrett and Walsh (1979) attempted to extend the local-model idea to nonideal mixtures. They treated each region in the process as a different thermodynamic state. With each region they associated not only a set of local-model parameters but also a set of coefficients to an approximated error model.

For their error model, the error function, e, is defined as

e(x1,x2) = g(x1,x2,p1,p2) - f(x1,x2)    (1)

A Taylor series expansion of the error function about a base point x^0 = (x1^0, x2^0) gives

e(x1,x2) = a + b(x1 - x1^0) + c(x2 - x2^0) + d(x1 - x1^0)^2 + h(x2 - x2^0)^2 + m(x1 - x1^0)(x2 - x2^0) + ...    (2)

Parameters in both the TP model (p1 and p2) and in the error model (a, b, c, d, h, and m) can be calculated from a system of equations obtained by evaluating f and g at a base point and at perturbed points for each unknown constant. This perturbation procedure requires one rigorous evaluation for every parameter or constant. Barrett and Walsh assumed that the error model was either first or second order (based on physical considerations); this greatly simplified the procedure since many of the coefficients were then zero. They were then able to estimate the error associated with a local-model prediction at new conditions (temperature, pressure, and composition) during the solution and were able to decide when a local approximation was no longer valid for a particular region. Parameters in the TP model and in the error model were then updated by repeating the entire perturbation about the current thermodynamic region. Barrett and Walsh successfully applied this method to nonideal mixtures with high accuracy. However, the cost was quite high. For their local approximations to the activity coefficients, computing and updating the parameters required more time than that which was saved by using the simple models. In addition, considerable storage was required since parameters for the error models also had to be stored. Another drawback to this method was that, by arbitrarily setting to zero some coefficients in the error expression, the cross dependencies between the independent variables (say temperature and composition) tended to be overlooked. Another algorithm using local models is that of Boston and Britt (1978). They also required an adjustment of the local-model parameters in the outside tier. To illustrate, let p be a vector of assumed parameter values, for which the problem is solved in the inside tier. At this solution, a new set of parameters, p̂, can be computed by using the rigorous properties and a perturbation method as discussed previously.
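The perturbation evaluation of error-model coefficients can be sketched for a first-order truncation of eq 2. The helper below is illustrative only (the function names, step size, and first-order restriction are assumptions, not Barrett and Walsh's code); it shows the cost structure: one extra rigorous evaluation of f per coefficient.

```python
def error_coeffs(f, g, x0, h=1e-4):
    """Perturbation estimate of the first-order error model
        e(x1,x2) ~ a + b (x1 - x1_0) + c (x2 - x2_0)
    about the base point x0 = (x1_0, x2_0). Each coefficient beyond a
    costs one additional rigorous evaluation of f."""
    e = lambda x1, x2: g(x1, x2) - f(x1, x2)   # error function, eq 1
    a = e(*x0)                                 # base-point error
    b = (e(x0[0] + h, x0[1]) - a) / h          # forward difference in x1
    c = (e(x0[0], x0[1] + h) - a) / h          # forward difference in x2
    return a, b, c
```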
Boston and Britt then posed the problem of finding that set of parameter values for which the computed parameters were identical with the estimated ones:

p̂(p) - p = 0    (3)
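A minimal quasi-Newton treatment of eq 3 can be sketched as follows, starting from an identity Jacobian. Here `p_hat` is a stand-in for the outer-tier recomputation of the parameters; the code is an illustrative Broyden iteration, not Boston and Britt's implementation.

```python
import numpy as np

def broyden_solve(p_hat, p0, tol=1e-8, max_iter=50):
    """Solve p_hat(p) - p = 0 (eq 3) with Broyden rank-one Jacobian updates,
    starting from an identity Jacobian estimate."""
    p = np.asarray(p0, dtype=float)
    F = p_hat(p) - p
    J = np.eye(p.size)                       # initial Jacobian estimate
    for _ in range(max_iter):
        dp = np.linalg.solve(J, -F)          # quasi-Newton step
        p_new = p + dp
        F_new = p_hat(p_new) - p_new
        if np.linalg.norm(F_new) < tol:
            return p_new
        # Broyden update: J <- J + (dF - J dp) dp' / (dp' dp)
        J += np.outer(F_new - F - J @ dp, dp) / (dp @ dp)
        p, F = p_new, F_new
    return p
```

Note that the Jacobian estimate J is a dense n-by-n matrix, which is exactly the storage burden discussed below.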

This yields a nonlinear set of equations, which Boston and Britt solved by using a Broyden quasi-Newton method. Each pass through the outside tier corresponds to one iteration of eq 3. If the Jacobian matrix, J, of eq 3 is diagonally dominant, the method should converge in a few iterations when started with an identity matrix for the Jacobian. A physical interpretation of diagonal dominance is that the local-model parameters must be uncoupled; e.g., a temperature parameter in the local model must not depend (much) on composition. An unfortunate drawback of this method is that the storage requirements for the Jacobian matrix grow in proportion to the square of the total number of parameters. Thus, for a multistage separation process where the number of parameters is very large, the required storage becomes excessive and the method is inapplicable. Therefore, in practice, eq 3 was solved by either a slow, direct substitution or by applying moderate acceleration

with a bounded Wegstein method. The cross dependency of the variables was totally ignored, which caused problems with strongly nonideal mixtures (poor convergence or even failure to converge). In a later paper, Boston (1980) included a composition-dependent term in their local, K-value model. For each component in the mixture, there were two additional parameters which were computed by perturbation of a rigorous activity coefficient correlation. The computational load was reduced somewhat by fixing one of the parameters in some cases and updating only one parameter at each iteration. No attempt was made to estimate the errors associated with the local models, nor was it possible with this method. Based on the above discussion, we find that there are specific objectives which must be satisfied by an updating procedure. These are as follows: each update must use a minimum number of new rigorous thermodynamic property evaluations; the past history information to be stored must be kept to a minimum; storage of past information must be efficient; the update must be computationally inexpensive; and the update must be numerically stable. Our method attempts to satisfy all these objectives by using an automatic, recursive updating procedure.

Recursive Parameter Updates

The calculation of the local-model parameters can be approached as an identification problem, shown schematically in Figure 2. With this approach, we can use established results from the fields of parameter identification and process control. In particular, we will make use of the concepts that are applied to self-tuning regulators (Astrom et al., 1977) and will build equivalent, "self-tuning" local models. First we briefly consider a classical least-squares problem. Let f_k and g_k represent the measured and estimated values of some property obtained from the rigorous and approximate models, respectively, at the kth measurement, and let f and g represent vectors of all previous measurements and estimates, including the kth ones. The n local-model parameters are given by the vector p. For a local model that is linear in the parameters, we can write

g_k = a_k'·p    (4)

and

g = A·p    (5)

Equation 4 is the local model at the kth measurement. The element a_kj corresponds to the term which multiplies the jth parameter in the local model at the kth measurement; it is either a constant or a function of temperature, pressure, etc. The matrix A of a_kj's for all n parameters and k measurements is called the observation matrix. The error (residual) for the kth measurement is given by

e_k = f_k - a_k'·p    (6)

where a_k is the vector of observations at the kth measurement. The vector of errors for the entire set of measurements is given by

e = f - A·p    (7)

The sum of the squared residuals, e'·e, after k measurements can be minimized to give the parameter vector, p, defined by

p = (A'·A)^-1·A'·f    (8)

Figure 2. Parameter updating viewed as an estimation problem.
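Eq 8 is the familiar normal-equations solution of the linear least-squares problem. A small synthetic check (the observation matrix and "measurements" below are invented stand-ins):

```python
import numpy as np

# Eq 8 in code: p = (A'A)^{-1} A'f, with A the observation matrix and
# f the vector of rigorously "measured" property values.
rng = np.random.default_rng(0)
p_true = np.array([1.5, -0.7])            # parameters to be recovered
A = rng.normal(size=(20, 2))              # 20 measurements, 2 parameters
f = A @ p_true                            # noiseless synthetic measurements
p = np.linalg.solve(A.T @ A, A.T @ f)     # normal equations, eq 8
```

With noiseless data the fitted p reproduces p_true to machine precision; with real, noisy measurements it is the least-squares best fit.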

Early in the development of the P-DELTA method, we attempted to use a simple least-squares updating procedure. We used a fixed number of rigorous "measurements" (x, T, P, and corresponding K values) that were stored at all times. Given these data, a set of local-model parameters could be evaluated with eq 8. When a new measurement was available, one of the old data points was replaced by the new one, and a new least-squares estimate of the parameters was computed. Although this update was successfully applied in many cases (Chimowitz et al., 1984), it suffered from several problems: initialization required a specific procedure, instability was sometimes experienced when many new points were close to each other, and storage requirements were high. It was also difficult to decide which point to reject. A more fruitful method is based on recursive least-squares (RLS) estimation. In eq 8, the matrix product (A'·A)^-1 is the parameter variance-covariance matrix based on k measurements and is given the symbol E. An interpretation of E is that it contains information about all past measurements, in addition to uncertainties associated with each parameter (diagonal terms) and with cross dependencies between parameters (off-diagonal terms). If a least-squares (LS) solution is assumed available for k data points and a new measurement is added (i.e., new values of f_k+1 and a_k+1), the error vector is augmented by one element:

e_k+1 = f_k+1 - a_k+1'·p    (9)

The new LS parameter estimates are given by

p = (A'·A + a_k+1·a_k+1')^-1·(A'·f + a_k+1·f_k+1)    (10)

and the new variance-covariance matrix is given by

E_k+1 = (E_k^-1 + a_k+1·a_k+1')^-1    (11)

Based on eq 10 and 11, a recursive procedure can be constructed so that, when only one new measurement is added at a time, the E matrix need not be reinverted. The current parameter vector and covariance matrix become a priori information and can be combined with the new datum to give updated values of all parameters and of the variance-covariance matrix. Past data points need not be stored, since the same information is already contained in the E matrix. An initial large uncertainty about the value of the parameters can be expressed by setting E_0 to the identity matrix multiplied by a large scalar value (10^3 to 10^6). A basic recursive least-squares (RLS) algorithm which implements these ideas is outlined in Scheme II.

Scheme II. Basic Recursive Least-Squares (RLS) Update (Bierman, 1977)^a

Step 1. Compute the residual: e_k+1 = f_k+1 - a_k+1'·p_k
Step 2. Compute the gain matrix: G_k+1 = E_k·a_k+1 / [v_k+1 + a_k+1'·E_k·a_k+1]
Step 3. Update the covariance matrix: E_k+1 = [I - G_k+1·a_k+1']·E_k
Step 4. Update the parameters: p_k+1 = p_k + G_k+1·e_k+1

^a Here G_k+1 is a gain vector and v_k a scaling factor which weights past information.
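Scheme II translates almost line for line into code. This sketch follows the four steps directly, with the scaling factor v passed in explicitly (v = 1 corresponds to the standard LS result, no forgetting):

```python
import numpy as np

def rls_update(p, E, a, f, v=1.0):
    """One pass of the basic RLS update of Scheme II.
    p: current parameter vector; E: covariance matrix; a: observation
    vector; f: new rigorous 'measurement'; v: scaling factor."""
    e = f - a @ p                                    # step 1: residual
    G = E @ a / (v + a @ E @ a)                      # step 2: gain vector
    E_new = (np.eye(p.size) - np.outer(G, a)) @ E    # step 3: covariance
    p_new = p + G * e                                # step 4: parameters
    return p_new, E_new
```

Starting from p = 0 and E = 10^6·I, a handful of measurements is enough to pin down the parameters, in line with the "four to five measurements" observation made below.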

The matrix G, indicated in Scheme II, is sometimes called the gain matrix; it weights the predicted residual so as to give the proper correction to the old parameters in step 4. As discussed by Bierman (1977), practical implementation of the RLS algorithm requires careful attention to numerical details, which can result in different formulas for steps 3 and 4. We chose a square root method, in which E is factored in the form L·D·L', where L and D are lower-triangular and diagonal matrices, respectively (Kleinman, 1977). With this implementation, only L and D have to be saved, and G need not actually be stored. This greatly improves not only the speed but also the numerical robustness of the algorithm. The reader interested in the details is referred to the monograph by Bierman (1977). For our purposes, a new measurement corresponds to a rigorous property evaluation, which is performed every time the outside tier of the P-DELTA method is traversed. Thus, the index k corresponds to the outer-loop iteration counter. A single measurement is sufficient to compute a correction for all the parameters in the local model simultaneously, since the gain matrix takes into account the cross dependence of the parameters. Thus, the cost of a perturbation approach and the inaccuracy of neglecting cross dependencies are both avoided. Before obtaining a workable RLS algorithm, two additional problems must be addressed. The first concerns obtaining initial values of the parameters. If no information is available, the parameters are set to zero and the E matrix is set to 10^6·I, as previously indicated. The first few evaluations are then performed by using the rigorous TP models, while the update procedure builds up information on the parameters and covariance matrix; we found that an excellent estimate is obtained after four to five measurements.
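The L·D·L' storage pattern can be seen in a bare-bones factorization routine. This is a minimal sketch without pivoting, assuming a symmetric positive definite E; the production algorithm in Bierman (1977) is considerably more careful.

```python
import numpy as np

def ldl_decompose(E):
    """Factor a symmetric positive definite matrix as E = L D L', with L
    unit lower-triangular and D diagonal. Only L's strict lower triangle
    and the n entries of d need to be stored, rather than E itself."""
    n = E.shape[0]
    L = np.eye(n)
    d = np.zeros(n)
    for j in range(n):
        d[j] = E[j, j] - np.sum(L[j, :j] ** 2 * d[:j])
        for i in range(j + 1, n):
            L[i, j] = (E[i, j] - np.sum(L[i, :j] * L[j, :j] * d[:j])) / d[j]
    return L, d
```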
Figure 3. Two-stage flash with recycle.

Alternatively, the parameters and the variance-covariance matrix corresponding to a previous solution could be used as initial values in similar problems (similar components, temperature, pressure, etc.). The second problem arises because the solution may move into new regions where the local models are invalid. In the early iterations, the "measured" values of TP properties can be quite different from the values that they reach in a later stage. The local-model parameters obtained by the standard RLS algorithm represent the best fit of both old and more recent measurements, which could lead to inaccuracies and slow convergence. This is a serious problem, which we solved by disregarding ("forgetting") old measurements when they become obsolete. With reference to Scheme II, we see that the constant v_k is merely a scaling factor; positive values of v_k that are less than 1 correspond to disregarding old information (Astrom et al., 1977). In particular, setting all v_k's to the same constant v_0 (0 < v_0 < 1) causes an exponential forgetting of past data. (If v_0 = 1, we obtain the standard LS results.) It is quite difficult to rigorously assign optimum values to v or to define a proper sequence of v_k's, although this can be done (Fortescue et al., 1981). Instead, we have achieved the same result by empirically scaling each new measurement by a scalar factor s:

ē_k = s·e_k    (12)

where ē_k is now the weighted error for the kth measurement. From eq 12, we note that a large value of s will cause the kth measurement to have a large weight and vice versa. With s monotonically increasing with k, the old measurements become progressively less important, and the local model will evolve to fit the most recent values accurately. We chose to increase s in proportion to the square root of the estimated relative error:

s = s·max[1, ((e_k/f_k)/τ1)^(1/2)]    (13)

In eq 13, τ1 is an adjustable tolerance. The initial value of s is taken as 1 when k = 1. We found this expression, although empirical, quite satisfactory. It gives essentially no "forgetting" when the local model is accurate, but old values are disregarded quickly in the presence of inaccuracies. Thus, the parameters in the local models rapidly adjust to the changing process conditions and are able to maintain sufficient accuracy. A final numerical problem, called "blow up", is often cited in conjunction with the RLS equations. With this term we refer to the situation where past information is forgotten without being replaced by new information; it may happen when the process is inactive for long periods of time. Frequent sampling in these conditions does not help, because the new data do not contain really new information about the system. A constant or exponential forgetting of past data would then lead to extremely high gains and numerical instability as soon as a small disturbance hits the system. The choice of a variable scaling factor (eq 13) is effective in dealing with this problem and makes the update very stable under all conditions; drifting of the parameters or "blow up" was never experienced with this method.
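The scaling-factor logic of eq 13 fits in a few lines. The absolute value is our addition, to guard against negative residuals; the default tolerance of 0.05 follows the value used in the example runs.

```python
def update_scale(s, e_k, f_k, tau=0.05):
    """Eq 13: s <- s * max(1, ((e_k/f_k)/tau)**(1/2)).
    When the relative error of the local model is below the tolerance tau,
    s is unchanged (no forgetting); larger errors grow s so that old data
    are disregarded quickly."""
    rel = abs(e_k / f_k)               # estimated relative error
    return s * max(1.0, (rel / tau) ** 0.5)
```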

Application of Recursive Least-Squares Method

Our first example using the principles developed above is the dynamic simulation of two flash units with a recycle. The process is shown in Figure 3. The flow rate of the recycle stream and the vapor-to-feed ratio in the first flash were assumed to be controlled independently by a perfect controller and were therefore set to fixed values. The feed flow rate was also fixed, but its composition could change. Pressure in both units was also specified, at 10^5 Pa. Only temperature and compositions in the two units could vary, so that equilibrium effects were isolated and


Figure 4. Example 1: dynamic concentration profiles (A = acetone, B = acetonitrile, C = water; time axis 0-2500).

Table I. Simulation Statistics for Example 1^a

                              using local models
                     rigorous    fixed      automatic interval
                     solution    interval   flash I    flash II
integration steps       67          69         67         67
BP evaluations
  total                150         158        146        146
  rigorous models      150          56         19         21
  local models          --         153        141        141
local-model updates     --          56         19         21
CPU time, s           1.72        1.46       1.46       1.46
% decrease
  rigorous BP's         --         63%        87%        86%
  computer time         --         15%        31%        31%

^a BP = bubble point.

tested independently. Liquid holdup in the drums was constant, vapor holdup was neglected, and instantaneous vapor-liquid equilibrium was assumed. The model for each unit consisted of one differential equation for the liquid mole fraction of every component in the mixture except one, which was then given by the algebraic equation Σx_i = 1. The dynamic response was simulated by using the General Equation Modeling System (GEMS) language, developed at the University of Connecticut (Babcock et al., 1979; Koup, 1981). GEMS uses an implicit integration algorithm (Brandon, 1974) with automatic selection of the integration step. With IMP, the error is monitored by a full-step/half-step technique, which requires two Jacobian matrix evaluations per step (more may be required when the error over a step is too large and the integration must be repeated with a smaller step size). Equilibrium K values and temperatures corresponding to the bubble point conditions were calculated whenever a new Jacobian was computed but were kept constant for the rest of the integration step (or half-step). A conventional solution was performed first by using rigorous TP property routines from Prausnitz et al. (1980) for all bubble point calculations. These routines use the UNIQUAC equations for modeling liquid-phase nonidealities. A mixture of acetone, acetonitrile, and water was fed to the first flash, with overall composition z = (0.3, 0.3, 0.4). This was taken as the initial liquid composition in both units. From these conditions, the differential equations described an initial transient, until all concentrations reached their correct steady-state values. This transient is shown in Figure 4 for values of time between 0 and 700.
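Under the stated assumptions (constant liquid holdup, negligible vapor holdup, instantaneous equilibrium y_i = K_i x_i), the per-drum component balances take a simple form. The sketch below is our reading of that model, not the paper's GEMS code; all symbols and values in any example call are illustrative.

```python
import numpy as np

def flash_rhs(x, z, F, L, V, K, M):
    """Right-hand side of the component balances for one flash drum:
        M dx_i/dt = F z_i - L x_i - V y_i,   y_i = K_i x_i,
    with liquid holdup M, feed rate F and composition z, and liquid and
    vapor product rates L and V. The last mole fraction is closed
    algebraically by sum(x) = 1 in the full model."""
    y = K * x                          # instantaneous vapor-liquid equilibrium
    return (F * z - L * x - V * y) / M
```

At steady state the right-hand side vanishes, giving x_i = F z_i / (L + V K_i), which is a convenient consistency check.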
Note that the time scale of the response is largely irrelevant (it depends on the values assigned to holdups and flow rates) because of the use of an automatic step-size integrator. About the same number of integration steps are required to reach steady state with other values of the holdup. At time 700, the feed concentration was switched to a new composition, z* = (0.9, 0.05, 0.05), which produced the second part of the transient response in Figure 4. Statistics for this initial run, shown in the first column of Table I, indicate that 150 bubble point calculations were required for each unit (300 total). A second run was performed by using the two-tier approach with local approximations for equilibrium K values given by the last model in Scheme I. Two sets of local-model parameters were used, one for each unit. Initial values of the parameters and variance-covariance matrices were set to 0 and 10^6·I, respectively; no a priori knowledge was assumed. For the first five integration steps, bubble

Table II. Simulation Statistics for Example 2^a

                              using local models
                     rigorous    fixed      automatic interval
                     solution    interval   flash I    flash II
integration steps      137         133        136        136
BP evaluations
  total                276         268        274        274
  rigorous models      276          73         32         45
  local models          --         263        269        269
local-model updates     --          73         32         45
CPU time, s           5.79        3.59       2.88       2.88
% decrease
  rigorous BP's         --         74%        88%        84%
  computer time         --         38%        50%        50%

^a BP = bubble point.

points were calculated by using the rigorous routine. At each such time, an update was also performed. In this phase, the local model was assumed to be still inaccurate, and its predictions were not used for computing the scaling factor (eq 13). Instead, we set s = 1.5k (k = 1, ..., 5). After the initial phase, a rigorous bubble point was computed at fixed time increments (sample intervals), and the local model was updated with a scaling factor given by eq 13 and with τ1 = 0.05. The sample interval selected was 20 for times less than 700, 10 for times between 700 and 850, and 100 for the rest of the simulation. Between sample intervals, the local model was used to compute the required bubble points. Results for this run are also given in Table I. The number of integration steps and total bubble point evaluations are essentially the same as in the previous run, which indicates that the dynamic characteristics of the response have not changed. However, the number of rigorous calculations is reduced by a factor of about 3, and the overall computer time is reduced by 15%. The compositions of the liquid streams for the two runs, at some time after the step change and at the new steady state, are compared in Table III and show good agreement. We note that the step change in the feed concentration is very large and moves the vapor-liquid equilibria into a completely different composition-temperature region; the new steady state corresponds to substantially changed liquid-phase mixtures. This poses a severe test for the update procedure and illustrates its ability to smoothly "tune" the local-model parameters. In the second example, we considered the same two-stage flash process used above but under different conditions. In this case, the feed was a wide-boiling ideal mixture of eight hydrocarbons, from C to C16, and the liquid flow rate from the second flash was set to zero initially.


Table III. Intermediate and Steady-State Compositions for Example 1 (liquid-phase mole fractions)

                         rigorous    fixed      automatic
                         solution    update     update
Middle of Transient (Time = 800)
flash I     x1            0.7758     0.7807     0.7763
            x2            0.1175     0.1163     0.1175
            x3            0.1066     0.1030     0.1062
flash II    x1            0.6707     0.6822     0.6710
            x2            0.1624     0.1594     0.1623
            x3            0.1668     0.1584     0.1667
Final, New Steady State (Time = 2500)
flash I     x1            0.8846     0.8845     0.8845
            x2            0.0603     0.0603     0.0603
            x3            0.0551     0.0552     0.0551
flash II    x1            0.9250     0.9249     0.9244
            x2            0.0320     0.0321     0.0322
            x3            0.0430     0.0430     0.0434


Figure 5. Example 2: dynamic concentration profiles.

The mole fractions of the liquid phase were set initially to the steady-state values corresponding to this feed, obtained by integrating to steady state in a previous run. Other conditions were similar to those of the previous example. At time 200, a specified liquid flow rate was recycled from the second stage to the first one. The resulting transients are shown in Figure 5 for two of the eight components. As shown in Table II, the base run required 276 rigorous bubble point calculations in 137 integration steps. A second run with the two-tier procedure required only 73 rigorous evaluations in 133 integration steps. This represents a 75% reduction in the number of rigorous bubble point calculations. Since each bubble point is more expensive in this example than in the previous one (more components), the overall reduction in computer time of 38% is more significant. Parameters and covariance matrices were initialized as in example 1, and the sample interval was set to 50 time units for times less than 195, to 2 for times between 195 and 320, and to 20 when steady state was approached (times greater than 320). The composition profiles in the two runs agreed to four significant digits. The conditions in each of the two stages move toward quite different composition-temperature regions. The heavy component all but disappeared from the second stage (the mole fraction of component eight in stage 2 dropped to a very small value). Since the recursive update adjusts the parameters for each stage independently, excellent accuracy was maintained in both regions.

Automatic Sample Interval

In the dynamic simulation examples presented previously, the selection of a sample interval was based on a rather ad hoc criterion. A parameter update was performed at fixed time increments, established a priori from a knowledge of the perturbations imposed and of the time constants of the system. Typically this information is not available; therefore, it would be desirable to have a more general criterion for choosing the sample interval that would minimize the number of updates. Clearly, there is no need to update when the local model is accurate in a given region and the region does not change, as may be the case near steady state. Also, updates are not required when the solution region changes but the current local model extrapolates well to the new region. Large sample intervals can be used in each of these cases. However, when significant changes in the system occur or when a local model does not extrapolate accurately, more frequent updating may be required. From this observation, it is apparent that the sample interval may be adjusted based on a measure of the local-model accuracy. A true measure of the difference between the local-model prediction and the rigorous property value at the current conditions is already available; this error is used in the computation of the scaling factor (eq 13) for the update. The error is compared to the same error tolerance (τ₁) used previously. When the error is too small, the sample interval is increased; when it is too large, the sample interval is reduced. Thus, a large discrepancy between a rigorous model evaluation and a local-model estimate has a 2-fold effect: it causes the recursive update to disregard old information at a faster rate by changing the scaling factor, and it increases the update frequency by reducing the sample interval. Both actions tend to improve the accuracy of the local model in the new region.
By assuming that the error is second order with respect to the sample interval, we find that the interval should be scaled by the square root of the tolerance-to-error ratio, which lengthens the interval when the error is below the tolerance and shortens it otherwise. For safety, we also limit the increase to a factor of 4 and the reduction to a factor of 10 and limit the absolute value of the sample interval by upper and lower bounds. The resulting sample-interval-control algorithm is similar to the integration step-size adjustment procedure used within the IMP integration routine (Babcock et al., 1979). However, there is one important difference. If a given integration step produces an unacceptable integration error, the step is reduced and the integration is repeated from the most recent good data, which are kept in memory. This is not the case with the local-model interval. With reference to Figure 6, let t_old be the last time that a rigorous measurement was taken and (t_next - t_old) the current sample interval. All iterations between t_old and t_next are performed by using the local model, with the integration step adjusted independently. When the integration reaches past t_next, a rigorous property evaluation is performed and the local model is updated. If a large difference between the rigorous and the local values results, it is not possible to go back more than a single integration step, unless a complete iteration history is saved and a reinitialization procedure is followed. A large sample interval is always produced by the automatic sample adjustment during a long steady-state period; if this is followed by a rapid perturbation, a situation like the one shown in Figure 6 will inevitably occur.
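The interval adjustment described above can be sketched as follows. The square-root rule, the factor caps (4 on increase, 10 on reduction), and the interval bounds come from the text; the function name, argument names, and the zero-error guard are our own illustrative choices:

```python
import math

def adjust_interval(dt, error, tol, dt_min=1.0, dt_max=1000.0):
    """Scale the sample interval dt by sqrt(tol/error), limiting the
    increase to a factor of 4 and the reduction to a factor of 10,
    then clip the result to [dt_min, dt_max]."""
    re = error / tol                      # error-to-tolerance ratio
    if re <= 0.0:                         # perfect agreement: maximum increase
        factor = 4.0
    elif re < 1.0:                        # model accurate: lengthen interval
        factor = min(4.0, 1.0 / math.sqrt(re))
    else:                                 # model inaccurate: shorten interval
        factor = max(0.1, 1.0 / math.sqrt(re))
    return min(dt_max, max(dt_min, dt * factor))
```

For example, an error four times the tolerance shortens a 10-unit interval to 5 units, while an error far below the tolerance lengthens it by the capped factor of 4.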


Ind. Eng. Chem. Process Des. Dev., Vol. 25, No. 3, 1986


Figure 6. Relation between integration step and sample interval for property evaluations. (The figure marks rigorous property evaluations at t_old and t_next, local-model evaluations at the intermediate integration steps, and the integration step size along the time axis.)

Scheme III. Algorithm for the Automatic Selection of the Update Times

Step 1. Initial pass only: set k = 0, t_old = t, t_next = t + 1; compute K_i^L from the local model for i = 1, ..., n_c.
Step 2. Set k = k + 1.
Step 3. Compute the largest relative change from the last rigorous values; force updating when the change is too large: if |K_i^L - K_i^R|/K_i^R ≥ τ₂ for any i, go to step 5.
Step 4. If t < t_next, set K_i = K_i^L (i = 1, ..., n_c) and return to step 2.
Step 5. Compute K_i^R from the rigorous model (i = 1, ..., n_c) and set K_i = K_i^R, Δt = t - t_old, and t_old = t.
Step 6. Compute the largest relative error between rigorous and estimated K's: RT = max_i |K_i^R - K_i^L|/K_i^R; RE = RT/τ₁.
Step 7. Compute the scaling factor for the recursive update: s = s·max{1, (RE)^(1/2)}.
Step 8. Update the local model using the RLS algorithm.
Step 9. Adjust the sample interval: if RE < 1, Δt = Δt·min{4, 1/(RE)^(1/2)}; if RE ≥ 1, Δt = Δt·max{0.1, 1/(RE)^(1/2)}.
Step 10. Define the next sample time: t_next = t + Δt.
Step 11. Return to step 2.
Notes: K_i^R and K_i^L are property values from the rigorous and local models, respectively (i = 1, ..., n_c). For the first n_start passes, steps 2, 3, 4, 6, and 9 are skipped, while step 7 is modified to s = r·k. We generally used n_start = 5 and r = 1.5. Δt_max and Δt_min are upper and lower bounds for the sample interval and must be set by the user.
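Scheme III can be rendered as a driver loop. The following sketch, written for a single K value rather than the full vector, is our own reading of the scheme: `rigorous_K` stands in for the rigorous bubble-point routine, `local_K` for the local-model estimate, and `update_local` abstracts the RLS update (steps 7-8); the tolerances, caps, and bounds follow the text.

```python
def run_with_updates(times, rigorous_K, local_K, update_local,
                     tau1=0.05, tau2=0.1, dt_min=1.0, dt_max=1000.0):
    """Scheme III sketch for one K value: use the local model between
    sample times; evaluate the rigorous model when the next sample time
    is reached or when the local estimate drifts more than tau2 from the
    last rigorous value; then rescale the sample interval (steps 5-10)."""
    K_rig = None                 # last rigorous value (step 3 comparison)
    dt = 1.0                     # initial sample interval (step 1)
    t_next = times[0]            # force a rigorous pass on the first step
    n_rigorous = 0
    for t in times:
        K_loc = local_K(t)
        drift = K_rig is not None and abs(K_loc - K_rig) / abs(K_rig) >= tau2
        if K_rig is None or t >= t_next or drift:
            K_rig = rigorous_K(t)                          # step 5
            n_rigorous += 1
            re = (abs(K_rig - K_loc) / abs(K_rig)) / tau1  # step 6
            update_local(t, K_rig, max(1.0, re ** 0.5))    # steps 7-8
            if re <= 0.0:                                  # step 9
                factor = 4.0
            elif re < 1.0:
                factor = min(4.0, re ** -0.5)
            else:
                factor = max(0.1, re ** -0.5)
            dt = min(dt_max, max(dt_min, dt * factor))
            t_next = t + dt                                # step 10
    return n_rigorous
```

When the local model stays accurate, the interval grows by the capped factor of 4 at each rigorous pass, so rigorous evaluations rapidly become sparse; a drifting estimate trips the τ₂ test and forces an early update.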

To safeguard against this possibility, a second and more conservative criterion was also adopted: the relative change in the K value is monitored. The local-model estimate is compared at every iteration with the last rigorous value of the property. If the relative change exceeds a threshold value, τ₂, an update is forced at that point and the sample interval is reduced accordingly. Note that this procedure requires storing only a single set of K values, corresponding to the most recent rigorous evaluation. The final algorithm is outlined in detail in Scheme III. The sample interval is adjusted in step 9, and the relative change is monitored in step 3. For the first n_start iterations, we use a slightly modified procedure: steps 2, 3, 4, 6, and 9 are skipped, and the scaling factor, s, is computed in step 7 as s = r·k. We have generally used n_start = 5 and r = 1.5. With this modification, the method is completely self-starting; no initial estimates of either the covariance matrix or the parameters are needed. An initial estimate of the sample interval is also provided automatically; it is set equal to n_start.

Figure 7. Example 1: dynamic response and update times. (Curves: acetone and water mole fractions from Flash I and Flash II vs. time, 0-2500. The sample interval and the update history for each unit are marked along the time axis.)

Examples Using the Automatic Recursive Update

The dynamic simulations of interconnected flash units presented previously (examples 1 and 2) were run again with the automatic sample interval algorithm shown in Scheme III. The problem statement and operating conditions were exactly the same as before. For the automatic-sampling algorithm, two parameters must be specified: a tolerance value for the relative error in the K values (τ₁) and a tolerance for the maximum relative change of any K value from the last rigorous evaluation before an update is forced (τ₂). These error conditions are monitored independently for each flash unit. If either error tolerance is exceeded for any single component in a mixture, the model for that unit is updated. We used a value of 5% for the first parameter (τ₁ = 0.05) and 10% for the second one (τ₂ = 0.1) for both problems. We also set upper and lower limits for the sample interval of 1000 and 1 in example 1 and of 200 and 1 in example 2, respectively. In Tables I and II, results using the automatic sampling algorithm are compared with previous results. A substantial reduction in rigorous bubble point evaluations and computer time is achieved by using the automatic sampling algorithm. Only 10-15% of the bubble points are computed with the rigorous routine; the overall computer time is reduced by 50%. In Table III, which shows the liquid concentrations at two different times for example 1, we see that these improvements are obtained without any loss of accuracy in the calculations. On the contrary, a noticeable improvement in accuracy is obtained relative to the fixed-interval solution at time 800 (halfway through the step response); the steady-state values are equally good. An additional, important feature is that the models for each unit are updated independently. When a perturbation is applied to the process, it propagates with different amplitudes to the various parts of the system and with different time delays. The local models are updated only in those parts of the system which undergo significant transients and thus only when the perturbation actually reaches those parts. This concept is illustrated by Figure 7 for the first problem. The arrows along the horizontal axis mark the times when the automatic sample algorithm updates each unit. During the initial part of the transient, the sample interval becomes large for both units. When the large step change in feed concentration is imposed on the system, the sample interval is rapidly reduced. Since the feed is to the first unit, changes occur in that unit first and then propagate to the second unit. The location of the updates in Figure 7 reflects this physical situation well. Finally, as the new steady state is approached, the update frequency is decreased again and the sample interval becomes very large.

Conclusions

We have presented a new procedure for updating the parameters of simple, local models, which are used for approximating more complex thermodynamic functions. The adjustment of those parameters is performed in the outer tier of a two-tier solution algorithm and is an essential part of that algorithm. The new method utilizes concepts from the field of estimation theory and consists of a modified recursive least-squares procedure. A single evaluation of a rigorous thermodynamic property permits a simultaneous update of all the parameters in the local model. We have illustrated the behavior of the update procedure with two examples where the thermodynamic conditions of the system change drastically. The examples demonstrate that the proposed update is efficient, robust, and stable and that it can rapidly "tune" a local model around the current thermodynamic conditions of the system. The update requires a minimum of n × n_c words of storage for the parameters and n(n + 1)n_c/2 words for the covariance matrix in factored form, where n_c is the number of components in a mixture and n the number of parameters in a local model. For the K-value model that we used (four parameters), this amounts to a total of 14n_c words for each distinct parameter-covariance set. These storage requirements are reasonable, especially if we consider that the covariance matrix is only needed at update times and can be kept on a secondary storage device at all other times. Thus, the different matrices could be moved into core and updated one at a time. The method is then applicable to even very large processes. The actual updating program is short (less than 50 lines of code) and does not add significantly to the size of an application package. The coupling of simple property models with an effective parameter update makes it possible to solve dynamic simulation problems with a two-tier solution scheme.
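The heart of the modified recursive least-squares procedure can be sketched in a few lines, using the symbols from the Nomenclature (p, E, a, G, s). The paper stores the covariance in factored form (Bierman, 1977); for clarity this sketch keeps the full matrix, and the scale factor s ≥ 1 inflates the covariance to discount old information (the "forgetting" mechanism).

```python
import numpy as np

def rls_update(p, E, a, y, s=1.0):
    """One recursive least-squares step. p: n local-model parameters;
    E: n x n parameter error covariance; a: observation vector, so the
    local model predicts a @ p; y: rigorously computed property value;
    s >= 1: scaling factor that discounts old information."""
    E = E * s                          # inflate covariance: forget old data
    G = E @ a / (1.0 + a @ E @ a)      # gain vector
    p = p + G * (y - a @ p)            # correct by the prediction error
    E = E - np.outer(G, a @ E)         # shrink covariance with new info
    return p, E
```

Starting from a large initial covariance, a handful of rigorous "measurements" drives the parameters to values that reproduce the rigorous model in the current region, which is exactly the tuning behavior the examples demonstrate.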
This may result in considerable savings in property evaluations and computer time if a proper updating frequency is selected. We have proposed an algorithm for the automatic selection of the sample interval, which is effective in keeping the number of rigorous property evaluations to a minimum. Discontinuities and large variations in problem conditions are also handled by the method. In the examples presented, the automatic sample algorithm yields both a large reduction in the use of rigorous evaluations and a more accurate representation of the transient response than a fixed sampling procedure. It does so by concentrating the updates in the region(s) of rapid change and reducing their frequency elsewhere. The reduction in computer time, although appreciable, is not as impressive as the reduction in rigorous evaluations. For these small problems, in fact, the fraction of total cost due to property calculations is generally smaller than for large problems, and the actual integration represents a higher fixed cost. These results, however, are very encouraging and indicate that substantial savings can be expected in large-scale applications.

Finally, we note that the use of local models does have limitations and may not be directly suitable for all applications. When large changes occur in the independent variables as may happen with a Newton-Raphson solution in a distillation algorithm, the local models could be in considerable error. This could, and likely would, lead to oscillations in the solution algorithm. In this application, a continuation algorithm would be more suitable for use with local models. For this same reason, it is likely that local models will not work well in an optimization problem, since again large changes are made in the independent variables. The ideal application for local models appears to be in dynamic simulation, where any changes can be continuously tracked. This application is also one that is computationally very expensive and one that is most likely to gain from the use of local models.

Acknowledgment

We acknowledge Dr. Mordechai Shacham and Dr. David Kleinman for their helpful advice. We are also grateful to Control Data Corp., which supported this work through fellowship grants and computer facilities.

Nomenclature
a = observation matrix for a single measurement
A = observation matrix
a, b, c, d = parameters in local K-value model
D = diagonal matrix
E = parameter error covariance matrix
f° = standard-state fugacity
f = rigorous thermodynamic model
g = local thermodynamic model
G = gain vector
I = identity matrix
K = vapor-liquid equilibrium coefficient
L = lower triangular matrix
n = number of parameters
n_c = number of components in a mixture
n_start = number of initial rigorous property evaluations
p = local-model parameter(s)
p = vector of model parameters
P = pressure
s = scale factor
t = time
T = temperature
t_next, t_old = times of the next and last rigorous property evaluations
u = "forgetting" factor in RLS algorithm
x = vector of liquid mole fractions
y = vector of vapor mole fractions
z = vector of overall mole fractions

Greek Letters
τ = tolerance
Σ = summation

Subscripts
i, j, k = indexes
r = reference component
t = total

Superscripts
k = iteration, measurement index
L = local model
R = rigorous model
T = matrix transpose
-1 = matrix inverse
0 = initial value
* = new composition

Literature Cited
Astrom, K. J.; Borisson, U.; Ljung, L.; Wittenmark, B. Automatica 1977, 13, 457.
Babcock, P. D.; Stutzman, L. F.; Brandon, D. M., Jr. Simulation 1979, 33, 1, 1.


Ind. Eng. Chem. Process Des. Dev. 1986, 25, 682-687

Babcock, P. D.; Stutzman, L. F.; Koup, T. G. Simulation 1981, 38, 2, 41.
Barrett, A.; Walsh, J. J. Comput. Chem. Eng. 1979, 3, 397.
Bierman, G. J. "Factorization Methods for Discrete Sequential Estimation"; Academic Press: New York, 1977.
Boston, J. F. In "Computer Applications to Chemical Engineering"; Squires, R. G., Reklaitis, G. V., Eds.; American Chemical Society: Washington, DC, 1980; ACS Symp. Ser. No. 124, p 135.
Boston, J. F.; Britt, H. I. Comput. Chem. Eng. 1978, 2, 109.
Brandon, D. M., Jr. Simulation 1974, 23, 1, 17.
Chimowitz, E. H.; Anderson, T. F.; Macchietto, S.; Stutzman, L. F. Ind. Eng. Chem. Process Des. Dev. 1983, 22, 217.
Chimowitz, E. H.; Anderson, T. F.; Macchietto, S.; Stutzman, L. F. Ind. Eng. Chem. Process Des. Dev. 1984, 23, 609.
Fortescue, T. R.; Kershenbaum, L. S.; Ydstie, B. E. Automatica 1981, 17, 6, 831.
Kleinman, D. L. Technical Report TR-77-2, Department of Electrical Engineering and Computer Science, University of Connecticut, Storrs, 1977.
Koup, T. G. Ph.D. Thesis, Department of Chemical Engineering, University of Connecticut, Storrs, 1981.
Leesley, M. F.; Heyen, G. Comput. Chem. Eng. 1977, 1, 109.
Prausnitz, J.; Anderson, T.; Grens, E.; Eckert, C.; Hsieh, R.; O'Connell, J. "Computer Calculations for Multicomponent Vapor-Liquid and Liquid-Liquid Equilibria"; Prentice-Hall: Englewood Cliffs, NJ, 1980.
Rosen, E. M. In "Computer Applications to Chemical Engineering"; Squires, R. G., Reklaitis, G. V., Eds.; American Chemical Society: Washington, DC, 1980; ACS Symp. Ser. No. 124, p 135.

Received for review June 13, 1983. Accepted November 26, 1985.

Mixing of Fine Particles by Means of a Negative-Pressure Air Mixer

Tetsuo Akiyama, Jingquan Zhang, Masafumi Egawa, and Hiromu Kojima
Department of Chemical Engineering, Shizuoka University, Hamamatsu, 432, Japan

Mixing experiments with five kinds of fine particles demonstrated that the fluidized bed type of negative-pressure air mixer effectively mixes fine particles, although the spouted bed type of the mixer failed to do so. A criterion that assesses the mixability of fine particles, as well as a means of improving the mixing efficiency, is investigated. A stochastic model is also developed to simulate the mixing process, and the predictability of the model is examined together with an earlier theoretical model.

In industry, air mixers are commonly used to mix solid particles. The conventional air mixer, or positive-pressure air mixer (PPAM), is usually equipped with bag filters to separate gas from the entrained fine particles. These bag filters often become a source of contamination. The contamination can be objectionable, particularly when particles of foodstuffs or medicines are to be treated. To overcome this drawback, the negative-pressure air mixer (NPAM) was introduced by Akiyama et al. (1982). Subsequently, it has been shown experimentally as well as theoretically that the NPAM can be more advantageous than the PPAM in the sense that air is used more effectively in the former than in the latter (Akiyama and Tada, 1984, 1985). In the above studies, mixing experiments were carried out with relatively coarse particles. We have found recently, however, that in contrast to the mixing of rather coarse particles, it is practically impossible to mix diatomaceous earth, a fine particle (d_p = 8.3 μm), by means of the spouted bed type of NPAM (Figure 1a), although the fine particle can be mixed well by the same type of PPAM (Akiyama et al., 1985). When the spouted bed type of NPAM was used, diatomaceous earth hardened, instead of being dispersed and mixed, upon the injection of air from the bottom of the particle bed. The bed spouted only partially at any level of air velocity. The shaded parts in Figure 1 denote the hardened portions of the particle bed. Figure 1a shows a typical example when air was injected at relatively low speed. A high rate of air input caused even the upper portions of the bed (except near the center of the cross-sectional area) to harden and break into several hard lumps. In the case of the fluidized bed type of NPAM shown in Figure 1b, where a filter was set at the top of the cone (but without packings within the cone), the central portion hardened upon the injection of air at relatively high speed.
*To whom all correspondence should be directed.

It appeared that the injected air, passing through the particle bed in the region near the wall to the upper space of the column, compressed the particle bed from all directions, thus causing the bed to form a hard spherical lump in the central region. Thus, within a confined vessel, a particle bed characterized by strong cohesiveness and large compressibility was found to be more easily compressed than dispersed when it was subjected to a sudden air injection. In fact, as much as a 50% reduction in bulk volume was observed when atmospheric air was suddenly released into an evacuated (-80 kPa in gage pressure) column 0.30 m high and 0.057 m in diameter, containing diatomaceous earth up to 0.20 m high. This peculiar susceptibility of fine particles to compaction upon a sudden air injection (in a confined vessel) imposes a unique problem when fine particles are to be mixed in the NPAM. Therefore, the primary purpose of the present investigation is to search for means to achieve effective mixing of fine particles in the NPAM. In addition, an attempt was made to assess the mixability of fine particles (in the NPAM) in terms of particle physical properties. The predictability of the theoretical models concerning the extent of mixing was also examined.

Experimental Section

In contrast to the poor performance of the NPAM in the mixing of fine particles, excellent mixing (of fine particles) can be obtained by the PPAM (Akiyama et al., 1985). The PPAM is a through-flow system: the top of the column is open to the atmosphere, and the injected air flows out of the column into the atmosphere through bag filters. In the case of the NPAM, the injected air stays within the vessel. Thus, a conceivable way to improve the mixing performance of the NPAM is to simulate the flow pattern of the PPAM. For that purpose, we examined the effect of splitting the mixing vessel into two sections and connecting them with a pipe. One section is the column