
Application of Dynamic Programming to Final State Constrained Optimal Control Problems

Rein Luus* and Oscar Rosen

Department of Chemical Engineering, University of Toronto, Toronto, Ontario, Canada M5S 1A4

The use of a penalty function to handle final-state constraints is incorporated into the iterative dynamic programming algorithm employing accessible grid points and region contraction. By using three examples, the importance of the size of the penalty function is illustrated. If the penalty function is very large, the rate of convergence to the optimum is slow and a large number of allowable values for control are required. If, however, the penalty function is reasonably small, the rate of convergence is rapid and a very small number of allowable control values are required for convergence to the optimum.

Introduction

To examine the design possibilities for a process, one frequently encounters an optimal control problem. Although the literature abounds with methods that can be used for well-behaved systems, the choice is much more limited when the equations are highly nonlinear or the system is ill-behaved. Recently Park and Ramirez (1988) showed that obtaining the optimal control policy for a fed-batch reactor is by no means an easy task when Pontryagin's maximum principle is used. An attractive alternative to the use of Pontryagin's maximum principle, and one that avoids the two-point boundary value problem, appears to be dynamic programming (Bellman and Dreyfus, 1962). Although dynamic programming could be carried out successfully on very simple chemical engineering systems (Lapidus and Luus, 1967), the application to more meaningful systems is much more difficult, yielding wrong answers for a chemical reactor system described by only two state variables (Rosenbrock and Storey, 1966). The problems involving dimensionality and interpolation simply could not be overcome, and even relatively simple optimal control problems presented a computational nightmare. Recently, Edgar and Himmelblau (1988) further reaffirmed the problems with dynamic programming and stated that, even with present-day computers, systems with more than three or four state variables cannot be optimized with dynamic programming even if a scalar control is involved.

To overcome the dimensionality problem, Luus (1990a) suggested the use of an iterative procedure where a relatively coarse grid is used and the region over which the grid elements are taken is contracted by a small amount after each iteration, as was done for steady-state optimization problems by Luus and Jaakola (1973). In addition, to overcome the interpolation problem, it was recommended to simply use the best control policy available at the grid point closest to the point under consideration, as was done by de Tremblay and Luus (1989). By using this procedure the correct optimal control policy was obtained for test problems, but the computational effort was rather high. To improve the computational efficiency, Luus (1989) recommended the use of accessible grid points for the state vector; i.e., instead of arbitrarily setting up the grid, the grid points were generated by integrating the state equation for various values of control. With that modification the iterative dynamic programming was successfully applied to a nonlinear system with eight state variables and four control variables (Luus, 1990b). The rate of convergence was fast and the results were reliable. This iterative dynamic programming procedure can also be used with confidence for optimal control of systems described by difference equations (Luus and Smith, 1991) and is ideally suited for singular control problems (Luus, 1990c).

The purpose of this paper is to extend this iterative dynamic programming procedure to the optimal control of systems where final-state constraints are present.

Problem Formulation

Let us consider the system described by the differential equation

dx/dt = f(x, u)   (1)

where the initial state x(0) is given. The state vector x is (n × 1) and u is an (r × 1) control vector bounded by

αj ≤ uj ≤ βj,   j = 1, 2, ..., r   (2)

At the final time tf constraints are placed on m of the state variables, where m ≤ n, in the form of either equality constraints

xi(tf) = si,   i = 1, 2, ..., m   (3)

or inequality constraints

xi(tf) ≤ si,   i = 1, 2, ..., m   (4)

The optimal control problem is to find the control policy u(t) in the time interval 0 ≤ t < tf that will minimize the performance index

I[x(0), tf] = ψ(x(tf)) + ∫₀^tf φ(x, u) dt   (5)

Construction of Augmented Performance Index. In order to satisfy the constraints specified at the final time tf, we construct the augmented performance index

J = I[x(0), tf] + p(x(tf))   (6)

where p(x(tf)) is a penalty function introduced to ensure that the restrictions in eq 3 or 4 are satisfied. In using dynamic programming there is a wide choice one could make for the penalty function, since the function does not have to be differentiable. In this paper we shall use two types of functions for p, depending on whether equality or inequality constraints are present. When equality constraints as specified by eq 3 are imposed, we choose

p(x(tf)) = Σ θi [xi(tf) − si]²,   summed over i = 1, ..., m   (7)

and when inequality constraints as specified by eq 4 are present, we choose

pi(x(tf)) = θi [xi(tf) − si]²  if xi(tf) > si,  and  pi(x(tf)) = 0  otherwise   (8a)

and

p(x(tf)) = Σ pi(x(tf)),   summed over i = 1, ..., m   (8b)
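The two penalty forms translate directly into code. The following is a minimal sketch, assuming the constrained final states, the targets si, and the factors θi are held in NumPy arrays; the function names are illustrative, not from the paper.

```python
import numpy as np

def penalty_equality(x_tf, s, theta):
    """Eq 7: quadratic penalty on the constrained final states."""
    return float(np.sum(theta * (np.asarray(x_tf) - s) ** 2))

def penalty_inequality(x_tf, s, theta):
    """Eqs 8a-8b: each term contributes only when x_i(tf) exceeds s_i."""
    viol = np.maximum(np.asarray(x_tf) - s, 0.0)
    return float(np.sum(theta * viol ** 2))
```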

It is clear that care must be taken in choosing the positive constants θi so that during an iterative procedure a correct answer to the original problem is obtained when the augmented performance index J is minimized. The purpose of this paper is to investigate the use of such a penalty function approach when dynamic programming is used. In particular, one is interested in finding a range for the constants θi that enables a particular optimal control problem to be solved.

Dynamic Programming Algorithm. We shall use essentially the algorithm given by Luus (1990b), with the exception that instead of concentrating on the original performance index I, we are minimizing J. However, the essential features of the iterative procedure using region contraction are the same and can be described in the following steps.
1. Divide the time interval [0, tf] into P time stages, each of length L.
2. Choose the number of x grid points N and the number of allowable values M for each of the control variables ui.

3. Choose an initial region ri for each of the control variables, over which allowable values can be taken.
4. By choosing N values of control evenly distributed inside the allowable region for the control, integrate eq 1 N times with the given initial state from time 0 to tf to generate the N accessible values for the x grid at each time stage.
5. Starting at the last time stage P, corresponding to time tf − L, for each x grid point integrate eq 1 from tf − L to tf once with each of the M^r allowable values for control. For each grid point choose the control u that minimizes J and store the value of control for use in step 6.
6. Step back to stage P − 1, corresponding to time tf − 2L, and integrate eq 1 from tf − 2L to tf − L for each x grid point once with each of the M^r allowable values of control. To continue the integration from tf − L to tf, choose the control from step 5 that corresponds to the grid point closest to the resulting x at tf − L. For each grid point, compare the M^r values of the augmented performance index and store the value of control that gives the minimum value.
7. Repeat this procedure for stage P − 2, P − 3, etc., until stage 1, corresponding to the initial time t = 0, is reached. Store the control policy that minimizes J.
8. Reduce the region for allowable control values by an amount ε; i.e.

r^(j+1) = (1 − ε) r^(j)   (9)

where j is the iteration index. Use the optimal control policy from step 7 as the midpoint for the allowable values for the control u at each stage.
9. Increment the iteration index j by 1 and go to step 4. Continue the procedure for a specified number of iterations, such as 20, and examine the results, recording both I and J.

In order to investigate the effect of using the penalty function, let us consider three typical chemical engineering systems; a sketch of the procedure is given below.
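The following is a minimal sketch of steps 1-9 for a scalar control, assuming the running cost has been folded into an augmented state (as is done in Example 2 below) so that J depends only on the final state. The function names, the fixed-step RK4 integrator, and the default parameter values are illustrative choices, not from the paper.

```python
import numpy as np

def integrate(f, x, u, dt, steps=10):
    """Fixed-step RK4 for dx/dt = f(x, u) over one stage with constant u."""
    h = dt / steps
    for _ in range(steps):
        k1 = f(x, u)
        k2 = f(x + 0.5 * h * k1, u)
        k3 = f(x + 0.5 * h * k2, u)
        k4 = f(x + h * k3, u)
        x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

def idp(f, terminal_J, x0, tf, P=10, N=21, M=9, r0=1.0, u0=0.0, eps=0.25, iters=20):
    """Iterative dynamic programming with accessible grid points (steps 1-9)."""
    L = tf / P                                  # step 1: P stages of length L
    u_nom = np.full(P, float(u0))               # nominal (midpoint) control policy
    r = float(r0)                               # step 3: initial region size
    for _ in range(iters):
        # step 4: N control policies evenly spread over the region generate the x grid
        u_grid = u_nom + np.linspace(-r, r, N)[:, None]
        grid = np.empty((P, N, len(x0)))
        for n in range(N):
            x = np.array(x0, dtype=float)
            for k in range(P):
                grid[k, n] = x                  # state at the start of stage k
                x = integrate(f, x, u_grid[n, k], L)
        # steps 5-7: backward pass; at each grid point try M candidate controls and
        # continue to tf with the stored control of the nearest downstream grid point
        policy = np.empty((P, N))
        for k in range(P - 1, -1, -1):
            cands = u_nom[k] + np.linspace(-r, r, M)
            for n in range(N):
                best = (np.inf, cands[0])
                for u in cands:
                    x = integrate(f, grid[k, n], u, L)
                    for kk in range(k + 1, P):
                        nearest = int(np.argmin(np.linalg.norm(grid[kk] - x, axis=1)))
                        x = integrate(f, x, policy[kk, nearest], L)
                    best = min(best, (float(terminal_J(x)), float(u)))
                policy[k, n] = best[1]
        # trace forward from x0 to extract the new nominal policy
        x = np.array(x0, dtype=float)
        for k in range(P):
            nearest = int(np.argmin(np.linalg.norm(grid[k] - x, axis=1)))
            u_nom[k] = policy[k, nearest]
            x = integrate(f, x, u_nom[k], L)
        r *= 1.0 - eps                          # step 8 (eq 9): contract the region
    return u_nom, float(terminal_J(x))
```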

Example 1: Nuclear Reactor System. Recently Van Dooren (1989) applied a Chebyshev technique to obtain the optimal control of a nuclear reactor system described by Sage and White (1977). The system is described by two differential equations

dx1/dt = 1000(u − 0.0064)x1 + 0.1x2   (10)
dx2/dt = 6.4x1 − 0.1x2   (11)

Table I. Value of Performance Index after 20 Iterations as a Function of the Penalty Function Factor (θ) and the Number of Allowable Values for the Control (M); I × 10⁵

M     θ = 0.001   θ = 0.01   θ = 0.10   θ = 1.0   θ = 10.0
3     1.976       1.988      2.425      2.487     2.666
5     1.777       1.789      1.890      2.210     2.436
7     1.774       1.775      1.892      2.134     2.173
9     1.775       1.774      1.861      2.010     2.145
11    1.774       1.774      1.777      1.784     2.031
13    1.774       1.775      1.789      1.792     1.819
15    1.774       1.775      1.774      1.852     2.235
17    1.774       1.775      1.776      1.780     1.870
19    1.774       1.774      1.776      1.786     1.822
21    1.774       1.774      1.775      1.777     1.800

Table II. Optimal State and Control Obtained with θ = 0.01, Using Nine Allowable Levels for Control; I = 1.7739 × 10⁻⁵

time   x1       x2      10²u
0.0    0.5000   32.00   0.0491
0.1    0.5185   32.01   0.0786
0.2    0.5408   32.03   0.1184
0.3    0.5707   32.06   0.1804
0.4    0.6177   32.12   0.2891
0.5    0.7062   32.23   0.4557
0.6    0.8825   32.41   0.6310
0.7    1.1987   32.75   0.8148
0.8    1.7886   33.37   0.9669
0.9    2.8806   34.51   1.0974
1.0    5.0000   36.62

with the initial state at x(0) = (0.5, 32)ᵀ and the final-state constraint at tf = 1.0

x1(1) = 5.0   (12)

The performance index to be minimized by choosing a positive control u in the time interval 0 ≤ t < 1 is

I[x(0), 1] = 0.5 ∫₀¹ u² dt   (13)

We shall choose the augmented performance index

J = 0.5 ∫₀¹ u² dt + θ(x1(1) − 5.0)²   (14)

and the problem is now in suitable form for the dynamic programming algorithm; a possible setup is sketched below.
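As an illustration, Example 1 could be set up for the idp sketch above as follows: a third state accumulates the integrand of eq 13, and eq 14 is evaluated at the final state. The starting policy, region size, and θ value used here are assumed for illustration.

```python
import numpy as np

def f_reactor(x, u):
    # eqs 10 and 11, with a third state accumulating the running cost of eq 13
    return np.array([1000.0 * (u - 0.0064) * x[0] + 0.1 * x[1],
                     6.4 * x[0] - 0.1 * x[1],
                     0.5 * u ** 2])

theta = 0.01
def terminal_J(x):
    return x[2] + theta * (x[0] - 5.0) ** 2   # eq 14 evaluated at tf = 1

# u_opt, J = idp(f_reactor, terminal_J, x0=[0.5, 32.0, 0.0], tf=1.0,
#                P=10, N=21, M=9, r0=0.005, u0=0.005, eps=0.25, iters=20)
```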

By choosing 10 stages, a reduction factor (1 − ε) of 0.75, N = 21, and allowing 20 iterations, the convergence to 1.774 × 10⁻⁵ was rapid when θ was taken to be 0.01 or less, as is shown in Table I. In order to get convergence for larger values of θ, the number of allowable values for control M had to be increased. With θ = 10, convergence to the optimum could not be obtained even if 21 values for the control were allowed. The optimal control policy and the optimal state trajectory obtained with θ = 0.01 are given in Table II. It is noted that at the final time x1 is exactly at 5.0, as required. The performance index obtained with 10 stages compares very favorably with 1.767 × 10⁻⁵ obtained by Van Dooren (1989). When θ was taken to be 0.001, the convergence was more rapid and smaller values of M could be used. As shown in Table III, the final value for x1 is 4.9988 and the performance index is marginally improved. The use of a larger number of grid points does not alleviate the convergence problem for large values of the penalty function factor, as is shown in Table IV, where 41 grid points rather than 21 were used.

Example 2: Optimal Control of a CSTR. Let us next consider a continuous stirred tank reactor (CSTR) modeled by Aris and Amundson (1958) and used for control studies by Lapidus and Luus (1967).

Table III. Optimal State and Control Obtained with θ = 0.001, Using Seven Allowable Levels for Control; I = 1.7735 × 10⁻⁵

time   x1       x2      10²u
0.0    0.5000   32.00   0.0465
0.1    0.5187   32.01   0.0796
0.2    0.5395   32.03   0.1215
0.3    0.5719   32.06   0.1876
0.4    0.6219   32.12   0.3005
0.5    0.7166   32.23   0.4570
0.6    0.8921   32.43   0.6356
0.7    1.2103   32.77   0.8119
0.8    1.7979   33.40   0.9680
0.9    2.9008   34.55   1.0905
1.0    4.9988   36.67

Table IV. Value of Performance Index after 20 Iterations as a Function of the Penalty Function Factor (θ) and the Number of Allowable Values for Control (M) Using 41 Grid Points for x; I × 10⁵

M     θ = 0.001   θ = 0.01   θ = 0.10   θ = 1.0   θ = 10.0
3     1.972       1.994      2.219      2.409     2.418
5     1.775       1.786      1.938      1.954     2.071
7     1.774       1.783      1.874      1.846     2.040
9     1.774       1.774      1.784      1.795     1.862
11    1.774       1.774      1.796      1.818     1.824
13    1.774       1.774      1.780      1.779     2.014
15    1.774       1.774      1.774      2.033     2.067

Table V. Value of Performance Index after 20 Iterations as a Function of the Penalty Function Factor (θ) and the Number of Allowable Values for Control (M)

M     θ = 0.01    θ = 1.0    θ = 100
3     0.13416     0.13610    0.14651
5     0.13416     0.13610    0.14605
7     0.13416     0.13610    0.14511
9     0.13416     0.13610    0.14495
11    0.13416     0.13610    0.14493
13    0.13416     0.13610    0.14490

The system is described by the equations

dx1/dt = −(x1 + 0.25) + (x2 + 0.5) exp[25x1/(x1 + 2)] − (1 + u)(x1 + 0.25)   (15)
dx2/dt = 0.5 − x2 − (x2 + 0.5) exp[25x1/(x1 + 2)]   (16)

with the given initial state x(0) = (0.09, 0.09)ᵀ. Here x1 represents the deviation from the dimensionless steady-state temperature and x2 represents the deviation from the dimensionless concentration. The optimal control problem is to determine the unconstrained control policy u(t) in the time interval 0 ≤ t < tf that will minimize the performance index

I = ∫₀^tf (x1² + x2² + 0.1u²) dt   (17)

with the intent of taking the system to the origin. To evaluate the performance index conveniently and without introducing any unnecessary approximation, we introduce a new state variable x3 by

dx3/dt = x1² + x2² + 0.1u²   (18)

with the initial condition x3(0) = 0, and integrate this equation along with the other two equations to yield the performance index x3(tf). Thus

I = x3(tf)   (19)

where the final time tf is set at 0.78. To bring the system to the vicinity of the origin, we introduce the augmented performance index

J = x3(tf) + θ[x1²(tf) + x2²(tf)]   (20)

A different factor for each term in the penalty function could have been used, but for simplicity we take each of them to be θ. This problem is interesting, since in the absence of final-state constraints there is a multiplicity of solutions when Pontryagin's maximum principle is used (Luus and Cormack, 1972) and it is possible to obtain the local optimum even with dynamic programming (Luus and Galli, 1991).
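A hypothetical setup of this example in the same style as Example 1, with x3 accumulating the integrand of eq 17 per eq 18 and eq 20 evaluated at tf = 0.78; the names are illustrative.

```python
import numpy as np

def f_cstr(x, u):
    e = np.exp(25.0 * x[0] / (x[0] + 2.0))
    return np.array([-(x[0] + 0.25) + (x[1] + 0.5) * e - (1.0 + u) * (x[0] + 0.25),  # eq 15
                     0.5 - x[1] - (x[1] + 0.5) * e,                                   # eq 16
                     x[0] ** 2 + x[1] ** 2 + 0.1 * u ** 2])                           # eq 18

theta = 100.0
def terminal_J_cstr(x):
    return x[2] + theta * (x[0] ** 2 + x[1] ** 2)   # eq 20 at tf = 0.78
```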

[Figure 1. State trajectories for the CSTR (abscissa: dimensionless temperature; ordinate: dimensionless concentration). A: θ = 0.01, I = 0.13416. B: θ = 1.00, I = 0.13610. C: θ = 100, I = 0.14490.]

Since the origin is unstable, the use of the performance index in eq 17 or 19 without the penalty function does not take the system very close to the origin. To solve this problem by dynamic programming, we used a two-pass method, with 10 stages for the first pass and 20 stages for the second pass, and 20 iterations in each pass. The optimal policy from the first pass served as a starting condition for the second pass. As before, we used a contraction factor of 0.75 and N = 21 grid points. For the initial control policy we chose u⁽⁰⁾ = 2.0, and we chose the initial region size to be 1.0. When the penalty function factor θ was 0.01 or 1.0, the algorithm yielded very rapid convergence even when only three allowable values were used for the control. When θ = 100 was used, the convergence was slower and, to obtain the proper minimum, nine or more allowable values had to be used for control, as is shown in Table V. Figure 1 shows the dramatic effect that θ has on the trajectory. As θ is increased from 0.01 to 100, the final state is brought much closer to the origin. With θ = 100, at the final time we have x1 = −0.00371 and x2 = −0.00368. The optimal control policy is given in Figure 2, where it is observed that after t = 0.39 the control is zero. It is interesting to note that the performance index is increased by only 8% to bring the system very close to the origin.

[Figure 2. Optimal control policy for the CSTR using θ = 100.]

Example 3: Optimal Operation of a Fed-Batch Reactor.


Recently Park and Ramirez (1988) presented the optimal control policy for the maximization of secreted protein in a fed-batch bioreactor. Luus (1991) found that the profitability is increased substantially when the batch time is increased beyond 15 h. However, the volume increases exponentially. Therefore we consider the problem where the volume is limited to the volume obtained by Park and Ramirez for a batch time of 15 h, namely 14.35 L. The differential equations describing the reactor are

dx1/dt = g1(x4)(x2 − x1) − (u/x5)x1   (21)
dx2/dt = g2(x4)x3 − (u/x5)x2   (22)
dx3/dt = g3(x4)x3 − (u/x5)x3   (23)
dx4/dt = −7.3g3(x4)x3 + (u/x5)(20 − x4)   (24)
dx5/dt = u   (25)

where

g1(x4) = 4.75g3(x4)/[0.12 + g3(x4)]   (26)
g2(x4) = x4 exp(−5x4)/(0.1 + x4)   (27)
g3(x4) = 21.87x4/[(x4 + 0.4)(x4 + 62.5)]   (28)

with the initial condition

x(0) = [0.0 0.0 1.0 5.0 1.0]ᵀ   (29)

The control is bounded by

0 ≤ u ≤ 10   (30)

and the volume x5 is constrained at the batch time tf:

x5(tf) ≤ 14.35   (31)

The performance index to be maximized is the product of the concentration of protein x1 and the culture volume x5 at the final time tf:

I = x1(tf) x5(tf)   (32)

Since the problem is to maximize rather than to minimize a function, we simply make a sign change and consider the minimization of the augmented performance index

J = −x1(tf) x5(tf) + θp(x(tf))   (33)

where p(x(tf)) is the penalty function of eq 8a applied to the volume constraint of eq 31 (a sketch follows at the end of this example).

By use of 20 stages and a reduction factor of 0.75, with this form of the penalty function, rapid convergence to the optimum was obtained with only 3 allowable values for control, regardless of the value of θ that was tried. The initial value for control was 1.0, and an initial region of 2.0 was taken. In each case the volume of the culture x5 was within its upper bound of 14.35 L. Figure 3 shows the rate of convergence for a typical run, taking less than 1 min of CPU time on the Cray X-MP/24 digital computer.

[Figure 3. Convergence to the optimum for the fed-batch reactor with tf = 20 h, θ = 100.]

The results obtained by dynamic programming were cross-checked by using nonlinear programming as described by Rosen and Luus (1989). The optimum results for different batch times are given in Table VI, where M = 3 and θ = 100 were used. As can be seen, the results are very close to those obtained by nonlinear programming. Also, it is observed that the profit function can be increased substantially by increasing the batch time by a relatively small amount.

Table VI. Effect of Increasing the Residence Time tf on the Profitability of the Fed-Batch Reactor

       performance index
tf     dynamic programming (M = 3)   nonlinear programming
15     32.46                         32.37
16     42.61                         42.61
17     52.49                         52.47
18     61.68                         61.68
19     70.31                         70.28
20     78.60                         78.59
21     86.00                         86.00
22     93.10                         93.10
23     100.03                        99.98
24     106.28                        106.27
25     112.46                        112.46

The optimal control policies for batch times of 22 and 25 h are shown in Figures 4 and 5. It is observed that with tf = 22 the control policy is always positive, whereas with tf = 25 the control is zero during one sampling time.

[Figure 4. Optimal control policy for the fed-batch reactor with tf = 22 h.]
[Figure 5. Optimal control policy for the fed-batch reactor with tf = 25 h; x5(tf) = 14.35 L; I = 112.46.]
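A hypothetical setup of eqs 21-33 in the same style as the earlier examples; the quadratic inequality form of eq 8a for the volume bound is an assumption about the exact penalty expression, and the names are illustrative.

```python
import numpy as np

def f_fedbatch(x, u):
    x1, x2, x3, x4, x5 = x
    g3 = 21.87 * x4 / ((x4 + 0.4) * (x4 + 62.5))    # eq 28
    g1 = 4.75 * g3 / (0.12 + g3)                    # eq 26
    g2 = x4 * np.exp(-5.0 * x4) / (0.1 + x4)        # eq 27
    d = u / x5                                      # dilution term u/x5
    return np.array([g1 * (x2 - x1) - d * x1,       # eq 21
                     g2 * x3 - d * x2,              # eq 22
                     g3 * x3 - d * x3,              # eq 23
                     -7.3 * g3 * x3 + d * (20.0 - x4),  # eq 24
                     u])                            # eq 25

def terminal_J_fedbatch(x, theta=100.0):
    # eq 33 with the eq 8a penalty active only when x5(tf) violates eq 31
    p = (x[4] - 14.35) ** 2 if x[4] > 14.35 else 0.0
    return -x[0] * x[4] + theta * p
```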

Conclusions

The use of penalty functions in dynamic programming appears to be an attractive means of handling final-state constraints. The rate of convergence can be affected by the size of the penalty function factors, so a few preliminary runs may be necessary. Since no differentiation is required in using dynamic programming, there is a wide range of penalty functions that may be used. For all the examples tried, there were no computational difficulties, so dynamic programming may be used successfully for a wide variety of optimal control problems. The examples considered here can be easily run on a personal computer.


Acknowledgment

Financial support from the Natural Sciences and Engineering Research Council of Canada Grant A-3515 is gratefully acknowledged. Computations were carried out on the Cray X-MP/24 digital computer at the University of Toronto.



Nomenclature
f = nonlinear function of x and u (n × 1)
gi = nonlinear functions defined in eqs 26-28
i = general index
I = performance index
j = general index; iteration number
J = augmented performance index
L = length of a stage, tf/P
m = number of constrained state variables
M = number of allowable values for each control variable
n = number of state variables
N = number of grid points for the state x
p = penalty function introduced to augment the performance index
pi = penalty function for a particular constraint
P = number of stages
r = number of control variables
r = region over which control is taken (r × 1)
si = final time constraint
t = time
tf = final time
u = control vector (r × 1)
x = state vector (n × 1)
αj = lower bound on control uj
βj = upper bound on control uj
ε = amount of region reduction after each iteration
1 − ε = reduction factor for the region
θ = penalty function factor
θi = penalty function factor for ith constraint
φ = integrand in performance index in eq 5
ψ = a function of final state used in performance index in eq 5

Literature Cited

Aris, R.; Amundson, N. R. An Analysis of Chemical Reactor Stability and Control. Chem. Eng. Sci. 1958, 7, 121-131, 132-147.
Bellman, R. E.; Dreyfus, S. E. Applied Dynamic Programming; Princeton University Press: Princeton, NJ, 1962; pp 3-336.
De Tremblay, M.; Luus, R. Optimization of Non-Steady State Operation of Reactors. Can. J. Chem. Eng. 1989, 67, 494-502.
Edgar, T. F.; Himmelblau, D. M. Optimization of Chemical Processes; McGraw-Hill: New York, 1988.
Lapidus, L.; Luus, R. Optimal Control of Engineering Processes; Blaisdell: Waltham, MA, 1967.
Luus, R. Optimal Control by Dynamic Programming Using Accessible Grid Points and Region Contraction. Hung. J. Ind. Chem. 1989, 17, 523-543.



Luus, R. Optimal Control by Dynamic Programming Using Systematic Reduction in Grid Size. Int. J. Control 1990a, 51, 995-1013.
Luus, R. Application of Dynamic Programming to High-Dimensional Non-Linear Optimal Control Problems. Int. J. Control 1990b, 52, 239-250.

Luus, R. Application of Dynamic Programming to Singular Optimal Control Problems. Proceedings of the 1990 American Control Conference; Automatic Control Council, IEEE Service Center: Piscataway, NJ, 1990c; pp 2932-2937.
Luus, R. Effect of the Choice of Final Time in Optimal Control of Nonlinear Systems. Can. J. Chem. Eng. 1991, 69, 144-151.
Luus, R.; Cormack, D. E. Multiplicity of Solutions Resulting from the Use of Variational Methods in Optimal Control Problems. Can. J. Chem. Eng. 1972, 50, 309-311.
Luus, R.; Galli, M. Multiplicity of Solutions in Using Dynamic Programming for Optimal Control. Hung. J. Ind. Chem. 1991, in press.
Luus, R.; Jaakola, T. H. I. Optimization by Direct Search and Systematic Reduction of the Size of Search Region. AIChE J. 1973, 19, 760-766.

Luus, R.; Smith, S. G. Application of Dynamic Programming to High-Dimensional Systems Described by Difference Equations. Chem. Eng. Technol. 1991, in press.


Park, S.; Ramirez, W. F. Optimal Production of Secreted Protein in Fed-Batch Reactors. AIChE J. 1988, 34, 1550-1558.
Rosen, O.; Luus, R. Sensitivity of Optimal Control to Final State Specification by a Combined Continuation and Nonlinear Programming Approach. Chem. Eng. Sci. 1989, 44, 2527-2534.
Rosenbrock, H. H.; Storey, C. Computational Techniques for Chemical Engineers; Pergamon Press: New York, 1966; pp 241-277.

Sage, A. P.; White, C. C. Optimum Systems Control; Prentice-Hall: Englewood Cliffs, NJ, 1977.
Van Dooren, R. A Chebyshev Technique Applied to a Controlled Nuclear Reactor System. Optim. Control Appl. Methods 1989, 10, 285-291.

Received for review September 24, 1990. Accepted March 19, 1991.

An Improved Autotune Identification Method

Wei Li, Esref Eskinat, and William L. Luyben*

Department of Chemical Engineering, Lehigh University, 111 Research Drive, Bethlehem, Pennsylvania 18015

* To whom all correspondence should be addressed.

The method proposed by Luyben (1987) for obtaining simple transfer function models from one autotune test requires that the steady-state gain be known. This limits its usefulness for identification when only plant data are available. This paper presents a modified procedure that does not require knowledge of the steady-state gain. The method uses two autotune tests. The first is a normal one; the second is run with an additional known dead time added so that the phase angle is shifted about 45° and a smaller ultimate frequency is obtained. Then a least-squares method is used to determine the unknown parameters: two time constants and the steady-state gain. The method was demonstrated on several simulated processes and successfully tested on an experimental heat-exchanger process. Autotune testing of both continuous and sampled-data systems was considered.

Introduction

To design a control system for a process, one usually needs a good dynamic process model. But quite often the process is poorly understood or some parameters are unknown, which makes it difficult to develop a model. In some cases, even if knowledge of the system is available, a rigorous model would be too complex and nonlinear. Because almost all control analysis and design techniques require linear process models, simplified linear models are usually required. One solution to these problems is the use of identification techniques to obtain a linear transfer function model suitable for controller design purposes.

Several identification techniques exist, including direct sine-wave testing, step testing, pulse testing, and pseudorandom binary sequence (PRBS) testing. All of these have their limitations when applied to nonlinear processes. Another alternative identification method is autotune testing (Astrom and Hagglund, 1984). Autotune testing uses a relay in the feedback loop to produce a limit cycle. The period of the cycle and the amplitudes of the output and input can be used to quickly determine the ultimate gain and ultimate frequency of the process. It is a closed-loop test, so the process is kept in the linear region. It provides accurate information at a frequency that is important for feedback controller design.

A method for deriving transfer functions of highly nonlinear processes using information obtained from the autotune method was developed by Luyben (1987a). The proposed procedure is as follows: (1) Obtain the ultimate gain and ultimate frequency by using Astrom's autotune method. (2) Obtain the dead time from the initial response of the system to the autotune test. (3) Obtain the steady-state gain from a steady-state model of the process. (4) Calculate the time constants (one or two) of simple transfer functions that fit the data at the zero frequency and ultimate frequency points.

The principal reason for wanting to develop a parametric (transfer function) model is that models of this type are used in most controller design methodologies. This is particularly true for model-based controllers where the process model is used explicitly in the controller structure. Transfer function models (or their equivalent step response models) are vital for the analysis of multivariable processes.

The method was tried on several simulated processes, and the transfer functions derived matched quite well with those obtained from analytically derived linearized models. It has been successfully applied to many simulated and real industrial processes. However, this method requires that the steady-state gain be known. This means that the method works well in an engineering analysis environment, when rigorous nonlinear steady-state and dynamic models are available and steady-state gains are easily obtained. However, when we are trying to determine a transfer function model from only plant data, the steady-state gain is usually not known without performing some additional tests like a step test. And obtaining accurate linear steady-state gains experimentally from plant data is very difficult because of noise and nonlinearity (Luyben, 1987b). This requirement of knowing the steady-state gain limits the usefulness of Luyben's original method in an experimental environment. The purpose of this paper is to present a modified method that does not require knowledge of the steady-state gain.
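To make step 4 of the original procedure concrete, the following is a hedged sketch for the special case of a first-order-plus-dead-time model G(s) = Kp exp(−Ds)/(τs + 1): since the dead time has unit magnitude, forcing |G(jωu)| = Kp/sqrt(1 + (τωu)²) = 1/Ku gives the time constant directly. The function name and the restriction to a single time constant are illustrative assumptions, not the paper's general procedure.

```python
import math

def fopdt_time_constant(Kp, Ku, wu):
    """Fit tau in G(s) = Kp*exp(-D*s)/(tau*s + 1) so that |G(j*wu)| = 1/Ku."""
    return math.sqrt(max((Kp * Ku) ** 2 - 1.0, 0.0)) / wu
```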

Accuracy of the Autotune Method

The autotune test places a relay in the feedback loop. If the process has a phase lag of at least −π radians, the relay feedback will cause the system to oscillate. Measurement of the oscillation will give important information about the process dynamics that can be used to calculate the ultimate gain and the ultimate frequency:

Ku = 4h/(πa)   (1)
ωu = 2π/Pu   (2)

where h = height of the relay, a = amplitude of the principal harmonic of the output, and Pu = period of the limit cycle. Since the relay is a nonlinear device, the ultimate gain and ultimate frequency obtained from an autotune test will not be exact.
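Eqs 1 and 2 translate directly into code; a small sketch with illustrative names:

```python
import math

def ultimate_from_relay(h, a, Pu):
    """h: relay height; a: amplitude of the principal harmonic of the output;
    Pu: period of the limit cycle."""
    Ku = 4.0 * h / (math.pi * a)   # eq 1
    wu = 2.0 * math.pi / Pu        # eq 2
    return Ku, wu
```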