Ind. Eng. Chem. Res. 1996, 35, 3539-3546


Nonlinear Predictive Control of Complex Processes

Edward Katende and Arthur Jutan*

Department of Chemical and Biochemical Engineering, University of Western Ontario, London, Ontario N6A 5B9, Canada

Most predictive control algorithms, including generalized predictive control (GPC) (Clarke et al., 1987), are based on linear dynamics. Many processes are severely nonlinear and would require high-order linear approximations. Another approach, which is presented here, is to extend the basic adaptive GPC algorithm to a nonlinear form. This provides a nonlinear predictive controller which is shown to be very effective in the control of processes with nonlinearities that can be suitably modeled using general Volterra, Hammerstein, and bilinear models. In developing this algorithm, the process dynamics are not restricted to a particular order, as is the case with current nonlinear adaptive algorithms. Simulations are presented using a number of examples, and the steady-state properties are discussed.

1. Introduction

As a group, adaptive controllers have focused primarily on linear process models. From a parameter-update point of view, it is advisable to have the parameters in a linear space. However, from the process dynamics viewpoint, linear dynamics can present a severe limitation for highly nonlinear processes. Adaptive control applications based on linear dynamics are numerous, and the basic theory and applications have been presented in, for example, Harris and Billings (1985) and Warwick (1988).

Multistep long-range predictive controllers have been well received by industry. Early long-range multistep predictive algorithms such as DMC (Cutler and Ramaker, 1980) were based on deterministic impulse or step response models. Later long-range multistep predictive controllers, based on autoregressive moving average (ARMA) models (De Keyser and Van Cauwenberghe, 1985; Peterka, 1984; Mosca et al., 1984), allowed parsimonious parametric model representation. These controllers could be made adaptive if combined with an appropriate recursive parameter estimation routine.
The generalized predictive control (GPC) algorithm (Clarke et al., 1987, parts I and II) is a generalization of these multistep predictive algorithms, as well as the natural long-range extension of the well-known generalized minimum variance (GMV) controller of Clarke and Gawthrop (1979). Garcia and Morari (1982) and Soeterboek (1992) showed that all model-based predictive schemes (DMC, PCA, MAC, IDCOM, GPC, EPSAC, EHAC) are structurally similar. Lee et al. (1992) and Scattolini and Schiavoni (1995) discuss model predictive control of multirate sampled-data systems; Lee and co-workers introduce a state-space approach which can deal with a large, but limited, number of processes. Henson and Seborg (1994) introduced an adaptive nonlinear control strategy for a bench-scale pH neutralization system. Their nonlinear controller design is based on a modified input-output linearization approach which accounts for the implicit output equation in the reaction invariant model. They design a nonlinear output feedback controller by combining the input-output linearizing controller with a reduced-order, open-loop observer. Although their results are impressive, their controller is quite specific.

* Author to whom correspondence is addressed. Tel: 519-679-2111 ext. 8322. Fax: 519-661-3498.


Garcia and co-workers (1991) and Clarke (1994) describe the main advances in model-based predictive control. They indicate that it is emerging as one of the most popular and effective control techniques in the process industries, because many attributes commonly required in practical industrial control design can be incorporated into model-based predictive control. For example, the future reference trajectory, measurable or predictable disturbances, and constraints on manipulated variables can all easily be incorporated in the controller design. Clarke and co-workers (1987) present a GPC with attractive features in that it can be applied to (1) nonminimum-phase processes, (2) open-loop unstable processes or processes with badly damped poles, (3) processes with variable or unknown dead time, (4) processes with unknown order, and (5) multi-input, multi-output (MIMO) processes. The GPC, however, focuses on linear process models. Brengel and Seider (1989) presented a multistep nonlinear predictive controller which is a direct extension of the IMC concept; this method, however, is less accurate in approximating the dynamic model across an entire sampling interval, since it does not involve iterative optimizing methods.

For a large number of systems, a linearizing approach to model nonlinearity is acceptable. However, for some nonlinear systems linearization can result in poor performance. Such highly nonlinear systems include pH control (Henson and Seborg, 1994) in some chemical or biochemical processes, thickness control for a rolling mill, and temperature control for a batch reactor with split-range control (Katende and Jutan, 1993). In order to obtain improved control over these systems, while maintaining the benefits of self-tuning control techniques, it is necessary to take account of the nonlinear dynamics in an explicit manner.
Atherton (1975) and Cook (1986) indicate that, because of the large number of different types of nonlinearity which can occur in practice, extending a basic linear control scheme to account for all possibilities is unrealistic. A sensible way of tackling the general problem is to employ a framework within which a large number of nonlinear processes can be adequately approximated. For self-tuning, such a framework is provided by the Volterra and Hammerstein models (Agarwal and Seborg, 1987). Agarwal and Seborg (1987) proposed two self-tuning control strategies for nonlinear control problems. Their approach is applicable to a broad class of nonlinear

© 1996 American Chemical Society


single-input, single-output systems which can include arbitrary nonlinear functions of the most recent input.

The aim of this paper is to explore adaptive nonlinear predictive control strategies which retain the attractive features of a generalized predictive controller but without the restriction to linear dynamics. This is an advantage over current discrete nonlinear algorithms, which usually restrict the process dynamics to second order. Here a nonlinear predictive control algorithm with a general nonlinear and bilinear model structure (Lachmann, 1982; Svoronos et al., 1981) is used, together with a nonstandard, and arguably more appropriate, cost function. In predictive controllers, minimization of a cost function yields the predictive control law; hence, the choice of the criterion function is of paramount importance. The nonlinear generalized predictive control (NLGPC) algorithm can deal with both linear and nonlinear processes and is developed in the framework of the GPC algorithm. Various linear and nonlinear models are simulated, and the performance of the NLGPC algorithm is compared to that of the standard adaptive GPC. The NLGPC performance is also evaluated using an adiabatic continuous stirred-tank reactor (CSTR).

2. Theoretical Development

2.1. Process Model. We consider nonlinear and bilinear discrete-time process models described by Agarwal and Seborg (1987)

A(q^-1) y(t) = Σ_{i=1}^{n} Bi(q^-1) u^i(t-d-1) + B_{n+1}(q^-1) u(t-d-1) y(t-1) + C(q^-1) ξ(t)   (1)

where t is the sampling instant, t = 1, 2, 3, ...; d is the time delay (the extra delay is due to sample and hold); y(t) is the measured output at time t; u(t) is the manipulated input at time t; ξ(t) is zero-mean random noise independent of previous inputs and outputs; and A, Bi, and C are polynomials given by

A(q^-1) = 1 + a1 q^-1 + ... + a_nA q^-nA   (2)

Bi(q^-1) = b_0i + b_1i q^-1 + ... + b_nBi q^-nBi   (3)

C(q^-1) = 1 + c1 q^-1 + ... + c_nC q^-nC   (4)

with the assumption that all roots of C(q^-1) lie inside the unit circle in the q-plane. The parameters n, nA, nB, and nC are known integers. The following is a general form of the discrete-time Hammerstein model

A(q^-1) y(t) = B(q^-1) f(u(t)) + C(q^-1) e(t)   (5)

where f is a nonlinear function and y is the process output; A and B are polynomials in the backshift operator q^-1. Equation 6 is a special case of the nonlinear model, eq 1, with no bilinear term:

A(q^-1) y(t) = B1(q^-1) u(t) + B2(q^-1) u^2(t) + ... + Bn(q^-1) u^n(t) + C(q^-1) e(t)   (6)

The advantage of the above models is that they are linear in the parameters and thus can be easily estimated using, for example, recursive least squares.

2.2. A General Form of NLGPC. 2.2.1. Modeling and Prediction. Consider a process described by the following nth-order Hammerstein model

A(q^-1) y(t) = q^-(d+1) Σ_{j=1}^{n} Bj(q^-1) u^j(t) + C(q^-1) ξ(t)/∆   (7)

where y is the system output, u is the system input, ξ is the disturbance, A, Bj, and C are polynomials, d is the dead time (one extra delay is due to sample and hold), and ∆ = 1 - q^-1. We denote

Σ_{j=1}^{n} Bj(q^-1) u^j(t) = x(t)   (8)

where x(t) is a "pseudo" input. A bilinear term can be included in the definition of x(t) when using a process model with a bilinear term. We can write eq 7 as

A(q^-1) y(t) = q^-(d+1) x(t) + C(q^-1) ξ(t)/∆   (9)

or

y(t) = (q^-(d+1)/A(q^-1)) x(t) + (C(q^-1)/(∆A(q^-1))) ξ(t)   (10)

We denote the last term as a noise or disturbance N(t), whence

N(t) = (C(q^-1)/(∆A(q^-1))) ξ(t)   (11)

From eq 9, with the pseudoinput x, y is linear in x, and the polynomials A and Bj can be estimated on-line from eq 7 using a recursive least-squares method. C is usually set equal to T (a tuning polynomial), which is known as the observer polynomial. Further, if we let D = ∆A and C = T, the prediction of the stochastic disturbance, N, at t + i is given by

N(t+i) = (T/D) ξ(t+i)   (12)

In order to separate eq 12 into future and past terms, the following Diophantine equation is used

T/D = Ei + q^-i (Fi/D)   (13)

where Ei and Fi are polynomials with degrees

nEi = i - 1;  nFi = max(nT - i, nD - 1)

Thus, eq 12 can be written as

N(t+i) = Ei ξ(t+i) + (Fi/D) ξ(t)   (14)

In eq 14 the first term on the right-hand side consists of future noise only and is thus unknown. The second term can be calculated using the available data at time t. Multiplying eq 10 by Fi/T, rearranging, and using D = ∆A yields

(Fi/D) ξ(t) = (Fi/T)[y(t) - (q^-d/A) x(t-1)]   (15)
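The split in eq 13 is just polynomial long division of T by D. The following sketch (our own helper, not the paper's code) computes Ei and Fi with coefficients stored as plain lists in ascending powers of q^-1:

```python
def diophantine(T, D, i):
    """Split T/D = E_i + q^-i (F_i/D), i.e., T = E_i*D + q^-i * F_i.

    T, D: coefficient lists in ascending powers of q^-1 (constant term first).
    Returns (E, F); E has length i (degree i - 1), as required by eq 13.
    """
    # working remainder, padded so every index touched below exists
    R = list(T) + [0.0] * max(0, i + len(D) - len(T))
    E = []
    for k in range(i):
        e = R[k]                  # coefficient of q^-k not yet cancelled
        E.append(e)
        for j, d in enumerate(D):
            R[k + j] -= e * d     # subtract e * q^-k * D(q^-1)
    return E, R[i:]               # remaining coefficients are q^-i * F_i

# example: T = 1, D = Delta = 1 - q^-1 (integrated noise), horizon i = 2
E, F = diophantine([1.0], [1.0, -1.0], 2)   # E = [1.0, 1.0], F starts with 1.0
```

With D = ∆A and T the observer polynomial, the returned Ei and Fi feed directly into eqs 14 and 15.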

The i-step-ahead predictor for the process output (10) is thus given as

y(t+i) = (q^-d/A) x(t+i-1) + (Fi/T)[y(t) - (q^-d/A) x(t-1)] + Ei ξ(t+i)   (16)

which gives the minimum variance output prediction as

ŷ(t+i) = (q^-d/A) x(t+i-1) + (Fi/T)[y(t) - (q^-d/A) x(t-1)]   (17)

with the prediction error ε(t+i) given by

ε(t+i) = Ei ξ(t+i) = y(t+i) - ŷ(t+i)   (18)

Since this prediction error consists of future noise only, and because ξ(t) is assumed to be white noise with zero mean, the variance of the prediction error is constant. We can rewrite eq 17, after applying the certainty equivalence principle, as

ŷ(t+i) = (q^-d/A) x(t+i-1) + (Fi/T)[y(t) - ŷ(t)]   (19)

with

ŷ(t) = (q^-d/A) x(t-1)   (20)

For the i-step-ahead predictor (19), the first term on the right-hand side has both future (unknown) and past (known) terms. The following Diophantine equation is used to separate eq 19 into future and past terms

1/A = Gi + q^-(i-d) (Hi/A)   (21)

where

degree Gi <= i - d - 1;  degree Hi = nA - 1

Therefore, Gi is given as

Gi = g0 + g1 q^-1 + ... + g_(i-d-1) q^-(i-d-1)   (22)

Using eq 21, the i-step-ahead predictor becomes

ŷ(t+i) = Gi x(t+i-d-1) + (Hi/A) x(t-1) + (Fi/T)[y(t) - ŷ(t)]   (23)

The term Gi x(t+i-d-1) = Gi Σ_{j=1}^{n} Bj u^j(t+i-d-1) contains only current and future controller outputs.

2.2.2. Matrix Notation. The i-step-ahead predictor (23), for i = d + 1, ..., N2, can be written in matrix form as (Soeterboek, 1992)

ŷ = G x + H x̄ + F c   (24)

with

ŷ = [ŷ(t+d+1), ..., ŷ(t+N2)]^T   (25)

x = [x(t), ..., x(t+N2-d-1)]^T   (26)

c = [c(t), c(t-1), ...]^T   (27)

x̄(t) = x(t)/A   (28)

c(t) = [y(t) - ŷ(t)]/T   (29)

The G, H, and F matrices are constructed as follows:

    | g0          0    ...  0  |
G = | g1          g0   ...  0  |   (30)
    | ...         ...  ...  0  |
    | g_(N2-d-1)  ...  ...  g0 |

H = [H_(d+1), ..., Hi, ..., H_N2]^T   (31)

F = [F_(d+1), ..., Fi, ..., F_N2]^T   (32)

This matrix form will be used later to develop the control law.

2.2.3. Predictive Control Law. In order to obtain the control law, the following GPC-style cost function is minimized

J = Σ_{i=N1}^{N2} [P ŷ(t+i) - P(1) w(t+i)]^2 + λ Σ_{i=1}^{Nu} (x(t+i-1))^2   (33)

where P is a monic polynomial,

P(1) = Σ_{j=0}^{np} Pj   (34)

λ is the control weight, and w is the setpoint. Note that the penalty is on the pseudoinput x. This is appropriate if we consider that, for nonlinear processes, the usual input u enters only indirectly, through the function x of u, and it is this "net effect" of u that we wish to penalize. If the cost function J is minimized with respect to the vector x, then any local optimum for x satisfies

∂J/∂x = 0   (35)
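Evaluating this optimality condition requires the predictor matrices of eqs 30-32, whose entries follow from the leading coefficients of the series expansion of 1/A (eqs 21 and 22). A minimal sketch, with helper names of our own choosing:

```python
import numpy as np

def g_coeffs(A, n):
    """First n coefficients of the series expansion of 1/A(q^-1).

    A: [1, a1, a2, ...] (monic, ascending powers of q^-1); these gk are
    exactly the Gi coefficients of eq 22.
    """
    g = []
    for k in range(n):
        gk = 1.0 if k == 0 else 0.0
        for j in range(1, min(k, len(A) - 1) + 1):
            gk -= A[j] * g[k - j]   # long-division recursion for 1/A
        g.append(gk)
    return g

def build_G(g, N):
    """Lower-triangular Toeplitz matrix G of eq 30 (N = N2 - d rows)."""
    G = np.zeros((N, N))
    for r in range(N):
        for c in range(r + 1):
            G[r, c] = g[r - c]
    return G

g = g_coeffs([1.0, -0.5], 4)   # 1/(1 - 0.5 q^-1) -> 1, 0.5, 0.25, 0.125
G = build_G(g, 3)
```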

Since x is a function of u, the optimal u can be calculated from the optimal x. In matrix form we can write the cost function as

J = (ŷ* - w*)^T (ŷ* - w*) + λ x^T x   (36)

where

w* = [P(1) w(t+N1), ..., P(1) w(t+N2)]^T   (37)

ŷ* = [P ŷ(t+N1), ..., P ŷ(t+N2)]^T   (38)

x = [x(t), ..., x(t+Nu-1)]^T   (39)

Now, for eq 36 the gradient with respect to x is

∂J/∂x = 2 (∂ŷ*/∂x)(ŷ* - w*) + 2λx   (40)

Repeating eq 24 gives

ŷ* = G x + H x̄ + F c   (41)

Differentiating, we have

∂ŷ*/∂x = G^T   (42)

Therefore

∂J/∂x = 2 G^T (G x + H x̄ + F c - w*) + 2λx   (43)

or

∂J/∂x = 2 (G^T G + λI) x + 2 G^T (H x̄ + F c - w*)   (44)

Setting eq 44 to zero, we get

x = (G^T G + λI)^-1 G^T (w* - H x̄ - F c)   (45)

Equation 45 gives the optimal x vector, from which, as in GPC, we require only the first element to implement the control law. We obtain the current x value at time t (the first element of x) from

x(t) = v^T w* - h^T x̄ - f^T c   (46)

where

χ^T = e1^T Rv^-1   (47)

v^T = χ^T G^T   (48)

e1^T = [1, 0, ..., 0]   (1 × Nu)   (49)

Rv = G^T G + λI   (Nu × Nu)   (50)

h^T = v^T H   (1 × (nH + 1))   (51)

f^T = v^T F   (1 × (nF + 1))   (52)

Once v, h, and f are obtained, the current optimal value of x can be calculated.

2.2.4. Computation of the Optimal u. From eq 8 we know that, once the optimal pseudoinput x is known, we can compute the process input u from

Σ_{j=1}^{n} Bj u^j = x   (53)

We can write eq 53 as

φ = Σ_{j=1}^{n} b_0j u_opt^j + Σ_{j=1}^{n} (Bj - b_0j) ū^j - x = 0   (54)

where ū involves only past u values. In order to obtain u_opt, Newton's method can be used. The gradient of φ with respect to u_opt is given as

φ′ = ∂φ/∂u_opt = Σ_{j=1}^{n} b_0j ∂(u_opt^j)/∂u_opt   (55)

The optimal control u_opt is obtained by iteration of the following equation:

u_opt(m+1) = u_opt(m) - φ/φ′   (56)

2.2.5. Discussion. A general form of nonlinear generalized predictive control (NLGPC) has been derived for processes modeled by a nonlinear and bilinear model structure, which includes the Hammerstein model structure with a general nonlinear input signal. In order to define how well the predicted process output tracks the setpoint, a cost function is used. Typically, such a cost function is a function of the predicted process output, the setpoint (reference trajectory), and the process output. The simplest cost function that can be used for predictive controller design is

J = Σ_{j=N1}^{N2} [ŷ(t+j) - w(t+j)]^2   (57)

Minimization of J yields a controller which minimizes the tracking error between the predicted process output and the reference trajectory. If there is no model mismatch and no constraints, the process will track the reference trajectory exactly at the sampling instants. If there is a time delay d, the minimum cost horizon N1 is set to d + 1. The optimal value of x over the prediction horizon is obtained analytically by minimization of J with respect to x (see eq 33). The optimal u is obtained numerically from the optimal x. As a result, the future tracking error is minimized with all elements of u penalized. The subsequent numerical search for u can lead to multiple solutions, some of which are clearly inadmissible, such as complex or wrong-sign solutions.
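The two numerical steps of the controller, eq 45 for the optimal pseudoinput sequence and the Newton iteration of eqs 54-56 for the actual input, can be sketched as follows. This is a simplified illustration of our own in which the past-u contributions of eq 54 are assumed to be already folded into the target x:

```python
import numpy as np

def optimal_pseudoinput(G, H, F, w_star, xbar, c, lam):
    # eq 45: x = (G'G + lam I)^-1 G' (w* - H xbar - F c)
    Rv = G.T @ G + lam * np.eye(G.shape[1])
    return np.linalg.solve(Rv, G.T @ (w_star - H @ xbar - F @ c))

def input_from_pseudoinput(b0, x_target, u0=0.0, iters=50, tol=1e-10):
    """Newton iteration (eq 56) on phi(u) = sum_j b0[j-1] u^j - x_target."""
    u = u0
    for _ in range(iters):
        phi = sum(bj * u ** (j + 1) for j, bj in enumerate(b0)) - x_target
        dphi = sum((j + 1) * bj * u ** j for j, bj in enumerate(b0))  # eq 55
        if abs(dphi) < 1e-14:
            break  # flat slope; caller should restart from a different u0
        u_next = u - phi / dphi
        if abs(u_next - u) < tol:
            return u_next
        u = u_next
    return u
```

Because phi(u) = 0 is a polynomial equation, several roots may exist; which one Newton's method finds depends on the starting point u0, so the admissibility check discussed later (rejecting complex or wrong-sign solutions) is still required.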

3. Simulation Studies

3.1. Hammerstein Models. In order to evaluate the usefulness of the new nonlinear predictive control algorithm, simulation studies were carried out for a linear process (model 1), two nonlinear processes (models 2 and 3) based on a Hammerstein model structure, and one nonlinear process with a bilinear term (model 4). These processes were also modeled as approximate linear processes in order to develop control algorithms for the standard GPC for comparison purposes. Starting with first-order polynomials, the A(q^-1) and B(q^-1) polynomial orders were gradually increased until the best approximate CARMA model was selected which suitably matched the dynamics of the Hammerstein model in the given range. These best-fitting CARMA models were used in the GPC design routine.

Process Models Simulated:

Model 1:
(1 + 0.022q^-1 - 0.7991q^-2) y(t) = q^-1(0.0290 + 0.0269q^-1) u(t) + ξ(t)   (58)


Model 2:
(1 + 0.022q^-1 - 0.7991q^-2) y(t) = q^-1(0.029 + 0.0269q^-1) u(t) + q^-1(0.021 + 0.003q^-1) u^2(t) + ξ(t)   (59)

Model 3:
(1 + 0.022q^-1 - 0.7991q^-2) y(t) = q^-1(0.029 + 0.0269q^-1 + 0.0156q^-2) u(t) + q^-1(0.021 + 0.003q^-1 + 0.0023q^-2) u^2(t) + ξ(t)   (60)

Model 4:
(1 + 0.022q^-1 - 0.7991q^-2) y(t) = q^-1(0.029 + 0.0269q^-1) u(t) + q^-1(0.023 + 0.0012q^-1) u^2(t) + q^-1(0.021 + 0.003q^-1) u(t) y(t-1) + ξ(t)   (61)

The process model parameters are summarized in Table 1. For all simulation runs the noise variance was set to 0.001. The parameters of the A, B1, and B2 polynomials were estimated using a standard UDU version of RLS (Bierman, 1977). Persistent excitation was ensured by turning off the adaptation whenever the increment in the input signal became minimal. An exponential forgetting factor was adopted, with values ranging from 0.98 to unity (no forgetting). The parameter estimates were initialized with b_01 and b_02 equal to unity and the rest equal to very small numbers. The diagonal of the initial covariance matrix was set to 10^4. The figures shown below indicate the behavior of the closed-loop system over 200 samples. The setpoint was switched between two levels (one and zero) for all simulation runs. The noise variance was set at 10^-4. The input signal was limited between -100 and 100.

3.1.1. Results. Figure 1 shows the performance of the NLGPC on process model 1. Model 1 has no nonlinear term, since B2 = 0. As expected, the performance of the standard GPC on process model 1, shown in Figure 2, is similar to that of the NLGPC shown in Figure 1. One can easily observe that, for a linear process model, the general form of NLGPC produces a response similar to that of the GPC. However, the NLGPC seems to be more active at the instant of a setpoint change. This may be due to the fact that the control weighting on x includes the effect of the b parameters, whereas if we weight u directly its "effective parameter" is unity.
By contrast, Figures 3 and 4 show a large difference in performance between the NLGPC and the GPC when applied to the nonlinear process. The NLGPC algorithm controls the nonlinear process model 2 very well, as shown in Figure 3. On the other hand, the GPC could not control the nonlinear process model well enough, as shown in Figure 4. Unacceptably large process outputs occurred, or the closed-loop system sometimes went unstable, when the GPC was loosely tuned. Better, though still unsatisfactory, results were obtained when the GPC was heavily penalized. Even after much tuning of the GPC, a persistent offset (due to the second-order input nonlinearity) was experienced which could not be properly eliminated by the integral action. Figure 5 demonstrates the performance of the NLGPC on the nonlinear process model 3, and Figure 6 shows the corresponding performance of the GPC on the same model. It is clear that the GPC was unable to control the nonlinear processes well (models 2 and 3), whereas the NLGPC showed good performance with both linear and nonlinear models.
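The offset seen under GPC is a steady-state gain effect: iterating the difference-equation form of model 2 (eq 59, noise-free) shows how the effective gain changes with the operating point. The helpers below are our own illustration, not the paper's simulation code:

```python
def model2_step(y1, y2, u1, u2):
    # eq 59, noise-free: (1 + 0.022 q^-1 - 0.7991 q^-2) y(t)
    #   = q^-1 (0.029 + 0.0269 q^-1) u(t) + q^-1 (0.021 + 0.003 q^-1) u^2(t)
    return (-0.022 * y1 + 0.7991 * y2
            + 0.029 * u1 + 0.0269 * u2
            + 0.021 * u1 ** 2 + 0.003 * u2 ** 2)

def step_response_ss(u, n=2000):
    """Settled output for a constant input u."""
    y1 = y2 = 0.0
    for _ in range(n):
        y1, y2 = model2_step(y1, y2, u, u), y1
    return y1

# analytic steady state at u = 1: y = (B1(1) u + B2(1) u^2) / A(1)
y_ss = (0.0559 * 1.0 + 0.024 * 1.0 ** 2) / 0.2229
```

The per-unit gain rises from about 0.36 at u = 1 to about 0.47 at u = 2, so any single CARMA approximation is biased away from its fitting point, which is consistent with the persistent offset noted above.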

Table 1. Summary of Hammerstein Model Parameters

Model 1 (Linear)
A = 1 + 0.0220q^-1 - 0.7991q^-2
B1 = 0.0290 + 0.0269q^-1
B2 = 0
C = 1
parameter settings: N1 = 2, N2 = 3, Nu = 1, P = 1, T = 1, λ = 0.0001

Model 2 (Nonlinear, 1st Order in Bi)
A = 1 + 0.0220q^-1 - 0.7991q^-2
B1 = 0.0290 + 0.0269q^-1
B2 = 0.0210 + 0.0030q^-1
C = 1
parameter settings: N1 = 2, N2 = 3, Nu = 1, P = 1, T = 1, λ = 0.0001

Model 3 (Nonlinear, 2nd Order in Bi)
A = 1 + 0.0220q^-1 - 0.7991q^-2
B1 = 0.0290 + 0.0269q^-1 + 0.0156q^-2
B2 = 0.0210 + 0.0030q^-1 + 0.0023q^-2
C = 1
parameter settings: N1 = 2, N2 = 3, Nu = 1, P = 1, T = 1, λ = 0.0001
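The estimates behind these models come from recursive least squares; the paper uses Bierman's UDU factorization for numerical robustness, but the underlying recursion with exponential forgetting is the standard one sketched here:

```python
import numpy as np

def rls_update(theta, P, phi, y, forget=0.98):
    """One recursive least-squares step with exponential forgetting.

    phi is the regressor; for model 2 it would be
    [-y(t-1), -y(t-2), u(t-1), u(t-2), u^2(t-1), u^2(t-2)],
    which is linear in the parameters even though the process is not.
    """
    err = y - phi @ theta                       # a priori prediction error
    gain = P @ phi / (forget + phi @ P @ phi)   # update gain
    theta = theta + gain * err
    P = (P - np.outer(gain, phi @ P)) / forget  # covariance update
    return theta, P
```

Freezing the update when the input increment is small (as done in the paper to preserve persistent excitation) simply means skipping calls to rls_update over those samples.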

Figure 1. Adaptive NLGPC on linear process model 1.

Figure 2. Adaptive GPC on linear process model 1.

The bilinear model (model 4) was simulated under the NLGPC control (Figure 7). It can be observed that good control was obtained, thus showing the extended ability of the NLGPC algorithm to control bilinear systems as well. In these examples the NLGPC outperformed the GPC for the nonlinear processes. Note that, for a linear CARMA model, the NLGPC defaults to the structure of the GPC, but the control weighting in the objective function is different. An analysis of the performance of the NLGPC is provided below.

3544 Ind. Eng. Chem. Res., Vol. 35, No. 10, 1996

Figure 6. Adaptive GPC on nonlinear process model 3.

Figure 3. Adaptive NLGPC on nonlinear process model 2.

Figure 7. Adaptive NLGPC on bilinear process model 4.

Figure 4. Adaptive GPC on nonlinear process model 2.

Figure 5. Adaptive NLGPC on nonlinear process model 3.

3.1.2. Performance of NLGPC at Steady State. Prediction Error. The i-step-ahead predictor for the process output is

y(t+i) = (q^-(d+1)/A(q^-1)) x(t+i) + (C(q^-1)/(∆A(q^-1))) ξ(t+i)   (62)

Multiplying eq 62 by Fi/T and rearranging yield

(Fi/(∆A)) ξ(t+i) = (Fi/T)[y(t+i) - (q^-d/A) x(t+i-1)]   (63)

Solving eq 63 for ξ(t+i) and multiplying by Ei, we obtain the prediction error

Ei ξ(t+i) = (∆AEi/T)[y(t+i) - (q^-d/A) x(t+i-1)]   (64)

Allowing for parameter estimates of A and Bi (denoted by carets) and substituting for y(t+i) with eq 62, we get

Ei ξ(t+i) = (∆AEi/T)[(q^-d/A) x(t+i-1) - (q^-d̂/Â) x̂(t+i-1) + N(t+i)]   (65)

as the prediction error.

Prediction Error Elimination. Soeterboek (1992) indicates that, for a stable system driven by a signal v(t) that satisfies fξ(q^-1) v(t) = 0 as t tends to infinity, all signals vi(t) in the system satisfy fξ(q^-1) vi(t) = 0 (where fξ is a stable polynomial). Soeterboek considered only linear cases; we can extend the same argument to our system. Let the disturbance, as t tends to infinity, satisfy

fξ N(t) = 0   (66)

where fξ is a polynomial. For a stable closed-loop system, as t tends to infinity, we have

fξ y(t) = 0   (67)

If fξ is a factor of ∆A, then the prediction error at steady state is equal to zero, since the prediction error as given by eq 65 is

Ei ξ(t+i) = (∆AEi/T)[(q^-d/A) x(t+i-1) - (q^-d̂/Â) x̂(t+i-1) + N(t+i)]   (68)

Furthermore, Soeterboek (1992) indicates that, if a stable system is driven by a disturbance N(t) and a reference trajectory w(t) which in steady state satisfy fξ N(t) = 0 and fw w(t) = 0, where fw is a polynomial, then y(t) and u(t) (for a linear system) at steady state satisfy

fm y(t) = 0;  fm u(t) = 0   (69)

where fm = min{fξ, fw} (i.e., fm is the polynomial with the smallest degree for which fm w(t) = 0 and fm N(t) = 0). For our system, assuming that the system is stable and driven by the disturbance N(t) and reference trajectory w(t), at steady state we have fξ N(t) = 0 and fw w(t) = 0. At steady state we can therefore write

fm y(t) = 0;  fm x(t) = 0   (70)

where fm = min{fξ, fw}. If fm is a factor of ∆A, then for a stable closed-loop system steady-state errors do not occur, irrespective of modeling errors. We also know that

N(t) = (C(q^-1)/(∆A(q^-1))) ξ(t)   (71)

Let C = 1; then

N(t) = ξ(t)/(∆A(q^-1))   (72)

where ξ(t) is white noise with zero mean. From eqs 66 and 71 we can say that fξ is a factor of ∆A, and from eqs 67 and 70 we can say that fm is a factor of ∆A, hence confirming that the NLGPC has no steady-state errors irrespective of any modeling errors.

3.1.3. Performance of GPC at Steady State. For the GPC, the prediction error is given as

Ei ξ(t+i) = (∆AEi/T)[y(t+i) - (q^-d B/A) u(t+i-1)]   (73)

Since the parameters A and B are estimated, and since we simulated nonlinear Hammerstein process models, we substitute for y(t+i) and thus get

Ei ξ(t+i) = (∆AEi/T)[(q^-d B1/A) u(t+i-1) + (q^-d B2/A) u^2(t+i-1) + ... - (q^-d̂ B̂/Â) u(t+i-1) + N(t+i)]   (74)

as the prediction error. Using the same argument as for the NLGPC above, if fξ is a factor of ∆A, the GPC will eliminate the steady-state prediction error if and only if the process is linear. In our case, however, the nonlinear Hammerstein process model leads to nonlinear u terms in the prediction error equation, and fξ being a factor of ∆A then does not guarantee elimination of the prediction error.

3.2. CSTR Temperature Control. An adiabatic continuous stirred-tank reactor (CSTR) with a zero-order, exothermic reaction and no external cooling (Agarwal and Seborg, 1987) was simulated in order to evaluate the performance of the NLGPC algorithm on a "real" process. The process data are given in Table 2.

Table 2. CSTR Data

variable   value
V          231.5 L
M          166.7 kg
Fi         feed flow (manipulated variable)
Cp         0.85 kcal kg^-1 K^-1
-∆H        785 kcal kg^-1
k0         2.265 × 10^9 kg L^-1 h^-1
∆E         20 456 kcal (kg mol)^-1
Ti         inlet temperature (unmeasured load)
To         outlet temperature (controlled variable)
R          1.9872 cal (g mol)^-1 K^-1

Agarwal and Seborg write the unsteady-state energy balance for the reactor as

M Cp dTo/dt′ = Fi Cp (Ti - To) + k0 V (-∆H) exp(-∆E/(R To))   (75)

where t′ is continuous time in hours; R is the universal gas constant; V is the holdup volume; M is the mass holdup of liquid; Fi is the feed flow rate; Cp is the specific heat of the liquid; -∆H is the heat of reaction; k0 is the reaction rate constant; ∆E is the activation energy; Ti is the feed temperature; and To is the outlet temperature. As in Agarwal and Seborg (1987), the outlet temperature To and the flow rate Fi were considered to be the output and manipulated variables, respectively. The feed temperature Ti is the unmeasured load variable. In their paper, Agarwal and Seborg give the steady-state process conditions and gains when the load is set at Ti = 456 K. Here Fi passes through a minimum at To = 478.2 K, and at this value of To the process gain is discontinuous. Reducing Fi to below this minimum leads to a runaway condition. The NLGPC was based on a simple Hammerstein model

y(t) + a1 y(t-1) = b1 u(t-1) + b2 u^2(t-1)

that is, A = 1 + a1 q^-1, B1 = b1, and B2 = b2. The tuning parameters were set as N1 = 2, N2 = 3, P = 1, T = 1, Nu = 1, and λ = 0.0001. The parameters a1, b1, and b2 were initialized to small numbers and estimated using a recursive least-squares method as described earlier. Persistent excitation was ensured by shutting off the estimation whenever the increment in the input signal became minimal. Fi was constrained between 10 000 and 50 000 kg/h, and the setpoint was switched between 473 and 463 K every 50 sampling instants. The process variables were set according to Table 2. The process was simulated using fourth-order Runge-Kutta numerical integration. Figure 8 shows the results of a closed-loop run under the NLGPC when the load, Ti, is set to 456 K. At each optimal control calculation two values of u are obtained, and the correct one to implement was selected by a simple logic check. A "check routine" was included in the simulation program to reject any values


Figure 8. Temperature control of CSTR using NLGPC.

which do not give outlet temperature values close enough to the setpoint, as well as any negative or imaginary values. From Figure 8 it can be seen that the NLGPC controls the CSTR temperature very well while using a simple Hammerstein model for parameter estimation.

4. Conclusion

A nonlinear generalized predictive control algorithm has been developed based on a class of nonlinear dynamics that can be suitably modeled using Volterra and Hammerstein structures. Nonlinear generalized predictive control (NLGPC) showed an advantage over generalized predictive control (GPC) in that the NLGPC controlled processes with high nonlinearity in the input signal very well. A number of simulation runs were used to demonstrate the performance of the NLGPC and to compare it with the GPC. The NLGPC also performed very well when used to control an adiabatic continuous stirred-tank reactor (CSTR).

Literature Cited

Agarwal, M.; Seborg, D. E. Self-tuning Controllers for Non-linear Systems. Automatica 1987, 23 (2), 204-214.

Atherton, D. P. Non-linear Control Engineering; Van Nostrand Reinhold: New York, 1975.

Bierman, G. J. Factorization Methods for Discrete Sequential Estimation; Academic Press: New York, 1977.

Brengel, D. D.; Seider, W. D. Multistep Nonlinear Predictive Controller. Ind. Eng. Chem. Res. 1989, 28, 1812-1822.

Clarke, D. W. Advances in Model-Based Predictive Control. In Advances in Model-Based Predictive Control; Clarke, D. W., Ed.; Oxford University Press: Oxford, U.K., 1994.

Clarke, D. W.; Gawthrop, P. J. Self-tuning Control. IEE Proc. 1979, 126, 633.

Clarke, D. W.; Mohtadi, C.; Tuffs, P. S. Generalized Predictive Control, Parts I and II. Automatica 1987, 23, 137-160.

Cook, P. Non-linear Dynamic Systems; Prentice Hall: Englewood Cliffs, NJ, 1986.

Cutler, C. R.; Ramaker, B. L. Dynamic Matrix Control: A Computer Control Algorithm. JACC, San Francisco, 1980.

De Keyser, R. M. C.; Van Cauwenberghe, A. R. Extended Prediction Self-Adaptive Control. IFAC Symp. Identif. Syst. Parameter Estim. 1985.

Garcia, C.; Morari, M. Internal Model Control. 1. A Unifying Review and Some New Results. Ind. Eng. Chem. Process Des. Dev. 1982, 21, 308.

Garcia, C. E.; Prett, D. M.; Morari, M. Model Predictive Control: Theory and Practice. A Survey. Automatica 1991, 25, 335-348.

Harris, C., Billings, S., Eds. Self-tuning Control: Theory and Applications; Peter Peregrinus Ltd.: Hitchin, U.K., 1985.

Henson, M. A.; Seborg, D. E. Adaptive Nonlinear Control of a pH Neutralization Process. IEEE Trans. Control Syst. Technol. 1994, 2 (3), 169-182.

Katende, E.; Jutan, A. A New Constrained Self-tuning PID Controller. Can. J. Chem. Eng. 1993, 71, 625.

Lachmann, K. H. Parameter-Adaptive Control of a Class of Nonlinear Processes. 6th IFAC Symp. Identif. Syst. Parameter Estim. 1982, 372-378.

Lee, J. H.; Gelormino, M. S.; Morari, M. Model Predictive Control of Multirate Sampled-Data Systems: A State Space Approach. Int. J. Control 1992, 55, 153-191.

Mosca, E.; Zappa, G.; Manfredi, C. Multistep Horizon Self-tuning Controllers: the MUSMAR Approach. IFAC 9th World Congress, Budapest, Hungary, 1984.

Peterka, V. Predictor-Based Self-tuning Control. Automatica 1984, 20, 39-50.

Scattolini, R.; Schiavoni, N. A Multi-rate Model-Based Predictive Controller. IEEE Trans. Autom. Control 1995, 40, 1093-1097.

Soeterboek, R. Predictive Control: A Unified Approach; Prentice Hall: London, 1992.

Svoronos, S.; Stephanopoulos, G.; Aris, R. On Bilinear Estimation and Control. Int. J. Control 1981, 34, 651-684.

Warwick, K., Ed. Implementation of Self-tuning Controllers; Peter Peregrinus Ltd.: Hitchin, U.K., 1988.

Received for review December 4, 1995
Revised manuscript received April 29, 1996
Accepted May 12, 1996 (X)

IE9507282

(X) Abstract published in Advance ACS Abstracts, August 15, 1996.