Ind. Eng. Chem. Res. 1998, 37, 2721-2728


Quadratic Nonlinear Predictive Control

Edward Katende
Honeywell Hi-Spec Solutions, 343 Dundas Street, London, Ontario, N6B 1V5 Canada

Arthur Jutan*
Department of Chemical and Biochemical Engineering, University of Western Ontario, London, Ontario, N6A 5B9 Canada

Rob Corless
Department of Applied Mathematics, University of Western Ontario, London, Ontario, N6A 5B9 Canada

* To whom correspondence should be addressed. E-mail: [email protected].

Generalized predictive control (GPC) has been well-accepted in the chemical process industries because of its adaptive nature and ease of tuning. Nevertheless, the GPC structure is based on linear models, which can be a limitation for the highly nonlinear processes often found in the chemical industry. A nonlinear generalized predictive control (NLGPC) algorithm was developed by Katende and Jutan (IEEE ACC Proceedings, Session FP10, Seattle, WA, 1995; Ind. Eng. Chem. Res. 1996, 35, 3539) and compared favorably to the standard GPC; however, the computational burden could be excessive in some cases. In this paper two quadratic forms of the NLGPC are developed on the basis of a second-order Hammerstein model structure and different criterion functions. These second-order models allow for simple on-line implementation and lay the basis for analytical evaluation of the algorithms. Simulations of sample problems are carried out, and the effect of each criterion function used in the nonlinear controller design is discussed.

1. Introduction

Although adaptive control of nonlinear processes has been studied in the past, few researchers have considered modifying the criterion function used in controller design to improve performance. In predictive control, minimization of a criterion function yields the predictive control law; therefore, the choice of the criterion function is of great importance. In the past, modification of the criterion function has been avoided by focusing primarily on linear processes and quadratic objective functions.

A number of long-range, multistep predictive control algorithms are given in the literature, such as dynamic matrix control (DMC) (Cutler and Ramaker, 1980) and those based on autoregressive moving average (ARMA) models (De Keyser and Van Cauwenberghe, 1985; Peterka, 1984; Mosca et al., 1984). These controllers can be made adaptive if combined with an appropriate recursive parameter estimation routine. Clarke and co-workers (1987) developed a generalized predictive control (GPC) algorithm which is a generalization of these multistep predictive algorithms, as well as being the natural long-range extension of the well-known generalized minimum variance (GMV) controller of Clarke and Gawthrop (1979). Garcia and Morari (1982) and Soeterboek (1992) showed that all model-based predictive schemes (DMC, PCA, MAC, IDCOM, GPC, EPSAC, and EHAC) are structurally similar. These control algorithms are based on linear process models.

Lee et al. (1992) and Scattolini and Schiavoni (1995) discuss model predictive control of multirate sampled-data systems. Lee and co-workers introduce a state space approach which can deal with a large, but limited, number of processes. Garcia and co-workers (1991) and Clarke (1994) describe in detail the advances in model-based predictive control. They indicate that model-based predictive control is emerging as one of the most popular and effective control techniques in the process industries. This is because many attributes fundamental to any practical industrial control design can be incorporated in model-based predictive control. For example, the future reference trajectory, measurable or predictable disturbances, and constraints on manipulated variables can easily be incorporated in the controller design.

For a large number of systems, a linearized treatment of the model nonlinearity is acceptable. However, for some nonlinear systems, linearization can result in poor performance. Such highly nonlinear systems include pH control in some chemical or biochemical processes, thickness control for a rolling mill, and temperature control for a batch reactor with split-range control (Katende and Jutan, 1993). To obtain improved control over these systems, while maintaining the benefits of self-tuning control techniques, it is necessary to take account of the nonlinear dynamics in an explicit manner. Atherton (1975) and Cook (1986) indicate that, because of the large number of different types of nonlinearity which can occur in practice, extending a basic linear control scheme to account for all possibilities is unrealistic. A sensible way of tackling the general problem is to employ a framework within which a large number of nonlinear processes can be adequately approximated.


For self-tuning, such a framework is provided by the Volterra and Hammerstein models (Agarwal and Seborg, 1987). Agarwal and Seborg (1987) proposed two self-tuning control strategies for nonlinear control problems. Their approach is applicable to a broad class of nonlinear single-input, single-output systems which can include arbitrary nonlinear functions of the most recent input. Brengel and Seider (1989) presented a multistep nonlinear predictive controller which is a direct extension of the internal model control (IMC) concept. This method is less accurate in approximating the dynamic model across an entire sampling interval since it does not involve iterative optimizing methods.

One aim of this paper is to explore the effect of different criterion functions on the performance of nonlinear predictive control strategies. Here, two nonlinear predictive control algorithms are explored using a second-order Hammerstein model structure. For the first algorithm, a conventional quadratic criterion function is used with no modifications. A different and probably more appropriate cost function is used for the second algorithm, for which analytical expressions for the gradients are obtained. Finally, the nonlinear generalized predictive control (NLGPC) algorithm developed by Katende and Jutan (1995, 1996) is briefly discussed.

2. Theoretical Development

2.1. The Hammerstein Model. The following is a general form of discrete time Hammerstein models:

A(q^-1) y(t) = B(q^-1) f(u(t))   (1)

where f is a nonlinear function and y is the process output. A and B are polynomials in the back-shift operator q. In this paper, we consider Hammerstein models of the form

A(q^-1) y(t) = B1(q^-1) u(t) + B2(q^-1) u^2(t) + C(q^-1) e(t)   (2)

which corresponds to a polynomial input nonlinearity of second order. One advantage of this model is that it is linear in the Bi parameters, and these can be easily estimated, for example, using recursive least squares.

2.2. Algorithm Based on Penalizing the Process Input.

2.2.1. Modeling and Prediction. A second-order nonlinear Hammerstein model with nonstationary noise can be written in the well-known CARIMA form (with nonlinearity in u):

A(q^-1) y(t) = B1(q^-1) u(t-1) + B2(q^-1) u^2(t-1) + T(q^-1) ξ(t)/∆   (3)

where ξ(t) is the disturbance, T(q^-1) is an observer/filter polynomial, and ∆ = 1 - q^-1. The polynomial P(q^-1) is introduced, and this gives rise to the diophantine identity:

T(q^-1) P(q^-1) = Ej(q^-1) A(q^-1) ∆ + q^-j Fj(q^-1)   (4)

where j = 1, 2, ..., N and N is the prediction horizon. For simplicity, T(q^-1) and P(q^-1) are set to unity. Multiplying the model equation by Ej q^j ∆ and substituting 1 - q^-j Fj for Ej A ∆ gives

ŷ(t+j) = Σ_{i=1}^{2} Gij ∆u^i(t+j-1) + Fj y(t) + Ej ξ(t+j)   (5)

where Gij = Ej Bi and

Gij = g_{j,0}^(i) + g_{j,1}^(i) q^-1 + ... + g_{j,ni+j-1}^(i) q^-(ni+j-1),   i = 1, 2   (6)

and ni is the degree of polynomial Bi(q^-1). We set all future Ej ξ(t+j) values to zero for minimum variance prediction. For j = 1, 2, ..., N eq 5 can be written in matrix form as

ŷ = Σ_{i=1}^{2} Gi u^i + f   (7)

where the vectors ŷ, u, u^2, and f are N × 1:

ŷ = [ŷ(t+1), ŷ(t+2), ..., ŷ(t+N)]^T   (8)

where

ŷ(t+1) = G11 ∆u(t) + G21 ∆u^2(t) + F1 y(t) + E1 ξ(t+1)
ŷ(t+2) = G12 ∆u(t) + G22 ∆u^2(t) + F2 y(t) + E2 ξ(t+2)
⋮   (9)

u = [∆u(t), ∆u(t+1), ..., ∆u(t+N-1)]^T   (10)

u^2 = [∆u^2(t), ∆u^2(t+1), ..., ∆u^2(t+N-1)]^T   (11)

f = [f(t+1), f(t+2), ..., f(t+N)]^T   (12)

where

f(t+1) = (G11 - g_0^(1)) ∆u(t) + (G21 - g_0^(2)) ∆u^2(t) + F1 y(t)
f(t+2) = q(G12 - q^-1 g_1^(1) - g_0^(1)) ∆u(t) + q(G22 - q^-1 g_1^(2) - g_0^(2)) ∆u^2(t) + F2 y(t)   (13)
⋮
f(t+N) = q^(N-1)(G1N - q^-(N-1) g_{N-1}^(1) - ... - g_0^(1)) ∆u(t) + q^(N-1)(G2N - q^-(N-1) g_{N-1}^(2) - ... - g_0^(2)) ∆u^2(t) + FN y(t)   (14)

and the matrices G1 and G2 are lower triangular of dimension N × N.
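For concreteness, the construction of the prediction quantities in eqs 7-14 can be sketched in a few lines of Python. This is a minimal illustration of the standard GPC-style construction under the simplification T(q^-1) = P(q^-1) = 1 used above, and it is not the authors' code; the function and variable names (build_prediction, impulse_response, du_past, and so on) are our own. The columns of G1 and G2 are shifted impulse responses of B1/(A∆) and B2/(A∆), and the free response f is obtained by iterating the model with all future increments set to zero.

```python
# Sketch only: prediction matrices G1, G2 and free response f of eqs 7-14
# for the CARIMA Hammerstein model of eq 3, assuming T = P = 1.
import numpy as np

def impulse_response(b, a_delta, N):
    """First N impulse-response coefficients of B(q^-1)/(A(q^-1)*Delta)."""
    h = np.zeros(N)
    for j in range(N):
        acc = b[j] if j < len(b) else 0.0
        for m in range(1, min(j, len(a_delta) - 1) + 1):
            acc -= a_delta[m] * h[j - m]
        h[j] = acc
    return h

def build_prediction(a, b1, b2, y_past, du_past, du2_past, N):
    """a, b1, b2: coefficients of A, B1, B2 (a[0] = 1).
    y_past  = [y(t), y(t-1), ...] (at least deg(A)+1 values),
    du_past = [du(t-1), du(t-2), ...], du2_past = [du^2(t-1), ...]
    (at least deg(B) past increments each)."""
    a_delta = np.convolve(a, [1.0, -1.0])            # A(q^-1) * Delta
    h1 = impulse_response(b1, a_delta, N)
    h2 = impulse_response(b2, a_delta, N)
    # Lower-triangular matrices of eq 7: column k is the response to a unit
    # increment applied at time t+k.
    G1 = np.array([[h1[j - k] if j >= k else 0.0 for k in range(N)] for j in range(N)])
    G2 = np.array([[h2[j - k] if j >= k else 0.0 for k in range(N)] for j in range(N)])
    # Free response: run the model forward with all future increments zero.
    f = np.zeros(N)
    for j in range(1, N + 1):
        acc = 0.0
        for m in range(1, len(a_delta)):             # -sum of a~_m * y(t+j-m)
            acc -= a_delta[m] * (f[j - 1 - m] if j - 1 - m >= 0 else y_past[m - j])
        for b, past in ((b1, du_past), (b2, du2_past)):
            for m in range(len(b)):                  # contribution of past increments
                lag = m - (j - 1)                    # index of du(t+j-1-m)
                if lag > 0:
                    acc += b[m] * past[lag - 1]
        f[j - 1] = acc
    return G1, G2, f
```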

2.2.2. Predictive Control Law: Quadratic Model. To compute the vector of controls, the following cost function J is minimized:

J = Σ_{j=N1}^{N2} [ŷ(t+j) - w(t+j)]^2 + Σ_{j=1}^{NU} λj [∆u(t+j-1)]^2   (15)

where w(t) is the reference signal, N1 is the minimum costing horizon, N2 is the maximum costing horizon, NU is the control horizon, and λ is the control weight. Note that the cost function is identical to that defined by Clarke et al. (1987) in the derivation of the GPC algorithm. In matrix form, we write

J = E{(ŷ - w)^T (ŷ - w) + λ u^T u}

or


J = E{(G1 u + G2 u^2 + f - w)^T (G1 u + G2 u^2 + f - w) + λ u^T u}   (16)

Let us denote

v = G1 u + G2 u^2 + f - w   (17)

where v is a column vector. Equation 16 becomes

J = E{v^T v + λ u^T u}   (18)

Let the G1 and G2 matrices be represented by column vectors g0, g1, g2, ..., that is,

G1 = [g_0^(1)  g_1^(1)  g_2^(1)  ...]   (19)

G2 = [g_0^(2)  g_1^(2)  g_2^(2)  ...]   (20)
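As a small illustration (our own sketch, with illustrative names), the cost of eqs 16-18 is straightforward to evaluate once G1, G2, f, and the setpoint vector w are available; the square u^2 is taken element-wise:

```python
# Minimal sketch of eqs 16-18: J(u) = v'v + lambda*u'u, v = G1 u + G2 u^2 + f - w.
import numpy as np

def cost_J(u, G1, G2, f, w, lam):
    v = G1 @ u + G2 @ (u ** 2) + f - w   # eq 17, u^2 element-wise
    return float(v @ v + lam * (u @ u))  # eq 18 (expectation dropped for known data)
```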

2.2.3. Objective Minimization. For optimal control, we minimize J in eq 18 with respect to u. This is achieved in several steps:

∂J/∂ui = v^T ∂v/∂ui + (∂v^T/∂ui) v + 2λ u^T ∂u/∂ui   (21)

or

∂J/∂ui = 2 v^T ∂v/∂ui + 2λ ui   (22)

for the (i + 1)th column of the Jacobian matrix (since i = 0, 1, 2, ...). The vector v can be written as a sum of outer products:

v = u0 g_0^(1) + u1 g_1^(1) + ... + ui g_i^(1) + ... + u_{N-1} g_{N-1}^(1) + u0^2 g_0^(2) + ... + ui^2 g_i^(2) + ... + u_{N-1}^2 g_{N-1}^(2) + f - w   (23)

where the columns of the G1 and G2 matrices are indexed from 0 to N - 1. We now express the partial derivative of v with respect to ui as a linear combination of the columns of G1 and G2:

∂v/∂ui = g_i^(1) + 2 ui g_i^(2)   (24)

which is a column vector. Multiplying both sides of eq 24 by 2v^T, we obtain

2 v^T ∂v/∂ui = 2 v^T g_i^(1) + 4 ui v^T g_i^(2)   (25)

which is a scalar. We can now obtain the partial derivative of J with respect to u as the row vector:

∂J/∂u = [∂J/∂u0  ∂J/∂u1  ...  ∂J/∂u_{N-1}]   (26)

That is,

∂J/∂u = [2 v^T g_0^(1)  2 v^T g_1^(1)  ...  2 v^T g_{N-1}^(1)] + [4 u0 v^T g_0^(2)  4 u1 v^T g_1^(2)  ...  4 u_{N-1} v^T g_{N-1}^(2)] + [2λ u0  2λ u1  ...  2λ u_{N-1}]   (27)

or

∂J/∂u = 2 v^T G1 + 2λ u^T + 4 v^T [u0 g_0^(2)  u1 g_1^(2)  ...  u_{N-1} g_{N-1}^(2)]   (28)

Finally, we can rewrite the above equation in a more compact form as

∂J/∂u = 2 v^T G1 + 4 v^T G2 diag(u) + 2λ u^T   (29)

The gradient in (29) can also be used to calculate the Hessian matrix directly and used for function minimization in, say, Newton's method. The Hessian of J with respect to u is written as

∂^2J/∂u^2 = [∂^2J/(∂ui ∂uj)],   i, j = 0, 1, ..., N - 1   (30)

After much algebraic manipulation (see Appendix), eq 30 can be written as

∂^2J/∂u^2 = 2 G1^T G1 + 4 G1^T (G2 diag(u)) + 4 (G2 diag(u))^T G1 + 8 (G2 diag(u))^T (G2 diag(u)) + 2λ I + 4 diag(v^T G2)   (31)

As is usually the case with receding horizon schemes, the current control law is given as the first element of vector u, obtained formally as a solution to (28) by equating to zero or numerically, using (29) and (31) iteratively. If we denote

O = ∂J/∂u_opt;   O′ = ∂^2J/∂u_opt^2   (32)

the optimal value of the current controller output increment, ∆u(t), can be obtained by iteration of the following equation:

u_opt(n+1) = u_opt(n) - [O′(n)]^-1 O(n)   (33)

This algorithm can now be used to calculate the optimal change in the controller output vector, u_opt, of which we implement the first element only. The algorithm is based on a process described by a second-order Hammerstein model and uses the criterion function given by eq 15 in the controller design. Simulation studies are discussed in section 3.
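A compact numerical sketch of this first algorithm is given below. It is our own illustration, not the authors' implementation, and the function names, iteration limit, and tolerance are assumptions. The gradient follows eq 29, the Hessian is assembled consistently with eqs 24-31 through the Jacobian ∂v/∂u = G1 + 2 G2 diag(u), and the Newton recursion is eq 33; only the first element of the resulting vector is applied at each sample.

```python
# Sketch of the input-penalty NLGPC update (section 2.2): Newton iteration on
# the gradient of eq 29 with the Hessian of eq 31, per the recursion of eq 33.
import numpy as np

def gradient(u, G1, G2, f, w, lam):
    v = G1 @ u + G2 @ (u ** 2) + f - w                      # eq 17
    return 2 * (v @ G1) + 4 * (v @ G2) * u + 2 * lam * u    # eq 29 (as a vector)

def hessian(u, G1, G2, f, w, lam):
    v = G1 @ u + G2 @ (u ** 2) + f - w
    Jv = G1 + 2 * G2 * u             # dv/du = G1 + 2 G2 diag(u), cf. eq 24
    return 2 * Jv.T @ Jv + 4 * np.diag(v @ G2) + 2 * lam * np.eye(len(u))

def newton_control_increment(G1, G2, f, w, lam, u_init, iters=20, tol=1e-10):
    """Minimize eq 18 by eq 33; return the vector of future control increments."""
    u = np.array(u_init, dtype=float)
    for _ in range(iters):
        step = np.linalg.solve(hessian(u, G1, G2, f, w, lam),
                               gradient(u, G1, G2, f, w, lam))
        u -= step                    # eq 33
        if np.linalg.norm(step) < tol:
            break
    return u                         # apply only u[0] to the process
```

As noted in section 3, the previous control action is usually an adequate initial guess for the iteration.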


In the next section, we modify the development by considering what may be a more natural way of penalizing the effect of the nonlinear control on the cost function J.

2.3. Penalizing the Process Pseudo Input. In choosing a cost function J of the form given by (15) in conjunction with the quadratic model form in (3), we could argue that we should penalize the control input in J in a form which more appropriately reflects its structure in the model. This leads to a modified objective. To compute the vector of controls, the following cost function J is minimized:

J = Σ_{j=N1}^{N2} [ŷ(t+j) - w(t+j)]^2 + Σ_{j=1}^{NU} λj [B1(1) u(t+j-1) + B2(1) u^2(t+j-1)]^2   (34)

where

B = b0 + b1 q^-1 + ...;   B(1) = Σ_{i=0}^{nb} bi   (35)

Notice that the control variable u affects the objective J in the same way as it affects the model. Also, averaged steady-state weights Bi(1) are used to reflect similar scaling that appears in the model. In matrix form, we write

J = E{(ŷ - w)^T (ŷ - w) + λ [B1(1) u + B2(1) u^2]^T [B1(1) u + B2(1) u^2]}   (36)

or

J = E{(G1 u + G2 u^2 + f - w)^T (G1 u + G2 u^2 + f - w) + λ [B1(1) u + B2(1) u^2]^T [B1(1) u + B2(1) u^2]}   (37)

Let us denote

z = B1(1) u + B2(1) u^2   (38)

v = G1 u + G2 u^2 + f - w   (39)

where v and z are column vectors. Equation 36 becomes

J = E{v^T v + λ z^T z}   (40)

Let the G1 and G2 matrices be represented by column vectors g0, g1, g2, ... as before.

2.3.1. Objective Minimization. For optimal control we minimize J in eq 40 with respect to u. This is again achieved in several steps:

∂J/∂ui = v^T ∂v/∂ui + (∂v^T/∂ui) v + λ [z^T ∂z/∂ui + (∂z^T/∂ui) z]   (41)

or

∂J/∂ui = 2 v^T ∂v/∂ui + 2λ z^T ∂z/∂ui   (42)

where v can be written as

v = u0 g_0^(1) + u1 g_1^(1) + ... + ui g_i^(1) + ... + u_{N-1} g_{N-1}^(1) + u0^2 g_0^(2) + ... + ui^2 g_i^(2) + ... + u_{N-1}^2 g_{N-1}^(2) + f - w   (43)

and the columns of the G1 and G2 matrices are again indexed from 0 to N - 1. Using the same arguments as before, we can obtain the final expression for the gradient as

∂J/∂u = [2 v^T g_0^(1)  2 v^T g_1^(1)  ...  2 v^T g_{N-1}^(1)] + [4 u0 v^T g_0^(2)  4 u1 v^T g_1^(2)  ...  4 u_{N-1} v^T g_{N-1}^(2)] + 2λ [B1(1) u + B2(1) u^2]^T [B1(1) I + 2 B2(1) diag(u)]   (44)

Finally, we can rewrite the above equation in a more compact form as

∂J/∂u = 2 v^T G1 + 4 v^T G2 diag(u) + 2λ [B1(1) u + B2(1) u^2]^T [B1(1) I + 2 B2(1) diag(u)]   (45)

By setting eq 45 to zero, a solution for u can be obtained numerically. Note that similar arguments were used to obtain the NLGPC for more complex processes in Katende and Jutan (1995, 1996). In those papers, a general form of Hammerstein and Volterra process models was considered. They defined the process model as

A(q^-1) y(t) = q^-(d+1) x(t) + C(q^-1) ξ(t)/∆   (46)

where x(t) is a pseudo input and is any function of u(t). They define a cost function as

J = Σ_{i=N1}^{N2} [P ŷ(t+i) - P(1) w(t+i)]^2 + λ Σ_{i=1}^{Nu} (x(t+i-1))^2   (47)

After some algebraic manipulation, they arrive at a solution for the optimal pseudo control x as

x = (G^T G + λI)^-1 G^T (w* - H x̄ - F c)   (48)

where w* is a modified vector of w, c is a vector of prediction errors through a filter T, and G, H, and F are polynomials defined by the diophantine equations. They obtain u from x by using Newton's method. It should be noted that this algorithm is a general form of what we have obtained above.
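The pseudo-input route of eqs 46-48 can be sketched as follows. This is our own illustration under simplifying assumptions (P and T set to unity, so the filtered terms in w* and c drop out), with illustrative names; G here is the matrix of the linear predictor from the pseudo input x to the output, fr is the corresponding free response, and the inversion step shows the simple quadratic case x = b01 u + b02 u^2 used later in eq 55.

```python
# Sketch of the pseudo-input NLGPC (eqs 46-48, simplified): a linear solve for
# x over the horizon, then Newton's method to recover u(t) from x(t).
import numpy as np

def optimal_pseudo_input(G, fr, w, lam):
    """Analytical minimizer of the horizon cost in x (cf. eq 48 with P = T = 1)."""
    N = G.shape[1]
    return np.linalg.solve(G.T @ G + lam * np.eye(N), G.T @ (w - fr))

def u_from_x(x, b01, b02, u_prev, iters=20, tol=1e-12):
    """Invert x = b01*u + b02*u**2 for u by Newton's method, starting from the
    previous control action."""
    u = float(u_prev)
    for _ in range(iters):
        g = b01 * u + b02 * u ** 2 - x
        dg = b01 + 2.0 * b02 * u
        if abs(dg) < 1e-12:
            break                       # avoid division by a vanishing derivative
        u_new = u - g / dg
        if abs(u_new - u) < tol:
            return u_new
        u = u_new
    return u
```

As with the first algorithm, only the first element of the pseudo-input vector is inverted and applied at each sampling instant.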


3. Discussion

A quadratic cost function is usually chosen as a mathematical convenience. Nevertheless, it is also somewhat intuitive for linear process models with constrained control. However, when we consider nonlinear models of the Hammerstein form, a number of different options (within the basic quadratic functional form) are possible. For a general nonlinear input, do we weight just the square of the linear term u? What if there are no linear terms in the process model? This leads us to the concept of the pseudo input x. It now makes sense to penalize the square of x, which in turn contains the manipulated variable u as it appears in the process dynamics.

The value of the control weight λ is, in general, difficult to choose a priori and is often considered an on-line tuning parameter, although some guidelines are available in the literature for the standard objective weighting (Lam, 1980). These guides may be of use with weighting on the pseudo input x, since this converts the nonlinear form to an equivalent linear one. Choice of λ when we weight one of a number of nonlinear functions of u would doubtless be very "ad hoc" in its approach. Selection of weighting style may not be a question of taste either, since, as some later simulations show, quite different control behavior results. In effect, with different components of x dominating in different ranges, we end up with an effective weighting given as a function of u.

An additional practical problem is the question of multiple solutions for the optimal control value u_opt. In the example we encountered, it was obvious which was the correct solution, since the other candidates were rejected on heuristic grounds. If, say, Newton's method was used for the iterations, a good initial guess (we used the previous control action) was often sufficient to converge to the correct option.

On the basis of the above discussion, the following cost function is defined for the NLGPC with penalty on the pseudo input:

J = Σ_{i=N1}^{N2} [P ŷ(t+i) - P(1) w(t+i)]^2 + λ Σ_{i=1}^{NU} (x(t+i-1))^2   (49)

Note that x is a nonlinear function of the controller output, u. In this way the control weighting affects both the linear and nonlinear parts of the input signal. In many ways, this latter weighting is more "natural" since the effect on the output y occurs not only through u but also through the various nonlinear inputs. Thus, by penalizing x, we are actually penalizing the "net effect" of the input on the process output. The optimal value of x over the prediction horizon is obtained by minimization of J with respect to x. An analytical solution is obtained since the model is linear in x. The optimal u is then obtained numerically from the optimal x.

4. Simulation Studies

To evaluate the two algorithms, simulation studies were carried out on two different processes. The first process is based on a second-order Hammerstein model structure, and the second one is a level tank with a nonlinear valve.

4.1. Process Models Simulated.

4.1.1. Hammerstein Process Model. Consider the process represented by the following model:

(1 + 0.022 q^-1 - 0.7991 q^-2) y(t) = q^-1 (0.029 + 0.0269 q^-1) u(t) + q^-1 (0.021 + 0.003 q^-1) u^2(t) + ξ(t)   (50)

This model is open-loop stable with a second-order nonlinearity in the input. It also has first-order B polynomials. The tuning parameters for both NLGPC algorithms were set as follows: N1 = 2; N2 = 3; NU = 1; P = 1; T = 1; d = 1; λ = 0.0001. The noise variance was set to 0.001, and the setpoint was switched between two levels (1 and zero) for all runs. The input signal, u, was limited between 100 and -100.

4.1.2. Level Tank Process Model. Consider a process where the level, h(t), of a liquid in the tank responds to changes in the inlet flow, qi(t), and to changes in the exit valve opening, p(t). The outlet flow, q0(t), of a liquid through a valve is given as

q0(t) = Cv p(t) √(∆P(t))   (51)

where q(t) = flow, m^3/s; Cv = valve coefficient, m^3/(s·Pa^1/2); p(t) = valve position (a value of zero indicates that the valve is closed, and a value of 1 indicates that the valve is fully open); ∆P(t) = pressure drop across the valve, Pa. For this process, the pressure drop across the valve is

∆P(t) = P + ρ g h(t) - P2   (52)

where P = pressure over the liquid, Pa; ρ = density of the liquid, kg/m^3; g = acceleration due to gravity, 9.8 m/s^2; P2 = downstream pressure from the valve, Pa. Assumptions: (1) friction loss through the pipe from the tank to the valve is negligible; (2) outlet and inlet liquid densities are equal. The unsteady-state mass balance gives

qi(t) - q0(t) = A dh(t)/dt   (53)

where A = cross-sectional area of the tank, m^2. Substituting eq 52 into eq 51 and then (51) into (53), we get

qi(t) - Cv p(t) √(P + ρ g h(t) - P2) = A dh(t)/dt   (54)

Table 1. Level Tank Data

variable   value
qi         inlet flow (unmeasured load)
Cv         50
p(t)       valve position (manipulated variable)
P          10^5 Pa
P2         1.1 × 10^5 Pa
ρ          10^3 kg/m^3
g          9.8 m/s^2
h(t)       tank level (controlled variable)
A          15 m^2

The nonlinear eq 54 was simulated and solved using a second-order Runge-Kutta numerical integration method. The level tank was specified as shown in Table 1. The tuning parameters for both NLGPC algorithms were set as follows: N1 = 2; N2 = 3; NU = 1; P = 1; T = 1; d = 1; λ = 0.0001. The load, qi, was set to 20 × 10^2 m^3/s. The setpoint (tank level, hsp(t)) was switched between 1.5 and 0.5 m.
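The tank integration step can be sketched as below. This is our own illustration of a second-order Runge-Kutta (Heun) update of eq 54 with the parameter values of Table 1 as defaults; the max(·, 0) guard is an assumption of ours, added only to keep the square root defined should the valve pressure drop become non-positive.

```python
# Sketch only: one second-order Runge-Kutta step of the tank level ODE (eq 54).
import math

def tank_derivative(h, p, qi, Cv=50.0, P=1.0e5, P2=1.1e5, rho=1.0e3, g=9.8, A=15.0):
    """dh/dt from eq 54: A dh/dt = qi - Cv*p*sqrt(P + rho*g*h - P2)."""
    dp = max(P + rho * g * h - P2, 0.0)   # guard against a negative pressure drop
    return (qi - Cv * p * math.sqrt(dp)) / A

def rk2_step(h, p, qi, dt):
    """Advance the level h over one sampling interval dt (valve position p held)."""
    k1 = tank_derivative(h, p, qi)
    k2 = tank_derivative(h + dt * k1, p, qi)
    return h + 0.5 * dt * (k1 + k2)
```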

The NLGPC was based on a simple Hammerstein model:

y(t) + a1 y(t-1) = b01 u(t-1) + b02 u^2(t-1)   (55)

That is, A = 1 + a1 q^-1; B1 = b01; B2 = b02. The NLGPC was then used to control the nonlinear tank level.

For both processes, parameters were estimated using a standard UDU version of RLS (Bierman, 1977). Persistent excitation was guaranteed by turning off the adaptation whenever the increment in the input signal became minimal. An exponential forgetting factor was adopted, with values ranging from 0.98 to unity (no forgetting). The parameter estimates were initialized in simulation with b01 and b02 equal to unity and the rest equal to very small numbers. The diagonal of the initial covariance matrix was set to 10^4. The figures shown below indicate the behavior of the closed-loop system over 200 samples.
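For reference, a plain recursive least squares update with exponential forgetting is sketched below. It is illustrative only, since the paper uses Bierman's UDU-factorized variant and freezes adaptation when the input increments become small; the regressor shown corresponds to the model of eq 55, and the function name and default forgetting factor are our own choices.

```python
# Sketch of one RLS update with forgetting for y(t) = phi' theta + e(t).
# For eq 55: phi = [-y(t-1), u(t-1), u(t-1)**2] and theta = [a1, b01, b02].
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)            # gain vector
    theta = theta + k * (y - phi @ theta)    # correct with the prediction error
    P = (P - np.outer(k, Pphi)) / lam        # covariance update with forgetting
    return theta, P
```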


Figure 1. Adaptive NLGPC with penalty on the input (Hammerstein process model).

Figure 2. Adaptive NLGPC with penalty on the pseudo input (Hammerstein process model).

4.2. Tuning

To arrive at the tuning parameter values indicated above, we started from initial settings based on experience and then fine-tuned, essentially by trial and error. The value of N1 was set to d + 1 (that is, N1 = 2) so that the tracking error over the entire prediction horizon is included in the objective function. For example, if N1 were set to d + 2, the tracking error at t + d + 1 would not be included in the criterion. Note that there is no need to include the tracking error at t + d, since the process input (controller action) at time t does not affect the process output until the time is greater than t + d. N2 was set to 3; this was necessary since we needed a quick process response to controller action, and from experience it was noted that increasing N2 to a large value slows down the response of the process under NLGPC control. NU was set to 1, the lowest value possible, since from our experience large values of NU resulted in an oscillatory process response or sometimes an unstable closed-loop system. The control weighting λ was set to 0.0001 by trial and error; this value gave the best controller activity and overall closed-loop robustness. For linear GPC, P is intended for model-following purposes; however, we decided to set P to unity, since its role under NLGPC needs a little more investigation.

Figure 3. Tank level control using NLGPC with penalty on input.

4.3. Results

Figures 1 and 2 show a large difference in performance between the two NLGPC algorithms, one with a penalty on the input and the other with a penalty on the pseudo input, when applied to the nonlinear Hammerstein process model. The NLGPC algorithm with a penalty on the input controls the nonlinear Hammerstein process model with a slower response, and the input appears less noisy, as shown in Figure 1. On the other hand, the NLGPC with a penalty on the pseudo input controls the same process model well enough, with a faster response, as shown in Figure 2. Figure 3 demonstrates the performance of the NLGPC with a penalty on the input on the level tank process model, and Figure 4 shows the corresponding performance of the NLGPC with a penalty on the pseudo input on the same process model.

Figure 4. Tank level control using NLGPC with penalty on pseudo input.

Here, the control action is much smoother and much of the noise response is eliminated. The different control behavior cannot be attributed solely to the different weighting in the objective function: since the control objectives, and hence the optimal solutions, are different, the control behavior is expected to be different as well.


The two optimal solutions are also quite different in their sensitivities to the same noise level. The "quality" of the final control is a function of trial-and-error tuning, and we were not able to match this as well as we wanted to. Nevertheless, the question of which objective function is more appropriate remains open, and these results show that both control quality and sensitivity to noise will be affected. No doubt stability is also an issue, but it was not treated in this initial study.

5. Conclusion

In this paper, the choice of penalization in the objective function is considered. The performance of two NLGPC algorithms based on different criterion functions has been evaluated using a number of simulation runs. The first algorithm was based on a quadratic cost function with a penalty on the input signal; this formulation requires solving the set of N nonlinear algebraic equations given by eq 29 to obtain the optimal control difference ∆u(t). For the second algorithm, the pseudo input x, which in turn contains the manipulated variable, was penalized; this algorithm solves a set of N linear equations and one nonlinear algebraic equation. It is also important to note that the second algorithm, which uses the manipulated variable in position form, gives a faster, although noisier, response to a step change in the setpoint on a second-order Hammerstein process model than the first algorithm, which uses the manipulated variable in velocity form, on the same process model. It has been shown through simulation that the selection of the control weighting style is very important, but the best choice remains an open question.

Appendix

The following is a matrix representation of the second partial derivative of J with respect to u:

∂^2J/∂u^2 = [∂/∂ui (∂J/∂uj)],   i, j = 0, 1, ..., N - 1   (A1)

First, obtain the second partial derivative of each of the elements in the above matrix:

∂/∂ui (∂J/∂uj) = ∂/∂ui [2 v^T ∂v/∂uj + 2λ uj]   (A2)

Rearranging the above equation, we get

∂/∂ui (∂J/∂uj) = 2 ∂/∂ui [v^T ∂v/∂uj] + 2λ δij   (A3)

Substituting for the term in square brackets on the rhs using eq 25, we obtain

∂/∂ui (∂J/∂uj) = ∂/∂ui [2 v^T g_j^(1) + 4 uj v^T g_j^(2)] + 2λ δij   (A4)

Rearranging the above equation and differentiating through the rhs, we have

∂/∂ui (∂J/∂uj) = 2 (∂v^T/∂ui) [g_j^(1) + 2 uj g_j^(2)] + 4 δij v^T g_j^(2) + 2λ δij   (A5)

Again, we use eq 24 for substitution in the above equation:

∂/∂ui (∂J/∂uj) = 2 [g_i^(1) + 2 ui g_i^(2)]^T [g_j^(1) + 2 uj g_j^(2)] + 4 δij v^T g_j^(2) + 2λ δij   (A6)

Multiplying through and collecting terms, we get

∂/∂ui (∂J/∂uj) = 2 [g_i^(1)T g_j^(1) + 2 uj g_i^(1)T g_j^(2) + 2 ui g_i^(2)T g_j^(1) + 4 ui uj g_i^(2)T g_j^(2)] + 4 δij v^T g_j^(2) + 2λ δij   (A7)

Using eq A7 and putting all elements in matrix form, we arrive at the second partial derivative of J with respect to u:

∂^2J/∂u^2 = 2 G1^T G1 + 4 G1^T (G2 diag(u)) + 4 (G2 diag(u))^T G1 + 8 (G2 diag(u))^T (G2 diag(u)) + 2λ I + 4 diag(v^T G2)   (A8)

Literature Cited

Agarwal, M.; Seborg, D. E. Self-Tuning Controllers for Nonlinear Systems. Automatica 1987, 23 (2), 204.
Atherton, D. P. Nonlinear Control Engineering; Van Nostrand Reinhold: New York, 1975.
Bierman, G. J. Factorization Methods for Discrete Sequential Estimation; Academic Press: New York, 1977.
Brengel, D. D.; Seider, W. D. Multistep Nonlinear Predictive Controller. Ind. Eng. Chem. Res. 1989, 28, 1812.
Clarke, D. W. Advances in Model-Based Predictive Control. In Advances in Model-Based Predictive Control; Clarke, D. W., Ed.; Oxford University Press: Oxford, 1994.
Clarke, D. W.; Gawthrop, P. J. Self-tuning Control. IEE Proc. 1979, 126, 633.
Clarke, D. W.; Mohtadi, C.; Tuffs, P. S. Generalized Predictive Control, Parts I and II. Automatica 1987, 23 (2), 137-160.
Cook, P. A. Nonlinear Dynamical Systems; Prentice-Hall: Englewood Cliffs, NJ, 1986.
Cutler, C. R.; Ramaker, B. L. Dynamic Matrix Control: A Computer Control Algorithm. Presented at the Automatic Control Conference, San Francisco, CA, 1980.
De Keyser, R. M. C.; Van Cauwenberghe, A. R. Extended Prediction Self-Adaptive Control. Presented at the IFAC Symposium on Identification and System Parameter Estimation, York, U.K., 1985.
Garcia, C.; Morari, M. Internal Model Control: Part 1. A Unifying Review and Some New Results. Ind. Eng. Chem. Process Des. Dev. 1982, 21, 308.
Garcia, C. E.; Prett, D. M.; Morari, M. Model Predictive Control: Theory and Practice - A Survey. Automatica 1991, 25, 335-348.
Katende, E.; Jutan, A. A New Self-tuning PID Controller. Can. J. Chem. Eng. 1993, 71, 625.
Katende, E.; Jutan, A. Nonlinear Predictive Control. Presented at the IEEE ACC, Session FP10, Seattle, WA, 1995.
Katende, E.; Jutan, A. Nonlinear Predictive Control of Complex Processes. Ind. Eng. Chem. Res. 1996, 35, 3539.
Katende, E.; Jutan, A. Experimental Evaluation of Nonlinear Predictive Control. IEEE Trans. Control Syst. Technol., in press.
Lachmann, K. H. Parameter-Adaptive Control of a Class of Nonlinear Processes. Presented at the 6th IFAC Symposium on Identification and System Parameter Estimation, Arlington, VA, 1982; p 372.

Lam, R. P. Implicit and Explicit Self-tuning Controllers. D.Phil. Thesis, University of Oxford, 1980.
Lee, J. H.; Gelormino, M. S.; Morari, M. Model Predictive Control of Multirate Sampled-Data Systems: A State Space Approach. Int. J. Control 1992, 55, 153.
Mosca, E.; Zappa, G.; Manfredi, C. Multistep Horizon Self-Tuning Controllers: The MUSMAR Approach. Presented at the 9th IFAC World Congress, Budapest, Hungary, 1984.
Peterka, V. Predictor-Based Self-Tuning Control. Automatica 1984, 20, 39.

Scattolini, R.; Schiavoni, N. A Multirate Model-Based Predictive Controller. IEEE Trans. Autom. Control 1995, 40, 1093.
Soeterboek, R. Predictive Control: A Unified Approach; Prentice-Hall: Englewood Cliffs, NJ, 1992.

Received for review October 29, 1997
Revised manuscript received May 1, 1998
Accepted May 6, 1998

IE970754V