Ind. Eng. Chem. Res. 2000, 39, 2029-2034


An Analytical Predictive Control Law for a Class of Nonlinear Processes

Furong Gao,* Fuli Wang, and Mingzhong Li

Department of Chemical Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong

* Author to whom correspondence should be addressed. Telephone: +852-2358-7139. Fax: +852-2358-0054. E-mail: [email protected].

Many processes in the chemical industry have modest nonlinearities; i.e., linear dynamics play a dominant role in governing the process output behavior in the operating range of interest, but the linearization errors may be significant. For these types of processes, linear-based control may yield a poor performance, while nonlinear-based control results in computational complexity. We propose to model this type of process with a composite model consisting of a linear model (LM) and a multilayered feedforward neural network (MFNN). The LM is used to capture the linear dynamics, while the MFNN is employed to predict the LM's residual errors, i.e., the process nonlinearities. Effective off-line and on-line algorithms are proposed for the identification of the composite model. With this model structure, it is shown that a simple analytical predictive control law can be formulated to control a nonlinear process. Simulation examples are also given to illustrate the effectiveness of the model identification and the proposed predictive control.

1. Introduction

Model predictive control (MPC), an intuitive and effective control strategy, has attracted much research attention,1-3 primarily because of its design features of prediction and minimization of a cost penalizing future deviations from an output (or state) reference trajectory. Linear model predictive control (LMPC), a widely recognized method, has been successfully applied to many chemical processes. A comprehensive survey of industrial MPC applications can be found in Qin and Badgwell.4

Despite the fact that most chemical processes are inherently nonlinear, they have been "adequately" controlled with a linear control design over a small operating range. Today, demands for tighter environmental regulation, better energy utilization, higher product quality, and more production flexibility have made process operations more complex over a larger operating range. Consequently, predictive control based on a linear model may not always yield satisfactory results, and the focus has recently been shifting toward designing predictive controllers based on nonlinear models.5-7

Neural networks, having an inherent ability to approximate an arbitrary nonlinear function, have become an attractive means of modeling nonlinear processes, and several neural-network-based predictive control algorithms for nonlinear processes have been proposed.8,9 However, with nonlinear process models, two major drawbacks arise in MPC. First, it is difficult to obtain the control law in an analytical form; the control sequence must instead be obtained via a nonlinear optimization routine, resulting in a higher computational effort and potential numerical stability problems. Second, the abundant theory and experience accumulated for the design and tuning of linear controllers cannot easily be carried over to the design and tuning of nonlinear controllers.

Several simplified algorithms have been proposed to control nonlinear processes. Kim et al.10 presented a neural linearizing control scheme, in which a radial basis function (RBF) network was used to linearize the relation between the output of a linear controller and the process output. This control scheme can be applied to input-output linearizable processes, and a proper selection of the predefined linear reference model is crucial to its successful application. Another simplified nonlinear control algorithm was proposed by Mutha et al.11 A key feature of this scheme lies in the use of a process output prediction that accounts for changes in the operating point as well as the magnitude of the process input change. Iterative computation, however, is required for the implementation of the algorithm. It is also important to know the degree of process nonlinearity when selecting an appropriate modeling and control design for nonlinear processes; a quantitative nonlinearity measure has recently been proposed by Guay et al.12,13

This paper is concerned with the modeling and controller design of chemical processes with modest nonlinearities; i.e., linear dynamics play a dominant role in governing the process output behavior in the operating range of interest, but the linearization errors may be significant. We propose to model these types of processes with a composite model consisting of a linear model (LM) and a multilayer feedforward neural network (MFNN). The LM is used to capture the linear dynamics, while the MFNN is employed to approximate and predict the LM's residual errors, i.e., the process nonlinearities. With this form of the model, we will show that analytical predictive controllers can be designed based on the LM and that the outputs of the MFNN, which represent the process nonlinearities, can be viewed as measurable disturbances and eliminated through "feedforward" control. As a result, existing linear predictive control techniques can be applied directly to yield an analytical control law, while the process nonlinearity is compensated with the aid of the MFNN. This strategy can therefore be expected to be applicable over a broad process operating range without intensive computation for the control determination.


2. Process Modeling and Controller Design

In this section, a class of single-input-single-output nonlinear processes is first modeled, followed by the derivation of the predictive controller based on this model. As stated earlier, it is assumed that the process nonlinearity is modest; i.e., a linear model can represent the dominant process dynamics in the operating range of interest while, at the same time, the linearization errors are significant. A composite model consisting of a LM and a MFNN is used to model such a process.

Suppose that an input sequence {u(k)} is presented to the process and that the corresponding output sequence {y(k)} is measured. The following can then be used to model the nonlinear input-output mapping from {u(k)} to {y(k)}:

y(k+1) = x(k)^T \theta_1 + NN(x(k), \theta_2) + \xi(k+1) \qquad (1)

In this equation, x(k)^T θ1 is a LM with parameter vector θ1 and input vector x(k) = [y(k), y(k-1), ..., y(k-n+1), u(k), u(k-1), ..., u(k-m)]^T, NN(x(k), θ2) is a MFNN with weight vector θ2 and input vector x(k), and ξ(k+1) is the modeling error. The LM represents the linearization of the process in the operating range of interest, while the MFNN approximates the linearization error, i.e., the process nonlinearity. The good nonlinear mapping ability of the MFNN allows the modeling error ξ(k+1) to be made sufficiently small by properly determining the parameter vector θ1 and the network weight vector θ2. The process model of eq 1 can be rearranged as

A(q^{-1})\, y(k) = B(q^{-1})\, u(k-1) + \phi(k-1) + \xi(k) \qquad (2)

where A(q^{-1}) = 1 + a_1 q^{-1} + ... + a_n q^{-n} and B(q^{-1}) = b_0 + b_1 q^{-1} + ... + b_m q^{-m}, with a_i = -\theta_{1,i} (i = 1, ..., n) and b_j = \theta_{1,n+1+j} (j = 0, 1, ..., m); the \theta_{1,i} (i = 1, ..., n + m + 1) are the components of the parameter vector θ1, and φ(k) is defined as φ(k) = NN(x(k), θ2).
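To make the composite model concrete, the following sketch (not from the paper) shows how a one-step-ahead prediction of the form of eqs 1 and 2 could be evaluated, assuming NumPy, a fitted parameter vector theta1, and a callable nn standing in for the trained MFNN term NN(x(k), θ2); the function name and argument layout are illustrative only.

```python
import numpy as np

def composite_predict(y_hist, u_hist, theta1, nn, n, m):
    """One-step-ahead prediction with the composite model of eq 1 (sketch).

    x(k) = [y(k), ..., y(k-n+1), u(k), ..., u(k-m)]^T, with the most recent
    samples last in y_hist and u_hist; nn is an assumed callable standing in
    for the trained MFNN term NN(x(k), theta2).
    """
    x = np.array([y_hist[-1 - d] for d in range(n)] +
                 [u_hist[-1 - d] for d in range(m + 1)])
    return float(x @ theta1) + nn(x)   # LM part x(k)^T theta1 plus MFNN residual
```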

In predictive control, predictive equations should be developed to predict the process outputs. To do so, introduce the following polynomial equations:

1 = F_i(q^{-1})\, A(q^{-1}) + q^{-i} G_i(q^{-1}), \qquad i \ge 1 \qquad (3)

where F_i(q^{-1}) = 1 + f_{i,1} q^{-1} + ... + f_{i,i-1} q^{-i+1} and G_i(q^{-1}) = g_{i,0} + g_{i,1} q^{-1} + ... + g_{i,n-1} q^{-n+1}. Then we have

y(k+i) = H_i(q^{-1})\, u(k+i-1) + G_i(q^{-1})\, y(k) + F_i(q^{-1})\, \phi(k+i-1) + F_i(q^{-1})\, \xi(k+i) \qquad (4)

where H_i(q^{-1}) = F_i(q^{-1}) B(q^{-1}). By defining

H_i(q^{-1}) = H_{i,1}(q^{-1}) + q^{-i} H_{i,2}(q^{-1}) \qquad (5)

where H_{i,1}(q^{-1}) = h_{i,1}^{0} + h_{i,1}^{1} q^{-1} + ... + h_{i,1}^{i-1} q^{-i+1} and H_{i,2}(q^{-1}) = h_{i,2}^{0} + h_{i,2}^{1} q^{-1} + ... + h_{i,2}^{m-1} q^{-m+1}, eq 4 can be written as

y(k+i) = H_{i,1}(q^{-1})\, u(k+i-1) + H_{i,2}(q^{-1})\, u(k-1) + G_i(q^{-1})\, y(k) + F_i(q^{-1})\, \phi(k+i-1) + F_i(q^{-1})\, \xi(k+i) \qquad (6)

The polynomials F_i(q^{-1}), G_i(q^{-1}), H_{i,1}(q^{-1}), and H_{i,2}(q^{-1}) can be calculated by the following recursive formulas:

F_{i+1}(q^{-1}) = F_i(q^{-1}) + q^{-i} g_{i,0} \qquad (7)

G_{i+1}(q^{-1}) = q\,[G_i(q^{-1}) - g_{i,0} A(q^{-1})] \qquad (8)

H_{i+1,1}(q^{-1}) = H_{i,1}(q^{-1}) + q^{-i}(h_{i,2}^{0} + g_{i,0} b_0) \qquad (9)

H_{i+1,2}(q^{-1}) = q\,[H_{i,2}(q^{-1}) + g_{i,0} B(q^{-1}) - h_{i,2}^{0} - g_{i,0} b_0] \qquad (10)

with the initial values F_1(q^{-1}) = 1, G_1(q^{-1}) = q[1 - A(q^{-1})], H_{1,1}(q^{-1}) = b_0, and H_{1,2}(q^{-1}) = q[B(q^{-1}) - b_0].
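The recursions (7)-(10) translate directly into a short routine. Below is a minimal sketch (assuming NumPy; the function name, the coefficient-array layout, and the returned tuple structure are choices of this sketch rather than anything prescribed by the paper):

```python
import numpy as np

def predictor_polynomials(a, b, M):
    """Recursions (7)-(10): coefficient arrays of F_i, G_i, H_{i,1}, H_{i,2}, i = 1..M.

    a = [a1, ..., an] from A(q^-1) = 1 + a1 q^-1 + ... + an q^-n
    b = [b0, ..., bm] from B(q^-1) = b0 + ... + bm q^-m
    Coefficients are stored in ascending powers of q^-1.
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    A = np.concatenate(([1.0], a))                 # full A(q^-1)
    F = np.array([1.0])                            # F_1 = 1
    G = -a.copy()                                  # G_1 = q[1 - A(q^-1)]
    H1 = np.array([b[0]])                          # H_{1,1} = b0
    H2 = b[1:].copy()                              # H_{1,2} = q[B(q^-1) - b0]
    polys = [(F.copy(), G.copy(), H1.copy(), H2.copy())]
    for i in range(1, M):
        g0 = G[0]
        h0 = H2[0] if H2.size else 0.0
        F = np.concatenate((F, [g0]))                              # eq 7
        tmp = np.zeros(max(G.size, A.size))
        tmp[:G.size] += G
        tmp[:A.size] -= g0 * A
        G = tmp[1:]                                                # eq 8 (zero constant term, shift by q)
        H1 = np.concatenate((H1, [h0 + g0 * b[0]]))                # eq 9
        tmp = np.zeros(max(H2.size, b.size))
        tmp[:H2.size] += H2
        tmp[:b.size] += g0 * b
        tmp[0] -= h0 + g0 * b[0]
        H2 = tmp[1:]                                               # eq 10 (zero constant term, shift by q)
        polys.append((F.copy(), G.copy(), H1.copy(), H2.copy()))
    return polys
```

For i = 1 the entries are just the initial values above; each further step appends one coefficient to F_i and H_{i,1}, consistent with the polynomial orders stated earlier.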

From eq 6, the i-step-ahead prediction of the process output can be obtained by neglecting the unknown quantity F_i(q^{-1}) ξ(k+i), which is a linear combination of the future model errors. The predictive equations then take the form

\hat{y}(k+i) = H_{i,1}(q^{-1})\, u(k+i-1) + H_{i,2}(q^{-1})\, u(k-1) + G_i(q^{-1})\, y(k) + F_i(q^{-1})\, \hat{\phi}(k+i-1) \qquad (11)

where φ̂(k+i-1), the prediction of φ(k+i-1), can be calculated by replacing the future process outputs with their corresponding predictions. The predictions ŷ(k+i) (i = 1, 2, ...) can thus be computed recursively from eq 11.

Long-range predictive control strategies minimize, in a receding-horizon sense, a quadratic criterion involving future process output predictions and control inputs. A commonly used quadratic criterion is

J = \frac{1}{2} \sum_{i=1}^{M} \{[w(k+i) - \hat{y}(k+i)]^2 + \lambda\,[u(k+i-1) - u(k+i-2)]^2\} \qquad (12)

where w(k) is the desired process output, M is the prediction horizon, and λ ≥ 0 is a weighting factor. Here the prediction horizon is taken equal to the control horizon to simplify the analysis; in general, the two horizons may have different values. The control minimizes criterion 12 in a receding-horizon sense, yielding a sequence of optimized control moves u(k), u(k+1), ..., u(k+M-1) over the control horizon. Only the first control move in the sequence, u(k), is actually implemented. At the next instant, the procedure is repeated and u(k+1) is determined in the same fashion.

A common practice for reducing the complexity of the optimization is to assume a constant control over the prediction horizon, i.e., u(k+i) = u(k) (i ≥ 1). In this case, the prediction equation of eq 11 reduces to

\hat{y}(k+i) = H_{i,1}(1)\, u(k) + H_{i,2}(q^{-1})\, u(k-1) + G_i(q^{-1})\, y(k) + F_i(q^{-1})\, \hat{\phi}(k+i-1) \qquad (13)

and eq 12 becomes

J)

1

M

∑[w(k+i) - yˆ (k+i)]2 +

2 i)1

1 2

λ[u(k) - u(k-1)]2 (14)

The definition of φ̂(k) indicates that φ̂(k+i-1) (i = 1, 2, ..., M) is a function of u(k), and this makes the minimization of J with respect to u(k) a nonlinear optimization problem. An iterative computation is commonly needed to obtain a numerical solution of a nonlinear optimization. For real-time control, iterative computation is not preferred because of the time constraint. A simplification is therefore proposed to cope with this problem. Notice that we have placed a penalty on the term u(k) - u(k-1), the change between two consecutive control moves. This implies that u(k) cannot be significantly different from u(k-1). The φ̂(k+i-1) (i = 1, 2, ..., M) can, therefore, be approximately determined by replacing the current control u(k) with the previous control u(k-1). These approximations of φ̂(k+i-1) (i = 1, 2, ..., M), denoted by φ̂0(k+i-1), can be computed prior to the optimization as

\hat{\phi}_0(k+i-1) = NN(\hat{x}(k+i-1), \theta_2) \qquad (15)

where x̂(k+i-1) = [ŷ0(k+i-1), ..., ŷ0(k+i-n), û(k+i-1), ..., û(k+i-m-1)]^T with

\hat{u}(k+j) = \begin{cases} u(k-1), & j \ge 0 \\ u(k+j), & j < 0 \end{cases} \qquad (16)

and ŷ0(k+j) = y(k+j) for j ≤ 0. The quasi-i-step-ahead predictions ŷ0(k+i) (i = 1, 2, ..., M) can be obtained by replacing u(k) with u(k-1) in eq 13, i.e.,

\hat{y}_0(k+i) = H_{i,1}(1)\, u(k-1) + H_{i,2}(q^{-1})\, u(k-1) + G_i(q^{-1})\, y(k) + F_i(q^{-1})\, \hat{\phi}_0(k+i-1), \qquad i = 1, ..., M \qquad (17)

Replacing φ̂(k+i-1) with φ̂0(k+i-1) in eq 13 and using eq 17, the simplified prediction equation is obtained:

\hat{y}(k+i) = H_{i,1}(1)\,[u(k) - u(k-1)] + \hat{y}_0(k+i) \qquad (18)
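A possible implementation of the quasi-prediction roll-out of eqs 15-17 is sketched below. It reuses the hypothetical predictor_polynomials routine from the earlier sketch and assumes a callable nn that stands in for NN(·, θ2); histories are ordered with the most recent sample last, and all names are illustrative.

```python
import numpy as np

def quasi_predictions(polys, y_hist, u_hist, nn, n, m, M):
    """Roll eqs 15-17 forward: returns y0[i] = y^0(k+i) and phi0[i-1], i = 1..M.

    polys  : list from predictor_polynomials(); entry i-1 holds (F_i, G_i, H_i1, H_i2)
    y_hist : measured outputs [..., y(k-1), y(k)]   (at least n values)
    u_hist : applied inputs   [..., u(k-2), u(k-1)] (at least m+1 values)
    nn     : assumed callable standing in for NN(x, theta2)
    """
    y0, phi0 = {}, {}
    def y_at(j):            # y^0(k+j): measurement for j <= 0, prediction for j > 0
        return y_hist[-1 + j] if j <= 0 else y0[j]
    def u_at(j):            # u^(k+j) of eq 16: frozen at u(k-1) for j >= 0
        return u_hist[-1] if j >= 0 else u_hist[j]
    for i in range(1, M + 1):
        F, G, H1, H2 = polys[i - 1]
        x_hat = np.array([y_at(i - 1 - d) for d in range(n)] +
                         [u_at(i - 1 - d) for d in range(m + 1)])
        phi0[i - 1] = nn(x_hat)                                        # eq 15
        y0[i] = (H1.sum() * u_at(0)                                    # H_{i,1}(1) u(k-1)
                 + sum(c * u_at(-1 - d) for d, c in enumerate(H2))     # H_{i,2}(q^-1) u(k-1)
                 + sum(c * y_at(-d) for d, c in enumerate(G))          # G_i(q^-1) y(k)
                 + sum(c * phi0[i - 1 - d] for d, c in enumerate(F)))  # F_i(q^-1) phi^0(k+i-1)
    return y0, phi0
```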

Substituting eq 18 into eq 14 makes the minimization of eq 14 linear in u(k), with the analytic solution

u(k) = u(k-1) + \left(\lambda + \sum_{i=1}^{M} H_{i,1}(1)^2\right)^{-1} \bar{H}_1(q^{-1})\,[w(k+M) - \hat{y}_0(k+M)] \qquad (19)

where

\bar{H}_1(q^{-1}) = H_{M,1}(1) + H_{M-1,1}(1)\, q^{-1} + ... + H_{1,1}(1)\, q^{-M+1}

The ŷ0(k+i) (i = 1, 2, ..., M) in the control law (19) can be easily calculated by using eqs 15 and 17 in turn. It can be seen from eq 15 that the φ̂0(k+i-1) (i = 1, 2, ..., M) are the outputs produced by the MFNN with the input x̂(k+i-1). The process nonlinearity, characterized by φ(k+i-1), is thus compensated in the control law by the term φ̂0(k+i-1).
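Given the quasi-predictions, the control law (19) reduces to a single weighted sum (written here as the equivalent sum over i = 1, ..., M). A minimal sketch, reusing the outputs of the roll-out sketch above:

```python
def control_move(polys, y0, w_future, u_prev, lam):
    """Analytical control law of eq 19, written as the equivalent sum over i."""
    num, den = 0.0, lam
    for i in range(1, len(y0) + 1):
        Hi1_at_1 = polys[i - 1][2].sum()              # H_{i,1}(1)
        num += Hi1_at_1 * (w_future[i - 1] - y0[i])   # H_{i,1}(1) [w(k+i) - y^0(k+i)]
        den += Hi1_at_1 ** 2
    return u_prev + num / den                         # u(k) = u(k-1) + ...
```

Only M and λ remain as tuning parameters, and each control update costs a single pass over the M predictions, which is the computational advantage claimed for the analytical law.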

In the case where the controlled process dynamics are unknown, i.e., the parameter vectors θ1 and θ2 in eq 1 are unknown, an identification method needs to be developed for their estimation. This will be discussed in the next section.

3. Identification Algorithm

Given a set of process input-output data {y(k), x(k-1); k = 1, 2, ..., N}, the parameter vectors θ1 and θ2 can be estimated by minimizing the following cost function:

I(\theta_1, \theta_2) = \sum_{i=1}^{N} [y(i) - x(i-1)^T \theta_1 - NN(x(i-1), \theta_2)]^2 \qquad (20)

This is a typical nonlinear optimization problem, and it can be solved by many standard nonlinear optimization methods, e.g., the gradient descent method. A more effective and simpler algorithm, however, can be formulated by making use of the unique structure of the process model (1). We propose the following iterative optimization procedure for the estimation of θ1 and θ2. First, make an initial guess for θ2, say θ2^0, and minimize I(θ1, θ2^0) to obtain the optimal θ1^1; with this optimized θ1^1, I(θ1^1, θ2) is then minimized to obtain the optimal θ2^1. These steps can be repeated, in theory, until the modeling error ξ(k) is sufficiently small. Because the process nonlinearity is modest, the output of the MFNN can be expected to be relatively small, implying that the optimized weight vector θ2 lies in the neighborhood of the origin; θ2^0 = 0 is therefore a well-chosen initial guess. Extensive simulations indicate that, with the initial θ2^0 = 0, one round of optimization (i.e., θ1 = θ1^1 and θ2 = θ2^1) yields satisfactory model parameters. Further iterations give only a little improvement and are thus unnecessary in practice.

The above can be summarized into the following off-line and on-line estimation procedures.

Off-Line Estimation Algorithm. (i) Set θ2 = 0 in eq 20, and minimize eq 20 with respect to θ1 to produce the optimum value of θ1, denoted by θ1*. This results in

\theta_1^* = [X^T X]^{-1} X^T Y \qquad (21)

where

Y = [y(1), y(2), ..., y(N)]^T \quad \text{and} \quad X = [x(0), x(1), ..., x(N-1)]^T

Alternatively, θ1* can be determined equivalently by the recursive least-squares algorithm:

\theta_1(i) = \theta_1(i-1) + K(i)\,[y(i) - x^T(i-1)\, \theta_1(i-1)] \qquad (22a)

K(i) = P(i-1)\, x(i-1)\,[x^T(i-1)\, P(i-1)\, x(i-1) + 1]^{-1} \qquad (22b)

P(i) = [I - K(i)\, x^T(i)]\, P(i-1) \qquad (22c)

for i = 1, 2, ..., N, with P(0) = R I (R is a large positive real number) and θ1(0) = 0. The optimum value of θ1 is then obtained by setting θ1* = θ1(N).

(ii) Set θ1 = θ1* in eq 20, and minimize eq 20 with respect to θ2. A number of optimization schemes can be used to find the optimum θ2; the gradient descent method is used here for simplicity. The optimum θ2, denoted by θ2*, is calculated iteratively:


\theta_2(t) = \theta_2(t-1) - \eta\, \frac{\partial I}{\partial \theta_2(t-1)}
            = \theta_2(t-1) + \eta \sum_{i=1}^{N} [y_N(i) - NN(x(i-1), \theta_2(t-1))]\, \frac{\partial NN(x(i-1), \theta_2(t-1))}{\partial \theta_2(t-1)}

\theta_2(0) = 0; \qquad t = 1, 2, ..., T \qquad (23)

where y_N(i) = y(i) - x^T(i-1) θ1* and η is the learning rate. The derivative of the network output with respect to the weight vector θ2 can be calculated by the well-known backpropagation (BP) algorithm.14 The iteration is repeated until the cost function I is less than a prespecified value or the maximum number of iterations, T, is reached. Suppose the iteration stops at t = t* (≤ T); then set θ2* = θ2(t*).
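The two-step off-line procedure can be sketched as follows, assuming NumPy and two hypothetical callables, nn(x, theta2) for the MFNN output and nn_grad(x, theta2) for its gradient with respect to the weights (in practice obtained by backpropagation); lstsq is used here in place of explicitly forming [X^T X]^{-1}:

```python
import numpy as np

def offline_identify(X, Y, nn, nn_grad, theta2_init, eta=1e-3, T=10_000):
    """Off-line estimation (sketch): step (i) solves eq 21 by least squares,
    step (ii) runs the gradient iteration of eq 23 on the LM residuals.

    X       : N x (n+m+1) matrix whose rows are x(i-1)^T
    Y       : length-N vector of outputs y(i)
    nn      : assumed callable, nn(x, theta2) standing in for NN(x, theta2)
    nn_grad : assumed callable returning dNN/dtheta2 at (x, theta2)
    """
    # (i) theta1* of eq 21; lstsq avoids forming the inverse explicitly
    theta1, *_ = np.linalg.lstsq(X, Y, rcond=None)
    # (ii) gradient iteration of eq 23 on residuals y_N(i) = y(i) - x(i-1)^T theta1*
    yN = Y - X @ theta1
    theta2 = np.asarray(theta2_init, dtype=float)     # theta2(0) = 0 in the paper
    for _ in range(T):
        grad = sum((yN[i] - nn(X[i], theta2)) * nn_grad(X[i], theta2)
                   for i in range(len(Y)))
        theta2 = theta2 + eta * grad                  # update of eq 23
    return theta1, theta2
```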

On-Line Estimation Algorithm. On-line estimation may be used to capture in real time the changes in the process dynamics. Assuming a pair of new process input-output data {y(k), x(k-1)} is available at time instant k, the parameter vectors θ1 and θ2 are then updated by the following formulas:

(i)

\theta_1(k) = \theta_1(k-1) + K(k)\,[y_L(k) - x^T(k-1)\, \theta_1(k-1)] \qquad (24a)

K(k) = P(k-1)\, x(k-1)\,[x^T(k-1)\, P(k-1)\, x(k-1) + 1]^{-1} \qquad (24b)

P(k) = [I - K(k)\, x^T(k)]\, P(k-1), \qquad P(0) = R I \qquad (24c)

where y_L(k) = y(k) - NN(x(k-1), θ2(k-1)). The initial values θ1(0) and θ2(0) are provided by the off-line estimation algorithm, i.e., θ1(0) = θ1* and θ2(0) = θ2*.

(ii)

\bar{\theta}_2(t) = \bar{\theta}_2(t-1) + \eta\,[y_N(k) - NN(x(k-1), \bar{\theta}_2(t-1))]\, \frac{\partial NN(x(k-1), \bar{\theta}_2(t-1))}{\partial \bar{\theta}_2(t-1)}, \qquad \bar{\theta}_2(0) = \theta_2(k-1), \quad t = 1, 2, ..., T^* \qquad (25a)

\theta_2(k) = \bar{\theta}_2(T^*) \qquad (25b)

where y_N(k) = y(k) - x^T(k-1) θ1(k) and T* is the number of iterations in each sampling period.

The proposed model structure allows the LM's parameters to be estimated in recursive least-squares form. A good adaptive ability of the algorithm can therefore be expected when it is used in an on-line fashion.
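A single on-line update, combining the RLS step of eq 24 with T* gradient steps of eq 25, might look as follows (same hypothetical nn and nn_grad callables as in the off-line sketch; note that this sketch writes the covariance update with the regressor x(k-1), the standard RLS form):

```python
import numpy as np

def online_update(theta1, theta2, P, x_prev, y_new, nn, nn_grad, eta=1e-3, T_star=1):
    """One on-line update (sketch of eqs 24 and 25) for a new pair {y(k), x(k-1)}."""
    # (i) RLS step for the LM, driven by y_L(k) = y(k) - NN(x(k-1), theta2(k-1))
    yL = y_new - nn(x_prev, theta2)
    Px = P @ x_prev
    K = Px / (x_prev @ Px + 1.0)                   # eq 24b
    theta1 = theta1 + K * (yL - x_prev @ theta1)   # eq 24a
    P = P - np.outer(K, x_prev) @ P                # eq 24c, with the regressor x(k-1)
    # (ii) T* gradient steps for the MFNN, driven by y_N(k) = y(k) - x(k-1)^T theta1(k)
    yN = y_new - x_prev @ theta1
    for _ in range(T_star):
        theta2 = theta2 + eta * (yN - nn(x_prev, theta2)) * nn_grad(x_prev, theta2)  # eq 25a
    return theta1, theta2, P                       # returned theta2 is theta2(k) of eq 25b
```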

4. Simulation

In this section, simulations are conducted to verify the proposed identification and control algorithms with two process examples. The first is a 25-tray binary distillation column. The second is a third-order nonlinear dynamic model, used to illustrate the algorithm's performance for a more complex nonlinear process.

Example 1. The distillation process model is as follows:15

y(k) = 0.757\, y(k-1) + 0.243\, g(u(k-1)), \qquad g(x) = 1.04x - 14.11x^2 - 16.72x^3 + 562.7x^4 \qquad (26)

It relates the top column composition y (%) to the reflux flow rate u (mol/min). Both the input and output variables in the model are defined as deviations from their nominal values. A detailed description of the model can be found in Eskinat et al.15

Model Identification. In this simulation, x(k) = [y(k), u(k)]^T. The MFNN is chosen to be a three-layer feedforward neural network with 2 inputs, 12 hidden units, and 1 output. The hidden-layer neurons use the sigmoidal activation function, while the input- and output-layer neurons use a linear activation function. One hundred pairs of input-output training data are first generated with a random input uniformly distributed from -0.1 to +0.1, and the data are then used for the identification of the LM and the training of the MFNN with the off-line estimation algorithm proposed in the preceding section. The initial value of the covariance matrix is set to P(0) = 10^3 I. The network learning rate and the iteration number are set to η = 0.001 and T = 10 000, respectively. The process output, the LM output, and the LM plus MFNN output, generated by the same input, are shown in Figure 1a, while the LM's modeling error and the MFNN output are shown in Figure 1b. It can be clearly seen that the LM captures the dominant part of the process dynamics, but with a significant modeling error. The MFNN approximates the LM's modeling error, i.e., the process nonlinearity, quite well, as shown in Figure 1b. This demonstrates that a nonlinear process can be modeled by a LM together with a MFNN to a satisfactory accuracy.

Figure 1. Model validation: (a) LM plus MFNN approximation; (b) MFNN nonlinearity approximation.

To illustrate that iteration does not produce significant improvement of the model performance, the off-line estimation algorithm is executed once more, with the parameters optimized in the first iteration used as the initial values for the second iteration, i.e., θ2^0 = θ2*. The new solution θ1 = [0.932, 1.008]^T is not much different from the first-iteration result, θ1 = [0.937, 1.014]^T, and the same is true for θ2. Therefore, a reasonably good model can be obtained without iterating the off-line modeling procedure.
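For reference, the identification data described above can be generated directly from eq 26; the sketch below (an illustrative seed and a zero initial condition are assumptions of this sketch) builds the 100 regressor/target pairs, which would then be passed to an off-line estimator such as the hypothetical offline_identify routine sketched earlier:

```python
import numpy as np

rng = np.random.default_rng(0)                  # seed chosen arbitrarily for this sketch

def g(x):                                       # static nonlinearity of eq 26
    return 1.04*x - 14.11*x**2 - 16.72*x**3 + 562.7*x**4

# 100 identification points from eq 26 with u uniform on (-0.1, 0.1),
# starting from the nominal point y = 0 (deviation variables).
N = 100
u = rng.uniform(-0.1, 0.1, N)
y = np.zeros(N + 1)
for k in range(N):
    y[k + 1] = 0.757*y[k] + 0.243*g(u[k])
X = np.column_stack([y[:N], u])                 # rows are x(k)^T = [y(k), u(k)]
Y = y[1:]                                       # targets y(k+1)
# theta1, theta2 = offline_identify(X, Y, nn, nn_grad, np.zeros(num_weights))
```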

Predictive Control. The distillation process is used to test the proposed control algorithm. A simplification was made in obtaining the analytical control law; the first simulation case therefore compares the control performance of the proposed algorithm with and without this simplification. Both adaptive and nonadaptive control algorithms have been proposed in this work; the second case compares their control performance. Last, the distillation model is used to compare the proposed adaptive control with an adaptive linear model control.

Case 1. This case compares the control performance of the proposed nonadaptive algorithm with the control obtained from the exact optimal solution of the quadratic criterion of eq 14, computed by a modified Newton-type algorithm.16 The off-line identified model is used for both algorithms. The control parameters M = 4 and λ = 0.01 are used in the simulation. Figure 2 shows that the proposed simplified algorithm gives results similar to those of the exact solution, except that the exact solution has a small overshoot and a smaller offset. This suggests that the simplification in the proposed algorithm is reasonable. The offset of the exact solution is due to the modeling error, while the offset of the proposed control is due to the combined effect of the modeling error and the control algorithm simplification. The offset of the proposed control can be easily eliminated via an error feedback scheme or by using the proposed adaptation scheme.

Figure 2. Comparison of the proposed simplified control with the exact control solution.

Case 2. The nonadaptive and adaptive versions of the proposed control strategy are compared in this case. The off-line identified model is used for the nonadaptive scheme and as the initial model for the adaptive scheme. In the adaptive case, the composite model parameters θ1 and θ2 are updated on-line using the proposed on-line estimation. The initial covariance matrix P(0) and the network learning rate η are set to the same values as in the model identification, and the iteration number T* is set to 1 for simplicity. The same control parameters as in case 1 are used. The control results are shown in Figure 3a for the adaptive version (solid line) and the nonadaptive version (dotted line), and the time evolution of the LM's estimated parameters under the adaptive algorithm is graphed in Figure 3b. The adaptive version of the proposed algorithm has the advantage of eliminating the steady-state error. This suggests that the adaptive version should be used when confidence in the off-line model is low.

Figure 3. Comparison of the proposed algorithm with and without adaptation: (a) setpoint response comparison; (b) time evolution of the LM estimated parameters.

Case 3. This simulation compares, in an adaptive sense, the proposed algorithm with a linear model predictive controller. The linear model adaptive predictive control (LMAPC) is obtained by replacing φ̂0(k+i-1) in eq 17 with e(k) = y(k) - ŷL(k), where y(k) is the process output and ŷL(k) is the linear model output. The prediction horizon M and the weighting factor λ in the control law (19) are tuned to give a satisfactory control performance for the LMAPC, resulting in M = 4 and λ = 0.01. The linear model parameters are adjusted on-line by the recursive least-squares algorithm. The other parameters for the proposed adaptive algorithm are the same as those in case 1. The control task is to track a setpoint step change. The tracking results are shown in Figure 4 for the proposed algorithm (solid line) and the LMAPC (dot-dashed line). The proposed algorithm tracks the setpoint well, without overshoot or oscillation, while some small oscillation is associated with the LMAPC. This indicates that better control is achieved with the proposed algorithm. The superiority of the proposed algorithm over the LMAPC may be attributed to the fact that the MFNN model used here has the ability to predict the errors, rather than relying on a simple approximation of the linear model error.

Figure 4. Response comparison between the proposed algorithm and a linear model adaptive predictive control (LMAPC).
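The receding-horizon loop used in these cases can be summarized by the following skeleton, which ties together the earlier sketches. It is not the paper's simulation code: the MFNN is replaced by a zero stub (so the loop degenerates to linear predictive control), the LM is taken as the linearization of eq 26 at the origin rather than the identified model, and the setpoint is a unit step; M = 4 and λ = 0.01 follow case 1.

```python
import numpy as np

def g(x):                                        # static nonlinearity of eq 26
    return 1.04*x - 14.11*x**2 - 16.72*x**3 + 562.7*x**4

def plant_step(y_prev, u_prev):                  # eq 26
    return 0.757*y_prev + 0.243*g(u_prev)

nn = lambda x: 0.0                               # zero stub in place of the trained MFNN
n, m, M, lam = 1, 0, 4, 0.01
polys = predictor_polynomials([-0.757], [0.243 * 1.04], M)   # LM = linearization of eq 26
y_hist, u_hist = [0.0] * 5, [0.0] * 5            # zero (deviation-variable) history
for k in range(100):
    y0, _ = quasi_predictions(polys, y_hist, u_hist, nn, n, m, M)
    w_future = [1.0] * M                         # unit step setpoint (assumed)
    u_k = control_move(polys, y0, w_future, u_hist[-1], lam)
    u_hist.append(u_k)
    y_hist.append(plant_step(y_hist[-1], u_k))   # apply only the first move, then repeat
```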


Example 2. A more complicated (higher-order) nonlinear process,

y(k+1) = \frac{2.5\, y(k)\, y(k-1)}{1 + y(k)^2 + y(k-1)^2 + y(k-2)^2} + 0.4\, y(k-3) + u(k) + 1.1\, u(k-1) \qquad (27)

is used to compare the proposed algorithm and the LMAPC. The procedure of case 3 of the previous example is repeated here, and the same control parameters (M = 4 and λ = 0.01) are used in this simulation. The control task in this example is to track a composite setpoint profile consisting of a ramp, a step change, and sinusoidal signals. The tracking results are shown in Figure 5 for the proposed algorithm (solid line) and the LMAPC (dotted line). The linear model adaptive control exhibits strong oscillation and significant offset, while the proposed algorithm tracks the setpoint profile reasonably well, indicating much better control than the LMAPC. Other sets of control parameters have been tried, and similar behavior is observed. It should be pointed out that the offset in the linear model adaptive control may be removed by including an integrator in the noise model of eq 2, at the cost of a deteriorated transient performance.

Figure 5. Tracking result comparison for the proposed algorithm and the LMAPC for a higher-order process.
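For completeness, eq 27 as a one-step plant update (the most-recent-last indexing convention is an assumption of this sketch):

```python
def example2_step(y, u):
    """One step of the Example 2 process, eq 27: returns y(k+1).

    y : sequence with y[-1] = y(k), ..., y[-4] = y(k-3)
    u : sequence with u[-1] = u(k), u[-2] = u(k-1)
    """
    return (2.5 * y[-1] * y[-2] / (1.0 + y[-1]**2 + y[-2]**2 + y[-3]**2)
            + 0.4 * y[-4] + u[-1] + 1.1 * u[-2])
```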

5. Conclusion

Analytical predictive control laws, in both adaptive and nonadaptive forms, have been presented for a class of nonlinear processes. A LM plus an MFNN is proposed to model such a class of nonlinear processes, together with the corresponding identification strategy. With this model representation, a simple analytic predictive control law has been derived. Adaptive predictive control, if necessary, can be easily implemented for the process via the proposed on-line estimation algorithm. The simulations show that the proposed LM plus MFNN model structure is suitable for modeling a nonlinear process and that the proposed predictive control algorithm gives good control performance. It is believed that the proposed algorithm will be useful for many chemical processes where the nonlinearity is modest and where intensive computation is not allowed.

Literature Cited

(1) Clarke, D. W.; Mohtadi, C.; Tuffs, P. S. Generalized Predictive Control, Part 1: Basic Algorithm. Automatica 1987, 23, 137.
(2) Byun, D. G.; Kwon, W. H. Predictive Control: A Review and Some New Stability Results. Proceedings of the IFAC Workshop on Model Based Process Control, Oxford, England, 1989.
(3) Muske, K. R.; Rawlings, J. B. Model Predictive Control with Linear Models. AIChE J. 1993, 39, 262.
(4) Qin, S. J.; Badgwell, T. A. An Overview of Industrial Model Predictive Control Technology. AIChE Symp. Ser. 1997, 93, 232.
(5) Wright, G. T.; Edgar, T. F. Nonlinear Model Predictive Control of a Fixed-Bed Water-Gas Shift Reactor: An Experimental Study. Comput. Chem. Eng. 1994, 18, 83.
(6) Peng, C. Y.; Jang, S. S. Fractal Analysis of Time-Series Rule Based Model and Nonlinear Model Predictive Control. Ind. Eng. Chem. Res. 1996, 35, 2261.
(7) Arpad, B.; Ferenc, S.; Tibor, C. Convolution Model Based Predictive Controller for a Nonlinear Process. Ind. Eng. Chem. Res. 1999, 38, 154.
(8) Jose, R. N.; Wang, H. A Direct Adaptive Neural-Network Control for Unknown Nonlinear Systems and Its Application. IEEE Trans. Neural Networks 1998, 9, 27.
(9) Tan, Y.; Keyser, R. Neural Network-Based Adaptive Predictive Control. Proceedings of the Advances in Model-Based Predictive Control Conference, Oxford, England, 1993.
(10) Kim, S. J.; Lee, M.; Park, S.; Lee, S. Y.; Park, C. H. A Neural Linearizing Control Scheme for Nonlinear Chemical Processes. Comput. Chem. Eng. 1997, 21, 187.
(11) Mutha, R. K.; Cluett, W. R.; Penlidis, A. Nonlinear Model-Based Predictive Control of Control Nonaffine Systems. Automatica 1997, 33, 907.
(12) Guay, M.; McLellan, P. J.; Bacon, D. W. Measurement of Nonlinearity in Chemical Process Control Systems: The Steady-State Map. Can. J. Chem. Eng. 1995, 73, 868.
(13) Guay, M.; McLellan, P. J.; Bacon, D. W. Measure of Closed-Loop Nonlinearity and Interaction for Nonlinear Chemical Processes. AIChE J. 1997, 43, 2261.
(14) Narendra, K. S.; Parthasarathy, K. Identification and Control of Dynamical Systems Using Neural Networks. IEEE Trans. Neural Networks 1990, 1, 4.
(15) Eskinat, E.; Johnson, S. H.; Luyben, W. L. Use of Hammerstein Models in Identification of Nonlinear Systems. AIChE J. 1991, 37, 255.
(16) Chen, J. Systematic Derivations of Model Predictive Control Based on Artificial Neural Network. Chem. Eng. Commun. 1998, 164, 35.

Received for review March 22, 1999
Revised manuscript received February 25, 2000
Accepted March 16, 2000

IE9902176