Adaptive Linearizing Control with Neural-Network-Based Hybrid Models

Mohamed Azlan Hussain* and Pei Yee Ho
Department of Chemical Engineering, University of Malaya, 50603 Kuala Lumpur, Malaysia

J. C. Allwright Centre for Process Systems Engineering, Imperial College, London SW7 2BY, U.K.

A nonlinear control strategy is presented that combines a geometric feedback controller based on linearized models with neural networks that approximate the higher order terms. Online adaptation of the networks is performed using steepest descent with a dead-zone function. A closed-loop Lyapunov stability analysis is given, showing that the output tracking error is confined to a ball whose size depends on the accuracy of the neural network models. The proposed strategy is applied to two case studies for set-point tracking and disturbance rejection. The results show good tracking, comparable to that obtained when the actual model of the plant is utilized and better than that obtained when the linearized models or neural networks are used alone. A comparison is also made with a conventional proportional-integral-derivative (PID) controller.

1. Introduction

Research in the area of robust control of systems with unknown and nonlinear dynamics has been of considerable importance in recent years. Nonlinear control strategies involving geometric nonlinear control, such as feedback linearization, have been among the strategies most widely studied for the control of such systems. One of the main reasons for the development of geometric nonlinear control techniques from differential geometry is that they provide a unified, coordinate-free method for analyzing the structure of nonlinear differential systems.1 This provides the basis for a development of nonlinear control theory analogous to linear control theory. It is also able to handle process nonlinearities because it directly uses a model of the process in the controller design.

However, the design of a geometric controller relies heavily on the accuracy of the model over the operating region of the plant. It is normally difficult to obtain exact models, and most models represent only an approximate description of the plant. Kummel and Anderson2 have shown that the output sensitivity of geometric control toward modeling errors equals that of an open-loop controller. Other researchers have shown the dependence of these methods on disturbance handling and modeling errors and suggest that these linearization techniques can be very sensitive to these factors.3,4 There are only a few results, using numerical Lyapunov functions, which guarantee robust stability of systems with unmodeled dynamics under this linearization technique.5 To alleviate these problems, some researchers have used higher order approximations and polynomial functions to improve the robust performance of the global linearization technique and to characterize the operating region and the error of approximation.6

* To whom correspondence should be addressed. Tel.: 603-79675214. Fax: 603-79675319. E-mail: [email protected].

However, in recent years, modeling of nonlinear systems using neural networks and their application in control strategies have become fairly widespread and have been found to yield good results. It has been shown, using the universal approximation theorem, that these networks, if properly developed and trained, can approximate any nonlinear continuous function arbitrarily accurately.7 Artificial neural networks, the most common of which in engineering applications are multilayered feedforward networks, have also been widely applied in various control strategies.8-10 These networks approximate the function mapping from system inputs to outputs, given a set of observations of inputs and corresponding outputs, by adjusting their internal parameters, i.e., weights and biases, to minimize the squared error between the network's outputs and the desired outputs.

The use of neural networks to model uncertain nonlinear functions within the geometric control technique has been demonstrated by several researchers. Piovoso et al.11 have utilized neural network modeling in the generic model control framework, using discrete-time input-output values to estimate the unknown functions describing the plant. Nikolaou and Hanagandi12 have utilized recurrent neural networks to model the nonlinear plant, to exactly linearize the network, and to design and implement a linear controller for the linearized plant in a continuous stirred tank reactor (CSTR) system. Polycarpou and Ioannu13 have proposed using recurrent neural networks to model the unknown functions for a certainty-equivalence-type controller configuration. Jin et al.,14 Fregene and Kennedy,15 and Chen and Khalil16 have used neural networks to model the unknown functions in a feedback linearization strategy and in an adaptive mode wherein the networks are continually adapted online to perform output tracking. Recently, Kim et al.17 have utilized a radial basis function network to linearize the relationship between the output of the linear controller and the process output. The learning of the network



then proceeds adaptively to minimize the difference between the output of the linear reference model and the process output.

However, it is widely recognized that the modeling accuracy of a neural network depends on the data presented to it during training.18 Insufficient as well as noisy data can hamper the accuracy of the network modeling. Hence, it has been suggested that utilizing qualitative or otherwise available knowledge of the function to be modeled may be useful in overcoming data insufficiency, data sparsity, and noisy data. This also reduces the dependence of the control strategy solely on neural network models, which makes the hybrid strategy more robust in nature.19,20 The work described in this paper improves on the various methods mentioned above by incorporating both known knowledge (i.e., linearized models) and neural network models, which we call "hybrid models", in an adaptive output tracking nonlinear control strategy. Simulations performed with this method show the advantages of this approach.

This paper is arranged as follows: The next section reviews the mathematical preliminaries associated with the nonlinear geometric technique utilized in this work. The incorporation of neural networks in the hybrid model and their utilization in the adaptive linearizing control strategy are discussed in the following section, followed by the theoretical stability analysis of the proposed strategy. The analysis is then complemented with simulations of two case studies. The final section summarizes and discusses the results of this work.

2. Feedback Linearization: Mathematical Preliminaries

Before describing the proposed adaptive geometric nonlinear control strategy incorporating the hybrid model, we first describe in this section the relevant mathematical concepts relating to the geometric linearization technique. The mathematical preliminaries related to differential geometry and the finding of a state feedback and coordinate transformation to linearize the input-to-state relationship can be found in various references.21,22 We discuss below the relevant steps in implementing the theory of linearizing transformations for a single-input nonlinear affine system

\dot{x} = f(x) + g(x)\,u    (1)

where u is the scalar control input (u ∈ R), x = [x1, x2, ..., xn], i.e., x ∈ R^n, is the state vector, assumed to be available for feedback, and f(x) and g(x) are nonlinear functions of the states which are n-dimensional C^∞ vector fields. The linearization has the Brunovsky canonical form, namely:

\dot{z} = \hat{A} z + \hat{B} v    (2)

where

\hat{A} = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ 0 & 0 & 0 & \cdots & 0 \end{bmatrix}, \qquad \hat{B} = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}

z is the new state variable vector, and v is the new external reference input, which is given (with respect to u) by

v = L_f^{n} q(x) + L_g L_f^{n-1} q(x)\,u    (3)

where

L_g L_f^{n-1} q(x) \neq 0    (4)

This can then be arranged to produce the control input as

u = \alpha(x) + \beta(x)\,v    (5)

where

\alpha(x) = -\frac{L_f^{n} q(x)}{L_g L_f^{n-1} q(x)}    (6)

and

\beta(x) = \frac{1}{L_g L_f^{n-1} q(x)}    (7)

With these changes of coordinates and incorporation of the control input as in eq 5, the nonlinear system of eq 1 can then be transformed into the linear controllable form of eq 2. In this form, the design of tracking controllers based on linear systems theory, such as state feedback control laws, can easily be applied. This is achieved through specification of the new reference input, v. In this work we utilize state feedback augmented with trajectory-based command signals, i.e., y_d = [y_d(t), y_d^{(1)}(t), ..., y_d^{(n)}(t)], in the design of v. Here y_d represents the desired output, which is a continuously differentiable function on [0, +∞) with its first n derivatives y_d(t), y_d^{(1)}(t), ..., y_d^{(n)}(t) all uniformly bounded. Because in this case the output y equals the linearizing state component z_1, these signals can also be related to the components of the linearizing state vector, z. The new input, v, for the linearization technique in this case is given in the form

v = v_d - \sum_{k=1}^{n} c_{k-1} x_k(t)    (8)

where c_k, c_{k-1}, etc., are constant coefficients (with c_n = 1.0) and

v_d = \sum_{k=0}^{n} c_k y_d^{(k)}    (9)
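As a concrete illustration of eqs 8-11, the minimal Python sketch below picks the coefficients c_k by expanding a desired characteristic polynomial and then evaluates the reference input v for given states and desired-output derivatives. The second-order setting and the pole locations are assumptions made only for this example.

```python
import numpy as np

# Hypothetical second-order case (n = 2): place the roots of eq 11,
# N(s) = s^2 + c1*s + c0, at s = -1 and s = -2.
poles = [-1.0, -2.0]
c = np.poly(poles)[::-1]          # [c0, c1, c2] with c2 = cn = 1.0
n = len(poles)

def reference_input(x, yd_derivs):
    """v of eq 8 with vd of eq 9; yd_derivs = [yd, yd', ..., yd^(n)]."""
    vd = sum(c[k] * yd_derivs[k] for k in range(n + 1))
    return vd - sum(c[k - 1] * x[k - 1] for k in range(1, n + 1))

print(c)                                        # -> [2. 3. 1.]
print(reference_input([0.5, 0.0], [1.0, 0.0, 0.0]))
```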

If we define e(t) = y(t) - y_d(t) and then substitute v into eq 2, we get the following tracking error equation for the system:

e^{(n)} + c_{n-1} e^{(n-1)} + \cdots + c_0 e = 0    (10)

We can then ensure that the desired output y_d and its derivatives y_d^{(1)}(t), ..., y_d^{(n-1)}(t), all prespecified functions of time, are asymptotically tracked by choosing the real coefficients c_0, c_1, ..., c_{n-1} such that the roots of the polynomial

N(s) = s^{n} + c_{n-1} s^{n-1} + \cdots + c_0    (11)


have negative real parts with desirable pole placements, ensuring the stability of the closed-loop dynamics of the system. With ∆z = [e, e^{(1)}, ..., e^{(n-1)}]^T, it follows from eq 10 that the error equation for the idealized system can be written as

\dot{\Delta z} = C \Delta z    (12)

where

C = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ -c_0 & -c_1 & -c_2 & \cdots & -c_{n-1} \end{bmatrix}    (13)

is a Hurwitz matrix. Here the closed-loop stability of the idealized system can also be derived in the Lyapunov sense by defining the Lyapunov function as a quadratic form of the error (∆z) and showing the negative definiteness of its derivative along the state trajectory. However, for our proposed scheme using neural networks, the error equation has to be modified, and hence the stability analysis is not as straightforward as in this case, as will be seen in the next few sections. Note that, for convenience and ease of notation in all future descriptions in this paper, the variable x will be used in a general sense to denote the variables represented in the transformed canonical form.

3. Adaptive Feedback Linearizing Control with Hybrid Models

3.1. Feedback Linearization with Hybrid Models. As mentioned earlier, the application of geometric nonlinear techniques, such as the feedback linearizing method, relies on the accuracy of the functions f and g. However, in many practical cases, the nonlinear functions f and g are unknown or known only with some degree of uncertainty, e.g., in linearized models. Modeling of the functions f and g solely by neural networks, as was done by Jin et al.14 and Chen and Khalil,16 requires a good and wide spread of data for training the network and possibly a large number of hidden nodes and layers in some cases to ensure that the functions have been modeled accurately and that the networks are not under- or overparametrized. However, in many actual systems, linear or nonlinear, the model and dynamical behavior of the system are known to a certain extent or are known to be applicable within certain operating limits. Hence, these known models can usefully be retained within neural-network-based control strategies rather than discarded. This also helps reduce the problem of insufficient or inadequate data for training a neural network to model the whole unknown function.

In this study, we propose using the neural network to model the uncertain, unmodeled, and unknown parts of f and g, i.e., the neural network models the difference between the true functions and the linearized first-principles model. Here we suppose f and g consist of a linearized term, which we assume to be known, and higher order terms (i.e., second-order and higher terms) which are modeled by the neural network. The representations of f and g are taken to be

f(x) = f(x_e) + f'(x_e)\,\delta x_e + \hat{f}(x,w)    (14)

and

g(x) = g(x_e) + g'(x_e)\,\delta x_e + \hat{g}(x,r)    (15)

where \hat{f}(x,w) and \hat{g}(x,r) are the neural network models of the higher order terms, x_e is the point of linearization (normally the equilibrium point), and \delta x_e = x - x_e (w and r represent the parameters, or weights, of the networks). With this approach, the linearized feedback control law transforming the nonlinear system into a linear one becomes \hat{u}, and the canonical nonlinear representation of the system (with a negative sign added to the function f(x) for mathematical convenience in all of our subsequent analysis) under this control law is

\dot{x}_1 = x_2
\dot{x}_2 = x_3
\;\;\vdots
\dot{x}_{n-1} = x_n
\dot{x}_n = -f(x) + g(x)\,\hat{u}
y = x_1    (16)

where \hat{u} cancels the effect of f and g and replaces them with linear dynamics, given by

\hat{u} = \frac{v_d - \sum_{k=1}^{n} c_{k-1} x_k + f(x_e) + f'(x_e)\,\delta x_e + \hat{f}(x,w)}{g(x_e) + g'(x_e)\,\delta x_e + \hat{g}(x,r)}    (17)

= \hat{\alpha}(x) + \hat{\beta}(x)\,v_d    (18)

with

\hat{\alpha}(x) = \frac{f(x_e) + f'(x_e)\,\delta x_e + \hat{f}(x,w) - \sum_{k=1}^{n} c_{k-1} x_k}{g(x_e) + g'(x_e)\,\delta x_e + \hat{g}(x,r)}    (19)

\hat{\beta}(x) = \frac{1}{g(x_e) + g'(x_e)\,\delta x_e + \hat{g}(x,r)}    (20)
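A minimal Python sketch of the control law in eqs 17-20 is given below. The callables f_lin, g_lin (the linearized parts of eqs 14-15 evaluated about x_e) and f_nn, g_nn (the pretrained networks) are placeholders introduced only for this illustration, and the signs follow eqs 17-19 as written above.

```python
import numpy as np

def alpha_beta_hat(x, c, f_lin, g_lin, f_nn, g_nn):
    """alpha_hat and beta_hat of eqs 19-20 built from the hybrid model."""
    den = g_lin(x) + g_nn(x)                  # g(x_e) + g'(x_e) dx_e + g_hat(x, r)
    alpha = (f_lin(x) + f_nn(x) - np.dot(c, x)) / den
    beta = 1.0 / den
    return alpha, beta

def u_hat(x, vd, c, f_lin, g_lin, f_nn, g_nn):
    """Control input of eqs 17-18: u_hat = alpha_hat(x) + beta_hat(x) * vd."""
    alpha, beta = alpha_beta_hat(x, c, f_lin, g_lin, f_nn, g_nn)
    return alpha + beta * vd
```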

In this case, linear pole placement techniques can also be applied to determine the coefficients c_0, c_1, ..., c_{n-1} to ensure acceptable tracking initially and in the locally linearized region of operation. However, global closed-loop stability is not guaranteed as in the idealized case of eq 12, because it depends on the other terms of eq 18. The closed-loop stability of such a system has to be analyzed using Lyapunov-based methods, as will be shown later.

3.2. Adaptive Output Control with the Hybrid Model. The adaptive output tracking control strategy utilizing the approach described in the previous section is visualized in Figure 1. Here the functions \hat{f}(x,w) and \hat{g}(x,r), representing the higher order contributions to f and g, can be represented by multilayered neural networks of the form

\hat{f}(x,w) = \sum_{j=1}^{p} w_j H\!\left(\sum_{i=1}^{n} w_{ji} x_i + \hat{w}_j\right)    (21)

\hat{g}(x,r) = \sum_{j=1}^{q} r_j H\!\left(\sum_{i=1}^{n} r_{ji} x_i + \hat{r}_j\right)    (22)


Figure 1. Adaptive feedback linearization control strategy for output tracking using linearized models with neural networks.

where the w's and r's are the interconnection weights between the various layers within the neural network models representing \hat{f} and \hat{g}, respectively, and the \hat{w}'s and \hat{r}'s are the biases applied to the hidden layer of each network. Here p and q are the numbers of nonlinear hidden nodes (in one hidden layer) of the respective networks. The function H in the equations above is the hyperbolic tangent activation function, which is given by

H(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}    (23)

Many researchers have shown the advantages of utilizing this activation function within the hidden nodes to perform modeling and identification of nonlinear functions.23 In this approach, the networks (i.e., the weights and biases, the w's and r's) are pretrained to map the difference between the functions f(x) and g(x) and their linearized models, respectively, to a certain degree of accuracy prior to their incorporation in the adaptive control strategy. This gives reasonably good estimates of the weights, i.e., of w(t) and r(t), in the initial stage of implementation. The pretraining algorithm adopted in this study is the Levenberg-Marquardt method, which is a second-order optimization method.24 The Levenberg-Marquardt method is employed in the pretraining rather than the normal back-propagation algorithm25 because it is faster and more accurate near the minimum error. These weights are then adjusted adaptively online during implementation, using a gradient search method similar to the back-propagation method. Let the error index of the output of the neural networks be defined as

e^*(t) = y^{*(n)} - y_d^{(n)} = -[\hat{f}(x,w) + f_{\mathrm{lin}}] + [\hat{g}(x,r) + g_{\mathrm{lin}}]\,\hat{u} - y_d^{(n)}    (24)

where f_lin and g_lin are the linearizations of f(x) and g(x), respectively, and y_d is the desired value of y (note that y is related to the state x because y = x_1). The weights are then adjusted online using the updating rule given by

w(t+\delta t) = w(t) - \eta_1 \frac{\partial e^*}{\partial w}    (25)

and

r(t+\delta t) = r(t) - \eta_2 \frac{\partial e^*}{\partial r}    (26)

where \eta_1 and \eta_2 are the step-size parameters for the weight adjustment of each network, respectively, and

\frac{\partial e^*}{\partial w} = -\frac{\partial \hat{f}(x,w)}{\partial w}    (27)

\frac{\partial e^*}{\partial r} = \frac{\partial \hat{g}(x,r)}{\partial r}\,\hat{u}    (28)

The term \hat{u} which appears in eq 28 can easily be incorporated into the step-size parameter \eta_2. Equations 25-28 represent the normal back-propagation gradient descent technique. However, the gradient descent technique is slow near the minimum and difficult to implement online. To improve the rate of convergence near the minimum, we employ the dead-zone algorithm for updating the weights proposed by Chen and Khalil,26 where the dead-zone function D(e^*) is given by

D(e^*) = \begin{cases} 0 & \text{if } |e^*| \le d_0 \\ e^* - d_0 & \text{if } e^* > d_0 \\ e^* + d_0 & \text{if } e^* < -d_0 \end{cases}    (29)

The dead-zone approach is simple and suitable for online implementation. The output of the dead-zone function is then used in the updating rule above as follows:

w(t+\delta t) = w(t) - \eta_1 D(e^*) \frac{\partial e^*}{\partial w}    (30)

and

r(t+\delta t) = r(t) - \eta_2 D(e^*) \frac{\partial e^*}{\partial r}    (31)

It has also been proven by Jin et al.14 that the error index e^* approaches zero whenever the weight adaptation rules in the form given by eqs 30 and 31 are applied.
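The hybrid-model networks of eqs 21-22 and the dead-zone update of eqs 24 and 29-31 can be sketched in a few lines of Python. This is a simplified illustration, not the authors' implementation: for brevity only the output-layer weights are adapted, the step sizes eta1 and eta2 are arbitrary, and d_0 = 0.0001 is the dead-zone value quoted later for the case studies. Here f_lin and g_lin denote the values of the linearized model terms at the current state.

```python
import numpy as np

def mlp(x, W, b, w_out):
    """Single-hidden-layer tanh network of eqs 21-22: sum_j w_j H(W_j . x + b_j)."""
    return float(w_out @ np.tanh(W @ x + b))

def dead_zone(e, d0=1e-4):
    """Dead-zone function of eq 29."""
    if abs(e) <= d0:
        return 0.0
    return e - d0 if e > d0 else e + d0

def adapt_step(x, u_hat, yd_n, f_lin, g_lin, net_f, net_g,
               eta1=0.01, eta2=0.01, d0=1e-4):
    """One online update of the output weights using eqs 24, 27, 28, 30, and 31."""
    Wf, bf, wf = net_f
    Wg, bg, wg = net_g
    f_hat = mlp(x, Wf, bf, wf)
    g_hat = mlp(x, Wg, bg, wg)
    e_star = -(f_hat + f_lin) + (g_hat + g_lin) * u_hat - yd_n   # eq 24
    D = dead_zone(e_star, d0)
    hf = np.tanh(Wf @ x + bf)           # d f_hat / d wf
    hg = np.tanh(Wg @ x + bg)           # d g_hat / d wg
    wf -= eta1 * D * (-hf)              # eq 30 with eq 27
    wg -= eta2 * D * (hg * u_hat)       # eq 31 with eq 28
    return e_star
```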


4. Analysis of Closed-Loop Feedback Stability

To analyze the stability of the closed-loop system using the linearized neural network control law of eq 18, which incorporates both known models and the neural network approximation, some assumptions and lemmas relating to the norms of the various terms and functions are required. These are stated in the following section.

4.1. Assumptions. Assumption 1: For any x, x_e ∈ R^n,

0 < k_1 \le |g(x)|    (32)
0 < k_g \le |g(x_e)|    (33)
0 < k_f \le |f(x_e)|    (34)
0 < k'_f \le |f'(x_e)|    (35)
0 < k'_g \le |g'(x_e)|    (36)

where the k's are finite real numbers.

Assumption 2: For any t ∈ [0, +∞), the desired output y_d(t) and its first n derivatives y_d^{(1)}(t), ..., y_d^{(n)}(t) are known and uniformly bounded; that is,

|y_d^{(i)}(t)| \le m_i, \quad i = 0, 1, ..., n    (37)

The third assumption, which follows, concerns the capability of multilayered neural networks to approximate nonlinear functions, as reported by many researchers.27,28 In this study the neural networks are used to approximate the model uncertainty, i.e., to model the higher order terms f_h(x) and g_h(x), the difference between the actual and the linearized model, as given by the definitions

f(x) = f(x_e) + f'(x_e)(x - x_e) + f_h(x)    (38)

and

g(x) = g(x_e) + g'(x_e)(x - x_e) + g_h(x)    (39)

where f_h(x) and g_h(x) are abbreviations of f_h(x,x_e) and g_h(x,x_e), respectively.

Assumption 3: There exist weight coefficients w and r such that \hat{f}(x,w) and \hat{g}(x,r) approximate the continuous functions f_h(x) and g_h(x) with accuracy ε on Σ, a compact subset of R^n, i.e.,

\max_{x \in \Sigma} |\hat{f}(x,w) - f_h(x)| \le \varepsilon    (40)
\max_{x \in \Sigma} |\hat{g}(x,r) - g_h(x)| \le \varepsilon    (41)

The lemmas below show that multilayered neural networks with the hyperbolic tangent function in the hidden layer satisfy some algebraic properties on the compact set Σ. The effect of scaling and also the addition of an activation function in the output layer have also been considered.29

Lemma 1: The function H(x) is uniformly Lipschitz,14,30 because there exists a strictly positive constant β_1 such that for all x_1, x_2 ∈ R

|H(x_1) - H(x_2)| \le \beta_1 |x_1 - x_2|    (42)

Lemma 2: There exist constants β_{1w}, β_{2w} > 0 and β_{1r}, β_{2r} > 0 such that the neural network models of eqs 21 and 22 on Σ, a compact set of R^n, satisfy the following conditions:

|\hat{f}(x,w)| \le \beta_{1w}|x| + \beta_{2w}, \quad \forall x \in \Sigma    (43)
|\hat{g}(x,r)| \le \beta_{1r}|x| + \beta_{2r}, \quad \forall x \in \Sigma    (44)

where

\beta_{1w} = \sum_{j=1}^{p} |w_j|\,|\beta_1|\,|\bar{w}_j|    (45)

\beta_{2w} = \sum_{j=1}^{p} |w_j|\,|\beta_1|\,|\hat{w}_j|    (46)

\beta_{1r} = \sum_{j=1}^{q} |r_j|\,|\beta_1|\,|\bar{r}_j|    (47)

\beta_{2r} = \sum_{j=1}^{q} |r_j|\,|\beta_1|\,|\hat{r}_j|    (48)

where the w's, \bar{w}'s and the r's, \bar{r}'s refer to the weights of the networks representing f and g, respectively, and \hat{w} and \hat{r} refer to the respective biases.

Remark 1: Each input variable is normally scaled between 0 and 1 in accordance with the scaling used during training of the network. However, the scaling of the input variables can be accommodated by the weights and biases in the network and does not affect the final output of the network.

Remark 2: The output variable is also scaled to the region -1 to 1 or 0 to 1 in accordance with the training data. Hence, if the output is linearly scaled, the norm of the output of the network, as per eqs 43 and 44, must be modified accordingly.

Remark 3: In many cases, the functions f(x) and g(x), either one or both of them, need to be modeled by multilayered networks with an activation function applied in the output layer as well. The analysis then has to be modified accordingly.

Lemma 3: There exist constants \hat{k}_1, \hat{k}_2 > 0 such that the neural network \hat{g}(x,r) on the closed and bounded set Σ of R^n satisfies

0 < \hat{k}_1 \le |\hat{g}(x,r)| \le \hat{k}_2, \quad \forall x \in \Sigma    (49)
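As a small numerical illustration of Lemma 2, the Python sketch below computes the constants of eqs 45-48 for a given single-hidden-layer network, taking β_1 = 1 (the Lipschitz constant of tanh) and interpreting |w̄_j| as the 1-norm of the jth row of input weights; the random three-node network is an assumption made only for the demonstration.

```python
import numpy as np

def lemma2_bounds(W, b, w_out, beta1=1.0):
    """beta_1 and beta_2 of eqs 45-48 for one network of the form of eq 21 or 22."""
    row_norms = np.abs(W).sum(axis=1)                      # |w_j bar| (row 1-norms)
    b1 = float(np.sum(np.abs(w_out) * beta1 * row_norms))  # eq 45 / 47
    b2 = float(np.sum(np.abs(w_out) * beta1 * np.abs(b)))  # eq 46 / 48
    return b1, b2

rng = np.random.default_rng(0)
W, b, w_out = rng.normal(size=(3, 2)), rng.normal(size=3), rng.normal(size=3)
print(lemma2_bounds(W, b, w_out))
```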

We also assume in this methodology and analysis that all of the state variables, x_1 to x_n, are available for measurement.

4.2. Stability Theorem. The following theorem gives the stability, in the Lyapunov sense, of the adaptive learning control system on the compact set Σ.

Theorem: Under the assumptions and lemmas 1-3, there exists a constant δ_v such that the output tracking error of the nonlinear system on the compact set Σ using the neural network control law of eq 18 is confined to a ball G defined by G = B(0,δ_v), where B(0,δ_v) ≜ {∆x: ||∆x|| ≤ δ_v}. Here δ_v depends on ε, and δ_v → 0 as ε → 0.

Proof: To analyze the stability of the closed-loop system under this control action, \hat{u}, we first analyze the time derivative of the Lyapunov function along the state trajectories of the error dynamic equation of the system under the above control law. It can easily be shown that the error dynamic equation for this


system is given by

\dot{\Delta x} = C \Delta x + m(x,w,r) + n(x,r)\,v_d    (50)

where C is as defined in eq 12, \Delta x = [e, e^{(1)}, ..., e^{(n-1)}]^T, and

m(x,w,r) = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ g(x)\,[\hat{\alpha}(x,w,r) - \alpha(x)] \end{bmatrix}    (51)

n(x,r) = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ g(x)\,[\hat{\beta}(x,r) - \beta(x)] \end{bmatrix}    (52)

The definitions of \hat{\alpha} and \hat{\beta} are given in eqs 19 and 20, respectively. Next we define the Lyapunov function for the system as

V_0(\Delta x) = \Delta x^{T} P_0 \Delta x    (53)

where P_0 is a symmetric, positive-definite matrix solution of the Lyapunov equation

P_0 C + C^{T} P_0 = -I    (54)
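For a concrete feel for eq 54, the short Python sketch below solves for P_0 with SciPy for the n = 2 companion matrix of eq 13, using the coefficients c_0 = 0.5 and c_1 = 1 later employed in case study 1; scipy.linalg.solve_continuous_lyapunov is simply one convenient way to do this and is not part of the authors' procedure.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

c0, c1 = 0.5, 1.0
C = np.array([[0.0, 1.0],
              [-c0, -c1]])                 # Hurwitz matrix of eq 13 for n = 2

# Eq 54, P0 C + C^T P0 = -I, is a continuous Lyapunov equation in P0.
P0 = solve_continuous_lyapunov(C.T, -np.eye(2))
print(P0)
print(np.allclose(P0 @ C + C.T @ P0, -np.eye(2)))   # residual check -> True
print(np.all(np.linalg.eigvals(P0) > 0))            # P0 is positive definite -> True
```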

Evaluating the time derivative of the Lyapunov function along the state trajectories of the error dynamics equation, we get

\dot{V}_0 = -\Delta x^{T}\Delta x + 2(P_0\Delta x)^{T}[m(x,w,r)] + 2(P_0\Delta x)^{T}[n(x,r)]\,v_d    (55)

This derivative can be bounded by analyzing the bounds on the vector terms m(x,w,r) and n(x,r), i.e., expanding, simplifying, and taking their norms. First, looking at the last entry of the vector m(x,w,r) and utilizing the respective definitions of \alpha and \hat{\alpha}, we get

g(x)\,[\hat{\alpha}(x,w,r) - \alpha(x)] = g(x)\left(\frac{f(x_e) + f'(x_e)\,\delta x_e + \hat{f}(x,w) - \sum_{k=1}^{n} c_{k-1}x_k}{g(x_e) + g'(x_e)\,\delta x_e + \hat{g}(x,r)} - \frac{f(x) - \sum_{k=1}^{n} c_{k-1}x_k}{g(x)}\right)    (56)

= \frac{g(x)\big(f(x_e) + f'(x_e)\,\delta x_e + \hat{f}(x,w) - \sum_{k=1}^{n} c_{k-1}x_k\big) - \big(f(x) - \sum_{k=1}^{n} c_{k-1}x_k\big)\big(g(x_e) + g'(x_e)\,\delta x_e + \hat{g}(x,r)\big)}{g(x_e) + g'(x_e)\,\delta x_e + \hat{g}(x,r)}    (57)

= \frac{1}{g(x_e) + g'(x_e)\,\delta x_e + \hat{g}(x,r)}\Big(\big({\textstyle\sum_{k=1}^{n}} c_{k-1}x_k\big)g(x_e) + \big({\textstyle\sum_{k=1}^{n}} c_{k-1}x_k\big)g'(x_e)\,\delta x_e + \big({\textstyle\sum_{k=1}^{n}} c_{k-1}x_k\big)(\hat{g}(x,r) - g(x)) + g(x)\,f(x_e) + g(x)\,f'(x_e)\,\delta x_e + g(x)\,\hat{f}(x,w) - f(x)\,g(x_e) - f(x)\,g'(x_e)\,\delta x_e - f(x)\,\hat{g}(x,r)\Big)    (58)

However,

g(x) = g(x_e) + g'(x_e)\,\delta x_e + g_h(x)    (59)

and

f(x) = f(x_e) + f'(x_e)\,\delta x_e + f_h(x)    (60)

Hence, the term above can be expanded and rearranged as follows:

g(x)\,[\hat{\alpha}(x,w,r) - \alpha(x)] = \frac{1}{g(x_e) + g'(x_e)\,\delta x_e + \hat{g}(x,r)}\Big(\big({\textstyle\sum_{k=1}^{n}} c_{k-1}x_k\big)(\hat{g}(x,r) - g_h(x)) - \hat{f}(x,w)(\hat{g}(x,r) - g_h(x)) + g(x_e)(\hat{f}(x,w) - f_h(x)) + \hat{g}(x,r)(\hat{f}(x,w) - f_h(x)) - f(x_e)(\hat{g}(x,r) - g_h(x)) - f'(x_e)\,\delta x_e(\hat{g}(x,r) - g_h(x)) + g'(x_e)\,\delta x_e\,f(x_e) + g'(x_e)\,\delta x_e\,f'(x_e)\,\delta x_e + g'(x_e)\,\delta x_e\big(\hat{f}(x,w) - (f(x_e) + f'(x_e)\,\delta x_e + f_h(x))\big)\Big)    (61)

= \frac{1}{g(x_e) + g'(x_e)\,\delta x_e + \hat{g}(x,r)}\Big(\big({\textstyle\sum_{k=1}^{n}} c_{k-1}x_k\big)(\hat{g}(x,r) - g_h(x)) - \hat{f}(x,w)(\hat{g}(x,r) - g_h(x)) + g(x_e)(\hat{f}(x,w) - f_h(x)) + \hat{g}(x,r)(\hat{f}(x,w) - f_h(x)) - f(x_e)(\hat{g}(x,r) - g_h(x)) - f'(x_e)\,\delta x_e(\hat{g}(x,r) - g_h(x)) + g'(x_e)\,\delta x_e(\hat{f}(x,w) - f_h(x))\Big)    (62)

Next, analyzing the nth element of the vector term n(x,r) and utilizing the respective definitions of \beta and \hat{\beta}, we get

g(x)\,[\hat{\beta}(x,r) - \beta(x)] = g(x)\left(\frac{1}{g(x_e) + g'(x_e)\,\delta x_e + \hat{g}(x,r)} - \frac{1}{g(x)}\right)    (63)

= -\frac{\hat{g}(x,r) - g_h(x)}{g(x_e) + g'(x_e)\,\delta x_e + \hat{g}(x,r)}    (64)

From the derivative of the Lyapunov function and its bound given by eq 55, we get the inequality

\dot{V}_0 \le -\Delta x^{T}\Delta x + |2(P_0\Delta x)^{T}(m(x,w,r))| + |2(P_0\Delta x)^{T}(n(x,r))\,v_d| \le -\Delta x^{T}\Delta x + 2|P_0|\,|\Delta x|\,|m(x,w,r)| + 2|P_0|\,|\Delta x|\,|n(x,r)|\,|v_d|    (65)

By use of the assumptions and lemma above, we obtain


|m(x,w,r)| = |g(x)\,(\hat{\alpha}(x,w,r) - \alpha(x))| = \left|\frac{1}{g(x_e) + g'(x_e)\,\delta x_e + \hat{g}(x,r)}\right| \times \Big|\big({\textstyle\sum_{k=1}^{n}} c_{k-1}x_k\big)(\hat{g}(x,r) - g_h(x)) - \hat{f}(x,w)(\hat{g}(x,r) - g_h(x)) + g(x_e)(\hat{f}(x,w) - f_h(x)) + \hat{g}(x,r)(\hat{f}(x,w) - f_h(x)) - f(x_e)(\hat{g}(x,r) - g_h(x)) - f'(x_e)\,\delta x_e(\hat{g}(x,r) - g_h(x)) + g'(x_e)\,\delta x_e(\hat{f}(x,w) - f_h(x))\Big|    (66)

\le \frac{1}{|g(x_e) + g'(x_e)\,\delta x_e + \hat{g}(x,r)|}\Big(\big({\textstyle\sum_{k=1}^{n}} |c_{k-1}x_k|\big)|\hat{g}(x,r) - g_h(x)| + |\hat{f}(x,w)|\,|\hat{g}(x,r) - g_h(x)| + |g(x_e)|\,|\hat{f}(x,w) - f_h(x)| + |\hat{g}(x,r)|\,|\hat{f}(x,w) - f_h(x)| + |f(x_e)|\,|\hat{g}(x,r) - g_h(x)| + |f'(x_e)|\,|\delta x_e|\,|\hat{g}(x,r) - g_h(x)| + |g'(x_e)|\,|\delta x_e|\,|\hat{f}(x,w) - f_h(x)|\Big)    (67)

\le \frac{\varepsilon}{k_g + k'_g|\delta x_e| + \hat{k}_1}\big(|c|\,|x| + \beta_{1w}|x| + \beta_{2w} + \beta_{1r}|x| + \beta_{2r} + k_g + k_f + k'_f|\delta x_e| + k'_g|\delta x_e|\big)    (68)

Finally we get

|m(x,w,r)| = |g(x)\,(\hat{\alpha}(x,w,r) - \alpha(x))| \le |\Delta x|\,\delta_1 + \delta_2 + \delta_3    (69)

where

\delta_1 = \frac{\varepsilon(|c| + \beta_{1w} + \beta_{1r})}{k_g + k'_g|\delta x_e| + \hat{k}_1}    (70)

\delta_2 = \delta_1 |y_d| + \frac{\varepsilon(\beta_{2w} + \beta_{2r} + k_g + k_f)}{k_g + k'_g|\delta x_e| + \hat{k}_1}    (71)

\delta_3 = \frac{\varepsilon(k'_g + k'_f)|\delta x_e|}{k_g + k'_g|\delta x_e| + \hat{k}_1}    (72)

The relation of y_d to \Delta x and their definitions have been given in the previous sections. Analyzing the norm of the vector term n(x,r), we get

||n(x,r)|| = |g(x)\,(\hat{\beta}(x,r) - \beta(x))| = \left|-\frac{\hat{g}(x,r) - g_h(x)}{g(x_e) + g'(x_e)\,\delta x_e + \hat{g}(x,r)}\right|    (73)

\le \frac{|\hat{g}(x,r) - g_h(x)|}{|g(x_e)| + |g'(x_e)|\,|\delta x_e| + |\hat{g}(x,r)|}    (74)

\le \frac{\varepsilon}{k_g + k'_g|\delta x_e| + \hat{k}_1}    (75)

Substituting eqs 69 and 75 into eq 65, we get

\dot{V}_0(\Delta x) \le -|\Delta x|^2 + 2|P_0|\,|\Delta x|\,(|\Delta x|\,\delta_1 + \delta_2 + \delta_3) + 2|P_0|\,|\Delta x|\,|v_d|\frac{\varepsilon}{k_g + k'_g|\delta x_e| + \hat{k}_1}    (76)

\le -|\Delta x|(|\Delta x|\,\delta_4 - \delta_5)    (77)

where

\delta_4 = 1 - 2||P_0||\,\delta_1    (78)

\delta_5 = 2||P_0||\left(\delta_2 + \delta_3 + \frac{\varepsilon|v_d|}{k_g + k'_g||\delta x_e|| + \hat{k}_1}\right)    (79)

Thus, V_0 can be assured to be nonincreasing whenever ||∆x|| ≥ δ_v ≡ δ_5/δ_4, so that the output tracking error is confined to the ball G = B(0,δ_v), i.e., within a neighborhood of ∆x = 0 defined by ||∆x|| ≤ δ_v, which proves the stated theorem. Thereafter, we refer to the system as having achieved what we call "ball stability". Here δ_v can be made arbitrarily small as ε becomes small, i.e., as the prediction error of the neural network models becomes smaller.

5. Simulation Studies

The control strategy and its associated stability analysis are demonstrated through the two simulation case studies described in this section. Both case studies involve second-order single-input single-output nonlinear plants, which are typical nonlinear systems used to verify the application of such nonlinear control algorithms.31 The functions f(x) and g(x) required in this study are linearized in each case with respect to the (transformed) variable x. The proposed strategy is applied to these case studies for set-point tracking. For each case study, the adaptive linearizing control strategy with neural-network-based hybrid models is compared to four other control strategies, namely, adaptive linearizing control with the actual model, adaptive linearizing control with neural-network-based models, adaptive linearizing control with linearized models (about the equilibrium value), and the conventional proportional-integral-derivative (PID) controller. In addition, the controller performances are assessed when the output measurement is corrupted by noise. The proposed control strategy is also tested for plant-model mismatch and for external disturbances incorporating random walk changes. Before going into the case studies, we describe, in general, the procedure used to obtain the neural networks incorporated in these control strategies.

5.1. Neural Networks Training. The steps taken in choosing and training the neural network models to approximate the respective functions are similar, in general, to those used for performing system identification with neural networks.10 A number of networks using different numbers of hidden units are trained to approximate the functions. Network selection involves comparison of the networks by testing their generalization capabilities. For each network, two sets of training data are generated for the training. Training is switched from one set to the other in a technique similar to the "early stopping" method to improve the identification process and to obtain more robust neural network models. A validation data set with new and


unseen data is used for the validation of the trained neural networks. The final network models in this study were obtained when the sum of squared errors between the output and target values of the validation data set was satisfactorily small, i.e., smaller than about 0.1. To allow the neural network to learn the correct functional system nonlinearities, sufficient excitation must be present in the training data, and the data must span the range of probable operation. Multilevel pseudorandom sequence perturbation signals of varying frequencies are thus added to the input to obtain the training data sets. Ramp changes in the manipulated variable are used to generate the validation data set. With this, we train and validate the networks with realistic input and output values that will be encountered by the control systems during online implementation.

After the pretraining step, the neural network models are utilized in the controller together with the feedback linearizing portion, as shown in Figure 1. During the tracking process, the network weights are adjusted online at each control time based on the value of the calculated error index, e*. The control implementation time is decided based on the system under control and is at least equal to the measurement sampling time. Adjustment takes place when the error index exceeds the size of the dead zone, as shown in eqs 29-31. The value of the dead zone, d_0, used in these case studies is 0.0001.

5.2. Case Study 1. The first case study involves the control of an exothermic CSTR with a first-order reaction, as is commonly found in the process industries.32 The first-principles model for this system is

\frac{dC}{dt} = \frac{Q}{V}(C_f - C) - k(T)\,C    (80)

\frac{dT}{dt} = \frac{Q}{V}(T_f - T) + \frac{k(T)\,C(-\Delta H)}{\rho c_p} + \frac{UA}{\rho c_p V}(T_c - T)    (81)

where C is the concentration in the reactor, T is the temperature in the reactor, T_c is the temperature of the coolant, and k(T) = a e^{-B/T}. The nominal operating conditions are shown in Table 1.

Table 1. Nominal Operating Conditions for CSTR

C = 0.755 gmol/L          Q/V = 1 min^-1
C_f = 1.0 gmol/L          B = 6000 K
T = 317.16 K              a = 5.33685 × 10^7 min^-1
T_c = 300.02 K            -ΔH/(ρc_p) = 105 K·L/gmol
T_f = 300 K               UA/(ρc_p V) = 0.5 min^-1

Figure 2. Adaptive linearizing control of CSTR (step up tracking): hybrid models.
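A small Python sketch that integrates the open-loop model of eqs 80-81 at the Table 1 conditions is given below; it is only a check of the model and parameter values (with the coolant temperature held at its nominal value), not part of the control strategy.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameter values from Table 1
QV, B, a = 1.0, 6000.0, 5.33685e7        # Q/V [1/min], B [K], a [1/min]
dHrc, UArcV = 105.0, 0.5                 # -dH/(rho*cp) [K L/gmol], UA/(rho*cp*V) [1/min]
Cf, Tf, Tc = 1.0, 300.0, 300.02          # feed and coolant conditions

def cstr(t, y):
    """Open-loop CSTR model of eqs 80-81."""
    C, T = y
    k = a * np.exp(-B / T)
    dC = QV * (Cf - C) - k * C
    dT = QV * (Tf - T) + dHrc * k * C + UArcV * (Tc - T)
    return [dC, dT]

# Start slightly away from the nominal point; the state relaxes back toward
# C = 0.755 gmol/L, T = 317.16 K.
sol = solve_ivp(cstr, (0.0, 50.0), [0.80, 317.16], rtol=1e-8)
print(sol.y[:, -1])
```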

By utilizing the transformation

x_1 = C    (82)

x_2 = \frac{Q}{V}(C_f - C) - k(T)\,C    (83)

we obtain the equations

\dot{x}_1 = x_2    (84)

\dot{x}_2 = -\frac{Q}{V}x_2 - k(T)\,x_2 - \frac{x_1 B k(T)}{T^2}\left[\frac{Q(T_f - T)}{V} + \frac{k(T)\,x_1(-\Delta H)}{\rho c_p} - \frac{UA\,T}{\rho c_p V}\right] - \frac{x_1 B k(T)\,UA}{\rho c_p V T^2}\,T_c    (85)

where k(T) can be written in the form k(T) = [(Q/V)(C_f - x_1) - x_2]/x_1 and T = -B/\ln[k(T)/a]. In this case,

y = x_1    (86)

Figure 3. Adaptive linearizing control of CSTR (step down tracking): hybrid models.
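The relations quoted after eq 85 for recovering k(T) and T from the transformed states can be checked with a few lines of Python (parameter values from Table 1); this is only a sanity check of the coordinate change in eqs 82-83.

```python
import numpy as np

QV, B, a, Cf = 1.0, 6000.0, 5.33685e7, 1.0   # Table 1 values

def forward(C, T):
    """(C, T) -> (x1, x2) via eqs 82-83."""
    k = a * np.exp(-B / T)
    return C, QV * (Cf - C) - k * C

def backward(x1, x2):
    """(x1, x2) -> (k(T), T) using k(T) = [(Q/V)(Cf - x1) - x2]/x1, T = -B/ln(k/a)."""
    k = (QV * (Cf - x1) - x2) / x1
    return k, -B / np.log(k / a)

x1, x2 = forward(0.755, 317.16)
print(backward(x1, x2))      # recovers k(317.16 K) and T = 317.16 K
```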

With this transformation, a system of relative degree 2 is obtained. Hence, input-output linearization via feedback is achievable by the manipulation of the coolant temperature, Tc, to control the concentration in the reactor, C. Simulations were performed by utilizing the proposed strategy for step output tracking of the concentration, i.e., of x1, from the equilibrium value of 0.755 gmol/L to


Figure 4. Adaptive linearizing control of CSTR (step up tracking): actual models.

Figure 6. Adaptive linearizing control of CSTR (step up tracking): neural network models only.

Figure 5. Adaptive linearizing control of CSTR (step down tracking): actual models.

Figure 7. Adaptive linearizing control of CSTR (step down tracking): neural network models only.

higher and lower set values of 0.9 and 0.1 gmol/L. The value of T_c is constrained to the range 273 K ≤ T_c ≤ 373 K. Both networks, representing f_h and g_h, have three hidden nodes. The coefficients of the feedback error equation used were c_1 = 1 and c_0 = 0.5. The results for step tracking up and down using the proposed strategy can be seen in Figures 2 and 3, respectively. They show good step-change tracking with slight initial oscillations for both set-point changes. The strategy using the actual model also showed good tracking for both set-point changes with very little overshoot, which is expected when the actual, perfect model is used in the controller equation (Figures 4 and 5). The strategy utilizing the neural network model alone (Figures 6 and 7) showed higher oscillation during

the transient period and a slower convergence rate to the set-point compared to our proposed strategy. For the scheme using the linearized model alone, large offsets were observed in the results for both set-point changes (Figures 8 and 9). With the conventional PID controller, large overshoot is observed in the set-point tracking (Figures 10 and 11), which is the expected response when controlling such a nonlinear system using a linear control strategy. The proposed control strategy is also tested for a system with a plant-model mismatch, where a 10% decrease in the value of the parameter (-ΔH/ρc_p) is applied to the system. The result in Figure 12a shows that the controller can effectively reject the disturbance through the online weight adaptation technique. Parts b and


Figure 8. Adaptive linearizing control of CSTR (step up tracking): linearized models only.

Figure 10. CSTR: closed-loop response with a PID controller (step up tracking).

Figure 9. Adaptive linearizing control of CSTR (step down tracking): linearized models only.

Figure 11. CSTR: closed-loop response with a PID controller (step down tracking).

c of Figure 12 show the performances of the controller using neural network models alone and PID, respectively. A higher overshoot during the transient period is observed in Figure 12b, while an oscillatory behavior is observed in Figure 12c. To evaluate the proposed control strategy under real-world conditions, random noise with a normal distribution of zero mean and variance of 0.005 gmol/L was added to the concentration measurement. The closed-loop response of the system can be seen in Figure 13a. The system response stayed close to the set-point, with a slightly noisy output, obtained by modification of the coefficients of the feedback error equation. Figure 13b shows the corresponding system response under a PID control scheme. Quite similar responses were obtained

with the above method, but more overshoots and oscillations are observed during the transient period under this control strategy. A further condition involving a random walk change in the feed temperature (Tf) was added to the system during the set-point tracking to test the ability of the proposed control strategy toward external disturbances under real conditions. In this test, the feed temperature was varied within the range of 298-302 K. Figure 14a shows the responses during step up tracking under this condition. It can be seen that the system stayed close to the set-point with a slight oscillation, which is however typically accepted in practice. The system response under PID control is shown in Figure 14b, which showed quite similar behavior but again with more overshoots and oscillations.

5614

Ind. Eng. Chem. Res., Vol. 40, No. 23, 2001

Figure 12. CSTR: closed-loop responses under plant-model mismatch. (a) Adaptive linearizing control: hybrid models. (b) Adaptive linearizing control: neural network models only. (c) PID controller.

Figure 13. CSTR: closed-loop responses with measurement noise. (a) Adaptive linearizing control: hybrid models. (b) PID controller.

A comparison between the actual value of the functions (i.e., fh and gh) and the neural network’s output [i.e., ˆf(x,w) and gˆ (x,r)] during online set-point tracking (corresponding to Figure 2) can be seen in Figure 15. It shows that the networks were able to predict the nonlinear functions with very small error over the entire range. When the accuracy of the pretrained neural network models was reduced (e.g., when the sum of the squared error between the output and target values of validation data was set to a higher value of about 50), the output tracking error increased accordingly, as predicted by the stability analysis in section 4. This is

Figure 14. CSTR: closed-loop responses with external disturbance. (a) Adaptive linearizing control: hybrid models. (b) PID controller.

Figure 15. CSTR: comparison between actual values and neural network online prediction.

shown in Figure 16, where large offsets and a slow response time were observed.

5.3. Case Study 2. The second case study involves the control of a fermentation process in a continuous biochemical reactor described by the equations31

\dot{X} = -DX + \mu X    (87)

\dot{S} = D(S_f - S) - \frac{\mu}{Y_{X/S}} X    (88)

where X and S are the biomass and substrate concentrations, respectively, D is the dilution rate, S_f is the feed substrate concentration, and Y_{X/S} is the yield parameter.

Figure 16. Adaptive linearizing control of CSTR (with lower accuracy neural network models): hybrid models.

Figure 17. Adaptive linearizing control of the fermentation process (step up tracking): hybrid models.

Table 2. Nominal Operating Conditions for a Continuous Biochemical Reactor

X = 6.0 g/L           Y_X/S = 0.4 g/g
S = 0.872 g/L         μ_m = 0.48 h^-1
S_f = 15.87 g/L       K_m = 1.2 g/L
D = 0.202 h^-1

The specific growth rate \mu is modeled as

\mu = \frac{\mu_m S}{K_m + S}    (89)

where µm is the maximum specific growth rate and Km is a constant parameter. Nominal operating conditions are shown in Table 2. These equations are transformed into the canonical form by the transformation

x_1 = X    (90)

x_2 = -DX + \mu X    (91)

Figure 18. Adaptive linearizing control of the fermentation process (step down tracking): hybrid models.

resulting in the equations

\dot{x}_1 = x_2    (92)

\dot{x}_2 = \frac{x_2^2}{x_1} - \frac{(x_2 + Dx_1 - x_1\mu_m)^2(x_2 + Dx_1)}{K_m\mu_m x_1 Y_{X/S}} + \frac{(x_2 + Dx_1 - x_1\mu_m)(x_2 + Dx_1)D}{\mu_m x_1} + \frac{(x_2 + Dx_1 - x_1\mu_m)^2 D}{K_m\mu_m x_1}\,S_f    (93)

In this case,

y = x_1    (94)
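As with the CSTR, a short Python check of the fermentation model of eqs 87-89 at the Table 2 conditions is sketched below (with S_f held at its nominal value); it simply confirms that the listed operating point is a steady state of the model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameter values from Table 2
D, Sf, Y = 0.202, 15.87, 0.4      # h^-1, g/L, g/g
mu_m, Km = 0.48, 1.2              # h^-1, g/L

def bioreactor(t, y):
    """Fermentation model of eqs 87-89."""
    X, S = y
    mu = mu_m * S / (Km + S)
    return [(mu - D) * X, D * (Sf - S) - mu * X / Y]

sol = solve_ivp(bioreactor, (0.0, 100.0), [6.0, 0.872], rtol=1e-8)
print(sol.y[:, -1])   # stays at the nominal point X = 6.0 g/L, S = 0.872 g/L
```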

To obtain a relative degree 2 system for which input-output feedback linearization is achievable, the feed substrate concentration, S_f, is chosen as the manipulated variable to control the biomass concentration, X. For this case, the adaptive output tracking using the proposed strategy was performed for step changes in the controlled variable, x_1 (i.e., the biomass concentration, X), from its equilibrium value of 6.0 g/L to higher and lower set values of 10.0 and 3.0 g/L, respectively. The value of S_f is constrained to the range 1 g/L ≤ S_f ≤ 40 g/L. In this case the network estimating f_h has three hidden nodes, while that estimating g_h has four hidden nodes. The coefficients of the feedback error equation used were c_1 = 0.95 and c_0 = 0.2, respectively. The results for set-point tracking with up and down changes using the proposed hybrid models are shown


Figure 19. Adaptive linearizing control of the fermentation process (step up tracking): actual models.

Figure 21. Adaptive linearizing control of the fermentation process (step up tracking): neural network models only.

Figure 20. Adaptive linearizing control of the fermentation process (step down tracking): actual models.

Figure 22. Adaptive linearizing control of the fermentation process (step down tracking): neural network models only.

in Figures 17 and 18, respectively. For both set-point changes, the system response converges to the set-point in a relatively short interval of time and with very little offset. The performance for both set-point changes was comparable to the case when using the actual model alone (Figures 19 and 20). The performance of the proposed control strategy was also better than that when utilizing neural networks alone in the control scheme (Figures 21 and 22). In the step up tracking of the system, the system gives a large overshoot and a long response time. However, Figures 23 and 24 show that the control scheme using linearized models alone gives a poor performance compared to our proposed strategy. The system produced large offsets at the step up set-point change (Figure 23), and it is unable to track the set-point change from 6.0 to 3.0 g/L (Figure 24).

When our strategy is compared to the conventional PI controller, as shown in Figures 25 and 26, the system gives smoother responses under tight tuning of the PI controller. This result is expected because the process is only slightly nonlinear. However, the tuning of the PI controller was found only after quite a bit of trial-and-error work. The proposed control strategy is also tested for plant-model mismatch with a +20% change in the yield parameter, Y_X/S. Again, the controller is able to reject the disturbance effectively through the online weight adaptation (Figure 27). Figure 28 shows the controller performance when randomly distributed noise with zero mean and variance of 0.1 g/L is incorporated into the biomass concentration measurement. The system response follows the set-point tracking with small-


Figure 23. Adaptive linearizing control of the fermentation process (step up tracking): linearized models only.

Figure 24. Adaptive linearizing control of the fermentation process (step down tracking): linearized models only.

magnitude oscillations. The chattering behavior observed in the manipulated variable, Sf, is small and acceptable. Further to this, a random walk change in the dilution rate, D, is added to the system to test the ability of the proposed control strategy toward these external disturbances. Figure 29 shows the response during step up tracking when the dilution rate is varied within the range of 0.18-0.22 h-1. The system converged to the set-point in a relatively short interval of time. The actual values of the nonlinear functions (fh and gh) during set-point tracking were also compared with the neural network’s output [fˆ(x,w) and gˆ (x,r)] as in Figure 30. It shows that the networks were able to predict the nonlinear functions with very small error over the entire range. The closed-loop response with poorly trained network models (e.g., when the sum of

Figure 25. Fermentation process: closed-loop response with a PI controller (step up tracking).

Figure 26. Fermentation process: closed-loop response with a PI controller (step down tracking).

the squared error between the output and target values of the validation data was much higher, at a value of about 50) is shown in Figure 31. Large overshoots and offsets were observed, which thus conforms to the prediction of the stability analysis in section 4.

6. Summary and Conclusions

As mentioned earlier, the use of neural networks alone to model any system will be accurate and adequate only with sufficiently rich data that are representative of the system. This is because the training of the neural network models is highly dependent on the quantity and quality of the data presented to them. However, acquiring sufficiently adequate data can be difficult at times, especially in real-time


Figure 27. Adaptive linearizing control of the fermentation process (with plant-model mismatch): hybrid models.

Figure 28. Adaptive linearizing control of the fermentation process (with measurement noise): hybrid models.

systems. Hence, the utilization of known or available models together with neural network models, i.e., hybrid models, as proposed in this paper, is more viable and practical in many cases. The proposed technique is also useful for implementation in highly nonlinear systems where exact models are difficult to obtain or only some knowledge of the system behavior is known, which is the case for most nonlinear systems. In both of the case studies, we linearized the model and then used neural networks to model the higher order terms, because this is one approach that can demonstrate the performance of hybrid models involving a nominal model and neural networks in the simulation case studies. Other forms are also possible; for example, we can use a simple model equation to develop the controller, while the process can be

Figure 29. Adaptive linearizing control of the fermentation process (with external disturbance): hybrid models.

Figure 30. Fermentation process: comparison between actual values and neural network online prediction.

described by a more complicated model equation, e.g., in biotechnological processes. The results obtained in the simulation studies show that the proposed strategy gives results comparable to those obtained when utilizing the actual nonlinear plant equations, which are normally not known exactly in practice. Moreover, this strategy gives better and more reliable results than those obtained when utilizing linearized models alone. This is especially evident when the operating point is far away from the region of linearization, where the difference between the linearized and actual models becomes increasingly pronounced. The superiority of the proposed method over the use of the linearized model alone is clear in the fermentation system, where the system response was unable to track the set-point change after the system had gone


far beyond the equilibrium point. The instability when using the linearized model alone in this case also highlights the point, mentioned in the Introduction, that fairly accurate models are required to achieve satisfactory control with this geometric nonlinear linearizing-type control technique. A sufficiently large step change was implemented in all cases to test the global behavior of this methodology and to allow a better comparison with the linearized model case. The proposed control strategy also performed better than the scheme utilizing neural networks alone in the control loop; higher overshoot and a longer response time are especially obvious in case study 2. The simulation results also show that the strategy was able to cater for plant-model mismatch and external disturbances through online weight adaptation. Under noisy and real-life conditions, such as a random walk change in the external disturbance, the proposed control strategy was still able to keep the system close to the set-point without obvious divergence in both case studies. In the first case study, the proposed control strategy performed slightly better than the PID controller. In the second case study, the PI controller showed good results under tight tuning, which is not surprising because the system is only slightly nonlinear; furthermore, the PI controller required a great deal of trial-and-error effort to determine the best settings in both cases.

Although in both of the case studies the neural networks are required to undergo offline training before implementation in the proposed control strategy, this may not be necessary, because some work has been done on the same control strategy with no prior pretraining, in which the weights are initialized to small numbers and adjusted directly online. The publication regarding this method will be presented elsewhere.

In this work we have also proven closed-loop "ball stability" in a bounded region, as specified in the theoretical results earlier, around the point ∆x = 0. In fact, it has been shown that this region approaches zero as the approximation accuracy of the neural network models, given by ε, approaches zero, which can commonly be achieved in many applications and was further demonstrated in our simulation study. Although in this work we have approximated the actual functions by a combination of linearization and neural networks, the same approach applies equally to the analysis and implementation of any uncertain system made up of a nominal model and a bounded uncertainty, where the neural network approximates the bounded uncertainty in the model. This approach then lends itself to a robust closed-loop stability analysis in which the same theoretical treatment as that given in this study can be followed. The proposed control strategy is also valid for multi-input multi-output systems, which will be demonstrated in our future work.

Figure 31. Adaptive linearizing control of the fermentation process (with lower accuracy neural network models): hybrid models.

Acknowledgment

We acknowledge the initial involvement of Prof. L. S. Kershenbaum of Imperial College in part of this work, as well as the Ministry of Science, Technology and Environment, Malaysia, for the funds to carry out this project.

Nomenclature

A = area of the reactor (m^2), case study 1
Â = transformed matrix (Brunovsky canonical form)
B̂ = transformed vector (Brunovsky canonical form)
c_k = constant coefficients
C = concentration in the reactor, case study 1
C = Hurwitz matrix
D = dilution rate, case study 2
D(.) = dead-zone function
e = output error
f(x) = nonlinear function of the states x
f_h = higher order terms of f(x) modeled by the neural network
f_lin = linearization of f(x)
f̂(.,.) = neural network model
g(x) = nonlinear function of the states x
g_h = higher order terms of g(x) modeled by the neural network
g_lin = linearization of g(x)
ĝ(.,.) = neural network model
H = activation function (hyperbolic tangent)
ΔH = heat of reaction, case study 1
K_m = constant parameter, case study 2
k_i, k̂, m_i = finite real numbers
k(T) = rate constant
L_f, L_g = Lie derivatives with respect to f and g
p, q = numbers of hidden nodes
r, r̄_j = neural network weights
r̂ = neural network biases
P_0 = positive-definite matrix solution of the Lyapunov equation
Q = flow rate of the reactor, case study 1
S = substrate concentration, case study 2
s = Laplace transform variable
T = temperature in the reactor, case study 1
T_c = temperature of the coolant, case study 1
u, û = scalar control input
U = heat-transfer coefficient, case study 1
v = new external reference input
V = volume of the reactor, case study 1
V_0(.) = Lyapunov function
w, w̄_j = neural network weights
ŵ = neural network biases
X = biomass concentration, case study 2
x = state vector


x_e = state vector at the equilibrium point
Δx = deviation in the state variable
Y_X/S = yield parameter, case study 2
y = output vector
y_d = desired output vector
z, Z = transformed state variable vector
Δz = vector of the output error and its derivatives up to order n

Subscripts

c = cooling medium
e = equilibrium or linearization point
f = feed condition
0 = feed or initial condition
w, r = related to neural network weights and biases

Greek Symbols

α(x), α̂(x) = functions in the linearizing control laws
β(x), β̂(x) = functions in the linearizing control laws
β_1, β_2 = positive constants
η_1, η_2 = learning rates of the updating rule
μ = specific growth rate, case study 2
μ_m = maximum specific growth rate, case study 2
Φ(x) = transformed state variable vector

Literature Cited

(1) Isidori, A. Nonlinear Control Systems; Springer-Verlag: New York, 1989.
(2) Kummel, M.; Anderson, H. W. Controller Adjustment for Improved Nominal Performance and Robustness of a Distillation Column. Chem. Eng. Sci. 1987, 42, 2011.
(3) Doyle, F. J., III; Packard, A. K.; Morari, M. Robust Controller Design for a Nonlinear CSTR System. Chem. Eng. Sci. 1989, 44, 1929.
(4) Calvert, J. P.; Arkun, Y. Feedforward and Feedback Linearization of Nonlinear Systems and Its Implementation Using Internal Model Control. Ind. Eng. Chem. Res. 1985, 27, 1822.
(5) Kravaris, C.; Palanki, S. A Lyapunov Approach for Robust Nonlinear State Feedback Synthesis. IEEE Trans. Autom. Control 1988, 33, 1188.
(6) Poalini, E.; Romagnoli, J. A.; Desages, A. C.; Palazoglu, A. Approximate Models for Control of Nonlinear Systems. Chem. Eng. Sci. 1992, 47, 1161.
(7) Cybenko, G. Approximation by Superposition of a Sigmoidal Function. Math. Control Signal Syst. 1989, 2, 303.
(8) Hussain, M. A. Review of the Applications of Neural Networks in Chemical Process Control: Simulation and Online Implementation. Art. Int. Eng. J. 1999, 13, 55.
(9) Aziz, N.; Hussain, M. A.; Mujtaba, I. M. Performance of Different Types of Controllers in Tracking Optimal Temperature Profiles in Batch Reactors. Comput. Chem. Eng. 2000, 24, 1069.
(10) Hussain, M. A.; Kershenbaum, L. S. Implementation of an Inverse-Model-Based Control Strategy Using Neural Networks on a Partially Simulated Exothermic Reactor. Trans. Inst. Chem. Eng. 2000, 78, Part A, 299.
(11) Piovoso, M. J.; Kosonovich, K. A.; Rokhlenko, V.; Guez, A. A Comparison of Three Nonlinear Controller Designs Applied to a Non-Adiabatic First-Order Exothermic Reaction in a CSTR. Proc. Am. Control Conf. 1992, 490.
(12) Nikolaou, M.; Hanagandi, V. Control of Nonlinear Dynamical Systems Modeled by Recurrent Neural Networks. AIChE J. 1993, 39, 1890.
(13) Polycarpou, M. M.; Ioannu, P. A. Modelling, Identification and Stable Adaptive Control of Continuous-Time Nonlinear Dynamical Systems Using Neural Networks. Proc. Am. Control Conf. 1992, 36.
(14) Jin, L.; Nikiforuk, P. N.; Gupta, M. M. Direct Adaptive Output Tracking Control Using Multilayered Neural Networks. IEE Proc. D 1993, 140, 393.
(15) Fregene, K.; Kennedy, D. Control of a High-Order Power System by Neural Adaptive Feedback Linearization. Proc. 1999 IEEE Int. Symp. Intell. Control/Intell. Syst. Semiotics 1999, 34.
(16) Chen, F. C.; Khalil, H. K. Adaptive Control of a Class of Nonlinear Discrete-Time Systems Using Neural Networks. IEEE Trans. Autom. Control 1995, 40, 791.
(17) Kim, S. J.; Lee, M.; Park, S.; et al. A Neural Linearizing Control Scheme for Nonlinear Chemical Processes. Comput. Chem. Eng. 1997, 21, 187.
(18) Thompson, M. L.; Kramer, M. A. Modeling Chemical Processes Using Prior Knowledge and Neural Networks. AIChE J. 1994, 40, 1328.
(19) Eikens, B.; Karim, M. Process Identification with Multiple Neural Network Models. Int. J. Control 1999, 72, 576.
(20) Schubert, J.; Simutis, R.; Dors, M.; Havlik, I.; Lübbert, A. Bioprocess Optimization and Control: Application of Hybrid Modelling. J. Biotechnol. 1994, 35, 51.
(21) Hunt, L. R.; Su, R.; Meyer, G. Global Transformations of Nonlinear Systems. IEEE Trans. Autom. Control 1983, AC-28, 24.
(22) Slotine, J. J. E.; Li, W. Applied Nonlinear Control; Prentice-Hall: Englewood Cliffs, NJ, 1991.
(23) Nahas, E. P.; Henson, M. A.; Seborg, D. E. Nonlinear Internal Model Control Strategy for Neural Network Models. Comput. Chem. Eng. 1992, 16, 1039.
(24) Demuth, H.; Beale, M. Neural Network Toolbox: For Use with MATLAB; The MathWorks, Inc.: Natick, MA, 1996.
(25) Rumelhart, D. E.; Hinton, G. E.; Williams, R. J. Learning Internal Representations by Error Propagation. In Parallel Distributed Processing; Rumelhart, D. E., McClelland, J. L., Eds.; MIT Press: Cambridge, MA, 1988; Vol. 1, pp 318-362.
(26) Chen, F. C.; Khalil, H. K. Adaptive Control of Nonlinear Systems Using Neural Networks: A Dead Zone Approach. Proc. Am. Control Conf. 1991, 667.
(27) Hecht-Nielsen, R. Theory of the Back-propagation Neural Networks. Proc. Int. Joint Conf. Neural Networks 1989, 593.
(28) Blum, E. K.; Li, L. K. Approximation Theory of Feedforward Neural Networks. Neural Networks 1991, 4, 511.
(29) Hussain, M. A. Inverse-Model Control Strategies Using Neural Networks: Analysis, Simulation and Online Implementation. Ph.D. Thesis, Imperial College, London, England, 1996.
(30) Hui, S.; Zak, S. H. Analysis of Single Perceptrons Learning Capabilities. Proc. Am. Control Conf. 1991, 809.
(31) Henson, M. A.; Seborg, D. E. Nonlinear Process Control; Prentice-Hall: Englewood Cliffs, NJ, 1997.
(32) Limqueco, L. C.; Kantor, J. C. Nonlinear Output Feedback Control of an Exothermic Reactor. Comput. Chem. Eng. 1990, 14, 427.

Received for review October 25, 2000
Revised manuscript received May 22, 2001
Accepted August 22, 2001
IE000919R