Ind. Eng. Chem. Res. 1990, 29, 2310-2323


Geometric Methods for Nonlinear Process Control. 2. Controller Synthesis

Costas Kravaris*
Department of Chemical Engineering, The University of Michigan, Ann Arbor, Michigan 48109-2136

Jeffrey C. Kantor
Department of Chemical Engineering, University of Notre Dame, Notre Dame, Indiana 46556

*To whom all correspondence should be addressed.

This is the second part of a review paper for geometric methods in nonlinear process control. It focuses on exact linearization methods including Su-Hunt-Meyer, input/output, and full linearization. The internal model control (IMC) and globally linearizing control (GLC) structures are reviewed and interpreted in the context of input/output linearization. Further topics of current research interest are also identified.

1. Introduction

In the present second part of our review paper, we will discuss controller synthesis methods for SISO nonlinear systems of the form

dx/dt = f(x) + g(x)u
y = h(x)     (I)

where x is the vector of states, u is the manipulated input, y is the output, f(x) and g(x) are vector fields on R^n, and h(x) is a scalar field on R^n. In the first part, we have already provided a brief review of results for the linear case

dx/dt = Ax + bu
y = cx

where A, b, and c are n x n, n x 1, and 1 x n matrices, respectively; these results will now be generalized to nonlinear systems of the form (I). In the first part, we have also set up the machinery of Lie derivatives, Lie brackets, and coordinate transformations, which will be instrumental in the derivation of control laws for nonlinear systems. Finally, the first part has introduced the concepts of relative order and zero dynamics of nonlinear systems; these will be essential in both deriving and interpreting controller synthesis methodologies for nonlinear systems.

The main mission of the present part is to pose and solve the feedback linearization problems introduced in the first part and to show their application in controlling nonlinear processes. Research directions and open problems in the area of nonlinear process control will also be identified at the end.

The next section will provide a review of basic properties of static state feedback. Sections 3 and 4 will present a theoretical overview of the Su-Hunt-Meyer and input/output linearization problems and illustrate their solutions in a chemical reactor example. Section 5 will address the full linearization problem and its applications and discuss the advantages, disadvantages, and limitations of each linearization approach. Section 6 will deal with control structures for minimum-phase nonlinear processes. The IMC structure will be reviewed first and will be interpreted as providing input/output linearization in a macroscopic sense. It will be followed by the GLC structure, which is directly based on the state feedback theory of the previous sections. Finally, section 7 will briefly address further topics in the area that correspond to either current research areas or major unsolved problems.

2. Basic Properties of Static State Feedback

In this section, we will establish some basic properties of static state feedback, which are completely analogous to the properties of linear static state feedback for linear systems (see subsection 2.4 of part 1). Consider a nonlinear system of the form (I) subject to static state feedback

u = p(x) + q(x)v     (1)

where v is an external reference input, p(x) and q(x) are scalar algebraic functions of the state vector, and q(x) ≠ 0. The resulting closed-loop system is then described by

dx/dt = [f(x) + g(x)p(x)] + [g(x)q(x)]v
y = h(x)     (2)

In the present section, we discuss some fundamental properties of state feedback of the form (1). Comparing the open-loop and the closed-loop systems ((I) and (2), respectively), we immediately observe that they have the same structure: their right-hand sides are nonlinear in x but linear in the input. (In more precise mathematical terms, one should say that the right-hand side is affine in the input.) The reason that the structure of the equations is preserved is that the state feedback (1) is linear in the external input v. This seemingly trivial observation has considerable importance in the theoretical developments that will follow. For this reason, we state it as a proposition:

Proposition 2.1. A static state feedback, which is linear in the external input, preserves the linearity of the state equations in the input.

Table I. Overview of Limitations of the Linearization Approaches

method        | existence of state feedback            | analytical calculation of state feedback     | closed-loop stability by tuning controller parameters
Su-Hunt-Meyer | only for involutive systems            | only in special cases                        | always
input/output  | always                                 | always                                       | only for minimum phase
full          | only for r = n or linear zero dynamics | if r = n, always; if r ≠ n, in special cases | always

Since the closed-loop system has the same structure as the open-loop system (linear in the input), one can define relative order and zero dynamics for the closed-loop system by using the same definitions as in section 4 of part 1.

Proposition 2.2. Static state feedback of the form (1) preserves the relative order of the system.

Proof. Let r be the relative order of (I). This implies that (see definition 4.1.1 of part 1)

L_g L_f^k h(x) = 0,  k = 0, ..., r - 2
L_g L_f^(r-1) h(x) ≠ 0

To determine the relative order of the closed-loop system (2), we must calculate the derivatives L_(gq) L_(f+gp)^k h(x), k = 0, 1, 2, .... We easily see by induction that

L_(f+gp)^k h(x) = L_f^k h(x),  k = 0, ..., r - 1     (3)

Furthermore, from (51) of part 1, L_(gq) = q L_g. Thus,

L_(gq) L_(f+gp)^k h(x) = q(x) L_g L_f^k h(x),  k = 0, ..., r - 1     (4)

and therefore,

L_(gq) L_(f+gp)^k h(x) = 0,  k = 0, ..., r - 2
L_(gq) L_(f+gp)^(r-1) h(x) ≠ 0

This shows that the relative order of the closed-loop system (2) is equal to r.

Proposition 2.3. Static state feedback of the form (1) preserves the zero dynamics of the system.

Proof. Under the coordinate transformation of theorem 4.2.1 of part 1, the nonlinear system (I) is transformed into the Byrnes-Isidori normal form:

dζ_1/dt = F_1(ζ_1, ..., ζ_(n-r), ζ_(n-r+1), ..., ζ_n)
...
dζ_(n-r)/dt = F_(n-r)(ζ_1, ..., ζ_(n-r), ζ_(n-r+1), ..., ζ_n)
dζ_(n-r+1)/dt = ζ_(n-r+2)
...
dζ_(n-1)/dt = ζ_n
dζ_n/dt = Φ(ζ) + G(ζ)u
y = ζ_(n-r+1)

Applying a static state feedback u = p(ζ) + q(ζ)v will only alter the nth-state equation. Consequently, the zero dynamics, which is determined by the first n - r equations, will be completely unaffected.

To summarize, state feedback of the form (1) preserves (a) linearity of the state equations in the input, (b) the relative order of the system, and (c) the zero dynamics of the system. Of course, state feedback does not preserve the dynamics of the system; its raison d'être is to modify the dynamics. In a linear setting, modification of the dynamics means relocating the poles and therefore affecting the stability characteristics (stabilization or destabilization) and the speed of the response. In a nonlinear setting, state feedback can also change the "shape" of the system: a nonlinear system can be made linear via feedback and vice versa. In the next two sections, we will see how state feedback can be used for this purpose.
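To make the Lie-derivative bookkeeping of this section concrete, the following sketch (not part of the original paper) computes the relative order of a small made-up two-state example with sympy and checks that a static state feedback of the form (1) leaves it unchanged, as asserted by proposition 2.2. The vector fields f, g and the output h below are illustrative assumptions only:

    # A minimal symbolic sketch illustrating the Lie-derivative calculations
    # of this section.  The two-state system is a made-up example chosen so
    # that the relative order and its invariance under feedback (1) can be
    # checked with sympy.
    import sympy as sp

    x1, x2, v = sp.symbols('x1 x2 v')
    x = sp.Matrix([x1, x2])

    f = sp.Matrix([x2, -x1 - x2**3])      # drift vector field f(x)
    g = sp.Matrix([0, 1 + x1**2])         # input vector field g(x)
    h = x1                                # output map y = h(x)

    def lie(vec, scalar):
        """Lie derivative L_vec(scalar) = grad(scalar) . vec."""
        return sp.simplify((sp.Matrix([scalar]).jacobian(x) * vec)[0])

    def relative_order(f, g, h, nmax=5):
        """Smallest r with L_g L_f^(r-1) h != 0 (definition 4.1.1 of part 1)."""
        Lfk = h
        for r in range(1, nmax + 1):
            if lie(g, Lfk) != 0:
                return r
            Lfk = lie(f, Lfk)
        return None

    print(relative_order(f, g, h))        # -> 2 for this example

    # Apply a static state feedback u = p(x) + q(x)v (any q(x) != 0) and verify
    # that the closed-loop system (2) has the same relative order (prop 2.2).
    p = -x1**2                            # arbitrary p(x)
    q = sp.Integer(2)                     # arbitrary nonzero q(x)
    f_cl = f + g * p                      # f(x) + g(x)p(x)
    g_cl = g * q                          # g(x)q(x)
    print(relative_order(f_cl, g_cl, h))  # -> 2 again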

3. Su-Hunt-Meyer Linearization of Nonlinear Systems

The work of Su (46) and Hunt, Su, and Meyer (22) completely solved the problem of linearizing the state equations via feedback, which was posed in subsection 5.4 of part 1. In this section, we are going to review their main results and interpret them as a nonlinear extension of the linear pole placement problem (see subsection 2.5 of part 1).

The linear pole placement problem was solved in subsection 2.5 of part 1 as follows. We first solved the problem for the special class of systems that are in controllability canonical form. We then obtained a coordinate transformation that transforms an arbitrary controllable linear system to the controllability canonical form; this led to the general solution of the linear pole placement problem. A similar development will be followed now for the nonlinear case.

The dynamic system

dx_1/dt = x_2
dx_2/dt = x_3
...
dx_(n-1)/dt = x_n
dx_n/dt = Φ(x) + G(x)u     (5)

where G(x) ≠ 0 is a nonlinear analogue of the canonical form for controllable linear systems (see (15) of part 1). For such a system, the synthesis of a static state feedback law for desirable closed-loop dynamics is very easy

if we restrict ourselves to linear closed-loop dynamics. Indeed, the state feedback

u = (v - Φ(x_1, x_2, ..., x_n) - α_n x_1 - α_(n-1) x_2 - ... - α_1 x_n)/G(x_1, x_2, ..., x_n)     (6)

induces the closed-loop dynamics

dx_1/dt = x_2
dx_2/dt = x_3
...
dx_(n-1)/dt = x_n
dx_n/dt = -α_n x_1 - α_(n-1) x_2 - ... - α_1 x_n + v

To show that the coordinate transformation (10) is invertible, we must show that dq(x), dL_f q(x), ..., dL_f^(n-1) q(x) are linearly independent.
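As a numerical illustration of the pole-placement feedback (6), the following sketch (a made-up second-order example, not one from the paper) simulates a process already in the canonical form (5) and closes the loop so that the states obey a prescribed linear dynamics:

    # Hypothetical second-order system in the canonical form (5):
    #   dx1/dt = x2,  dx2/dt = Phi(x) + G(x) u,   with G(x) != 0.
    # The feedback (6) cancels Phi and G and assigns the characteristic
    # polynomial s^2 + a1 s + a2 (poles at -1 and -2 in this sketch).
    import numpy as np
    from scipy.integrate import solve_ivp

    def Phi(x):                      # illustrative nonlinearity (assumption)
        return -np.sin(x[0]) - x[1]**3

    def G(x):                        # illustrative input gain, never zero
        return 1.0 + x[0]**2

    a1, a2 = 3.0, 2.0                # s^2 + 3s + 2 = (s + 1)(s + 2)

    def closed_loop(t, x, v=1.0):
        u = (v - Phi(x) - a2*x[0] - a1*x[1]) / G(x)   # feedback (6) with n = 2
        return [x[1], Phi(x) + G(x)*u]

    sol = solve_ivp(closed_loop, (0.0, 10.0), [0.5, -0.5], max_step=0.01)
    print(sol.y[0, -1])              # x1 settles at v/a2 = 0.5 for the step v = 1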

To see if we have closed-loop internal stability as well, we must check whether the process is minimum phase. Since the process model is already in Byrnes-Isidori normal form, the zero dynamics is simply the first two state equations of the reactor model, with T viewed as the input (39). Local asymptotic stability of (39) around a given steady state (C_As, C_Bs, T_s) can easily be checked by linear stability analysis: the eigenvalues of the system are -(F/V + 2k_1(T_s)C_As) and -(F/V + k_2(T_s)), both negative for every steady state. Thus, from part b of theorem 4.2, we have guaranteed local asymptotic stability of the closed-loop system around any operating steady state. Stability of (39) in the sense of part a of theorem 4.2, which would allow conclusions about the asymptotic stability of the closed-loop system in the large, is difficult to check theoretically; simulations are best suited for this purpose.
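A quick way to carry out the local stability check described above is to evaluate the Jacobian of the zero dynamics at the steady state numerically and inspect its eigenvalues. The sketch below is generic; the rate expressions, model structure, and operating point are placeholders rather than the reactor model of the paper:

    # Numerical check of local asymptotic stability of a two-state zero dynamics:
    # linearize around the steady state and verify that all eigenvalues have
    # negative real parts.  Model structure and parameter values are assumed.
    import numpy as np

    F_over_V = 0.5                               # 1/h, assumed dilution term
    k1 = lambda T: 1.0e6 * np.exp(-5000.0 / T)   # assumed Arrhenius rate constants
    k2 = lambda T: 1.0e4 * np.exp(-4000.0 / T)

    def zero_dynamics(c, T):
        CA, CB = c                               # mass balances with T frozen
        dCA = F_over_V * (1.0 - CA) - k1(T) * CA**2
        dCB = -F_over_V * CB + k1(T) * CA**2 - k2(T) * CB
        return np.array([dCA, dCB])

    def jacobian(fun, c, T, eps=1e-6):
        """Central-difference Jacobian of fun(c, T) with respect to c."""
        n = len(c)
        J = np.zeros((n, n))
        for j in range(n):
            dc = np.zeros(n); dc[j] = eps
            J[:, j] = (fun(c + dc, T) - fun(c - dc, T)) / (2.0 * eps)
        return J

    c_ss, T_ss = np.array([0.6, 0.3]), 350.0     # assumed operating point
    eigs = np.linalg.eigvals(jacobian(zero_dynamics, c_ss, T_ss))
    print(eigs, "stable" if np.all(eigs.real < 0) else "unstable")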

5. Full Linearization and Comparative Evaluation of the Linearization Approaches

The key idea behind the Su-Hunt-Meyer linearization is to modify the dynamics of the system in a predictable way. Since the linear dynamics is what we clearly understand and can easily analyze, linearity of the state equations is the most convenient setting. On the other hand, the key idea behind input/output linearization is to "shape" the input/output behavior of the system in a predictable way. Again, since the linear dynamics is what we clearly understand and can easily analyze, input/output linearity is a convenience.

Closed-loop stability is transparent in the Su-Hunt-Meyer method, but closed-loop performance is not addressed.

For a system of the form (I) with relative order r = n, we have

L_(ad_f^k g) h(x) = 0,  k = 0, ..., n - 2
L_(ad_f^(n-1) g) h(x) ≠ 0

In other words, the scalar field q(x) = h(x) satisfies the conditions (9) of proposition 3.1, and therefore the system is Su-Hunt-Meyer linearizable. The state feedback (20), i.e.

u = (v - L_f^n h(x) - α_1 L_f^(n-1) h(x) - ... - α_(n-1) L_f h(x) - α_n h(x))/(L_g L_f^(n-1) h(x))     (41)

and the coordinate transformation (10), i.e.

ξ_k = L_f^(k-1) h(x),  k = 1, ..., n     (42)

will make the closed-loop state equations linear:

dξ_1/dt = ξ_2
dξ_2/dt = ξ_3
...
dξ_(n-1)/dt = ξ_n
dξ_n/dt = -α_n ξ_1 - α_(n-1) ξ_2 - ... - α_1 ξ_n + v

At the same time, the output y is expressed in the new coordinate system as

y = ξ_1

and therefore depends linearly on the new state variables. This implies that the input/output behavior of the closed-loop system is also linear. In particular, the input/output dynamics of the closed-loop system is governed by

d^n y/dt^n + α_1 d^(n-1) y/dt^(n-1) + ... + α_(n-1) dy/dt + α_n y = v     (43)

On the other hand, if we apply the method of input/output linearization for r = n, the state feedback (28) becomes (41) and the closed-loop input/output behavior (29) becomes (43). Thus, we have shown the following proposition.

Proposition 5.1. If a system of the form (I) has r = n, then there exists a static state feedback of the form (1) and a coordinate transformation ξ = T(x) such that the closed-loop system can be represented as a linear system

dξ/dt = Aξ + bv
y = cξ

In particular, the Su-Hunt-Meyer state feedback (20) with q(x) = h(x) and the input/output linearizing state feedback (28) with r = n are identical and can be used for this purpose.

It is interesting to observe that the case r = n corresponds to systems that have no zero dynamics and are, therefore, minimum phase. Furthermore, the nonlinear analogue of the controllability canonical form (equation 5) becomes exactly the Byrnes-Isidori normal form when r = n (see (73) of part 1); the Su-Hunt-Meyer coordinate transformation (10) for q(x) = h(x) becomes exactly the Byrnes-Isidori coordinate transformation for r = n (see (76) of part 1).
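The following sympy sketch (an illustration, not taken from the paper) constructs the state feedback (41) for a made-up two-state system with r = n = 2 and verifies that the resulting input/output dynamics is the linear equation (43):

    # Symbolic construction of the full/input-output linearizing feedback (41)
    # for an illustrative system with relative order r = n = 2.
    import sympy as sp

    x1, x2, v, a1, a2 = sp.symbols('x1 x2 v alpha1 alpha2')
    x = sp.Matrix([x1, x2])

    f = sp.Matrix([x2 + x1**2, -x1 - x2])   # assumed drift f(x)
    g = sp.Matrix([0, 1 + x2**2])           # assumed input field g(x)
    h = x1                                  # output y = h(x)

    lie = lambda vec, s: (sp.Matrix([s]).jacobian(x) * vec)[0]

    Lfh   = lie(f, h)                       # L_f h
    Lf2h  = lie(f, Lfh)                     # L_f^2 h
    LgLfh = lie(g, Lfh)                     # L_g L_f h (nonzero, so r = 2 = n)

    # Feedback (41):  u = (v - L_f^2 h - a1*L_f h - a2*h) / (L_g L_f h)
    u = (v - Lf2h - a1*Lfh - a2*h) / LgLfh

    # Since L_g h = 0, dy/dt = L_f h and d2y/dt2 = L_f^2 h + (L_g L_f h) u.
    ydot  = Lfh
    yddot = sp.simplify(Lf2h + LgLfh * u)

    # Verify (43) for n = 2:  d2y/dt2 + a1*dy/dt + a2*y = v
    print(sp.simplify(yddot + a1*ydot + a2*h - v))   # -> 0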

Proposition 5.2. If a system of the form (I) has linear forced zero dynamics (in appropriate coordinates), then there exists a static state feedback of the form (1) and a coordinate transformation ξ = T(x) such that the closed-loop system can be represented as a linear system

dξ/dt = Aξ + bv
y = cξ

Conversely, if there exists a static state feedback of the form (1) and a coordinate transformation such that the closed-loop system is a linear system of the above form, then either (I) has no zero dynamics (r = n) or (I) has linear forced zero dynamics under appropriate coordinate transformation.

Proof. If (I) has linear zero dynamics, this means that there exists a coordinate transformation ζ = T(x) which will put it in Byrnes-Isidori normal form with the first n - r equations linear:

dζ_1/dt = λ_11 ζ_1 + ... + λ_(1,n-r) ζ_(n-r) + λ_(1,n-r+1) ζ_(n-r+1) + ... + λ_(1,n) ζ_n
...
dζ_(n-r)/dt = λ_(n-r,1) ζ_1 + ... + λ_(n-r,n-r) ζ_(n-r) + λ_(n-r,n-r+1) ζ_(n-r+1) + ... + λ_(n-r,n) ζ_n
dζ_(n-r+1)/dt = ζ_(n-r+2)
...
dζ_(n-1)/dt = ζ_n
dζ_n/dt = Φ(ζ) + G(ζ)u
y = ζ_(n-r+1)

Hence, any state feedback of the form

u = (v - Φ(ζ) + μ_1 ζ_1 + μ_2 ζ_2 + ... + μ_n ζ_n)/G(ζ)

where μ_1, μ_2, ..., μ_n are scalar constant parameters, will make the entire system linear. Conversely, suppose that there exists a static state feedback and a coordinate transformation that makes the closed-loop system

dξ/dt = Aξ + bv
y = cξ

Then the zero dynamics of the closed-loop system can be easily computed via the linear coordinate transformation of (8) of part 1. Since static state feedback leaves the zero dynamics unaffected (by proposition 2.3), the zero dynamics of the open-loop system will be exactly the same; hence, linear.

The above proposition characterizes the entire class of systems whose state equations and input/output behavior can both be made linear via static state feedback and coordinate transformation. Note that there is no requirement that the linear closed-loop system be controllable. If such an additional requirement is imposed, then, in addition to the zero dynamics being linear or nonexistent, the vector fields g(x), ad_f g(x), ..., ad_f^(n-1) g(x) must be linearly independent. An equivalent statement of these two conditions is provided in the following proposition, due to Tarn et al. (49).

Proposition 5.3. Consider a nonlinear system of the form (I) and assume that there exists a scalar field q(x) and real numbers c_1, c_2, ..., c_n such that

L_g q(x) = 0
L_(ad_f g) q(x) = 0
...
L_(ad_f^(n-2) g) q(x) = 0
L_(ad_f^(n-1) g) q(x) ≠ 0     (44)

and

dh(x) = c_1 dq(x) + c_2 dL_f q(x) + ... + c_n dL_f^(n-1) q(x)     (45)

Then there exists a static state feedback of the form (1) and a coordinate transformation ξ = T(x) that makes the closed-loop system a linear controllable system of the form

dξ/dt = Aξ + bv
y = cξ

Conversely, if there exists a static state feedback of the form (1) and a coordinate transformation that makes the closed-loop system a linear controllable system of the above form, then there exists a scalar field q(x) and real numbers c_1, c_2, ..., c_n that satisfy (44) and (45).

Proof. Assume that (44) and (45) are satisfied. Then the transformation

ξ_k = L_f^(k-1) q(x),  k = 1, ..., n

is invertible. Furthermore, transforming the gradient of h into the new coordinate system and taking into account (45), we easily conclude that dh(ξ) = [c_1 c_2 ... c_n], i.e.

h(ξ) = c_1 ξ_1 + c_2 ξ_2 + ... + c_n ξ_n

Suppose now that we apply the Su-Hunt-Meyer state feedback (20). Then the state equations will become

dξ_1/dt = ξ_2
dξ_2/dt = ξ_3
...
dξ_n/dt = -α_n ξ_1 - α_(n-1) ξ_2 - ... - α_1 ξ_n + v

Hence, the closed-loop system will have linear and controllable state equations, and the output will depend linearly on the states. Conversely, assume that there is a static state feedback and a coordinate transformation that make the closed-loop state equations linear and controllable and the output depend linearly on the states. Given the results of section 3, we have that linearity and controllability of the closed-loop state equations imply the existence of a scalar field q(x) that satisfies (44) and that the coordinate transformation must be of the form

ξ_k = L_f^(k-1) q(x),  k = 1, ..., n

where q(x) satisfies (44). The linearity of the output in ξ implies that dh(ξ) = constant; from this we easily conclude that (45) must be satisfied by using a similar argument as in the proof of the forward statement.

Remark 5.1. The result of the above proposition encompasses the result of proposition 5.1. When r = n, (44) and (45) are obviously satisfied for q(x) = h(x), c_1 = 1, c_2 = 0, ..., c_n = 0.

An important conclusion from the above proposition is that the state equations of (I) must satisfy the restrictive assumptions of the Su-Hunt-Meyer theory (see theorem 3.1) for full linearization with controllability to be feasible. If these assumptions are satisfied, the process output may still fail to satisfy condition (45). However, there are situations where the control engineer has the flexibility of reformulating the control problem in terms of a different output so that full linearization is feasible. In the following lines, this idea will be illustrated with an example taken from Hoo and Kantor (20).

The growth of the methanol-utilizing microorganism Methylomonas in a continuous fermentor can be modeled by

dX/dt = μ(S)X - DX
dS/dt = -σ(S)X + D(S_F - S)     (46)

where X is the cell mass concentration, S is the concentration of methanol, S_F is the concentration of methanol in the feed, and D is the dilution rate. μ(S) and σ(S) are the specific growth rate and specific substrate utilization rate, respectively, and are given by empirical correlations (DiBiasio et al. (12)). In this system, D is the manipulated input, whereas y = X is a logical process output.

Su-Hunt-Meyer linearization of (46) is feasible because n = 2 and its g(x) and ad_f g(x) are linearly independent. In fact, one can easily calculate the scalar field q(x) by finding a nontrivial solution of the linear homogeneous partial differential equation L_g q(x) = 0, i.e., of

-X ∂q/∂X + (S_F - S) ∂q/∂S = 0     (47)

We have already encountered this equation in the example of subsection 4.2 of part 1, where we saw that S_F X/(S_F - S) is a particular nontrivial solution. The general solution will be

q(X,S) = φ(X/(S_F - S))     (48)

where φ is an arbitrary nonconstant function of one real variable. Calculating the Su-Hunt-Meyer coordinate transformation shows that y = X does not depend linearly on ξ_1 and ξ_2. Equivalently, condition (45) with h(X,S) = X and q(X,S) of the form (48) cannot be satisfied, and therefore full linearization is not feasible.

However, one can observe that the quantity X/(S_F - S) is the apparent net yield (cell mass per unit mass of consumed substrate) as sensed at the reactor effluent. Consequently,

y* = X/(S_F - S)     (49)

is a very meaningful process output in terms of evaluating the operation of the fermentor. At the same time, this output has relative order r = 2 = n, and therefore full linearization is feasible. Reformulating the control problem with y* as the output offers distinct technical advantages; in this sense, we can say that y* is a "distinguished output" for this process (28).

Another flexibility in formulating control problems is the availability of alternative manipulated inputs. The control engineer may be able to select manipulated inputs so that the given outputs can be fully linearized. A situation where this is feasible and meaningful is the control of a continuous mixed-culture bioreactor. The problem in this case is to stabilize the simultaneous growth of two cell populations that compete for a single rate-limiting substrate. The reader is referred to Hoo and Kantor (19) for details.
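The key facts of this example, that q = X/(S_F - S) solves L_g q = 0 and that the reformulated output y* has relative order 2, are easy to confirm symbolically. The sketch below uses an illustrative Monod-type growth law and a constant-yield utilization rate purely as placeholders for the empirical correlations of DiBiasio et al.:

    # Symbolic check of the fermentor example: q = X/(SF - S) solves L_g q = 0,
    # and the distinguished output y* = X/(SF - S) has relative order 2 = n.
    # mu(S) and sigma(S) below are placeholder correlations, not the paper's.
    import sympy as sp

    X, S, SF, mumax, Ks, Y = sp.symbols('X S S_F mu_max K_s Y', positive=True)
    z = sp.Matrix([X, S])

    mu = mumax * S / (Ks + S)          # assumed specific growth rate
    sigma = mu / Y                     # assumed specific utilization rate

    # Model (46) with the dilution rate D as the manipulated input:
    f = sp.Matrix([mu * X, -sigma * X])
    g = sp.Matrix([-X, SF - S])

    lie = lambda vec, s: sp.simplify((sp.Matrix([s]).jacobian(z) * vec)[0])

    q = X / (SF - S)                   # eq (48) with phi = identity; also y* of (49)
    print(lie(g, q))                           # -> 0: L_g q = 0, so (47) holds and r > 1
    print(sp.simplify(lie(g, lie(f, q))))      # nonzero in general, so y* has r = 2 = n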

5.2. Discussion of the Linearization Approaches. Table I provides an overview of the limitations of each linearization approach. The basic limitation of the Su-Hunt-Meyer method is the involutivity condition, which is satisfied for practically all second-order systems and for a quite limited class of higher order systems. Also, the calculation of the state feedback law (whenever the method is applicable) can be difficult because analytical solutions of the system of partial differential equations are seldom available. The input/output linearization method does not suffer from these drawbacks because it is always applicable, and the state feedback law only involves calculation of derivatives. However, it only works for minimum-phase processes; when the process is non-minimum-phase, the closed-loop system will be internally unstable no matter what the controller parameters are.

The class of systems for which neither method works is the class of higher order noninvolutive non-minimum-phase systems. Mathematically, this is a sizeable class and is a subject of ongoing research investigation. From a practical standpoint, however, it represents a very limited class of processes.

From the point of view of closed-loop performance characteristics, input/output linearity is always sought. In fact, it has been shown (32) that, for minimum-phase


processes, input/output linearizing state feedback provides ISE-optimal closed-loop response for step changes in the limit as the poles of the closed-loop input/output dynamics tend to negative infinity. The Su-Hunt-Meyer state feedback does not possess any inherent optimality properties because it is a stability-oriented approach; the problem of optimal pole placement does not have an easy solution unless input/output linearity is achieved. Another very important reason that makes input/output linearity desirable will be seen in the next section.

An extremely limited class of systems can be fully linearized. Whenever it can be achieved, full linearization represents the most convenient situation from a theoretical point of view. As described in the previous subsection, there have been a number of practical applications where full linearization is both feasible and physically meaningful.

6. Control Structures for Minimum-Phase Nonlinear Processes

There are two basic lines of thinking in the development of feedback control algorithms. (i) The first is the state-space perspective, in which the original analysis is done by manipulating the state equations, under the assumption that all the states are measurable. This leads to a state feedback law. Then, the case of unavailable state measurements is treated by combining the state feedback law with a state observer. (ii) The second is the input/output perspective, in which the original analysis is done by manipulating input/output operators, under the assumption that only the output is measurable. This leads to an output feedback law. Then, the case of additional state measurements is treated by modifying the problem formulation and its analysis in a cascade control configuration.

For linear processes, both approaches have been explored in great depth, leading to identical solutions of control problems. The most popular approach has been the input/output perspective, because input/output operators can be conveniently represented by transfer functions, which encode all pertinent information (number and location of poles and zeros). For nonlinear processes, the situation is different because there is nothing like a transfer function to represent the input/output dynamics; all pertinent information is hidden inside the state equations. There is, however, a nonlinear analogue of the input/output perspective that uses abstract nonlinear operators to represent input/output dynamics. Although these abstract operators do not encode zero/pole information and therefore the results cannot be explicit, the abstract input/output perspective does provide valuable insights and allows identifying analogies with linear control. On the other hand, the state-space perspective carries over unchanged to nonlinear systems, except that the machinery for manipulating state equations is different (Lie algebra instead of matrix algebra). In what follows, we will highlight both perspectives. In the context of the input/output perspective, we will review the nonlinear IMC structure of Economou et al. (14) and, in the context of the state-space perspective, the GLC structure of Kravaris and Chung (32).

6.1. Internal Model Control (IMC) of Nonlinear Processes. Consider the classical output feedback control structure of Figure 1, where P and C are nonlinear operators that describe the input/output behavior of the process and the controller, respectively.

Figure 1. Classical output feedback structure.

The input/output behavior of the closed-loop system is then described by

y = PC(I + PC)^(-1) y_sp     (50)

where I denotes the identity operator and y_sp the set point. In analogy to linear control problems, it is convenient to introduce the following parametrization of the controller operator

C = Q(I - PQ)^(-1)     (51)

where the "parameter" Q is an "adjustable" nonlinear operator. Equation 51 can be equivalently rewritten as

(I + PC)(I - PQ) = I     (52)

from which we easily conclude that

Q = C(I + PC)^(-1)     (53)

and therefore the closed-loop input/output behavior (50) becomes

y = PQ y_sp     (54)

A pictorial representation of the control structure with the controller operator parametrized according to (51) is shown in Figure 2. This has been referred to as the internal model control (IMC) structure (14, 44).

Figure 2. IMC structure.

From the closed-loop input/output behavior (54), we draw the following very important conclusion with regard to closed-loop stability: If the process operator P is stable, then the closed-loop operator will also be stable for those controllers C that are generated by a stable Q. Furthermore, the controller synthesis problem can be conveniently addressed when P is stable and its inverse P^(-1) is also stable (open-loop stable minimum-phase process). If F denotes the desirable stable closed-loop operator, then from (54) we conclude that the choice

Q = P^(-1) F     (55)

will provide closed-loop stability and the desirable closed-loop input/output behavior. Figure 3 depicts the IMC structure with Q = P^(-1) F.

Figure 3. IMC structure for Q = P^(-1) F.

Using (51), we immediately see that the same closed-loop input/output behavior would have been obtained in a classical output feedback structure if we had chosen

C = P^(-1) F (I - F)^(-1)     (56)

Although the synthesis result can be recast within the classical output feedback framework, the Q parametrization makes the derivation much more transparent, and the IMC structure provides additional insights for its implementation. The most logical choice of closed-loop operator F is a linear time-invariant operator, because it is the linear dynamics that we clearly understand and for which we can easily express performance specifications. If this is the case, the controller C given by (56) is in effect linearizing the input/output behavior of the closed-loop system. Thus, (56) provides a solution to the macroscopic linearization problem posed in subsection 5.5 of part 1.

It must be emphasized that the treatment of the controller synthesis problem up to this point is purely macroscopic. Nothing has been said about P except that it is stable and possesses a stable inverse P^(-1). The following very important questions cannot be addressed within the abstract framework: (a) Under what condition on the operator F is C guaranteed to be proper? (b) How could P^(-1) F be simulated on line? To be able to answer these questions, one must look at the state-space description of the operator P and use the geometric notions described in part 1. If the process has relative order r, the calculation of P^(-1) will involve differentiations up to order r. Consequently, F must perform r integrations, i.e., have relative order r, in order to make C proper. This was pointed out by Economou et al. (14), who were aware of the Hirschorn inversion results. They also observed that the Hirschorn inverse "does not work" in practice. The reason that the Hirschorn inverse is inappropriate for on-line simulations is that it is internally unstable (it involves r "zero-pole cancellations" at the origin). Results on minimal realizations of inverses (see subsection 4.2 of part 1) or on nonminimal internally stable realizations of inverses have only recently been developed, and instead, Economou et al. (14) suggested the use of numerical methods for this purpose (contraction mapping and Newton iteration methods). The development of a geometric version of IMC (that would use geometric results to simulate P^(-1) F on line) is feasible but has not been done yet.

6.2. Globally Linearizing Control (GLC) of Nonlinear Processes. In the previous sections, we studied the use of static state feedback of the form (1) to alter the dynamics of nonlinear systems of the form (I); the closed-loop dynamics can be made linear and with prespecified poles. It is important to observe that feedback of the form (1) does not have integral action, and this has obvious consequences in its ability to control the system without offset. There are two possible remedies to this difficulty: (a) Introduce modifications to the theory of sections 3-5 to incorporate integral action in the state feedback law. This would lead to state feedback laws that depend on ∫h(x) dt and at the same time linearize the system. (b) Use static state feedback in an inner loop and an external linear controller with integral action around the v-y system. This would make the overall control system mixed state and output feedback. Although both options are feasible, the latter is the most meaningful from a practical point of view, if static state feedback provides input/output linearity. The resulting control structure is depicted in Figure 4 and is called the globally linearizing control (GLC) structure.

Figure 4. GLC structure.

Suppose that the process and output map are modeled by equations of the form (I) with relative order r and that the system is minimum phase. Then the appropriate linearizing state feedback is given by (28), i.e.

u = (v - L_f^r h(x) - β_1 L_f^(r-1) h(x) - ... - β_(r-1) L_f h(x) - β_r h(x))/(L_g L_f^(r-1) h(x))

and induces the linear input/output dynamics given by (29), i.e.

d^r y/dt^r + β_1 d^(r-1) y/dt^(r-1) + ... + β_(r-1) dy/dt + β_r y = v

Since the process is minimum phase, stability of the inner loop is guaranteed as long as the roots of the polynomial s^r + β_1 s^(r-1) + ... + β_(r-1) s + β_r are all in the open left half plane. The external controller will be of the form

v(t) = ∫0^t c(t - τ)[y_sp(τ) - y(τ)] dτ     (57)

and we can choose the kernel c(t) for desirable input/output behavior of the overall control system. For example, if critically damped overall closed-loop dynamics is desired, c(t) can be chosen to be the inverse Laplace transform of

(s^r + β_1 s^(r-1) + ... + β_(r-1) s + β_r)/((εs + 1)^r - 1)

This clearly possesses integral action and provides the overall closed-loop dynamics

(ε d/dt + 1)^r y = y_sp

In the special case r = 1, the external controller becomes a PI controller.

The GLC structure of Figure 4 assumes the availability of on-line measurements of all the states of the system. This is not the case in many practical applications. However, if the process is open-loop stable, the states can be easily reconstructed via an on-line simulation of the process model (open-loop observer):

dx̂/dt = f(x̂) + g(x̂)u     (58)

where x̂ denotes the state estimates. When such an observer is implanted into the GLC structure, the control structure of Figure 5 is obtained. This is a dynamic output feedback control structure, which is derived and interpreted by state-space theory.

Figure 5. GLC structure with open-loop observer.
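As an illustration of the GLC structure of Figure 4 for the simplest case r = 1 (the process below is hypothetical), note that the kernel above reduces to c(s) = (s + β_1)/(εs), i.e., a PI controller with gain 1/ε and reset time 1/β_1. The sketch simulates the inner input/output linearizing loop together with this external PI controller:

    # GLC sketch for a hypothetical first-order process (r = 1):
    #   dx/dt = -x - x**3 + (2 + sin(x)) * u,   y = x.
    # Inner loop: linearizing feedback u = (v - L_f h - b1*h)/(L_g h),
    # which enforces dy/dt + b1*y = v.  Outer loop: for r = 1 the GLC kernel
    # (s + b1)/((eps*s + 1) - 1) = (s + b1)/(eps*s) is a PI controller with
    # gain 1/eps and reset time 1/b1, giving (eps*d/dt + 1) y = ysp overall.
    import numpy as np

    dt, b1, eps, ysp = 0.001, 1.0, 0.2, 1.0
    x, integral = 0.0, 0.0

    for k in range(int(10.0 / dt)):
        y = x
        err = ysp - y
        integral += err * dt
        v = (err + b1 * integral) / eps          # external PI controller
        Lfh = -x - x**3                          # L_f h for this process
        Lgh = 2.0 + np.sin(x)                    # L_g h (never zero here)
        u = (v - Lfh - b1 * y) / Lgh             # inner linearizing feedback
        x = x + dt * (Lfh + Lgh * u)             # explicit Euler process update

    print(x)   # y approaches ysp = 1.0 with first-order dynamics, time constant eps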

7. Further Topics

7.1. Closed-Loop Observers. If we wish to control an open-loop unstable process and on-line state measurements are not available, the GLC structure with open-loop observer (see Figure 5) is, of course, inappropriate. Instead, the linearizing state feedback can be coupled with a closed-loop observer; this leads to the output feedback control structure of Figure 6.

Figure 6. GLC structure with closed-loop observer.

A closed-loop observer for a system of the form (I) is usually of the form

dx̂/dt = f(x̂) + g(x̂)u + l(y)(y - h(x̂))     (59)

where x̂ denotes the state estimates and l the observer gains, which in general depend on y. The objective is to select the observer gains for stable and fast error dynamics. The work of Krener and Isidori (39), Bestle and Zeitz (3), and Krener and Respondek (40) has identified the class of nonlinear systems for which the error dynamics of the closed-loop observer (59) can be linearized under appropriate change of coordinates and choice of l. For this class of systems, the problem of observer design has a straightforward solution. The theoretical results have found application in designing a closed-loop observer for a CSTR (29); this led to a GLC-based output feedback control algorithm for the CSTR (41). Unfortunately, the class of systems that can have closed-loop observers with linearizable error dynamics is extremely limited. The general problem of designing closed-loop observers is an open and challenging problem that future research must address.
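A bare-bones numerical illustration of an observer of the form (59) follows; the two-state system is hypothetical, and the constant gains are hand-tuned for this sketch rather than designed by the output-injection theory cited above:

    # Simulation of a closed-loop observer of the form (59),
    #   dxhat/dt = f(xhat) + g(xhat)*u + l*(y - h(xhat)),
    # for a hypothetical two-state system with assumed constant gains l.
    import numpy as np

    def f(x):  return np.array([x[1], -np.sin(x[0]) - 0.5 * x[1]])
    def g(x):  return np.array([0.0, 1.0])
    def h(x):  return x[0]

    l = np.array([4.0, 4.0])                 # assumed constant observer gains
    dt = 0.001
    x    = np.array([1.0, 0.0])              # true (unmeasured) state
    xhat = np.array([0.0, 0.0])              # observer state, wrong initial guess

    for k in range(int(20.0 / dt)):
        u = 0.5 * np.sin(0.2 * k * dt)       # arbitrary known input signal
        y = h(x)
        x    = x    + dt * (f(x)    + g(x)    * u)
        xhat = xhat + dt * (f(xhat) + g(xhat) * u + l * (y - h(xhat)))

    print(np.abs(x - xhat))                  # estimation error decays toward zero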
Although we intuitively expect that the same tradeoffs between performance and robustness would also be present in nonlinear control, a quantitative assessment of robustness characteristics for nonlinear control systems

is far from being feasible. At the moment, only limited results are available under severely restrictive assumptions, including infinitesimal perturbations, perturbations within conic sectors, state model perturbations that satisfy mathcing conditions, etc. Representative results in this area can be found in refs 4, 7, 13, 14, 28, 31, 32, and 46. 7.5. Feedforward/Feedback Control of Nonlinear Systems. It is well-known that many disturbances can be measured on-line. In this case, one would like to utilize these on-line measurements in a feedforward/feedback configuration. This leads to a more general feedforward/feedback control problem. Daoutidis and Kravaris (11) solved this problem in the context of input/output linearization and developed a feedforward/feedback GLC structure. Calvet and Arkun (5, 6 ) developed a feedforward/feedback methodology in the context of Su-HuntMeyer linearization and for the special case where the disturbances and the manipulated input appear in the same scalar function in the state-space model. Finally, the incorporation of measured disturbances in the nonlinear IMC structure is addressed by Parrish and Brosilow (44). 7.6. Multivariable Nonlinear Control. Almost all the

major results presented in this paper can be extended to MIMO systems. The extension from SISO to MIMO is not only nontrivial, but also additional issues arise that are characteristic to multivariable problems. The Hirschorn inversion results of subsection 4.1 of part 1 have been extended to MIMO systems in refs 17,42, and 45. The concept of zero dynamics (subsection 4.2 of part 1) has three nonequivalent generalizations in MIMO systems (25). The Su-Hunt-Meyer theory carries over to MIMO systems (21),but the conditions for solvability of the linearization problem become much more restrictive. Input/output linearization theory is available for MIMO systems (10, 26, 36), but it is no longer true that every system can be made linear in an input/output sense via static state feedback. The problem of full linearization has been addressed in a MIMO setting (49),and the conditions for solvability become even more restrictive. A key problem, which is special to multivariable control, is the one of decoupling via feedback. This problem is completely solved in refs 16, 24, 43, and 50. Input/output decoupling is not always feasible via static state feedback; integral action may be necessary. The noninteracting control problem is solvable under rather restrictive conditions. The IMC and GLC control structures are both applicable to multivariable control problems (14,36). Like in the SISO case, they are restricted to minimum-phase processes. At present, multivariable nonlinear control is a very active area. Although significant advances have been made in recent years, there are still many open issues that need to be clarified.

Acknowledgment We are grateful to the reviewers Frank Doyle, Manfred Morari, Coleman Brosilow, Jean-Paul Calvet, and Yaman Azkun for their helpful comments and suggestions, which have “shaped” the structure and content of the revised version. We are also indebted to Prodromos Daoutidis for numerous suggestions that resulted in major improvements in the technical content of the revised version.

Literature Cited (1) Abramowitz, M.; Stegun, I. Handbook of Mathematical Func-

tions; Dover: New York, 1965. (2) Alsop, A. W.; Edgar, T. F. Nonlinear Heat Exchanger Control

Ind. Eng. Chem. Res., Vol. 29, No. 12, 1990 2323 through the Use of Partially Linearized Control Variables. Chem. Eng. Commun. 1989, 75, 155-170. (3) Bestle, D.; Zeitz, M. Canonical Form Observer Design for Nonlinear Time-Variable Systems. Int. J. Control 1983, 38, 419. (4) Calvet, J.-P.; Arkun, Y. Design of P and P I Stabilizing Controllers for Quasi-Linear Systems. Comput. Chem. Eng. 1990,14, 4 15-426. (5) Calvet, J.-P.; Arkun, Y. Feedforward and Feedback Linearization of Nonlinear Systems with Disturbances. Int. J. Control 1988,48, 1551-1559. (6) Calvet, J.-P.; Arkun, Y. Feedforward and Feedback Linearization of Nonlinear Systems with Disturbances and Its Implementation Using Internal Model Control. Ind. Eng. Chem. Res. 1988, 27, 1822-1831. (7) Calvet, J.-P.; Arkun, Y. Stabilization of Feedback Linearized Nonlinear Processes under Bounded Perturbations. Proc. 1989 Amer. Control Conf. Pittsburgh, PA, June 1989; pp 747-752. (8) Claude, D. Dlcouplage et Linearization des Systemes Non Linlaires par Bouclages Statiques. These d'Etat, Universitl de Paris-Sud, 1986. (9) Claude, D. Everything you always wanted to know about linearization but were afraid to ask. In Algebraic and Geometric Methods in Nonlinear Control Theory; Fliess, M., Hazewinkel, M., Eds.; D. Reidel Publishing Co.: Dordrecht, 1986. (10) Claude, D.; Fliess, M.; Isidori, A. Immersion, Directe et par Bouclage d'un Systeme Non Lineaire dans un Lineaire. C. R. Hebd. S&anc.Acad. Sci. Paris 1983,296, 237-240. (11) Daoutidis, P.; Kravaris, C. Synthesis of Feedforward/State Feedback Controllers for Nonlinear Processes. AIChE J. 1989, 35, 1602-1616. (12) DiBiasio, D.; Lim, H. C.; Weigand, W. A. An Experimental Investigation of Stability and Multiplicity of Steady States in a Biological Reactor. AIChE J. 1981, 27, 284-292. (13) Doyle, F. J., 111; Packard, A. P.; Morari, M. Robust Controller Design for a Nonlinear CSTR. Chem. Eng. Sci. 1989, 44, 1929-1947. (14) Economou, C. G.; Morari, M.; Palsson, B. 0. Internal Model Control. 5. Extension to Nonlinear Systems. Ind. Eng. Chem. Process Des. Deu. 1986, 25,403-411. (15) Goldthwait, R. G.; Hunt, L. R. Nonlinear System Approximations. Proc. 26th IEEE CDC, Los Angeles, Dec 1987; pp 1752-1756. (16) Ha, I. J.; Gilbert, E. A Complete Characterization of Decoupling Control Laws for a General Class of Nonlinear Systems. IEEE Trans. Autom. Control 1986,31, 823-829. (17) Hirschorn, R. M. Invertibility of Multivariable Nonlinear Control Systems. IEEE Trans. Autom. Control 1979,24,855-865. (18) Hoo, K. A.; Kantor, J. C. An Exothermic Continuous Stirred Tank Reactor is Feedback Equivalent to a Linear System. Chem. Eng. Commun. 1985, 37, 1-10. (19) Hoo, K. A.; Kantor, J. C. Global Linearization and Control of a Mixed Culture Bioreactor with Competition and External Inhibition. Math. Biosci. 1986, 82, 43-62. (20) Hoo, K. A.; Kantor, J. C. Linear Feedback Equivalence and Control of an Unstable Biological Reactor. Chem. Eng. Commun. 1986,46, 385-399. (21) Hunt, L. R.; Su, R.; Meyer, G. Design for Multi-Input Nonlinear Systems. In Differential Geometric Control Theory; Brockett, R. W., Millman, R. S., Sussman, H. J., Eds.; Birkhauser: Boston, 1983. (22) Hunt, L. R.; Su, R.; Meyer, G. Global Transformations of Nonlinear Systems. IEEE Trans. Autom. Control 1983, 28, 24. (23) Isidori, A. Nonlinear Control Systems: An Introduction. Lecture Notes in Control and Information Science; SpringerVerlag: Berlin, Germany, 1985; Vol. 72. (24) Isidori, A.; Krener, A. J.; Gorigiorgi, C.; Monaco, S. 
Nonlinear Decoupling via Feedback: a Differential Geometric Approach. IEEE Trans. Autom. Control 1981,26, 331-345. (25) Isidori, A.; Moog, C. H. On the Nonlinear Equivalent of the Notion of Transmission Zeros. In Modeling and Adaptive Con-

trol, Proc. IIASA Conf., Sopron; Byrnes, C. I., Kurzhanski, A., Eds.; Springer-Verlag: Berlin, 1988. (26) Isidori, A,; Ruberti, A. On the Synthesis of Linear Input/ Output Responses for Nonlinear Control Systems. Syst. Control Lett. 1984, 4, 17-22. (27) Kantor, J. C. Stability of State Feedback Transformations for Nonlinear Systems-Some Practical Considerations. Proc. 1986 Amer. Control Conf. Seattle, WA, June 1986; pp 1014-1016. (28) Kantor, J. C. An Overview of Nonlinear Geometrical Methods for Process Control. In Shell Process Control Workshop; Prett, D. M., Morari, M., Eds.; Butterworth London, 1987; pp 225-250. (29) Kantor, J. C. A Finite Dimensional Observer for an Exothermic Stirred-Tank Reactor. Chem. Eng. Sci. 1989, 44, 1503-1510. (30) Kantor, J. C.; Keenan, M. R. Stability Constraints for Nonlinear Static State Feedback. Proc. 1987 Amer. Control Conf., Minneapolis, MN, June 1987; pp 2126-2131. (31) Kravaris, C. Input/Output Linearization: A Nonlinear Analog of Placing Poles a t the Process Zeros. AIChE J. 1988, 34, 1803-1812. (32) Kravaris, C.; Chung, C. B. Nonlinear State Feedback Synthesis by Global Input/Output Linearization. AIChE J. 1987, 33, 592-603. (33) Kravaris, C.; Daoutidis, P. Nonlinear State Feedback Control of Second-order Non-minimum-phase Nonlinear Systems. Comput. Chem. Eng. 1990,14, 439-449. (34) Kravaris, C.; Palanki, S. A Lyapunov Approach for Robust Nonlinear State Feedback Synthesis. IEEE Trans. Autom. Control 1988, 33, 1188-1191. (35) Kravaris, C.; Palanki, S. Robust Nonlinear State Feedback Under Structured Uncertainty. AZChE J. 1988, 34, 1119-1127. (36) Kravaris, C.; Soroush, M. Synthesis of Multivariable Nonlinear Controllers by Input/Output Linearization. AIChE J. 1990,36, 249-264. (37) Kravaris, C.; Wright, R. A. Deadtime Compensation for Nonlinear Processes. AIChE J . 1989, 35, 1535-1542. (38) Krener, A. J. Approximate Linearization by State Feedback and Coordinate Change. Syst. Control Lett. 1984,5, 181-185. (39) Krener, A. J.; Isidori, A. Linearization by Output Injection and Nonlinear Observers. Syst. Control Lett. 1983,3, 47-52. (40) Krener, A. J.; Respondek, W. Nonlinear Observers with Linearizable Error Dynamics. SIAM J.Control Optim. 1985,197-216. (41) Limqueco, L. C.; Kantor, J. C. Nonlinear Output Feedback Control of an Exothermic Reactor. Comput. Chem. Eng. 1990,14, 427-437. (42) Nijmeijer, H. Invertibility of Affine Nonlinear Control Systems: A Geometric Approach. Syst. Control Lett. 1982, 2, 122-129. (43) Nijmeijer, H.; Schumacher, J. M. The Regular Local Noninteracting Control Problem for Nonlinear Control Systems. SIAM J. Control Optim. 1986,24, 1232-1245. (44) Parrish, J. R.; Brosilow, C. B. Nonlinear Inferential Control. AIChE J. 1988, 34, 633-644. (45) Singh, S. N. A Modified Algorithm for Invertibility in Nonlinear Control Systems. IEEE Trans. Autom. Control 1981, 26, 595-598. (46) Su, R. On the Linear Equivalents of Nonlinear Systems. Syst. Control Lett. 1982, 2, 48-52. (47) Su, R.; Hunt, L. R. Canonical Expansions for Nonlinear Systems. IEEE Trans. Autom. Control 1986,31,670-673. (48) Su, R.; Meyer, G.; Hunt, L. R. Robustness in Nonlinear Control. In Differential Geometric Control Theory; Brockett, R. W., Millman, R. S., Sussman, H. J., Eds.; Birkhauser: Boston, 1983. (49) Tarn, T. J.; Cheng, D.; Isidori, A. Pfaffian Basis for Affine Nonlinear Systems. Proc. 26th IEEE CDC, Los Angeles, Dec 1987; pp 493-504. (50) Van der Schaft, A. J. Linearization and Input/Output Decoupling for General Nonlinear Systems. Syst. Control Lett. 
1984, 5, 27-33.

Received for review June 1, 1990 Accepted June 28, 1990