

Ind. Eng. Chem. Res. 2002, 41, 220-229

Gain-Scheduling Controllers Based on Velocity-Form Linear Parameter-Varying Models Applied to an Example Process

Rasmus H. Nyström,* Bernt M. Åkesson,† and Hannu T. Toivonen‡

Process Control Laboratory, Faculty of Chemical Engineering, Åbo Akademi University, FIN-20500 Turku, Finland

A highly nonlinear pH control process is used as an example process to compare a number of methods for designing nonlinear, discrete-time controllers. The nonlinear system is approximated as a linear parameter-varying system based on a set of velocity-form linearizations. The focus is on control methods which minimize a quadratic cost by using the solution of a Riccati equation in connection with gain scheduling. The compared methods include switching control, interpolation of linear controller parameters, gain scheduling with a nonlinear state estimator, a time-varying Riccati equation approach which uses an estimate of the future of the system, and a Riccati equation approach which utilizes an estimated upper bound on the future variations to guarantee an upper bound on the quadratic cost.

1. Introduction

Gain scheduling is a conceptually simple method for the control of nonlinear or time-varying processes which has found many applications in industry. It has been used successfully for chemical processes, flight control, and vehicles, among others; see, for instance, ref 1 or 2 and the references in ref 3. The basic concept of gain scheduling is to have a number of linear controllers which are in some sense optimal at several operating points. These controllers are used online by performing some form of interpolation between their parameters or outputs, based on a scheduling parameter θ.

For a long time it has been acknowledged that the theory of gain scheduling is not at the same level as its applications.4 Formerly, the only guidelines were that the scheduling parameter should be slowly varying, that it should capture the nonlinearity, and, preferably, that it should be external.3 A common approach when designing a controller scheduling system has been to linearize or "freeze" the time-varying system at a number of operating points and to analyze the frozen system at these points.
Then, a minimum requirement is that the linearized closed-loop system meets the control specifications in a local sense. This is the so-called linearization property. In the early 1990s, important contributions were made in which the significance of the linearization property was observed.3-5 The scheduling parameter was mostly assumed to be exogenous or to be an approximation of an exogenous variable. An important criterion was that it should be slowly varying, a reflection of the assumption of a frozen system. In the important paper,4 a pole-placement scheme with state feedback which locally satisfies the linearization criterion is derived. The role of the time derivative of the scheduling variable, as well as the connection with linear parameter-varying (LPV) systems, is pointed out. In ref 5, velocity-form gain-scheduled controllers are used to make the linearization property hold. Recent surveys which provide an extensive overview of the field of gain scheduling for continuous-time systems can be found in refs 6 and 7.

* Corresponding author. Phone: +358-2-2154451. Fax: +358-2-2154479. E-mail: [email protected]. † E-mail: [email protected]. ‡ E-mail: [email protected].

It has been noted that an examination of the locally linearized closed-loop systems alone does not guarantee the global performance of gain-scheduled controllers.3,8 To address this shortcoming, the use of dynamic linearizations, where the system is linearized about a specified trajectory (see the references in ref 2), as well as the use of off-equilibrium linearizations in a more generalized setting, has been proposed.2,9 The former approach has the advantage that the problem of increased dimensionality is kept within reasonable limits, but it is only valid for a nominal trajectory, from which the actual system might diverge when the control is not perfect. The transformation of the system into velocity form provides a powerful way of guaranteeing that the off-equilibrium linearizations are a correct approximation of the nonlinear system; cf. Leith and Leithead in refs 10 and 11. A particularity of the velocity form is that explicit information about equilibria is lost. This does not represent a problem in the feedback control designs considered in this study, which use integral action to eliminate steady-state offsets. An alternative approach, more suitable with respect to equilibrium information, is the use of piecewise affine models; cf. ref 2 or 12.

Using off-equilibrium linearizations to characterize the system is still not enough to provide globally optimal control if the control is based on the current state of the system and no information about the system's future behavior is used. This has been partly remedied by recent synthesis techniques for LPV systems. In these approaches, it is assumed that bounds exist on the scheduling parameter and/or its time derivative.
Performance or robustness properties of the system are then guaranteed by linear matrix inequality (LMI) techniques.13-15 The approach is particularly useful if the values of the scheduling parameter are unknown in advance, i.e., if it is external. Some applications of the approach can be found in refs 16-18.

In this study, discrete-time methods for gain-scheduling-type control are compared and their advantages and drawbacks are discussed. The objective of this paper is to study Riccati-based schemes in order to obtain functioning nonlinear control. As an example process, a highly nonlinear pH neutralization reactor19 is used. The control objective is to obtain good handling of setpoint changes over different operating regions, a problem which a previous study has demonstrated to be nontrivial. A velocity-form LPV model of the process is introduced and augmented to yield optimal handling of setpoint changes with a reference trajectory. Various approaches for minimizing the resulting quadratic cost (i.e., the LQ- or H2-type cost) using the velocity-form LPV system representation are studied. The LQ methods are based on time-varying Riccati equation approaches. A particularity of the methods is that they do not require the scheduling to be slowly varying in order to work. For a recent study on LQ design in connection with piecewise affine modeling, see ref 12. The compared LQ-based methods include switching control, scheduling of locally optimal controllers through interpolation between the controller parameters, gain scheduling with a nonlinear state estimator, a time-varying Riccati equation approach based on an estimate of the future process behavior, and finally an approach which is based on guaranteeing an upper bound on the LQ cost, given an estimate of the future process variations. All of the methods are derived and implemented using discrete-time control.

10.1021/ie010057+ CCC: $22.00 © 2002 American Chemical Society. Published on Web 12/18/2001.

2. Velocity-Form LPV Systems

In this section, the concept of state-dependent linear representations is introduced and the application of velocity-form descriptions to such systems is motivated. The notion of velocity-form state-dependent linear representations is then extended to velocity-form LPV systems. Finally, it is described how a discrete-time equivalent of such a system is obtained.

2.1. Velocity-Form State-Dependent Linear Representations. Consider a nonlinear system of the general form

$$\begin{aligned} \dot{x}_c &= f(x_c, u_c) \\ y_c &= h(x_c, u_c) \end{aligned} \tag{1}$$

where $x_c$ denotes the state of the system, $u_c$ denotes the input, $y_c$ denotes the measured output, and $f(\cdot,\cdot)$ and $h(\cdot,\cdot)$ are nonlinear functions. We are interested in describing (1) by a so-called state-dependent linear representation (SDLR),8 given by

$$\begin{aligned} \dot{x}_c &= A_c(x_c)\, x_c + B_c(x_c)\, u_c \\ y_c &= C_c(x_c)\, x_c + D_c(x_c)\, u_c \end{aligned} \tag{2}$$

This representation is appealing because of its similarity to a general linear state-space system. As has been stated in ref 8, there are an infinite number of ways to represent (1) as (2). One approach, stemming from the traditional linearization around a steady state, is to introduce the off-equilibrium2 linearization matrices

$$A_c(x_c,u_c) := \frac{\partial f}{\partial x_c^T}\bigg|_{x_c,u_c}, \quad B_c(x_c,u_c) := \frac{\partial f}{\partial u_c^T}\bigg|_{x_c,u_c}, \quad C_c(x_c,u_c) := \frac{\partial h}{\partial x_c^T}\bigg|_{x_c,u_c}, \quad D_c(x_c,u_c) := \frac{\partial h}{\partial u_c^T}\bigg|_{x_c,u_c} \tag{3}$$

where

$$\frac{\partial f}{\partial x_c^T}\bigg|_{x_c,u_c} = \begin{bmatrix} \dfrac{\partial f_1}{\partial x_c^T}\Big|_{x_c,u_c} \\ \vdots \\ \dfrac{\partial f_n}{\partial x_c^T}\Big|_{x_c,u_c} \end{bmatrix}, \text{ etc.} \tag{4}$$

Freezing the system about a steady state $(x_s, u_s)$ yields the linearized system

$$\begin{aligned} \dot{x}_c &= A_c(x_s,u_s)\,(x_c - x_s) + B_c(x_s,u_s)\,(u_c - u_s) \\ y_c - y_s &= C_c(x_s,u_s)\,(x_c - x_s) + D_c(x_s,u_s)\,(u_c - u_s) \end{aligned} \tag{5}$$

This is a first-order Taylor series approximation with discarded higher-order terms, which approximates the system (1) perfectly when $(x_c - x_s) \to 0$ and $(u_c - u_s) \to 0$. The characterization (5) suggests that this form could be "unfrozen" into

$$\begin{aligned} \dot{x}_c &= A_c(x_c,u_c)\,(x_c - x_0) + B_c(x_c,u_c)\,(u_c - u_0) =: f_u(x_c,u_c) \\ y_c - y_0 &= C_c(x_c,u_c)\,(x_c - x_0) + D_c(x_c,u_c)\,(u_c - u_0) =: g_u(x_c,u_c) \end{aligned} \tag{6}$$

where $x_0$, $u_0$, and $y_0$ are ad hoc offsets, which are not necessarily equal to stationary values. The unfrozen form (6), however, is not equivalent to (1). One need only examine the linearization property5 to confirm this. According to the linearization criterion, the constructed system should yield the same result as the original system when linearized, also at off-equilibrium points. However, differentiation of (6) yields

$$\frac{\partial f_u}{\partial x_c^T} = A_c(x_c,u_c) + \frac{\partial [A_c(x_c,u_c)\,(\bar{x}_c - x_0)]}{\partial x_c^T}\bigg|_{\bar{x}_c = x_c} + \frac{\partial [B_c(x_c,u_c)\,(u_c - u_0)]}{\partial x_c^T}\bigg|_{\bar{x}_c = x_c} \neq A_c(x_c,u_c), \text{ etc.} \tag{7}$$

Obviously, the error term is identically zero only when the derivatives of $A_c(x_c,u_c)$ and $B_c(x_c,u_c)$ are equal to zero, i.e., when the system is frozen, or at some specific point obtained by manipulating $x_0$ and $u_0$. Another criterion which shows that the representation (6) is incorrect is that the stationary variables $x_s$ and $y_s$ which result from a stationary input $u_s$ in general do not match, unless the offsets $x_0$ and $u_0$ are appropriately selected. An example will illustrate this.

Example. For the system

$$\begin{aligned} \dot{x}_c &= x_c^3 + u_c \\ y_c &= x_c \end{aligned} \tag{8}$$

the input $u_c = u_s$ gives the stationary state and output

$$y_s = x_s = -u_s^{1/3} \tag{9}$$
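The stationary relation (9) is easy to check numerically. The sketch below (Python; the particular values of $u_s$ are arbitrary test inputs, not taken from the paper) verifies that the right-hand side of (8) vanishes at $x_s = -u_s^{1/3}$:

```python
import numpy as np

def f(x, u):
    # Right-hand side of the example system (8): x_dot = x^3 + u
    return x**3 + u

# Check the stationary relation (9): y_s = x_s = -u_s^(1/3)
for u_s in (0.125, 1.0, 8.0):
    x_s = -np.cbrt(u_s)
    assert abs(f(x_s, u_s)) < 1e-12  # the derivative vanishes at the steady state
```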


The differentiated plant according to (3) is given by

$$A_c(x_c,u_c) = 3x_c^2, \quad B_c(x_c,u_c) = 1, \quad C_c(x_c,u_c) = 1, \quad D_c(x_c,u_c) = 0 \tag{10}$$

A candidate for an SDLR description is then

$$\begin{aligned} \dot{x}_c &= 3x_c^2(x_c - x_0) + (u_c - u_0) \\ y_c - y_0 &= x_c - x_0 \end{aligned} \tag{11}$$

Redifferentiating this system yields

$$A_c(x_c,u_c) = 9x_c^2 - 6x_c x_0, \quad B_c(x_c,u_c) = 1, \quad C_c(x_c,u_c) = 1, \quad D_c(x_c,u_c) = 0 \tag{12}$$

and inserting the steady-state variables from (9) into the steady-state version of (11) yields

$$x_0 = -\frac{2}{3} u_s^{1/3} - \frac{1}{3} u_s^{-2/3} u_0, \qquad y_0 = x_0$$

Let $(x_{s1}, u_{s1}, y_{s1})$ denote an arbitrary stationary point. Setting $u_0 = u_{s1}$ and $y_0 = x_0 = x_{s1}$ in the nonlinear system above will guarantee that it possesses the correct dynamics and the correct correspondence between stationary signals in the direct vicinity of this stationary point. However, if the same is to hold true for another stationary point $(x_{s2}, u_{s2}, y_{s2})$, the offsets $u_0$ and $x_0$ must generally be redefined as $u_0 = u_{s2}$ and $y_0 = x_0 = x_{s2}$. In other words, the nonlinear system (11) is unable to describe both stationary points correctly.

A way to solve the above problems, without abandoning the idea of using off-equilibrium linearizations according to (3), is to transfer the system into velocity form.10,11 Differentiating the system (1) with respect to time t yields

$$\begin{aligned} \ddot{x}_c &= \frac{d}{dt} f(x_c,u_c) = \frac{\partial f}{\partial x_c^T}\bigg|_{x_c,u_c} \dot{x}_c + \frac{\partial f}{\partial u_c^T}\bigg|_{x_c,u_c} \dot{u}_c = A_c(x_c,u_c)\,\dot{x}_c + B_c(x_c,u_c)\,\dot{u}_c \\ \dot{y}_c &= \frac{d}{dt} h(x_c,u_c) = \frac{\partial h}{\partial x_c^T}\bigg|_{x_c,u_c} \dot{x}_c + \frac{\partial h}{\partial u_c^T}\bigg|_{x_c,u_c} \dot{u}_c = C_c(x_c,u_c)\,\dot{x}_c + D_c(x_c,u_c)\,\dot{u}_c \end{aligned} \tag{13}$$

This is an exact representation of the system (1), without higher-order terms.

Example Continued. We can write the system in (8) as

$$\begin{aligned} \ddot{x}_c &= \frac{d}{dt}(x_c^3 + u_c) = 3x_c^2\,\dot{x}_c + \dot{u}_c = A_c(x_c,u_c)\,\dot{x}_c + B_c(x_c,u_c)\,\dot{u}_c \\ \dot{y}_c &= \dot{x}_c = C_c(x_c,u_c)\,\dot{x}_c + D_c(x_c,u_c)\,\dot{u}_c \end{aligned}$$
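The exactness claim can be checked numerically. The following sketch (forward-Euler integration; the test input $u(t) = \sin t$, the initial condition, and the step sizes are our own choices, not from the paper) propagates both the original system (8) and its velocity form and compares the resulting state trajectories:

```python
import numpy as np

# Compare the example system (8), x_dot = x^3 + u, with its velocity
# form: x_ddot = 3 x^2 x_dot + u_dot.  Both are integrated by forward
# Euler from the same initial condition; the trajectories should agree
# up to integration error.
dt, T = 1e-4, 0.2
t = np.arange(0.0, T, dt)
u, u_dot = np.sin(t), np.cos(t)

x_direct = -1.0                  # state of the original system (8)
x_vel = -1.0                     # state reconstructed from the velocity form
v = x_vel**3 + u[0]              # consistent initial velocity x_dot(0)

max_diff = 0.0
for k in range(len(t) - 1):
    x_direct += dt * (x_direct**3 + u[k])
    x_vel += dt * v
    v += dt * (3.0 * x_vel**2 * v + u_dot[k])
    max_diff = max(max_diff, abs(x_direct - x_vel))

assert max_diff < 1e-2           # the two integrations agree closely
```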

In contrast to (11), this is an exact representation of the system (8).

The representation given in (13) implies that a control law should yield $\dot{u}_c$ rather than $u_c$ as a function of the measurement $y_c$. Thus, the controller will also be in velocity form. Kaminer et al.5 introduced gain-scheduled controllers in velocity form in order to satisfy the linearization condition. In ref 19 also, the use of velocity-form controllers has been shown to be beneficial.

The velocity-form approach has the drawback of eliminating explicit information about equilibria. An alternative approach to velocity-form modeling, more suited for correct handling of such information, is the use of piecewise affine models; cf. ref 2 or 12. This corresponds to having additional constant offsets in the expression (6). For feedback control, however, the velocity-form linearization is advantageous because the integrating action will compensate for unknown load disturbances.

In practical applications, it is common to approximate the system described in (13) by interpolating between a number of linear systems. The question of whether this approximation is valid then also needs to be addressed. In the setup presented here, the approach of interpolating between linear matrices presents no problem. Because the SDLR system (13) is an exact representation of the system (1), at a given state and input its dynamics will exactly match that of the system (1), provided that the matrices in (3) do so. Thus, it is only a matter of spacing the models tightly enough to obtain a sufficiently accurate approximation.11

2.2. Velocity-Form LPV Systems. Consider now a system described by

$$\begin{aligned} \ddot{x}_c &= A_c(\theta)\,\dot{x}_c + B_c(\theta)\,\dot{u}_c \\ \dot{y}_c &= C_c(\theta)\,\dot{x}_c + D_c(\theta)\,\dot{u}_c \end{aligned} \tag{14}$$

This is an LPV system in velocity form. While an LPV system normally refers to a system with an external scheduling variable, the scheduling variable may also be taken to be state dependent.3 In (14) the vector of scheduling parameters is denoted θ, and it may be either external or a function of the states. In this study, we focus on scheduling parameters θ which are (strongly) dependent on the state. In its most nonrestrictive form, an LPV system is identical to the SDLR system. However, difficulties tend to arise if the whole state space and input space are used for scheduling, because the dimensionality of the problem soon becomes intractable. If possible, a small subset of (x_c, u_c) that captures the nonlinearity well should be used.

The problem of controlling an LPV system such as (14) has received some attention in the literature,13-15 and a number of applications can be found in refs 16-18. In these methods, it is mostly assumed that the scheduling variable is exogenous or can be approximated as such (i.e., its dependence on the state is assumed to be weak). The methods usually require the existence of bounds on the scheduling variable θ and/or its derivative and use this information, through solving a number of LMIs, to guarantee robustness or performance. In these studies, the actual trajectory of the scheduling parameter is typically not used. If it is assumed that the scheduling parameter can be calculated from the future outputs, its trajectory can be estimated in advance, and some form of predictive control can be considered as an alternative. In this way the conservatism of the design can be reduced.

2.3. Discrete-Time Velocity Forms. To design digital controllers, the above expressions for LPV systems should be replaced by suitable discrete-time equivalents. A convenient approach is to directly discretize the velocity-form LPV representation in (14). The discretization can be simplified by assuming that θ is constant within a sampling interval.


Let $T_s$ denote the sampling time and assume that we have a zero-order hold on the continuous-time input $u_c$, i.e.,

$$\dot{u}_c = \sum_{k=0}^{\infty} \Delta u(k)\, \delta(t - kT_s) \tag{15}$$

where $u(k)$ denotes the discrete-time value of the input, $\Delta u(k)$ is the difference $u(k) - u(k-1)$, and $\delta(\cdot)$ is Dirac's delta function. Within a sampling interval, the continuous-time system (14) is then described by

$$\begin{bmatrix} \ddot{x}_c \\ \dot{y}_c \end{bmatrix} = \begin{bmatrix} A_c(\theta) & 0 \\ C_c(\theta) & 0 \end{bmatrix} \begin{bmatrix} \dot{x}_c \\ y_c \end{bmatrix}, \qquad t \neq kT_s \tag{16}$$

Integration over a sampling interval boundary yields

$$\begin{bmatrix} \dot{x}_c \\ y_c \end{bmatrix}(kT_s + \epsilon) = \begin{bmatrix} \dot{x}_c \\ y_c \end{bmatrix}(kT_s) + \int_{kT_s}^{kT_s+\epsilon} \left( \begin{bmatrix} A_c(\theta) & 0 \\ C_c(\theta) & 0 \end{bmatrix} \begin{bmatrix} \dot{x}_c \\ y_c \end{bmatrix} + \begin{bmatrix} B_c(\theta) \\ D_c(\theta) \end{bmatrix} \dot{u}_c \right) dt = \begin{bmatrix} \dot{x}_c \\ y_c \end{bmatrix}(kT_s) + \begin{bmatrix} B_c(\theta) \\ D_c(\theta) \end{bmatrix} \Delta u(k), \quad \text{when } \epsilon \downarrow 0 \tag{17}$$

If the sampling time $T_s$ is sufficiently small, the matrices $A_c(\theta)$ and $C_c(\theta)$ can be assumed to be constant within a sampling interval (i.e., a zero-order hold is assumed to affect θ also). Then, within a sampling interval, the system (16) yields the solution

$$\begin{bmatrix} \dot{x}_c \\ y_c \end{bmatrix}(kT_s + T_s) = \exp\left( \begin{bmatrix} A_c(\theta) & 0 \\ C_c(\theta) & 0 \end{bmatrix} T_s \right) \begin{bmatrix} \dot{x}_c \\ y_c \end{bmatrix}(kT_s) + \exp\left( \begin{bmatrix} A_c(\theta) & 0 \\ C_c(\theta) & 0 \end{bmatrix} T_s \right) \begin{bmatrix} B_c(\theta) \\ D_c(\theta) \end{bmatrix} \Delta u(k) \tag{18}$$

Thus, we obtain the discrete-time system

$$\begin{aligned} x(k+1) &= F(\theta)\, x(k) + G(\theta)\, \Delta u(k) \\ y(k) &= H x(k) \end{aligned} \tag{19}$$

where

$$x(k) = \begin{bmatrix} \dot{x}_c \\ y_c \end{bmatrix}(kT_s), \qquad y(k) = y_c(kT_s)$$

and

$$F(\theta) := \exp\left( \begin{bmatrix} A_c(\theta) & 0 \\ C_c(\theta) & 0 \end{bmatrix} T_s \right), \qquad G(\theta) := F(\theta) \begin{bmatrix} B_c(\theta) \\ D_c(\theta) \end{bmatrix}, \qquad H := [\,0 \;\; I\,] \tag{20}$$
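The map (20) from the continuous-time velocity-form matrices to {F(θ), G(θ), H} can be sketched directly with a matrix exponential. The code below is an illustration only; the first-order example matrices are hypothetical, not the paper's pH models:

```python
import numpy as np
from scipy.linalg import expm

def discretize_velocity_form(Ac, Bc, Cc, Dc, Ts):
    """Discretize the velocity-form LPV model (14) according to (20).

    Returns F, G, H for x(k+1) = F x(k) + G du(k), y(k) = H x(k),
    where x(k) stacks [x_dot_c; y_c] at the sampling instants.
    """
    n, p = Ac.shape[0], Cc.shape[0]
    M = np.block([[Ac, np.zeros((n, p))],
                  [Cc, np.zeros((p, p))]])
    F = expm(M * Ts)
    G = F @ np.vstack([Bc, Dc])
    H = np.hstack([np.zeros((p, n)), np.eye(p)])
    return F, G, H

# Illustrative first-order example (hypothetical matrices):
Ac = np.array([[-1.0]]); Bc = np.array([[2.0]])
Cc = np.array([[1.0]]);  Dc = np.array([[0.0]])
F, G, H = discretize_velocity_form(Ac, Bc, Cc, Dc, Ts=0.2)
```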

3. Example Process: pH Control

The example process studied in this paper is a continuous stirred tank reactor (CSTR) with continuously neutralized contents;19 see Figure 1. Such processes are a popular benchmark for robust control design because of the nonlinearity they exhibit and the ensuing difficulty of obtaining good control in transition regions.20 In addition, pH control systems give rise to difficult scheduling problems because of fast-changing and state-dependent scheduling variables. The feed stream to the CSTR is a water solution of phosphoric acid. The pH value is controlled using an input stream of a calcium hydroxide solution of constant concentration by manipulation of the flow rate. The only measured variable of the process is the pH value.

Figure 1. Example process setup.

The process is modeled by first principles and some empirically obtained reaction rate equations.19 It is digitally controlled in discrete time, with the sampling interval being equal to Ts = 0.2 min. White noise of covariance 0.001 is assumed to affect the measurement, and a total time delay of 1 min is assumed to occur prior to the input of the process. This is assumed to account for flow delays, as well as measurement and actuator dynamics.

In a previous study21 it has been shown that the pH system can be well described by a velocity-form LPV system with an integrated output, where the value of the integrator is used for scheduling. The pH value is taken as the scheduling variable θ, by mapping each pH value to the linearization at the stationary state which yields this pH value as a stationary output of the process. This approach leads to an approximation of the SDLR system (3), which has turned out to give sufficient precision to be used for feedback control, with a minimal dimension of θ. Alternatively, the system could be linearized along a specific trajectory by dynamic linearizations.2 This would lead to the same dimension of θ but would require assumptions about the state trajectories, and different sets of models would be needed for different transitions.

Using the above approximation and discretization according to (19), local discrete linearizations {F(θ), G(θ), H} of order 3 are obtained. Because the models are of low order and there is only one scheduling variable, there is no practical reason to restrict the number of models which are used for interpolation. This makes it possible to have a finely spaced table of model parameters. Apart from this approximation and the white noise, no external disturbances are present in the simulated system.
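A finely spaced model table of the kind described above can be sketched as follows. The θ grid and the model family below are hypothetical placeholders, not the paper's tabulated third-order pH models; the point is only the lookup-and-interpolate mechanism:

```python
import numpy as np

# Hypothetical finely spaced table of local models over the scheduling
# variable theta (here a pH grid), standing in for {F(theta), G(theta), H}.
theta_grid = np.linspace(2.0, 12.0, 201)
F_table = np.stack([np.eye(3) * (0.9 - 0.01 * th) for th in theta_grid])
G_table = np.stack([np.ones((3, 1)) * (0.1 + 0.005 * th) for th in theta_grid])

def scheduled_model(theta):
    """Linearly interpolate F(theta), G(theta) between tabulated models."""
    theta = np.clip(theta, theta_grid[0], theta_grid[-1])
    i = int(np.searchsorted(theta_grid, theta))
    i = min(max(i, 1), len(theta_grid) - 1)
    w = (theta - theta_grid[i - 1]) / (theta_grid[i] - theta_grid[i - 1])
    F = (1 - w) * F_table[i - 1] + w * F_table[i]
    G = (1 - w) * G_table[i - 1] + w * G_table[i]
    return F, G

F, G = scheduled_model(7.0)
```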
A general property of a controller whose design is based on the LPV model is, however, that its ensuing integrating action will compensate for unknown load disturbances. In the related study,19 H2/H∞ controllers which are optimal with respect to the rejection of external and unknown disturbances are designed and tested for the pH process.

3.1. Control Objective and Augmentation for Optimal Control. The control problem consists of controlling the pH process in a way which allows for optimal handling of setpoint changes over different operating regimes. This has turned out to be a rather demanding problem in a previous study on gain scheduling with mixed H2/H∞ methods.22 This is due to the strongly nonlinear dynamics of the process, with the titration curve containing gain variations of a factor of almost 70. A subsequent preliminary study,23 however, has shown that a number of nonlinear techniques are suitable for controlling the process. In ref 23, as well as in this study, the setpoint changes were selected large enough to pose a challenging control problem. Within a setpoint change interval, the process gain varies by a factor of 2-9.

Figure 2. Augmentation scheme for the initial-value LQ control problem in (22) and (23).

For a given setpoint change, the initial value of the pH output is denoted θ1 and the new setpoint is denoted θ2. To give smooth setpoint tracking from θ1 to θ2, a reference trajectory of second order is used. The dynamics of the trajectory is described by the discrete-time system

$$\begin{bmatrix} A_r & B_r \\ C_r & D_r \end{bmatrix} := \frac{0.008264 z^2}{(z - 0.9091)^2} \tag{21}$$

which has a steady-state gain of 1. Quadratic LQ-type costs provide useful measures of the optimality of control systems and can be used to characterize a broad range of disturbances and control problems, including tracking problems. The tracking problem can be stated as an initial-value LQ-type problem with the structure in Figure 2, corresponding to the augmented state-space model

$$\begin{aligned} x_p(k+1) &= A(\theta)\, x_p(k) + B_0 w(k) + B_2 \Delta u(k) \\ z(k) &= C_0 x_p(k) + D_{02} \Delta u(k) \\ y(k) &= C_2 x_p(k) + D_{20} w(k) \end{aligned} \tag{22}$$

where

$$A(\theta) := \begin{bmatrix} A_l & 0 & 0 & 0 \\ B_r C_l & A_r & 0 & 0 \\ 0 & 0 & A_l & 0 \\ 0 & 0 & G(\theta) C_l & F(\theta) \end{bmatrix}, \quad B_0 := \begin{bmatrix} 0 \\ 0 \\ 0 \\ B_w \end{bmatrix}, \quad B_2 := \begin{bmatrix} 0 \\ 0 \\ B_l \\ 0 \end{bmatrix},$$
$$C_0 := \begin{bmatrix} -D_r C_l & -C_r & 0 & H \\ 0 & 0 & 0 & 0 \end{bmatrix}, \quad D_{02} := \begin{bmatrix} 0 \\ W_u \end{bmatrix}, \quad C_2 := [\,0 \;\; 0 \;\; 0 \;\; H\,], \quad D_{20} = [\,D_w \;\; 0\,] \tag{23}$$

Above, the system

$$\begin{bmatrix} A_l & B_l \\ C_l & 0 \end{bmatrix} := z^{-5} \tag{24}$$

represents the time delay of 1 min, corresponding to five samples. In addition to the process model having an input delay, the reference trajectory has also been equipped with a time delay in order to obtain the correct reference signal, assuming that the setpoint-change decision is made at time t = 0. The scalar input signal u(k) is the value of the molar feed flow of calcium hydroxide, divided by the feed flow of the control stream. The signal z(k) is the controlled output used in the subsequent quadratic loss functions.

The process output is defined as the offset from the target value θ2, i.e., y(k) := θ(k) − θ2, and the reference output is correspondingly defined as yr(k) := θr(k) − θ2. Thus, the initial value of the process output is y(0) = θ1 − θ2, and the initial state of the LPV system (19) is x(0) = [0 0 y(0)^T]^T. The initial value of the state of the input time delay is x_l(0) = 0, the initial state of the reference trajectory is x_r(0) = (I − A_r)^{−1} B_r y(0), and the initial state of the reference time delay is x_{lr}(0) = (I − A_l)^{−1} B_l y(0). That is, both the process model and the reference trajectory are set to generate y(0) at time t = 0. Then, the initial value of the system (22) is given by

$$x_p(0) := [\,x_{lr}(0)^T \;\; x_r(0)^T \;\; x_l(0)^T \;\; x(0)^T\,]^T \tag{25}$$

The system is also affected by white noise w(k). The matrices B_w and D_w describe how the white noise affects the LPV system (19) and the measurement, respectively. The weighting matrix W_u can be used to reduce the magnitudes of the input changes Δu(k). In the pH control example, the constant input weight W_u = 1000 gives a good overall performance and is used throughout the paper. The total order of the augmented system (22) is n_p = 15.

Given the above augmentation, the LQ-optimal setpoint tracking problem consists of minimizing the finite-horizon cost

$$J_2 := \mathrm{E}\left\{ \frac{1}{N} \sum_{k=0}^{N} z(k)^T z(k) \right\} = \mathrm{E}\left\{ \frac{1}{N} \sum_{k=0}^{N} \left( [y(k) - y_r(k)]^T [y(k) - y_r(k)] + \Delta u(k)^T W_u^T W_u \Delta u(k) \right) \right\} \tag{26}$$

3.2. Estimating the State of the LPV System. For most advanced control methods, knowledge of the process state is required. Estimating the state of the process is also beneficial in that it gives a filtered value of the (state-dependent) scheduling variable θ. This can be used in scheduling control schemes more reliably than a direct measurement of θ. The state of the augmented system (22) can be estimated using the filtering Kalman estimator24

$$\begin{aligned} \hat{x}_p(k) &= A(\theta)\, \hat{x}_p(k-1) + B_2 \Delta u(k-1) + K(k)\, [\,y(k) - C_2(A(\theta)\, \hat{x}_p(k-1) + B_2 \Delta u(k-1))\,] \\ K(k) &= P(k)\, C_2^T [\,C_2 P(k)\, C_2^T + D_{20} D_{20}^T\,]^{-1} \\ P(k+1) &= A(\theta)\, P(k)\, A(\theta)^T - A(\theta)\, K(k)\, C_2 P(k)\, A(\theta)^T + B_0 B_0^T \end{aligned} \tag{27}$$

The performance of the estimator is not overly sensitive to the choice of the noise parameters B_w and D_w.


Here, the value $D_w = \sqrt{10^{-3}}$ is given by the numerical value of the measurement white noise covariance, and the value $B_w = \sqrt{10^{-9}}\,[1\;1\;0]^T$ is taken so as to provide enough feedback from the measurements. The initial value of the covariance matrix P(0) is obtained from the stationary form of the Riccati equation in (27), with the system parameters evaluated as functions of θ1.

4. LQ Control of LPV Systems

This section is structured as follows. In subsection 4.1, the discrete-time Riccati equation and the optimal state feedback law for solving the time-varying LQ problem are recapitulated. The expressions are directly applicable to LPV systems by considering the time variation of the scheduling parameter θ. The Riccati equation and optimal feedback law are then utilized in several ways in order to achieve scheduled LQ control. A first step is to assume a constant scheduling parameter θ, which yields a linear or switching control law; this is studied in subsections 4.2 and 4.3. To achieve nonlinear control, the scheduling parameter can be used for continuous scheduling between linear controllers. This is studied in subsection 4.4. A further improvement is to use a nonlinear state estimate for control. The simplest way to do this in the LQ setting is to schedule between locally optimal state feedbacks, which is covered in subsection 4.5. The LQ control can be improved by using information on the future trajectory of the scheduling variable. Thus, an approximately predictive LQ controller is introduced in subsection 4.6 by making a specific assumption about its future trajectory. Finally, a more cautious approach is introduced in subsection 4.7 by assuming that the variation of the scheduling variable is limited by a bound which is known in advance. The results are briefly discussed in subsection 4.8.

4.1. Time-Varying Riccati Equation. To motivate the control methods described in the sequel, some results connected with the basic LQ-optimal time-varying state feedback Riccati equation are recapitulated in this section. For a more thorough treatment, see the literature on LQ control, for instance, ref 24. Notice that the term LQ is used here in a sense which includes time-varying and LPV systems.15 In this section we consider the control of the time-varying system

$$\begin{aligned} \hat{x}_p(k+1) &= A(k)\, \hat{x}_p(k) + B_2(k)\, \Delta u(k), \qquad \hat{x}_p(0) = \hat{x}_0 \\ z(k) &= C_0(k)\, \hat{x}_p(k) + D_{02}(k)\, \Delta u(k) \end{aligned} \tag{28}$$

for all k. We are interested in minimizing the quadratic, infinite-horizon LQ cost

$$J_2 = \sum_{k=0}^{\infty} z(k)^T z(k) \tag{29}$$

where it is assumed that

$$D_{02}(k)^T C_0(k) = 0 \tag{30}$$

Introduce the discrete-time, time-varying Riccati equation

$$S(k) = A(k)^T S(k+1)\, A(k) - A(k)^T S(k+1)\, B_2(k)\, [\,B_2(k)^T S(k+1)\, B_2(k) + D_{02}(k)^T D_{02}(k)\,]^{-1} B_2(k)^T S(k+1)\, A(k) + C_0(k)^T C_0(k) \tag{31}$$

Assuming that $\hat{x}_p(\infty) = 0$, completion of squares of (29) using (28), (30), and (31) gives

$$J_2 = \sum_{k=0}^{\infty} \{ [\Delta u(k) - \Delta u_0(k)]^T [\,B_2(k)^T S(k+1)\, B_2(k) + D_{02}(k)^T D_{02}(k)\,] [\Delta u(k) - \Delta u_0(k)] \} + \hat{x}_0^T S(0)\, \hat{x}_0 \tag{32}$$

where

$$\Delta u_0(k) = -[\,B_2(k)^T S(k+1)\, B_2(k) + D_{02}(k)^T D_{02}(k)\,]^{-1} B_2(k)^T S(k+1)\, A(k)\, \hat{x}_p(k) \tag{33}$$

It follows that the control law

$$\Delta u(k) = \Delta u_0(k) \tag{34}$$

minimizes the cost $J_2$, and the minimum cost is given by

$$\min J_2 = \hat{x}_0^T S(0)\, \hat{x}_0 \tag{35}$$

Similar expressions are obtained if it is assumed that the process is subject to white noise instead of an initial-value disturbance. Notice that the optimal control law of (33) and (34) does not require a slowly varying system for validity.

4.2. Locally Optimal Linear Controllers. If, for a given value of θ, the values of the system matrices in (22) are assumed to be constant in the future, the time-varying Riccati equation (31) can be replaced by the stationary Riccati equation

$$S(\theta) = A(\theta)^T S(\theta)\, A(\theta) - A(\theta)^T S(\theta)\, B_2 [\,B_2^T S(\theta)\, B_2 + D_{02}^T D_{02}\,]^{-1} B_2^T S(\theta)\, A(\theta) + C_0^T C_0 \tag{36}$$

At a given stationary value of the scheduling parameter θ, a locally LQ-optimal, frozen linear controller is obtained by solving the stationary discrete-time Riccati equation (36) together with a stationary version of the estimating Riccati equation in (27),

$$P(\theta) = A(\theta)\, P(\theta)\, A(\theta)^T - A(\theta)\, P(\theta)\, C_2^T [\,C_2 P(\theta)\, C_2^T + D_{20} D_{20}^T\,]^{-1} C_2 P(\theta)\, A(\theta)^T + B_0 B_0^T \tag{37}$$

A locally LQ-optimal feedback controller, frozen at a given value of the scheduling parameter θ, is then given by

$$\begin{aligned} x_f(k+1) &= A_f(\theta)\, x_f(k) + B_f(\theta)\, y(k+1) \\ \Delta u(k) &= C_f(\theta)\, x_f(k) \end{aligned} \tag{38}$$

where Af(θ) ) A(θ) - B2L(θ) - K(θ) C2[A(θ) - B2L(θ)],

226

Ind. Eng. Chem. Res., Vol. 41, No. 2, 2002

Figure 3. Result of switching control. The measurement noise has been removed for clarity.

Figure 4. Result of gain-scheduled control with interpolated controller parameters. The measurement noise has been removed for clarity.

Bf ) K(θ), Cf ) -L(θ), and

K(θ) = P(θ)C2T[C2P(θ)C2T + D20TD20]-1
L(θ) = [B2TS(θ)B2 + D02TD02]-1B2TS(θ)A(θ)   (39)

Compare (27), (33), and (34).

4.3. Switching Control. The simplest way to use the frozen linear controllers given in subsection 4.2 is to implement a switching control scheme. Here, the linear control law (38) is used, with the scheduling variable θ set to the target pH value θ2 when the setpoint-change decision is made. Because the value of θ does not vary dynamically, the controller is linear, and it should at least provide stability and local optimality around the target value θ2. Using switching controllers to control the example process yields the result shown in Figure 3. Eight separate setpoint changes have been made, and the outputs have been drawn in the same graph. The outputs shown are pH values from which the measurement noise has been excluded in order to facilitate comparison between different controllers.

The result illustrates that a good response is not automatically obtained by a linear controller. However, the response obtained by a linear controller could be significantly improved by parametric design, in which the controller parameters are optimized to give optimal performance for a given step change. In many cases, parametrically designed linear controllers may be preferable to nonlinear schemes. First, they may be more reliable than ad hoc interpolating methods. Second, the computational work of designing the controller can be done offline, before the setpoint change takes place. In the sequel, the result obtained by the switching controller is improved through the introduction of various nonlinear control schemes based on gain scheduling.

4.4. Interpolation between Parameters of Locally Optimal Linear Controllers. A simple approach to obtain linear gain-scheduling control is to interpolate between the parameters of the locally optimal controllers in (38). The result of applying such a gain-scheduled controller to the pH example process is shown in Figure 4. The locally optimal controllers need to have compatible state realizations for the interpolating nonlinear controller to work. This poses no problem if full-order observer-based controllers (such as the H2 controllers presented here) are used. However, problems are likely to occur if one interpolates between, e.g., parametrically designed controllers or other controllers whose states are not compatible. To form a nonlinear controller from linear controllers with incompatible states, a useful approach is to interpolate between the outputs of a number of independently running controllers, i.e., to form a local-state local-model network.19,25

Although the state of the scheduled controller above is updated in a nonlinear manner, it does not comply with the optimal state estimator (27). In the following, a gain-scheduling approach which uses the more rigorous nonlinear state estimate is described.

4.5. Gain Scheduling with a Nonlinear State Estimator. If a state estimate x̂p from a nonlinear state estimator is available, the cost in (26) can be minimized for a frozen system at a given value of θ and with infinite time horizon N = ∞ by solving the stationary Riccati equation (36) and by using the H2-optimal state feedback

∆u(k) = -[B2TS(θ)B2 + D02TD02]-1B2TS(θ)A(θ)x̂p(k)   (40)

Compare (33) and (34). This approach is a discrete-time version of the suboptimal nonlinear control approach described in ref 8 and the references therein. The result of applying such a gain-scheduled controller to the pH process is shown in Figure 5. The estimator (27) is used to obtain x̂p(k), and the estimated value of θ at each sampling instant is obtained from this state estimate rather than as a direct measurement.
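As a concrete illustration of the frozen-parameter design, the stationary Riccati equation (36) can be solved by fixed-point iteration and the feedback gain of (40) extracted. This is a minimal sketch in Python/NumPy; the matrices A(θ), B2 and the weights standing in for C0TC0 and D02TD02 are illustrative placeholders, not the paper's pH model:

```python
import numpy as np

def dare_gain(A, B, Q, R, tol=1e-10, max_iter=10000):
    """Solve S = A'SA - A'SB (B'SB + R)^{-1} B'SA + Q by fixed-point
    iteration; return (S, L) for the feedback law du(k) = -L x(k)."""
    S = Q.copy()
    for _ in range(max_iter):
        G = np.linalg.solve(B.T @ S @ B + R, B.T @ S @ A)  # (B'SB+R)^{-1} B'SA
        S_next = A.T @ S @ (A - B @ G) + Q  # equals A'SA - A'SB G + Q
        if np.max(np.abs(S_next - S)) < tol:
            S = S_next
            break
        S = S_next
    L = np.linalg.solve(B.T @ S @ B + R, B.T @ S @ A)
    return S, L

# Illustrative scheduled model (placeholder numbers, not the pH model).
# The unit eigenvalue mimics the integrator of a velocity-form model.
def A_of_theta(theta):
    return np.array([[1.0, 0.1], [0.0, 0.9 + 0.05 * theta]])

B2 = np.array([[0.0], [0.1]])
Q = np.eye(2)           # stands in for C0' C0
R = np.array([[0.1]])   # stands in for D02' D02

S, L_gain = dare_gain(A_of_theta(0.5), B2, Q, R)
x_hat = np.array([1.0, -0.5])   # state estimate from the nonlinear estimator
du = -L_gain @ x_hat            # frozen-theta feedback, cf. eq 40
```

In a gain-scheduled implementation, `S` and `L_gain` would be recomputed (or looked up) as the estimated θ changes.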


Figure 5. Result of gain scheduling with a nonlinear state estimator. The measurement noise has been removed for clarity.

4.6. Approximately Predictive Control Using a Time-Varying Riccati Equation. The methods described so far are suboptimal because of the (at least partial) assumption of a locally frozen system. The actual system is nonlinear, and optimal control requires that its future behavior be predicted. This can readily be incorporated in a control scheme using model predictive control (MPC).26,27 However, it is also possible to use an estimate of the future trajectory of the nonlinear system together with the time-varying Riccati equation (31) for control. This leads to simple and fast calculations in comparison with MPC.

The use of time-varying Riccati equations has been deemed infeasible in other gain-scheduling frameworks,3 where the future behavior of θ is unknown. However, this need not apply to cases where θ is a function of the reference trajectory or the states, as in the present study. The controller may, for instance, use the assumption that the future process output will be approximately equal to the reference trajectory. A Riccati-equation-based predictive controller for the system (22) can then be constructed as described below.

Let θr(k) be the nominal future trajectory of the scheduling variable, which is assumed to be known. It is presumed that the process will be controlled in such a way that θ(k) ≈ θr(k) for all k. Then, the future values of the matrices in (23) can be estimated as A(θ(k)) = A(θr(k)), etc. The cost function (26) can be approximately minimized for an infinite horizon in the following way. First, assume that the trajectory θr(k) converges to a value θr(∞) as k approaches ∞. Let m be such that for all k = m, ..., ∞ we have A(θr(k)) ≈ A(θr(∞)), etc., for all parameter-dependent system matrices; i.e., the system has practically reached steady state. In this interval the stationary Riccati equation (36) then gives the H2-optimal feedback law, and we obtain a control scheme which is identical to the one in subsection 4.5.
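The offline computation of this scheme, namely a stationary solution for the steady-state tail k ≥ m followed by a backward sweep of the time-varying Riccati equation for k = 0, ..., m - 1, can be sketched as follows. The scheduled model and the trajectory θr(k) below are illustrative assumptions, not the paper's pH model:

```python
import numpy as np

def stationary_riccati(A, B, Q, R, tol=1e-10, iters=10000):
    """Fixed point of S = A'SA - A'SB (B'SB + R)^{-1} B'SA + Q."""
    S = Q.copy()
    for _ in range(iters):
        G = np.linalg.solve(B.T @ S @ B + R, B.T @ S @ A)
        S_new = A.T @ S @ (A - B @ G) + Q
        if np.max(np.abs(S_new - S)) < tol:
            return S_new
        S = S_new
    return S

def backward_riccati_gains(A_seq, B, Q, R, S_terminal):
    """Backward sweep of the time-varying Riccati recursion: given S(m),
    return S(0..m) and the gains L(k) so that du(k) = -L(k) x(k)."""
    m = len(A_seq)
    S = [None] * (m + 1)
    L = [None] * m
    S[m] = S_terminal
    for k in range(m - 1, -1, -1):
        A = A_seq[k]
        G = np.linalg.solve(B.T @ S[k + 1] @ B + R, B.T @ S[k + 1] @ A)
        L[k] = G
        S[k] = A.T @ S[k + 1] @ (A - B @ G) + Q
    return S, L

# Illustrative scheduled model and nominal trajectory (assumed numbers).
def A_of(theta):
    return np.array([[1.0, 0.1], [0.0, 0.9 + 0.05 * theta]])

B2 = np.array([[0.0], [0.1]])
Q = np.eye(2)           # stands in for C0' C0
R = np.array([[0.1]])   # stands in for D02' D02

theta_ref = np.linspace(0.0, 1.0, 20)     # nominal trajectory theta_r(k)
A_seq = [A_of(th) for th in theta_ref]
S_m = stationary_riccati(A_of(theta_ref[-1]), B2, Q, R)  # steady-state tail
S_seq, L_seq = backward_riccati_gains(A_seq, B2, Q, R, S_m)
```

All S(k) and L(k) are computed before the setpoint change starts; online, the controller only evaluates du(k) = -L(k) x̂p(k).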
Further, we obtain a good approximation of S(m) from the same stationary Riccati equation. In the interval where the model is considered to be time-varying, i.e., k = 0, ..., m - 1, we obtain the minimum of the cost function by a backward solution of the time-varying Riccati equation (31). In accordance with the results of subsection 4.1, the H2-optimal control law at time k is then given by

∆u(k) = -[B2TS(k+1)B2 + D02TD02]-1B2TS(k+1)A(k)x̂p(k)   (41)

according to (33). Notice that, for a fixed reference trajectory, all values of S(k) can be calculated prior to the start of the setpoint change, so that a minimum amount of calculation is required during control. A more time-consuming but improved approach would be to use a dynamically changing trajectory, which responds online to deviations from the nominal trajectory. The result of applying such an approximately predictive control scheme to the pH process is shown in Figure 6.

Figure 6. Result of approximately predictive control using a time-varying Riccati equation. The measurement noise has been removed for clarity.

4.7. Gain Scheduling Based on an Upper Bound on the Quadratic Cost. Although the method described in subsection 4.6 can provide good control performance, it does not address the possibility that the trajectory of θ deviates from the intended one. This uncertainty can partly be accounted for by an alternative approach, which leads to the solution of a stationary Riccati equation guaranteeing that the LQ cost remains below an upper bound, given an estimate of the maximum deviations of the system. The method resembles a continuous-time approach suggested in ref 15 and is inspired by LPV synthesis techniques, which are usually based on the solution of LMIs.13

Consider the time-varying Riccati equation (31) and the cost (32). By replacing the equation by the inequality

S(k) ≥ A(k)TS(k+1)A(k) - A(k)TS(k+1)B2(k)[B2(k)TS(k+1)B2(k) + D02(k)TD02(k)]-1B2(k)TS(k+1)A(k) + C0(k)TC0(k)   (42)
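The construction developed in the remainder of this subsection, i.e., estimating a matrix Q that dominates the increments ∆S(k) = S(k+1) - S(k) from a nominal backward Riccati sweep and then solving the stationary equation (47) with Q added, can be sketched as below. The model data, the nominal trajectory, and the eigenvalue-based choice of Q are illustrative assumptions, not the paper's pH model:

```python
import numpy as np

def riccati_step(S_next, A, B, Q, R):
    """One backward step: A'S A - A'S B (B'S B + R)^{-1} B'S A + Q."""
    G = np.linalg.solve(B.T @ S_next @ B + R, B.T @ S_next @ A)
    return A.T @ S_next @ (A - B @ G) + Q

def riccati_fixed_point(A, B, Q, R, tol=1e-10, iters=10000):
    """Stationary solution by iterating riccati_step to a fixed point."""
    S = Q.copy()
    for _ in range(iters):
        S_new = riccati_step(S, A, B, Q, R)
        if np.max(np.abs(S_new - S)) < tol:
            return S_new
        S = S_new
    return S

def A_of(theta):  # illustrative scheduled model (assumed numbers)
    return np.array([[1.0, 0.1], [0.0, 0.9 + 0.05 * theta]])

B2 = np.array([[0.0], [0.1]])
C0Q = np.eye(2)         # stands in for C0' C0
R = np.array([[0.1]])   # stands in for D02' D02

# Nominal backward sweep along theta_r(k), started from the stationary tail.
theta_ref = np.linspace(0.0, 1.0, 20)
S = riccati_fixed_point(A_of(theta_ref[-1]), B2, C0Q, R)
S_traj = [S]
for th in reversed(theta_ref[:-1]):
    S = riccati_step(S, A_of(th), B2, C0Q, R)
    S_traj.append(S)
S_traj.reverse()  # S_traj[k] now approximates S(k), k = 0..m

# Positive definite Q_bound dominating Delta S(k) = S(k+1) - S(k), here
# taken as the largest increment eigenvalue times the identity (a heuristic).
deltas = [S_traj[k + 1] - S_traj[k] for k in range(len(S_traj) - 1)]
lam = max(max(np.linalg.eigvalsh(D)) for D in deltas)
Q_bound = max(lam, 1e-6) * np.eye(2)

# During control: stationary Riccati equation with the extra Q_bound term.
S_rob = riccati_fixed_point(A_of(0.5), B2, C0Q + Q_bound, R)
L_rob = np.linalg.solve(B2.T @ S_rob @ B2 + R, B2.T @ S_rob @ A_of(0.5))
```

The resulting gain is more conservative than the nominal one, in exchange for the guaranteed cost bound.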


we obtain the cost bound

J2 ≤ ∑k=0∞ {[∆u(k) - ∆u0(k)]T[B2(k)TS(k+1)B2(k) + D02(k)TD02(k)][∆u(k) - ∆u0(k)]} + x̂0TS(0)x̂0   (43)

Using the optimal feedback

∆u(k) = ∆u0(k)   (44)

with ∆u0(k) defined as in (33), we then have J2 ≤ x̂0TS(0)x̂0. Hence, using the optimal feedback law (44) guarantees that the upper bound (43) on the H2 cost holds, if S(·) is a solution of (42). Introducing the time difference ∆S(k+1) := S(k+1) - S(k) into (42), we obtain

S(k+1) ≥ A(k)TS(k+1)A(k) - A(k)TS(k+1)B2(k)[B2(k)TS(k+1)B2(k) + D02(k)TD02(k)]-1B2(k)TS(k+1)A(k) + C0(k)TC0(k) + ∆S(k+1)   (45)

Now assume that the positive definite matrix Q satisfies

Q ≥ ∆S(k) for all k   (46)

Then we may replace the inequality (45) by the stationary Riccati equation

S(k+1) = A(k)TS(k+1)A(k) - A(k)TS(k+1)B2(k)[B2(k)TS(k+1)B2(k) + D02(k)TD02(k)]-1B2(k)TS(k+1)A(k) + C0(k)TC0(k) + Q   (47)

Solving the stationary Riccati equation (47) for k = 1, 2, ... yields a matrix S(k) which satisfies the inequality (42), and the optimal feedback is given by (44). In the example study, a nonconservative value of Q is obtained from the reference trajectory according to the following procedure:

1. As in the method of subsection 4.6, the future values of the scheduling parameter θ(k) are first supposed to coincide with the reference values θr(k).

2. The time-varying Riccati equation (31), based on the scheduling parameters θ(k) = θr(k), is solved for all k = 0, ..., m.

3. A positive definite estimate Q of the upper bound on ∆S(k) is calculated from the obtained values of S(0), ..., S(m).

The above steps are carried out offline. During control, the stationary Riccati equation (47) is solved at each sampling instant, and the optimal feedback (44) is used. The approach yields the result shown in Figure 7 when applied to the pH control process.

If so required, the approach described above can be extended to include H∞-type costs. Further, continuous-time equivalents can be derived, in which an upper bound on Ṡ is used. Notice also that an approach similar to the one above could be used to determine an estimator which replaces (27).

Figure 7. Result of control based on a Riccati equation with an estimate of an upper bound on the quadratic cost. The measurement noise has been removed for clarity.

4.8. Discussion. The simulations show that the control result can be improved by taking into account different aspects of the estimated future variation of the scheduling variable. The more sophisticated methods give a result quite similar to that obtained by using MPC,26 which uses the same LPV model as the previous methods and minimizes the same type of cost (the simulation result is omitted here for the sake of brevity). The advantage of the present methods over MPC is their computational simplicity and smaller demand for computation time.

All of the tested methods, including MPC, display some imperfections in the control results (see in particular the descending step from pH 6 to 5). The imperfection stems from the simplification made in forming the LPV model in order to obtain one-dimensional interpolation. A more ideal LPV model can be obtained by using linearizations which, at each sampling instant, exactly match the current state and input of the system. Such a model is difficult to obtain in a realistic setting, but it yields near-perfect control, depending on the control scheme.

5. Conclusions

By means of a simulated pH neutralization process, it has been demonstrated that a velocity-form LPV representation can provide a good model for control of a strongly nonlinear system with fast-changing scheduling variables. Several nonlinear control schemes have been devised for optimal handling of setpoint changes over a broad operating area. The control schemes attempt to minimize a quadratic cost which is based on the setpoint tracking error. It has been shown that a good control result is not automatically obtained by linear control, and so the control result has been improved through gain scheduling.

A gain-scheduling control scheme based on the assumption of a frozen system can be designed with little effort, but in this case not much can be said about its global performance when it is applied to the nonlinear process for which it has been designed. The performance of a gain-scheduling controller can be improved in various ways. First, a state estimate from a nonlinear estimator can be used in conjunction with a stationary Riccati equation solution to obtain increased control accuracy.
An improvement of this approach is to introduce a Riccati-equation-based predictive control scheme which anticipates the future behavior of the nonlinear process. Further, a stationary Riccati equation based on an estimate of the upper bound of the variation of the scheduling parameters can be used. In this study the control result improved as the level of sophistication of the controller was increased. In all of the Riccati-based methods, the time-consuming computational work could be performed offline.

Acknowledgment

We thank the Academy of Finland (Grant 44031) and the Finnish Graduate School in Chemical Engineering (GSCE) for their support of B.Å. and R.N. and the foundations Ella och Georg Ehrnrooths Stiftelse, Åbo Akademis Stiftelse, and Tekniikan Edistämissäätiö for their financial support of R.N.

Literature Cited

(1) Åström, K. J.; Wittenmark, B. Adaptive Control; Addison-Wesley: New York, 1989.
(2) Johansen, T. A.; Hunt, K. J.; Gawthrop, P. J.; Fritz, H. Off-equilibrium linearisation and design of gain-scheduled control with application to vehicle speed control. Control Eng. Pract. 1998, 6, 167.
(3) Shamma, J. S.; Athans, M. Gain scheduling: potential hazards and possible remedies. IEEE Control Syst. Mag. 1992, 12, 101.
(4) Rugh, W. J. Analytical framework for gain scheduling. IEEE Control Syst. Mag. 1991, 11, 79.
(5) Kaminer, I.; Pascoal, A. M.; Khargonekar, P. P.; Coleman, E. E. A velocity algorithm for the implementation of gain-scheduled controllers. Automatica 1995, 31, 1185.
(6) Leith, D. J.; Leithead, W. E. Survey of gain-scheduling analysis and design. Int. J. Control 2000, 73, 1001.
(7) Rugh, W. J.; Shamma, J. S. Research on gain scheduling. Automatica 2000, 36, 1401.
(8) Huang, Y.; Lu, W. M. Nonlinear optimal control: alternatives to Hamilton-Jacobi equation. Proceedings of the 35th IEEE Conference on Decision and Control, Kobe, Japan, 1996; pp 3942-3947.
(9) Hunt, K. J.; Johansen, T. A. Design and analysis of gain-scheduled control using local controller networks. Int.
J. Control 1997, 66, 619.
(10) Leith, D. J.; Leithead, W. E. Appropriate realization of MIMO gain-scheduled controllers. Int. J. Control 1998, 70, 13.
(11) Leith, D. J.; Leithead, W. E. Gain-scheduled and nonlinear systems: dynamic analysis by velocity-based linearization families. Int. J. Control 1998, 70, 289.
(12) Rantzer, A.; Johansson, M. Piecewise linear quadratic optimal control. IEEE Trans. Autom. Control 2000, 45, 629.

(13) Apkarian, P.; Adams, R. J. Advanced gain-scheduling techniques for uncertain systems. IEEE Trans. Control Syst. Technol. 1998, 6, 21.
(14) Wu, F.; Yang, X. H.; Packard, A.; Becker, G. Induced L2-norm control for LPV system with bounded parameter variation rates. Proceedings of American Control Conference, Seattle, WA, 1995; pp 2379-2383.
(15) Uchida, K.; Watanabe, R.; Fujita, M. LQG control for systems with scheduling parameter. Proceedings of European Control Conference, Karlsruhe, Germany, 1999; Paper f179.
(16) Apkarian, P.; Gahinet, P.; Becker, G. Self-scheduled H∞ control of linear parameter-varying systems: a design example. Automatica 1995, 31, 1251.
(17) Kajiwara, H.; Apkarian, P.; Gahinet, P. LPV techniques for control of an inverted pendulum. IEEE Control Syst. 1999, 19, 44.
(18) Postlethwaite, I.; Konstantopoulos, I. K.; Sun, X. D.; Walker, D. J.; Alford, A. G. Design, flight simulation, and handling qualities evaluation of an LPV gain-scheduled helicopter flight control system. Proceedings of European Control Conference, Karlsruhe, Germany, 1999; Paper f397.
(19) Nyström, R. H.; Sandström, K. V.; Gustafsson, T. K.; Toivonen, H. T. Multimodel robust control of nonlinear plants: a case study. J. Process Control 1999, 9, 135.
(20) Tadeo, F.; López, O. P.; Alvarez, T. Control of neutralization processes by robust loopshaping. IEEE Trans. Control Syst. Technol. 2000, 8, 236.
(21) Åkesson, B. M. Model predictive control of nonlinear processes and approximation of the optimal control strategy (in Swedish). Master's Thesis, Process Control Laboratory, Åbo Akademi University, Åbo, Finland, 2000.
(22) Nyström, R. H.; Sandström, K. V.; Gustafsson, T. K.; Toivonen, H. T. Multimodel robust control applied to a pH neutralization process. European Symposium on Computer Aided Process Engineering-8, Brugge, Belgium, 1998 (published in Comput. Chem. Eng. 1998, 22, Suppl., S467).
(23) Nyström, R. H.; Åkesson, B. M.; Sandström, K. V.; Toivonen, H. T.
A comparison of nonlinear control methods for a pH control process modeled by velocity-form linearizations. AIChE Annual Meeting, Los Angeles, CA, 2000; Paper 234j.
(24) Åström, K. J.; Wittenmark, B. Computer-Controlled Systems: Theory and Design, 3rd ed.; Prentice-Hall: Englewood Cliffs, NJ, 1997.
(25) Gawthrop, P. J. Continuous-time local state local model networks. Proceedings of IEEE Conference on Systems, Man and Cybernetics, Vancouver, Canada, 1995; pp 852-857.
(26) Garcia, C.; Prett, D.; Morari, M. Model predictive control: theory and practice - a survey. Automatica 1989, 25, 335.
(27) Morari, M.; Lee, J. H. Model predictive control: past, present and future. Comput. Chem. Eng. 1999, 23, 667.

Received for review January 17, 2001
Revised manuscript received July 5, 2001
Accepted October 16, 2001

IE010057+