Process Control Strategies for Constrained Nonlinear Systems

Wei Chong Li and Lorenz T. Biegler*

Department of Chemical Engineering, Carnegie-Mellon University, Pittsburgh, Pennsylvania 15213

Control and state variable constraints are often overlooked in the development of control laws. Although recent developments with DMC and QDMC have addressed this problem for linear systems, incorporation of state and control variable constraints for nonlinear control problems has seen relatively little development. In this paper, a nonlinear control strategy based on operator theory is extended to deal with control and state variable constraints. Here we show that Newton-type control algorithms can easily be generalized by using an on-line optimization approach. In particular, a special form of a successive quadratic programming (SQP) strategy is used to handle control and state variable constraints. Limited stability results for this algorithm are also shown. To illustrate this approach, a number of scenarios are considered for the control of nonlinear reactor models. Here regulation and control can be done in a straightforward manner, and emphasis on more important parts of the system can be given with appropriate placement of constraints.

1. Introduction

The Internal Model Control (IMC) structure was proposed by Garcia and Morari (1982) and recently extended to multivariable systems (Garcia and Morari, 1985a,b). The main attractive features of the IMC structure are as follows. First, while the controller affects the quality of the response, the filter takes care of the robustness of the control block independently. Note that a filter is a block in the feedback signal to account for the model and process mismatch reflected through the estimated disturbance and to achieve a desired degree of robustness. Second, the closed-loop system is stable as long as the controller and model are open-loop stable. Only robustness needs to be considered, and this model representation is suitable for a theoretical robustness analysis. One special case of IMC is Dynamic Matrix Control (DMC, developed at the Shell Oil Co.), which has been reported to perform very well in application (Cutler and Ramaker, 1979; Prett and Gillette, 1979). DMC is structured such that the entire control algorithm can be put into a linearly constrained optimization problem. When the objective function for this optimization problem is of the sum-of-the-error-squared form, the algorithm becomes Quadratic Dynamic Matrix Control (QDMC) (Garcia and Morshedi, 1984), the second generation of the DMC algorithm. QDMC without constraints was found to have an IMC structure (Garcia and Morari, 1982). Moreover,

* Author to whom correspondence should be addressed.

the DMC algorithm is the first control technique which could successfully handle general process constraints. DMC is a type of model-predictive controller, which solves a new optimal problem and produces a "new" controller at each execution. The on-line measurements which cannot be foreseen at the design stage are used to adjust controller settings and to handle the constraints. This distinguishes the algorithm from the conventional controller that uses a fixed relationship between error and manipulated variables and is solved only once, during the conceptual design phase. Even though DMC shows certain superior characteristics among linear controllers, the inherent linear model restricts its use in highly nonlinear processes. One can argue that, if the nonlinear model can be linearized, then DMC can be used to control the process. It is well-known, however, that a linearized model is only valid in a small neighborhood of the reference point. If external forces such as unmeasured disturbances drive the system outside the valid neighborhood, the linearized model will introduce large modeling errors which can mislead control actions. Therefore, a nonlinear predictor-type controller in the IMC structure should be developed to deal with these nonlinear systems. Economou and Morari (1985, 1986) extended the IMC design approach to nonlinear lumped parameter systems. Based on an operator formalism, a Newton-type control algorithm was constructed. Simulation examples demonstrated good performance of this control algorithm and showed that the control algorithm worked well even around the region where process gains changed sign. On the other


hand, this algorithm cannot deal with the process constraints, and this limits the potential usefulness of this algorithm.

In this work, a special form of the SQP strategy is used to handle process constraints. A Newton-type control law in the IMC structure is briefly presented in the next section. The successive quadratic programming algorithm is developed in section 3. This approach can be shown to fit into the nonlinear IMC structure. A computing algorithm based on this strategy is outlined in section 4. In order to demonstrate the effectiveness of the strategy, two example problems are simulated in section 5. The simulation results demonstrate good performance of the suggested algorithm. Finally, stability properties of the constrained controller are briefly analyzed in section 6.

2. A Newton-Type Control Law

Economou and Morari (1985, 1986) extended the IMC structure to an autonomous, lumped parameter multiple-input-multiple-output (MIMO) nonlinear system. A Newton-type method was used to construct the control law for nonlinear models. The systems considered are generated by a set of ordinary differential equations, whose vector form is as follows:

$\mathrm{d}x/\mathrm{d}t = f(x, u(t))$   (2.1.a)

Here $x \in R^n$ is the state of the system, and for every $t \in (0,\infty)$, $u(t) \in R^m$ is the input, with the corresponding output map ($y \in R^m$)

$y = g(x)$   (2.1.b)

Several assumptions are made in deriving this strategy. Firstly, we assume the system has a unique solution. Secondly, we assume at this point the model is perfect, in that there is no model and system mismatch. This assumption will later be relaxed for stability analysis. Finally, we assume that the system inputs are piecewise constant functions to reduce the problem at hand to a finite dimensional space.

Here, the letter s is used as a superscript to mark time in discrete intervals. The sth sampling interval extends from $t^s$ to $t^{s+1}$. $T = t^{s+1} - t^s$ is the constant sampling time, $x^s$ is the state at $t^s$, and $u^s$ is the system input held constant over $(t^s, t^{s+1})$. In the discrete setting of the study, $x(t^s+T; t^s, x^s, u^s)$ is the solution of eq 2.1 at time $t^{s+1}$ for $u(t) = u^s$ ($t^s < t < t^{s+1}$) and initial condition $x(t^s; t^s, x^s, u^s) = x^s$. $\bar{x}^s$ will denote the state of the system at $t = t^{s+1}$, i.e., $x^{s+1}$:

$\bar{x}^s \equiv x^{s+1} = x(t^s+T; t^s, x^s, u^s)$   (2.2)

Since (2.1) is autonomous, $x(t^s+T; t^s, x, u) = x(t^{s+1}+T; t^{s+1}, x, u)$; i.e., $f$ does not have an explicit dependence on $t$. Therefore, time will be dropped from the parameter list and the following convention will be used:

$\bar{x}^s = x(T; x^s, u^s) = x(t^s+T; t^s, x^s, u^s)$   (2.3)

The derivatives of $\bar{x}^s$ with respect to $x^s$ and $u^s$ will be defined as follows:

$\Phi^s = \partial \bar{x}^s/\partial x^s$   (2.4.a)
$\Gamma^s = \partial \bar{x}^s/\partial u^s$   (2.4.b)

The $y^{s+1} = g(\bar{x}^s)$ is the system output at $t^{s+1}$. The derivative of $y^{s+1}$ with respect to $\bar{x}^s$ will be defined as $C^s$:

$C^s = \partial y^{s+1}/\partial \bar{x}^s = \mathrm{d}g(\bar{x}^s)/\mathrm{d}\bar{x}^s$   (2.4.c)

$\Phi^s$ and $\Gamma^s$ are calculated by the following equations and were derived by Economou (1985):

$\mathrm{d}\Phi/\mathrm{d}t = (\partial f/\partial x)\Phi$   (2.5.a)
$\Phi(t^s) = I$   (2.5.b)
$\mathrm{d}\Gamma/\mathrm{d}t = (\partial f/\partial x)\Gamma + \partial f/\partial u$   (2.5.c)
$\Gamma(t^s) = 0$   (2.5.d)
$t^s \le t \le t^{s+1}$

The Newton-type control law with p = 1 (where p is the number of forward steps allowed to achieve the desired output $y^*$) is presented here. The detailed derivation is straightforward and can be found in Economou (1985):

$C^s\Phi^s(\bar{x}^s - x^s) + (y^* - y^{s+1}) - (C^s\Gamma^s)\Delta u^s = 0$   (2.6)

where $\Delta u^s = u^{s+1} - u^s$.

However, the algorithm developed by Economou and Morari (1986) cannot deal with constraints which inevitably exist in the chemical process. The process constraints can arise from various considerations, a few of which include the following: (i) Physical Limitations on Equipment. All equipment has physical limitations which cannot be exceeded during the operation. (ii) Product Specifications. Any intermediates or marketable products need to satisfy certain specifications required for further processing or by consumers. (iii) Safety. Many process variables should not exceed certain bounds to avoid hazardous conditions.

3. A Nonlinear Strategy for Handling Process Constraints

Designing a control algorithm which has the capacity to handle the constraints is not an easy task, especially when a nonlinear system is involved. A candidate algorithm should have the following characteristics: (i) Prediction Model. A model should be based on prediction of output. Thus, any potential violation of process constraints can be foreseen, and proper corrections can be made. (ii) Optimal Input. In MIMO systems, it is often possible to trade off good performance of one output against poor performance of another; therefore, on the basis of the relative importance of the different outputs, a set of optimal outputs needs to be determined. (iii) Easy To Analyze. The structure of the controller should be transparent enough to perform stability and robustness analysis. The nonlinear IMC structure has a framework that allows this analysis and as a result is a viable candidate.

Linear control variable constraints can be dealt with easily, using quadratic programming. Moreover, the following quadratic programming problem without any constraints is identical with the Newton-type control law. The proof can be found in Appendix A. Note that the Hessian of the objective function is positive semidefinite, which means a global minimum value is guaranteed to be found:

$\min_{\Delta u^s}\; c^{\mathrm{T}}\Delta u^s + \tfrac{1}{2}(\Delta u^s)^{\mathrm{T}}H\Delta u^s$   (3.1.a)

where

$c^{\mathrm{T}} = -[C^s\Phi^s(\bar{x}^s - x^s) + (y^* - y^{s+1})]^{\mathrm{T}}(C^s\Gamma^s)$   (3.1.b)
$H = (C^s\Gamma^s)^{\mathrm{T}}(C^s\Gamma^s)$   (3.1.c)

With the help of the quadratic form in the objective function, the linear equality and inequality constraints of control variables can be included. In this case, problem 3.1 becomes

$\min_{\Delta u^s}\; c^{\mathrm{T}}\Delta u^s + \tfrac{1}{2}(\Delta u^s)^{\mathrm{T}}H\Delta u^s$
subject to $a^l \le A(u^s + \Delta u^s) \le a^u$   (3.2)

where $A$ is an $m_c \times m$ matrix multiplying the control variables and $m_c$ defines the number of constraints. For simplicity, we write equality and inequality constraints together. When $a_k^l$ equals $a_k^u$, constraint k is defined as an equality constraint.

In practice, we need to deal not only with the constraints of the control variables but also with the constraints of the state variables. The latter are probably more important to get a desired product or to avoid failure of the production process. Simple bounds on state variables are the most common state constraints in most chemical processes. So our objective is to handle the following problem:

$\min_{\Delta u^s}\; c^{\mathrm{T}}\Delta u^s + \tfrac{1}{2}(\Delta u^s)^{\mathrm{T}}H\Delta u^s$   (3.3)
subject to
$a^l \le A(u^s + \Delta u^s) \le a^u$
$x^l \le x(t) \le x^u$
$t^{s+1} \le t \le t^{s+2}$

For on-line implementation, the QP is solved using a moving time horizon. Given the current time $t^{s+1}$, the immediate past control variable $u^s$, the initial state variable $x^s$, and the measurement of the current system output $y^{s+1}$, we hope to find an optimal control variable $u^{s+1}$ which can efficiently drive the process to the set point without violating the control and state variable constraints. Figure 1 shows the schematic of the moving time horizon.

Figure 1. Moving time horizon (time axis with past time $t^{s-1}$, $t^s$; present time $t^{s+1}$; and future time $t^{s+2}$, $t^{s+3}$, with $u^{s+1}$ held constant over the prediction interval).

In order to avoid the computational difficulties of dealing with these state constraints, a new state variable is defined to convert a path constraint into a terminal constraint (Sargent and Sullivan, 1977):

$\mathrm{d}x_{n+1}(t)/\mathrm{d}t = \sum_{i=1}^{n}\{\min(0, x_i(t) - x_i^l)\}^2 + \sum_{i=1}^{n}\{\min(0, x_i^u - x_i(t))\}^2$   (3.4)
$x_{n+1}(t^{s+1}) = 0$
$t^{s+1} \le t \le t^{s+2}$

We then impose the constraint $x_{n+1}(t^{s+2}) = 0$ and substitute this constraint for the path constraints of the state variables in (3.3). $x_{n+1}(t^{s+2})$ is an implicit function of $u$, and there is no direct method to solve this problem. Here an iteration algorithm is developed to deal with the above problem. We expand $x_{n+1}(t^{s+2})$ about $u_{j-1}^{s+1}$ by using a Taylor series and truncating the expansion after the second term. Here, the letter j is used as a subscript to indicate the jth iteration:

$x_{n+1,j}(t^{s+2}; x^{s+1}, u_j^{s+1}) = x_{n+1,j-1}(t^{s+2}; x^{s+1}, u_{j-1}^{s+1}) + K_{j-1}^{s+1}\,\mathrm{d}u_j^{s+1}$   (3.5.a)

Define

$K_{j-1}^{s+1} = \partial x_{n+1}/\partial u\,\big|_{u=u_{j-1}^{s+1}}$   (3.5.b)
$\mathrm{d}u_j^{s+1} = u_j^{s+1} - u_{j-1}^{s+1}$   (3.5.c)

The calculation form of $K_{j-1}^{s+1}$ can be found in Appendix B. The constraint $x_{n+1,j}(t^{s+2}) = 0$ is now linearized by the following equation:

$x_{n+1,j-1}(t^{s+2}; x^{s+1}, u_{j-1}^{s+1}) + K_{j-1}^{s+1}\,\mathrm{d}u_j^{s+1} = 0$   (3.6)

The quadratic programming problem at the jth iteration is as follows:

$\min_{\Delta u_j^s}\; c^{\mathrm{T}}\Delta u_j^s + \tfrac{1}{2}(\Delta u_j^s)^{\mathrm{T}}H\Delta u_j^s$   (3.7)
subject to
$a^l \le A(u_{j-1}^s + \Delta u_j^s) \le a^u$
$x_{n+1,j-1}(t^{s+2}; x^{s+1}, u_{j-1}^{s+1}) + K_{j-1}^{s+1}\,\mathrm{d}u_j^{s+1} = 0$

There are two types of constraints we need to consider: hard constraints, for which no dynamic violations of the bounds are allowed at any time; and soft constraints, for which violations of the bounds are tolerated for satisfaction of other criteria. In problem 3.7, only hard constraints have been considered. Let us assume soft constraints for the control variables are given by

$a_s^l \le A_s(u^s + \Delta u^s) \le a_s^u$   (3.8)

where $A_s$ is an $m_s \times m$ matrix multiplying the control variables and $m_s$ is the number of soft constraints. Since these constraints can be violated to satisfy other criteria, slack variables $\delta_{m_s}$ need to be added in inequality 3.8 to increase the feasible region. The soft constraints of (3.8) become

$a_s^l - \delta_{m_s} \le A_s(u^s + \Delta u^s) \le a_s^u + \delta_{m_s}$   (3.9)

By including the summation of all elements of $\delta_{m_s}$ in the objective function, we obtain

$\min_{\Delta u^s}\; c^{\mathrm{T}}\Delta u^s + \tfrac{1}{2}(\Delta u^s)^{\mathrm{T}}H\Delta u^s + \sum_{i=1}^{m_s} w_i\delta_i$   (3.10)

Since the importance of each constraint may not be the same, a weighting factor $w_i$ has been included to reflect the differences. If the soft constraints involve state variables, the problem becomes more complicated. Assume the following soft constraints for states $x_i$:

$x_i^l \le x_i(t) \le x_i^u$   (3.11)

As with eq 3.4 above, a new variable $x_{n+1+i}$ is defined as follows:

$\mathrm{d}x_{n+1+i}(t)/\mathrm{d}t = \{\min(0, x_i(t) - x_i^l)\}^2 + \{\min(0, x_i^u - x_i(t))\}^2$   (3.12)
$x_{n+1+i}(t^{s+1}) = 0$
$t^{s+1} \le t \le t^{s+2}$

A first-order approximation of the new variable is used as in (3.5), and a slack variable $\delta_{m_s+i}$ is added in the inequality to increase the feasible region. Then the soft constraint of (3.11) in the jth iteration becomes

$x_{n+1+i,j-1}(t^{s+2}; x^{s+1}, u_{j-1}^{s+1}) + K_{n+1+i,j-1}^{s+1}\,\mathrm{d}u_j^{s+1} \le \delta_{m_s+i}$   (3.13)
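The path-to-terminal-constraint conversion of eqs 3.4-3.6 can be sketched as follows, assuming SciPy and a user-supplied right-hand side f; the finite-difference estimate of $K_{j-1}^{s+1}$ is only an illustrative stand-in for the sensitivity equations of Appendix B, and all names are assumptions rather than the paper's code.

```python
import numpy as np
from scipy.integrate import solve_ivp

def terminal_violation(f, u, x_sp1, t_span, x_lo, x_hi):
    """Integrate the plant together with the augmented state x_{n+1} of eq 3.4:
    x_{n+1}(t^{s+2}) accumulates squared bound violations, so it is zero iff
    the state path constraints hold over the horizon."""
    n = x_sp1.size

    def rhs(t, z):
        x = z[:n]
        lo = np.minimum(0.0, x - x_lo)          # active where x < x_lo
        hi = np.minimum(0.0, x_hi - x)          # active where x > x_hi
        return np.concatenate([f(x, u), [np.sum(lo**2) + np.sum(hi**2)]])

    z0 = np.concatenate([x_sp1, [0.0]])
    sol = solve_ivp(rhs, t_span, z0, rtol=1e-8)
    return sol.y[n, -1]

def linearized_constraint(f, u_prev, x_sp1, t_span, x_lo, x_hi, h=1e-6):
    """First-order model of eq 3.5: value at u_prev plus a finite-difference K."""
    base = terminal_violation(f, u_prev, x_sp1, t_span, x_lo, x_hi)
    K = np.zeros(u_prev.size)
    for k in range(u_prev.size):
        du = np.zeros(u_prev.size); du[k] = h
        K[k] = (terminal_violation(f, u_prev + du, x_sp1, t_span, x_lo, x_hi) - base) / h
    # Linearized hard constraint (3.6):  base + K @ du == 0
    return base, K
```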

The vector form of these constraints (with dimension $m_x$) is

$N_j(t^{s+2}; x^{s+1}, u_{j-1}^{s+1}) \le \delta_{m_x}$   (3.14)

where

$K_{n+1+i,j}^{s+1} = \partial x_{n+1+i}/\partial u\,\big|_{u=u_{j-1}^{s+1}}, \quad i = 1, 2, 3, ..., m_x$   (3.15)

Including soft constraints 3.14, the quadratic programming problem at the jth iteration now becomes

$\min_{\Delta u^s}\; c^{\mathrm{T}}\Delta u^s + \tfrac{1}{2}(\Delta u^s)^{\mathrm{T}}H\Delta u^s + \sum_{i=1}^{m_t} w_i\delta_i$   (3.16)
subject to
$a^l \le A(u_{j-1}^s + \Delta u_j^s) \le a^u$
$a_s^l - \delta_{m_s} \le A_s(u_{j-1}^s + \Delta u_j^s) \le a_s^u + \delta_{m_s}$
$N_j(t^{s+2}; x^{s+1}, u_{j-1}^{s+1}) \le \delta_{m_x}$
$\delta \ge 0$

where $m_t = m_s + m_x$ is the total number of all soft constraints. Finally, it is worth noting that before the hard state constraints are included in the control algorithm, the feasibility of these state constraints has to be analyzed. Even though the hard state constraints are feasible in the model, the system states can easily be pushed out of the feasible region by the disturbances in the real process. Therefore, it is recommended to treat all state constraints as soft constraints.

Figure 2. Flow sheet of the control algorithm (boxes: given initial condition, calculate $\bar{x}^s$, $C^s$, $\Phi^s$, $\Gamma^s$, solve the QP for $\mathrm{d}u_j^{s+1}$, reduce the stepsize, test convergence).
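A minimal sketch of problem 3.16, with all state constraints treated as soft constraints as recommended above: the decision vector stacks $\Delta u^s$ and the slacks $\delta$, and the QP is handed to SciPy's SLSQP solver. The solver choice and all argument names are assumptions made for illustration, not the implementation used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def solve_constrained_qp(c, H, u_prev, A, a_lo, a_hi,      # hard control bounds
                         As, as_lo, as_hi, w,               # soft control bounds + weights
                         N_val, K_state, w_x):              # linearized soft state constraints
    """One QP of the form (3.16); variables are z = [du, delta]."""
    m = c.size
    ms, mx = as_lo.size, np.atleast_1d(N_val).size

    def obj(z):
        du, d = z[:m], z[m:]
        return c @ du + 0.5 * du @ H @ du + np.concatenate([w, w_x]) @ d

    cons = [
        # hard control bounds:  a_lo <= A (u_prev + du) <= a_hi
        {'type': 'ineq', 'fun': lambda z: A @ (u_prev + z[:m]) - a_lo},
        {'type': 'ineq', 'fun': lambda z: a_hi - A @ (u_prev + z[:m])},
        # soft control bounds relaxed by the slacks delta_s (eq 3.9)
        {'type': 'ineq', 'fun': lambda z: As @ (u_prev + z[:m]) - (as_lo - z[m:m + ms])},
        {'type': 'ineq', 'fun': lambda z: (as_hi + z[m:m + ms]) - As @ (u_prev + z[:m])},
        # linearized soft state constraints (3.13)/(3.14):  N + K du <= delta_x
        {'type': 'ineq', 'fun': lambda z: z[m + ms:] - (N_val + K_state @ z[:m])},
        # slacks are nonnegative
        {'type': 'ineq', 'fun': lambda z: z[m:]},
    ]
    res = minimize(obj, np.zeros(m + ms + mx), constraints=cons, method='SLSQP')
    return res.x[:m], res.x[m:]   # du, slacks
```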

4. Algorithm

The algorithm for the nonlinear control problem is stated as follows:
1. Start with an initial condition, $x^s$, $u^s$.
2. Solve for $\bar{x}^s$, $C^s$, $\Phi^s$, and $\Gamma^s$ simultaneously. (a) Set j = 0. (b) Set $u_0^{s+1} = u^s$.
3. (a) Solve QP (eq 3.16) to get $\mathrm{d}u_j^{s+1}$. (b) Choose a step size based on the Armijo line search (see below).
4. If $(\mathrm{d}u_j^{s+1})^{\mathrm{T}}(\mathrm{d}u_j^{s+1}) < \epsilon$, where $\epsilon$ is a small positive number, update $u^{s+1} = u_j^{s+1}$ (from the line search algorithm below), set s = s + 1, and return to step 1.
5. Otherwise, go back to step 3, with j = j + 1.

For clarity, the flow sheet of the algorithm for the control problem is shown in Figure 2. An Armijo (1966) line search is introduced to control the stepsize along the search direction. We define a test function as

$\psi(u_j^{s+1}) = (y_j^{s+2} - y^*)^2 + \sum_{i=1}^{m_t} w_i\delta_i + \sum_{i=1}^{m_s} \rho_i \max\{0,\; a_{s,i}^l - \delta_i - A_{s,i}(u^s + \Delta u^s),\; A_{s,i}(u^s + \Delta u^s) - a_{s,i}^u - \delta_i\} + \sum_{i=1}^{m_x} \rho_{m_s+i} \max\{0,\; x_{n+1+i,j}(x^{s+1}, u_j^{s+1}) - \delta_i\}$   (4.1)

The line search proceeds as follows:
1. Having $u_{j-1}^{s+1}$ and the new QP solution $u_j^{s+1}$, define $\mathrm{d}u_j^{s+1} = u_j^{s+1} - u_{j-1}^{s+1}$.
2. (a) Set $\alpha = 1$. (b) Evaluate $\psi(u_{j-1}^{s+1})$ and $\psi'(u_{j-1}^{s+1})$. (c) If

$\psi(u_{j-1}^{s+1} + \alpha\,\mathrm{d}u_j^{s+1}) - \psi(u_{j-1}^{s+1}) \le \alpha\delta_1 \min(\psi'(u_{j-1}^{s+1}), -\epsilon)$   (4.2)

go to 3. Here $\epsilon$ is a small positive value. (d) Otherwise, use quadratic interpolation to find a new value of $\alpha$ (eq 4.3) and go to (b). Here $\delta_1 = \delta_2 = 0.1$ and $\delta_3 = 0.8$ (nominal values).
3. Exit this algorithm and go to the next QP iteration with $u_j^{s+1} = u_{j-1}^{s+1} + \alpha\,\mathrm{d}u_j^{s+1}$.

It is well-known that Newton-type algorithms with $\alpha = 1$ are only locally convergent. Convergence properties can be enhanced, however, by adding a line search algorithm to determine a suitable $\alpha$. In the constrained case, global convergence to a minimum of $\psi(u_j^{s+1})$ can only be guaranteed if $\psi'(u_{j-1}^{s+1}) < 0$ for all QP iterations. This property does not always hold, although it can be shown that $\psi'(u_{j-1}^{s+1}) < 0$ for a sampling time sufficiently large (Li et al., 1988). In any case, use of the stepsize algorithm to find an acceptable $\alpha$ does yield a more reliable Newton-type algorithm than eq 2.6, the original one proposed by Economou (1985).

Finally, the most crucial assumption we made in developing the above algorithm is that the model be perfect.
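The loop above can be sketched as follows, assuming the QP and merit-function helpers from the earlier sketches are available; psi(u) stands for the test function of eq 4.1 and dpsi(u, du) for its directional derivative. A plain backtracking reduction of alpha is used in place of the paper's quadratic interpolation (eq 4.3), and all names and tolerances are illustrative.

```python
import numpy as np

def armijo_step(psi, dpsi0, u_prev, du, delta1=0.1, shrink=0.8, eps=1e-8, max_tries=20):
    """Accept alpha once condition (4.2) holds; otherwise shrink alpha.
    'shrink' plays the role of the safeguard delta_3 in the paper."""
    alpha, psi0 = 1.0, psi(u_prev)
    for _ in range(max_tries):
        if psi(u_prev + alpha * du) - psi0 <= alpha * delta1 * min(dpsi0, -eps):
            break
        alpha *= shrink
    return u_prev + alpha * du, alpha

def control_step(solve_qp, psi, dpsi, u_s, eps=1e-8, max_qp_iters=20):
    """Steps 2-5 of the algorithm for one sampling interval: iterate QP
    solutions and line searches until du is small, then return u^{s+1}."""
    u = u_s.copy()                           # u_0^{s+1} = u^s
    for _ in range(max_qp_iters):
        du, _slacks = solve_qp(u)            # step 3(a): QP (3.16) at u_{j-1}^{s+1}
        if du @ du < eps:                    # step 4: convergence test
            break
        u, _alpha = armijo_step(psi, dpsi(u, du), u, du)   # step 3(b)
    return u
```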

Table I. Parameter Values of Example 1

Figure 3. Stirred tank reactor (1).

This is rarely the case in chemical processes because model and disturbance uncertainties unavoidably exist. An online identification phase should be added to update the model after a number of time steps by using the differences between measured and predicted values of outputs. An optimization problem can be constructed to estimate these uncertain parameters. The objective function is to minimize the differences between the measured and predicted values subject to the bounds of these parameters. By use of a nonlinear model, the physical significance of these parameters can be recognized, and it is possible to set bounds for these parameters. If uncertain parameters are slow moving, i.e., the time constants of these parameters are much larger than the time interval of control, the parameters can be assumed as constants and the updated values can be used to predict the process in a future time horizon. If variation in parameters is sufficiently large compared to the time interval of our model, the dynamic nature of these parameters should be analyzed in order to postulate some model for the variation. Jang et al. (1987) recently proposed a similar two-phase approach to control and operate a chemical process.

5. Examples

Two example problems are simulated to demonstrate the effectiveness of the strategy proposed in this work. The first example problem was adapted from the paper of Matsuura and Kato (1967), which describes a kinetic model with multiequilibrium points at steady state. The second example problem was modified from Economou and Morari's paper (1986). Their original problem was extended to a three-dimensional system with constraints. The control objective of both problems is to operate the reactor as closely as possible to the set point. When the system gain changes sign as the operating point changes from one side of the set point to the other, a linear controller with integral action may perform poorly (Morari, 1983). Without integral action the linear controller will result in a large offset. Both example problems possess this feature. As seen below, the nonlinear controller with constraints proposed in this work controls this system very well, as seen by the results of our simulations.

Example 1: A Stirred Tank Reactor (1). For the reaction

A + B → P

where A is in excess, the reaction rate equation can be written as follows:

$r_B = k_1 C_B/(1 + k_2 C_B)^2$   (5.1)

where $C_B$ is the concentration of component B. The reaction occurs in an ideal stirred tank as shown in Figure 3.

Figure 4. Multiequilibrium points at steady state.

The concentrations of B in the two inlet flows are assumed to be fixed: $c_{B1} = 24.9$, $c_{B2} = 0.1$. Both inlet flows contain an excess amount of A. The tank is well stirred, with a liquid outflow rate determined by the liquid height in the tank h; i.e., $F(h) = 0.2h^{0.5}$. The cross-sectional area of the tank is 1. The sampling time of this problem is set to 1.0 min. The values of the various parameters are listed in Table I. After simplification, the model becomes

$\mathrm{d}x_1/\mathrm{d}t = f_1(x;u) = u_1 + u_2 - 0.2x_1^{0.5}$   (5.2.a)
$\mathrm{d}x_2/\mathrm{d}t = f_2(x;u) = [(24.9 - x_2)u_1 + (0.1 - x_2)u_2]/x_1 - k_1x_2/(1 + k_2x_2)^2$   (5.2.b)

$y_1 = x_1$
$y_2 = x_2$

where $u_1$ = inlet flow rate with concentrated B ($F_1$), $u_2$ = inlet flow rate with dilute B ($F_2$), $x_1$ = liquid height in the tank, and $x_2$ = concentration of B in the reactor. When the height of the liquid level ($y_1$) is at its set point ($y_1^*$ = 100.0) and the two inlet flow rates are at steady state ($u_1$ = 1.0; $u_2$ = 1.0), the graphical solution of eq 5.2.b is illustrated in Figure 4. The convection term in eq 5.2.b is a straight line with slope $1/\tau$, where $\tau$ is the time constant of the reactor at the steady state. When the reaction rate ($r_B$) is shown as a function of $C_B$ by the curve indicated with $r_B(C_B)$ in Figure 4, there are three different equilibrium points ($\alpha$, $\beta$, $\gamma$) of the system, of which $\alpha$ and $\gamma$ are stable equilibrium points and $\beta$ is unstable. A detailed discussion of the stability of this isothermal reactor was presented in Matsuura and Kato's paper (1967). Here point $\beta$ is used as a set point to demonstrate the proposed algorithm developed above. From Figure 4 we can observe that the system gain changes sign as the operating point changes from one side of point $\beta$ to the other. In addition, from this unstable equilibrium point a small disturbance can drive the system to one of the two stable equilibrium points, $\alpha$ or $\gamma$, depending on an increase or decrease of the B concentration in the reactor caused by the disturbance.

The control objective is to minimize the difference between the reactor output and the set point. Two sets of initial conditions are used. Set 1 is in the region where the system automatically goes to point $\alpha$ without any control. Set 2, on the other hand, is in the region where the system goes to point $\gamma$.
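The multiplicity argument around Figure 4 can be checked numerically with the short sketch below, which finds the steady states of eq 5.2.b at the height set point for fixed inlet flows. The rate parameters k1 and k2 are placeholders for the Table I values, which did not survive extraction, so the numbers here are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import brentq

k1, k2 = 1.0, 1.0           # placeholder rate parameters (Table I values assumed)
u1, u2 = 1.0, 1.0           # steady inlet flow rates
x1 = 100.0                  # liquid height at its set point

def steady_state_residual(x2):
    """Steady-state form of eq 5.2.b: convection term minus reaction term."""
    convection = ((24.9 - x2) * u1 + (0.1 - x2) * u2) / x1
    reaction = k1 * x2 / (1.0 + k2 * x2) ** 2
    return convection - reaction

# Scan for sign changes to bracket the equilibrium points (alpha, beta, gamma
# of Figure 4), then refine each root with brentq.
grid = np.linspace(1e-3, 25.0, 2000)
vals = np.array([steady_state_residual(c) for c in grid])
roots = [brentq(steady_state_residual, a, b)
         for a, b, fa, fb in zip(grid[:-1], grid[1:], vals[:-1], vals[1:])
         if fa * fb < 0.0]
print("steady-state concentrations:", roots)
```

With these placeholder values the scan returns three intersections, reproducing the stable/unstable/stable pattern described in the text.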

Table II. Initial Conditions of Example 1

Figure 5. y1 vs time (example 1).

Figure 8. u2 vs time (example 1).

Figure 9. y1 vs time (example 1).

Figure 10. y2 vs time (example 1).

Table II lists the two sets of initial conditions. Figures 5-8 and Figures 9-12 show $y_1$, $y_2$, $u_1$, and $u_2$ vs time for set 1 and set 2 initial conditions, respectively. The simulation results show that the algorithm efficiently drives the system to the set point without violating constraints. In Figure 6, if the upper bound on $u_1$ and $u_2$ is increased from 5.0 to 10.0, the overshoot of $y_2$ decreases. When the

Figure 11. u1 vs time (example 1).

Figure 12. u2 vs time (example 1).

Figure 13. Stirred tank reactor (2).

Figure 14. Equilibrium diagram.

upper bounds of $u_1$ and $u_2$ are removed, the overshoot of $y_2$ is totally eliminated. In Figure 10 the best performance of the outputs is the one with no upper bound on the control variables. In Figures 5 and 9, the rise time decreases with increasing upper bound on $u_1$ and $u_2$. Intuitively, one would think that the higher the upper bound on $u_1$ and $u_2$, the better the performance, since the feasible region of the control profile is increased, and this is what we observed in all plots for this example. However, this is not always true. In the simulation results of the next example, as will be explained later, the best performance of some outputs does not occur with the most relaxed bounds on the control profile. To control and simulate this example problem, 50 sampling times required roughly 30 CPU seconds on a MicroVAX II.

Example 2: A Stirred Tank Reactor (2). The first-order reversible exothermic reaction

A ⇌ B

with forward and reverse rate constants $k_1$ and $k_2$ is carried out in an ideal stirred tank shown in Figure 13. We assume that the combined concentration of A and B is constant. The tank is well stirred, with the liquid outlet determined by the liquid height in the tank h; i.e., $F(h) = 2.5h^{0.5}$. The nonlinear differential-algebraic equations that model the dynamics of the reactor can be found in Appendix C and are derived from differential mass and energy balances. The values of the various parameters can be found in Table III. After numerical values are substituted

Table III. Parameter Values of Example 2

parameters: $A_i$ = 1.0 mol/L; $A_c$ = 6.25 m2; $B_i$ = 0.0 mol/L; $C_1$ = 5 X 10^3/s; $C_2$ = 1 X 10^6/s; $C_p$ = 1000 cal/(kg K); $E_1$ = 10^4 cal/mol; $E_2$ = 1.5 X 10^4 cal/mol; $H_R$ = 5000 cal/mol; R = 1.987 cal/(mol K); $\rho$ = 1.0 kg/L
set points: $y_1^*$ = 0.508 mol/L; $y_2^*$ = 0.16 m

for all the parameters, the model can be simplified as follows:

$\mathrm{d}x_1/\mathrm{d}t = f_1(x;u) = -0.16x_1x_3^{-1.0}u_1 + K_1(1.0 - x_1) - K_2x_1$
$\mathrm{d}x_2/\mathrm{d}t = f_2(x;u) = 0.16u_1u_2x_3^{-1.0} - 0.16x_2x_3^{-1.0}u_1 + 5.0[K_1(1.0 - x_1) - K_2x_1]$
$\mathrm{d}x_3/\mathrm{d}t = f_3(x;u) = 0.16u_1 - 0.4x_3^{0.5}$   (5.3)
$y_1 = x_1$
$y_2 = x_3$
$K_1 = 5 \times 10^3 \exp(-5.0 \times 10^3/x_2)$
$K_2 = 1 \times 10^6 \exp(-7.5 \times 10^3/x_2)$
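As a check of the model equations only (not of the closed-loop controller), the sketch below integrates the reconstructed eq 5.3 from the set 1 conditions of Table IV, assuming SciPy; the pre-exponential factor of $K_2$ follows the Table III reconstruction, and the integration horizon is an arbitrary illustrative choice.

```python
import numpy as np
from scipy.integrate import solve_ivp

def reactor2_rhs(t, x, u1, u2):
    """Right-hand side of eq 5.3 with x = [B_o, T_o, h]."""
    x1, x2, x3 = x
    K1 = 5.0e3 * np.exp(-5.0e3 / x2)
    K2 = 1.0e6 * np.exp(-7.5e3 / x2)
    r = K1 * (1.0 - x1) - K2 * x1              # net production of B
    dx1 = -0.16 * x1 * u1 / x3 + r
    dx2 = 0.16 * u1 * (u2 - x2) / x3 + 5.0 * r
    dx3 = 0.16 * u1 - 0.4 * np.sqrt(x3)
    return [dx1, dx2, dx3]

# Set 1 initial conditions and inputs from Table IV.
x0 = [0.16, 503.0, 0.20]
u1, u2 = 0.866, 504.0
sol = solve_ivp(reactor2_rhs, (0.0, 10.0), x0, args=(u1, u2), max_step=0.1)
print("final state (y1 = x1, y2 = x3):", sol.y[:, -1])
```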

The control objective is to operate the reactor output as closely as possible to the set point subject to the process constraints. Figure 14 shows the equilibrium conversion vs reactor temperature ($x_2$) when the height of the liquid level ($y_2$) is at the set point ($y_2^*$ = 0.16). Two sets of initial conditions were used. One is on the left-hand side of the maximum point of the equilibrium reactor curve; the other one is on the right-hand side. Since the height of the liquid level is not initially at the set point, the actual initial point can only be presented in a three-dimensional plot. Here, we used two sets of initial conditions because the linear IMC controller is unstable at set 2 initial conditions with fixed height of liquid level (Economou and Morari, 1986). Table IV shows the details of the two sets of initial conditions.

Table IV. Initial Conditions of Example 2

set 1: $x_1$ = 0.16, $x_2$ = 503.0, $x_3$ = 0.20, $u_1$ = 0.866, $u_2$ = 504.0
set 2: $x_1$ = 0.41, $x_2$ = 351.0, $x_3$ = 0.12, $u_1$ = 1.12, $u_2$ = 352.0

Figure 15. y1 vs time (example 2).

Figure 16. y2 vs time (example 2).

Figure 17. u1 vs time (example 2).

Figure 18. u2 vs time (example 2).

Figure 19. y1 vs time (example 2).

Figure 20. y2 vs time (example 2).

Using the algorithm developed previously, we first simulate this example problem with various control variable constraints without putting any constraints on the state

variables. We varied the $u_2$ upper bound with a fixed lower bound at 300. $u_1$ is not bounded. Figures 15-18 show $y_1$, $y_2$, $u_1$, and $u_2$ vs time, respectively, using set 1 initial conditions. Figures 19-22 are similar plots with set 2 initial conditions. Footnotes in the figures indicate the various upper bounds on $u_2$ corresponding to the various curves. In Figure 16 we observe that the best performance is obtained when there is no upper bound on $u_2$. However, Figure 15 is different; that is, the best performance is obtained when the upper bound on $u_2$ is 490. One possible explanation for this is that in MIMO systems, it is often possible to trade off good performance of one output against poor performance of another. Here, even though one output

Figure 21. u1 vs time (example 2).

Figure 22. u2 vs time (example 2).

Figure 23. y1 vs time (example 2).

Figure 24. y2 vs time (example 2).

Figure 25. u1 vs time (example 2).

has the best performance with no upper bound on u2, another output is relatively poor in performance. This observation tells us that, although the overall performance of a MIMO system in terms of achieving the lowest values in the optimal objective function may be improved by relaxing control variable bounds, the improvement of individual outputs is not guaranteed. In order to demonstrate the effectiveness of state variable constraints, we simulated this example problem with both control variable and state variable constraints. The results were compared with the output which had no state


Figure 26. u2 vs time (example 2).

Table V. Process Constraints of Example 2 (a)

Figures 23-26: curve 1, no state constraints; curve 2, $x_1^l$ = 0.16, $x_1^u$ = 0.51, $x_2^l$ = 300.0, $x_2^u$ = 550.0; curve 3, $x_1^l$ = 0.40, $x_1^u$ = 0.51, $x_2^l$ = 300.0, $x_2^u$ = 550.0.
Figures 27-30: curve 1, no state constraints; curve 2, $x_1^l$ = 0.41, $x_1^u$ = 0.51, $x_2^l$ = 300.0, $x_2^u$ = 550.0; curve 3, $x_1^l$ = 0.49, $x_1^u$ = 0.51, $x_2^l$ = 300.0, $x_2^u$ = 550.0.

(a) No constraints on $u_1$, $u_2$, and $x_3$.

Figure 27. y1 vs time (example 2).

Figure 28. y2 vs time (example 2).

Figure 29. u1 vs time (example 2).

Figure 30. u2 vs time (example 2).

variable constraints. Figures 23-26 show $y_1$, $y_2$, $u_1$, and $u_2$ vs time, respectively, with set 1 initial conditions. Three curves were plotted in each figure. Curve 1 has no state variable constraints; curve 2 has a loose constraint on $x_1$; curve 3 has a tight constraint on $x_1$. Figures 27-30 are similar plots with set 2 initial conditions. Table V shows the details of these constraints.

Figures 23 and 27 show that the controller with state variable constraints performs better than one without constraints. The reason may be that, when a Newton-type control law is derived, the second-order and higher order terms are truncated and some system information related to these higher order terms is lost. Then a full Newton step will be taken which may be oversized, and the new control variable $u^{s+1}$ may not yield a good value. By including the state variable constraints with our iteration algorithm, we also use an Armijo line search to control the step size and iterate until the optimal point has been found. This improves the performance of the controller.

In eq 3.7, the hard constraints on the control variables are always satisfied during the iterations due to the structure of the problem. In addition, the hard constraints on state variables are also guaranteed to be satisfied at the end of the QP iterations. This can be shown as follows. When the convergence criterion is satisfied, $(\mathrm{d}u_j^{s+1})^{\mathrm{T}}(\mathrm{d}u_j^{s+1}) \le \epsilon$, which implies $\mathrm{d}u_j^{s+1} \to 0$; $x_{n+1}(t^{s+2})$ must then be equal to zero. In other words, if the state constraints are feasible, the state variable constraints are not violated in the one-step time horizon. Our simulation results also support this claim.

From Figures 23 and 27, curve 3 avoids the oscillation which occurs in both curve 1 and curve 2. The rise time in curve 3 is similar to or shorter than that of curve 1 and

curve 2. The price to be paid for having better control of $x_1$ is that one has to tolerate a sluggish response in $x_3$. In other words, one can manipulate the state variable constraints to change the response speed of the individual state variables. The optimal solutions for this example problem usually require three to four QP iterations, except for the first sampling interval, where more iterations are required to find an optimal solution. To simulate and control 10 sampling times (T = 1.0 min) in this example required roughly 1.5-2.0 CPU minutes on a MicroVAX II. Of course, problems without state variable constraints require much less computation since only one QP solution is usually required for a time step.


6. Stability Analysis

In the IMC structure, a filter is introduced to reduce the loop gain and to achieve the desired degree of robustness. If a perfect IMC controller has process constraints, the robustness of the system can be achieved by adjusting the constraints in place of an IMC filter. Before we show the similarity between a filter and the use of process constraints, we briefly introduce the nonlinear IMC structure for an autonomous lumped parameter MIMO system (Economou and Morari, 1986). The block diagram of this structure is shown in Figure 31. Here the nonlinear operators F, C, P, and M denote the filter, controller, plant, and model, respectively. Blocks with double lines are used to emphasize that the operators are nonlinear and that the linear block diagram manipulations cannot be used.

Figure 31. Complete nonlinear IMC structure.

We also introduce some mathematical concepts which are needed in the proof. A more detailed treatment of these concepts can be found in the work of Kolmogorov and Fomin (1957) and Rall (1969).

Norm. A linear space R is said to be normed if to each element $x \in R$ there corresponds a nonnegative number $\|x\|$, called the norm of x, such that (1) $\|x\| = 0$ if $x = 0$, (2) $\|\alpha x\| = |\alpha|\,\|x\|$ where $\alpha$ is an arbitrary constant, and (3) $\|x + y\| \le \|x\| + \|y\|$. A complete normed space is said to be a Banach space.

Linear Operator. Let D and R be two Banach spaces, whose elements are denoted respectively as u and y. Let a rule be given such that for each u in some set $U \subset D$ there is assigned some element y in the space R. Then an operator $y = Mu$ with range of values in R can be defined on the set U. An operator M is said to be linear if the equality

$M(\alpha_1 u_1 + \alpha_2 u_2) = \alpha_1 Mu_1 + \alpha_2 Mu_2$   (6.1)

is satisfied for any two elements $u_1, u_2 \in U$. $\alpha_1$ and $\alpha_2$ are arbitrary real numbers. An operator M is considered nonlinear when it does not possess the above property.

Operator Gain. Let us consider a nonlinear operator M which transforms the input subspace D to the output subspace R:

$y = Mu, \quad u \in D, \; y \in R$   (6.2)

Define the gain of M, denoted by g(M), to be

$g(M) = \sup \|Mu\|/\|u\|$   (6.3)

where the supremum is taken over all $u \in D$.

Zames' (1966) Small Gain Theorem. If the open loop gain is less than one, then the closed loop output is bounded. For the IMC structure, this is equivalent to (Economou and Morari, 1986)

$g((P_d - M)C) \le g(C)\,g(P_d - M) < 1$   (6.4)

where $P_d$ denotes the plant operator with the effect of the disturbance. When $P_d = M$, the perfect model case, the IMC controller C is open loop stable and then the inequality is trivially satisfied. When the process is poorly modeled, which means $g(P_d - M)$ is large, the control gain g(C) has to be small. Usually, the perfect controller $C = M^I$ (where $M^I$ is the right inverse) cannot be used because condition (6.4) may not be satisfied. In order to satisfy the stability condition, an adjustable filter F is introduced such that $C = M^IF$. Then the stability condition (6.4) becomes

$g(M^IF)\,g(P_d - M) \le g(F)\,g(M^I)\,g(P_d - M) < 1$   (6.5)

Suppose now that the control variables are constrained as

$-\delta \le u_c \le \delta$   (6.6)

Then the constrained controller maps the error e to the constrained control variable $u_c$,

$u_c = C_c e$   (6.7)

where e is defined as $e = y - y^*$, u = the inputs determined by an IMC controller without the constraints on u, C = the IMC perfect controller, $C = M^I$, $u_c$ = the inputs determined by an IMC controller with the constraints on u, and $C_c$ = the IMC controller with constraints on u. We assume the process is open loop stable, i.e., any bounded input gives a bounded and continuous output. Then, since all inputs are bounded, the closed loop process with the constrained controller will remain input-output stable. Using Zames' small gain theory, one can verify this as follows. To simplify the proof, we assume the plant operator $P_d$ maps zero into itself, i.e.,

$y = P_d u = 0$ for $u = 0$   (6.8)

Consider a scaled and constrained QP so that

$\min_{\Delta u^s}\; c^{\mathrm{T}}\Delta u^s + \tfrac{1}{2}(\Delta u^s)^{\mathrm{T}}H\Delta u^s$
subject to $-\delta \le u_c \le \delta$, $u_c = u^s + \Delta u^s$   (6.9)

where $\delta$ is greater than or equal to 0. We can partition $u_c$ into components at bounds, $u_b$, and unconstrained components, $u_u$, with $\|u_b\|_\infty = \delta$ and $\|u_u\|_\infty < \delta$. Then the controller gain with constraints becomes

$g(C_c) = \sup \|C_c e\|/\|e\| = \sup \|u_c\|/\|e\|$   (6.10)

The gain of the IMC controller is defined in the same way, $g(C) = \sup \|u\|/\|e\|$ (eq 6.11). Because the constrained control variable is bounded by $\delta$, the constrained gain (6.10) can be made small relative to g(C) for errors with $\|e\| > \delta$, and the sufficient condition (6.13) can be made to hold. Comparing stability condition (6.13) with (6.5), we can see that putting constraints on u has a similar function as the filter in the nonlinear IMC structure. However, one should note that the filter allows direct manipulation of frequencies and may be less conservative than using constraints. On the other hand, because constraints are defined in the state space, any knowledge of process limitations or engineering judgement can now be imposed and


enforced directly.

The above result shows only the (rather obvious) boundedness property of the controller. To show convergence to a unique solution (asymptotic stability), the concept of incremental gains is required along with an incremental form of the small gain theorem. The incremental gain is defined for a nonlinear operator M as

$\hat{g}(M) = \sup \|Mu_1 - Mu_2\|/\|u_1 - u_2\|$   (6.14)

and a unique solution (stability) is guaranteed (Zames, 1966; Desoer and Vidyasagar, 1975) if

$\hat{g}(C)\,\hat{g}(P_d - M) < 1$   (6.15)

Since the control variable is bounded, it is easy to show that the incremental gain of the constrained controller is bounded (eq 6.16), and the constrained controller can be used to enforce stability. To understand the effect of the controller on stability, consider the following input-output maps of the controller. (a) $\hat{g}(C)$ increases with e. Here, by adjusting $\delta$, stability can be found arbitrarily close to the origin, as long as the stability condition holds at the origin (assumed). (b) $\hat{g}(C)$ decreases with e. In this case the maximum gain occurs at the origin and the constrained controller will not improve the stability condition. Here, since $P_d$ maps zero into itself, the worst case condition requires a zero input for stability. Thus, to summarize, we are left with the bounds of eq 6.17.

Now, if the leftmost quantity is strictly less than 1 (and $\hat{g}(C)$ is monotonically increasing), one can use $\delta$ as a tuning parameter for asymptotic stability.

7. Conclusions

An extension of the nonlinear IMC design procedure to a special form of successive quadratic programming was presented to handle process constraints. This strategy can efficiently handle both soft and hard constraints. Simulation results for two nonlinear reactor control problems show the effectiveness of this strategy. Finally, since all of the control variables can be bounded, the stability of the constrained controller is guaranteed as long as the process is open loop stable. This can be proved using the small gain theory of Zames (1966).

The algorithm developed in this work is a single-step method, which means that the algorithm only predicts one step ahead. It assumes that the process will reach the set point at the end of the first step. If a process to be controlled possesses a time delay that is longer than one sampling time interval, or if the time delay associated with different pairs of inputs and outputs is not the same, the single-step algorithm is not capable of handling it. Moreover, if a batch process needs to be controlled, an optimal control profile over the entire time horizon rather than a single step should be calculated. Therefore, a multistep predictive algorithm needs to be developed. In addition, even if a process does not possess time delay, the controller performance may still benefit from the longer time prediction.

In addition, most chemical processes are difficult to model precisely since there unavoidably exist parameter and disturbance uncertainties. The structure of the model

can usually be derived from a mathematical description of the fundamental physicochemical phenomena taking place in the process. If on-line computer control is used, the process measurements can also be used to estimate the uncertain parameters and disturbances. Then the updated model can be used to calculate the optimal control profile over a future time horizon. A reliable parameter estimation algorithm for this purpose will be developed in the future.

Acknowledgment

Financial support from NSF Grant ENG-8451058 is gratefully acknowledged. The authors also thank Prof. M. Morari for useful discussions concerning this paper.

Nomenclature

$A_c$ = cross-sectional area of the tank
$A_i$ = concentration of A in inflow
$A_o$ = concentration of A in outflow
$B_i$ = concentration of B in inflow
$B_o$ = concentration of B in outflow
$C_p$ = heat capacity of the liquid
$C_i$ = Arrhenius constant (i = 1, 2)
$E_i$ = activation energy (i = 1, 2)
$h$ = height of the liquid level in the tank
$K_i$ = reaction constant (i = 1, 2)
$F_i$ = inlet flow rate
$F_o$ = outlet flow rate
$H_R$ = heat of reaction
$R$ = universal gas constant
$V$ = volume of the tank

Greek Symbol

$\rho$ = density of the liquid in the tank

Appendix A

Referring to eq 2.6, let

$T = -C^s\Gamma^s$   (i.a)
$\Omega = C^s\Phi^s(\bar{x}^s - x^s) + (y^* - y^{s+1})$   (i.b)

Now eq 2.6 becomes

$\Omega + T\Delta u = 0$   (ii)

Equation ii is equivalent to the following problem:

$\min_{\Delta u}\; \Omega^{\mathrm{T}}T\Delta u + 0.5\Delta u^{\mathrm{T}}T^{\mathrm{T}}T\Delta u$   (iii)

Proof. Let

$F = 0.5(\Omega + T\Delta u)^{\mathrm{T}}(\Omega + T\Delta u) = 0.5(\Omega^{\mathrm{T}}\Omega + \Delta u^{\mathrm{T}}T^{\mathrm{T}}\Omega + \Omega^{\mathrm{T}}T\Delta u + \Delta u^{\mathrm{T}}T^{\mathrm{T}}T\Delta u) = 0.5(2.0\Omega^{\mathrm{T}}T\Delta u + \Delta u^{\mathrm{T}}T^{\mathrm{T}}T\Delta u + \Omega^{\mathrm{T}}\Omega)$   (iv)

When the optimal condition is satisfied, we have

$\mathrm{d}F/\mathrm{d}\Delta u = 0.5(2.0\Omega^{\mathrm{T}}T + 2.0\Delta u^{\mathrm{T}}T^{\mathrm{T}}T) = 0$   (v)

Then we have

$(\Omega^{\mathrm{T}} + \Delta u^{\mathrm{T}}T^{\mathrm{T}})T = 0$   (vi)

which is equivalent to

$\Omega + T\Delta u = 0$   (vii)

Then eq 3.1.b and 3.1.c become

$c^{\mathrm{T}} = \Omega^{\mathrm{T}}T$   (viii.a)
$H = T^{\mathrm{T}}T$   (viii.b)

Here T needs to be square and nonsingular or must have full rank.
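The equivalence can be checked numerically with a few lines of NumPy (an illustrative check only): for a random full-rank T and residual vector Omega, the minimizer of the quadratic (iii) coincides with the Newton step solving (ii).

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3))       # square, full rank with probability 1
omega = rng.standard_normal(3)

# Newton step from eq (ii):  omega + T du = 0
du_newton = np.linalg.solve(T, -omega)

# Minimizer of the quadratic (iii): gradient T'omega + T'T du = 0
du_qp = np.linalg.solve(T.T @ T, -T.T @ omega)

assert np.allclose(du_newton, du_qp)
print(du_newton)
```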


Appendix B

When differentiating eq 3.5.b with respect to t, we have

$\partial K_j^{s+1}/\partial t = \frac{\partial}{\partial t}\left(\frac{\partial x_{n+1,j}}{\partial u}\right) = \frac{\partial}{\partial u}\left(\frac{\mathrm{d}x_{n+1,j}}{\mathrm{d}t}\right) = 2.0\sum_{i=1}^{n}\{\min(0, x_i(t) - x_i^l)\}\frac{\partial x_i}{\partial u} - 2.0\sum_{i=1}^{n}\{\min(0, x_i^u - x_i(t))\}\frac{\partial x_i}{\partial u}$

$K_j^{s+1}(t^{s+1}) = 0, \quad t^{s+1} \le t \le t^{s+2}$

Similarly, the differential of $K_{n+1+i,j}^{s+1}$ can be written as

$\partial K_{n+1+i,j}^{s+1}/\partial t = \frac{\partial}{\partial u}\left(\frac{\mathrm{d}x_{n+1+i,j}}{\mathrm{d}t}\right) = 2.0\{\min(0, x_i(t) - x_i^l)\}\frac{\partial x_i}{\partial u} - 2.0\{\min(0, x_i^u - x_i(t))\}\frac{\partial x_i}{\partial u}$

$K_{n+1+i,j}^{s+1}(t^{s+1}) = 0, \quad t^{s+1} \le t \le t^{s+2}$
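In practice the terms $\partial x_i/\partial u$ above come from the input-sensitivity equations of section 2, so $K_j^{s+1}$ can be obtained by integrating the state, its sensitivity, and dK/dt together over the horizon. The sketch below does exactly that, assuming SciPy and user-supplied f, dfdx, and dfdu; the names are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def K_sensitivity(f, dfdx, dfdu, u, x_sp1, t_span, x_lo, x_hi):
    """Integrate x, Gamma = dx/du and K = d x_{n+1}/du together (Appendix B):
    dK/dt = 2 * sum_i [min(0, x_i - x_i^l) - min(0, x_i^u - x_i)] * dx_i/du."""
    n, m = x_sp1.size, u.size

    def rhs(t, z):
        x = z[:n]
        Gam = z[n:n + n * m].reshape(n, m)
        A, B = dfdx(x, u), dfdu(x, u)
        lo = np.minimum(0.0, x - x_lo)
        hi = np.minimum(0.0, x_hi - x)
        dK = 2.0 * ((lo - hi) @ Gam)            # length-m row of dK/dt
        return np.concatenate([f(x, u), (A @ Gam + B).ravel(), dK])

    z0 = np.concatenate([x_sp1, np.zeros(n * m), np.zeros(m)])
    sol = solve_ivp(rhs, t_span, z0, rtol=1e-8)
    return sol.y[n + n * m:, -1]                # K at t^{s+2}
```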

Appendix C

The following nonlinear differential-algebraic equations model the dynamics of the second example. They are derived from differential mass and energy balances. The meaning of the variables can be found in the Nomenclature. The values of the various parameters can be found in Table III.

$\mathrm{d}(VA_o)/\mathrm{d}t = A_iF_i - A_oF_o - V(K_1A_o - K_2B_o)$
$\mathrm{d}(VB_o)/\mathrm{d}t = B_iF_i - B_oF_o + V(K_1A_o - K_2B_o)$
$\mathrm{d}(VT_o)/\mathrm{d}t = T_iF_i - T_oF_o + [H_R/(\rho C_p)]V(K_1A_o - K_2B_o)$
$A_c\,\mathrm{d}h/\mathrm{d}t = F_i - F_o$
$A_i + B_i = A_o + B_o = 1.0$
$V = A_ch$

We define

$X_1 = B_o$
$X_2 = T_o$
$X_3 = h$
$U_1 = F_i$
$U_2 = T_i$

Literature Cited

Armijo, L. "Minimization of Functions Having Continuous Partial Derivatives". Pac. J. Math. 1966, 16, 1.
Cutler, C. R.; Ramaker, B. L. "Dynamic Matrix Control--A Computer Control Algorithm". Presented at the AIChE National Meeting, Houston, TX, 1979.
Desoer, C. A.; Vidyasagar, M. Feedback Systems: Input-Output Properties; Academic: New York, 1975.
Economou, C. G. "An Operator Theory Approach to Nonlinear Controller Design". Ph.D. Dissertation, California Institute of Technology, Pasadena, 1985.
Economou, C. G.; Morari, M. "Newton Control Laws for Nonlinear Controller Design". Presented at the IEEE Conference on Decision and Control, Fort Lauderdale, FL, 1985.
Economou, C. G.; Morari, M. "Internal Model Control. 5. Extension to Nonlinear Systems". Ind. Eng. Chem. Process Des. Dev. 1986, 25, 403.
Garcia, C. E.; Morari, M. "Internal Model Control. 1. A Unifying Review and Some New Results". Ind. Eng. Chem. Process Des. Dev. 1982, 21, 308.
Garcia, C. E.; Morari, M. "Internal Model Control. 2. Design Procedure for Multivariable Systems". Ind. Eng. Chem. Process Des. Dev. 1985a, 24, 472.
Garcia, C. E.; Morari, M. "Internal Model Control. 3. Multivariable Control Law Computation and Tuning Guidelines". Ind. Eng. Chem. Process Des. Dev. 1985b, 24, 485.
Garcia, C. E.; Morshedi, A. M. "Quadratic Solution of Dynamic Matrix Control (QDMC)". Presented at the Canadian Conference on Industrial Computer Systems, Ontario, 1984.
Jang, S. S.; Joseph, B.; Mukai, H. "On-Line Optimization of Constrained Multivariable Chemical Processes". AIChE J. 1987, 33(1), 26.
Kolmogorov, A. N.; Fomin, S. V. Elements of the Theory of Functions and Functional Analysis; Graylock: Rochester, 1957.
Li, W. C.; Biegler, L. T.; Economou, C. G.; Morari, M. "A Newton Type Control Strategy for Nonlinear Process Systems". Submitted for publication, 1988.
Matsuura, T.; Kato, M. "Concentration Stability of the Isothermal Reactor". Chem. Eng. Sci. 1967, 22, 171.
Morari, M. "Robust Stability of Systems with Integral Control". Proceedings of the 22nd IEEE Conference on Decision and Control, San Antonio, TX, 1983.
Prett, D. M.; Gillette, R. D. "Optimization and Constrained Multivariable Control of a Catalytic Cracking Unit". Presented at the AIChE National Meeting, Houston, TX, 1979.
Rall, L. B. Computational Solution of Nonlinear Operator Equations; Wiley: New York, 1969.
Sargent, R. W. H.; Sullivan, G. R. "The Development of an Efficient Optimal Control Package". Proceedings of the IFIP Conference on Optimization Techniques, New York, Part 2, 1977.
Zames, G. "On the Input-Output Stability of Time-Varying Nonlinear Feedback Systems. Part I: Conditions Derived Using Concepts of Loop Gain, Conicity, and Positivity". IEEE Trans. Autom. Control 1966, AC-11, 228.

Received for review October 15, 1987 Revised manuscript received March 23, 1988 Accepted April 18, 1988