
Ind. Eng. Chem. Process Des. Dev., Vol. 18, No. 1, 1979

Design of Sampled Data Controllers

Zalman J. Palmor† and Reuel Shinnar, Department of Chemical Engineering, The City College of The City University of New York, New York, New York 10037

The performance of modern optimal algorithms for sampled data controllers is analyzed and compared to classical controller design. It is shown that their performance is very sensitive to the structure of the disturbance used in the design procedure, but quite insensitive to the parameters of the noise model. For most processes stability constraints are overriding. The paper also shows that optimal algorithms require a different type of stability analysis, as they are very sensitive to small deviations between the real process model and the model used for design. Conventional stability margins are meaningless here. A design strategy utilizing optimal control theory is proposed that results in properly working and stable controllers, with a performance that is overall better than that of PID controllers. Emphasis is given to strategies for operator control but the results should be of more general applicability.

1. Introduction

In the process industries it is quite common to control continuous processes on the basis of laboratory tests based on samples taken from the process. This is especially common in the paper, polymer, and metallurgical industries. The final properties of the material are affected by process conditions, but are not measurable at this stage of the process, at least in an instantaneous way. For example, polymers for spinning must be tested in an actual spinning machine for spinnability, strength, and dyeability. Metals from a rolling mill have to be tested for strength, hardness, etc. This normally involves a considerable time delay and also limits the frequency at which we can sample. To compensate for this we often introduce secondary continuous feedback loops measuring related variables. In the case of the polymer for spinning, we use molecular weight or viscosity as the measured variable, adjusting catalyst flow or process temperature in the continuous loop. The operator then adjusts the set point of these secondary loops according to the result of the laboratory tests. Our paper is concerned with the design of simple control strategies for such cases. Several methods have been suggested in recent years and we will try to compare and evaluate them and suggest an alternative advanced design algorithm. We are going to show that some of the optimal design algorithms proposed lead to significant insights in controller design, though they do not guarantee a usable control strategy. Furthermore, once we have understood their implications we can get similar results by suitably modifying the conventional design method. While our paper concentrates on the control problems associated with relatively infrequent sampling, the results should be of wider interest as they have some important implications for continuous control and the use of stochastic predictors in process control.

† Taylor Instrument Co., Rochester, N.Y.

2. The Requirements of a Sampled Data Controller Algorithm

The general configuration of the control system is given in Figure 1. Before going into any detailed discussion we want to first outline our approach to the design of a control algorithm. In a previous paper by one of the authors (Kestenbaum, Thau, and Shinnar, 1976) the specifications of a continuous analog controller were discussed. We will

restate and modify them here in a form suitable for sampled data control. 1. The Controller Must Be Able to Maintain the Desired Output Variable at a Given Set Point. Maintaining the output variable at the desired set point is the most important function of the control, and one should always remember that we do not know a priori the exact steady-state control setting that will lead to the desired steady-state output. For simplicity we will assume that it is possible to control the state of the measured variable by adjusting just one set point. In the type of problem that one frequently deals with in polymers or other complex materials this is often difficult, as varying only one manipulated variable is not always sufficient to achieve this. Our results can, however, be generalized to the more complex case. 2. Set Point Changes Should Be Fast and Smooth. However, minimum time is not a sufficient criterion. Large overshoots cannot be tolerated, as the operator does not know the final steady state and would have to step in. 3. Asymptotic Stability and Satisfactory Performance for Different Types of Disturbances That Could Arise. The algorithm must lead to stable overall control and converge fast on the desired steady state. While we can study the nature of the disturbances for the system and try to optimize our controller accordingly, the controller must be reasonably able to handle any unforeseen disturbance that might arise later. In a sampled data system we have an additional requirement. Such systems normally involve some uncorrelated measurement errors in individual samples. If we act on these measurements we will introduce perturbations into the system, and the controller must be designed such that these introduced perturbations are small. 4. The Controller Should Be Designable with a Minimum of Information with Respect to the Nature of the Inputs and the Structure of the System. Any design method must be related to a modeling and identification method.
As the models resulting from such methods are almost always crude approximations and simpler than the real system, our total design must be able to handle a complex system with a simple model, the parameters of which can be experimentally determined. Optimal algorithms by their very nature tend to be very sensitive to both the structure and the exact values of the parameters of the model describing the process. In chemical plants the exact mathematical description of the process is rather complex and cannot be obtained with any reasonable effort. Most processes are nonlinear. Despite

Figure 2. Interpretation of the control problem as matching two predictions: n̂_{t+k/t}, the k-step ahead prediction of n_{t+k} (usually in the mean square sense); x_{t+k}, the expected process output at time t + k due to the control input u_t.

Figure 1. a, Process-disturbance configuration; b, a typical closed loop frequency response.

this we can get workable controllers for them by using linearized models. A good design procedure must take into account that there is a finite but unknown deviation between the model used for design and the real description of the process. This also applies to probabilistic models of the disturbance. 5. The Controller Must Be Reasonably Insensitive to Changes in System Parameters. It must be stable and perform well over a reasonable range of system parameters. 6. Excessive Control Actions Should Be Avoided. In process control the cost of the control effort (steam, catalyst, etc.) is determined mainly by the amount used to keep the system at its steady state. The level of the control effort during changes or upsets has little effect on costs. What we have to consider is the fact that any controller has a limited range of action. We can look at these criteria quantitatively by investigating the closed loop frequency response of G*(B) (Figure 1b) defined as follows

G*(B) = 1/(1 + G_c(B)G_p(B))

and we can rewrite our initial criteria just by writing them in terms of G*(exp(-jωT)). Normally it has a form as given in Figure 1b. Criterion 1 means that the value of G*(exp(-jωT)) must be zero for low frequencies (G*(B) → 0 for B → 1). Criterion 3 means that G*(B) cannot have any poles inside the unit circle in the complex B plane. Furthermore, G*(exp(-jωT)) must not have a high peak in the resonance region. Criterion 5 means that G*(B) should not change too much for reasonable perturbations in G_p(B). This can also be related to criterion 4. Criterion 4 is best investigated in the time domain by plotting the
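These frequency-domain criteria are easy to evaluate numerically. A minimal sketch (the process and controller numbers are illustrative, not taken from the paper): a discrete first order plus delay process under integral control, with G*(B) = 1/(1 + G_c(B)G_p(B)) evaluated on the unit circle.

```python
import cmath

# Illustrative numbers (not from the paper): first-order discrete process
# with delay, G_p(B) = w0*B**2/(1 - d*B), under integral control
# G_c(B) = Kc/(1 - B).
w0, d, Kc, T = 0.2, 0.8, 0.5, 1.0

def G_star(w):
    """Evaluate G*(B) = 1/(1 + Gc(B)*Gp(B)) on the unit circle, B = exp(-jwT)."""
    B = cmath.exp(-1j * w * T)
    Gp = w0 * B**2 / (1 - d * B)
    Gc = Kc / (1 - B)
    return 1 / (1 + Gc * Gp)

# Criterion 1: |G*| must vanish at low frequency (integral action removes
# steady-state offset); a high peak at intermediate frequencies would
# signal a poorly damped resonance (criterion 3).
for w in (0.001, 0.5, 1.0, 2.0, 3.0):
    print(f"w = {w:5.3f}   |G*| = {abs(G_star(w)):.4f}")
```

Plotting |G*| over a fine grid of ω gives exactly the kind of curve sketched in Figure 1b.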

continuous time response of the overall system to step inputs. Criterion 6 can be evaluated by plotting G_c(exp(-jωT))·G*(exp(-jωT)). In our specific case we can add two additional criteria. 7. Simple Control Schemes Are Preferable. As we deal here with sampled data on a relatively infrequent basis there is an inherent advantage in simple, transparent control schemes. Costwise, today, there is no problem in using a complete history of the unit on a computer, printing out the instructions and directly updating the values after the operator punches in the last result. One should, however, convince oneself that there is a real advantage to that. If a simple scheme gives almost the same result it is preferable, as it is easier to override in special situations and helps the operator to understand the unit. 8. Satisfactory Response of the System to the Actual Disturbances Specific to the Process. It is also common to define an eighth criterion for the controller design, namely that the controller should be specifically designed to deal with the specific types of disturbances common to a given process in a specific plant. In classic design we just look whether the plant has any disturbances with a frequency in the resonance region and adjust G* accordingly. In stochastic optimal control we look at the problem in a somewhat different way. Most disturbances are not completely uncorrelated but have a pattern. If we study the time behavior of the disturbance n_t, then, from the past and present states of the system, we can predict n_t for the future. We can consequently design a control system as described in Figure 2. Any control action will only affect the system in the future, at t + k. If we know G_p we can predict the state of the system at t + k as a result of the control action previous to t. We can then use our prediction for n_{t+k} made at time t, together with our previous control actions, to find a u_t that will exactly cancel n_{t+k}. As we really do not know G_p(B) exactly, we are matching here two predictions. Figure 2 and Figure 1a are in some sense identical, as the predictor is based on measuring past values of the error sequence, e_t. What we hope to gain by writing the controller in the form of Figure 2 is some insight as to how the feedback controller should look. We can look at the criterion that the estimated value of e_t should be small for a given stochastically known n_t as our eighth criterion, and we normally define it such that ⟨e_t²⟩ should be minimized given an estimate of n_t in a suitable probability definition. We should remember that this is one of eight criteria; at least the first five have equal importance, and one and three are overriding. While the eighth criterion can be nicely defined mathematically, the same cannot be said of the other seven. To formulate all of them together in a rigorous way is rather difficult. [Some theoreticians might object that one can perfectly well write optimization criteria that give a weight to all eight criteria. This is an illusion. We will have to give weighting coefficients to each item. The value of this weighting coefficient cannot be


estimated in advance and has to be determined by looking at the results, which is simulation by another name (see Discussion).] On the other hand, given a controller G_c, we can find out by simulation quite easily whether it fulfills all of our criteria. Let us assume that there is a set of different controllers [G_c]* in which all G_c fulfill our criteria reasonably well. Some of them will, however, be better according to one criterion, others according to a different criterion. Talking about any real optimum here is not very meaningful, as we do not know a priori how to define it. For some of these criteria we can get the limits that an optimal controller optimized for a single criterion could achieve, which is a very useful insight. Assume we can find a controller which is optimum for criterion 8 and that fulfills all of our other conditions. There are two questions we have to ask ourselves. 1. n_t is hard to measure. We first have to be convinced that the properties of the structure of n_t are sufficiently time independent. Assume it is and we can measure n_t. Is it worth the effort? To get a workable design we have to give n_t a simple mathematical formulation, which, given the inaccuracies of such methods, is not unique. We have to guess a form of n_t, which is equivalent to defining a mathematical model for the disturbance n_t, and estimate the parameters of this noise model from measurements. In most cases it will be hard to estimate more than two parameters, so we need simple models. We can now look at the whole reasonable space of these two parameters, which is associated with a space of [G_c°], and find what part of it is congruent with the permissible model space [G_c]* based on the first seven conditions. If only a small part of [G_c°] falls into the space [G_c]*, then measuring n_t might be a questionable exercise.
We could still modify G_c for any measured n_t by some mechanism (for example, a quadratic criterion for control effort) such that it falls into [G_c]*, but then we have to see if the final result is really sufficiently better than that for a reasonable controller obtained by a procedure which fulfills the first seven criteria. 2. Can we find a structure of n_t such that it gives us a good first guess for G_c? There are two schools on stochastic control. One believes in actually studying n_t and using the information in controller design. The other considers n_t solely a mathematical design tool leading to a better controller design, similar to frequency response methods. The main problem is that determining a model for n_t is not a unique process, as we can get reasonable fits with different models. As the controller is really defined once we write down n_t, we have to have a priori knowledge as to what forms of n_t give useful controllers. Some forms of n_t will give a G_c* that is outside [G_c]*. One of the goals of the present paper is to answer those questions for the specific type of system outlined in the Introduction. We can now outline the rest of the paper. In section 3 we will deal with the choice and identification of process models for our purpose. In section 4 we will discuss noise models suitable for the identification of process noise. In section 5 we will discuss the structure of the controllers obtained by different design methods, and in section 6 we will test those controllers in terms of our criteria. In the last section we will propose a design method based on these results and show how the results of optimal stochastic control theory lead to better designs that can be interpreted in terms of classical controller design.

3. Identification of the Process Model

There are basically two common approaches for identifying a process model suitable for our purpose. The first

Figure 3. Responses to step change in set point of various processes given by eq 3.1: curve a, first order + delay; curve b, second order + delay (overdamped); curve c, second order + delay (underdamped); curve d, second order + delay with inverse response.

is to set up a theoretical model of the process, building a set of differential equations describing the known physics of the process, and then trying to identify the parameters of the model by separate measurements. Considerable research is normally needed for this approach. To be useful for control purposes we normally, in the end, have to simplify this process model into a simplified linearized version; but the complete model is useful to study the behavior of the controller, though one should remember that complex models are also inaccurate. Another approach is direct identification. In our case we mainly require a model that predicts the effect of the manipulated variable on the measured variable. We limit ourselves here to stable systems, as unstable systems need more frequent control for stabilization. We are also looking for a linear transfer function giving the output at the time of sampling. The safest way to get such a model is to follow several step changes in the set points of the manipulated variable, using steps in both directions. In an existing process a lot can be learned from observing the process as it operates and studying cross correlations between the output and the manipulated variable in the closed loop. In the experience of the authors it is risky to rely solely on such measurements unless they contain a sufficient number of significantly large intentional step changes. This also allows us to check if the assumption that the system is linear in the range of control changes studied is a sensible simplification. There have been several claims that in the process industries second-order models coupled with delays are sufficient for most purposes. One can approximately describe the continuous form of the desired transfer function by

G_p(s) = K(1 + τ₃s)e^(-sθ)/[(1 + τ₁s)(1 + τ₂s)]   (3.1)

where τ₁ and τ₂ might be complex. This allows one to describe both overdamped and underdamped stable processes as well as processes with an inverse initial response; see Figure 3. In our case this claim is even more justified. We can approximate any linear response at fixed time intervals by the transfer function

G_p(B) = [ω(B)/δ(B)]B^(k+1)   (3.2)

where ω(B) and δ(B) are polynomials in B. B here is the backward shift operator defined by

B^k y_t = y_{t-k}   (3.3)
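The discretization behind eq 3.2 can be checked numerically. A minimal sketch, assuming zero-order-hold sampling of a unit-gain first order lag plus delay with θ an integer multiple of T; the specific relations δ = e^(-T/τ) and ω₀ = 1 - δ are the standard sampled-data result for this case, stated here as an assumption rather than copied from Table I:

```python
import math

# Assumed relations (standard zero-order-hold result for a unit-gain first
# order lag with delay, theta an integer multiple of T):
#   d = exp(-T/tau),  w0 = 1 - d,  k = theta/T
# giving the discrete recursion  y_t = d*y_{t-1} + w0*u_{t-k-1}.
tau, T, theta = 5.0, 1.0, 2.0
k = int(theta / T)
d = math.exp(-T / tau)
w0 = 1.0 - d

n = 30
u = [1.0] * n                 # unit step in the manipulated variable at t = 0
y = [0.0] * n
for t in range(1, n):
    past_u = u[t - k - 1] if t - k - 1 >= 0 else 0.0
    y[t] = d * y[t - 1] + w0 * past_u

# The recursion reproduces the continuous step response
# 1 - exp(-(t*T - theta)/tau) exactly at the sampling instants.
for t in (3, 10, 20):
    print(t, round(y[t], 6), round(1.0 - math.exp(-(t * T - theta) / tau), 6))
```

The two printed columns agree to machine precision, which is the sense in which the discrete model is "equivalent" to the continuous one at the sampling instants.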

In Table I we show the discrete equivalents of the four types of transfer functions represented by (3.1), see Figure 3, and the relations between the discrete polynomials and the parameters of the underlying continuous process. One can show that for any given stable transfer function of interest to us, the higher terms in the polynomials ω(B) and δ(B) decrease exponentially with increasing sample interval T. An example of this dependence is given in Table II. In practice, therefore, a second-order polynomial for ω(B) and δ(B) is probably the highest order that can be justified, and in many cases a first order plus a delay is sufficient. We should note that the transfer function G_p is not what is normally considered the open loop transfer function of the process. In most cases of control based on sampled data the process will have several continuous feedback loops, and G_p(s) is the transfer function for adjusting the set point of one of these control loops. It is therefore often possible to modify G_p(s), as it depends on the adjustment of the parameters of the continuous feedback control loop. If this loop is adjusted in the normal way G_p(s) will have an overshoot and, therefore, be underdamped. If T is larger than the settling time of the process then G_p(B) is mainly a delay. If the settling time is long it might make sense to adjust G_p(s) such that the overshoot is minimal. Here, we might look at G_p(s) and the desired control strategy in an integrated way. For the present we can summarize that a process model of the form

G_p(B) = B^(k+1)(ω₀ - ω₁B)/(1 - δB)   (3.4)

is sufficient for most practical cases of interest in sampled data control based on laboratory measurements. Furthermore, G_p(B) depends not only on the interval T but also on the tuning of the continuous process controllers, and therefore it can be adjusted.

4. Noise Models

In early control applications, disturbances (or noise) were modeled by means of their power spectra or autocorrelation functions (Wiener, 1942; Newton et al., 1957). Lately, due to the extensive development of state space representations of linear systems and linear filtering (Kalman, 1960), state variable models are usually used to characterize the process as well as the disturbances. The noise in this representation may be thought of as the output of a dynamical system driven by a stochastic process which usually is assumed to be a white Gaussian noise with known mean and variance (or, in the multivariable case, a vector of white noise variables, with known mean and covariance matrix). A similar representation, though in terms of transfer functions (therefore suitable for representing linear processes), was developed from a statistical point of view (Box and Jenkins, 1968, 1970; Astrom, 1970). They used statistical time series models called Autoregressive-Integrated Moving Average (ARIMA) models to describe the stochastic disturbances to the system and extensively developed procedures for identifying their structure and estimating their parameters. The general ARIMA model of order (p,d,q) is written as

n_t = [θ(B)/(φ(B)(1 - B)^d)]a_t   (4.1)

where a_t is a white noise sequence with zero mean and given variance. The roots of θ(B) and φ(B) are assumed to be outside the unit circle. Whenever d ≠ 0 the process n_t is nonstationary due to the poles on the unit circle. In this description the noise is seen to be the result of a stationary white noise passing through a filter G_n(B). Specifying G_n(B) as well as σ_a² (the variance of a_t) determines the disturbance completely. In the formulation of the problem used in Figure 2, n_t is the disturbance as measured at the output of the system. It is in reality the sum of many different disturbances affecting the system. We lump those together and assume they can be represented by eq 4.1. Therefore the output variable y_t can be written

y_t = G_p(B)u_t + n_t = G_p(B)u_t + G_n(B)a_t   (4.2)

where u_t is the manipulated input. Knowledge of G_n(B) allows one to predict the future values of n_t based on its history. Thus

n̂_{t+k+1/t} = ψ₂(B)a_t   (4.3)

where ψ₂(B) is found from the following expansion of G_n(B)

G_n(B) = ψ₁(B) + B^(k+1)ψ₂(B)   (4.3a)

and therefore contains present and past values of the a_t sequence. To describe a disturbance in a specific system we need to guess a simplified form of (4.1). Several simplifications are common. The simplest is given by an exponential filter

G_n(B) = 1/(1 - φB)   (4.4)

which results in a negative exponential correlation function for n_t. The other one-parameter model that can be formed from eq 4.1 is

G_n(B) = (1 - λB)/(1 - B)   (4.5)
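The practical difference between the two one-parameter models is easy to see in simulation. A small sketch (parameter values are illustrative): the stationary model of eq 4.4 keeps returning to zero, while the eq 4.5 disturbance wanders with no fixed level.

```python
import random

random.seed(0)

# Illustrative comparison of the two one-parameter noise models:
#   eq 4.4 (stationary):     n_t = phi*n_{t-1} + a_t
#   eq 4.5 (nonstationary):  n_t = n_{t-1} + a_t - lam*a_{t-1}
phi, lam, N = 0.6, 0.6, 2000
a = [random.gauss(0.0, 1.0) for _ in range(N)]

ar1 = [0.0] * N
ima = [0.0] * N
for t in range(1, N):
    ar1[t] = phi * ar1[t - 1] + a[t]
    ima[t] = ima[t - 1] + a[t] - lam * a[t - 1]

# The stationary model keeps returning to zero; the eq 4.5 disturbance has
# no fixed level, which is why it forces integral action into a controller
# designed against it (criterion 1).
print("mean square, eq 4.4 model:", round(sum(v * v for v in ar1) / N, 2))
print("mean square, eq 4.5 model:", round(sum(v * v for v in ima) / N, 2))
```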

Finally, we should point out that any G_n(B) may also be represented in state space form and vice versa (see MacGregor, 1973). The results of our work should therefore apply to the latter case.

5. Controller Design

In this section we will list and compare the results of different design methods and explain their interrelation, and we will start with classical design. The simplest sampled data controller used in industry by operators can be written as

∇u_t = M_c e_t   (5.1)

The operator makes an adjustment proportional to the deviation. In our notation we will assume that u and e are suitably normalized, such that exact compensation will require an M_c of unity. In good industrial practice this simple algorithm is used with a simple nonlinear filter of the type

∇u_t = 0;   -2σ < e_t < 2σ   (5.2)

where σ is the standard deviation of the measurement error. If we forget for one moment about the nonlinear filter and look at (5.1) as a transfer function using the backward shift operator B (Bu_t = u_{t-1}), we can rewrite (5.1) as

u_t = M_c(1 + B + B² + ...)e_t = M_c(e_t + Σ_{i=1}^{t} e_{t-i})   (5.3)

which is a proportional integral controller or an integral controller dependent on convention. We prefer here the notation used by Aiken (1974).
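The operator algorithm of eqs 5.1 and 5.2 can be sketched in a few lines (the values of M_c and σ below are illustrative):

```python
# Illustrative values: Mc = 0.5, measurement-error sigma = 0.1.
def operator_step(u, e, Mc=0.5, sigma=0.1):
    """One sampled-data correction: del-u = Mc*e outside the +/-2*sigma
    deadband of eq 5.2, no action inside it."""
    if abs(e) < 2.0 * sigma:
        return u              # eq 5.2: deviation within the noise band, do nothing
    return u + Mc * e         # eq 5.1: proportional adjustment of the setting

u = 0.0
u = operator_step(u, e=1.0)   # real deviation: the set point is moved
u = operator_step(u, e=0.15)  # within the noise band: no action
print(u)                      # -> 0.5
```

A sustained deviation is integrated away over successive samples, while a small noisy reading produces no control action at all, which is the point of the deadband.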


An integral controller has the form

u_t = K_I T Σ_{i=1}^{t} e_{t-i}

and therefore the integral constant K_I is equal to M_c/T. This immediately illustrates one of the common problems in operator control applied to processes with delays. Intuitively we might choose M_c = 1 or, if we are aware of stability problems, M_c = 0.5. If T is large in terms of the process time constants this is fine. If T is small compared to the delays inherent in the process, then M_c/T could be too large and can easily introduce instabilities and limit cycles, especially as small values of M_c are hard to use. An operator cannot make small adjustments. In a standard proportional integral controller the ratio between proportional and integral control is adjustable

u_t = K_c(e_t + (T/τ_I)Σ_{i=1}^{t} e_{t-i})   (5.4)

which can be written as

∇u_t = K_c[(1 + T/τ_I)e_t - e_{t-1}]   (5.5)

Figure 4. Polar plot for the compensator (5.10) with k = 2 and a = 0.5.

There is one type of very effective lead lag compensator which deserves special mention. Consider a transfer function of the form

G_p(B) = G_p0(B)B^k

which is the discrete equivalent of

G_p(s) = G_p0(s)e^(-sθ)

or in terms of the controller transfer function

G_c(B) = K_c[(1 + T/τ_I) - B]/(1 - B)   (5.6)

Tuning the controller involves choosing τ_I and K_c such that it gives a satisfactory response to set point changes or to step changes in the process input and is still stable (Lopez, 1969; Rovira, 1969; Moore, 1968). In some cases a more complex lead lag compensator C(B) is added to G_c(B)

C(B) = P(B)/Q(B)   (5.7)

where P(B) and Q(B) are polynomials in B. A simple derivative controller, which is a lead compensator, has the form P(B) = 1 - aB (a < 1), but usually more complex forms are used (see, for example, Rosenbrock (1974) and Ragazzini (1958)). Our initial design criteria are investigated by looking at the properties of the overall closed loop transfer function

G*(B) = 1/(1 + G_c(B)G_p(B))   (5.8)

either directly or through the properties of the open loop transfer function G_c(B)G_p(B) for different combinations of K_c, τ_I, and C(B). To ensure proper set point control, G*(B) → 0 as B → 1 (or G*(B)G_c(B)G_p(B) → 1, or G_c(B)G_p(B) → ∞, as B → 1). Stability is ensured by looking at the poles of G*(B), which must be well outside the unit circle and must stay there for reasonable perturbations of all the parameters in both G_c and G_p. Furthermore, in sampled data we have to prevent amplification of measurement noise. We will discuss this later in detail. We note immediately that unlike the continuous case we have here an additional design parameter, namely the sampling interval T, by which we can modify the process transfer function G_p(B), and we have, therefore, to look at the overall performance not only in terms of G*(B) but also in terms of T.
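The pole-location test can be carried out numerically. A sketch (an illustrative first order plus delay process under pure integral control, not a controller from the paper): the characteristic polynomial of the closed loop in the B plane is formed and its roots checked against the unit circle.

```python
import numpy as np

# Illustrative loop (not from the paper): G_p(B) = w0*B**(k+1)/(1 - d*B)
# with integral control G_c(B) = Kc/(1 - B).  The characteristic
# polynomial of G*(B) = 1/(1 + Gc*Gp) in the B plane is
#     (1 - d*B)*(1 - B) + Kc*w0*B**(k+1)
# and stability (criterion 3) requires every root outside the unit circle.
w0, d, k = 0.2, 0.8, 2

def closed_loop_roots(Kc):
    # coefficients of Kc*w0*B^(k+1) + d*B^2 - (1 + d)*B + 1, highest power first
    coeffs = [0.0] * (k + 2)
    coeffs[0] = Kc * w0          # B^(k+1)   (valid for k >= 2)
    coeffs[-3] = d               # B^2
    coeffs[-2] = -(1.0 + d)      # B^1
    coeffs[-1] = 1.0             # B^0
    return np.roots(coeffs)

for Kc in (0.1, 0.5, 2.0):
    margin = min(abs(r) for r in closed_loop_roots(Kc))
    verdict = "stable" if margin > 1.0 else "UNSTABLE"
    print(f"Kc = {Kc:3.1f}   closest pole |B| = {margin:.3f}   {verdict}")
```

The margin shrinks as K_c grows, which is the quantitative form of the statement that the delay limits the permissible gain.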


(Note that G_p0(B) is not necessarily the discrete equivalent of G_p0(s). This is only the case if θ/T is an integer.) A time lag reduces our ability to control the process, as it limits the permissible gain. For our case we can construct a compensator that reduces this limitation by the following argument. Assume that at the nth sample we make a correction ∇u. If the sampling interval is smaller than the inherent dead time (process delay + measurement delay), then our correction has no effect whatsoever on the next sample. If nothing happened in the process, we would still measure the same deviation e. If we make another correction we over-correct. We have two sensible choices. Either we reduce the gain K_c and apportion part of the adjustment to each of the samples occurring during the time lag, or we reduce the control action by taking into account all the control actions already made during the time delay, the effects of which are not yet observable. The first option is automatically obtained if we choose K_c by a proper stability analysis. Long delays will reduce K_c. The second option can be written as

∇u_t = K_c{e_t - (1 - T/τ_I)e_{t-1} - aΣ_{i=1}^{k} ∇u_{t-i}}   (5.9)

If we want to be consistent with our argument we actually should choose a = 1, as Σ_{i=1}^{k} ∇u_{t-i} is the total change in u during the period of the delay. For stability reasons, however, a should be smaller. We have now reduced the danger of over-correction during the delay and therefore we can choose K_c larger than for the case without the compensator. This gives significant advantages, as will be discussed in the next section. The compensator in (5.9) was actually first derived from optimal control algorithms. It can be written as

C(B) = 1/(1 + a(B + B² + B³ + ... + B^k))   (5.10)

and we note that if k = 1, this is equivalent to a simple lead compensator 1 - aB. The properties of the full compensator can be deduced by looking at its Nyquist diagram (Figure 4).
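The over-correction argument can be demonstrated on the simplest possible case: a pure-delay, unit-gain process under integral action with M_c = 1 and the proportional term omitted, comparing no compensation with the consistent choice a = 1 (an idealized sketch, not one of the paper's worked examples):

```python
# Idealized sketch: pure-delay, unit-gain process y_t = u_{t-k-1}, integral
# action with Mc = 1, proportional term omitted.  compensate=True applies
# eq 5.9 with a = 1: subtract the control moves whose effect on the output
# is not yet visible.
k = 3

def simulate(compensate, n=15):
    u, du, y = [0.0] * n, [0.0] * n, [0.0] * n
    for t in range(1, n):
        y[t] = u[t - k - 1] if t - k - 1 >= 0 else 0.0
        e = 1.0 - y[t]                       # deviation from the set point
        pending = sum(du[t - i] for i in range(1, k + 1) if t - i >= 0)
        du[t] = e - (pending if compensate else 0.0)
        u[t] = u[t - 1] + du[t]
    return y

print("uncompensated:", [round(v, 1) for v in simulate(False)])
print("compensated:  ", [round(v, 1) for v in simulate(True)])
```

Without compensation the repeated corrections during the dead time drive the loop into a growing oscillation; with a = 1 the controller makes one move, waits out the delay, and settles without overshoot.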


that. For a more rigorous treatment, see Wilson (1969). Let us look at the unconstrained optimal controllers in terms of their X_c(B) and G_c(B) and first consider a simple first-order transfer function

G_p(s) = e^(-θs)/(1 + τs)   (5.14)

Choosing k = θ/T as an integer such that ω₁ = 0 yields

G_p(B) = ω₀B^(k+1)/(1 - δB)   (5.14a)

For the relations between δ, ω₀, and the parameters of the continuous process, see Table I. If n_t is given by eq 4.4 (stationary first-order disturbance), eq 5.11 yields

G_c(B) = [φ^(k+1)/ω₀](1 - δB)/(1 - φ^(k+1)B^(k+1))   (5.15)

which can be written in terms of the control action u_t

u_t = φ^(k+1)u_{t-k-1} + [φ^(k+1)/ω₀](e_t - δe_{t-1})   (5.15a)

This is essentially a proportional + derivative controller with a dead time compensator. For this case, eq 5.12 becomes

X_c(B) = [φ^(k+1)/ω₀](1 - δB)/(1 - φ^(k+1)B)   (5.16)

which shows that the optimal forward loop controller (for φ > 0) is simply a PD controller with a compensator. If φ is less than zero the form of (5.15) depends on k. For even values of k, X_c is a positive feedback controller. The overall response of the total closed loop system is

G*(B) = 1 - φ^(k+1)B^(k+1)   (5.17)

For a step input this leads to an offset, as

lim_{B→1} G*(B) = 1 - φ^(k+1)

does not approach zero. If in eq 5.16 we substitute for φ the value δ from (5.14a), we get a controller identical with the one obtained from a deterministic approach to optimal control. Here we assume that the system in (5.14a) is initially at some state y₀ and we want to return it to y = 0 in the absence of any disturbances. The controller then minimizes Σy² and is identical with the minimal variance controller for a stationary noise generated by

G_n(B) = 1/(1 - δB)

Both cases lack an integral action and do not fulfill condition 1. This is due to the fact that we formulated our problem in a way that makes an integral controller unnecessary. If we really have a stationary noise with zero average then there is no need for integral control. We just want to filter the disturbance. Similarly, if all inputs to the system are zero then the process given by (5.14a) will return to the desired final state, y = 0, even if we use no control. This illustrates the iterative feature of optimal control algorithms. Real disturbances are much more complex than any of the simple mathematical models used to describe or identify them. Such identification is in no way unique. However, once we write down a mathematical formulation for the disturbances we have also formulated the structure of the controller. The best we can hope for from such a procedure is some valuable suggestion or insight which then has to be checked. There are many problems in which using eq 4.4 leads to good results. Here it
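The minimal variance controller of eq 5.15a can be exercised in simulation. A sketch with illustrative parameters; for this loop the theoretical minimum output variance is σ_a²(1 + φ² + ... + φ^(2k)), the variance of the unpredictable part of the disturbance over the dead time:

```python
import random

random.seed(1)

# Illustrative numbers: process (5.14a) with w0 = 1, d = 0.6, k = 1, and
# the stationary AR(1) disturbance of eq 4.4 with phi = 0.9, sigma_a = 1.
phi, d, w0, k, N = 0.9, 0.6, 1.0, 1, 20000
gain = phi ** (k + 1)
a = [random.gauss(0.0, 1.0) for _ in range(N)]

def mean_square_output(controlled):
    n_dist = x = e_prev = 0.0
    u = [0.0] * N
    total = 0.0
    for t in range(N):
        n_dist = phi * n_dist + a[t]                 # eq 4.4 disturbance
        u_delayed = u[t - k - 1] if t - k - 1 >= 0 else 0.0
        x = d * x + w0 * u_delayed                   # process state, eq 5.14a
        y = x + n_dist
        e = -y                                       # set point at zero
        if controlled:
            # eq 5.15a: u_t = phi^(k+1)*u_{t-k-1} + (phi^(k+1)/w0)*(e_t - d*e_{t-1})
            u[t] = gain * u_delayed + (gain / w0) * (e - d * e_prev)
        e_prev = e
        total += y * y
    return total / N

mv = mean_square_output(True)
ol = mean_square_output(False)
theory = sum(phi ** (2 * i) for i in range(k + 1))   # minimal variance bound
print(f"open loop: {ol:.2f}   controlled: {mv:.2f}   theory: {theory:.2f}")
```

The controlled mean square settles close to the theoretical bound, well below the open loop variance σ_a²/(1 - φ²), but, as the text notes, the controller has no integral action: a shift in the mean of the disturbance would leave a permanent offset.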


Table III. The Optimal Transfer Function X_c(B) of the Forward Controller for Different Noises and Processes. [For each combination of process and noise model — first order + delay with ARIMA(1,0,0) noise, and first order + delay with ARIMA(0,1,1) noise — the table lists the unconstrained controller (eqs III.1-III.3) and the constrained controller (eqs III.5-III.7).]

obviously does not. If we want a controller designed by this method to fulfill criterion 1, we have to formulate the model of the disturbance in a proper way. We can achieve this by using noise models for which

G*(B) → 0 as B → 1    (5.18)

This requirement will automatically be achieved for any noise model for which

X_c(B) → ∞ as B → 1    (5.19)

or (5.19a). This condition may be translated into an equivalent constraint in the state space representation, but this is outside our scope here. The type of noise models described by eq 4.1 fulfill the condition in (5.19) whenever d ≠ 0, i.e., when they exhibit nonstationarity. As stated in sections 2 and 4, most real disturbances may be represented by several models, and eq 5.19 is one criterion for choosing models leading to satisfactory designs. One of the simplest forms of G_N(B) that fulfills (5.19) is eq 4.5. In this case the controllers for first order + delay processes are given by eq III.3 and IV.1 in Tables III and IV, respectively. When θ/T is an integer (ω_1 = 0, see Table I), X_c(B) is simply a PI controller of the form of (5.5), and G_c(B) takes the form of (5.9) with 1/q = 1 − δ, K_c = (1 − λ)/ω_0, and a = 1 − λ. The overall closed loop transfer function G*(B) is
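These relations are easy to exercise numerically. The sketch below is our own illustration (the helper name pi_settings is ours, and a unit process gain K_p = 1 is assumed); it computes the discrete pole δ = e^(−T/τ), the leading impulse-response coefficient ω_0 = K_p(1 − δ), and the controller constants K_c = (1 − λ)/ω_0 and a = 1 − λ:

```python
import math

def pi_settings(theta, tau, T, lam, Kp=1.0):
    """Discrete PI-type settings for a first order + delay process with an
    ARIMA (0,1,1) design noise (parameters in the sense of eq 5.9).
    theta: dead time, tau: time constant, T: sampling interval,
    lam: noise parameter lambda (0 <= lam < 1)."""
    k = round(theta / T)          # number of delay samples (theta/T integer assumed)
    delta = math.exp(-T / tau)    # discrete pole of the process
    omega0 = Kp * (1.0 - delta)   # leading impulse-response coefficient
    Kc = (1.0 - lam) / omega0     # controller gain, K_c = (1 - lambda)/omega_0
    a = 1.0 - lam                 # dead time compensator weight, a = 1 - lambda
    return {"k": k, "delta": delta, "omega0": omega0, "Kc": Kc, "a": a}

s = pi_settings(theta=1.0, tau=1.0, T=0.5, lam=0.5)
```

With θ = τ = 1, T = 0.5 (so k = 2) and λ = 0.5, this reproduces the k = 2, θ/τ = 1 column of Table VII (K_c ≈ 1.27, a = 0.5, δ ≈ 0.607).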

G*(B) = 1 − (1 − λ)B^(k+1)/(1 − λB)    (5.20)

which obviously goes to 0 for B → 1. We note that G*(B) does not contain G_p(B). This is due to the fact that X_c(B) contains G_p0^(−1)(B). If G_p(B) is different from the assumed one, as is the case in reality, then the actual G*(B) will depend on it. This will be discussed later. Equation 5.20 has been suggested by Dahlin (1968) as a basis for designing sampled data controllers. He considered λ a tuning parameter ranging from 0 to 1, whereas in the original optimal formulation it is an experimentally determined property of the inputs and can vary from −1 to 1. We shall show later that with proper choice of λ the controller (III.3), which is the same as given in (5.9), has excellent performance. In Table V we give the resulting forms of the controllers for second order + delay processes (and ARIMA (0,1,1)) in terms of G_c(B) and the control action u_t. It is seen that the controllers in these cases may be thought of as a PID controller and a dead time compensator. Although the dead time compensators are somewhat more complex than the one in (5.9), the net effect is similar. The representation of these controllers in terms of X_c(B) is easily derived using eq III.1 and, for θ/T integers, is given by
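Equation 5.20 can be checked numerically. The sketch below is our own illustration (not from the paper): it evaluates G*(B) near B = 1 and steps the complementary set-point response 1 − G*(B), which is first order with a k-sample delay:

```python
def Gstar(B, lam=0.5, k=2):
    """Closed-loop transfer function of eq 5.20, evaluated at a scalar B."""
    return 1.0 - (1.0 - lam) * B**(k + 1) / (1.0 - lam * B)

# G*(B) -> 0 as B -> 1 (condition 5.18): evaluate near B = 1
val = Gstar(0.999999)

# the complementary set-point response 1 - G*(B) is first order with delay:
# y_t = lam*y_{t-1} + (1 - lam)*r_{t-k-1}
lam, k = 0.5, 2
y, r = [0.0], 1.0
for t in range(1, 20):
    y.append(lam * y[-1] + (1.0 - lam) * (r if t >= k + 1 else 0.0))
```

The response stays at zero during the dead time and then rises geometrically (rate λ) toward the set point, as the text describes.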

X_c(B) = [(1 − λ)/ω_0] (1 − s_1 B − s_2 B^2) / [(1 − B)(1 − (ω_1/ω_0)B)]    (5.21)

which is a PID controller (see Table V) plus a filter resulting from the inversion of ω(B). We note that the only difference between the optimal X_c(B)'s for first-order and second-order processes is that for second-order processes a derivative action is added to the PI controller (see Phillipson, 1975). We can now look at the effect of more complicated noise models on the structure of optimal controllers derived from

Table IV. Unconstrained Minimal Variance Control Algorithms for First Order + Delay Process and ARIMA (0,1,1) Noise

If θ/τ is small, then control is so easy and disturbances are so strongly filtered that control is normally no problem. Again, we can find a controller with a good performance over the entire spectrum of dis-

Table VII. Tuning Parameters of the Discrete Controller for First Order + Delay Process Model^a

        θ/τ:          0.2     0.5     1       2       4       ∞
k = 2   K_c           5.25    2.26    1.27    0.89    0.74    0.65
        a (=1−λ_a)    0.5     0.5     0.5     0.56    0.64    0.65
        δ             0.905   0.779   0.607   0.368   0.135   0
k = 5   K_c           7.65    3.15    1.65    1.0     0.64    0.35
        a (=1−λ_a)    0.3     0.3     0.3     0.33    0.35    0.35
        δ             0.961   0.905   0.819   0.670   0.450   0

^a Controller equation: ∇u_t = −a Σ_{i=1}^{k} ∇u_{t−i} + K_c(e_t − δe_{t−1}).

turbances. Here we can afford a larger ratio of the actual variance of the system to the optimum achievable, as our reduction of the disturbance is going to be very large. While we hesitate to give recommended settings, we give in Table VII settings for different values of θ/τ and k, based on stability limits alone.

Second-Order Transfer Function. The results in Figures 13 and 15 apply independent of G_N(B) as long as the transfer function is exact. However, the more complex the system becomes, the more severe the stability constraints become. We will discuss just one example: an underdamped second-order system with a delay

G_p(s) = e^(−0.5s)/(0.25s^2 + 0.5s + 1)    (6.8)

and for T = 0.25

G_p(B) = (0.104 + 0.088B)B^3/(1 − 1.414B + 0.607B^2)    (6.9)

Based on our experience with the previous case we choose as our design disturbance eq 4.5. The controller has the following form

G_c(B) = 9.62(1 − λ)(1 − 1.414B + 0.607B^2)/[(1 − B)(1 + 0.846B)(1 + (1 − λ)B + (1 − λ)B^2)]    (6.10)

which is a PID controller with a dead time compensator (see Table V).
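The coefficients of the difference-equation form of this controller follow from expanding the denominator product in (6.10) at a chosen λ. The sketch below is our own check (polymul is our helper; the factor 1 + 0.846B comes from inverting the numerator of (6.9)); at λ = 0.85 it recovers the gain and the coefficients 0.996, 0.277, and 0.127 used in the text:

```python
def polymul(p, q):
    """Multiply polynomials in B given as coefficient lists [c0, c1, ...]."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, ci in enumerate(p):
        for j, cj in enumerate(q):
            out[i + j] += ci * cj
    return out

lam = 0.85
gain = 9.62 * (1 - lam)                        # multiplies (1 - 1.414B + 0.607B^2)
# (1 + 0.846B)(1 + (1-lam)B + (1-lam)B^2), the denominator apart from (1 - B)
den = polymul([1.0, 0.846], [1.0, 1 - lam, 1 - lam])
# den[1:] are the coefficients of del-u_{t-1}, del-u_{t-2}, del-u_{t-3}
```

The factored-out (1 − B) is what turns the recursion into one for the increment ∇u_t rather than u_t itself.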

Figure 22. Stability limits for the optimal controllers (eq V.1) as functions of α and β (for process (6.8)).

Figure 24. a, Ratio of the output variance for the various controllers to the minimum possible output variance (for notation, see Figure 23a) as a function of λ_a. b, Ratio of the output variance to the minimum output variance possible for the two optimal controllers (notation as in Figure 23b).

Figure 23. a, Response of the closed loop system to a unit change in set point (process given in eq 6.8) controlled by the optimal controller (eq 6.10a) and a conventional PID controller: a, controller (6.10a), exact parameters; b, controller (6.10a), disturbed parameters (α = −0.3); c, PID, exact parameters (α = 0); d, PID, disturbed parameters (α = −0.3). b, Response of the closed loop system to a unit change in set point controlled by the optimal controller (6.10a) and an optimal controller based on the underestimated process: a, controller (6.10a); b, optimal controller based on underestimation of the process by 30%.

The stability, in terms of α and β, is given by Figure 22. Throughout this stability analysis all time constants in eq 6.8 are multiplied by (1 + α) or (1 + β). Again, one should note that in (6.9) this not only changes the constants, but also the form of (6.9). Only λ remains free in the controller. Let us choose λ = 0.85 as our base case. The controller then becomes

∇u_t = 1.44(e_t − 1.414e_{t−1} + 0.607e_{t−2}) − 0.996∇u_{t−1} − 0.277∇u_{t−2} − 0.127∇u_{t−3}    (6.10a)
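Controller (6.10a) and plant (6.9) can be simulated together. The sketch below is our own construction (the helper at treats negative indices as zero initial conditions); for exact parameters it gives the closed-loop step response, with the first nonzero output sample near 0.15 = 1 − λ as the dead time expires:

```python
def at(seq, i):
    """History access with zero initial conditions for t < 0."""
    return seq[i] if i >= 0 else 0.0

n, ysp = 80, 1.0
y, e, u = [0.0] * n, [0.0] * n, [0.0] * n
for t in range(1, n):
    # plant (6.9): y_t = 1.414 y_{t-1} - 0.607 y_{t-2} + 0.104 u_{t-3} + 0.088 u_{t-4}
    y[t] = (1.414 * at(y, t-1) - 0.607 * at(y, t-2)
            + 0.104 * at(u, t-3) + 0.088 * at(u, t-4))
    e[t] = ysp - y[t]
    # controller (6.10a)
    du = (1.44 * (e[t] - 1.414 * at(e, t-1) + 0.607 * at(e, t-2))
          - 0.996 * (at(u, t-1) - at(u, t-2))
          - 0.277 * (at(u, t-2) - at(u, t-3))
          - 0.127 * (at(u, t-3) - at(u, t-4)))
    u[t] = u[t-1] + du
```

The response then rises geometrically at rate λ = 0.85 toward the set point, which is the smooth exact-parameter curve discussed around Figure 23a.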

In Figures 23 and 24 we give the responses of this controller to a change in set point and the performance for different inputs in comparison with a conventional PID controller, both for exact parameters and for α = −0.3. [For tuning the discrete PID controller by continuous tuning methods we used a dead time equivalent of T/2 for the sample and hold (Moore, 1969).] It is a much better controller and not very sensitive. For values of λ less than 0.85 we again face the same problem as before, namely, that we have to constrain the controller gain to assure stability and that there is no guarantee that a controller designed, for example, for λ_a = 0 with a constraint on (∇u^2) to assure stability is going to be better than one designed by constraining the control action by increasing λ_a to 0.85. If we look here at Figure 22 we note that the stability limits are highly nonsymmetrical with respect to α and β. If we would a priori choose a system G_p(s) such that all time constants are multiplied by 0.7, or

G_p(s) = e^(−0.35s)/((0.35s)^2 + 0.35s + 1)

and based our design on this G_p instead of the estimated value of G_p, we could have increased the gain and chosen λ = 0.7. We would have a faster and smoother step response with some overshoot, and its overall performance would be better (see Figures 23b and 24). This again shows that while eq 5.11 suggests a very good controller, it is not optimum in any real sense. In many situations we may


Figure 25. Response of the closed loop system controlled by (a) controller (6.6), (b) controller (5.1), to a unit change in set point.

Figure 27. Ratio of the output variance, scaled to T_base, for the controllers given by (IV.1) (with the proper λ) to the minimum possible output variance at T_base, for various k's, as a function of λ_base (θ/τ = 0.5): a, k = 1 (λ_a = 0.2, T/T_base = 10); b, k = 2 (λ_a = 0.5, T/T_base = 5); c, k = 5 (λ_a = 0.72, T/T_base = 2); d, k = 10 (λ_a = 0.85, T/T_base = 1).

consider the controller described in Figure 23b to be better; it really depends on the specific case and requirements.

Conventional Operator Control. We mentioned in section 5 that a standard practice in industry, in processes based on infrequent sampling, is to adjust ∇u_t proportionally to e_t (eq 5.1). Let us briefly evaluate the performance of this control strategy for our case. The maximum stable value of M_c for our example (eq 6.1) is 0.56. That means that the maximum permissible value guaranteeing stability is 0.3. In Figure 25 we plot the performance of this controller for a change in set point, and in Figure 26 we compare the controller with our base controller in the λ domain. It has a much poorer performance in both cases. Often we do not use eq 5.1 directly but use a quality control chart in which adjustments are made only if the error exceeds a predetermined value. This is equivalent to using a nonlinear filter and will avoid amplification of measurement noise. Otherwise it does not improve the performance. The conventional operator control has another serious drawback. Most operators would choose an M_c too close to unity. Even an M_c of 0.5 would lead to amplification of any disturbances with amplitudes larger than the limits of the control chart.

Figure 26. Ratio of the output variances for the controllers given by eq 6.6 and 5.1 to the minimum possible output variance as functions of λ_a (for notation, see Figure 25).

Choice of Sampling Interval. Until now we were mainly concerned with the effect of the disturbance itself, and we looked at the design for a given G_p(s) and G_p(B). G_p(B) is, however, a function of k, or the sampling interval chosen. We need some criteria to choose k. If k is sufficiently large, the controller is essentially a continuous controller, but sampling becomes expensive. The operator also introduces errors by too frequent adjustments. To study the effect of varying k we go back to our first example (6.1) and look at a controller designed according to eq IV.1. λ_a is chosen so as to assure good stability (the system has good stability limits if it is stable for −0.5 < α < 1). If we go to small intervals it is more convenient to write the controller in terms of u_t. It becomes

u_t = [(1 − λ_a)/ω_0][e_t + (1 − δ) Σ_{i=1}^{t} e_{t−i}] − (1 − λ_a) Σ_{i=1}^{k} u_{t−i}    (6.11)

To ensure proper stability, the gain (1 − λ_a)/ω_0, which is designated K_c, cannot exceed a maximum allowable value K_c,max which, in our case, is approximately 3.25. This value is the limit of

K_c,∞ = lim_{T→0} (1 − λ_a,max)/(1 − e^(−T/τ)) = 3.25    (6.12)

since, as T/τ = θ/kτ → 0, λ_a,max for a stable design approaches 1. Thus, when k increases, the controller becomes equivalent to the following continuous controller

u(t) = K_c,∞[e(t) + (1/τ)∫_0^t e(t′) dt′ − (1/θ)∫_{t−θ}^{t} u(t′) dt′]    (6.13)

The difference between K_c,∞ and ((1 − λ_a)/ω_0)_max is one indication of how closely the controller approaches a continuous controller. A second way of looking at the problem is to use the ratio given by eq 6.7 and look at the effect of k in reducing the variance. We note that the potential reduction depends on k. For a proper comparison, we have to take into account that λ_a also depends on k. The variance of the optimal case (λ_a = λ) is a function of k in the sense that if we deal with a real physical noise, λ and the variance of the driving white noise have to be scaled with T using eq 4.9. We looked at systems described by (5.14) and found that over a wide range of θ/τ, a value of k = 10 approaches the continuous controller in the reduction of the variance of an ARIMA (0,1,1) disturbance. A value of k = 4 or 5 is sufficient within a few percent, which suggests that T should be approximately θ/4. A typical comparison using k = 10 as a base case is given in Figure 27. Interestingly enough, the performance is insensitive to θ/τ. Choosing T as a function of θ makes sense in that there is very little we can do to control disturbance frequencies higher than 2π/θ. On the other hand,
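The positional form (6.11) and the incremental form in the footnote of Table VII are algebraically equivalent: applying (1 − B) to (6.11) yields ∇u_t = −a Σ ∇u_{t−i} + K_c(e_t − δe_{t−1}). The sketch below is our own check; it runs both recursions on the same arbitrary error sequence with the Table VII settings for k = 2, θ/τ = 1:

```python
def at(seq, i):
    """History access with zero initial conditions for t < 0."""
    return seq[i] if i >= 0 else 0.0

k, Kc, a, delta = 2, 1.27, 0.5, 0.607            # Table VII, k = 2, theta/tau = 1
e = [0.0, 1.0, 0.7, 0.2, -0.1, 0.05, 0.0, 0.3]   # arbitrary error sequence

# incremental form (Table VII footnote)
u_inc = [0.0] * len(e)
for t in range(1, len(e)):
    du = (-a * sum(at(u_inc, t-i) - at(u_inc, t-i-1) for i in range(1, k+1))
          + Kc * (e[t] - delta * at(e, t-1)))
    u_inc[t] = u_inc[t-1] + du

# positional form (eq 6.11)
u_pos = [0.0] * len(e)
for t in range(1, len(e)):
    integral = sum(at(e, t-i) for i in range(1, t+1))
    u_pos[t] = (Kc * (e[t] + (1 - delta) * integral)
                - a * sum(at(u_pos, t-i) for i in range(1, k+1)))
```

The two control sequences coincide sample by sample, which is why the paper can switch freely between the ∇u_t form for implementation and the u_t form when taking the small-interval limit (6.13).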

Figure 28. The ratio between the output variance of system (a) and the output variance of system (b) as a function of λ_a.

frequencies higher than 2π/τ are damped by the system itself, and there is no need to sample more frequently than T = τ/4, as higher frequencies are damped anyway. Most authors (Kalman, 1959; Koppel, 1968; and Box, 1971) suggest a criterion based on τ. Our results suggest that if θ is larger than τ it is sufficient to choose T = θ/4, or T = the higher value of τ/k or θ/k (4 < k