
Ind. Eng. Chem. Res. 2010, 49, 3505–3515


Model Predictive Control Tuning Methods: A Review

Jorge L. Garriga and Masoud Soroush*
Department of Chemical and Biological Engineering, Drexel University, Philadelphia, Pennsylvania 19104

* To whom correspondence should be addressed. Phone: (215) 895-1710. Fax: (215) 895-5837. E-mail: [email protected].

This paper provides a review of the available tuning guidelines for model predictive control, from theoretical and practical perspectives. It covers both popular dynamic matrix control and generalized predictive control implementations, along with the more general state-space representation of model predictive control and other more specialized types, such as max-plus-linear model predictive control. Additionally, a section on state estimation and Kalman filtering is included, along with auto (self) tuning. The tuning methods covered range from equations derived from simulation/approximation of the process dynamics to bounds on the region of acceptable tuning parameter values.

1. Introduction

Model predictive control (MPC) has been implemented widely in the process industries since its introduction in the early 1960s.1-8 Zadeh and Whalen9 first proposed the connection between the minimum-time optimal control problem and linear programming, and Propoi10 introduced the idea of a moving-horizon approach to the optimization problem, which has since become the foundation of all MPC implementations. Early implementations of MPC, such as the linear quadratic Gaussian (LQG) controller, did not have much impact in the process industry due to their inability to handle constraints, process nonlinearities, and uncertainty, and their unique performance criteria. Later implementations of MPC, such as dynamic matrix control (DMC)11 and model algorithmic control (MAC),12 addressed some of the shortcomings of LQG by introducing constraints, both input and output, into the formulation in a systematic manner. Further additions included the set point trajectory, to increase robustness, and move suppression coefficients, to limit the size of the manipulated-variable moves.7

While these additions provided MPC with enormous flexibility, they complicated the task of controller tuning. The controller designer had to set the prediction horizons (P0 and P), control horizon (M), model horizon (N), weights on the outputs (Q), weights on the rate of change of inputs (Λ), weights on the magnitude of the inputs (Ω), the reference trajectory parameter (β), and constraint parameters. Efficient setting of this large set of tunable parameters often requires a systematic tuning guideline for MPC. The tuning methods reviewed herein can be classified into two types: they are either explicit formulas for the various parameters, based on approximation/simulation of the process dynamics, or bounds on where the tuning parameters lie, based on parameters of the process dynamics. For instance, the tuning guidelines proposed by Shridhar and Cooper13 assume that the actual process is well-represented by a first-order-plus-dead-time approximation. Similarly, the bounds proposed by McIntosh et al.14,15 are based on the simulated process delay and rise time.

In recent years, advances in subspace identification16,17 and automated testing have allowed practitioners to dramatically reduce the cost of tuning MPC applications.18 The autocovariance least-squares technology developed by Rawlings' group19,20 is expected to make Kalman filtering much more accessible by automatically identifying the main tuning parameters.

Yamuna Rani and Unbehauen21 reviewed various methods to tune DMC and GPC controllers.

Their review spans from 1985 to 1994 and does not include other formulations of MPC controllers, such as state-space representations and max-plus-linear (MPL) systems. Similarly, it does not include real-world applications of the tuning strategies presented or methods developed in industrial applications. Though there are only a limited number of review papers devoted explicitly to tuning, various other review papers sparsely cover theorems regarding minimum values of the tuning parameters for robust stability.1-3,7

Although a typical MPC law has many tunable parameters, according to Badgwell,22 who has tuned MPC for a great number of industrial applications, controller tuning is not as difficult as it looks, because of the MPC approach of formulating the control problem as an open-loop optimization in the time domain. In this formulation, the translation between control specifications and tuning factors is more transparent, which places the control problem within the grasp of a process operator. The details of the MPC formulation matter when it comes to tuning.22 For example, if a state-space model is used, then tuning for integrators will be transparent. If not, one should expect to find some ad hoc tuning parameters to deal with integrators. Furthermore, an infinite-horizon objective function eliminates tuning factors and improves performance. Two fundamental notions that one should consider in tuning an MPC application are:22

• The first act of "tuning" is to develop an appropriate process model. If the model is accurate enough, then the rest of the tuning is straightforward. And if the controller exhibits poor performance, then one should consider the model poor (inaccurate) unless proven otherwise. This review paper is not concerned with proper model development or selection for MPC applications.

• The trade-off between performance and robustness is the second fundamental notion. Fast closed-loop performance requires a high-precision model. Most MPC users in the process industries are willing to detune the controller somewhat so that the controller will remain stable as the process operating point varies.

This study gives a survey of the current scope of tuning strategies, both from a theoretical perspective and from real-world applications of MPC. We provide a broad treatment of the available methods to tune not only the popular DMC and GPC controllers, but also include sections on new developments such as MPL systems and methods to calculate the covariance matrices and Kalman filter for state estimation.




Finally, a section on autotuning gives, according to the authors' current knowledge, the most comprehensive treatment of MPC tuning in the literature.

This paper is organized as follows. An overview of the mathematical preliminaries dealing with the formulation of the control law is given first. Following this are successive sections on offline tuning, each dealing with an individual tunable parameter. A section on autotuning procedures follows the offline tuning sections. A case study section is included to compare the effectiveness of the various tuning guidelines. Finally, concluding remarks are presented.

2. Mathematical Preliminaries

Consider a linear, time-invariant, discrete-time plant model of the form:

x(k + 1) = A x(k) + B u(k) + G w(k)
y(k) = C x(k) + v(k)

where k is the sampling instant, x is the vector of state variables, u is the vector of manipulated inputs, and y is the vector of model outputs. A ∈ R^(n×n), B ∈ R^(n×m), G ∈ R^(n×g), and C ∈ R^(p×n) are constant matrices. {w(k)}_{k=0}^{Nd} and {v(k)}_{k=0}^{Nd} are the vectors of plant disturbances and output measurement disturbances, respectively, modeled as zero-mean Gaussian noise sequences with covariances Qw and Rv, respectively.

Estimates of the plant state variables can be obtained from the Kalman filter

x̂(k + 1|k) = A x̂(k|k − 1) + B u(k) + A L(k)[y(k) − C x̂(k|k − 1)]

where the Kalman gain is given by

L(k) = P(k|k − 1) Cᵀ [C P(k|k − 1) Cᵀ + Rv]⁻¹

P(k|k − 1) represents the covariance matrix of the estimation error, ε(k) ≡ x(k) − x̂(k|k − 1), and is the solution to the Riccati equation

P(k + 1|k) = A P(k|k − 1) Aᵀ + G Qw Gᵀ − A P(k|k − 1) Cᵀ [C P(k|k − 1) Cᵀ + Rv]⁻¹ C P(k|k − 1) Aᵀ

Consider a general moving-horizon minimization problem of the form:

min over u(k), ..., u(k + M − 1) of

  Σ_{j=1}^{p} Σ_{l=P0}^{P} q_{jl} [r_j(k + l) − y_j(k + l)]²
  + Σ_{i=1}^{m} Σ_{l=0}^{P−1} ω_{il} [u_i(k + l)]²
  + Σ_{i=1}^{m} Σ_{l=0}^{M−1} λ_{il} [u_i(k + l) − u_i(k + l − 1)]²

subject to

x(k + 1) = A x(k) + B u(k) + G w(k)
y(k) = C x(k) + v(k)
u(k + l) = u(k + l − 1), l = M, M + 1, ...
D u(k) ≤ d, k = 0, 1, ...
H x(k) ≤ h, k = 1, 2, ...

where r_j is the reference trajectory for the controlled output y_j, m is the number of inputs, and p is the number of outputs.
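Before the reference trajectories are defined, it may help to see how the three penalty terms above combine. The following is a minimal NumPy sketch, written for this review as an illustration only, that rolls the nominal model forward for a candidate input sequence and evaluates the objective; using constant per-output and per-input weights is a simplification of the time-varying q_jl, ω_il, and λ_il above.

```python
import numpy as np

def mpc_objective(A, B, C, x0, u_seq, r_seq, q, lam, om, P0=1, u_last=None):
    """Roll the nominal model forward and evaluate the moving-horizon objective.

    u_seq : (M, m) candidate input sequence; the last move is held for l >= M.
    r_seq : (P, p) reference values r_j(k+1), ..., r_j(k+P).
    q, lam, om : per-output and per-input weight vectors (held constant over
                 the horizon here for simplicity).
    u_last : u(k-1), used in the first move-suppression term (zeros if None).
    """
    M, m = u_seq.shape
    P, _ = r_seq.shape
    x = np.asarray(x0, dtype=float).copy()
    u_prev = np.zeros(m) if u_last is None else np.asarray(u_last, dtype=float)
    cost = 0.0
    for l in range(P):
        u = u_seq[min(l, M - 1)]                 # u(k+l) = u(k+M-1) for l >= M
        x = A @ x + B @ u                        # nominal prediction (w = 0)
        y = C @ x                                # predicted output (v = 0)
        if l + 1 >= P0:                          # output-error term, l = P0..P
            cost += np.sum(q * (r_seq[l] - y) ** 2)
        cost += np.sum(om * u ** 2)              # input-magnitude term
        if l < M:                                # move-suppression term
            cost += np.sum(lam * (u - u_prev) ** 2)
        u_prev = u
    return cost
```

In an actual MPC implementation this cost would be minimized subject to the input and state constraints at every sampling instant; here it only illustrates the bookkeeping of the horizons and weights.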

Each reference trajectory, r_j, is defined in this manner:

r_j(k) = y_j(k)
r_j(k + 1) = y_j(k + 1)
⋮
r_j(k + η_j − 1) = y_j(k + η_j − 1)
r_j(k + η_j) = (1 − β_j) y_sp,j + β_j y_j(k + η_j − 1)
r_j(k + η_j + 1) = (1 − β_j) y_sp,j + β_j y_j(k + η_j)
⋮
r_j(k + P) = (1 − β_j) y_sp,j + β_j y_j(k + P − 1)

where y_sp,j is the set point for the output y_j, which can be constant or time-varying, and β_j is a tunable parameter that takes a value between 0 and 1 and sets the time constant of the reference trajectory r_j. η_j is the relative order (degree) of the output y_j with respect to the input vector, u.

3. Offline Tuning Methods

3.1. Prediction Horizons. In this section, various techniques to tune the prediction horizons are discussed. Intuitively, the prediction horizons describe the lower and upper limits of the time horizon (window) in the future over which the controller tries to induce a desired response in the plant outputs. The final prediction horizon (upper limit of the horizon) is either set to infinity to ensure stability23 or kept finite, in which case it should be tuned properly, based on tuning rules, to ensure the closed-loop stability of the control system. Since DMC and GPC are the most popular implementations, theoretically and industrially, the tuning rules for these are given first. Newer implementations, such as state-space representations and max-plus-linear (MPL) systems, are given toward the end of the section. A list of the tuning guidelines for setting the prediction horizons is given in Table 1.

For DMC, two heuristics and a theorem are given, followed by formulas to determine the final prediction horizon, P. As a default setting, one can set P equal to 10.21 Similarly, as a rule of thumb, Wojsznis et al.24 suggested setting the prediction horizon so large that a further increase has no significant effect on control performance. In an article by Nagrath et al.,25 the authors used the default DMC value of the prediction horizon (i.e., 10) to tune a state-space representation of a continuous-stirred tank reactor (CSTR) with excellent results. Garcia and Morari1 presented a theorem to ensure closed-loop stability based on the final prediction horizon. The theorem states that if Q = I and Λ = 0, then there is a sufficiently small M (control horizon) and a sufficiently large P > N + M − 1 to ensure that the closed-loop system is stable. Maurath et al.26 suggested that P be chosen so that between 80 and 90% of the process steady state is included in the calculation of P, with 85% being an average value. Similarly, in ref 27, the authors proposed a minimum value for the prediction horizon with respect to the control horizon and the process delay; this ensures that the dynamic matrix is of full column rank and thus invertible. Cutler28 proposed setting the value of the prediction horizon according to the sum of the control horizon and model horizon. The author suggested increasing M until there is no further effect on the first move of the controller to a set point change. Georgiou et al.29 suggested adding the rise times at 60 and 95% of process steady state, divided by the sampling time (period), minus one, to determine P, and applied this method to three distillation column examples.29 The control response of the first column, column A, was quite excellent: there was no offset and only very slight oscillation in the output variables. The response of the second column, column B, yielded poorer performance than that of the first column due to a large load disturbance.

Table 1. Guidelines for Tuning the Prediction Horizon

MPC formulation | value or bound | ref
DMC | P = [(t80 + t90)/2]/Ts | 26
DMC | P > M + td/Ts | 27
DMC | P = M + N | 28
DMC | P = t60/Ts + t95/Ts − 1 | 29
DMC | P = (5τp + td)/Ts | 31
DMC | P = max(5τrs/Ts + krs), krs = τd,rs/Ts + 1 | 13
DMC | P − 1 ≥ max((umax − umin)/∆umax, 1) | 32
GPC | nb < P ≤ tr/Ts | 34
GPC | P = dmax + (ts/Ts)/3.5 | 21
GPC | P = t95/Ts | 33
GPC | P = max(10, tr/Ts) | 38
GPC | P = max(Pf1, Pf2) | 39
GPC | 2na − 1 < P < t95/Ts | 35
DMC, GPC | low RPN: P = t80/Ts, P0 = 1; high RPN: P = t90/Ts, P0 = t10/Ts | 41

Table 2. GPC Tuning Guidelines

method | P0 | P | M | λ | ref
GPC1 | 1 | dmax + 1 < P < t95/Ts | 1 | trial and error | 14, 15
GPC2 | nb + 1 | M + P0 + 1 < P < t95/Ts | na + 1 | λrel[B(1)]² | 14, 15
GPC3 | 1 | M + P0 + 1 < P < t95/Ts | na + 1 | trial and error | 14, 15
GPC4 | 1 | M + dmax + 1 < P < t95/Ts | na + 1 | trial and error | 14, 15

This illustrated that DMC "is more sensitive to process nonlinearities" than traditional proportional-integral (PI) controllers.29 The controller for column C was not designed, due to the high computational effort necessary because of a very large time constant. Due to the large time constant and the use of reasonable sampling times (periods), P would have been very large. This, according to Ogunnaike,30 can lead to (a) control moves of very large size, i.e., "excessive movement of the variable which often results in an unacceptably oscillatory system response", and (b) bad conditioning of AᵀA, which can result in poor control. However, using two nonlinear transformations of the two output variables, the control performance of both columns B and C was improved dramatically. A disadvantage of these two methods is that one needs to simulate the process offline beforehand in order to determine the various time constants.

Hinde and Cooper31 proposed converting the model into FOPDT (first-order-plus-dead-time) form and using five times the overall process time constant plus the dead time, divided by the sampling time, to determine the value of P. This method allows for tuning of the prediction horizon without simulating the system beforehand. Unfortunately, it is only applicable to single-input single-output (SISO), open-loop stable, minimum-phase, nonintegrating processes. Due to these constraints on the system, this tuning procedure is very specialized, applying to very few systems of practical importance. Shridhar and Cooper13 proposed a tuning strategy for SISO and multiple-input multiple-output (MIMO) unconstrained DMC controllers that are open-loop stable, including nonsquare systems. The basis of their tuning method is the condition number (c) of the matrix A of the process. Their choice of condition number, 500, was based on the "rule of thumb that the manipulated variable move sizes for a change in set point should not exceed 2-3 times the final change in manipulated variable".13 Additionally, to derive their equation, they approximated the dynamics of the process via an FOPDT model of the process with zero-order hold. From this, they arrived at an equation that determines P from the maximum over the various combinations of process inputs S and outputs R. All five of the above methods assume a starting value for the prediction horizon of 1. Genceli and Nikolaou32 formulated robust stability conditions based on the tuning parameters for a linear DMC with end conditions. They proposed a bound on the prediction horizon to guarantee stability of the closed-loop system with constraints on the inputs and outputs. It is based on the maximum between the difference of the minimum and maximum inputs divided by the maximum change in input, or 1. This allows for robust stability of the closed-loop system in the presence of active constraints.
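The FOPDT-based settings of refs 13 and 31 discussed above lend themselves to a compact calculation once the gain, time constant, dead time, and sample time are known. The sketch below encodes one SISO reading of the corresponding entries reproduced in Tables 1, 3, and 6; it is an illustration of the form of those rules, not a faithful reimplementation of either reference.

```python
import math

def fopdt_dmc_tuning(Kp, tau_p, t_d, Ts):
    """SISO DMC settings from an FOPDT model Kp*exp(-t_d*s)/(tau_p*s + 1),
    following the form of the ref 13 / ref 31 entries in Tables 1, 3, and 6."""
    k = int(t_d / Ts) + 1                          # discrete dead time + 1
    P = int(round(5.0 * tau_p / Ts)) + k           # prediction horizon = model horizon
    M = max(1, int(round(tau_p / Ts)))             # control horizon (ref 31 flavor)
    # Move-suppression weight in the squared form listed in Table 6 for ref 13,
    # written for a single loop with unit output weight. Whether a given solver
    # expects lambda or lambda**2 depends on how its objective is written.
    lam_sq = (M / 500.0) * Kp**2 * (P - k - 1.5 * tau_p / Ts + 2.0 - (M - 1) / 2.0)
    return {"P": P, "M": M, "N": P, "lambda_sq": max(lam_sq, 0.0)}
```

For multivariable systems, ref 13 takes the maximum of the horizon values over all input-output pairs and sums the output contributions in the move-suppression expression, as reproduced in Tables 1, 3, and 6.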


The Genceli and Nikolaou condition implies that a prediction horizon of a minimum length of P = M + 1 guarantees closed-loop stability. In particular, if ∆umax ≥ umax − umin, then a short prediction horizon of length P = 2 with a control horizon of one (M = 1) guarantees closed-loop stability.32 For the long-horizon case, where P ≥ M + N − 2, the move suppression coefficients can all have equal values and guarantee closed-loop stability. To find the optimal value for the prediction horizon, the authors proposed minimizing the "initial online calculated optimal objective function value" subject to constraints on the set point and disturbance.32

For GPC, various methods to tune the prediction horizons exist. Typically, they fall into a class of ranges on the final prediction horizon or formulations based on assumptions about the process model. All, except two, assume the starting value of the prediction horizon to be 1.14,15,33,34 Clarke and Mohtadi35 proposed setting the starting value (P0) to the order of the A polynomial in the GPC model. Clarke et al.36 proposed setting the value different from one only "if it is known a priori that the dead-time of the plant is at least k sample-intervals then [P0] can be chosen as k or more to minimize computations." McIntosh et al.14,15 proposed four methods for the initial and final values of the prediction horizons, here named GPC1-GPC4 (Table 2). In their second method (GPC2), they proposed setting the starting value of the prediction horizon to the order of the B polynomial in the GPC model plus 1, i.e., nb + 1.

For the value of the final prediction horizon, Clarke et al.36 proposed using a default value of 10 for a large class of plants. However, they proposed that P should be given a value between the order of the B polynomial of the process model and the output-response rise time in samples. Yoshitani and Hasegawa37 applied this tuning strategy to a continuous annealing process with good results. The authors applied the strategy to the autotuning formulation of GPC to smoothly track the transition from strip temperature control to furnace temperature control. Yamuna Rani and Unbehauen21 proposed a method to tune the prediction horizon based on the maximum dead time and settling time of the system. Banerjee and Shah,33 for systems that are stable, proposed using an equation with the rise time for 95% of the process steady state. Karacan et al.38 applied GPC tuned using this method to a packed distillation column and obtained good results. They used the default value of 10 for the prediction horizon, but stated that the prediction horizon should be the maximum of the default value and the process-output rise time in steps. Edouard et al.,39 for a catalytic reverse flow reactor, proposed the larger of the two controlled-variable active constraint lengths. Clarke and Mohtadi35 proposed a bound on the prediction horizon based on the order of the A polynomial and the rise time for 95% of the process steady state. As given in Table 2, McIntosh et al.14,15 proposed three different bounds on the prediction horizon, depending upon how other parameters in the GPC scheme are tuned. The first bound (GPC1) is based on the default tuning settings of their four methods.



The second bound developed by McIntosh et al.14,15 results from deviating from the default method by proposing a different strategy for the move suppression weights (GPC2), the initial value of P (GPC2), the value of the control horizon (GPC2 and GPC3), or the value of the weights on the outputs (GPC3). The third bound developed by McIntosh et al., GPC4, varies from GPC3 only in its output weights. McIntosh et al.40 suggested using a supervisory-level method to periodically update the tuning parameters based on a performance criterion. One update law, proposed by Yamuna Rani and Unbehauen,21 is the following:

tp(t) = tp(t − 1) ± k [Pm(t) − Pd] tp(t − 1)/Pd

The performance is quantified in terms of some criterion chosen by the control practitioner in order to judge the quality of the control system; one possibility is to minimize the integral-square error (ISE). Similarly, McIntosh et al.40 proposed using a servo performance index, with a specified overshoot of 5% and a specified number of samples after each set point change, in the GPC objective function to tune the final prediction horizon. This can be periodically updated to improve controller performance by re-solving the optimization problem. The advantage of all of these GPC strategies is that they are straightforward to implement, require minimal offline simulation, and yield robust performance.

Trierweiler and Farina41 proposed a tuning strategy based on the robust performance number (RPN) of the system. The strategy applies to both DMC- and GPC-type MPC controllers. Before tuning, the transfer functions of the system should be scaled based on a procedure outlined in the paper, and the attainable performance of the system should be determined. Once that is complete, the RPN can be calculated to determine how attainable the performance is. If the RPN is low, they propose setting the initial prediction horizon to 1 and the final prediction horizon to the rise time for 80% of the process steady state. However, if the RPN is high, they propose setting the initial prediction horizon to the rise time for 10% of the process steady state and the final prediction horizon to the rise time for 90% of the process steady state.

Lee and Yu42 proposed a tuning strategy for a state-estimation-based controller, which is applicable to systems with a state-space model. They recommended first tuning for nominal stability and then detuning the system for robust performance. For systems with stable inverses, the prediction horizon is chosen equal to the control horizon. If the system has unstable inverses, one can either choose an infinite-horizon formulation, solve a Lyapunov equation to determine the appropriate terminal weights and keep the prediction horizon equal to the control horizon, or make P larger than M by the system's settling time in sampling units and constrain the output error at zero by choosing a large terminal weight. These settings are kept when the system is detuned for robust performance. Zhang and Li16 proposed tuning the final prediction horizon by choosing initial values of the input weights, the initial prediction horizon (typically set to the process delay time), and the rate-of-change weights. Once these are chosen, one plots the achievable performance benchmark for different values of P against the installed controller and a benchmark controller, which in the article was an LQG controller identified via subspace identification. From this, one sets the value of P "by considering the system specification and the variance trade-off" in the benchmark plots.16 This method is applicable to state-space representations without active constraints.
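Both the rule of thumb of ref 24 (increase P until further increases stop improving performance) and the benchmark-curve procedure of ref 16 amount to sweeping the horizon and watching a closed-loop metric level off. The skeleton below is a generic, hedged version of that idea; evaluate_closed_loop is a hypothetical, user-supplied routine (for example, a closed-loop simulation returning an ISE or output-variance figure), not a function from any cited work.

```python
def choose_prediction_horizon(evaluate_closed_loop, candidates, rel_tol=0.02):
    """Pick the smallest P beyond which the closed-loop metric stops improving.

    evaluate_closed_loop : hypothetical callable P -> scalar cost (lower is better).
    candidates : increasing sequence of trial horizons.
    rel_tol : relative improvement below which a longer P is judged not worth it.
    """
    best_P = candidates[0]
    best_cost = evaluate_closed_loop(best_P)
    for P in candidates[1:]:
        cost = evaluate_closed_loop(P)
        if best_cost - cost > rel_tol * abs(best_cost):
            best_P, best_cost = P, cost      # still improving appreciably
        else:
            break                            # diminishing returns: stop here
    return best_P, best_cost
```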
Max-plus-linear (MPL) models are a new class of models used for discrete-event systems, which are dynamical systems that often arise in the context of manufacturing systems, telecommunication networks, railway networks, parallel computing, etc.43

Table 3. Guidelines for Tuning the Control Horizon

MPC formulation | value or bound | ref
DMC | M = t60/Ts | 29
DMC | M = τp/Ts | 31
DMC | M = max(τrs/Ts + krs), krs = τd,rs/Ts + 1 | 13
DMC | M ≥ max((umax − umin)/∆umax, 1) | 32
GPC | M ≤ τd/Ts | 35
DMC, GPC | M = int(P/4) | 41

In an article by van den Boom and De Schutter,44 the authors give a strategy to tune the initial and final prediction horizons. They proposed setting the initial prediction horizon to 1 and the final prediction horizon to the length of the impulse response of the system.44 van den Boom et al.45 proposed setting the final prediction horizon according to a complex rule that requires knowledge of the spectral radius of A and of the shifted state, input, and output matrices, through formulas using the spectral radius. Both of these studies are for constrained SISO MPL MPC systems.

In summary, there are a multitude of strategies for tuning P. Typically, P0 can be set to 1 or to the amount of time delay in the output response. DMC tuning strategies tend to use either the various time constants from open-loop simulations or an FOPDT representation of the process and formulas derived from the time constants of the FOPDT representation. GPC tuning strategies are almost universally based both on the structure of the model and on the various time constants obtained from open-loop simulations, similar to some DMC strategies.

3.2. Control Horizon. This section reviews guidelines for tuning the control horizon. The control horizon affects how aggressive or conservative the control action is. This leads to a trade-off: increasing the control horizon from 1 creates a more robust, but more aggressive, controller with increased computational load; keeping it at 1 increases the conservativeness and decreases the robustness of the controller, but at a savings in computation. As in the previous section, this section starts with the tuning guidelines for DMC and GPC and then goes on to more recent control formulations. The guidelines to tune the control horizon are listed in Table 3.

For DMC controllers, Yamuna Rani and Unbehauen21 suggested a default value of 1 for the control horizon. Maurath et al.26 proposed tuning the control horizon via trial and error: they stated that the horizon "should be large enough to extend over all significant adjustments in the manipulated variable needed to implement a set-point change." Georgiou et al.29 proposed setting the control horizon based on the time it takes for the output response to reach 60% of the process steady state. Hinde and Cooper,31 using an FOPDT approximation of the process dynamics, proposed setting the control horizon based on the time constant of the approximated system. As stated before, using this formulation allows for tuning without simulating the process offline and calculating the steady-state time constants. It is only applicable to SISO, open-loop stable, minimum-phase, nonintegrating processes, which makes it very limiting. Shridhar and Cooper13 also proposed a method for tuning based on an FOPDT approximation of the process, similar to that of Hinde and Cooper,31 for open-loop stable, unconstrained MIMO systems. It is similar to the equation for the prediction horizon, though the process time constant is not multiplied by five. Genceli and Nikolaou32 proposed setting the control horizon greater than or equal to the maximum of the difference between the maximum input and the minimum input divided by the maximum change in input, or 1. This ensures robust stability of the closed-loop system with active constraints. In GPC formulations, as in DMC formulations, the default tuning value for the control horizon is 1.14,15,21,33-35


Clarke et al.36 proposed setting the control horizon to 1 for typical plants, or equal to the number of poles near the stability boundary for open-loop unstable plants.36 Yoshitani and Hasegawa37 applied this tuning strategy successfully to the autotuning formulation of GPC implemented on a continuous annealing process; the control problem was to smoothly track the transition from strip temperature control to furnace temperature control. Similarly, Clarke and Mohtadi35 proposed setting M equal to 1 for most plants, or to a value less than or equal to the plant dead time. Karacan et al.38 used the default value of 1 for a packed distillation tower with good results. However, in the work of McIntosh et al.,14,15 for GPC2-GPC4, the authors set the control horizon to the order of the A polynomial of the process model plus 1, i.e., na + 1. Trierweiler and Farina41 proposed a strategy for DMC and GPC controllers based on the robust performance number (RPN) of the system. After the proper scaling is done, as outlined in the paper, and the attainable performance is computed, the authors proposed setting the control horizon to the final prediction horizon divided by 4. Rawlings and Muske,46 assuming an infinite-horizon formulation of the control law, proposed tuning the control horizon based on the stability of A or on the ability to stabilize {A, B}. If A is stable, they proposed setting the control horizon M ≥ 1. If {A, B} is stabilizable, they proposed setting M ≥ r, where r is the number of unstable poles. In GPC formulations, the tuning of M can typically be done solely on the basis of the structure of the system, without the need for offline simulation.

For state-space representations, Lee and Yu42 proposed tuning the system first for nominal stability, then detuning for robust performance. They suggested setting the control horizon "as large as possible within the computational limits."42 Edouard et al.39 proposed setting the control horizon to 1 when tuning an MPC controller with a state-space representation, to decrease the computational burden of the constrained optimization problem. They applied this strategy to a catalytic reverse flow reactor. Nagrath et al.25 suggested a tuning strategy for a state-space representation of a CSTR; as in ref 46, they proposed setting the control horizon greater than or equal to the number of unstable poles of the system.25 Zhang and Li16 proposed tuning the control horizon by choosing initial values of the input weights, the initial prediction horizon (typically set to the process delay time), and the rate-of-change weights. Once those are chosen, one plots the achievable performance benchmark for different values of P against the installed controller and a benchmark controller, which in the article was an LQG controller identified via subspace identification. From this, one sets the value of M by considering the system specification and the variance trade-off in the benchmark plots described in ref 16. This method is applicable to state-space representations without active constraints. For linear quadratic regulator (LQR) controllers, Scokaert and Rawlings47 presented a strategy to tune the control horizon in an infinite-horizon LQR formulation. Their strategy is to set the value of the control horizon, solve the related MPC optimization as formulated within the study, and check whether the terminal output state satisfies the constraints.
If the terminal output state satisfies the constraints, then one should terminate the algorithm and use that control horizon; if it does not, then one should increase the control horizon and reinitialize the algorithm until the terminal output state does satisfy them. This allows the weights in the controller, both the input and output weights, to be set to unity. State-space representations typically require offline simulation and some trial-and-error tuning because of the structure of the representation.
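The Scokaert and Rawlings procedure described above is essentially a feasibility loop on M; the sketch below captures that logic, with solve_finite_horizon_mpc and terminal_state_feasible as hypothetical placeholders for the optimization and constraint check of ref 47.

```python
def find_control_horizon(solve_finite_horizon_mpc, terminal_state_feasible,
                         M_start=1, M_max=50):
    """Increase M until the terminal state of the finite-horizon solution
    satisfies the constraints (a sketch of the procedure in ref 47)."""
    M = M_start
    while M <= M_max:
        solution = solve_finite_horizon_mpc(M)    # hypothetical solver call
        if terminal_state_feasible(solution):     # hypothetical constraint check
            return M, solution
        M += 1                                    # re-initialize with a longer horizon
    raise RuntimeError("no feasible control horizon found up to M_max")
```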


Table 4. Guidelines for Tuning the Model Horizon

MPC formulation | formula or bound | ref
DMC | N > max(t95,i)/Ts | 29
DMC | P = N = max(5τrs/Ts + krs), krs = τd,rs/Ts + 1 | 13

Table 5. Guidelines for Tuning the Weights on the Outputs

MPC formulation | formula or bound | ref
state-space | Q = CᵀC | 48
state-space | nonstrictly proper case: Q = X∞ − Aᵀ(I + X∞BBᵀ)⁻¹X∞A, with X∞ = γ²XW⁻¹, W = (γ² − 1)I − ZX, γ > γmin = (1 + ρ(XZ))^(1/2), X = AᵀXA − AᵀXB(I + BᵀXB)⁻¹BᵀXA + CᵀC, Z = AZAᵀ − AZCᵀ(I + CZCᵀ)⁻¹CZAᵀ + BBᵀ | 49
state-space | strictly proper case: Q = CᵀC | 49
DMC, GPC | Qs = 1/(1 + yZ,s)^(1/2) | 41

For MPL systems, van den Boom and De Schutter44 proposed setting the control horizon equal to the upper bound of the minimal system order, which lies in the range 1 ≤ M ≤ P.44 This is applicable to unconstrained SISO MPL MPC only.

In summary, for DMC, GPC, and state-space representations, most strategies propose setting M = 1 or equal to the number of unstable poles. A few, such as those in refs 14 and 15 for GPC, suggest setting it to a different value (see Table 2). Keeping M small saves computational time and leads to a less aggressive control action, at the cost of robustness to process-model mismatch and unmeasured disturbances.

3.3. Model Horizon. This section reviews guidelines proposed for setting the model horizon in a finite-impulse or finite-step response model for DMC. The model horizon affects the conditioning of the matrix A: as the model horizon increases, matrix A becomes more ill-conditioned.29 Guidelines for tuning the model horizon are given in Table 4. Georgiou et al.29 proposed setting the model horizon larger than the time required for the slowest open-loop process output response to reach 95% of steady state. Shridhar and Cooper13 and Wojsznis et al.24 proposed setting the model horizon equal to the final prediction horizon.

3.4. Weights on the Outputs. This section presents a review of recommendations made on tuning the output weights. Weights on the outputs are used to scale the controlled variables and direct more control effort toward more important controlled outputs, to achieve tighter control of those outputs. The tuning guidelines for the weights on the outputs are listed in Table 5. Rowe and Maciejowski48 used an LQG/LTR approach to tune an infinite-horizon state-space MPC controller. Using that technique, the authors derived an expression for the output weights for minimum-phase systems as the product of the transpose of the output matrix C and the output matrix. For nonminimum-phase systems, the technique only works where the closed-loop bandwidth is made "small" enough.48 Rowe and Maciejowski49 provided another formula, based on H∞ loop shaping of the process model; this applies to nonstrictly proper cases. For strictly proper cases, the expression is the same as that in ref 48. Both studies assume that the weight on the rate of change of inputs is the identity, i.e., Λ = I. Yoshitani and Hasegawa37 used a linearly decreasing weight on the furnace temperature and a linearly increasing weight on the upper limit of the furnace temperature to achieve a smooth temperature transition from the initial reference temperature to the upper allowable limit. This was applied to a state-space model of a continuous annealing process.



Table 6. Guidelines for Tuning the Weights on the Rate of Change of Inputs

MPC formulation | formula or bound | ref
DMC | λ = ΓKp, Γ is typically set to 10 | 31
DMC | λP = [Σ_{j=−N+1}^{P} δj + (b + wḃ) Σ_{i=1}^{N} Ei + Σ_{j=−N+1}^{P} (aj + wȧj)] / [1 − (Σ_{i=1}^{N} Ei)/|G|]; λ_{j−1} = λj − aj − wȧj − δj, 1 ≤ j ≤ P (various terms are defined in the article) | 32
DMC | λs² = (M/500) Σ_{r=1}^{R} {Qr² Krs² [P − krs − (3/2)(τrs/Ts) + 2 − (M − 1)/2]}, s = 1, 2, ..., S; krs = τd,rs/Ts + 1 | 13
GPC | λ = mP + c; λstart = m|tr(GᵀG)|/M | 21
GPC | λ = λrel[B(1)]² | 14, 15
DMC, GPC | Λs = [(1 + uZ,s) log10(RPN + 1)]^(1/2) mean|gsi,j(ωsup)|² | 41

Nagrath et al.25 increased the output weight on the reactor temperature of a nonlinear state-space model of a CSTR relative to that on the concentration, in order to make "the input action fast so that the output tracks the set point aggressively." Trierweiler and Farina41 presented an approach for setting the output weights on the basis of the output zero directions of the RHP zero closest to the origin, calculated for the scaled system GS(s). These directions give an idea of how the RHP-zero effect is distributed over the plant outputs. This approach can be used for DMC and GPC controllers. Wojsznis et al.24 suggested that Q should have higher coefficients applied to the areas of the step response with higher process gain, and lower values, or zero, in areas with smaller process gain. They also suggested using linearly increasing coefficients, with some initial coefficients set to zero, to improve robustness; the number of initial values set to zero is called the patience factor. This method is effective when the initial part of a step response is uncertain but the steady-state gain is reasonably well-defined.24 The tuning of Q can typically be left to the control engineer who is familiar with the process requirements. If one or more process outputs are more important than others, or if they decrease in importance over time while others increase, such as in ref 37, one can set Q such that the control action matches the requirements accordingly.
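The time-varying output-weight profile suggested in ref 24, with zero weight over an initial "patience" window followed by increasing coefficients, is easy to state in code. The minimal sketch below uses a linear ramp as one reasonable realization of that idea; the ramp shape itself is an assumption, not a detail taken from ref 24.

```python
import numpy as np

def patience_output_weights(P, patience, q_final=1.0):
    """Output-weight profile over the prediction horizon in the spirit of
    ref 24: zero weight over the first `patience` samples, then a linear
    ramp up to q_final at the end of the horizon."""
    q = np.zeros(P)
    n_active = P - patience
    if n_active > 0:
        q[patience:] = np.linspace(q_final / n_active, q_final, n_active)
    return q
```

For example, patience_output_weights(P=20, patience=5) leaves the first five predicted errors unpenalized, which mimics ignoring the uncertain initial portion of the step response while still weighting the well-defined steady-state behavior.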

3.5. Weights on the Rate of Change of Inputs. This section reviews existing approaches to tuning the weights on the rate of change of the inputs. Penalizing the rate of change yields a more robust controller, but at the cost of the controller being more sluggish; setting a small penalty, or none whatsoever, gives a more aggressive controller that is less robust. Table 6 lists the tuning guidelines discussed herein regarding the weights on the rate of change of inputs.

For DMC controllers, Garcia and Morari1 presented a theorem regarding a finite value of the rate-of-change-of-inputs weights that stabilizes the closed loop. It states the following: there exists a Λ* > 0 such that for all Λ > Λ* the closed loop is stable for all P, M ≥ 1 and Q > 0. Georgiou et al.29 used a constrained linear minimization procedure to determine the correct suppression factors. They used in the inequality constraint the minimum integral absolute error of the outputs, such that the changes in the manipulated variables were similar in maximum magnitude to those of the proportional-integral (PI) controller to which the MPC formulation was compared. Maurath et al.27 proposed a trial-and-error tuning strategy for the weights on the rate of change of inputs: they proposed setting the rate-of-change weights to an initial value and then simulating the closed-loop system to determine whether the controller is too aggressive or too sluggish.27 To determine the next weight to use, the authors suggested increasing the value of the current weight by 10. Hinde and Cooper31 used an approach that sets the rate-of-change-of-inputs (suppression) weights based on a desired controller performance, defined as a short rise time with 10-15% overshoot. Genceli and Nikolaou32 provided two equations for the suppression weights: one for the suppression weight at the terminal state, and a recurrence relation for the remaining suppression weights. Using this method, the robust stability of the closed-loop system is guaranteed. Sarimveis et al.50 introduced a strategy to tune nonsquare DMC systems based on ref 32. The authors derived an optimization algorithm to determine the suppression weight. The algorithm is based on modifying the end conditions in ref 32 such that there is a pseudoinverse of the nonsquare dynamic matrix. Next, the objective is used as a Lyapunov candidate function, and the optimization is conducted to check whether it is monotonically decreasing with a suboptimal solution. If this is the case, then one can solve the nonconvex optimization for the suppression coefficients with respect to the pseudoinverse of the dynamic matrix. Shridhar and Cooper13 derived a formula for the suppression weight by approximating the dynamics of the process with an FOPDT model. From this approximation, they derived an equation that relates the suppression weights to the various other tuning parameters. This method is applicable to open-loop stable processes, including nonsquare systems. Wojsznis et al.24 proposed using decreasing values of the suppression weights on the moves beyond the first one. According to the authors, this makes the first move less aggressive, since the subsequent moves are relaxed and can be used with lower penalties for error correction. Although the second and later moves may violate move constraints, this is not a practical concern, since the controller moves are recalculated.24 As a result of this strategy, the controller moves are smaller and the controller is more robust.

For GPC controllers, Clarke and Mohtadi35 proposed setting the value of the suppression weights to zero. However, if numerical stability is required, then they proposed setting the suppression weight equal to some small value, ε. Banerjee and Shah33 proposed using a suppression weight greater than 1 but less than 2 (i.e., 1 < λ < 2), even in open-loop stable systems, to increase the robustness and detune the performance. Karacan et al.38 suggested tuning the suppression weight by a trial-and-error process: they first chose the other parameters, namely the initial and final prediction horizons and the control horizon, and then varied λ until they achieved the best result. For their system, a packed distillation column, the resulting value was λ = 1.2. Yamuna Rani and Unbehauen21 proposed a method to tune the suppression weights based on optimally tuning the final prediction horizon as in ref 40; the affine constants are determined using the previously tuned λ values.21 This allows for immediate updating of the suppression weights when the final prediction horizon is retuned as in ref 40. McIntosh et al.14,15 proposed a strategy for initially setting and then periodically updating the weights on the rate of change of the inputs. In their strategy, which is used in the GPC2 formulation, the rate-of-change-of-inputs weights are initially set to a value; from these initial values, the weights can be updated using a second formula. Yoshitani and Hasegawa37 used a suppression weight of 0.6 for a continuous annealing process to increase the robustness of the controller, although the authors noted that the value was not critical so long as the final prediction horizon was properly tuned.
Trierweiler and Farina41 proposed a formula for calculating the suppression weight based on a rescaling of the system. It uses a frequency representation of the model to determine the frequency at which the RPN is at its maximum. This applies to both DMC and GPC controllers with inactive constraints.


Rowe and Maciejowski48 used an LQG/LTR approach to tune an infinite-horizon state-space MPC controller. For minimum-phase systems, they set the suppression weight equal to zero, since recovery is always guaranteed from the LQG formulation. For nonminimum-phase systems, they set the suppression weights equal to a small value to help ensure recovery from the LQG formulation. Nagrath et al.25 set the suppression weight to 1 for a CSTR with a nonlinear state-space plant model, with good results. Zhang and Li16 proposed a tuning strategy for the rate-of-change-of-inputs weights. The authors suggested setting the values of the initial and final prediction horizons, the control horizon, and the input weights ad hoc. Then, one should plot the "MPC-achievable curve plotted with the MPC-achievable benchmark variance . . . as ordinate and the input variance . . . as the abscissa by varying parameter [Λ]" along with the installed-controller curve on the same plot, and vary Λ until the installed-controller curve approaches that of the achievable benchmark curve.16

For MPL MPC strategies, van den Boom and De Schutter44 proposed a bound on the suppression weight: 0 < λ < 1. van den Boom et al.45 further refined this by proposing a bound based on the prediction horizon: λ < 1/P, where P is the final prediction horizon. Both of these bounds apply to constrained SISO MPL MPC. For unconstrained MIMO systems, Necoara et al.43 suggested a slight adjustment of the approach proposed in ref 45: λ < 1/(mP), where m is the number of manipulated inputs and P is the final prediction horizon.

The rate-of-change-of-inputs weights can be used as a systematic means of limiting controller action. From the theorem in ref 1, there is always a weight that guarantees closed-loop stability for DMC controllers. Knowing this, as long as one simulates the system and confirms that the closed-loop system is stable with a particular value of the rate-of-change-of-inputs weight, increasing it above this value is at the discretion of the control engineer, based on knowledge of the process, to further suppress controller action. However, increasing it too much creates a sluggish controller, which may be unwanted in a process with small time constants.

3.6. Weights on the Magnitude of the Inputs. This section reviews the known strategies for tuning the weights on the magnitude of the input variables. This parameter penalizes excess controller action. It is a way to remove a constraint from the optimization problem and thus make it more computationally attractive; in particular, this penalty allows one to remove the constraints on the minimum and maximum sizes of the inputs. Nagrath et al.25 used a value of zero for the weights on the magnitude of the inputs; the authors' reasoning was that a nonzero value was unnecessary, since there were no magnitude constraints for the system they studied. This tuning strategy was applied to a nonlinear state-space model of a CSTR. Garriga and Soroush51 quantitatively described the effect of increasing the size of the weights on the magnitude of the inputs: in the limiting case, as Ω → ∞, the Jacobian of the closed loop approaches the Jacobian of the open loop, and thus the controller operates in open-loop mode.

3.7. Reference Trajectory Parameters. This section reviews the purpose and tuning of the reference trajectory parameters, 0 ≤ βj < 1, j = 1, ..., p. The reference trajectories are used to ensure a smooth transition from the current output values to the desired set point values.36 Clarke et al.36 suggested using a value of the reference trajectory parameter around 1 to ensure a slow transition from the current measured variable to the real set point.
However, Garriga and Soroush51 noted that the reference trajectory parameter loses its effect when a long final prediction horizon is implemented.
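Because βj sets the time constant of the reference trajectory (Section 2), one practical way to choose it is to start from a desired reference-trajectory time constant, or from a performance-ratio-scaled open-loop settling time in the spirit of ref 52, and convert it to βj. The exponential mapping below is an assumption made for illustration; it is not a formula taken from the cited works.

```python
import math

def beta_from_time_constant(tau_ref, Ts):
    """Return beta in [0, 1) so that a first-order reference trajectory
    decays toward the set point with (approximately) time constant tau_ref,
    given sample time Ts. tau_ref -> 0 gives beta -> 0 (jump to the set point)."""
    if tau_ref <= 0:
        return 0.0
    return math.exp(-Ts / tau_ref)
```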


Table 7. Guidelines for Tuning the Constraint Parameters

MPC formulation | formula or bound | ref
DMC | k1 − 1 ≥ M ≥ max((uj,max − uj,min)/∆uj,max) − 1, 1 ≤ j ≤ ni | 50
state-space | k1 = max{[ln(hmin/(|H| K(T) |x0|))]/ln(λmax), 1}; k2 = M + max{[ln(hmin/(|H| K(T) |xM|))]/ln(λmax), 0} | 46

Seborg et al.52 proposed a way to set the reference trajectory parameter: the desired time constant for each output is specified indirectly by a performance ratio for that output, where the performance ratio is the ratio of the desired closed-loop settling time to the open-loop settling time.52 In this way, a small performance ratio corresponds to a small reference trajectory parameter.

3.8. Constraint Parameters. This section provides tuning guidelines regarding when the constraints on the system are active or inactive. Typically, the objective in tuning the constraints is to know the window over which the constraints are active or inactive, in order to keep the optimization problem feasible. Table 7 lists the tuning guidelines for the constraint parameters discussed in this section. Sarimveis et al.50 provided a method to determine the window of active output soft constraints for DMC-type controllers applied to nonsquare systems. Rawlings and Muske46 proposed two methods to tune the output constraints based on the stability of the open-loop system; using an infinite-horizon control method, the authors derived two bounds on when the constraints are active in a stable plant. Agarwal et al.53 used an algorithm based on Bayesian inference networks and statistics to determine which input and output constraints can be changed to optimize the process. The inference network is used to determine which variable should serve as evidence in the statistical calculation; based on a set target, the probabilities of the other variables are calculated and used to decide whether or not to change the constraint. This method applies to all MPC formulations.

3.9. Covariance Matrix and Kalman Filter Gain. This section presents a review of several methods for determining the covariance matrices in a Kalman filter, which is necessary when a state needed to calculate the MPC control moves must be estimated in the absence of a measurement of that state. Since the covariance matrices are not known a priori, they must be estimated using one of four known methods: Bayesian approaches,54,55 maximum likelihood,56,57 covariance matching,58 and correlation techniques.59-62 According to Odelson et al.,19 Bayesian and maximum likelihood methods have not been used widely because their computation time is often excessive. Covariance matching is the calculation of the covariance matrices from the residuals of the state estimation, but it has been shown to give biased estimates of the true values.19 Odelson et al.19,20 introduced a new method for estimating the covariance matrices based on correlation techniques and gave necessary and sufficient conditions for the estimate to approach the true value as the sample size increases. Odelson et al.19 used this method to solve an optimization problem for the optimal (Kalman) filter gain, even though there may not be enough information to find the covariance uniquely. Since the optimization is nonlinear, the authors suggested initially setting the filter gain L to some value, processing the data and computing A1 and R̂1, and then solving the linear least-squares minimization over only Rv and PCᵀ. Substituting these values into the constraint then gives a new value for L; this procedure should be repeated until convergence.19 The authors tested the method on a laboratory-sized reactor and in an industrial-sized facility at the Eastman Chemical Company.
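Once estimates of Qw and Rv are available, from autocovariance least-squares or any other method, the filter gain of Section 2 follows from iterating the Riccati equation to convergence. The minimal NumPy sketch below does exactly that; it is an illustration of the Section 2 equations, not the ALS algorithm of refs 19 and 20.

```python
import numpy as np

def steady_state_kalman_gain(A, C, G, Qw, Rv, tol=1e-9, max_iter=10000):
    """Iterate the Riccati recursion of Section 2 until P(k+1|k) converges,
    then return the gain L = P C^T (C P C^T + Rv)^-1 and the converged P."""
    n = A.shape[0]
    P = np.eye(n)
    for _ in range(max_iter):
        S = C @ P @ C.T + Rv
        K = P @ C.T @ np.linalg.inv(S)
        P_next = A @ P @ A.T + G @ Qw @ G.T - A @ K @ C @ P @ A.T
        if np.max(np.abs(P_next - P)) < tol:
            P = P_next
            break
        P = P_next
    L = P @ C.T @ np.linalg.inv(C @ P @ C.T + Rv)
    return L, P
```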



Åkesson et al.63 proposed a predictor-corrector algorithm, which can be viewed as an extension of an interior-point method for solving quadratic programming (QP) problems, to solve the optimization problem and determine the optimal Kalman filter. Valappil and Georgakis64 used a Taylor series expansion around the nominal parameter values, on the assumption that the dependence is linear; this gives a closed-form solution for the covariance matrix. To account for nonlinearities, the authors also proposed using Monte Carlo simulations to capture the dependence on the fitted parameters.

4. Auto (Self) Tuning Methods

This section provides a review of several autotuning procedures available in the literature for MPC. The obvious advantage of using an autotuning method is that the control engineer is not required to have a great amount of system knowledge to initialize the tuning procedure. Similarly, the tuning parameters are updated along with the optimization algorithm; thus, they are always set to the optimal values. However, this comes at a computational price, since two optimization problems must now be solved per time step, rather than the one required in an offline tuning strategy.

Han et al.65 proposed an autotuning strategy for unconstrained DMC using particle swarm optimization (PSO). As a starting point for the tuning values within the optimization, the authors suggested using the method proposed in ref 13. Their procedure to tune the parameters via PSO is outlined in ref 65. Once the criteria are met in the sixth step, the output of the optimization is a solution to the min-max problem posed within the paper: minimize the tuning parameters while maximizing the use of the process model parameters. This strategy allows for rapid autotuning each time the optimization problem is solved. Suzuki et al.66 also used PSO to tune MPC. Instead of tuning all of the parameters, the authors let the user set the prediction and control horizons offline and then use PSO to determine automatically the weights on the magnitude of the inputs, the outputs, and the rate of change of inputs. This saves computational time, since the optimization that determines the tuning parameters is smaller than that in ref 65. van der Lee et al.67 used genetic algorithms (GA) and multiobjective fuzzy decision making (MOFDM) to autotune unconstrained DMC. The process involves (1) setting the tuning parameters to initial values; (2) running simulations for the set of initial parameters; (3) calculating the objective for each tuning parameter set; (4) performing MOFDM; (5) discarding the worst set of tuning parameters and replacing it with the best; and (6) performing GA if fewer than the predetermined number of generations has been reached, and otherwise exiting with the optimal tuning parameters. This was applied to a hot-water mixing tank with four different definitions of optimality. Al-Ghazzawi et al.68 proposed an autotuning strategy for constrained DMC that exploits the sensitivity of the closed-loop response to the tuning parameters. The online tuning strategy involves a linear approximation of the relationship between the process output and the MPC tuning parameters,68 which allows for automatic adjustment of the tuning parameters so that the control action does not violate the constraints. Baric et al.69 have proposed a redefinition of the objective function to include the optimization of the weights on the magnitude of the inputs.
It involves changing the traditional terminal-state cost function into a new linear programming cost function with a vector of optimizers, which represent the upper bounds on the components of the vectors x(k) and u(k), respectively.69 Once the new optimization problem is solved, the control engineer not only obtains the solution for all feasible state variables, but also determines the input weights for the specified range.
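The PSO and GA procedures above share a common skeleton: propose candidate tuning vectors, score each by a closed-loop simulation, and keep the best. The sketch below is a deliberately simple random-search stand-in for that loop, not any of the cited algorithms; simulate_closed_loop is a hypothetical user-supplied function returning a scalar performance index such as the ISE.

```python
import random

def autotune_random_search(simulate_closed_loop, bounds, n_trials=200, seed=0):
    """Crude random-search stand-in for the PSO/GA autotuners discussed above.

    simulate_closed_loop : hypothetical callable mapping a parameter dict to a
        scalar cost (lower is better), e.g., a simulated ISE.
    bounds : dict of parameter name -> (low, high), e.g.
        {"P": (5, 50), "M": (1, 10), "lambda": (0.0, 2.0)}.
    """
    rng = random.Random(seed)
    best_params, best_cost = None, float("inf")
    for _ in range(n_trials):
        params = {name: rng.uniform(lo, hi) for name, (lo, hi) in bounds.items()}
        for name in ("P", "M"):               # horizons must be integers
            if name in params:
                params[name] = int(round(params[name]))
        cost = simulate_closed_loop(params)
        if cost < best_cost:
            best_params, best_cost = params, cost
    return best_params, best_cost
```

PSO and GA replace the independent random draws with swarm- or population-based updates, but the interface, candidate parameters in and closed-loop performance out, is the same.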

Table 8. Parameter Values of the CSTR Model

R = 8.314 kJ/(kmol·K)
Z = 2.000 × 10⁴ 1/min
Ea = 5.600 × 10⁴ kJ/kmol
−∆H = 5.000 × 10⁴ kJ/kmol
ρs = 9.000 × 10² kg/m³
c = 2.200 kJ/(kg·K)
Ti = 2.952 × 10² K
τ = 1.800 × 10² min
V = 1.000 × 10⁻² m³
CAi = 1.000 × 10¹ kmol/m³

This strategy is applicable to constrained DMC. Sanchez et al.17 proposed an algorithm to tune restricted-structure control systems using subspace identification of the process model. Once the first N terms of the impulse response model are known, the multivariable parameters can be calculated by minimizing a finite-horizon LQG optimization problem subject to nonlinear constraints. Liu and Wang70 proposed two algorithms for the self-tuning of GPC-type controllers. Their recursive algorithms ensure both that the objective function reaches a minimum and that the tuning parameters converge to their optimal values. The method applies to unconstrained GPC only, since it requires solving for a closed-form control increment. Similarly, Tsai et al.71,72 proposed a self-tuning strategy for a GPC controller. They applied the strategy to an oil-cooling machine used for cooling high-speed machine tools71 and to a plastic injection molding process.72 The algorithm is based on using a pseudo-random binary signal (PRBS) as a test signal for the initial parameter estimation via recursive least-squares estimation (RLSE). Once this is done, the algorithm automatically calculates the optimal values of the tuning parameters.

5. Case Study

This section presents a comparison of closed-loop performances obtained using several tuning guidelines. The process is a continuous-stirred tank reactor (CSTR), in which a first-order reaction takes place. A model predictive controller is implemented to control the temperature of the reactor. The process has a model of the form:

( ) ( )

CAi - CA dCA -Ea ) -Z exp CA + dt RT τ Ti - T -Ea (-∆H)Z Q dT ) exp + C + dt Fsc RT A τ FscV The model parameter values are given in Table 8. The chemical reactor has three equilibrium points (steady states) corresponding to Q ) 3 kJ/min: a high A-concentration, low temperature stable node (9.969 kmol/m3, 323.26 K), a low A-concentration, high temperature stable node (0.403 kmol/m3, 564.83 K), and a mid A-concentration, mid temperature saddle point (5.190 kmol/m3, 443.94 K). Using a first-order truncated Taylor’s series expansion of the model around the high A-concentration, low temperature stable equilibrium point, the linear approximation of the nonlinear model around the steady state is described by A)

[

-0.00557 -1.15 × 10-5 0.000451 0.00527 0 b) 0.0505 c ) [0 1 ]

[

]

]

Ind. Eng. Chem. Res., Vol. 49, No. 8, 2010

3513

where the manipulated input is u = Q and the controlled output is y = T. The control objective is to operate the reactor at the low-concentration, high-temperature stable equilibrium point, while initially (at time t = 0) the reactor is at the high-concentration, low-temperature stable steady state. Using the tuning strategies outlined in refs 13, 29, and 31, closed-loop simulations of the CSTR were carried out using the model predictive control toolbox of MATLAB, with the settings in the estimation tab of the toolbox left at their default values. The strategy in ref 13 gives P = 7, M = 4, and λ = 0.0342; the strategy in ref 29 gives P = 38, M = 9, and λ = 0.4; and the strategy in ref 31 gives P = 6, M = 1, and λ = 0.4293. A sketch showing how these parameters enter an unconstrained predictive control law is given below.
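To make the role of these parameters concrete, the following Python sketch builds a step-response (dynamic) matrix for a SISO model and computes the unconstrained DMC-type move sequence ΔU = (GᵀG + λI)⁻¹Gᵀe, where G is P × M and e is the vector of predicted errors. It is a generic illustration of how P, M, and the move-suppression weight λ shape the control action, not the MATLAB toolbox implementation used in the simulations; the first-order step response used to fill G is an assumed stand-in for the linearized CSTR model.

```python
import numpy as np

def dynamic_matrix(step_resp, P, M):
    """Assemble the P x M dynamic matrix G from step-response coefficients g1, g2, ..."""
    G = np.zeros((P, M))
    for i in range(P):
        for j in range(M):
            if i >= j:
                G[i, j] = step_resp[i - j]
    return G

def dmc_moves(step_resp, P, M, lam, error):
    """Unconstrained DMC solution: dU = (G'G + lam*I)^-1 G' e."""
    G = dynamic_matrix(step_resp, P, M)
    return np.linalg.solve(G.T @ G + lam * np.eye(M), G.T @ error)

# Illustrative first-order step response g_i = K*(1 - exp(-i*Ts/tau_p))
K, tau_p, Ts = 1.0, 275.0, 25.0
g = K * (1.0 - np.exp(-np.arange(1, 60) * Ts / tau_p))

for (P, M, lam) in [(7, 4, 0.0342), (38, 9, 0.4), (6, 1, 0.4293)]:
    e = np.ones(P)                       # unit set-point error over the prediction horizon
    dU = dmc_moves(g, P, M, lam, e)
    print(f"P={P:2d} M={M} lam={lam:<7}: first move = {dU[0]: .3f}")
```

Running the loop for the three parameter sets shows directly how a larger move-suppression weight and a shorter control horizon temper the first input move, which is the qualitative behavior discussed next.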

The responses of the reactor temperature are depicted in Figure 1. One can see that the first strategy yields a faster response, but at the premium of a more aggressive manipulated-input profile (rate of thermal energy supplied to the reactor), as shown in Figure 2. Since this plant has a large apparent time constant (275 min), the relatively small gain in output response time does not justify the much larger input moves required to achieve it. Moreover, the third tuning strategy produced no overshoot, while the first produced the largest overshoot of the three strategies. On this basis, the third tuning strategy yields a better output response than the first. With the second strategy, both the final prediction horizon and the control horizon are larger than the values obtained with the FOPDT tuning strategies. From Figure 1, one can see that the response time obtained with this strategy lies between those of the two FOPDT strategies, and Figure 2 shows that the input it requires is, overall, about the same as that needed with the strategy of ref 31; the increased computational cost of the larger P and M is therefore not clearly justified here. This example illustrates a trade-off between, on the one hand, higher computational cost and robustness and, on the other, ease of tuning at the expense of not being able to operate at an unstable equilibrium point. If the control engineer wishes to operate within the stable equilibrium zone, or to move from one stable equilibrium point to another, the strategy in ref 31 yields good results. However, if one needs to operate in an unstable equilibrium zone, refs 13 and 31 cannot be used, since they apply to stable plants only, and the control engineer must use the tuning guidelines proposed in ref 29.

Figure 1. Temperature responses for the three tuning strategies.

Figure 2. Manipulated input responses for the three tuning strategies.
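The comparison above is qualitative; when the closed-loop trajectories are available as sampled data, simple metrics such as overshoot, settling time, and an apparent time constant (the time to reach 63.2% of the total change) can be computed directly, as in the sketch below. The function name, the 2% settling band, and the synthetic test trajectory are illustrative choices, not quantities taken from the original study.

```python
import numpy as np

def step_metrics(t, y, y0, yss, band=0.02):
    """Overshoot (%), settling time, and apparent time constant of a response
    sampled at times t, starting at y0 and settling to yss."""
    span = yss - y0
    overshoot = max(0.0, (np.max(y) - yss) / abs(span)) * 100.0
    inside = np.abs(y - yss) <= band * abs(span)         # within the settling band
    settled_from = next((i for i in range(len(y)) if inside[i:].all()), len(y) - 1)
    t_settle = t[settled_from]
    tau_apparent = t[np.argmax(np.abs(y - y0) >= 0.632 * abs(span))]   # 63.2% time
    return overshoot, t_settle, tau_apparent

# Illustrative trajectory: first-order approach to the set point with tau = 275 min
t = np.linspace(0.0, 2000.0, 401)
y = 323.26 + (564.83 - 323.26) * (1.0 - np.exp(-t / 275.0))
print(step_metrics(t, y, y0=323.26, yss=564.83))
```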

6. Concluding Remarks

This paper provided a broad review of tuning methods for several classes of MPC formulations. The review covered both theoretically based strategies and heuristic/industrial methods for tuning not only DMC and GPC, but also general state-space formulations of MPC and more specialized formulations such as MPL MPC. The study also touched on methods for calculating the covariance matrices for state estimation and the Kalman filter gain, along with auto (self) tuning procedures.

As stated before, the first act of "tuning" in MPC is to develop an appropriate process model. If the model is accurate enough, the rest of the tuning will be straightforward; if the controller exhibits poor performance, the model should be considered poor (inaccurate) until proven otherwise. Methods for proper model development or selection for MPC applications were not reviewed in this paper.

In recent years, advances in subspace identification and automated testing have allowed practitioners to dramatically reduce the cost of tuning MPC applications.22 Advances in autocovariance least-squares technology are expected to make Kalman filtering much more accessible by automatically identifying the main tuning parameters.

Acknowledgment

This study was supported in part by the National Science Foundation (NSF) through grant CBET-0651706. The authors would like to thank Thomas A. Badgwell for his insightful comments on MPC tuning methods.

Notation

c = heat capacity of the reacting solution
dmax = maximum dead time
gi = estimated coefficients of the step response model
hmin = smallest output constraint bound
k = adjustable gain factor in ref 21, typically set to 1
k1 = time (in samples) when the constraint becomes active in ref 50
k1 and k2 = lower and upper bounds of the output constraint window in ref 46
m = detuning factor in refs 14 and 15
na = order of the A polynomial in the GPC formulation
nb = order of the B polynomial in the GPC formulation
ni = number of inputs in ref 50
nw = output constraint window length
t10 = rise time for 10% of process steady state
t60 = rise time for 60% of process steady state


t80 = rise time for 80% of process steady state
t90 = rise time for 90% of process steady state
t95 = rise time for 95% of process steady state
t95,i = rise time for 95% of yi process steady state
td = process delay time
tp(t) and tp(t - 1) = current and previous tuning parameter value, respectively
tr = process rise time
ts = process settling time
umax = maximum input
ujmax = maximum value for input uj in ref 50
umin = minimum input
ujmin = minimum value for input uj in ref 50
uZ,s = input zero direction for each respective input
x0 and xM = vectors of initial and final state values
ymax = maximum output
ymin = minimum output
yZ,s = output zero direction for each respective output
A = shaped plant's state matrix in ref 49
B = shaped plant's input matrix in ref 49
B(1) = B polynomial of the GPC process model evaluated at 1
CAi = concentration of reactant A in the inlet stream of the CSTR
CA = concentration of reactant A in the outlet stream of the CSTR
Ea = activation energy
Ei = maximum additive errors in the step response model
G = lower triangular matrix of estimated model coefficients
H = output constraint matrix
K(T) = condition number of the Jordan decomposition of the state matrix A
Kp = PID controller gain from ref 31
Krs = FOPDT model gain for R outputs and S inputs
M = control horizon
N = model horizon
P0 = initial prediction horizon
P1f = final prediction horizon for first constraint in ref 39
P2f = final prediction horizon for second constraint in ref 39
P = final prediction horizon
Pd = desired performance
Pm(t) = current measurement of the performance
Q = weights on the outputs
Qr = output weight for R outputs
Q = rate of thermal energy supplied to the CSTR
R = universal gas constant
RPN = robust performance number
T = temperature of the outlet stream of the CSTR
Ti = temperature of the inlet stream of the CSTR
Ts = sampling time (period)
V = volume of reacting mixture in the CSTR
X = stabilizing solution of the LQR problem in ref 49
Z = stabilizing solution of the Kalman filter in ref 49
Z = pre-exponential factor of the reaction rate equation
λmax = absolute value of the maximum eigenvalue of the state matrix A
λrel = constant computed from the initial values of the weights in refs 14 and 15
ρ = spectral radius
ρs = solution density
τ = CSTR residence time
τp = FOPDT process time constant
τd,rs = FOPDT process delay constant(s) for R outputs and S inputs
τrs = FOPDT process time constant(s) for R outputs and S inputs
ωsup = frequency where RPN reaches its maximal value
Γ = tuning factor in ref 31 based on the desired performance of the system
∆umax = maximum change in input

∆ujmax = maximum change in value for each input in ref 50
(-∆H) = heat of reaction
Λ = weights on the rate of change of inputs

Literature Cited

(1) Garcia, C. E.; Morari, M. Internal Model Control. 1. A Unifying Review and Some New Results. Ind. Eng. Chem. Process Des. Develop. 1982, 21 (2), 308–323.
(2) Morari, M.; Lee, J. H. Model Predictive Control: Past, Present, and Future. Comput. Chem. Eng. 1999, 23, 667–682.
(3) Garcia, C. E.; Prett, D. M.; Morari, M. Model Predictive Control: Theory and Practice - A Survey. Automatica 1989, 25 (3), 335–348.
(4) Bequette, B. W. Nonlinear Model Prediction Control: A Personal Perspective. Can. J. Chem. Eng. 2007, 85 (4), 408–415.
(5) Allgöwer, F.; Zheng, A. Nonlinear Model Predictive Control; Progress in Systems and Control Theory; Birkhäuser: Berlin, 2000; Vol. 26.
(6) Cannon, M.; Kouvaritakis, B. Nonlinear Predictive Control: Theory and Practice; IEE Control Series; The Institution of Electrical Engineers: London, 2001; Vol. 6197.
(7) Qin, S. J.; Badgwell, T. A. A Survey of Industrial Model Predictive Control Technology. Control Eng. Pract. 2003, 11, 733–764.
(8) Findeisen, R.; Allgöwer, F.; Biegler, L. T. Assessment and Future Directions of Nonlinear Model Predictive Control; Lecture Notes in Control and Information Sciences; Allgöwer, F., Fleming, P., Kokotovic, P., Kurzhanski, A. B., Kwakernaak, H., Rantzer, A., Tsitsiklis, J. N., Eds.; Springer-Verlag: Berlin, 2007; Vol. 358.
(9) Zadeh, L. A.; Whalen, B. H. On Optimal Control and Linear Programming. IRE Trans. Automatic Control 1962, 7 (4), 45.
(10) Propoi, A. I. Use of LP Methods for Synthesizing Sampled-Data Automatic Systems. Automation Remote Control 1963, 24, 837.
(11) Cutler, C. R.; Ramaker, B. L. Dynamic Matrix Control - A Computer Control Algorithm. In Proceedings of the Joint Automatic Control Conference, San Francisco, CA, August 13-15, 1980.
(12) Richalet, J.; Rault, A.; Testud, J. L.; Papon, J. Model Predictive Heuristic Control: Applications to Industrial Processes. Automatica 1978, 14, 413–428.
(13) Shridhar, R.; Cooper, D. J. A Tuning Strategy for Unconstrained Multivariable Model Predictive Control. Ind. Eng. Chem. Res. 1998, 37, 4003–4016.
(14) McIntosh, A. R.; Shah, S. L.; Fisher, D. G. Selection of Tuning Parameters for Adaptive Generalized Predictive Control. In Proceedings of the 1989 American Control Conference, Pittsburgh, PA, June 21-23, 1989.
(15) McIntosh, A. R.; Shah, S. L.; Fisher, D. G. Analysis and Tuning of Adaptive Generalized Predictive Control. Can. J. Chem. Eng. 1991, 69, 97–110.
(16) Zhang, Q.; Li, S. Enhanced Performance Assessment of Subspace Model-based Predictive Controller with Parameters Tuning. Can. J. Chem. Eng. 2007, 85, 537–548.
(17) Sanchez, A.; Katebi, M. R.; Johnson, M. A. A Tuning Algorithm for Multivariable Restricted Structure Control Systems Using Subspace Identification. Int. J. Adapt. Control Signal Process. 2004, 18, 745–770.
(18) Aspen SmartStep Advanced; Aspentech, 2008; available from http://www.aspentech.com/products/aspen-smartstep-advanced.cfm.
(19) Odelson, B. J.; Lutz, A.; Rawlings, J. B. The Autocovariance Least-Squares Method for Estimating Covariances: Application to Model-Based Control of Chemical Reactors. IEEE Trans. Control Syst. Technol. 2006, 14 (3), 532–540.
(20) Odelson, B. J.; Rajamani, M. R.; Rawlings, J. B. A New Autocovariance Least-Squares Method for Estimating Noise Covariances. Automatica 2006, 42, 303–308.
(21) Yamuna Rani, K.; Unbehauen, H. Study of Predictive Controller Tuning Methods. Automatica 1997, 33 (12), 2243–2248.
(22) Badgwell, T. A. Personal communication, 2008.
(23) Bitmead, R. R.; Gevers, M.; Wertz, V. Adaptive Optimal Control; Prentice-Hall: Englewood Cliffs, NJ, 1990.
(24) Wojsznis, W.; Gudaz, J.; Blevins, T.; Mehta, A. Practical Approach to Tuning MPC. ISA Trans. 2003, 42, 149–162.
(25) Nagrath, D.; Prasad, V.; Bequette, B. W. Model Predictive Control of Open-Loop Unstable Cascade Systems. In Proceedings of the 2000 American Control Conference, Chicago, IL, June 28-30, 2000.
(26) Maurath, P. R.; Laub, A. J.; Seborg, D. E.; Mellichamp, D. A. Predictive Controller Design by Principal Components Analysis. Ind. Eng. Chem. Res. 1988, 27, 1204–1212.

(27) Maurath, P. R.; Mellichamp, D. A.; Seborg, D. E. Predictive Controller Design for Single-Input/Single-Output (SISO) Systems. Ind. Eng. Chem. Res. 1988, 27, 956–963.
(28) Cutler, C. R. Dynamic Matrix Control. An Optimal Multivariable Control with Constraints. Ph.D. Thesis, University of Houston, Houston, 1983.
(29) Georgiou, A.; Georgakis, C.; Luyben, W. L. Nonlinear Dynamic Matrix Control for High-Purity Distillation Columns. AIChE J. 1988, 34 (8), 1287–1298.
(30) Ogunnaike, B. A. Dynamic Matrix Control: A Nonstochastic, Industrial Process Control Technique with Parallels in Applied Statistics. Ind. Eng. Chem. Fundam. 1986, 25, 712–718.
(31) Hinde, R. F.; Cooper, D. J. A Pattern-based Approach to Excitation Diagnostics for Adaptive Process Control. Chem. Eng. Sci. 1994, 49 (9), 1403–1415.
(32) Genceli, H.; Nikolaou, M. Robust Stability Analysis of Constrained l1-norm Model Predictive Control. AIChE J. 1993, 39 (12), 1954–1965.
(33) Banerjee, P.; Shah, S. L. Tuning Guidelines for Robust Generalized Predictive Control. In Proceedings of the 31st IEEE Conference on Decision and Control, Tucson, AZ, December 16-18, 1992.
(34) Clarke, D. W.; Mohtadi, C.; Tuffs, P. S. Generalized Predictive Control - Part II. Extensions and Interpretations. Automatica 1987, 23 (2), 149–160.
(35) Clarke, D. W.; Mohtadi, C. Properties of Generalized Predictive Control. Automatica 1989, 25 (6), 859–875.
(36) Clarke, D. W.; Mohtadi, C.; Tuffs, P. S. Generalized Predictive Control - Part I. The Basic Algorithm. Automatica 1987, 23 (2), 137–148.
(37) Yoshitani, N.; Hasegawa, A. Model-Based Control of Strip Temperature for the Heating Furnace in Continuous Annealing. IEEE Trans. Control Syst. Technol. 1998, 6 (2), 146–156.
(38) Karacan, S.; Hapoğlu, H.; Alpbaz, M. Application of Optimal Adaptive Generalized Predictive Control to a Packed Distillation Column. Chem. Eng. J. 2001, 84, 389–396.
(39) Edouard, D.; Dufour, P.; Hammouri, H. Observer based Multivariable Control of a Catalytic Reverse Flow Reactor: Comparison between LQR and MPC Approaches. Comput. Chem. Eng. 2005, 29, 851–865.
(40) McIntosh, A. R.; Shah, S. L.; Fisher, D. G. Performance Tuning of Adaptive Generalized Predictive Control. In Proceedings of the 11th IFAC World Congress, Tallinn, USSR, August 13-17, 1990.
(41) Trierweiler, J. O.; Farina, L. A. RPN Tuning Strategy for Model Predictive Control. J. Process Control 2003, 13, 591–598.
(42) Lee, J. H.; Yu, Z. H. Tuning of Model Predictive Controllers for Robust Performance. Comput. Chem. Eng. 1994, 18 (1), 15–37.
(43) Necoara, I.; van den Boom, T. J. J.; De Schutter, B.; Hellendoorn, H. Stabilization of Max-plus-linear Systems Using Model Predictive Control: The Unconstrained Case. Automatica 2008, 44, 971–981.
(44) van den Boom, T. J. J.; De Schutter, B. MPC for Max-plus-linear Systems: Closed-loop Behavior and Tuning. In Proceedings of the 2001 American Control Conference, Arlington, VA, June 25-27, 2001.
(45) van den Boom, T. J. J.; De Schutter, B.; Necoara, I. On MPC for Max-plus-linear Systems: Analytic Solution and Stability. In Proceedings of the 44th IEEE Conference on Decision and Control, Seville, Spain, December 12-15, 2005.
(46) Rawlings, J. B.; Muske, K. R. The Stability of Constrained Receding Horizon Control. IEEE Trans. Automatic Control 1993, 38 (10), 1512–1516.
(47) Scokaert, P. O. M.; Rawlings, J. B. Constrained Linear Quadratic Regulation. IEEE Trans. Automatic Control 1998, 43 (8), 1163–1169.
(48) Rowe, C.; Maciejowski, J. M. Tuning Robust Model Predictive Controllers using LQG/LTR. In Proceedings of the 14th Triennial World Congress, Beijing, P.R. China, July 5-9, 1999.
(49) Rowe, C.; Maciejowski, J. Tuning MPC using H∞ Loop Shaping. In Proceedings of the 2000 American Control Conference, Chicago, IL, June 28-30, 2000.
(50) Sarimveis, H.; Genceli, H.; Nikolaou, M. Rigorous Design of Robust Predictive Controllers for Processes with More Inputs than Outputs. Comput. Chem. Eng. 1996, 20 (Supplement), S1065–S1070.


(51) Garriga, J. L.; Soroush, M. Model Predictive Controller Tuning via Eigenvalue Placement. In Proceedings of the 2008 American Control Conference, Seattle, WA, June 11-13, 2008.
(52) Seborg, D. E.; Edgar, T. F.; Mellichamp, D. A. Process Dynamics and Control, 2nd ed.; John Wiley & Sons, Inc.: Hoboken, NJ, 2004.
(53) Agarwal, N.; Huang, B.; Tamayo, E. C. Assessing Model Prediction Control (MPC) Performance. 2. Bayesian Approach for Constraint Tuning. Ind. Eng. Chem. Res. 2007, 46, 8112–8119.
(54) Alspach, D. A Parallel Filtering Algorithm for Linear Systems with Unknown Time-varying Noise Statistics. IEEE Trans. Automatic Control 1974, 19 (5), 552–556.
(55) Hilborn, C.; Lainiotis, D. Optimal Estimation in the Presence of Unknown Parameters. IEEE Trans. Syst., Sci., & Cybernetics 1969, 5 (1), 38–43.
(56) Bohlin, T. Four Cases of Identification of Changing Systems. In System Identification: Advances and Case Studies; Mehra, R., Lainiotis, D., Eds.; Academic: New York, 1976.
(57) Kashyap, R. Maximum Likelihood Identification of Stochastic Linear Systems. IEEE Trans. Automatic Control 1970, 15 (1), 25–34.
(58) Myers, K.; Tapley, B. Adaptive Sequential Estimation with Unknown Noise Statistics. IEEE Trans. Automatic Control 1976, 21 (4), 520–523.
(59) Mehra, R. On the Identification of Variances and Adaptive Kalman Filtering. IEEE Trans. Automatic Control 1970, 15 (12), 175–184.
(60) Mehra, R. Approaches to Adaptive Filtering. IEEE Trans. Automatic Control 1972, 17 (5), 903–908.
(61) Bélanger, P. Estimation of Noise Covariance Matrices for a Linear Time-Varying Stochastic Process. Automatica 1974, 10, 267–275.
(62) Carew, B.; Bélanger, P. Identification of Optimum Filter Steady-state Gain for Systems with Unknown Noise Covariances. IEEE Trans. Automatic Control 1973, 18 (6), 582–587.
(63) Åkesson, B. M.; Jørgensen, J. B.; Poulsen, N. K.; Jørgensen, S. B. A Generalized Autocovariance Least-squares Method for Kalman Filter Tuning. J. Process Control 2008, 18, 769–779.
(64) Valappil, J.; Georgakis, C. Systematic Estimation of State Noise Statistics for Extended Kalman Filters. AIChE J. 2000, 46 (2), 292–308.
(65) Han, K.; Zhao, J.; Qian, J. A Novel Robust Tuning Strategy for Model Predictive Control. In Proceedings of the 6th World Congress on Intelligent Control and Automation, Dalian, China, June 21-24, 2006.
(66) Suzuki, R.; Kawai, F.; Ito, H.; Nakazawa, C.; Fukuyama, Y.; Aiyoshi, E. Automatic Tuning of Model Predictive Control Using Particle Swarm Optimization. In Proceedings of the 2007 IEEE Swarm Intelligence Symposium, Honolulu, HI, April 1-5, 2007.
(67) van der Lee, J. H.; Svrcek, W. Y.; Young, B. R. A Tuning Algorithm for Model Predictive Controllers Based on Genetic Algorithms and Fuzzy Decision Making. ISA Trans. 2008, 47, 53–59.
(68) Al-Ghazzawi, A.; Ali, E.; Nouh, A.; Zafiriou, E. On-line Tuning Strategy for Model Predictive Controllers. J. Process Control 2001, 11, 265–284.
(69) Barić, M.; Baotić, M.; Morari, M. On-line Tuning of Controllers for Systems with Constraints. In Proceedings of the 44th IEEE Conference on Decision and Control, Seville, Spain, December 12-15, 2005.
(70) Liu, W.; Wang, G. Auto-Tuning Procedure for Model-based Predictive Controller. In Proceedings of the 2000 IEEE International Conference on Systems, Man, and Cybernetics, Nashville, TN, October 8-11, 2000.
(71) Tsai, C.; Teng, F.; Lin, S. Direct Self-Tuning Model Following Predictive Control of a Variable-Frequency Oil-Cooling Machine. In Proceedings of the 2003 American Control Conference, Denver, CO, June 4-6, 2003.
(72) Tsai, C.; Lu, C. Multivariable Self-Tuning Temperature Control for Plastic Injection Molding Process. IEEE Trans. Industry Appl. 1998, 34 (2), 310–318.

Received for review February 25, 2009. Revised manuscript received September 19, 2009. Accepted February 18, 2010.

IE900323C