Ind. Eng. Chem. Res. 2005, 44, 3575-3584
Model Predictive Control Algorithm for Prioritized Objective Inferential Control of Unmeasured States Using Propositional Logic

Christopher E. Long and Edward P. Gatzke*

Department of Chemical Engineering, University of South Carolina, Columbia, South Carolina 29208

*Corresponding author. Tel.: (803) 777-1159. Fax: (803) 777-8265. E-mail: [email protected].
This paper presents a model predictive control (MPC) algorithm utilizing a state-space model representation, allowing for the inferential control of unmeasured process states using a prioritized control objective formulation. Knowledge of the unmeasured states is gained through the use of an external state estimation routine, while mixed-integer methods are used to implement the prioritization of the control objectives. The capabilities of the algorithm are demonstrated on an experimental multivariable air pressure tank system.

1. Introduction

A dynamic process may have many process states that describe the dynamic response of the system. At any given time, some of these states may not be known. Some process states may be too costly, too time-consuming, or simply impossible to measure directly. Nevertheless, it is often desirable to control these unmeasured states in a systematic manner, both during reference change transitions and in the presence of disturbances.

Model predictive control (MPC) methods have become quite popular in industry because of their ability to control a broad range of processes. For a detailed review of MPC methods, see refs 1-6. These methods typically are capable of handling multivariable systems, enforcing hard and soft constraints on both inputs and outputs, and accommodating process time delays and difficult process dynamics. The controller operates by using process data at discrete time intervals to formulate an optimization problem representing the minimization of some objective function. The optimization problem is solved at a given time step for the optimal control trajectory, and the appropriate control move is implemented. This procedure is repeated at each time step to provide online feedback control.

MPC works well under the premise that three goals can be met. First, the system in question must be modeled accurately. Second, the proper optimization problem must be formulated using the model. Finally, the optimization problem must be solved efficiently to yield the necessary control move in the allotted sampling time. Variations in these three steps affect the effectiveness of a particular MPC method. This paper demonstrates that MPC can be used to accommodate a system that has unmeasured states requiring regulation. Such a design benefits from a state-space formulation with prioritized control objectives.

Conventional MPC algorithms minimize an objective function with weights penalizing both setpoint errors and rapid changes in input levels. The problem can include additional constraints according to the needs of the problem's domain. This may lead to setpoint constraints on process measurements, one-sided soft constraints for process variable upper and lower bounds,
or constraints on the input movements for actuator limitations.

In contrast to conventional MPC, the use of a state-space model along with an external state estimation routine can explicitly define all states of the process across the entire prediction horizon. Thus, any unmeasured model states that may enter unsafe or undesirable operating regions can be inferentially controlled. This is accomplished by incorporating constraints on the unmeasured state estimates across the horizon into the optimization problem. Regardless of the model type chosen in the conventional linear MPC method, all variables are continuous and the optimization problem is formulated either as a linear program (LP) or a quadratic program (QP), depending on the norm used to characterize deviations from the ideal operating point of the system.

Mixed-integer formulations have been used to discretize and prioritize control objectives.7-13 Thus, the controller can be relied on to accommodate high-priority constraints associated with safety and equipment limits prior to attempting to meet the tighter constraints typically associated with production specifications. One advantage of methods based on propositional logic is that control objectives are explicitly stated and prioritized, thus avoiding some of the uncertainties associated with MPC controller tuning. For example, in a multivariable system with multiple saturated inputs, it may be difficult to determine the resulting behavior of the traditional MPC method. The main disadvantage of these methods is the computational complexity of the optimization problem to be solved online. In mixed-integer formulations, additional constraints are added that describe whether discrete objectives have been met and whether the objectives were met in a specified ranking of priority. Large objective function penalties are assigned if the discretized objectives are not met, and even larger penalties are placed on not meeting objectives in order of their priority. These objective function penalties are added to those used in the conventional MPC formulation. Again, the state-space formulation offers the additional advantage that constraints can be placed on the explicitly defined unmeasured states, and errors associated with these states can be penalized in the objective function. Formulation of the objective function with priority control incorporates both binary and continuous variables, as opposed to the exclusive use of continuous variables in the conventional case. The result of this formulation is a
mixed-integer linear program (MILP) or a mixed-integer quadratic program (MIQP). Efficient solution of these optimization problems is crucial for real-time control, but this topic is beyond the scope of this work. Given that the MPC task consists of solving a new optimization problem at each time step, the limitation of the approach becomes whether the posed problem can be solved within the allotted time. Efficient solvers for LP, QP, MILP, and MIQP problems are available. The specific details of the problem, such as its structure, the total number of variables, the number of binary variables, and so forth, dictate whether these solvers are sufficiently efficient for a given application.

This paper presents an MPC algorithm that is capable of controlling unmeasured states while using a state-space formulation with a prioritized list of control objectives. Multiple discrete objectives can be prioritized in a manner where any number of objectives are assigned the same priority using the "and" clause from propositional logic.14 The proposed capabilities of the algorithm have been tested for a variety of applications in a simulation environment and also on an experimental system. Results from the application of the controller to an experimental multivariable pressure tank system15 are presented as a case study. In this scenario, various discrete control objectives are used to inferentially constrain unmeasured states of interest.

2. Controller Formulation
Using a prioritized objective control approach,7,16 the objective function to be minimized by the controller at every time step k is

Φ(k) = Γ_P^T P(k) + Γ_O^T O(k) + Σ_{i=1}^{p} ||Γ_e^T e(k+i)|| + Σ_{i=0}^{m-1} ||Γ_∆u^T ∆u(k+i)||    (1)
Here, O and P are vectors of binary elements, that is, 0 or 1, that describe whether the discretized objectives have been met and whether the objectives were met in their specified order according to their associated priority. The variables m and p are the move and prediction horizons of the controller, e is the vector of errors in either the states or outputs that represent the difference between the modeled value and the desired reference, and ∆u is a vector defining the optimal input movements, that is, the difference between the input positions at two consecutive time steps. Γ_O and Γ_P are vectors of weights corresponding to each discrete objective and to each priority, respectively. Γ_e and Γ_∆u are vectors of weights with entries corresponding to each error term and input move term. Each of these weighting factors provides the ability to assign some relative importance to each individual term within the control problem.

The optimization problem to be solved at every time step is constrained. The different types of constraints that can apply are detailed below. The actions of the controller are based predominantly on the ability to quantify how far the system is from the desired operating conditions at all times in the prediction horizon. As a result, the controller relies heavily on a model of the system and the current process measurements to predict how the system will evolve.
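As a concrete illustration of eq 1, the following is a minimal sketch (Python/NumPy with arbitrary illustrative dimensions and weights; not the authors' implementation) that evaluates the prioritized objective for a candidate solution, reading the weighted norms as element-wise weighted l1-norms:

```python
import numpy as np

def prioritized_objective(P, O, e, du, gamma_P, gamma_O, gamma_e, gamma_du):
    """Evaluate eq 1 with l1-norms.

    P, O    : binary priority/objective flags (0 or 1)
    e       : setpoint errors over the prediction horizon, shape (p, n_err)
    du      : input moves over the move horizon, shape (m, n_u)
    gamma_* : weight vectors (negative entries in gamma_P and gamma_O
              reward satisfied priorities/objectives)
    """
    phi = gamma_P @ P + gamma_O @ O                              # discrete terms
    phi += sum(np.abs(gamma_e * e[i]).sum() for i in range(e.shape[0]))
    phi += sum(np.abs(gamma_du * du[i]).sum() for i in range(du.shape[0]))
    return phi

# Illustrative values: 3 objectives, 2 priorities, p = 4, m = 2, 2 inputs
phi = prioritized_objective(
    P=np.array([1, 0]), O=np.array([1, 1, 0]),
    e=0.1 * np.ones((4, 2)), du=0.5 * np.ones((2, 2)),
    gamma_P=np.array([-200_000.0, -200_000.0]),
    gamma_O=np.array([-10_000.0] * 3),
    gamma_e=np.array([100.0, 100.0]), gamma_du=np.array([50.0, 50.0]))
```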
As a result of the formulation, only those variables explicitly defined by the model can be regulated. Numerous model types have been used in MPC applications, but most have focused solely on the regulation of the measured outputs of the process. This does not suffice for the control of unmeasured states. Given that a process has nu inputs, nx states, ny outputs, and nd measured disturbances, consider a linear model of the process in the typical state-space form of
x(k+1) = A x(k) + B_u u(k) + B_d d(k)    (2)

y(k) = C x(k) + D_u u(k) + D_d d(k)    (3)
where x(k) ∈ R^nx is the state vector at sample time k, u(k) ∈ R^nu is the vector of input values, y(k) ∈ R^ny is a vector of the predicted outputs, and d(k) ∈ R^nd includes any measured disturbances. Such a model can be obtained by linearizing the system around the desired operating point. Utilizing this form of the model, all states and outputs are explicitly represented using equality constraints. For such states, the constraints are of the following form:
x(k+i) = A x(k+i-1) + B_u u(k+i-1) + B_d d(k)    ∀ i = 1...p    (4)

The modeled outputs are then related to the states by

y(k+i) = C x(k+i) + D_u u(k+i) + D_d d(k) + [y_m(k) - C x_e(k)]    ∀ i = 1...p    (5)

Here, a disturbance update term adjusts the modeled output (y) by the difference between the vector of actual measurements (y_m) and the vector of output estimates at the current time (C x_e) in order to account for any plant/model mismatch. This assumes that the current disturbance level, y_m(k) - C x_e(k), remains constant over the prediction horizon. A Kalman filter (KF) could be used to provide state estimates; however, additional states would be added to the system for offset-free tracking, which would complicate the closed-loop dynamics of the process. For this reason, the necessary state estimates are provided by an open-loop, state-explicit model of the system. This is adequate, as the initial conditions of the states are known and the process is stable.

The effectiveness of the controller in predicting the state and output trajectories depends on current process information: the current state and measured output values. Since not all of the states are necessarily measured, estimates of the unmeasured states must be used to update the model. If the system is not fully observable, the limitation lies in the ability of the controller to regulate only those variables with known values. Note that the controller can only choose input moves over the horizon m; for every i greater than m - 1, the input u takes the same value: u(k+m-1) = u(k+m) = ... = u(k+p-1).

A process may have numerous control objectives that are applied to a system using constraints on the setpoint error. These constraints vary in form depending on the type of constraint required. Traditionally, only the measured states or outputs are constrained. As mentioned previously, it could be important for safety or production reasons for the controller to control some of the unmeasured states. To this end, the controller formulation must have the ability to constrain both
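A minimal sketch of this prediction step is given below (Python/NumPy; the function and its arguments are illustrative stand-ins, not the authors' code). It propagates the current open-loop state estimate through eq 4 and applies the constant output-disturbance correction of eq 5, assuming D_u = 0 as in the tank model used later:

```python
import numpy as np

def predict_horizon(A, Bu, Bd, C, Dd, x_est, u_seq, d, y_meas):
    """Predict states (eq 4) and bias-corrected outputs (eq 5) over the horizon.

    x_est  : current state estimate x_e(k) from the open-loop observer
    u_seq  : candidate input sequence, shape (p, n_u) (inputs held after move m)
    d      : current measured disturbance, assumed constant over the horizon
    y_meas : current measurement y_m(k), used for the disturbance update
    """
    bias = y_meas - C @ x_est               # y_m(k) - C x_e(k), held constant
    x, xs, ys = x_est, [], []
    for u in u_seq:
        x = A @ x + Bu @ u + Bd @ d         # eq 4: state prediction
        y = C @ x + Dd @ d + bias           # eq 5 with D_u = 0 (as for the tank model)
        xs.append(x)
        ys.append(y)
    return np.array(xs), np.array(ys)
```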
types of states, that is, unmeasured and measured, as well as the measured output values. This can be achieved either with a setpoint constraint or with a one-sided soft upper or lower bound. A setpoint constraint on a state, measured or unmeasured, can be written as

|r_xj(i) - x_j(i)| ≤ e_x(i) + γ_j    ∀ i = 1...p    (6)

where r_xj is the reference value from a reference vector r for the jth state and γ_j is a tolerance value that provides a range that the state must stay within for the constraint to be satisfied. Note that γ_j = 0 for setpoint constraints and γ_j > 0 for soft bounds on a variable. This single constraint can be written as two separate constraints:

r_xj(i) - x_j(i) ≤ e_x(i) + γ_j
-r_xj(i) + x_j(i) ≤ e_x(i) + γ_j    ∀ i = 1...p    (7)

A single soft upper or lower bound can be placed on a state by enforcing only one side of the setpoint constraints defined in eq 7. Similarly, constraints can be placed on the measured process outputs. The objective function can contain terms that penalize input movements. These input movements are measured and constrained as

|u(k+i-1) - u(k+i-2)| ≤ ∆u(k+i-1)    ∀ i = 1...m    (8)

This is the last set of constraints for the MPC algorithm without propositional logic constraints. The following constraints are imposed in addition and pertain specifically to the prioritized objective formulation. Discretizing the typically continuous control objectives makes it possible to prioritize different objectives through a series of algebraic constraints. To discretize the continuous control objectives, a constraint of the form

e(i) ≤ N_j(1 - O_j) + δ_j    ∀ i = 1...p    (9)

is needed, where N_j is large. If the continuous control objective has an error within the tolerances δ_j and γ_j at all time steps over the prediction horizon, the objective is satisfied and O_j will take a value of one. On the other hand, if the control objective is not satisfied at one of the time steps over the prediction horizon, that is, there is an error greater than δ_j at some time, then O_j must take a value of zero for the constraint to be satisfied.

To ensure that the objectives are met in order of priority, another type of constraint is introduced. Previous work16 in this area used a formulation in which all continuous objectives were discretized and each objective was given a unique priority. In this case, the constraints were of the form P_i ≤ O_i for all i = 1...N_P, where N_P and N_O are the number of priorities and objectives, respectively, and N_P = N_O. Using the "and" clause from propositional logic,14 it is possible to extend this type of constraint to cases where several objectives have equal priority, that is, N_P ≠ N_O. Specifically, one constraint is added for each objective that forces the objectives to be met before the corresponding priority. As an example, one might require

P_1 ≤ O_1
P_1 ≤ O_2
P_2 ≤ O_3
⋮
P_NP ≤ O_NO    (10)

In this case, N_P = N_O - 1. The first two objectives have the same priority; therefore, both O_1 and O_2 must be met before P_1 is satisfied. An extension to the case of equal priorities is simply a matter of propositional logic, and similar extensions can handle any logical clause consisting of combinations of "and" and "or" statements. Finally, constraints are needed to enforce the order in which the objectives are met based on their priority level. These constraints are

P_{i+1} ≤ P_i    ∀ i = 1...N_P - 1    (11)
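To make eqs 9-11 concrete, the following is a minimal sketch (Python/NumPy, illustrative dimensions and tolerances; not the authors' implementation) that builds these rows in the M z ≤ b form used below, for a reduced decision vector ordered as [all error terms over the horizon, O, P] and treating each error term as one discrete objective:

```python
import numpy as np

def priority_constraint_rows(p, n_err, priority_of, big_N=1e4, delta=0.1):
    """Build M z <= b rows for eqs 9-11.

    z is ordered [e(1..p, all error terms), O(1..N_O), P(1..N_P)].
    priority_of[j] gives the (zero-indexed) priority assigned to objective j,
    so several objectives may share one priority (the "and" clause of eq 10).
    """
    n_obj = n_err
    n_pri = max(priority_of) + 1
    n_e = p * n_err
    n_z = n_e + n_obj + n_pri
    rows, rhs = [], []

    # eq 9: e_j(i) + N*O_j <= delta + N  (O_j forced to 0 if any error exceeds delta)
    for j in range(n_obj):
        for i in range(p):
            row = np.zeros(n_z)
            row[i * n_err + j] = 1.0
            row[n_e + j] = big_N
            rows.append(row); rhs.append(delta + big_N)

    # eq 10: P_q <= O_j for every objective j assigned to priority q
    for j, q in enumerate(priority_of):
        row = np.zeros(n_z)
        row[n_e + n_obj + q] = 1.0
        row[n_e + j] = -1.0
        rows.append(row); rhs.append(0.0)

    # eq 11: P_{q+1} <= P_q (priorities must be satisfied in order)
    for q in range(n_pri - 1):
        row = np.zeros(n_z)
        row[n_e + n_obj + q + 1] = 1.0
        row[n_e + n_obj + q] = -1.0
        rows.append(row); rhs.append(0.0)

    return np.array(rows), np.array(rhs)

# Two objectives sharing the first priority and a third objective at the second
M_rows, b_rows = priority_constraint_rows(p=3, n_err=3, priority_of=[0, 0, 1])
```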
All of the elements of the problem are now defined, but it is still necessary to express them in a more general and compact form. The general form of the objective function for a QP or MIQP can be written in the form
J = (1/2) z^T H z + f^T z    (12)
where H is a diagonal weight matrix that includes the weights of each term whose magnitude is measured by the l2-norm and f is a vector holding the weights of each term whose magnitude is measured by the l1-norm. When H is nonzero, the optimization problem is a constrained MIQP; if H = 0 and f is nonzero, the problem reduces to a constrained MILP. In the objective function, z is a vector of the unknowns. It is defined as
z = [u_m^T  x_p^T  y_p^T  e_p^T  ∆u_m^T  O^T  P^T]^T    (13)
where
u_m = [u(0)^T ... u(m-1)^T]^T
x_p = [x(1)^T ... x(p)^T]^T
y_p = [y(1)^T ... y(p)^T]^T
e_p = [e(1)^T ... e(p)^T]^T
∆u_m = [∆u(0)^T ... ∆u(m-1)^T]^T

It should be noted that, from this unknown vector, the controller has only m·nu true decision variables; only u_m must be specified. With the value of u_m and the current process measurements, the rest of z is explicitly defined by the constraint relations (eqs 4-11). All constraints can then be rearranged into a single mathematical expression in the standard matrix-vector notation of
M z ≤ b    (14)
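For the l1-norm case used below, the problem reduces to a mixed-integer linear program over z with 0/1 restrictions on the O and P entries. A minimal sketch of how such a problem could be assembled and solved (using SciPy's milp as a stand-in for the MATLAB/CPLEX setup described later; the tiny example data are purely illustrative) is:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def solve_step(f, M, b, lb, ub, binary_idx):
    """Solve min f^T z  s.t.  M z <= b,  lb <= z <= ub, with binary O/P entries.

    binary_idx flags which entries of z are the 0/1 objective and priority
    variables; all other entries remain continuous.
    """
    integrality = np.zeros(len(f))
    integrality[binary_idx] = 1                      # 1 -> integer variable
    res = milp(c=f,
               constraints=LinearConstraint(M, -np.inf, b),
               integrality=integrality,
               bounds=Bounds(lb, ub))
    if not res.success:
        raise RuntimeError(res.message)
    return res.x                                     # optimal z; apply first input move

# Tiny illustrative problem: one continuous variable and one binary flag
f = np.array([1.0, -10.0])                           # reward the flag taking value 1
M = np.array([[1.0, 5.0]])                           # x + 5*O <= 6
b = np.array([6.0])
z = solve_step(f, M, b, lb=[0.0, 0.0], ub=[10.0, 1.0], binary_idx=[1])
```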
Note that in eq 14 some elements of b are updated at each time step from process data, while the elements of M are all constant. For this work, an l1-norm is chosen to quantify the deviations from the desired operating point, so the optimization problem posed and solved by the controller is then
min f^T z    (15)
subject to
M z ≤ b
z_lb ≤ z ≤ z_ub

The selection of lower and upper bounds on z (z_lb and z_ub) allows hard constraints, such as actuator limits, to be imposed on the system in addition to the constraints previously described. The desired control action is taken based on the solution to the optimization problem, and the controller then waits for the next sample time to receive the updated process data. These updated data are the basis for formulating the appropriate optimization problem for the next time step. For the work presented here, the optimization problem is formulated completely in MATLAB17 and solved using ILOG CPLEX.

3. Application to an Experimental Pressure Tank System

This section describes the multivariable experimental pressure tank system, presents the development of an adequate process model, and provides an analysis of the closed-loop performance of the proposed control algorithm on the pressure tank system.

3.1. System Description. Inspired by experimental systems for liquid level modeling and control of a four-tank system,18-21 an industrially relevant multivariable experimental air tank system15 has been developed, as shown in Figure 1. As opposed to liquid level systems, pressure differences in the system drive the flow, removing limitations in system flexibility associated with gravity-driven liquid systems.

[Figure 1. Experimental four pressure tank system.]

The multi-input multi-output (MIMO) experimental system consists of four interconnected air tanks arranged in two parallel interacting "trains", where each train consists of two tanks in series. A schematic of the air tank system is shown in Figure 2. High-pressure air (~45 psig) flows into the system through two air-actuated Badger control valves, which serve as the manipulated variables for the system. Pressure transducers are available to measure each of the four tank pressures, leading to a total of four possible process outputs. Once the air flows through the system, it exits to the atmosphere through two outlet valves. On each side of the system, the upstream tank is slightly larger than the downstream tank. Note that each of the larger tanks is fitted with a small release valve that vents the tank to the atmosphere. These valves can be used to create a disturbance on the system that might simulate a leak in a given tank, providing the opportunity to examine disturbance rejection as a possible control objective in addition to reference tracking. The disturbance flow lines are double valved, allowing for reproducible disturbance response tests.

[Figure 2. Schematic of the four tank system.]

For this work, pressure data are acquired at 1 s intervals using the pressure transducers. A National Instruments Data Acquisition system is interfaced to
MATLAB/Simulink, which is run on a 1.8 GHz Intel Pentium 4 processor.

To test the abilities of the control algorithm, the system will be assumed to be in a basic 2 × 2 MIMO control configuration where only the two downstream pressure measurements (P2 and P4) will be available. The upstream tank pressures will act as the unmeasured states to be inferentially controlled. Note that all four state measurements will be used for model development. The physical symmetry of the system leads to the ability to classify the system as two trains of air flow, each consisting of a control valve, a large tank, and a smaller tank. With this in mind, valves v14 and v32 allow for cross-train flow. This cross-train flow causes interactions between the two different sides of the system. In some cases, the interactions can lead to the presence of an adjustable, multivariable right-half-plane zero and inverse response. The system itself is quite nonlinear in nature. This can be seen from the steady-state manifolds shown in Figure 3, where the steady-state process outputs are presented as a function of both of the manipulated control valve positions. This nonlinearity is of great significance, as the linear control method relies on the ability of the linear process model to capture the dynamics of the system. Note that CV1 is inoperative between 0 and 10% open, causing the valve to be effectively closed for CV1 ≤ 10%. It is not until the valve is opened beyond this region that the air is able to flow.

[Figure 3. Steady-state manifolds showing the downstream pressures as a function of control valve positions while operating at steady state.]

3.2. Model Identification. For process identification, it is assumed that the pressures in each of the four tanks can be measured. Process data were collected every second for a specified input trajectory. This trajectory has each control valve held for a short period of time at the nominal operating conditions (30% open), followed by three extended periods of pseudorandom binary sequences (PRBS) of various magnitudes. In these sequences, the two system inputs are randomly perturbed by ±2, ±5, and ±8% at 3 s intervals for 1 h. The dynamic data gathered are then used as the basis for the system description needed for this model-based control approach.

Using the data set, a linear discrete-time model of this system is obtained using a subspace identification method (n4sid) in MATLAB. This method provides a state-space model, which is a linear approximation of the system around a specified operating point. For this case, the steady-state operating point is chosen in terms of the percent valve open as

u_ss = [30  30]

and the resulting four steady-state tank pressures (psig) are

x_ss = [31.15  21.93  35.45  23.50]
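The following is a minimal sketch (Python/NumPy; the helper and the segment lengths are illustrative assumptions, and the actual experiment used the laboratory's MATLAB/Simulink interface) of an identification input of the kind described above: each valve is held at its 30% nominal opening and then perturbed pseudorandomly by ±2, ±5, and ±8% at 3 s intervals:

```python
import numpy as np

def prbs_input(u_nominal=30.0, magnitudes=(2.0, 5.0, 8.0),
               hold_s=60, seg_s=1200, switch_s=3, seed=0):
    """Return a 1 s resolution input trajectory for one valve (% open):
    an initial hold at the nominal value followed by one pseudorandom
    binary segment per perturbation magnitude, switching every 3 s."""
    rng = np.random.default_rng(seed)
    traj = [np.full(hold_s, u_nominal)]
    for mag in magnitudes:
        signs = rng.choice([-1.0, 1.0], size=seg_s // switch_s)
        traj.append(u_nominal + mag * np.repeat(signs, switch_s))
    return np.concatenate(traj)

u1 = prbs_input(seed=1)   # valve 1 trajectory
u2 = prbs_input(seed=2)   # valve 2 trajectory
```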
The model developed from the subspace identification is fourth-order, but the states of the model do not necessarily correspond to the actual process states or tank pressures. To inferentially control the unmeasured process states, a model that explicitly defines the actual process states is necessary. For this reason, a coordinate transformation22 must be made to convert the fictitious state vector to the actual state vector. Consider the state-space model developed using the subspace identification method consisting of states without direct physical meaning to be of the form
x_f(k+1) = Â x_f(k) + B̂_u u(k) + B̂_d d(k)
y(k) = Ĉ x_f(k) + D̂_u u(k) + D̂_d d(k)    (16)
where x_f is the state vector. A nonsingular transformation matrix T is chosen such that x(k) = T x_f(k), where x(k) is the desired vector of actual states. The resulting model becomes
x(k+1) = T Â T^{-1} x(k) + T B̂_u u(k) + T B̂_d d(k)
y(k) = Ĉ T^{-1} x(k) + D̂_u u(k) + D̂_d d(k)    (17)
In this form, each element in the new state vector x(k) represents an actual tank pressure if the transformation T is chosen correctly. In this case, the necessary transformation matrix is defined by T = Ĉ. This is chosen because it forces the new mapping of states to outputs to be the identity. In the absence of a direct feed (D = 0), the output equation is
y(k) = Ĉ T^{-1} x(k) = Ĉ Ĉ^{-1} x(k) = I x(k)    (18)
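A minimal sketch of this coordinate change (Python/NumPy; A_hat, Bu_hat, Bd_hat, and C_hat stand in for the identified matrices of eq 16) is:

```python
import numpy as np

def to_physical_states(A_hat, Bu_hat, Bd_hat, C_hat):
    """Transform an identified model to states that equal the measured outputs.

    With all four pressures measured during identification, C_hat is square and
    invertible, so choosing T = C_hat makes the new output map the identity.
    The identified model here has no direct feedthrough, so D terms are omitted.
    """
    T = C_hat
    T_inv = np.linalg.inv(T)
    A = T @ A_hat @ T_inv          # eq 17
    Bu = T @ Bu_hat
    Bd = T @ Bd_hat
    C = C_hat @ T_inv              # eq 18: equals the identity matrix
    assert np.allclose(C, np.eye(C.shape[0]))
    return A, Bu, Bd, C
```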
This shows that the pressure measurements are now also the process states of the model. This model is then modified to reflect only the measurements assumed available in the control of the
3580
Ind. Eng. Chem. Res., Vol. 44, No. 10, 2005
basic 2 × 2 MIMO tank system. The resulting discrete-time linear state-space model is then defined by
A = [  0.9051   0.0555  -0.0074   0.0098
       0.0350   0.8279  -0.0084   0.1203
      -0.0083  -0.0813   0.8545   0.1787
       0.0197  -0.1326  -0.0320   1.1121 ]

B = [  0.0411   0.0017
       0.0036   0.0059
      -0.0028   0.0495
       0.0043   0.0143 ]

C = [  0   1   0   0
       0   0   0   1 ]

D = [  0   0
       0   0 ]
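The open-loop characteristics reported next can be checked directly from these matrices; a minimal sketch (Python/NumPy, not the authors' MATLAB analysis) that computes the poles, the steady-state gain G = C(I - A)^{-1}B + D, and the relative gain array of eq 19 is:

```python
import numpy as np

def open_loop_analysis(A, B, C, D):
    """Poles, steady-state gain, and relative gain array of x(k+1)=Ax+Bu, y=Cx+Du."""
    poles = np.linalg.eigvals(A)                           # discrete-time poles
    G = C @ np.linalg.solve(np.eye(A.shape[0]) - A, B) + D # steady-state gain
    rga = G * np.linalg.inv(G).T                           # element-wise (Hadamard) product
    return poles, G, rga

# poles, G, rga = open_loop_analysis(A, B, C, D) using the matrices listed above
```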
From this linear discrete-time model, the system poles and zeros are identified as
poles = 0.847, 0.896, 0.975, 0.980
zeros = 0.326, 0.982

The poles show that the model of the process is stable. Both process zeros are located inside the unit disk, which is analogous to the left-half plane for continuous-time systems. The level of interaction in the system is characterized by calculating the relative gain array (RGA) for the system at steady state:
RGA_ss(G) = G × (G^{-1})^T |_{s=0} = [  2.50  -1.50
                                       -1.50   2.50 ]    (19)
where G = C(I - A)^{-1}B + D.

3.3. Closed-Loop Performance. To illustrate the advantages of the proposed mixed-integer formulation for the inferential model predictive control of the pressure tank system, multiple control strategies are considered. The three approaches shown include a traditional MPC formulation using a state-space model without inferential control, a traditional state-space MPC formulation that allows for the inferential control of unmeasured states, and, finally, the mixed-integer formulation that allows for the prioritized objective inferential control of the unmeasured states. In each case, the control algorithm is implemented in real time, where the appropriate optimization problem is formulated in MATLAB and solved using ILOG CPLEX in the time allowed by a 1 s sampling interval.

3.3.1. Traditional MPC without Inferential Control. Model predictive control approaches often rely on input-output models to predict how the system at hand will evolve through time. This allows for the optimal control of the outputs defined by this model; however, the model fails to provide the controller with information about any of the unmeasured states, leaving them unregulated. To demonstrate this shortcoming, MPC using a state-space process description without inferential control is implemented on the tank system, where only the two downstream (measured) tank pressures are controlled.

The controller is developed with a move horizon (m) of 2 and a prediction horizon (p) of 60. This long
prediction horizon is of importance in cases of inverse response, so that the controller is aware of the true direction in which the system will ultimately move as a result of a chosen input. The errors or deviations associated with both the setpoint constraints and the soft upper and lower bounds are equally weighted with Γe = 100. To prevent unnecessary valve chattering, the input movements are penalized by Γ∆u = 50.

To assess the performance of the control approach, both setpoint tracking and disturbance rejection capabilities are explored in this MIMO system. Specifically, a setpoint transition is made on the first process output (P2), where the output is stepped from its steady-state value of 22.00 to 23.50 psig at t = 100 s and then returned to its nominal operating condition at t = 400 s. The second process output (P4) is held in a similar fashion at its nominal operating condition of 23.50 psig throughout the experiment. Setpoint constraints are utilized to maintain the tank pressures at their respective setpoints with γj = 0. In addition to the setpoint constraints, a soft lower bound is enforced on P2 at a level of 0.50 psig below the reference, and a soft upper bound is enforced on P4 at a level of 0.25 psig above the reference. These additional soft constraints further help prevent significant deviations from the setpoints during reference changes and under disturbance loads. At t = 700 s, an unmeasured disturbance is imposed on the system by opening the release valve on Tank 3, which simulates a leak from Tank 3. Again, note that, without knowledge of the process states, the unmeasured tank pressures (P1 and P3) remain completely unregulated.

The closed-loop results are presented in Figure 4. The controller is able to efficiently move the system through the setpoint transitions and provide offset-free tracking. The soft constraints on P2 and P4 are only violated for small periods of time that correspond with the actual times of transitions or disturbances. The controller is able to reject the disturbance and return to an offset-free position. The pressures and model-predicted values of the "unmeasured" states are presented only as an illustration of their trajectory when they are completely unregulated. Again, for this approach, the controller has no knowledge of these values and, therefore, is unable to constrain them in any manner.

[Figure 4. Unmeasured state, output, and input trajectories for the experimental tank system being controlled by MPC using a state-space model, shown for both a reference transition and a disturbance load, without inferential control of P1 and P3.]

3.3.2. Inferential MPC Using a State-Space Model. Using the state information provided by the state-space model, the controller now attempts to regulate the unmeasured states. The state explicit model can be used to infer unmeasured state values, and constraints can then be imposed on these inferred states, just as they are traditionally used for process measurements. For illustration purposes, the same setpoint tracking and disturbance rejection problem are studied. The only modifications are the addition of constraints on the previously unregulated, unmeasured state values. The pressures in the upstream tanks are bounded by soft constraints. Specifically, P1 is bounded above by a soft constraint at a value of 34.00 psig, and P3 is bounded below and above by soft constraints at 35.00 and 40.00 psig, respectively.

The results are presented in Figure 5. With the additional soft constraints on the unmeasured states, the system no longer tracks the setpoints at all times. At the time of the setpoint change, the bounds on the unmeasured states prevent the system from moving to an offset-free position.

[Figure 5. Unmeasured state, output, and input trajectories for the experimental tank system being controlled by inferential MPC using a state-space model with soft constraints on unmeasured states, shown for both a reference transition and a disturbance load with Γe = 100 and Γ∆u = 50.]

Note that in
attempting to regulate the unmeasured states, the controller is acting on the unmeasured state estimate provided by the open-loop observer. This allows the unmeasured states to be controlled as expected, provided that the model is able to capture the true dynamics of the system. However, any plant-model mismatch leaves the controller enforcing constraints on an unmeasured state estimate, which is not necessarily equivalent to the actual state value. This is seen when the system is under a disturbance. At approximately t = 800 s, the controller is holding the unmeasured estimates of P1 and P3 at their respective upper bounds despite the actual tank pressures being well below these imposed limits.

The closed-loop response of traditional MPC formulations is often highly dependent on the controller tunings, specifically the penalties, that is, weights, placed on each of the terms in the objective function. To illustrate this, the tunings for the traditional MPC, which utilizes a state-space model for inferential control purposes, are modified. The same experiment is carried out, but this time with the error terms associated with the setpoint constraints weighted higher than those of the soft upper and lower bounds by a factor of 10. Specifically, instead of having Γe = 100 for all error variables, Γe = 10 is used for those terms associated with the soft upper and lower bounds.

The results are shown in Figure 6. As a result of the change in controller tunings, the bounds on the state estimates are not enforced as strictly. The controller is better able to track the setpoint
despite the bounds on the unmeasured states. Note that the unmeasured states are still affected by the soft constraints, and they do not surpass their bounds as much as they had in the case where they were completely unconstrained. Since the controller tunings that specify the relative weights of each term in the objective function have such a great effect on the controller performance, it is essential that the implemented controller be tuned appropriately. Existing tuning methods rely on seemingly ad hoc choices of the appropriate trade-offs, leading to the need for a more intuitively tuned controller, especially as the problem becomes larger and the system becomes more complex (interacting).

3.3.3. Prioritized Objective MIMPC. Finally, the mixed-integer formulation is used for the prioritized objective inferential control of the system. Again, the benefit of this approach is that the controller attempts to meet discrete control objectives in a logical order based on an assigned priority. This makes the controller tuning more intuitive than that in traditional approaches. The same control objectives enforced in the traditional MPC cases are employed again. However, this time each objective is assigned a priority based on its relative importance. A summary of these discrete control objectives is presented in Table 1. The controller is tuned with m = 2, p = 60, Γe = 100, and Γ∆u = 50, as before.
[Figure 6. Unmeasured state, output, and input trajectories for the experimental tank system being controlled by inferential MPC using a state-space model, shown for both a reference transition and a disturbance load, with Γe = 100 for errors associated with setpoint constraints, Γe = 10 for errors associated with all soft upper and lower bounds, and Γ∆u = 50.]

[Figure 7. Unmeasured state, output, and input trajectories for the experimental tank system being controlled by the MIMPC using a state-space model for prioritized objective inferential control of unmeasured states, after both a reference transition and a disturbance load.]
Table 1. Summary of Control Objectives for the Pressure Tank System

constraint type    constrained variable
setpoint           y1
setpoint           y2
upper bound        x1
upper bound        x3
lower bound        y1
lower bound        x3
upper bound        y2

The discrete control objectives are numbered 1-5, with corresponding priority numbers 1-5.
The additional tuning parameters for the mixed-integer model predictive control (MIMPC) approach include the penalty for meeting a discrete control objective, ΓO = -10,000, and the penalty for meeting the objectives in order of priority, ΓP = -200,000.

The results are presented in Figure 7. The benefit of this approach is the ability to judiciously choose which control objectives to satisfy first. This is particularly important when trying to satisfy constraints of obviously varied importance, such as economic and safety constraints. In this particular case, at the time of the first setpoint transition, the process is moved to a scenario in which the controller is unable to meet all discrete control objectives. However, since the upper bound on the unmeasured pressure (P1) is assigned a higher priority than the upper bound on the second process output (P4), the controller correctly chooses to satisfy the P1 constraint first.

Figure 8 shows the total number of discrete control objectives satisfied at a given time in the experiment,
as well as the number met in order based on their priority. At the time of the initial setpoint transition, a few discrete objectives cannot be met. The system moves until all possible discrete objectives are met. Similarly, at the time of the disturbance, control objectives are again violated briefly, but the controller eventually manages to meet all of the objectives.

[Figure 8. Total number of discrete objectives met and the number met in order of their priority for the MIMPC control approach.]
4. Conclusions

In summary, an MPC algorithm has been presented that utilizes a linear state-space model and exploits the fact that this model explicitly defines all states of a system, including unmeasured states. Knowledge of the unmeasured states is gained through the use of an external state estimation routine, allowing the otherwise unregulated unmeasured states to be inferentially controlled using a flexible mixed-integer prioritized objective formulation of MPC. The formulation provides the ability to discretize any or all of the constraints and to prioritize all of the discrete control objectives, and it even permits the same priority level to be assigned to any number of discrete control objectives. Whereas traditional MPC approaches rely on ad hoc tuning to achieve the desired closed-loop response, the MIMPC provides a more intuitive tuning that ensures that the highest priority objectives are met first. Although setpoint tracking was sacrificed at times in favor of higher priority objectives, the controller was found to be effective in both setpoint tracking and disturbance rejection when applied to the experimental pressure tank system.

Acknowledgment

The authors would like to acknowledge financial support from the American Chemical Society Petroleum Research Foundation Grant No. 38539-G9 and the National Science Foundation Early Career Development Grant No. CTS-0238663.

Nomenclature

A = state-space coefficient matrix mapping the current state to the future state for the system with states representing tank pressures
Â = state-space coefficient matrix mapping the current state to the future state for the system with states that have no physical meaning
b = vector containing the constraint right-hand-side values
B_u = state-space coefficient matrix mapping inputs to the future state for the system with states representing tank pressures
B̂_u = state-space coefficient matrix mapping inputs to the future state for the system with states that have no physical meaning
B_d = state-space coefficient matrix mapping measured disturbances to the future state for the system with states representing tank pressures
B̂_d = state-space coefficient matrix mapping measured disturbances to the future state for the system with states that have no physical meaning
C = state-space coefficient matrix mapping states to outputs for the system with states representing tank pressures
Ĉ = state-space coefficient matrix mapping states to outputs for the system with states that have no physical meaning
CV1 = Badger control valve no. 1 (% open)
CV2 = Badger control valve no. 2 (% open)
d = measured disturbance vector
D_u = state-space coefficient matrix mapping inputs to outputs for the system with states representing tank pressures
D̂_u = state-space coefficient matrix mapping inputs to outputs for the system with states that have no physical meaning
D_d = state-space coefficient matrix mapping disturbances to outputs for the system with states representing tank pressures
D̂_d = state-space coefficient matrix mapping disturbances to outputs for the system with states that have no physical meaning
e = error vector for a single time step of the prediction horizon
e_p = error vector for all errors over the prediction horizon
e_x = error associated with a single state at a particular time
f = vector of weights for the MILP objective function
G = discrete-time multivariable transfer function gain
H = diagonal matrix of weights for the QP and MIQP objective function
I = identity matrix
J = objective function value in the compact optimization problem
k = discrete time step
m = move horizon
M = matrix of the constraint coefficients in the compact optimization problem
N_j = large constant for "big M" constraints
N_O = number of discrete control objectives
N_P = number of priorities
n_x = number of process states
n_u = number of process inputs
n_y = number of process outputs (measurements)
n_d = number of measured disturbances
O = vector of binary discrete objective flags
p = prediction horizon
P = vector of discrete objective priority flags
P1 = pressure in tank 1 (psig)
P2 = pressure in tank 2 (psig)
P3 = pressure in tank 3 (psig)
P4 = pressure in tank 4 (psig)
r_xj = reference (setpoint) for state j
T = nonsingular matrix used in the state-space coordinate transformation
t = time
u = vector of inputs for a single time in the move horizon
u_m = vector of inputs for all times in the move horizon
Vd1 = leak valve in tank #1
Vd3 = leak valve in tank #3
x = state vector (states represent tank pressures)
x_e = vector of estimated states provided by the model
x_f = state vector (states have no obvious physical meaning)
x_p = vector of all states over the prediction horizon
y = model-predicted output vector
y_m = vector of measurements
y_p = vector of all measurements over the prediction horizon
z = vector of all problem variables including inputs, states, outputs, errors, and binary flags for objectives and priorities
z_lb = lower bound on the variable vector
z_ub = upper bound on the variable vector
∆u = vector of the differences in input positions at two consecutive time steps
∆u_m = vector of the differences in input positions at two consecutive time steps for all points in the prediction horizon
δ = tolerance associated with determining if a discrete control objective is met
γ_j = tolerance that dictates the constraint level in relationship to the setpoint
Γ_e = vector of weights for errors associated with constraints on both unmeasured states and outputs
Γ_O = vector of weights on meeting the discrete control objectives
Γ_P = vector of weights on meeting the discrete control objectives in order of priority
Γ_∆u = vector of weights to penalize excessive input movements
Φ = objective function in the MPC formulation
Literature Cited

(1) Findeisen, R.; Imsland, L.; Allgower, F.; Foss, B. A. State and Output Feedback Nonlinear Model Predictive Control: An Overview. Eur. J. Control 2003, 9 (2-3), 190-206.
(2) García, C. E.; Prett, D. M.; Morari, M. Model Predictive Control: Theory and Practice - A Survey. Automatica 1989, 25 (3), 335-348.
(3) Mayne, D. Q. Control of Constrained Dynamic Systems. Eur. J. Control 2001, 7, 87-99.
(4) Mayne, D. Q.; Rawlings, J. B.; Rao, C. V.; Scokaert, P. O. M. Constrained Model Predictive Control: Stability and Optimality. Automatica 2000, 36, 789-814.
(5) Morari, M.; Lee, J. H. Model Predictive Control: The Good, the Bad, and the Ugly. In Proc. Conf. Chem. Process Control, Int. Conf., 4th, Padre Island, TX, 1991; pp 419-444.
(6) Morari, M.; Lee, J. H. Model Predictive Control: Past, Present and Future. In PSE ESCAPE-7 Symp., Trondheim, Norway, 1997.
(7) Bemporad, A.; Morari, M. Control of Systems Integrating Logic, Dynamics, and Constraints. Automatica 1999, 35, 407-427.
(8) Gatzke, E. P.; Doyle, F. J., III. Multi-Objective Control of a Granulation System. Powder Technol. 2001, 121 (2), 149-158.
(9) Parker, R. S.; Gatzke, E. P.; Doyle, F. J., III. Advanced Model Predictive Control (MPC) for Type I Diabetic Patient Blood Glucose Control. In Am. Control Conf., Proc., Chicago, IL, 2000.
(10) Reyes, A. N.; Dutra, C. S.; Alba, C. B. Comparison of Different Predictive Controllers with Multi-Objective Optimization: Application to an Olive Oil Mill. In Proc. IEEE Int. Conf. Control Appl. and Int. Symp. Comput. Aided Control Syst. Des., Glasgow, U.K., 2002; pp 1242-1247.
(11) Vada, J.; Slupphaug, O.; Johansen, T. A.; Foss, B. A. Linear MPC with Optimal Prioritized Infeasibility Handling: Application, Computational Issues, and Stability. Automatica 2001, 37, 1835-1843.
(12) Kerrigan, E. C.; Bemporad, A.; Mignone, D.; Morari, M.; Maciejowski, J. M. Multi-Objective Prioritisation and Reconfiguration for the Control of Constrained Hybrid Systems. In Am. Control Conf., Proc., Chicago, IL, 2000.
(13) Vada, J.; Slupphaug, O.; Johansen, T. A. Optimal Prioritized Infeasibility Handling in Model Predictive Control: Parametric Preemptive Multiobjective Linear Programming Approach. J. Optimization Theory Appl. 2001, 109 (2), 385-413.
(14) Tyler, M. L.; Morari, M. Propositional Logic in Control and Monitoring Problems. Automatica 1999, 35, 565-582.
(15) Long, C. E.; Miles, J. D.; Holland, C. E.; Gatzke, E. P. A Flexible Multivariable Experimental Air Tank System for Process Control Education. In Am. Control Conf., Proc., Denver, CO, 2003.
(16) Gatzke, E. P.; Doyle, F. J., III. Model Predictive Control of a Granulation System Using Soft Output Constraints and Prioritized Control Objectives. Powder Technol. 2001, 121, 149-158.
(17) The MathWorks. MATLAB 6.1; Prentice Hall: New York, 2000.
(18) Dai, L.; Åström, K. J. Dynamic Matrix Control of a Quadruple Tank Process. In Proc. IFAC Symp., 14th, Beijing, China, 1999; pp 295-300.
(19) Gatzke, E. P.; Meadows, E. S.; Wang, C.; Doyle, F. J., III. Model Based Control of a Four-Tank System. Comput. Chem. Eng. 2000, 24, 1503-1509.
(20) Johansson, K. H.; Nunes, J. L. R. A Multivariable Laboratory Process with an Adjustable Zero. In Am. Control Conf., Proc., Philadelphia, PA, 1998; pp 2045-2049.
(21) Vadigepalli, R.; Gatzke, E. P.; Doyle, F. J., III. Robust H-Infinity Control of an Experimental 4-Tank System. Ind. Eng. Chem. Res. 2001, 40 (8), 1916-1927.
(22) Åström, K. J.; Wittenmark, B. Computer-Controlled Systems: Theory and Design, 3rd ed.; Prentice Hall: New York, 1997.
Received for review August 6, 2004
Revised manuscript received February 26, 2005
Accepted February 28, 2005