Ind. Eng. Chem. Res. 2006, 45, 8565-8574


Robust Model Predictive Control Design for Fault-Tolerant Control of Process Systems

Prashant Mhaskar*
Department of Chemical Engineering, McMaster University, Hamilton, ON L8S 4L7, Canada

* To whom correspondence should be addressed. E-mail: mhaskar@mcmaster.ca. Phone: 905 525-9140 x23273. Fax: 905 521-1350.

This work considers the problem of stabilization of nonlinear systems subject to constraints, uncertainty, and faults in the control actuators. We first design a robust model predictive controller that allows for an explicit characterization of the set of initial conditions starting from where feasibility of the optimization problem and closed-loop stability are guaranteed. The main idea in designing the robust model predictive controller is to employ Lyapunov-based techniques to formulate constraints that (a) explicitly account for uncertainty in the predictive control law, without making the optimization problem computationally intractable, and (b) allow for explicitly characterizing the set of initial conditions starting from where the constraints are guaranteed to be initially and successively feasible. The explicit characterization of the stability region, together with the constraint handling capabilities and optimality properties of the predictive controller, is utilized to achieve fault-tolerant control of nonlinear systems subject to uncertainty, constraints, and faults in the control actuators. The implementation of the proposed method is illustrated via a chemical reactor example.

1. Introduction

A control system design for chemical processes needs to account for the inherent complexity of the process, exhibited in the form of nonlinear behavior and operational issues such as constraints and uncertainty, and to safeguard against eventualities including (but not limited to) faults in the control actuators. The nonlinear behavior exhibited by most chemical processes, together with the presence of constraints on the operating conditions (dictated by performance considerations or by the limited capacity of control actuators), the presence of modeling uncertainty and disturbances, and the unavailability of all the process states as measurements, has motivated significant research work in the area of nonlinear control focusing on these issues (refs 1-10); also, see refs 11 and 12 for excellent reviews of results and the references therein.

One of the control methods suited for handling constraints within an optimal control setting is model predictive control (MPC). The main idea in the model predictive control approach is to set up an optimization problem to compute a manipulated input trajectory (subject to constraints on the manipulated input variables and state variables) that minimizes a performance index (evaluated using the process model) over a finite horizon. Only the first "piece" of the manipulated input trajectory is implemented, however, and the optimization problem is solved again at the next sampling time with the revised state values (thereby imbuing the approach with the advantages of feedback control). Numerous research studies have investigated the stability properties of model predictive controllers for systems without uncertainty (see, for example, the review paper, ref 13). The problem of analysis and design of predictive controllers for uncertain linear systems has been extensively investigated (see refs 13 and 14 for surveys of results in this area). For uncertain nonlinear systems, the problem of robust MPC design continues to be an area of ongoing research (see, for example, refs 15-20). Several robust predictive formulations utilize what we will refer to as the "min-max" approach, where the manipulated input trajectory is computed by solving an

optimization problem that requires minimizing the objective function (and satisfying the input and state constraints) over all possible realizations of the uncertainty. While the min-max formulations provide a natural setting within which to address this problem, computational problems with these approaches are well-known and stem in part from the nonlinearity of the model, which typically makes the optimization problem nonconvex, and in part from performing the min-max optimization over this nonconvex problem.

Predictive control formulations (including those that do not consider uncertainty) typically owe their stabilizing properties to some form of "stability" constraints that essentially require some appropriate measure of the state to decrease or reach a target set by the end of the horizon. The initial feasibility of the stability constraint is typically assumed to ensure stability. In refs 21 and 22, a predictive controller is designed that does not assume, but instead provides, guaranteed stabilization from an explicitly characterized set of initial conditions, under input constraints (ref 21) and under input and state constraints (ref 22), in the absence of uncertainty. The stability guarantees of existing predictive control approaches for nonlinear systems with uncertainty, however, remain contingent upon the assumption of the initial feasibility of the optimization problem (or the assumption of the knowledge of a set of initial conditions starting from where the optimization problem is guaranteed to be feasible), and the set of initial conditions, starting from where feasibility of the optimization problem (and, therefore, stability of the closed-loop system) is guaranteed, is not explicitly characterized.

In ref 23 (the robust hybrid predictive control design), embedding the operation of predictive controllers within the stability region of Lyapunov-based bounded robust nonlinear controllers (e.g., see ref 7) is utilized to achieve stability and an explicit characterization of the stability region for the switched closed-loop system. The robust hybrid predictive controllers, however, enable implementation of existing predictive controller formulations with an explicit characterization of the robust stability region via switching to a fall-back controller. The problem of designing a robust model predictive controller that by itself (i.e., without switching to a fall-back bounded controller) guarantees stability from an explicitly characterized set of initial conditions is not considered.



The occurrence of faults in control actuators adds another layer of complexity to the problem of controller design for nonlinear uncertain systems. Note that the stability guarantees provided by a controller may no longer hold in the presence of faults in the control actuators that prevent the implementation of the control action prescribed by the control law, and these faults can have substantial negative ramifications owing to the interconnected nature of processes (ref 24). One approach to maintain closed-loop stability would be to use all available (and redundant) control actuators so that, even if one of the control actuators fails, the rest can maintain closed-loop stability (the reliable control approach; e.g., see ref 25). The use of redundant control actuators, however, incurs (possibly) preventable operation and maintenance costs. These economic considerations dictate the use of only as many control loops as are required at a time.

To achieve tolerance with respect to faults, control-loop reconfiguration can be carried out in the event of failure of the primary control configuration. Controller reconfiguration has been utilized to achieve fault-tolerance in the context of aerospace engineering applications (see, e.g., refs 26 and 27) and in the context of chemical process control, based on the assumption of a linear process description (refs 28 and 29). More recently, reconfiguration has been utilized to achieve fault-tolerant control for nonlinear systems (see, for example, refs 30 and 31), where the main idea is as follows: First, backup control configurations are identified, and an explicit characterization of the stability region associated with each control configuration is obtained using Lyapunov-based bounded robust nonlinear controllers. Reconfiguration laws that determine, on the basis of the stability regions, which of the available backup control configurations can preserve closed-loop stability in the event of faults in the primary control configuration are subsequently devised. In ref 31, the problem of achieving fault-tolerance in the presence of uncertainty was considered, where the robust hybrid predictive controller (ref 23) was used to characterize the stability region under each control configuration.

The approaches in refs 30 and 31 rely on bounded controller designs to characterize the stability region under a candidate control configuration and address the problem of determining which backup configuration can be activated to ensure stability. The fault-tolerant capabilities of these approaches depend on whether the state, at the time of the failure, resides in the stability region of the backup control configuration. The problem of ensuring that it is possible to switch to such a backup configuration at the time a fault occurs (which can be done by guiding the system trajectory toward the stability region of the backup control configurations) is not addressed. Guiding the system toward a precomputed target set requires invoking the model predictive control approach, which allows for incorporating both state and input constraints in the control design. A prerequisite to implementing the model predictive control approach for this purpose is to design a robust model predictive controller that guarantees closed-loop stability from an explicitly characterized set of initial conditions.
In summary, the discussion above reveals that designing a robust model predictive controller that can guarantee stability from an explicitly characterized set of initial conditions is important from the perspective of guaranteeing stability for nonlinear uncertain systems in the absence of faults, as well as from the point of view of tackling the important problem of achieving tolerance with respect to faults. Motivated by these considerations, we first describe the class of systems considered in section 2 and then, in section 3.1, design a robust model predictive controller that guarantees stability from an explicitly characterized set of initial conditions. The main idea in the robust model predictive controller design is to employ Lyapunov-based techniques to formulate constraints that (a) explicitly account for uncertainty in the predictive control law, without making the optimization problem computationally intractable, and (b) allow for explicitly characterizing the set of initial conditions starting from where closed-loop stability is guaranteed. The application of the robust model predictive controller is demonstrated in section 3.2. The explicit characterization of the stability region, together with the constraint handling capabilities and optimality properties of the predictive control approach, is utilized in section 4.1 to not only determine the appropriate backup configuration but to also drive the system trajectory into the stability region of a candidate backup control configuration and achieve fault-tolerant control subject to failures in the primary control configuration. Finally, in section 4.2, implementation of the proposed method for fault-tolerant control of a chemical reactor example is demonstrated.

2. Preliminaries

We consider nonlinear systems with uncertain variables and input constraints, described by

$$\dot{x} = f(x) + G(x)u + W(x)\theta(t), \qquad u \in U,\ \ \theta \in \Theta \tag{1}$$

where x ∈ Rn denotes the vector of state variables, u ∈ Rm denotes the vector of constrained manipulated inputs, taking values in a nonempty convex subset U of Rm, where U = {u ∈ Rm: ||u|| ≤ umax}, ||·|| is the Euclidean norm of a vector, umax > 0 is the magnitude of the input constraints, θ(t) = [θ1(t) ··· θq(t)]' ∈ Θ ⊂ Rq denotes the vector of uncertain (possibly time-varying) but bounded variables taking values in a nonempty compact convex subset Θ of Rq, and f(0) = 0. The vector function f(x) and the matrices G(x) = [g1(x) ··· gm(x)] and W(x) = [w1(x) ··· wq(x)], where gi(x) ∈ Rn, i = 1, ..., m, and wi(x) ∈ Rn, i = 1, ..., q, are assumed to be sufficiently smooth on their domains of definition. The notation ||·||Q refers to the weighted norm, defined by ||x||Q² = x′Qx for all x ∈ Rn, where Q is a positive definite symmetric matrix and x′ denotes the transpose of x. The notation Lfh denotes the standard Lie derivative of a scalar function h(·) with respect to the vector function f(·). Throughout the manuscript, we assume that, for any u ∈ U, the solution of the system of eq 1 exists and is continuous for all t, and we focus on the state feedback problem where measurements of the entire state, x(t), are assumed to be available for all t.

3. Model Predictive Control of Uncertain Nonlinear Systems

The objective in this section is to design a model predictive controller that stabilizes nonlinear systems subject to uncertainty and input constraints, as well as allows for an explicit characterization of the set of initial conditions starting from where closed-loop stability is guaranteed.

3.1. Robust Model Predictive Control. Consider the system of eq 1 and assume that the uncertain variable term is vanishing in the sense that W(0)θ(t) = 0 for any θ ∈ Θ (see remark 6 for how to handle nonvanishing disturbances), i.e., that the disturbances do not perturb the system from its nominal equilibrium point, and that a robust control Lyapunov function (RCLF; ref 32), V, exists.
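As a small illustration of the notation just introduced (an aside, not part of the development), the sketch below evaluates the weighted norm and the Lie derivatives for a quadratic V(x) = x′Px and a toy two-state system; all functions and numbers here are illustrative placeholders rather than quantities from the paper.

```python
import numpy as np

# Notation check for a quadratic V(x) = x' P x and a toy two-state, single-input,
# single-uncertainty system; P, Q, f, g, w below are illustrative placeholders.
P = np.diag([2.0, 1.0])
Q = np.diag([1.0, 2.0])

V      = lambda x: x @ P @ x
grad_V = lambda x: 2.0 * P @ x               # dV/dx for symmetric P
f      = lambda x: np.array([-x[0] + x[1] ** 2, -x[1]])
g      = lambda x: np.array([0.0, 1.0])      # one column of G(x)
w      = lambda x: np.array([1.0, 0.0])      # one column of W(x)

x = np.array([0.5, -0.2])
print("||x||_Q^2 =", x @ Q @ x)              # weighted norm ||x||_Q^2 = x' Q x
print("L_f V     =", grad_V(x) @ f(x))       # Lie derivative of V along f
print("L_g V     =", grad_V(x) @ g(x))
print("L_w V     =", grad_V(x) @ w(x))
```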


Consider the receding horizon implementation of the control action computed by solving an optimization problem of the form:

$$P(x,t):\quad \min\left\{J(x,t,u(\cdot))\ \middle|\ u(\cdot) \in S\right\} \tag{2}$$

$$\text{s.t.}\quad \dot{x} = f(x) + G(x)u \tag{3}$$

$$L_G V(x)\,u(t) \le L_G V(x)\,u_{bc}(x(t)) \tag{4}$$

where

$$u_{bc} = -\frac{\alpha(x) + \sqrt{(\alpha_1(x))^2 + (u_{\max}\beta(x))^4}}{(\beta(x))^2\left[1 + \sqrt{1 + (u_{\max}\beta(x))^2}\right]}\,(L_G V)^{\mathrm{T}} \tag{5}$$

when LGV ≠ 0 and ubc = 0 when LGV = 0, where α(x) = LfV + (ρ||x|| + χθb||LWV||)(||x||/(||x|| + φ)), α1(x) = LfV + ρ||x|| + χθb||LWV||, β(x) = ||(LGV)'||, LGV = [Lg1V ··· LgmV], and LWV = [Lw1V ··· LwqV] are row vectors, θb is a positive real number such that ||θ(t)|| ≤ θb for all t ≥ 0, and ρ, χ, and φ are adjustable parameters that satisfy ρ > 0, χ > 1, and φ > 0; S = S(t, T) is the family of piecewise continuous functions (functions continuous from the right), with period Δ, mapping [t, t + T] into U. Equation 3 is the "nominal" nonlinear model (without the uncertainty term) describing the time evolution of the state x. A control u(·) in S is characterized by the sequence u[j], where u[j] := u(jΔ), and satisfies u(t) = u[j] for all t ∈ [jΔ, (j + 1)Δ). The performance index is given by

$$J(x,t,u(\cdot)) = \int_t^{t+T}\left[\|x^u(s;x,t)\|_Q^2 + \|u(s)\|_R^2\right]ds \tag{6}$$

where Q and R are positive semidefinite and strictly positive definite symmetric matrices, respectively, xu(s; x, t) denotes the solution of eq 3, due to control u, with initial state x at time t, and T is the specified horizon. The minimizing control u0(·) ∈ S is then applied to the plant over the interval [t, t + Δ), and the procedure is repeated indefinitely.

Preparatory to the formalization of the stability and feasibility properties of the robust MPC, we recall the following result (refs 6 and 7) for the control law described by eq 5. Specifically, let Π be the set defined by Π(θb, umax) = {x ∈ Rn: α1(x) ≤ umax β(x)}, and assume that Ω := {x ∈ Rn: V(x) ≤ cmax} ⊆ Π(θb, umax) for some cmax > 0. Then, for any initial condition x0 ∈ Ω, it can be shown that there exists a positive real number ε* such that, if φ/(χ - 1) < ε*, the states of the closed-loop system of eqs 1 and 5 satisfy x(t) ∈ Ω ∀ t ≥ 0 and the origin of the closed-loop system is asymptotically stable. The stability properties of the predictive controller design follow from ensuring that V̇ under the predictive controller is at least as negative as that achieved under the implementation of the controller of eq 5 (expressed in eq 4); feasibility of the optimization problem and stability properties of the closed-loop system under the predictive controller are formalized in theorem 1 below.

Theorem 1. Consider the constrained system of eq 1 under the MPC law of eqs 2-6. Then, given any positive real number d, there exists a positive real number Δ* such that, if Δ ∈ (0, Δ*] and x(0) := x0 ∈ Ω, the optimization problem of eqs 2-6 is guaranteed to be initially and successively feasible, x(t) ∈ Ω ∀ t ≥ 0, and limsup_{t→∞} ||x(t)|| ≤ d.

Proof of Theorem 1. The proof of this theorem is divided into three parts. In the first part, we show that, for all x0 ∈ Ω, the optimization problem of eqs 2-6 is guaranteed to be initially feasible. We then show that there exists a Δ* such that, if Δ ∈ (0, Δ*], then Ω is invariant under receding horizon implementation of the predictive controller of eqs 2-6 (implying that the

optimization problem continues to be feasible) and that the state trajectories converge to the desired neighborhood of the origin. Finally, in part 3, we show that the state trajectories, once they reach the desired neighborhood of the origin, continue to stay in that neighborhood.

Part 1. Consider some x0 ∈ Ω under receding horizon implementation of the predictive controller of eqs 2-6, with a prediction horizon T = NΔ, where Δ is the hold time and 1 ≤ N < ∞ is the number of prediction steps. We consider the two constraints that may possibly lead to infeasibility, i.e., the constraint of eq 4 and the constraint on the manipulated input u. An examination of the constraint of eq 4 reveals that one possible (and always feasible) solution to this constraint is u = ubc. Furthermore, for all x(0) ∈ Ω, ||ubc|| ≤ umax. Therefore, for all x(0) ∈ Ω, the solution comprising ubc as the first element followed by N - 1 zeros is a feasible solution to the optimization problem.

Part 2. Having shown the initial feasibility of the optimization problem in part 1, we now show that the implementation of the control action computed by solving the optimization problem of eqs 2-6 guarantees that, for a given d and a sufficiently small Δ (i.e., there exists a Δ* such that Δ ∈ (0, Δ*]), Ω is invariant under the predictive control algorithm of eqs 2-6 (this guarantees subsequent feasibility of the optimization problem, by part 1 above), and then that, if the optimization problem continues to be feasible, practical stability (convergence to a desired neighborhood of the origin) is achieved for the closed-loop system. To this end, we first note that, since V(·) is a continuous function of the state, one can find a finite positive real number δ′ such that V(x) ≤ δ′ implies ||x|| ≤ d. Now, consider a "ring" close to the boundary of Ω, described by M := {x ∈ Rn: cmax - δ ≤ V(x) ≤ cmax}, for a 0 ≤ δ < cmax, with δ to be determined later. Let the control action be computed for some x(0) := x0 ∈ M. The satisfaction of the constraint of eq 4 (guaranteed from part 1) ensures that, under implementation of u0,

$$\dot{V}(x_0) = L_fV(x_0) + L_GV(x_0)u^0 + L_WV(x_0)\theta(0) \le L_fV(x_0) + L_GV(x_0)u_{bc} + L_WV(x_0)\theta(0) = \dot{V}_{bc}(x_0) \tag{7}$$

Substituting the control law of eq 5 into the system of eq 1, it can be shown that, for all x(0) ∈ Ω and ||θ(t)|| ≤ θb,

$$\dot{V}_{bc}(x) = L_fV + L_GVu_{bc} + L_WV\theta(t) \le -\rho^* V(x) \tag{8}$$

where ρ* > 0. This, together with eq 7, yields V̇(x0) ≤ -ρ*V(x0). Furthermore, if the control action is held constant until a time Δ**, where Δ** is a positive real number (u(t) = u(x0) := u0 ∀ t ∈ [0, Δ**]), then, ∀ t ∈ [0, Δ**],

$$\dot{V}(x(t)) = L_fV(x(t)) + L_GV(x(t))u^0 + L_WV(x(t))\theta(t) = L_fV(x_0) + L_GV(x_0)u^0 + L_WV(x_0)\theta(0) + \left(L_fV(x(t)) - L_fV(x_0)\right) + \left(L_GV(x(t))u^0 - L_GV(x_0)u^0\right) + \left(L_WV(x(t))\theta(t) - L_WV(x_0)\theta(0)\right) \tag{9}$$

Since x0 ∈ M ⊆ Ω, LfV(x0) + LGV(x0)u0 + LWV(x0)θ(0) ≤ -ρ*V(x0). By definition, for all x0 ∈ M, V(x0) ≥ cmax - δ; therefore, LfV(x0) + LGV(x0)u0 + LWV(x0)θ(0) ≤ -ρ*(cmax - δ). Since the function f(·) and the elements of the matrices G(·) and W(·) are continuous, ||u(t)|| ≤ umax, ||θ(t)|| ≤ θb, and M is bounded, one can find, for all x0 ∈ M and a fixed Δ**, a positive real number K1 such that ||x(t) - x0|| ≤ K1Δ** for all t ≤ Δ**.


Since the functions LfV(·), LGV(·), and LWV(·) are Lipschitz, then, given that ||x(t) - x0|| ≤ K1Δ**, x0 ∈ Ω, and ||θ(t)|| ≤ θb, one can find positive real numbers K2, K3, and K4 such that |LfV(x(t)) - LfV(x0)| ≤ K3K1Δ**, |LGV(x(t))u0 - LGV(x0)u0| ≤ K2K1Δ**, and |LWV(x(t))θ(t) - LWV(x0)θ(0)| ≤ K4K1Δ**. Using these inequalities in eq 9, we get

$$\dot{V}(x(t)) \le -\rho^*(c^{\max} - \delta) + (K_1K_2 + K_1K_3 + K_1K_4)\Delta^{**} \tag{10}$$

For a choice of Δ** < (ρ*(cmax - δ) - ε)/(K1K2 + K1K3 + K1K4), where ε is a positive real number such that

$$\epsilon < \rho^*(c^{\max} - \delta) \tag{11}$$

we get that V̇(x(t)) ≤ -ε < 0 for all t ≤ Δ**. This implies that, given δ′, if we pick δ such that cmax - δ < δ′ and find a corresponding value of Δ**, then, if the control action is computed for any x ∈ M and the "hold" time is less than Δ**, V̇ remains negative during this time, and therefore, the state of the closed-loop system cannot escape Ω (since Ω is a level set of V). This in turn implies successive feasibility of the optimization problem for all initial conditions in M and that, for any initial condition, x0, such that δ < V(x0) ≤ cmax, we have that V(x(t + Δ)) < V(x(t)). All trajectories originating in Ω, therefore, converge to the set defined by Ωf := {x ∈ Rn: V(x) ≤ cmax - δ}.

Part 3. We now show the existence of Δ′ such that, for all x0 ∈ Ωf := {x ∈ Rn: V(x) ≤ cmax - δ}, we have that x(Δ) ∈ Ωu := {x ∈ Rn: V(x) ≤ δ′}, where δ′ < cmax, for any Δ ∈ (0, Δ′]. Consider Δ′ such that

$$\delta' = \max_{\substack{V(x_0) \le c^{\max}-\delta,\ u \in U,\ \theta \in \Theta \\ \tau \in [0,\,\Delta']}} V(x(\tau)) \tag{12}$$

Since V is a continuous function of x and x evolves continuously in time, for any value of δ < cmax one can choose a sufficiently small Δ′ such that eq 12 holds. Let Δ* = min{Δ**, Δ′}. We now show that, for all x0 ∈ Ωu and Δ ∈ (0, Δ*], x(t) ∈ Ωu for all t ≥ 0. For all x0 ∈ Ωu ∩ Ωf, by definition, x(t) ∈ Ωu for 0 ≤ t ≤ Δ (since Δ ≤ Δ′). For all x0 ∈ Ωu\Ωf (and therefore x0 ∈ M), V̇ < 0 for 0 ≤ t ≤ Δ (since Δ ≤ Δ**). Since Ωu is a level set of V, then x(t) ∈ Ωu for 0 ≤ t ≤ Δ. Either way, for all initial conditions in Ωu, x(t) ∈ Ωu for all future times.

In summary, we showed that (1) for all x(0) ∈ Ω, the optimization problem is guaranteed to be feasible, (2) the optimization problem continues to be feasible, x(t) ∈ Ω ∀ t ≥ 0, and all state trajectories originating in Ω converge to Ωu, and (3) all state trajectories originating in Ωu stay in Ωu, i.e., x(t) ∈ Ω ∀ t ≥ 0 and limsup_{t→∞} ||x(t)|| ≤ d. This completes the proof of theorem 1.

Remark 1. The key step in the design of the predictive controller is the appropriate choice of the stability constraint that is guaranteed to be initially and successively feasible from an explicitly characterized set of initial conditions, while not requiring the optimization to be performed over all possible realizations of the uncertainty. The constraint of eq 4 is one such constraint. Equation 4 requires the control action to be computed such that it renders the time derivative of the Lyapunov function, evaluated at the current state, more negative than the time derivative of the Lyapunov function that one would achieve if one were to implement the Lyapunov-based bounded robust controller of eq 5. Note that the model used in the predictive controller is the nominal (without the uncertainty term) nonlinear model. However, negative definiteness of V̇ during the first time step is ensured by making V̇ sufficiently negative for the initial condition such that V̇ continues to remain negative during the first time step for the nonlinear uncertain system. The feasibility of this constraint is ascertained via the use of the control law (and its associated region of stability) of eq 5. Several points need to be understood with regard to this particular choice of the constraint, and these are illustrated via the unsuitability of some other, similar-looking, constraints in remark 2 below.
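To make the preceding discussion concrete, the following is a minimal sketch of a single receding-horizon step of the formulation of eqs 2-6, with the constraint of eq 4 imposed on the first input move and the bounded control law of eq 5 used as the (always feasible) initial guess, as in part 1 of the proof. The toy two-state system, the quadratic V, the tuning values, the Euler discretization, and the use of scipy's SLSQP solver are all illustrative assumptions, not the setup used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Toy 2-state, 1-input, 1-uncertainty system in the form of eq 1 (the nominal
# model of eq 3 is used for prediction); all numbers are illustrative assumptions.
P = np.eye(2)                                    # quadratic V(x) = x' P x
Qw, Rw = np.eye(2), 0.1                          # weights of the cost of eq 6
u_max, theta_b = 1.0, 0.1
rho, chi, phi = 0.01, 1.01, 1e-4                 # tuning parameters of eq 5
N, dt = 6, 0.05                                  # prediction steps and hold time

f = lambda x: np.array([x[1], -x[0] - 0.5 * x[1]])
g = lambda x: np.array([0.0, 1.0])
w = lambda x: np.array([0.0, 1.0])
grad_V = lambda x: 2.0 * P @ x

def u_bc(x):
    """Bounded control law of eq 5 (scalar-input case), used as a feasible guess."""
    LfV, LGV, LWV = grad_V(x) @ f(x), grad_V(x) @ g(x), grad_V(x) @ w(x)
    nx = np.linalg.norm(x)
    alpha = LfV + (rho * nx + chi * theta_b * abs(LWV)) * nx / (nx + phi)
    alpha1 = LfV + rho * nx + chi * theta_b * abs(LWV)
    beta = abs(LGV)
    if beta == 0.0:
        return 0.0
    return -(alpha + np.sqrt(alpha1**2 + (u_max * beta)**4)) / (
        beta**2 * (1.0 + np.sqrt(1.0 + (u_max * beta)**2))) * LGV

def predict(x0, useq):
    """Euler integration of the nominal model under a piecewise-constant input."""
    xs, x = [x0], x0
    for u in useq:
        x = x + dt * (f(x) + g(x) * u)
        xs.append(x)
    return xs

def cost(useq, x0):
    xs = predict(x0, useq)
    return sum(x @ Qw @ x * dt for x in xs[1:]) + sum(Rw * u * u * dt for u in useq)

def mpc_step(x0):
    LGV = grad_V(x0) @ g(x0)
    cons = [{"type": "ineq",                     # eq 4: LGV*u[0] <= LGV*u_bc(x0)
             "fun": lambda useq: LGV * u_bc(x0) - LGV * useq[0]}]
    guess = np.concatenate(([u_bc(x0)], np.zeros(N - 1)))  # u_bc then zeros (part 1)
    res = minimize(cost, guess, args=(x0,), method="SLSQP",
                   bounds=[(-u_max, u_max)] * N, constraints=cons)
    useq = res.x if res.success else guess       # any feasible solution suffices
    return useq[0]                               # implement only the first "piece"

x = np.array([0.5, -0.3])
print("applied input:", mpc_step(x))
```

Because stability rests on feasibility rather than optimality, falling back to the initial guess when the solver fails (as done above) does not affect the guarantees; this point is elaborated in remark 7 below.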

Remark 2. One possible alternative to the constraint of eq 4, for example, could have been to require the Lyapunov function to decrease at the end of the horizon (similar to the contractive Lyapunov-based predictive controller of ref 33). In the presence of uncertainty, this would necessitate requiring the Lyapunov function to decay under all possible realizations of the uncertainty, making it a min-max type of optimization problem. Note that, in the presence of contractive-type constraints, the set of initial conditions from where the optimization problem is guaranteed to be feasible is not explicitly characterized even in the absence of uncertainty. Therefore, even if one were to implement such a constraint, disregarding the computational complexity associated with solving min-max optimization problems, nothing could be said about the guaranteed (initial and successive) feasibility of such an optimization problem. Another choice could have been, for instance, to require the Lyapunov function to decrease only during the first time step. While, in the absence of uncertainty, such a constraint can be imposed and guaranteed to be initially and successively feasible from an explicitly characterized set of initial conditions (see refs 21 and 22), the presence of uncertainty brings in the following complications: if one were to persist with requiring the Lyapunov function to decay during the first time step (without requiring the constraint to be satisfied over all possible realizations of the uncertainty), such a constraint could be shown to be initially feasible from an explicitly characterized set of initial conditions; successive feasibility (and, therefore, stability), however, would not be guaranteed (due to the presence of uncertainty). The other option could be to compute by how much V̇ (without considering the uncertainty) needs to be negative during the first time step to make sure that V̇ for the system with uncertainty also remains negative. While such a constraint would guarantee stability for the uncertain system (without having to solve a min-max problem), initial feasibility of the optimization problem could not be guaranteed. In summary, the constraint of eq 4 represents one choice of stability constraint that achieves both stability and an explicit characterization of the feasibility region, without requiring min-max computations.

Remark 3. Regarding the choice of the control law of eq 5, we note that the problem of designing control laws that guarantee stability in the presence of input constraints has been extensively studied (see, for example, refs 2, 7, 34, and 35).
The bounded robust controller design of eq 5, proposed in ref 7 (inspired by the results on bounded control in ref 2 for systems without uncertainty), is an example of a controller design that (1) guarantees robust stability in the presence of constraints and (2) allows for an explicit characterization of the closed-loop stability region. The stability guarantees of the predictive controller of theorem 1 are not limited to this particular choice of controller, and any other robust controller that satisfies points 1 and 2 above can be used to formulate the constraint of eq 4. Furthermore, referring to the control law of eq 5 (and this holds for other robust control laws as well), it is important to note


that a general procedure for the construction of RCLFs for nonlinear systems of the form of eq 1 is currently not available. Yet, for several classes of nonlinear systems that arise commonly in the modeling of engineering applications, it is possible to exploit system structure to construct RCLFs. For example, for feedback linearizable systems, quadratic Lyapunov functions can be chosen as candidate RCLFs and made RCLFs with an appropriate choice of the function parameters based on the process parameters (see, for example, ref 32). Also, for nonlinear systems in strict feedback form, backstepping techniques can be employed for the construction of RCLFs (ref 32).

Remark 4. Note that, if the problem at hand were stabilization alone, one could have used a bounded robust control design to achieve stability and characterize the stability region. The analytical bounded robust control designs, however, do not have any mechanism for accounting for performance criteria and are not designed to be optimal with respect to a prespecified cost functional. Using a predictive control approach allows the computation of the control action in a way that accounts for performance objectives, while, at the same time, the proposed predictive controller does not sacrifice the stability guarantees and explicit characterization of the stability region. More importantly, the flexibility associated with specifying the objective function in the predictive control approach allows for "guiding" the state trajectory in a desirable fashion; this particular strength makes the proposed predictive controller well suited for handling faults in the control actuators, as described in the next section.

Remark 5. Note that the objective behind the proposed robust predictive control design is not that of providing robust stability guarantees to various predictive controllers (achieved in ref 23 via switching between the predictive controller and a backup controller) but that of formulating a robust predictive controller that can guarantee closed-loop stability from an explicitly characterized set of initial conditions, without requiring any switching to a fall-back controller. Note also that the fact that only practical stability (convergence to a desired neighborhood of the origin) is achieved under the predictive controller is not a limitation of the approach (or of the way the constraints are formulated) but is due to the discrete (implement and hold) nature of the control action; even if an analytical bounded robust controller were used in an implement-and-hold fashion, it too would achieve only practical stability, with the size of the desired neighborhood of the origin dictating the maximum allowable hold time.

Remark 6. While, for the sake of simplicity, the results in the present work have been presented under the assumption of vanishing disturbances, modifying the robust predictive control design to account for nonvanishing disturbances is relatively straightforward. The line of reasoning is as follows: Let d1 denote the neighborhood of the origin that the closed-loop state is required to converge to; then pick a d2 < d1 and parameters in the control design of eq 5 that achieve convergence to the d2 neighborhood of the origin under continuous implementation. This, in turn, implies that, under continuous implementation of the robust predictive controller (i.e., with Δ = 0), one can still achieve convergence to the same neighborhood of the origin.
Finally, compute Δ* such that, under sample-and-hold implementation with a sampling time less than Δ*, convergence to the neighborhood d1 is achieved. The reasoning can also be understood as follows: under vanishing disturbances, given an acceptable "loss" in convergence due to discrete implementation (i.e., requiring convergence to a neighborhood of the origin instead of to the origin itself),

one can come up with a bound on the implement-and-hold time such that convergence to the desired neighborhood of the origin is achieved. Similarly, under nonvanishing disturbances with discrete implementation, the loss in convergence to the origin is due to two factors: one due to the nonvanishing disturbance and the other due to discrete implementation; however, both can be made as small as desired via the appropriate choice of parameters.

Remark 7. One of the time-consuming calculations in computing solutions to nonlinear optimization problems subject to constraints is finding a feasible solution. In this light, the fact that the control action computed by the control law of eq 5 provides a feasible initial guess to the optimization problem positively impacts the computational complexity of the optimization problem. Furthermore, the fact that the stability guarantees provided by the optimization problem are independent of the horizon used in the predictive controller can further alleviate the computational complexity by allowing for a reduction in the size of the optimization problem, without concern for loss of stability properties. Note, also, that the stabilizing properties of the predictive controller are achieved via a feasible solution and do not need the solution to be the optimal one. It is also important to keep in mind that only the nominal model (without the uncertainty term) is used in specifying the objective function. This implies that the solution obtained is not optimal with respect to the objective function in the presence of uncertainty. More importantly, the fact that the optimization problem (for nonlinear systems) is typically nonconvex implies that solvers will, in general, find only locally optimal solutions. Neither of these factors, however, negatively impacts the stabilizing properties of the controller design, since stability depends on the feasibility of the optimization problem (not on finding an optimal solution). Furthermore, the set of initial conditions starting from where the optimization problem is guaranteed to be feasible is explicitly characterized. The ability to search for a feasible solution that is better with respect to the objective function (compared to the initial guess), however, has important implications, as evidenced in the fault-tolerant capabilities of the predictive controller design formulated in section 4.1.

3.2. Application to a Chemical Reactor. In this section, we use a chemical reactor example (also used in ref 23) to demonstrate the constraint and uncertainty handling capability of the robust predictive controller design. To this end, consider the following model of an irreversible elementary exothermic reaction of the form

$$A \xrightarrow{k} B$$

in a well-mixed continuous stirred tank reactor:

$$V_R\,\frac{dC_A}{dt} = F(C_{A0} - C_A) - k_0\exp\!\left(\frac{-E}{RT}\right)C_A V_R$$

$$V_R\,\frac{dT}{dt} = F(T_{A0} - T) - \frac{\Delta H}{\rho c_p}\,k_0\exp\!\left(\frac{-E}{RT}\right)C_A V_R + \frac{Q}{\rho c_p} \tag{13}$$

where CA denotes the concentration of species A; T and VR denote the temperature and volume of the reactor, respectively; Q denotes the rate of heat input to the reactor; k0, E, and ΔH denote the preexponential constant, the activation energy, and the enthalpy of the reaction, respectively; and cp and ρ denote the heat capacity and density of the fluid in the reactor, respectively. Table 1 lists the steady-state values and process parameters.


Table 1. Process Parameters and Steady-State Values

  VR   = 0.1 m³                     R    = 8.314 J/(mol K)
  CA0s = 1.0 kmol/m³                TA0s = 310.0 K
  ΔHn  = -4.78 × 10⁴ J/mol          k0   = 72 × 10⁹ min⁻¹
  E    = 8.314 × 10⁴ J/mol          cp   = 0.239 J/(g K)
  ρ    = 1000.0 kg/m³               F    = 0.1 m³/min
  TRs  = 395.3 K                    CAs  = 0.57 kmol/m³

The control objective is to stabilize the reactor at the (open-loop) unstable equilibrium point by manipulating both the rate of heat input/removal and the inlet reactant concentration. Defining x1 = CA - CAs, x2 = T - Ts, u1 = CA0 - CA0s, u2 = Q, θ1(t) = TA0 - TA0s, and θ2(t) = ΔH - ΔHn, where the subscript s denotes the steady-state value and ΔHn denotes the nominal value of the heat of reaction, the process model of eq 13 can be cast in the form of eq 1. In all simulation runs, θ1(t) = θ0 sin(3t), where θ0 = 0.08TA0s, θ2(t) = 0.5(-ΔHn), and the manipulated input constraints were |u1| ≤ 1.0 kmol/m³ and |u2| ≤ 92 kJ/s.

In the simulation example, we use a quadratic Lyapunov function in the robust model predictive design, and the parameters χ = 1.01, φ = 0.0001, and ρ = 0.01 were chosen in the control law of eq 5, with Δ = 0.005 min and the number of prediction steps N = 6, to drive the closed-loop state trajectory to the desired neighborhood Ωb of the origin (shown in Figure 1; verified via simulations); the robust feasibility (and stability) region Ω was explicitly characterized (see Figure 1). Note that the stability guarantees of the proposed predictive controller do not depend on the specific value of the horizon (or on it being "large" enough). Note also that, while a simple choice of the Lyapunov function may not yield the best estimates of the stability region, possible conservatism of the stability region obtained can only be gauged against the specific needs of a particular system under consideration, by analyzing whether the given estimate of the stability region covers a "wide enough" set of initial conditions. No meaningful comparison can be made against the largest possible set of initial conditions starting from where stability is achievable, because such a characterization is impossible for nonlinear uncertain systems. By providing an estimate of the stability region for nonlinear uncertain systems under predictive control, the proposed approach provides an opportunity to judge the stabilization capability of the predictive controller design and to modify it (if needed) via choosing a different Lyapunov function. Note also that possibly larger estimates of the stability region can be computed using constructive procedures such as Zubov's method (ref 36) or by using a combination of several Lyapunov functions. The set of nonlinear ordinary differential equations (ODEs) was integrated using the MATLAB solver ODE15s, and the optimization problem in the robust MPC was solved using the MATLAB nonlinear constrained optimization solver fmincon.
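As a rough illustration of how the reactor model is cast in the form of eq 1 and how a level-set estimate of the stability region can be obtained, consider the sketch below. The parameter values are taken from Table 1 at face value (no unit reconciliation is attempted), and the quadratic V, the way the two input bounds are folded into a single magnitude, and the sampling box are illustrative assumptions; they are not the Lyapunov function or the procedure used to generate Figure 1.

```python
import numpy as np

# CSTR of eq 13 in deviation variables (x1 = CA - CAs, x2 = T - Ts), written in
# the form of eq 1: x_dot = f(x) + G(x)u + W(x)theta. Table 1 values used as listed.
VR, F, k0, E, Rgas = 0.1, 0.1, 72e9, 8.314e4, 8.314
dHn, cp, rho_f = -4.78e4, 239.0, 1000.0           # cp converted to J/(kg K)
CA0s, TA0s, CAs, Ts = 1.0, 310.0, 0.57, 395.3

def rxn(x):                                       # k0*exp(-E/RT)*CA at the current state
    return k0 * np.exp(-E / (Rgas * (x[1] + Ts))) * (x[0] + CAs)

def f(x):                                         # drift term (u = 0, theta = 0)
    return np.array([F / VR * (CA0s - (x[0] + CAs)) - rxn(x),
                     F / VR * (TA0s - (x[1] + Ts)) - dHn / (rho_f * cp) * rxn(x)])

def G(x):                                         # u = [CA0 - CA0s, Q]
    return np.array([[F / VR, 0.0],
                     [0.0, 1.0 / (rho_f * cp * VR)]])

def W(x):                                         # theta = [TA0 - TA0s, dH - dHn]
    return np.array([[0.0, 0.0],
                     [F / VR, -rxn(x) / (rho_f * cp)]])

# Crude sampling estimate of a level set {x: V(x) <= c} contained in
# Pi(theta_b, u_max) = {x: alpha1(x) <= u_max*beta(x)} from section 3.1.
P = np.diag([1.0, 1e-4])                          # illustrative quadratic V(x) = x'Px
u_mag = np.linalg.norm([1.0, 92e3])               # both input bounds folded into one magnitude (a simplification)
theta_b = np.linalg.norm([0.08 * TA0s, 0.5 * abs(dHn)])
rho_c, chi, phi = 0.01, 1.01, 1e-4                # tuning parameters of eq 5

def in_Pi(x):
    dV = 2.0 * P @ x
    LfV, LGV, LWV = dV @ f(x), dV @ G(x), dV @ W(x)
    alpha1 = LfV + rho_c * np.linalg.norm(x) + chi * theta_b * np.linalg.norm(LWV)
    return alpha1 <= u_mag * np.linalg.norm(LGV)

rng = np.random.default_rng(0)
pts = rng.uniform([-0.6, -60.0], [0.6, 60.0], size=(20000, 2))
v_outside = [p @ P @ p for p in pts if not in_Pi(p)]
print("estimated c_max:", min(v_outside) if v_outside else "no violations in the sampled box")
```

A finer sampling grid (or constructive procedures such as Zubov's method, as noted above) would typically give a less conservative estimate.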

Figure 1. Evolution of the closed-loop state trajectory under the predictive controller requiring V̇ < 0 (dashed line) and V(t + Δ) < V(t) (dotted line) and under the proposed predictive controller of theorem 1 (solid line).

Figure 2. Evolution of the closed-loop state profiles under the predictive controller requiring V̇ < 0 (dashed line) and V(t + Δ) < V(t) (dotted line) and under the proposed predictive controller of theorem 1 (solid line).

Figure 3. Evolution of the closed-loop input profiles under the predictive controller requiring V̇ < 0 (dashed line) and V(t + Δ) < V(t) (dotted line) and under the proposed predictive controller of theorem 1 (solid line).

Following up on remark 2, we first investigate the possibility of using stability constraints other than the one of theorem 1. Specifically, we first consider using a stability constraint that only requires V̇ to be negative. As can be seen from the dotted lines in Figures 1 and 2, starting from an initial condition within Ω, the optimization problem continues to be initially and successively feasible; however, because the presence of uncertainty is not accounted for, convergence to the desired neighborhood of the origin is not achieved (the corresponding manipulated input profiles are shown by dotted lines in Figure 3). As another option, we consider a stability constraint that requires V(t + Δ) to be less than V(t). Once again, starting from an initial condition within Ω, the optimization problem continues to be


feasible; however, as before, the desired convergence to the origin is not achieved (dashed lines in Figures 1 and 2). Finally, if the robust model predictive control design of theorem 1 is used, the optimization problem not only continues to be initially and successively feasible, irrespective of the choice of the horizon (i.e., the number of prediction steps N), but also achieves convergence to the desired neighborhood of the origin (solid lines in Figures 1 and 2).

4. Achieving Fault-Tolerance via Robust Model Predictive Control

Having designed the robust model predictive controller in the previous section, we utilize the control design to address the problem of implementing fault-tolerant control in the presence of uncertainty and faults in the control actuators. The main idea behind the achievement of fault-tolerant control is to characterize the stability region under each candidate control configuration and to drive the state trajectory in a way that it enters the stability region of candidate backup configurations, allowing the possibility of activating a backup control configuration to achieve stabilization in the event of a failure in the primary control configuration. The explicit characterization of the stability region and the ability to guide the system trajectory in a desired fashion make the nonlinear robust predictive controller developed in the previous section the ideal choice for the control law under each candidate control configuration. Specifically, we consider systems represented by the following state-space description:

$$\dot{x} = f(x) + G_k(x)(u_k + m_k) + W_k(x)\theta_k(t), \qquad u_k \in U_k,\ \ \theta_k \in \Theta_k \tag{14}$$

where x(t) ∈ Rn denotes the vector of process state variables, uk(t) ∈ [-ukmax, ukmax] ⊂ R denotes the constrained manipulated input associated with the kth control configuration, mk(t) ∈ R denotes the fault in the kth control configuration, and K = {1, 2, ..., P} denotes the set of P available control configurations. For each value that k assumes in K, the process is controlled via a different manipulated input, which defines a given control configuration.

4.1. Fault-Tolerant Model Predictive Control of Uncertain Systems. The problem that we address is that of ensuring closed-loop stability in the event of a failure in the primary control configuration (note that this work does not focus on designing a fault-detection and isolation filter but assumes the availability of this information; for a recent work in this direction, see ref 30). In the rest of the section, we propose an optimization-based fault-tolerant control structure which (1) recognizes that a backup control configuration needs to be activated such that the state of the closed-loop system at the time of the failure resides in its stability region (i.e., closed-loop stability may not be achieved by simply activating any backup configuration) and (2) implements the robust predictive controller of section 3.1 to drive the system trajectory to reside in the stability region of some backup configuration should a failure occur.

To this end, consider the nonlinear uncertain system of eq 14, where, under each control configuration k, robust model predictive controllers of the form of eqs 2-6 have been designed to drive the state trajectory to a desired neighborhood of the origin, dk, and the stability regions under each control configuration, Ωj, j = 1, ..., P, have been explicitly characterized. Let dmax = max_{j=1,...,P} dj, where dj was defined in theorem 1, Δ ≤ Δ*min ≡ min_{j=1,...,P} Δ*j, and ΩU = ∪_{j=1,...,P} Ωj. Let k(0) = i for some i ∈ K, x(0) := x0 ∈ Ωi, and l ≠ i be such that Dl = min_{j∈K} Vj(x0)/cmax_j (Vl, therefore, denotes the value of the Lyapunov function of the "target" backup configuration; see remark 8). Theorem 2 below formalizes the result.

Theorem 2. Consider the closed-loop system of eq 14 under the primary control configuration, k(0) = i, under the predictive control law of the following form:

$$P_i(x,t):\quad \min\left\{J_i(x,t,u(\cdot))\ \middle|\ u(\cdot) \in S\right\} \tag{15}$$

$$\text{s.t.}\quad \dot{x} = f(x) + G_i(x)u_i \tag{16}$$

$$L_{G_i}V(x)\,u(t) \le L_{G_i}V(x)\,u_{bc}(x(t)) \tag{17}$$

with

$$J_i(x,t,u(\cdot)) = \int_t^{t+T}\left[\|x^u(s;x,t)\|_Q^2 + \|u(s)\|_R^2\right]ds + V_l(x(t+\Delta)) \tag{18}$$

Let Tfi be the earliest time that a fault occurs; then, the following switching rule,

$$k(t) = \begin{cases} i, & 0 \le t < T_f^{\,i} \\ j, & t \ge T_f^{\,i},\ \ x(T_f^{\,i}) \in \Omega_j \end{cases} \tag{19}$$

with the control action being computed using the predictive controller of theorem 1 under configuration j, guarantees that x(t) ∈ ΩU ∀ t ≥ 0 and limsup_{t→∞} ||x(t)|| ≤ dmax.

Proof of Theorem 2. We consider the two possible cases: first, if no switching occurs and, second, if a switch occurs at a time Tfi.

Case 1. The absence of a switch implies k(t) = i ∀ t ≥ 0. Note that the only difference between the control law of eqs 15-17 and that of eqs 2-4 is in the objective function. The stability properties of the control law depend on the stability constraints, and since these are the same in both formulations, the predictive controller of eqs 15-17 can also be shown to be robustly stabilizing with the same stability region. Since x(0) ∈ Ωi and control configuration i is implemented for all times in this case, we have that x(t) ∈ Ωi ∀ t ≥ 0 and limsup_{t→∞} ||x(t)|| ≤ di. Finally, since Ωi ⊆ ΩU and di ≤ dmax, we have that x(t) ∈ ΩU ∀ t ≥ 0 and limsup_{t→∞} ||x(t)|| ≤ dmax.

Case 2. At time Tfi, the reconfiguration takes place, and a control configuration j for which x(Tfi) ∈ Ωj is activated. From this time onward, since configuration j is implemented in the closed-loop system for all times under the predictive controller of theorem 1 and since x(Tfi) ∈ Ωj, we have that x(t) ∈ Ωj ∀ t ≥ 0 and limsup_{t→∞} ||x(t)|| ≤ dj. As in case 1, since Ωj ⊆ ΩU and dj ≤ dmax, we have that x(t) ∈ ΩU ∀ t ≥ 0 and limsup_{t→∞} ||x(t)|| ≤ dmax. This completes the proof of theorem 2.

Remark 8. The fault-tolerant controller is implemented as follows. (1) Given the nonlinear process of eq 14, identify the available control configurations k = 1, ..., P, and, for each control configuration, design the robust predictive controllers of eqs 2-4 and calculate an estimate of the stability regions Ωk, k = 1, ..., P. (2) Given any x0 ∈ Ωi, compute Vj(x0)/cmax_j for all j ≠ i, i.e., compute an estimate of the "distance" of the initial state from the boundary of the stability region of the candidate control configurations. Note that Vj(x0)/cmax_j ≤ 1 implies that the initial condition is in the stability region of the jth control configuration. If the initial condition is outside the stability region, Vj(x0)/cmax_j > 1, and the larger the ratio is, the farther the initial condition is from the stability region of a candidate backup


configuration. Denote the candidate backup configuration which is closest to the initial condition as the target configuration l. (3) Initialize the closed-loop system under the robust predictive controller of eqs 15-17. (4) At any time Tfi that a fault occurs, check, out of the available backup configurations, whether the state of the closed-loop system resides in the stability region estimate under the candidate control configuration (i.e., check whether x(Tfi) ∈ Ωj). Pick a control configuration for which the state of the closed-loop system resides in its stability region, and apply the robust predictive controller using this control configuration to achieve closed-loop stability (a minimal sketch of this reconfiguration logic is given after remark 10 below).

Remark 9. The choice of the lth configuration, for which Vl(x0)/cmax_l is the minimum, and its use within the objective function and in the switching logic can be explained as follows. Note that Vl(x0)/cmax_l is only one measure of how far/close the initial condition is from the stability region of a candidate control configuration; this measure being the smallest for a given candidate control configuration does not provide any guarantee that this candidate control configuration can be reached fastest under the implementation of the currently active control configuration (and subject to uncertainty). Hence, no claim is made that this is the "best" choice of the measure or that this is the candidate control configuration that should be used within the objective function; it is simply a choice. Another option could be to change the optimization problem in a way that the control action is computed to minimize the time that it takes for the closed-loop state to reach the stability region of the backup configuration. Solving such an optimization problem online, however, would be a very difficult task, and practically impossible if the uncertainty were also to be explicitly taken into consideration. Another seemingly attractive possibility could be to incorporate, as a constraint, the requirement that the closed-loop state enter the stability region of a candidate backup configuration by a certain time. Computing such a time, however, would once again be a difficult task for nonlinear systems with uncertainty. As opposed to all of these alternatives, the option used, that of incorporating the requirement in the objective function, provides a simple tool that may reduce the time that the closed-loop state trajectory takes to enter the stability region of the backup configuration and may result in enabling fault-tolerant control (see the simulation example for a demonstration).

Remark 10. Note that the proposed controller owes its fault-tolerant properties to both nonlinear control tools and the flexibility provided by the optimization-based predictive control approach. In setting up the objective function, the formulation of the penalty on the Lyapunov function value of a candidate backup configuration (not of the currently active configuration) is facilitated by the characterization of the stability regions in the form of level sets of the Lyapunov functions, which in turn exploits a Lyapunov-based analysis. The ability to incorporate these considerations in the objective function is possible due to the use of the optimization-based approach, and it allows the controller to take preventive measures (via guiding the state trajectory inside the stability region of the backup control configuration) to achieve fault-tolerant control.
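The following is a minimal sketch of the reconfiguration logic described in remark 8 and the switching rule of eq 19. The configuration data, the quadratic Lyapunov functions, and the level-set values are illustrative placeholders; the MPC computation itself (eqs 15-17) is not reproduced here, only the target-configuration selection, the augmented objective of eq 18, and the post-fault switch.

```python
import numpy as np

# Illustrative data for P = 2 control configurations: a quadratic Lyapunov
# function V_j(x) = x' P_j x and a stability-region estimate {x: V_j(x) <= c_j}
# for each. All numbers are placeholders, not the values used in the paper.
configs = {
    1: {"P": np.diag([1.0, 1.0]), "c_max": 2.0},
    2: {"P": np.diag([2.0, 0.5]), "c_max": 1.0},
}

def V(j, x):
    return x @ configs[j]["P"] @ x

def target_configuration(i, x0):
    """Remark 8, step 2: pick l != i minimizing V_j(x0)/c_j^max."""
    ratios = {j: V(j, x0) / configs[j]["c_max"] for j in configs if j != i}
    return min(ratios, key=ratios.get)

def augmented_cost(l, stage_cost, x_pred_dt):
    """Objective of eq 18: stage cost plus V_l evaluated at the predicted x(t + dt)."""
    return stage_cost + V(l, x_pred_dt)

def switching_rule(i, t, T_fault, x_at_fault):
    """Switching rule of eq 19: keep configuration i before the fault; after the
    fault, switch to a backup whose stability-region estimate contains x(T_fault)."""
    if T_fault is None or t < T_fault:
        return i
    for j in configs:
        if j != i and V(j, x_at_fault) <= configs[j]["c_max"]:
            return j
    return None   # no backup configuration can be guaranteed to stabilize

# Example usage with placeholder numbers.
i, x0 = 1, np.array([0.8, -0.4])
print("target configuration:", target_configuration(i, x0))
print("active configuration after a fault at t = 0.075:",
      switching_rule(i, t=0.1, T_fault=0.075, x_at_fault=np.array([0.3, 0.2])))
```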
Remark 11. Note that, other than proximity of the initial condition to the stability region, additional considerations, such as ease of operation or cost of operation, could be incorporated in picking the "preferred" backup configuration (i.e., the one which is incorporated in the cost function in theorem 2). Note also that, in the event that, at the time of the failure, the closed-loop state resides in the stability region of more than one control

Figure 4. Evolution of the closed-loop state trajectory under the primary configuration in the absence of faults (dashed line) and in the presence of a fault but without accounting for it in the design of the controller for the primary configuration (dotted line) and under the proposed fault-tolerant controller of theorem 2 (solid line).

configuration, similar performance considerations may be employed in picking the backup configuration that is implemented in the closed loop to achieve fault-tolerance.

Remark 12. Note that the proposed fault-tolerant control structure differs from other reconfiguration-based fault-tolerant control methods (e.g., ref 30) in the control designs it uses under each control configuration and, as a result, in the fault-tolerant capability. Specifically, the robust predictive control design provides an explicit characterization of the robust stability region under each backup configuration (this characterization determines which backup control configuration should be implemented in the closed loop), and the fault-tolerant controller uses the guiding capabilities of the predictive control approach in taking precautionary action (via driving the state trajectory into the stability region of the backup configurations) to enable fault-tolerant control, as opposed to only verifying whether fault-tolerant control can be implemented.

4.2. Application to a Chemical Reactor. Consider once again the chemical reactor of section 3.2, and let the robust predictive controller be designed as before. The problem we consider in this section is the following: if, at some time during operation, the control actuator manipulating the heat input to the reactor, Q, were to fail, how could we use a backup control configuration to maintain closed-loop stability? For the purpose of illustration, let the backup control configuration be one that uses the inlet concentration (as before) and temperature as the manipulated variables, with the constraint |u2| ≤ 100 K. As in section 3.2, quadratic Lyapunov functions were used in the robust model predictive designs under configurations 1 and 2, and, for configuration 2, the parameters χ = 1.5, φ = 0.0001, and ρ = 0.05 were chosen in the control law of eq 5, with Δ = 0.005 min and the number of prediction steps N = 6, to drive the closed-loop state trajectory to the neighborhood Ωb (shown in Figure 4); the robust feasibility (and stability) regions Ωp (the stability region of the primary configuration) and Ωs (the stability region of the secondary configuration) were explicitly characterized (see Figure 4).

The first simulation run, shown by the dashed lines in Figures 4-6, demonstrates the scenario where the primary configuration does not fail and, starting from an initial condition within the stability region of the primary control configuration, closed-loop stability is achieved. The second run (shown by dotted lines in Figures 4-6) considers, once again, the implementation of the predictive control design of theorem 1 (not of theorem 2), wherein the possibility of a fault is not taken into consideration; however, a fault occurs in the primary configuration at t = 0.075 min. At the time of the fault (at t = 0.075 min), the state of the closed-loop system is outside the stability region of the backup control configuration, and switching to


the backup configuration does not achieve closed-loop stability. Finally, when the fault-tolerant controller of theorem 2 is implemented, the control action computed by the controller in the primary configuration guides the system trajectory faster toward the stability region of the backup control configuration (solid lines in Figures 4-6). Compared to the control action implemented when not accounting for the possibility of the occurrence of a fault, the control action computed by the fault-tolerant controller of theorem 2 uses up as much control action as is available; see the inset of Figure 6. As a result, even though the fault once again takes place at the same time (at t = 0.075 min) and the system is initialized from the same initial condition, switching to a backup control configuration is able to achieve closed-loop stability. Note that, at the time of the fault, the state of the closed-loop system is still outside the stability region of the backup control configuration, but it is closer than it is in the first simulation run. The simulation result also underscores an important point: the stability region computed is only an estimate of the set of points starting from where the robust predictive controller can guarantee closed-loop stability. It provides guarantees for all initial conditions within the set; however, it is also possible to achieve closed-loop stability from points outside this set.

In summary, implementing the robust predictive control design that accounts for the occurrence of faults guides the system trajectory in a way that it moves faster toward the stability region of the candidate backup configuration and, together with verifying the presence of the closed-loop state in the stability region of the backup configuration (necessary to achieve fault-tolerance), incorporates appropriate penalties in the objective function that increase the chances of achieving fault-tolerant control.

Figure 5. Evolution of the closed-loop state profiles under the primary configuration in the absence of faults (dashed line) and in the presence of a fault but without accounting for it in the design of the controller for the primary configuration (dotted line) and under the proposed fault-tolerant controller of theorem 2 (solid line).

5. Conclusions

In this work, we considered the problem of stabilization of nonlinear systems subject to uncertainty and constraints and designed a robust model predictive controller that guarantees stabilization from an explicitly characterized set of initial conditions. We then utilized the optimality and constraint handling properties of the proposed predictive controller to design a fault-tolerant controller that achieves fault-tolerance through controller reconfiguration. The fault-tolerant controller uses the knowledge of the stability regions of the backup control configurations to guide the state trajectory into the stability region of the backup control configurations to enhance the fault-tolerance capabilities. The implementation of the proposed

Figure 6. Evolution of the closed-loop input profiles under the primary configuration in the absence of faults (dashed line) and in the presence of a fault but without accounting for it in the design of the controller for the primary configuration (dotted line) and under the proposed fault-tolerant controller of theorem 2 (solid line).


robust predictive controller, as well as the application to fault-tolerant control, was demonstrated via a chemical reactor example.

Literature Cited

(1) Kravaris, C.; Palanki, S. Robust nonlinear state feedback under structured uncertainty. AIChE J. 1988, 34, 1119.
(2) Lin, Y.; Sontag, E. D. A universal formula for stabilization with bounded controls. Syst. Control Lett. 1991, 16, 393.
(3) Valluri, S.; Soroush, M. Analytical control of SISO nonlinear processes with input constraints. AIChE J. 1998, 44, 116.
(4) Agamennoni, O.; Figueroa, J. L.; Palazoglu, A. Robust controller design under highly structured uncertainty. Int. J. Control 1998, 70, 721.
(5) Kapoor, N.; Daoutidis, P. Stabilization of nonlinear processes with input constraints. Comput. Chem. Eng. 2000, 24, 9.
(6) El-Farra, N. H.; Christofides, P. D. Integrating robustness, optimality, and constraints in control of nonlinear processes. Chem. Eng. Sci. 2001, 56, 1841.
(7) El-Farra, N. H.; Christofides, P. D. Bounded robust control of constrained multivariable nonlinear processes. Chem. Eng. Sci. 2003, 58, 3025.
(8) Mhaskar, P.; El-Farra, N. H.; Christofides, P. D. Hybrid predictive control of process systems. AIChE J. 2004, 50, 1242.
(9) Elisante, E.; Rangaiah, G. P.; Palanki, S. Robust controller synthesis for multivariable nonlinear systems with unmeasured disturbances. Chem. Eng. Sci. 2004, 59, 977.
(10) El-Farra, N. H.; Mhaskar, P.; Christofides, P. D. Output feedback control of switched nonlinear systems using multiple Lyapunov functions. Syst. Control Lett. 2005, 54, 1163.
(11) Bequette, W. B. Nonlinear control of chemical processes: A review. Ind. Eng. Chem. Res. 1991, 30, 1391.
(12) Christofides, P. D.; El-Farra, N. H. Control of Nonlinear and Hybrid Process Systems: Designs for Uncertainty, Constraints and Time-Delays; Springer: New York, 2005.
(13) Mayne, D. Q.; Rawlings, J. B.; Rao, C. V.; Scokaert, P. O. M. Constrained model predictive control: Stability and optimality. Automatica 2000, 36, 789.
(14) Bemporad, A.; Morari, M. Robust model predictive control: A survey. In Robustness in Identification and Control; Lecture Notes in Control and Information Sciences; Garulli, A., Tesi, A., Vicino, A., Eds.; Springer: Berlin, 1999; Vol. 245, p 207.
(15) Michalska, H.; Mayne, D. Q. Robust receding horizon control of constrained nonlinear systems. IEEE Trans. Autom. Control 1993, 38, 1623.
(16) Sarimveis, H.; Genceli, H.; Nikolaou, M. Design of robust nonsquare constrained model-predictive control. AIChE J. 1996, 42, 2582.
(17) Magni, L.; Nicolao, G.; Scattolini, R.; Allgower, F. Robust model predictive control for nonlinear discrete-time systems. Int. J. Rob. Nonlinear Control 2003, 13, 229.
(18) Langson, W.; Chryssochoos, I.; Rakovic, S. V.; Mayne, D. Q. Robust model predictive control using tubes. Automatica 2004, 40, 125.

(19) Wang, Y. J.; Rawlings, J. B. A new robust model predictive control method I: Theory and computation. J. Process Control 2004, 14, 231.
(20) Sakizlis, V.; Kakalis, N. M. P.; Dua, V.; Perkins, J. D.; Pistikopoulos, E. N. Design of robust model-based controllers via parametric programming. Automatica 2004, 40, 189.
(21) Mhaskar, P.; El-Farra, N. H.; Christofides, P. D. Predictive control of switched nonlinear systems with scheduled mode transitions. IEEE Trans. Autom. Control 2005, 50, 1670.
(22) Mhaskar, P.; El-Farra, N. H.; Christofides, P. D. Stabilization of nonlinear systems with state and control constraints using Lyapunov-based predictive control. Syst. Control Lett. 2006, in press.
(23) Mhaskar, P.; El-Farra, N. H.; Christofides, P. D. Robust hybrid predictive control of nonlinear systems. Automatica 2005, 41, 209.
(24) Ydstie, E. B. New vistas for process control: Integrating physics and communication networks. AIChE J. 2002, 48, 422.
(25) Yang, G. H.; Wang, J. L.; Soh, Y. C. Reliable H∞ control design for linear systems. Automatica 2001, 37, 717.
(26) Patton, R. J. Fault-tolerant control systems: The 1997 situation. In Proceedings of the IFAC Symposium SAFEPROCESS 1997; Hull, United Kingdom, 1997; p 1033.
(27) Zhou, D. H.; Frank, P. M. Fault diagnostics and fault tolerant control. IEEE Trans. Aerospace Electron. Syst. 1998, 34, 420.
(28) Bao, J.; Zhang, W. Z.; Lee, P. L. Decentralized fault-tolerant control system design for unstable processes. Chem. Eng. Sci. 2003, 58, 5045.
(29) Wu, N. E. Coverage in fault-tolerant control. Automatica 2004, 40, 537.
(30) Mhaskar, P.; Gani, A.; El-Farra, N. H.; McFall, C.; Christofides, P. D.; Davis, J. F. Integrated fault detection and fault-tolerant control of process systems. AIChE J. 2006, 52, 2129.
(31) Mhaskar, P.; Gani, A.; Christofides, P. D. Fault-tolerant control of nonlinear processes: Performance-based reconfiguration and robustness. Int. J. Rob. Nonlinear Control 2006, 16, 91.
(32) Freeman, R. A.; Kokotovic, P. V. Robust Nonlinear Control Design: State-Space and Lyapunov Techniques; Birkhauser: Boston, 1996.
(33) Kothare, S. L. D.; Morari, M. Contractive model predictive control for constrained nonlinear systems. IEEE Trans. Autom. Control 2000, 45, 1053.
(34) Teel, A. Global stabilization and restricted tracking for multiple integrators with bounded controls. Syst. Control Lett. 1992, 18, 165.
(35) Liberzon, D.; Sontag, E. D.; Wang, Y. Universal construction of feedback laws achieving ISS and integral-ISS disturbance attenuation. Syst. Control Lett. 2002, 46, 111.
(36) Dubljevic, S.; Kazantzis, N. A new Lyapunov design approach for nonlinear systems based on Zubov's method. Automatica 2002, 38, 1999.

Received for review February 24, 2006
Revised manuscript received April 20, 2006
Accepted April 21, 2006

IE060237P