
A Two-Stage Design of Two-Dimensional Model Predictive Iterative Learning Control for Nonrepetitive Disturbance Attenuation

Jingyi Lu, Zhixing Cao, Zhuo Wang, and Furong Gao*
Department of Chemical and Biomolecular Engineering, Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China

ABSTRACT: Combining model predictive control (MPC) with iterative learning control (ILC) is a widely applied strategy for controlling repetitive processes. Currently, there are two approaches to designing such a controller, namely the separate design and the integrated design. The separate design has better performance on nonrepetitive disturbance rejection, while the integrated design has a faster convergence rate. In this paper, by borrowing the idea of two-stage optimization, a new design is proposed to enhance the integrated combination of MPC with ILC. In this way, good performance and a fast convergence rate can be obtained simultaneously. Simulations are given to illustrate the effectiveness of the proposed algorithm on disturbance attenuation.

1. INTRODUCTION
Iterative learning control (ILC), initially proposed by Uchiyama1 and Arimoto et al.2 to control robots, provides a good approach to improving control performance for repetitive systems such as semiconductor processes,3 injection molding processes,4 and batch reactors.5 These processes are operated repetitively from cycle to cycle. If the time direction is denoted t and the cycle direction k, ILC determines the control action u(t, k) for the current cycle from the input and the tracking error of the previous cycle as u(t, k) = u(t, k − 1) + Ke(t + 1, k − 1). In this way, ILC does not require much knowledge of the system, so it performs well on repetitive systems despite model mismatch. Up to now, ILC has been well developed for a variety of purposes.6−8 However, it is essentially an open-loop control algorithm, and time-wise stability cannot be guaranteed when nonrepetitive disturbances exist. To deal with nonrepetitive disturbances along the cycle direction, feedback control is combined with ILC. Currently, there are many methods for combining ILC with feedback control, including robust control,9 adaptive control,10 model predictive control (MPC),11 and so on.12 Among feedback control strategies, MPC13 is widely applied in the process industries, as surveyed in Morari and Lee14 and Qin and Badgwell.15 Its ability to handle constraints, easy implementation, and good performance make MPC a powerful tool in the process industries. Thus, combining MPC with ILC is a good choice for controller design.

Under the framework of MPC, a combination of ILC with feedback control can be designed in an integrated manner5,11 or in a separate manner.16 In the integrated design, the previous cycle's prediction errors are directly incorporated into the prediction model of the current cycle. Repetitive and nonrepetitive disturbances are not explicitly differentiated, so nonrepetitive disturbances are over-rejected in the current cycle, and their impact lasts into the next cycle and even several following cycles. To alleviate this phenomenon, separate design methods were proposed to handle the nonrepetitive disturbances. In these methods, the optimization is conducted in two steps. The first step, conducted at the beginning of each cycle, applies ILC based only on the previous cycle's information to obtain uILC(t, k). Then, with real-time information, the second-step optimization is conducted to obtain uFB(t, k). The final control law is u(t, k) = uFB(t, k) + uILC(t, k).

Both of these methods have their own advantages and disadvantages. The integrated design is essentially a two-time-dimensional feedback control; with such a controller, the system is easier to stabilize along the time direction even when model mismatch is significant, and the convergence rate is faster. However, its performance may not be good when nonrepetitive disturbances are significant. The separate design has better performance on disturbance rejection, but its convergence rate is slower.

In this paper, we aim to propose a new design method that is a compromise between these two methods, based on the structure of feedforward MPC.17 The control input is separated into two parts, u1 and u2. First, u1 is used to reject the nonrepetitive disturbances. Then, u2 is derived to reject the repetitive and remaining part. The method is an extension of Shi et al.11 and can be considered as embedding a feedforward controller before conducting the feedback control of Shi et al.11 Therefore, the method inherits nice properties such as fast convergence from Shi et al.11 In addition, since a feedforward control is conducted beforehand, the performance can be improved.

This paper is arranged as follows. In section 2, details of the new two-stage MPC design are given, and how the method helps to attenuate nonrepetitive disturbances is briefly analyzed. In section 3, simulations are conducted to show the effectiveness of the method. Finally, conclusions are drawn.
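As an illustration of the basic ILC update law mentioned above, a minimal sketch of a P-type ILC iteration is given below; the gain K, the cycle length, and the stored signals are placeholders rather than quantities from this paper.

import numpy as np

def ilc_update(u_prev, e_prev, K):
    """P-type ILC: u(t, k) = u(t, k-1) + K * e(t+1, k-1).

    u_prev : input trajectory of cycle k-1, shape (N,)
    e_prev : tracking error of cycle k-1, shape (N+1,)
    K      : learning gain (scalar here for simplicity)
    """
    # shift the error by one step so that e(t+1, k-1) corrects u(t, k)
    return u_prev + K * e_prev[1:]

# hypothetical usage with a 100-step cycle
N = 100
u_k = np.zeros(N)                 # input of the previous cycle
e_k = np.ones(N + 1)              # placeholder tracking error of the previous cycle
u_next = ilc_update(u_k, e_k, K=0.5)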


2. TWO-STAGE MODEL PREDICTIVE CONTROL

2.1. Problem Formulation. In general, a repetitive system can be represented in state space as

x(t + 1, k) = Ax(t, k) + Bu(t, k)   (1)

y(t, k) = Cx(t, k) + dr(t) + di(t, k)   (2)

Here, t ∈ [0, N] is the time index, N is the length of a cycle, and k ∈ [1, ∞) is the cycle index. Assume that the system is of order n1 and has m outputs and n inputs; then the states x(t, k) ∈ ℝ^n1, the inputs u(t, k) ∈ ℝ^n, and the outputs y(t, k) ∈ ℝ^m. The matrices A, B, and C have the corresponding dimensions. dr(t), di(t, k) ∈ ℝ^m denote exogenous disturbances on the outputs: dr(t) is repetitive along the cycle direction, while di(t, k) is nonrepetitive. In industrial applications, however, it is impossible to obtain accurate A, B, and C, because the process dynamics may be time varying or nonlinear. By system identification, only a nominal model, which is an approximation of the real process, can be obtained. Therefore, a better way to represent the real process is

x(t + 1, k) = (A + ΔA(t, k))x(t, k) + (B + ΔB(t, k))u(t, k)   (3)

y(t, k) = (C + ΔC(t, k))x(t, k) + dr(t) + di(t, k)   (4)

Here, ΔA, ΔB, and ΔC are bounded unknown parameters. Since the system is under repetitive operation, the uncertainty along the cycle direction is repetitive, namely ΔA(t, k) ≈ ΔA(t, k − 1), ΔB(t, k) ≈ ΔB(t, k − 1), and ΔC(t, k) ≈ ΔC(t, k − 1). Then, for k ≥ 2, the difference can be taken along the cycle direction as

Δk x(t + 1, k) = (A + ΔA(t, k))Δk x(t, k) + (B + ΔB(t, k))Δk u(t, k)   (5)

y(t, k) = (C + ΔC(t, k))Δk x(t, k) + Δk di(t, k) + y(t, k − 1)   (6)

with Δk x(t, k) = x(t, k) − x(t, k − 1) and the other differenced terms defined in a similar way. By taking such a difference, the repetitive disturbance dr is eliminated. Furthermore, if Δk u and Δk x are restricted to be small, the uncertainty terms can also be regarded as repetitive and eliminated, since products of terms such as ΔA(t, k) and Δk x(t, k) make the uncertainty even smaller. By ignoring the uncertainty terms, the prediction model based on the difference becomes

Δk x(t + 1, k) = AΔk x(t, k) + BΔk u(t, k)   (7)

y(t + 1, k) = CΔk x(t + 1, k) + Δk di(t + 1, k) + y(t + 1, k − 1)   (8)

Designing a model predictive controller based on eqs 7 and 8 is the basic idea of Shi et al.11 They have also shown that the method converges at an exponential rate even when there is significant model mismatch. However, the method did not explicitly address nonrepetitive disturbance rejection. In this paper, on the basis of the work in Shi et al.,11 we propose a new two-stage MPC for systems with nonrepetitive disturbances and show that the tracking performance can be further improved.
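As a minimal sketch of how eqs 7 and 8 propagate a prediction over a horizon, the following function uses the nominal matrices and data stored from cycle k − 1; the argument shapes and any numerical values supplied to it are assumptions for illustration only.

import numpy as np

def predict_over_horizon(A, B, C, dx0, du_seq, ddi_seq, y_prev):
    """Predict y(t+1, k), ..., y(t+Pn, k) from the difference model (eqs 7 and 8).

    dx0     : Delta_k x(t, k) = x(t, k) - x(t, k-1), shape (n1,)
    du_seq  : planned Delta_k u(t+i, k), i = 0..Pn-1, shape (Pn, n)
    ddi_seq : Delta_k d_i(t+i+1, k), assumed known or estimated, shape (Pn, m)
    y_prev  : stored y(t+1, k-1), ..., y(t+Pn, k-1), shape (Pn, m)
    """
    Pn = du_seq.shape[0]
    dx = dx0.copy()
    y_pred = np.zeros_like(y_prev)
    for i in range(Pn):
        dx = A @ dx + B @ du_seq[i]                    # eq 7
        y_pred[i] = C @ dx + ddi_seq[i] + y_prev[i]    # eq 8
    return y_pred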

2.2. Controller Design. Assume that the nonrepetitive disturbances can be modeled as in Chin et al.16 and that the true model of the system is

x(t + 1, k) = Ā(t)x(t, k) + B̄(t)u(t, k)   (9)

di(t + 1, k) = Φ̄(t)di(t, k)   (10)

y(t, k) = C̄(t)x(t, k) + dr(t) + di(t, k)   (11)

Assume that the identified model has the same structure as eqs 9−11 but with coefficient matrices A, B, Φ, and C. First, assume that all the states and disturbances are measurable. Then, the identified system can be split into two subsystems. The first subsystem is

S1:
x1(t + 1, k) = Ax1(t, k) + Bu1(t, k)
di(t + 1, k) = Φdi(t, k)
y1(t, k) = Cx1(t, k) + di(t, k)

with x1(t, k) = x(t, k) − x(t, k − 1). The second subsystem is

S2:
x(t + 1, k) = Ax(t, k) + Bu1(t, k) + Bu2(t, k)
di(t + 1, k) = Φdi(t, k)
y(t, k) = Cx(t, k) + dr(t) + di(t, k)   (12)

Then, the prediction model of S1 can be derived as

x1p(t + i + 1, k) = Ax1p(t + i, k) + Bu1p(t + i, k)   (13)

dip(t + i + 1, k) = Φdip(t + i, k)   (14)

y1p(t + i + 1, k) = Cx1p(t + i + 1, k) + dip(t + i + 1, k)   (15)

and that of S2 as

xp(t + i + 1, k) = Axp(t + i, k) + Bu1p(t + i, k) + Bu2p(t + i, k) + x(t + i + 1, k − 1) − Ax(t + i, k − 1) − Bu(t + i, k − 1)   (16)

dip(t + i + 1, k) = Φdip(t + i, k)   (17)

yp(t + i + 1, k) = Cxp(t + i + 1, k) + y(t + i + 1, k − 1) − Cx(t + i + 1, k − 1) − di(t + i + 1, k − 1) + dip(t + i + 1, k)   (18)

Here, x1p(t, k) = x(t, k) − x(t, k − 1), and i = 0, 1, ..., Pn − 1, where Pn is the prediction horizon. For simplicity, the control horizon is also taken as Pn. Then, a quadratic optimization based on eqs 13−15 is conducted to derive u1p:

min over u1p(t, k), ..., u1p(t + Pn − 1, k):   Σ_{i=0}^{Pn−1} ( ‖y1p(t + i + 1, k)‖² + q1‖u1p(t + i, k)‖² )   (19)

s.t. eqs 13−15 and u1p(t, k), ..., u1p(t + Pn − 1, k) ∈ 𝒰

Then, by incorporating the derived u1p into eqs 16−18, the second-stage optimization can be conducted to derive u2p:


min over u2p(t, k), ..., u2p(t + Pn − 1, k):   Σ_{i=0}^{Pn−1} ( ‖yr(t + i + 1) − yp(t + i + 1, k)‖² + q2‖u2p(t + i, k) − u2(t + i, k − 1)‖² )   (20)

s.t. eqs 16−18 and u1p(t, k) + u2p(t, k), ..., u1p(t + Pn − 1, k) + u2p(t + Pn − 1, k) ∈ 𝒰

Here, yr is the reference, q1 and q2 are the penalty weights on the input terms, and 𝒰 is the feasible set of inputs. By such a two-stage optimization, u1p and u2p from t to t + Pn − 1 can be derived. According to the receding-horizon strategy of MPC, only the first input is implemented, namely

u(t, k) = u1p(t, k) + u2p(t, k)
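A schematic of the two-stage optimization in eqs 19 and 20 can be written with a generic convex-optimization package. The sketch below uses cvxpy with box input constraints; the model matrices, horizon, weights, and stored cycle-(k − 1) trajectories are placeholders, and the snippet illustrates the formulation rather than the implementation used in this paper.

import numpy as np
import cvxpy as cp

# placeholder nominal model, disturbance model, and tuning (not from this paper)
n1, n, m, Pn = 2, 1, 1, 10
A = np.array([[1.5, -0.6], [1.0, 0.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[0.5, 0.4]])
Phi = np.array([[0.9]])
q1, q2, u_max = 0.04, 0.04, 5.0

# measured quantities and stored previous-cycle data (all placeholders)
x_now = np.zeros(n1)              # x(t, k)
dx_now = np.zeros(n1)             # x(t, k) - x(t, k-1)
di_now = np.zeros(m)              # d_i(t, k)
x_prev = np.zeros((Pn + 1, n1))   # x(t+i, k-1)
u_prev = np.zeros((Pn, n))        # u(t+i, k-1)
u2_prev = np.zeros((Pn, n))       # u_2(t+i, k-1)
y_prev = np.zeros((Pn + 1, m))    # y(t+i, k-1)
di_prev = np.zeros((Pn + 1, m))   # d_i(t+i, k-1)
yr = np.ones((Pn + 1, m))         # reference over the window

# predicted nonrepetitive disturbance, eqs 14 and 17
di_pred = np.zeros((Pn + 1, m))
di_pred[0] = di_now
for i in range(Pn):
    di_pred[i + 1] = Phi @ di_pred[i]

# stage 1 (eq 19): reject the nonrepetitive factors with u1
x1 = cp.Variable((Pn + 1, n1))
u1 = cp.Variable((Pn, n))
cons1 = [x1[0] == dx_now]
cost1 = 0
for i in range(Pn):
    cons1 += [x1[i + 1] == A @ x1[i] + B @ u1[i]]            # eq 13
    cons1 += [cp.abs(u1[i]) <= u_max]
    y1 = C @ x1[i + 1] + di_pred[i + 1]                      # eq 15
    cost1 += cp.sum_squares(y1) + q1 * cp.sum_squares(u1[i])
cp.Problem(cp.Minimize(cost1), cons1).solve()
u1_opt = u1.value

# stage 2 (eq 20): track the reference with u2, reusing the derived u1
xp = cp.Variable((Pn + 1, n1))
u2 = cp.Variable((Pn, n))
cons2 = [xp[0] == x_now]
cost2 = 0
for i in range(Pn):
    cons2 += [xp[i + 1] == A @ xp[i] + B @ u1_opt[i] + B @ u2[i]
              + x_prev[i + 1] - A @ x_prev[i] - B @ u_prev[i]]           # eq 16
    cons2 += [cp.abs(u1_opt[i] + u2[i]) <= u_max]
    yp = (C @ xp[i + 1] + y_prev[i + 1] - C @ x_prev[i + 1]
          - di_prev[i + 1] + di_pred[i + 1])                             # eq 18
    cost2 += cp.sum_squares(yr[i + 1] - yp) + q2 * cp.sum_squares(u2[i] - u2_prev[i])
cp.Problem(cp.Minimize(cost2), cons2).solve()

# receding horizon: apply only the first input of the cycle-k plan
u_apply = u1_opt[0] + u2.value[0]

In this sketch, the second-stage dynamics constraint carries the previous-cycle residual x(t + i + 1, k − 1) − Ax(t + i, k − 1) − Bu(t + i, k − 1) from eq 16, which is what injects the cycle-wise learning into the prediction.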

When there are no constraints, or the constraints are loose enough, the analytical solution of the two-step optimization can be obtained as

u1p(t:t+Pn−1, k) = −(G2ᵀG2 + Q1)⁻¹G2ᵀ[G1(x(t, k) − x(t, k − 1)) + G3 di(t, k)]

u2p(t:t+Pn−1, k) = u2(t:t+Pn−1, k − 1) + (G2ᵀG2 + Q2)⁻¹G2ᵀ[e(t+1:t+Pn, k − 1) − G1(x(t, k) − x(t, k − 1)) − G2 u1p(t:t+Pn−1, k) + G2 u1(t:t+Pn−1, k − 1) − G3(di(t, k) − di(t, k − 1))]

where u(t:t+Pn−1, k) and e(t+1:t+Pn, k) denote the input and tracking-error trajectories stacked over the prediction window, and

G1 = [CA; CA²; CA³; ⋮; CA^Pn],
G2 = [CB 0 ⋯ 0; CAB CB ⋯ 0; ⋮ ⋮ ⋱ ⋮; CA^{Pn−1}B CA^{Pn−2}B ⋯ CB],
G3 = [Φ; Φ²; ⋮; Φ^Pn]

Then, the inputs can be derived as

up(t:t+Pn−1, k) = u1p(t:t+Pn−1, k) + u2p(t:t+Pn−1, k)
= u(t:t+Pn−1, k − 1) + (G2ᵀG2 + Q2)⁻¹G2ᵀ e(t+1:t+Pn, k − 1)
− (G2ᵀG2 + Q2)⁻¹[Q2(G2ᵀG2 + Q1)⁻¹G2ᵀ + Is] G1 x(t, k)
+ (G2ᵀG2 + Q2)⁻¹ G1 x(t, k − 1)
− (G2ᵀG2 + Q2)⁻¹ Q2 (G2ᵀG2 + Q1)⁻¹G2ᵀ G1 x(t, k − 2)
− (G2ᵀG2 + Q2)⁻¹[Q2(G2ᵀG2 + Q1)⁻¹G2ᵀ + Is] G3(di(t, k) − di(t, k − 1))   (21)

Here, Is ∈ ℝ^{mPn×mPn} is an identity matrix, and Q1, Q2 ∈ ℝ^{nPn×nPn} are diagonal matrices with q1 and q2 on their diagonals, respectively.
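When the constraints are inactive, the stacked matrices and the closed-form expressions above can be evaluated directly. The following sketch assembles G1, G2, and G3 and computes the unconstrained u1p and u2p as reconstructed above; the model matrices, weights, and stored trajectories passed to it are placeholders.

import numpy as np

def stacked_matrices(A, B, C, Phi, Pn):
    """Build G1, G2, G3 used in the unconstrained two-stage solution."""
    m, n1 = C.shape
    n = B.shape[1]
    G1 = np.vstack([C @ np.linalg.matrix_power(A, i + 1) for i in range(Pn)])
    G3 = np.vstack([np.linalg.matrix_power(Phi, i + 1) for i in range(Pn)])
    G2 = np.zeros((m * Pn, n * Pn))
    for r in range(Pn):
        for c in range(r + 1):
            G2[r * m:(r + 1) * m, c * n:(c + 1) * n] = (
                C @ np.linalg.matrix_power(A, r - c) @ B)
    return G1, G2, G3

def two_stage_unconstrained(G1, G2, G3, q1, q2, dx, di, ddi, e_prev, u1_prev, u2_prev):
    """Unconstrained u1p and u2p over the horizon (cf. the closed-form solution above).

    dx, di, ddi      : x(t,k)-x(t,k-1), d_i(t,k), d_i(t,k)-d_i(t,k-1)
    e_prev           : stacked tracking error e(t+1:t+Pn, k-1), shape (m*Pn,)
    u1_prev, u2_prev : stacked u_1 and u_2 of cycle k-1 over the window, shape (n*Pn,)
    """
    Q1 = q1 * np.eye(G2.shape[1])
    Q2 = q2 * np.eye(G2.shape[1])
    M1 = np.linalg.inv(G2.T @ G2 + Q1)
    M2 = np.linalg.inv(G2.T @ G2 + Q2)
    u1p = -M1 @ G2.T @ (G1 @ dx + G3 @ di)
    u2p = u2_prev + M2 @ G2.T @ (e_prev - G1 @ dx - G2 @ u1p + G2 @ u1_prev - G3 @ ddi)
    return u1p, u2p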

Subsystem S1 characterizes the nonrepetitive factors, including the disturbances and the difference between the states x(t, k) and x(t, k − 1). The first-stage optimization derives an input u1 to reject all the nonrepetitive factors. In the ideal case, i.e., when the model is accurate and the constraints are loose enough, q1 can be set to zero and the optimal y1p = 0. Then, when the second optimization is conducted, the nonrepetitive factors have already been canceled by this u1, and u2 is only used to reject the repetitive part. When model mismatch exists or constraints are active, q1 cannot be set to zero and y1p ≠ 0; however, u1 can still reject most of the nonrepetitive factors as long as the model used in the MPC captures the major dynamics of the system. This is verified by the simulations later.

As analyzed above, the nonrepetitive disturbances are mainly rejected by u1 separately. Therefore, they do not affect the input in the (k + 1)st cycle, owing to the minimization of ‖u2(t, k + 1) − u2(t, k)‖ in the second stage of optimization. Different from the method in Chin et al.,16 both u1 and u2 in this approach are derived online, whereas in Chin et al.16 the first part of the input is derived before the start of each cycle. In this way, more online information can be integrated, so that the tracking errors converge at a faster rate from the beginning of the process. Compared with Shi et al.,11 the new method is equivalent to incorporating a feedforward control into Shi et al.11 Notice that if q1 is taken to be extremely large, the optimal solution of the first-stage optimization becomes u1p(t, k) = u1p(t + 1, k) = ... = u1p(t + Pn − 1, k) = 0, and the new method degenerates into that of Shi et al.11 Therefore, the new method inherits its nice properties, including fast convergence. In addition, conducting the optimization twice in this way increases the degrees of freedom of the method; hence, the method has a better ability to reject nonrepetitive disturbances than Shi et al.11

In the above setting, the states and disturbances are assumed to be measurable. If this is not the case, an observer can be designed to estimate the states. For instance, the state-space model can be augmented as

[x(t + 1, k); di(t + 1, k)] = [A 0; 0 Φ][x(t, k); di(t, k)] + [B; 0]u(t, k)   (22)

y(t, k) = [C I][x(t, k); di(t, k)] + dr(t)   (23)

Batch-wise differences can be taken to remove the repetitive term dr(t):

[Δk x(t + 1, k); Δk di(t + 1, k)] = [A 0; 0 Φ][Δk x(t, k); Δk di(t, k)] + [B; 0]Δk u(t, k)   (24)

Δk y(t, k) = [C I][Δk x(t, k); Δk di(t, k)]   (25)

If the pair ([A 0; 0 Φ], [C I]) is observable, Δk x(t, k) and Δk di(t, k) can be estimated, and the estimation can be more accurate than directly using eqs 22 and 23. In the controller design, only the value of Δk x(t, k), rather than x(t, k), is needed.


In addition, di(t, k) can be computed as di(t, k) = Δk di(t, k) + di(t, k − 1). On the basis of eqs 24 and 25, the states can be easily estimated by applying a Luenberger observer.18,19 When stochastic disturbances are considered, a Kalman filter20 can be applied. Moreover, when the system has uncertainty or model mismatch, robust filters such as those in Masreliez and Martin21 and Xie et al.22 can be applied to guarantee that the estimation errors are bounded. Since results and theories on estimation are rich and mature, we do not discuss this issue explicitly here, so that we can focus on the design of the control algorithm.

Remark: this control method is applicable when k ≥ 2. For k = 1, since no historical information is available, no cycle-wise learning can be conducted; therefore, any one-dimensional method can be applied. For ease of implementation, one simple way to design the first batch is to remove the first stage of optimization and assume x(t, 0) = 0 and u2(t, 0) = 0 for all t = 0, 1, 2, ..., N. Then, the controller is a classical state-space model predictive controller.13
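As a minimal sketch of the observer route in eqs 22−25, the code below forms the augmented matrices, checks observability of the pair, and places the observer poles with scipy; the numerical values of A, B, C, Φ and the chosen poles are placeholders, not values from this paper.

import numpy as np
from scipy.signal import place_poles

# placeholder nominal model and disturbance dynamics
A = np.array([[1.5, -0.6], [1.0, 0.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[0.5, 0.4]])
Phi = np.array([[0.9]])
n1, m = A.shape[0], C.shape[0]

# augmented system of eqs 24 and 25 (batch-wise differences, d_r removed)
A_aug = np.block([[A, np.zeros((n1, m))], [np.zeros((m, n1)), Phi]])
B_aug = np.vstack([B, np.zeros((m, B.shape[1]))])
C_aug = np.hstack([C, np.eye(m)])

# observability check for the pair ([A 0; 0 Phi], [C I])
obs = np.vstack([C_aug @ np.linalg.matrix_power(A_aug, i) for i in range(A_aug.shape[0])])
assert np.linalg.matrix_rank(obs) == A_aug.shape[0], "augmented pair is not observable"

# Luenberger observer gain via pole placement (discrete-time poles inside the unit circle)
L = place_poles(A_aug.T, C_aug.T, [0.2, 0.3, 0.4]).gain_matrix.T

# one observer update for the difference states:
# z = [Delta_k x; Delta_k d_i],  z+ = A_aug z + B_aug Delta_k u + L (Delta_k y - C_aug z)
def observer_step(z, dku, dky):
    return A_aug @ z + B_aug @ dku + L @ (dky - C_aug @ z)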

3. SIMULATIONS
In this section, we first compare the newly proposed method with the two-stage method in Chin et al.16 Then, we apply the method to control the injection velocity in an injection molding process and compare the control performance with Shi et al.11

3.1. Case 1: Comparison with Chin et al.16 Following the first example in Chin et al.,16 we assume that the dynamics of the plant are

G(s) = 2.5 / (300s² + 35s + 1)

The nominal model used in the controller is

G̅(s) = 1.5 / (270s² + 33s + 1)

The sample period is one, and the simulation is run for 100 time steps. A unit step disturbance filtered by 1/(10s + 1) enters the plant in the 46th cycle from t = 30 to the end, and di(t, k) is assumed to be measurable at (t, k). The reference to be tracked is shown in Figure 1. The parameters of Chin et al.16 are taken as Q = I, S = 0.01I, Λ = I, Γ = 0.01I, and m = 30, the same as in the original paper. For the method proposed in this paper, the prediction horizon is taken as 30, and the penalty weights are q1 = q2 = 0.04 when k ≥ 2; when k = 1, the first-stage optimization is removed and q2 = 0.04.

Figure 1. Case 1: reference.

Figure 2. Case 1: comparison between the proposed method and the method in Chin et al.16 on tracking error.

Figure 3. Case 1: comparison between the proposed method and the method in Chin et al.16 on outputs in the 46th, 47th, and 48th cycles.

Figure 3 shows the outputs from the 46th cycle to the 48th cycle. From this figure, we can see that the outputs in the 46th cycle are disturbed by the nonrepetitive disturbance but recover quickly in the 47th cycle. Therefore, both methods show the ability to reject the independent disturbance. However, from Figure 2 it is noticeable that in the first few cycles the tracking errors of Chin et al.16 are larger than those of the new method, and the convergence rate of the new method is faster, which shows the superiority of the new method. This superiority is due to model mismatch. To further verify this conclusion, assume that the real model of the plant is

G̃(s) = 0.8 / (15s² + 8s + 1)

which has a significant mismatch in both magnitude and phase compared with G̅(s). Figure 4 shows the tracking errors of the two methods. When the model mismatch is more significant, the tracking errors become larger, but the new method still converges faster than the method in Chin et al.16 Therefore, the new method is more suitable than that of Chin et al.16 for systems with significant model mismatch.
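For reference, the Case 1 setup can be reproduced numerically by discretizing the plant, the nominal model, and the disturbance filter at the stated sample period. The sketch below uses scipy for zero-order-hold discretization and only builds the plant and disturbance signals; the controller itself is omitted, and the helper name c2d_tf is ours.

import numpy as np
from scipy.signal import cont2discrete, dlsim

Ts, N = 1.0, 100                                  # sample period and cycle length

def c2d_tf(num, den, Ts):
    # zero-order-hold discretization of a continuous transfer function
    numd, dend, _ = cont2discrete((num, den), Ts, method='zoh')
    return np.squeeze(numd), dend, Ts

plant_d   = c2d_tf([2.5], [300.0, 35.0, 1.0], Ts)   # true plant G(s)
nominal_d = c2d_tf([1.5], [270.0, 33.0, 1.0], Ts)   # nominal model used for control
filter_d  = c2d_tf([1.0], [10.0, 1.0], Ts)          # disturbance filter 1/(10s + 1)

t = np.arange(N + 1) * Ts
step = np.where(t >= 30.0, 1.0, 0.0)                # unit step starting at t = 30
_, d_i = dlsim(filter_d, step, t=t)                 # filtered nonrepetitive disturbance
d_i = d_i.ravel()                                   # applied in the 46th cycle only

# open-loop response of the true plant to a placeholder input trajectory
u = np.zeros(N + 1)
_, y = dlsim(plant_d, u, t=t)
y = y.ravel() + d_i                                 # disturbance acts additively on the output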

Figure 4. Case 1: comparison between the proposed method and the method in Chin et al.16 on error with significant model mismatch.

Figure 6. Case 2: comparison between the proposed method and the method in Shi et al.11 on tracking error.

3.2. Case 2: Comparison with Shi et al.11 In this part, we take the control of the injection velocity as an example. The injection velocity is a key variable in the filling stage of injection molding; it must be controlled to follow a given reference to ensure product quality. According to Wang et al.,23 the response of the injection velocity to the proportional valve is

y(t, k) = 1.582y(t − 1, k) − 0.5916y(t − 2, k) + 1.69u(t − 1, k) + 1.419u(t − 2, k)

Assume that the real plant is

y(t, k) = 1.882y(t − 1, k) − 0.5916y(t − 2, k) + 1.09u(t − 1, k) + 1.419u(t − 2, k) + 0.015(u(t − 1, k) − 1)²y(t − 1, k)

Figure 7. Case 2: outputs in the 46th, 47th, and 48th cycles of the proposed method.

There is model mismatch in both the zeros and the poles of the two systems in terms of the linear part, and an additional nonlinear term is added to the plant. Similar to Case 1, a filtered nonrepetitive step disturbance di(t, k) enters the system in the 46th cycle from t = 30 to t = 60, and di(t, k) is measurable at (t, k). The inputs of the system are constrained as

−5 ≤ u(t, k) ≤ 5
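The Case 2 models can be simulated directly from the difference equations above; the following sketch implements the nominal model and the mismatched "real" plant with the input clipped to the stated bounds, and the disturbance sequence is left as a placeholder.

import numpy as np

U_MIN, U_MAX = -5.0, 5.0

def nominal_step(y1, y2, u1, u2):
    """Nominal model: y(t,k) = 1.582 y(t-1,k) - 0.5916 y(t-2,k) + 1.69 u(t-1,k) + 1.419 u(t-2,k)."""
    return 1.582 * y1 - 0.5916 * y2 + 1.69 * u1 + 1.419 * u2

def real_plant_step(y1, y2, u1, u2):
    """Mismatched plant with an additional small nonlinear term."""
    return (1.882 * y1 - 0.5916 * y2 + 1.09 * u1 + 1.419 * u2
            + 0.015 * (u1 - 1.0) ** 2 * y1)

def simulate_cycle(u_seq, d_seq=None, plant=real_plant_step):
    """Run one cycle with zero initial conditions; d_seq is an additive output disturbance."""
    N = len(u_seq)
    d_seq = np.zeros(N) if d_seq is None else d_seq
    u_seq = np.clip(u_seq, U_MIN, U_MAX)          # enforce -5 <= u(t, k) <= 5
    y = np.zeros(N)
    for t in range(N):
        y1 = y[t - 1] if t >= 1 else 0.0
        y2 = y[t - 2] if t >= 2 else 0.0
        u1 = u_seq[t - 1] if t >= 1 else 0.0
        u2 = u_seq[t - 2] if t >= 2 else 0.0
        y[t] = plant(y1, y2, u1, u2) + d_seq[t]
    return y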

The reference to be tracked is shown in Figure 5. The prediction horizon of the new method is taken as 30.

Figure 8. Case 2: inputs in the 46th, 47th, and 48th cycles of the proposed method.

Figure 5. Case 2: reference.

The penalty weights are taken as q1 = q2 = 3 when k ≥ 2, and q2 = 3 when k = 1 with the first stage removed. The parameters of Shi et al.11 are taken as n1 = n2 = 30, η = 1, α = β = 0, and γ = 3. Both methods are well tuned. Figure 6 shows that the convergence rates of the two methods are almost the same; however, the new method has better performance on nonrepetitive disturbance rejection. This can be further verified by comparing Figures 7 and 9. Figure 9 shows that, with the method in Shi et al.,11 the output in the 47th cycle is greatly affected by the disturbance in the 46th cycle, whereas with the new method the output in the 47th cycle is only slightly affected, as shown in Figure 7. The reason is that in the 46th cycle, through the two-step optimization of the new method, the nonrepetitive disturbances are rejected by u1(t, k − 1) in the first step. In the 47th cycle, minimizing ‖u2(t, k) − u2(t, k − 1)‖ in the second step therefore does not drag u2(t, k) away from the desired value. This can also be seen by comparing the inputs in Figures 8 and 10: in Figure 8, the input of the new method in the 47th cycle quickly returns to the desired value, whereas in Figure 10 the input of Shi et al.11 in the 47th cycle stays much closer to the input of the 46th cycle. Therefore, the new method has a better ability to reject nonrepetitive disturbances than the method in Shi et al.11




Figure 9. Case 2: outputs in the 46th, 47th, and 48th cycles of the method in Shi et al.11

Figure 10. Case 2: inputs in the 46th, 47th, and 48th cycles of the method in Shi et al.11

In addition, Figures 8 and 10 also show that the inputs do not violate the constraints. The above two examples indicate that the new method converges faster than the method in Chin et al.16 when model mismatch is significant and rejects nonrepetitive disturbances better than the method in Shi et al.11

4. CONCLUSIONS
In this paper, a two-stage design of model predictive iterative learning control has been proposed for better rejection of nonrepetitive disturbances. The original system incorporated with ILC is divided into two subsystems corresponding to nonrepetitive and repetitive disturbances, respectively, and a model predictive controller is designed separately for each subsystem. In this way, repetitive and nonrepetitive disturbances are rejected separately, and the impact of nonrepetitive disturbances on the following cycles is attenuated. The method is essentially a feedback strategy; therefore, good stability and fast convergence can be guaranteed from the very beginning.

AUTHOR INFORMATION
Corresponding Author
*E-mail: [email protected].
Notes
The authors declare no competing financial interest.

ACKNOWLEDGMENTS
The authors acknowledge the financial support from the Hong Kong Research Grant Council (Project No. 612512), the Guangzhou scientific and technological project (12190007), and the Academician Workstation Project of Guangdong Province (2012B090500010).

REFERENCES
(1) Uchiyama, M. Formulation of high-speed motion pattern of a mechanical arm by trial (in Japanese). Trans. SICE 1978, 14, 706−712.
(2) Arimoto, S.; Kawamura, S.; Miyazaki, F. Bettering operation of dynamic systems by learning: A new control theory for servomechanism or mechatronics systems. IEEE Proc. Decision Control 1984, 23, 1064−1069.
(3) Yang, D. R.; Lee, K. S.; Ahn, H. J.; Lee, J. H. Experimental application of a quadratic optimal iterative learning control method for control of wafer temperature uniformity in rapid thermal processing. IEEE Trans. Semicond. Manuf. 2003, 16, 36−44.
(4) Gao, F.; Yang, Y.; Shao, C. Robust iterative learning control with applications to injection molding process. Chem. Eng. Sci. 2001, 56, 7025−7034.
(5) Lee, K. S.; Chin, I.-S.; Lee, H. J.; Lee, J. H. Model predictive control technique combined with iterative learning for batch processes. AIChE J. 1999, 45, 2175−2187.
(6) Bristow, D. A.; Tharayil, M.; Alleyne, A. G. A survey of iterative learning control. IEEE Control Syst. 2006, 26, 96−114.
(7) Chen, Y.; Gong, Z.; Wen, C. Analysis of a high-order iterative learning control algorithm for uncertain nonlinear systems with state delays. Automatica 1998, 34, 345−353.
(8) Saab, S. S. A stochastic iterative learning control algorithm with application to an induction motor. Int. J. Control 2004, 77, 144−163.
(9) Shi, J.; Gao, F.; Wu, T.-J. Robust design of integrated feedback and iterative learning control of a batch process based on a 2D Roesser system. J. Process Control 2005, 15, 907−924.
(10) Tayebi, A. Adaptive iterative learning control for robot manipulators. Automatica 2004, 40, 1195−1203.
(11) Shi, J.; Gao, F.; Wu, T.-J. Single-cycle and multi-cycle generalized 2D model predictive iterative learning control (2D-GPILC) schemes for batch processes. J. Process Control 2007, 17, 715−727.
(12) Amann, N.; Owens, D. H.; Rogers, E. Iterative learning control using optimal feedback and feedforward actions. Int. J. Control 1996, 65, 277−293.
(13) Camacho, E. F.; Alba, C. B. Model Predictive Control; Springer: New York, 2013.
(14) Morari, M.; Lee, J. H. Model predictive control: Past, present, and future. Comput. Chem. Eng. 1999, 23, 667−682.
(15) Qin, S. J.; Badgwell, T. A. A survey of industrial model predictive control technology. Control Eng. Pract. 2003, 11, 733−764.
(16) Chin, I.; Qin, S. J.; Lee, K. S.; Cho, M. A two-stage iterative learning control technique combined with real-time feedback for independent disturbance rejection. Automatica 2004, 40, 1913−1922.
(17) Carrasco, D. S.; Goodwin, G. C. Feedforward model predictive control. Ann. Rev. Control 2011, 35, 199−206.
(18) Luenberger, D. G. Observing the state of a linear system. IEEE Trans. Mil. Electron. 1964, 8, 74−80.
(19) Luenberger, D. G. Observers for multivariable systems. IEEE Trans. Autom. Control 1966, 11, 190−197.
(20) Kalman, R. E. A new approach to linear filtering and prediction problems. J. Fluids Eng. 1960, 82, 35−45.
(21) Masreliez, C.; Martin, R. Robust Bayesian estimation for the linear model and robustifying the Kalman filter. IEEE Trans. Autom. Control 1977, 22, 361−371.
(22) Xie, L.; Soh, Y. C.; de Souza, C. E. Robust Kalman filtering for uncertain discrete-time systems. IEEE Trans. Autom. Control 1994, 39, 1310−1314.
(23) Wang, Y.; Shi, J.; Zhou, D.; Gao, F. Iterative learning fault-tolerant control for batch processes. Ind. Eng. Chem. Res. 2006, 45, 9050−9060.
