
Ind. Eng. Chem. Res. 2008, 47, 4418–4424

Frequency Domain Approach to Computing Loop Phase Margins of Multivariable Systems

Zhen Ye, Qing-Guo Wang,* and Chang Chieh Hang

Department of Electrical and Computer Engineering, National University of Singapore, 10 Kent Ridge Crescent, Singapore 119260

The loop phase margins of multivariable control systems are defined as the allowable individual loop phase perturbations within which stability of the closed-loop system is guaranteed. This paper presents a frequency domain approach to accurately computing these phase margins for multivariable systems. With the help of a unitary mapping between two complex vector spaces, the MIMO phase margin problem is converted, using Nyquist stability analysis, to a simple constrained optimization problem, which is then solved numerically with the Lagrange multiplier method and the Newton-Raphson iteration algorithm. The proposed approach can provide exact margins and thus improves the linear matrix inequality (LMI) results reported before, which could be conservative.

1. Introduction

Phase margin measures how far a system is from instability when only phase change is allowed, which is important in control theory because it reflects the relative stability or robustness of the closed-loop system.1 For a single-input-single-output (SISO) system, the phase margin is well-defined and can be easily determined from the Nyquist plot or Bode diagram based on the Nyquist stability theorem. However, it is not straightforward to extend it to a multi-input-multi-output (MIMO) system because of the coupling among loops as well as the complexity of matrix perturbations of unity size with different directions.2 Although Gershgorin bands can be used to find the phase margin of MIMO systems,3 this method gives conservative results because it requires diagonal dominance of the system. Bar-on and Jonckheere presented a definition for the phase margin of a MIMO system,4 where stability of the closed-loop system is guaranteed if the phase perturbation of a unitary matrix in the feedback path is less than the phase margin of the system. Such a definition allows the perturbations to range over the entire set of unitary matrices, not necessarily diagonal ones. While this is a nice formulation, permissible perturbations in this class are simply too rich to visualize intuitively and to connect to phase changes of individual loops, which practical control engineers are used to. Thus, a more direct and useful class of phase perturbations is that of diagonal phase perturbations. This corresponds to a multivariable control system where each loop has some phase perturbation but no gain change. Even in this case, the problem is not simple, as one cannot calculate the phase margin of each loop separately due to loop interactions; that is, one loop's phase margin depends on the others'. Recently, Wang et al. proposed a time domain method to obtain the loop phase margins for multivariable systems.5 The basic idea is motivated by the link between phase lag and time delay. For an MIMO system under decentralized delay feedback, the stabilizing ranges of all time delays are obtained by an LMI delay-dependent stability criterion with the help of the free-weighting-matrix method.6-8 Then, these stabilizing ranges of time delays are converted into stabilizing ranges of phases by multiplying by some fixed frequency based on a proposition in ref 4, and these ranges are taken as the loop phase margins.

* To whom correspondence should be addressed. E-mail: elewqg@nus.edu.sg. Telephone: +65-6516-2282. Fax: +65-6779-1103.

Although the proposed method provides a systematic approach to calculate the loop phase margins for any given MIMO system, some conservativeness still exists: (1) the LMI delay-dependent stability criterion is sufficient only and not necessary, which is common to all LMI techniques; (2) the fixed frequency determined by ref 4 is underestimated because the phase perturbation there is not required to be diagonal.

This paper aims to reduce the above conservativeness of ref 5. Rather than a time domain method, a frequency domain approach is proposed for accurately computing loop phase margins for multivariable systems. Based on the work of ref 4 and with the help of a unitary mapping between two complex vector spaces, the MIMO phase margin problem is converted to a simple constrained optimization problem, which is then solved numerically with the Lagrange multiplier method and the Newton-Raphson iteration algorithm. New constraints are added in the optimization problem such that the diagonal structure of the phase perturbations is maintained. Accordingly, loop phase margins are well-defined and easily determined.

The paper is organized as follows. Section 2 presents the problem formulation with a formal definition of the loop phase margins for multivariable systems. Section 3 provides a numerical algorithm to calculate loop phase margins by finding solutions to an equivalent constrained optimization problem. Examples are given to show the effectiveness of the proposed method in section 4, where comparisons with the methods of refs 4 and 5 are also presented. Section 5 finally draws the conclusions.

2. Loop Phase Margins of Multivariable Control Systems

In ref 5, the loop phase margins problem is formulated as follows:

Problem 2.1. For an m × m square open loop, G(s), under the decentralized phase perturbation ∆ = diag{e^{jφ1}, ..., e^{jφm}}, find the ranges (φ̲i, φ̄i), -π ≤ φ̲i < φ̄i < π, i = 1, ..., m, such that the closed-loop system is stable when φi ∈ (φ̲i, φ̄i) for all i, but marginally stable when φi = φ̲i or φi = φ̄i for some i.

Definition 2.1. The solution to Problem 2.1, φi ∈ (φ̲i, φ̄i), is called the phase margin of the ith loop of G(s) under the other loops' phases φj ∈ (φ̲j, φ̄j), j ≠ i, i = 1, ..., m. If φ̲i = φ̲j = φ̲ and φ̄i = φ̄j = φ̄, then (φ̲, φ̄) is called the common phase margin of G(s).
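To make Problem 2.1 concrete: for a stable open loop G(s), the closed loop under ∆ = diag{e^{jφ1}, ..., e^{jφm}} is unstable exactly when det(I + G(jω)∆) encircles the origin (generalized Nyquist criterion). The sketch below is our own illustration, not part of the paper; the helper name destabilized and the crude grid-based winding count are assumptions.

```python
import numpy as np

def destabilized(G_of_jw, phis, wmax=100.0, n=200001):
    """Crude generalized-Nyquist test for a *stable* open loop G(s):
    count encirclements of the origin by det(I + G(jw) Delta),
    Delta = diag(exp(j*phi_i)), as w sweeps (-wmax, wmax)."""
    m = len(phis)
    Delta = np.diag(np.exp(1j * np.asarray(phis)))
    w = np.linspace(-wmax, wmax, n)
    d = np.array([np.linalg.det(np.eye(m) + G_of_jw(wi) @ Delta) for wi in w])
    # net winding of the locus about the origin, in turns
    winding = np.sum(np.diff(np.unwrap(np.angle(d)))) / (2 * np.pi)
    return abs(winding) > 0.5

# A 2x2 first-order plant (same form as Example 1 later in the paper)
G = lambda w: np.array([[2.5, 1.0], [3.0, 4.0]]) / (1.0 + 1j * w)
print(destabilized(G, [0.0, 0.0]))     # nominal loop (phi = 0): stable, so False
print(destabilized(G, [-2.0, -2.0]))   # equal phase lags beyond the margins: True
```

This brute-force test only checks one phase pair at a time; the rest of the paper is about computing the whole stabilizing region efficiently.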



Figure 1. Diagram of a MIMO control system.

To obtain the loop phase margins, a time domain approach is proposed in ref 5 with two steps. First, consider the MIMO system under decentralized delay feedback and obtain the stabilizing ranges of the time delays for each loop based on LMI techniques with a delay-dependent stability criterion. Then, convert the stabilizing ranges of time delays into stabilizing ranges of phases by multiplying by some fixed frequency determined by the following proposition.

Proposition 2.1.4 There exists a unitary perturbation ∆ destabilizing G(jω) if and only if there exists an ω such that 0 ≤ σ̲(G(jω)) ≤ 1 ≤ σ̄(G(jω)).

It should be pointed out that the loop phase margins obtained with this time domain approach are indeed stability margins but may not be exact or the maximum margins available. This is due to the conservativeness introduced in both steps. For the first step, the LMI-based delay-dependent stability criterion gives only sufficient, but not necessary, conditions for stability of the closed-loop system under loop delay perturbations. This conservativeness is common to all LMI techniques. For the second step, the frequency is underestimated because the phase perturbation in ref 4 is not required to be diagonal. To reduce this conservativeness, a frequency domain approach is proposed, which is demonstrated in the following section.

3. Proposed Approach to Computing Loop Phase Margins

It follows from Definition 2.1 that the loop phase margin of a given multivariable system is the polytope in the m-dimensional real vector space representing the m independent loop phase perturbations. To find such a stabilizing region, we try to locate its boundary. Consider the unity output feedback system depicted in Figure 1, where G(s) represents the open-loop transfer function matrix of size m × m, and ∆(s) = diag{e^{jφi}}, i = 1, 2, ..., m, is the diagonal phase perturbation matrix. Note that, unlike a common robust stability analysis where the nominal case means ∆(s) = 0, our nominal case means no phase perturbations, i.e., φi = 0, i = 1, 2, ..., m, and thus ∆(s) = Im, the identity matrix. Except for the above difference, we follow the typical robust stability analysis framework. In particular, we assume, throughout this paper, nominal stabilization of the closed-loop system; that is, the closed-loop system is stable when ∆(s) = Im. By the assumed nominal stabilization, the system can be destabilized if and only if there is a phase perturbation ∆ such that

det(I + G(jω)∆) = 0    (1)

which is equivalent to the existence of some unit vector z ∈ C^m such that

z = ∆v = -∆Gz    (2)

where "-" denotes the negative feedback configuration. Thus, ∆ is a unitary matrix which maps the unit vector v into z. If all solutions to (2), z and v, can be found, the boundary points, φi, i = 1, 2, ..., m, are simply the phase angles of the element-wise quotients of z and v. However, solutions to (2) do not always exist for ∀ω ∈ (-∞, +∞), since solutions to (1) are frequency-dependent. Hence, first the frequency range, Ω, has to be determined with the help of Proposition 2.1 or µ-analysis9 to guarantee the existence of all solutions to (2). Then, within Ω, numerical solutions to (2) can be found by the Newton-Raphson algorithm in the framework of the proposed constrained optimization.

Let z = [z1, z2, ..., zm]^T and v = [v1, v2, ..., vm]^T. The holding of (2) yields zk/vk = e^{jφk}, where φk is the phase change from vk to zk. Note that φk ± 2nπ, n ∈ N, are also solutions; here we limit φk ∈ [-π, π), since the nominal system (φi = 0, i = 1, 2, ..., m) is stable according to our assumption. Suppose that φ̄ = max{|φk|} and φ̲ = min{|φk|}; the inner product of v and z is

⟨v, z⟩ = v*z = Σ_{k=1}^{m} vk* zk = Σ_{k=1}^{m} e^{jφk} vk* vk = Σ_{k=1}^{m} |vk|^2 cos φk + j Σ_{k=1}^{m} |vk|^2 sin φk

where

Σ_{k=1}^{m} |vk|^2 cos φk = Σ_{k=1}^{m} |vk|^2 cos|φk| ≥ cos φ̄ Σ_{k=1}^{m} |vk|^2 = cos φ̄

To ensure that φ̄ = max{|φk|} really holds, cos φ̄ has to be minimized, which can be achieved by minimizing its upper bound Σ_{k=1}^{m} |vk|^2 cos φk. Likewise,

Σ_{k=1}^{m} |vk|^2 cos φk = Σ_{k=1}^{m} |vk|^2 cos|φk| ≤ cos φ̲ Σ_{k=1}^{m} |vk|^2 = cos φ̲

and cos φ̲ has to be maximized to ensure that φ̲ = min{|φk|} really holds, which can be achieved by maximizing its lower bound Σ_{k=1}^{m} |vk|^2 cos φk. Clearly,

2 Σ_{k=1}^{m} |vk|^2 cos φk = v*z + z*v = -[z*(G* + G)z]

Maximizing Σ_{k=1}^{m} |vk|^2 cos φk is therefore equivalent to minimizing z*(G* + G)z. Note that z*(G* + G)z is a quadratic form of z (convex); its maximum/minimum can be easily determined by numerical methods. Hence, we choose z*(G* + G)z as the cost function. Now we check the constraints on z. A diagonal unitary mapping via z = ∆v yields |zk| = |vk|, i.e., zk*zk = vk*vk, k = 1, 2, ..., m. Note that zk*zk = z*Hk z, where Hk = [h_{i,j}] ∈ R^{m×m} is given by

h_{i,j} = { 1, i = j = k;  0, otherwise }

and vk*vk = v*Hk v = z*G*Hk Gz since v = -Gz. Thus, zk*zk = vk*vk yields z*(Hk - G*Hk G)z = 0. On the other hand, the unit vectors z and v yield z*z = 1 and v*v = z*G*Gz = 1. If z*z = 1, then v*v = Σ_{k=1}^{m} vk*vk = Σ_{k=1}^{m} zk*zk = z*z = 1 holds naturally due to the diagonal nature of ∆, which implies only m + 1 independent constraints, as follows:

z*z = 1
z*(Hk - G*Hk G)z = 0,   k = 1, 2, ..., m    (3)

Thus, finding the stabilizing boundary of the loop phase perturbations is equivalently converted to the following constrained minimization problem:

min  z*(G* + G)z
s.t. z*z = 1
     z*(Hk - G*Hk G)z = 0,   k = 1, 2, ..., m    (4)
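As a concrete illustration of (3) and (4), the cost and constraint values can be evaluated directly from a frequency-response matrix. The short Python sketch below is our own (numpy assumed; the names cost and constraint_residuals are illustrative, not from the paper); it builds the Hk matrices and returns z*(G* + G)z together with the m + 1 residuals of (3) for a candidate vector z.

```python
import numpy as np

def cost(z, G):
    # cost of (4): z*(G* + G)z, real because G* + G is Hermitian
    return np.real(z.conj() @ (G.conj().T + G) @ z)

def constraint_residuals(z, G):
    # residuals of (3): z*z - 1 and z*(Hk - G*HkG)z for k = 1..m
    m = G.shape[0]
    res = [np.real(z.conj() @ z) - 1.0]
    for k in range(m):
        Hk = np.zeros((m, m))
        Hk[k, k] = 1.0
        res.append(np.real(z.conj() @ (Hk - G.conj().T @ Hk @ G) @ z))
    return np.array(res)

# e.g. the 2x2 plant of Example 1 below, evaluated at omega = 1
G = np.array([[2.5, 1.0], [3.0, 4.0]]) / (1.0 + 1.0j)
z = np.array([1.0, 1.0j]) / np.sqrt(2.0)   # some unit vector, not yet feasible
print(cost(z, G), constraint_residuals(z, G))
```

A feasible z drives all residuals to zero; the boundary phases then follow from φk = arg(zk/vk) with v = -Gz, as developed below.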

On the contrary, if Σ_{k=1}^{m} |vk|^2 cos φk needs to be minimized, an equivalent constrained maximization framework can be constructed in a similar way. Here, we focus on the constrained minimization problem (4) only and omit its counterpart, since the numerical algorithm proposed to solve both of them is completely the same. The proposed constrained optimization problem (4) is solved with the approach of Lagrange multipliers.10 Let

F(κ) = z*(G* + G)z + λ1(z*z - 1) + Σ_{k=1}^{m} λ_{k+1} z*(Hk - G*Hk G)z

where κ = [z1, ..., zm, λ1, λ2, ..., λ_{m+1}]^T. The constrained optimization problem (4) can be solved by finding zeros of the following function:

f(κ) = ∂F(κ)/∂κ = [ (G* + G)z + λ1 z + Σ_{k=1}^{m} λ_{k+1}(Hk - G*Hk G)z
                    z*z - 1
                    z*(H1 - G*H1 G)z
                    ⋮
                    z*(Hm - G*Hm G)z ]    (5)

Numerical solutions to (5) are obtained by the Newton-Raphson algorithm:

κ_{n+1} = κ_n - J^{-1}[f(κ_n)] f(κ_n)    (6)

where

J[f(κ)] = ∂f(κ)/∂κ = ∂²F(κ)/∂κ²
        = [ (G* + G) + λ1 Im + Σ_{k=1}^{m} λ_{k+1}(Hk - G*Hk G)    z    (H1 - G*H1 G)z   ···   (Hm - G*Hm G)z
            2z*                                                    0    0                ···   0
            2z*(H1 - G*H1 G)                                       0    0                ···   0
            ⋮                                                      ⋮    ⋮                      ⋮
            2z*(Hm - G*Hm G)                                       0    0                ···   0 ]

is the Jacobian matrix of f(κ). If J is singular, a Moore-Penrose inverse is used.11 Once the iteration routine converges to a zero of f(κ), the eigenvalues of the Hessian matrix

H = ∂²F/∂z² = (G* + G) + λ1 Im + Σ_{k=1}^{m} λ_{k+1}(Hk - G*Hk G)    (7)

are calculated to see whether it is a local minimum or maximum. For a local minimum (or maximum), a new initial search direction is chosen as the negative of the eigenvector of H corresponding to the most positive (or negative) eigenvalue, so as to reach the local maximum (or minimum). Since the cost function and the constraints in (4) are quadratic forms of z, the local minimum (or maximum) is also the global minimum (or maximum).

It should be pointed out here that z, v ∈ C^m will lead to failure in solving the optimization problem (4), because neither the cost function nor the constraints are holomorphic functions of z or ω.12 Fortunately, the standard technique of converting (4) to an equivalent real constrained optimization problem is applicable by the process of decomplexification, which makes use of the canonical isomorphism between C^m and R^{2m}. Let zk = xk + jyk, xk, yk ∈ R, k = 1, 2, ..., m; zc = [x1, y1, ..., xm, ym]^T ∈ R^{2m}; G_{i,j} = x_{i,j} + j y_{i,j}, x_{i,j}, y_{i,j} ∈ R, i, j = 1, 2, ..., m; and

Gc = [ x_{1,1}  -y_{1,1}   ···   x_{1,m}  -y_{1,m}
       y_{1,1}   x_{1,1}   ···   y_{1,m}   x_{1,m}
       ⋮          ⋮                ⋮          ⋮
       x_{m,1}  -y_{m,1}   ···   x_{m,m}  -y_{m,m}
       y_{m,1}   x_{m,1}   ···   y_{m,m}   x_{m,m} ] ∈ R^{2m×2m}

Then there hold z*(G* + G)z = zc^T(Gc^T + Gc)zc, z*z = zc^T zc, and z*(Hk - G*Hk G)z = zc^T(Hck - Gc^T Hck Gc)zc, k = 1, 2, ..., m, where Hck = [h_{i,j}] ∈ R^{2m×2m} with

h_{i,j} = { 1, i = j = 2k - 1 or i = j = 2k;  0, otherwise }

Thus, the constrained optimization (4) in C^m is equivalent to the following optimization problem in R^{2m}:

min  zc^T(Gc^T + Gc)zc
s.t. zc^T zc = 1
     zc^T(Hck - Gc^T Hck Gc)zc = 0,   k = 1, 2, ..., m    (8)

The Newton-Raphson iteration algorithm is then used to calculate the stabilizing boundary of the diagonal phase perturbation. Once the boundary is obtained, hypercubes can be prescribed within it and the loop phase margins are easily determined according to Definition 2.1.

There is still one thing left: determine the frequency range, Ω, such that the solution to (4) or (8) may exist. From Proposition 2.1, let Ω̂ = {ω | 0 ≤ σ̲(G(jω)) ≤ 1 ≤ σ̄(G(jω))}; then Ω ⊆ Ω̂, because ∆ in Proposition 2.1 is not limited to be diagonal, which implies that Ω is overestimated by Ω̂. Note that the singular values are the square roots of the eigenvalues of the cascade system G*(s)G(s). Suppose the state-space representation of G(s) is

ẋ1 = A x1 + B u,   y1 = C x1

Then G(s) = C(sI - A)^{-1}B and G*(s) = G^T(-s) = [C(-sI - A)^{-1}B]^T = -B^T(sI + A^T)^{-1}C^T. The state-space representation of G*(s) is

ẋ2 = -A^T x2 + C^T y1,   y2 = -B^T x2

The cascade system G*(s)G(s) is then written as

ẋ = Ã x + B̃ u,   y2 = C̃ x,   where x = [x1; x2], Ã = [A, 0; C^T C, -A^T], B̃ = [B; 0], C̃ = [0, -B^T]

To obtain the set Ω̂, we may begin with finding the ωi such that a singular value of G(jωi) equals 1, which implies that det[I - G*(jωi)G(jωi)] = det[I - C̃(jωi I - Ã)^{-1}B̃] = det[I - B̃C̃(jωi I - Ã)^{-1}] = det[jωi I - (Ã + B̃C̃)] det(jωi I - Ã)^{-1} = 0. Suppose that G(s) has no poles on the imaginary axis, which is the case for most MIMO plants in practice; then det(jωI - Ã) ≠ 0 for ∀ω. Thus, det[jωi I - (Ã + B̃C̃)] = 0 yields that the jωi are the pure imaginary eigenvalues of (Ã + B̃C̃). Note that σ(G(jω)) is a continuous function of ω, and between two consecutive ωi and ωi+1 no other ω ∈ (ωi, ωi+1) exists such that σ(G(jω)) = 1; otherwise,


ωi and ωi+1 would not be consecutive any more. This implies that σ(G(jω)) is always greater than or less than 1 for ∀ω ∈ (ωi, ωi+1). Hence, by calculating σ̄(G(jω)) and σ̲(G(jω)) for one ω ∈ (ωi, ωi+1), we know whether (ωi, ωi+1) ⊆ Ω̂. By Lemma 1 in ref 5, the set of such ω is symmetric with respect to the origin, so only positive ω need to be checked, which simplifies the calculation.

For every ω ∈ Ω̂, z can be found from (2) by solving the equivalent constrained optimization problem. Since Ω̂ overestimates Ω, some ω ∈ Ω̂ may cause the Newton-Raphson algorithm to diverge, which implies that no diagonal phase perturbation exists to destabilize the closed-loop system at that frequency. The algorithm to find the loop phase margins is summarized as follows:

Step 1. Determine the frequency range Ω such that solutions to (1) or (2) exist.
Step 2. Construct the constrained optimization (4), which is then converted equivalently to its isomorphic form (8) in real space.
Step 3. For every ω ∈ Ω, solve (8) with the Lagrange multiplier method and find z by the Newton-Raphson iteration (6).
Step 4. Use similar procedures as in Step 3 to solve the maximization counterpart of (8) with different initial values.
Step 5. The points on the stabilizing boundary of the loop phase margins are given by φi = arg{zi/vi}, i = 1, 2, ..., m, and the loop phase margins are hypercubes prescribed in the stabilizing region.
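The steps above can be prototyped in a few dozen lines. The sketch below is a minimal illustration of Steps 2-3 and of iteration (6) in the decomplexified coordinates of (8); it is our own code (numpy assumed, names such as kkt_newton are ours), it uses a Moore-Penrose pseudoinverse when J is singular as suggested above, and it scales the stationarity equations by 1/2, which does not change their zero set.

```python
import numpy as np

def decomplexify(G):
    """Real 2m x 2m matrix Gc: the complex map z -> Gz becomes zc -> Gc zc,
    where zc = [x1, y1, ..., xm, ym] stacks real and imaginary parts of z."""
    m = G.shape[0]
    Gc = np.zeros((2 * m, 2 * m))
    for i in range(m):
        for j in range(m):
            a, b = G[i, j].real, G[i, j].imag
            Gc[2 * i:2 * i + 2, 2 * j:2 * j + 2] = [[a, -b], [b, a]]
    return Gc

def kkt_newton(G, zc0, lam0, steps=100, tol=1e-10):
    """Newton-Raphson on the Lagrange conditions of problem (8), cf. iteration (6)."""
    m = G.shape[0]
    Gc = decomplexify(G)
    S = Gc.T + Gc
    Hc = [np.diag([1.0 if q // 2 == k else 0.0 for q in range(2 * m)]) for k in range(m)]
    M = [Hc[k] - Gc.T @ Hc[k] @ Gc for k in range(m)]     # constraint matrices of (8)
    kappa = np.concatenate([zc0, lam0])                   # kappa = [zc; lambda_1..lambda_{m+1}]
    for _ in range(steps):
        zc, lam = kappa[:2 * m], kappa[2 * m:]
        grad = S @ zc + lam[0] * zc + sum(lam[k + 1] * (M[k] @ zc) for k in range(m))
        cons = np.r_[zc @ zc - 1.0, [zc @ M[k] @ zc for k in range(m)]]
        f = np.r_[grad, cons]
        if np.linalg.norm(f) < tol:
            break
        top = np.hstack([S + lam[0] * np.eye(2 * m) + sum(lam[k + 1] * M[k] for k in range(m)),
                         np.column_stack([zc] + [M[k] @ zc for k in range(m)])])
        bot = np.hstack([2.0 * np.vstack([zc] + [zc @ M[k] for k in range(m)]),
                         np.zeros((m + 1, m + 1))])
        J = np.vstack([top, bot])
        kappa = kappa - np.linalg.pinv(J) @ f              # Moore-Penrose inverse handles singular J
    return kappa

# Usage sketch: Example 1 plant at omega = 1 with an arbitrary starting point
G = np.array([[2.5, 1.0], [3.0, 4.0]]) / (1.0 + 1.0j)
kappa = kkt_newton(G, np.array([0.6, 0.2, -0.5, 0.3]), np.zeros(3))
zc = kappa[:4]
z = zc[0::2] + 1j * zc[1::2]
v = -G @ z
print(np.angle(z / v))   # candidate boundary phases (phi_1, phi_2) if the iteration converged
```

Sweeping ω over Ω̂ with several starting points and keeping only the converged runs traces out the boundary points φi = arg{zi/vi} of Step 5.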

Figure 2. Solving the constrained optimization for ω ∈ Ω.

4. Illustration Examples

Example 1. Consider the same 2 × 2 system as that given in ref 5, where

G(s) = [ 2.5/(s+1)   1/(s+1);  3/(s+1)   4/(s+1) ]   and   ∆ = [ e^{jφ1}   0;  0   e^{jφ2} ]

Figure 3. Stabilization region of (φ1, φ2).

Its state-space representation gives

A = [ -1  0;  0  -1 ],   B = [ 2  0;  0  2 ],   C = [ 1.25  0.5;  1.5  2 ]

Then

Ã + B̃C̃ = [ A  0;  C^T C  -A^T ] + [ B;  0 ][ 0  -B^T ]
        = [ -1       0      -4   0
             0      -1       0  -4
             3.8125  3.625   1   0
             3.625   4.25    0   1 ]

whose eigenvalues are ±0.7737j and ±5.4453j. Since only the positive pure imaginary eigenvalues need to be considered, we have ω1 = 0.7737 and ω2 = 5.4453, with σ(G(jωi)) = 1, i = 1, 2. Thus, the frequency range [0, +∞) is divided into three intervals: [0, 0.7737], [0.7737, 5.4453], and [5.4453, +∞). Choosing a frequency in each interval and calculating the singular values of G(jω), we can determine Ω̂ by checking whether σ̲(G(jω)) ≤ 1 ≤ σ̄(G(jω)) holds. For example, ω = 0.5 yields σ̲(G(jω)) = 1.1309 > 1, so [0, 0.7737] is not in Ω̂; ω = 6 yields σ̄(G(jω)) = 0.9102 < 1, so [5.4453, +∞) is not in Ω̂. Only ω = 1 yields σ̲(G(jω)) = 0.8940 < 1 and σ̄(G(jω)) = 3.9148 > 1. So, Ω̂ = [0.7737, 5.4453]. For every ω ∈ Ω̂, the solution z to the constrained optimization (8) is obtained with the Newton-Raphson iteration (6), where the initial values are arbitrarily chosen. However, (6) is convergent only for ω ∈

[0.9254, 1.7315] ∪ [3.3547, 5.0396], shown as the solid lines in Figure 2, which implies that Ω = [0.9254, 1.7315] ∪ [3.3547, 5.0396]. Similarly, the maximum of (8) is also found for all ω ∈ Ω, shown as the dashed lines in Figure 2. One sees that the minimum and maximum loci constitute two closed contours. As the cost function moves along these contours, the pair (φ1, φ2) moves along their stabilizing boundary. After z is known, v = -G(jω)z and φi = arg{zi/vi}, i = 1, 2, ∀ω ∈ Ω. According to Lemma 1 in ref 5, the stabilizing boundary for the pair (φ1, φ2) is symmetric with respect to the origin (0, 0); thus it is obtained and shown in Figure 3, where the solid and dashed lines correspond to the minimum and maximum loci, respectively, and their counterparts symmetric with respect to the origin are presented by the dotted lines. Comparing Figure 3 with its counterpart given in ref 5, one sees that the stabilizing regions are just the same. As shown in section 2, loop phase margins are not unique. A reasonable one can be determined in the following way. As shown in Figure 2, when ω ∈ [0.9254, 1.7315], it follows from Figure 3 that φ1 ∈ [-2.4960, -1.9509] and φ2 ∈ (-π, π) for (φ1, φ2) on the stabilizing boundary. By the symmetry of the stability boundary, stability of the closed-loop system requires φ1 ∈ (-1.9505, 1.9505) if φ2 is allowed to vary arbitrarily in (-π, π). Likewise, when ω ∈ [3.3547, 5.0396],

the stabilizing boundary yields φ2 ∈ [-2.0442, -1.5783] and φ1 ∈ (-π, π). Closed-loop system stability requires φ2 ∈ (-1.5783, 1.5783). Let PM = {(φ1, φ2) | φ1 ∈ (-1.9505, 1.9505) and φ2 ∈ (-1.5783, 1.5783)} and denote by Φ the stabilizing region of (φ1, φ2); it is clear that PM is a rectangle prescribed in Φ, i.e., PM ⊆ Φ. According to Definition 2.1, (-1.9505, 1.9505) and (-1.5783, 1.5783) are the phase margins for loops 1 and 2, respectively. The common phase margin can be obtained by (-1.9505, 1.9505) ∩ (-1.5783, 1.5783) = (-1.5783, 1.5783).

Figure 4. Solving the constrained optimization for ω ∈ Ω.

Figure 5. Stabilization region of (φ1, φ2).
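The frequencies and singular values quoted for this example are easy to reproduce. The following sketch is our own check (numpy assumed), using the state-space data given above; it recomputes the eigenvalues of Ã + B̃C̃ and the singular values of G(jω) at the test frequencies.

```python
import numpy as np

A = -np.eye(2)
B = 2.0 * np.eye(2)
C = np.array([[1.25, 0.5], [1.5, 2.0]])

# A_tilde + B_tilde*C_tilde = [[A, -B B^T], [C^T C, -A^T]] for the cascade G*(s)G(s)
M = np.block([[A, -B @ B.T], [C.T @ C, -A.T]])
print(np.round(np.linalg.eigvals(M), 4))        # +/-0.7737j and +/-5.4453j

def singular_values(w):
    G = C @ np.linalg.inv(1j * w * np.eye(2) - A) @ B
    return np.linalg.svd(G, compute_uv=False)   # returned in decreasing order

for w in (0.5, 1.0, 6.0):
    print(w, np.round(singular_values(w), 4))
# w = 0.5: sigma_min = 1.1309 (> 1);  w = 1: 3.9148 and 0.8940;  w = 6: sigma_max = 0.9102
```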

Example 2.13 Consider the chemical process of a quadruple tank, as follows:

G(s) = [ 2.6/(62s + 1)                 1.5/((23s + 1)(62s + 1));
         1.4/((30s + 1)(90s + 1))      2.8/(90s + 1) ]
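No state-space data are given for this example, but Ω̂ can also be bracketed by a direct sweep of the singular values over a frequency grid, i.e., the test of Proposition 2.1. The following is a rough numerical sketch of ours, not the eigenvalue method used in the paper:

```python
import numpy as np

def G_tank(w):
    s = 1j * w
    return np.array([[2.6 / (62 * s + 1),                  1.5 / ((23 * s + 1) * (62 * s + 1))],
                     [1.4 / ((30 * s + 1) * (90 * s + 1)), 2.8 / (90 * s + 1)]])

inside = []
for w in np.linspace(1e-4, 0.2, 20001):
    sv = np.linalg.svd(G_tank(w), compute_uv=False)
    if sv[-1] <= 1.0 <= sv[0]:              # sigma_min <= 1 <= sigma_max
        inside.append(w)
print(min(inside), max(inside))             # should bracket the reported interval Omega_hat
```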

It follows from Proposition 2.1 and the same procedures as in Example 1 that Ω̂ = [0.0149, 0.0456]. For every ω ∈ Ω̂, the constrained optimization framework (8) is established and solved with the Newton-Raphson iteration (6). Figure 4 shows that (6) is convergent only for ω ∈ [0.0155, 0.0446], which implies that Ω = [0.0155, 0.0446]. As the cost function moves along these contours, the pair (φ1, φ2) moves along their stabilizing boundary, which is given in Figure 5. After the stabilizing region of (φ1, φ2) is obtained, the loop phase margins are, graphically, a rectangle prescribed in the stabilizing region. For example, the common phase margin is (-1.6919, 1.6919).

Example 3.4 Consider the system

A = [  0     1     0     0
      -3   -0.75   1    0.25
       0     0     0     1
       4     1    -4    -1 ],   B = [ 0    0
                                      0    1
                                      0    0
                                      0.25 0 ],   C = [ 1  0  0  0
                                                        0  0  1  0 ]

Its transfer function matrix is

G(s) = C(sI - A)^{-1}B = 1/(s^4 + 1.75s^3 + 7.5s^2 + 4s + 8) × [ 0.0625s + 0.25              s^2 + s + 4
                                                                 0.25s^2 + 0.1875s + 0.75    s + 4 ]
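With the state-space data above, Step 1 for this example can be replicated numerically (our own sketch, assuming numpy); it forms Ã + B̃C̃ as in Example 1 and extracts its pure imaginary eigenvalues.

```python
import numpy as np

A = np.array([[0, 1, 0, 0],
              [-3, -0.75, 1, 0.25],
              [0, 0, 0, 1],
              [4, 1, -4, -1]], dtype=float)
B = np.array([[0, 0],
              [0, 1],
              [0, 0],
              [0.25, 0]], dtype=float)
C = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0]], dtype=float)

M = np.block([[A, -B @ B.T], [C.T @ C, -A.T]])    # A_tilde + B_tilde*C_tilde
eig = np.linalg.eigvals(M)
print(np.round(eig[np.abs(eig.real) < 1e-6], 4))  # pure imaginary eigenvalues; reported as +/-0.643j, +/-1.613j
```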

The pure imaginary eigenvalues of Ã + B̃C̃ are ±0.643j and ±1.613j. Thus, the frequency range [0, +∞) is divided into three intervals: [0, 0.643], [0.643, 1.613], and [1.613, +∞). By checking whether σ̲(G(jω)) ≤ 1 ≤ σ̄(G(jω)) holds for a given ω in each of these intervals, Ω̂ = (0.643, 1.613). For every ω ∈ Ω̂, the solution z to the constrained optimization (8) is obtained with the Newton-Raphson iteration (6), where the initial values are arbitrarily chosen. The frequency range for the convergence of (6) yields Ω = (0.764, 0.884) ∪ (1.533, 1.572), shown in Figure 6, where the solid and dashed lines represent the minimum and maximum loci of the cost function, respectively, which constitute two closed contours, denoted by A and B. As the cost function moves along contour A (or B), the pair (φ1, φ2) moves along the corresponding stabilizing boundary A (or B), shown in Figure 7, where the dotted lines are determined by the symmetry with respect to (0, 0). It should be clarified that two (or more) boundaries may be obtained in the limited range [-π, π), as this example shows, which differs from Example 1, where only one boundary is found. If multiple boundaries exist, the stabilizing region of φi is the polytope enclosed by the nearest boundary (boundary B for this example). By comparing Figure 7 with Figure 3, one sees that φ1 is allowed to vary arbitrarily in [-π, π) in this example if |φ2| is less than some value; in other words, there is no stabilizing boundary for φ1 if |φ2| is small enough, which is another difference from Example 1. This is because one of the characteristic loci of G(s) always lies within the unit circle and never passes through the critical point (-1, j0).

Figure 6. Solving the constrained optimization for ω ∈ Ω.

Figure 7. Stabilization boundaries for (φ1, φ2).

Figure 8. Stabilization region of (φ1, φ2).

To show the stabilizing region of (φ1, φ2) for this example more clearly, boundary B is zoomed in Figure 8, where φ1 ∈ [-π, π) and φ2 ∈ [-0.2423, 0.2423] for ∀ω ∈ (1.533, 1.572). By the symmetry with respect to (0, 0), φ1 ∈ (-π, π) and φ2 ∈ (-0.2423, 0.2423) are one set of phase margins for loops 1 and 2, respectively. Since there is no boundary for φ1, the common phase margin for this example can be determined by letting φ1 = φ2, which gives (-0.3108, 0.3108), or (-17.808°, 17.808°).

Table 1. Loop Phase Comparison with Other Methods

method                      φ1              φ2                   common margin
proposed                    (-π, π)         (-0.2423, 0.2423)    (-0.3108, 0.3108)
Bar-on and Jonckheere4      -               -                    (-0.2701, 0.2701)
Wang et al.5                [0, 0.1878]     [0, 0.1231]          [0, 0.1265]

Table 1 lists comparisons of the proposed method with the existing frequency domain and time domain methods of refs 4 and 5, respectively. The method of ref 4 actually gives only the common phase margin, which is (-0.2701, 0.2701) or (-15.476°, 15.476°), smaller than the result given by the proposed method. This is because only decentralized control is considered here and the phase perturbations are not necessarily ergodic in the entire set of unitary matrices. The proposed method also gives larger loop and common phase margins than the method of ref 5 does, which shows that the proposed method evidently improves the LMI results by reducing the possible conservativeness.

Since a phase margin is often used to measure the robustness of a multivariable system against time delay variations, it is meaningful to further investigate the stabilizing range of time delays for each loop. As the corresponding frequency, ωk, for each point on the stabilizing boundary, (φ1,k, φ2,k), is already known, the corresponding point on the stabilizing boundary of time delays, (L1,k, L2,k), is obtained as (L1,k, L2,k) = -(φ1,k, φ2,k)/ωk, ∀ωk ∈ Ω, L1,k ≥ 0 and L2,k ≥ 0. Following such a calculation, Figure 8 is transformed to Figure 9 with L1 ≥ 0 and L2 ≥ 0.

Figure 9. Stabilizing region of (L1, L2).

Figure 10. System characteristic loci.

Now, we are ready to compare the stabilizing ranges of loop delays with the results from the LMI approach.6–8

Table 2. Loop Delay Comparison with LMI Approach

method       case 1: L1      case 1: L2      case 2: L1      case 2: L2
LMI6–8       [0, 0.1979]     [0, 0.1967]     [0, 0.2920]     [0, 0.1914]
proposed     [0, 0.1979]     [0, 0.1989]     [0, 0.2920]     [0, 0.1938]
proposed     [0, 0.2392]     [0, 0.1967]     [0, 0.3355]     [0, 0.1914]

One sees from Table 2 that, for both cases in the LMI approach, if the ranges of L1 are fixed, the stabilizing ranges of L2 are smaller than those from the proposed method. The same observation is

obtained if the ranges of L2 are fixed. Figure 10 shows the system characteristic loci for L1 = 0.1979, L2 = 0.1967 (by the LMI approach) and for L1 = 0.2392, L2 = 0.1967 (by the proposed method), respectively. One sees that the characteristic loci for the proposed case indeed go through (-1, j0), but the loci for the LMI approach do not. The achievement of marginal stability shows the lower conservativeness and higher accuracy of the proposed method as compared with the LMI approaches.

5. Conclusion

In this paper, the loop phase margins of multivariable control systems are defined as the allowable individual loop phase perturbations within which stability of the closed-loop system is guaranteed. A frequency domain approach is presented to accurately compute these phase margins: with the help of a unitary mapping between two complex vector spaces, the problem is converted, using Nyquist stability analysis, to a simple constrained optimization problem. Numerical solutions are then found with the Lagrange multiplier method and the Newton-Raphson iteration algorithm. The proposed approach can provide exact margins and thus improves the LMI results by reducing the possible conservativeness.

Literature Cited

(1) Horowitz, I. M. Synthesis of Feedback Systems; Academic Press: New York, 1963.
(2) Wang, Q.-G. Decoupling Control; Springer: New York, 2003.

(3) Ho, W. K.; Lee, T. H.; Gan, O. P. Tuning of Multiloop PID Controllers based on Gain and Phase Margin Specifications. Ind. Eng. Chem. Res. 1997, 36, 2231. (4) Bar-on, J. R.; Jonckheere, E. A. Phase Margins for Multivariable Control Systems. Int. J. Control 1990, 52, 485. (5) Wang, Q.-G.; He, Y.; Ye, Z.; Lin, C.; Hang, C. C. On Loop Phase Margins of Multivariable Control Systems. J. Process Control 2008, 18, 202. (6) He, Y.; Wu, M.; She, J. H.; Liu, G. P. Delay-dependent Robust Stability Criteria for Uncertain Neutral Systems with Mixed Delays. Syst. Control Lett. 2004, 52, 57. (7) He, Y.; Wu, M.; She, J. H.; Liu, G. P. Parameter-dependent Lyapunov Functional for Stability of Time-delay Systems with Polytopic-type Uncertainties. IEEE Trans. Autom. Control 2004, 49, 828. (8) Wu, M.; He, Y.; She, J. H.; Liu, G. P. Delay-dependent Criteria for Robust Stability of Time-varying Delay Systems. Automatica 2004, 40, 1435. (9) Doyle, J. Analysis of Feedback Systems with Structured Uncertainty. IEE Proc., Part D 1982, 129, 242. (10) Bertsekas, D. P. Constrained Optimization and Lagrange Multiplier Methods; Academic Press: New York, 1982. (11) Lancaster, P.; Tismenetsky, M. The Theory of Matrices; Academic Press: Orlando, FL, 1985. (12) Grasse, K. A.; Bar-on, J. R. Regularity Properties of The Phase for Multivariable Systems. SIAM J. Control Optim. 1997, 35, 1366. (13) Johansson, K. H. The Quadruple-Tank Process: A Multivariable Laboratory Process with an Adjustable Zero. IEEE Trans. Control Syst. Technol. 2000, 8, 456.

Received for review December 11, 2007. Revised manuscript received March 14, 2008. Accepted March 19, 2008.

IE701693J