
Kernel-Based Spatiotemporal Multimodeling for Nonlinear Distributed Parameter Industrial Processes

Chenkun Qi,*,† Han-Xiong Li,‡ Shaoyuan Li,§ Xianchao Zhao,† and Feng Gao†

† School of Mechanical Engineering, Shanghai Jiao Tong University, State Key Laboratory of Mechanical System and Vibration, Shanghai 200240, China
‡ Department of Systems Engineering & Engineering Management, City University of Hong Kong, Hong Kong, China, and School of Mechanical & Electrical Engineering, Central South University, China
§ Department of Automation, Shanghai Jiao Tong University, Shanghai 200240, China

ABSTRACT: Many industrial processes are nonlinear distributed parameter systems (DPSs) with significant spatiotemporal dynamics. Because of different production and working conditions, they often need to work over a large operating range with multiple working points. However, direct global modeling with a persistently exciting experiment over a large working region is very costly in many cases, and the complex spatiotemporal coupling and infinite-dimensional nature make the problem even more difficult. In this study, a kernel-based spatiotemporal multimodeling approach is proposed for nonlinear DPSs with multiple working points. To obtain a reasonable operating space division, an iterative approach is proposed in which the operating space division and the local modeling are performed iteratively: the working range of the current local model helps to determine the next operating point required for modeling. By utilizing the potential of each local model, the number of regions can be reduced. In the local modeling, the Karhunen−Loève method is used for the space/time separation and dimension reduction, after which the unknown kernel parameters are estimated. Because the dimension reduction relies on a time-scale separation, the modeling approach is particularly suitable for dissipative PDEs, especially those of parabolic type. The multimodeling and space/time separation techniques largely reduce the complexity of global nonlinear spatiotemporal modeling. Finally, to guarantee a smooth transition between local spatiotemporal models, a scheduling integration method is used to provide a global spatiotemporal model, and a two-stage training method is proposed to reduce the complexity of designing the scheduling functions. Compared with direct global modeling, the exciting experiment and modeling for each local region become easier; compared with a single local model, the multimodel integration improves the modeling accuracy. The effectiveness of the proposed modeling approach is verified by simulations.

1. INTRODUCTION

Many production processes in industry are inherently distributed in space and time, e.g., convection−diffusion−reaction processes, fluid flow and heat exchange processes in the chemical industry,1 spray deposition in the material industry,2 and snap curing in the integrated circuit (IC) packaging industry.3 All of these processes are distributed parameter systems (DPSs),1,4 where the main difference between DPSs and lumped parameter systems (LPSs) is that the inputs, outputs, and dynamics of DPSs are distributed in space. For simplicity, the spatial nature is often ignored in modeling and control applications. However, performance will deteriorate if the system dynamics vary significantly with space. To satisfy production quality requirements, the spatial nature should be considered in modeling and control. The control of a DPS requires a complex control algorithm that largely depends on either good measurement of the spatial information or an accurate process model. In practice, however, only partial spatial information is available because of limited sensing. To estimate the spatial distribution at unmeasured locations, an accurate prediction model is very desirable. For industrial processes, there are many model uncertainties due to incomplete process knowledge and complicated boundary conditions. To compensate for these unknown uncertainties, the development of model identification techniques from experimental data is necessary.

Though many identification methods have been developed for LPSs, identification of DPSs is not easy because of the spatiotemporally coupled, nonlinear, and infinite-dimensional dynamics. Traditional LPS identification methods can model finite-dimensional multi-input multi-output (MIMO) systems, but they do not consider the spatial nature. Very often, owing to different production and working conditions, a DPS needs to work over a large operating range with varying working points. A model identified at one working point may not work well at another, and a single local model may deteriorate performance and even make the control system unstable. All these considerations motivate the global modeling of unknown nonlinear DPSs over multiple working points, which is investigated in this study.


When the first-principles knowledge of the DPS is accurately known, models in the form of nonlinear partial differential equations (PDEs) can be derived. While such models can accurately predict the nonlinear and distributed dynamic behavior over the entire operating range, their infinite-dimensional nature makes the model and the control difficult to implement in real time with reasonable computing power. Thus, lumping techniques have to be used to reduce the PDEs to approximate finite-dimensional ordinary differential equations (ODEs). Conventional finite difference and finite element methods often lead to high-order ODEs that may still be unsuitable for real-time implementation if they are not further reduced. The more advanced spectral method and approximate inertial manifold1 are popularly used to derive models for control because they may result in low-order ODEs, especially for parabolic PDE systems. Note that for model reduction of nonlinear parabolic PDEs with nonlinear spatial operators, the Karhunen−Loève (KL) method can obtain empirical eigenfunctions from process data instead of analytical eigenfunctions.5,6 However, all of these lumping methods require the PDEs to be accurately known.

When the PDEs of a DPS are unknown because of process uncertainties, data-based model identification has to be used. If the PDE structure of the system is known and only some parameters are unknown, these parameters can be estimated from process data.7−9 After that, one can use model-based control methods, possibly accounting explicitly for model uncertainty. If the DPS has an unknown structure, which is widely the case in industry, black-box identification has to be used. Multiple PID controllers could be used for control without model identification, but then the DPS is treated as a MIMO lumped system and the spatial nature is ignored. Because of the spatiotemporal and infinite-dimensional dynamics, identification of DPSs requires special methods that differ from those for LPSs.

Currently, some black-box modeling approaches for DPSs have been proposed. A linear DPS can be represented by a transfer function in the frequency domain or by an impulse response function (i.e., Green's function or kernel) in the time domain. In some cases, the transfer function or Green's function can be derived from first-principles knowledge.10,11 If the analytical transfer function or Green's function is not available, it can be estimated from input−output data. For example, using the concept of residence time distribution, an input−output model that is actually a linear kernel model has been constructed for optimal control.12 On the basis of a singular function expansion, a time-invariant Green's function model can be estimated using a singular value decomposition (SVD) method.13 To avoid the time-invariance assumption, a time-varying Green's function model can be obtained from the singular function and Karhunen−Loève (KL) basis function expansions using the SVD-KL method;14−16 it can also be estimated using the least-squares method.17 Because the traditional Green's function model is a linear time-invariant model, it may only be able to approximate a linear DPS, or a nonlinear DPS at a given working point. Though a linear time-varying Green's function model can approximate a nonlinear DPS in some cases, direct identification of the time-varying kernel is not easy in practice because it is difficult to guarantee that the parametrization of the time-varying kernel in the time variable is proper for an unknown system.

Some methods are suitable for nonlinear DPSs. Identification methods for infinite-dimensional PDE models have been studied in refs 18−20. Identification based on finite difference21 and finite element22 approximations still leads to high-order models. When the nonlinearity of a nominal PDE is unknown, neural networks can model the nonlinear dynamics based on the spectral method,3 but this requires a regular boundary condition and known nominal PDEs. Neural networks are also popularly used to model completely unknown nonlinear DPSs based on the KL method,23−25 because this may result in very low-order ODEs and can handle complex boundary conditions. However, the very general nonlinear model of neural networks does not make control design easy. Recently, spatiotemporal Volterra, Hammerstein, and Wiener models26−30 were proposed to identify a class of unknown nonlinear DPSs, which can lead to simple control design. Though these models can approximate nonlinear DPSs, global modeling of a nonlinear DPS over multiple working points is still very challenging because of the complex nonlinear or time-varying dynamics and the difficulty of a persistently exciting experiment over a large working range.

Even for traditional lumped processes, direct global modeling and a persistently exciting experiment within a large working region are still very difficult and even impossible for some systems in practice. In the modeling of LPSs, to overcome similar difficulties, multimodeling31−33 is a good choice for global modeling from the viewpoint of cost and performance. The complex nonlinear dynamics are decomposed into several local working regions in which the local modeling and experiment can be easily performed. By integrating the multiple models of the working regions, a global model over a large working range can be obtained. One advantage is that the complexities of modeling and experiment can be largely reduced because multimodeling does not require "global" persistence of excitation. Different multimodeling methods, e.g., local model networks,34 Takagi−Sugeno fuzzy models,35,36 multiple neural network models,37 and local polynomial interpolation and piecewise linear models, involve the same underlying ideas. Though multimodeling has been developed with many industrial applications, it has so far only been utilized for lumped processes. Multimodeling for unknown nonlinear DPSs is still an open problem because of the spatiotemporal and infinite-dimensional dynamics.

In this study, a kernel-based spatiotemporal multimodeling approach for nonlinear distributed parameter processes with multiple working points is proposed. The whole working space of the nonlinear DPS is first divided into multiple local working regions, so that the complex nonlinear spatiotemporal dynamics are decomposed into multiple simple spatiotemporal dynamics. To obtain a reasonable operating space division, an iterative approach is proposed in which the operating space division and the local modeling are performed iteratively: the working range of the current local model helps to determine the next operating point required for modeling. By utilizing the potential of each local model, the number of regions can be reduced. In the local spatiotemporal kernel modeling, the Karhunen−Loève method is used for the space/time separation and dimension reduction, and then the unknown parameters of each kernel are estimated using the prediction error method. These two decompositions (i.e., the multimodeling decomposition and the space/time separation) gradually reduce the complexity of global nonlinear spatiotemporal modeling. Finally, to guarantee a smooth transition between local spatiotemporal models, a scheduling integration method is used to provide a global spatiotemporal model, and a two-stage training method is proposed to reduce the complexity of the scheduling functions design. Compared with direct global modeling, the exciting experiment and modeling for each local region become easier. Compared with one local model, multimodeling can model the nonlinear dynamics over multiple operating points. Compared with traditional MIMO modeling of LPSs, the proposed approach can model DPSs with low-dimensional spatiotemporal models.



Compared with traditional multimodeling of LPSs, the proposed approach can model the spatiotemporal dynamics of DPSs. Because the dimension reduction with the KL method relies on a time-scale separation, the proposed modeling approach is particularly suitable for dissipative PDEs, especially those of parabolic type; for hyperbolic PDEs, it may not work well owing to the absence of such a two-time-scale property.

The rest of the paper is organized as follows. The spatiotemporal kernel model is described in section 2. In section 3, the kernel-based multimodeling is introduced: the division approach for the operating space is presented in section 3.1, and the construction and aggregation of local spatiotemporal kernel models are given in sections 3.2 and 3.3, respectively. Section 4 contains an application example. Finally, a few conclusions are presented in section 5.

2. PRELIMINARIES OF SPATIOTEMPORAL KERNEL

A linear time-invariant DPS can be represented as a linear mapping from the input u(x,t) to the output y(x,t), where x ∈ Ω denotes the space variable, Ω is the spatial domain, and t is time. This mapping can be expressed as a Fredholm integral equation of the first kind10,13

y(x, t) = ∫_Ω ∫_0^t g(x, ζ, t − τ) u(ζ, τ) dτ dζ    (1)

where g(·) is the so-called Green's function (i.e., impulse response function or kernel), which denotes the influence of the input at location ζ and time τ on the output at location x and time t (Figure 1).

Figure 1. Input−output mapping of a 3D kernel model.

For convenience, the model in eq 1 can be equivalently expressed as

y(x, t) = ∫_Ω ∫_0^t g(x, ζ, τ) u(ζ, t − τ) dτ dζ    (2)
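The mapping in eq 2 becomes a finite double sum once space and time are discretized. The following minimal sketch (not from the paper) evaluates such a discretized convolution in Python; the Gaussian-in-space, exponentially decaying-in-time kernel g_fun and all grid sizes are hypothetical choices made only for illustration.

```python
import numpy as np

# Discretization of eq 2: y(x_i, t_k) ~ sum_j sum_q g(x_i, z_j, tau_q) u(z_j, t_k - tau_q) dt dz
nx, nt = 21, 200                    # spatial / temporal grid sizes
x = np.linspace(0.0, np.pi, nx)     # spatial domain [0, pi]
dt = 0.05                           # sampling interval (also the tau step)
dz = x[1] - x[0]                    # spatial step

def g_fun(xi, zj, tau):
    """Hypothetical time-invariant kernel g(x, zeta, tau): spatially local, temporally decaying."""
    return np.exp(-10.0 * (xi - zj) ** 2) * np.exp(-2.0 * tau)

# Precompute the kernel on the grid: G[i, j, q] = g(x_i, z_j, tau_q)
tau = np.arange(nt) * dt
G = g_fun(x[:, None, None], x[None, :, None], tau[None, None, :])

def simulate(u):
    """u[j, k] = input at location z_j and time t_k; returns y[i, k] from the discretized eq 2."""
    y = np.zeros((nx, nt))
    for k in range(nt):
        past = u[:, k::-1]                              # u(z, t_k - tau_q) for tau_q = 0..t_k
        y[:, k] = np.einsum('ijq,jq->i', G[:, :, :k + 1], past) * dt * dz
    return y

u = 0.1 * np.random.default_rng(0).standard_normal((nx, nt))   # arbitrary excitation
y = simulate(u)
print(y.shape)                                                  # (21, 200)
```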

The model in eq 1 works for a linear time-invariant DPS. To model a linear time-varying DPS, one way is to make the kernel time-varying:

y(x, t) = ∫_Ω ∫_0^t g(x, ζ, t, τ) u(ζ, τ) dτ dζ    (3)

The linear time-varying model (eq 3) can describe more complex DPSs than model 1, and it can approximate nonlinear spatiotemporal dynamics in some cases. However, direct identification of the time-varying kernel in eq 3 is very difficult in practice, because it is not easy to guarantee that the parametrization of the time-varying kernel in the time variable t is proper for a given system. Thus, to reduce the modeling complexity, the multimodeling methodology is used in this study, with the time-invariant kernel model 1 used for the local modeling. With the transition of local models among different working states, this time-invariant kernel-based multimodeling can be regarded as a state-dependent time-varying kernel model, but its practical implementation is much easier than that of a general time-varying kernel model.

3. KERNEL-BASED SPATIOTEMPORAL MULTIMODELING

The proposed methodology for the kernel-based multimodeling of nonlinear DPSs is shown in Figure 2. To handle the complex nonlinear and time-varying nature, the whole operating space is first decomposed into multiple local working regions, and correspondingly the original nonlinear spatiotemporal dynamics are divided into multiple simple spatiotemporal dynamics. Then, for each local region, the local spatiotemporal modeling can be easily performed using a kernel-based method. To handle the spatiotemporally coupled nature in the kernel modeling, the Karhunen−Loève (KL) method is used for the space/time separation. Finally, to provide a global nonlinear spatiotemporal model, the local spatiotemporal models are combined using a smooth transition technique: scheduling integration. Through the multimodel decomposition and the space/time separation, the complexity of global nonlinear spatiotemporal modeling is gradually reduced. Using multimodeling, the burden of the persistently exciting experiment is also reduced.

Figure 2. Spatiotemporal multimodeling methodology.

The main tasks of the multimodeling include
• division of the whole operating space into reasonable operating regions,
• construction of local spatiotemporal models,
• integration of local spatiotemporal models.

3.1. Operating Space Division. Division of the whole operating space into multiple operating regions (Figure 2) can be based on different criteria,38 e.g., physical elements, phenomena, and operating states. Implementation methods are mainly based on a priori process information39 or on local model validation40 and clustering analysis41 of process data. A priori information often gives a coarse division, while process data may give a fine division, but at the cost of some experiments.


In general, the number of regions should be neither too large nor too small (i.e., the range of each region should be neither too small nor too large).
• If the range is small, the local dynamics may be simple, but the number of regions is large.
• If the range is large, the number of regions is small, but a simple linear model may not cover the local region, and a nonlinear local model could be required for satisfactory performance.
Overly complex nonlinear local modeling should be avoided because of the largely increased complexity. In this study, to keep the local modeling simple, a linear local model, the spatiotemporal kernel model, is considered. The linearity of each local region is evaluated to guarantee that the local region can be covered by a linear model. To obtain a reasonable operating space division, an iterative approach is proposed in which the local modeling and the operating space division are performed iteratively. The working range of the current local model helps to determine the next operating point required for modeling. By utilizing the potential of each local model, the number of regions can be reduced.

3.1.1. Linearity Evaluation of Operating Region. In general, the working point of the system can be represented by the steady state output yss(x) with the corresponding steady state input uss(x). Similarly, a working region can be represented by yss(x) ± Δy(x) with uss(x) ± Δu(x). For a working region, the exciting input signals are often designed with mean uss(x) (i.e., the steady state input corresponding to the center of the working region). If the excited region is linear, the output is also expected to have mean yss(x) (i.e., the steady state output corresponding to the center of the working region). A significant deviation of the output mean from yss(x) is an indication of nonlinearity. Of course, other linearity evaluations can also be used, e.g., the condition number of a regressor−output data matrix.41

3.1.2. Iterative Operating Space Division. As shown in Figure 3, the basic idea of the iterative division is that an initial operating point is selected first and an exciting experiment is performed with a local modeling; then the working range of this local model helps to determine the current operating region and the next operating point required for experiment; the iteration is stopped when the required working range is covered. The detailed procedure is given in Algorithm 1.

Figure 3. Iterative operating space division.

Algorithm 1: Iterative Operating Space Division.
Step 1: Select the first operating point yss1(x) with the corresponding input uss1(x). Set the index of the current operating region r = 1.
Step 2: Design input signals {u(x,t) | x ∈ Ω, t = 1,...,Lr} with mean ussr(x) and amplitude Δur(x) for an exciting experiment around the rth operating point, where Lr denotes the time length. Collect the output data {y(xi,t) | xi ∈ Ω, i = 1,...,N, t = 1,...,Lr} for training, where N is the number of measured spatial points of the output.
Step 3: Evaluate the linearity of the operating region to check whether the excited local operating region can be covered by a linear model. If the linearity test fails, reduce the amplitude Δur(x) of the input signals and repeat steps 2 and 3.
Step 4: Identify the local spatiotemporal kernel model (eq 2) (see section 3.2), where the trained model is tested on a set of testing data collected with the same range of input signals as the training inputs.
Step 5: To determine the working range of the obtained local model, collect two new sets of testing data by shifting the mean of the input signals to ussr(x) ± βur.
Step 6: Calculate the root-mean-square error (RMSE) of the model on the testing data as in eq 9 (section 3.2.3). Shift the input signals by βur and repeat steps 5 and 6 until the tolerant local model error in eq 10 is reached (section 3.2.4). Then set the current operating region to yssr(x) ± (Δyr(x) + βyr) with ussr(x) ± (Δur(x) + βur).
Step 7: Determine the next operating point yss(r+1)(x) with uss(r+1)(x) required for the exciting experiment. Set r = r + 1 and repeat steps 2−7. The iteration is stopped when the required working range is covered.
This method can be very efficient since the region division and the local modeling compensate each other. The first operating point and the initial amplitude of the input signals can be selected based on the engineer's experience or previous experiments with physical considerations. The construction of local spatiotemporal kernel models and the estimation of their working range are studied in the next section.
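As an illustration of Step 2 and of the linearity test in section 3.1.1, the sketch below generates random exciting inputs with a prescribed mean and amplitude and checks whether the mean of the measured output stays close to the steady-state output of the region center. The tolerance, the scaling of the deviation, and the synthetic "measurements" are assumptions for illustration only; the paper does not prescribe a specific threshold.

```python
import numpy as np

def design_excitation(u_ss, du, L, n_act, seed=0):
    """Random exciting inputs for one region: mean u_ss (steady-state input of the region
    center) and amplitude du, as in Step 2 of Algorithm 1 and in the case study."""
    rng = np.random.default_rng(seed)
    return u_ss + du * (2.0 * rng.random((n_act, L)) - 1.0)     # shape (n_actuators, L)

def linearity_ok(y_data, y_ss, tol=0.1):
    """Section 3.1.1 test: if the excited region is linear, the mean of the measured output
    should stay close to the steady-state output y_ss; a significant deviation indicates
    nonlinearity. The relative tolerance and scaling here are illustrative choices."""
    deviation = np.abs(y_data.mean(axis=1) - y_ss)               # per-sensor deviation
    scale = np.maximum(np.abs(y_ss), np.abs(y_data).max(axis=1) + 1e-12)
    return bool(np.max(deviation / scale) < tol)

# Toy usage with synthetic "measurements": a linear response keeps its mean at y_ss.
u = design_excitation(u_ss=1.0, du=4.0, L=2000, n_act=4)
y_ss = np.full(19, 2.0)
y_lin = y_ss[:, None] + 0.5 * np.random.default_rng(1).standard_normal((19, 2000))
print(linearity_ok(y_lin, y_ss))      # True: the output mean barely deviates from y_ss
```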

3.2. Construction of Local Spatiotemporal Kernel Models. For the rth local operating region, a set of spatiotemporal input−output data {u(x,t) | x ∈ Ω, t = 1,...,Lr}, {y(xi,t) | xi ∈ Ω, i = 1,...,N, t = 1,...,Lr} is available, where Lr denotes the time length and N is the number of measured spatial points of the output. The problem is now to estimate the corresponding spatiotemporal model (eq 2) from this set of local experimental data. For simplicity, it is assumed that the spatial information of the input is known from physical knowledge and that the sensor locations xi (i = 1,...,N) are uniformly distributed over the spatial domain.

3.2.1. Space/Time Separation. Similar to a Fourier series, the key idea is to expand the spatiotemporal kernel onto spatial output bases {φi(x)}_{i=1}^{n}, spatial input bases {ψi(x)}_{i=1}^{m}, and temporal bases {ϕi(t)}_{i=1}^{l}

g(·) = ∑_{i=1}^{n} ∑_{j=1}^{m} ∑_{k=1}^{l} θijk φi(x) ψj(ζ) ϕk(τ)    (4)

where θijk ∈ R (i = 1,...,n, j = 1,...,m, k = 1,...,l) are unknown constant coefficients of the kernel and n, m, and l are the dimensions of the corresponding bases (i.e., the kernel size). Substituting eq 4 into eq 2 gives

ŷ(x, t) = ∫_Ω ∫_0^t ∑_{i=1}^{n} ∑_{j=1}^{m} ∑_{k=1}^{l} θijk φi(x) ψj(ζ) ϕk(τ) u(ζ, t − τ) dτ dζ    (5)

The design of the bases is an important issue; they are designed as follows.


Spatial Output Bases Design. The spatial output bases {φi(x)}_{i=1}^{n} are identified at the N spatial locations using the Karhunen−Loève decomposition1,15,16,42−44 from the system output {y(xi,t)}_{i=1,t=1}^{N,Lr}. One can then interpolate the eigenfunctions to locations where the data are not available. Because these bases can be ordered according to their importance to the system,43 only the first few spatial bases are needed to represent the dominant spatial modes; this is the reason for choosing the Karhunen−Loève method. The KL method exploits a two-time-scale property whereby the system can be separated into slow (dominant) and fast (negligible) parts. Thus, it is particularly suitable for dissipative PDEs, especially parabolic PDEs; for hyperbolic PDEs, it may not work well owing to the absence of such a two-time-scale property. A small value of n can be determined using the energy method.43 See the related references for more details of the Karhunen−Loève decomposition.
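A common numerical route to the empirical eigenfunctions referred to above is a singular value decomposition of the snapshot matrix, which is equivalent to the Karhunen−Loève (proper orthogonal decomposition) procedure of the cited references. The sketch below is a minimal illustration on synthetic snapshots; subtracting the mean field and the 99% energy level are illustrative choices, not prescriptions from the paper.

```python
import numpy as np

# Synthetic snapshot matrix Y[i, k] = y(x_i, t_k): N sensors, L time samples
N, L = 19, 2000
x = np.linspace(0.0, np.pi, N)
t = np.arange(L) * 0.01
Y = (np.sin(x)[:, None] * np.sin(0.5 * t)[None, :]
     + 0.3 * np.sin(2 * x)[:, None] * np.cos(1.5 * t)[None, :]
     + 0.01 * np.random.default_rng(0).standard_normal((N, L)))

Y_mean = Y.mean(axis=1, keepdims=True)           # a common choice: KL bases about the mean field
U, s, _ = np.linalg.svd(Y - Y_mean, full_matrices=False)

# Energy criterion for choosing n: smallest n capturing, e.g., 99% of the "energy"
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
n = int(np.searchsorted(energy, 0.99) + 1)
phi = U[:, :n]                                    # columns: empirical eigenfunctions at the sensors
print(n, phi.shape)                               # e.g., 2, (19, 2)

# Temporal coefficients by projection (discrete analogue of the integral of y(x,t) phi_i(x) dx)
y_coeff = phi.T @ (Y - Y_mean)                    # shape (n, L)
```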

Spatial Input Bases Design. The spatial input bases {ψi(x)}_{i=1}^{m} are often determined from a priori physical knowledge about the distribution of the control action over the spatial domain. They can be identified from experimental data of the actuators. An approximation using other functions (e.g., splines) can also be used when a priori knowledge is not available. The dimension m is often the number of independent inputs.

Temporal Bases Design. The temporal bases {ϕi(t)}_{i=1}^{l} can be selected as one-parameter Laguerre functions,45 two-parameter Kautz functions,46 or generalized multiparameter orthonormal basis functions,47 according to the system complexity. Here, the Laguerre functions

ϕi(t) = √(2ξ) (e^{ξt} / (i − 1)!) (d^{i−1} / dt^{i−1}) [t^{i−1} e^{−2ξt}],  i = 1, 2, ..., l,  ξ > 0

are chosen, owing to their simplicity of use (only one adjustable time-scaling factor ξ). Of course, ξ should be properly designed in order to achieve good modeling performance. An optimal selection method has been proposed for temporal kernel systems.48 For this spatiotemporal kernel modeling, the time-scaling factor ξ and the dimension l can be selected using the cross-validation technique. Note that one advantage of the expansion onto spatial Karhunen−Loève basis functions and temporal Laguerre basis functions is that the number of parameters to be estimated in the spatiotemporal kernel model can be small, which largely reduces the modeling complexity. Moreover, the spatial Karhunen−Loève basis functions lead to a low-dimensional model, which is very attractive for practical control.
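For numerical evaluation, the Rodrigues-type expression above can be rewritten in the equivalent closed form ϕi(t) = √(2ξ) e^{−ξt} L_{i−1}(2ξt), where L_n is the ordinary Laguerre polynomial. The sketch below (an illustration, not the authors' code) uses this form via scipy.special.eval_laguerre, with ξ = 4.05 and l = 4 borrowed from the case study, and numerically checks the (approximate) orthonormality of the basis.

```python
import numpy as np
from scipy.special import eval_laguerre

def laguerre_basis(t, l, xi):
    """Continuous-time Laguerre functions phi_i(t), i = 1..l, on the time grid t,
    using the closed form sqrt(2*xi) * exp(-xi*t) * L_{i-1}(2*xi*t)."""
    t = np.asarray(t, dtype=float)
    return np.array([np.sqrt(2 * xi) * np.exp(-xi * t) * eval_laguerre(i - 1, 2 * xi * t)
                     for i in range(1, l + 1)])          # shape (l, len(t))

dt = 0.01
t = np.arange(0.0, 5.0, dt)
phi_t = laguerre_basis(t, l=4, xi=4.05)                   # xi and l as in the case study

# Numerical check of orthonormality on [0, inf): integral of phi_i * phi_j dt ~ identity
gram = phi_t @ phi_t.T * dt
print(np.round(gram, 2))
```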

3.2.2. Parameter Estimation. Estimation in the Spatiotemporal Domain. When the kernel sizes n, m, and l are assumed known, the unknown parameters θijk of the kernel can be estimated by minimizing the cost function

min ∑_{i=1}^{N} ∑_{t=1}^{Lr} |y(xi, t) − ŷ(xi, t)|²    (6)

where {y(xi,t)}_{i=1,t=1}^{N,Lr} is the system output measured at the N spatial locations over the time length Lr. The predicted output {ŷ(xi,t)}_{i=1,t=1}^{N,Lr} of the kernel can be computed from eq 5 using the pointwise input data {u(xi,t)}_{i=1,t=1}^{M,Lr} at M spatial locations; eq 6 is then minimized using a linear optimization method. The dimension m is often fixed, while the dimensions n and l of the bases can be gradually increased until satisfactory performance is achieved.

Estimation in the Temporal Domain. If M and N are very large, the optimization in eq 6 may involve a heavy computation. In this case, it can be approximately solved in the temporal domain. Using the Galerkin method,1,4 the space/time separation of the input and output can be carried further. Projecting eq 5 onto the output basis functions {φi(x)}_{i=1}^{n} leads to

∫_Ω φh(x) ŷ(x, t) dx = ∑_{i=1}^{n} ∫_Ω φh(x) φi(x) dx ∑_{j=1}^{m} ∑_{k=1}^{l} θijk vjk(t),  h = 1, ..., n

where vjk(t) = ∫_0^t uj(t − τ) ϕk(τ) dτ and uj(t − τ) = ∫_Ω ψj(ζ) u(ζ, t − τ) dζ (j = 1,...,m, k = 1,...,l). Since the bases {φi(x)}_{i=1}^{n} are orthonormal, this becomes the n-dimensional model

ŷi(t) = ∑_{j=1}^{m} ∑_{k=1}^{l} θijk vjk(t),  i = 1, ..., n    (7)

where ŷi(t) = ∫_Ω ŷ(x,t) φi(x) dx. The kernel parameters θijk can now be estimated with the least-squares method by solving the minimization problem

min ∑_{t=1}^{Lr} |yi(t) − ŷi(t)|²,  i = 1, ..., n    (8)

where yi(t) = ∫_Ω y(x,t) φi(x) dx. Since m and n can be smaller than M and N, the optimization in eq 8 can be solved more easily than eq 6.
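The temporal-domain estimation of eqs 7 and 8 reduces to ordinary least squares once the regressors vjk(t) are formed by discrete convolution of the projected inputs uj(t) with the temporal bases ϕk(t). The sketch below illustrates this on synthetic data (all signals, sizes, and the two-term Laguerre basis are placeholders); it is a schematic of the estimation step, not the authors' implementation.

```python
import numpy as np

def build_regressors(u_modes, phi_t, dt):
    """v_jk(t) = integral of u_j(t - tau) * phi_k(tau) d tau, approximated by a causal
    discrete convolution. u_modes: (m, L) projected inputs; phi_t: (l, L) temporal bases."""
    m, L = u_modes.shape
    l = phi_t.shape[0]
    V = np.zeros((m, l, L))
    for j in range(m):
        for k in range(l):
            V[j, k] = np.convolve(u_modes[j], phi_t[k])[:L] * dt
    return V                                              # shape (m, l, L)

def estimate_theta(y_modes, V):
    """Eq 8: for each output mode i, least-squares fit of y_i(t) on the regressors v_jk(t)."""
    n, L = y_modes.shape
    m, l, _ = V.shape
    X = V.reshape(m * l, L).T                             # (L, m*l) regression matrix
    theta = np.zeros((n, m, l))
    for i in range(n):
        coef, *_ = np.linalg.lstsq(X, y_modes[i], rcond=None)
        theta[i] = coef.reshape(m, l)                     # theta_ijk
    return theta

# Synthetic example: m = 4 inputs, l = 2 Laguerre bases (for brevity), n = 3 output modes
rng = np.random.default_rng(0)
dt, L = 0.01, 1000
t = np.arange(L) * dt
xi = 4.05
phi_t = np.array([np.sqrt(2 * xi) * np.exp(-xi * t) * (1 if k == 0 else (1 - 2 * xi * t))
                  for k in range(2)])                     # first two Laguerre functions
u_modes = rng.standard_normal((4, L))
V = build_regressors(u_modes, phi_t, dt)
theta_true = rng.standard_normal((3, 4, 2))
y_modes = np.einsum('ijk,jkt->it', theta_true, V)
theta_hat = estimate_theta(y_modes, V)
print(np.allclose(theta_hat, theta_true, atol=1e-6))      # True on this noise-free data
```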

3.2.3. Local Model Validation. The local model should be evaluated on a new testing experiment. First, a new set of testing inputs {utest(x,t) | x ∈ Ω, t = 1,...,Ltestr} around the rth local operating point, with the same range as the training inputs, is designed. A new set of experimental output data {ytest(xi,t)}_{i=1,t=1}^{N,Ltestr} is then measured. By simulating the model (eq 2) on the same testing inputs, the predicted outputs {ŷtest(xi,t)}_{i=1,t=1}^{N,Ltestr} of the model are obtained. A model error index, the root-mean-square error (RMSE), can be computed as

RMSE = √( (1 / (N Ltestr)) ∑_{i=1}^{N} ∑_{t=1}^{Ltestr} (ytest(xi, t) − ŷtest(xi, t))² )    (9)

The RMSE on the testing data gives a criterion of model performance. If satisfactory performance is not achieved, the time-scaling factor ξ of the Laguerre functions and the dimensions n and l of the bases are adjusted.

3.2.4. Local Model Working Range. The iterative working space division can use a similar model validation process to determine the working range of the local model. First, define a tolerant RMSE for a local working region as

RMSEtol = γ × RMSE    (10)

where γ ≥ 1 denotes a given tolerance ratio. New testing experiments are then performed by shifting the mean of the testing inputs to utest(x,t) ± βu. Next, the RMSE is recalculated and compared with the tolerant RMSE. By adjusting the shift βu until the tolerant RMSE is reached, the working range of the model can be found.
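Eqs 9 and 10 together define a simple procedure for estimating the working range of a local model: shift the mean of the testing inputs in steps of βu and stop once the RMSE exceeds the tolerant value. A schematic is sketched below; the data collection and the local model are represented by hypothetical callables, and γ, the step logic, and the stopping cap are illustrative assumptions.

```python
import numpy as np

def rmse(y_test, y_pred):
    """Eq 9: root-mean-square error over all sensors and testing samples."""
    return np.sqrt(np.mean((y_test - y_pred) ** 2))

def working_range(local_model, collect_test_data, u_ss, beta_u, gamma=1.5, max_steps=20):
    """Eqs 9-10: grow the input-mean shift in steps of beta_u while the RMSE stays below
    RMSE_tol = gamma * RMSE_0. 'local_model(u)' and 'collect_test_data(u_mean)' are
    hypothetical callables returning predictions and (u, y) testing data, respectively."""
    u0, y0 = collect_test_data(u_ss)                  # testing data with the training input range
    rmse_tol = gamma * rmse(y0, local_model(u0))      # eq 10
    shift = 0.0
    for _ in range(max_steps):
        u_s, y_s = collect_test_data(u_ss + shift + beta_u)
        if rmse(y_s, local_model(u_s)) > rmse_tol:
            break
        shift += beta_u
    return shift                                      # region roughly u_ss +/- (du + shift)
```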


3.3. Aggregation of Local Spatiotemporal Kernel Models. 3.3.1. Scheduling for Aggregation. Hard switching49 of local models could lead to a deterioration in performance during transitions between local regions. To ensure a smooth transition, the global model is obtained by a scheduling integration (Figure 4)

ŷ(x, t) = [w1(s(Ω,t)) ŷ1(x, t) + ... + wπ(s(Ω,t)) ŷπ(x, t)] / [w1(s(Ω,t)) + ... + wπ(s(Ω,t))]    (11)

where the divisor w1(s) + ... + wπ(s) guarantees that the normalized scheduling values (i.e., model weights) wi(s)/∑_{i=1}^{π} wi(s) sum to one. As shown in Figure 5, each local model is assigned a scheduling function (SF) w(s), which is related to the scheduling variable s (i.e., the system state).

Figure 4. Spatiotemporal kernel multimodel with scheduling integration.

Figure 5. Scheduling functions for integration.

To guide the model scheduling, the system state s(Ω,t) = {s(x,t) | x ∈ Ω} can be constructed from a set of d-lagged field distributions

s(x, t) = [y(x, t − 1), ..., y(x, t − d)]^T ∈ R^d    (12)

However, this scheduling variable s cannot be used directly because it is spatially distributed and infinite-dimensional. For implementation, it has to be reduced to a finite-dimensional variable. As in section 3.2.1, the space/time separation method is used again, where s(x,t) is expanded onto a set of finite-dimensional spatial bases {φi(x)}_{i=1}^{c}

s(x, t) = ∑_{i=1}^{c} φi(x) bi(t)    (13)

The spatial bases {φi(x)}_{i=1}^{c} can be obtained from the Karhunen−Loève decomposition, while the finite-dimensional temporal coefficients

bi(t) = ∫_Ω s(x, t) φi(x) dx ∈ R^d,  b(t) = [b1^T(t), ..., bc^T(t)]^T ∈ R^{cd}    (14)

represent the system state.
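The construction of the scheduling variable in eqs 12−14 amounts to stacking d lagged output snapshots and projecting each lag onto the spatial KL bases. The sketch below shows a discrete version on synthetic data; the sensor grid, the two sine functions standing in for KL bases, and the lag depth are assumptions made only for illustration.

```python
import numpy as np

def scheduling_variable(Y, phi, d):
    """Discrete analogue of eqs 12-14.
    Y:   (N, L) measured output snapshots y(x_i, t).
    phi: (N, c) spatial bases phi_i sampled at the sensors (orthonormal columns).
    d:   number of output lags in s(x, t) = [y(x, t-1), ..., y(x, t-d)]^T.
    Returns b with columns b(t) = [b_1(t); ...; b_c(t)] in R^{c*d} for t = d..L-1."""
    N, L = Y.shape
    c = phi.shape[1]
    b = np.zeros((c * d, L))
    for t in range(d, L):
        lags = Y[:, t - d:t][:, ::-1]                 # columns: y(., t-1), ..., y(., t-d)
        b[:, t] = (phi.T @ lags).reshape(c * d)       # b_i(t) ~ sum_x phi_i(x) s(x, t), row-stacked
    return b[:, d:]

# Synthetic example: N = 19 sensors, c = 2 bases, d = 10 lags
N, L, c, d = 19, 500, 2, 10
x = np.linspace(0.0, np.pi, N)
phi = np.column_stack([np.sin(x), np.sin(2 * x)])
phi /= np.linalg.norm(phi, axis=0)                    # sampled sine modes are orthogonal on this grid
Y = np.random.default_rng(0).standard_normal((N, L))
b = scheduling_variable(Y, phi, d)
print(b.shape)                                        # (20, 490)
```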

3.3.2. Scheduling Functions Design. Although a priori process knowledge can give some information about the scheduling functions, a better and more accurate approach is to learn these functions from process data. First, a set of training data {u(x,t) | x ∈ Ω, t = 1,...,L}, {y(xi,t) | xi ∈ Ω, i = 1,...,N, t = 1,...,L} covering all required working regions and the transitions between local regions is collected. Then, using the scheduling variable s(Ω,t) and all local model predictions {ŷi(x,t)}_{i=1}^{π} as inputs, the scheduling functions can be trained to minimize the prediction error with respect to the measured system output y(x,t). Assume that the scheduling function wi(s(Ω,t)) can be approximated by

wi(s(Ω, t)) = wi(∑_{j=1}^{c} φj(Ω) bj(t)) = μi(b(t))    (15)

and that μi(b(t)) can be further parametrized as

μi(b(t)) = ∑_{j=1}^{z} ϑij Φj(b(t))    (16)

where Φj(b(t)): R^{cd} → R (j = 1,...,z) are nonlinear basis functions (e.g., Gaussian radial basis functions) and ϑij (i = 1,...,π, j = 1,...,z) are unknown parameters. The unknown parameters ϑ = [ϑ11,...,ϑ1z,...,ϑπ1,...,ϑπz]^T can now be estimated by minimizing the cost function

min_ϑ ∑_{i=1}^{N} ∑_{t=1}^{L} |y(xi, t) − ŷ(xi, t)|²    (17)

where the global model prediction ŷ(x,t) is obtained from

ŷ(x, t) = [μ1(b(t)) ŷ1(x, t) + ... + μπ(b(t)) ŷπ(x, t)] / [μ1(b(t)) + ... + μπ(b(t))]    (18)

Though there are many optimization algorithms (e.g., Newton methods), directly solving eq 17 with the prediction given by eq 18 may not be a good choice in this case, because simultaneously optimizing all scheduling functions greatly enlarges the search space and increases the computational cost and the risk of local minima. More importantly, because the individual scheduling values μi(b) (i.e., the individual optimization objectives) are unknown in advance and only one final combined objective y(x,t) is given, it is very difficult to check and guarantee the approximation and generalization performance of the scheduling functions μi(b) themselves. In this sense, the tuning of the function structure and size of μi(b) becomes blind; if they are not properly chosen, it is very difficult to obtain proper scheduling. To overcome this problem, a two-stage optimization method is developed. The first stage is to find optimal scheduling values μ(t) = [μ1(t),...,μπ(t)]^T (t = 1,...,L) by minimizing the cost function

min_{μ(t)} ∑_{i=1}^{N} |y(xi, t) − ŷ(xi, t)|²,  t = 1, ..., L,  s.t. μ1(t) + ... + μπ(t) = 1    (19)

where the global model prediction ŷ(x,t) is given by

ŷ(x, t) = μ1(t) ŷ1(x, t) + ... + μπ(t) ŷπ(x, t)    (20)


It is actually a constrained linear least-squares problem, which can be easily solved using standard algorithms. In general, the sensor number is larger than the number of local models (i.e., N > π); thus, this stage gives optimal scheduling values in the sense of spatial averaging. The second stage is to train the scheduling functions μi(b) (i = 1,...,π) using the data of scheduling variables {b(t)}_{t=1}^{L} and scheduling values {μi(t)}_{t=1}^{L}. This is a standard function approximation problem, which can be solved using polynomials, radial basis functions, support vector machines, and related learning algorithms. After training, a new set of testing data is collected to check the approximation and generalization performance of the trained scheduling functions; if the performance is not satisfactory, the function structure and size of μi(b) (i = 1,...,π) are adjusted. The main advantages of this two-stage method are that (1) by decomposing the complex optimization problem into several simple individual problems, the search space of each individual problem becomes smaller, (2) the approximation and generalization performance of each scheduling function μi(b) can be easily checked, and (3) the tuning of the function structure and size of each μi(b) becomes easier. This scheduling function design step minimizes the global model error, whereas the local modeling in section 3.2 only minimizes the local model error. After these weighting functions are obtained, the integrated global model can be used for system prediction, control, and optimization over a large working range. Note that although the modeling experiment may need sufficient sensors to collect the spatially distributed information, once the model is established a few sensors can be enough for real-time prediction and control.
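The two-stage design can be sketched as follows (an illustration under simplifying assumptions, not the authors' code): stage 1 solves the equality-constrained least-squares problem of eqs 19 and 20 at each time instant by eliminating the constraint via μπ = 1 − ∑ μr, and stage 2 fits each scheduling function μi(b) with the Gaussian radial-basis parametrization of eq 16 by ridge-regularized least squares. The RBF centers, bandwidth, and regularization are hypothetical choices.

```python
import numpy as np

def stage1_scheduling_values(Y, Y_local):
    """Stage 1 (eqs 19-20): at each t, minimize sum_i |y(x_i,t) - sum_r mu_r(t) yhat_r(x_i,t)|^2
    subject to sum_r mu_r(t) = 1. The constraint is eliminated by mu_pi = 1 - sum of the others,
    which turns each instant into an ordinary least-squares problem.
    Y: (N, L) measured outputs; Y_local: (pi, N, L) local-model predictions."""
    pi_, N, L = Y_local.shape
    mu = np.zeros((pi_, L))
    for t in range(L):
        A = (Y_local[:-1, :, t] - Y_local[-1, :, t]).T        # (N, pi-1)
        rhs = Y[:, t] - Y_local[-1, :, t]
        m, *_ = np.linalg.lstsq(A, rhs, rcond=None)
        mu[:-1, t] = m
        mu[-1, t] = 1.0 - m.sum()
    return mu

def rbf_design(B, centers, sigma):
    """Gaussian RBF features Phi_j(b) = exp(-||b - c_j||^2 / (2 sigma^2)); B: (L, cd)."""
    d2 = ((B[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def stage2_fit(B, mu, n_centers=20, sigma=1.0, ridge=1e-6):
    """Stage 2 (eq 16): fit each mu_i(b) = sum_j theta_ij Phi_j(b) by ridge-regularized least
    squares on the stage-1 values. Returns (theta, centers) for later prediction."""
    L = B.shape[0]
    centers = B[np.linspace(0, L - 1, n_centers, dtype=int)]   # simple center choice (an assumption)
    Phi = rbf_design(B, centers, sigma)                         # (L, z)
    A = Phi.T @ Phi + ridge * np.eye(n_centers)
    theta = np.linalg.solve(A, Phi.T @ mu.T)                    # (z, pi)
    return theta.T, centers                                     # theta: (pi, z)

# Synthetic usage: pi = 4 local models, N = 19 sensors, L = 300 samples, cd = 20
rng = np.random.default_rng(0)
pi_, N, L, cd = 4, 19, 300, 20
Y_local = rng.standard_normal((pi_, N, L))
mu_true = rng.dirichlet(np.ones(pi_), size=L).T                 # weights that sum to one
Y = np.einsum('rt,rnt->nt', mu_true, Y_local)
B = rng.standard_normal((L, cd))

mu_hat = stage1_scheduling_values(Y, Y_local)
print(np.allclose(mu_hat.sum(axis=0), 1.0), np.allclose(mu_hat, mu_true, atol=1e-8))
theta, centers = stage2_fit(B, mu_hat, sigma=6.0)               # bandwidth near the scale of ||b||
mu_pred = theta @ rbf_design(B, centers, 6.0).T                 # predicted weights mu_i(b(t))
print(mu_pred.shape)                                            # (4, 300)
```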

4. CASE STUDY

The catalytic rod reactor (Figure 6) is a typical convection−reaction process in the chemical industry. The reactor is fed with pure species A, and a zeroth-order exothermic catalytic reaction of the form A → B takes place in the rod. Since the reaction is exothermic, a cooling medium in contact with the rod is used for cooling. Under the assumptions of constant density and heat capacity of the rod, constant conductivity of the rod, constant temperature at both ends of the rod, and excess of species A in the furnace, the mathematical model describing the spatiotemporal evolution of the rod temperature consists of the following parabolic PDE1

∂y(x, t)/∂t = ∂²y(x, t)/∂x² + βT (e^{−γ/(1+y)} − e^{−γ}) + βu (u(x, t) − y(x, t))

subject to the boundary and initial conditions

y(0, t) = 0,  y(π, t) = 0,  y(x, 0) = y0(x)

where y(x,t), u(x,t), βT, βu, and γ denote the temperature in the reactor, the temperature of the cooling medium, the heat of reaction, the heat transfer coefficient, and the activation energy, respectively. The process parameters are set as βT = 50, βu = 2, and γ = 4.

Figure 6. Catalytic rod.
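For readers who wish to reproduce qualitatively similar data, the sketch below integrates the catalytic rod PDE with the stated parameters using a simple explicit finite-difference scheme. The grid, time step, and integration horizon are illustrative choices; the paper does not specify its numerical solver, and the 40-point finite-difference model mentioned later in this section is referenced only for timing.

```python
import numpy as np

# Catalytic rod: dy/dt = d2y/dx2 + betaT*(exp(-gamma/(1+y)) - exp(-gamma)) + betau*(u - y),
# with y(0,t) = y(pi,t) = 0.
beta_T, beta_u, gamma = 50.0, 2.0, 4.0
nx = 41
x = np.linspace(0.0, np.pi, nx)
dx = x[1] - x[0]
dt = 1e-4                                  # explicit Euler step (satisfies dt < dx**2 / 2)

# Four zone actuators: psi[i] covers [i*pi/4, (i+1)*pi/4), i.e., psi_{i+1} in the text
psi = np.array([((i * np.pi / 4 <= x) & (x < (i + 1) * np.pi / 4)).astype(float)
                for i in range(4)])

def step(y, u_act):
    """One explicit Euler step; u_act is the 4-vector of cooling-medium temperatures u(t)."""
    u_field = psi.T @ u_act                                   # u(x,t) = psi(x)^T u(t)
    lap = np.zeros_like(y)
    lap[1:-1] = (y[2:] - 2 * y[1:-1] + y[:-2]) / dx ** 2
    rhs = lap + beta_T * (np.exp(-gamma / (1.0 + y)) - np.exp(-gamma)) + beta_u * (u_field - y)
    y_new = y + dt * rhs
    y_new[0] = y_new[-1] = 0.0                                # Dirichlet boundary conditions
    return y_new

# Run toward a steady state under the constant input u_ss1 = 1 of the case study
y = np.zeros(nx)
u_ss1 = np.ones(4)
for _ in range(int(5.0 / dt)):            # a few time units lets this dissipative PDE settle
    y = step(y, u_ss1)
print(float(y.max()))                      # peak of the steady-state profile y_ss1(x)
```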

There are four available actuators u(t) = [u1(t),...,u4(t)]^T with spatial distribution functions ψ(x) = [ψ1(x),...,ψ4(x)]^T to manipulate the temperature u(x,t) of the cooling medium (i.e., u(x,t) = ψ(x)^T u(t)), where ψi(x) = H(x − (i − 1)π/4) − H(x − iπ/4) (i = 1,...,4) and H(·) is the standard Heaviside function. The number of sensors required for modeling may depend on the complexity of the system, the desired modeling accuracy, and physical considerations. In this case, 19 sensors uniformly distributed in space are used for measurements (N = 19); the effect of the sensor number on the modeling accuracy is discussed later. The sampling interval is set as Δt = 0.01. Note that the considered process consists of a single scalar PDE, although the proposed modeling is also applicable to systems with two (or more) coupled PDEs if the corresponding outputs are measurable.

Operating Space Division and Construction of Local Models. The operating space division and the construction of local models are performed iteratively using Algorithm 1. Initially, the first operating point is selected as the steady state output yss1(x) corresponding to the steady state input uss1(x) = 1. Random input signals around uss1(x) with amplitude Δu1(x) = 4 are then used to excite the system. As an example, 2000 samples of actuator 1 with a simulation time of 20 are shown in Figure 7, where the dashed red line denotes the steady state input uss1(x) (also the mean of the input signals). Correspondingly, 2000 samples of the output y(x,t), shown in Figure 8, are measured to train the model. To illustrate the excited dynamics at one spatial location, the measured output y(x11,t) at sensor 11 is shown in Figure 9, where the dashed red line denotes the steady state output yss1(x11) (also the mean of y(x11,t)). The linearity evaluation of the excited region shows that the amplitude Δu1(x) = 4 of the input signals is suitable.

Figure 7. Input signals u1(t) of actuator 1 for training in region 1.

Next, the linear spatiotemporal kernel modeling is performed. In the modeling, the first four Karhunen−Loève basis functions {φi(x)}_{i=1}^{n} (n = 4), shown in Figure 10, are used as the spatial output bases. The temporal bases {ϕi(t)}_{i=1}^{l} are chosen as Laguerre functions with time-scaling factor ξ = 4.05 and dimension l = 4. These parameters (i.e., n, l, and ξ) are adjusted and determined according to the modeling performance.


Figure 8. Measured output y(x,t) for training in region 1.

Figure 9. Measured output y(x11,t) at sensor 11 for training in region 1.

Figure 10. Karhunen−Loève spatial output bases φi(x) (i = 1, 2, 3, 4) for modeling in region 1.

Figure 11. Measured output y(x,t) for testing in region 1.

Figure 12. Predicted output ŷ(x,t) of local model 1 on the testing data in region 1.

The spatial input bases {ψi(x)}_{i=1}^{m} (m = 4) are the standard Heaviside functions, which are determined from process knowledge. The obtained model is tested after training. The measured output y(x,t) and the predicted output ŷ(x,t) of local model 1 on the testing data are plotted in Figures 11 and 12, respectively, while the prediction error e(x,t) = y(x,t) − ŷ(x,t) and the spatial normalized absolute error SNAE(t) = ∫_Ω |e(x,t)| dx / ∫_Ω dx are shown in Figures 13 and 14, respectively. To show the model performance at one spatial location, the testing results at sensor 11 are also shown in Figures 15 and 16. These testing results show that the obtained local spatiotemporal model works well in its local region.

Figure 13. Prediction error e(x,t) of local model 1 on the testing data in region 1.

To determine the working range of the obtained local model 1, a new set of testing data is collected with mean-shifted input signals (βu1 = 1.5), as shown in Figure 17. From Figure 18, it can be seen that SNAE(t) of local model 1 on the new testing data is larger than SNAE(t) on the original testing data. Compared with the original testing, the RMSE increases from 0.027 to 0.0733. This model error is acceptable for the assumed tolerant RMSE of 0.08.


Figure 14. SNAE(t) of local model 1 on the testing data in region 1.

Figure 15. Measured output y(x11,t) and predicted output ŷ(x11,t) of local model 1 at sensor 11 on the testing data in region 1.

Figure 16. Prediction error e(x11,t) of local model 1 at sensor 11 on the testing data in region 1.

Figure 17. Input signals u1(t) of actuator 1 for determining the working range of local model 1.

Figure 18. SNAE(t) of local model 1 on testing data with mean-shifted input signals (βu1 = 1.5).

These testing results suggest that the second operating point can be selected as the steady state output yss2(x) with the steady state input uss2(x) = uss1(x) + 2βu1 = 4. In a similar way, more local working points and local models are obtained as follows; for simplicity, further details are omitted since they are similar to those for the first working point.
• For the second working point, random input signals around uss2(x) = 4 with amplitude Δu2(x) = 4 are used to excite the system. After the linearity evaluation of the excited region is passed, the training and testing of local model 2 are performed. The working range of this local model suggests that the third operating point can be selected as the steady state output yss3(x) with the steady state input uss3(x) = 7.
• Next, random input signals around uss3(x) = 7 with amplitude Δu3(x) = 4 are used to excite the system at the third working point. The linearity evaluation of the excited region, the training and testing of local model 3, and the determination of its working range then follow in a similar way.
• Finally, the fourth operating point is selected as the steady state output yss4(x) with steady state input uss4(x) = 10. The linearity evaluation of the excited region, the training and testing of local model 4, and the determination of its working range are performed similarly.
In this example, four working regions are assumed to be enough to cover the required working range.

Aggregation of Local Models. After the operating space division and the construction of local models are completed, the global model can be obtained by aggregating the obtained local models. First, to train the scheduling functions, random input signals covering the four local regions are designed to excite the system dynamics, especially the transition dynamics between local regions.


Figure 19. Input signals u1(t) of actuator 1 for scheduling functions training.

For example, 14 000 samples of actuator 1 with a simulation time of 140 are shown in Figure 19, where the mean of the input signals increases gradually from region 1 to region 4 to cover the four local regions and their transitions. Correspondingly, the measured 14 000 samples of the output y(x,t), shown in Figure 20, are used to train the scheduling functions of the four local models.

Figure 20. Measured output y(x,t) for scheduling functions training.

Then the scheduling values μi(t) (i = 1, 2, 3, 4) are computed, as shown in Figure 21, for the scheduling functions training. They reflect the weights of the corresponding local models as the system state changes. The scheduling variable b(t) is constructed as the mean of the 10-lagged spatiotemporal output to reflect the change of system states and working regions. Using the scheduling variable and the scheduling values as input and output, the scheduling functions of the four local models are trained as shown in Figure 22. To illustrate the performance of the scheduling functions, the measured weights and the predicted weights of the scheduling functions of local models 1 and 4 are shown in Figures 23 and 24, respectively, as an example. The scheduling functions capture the dominant dynamics of the weights of the local models. Finally, all local models are integrated to obtain the global multimodel.

Figure 21. Computed (measured) scheduling values μi(t) (i = 1, 2, 3, 4) for scheduling functions training.

Figure 22. Scheduling functions μi(b) (i = 1, 2, 3, 4) of four local models.

Figure 23. Measured weights μ1(t) and predicted weights μ̂1(t) of scheduling function μ1(b) on the training data.

Figure 24. Measured weights μ4(t) and predicted weights μ̂4(t) of scheduling function μ4(b) on the training data.

To test the performance of the obtained multimodel, random input signals covering the four local regions are designed to excite the system dynamics. Correspondingly, the measured 14 000 samples of the output y(x,t), shown in Figure 25, are used to test the multimodel. The predicted output and the prediction error of the multimodel are shown in Figures 26 and 27, respectively, and SNAE(t) of the multimodel on the testing data is shown in Figure 28. Obviously, the system dynamics over the four local regions are modeled very well by the proposed multimodeling.

Figure 25. Measured output y(x,t) for the multimodel testing.

Figure 26. Predicted output ŷ(x,t) of the multimodel on the testing data.

Figure 27. Prediction error e(x,t) of the multimodel on the testing data.

Figure 28. SNAE(t) of the multimodel on the testing data.

The influence of the sensor number on the modeling error RMSE is also studied. As seen from Figure 29, the multimodeling performance improves as the number of sensors increases, since more spatial information is available; however, little improvement is seen if more than 19 sensors are used.

Figure 29. RMSE of the multimodel on the testing data.

In the modeling, the KL method is used to perform the dimension reduction. To compare the computing time with that of a detailed solution, a finite difference model with 40 discretized spatial points is simulated. It takes about 6.3 s to obtain the output shown in Figure 25, while the reduced-order multimodel needs only about 2 s to obtain the prediction shown in Figure 26. The resulting reduced-order model is thus faster to solve than the detailed solution, which can benefit real-time optimization and control schemes, e.g., optimal and predictive control, where the model needs to be solved repeatedly.50,51

To compare with single modeling, a new set of 14 000 output samples with uniformly distributed random input signals between −3 and 14 is collected for training, and the testing data are the same as before. First, a single linear kernel model is obtained using the same local kernel modeling as in the multimodeling. As shown in Figure 30, the single linear model works well only in a middle working region, while the multimodel works well in all four working regions because of its multimodeling mechanism. Second, a single nonlinear model is obtained with Karhunen−Loève-based neural network modeling.23−25 As shown in Figure 31, compared with the single linear modeling, the modeling error is reduced owing to its nonlinear modeling capability. However, it still works well only in the middle working region and is worse than the proposed multimodeling in the boundary regions. The reason could be that it is very difficult to obtain a good global model structure and to design input signals with a large range over multiple working regions. By decomposing the complexities of modeling and experiment using the multimodel mechanism, the modeling performance can be improved.

Figure 30. SNAE(t) of the multimodel and single linear model on the testing data.

Figure 31. SNAE(t) of the multimodel and single nonlinear model on the testing data.



The effectiveness of the proposed modeling method is clearly demonstrated in this application.

5. CONCLUSIONS

In this study, the modeling problem of an unknown nonlinear DPS with multiple working points is investigated, and a kernel-based spatiotemporal multimodeling approach is proposed. The operating space division and the local modeling are performed iteratively, where the working range of the current local model helps to determine the next operating point; by using the potential of each local model, the number of regions can be reduced. In the local modeling, to reduce the model dimension and the number of unknown parameters, the Karhunen−Loève method is used for the space/time separation and dimension reduction.



Finally, to guarantee a smooth transition between local spatiotemporal models, a scheduling integration method is used to provide a global spatiotemporal model, and a two-stage design method is proposed to simplify the design of the scheduling functions. Techniques such as multimodeling, space/time separation, and the two-stage scheduling functions design reduce the complexity of global DPS modeling. With multimodeling, the model accuracy is better and the working range is larger than with a single local model, while the experiment and modeling complexities are reduced compared with single global modeling. The effectiveness of the proposed modeling approach is verified by simulations.



AUTHOR INFORMATION

Corresponding Author

*E-mail: [email protected].


Notes


The authors declare no competing financial interest.



ACKNOWLEDGMENTS

The authors would like to thank the associate editor and anonymous referees for their valuable comments and suggestions. This work is partially supported by grants from NNSF China (61004047, 51175323, 61273182, 61233004, 61074061, 51175519) and a GRF grant from RGC of Hong Kong SAR (CityU: 117310).



REFERENCES

(1) Christofides, P. D. Nonlinear and Robust Control of PDE Systems: Methods and Applications to Transport-Reaction Processes; Birkhäuser: Boston, 2001.
(2) Jones, P.; Duncan, S.; Rayment, T.; Grant, P. Control of temperature profile for a spray deposition process. IEEE Trans. Control Syst. Technol. 2003, 11 (5), 656.
(3) Deng, H.; Li, H.-X.; Chen, G. Spectral approximation based intelligent modeling for distributed thermal processes. IEEE Trans. Control Syst. Technol. 2005, 13 (5), 686.
(4) Ray, W. H. Advanced Process Control; McGraw-Hill: New York, 1981.
(5) Baker, J.; Christofides, P. D. Output feedback control of parabolic PDE systems with nonlinear spatial differential operators. Ind. Eng. Chem. Res. 1999, 38, 4372.
(6) Baker, J.; Christofides, P. D. Finite-dimensional approximation and control of nonlinear parabolic PDE systems. Int. J. Control 2000, 73, 439−456.
(7) Demetriou, M. A.; Rosen, I. G. On-line robust parameter identification for parabolic systems. Int. J. Adapt. Control 2001, 15 (6), 615.
(8) Banks, H. T.; Kunisch, K. Estimation Techniques for Distributed Parameter Systems; Birkhäuser: Boston, 1989.
(9) Coca, D.; Billings, S. A. Direct parameter identification of distributed parameter systems. Int. J. Syst. Sci. 2000, 31 (1), 11.
(10) Butkovskiy, A. G. Green's Functions and Transfer Functions Handbook, 1st ed.; Ellis Horwood: Chichester, NY, 1982.
(11) Curtain, R.; Morris, K. Transfer functions of distributed parameter systems: a tutorial. Automatica 2009, 45 (5), 1101.
(12) Li, M. H.; Christofides, P. D. An input/output approach to the optimal transition control of a class of distributed chemical reactors. Chem. Eng. Sci. 2007, 62, 2979.
(13) Gay, D. H.; Ray, W. H. Identification and control of distributed parameter systems by means of the singular value decomposition. Chem. Eng. Sci. 1995, 50 (10), 1519.
(14) Zheng, D.; Hoo, K. A.; Piovoso, M. J. Low-order model identification of distributed parameter systems by a combination of singular value decomposition and the Karhunen-Loève expansion. Ind. Eng. Chem. Res. 2002, 41 (6), 1545.
(15) Zheng, D.; Hoo, K. A. Low-order model identification for implementable control solutions of distributed parameter systems. Comput. Chem. Eng. 2002, 26 (7−8), 1049.
(16) Zheng, D.; Hoo, K. A. System identification and model-based control for distributed parameter systems. Comput. Chem. Eng. 2004, 28 (8), 1361.
(17) Doumanidis, C. C.; Fourligkas, N. Temperature distribution control in scanned thermal processing of thin circular parts. IEEE Trans. Control Syst. Technol. 2001, 9 (5), 708.
(18) Guo, L. Z.; Billings, S. A. Identification of partial differential equation models for continuous spatio-temporal dynamical systems. IEEE Trans. Circuits Syst. II: Express Briefs 2006, 53 (8), 657.
(19) Guo, L. Z.; Billings, S. A.; Wei, H. L. Estimation of spatial derivatives and identification of continuous spatio-temporal dynamical systems. Int. J. Control 2006, 79 (9), 1118.
(20) Bär, M.; Hegger, R.; Kantz, H. Fitting partial differential equations to space-time dynamics. Phys. Rev. E 1999, 59 (1), 337.
(21) Guo, L. Z.; Billings, S. A. State-space reconstruction and spatio-temporal prediction of lattice dynamical systems. IEEE Trans. Autom. Control 2007, 52 (4), 622.
(22) Coca, D.; Billings, S. A. Identification of finite dimensional models of infinite dimensional dynamical systems. Automatica 2002, 38 (11), 1851.
(23) Aggelogiannaki, E.; Sarimveis, H. Nonlinear model predictive control for distributed parameter systems using data driven artificial neural network models. Comput. Chem. Eng. 2008, 32 (6), 1225.
(24) Zhou, X. G.; Liu, L. H.; Dai, Y. C.; Yuan, W. K.; Hudson, J. L. Modeling of a fixed-bed reactor using the KL expansion and neural networks. Chem. Eng. Sci. 1996, 51 (10), 2179.
(25) Qi, C. K.; Li, H.-X. Hybrid Karhunen-Loève/neural modeling for a class of distributed parameter systems. Int. J. Intell. Syst. Technol. Appl. 2008, 4 (1−2), 141.
(26) Li, H.-X.; Qi, C. K.; Yu, Y. G. A spatio-temporal Volterra modeling approach for a class of nonlinear distributed parameter processes. J. Process Control 2009, 19 (7), 1126.
(27) Li, H.-X.; Qi, C. K. Incremental modeling of nonlinear distributed parameter processes via spatio-temporal kernel series expansion. Ind. Eng. Chem. Res. 2009, 48 (6), 3052.
(28) Qi, C. K.; Li, H.-X. A Karhunen-Loève decomposition based Wiener modeling approach for nonlinear distributed parameter processes. Ind. Eng. Chem. Res. 2008, 47 (12), 4184.
(29) Qi, C. K.; Li, H.-X. A time/space separation based Hammerstein modeling approach for nonlinear distributed parameter processes. Comput. Chem. Eng. 2009, 33 (7), 1247.
(30) Qi, C. K.; Zhang, H.-T.; Li, H.-X. A multi-channel spatio-temporal Hammerstein modeling approach for nonlinear distributed parameter processes. J. Process Control 2009, 19 (1), 85.
(31) Narendra, K. S.; Balakrishnan, J.; Ciliz, M. K. Adaptation and learning using multiple models, switching, and tuning. IEEE Control Syst. Mag. 1995, 15 (3), 37.
(32) Baruch, I. S.; Lopez, R. B.; Guzman, J. L. O.; Flores, J. M. A fuzzy-neural multi-model for nonlinear systems identification and control. Fuzzy Sets Syst. 2008, 159 (20), 2650.
(33) Banerjee, A.; Arkun, Y.; Ogunnaike, B.; Pearson, R. Estimation of nonlinear systems using linear multiple models. AIChE J. 1997, 43 (5), 1204.
(34) Göttsche, Th. H.; Hunt, K. J.; Johansen, T. A. Nonlinear dynamics modeling via operating regime decomposition. Math. Comput. Simul. 1998, 46, 543.
(35) Pal, N. R.; Saha, S. Simultaneous structure identification and fuzzy rule generation for Takagi−Sugeno models. IEEE Trans. Syst. Man Cybern. B 2008, 38 (6), 1626.
(36) Boukhris, A.; Mourot, G.; Ragot, J. Non-linear dynamic system identification: a multi-model approach. Int. J. Control 1999, 72 (7−8), 591.
(37) Eikens, B.; Karim, M. N. Process identification with multiple neural network models. Int. J. Control 1999, 72 (7−8), 576.
(38) Murray-Smith, R.; Johansen, T. A. Multiple Model Approaches to Modelling and Control; Taylor and Francis: London, 1997.
(39) Johansen, T. A.; Foss, B. A. Constructing NARMAX models using ARMAX models. Int. J. Control 1993, 58 (5), 1125.
(40) Johansen, T. A.; Foss, B. A. Identification of non-linear system structure and parameters using regime decomposition. Automatica 1995, 31 (2), 321.
(41) Venkat, A. N.; Vijaysai, P.; Gudi, R. D. Identification of complex nonlinear processes based on fuzzy decomposition of the steady state space. J. Process Control 2003, 13 (6), 473.
(42) Holmes, P.; Lumley, J. L.; Berkooz, G. Turbulence, Coherent Structures, Dynamical Systems, and Symmetry; Cambridge University Press: New York, 1996.
(43) Sirovich, L. New Perspectives in Turbulence, 1st ed.; Springer: New York, 1991.
(44) Park, H. M.; Cho, D. H. The use of the Karhunen-Loève decomposition for the modeling of distributed parameter systems. Chem. Eng. Sci. 1996, 51 (1), 81.
(45) Wahlberg, B. System identification using Laguerre models. IEEE Trans. Autom. Control 1991, 36 (5), 551.
(46) Wahlberg, B. System identification using Kautz models. IEEE Trans. Autom. Control 1994, 39 (6), 1276.
(47) Heuberger, P. S. C.; Van den Hof, P. M. J.; Bosgra, O. H. A generalized orthonormal basis for linear dynamical systems. IEEE Trans. Autom. Control 1995, 40 (3), 451.
(48) Campello, R. J. G. B.; Favier, G.; Amaral, W. C. Optimal expansions of discrete-time Volterra models using Laguerre functions. Automatica 2004, 40 (5), 815.
(49) Kulkarni, S. R.; Ramadge, P. J. Model and controller selection policies based on output prediction errors. IEEE Trans. Autom. Control 1996, 41 (11), 1594.
(50) Armaou, A.; Christofides, P. D. Dynamic optimization of dissipative PDE systems using nonlinear order reduction. Chem. Eng. Sci. 2002, 57, 5083.
(51) Dubljevic, S.; Mhaskar, P.; El-Farra, N. H.; Christofides, P. D. Predictive control of transport-reaction processes. Comput. Chem. Eng. 2005, 29, 2335.
