Ind. Eng. Chem. Res. 2009, 48, 3158–3166
Methodology for the Screening of Signal Analysis Methods for Selective Detection of Hydrodynamic Changes in Fluidized Bed Systems

Malte Bartels, Bart Vermeer, Peter J. T. Verheijen, John Nijenhuis, Freek Kapteijn, and J. Ruud van Ommen*

Delft University of Technology, DelftChemTech, Delft Research Centre for Sustainable Energy, Julianalaan 136, 2628 BL Delft, The Netherlands
*To whom correspondence should be addressed. Tel.: +31-15-2782133. Fax: +31-15-27-85006. E-mail: [email protected].

Various multiphase reactor systems need to be monitored, e.g., for increasing efficiency or for avoiding catastrophic events such as defluidization, excessive foaming, and flooding. In addition to the commonly available average process variables, pressure fluctuation signals measured at a high sampling frequency have been shown to contain relevant information about the process state. Many different analysis techniques are available for the analysis of such signals. However, only sufficiently selective methods will be suitable for an unambiguous detection of a specific change in the process, i.e., the identification of its cause. In this paper, a new methodology is presented for screening many different signal analysis methods in combination with various signal pretreatment methods, with the goal of finding those combinations that are selective toward a specific process change. This methodology can in general be applied to any specific process change; here we focus on the detection of agglomeration in fluidized beds. The presented methodology is illustrated with several fluidized bed data sets, demonstrating the validity and the benefit of this approach.

Introduction

Multiphase reactors are used for a variety of processes in industry. Typical examples are trickle beds (gas-liquid-solid), monolith reactors (gas-liquid-solid), bubble columns (gas-liquid), slurry bubble columns (gas-liquid-solid), and fluidized beds (gas-solid). Each of these reactor types can exhibit specific operational problems. For trickle beds, liquid maldistribution should be avoided; operated in countercurrent mode, flooding can occur even at low gas and liquid velocities (e.g., ref 1). Bubble and slurry bubble columns, e.g., used for wastewater treatment and in the biotechnology area, can exhibit excessive foaming or regime transitions (e.g., refs 2 and 3). For fluidized beds, the formation of unwanted agglomerates can lead to defluidization of the bed (e.g., ref 4). Prevention of unwanted behavior in multiphase reactors using an early warning system is therefore important to ensure trouble-free operation.

Fluidized Bed Monitoring Applications. Gas-solid fluidized beds as a specific multiphase reactor type are utilized for a variety of processes in the chemical industry, such as catalytic reactions, drying, coating, and energy conversion (e.g., ref 5). Different operating parameters play a role in fluidized beds. Besides operating parameters such as flows, pressures, and temperatures, the operation of a particulate process is also determined by the particle properties, mainly the size distribution, density, shape factor, and coefficient of restitution (quantifying the elasticity of the particle collisions). Changes in those parameters can significantly influence the hydrodynamic behavior of the bed. Knowledge of the process conditions is of great importance not only for the operability and safety of the process but also for its economics, operating at optimal conditions and avoiding unscheduled shutdowns. Therefore, there is a clear need for online monitoring methods for various fluidized bed applications. The goal of each monitoring method can vary, however. Generally, one can think of a method that would allow reliably operating the process closer to the optimal conditions, e.g.,
controlling the bed hydrodynamics and the resulting heat transfer by keeping the particle size within specific limits. Another important objective is the early detection of catastrophic events leading to an undesired shutdown of the installation. The importance of process monitoring is illustrated by two examples of different fluidized bed processes. In the case of fluidized bed drying of powders, an optimal moisture content and particle size distribution mainly define the desired product quality. During the drying process, moisture evaporates and larger agglomerates break up into smaller entities. As the moisture reaches a critical lower level, the temperature of the bed will quickly rise due to the heat continuously provided by the fluidization gas. This can have negative effects on product quality and should be avoided; i.e., a defined drying "end point" should not be exceeded. In this case, process monitoring is necessary to accurately determine the optimal point to stop the drying process.6

In the area of combustion and gasification, fluidized beds with inert bed material are often used to obtain a uniform heat distribution from the burning solid fuel. Silica sand is commonly used as a heat reservoir to ensure homogeneous heat distribution. In this process, sand particles can become covered with a sticky layer due to the occurrence of eutectic mixtures with melting points below the operating temperature. The sand particles subsequently adhere to each other and larger agglomerates are formed (e.g., refs 4 and 7). This effect is undesired, as it decreases the degree of mixing and results in an inhomogeneous heat distribution. Ultimately, it can lead to partial or total defluidization and a costly shutdown of the installation. A monitoring technique providing early detection of this phenomenon is therefore crucial for preventing such events.8

Conventional vs Advanced Detection Methods. In industrial fluidized bed installations, temperature and pressure measurements are in general rather simple and robust, which explains their wide application in process monitoring;9 mainly the pressure drop over the whole bed or a part of it, and/or temperature differences in the bed, are utilized. The actual sensors are used to obtain process variables that are sampled at a relatively low sampling frequency, normally below 1 Hz.
Those low-frequency measurements have often been shown to reliably detect a process event only at a very late stage, e.g., when approaching defluidization in fluidized bed combustion.8 This leaves insufficient time to counteract the agglomeration process and return to normal operating conditions. Pressure fluctuation measurements have been shown to be suitable to overcome this dilemma. The pressure fluctuations characterize the hydrodynamics of the bed, which are mainly dominated by the different bubble phenomena: formation, rise, coalescence and breakup, and eruption. The bubble phenomena are, in turn, influenced by the particle properties. Therefore, the pressure fluctuations indirectly also contain information about the changing particle properties involved in the early stages of agglomeration, which makes them a suitable measurement source for monitoring techniques. The pressure fluctuations have to be measured at high sampling frequencies in order to resolve the occurring high-frequency phenomena. Typical sampling frequencies are on the order of 100-400 Hz. In addition, the pressure fluctuation signal is high-pass filtered; this effectively removes the average, so that only the fluctuations around zero are considered.

In the literature, several methods have been proposed for monitoring fluidized beds and the early detection of agglomeration. The proposed methods vary in complexity. Only a short overview is presented here; for a thorough summary the reader is referred to ref 10. Relatively simple methods employ existing average process measurements at the available sampling frequencies for process variables, usually well below 1 Hz. The average pressure drop over the bed has been proposed (e.g., ref 11), as well as its standard deviation and variance (e.g., refs 12 and 13) and principal component analysis based on the pressure drop (e.g., ref 14). These methods generally do detect particle size changes and/or agglomeration, but only at a relatively late stage. Their response toward other process changes has either not been investigated or has been shown to be rather sensitive, e.g., to changes in the fluidizing gas flow.15 Based on pressure fluctuation data, different more complex methods have been presented, e.g., attractor comparison16 and the W-statistic.17 Of those methods, only attractor comparison has been explicitly investigated for its sensitivity toward other process parameters; it has been shown to be insensitive toward relative changes of gas velocity and bed mass within 10% for bubbling beds.

Selectivity. Many of the methods for agglomeration detection presented in the literature have indeed been shown to be sensitive toward the actual agglomeration process. Ideally, a method would be exclusively sensitive toward the agglomeration process and not to any other process changes. However, the different methods can also exhibit some sensitivity toward other process changes that also affect the hydrodynamics of the bed, i.e., suffer from cross-sensitivity. The investigation of this aspect is often neglected in the literature. Changes in the fluidizing gas velocity and total bed mass are considered the most common process changes. For the case of agglomeration detection it is undesired that a given method is sensitive toward another process change, as this would falsely indicate an agglomeration event ("false alarm").
However, a method is still useful if its sensitivity toward agglomeration is large relative to its sensitivity toward other process changes. Therefore, it is important to consider the relative difference in the response of any given method toward the different process changes. Moreover, along the same rationale, one can also consider a method that is sensitive toward a specific other process change, e.g., gas velocity, and not toward agglomeration. Such a method can be very valuable as a "countercheck", i.e., to check whether an observed change in the process can be attributed to a phenomenon other than agglomeration.

Figure 1. Global flow sheet of the screening methodology.

Lead Time. For a ready-to-use monitoring application one also has to consider the lead time of a method, i.e., the time between the reliable detection of the event and the event itself. Whether a method can be considered suitable depends on its lead time in combination with the time scales of the process, i.e., whether the event can still be prevented. With the current methodology we only focus on the sensitivity and selectivity for a specified process change. The lead time of any given method can subsequently be extracted from the results together with a threshold (alarm) definition.

Goal. The goal of this paper is to present a new generic methodology for screening various signal analysis methods and signal pretreatment methods in order to identify suitable methods that are selectively sensitive toward specific process changes. Suitable methods should therefore satisfy two requirements: a high sensitivity toward the desired process change and a minimum cross-sensitivity toward other process changes (high selectivity). The methodology should not be confused with the online monitoring itself; it is only used to identify suitable methods that subsequently have to be implemented in a real process for online monitoring. The methodology serves as a tool to assess existing signal analysis methods in the literature for their suitability as a selective event detection tool. We emphasize that the presented methodology is generic and therefore in principle applicable to any process and signal type. In this paper, the methodology is applied to pressure fluctuations in gas-solid fluidized beds and focused on the detection of agglomeration as the relevant process change. For this purpose, we first present the methodology, followed by an illustrating example.

Screening Methodology

The screening methodology is described in more detail in the following and is globally illustrated in the flow sheet in Figure 1; a more detailed/refined flow sheet of the methodology is presented in Figure 5.
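To make the global flow concrete, the sketch below shows one way such a screening loop could be organized. This is our own minimal illustration in Python, not code from the original work; the arguments pretreatments, analyses, and selectivity_index are hypothetical placeholders for the methods discussed in the following sections, and the number of time windows is only an example.

```python
import numpy as np

def screen(datasets, pretreatments, analyses, selectivity_index, n_windows=20):
    """Evaluate every pretreatment/analysis combination on every data set.

    datasets          : dict, process-change label -> 1-D pressure fluctuation signal
    pretreatments     : dict, label -> function(signal) -> pretreated signal
    analyses          : dict, label -> function(signal window) -> scalar analysis variable
    selectivity_index : function(dict of responses) -> value of f in [0, 1]
    """
    results = {}
    for p_name, pretreat in pretreatments.items():
        for a_name, analyze in analyses.items():
            responses = {}
            for change, signal in datasets.items():
                # analysis variable evaluated over consecutive time windows
                windows = np.array_split(pretreat(signal), n_windows)
                responses[change] = np.array([analyze(w) for w in windows])
            results[(p_name, a_name)] = selectivity_index(responses)
    return results
```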
Figure 2. Illustration of a typical power spectrum from fluidized bed pressure fluctuation data.
Figure 3. Illustration of the continuity criterion, incorporating the standard deviation of each time block. A method is only accepted if a continuous (increasing or decreasing) line can be constructed within the window defined by twice the standard deviation around each mean value (indicated by the bars here).
Data Sets. The first step is to provide the input data sets. For the general methodology, any kind of measurement source could be used; here we use pressure fluctuation data. Each data set should contain changes in just one process variable. That means that all operating parameters have to be kept constant except the one that is to be varied, either gradually or in distinct steps. If the acquisition of such isolated process changes is not possible, the methodology can still be applied with some alternative definitions; we will elaborate on this in the following sections.

Signal Pretreatment. Pressure fluctuations from a fluidized bed contain information about the hydrodynamics, i.e., bubble phenomena (formation, coalescence, breakup) and bed oscillations. Those phenomena are generally frequency-dependent; the intensity or amplitude of the pressure fluctuations as a function of frequency is commonly presented in a power spectrum, as illustrated in Figure 2. By applying signal pretreatment in the frequency domain, one can therefore try to reduce the number of phenomena in the signal. The power decreases with increasing frequency in fluidized beds (e.g., ref 8). In addition, certain frequencies can contain more power than others, leading to characteristic local maxima in the power spectrum. By applying different filters to the raw signal before the actual analysis, one can limit the analysis to certain frequencies of the signal. Different frequencies relate to corresponding phenomena via the physical scale of the considered phenomenon. For example, bubbles in the bed will exhibit pressure fluctuations over a large frequency range, whereas bed oscillations can be observed in a relatively confined frequency range, often exhibiting a characteristic peak at the lower end of the power spectrum at a few hertz. The absolute frequencies largely depend on the bed and particle scale. By confining the analysis to only certain frequency ranges, one can enhance the sensitivity of a method toward specific hydrodynamic changes. In general, the frequency is related to the physical scale; e.g., more macroscopic phenomena relate to lower frequencies, whereas single particle-particle interactions relate to higher frequencies. However, it is normally not clear a priori which frequency range would correspond to a specific physical effect and on which frequency range one should focus in the analysis. Therefore, we chose not to make any assumptions, but to screen different frequency ranges.
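As a concrete illustration of this kind of frequency-domain pretreatment, the sketch below applies a sixth-order Butterworth filter (the filter type used in the pretreatment set listed in the Appendix) and computes a power spectrum. It assumes Python with NumPy/SciPy; the sampling frequency, the random stand-in signal, and the cutoff values are only examples.

```python
import numpy as np
from scipy import signal

fs = 400.0                        # sampling frequency in Hz (example value)
t = np.arange(0, 60.0, 1.0 / fs)  # 60 s of data
p = np.random.randn(t.size)       # stand-in for a measured pressure fluctuation signal

# sixth-order Butterworth filters (cutoffs normalized to the Nyquist frequency)
b_lp, a_lp = signal.butter(6, 5.0 / (fs / 2), btype="low")                       # low pass, 5 Hz
b_bp, a_bp = signal.butter(6, [5.0 / (fs / 2), 30.0 / (fs / 2)], btype="band")   # band pass, 5-30 Hz

p_lp = signal.filtfilt(b_lp, a_lp, p)   # zero-phase filtering of the raw signal
p_bp = signal.filtfilt(b_bp, a_bp, p)

# power spectrum of the filtered signal (cf. Figure 2)
f, Pxx = signal.welch(p_bp, fs=fs, nperseg=2048)
```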
The pretreatment methods incorporated here consist of three groups: frequency filtering using a sixth-order Butterworth filter, wavelet-based filtering using a Daubechies 5 wavelet, and principal component reprojection into a lower dimensionality. Compared to frequency and wavelet filtering, the approach of pretreating the data by principal component analysis is somewhat different and does not allow any direct correlation with frequencies as presented above. For the parameters of each pretreatment method, see the Appendix.

Signal Analysis. As previously mentioned, information about the hydrodynamic state of a fluidized bed can be extracted from high-frequency pressure fluctuation measurements in the bed. A large variety of methods for the analysis of time-series data is available in the literature. In general, these methods are generic and therefore not only applicable to a specific process, such as a fluidized bed. For the practical implementation of this screening methodology, a limited number of promising methods has been selected. The implemented signal analysis methods consist of the following: Kolmogorov-Smirnov (KS) test, Kuiper test, rescaled range (R/S) analysis, diffusional analysis, probability density function (PDF) moments, autocorrelation, principal component analysis (PCA), time-frequency analysis, attractor comparison (S-statistic), correlation dimension, Kolmogorov-Sinai (KS) entropy, average cycle time (ACT), average absolute deviation (AAD), and the W-statistic. The choice to incorporate these methods into the screening has been based on their appearance within the relevant fluidization signal analysis literature. Fluidized beds can be considered chaotic systems;18,19 one could therefore argue for the application of only nonlinear analysis methods. However, linear methods have also been incorporated in the current approach, because linear methods can also be capable of extracting certain relevant information from the pressure fluctuation data. Moreover, in the case of comparable performance, linear methods are typically preferred over nonlinear methods because of their simplicity and generally lower computational demand.

The various signal analysis methods together with the various pretreatment methods give rise to a rather large number of pretreatment/analysis combinations. In the current implementation of the methodology, 26 different basic methods have been implemented, with differing parametrization in some cases, yielding a total of 40 signal analysis methods. The three basic pretreatment methods with differing parametrization resulted in a total of 32 pretreatment methods applied here, besides the raw data. The total number of combinations therefore equals 40 × 33 = 1320 for each data set. This rather large number inspired an automated screening approach. Although one could potentially omit certain combinations a priori based on physical or mathematical reasoning, we have chosen to retain all possible combinations. This way, no unexpectedly promising combination within the chosen set of methods is excluded a priori.

Selectivity Index Calculation. Depending on the number of data sets, pretreatment methods, and analysis methods, the described methodology can yield a very large number of analysis results. These subsequently have to be assessed in terms of their suitability to detect a certain process change, e.g., agglomeration.
The crucial step in this evaluation lies in the quantification of the sensitivity of a method, as well as the continuity of its response toward a distinct phenomenon imposed on the bed, often in several distinct steps. We have translated these aspects into the following two requirements:
1. continuity of the observed trend, also taking into account its standard deviation
2. high sensitivity of the method toward a specified process change compared to other process changes

Both requirements are further explained in the following. Regarding the first requirement, the observed trend of the analysis variable should be continuous in order to obtain an unambiguous behavior of the method upon the process change. The trend can be either increasing or decreasing. For this aspect, the result (the analysis variable over time) is first subdivided into n different time blocks; over each time block the average of the analysis variables from each time window is taken. The continuity aspect could subsequently simply be applied to the mean values of each step; this, however, could lead to inappropriate rejection of valid methods, keeping in mind that a certain variation around the mean value is to be expected. Only a slight change in the mean between consecutive time blocks could then reject an otherwise good method. Therefore, we have chosen a more robust continuity criterion: a method is only rejected if one cannot construct a continuous trend through a window around the mean values; this window is defined by twice the standard deviation around the mean in that interval. This measure respects the natural occurrence of a certain variation in the data during a stable process. Figure 3 illustrates the continuity criterion with three examples.

In case the applied process change has been carried out in distinct steps, one should obviously choose the same number of time blocks. For agglomeration (or any other continuous trend) one can freely choose the number of time blocks. Here, one has to be careful to choose a "suitable" number of blocks: the smaller the number of blocks, the smaller the maximum difference one obtains; the larger the number of blocks, the more easily a trend can be rejected due to the influence of only small discontinuities. Moreover, the sensitivity also increases somewhat with increasing block size, implying that the results only serve the purpose of a relative comparison between methods. In the case of forced process changes (not agglomeration), the time block size is chosen significantly larger than the typical (hydrodynamic and reaction) time scales of the fluidized bed, to make sure the system is indeed stationary. It is also remarked that for a step change one has to make sure that the time for the system to become stationary after the process change is small compared to the time block size, in order to remove any effects on the time block average.

Regarding the second requirement, the high sensitivity of a pretreatment/analysis method combination toward a certain effect compared to other effects, it is important to note that not only a continuous trend of the analysis variable is of importance, but also its absolute sensitivity toward the different effects. The sensitivity is here defined as the difference between the minimum and maximum time block mean values of the response. Even if a method clearly follows, e.g., an imposed step change in the operating variables, its absolute sensitivity can still be low; the reaction of the method will then potentially be dominated by a different physical effect with a larger influence on its sensitivity. If a method shows a desired sensitivity toward process change A (e.g., agglomeration) as well as toward another process change B (e.g., bed mass changes), it can still be suitable if both sensitivities differ significantly. Figure 4 illustrates the sensitivity criterion. The actual quantity to measure this sensitivity is designated the "selectivity index" and is given by eq 1.

Figure 4. Illustration of the sensitivity criterion: the method is more sensitive to process change A compared to process change B, despite the same qualitative trend of the increase.

f = \frac{\Delta_{\max} z_{i,\text{change type A}}}{\Delta_{\max} z_{i,\text{change type A}} + \Delta_{\max} z_{i,\text{change type B}}}   (1)
Here, ∆max z_i denotes the difference between the highest and lowest mean values z_i of all time blocks in the response. As defined in eq 1, the selectivity index f therefore always scales between 0 and 1. This normalization is necessary to compare different analysis methods with each other. The closer the value is to 1, the more selective the method is toward effect A. The different options for how to use this selectivity index are explained in the following.

If pressure fluctuation data obtained at different distinct process changes (agglomeration, gas velocity change, bed mass change, particle size change, ...) are available, one can check the selectivity index of a method for successfully detecting this process change. The selectivity index here indicates how sensitively a method reacts to the process change to be detected (e.g., agglomeration) compared to the sum of the different other process changes (e.g., gas velocity change, bed mass change, and particle size change combined). In this case eq 1 modifies to

f = \frac{\Delta_{\max} z_{i,\text{change type A}}}{\Delta_{\max} z_{i,\text{change type A}} + \Delta_{\max} z_{i,\text{change type B}} + \Delta_{\max} z_{i,\text{change type C}} + \Delta_{\max} z_{i,\text{change type D}}}   (2)

The process change to be detected, agglomeration, is still indicated with index A; the other process changes are indicated with indices B, C, and D. In order to determine the necessary magnitude of those process changes, some knowledge of the specific process is required. In order to arrive at a suitable monitoring method with the help of this methodology, one has to translate operational requirements in an industrial environment (e.g., "the fluctuations in gas flow are within a range of 15%") into the input data. One can use available historic data with a larger range and use only part of those data in order to tailor them to the specific process. Ideally, the raw pressure fluctuation data are obtained from the same measuring location of the same setup, under the same operating conditions apart from the imposed process change, as a reference case. Historical data can be used for the analysis. However, if they are not available, it is often not feasible to obtain data of such isolated process changes, especially in industrial practice. First, the fluidized bed installation will always exhibit common process fluctuations. In a laboratory this effect in principle also occurs, but it is typically significantly smaller, as the operating conditions of such a setup can be controlled much better. Second, exposing the setup to changes in specific operating conditions could be unfeasible with respect to safety and/or process economics.
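The sketch below shows how the continuity criterion and the selectivity index could be computed from the time-block statistics. It is our own illustration in Python/NumPy; the window of "twice the standard deviation around the mean" is interpreted here as mean ± 2σ, and the default number of blocks is only an example.

```python
import numpy as np

def block_stats(z, n_blocks):
    """Mean and standard deviation of the analysis variable per time block."""
    blocks = np.array_split(np.asarray(z, dtype=float), n_blocks)
    return (np.array([b.mean() for b in blocks]),
            np.array([b.std(ddof=1) for b in blocks]))

def is_continuous(means, stds):
    """Continuity criterion (cf. Figure 3): a monotone line must fit inside the
    window mean +/- 2*std of every time block."""
    lo, hi = means - 2.0 * stds, means + 2.0 * stds
    increasing = np.all(np.maximum.accumulate(lo) <= hi)  # a non-decreasing line fits
    decreasing = np.all(np.minimum.accumulate(hi) >= lo)  # a non-increasing line fits
    return increasing or decreasing

def delta_max(z, n_blocks):
    """Sensitivity: difference between highest and lowest time-block mean."""
    means, _ = block_stats(z, n_blocks)
    return means.max() - means.min()

def selectivity_index(z_target, z_others, n_blocks=7):
    """Selectivity index f according to eq 2: sensitivity toward the target change
    relative to the summed sensitivities toward the target and all other changes."""
    d_target = delta_max(z_target, n_blocks)
    d_others = sum(delta_max(z, n_blocks) for z in z_others)
    return d_target / (d_target + d_others)
```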
Figure 5. Flow sheet of the screening methodology. Dashed boxes represent choices to be made and actions by the user. *For the selection of a different process change for another screening, the data do not have to be pretreated and evaluated again.
If it is not possible to obtain all the relevant isolated process changes in a single fluidized bed, two alternative strategies are possible. First, one can choose to combine data originating from different fluidized bed setups. With this approach, one has to keep in mind that methods which perform well in this way will not necessarily perform well under other conditions, and vice versa. Second, one can choose not to relate the relevant process change (e.g., agglomeration) to other isolated process changes, but to the variation within the common process operation. Equation 2 then simplifies to

f = \frac{\Delta_{\max} z_{i,\text{change type A}}}{\Delta_{\max} z_{i,\text{change type A}} + \Delta_{\max} z_{i,\text{normal process changes}}}   (3)
This approach therefore relates the sensitivity of a method toward effect A to the common process operation. This common process condition has to contain a representative part of the normal process changes, which will comprise several different physical effects occurring simultaneously. The choice of such data should be made based on knowledge and experience of the process. Note that, in contrast to the previous approach with data sets containing isolated process changes, in this case one does not have to make any explicit choices on the magnitude of the occurring process changes (e.g., "the gas velocity fluctuates within a relative range of 15%"). The only other requirement in this case is that the relevant process change (e.g., agglomeration) has indeed been observed and measured. Therefore, two basic approaches, which are related to different goals, can be distinguished. The first approach, using data from different isolated process changes (eq 2), will be applicable when
trying to gain more insight into which methods will be sensitive toward which specific process changes and why this is the case, including physical insight into the process. Here, it is possible not only to focus on the process change to be detected, but also to investigate whether certain methods are selectively sensitive toward specific other process changes. Those methods can then serve as "counterchecks", avoiding false alarms. The second approach can be seen as a practical engineering approach, to be used as a tool for finding a method suitable for a specific fluidized bed process. For a different process, the methodology would then probably have to be carried out again. Conversely, if the first approach is to be used for any practical purpose, one has to make a choice of which other process changes to include and also of the magnitude of each process change. This choice can only be made with relevant process knowledge and experience.

Visualization of the Results. The continuity requirement has been implemented as a first step, either accepting or rejecting the results of a signal pretreatment/analysis combination. Subsequently, the selectivity index of such a combination is evaluated, using eq 2 or 3. The results are then visualized in a matrix of all pretreatment methods (horizontally) and all analysis methods (vertically). The elements of this two-dimensional matrix are filled on a gray scale from black (0) to white (1) according to the selectivity index value. When it is difficult to easily identify the most promising pretreatment/analysis combinations, all fields with values smaller than a certain threshold are replaced by black fields. We found 0.7 to be a generally good choice for this threshold.
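A minimal plotting sketch of such a thresholded gray-scale matrix is given below. This is our own illustration, assuming Python with NumPy and Matplotlib; the 40 × 33 matrix shape follows from the numbers of analysis and pretreatment methods in the text, and the random values are only stand-ins for the computed selectivity indices.

```python
import numpy as np
import matplotlib.pyplot as plt

f_matrix = np.random.rand(40, 33)   # stand-in: one row per analysis method, one column per pretreatment

threshold = 0.7
shown = np.where(f_matrix >= threshold, f_matrix, 0.0)   # fields below the threshold become black

plt.imshow(shown, cmap="gray", vmin=0.0, vmax=1.0, aspect="auto")
plt.xlabel("pretreatment method index")
plt.ylabel("analysis method index")
plt.colorbar(label="selectivity index f")
plt.show()
```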
Figure 6. Matrix with values of f. (Only values of f > 0.7 are shown.) Vertical lines are to better indicate the different groups of pretreatment methods. HP = high pass, LP = low pass, BP = band-pass for frequency filtering; D = detail level, A = approximation level for wavelet filtering; see the Appendix for a complete list of the analysis and pretreatment methods.
Please note that the function of this threshold value is only to gain a better overview of the best methods in the results matrix. The threshold value could vary, depending on the system under consideration. The visualization in matrix form has been chosen in order to identify suitable methods. For each chosen method one can subsequently visually confirm the suitability of the pretreatment/analysis combination with the help of the response toward the different data sets. The last step of this approach consists of a final visual confirmation of how each method reacts to the individual process changes, checking the trend of a method for each provided data set. Finally, all the steps of the presented screening methodology are summarized in the more detailed flow sheet presented in Figure 5.

Illustrating Results

The methodology presented in this paper is illustrated with two examples; here, we restrict the examples to the selectivity index as defined in eq 2. Several different case studies have been investigated and will be presented in a different publication. The data for this example are taken from two different fluidized beds. Agglomeration data have been taken from a laboratory-scale hot fluidized bed in which the pyrolysis of straw resulted in agglomeration and subsequent defluidization.20 The other effects under consideration have been measured in a pilot-scale fluidized bed under cold flow conditions. The following effects have been imposed on the bed in several distinct steps:

• increase in total bed mass (seven steps, total increase 27%, starting from 550 kg)
• increase in fluidizing gas velocity (seven steps, total increase 62%, starting from 0.21 m/s)
• increase in particle size (four steps, replacement of fine sand (d50 = 532 µm) with coarse sand (d50 = 1280 µm), resulting in a bimodal size distribution; total replacement of 36%, corresponding to a total increase in d50 particle size of 51%)

The results matrix of the selectivity index f (eq 2) for each combination of pretreatment method and analysis method is presented in Figure 6. Each field in the matrix refers to values of f ranging from 0 to 1, corresponding to the range from black to white.
From this matrix one can then easily spot which methods are most selective for agglomeration. Moreover, one can see that "bands" of methods emerge within the different pretreatment groups. From this, a general impression of the suitability of a certain pretreatment method is obtained. In a second step, it is investigated how a specific combination of pretreatment method and analysis method performs, by inspecting the response toward all four provided process changes.

The first example ("1" in Figure 6) is a rather light field in the matrix and comprises a low-pass filter with a cutoff at 5 Hz in combination with the average cycle time. The response toward all four imposed changes is shown in Figure 7. The average cycle time does indeed correctly indicate the agglomeration process with an upward trend until defluidization. On the other hand, it shows only a very small sensitivity for changes in bed mass, gas velocity, and particle size. Clearly, this method is selectively sensitive for agglomeration in this case. There are many dark fields in the matrix; one example is the standard deviation based on the raw data without pretreatment ("2" in Figure 6). The response of this method is shown in Figure 8. In this case, the trend is rather ambiguous, with increasing and decreasing sections during the agglomeration process. Moreover, the cross-sensitivity, i.e., the response toward the other effects, is relatively large (not shown here). Therefore, this method would not be suitable here, confirmed by a dark field in the matrix.

Final Remarks. We have presented this methodology for the detection of agglomeration in fluidized beds. In this case, the systematic search for a monitoring method based on pressure fluctuations is primarily motivated by the fact that agglomeration is hard to quantify. In principle, agglomeration could be quantified by, e.g., the average particle size in the bed, but this is very hard to determine continuously during operation. It is emphasized, however, that the methodology itself is generic and in principle applicable to any multiphase reactor process and signal type. Other potentially important applications include, e.g., the monitoring of changes in the particle size in fluidized beds, preventing flooding in trickle beds, and avoiding excessive foaming in bubble columns. For such cases one has to redefine the relevant phenomenon to which the methods should be sensitive, as defined in the numerator of the selectivity index f. One also potentially has to use a different measurement technique for obtaining the data from the process.
Figure 7. Example 1: response of average cycle time toward different operational changes, based on 5 Hz low-pass filtered data.
Another aspect to consider for agglomeration detection is that exclusive sensitivity toward agglomeration is not the only desirable property. A method that is selectively sensitive toward one or more effects other than agglomeration can also serve as a valuable tool to check whether a detected event is indeed to be linked to agglomeration ("countercheck").

The aspect of scale-up has not been treated in this paper. The methods found to be suitable on a small scale will not necessarily be suitable on a large scale as well, mainly due to the nonlinear behavior of scale-up. However, the presented methodology itself is equally suitable when applied to large-scale systems. The main difference is that, in the case of applying step changes, one would have to pay attention to how quickly an analysis variable becomes stationary.

Furthermore, the presented methodology does not directly incorporate an investigation of the parametrization of each detection and pretreatment method (if applicable). This feature could, however, easily be implemented. The results matrix can conveniently be used for this purpose by replacing the different pretreatment methods with a varying parametrization for each method. In this way, one can visualize how the sensitivity and selectivity of a method change with its parametrization in order to determine the optimal parametrization.

Finally, to come to a ready-to-use monitoring method, one has to consider the lead time of any given method investigated. This lead time can be extracted from the calculated responses of each combination of detection and pretreatment method. In addition, one has to define a confidence threshold for reliable detection (alarm level). This, however, is beyond the scope of this work and requires a separate study.

Figure 8. Example 2: response of the standard deviation for the agglomeration process, based on raw data. A logarithmic scale is used to emphasize the fluctuating trend.

Conclusions
A new methodology is presented for the efficient screening of a large number of signal analysis methods in order to find those methods that are sensitive as well as selective for specific process changes in multiphase reactors. The detection of agglomeration in a fluidized bed served as an example application. The results from all investigated methods, in combination with different pretreatment methods, are expressed in terms of a relative selectivity index f that is a measure for the sensitivity and selectivity of such a combination to detect a specific process change. The selectivity indices f of all different combinations are collected and visualized in a matrix, from which suitable methods visually emerge. As a last step, one should inspect the actual temporal response of the method for the cases with high selectivity indices to ensure that the temporal response is indeed suitable for online monitoring. Using pressure fluctuation data from fluidized beds exhibiting different isolated process changes (agglomeration, gas velocity, bed mass, and particle size), the methodology has been illustrated and has been shown to be an efficient tool. The presented case will be extended with more case studies in a different publication, analyzing data ranging from laboratory- to industrial-scale installations.
Appendix

Table 1. Overview of All Applied Pretreatment Methods

index | method | parameter
1 | no pretreatment | -
2-7 | frequency filtering, high pass (Butterworth 6th-order filter) | cutoff frequencies: 5, 10, 15, 20, 30, 50 Hz
8-13 | frequency filtering, low pass (Butterworth 6th-order filter) | cutoff frequencies: 5, 10, 15, 20, 30, 50 Hz
14-16 | frequency filtering, band-pass (Butterworth 6th-order filter) | lower/upper cutoff frequencies: 5/10, 5/30, 15/30
17-27 | wavelet decomposition with Daubechies 5 wavelet | detail levels 1-10, approximation level 10
28-33 | principal component (PC) decomposition filtering: projection of the data onto a new axis system, dimensionality = 20 | axis systems for the reprojection defined by: each individual axis system; first block only; PC 1-10, PC 10-20, PC 1-5, PC 15-20
Table 2. Overview of All Applied Analysis Methods and Brief Outline of Each Method

index | method
1 | Kolmogorov-Smirnov (KS) test H0 rejection/acceptance (0/1), based on distribution of overall data
2 | KS test cumulative distribution function (CDF) distance
3 | Kuiper test H0 rejection/acceptance (0/1)
4 | Kuiper test CDF distance
5 | KS test H0 rejection/acceptance (0/1), based on mean crossings (MC) of pressure fluctuations with zero
6 | KS test CDF distance, based on MC
7 | Kuiper test H0 rejection/acceptance (0/1), based on MC
8 | Kuiper test CDF distance, based on MC
9 | Hurst exponent (HE) at small windows: rescaled range analysis
10 | HE at medium windows: rescaled range analysis
11 | HE at large windows: rescaled range analysis
12 | HE at 1000 point distances: diffusional analysis
13 | HE at 2500 point distances: diffusional analysis
14 | HE at 4999 point distances: diffusional analysis
15 | mean
16 | standard deviation
17 | skewness
18 | kurtosis
19 | autocorrelation 63% decay time with 0.1 min maximum lag
20 | autocorrelation 63% decay time with 0.01 min maximum lag
21 | autocorrelation 37% decay time with 0.1 min maximum lag
22 | autocorrelation 37% decay time with 0.01 min maximum lag
23 | PCA principal component 1 variance contribution fraction
24 | PCA principal component 2 variance contribution fraction
25 | PCA principal component 5 variance contribution fraction
26 | PCA principal component 10 variance contribution fraction
27 | PCA principal component 20 variance contribution fraction
28 | power spectral density (PSD) power at 2 Hz
29 | PSD power at 25 Hz
30 | PSD power at 60 Hz
31 | attractor comparison (S-statistic)
32 | correlation dimension: maximum likelihood
33 | correlation dimension: best fit
34 | Kolmogorov-Sinai entropy, bits/s
35 | Kolmogorov-Sinai entropy, bits/cycle
36 | average cycle time (ACT)
37 | average absolute deviation (AAD)
38 | W-statistic, thresholding up to detail level 1
39 | W-statistic, thresholding up to detail level 5
40 | W-statistic, thresholding up to detail level 10
Table 1 gives an overview of all the applied pretreatment methods, and Table 2 gives an overview of all the applied analysis methods as well as a brief outline of each method. The following numbers refer to the indices in Tables 1 and 2.

1, 2, 5, 6: The Kolmogorov-Smirnov (KS) test (e.g., ref 21) compares the similarity of two probability distributions by the maximum distance between their cumulative distribution functions (CDFs). On the one hand, the actual distance is calculated and monitored (methods 2 and 6); on the other hand, a null hypothesis of both CDFs being similar is either rejected or accepted (0/1) based on a 95% confidence interval (methods 1 and 5). The KS test is applied to the distribution of the pressure fluctuation data (methods 1 and 2) as well as to the distribution of the lengths between consecutive crossings of the pressure fluctuation signal with zero (MC, methods 5 and 6).
3, 4, 7, 8: The Kuiper test (e.g., ref 21) is analogous to the KS test, but it uses the sum of the maximum distances on both sides of the distribution for the probability calculation.

9-11: The rescaled range (R/S) analysis (e.g., ref 22) is a measure for the self-similarity of a data set, as expressed by the Hurst exponent.

12-14: Diffusional analysis (e.g., ref 23) monitors two different Hurst exponents over time, one related to long-term effects and one related to short-term effects.

15-18: The first four probability density function (PDF) moments describe different properties of the distribution of the data: the mean, the standard deviation, the skewness (a measure for the asymmetry of a distribution), and the kurtosis (a measure for the peakedness of a distribution).

19-22: The autocorrelation (e.g., ref 24) of a signal is the cross-correlation of the signal with a time-shifted version of itself. The specific decay times for the correlation coefficients to drop to 37% (1/e) and 63% (1 - 1/e) of the autocorrelation, for maximum lag times of 0.1 and 0.01 min, are monitored over time.

23-27: Principal component analysis (PCA) (e.g., ref 25) describes the variation in a set of multivariate data in terms of a new set of uncorrelated variables. The data are subsequently reprojected onto the subspace defined by these uncorrelated variables. Here, the contribution of a specific component (1, 2, 5, 10, or 20) to the overall variability of all components of the data, i.e., the percentage of the total variability explained by that component, is calculated.

28-30: Within time-frequency analysis, the power in a power spectrum (obtained by Fourier transformation) at different frequencies is monitored.

31: In the attractor comparison method,16 the data are projected into a multidimensional state space, yielding an attractor. Subsequently, this attractor is compared to a reference attractor, as obtained from a reference condition, using a statistical test26 which assesses the dimensionless distance S between both attractors.

32, 33: The correlation dimension signifies the integral dimension of an object. It is therefore a measure for the complexity of the attractor. (For a general definition, see, e.g., ref 27; calculations are carried out as in ref 28.)

34, 35: The Kolmogorov-Sinai (KS) entropy is a measure for the predictability of an attractor, expressed in bits/s (34). Alternatively, it can be divided by the average cycle time and is then expressed in bits/cycle (35). (For a general definition, see, e.g., ref 27; calculations are carried out as in ref 29.)

36: The average cycle time (ACT) is the average time for three subsequent crossings of the time series with its mean value.

37: The average absolute deviation (AAD) is the average of the absolute deviations from the mean value.

38-40: The W-statistic17 calculates the so-called small pressure fluctuations component (obtained by subtracting a wavelet-smoothed signal from the raw signal) in relation to the original signal. The signal is first decomposed up to a certain level (1, 5, and 10 here), after which the smallest coefficients in the detail coefficient vectors (smallest 60% here) are set to zero and the smoothed version of the signal is reconstructed. Subsequently, the smoothed signal is subtracted from the original signal.
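As an illustration of two of the simpler measures (methods 36 and 37), a minimal sketch in Python/NumPy is given below. This is our own illustration; the cycle definition simply follows the description above, i.e., three subsequent mean crossings span one full cycle.

```python
import numpy as np

def average_cycle_time(x, fs):
    """Average cycle time (method 36): mean time spanned by three subsequent
    crossings of the signal with its mean value, i.e., one full cycle."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    sign = np.signbit(x).astype(np.int8)
    crossings = np.nonzero(np.diff(sign) != 0)[0]     # indices where the mean is crossed
    if crossings.size < 3:
        return np.nan                                 # not enough crossings for one full cycle
    cycles = (crossings[2:] - crossings[:-2]) / fs    # duration of each full cycle in seconds
    return cycles.mean()

def average_absolute_deviation(x):
    """Average absolute deviation (method 37): mean absolute deviation from the mean."""
    x = np.asarray(x, dtype=float)
    return np.mean(np.abs(x - np.mean(x)))
```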
Literature Cited

(1) Breijer, A. A. J.; Nijenhuis, J.; van Ommen, J. R. Chem. Eng. J. 2008, 138, 333–340.
(2) Villa, J.; van Ommen, J. R.; van den Bleek, C. M. AIChE J. 2003, 49, 2442–2444.
(3) Ruthiya, K. C.; Chilekar, V. P.; Warnier, M. J. F.; van der Schaaf, J.; Kuster, B. F. M.; Schouten, J. C.; van Ommen, J. R. AIChE J. 2005, 51, 1951–1965.
(4) Öhman, M.; Nordin, A.; Skrifvars, B.-J.; Backman, R.; Hupa, M. Energy Fuels 2000, 14, 169–178.
(5) Kunii, D.; Levenspiel, O. Fluidization Engineering; Butterworth-Heinemann Ltd.: Woburn, MA, 1991.
(6) Chaplin, G.; Pugsley, T.; Winters, C. Powder Technol. 2005, 149, 148–156.
(7) Visser, H. J. M. ECN Report (Energy Research Centre of the Netherlands); ECN-C-04-054; 2004.
(8) Nijenhuis, J.; Korbee, R.; Lensselink, J.; Kiel, J. H. A.; van Ommen, J. R. Chem. Eng. Sci. 2007, 62, 644–654.
(9) Werther, J. Powder Technol. 1999, 102, 15–36.
(10) Bartels, M.; Lin, W.; Nijenhuis, J.; Kapteijn, F.; van Ommen, J. R. Prog. Energy Combust. Sci. 2008, 34, 633–666.
(11) Rehmat, A. G.; Patel, J. G. (Inst. Gas Technology (IGTE)). Controlling and maintaining fluidised beds under non-steady state conditions in ash agglomerating fluidised beds. U.S. Patent 4,544,375A, 1985.
(12) Davies, C. E.; Fenton, K. IPENZ Trans. 1997, 24 (EMCh).
(13) Chirone, R.; Miccio, F.; Scala, F. Chem. Eng. J. 2006, 123, 71–80.
(14) Fuller, T. A.; Flynn, T. J.; Daw, C. S.; Halow, J. S. Proceedings of the 12th International FBC Conference; Rubow, L. N., Ed.; 1993; Vol. 1, pp 141-155.
(15) van Ommen, J. R.; Schouten, J. C.; van den Bleek, C. M. An Early-Warning-Method for Detecting Bed Agglomeration in Fluidized Bed Combustors. Proceedings of the 15th International Conference on Fluidized Bed Combustion; Reuther, R. B., Ed.; ASME: New York, 1999; Paper No. FBC99-0150.
(16) van Ommen, J. R.; Coppens, M.-O.; van den Bleek, C. M.; Schouten, J. C. AIChE J. 2000, 46, 2183–2197.
(17) Briens, C.; McDougall, S.; Chan, E. Powder Technol. 2003, 138, 160–168.
(18) van der Stappen, M. L. M. Chaotic Hydrodynamics of Fluidized Beds. Thesis; Delft University Press: Delft, The Netherlands, 1996.
(19) Johnsson, F.; Zijerveld, R. C.; Schouten, J. C.; van den Bleek, C. M.; Leckner, B. Int. J. Multiphase Flow 2000, 26, 663–715.
(20) van Ommen, J. R.; Schouten, J. C.; Coppens, M.-O.; Lin, W.; Dam-Johansen, K.; van den Bleek, C. M. Proceedings of the 16th International Conference on Fluidized Bed Combustion; 2001; pp 1146-1159.
(21) Press, W. H.; Teukolsky, S. A.; Vetterling, W. T.; Flannery, B. P. Numerical Recipes in C: The Art of Scientific Computing, 2nd ed.; Cambridge University Press: Cambridge, U.K., 1992.
(22) Zhao, G.-B.; Yang, Y.-R. AIChE J. 2003, 49, 869–882.
(23) Giona, M.; Paglianti, A.; Soldati, A. Fractals 1994, 4, 503–520.
(24) Carlson, G. E. Signal and Linear System Analysis; John Wiley & Sons, Inc.: New York, 1998.
(25) Everitt, B.; Dunn, G. Applied Multivariate Data Analysis; Hodder Arnold: London, 2001.
(26) Diks, C.; van Zwet, W. R.; Takens, F.; DeGoede, J. Phys. Rev. E 1996, 53, 2169–2176.
(27) Kantz, H.; Schreiber, T. Nonlinear Time Series Analysis; Cambridge University Press: Cambridge, U.K., 2000.
(28) Schouten, J. C.; Takens, F.; van den Bleek, C. M. Phys. Rev. E 1994, 50, 1851–1861.
(29) Schouten, J. C.; Takens, F.; van den Bleek, C. M. Phys. Rev. E 1994, 49, 126–129.
Received for review August 19, 2008. Revised manuscript received December 18, 2008. Accepted December 31, 2008.

IE8012105