
From Laboratory to Field Tomography: Data Collection and Performance Assessment

Zeljko Kuzeljevic,*,† Milorad Dudukovic,† and Hugh Stitt‡

† Chemical Reaction Engineering Laboratory (CREL), Department of Energy, Environmental and Chemical Engineering (EECE), Campus Box 1180, One Brookings Drive, Washington University in St. Louis (WUSTL), St. Louis, Missouri 63130, United States
‡ Johnson Matthey Technology Centre, PO Box 1, Belasis Avenue, Billingham, Cleveland TS23 1LB, United Kingdom

ABSTRACT: In field tomography, the total number of scan lines collected during imaging is usually limited. At the same time, the demands on the technique regarding resolution are low, corresponding to diagnostic purposes (e.g., presence or absence of maldistribution). In this study, we investigate the level of information needed for the purpose of field diagnostics when using a fan beam tomography scanner. We utilize the theoretical framework of Fisher information content to examine the dependence between the projection data collection methodology and image quality. The findings are substantiated by experimental results obtained on a suitable phantom designed to mimic maldistribution in a packed bed. The performance of the expectation-maximization and alternating-minimization image reconstruction algorithms is compared for varying numbers of scan lines.

■ INTRODUCTION

Tomographic measurements enable a better understanding of the hydrodynamics of a given two-phase or multiphase system and allow visualization of phase distributions in such a system. Transmission tomography uses measured signals emitted from a point source and attenuated through a domain to computationally determine a sectional image of the phase distribution in the system. Particularly appealing is that these techniques are not hindered by the opaque character of the process equipment and, in effect, are able to provide "an eye" into an opaque flow.

Tomographic techniques are now widely used as laboratory investigative tools for the study of multiphase flow reactors. They enable high-quality imaging of the flow and phase patterns at high spatial and/or temporal resolution and help to improve existing theoretical insights, as well as lead to new ones.1–6 A natural extension of these studies is the use of noninvasive imaging techniques in industrial systems. Reactor performance is critically dependent on mixing (e.g., stirred tanks and other backmixed systems) or the lack of longitudinal mixing (plug-flow reactors). In general, mixing problems are exacerbated at larger (production) scale, and this affects production rates and selectivity. Therefore, there is interest in utilizing noninvasive imaging techniques for the diagnosis of operating units on manufacturing sites. This places demands on the technique regarding portability and resolution that are different to those encountered in research.7

Tomographic techniques can be broadly classified, based on the type of signal used, as electrical- and radiation-based methods. The electrical methods include electrical resistance (ERT), electrical impedance (EIT), and electrical capacitance tomography (ECT). Electrical tomography techniques have recently gained prominence because they offer high temporal resolution (at the cost of spatial resolution) and have been applied to industrial-scale vessels.8 However, these techniques have limitations in the field, because they require sensors to be in direct contact with the process fluid.

Sensors associated with X-ray and γ-ray tomography methods do not require contact with the flow in any form and therefore present an attractive choice. The radiation source required for X-ray tomography needs electrical power to operate and usually emits photons of low energy (a soft source), which is not favorable when dealing with the large cross sections of field vessels and the metal components associated with them. High-energy X-ray sources used in pilot-scale laboratory applications9 might not be suitable for field applications, because of portability issues. Gamma-ray sources, the key component of γ-ray tomography, prevail in field and industrial applications: the source produces high-energy photons that can penetrate large cross sections, and it does not require electrical power for operation.

The discussion above is summarized in Table 1, where the basic requirements of laboratory and field tomography are compared. As shown, the requirements of portability and applicability to vessels with thick metal walls restrict the choice of modality to γ-ray computed tomography (CT), which is suitable for opaque, high-attenuation media, large cross sections, and steady-state flow conditions. The use of this technique in the laboratory setting is well-established.10–14 Tomography and velocimetry have been widely applied to the study of multiphase reactor systems. There are numerous reports on using techniques such as γ-ray fan beam tomography,15 γ-ray densitometry,16 electrical capacitance tomography,17 and radioactive particle tracking18 for slurry bubble columns, as well as γ-ray tomography of a monolith reactor19 and trickle beds,20 applied to vessels up to 0.6 m in diameter in controlled, laboratory-type, or cold-flow pilot-scale environments.

Received: August 21, 2010
Revised: June 22, 2011
Accepted: June 30, 2011
Published: August 2, 2011


Table 1. Laboratory and Field Tomography Priorities

                            Laboratory Priorities (Resolution)    Field Tomography Priorities
tomography technique        spatial       temporal                transportable?    applicable to thick metal wall vessels?
γ-ray                       good          some                    yes               yes
positron emission           good          moderate                no                yes
X-ray                       good          some                    moderate          moderate
electrical capacitance      moderate      excellent               yes               no
optical                     good          good                    yes               no
MRI                         good          good                    no                no

Typically, when performing tomographic studies in the field, the conditions are considerably different to those encountered in the laboratory research environment. Access to the vessel is frequently restricted; the tomography setup must be adjusted to accommodate the column's external piping and instrumentation; and studies must be carried out at significant heights on a large-diameter column containing high-density (high-attenuation) media.21 Furthermore, the key driver in field tomography is commonly diagnostic in nature; the need is to obtain data with sufficient resolution to establish any severe mal-operation at minimum cost. Hence, there is a need to know what degree of resolution is required and how many measurements are required to achieve an adequate tomogram.

There is little information in the process literature on the reliability of tomographic measurements at low measurement densities. Murphy and York22 presented an interesting discussion of this for the more-complex situation of soft-field techniques, where data density is inherently a function of radial location. Attempts have also been reported on how to cope with conditions of "sparse" or incomplete data, largely using sinogram techniques to extrapolate into the data-sparse regions.23,24 In the present context, the latter does not seem a satisfactory approach: a key consideration is whether, by misfortune, the particular feature to be diagnosed lies within the data-sparse region, and model-based extrapolation of data external to that region may inherently not be successful.

Darwood et al.21 reported qualitative findings from an empirical study with γ-ray tomography, using a relatively simple least-squares-based reconstruction technique, on the minimum requirements to observe certain features represented by phantoms. Their study also demonstrates the benefits of a fan beam approach over a parallel path method. The described preferred approach is to position the source of γ-radiation on one side of the vessel and a set of radiation sensors horizontally opposite in a fan arrangement. Source and sensors are moved around the vessel, and the intensity of radiation transmitted through the vessel is recorded by the sensors for each source location. In the study of Darwood et al.,21 the sensors were not collimated, and peak counting was shown to have a negligible effect on the quality of the tomogram. The key difference between the setup described by Darwood et al.21 and the various tomographic systems described for laboratory-scale work is in the number of line density measurements taken. Taking Roy et al.12 as representative of a

Figure 1. Source–detector arrangement in the γ-ray computed tomography (CT) unit (not drawn to scale).

laboratory system, the total number of scans completed for a planar tomogram is ∼17 000. The study of Darwood et al.21 was performed on a 6.2-m-diameter distillation column with 6 line density measurements from each of 32 detector positions (a total of 192 scan lines). While that study has clearly been invaluable in developing a methodology for the commercial tomography product, there is a clear need and opportunity for a more-detailed and fundamental study to better understand the relationship between measurement density and available information, and the impact of the reconstruction algorithm.

In this study, we use a research γ-ray tomography system to better understand field diagnostics systems. A large set of γ-photon transmission (projection) data through a known phantom is gathered using the research tomography system. This set is processed into small segments that represent the typical amount of projection data gathered by a field tomography system. The aim is to assess what level of information is required for the purpose of field diagnostics and which methodology of data collection would help minimize the needed projection data.

The paper is organized as follows. We first describe the basic process of projection data collection for our research γ-ray tomography unit and for typical field use. Next, we review the employed image reconstruction procedures and projection data processing, followed by the details of the theoretical methodology for the assessment of scanner performance. Lastly, the results of the image reconstruction are presented, and it is demonstrated how the theoretical framework can be used to guide the projection data collection methodology in a field scanner.

■ BACKGROUND

Laboratory γ-ray CT Scanner. The principal schematic of the γ-ray CT source–detector arrangement used in this work is given in Figure 1.6,25,26 The imaged object is positioned between the source and an array of seven NaI (2-in.-diameter) detectors.


A 100 mCi ¹³⁷Cs point γ-ray source provides a 40° fan beam in the horizontal plane. In order to obtain line (projection) attenuation measurements, the detectors are shielded and equipped with 2 mm × 2 mm collimators. During the experiment, the source and detectors take different positions, relative to each other and to the column, and at each of these positions, the attenuation of the incident radiation is recorded. Within one fixed source position (view), any given detector takes 25 different positions (projections) in increments of 0.2°.25 Hence, there are 175 projections (7 detectors × 25 positions) per view. The source makes a full circle around the scanned object in increments of 3.6°; i.e., there are 100 views in one scan. Thus, the total number of projections in a full scan is 17 500.

Reconstruction Algorithms. In tomography applications where γ-photon counts (signal strength) are high, ignoring the stochastic nature of γ-photon transmission may not be a serious limitation in determining the choice of the image reconstruction algorithm. The Poisson counting noise is only one component of the intrinsic noise of the system.27 However, at low signal strengths, reconstruction methods that model the stochastic nature of γ-photon transmission provide more-accurate results. The expectation-maximization (EM)27,28 and alternating-minimization (AM)29 algorithms belong to this category. They are expected to yield improved reconstruction quality, compared to Fourier techniques30,31 and iterative algebraic methods, owing to features such as the incorporation of nonnegativity constraints and an objective measure of the quality of the reconstruction.27 Other stochastic algorithms have been proposed and reported in the literature,32–37 but, for the sake of brevity, are not discussed here.

EM Algorithm. The EM algorithm is a general procedure for maximum-likelihood (ML) estimation and is most commonly employed in incomplete data problems.38 The formulation of the algorithm is due to Dempster et al.,28 while its specific application to transmission tomography was given by Lange and Carson.27 In this section, we briefly summarize the main points of the EM algorithm and of the Lange and Carson27 derivation.

Suppose that p(Y,μ) is a known likelihood (density) function of the measured data (Y) parametrized by the vector μ. The objective is to estimate the elements of μ. The standard ML method would involve the maximization of ln p(Y,μ) with respect to the vector of parameters (μ). However, in general, this maximization is not necessarily tractable. The EM algorithm circumvents this problem by postulating a complete dataset (X) and its conditional log-likelihood function given the measured (incomplete) data, ln f(X,μ|Y), to make the maximization problem more tractable. The ML estimate of the vector of parameters is then achieved by an iterative procedure consisting of two steps: expectation (E) and maximization (M). In the E-step (eq 1), the expectation of the complete dataset, given the measured data and the current estimate of the parameter vector μ, is calculated:

$$E\{X \mid Y, \mu^{(n)}\} \qquad (1)$$

In the M-step, the complete-data log-likelihood is maximized with respect to μ, thus providing a new estimate (μ^(n+1)) of the vector of parameters. The outcome of the iterative procedure is the vector μ corresponding to the maximum value of the complete-data log-likelihood. At the same time, such a vector maximizes the incomplete-data log-likelihood28 and thus, indirectly, achieves the objective of the standard ML procedure.
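Schematically, the iteration can be expressed as a short loop. In the sketch below, e_step and m_step are hypothetical placeholders for the problem-specific computations developed next (eqs 4 and 6):

```python
def em_iterate(mu0, e_step, m_step, n_iter=100):
    """Generic EM iteration: alternate the E-step (eq 1) with the M-step.

    e_step(mu) should return the conditional expectation of the complete
    data given the measurements and the current parameter estimate;
    m_step(X) should return the parameters maximizing the complete-data
    log-likelihood. Both are placeholders for problem-specific steps.
    """
    mu = mu0
    for _ in range(n_iter):
        X = e_step(mu)   # E-step: E{X | Y, mu^(n)}
        mu = m_step(X)   # M-step: argmax over mu of the complete-data log-likelihood
    return mu
```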

In the Lange and Carson27 implementation for transmission tomography, the likelihood function of the incomplete dataset (i.e., the number of photons registered by the detectors) follows the Poisson distribution

$$p(Y_i, \mu) = \prod_i \frac{[E\{Y_i\}]^{Y_i}}{Y_i!} \exp(-E\{Y_i\}) \qquad (2)$$

with the expected value given by the Beer–Lambert law,

$$E\{Y_i\} = \lambda_i \exp\Bigl(-\sum_{j \in I_i} l_{ij} \mu_j\Bigr) \qquad (3)$$
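As an illustration of eqs 2 and 3, the following minimal numpy sketch (assuming a precomputed chord-length matrix L with entries l_ij; the array names are illustrative) generates synthetic projection counts under this model:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def simulate_counts(L, mu, lam):
    """Draw Poisson projection counts under the Beer-Lambert model.

    L   : (M, N) array of chord lengths l_ij of projection i through pixel j
    mu  : (N,) attenuation coefficients (1/cm)
    lam : (M,) mean photon counts emitted along each projection (lambda_i)
    """
    expected = lam * np.exp(-(L @ mu))   # E{Y_i}, eq 3
    return rng.poisson(expected)         # Poisson counting statistics, eq 2
```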

In eqs 2 and 3, i and j are the projection and pixel indexes, respectively, Y_i is the random vector of the number of photons registered at the detectors, λ_i is the mean number of photons emitted by the source, E{·} designates the expectation operator, and I_i is the set of pixels contributing to projection i. The ML estimation problem, i.e., the calculation of the vector of attenuation coefficients (μ), is then performed by introducing the number of photons leaving each pixel (X_ij) as the complete dataset. Lange and Carson27 showed that, for such a choice of the complete dataset, the E-step (eq 1) is given by

$$E\{X_{ij} \mid Y_i\} = d_i + E\{X_{ij}\} - E\{Y_i\} \qquad (4)$$

where d_i designates the actual (measured) value of the random vector Y_i. The number of photons leaving each pixel (i.e., the complete dataset) depends only on the number of photons that enter the pixel and the probability of photon transmission given by the Beer–Lambert law. Hence, the complete-data likelihood follows a binomial distribution and is given by eq 5:

$$\ln f(X,\mu) = \sum_i \sum_{j \in I_i} \{-X_{i,j+1} l_{ij} \mu_j + (X_{i,j} - X_{i,j+1}) \ln(1 - \exp[-l_{ij}\mu_j])\} + R_j \qquad (5)$$

where R_j represents all the terms not dependent on μ_j. The maximization yields the following expression for the update function:

$$A_j (\mu_j^{(n+1)})^2 + B_j \mu_j^{(n+1)} + C_j = 0 \qquad (6)$$

$$A_j = \sum_{i \in J_j} (X_{ij} - X_{i,j+1}) \frac{l_{ij}^2}{12}, \qquad B_j = -\sum_{i \in J_j} (X_{ij} + X_{i,j+1}) \frac{l_{ij}}{2}, \qquad C_j = \sum_{i \in J_j} (X_{ij} - X_{i,j+1})$$

where i ∈ J_j is the set of projections to which pixel j contributes. Note that the derivation of eq 6 from eq 5 involves approximating the exponential terms by a Taylor series expansion. This expansion is more accurate for domains with small values of the attenuation coefficient of the material (μ).27,31

AM Algorithm. The AM algorithm29 is an iterative procedure that utilizes the concept of I-divergence36 or relative entropy37 to obtain a ML estimate. [I-divergence is a measure of entropy, or discrepancy, between two stochastic entities.] As in the case of the EM algorithm described above, the Beer–Lambert law is used to model the transmission of photons through the domain:

$$q_i^{(n)} = \lambda_i \exp\Bigl(-\sum_{j \in I_i} l_{ij} \mu_j^{(n)}\Bigr) \qquad (7)$$


Table 2. Theoretical Values of the Attenuation Coefficient for the Regions in the Phantom Object (Figure 2)

region of phantom    medium    μ_theor (cm⁻¹)
A                    water     0.086
B                    air       0.0001
C                    water     0.086


Figure 2. Phantom object: Gray (A and C) regions are filled with water; the white (B) region is empty. A 1-in.-diameter tube is positioned 0.2 in. off-center in the y-direction. Table 2 gives the theoretical values of the attenuation coefficients for regions A, B, and C.

The relative entropy between the model and the measured signal is minimized when the model accurately represents the transmission of the γ-ray photons (measured signal) through the domain. In the AM algorithm, I-divergence is used to assess the discrepancy between the model vector (q) and the observation vector (d); for iteration n, it is given by eq 8:

$$I(d \| q^{(n)}) = \sum_i \Bigl\{ d_i \ln\frac{d_i}{q_i^{(n)}} - (d_i - q_i^{(n)}) \Bigr\} \qquad (8)$$

The objective of the reconstruction is to find μ such that it minimizes the value of the I-divergence. The minimization of eq 8 yields the following update function for μ:29

$$\mu_j^{(n+1)} = \mu_j^{(n)} - \frac{1}{Z_j} \ln \frac{\sum_i l_{ij} d_i}{\sum_i l_{ij} q_i^{(n)}} \qquad (9)$$

where Z_j is a scaling factor chosen such that eq 10 is satisfied for each projection i:39

$$\sum_{j \in I_i} \frac{l_{ij}}{Z_j} \le 1 \qquad (10)$$

The numerator and denominator within the logarithmic term of the update function in eq 9 represent the back projections of the photon counts measured by the sensors and of the model vector q based on the current estimate μ^(n), respectively. In the AM algorithm, the derivation of the update function does not involve the Taylor series approximation introduced in the EM algorithm.27 The effects of that approximation are more pronounced when highly attenuating materials (metal internals) are present in the domain, which can lead to beam hardening and other image artifacts. The I-divergence function is nonnegative and has a unique minimum at zero, whereas the upper bound of the maximum-likelihood function used in the EM algorithm is not known a priori.
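To make eqs 7, 9, and 10 concrete, a minimal numpy sketch of the AM iteration follows. Taking a single scaling factor Z equal to the maximum ray sum is one simple way to satisfy eq 10; it is an assumption here, not necessarily the choice made in the original implementations:

```python
import numpy as np

def am_reconstruct(L, d, lam, n_iter=50):
    """Sketch of the AM iteration of eqs 7, 9, and 10.

    L   : (M, N) chord-length matrix (l_ij)
    d   : (M,) measured photon counts
    lam : (M,) mean source photon counts per projection (lambda_i)
    """
    mu = np.zeros(L.shape[1])
    # Z_j must satisfy sum_{j in I_i} l_ij / Z_j <= 1 for every projection i
    # (eq 10); setting every Z_j equal to the maximum ray sum is sufficient.
    Z = L.sum(axis=1).max()
    back_d = L.T @ d                           # backprojection of the measurements
    for _ in range(n_iter):
        q = lam * np.exp(-(L @ mu))            # model counts, eq 7
        back_q = L.T @ q                       # backprojection of the model
        mu = mu - np.log(back_d / back_q) / Z  # update, eq 9
        mu = np.maximum(mu, 0.0)               # guard against negative attenuation
    return mu
```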

Table 3. Cases Considered in the Image Reconstruction^a

total number of scan lines          number of views    number of projections
considered, δ (% of full scan)                         per view, τ
5                                   5                  175
5                                   10                 91
5                                   14                 63
5                                   18                 49
10                                  10                 175
10                                  20                 91
10                                  28                 63
10                                  36                 49
25                                  25                 175
25                                  48                 91
25                                  69                 63
25                                  89                 49
50                                  50                 175
50                                  96                 91

^a For pixel sizes (ξ) of 1.70, 1.90, 2.18, 2.54, 3.05, 3.81, 5.08, and 7.62 mm.

■ DATA COLLECTION AND PROCESSING

Experiments were performed to obtain the full scan dataset (17 500 projections, using 100 views and 175 projections per view, as discussed above) for the phantom shown in Figure 2. The phantom is designed to mimic a rather typical diagnostic task in field tomography, namely, the detection of maldistribution. In such a scenario, a zone of low density (and, hence, low attenuation) is present within an otherwise high-attenuation domain. Therefore, we designed our phantom with a similar configuration. It consists of a 6-in.-outer-diameter tube with a concentrically placed 3.25-in.-diameter tube. The outer annular region between the tubes is filled with water. Inside the 3.25-in. tube, a 1.25-in.-diameter tube is placed 0.2 in. off-center and filled with water. All tubes are made of Plexiglas. Table 2 gives the theoretical values of the linear attenuation coefficient for the regions indicated in Figure 2. These values are used in the calculation of the reconstruction error, per the procedure described below.

The data processing was designed to address the error behavior with changes in the pixel size, the number of scan lines used, and the data collection procedure. Table 3 shows the cases considered. We consider a range of pixel sizes (ξ) from 1.7 mm to 7.6 mm. The number of scan lines was chosen to correspond to 5%, 10%, 25%, 50%, or 100% of the full scan; hence, the corresponding total numbers of scan lines were 875, 1750, 4375, 8750, and 17 500, respectively. We shall refer to the number of scan lines by its percentage value and use the symbol δ to represent it. For the prescribed value of δ, the influence of the data collection


Figure 3. Reconstructed images (EM algorithm). Color bars indicate values of linear attenuation coefficient (cm⁻¹): (a) ξ = 2.54 mm, δ = 100%, τ = 175 projections per view; (b) ξ = 2.54, δ = 50, τ = 175; (c) ξ = 2.54, δ = 50, τ = 91; (d) ξ = 3.81, δ = 25, τ = 175; (e) ξ = 3.81, δ = 25, τ = 49; (f) ξ = 5.08, δ = 10, τ = 175; (g) ξ = 5.08, δ = 10, τ = 91; (h) ξ = 7.62, δ = 5, τ = 175; and (i) ξ = 7.62, δ = 5, τ = 49. (Also see Table 3.)

procedure was addressed by varying the number of projections per view, τ ∈ {49, 63, 91, 175}. Note that each case is uniquely identified by specifying the pixel size (ξ), the number of scan lines (δ), and the number of projections per view (τ). Naturally, for a fixed value of δ, the number of views decreases as τ increases. For example (Table 3), one of the cases is given as follows. We consider a total of 875 scan lines and designate it as δ = 5% (0.05 × 17 500 = 875). We choose a value of τ = 175, which is the number of projections for each view (see Figure 1 and the discussion of the laboratory CT scanner). Then, as shown in Table 3, the corresponding number of views for this case is 5. Hence, we can retrieve δ as 5 × 175 = 875 total scan lines. Note that the case is fully specified once the pixel size ξ ∈ {1.70, 1.90, 2.18, 2.54, 3.05, 3.81, 5.08, 7.62} mm is specified. The number of views for each case is given in Table 3; however, all trends will be reported with respect to the number of projections per view. A sketch of how such a reduced dataset can be carved out of the full scan is given below.
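The following minimal sketch assumes the full scan is stored as a views × projections array and takes the (number of views, τ) pairs from Table 3; the helper name and even-spacing convention are illustrative assumptions, not the exact procedure used here:

```python
import numpy as np

def subsample_scan(full_scan, n_views, tau):
    """Select an evenly spaced subset of a full scan.

    full_scan : (100, 175) array of counts (views x projections per view)
    n_views   : number of views to keep (from Table 3)
    tau       : number of projections to keep per view
    """
    view_idx = np.linspace(0, full_scan.shape[0] - 1, n_views).round().astype(int)
    proj_idx = np.linspace(0, full_scan.shape[1] - 1, tau).round().astype(int)
    return full_scan[np.ix_(view_idx, proj_idx)]

# Example: the delta = 5%, tau = 175 case of Table 3 keeps 5 views:
# reduced = subsample_scan(full_scan, n_views=5, tau=175)
```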

Figure 4. Reconstructed images (AM algorithm). Color bars indicate values of linear attenuation coefficient (cm⁻¹): (a) ξ = 2.54 mm, δ = 100%, τ = 175 projections per view; (b) ξ = 2.54, δ = 50, τ = 175; (c) ξ = 2.54, δ = 50, τ = 91; (d) ξ = 3.81, δ = 25, τ = 175; (e) ξ = 3.81, δ = 25, τ = 49; (f) ξ = 5.08, δ = 10, τ = 175; (g) ξ = 5.08, δ = 10, τ = 91; (h) ξ = 7.62, δ = 5, τ = 175; and (i) ξ = 7.62, δ = 5, τ = 49. (Also see Table 3.)

■ CHARACTERIZING RECONSTRUCTION PERFORMANCE

Irrespective of their use, a very important issue for a tomographic scanner is the assessment of image quality and scanner performance for a given method of image formation.40 The process of image formation can be described as a two-step process.41 First, information about the object is extracted in the form of raw sensor data constituting an observation. Second, the reconstruction process is used to extract an estimate of the scanned object's properties from the raw sensor data. The image quality is then defined as a measure of the deviation of the estimate from the true object and is generally influenced by the object properties, the imaging technique (e.g., X-ray or γ-ray tomography), the geometry and data collection method, the sensors used, and the reconstruction procedure employed. The reconstruction performance can be assessed (and, in principle, predicted) using the Fisher information (FI) concept. FI is a measure of the information that an observable random variable Y, characterized by the likelihood p(Y,μ), carries about an unknown parameter vector μ = {μ1, μ2, ..., μN}:42

$$F_\mu = \mathrm{COV}_\mu\{SS^T\} = E\{SS^T\} \qquad (11)$$


In eq 11, T denotes the transpose operation, COV is the covariance, and S is the score statistic:

$$S = \frac{\partial \ln p(Y,\mu)}{\partial \mu} \qquad (12)$$

FI can also be calculated as43

$$F_\mu = -E\left\{\frac{\partial^2}{\partial \mu^2} \ln p(Y,\mu)\right\} \qquad (13)$$

Hence, FI is an N × N matrix with the km element given by

$$F_{km} = -E\left\{\frac{\partial^2}{\partial \mu_k \, \partial \mu_m} \ln p(Y_i,\mu)\right\}, \qquad k, m = \{1, 2, ..., N\} \qquad (14)$$

Here, we utilize eq 13 to define the content of information that the measured (projection) data (Y) carries about the values of the attenuation coefficient (μ). Combining eqs 2 and 13, we obtain the expected FI:

$$F_{km} = \sum_{i \in (J_k \cap J_m)} l_{ik} l_{im} E\{Y_i\} \qquad (15)$$

where i ∈ (J_k ∩ J_m) designates all projections to which both pixels k and m contribute. To simplify the calculation of the elements of F_μ, we utilize the experimental projection data (d_i) in place of the expected values E{Y_i}, thus arriving at the observed FI matrix with the km element

$$F_{km}^{\,obs} = \left(\sum_{i \in (J_k \cap J_m)} l_{ik} l_{im} E\{Y_i\}\right)\Bigg|_{E\{Y_i\} \to d_i} = \sum_{i \in (J_k \cap J_m)} l_{ik} l_{im} d_i \qquad (16)$$

For more details on the distinction between FI, the expected FI, and the observed FI, see the work of McLachlan and Krishnan.38 Furthermore, we can describe the information content via the Cramér–Rao bound (CRB), which gives a lower bound on the covariance matrix of any unbiased estimate for a vector of nonrandom parameters (eq 17):43

$$C_\mu = F_\mu^{-1} \qquad (17)$$

The total information content (for the entire μ vector) can be generalized as tr{C_μ}, where tr designates the trace operator.44 Lower values of tr{C_μ} indicate that it is possible to achieve lower variances in the μ estimate and, thus, designate better performance.
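In a discretized setting, eqs 16 and 17 reduce to a few matrix operations. A minimal numpy sketch follows, assuming the chord-length matrix L and the measured counts d are available and that the observed FI matrix is nonsingular:

```python
import numpy as np

def total_information_content(L, d):
    """Observed Fisher information (eq 16) and tr{C_mu} (eq 17).

    L : (M, N) chord-length matrix (l_ij)
    d : (M,) measured photon counts
    """
    F_obs = L.T @ (L * d[:, None])   # F_km = sum_i l_ik * l_im * d_i
    C = np.linalg.inv(F_obs)         # Cramer-Rao bound, eq 17
    return np.trace(C)               # lower trace -> lower achievable variance
```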

■ RESULTS AND DISCUSSION

Figure 5. Reconstruction error as a function of percentage of data used and pixel size (all data points are for τ = 175, EM algorithm).

Reconstructed Images. As discussed, the objective of our case study is the detection of a low-attenuation region embedded between two higher-attenuation regions, mimicking the typical case of maldistribution diagnostics. The reconstructed values of the linear attenuation coefficient are shown in Figure 3 (EM algorithm) and Figure 4 (AM algorithm). The images are given for a range of values of ξ (pixel size), δ (total number of scan lines used, given as a percentage of the full scan), and τ (number of projections per view). In the images reconstructed from the full dataset (i.e., δ = 100%; see Figures 3a and 4a), the higher- and lower-attenuation regions are clearly captured and separated. As δ decreases, the images deteriorate and eventually become corrupted with artifacts. Furthermore, for a fixed value of δ, the images clearly exhibit a dependence on τ. This can be verified by comparison of cases (b) and (c), (d) and (e), (f) and (g), and (h) and (i) in Figures 3 and 4. We discuss these differences in more detail below.

Error of Reconstruction. The mean percentage error is used to quantify the reconstruction error:

$$\mathrm{error} = \frac{100}{N} \sum_{i=1}^{N} \frac{|\mu_i - \mu_{theor,i}|}{\mu_{theor,i}} \qquad (18)$$

where N is the total number of pixels in the reconstructed image and μ_theor,i is the theoretical value of the attenuation coefficient (Table 2). To compare the results obtained by the two algorithms, it is necessary to have a common basis. One option would be to set a termination point for the iterations based on the rate of change of the cost functions (see eqs 5 and 8 for the EM and AM algorithms, respectively). After a few iterations, the rate of change of the cost function usually diminishes rapidly, and a large number of additional iterations is required to produce a small change. It is also known that, with a higher number of iterations, the image tends to get noisy and grainy.45 We therefore adopted the following approach to exclude the effect of the number of iterations from the analysis: the reconstructions are performed over a large span of iterations and, for each case, the error is taken as the minimum over all iterations. This implies that the results presented here correspond to those obtained with an optimal stopping criterion.
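For reference, eq 18 amounts to a one-line computation; a minimal numpy sketch (array names are illustrative):

```python
import numpy as np

def mean_pct_error(mu_rec, mu_theor):
    """Mean absolute percentage error of eq 18, averaged over all N pixels."""
    return 100.0 * np.mean(np.abs(mu_rec - mu_theor) / mu_theor)
```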

Figure 6. Error of reconstruction for different total numbers of scan lines (δ): (a,b) δ = 5%; (c,d) δ = 10%; (e,f) δ = 25%; (g,h) δ = 50%.

In Figure 5, the reconstruction error is shown as a function of the total number of scan lines. All data points are for τ = 175. The results indicate the potential for a significant reduction in the number of scan lines, since the reconstruction error levels off at δ values of ∼25%–50%. Hence, for diagnostic purposes, increasing the number of scan lines above 50% (8750) or even 25% (4375) brings no additional gain in the reconstruction. The results in Figure 5 indicate that the maldistribution zone considered in this study can be successfully detected with as few as 875 scan lines. Such a decrease in the number of needed scan lines also offers the potential to reduce scan time during field diagnostics.

As shown in Figure 6, the reconstruction error is a complex function of the parameters ξ, δ, and τ (see Table 3) and of the algorithm employed. The results indicate a reduction of error with increasing pixel size. This can be explained by considering the total information content for a given resolution (see the discussion related to Figure 7 below). An increase in pixel size increases the overall information content available; i.e., lower-resolution images can be obtained with less projection data. However, increasing the pixel size limits the size of the maldistribution zone that can be detected. In this study, a 1-in. (25 mm) annular section of low attenuation was successfully detected with only 875 scan lines and a maximum pixel size of ∼7 mm. The results of Darwood et al.21 indicate that detection of large-scale

maldistribution in industrial equipment (a 6.2-m-diameter column) is possible with an even lower number of scan lines (see the discussion above). Hence, during actual field diagnostics, it is essential to achieve a proper tradeoff between image quality (pixel size) and the expected scale of maldistribution.

The error is also very sensitive to the method of data collection (i.e., the value of τ). Larger values of τ (projections per view) are achieved in practice by employing a larger number of sensors and lead to higher capital investment. On the other hand, lower values of τ require a corresponding increase in the number of source positions during the scan. The results shown in Figures 6 and 7 indicate that no definite trends of the reconstruction error, with respect to the number of projections per view, can be identified. For example, in Figures 6e and 6f (δ = 25%) and Figures 6g and 6h (δ = 50%), the error is a decreasing function of τ. In Figures 6a and 6b (δ = 5%), most cases exhibit the minimum error at τ = 63, with the exception of ξ = 2.5 mm, for which the minimum error occurs at τ = 175. On the other hand, in Figures 6c and 6d, τ = 91 yields the minimal reconstruction error. These trends are correctly reflected in Figures 3 and 4; the images in panels (b), (d), (g), and (i) capture the phantom better than the images in panels (c), (e), (f), and (h), respectively. Even though there is a lack of definite trends for the reconstruction error, the concept of FI can be utilized to guide the data collection methodology, as shown below.


Figure 7. Change in the information content with the pixel size. Full scan data (δ = 100%, τ = 175).

Figure 8. Change in the information content and the reconstruction error with the number of projections per view (τ) for constant total number of scan lines (δ): (a) ξ = 7.6 mm and (b) ξ = 5.1 mm.

Generally, the AM algorithm outperforms the EM algorithm when the reconstruction error is considered (Figure 6). The differences are due to the approximation used in the derivation of the update function (see eqs 5 and 6 for the M-step of the EM algorithm). On the other hand, cross-examination of the images shown in Figures 3 and 4 reveals that both algorithms have a comparable capability for detecting a low-attenuation region.

Performance Prediction. We first verify the developed total information content criterion, tr{C_μ} (see also eqs 16 and 17), against the known resolution limit of the scanner. Roy25 utilized the expression proposed by Yester and Barnes46 and showed that, for the scanner used in our study, its value is ∼2 mm. In Figure 7, the quantity tr{C_μ} is shown as a function of pixel size. A sharp increase in tr{C_μ} at ξ ≈ 2 mm is evident and coincides with


the resolution limit of the scanner. Hence, tr{C_μ} properly captures the basic performance parameters of the scanner. A similar analysis can be extended to clarify the intricate trends of the reconstruction error. As given by eqs 16 and 17, the information content depends on the number of photons at the detectors (i.e., the scanned object properties) and on the scanner geometry. The values of l_ik and l_im are a complex function of the pixel size and the data collection arrangement (i.e., the values of ξ, δ, and τ). Note that the calculation of tr{C_μ} involves the inversion of the N × N FI matrix (eq 17), which becomes computationally intensive for smaller pixel sizes.47 For the sake of simplicity, we limit our discussion to the lower-resolution cases. This is reasonable, because such cases are more relevant to field diagnostics.

The EM and AM algorithms are essentially characterized by the same information content, since the model equation for AM (eq 7) is the same as the incomplete-data likelihood for EM (eq 2). Note that this is also properly reflected in the similar trends of the EM and AM reconstruction errors. To simplify the plots, we limit the remaining discussion to the EM reconstruction error.

A comparative plot of the reconstruction error and tr{C_μ} is given in Figure 8. Similar to the trends exhibited in Figure 7, tr{C_μ} progressively increases as the pixel size decreases. Hence, a smaller pixel size leads to a larger variance in the estimate or, in other words, to a lower information content. These results are in agreement with the trends shown in Figures 5 and 6. Overall, the trends of the reconstruction error are well-reflected in the value of tr{C_μ}, and it is possible to predict the minimum of the reconstruction error using it. For example, for ξ = 7.6 mm and δ = 5%, tr{C_μ} takes values of ∼10 and ∼700 for τ = 49 and 175, respectively. For these cases, the reconstruction error increases from 11.8% to 16.2%. The parameter tr{C_μ} is less sensitive to τ for higher values of δ.

In Figure 8, the tr{C_μ} values also suggest that, for a small number of line measurements (δ = 5% and δ = 10%), lower values of tr{C_μ} are obtained when the number of views is increased and the number of projections per view (τ) is kept small. The same conclusions can be drawn from the reconstructed images (Figures 3 and 4) and the error trends (Figures 5 and 6). This observation is very pertinent to field tomography applications, where there is a linear operating cost associated with the number of line measurements that can be made with a sensor array and a source. Images with greater accuracy could be obtained with fewer sensors and more views (source positions) around the vessel to be scanned.

■ CONCLUDING REMARKS

In this study, we examined various aspects of γ-ray tomography as a research and field diagnostics tool. A laboratory computed tomography (CT) unit was used to gather a large amount of projection data, which were then processed into smaller sets to mimic the low-scan-data conditions of field tomography. The aim was to assess what level of information is required for the purpose of field diagnostics and which methodology of data collection would help minimize the needed projection data.

We presented a theoretical framework for the assessment of scanner performance. This framework proved useful in capturing and predicting the behavior of the reconstruction error. It was further demonstrated that typical tasks of field diagnostics (i.e., detection of maldistribution) can be performed with a very limited set of projection data. In practice, the same number of projection data in field tomography can be obtained


by either using a higher number of source positions around the object coupled with a lower number of scan lines for each source position, or vice versa. It was demonstrated that, in the case of a very low number of scan lines, the reconstruction performance was better when a larger number of sensor positions was used. This observation is very pertinent to field tomography applications, since the needed number of scan lines within a view is directly related to the operating cost of the sensor array. The principles developed and applied to the phantom geometry considered in this study can be directly extended to other phantom geometries. Similarly, in field applications, the FI concept can be used to guide the scanning process and minimize the needed projection data.

■ AUTHOR INFORMATION

Corresponding Author

*E-mail: [email protected].

■ ACKNOWLEDGMENT

The authors acknowledge the financial support of the industrial sponsors of the Chemical Reaction Engineering Laboratory. Dr. Rajneesh Varma is acknowledged for valuable discussions regarding many tomography and image reconstruction issues and for providing the code for the AM algorithm. Prof. Dr. Muthanna Al-Dahhan is also acknowledged for valuable discussions.

■ NOTATION

d = measured number of photons (at detector)
l = length (cm)
J_j = set of projections to which pixel j contributes
n = iteration number
q = photon transmission model vector of AM algorithm
Z_j = scaling factor used in AM algorithm
F_μ = Fisher information matrix
T = transpose operation
p = probability density function of measured (incomplete) dataset
Y = random vector of measured (incomplete) data
f = probability density function of complete dataset
E{·} = expectation operator
i = projection index
j = pixel index
I_i = set of pixels contributing to projection i
S = score
C_μ = Cramér–Rao bound
tr = trace operator
error = mean absolute percentage error

Greek Letters

μ = attenuation coefficient (cm⁻¹); also, the set of parameters to be estimated via the EM or AM algorithm
μ_theor,i = theoretical value of attenuation coefficient (cm⁻¹)
ξ = pixel size in the reconstructed image (mm)
δ = total number of scan lines considered in the process of reconstruction (% of full scan; a full scan has 17 500 scan lines)
τ = number of projections per view
λ = mean number of photons emitted by γ-ray source


■ REFERENCES

(1) Barigou, M. Particle tracking in opaque mixing systems: An overview of the capabilities of PET and PEPT. Chem. Eng. Res. Des. 2004, 82 (9), 1258–1267 (Special Issue).
(2) Chaouki, J.; Larachi, F.; Dudukovic, M. P. Noninvasive tomographic and velocimetric monitoring of multiphase flows. Ind. Eng. Chem. Res. 1997, 36 (11), 4476–4503.
(3) Boyer, C.; Duquenne, A.-M.; Wild, G. Measuring techniques in gas–liquid and gas–liquid–solid reactors. Chem. Eng. Sci. 2002, 57 (16), 3185–3215.
(4) Reinecke, N.; Petritsch, G.; Schmitz, D.; Mewes, D. Tomographic measurement techniques. Visualization of multiphase flows. Chem. Eng. Technol. 1998, 21 (1), 7–18.
(5) Gladden, L. F.; Alexander, P. Applications of nuclear magnetic resonance imaging in process engineering. Meas. Sci. Technol. 1996, 7 (3), 423–435.
(6) Kumar, S. B.; Dudukovic, M. P. Computer assisted gamma and X-ray tomography: Applications to multiphase flow systems. Non-Invasive Monitor. Multiphase Flows 1997, 47–103.
(7) Stitt, E. H.; James, K. Process tomography and particle tracking: Research and commercial diagnostics tool. Presented at the 3rd World Congress on Industrial Process Tomography, Banff, Canada, 2003.
(8) Mann, R.; Dickin, F. J.; Wang, M.; Dyakowski, T.; Williams, R. A.; Edwards, R. B.; Forrest, A. E.; Holden, P. J. Application of electrical resistance tomography to interrogate mixing processes at plant scale. Chem. Eng. Sci. 1997, 52, 2087–2097.
(9) Toye, D.; Marchot, P. Imaging of liquid distribution in reactive distillation packings with a new high-energy X-ray tomograph. Meas. Sci. Technol. 2005, 16 (11), 2213–2220.
(10) Kumar, S. B.; Moslemian, D.; Dudukovic, M. P. Gas-holdup measurements in bubble columns using computed tomography. AIChE J. 1997, 43 (6), 1414–1425.
(11) Boyer, C.; Koudil, A.; Chen, P.; Dudukovic, M. P. Study of liquid spreading from a point source in a trickle bed via gamma-ray tomography and CFD simulation. Chem. Eng. Sci. 2005, 60 (22), 6279–6288.
(12) Roy, S.; Kemoun, A.; Al-Dahhan, M. H.; Dudukovic, M. P.; Skourlis, T. B.; Dautzenberg, F. M. Countercurrent flow distribution in structured packing via computed tomography. Chem. Eng. Process. 2004, 44 (1), 59–69.
(13) Khopkar, A. R.; Rammohan, A. R.; Ranade, V. V.; Dudukovic, M. P. Gas–liquid flow generated by a Rushton turbine in stirred vessel: CARPT/CT measurements and CFD simulations. Chem. Eng. Sci. 2005, 60 (8–9), 2215–2229.
(14) Bhusarapu, S.; Al-Dahhan, M. H.; Dudukovic, M. P. Solids flow mapping in a gas–solid riser: Mean holdup and velocity fields. Powder Technol. 2006, 163, 98–123.
(15) Chen, J. W.; Gupta, P.; Degaleesan, S.; Al-Dahhan, M. H.; Dudukovic, M. P.; Toseland, B. A. Gas holdup distributions in large-diameter bubble columns measured by computed tomography. Flow Meas. Instrum. 1998, 9 (2), 91–101.
(16) Shollenberger, K. A.; Torczynski, J. R.; Adkins, D. R.; O'Hern, T. J.; Jackson, N. B. Gamma densitometry tomography of gas holdup spatial distribution in industrial scale bubble columns. Chem. Eng. Sci. 1997, 52, 2037.
(17) Bennett, M. A.; West, R. M.; Luke, S. P.; Jia, X.; Williams, R. A. Measurement and analysis of flows in a gas–liquid column reactor. Chem. Eng. Sci. 1999, 54, 5003–5012.
(18) Chen, J.; Kemoun, A.; Al-Dahhan, M. H.; Dudukovic, M. P.; Lee, D. J.; Fan, L.-S. Comparative hydrodynamics study in a bubble column using computer-automated radioactive particle tracking (CARPT)/computed tomography (CT) and particle image velocimetry (PIV). Chem. Eng. Sci. 1999, 54 (13–14), 2199–2207.
(19) Al-Dahhan, M. H.; Kemoun, A.; Cartolano, A. R. Phase distribution in an upflow monolith reactor using computed tomography. AIChE J. 2006, 52 (2), 745–753.
(20) Boyer, C.; Fanget, B. Measurement of liquid flow distribution in trickle bed reactor of large diameter with a new gamma-ray tomographic system. Chem. Eng. Sci. 2002, 57 (7), 1079–1089.
(21) Darwood, M. W.; Davies, M.; Godden, D.; Jackson, P.; James, K.; Stitt, E. H. Development and implementation of gamma-ray tomography for field applications. Presented at the 3rd World Congress on Industrial Process Tomography, Banff, Canada, 2003.
(22) Murphy, S.; York, T. A. Confidence in tomographic imaging. Presented at PROCTOM 2006, Warsaw, Poland, Sept. 14–15, 2006.
(23) Constantino, E. P. A.; Ozanyan, K. B. Sinogram recovery for sparse angle tomography using a sinusoidal Hough transform. Meas. Sci. Technol. 2008, 19 (9).
(24) Bertram, M.; Wiegert, J.; Schafer, D.; Aach, T.; Rose, G. Directional view interpolation for compensation of sparse angular sampling in cone-beam CT. IEEE Trans. Med. Imaging 2009, 28 (7), 1011–1022.
(25) Roy, S. Phase distribution and performance studies of gas–liquid monolith reactor. D.Sc. Thesis, Washington University, St. Louis, MO, 2006.
(26) Kumar, S. B. Computer tomographic measurement of void fraction and modeling of the flow in bubble columns. Ph.D. Thesis, Florida Atlantic University, Boca Raton, FL, 1994.
(27) Lange, K.; Carson, R. EM reconstruction algorithms for emission and transmission tomography. J. Comput.-Assisted Tomogr. 1984, 8 (2), 306–316.
(28) Dempster, A.; Laird, N. M.; Rubin, D. B. Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc., B 1977, 39, 1.
(29) O'Sullivan, J. A.; Benac, J. Alternating minimization algorithms for transmission tomography. IEEE Trans. Med. Imaging 2007, 26, 283–297.
(30) Patel, A. K.; Patwardhan, A. W.; Thorat, B. N. Comparison of ML-EM algorithm and ART for reconstruction of gas hold-up profile in a bubble column. Chem. Eng. J. 2007, 130 (2–3), 135–145.
(31) Varma, R.; Bhusarapu, S.; O'Sullivan, J. A.; Al-Dahhan, M. H. A comparison of alternating minimization and expectation maximization algorithms for single source gamma ray tomography. Meas. Sci. Technol. 2008, 19 (1).
(32) Elbakri, I. A.; Fessler, J. Statistical image reconstruction for polyenergetic X-ray computed tomography. IEEE Trans. Med. Imaging 2002, 21 (2), 89–99.
(33) Fessler, J. A.; Rogers, W. L. Spatial resolution properties of penalized-likelihood image reconstruction: Space-invariant tomographs. IEEE Trans. Image Process. 1996, 5 (9), 1346–1358.
(34) Lange, K.; Fessler, J. A. Globally convergent algorithms for maximum a posteriori transmission tomography. IEEE Trans. Image Process. 1995, 4 (10), 1430–1438.
(35) Byrne, C. L. Iterative image reconstruction algorithms based on cross-entropy minimization. IEEE Trans. Image Process. 1993, 2, 93–103.
(36) Csiszar, I. Why least squares and maximum entropy? An axiomatic approach to inference for linear inverse problems. Ann. Stat. 1991, 19, 2033–2066.
(37) Kullback, S. Information Theory and Statistics; Wiley: New York, 1959.
(38) McLachlan, G. J.; Krishnan, T. The EM Algorithm and Extensions, Second ed.; Wiley: Hoboken, NJ, 2008.
(39) Benac, J. Alternating minimization algorithms for X-ray computed tomography: Multigrid acceleration and dual energy application. D.Sc. Thesis, Washington University, St. Louis, MO, 2005.
(40) Barrett, H. H. Objective assessment of image quality: Effects of quantum noise and object variability. J. Opt. Soc. Am. A 1990, 7 (7), 1266–1278.
(41) O'Sullivan, J. A.; Blahut, R. E.; Snyder, D. L. Information-theoretic image formation. IEEE Trans. Inf. Theory 1998, 44 (6), 2094–2123.
(42) van Trees, H. L. Detection, Estimation and Modulation Theory; Wiley: New York, 1968.
(43) Lehmann, E. L.; Casella, G. Theory of Point Estimation, Second ed.; Springer: New York, 1998.
(44) Gupta, R. D.; Kundu, D. On the comparison of Fisher information of the Weibull and GE distributions. J. Stat. Planning Inference 2006, 136 (9), 3130–3144.


(45) Snyder, D. L.; Schulz, T. J.; O'Sullivan, J. A. Deblurring subject to nonnegativity constraints. IEEE Trans. Signal Process. 1992, 40, 1143–1150.
(46) Yester, M. W.; Barnes, G. T. Geometrical limitations of computed tomography scanner resolution. Appl. Opt. Instrum. Med. 1977, 127, 296–303.
(47) Hero, A. O.; Usman, M.; Sauve, A. C.; Fessler, J. A. Recursive algorithms for computing the Cramér–Rao bound. IEEE Trans. Signal Process. 1997, 45 (3), 803–807.

