Ind. Eng. Chem. Res. 2001, 40, 4615-4622


Particle Size Distribution Determination from Spectral Extinction Using Neural Networks

Mingzhong Li, Thor Frette, and Derek Wilkinson*

Centre for Molecular and Interface Engineering, Department of Mechanical and Chemical Engineering, Heriot-Watt University, Edinburgh EH14 4AS, U.K.

* Corresponding author. Tel.: +44-131-449 5111, ext. 4717. Fax: +44-131-451 3077. E-mail: [email protected].

The use of techniques based on spectral extinction to recover particle size distributions has become increasingly popular in recent years. However, these techniques are time-consuming and are not always successful in practical applications. In this paper, a novel method is proposed to determine particle size distributions from several spectral extinction measurements using neural networks. Simulations and experiments have shown that it is feasible to use a neural network to obtain the parameters of a particle size distribution from turbidity measurements. Although the neural network was trained using log-normal distribution data, it can also recover some non-log-normal distributions. The method has the advantages of simplicity of use, essentially instantaneous delivery of results, and suitability for online particle size analysis.

1. Introduction

Particle size distribution measurement is important in many industries, for example, the manufacture of pigments, pharmaceuticals, cosmetics, and foods. High-quality products would benefit from the improved control achievable with online, real-time measurement of particle size, shape, and composition. Determination of particle size distributions by spectral extinction has received much attention in recent years because of its attractive advantages of simplicity and nonintrusiveness. In spectral extinction, an electromagnetic beam is attenuated as it passes through a particulate suspension. The degree of extinction depends on the wavelength of the radiation and on the particles' size, shape, and refractive index. Spectral extinction has been shown to be a feasible method of obtaining particle size distributions for spheres of known refractive index over a range of wavelengths. However, the traditional methods of inverting the extinction spectrum into a particle size distribution, such as direct inversion methods1-4 and iterative inversion methods,5-7 are time-consuming and are not always successful.

The use of neural networks for modeling complex nonlinear processes has become widespread because of their outstanding ability to approximate arbitrary nonlinear functions.8 Neural networks have been applied to the recovery of particle size distributions in recent years.9-11 In refs 9 and 10, networks were trained to determine log-normal particle size distributions: spectral extinction measurements at several wavelengths were selected as the network inputs, and the two parameters of the log-normal distribution, the particle number mean diameter and the geometric standard deviation, were the network outputs. However, this approach cannot solve some practical problems. In practical applications, the suspension concentration is a very important element which directly affects the spectral extinction. For a given particle size distribution, the spectral extinction pattern depends on the concentration at high concentrations, causing the training data requirement to increase

dramatically. Also, it was not established9,10 whether the method could determine non-log-normal particle size distributions. Neural networks have also been used11 to compute particle size distributions from laser diffraction measurements; in that work, the network had 31 inputs and 31 outputs, a structure so complex that it was very difficult, if not impossible, to train satisfactorily.

In this work, a simple method based on neural networks to determine particle size distributions is proposed. A neural network with only four inputs and two outputs was trained by an effective training algorithm, the Levenberg-Marquardt (LM) algorithm,12 to identify the geometric mean and standard deviation of a log-normal particle size distribution. The method is independent of the suspension concentration for dilute suspensions. It is effective over particle mean sizes from 100 to 10 000 nm and geometric standard deviations from 1.0 to 1.5 using spectral extinction measurements at wavelengths from 300 to 900 nm. Furthermore, the method can determine the leading parameters of some non-log-normal particle size distributions. Simulation and experimental results have demonstrated the effectiveness of the proposed method.

2. Background

2.1. Light Extinction. When a beam of monochromatic radiation impinges on a sample containing particles with an index of refraction different from that of the dispersant medium, scattering and absorption attenuate the transmitted beam. According to the Beer-Lambert law for low concentrations, light of wavelength λ is attenuated on passing through a suspension of particles as follows:

\tau(\lambda) = \frac{1}{L} \ln\frac{I_0}{I} \qquad (1)

where τ(λ) is the turbidity, I0 is the intensity of radiation entering the suspension, I is the intensity of radiation emerging from the suspension, and L is the path length of light through the suspension. At high concentrations, the Beer-Lambert law fails because of multiple scattering and interaction effects. The maximum concentration for application of the Beer-Lambert law depends on the properties of the system, but in most cases a practical upper limit is a particle volume fraction of 5%.13
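To make eq 1 concrete, the following minimal sketch (ours, not the authors'; the names are illustrative) computes the turbidity from a transmission measurement:

```python
import numpy as np

def turbidity_from_transmission(I0, I, L):
    """Eq 1: Beer-Lambert turbidity from the entering (I0) and emerging (I)
    intensities and the optical path length L through the suspension.
    Valid only for dilute samples (volume fraction below about 5%, see text)."""
    return np.log(np.asarray(I0) / np.asarray(I)) / L
```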


Figure 1. Structure of a feedforward neural network.

If the sample is a dilute homogeneous suspension of noninteracting polydisperse spheres, so that multiple scattering and interaction effects can be neglected, the turbidity at a wavelength λ is given by

\tau(\lambda) = \frac{c\pi}{4} \int_0^\infty D^2\, Q_{\mathrm{ext}}(D,\lambda,m)\, f(D)\, \mathrm{d}D \qquad (2)

where c is the suspension concentration, D is the diameter of the spherical particles, m is the system's relative refractive index (the ratio of the particle and medium refractive indices), Qext is the extinction coefficient, and f(D) is the particle number probability distribution. The extinction coefficient Qext depends on the relative refractive index m and the dimensionless size parameter R = πD/λ and can be calculated from Mie theory.14 Equation 2 can be rewritten as

\tau(\lambda) = \frac{c\lambda^3}{4\pi^2} \int_0^\infty R^2\, Q_{\mathrm{ext}}(R,m)\, f(R)\, \mathrm{d}R \qquad (3)
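As a numerical illustration of the forward problem in eq 2, the sketch below (our own, not the authors' code) evaluates the turbidity integral by quadrature. It assumes the third-party miepython package for the extinction efficiency, whereas the paper used Bohren and Huffman's FORTRAN subroutine;16 the number distribution f is passed in as a callable:

```python
import numpy as np
import miepython  # assumed third-party Mie code; the paper used a FORTRAN routine (ref 16)

def turbidity(lam, c, f, m_rel, D):
    """Eq 2: turbidity of a dilute suspension at wavelength lam.

    lam   : wavelength in the suspending medium (same length unit as D)
    c     : particle number concentration
    f     : callable number probability density f(D)
    m_rel : relative refractive index (particle/medium) at lam
    D     : diameter quadrature grid, e.g. np.logspace(1, 5, 2000) for nm
    """
    x = np.pi * D / lam                       # dimensionless size parameter
    qext, _, _, _ = miepython.mie(m_rel, x)   # Mie extinction efficiency
    return c * np.pi / 4.0 * np.trapz(D**2 * qext * f(D), D)
```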

Equation 3 is a Fredholm integral equation of the first kind, in which τ(λ) is obtained by experiment, (λ³/4π²)QextR² is the corresponding kernel, and f(R) is the unknown distribution function. Direct1-4 and iterative5-7 inversion methods have been used to solve this Fredholm integral equation. These methods are generally slow and cannot be relied upon to achieve a solution in every case.

2.2. Neural Networks. Figure 1 represents an n-input, m-output feedforward neural network with one hidden layer of nh hidden units. It has been shown that such a network can approximate any continuous nonlinear function to arbitrary accuracy.8 The output of the hidden units can be represented as

h_j = F(s_j) = F(V_j^{\mathrm{T}} I), \quad j = 1, 2, ..., n_h \qquad (4)

where I = [1, x_1, ..., x_n]^T is the network input vector, V_j = [v_{j,0}, v_{j,1}, ..., v_{j,n}]^T is the weight vector connecting the network inputs to the jth hidden unit, and F(x) is the sigmoid function. The output of the output-layer neurons can be represented as

y_i = g(W_i^{\mathrm{T}} H), \quad i = 1, 2, ..., m \qquad (5)

where H = [1, h_1, ..., h_{n_h}]^T is the network hidden-layer output vector, W_i = [w_{i,0}, w_{i,1}, ..., w_{i,n_h}]^T is the weight vector connecting the hidden-layer units to the ith output neuron, and g(x) may be either a linear function

or a sigmoid function, depending on the problem. In this work, the sigmoid function g(x) = F(x) = 1/(1 + e^{-x}) was selected. The neural network defines a mapping G: X → Y, where X ∈ R^n is an input vector and Y ∈ R^m is an output vector. Any nonlinear function can be approximated by the network through appropriately determined weight matrices V = [V_1, V_2, ..., V_{n_h}] and W = [W_1, W_2, ..., W_m].

The training of a neural network may be classified as either batch learning or pattern learning. In batch learning, the weights are adjusted after a complete sweep of the entire training data set, while in pattern learning, the weights are updated during the course of the process using data gained online. Batch learning has greater mathematical validity because the gradient-descent method can be implemented exactly; pattern learning, usually implemented as an approximation to batch learning, can be used to modify network weights online so that a model can track the dynamics of a time-varying process. In this paper, batch learning was selected as appropriate for training the network offline.

3. Particle Size Distribution Determination Using Neural Networks

The problems to be tackled in using neural networks to determine particle size distributions are the selection of the network inputs and outputs, the number of hidden units, and the training algorithm. The selection of the number of hidden units and the choice of the training algorithm are discussed in section 4.1.

The network outputs are required to represent the characteristics of the particle size distribution. Several models have been used to describe particle size distributions, for example, the Gauss, log-normal, γ, and Rosin-Rammler distributions. The log-normal distribution was assumed here because it is the most widely used for suspensions of particles. A log-normal distribution is characterized by two parameters, the particle geometric number mean diameter Dm and the geometric standard deviation σ, and is defined as follows:

f(D) = \frac{1}{\sqrt{2\pi}\, D \ln\sigma} \exp\left\{ -\frac{1}{2} \left( \frac{\ln D - \ln D_m}{\ln\sigma} \right)^2 \right\} \qquad (6)
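Eq 6 translates directly into a small factory returning f(D) for use with the turbidity sketch above (our illustration; σ > 1 is assumed, since ln σ appears in the denominator):

```python
def lognormal(Dm, sigma):
    """Eq 6: log-normal number density with geometric mean Dm and geometric
    standard deviation sigma (sigma > 1)."""
    def f(D):
        z = (np.log(D) - np.log(Dm)) / np.log(sigma)
        return np.exp(-0.5 * z**2) / (np.sqrt(2.0 * np.pi) * D * np.log(sigma))
    return f
```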

Thus, the two parameters, the particle geometric mean diameter Dm and the geometric standard deviation σ, are selected as the network outputs.

The turbidity at a given wavelength depends not only on the particle size distribution but also on the particle concentration in the suspension. Previously,9-11 turbidities at several wavelengths were selected directly as the network inputs, which complicated the network training. According to eq 2, the turbidity is proportional to the suspension concentration in the dispersant. Thus, to eliminate the effect of concentration on the network training, ratios of the turbidity to that at a reference wavelength, τ(λ0), were selected as the inputs of the neural network, defined as

R(\lambda, \lambda_0) = \frac{\tau(\lambda)}{\tau(\lambda_0)} = \frac{\int_0^\infty D^2\, Q_{\mathrm{ext}}(D,\lambda,m)\, f(D)\, \mathrm{d}D}{\int_0^\infty D^2\, Q_{\mathrm{ext}}(D,\lambda_0,m_0)\, f(D)\, \mathrm{d}D} \qquad (7)

where λ0 is a selected reference wavelength. For the given reference wavelength λ0 and measurement wavelength λ, the ratio R(λ,λ0) is determined by the particle size number probability distribution f(D) alone and is independent of the suspension concentration. This approach is justified provided the concentration is below the maximum limit for the Beer-Lambert law (section 2.1).

Figure 2. Turbidity ratios for different log-normal distributions.

Table 1. Simulation Results

hidden neurons | ABP ASE | ABP iterations | CG ASE | CG iterations | LM ASE | LM iterations
7  | 0.0351 | 5000 | 0.0328 | 5000 | 0.0160 | 5000
8  | 0.0351 | 5000 | 0.0212 | 5000 | 0.0102 | 5000
9  | 0.0338 | 5000 | 0.0193 | 5000 | 0.0113 | 5000
10 | 0.0360 | 5000 | 0.0160 | 5000 | 0.0089 | 4129
11 | 0.0415 | 5000 | 0.0204 | 5000 | 0.0105 | 5000

The number of network inputs also requires consideration, because the choice of input variables affects the accuracy of the network's results. The turbidity spectrum must be presented to the network so that sufficient information is provided to recover the particle size distribution while avoiding an excessive number of inputs: too few inputs lose significant information from the spectrum, while too many make the network unnecessarily complex and difficult to train.

Figure 2 shows turbidity ratios with reference wavelength λ0 = 500 nm for different log-normal distributions over the wavelength range from 300 to 900 nm. The refractive indices at the different wavelengths were those of the system silica in water.15 The curves differ significantly between distributions, and from inspection of their shapes it was concluded that four turbidity ratios at different wavelengths are sufficient to represent the differences between the distributions, maximizing the information content of the input data without an excessive number of input nodes. Thus, the number of network inputs was set at four. The wavelengths, chosen by inspection of Figure 2 and lying within the 300-900 nm range of the spectrophotometer used in the experiments, were 300, 400, 550, and 650 nm; the computation of eq 7 at these wavelengths is sketched below.
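Building on the sketches above, eq 7 gives the four network inputs. This is again our illustration, with m_at standing in for a user-supplied dispersion function returning the relative refractive index at each wavelength (e.g., silica in water from ref 15):

```python
def turbidity_ratios(lams, lam0, f, m_at, D):
    """Eq 7: concentration-independent turbidity ratios R(lam, lam0)."""
    tau0 = turbidity(lam0, 1.0, f, m_at(lam0), D)  # c = 1; it cancels in the ratio
    return np.array([turbidity(lam, 1.0, f, m_at(lam), D) / tau0 for lam in lams])

# The inputs used in the paper, referenced to 500 nm, for one training distribution:
# D_grid = np.logspace(1, 5, 2000)  # nm
# R = turbidity_ratios([300, 400, 550, 650], 500, lognormal(1000.0, 1.2), m_at, D_grid)
```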

4. Simulations

Log-normal particle size distributions with geometric mean diameters of 100-10 000 nm and geometric standard deviations of 1.0-1.5 were considered for recovery by neural networks. The reference wavelength was set at λ0 = 500 nm, and the measurement wavelengths were λ1 = 300 nm, λ2 = 400 nm, λ3 = 550 nm, and λ4 = 650 nm. A total of 600 pairs of training data (30 means logarithmically spaced and 20 deviations linearly spaced) were generated to train the neural network, and a further 11 pairs of data were used to test the effectiveness of the trained network. The Mie extinction coefficients were calculated using a FORTRAN subroutine by Bohren and Huffman.16 The relative refractive indices at the different wavelengths were those of the experimental system, silica in water.15 In this section, the selection of the number of hidden units and the choice of the training algorithm are addressed first, and then the recovery of non-log-normal distributions by the trained neural network is investigated.

4.1. Determination of Hidden Units and the Training Algorithm. Two important steps in the training procedure were the determination of an adequate number of neurons in the hidden layer and the selection of the training algorithm. Many simulations were carried out to determine the appropriate number of hidden units according to a criterion of minimum average squared error (ASE) between the desired outputs and the outputs calculated by the network, using three different training algorithms: the adaptive back-propagation (ABP) algorithm,17 the conjugate gradient (CG) algorithm,18 and the Levenberg-Marquardt (LM) algorithm.12 A maximum of 5000 iterations and a target ASE of 0.009 were used in all runs. The training results are shown in Table 1. Networks trained by the LM algorithm clearly gave the best approximations, so the LM algorithm was selected to train the neural networks in this work; a training sketch follows below.

The validation results for different numbers of hidden neurons are summarized in Figure 3, and the average relative errors of the geometric means and standard deviations are given in Tables 2 and 3. The network with eight hidden neurons recovers the log-normal distributions most accurately: it has the lowest average relative error in determining the mean for the test data (Table 2) and an acceptably low average relative error in the standard deviation (Table 3). The network training procedure finished in 15 min on a 500 MHz Pentium processor.
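The 4-input, 2-output network of eqs 4 and 5 and its batch training can be sketched as follows. This is a minimal stand-in, not the authors' implementation: it uses SciPy's Levenberg-Marquardt least-squares solver in place of the LM variant of ref 12, and it assumes the two targets (Dm, σ) have been scaled into (0, 1) to match the sigmoid output layer:

```python
import numpy as np
from scipy.optimize import least_squares

n_in, n_h, n_out = 4, 8, 2  # four turbidity ratios in, (Dm, sigma) out, eight hidden units

def unpack(theta):
    k = (n_in + 1) * n_h
    V = theta[:k].reshape(n_h, n_in + 1)   # hidden-layer weights, bias in column 0
    W = theta[k:].reshape(n_out, n_h + 1)  # output-layer weights, bias in column 0
    return V, W

def forward(theta, X):
    """Eqs 4 and 5 with sigmoid hidden and output activations."""
    V, W = unpack(theta)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    ones = np.ones((X.shape[0], 1))
    H = sig(np.hstack([ones, X]) @ V.T)     # hidden layer, eq 4
    return sig(np.hstack([ones, H]) @ W.T)  # output layer, eq 5

def train_lm(X, Y, seed=0):
    """Batch Levenberg-Marquardt fit of all weights to the training set."""
    rng = np.random.default_rng(seed)
    theta0 = rng.normal(scale=0.5, size=(n_in + 1) * n_h + (n_h + 1) * n_out)
    fit = least_squares(lambda t: (forward(t, X) - Y).ravel(), theta0,
                        method="lm", max_nfev=5000)
    return fit.x
```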


Figure 3. Validation results. Effect of varying the number of hidden neurons.

Table 2. Validation Results for Geometric Means (nm)

               (network output for seven to eleven hidden neurons)
desired output | seven  | eight  | nine   | ten    | eleven
120            | 100    | 178.97 | 153.6  | 174.39 | 237.39
184.79         | 100    | 234.56 | 426.21 | 638.24 | 100.08
284.57         | 100    | 268.24 | 186.04 | 286.71 | 320.52
438.23         | 100.13 | 407.93 | 367.53 | 474.72 | 259.96
674.85         | 159.38 | 611.4  | 557.82 | 423.28 | 640.06
1039.2         | 676.93 | 1209.4 | 1211.3 | 777.6  | 941.65
1600           | 1483.1 | 1073.5 | 100.16 | 1496.8 | 1283.2
2464           | 2272.7 | 2672.4 | 3442.5 | 2535.7 | 2671
3795.2         | 5471.8 | 6373.3 | 5335.9 | 6280.1 | 4155.2
5844.4         | 6741.1 | 6850.6 | 6921.6 | 6736.4 | 6868.3
9000           | 7559.8 | 7237.2 | 8708.1 | 7277.9 | 9467.6
average relative error^a | 36.9% | 23.7% | 39.9% | 43.0% | 24.7%

^a Average relative error = (1/N) Σ_{i=1}^{N} |desired_output(i) − network_output(i)| / desired_output(i).

Table 3. Validation Results for Geometric Standard Deviations

               (network output for seven to eleven hidden neurons)
desired output | seven  | eight  | nine   | ten    | eleven
1              | 1.0419 | 1.1186 | 1.0121 | 1.0619 | 1.0239
1.05           | 1.1756 | 1.0677 | 1.0297 | 1      | 1.0382
1.1            | 1.0128 | 1.0359 | 1.0624 | 1.0768 | 1.0041
1.15           | 1.2433 | 1.1386 | 1.1274 | 1.1207 | 1.0048
1.2            | 1.1339 | 1.1834 | 1.1857 | 1.1878 | 1.1718
1.25           | 1.2716 | 1.1693 | 1.2497 | 1.2643 | 1.2771
1.3            | 1.2219 | 1.3119 | 1.2896 | 1.2708 | 1.349
1.35           | 1.3139 | 1.3755 | 1.3535 | 1.3248 | 1.3529
1.4            | 1.4188 | 1.3632 | 1.3515 | 1.3711 | 1.3772
1.45           | 1.3523 | 1.3549 | 1.3691 | 1.3607 | 1.3606
1.5            | 1.2981 | 1.3402 | 1.3514 | 1.3387 | 1.3201
average relative error | 6.33% | 6.24% | 2.71% | 3.71% | 4.83%

4.2. Non-Log-Normal Distribution Recovery. Simulations were also done to test how well the neural network recovers non-log-normal distributions, because in many applications the precise form of the distribution is not accurately known before making measurements. The network that had been trained to recover log-normal distributions was used to approximate a non-log-normal distribution function, produced by combining two different log-normal distributions:

f_N(D) = \gamma f_1(D) + (1 - \gamma) f_2(D) \qquad (8)

where fN(D) is a composite non-log-normal distribution, γ ∈ (0, 1) is a selected constant, and f1(D) and f2(D) are different log-normal distributions characterized by parameters Dm1, σ1 and Dm2, σ2. An equivalent log-normal distribution approximation of the composite distribution fN(D) is characterized by

D_{Nm} = \exp\left( \frac{\int_0^\infty f_N(D) \ln D\, \mathrm{d}D}{\int_0^\infty f_N(D)\, \mathrm{d}D} \right) \qquad (9)

and

\sigma_N = \exp\left( \sqrt{ \frac{\int_0^\infty f_N(D)\, [\ln D]^2\, \mathrm{d}D}{\int_0^\infty f_N(D)\, \mathrm{d}D} - (\ln D_{Nm})^2 } \right) \qquad (10)

where DNm and σN are the particle geometric mean diameter and geometric standard deviation of the equivalent log-normal distribution.
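Eqs 9 and 10 reduce to three quadratures; a sketch (ours, using the helpers defined earlier):

```python
def equivalent_lognormal(f_N, D):
    """Eqs 9 and 10: equivalent log-normal parameters of an arbitrary f_N."""
    w = np.trapz(f_N(D), D)                        # normalization integral
    mu = np.trapz(f_N(D) * np.log(D), D) / w       # mean of ln D
    var = np.trapz(f_N(D) * np.log(D)**2, D) / w - mu**2
    return np.exp(mu), np.exp(np.sqrt(var))        # D_Nm, sigma_N

# Composite distribution of eq 8 with the parameters of Figure 4a:
gamma = 0.7
f_N = lambda D: gamma * lognormal(1000.0, 1.1)(D) + (1 - gamma) * lognormal(800.0, 1.3)(D)
# equivalent_lognormal(f_N, np.logspace(2, 4, 4000)) should give roughly (935, 1.214)
```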

Four different non-log-normal distributions and their corresponding equivalent log-normal distributions are shown in Figure 4a-d; their parameters are given in Table 4, together with the neural network estimates obtained from the input turbidities of the non-log-normal distributions.

Table 4. Non-Log-Normal Distribution Estimation Results

Figure 4 panel                               | a     | b     | c     | d
non-log-normal distribution: Dm1 (nm)        | 1000  | 1000  | 500   | 500
non-log-normal distribution: σ1              | 1.1   | 1.1   | 1.2   | 1.2
non-log-normal distribution: Dm2 (nm)        | 800   | 800   | 1000  | 1000
non-log-normal distribution: σ2              | 1.3   | 1.3   | 1.2   | 1.2
non-log-normal distribution: γ               | 0.7   | 0.3   | 0.15  | 0.85
equivalent log-normal distribution: DNm (nm) | 935   | 855   | 901   | 555
equivalent log-normal distribution: σN       | 1.214 | 1.281 | 1.360 | 1.360
neural network estimation: D (nm)            | 897   | 834   | 953   | 790
neural network estimation: σ                 | 1.179 | 1.264 | 1.158 | 1.430

When the shapes of the non-log-normal test distributions are compared with the neural network estimates in Figure 4, the estimates approximate the equivalent log-normal distributions quite closely for the monomodal distributions (Figure 4a,b). For the bimodal distributions (Figure 4c,d), the neural network approximation was unsatisfactory. It appears that the leading parameters (mean and standard deviation) of a monomodal non-log-normal distribution can be retrieved reasonably accurately using a network trained on log-normal distributions, but multimodal distributions are not well recovered.

Figure 4. Non-log-normal distribution estimation results.

5. Experiments

Nontoxic, nonhazardous silica was chosen as the particulate material for the preparation of suspensions. The silica used was a spherical (minimum sphericity 0.98) powder with known size distributions, D1 = 193 nm, σ1 = 1.5 and D2 = 1000 nm, σ2 = 1.45 (measured by laser diffraction). Suspensions were prepared by measuring known masses of silica directly into volumetric flasks and making up to volume with deionized water, giving samples of known concentration. Care was taken to minimize exposure of the dry sample to the atmosphere to ensure that the mass taken was that of the particles only. Extinction was measured using a DR/4000 spectrophotometer (Hach Co.) over the wavelength range from 300 to 900 nm, at 2 nm intervals; each scan across the wavelength range was completed in less than 2 min. To prevent agglomeration of the particles, each freshly prepared sample was subjected to a period of ultrasound (approximately 10 min), which broke down any aggregates. The suspension of known concentration was then placed in a quartz spectrophotometer cell, and its absorbance was measured and recorded. This procedure was carried out five times for each sample to ensure accuracy and consistency of the results; the consistency across these repeated measurements indicated that settling and agglomeration were not significant over the duration of the measurements.

Figure 5. Spectral extinction with different concentrations for the particle size distributions: (a) D1 = 193 nm, σ1 = 1.5; (b) D2 = 1000 nm, σ2 = 1.45.

Figure 5 shows the results of measurements of each particle size distribution at different concentrations. The curves of turbidity ratio vs wavelength for the different concentrations, together with the simulation results, are shown in Figure 6. For a given particle size distribution, the turbidity ratio curves are almost identical at different concentrations, confirming that the turbidity ratio is determined by the particle size distribution and is independent of the suspension concentration for these dilute suspensions. At the same time, the simulated spectra match the measured spectra well. Thus, it is valid to use the neural network trained on the simulation data to recover the silica particle size distributions. Good predictive results from the neural network are shown in Table 5 for these size distributions.

Figure 6. Comparison of simulated turbidity spectra with experimental measurements: (a) D1 = 193 nm, σ1 = 1.5; (b) D2 = 1000 nm, σ2 = 1.45.

Table 5. Experimental Estimation Results

                          | group 1 mean (nm) | group 1 deviation | group 2 mean (nm) | group 2 deviation
distribution parameter    | 193    | 1.5   | 1000  | 1.45
neural network estimation | 216    | 1.472 | 1165  | 1.366
average relative error    | 11.92% | 1.87% | 16.5% | 5.79%

6. Discussion and Conclusion

In this paper, a novel method to determine particle size distributions from several spectral extinction measurements using neural networks is proposed. Simulations and experiments have illustrated that it is feasible to use neural networks to obtain the parameters of a particle size distribution from turbidity measurements. Although the neural network was trained using log-normal distribution data, it can also be used to recover the leading parameters of some non-log-normal distributions. The method has the advantages of simplicity of use, independence of suspension concentration over a range of dilute concentrations, and suitability for online particle size analysis.

Compared to established methods1-7 of inverting the Fredholm integral equation, neural networks supply solutions essentially instantaneously and without the need for empirical parameters. Simulations to train neural networks rely on accurate knowledge of the particle and dispersant refractive indices over the wavelength spectrum. For some materials, these data are not readily available; however, training could then be performed using experimental measurements of well-characterized particle systems, avoiding the need to know the refractive index. Training a neural network is notoriously time-consuming, but through proper selection of the training algorithm the problem can readily be solved. Furthermore, once the neural network is trained, the particle size distribution can be obtained almost instantaneously, making the method appropriate for an online measurement environment.

To improve the practicability of the method, turbidity ratios at four different wavelengths were selected as the inputs of the neural network, making the method simple and effective. However, this restricts the method to low-concentration systems only. Training the neural network for high concentrations would require a substantial increase in the quantity of training data and in training time. Work is ongoing to identify the range of concentrations for which multiple scattering is significant and to investigate a method to recover particle size distributions at high concentrations.

Acknowledgment

The financial support of EPSRC (Grant GR/M54742) is gratefully acknowledged, as is the cooperation of colleagues on the project "Chemicals Behaving Badly" at the Centre for Molecular and Interface Engineering, Heriot-Watt University.

Literature Cited

(1) Crawley, G.; Cournil, M.; Benedetto, D. D. Size Analysis of Fine Particle Suspensions by Spectral Turbidimetry: Potential and Limits. Powder Technol. 1997, 91, 197.
(2) Dellago, C.; Horvath, H. On the Accuracy of the Size Distribution Information Obtained from Spectral Extinction/Scattering Measurements. J. Aerosol Sci. 1990, 21 (Suppl. 1), S155.
(3) Uthe, E. E. Particle Size Evaluations Using Multiwavelength Extinction Measurements. Appl. Opt. 1982, 21 (3), 454.
(4) Wilkinson, D.; Waldie, B. Characterisation of Particle Suspensions by Spectral Extinction: a Numerical Study. Powder Technol. 1991, 68, 109.
(5) Bassini, A.; Musazzi, S.; Paganini, E.; Perini, U.; Ferri, F.; Gilio, M. Optical Particle Sizer Based on the Chahine Inversion Scheme. Opt. Eng. 1992, 31 (5), 1112.
(6) Ferri, F.; Bassini, A.; Paganini, E. Modified Version of the Chahine Algorithm to Invert Spectral Extinction Data for Particle Sizing. Appl. Opt. 1995, 34 (25), 5829.
(7) Ye, M.; Wang, S.; Xu, Y. An Inverse Technique Devised from Modification of the Annealing-Evolution Algorithm for Particle Sizing by Light Scattering. Powder Technol. 1999, 104, 80.
(8) Blum, E. K.; Li, L. K. Approximation Theory and Feedforward Networks. Neural Networks 1991, 4 (4), 511.
(9) Gordon, S.; Hammond, R.; Roberts, K.; Savelli, N.; Wilkinson, D. On-line Measurement of Particle Size Distributions in a Batch Crystallisation Reactor Using Optical Spectroscopy. 14th International Symposium on Industrial Crystallization, Cambridge, U.K., 1999; Institution of Chemical Engineers: Rugby, U.K., 1999.
(10) Ishimaru, A.; Marks, R. J., II; Tsang, L.; Lam, C. M.; Park, D. C. Particle-Size Distribution Determination Using Optical Sensing and Neural Networks. Opt. Lett. 1990, 15 (21), 1221.
(11) Nascimento, C. A. O.; Guardani, R.; Giulietti, M. Use of Neural Networks in the Analysis of Particle Size Distributions by Laser Diffraction. Powder Technol. 1997, 90, 89.
(12) Hagan, M. T.; Menhaj, M. B. Training Feedforward Networks with the Levenberg-Marquardt Algorithm. IEEE Trans. Neural Networks 1994, 5 (6), 989.
(13) Krauter, U.; Riebel, U. Extinction of Radiation in Sterically Interacting Systems of Monodisperse Spheres. Part 2: Experimental Results. Part. Part. Syst. Charact. 1995, 12, 132-138.


(14) Kerker, M. The Scattering of Light and Other Electromagnetic Radiation; Academic Press: New York, 1969.
(15) Washburn, E. W.; et al. International Critical Tables of Numerical Data, Physics, Chemistry and Technology; McGraw-Hill Book Co. Inc.: New York, 1992.
(16) Bohren, C. F.; Huffman, D. R. Absorption and Scattering of Light by Small Particles; Wiley: New York, 1983.
(17) Weir, M. K. A Method for Self-determination of Adaptive Learning Rates in Back Propagation. Neural Networks 1991, 4 (3), 371-379.

(18) Xu, X. P.; Burton, R. T.; Sargent, C. M. Experimental Identification of a Flow Orifice Using a Neural Network and the Conjugate Gradient Method. J. Dyn. Syst., Meas., Control 1996, 118 (2), 272-277.

Received for review September 18, 2000
Accepted July 23, 2001

IE000826+