
Langmuir 1985, 1, 496-501

Improved Techniques for Particle Size Determination by Quasi-Elastic Light Scattering

I. D. Morrison* and E. F. Grabowski

Webster Research Center, Xerox Corporation, Webster, New York 14580

C. A. Herb

Technical Center, Owens-Corning Fiberglas Corporation, Granville, Ohio 43023

Received January 3, 1985. In Final Form: March 22, 1985

The measurement of the time dependence of light scattered from a collection of moving particles, commonly referred to as quasi-elastic light scattering (QELS), provides information about the distribution of particle velocities. For monodisperse particles undergoing Brownian motion, the time dependence of the autocorrelation of the scattered light is a simple exponential function of the hydrodynamic radius of the particle. For polydisperse suspensions undergoing Brownian motion, the time dependence of the autocorrelation of the scattered light is a function of the particle-size distribution and can be expressed either as a Fredholm integral or a sum of exponentials. This paper describes improvements in the methods of data acquisition and analysis that significantly increase the resolution of the technique. These improvements enable accurate determinations and smooth representations of both bimodal and broad unimodal distributions without operator intervention or a priori information about the particle-size distribution.

Introduction

The determination of particle-size distributions of dispersions by quasi-elastic light scattering (QELS) is becoming an increasingly popular technique with the availability of several new commercial instruments capable of both gathering and analyzing data.1 The use of QELS to determine particle size is based on the measurement, via the autocorrelation of the time dependence of the scattered light, of the diffusion coefficients of suspended particles undergoing Brownian motion. The measured autocorrelation function, G^(2)(τ), is given by

G^(2)(τ) = A[1 + β|g^(1)(τ)|²]        (1)

where A is the base line, β is an equipment-related constant, and g^(1)(τ) is the normalized first-order autocorrelation function. The base-line constant, A, can be obtained from the long-time asymptote of the measured autocorrelation function or from the square of the average photon flux. The measured autocorrelation function of scattered light intensity as a function of delay time is easily transformed into the normalized first-order autocorrelation function, g^(1)(τ), which depends upon the delay time, τ, the scattering vector, k, and the particle-size distribution, as shown in eq 2a in integral form and in eq 2b in algebraic form.

g^(1)(τ) = ∫_0^∞ F(Γ) exp(−Γτ) dΓ        (2a)

g^(1)(τ) = Σ_i a_i exp(−Γ_i τ)        (2b)

where F(Γ) = normalized distribution function of the decay constants Γ, a_i = probability of a scattering center with decay constant Γ_i, Γ = decay constant = k²D, k = scattering vector = (4πn/λ) sin(θ/2), n = refractive index of the medium, λ = wavelength of the incident light, θ = scattering angle, and D = particle diffusion constant. When data are analyzed by means of the algebraic equation (2b), the resulting distribution is a histogram described by a probability for each decay constant, expressed by the vector set (a_i, Γ_i).

When data are analyzed by means of the continuous equation (2a), the resulting distribution is the probability density function, F(Γ). The function F(Γ) or the constants a_i are converted from light intensities to mass fractions by the use of the appropriate Mie corrections, and from distributions of decay constants to distributions of particle sizes by the Stokes-Einstein expression for the diffusion coefficient, D = kT/(3πηd), where k = Boltzmann constant, T = temperature, d = particle diameter, and η = viscosity.

The experimental problem is to measure the autocorrelation function as accurately as possible. Practically, this means setting the time scale for the correlator so that the decay in the autocorrelation function at the shortest delay times is sensitive to the smallest particles present, setting the longest delay times so that the correlation function is sensitive to the largest particles, and measuring the data for as long as possible to improve the photon counting statistics. The theoretical problem is to analyze the data by inverting eq 2a or 2b numerically to obtain the particle-size distribution. Practically, this means developing a numerical procedure that requires minimal operator input, that can clearly differentiate multimodal distributions from broad unimodal ones, that applies both to narrow and to broad particle-size distributions, and that is stable against random experimental error. We propose an experimental technique and an improved numerical technique which, when combined, give a procedure for determining particle-size distributions by QELS that we believe meets these requirements and requires no a priori information about the size distribution.

Improved Techniques in Data Analysis. For narrow size distributions, the autocorrelation function is satisfactorily analyzed by the method of cumulants to give the moments of the particle-size distribution.2 However, the analysis of QELS data for polydisperse or multimodal distributions is a much more difficult problem and remains an area of active research.3 Figure 1 shows the semilogarithmic plot of the normalized first-order autocorrelation function, g^(1)(τ), for a mixture of two monodisperse polystyrene standards (60 and 166 nm in diameter) vs. delay time.
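As a concrete illustration of the quantities defined above (scattering vector, Stokes-Einstein diffusion coefficient, and decay constant Γ = k²D), the following sketch converts an assumed particle diameter into a decay constant. The numerical values (HeNe wavelength, water at 25 °C, 90° scattering) are illustrative assumptions, not conditions taken from this paper.

```python
import numpy as np

wavelength = 632.8e-9   # m, HeNe laser (assumed for illustration)
n_medium = 1.33         # refractive index of water
theta = np.pi / 2       # 90 degree scattering angle
T_kelvin = 298.15       # 25.0 C
viscosity = 0.89e-3     # Pa*s, water at 25 C
kB = 1.380649e-23       # J/K, Boltzmann constant

# Scattering vector k = (4*pi*n/lambda) * sin(theta/2), in 1/m
k = 4 * np.pi * n_medium / wavelength * np.sin(theta / 2)

def decay_constant(diameter_m):
    """Decay constant Gamma = k^2 * D, with D from Stokes-Einstein, D = kT/(3*pi*eta*d)."""
    D = kB * T_kelvin / (3 * np.pi * viscosity * diameter_m)
    return k ** 2 * D

print(decay_constant(60e-9))   # decay constant (1/s) for a 60 nm particle
```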

(1) Brookhaven Instruments Corp., Ronkonkoma, NY. Nicomp Instruments, Santa Barbara, CA. Malvern Instruments Ltd., Malvern, Worcestershire, UK. Coulter Electronics, Hialeah, FL.
(2) Koppel, D. E. J. Chem. Phys. 1972, 57, 4814-4820.
(3) Dahneke, B. E., Ed. "Measurement of Suspended Particles by Quasi-Elastic Light Scattering"; Wiley-Interscience: New York, 1983.




Figure 1. Logarithm of the normalized first-order autocorrelation function vs. delay time for a mixture of two monodisperse polystyrene standards, 60 and 166 nm.

The autocorrelation function is a simple sum of two exponentials (see eq 2b). If the ratio of the particle sizes is large, the semilogarithmic plot will show two linear regions. Since the two sizes of particles used to obtain the data in Figure 1 are similar, the sum of the exponentials is a smooth curve. It is easy to see that a single exponential would not be able to describe these data within experimental error. The numerical problem is to fit the data with a sum of exponentials.

The ideal inversion technique should be capable of analyzing for the particle-size distribution without a priori information. A natural procedure is to choose a large number of feasible particle sizes, calculate their corresponding decay constants, and find the particle-size distribution by the least-squares fit of the QELS data to eq 2b for the assumed particle sizes. The expectation is that only those sizes that are actually in the dispersion will appear in the final distribution. However, this class of mathematical problems is known to be ill-conditioned (also called ill-posed); that is, small variations in the experimental data lead to large variations in the calculated particle-size distributions even if the solution sought is the least-squares fit to the data.4 The ill-conditioned nature of the problem manifests itself most clearly by giving negative components to the size distribution. One remedy for avoiding these physically impossible negative solutions is to reduce the ill-conditioning by limiting the number of different assumed particle sizes. Unfortunately, what has been found is that, to avoid ill-conditioned inversions, the number of assumed particle sizes has to be so few that the size resolution is poor.

Pike and co-workers5 showed how to maximize the particle-size resolution of this ill-conditioned calculation. Their idea is that any reasonable set of assumed particle sizes forms a basis vector set, within experimental error, for the inversion. The instability of the matrix inversion algorithms limits the number of assumed sizes, but the same data can be analyzed repeatedly with different sets of assumed particle sizes, i.e., equivalent but different basis sets.

(4) Tikhonov, A. N.; Arsenin, V. Y. "Solutions of Ill-Posed Problems"; John, F., Translator; Wiley: New York, 1977.
(5) McWhirter, J. G.; Pike, E. R. J. Phys. A: Math. Gen. 1978, 11, 1729-1745.

The repeated analysis of the same data with equivalent bases gives a set of equally likely particle-size distributions. The average of these equally likely size distributions is a more probable representation of the real size distribution than any single one of them. (Combining N histogram distributions of the form (a_i, Γ_i), each with a different set of Γ_i, is most simply done by replacing the probability of scattering centers, a_i, with a_i/N and using all the Γ_i in the histogram.) This method of data analysis gives smoother particle-size distributions but is still severely limited in size resolution because the ill-conditioned inversions still limit each analysis to, at best, five or six assumed particle sizes at a time; otherwise, unreal, negative components of the particle-size distribution are obtained.

We believe the essential element missing from this approach is that the constraint of nonnegativity of the size distribution has not been taken advantage of during the search for the least-squares fit to the data. The nonnegative constraint has been used to advantage in other numerical problems. The algorithm CAEDMON, used to compute the heterogeneities of a solid substrate from measurements of the physical adsorption of gases, where no assumed surface energy can be associated with a negative surface area, is an early and successful example of precisely this approach.6 The use of the nonnegative constraint during the numerical analysis permits the use of a large number of assumed energy levels. The analysis of particle-size distributions from QELS is equivalent: the size distribution can have no negative components, and it is advantageous to work with large numbers of assumed sizes in order to attain good size resolution. Another example of the use of nonnegative constraints during numerical analysis is the economic problem of how to maximize profits given constraints with respect to number of employees, raw materials, equipment, time, and product mix. It is perfectly obvious in this problem that the solution for maximizing profit could not include a negative component of the product mix. The methods to find nonnegative solutions to this class of problems have been well studied, are numerically stable, and are able to handle large numbers of variables.7 Particularly important is that the numerical routines developed to solve the economic problem, called simplex methods, are guaranteed to converge in a finite number of iterations.

To implement the method of nonnegatively constrained least squares, we choose the method of Lawson and Hanson,7 called NNLS (for nonnegative least squares), to solve eq 2b by least squares given the autocorrelation data at a set of delay times, τ, and a set of assumed particle sizes (for which the corresponding decay constants are easily calculated from the scattering vector and the diffusion coefficient). The major advantages in using the nonnegatively constrained algorithms are these: the nonnegative constraint applied during the calculations substantially reduces the ill-conditioned nature of the inversion, since physically unreal, negative distributions are never considered, and the least-squares search must converge in a finite number of iterations; with the more stable inversion algorithm, a greater number of unknowns can be used, and the potential particle-size resolution increases.
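A minimal sketch of this idea, combining a nonnegatively constrained fit of eq 2b with the multiple-basis-set averaging described above, is given below. It uses scipy.optimize.nnls, which implements the Lawson-Hanson NNLS algorithm; the quadratic grid spacing of the decay constants, the way the grids are shifted between passes, and the synthetic two-exponential data are illustrative choices, not details taken from the paper or its FORTRAN program.

```python
import numpy as np
from scipy.optimize import nnls

def fit_g1(tau, g1, gammas):
    """Nonnegative amplitudes a_i for g1(tau) ~ sum_i a_i * exp(-gamma_i * tau)."""
    A = np.exp(-np.outer(tau, gammas))   # design matrix of decaying exponentials
    amps, _residual = nnls(A, g1)
    return amps

def multipass_fit(tau, g1, gamma_min, gamma_max, n_sizes=20, n_passes=5):
    """Analyze the same data with several shifted grids of assumed decay
    constants and combine the histograms, weighting each amplitude by 1/N."""
    all_gammas, all_amps = [], []
    for p in range(n_passes):
        # Shift each quadratically spaced grid slightly so the basis sets differ.
        frac = (np.arange(n_sizes) + (p + 1) / (n_passes + 1)) / n_sizes
        gammas = gamma_min + (gamma_max - gamma_min) * frac ** 2
        amps = fit_g1(tau, g1, gammas)
        all_gammas.append(gammas)
        all_amps.append(amps / n_passes)   # a_i -> a_i / N
    return np.concatenate(all_gammas), np.concatenate(all_amps)

# Synthetic example: a sum of two exponentials, roughly in the spirit of Figure 1.
tau = np.linspace(15e-6, 3.5e-3, 136)
g1 = 0.7 * np.exp(-2900.0 * tau) + 0.3 * np.exp(-1100.0 * tau)
gammas, amps = multipass_fit(tau, g1, gamma_min=200.0, gamma_max=2.0e4)
```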
(6) Ross, S.; Morrison, I. D. Surf. Sci. 1975, 52, 103-119. Sacher, R. S.; Morrison, I. D. J. Colloid Interface Sci. 1979, 70, 153-166.
(7) Lawson, C. L.; Hanson, R. J. "Solving Least Squares Problems"; Prentice-Hall: Englewood Cliffs, NJ, 1974. A FORTRAN program and subroutines called NNLS.


The exact number of particle sizes assumed is not critical; nevertheless, several limits need to be kept in mind. Taking too small a number of particle sizes restricts the resolution of the particle-size analysis. As the number of assumed particle sizes is taken to be larger, the size resolution increases, but the time required for numerical computation also increases. As the potential resolution of the analysis increases, the benefit of more accurate data is realized. More accurate data are obtained by accumulating data for longer times. For the routine use of QELS for particle sizing in our laboratories, we accumulate data for a single autocorrelation function for 10 min to an hour and analyze each data set with 10-25 simultaneous, assumed particle sizes. The length of time necessary for accumulating data is primarily determined by the dynamic range of the digital correlator; the important parameter is the number of photons counted for each data set. The largest and smallest detectable particle sizes in any experiment are determined by the experimental conditions: the minimum detectable size is determined by the shortest delay times, and the largest detectable size by the longest delay times. Details of how we choose the size limits have been given elsewhere.8 If, after an initial analysis, we find that this size range is larger than needed for the sample, as made evident by no particles detected near the limits of the assumed size range, a subsequent analysis is run with narrower limits. The assumed particle sizes can be distributed throughout the size range in any reasonable manner; for the analyses reported below, we used quadratic spacing.8 We have used logarithmic spacing, as others do,5,9 as well as linear spacing in diameter, and found no significant differences.

The solution of eq 2b by the use of nonnegatively constrained least squares has been demonstrated over a wide range of experimental conditions.8 Analyzing data with the nonnegatively constrained algorithms gives much better size resolution than does analysis with the unconstrained inversions. However, the technique of using nonnegative least-squares analysis alone has had two shortcomings: (a) for a single data set, correlated experimental error can be interpreted as a small spurious peak, and (b) for broad distributions, this analysis tends to give a set of separated sizes instead of a continuous broad peak.

Another approach to the solution of ill-conditioned problems, of which the determination of particle-size distributions from QELS data is an example, is to construct appropriate (not necessarily least-squares) solutions that fit the data within experimental error but that are also stable to small changes in the data. This approach is based on the use of a regularizing operator.4 The regularizing operator is an extra constraint on the nature of the "best" solution to the ill-conditioned problem. Provencher9 (with a computer program named CONTIN) has implemented a method to construct stable solutions to ill-conditioned problems by choosing the regularizing condition that the solution must be "smooth". After first obtaining the nonnegatively constrained least-squares fit to the QELS data (CONTIN also uses NNLS), the distribution is iteratively "smoothed" by minimizing the second derivative of the size distribution while maintaining a good fit to the data. The rationale for "smoothing" the size distribution as the regularizing operation is that the information content of the distribution is thus reduced, leading to a more parsimonious result still consistent with the data.
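The kind of smoothness regularization applied at this stage can be sketched as follows. This is only a schematic of a second-derivative (Tikhonov-type) penalty added to a nonnegative least-squares fit, not the CONTIN program itself, and the penalty weight lam is an arbitrary illustrative parameter.

```python
import numpy as np
from scipy.optimize import nnls

def regularized_nnls(A, b, lam):
    """Nonnegative least squares with a second-difference smoothness penalty:
    minimize ||A a - b||^2 + lam * ||L a||^2 subject to a >= 0."""
    n = A.shape[1]
    L = np.zeros((n - 2, n))
    for i in range(n - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]      # discrete second-derivative operator
    A_aug = np.vstack([A, np.sqrt(lam) * L])  # augmented design matrix
    b_aug = np.concatenate([b, np.zeros(n - 2)])
    amps, _residual = nnls(A_aug, b_aug)
    return amps
```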
Our experience with CONTIN has been that the smoothing of the distribution works well when smooth distributions are expected.

(8) Grabowski, E. F.; Morrison, I. D., in ref 3.
(9) Provencher, S. W. Comput. Phys. Commun. 1982, 27, 213-227, 229-242.

However, the method works less well when multimodal distributions are expected. For instance, in the initial stages of the flocculation of model monodisperse dispersions, the most parsimonious description is two separated monodisperse peaks corresponding to the singlets and doublets, not a smooth distribution. Small or narrow peaks in a multipeaked distribution are unnecessarily minimized with respect to large ones because a small or narrow peak necessarily has a larger second derivative than a large or broad one. In essence, parsimony as measured by the "smoothness" of the size distribution is not always appropriate. (A second disadvantage of this technique is that the computer requirements are large. For the month in which we analyzed numerous data sets with CONTIN, our computer bill was just under $10,000.)

The improvement in data analysis we are reporting in this paper is that the nonnegatively constrained least-squares analysis can be regularized by including the proposal made by Pike and co-workers for enhancing the unconstrained least-squares analysis: that is, analyze each data set several times with different sets of assumed particle sizes and combine the results of each analysis. The averaging of equally likely solutions to the ill-conditioned problem is the regularizing operation. The average of equivalent solutions contains less detail than any of the individual solutions. The combination of the techniques of nonnegatively constrained least-squares analysis and multiple-pass analysis of a single data set significantly improves the representation of multimodal distributions and of unimodal distributions that are not too broad, owing to the "regularizing" effect of multiple analyses while retaining the size resolution enabled by the explicit use of nonnegative constraints. What is particularly significant is that the combination of the two ideas enhances the QELS analysis without introducing any new assumptions or a priori information, not even that the distributions must be smooth.

One problem remaining is that some of the error in the measured autocorrelation function is systematic (as described in the next section), and these powerful inversion techniques interpret nonrandom error as spurious particle sizes. Another problem is that the multiple-pass analysis does not remove the tendency to produce multimodal distributions for samples that actually have broad unimodal distributions. We have developed improvements in experimental technique to reduce or eliminate these problems.

Improved Experimental Techniques. The manufacturers of digital autocorrelators have made great progress in equipment design. In particular, they have been able to provide several important improvements: multiple data bits per channel, which enables greater input light intensity and hence greater accuracy in the autocorrelation function for the same time of data acquisition; data acquisition over nonlinear time scales; and more accurate measurements of the long-delay-time base lines. A careful explanation of the necessary experimental conditions for obtaining QELS data has been given by Ford.10 Ignoring instrument errors such as misalignment or scratches, the accuracy of the measurements should go roughly as the inverse of the square root of the cumulative light intensity. Obviously, the longer the data acquisition time, the less the statistical error in the measurement. However, because of the circuit logic by which correlators are constructed,10 the errors in intensity at consecutive delay times, i.e., subsequent channels, are not independent.
However, because of the circuit logic by which correlators are constructed,1° the errors in intensity of consecutive delay times, i.e., subsequent channels, are not independent. (10) Ford, N. C., in ref 3.

This is easily seen in the initial moments of data acquisition by the propagation of ripples through the autocorrelation function. A transient thermal or mechanical perturbation or a stray piece of dust causes short-duration intensity fluctuations which are propagated through the autocorrelation function systematically. Taking data for long times reduces these effects but, even when data are collected for extremely long times, the autocorrelation function contains vestiges of these random events. What we have found is that the effects of these nonrandom errors are reduced if multiple data sets on the same sample are taken, each data set is analyzed independently, and all of the resulting size distributions are averaged. That is, rather than run a single experiment for a long time, run several shorter ones, analyze each separately, and average the distributions. This means that the average of the constrained inverses of individual autocorrelation functions is a more probable representation of the real particle-size distribution than the constrained inverse of the average of several autocorrelation functions. This can be explained by the fact that any correlation in the errors of a single autocorrelation function will be inappropriately interpreted by a sufficiently powerful inversion algorithm, but the chance of the same random correlation of residuals in repeated experiments is small, and if any occur, they quickly average out. Recurring particle sizes will be reinforced while occasional spurious peaks will be diminished. If the true distribution is indeed bimodal, a gap will remain between the peaks after the averaging. If, however, the distribution is a wide unimodal peak, the positions of the separated peaks will vary from one data set to another, thus filling in to a smooth unimodal peak. We analyze each data set with the same set of assumed decay constants, so we take the probability of a scattering center with decay constant Γ_i to be just the average of the probabilities determined from each analysis.

Experimental error in the measured autocorrelation functions may broaden the calculated distribution beyond the real particle-size distribution. Thus, if more resolution is desired, the quality of the autocorrelation function must be increased. If the analysis shows a wide unimodal distribution, the best way to ensure that the sample has a wide unimodal rather than a bimodal distribution is to increase the quality of the autocorrelation functions. This implies longer data collection times. While rapid particle-size distribution analysis is desirable (and is touted as a prime advantage of some commercially available instruments), our experience has been that an increase in the reliability of particle-size determinations is worth the wait.

Proposal for Improved QELS Techniques. Therefore, we propose that substantial improvements in the particle-size resolution of the analysis of QELS data, beyond the use of nonnegatively constrained algorithms, are obtained by (1) analyzing each measured autocorrelation function with multiple sets of assumed particle sizes to obtain a set of particle-size distributions and averaging the individual distributions and (2) taking repeated data sets on the same sample (each data set being taken for a significant time), analyzing each data set as described above, and averaging the final distributions. The nonnegative constraint gives the particle-size resolution.
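The averaging over repeated data sets described above reduces, in a sketch, to the few lines below: each data set is analyzed on the same grid of assumed decay constants, and the resulting probabilities are averaged. The function analyze_one_dataset is a hypothetical placeholder for the multi-pass nonnegatively constrained analysis, not a routine from the paper.

```python
import numpy as np

def average_distributions(datasets, gammas, analyze_one_dataset):
    """datasets: list of (tau, g1) measurements on the same sample.
    Returns the mean amplitude at each assumed decay constant (same grid for all sets)."""
    per_set = [analyze_one_dataset(tau, g1, gammas) for tau, g1 in datasets]
    return np.mean(np.stack(per_set), axis=0)
```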
The multiple analysis of each data set regularizes the size distribution. The analysis of multiple data sets reduces or eliminates spurious peaks and eliminates the tendency to report multiple peaks when the sample actually has a broad unimodal distribution. Data collection for a significant time enables accurate particle-size determinations. As should be perfectly clear, none of these proposals includes any new assumption about the nature of the distributions.

The accuracy of the particle-size distribution results now depends upon the patience of the experimenter and not upon any a priori assumption about the nature of the solution.

To illustrate the significance of these proposals, we have analyzed data from three types of experiments: (1) a known bimodal mixture of monodisperse particles analyzed repeatedly for short durations, (2) the same bimodal mixture analyzed repeatedly for long durations, and (3) an emulsion with a broad size distribution. The results of the first experiment demonstrate the power of the technique in clearly separating peaks in a bimodal distribution. A comparison of the results of the first and second experiments demonstrates the increased accuracy in size analysis when more accurate data are obtained (by taking data for longer times). The third experiment demonstrates that this new technique can also analyze broad distributions. The ability to differentiate between broad unimodal and multimodal distributions requires no operator input or a priori information.

Experimental Section

The scattering experiments were carried out on a BI240 light-scattering goniometer from Brookhaven Instruments Corp. This includes the necessary focusing optics, sample cell assembly, beam stop, detection optics, photomultiplier tube, amplifier/discriminator, and the goniometer base itself. The light source for the instrument is a Spectra-Physics Model 124B, 15-mW, linearly polarized HeNe laser. The laser and optical components are all mounted on a Newport Research Corp. vibration isolation table. The index matching fluid used to surround the sample cell is Cargille Fluid S1056 (blended siloxanes with a refractive index of 1.45). A Millipore peristaltic pump is used to circulate and filter the index matching fluid through a 0.45-µm membrane filter to remove any dust or particles that might interfere with the scattering experiment. A Neslab RTE-5DD refrigerated, circulating bath is used to maintain a constant temperature in the sample cell and surrounding liquid. The signal from the amplifier/discriminator is sent to a BI2020 correlator, also from Brookhaven Instruments. The correlator has 136 channels followed by 4 base-line channels starting at 1024 sample times. The data were acquired using the "multi-tau" feature of the instrument. That is, when the first 32 channels are assigned a sample time of T, channels 33-64 have a sample time of 2T, channels 65-96 have a sample time of 4T, and channels 97-136 have a sample time of 8T. This enables the instrument to cover a wide range of delay times in a single experiment without sacrificing the quality of the data for the important short delay times. The autocorrelation functions were stored on floppy disc by the BI2020 system and later transferred to a Hewlett-Packard 9836 microcomputer on which the distribution analyses were carried out.

The latices used were Polyscience polystyrene latex standards No. 8691 (d = 50 nm, σ = 5 nm) and No. 7304 (d = 166 nm, σ = 10 nm). Both are supplied as a 2.5% solids latex. The water used in all sample preparations was purified by a Millipore Milli-Q cartridge system. The sample was prepared by dispersing the appropriate weight of each latex in a filtered (0.2 µm) 5 mg/L aqueous solution of sodium dodecyl sulfate. The final dispersion was filtered through a 0.45-µm membrane filter directly into the scattering cell. The emulsion sample was prepared similarly to the latex mixture with the exception that there was no final filtration into the scattering cell. All measurements were made at 25.0 °C and at a scattering angle of 90°. The latex mixture sample contained 16 mg/L of the 50-nm particles and 3.5 mg/L of the 166-nm particles. The emulsion sample contained 20 mg/L of the dispersed phase.
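One plausible way to lay out the delay-time axis for the "multi-tau" channel scheme just described is sketched below. Assigning each channel a delay equal to the running sum of the channel sample times, and the spacing of the base-line channels, are assumptions made for illustration, not specifications of the BI2020.

```python
import numpy as np

T = 15e-6   # base sample time in seconds (15 microseconds, as in the short-time runs)

# 32 channels at T, 32 at 2T, 32 at 4T, and 40 at 8T (136 channels in all)
sample_times = np.concatenate([
    np.full(32, T), np.full(32, 2 * T),
    np.full(32, 4 * T), np.full(40, 8 * T),
])
delays = np.cumsum(sample_times)             # assumed delay of each correlator channel
baseline_delays = (1024 + np.arange(4)) * T  # 4 base-line channels starting at 1024 sample times
```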

Results


[Figure 2 plot: distribution vs. particle diameter (nm). In-figure legend: overall d = 72 nm, σ = 45 nm; first peak d = 49 nm, σ = 16 nm, 81% of mass.]

Figure 2. Short time data: the particle-size distribution determined for a 4.6:1 mix of 50- and 166-nm polystyrene latex standards; 21 autocorrelation functions were measured, each data run lasting 10 min (T = 15 µs), for a total of 3.5 h of data collection.

To demonstrate the power of this technique to correctly analyze known bimodal distributions, data for the 50-nm/166-nm latex mixture were acquired in two different ways: (1) "Short time data": 21 autocorrelation functions were measured, each data run lasting 10 min (sample time, T = 15 µs), for a total of 3.5 h of data collection. (2) "Long time data": 6 autocorrelation functions were measured, each data run lasting 6 h (sample time, T = 15 µs), for a total of 36 h of data collection. Each measured autocorrelation function was analyzed 5 times with five different sets of 20 assumed particle sizes, as described above. All light intensities were converted to volume fractions using the Mie correction factors.11

The resulting particle-size distribution for the "short" runs is shown in Figure 2. Note first that the two peaks are base-line separated. The first peak has a mean of 49 nm and a standard deviation of 16 nm and contains 81% of the mass. The second peak has a mean of 162 nm and a standard deviation of 21 nm and contains 19% of the mass, thus giving an experimentally measured mass ratio of 4.3:1. These values compare well to the actual values of a 50-nm diameter and 5-nm standard deviation for the first latex and a 166-nm diameter and 10-nm standard deviation for the second, with the exception that the measured standard deviations are 2 to 3 times larger than the actual values. In addition, the measured mass ratio of 4.3:1 compares nicely to the known ratio of 4.6:1.

If one is interested in even better resolution than is shown in Figure 2 (and can afford the data acquisition time), the technique is capable of providing it. This is demonstrated in Figure 3, where the results of the six "long" tests are shown. The first peak this time has a mean of 53 nm and a standard deviation of 7 nm and contains 79% of the mass. The second peak has a mean of 168 nm and a standard deviation of 12 nm and contains 20% of the mass, thus giving an experimentally measured mass ratio of 4.0:1. Note that the peak widths as measured by the standard deviations around each peak mean are now very close to their known values. As was mentioned above, the resolution of the technique is limited by the random error in the data, which results in a broadening of the peaks. Acquiring data for longer times decreases the random error, and the resulting peak widths are narrower, as they should be. The photon count rate was quite low for these examples. Data of equivalent quality can be obtained in about one-tenth the time by using a photon count rate of 2 photons per sample time.

(11) Bohren, C. F.; Huffman, D. R. "Absorption and Scattering of Light by Small Particles"; Wiley: New York, 1983. A FORTRAN program called BHMIE.
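The peak statistics quoted above (means, standard deviations, and mass fractions) can be computed from an averaged histogram as in the sketch below. The diameters and weights shown are made-up illustrative numbers, not the measured distribution.

```python
import numpy as np

# Illustrative bimodal histogram: diameters (nm) and mass fractions.
d = np.array([30, 40, 50, 60, 70, 140, 150, 160, 170, 180], dtype=float)
w = np.array([0.05, 0.20, 0.35, 0.15, 0.06, 0.03, 0.05, 0.07, 0.03, 0.01])
w = w / w.sum()

def peak_stats(d, w):
    """Mass-weighted mean, standard deviation, and mass fraction of one peak."""
    frac = w.sum()
    mean = (w * d).sum() / frac
    std = np.sqrt((w * (d - mean) ** 2).sum() / frac)
    return mean, std, frac

small = d < 100    # split the histogram at the valley between the two peaks
for label, mask in (("small peak", small), ("large peak", ~small)):
    mean, std, frac = peak_stats(d[mask], w[mask])
    print(f"{label}: mean {mean:.0f} nm, std dev {std:.0f} nm, {100 * frac:.0f}% of mass")
```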


Figure 3. Long time data: the particle-size distribution determined for a 4.6:1 mix of 50- and 166-nm polystyrene latex standards; six autocorrelation functions were measured, each data run lasting 6 h (T = 15 µs), for a total of 36 h of data collection.


Figure 4. Particle-size distribution for a typical industrial emulsion known to have a moderately wide distribution; 30 autocorrelation functions were measured, each data run lasting 16 2/3 min (T = 25 µs), for a total of 8 1/3 h of data collection.

One problem of the original constrained-least-squares approach8 was a tendency to produce multimodal histograms when the sample had a broad unimodal distribution. To demonstrate that this problem has been resolved by the application of these new techniques, a typical industrial emulsion known to have a moderately wide distribution was analyzed. Figure 4 shows the result of averaging 30 data sets, each data set taken for 16 2/3 min (T = 25 µs), for a total data acquisition time of 8 1/3 h. The distribution has a mean of 310 nm with a standard deviation of 85 nm, with emulsion particles ranging continuously from 55 nm up to almost 600 nm. (N.b. this is an order of magnitude wider than the peaks in Figure 3.) The distribution found is continuous and "smooth". (To check that the particle-size distribution was not narrower than this, longer data runs could be taken.) The slightly irregular shape of the right-hand side of the distribution is due mainly to the rapid variations of the Mie scattering corrections in this particle-size regime.

It is important to point out that the technique requires very little input from the operator. The program used for this work will run automatically with default values or will allow the operator to choose the upper and lower bounds on the particle size, the number of assumed particle sizes, and the number of times each data set is analyzed.

The greater the detail desired in the size distribution, the more accurate the data must be.

Conclusions

The resolution of particle-size distributions by the analysis of QELS data is significantly improved by (1) using a least-squares technique that utilizes the nonnegativity constraint as part of the iterative calculation, (2) analyzing each data run with several sets of assumed particle sizes and averaging the resulting particle-size distributions,


(3) taking several sets of data on the same sample, analyzing each set independently, and averaging the final distributions, and (4) taking data for a sufficient length of time. A FORTRAN implementation of this technique is available from the authors. With these improvements in both experimental technique and theoretical development, determining the correct particle-size distribution does not require operator intervention or a priori knowledge, only patience.
