Environ. Sci. Technol. 1996, 30, 2899-2905
Optimizing Composite Sampling Protocols

F. JAMES ROHLF*
Department of Ecology and Evolution, State University of New York at Stony Brook, Stony Brook, New York 11794-5245

H. RESIT AKÇAKAYA
Applied Biomathematics, 100 North Country Road, Setauket, New York 11733

STEVEN P. FERRARO
U.S. Environmental Protection Agency, Hatfield Marine Science Center, 2111 SE Marine Science Drive, Newport, Oregon 97365-5260
Composite samples are often used in environmental studies to reduce costs by decreasing the number of expensive tests that have to be performed. Statistical models for composite sampling are discussed, and procedures are given for determining the optimal number of primary sampling units to include in each composite sample. For the problem of comparison of means, methods are presented (a) to find the optimum sampling protocol that stays within a fixed budget and (b) to find the least costly sampling protocol that is still able to reliably detect a specified difference in means.
Introduction

Compositing is the process of pooling two or more primary sampling units (1). Samples obtained without compositing are sometimes called "discrete" or "grab" samples. Examples of primary sampling units are an individual fish or a specified amount of muscle tissue from an individual fish. A composite, "mixed", or "secondary" fish sample is formed by homogenizing two or more fish or tissue samples from two or more fish. Replicate composite samples are usually taken within each group or station being compared.

Composite sampling has many applications in science, medicine, and industry (1, 2). It may be necessary to use composite samples in order to provide sufficient material for accurate analysis. For example, conventional chemical analytical procedures require about 10 g of tissue for analysis of priority pollutant metals and about 50 g of tissue for priority pollutant organic compounds (3). If the mass of the primary sampling unit is less than that needed for accurate chemical analysis, compositing may be performed to obtain sufficient mass. Other reasons for compositing are to reduce the analytical costs of samples and to obtain estimates of average conditions at a lower cost than by using the average of measurements on each primary unit. When knowledge of the distribution of the primary units is important, compositing is counterproductive.

In this paper, we present models for two types of measurements made on composite samples. Methods and examples are presented for both models to determine (a) the optimum sampling protocol for testing differences between means for a fixed cost and (b) the least costly sampling protocol needed to reliably detect a specified true difference between population means.

* Author for correspondence; telephone: (516) 632-8580 (leave messages at (516) 632-8600); fax: (516) 632-7626; e-mail: [email protected].

S0013-936X(95)00733-4 CCC: $12.00  © 1996 American Chemical Society

Sampling Models

In this section, we compare models for grab and for composite sampling. We distinguish two models for measurements made from composite samples: (a) measurement of the concentration of a substance and (b) measurement of the sum of the contributions of each primary unit. In both cases, we assume simple random sampling of independent sampling units. In reality, samples are often spatially and temporally autocorrelated; consideration of such models is beyond the scope of this paper. We also do not take into account the problem of censored data, as when observations fall below a detection limit for an analytical test. The models we present are still appropriate, but special methods have to be used to obtain unbiased estimates of the means and variances. Helsel (4) gives a general overview of methods for dealing with censored data. If data are not normally distributed, data transformations (5) may be used to achieve normality for grab or composite samples.

Models for Grab Sampling. The model for an observation, Y_ijk, resulting from a grab sample is that of a mixed model nested analysis of variance (ANOVA) design (see ref 5). The model can be expressed as

Y_{ijk} = \mu + \gamma_i + S_{ij} + \epsilon_{ijk}    (1)

where µ is the overall population mean, γ_i is the fixed effect of being in the ith group (usually a "station" in environmental monitoring applications), S_ij is the effect of being in the jth sample of the ith group, and ε_ijk is the effect of being the kth replicate measurement (analytical test) made on the jth sample from the ith group. If the groups being compared represent random group effects (e.g., in a study of a random sample of populations), then the γ_i term is replaced by G_i. We will use the symbol g for the number of groups, b for the number of replicate samples in each group, and n for the number of replicate measurements made on each replicate sample. An example of a grab sample is when individual fish, or a given mass of tissue from individual fish, are collected and kept distinct. In this model, at each of the g stations, b fish would be measured n times.

Models for Composite Sampling. The models for composite sampling are similar to the mixed model nested ANOVA design for grab sampling described above. The difference is that each measurement is taken on a composite sample that is composed of a homogeneous mixture of two or more primary sampling units. It is assumed that the observed value for a measurement from a composite sample is a weighted sum of the true values for each primary sampling unit plus the effect of measurement error. This
results in the following model:

Y_{ijk} = \sum_l w_l(\mu + \gamma_i + P_{ijl}) + \epsilon_{ijk} = \mu\sum_l w_l + \gamma_i\sum_l w_l + \sum_l w_l P_{ijl} + \epsilon_{ijk}    (2)
where w_l is the weight for the contribution of the lth primary unit to a composite sample and P_ijl is the effect of the lth primary unit in the jth composite sample from the ith group. We will use the symbol p for the number of primary units in each composite sample, g for the number of groups, b for the number of replicate composite samples in each group, and n for the number of replicate measurements made on each composite sample. If p = 1, this model reduces to that of grab sampling described above. There are several possible models for the w_l. If one takes a fixed-sized sample from each organism and measures the concentration of a certain substance, then the concentration in the composite sample is a weighted average of the concentrations of the substance in the individual organisms. This corresponds to w_l = 1/p. Therefore Σw_l = 1, and the sampling model reduces to
Y_{ijk} = \mu + \gamma_i + \frac{\sum_l P_{ijl}}{p} + \epsilon_{ijk}    (3)
An example of this type of composite sample is a homogenized sample of p equal-sized individual fish or, more commonly, a homogenized sample of an equal mass of tissue from each of p individual fish. At each of g sampling stations, b composite samples are formed, and a chemical test is made n times on each one. Rohde (6) and Tetra Tech (7) discuss a similar model but do not allow for a separate error due to the measurements themselves. If the variable being measured is a sum of the contributions of the primary units (e.g., the total weight of p individuals is measured), then the measured value from a composite sample is just the sum of the values of the p individuals. This corresponds to w_l = 1. In most cases, the measured value (including the error component) is divided by p so that the recorded value will be in terms of an average individual. The sampling model is
Y_{ijk} = \frac{\sum_l w_l(\mu + \gamma_i + P_{ijl}) + \epsilon_{ijk}}{p} = \mu + \gamma_i + \frac{\sum_l P_{ijl} + \epsilon_{ijk}}{p}    (4)
An assumption of this model is that the absolute magnitude of the analytical error, ε_ijk, does not depend upon p. An example would be when a scale is used to weigh a set of p fish taken from a single station and the scale weighs all samples, regardless of weight, to the nearest gram. The division by p to obtain an average weight reduces the absolute magnitude of the error variance by p². The w_l can be random variables rather than fixed constants, as assumed above. Rohde (1, 6) described models in which the w_l are mixing proportions that follow the Dirichlet distribution. This distribution is based in part on the assumption that the w_l are independent and have a mean of 1/p. One effect of having the w_l random is that the variance is increased by a factor of 1 + p²bσ_w², where σ_w² is the variance among the w_l values (7). The models considered above can be viewed as special cases (with the variance of the w_l equal to zero) of this more general model.
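The variance consequences of the w_l = 1/p model can be illustrated with a short simulation. In the sketch below (Python), the skewed exponential distribution for primary units and all numeric values are illustrative assumptions, not data from this paper; the point is that the variance among composite values should be smaller than the variance among primary units by roughly the factor p.

```python
import random
import statistics

def simulate_composites(p, n_samples=20000, seed=42):
    """Composite values under the w_l = 1/p model: each composite is
    the mean of p primary-unit values (here drawn from a skewed,
    hypothetical exponential distribution with mean 4, variance 16)."""
    rng = random.Random(seed)
    return [statistics.fmean(rng.expovariate(1 / 4.0) for _ in range(p))
            for _ in range(n_samples)]

grabs = simulate_composites(p=1)       # p = 1 reduces to grab sampling
composites = simulate_composites(p=8)  # composites of 8 primary units

var_grab = statistics.variance(grabs)
var_comp = statistics.variance(composites)
print(var_grab / var_comp)  # close to p = 8
```

The simulated composites are also visibly closer to normally distributed than the skewed primary units, the Central Limit Theorem effect discussed below.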
True replicate measurements may not be possible, as some analytical procedures are destructive. The standard practice for estimating chemical analytical error when analyses are destructive is to analyze subsamples from the same homogeneous sample. Replicate measurements may either be omitted or kept to a minimum because of their expense and the expectation that biological variability is much larger than analytical error. When no replicate measurements are taken, no estimate is directly available for the ε_ijk. Estimates for the variance of the ε_ijk must then be obtained either from prior work or from a pilot study.

A fortunate aspect of composite sampling is that the measurements tend to be normally distributed even when the values for the primary sampling units do not follow the normal distribution. This is because the value for each composite sample is the sum of the contributions of each primary unit. According to the Central Limit Theorem, sums of independent random variables with finite means and variances will tend toward normality as the number of terms, p, increases (8).

Comparisons among groups (e.g., stations) are usually carried out as single-classification ANOVA or by t-tests. The standard error of a mean is based on the variation among readings taken on composite samples within each group. Even when measurements are taken only on composite samples, the effects of individual variation are still present. The expected mean square for variation among samples in a nested ANOVA (5) is

\mathrm{EMS}_S = \sigma^2 + n\sigma^2_{S \subset G}    (5)
where σ² is the variance expected for replicated analytical tests, n is the number of replicate tests made on each sample, and σ²_S⊂G is the variance component for variation among samples within each group. The mean square among groups has the following expected value:

\mathrm{EMS}_G = \sigma^2 + n\sigma^2_{S \subset G} + nb\,\frac{\sum_i \gamma_i^2}{g - 1}    (6)
where Σγ_i²/(g − 1) is the added component for fixed differences among groups. If the differences among groups correspond to random effects, then this term is replaced by σ_G². In a typical environmental monitoring application, groups (stations) are usually considered to be fixed treatment effects. In the case of composite samples and the w_l = 1/p model, σ²_S⊂G is replaced by (1/p)σ²_P⊂S (where σ²_P⊂S is the added variance component for variation among primary units). The expected mean square for samples becomes

\mathrm{EMS}_S = \sigma^2 + \frac{n}{p}\,\sigma^2_{P \subset S}    (7)
For the w_l = 1 model with the measured values divided by p, the expected mean square for samples is

\mathrm{EMS}_S = \frac{1}{p^2}\,\sigma^2 + \frac{n}{p}\,\sigma^2_{P \subset S}    (8)
The expected mean square divided by sample size yields the expected variance of a group mean (the square of the standard error of a group mean):

\sigma^2_{\bar Y} = \frac{\mathrm{EMS}_S}{nb}    (9)

This quantity is important in determining the expected sensitivity of a sampling program: how small a difference between group means one can expect to detect (see below). For small sample sizes, as are usual in environmental monitoring studies, the degrees of freedom, g(b − 1), of the expected mean squares are also important.

For simplicity, we have assumed that equal numbers of primary units are present in each composite sample and that the same number of replicate measurements is made on each composite sample. In the case of unequal sample sizes, n and p should be replaced by the special means, n₀ and p₀, such as are used in ANOVA (e.g., see ref 5 for the equation for n₀). While larger sample sizes always result in more powerful statistical tests, having additional observations in some groups and not others does not contribute as much to statistical power as one might expect. This is because the effective average sample size for statistical tests is always equal to or less than the average of the sample sizes, i.e., n₀ ≤ n̄ and p₀ ≤ p̄.

Figure 1 shows an example of how the expected variance of a group mean varies as a function of p and b for n = 1 and 2 for the w = 1/p model (eqs 7 and 9 with σ² = 1.5 and σ²_P⊂S = 4). As p becomes large, the variance approaches σ²/(nb) asymptotically. Figure 2 shows the corresponding plot for the w = 1 model (eqs 8 and 9). Note how increasing n has only a very small effect. In this case, the variance approaches 0 as p becomes very large.

FIGURE 1. Expected variance of a group mean as a function of p and b for n = 1 (solid line) and n = 2 (dashed line) for the model w = 1/p. The curves are given for b = 2 and 6.

FIGURE 2. Expected variance of a group mean as a function of p and b for n = 1 (solid line) and n = 2 (dashed line) for the model w = 1. The curves are given for b = 2 and 6.

Cost of a Sampling Program

A cost function can be defined that gives the cost for each group being compared:

C = bC_{S \subset G} + nbC_R    (10)

where C_S⊂G is the cost of one sampling unit (a single grab sample or a single composite sample), C_R is the cost of making a single analytical test (whether from a grab or a composite sample), and C is the total cost of data collection for a single group (station). Since a composite sample is made up of p primary sampling units, we can replace C_S⊂G with pC_P⊂S to yield the following equation:

C = b(pC_{P \subset S} + nC_R)    (11)

where C_P⊂S is the cost of collecting and processing a single primary unit (but not its analysis), assuming that the costs of the p primary units are the same whether or not the w_l are equal. Given n and p, this equation can be inverted to yield

b = \frac{C}{pC_{P \subset S} + nC_R}    (12)

This enables one to compute the number of samples that one can afford. Figure 3 shows an example of the effect of varying n, p, and b on the total cost, C, of sampling one group using C_R = 250 and C_P⊂S = 50.

FIGURE 3. Cost of samples for one group as a function of p and b for n = 1 (solid line) and n = 2 (dashed line). The curves are given for b = 2 and 6.

While keeping costs low is always desirable, the set of parameters (n, p, and b) that minimizes the cost may not result in a sufficiently small variance of a group mean for reliable statistical inference. The next section shows a strategy for obtaining the best compromise between minimizing cost and maximizing the sensitivity of a sampling design.
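Eqs 11 and 12 are easy to check numerically. The following is a minimal sketch (Python; an illustration, not the authors' OptiCmp program) using the cost figures adopted later in the Examples section, C_R = $250 and C_P⊂S = $50:

```python
def group_cost(b, p, n, cost_primary, cost_test):
    """Eq 11: cost for one group of b composite samples, each built
    from p primary units and measured n times."""
    return b * (p * cost_primary + n * cost_test)

def affordable_b(budget, p, n, cost_primary, cost_test):
    """Eq 12: largest whole number of composite samples per group
    that stays within the budget (b truncated to an integer)."""
    return int(budget // (p * cost_primary + n * cost_test))

b = affordable_b(5000, p=4, n=1, cost_primary=50, cost_test=250)
print(b, group_cost(b, p=4, n=1, cost_primary=50, cost_test=250))
# 11 composite samples, costing $4950 of the $5000 budget
```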
Comparison of Means

Composite samples are usually used in environmental studies to monitor mean levels of some quantity (e.g., PCB levels in fish tissues) against a standard or critical value (e.g., FDA Action Limits) or to compare mean levels in samples from different stations. Increasing n, p, or b
decreases the variance but increases the cost of a sampling program (Figures 1-3). Thus, the optimal design must balance these factors in order to obtain the most powerful statistical test for the least cost. In some cases, it is possible to derive an explicit formula, but usually an iterative or a trial and error procedure must be used to find the optimum composite sampling protocol. Optimal Allocation of Resources. For grab sampling, the optimum number of analytical tests to make on each sampling unit can be computed directly using the following formula (based on ref 5, p 312):
\hat n = \sqrt{\frac{C_{S \subset G}\,S_R^2}{C_R\,S^2_{S \subset G}}}    (13)
The value for n̂ is truncated to an integer and constrained to be equal to or greater than unity. Analogous formulas can be derived for n and p in composite sampling by simultaneously minimizing both the variance of a group mean and the cost of a sampling plan. This is done by computing the derivatives (with respect to n and to p) of the product of cost and variance, setting these equal to zero, and then solving for the optimal values of n and p (9). The number of composite samples, b, is determined so that the plan is affordable and the desired statistical test has sufficient statistical power (see below). The number of composite samples must also be greater than or equal to 2 so that an estimate of the sampling variance can be obtained. The results depend upon the model for the w_l. If w_l = 1/p, then explicit formulas cannot be obtained for n and p but only for their ratio:
\frac{\hat n}{\hat p} = \sqrt{\frac{S_R^2\,C_{P \subset S}}{S^2_{P \subset S}\,C_R}}    (14)
Thus, an iterative or trial and error approach is required as described below. Given a pair of n and p values (in the above ratio), b can be computed so as to achieve a desired total cost using the following formula:
\hat b = \frac{C}{pC_{P \subset S} + nC_R}    (15)
Alternatively, one can solve for a value of b that yields a specified standard error of a group mean using the following formula:
\hat b = \frac{S_R^2/n + S^2_{P \subset S}/p}{S^2_{\bar Y}}    (16)
For the w_l = 1 model with the Y_ijk divided by p, the optimum number of replicates can be computed directly as
\hat n = \sqrt{\frac{S_R^2\,C_{P \subset S}}{S^2_{P \subset S}\,C_R}}    (17)
which does not depend upon p. If n̂ < 1, then a single replicate should be used. For this model, p should be as large as one can afford.
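As a numerical illustration of eqs 13, 14, and 17, the following Python sketch uses the variance and cost values adopted later in the Examples section. Because the analytical-error variance is small relative to the variance among primary units, and analyses are expensive relative to primaries, the optimum puts effort into primary units rather than replicate tests:

```python
import math

def optimal_n_grab(cost_sample, cost_test, var_test, var_sample):
    """Eq 13: optimal number of replicate tests per grab sample,
    truncated to an integer and constrained to be at least 1."""
    n = math.sqrt((cost_sample * var_test) / (cost_test * var_sample))
    return max(1, int(n))

def optimal_ratio(var_test, var_primary, cost_primary, cost_test):
    """The quantity sqrt(S_R^2 C_P / (S_P^2 C_R)): under w = 1/p it is
    the optimal ratio n/p (eq 14); under w = 1 it is n itself (eq 17)."""
    return math.sqrt((var_test * cost_primary) / (var_primary * cost_test))

# S_R^2 = 1.5, S_P^2 = 4.0, C_P = $50, C_R = $250 (Examples section):
print(optimal_n_grab(50, 250, 1.5, 4.0))           # 1 test per sample
print(round(optimal_ratio(1.5, 4.0, 50, 250), 3))  # 0.274, so n << p
```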
The value for b depends upon the limitation of funds and the desired statistical power one wishes to have for testing differences between populations. The formula for b to stay within a fixed cost is as given above. The formula to achieve a desired standard error for this model is

\hat b = \frac{S_R^2/(np^2) + S^2_{P \subset S}/p}{S^2_{\bar Y}}    (18)
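Eqs 16 and 18 invert the variance formulas to give the number of composites needed for a target standard error. A small sketch (Python; the target variance of 0.25 is an arbitrary illustrative choice, not a value from the paper):

```python
def b_for_target_var(var_test, var_primary, n, p, target_var, model="1/p"):
    """Eqs 16 and 18: composite samples per group needed to bring the
    variance of a group mean down to target_var, for either model."""
    if model == "1/p":                      # eq 16: concentrations
        per_b = var_test / n + var_primary / p
    else:                                   # eq 18: sums divided by p
        per_b = var_test / (n * p**2) + var_primary / p
    return per_b / target_var

# S_R^2 = 1.5, S_P^2 = 4.0, n = 1, p = 4, target variance 0.25:
print(b_for_target_var(1.5, 4.0, 1, 4, 0.25))             # 10.0 (eq 16)
print(b_for_target_var(1.5, 4.0, 1, 4, 0.25, model="1"))  # 4.375 (eq 18)
```

Note that the w = 1 model needs fewer composites because dividing by p shrinks the analytical-error contribution by p².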
Power of a Test and Estimation of Sample Size. A sampling program needs to have sufficient sample sizes so that if there are important differences between groups they are likely to be detected statistically. The following relationship (from ref 5, p 263) determines the required sample size for comparing two means
n = 2\left(\frac{\sigma}{\delta}\right)^2\left(t_{\alpha[\nu]} + t_{2(1-P)[\nu]}\right)^2    (19)
where P is the desired probability that an observed difference as small as δ will be found to be statistically significant, σ is the true standard deviation for a sample, ν is the degrees of freedom of the estimate of σ, and α is the significance level one plans to use. P is also one minus the probability of a type II error. This formula is based on the assumption that the difference between means follows the normal distribution. In the case of composite samples, this formula can be expressed as
\hat n\hat b = 2\,\frac{\mathrm{EMS}_S}{\delta^2}\left(t_{\alpha[\nu]} + t_{2(1-P)[\nu]}\right)^2    (20)
where EMS_S is the expected value of the mean square for differences among samples based upon the w_l = 1/p or w_l = 1 model, whichever is appropriate, and ν is the degrees of freedom that one will have for the mean square among samples in a statistical analysis based on the planned sampling program (usually ν = g(b − 1)). The other symbols are as defined above. This equation must be used as part of an iterative cycle, since the degrees of freedom, ν, depend upon b, and the expected mean square depends upon both n and p. The solution of this equation is further complicated by the fact that for small sample sizes one cannot ignore the fact that n, p, and b are integers. Unique solutions are not always possible.
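Because ν = g(b − 1) depends on b, eq 20 has to be solved by iteration. The sketch below (Python) does this with a rough normal-based approximation to the Student's t quantile (an assumption of this sketch; exact t tables give slightly different intermediate values) and rounds b upward, a design choice here so that the planned power is not undershot:

```python
import math
from statistics import NormalDist

def t_quantile(prob, nu):
    """Approximate one-tailed Student's t quantile: the normal quantile
    plus a first-order tail correction (rough, but adequate here)."""
    z = NormalDist().inv_cdf(prob)
    return z + (z**3 + z) / (4 * nu)

def required_b(ems_s, n, delta, g=2, alpha=0.05, power=0.90):
    """Solve eq 20 for b by iteration: nu = g(b - 1) depends on b, so
    start from a guess and repeat until b stabilizes."""
    b = 5
    for _ in range(100):
        nu = g * (b - 1)
        t_sum = t_quantile(1 - alpha / 2, nu) + t_quantile(power, nu)
        nb = 2 * ems_s / delta**2 * t_sum**2   # eq 20
        new_b = max(2, math.ceil(nb / n))
        if new_b == b:
            break
        b = new_b
    return b

# EMS_S values for the two least-cost designs in Table 2
# (S_R^2 = 1.5, S_P^2 = 4.0): w = 1/p with n = 1, p = 3, and
# w = 1 with n = 1, p = 8.
print(required_b(1.5 + 4 / 3, n=1, delta=2.0))       # 16 composites
print(required_b(1.5 / 64 + 4 / 8, n=1, delta=2.0))  # 4 composites
```

With the variances and δ = 2.0 used in the Examples section, this reproduces the b values of the two least-cost designs in Table 2.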
Algorithms for Estimation of Sample Sizes

The procedures to be used differ depending on whether one wishes to find the best sampling design (minimum δ) for a fixed cost or the least expensive sampling design that can detect a specified δ. For each problem, the formulas differ slightly depending on the model assumed for the w_l. In all cases, one must consider transformations, removal of outliers, robust estimation methods, etc., so that the means are more or less normally distributed. One then needs to obtain good estimates of the variance due to measurement error, σ², and the variance among the primary sampling units, σ²_P⊂S. This latter quantity may be difficult to obtain if the reason for using composite samples is that analytical errors are very high when measuring a single primary unit. If the data have been transformed, then these estimates must also be for the transformed variables. The cost, C_R, of each measurement on a composite sample and the cost, C_P⊂S, of each primary unit are also needed. There may be other costs associated with collecting data; what is important in this context is the cost of adding another primary unit to a composite or of making another replicate measurement. The total cost being modeled is the variable part of the experiment after any initial setup costs. One also needs to decide what levels of type I and type II errors one can tolerate and the smallest differences that are important to detect.
Finding the Best Sampling Plan for a Fixed Cost, C

Model w = 1/p. (1) Compute the ratio

r = \frac{\hat n}{\hat p} = \sqrt{\frac{S_R^2\,C_{P \subset S}}{S^2_{P \subset S}\,C_R}}

(2) For integer values of n and p = n/r (p rounded to an integer and constrained to be ≥1), compute

\hat b = \frac{C}{pC_{P \subset S} + nC_R}    (21)

The number of composite samples should be truncated to an integer and constrained to be ≥2 so that an estimate of the variance can be obtained in the planned design. (3) For each feasible combination of n, p, and b, estimate δ, the smallest difference that one can expect to detect in the planned experiment:

\delta^2 = \frac{2}{b}\left(t_{\alpha[\nu]} + t_{2(1-P)[\nu]}\right)^2\left(\frac{S_R^2}{n} + \frac{S^2_{P \subset S}}{p}\right)    (22)

The combination that yields the smallest δ is the optimal design.

Model w = 1. (1) Compute

\hat n = \sqrt{\frac{S_R^2\,C_{P \subset S}}{S^2_{P \subset S}\,C_R}}

Round n to an integer and constrain it to be ≥1. (2) For integer values of p, compute b = C/(pC_P⊂S + nC_R). Truncate b to an integer and constrain it to be ≥2 so that an estimate of the variance can be obtained in the planned design. (3) For each feasible combination of n, p, and b, estimate δ, the smallest difference that one can expect to detect in the planned experiment:

\delta^2 = \frac{2}{b}\left(t_{\alpha[\nu]} + t_{2(1-P)[\nu]}\right)^2\left(\frac{S_R^2}{np^2} + \frac{S^2_{P \subset S}}{p}\right)    (23)

The combination that yields the smallest δ is the optimal design.

Finding the Least Expensive Sampling Plan That Can Detect a Specified Difference

Model w = 1/p. (1) Compute the ratio

r = \sqrt{\frac{S_R^2\,C_{P \subset S}}{S^2_{P \subset S}\,C_R}}

(2) For integer values of n and p = n/r (p rounded to an integer and constrained to be ≥1), compute

\hat b = \frac{2}{\delta^2}\left(t_{\alpha[\nu]} + t_{2(1-P)[\nu]}\right)^2\left(\frac{S_R^2}{n} + \frac{S^2_{P \subset S}}{p}\right)    (24)

The degrees of freedom, ν = g(b − 1), are a function of b, so this equation must be solved iteratively. The result should be truncated to an integer and constrained to be ≥2 so that an estimate of the variance can be obtained in the planned design. (3) For each feasible combination of n, p, and b, compute the cost of each group in the planned experiment:

C = b(pC_{P \subset S} + nC_R)

The combination that yields the lowest cost is the optimal design.

Model w = 1. (1) Compute

\hat n = \sqrt{\frac{S_R^2\,C_{P \subset S}}{S^2_{P \subset S}\,C_R}}

Round n to an integer and constrain it to be ≥1. (2) For integer values of n and p (with p constrained to be ≥1), compute

\hat b = \frac{2}{\delta^2}\left(t_{\alpha[\nu]} + t_{2(1-P)[\nu]}\right)^2\left(\frac{S_R^2}{np^2} + \frac{S^2_{P \subset S}}{p}\right)    (25)

The result should be truncated to an integer and constrained to be ≥2 so that an estimate of the variance can be obtained in the planned design. (3) For each feasible combination of n, p, and b, the cost of each group in the planned experiment is computed as

C = b(pC_{P \subset S} + nC_R)    (26)

The combination that yields the lowest cost is the optimal design. MS Windows-compatible software (OptiCmp) that implements the algorithms described above is available from the authors (see below).
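The fixed-cost search for the w = 1/p model can be sketched as a brute-force grid search. The Python below is an illustrative reimplementation, not the authors' OptiCmp program, and it approximates the Student's t quantile with a rough normal-based correction:

```python
import math
from statistics import NormalDist

def t_quantile(prob, nu):
    """Rough Student's t quantile: the normal quantile plus a
    first-order tail correction (adequate for design comparisons)."""
    z = NormalDist().inv_cdf(prob)
    return z + (z**3 + z) / (4 * nu)

def best_design_fixed_cost(budget, var_test, var_primary, cost_primary,
                           cost_test, g=2, alpha=0.05, power=0.90,
                           max_n=5, max_p=30):
    """Enumerate feasible (n, p, b) designs under the w = 1/p model
    and keep the one with the smallest detectable delta (eq 22)."""
    best = None
    for n in range(1, max_n + 1):
        for p in range(1, max_p + 1):
            b = int(budget // (p * cost_primary + n * cost_test))
            if b < 2:
                continue  # need >= 2 composites to estimate the variance
            nu = g * (b - 1)
            t_sum = t_quantile(1 - alpha / 2, nu) + t_quantile(power, nu)
            # eq 22: smallest detectable difference for this design
            delta2 = (2 / b) * t_sum**2 * (var_test / n + var_primary / p)
            candidate = (math.sqrt(delta2), n, p, b)
            if best is None or candidate < best:
                best = candidate
    return best

# Variance and cost values from the Examples section ($5000 budget):
delta, n, p, b = best_design_fixed_cost(5000, 1.5, 4.0, 50, 250)
print(n, p, b)  # 1 4 11, the design reported in Table 1
```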
Examples

Here we give hypothetical but realistic examples to demonstrate the use of the algorithms. Although we use reasonably accurate values for the variance and cost estimates, one only needs to know the ratios of the parameters (such as the ratio of the cost of the analysis to the cost of preparing a composite sample) to determine the optimum values of n, p, and b. The costs of tests are quite variable. A chemical test may involve a single chemical compound, a specific class of compounds (such as metals or PCBs), all of the more than 120 EPA priority pollutants (a full scan), or any number of these or other compounds. The cost of a chemical analysis for a specific class of compounds ranges from about $88 to $430 (10). In our examples, we use C_R = $250, which is about the average. The cost of a primary sampling unit (within a composite sample), C_P⊂S, is more difficult to estimate. The cost of a trawl for collecting sample organisms from a vessel may be about $140 (10). However, note that this is a fixed cost of one group and will be incurred regardless of the level of compositing. The cost that is relevant for determining the optimum sample sizes is the one for combining tissue from one or more organisms
TABLE 1. Finding the Most Sensitive Designs for a Fixed Cost for the w = 1/p and w = 1 Models (see text for the parameters used)

TABLE 2. Finding the Least Expensive Design for a Fixed δ for the w = 1/p and w = 1 Models (see text for the parameters used)
(primary units) to make one composite sample that will be analyzed. Here we assume that this is lower than the cost of a trawl and that C_P⊂S = $50. For the variances, we assume that S_R² = 1.5 and S²_P⊂S = 4.0. This may, for instance, correspond to a within-group coefficient of variation (CV) of 50% and an analytical error (CV) of 30% for a mean chemical concentration of 4.0 mg kg⁻¹, which is close to the average concentration of total PCBs in the livers of winter flounder (7).

We used the first two algorithms to search for a sampling design that allows one to detect the smallest difference between two means with the constraint that the maximum cost is $5000. The significance level was set at α = 0.05, and the power of the test (the probability of detecting a difference as small as δ) was set at P = 0.9. The results for various combinations of n, p, and b are shown in Table 1. For the w = 1/p model, which is the appropriate model in this case since we are interested in concentrations, the combination n = 1, p = 4, and b = 11 results in the smallest δ with a cost of $4950. However, there is an alternative design with n = 1, p = 3, and b = 12 that results in a δ that is very close but costs $150 less. For the w = 1 model, which would be appropriate if the measurements were weights rather than concentrations, the combination n = 1, p = 15, and b = 5 results in the smallest δ with a cost of ≤$5000.

When the estimated sample sizes are small, as in the present example, the achievable values of δ are discontinuous, and there can be multiple local minima. Depending upon the ratios of the costs and the variances, there may be a single best solution, or several combinations may yield nearly equivalent solutions. We used the last two algorithms to search for the least expensive sampling designs that are expected to be able to
detect a difference as small as δ = 2.0. The probabilities α and P were as before. The results are shown in Table 2. For the w = 1/p model, the combination n = 1, p = 3, and b = 16 yields the least expensive design that can detect the specified difference. For the w = 1 model, the least expensive design has the combination n = 1, p = 8, and b = 4.
Discussion

The formulas and methods presented in this paper for determining the optimum composite sampling protocol are generally applicable. We chose chemical concentrations in fish for our examples because chemical analyses are often performed on composited animal tissue samples and because we believed scientists working in many disciplines and/or with other media (water, sediments, etc.) could easily relate to them. Environmental composite sampling designs have typically aimed for adequacy (i.e., a sampling strategy that could detect a desired minimum difference or "effect size") rather than optimality (the sampling strategy that could detect a desired minimum difference at minimum cost). If p primary units and b composite samples per treatment could detect an important difference, the sampling protocol was considered adequate. Sampling protocols shown to be adequate in one study have often been adopted in subsequent studies even though some other combination of primary units and composite samples might be capable of detecting the same or a smaller difference at less cost. The NOAA Status and Trends Program collects three composites of between 10 and 20 fish livers and three composites of 30 mussels at each site (11). Although these
compositing strategies are probably adequate for many purposes, we are unaware of any studies to show that they are optimal. Tetra Tech (7) concluded that for most contaminants in environmental samples, 6-8 individuals per composite sample is adequate. However, without a cost function, it is not possible to determine if that compositing strategy is optimal. They also did not explicitly take into account measurement errors in making analytical tests, which can be quite large. Baez and Bect’s (12) conclusion that 20 mussels per composite sample is optimal for detecting differences in DDT concentrations is based solely on happenstance and does not consider costs. Readers are advised to be skeptical of claims of optimality when the variance ratios, δ, or costs are not considered. Composite sampling can reduce the number of expensive analytical tests required, provide a more accurate estimate of the mean, and result in a sampling distribution closer to the normal distribution. Compositing, however, makes it difficult to reliably estimate the variance and the sampling distribution of the primary sampling units, which is important in some studies. The OptiCmp software for MS Windows is available from the authors. It carries out the computations described in this paper. Copies of the software can be obtained most conveniently using a WWW browser over the Internet. The URL address of the Web home page is http://Life.Bio.SUNYSB.edu/ee/biometry.
Acknowledgments

We thank J. Heltshe, D. Young, and M. Winsor for reviewing an earlier draft of this manuscript. Although the information in this document has been funded wholly or in part by the U.S. Environmental Protection Agency under Contract 68-CO-0051 to AScI Corporation, it does not necessarily reflect the views of the Agency, and no official endorsement should be inferred. This is Contribution No. 795 from Graduate Studies in Ecology and Evolution, State University of New York at Stony Brook, and Contribution No. N-198 of the U.S. Environmental Protection Agency, Environmental Research Laboratory, Narragansett, RI.
Literature Cited

(1) Rohde, C. A. In Sampling biological populations; Cormack, R. M., Patil, G. P., Robson, D. S., Eds.; International Co-operative Publishing House: Fairland, MD, 1979; pp 365-377.
(2) Garner, F. C.; Stapanian, M. A.; Williams, L. R. In Principles of environmental sampling; Keith, L. H., Ed.; ACS Professional Reference Book; American Chemical Society: Washington, DC, 1988; pp 363-374.
(3) Tetra Tech, Inc. Quality assurance/quality control (QA/QC) for 301(h) monitoring programs: guidance on field and laboratory methods; EPA 430/9-86-004; U.S. Environmental Protection Agency, Office of Marine and Estuarine Protection: Washington, DC, 1987.
(4) Helsel, D. R. Environ. Sci. Technol. 1990, 24, 1766-1774.
(5) Sokal, R. R.; Rohlf, F. J. Biometry, 3rd ed.; W. H. Freeman: New York, 1995.
(6) Rohde, C. A. Biometrics 1976, 32, 273-282.
(7) Tetra Tech, Inc. Bioaccumulation monitoring guidance: strategies for sample replication and compositing; EPA 430/09-87-003; U.S. Environmental Protection Agency, Office of Marine and Estuarine Protection: Washington, DC, 1987.
(8) Parzen, E. Modern probability theory and its applications; Wiley: New York, 1960.
(9) Kendall, M. G.; Stuart, A. The advanced theory of statistics, Vol. 3; Hafner: New York, 1966.
(10) Tetra Tech, Inc. Examples of cost calculations for marine discharge monitoring programs; U.S. Environmental Protection Agency, Office of Marine and Estuarine Protection: Washington, DC, 1984.
(11) Shigenaka, G.; Lauenstein, G. G. National Status and Trends Program for marine environmental quality: benthic surveillance and mussel watch projects sampling protocols; NOAA Technical Memorandum NOS OMA 40; 1988.
(12) Baez, B. P. F.; Bect, M. S. G. Mar. Pollut. Bull. 1989, 20, 496-499.
Received for review October 2, 1995. Revised manuscript received March 29, 1996. Accepted March 29, 1996.

ES950733X
Abstract published in Advance ACS Abstracts, August 15, 1996.