
Anal. Chem. 1984, 56, 2052-2058

Variable Filter for Digital Smoothing and Resolution Enhancement of Noisy Spectra

Manfred U. A. Bromba*

Siemens AG, Balanstrasse 73, D-8000 München 80, West Germany

Horst Ziegler

FB 6-Angewandte Physik, Universität Paderborn, D-4790 Paderborn, West Germany

Starting with the discussion of the optimality of digital filtering in the sense of parameter estimation, we propose a simple filter formula that can be adjusted for a wide range of applications between smoothing and resolution enhancement. Within certain limits, resolution enhancement can be achieved even simultaneously with noise reduction. Since in the light of parameter estimation the use of digital filters other than the matched filter can only be justified heuristically, the new filter formula is optimized mainly with regard to computational efficiency and the nongeneration of artifacts. The filter is especially suited for interactive spectrum processing using graphic terminals. Simulations with noise-free and noisy synthetic spectra are supplied to demonstrate the efficacy of the method.

In many situations, heights and positions of individual lines in a spectrum (or chromatogram) are the most important and often the only parameters to be calculated for the purpose of physical interpretation. The extraction of these parameters from a real spectrum is always complicated by the presence of random noise such as detector noise, amplifier noise, or quantization noise from the A/D conversion. If some a priori information about the signal, its parameters, and the noise is available, usual parameter estimation techniques such as Bayesian methods ("MMSE", "MAP") can be applied (1-5). In the absence of a priori information about the parameters, maximum likelihood estimation ("ML") has been proven useful. Under certain circumstances (white, normally distributed noise), this method is equivalent to simple least-squares estimation. In this paper we only consider normally distributed white noise. Furthermore, we assume that any background has already been subtracted and that the time between consecutive samples after the A/D conversion is short in comparison to the error of the line position estimate. The line heights are assumed to be positive. (The generalization to "bipolar" spectra as they appear in double ENDOR (7), ODMR, or MCD (6) is straightforward.)

From radar and communication theory (1, 8) it is a well-known fact that for a single line ("pulse") of known shape and width which is superimposed by normally distributed white noise, the best least-squares estimator for the unknown parameters height and position ("arrival time") is simply a matched filter combined with a maximum searcher (2, 8-11). A matched filter is defined to have a filter function ("impulse response") which for symmetric lines is equal to the undisturbed line function, except for a normalizing factor. Besides its optimality in the parameter estimation sense, it maximizes the SNR (signal to noise ratio) as it is usually defined in spectrometry (12). After the filtering operation, which can also be viewed as a "cross-correlation" of the noisy signal with a noise-free reference signal (13, 14), the least-squares estimator requires a search for the maximum value. This nonlinear

processing part is due to the nonlinearity of the position parameter. The value of the maximum and its location then are the optimal estimates (in the sense of maximum likelihood and least-squares estimation) for the height and the position of a single line, respectively. (The problem of line detection is simply solved by introducing an appropriate lower bound for the height estimates. If a maximum does not exceed this bound, it is regarded to be noise.) Matched filters can also be applied for multiple line spectra, provided there is no overlap before filtering. The lines may overlap after matched filtering; this will have no influence on the individual height and position estimates. Obviously, the nonoverlapping property is a further kind of a priori knowledge. (Although spectral lines always overlap in theory because of their strict positivity (15), they may be viewed as finite-duration signals with sufficient accuracy.) Sometimes it is argued that a disadvantage of matched filtering is the considerable broadening of the lines. This is not astonishing because all information to be extracted is concentrated in the maximum of the filtered noisy line; i.e., the tails of the filtered line do not contribute to the final result and thus are allowed to get severely deformed. It must be considered that the height and position estimates available after the maximum search, together with the a priori knowledge (concerning shape and width), can be used to reconstruct the true line. It is only the reconstruction which gives a highly satisfying visual representation, virtually containing no noise, since the additive noise in the original single-line spectrum now is solely reflected by the uncertainty of the line parameters.

Real problems, however, are spectra with overlapping lines. In such cases, digital matched filtering with subsequent maximum search fails to be the optimal least-squares estimation procedure. If the line shapes, widths, and positions are known a priori, simple least-squares algorithms, which may be performed either nonsequentially (1) or sequentially by a simple version of Kalman filters (16-18), can be used to estimate the line heights. If the positions and moreover the widths must be estimated also, iterative algorithms (1) with their excessive time consumption and their need for precise initial estimates become unavoidable. Such procedures are often known as curve fitting or nonlinear regression (19-22). In this situation the use of digital filters which reduce the noise while performing less line broadening than the matched filter may be advisable. Such filters, when adjusted to weak filtering (23), are the low-pass filters treated, e.g., by Savitzky and Golay (24), Kaiser and Reed (25), Ernst (12), Kosarev and Pantos (26), and Kauppinen et al. (27). However, it must be emphasized that from the view of optimal parameter estimation, all these filters, though they may be optimal in the sense of very sophisticated criteria, merely represent heuristical procedures which are in a certain sense arbitrary. Their use can only be justified when computational speed is more important than accuracy and precision, when initial estimates for iterative algorithms must be found, or when the a priori



knowledge is not sufficient for parameter estimation (e.g., unknown line shape). The desire for faster execution may also arise in cases where the lines do not overlap so that the matched filter is the optimal filter. To increase the computational speed, the filter function of a matched filter could be approximated by simpler functions (polynomials, trigonometric functions, exponentials (12)) than Gaussians or Lorentzians to enable a recursive (28) and therefore fast operation of the filter. However, such approximations are not without problems. It is well-known that a matched filter maximizes the SNR or, equivalently, the SNR efficiency considered in ref 15. Unfortunately, the SNR efficiency is no exact criterion for judging suboptimal approximations to the matched filter. That is, two different digital filters which perform the same SNR efficiency and which are normalized to give exact height estimates in the absence of noise may produce different height and position errors (in the mean!). Such problems have their origin in the nonlinearity of the height and position estimation procedure. Since the line position is unknown, the height estimate becomes biased. This bias completely stems from the noise. To understand this phenomenon, consider the random variable X = max{X₁, X₂, X₃}, where X₁, X₂, and X₃ are independent zero-mean random variables. Obviously, X has a mean value which is greater than zero and depends on the probability density function of the Xₙ (n = 1, 2, 3) as well as on their variance. This simple example can be transferred to the nonconstant mean case of spectrometric functions where this bias will additionally depend on the flatness of the line maximum. To our knowledge no explicit expression for this bias has been obtained for general signals. It is common in nonlinear estimation that only lower bounds for the estimation error such as the Cramér-Rao bound (which is associated with the Fisher information matrix (4)) are available. Under certain conditions (e.g., large SNR, negligible bias) these bounds may be replaced by approximate equations (2, 8). For the matched filter procedure this means that the (squared) height and position errors are approximately given by the noise variance divided by the square sum of the (noise-free) reference signal in the height case and divided by the square sum of the differentiated reference signal in the position case. However, in the general case including low SNR not even useful lower bounds are known, since the Cramér-Rao bound for biased estimates needs some knowledge about the bias. From this discussion, which applies to the optimal estimation procedure, it seems to be clear that there is not the slightest chance to find criteria for a theoretical comparison of two suboptimal digital filters. (To find the best suboptimal filter, even exact error expressions or at least reasonable upper error bounds would be necessary!)

The preceding discussion can be summarized as follows: (a) In the case of nonoverlapping lines of known shape superimposed by normally distributed white noise, matched digital filtering with a subsequent maximum search is the optimal maximum likelihood and least-squares estimation method for heights and positions. To enhance the computational speed, suboptimal matched filters may be reasonable. However, it is very difficult to obtain mathematical criteria for deciding which one of two suboptimal filters yields lower height and position errors. Only the height-error performance may approximately be judged by the relative SNR enhancement (= SNR efficiency (15)) when the absolute SNR is great enough. (b) In the case of overlapping lines, digital filtering fails to be a part of the optimal height and position estimation procedure, and iterative line fitting algorithms (19) must be chosen. To enhance the computational speed by several orders of magnitude, digital filters with restricted line broadening


(weak filtering) or resolution enhancing digital filters may be applied as a nonoptimal, heuristical method (in the sense of parameter estimation) to alleviate the interpretation of spectra or to get improved initial estimates for optimal algorithms. Consequently, it may be reasonable to use digital filters which maximize the computational speed while producing as few artifacts (15) as possible. Such empirical filters have been recently proposed by Bush (29).

The intention of this paper is to present a two-parameter digital filter formula which allows a computationally efficient implementation. This filter formula is derived from a recently proposed one-parameter shifted-triangular filter function (28) by adding a positive or negative constant to all filter coefficients within the filter width. The new filter function is also normalized to unit area. The first parameter (N) as usual varies the filter width to match various line widths. The second (new) parameter (α) allows a continuous change of the filter type: The cases α = 0, 1/2, 1 correspond to the simple moving average, the triangular filter (twice filtering with a moving average), and the second-moment conserving filter treated in ref 28, respectively. If α > 1, the filter is able to enhance resolution by reducing the equivalent width (15). For α values not too great, it will be shown that resolution enhancement can be achieved even simultaneously with noise reduction. The capability of certain digital filters to reduce noise while enhancing resolution has not yet been widely recognized. Usually these features are viewed as being mutually exclusive. However, in the region of small resolution enhancements some "overlap" occurs, as we shall show.
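
To make the estimation procedure discussed above concrete, the following sketch (Python) applies a matched filter followed by a maximum search to a single noisy Gaussian line. The line parameters, the noise level, and all names used here are our own illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic single-line spectrum: Gaussian line plus white, normally distributed noise.
k = np.arange(400)
beta, true_pos, true_height = 10.0, 187.0, 1.0
line = true_height * np.exp(-((k - true_pos) / beta) ** 2)
noisy = line + rng.normal(0.0, 0.1, k.size)

# Matched filter: impulse response equal to the (symmetric) noise-free line shape,
# scaled so that a noise-free line passes the filter with unchanged peak height.
m = np.arange(-40, 41)
g = np.exp(-(m / beta) ** 2)
h = g / np.sum(g ** 2)

filtered = np.convolve(noisy, h, mode="same")

# Maximum search: the location and value of the maximum are the estimates of line
# position and height (a lower bound on the height would serve as the simple
# line-detection criterion mentioned in the text).
pos_hat = int(np.argmax(filtered))
height_hat = float(filtered[pos_hat])
print(pos_hat, round(height_hat, 3))
```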

THEORY

Given a digital filter formula in operator notation

Af[k] = Σ_{n=−N}^{N} a[n] f[k − n]    (1)

with f and Af being the unfiltered and filtered spectra, respectively, and a being the filter function which depends on the filter width parameter N, a new two-parameter (N, a₁) filter function can be created by adding a constant a₁ to each filter coefficient within the filter width and renormalizing with a constant a₂ (eq 2).

While the constant a₂ is used to normalize the new filter function a for unit area (eq 14 of ref 15), a₁ represents a free continuous parameter which can essentially influence the filter characteristic by a simple "vertical" shift of the filter function a. a may be any filter function that is linearly independent of the moving average filter function of the same width N. Examples are the filter functions of the Savitzky-Golay smoothing filters of second and higher degree (23, 24) and the frequency sampling low-pass filters (30, 31). (In contrast to the polynomial form of Savitzky-Golay filter functions, frequency sampling filters have trigonometric (sine, cosine) filter functions. The reason for preferring polynomials over cosines in spectrometry seems to be mainly historical, since truncated polynomials do not necessarily fit better to spectral lines than truncated cosines. Similar to the polynomial filters, frequency sampling filters can be implemented both nonrecursively and recursively (30).) For computational expediency we have chosen the vertically shifted triangular filter introduced in ref 28 as the "reference" filter A. Hence the two-parameter filter function, say a_α, is given by

a_α[n] = (2α + 1)/(2N + 1) − [2α/(N(N + 1))] |n|,    |n| ≤ N    (3)

The corresponding filter operator shall be denoted by A_α. The continuous parameter α is equal to 1/(1 − (2N + 1)a₁). Especially, if α = 1, then a₁ = 0 and the new filter A_α = A₁ equals the reference filter A; see eq 1 and 2 of ref 28. The shape of the filter function a_α essentially looks like the shifted triangular function in Figure 1 of ref 28. Only the slope of the triangular sides, the negative side lobes, and the maximum height of a_α increase (decrease) with increasing (decreasing) α.
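
A minimal numerical sketch of eq 3 (Python; the helper names and test values are ours): it builds the coefficients a_α[n], confirms the unit-area normalization and the α = 0 moving-average limit mentioned in the introduction, and applies the filter by direct convolution (eq 1). The same `filter_coeffs` helper is reused in the sketches below.

```python
import numpy as np

def filter_coeffs(N, alpha):
    """Filter function a_alpha[n] of eq 3 for n = -N..N (unit area by construction)."""
    n = np.arange(-N, N + 1)
    return (2 * alpha + 1) / (2 * N + 1) - (2 * alpha / (N * (N + 1))) * np.abs(n)

def apply_filter(f, N, alpha):
    """Direct (nonrecursive) filtering according to eq 1 with a = a_alpha."""
    return np.convolve(f, filter_coeffs(N, alpha), mode="same")

N = 10
assert np.isclose(filter_coeffs(N, 2.0).sum(), 1.0)         # unit area for any alpha
assert np.allclose(filter_coeffs(N, 0.0), 1 / (2 * N + 1))  # alpha = 0: simple moving average
```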

In the following we discuss the filter properties and applications for various α values. Negative α values need not be considered since these lead to filters with highly unfavorable properties such as "line duplicating" (a_α[0] does not remain the maximum of the filter function a_α as demanded by the consideration in ref 15).

α = 0. In this special case, a_α equals the simple moving average filter function. The properties of this filter are well-known (15, 23) and need not be reconsidered here. Since it performs least noise amplification (white noise) subject to the constraints of unit area and a fixed filter width N, it might be advantageous when the loss of end points in the spectrum record must be kept as small as possible.

α = 1/2. This case leads to a filter type with some very suitable features which can be attributed to the fact that the resulting filter operation essentially equals twice filtering by a moving average. The filter function is positive and monotonic. The frequency response is even strictly positive (see eq 6 of ref 15) and does not exceed 1. This means that the filter with α = 1/2 (N arbitrary) exhibits all properties listed in Table I of ref 15, including mathematical invertibility (i.e., the original spectrum could be completely reconstructed without error from the filtered spectrum, if the filtering operation has been accomplished without numerical errors such as overflow (underflow) or truncation (rounding)). Additionally, among all α values, α = 1/2 produces the best high-frequency attenuation, if that should be reasonable. The filter with α = 1/2 is ideally suited for SNR enhancement of unknown spectrometric functions (15) and as an approximation to the matched filter. When using it as a matched filter approximation, the filter should be normalized for zero line height error (assuming absent noise) rather than for area conservation. Assuming symmetric lines, this can easily be accomplished by multiplying the filter function a_α by g[0]/A_αg[0], where g is the centered reference line of known shape and width. If the height bias introduced by searching the line maximum in the presence of noise is neglected, the statistical height estimation error for Gaussian lines is less than 1% greater than that of the ideal matched filter. (The height estimation error is equal to the inverse SNR (8). Thus the deterioration by suboptimal filtering can be directly obtained from the SNR efficiency (15), which is 99.8% for triangular filter functions, assuming sufficiently large N (23).) If the bias cannot be neglected, comparisons are only possible on the basis of empirical simulation studies which are beyond the scope of this paper. The same holds for the position estimation error as we have seen in the introduction. For matched filter applications, the filter width N must be fitted to the line width in a suitable way. For Gaussians, an approximate design rule might be N = 0.94 fwhm (2 × 0.47 fwhm), which we have adopted from Table I of ref 23, noting that the effective filter width of double filtering is twice the single filter width. At first sight, it seems to be a disadvantage that the width parameter cannot be varied continuously. To see how this problem can be overcome, we consider the cases α = N/(2N + 2) < 1/2 and α = (N + 1)/2N > 1/2. In the first case, a_α is the usual triangular filter function of width N. In the second case, we get a triangular function of width N − 1. (If α > (N + 1)/2N, a_α begins to show negative side lobes.)

Figure 1. Frequency responses of A_α (α = 0, 0.5, 1, 2; N = 10).

This fact suggests the variation of α continuously between N/(2N + 2) and (N + 1)/2N to simulate any noninteger filter width between N and N − 1. Obviously, this procedure is only necessary for small N. For N large enough, N/(2N + 2) ≈ (N + 1)/2N ≈ 1/2, and a stepwise variation of the filter width is sufficient.

α = 1. The case α = 1 yields the reference filter A which has been extensively studied in ref 28. It represents a computationally more satisfying alternative to the second degree Savitzky-Golay smoothing filter which is advantageous especially in the region of weak filtering (23). Additionally, 1 is the largest α value which enables frequency responses not exceeding 1. This implies that α = 1 marks the boundary between smoothing filters (15) and the resolution enhancement filters discussed next.

α > 1. If α > 1, the frequency response of the filter A_α increases beyond zero frequency and then falls off rapidly; see Figure 1, where the cases α = 0, 1/2, 1, and 2 (N = 10) are shown. Theoretically this can be explained by the relative increase in area of the negative side lobes of the filter function a_α (see Figure 1 of ref 15) which causes the second moment of a_α to change its sign; see eq 30. However, a negative second moment gives a positive second derivative of the frequency response at zero (see eq 37 of ref 30). Since the first derivative at zero frequency is zero (a_α is symmetric!), an increasing frequency response near zero frequency results. In the range of weak filtering, the systematic line height error (noiseless line) is proportional to the second moment of a_α in our case. If the second moment changes its sign, the error may change its sign also; i.e., the line height may increase by filtering! Because area is conserved exactly (eq 26, ref 15), this fact causes a possible decrease in equivalent width (15). A further description of the effect on spectral lines can be given using frequency (or Fourier) domain considerations. Since convolutions become multiplications in the frequency domain, it is easily seen that an increasing frequency response increases the line width in the frequency domain, provided that the frequency response does not fall off too soon. In the time (or data) domain this results in a line narrowing according to elementary Fourier transform theory (32). It should be remarked that frequency responses like those considered here can be imagined as a cascade of an ideal (self-) deconvolution filter (i.e., a filter that narrows a line of known shape and width into a (zero-width) delta impulse) and a low-pass filter. Ideal self-deconvolution of spectral lines shows some highly welcomed properties, when executed on ideal signals. In reality, even the smallest distortions like uncertainties in the signal, measurement noise, quantization noise in the A/D converter (33), and changes of the line position within one sampling time interval (especially when the line maximum is not exactly hit by a sampling point) drastically deteriorate the performance. The major shortcomings are a noise amplification near infinity and an excessive generation of side lobes, both simulating nonexistent lines. For those and


other reasons suitable approximations have been advised, including linear and nonlinear, terminating iterative and noniterative, and time domain and frequency domain (Fourier transform) methods (12, 33-44). (Time domain and frequency domain methods are mathematically equivalent if some precautions are observed (45). However, there may be computational advantages in using one or the other method.) The deconvolutional approach suggests another way of deriving resolution enhancement filters with limited noise amplification. In ref 30 we have shown how least-squares digital filters can be formally derived from ideal operations, e.g., smoothing filters from the identity operation and least-squares differentiation filters from the ideal differentiator. The orthonormal discrete functions to be used determine whether one obtains, for example, Savitzky-Golay filters (polynomials) or frequency sampling filters (trigonometric functions). This method can also be applied to self-deconvolution filters. Especially, if we orthogonalize the rectangular and the triangular function, the filter function in eq 3 results, with α being closely related to the shape and the width of the line to be self-deconvoluted. Obviously, if the noise amplification is reduced, the achievable resolution enhancement will also decrease. Besides resolution enhancement, filters with α > 1 can be used to equalize decreasing frequency responses of nonideal analog antialiasing and reconstruction filters (46), provided that the error is linear phase (integrating A/D converter). In the case of nonlinear phase shifting (RC filter) other (simple) equalizing filters must be used in addition to reduce the line deformation caused by the prefilter.

α >> 1. The resolution enhancement increases monotonically with increasing α. However, the gain in resolution rapidly converges to a fixed value. This can easily be shown when inspecting the filter function a_α (eq 3) for larger α values. We obtain the approximation

a_α[n] ≈ α (2/(2N + 1) − 2|n|/(N(N + 1)))    (4)

Clearly, α degrades to an ordinary scaling factor and the effect of the filter can mainly be described by the expression in parentheses on the right-hand side of eq 4. In fact, eq 4 essentially works like a second derivative filter function. The limitations of resolution enhancement by second derivative methods are well documented (38). On the other hand, for large α the noise amplification (eq 25) and the numerical range growth at the filter output (eq 28) are directly proportional to α without limit. Therefore it is quite unfavorable to increase α beyond a certain limit. This point will be further discussed in the Results and Discussion section.

IMPLEMENTATION

To take advantage of the computationally efficient structure of the vertically shifted triangular filter A (28), we write the filter A_α in the operational form

A_α = β₁B₁ + β₂B₂    (5)

where the digital filters B₁ and B₂ are defined by the filter functions b₁ (triangular) and b₂ (rectangular), respectively (see eq 16 and 17 of ref 28)

B₁f[k] = Σ_{n=−N}^{N} (N + 1 − |n|) f[k − n]    (6)

B₂f[k] = Σ_{n=−N}^{N} f[k − n]    (7)

Only the multipliers β₁ and β₂ depend on the parameter α

β₁ = 2α/(N(N + 1))    (8)

β₂ = −[2α(N + 1) − N]/[N(2N + 1)]    (9)

Thus, the new filtering method, as desired, only changes the two multipliers in the recursive realizations of A (eq 21, 27, and 32 of ref 28); the structure and the recursions remain unchanged. For this reason, the whole discussion in ref 28 concerning stability and word length effects essentially remains valid. It is only the (unfavorable) direct form realization (eq 14 and 15 of ref 28) which cannot be adopted, since the multiplications in the recursions must be exact; this cannot be realized with arbitrary α values. Furthermore, it must be considered that for α values larger than 1, together with a small filter width N, the magnitudes of β₁ and β₂ exceed 1 and thereby further increase the word length due to an increase in range growth (eq 28). But since this happens after the recursions, there is no influence on the stability. The complete direct form recursive realization of A_α is given by

g₁[k] = 2g₁[k − 1] − g₁[k − 2] + f[k + N] − 2f[k − 1] + f[k − N − 2]    (10)

g₂[k] = g₂[k − 1] + f[k + N] − f[k − N − 1]    (11)

A_αf[k] = β₁g₁[k] + β₂g₂[k]    (12)

(g₁ and g₂ may be identified with B₁f and B₂f, respectively; see eq 6 and 7.) For a proper initialization, the spectrum record f must begin with at least 2N + 2 zeros, e.g., f[1] = 0, f[2] = 0, ..., f[2N + 2] = 0. Additionally, g₁[N + 2], g₁[N + 1], and g₂[N + 2] must be set to zero before starting the algorithm at k = N + 3 in this case. The other (more efficient) recursive realizations treated in ref 28 can be adopted in exactly the same way by simply changing the multipliers (eq 27 and 32 of ref 28) according to β₁ and β₂. A great advantage of the vertically shifted triangular filter is that it can be decomposed into three independent ordinary moving average filters, which do not need any multiplications; compare eq 7. For that reason also computationally efficient nonrecursive realizations (which neither need initialization nor exact computation to guarantee stability) can be derived. For this purpose we start with eq 6 and 7 and replace B₁ and B₂, applied to a signal f, by

B₁f = C*Cf = CC*f    (13)

B₂f = Cf + C*f − f    (14)

where C and C* are digital filter operators defined by the filter functions c and c* (eq 22 and 23 of ref 28)

Cf[k] = Σ_{n=0}^{N} f[k − n]    (15)

C*f[k] = Σ_{n=−N}^{0} f[k − n]    (16)

That is, the triangular filter B₁ (eq 6) can be replaced by a cascade of two multiplierless moving average filters. At first sight the partitioning of the moving average filter B₂ into smaller-width moving average filters does not yield any computational advantages. However, using eq 5, 13, and 14, we may write

A_αf = β₁CC*f + β₂(Cf + C*f − f)    (17)

Here the filterings C*f, Cf, and CC*f = C(C*f) must be calculated, each needing N additions per filtered data point since C(C*f) can be computed from C*f by a further convolution. Thus C*f can be used for both B₁ and B₂, leading to 3N + 3 additions/subtractions. A further reduction to 2N + 3 can be obtained when realizing that Cf can be completely derived from C*f by a simple "time" shift

Cf[k] = C*f[k − N]    (every k)    (18)



Consequently, the nonrecursive realization of A_α according to eq 17 altogether needs 2N + 3 additions/subtractions and 2 multiplications, in contrast to 2N additions and N + 1 multiplications of a straightforward application of eq 3 which only takes into account the symmetry of a_α. If the number of additions should still be too great, a final reduction can be accomplished by using a cascade of several low-order convolutions instead of C*. This method has been investigated in detail by Fam (47). It is especially well suited for moving average filters whose number of coefficients is a power of two, e.g., N + 1 = 2^K. For example, let N = 7 (K = 3). Then C* can be replaced by a cascade of three filters whose filter functions, say c₁*, c₂*, and c₃*, are zero everywhere except for

c₁*[k] = 1    (k = 0, −1)    (19)

c₂*[k] = 1    (k = 0, −2)    (20)

c₃*[k] = 1    (k = 0, −4)    (21)

(The convolution of all three filter functions together indeed delivers the filter function c*.) The three convolutions with c₁*, c₂*, and c₃* only require 3 (= K) additions instead of 7 (= N) for the direct form (eq 16). A disadvantage of this method is that the structure of the algorithms depends on the filter width N and therefore requires a reorganization for every new N. (Roughly said, each N needs its own filter program.) If N + 1 is not a power of 2, the method is less efficient although the remaining gains are still impressive for larger N.

Precaution is advisable when trying to implement one of the time saving algorithms described above in BASIC. We have ascertained that the fast algorithms, especially when they include multiple filtering such as in eq 13, may require even more time than the direct implementation using eq 1 with a = a_α. The main reason for this possibly surprising fact is that the management of data arrays to be used for intermediate storage can be a time-consuming job in BASIC.
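
As a cross-check of the realizations above, the following sketch (Python, our own illustration; `filter_coeffs` is the helper defined earlier) implements the direct form recursive realization of eqs 10-12, using zero padding in place of the prescribed leading zeros, and verifies it against a straightforward convolution with eq 3.

```python
import numpy as np

def filter_coeffs(N, alpha):
    n = np.arange(-N, N + 1)
    return (2 * alpha + 1) / (2 * N + 1) - (2 * alpha / (N * (N + 1))) * np.abs(n)

def apply_recursive(f, N, alpha):
    """Direct form recursive realization (eqs 10-12): two multiplications and a few
    additions per data point, independent of the filter width N."""
    beta1 = 2 * alpha / (N * (N + 1))                        # eq 8
    beta2 = -(2 * alpha * (N + 1) - N) / (N * (2 * N + 1))   # eq 9
    P = 2 * N + 2                                            # leading zeros for initialization
    x = np.concatenate([np.zeros(P), np.asarray(f, float), np.zeros(N)])
    g1 = np.zeros(x.size)
    g2 = np.zeros(x.size)
    y = np.zeros(x.size)
    for k in range(N + 2, x.size - N):                       # g1, g2 are zero before this k
        g1[k] = 2*g1[k-1] - g1[k-2] + x[k+N] - 2*x[k-1] + x[k-N-2]   # eq 10 (triangular B1)
        g2[k] = g2[k-1] + x[k+N] - x[k-N-1]                          # eq 11 (rectangular B2)
        y[k] = beta1 * g1[k] + beta2 * g2[k]                         # eq 12
    return y[P:P + len(f)]

rng = np.random.default_rng(1)
f = rng.normal(size=200)
N, alpha = 8, 3.0
direct = np.convolve(f, filter_coeffs(N, alpha), mode="same")
assert np.allclose(apply_recursive(f, N, alpha), direct)
```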

RESULTS AND DISCUSSION

This section is mainly devoted to the region α > 1 where A_α is able to operate as a noise-reducing filter which conserves or even enhances spectral resolution. Of special interest is the adjustment of the two parameters filter width (N) and α in eq 3 or 8 and 9. We have only examined the qualitative behavior of A_α with varying parameters. A detailed numerical study is beyond the scope of the present paper. Obviously, two parameters are much more difficult to control properly than a single variable N, especially when there is some "correlation" between them. For that reason we proposed in the Theory section to select first the filter type by fixing α (e.g., α = 0, simple moving average; α = 1/2, fool-proof filter for unknown spectra in the sense of ref 15 or approximation to matched filter; α = 1, substitute for second degree Savitzky-Golay smoothing filter (weak filtering)) and then predetermine the noise amplification or the SNR improvement in agreement with the possibly a priori known line width by adjusting N. For filters with α > 1 we did not always follow this way. Sometimes, for example, it might be convenient to keep the noise amplification constant (using the parameter α) and vary the filter width N. First we study the effect of A_α on single lines. Because it simply depends on the quotient of filter width to line width (except for extremely small filter widths N), we are able to restrict the empirical analysis to the single line width β = 10 of a Gaussian line

f[k] = exp(−k²/β²)    (22)

This corresponds to a full width at half maximum (fwhm) of approximately 17 data points. Thus a variation in N/fwhm is accomplished by simply changing N.

Figure 2. Filtered Gaussian lines. Filter width constant (N = 5). α = 1 (smallest line), 2, 3, 4, 5 (largest line).

In our empirical study we have fixed one variable (e.g., N/fwhm) while varying the other (e.g., α) to find out the qualitative influence of each variable on line height, side lobes, and line width.

Noise Amplification. The effect of A_α on the signal must always be considered in connection with its effect on the noise. The noise amplification can be derived from theory; see eq 25 of the Appendix. From this equation we can conclude: (a) The noise amplification (assuming random white noise) monotonically increases with increasing α (N fixed). (b) The noise amplification monotonically decreases with increasing filter width N (α fixed).

Line Height. (a) The height of the filtered line monotonically increases with increasing α (N/fwhm fixed). Especially, the height of the filtered line may become equal to or greater than the height of the original line. This fact allows the construction of height-and-area conserving filters if the line width is known a priori: While Savitzky-Golay smoothing filters have zero line height error only for zero N/fwhm ratios (23), A_α allows zero line height error with nonzero N/fwhm ratios if α > 1. Figure 2 shows the effect of varying α (= 1, 2, 3, 4, 5) on the noiseless Gaussian line f (eq 22). For α = 1, there is no visible deviation from the unfiltered line so that we could renounce a figure of f. The corresponding noise amplifications are 0.46, 0.77, 1.10, 1.45, and 1.80. (b) If α is fixed, the height of the filtered line shows a maximum relative to N/fwhm. That is, in contrast to Savitzky-Golay filters and A₁, the height first increases before decreasing at great N/fwhm values. It must be noted that for α > 1, the line height error need not be the maximum error (15). As a result, serious line shape deformations cannot be controlled by considering merely the line height error.

Side Lobe Behavior. An inherent disadvantage of most resolution enhancement algorithms is their production of negative and positive side lobes which are not due to the noise. Ernst (12), for example, has proposed a filter which can be continuously varied between an exact matched filter and an ideal self-deconvolution filter. The advantage of this filter over A_α is its wider adjustment range. (A_α only achieves a very limited resolution enhancement.) On the other hand, A_α has a good-natured side lobe structure, i.e., at most two negative side lobes can be created (one on each side of the filtered line), whereas Ernst filters may produce a large number of negative and positive side lobes (see Figures 1 and 2 of ref 33). We have observed the following side lobe properties of A_α: (a) The amplitude (regarding absolute values) of the (negative) side lobes monotonically increases with increasing α (N/fwhm fixed); see Figure 2. (b) If α is fixed, the side lobe amplitude shows a maximum relative to N/fwhm. (This effect can also be observed in Figure 4 of ref 23.) If N/fwhm is not too large (the limit decreases with increasing α), the negative side lobes can be neglected.
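
These statements can be checked numerically from eq 3 alone. The short sketch below (our own, reusing the `filter_coeffs` helper from above) reproduces the noise amplifications quoted for Figure 2 and shows that both the noise amplification and the filtered peak height of the noiseless Gaussian line of eq 22 grow monotonically with α at fixed N = 5.

```python
import numpy as np

def filter_coeffs(N, alpha):
    n = np.arange(-N, N + 1)
    return (2 * alpha + 1) / (2 * N + 1) - (2 * alpha / (N * (N + 1))) * np.abs(n)

N, beta = 5, 10.0
line = np.exp(-np.arange(-N, N + 1) ** 2 / beta ** 2)   # eq 22, sampled on the filter support

amps, heights = [], []
for alpha in (1, 2, 3, 4, 5):
    a = filter_coeffs(N, alpha)
    amps.append(np.sqrt(np.sum(a ** 2)))    # noise amplification ||a_alpha|| (cf. eq 25)
    heights.append(np.sum(a * line))        # peak height of the filtered noiseless line

print(np.round(amps, 2))                    # matches 0.46, 0.77, 1.10, 1.45, 1.80 quoted above
assert all(np.diff(amps) > 0) and all(np.diff(heights) > 0)
```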

Figure 3. Unfiltered Gaussian line (lowest line) and its filtered versions. Constant noise amplification (=1). Filter width N = 4 (second smallest line), 6, 8, 10 (largest line).

Figure 4. Unresolved Gaussian doublet (smallest doublet) and its filtered versions. α = 5. N = 5 (second smallest doublet), 7, 9, 11, 13, 15 (largest doublet).

Equivalent Width. A_α is designed to conserve area for any α and N. Since the equivalent width is defined by the quotient line area/line height, an increase in line height therefore implies a decrease in equivalent width. Therefore, we are able to use the results for the line height. (a) The equivalent width of a filtered line monotonically decreases toward zero with increasing α (N/fwhm fixed). (b) The equivalent width possesses a minimum relative to N/fwhm (α fixed). These results are to be handled with care. Like most line width measures, the equivalent width has its own problems. Indeed, A_α shows some similarities with the optimal equivalent-width-decreasing digital filter (optimum subject to the constraints of a fixed filter width N and a fixed noise amplification). The procedure of optimizing, however, often shamelessly utilizes the weak points of a criterion. In the case of equivalent width, the weak points are the negative side lobes. In order to keep the line area unchanged while increasing the line height, the filter produces these side lobes to compensate for the increase in "positive" area. Fortunately, the line narrowing effect is not completely cancelled. This can be shown when using the fwhm as the width measure. The fwhm of the filtered lines behaves like the equivalent width, except for the nonzero limit for increasing α (N/fwhm fixed). Here the fwhm works well since it ignores the negative side lobes. In other cases, however, the fwhm may also show artifacts. We have already mentioned in the Theory section that the decrease in line width is a rapidly converging process. Therefore, too large an α value (perhaps α > 10) is quite unreasonable, especially in view of the fact that the noise amplification approximately increases linearly with α. Since those cases where the noise amplification is smaller than or equal to one might be of special interest, we have filtered the line f (eq 22) with different widths N (= 4, 6, 8, 10) such that the noise amplification equals one. The corresponding α values (= 2.39, 2.96, 3.44, 3.86) can be derived from eq 25

α² = 3N²(N + 1)/(2N² + 2N + 2)    (||a_α|| = 1)    (23)

The filtered noiseless lines are shown in Figure 3 together with the original line f.

Resolution. As has been noted in the introduction, the application of digital filters which perform reduced line broadening or even resolution enhancement can under certain circumstances be justified when lines overlap before filtering. For illustration we have chosen the nonresolving doublet

f[k] = [exp(−(k − 7)²/100) + exp(−(k + 7)²/100)]/2    (24)
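
A short sketch of this experiment (our own illustration, again using `filter_coeffs` from above): eq 23 gives the α value with unit noise amplification for a chosen N, and filtering the noiseless doublet of eq 24 with N = 8 produces a dip at k = 0 that the unfiltered doublet does not have.

```python
import numpy as np

def filter_coeffs(N, alpha):
    n = np.arange(-N, N + 1)
    return (2 * alpha + 1) / (2 * N + 1) - (2 * alpha / (N * (N + 1))) * np.abs(n)

def alpha_unit_noise(N):
    """alpha with ||a_alpha|| = 1, i.e., unit noise amplification (solves eq 23)."""
    return N * np.sqrt(3 * (N + 1) / (2 * N**2 + 2 * N + 2))

print([round(alpha_unit_noise(N), 2) for N in (4, 6, 8, 10)])   # 2.39, 2.96, 3.44, 3.86

k = np.arange(-100, 101)
doublet = (np.exp(-(k - 7) ** 2 / 100) + np.exp(-(k + 7) ** 2 / 100)) / 2   # eq 24

N = 8
filtered = np.convolve(doublet, filter_coeffs(N, alpha_unit_noise(N)), mode="same")

c = k.size // 2                              # index of k = 0
assert doublet[c] > doublet[c + 7]           # unfiltered: single unresolved maximum
assert filtered[c] < filtered[c + 7]         # filtered: the doublet is resolved
```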


Figure 5. Noisy Gaussian doublet and its filtered version, noise amplification one (N = 8, α = 3.44).

Figure 4 shows the noiseless doublet f (eq 24) and its filtered versions A_αf with α = 5 and N = 5, 7, 9, 11, 13, 15 (||a_α|| = 1.80, 1.53, 1.36, 1.23, 1.13, 1.06). (a) From Figure 4 it is obvious that the resolution enhancement has a maximum relative to N/fwhm (α fixed). (b) The resolution enhancement monotonically increases (toward some fixed value) with increasing α (N/fwhm fixed). Interestingly, a good resolution enhancement can already be achieved for α and N values which do not cause excessive side lobes.

Finally, we have filtered a noisy version of the doublet f (eq 24) obtained by adding uniformly distributed white noise. The result is shown in Figure 5. We have chosen the filter width N = 8. The parameter α has been adjusted to give unit noise amplification (||a_α|| = 1, α = 3.44). Surprisingly, the noise amplitude appears to be reduced. The reason for this fallacy is the redistribution of the noise in the frequency domain according to the frequency response in Figure 1. High-frequency noise is reduced at the expense of (signal-like!) low-frequency noise. This should be taken into account when the spectrum is analyzed visually. In that case it is also not recommendable to further improve the high-frequency attenuation of A_α, since this noise is a certain safeguard against misinterpreting the low-frequency noise as being signal structures (48). (An impressive demonstration of the relatively low influence of the remaining high-frequency noise (compared with its low-frequency parts) on the visual representation is provided by Figure 7 of a paper by Kauppinen et al. (27).) However, this discussion does not necessarily apply to a precise determination of line parameters of an already detected line with the aid of a computer. Here it may further be advisable to normalize A_α for line height conservation rather than for area conservation.



The determination of the normalizing constant (see eq 33 of ref 15 for unit height reference lines f), however, requires the a priori knowledge of the line shape and the line width.

A final comment on the negative regions of the frequency response of A_α (Figure 1) seems to be appropriate. In ref 15 we have shown that a positive frequency response is always advantageous, at the expense of a larger number of nonzero filter coefficients a[n]. Recently, Marchand and Marmet have shown that such negative regions moreover can produce misleading results when a group of closely spaced lines is filtered with a large N/fwhm ratio (strong filtering); see Figure 3 of ref 49. This should be taken into consideration when the filter A_α (α ≠ 1/2) is misused for strong filtering. However, no such artifacts are to be feared in that N/fwhm range where A_α shows the merits described above.
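
A sketch of the height-conserving normalization mentioned above (our own illustration; the Gaussian reference line, the parameter values, and all names are assumptions): the filter coefficients are rescaled by g[0]/A_αg[0], where g is the known noise-free reference line, so that a noiseless line keeps its peak height; area conservation is given up in exchange.

```python
import numpy as np

def filter_coeffs(N, alpha):
    n = np.arange(-N, N + 1)
    return (2 * alpha + 1) / (2 * N + 1) - (2 * alpha / (N * (N + 1))) * np.abs(n)

N, alpha, beta = 8, 3.44, 10.0
n = np.arange(-N, N + 1)
g = np.exp(-n ** 2 / beta ** 2)          # centered reference line of known shape and width

a = filter_coeffs(N, alpha)
scale = g[N] / np.sum(a * g)             # g[0] / (A_alpha g)[0]
a_height = scale * a                     # height-conserving coefficients (area no longer 1)

print(round(scale, 3), round(float(np.sum(a_height * g)), 3))   # second value is exactly 1.0
```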

APPENDIX

The following formulas may be useful in more detailed numerical studies of the filter A_α. To get a feeling for the qualitative behavior of A_α (which is completely represented by its filter function a_α, eq 3), approximations for large N and large α are given. For this purpose we always use the signs ≈ (large N) and ∼ (large α). The notation is the same as in ref 15 or ref 28.

(a) Squared noise amplification (random white noise)

||a_α||² = [4α²(N² + N + 1) + 3N(N + 1)] / [3N(N + 1)(2N + 1)] ≈ (4α² + 3)/(6N) ∼ 2α²/(3N)    (25)

(b) Range growth input-output (15)

||a_α||₁ = 1 if 0 ≤ α ≤ (N + 1)/2N    (27)

min_α ||a_α||₁ = 1

(c) Second moment of a_α

μ₂(a_α) = (1 − α)N(N + 1)/3 ≈ (1 − α)N²/3 ∼ −αN²/3    (30)

min_α |μ₂(a_α)| = 0    (α_min = 1)    (31)

(d) Proportionality constant for line height in the case of strong filtering (28)

(e) Proportionality constant for line height error in the case of weak filtering (28)

μ₂(a_α)||a_α||⁴ ≈ (1 − α)(4α² + 3)²/108    (34)

min_α |μ₂(a_α)| ||a_α||⁴ = 0.0732    (α_min = 3/10)    (35)

The minimum in eq 35 exclusively extends over those α values which yield positive filter functions (0 ≤ α ≤ (N + 1)/2N). The absolute theoretical minimum for positive filter functions of this type is achieved with the (L, M) = (1, 0) filter discussed in ref 30. Its value amounts to 0.072.

LITERATURE CITED

(1) Sorenson, H. W. "Parameter Estimation"; Marcel Dekker: New York, 1980.
(2) Van Trees, H. L. "Detection, Estimation, and Modulation Theory", Part I; Wiley: New York, 1968.
(3) Himmelblau, D. M. "Process Analysis by Statistical Methods"; Wiley: New York, 1970.
(4) Kelly, P. C.; Harris, W. E. Anal. Chem. 1971, 43, 1170-1183.
(5) Kelly, P. C.; Harris, W. E. Anal. Chem. 1971, 43, 1184-1195.
(6) Ahlers, F. J.; Lohse, F.; Spaeth, J.-M.; Mollenauer, L. F. Phys. Rev. B 1983, 28, 1249-1255.
(7) Niklas, J. R.; Bauer, R. U.; Spaeth, J.-M. Phys. Status Solidi B 1983, 119, 171-178.
(8) Wainstein, L. A.; Zubakov, V. D. "Extraction of Signals from Noise"; Dover: New York, 1962.
(9) Münzel, F. Arch. Elekt. Übertragung 1982, 36, 160-164.
(10) Kwakernaak, H. Automatica 1980, 16, 367-377.
(11) Woodward, P. M. "Probability and Information Theory, with Applications to Radar", 2nd ed.; Pergamon: New York, 1964.
(12) Ernst, R. R. Adv. Magn. Reson. 1966, 2, 1-135.
(13) Lam, R. B.; Sparks, D. T.; Isenhour, T. L. Anal. Chem. 1982, 54, 1927-1931.
(14) Horlick, G. Anal. Chem. 1973, 45, 319-324.
(15) Bromba, M. U. A.; Ziegler, H. Anal. Chem. 1983, 55, 648-653.
(16) Poulisse, H. N. J. Anal. Chim. Acta 1979, 112, 361-374.
(17) Brown, T. F.; Brown, S. D. Anal. Chem. 1981, 53, 1410-1417.
(18) Seelig, P. F.; Blount, H. N. Anal. Chem. 1979, 51, 327-337.
(19) Maddams, W. F. Appl. Spectrosc. 1980, 34, 245-267.
(20) Westerberg, A. W. Anal. Chem. 1969, 41, 1770-1777.
(21) Anderson, A. H.; Gibb, T. C.; Littlewood, A. B. Anal. Chem. 1970, 42, 434-440.
(22) Boudreau, P. A.; Perone, S. P. Anal. Chem. 1979, 51, 811-817.
(23) Bromba, M. U. A.; Ziegler, H. Anal. Chem. 1981, 53, 1583-1586.
(24) Savitzky, A.; Golay, M. J. E. Anal. Chem. 1964, 36, 1627-1639.
(25) Kaiser, J. F.; Reed, W. A. Rev. Sci. Instrum. 1977, 48, 1447-1457.
(26) Kosarev, E. L.; Pantos, E. J. Phys. E 1983, 16, 537-543.
(27) Kauppinen, J. K.; et al. Appl. Opt. 1982, 21, 1866-1872.
(28) Bromba, M. U. A.; Ziegler, H. Anal. Chem. 1983, 55, 1299-1302.
(29) Bush, I. E. Anal. Chem. 1983, 55, 2353-2361.
(30) Bromba, M. U. A.; Ziegler, H. Int. J. Circuit Theory Appl. 1983, 11, 7-32.
(31) Koch, A. TM, Tech. Mess. 1982, 49, 221-226.
(32) Bracewell, R. N. "The Fourier Transform and Its Applications", 2nd ed.; McGraw-Hill: New York, 1978.
(33) Wittbold, W. M.; et al. J. Magn. Reson. 1980, 39, 127-135.
(34) Schafer, R. W.; Mersereau, R. M.; Richards, M. A. Proc. IEEE 1981, 69, 432-450.
(35) Kauppinen, J. K.; et al. Appl. Spectrosc. 1981, 35, 271-276.
(36) Kauppinen, J. K.; et al. Appl. Opt. 1981, 20, 1866-1879.
(37) Kauppinen, J. K.; et al. Anal. Chem. 1981, 53, 1454-1457.
(38) Griffiths, T. R.; King, K.; Hubbard, H. V. S. A. Anal. Chim. Acta 1982, 143, 163-176.
(39) Kennett, T. J.; Prestwich, W. V.; Robertson, A. Nucl. Instrum. Methods 1978, 151, 285-292.
(40) Kennett, T. J.; Prestwich, W. V.; Robertson, A. Nucl. Instrum. Methods 1978, 151, 293-301.
(41) Kennett, T. J.; et al. Nucl. Instrum. Methods 1978, 153, 125-135.
(42) Horlick, G. Anal. Chem. 1972, 44, 943-947.
(43) Cernansky, M. J. Appl. Crystallogr. 1983, 16, 103-112.
(44) Grabaric, B. S.; O'Halloran, R. J.; Smith, D. E. Anal. Chim. Acta 1981, 133, 349-356.
(45) Lam, R. B.; Wieboldt, R. C.; Isenhour, T. L. Anal. Chem. 1981, 53, 889A-901A.
(46) Willson, P. D.; Edwards, T. H. Appl. Spectrosc. Rev. 1976, 12, 1-81.
(47) Fam, A. T. IEEE Trans. Acoust., Speech, Signal Process. 1981, ASSP-29, 1128-1136.
(48) Marmet, P. Rev. Sci. Instrum. 1979, 50, 79-83.
(49) Marchand, P.; Marmet, L. Rev. Sci. Instrum. 1983, 54, 1034-1041.

RECEIVED for review March 5, 1984. Accepted May 14, 1984.