REAL-TIME DIGITAL FILTERS
Finite Impulse Response Filters

Stephen E. Bialkowski
Department of Chemistry and Biochemistry, Utah State University, Logan, Utah 84322

In the past, chemists were not concerned with filtering, because data were obtained using analog instrumentation with hardware analog filters. The most common implementation of a filter consisted of a network of resistors and capacitors that shaped the frequency characteristics of signal transfer. However, with the recent advent of affordable digital processor-based data acquisition systems, real-time digital filtering is becoming an ever-increasing facet of the modern analytical laboratory. Proper use of digital filters can result in data with dramatically improved signal-to-noise (S/N) ratios and in the simplification of complex information. Digital filtering is not a magical procedure for data transformation. The trick to proper implementation of the digital filter is prior knowledge of the system's signal and noise components. In this A/C INTERFACE, three types of finite impulse response smoothing filters—moving average filters, matched filters, and Kalman innovation filters—are discussed. Moving average filters, such as the Savitzky-Golay, are useful in situations in which the noise is known to be stationary and the form of the signal is not known. If the signal is known, then matched filter smoothing is the best that can be done. If the noise is not stationary, or if there are several coherent signal components, then the Kalman innovation filter must be used to filter the measurement. The other type of real-time digital filter, the infinite impulse response filter, will be discussed in part two of this series.

0003-2700/88/0360-355A/$01.50/0 © 1988 American Chemical Society

A/C INTERFACE

Real-time digital filters

Real-time digital filtering is certainly not new. It is routinely used in image-processing applications, communications, transportation engineering, and instrument design (1-4). In fact, many of today's miracles are possible only as a direct result of real-time digital filtering. It might be impossible to fly the space shuttle, monitor air traffic at a large airport, or optimize network communication systems without some form of real-time digital filtering. Precise theories for real-time digital filters were developed in advance of the current widespread practice. Even so, there often remains an element of trial and error in the use of filters to improve data quality. But data quality is no more a subjective criterion than digital filters are random signal processors.

The trial-and-error aspect of filtering is mostly attributable to a lack of knowledge of the expected signal and noise components in a measurement, and it need not exist. The fact that the purpose of digital filtering is to improve data quality by rejection of unwanted signal or noise components in itself implies some prior knowledge of the signal and noise components. Use of this prior knowledge to implement real-time digital filters without trial and error will be discussed.

Theory

In a broad sense, a filter is a process that can reduce the quantity of information, thereby translating it into a simpler, more interpretable form. In other words, the digital filter can extract the important information from a complex signal. The digital filter is a highly flexible, programmable process that may be used to perform complex data manipulation and reduction. Digital filters are divided into two main

ANALYTICAL CHEMISTRY, VOL. 60, NO. 5, MARCH 1, 1988 · 355 A

Figure 1. A schematic illustrating the production of a sampled, discrete-time measurement from individual components. The important input to the system is a continuous-time signal, s[t]. This signal is added to system noise, v[t], to produce a continuous output, x[t]. Because of periodic sampling of a continuous physical process, illustrated as a switch, the output is a discrete-time sequence, x[n].

classes: batch and real time (2). The batch filter is a postprocessor that uses the entire data set stored on suitable media; it can be implemented at any time after the measurement process. Because computation time constraints are not critical, batch filters can perform extremely complex processes. The real-time digital filter, on the other hand, does not require all data to be obtained prior to processing and thus can operate while data are being produced and sampled. Because of finite computation times and the fact that real-time filtering can be concurrent with data sampling, implementations of these filters are generally linear processes whereby the filtered signal is an arithmetic combination of discrete-time input data.

Compared with batch filters, real-time filters offer distinct advantages in flexibility, decision making, and data compression. The real-time digital filter can be used to process data while data are being collected or in off-line mode for batch filter processing; the real-time filter is insensitive to the origin of the data. Decisions regarding the extent to which data are collected, or whether or not to terminate a collection process because an expected signal is absent, can be made even while the data are being collected. In addition, reduction of data can be performed by

rejecting signals attributable to interferences or noise components. Because the amount of information is reduced, the required storage capacity can often be reduced as well.

Digital filtering implies a discrete or quantized measurement. The conversion of a continuous process into a discrete one is covered in depth in most texts addressing digital filtering (1-6). The digitization process is illustrated in Figure 1. Upon digital conversion of a continuous electrical signal, quantization occurs in both time and magnitude of the measurement. Periodic sampling converts a continuous process into a discrete-time one that is characterized by the sample period or interval. It is well established that the maximum frequency that can be discerned is given by the Nyquist frequency, equal to half the inverse of the sample interval time. The appearance of high-frequency signals as low-frequency ones can occur, which results in an apparent signal of frequency equal to the difference between the real signal and the sampling frequency. This aliasing can be avoided by employing a low-pass Nyquist frequency filter prior to the analog-to-digital converter (4). Magnitude quantization through analog-to-digital conversion results in a limited precision of the sampled value as well as a limited range of measurements. Thus the greater the number of bits in a converted digital word, the more accurate the sampled result.

Considering all the potential errors attributable to quantization in conversion, round-off and truncation errors in digital computation, and the discrete representation of the impulse response function discussed below, one might wonder why digital filtering is even used. There are three reasons. First, there are no noise sources after conversion into digital form other than computation round-off errors because of quantized number representation. In contrast, every component of an analog filter is a noise source. As an analog filter circuit becomes more complex, a point is reached where the noise dominates the signal. Thus there is a practical limit to the complexity of an analog filter. Second, digital filters can perform operations that cannot be performed with analog filters. There is simply no analog equivalent to certain important digital filter processes. Third, even if analog filters can perform particular processes, digital filters offer flexibility. It is often easier to change a line of computer code than it is to change even a resistor in a passive analog filter. Moreover, it could well be more cost-effective in the long run to use a digital filter when changes in parameters are often necessary.

There are two main types of real-time digital filters: finite impulse response (FIR) and infinite impulse response (IIR) (6). The FIR filter uses the input over a finite, usually fixed range to produce the current output. In other words, output is a function only of the limited past and present inputs. Although FIR filters generally are nonrecursive in that the current output is not a function of past output values, they may sometimes be cast into recursive output forms. IIR filters are recursive filters that relate the present output to immediate past outputs and present inputs.
Because of the recursive nature of the output, the output represents functions of all past inputs, thus the name "infinite impulse response."

Digital filters are further classified according to their functions (5). Smoothing filters estimate the signal within the interval of available measurements, prediction filters estimate the signal outside of the interval of available values, and interpolation filters estimate the signal between discrete measurement values. Measurements typically are classified by the expected noise processes and by the time-dependent signal behavior (3, 5). Digital filtering will be addressed in the context of stationary and cyclo-stationary noise processes. A stationary noise process is nonperiodic and thus has a noise power spectrum that does not change with time. In combination with time-dependent signals, stationary noise results in normally distributed data. A cyclo-stationary noise process is one in which the power spectrum may change in time, but in a cyclic fashion. Cyclo-stationary noise processes generally are associated with causal systems which, as the name implies, have a cause or starting point. For example, the starting point could be the pulse of a laser used to excite a sample or the start of a mirror scan in an FT-IR spectrometer. The range of measurement times is finite and generally fixed in cyclo-stationary processes. Finally, stationary noise processes are not limited to noncausal systems. Generally, both stationary and cyclo-stationary noise are present in causal systems.

Why should anyone be interested in all of these definitions? Because of the many types of real-time digital filters available, one must be selective. Although the filters described below cover a wide range of measurement situations, this article does not provide a comprehensive survey of the possibilities. FIR smoothing filters for both noncausal and causal systems, most of which assume stationary noise, will be discussed. The important exception is the Kalman innovation filter, which is useful for measurement situations in which cyclo-stationary noise predominates.
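The FIR/IIR distinction above can be made concrete with a short sketch (Python is used here purely for illustration; no code appears in the original article): an FIR filter combines a finite window of inputs, so its impulse response dies out, whereas a recursive one-pole smoother (an IIR filter) never entirely forgets an input.

```python
# Illustrative contrast between FIR and IIR filtering (not from the article).

def fir_filter(x, h):
    """FIR: each output is a weighted sum of the last len(h) inputs.
    y[n] = sum_k h[k] * x[n - k], with x[n - k] taken as 0 before the start."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k in range(len(h)):
            if n - k >= 0:
                acc += h[k] * x[n - k]
        y.append(acc)
    return y

def iir_exponential(x, alpha):
    """IIR: the output depends on the previous output, so every past input
    contributes, with geometrically decaying weight."""
    y = []
    prev = 0.0
    for xn in x:
        prev = alpha * xn + (1.0 - alpha) * prev
        y.append(prev)
    return y

x = [1.0, 0.0, 0.0, 0.0, 0.0]           # unit impulse input
print(fir_filter(x, [0.5, 0.3, 0.2]))    # → [0.5, 0.3, 0.2, 0.0, 0.0]  (finite response)
print(iir_exponential(x, 0.5))           # → [0.5, 0.25, 0.125, 0.0625, 0.03125]  (never exactly zero)
```

Feeding each filter a unit impulse makes the naming self-evident: the FIR output is the weight function itself and then stops, while the IIR output decays forever.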

Moving average smoothing filters

Moving average smoothing filters calculate a weighted mean over a finite range of measurement values. Consider a sequence of discrete-time measurements illustrated in Figure 1 and described by a linear combination of signal and noise terms

x[n] = s[n] + v[n]   (1)

where x[n] is the measurement, s[n] is the signal, v[n] is the noise, and n = 0, 1, 2, . . . . The sample interval, T_s, is a constant, and thus the integer index, n, is sufficient to locate the measurement in time. For simplicity, the sampling operation is always operating (noncausal data) so that the range of n is infinite in both the forward and reverse time directions. A linear filter is defined as the superposition of weights, h[k], with the measurement to produce the output, y[n],

y[n] = Σ_{k=−N}^{N} h[k]x[n − k]   (2)

The filter as defined above is linear because the output is the sum of individual terms obtained by applying the weight function to the inputs one at a time. This summation is the discrete version of the convolution integral. The weight function is called the impulse response of the filter. For an impulse input, that is, one where x[n] equals zero for all but one n, the output, y[n], will be the weight function, h[k], located at n. Thus the impulse response function definition follows from the physical situation.

In this particular moving average definition, the weights are applied from k = n − N to k = n + N for an output at n. The output will therefore lag behind the most current measurement by N sample intervals. Strictly speaking, this filter is not real time. The output index is chosen such that the output is an estimate of the signal at this index. The odd number (2N + 1) of symmetric, monotonic weights is generally sufficient to eliminate any ambiguity in the assignment of the index to y[n]. Causal data filters use a different index range. In this case the measurement has a starting point. Filter smoothing of the first N points in this finite sequence can be performed with a filter that adapts to the number of measurement values available.

A useful example of the moving average filter is that of the simple average (6). The simple average filter has an impulse response with equal weights for all finite h[k]. This filter can be used for data smoothing when the signal is expected to change on a time scale greater than that of the sample interval. It is generally convenient to use unit gain filters. Because the expected signal is a constant over the filter interval, the gain may be defined as the filter output for unit input signal. Thus

gain = Σ_{k=−N}^{N} h[k]   (3)

The h[k] for unity gain are found from

h[k] = 1/(2N + 1)   (4)

for all nonzero h[k]. For each output index, n, this filter yields the mean of x[n] over an interval from n − N to n + N. The index is updated when the next value of x is available. This process is illustrated schematically in Figure 2. The measurements are delayed such that all x[n] required for the calculation are simultaneously available. The improvement that this filter provides over a single measurement may be seen by examining the effect of the filter on the signal and noise terms. From the definition of the filter output in Equation 2,

Figure 2. A schematic of the operation of the real-time digital filter. The input to the filter is a discrete-time sequence, x[n], attributable to periodic sampling of a continuous physical process. The filter can be any number of mathematical or logical operations to produce the output, y[n]. The amplifier symbols are used to indicate a gain based on the impulse response function, h[k].
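As a concrete sketch of Equations 2 and 4 (Python, illustrative only, not from the article): the unity-gain simple average applies equal weights h[k] = 1/(2N + 1) over the window from n − N to n + N, so each output lags the newest measurement by N samples.

```python
def simple_moving_average(x, N):
    """Unity-gain simple average of Equations 2 and 4:
    y[n] = (1/(2N+1)) * sum_{k=-N}^{N} x[n-k].
    Only indices where the full window fits are returned, reflecting the
    N-sample lag of the output behind the newest measurement."""
    width = 2 * N + 1
    h = [1.0 / width] * width            # equal weights; gain = sum(h) = 1
    y = []
    for n in range(N, len(x) - N):
        y.append(sum(h[k + N] * x[n - k] for k in range(-N, N + 1)))
    return y

print(simple_moving_average([1, 2, 3, 4, 5], 1))   # ≈ [2.0, 3.0, 4.0]
```

Each output is the mean of the three measurements centered on it, exactly the delayed-window operation sketched in Figure 2.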


Figure 3. Frequency dependence of three simple average filters with 2N + 1 equal to 5, 11, and 25 points. The index is proportional to the frequency. The frequency corresponding to index n is determined from the sample rate and the number of sampled points and is 2πn/[(2N + 1)T_s].

y[n] = Σ_{k=−N}^{N} h[k]s[n − k] + Σ_{k=−N}^{N} h[k]v[n − k]   (5a)

thus,

y[n] = [1/(2N + 1)] (Σ_{k=−N}^{N} s[n − k] + Σ_{k=−N}^{N} v[n − k])   (5b)

When v[n] is a zero mean, normal noise process, and assuming that the s[n] are constant over the range of smoothing, y[n] will approach s[n] as N approaches infinity. In fact, the statistics for this simple case are well characterized. For normally distributed zero mean noise, both the signal and variance will accumulate as 2N + 1. Defining the S/N ratio as the accumulated signal divided by the square root of the variance, it can be shown that the S/N improvement realized with this filter is equal to √(2N + 1) (5). However, there is a penalty for this improvement. As the number of points used in the filter increases, so does the effective response time of the filter. This response time has nothing to do with the time required to perform the calculations, which is another matter altogether. Rather, the response time indicates how fast the y[n] respond to a change in the x[n]. The response time of a filter can be determined as its output for a step function input. For example, the simple average filter has no response up to the point where the edge of the step function is one index in front of the summation limit, and a complete response when the edge is at n − N. Thus the response time of the simple average filter is (2N + 1)T_s, where T_s is the sample interval time.

Examination of filter response to a step function input is not always an apparent criterion and often does not have analytical form. Although the digital filter operates in the time domain, it is often more instructive to analyze the effect of the filter in terms of its frequency domain characteristics, or transfer function. The transfer function for the filter process, H(Ω), is found from the discrete Fourier transform of h[k] (4). The moving average filter is a convolution of the measured data with the filter weight function. Assuming that x[n] is a stationary process, the convolution of time-dependent functions can be represented as the product of the respective transforms. Thus

Y(Ω) = H(Ω)X(Ω)   (6)

where X(Ω) and Y(Ω) are the transformed measurement input and filter output, respectively, and Ω is the angular frequency. This equation results in the important, albeit rather simple, relationship between the transfer function and the impulse response function

H(Ω) = Σ_{k=−N}^{N} h[k]e^{−iΩkT_s}   (7)
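Equation 7 can be evaluated numerically. The sketch below (Python, illustrative, not from the article) computes |H(Ω)| for the 5-point simple average and shows the low-pass behavior plotted in Figure 3: unit gain at dc, strong attenuation at higher frequencies.

```python
import cmath

def transfer_magnitude(h, omega_Ts):
    """|H(Ω)| from Equation 7, with impulse response h[k] for k = -N..N,
    evaluated at the dimensionless frequency Ω·T_s."""
    N = (len(h) - 1) // 2
    H = sum(h[k + N] * cmath.exp(-1j * omega_Ts * k) for k in range(-N, N + 1))
    return abs(H)

N = 2
h = [1.0 / (2 * N + 1)] * (2 * N + 1)      # 5-point simple average, unity gain
print(transfer_magnitude(h, 0.0))           # dc gain, ≈ 1.0
print(transfer_magnitude(h, 3.141592653589793 / 2))   # attenuated at Ω·T_s = π/2
```

Sweeping omega_Ts from 0 to π traces out one of the curves in Figure 3, multi-banded sidelobes included.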

Inspection of the transfer function shows the frequency components that the filter emphasizes and attenuates. Because the h[k] are real, H(Ω) is generally complex. There will be both real and imaginary parts to the transfer function. It is more useful to examine the effect of filtering by inspecting the power spectrum magnitude, |H(Ω)| (6). The magnitude of the simple average transfer function for several different N is shown in Figure 3. Two important features are immediately apparent. First, the simple average filter emphasizes the lower frequencies much more than the high. This low-pass behavior is a general characteristic of smoothing filters. Second, the greater the N, the narrower the low-pass band of the filter. The multi-banded, dc-peaked forms of these transfer functions indicate low-order multipole filters. The analog equivalent of this filter is difficult to realize (4).

Indeed, the simple average filter results in a superior estimate of the dc component of the signal. But if the dc component were the only value sought from the sequence of measurements, one would do just as well to determine the mean of the entire data. Generally, the purpose for using a moving average filter is to estimate the value of a dynamic signal. One can do much better by using other weights. If the dynamic behavior of the system being observed is known or anticipated, then an optimal estimation of this signal can be obtained by using the expected signal as the weights. However, without prior knowledge of the system dynamics, one must use models that allow for the arbitrary signal to be filtered.

There are many model functions that can be used to estimate the arbitrary signal. In general terms, the measurement can be modeled by

x[n] = Σ_{m=1}^{M} a_m f_m[n]   (8)

where the f_m are model functions and the a_m are the coefficients. In this case the a_m are determined that best fit the measurement data, x[n], by some criteria. The minimum least-squares error criterion is commonly used. The filter output, y[n], is calculated based on the model coefficients and functions. Perhaps the most used model for the arbitrary signal is the power series

x[n] = Σ_{m=0}^{M} a_m n^m   (9)

The a_m may be calculated using the method of least-squares errors. Calculation of y[n] is possible once the a_m are known by evaluation of Equation 9 at the time index n. Because the independent variable is n, all terms in the least-squares error equations involving the dependent variable, x[n], are linear in

x[n]. Solving the least-squares error equations for the overdetermined x[n] is equivalent to determining a series of weights to be used in the impulse response function. Application of least-squares error analysis of the data modeled by a power series, but in the form of a moving average, is known as the Savitzky-Golay filter (7, 8).

Savitzky-Golay filters are characterized by the highest order in the series, the number of data in the smoothing interval, and the degree of the derivative that is sought. Consider the quadratic power series

x[n] = a₀ + a₁n + a₂n²   (10)

The least-squares error equations can be found by setting the derivatives of the squared error with respect to the a_m coefficients equal to zero and then solving these three equations for the three a_m. The filter output, y[n], is then calculated by evaluating the power series at n. By setting n to zero, thus shifting the smoothing interval median to the origin, the impulse response function can be found. This is equivalent to fixing the impulse response function while updating the measurements in a serial fashion. Because n is zero at the smoothing interval median,

y[n] = a₀   (11)

The a₀ is obtained from the least-squares error equations. Defining

t₀ = 2N + 1   (12a)

t₂ = Σ_{k=−N}^{N} k²   (12b)

t₄ = Σ_{k=−N}^{N} k⁴   (12c)

Δ = t₄/t₂ − t₂/t₀   (12d)

the solution to the least-squares error equations yields

y[n] = Σ_{k=−N}^{N} (t₄/t₂ − k²)x[n − k]/(Δt₀)   (13)

and the impulse response function is therefore

h[k] = (t₄/t₂ − k²)/(Δt₀)   (14)

The least-squares error solution may also be used to obtain the first and second derivatives of the measurements. In fact, differentiation of Equation 10 determines the model coefficient dependence and subsequently the impulse response functions for the derivatives

y[n]′ = a₁   (15a)

h′[k] = k/t₂   (15b)

and

y[n]″ = 2a₂   (16a)

h″[k] = (2/Δ)(k²/t₂ − 1/t₀)   (16b)

where h′[k] and h″[k] are the impulse responses of the first and second derivative filters, respectively. The ability to estimate the derivatives of the data is a very useful feature of the Savitzky-Golay moving average. Derivatives are useful in the determination of peak positions in a number of analytical techniques, including spectroscopy and chromatography.

A plot of the transfer function magnitude for a number of Savitzky-Golay filters is illustrated in Figure 4, and the two derivatives, y[n]′ and y[n]″, are shown in Figure 5. It is clear that the signal estimate is a low-pass filter with characteristics much like those of the simple average. There is an important difference in that the band is wider than the simple average for a given smoothing interval range and that the shape of this band is that of a higher order low-pass filter. Figure 5 shows that the derivative filters are high-pass frequency filters. This high-pass character makes sense. After all, the derivative of a dc signal is zero, and the higher the frequency components of a signal, the more that signal will change per unit of time and the greater will be the derivative.

Figure 4. Frequency dependence of the quadratic Savitzky-Golay filter with 2N + 1 equal to 5, 11, and 25 points.

Figure 5. Frequency dependence of the first (I) and second (II) derivative quadratic Savitzky-Golay filters using 5 points.
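Equations 12-16 translate directly into code. The sketch below (Python, illustrative, not part of the article) computes the quadratic Savitzky-Golay impulse responses and reproduces the familiar 5-point smoothing weights (−3, 12, 17, 12, −3)/35 and first-derivative weights (−2, −1, 0, 1, 2)/10.

```python
def savitzky_golay_quadratic(N):
    """Smoothing and derivative impulse responses for the quadratic
    Savitzky-Golay filter, per Equations 12-16 of the text."""
    ks = range(-N, N + 1)
    t0 = 2 * N + 1                       # Equation 12a
    t2 = sum(k ** 2 for k in ks)         # Equation 12b
    t4 = sum(k ** 4 for k in ks)         # Equation 12c
    delta = t4 / t2 - t2 / t0            # Equation 12d
    h = [(t4 / t2 - k ** 2) / (delta * t0) for k in ks]       # Equation 14
    h1 = [k / t2 for k in ks]                                 # Equation 15b
    h2 = [(2 / delta) * (k ** 2 / t2 - 1 / t0) for k in ks]   # Equation 16b
    return h, h1, h2

h, h1, h2 = savitzky_golay_quadratic(2)          # 5-point case
print([round(35 * w) for w in h])    # → [-3, 12, 17, 12, -3]
print([round(10 * w) for w in h1])   # → [-2, -1, 0, 1, 2]
```

Note that the smoothing weights sum to unity, the unit-gain condition of Equation 3, while the derivative weights sum to zero, consistent with their high-pass character.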

An intuitively appealing feature of the Savitzky-Golay filter is that errors that arise in modeling the data to the power series are minimized. However, one should not mistake this as indicating that error minimization is the best that can be done. The validity of the model remains questionable. To be sure, several models other than the power series have been explored. It is interesting to note that the Savitzky-Golay filter is often the standard by which other models are compared (9). The choice of a particular model depends on the expected signal and noise components of the measurement data. As shown below, the choice of model can be simplified when the expected behavior of the system is known.

Matched filter smoothing

When the signal is either known or expected to occur, the optimal h[k] can be determined (2, 6). For simplicity, consider a causal time-dependent measurement of a signal of known form but of variable amplitude. The maximum power attributable to this signal will be available at the filter output when the signal is within the range of the smoothing interval. Thus it is sufficient to use a single summation of the measurement-impulse response function product to maximize the signal power at that particular time. The squared S/N ratio at the output of the filter is

(S/N)² = (Σ_{k=0}^{N} h[k]s[k])² / (σ² Σ_{k=0}^{N} h[k]²)   (17)

In order to choose h[k] so as to maximize the S/N ratio, it is sufficient to maximize the numerator. This can be done using the Schwartz inequality

(Σ_{k=0}^{N} h[k]s[k])² ≤ (Σ_{k=0}^{N} s[k]²)(Σ_{k=0}^{N} h[k]²)   (18)

Dividing by the denominator of Equation 17 then gives

(S/N)² ≤ Σ_{k=0}^{N} s[k]²/σ²   (19)

The maximum occurs when both sides of this equation are equal. For the equality to hold, the optimum impulse response function must be

h[k] = Cs[k]   (20)

where C is any constant. The fact that the impulse response is that of the expected signal to within a constant factor results in the name "matched filter." The matched filter can be implemented in the form of a moving average filter, as in Equation 2, or it can be causal when the measurement is of a signal that is in response to an excitation at a known time. The matched filter is the optimum filter for normal noise processes. Being arbitrary, the constant, C, can be chosen to result in the proper scale of the signal. For example, if one is estimating peak heights from the output of an emission spectrometer, the constant would be chosen to be the peak height of the Euclidean normalized impulse response.

The question of what to choose for the length of the smoothing interval, or the range of the k index, is difficult to address. Some insight may be gained by noting that the simple moving average is the matched filter for a constant, dc signal. Recall that the S/N ratio improved in this case as the square root of the smoothing interval. Thus one would expect the accuracy of the signal estimate to increase as the sample interval increases. But the dc signal is a special case in that the signal is always present. A causal signal will not always be present, and there will be a point of diminishing improvement as the smoothing interval increases. A finite smoothing interval range is sufficient (10). The S/N ratio can be computed by combining Equations 18-20:

S/N = √(Σ_{k=0}^{N} s[k]²)/σ   (21)

Kalman innovation filter

Measurements resulting from pulsed excitation experiments often are made up of several independent causal signal components, as in Equation 8. In general, the simultaneous occurrence of several signal components will result in an estimation error if the components overlap in time. The problem of filtering then becomes one of both estimation and separation of each individual estimate (3, 5, 9). Either several a_m for corresponding f_m can be determined simultaneously or a single signal, accompanied by coherent noise or unwanted signal components, can be determined. In either case, forms of the impulse response functions are required that estimate the individual signal components independently. Consider, for example, an expected signal made up of a constant (dc) component and a single time-dependent component of known form, f₁[n]:

s[n] = a₀ + a₁f₁[n]   (22)

The matched filter impulse response for y₁[n] is f₁[n]; that for y₀[n] is the simple average given in Equation 4. The innovation filter output estimates are

y₀[n] = a₀ = C₀ Σ_{k=0}^{N} i₀[k]x[k]   (23a)

y₁[n] = a₁ = C₁ Σ_{k=0}^{N} i₁[k]x[k]   (23b)

To find the innovation functions, we require

Σ_{k=0}^{N} i₀[k]f₁[k] = 0   (24a)

Σ_{k=0}^{N} i₁[k] = 0   (24b)

Equation 24a ensures that y₀[n] is not a function of f₁[n], and Equation 24b ensures that y₁[n] is not a function of the dc component, a₀. The Gram-Schmidt orthogonalization procedure subtracts those components that are not orthogonal from the original impulse response function. In this manner,

i₀[k] = h₀[k] − (Σ_{k=0}^{N} h₀[k]h₁[k] / Σ_{k=0}^{N} h₁[k]²) h₁[k]   (25a)

i₁[k] = h₁[k] − (Σ_{k=0}^{N} h₁[k]h₀[k] / Σ_{k=0}^{N} h₀[k]²) h₀[k]   (25b)

Figure 6. Choice of FIR filter based on signal and noise in the measurement.

The reader may wish to confirm that i₀[k] and i₁[k] given above are indeed orthogonal to h₁[k] and h₀[k], respectively. Thus for a signal of the form given in Equation 22, the use of the innovation filter impulse response functions given above will result in the independent estimation of the magnitudes of both the dc and time-dependent components.

Because the innovation impulse response functions are not the same as those of the original matched filter, the signal components do not have the maximum theoretical S/N ratio that could be obtained with normal noise processes. The degree to which the innovation signal estimation is degraded over that of the matched filter is, of course, a function of how independent the signal components are. However, if the signal components always occur and if they do so in a coherent fashion (e.g., in causal processes), and if the individual components cannot be determined independently, then the innovation is the best choice. The noise is not a normal process in this case. Each signal component can be thought of as a noise source to the other components. The innovation can thus be thought of as a "whitening" filter that transforms the measurement into one described by a series of independent signal and noise sources.

Innovation filters can only be used if the signal components occur coherently. Orthogonalization does not work if the expected signals occur at random time intervals relative to each other.
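The Gram-Schmidt construction of Equations 25a and 25b is easy to verify numerically. In the sketch below (Python, illustrative), the decaying exponential chosen for f₁[n] and the 8-point window are arbitrary assumptions, not values from the article; the checks confirm the orthogonality conditions of Equations 24a and 24b.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def innovation_filters(h0, h1):
    """Gram-Schmidt orthogonalization of Equations 25a and 25b:
    remove from each matched-filter response its projection onto the other."""
    i0 = [a - dot(h0, h1) / dot(h1, h1) * b for a, b in zip(h0, h1)]  # Eq. 25a
    i1 = [b - dot(h1, h0) / dot(h0, h0) * a for a, b in zip(h0, h1)]  # Eq. 25b
    return i0, i1

Npts = 8
f1 = [2.0 ** -k for k in range(Npts)]    # assumed causal signal shape f1[n]
h0 = [1.0 / Npts] * Npts                 # simple average: matched filter for dc
h1 = list(f1)                            # matched filter for f1 (Equation 20)
i0, i1 = innovation_filters(h0, h1)

print(abs(dot(i0, f1)) < 1e-12)   # Equation 24a: y0 ignores f1 → True
print(abs(sum(i1)) < 1e-12)       # Equation 24b: y1 ignores dc → True
```

Applying i₀ and i₁ to a synthetic x[n] = a₀ + a₁f₁[n] then recovers a₀ and a₁ independently, once each is scaled by its constant C, as in Equations 23a and 23b.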

Coherent signals normally occur in causal, triggered processes; in such cases the Kalman innovation filter is used as a triggered filter. However, the innovation filter may be used as a moving average when the trigger time is not known. The expected signal given in Equation 22 is a special case. The resulting innovation impulse response functions can easily be used in a moving average form because the dc component of the expected signal is constant. In addition, the time-dependent signal estimation obtained from Equation 25b does not respond to a constant dc signal. Thus a dc rejection filter has been obtained by the orthogonalization process.

Conclusion

Three main types of FIR smoothing filters can be used in real-time digital signal processing. As shown in Figure 6, the choice of which filter to use depends on knowledge of the particular measurement system. When nothing is known about the expected signal and the noise is assumed to be normally distributed, then the Savitzky-Golay filter is probably best suited to smoothing. If the signal is known and the noise is stationary, then the matched filter will give the best possible result. Finally, if the measurement is made up of several known signals that occur coherently or if the system is cyclo-stationary, then the Kalman innovation will result in the best signal estimate.

The impulse response function is central to understanding the operation of the smoothing FIR filter. This concept can be extended to describe many other types of signal processing, both digital and analog. For example, in this approach, the gated integrator is merely a causal simple moving average smoothing filter in analog form for cyclo-stationary processes. Readers interested in digital filtering should read the modern college-level textbooks used in teaching the systems approach in electrical engineering (4). Within the systems approach, all elements that make up an experiment are analyzed in terms of their particular impulse response. The beauty of this approach is that the measurement can be traced from start (e.g., the mixing of two chemical reagents) through the finished data analysis as a continuous and self-consistent flow of information. The information flow chart that results may allow one to predict the potential signals and noise in an experiment even before it has started. With this approach, one quickly realizes that digital filtering is just one link in the chain and that this particular link can be optimized based on the entirety of the process to result in maximum utility.

Support for this work by CHE-8520050 awarded by the National Science Foundation is gratefully acknowledged.

References

(1) Ljung, L.; Söderström, T. Theory and Practice of Recursive Identification; MIT Press: Cambridge, Mass., 1983.
(2) Couch, L. W. Digital and Analog Communication Systems; Macmillan: New York, 1983.
(3) Ziemer, R. E.; Tranter, W. H. Principles of Communications, 2nd ed.; Houghton Mifflin: Boston, Mass., 1985.
(4) Poularikas, A. D.; Seely, S. Signals and Systems; PWS Publishers: Boston, Mass., 1985.
(5) Papoulis, A. Probability, Random Variables, and Stochastic Processes, 2nd ed.; McGraw-Hill: New York, 1984.
(6) Schwartz, M.; Shaw, L. Signal Processing; McGraw-Hill: New York, 1975.
(7) Savitzky, A.; Golay, M. J. E. Anal. Chem. 1964, 36, 1627-39.
(8) Steiner, J.; Termonia, Y.; Deltour, J. Anal. Chem. 1972, 44, 1906-09.
(9) Biermann, G.; Ziegler, H. Anal. Chem. 1986, 58, 536-39.
(10) Bialkowski, S. E. Rev. Sci. Instrum. 1987, 58, 687-95.

Stephen E. Bialkowski is an associate professor in the Department of Chemistry and Biochemistry at Utah State University. He received a B.S. from Eastern Michigan University and a Ph.D. from the University of Utah. He then spent two years at the National Bureau of Standards and three years at Michigan Technological University before joining the faculty of Utah State. His research interests include the application of laser-based spectroscopic and optical signal-processing techniques to chemical analysis.
