Improving Mass Spectrometry Peak Detection Using Multiple Peak Alignment Results

Weichuan Yu,*,† Zengyou He,† Junfeng Liu,‡ and Hongyu Zhao§

Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China; Department of Statistics, West Virginia University, Morgantown, West Virginia 26506; and Department of Epidemiology and Public Health, Yale University, New Haven, Connecticut 06520

Received June 14, 2007

Mass spectrometry data are often corrupted by noise, and it is very difficult to simultaneously detect low-abundance peaks and reduce false-positive peak detections caused by noise. In this paper, we propose to improve peak detection using an additional constraint: the consistent appearance of true peaks across multiple spectra. We observe that false-positive peaks in general do not repeat themselves well across multiple spectra; when we align all identified peaks (including false-positive ones) from multiple spectra together, the false-positive peaks are not as consistent as the true peaks. Thus, we propose to use information from other spectra to reduce false-positive peaks. The new method improves peak detection over traditional single-spectrum-based peak detection methods. Consequently, the discovery of cancer biomarkers also benefits from this improvement. Source code and additional data are available at http://www.ece.ust.hk/∼eeyu/mspeak.htm.

Keywords: Mass spectrometry data analysis • Biomarker discovery • Peak detection • Peak alignment

1. Introduction

Technical advances have made it possible to acquire mass spectrometry (MS) data in a high-throughput fashion. This has motivated the use of MS techniques in the study of cancers at the protein/peptide level with the goal of finding cancer biomarkers (e.g., see refs 6, 8, and 12). Consequently, the development of statistical and computational methods for MS data has attracted much attention in recent years (e.g., see refs 1, 2, and 9). Two common MS data types are Matrix-Assisted Laser Desorption/Ionization mass spectra (MALDI-MS) and Surface-Enhanced Laser Desorption/Ionization mass spectra (SELDI-MS). Their analysis methods are very similar. In this paper, we discuss the analysis of ovarian cancer MALDI-MS data, which were obtained using a Micromass M@LDI-L/R instrument in the Reflectron mode over the range 800–3500 Da.

In the analysis of MS data, researchers usually follow a common protocol that consists of the following steps: peak detection, peak alignment, and statistical analysis.10,13 Among these steps, robustly identifying peptide-related peaks from raw mass spectra is critical: peak identification not only is a feature extraction step in the automatic data analysis framework, it is also a critical step to ensure the good performance of classification algorithms.

* To whom correspondence should be addressed. E-mail: [email protected]. † The Hong Kong University of Science and Technology. ‡ West Virginia University. § Yale University.


However, peak identification is a challenging problem because MS data can be very noisy, making it difficult to distinguish low-abundance peaks from noisy data points. Different approaches have been proposed to denoise the spectra before identifying peaks. For example, Coombes et al.2 defined noise as the median value of intensities in a spectrum and set up a threshold to keep only peaks whose intensity values were above it. Similarly, Satten et al.7 used the negative parts of normalized MS data to estimate the noise variance, which was then used to establish a noise threshold. Coombes et al.3 represented MS data with wavelets and denoised the spectra by removing the wavelet components with small coefficients. Additional constraints have also been proposed to improve peak detection. For example, a peak width check was proposed to reduce false-positive detection results,16 and the relatively fixed distances among isotopic peaks have been used as well.15

In the above approaches, only information from each individual spectrum is used in peak detection. In this paper, we propose a feedback strategy that borrows information from multiple spectra to help peak identification. Through such peak purification, the feature space is significantly reduced. Consequently, the statistical analysis of MS data (including classification and biomarker discovery) is improved as well. From the viewpoint of borrowing information across different samples, a related method was proposed in ref 5. The authors used the mean spectrum for peak detection. The implicit assumption behind this approach is that noise is additive with mean zero; under this assumption, the mean spectrum is less affected by noise. This assumption is not needed in our approach.



Figure 1. Overview of the peak purification framework using the feedback strategy. See text for details of the alignment criterion.

In the Experiments section, we shall demonstrate the merit of relaxing this assumption.

The rest of the paper is organized as follows: section 2 describes our feedback strategy in detail; section 3 demonstrates the benefit of purifying identified peaks; section 4 concludes this paper.

2. Purify Peak Detection Using Peak Alignment Results

In the following, we first give an overview of our algorithm in Figure 1 and then explain the details step by step.

2.1. Initial Peak Detection. This is a standard peak detection step. We use local maximum search combined with any single-spectrum-based constraints to identify peaks (including noisy ones). In this paper, we adopt the method described previously.16 It consists of the following steps: baseline correction, smoothing, local maximum search plus peak width check, and mean intensity check (the intensity value of a peak should be larger than the mean intensity value in the local neighborhood). For the reflectron MS data we analyze in this paper (the data are available at http://bioinformatics.med.yale.edu/MSDATA), the peak width is required to be at least 0.25 Da, and the local neighborhood is set to 3 Da for the mean intensity check.
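As an illustration, the following is a minimal Python sketch of such a single-spectrum peak detector. It is not the exact implementation of ref 16: baseline correction is assumed to have been applied already, the moving-average smoother and the half-height width estimate are simplifying assumptions, and the function and parameter names are hypothetical.

```python
import numpy as np

def detect_peaks(mz, intensity, min_width_da=0.25, neighborhood_da=3.0):
    """Single-spectrum peak detection sketch: smoothing, local-maximum search,
    peak width check, and mean intensity check (illustrative only)."""
    mz = np.asarray(mz, dtype=float)
    intensity = np.asarray(intensity, dtype=float)
    # Simple moving-average smoothing; ref 16 also corrects the baseline first.
    smoothed = np.convolve(intensity, np.ones(5) / 5.0, mode="same")
    peaks = []
    for i in range(1, len(mz) - 1):
        if not (smoothed[i] > smoothed[i - 1] and smoothed[i] >= smoothed[i + 1]):
            continue                                  # not a local maximum
        # Peak width check: approximate the width as the span above half height
        # and require it to be at least 0.25 Da.
        half = smoothed[i] / 2.0
        left, right = i, i
        while left > 0 and smoothed[left - 1] > half:
            left -= 1
        while right < len(mz) - 1 and smoothed[right + 1] > half:
            right += 1
        if mz[right] - mz[left] < min_width_da:
            continue
        # Mean intensity check within a +/- 3 Da local neighborhood.
        window = (mz > mz[i] - neighborhood_da) & (mz < mz[i] + neighborhood_da)
        if smoothed[i] > smoothed[window].mean():
            peaks.append(mz[i])
    return np.array(peaks)
```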

2.2. Peak Alignment. This step identifies the locations of the peaks in the standard set using the identified peaks from multiple spectra. Here, we adopt the scale-space based approach.14 All identified peaks are given equal weights during the alignment. Concretely, we represent each identified peak as a Dirac function with its weighting coefficient set to 1. Then, we convolve the set of Dirac functions with a Gaussian function to diffuse the distribution of Dirac functions. By changing the standard deviation of the Gaussian function, we adjust the scale parameter (thus controlling the diffusion process). After fixing the scale parameter using an energy minimization method, we convert the problem of determining the locations of the standard peak set into a problem of local maximum search in the diffused representation. After finding the locations of the standard peak set, we further use a closest point matching method to align the identified peaks to the common peaks. The output of this step is the locations of the standard peak set and an alignment matrix denoting the correspondence relations between the identified peaks and the common peaks. The reader is referred to ref 14 for more details about the scale-space approach and the closest point matching based alignment.
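A minimal sketch of this diffusion step, applied to already-detected peak lists: unit-weight Dirac spikes are placed on a fine m/z grid, smoothed with a Gaussian, and the local maxima of the smoothed signal are taken as common peak locations; a closest-point assignment then plays the role of the alignment matrix. The energy-minimization scale selection of ref 14 is not reproduced here, and the grid step, sigma value, and helper names (common_peak_locations, align_to_common) are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def common_peak_locations(peak_lists, mz_min=800.0, mz_max=3500.0,
                          grid_step=0.05, sigma_da=0.5):
    """Diffuse unit-weight Dirac spikes at every identified peak location and
    return the local maxima of the diffused signal as common peak locations."""
    grid = np.arange(mz_min, mz_max, grid_step)
    signal = np.zeros_like(grid)
    for peaks in peak_lists:                          # one peak list per spectrum
        idx = np.clip(np.round((np.asarray(peaks) - mz_min) / grid_step).astype(int),
                      0, len(grid) - 1)
        np.add.at(signal, idx, 1.0)                   # Dirac spike with weight 1
    diffused = gaussian_filter1d(signal, sigma=sigma_da / grid_step)
    is_max = (diffused[1:-1] > diffused[:-2]) & (diffused[1:-1] >= diffused[2:])
    return grid[1:-1][is_max & (diffused[1:-1] > 0)]

def align_to_common(peaks, common):
    """Closest-point matching: return, for each identified peak, the index of
    its nearest common peak."""
    return np.abs(np.asarray(peaks)[:, None] - common[None, :]).argmin(axis=1)
```

With equal weights, a peak's influence on the diffused signal depends only on how often it recurs across spectra rather than on its intensity, which is the property the purification step below exploits.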

2.3. Alignment Quality Check. To carry out the alignment quality check, we need a quantitative measure. Here, we use the average number of peaks associated with each common peak in the standard peak set as the quality measure and denote it as Nav:

    Nav = M / K    (1)

where M denotes the total number of identified peaks and K denotes the number of common peaks in the standard peak set.


If we denote the number of spectra as N, then Nav ∈ [1, N] (note that Nav is not necessarily an integer). Intuitively, Nav is a measure of alignment complexity: the closer Nav is to 1, the more complicated the alignment model; the closer Nav is to N, the simpler the model and the better the alignment. The extreme case is that every identified peak is considered as a common peak, which gives Nav = 1. In this paper, we set Nav ≥ N/2 as the condition to stop peak purification (in step 4, N/2 is also used as a threshold to remove false-positive peaks).

2.4. Peak Purification Using the Alignment Result. We assume that peptide-related peaks are reliable only when they appear in the majority of all similar spectra. Peaks that appear only a few times are considered noise and can be removed. The arguments are twofold:

I. In the context of cancer biomarker discovery, an informative biomarker should be consistent at least within either the case or the control group.

II. In the control group, the biological variation can be large; in the case group, some subtypes of the same cancer may have very different disease mechanisms. While it would be ideal to study these variations in more detail, technological limitations (high noise level, limited dynamic range of intensity measurement) and inadequate sample sizes prevent current analysis methods from having enough statistical power to detect such complicated details. As a tradeoff between false findings and loss of information, we have to focus on strong signals ("common" peaks) for which we have higher confidence to detect differences between case and control, if there are any.

In this paper, we simply set up a threshold condition as follows. We denote the number of peaks associated with the same common peak across the N spectra as Np (note that Np can be counted after peak alignment). When Np is larger than a threshold, we consider the corresponding peaks across the N spectra to be true peaks. The remaining peaks are considered false positives and are removed during peak purification. The setting of the threshold is critical here. In the Experiments section, we will show the influence of systematically varying the threshold on the peak purification results as well as on the final classification results.
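Given the alignment above, the purification rule itself takes only a few lines. The sketch below counts Np for each common peak and drops the peaks assigned to infrequent common peaks; it reuses the hypothetical align_to_common helper from the alignment sketch, and the function names are again illustrative.

```python
import numpy as np

def purification_mask(peak_lists, common, threshold):
    """For every common peak, count Np, the number of identified peaks (over
    all spectra) assigned to it, and keep common peaks with Np >= threshold."""
    np_count = np.zeros(len(common), dtype=int)
    for peaks in peak_lists:
        np.add.at(np_count, align_to_common(peaks, common), 1)
    return np_count >= threshold

def purify_peaks(peak_lists, common, threshold):
    """Remove identified peaks whose closest common peak fails the Np check."""
    keep = purification_mask(peak_lists, common, threshold)
    purified = [np.asarray(p)[keep[align_to_common(p, common)]]
                for p in peak_lists]
    return purified, common[keep]
```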

After peak purification, we expect to obtain a less noisy set of peaks. To assess whether this procedure will result in improved performance of sample classification and facilitate the discovery of cancer biomarkers, we consider two quantitative methods.

1. The first method is to demonstrate the improvement of peak alignment quality. Yu et al.14 defined a term named average distance to measure the quality of alignment:

    Dav = (1/M) Σ_{j=1}^{N} Σ_{i=1}^{Kj} ||dij||^2    (2)

Here, N denotes the number of spectra, Kj indicates the number of peaks in the j-th spectrum, M denotes the total number of identified peaks, and dij denotes the distance between the i-th peak in the j-th spectrum and its closest common peak in the standard peak set. Generally, the smaller the Dav value, the better the alignment result. However, the value of Dav itself does not tell us how many common peaks are derived from the M peaks to form the standard peak set; in other words, Dav does not consider the issue of model complexity. Consequently, the comparison of Dav values is reasonable only when two alignment results provide the same number of common peaks.



Figure 2. (Left panel) Stack plots of peak locations from 50 spectra. Blue circles denote the locations of peaks on the m/z axis. Red circles denote noisy points. To mimic biological variations, we randomly delete some true peaks. (Middle panel) 3-D view of true peaks and noisy points. True peaks have relatively smaller intensity values, while noisy points have much higher intensity values. Here, y-axis denotes peak set index, and z-axis denotes intensity values. (Right panel) Common peak locations denoted as local maxima after the diffusion process. The intensity values of all peaks (including noisy points) are set to one to emphasize the relative frequency of appearance. See text for more details.

In fact, the value of Dav decreases when we increase the number of common peaks (cf. the right plot in Figure 5 in ref 14). The extreme case is that every identified peak is considered as a common peak, which gives the minimal value Dav = 0; this value is obviously the result of overfitting. In this paper, we therefore modify the above average distance concept by defining a normalized average distance DN:

    DN = Dav / Nav = (K/M^2) Σ_{j=1}^{N} Σ_{i=1}^{Kj} ||dij||^2    (3)

where Dav is defined in eq 2 and Nav is defined in eq 1. The inclusion of Nav, especially the limits we put on the range of Nav values, helps to avoid the overfitting problem by penalizing a large number of common peaks (i.e., a large K value). For example, if we do not allow Nav to be equal or close to one (i.e., K = M), the extreme case of Dav = 0 mentioned above cannot occur. Moreover, the normalization effect of Nav makes it possible to compare alignment results derived from methods with different numbers of common peaks (i.e., different model complexities).

2. The second method is to check the classification results before and after peak purification using the same data analysis pipeline. The key is that we keep everything in the pipeline the same except the peak detection results (with or without peak purification). The difference in the final classification results then reflects the effect of peak purification.

Table 1. Key Values before and after Peak Purification Using the Condition Np ≥ 25a

                       M     K    Dav     Nav     DN      common peak locations
before purification   141    4    1.533   35.25   0.044   (106.5, 143, 159.5, 183)
after purification    121    3    1.079   40.33   0.027   (106.5, 143, –, 183)

a Here, M denotes the total number of identified peaks, K denotes the number of common peaks, Dav denotes the average distance, Nav denotes the average number of peaks associated with common peaks in the standard peak set, and DN denotes the normalized average distance. The difference before and after purification is the removal of the common peak location at m/z = 159.5 Da and the corresponding noisy points.
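Both quality measures reduce to a few lines once the alignment is available. The following is a minimal sketch under the same assumptions as the earlier snippets, reusing the hypothetical align_to_common helper; M is the total peak count, K the number of common peaks, and dij the distance from each peak to its closest common peak.

```python
import numpy as np

def alignment_quality(peak_lists, common):
    """Compute Nav (eq 1), Dav (eq 2), and the normalized distance DN (eq 3)."""
    sq_dist_sum, M = 0.0, 0
    for peaks in peak_lists:
        peaks = np.asarray(peaks, dtype=float)
        d = peaks - common[align_to_common(peaks, common)]  # distance to closest common peak
        sq_dist_sum += float(np.sum(d ** 2))
        M += len(peaks)
    K = len(common)
    n_av = M / K                      # eq 1
    d_av = sq_dist_sum / M            # eq 2
    d_n = d_av / n_av                 # eq 3, equivalently K * sq_dist_sum / M**2
    return n_av, d_av, d_n
```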

3. Experiments

In this section, we first demonstrate the improvement of peak alignment after the purification step. Then, we use real ovarian cancer data to show the classification improvement resulting from the feedback strategy. We also compare our feedback strategy with the mean spectrum based approach. Finally, we check the effect of different purification threshold values on the alignment results as well as on the classification results.

3.1. Synthetic Data. Figure 2 gives a very simple synthetic example. It plots the m/z locations of three peaks identified from 50 samples (the blue circles in the left plot of Figure 2) together with some noisy points (the red circles in the left plot of Figure 2). If we only considered intensity information, these noisy points would be treated as reliable, since they all have much higher intensity values than the true peaks, as shown in the 3-D plot (the middle plot of Figure 2).
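A toy version of this synthetic setup can be written with the hypothetical helpers sketched in section 2. The probabilities, random-number settings, and noise placement below are illustrative assumptions, not the exact simulation used for Figure 2 and Table 1.

```python
import numpy as np

# Three recurring true peaks across 50 spectra, plus occasional noise points
# scattered loosely around m/z 159.5 (illustrative values only).
rng = np.random.default_rng(0)
peak_lists = []
for _ in range(50):
    peaks = [loc + rng.normal(0.0, 0.2) for loc in (106.5, 143.0, 183.0)
             if rng.random() > 0.1]              # randomly drop some true peaks
    if rng.random() < 0.4:                       # infrequent, isolated noise point
        peaks.append(rng.uniform(150.0, 170.0))
    peak_lists.append(peaks)

common = common_peak_locations(peak_lists, mz_min=100.0, mz_max=200.0)
purified, kept = purify_peaks(peak_lists, common, threshold=25)
# Noise-derived common peaks gather far fewer than 25 peaks and are removed,
# while the three true locations survive the purification.
```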

Figure 3. (Left panel) Stack plots of six replicate MS spectra. (Right panel) The numbers of identified peaks in these six MS spectra. Variations are large from spectrum to spectrum, even though these are replicate spectra.

Table 1 shows the consequences in the alignment result. However, if we consider the relative frequency of peaks instead of their intensity values as the quality measure, we are able to remove these noisy points in the peak purification step. Correspondingly, the alignment results favor peaks that appear more frequently across multiple spectra, as also shown in Table 1. In fact, the purification step can be understood as using equal intensity weighting in the scale-space representation and then thresholding the maxima in the diffused representation. To illustrate this principle, we also show a plot of the scale-space representation (see ref 14 for details) obtained by setting all peak intensity values equal to one (the right plot of Figure 2).

3.2. Replicate Data. We acquired MS data from 7 different samples. For each sample, we used 6 technical replicates. Figure 3 plots the MS spectra of one such sample. If we detect peaks separately in each individual spectrum, variations among the peak detection results are unavoidable (see the right plot of Figure 3). Since we consider replicate data, we keep only those peaks (after peak alignment) that appear at least three times across the six spectra.



Figure 4. Effect of peak purification on technical replicate data. The top row shows initial peak detection results. The bottom row shows results after peak purification. Three columns denote results from three different samples, with each sample containing six replicates. In each plot, we stack the peak detection results from the six replicates together and index them along the y-axis. We also label the x-axis with m/z values. Each circle represents the location of an identified peak.

Table 2. Some Key Values for the Replicate Data before and after Peak Purification Using the Condition Np ≥ 3a

Index   M before  M after  K before  K after  Nav before  Nav after  DN before  DN after
1         369       131      200       27        1.85        4.85      0.101      0.024
2         451        89      296       20        1.52        4.45      0.085      0.008
3         520       236      251       49        2.07        4.82      0.091      0.035
4         378       111      236       22        1.60        5.05      0.069      0.022
5         399       141      218       31        1.83        4.55      0.085      0.021
6         399        92      258       21        1.55        4.38      0.082      0.029
7         451       189      250       40        1.80        4.73      0.057      0.024

a Here, M denotes the total number of identified peaks, K denotes the number of common peaks, Nav denotes the average number of peaks associated with common peaks in the standard peak set, and DN denotes the normalized average distance.

After the purification, the peak sets from the replicate samples are more similar to each other (see Figure 4). Correspondingly, the average peak number Nav associated with the common peaks has increased (Table 2). It is interesting to note that, although the number of common peaks in the standard peak set is reduced (see Table 2), the normalized average distance DN did not increase (it even slightly decreased, as shown in Table 2). Recall that the number of common peaks is an index of model complexity; this indicates that, after peak purification, we have improved the quality of alignment with a simpler model.

One may argue that it is not always feasible to acquire MS data with technical replicates. In the following, we demonstrate that biological samples from the same control group or cancer group share the same trend, so we can still use the peak alignment result to purify the peak identification results.

3.3. Ovarian Cancer Data. In this section, we use desalted human serum samples from the National Ovarian Cancer Early Detection Program at Northwestern University Hospital. The data were acquired using a M@LDI-LR instrument in the Keck Laboratory at Yale University. The detailed procedure of sample preparation and data acquisition is described in ref 16, with a reprint available at http://www.ece.ust.hk/∼eeyu/mspeak.htm. Here, we study the MS spectra obtained in the Reflectron mode with the mass/charge range 800–3500 Da. In total, there are 47 cancer spectra and 44 control spectra.


Table 3. Key Values before and after Peak Purification Using the Condition Np ≥ 44 and Using the Mean Spectrum Based Ideaa

                        M        K     Dav    Nav     DN
before purification    15769    512   0.74   30.80   0.0242
after purification      5704     75   0.41   76.05   0.0055
mean spectrum          11147    156   2.12   71.46   0.0296

a The notations of M, K, Dav, Nav, and DN are the same as defined above, with Nav = M/K.

We use this data set to investigate the effect of peak purification on the final classification results. Table 3 shows some key values during the test using the condition Np ≥ 44 (the threshold is about half of the number of spectra). For comparison purposes, we also include the key values derived from the mean spectrum based approach. After peak purification, the number of peaks associated with each common peak increases substantially and the normalized average distance decreases, indicating that the alignment result is improved.

It should be noted that here we do not know the ground truth about the number and locations of the common peaks. Thus, it is difficult to report quantitative measures such as the false-negative rate and the false-positive rate. Instead, we indirectly demonstrate the effect of peak purification by comparing the sample classification results before and after peak purification.


Figure 5. Effect of peak purification using NP g 44 on the classification result using the ovarian cancer data set. For comparison purpose, we also illustrate the classification results using the mean spectrum based idea.5

We set up every step in the data analysis pipeline (peak detection, peak alignment, and statistical analysis) exactly the same, except that we use peaks with purification in one implementation and peaks without purification in the other. The difference in the final classification results then reflects the effect of peak purification. It should also be noted that the number of peaks or common peaks is a measure of the feature space before feature selection; this number does not equal the number of features we use in classification. In fact, the number of features with significant distinguishing power is another unknown parameter in the data analysis pipeline, and we have to use different settings in our comparison, as shown in Figure 5.

We choose two representative feature selection criteria: the Chi-squared statistic and Relief-F.4 We then apply the Support Vector Machine (SVM), Random Forests (RF), and Naive Bayes (NB) algorithms to carry out classification in the Weka environment.11 During classification, we apply 10-fold cross-validation to calculate the classification accuracy. The number of selected features ranges from 5 to 50. All algorithms used in this experiment use their default parameters as specified in Weka. If a peak is absent in a sample, we set the corresponding feature value to 0.

Figure 5 clearly shows that better classification results are achieved after peak purification for all three classifiers when we use Relief-F for feature selection, while the difference is not as pronounced when we use the Chi-squared statistic. We suspect that the Chi-squared statistic may not be the optimal criterion for choosing a subset of interacting features, whereas Relief-F is a better choice. This example also indicates that the effect of feature selection on the final classification result needs to be studied more carefully.

We also carry out mean spectrum based peak detection, use the detection results as common peaks to guide peak detection in the individual spectra, and then align them (using the procedure described in ref 5).
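The experiments above were run in Weka with its default parameters. As a rough illustration of the same pipeline, the sketch below builds the feature matrix (absent peaks set to 0) and scores one feature-selection/classifier combination with 10-fold cross-validation in scikit-learn. The helper names, the Chi-squared filter, and the SVM with default settings are assumptions for illustration; Relief-F is not part of scikit-learn and would require a separate package.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def peaks_to_features(peak_lists, intensity_lists, common):
    """Sample-by-common-peak matrix: each feature is the intensity of the peak
    matched to that common peak, or 0 if the sample has no such peak."""
    X = np.zeros((len(peak_lists), len(common)))
    for s, (peaks, heights) in enumerate(zip(peak_lists, intensity_lists)):
        for j, h in zip(align_to_common(peaks, common), heights):
            X[s, j] = max(X[s, j], h)
    return X

def evaluate(X, y, n_features=10):
    """10-fold cross-validated accuracy with Chi-squared feature selection."""
    clf = make_pipeline(SelectKBest(chi2, k=n_features), SVC())
    return cross_val_score(clf, X, y, cv=10).mean()
```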

research articles

During peak detection, we use the same parameter setting as in our initial peak detection. Figure 5 illustrates the corresponding sample classification results. When we use Relief-F for feature selection, it is interesting to observe that the classification results using the mean spectrum based idea are better than the results before peak purification but worse than those after peak purification. We offer the following explanations for this phenomenon:

1. The mean spectrum based peak detection uses information from multiple spectra. The standard deviation of additive noise is reduced by a factor of N^(1/2) (N denotes the sample size) after calculating the mean spectrum. Consequently, mean spectrum based peak detection provides more robust results than individual spectrum based peak detection methods.

2. The mean spectrum based peak detection is still based on a local maximum search of intensity values. Peaks with higher intensity values are favored over weak peaks with lower intensity values (even though these weak peaks in the mean spectrum are less sensitive to noise than their counterparts in individual spectra), while those weak peaks might be more informative in the context of cancer biomarker discovery.13 In contrast, our approach favors peaks with higher appearance frequency across multiple spectra. This enables us to treat weak peaks in the same way as strong peaks as long as their appearance frequencies are the same.

One may wonder why the classification results of the mean spectrum based approach are even worse than those before peak purification when we use the Chi-squared statistic for feature selection. We suspect that the informative features are corrupted by noise (in spite of our effort to reduce noise) so severely that a suboptimal feature selection method (such as the Chi-squared statistic) may not be able to choose true biomarkers from such noisy features. Furthermore, there is a possibility that the limitations of current MS techniques (including intensity dynamic range, sample digestion efficiency, mass range, resolution, and many other known or unknown factors) hamper the inclusion of true biomarkers in the MS data, which would explain why even the best classification accuracy is only around 70%. After acknowledging this possibility, however, we would like to argue that solid evidence is needed to verify this statement. At the same time, we would like to point out that our proposed method helps to verify the hypothesis that true biomarkers do exist in MS data.

For the ovarian cancer data set, the mean spectrum based approach identifies 156 common peaks, while our method identifies 75 common peaks after peak purification (see Table 3). The locations of these common peaks are stored in the supplementary text files at http://www.ece.ust.hk/∼eeyu/mspeak.htm. Among these 75 common peaks, 64 overlap with their counterparts in the mean spectrum based approach if we consider two peaks with a distance of at most 0.5 Da between them as overlapping; the number of overlapping peaks goes up to 72 if we relax the distance threshold to 1.0 Da. Thus, our method essentially identifies a subset of the common peaks derived using the mean spectrum based method. It would be nice if we could directly verify which common peaks are true ones, or calculate the distances between common peaks and true peaks.
Unfortunately, this is very difficult in the current data for the following reasons:


• In the current data, we do not know the ground truth of the number and locations of the common peaks, making it hard to calculate quantitative measures such as the false-negative rate and the false-positive rate. While we could use simulation examples,5 it is unclear to us how strongly the results would depend on the assumptions behind the simulation and how far real MS data are from simulations.

• Our peak alignment method and the counterpart used in the mean spectrum based approach are different. For example, the common peak location is estimated as the (weighted) mean of all associated peak locations in our peak alignment method,14 while it is determined as the location of the local maximum in the mean spectrum in ref 5. This difference may cause biased locations of common peaks, which are influenced more by strong peaks. The Dav value shown in Table 3 reflects this bias, even though the Nav value clearly indicates that the peaks in the mean spectrum based approach are much better aligned than the peaks before purification.

In our future work, we plan to design new experiments that provide ground truth for true peptide-related peaks. We hope that the new design will facilitate the quantitative verification mentioned above.

To check the effect of different Np thresholds on the alignment and classification results, we set up threshold values ranging from 0 (i.e., without purification) to 70 (larger than 2/3 of the sample number; note that some spectra have no peaks left if we increase the threshold beyond 80) and repeat the above experiments. For simplicity, we fix the number of features at 10 in all experiments. Figure 6 shows the plots of classification accuracy versus the Np threshold under different feature selection/classifier combinations. The trend of first increasing and then decreasing classification accuracy with increasing Np threshold is clearly visible. This trend vividly reflects the conflict between removing false positives and keeping enough informative peaks. For a new data set, we could use plots similar to Figure 6 to determine the best threshold value. Interestingly, the condition Np ≥ 44 is very close to the optimal compromise point on these curves.
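Continuing the illustrative pipeline above, the threshold sweep behind Figure 6 could be sketched as follows, assuming peak_lists, intensity_lists, common, and the class labels y are available from the earlier hypothetical snippets.

```python
# Sweep the purification threshold Np and record 10-fold CV accuracy with a
# fixed 10 features, in the spirit of Figure 6 (illustrative sketch only).
X_all = peaks_to_features(peak_lists, intensity_lists, common)
accuracy = {}
for t in range(0, 71, 5):
    keep = purification_mask(peak_lists, common, t)
    if keep.sum() >= 10:                     # need at least 10 surviving features
        accuracy[t] = evaluate(X_all[:, keep], y, n_features=10)
```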

Figure 6. Effect of peak purification using different Np threshold values on the classification result using the ovarian cancer data set. Here, we use 10 features during classification.

4. Conclusions


In this paper, we have introduced a feedback strategy to improve peak detection from noisy mass spectrometry data. This method borrows information from other samples to enhance true-positive peak detections and weaken false-positive detections. While we are not able to quantify the true positives and false positives due to the lack of ground truth for the identified peaks, we can demonstrate that peak purification results in a smaller feature space with higher classification accuracy than the original peak set. This indirect yet quantitative measure convincingly shows the benefit of peak purification. Comparisons with the mean spectrum based peak detection method also demonstrate the merit of our approach.

In future work, we plan to use this framework to analyze other types of MS data. One possible extension is the analysis of Liquid Chromatography–Mass Spectrometry (LC-MS) data. It is well-known that the function of liquid chromatography is to separate different protein components in samples. This separation also results in the concentration of protein components along the LC axis before MS scanning. Consequently, neighboring MS spectra often contain similar peaks. For peak purification purposes, this property may serve as a strong constraint.

Acknowledgment. This work was supported by a RGC Direct Allocation Grant 2006/07 from the Hong Kong University of Science and Technology and by Federal Funds from NHLBI/NIH contract N01-HV-28186, NIDA/NIH grant P30 DA 018343-01, and NIGMS grant R01 GM 59507 in the U.S. We thank Dr. Jing Li for his valuable suggestions.

References

(1) Baggerly, K. A.; Morris, J. S.; Wang, J.; Gold, D.; Xiao, L. C.; Coombes, K. R. A comprehensive approach to the analysis of matrix-assisted laser desorption/ionization-time of flight proteomics spectra from serum samples. Proteomics 2003, 3, 1667–1672.
(2) Coombes, K. R.; Fritsche, H. A., Jr.; Clarke, C.; Chen, J.; Baggerly, K. A.; Morris, J. S.; Xiao, L.; Hung, M.; Kuerer, H. M. Quality control and peak finding for proteomics data collected from nipple aspirate fluid by surface-enhanced laser desorption and ionization. Clin. Chem. 2003, 49 (10), 1615–1623.
(3) Coombes, K. R.; Tsavachidis, S.; Morris, J. S.; Baggerly, K. A.; Hung, M.; Kuerer, H. M. Improved peak detection and quantification of mass spectrometry data acquired from surface-enhanced laser desorption and ionization by denoising spectra with the undecimated discrete wavelet transform. Proteomics 2005, 5, 4107–4117.
(4) Kononenko, I. Estimating attributes: Analysis and extensions of RELIEF. In ECML, Lecture Notes in Computer Science; Bergadano, F., De Raedt, L., Eds.; Springer: New York, 1994; Vol. 784, pp 171–182.
(5) Morris, J. S.; Coombes, K. R.; Koomen, J.; Baggerly, K. A.; Kobayashi, R. Feature extraction and quantification for mass spectrometry data in biomedical applications using the mean spectrum. Bioinformatics 2005, 21 (9), 1764–1775.
(6) Petricoin, E. F., III; Ardekani, A. M.; Hitt, B. A.; Levine, P. J.; Fusaro, V. A.; Steinberg, S. M.; Mills, G. B.; Simone, C.; Fishman, D. A.; Kohn, E. C.; Liotta, L. A. Use of proteomic patterns in serum to identify ovarian cancer. Lancet 2002, 359 (9306), 572–577.
(7) Satten, G. A.; Datta, S.; Moura, H.; Woolfitt, A. R.; Carvalho, G.; Facklam, R.; Barr, J. R. Standardization and denoising algorithms for mass spectra to classify whole-organism bacterial specimens. Bioinformatics 2004, 20 (17), 3128–3136.


(8) Tibshirani, R.; Hastie, T.; Narasimhan, B.; Soltys, S.; Shi, G.; Koong, A.; Le, Q. Sample classification from protein mass spectrometry, by "peak probability contrasts". Bioinformatics 2004, 20 (17), 3034–3044.
(9) Wadsworth, J. T.; Somers, K. D.; Cazares, L. H.; Malik, G.; Adam, B. L.; Stack, B. C., Jr.; Wright, G. L., Jr.; Semmes, O. J. Serum protein profiles to identify head and neck cancer. Clin. Cancer Res. 2004, 10 (5), 1625–1632.
(10) Wagner, M.; Naik, D.; Pothen, A. Protocols for disease classification from mass spectrometry data. Proteomics 2003, 3 (9), 1692–1698.
(11) Witten, I. H.; Frank, E. Data Mining: Practical Machine Learning Tools and Techniques, 2nd ed.; Morgan Kaufmann Publishers: San Francisco, CA, 2005.
(12) Wu, B.; Abbott, T.; Fishman, D.; McMurray, W.; Mor, G.; Stone, K.; Ward, D.; Williams, K.; Zhao, H. Comparison of statistical methods for classification of ovarian cancer using mass spectrometry data. Bioinformatics 2003, 19, 1636–1643.

(13) Yasui, Y.; Pepe, M.; Thompson, M. L.; Adam, B.; Wright, G. L., Jr.; Qu, Y.; Potter, J. D.; Winget, M.; Thornquist, M.; Feng, Z. A data-analytic strategy for protein biomarker discovery: profiling of high-dimensional proteomic data for cancer detection. Biostatistics 2003, 4 (3), 449–463.
(14) Yu, W.; Li, X.; Liu, J.; Wu, B.; Williams, K.; Zhao, H. Multiple peak alignment in sequential data analysis: A scale-space based approach. IEEE/ACM Trans. Comput. Biol. Bioinf. 2006, 3 (3), 208–219.
(15) Yu, W.; Wu, B.; Lin, N.; Stone, K.; Williams, K.; Zhao, H. Detecting and aligning peaks in mass spectrometry data with applications to MALDI. Comput. Biol. Chem. 2006, 30, 27–38.
(16) Yu, W.; Wu, B.; Liu, J.; Li, X.; Williams, K.; Zhao, H. MALDI-MS data analysis for disease biomarker discovery. In New and Emerging Proteomics Techniques; Humana Press, Inc.: Totowa, NJ, 2006; Chapter 14, pp 199–216.

