Automated Workflow for Large-Scale Selected Reaction Monitoring

Article pubs.acs.org/jpr

Automated Workflow for Large-Scale Selected Reaction Monitoring Experiments Lars Malmström,*,† Johan Malmström,‡ Nathalie Selevsek,† George Rosenberger,† and Ruedi Aebersold†,§ †

Institute of Molecular Systems Biology, ETH Zurich, Zurich, Switzerland Department of Immunotechnology, Lund University, S-22100 Lund, Sweden § Faculty of Science, University of Zurich, Zurich, Switzerland ‡

S Supporting Information *

ABSTRACT: Targeted proteomics allows researchers to study proteins of interest without being drowned in data from other, less interesting proteins or from redundant or uninformative peptides. While the technique is mostly used for smaller, focused studies, there are several reasons to conduct larger targeted experiments. Automated, highly robust software becomes more important in such experiments. In addition, larger experiments are carried out over longer periods of time, requiring strategies to handle the sometimes large shifts in retention time that are often observed. We present a complete proof-of-principle software stack that automates most aspects of selected reaction monitoring workflows, a targeted proteomics technology. The software allows experiments to be easily designed and carried out. The steps automated are the generation of assays, the generation of mass spectrometry driver files and method files, and the import and analysis of the data. All data are normalized to a common retention time scale, the data are then scored using a novel score model, and the error is subsequently estimated. We also show that selected reaction monitoring can be used for label-free quantification. All data generated are stored in a relational database, and the growing resource further facilitates the design of new experiments. We apply the technology to a large-scale experiment studying how Streptococcus pyogenes remodels its proteome upon stimulation with human plasma. KEYWORDS: targeted proteomics, mass spectrometry, selected reaction monitoring, discrete wavelet transform



© 2012 American Chemical Society

INTRODUCTION

Targeted proteomics is becoming an important proteomics technology that enables scientists to focus on a specific subproteome.1,2 Selected reaction monitoring (SRM) is a targeted proteomics technology and is carried out using, for example, a triple-quadrupole instrument, where a peptide precursor ion is isolated in the first quadrupole (Q1) and then fragmented by collision-induced dissociation (CID) in the second quadrupole. A single fragment ion is isolated in the third quadrupole, Q3, and subsequently detected. The two isolation windows in Q1 and Q3 are referred to as a transition, and the instruments are able to rapidly iterate over a list of transitions. A set of transitions that belong to the same peptide is called an SRM assay. SRM offers advantages over more traditional shotgun experiments in that the instrument spends the majority of the time measuring the proteins of interest, resulting in a higher fraction of useful data. It allows for individual settings for each peptide, maximizing the detection probability, and hence can potentially allow the detection of lower-abundance proteins. The acquisition method is determined a priori and is considered a data-independent acquisition method. This allows the researcher to make statements about a peptide being absent from the sample, something that is difficult to do in data-dependent acquisition experiments. Label-free quantitative mass spectrometry3 can quantify proteins in a large number of conditions in a relative manner, where all the machine time is spent on the actual sample, compared to labeled approaches, where as much as half of the time is spent measuring the reference sample. It is in principle possible to scale SRM experiments to measure the vast majority of proteins in an organism; in practice, eukaryotic systems are out of reach, mostly because of instrument availability constraints. Microbes with small proteomes are, however, within reach even for single-instrument laboratories, and this technology might be amenable to elucidating changes to the molecular phenotype, as recently reviewed.4 There are several informatics challenges associated with SRM. The first challenge is to design the SRM assays, and in this step, both software and previous data play an important role, both in selecting the best peptide(s) for a given protein and in subsequently selecting the best transitions for the selected peptides. Considerable efforts have been made, producing software to make SRM experiments easier to carry out. Large collections of data can be obtained from the PeptideAtlas5 that can be used both to select high-flyer peptides and suitable

Received: August 30, 2011
Published: January 30, 2012

dx.doi.org/10.1021/pr200844d | J. Proteome Res. 2012, 11, 1644−1653


Figure 1. A typical experiment workflow is outlined schematically. The user decides which proteins to measure and which assays to use for those proteins in the first two steps. Assuming all assays are validated, so-called transition sets are created, followed by the creation of a production experiment. New MS driver files are exported, and the MS experiment is performed. DDB monitors the output directory of the MS, automatically converting and importing the data, which are subsequently scored using a GLM-based scoring method. The local FDR is estimated for each measurement, and the quality of the experiment is estimated. The identifications and the relative quantitative values are used to generate an Xplor instance that allows fast data integration and data analysis. Validation experiments need to be carried out to validate any assay without experimental coordinates.

transitions for the selected peptides. TIQAM6 can be used to create and manage transitions, and it is integrated with PeptideAtlas. As retention time can be predicted using, for example, SSRCalc,7 it is possible to carry out so-called scheduled SRM experiments, where an assay is only measured over a smaller time segment, and it is possible to optimize the scheduling.8 Finally, data can be analyzed and scored using, for example, mProphet.9 There are also tools that support more than one aspect of the experiment process, such as ATAQS,10 a web-based application, and Skyline,11 a Windows application. Here, we extended an open source project, DDB,12,13 to create a complete proof-of-principle software suite that allows SRM experiments to be carried out with a high degree of automation. The software contains support to generate assays de novo or using shotgun data, create driver and method files to operate the instrument, automatically import the data, normalize the retention time, apply a novel scoring scheme, and estimate the false discovery rate (FDR) using decoy assays. Further, the label-free quantitative capabilities of SRM are demonstrated. Last, the software is fully integrated with DDB12,13 and Xplor,13 storing the data in a relational database and facilitating interpretation of the results, respectively. We apply the technology to study how Streptococcus pyogenes reorganizes its proteome when grown in the presence of human plasma.

Protein Selection

The very idea of targeted proteomics is that only proteins of interest are measured, and protein selection is therefore the first step of each experiment. Two modes of selecting proteins are supported. The first is manual, in which proteins are selected one by one and added via the graphical interface. The other mode is to use a previous experiment, apply filters, and then add all the proteins that pass the filters. The filters can be applied to arbitrary information available in DDB and hence allow for the selection of proteins that belong to a subsystem from the National Microbial Pathogen Data Resource (NMPDR),14 have signal peptides,15 share a GO annotation,16 and more. This provides maximum flexibility, as experiments can come from any source (shotgun experiments are worth mentioning), and it is easy to integrate external information.

Assay Selection

An assay, by definition, is an instruction set designed to measure a single peptide at a given precursor charge and modification state, e.g., post-translational modifications or heavy amino acids. There are currently several thousand assays in DDB, annotated both with metadata and with any experimental data measured using the assay, alongside all the metadata associated with those experiments. This information is used to help the experimentalist select the best assay for the protein(s) of interest. There are two modes of selecting assays, a manual mode and a batch mode. The manual mode lets the user go over the proteins one by one and presents all the available assays for each protein. Batch mode adds a fixed number of assays to all proteins simultaneously, which is faster but less flexible. In both cases, the user selects how many transitions to use for each assay. Assays need to be created and validated for proteins that have no or too few assays, as described below.
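Batch-mode selection can be sketched as picking the top-ranked assays per protein. The ranking by prior observation count and all data structures here are hypothetical simplifications of the DDB assay database, not its actual schema:

```python
# Sketch of batch-mode assay selection: attach the top n_assays per
# protein, ranked here by a hypothetical quality proxy (how often the
# peptide was observed before), keeping n_transitions per assay.

def batch_select_assays(assays_by_protein, n_assays, n_transitions):
    selection = {}
    for protein, assays in assays_by_protein.items():
        ranked = sorted(assays, key=lambda a: a["observations"], reverse=True)
        selection[protein] = [
            {"peptide": a["peptide"], "transitions": a["transitions"][:n_transitions]}
            for a in ranked[:n_assays]
        ]
    return selection

assays = {
    "spy0167": [
        {"peptide": "ISGVPIVETLANR", "observations": 721,
         "transitions": ["y9", "b7", "y8", "y7", "y6"]},
        {"peptide": "LTQLGAEK", "observations": 12,
         "transitions": ["y6", "y5", "y4"]},
    ]
}
picked = batch_select_assays(assays, n_assays=1, n_transitions=3)
print(picked["spy0167"][0]["transitions"])  # ['y9', 'b7', 'y8']
```

The peptide sequence is taken from the text; the protein identifier and observation counts are illustrative only.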



RESULTS Several steps need to be carried out between the experimental design and the data interpretation. Validation of assays is one of the most important steps, with the goal of determining the experimental coordinates for measuring a particular peptide. Once the assays are validated, it becomes possible to use the assays to carry out quantitative SRM experiments, or so-called production experiments. We will first assume that all assays are validated and describe a production experiment, i.e., an experiment designed to answer a particular scientific question or provide data of interest to the scientist. We will then describe a validation experiment designed to validate assays, a prerequisite to performing the production experiment. Figure 1 shows a scheme covering the main steps in an SRM experiment, and the various steps will be discussed in detail in the sections that follow.

Create Transition Sets

A transition set is a collection of transitions intended to be measured together, and one or more transition sets are created containing the top N transitions for all assays selected in the previous step. As the relative retention time is known for all validated assays, it is possible to optimize how the assays are divided over the transition sets being created, minimizing the number of assays concurrently measured.
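One simple way to spread assays that elute together across different transition sets is to sort by normalized retention time and deal the assays out round-robin. This is only a sketch of the idea, not the optimization DDB actually performs, and all names are hypothetical:

```python
# Sketch: sorting assays by normalized retention time and dealing them
# round-robin places assays that elute close together into different
# transition sets, reducing how many are measured concurrently.

def partition_assays(assays, n_sets):
    """assays: list of (assay_id, ntime) tuples; returns n_sets lists."""
    ordered = sorted(assays, key=lambda a: a[1])
    sets = [[] for _ in range(n_sets)]
    for i, (assay_id, _ntime) in enumerate(ordered):
        sets[i % n_sets].append(assay_id)
    return sets

# Three assays eluting almost together, one late eluter.
assays = [("a1", 0.10), ("a2", 0.11), ("a3", 0.12), ("a4", 0.55)]
print(partition_assays(assays, 2))  # [['a1', 'a3'], ['a2', 'a4']]
```

The co-eluting assays a1/a2 end up in different sets, which is the property the real scheduler optimizes for.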


Figure 2. The retention time peptides are added to each sample and hence are measured in every injection. The retention time peptides are used to convert the measured retention times to ntime, in which all comparisons are made. The robustness of the retention time normalization is shown here, with ntime on the x-axis and the measured time on the y-axis. Each line corresponds to one of 464 experiments collected between February, 2009 (red) and January, 2011 (blue).
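A minimal sketch of this normalization, with hypothetical anchor values: the measured retention times of the spiked-in RT peptides define a piecewise-linear mapping from measured time onto the common ntime scale.

```python
from bisect import bisect_right

# ntime coordinates assigned to the RT peptides (fixed across experiments)
NTIME_ANCHORS = [0.0, 0.25, 0.5, 0.75, 1.0]
# measured retention times of the same peptides in one run (seconds);
# these values are illustrative, not from the paper
MEASURED_ANCHORS = [310.0, 980.0, 1720.0, 2410.0, 3050.0]

def to_ntime(rt, measured=MEASURED_ANCHORS, ntime=NTIME_ANCHORS):
    """Piecewise-linear map from a measured RT onto the ntime scale."""
    if rt <= measured[0]:
        return ntime[0]
    if rt >= measured[-1]:
        return ntime[-1]
    i = bisect_right(measured, rt)
    frac = (rt - measured[i - 1]) / (measured[i] - measured[i - 1])
    return ntime[i - 1] + frac * (ntime[i] - ntime[i - 1])

print(to_ntime(645.0))  # halfway between the first two anchors: 0.125
```

Because only the anchor peptides change between runs, peptides measured on different days, columns, or instruments become comparable on the same scale, which is what Figure 2 illustrates.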

Create Experiment

An experiment can now be created using the transition sets generated in the previous step. The user selects the samples to be measured and specifies which transition sets to use for each. Commonly, m transition sets are measured for each of the n samples, resulting in n·m measurements. A file name is automatically generated for each measurement.

Create MS Driver Files

Generating driver files for the MS is important to minimize the risk of human error and to speed up the process. DDB generates csv files for each transition set in a format compatible with the MS, and a method file is created for each. In addition, it creates one or two driver files, depending on whether the LC controller software is integrated with the acquisition software. The driver files are then imported into the acquisition software and the LC controller software, respectively. File names, autosampler vial positions, the LC method, the acquisition method, and the target directory where the acquired data are stored are all set by DDB.

Perform the MS Experiment

The MS experiment is now started, and DDB's automatic workflow manager monitors the MS data output directory and converts any file it considers complete from the proprietary format to mzXML.17 The mzXML files contain several thousand pseudospectra in which Q1 is recorded as the precursor m/z, and Q3 and the intensity are encoded in the base-64 bit-packed spectrum. All spectra are reorganized into a single ion chromatogram (IC) per transition and stored in the relational database. These ICs are time series by design, and hence all data points are equally spaced, which is important as it allows, for example, the application of wavelets.

Process the Data

Each assay is scored independently, and scoring is therefore done as soon as each measurement is imported. Several steps are performed in the scoring, and each is described in detail below. In short, the noise is reduced using wavelets, followed by a simple peak detection algorithm. Peaks in transitions belonging to an assay are grouped into peak groups. A normalized retention time is estimated for each peak group, which is subsequently scored by comparing the peak group with the expected values for the validated assay using a generalized linear model (GLM). Finally, the intensity of the peak groups and the error are estimated. The peak picking can be adjusted after all the samples are measured.
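The reorganization of pseudospectra into one ion chromatogram per transition can be sketched as follows. The record layout and the m/z and intensity values here are hypothetical, assuming each reading carries its Q1, Q3, retention time, and intensity:

```python
# Sketch: group SRM readings by their (Q1, Q3) transition pair and
# order them by retention time, yielding one time series (IC) per
# transition, ready for wavelet filtering and peak detection.

from collections import defaultdict

def build_chromatograms(readings):
    """readings: iterable of (q1, q3, rt_seconds, intensity) tuples."""
    ics = defaultdict(list)
    for q1, q3, rt, intensity in readings:
        ics[(q1, q3)].append((rt, intensity))
    # sort each IC by retention time so it forms an ordered time series
    return {transition: sorted(points) for transition, points in ics.items()}

readings = [
    (667.37, 972.55, 2.6, 1200.0),
    (667.37, 716.41, 2.6, 800.0),
    (667.37, 972.55, 5.2, 4100.0),
    (667.37, 716.41, 5.2, 2500.0),
]
ics = build_chromatograms(readings)
print(len(ics))                  # 2 transitions
print(ics[(667.37, 972.55)][1])  # (5.2, 4100.0)
```

Because acquisition cycles through the transition list at a fixed rate, the resulting points are equally spaced in time, which is the property the wavelet transform relies on.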


Wavelet-based Noise Reduction

The ICs are split into seven levels by applying a maximum overlap discrete wavelet transform (MODWT) using the least asymmetric 8 (LA8) wavelet. Each level contains partly overlapping frequency domains, and a new IC, referred to as the filtered IC, is then reconstructed by reversing the transformation using levels 4−7 and leaving out levels 1−3 as well as the detail, reducing high- and low-frequency signals, respectively. This is optimized for a sampling frequency of about 2.6 s and a 55-min gradient and needs to be reoptimized if the experiment parameters change significantly. Negative intensities are set to zero, as these are an artifact of the transform.

Peak Detection

Each transition gives rise to several peaks, where a peak is defined as a stretch of continuous signal. The AUC is calculated by summing all the signals between the start and end of the peak. Peaks that do not fulfill eq 1 are discarded. Peaks from the transitions of an assay are grouped into peak groups according to eq 2:

0.5 + 0.3·n_transitions (2)

Note that this definition allows for more than one peak from the same transition to be grouped into a peak group.

Retention Time Peptides and Calculating ntime

The score model described in the next section relies on comparing the measured retention time with the retention time of the assay. This can be problematic, as the absolute retention time of a particular peptide can vary between samples, liquid chromatography (LC) columns, and instruments, among others. The order in which peptides elute from the column is, however, highly conserved, and hence there is valuable information in knowing the relative retention time compared to other eluents. ntime is a global retention time scale to which each experiment is transformed, enabling easy comparison between experiments even when they are performed at different times, on different columns, or even on different instruments. ntime relies on adding standard retention time (RT) peptides to each sample and using part of the machine time to measure these in every experiment. How we calculate ntime for each assay using the RT peptides is described in the Methods section. The RT peptides should be selected to avoid interference with the experiment itself and should cover as much of the elution as possible. Figure 2 shows the relation between measured elution time and ntime in 464 experiments collected on two different instruments over almost two years, from February, 2009 (red) to January, 2011 (blue). Lines close in color were also collected close in time.

Score Model

Detecting a peptide using SRM involves selecting a number of transitions that are then measured. As there is no guarantee that the particular peptide is in the sample, it is necessary to assess whether the peptide was indeed identified. There are many sources of noise making this difficult, especially when the signal-to-noise ratio is low, as is the case for low-abundance peptides or peptides that are not optimal for MS analysis. Good score models separate correct from incorrect assignments, are simple, are applicable in broad contexts, and allow the estimation of the false discovery rate. Moreover, it should be difficult to misuse the score model, and it should report the confidence in the score in such a way that two experiments can easily be compared. Here, we opted to use binomial GLM classifiers, fit the parameters on data from 331 crude peptides purchased from JPT measured on a Thermo Scientific TSQ Vantage, and evaluated the performance on the same peptides measured on another triple-quadrupole instrument by a different operator. This was done to make sure that the score models were trained and evaluated on data sets produced using the same sample but with the maximum amount of variation expected from the experimental setups. Data collected for each of the 331 peptides were compared to the assays for each of the 331 peptides, generating one correct and 330 incorrect comparisons (decoy assays) per peptide, for a total of 109 561 comparisons. The goal is hence for the GLM to separate the 331 correct data/assay pairs from the 109 230 incorrect data/assay pairs. We used the relative intensity of each transition and ntime and trained 240 models using two functions to compare the relative intensity (rmsd of the relative intensities and rmsd of the log relative intensities), three ways to compare ntime (Δt, Δt², Δt⁴), five ways to pick a subset of the negative comparisons (random, close in RT, close by rmsd, and two combinations of close in RT and close by rmsd), and 3, 4, 5, 6, 7, 8, 9, or 10 transitions per assay; see the Supporting Information, Table S1. Picking negative examples is important, since a classifier that assigns everything as incorrect is 99.7% accurate, which sounds exceptionally good but is at the same time completely useless, as it has no power to identify correct assignments. One model was selected using a combination of evaluations: first, we computed the receiver operating characteristic (ROC) area under the curve (AUC), both on the data selected for training and on the entire data set. A higher AUC is better, as it results from high sensitivity and high specificity. Then, we applied all models to the evaluation data ignoring the number of transitions, since this gives an indication of how robust each model is. The sensitivity was estimated at 0.00, 0.01, 0.02, 0.03, 0.04, and 0.05 FDR. One class of models outperformed the others, and the number of transitions used for training had less influence. The model picked has the form shown in eq 3:

S = 1/(1 + e^(−(a − b·Δ² − c·rmsd))) (3)

where a is 4.699867, b is 0.0001334994, and c is 17.16685. This model was selected because the family of classifiers using Δ², rmsd, and a random negative selection as predictors outperformed the others in general. The particular classifier chosen was trained on six transitions, as this had a slightly better mix of advantages than the others. As expected, larger differences between the assay and the measurement in both ΔRT and rmsd have a negative influence on the score.

Error Model

A false positive is a peak group that scores above a given threshold despite being wrong. One very useful metric for estimating the quality of an experiment is the number of false discoveries that are likely to be among the data points passing some chosen criteria. The number of false discoveries is


reported as a false discovery rate (FDR). It is difficult to exclude all false positives without also discarding data that are likely to be correct; see Figure 3. Hence, it becomes important to estimate how many false discoveries one is likely to make in an experiment. Here, we estimate the local FDR (lFDR) by applying N decoy assays and estimating the lFDR as the number of decoy assays that scored above the correct assay divided by N. The value of N should be relatively large to estimate the lFDR accurately but as small as possible to minimize the number of assays required in the assay database and to speed up the scoring. We have found that N = 1000 is a good trade-off between accuracy and speed. The lFDR reflects how many decoy assays are equally good or better at explaining the acquired data. The FDR of the experiment is then computed as nassays·lFDR/npassed, where nassays is the number of assays used in the experiment and npassed is the number of assays with an lFDR below the chosen lFDR cutoff. The lFDR and the FDR are close in experiments where most assays successfully identify their target peptide, whereas the FDR is much higher in experiments where many assays do not identify their target peptide; see Figure 3.

Figure 3. This figure displays the performance of the score function on a large data set. The green line is the recall as a function of the lFDR cutoff. The yellow and red lines display the FDR and lFDR, respectively. Most assays were detected in this experiment, resulting in the lFDR being close to the FDR.

Adjustment of Peak Picking

All measurements are evaluated with the score model independently of the rest of the data and, as such, are evaluated out of context. This precludes us from benefiting from experiments where the same assay is measured in several samples or where there are heavy and light assays. To compensate for this, we apply a correction algorithm to potentially false peak assignments using more confident data in the same data set. This, of course, only works if there is more confident data and the assay was applied to more than one biological sample expected to contain the peptide target of the assay, or if heavy reference peptides are used. One example scenario is when the heavy form is detected but the light falls below the significance cutoff. In these cases, we can boost the confidence of the light using a simple algorithm that estimates whether the two peak groups are equivalent and then use the confidence score from the more confident data. The other scenario is when the same assay is used in many samples. There might be samples where the peptide is of lower abundance or missing, in which case it is more likely to fall outside the confidence interval. These data are rescued using the same algorithm. There are also rare cases where two peak groups are picked that are not the same; in these cases, the peak picker is rerun with a time seed to pick the equivalent peak group. This ensures that we estimate abundance even for peptides below the detection limit of the experiment.

Quantification

The intensity of a peak is strongly dependent on the number of ions hitting the detector (among many other factors), and this can be used for quantification as long as the peptides compared are identical (different samples) or differ only by isotope (same sample). Heavy peptides spiked into the sample at known concentration are often used to estimate absolute abundance but come with two drawbacks: first, half of the instrument time is used to measure the reference peptides, and second, heavy reference peptides for which the concentration is accurately determined are expensive. The advantages are, of course, that an accurate estimate of the absolute amount of the light peptide in the sample is obtained and that the reference, normally added in a detectable amount to guarantee detection, can serve as a reference even if the light peptide is of low abundance or absent. Many applications do not need absolute quantification and allow for relative label-free quantification. The advantage here is that all of the available instrument time is used to measure the samples of interest. To show the label-free quantitative properties of SRM, we designed an experiment where we measured S. pyogenes proteins in an increasingly complex background (Homo sapiens lysate), from 0% human material to the point where 100% of the sample is human. We show the measured IC for peptide ISGVPIVETLANR from inosine-5′-monophosphate dehydrogenase in Figure 4A and the integrated area under the selected peaks in Figure 4B. In addition, we measured 324 peptides in this experiment, matching proteins present across the entire S. pyogenes proteome abundance range. Peptides are expected to fall on the diagonal, and all peptides detected at an lFDR of 1% can be seen in Figure 4C.

Create Xplor Instance

Xplor is a tool designed to facilitate data processing, data integration, and data interpretation.13 Xplor was adapted to handle data generated by SRM efficiently; the main adaptation allows the identification and quantification to be extracted from the same data, which differs from shotgun experiments, where identification is done in one MS/MS spectrum and quantification is done using MS1 spectra. All Xplor functionality is available once the data is imported, which allows for easy statistical analysis and integration of a multitude of resources.
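The interplay of the score model and the decoy-based lFDR estimate can be sketched as follows, assuming the score is a binomial-GLM logistic function of Δt² and rmsd with the coefficients quoted for eq 3 (a = 4.699867, b = 0.0001334994, c = 17.16685); the decoy feature values are synthetic, and the exact functional form is an assumption:

```python
# Sketch: score a peak group against its assay (RT deviation and
# relative-intensity rmsd), then estimate the lFDR as the fraction of
# N decoy assays that explain the data at least as well.

import math
import random

A, B, C = 4.699867, 0.0001334994, 17.16685

def score(delta_rt, rmsd):
    """Logistic score: larger RT deviation and intensity rmsd lower S."""
    return 1.0 / (1.0 + math.exp(-(A - B * delta_rt ** 2 - C * rmsd)))

def local_fdr(target, decoys):
    """target/decoys: (delta_rt, rmsd) feature pairs."""
    s_target = score(*target)
    as_good_or_better = sum(score(*d) >= s_target for d in decoys)
    return as_good_or_better / len(decoys)

random.seed(1)
# N = 1000 synthetic decoys with poor RT and intensity agreement
decoys = [(random.uniform(50, 500), random.uniform(0.1, 0.6))
          for _ in range(1000)]

# a confident measurement: tiny RT deviation, tiny intensity rmsd
print(round(local_fdr((2.0, 0.02), decoys), 3))  # 0.0, no decoy scores as well
```

With the quoted coefficients, a near-perfect match scores about 0.99, so no decoy with substantial RT or intensity disagreement can outscore it; the lFDR rises only when the target itself fits poorly.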

Perform Validation Experiments

Thus far, we have assumed that all assays are validated; however, this is not always the case. Here, we describe how assays are validated in a process not too different from a production experiment. Under the assay selection wizard, the user has the option to see a list of all tryptic peptides for which automatic preassay creation can be triggered. Peptides are listed in order of descending peptideProphet18 probability


Figure 4. Relative and normalized intensities of S. pyogenes peptides in human lysate background as a function of the percent of S. pyogenes of the total protein amount. (A) The ionchromatograms for peptide ISGVPIVETLANR from inosine-5′-monophosphate dehydrogenase, marked with the solid line, where the red signal corresponds to 100% S. pyogenes and blue corresponds to 0% S. pyogenes. Small peaks to the right represent noise peaks. (B) The signal for the highest scoring peak group (marked with a black line) is integrated and plotted as a function of fraction of S. pyogenes. Ideal peptides fall on the diagonal. (C) All peptides passing the 1% lFDR cutoff are displayed as a function of fraction of S. pyogenes. The red line is the average of the others. Darker lines correspond to peptides of lower intensity, which increase the noise level. Proteins were selected to cover the abundance range.

that transition within that particular assay. Transitions are automatically added to assays when they are created as follows. If shotgun data exists (DDB contains over 7000 shotgun experiments) for the selected peptide sequence, charge, and PTM, all spectra with a peptideProphet score of at least 0.99

if the peptide has been observed in any shotgun experiment available in the database. Peptides that have not been observed in any shotgun experiment are scored using an in-house version of APEX.19 Each assay has one or more transitions associated, and each transition has a rank that reflects the relative quality of 1649

dx.doi.org/10.1021/pr200844d | J. Proteome Res. 2012, 11, 1644−1653

Journal of Proteome Research

Article

Figure 5. Relative and normalized intensities of S. pyogenes peptides measured as a function of increasing human blood in the growth medium (0, 1, 5, 10, and 20%). The expression profile matrix was dimension-reduced using PCA and then clustered using K-mean clustering, resulting in three clusters shown in panels A, B, and C, respectively. Proteins in (A) are considered not regulated, (B) as down-regulated, and (C) as up-regulated. Functional enrichment (using NMPDR subsystems) was calculated for each group. One example of a group significantly enriched in panel C (downregulated) was the S. pyogenes virulome system, and the proteins considered up-regulated in this group are plotted in (D). Each peptide is represented as a line, where darker lines represent peptides of less intensity and the red line is the average of the peptides in the subselection.

experiments are normally run on a pool of the samples of interest or using a mix of synthetic peptides to minimize the number of validation experiments. The validation transition sets contain between 10 and 20 transitions per assay ordered by rank (observe that already validated transitions are included here). One or more transition sets are created that are then used to measure a sample believed to contain the peptides specified by the assays that need validation. DDB attempts to add 180 transitions per transition set and then adds 18 transitions to measure the RT peptides. The user then creates an experiment akin to a production experiment, exports the MS drive files, and performs the experiment. The data is automatically processed. A rule-based score model is used to validate the peaks. This score model requires that only a single peak from the same transition can be in the peak group. The maximum Δapex of the trans-peaks in the group needs to be less than 10 s. The total ion current (TIC) of all the peaks needs to exceed 500, and any peak less than 0.1% of the largest trans-peak is excluded. To make sure that there are no ambiguities, peak groups need at least three peaks and contain at least 40% of the TIC measured for all the transitions. Each peak group that passed the rule-based score model is considered validated, and information, such as ntime and relative intensity between the transitions, is recorded. In addition, transitions are given new ranks based on the relative intensity, where the transition with the biggest peak receives the

are gathered (the vast majority of these shotgun experiments were analyzed using X! Tandem20 followed by PeptideProphet21 and ProteinProphet22). Each spectrum is decomposed into the observed y and b fragments and ranked according to their falling relative intensity, so that the most intense fragment has a rank of 1, the second a rank of 2, and so on. Each theoretical fragment is given a score of the sum of 31-rank for each spectrum. Fragments not among the 30 most intense fragments are discarded. This scoring scheme favors highintensity fragments observed in many spectra, which differs slightly from consensus spectra where the intensity of individual fragments has a bigger influence. The top-20 scoring fragments are added to the assay where the highest-scoring fragment is given a rank of 100, the second-highest-scoring fragment is given a score of 101, and so on. To make it more concrete, we have selected a peptide, ISGVPIVETLANR from inosine-5′monophosphate dehydrogenase. There were 721 spectra collected for this peptide in 257 LC−MS experiments (83 projects). Y9 was the best transition received a transition score of 18718 compared to 13851 for the second transition (B7). Preassays are then validated by experimental means as follows: DDB automatically creates a number of validation transition sets for all assays that need to be validated. Transition sets are collections of transitions that are used to create a method file for the MS, and hence the number of LC−MS experiments is ntransition_sets·nsamples. In practice, the validation 1650

dx.doi.org/10.1021/pr200844d | J. Proteome Res. 2012, 11, 1644−1653

Journal of Proteome Research

Article

(Eksigent Technologies). The LC was operated at a flow rate of 400 nL/min. The mass spectrometer was operated in SRM mode, with both Q1 and Q3 at unit resolution (fwhm 0.7 Da). A spray voltage of +1700 V was used, with a heated ion transfer tube temperature of 270 °C for desolvation. Data were acquired using the Xcalibur software (version 2.1.0). The dwell time was set to 10 ms and the scan width to 0.01 m/z. All collision energies (CE) were calculated using eq 5:

rank 1, the second rank 2, and so on. Peak groups can also be validated manually by visual inspection. Assay validation can be iterated until each protein has the desired number of validated assays containing a minimal number of validated transitions.

S. pyogenes Response to Plasma

S. pyogenes, a common human pathogen, infects several clinical sites and likely adapts to each new site in ways detectable by mass spectrometry.23 To test this, we validated SRM assays for 93% of the 1691 open reading frames (ORFs) and measured the relative expression of all these peptides in S. pyogenes grown in 0, 1, 5, 10, and 20% plasma in biological duplicates, for a total of 10 samples; see the Methods section. The experimental setup aims to simulate the adaptation to plasma leakage that occurs during the inflammatory process caused by the bacteria. To accommodate a close to proteome-wide SRM measurement of the S. pyogenes proteome adaptation, we used 5358 transitions split into 12 injections per sample, resulting in 120 SRM experiments. 765 proteins were detected at a 1% FDR. Xplor was used to normalize, scale, and aggregate the resulting protein abundance matrix, which was subsequently dimension-reduced using PCA and clustered using K-means. This produced three protein clusters, each containing proteins that behaved similarly during the site transition; see Figure 5A−C. These clusters were further analyzed for enrichment of functional groups. One of the clusters contains proteins that appear up-regulated with increasing plasma. The S. pyogenes virulome was found in this group with an enrichment z-score of 3.6, and the same data filtered for the S. pyogenes virulome are shown in Figure 5D.
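The analysis path above (aggregate a protein-by-condition matrix, reduce with PCA, cluster with K-means) can be sketched as follows. This is an illustrative Python/NumPy toy example, not the Xplor/DDB implementation (which is written in Perl and R); the toy matrix, function names, and fixed cluster initialization are our own assumptions.

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Scores of the rows of X on the top principal components (via SVD)."""
    Xc = X - X.mean(axis=0)                 # center each condition (column)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def kmeans(X, init_idx, n_iter=50):
    """Plain Lloyd's algorithm with fixed initial centers for reproducibility."""
    centers = X[list(init_idx)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # assign each row to its nearest center, then recompute the centers
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(len(centers)):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels

# Toy abundance matrix: 9 "proteins" x 5 plasma conditions (0, 1, 5, 10, 20%),
# with up-regulated, down-regulated, and unchanged groups of three proteins each.
rng = np.random.default_rng(1)
up = np.linspace(0, 1, 5) + 0.02 * rng.standard_normal((3, 5))
down = np.linspace(1, 0, 5) + 0.02 * rng.standard_normal((3, 5))
flat = np.full((3, 5), 0.5) + 0.02 * rng.standard_normal((3, 5))
X = np.vstack([up, down, flat])

labels = kmeans(pca_scores(X, 2), init_idx=(0, 3, 6))
```

The fixed `init_idx` makes the toy run deterministic; a production run would use repeated random initializations and pick the best partition.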



CE = a · (parent m/z) + b  (5)

where b = 3.314, a = 0.034 for z = 2, and a = 0.044 for z = 3. For example, a doubly charged precursor at m/z 500 gives CE = 0.034 × 500 + 3.314 ≈ 20.3 V.

Samples

S. pyogenes SF370 (S. pyogenes; NCBI taxonomy ID 160490) was grown in Todd−Hewitt broth (TH) with 0, 1, 5, 10, and 20% human plasma in biological duplicates. The bacteria were lysed and the proteins digested, followed by mixing of the tryptic peptide mixtures in equal amounts to create a pooled sample. The label-free quantitation was evaluated using a dilution series in which the pooled sample was diluted with a human lysate in the following fractions: 0/5, 1/4, 2/3, 3/2, 4/1, and 5/0, where the first number is the relative amount of the S. pyogenes pool and the second the amount of human lysate, by volume. In addition, 2013 crude peptides were purchased from JPT, Germany. These peptides were pooled first into pools of up to 96 peptides and then into sets of about 500 peptides.

Software Availability

All software described above is part of DDB, an open-source project freely available from Sourceforge. The software is implemented in Perl and R and has been tested on Linux (CentOS 5.5 and Ubuntu 10.04 LTS). It depends on a MySQL database, version 5.1, and an Apache web server, version 2.2. The database schema is checked into the Sourceforge repository as database dumps without the data. The software is provided as is.

METHODS

Retention Time Normalization

The nine RT peptides used in a typical experiment were purchased from Biognosys AG, Switzerland, and two transitions are used per peptide to minimize the loss of machine time while maximizing the chance of detecting them. For any retention time between the RT peptides, the measured retention time is transformed to ntime in a segment-wise linear fashion, where the RT peptides define the segment boundaries, according to eq 4:

ntimepep = RT1nt + (RT2nt − RT1nt) · (PEPm − RT1m)/(RT2m − RT1m)  (4)
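In code, eq 4 amounts to linear interpolation between the two bracketing RT peptides. The sketch below is illustrative Python, not the Perl/R implementation in DDB; the function name and anchor list are our own, and out-of-range values are handled here by simple extrapolation of the nearest segment rather than the separately fitted linear model the paper uses for peptides eluting outside the anchored range.

```python
def normalize_rt(rt, anchors):
    """Map a measured retention time to ntime (eq 4).

    `anchors` is a list of (measured_rt, ntime) pairs for the RT
    peptides; between two anchors the mapping is linear, with the
    RT peptides defining the segment boundaries.
    """
    pts = sorted(anchors)
    # find the bracketing segment [RT1, RT2] and interpolate (eq 4)
    for (rt1m, rt1n), (rt2m, rt2n) in zip(pts, pts[1:]):
        if rt1m <= rt <= rt2m:
            return rt1n + (rt2n - rt1n) * (rt - rt1m) / (rt2m - rt1m)
    # simple extrapolation beyond the first/last RT peptide (illustrative;
    # the paper fits a linear RT-vs-ntime model for these peptides instead)
    (rt1m, rt1n), (rt2m, rt2n) = (
        (pts[0], pts[1]) if rt < pts[0][0] else (pts[-2], pts[-1])
    )
    return rt1n + (rt2n - rt1n) * (rt - rt1m) / (rt2m - rt1m)

# hypothetical anchors: (measured minutes, database ntime)
anchors = [(10.0, 0.0), (20.0, 50.0), (40.0, 100.0)]
normalize_rt(15.0, anchors)  # halfway through the first segment -> 25.0
```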




DISCUSSION

SRM is becoming a standard tool in targeted proteomics research; it is sensitive and produces a high fraction of useful data by focusing on peptides of interest. There are several important differences between SRM and the more commonly used shotgun-based technologies, necessitating the development of software to carry out the experiments and to handle the data. Here, we present a complete software stack to facilitate SRM experiments. The proof-of-principle software presented in this paper shows that it is possible to carry out large-scale experiments with a high degree of automation in all steps, from assay generation and validation to target selection, operation of the instrument, data import, and analysis. Moreover, quantitative shotgun MS technologies based on integrated MS1 signals have an inherent shortcoming: it is difficult to discriminate between two coeluting peptides whose isotope envelopes overlap. As quantification in SRM is estimated from fragment ions, the extra dimension in SRM, Q3, decreases the probability that signals from two coeluting peptides overlap. Thus, theoretically, SRM has an increased signal-to-noise ratio and hence improved peptide quantification. Another major advantage is that one can measure the absence of a peptide: one is guaranteed to measure continuously over the elution window where a peptide is expected, and the lack of a signal indicates that the peptide is not present in detectable amounts. In traditional


where ntimepep is the ntime for the peptide of interest, and RT1 and RT2 refer to the closest preceding and closest following RT peptides. The subscript "nt" refers to the ntime registered in the database for these peptides, and the subscript "m" refers to the retention time measured in the particular experiment. ntime cannot be calculated in this fashion for peptides that elute before the first or after the last RT peptide; for these peptides, a linear model reflecting the relationship between the measured RT and ntime is created and used to estimate ntime.

Instrument Settings

The SRM measurements were performed on a TSQ Vantage triple quadrupole mass spectrometer (Thermo Electron, Bremen, Germany) equipped with a nanoelectrospray ion source (Thermo Electron). Chromatographic separations of peptides were performed on an Eksigent 1D NanoLC system



mass spectrometer; OGE, off-gel electrophoresis; Q1, quadrupole 1; Q3, quadrupole 3; rmsd, root mean square deviation; ROC, receiver operating characteristic; RT, retention time; SRM, selected reaction monitoring; TIC, total ion current; TH, Todd−Hewitt broth

shotgun experiments, the absence of a peptide can result either from the peptide not being present or from an under-sampling artifact. The main disadvantage of SRM is that it does not lend itself to discovery and hence is a complementary technology to shotgun-based mass spectrometry. Other perceived disadvantages generally attributed to these types of experiments, such as the high degree of a priori knowledge required and the time-consuming operation, have been addressed by this software and are much less of a problem. We have shown that SRM is fully capable of accurately measuring relative quantities of a predetermined set of proteins and that the data contain few ambiguities and few missing data points. The results presented here constitute a step forward in making SRM more accessible and easier to use. The technology is rapidly becoming mature enough for a wider audience to enjoy the benefits of targeted proteomics.





ASSOCIATED CONTENT

Supporting Information

Table S1: The 240 models created (model.id) are summarized. rmsd.mode: rmsd of the relative intensities (1) or rmsd of the log relative intensities (2). rt.mode: Δt (1), Δt² (2), Δt⁴ (3). neg.sel.mode: random (1), close by RT (4), close by rmsd (2), and close by RT and rmsd (3, 5). n.transitions: number of transitions per assay. The parameters fitted using glm (eq 3) are presented in columns a, b, and c. Evaluation was done using the area under the curve (AUC) and the average sensitivity for 3−10 transitions at 0.1, 1, 2, 3, 4, and 5% in columns avg.fdr0001−fdr005. As no model was best in all categories, we selected model 33 as the final model. See text for details. This material is available free of charge via the Internet at http://pubs.acs.org.
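The AUC used to evaluate the score models can be illustrated with a minimal trapezoidal integration over an empirical ROC curve. This Python sketch is illustrative only and is not the evaluation code used for the 240 models; the function name and inputs are hypothetical.

```python
def roc_auc(scores_pos, scores_neg):
    """Trapezoidal AUC over the empirical ROC curve.

    scores_pos / scores_neg: discrimination scores for true and
    decoy peak groups (higher = more likely true).
    """
    thresholds = sorted(set(scores_pos) | set(scores_neg), reverse=True)
    pts = [(0.0, 0.0)]
    for t in thresholds:
        # sweep the threshold downward, recording (FPR, TPR) at each step
        tpr = sum(s >= t for s in scores_pos) / len(scores_pos)
        fpr = sum(s >= t for s in scores_neg) / len(scores_neg)
        pts.append((fpr, tpr))
    pts.append((1.0, 1.0))
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

roc_auc([0.9, 0.8, 0.7], [0.4, 0.3, 0.2])  # perfectly separated score lists
```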



REFERENCES

(1) Picotti, P.; Bodenmiller, B.; Mueller, L. N.; Domon, B.; Aebersold, R. Full dynamic range proteome analysis of S. cerevisiae by targeted proteomics. Cell 2009, 138 (4), 795−806.
(2) Malmström, J.; Beck, M.; Schmidt, A.; Lange, V.; Deutsch, E. W.; Aebersold, R. Proteome-wide cellular protein concentrations of the human pathogen Leptospira interrogans. Nature 2009, 460 (7256), 762−5.
(3) Mueller, L. N.; Brusniak, M. Y.; Mani, D. R.; Aebersold, R. An assessment of software solutions for the analysis of mass spectrometry based quantitative proteomics data. J. Proteome Res. 2008, 7 (1), 51−61.
(4) Malmström, L.; Malmström, J.; Aebersold, R. Quantitative proteomics of microbes: Principles and applications to virulence. Proteomics 2011, DOI: 10.1002/pmic.201100088.
(5) Deutsch, E. W.; Lam, H.; Aebersold, R. PeptideAtlas: A resource for target selection for emerging targeted proteomics workflows. EMBO Rep. 2008, 9 (5), 429−34.
(6) Lange, V.; Malmström, J. A.; Didion, J.; King, N. L.; Johansson, B. P.; Schäfer, J.; Rameseder, J.; Wong, C. H.; Deutsch, E. W.; Brusniak, M. Y.; Bühlmann, P.; Björck, L.; Domon, B.; Aebersold, R. Targeted quantitative analysis of Streptococcus pyogenes virulence factors by multiple reaction monitoring. Mol. Cell. Proteomics 2008, 7 (8), 1489−500.
(7) Krokhin, O. V. Sequence-specific retention calculator. Algorithm for peptide retention prediction in ion-pair RP-HPLC: Application to 300- and 100-A pore size C18 sorbents. Anal. Chem. 2006, 78 (22), 7785−95.
(8) Bertsch, A.; Jung, S.; Zerck, A.; Pfeifer, N.; Nahnsen, S.; Henneges, C.; Nordheim, A.; Kohlbacher, O. Optimal de novo design of MRM experiments for rapid assay development in targeted proteomics. J. Proteome Res. 2010, 9 (5), 2696−704.
(9) Reiter, L.; Rinner, O.; Picotti, P.; Hüttenhain, R.; Beck, M.; Brusniak, M. Y.; Hengartner, M. O.; Aebersold, R. mProphet: Automated data processing and statistical validation for large-scale SRM experiments. Nat. Methods 2011, DOI: 10.1038/nmeth.1584.
(10) Brusniak, M. Y.; Kwok, S. T.; Christiansen, M.; Campbell, D.; Reiter, L.; Picotti, P.; Kusebauch, U.; Ramos, H.; Deutsch, E. W.; Chen, J.; Moritz, R. L.; Aebersold, R. ATAQS: A computational software tool for high throughput transition optimization and validation for selected reaction monitoring mass spectrometry. BMC Bioinf. 2011, 12 (1), 78.
(11) MacLean, B.; Tomazela, D. M.; Shulman, N.; Chambers, M.; Finney, G. L.; Frewen, B.; Kern, R.; Tabb, D. L.; Liebler, D. C.; MacCoss, M. J. Skyline: An open source document editor for creating and analyzing targeted proteomics experiments. Bioinformatics 2010, 26 (7), 966−8.
(12) Malmström, L.; Malmström, J.; Marko-Varga, G.; Westergren-Thorsson, G. Proteomic 2DE database for spot selection, automated annotation, and data analysis. J. Proteome Res. 2002, 1 (2), 135−8.
(13) Malmström, L.; Marko-Varga, G.; Westergren-Thorsson, G.; Laurell, T.; Malmström, J. 2DDB - A bioinformatics solution for analysis of quantitative proteomics data. BMC Bioinf. 2006, 7, 158.
(14) McNeil, L. K.; Reich, C.; Aziz, R. K.; Bartels, D.; Cohoon, M.; Disz, T.; Edwards, R. A.; Gerdes, S.; Hwang, K.; Kubal, M.; Margaryan, G. R.; Meyer, F.; Mihalo, W.; Olsen, G. J.; Olson, R.; Osterman, A.; Paarmann, D.; Paczian, T.; Parrello, B.; Pusch, G. D.; Rodionov, D. A.; Shi, X.; Vassieva, O.; Vonstein, V.; Zagnitko, O.; Xia, F.; Zinner, J.; Overbeek, R.; Stevens, R. The National Microbial Pathogen Database Resource (NMPDR): A genomics platform based on subsystem annotation. Nucleic Acids Res. 2007, 35 (Database issue), D347−53.

AUTHOR INFORMATION

Corresponding Author

*E-mail: [email protected]. Phone: +41 44 633 2195. Fax: +41 44 633 10 51.

Author Contributions

L.M. designed and implemented the software, carried out experiments, and wrote the manuscript; J.M. designed the software, wrote the manuscript, and carried out experiments; N.S. carried out experiments; G.R. wrote the APEX-related software; R.A. wrote the manuscript and provided general guidance.



ACKNOWLEDGMENTS

We acknowledge funding from SystemsX.ch, the Swiss initiative for systems biology (PhosphoNetX project). J.M. was funded by the Swedish Research Council (Grant No. 2008:3356), Crafoordska Stiftelsen (Grant Nos. 20090802 and 20100892), and the Swedish Foundation for Strategic Research (SSF) (Grant No. FFL4). N.S. was supported by funding from the EU FP7 Grant "Unicellsys" (Grant No. 201142).



ABBREVIATIONS

AUC, area under the curve; CID, collision induced dissociation; CSV, comma separated values; FDR, false discovery rate; GLM, generalized linear model; GNU, GNU's not Unix; GO, gene ontology; GUI, graphical user interface; IC, ion chromatogram; LA8, least asymmetric 8; LC, liquid chromatography; MODWT, maximal overlap discrete wavelet transform; MS,


(15) Bendtsen, J. D.; Nielsen, H.; von Heijne, G.; Brunak, S. Improved prediction of signal peptides: SignalP 3.0. J. Mol. Biol. 2004, 340 (4), 783−95.
(16) Ashburner, M.; Ball, C. A.; Blake, J. A.; Botstein, D.; Butler, H.; Cherry, J. M.; Davis, A. P.; Dolinski, K.; Dwight, S. S.; Eppig, J. T.; Harris, M. A.; Hill, D. P.; Issel-Tarver, L.; Kasarskis, A.; Lewis, S.; Matese, J. C.; Richardson, J. E.; Ringwald, M.; Rubin, G. M.; Sherlock, G. Gene ontology: Tool for the unification of biology. The Gene Ontology Consortium. Nat. Genet. 2000, 25 (1), 25−9.
(17) Pedrioli, P. G.; Eng, J. K.; Hubley, R.; Vogelzang, M.; Deutsch, E. W.; Raught, B.; Pratt, B.; Nilsson, E.; Angeletti, R. H.; Apweiler, R.; Cheung, K.; Costello, C. E.; Hermjakob, H.; Huang, S.; Julian, R. K.; Kapp, E.; McComb, M. E.; Oliver, S. G.; Omenn, G.; Paton, N. W.; Simpson, R.; Smith, R.; Taylor, C. F.; Zhu, W.; Aebersold, R. A common open representation of mass spectrometry data and its application to proteomics research. Nat. Biotechnol. 2004, 22 (11), 1459−66.
(18) Keller, A.; Nesvizhskii, A. I.; Kolker, E.; Aebersold, R. Empirical statistical model to estimate the accuracy of peptide identifications made by MS/MS and database search. Anal. Chem. 2002, 74 (20), 5383−92.
(19) Lu, P.; Vogel, C.; Wang, R.; Yao, X.; Marcotte, E. M. Absolute protein expression profiling estimates the relative contributions of transcriptional and translational regulation. Nat. Biotechnol. 2007, 25 (1), 117−24.
(20) Craig, R.; Beavis, R. C. TANDEM: Matching proteins with tandem mass spectra. Bioinformatics 2004, 20, 1466−7.
(21) Keller, A.; Nesvizhskii, A. I.; Kolker, E.; Aebersold, R. Empirical statistical model to estimate the accuracy of peptide identifications made by MS/MS and database search. Anal. Chem. 2002, 74, 5383−92.
(22) Nesvizhskii, A. I.; Keller, A.; Kolker, E.; Aebersold, R. A statistical model for identifying proteins by tandem mass spectrometry. Anal. Chem. 2003, 75, 4646−58.
(23) Malmström, J.; Karlsson, C.; Nordenfelt, P.; Ossola, R.; Weisser, H.; Quandt, A.; Hansson, K.; Aebersold, R.; Malmström, L.; Björck, L. Streptococcus pyogenes in human plasma: Adaptive mechanisms analyzed by mass spectrometry-based proteomics. J. Biol. Chem. 2012, 287, 1415−25.
