Statistical Learning of Discrete States in Time Series
Published as part of The Journal of Physical Chemistry virtual special issue "Deciphering Molecular Complexity in Dynamics and Kinetics from the Single Molecule to the Single Cell Level".
Hao Li and Haw Yang*
Department of Chemistry, Princeton University, Princeton, New Jersey 08544, United States
ABSTRACT: Time series obtained from time-dependent experiments contain rich information on the kinetics and dynamics of the system under investigation. This work describes an unsupervised learning framework, along with the derivation of the necessary analytical expressions, for the analysis of Gaussian-distributed time series that exhibit discrete states. After the time series has been partitioned into segments in a model-free manner using the previously developed change-point (CP) method, the protocol starts with an agglomerative hierarchical clustering algorithm to classify the detected segments into possible states. The initial state clustering is further refined using an expectation-maximization (EM) procedure, and the number of states is determined by a Bayesian information criterion (BIC). Also introduced here is an achievement scalarization function, usually seen in the artificial-intelligence literature, for quantitatively assessing the performance of state determination. The statistical learning framework, which comprises three stages (detection of signal changes, clustering, and number-of-state determination), was thoroughly characterized using simulated trajectories with random intensity segments that have no underlying kinetics, and its performance was critically evaluated. The application to experimental data is also demonstrated. The results suggest that this general framework, whose implementation is based on firm theoretical foundations and does not require the imposition of any kinetics model, is powerful in determining the number of states, the parameters of each state, and the associated statistical significance.
■ INTRODUCTION
A time series is a sequence of observables that depicts the evolution of a system over time. The output of time-dependent measurements such as single-molecule spectroscopy (SMS)1−8 or single-particle spectroscopy (SPS)9−13 usually manifests as time series containing information regarding the dynamics and kinetics of the system. For example, the magnitude of the recorded time-varying SMS signals and the transitions between them are associated with the molecular state and molecule-transformation events, respectively. Oftentimes, such time series exhibit discrete states connected by abrupt transitions between them. In chemical dynamics, this implies that the experimental time resolution is much shorter than the state-dwelling time but much longer than the state-transition time.14 Quantitatively identifying these features in a time series is therefore the first step toward a mechanistic understanding of the system. For an experimental SMS/SPS time series, due to measurement noise and inherent thermal fluctuations,15 it has been a well-recognized challenge to resolve the distinct states and to determine the physical parameters associated with them (e.g., the number of states, the relative populations of the states, and the magnitude of the experimental observable for each state). A common practice seen in the literature is to construct a histogram of the experimental observable along the time series. The information on the states is then obtained by fitting
to the histogram a mixture of Gaussian distributions. Approaches like that, while intuitive and easy to implement, are hardly accurate, especially when the states are not well separated or when there are more than three states. If the system can be described by a kinetics model and the model is known, then model-based methods such as the hidden Markov model could be helpful.16−18 In complex systems, where the SMS/SPS approach can truly provide unique information not available from ensemble-averaged methods, however, not only is the kinetics model usually not known but also the system often fails to meet the assumptions built into a kinetics model. A well-known example is the intermittent photoemission of a single quantum dot (QD),19 in which the dynamics spans several decades of time scales20 and involves a continuous distribution of emission states.21 Therefore, it would be highly beneficial to have available a general and statistically rigorous approach that determines the states without any subjective kinetics-model assumption.
The task of determining the states in a time series can be cast, and solved, as a clustering problem in data mining and statistical learning. Such problems are generally about sorting the data into subsets, or clusters, such that data in the same
cluster have a greater similarity than those in different clusters. With the explosion of "Big Data" problems, statistical learning has become highly sought after in science,22−24 economics,25,26 business,27,28 and many other areas.29,30 Unfortunately, these methods are not one-size-fits-all formulas, in that appropriately applying a method developed for a different discipline may be far from straightforward. It is generally understood that the performance of a clustering method varies with the type of data set; there has been no consensus on a single best approach that works well in all scenarios.30
In this contribution, we present a framework for statistically inferring the states from a Gaussian-noise-corrupted time series that exhibits discrete signal levels with stepwise changes. It is built on the original contribution of change-point (CP) detection.31 In addition to establishing the data-driven CP detection method, that initial publication also introduced the model-free analysis of discrete states from Poisson-noise-corrupted time series, although the state-inference part was not characterized as thoroughly as one would have liked. The basic ideas originally established for Poisson-noise-corrupted data are generalized in this work to the more commonly encountered Gaussian-noise-degraded data. The derivation of the analytical expressions for the various statistical tools, the introduction of the achievement scalarization function concept for quantitatively assessing the performance of the overall protocol, and the full characterization of the procedure are all included. The results show that the framework is highly effective in determining the states from noise-corrupted time series under experimentally realistic conditions.

■ METHODS
This work considers time series with discrete jumps followed by some period of constant signal level, where the state changes occur much faster than the time scale of the measurement. While the discussions here are motivated by SMS/SPS intensity−time trajectories, this type of data appears in many different contexts in the natural sciences. As such, without loss of generality, "intensity" is used in the remainder of the report as a general term for the signal level in a time series. The statistical learning of intensity states is accomplished in three stages. The first stage is to pinpoint the locations of intensity changes, dividing the intensity−time trace into segments. The second stage is to classify these segments into clusters, where segments in the same cluster are considered to come from the same intensity state; the number of distinct clusters, however, is not yet determined at this point. The third stage is to determine the number of clusters that the data can support with statistical significance. These three stages of statistical inference are accomplished by, respectively, detection and localization of abrupt signal changes by the CP method, classification by an initial agglomerative hierarchical (AH) clustering followed by refinement by expectation-maximization (EM) inference, and adjudication by Bayesian information criterion (BIC) selection. The CP−EM−BIC framework has been constructed very much with the statistical significance of the results in mind, including that of the intermediate outputs of each step. Moreover, the statistical tools therein have been carefully chosen so that they are compatible with one another and the resulting framework rests upon a congruent overall theoretical structure. Below we describe in detail how each stage is accomplished, starting with a summary of the CP method. The CP method is recapitulated here not only for completeness but also for setting up the terminology and likelihood functions that will be carried over to the stages of statistically learning the states.
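To make the flow of data through these three stages concrete, the following Python sketch outlines how a trace would be carried from CP-based segmentation to clustering and model selection. It is only an illustrative outline: the routine names detect_change_points, ah_then_em, and best_bic are hypothetical placeholders for the statistical tools described in the remainder of this section, not the authors' released implementation.

```python
import numpy as np

def split_into_segments(x, change_points):
    """Stage 1 output: cut the trace at the detected CP locations."""
    bounds = [0, *sorted(change_points), len(x)]
    return [x[a:b] for a, b in zip(bounds[:-1], bounds[1:])]

def segment_summaries(segments):
    """Per-segment statistics (length, mean, variance) carried into stages 2 and 3."""
    return [(len(s), float(np.mean(s)), float(np.var(s))) for s in segments]

# Overall data flow (each stage routine is described in the text that follows):
#   cps      = detect_change_points(x)                    # stage 1: CP detection
#   segments = split_into_segments(x, cps)
#   labels   = ah_then_em(segment_summaries(segments))    # stage 2: AH + EM clustering
#   n_states = best_bic(segments, labels)                 # stage 3: BIC model selection
```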
Detection and Localization of Abrupt Signal Changes. For the detection and localization of abrupt changes in signal, the CP analysis based on a generalized likelihood ratio test is used here.31 To illustrate the concrete implementation of the method, we will focus on time series sampled at equal time intervals, {xi}, where each element xi is drawn from a Gaussian probability function
xi ∼ 𝒩(I, σ),  i = 1, ..., N
with 𝒩(I, σ) ≡ (1/√(2πσ²)) exp[−(x − I)²/(2σ²)] being the Gaussian density and "∼" denoting that the random variable on the left-hand side is drawn from the distribution on the right-hand side. An example of Gaussian-mean CP analysis on a simulated trajectory is shown in Figure 1.
Figure 1. CP detection on a simulated time series with Gaussian-distributed noise. The time series (gray) consists of 2000 data points and includes 5 intensity states with average intensities of 100, 150, 200, 250, and 300. All intensity states have the same noise level of 19. The vertical dashed lines indicate the locations of CPs based on a generalized likelihood ratio test.
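A trace comparable to the one in Figure 1 can be generated in a few lines. The sketch below follows the parameters given in the caption (2000 points, five states with mean intensities 100−300, common noise level 19); the equal segment lengths are an arbitrary choice, since the caption does not specify where the CPs fall.

```python
import numpy as np

rng = np.random.default_rng(0)

# Five intensity states with a common Gaussian noise level, as in Figure 1.
state_means = [100, 150, 200, 250, 300]
sigma = 19.0
segment_length = 400                      # arbitrary choice: 5 x 400 = 2000 points

# Concatenate Gaussian-distributed segments, one per state.
trace = np.concatenate([rng.normal(loc=mu, scale=sigma, size=segment_length)
                        for mu in state_means])
true_cps = [segment_length * j for j in range(1, len(state_means))]
print(trace.shape, true_cps)              # (2000,) [400, 800, 1200, 1600]
```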
This type of data is representative of the output of camera-based SMS/SPS experiments widely used in physical chemistry and biophysics. That camera-based output is Gaussian-distributed, as illustrated by the distributions of the camera pixel readouts at intensity levels typical of experiments (Figure S1). As summarized in Tables S1−S3, the distributions indeed pass the Kolmogorov−Smirnov tests for normality. The CP analysis appears to be the only method to date for which the type-I (false-positive) error has been thoroughly evaluated, complete with the detection power and a confidence interval for each detected CP. Further developments include the detection of changes in particle diffusivity (detecting changes in σ),32 more recently a computational parallelization scheme (detecting changes in I),33 as well as the simultaneous detection of speed and position changes in single-particle tracking experiments (detecting changes in I and/or σ).34 The advantages of having a model-free way of detecting signal changes with statistical significance are gaining more widespread recognition as the community continues to reflect upon the broader potential issues that arise when assignments are based on visual inspection or on empirical methods. Indeed, while the CP method has been routinely used as a preanalysis tool to locate the information-rich regions in single-molecule Förster-type resonance energy transfer (smFRET) trajectories,35−37 it has also enabled many exciting discoveries. For example, in spectroscopic applications, it has been used to investigate the emission states of a single fluorescent molecule,38,39 the photophysics of single fluorescent proteins,40 and the on/off time distributions of QD emission.41
Figure 2. Performance of CP detection as a function of the size of the intensity change and the relative noise level. The relative noise was defined as σrel = σ/I1, and I1 = 100 was used in all of the simulations. For 30 different ratios of I2/I1 (plotted on a log scale) and 50 different σrel, 10000 traces of 200 Gaussian-distributed random numbers were generated for each (I2/I1, σrel) pair, with the intensity change occurring at the 100th data point. A type-I error of 0.05 was used in the analysis. (a) Detection power of CPs as a function of intensity change and noise level. (b) Standard deviation of the location of the identified CPs for all of the conditions.
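As a rough illustration of the simulation protocol described in the caption, the sketch below estimates the detection power for a single (I2/I1, σrel) pair. The change-point detector is passed in as a callable, standing in for the generalized likelihood ratio test evaluated at its α = 0.05 critical value; this is an illustrative sketch, not the analysis used to produce Figure 2.

```python
import numpy as np

def estimate_power(detect_cp, i2_over_i1, sigma_rel,
                   n_traces=10_000, i1=100.0, n_points=200, seed=1):
    """Monte Carlo estimate of the detection power for one (I2/I1, sigma_rel) pair.
    `detect_cp` is any callable that returns True when it reports a CP in the trace;
    in the paper's analysis this role is played by the generalized likelihood ratio
    test with its alpha = 0.05 critical value."""
    rng = np.random.default_rng(seed)
    sigma = sigma_rel * i1
    half = n_points // 2                  # the CP sits at the 100th data point
    detected = 0
    for _ in range(n_traces):
        trace = np.concatenate([rng.normal(i1, sigma, half),
                                rng.normal(i2_over_i1 * i1, sigma, half)])
        detected += bool(detect_cp(trace))
    return detected / n_traces
```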
In the more biologically relevant context, it allows the analysis of conformational states of proteins,42 oligomeric states of macromolecule complexes or protein copy numbers,43 dynamics in enzymatic turnovers,44 as well as diffusive states of nanoparticles in a cellular milieu.45 We note that other attractive approaches to identifying step changes have also been proposed after the original CP work. One is an algorithm based on the Student's t test.46 The Student's t test, however, is premised upon the two samples being compared having similar sample sizes and variances. Its performance is therefore unclear when the segments before and after a CP have very different lengths, a scenario frequently encountered in experiments. On the other hand, it is possible to construct a CP detection method based solely on a Bayesian formulation.47 The Bayesian approach provides an alternative quantification of, and interpretation for, the evidence in the data, but with the caveat that it tends to be computationally intensive.48
Detection of Sudden Intensity Changes. For the N observations, {xi}, i = 1, ..., N, that are Gaussian-distributed, two hypotheses, HA and H0, are considered. HA states that there is a CP at the kth data point, with
xi ∼ 𝒩(I1, σ1),  i = 1, ..., k
xj ∼ 𝒩(I2, σ2),  j = k + 1, ..., N
This alternative hypothesis is compared to the null hypothesis H0, which states that there is no CP within the time series,
xi ∼ 𝒩(I0, σ0),  i = 1, ..., N
To determine whether there is a CP at k, the log-likelihood ratio of these two hypotheses is computed (see ref 34 for the derivation)

ℒk = −(k/2) ln σ1² − ((N − k)/2) ln σ2² + (N/2) ln σ0²   (1)
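For illustration, eq 1 can be evaluated directly from the maximum likelihood (sample) variances of the two candidate subsegments and of the whole trace; the sketch below scans the interior candidate points and also records the maximizer used in the next step (the substitution of MLE values is discussed below). It is an illustrative reimplementation of the expressions above, not the authors' released code.

```python
import numpy as np

def log_likelihood_ratio(x, k):
    """Eq 1: L_k for a candidate change point after the kth data point, with the
    variances replaced by their maximum likelihood estimates (np.var, ddof=0)."""
    n = len(x)
    var1 = np.var(x[:k])                  # points 1..k under H_A
    var2 = np.var(x[k:])                  # points k+1..N under H_A
    var0 = np.var(x)                      # all points under H_0
    return (-(k / 2) * np.log(var1)
            - ((n - k) / 2) * np.log(var2)
            + (n / 2) * np.log(var0))

def scan_single_cp(x, margin=2):
    """Evaluate 2*L_k over interior candidates; the maximum and its location
    correspond to the test statistic and most likely CP position (eq 2)."""
    ks = np.arange(margin, len(x) - margin)
    stats = np.array([2 * log_likelihood_ratio(x, int(k)) for k in ks])
    i = int(np.argmax(stats))
    return int(ks[i]), float(stats[i])
```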
The maximum of the log-likelihood ratio gives the most likely position at which a CP occurs within the time series

ΛN = max {2ℒk},  1 ≤ k < N   (2)

Because the true parameter values are not known a priori for noise-corrupted CPs, the test is termed the generalized likelihood ratio test and the parameters are replaced with their respective maximum likelihood estimate (MLE) values: Î1, σ̂1, Î2, σ̂2, Î0, and σ̂0, where hereafter "^" denotes the MLE. The calculation of the critical region has been shown previously;34 the same values are used here. Note that the error rate from eq 2 is not uniform across the entire trajectory because the parameters used in the generalized log-likelihood test are MLE values. As k approaches either end of the time series, fewer data points are available for calculating the parameters' MLE values, resulting in greater uncertainty for the parameters and hence higher error rates. As shown in previous studies,31,33,34 the false-positive error rates are very low (less than 1%) even when k is close to the time-series end points. Also, the error rates decrease as the time series becomes longer.33 To identify all of the CPs in a trajectory, the above method is applied recursively to the whole trajectory with a binary segmentation algorithm.31 As a result, (J + 1) trajectory segments will be generated by the J CPs. Each segment has its own intensity level, distinguishable from those of its neighboring segments, and no CPs are found within any segment.
Power of Detection. The performance of CP detection can be tested by the probability of missing a true CP and by the accuracy of where the CP is located. Here the power of detection is defined on the basis of this probability; for example, the power of detection is 0.95 if the probability of missing a CP is 5%. The power of detection as a function of the magnitude of the Gaussian-distributed intensity change has been characterized by Song et al.33 For a Gaussian-distributed time series, however, the variance (noise), in addition to the size of the intensity change, is also likely to affect the detection of a CP.32 Intuitively, for a given intensity jump, a CP is easier to detect if the noise level is lower. This is especially relevant in SMS, where a wide range of signal-to-noise ratios (SNRs) is usually encountered depending on the experimental conditions. We use simulations to quantitatively evaluate the power of detection as a function of the relative size of the intensity jump, I2/I1, and the noise level, represented by the standard deviation σ of the Gaussian density. For each I2/I1 and σ, 200-point trajectories with one CP at k = 100 were used. Concretely, the first 100 data points have a "true" intensity of I1, and the second 100 data points have a "true" intensity of I2, with both segments having the same σ. To make the discussion more general, a relative noise level is defined as σrel = σ/I1. The result is shown in Figure 2a. As expected, the power of detection increases with the size of the intensity jump and decreases as the noise level increases.

... >95%. These statistical assessments allow the user to make objective statements about the obtained data. Another important aspect of making objective statements is the statistics of the data itself. In practice, one needs to analyze a large number of trajectories to obtain a general picture of the system being studied, because the number of states may vary from molecule to molecule. Beyond genuine molecular behaviors or properties, this variation can also arise for practical reasons. For example, some molecules may not bleach within the data acquisition time, some may bleach too quickly for any event to be observed, and still others may show an unreasonable number of states owing to complicated photophysics or the formation of aggregates. In this regard, the state-learning results from our method can conveniently serve as criteria for categorizing or filtering the data. As an immediate example in SMS, the photobleached segment, if considered to exist, can be reliably removed, and the rest of the trajectory can then be analyzed again. In practical applications, the model-free feature and the learning results from our method help to implement all of these considerations and ensure that the conclusions about the system are drawn from a representative set of data.
Figure 9. Statistical learning framework for a time series composed of discrete states separated by abrupt transitions.
■ CONCLUSIONS
Figure 9 summarizes the contributions from this work and places them in context. The conceptual framework for the model-free statistical learning is shown on the left. It comprises three major stages: signal change detection, clustering, and number-of-state model selection. The specifics of the mathematical tools used to accomplish these three goals do not seem to be important, as long as they are rigorous and statistically robust; some possible alternatives for each stage have been discussed in the main text where appropriate. This framework was first put forward in the original work dealing with Poisson-noise-corrupted time series.31 In the initial implementation, the signal change detection was achieved by likelihood-ratio-test CP detection, the clustering and parameter estimation were achieved by a combination of AH clustering and EM refinement, and the model selection was achieved by the BIC metric. In that work, however, the full state-inference framework was not thoroughly characterized. Moreover, whether the concept was sufficiently general to treat other types of time series remained unknown. In this work, we have generalized the idea to Gaussian-noise-corrupted time series and derived the analytical expressions suitable for such an implementation (see the summary in blue in Figure 9). Because Poisson noise (ref 31) and Gaussian noise (this work) are the two most common types of noise seen in the physical sciences, these two implementations should be able to handle most cases. That is, this work completes a rigorous and practical statistical learning framework for inferring the discrete states from a noise-corrupted time series.
In complex environments, whether a system is under thermal equilibrium or not, the system's dynamics may involve many orders of magnitude in time and length scales. Time-dependent SMS/SPS is most powerful for elucidating the physical principles and molecular origins underlying a complex system. Depending on the relative time scale between the system dynamics and the experimental resolution, an experimental time series may either exhibit abrupt changes between discrete states or appear as a continuous progression. The completed statistical learning framework discussed so far is most suitable for the former case with discrete signals; importantly, because there are no kinetics assumptions built into the framework, it is applicable to both equilibrium and nonequilibrium conditions, as well as to systems that cannot be described by kinetics schemes. For the latter case with continuous signals, on the other hand, a full statistical learning framework has not yet emerged. Nevertheless, several works have suggested that statistical learning from continuous experimental signals is not intractable. They range from direct analysis on the signal level62−64 to statistical learning of the dynamics parameters guided by a physical model65−68 and to more abstract theoretical objects.69,70 It is hoped that, with these and many forthcoming new developments, the pace of new discoveries will be greatly accelerated.
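As a compact illustration of the clustering-plus-model-selection idea summarized above, the sketch below fits one-dimensional Gaussian mixtures with an increasing number of states by EM and compares them with the standard BIC, −2 ln L̂ + p ln n. It is a generic sketch applied directly to the data points and assumes nothing about the authors' implementation; the framework described in this work operates on the CP-derived segments, and its exact likelihood and BIC expressions are derived in the text and the Supporting Information.

```python
import numpy as np

def gaussian_pdf(x, mu, var):
    return np.exp(-(x - mu) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def fit_gmm_em(x, n_states, n_iter=200, seed=0):
    """EM for a 1-D Gaussian mixture; returns weights, means, variances, log-likelihood."""
    rng = np.random.default_rng(seed)
    w = np.full(n_states, 1.0 / n_states)
    mu = rng.choice(x, size=n_states, replace=False).astype(float)
    var = np.full(n_states, np.var(x))
    for _ in range(n_iter):
        # E step: responsibility of each state for each data point
        dens = np.array([wk * gaussian_pdf(x, mk, vk) for wk, mk, vk in zip(w, mu, var)])
        resp = dens / dens.sum(axis=0)
        # M step: update weights, means, and variances from the responsibilities
        nk = resp.sum(axis=1)
        w = nk / len(x)
        mu = (resp @ x) / nk
        var = np.array([(resp[k] * (x - mu[k]) ** 2).sum() / nk[k] for k in range(n_states)])
        var = np.maximum(var, 1e-12)      # guard against collapsing components
    dens = np.array([wk * gaussian_pdf(x, mk, vk) for wk, mk, vk in zip(w, mu, var)])
    return w, mu, var, float(np.log(dens.sum(axis=0)).sum())

def bic(log_lik, n_states, n_points):
    """BIC = -2 ln L + p ln n with p = 3K - 1 free parameters (weights, means, variances)."""
    return -2.0 * log_lik + (3 * n_states - 1) * np.log(n_points)

# Pick the number of states K that minimizes the BIC, e.g. for K = 1..6:
# best_k = min(range(1, 7), key=lambda k: bic(fit_gmm_em(trace, k)[3], k, len(trace)))
```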
■ ASSOCIATED CONTENT
Supporting Information
The Supporting Information is available free of charge on the ACS Publications website at DOI: 10.1021/acs.jpcb.8b10561.
Derivation of Bayesian information criterion, experimental materials and methods, tables providing distributions of camera readouts, figures showing distributions of camera readouts, success rates, the fraction of bias on the number of states, comparison with HaMMy reconstructed single-molecule trajectories, trajectory histogram with inferred states, and comparison with the HaMMy reconstructed QD trajectory (PDF)
■ AUTHOR INFORMATION
Corresponding Author
*E-mail: [email protected].
ORCID
Hao Li: 0000-0003-2886-5658
Haw Yang: 0000-0003-0268-6352
Notes
The authors declare no competing financial interest. The source code associated with this publication is made available through the Yang lab or through the Princeton GitHub repository at https://github.com/PrincetonUniversity/GaussianEMBIC. The source code is distributed under the Creative Commons license CC BY-NC-SA (Attribution-NonCommercial-ShareAlike).
■ ACKNOWLEDGMENTS
The authors thank the Betty and Gordon Moore Foundation (Grant GBMF4741) and Princeton University for financial support. We thank an anonymous reviewer for the suggestion to include the HMM analysis and the Gaussian-mixture model fitting for contrast.
■ REFERENCES
(1) Xie, X. S.; Choi, P. J.; Li, G.-W.; Lee, N. K.; Lia, G. Single-Molecule Approach to Molecular Biology in Living Bacterial Cells. Annu. Rev. Biophys. 2008, 37, 417−444. (2) Tan, Y.-W.; Yang, H. Seeing the Forest for the Trees: Fluorescence Studies of Single Enzymes in the Context of Ensemble Experiments. Phys. Chem. Chem. Phys. 2011, 13, 1709−1721. (3) van Oijen, A. M. Single-Molecule Approaches to Characterizing Kinetics of Biomolecular Interactions. Curr. Opin. Biotechnol. 2011, 22, 75−80. (4) Juette, M. F.; Terry, D. S.; Wasserman, M. R.; Zhou, Z.; Altman, R. B.; Zheng, Q.; Blanchard, S. C. The Bright Future of Single-Molecule Fluorescence Imaging. Curr. Opin. Chem. Biol. 2014, 20, 103−111. (5) Moerner, W. E.; Shechtman, Y.; Wang, Q. Single-Molecule Spectroscopy and Imaging over the Decades. Faraday Discuss. 2015, 184, 9−36. (6) Aggarwal, V.; Ha, T. Single-Molecule Fluorescence Microscopy of Native Macromolecular Complexes. Curr. Opin. Struct. Biol. 2016, 41, 225−232. (7) Chu, J.-W.; Yang, H. Identifying the Structural and Kinetic Elements in Protein Large-Amplitude Conformational Motions. Int. Rev. Phys. Chem. 2017, 36, 185−227. (8) Ray, S.; Widom, J. R.; Walter, N. G. Life under the Microscope: Single-Molecule Fluorescence Highlights the RNA World. Chem. Rev. 2018, 118, 4120−4155. (9) Tachikawa, T.; Majima, T. Single-Molecule, Single-Particle Fluorescence Imaging of TiO2-Based Photocatalytic Reactions. Chem. Soc. Rev. 2010, 39, 4802−4819. (10) Fernée, M. J.; Tamarat, P.; Lounis, B. Spectroscopy of Single Nanocrystals. Chem. Soc. Rev. 2014, 43, 1311−1337.
(11) Olson, J.; Dominguez-Medina, S.; Hoggard, A.; Wang, L.-Y.; Chang, W.-S.; Link, S. Optical Characterization of Single Plasmonic Nanoparticles. Chem. Soc. Rev. 2015, 44, 40−57. (12) Pietryga, J. M.; Park, Y.-S.; Lim, J.; Fidler, A. F.; Bae, W. K.; Brovelli, S.; Klimov, V. I. Spectroscopic and Device Aspects of Nanocrystal Quantum Dots. Chem. Rev. 2016, 116, 10513−10622. (13) Shen, H.; Tauzin, L. J.; Baiyasi, R.; Wang, W.; Moringo, N.; Shuang, B.; Landes, C. F. Single Particle Tracking: From Theory to Biophysical Applications. Chem. Rev. 2017, 117, 7331−7376. (14) Yang, H. Change-Point Localization and Wavelet Spectral Analysis of Single-Molecule Time Series. Adv. Chem. Phys. 2011, 146, 217−243. (15) Yang, H. In Theory and Evaluation of Single-Molecule Signals; Barkai, E., Brown, F. L., Orrit, M., Yang, H., Eds.; World Scientific Publishing: Singapore, 2008. (16) Andrec, M.; Levy, R. M.; Talaga, D. S. Direct Determination of Kinetic Rates from Single-Molecule Photon Arrival Trajectories Using Hidden Markov Models. J. Phys. Chem. A 2003, 107, 7454−7464. (17) Bronson, J. E.; Fei, J.; Hofman, J. M.; Gonzalez, R. L.; Wiggins, C. H. Learning Rates and States from Biophysical Time Series: A Bayesian Approach to Model Selection and Single-Molecule FRET Data. Biophys. J. 2009, 97, 3196−3205. (18) van de Meent, J.-W.; Bronson, J. E.; Wiggins, C. H.; Gonzalez, R. L. Empirical Bayes Methods Enable Advanced Population-Level Analyses of Single-Molecule FRET Experiments. Biophys. J. 2014, 106, 1327−1337. (19) Nirmal, M.; Dabbousi, B. O.; Bawendi, M. G.; Macklin, J. J.; Trautman, J. K.; Harris, T. D.; Brus, L. E. Fluorescence Intermittency in Single Cadmium Selenide Nanocrystals. Nature 1996, 383, 802−804. (20) Kuno, M.; Fromm, D. P.; Hamann, H. F.; Gallagher, A.; Nesbitt, D. J. Nonexponential "Blinking" Kinetics of Single CdSe Quantum Dots: A Universal Power Law Behavior. J. Chem. Phys. 2000, 112, 3117−3120. (21) Zhang, K.; Chang, H.; Fu, A.; Alivisatos, A. P.; Yang, H. Continuous Distribution of Emission States from Single CdSe/ZnS Quantum Dots. Nano Lett. 2006, 6, 843−847. (22) Szymkuć, S.; Gajewska, E. P.; Klucznik, T.; Molga, K.; Dittwald, P.; Startek, M.; Bajczyk, M.; Grzybowski, B. A. Computer-Assisted Synthetic Planning: The End of the Beginning. Angew. Chem., Int. Ed. 2016, 55, 5904−5937. (23) Raccuglia, P.; Elbert, K. C.; Adler, P. D. F.; Falk, C.; Wenny, M. B.; Mollo, A.; Zeller, M.; Friedler, S. A.; Schrier, J.; Norquist, A. J. Machine-Learning-Assisted Materials Discovery using Failed Experiments. Nature 2016, 533, 73−76. (24) Sanchez-Lengeling, B.; Aspuru-Guzik, A. Inverse Molecular Design using Machine Learning: Generative Models for Matter Engineering. Science 2018, 361, 360−365. (25) Jordan, M.; Mitchell, T. Machine Learning: Trends, Perspectives, and Prospects. Science 2015, 349, 255−260. (26) Mullainathan, S.; Spiess, J. Machine Learning: An Applied Econometric Approach. J. Econ. Perspect. 2017, 31, 87−106. (27) Ngai, E.; Hu, Y.; Wong, Y.; Chen, Y.; Sun, X. The Application of Data Mining Techniques in Financial Fraud Detection: A Classification Framework and an Academic Review of Literature. Decis. Support Syst. 2011, 50, 559−569. (28) Choi, T.-M.; Chan, H. K.; Yue, X. Recent Development in Big Data Analytics for Business Operations and Risk Management. IEEE Trans. Cybern. 2017, 47, 81−92. (29) Hastie, T.; Tibshirani, R.; Friedman, J. The Elements of Statistical Learning; Springer: New York, 2009. (30) James, G.; Witten, D.; Hastie, T.; Tibshirani, R. An Introduction to Statistical Learning; Springer: New York, 2013; Vol. 103. (31) Watkins, L. P.; Yang, H. Detection of Intensity Change Points in Time-Resolved Single-Molecule Measurements. J. Phys. Chem. B 2005, 109, 617−628. (32) Montiel, D.; Cang, H.; Yang, H. Quantitative Characterization of Changes in Dynamical Behavior for Single-Particle Tracking Studies. J. Phys. Chem. B 2006, 110, 19763−19770.
(33) Song, N.; Yang, H. Parallelization of Change Point Detection. J. Phys. Chem. A 2017, 121, 5100−5109. (34) Yin, S.; Song, N.; Yang, H. Detection of Velocity and Diffusion Coefficient Change Points in Single-Particle Trajectories. Biophys. J. 2018, 115, 217−229. (35) Hanson, J. A.; Duderstadt, K.; Watkins, L. P.; Bhattacharyya, S.; Brokaw, J.; Chu, J.-W.; Yang, H. Illuminating the Mechanistic Roles of Enzyme Conformational Dynamics. Proc. Natl. Acad. Sci. U. S. A. 2007, 104, 18055−18060. (36) Xu, C. S.; Kim, H.; Yang, H.; Hayden, C. Joint Statistical Analysis of Multi-Channel Time Series from Single Quantum Dot(Cy5)n Constructs. J. Phys. Chem. B 2008, 112, 5917−5923. (37) Flynn, E. M.; Hanson, J. A.; Alber, T.; Yang, H. Dynamic Active-Site Protection by the M. tuberculosis Protein Tyrosine Phosphatase. J. Am. Chem. Soc. 2010, 132, 4772−4780. (38) Wustholz, K. L.; Bott, E. D.; Kahr, B.; Reid, P. J. Memory and Spectral Diffusion in Single-Molecule Emission. J. Phys. Chem. C 2008, 112, 7877−7885. (39) Wong, N. Z.; Ogata, A. F.; Wustholz, K. L. Dispersive Electron-Transfer Kinetics from Single Molecules on TiO2 Nanoparticle Films. J. Phys. Chem. C 2013, 117, 21075−21085. (40) Goldsmith, R. H.; Moerner, W. E. Watching Conformational- and Photodynamics of Single Fluorescent Proteins in Solution. Nat. Chem. 2010, 2, 179−186. (41) Cordones, A. A.; Bixby, T. J.; Leone, S. R. Evidence for Multiple Trapping Mechanisms in Single CdSe/ZnS Quantum Dots from Fluorescence Intermittency Measurements over a Wide Range of Excitation Intensities. J. Phys. Chem. C 2011, 115, 6341−6349. (42) Bockenhauer, S.; Fürstenberg, A.; Yao, X. J.; Kobilka, B. K.; Moerner, W. E. Conformational Dynamics of Single G Protein-Coupled Receptors in Solution. J. Phys. Chem. B 2011, 115, 13328−13338. (43) Tuson, H. H.; Biteen, J. S. Unveiling the Inner Workings of Live Bacteria Using Super-Resolution Microscopy. Anal. Chem. 2015, 87, 42−63. (44) Terentyeva, T. G.; Engelkamp, H.; Rowan, A. E.; Komatsuzaki, T.; Hofkens, J.; Li, C.-B.; Blank, K. Dynamic Disorder in Single-Enzyme Experiments: Facts and Artifacts. ACS Nano 2012, 6, 346−354. (45) Welsher, K.; Yang, H. Multi-Resolution 3D Visualization of the Early Stages of Cellular Uptake of Peptide-Coated Nanoparticles. Nat. Nanotechnol. 2014, 9, 198−203. (46) Shuang, B.; Cooper, D.; Taylor, J. N.; Kisley, L.; Chen, J.; Wang, W.; Li, C. B.; Komatsuzaki, T.; Landes, C. F. Fast Step Transition and State Identification (STaSI) for Discrete Single-Molecule Data Analysis. J. Phys. Chem. Lett. 2014, 5, 3157−3161. (47) Ensign, D. L.; Pande, V. S. Bayesian Detection of Intensity Changes in Single Molecule and Molecular Dynamics Trajectories. J. Phys. Chem. B 2010, 114, 280−292. (48) Liu, J. S. Monte Carlo Strategies in Scientific Computing; Springer Publishing Company, Inc., 2008. (49) Fraley, C.; Raftery, A. E. How Many Clusters? Which Clustering Method? Answers via Model-Based Cluster Analysis. Comput. J. 1998, 41, 578−588. (50) Ward, J. H. Hierarchical Grouping to Optimize an Objective Function. J. Am. Stat. Assoc. 1963, 58, 236−244. (51) Scott, A. J.; Symons, M. J. Clustering Methods Based on Likelihood Ratio Criteria. Biometrics 1971, 27, 387−397. (52) Schwarz, G. Estimating the Dimension of a Model. Ann. Stat. 1978, 6, 461−464. (53) Wallace, C. S.; Boulton, D. M. An Information Measure for Classification. Comput. J. 1968, 11, 185−194. (54) Rissanen, J. Modeling by Shortest Data Description. Automatica 1978, 14, 465−471. (55) Lanterman, A. D.
Schwarz, Wallace, and Rissanen: Intertwining Themes in Theories of Model Selection. Int. Stat. Rev. 2001, 69, 185−212. (56) Rissanen, J. J. Stochastic Complexity and Modeling. Ann. Stat. 1986, 14, 1080−1100.
(57) Rissanen, J. J. Fisher Information and Stochastic Complexity. IEEE Trans. Inf. Theory 1996, 42, 40−47. (58) Myung, J. I.; Tang, Y.; Pitt, M. A. Methods in Enzymology; Elsevier, 2009; Vol. 454; pp 287−304. (59) Wierzbicki, A. P. The Use of Reference Objectives in Multiobjective Optimization. In Multiple Criteria Decision Making Theory and Application. Lecture Notes in Economics and Mathematical Systems; Fandel, G., Gal, T., Eds.; Springer: Berlin, Heidelberg, 1980; Vol. 177, pp 468−486. (60) Wierzbicki, A. P. A Mathematical Basis for Satisficing Decision Making. Math. Model. 1982, 3, 391−405. (61) McKinney, S. A.; Joo, C.; Ha, T. Analysis of Single-Molecule FRET Trajectories Using Hidden Markov Modeling. Biophys. J. 2006, 91, 1941−1951. (62) Watkins, L. P.; Yang, H. Information Bounds and Optimal Analysis of Dynamic Single Molecule Measurements. Biophys. J. 2004, 86, 4015−4029. (63) Watkins, L. P.; Chang, H.; Yang, H. Quantitative Single-Molecule Conformational Distributions: A Case Study with Poly-L-proline. J. Phys. Chem. A 2006, 110, 5191−5203. (64) Yang, H. Detection and Characterization of Dynamical Heterogeneity in an Event Series Using Wavelet Correlation. J. Chem. Phys. 2008, 129, 074701. (65) Haas, K. R.; Yang, H.; Chu, J. W. Fisher Information Metric for the Langevin Equation and Least Informative Models of Continuous Stochastic Dynamics. J. Chem. Phys. 2013, 139, 121931. (66) Haas, K. R.; Yang, H.; Chu, J. W. Expectation-Maximization of the Potential of Mean Force and Diffusion Coefficient in Langevin Dynamics from Single Molecule FRET Data Photon by Photon. J. Phys. Chem. B 2013, 117, 15591−15605. (67) Haas, K. R.; Yang, H.; Chu, J. W. Analysis of Trajectory Entropy for Continuous Stochastic Processes at Equilibrium. J. Phys. Chem. B 2014, 118, 8099−8107. (68) Haas, K. R.; Yang, H.; Chu, J. W. Trajectory Entropy of Continuous Stochastic Processes at Equilibrium. J. Phys. Chem. Lett. 2014, 5, 999−1003. (69) Li, C.-B.; Yang, H.; Komatsuzaki, T. Complex Network of Protein Conformational Fluctuations in Single-Molecule Time Series. Proc. Natl. Acad. Sci. U. S. A. 2008, 105, 536−541. (70) Wang, J.; Ferguson, A. L. Nonlinear Reconstruction of Single-Molecule Free-Energy Surfaces from Univariate Time Series. Phys. Rev. E 2016, 93, 032412.