Deep Learning for Nonadiabatic Excited-State Dynamics


Cite This: J. Phys. Chem. Lett. 2018, 9, 6702−6708

Wen-Kai Chen,† Xiang-Yang Liu,† Wei-Hai Fang,† Pavlo O. Dral,‡ and Ganglong Cui*,†

†Key Laboratory of Theoretical and Computational Photochemistry, Ministry of Education, College of Chemistry, Beijing Normal University, Beijing 100875, China
‡Max-Planck-Institut für Kohlenforschung, Kaiser-Wilhelm-Platz 1, 45470 Mülheim an der Ruhr, Germany


ABSTRACT: In this work we show that deep learning (DL) can be used for exploring complex and highly nonlinear multistate potential energy surfaces of polyatomic molecules and the related nonadiabatic dynamics. Our DL approach is based on deep neural networks (DNNs), which are used as accurate representations of the CASSCF ground- and excited-state potential energy surfaces (PESs) of CH2NH. Once geometries near the conical intersections are included in the training set, the DNN models accurately reproduce the excited-state topological structures, the photoisomerization paths, and, importantly, the conical intersections. We also demonstrate that nonadiabatic dynamics run with the DNN models gives results very close to those of dynamics run with the pure ab initio method. The present work should encourage further studies that use machine learning methods to explore excited-state potential energy surfaces and nonadiabatic dynamics of polyatomic molecules.

Machine learning (ML) methods have achieved significant success in many research fields of chemistry, physics, biology, and materials science, e.g., in chemical synthesis,1−3 property-targeted materials search,4−6 reducing errors in molecular properties calculated with approximate methods,7−10 parameter refinement of semiempirical methods,11 quantum wavepacket dynamics,12,13 force field development,14−16 free-energy calculations with quantum mechanics/molecular mechanics methods,17 fitting of atomic charges,18−20 design of density functionals,21−25 etc. In addition, ML techniques have been widely used to train on large data sets of atomic configurations and the related energies and forces to produce accurate, transferable, and efficient potential energy surfaces (PESs) for large-scale and long-time molecular dynamics.26−35 However, these high-dimensional ML PESs are mainly used in ground-state applications.

Although there are some studies in which coupled excited-state diabatic potential energy surfaces are constructed,36−39 ML methods are scarcely reported for mixed quantum−classical nonadiabatic dynamics simulations involving multiple complicated adiabatic potential energy surfaces. In two recent studies by Lan et al.40 and Dral et al.,41 two different variants of surface-hopping dynamics have been performed with kernel ridge regression (KRR)-based ML. In the former study, the calculation of nonadiabatic couplings has been avoided by using the Zhu−Nakamura method,42 and in the latter study, KRR43 has been used to predict these couplings for decoherence-corrected fewest-switches surface-hopping dynamics.44,45 In both studies KRR has been employed to reproduce the ground- and excited-state (S0 and S1) PESs of a reference method. Although it has been shown on a one-dimensional system that ML can in principle almost ideally reproduce reference nonadiabatic dynamics,41 both studies have revealed that practical ML dynamics of high-dimensional systems is particularly challenging near conical intersections. In these regions, KRR PESs have been insufficiently accurate in dynamics with the Zhu−Nakamura method,40 and ML has been unable to predict the very narrow nonadiabatic couplings of a high-dimensional system.41 Thus, in both cases ML has not been used in the vicinity of conical intersections; instead, calculations with the reference method have been performed,40,41 which may incur significant additional computational cost. Very efficient nonadiabatic dynamics should ideally be run exclusively with ML and avoid slow quantum chemical calculations altogether. An accurate description of conical intersections with ML is therefore pivotal, but to the best of our knowledge pure ML models have not yet been used for an efficient and accurate description of these intersection regions in high-dimensional systems. Moreover, until now only KRR-based ML techniques have been used for fitting complicated ground- and excited-state potential energy surfaces. It is known, however, that a very large number of training data points poses a big problem for KRR,41 which may become a significant issue for some applications. Thus, it is necessary to explore other ML methods, such as deep learning based on deep neural networks (DNNs),46 which can handle huge numbers of training points and have been shown to perform very well for complex and highly nonlinear potential energy surfaces, as expected near conical intersections.

Received: October 1, 2018
Accepted: November 7, 2018
Published: November 7, 2018

DOI: 10.1021/acs.jpclett.8b03026

In this work, we use a recently introduced deep learning (DL) method29,47−49 combined with the Zhu−Nakamura method42 to perform pure ML nonadiabatic dynamics of CH2NH. We explore the topological structures of the S0 and S1 potential energy surfaces and of the S1/S0 conical intersections and analyze the photoisomerization pathway of CH2NH. To this end, we train feed-forward DNNs on both the ground- and excited-state PESs of CH2NH; the reference calculations for training and testing the DNN models are performed at the CASSCF level of theory. Our DNN models are similar to widely used neural network (NN) models in which the potential energy of a polyatomic molecule is expressed as a sum $E = \sum_i E_i$ of the energies of the constituent atoms $i$ with coordinates $\mathbf{R}_i$ (see Figure 1):26,27,29,47−51

$$E_i = \mathcal{L}_i^{\mathrm{out}}\big(\mathcal{L}_i^{n_h}\big(\mathcal{L}_i^{n_h-1}\big(\cdots\mathcal{L}_i^{1}\big(\{d_i^0\}\big)\big)\big)\big) \quad (1)$$

where $\mathcal{L}_i^h(d_i^{h-1}) = \psi(W_i^h d_i^{h-1} + b_i^h)$ is a combination of a linear and a nonlinear transformation of the vector $d_i^{h-1} \in \mathbb{R}^{n_{h-1}}$ generated in the previous layer $h-1$ with $n_{h-1}$ nodes. The input layer takes the vector $d_i^0 = D_{ij}(\mathbf{R}_i)$, whose construction is discussed below. The weight matrices $W_i^h \in \mathbb{R}^{n_h \times n_{h-1}}$ and the bias vectors $b_i^h \in \mathbb{R}^{n_h}$ are the parameters to be optimized. The nonlinear activation function $\psi$ is taken to be the hyperbolic tangent, tanh. In the final output layer, only the linear transformation $\mathcal{L}_i^{\mathrm{out}}(d_i^{n_h}) = W_i^{\mathrm{out}} d_i^{n_h} + b_i^{\mathrm{out}}$ is applied, to avoid any restriction on the range of possible DNN output values. In this work we have used seven hidden layers with 240, 960, 480, 240, 60, 30, and 10 nodes; this combination is found to produce very accurate results for the studied system (see the Supporting Information).
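To make eq 1 concrete, the following is a minimal PyTorch sketch of one per-atom subnetwork with exactly these layer widths: tanh hidden layers and a purely linear output, with the molecular energy obtained as the sum of the atomic outputs. The names (AtomicSubnet, total_energy) and the descriptor dimension n_in are our illustrative assumptions, not the authors' DeePMD-kit47 implementation.

```python
import torch
import torch.nn as nn

class AtomicSubnet(nn.Module):
    """Per-atom network of eq 1: tanh hidden layers, linear output.
    A sketch; the paper's actual model is built with DeePMD-kit."""

    def __init__(self, n_in, hidden=(240, 960, 480, 240, 60, 30, 10)):
        super().__init__()
        layers, prev = [], n_in
        for n in hidden:
            layers += [nn.Linear(prev, n), nn.Tanh()]   # psi = tanh
            prev = n
        layers.append(nn.Linear(prev, 1))               # L_out: linear only
        self.net = nn.Sequential(*layers)

    def forward(self, d0):
        # d0: (n_atoms, n_in) stack of descriptors D_ij, one row per atom i
        return self.net(d0).squeeze(-1)                 # atomic energies E_i

def total_energy(subnet, d0):
    """E = sum_i E_i over the constituent atoms."""
    return subnet(d0).sum()
```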

Figure 1. Schematic model of deep neural networks (DNNs) with four hidden layers and three atoms. The atomic coordinates $\mathbf{R}_i$ are first transformed into a set of DNN input vectors $D_{ij}$, as proposed recently by Weinan E and co-workers.29,47−49 These descriptors are the input for the DNN training. See the text for details.

The accuracy of ML simulations depends on the choice of the input vector, which in chemistry is typically a representation of a molecule (a molecular descriptor) or of an atomic environment. It should respect several requirements, such as translational, rotational, and permutational invariance, and significant effort has therefore been put into designing good ML input vectors for atomistic simulations.26,27,50−54 In this work, we use the local coordinate information of atom $i$ as the DNN input vector $D_{ij}$, in a form recently suggested by Weinan E and co-workers that respects the aforementioned requirements.29,47−49 For an atom $i$, the input vector $D_{ij}$ consists of the inverse internuclear distances $\{1/R_{ij}\}$ and the angular information $\{x_{ij}/R_{ij}^2,\ y_{ij}/R_{ij}^2,\ z_{ij}/R_{ij}^2\}$ for each neighboring atom $j$ within a cutoff of $R_c = 6$ Å (see the Supporting Information). Neighboring atoms are sorted first by element type and then by their distance to atom $i$.
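A compact NumPy sketch of this input vector for a single atom follows; the function name and array layout are our own choices for illustration (the published descriptor is implemented in DeePMD-kit47 and differs in engineering details).

```python
import numpy as np

def input_vector(coords, elements, i, r_cut=6.0):
    """Build D_ij for atom i: {1/R_ij, x_ij/R_ij^2, y_ij/R_ij^2, z_ij/R_ij^2}
    for every neighbor j within r_cut, with neighbors sorted first by
    element type and then by distance to atom i (a sketch of the scheme
    described in the text)."""
    entries = []
    for j, rj in enumerate(coords):
        if j == i:
            continue
        rij = rj - coords[i]                  # relative position (x, y, z)
        d = float(np.linalg.norm(rij))
        if d < r_cut:
            entries.append((elements[j], d, [1.0 / d] + list(rij / d**2)))
    entries.sort(key=lambda e: (e[0], e[1]))  # element type, then distance
    return np.concatenate([e[2] for e in entries])
```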

The parameters of the DNNs for an atom $i$, i.e., $\{W_i^1, b_i^1, W_i^2, b_i^2, \ldots, W_i^{\mathrm{out}}, b_i^{\mathrm{out}}\}$, are optimized iteratively to minimize the loss function

$$L(p_e, p_f) = p_e\,\Delta E^2 + \frac{p_f}{3N} \sum_i |\Delta F_i|^2 \quad (2)$$

where $\Delta E$ and $\Delta F_i$ are the energy difference per atom and the force difference on atom $i$ between the DNN prediction and the ab initio training data, $N$ is the number of atoms, and $p_e$ and $p_f$ are two tunable prefactors. The DNN forces are derived directly from the analytical potential energy expression above; thus, in principle, our DNN model is energy-conserving.

Two individual DNN models are trained independently for the S1 and S0 PESs of CH2NH on CASSCF data calculated for the same set of molecular coordinates; see the Supporting Information for more details on the training procedure. The final DNN models are trained on 90 000 ab initio data points prepared using molecular dynamics simulations (see the Supporting Information). Special emphasis has been placed on including geometries near the conical intersections in the training set to ensure proper treatment of these regions during the DNN nonadiabatic dynamics. The root-mean-square deviations (RMSDs) of the energies and forces for the training data are sufficiently small: 1.09 × 10−2 (3.36 × 10−3) eV and 5.56 × 10−2 (1.66 × 10−2) eV/Å for the S0 (S1) DNN model, respectively. To check the reliability of our DNN models, an independent validation set of 26 522 data points is also used; as can be seen from Figure 2 and Table 1, its RMSDs are also small and close to those of the training set. These comparisons demonstrate that our DNN models reproduce the ab initio data very accurately.
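The RMSDs quoted here and collected in Table 1 below follow the standard definition; a minimal helper for reproducing such numbers from arrays of predicted and reference values:

```python
import numpy as np

def rmsd(pred, ref):
    """Root-mean-square deviation between predicted and reference values."""
    pred, ref = np.asarray(pred), np.asarray(ref)
    return float(np.sqrt(np.mean((pred - ref) ** 2)))
```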

Table 1. RMSD Errors in the Energies (eV) and Forces (eV/Å) Calculated with Our Trained DNNs for the Training and Validation Sets

state   energy, training   energy, validation   force, training   force, validation
S0      1.09 × 10−2        1.06 × 10−2          5.56 × 10−2       5.74 × 10−2
S1      3.36 × 10−3        3.38 × 10−3          1.66 × 10−2       2.35 × 10−2


Figure 2. Comparison of energies (top) and forces (bottom) of both S0 (left) and S1 (right) states calculated with DNN and CASSCF for the validation data. Those for the training data are in the Supporting Information.

Figure 3. Two-dimensional PESs with respect to the two branching-space vectors g and h near the S1/S0 conical intersection calculated with the CASSCF method (panels a and b for S0 and S1, respectively) and with our DNN models (panels c and d). Also shown are the S1 relaxed minimum-energy reaction path (panel e) and the three-dimensional PESs near the S1/S0 conical intersection (panel f). The color codes in the left panels indicate the potential energies relative to the S0 minimum.

The topological structures of the S0 and S1 PESs and the related S1/S0 conical intersections play a crucial role in the excited-state relaxation dynamics, so it is indispensable to check the accuracy of our DNN models for these structures. According to previous studies,55,56 the main excited-state relaxation pathway of CH2NH is the photoisomerization along the central dihedral angle. We have therefore first checked the reliability of our DNN models for this isomerization process. As shown in the top-right panel of Figure 3, the S0 and S1 energy profiles calculated from the trained DNN models fully overlap with those calculated from the ab initio data (dashed vs solid lines). In addition, we have checked the two-dimensional topological structures near the S1/S0 conical intersection. The CASSCF S0 and S1 PESs with respect to the two branching-space vectors g and h are correctly and accurately reproduced by our DNN models, as indicated by the two- and three-dimensional potential energy surfaces in the bottom panels of Figure 3. The good performance of our DNN models for the intersection region between the S0 and S1 PESs makes them reliable for the following nonadiabatic dynamics simulations.
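For orientation, surface hops in gap-based schemes of this kind are decided from the adiabatic S1−S0 gap along the trajectory rather than from explicit nonadiabatic couplings. The sketch below uses a simpler Landau−Zener-type formula evaluated at a local gap minimum as a stand-in; it is not the Zhu−Nakamura expression actually used in this work (see the Supporting Information for that), and the function name and unit conventions are our own assumptions.

```python
import numpy as np

def lz_hop_probability(gaps, dt):
    """Landau-Zener-type hopping probability from the adiabatic S1-S0
    gap alone, evaluated at a local gap minimum. `gaps` is a short time
    series of gaps (hartree) and `dt` the time step (atomic units, so
    hbar = 1). A simplified stand-in, not the Zhu-Nakamura formula."""
    k = int(np.argmin(gaps))
    if k == 0 or k == len(gaps) - 1:
        return 0.0                       # minimum not bracketed: no hop
    # second time derivative of the gap by central finite differences
    d2 = (gaps[k - 1] - 2.0 * gaps[k] + gaps[k + 1]) / dt**2
    if d2 <= 0.0:
        return 0.0
    return float(np.exp(-0.5 * np.pi * np.sqrt(gaps[k] ** 3 / d2)))
```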


In the on-the-fly nonadiabatic dynamics, 200 trajectories are propagated for 400 fs with a 0.5 fs time step. For a fair comparison, the same initial coordinates and velocities are used for the ab initio and DNN surface-hopping dynamics simulations (see the Supporting Information for details). Because the S1−S0 energy gap is an important physical quantity for determining whether the system hops from S1 to S0, we have monitored it. Figure 4 shows the distribution of the S1−S0 energy gaps of the 200 trajectories in the Franck−Condon region, i.e., the vertical excitation energies. Our trained DNN models closely reproduce the ab initio vertical excitation energies: in both cases the distributions have the same width, ranging from 4.2 to 5.1 eV, and the same peak at 4.8 eV.

Figure 4. Distribution of the S1−S0 energy gaps (eV) of the 200 trajectories at the starting points calculated with CASSCF and with our DNN models.

Even more importantly, our DNN models give a distribution of the S1−S0 energy gaps at all hopping points that is very close to the corresponding distribution from the reference ab initio dynamics simulation (Figure 5). The distributions from the DNN and CASSCF simulations have almost the same width and peak. This provides additional evidence for the good performance of our DNN models in the regions in the vicinity of the conical intersection.

Figure 5. Distribution of the S1−S0 energy gaps at all hopping points in the ab initio and DNN nonadiabatic dynamics simulations.

Figure 6 shows the time-dependent S0 and S1 state populations averaged over the 200 trajectories. Overall, the time evolution of these populations is very similar for the CASSCF and DNN dynamics. The small deviations of the DNN populations from the ab initio populations are the inevitable result of the non-negligible, albeit tiny, deviations between the DNN and ab initio energies and forces. Nevertheless, the DNN models still give accurate S0 and S1 state populations. First, the S1 → S0 hoppings start at the same time, 55 fs. Second, at the end of the 400 fs simulation time, the ab initio and DNN simulations have the same number of trajectories still surviving in the S1 state (three trajectories). Third, fitting these curves with the single-exponential function y = y0 + A exp(−(x − x0)/t) leads to very close S1 excited-state lifetimes (182 vs 191 fs).

Figure 6. Time-dependent S1 and S0 state populations averaged over 200 trajectories from the CASSCF and DNN dynamics. The fitted lines are also shown.
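The lifetime fit described above is an ordinary nonlinear least-squares fit of the S1 population; a self-contained sketch with synthetic placeholder data (not the published trajectory counts) follows.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(x, y0, A, x0, t):
    """Single-exponential model y = y0 + A*exp(-(x - x0)/t)."""
    return y0 + A * np.exp(-(x - x0) / t)

time = np.arange(0.0, 400.5, 0.5)                  # fs, 0.5 fs time step
s1_pop = np.where(time < 55.0, 1.0,                # no hops before ~55 fs
                  0.02 + 0.98 * np.exp(-(time - 55.0) / 185.0))

mask = time >= 55.0                                # fit the decaying part
popt, _ = curve_fit(decay, time[mask], s1_pop[mask],
                    p0=(0.0, 1.0, 55.0, 150.0))
print(f"fitted S1 lifetime t = {popt[3]:.0f} fs")  # ~185 fs for this data
```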
In addition, the distribution of the S1 → S0 hopping points from the DNN nonadiabatic dynamics is similar to that from the ab initio dynamics (see the top panels of Figure 7). Initially, the bending angle ∠C1-N4-H5 and the dihedral angle ∠H2-C1-N4-H5 cluster around ca. 120° and 0°, respectively. At the hopping points, the centroid of the distribution of the dihedral angle ∠H2-C1-N4-H5 moves to ca. 90° in both the DNN and CASSCF dynamics simulations. In contrast, the centroid of the distribution of the bending angle ∠C1-N4-H5 varies little, apart from a clear spreading of the distribution. Moreover, a frequency analysis of the dihedral angle ∠H2-C1-N4-H5 at the hopping points (Figure S3) shows that the CASSCF and DNN dynamics simulations give very close results. These observations provide further important evidence that our DNN models treat the regions near the conical intersections correctly. Finally, the distribution of the dihedral angle ∠H2-C1-N4-H5 at the end of the 400 fs simulation time is also similar in the ab initio and DNN dynamics (see the bottom panels of Figure 7).

Figure 7. Distributions of the bending ∠C1-N4-H5 and dihedral ∠H2-C1-N4-H5 angles at the S1 → S0 hopping points (top) and of the dihedral angle ∠H2-C1-N4-H5 at the end of the 400 fs simulation time (bottom) from the ab initio (left) and DNN (right) dynamics.
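The angle analyses behind Figure 7 reduce to standard internal-coordinate formulas applied to trajectory snapshots; a sketch with our own helper names (independent of the authors' analysis scripts) is given here.

```python
import numpy as np

def bend_angle(p0, p1, p2):
    """Bending angle (deg) at p1, e.g. C1-N4-H5."""
    u, v = p0 - p1, p2 - p1
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))

def dihedral_angle(p0, p1, p2, p3):
    """Dihedral angle (deg) for p0-p1-p2-p3, e.g. H2-C1-N4-H5."""
    b0, b1, b2 = p1 - p0, p2 - p1, p3 - p2
    b1 = b1 / np.linalg.norm(b1)
    v = b0 - np.dot(b0, b1) * b1     # components orthogonal to b1
    w = b2 - np.dot(b2, b1) * b1
    x, y = np.dot(v, w), np.dot(np.cross(b1, v), w)
    return float(np.degrees(np.arctan2(y, x)))
```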

In conclusion, we have employed deep learning based on two individual deep neural networks (DNNs) for an accurate representation of the S1 and S0 PESs of the polyatomic molecule CH2NH. The training set consists of 90 000 geometries, and CASSCF has been used to calculate the reference energies and forces. To ensure proper treatment of the conical intersections, training points near these regions have been included. The RMSD errors of the energies and forces for both the S1 and S0 DNNs are smaller than 0.25 [0.26] kcal/mol and 1.33 [1.29] kcal/mol/Å for the training and validation data, respectively. We have demonstrated that the DNN models can very accurately reproduce the topological structures of both the S1 and S0 PESs, particularly those involved in the photoisomerization pathway. Most importantly, the DNN models also correctly and accurately give the S1/S0 conical intersection structure and energy, which play a crucial role in nonadiabatic dynamics simulations. Furthermore, we have shown that the DNN models are accurate enough for pure ML nonadiabatic dynamics simulations without the need for additional ab initio calculations during the dynamics. The ab initio and DNN nonadiabatic surface-hopping simulations give very similar time-dependent S1 and S0 state populations, hopping-point and product distributions, and almost the same S1−S0 energy gaps in both the Franck−Condon and hopping regions. This work is an important step toward using machine learning methods in studies of complicated photophysical processes, photochemical reactions, and nonadiabatic dynamics of polyatomic molecules. Moreover, because ML-based potential energy surfaces are much more efficient to evaluate than direct ab initio dynamics, ML-based dynamics simulations could be used to explore long-time physical and chemical events and to improve statistical convergence.



ASSOCIATED CONTENT

Supporting Information
The Supporting Information is available free of charge on the ACS Publications website at DOI: 10.1021/acs.jpclett.8b03026: input vector, deep neural network, data preparation, training details, Zhu−Nakamura nonadiabatic dynamics method, and additional results (PDF).

AUTHOR INFORMATION

Corresponding Author
*E-mail: [email protected]

ORCID
Wei-Hai Fang: 0000-0002-1668-465X
Pavlo O. Dral: 0000-0002-2975-9876
Ganglong Cui: 0000-0002-9752-1659

Notes
The authors declare no competing financial interest.

ACKNOWLEDGMENTS
This work has been supported by NSFC Grants 21522302 (G.C.) and 21520102005 (G.C. and W.-H.F.).

REFERENCES
(1) Segler, M. H. S.; Preuss, M.; Waller, M. P. Planning Chemical Syntheses with Deep Neural Networks and Symbolic AI. Nature 2018, 555, 604−610.
(2) Granda, J. M.; Donina, L.; Dragone, V.; Long, D.-L.; Cronin, L. Controlling an Organic Synthesis Robot with Machine Learning to Search for New Reactivity. Nature 2018, 559, 377−381.
(3) Raccuglia, P.; Elbert, K. C.; Adler, P. D. F.; Falk, C.; Wenny, M. B.; Mollo, A.; Zeller, M.; Friedler, S. A.; Schrier, J.; Norquist, A. J. Machine-Learning-Assisted Materials Discovery Using Failed Experiments. Nature 2016, 533, 73−76.
(4) Sanchez-Lengeling, B.; Aspuru-Guzik, A. Inverse Molecular Design Using Machine Learning: Generative Models for Matter Engineering. Science 2018, 361, 360−365.
(5) Lu, S.; Zhou, Q.; Ouyang, Y.; Guo, Y.; Li, Q.; Wang, J. Accelerated Discovery of Stable Lead-Free Hybrid Organic-Inorganic Perovskites via Machine Learning. Nat. Commun. 2018, 9, 3405.
(6) Gómez-Bombarelli, R.; Aguilera-Iparraguirre, J.; Hirzel, T. D.; Duvenaud, D.; Maclaurin, D.; Blood-Forsythe, M. A.; Chae, H. S.; Einzinger, M.; Ha, D.-G.; Wu, T.; et al. Design of Efficient Molecular Organic Light-Emitting Diodes by a High-Throughput Virtual Screening and Experimental Approach. Nat. Mater. 2016, 15, 1120−1127.
(7) Hu, L. H.; Wang, X. J.; Wong, L. H.; Chen, G. H. Combined First-Principles Calculation and Neural-Network Correction Approach for Heat of Formation. J. Chem. Phys. 2003, 119, 11501−11507.
(8) Duan, X.-M.; Li, Z.-H.; Song, G.-L.; Wang, W.-N.; Chen, G.-H.; Fan, K.-N. Neural Network Correction for Heats of Formation with a Larger Experimental Training Set and New Descriptors. Chem. Phys. Lett. 2005, 410, 125−130.


(9) Sun, J.; Wu, J.; Song, T.; Hu, L. H.; Shan, K. L.; Chen, G. H. Alternative Approach to Chemical Accuracy: A Neural-Networks-Based First-Principles Method for Heat of Formation of Molecules Made of H, C, N, O, F, S, and Cl. J. Phys. Chem. A 2014, 118, 9120−9131.
(10) Ramakrishnan, R.; Dral, P. O.; Rupp, M.; von Lilienfeld, O. A. Big Data Meets Quantum Chemistry Approximations: The Δ-Machine Learning Approach. J. Chem. Theory Comput. 2015, 11, 2087−2096.
(11) Dral, P. O.; von Lilienfeld, O. A.; Thiel, W. Machine Learning of Parameters for Accurate Semiempirical Quantum Chemical Calculations. J. Chem. Theory Comput. 2015, 11, 2120−2125.
(12) Shen, X.; Chen, J.; Zhang, Z.; Shao, K.; Zhang, D. H. Methane Dissociation on Ni(111): A Fifteen-Dimensional Potential Energy Surface Using Neural Network Method. J. Chem. Phys. 2015, 143, 144701.
(13) Richings, G. W.; Habershon, S. Direct Quantum Dynamics Using Grid-Based Wave Function Propagation and Machine-Learned Potential Energy Surfaces. J. Chem. Theory Comput. 2017, 13, 4012−4024.
(14) Yao, K.; Herr, J. E.; Parkhill, J. The Many-Body Expansion Combined with Neural Networks. J. Chem. Phys. 2017, 146, 014106.
(15) Wang, H.; Yang, W. Force Field for Water Based on Neural Network. J. Phys. Chem. Lett. 2018, 9, 3232−3240.
(16) Li, Y.; Li, H.; Pickard, F. C.; Narayanan, B.; Sen, F. G.; Chan, M. K. Y.; Sankaranarayanan, S. K. R. S.; Brooks, B. R.; Roux, B. Machine Learning Force Field Parameters from Ab Initio Data. J. Chem. Theory Comput. 2017, 13, 4492−4503.
(17) Shen, L.; Wu, J.; Yang, W. Multiscale Quantum Mechanics/Molecular Mechanics Simulations with Neural Networks. J. Chem. Theory Comput. 2016, 12, 4934−4946.
(18) Bereau, T.; Andrienko, D.; von Lilienfeld, O. A. Transferable Atomic Multipole Machine Learning Models for Small Organic Molecules. J. Chem. Theory Comput. 2015, 11, 3225−3233.
(19) Nebgen, B.; Lubbers, N.; Smith, J. S.; Sifain, A. E.; Lokhov, A.; Isayev, O.; Roitberg, A. E.; Barros, K.; Tretiak, S. Transferable Dynamic Molecular Charge Assignment Using Deep Neural Networks. J. Chem. Theory Comput. 2018, 14, 4687−4698.
(20) Sifain, A. E.; Lubbers, N.; Nebgen, B. T.; Smith, J. S.; Lokhov, A. Y.; Isayev, O.; Roitberg, A. E.; Barros, K.; Tretiak, S. Discovering a Transferable Charge Assignment Model Using Machine Learning. J. Phys. Chem. Lett. 2018, 9, 4495−4501.
(21) Brockherde, F.; Vogt, L.; Li, L.; Tuckerman, M. E.; Burke, K.; Müller, K.-R. Bypassing the Kohn-Sham Equations with Machine Learning. Nat. Commun. 2017, 8, 872.
(22) Snyder, J. C.; Rupp, M.; Hansen, K.; Müller, K.-R.; Burke, K. Finding Density Functionals with Machine Learning. Phys. Rev. Lett. 2012, 108, 253002.
(23) Zheng, X.; Hu, L. H.; Wang, X. J.; Chen, G. H. A Generalized Exchange-Correlation Functional: The Neural-Networks Approach. Chem. Phys. Lett. 2004, 390, 186−192.
(24) Li, H.; Shi, L. L.; Zhang, M.; Su, Z.; Wang, X. J.; Hu, L. H.; Chen, G. H. Improving the Accuracy of Density-Functional Theory Calculation: The Genetic Algorithm and Neural Network Approach. J. Chem. Phys. 2007, 126, 144101.
(25) Liu, Q.; Wang, J.; Du, P.; Hu, L.; Zheng, X.; Chen, G. Improving the Performance of Long-Range-Corrected Exchange-Correlation Functional with an Embedded Neural Network. J. Phys. Chem. A 2017, 121, 7273−7281.
(26) Behler, J.; Parrinello, M. Generalized Neural-Network Representation of High-Dimensional Potential-Energy Surfaces. Phys. Rev. Lett. 2007, 98, 146401.
(27) Behler, J. First Principles Neural Network Potentials for Reactive Simulations of Large Molecular and Condensed Systems. Angew. Chem., Int. Ed. 2017, 56, 12828−12840.

(28) Chmiela, S.; Tkatchenko, A.; Sauceda, H. E.; Poltavsky, I.; Schütt, K. T.; Müller, K.-R. Machine Learning of Accurate Energy-Conserving Molecular Force Fields. Sci. Adv. 2017, 3, No. e1603015.
(29) Zhang, L.; Han, J.; Wang, H.; Car, R.; E, W. Deep Potential Molecular Dynamics: A Scalable Model with the Accuracy of Quantum Mechanics. Phys. Rev. Lett. 2018, 120, 143001.
(30) Bartók, A. P.; Payne, M. C.; Kondor, R.; Csányi, G. Gaussian Approximation Potentials: The Accuracy of Quantum Mechanics, without the Electrons. Phys. Rev. Lett. 2010, 104, 136403.
(31) Kolb, B.; Luo, X.; Zhou, X.; Jiang, B.; Guo, H. High-Dimensional Atomistic Neural Network Potentials for Molecule-Surface Interactions: HCl Scattering from Au(111). J. Phys. Chem. Lett. 2017, 8, 666−672.
(32) Liu, Q.; Zhou, X.; Zhou, L.; Zhang, Y.; Luo, X.; Guo, H.; Jiang, B. Constructing High-Dimensional Neural Network Potential Energy Surfaces for Gas-Surface Scattering and Reactions. J. Phys. Chem. C 2018, 122, 1761−1769.
(33) Huang, S.-D.; Shang, C.; Zhang, X.-J.; Liu, Z.-P. Material Discovery by Combining Stochastic Surface Walking Global Optimization with a Neural Network. Chem. Sci. 2017, 8, 6327−6337.
(34) Khorshidi, A.; Peterson, A. A. AMP: A Modular Approach to Machine Learning in Atomistic Simulations. Comput. Phys. Commun. 2016, 207, 310−324.
(35) Jiang, B.; Li, J.; Guo, H. Potential Energy Surfaces from High Fidelity Fitting of ab initio Points: The Permutation Invariant Polynomial - Neural Network Approach. Int. Rev. Phys. Chem. 2016, 35, 479−506.
(36) Zhu, X.; Malbon, C. L.; Yarkony, D. R. An Improved Quasi-Diabatic Representation of the 1, 2, 3 1A Coupled Adiabatic Potential Energy Surfaces of Phenol in the Full 33 Internal Coordinates. J. Chem. Phys. 2016, 144, 124312.
(37) Lenzen, T.; Manthe, U. Neural Network Based Coupled Diabatic Potential Energy Surfaces for Reactive Scattering. J. Chem. Phys. 2017, 147, 084105.
(38) Guan, Y.; Fu, B.; Zhang, D. H. Construction of Diabatic Energy Surfaces for LiFH with Artificial Neural Networks. J. Chem. Phys. 2017, 147, 224307.
(39) Xie, C.; Zhu, X.; Yarkony, D. R.; Guo, H. Permutation Invariant Polynomial Neural Network Approach to Fitting Potential Energy Surfaces. IV. Coupled Diabatic Potential Energy Matrices. J. Chem. Phys. 2018, 149, 144107.
(40) Hu, D.; Xie, Y.; Li, X.; Li, L.; Lan, Z. Inclusion of Machine Learning Kernel Ridge Regression Potential Energy Surfaces in On-the-Fly Nonadiabatic Molecular Dynamics Simulation. J. Phys. Chem. Lett. 2018, 9, 2725−2732.
(41) Dral, P. O.; Barbatti, M.; Thiel, W. Nonadiabatic Excited-State Dynamics with Machine Learning. J. Phys. Chem. Lett. 2018, 9, 5660−5663.
(42) Zhu, C.; Nakamura, H. The Two-State Linear Curve Crossing Problems Revisited. III. Analytical Approximations for Stokes Constant and Scattering Matrix: Nonadiabatic Tunneling Case. J. Chem. Phys. 1993, 98, 6208−6222.
(43) Murphy, K. P. Machine Learning: A Probabilistic Perspective; MIT Press: Cambridge, MA, 2012.
(44) Tully, J. C.; Preston, R. K. Trajectory Surface Hopping Approach to Nonadiabatic Molecular Collisions: The Reaction of H+ with D2. J. Chem. Phys. 1971, 55, 562−572.
(45) Hammes-Schiffer, S.; Tully, J. C. Proton Transfer in Solution: Molecular Dynamics with Quantum Transitions. J. Chem. Phys. 1994, 101, 4657−4667.
(46) Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, 2016.
(47) Wang, H.; Zhang, L.; Han, J.; E, W. DeePMD-kit: A Deep Learning Package for Many-Body Potential Energy Representation and Molecular Dynamics. Comput. Phys. Commun. 2018, 228, 178−184.
(48) Zhang, L.; Han, J.; Wang, H.; Car, R.; E, W. DeePCG: Constructing Coarse-Grained Models via Deep Neural Networks. J. Chem. Phys. 2018, 149, 034101.


(49) Han, J.; Zhang, L.; Car, R. Deep Potential: A General Representation of a Many-Body Potential Energy Surface. Commun. Comput. Phys. 2018, 23, 11.
(50) Behler, J. Neural Network Potential-Energy Surfaces in Chemistry: A Tool for Large-Scale Simulations. Phys. Chem. Chem. Phys. 2011, 13, 17930−17955.
(51) Behler, J. Constructing High-Dimensional Neural Network Potentials: A Tutorial Review. Int. J. Quantum Chem. 2015, 115, 1032−1050.
(52) Rupp, M.; Tkatchenko, A.; Müller, K.-R.; von Lilienfeld, O. A. Fast and Accurate Modeling of Molecular Atomization Energies with Machine Learning. Phys. Rev. Lett. 2012, 108, 058301.
(53) Rupp, M.; Ramakrishnan, R.; von Lilienfeld, O. A. Machine Learning for Quantum Mechanical Properties of Atoms in Molecules. J. Phys. Chem. Lett. 2015, 6, 3309−3313.
(54) Häse, F.; Valleau, S.; Pyzer-Knapp, E.; Aspuru-Guzik, A. Machine Learning Exciton Dynamics. Chem. Sci. 2016, 7, 5139−5147.
(55) Frank, I.; Hutter, J.; Marx, D.; Parrinello, M. Molecular Dynamics in Low-Spin Excited States. J. Chem. Phys. 1998, 108, 4060−4069.
(56) Fabiano, E.; Keal, T. W.; Thiel, W. Implementation of Surface Hopping Molecular Dynamics Using Semiempirical Methods. Chem. Phys. 2008, 349, 334−347.
