Three-Dimensional "Pople Diagram"

Martin Karplus
Department of Chemistry, Harvard University, Cambridge, Massachusetts 02138
(Received: February 20, 1990)

At the Symposium on Atomic and Molecular Quantum Theory held on Sanibel Island, FL, in January 1965, John Pople¹ introduced what has come to be called the "hyperbola of quantum chemistry". It illustrates the inverse relationship between the sophistication of a calculational method and the number of electrons in a molecule that can be studied by that method. John was particularly concerned with the increased divergence between the two cultures of quantum chemistry first described by Charles Coulson at the Conference on Molecular Quantum Mechanics held at the University of Colorado in June 1959.² Coulson had pointed out that quantum chemists fall into two classes: group I, composed of scientists concerned with highly accurate ab initio calculations on small systems, and group II, composed of scientists concerned with understanding chemical phenomena by use of less sophisticated (e.g., semiempirical) methods that can be applied to larger molecules.

It is now nearly 25 years since John introduced his diagram, which was frequently shown at this meeting celebrating the 40th anniversary of his first paper.³ Over the intervening 40 years, there have been great advances both in the power of computers (from electric calculators to supercomputers) and in the methodology of quantum chemical calculations. An estimate made at the meeting concluded that the speed of computers has increased by about 7 orders of magnitude and that the intrinsic speed of quantum chemistry codes has increased by about 3 orders of magnitude. This increase in computing power by a factor of about 10¹⁰ stands in striking contrast to the modest increase in the size of the systems that can be treated by quantum calculations at the highest level of accuracy. They have progressed by at most 1 order of magnitude, from 2 to 20 electrons.
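The arithmetic behind this contrast is worth spelling out: when the cost of a high-accuracy calculation grows as a steep power of the number of electrons N, a 10¹⁰ speedup buys only a modest gain in N. The short sketch below is purely illustrative; the scaling exponents are assumptions chosen to show the effect, not figures from the text.

```python
# Illustrative arithmetic: how a gain in computing power translates into
# the maximum system size treatable at fixed cost, assuming the cost of
# a high-accuracy calculation scales as N**p for N electrons.  The
# exponents p are assumptions for illustration, not values from the text.

def size_gain(speedup: float, p: float) -> float:
    """Factor by which treatable N grows for a given speedup, if cost ~ N**p."""
    return speedup ** (1.0 / p)

speedup = 1e10          # ~7 orders (hardware) + ~3 orders (codes)
for p in (4, 5, 6, 7):  # plausible cost-scaling exponents (assumed)
    print(f"cost ~ N^{p}: treatable system size grows by {size_gain(speedup, p):.0f}x")

# With p = 7, the gain is about 27x, i.e., roughly one order of
# magnitude -- consistent with "from 2 to 20 electrons" above.
```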
Although correct, this view of quantum chemistry is an overly pessimistic evaluation of the progress that actually has taken place in the 40 years under discussion. What is more important than the quantitative limitations that are still present is that there has been a qualitative change in quantum chemistry. One essential aspect of this change can be characterized as a partial merging of groups I and II. The existence of the two groups in the 1960s and 1970s was not just a matter of taste for the scientists involved. For many years it was almost impossible to do ab initio calculations that answered questions of chemical interest. If one wanted to contribute to chemistry, one had to resort to partly empirical methods and use qualitative insights to solve chemical problems. This is no longer the case. It is now possible to do ab initio calculations that are of sufficient accuracy to answer chemically important questions for reasonably complex molecules and for potential surfaces describing their reactions. Our chairman and other speakers at this meeting have been at the forefront of such developments. The growth of the utility of ab initio calculations in solving chemical problems is attested to by the number of articles appearing in the Journal of the American Chemical Society, in which theoretical papers were almost nonexistent in the 1960s.

Should we be satisfied that ab initio molecular quantum mechanics is an integral part of chemistry and that it can deal in a satisfactory manner with molecules of intermediate complexity? There are two caveats to such complacency. The first is that the extension of group I studies into the arena of group II problems has come at a significant cost. Group I calculations are much more expensive than those of group II, but that may not be very important because the needed computer time is available, largely through the NSF Supercomputer Initiative.


(1) Pople, J. A. J. Chem. Phys. 1965, 43, S229.
(2) Coulson, C. A. Rev. Mod. Phys. 1960, 32, 170.
(3) Pople, J. A.; Lennard-Jones, J. E. Proc. R. Soc. London 1950, A202, 166.

What is unfortunate is that the growth and success of group I applications has not always led to the depth of understanding provided by more qualitative group II analyses. Configuration interaction or perturbation theory calculations with large basis sets make it possible to obtain results that can be compared with experimental data and, at times, can demonstrate that the interpretations of the data are incorrect. However, the much greater complexity of the calculations makes it more difficult to provide a simple description of the meaning of the results. One often-heard response to such a complaint is that the correct interpretation is complex and that an interpretation in "chemical" terms is not possible. I do not believe this to be true, though it is not completely false. Rather, many of the group I quantum chemists do not seem to want to put in the effort required to interpret what they are doing. This attitude of group I worried Charles Coulson in 1959 and is even more pertinent today. Not everyone has Richard Feynman's gift at reducing complexity to simple concepts without sacrificing accuracy.⁴ Nevertheless, one can hope that more quantum chemists will use group I ab initio methods that now give useful numerical answers and still try to relate the results to intuitive models, as did members of group II.

A second concern is that the demands on theoretical chemistry have expanded as the calculational methods have improved. It is no longer sufficient to think of understanding molecules with 5-10 atoms and 20-80 electrons. Many problems require the study of systems composed of 10²-10⁴ atoms with 10³-10⁵ electrons. Examples arise in the analysis of solvation, of reactions in solution, of complex solids, and of biomolecules. Such systems are beyond what one can treat today (or tomorrow) with the necessary accuracy by ab initio calculations or even by semiempirical techniques. Even if computing power were to increase by a factor of 10¹⁰ every 40 years, satisfactory calculations for systems of 10⁵ electrons that require a 10⁵⁰ increase in computing power would become feasible only in 200 years (five 40-year periods of 10¹⁰ each).

In spite of this, the theoretical study of large systems is a rapidly growing field that already has contributed significantly to the computational statistical mechanics of liquids and solids⁵ and to the simulation of biomolecules.⁶ Progress in this area is based on the realization that, for many of the important problems, the essential interactions can be represented by an empirical force field. Such a force field is grounded in quantum mechanics and assumes the validity of the Born-Oppenheimer approximation. It involves, in part, an extension of vibrational force fields from small molecules to very large systems. This is often possible because most of the latter consist of many copies of a small number of different molecules (e.g., only one kind in a simulation of a periodic box of 512 water molecules) or of different molecular fragments (e.g., only 20 different kinds of amino acid residues in the simulation of a protein). As long as the perturbing interactions in the large system are sufficiently weak, experimental data and/or high-level ab initio calculations for small model systems can be used to evaluate the parameters appearing in the force field used for the large system. An implicit assumption of such an approach is the general transferability of parameters. This is not always valid, and corrections may have to be introduced.
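For concreteness, a typical empirical force field of the kind just described has the following functional form. This is a generic sketch of the conventional bonded and nonbonded terms, not an expression taken from this article or from the force fields it cites.

```latex
% Generic empirical force field (molecular mechanics) energy function.
% A conventional form given for illustration only.
E = \sum_{\text{bonds}} k_b (b - b_0)^2
  + \sum_{\text{angles}} k_\theta (\theta - \theta_0)^2
  + \sum_{\text{torsions}} k_\phi \left[ 1 + \cos(n\phi - \delta) \right]
  + \sum_{i<j} \left\{ 4\epsilon_{ij} \left[ \left( \frac{\sigma_{ij}}{r_{ij}} \right)^{12}
      - \left( \frac{\sigma_{ij}}{r_{ij}} \right)^{6} \right]
      + \frac{q_i q_j}{4\pi\varepsilon_0 r_{ij}} \right\}
```

The parameters (k_b, b_0, \epsilon_{ij}, q_i, and so on) are the quantities that, as noted above, can be evaluated from experimental data and/or high-level ab initio calculations on small model systems and then transferred, with the caveats mentioned, to the large system.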
(4) Feynman, R. P.; Leighton, R. B.; Sands, M. The Feynman Lectures on Physics; Addison-Wesley: Reading, MA, 1965.

(5) Ciccotti, G.; Frenkel, D.; McDonald, I. R., Eds. Simulation of Liquids and Solids: Molecular Dynamics and Monte Carlo Methods in Statistical Mechanics; North-Holland: Amsterdam, 1987.
(6) Brooks, C. L., III; Karplus, M.; Pettitt, B. M. Proteins: A Theoretical Perspective of Dynamics, Structure, and Thermodynamics; Advances in Chemical Physics, Vol. 71; Wiley: New York, 1988.



[Figure 1. Three-dimensional "Pople diagram" for present-day chemistry (adapted from ref 1). The vertical axis is accuracy. The non-self-avoiding nonrandom walk of John Pople through quantum chemistry space is shown on the accuracy surface.]

Also, quantum calculations are required for the parts of the system involved in a reaction, for excited states, and more generally for any phenomenon where changes in electronic structure play a significant role. However, even for such problems it is usually possible to treat only a small part of the system by semiempirical or ab initio quantum mechanical methods and to treat the much larger remainder, including the interactions with the quantum mechanical subsystem, by an empirical force field.⁷,⁸

(7) Singh, U. C.; Kollman, P. J. Comput. Chem. 1984, 5, 129.
(8) Bash, P. A.; Field, M. J.; Karplus, M. J. Am. Chem. Soc. 1987, 109, 8092-8094.
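Schematically, the hybrid treatment just described partitions the total energy as follows. This is an illustrative decomposition with generic notation; the precise coupling terms are defined in refs 7 and 8.

```latex
% Schematic hybrid quantum/classical energy decomposition.
% E_QM:    small reactive subsystem (semiempirical or ab initio)
% E_MM:    large remainder (empirical force field)
% E_QM/MM: interaction between the two subsystems
E_{\mathrm{total}} = E_{\mathrm{QM}} + E_{\mathrm{MM}} + E_{\mathrm{QM/MM}}
```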

The extended range of theoretical approaches just described makes it useful to add a third dimension to the Pople diagram (see Figure 1). In addition to sophistication (type of method) and complexity (number of electrons), a dimension is needed that provides an estimate of the accuracy of the calculation for the system under consideration. A simple proportionality between sophistication and accuracy, implied by the two-dimensional Pople diagram, is not applicable to the wide variety of calculational methods now being employed. An impressionistic view of the accuracy of the various methods is given by the vertical dimension in Figure 1. This yields a limiting surface for the accuracy of a given type of calculation for a given system size. The edge of this surface projects onto the hyperbola of quantum chemistry, with the two axes in the plane corresponding to those of the Pople diagram. On the sophistication axis, only empirical methods have been added, and they have been assumed to be least sophisticated (in deference to the quantum chemistry audience at this meeting). As to the size of systems to be studied, the linear scale in the Pople diagram, which covered the range from 1 to 100, has been replaced by a logarithmic scale that goes from 1 to 10⁶.

The accuracy of a method increases with distance from the Pople plane. All methods, at whatever level of sophistication, are presumed to reduce to the exact result for one electron. Beyond that, the best ab initio and the empirical methods give the highest accuracy, with the former extending to systems of 10¹ electrons and the latter to 10⁶ electrons or more. The other methods fall between these limits of sophistication and number of electrons, with the accuracy surface being rather complex. For the ab initio methods, there is a monotonic increase in sophistication and accuracy and a monotonic decrease in range of applicability in going from minimal basis LCAO-SCF, to extended basis (full) Hartree-Fock, to low-level and high-level correlation calculations. The evaluation of the accuracy dimension in the semiempirical direction from the LCAO-SCF methods is more difficult. The drawing suggests that PPP/CNDO-type methods, which generally make use of a minimal basis set, are more accurate than minimal basis set LCAO-SCF ab initio calculations and also are more accurate than Hückel calculations. Not included in the diagram are density functional methods, which appear to violate the hyperbola of quantum chemistry. They are in the range of accuracy and sophistication of Hartree-Fock-type calculations but can treat a larger number of electrons with the available computer time.
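The qualitative content of the diagram, as far as it is stated in the text, can be summarized compactly. In the sketch below, the ordering of the middle entries along the sophistication axis and everything except the two stated reach limits (about 10¹ electrons for the best ab initio methods, 10⁶ or more for empirical force fields) are assumptions for illustration.

```python
# Qualitative structure of the three-dimensional Pople diagram as
# described in the text.  The ordering of the middle entries is an
# assumption consistent with the discussion; only the two endpoint
# reach values are stated in the article.

sophistication_order = [            # least to most sophisticated
    "empirical force field",        # placed lowest "in deference to the
                                    #   quantum chemistry audience"
    "Hueckel",                      # assumed below PPP/CNDO
    "PPP/CNDO (semiempirical)",
    "minimal basis LCAO-SCF",
    "extended basis (full) Hartree-Fock",
    "low-level correlation",
    "high-level correlation",
]

max_electrons = {                   # reach limits stated in the text
    "empirical force field": 10**6,   # "10^6 electrons or more"
    "high-level correlation": 10**1,  # best ab initio: ~10^1 electrons
}

for method in sophistication_order:
    reach = max_electrons.get(method)
    note = f"reach ~ {reach:,} electrons" if reach else "reach not quantified"
    print(f"{method:38s} {note}")
```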
Since this meeting and issue are dedicated to John Pople, I have tried to encompass his work over four decades by drawing a path representing his work in the quantum chemistry space in Figure 1. His wide-ranging interests and research accomplishments make clear that, even today, a single person can span groups I and II.
John Pople: The CNDO and INDO Methods

Gerald Segal
College of Letters, Arts and Sciences, Administration 200, University of Southern California, Los Angeles, California 90089-4012
(Received: November 10, 1989)

A reminiscence on the development of the CNDO and INDO methods.

In 1963, John Pople chose to leave England and join the faculty of Carnegie Mellon University, then Carnegie Institute of Technology. After some visa problems, he actually arrived in March of 1964, and I was there, a first-year graduate student, expectantly lying in wait for him. Thus the honor fell to me to become John's first American graduate student. David Santry had come with him from England, and Mark Gordon joined the group a few days later; my honor was short-lived. Most of you know John today after twenty-five years of acculturation to this country, so you can well imagine that our lab routine was more than a little English. Each day the four of us had afternoon tea.

In those hours of relaxed conversation, I learned enormous amounts of quantum chemistry from John, and I have always felt lucky: lucky to have been with him at a time when he was new to the country and thus not too busy, and lucky to have worked with one of those rare men who are so good with students that their products form an entire school of science. John Pople's students are everywhere today, and each of them, I think, feels as fortunate as I to bear the marks of his influence.

When we sat down to discuss what I might do, John suggested that it would be good to try to extend the neglect of differential overlap approximation, the Pariser-Parr-Pople theory of calcu-