
EvArnoldi: A New Algorithm for Large-Scale Eigenvalue Problems

Hillel Tal-Ezer*
School of Computer Sciences, Academic College of Tel-Aviv Yaffo, Rabenu Yeruham Street, Tel-Aviv 61803, Israel

J. Phys. Chem. A (Article, pubs.acs.org/JPCA). DOI: 10.1021/acs.jpca.6b00814
Special Issue: Ronnie Kosloff Festschrift. Received: January 25, 2016. Revised: March 24, 2016.

ABSTRACT: Eigenvalues and eigenvectors are an essential theme in numerical linear algebra. Their study is mainly motivated by their high importance in a wide range of applications. Knowledge of eigenvalues is essential in quantum molecular science. Solutions of the Schrödinger equation for the electrons composing the molecule are the basis of electronic structure theory. Electronic eigenvalues compose the potential energy surfaces for nuclear motion. The eigenvectors allow calculation of dipole transition matrix elements, the core of spectroscopy. The vibrational dynamics of a molecule also requires knowledge of the eigenvalues of the vibrational Hamiltonian. Typically in these problems, the dimension of the Hilbert space is huge, but in practice only a small subset of eigenvalues is required. In this paper, we present a highly efficient algorithm, named EvArnoldi, for solving the large-scale eigenvalue problem. The algorithm, in its basic formulation, is mathematically equivalent to ARPACK (Sorensen, D. C. Implicitly Restarted Arnoldi/Lanczos Methods for Large Scale Eigenvalue Calculations; Springer, 1997; Lehoucq, R. B.; Sorensen, D. C. SIAM Journal on Matrix Analysis and Applications 1996, 17, 789; Calvetti, D.; Reichel, L.; Sorensen, D. C. Electronic Transactions on Numerical Analysis 1994, 2, 21) (or eigs of Matlab) but significantly simpler.

1. INTRODUCTION

In the quantum world, the eigenvalues of the Hamiltonian operator play a central role. A complete diagonalization is equivalent to solving the dynamics for any time period. The complete set also allows calculation of the partition function. The numerical effort of obtaining such a complete set of eigenvalues and eigenvectors scales as O(N^3), where N is the size of the Hilbert space. As a result, this task is possible only for systems with a low-dimensional Hilbert space, which real simulated systems almost never have. For example, the size of the vibrational Hilbert space of even a small molecule could be on the order of millions. The effective size of the Hilbert space of a DFT electronic structure problem could be even larger. In many problems, it is sufficient to know only a small fraction of the full spectrum of the Hamiltonian. For example, at low temperature, only a few of the lowest eigenvalues are occupied. In predissociation, the dynamics is determined by a few resonances. The Hartree−Fock approximation requires calculation of a subset of eigenvalues on the order of the number of participating electrons. For a closed quantum system, the Hamiltonian operator Ĥ is Hermitian, and the eigenvalues are real. For open quantum systems, the generator of the dynamics ℒ has complex eigenvalues representing the decay to equilibrium. Complex eigenvalues are also obtained in descriptions of resonances when outgoing boundary conditions are imposed. It is therefore desirable to develop algorithms that can calculate the eigenvalues and eigenvectors of a selected subspace of the spectrum of the matrix (operator). Usually, the shift-and-invert strategy is applied to this task, but it requires solving huge linear systems, which is a highly time-consuming process. When the elements of the matrix (operator) are not given explicitly, shift and invert cannot be considered a viable approach.

A filtering method targeting the ground state was developed based on a Chebychev propagation in imaginary time of the evolution operator.4 Additional eigenvalues could be obtained by repeating the procedure, filtering a vector that is orthogonal to the ground state or to the previously found eigenstates. The next development was the use of a Gaussian filter to concentrate a random vector in the vicinity of a desired energy range.5 It was then realized that relying completely on filtering to find the eigenfunctions is not efficient. A two-step process was developed in which, after an incomplete filtering step generating a small Krylov space, a diagonalization procedure is applied within the subspace.6,7 An iterative procedure to find eigenfunctions was first devised by Lanczos8,9 for a Hermitian matrix. The method is a powerful tool for extracting some of the extreme eigenvalues of Hermitian matrices. The drawback of the Lanczos method is that it becomes numerically unstable due to loss of orthogonality.10 An improved version of the traditional Lanczos method is the implicitly restarted Lanczos method.3,11 It is similar to the implicitly restarted Arnoldi method (IRAM),2 which is aimed at solving the eigenvalue problem of non-Hermitian matrices.

The present paper describes a new algorithm, named EvArnoldi. It is a matrix-free algorithm designed to obtain p ≪ n approximated eigenvalues and eigenvectors of a general, huge matrix (operator). EvArnoldi belongs to the family of Krylov methods (section 2) that make use of the Arnoldi algorithm (section 3). An iterative approach is employed (section 4), in which the set of approximated eigenvalues gradually moves closer to the region of interest and the approximation becomes more accurate. Thus, after a sufficient number of iterations, the desired p eigenvectors and eigenvalues can be obtained with the required accuracy. Numerical results are presented in section 5.
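To make the cost issue concrete, the standard sparse-solver route can be illustrated in a few lines. The sketch below (an illustration using SciPy's interface to ARPACK, the IRAM implementation cited above, not the algorithm of this paper) requests the p eigenvalues of a model Hamiltonian closest to a target shift σ; the tridiagonal matrix and the value of σ are arbitrary choices. The shift-and-invert mode must factorize H − σI and repeatedly solve linear systems with it, precisely the step that is unavailable when the operator is known only through matrix−vector products.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Illustrative sparse "Hamiltonian": a tridiagonal model matrix.
n = 10000
diag = np.linspace(0.0, 10.0, n)     # arbitrary on-site energies
off = -0.5 * np.ones(n - 1)          # arbitrary couplings
H = sp.diags([off, diag, off], [-1, 0, 1], format="csc")

# Partial solve: p eigenvalues closest to sigma via shift and invert.
# Internally ARPACK factorizes (H - sigma*I) and solves linear systems
# with it, which is the expensive step the paper seeks to avoid.
p, sigma = 6, 2.0
vals, vecs = spla.eigsh(H, k=p, sigma=sigma, which="LM")
print(np.sort(vals))
```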


2. MATHEMATICAL FOUNDATION OF KRYLOV SPACE

Assume that A is an n × n matrix with u1, ..., un linearly independent eigenvectors and λ1, ..., λn corresponding eigenvalues. Assume further that we are interested in finding u1, ..., up and the corresponding eigenvalues λ1, ..., λp. One can find an approximation of these eigenvectors and eigenvalues based on the following two propositions.

Proposition 1: Let U be a vector space whose basis is u1, ..., um, a set of linearly independent eigenvectors of a matrix A with corresponding eigenvalues λ1, ..., λm, λi ≠ λj. If r = ∑_{i=1}^{m} αi ui, αi ≠ 0, then U = Span{r, Ar, ..., A^{m−1}r}.

Proof: Because U is a space of dimension m, we only have to show that r, Ar, ..., A^{m−1}r are linearly independent. Let w = ∑_{j=1}^{m} βj A^{j−1} r. Assuming w = 0, we have to show that βj = 0, 1 ≤ j ≤ m. We have

    w = ∑_{j=1}^{m} βj A^{j−1} ∑_{i=1}^{m} αi ui = ∑_{i=1}^{m} (∑_{j=1}^{m} βj λi^{j−1} αi) ui    (1)

Because u1, ..., um are linearly independent,

    ∑_{j=1}^{m} cij βj = 0,  1 ≤ i ≤ m    (2)

where

    cij = αi λi^{j−1},  1 ≤ i, j ≤ m    (3)

One can write eq 2 as

    cβ = 0,  β = [β1, ..., βm]^T    (4)

where c is a matrix whose elements are cij. The matrix c can be written as

    c = αλ    (5)

where α is a diagonal matrix with α1, ..., αm on its diagonal and the elements of the matrix λ satisfy λij = λi^{j−1}. λ is a Vandermonde matrix, and therefore det(λ) ≠ 0. Because det(α) ≠ 0, det(c) ≠ 0, and therefore β = 0, and the proof is concluded.

Remark: The subspace U defined above is known in the literature as the Krylov space

    Kr(A, r, m) = Span{r, Ar, ..., A^{m−1}r}    (6)
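Proposition 1 is easy to check numerically. In the sketch below (illustrative, not part of the article), A is diagonal, so its eigenvectors are the standard basis vectors; r mixes the first m of them with nonzero coefficients, and the Krylov matrix [r, Ar, ..., A^{m−1}r] is verified to have full rank m:

```python
import numpy as np

rng = np.random.default_rng(0)

# Diagonal A: eigenvectors are the standard basis, eigenvalues on the diagonal.
n, m = 50, 8
lam = rng.normal(size=n)                 # distinct with probability 1
A = np.diag(lam)

# r is a combination of the first m eigenvectors with nonzero coefficients.
alpha = rng.uniform(1.0, 2.0, size=m)
r = np.zeros(n)
r[:m] = alpha

# Krylov matrix K = [r, Ar, ..., A^(m-1) r]; Proposition 1 says rank(K) = m.
K = np.column_stack([np.linalg.matrix_power(A, j) @ r for j in range(m)])
print(np.linalg.matrix_rank(K))          # -> 8
```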

Proposition 2: Let A be an n × n matrix and u1, ..., up be a few of its linearly independent eigenvectors with λ1, ..., λp corresponding eigenvalues. Let U = Span{u1, ..., up} and V be an n × p matrix whose columns are orthonormal vectors in U. If H = V†AV, then λ1, ..., λp are the eigenvalues of H and u1, ..., up satisfy

    ui = V yi,  1 ≤ i ≤ p    (7)

where

    H yi = λi yi    (8)

Proof: Let yi be the coordinate vector of ui in the space spanned by the columns of V. We have

    ui = V yi,  1 ≤ i ≤ p    (9)

Hence

    A V yi = λi V yi,  1 ≤ i ≤ p    (10)

Multiplying on the left by V†, we get

    H yi = λi yi,  1 ≤ i ≤ p    (11)

and the proof is concluded.

In practice, we do not have the vector r = ∑_{i=1}^{p} αi ui. IRAM (implicitly restarted Arnoldi)2 is an algorithm that iteratively computes an approximation of r (implicitly restarted Lanczos is actually IRAM for symmetric A). The algorithm presented in the present paper, named EvArnoldi, is, in its basic form, mathematically equivalent to IRAM but significantly simpler.
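Proposition 2 can be verified in the same spirit. The following sketch (illustrative, not from the article) builds a random diagonalizable A with known eigenpairs, takes V to be an orthonormal basis of the span of the first p eigenvectors, and confirms that H = V†AV has exactly λ1, ..., λp as its eigenvalues, with ui = Vyi recovered up to round-off:

```python
import numpy as np

rng = np.random.default_rng(1)

# Random diagonalizable A with known eigenpairs.
n, p = 60, 4
lam = rng.normal(size=n) + 1j * rng.normal(size=n)
U = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = U @ np.diag(lam) @ np.linalg.inv(U)

# V: orthonormal basis (via QR) of the span of the first p eigenvectors.
V, _ = np.linalg.qr(U[:, :p])

# H = V^dagger A V is p x p; its eigenvalues are lam[0..p-1] (Proposition 2).
H = V.conj().T @ A @ V
mu, Y = np.linalg.eig(H)
print(np.sort_complex(mu))
print(np.sort_complex(lam[:p]))          # same values up to round-off

# u_i = V y_i reproduces the eigenvectors: the residual is ~ machine precision.
print(np.linalg.norm(A @ (V @ Y) - (V @ Y) @ np.diag(mu)))
```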

3. ARNOLDI ALGORITHM

The implementation of EvArnoldi (and IRAM as well) is based on the Arnoldi algorithm.12 The algorithm implements the Gram−Schmidt orthonormalization process to build an orthonormal basis of the Krylov space Kr(A, r, m+1). The algorithm also yields an m × m matrix representation of A in the Krylov space. The basic formulation of the algorithm is Arnoldi(A, r, m), sketched below.
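A minimal matrix-free rendering of this recursion in Python (a standard Gram−Schmidt Arnoldi sketch consistent with the factorization stated in eqs 12−14 below, not the article's verbatim listing) is:

```python
import numpy as np

def arnoldi(matvec, r, m):
    """Gram-Schmidt Arnoldi: orthonormal basis of Kr(A, r, m+1).

    Returns V (n x (m+1), columns v_1..v_{m+1}) and the (m+1) x m
    Hessenberg matrix Hbar satisfying A @ V[:, :m] = V @ Hbar (eq 14).
    Only matrix-vector products with A are needed.
    """
    n = r.shape[0]
    V = np.zeros((n, m + 1), dtype=complex)
    Hbar = np.zeros((m + 1, m), dtype=complex)
    V[:, 0] = r / np.linalg.norm(r)
    for j in range(m):
        w = matvec(V[:, j]).astype(complex)
        for i in range(j + 1):              # orthogonalize w against v_1..v_{j+1}
            Hbar[i, j] = np.vdot(V[:, i], w)
            w = w - Hbar[i, j] * V[:, i]
        Hbar[j + 1, j] = np.linalg.norm(w)  # assumes no breakdown (norm > 0)
        V[:, j + 1] = w / Hbar[j + 1, j]
    return V, Hbar

# Quick consistency check of eq 14 on a random matrix.
rng = np.random.default_rng(2)
n, m = 200, 30
A = rng.normal(size=(n, n)) / np.sqrt(n)
V, Hbar = arnoldi(lambda x: A @ x, rng.normal(size=n), m)
print(np.linalg.norm(A @ V[:, :m] - V @ Hbar))   # ~ machine precision
```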

The output of the Arnoldi algorithm is a set of orthonormal vectors v1, ..., vm+1 and a matrix Hm such that

    A Vm = Vm Hm + h_{m+1,m} vm+1 em^T,  em^T = [0, ..., 0, 1]    (12)

where Hm is an m × m Hessenberg matrix and Vm is an n × m matrix whose columns are v1, ..., vm. Hm satisfies

    Hm = Vm† A Vm    (13)

and it is actually the matrix representation of A on the subspace spanned by the columns of Vm. Equation 12 can also be written as

    A Vm = Vm+1 H̄m    (14)

where v1, ..., vm+1 are the columns of Vm+1 and H̄m is the (m + 1) × m matrix that results from adding the row [0, ..., 0, h_{m+1,m}] to Hm.

The algorithm described above is a particular case of a more general one, GenArnoldi. In this case, we already know Vs+1 and H̄s, which satisfy

    A Vs = Vs+1 H̄s
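GenArnoldi is specified here only through its starting point, the known partial factorization AVs = Vs+1H̄s. Under the natural reading that the Gram−Schmidt recursion is simply resumed from column s + 1 (an assumption made for illustration; the article's own listing is not reproduced above), a minimal sketch is:

```python
import numpy as np

def gen_arnoldi(matvec, V, Hbar, m):
    """Extend a partial Arnoldi factorization A V_s = V_{s+1} Hbar_s to m steps.

    A sketch of the idea only: V (n x (s+1)) and Hbar ((s+1) x s) are taken
    as given, and the Gram-Schmidt recursion is resumed at column s + 1.
    """
    n, s1 = V.shape
    s = s1 - 1
    Vm = np.zeros((n, m + 1), dtype=complex)
    Hm = np.zeros((m + 1, m), dtype=complex)
    Vm[:, : s + 1] = V                       # reuse the known basis vectors
    Hm[: s + 1, :s] = Hbar                   # and the known Hessenberg entries
    for j in range(s, m):                    # resume the standard recursion
        w = matvec(Vm[:, j]).astype(complex)
        for i in range(j + 1):
            Hm[i, j] = np.vdot(Vm[:, i], w)
            w = w - Hm[i, j] * Vm[:, i]
        Hm[j + 1, j] = np.linalg.norm(w)     # assumes no breakdown
        Vm[:, j + 1] = w / Hm[j + 1, j]
    return Vm, Hm

# Hypothetical usage: grow an existing (V, Hbar) factorization to 40 steps,
# e.g. V40, Hbar40 = gen_arnoldi(lambda x: A @ x, V, Hbar, 40)
```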