J. Phys. Chem. 1993, 97, 5980-5986


Photophysics of Polycyclic Aromatic Hydrocarbons Adsorbed on Silica Gel Surfaces. 1. Fluorescence Lifetime Distribution Analysis: An Ill-Conditioned Problem?

Yuan S. Liu and William R. Ware*

Photochemistry Unit, Department of Chemistry, The University of Western Ontario, London, Ontario, Canada N6A 5B7

Received: December 28, 1992

The ill-conditioned nature of the lifetime distribution analysis is demonstrated using the fluorescence decay data of pyrene adsorbed on variously dehydroxylated silica gel surfaces. A set of physically plausible results can be obtained only after a regularization technique is employed in the data reduction. This study indicates that a bimodal distribution appears to be suitable for describing the lifetime of pyrene adsorbed on a silica gel surface.

Introduction

The fluorescence lifetimes of polycyclic aromatic hydrocarbons (PAHs) adsorbed on the surface of silica gel and other solid materials have been studied by many authors¹ in the last decade. It has been generally observed that the fluorescence decays of these systems are almost always nonexponential. A traditional method of dealing with these data is to fit the decay curves with two or three exponential terms. It has been noted, however, that these recovered exponential terms do not necessarily possess physical meaning, because the same decay curve can also be fitted equally well by other different combinations of exponential terms.² Therefore, in many cases, this traditional approach is not appropriate. One way to avoid this problem is to analyze these decay data in terms of lifetime distributions, without arbitrarily assuming the existence of two or three exponential terms. In recent years, the techniques for lifetime distribution analysis³ have been developed⁴,⁵ and applied with fair success to a number of photophysical systems, such as proteins,⁶ vesicles or membranes,⁷ micellar systems,⁸ cyclodextrin inclusion complexes,⁹ fluorophors in rigid or viscous media,¹⁰ and surfaces.¹¹ Studies in this field have been reviewed recently.¹² In most of these applications, an a priori model, such as a single or double Gaussian or Lorentzian distribution, was used in the data analysis. Great difficulties arose when the lifetime distribution analysis approach was employed in surface photophysical studies. An intensive investigation by the present group¹³ over the last several years has indicated that there are two major problems. On one hand, since the structure and organization of surface functional groups and the disposition of the adsorbed molecules on these surface groups, as well as their influence on the photophysics of adsorbed molecules, are all very inadequately understood, it is difficult to construct a theoretical model for the assumed lifetime distribution.
As a result, in most cases, the comparatively simple type of lifetime distribution analysis, using an a priori distribution model,¹¹ is not applicable. On the other hand, if the recovery of the lifetime distribution is attempted without an a priori model, the problem then becomes that of solving a Fredholm integral equation of the first kind; this is well-known as an "ill-conditioned" problem, and the solution derived by this approach is normally very unstable (see below). This paper is a report of our efforts to deal with the above problems. The ill-conditioning of the lifetime distribution analysis, based on fluorescence decay measurements using the single-photon-counting method, is discussed in detail, and the use of a regularization technique is shown to be essential to achieve reasonable solutions. Pyrene, adsorbed on variously dehydroxylated silica gel surfaces, has been used as a model system. In the present paper, attention is focused on the problems related to the lifetime analysis only. On the basis of the lifetime distribution data, we have also made a "complete" photophysical study for a number of PAHs adsorbed on silica gel surfaces, including fluorescence quantum yield measurements and the derivation of radiative decay rates. These results will be discussed in the other two papers in this series.¹⁴

* Publication No. 487 from the Photochemistry Unit, University of Western Ontario.

Concept of the Lifetime Distribution Analysis

Assuming a fluorescence system is excited by impulse excitation and this results in a decay curve I(t), we define a decay rate distribution function i(k) by an integral transform,

$$I(t) = \int_a^b i(k)\, e^{-kt}\, dk \qquad (1)$$

which is the well-known Laplace transform, where a ≥ 0 and b ≤ ∞, and e^{-kt} is the kernel deduced from first-order kinetics. This definition is, in fact, a generalization of the classical approach based on a sum of discrete decay rates. Under this generalized definition, the discrete decay rate components can be represented by a sum of delta functions:

$$i(k) = \sum_i A_i\, \delta(k - k_i) \qquad (2)$$

By substituting (2) into (1), the classical expression is obtained:

$$I(t) = \sum_i A_i \int_a^b \delta(k - k_i)\, e^{-kt}\, dk = \sum_i A_i \exp(-k_i t)$$
By changing the variable, the corresponding distributions in the τ = 1/k, x = ln k, and y = ln τ domains may be similarly defined. They are given in eqs 3-5:

$$I(t) = \int_{\tau_a}^{\tau_b} i_\tau(\tau)\, e^{-t/\tau}\, d\tau \qquad (3)$$

where τ_a ≥ 0 and τ_b ≤ ∞,

$$I(t) = \int_{x_a}^{x_b} i_x(x)\, \exp(-e^{x} t)\, dx \qquad (4)$$

where x_a ≥ -∞ and x_b ≤ +∞, and

$$I(t) = \int_{y_a}^{y_b} i_y(y)\, \exp(-t\, e^{-y})\, dy \qquad (5)$$

where y_a ≥ -∞ and y_b ≤ +∞. For convenience, we will generally describe the recovery of all these distributions as the "lifetime distribution analysis", unless it is necessary to specify the particular type of distribution. From eqs 1 and 3-5, the relationships between i(k) and i_τ(τ), i_x(x), and i_y(y) can be readily derived. They are


$$i(k) = \frac{i_\tau(1/k)}{k^2} \qquad (6)$$

$$i(k) = \frac{i_x(\ln k)}{k} \qquad (7)$$

$$i(k) = \frac{i_y(-\ln k)}{k} \qquad (8)$$

It is evident that eqs 1 and 3-5 have the general form of

$$I(t) = \int_{s_a}^{s_b} f(s)\, k(s,t)\, ds \qquad (9)$$

which is known as the Fredholm integral equation of the first kind.¹⁵ To recover the distribution function f(s) from a decay curve I(t) is, therefore, to solve a Fredholm integral equation of the first kind. This equation appears frequently in many branches of physical science. When I(t) is determined experimentally, the problem of seeking f(s) is referred to as an "inverse problem".¹⁶,¹⁷ The theory of Fredholm integral equations of the first kind and the related inverse problems has been studied in depth, and the mathematical properties of these problems are already well-known. The most salient feature of the Fredholm integral equation of the first kind is the instability of its solution. In other words, the problem is "ill-conditioned". This means that a very small perturbation in I(t) can cause an arbitrarily large change in f(s). Excellent discussions can be found in the literature.¹⁸ Although the ill-conditioned nature of this problem is well-known in principle, for a particular type of data, questions such as "how ill-conditioned is this particular problem" and "what is the maximum amount of reliable information available in the given set of data" are still unclear. In the practice of lifetime distribution analysis, favorable examples can easily be found in which an underlying distribution can be precisely recovered. These examples give the impression that the inverse technique is able to recover other distributions of the same level of complexity and that the ill-conditioning of the problem is not very serious under the given conditions of data precision. In some unfavorable examples, however, a simple distribution may be completely distorted by the same technique at the same level of data precision. In this regard, an easily ignored problem is that, when dealing with an underlying unknown distribution, one is usually unable to distinguish a favorable case from an unfavorable one. As a result, successful examples alone can never be sufficient to justify a result obtained for a new and unknown case; other evidence is required.

The Marquardt Method and the Smoothing Technique of Phillips

In this laboratory, several programs have been developed for lifetime distribution analysis.¹² In the present work, the algorithm is based on a combination of the Marquardt method¹⁹ and the smoothing technique of Phillips,²⁰ hereafter called the smoothed Marquardt method (SMM). Details of this program have been described elsewhere.¹³ These two techniques are, separately, both widely employed for solving inverse problems in physical science.²¹,²² In photophysical studies, the Marquardt method has long been a classic algorithm for the recovery of one, two, or three discrete lifetimes.²³ In addition, James and Ware have applied the Marquardt method in their earlier work on lifetime distribution analysis.²⁴ Some of the data obtained in the present work have also been analyzed with the MEM method using a program¹² previously developed in this laboratory. In the SMM method, the recovery of lifetime distributions is conducted in the log k or log τ domain (see eqs 4 and 5). The theoretical basis of this selection is that the resolution of this inversion process is evenly distributed in the log domain.²⁵ In any case, the integral equation to be solved can be written in the form of eq 9. For a lifetime analysis based on the single-photon-counting technique, the model response function G(t) is the theoretical impulse decay function I(t) convoluted with the instrument response function E(t). That is,

$$G(t) = \int_0^t I(t')\, E(t - t')\, dt' \qquad (10)$$

Substitution of (9) into (10) yields

$$G(t) = \int_{s_a}^{s_b} f(s) \left[ \int_0^t k(s,t')\, E(t - t')\, dt' \right] ds \qquad (11)$$

After convolution, the integral equation can still be written in the form

$$G(t) = \int_{s_a}^{s_b} f(s)\, z(s,t)\, ds \qquad (12)$$

where

$$z(s,t) = \int_0^t k(s,t')\, E(t - t')\, dt' \qquad (13)$$

We can then make the usual quadrature of eq 12 to obtain a system of linear equations:

$$\mathbf{g} = Z\mathbf{f} \qquad (14)$$

where f = (f₁, f₂, ..., f_p)ᵀ is the model (distribution) parameter vector, g = (g₁, g₂, ..., g_n)ᵀ is the model response decay vector, and Z is the n × p matrix with elements Z_ji. We now give f an initial estimate f⁰ and assume a correction vector δ = (δ₁, δ₂, ..., δ_p)ᵀ to be added to the initial estimate f⁰ so that

$$\mathbf{f} = \mathbf{f}^0 + \boldsymbol{\delta} \qquad (15)$$

can better satisfy eq 14. Substituting (15) into (14) yields

$$\mathbf{g} = \mathbf{g}^0 + Z\boldsymbol{\delta} \qquad (16)$$

where g⁰ = Zf⁰. Let the observed decay be represented by the vector d = (d₁, d₂, ..., d_n)ᵀ and the difference between the model response g and the observed d be represented by the residual vector r, so that we have

$$\mathbf{r} = \mathbf{d} - \mathbf{g} \qquad (17)$$

Combining (16) and (17) yields

$$\mathbf{r} = \mathbf{r}^0 - Z\boldsymbol{\delta} \qquad (18)$$

where r⁰ = d - g⁰. In the simplest least-squares or "Gauss-Newton" approach, we seek to minimize the sum of squared errors χ = rᵀr (χ²) with respect to the correction vector δ. From eq 18, we have

$$(Z^T Z)\,\boldsymbol{\delta} = Z^T \mathbf{r}^0 \qquad (19)$$

The smoothing technique of Phillips serves to minimize the sum of squares of the second derivative of f(s). In the present problem, the second derivative is approximated by the second difference of the elements of f. When the smoothing constraint of Phillips is imposed, we seek to minimize χ(α),

$$\chi(\alpha) = \mathbf{r}^T\mathbf{r} + \alpha\,(\mathbf{s}^T\mathbf{s}) \qquad (20)$$

where s is the second difference vector and α is a parameter which controls the relative weight of the smoothing constraint. Minimization of χ(α) requires dχ/dδ = 0 and yields

$$(Z^T Z + \alpha C)\,\boldsymbol{\delta} = Z^T \mathbf{r}^0 - \alpha C\,\mathbf{f}^0 \qquad (21)$$


where the p × p matrix C is defined by

$$C = \begin{bmatrix}
 1 & -2 &  1 &  0 & \cdots & & & 0 \\
-2 &  5 & -4 &  1 & \cdots & & & 0 \\
 1 & -4 &  6 & -4 &  1 & \cdots & & 0 \\
 \vdots & & \ddots & \ddots & \ddots & \ddots & & \vdots \\
 0 & \cdots & 1 & -4 & 6 & -4 & 1 & \\
 0 & \cdots & & 1 & -4 & 5 & -2 & \\
 0 & \cdots & & & 1 & -2 & 1 &
\end{bmatrix} \qquad (22)$$

When α is given, solving eq 21 will produce the correction vector δ toward a smoothed solution. The above derivation is for unweighted decay data. If a weighting factor wᵢ for each channel is taken into account, then eq 21 becomes

$$(Z^T W Z + \alpha C)\,\boldsymbol{\delta} = Z^T W \mathbf{r}^0 - \alpha C\,\mathbf{f}^0 \qquad (23)$$

where W = diag(w₁, w₂, ..., w_n).

When searching for the solution, the Marquardt algorithm leaves the off-diagonal elements of ZᵀWZ + αC unchanged but multiplies the diagonal elements by a factor of 1 + β.²⁶ The working program uses the following slightly modified Marquardt procedure to carry out the minimization:

(A) Make an initial guess for f⁰. Calculate an initial χ⁰ for these values to yield the normal equation (23).
(B) Set β = 0.01 initially.
(C) Solve for δ from the modified normal equation.
(D) Correct the initial values by δ and then evaluate the new χ.
(E) If χ < χ⁰, then accept the corrected f, replace the initial f⁰ by the corrected values, and then go to step C.
(F) After 10 successful iterations, reduce β to 0.1β, replace the initial f⁰ by the corrected values, and then go to step C.
(G) If χ > χ⁰, then the corrected solution is unacceptable. Increase β to 10β and return to C.

The modification made here, as compared with a standard Marquardt method, is that β is reduced after 10 successful iterations, instead of after every one. This is because each time β is adjusted, the inverse of the matrix ZᵀWZ + αC has to be recalculated; this is the most time-consuming part of the calculation. Experience has shown that, as χ² decreases, the same β, and therefore the same (ZᵀWZ + αC)⁻¹ matrix, may be used repeatedly as χ² keeps decreasing. Considerable CPU time can be saved in this way. This method has been described in the literature.²⁷ Because the decay data from the surface work are normally noisy, it is highly desirable to remove the scattered light and stray light interference whenever possible. The program has used two additional components, one corresponding to τ = 0 and another corresponding to τ = ∞, to fit the decay curves. This technique has been described.²³ Because these two components are not generated from fluorescence light, they are physically not of interest and have been removed from the recovered distribution. When using this program, we give α various values in order to search for solutions under different levels of regularization or smoothing. When α is set to 0, the smoothing constraint is completely removed, and the program uses the ordinary Marquardt approach. We denote by MM the SMM program running in this smoothing-constraint-free mode. On the other hand, SMM refers, unless indicated otherwise, to the case where α takes the largest possible value under the conditions that χ² and the residual profile are statistically acceptable.
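Steps (A)-(G) can be sketched as follows. This is an illustrative reconstruction, not the published program: the damping schedule follows the description above, while the problem sizes, grids, and synthetic data are our assumptions.

```python
import numpy as np

def second_diff(p):
    """Build C = D^T D (eq 22) from the second-difference operator D."""
    D = np.zeros((p - 2, p))
    for j in range(p - 2):
        D[j, j:j + 3] = [1.0, -2.0, 1.0]
    return D.T @ D

def chi2(Z, W, d, f):
    """Weighted sum of squared residuals (eq 17)."""
    r = d - Z @ f
    return float(r @ W @ r)

def smm_step(Z, W, C, alpha, beta, r0, f0):
    """Solve the beta-damped normal equation (eq 23): off-diagonal
    elements unchanged, diagonal multiplied by (1 + beta)."""
    M = Z.T @ W @ Z + alpha * C
    M = M + beta * np.diag(np.diag(M))
    return np.linalg.solve(M, Z.T @ W @ r0 - alpha * C @ f0)

def smm_minimize(Z, W, C, d, f0, alpha=1e-3, beta=0.01, n_iter=50):
    f, chi_old, successes = f0.copy(), chi2(Z, W, d, f0), 0
    for _ in range(n_iter):
        delta = smm_step(Z, W, C, alpha, beta, d - Z @ f, f)
        chi_new = chi2(Z, W, d, f + delta)
        if chi_new < chi_old:            # step E: accept the correction
            f, chi_old = f + delta, chi_new
            successes += 1
            if successes % 10 == 0:      # step F: relax damping after 10 successes
                beta *= 0.1
        else:                            # step G: reject, increase damping
            beta *= 10.0
    return f, chi_old

# Toy problem (sizes and grids are our choices, not the paper's):
p, n = 20, 100
tau = np.logspace(0, 2, p)
t = np.linspace(0.5, 300.0, n)
Z = np.exp(-np.outer(t, 1.0 / tau))
W = np.eye(n)
C = second_diff(p)
f_true = np.exp(-(np.arange(p) - 10.0) ** 2 / 8.0)
d = Z @ f_true
f0 = np.zeros(p)
chi_start = chi2(Z, W, d, f0)
f_fit, chi_end = smm_minimize(Z, W, C, d, f0)
print(chi_end / chi_start)               # large reduction in chi^2
```

Keeping β fixed across accepted steps, as described above, avoids re-factoring the damped matrix on every iteration; the sketch re-solves each time only for brevity.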
The solutions derived from the working programs are presented, for convenience, in one of two styles, the vertical bar style or the profile line style. The solutions are presented in the log τ domain. When the vertical bar style is used, the vertical bars represent the number and distribution of the fitting components. The goodness of fit of a result is indicated by the residual profile shown below the result.

[Figure 1. Instability in a lifetime distribution analysis: five decay curves were obtained by repeatedly measuring one sample to the highest precision practical in this laboratory (512 channels and 5 × 10⁵ counts in the peak channel) and then analyzed (a) by the MM program and (b) by the SMM program. Abscissa: log(lifetime/ns).]
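The run-to-run instability illustrated by Figure 1 can be caricatured numerically (a sketch with our own toy kernel, noise level, and α, not the actual single-photon-counting data): five noisy realizations of the same synthetic decay are inverted with α = 0 (MM-like) and with α > 0 (SMM-like), and the scatter of the recovered distributions is compared.

```python
import numpy as np

rng = np.random.default_rng(0)

p, n = 30, 300
tau = np.logspace(0, 3, p)
t = np.linspace(0.5, 1500.0, n)
Z = np.exp(-np.outer(t, 1.0 / tau))

D = np.zeros((p - 2, p))
for j in range(p - 2):
    D[j, j:j + 3] = [1.0, -2.0, 1.0]
C = D.T @ D                                  # smoothing matrix of eq 22

f_true = np.exp(-(np.log10(tau) - 2.0) ** 2 / 0.1)
d_clean = Z @ f_true

sols_mm, sols_smm = [], []
alpha = 1e-2
for _ in range(5):                           # five repeated "measurements"
    d = d_clean + 1e-3 * rng.standard_normal(n)   # tiny added noise
    # alpha = 0 ("MM"): unregularized least squares on a numerically
    # near-singular kernel -- the solution swings wildly between runs
    sols_mm.append(np.linalg.lstsq(Z, d, rcond=None)[0])
    # alpha > 0 ("SMM"): eq 21 with f0 = 0, stable between runs
    sols_smm.append(np.linalg.solve(Z.T @ Z + alpha * C, Z.T @ d))

scatter = lambda S: float(np.mean(np.std(np.array(S), axis=0)))
print(scatter(sols_mm), scatter(sols_smm))   # run-to-run scatter: huge vs small
```

The point mirrors Figure 1: noise far below the signal level is enough to destabilize the unregularized inversion, while the smoothing term holds the recovered distribution essentially fixed.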

Use of the SMM Program in Surface Photophysical Studies

The ill-conditioned nature of the lifetime distribution analysis and the effect of the regularization technique were clearly demonstrated by analyzing a series of decay curves derived from a set of experiments with variously dehydroxylated silica gel. For the selected decay data sets, we first used the MM and MEM programs to seek solutions and then compared these solutions with those obtained from the SMM program. The experimental details are described elsewhere.¹³ Briefly, five samples were made with the silica gel preheated in vacuum (