ANALYTICAL CHEMISTRY, VOL. 51, NO. 6, MAY 1979
Laser Induced Thermal Lens Effect for Calorimetric Trace Analysis

N. J. Dovichi and J. M. Harris*

Department of Chemistry, University of Utah, Salt Lake City, Utah 84112
The thermal lens effect, generated in condensed phase by absorption of a laser beam, produces a loss of radiation in the center of the beam which is enhanced relative to normal Beer's law behavior by increasing laser power. The time dependence of the effect provides a reference or blank measurement without requiring a double-beam arrangement. An application of the technique to the determination of trace level Cu(II) with EDTA is presented. Using a 4-mW He:Ne laser, a minimum detectable absorbance of 1.0 X 10^{-4}, corresponding to 3.3 ng copper, is found.
To reduce detection limits for a solution absorbance measurement of a very weakly fluorescent sample (1), one can determine the temperature rise associated with the nonradiative relaxation of the excited molecules. Since the magnitude of heat delivered to the solvent depends linearly on source power, the sensitivity of this approach can benefit from the high power of a laser source. Several methods for measuring small absorbances in condensed phase by a laser-induced temperature increase have recently been compared in the review literature (2-4). These methods, listed in order of decreasing detection limits, include thermocouple calorimetry, photoacoustic calorimetry, interferometry, thermal lens calorimetry, and several laser intracavity effects such as spot size variation, beat frequency shift, and resonator gain variations (5). The intracavity effects generally produce the lowest limit of detection because of the large average power in the resonator and the sensitivity of the oscillator to small refractive index gradients produced by absorption by the sample. Thermal lens calorimetry, although somewhat less sensitive, provides a much simpler method for decreasing the detection limits in absorption measurements. The thermal lens effect, first reported by Gordon et al. (6), is produced in an experimental arrangement similar to normal single-beam absorption spectrometry. The major difference is that laser radiation passing through a sample is detected only at the center of the beam by restricting the field of view of the detector with a pinhole. The sample causes a loss of radiation from the beam center by thermal defocusing; that is, light absorbed by the sample is converted to heat by nonradiative relaxation and increases the temperature of the solvent by an amount which is greatest at the center of the beam. This temperature increase results in a lowering of the refractive index, producing a negative lens which defocuses the beam.
If the path from the laser to the sample is initially blocked and then opened with a shutter, the thermal lens takes a finite time to build up. A steady-state condition is obtained when the rate of laser heating equals the rate of heat loss due to the thermal conductivity of the solvent and the finite temperature rise. The buildup of the lens can take place on time scales from tens of microseconds to hundreds of milliseconds depending on the thermal conductivity of the solvent and the radius of the laser beam through the sample (6, 7). The intensity measured at the beam center, I(t), will initially (t = 0) reflect only the Beer's law response of the sample:
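The dependence of the buildup time on solvent conductivity and beam radius can be made concrete with the characteristic time constant t_c = w^2 * rho * c_p / (4k) from the treatment of Gordon et al. (ref 6). The sketch below evaluates it for water with a 100-um beam radius; the solvent properties and beam size are illustrative assumptions, not values taken from this paper.

```python
def thermal_lens_time_constant(w, k, rho, cp):
    """Characteristic buildup time t_c = w^2 * rho * cp / (4 k), in seconds.

    w   -- beam radius in the sample (m)
    k   -- solvent thermal conductivity (W m^-1 K^-1)
    rho -- solvent density (kg m^-3)
    cp  -- solvent specific heat (J kg^-1 K^-1)
    """
    return w**2 * rho * cp / (4.0 * k)

# Approximate handbook values for water at room temperature (assumed):
t_c = thermal_lens_time_constant(w=100e-6, k=0.60, rho=1000.0, cp=4184.0)
print(f"t_c = {t_c * 1e3:.1f} ms")  # on the order of tens of milliseconds
```

A result in the tens-of-milliseconds range for an aqueous sample is consistent with the microsecond-to-hundreds-of-milliseconds span quoted above; less conductive solvents or larger beam radii lengthen t_c.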
I(0)/I_0 = 10^{-A}    (1)

or, for small A,

(I_0 - I(0))/I_0 = 2.303A    (2)
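The step from Equation 1 to Equation 2 is the usual small-absorbance linearization, 1 - 10^{-A} ≈ (ln 10)A ≈ 2.303A. A short numerical check (a sketch, not part of the original paper) confirms how good this approximation is at the trace absorbances of interest here:

```python
import math

def beer_lambert_loss(A):
    """Exact fractional loss at t = 0, (I_0 - I(0))/I_0 = 1 - 10^-A (from Eq. 1)."""
    return 1.0 - 10.0 ** (-A)

def linearized_loss(A):
    """Small-A approximation, 2.303*A (Eq. 2); ln(10) = 2.3026..."""
    return math.log(10.0) * A

for A in (1e-2, 1e-3, 1e-4):
    exact, approx = beer_lambert_loss(A), linearized_loss(A)
    print(f"A = {A:g}: exact = {exact:.6e}, linearized = {approx:.6e}")
```

For A = 10^-4 the two expressions agree to better than one part in 10^4, so the linear form is fully adequate at trace levels.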
where A is the sample absorbance and I_0 is the incident light intensity. After sufficient time, when a steady-state temperature difference is reached, the intensity at the detector, I(∞), depends on the optical arrangement of the system. An optimum configuration which minimizes I(∞) is obtained when the sample is placed one confocal length beyond the beam waist formed by a long focal length lens. In this configuration, using a TEM00 laser beam to probe a sample whose length, l, is sufficiently small (l