Information • Textbooks • Media • Resources

Computer Bulletin Board

Spreadsheet Investigation of First-Order Kinetics

Glenn V. Lo
Department of Physical Sciences, Nicholls State University, Thibodaux, LA 70310; [email protected]

Computer spreadsheets have been useful in providing insight into complex chemical problems by facilitating “what-if ” investigations and visual representations of numbers and equations. For example, Bruist (1) used a spreadsheet to numerically solve differential rate equations and illustrate enzyme kinetics. Sundheim (2) developed a spreadsheet exercise to investigate the effect of errors on calorimetric measurements. For my physical chemistry laboratory course, I use spreadsheet exercises to let students do an in-depth exploration of the first-order rate law.

In the first exercise, we examine the basics. If the integrated rate law for a reaction is

[A] = [A]0 exp(−kt)    (1)

where [A] is the concentration of reactant A at time t, [A]0 is the initial concentration of reactant A, and k is the first-order rate constant, then any property X that is linearly related to [A] can be used to determine the rate constant k. This is easily derived algebraically; however, generating numerical values and graphs that demonstrate the fact, before the analytical derivation is presented, makes a more lasting impression. The spreadsheet shown in Figure 1 illustrates the two generally used methods for determining k. If property X is linearly related to [A], that is,

X = c0 + c1[A]    (2)

Figure 1. Snapshot of spreadsheet for exploring the first-order rate law. Students can change the values in cells B1, B2, F1, and F2 to see the effect of changing the initial concentration, the rate constant, and the slope and intercept relating property X to the exponentially decaying concentration of reactant, [A]. The time interval (dt) can also be changed (cell B3). As shown, calculations are carried out to 5 lifetimes at intervals of 0.1 lifetime; note that some rows are hidden from view. The plot of ln|X| vs time is not linear because c0 is nonzero. The chosen interval (d, cell F3) for the Guggenheim method is one lifetime.
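The worksheet's logic can also be checked outside a spreadsheet. Below is a minimal Python sketch of the same columns; the parameter values ([A]0, k, c0, c1, and the two intervals) are assumptions chosen for illustration, not the values in the snapshot.

```python
import math

# Assumed illustrative parameters (playing the role of cells B1, B2, F1, F2, B3, F3)
A0 = 1.0            # initial concentration [A]0
k = 0.001           # first-order rate constant
c0, c1 = 2.0, 3.0   # X = c0 + c1*[A]; c0 deliberately nonzero
tau = 1.0 / k       # lifetime
dt = 0.1 * tau      # time step
d = 1.0 * tau       # Guggenheim interval

def least_squares_slope(xs, ys):
    """Ordinary least-squares slope, as a spreadsheet's regression tool would give."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

ts = [i * dt for i in range(51)]                           # 0 to 5 lifetimes
X  = [c0 + c1 * A0 * math.exp(-k * t) for t in ts]         # X at time t
Xd = [c0 + c1 * A0 * math.exp(-k * (t + d)) for t in ts]   # X at time t + d

m1 = least_squares_slope(ts, [math.log(abs(x)) for x in X])                   # Method 1
m2 = least_squares_slope(ts, [math.log(abs(a - b)) for a, b in zip(X, Xd)])   # Method 2

# With c0 != 0, the Method 1 plot is curved and m1 differs from -k,
# while the Guggenheim slope m2 equals -k (no noise in this sketch).
```

Noise-free data make the contrast stark: ln|Xt − Xt+d| is exactly linear in t regardless of c0, whereas ln|X| is linear only when c0 = 0.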


where c0 and c1 are constants, then k is obtained as follows:

Method 1. Plot ln|Xt| vs t (applicable only if c0 = 0).
Method 2. Plot ln|Xt − Xt+d| vs t, where d is a fixed time interval.

In both methods the plot is linear and the slope is −k. Method 1 requires that Xt be directly proportional to [A]. It is important to note that if c0 is known (even if it is nonzero), then a plot of ln|Xt − c0| vs t is also linear with slope −k. Method 2, known as the method of Guggenheim (3), requires taking data in two sets, with each datum in the second set delayed from one in the first set by a fixed time interval (d). Taking data at equal time intervals allows efficient reuse of intermediate data, first as Xt+d and then as Xt. The method is especially useful when c0 cannot be accurately determined, since the difference Xt − Xt+d does not depend on c0.

The slope can be obtained by linear regression, a standard tool in spreadsheet programs. It can also be estimated directly from the plot by noting the lifetime (1/k), which is the time it takes ln|Xt| or ln|Xt − Xt+d| to change by one unit; the slope is simply the negative reciprocal of the lifetime.

Constructing the spreadsheet in Figure 1 is straightforward. The only formulas that need to be typed in are those for the cells in block B6..G6 and for cell A7:

B6: $B$1*exp(-$B$2*A6)
C6: $B$1*exp(-$B$2*(A6+$F$3))
D6: $F$1+$F$2*B6
E6: $F$1+$F$2*C6
F6: ln(abs(D6))
G6: ln(abs(D6-E6))
A7: A6+$B$3

Each of these formulas can then be copied to the succeeding rows. With the spreadsheet laid out, students can easily explore the various factors affecting the time profile of a property X that is linearly dependent on the exponentially decaying concentration of a reactant, [A]. They can then address questions pertinent to their experiments, such as Questions 1 and 2 below.

QUESTION 1. In the kinetic study of the acid-catalyzed hydrolysis of methyl acetate (cf. problem 19.5 of Alberty and Silbey (4)), k can be obtained from a plot of ln(V∞ − Vt) vs t. Why is the slope of this plot equal to −k? Note: Vt is the volume of standard NaOH solution used to titrate the acid (catalyst and product) in a quenched aliquot of the reaction mixture at time t, and V∞ is the volume of titrant for the aliquot taken at “infinite time” (two days later).

QUESTION 2. If one tries to follow the hydrolysis of sucrose in acid (5) with X = optical rotation, why might the Guggenheim method work even when Method 1 does not?

Both questions can be answered by examining the stoichiometry. In Question 1, (V∞ − Vt) can be shown to be proportional to the concentration of the ester, assuming the reaction goes to completion. In Question 2, the optical rotation

Journal of Chemical Education • Vol. 77 No. 4 April 2000 • JChemEd.chem.wisc.edu

[Figures 2 and 3 appear here: plots of ln(X) and ln|Xt − Xt+d| vs time (0 to 5000).]

Figure 2. Effect of random errors within ±5% of the X values. The ln(X) plot is mildly affected. The Guggenheim plot is adversely affected if the d value is too small; in this case, d = 0.1 lifetime.

Figure 3. Effect of random errors within ±5% of X0. The chosen interval (d) for the Guggenheim plot is one lifetime. The data become unreliable beyond two lifetimes.

(αt) can be shown to be linearly dependent on the reactant concentration if the optical rotations of the reactant and products (glucose and fructose) are additive. It should be noted that (αt − α∞) is directly proportional to the concentration of sucrose.

In the second exercise, we explore the effects of systematic (Questions 3 and 4) and random (Questions 5 and 6) errors.

QUESTION 3. What happens if X is proportional to [A] (c0 = 0) but the measured values of X are all off by a constant factor? For example, in the study of the acid-catalyzed hydrolysis of methyl acetate (6), what would happen if the NaOH solution used to titrate the aliquots was assumed to be, say, 0.200 M but was in fact 0.190 M? This error can be simulated by changing c1. Obviously, the erroneous X is still proportional to [A] and the linear plots are unaffected.

QUESTION 4. What happens if the measured values of X are all off by a constant offset? For example, in the methyl acetate experiment, what would happen if the experimental V∞ were larger than the correct V∞? This is simulated by simply changing the value of c0. Interestingly, the accuracy is generally affected only in Method 1: if c0 is nonzero, Method 1 does not yield a linear plot (see Fig. 1). On the other hand, c0 is irrelevant in the Guggenheim method; indeed, one need not obtain V∞ at all if the Guggenheim method is used for the analysis. A good follow-up study would address the following question: what is the maximum offset (expressed as a percentage of X0) that still yields a more-or-less linear plot up to three lifetimes in Method 1, and what is the associated error in k?

QUESTION 5. What interval (d) should be chosen when using the Guggenheim method? This can be answered by investigating the effects of random errors.
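Questions 3 and 4 above also lend themselves to a quick numerical check. The following Python sketch (with assumed illustrative parameters, not values from the article) fits both plots by least squares: a constant scale error leaves both methods recovering k, while a constant offset spoils Method 1 only.

```python
import math

A0, k = 1.0, 0.001        # assumed values for illustration
tau = 1.0 / k             # lifetime
ts = [i * 0.1 * tau for i in range(31)]   # data out to 3 lifetimes
d = tau                                   # Guggenheim interval

def slope(xs, ys):
    # ordinary least-squares slope
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

def method1_k(c0, c1):
    """k from -slope of ln|X| vs t, with X = c0 + c1*[A]."""
    return -slope(ts, [math.log(abs(c0 + c1 * A0 * math.exp(-k * t)))
                       for t in ts])

def guggenheim_k(c0, c1):
    """k from -slope of ln|Xt - Xt+d| vs t; c0 cancels in the difference."""
    ys = [math.log(abs((c0 + c1 * A0 * math.exp(-k * t))
                       - (c0 + c1 * A0 * math.exp(-k * (t + d)))))
          for t in ts]
    return -slope(ts, ys)

# Question 3: titrant 0.190 M instead of 0.200 M scales X (c1 off by 5%);
# both methods still recover k, since scaling only shifts the intercept.
# Question 4: an offset (c0 != 0) leaves Guggenheim exact, but Method 1
# fails because ln|c0 + c1*[A]| is no longer linear in t.
```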
For the data represented in Figure 2, the random errors are within ±5% of the correct X values; in the spreadsheet, the formulas for Xt and Xt+d are each multiplied by (0.95+0.1*@rand()), which evaluates to a number between 0.95 and 1.05. The errors do not significantly affect the results for Method 1. The results are more interesting for the Guggenheim method: using a small d (e.g., 0.1 lifetime or less) makes the analysis very sensitive to random errors.
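This sensitivity is easy to reproduce in a short simulation. The sketch below uses assumed parameters, and, unlike a real experiment (where each measured X would be reused as both Xt and Xt+d), it draws fresh noise on every call; it nevertheless shows how strongly the choice of d matters.

```python
import math
import random

random.seed(7)                # fixed seed for reproducibility
A0, k = 1.0, 0.001            # assumed illustrative values
tau = 1.0 / k
ts = [i * 0.1 * tau for i in range(31)]   # data out to 3 lifetimes

def slope(xs, ys):
    # ordinary least-squares slope
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

def noisy_X(t):
    # +/-5% multiplicative error, like the spreadsheet's (0.95+0.1*@rand())
    return A0 * math.exp(-k * t) * (0.95 + 0.1 * random.random())

def guggenheim_k(d):
    ys = [math.log(abs(noisy_X(t) - noisy_X(t + d))) for t in ts]
    return -slope(ts, ys)

k_small = guggenheim_k(0.1 * tau)   # d = 0.1 lifetime: differences swamped by noise
k_large = guggenheim_k(1.0 * tau)   # d = 1 lifetime: much more robust
```

With d = 0.1 lifetime the noise in Xt − Xt+d is comparable to the difference itself, so the fitted slope is unreliable; with d = 1 lifetime the recovered k is typically within a few percent of the true value.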

Results are much better for d values of one lifetime or more.

QUESTION 6. How often, and for how long, should experimental data be taken? Although we generally want to use as much data as possible, first-order data beyond three lifetimes are frequently not used. Furthermore, obtaining data at very small time intervals, even if possible with a computer-controlled instrument, may not improve the results significantly. Investigating the effect of random errors nicely illustrates the reasons for both practices. Figure 3 shows the effect of random errors within ±5% of X0 added to all X values. With errors of this magnitude, data become highly unreliable beyond two lifetimes, where the differences between adjacent X values are comparable to the magnitude of the errors. Although most laboratory manuals provide specific instructions on when and for how long to take data, these investigations give students a basis for understanding the authors’ choices and for designing procedures for “further studies”.

With the preceding exercises, I sought to provide students with a sound foundation for deciding when to take data, for extracting the best possible value of the rate constant from their data, and for writing an intelligent discussion in their reports. Follow-up studies could be assigned as mentioned earlier. In a follow-up lecture, I alert students to the fact that the Guggenheim method also yields a linear plot for a reversible first-order reaction, with the negative of the slope equal to the sum of the forward and reverse rate constants.

Literature Cited

1. Bruist, M. R. J. Chem. Educ. 1998, 75, 372.
2. Sundheim, B. R. J. Chem. Educ. 1997, 74, 328.
3. Guggenheim, E. A. Philos. Mag. 1926, 2, 538.
4. Alberty, R. A.; Silbey, R. J. Physical Chemistry; Wiley: New York, 1992; p 662.
5. Daniels, F.; Williams, J. W.; Bender, P.; Alberty, R. A.; Cornwall, C. D. Experimental Physical Chemistry, 6th ed.; McGraw-Hill: New York, 1962; p 140.
6. Daniels, F.; Williams, J. W.; Bender, P.; Alberty, R. A.; Cornwall, C. D. Ibid.; p 132.
