In the Classroom

Effectiveness of a Daily Class Progress Assessment Technique in Introductory Chemistry Brian J. Rogerson Chemistry Program, Natural Sciences and Mathematics, The Richard Stockton College of New Jersey, Pomona, NJ 08240-0195; [email protected]

During my first year of teaching, a significant proportion of freshman students in my introductory (first semester) chemistry course did not perform well and either failed or withdrew from the course. This phenomenon has been observed in other sections of the course taught by a number of different teachers. Colleagues have suggested a number of factors that may be contributing to such an outcome, including poor academic preparedness, a lack of good study habits, and the fact that students have off-campus jobs that significantly reduce the time available for their college work.

However, as a novice teacher, I was also concerned about my teaching effectiveness and how it impacted the performance of my students. Although each term I administered several quizzes and exams, the class as a whole did not seem constantly engaged in the work. Further, any feedback provided on these tests came too late for the students, since errors had already translated into poor grades. This was frustrating, not only for the students, but also for me. With the harm already done, there was no easy way to revisit the material and reevaluate student understanding after remediation. My attempts to monitor student understanding by conventional means, for instance, by encouraging oral participation in class or conducting workshops before exams, had mixed results.

It was during this time that I joined a study group at Stockton College that was using Angelo and Cross's Classroom Assessment Techniques (1) to improve teaching effectiveness and student learning. During my second year of teaching, I tested a new rapid feedback technique designed to give my introductory chemistry students an opportunity to constantly monitor their understanding of the material as the class progressed. I also hoped that student performance would improve if I became aware at the end of each class of where students were having difficulties so that I could take corrective action and resolve the confusion before formal testing.

Methods

The introductory chemistry class at our institution is divided into sections with a maximum capacity of 48 students, 70% of whom are freshmen, the subject population of this study. Lecture classes meet three times a week and class periods are 1 hour and 15 minutes long. The small class size and longer meeting time compared to larger institutions suggested that analyses of frequent surveys would be feasible provided they were limited in scope. A daily class progress assessment was developed with this objective in mind. It is conceptually similar to the technique reported by Holme (2), with several differences as described herein. At the end of every class period, students were asked to answer, in writing, brief questions about material that had just been discussed in class. My intent was to continuously survey all students for their understanding of basic ideas. I wanted the technique to accomplish three tasks: (i) to obtain feedback from all students in the class, not just the more vocal ones; (ii) to obtain feedback immediately after each class, thereby creating an expectation in students that they needed to make an effort to understand the material presented every time they came to class; and (iii) to give feedback to students on their answers to the assessment questions. To keep the technique simple and to enhance a positive learning atmosphere, the assessments were not graded and focused on concepts I expected my students to understand before they walked away from class. Some examples of questions are shown in Figure 1. These can usually be prepared in a couple of minutes; they must be kept simple and straightforward and must pertain to the key points of each class.

Not more than ten minutes before each class ended, two copies of the question(s) were handed out. It took about five minutes for students to record their answers on both copies. Students returned one copy of the assessment, anonymously if they wished, and retained the second copy so that they could check their work at the beginning of the next class, when the correct and incorrect answers were discussed. Anonymity appeared to reduce performance pressure and helped students focus on answering the questions. Students were allowed to check their notes and to get help from their neighbors in order to answer. As the term progressed I noticed that some students did check their notes at least some of the time. In general, students made a genuine effort to answer the questions on their own, a behavior I encouraged, which differs from Holme's format (2).

Although it may appear time consuming, this is a remarkably simple and quick assessment technique. Before the next class I would take about thirty minutes to analyze the results. Since these assessments were not graded, the evaluation process was very straightforward and quick. I determined how many students got the assessment right, and then proceeded to categorize the incorrect answers. Often different students made the same mistake, but there was always a subset of unique answers. Depending on the nature of the errors and how many students made them, I spent more or less time discussing them. Students' actual answers were always summarized on an overhead. It should be kept in mind that these were questions students would find in a quiz, except they were not being asked to answer them days or weeks after the topic was discussed, but rather, immediately after the topic was covered in class. Thus, the assessment not only gave me an indication of whether students understood what had just been discussed but provided information about student engagement and attentiveness in class. It also assessed my effectiveness during that class. The discussion of the incorrect answers and any further explanations always took place at the beginning of the following class and took less than ten minutes. This discussion accomplished the dual role of correcting students' misconceptions and acting as a reminder of what was discussed in the prior class.

Figure 1. A sampling of questions compiled from different class assessments:

- How many significant figures are there in the following measurements? (a) 0.0560 L; (b) 5.5 × 10⁴ km; (c) 10.0 ns; (d) 0.003 g
- Give two reasons why K is more reactive than Li.
- Why is it that AlCl3 is the empirical formula of the ionic compound made up of aluminum ions and chloride ions? Why not AlCl, AlCl5, or Al2Cl?
- You are studying a compound that is made up of Na, Cr, and O. Following the strategy outlined in class you have determined how many moles of each are present in the compound: 0.761 moles of Na, 0.763 moles of Cr, and 2.68 moles of O. What is the empirical formula of this compound?
- In the lab you are studying this reaction: 2 NO2Cl(g) ⇌ 2 NO2(g) + Cl2(g), and at equilibrium you found the following concentrations: [NO2Cl] = 0.00106 M, [NO2] = 0.0108 M, and [Cl2] = 0.00538 M. (a) Write the equilibrium expression for this reaction. (b) Calculate the value of Kc. (c) Calculate the value of Kc for the reverse reaction.
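For reference, a worked solution to the last question (my own working under the usual equilibrium-expression conventions, not an answer key reproduced from the article) would be

\[
K_c \;=\; \frac{[\mathrm{NO_2}]^2[\mathrm{Cl_2}]}{[\mathrm{NO_2Cl}]^2}
      \;=\; \frac{(0.0108)^2(0.00538)}{(0.00106)^2} \;\approx\; 0.56,
\qquad
K_c(\text{reverse}) \;=\; \frac{1}{K_c} \;\approx\; 1.8
\]

and the significant-figure counts intended for the first question are presumably 3, 2, 3, and 1 for parts (a)–(d), respectively.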


Results

Student grades during five consecutive semesters of introductory chemistry were examined. During this time, the instructor, textbook, course content, and the order in which chapters were taught did not change. The grading was based on an absolute scale (not curved) and was derived exclusively from a similar number of quizzes and exams with similar levels of difficulty during the two-and-a-half-year study. Only during terms 3 and 4 was the daily class progress assessment implemented; during terms 1, 2, and 5 no assessments were used. As can be seen from Figure 2, the average withdrawal (W) frequency observed for freshmen during the semesters when class assessments were not administered was 26.7% (24/90). At Stockton College students are allowed to withdraw very late in the term (up until three weeks before classes end), after they have been tested several times. Generally, students that withdraw are failing the course, and so the combined failure and withdrawal frequency observed, in this case 34.5% (31/90), best represents the subset of students that was performing poorly. The total number of freshmen enrolled in terms without assessments was 90. In contrast, during the two semesters when the daily class progress assessments were administered, the withdrawal frequency decreased to 6.7% (4/60). While a slight increase in the failure frequency was observed, the combined failure and withdrawal frequency fell to 16.7% (10/60), half of what was observed without this technique. The total number of freshmen enrolled in terms with assessments was 60. Not surprisingly, the gain associated with the use of this technique was reflected in an improved performance by the marginal students, since an increased frequency of students earning C and D grades was observed. Also, the fact that A and B grade frequencies did not change significantly between the two groups (20.0% vs 16.7% for A grades, and 21.1% vs 21.7% for B grades) suggests that students who earn these grades were not being affected by the technique.

Figure 2 (bar chart; x axis: letter grade A–F and W; y axis: frequency, %). Freshman performance without assessments (black bars) and when daily class assessments were used (gray bars). Frequencies were calculated based on the performance of 90 freshmen during three semesters without assessments and 60 freshmen during two semesters with assessments. Withdrawals are designated by W.

To determine whether the drop in withdrawal frequency was statistically significant, a chi-square analysis was applied to the two-sample enumeration data shown in Table 1. The χ² value was determined by the formula

\[
\chi^2 \;=\; \sum \frac{\bigl(\lvert \text{observed} - \text{expected} \rvert \;-\; 0.5\bigr)^2}{\text{expected}}
\]

where the sum runs over the four cells of the 2 × 2 table and the 0.5 term is the continuity correction.

The χ² calculated was 8.202. The χ² value for α = 0.005 and df = 1 is 7.879; thus it can be said that the proportion of students that withdrew was smaller when the class assessment was used (p < .005). However, such a decrease would be meaningful only if there was no significant increase in the failure frequency. Therefore, it made more sense to place students that failed and withdrew in one group. Table 2 shows the data that were subjected to chi-square analysis. The χ² calculated was 4.872. The χ² value for α = 0.05 and df = 1 is 3.841; thus it can be said that the combined failure and withdrawal frequency (in other words, the number of students that were performing very poorly) decreased when the class assessment was used (p < .05).

Table 1. Freshmen Withdrawal Outcomes (expected values in parentheses next to the observed values)

Freshmen               Stayed in Class   Withdrew     Total
Without assessments    66 (73.17)        24 (16.83)    90
With assessments       56 (48.78)         4 (11.22)    60
Total                  122               28           150

Table 2. Combined Freshmen Failure and Withdrawal Outcomes (expected values in parentheses next to the observed values)

Freshmen               Passed            Failed or Withdrew   Total
Without assessments    59 (65.43)        31 (24.57)            90
With assessments       50 (43.62)        10 (16.38)            60
Total                  109               41                   150

Assessment sheets collected during term 4 were chosen for a more detailed analysis. Since the technique was designed so that students have the option of signing or anonymously returning the assessments, we could not always distinguish between freshmen and non-freshmen responses. Nevertheless, an examination of all responses from the class proved to be instructive. As shown in Figure 3, the fraction of students that returned assessments, either signed or anonymously, varied between 97.8% (assessment no. 2, at the beginning of the term) and 48.9% (assessment no. 19, toward the end of the term).
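For readers who want to reproduce these figures, the following short Python sketch (mine, not part of the original article) applies the continuity-corrected formula above to the observed counts in Tables 1 and 2; it recovers the reported χ² values to within the rounding of the expected frequencies used in the paper.

# Sketch: continuity-corrected chi-square for a 2 x 2 table of observed counts.
def yates_chi_square(table):
    """table is a 2 x 2 list of observed counts, [[a, b], [c, d]]."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # expected count for this cell from the row and column marginals
            expected = row_totals[i] * col_totals[j] / grand_total
            chi2 += (abs(observed - expected) - 0.5) ** 2 / expected
    return chi2

# Table 1: stayed vs withdrew, without vs with assessments
print(yates_chi_square([[66, 24], [56, 4]]))   # ~8.21 (the paper reports 8.202)
# Table 2: passed vs failed-or-withdrew
print(yates_chi_square([[59, 31], [50, 10]]))  # ~4.87 (the paper reports 4.872)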


The frequency with which students returned signed assessments declined over time, while the fraction of students returning assessments anonymously fluctuated considerably around 30%. The increase over time in the number of students that did not return assessments can be mostly accounted for by non-freshmen who continued to perform poorly and were withdrawing from class. During this particular term, 2/29 freshmen and 7/16 non-freshmen withdrew from class. In other words, the withdrawal frequency for freshmen during this semester, when the assessment technique was in use, was 6.9%, whereas for non-freshmen it was 43.8%. The poor performance of non-freshmen is beyond the scope of the current discussion, but it underscores the fact that freshmen and non-freshmen are two very distinct populations. It must be remembered that introductory chemistry is a course that students normally take during their first year in college. In any event, toward the end of the term, 20% of the total number of students (9/45) had withdrawn from class, which inflated the fraction of non-returned assessments (Figure 3). If one corrects for this, then the fraction of students that returned assessments, either signed or anonymously, varied between 98% and 61%. Although some students were choosing not to return their assessments, they presumably still reaped the benefit of the review process associated with them. While attendance was not mandatory, there were no obvious absenteeism problems that might have significantly contributed to the lower number of returned assessments. This attendance policy was in effect during both the assessment and non-assessment semesters. The lack of attendance records is the reason why the data in Figure 3 are based on the total number of students rather than just those in attendance.

Figure 3 (x axis: assessment number; y axis: students, %). Assessment returns from all students (including non-freshmen) during the second term the assessment technique was implemented. Solid line: students that returned signed assessments. Dashed line: anonymous assessments. Gray line: the sum of students that either did not return an assessment, were absent, or withdrew. 100% of the students are accounted for at each time point (assessment).
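As an illustration of that correction, here is a minimal arithmetic sketch (the head counts are my assumption, back-calculated from the percentages quoted above, and do not come from the article itself):

# Sketch: correcting a late-term return rate for students who had withdrawn.
total_enrolled = 45                              # 9/45 implies 45 students in total
returned_late = round(0.489 * total_enrolled)    # 48.9% of 45 -> about 22 returned copies
withdrawn_by_then = 9                            # 9 students had withdrawn by that point
corrected = returned_late / (total_enrolled - withdrawn_by_then)
print(f"{corrected:.0%}")                        # ~61%, matching the corrected figure in the text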


The types of errors observed for the questions in Figure 1 varied greatly. In response to the question on significant figures, students often say that measurements such as 0.0560 L and 0.002 g have 4 and 3 significant figures, respectively, or that 5.5 × 10⁴ km has 5 significant figures. In response to why K is more reactive than Li, a common answer was "because it is lower in the group," without explaining what this meant. When writing formulas for ionic compounds, or when asked to write the ions that make up such compounds, students often did not remember ion charges and had difficulties with the structures of polyatomic ions: they would split them into monoatomic ions. This was a chronic problem with the marginal students. When asked to use mole ratios to determine the formula Na2Cr2O7, students correctly arrived at the 1:1:3.5 ratios, but then some of them "approximated", as in NaCrO3 or NaCrO4. When asked to write the equilibrium expression for a reaction, some students would multiply the substance concentrations by their coefficients in the balanced equation, and when calculating the constant for the reverse reaction some would write the new expression and go through the whole calculation again. These are just a few examples, but in every case, these responses were obtained after class discussions that I thought had ensured such responses would not occur.
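The mole-ratio step that tripped up these students can be made concrete with a small sketch (hypothetical code, not taken from the article): divide each amount by the smallest one, then clear the fractional subscript instead of rounding it away.

# Sketch: empirical formula from measured mole amounts, as in the Na/Cr/O question of Figure 1.
from fractions import Fraction
from math import gcd

moles = {"Na": 0.761, "Cr": 0.763, "O": 2.68}

smallest = min(moles.values())
ratios = {el: n / smallest for el, n in moles.items()}          # ~1 : 1.003 : 3.52

# Round each ratio to a simple fraction, then scale so every subscript is a
# whole number (1 : 1 : 3.5  ->  2 : 2 : 7), rather than "approximating".
fracs = {el: Fraction(r).limit_denominator(4) for el, r in ratios.items()}
lcm = 1
for f in fracs.values():
    lcm = lcm * f.denominator // gcd(lcm, f.denominator)
subscripts = {el: int(f * lcm) for el, f in fracs.items()}
print(subscripts)   # {'Na': 2, 'Cr': 2, 'O': 7}  ->  Na2Cr2O7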

Students that signed their responses were more likely to be correct when answering their assessments than those students responding anonymously (Table 3). Conceivably, students responding anonymously did so because they were unsure of themselves, a notion that is consistent with the higher fraction of incorrect responses observed in this group. Also, freshmen and non-freshmen were equally likely to sign their responses. I found that on average 73.9% (315/426) of the signed responses came from freshmen. Since freshmen constitute about 70% of the class, signing a response appears to follow the class demographic.

Table 3. Assessment Answers (% of responses)

               All Students                            Freshmen
Answers        Signed (n = 426)  Anonymous (n = 305)   Signed (n = 315)
Correct        62.4              51.8                  62.2
Incorrect      37.6              48.2                  37.8


Official student evaluations of teaching suggested that students who remained in class (after the withdrawal deadline) had a more positive attitude toward the course when the assessments were used. Without the assessments, 50.9% of respondents rated their "course as a whole" experience with a score of 6 or 7 (on a 1–7 scale, 7 being the highest possible score). When assessments were used, a larger proportion of students (68.1%) rated their experience with these high scores. Interestingly, student ratings for the "instructor's overall performance" category remained constant, with 86.1% of students giving the instructor scores of 6 or 7 when no assessments were used and 88.7% doing so when assessments were used. Again, because of the anonymous nature of these evaluations, data could not be obtained exclusively from freshmen. All we know is that during the semesters when the assessments were not used, an average of 67.9% of all the students who stayed in class and answered the evaluations were freshmen, whereas when the assessments were used, 74.8% were freshmen.

Discussion

The simplest interpretation of these results is that freshman students experiencing difficulties with the material were being helped by this technique. Being formally questioned about material that was just discussed in class, each and every time the class met, placed an unfamiliar pressure on students. As the term progressed, students appeared to welcome the opportunity to check their understanding of what was taught in each class, and, of course, students were not the only ones to get feedback. This technique has helped my teaching as well. I now have a way of telling immediately, at the end of each class, whether students are understanding my explanations and examples and of finding out where students are having difficulties. This feedback is immediate and is obtained from the majority of students attending class. These assessments attempt to survey all the students in the class, not just the more vocal ones, as occurs when prompting the class for questions. The inclusive nature of this technique cannot be emphasized enough; it is one of its most important features and has helped me more than anything else in taking the pulse of the class and gaining insight into its progress.

While this technique is not novel conceptually (2, 3), it has some distinguishing features. For instance, students are being asked questions about what just transpired in class and are not "studying" for the assessment. When I first implemented this technique, I was surprised at how very simple questions would still reveal misconceptions or misunderstandings in a significant proportion of the students. Even after classes in which I felt I had explained something very well and thoroughly, there were students for whom the answer to the assessment was not obvious. At the other end of the spectrum, a student would occasionally complain that the questions on the assessments were too easy compared to questions on tests.

However, the point of the technique was to ensure that all students in the class understood the fundamentals before proceeding to the next discussion. Indeed, I have yet to receive a 100% correct response to an assessment. This has taught me never to take anything for granted.

The number of student errors on these assessments can vary greatly. There are times when 90% of the respondents will get the answer correct, but there are also plenty of occasions when one sees 65%, 50%, or even only 10–20% correct answers. There are times when poor results are predictable; however, there have been a number of occasions when such results came as a total surprise. This was critical information that I did not have beforehand (4). A very low comprehension level suggests that the information was not properly conveyed to the class or that I incorrectly assumed the class knew background material. Therefore, these daily class assessments are opportunities for clarification in the form of new explanations as well as for reinforcement of skills and previously covered material, particularly for the less vocal students. Presumably, this technique helped raise the baseline level of understanding for a larger proportion of students than was the case without the assessment.

There is great diagnostic value in analyzing a student's incorrect answers, as discussed at a recent conference by Gonsalves et al. (5). Reflecting on why a student provided an incorrect response to an assessment can serve to improve both teaching and learning in a way that tests cannot. It also allows for early intervention, so that information transfer can be modified to ensure that a misconception does not gain a foothold in the student's mind and interfere with the student's learning of further material.

Admittedly, this daily class progress assessment is just a first step in a multistep process of improving student retention and helping the marginal students meet a minimum proficiency level to pass the course. Clearly, additional teaching strategies will have to be tested to determine whether student learning can be further improved (6, 7); after all, these are still marginal students. Yet even a modest improvement, as described herein, indicates that these marginal students are not refractory to classroom strategies aimed at improving student learning. Strategies for increasing interaction among students will be important to test, since these have been suggested to improve learning (2, 8, 9). However, since introductory chemistry is a course that services many majors (students majoring in chemistry are rare in these classes) and since this first-semester chemistry course will be the only chemistry many students will have at Stockton, the assessment technique discussed here can already be viewed as fulfilling a useful role.

Students looked forward to the class assessments: on those rare occasions when I was unable to administer one, a number of students lightheartedly complained. Their feeling of anticipation at finding out how they did on an assessment was often quite evident, suggesting that this technique may be helping students develop self-assessment skills (10). End-of-term surveys revealed that students believed the assessments helped them gauge their progress at understanding the material, and they strongly endorsed them. Many said that knowing an assessment would be coming at the end of class helped them pay attention, and many also suggested that continuous feedback should be a regular feature in all classrooms.

My hope was that the student response to an incorrect answer on an assessment would be additional work outside of class or a consultation with the instructor if the in-class explanation of the incorrect answers remained unclear.


Anecdotal evidence suggests this was the case. However, a subset of students remained who still withdrew from or failed the course. More often than not, these students either lacked the needed quantitative skills or lacked the time (or maturity) to study, owing to ambitious credit loads, full-time jobs, or busy extracurricular activities.

My expectation was that eliminating the class assessment would result in higher withdrawal frequencies again. Indeed, during the fifth term, when I decided to test this notion, I found this to be the case. My (presumably) more experienced teaching did not help the marginal freshman students the way the assessment technique did during terms 3 and 4. As one anonymous reviewer stated: "For the marginal students, the review and reflection of the daily progress assessment provides a replacement, in part, for the study routine that is the staple of the more able student." Indeed, I consider poor study skills the biggest problem I face. While the insights I gained during the assessment semesters became part of my teaching during this fifth term, they were not as effective as the student reflection and review that took place when assessments were in use.

Summary

The benefits derived from this technique include: (1) encouraging student reflection and review; (2) identifying, before formal testing, areas where students are having difficulties; (3) assessing the effectiveness of information transfer; and (4) providing the unrelenting, nongraded feedback that students came to appreciate. Admittedly, this technique has some impact on the instructor's ability to cover all the material in the syllabus, but it only requires that a few adjustments be made toward the end of the course to ensure all the material is covered. I plan to use rapid feedback strategies in every course I teach because they provide valuable information on how well I am teaching and how students are learning. I can no longer see myself teaching without them.


Acknowledgments

I wish to thank Ed Paul, Kelly Keenan, Ellen Clay, Jamie Cromartie, and Yitzhak Sharon for critically reviewing the manuscript and the anonymous reviewers for their helpful suggestions. I also thank Stockton's Institute for the Study of College Teaching for its support of first-year faculty members. Part of this work was presented at the 5th Annual Lilly Conference on College and University Teaching (April 2001) at Towson University, Towson, MD.

Literature Cited

1. Angelo, T. A.; Cross, K. P. Classroom Assessment Techniques: A Handbook for College Teachers, 2nd ed.; Jossey-Bass: San Francisco, CA, 1993.
2. Holme, T. J. Chem. Educ. 1998, 75, 574–576.
3. Frellich, M. B. J. Chem. Educ. 1989, 66, 219–223.
4. Steadman, M. Using Classroom Assessment to Change Both Teaching and Learning. In Classroom Assessment and Research: An Update on Uses, Approaches, and Research Findings; Angelo, T., Ed.; Jossey-Bass: San Francisco, CA, 1998; pp 23–35.
5. Gonsalves, S.; Ince, E.; Kubricki, S.; Mathis, S. Student's Incorrect Answers as Diagnostic Teaching-Learning Opportunities: A Discipline Based Study; paper presented at the Lilly Conference on College and University Teaching; University of Maryland, 2000.
6. Cottell, P.; Harwood, E. Do Classroom Assessment Techniques (CATs) Improve Student Learning? In Classroom Assessment and Research: An Update on Uses, Approaches, and Research Findings; Angelo, T., Ed.; Jossey-Bass: San Francisco, CA, 1998; pp 37–46.
7. Duffy, D. K.; Duffy, J. J.; Jones, J. W. Journal on Excellence in College Teaching 1997, 8, 3–20.
8. Lemke, J. Talking Science: Language, Learning and Values; Ablex: Norwood, NJ, 1990.
9. Johnson, D. W.; Johnson, R. T.; Smith, K. A. Change 1998, 30, 27–35.
10. Wiediger, S. D.; Hutchinson, J. S. J. Chem. Educ. 2002, 79, 120–124.
