How Will Classroom Response Systems “Cross the Chasm”?

James MacArthur*

Science Department, Western Nebraska Community College, Scottsbluff, Nebraska 69361, United States

ABSTRACT: A recent survey suggests that classroom response system use by chemistry faculty still remains in the early adopter stage. This commentary provides recommendations on what early adopters of classroom response systems might do to convince their more pragmatic colleagues of the benefits of classroom response systems.
KEYWORDS: First-Year Undergraduate/General, Curriculum, Computer-Based Learning, Student-Centered Learning
Recent discussion in this Journal has focused on the application of the technology adoption cycle to the use of classroom response systems (“clickers”) in college-level chemistry courses.1,2 This commentary provides some guidelines for chemical education researchers on what sorts of research might be helpful in order to cross the chasm from the early adopters stage into the majority.
■ COMPARATIVE USE OF CLICKERS IN COLLEGE SCIENCE COURSES

In 2000, the National Research Council identified classroom response systems as a promising trend in education.3 A number of applications of clickers in various fields saw publication in the years that followed.4 However, a recent publication suggests that, over a decade later, clicker use in college chemistry courses still remains in the early adopters stage.1 According to the technology adoption cycle,2,5 illustrated in Figure 1 (as originally published in ref 2), a fundamental difference exists between the way users of a technology in the early market (innovators and early adopters) think about a technology and the way that users in the mainstream market (the rest) think about a technology.

Figure 1. Graphical representation of the technology adoption lifecycle, illustrating the chasm between early adopters and early majority, as originally published in ref 2. Copyright 2010 American Chemical Society and Division of Chemical Education, Inc.

Why has a technology that seemed so promising 10 years ago not yet entered the mainstream in college chemistry courses? It would appear that, in order to begin seeing a majority of chemistry faculty members using clickers in their classrooms, chemical educators interested in the use of clickers must be able to demonstrate “measurable learning gains and improvement in their own teaching practice”.2 How well have those of us in the early adopters stage of clicker use addressed this practical concern held by the majority of faculty members? Recent publications on the use of clickers to clarify student understanding,6 as a research tool in the classroom,7 and for other novel applications8−10 could be very useful to innovators and early adopters, yet they might not be sufficient to encourage the majority of chemists to adopt the technology. Ongoing advances in the technology11,12 may eventually make the chasm smaller, but they might not provide the motivation for the majority of chemistry faculty to jump across it. How will early adopters of clickers make a good case to the majority that they should make that jump?

It might be easy to despair that clickers will be yet another technology that falls by the wayside, were it not for the fact that they have been embraced in other fields. A Web-based survey conducted over a decade ago, mostly among physicists, revealed both a larger number of adopters of peer instruction and a higher percentage of positive responses toward it13 than the recent chemistry survey found among adopters of clicker technology1 (384 vs 288 adopters, and over 80% vs 23% positive responses). Although peer instruction existed prior to the adoption of clickers, some have claimed that the adoption of clickers and peer instruction,
at least in physics, have gone hand in hand,14,15 and it appears to be common among physicists to combine the two when discussing their teaching practices.16 Clearly, the nature and goals of these two surveys are not the same; however, the comparison does highlight a striking difference between the convergence of so many physicists on the use of peer instruction (and, presumably, with it clickers) 10 years ago and the lack of such an adoption of clickers by chemists currently. Likely many factors contribute to this stark difference, but in the context of looking for the “measurable learning gains and improvements in their own teaching practice” necessary to make the transition from visionaries to pragmatists,2 perhaps it is the lack of a pedagogy to embrace along with the clickers that has hindered a more widespread adoption by chemists. Recent work suggests that chemists have not necessarily converged on a pedagogy to accompany clicker use the way that physicists appear to have,17 and peer instruction is mentioned in the physics education literature much more often than in the chemistry education literature.4 Perhaps chemists will eventually embrace peer instruction, or perhaps they will converge on another pedagogy. But peer instruction has been around for 20 years now, so one must wonder why chemists would suddenly become interested in this pedagogy when they have shown only tepid interest in it so far.
■ DETERMINING PEDAGOGIES CONDUCIVE TO TEACHING WITH CLICKERS

So how do those of us in the early adopters stage go about showing these “measurable learning gains and improvements in our own teaching practice” to our more pragmatic colleagues? One of the difficulties inherent in this sort of research is figuring out how to produce measurable learning gains without some sort of controlled experiment that, even if executed perfectly, results in hundreds of students receiving an inferior education by way of random (or even pseudo-random) assignment to the control group. Examples can be found in the literature of these sorts of experimental studies that focus on the technology more than on the pedagogy that accompanies it, and their results are not going to convince pragmatists to cross the chasm.18,19

No technology by itself will magically make the classroom better, although useful technologies will allow students to learn in ways that were not possible before. If the only thing that changes is the technology, then the technology is likely not being put to optimum use. As an example, cellphone technology might not appear any more useful than landline phones if one were only to use a cellphone in the ways that landline phones might be used. However, if one were to use a cellphone to ease communication while traveling, one would find a use for the technology that was not available with previous technologies, and that pragmatic users of telephones would find useful. Similarly, if more pragmatic chemists could be shown that a new technology allows teaching and learning in ways that were not possible before, those chemists would be more likely to adopt the technology.

Perhaps instead of attempting these classroom experiments, research should focus on developing tools for understanding the nature of what is going on in the clicker classroom in the first place, and thereby find a path toward this optimum use. Bruck and Towns provided the basis for one such tool by categorizing each clicker question used during one semester into several pre-existing taxonomies.20 More recently, Woelk has proposed a taxonomy for clicker questions, although he does not provide a measure of how often they were used in the course.21 Figuring out which questions we are asking is a necessary first step in addressing the bigger question of what sorts of questions we should be asking.

A cruder and quicker way of addressing the issue of “what kinds of questions are we asking?” is simply to categorize the clicker questions from a semester by the percentage of correct answers each received. Figure 2 is a sample from a preparatory chemistry course the author taught recently, and it was easily assembled in a few hours. This course had an enrollment of approximately 100 students, and the students were encouraged to discuss their answers with each other before making their selections. They received one point for answering and two points for answering correctly, and their clicker responses accounted for 5% of the total grade in the course.

Figure 2. Distribution of correct responses to clicker questions over the course of a semester in a preparatory chemistry course.

What does this histogram tell us about how the course was taught? Where should the peaks in the histogram be? Is it acceptable that the distribution is bimodal? It is not clear that there are any answers to these questions yet; however, analyzing this sort of data might at the very least result in some useful changes to one’s course. A rough look at the data in Figure 2 suggests that the cluster of questions at 20% correct or lower should probably be reconsidered or revised. A comparison of the questions around the 85−90% peak with those around the 60% peak would reveal even more, and it raises the question: is one of these peaks preferable? Peer instruction users would say the 60% peak is preferable, because the conversations between students while they are considering a clicker question are not as useful if the question is not sufficiently challenging.22 But what little has been published on the types of questions used by chemistry instructors suggests that things may be different in how chemistry is taught.17,20,23
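For early adopters who want to perform this sort of quick audit on their own courses, the following is a minimal sketch of how the data behind a plot like Figure 2 could be assembled. It assumes the clicker software can export one row per student response to a CSV file; the file name and the column names question_id, student_id, and correct are illustrative assumptions, not the export format of any particular response system.

```python
import csv
from collections import defaultdict

# Assumed export format (not from any specific clicker system): one row per
# student response with columns "question_id", "student_id", and "correct"
# (1 for a correct answer, 0 otherwise).
def percent_correct_by_question(path):
    """Return {question_id: percent of responses that were correct}."""
    answered = defaultdict(int)
    correct = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            qid = row["question_id"]
            answered[qid] += 1
            correct[qid] += int(row["correct"])
    return {qid: 100.0 * correct[qid] / answered[qid] for qid in answered}

def binned_distribution(percents, width=10):
    """Count questions per percent-correct bin; the top bin includes 100%."""
    bins = defaultdict(int)
    for p in percents.values():
        bins[min(int(p // width) * width, 100 - width)] += 1
    return dict(sorted(bins.items()))

def clicker_points(path):
    """Per-student clicker points: 1 for answering, 2 for answering correctly
    (the grading scheme described in the text)."""
    points = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            points[row["student_id"]] += 2 if int(row["correct"]) else 1
    return dict(points)

if __name__ == "__main__":
    percents = percent_correct_by_question("clicker_responses.csv")

    # Text analogue of the Figure 2 histogram.
    for low, count in binned_distribution(percents).items():
        print(f"{low:3d}-{low + 9:3d}% correct: {'#' * count} ({count})")

    # Questions at or below 20% correct are candidates for revision.
    hard = sorted(qid for qid, p in percents.items() if p <= 20)
    print("Questions to reconsider or revise:", hard)
```

Binning by 10% keeps the output directly comparable to the histogram in Figure 2, and the same export can be used to tally the one-point/two-point grading scheme described above.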
If early adopters of clickers take the time to analyze this sort of data that they can easily assemble in their own courses, it might begin to shed some light on what sorts of pedagogies could be adopted by the mainstream for teaching with clickers. Many peer-reviewed articles and presentations at national meetings contain little or no information on the nature of the questions asked with clickers. And without knowing this, it is hard to know what kind of learning might be going on in those classrooms. If the utility of this technology is that it provides the instructor with the opportunity to ask the class to answer questions instantaneously, then perhaps the more pragmatic chemists would also like to know what kinds of questions they should be asking. The literature in chemical education does not have a clear answer to this, whereas the literature in physics education did. The differences in adoption of the technology between the two disciplines should not be all that surprising.
■ SUGGESTIONS FOR INCREASING ADOPTION OF CLICKERS IN CHEMISTRY COURSES

The following are recommendations for early adopters if they wish to convince their more pragmatic colleagues that clickers can provide “measurable learning gains and improvement in their own teaching practice”.

• Chemical education researchers should focus more on understanding what goes on in the clicker classroom instead of conducting experimental studies in which the technology use is all that changes between classrooms.
• Early adopters of clickers should adopt a reflective practice on their selection of clicker questions and share what they learn from this with their more pragmatic colleagues.
• Both researchers and practitioners should engage in productive dialogue about the sorts of questions raised during research. One such question is this: What sorts of clicker questions should we be asking?
• As a pedagogical model to accompany clickers begins to emerge, early adopters should promote this model to their more pragmatic colleagues.

It is entirely possible that clicker use in chemistry courses will never cross the chasm from early adopters to early majority. Perhaps a significant number of chemists will find other ways of achieving a student-centered classroom for the course sizes they typically deal with. However, the questions that early adopters of clicker technologies could easily address might be quite beneficial in convincing pragmatic colleagues of their ability to bring about improvements in their own teaching practices.
■ AUTHOR INFORMATION

Corresponding Author
*E-mail: [email protected].

Notes
The authors declare no competing financial interest.
■ REFERENCES

(1) Emenike, M. E.; Holme, T. A. Classroom Response Systems Have Not “Crossed the Chasm”: Estimating Numbers of Chemistry Faculty Who Use Clickers. J. Chem. Educ. 2012, 89 (4), 465−469.
(2) Towns, M. H. Crossing the Chasm with Classroom Response Systems. J. Chem. Educ. 2010, 87 (12), 1317−1319.
(3) Bransford, J.; Brown, A.; Cocking, R., Eds. How People Learn; National Academy Press: Washington, DC, 2000.
(4) MacArthur, J.; Jones, L. Chem. Educ. Res. Pract. 2008, 9, 187−195.
(5) Moore, G. A. Crossing the Chasm; Collins Business: New York, 2002.
(6) King, D. J. Chem. Educ. 2011, 88, 1485−1488.
(7) Bunce, D. M.; Flens, E. A.; Neiles, K. Y. How Long Can Students Pay Attention in Class? A Study of Student Attention Decline Using Clickers. J. Chem. Educ. 2010, 87 (12), 1438−1443.
(8) Jones, L.; MacArthur, J.; Akaygun, S. CEPS J. 2011, 1, 117−135.
(9) Flynn, A. B. Developing Problem-Solving Skills through Retrosynthetic Analysis and Clickers in Organic Chemistry. J. Chem. Educ. 2011, 88 (11), 1496−1500.
(10) Straumanis, A. R.; Ruder, S. M. A Method for Writing Open-Ended Curved Arrow Notation Questions for Multiple-Choice Exams and Electronic-Response Systems. J. Chem. Educ. 2009, 86 (12), 1392−1396.
(11) Williams, A. J.; Pence, H. E. Smart Phones, a Powerful Tool in the Chemistry Classroom. J. Chem. Educ. 2011, 88 (6), 683−686.
(12) Tremblay, E. J. Comput. Math. Sci. Teach. 2010, 29, 217−227.
(13) Fagen, A.; Crouch, C.; Mazur, E. Phys. Teach. 2002, 40, 206−209.
(14) Lasry, N. Phys. Teach. 2010, 46, 242−244.
(15) Burnstein, R.; Lederman, L. Phys. Teach. 2001, 39, 8−11.
(16) Wieman, C. Change 2007, 39, 9−15.
(17) MacArthur, J.; Jones, L.; Suits, J. Faculty Viewpoints on Teaching Large-Enrollment Science Courses with Clickers. J. Comput. Math. Sci. Teach. 2011, 30 (3), 251−270.
(18) Bunce, D. M.; VandenPlas, J. R.; Havanki, K. L. Comparing the Effectiveness on Student Achievement of a Student Response System versus Online WebCT Quizzes. J. Chem. Educ. 2006, 83 (3), 488−493.
(19) Patterson, B.; Kilpatrick, J.; Woebkenberg, E. Nurse Educ. 2010, 30, 603−607.
(20) Bruck, A.; Towns, M. Chem. Educ. Res. Pract. 2009, 10, 291−295.
(21) Woelk, K. Optimizing the Use of Personal Response Devices (Clickers) in Large-Enrollment Introductory Courses. J. Chem. Educ. 2008, 85 (10), 1400−1405.
(22) Crouch, C.; Mazur, E. Am. J. Phys. 2001, 69, 970−977.
(23) Asirvatham, M. Clickers in Action: Increasing Student Participation in General Chemistry; W. W. Norton and Company, Inc.: New York, 2009.