EDITORIAL
Striking a Balance with Assessment

Norbert J. Pienta*, Editor-in-Chief
Department of Chemistry, University of Iowa, Iowa City, Iowa 52242-1294, United States

ABSTRACT: Assessment has become more common for courses and programs, including the need to satisfy requests from one's department and institution. It has also led to pressure on some individuals when the stakes get very high.

KEYWORDS: General Public, Testing/Assessment

Published: July 13, 2011
The evidence of an effective professional meeting is that one returns to work inspired and ready to integrate new knowledge. I recently had the opportunity to participate in a Gordon Research Conference on Chemical Education Research and Practice: Foundations and Frontiers.1 I learned about the value of Gordon Conferences as a new assistant professor 30 years ago: a small, highly engaged group of attendees and a unique atmosphere that fosters discussion of the latest scholarship make these meetings especially valuable to those who attend. A brief history of the chemical education version of this meeting has been reported in this Journal by Towns.2 But because attendees promise not to divulge the details of the talks and discussions, I can't give an accounting of the events that led to my concerns about assessment, or the lack of it in certain cases.

Although it's really not that simple, it seems that chemical educators fall into two groups: those who espouse the value of assessment and those who seem to be in denial about its potential role. In this context, assessment is a measure of the nature or quality of someone or something. It is not simply about testing students; testing can be a part of assessment, yet the two are not the same. Assessment is a valuation and, as such, requires both objective and subjective components. To be optimally useful, the objective portions (the data) should include information that the chemical education community, at least collectively, might agree upon. The value of the subject or item being assessed requires a judgment and includes standards appropriate to a constituency or institution. In a chemical education context, assessment is what one does to find out what is happening in a component, course, curriculum, or program.

Consider a claim: research is good for undergraduate chemistry majors at University X. But how do we know that? What specific information or data support or refute that notion? Is it as good for first-year students as for those in their fourth year? And, equally important, how does knowing this information shape our decisions about how and when we provide this opportunity to our students?

Several years ago, I served on a steering committee in preparation for the decennial reaccreditation of my institution. The Higher Learning Commission (HLC), a part of the North Central Association of Colleges and Schools (NCA), is one of six regional accreditors of postsecondary educational institutions in the United States.3 The HLC values assessment, and serving on the institutional steering committee that conducted the internal review gave me considerable insight into the implementation of assessment. The "request" from the Provost's office for an
assessment plan from each department and program resulted in a mixture of responses. On the negative side was a small group of faculty members who were sure that some administrative "bean counter" would use the resulting information to stifle their programs. But there were also those who embraced the opportunity, and even a few who reached the level of epiphany. For example, the English general education literature program and the Rhetoric Department (which teaches all the first-year students) recognized that they had not aligned their efforts regarding success in writing "in a few years" (or maybe since the last reaccreditation). Drafting individual assessment plans expedited their discussions and planning.

In the Chemistry Department, the discussion might have sounded familiar to many of you: we know that we have a great undergraduate program because our students get into, and succeed in, the best graduate programs. Fortunately, the conversation turned appropriately introspective. If research is good, why don't we require it? How will we tell when a student has had a positive learning experience from independent study? And if we use ACS Examinations Institute standardized exams to confirm the content knowledge of entering graduate students, why don't we do the same with our undergraduate students before they leave? And on and on. My colleagues quickly got beyond their suspicions and accepted that we should be doing this for ourselves.

And assessment is not only about the items one chooses for the plan; it also involves the process. As scientists, we are taught to question our experiments and data. We always ask, "How do I really know that?" We should do the same about our teaching. It's wonderful that we believe our students "like" what we do, but that is not sufficient. This is part of our collective responsibility as educators. We must learn what to do and how to do it, and then embrace a formative process to make improvements.

This brings me back to the Gordon Conference on Chemical Education Research and Practice: Foundations and Frontiers. Although I cannot share the context or the evidence to support my assertions, there is reason for concern. A few too many people seemed to be saying that they didn't know much about assessment, how to do it, or even why they should. They were not the "chem ed guy", as if saying so somehow absolved them of this duty. But data-based assessment can and should be part of maintaining quality in education and in our professional development
as educators. There are plenty of examples and a rich literature: the tools and methods are discoverable, and many individuals and institutions have publicly posted their plans and materials, including the Chemistry assessment plan at the University of Iowa.4 It is time we got our community beyond denial. We should not be talking about whether we do it, but about best practices.

No system is without some flaws. Testing has become very common, and in politicized versions of "No Child Left Behind", accountability can be tied to high-stakes testing. Unfortunately, widespread cheating on 2009 standardized tests in the Atlanta Public Schools was reported in July 2011.5 Of 56 schools that were examined, cheating was apparently discovered in 44 of them, with 178 teachers and principals implicated. During the investigation, many of the individuals admitted to cheating, while others invoked the Fifth Amendment to avoid self-incrimination. Most of them also admitted that they resorted to this behavior because the pressure and intimidation were so high.

I am not trying to justify the behavior of those who cheated, nor am I condemning any single group. Test scores often form the basis for teacher and principal evaluation and pay, despite the absence of any compelling research data showing that test-driven reform has made an impact on student achievement in the last decade. The caveats here are that testing should not be considered equivalent to assessment and that the data should be interpreted carefully. Student content knowledge is only one indicator of educational success. Standardized exams are inexpensive and easy to administer, but they are not a substitute for a more complete study that examines more facets of student learning. And the final lesson is that, as educators, we should take the lead in defining the criteria that belong in a more comprehensive plan. And we can and should start with our own scholarship.
AUTHOR INFORMATION

Corresponding Author
*E-mail: [email protected]
REFERENCES

(1) Information about the Gordon Research Conferences and the specific conference on chemical education can be found at http://www.grc.org/programs.aspx?year=2011&program=chemedu (accessed Jul 2011).
(2) Towns, M. H. A Brief History of the Gordon Research Conference in Chemistry Education Research and Practice. J. Chem. Educ. 2010, 87 (11), 1133–1134; DOI: 10.1021/ed100085f.
(3) Information about the Higher Learning Commission can be found at http://www.ncahlc.org/ (accessed Jul 2011).
(4) The Department of Chemistry assessment plan for undergraduate majors can be found at http://www.uiowa.edu/reaccreditation/outcomes/index.html (accessed Jul 2011). (Scroll to the Draft Assessment Plans of UI Departments and select Department of Chemistry.)
(5) Strauss, V. Probe: Widespread Cheating on Tests Detailed in Atlanta. The Washington Post, July 5, 2011. http://www.washingtonpost.com/blogs/answer-sheet/post/probe-widespread-cheating-on-tests-detailed-in-atlanta/2011/07/05/gHQAURaczH_blog.html (accessed Jul 2011).