Editorial pubs.acs.org/jchemeduc

The Slippery Slope of Student Evaluations

Norbert J. Pienta*
Department of Chemistry, University of Georgia, Athens, Georgia 30602-2556, United States

ABSTRACT: The proper and improper uses of student evaluations are discussed, with an introduction and summary of the literature on the subject.

KEYWORDS: General Public, Testing/Assessment

Published: February 14, 2017

The academic world houses many examples of assessment. Most chemical educators would probably equate the word “assessment” with the realm of student testing, but university-level academia also has protocols in place for the evaluation of institutions of higher learning via accreditation and reaccreditation, for departmental and program evaluation, for the tenure process and annual merit evaluations of faculty, and even for the peer review of submissions to scholarly journals like this one. Your Editor was recently reminded about student evaluation of teaching while assembling a packet of materials for a five-year post-tenure review.

The pages of this Journal comprise creative scholarship and research about chemistry education, as has been the practice for over 90 years. This body of knowledge is substantial: it is already a complex subject, and it continues to grow. Although confession is good for the soul, your Editor should perhaps not admit (and in writing, no less) that he possesses only a portion of this knowledge. What should we teach? What are the best strategies? Specifically, what does the evidence tell us? In the midst of these questions about teaching and learning chemistry, at the end of each term we subject ourselves to the ritual of student evaluations. At the risk of sounding cynical, consider that general chemistry instructors ask 17- and 18-year-old, first-semester students to comment on their “teaching”. Thus, each semester several hundred of these students are asked about my ability to teach (and/or to help them learn). What do these students actually know about how they learn, or about the extent to which the instructor is helping them succeed at it? Many articles have been published about student evaluations in this Journal1−7 and elsewhere.8−10 This editorial is not intended as a review; it is meant simply to bring attention to the complexity of the issue and to encourage the appropriate and thoughtful use of evaluations.

Student evaluations were first reported in the 1920s, and they have found widespread use since then. The 1970s and 1980s saw considerable evaluation of their strengths and shortcomings. For example, Kulik and McKeachie8 reviewed the evaluation of teachers in higher education: inferences made about the quality of a teacher’s performance by that teacher’s students, colleagues, or supervisors, or by the teacher themselves, and a second category of evaluation based on the performance of the students. Historically, student ratings focused on “empathy and professional maturity” and broadened in scope to include behavior, communication, organization, and academic emphasis as data collection expanded from as few as 10 items to as many as 150 in the corresponding surveys. Since the early 1970s, it has been clear that a simple measure of a complex set of teacher behaviors and activities is not possible. Differences exist in the students, in the teacher, and in the interactions between the teacher and the students. Demographic differences among the students (e.g., gender, major, college year) all play a role, although such correlations rarely account for a substantial amount of the variance. Likewise, students’ cognitive, personality, and performance characteristics have been examined, particularly by comparing information gathered early and late in a course. Instructors of large introductory courses consistently receive lower scores than those of smaller, more advanced courses; likewise, elective courses yield higher scores than required ones. Subject matter clearly plays a role, and we chemical educators are all nodding in agreement at this stage. At the time, Kulik and McKeachie8 documented the importance of using several other means to collect evaluative information about a teacher’s performance, including ratings by colleagues, by administrators, and by the instructor in question. Cohen’s meta-analysis, published in 1981, showed that despite disagreement about the multitude of variables and effects, student evaluations had significance and value.9

Several decades later, Algozzine et al.10 included the subtitle “a practice in search of principles” in their review of the status of student evaluation of teaching. That subtitle speaks to some of the more serious concerns (ref 10, p 135):

Originally intended to represent private matters between instructors and students regarding strengths and weaknesses, course evaluation information often has been put to another, more controversial use: to provide input in the annual evaluations, as well as for salary, promotion, and tenure decisions. It is one of the most prevalent measures of teaching effectiveness, despite continuing arguments against the practice.

Their review points out the multidimensional nature of teaching and the folly of trying to use a single criterion given that there is no widely accepted measure of effectiveness. A single overall score cannot provide information or potential feedback about specific behaviors. Formative assessment (i.e., for improving instructional practices) and summative assessment (i.e., for merit or employment decisions) must be considered separately and differently. Furthermore, for the latter summative purposes, they advise the use of “crude judgments”
such as “exceptional, adequate, and unacceptable”.10 Course characteristics (e.g., class size, course requirements and level, topic difficulty), student characteristics (e.g., previous experiences, ability level, gender, interest in the subject, attitude similarities with the teacher), and instructor characteristics (e.g., gender, communication and expressiveness, experience, rank) have all been implicated; some are quite controversial. Effects of the evaluation procedure and instrument, and of students’ grades or standing in the course, have also been examined. The major recommendations are to use a variety of procedures to determine success in university teaching and to use the process to improve teaching.

As the Editor and author of this editorial, I will offer some personal experience (and empiricism). Based on an ill-perceived notion of the expertise and near-infallible abilities of someone in this position on all things related to chemistry education, I am often asked to write letters of recommendation for tenure and promotion. For those faculty whose record is dominated by, or composed entirely of, evaluations related to teaching, it is not possible for me (or anyone else, for that matter) to declare someone a great teacher, or even to begin the process, without much more information. (It is not my interest to solicit more activities of this kind; as magnanimous as the Editor might be, attempting to review a faculty member based only on global student evaluation scores does a disservice to that individual.) While serving on a college promotion and tenure committee, I was given a promotion report in which the candidate claimed great progress across sequential offerings of a course because the global evaluation score went from 4.17 to 4.29. My comment was that, when the scores are reduced to one significant figure, that comparison would be hard to make.
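The significant-figure point can be made concrete with a short Python sketch (illustrative only; the two scores are the values quoted above, and the one-significant-figure reduction follows the suggestion in the text):

```python
# Global evaluation scores from two sequential offerings of a course.
before, after = 4.17, 4.29

# Reduce each score to one significant figure. Python's "g" format
# specifier rounds to the requested number of significant digits.
before_1sf = f"{before:.1g}"
after_1sf = f"{after:.1g}"

print(before_1sf, after_1sf)        # both reduce to "4"
print(before_1sf == after_1sf)      # True: the apparent improvement vanishes
```

At one significant figure both scores become simply 4, so the claimed term-over-term improvement is not resolvable at that precision.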
My significant-figure suggestion was apropos of the literature’s finding that such scores support only “crude” judgments. Although my own post-tenure review is not yet complete, I hope that, in my “Lake Wobegon-esque”11 view of these matters, I will be found to be above average. In closing, I encourage all of you in academia to educate yourselves, your colleagues, and your institutions about both the potential value and the limits of student evaluations.



AUTHOR INFORMATION

Corresponding Author
*E-mail: [email protected].

ORCID
Norbert J. Pienta: 0000-0002-1197-6151

Notes
Views expressed in this editorial are those of the author and not necessarily the views of the ACS.

Norbert J. Pienta is Professor and Director of General Chemistry at the University of Georgia, where he teaches and conducts research and scholarship about the teaching and learning of chemistry, devising methods, instruments, and analytics to characterize student learning and increase student success. He also currently serves as the Editor-in-Chief of the Journal of Chemical Education.

REFERENCES

(1) Schaff, M. E.; Siebring, B. R. A Survey of the Literature Concerning Student Evaluation of Teaching. J. Chem. Educ. 1974, 51 (3), 150−151.
(2) Hedrick, J. L. The Keller Plan and Student Evaluation. J. Chem. Educ. 1975, 52 (1), 65.
(3) Chisholm, M. G. Student Evaluation: The Red Herring of the Decade. J. Chem. Educ. 1977, 54 (1), 22−23.
(4) Brooks, D. W.; Kelter, P. B.; Tipton, T. J. Student Evaluation versus Faculty Evaluation of Chemistry Teaching Assistants. J. Chem. Educ. 1980, 57 (4), 294−295.
(5) Freilich, M. B. A Student Evaluation of Teaching Techniques: ‘None of Them Is Unimportant’. J. Chem. Educ. 1983, 60 (3), 218−221.
(6) Garafalo, A. R.; LoPresti, V. C.; Lasala, E. F. Student Evaluation of an Integrated Natural Science Curriculum. J. Chem. Educ. 1988, 65 (10), 890−891.
(7) Bergin, A.; Sharp, K.; Gatlin, T. A.; Villalta-Cerdas, A.; Gower, A.; Sandi-Urena, S. Use of RateMyProfessors.com Data as a Supplemental Tool for the Assessment of General Chemistry Instruction. J. Chem. Educ. 2013, 90 (3), 289−295.
(8) Kulik, J. A.; McKeachie, W. J. The Evaluation of Teachers in Higher Education. Rev. Res. Educ. 1975, 3, 210−240.
(9) Cohen, P. A. Student Ratings of Instruction and Student Achievement: A Meta-Analysis of Multisection Validity Studies. Rev. Educ. Res. 1981, 51 (3), 281−309.
(10) Algozzine, B.; Gretes, J.; Flowers, C.; Howley, L.; Beattie, J.; Spooner, F.; Mohanty, G.; Bray, M. Student Evaluation of College Teaching: A Practice in Search of Principles. Coll. Teach. 2004, 52 (4), 134−141.
(11) Lake Wobegon entry at Wikipedia. https://en.wikipedia.org/wiki/Lake_Wobegon (accessed Jan 2017).

DOI: 10.1021/acs.jchemed.7b00046 J. Chem. Educ. 2017, 94, 131−132