
Chapter 18

Assessing Undergraduate Research in Chemistry

Rebecca M. Jones*

Department of Chemistry and Biochemistry, George Mason University, 4400 University Drive, MSN 3E2, Fairfax, Virginia 22030, United States

*E-mail: [email protected]

Mentoring undergraduate research students is an important role for many faculty; however, it is challenging to evaluate student progress. To understand more about how faculty assess their students, I surveyed a group of peer faculty from diverse institutions. The responses show a consistent set of expectations, but no uniform method of evaluation. I developed two rubrics to assess the process and products of an undergraduate research experience and to offer useful tools for faculty mentors. These rubrics are described here, along with examples of implementation and potential benefits to both faculty and students.

Introduction

As faculty, we know undergraduate research is valuable for our students. Higher education literature clearly describes the benefits and value of this high-impact practice (1–9). In Bloom’s taxonomy (10), research sits at the highest level, where new ideas are synthesized and constructed from never-before-performed experiments. By its very nature, research is open-ended and does not lend itself to simple evaluation. Projects, even within a group, are unique, and rarely do they go as planned; failure is always an option. Additionally, the students involved in undergraduate research bring different experiences and skills to their projects. Taken together, these factors make assessment of undergraduate research (UR) students a challenging task. For many of us, the burden of assigning a grade to the undergraduate students working with us on a project is not trivial. Can we assign a grade for critical thinking and perseverance even

when a project fails? We know it is valuable to articulate learning outcomes for our classes (11), so it is logical that this step would also be valuable for UR (7). What outcomes are really important for UR in chemistry, and how can we measure them? I approached these questions in two ways. First, I asked how other faculty are assessing their students: what insight might I learn from these expert practitioners? Second, could I develop or adapt a rubric to better articulate the desired outcomes? In this chapter, I present the results from a survey of 12 faculty who regularly mentor undergraduate research students. These peers share many common expectations and also have unique ideas about how to tackle the daunting task of assessment. I will also share newly adapted rubrics that can be used to assess UR students on both the process of conducting research and a research product, such as a paper or presentation.

Surveying Peer Faculty

Using SurveyMonkey (12), I deployed a short survey to faculty around the country. Questions asked respondents to qualitatively describe their experience working with undergraduates in their research labs. A total of 12 tenured faculty responded to the survey, and two were interviewed by phone to discuss their answers. Of this group, 9 were full Professors and 3 were Associate Professors. Nine respondents said they mentor 3 or more students per semester. Table 1 identifies the schools that employ the respondents. There is a range of geographic locations, Carnegie classifications, and student populations. Only 4 of the 12 respondents work at public universities; all others are from private schools.

Table 1. Schools Employing the Faculty Survey Respondents

| School Name | City and State | Carnegie Classification | Student Population (* = public) |
|---|---|---|---|
| Sewanee: The University of the South | Sewanee, Tennessee | Baccalaureate Colleges: Arts & Sciences Focus | 1,714 |
| DePauw University | Greencastle, Indiana | Baccalaureate Colleges: Arts & Sciences Focus | 2,215 |
| University of Richmond | Richmond, Virginia | Baccalaureate Colleges: Arts & Sciences Focus | 4,182 |
| Union College | Schenectady, New York | Baccalaureate Colleges: Arts & Sciences Focus | 2,242 |
| Muhlenberg College | Allentown, Pennsylvania | Baccalaureate Colleges: Arts & Sciences Focus | 2,440 |
| Cornell College | Mount Vernon, Iowa | Baccalaureate Colleges: Arts & Sciences Focus | 1,086 |
| Eastern Illinois University | Charleston, Illinois | Master’s Colleges & Universities: Larger Programs | 8,913* |
| University of Michigan-Dearborn | Dearborn, Michigan | Master’s Colleges & Universities: Larger Programs | 8,923* |
| University of Wisconsin-Eau Claire | Eau Claire, Wisconsin | Master’s Colleges & Universities: Medium Programs | 10,721* |
| Otterbein University | Westerville, Ohio | Master’s Colleges & Universities: Medium Programs | 2,791 |
| George Mason University | Fairfax, Virginia | Doctoral Universities: Highest Research Activity | 33,729* |
| University of San Diego | San Diego, California | Doctoral Universities: Moderate Research Activity | 8,349 |

In the survey, faculty were asked a series of questions about how students work in their labs and how that work is assessed. These questions and a summary of the responses are shared below.

What Does It Look Like To Work in Your Lab?

All respondents indicated they meet weekly with their students. There are varying levels of group versus solo work, but most expect students to be able to work independently. Some faculty also described required group interactions, such as mentoring younger researchers and attending regular group meetings. The students work a varying amount of time per week on their research; across the respondents, the averages were 8-12 hours/week during the semester and 35-40 hours/week during the summer. Some students received course credit for their research; this is most common during the semester. These responses suggest a very similar experience in mentoring undergraduate research students across the range of schools represented.

What Products (Notebooks, Papers, etc.) Are Required at the End of the Semester/Summer?

Faculty stated that students must complete the required or agreed-upon hours for their project, regardless of outcomes. Students are expected to keep a good lab notebook, and 10 of the faculty require a final paper or presentation. At least three faculty also said they expect a “good faith effort” from each student.

How Do You Evaluate if a Student Has Been Successful in Their Research? What Specific Benchmarks Do You Have?

Respondents said they look for progress toward the project goals and often ask, “Did they get the data?” The ability to work unsupervised is a common benchmark. The quality of the lab notebook and written reports is also used to assess students, although there was no common method of evaluation.

If Applicable, Describe How You Assign a Grade to Your Undergraduate Research Students. If You Use Any Rubrics or Grading Scales, Please Describe Them Here.

Ten of the 12 respondents said they had no formal mechanism for evaluating their students. One said he uses “20+ years of gut check” and that having research courses with a Pass/Fail option makes this type of grading easier. Another faculty member said, “If they ‘do the deal’ they get A’s (99% of instances). If not, I give them something less. I have no formal evaluation protocol.” Two respondents described using a rubric to evaluate a student’s written thesis. These rubrics were developed by departments over time and provide consistency across the various research groups. For one school, the criteria include quality of experimental work; quality of the thesis (drafts and final version); level of detail and accuracy of the information recorded in the laboratory notebook; quality of the presentation of results (NCUR, Steinmetz, etc.); adherence to the time expectations in the lab; attitude toward the research; attendance at department seminars; attendance at a safety lecture; and participation in group meetings. A student who meets these expectations earns a “B”, and possible grades cover the entire range from “A” to “F”. Faculty in this department assign their own grades each term for thesis students, then collectively meet to discuss and justify these grades. This method has improved consistency between advisors in the grades awarded.

In summary, the faculty described very similar general expectations and a holistic approach to evaluation. There was little to no standardization, which may or may not be a negative characteristic of this process. There was one detailed example of a department-deployed rubric, which may be difficult to replicate in a large department because of the time-intensive coordination involved. A few things were conspicuously absent from the faculty responses. First, there is no formal feedback loop for students. Some faculty talked about regular meetings to keep students “on track”, but what exactly does that mean? Students should understand what is expected of them and receive regular feedback in order to be most successful. Second, there was almost no evaluation of the process of research. Student research papers and presentations are products, not learning outcomes such as improved critical thinking. Beyond the ability to work independently, faculty did not articulate or describe evaluation of the myriad other outcomes of a UR experience. Aside from a rather vague “gut check”, there were no overarching measures to assess the learning outcomes of undergraduate research.

Developing Assessment Tools

This modest survey of peer faculty revealed a need for useful tools to evaluate UR students in chemistry. As a faculty member committed to UR, I modified two public rubrics to address this need. The Research Process and Research Presentation Rubrics, shown below in Figures 1 and 2, were based upon rubrics developed at George Mason University and by the Association of American Colleges and Universities (AAC&U). In 2009, Mason selected undergraduate research as the focus of its Quality Enhancement Plan and created the Students as Scholars initiative. Beginning in 2012, Students as Scholars focused on supporting students at various developmental levels during their undergraduate careers, from the discovery of how knowledge is generated to the creation and communication of new knowledge through original scholarly or creative work. Rubrics were developed and published to assess this university-wide program and the corresponding student learning outcomes; other universities and individual faculty were encouraged to adapt these rubrics to suit their specific needs (13). The AAC&U also has a series of rubrics developed for student assessment (14). The Research Process and Research Presentation Rubrics were developed from these resources and have been tailored to assess chemistry and physical science students. Practical electronic versions (PDF format) of these rubrics are available from the author by request.

Research Process Rubric

The Process Rubric describes outcomes and levels of competency related to the process of research, rather than only the results. The first column of this rubric identifies five learning outcomes relevant to all chemistry research projects. Students who engage in undergraduate chemistry research will be able to:

• Articulate and refine a question, problem, or challenge.
• Take responsibility for creating and executing an original scholarly project.
• Gather and evaluate evidence appropriate to the inquiry.
• Appropriately analyze scholarly evidence.
• Communicate knowledge from an original scholarly project.
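For mentors who prefer an electronic record, the rubric’s structure reduces to a simple mapping: one competency level per outcome per term. The short Python sketch below is purely illustrative and is not part of the published rubric; the outcome names come from the list above, the level names (Novice, Emerging, Proficient, Advanced) come from the discussion that follows, and everything else (the class, function, and example terms and ratings) is hypothetical scaffolding.

```python
# Illustrative sketch only: recording Research Process Rubric ratings
# each term so a mentor can watch a student's trajectory over time.
# Outcome and level names follow the rubric; the rest is hypothetical.
from dataclasses import dataclass

LEVELS = ["Novice", "Emerging", "Proficient", "Advanced"]

OUTCOMES = [
    "Articulate and refine a question, problem, or challenge",
    "Take responsibility for creating and executing an original scholarly project",
    "Gather and evaluate evidence appropriate to the inquiry",
    "Appropriately analyze scholarly evidence",
    "Communicate knowledge from an original scholarly project",
]

@dataclass
class TermRating:
    """One student's Process Rubric ratings for a single term."""
    term: str
    ratings: dict  # maps an outcome string to a level string

def show_progress(student: str, history: list):
    """Print each outcome's trajectory across the rated terms."""
    print(f"Progress for {student}:")
    for outcome in OUTCOMES:
        path = " -> ".join(t.ratings.get(outcome, "?") for t in history)
        print(f"  {outcome}: {path}")

# Hypothetical example: a student rated at the end of three terms,
# advancing on most outcomes while still emerging on communication.
history = [
    TermRating("Term 1", dict.fromkeys(OUTCOMES, "Novice")),
    TermRating("Term 2", dict.fromkeys(OUTCOMES, "Emerging")),
    TermRating("Term 3", {**dict.fromkeys(OUTCOMES, "Proficient"),
                          OUTCOMES[-1]: "Emerging"}),
]
show_progress("Student A", history)
```

Even a lightweight record like this makes the semester-to-semester movement described below visible at a glance, which is the rubric’s intended use: tracking growth rather than assigning a single summary judgment.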


Figure 1. Research process rubric. Adapted with permission from Students as Scholars program rubric, George Mason University Students as Scholars Initiative (2016). Retrieved from https://oscar.gmu.edu/faculty-staff/assessment/. Figure appears courtesy of the author. Copyright 2017 Rebecca M. Jones.


Figure 2. Research presentation rubric. Adapted with permission from "VALUE: Valid Assessment of Learning in Undergraduate Education." Copyright 2017 by the Association of American Colleges and Universities. http://www.aacu.org/value. Figure appears courtesy of the author. Copyright 2017 Rebecca M. Jones.


The remaining four columns represent various levels of competency. Considering the learning outcomes and these levels, this tool provides language for describing the strengths and weaknesses we observe in our students. As mentors, we want students to progress from Novice, through Emerging and Proficient, and hopefully reach Advanced outcomes. These terms do not connote good or bad, pass or fail, but rather serve as benchmarks for both the students and the mentoring faculty. Rather than using this rubric to arbitrarily insist that students reach a certain level, faculty are encouraged to use it to track progress over time. Students can see the descriptions under each level and use them as inspiration for how they can improve. I have used this rubric to evaluate my undergraduate research students for the last 3 years. One student began working with me as a freshman, with no prior research experience. Over the 2 years he worked with me, I watched him progress from the novice level into the emerging and proficient levels. Although his project itself stalled at times, seeing his progress in these metrics was very satisfying and helped us both see that he was still learning, despite the challenges he encountered.

Research Presentation Rubric

The Research Presentation Rubric is designed to evaluate an oral or poster presentation given by a student researcher about their project; it could also be used to evaluate a research paper. Rather than specific learning outcomes, this rubric provides categories of required components (Organization, Content and Supporting Material, Language, Delivery, Overall) and describes the corresponding levels of competency. I have used this newer rubric for the past 1.5 years to evaluate research papers and student presentations. It has also been successfully used to evaluate senior-level presentations in a chemistry course. I distributed the rubric with the assignment and encouraged or required the students to use it for peer and self-evaluation. Completed rubrics were returned to students to provide detailed feedback. In addition, this tool is general enough that it has been used to evaluate interdisciplinary scientific posters at a college-level research colloquium.
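When several completed rubrics come back for one presentation (self, peer, and mentor evaluations), collating them into a single summary per category is straightforward. The sketch below is again only illustrative, not an implementation distributed with the rubric; the category names are taken from the rubric above, the level names are assumed to parallel those of the Process Rubric, and the function name and example ratings are hypothetical.

```python
# Illustrative sketch only: summarizing completed Presentation Rubric
# forms (self, peer, and mentor evaluations) into feedback per category.
# Category names follow the rubric; the rest is hypothetical scaffolding.
from collections import Counter

CATEGORIES = ["Organization", "Content and Supporting Material",
              "Language", "Delivery", "Overall"]

def summarize(forms: list) -> dict:
    """Return the most common rating per category across all forms.
    Each form is a dict mapping a category name to a level string."""
    summary = {}
    for category in CATEGORIES:
        votes = Counter(form[category] for form in forms if category in form)
        summary[category] = votes.most_common(1)[0][0] if votes else "not rated"
    return summary

# Hypothetical example: one self-evaluation plus two peer evaluations.
forms = [
    {"Organization": "Proficient", "Content and Supporting Material": "Emerging",
     "Language": "Proficient", "Delivery": "Emerging", "Overall": "Emerging"},
    {"Organization": "Proficient", "Content and Supporting Material": "Proficient",
     "Language": "Proficient", "Delivery": "Emerging", "Overall": "Proficient"},
    {"Organization": "Emerging", "Content and Supporting Material": "Proficient",
     "Language": "Proficient", "Delivery": "Proficient", "Overall": "Proficient"},
]
for category, level in summarize(forms).items():
    print(f"{category}: {level}")
```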

Conclusions

This chapter provides insight into the scarcity of assessment mechanisms for undergraduate research in chemistry and offers two complementary rubrics developed to address this deficit. Despite having a reasonably uniform set of expectations for students in their labs, the chemistry faculty surveyed here did not describe a consistent approach to evaluating their undergraduate research students. The rubrics described here are tools that may help chemistry faculty assess their undergraduate research students. Used in concert, these rubrics provide a benchmark toward which students can aim, identifying areas in which they are novices and describing what it means to be Proficient or Advanced. Mentoring faculty can use this information to adjust goals or their mentoring strategies. These tools may also be used in the classroom to provide students with feedback. Additionally, the rubrics can provide descriptive language for recommendation letters. Future research will investigate the usefulness of these new rubrics and offer a quantitative perspective on their value. Outcomes for chemistry research students should not be vague targets with obscure milestones. Assessment does not have to be akin to a four-letter word in the minds of faculty mentors. Adoption of these rubrics can yield improved outcomes for students and ease the burden of assessment on faculty.

References

1. Lopatto, D. Survey of Undergraduate Research Experiences (SURE): First Findings. Cell Biol. Educ. 2004, 3, 270–277.
2. Lopatto, D. What Undergraduate Research Can Tell Us about Research on Learning. http://www.pkal.org/collections/VolumeIV.cfm (accessed Nov 30, 2017).
3. Lopatto, D. Undergraduate Research as a High-Impact Student Experience. Peer Rev. 2010, 12, 27–30.
4. Kardash, C. M.; Wallace, M.; Blockus, L. Science Undergraduates’ Perceptions of Learning from Undergraduate Research Experiences. In Developing, Promoting, and Sustaining the Undergraduate Research Experience in Psychology; Miller, R. L., Rycek, R. F., Balcetis, E., Barney, S. T., Beins, B. C., Burns, S. R., Smith, R., Ware, M. E., Eds.; Society for the Teaching of Psychology: Washington, DC, 2008; pp 258–263.
5. Carter, F. D.; Mandell, M.; Maton, K. I. The Influence of On-Campus, Academic Year Undergraduate Research on STEM Ph.D. Outcomes: Evidence from the Meyerhoff Scholarship Program. Educ. Eval. Policy Anal. 2009, 31, 441–462.
6. Hu, S.; Kuh, G. D.; Gayles, J. G. Engaging Undergraduate Students in Research Activities: Are Research Universities Doing a Better Job? Innov. High. Educ. 2007, 32, 167–177.
7. Hunter, A.-B.; Laursen, S. L.; Seymour, E. Becoming a Scientist: The Role of Undergraduate Research in Students’ Cognitive, Personal, and Professional Development. Sci. Educ. 2007, 91, 36–74.
8. Graham, M. J.; Frederick, J.; Byars-Winston, A.; Hunter, A.-B.; Handelsman, J. Increasing Persistence of College Students in STEM. Science 2013, 341, 1455–1456.
9. Couch, B. A.; Brown, T. L.; Schelpat, T. J.; Graham, M. J.; Knight, J. K. Scientific Teaching: Defining a Taxonomy of Observable Practices. CBE Life Sci. Educ. 2015, 14, 1–12.
10. Bloom, B. S. Taxonomy of Educational Objectives: The Classification of Educational Goals; David McKay Co.: New York, 1956.
11. Astin, A. W. What Matters in College? Lib. Educ. 1993, 79, 4.
12. SurveyMonkey. https://www.surveymonkey.com (accessed Nov 30, 2017).
13. George Mason University Students as Scholars Initiative. Students as Scholars Program Rubric. https://oscar.gmu.edu/faculty-staff/assessment/ (accessed Nov 30, 2017).
14. Rhodes, T. Assessing Outcomes and Improving Achievement: Tips and Tools for Using the Rubrics; Association of American Colleges and Universities: Washington, DC, 2009.
