Chapter 9
Coupling Eye Tracking with Verbal Articulation in the Evaluation of Assessment Materials Containing Visual Representations

Jessica J. Reed,1 David G. Schreurs,1 Jeffrey R. Raker,2 and Kristen L. Murphy*,1

1Department of Chemistry & Biochemistry, University of Wisconsin–Milwaukee, Milwaukee, Wisconsin 53211, United States
2Department of Chemistry, University of South Florida, Tampa, Florida 33620, United States
*E-mail: [email protected]
Assessment is a key component in the teaching and learning of chemistry. Assessment materials in chemistry often contain visual representations such as molecular structures, particulate nature of matter (PNOM) representations, graphs, diagrams, and pictures of laboratory instruments or equipment. Eye tracking provides a specialized means to understand how students interact with the features present in these visual representations. Eye-tracking measures coupled with students' verbal descriptions of salient features of the representations present in assessment items allow for greater insight into how students use and understand these representations within assessment materials. In this chapter, we highlight the eye-tracking methods used to aid in the evaluation of assessment materials, with a focus on coupling eye tracking with students' verbal articulation.
© 2018 American Chemical Society
Introduction

Representational competence, the skill to interpret and use representations, is critical for success in chemistry and other science, technology, engineering, and mathematics (STEM) fields. Such competence is well documented in relation to chemistry education and instruction (1–9). Students' course materials frequently include representations such as graphs, particulate nature of matter (PNOM) diagrams, molecular representations or structures, and images of experimental glassware and equipment (10–17). Effective assessment of students' chemistry knowledge requires testing students' ability to interact with these representations in relation to chemistry content. Studies have examined how students use these representations and associated misconceptions (18–22). Yet it is not well understood how students interact with, and make decisions about how to use, the features and information found within these representations in an assessment environment.

In order to help students develop representational competence and to create new assessments that measure it, it is important to understand how students make judgments about information contained within visual representations when solving problems. Eye tracking coupled with verbal articulation provides a means to understand the conscious interplay between student thinking and decision-making when using visual representations in an assessment environment. By using both research methods, a rich dataset is created in which eye-movement data corroborate students' verbal descriptions in a manner that a traditional interview or eye-tracking session could not achieve independently. Our research study therefore aimed to examine how students interpret and use representations when solving chemistry problems in a low-stakes assessment environment.

The purpose of this chapter is to provide an overview of how eye tracking can be coupled with verbal articulation to gather in-depth information about student processing of information found in visual representations, and to provide a guide for developing eye-tracking studies for assessment materials. In this regard, the chapter focuses on the steps taken to design, implement, and analyze the data collected from the study, rather than on an in-depth analysis of results.

Articulation and Eye Tracking

Having students articulate when and how they are using an image as they complete a task allows for a more in-depth analysis and understanding of eye-fixation data. While similar to traditional think-aloud protocols associated with eye-tracking studies, the articulation method described herein focuses on capturing descriptions of the components of visual representations and their use rather than on collecting all processes associated with solving the task. In this regard, eliciting students' descriptions of representation use was our primary focus, rather than capturing a cohesive or successful problem-solving method. Similar data collection methods have been employed for a variety of purposes, including consideration of expert and novice image processing and description (23–26).

Verbal descriptions add value to eye-tracking data by giving the researcher insight into which specific features of the image the subject is using and what implicit judgments the subject is making about features they either deem irrelevant to the task or do not understand. A researcher then has the opportunity to ask additional questions of the participant and gain further insight about observed eye-movement patterns. By collecting data in this manner, a researcher can make more robust inferences about students' use of representations than if a stand-alone eye-tracking or interview process had been used. For example, if a student fixates on a component of a representation but incorrectly articulates its features or use, then the researcher has more information about why or how that student's representational competence may be lacking. Compared to having only eye-tracking or articulation data independently, the coupled methods provide additional insight into students' thoughts about representation use.

Another advantage of coupling eye-tracking and articulation methods is the possibility of verbal confirmation and explanation of gaze shifts as students articulate what they are looking at and why, providing greater clarity for scanpath data. Better estimates of students' time on task are also possible because students verbally describe how and when they are using image features in real time. However, the type and quantity of data a researcher is prepared to process must be considered before determining whether articulation is a viable method to couple with eye tracking. A limitation of coupling these methods is the amount of time it takes not only to conduct the eye-tracking interview sessions but also to independently analyze and then combine the quantitative eye-tracking data and the qualitative articulation data. Thinking about desired outcomes, along with the means for stimulus design and data collection, can aid a researcher in determining the ideal scenario for the research study. A discussion of the methods we used to design stimuli and collect data for an eye-tracking study coupled with a verbal articulation study follows.
Methods

Stimulus Selection and Design

Selection of stimuli and design of the study are critical components of the project; refer to Chapter 3 for more information on stimulus design. To determine how students use visual representations in assessment environments, it is important to use assessment materials that contain visual representations commonly associated with chemistry content and that require use of the representation to complete the associated task. It is also important to select items with representations that are unambiguous, for ease of student articulation. For these reasons, assessment items and their associated representations from ACS Exams were selected, because the methods surrounding item creation and trial testing provide a higher probability that the associated representations are valid (27). The selected representations were also broad, not tied to a specific textbook or test-bank format, and grounded in the classroom expertise of the exam development committees of chemistry instructors. Using ACS Exams items containing visual representations thus ensured a reliable foundation for expanding the future scope of the study to encompass assessment tasks and images beyond the realm of ACS Exams. Because ACS Exams items are secure and copyright protected, these items cannot be shown herein; however, examples of similar items and images are presented.

PNOM representations and energy profiles were the focus of this study. These types of representations are readily found on General Chemistry ACS Exams, and they are associated with topics found in both the first and second semesters of a year-long course. Our rationale for exploring only two types of representations was twofold. First, we anticipated that this work would be the first in a series of studies exploring students' articulations of visual representations; future studies would explore additional representations. Second, fully exploring how students use and articulate these representations required a broad selection of images and tasks within each category. To keep the length of the session reasonable while still collecting meaningful data, it was necessary to limit the focus of the study to two types of representations.

Assessment item selection stemmed from previous analyses of General Chemistry ACS Exams items (13, 28). We examined items from the past 10 years that contained PNOM representations or energy profiles and selected the representations and associated content that were used most frequently. We also considered how the task coupled with the image might influence how students articulate and use the image. We wanted to understand the degree to which students' use and interpretation of a representation changes based on whether the representation appears by itself (task-independent) or within the context of an assessment item (task-dependent). To explore this idea, some stimuli were created that were independent of an assessment task: only an image was shown on the screen, and students were asked to describe the features of the image they noticed. Task-dependent items included an image and a multiple-choice assessment item; participants were asked to complete the assessment task and verbally describe any features of the image they were using to solve the problem. Examples of PNOM and energy profile items similar to those used in the study are shown in Figures 1 and 2, respectively; these items are from the ACS General Chemistry Study Guide and are representative of the types of items used in the study (29). Fifteen stimuli were developed based on assessment items; a breakdown of task type across representation types is shown in Table 1. More task-dependent items were included than task-independent items because we posited that features of an image were likely to be articulated differently in the context of a specific task. All representations were black and white and two-dimensional, mirroring their use in ACS Exams items.
Figure 1. An example of a PNOM image and associated item similar to those used in the study. A task-independent item would show only the image, whereas a task-dependent item would have the image and an associated multiple-choice question.
Figure 2. An example of an energy profile image and associated item similar to those used in the study. A task-independent item would show only the image, whereas a task-dependent item would have the image and an associated multiple-choice question.
Table 1. Distribution of Tasks by Image Type

Image Type        Task-Independent   Task-Dependent   Total
PNOM                     3                  5            8
Energy profile           2                  5            7
Total                    5                 10           15
Eye Tracker Setup

A remote eye-tracking device (SensoMotoric Instruments) was used for collection of students' eye movements and was mounted to an external monitor, which sat approximately 700 mm in front of the student. A sampling rate of 60 Hz was used, and the threshold for defining a fixation was 100 ms. The interviewer sat at a table adjacent to the student to monitor the student's eye movements and to prompt the student to articulate in instances where the interviewer could see the student's eyes fixated on features of the representation but the student remained silent (see Figure 3 for the setup). Additionally, a video recorder was positioned to capture the view of the stimulus monitor without capturing identifying features of student participants. The video recordings captured student articulations and allowed for corroboration with the eye-tracking data.
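To make the fixation criterion concrete, the sketch below shows one common way such a threshold can be operationalized: a dispersion-based (I-DT) filter written in Python. This is illustrative only; the SMI software applies its own event-detection algorithm, and the dispersion limit, data layout, and function names here are assumptions.

```python
SAMPLE_RATE_HZ = 60      # from the chapter
MIN_FIXATION_MS = 100    # from the chapter
MIN_SAMPLES = MIN_FIXATION_MS * SAMPLE_RATE_HZ // 1000  # 6 samples at 60 Hz
MAX_DISPERSION_PX = 50   # assumed spatial tolerance

def dispersion(window):
    """Spread of a window of (x, y) samples: (max x - min x) + (max y - min y)."""
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples):
    """Group (x, y) gaze samples, recorded at SAMPLE_RATE_HZ, into fixations.

    Returns a list of (start_index, end_index, centroid) tuples."""
    fixations = []
    start = 0
    while start + MIN_SAMPLES <= len(samples):
        window = samples[start:start + MIN_SAMPLES]
        if dispersion(window) <= MAX_DISPERSION_PX:
            end = start + MIN_SAMPLES
            # Grow the window while the samples stay tightly clustered.
            while end < len(samples) and dispersion(samples[start:end + 1]) <= MAX_DISPERSION_PX:
                end += 1
            xs = [p[0] for p in samples[start:end]]
            ys = [p[1] for p in samples[start:end]]
            fixations.append((start, end, (sum(xs) / len(xs), sum(ys) / len(ys))))
            start = end
        else:
            start += 1
    return fixations
```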
Figure 3. Setup of remote eye-tracking device and interviewer laptop.
Interview Protocol

Students from the second semester of a year-long general chemistry course were invited to participate in the research study; such students have been exposed to all of the content associated with the representations in the study. The study occurred 2 weeks prior to the final exam, and students were encouraged to participate as an opportunity to review for the final exam. Before the interview, written and verbal consent was obtained, and the eye tracker was calibrated for the participant's eyes. Three example tasks were used to train the student to provide rich articulations. Students were told there was no penalty for guessing or for not knowing correct terminology, and they were not given a score at the end of the session. To limit the influence of the interviewer, the interviewer did not intervene when participants completed a task incorrectly or used an incorrect term, but the interviewer did ask for clarification when the student used ambiguous language. Interviews ranged from 30 to 60 minutes. Students received an ACS Exams General Chemistry Study Guide for participating. A total of 29 students participated.

Limitations

Our data collection scenario has several potential limitations. Coupling eye-tracking data collection with another data collection method can create difficulties. For example, many students are not familiar with being asked to describe how and when they are using a visual representation to solve a problem; therefore, the interviewer had to carefully pose questions to elicit how information was being obtained from the representation without compromising the data by directing the student to look at or describe a specific feature.

Additionally, when creating stimuli and selecting representations, it may be necessary to consider the desired nuance of the articulation and create stimuli and areas of interest (AOIs) accordingly. To capture how students were using individual features of representations to solve problems in this study, it was necessary to select stimuli with discrete, well-defined features and then set AOIs for these individual features rather than for the whole representation. In some instances, multiple features within a representation may be grouped together in one AOI, depending on the level of nuance desired for articulation and analysis. For example, in this study, multiple PNOM molecules may function as a single AOI when the associated task does not require the student to differentiate between individual molecules in the representation, making it arbitrary which molecule the student is looking at while articulating. However, had distinctions between various molecules in the representation been deemed important to the study, it may have been necessary to enlarge or redesign the stimulus so that the generated AOIs would appropriately capture eye fixations to couple with nuanced articulations.

Finally, the stakes associated with the assessment may influence the quality of articulation. In our study, there was no penalty for guessing or saying "I don't know," and students did not receive a score at the end of the session, which may have influenced how some students approached the task of articulation. Future studies may consider how added stakes, such as receiving a score on the assessment (even though it has no bearing on the course grade) or the opportunity to earn additional incentives based on performance, influence how students approach the articulation process (30, 31). Additionally, it may be useful to consider when students will be exposed to the representations and to capture representation use shortly after the relevant instruction has concluded, to minimize articulation difficulties arising from students' inability to recall information about specific representations or associated content.
Data Analysis Methods

The data collected for the study posed a unique challenge for analysis. After conducting semi-structured interviews in which students verbally described visual representations in assessment tasks while their eye movements were tracked, we were left with a large amount of data to sift through. Our aim was to understand how students were using the representations and making judgments about which features were relevant for completing the associated task. It was therefore important to identify patterns in how students used visual representations, which necessitated comparing individual scanpaths to create aggregate scanpaths representing the most common pattern of eye movement while completing a task. A general summary of this process follows.

Analysis began with the generation of AOIs representative of the key features that students articulated, such as the question prompt, the answer choices, or aspects of the figure in question. A hypothetical example of how a prompt may be broken down into key AOIs is shown in Figure 4; in this example, AOIs were created for the task label, the reactants, the reaction arrow, and the products. Each participant's eye-fixation data are then represented by the AOI in which each fixation occurred, generating a scanpath for each individual participant.
Figure 4. The transformation of a prompt into key AOIs.

After the data are put in terms of AOIs, an aggregate scanpath can be generated to represent the average scanpath of all participants. This presented a unique challenge because each participant had a different number of fixations: while some participants completed a task quickly and looked at each AOI only a few times, others continued to look back and forth and had dozens of fixations. To simplify the process of qualitatively finding patterns among sequences of different lengths, each AOI was assigned a numerical value and a distinct color. Assigning each AOI a number allowed the data to be easily aggregated and condensed. Figure 5 shows how the AOIs from Figure 4 could be simplified into a respective color and number scheme.
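As an illustration of this encoding step, the following Python sketch maps fixation centroids to numbered AOIs, using the numbering later described for Figure 9 (1 = reactants, 2 = products, 3 = reaction arrow, 4 = task label). The rectangle coordinates and function names are hypothetical, not the actual stimulus layout.

```python
# AOI numbering follows the scheme described for Figure 9:
# 1 = reactants, 2 = products, 3 = reaction arrow, 4 = task label.
# The rectangles below are placeholders, not the actual stimulus layout.
AOIS = {
    1: ("reactants", (50, 120, 300, 260)),       # (left, top, right, bottom)
    2: ("products", (420, 120, 650, 260)),
    3: ("reaction arrow", (310, 170, 410, 210)),
    4: ("task label", (50, 40, 650, 90)),
}

def aoi_for_fixation(x, y):
    """Return the AOI number containing a fixation centroid, or None."""
    for number, (_name, (left, top, right, bottom)) in AOIS.items():
        if left <= x <= right and top <= y <= bottom:
            return number
    return None

def encode_scanpath(centroids):
    """Map fixation centroids to AOI numbers, dropping out-of-AOI fixations."""
    path = (aoi_for_fixation(x, y) for x, y in centroids)
    return [aoi for aoi in path if aoi is not None]

# e.g., encode_scanpath([(60, 50), (100, 150), (500, 200)]) -> [4, 1, 2]
```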
Figure 5. Sample numerical and color scheme for the AOIs generated in Figure 4. AOIs were assigned a numerical value and color coded for ease of analysis.

After the fixation data for each individual task were converted into the number and color scheme, generation of an aggregate scanpath for the task could begin. To initially group participants with similar eye movements, eyePatterns software was used for bulk data analysis to determine numerical similarities between participants' fixation patterns (32). The results of this analysis generated the similarity tree shown in Figure 6. The fewer branches it takes to connect two subjects, the more similar the subjects' fixations are; however, there is no set threshold for the number of branches that determines similarity. For example, participants S17 and S24 have similar patterns of eye fixations, while participants S16 and S24 were deemed to have dissimilar eye-movement patterns. While a similarity tree shows the general similarity between two subjects, it does not consider the patterns within each subject's fixations, only the prevalence of hits within an AOI. This means the similarity tree was useful for knowing which subjects were similar; to determine how they were similar, an additional program was needed.
Figure 6. Similarity tree showing the numerical similarities between the subjects.
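eyePatterns' internal similarity measure is not reproduced here, but the general idea of grouping AOI sequences into a similarity tree can be sketched with a pairwise distance matrix and hierarchical clustering. In this illustrative Python version, a plain edit distance and SciPy's average-linkage clustering stand in for the software's own computation, and the example sequences are hypothetical.

```python
from itertools import combinations

import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import squareform

def edit_distance(a, b):
    """Dynamic-programming edit distance between two AOI sequences."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (x != y)))  # substitution
        prev = curr
    return prev[-1]

# Hypothetical AOI sequences for four participants.
scanpaths = {
    "S03": [3, 3, 4, 1, 2, 1],
    "S16": [4, 1, 1, 3, 2, 2],
    "S17": [1, 2, 1, 2, 4, 2],
    "S24": [4, 1, 2, 1, 2, 2],
}

subjects = sorted(scanpaths)
n = len(subjects)
dist = [[0.0] * n for _ in range(n)]
for (i, s1), (j, s2) in combinations(enumerate(subjects), 2):
    dist[i][j] = dist[j][i] = edit_distance(scanpaths[s1], scanpaths[s2])

# squareform condenses the symmetric matrix; average linkage builds the tree.
tree = linkage(squareform(dist), method="average")
dendrogram(tree, labels=subjects)  # draws a similarity tree like Figure 6
plt.show()
```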
To find similar patterns between any two subjects, we used the Basic-Algorithms-of-Bioinformatics Applet (BABA) (33). Student subjects had previously been identified and grouped as similar or dissimilar to one another with the eyePatterns software; they were now compared for similarity of specific fixation patterns using BABA. The program determines similarities by aligning the AOIs of one subject against the AOIs of a different subject in a matrix. The program then scans from the top left of the matrix to the bottom right, searching for the same numbers to appear. When the numbers differ, a +1 penalty is added in the box. Figure 7 shows the BABA scan for two similar subjects. In the top left corner of the matrix, a 0 can be seen; this 0 indicates the starting point, where no penalty has been added because none of the AOIs have been compared. Following along the shaded line, a 1 appears next. This +1 penalty was added because the first AOI fixations of the two subjects differ (4 for S24; 1 for S17). In the next shaded box, the 1 reappears: the second number of each sequence is a 2, so this box incurs a penalty of 0, but the program keeps a running penalty total along the sequence, so the 1 from the previous box is carried through.
Figure 7. BABA example for two similar subjects.

The line of shaded boxes indicates the best alignment of the two sequences. A perfectly diagonal line indicates the sequences are already ideally aligned; an ideal alignment is one in which the participants share the most fixations in common. When the line moves horizontally, the two sequences have a similar pattern but are misaligned. Figure 8 depicts the BABA scan for two subjects who are dissimilar. Their dissimilarity can be determined both by the number of realignments that were required (horizontal shifts of the shaded line) and by the final penalty total in the lower right corner, although there is no absolute penalty threshold for determining similarity and dissimilarity. Figure 7 shows that S24 and S17 required only three realignments and had a deviation penalty of only nine, making them similar in eye-movement patterns. Figure 8 shows that S24 and S16 required eight realignments and had a deviation penalty of 12, deeming their eye-movement patterns relatively dissimilar.
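The cumulative-penalty matrix described above can be sketched as a standard dynamic-programming alignment with a +1 cost for a mismatch or a realignment and 0 for a match. BABA's exact scoring rules may differ from this simplification, and the AOI sequences below are hypothetical.

```python
def penalty_matrix(seq_a, seq_b):
    """Fill a cumulative-penalty alignment matrix.

    m[-1][-1] is the final penalty total, analogous to the value in the
    lower right corner of Figures 7 and 8."""
    rows, cols = len(seq_a) + 1, len(seq_b) + 1
    m = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        m[i][0] = i  # aligning against an empty prefix costs 1 per element
    for j in range(1, cols):
        m[0][j] = j
    for i in range(1, rows):
        for j in range(1, cols):
            mismatch = 0 if seq_a[i - 1] == seq_b[j - 1] else 1
            m[i][j] = min(m[i - 1][j - 1] + mismatch,  # follow the diagonal
                          m[i - 1][j] + 1,             # vertical realignment
                          m[i][j - 1] + 1)             # horizontal realignment
    return m

# Hypothetical sequences; the first AOI fixations differ (4 vs 1), so the
# first step along the diagonal incurs the +1 penalty described above.
s24 = [4, 2, 1, 2, 1, 2]
s17 = [1, 2, 1, 2, 2, 2]
print(penalty_matrix(s24, s17)[-1][-1])  # final penalty total
```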
Figure 8. BABA example for two dissimilar subjects.

The BABA analysis was conducted for all participants. Ideal alignment between any two subjects was determined based on the longest diagonal line in the analysis before a realignment was required. All aligned data were consolidated and visually analyzed for patterns. An example of the aligned data can be seen in Figure 9, where each column represents the scanpath of a single participant. These data were then visually broken down into three main patterns based on the AOIs present in this particular stimulus. Box A depicts an alternating pattern between AOI 1 (reactants) and AOI 2 (products) that is prevalent throughout most of the participants' scanpaths, although its location in the scanpath may vary by participant. Box B shows the participants who fixated on AOI 3 (reaction arrow), and box C shows the participants who fixated on AOI 4 (task label). This analysis was useful because it allowed the researchers to visually identify groups of student participants with similar eye-movement patterns and then later investigate whether students within those groups had similar articulations. It also allows for identification of eye-movement patterns that are prevalent across all participants, which may indicate key features of the representation that participants are engaging with to complete the task and thus, theoretically, should also be articulating. The example shown in Figure 9 is for a basic task-independent item; however, as the number of AOIs increased in task-dependent items, the participant groupings generated by this analysis method became more meaningful because they could be tied to students' problem-solving and articulation methods. The eye-movement patterns identified with the BABA analysis were then used to generate aggregate scanpaths that were coupled with students' verbal articulations.
Figure 9. Three basic eye-movement patterns that emerged among participants.

Verbal articulations were transcribed and aggregated by task. Preliminary analysis of the articulations included identifying the order in which individual participants described specific features of the image and then creating an average articulation order across all participants for the specific task. These articulations were coupled with aggregate scanpaths, like the one shown in Figure 10, to create an average pattern of eye movement and verbal image description for each task. The aggregate scanpaths were created by identifying the most prevalent fixation patterns in the BABA analysis and then determining the average fixation time from individual students' scanpaths to create an average order of fixations and an average fixation time for each, as sketched below.

For the example task in Figure 4, the average scan pattern (Figure 10) would be compared to the average order of feature articulation and the specific descriptions provided by individual students. In this scenario, the most frequent pattern of articulation began with students explaining that the representation featured a chemical reaction after approximately 3000 to 4000 ms, although the aggregate scanpath indicates that they had looked at the label, reactants, and products before reaching that conclusion. Students would then describe the reactants; however, they commonly used phrases such as "the molecules on the left" or "the circles on the left" rather than the word "reactants." A quick glance at the reaction arrow and products was followed by a longer look (approximately 3000 to 4000 ms in length) and a more detailed description of the reactants as "light" and "dark" colored "circles" or "diatomic molecules," along with the quantity of each reactant. The product molecules were then described in a similar fashion during the 15,000 to 18,000 ms of elapsed fixation time shown in Figure 10. Students often did not identify the molecules as hydrogen, oxygen, and water until after describing the products and reviewing the reaction. Students then spent time reviewing the diagram without articulation before adding more description, such as how the atoms were specifically connected ("two light-colored hydrogen atoms for every one dark-colored oxygen atom"), and ensuring that the reaction was balanced before ending the task.

By coupling the aggregate articulations and the average scanpath, it was possible to note that students were often looking at features of the image without articulating. For example, in Figure 10, students may not have started to articulate until approximately 3000 to 4000 ms had passed, but the aggregate scanpath indicates that their eyes fixated on several AOIs during this time. This suggests that they were likely making some kind of conscious decision about relevant image features and information before completing the task, and it provides direction for future studies and analysis. By coupling eye-tracking and articulation methods, it was possible to discover patterns in students' representation use that might have been overlooked had the two methods been used independently.
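As a sketch of the aggregation step referenced above, the following Python fragment attaches mean fixation durations to a consensus AOI order; the input format, consensus order, and durations shown are assumptions for illustration.

```python
from collections import defaultdict
from statistics import mean

def aggregate_scanpath(consensus_order, participant_fixations):
    """Attach mean fixation durations to a consensus AOI order.

    consensus_order: AOI numbers in the shared order, e.g. [4, 1, 2, 1, 2].
    participant_fixations: {subject: [(aoi, duration_ms), ...]}.
    Returns [(aoi, mean_duration_ms), ...] for the aggregate scanpath."""
    durations = defaultdict(list)
    for fixations in participant_fixations.values():
        for aoi, duration_ms in fixations:
            durations[aoi].append(duration_ms)
    # Guard against an AOI in the consensus that no participant fixated.
    return [(aoi, mean(durations[aoi]) if durations[aoi] else 0.0)
            for aoi in consensus_order]

# Hypothetical fixation data for two participants.
fixations = {
    "S17": [(4, 220), (1, 850), (2, 630), (1, 410), (2, 380)],
    "S24": [(4, 180), (1, 920), (3, 150), (2, 700), (1, 360)],
}
print(aggregate_scanpath([4, 1, 2, 1, 2], fixations))
```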
Figure 10. An example of an aggregate average scan pattern for the task shown in Figure 4. Average student articulations coupled with the scanpath reveal how students are utilizing the AOIs of the task.
Summary and Implications

Eye tracking coupled with verbal articulation provides a unique method for studying students' use of visual representations in assessment items. This chapter proposed an approach for conducting an eye-tracking and articulation study and highlighted a method of data analysis for generating aggregate scanpaths. The articulation process described herein differs from traditional think-aloud procedures, in which the emphasis tends to be on gaining insight into a student's thinking process associated with completing a task or problem. Our articulation process was less about understanding the individual steps or calculations a student completed to solve a problem and more about understanding how students described features of visual representations and used those features to make judgments about solving problems. Coupled with fixation and eye-movement data, these articulations provide useful information to researchers about how students consciously decide which features are most relevant for completing a task.

For researchers, there are multiple applications of methods that pair eye tracking and articulation. For example, students may be attending to, or assigning meaning to, features of visual representations in ways that go undetected in traditional assessment measures, largely because the student arrives at the correct answer. From an assessment perspective, coupling eye tracking with verbal description of visual representations affords an opportunity for additional validation that students are using a representation in the manner in which it was intended, and it can provide initial data for creating assessment materials that measure representational competence.

Student articulations of visual images may also aid the construction of assessment materials for students with visual impairments. By understanding the order in which sighted students look at and describe features of a representation, assessment designers can refine student articulations to create robust descriptions to be encoded as image tags in assessment materials, providing visually impaired test takers with a concise and appropriate description of the representation.

Studies using eye tracking coupled with verbal articulations also provide insight for improving instruction on the use of visual representations in chemistry courses. Identifying how students use visual representations and make judgments about important features when solving a task creates an opportunity for researchers to inform instructional practice by suggesting methods for classroom intervention and for assessment of representational competence.

Beyond chemistry, representational competence is an important skill for student success in all STEM fields, yet how students use similar types of representations across different STEM disciplines has not been well examined. Future studies coupling eye tracking and articulation could elicit how students make decisions about features of similar representations based on the context of the discipline in which the representations are used. Such information would allow discipline-based education researchers to develop strategies for strengthening content, instructional, and assessment connections between disciplines to foster students' representational competence and success.
References

1. Kozma, R.; Russell, J. Students Becoming Chemists: Developing Representational Competence. In Visualization in Science Education; Models and Modeling in Science Education, Vol. 1; Gilbert, J. K., Ed.; Springer: Dordrecht, Netherlands, 2005.
2. Treagust, D. F.; Chittleborough, G. Chemistry: A Matter of Understanding Representations. In Subject-Specific Instructional Methods and Activities; Advances in Research on Teaching, Vol. 8; Brophy, J., Ed.; Emerald Group Publishing Limited: Bingley, U.K., 2001; pp 239–267.
3. Sim, J. H.; Daniel, E. G. S. Representational Competence in Chemistry: A Comparison between Students with Different Levels of Understanding of Basic Chemical Concepts and Chemical Representations. Cogent Education 2014, 1, 991180. DOI: 10.1080/2331186X.2014.991180.
4. Rau, M. A. Enhancing Undergraduate Chemistry Learning by Helping Students Make Connections among Multiple Graphical Representations. Chem. Educ. Res. Pract. 2015, 16, 654–669. DOI: 10.1039/C5RP00065C.
5. Grove, N. P.; Cooper, M. M.; Rush, K. M. Decorating with Arrows: Toward the Development of Representational Competence in Organic Chemistry. J. Chem. Educ. 2012, 89, 844–849.
6. Kozma, R. B.; Russell, J. Multimedia and Understanding: Expert and Novice Responses to Different Representations of Chemical Phenomena. J. Res. Sci. Teach. 1997, 34, 949–968. DOI: 10.1002/(SICI)1098-2736(199711)34:9<949::AID-TEA7>3.0.CO;2-U.
7. Wu, H. K.; Krajcik, J. S.; Soloway, E. Promoting Understanding of Chemical Representations: Students' Use of a Visualization Tool in the Classroom. J. Res. Sci. Teach. 2001, 38, 821–842. DOI: 10.1002/tea.1033.
8. Padalkar, S.; Hegarty, M. Models as Feedback: Developing Representational Competence in Chemistry. J. Educ. Psychol. 2015, 107, 451. DOI: 10.1037/a0037516.
9. Stieff, M.; Scopelitis, S.; Lira, M. E.; Desutter, D. Improving Representational Competence with Concrete Models. Sci. Educ. 2016, 100, 344–363. DOI: 10.1002/sce.21203.
10. Ainsworth, S. The Educational Value of Multiple-Representations when Learning Complex Scientific Concepts. In Visualization: Theory and Practice in Science Education; Gilbert, J. K., Reiner, M., Nakhleh, M., Eds.; Springer: New York, 2008; pp 191–208.
11. Cook, M. P. Visual Representations in Science Education: The Influence of Prior Knowledge and Cognitive Load Theory on Instructional Design Principles. Sci. Educ. 2006, 90, 1073–1091. DOI: 10.1002/sce.20164.
12. Raker, J. R.; Holme, T. A. A Historical Analysis of the Curriculum of Organic Chemistry Using ACS Exams as Artifacts. J. Chem. Educ. 2013, 90, 1437–1442. DOI: 10.1021/ed400327b.
13. Luxford, C. J.; Linenberger, K. J.; Raker, J. R.; Baluyut, J. Y.; Reed, J. J.; De Silva, C.; Holme, T. A. Building a Database for the Historical Analysis of the General Chemistry Curriculum Using ACS General Chemistry Exams as Artifacts. J. Chem. Educ. 2015, 92 (2), 230–236. DOI: 10.1021/ed500732q.
14. Linenberger, K. J.; Holme, T. A. Results of a National Survey of Biochemistry Instructors To Determine the Prevalence and Types of Representations Used during Instruction and Assessment. J. Chem. Educ. 2014, 91, 800–806. DOI: 10.1021/ed400201v.
15. Linenberger, K. J.; Holme, T. A. Biochemistry Instructors' Views toward Developing and Assessing Visual Literacy in Their Courses. J. Chem. Educ. 2015, 92, 23–31. DOI: 10.1021/ed500420r.
16. Nyachwaya, J. M.; Wood, N. B. Evaluation of Chemical Representations in Physical Chemistry Textbooks. Chem. Educ. Res. Pract. 2014, 15, 720–728. DOI: 10.1039/c4rp00113c.
17. Oliver-Hoyo, M.; Babilonia-Rosa, M. A. Promotion of Spatial Skills in Chemistry and Biochemistry Education at the College Level. J. Chem. Educ. 2017, 94, 996–1006. DOI: 10.1021/acs.jchemed.7b00094.
18. Ferk, V.; Vrtacnik, M.; Blejec, A.; Gril, A. Students' Understanding of Molecular Structure Representations. Int. J. Sci. Educ. 2003, 25, 1227–1245. DOI: 10.1080/0950069022000038231.
19. Cooper, M. M.; Grove, N.; Underwood, S. M.; Klymkowsky, M. W. Lost in Lewis Structures: An Investigation of Student Difficulties in Developing Representational Competence. J. Chem. Educ. 2010, 87, 869–874. DOI: 10.1021/ed900004y.
20. Linenberger, K. J.; Bretz, S. L. Generating Cognitive Dissonance in Student Interviews through Multiple Representations. Chem. Educ. Res. Pract. 2012, 13, 172–178. DOI: 10.1039/C1RP90064A.
21. Luxford, C. J.; Bretz, S. L. Development of the Bonding Representations Inventory to Identify Student Misconceptions about Covalent and Ionic Bonding Representations. J. Chem. Educ. 2014, 91, 312–320. DOI: 10.1021/ed400700q.
22. Sanger, M. J.; Greenbowe, T. J. Students' Misconceptions in Electrochemistry Regarding Current Flow in Electrolyte Solutions and the Salt Bridge. J. Chem. Educ. 1997, 74, 819. DOI: 10.1021/ed074p819.
23. Li, R.; Pelz, J.; Shi, P.; Alm, C.; Haake, A. Learning Eye Movement Patterns for Characterization of Perceptual Expertise. In ETRA 2012: Proceedings of the ACM Symposium on Eye Tracking Research & Applications; ACM Press, 2012; pp 393–396. DOI: 10.1145/2168556.2168645.
24. Van Gog, T.; Paas, F.; Van Merriënboer, J. J. G. Uncovering Expertise-Related Differences in Troubleshooting Performance: Combining Eye Movement and Concurrent Verbal Protocol Data. Appl. Cogn. Psychol. 2005, 19, 205–221. DOI: 10.1002/acp.1112.
25. Li, R.; Pelz, J.; Shi, P.; Haake, A. Learning Image-Derived Eye Movement Patterns to Characterize Perceptual Expertise. In Proceedings of the Annual Meeting of the Cognitive Science Society, 2012; Vol. 34, pp 1900–1905.
26. Rosengrant, D. Gaze Scribing in Physics Problem Solving. In Proceedings of the 2010 Symposium on Eye-Tracking Research & Applications, 2010; pp 45–48. DOI: 10.1145/1743666.1743676.
27. Holme, T. A. Assessment and Quality Control in Chemistry Education. J. Chem. Educ. 2003, 80, 594–596. DOI: 10.1021/ed080p594.
28. Reed, J.; Villafañe, S.; Raker, J.; Holme, T. A.; Murphy, K. L. What We Don't Test: What an Analysis of Unreleased ACS Exam Items Reveals about Content Coverage in General Chemistry Assessments. J. Chem. Educ. 2017, 94, 418–428. DOI: 10.1021/acs.jchemed.6b00863.
29. Eubanks, L. T.; Eubanks, I. D. ACS General Chemistry Study Guide; American Chemical Society, Division of Chemical Education, Examinations Institute: Ames, IA, 1998.
30. Wise, S. L.; DeMars, C. E. Low Examinee Effort in Low-Stakes Assessment: Problems and Potential Solutions. Educ. Assess. 2005, 10, 1–17. DOI: 10.1207/s15326977ea1001_1.
31. Wise, S. L.; Kong, X. Response Time Effort: A New Measure of Examinee Motivation in Computer-Based Tests. Appl. Meas. Educ. 2005, 18, 163–183. DOI: 10.1207/s15324818ame1802_2.
32. Haake, A. R.; West, J. M. eyePatterns software; SourceForge, 2005. https://sourceforge.net/projects/eyepatterns/ (accessed Jan. 26, 2018).
33. Casagrande, N. BABA: Basic-Algorithms-of-Bioinformatics Applet software; SourceForge, 2004. https://sourceforge.net/projects/baba/ (accessed Jan. 26, 2018).