
Chapter 10

Studying the Language of Organic Chemistry: Visual Processing and Practical Considerations for Eye-Tracking Research in Structural Notation

Katherine L. Havanki*

Department of Chemistry, The Catholic University of America, Washington, DC 20064, United States
*E-mail: [email protected]

Structural notation helps organic chemists convey a tremendous amount of information. While providing sequential details about a chemical change that can be read like a sentence, chemical equations and mechanisms also present an array of visual elements that the reader must search to find specific targets among a series of distractors. The information found in these types of representations includes facts (e.g., atom position, bond angles, stereochemistry) and visual information (e.g., lines, vertices, letters, symbols) relating to the compounds or equations. This chapter will focus on the latter, discussing how the visual information of symbolic notation is processed and providing practical considerations for developing stimuli and designing an eye-tracking experiment using this type of notation.


Introduction

Organic chemistry uses a variety of representations to convey information about organic compounds, including text, molecular and structural formulas, chemical equations, mechanisms, Lewis structures, Newman projections, space-filling and ball-and-stick models, and electrostatic potential maps. Each type of representation presents the viewer with a different task. For example, if presented with text written in English describing an organic reaction and asked to recall the reaction described, the viewer will perform characteristic eye movements, starting from the upper left of the text, moving across each line of text, and working down the body of the text. While some characteristics, such as fixation duration, are governed by the task of reading the text for recall, the overall pattern of eye movements is governed by the conventions of natural language (i.e., orthography and grammar). On the other hand, if presented with a three-dimensional electrostatic potential map and asked to describe the charge distribution in the molecule, the viewer is not constrained by rules like those of written language and is free to use idiosyncratic viewing strategies to process the image. This gives rise to unique patterns of eye movement. These examples are two very different types of tasks that would yield very different patterns of eye fixations. To simplify the discussion of eye-movement research and organic chemistry, a single type of representation, symbolic notation, has been selected for this discussion. Not only is this one of the most common forms of representation in organic chemistry, reading this notation combines elements of the three types of tasks identified in eye-movement research: reading, visual search, and scene perception (1). Although other forms of representation used in organic chemistry clearly differ from symbolic notation in form and task, many of the underlying concepts discussed in this chapter will carry over to these representations. The first section of this chapter will describe some of the characteristics and features of organic chemistry notation. This discussion is meant as a springboard for researchers to critically examine their own representations and to identify any critical features that would affect eye movements. Next, the chapter will discuss visual information processing and conclude with a discussion of special considerations when eye tracking this type of notation.

Characteristics of Organic Chemistry Notation

Many have argued that organic chemistry has its own language (1–4). Chemical symbols, alphanumeric symbols, and graphical elements (e.g., lines, simple geometric shapes, and wedges) are the basis for this shared chemical language. When put together, these basic elements create chemical formulas that act as a chemist's words. Alone, chemical formulas can provide information about characteristics such as solubility, reactivity, physical state, color, etc. When combined with other formulas, they can be used to describe a chemical transformation through a chemical equation (similar to a sentence). They can also be combined with the "electron arrow pushing" formalism to illustrate the steps of an organic chemical reaction in a reaction mechanism (similar to a paragraph).

Chemical formulas can be written in a variety of styles, all of which are governed by rules and international conventions. Every first-year organic chemistry student learns four distinct styles of chemical notation: molecular, condensed, expanded, and skeletal or line-angle (Figure 1).

Figure 1. Four distinct styles of symbolic notation used in organic chemistry education.

While all four notation styles convey the chemical composition of a molecule, they differ in the amount of information explicitly presented about the overall structure of the chemical species. The least explicit is the molecular formula. It gives only basic composition information by providing the number of each type of atom in the compound, but there is no information about the connectivity or spatial arrangement of atoms. Of the four representations in Figure 1, the most explicit style of notation is the expanded formula, which provides significant visual information to the reader about how all of the atoms are interconnected. The remaining styles of notation, condensed and skeletal, are referred to as "shorthand" styles. These styles strike a balance between explicit information about the organization of atoms within the compound and implicit features. In the case of the condensed formula, bonded atoms are grouped together; however, bonds are not explicitly shown. In the skeletal formula, carbon-carbon bonds and carbon-heteroatom bonds are explicitly represented as lines; however, hydrogen atoms are implicit and carbon atoms are not represented directly using symbols, but are shown as vertices and endpoints of lines. Heteroatoms are represented using element symbols along with their associated hydrogen atoms.

In organic classrooms, these four styles are taught to students as ways of representing molecules; however, in practice, personalized shorthand that mixes elements of these styles is often used for reasons such as brevity, speed of transcription, or to clarify ambiguous features. Additionally, other alphanumeric symbols and abbreviations further simplify the notation by representing complex structural features in more concise ways (e.g., Ph instead of phenyl, C6H5; Et instead of ethyl, C2H5; Ts or Tos instead of the tosyl group, CH3C6H4SO2). In Figure 2, the ethyl and phenyl groups are represented as abbreviations rather than drawing out their structural features using lines.

Figure 2. Heterocyclic compound that uses mixed notation and wedges to indicate spatial relationships. Ph represents a phenyl group and EtO represents an ethoxy group.

Not only does the representation in Figure 2 provide information about the arrangement of atoms and functional groups, it also provides information about important three-dimensional relationships through the use of wedges. The wedge and broken wedge indicate the spatial relationships of the OEt group (coming out of the plane of the page toward the reader) and the phenyl group (going behind the plane of the page away from the reader). Through the use of additional symbols, such as arrows and plus signs, chemical formulas can be linked together to show the progress of chemical reactions in a chemical equation. Mechanisms go one step further using the electron arrow pushing formalism, where curved arrows show the flow of electrons and explain the sequence of bonds being broken and formed. Other symbols give an indication of reaction conditions (Δ) or the formal charge on the atoms. Even lone pairs of electrons are shown if they are conceptually important to the understanding of the reaction. It is easy to tell from this brief introduction that the amount of information displayed in these types of representations can vary depending on a variety of factors such as the complexity of the chemical reaction being described, the stereospecificity of the reaction, the size of the chemical species involved, the level of detail of the representation, and the style of notation. These representations can provide a large amount of conceptual information as well as visual information. The next section will focus on the latter and explore issues related to how humans process the complex visual information presented in the symbolic notation used in organic chemistry.
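For researchers who generate notation stimuli programmatically, abbreviations like these can be applied automatically. The following is a minimal sketch, not part of this chapter's workflow, that assumes a recent RDKit build (2020.09 or later) providing the rdAbbreviations module and Cairo drawing support; the SMILES string is an arbitrary heterocycle standing in for a compound like the one in Figure 2, and which labels (e.g., Ph) are condensed depends on the abbreviation set shipped with the installed version.

```python
# Minimal sketch (assumption: RDKit >= 2020.09 with rdAbbreviations and Cairo support).
# Condenses common substituents into abbreviated labels before rendering a stimulus,
# mirroring the mixed notation discussed above.
from rdkit import Chem
from rdkit.Chem import rdAbbreviations
from rdkit.Chem.Draw import rdMolDraw2D

mol = Chem.MolFromSmiles("CCOC1=CC(=O)N(c2ccccc2)C1")  # arbitrary example structure

abbrevs = rdAbbreviations.GetDefaultAbbreviations()     # common groups such as Et, OEt, ...
condensed = rdAbbreviations.CondenseMolAbbreviations(mol, abbrevs, maxCoverage=0.8)

drawer = rdMolDraw2D.MolDraw2DCairo(400, 300)
rdMolDraw2D.PrepareAndDrawMolecule(drawer, condensed)
drawer.FinishDrawing()
with open("stimulus_abbreviated.png", "wb") as f:
    f.write(drawer.GetDrawingText())
```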

Visual Information Processing and Visual Attention

Visual information processing is a set of cognitive processes that assigns meaning to visual information. This assigned meaning is commonly called perception. Perception integrates three broad, interconnected topics: vision, memory, and attention. Humans use their eyes to gather visual information about the environment. Memory retains this information over time for future use in other cognitive activities. The role of attention is to guide the rapid movement of the eyes, called a saccade, from one fixation to the next (5). During fixation, the eye pauses over an area of interest, allowing visual information from the stimulus to be focused on the fovea, the region of the eye responsible for sharp central vision. At the same time, less detailed visual information is collected from the region of the eye outside the fovea; this is called peripheral or indirect vision. Information gathered by central and peripheral vision is converted to electrical impulses and transmitted to the brain via the optic nerve (6). In this way, attention controls what information the eyes provide to working memory. One prominent model of visual attention divides this control into two mechanisms that interact to guide attention: bottom-up processes and top-down processes (7). Initial attention is driven by the primitive bottom-up mechanism. In this fast mechanism, features are selected for fixation based on saliency, with the most salient feature being fixated on first, then the next most salient, and so on. Saliency is a relational property of objects in a scene. Objects are ranked based on how much they "pop out" of the scene. This distinctiveness is based on properties such as color, luminosity, size, orientation, shape, and movement onset. This effect is illustrated in Figure 3.

Figure 3. Illustration of distinctiveness based on color and luminosity.

In Figure 3, luminosity (or contrast) differences between the gray and the white regions define two 3 x 3 grids of squares. In grid I, the center square is 10% darker than the surrounding squares. The color difference is difficult to detect because the squares appear equally salient. In contrast, the center square in grid II is 50% darker and therefore noticeably more salient than the surrounding squares. It is important to note that the term salient should not be confused with "relevance" when discussing attention. A saliency map is a two-dimensional representation of the locations of the objects in the stimulus and their saliency. It is used to drive attention. The initial fixations on a stimulus are controlled by the bottom-up mechanism (8–10). For an example of a saliency map, refer to Chapter 3. If this were the only control of attention, fixations would follow a distinct pattern from most salient to least salient. There would be no regressive eye movements, and fixation durations would not

vary much because only visual information would be encoded for each fixation. However, eye movement data show fixation patterns that are not always driven by salient features; therefore, a second mechanism must also control attention. The second mechanism responsible for the patterns of eye fixations observed during specific tasks is the top-down mechanism. It is controlled by the cortex and is goal driven. Under this control, fixations are made to complete a specific task (e.g., locate a target, determine the type of mechanism, or identify functional groups). Contextual information, past experience, and prior knowledge are all used to recognize patterns and drive attention from one fixation to the next. Working in concert, these two mechanisms, top-down and bottom-up, control visual attention. Eye tracking measures the location where visual attention is focused (fixation) and how long the attention stays on a feature (duration). The next section will identify features of two different types of representations and discuss how they are processed.
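As a concrete illustration of the bottom-up mechanism, the sketch below computes a simple luminance-contrast saliency map for a grid stimulus like the one in Figure 3. It is a deliberate simplification of the saliency models cited above, which also incorporate color, orientation, and multiple spatial scales; the stimulus values are arbitrary.

```python
# Simplified bottom-up saliency sketch (assumption: luminance contrast only;
# full Itti & Koch-style models also use color, orientation, and multiple scales).
import numpy as np
from scipy.ndimage import gaussian_filter

def luminance_saliency(image, center_sigma=2, surround_sigma=16):
    """Center-surround contrast: how much each pixel 'pops out' of its neighborhood."""
    img = image.astype(float)
    center = gaussian_filter(img, center_sigma)      # fine-scale local luminance
    surround = gaussian_filter(img, surround_sigma)  # coarse-scale neighborhood luminance
    saliency = np.abs(center - surround)             # large difference = high saliency
    return saliency / (saliency.max() + 1e-9)        # normalize to [0, 1]

# Toy stimulus resembling Figure 3: a uniform field whose center square is darker.
stim = np.full((90, 90), 0.8)
stim[30:60, 30:60] = 0.8 * 0.5        # center square 50% darker, as in grid II
smap = luminance_saliency(stim)
print("Most salient pixel:", np.unravel_index(smap.argmax(), smap.shape))
```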

Representations

In cognitive science there are two types of representation: external representations and internal representations (11). This section will focus on the distinction between these two forms.

External Representations

External representations come in many forms, including written text, spoken word, static diagrams, scenes, dynamic animations, simulations, or music. All of these representations share a common feature: they are a set of relationships between elements that make up the representation. It is these relationships that are processed by the viewer during visual tasks (12). For organic chemistry notation, these relationships include the interconnections between atoms in a single molecule, the relationship of reactants to products in chemical equations, and the step-by-step making and breaking of bonds in a mechanism. During viewing, external representations like these are processed using both top-down and bottom-up mechanisms to create internal representations. External representations can be divided into categories based on their major features and how they are processed. Auditory, sentential, and diagrammatic representations are common external representations that are used in eye-tracking research. They can also be combined into more complex forms of representation that require special consideration, such as an animation with an audio narration. Audio sensory information is processed in a different way from visual information and is outside the scope of this chapter. For a review of auditory processing, see Baldwin (13). For this discussion the chapter will focus on the two written forms of external representations. In their seminal work, Larkin and Simon classified external representations as either sentential (a written expression of natural language) or diagrammatic (an arrangement of elements or items in an array or scene) (11). Both representations

capture relationships between elements but differ as to the type of relationships conveyed. Sentential representations are sequential and record temporal relationships governed by rules of grammar and language, while diagrammatic representations are governed by physical or geometric relationships like distance and often do not have a temporal component; however, many have argued that this distinction is not that clear and provide examples that counter this dichotomy (14, 15). Organic notation also resists this strict dichotomy. Chemical equations and mechanisms record sequential events and follow specific conventions set by the notation, but they also show the spatial arrangement between elements. For example, there is no difference between A + B → C and B + A → C. However, there is a big difference between those two equations and C → B + A. If we consider a mechanism, there is no convention for how a mechanism must be displayed other than the arrow formalisms; mechanisms do not conform to layout conventions the way natural languages do. An English sentence must be written from left to right, top to bottom; however, a mechanism can go left to right on the first line and then right to left on the second, or left to right on the second line, as long as the arrow formalism is followed. External representations, such as text, equations, and mechanisms, are viewed by the reader. Visual information processing mechanisms convert these external representations into internal representations that are used to complete cognitive tasks.

Internal Representations

Also known as cognitive representations or mental representations, internal representations are important for a variety of tasks, including problem solving. As the viewer perceives an external representation, they must convert what they are seeing into an internal representation of the stimulus held in working memory. During problem solving, this internal representation is manipulated to discern an answer. Success on the task depends on the accuracy of this representation. These representations are also used to strengthen a viewer's mental model, which is their understanding of the concept or topic (16). Unlike internal representations, mental models are stored in long-term memory and used during problem solving and pattern matching. Internal representations have been called the "Holy Grail" of cognitive psychology because they are difficult to measure directly. Some cognitive processes are automated and not consciously accessible by participants. They also suffer from considerable noise, including incomplete encoding and attentional limitations (17). This has led researchers to use techniques that do not rely solely on participant recall. Today, researchers are using brain data, including electroencephalograms (EEG) and blood oxygenation level dependent (BOLD) signals from functional magnetic resonance imaging (fMRI), to elucidate more about internal representations (18, 19). For a more detailed discussion, refer to Chapter 7 of this text. As discussed earlier, organic chemistry notation lies on the continuum between sentential and diagrammatic representations. In order to understand how relationships within notation are encoded, it is important to consider how

each type of representation is processed. Eye-movement research has been used successfully to study three main information processing tasks: reading, scene perception, and visual search. The next section provides a brief overview of each type of task and its implications for the study of how people read organic notation.

Eye-Tracking Tasks

Reading

Reading is a complex set of cognitive processes that construct meaning from written language. For decades, eye tracking has been used to study the cognitive processes involved with reading English as well as other languages (1, 20). Several factors have been identified that can influence eye movement behavior. For example, different reading tasks produce different eye movement behaviors. Silent reading has shorter fixation durations (225–250 ms) than oral reading (275–325 ms) (20). Features of the text can also influence viewing patterns. According to Rayner, factors such as the complexity of the relationships between elements of the text, the lexical frequency of the words used, sentence length, and word length all affect reading times and fixation durations (20). Longer words tend to be ones that the reader rarely encounters (low lexical frequency), so the reading times for these words tend to be longer. Shorter, more frequent words exhibit shorter fixation durations. Longer sentences have more complex relationships between the words and, therefore, have more fixations, exhibit more regressive eye movements, and have longer reading times (1). In addition to the words used, the appearance of the text can also affect viewing patterns. Morrison and Inhoff cited features of the text, such as quality, font shape, and spacing, as some of the factors that influence eye movements (21). For additional information about how the format of the text can affect eye-movement behaviors, refer to Chapter 3 of this text. From the viewing patterns, several models have been developed to show the interplay between eye movements and reading comprehension. However, none has emerged to account for all the observed components of reading. This chapter will discuss two models based on eye-tracking data: the E-Z Reader model and the Process Model for Reading Comprehension. The E-Z Reader model, proposed by Reichle, Rayner, and Pollatsek, is defined by the idea that attention is allocated serially, meaning fixation will occur on one word at a time (22, 23). The eyes land on a word, and pre-attentive visual processing begins. Eye movements are then programmed in two parallel stages:

1) Word identification (labile): the reader examines the word immediately to the right of the fixation and determines if it is familiar. If the word is simple and does not need to be fixated on, a signal is sent to stop the saccade programming. At the same time, the fixated word is identified and deciphered.

2) Saccade programming (non-labile): the saccade is programmed to land on the next word. If the signal is received to stop programming, visual

attention moves to the second word to the right of the current fixation and the process begins again with word identification. In this way, the model explains why some simple words (e.g., a, the, of) are skipped during reading. This model is useful for the discussion of reading organic chemistry notation because eye-tracking studies of organic chemical equations have shown that not all regions of the structures are fixated on (15). This could be due in part to the lexical familiarity of some parts of the structure, such as -CH2-. Other features garner greater fixation because they are less familiar to the reader and require more processing. In this way, saccades would be programmed to skip familiar groups and fixate on unfamiliar ones. The second model for discussion is the Process Model for Reading Comprehension, proposed by Just and Carpenter (24, 25). While this is an older model, it has many features that are worth discussing and are still applicable to reading in organic chemistry. This model rests on two major assumptions that play an important role in eye-tracking research and allow researchers to draw conclusions relating fixations to processing: the immediacy assumption and the eye-mind assumption. According to the immediacy assumption, the reader interprets a word as they encounter it. For example, as a participant reads the formula in Figure 2, they will fixate on the functional groups. When they fixate on the "Ph" they immediately interpret the symbol as a phenyl functional group. The eye-mind assumption states that processing takes place only during fixation; therefore, fixation time is a measurement of processing time. When the participant shifts their gaze from one functional group to another (for example, from the "Ph" to the "OEt"), they stop processing the first functional group and begin to process the new one. Using these two assumptions, Just and Carpenter proposed a six-stage process model that accounts for eye movements during reading comprehension (24), shown in Figure 4. In the first stage (Get Next Input) the eyes fixate on a word. The physical characteristics of the letters in the word are encoded in stage 2 (Extract Physical Features). The next three stages can occur in any order and as needed: stage 3 forms an internal representation of the word (Word Encoding and Lexical Access); stage 4 assigns the relevant relationships between the newly fixated word and those in working memory (Assign Case Roles); stage 5 integrates the new information into the global meaning of the text being read. At this point, if the end of the sentence has not been reached, the cycle repeats starting at stage 1 when the eyes move to a new fixation; however, if the end of the sentence has been reached, the entire thought contained in the sentence and its meaning is evaluated in stage 6 (Sentence Wrap-up). It is important to note that, for stages 2–6, this model includes access to working memory, which holds the representation for manipulation (e.g., physical characteristics, case roles, assigned meaning), and long-term memory for important declarative knowledge (e.g., grammar, syntax, domain knowledge, episodic knowledge). Models such as this one can be used as inspiration to develop new explanations of other viewing behaviors. This was the case for the author's work on reading

comprehension of chemical equations. This model was adapted to explain how organic chemistry equations are processed (15), shown in Figure 5. Just as in the previous model, there is an interplay between the processes driving eye movements, working memory that contains an internal representation of the reaction, and long-term memory where declarative and procedural knowledge about organic chemistry is stored. This model also contains six stages. In the get input stage, the eyes move across the chemical equation. This is a completely mechanical step. The search process seeks out key features of the molecule in an attempt to classify compounds and determine the orientation of the reaction center. The next three stages occur in no particular order: encoding and lexicon access, where meaning is assigned to the features and the mental representation of the equation is updated with new incoming information; intramolecular relationships, which assigns meaning to features within a given molecule; and intermolecular relationships, which assigns relationships between molecules in the chemical equation. During the last stage, reaction wrap-up, the reader attempts to clear up confusion, validate the internal representation of the equation, and identify any inconsistencies between the internal representation and the given equation.
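Under the immediacy and eye-mind assumptions described above, fixation duration within a region is commonly treated as a proxy for processing of that region's content. The sketch below is a minimal illustration of computing total dwell time per area of interest (AOI); the data format (fixation tuples), AOI coordinates, and durations are hypothetical, and this is not the analysis pipeline used in the cited studies.

```python
# Minimal sketch (assumptions: fixations are already detected and exported as
# (x, y, duration_ms) tuples; areas of interest (AOIs) are axis-aligned
# rectangles drawn around features such as functional groups or arrows).
from typing import Dict, List, Tuple

Fixation = Tuple[float, float, float]          # x, y, duration in ms
AOI = Tuple[float, float, float, float]        # left, top, right, bottom

def dwell_time_per_aoi(fixations: List[Fixation],
                       aois: Dict[str, AOI]) -> Dict[str, float]:
    """Sum fixation durations falling inside each AOI (eye-mind assumption:
    longer dwell time ~ more processing of that feature)."""
    totals = {name: 0.0 for name in aois}
    for x, y, dur in fixations:
        for name, (left, top, right, bottom) in aois.items():
            if left <= x <= right and top <= y <= bottom:
                totals[name] += dur
    return totals

# Hypothetical data: two AOIs around the "Ph" and "OEt" groups of a stimulus.
aois = {"Ph": (100, 200, 180, 260), "OEt": (400, 210, 470, 270)}
fixations = [(120, 230, 240), (150, 225, 310), (430, 240, 280)]
print(dwell_time_per_aoi(fixations, aois))      # {'Ph': 550.0, 'OEt': 280.0}
```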

Figure 4. Schematic of The Process Model for Reading Comprehension. Adapted with permission from reference (24). Copyright 1980 APA.

When the reader views a sentence or a reaction, they have an initial goal of finding meaning in the representation. This meaning may then be used for a wide variety of tasks, such as comprehension, reading aloud, predicting an outcome, or problem solving. Comprehension models like those discussed in this section cover the initial phase of reading, where the reader is trying to extract meaning from external representations.

Figure 5. Schematic of the Model for the Comprehension of Organic Chemistry. Adapted with permission from reference (15). Copyright 2012 Havanki.

While much is known about the relationship between eye movements and reading, visual search and scene perception are less well understood (1). The processing of diagrammatic representations is very different because there is no order or overarching organization like the grammar and syntax found in sentences. Therefore, developing an overarching model for the comprehension of diagrammatic representations is more difficult. The next section will discuss some of the current thinking on how viewers process diagrammatic representations using scene perception and visual search.

Scene Perception and Visual Search

Since diagrammatic representations vary widely, determining just one model to cover all possible scenarios is difficult. Several attempts have been made for specific types of diagrams. Just and Carpenter suggested a three-stage process model for mental rotation tasks: 1) the viewer searches for specific elements of the diagram; 2) elements are transformed (rotated or translated) and compared; and 3) the viewer confirms that the conditions were met (26). Other process models have been developed for content areas including mathematics (27, 28) and physics (11). However, these models do not cover eye movements outside of the specific task for which they were designed. In processing scenes, such as a photograph or a mechanism, the viewer will not fixate on every part of the scene. Instead, viewers focus on getting the gist, or layout, of the scene early in the processing, requiring very little time (as short as 260 ms) to gather enough information to recognize a scene (29, 30). How viewers select fixations in a scene depends on the saliency of features in the scene (discussed in a previous section) and the task presented to the viewer. In his famous eye-tracking study of participants performing different tasks while viewing the painting The Unexpected Visitor, Yarbus concluded that, "depending on the task in which a person is engaged, i.e., depending on the character of the information which he must obtain, the distribution of the points of fixation on an object will vary correspondingly, because different items of information are usually localized in different parts of an object" (31). The experiments were repeated with modern equipment in 2009 by DeAngelus and Pelz with similar results (32). For this reason, it is important that the directions to participants about the task are very clear. Related to the processing of a scene is the process of visual search (33). In a visual search task, the participant tries to quickly find a target among a set of distractors. The target is a feature or item with a specific set of characteristics of interest, which are specified by the researcher. The distractors have features similar to those of the target. Based on the degree to which the target and distractors differ, searches are classified into two categories: feature search or conjunction search. In feature search, the discriminator between the target and distractors is a single feature, such as color, shape, size, or orientation. These types of searches are believed to be parallel. Even when more distractors are added, the accuracy of the search and the reaction times do not vary significantly. Contrast that with conjunction search. In these types of searches, the target is distinguished from the distractors by a combination of two or more features. Here, reaction times increase and accuracy declines as the number of distractors increases, and the process is believed to be a two-step process, one parallel and one serial (8). Attention during these searches can be influenced by visual as well as auditory cues (34, 35). These cues usually reveal the location of target features. Differences in color, shape, orientation, size, or an abrupt onset of motion can enhance an element of the diagram or scene and cause a shift in attention. When it is the target that has this enhanced saliency, the efficiency of the search always increases. If it is the distractors that have enhanced saliency, the opposite holds true.
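To make the distinction between feature and conjunction search concrete, the following toy sketch (not drawn from this chapter) generates one display of each type with matplotlib; the letters, colors, and set size are arbitrary choices for illustration.

```python
# Toy search displays (assumption: letter stimuli and colors are arbitrary).
# Feature search: the target differs by one feature (color).
# Conjunction search: the target shares color with some distractors and shape with others.
import random
import matplotlib.pyplot as plt

def make_display(ax, n_items, conjunction):
    random.seed(1)
    items = [("O", "red")]                                   # target: a red "O"
    for _ in range(n_items - 1):
        if conjunction:
            items.append(random.choice([("O", "green"), ("X", "red")]))  # share one feature
        else:
            items.append(("O", "green"))                      # differ by color only
    random.shuffle(items)
    for letter, color in items:
        ax.text(random.random(), random.random(), letter, color=color,
                fontsize=18, ha="center", va="center")
    ax.set_xticks([]); ax.set_yticks([])
    ax.set_title("conjunction" if conjunction else "feature")

fig, axes = plt.subplots(1, 2, figsize=(8, 4))
make_display(axes[0], 20, conjunction=False)
make_display(axes[1], 20, conjunction=True)
fig.savefig("search_displays.png", dpi=150)
```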

Visual context and experience can also guide attention (36, 37). For example, if participants were viewing a series of pictures of labs to locate the safety equipment, the presence of an elephant in one of the pictures would immediately attract attention, while a beaker on the counter may not. A beaker is expected in this context; an elephant is not. Prior experience can also guide a participant's sequence of eye movements (i.e., scanpath). It has been shown in a variety of settings that visual search for experienced viewers is not random. Imagine a study where participants are asked to locate a safety shower in a picture of an academic lab. Because of the prior experience of the participants with shower heads in their own labs or in bathrooms, participants are more likely to look toward the ceiling in the picture rather than near the floor to locate the safety shower. Other factors that drive searches are individual characteristics of the participant, such as relevant content knowledge, expertise level, prior exposure to the stimuli, or visual ability (15, 38–41). The viewer does not have to exert conscious effort on a task that they have done at an earlier time, and they can usually locate a familiar target faster. Prior knowledge or pre-exposure to the stimulus can also lead to anticipatory eye movements, which are movements of the eyes to regions of interest prior to being prompted (42). To counteract this effect, screening of participants is required. Having discussed the tasks involved with visual information processing, the final section of this chapter will discuss considerations for designing a study that looks at reading, scene perception, or visual search using the symbolic notation of organic chemistry.

Considerations for Designing Organic Chemistry Notation Stimuli

Chapter 3 outlines, in general, several considerations that should be made when selecting or designing any stimuli. This section will discuss these considerations as they apply to organic chemistry notation.

Stimuli Design

As with all good research, significant time should be spent on the experimental design, including the stimuli design. Stimuli must be carefully designed so as to address the research question, providing enough information to complete the tracking task without unintentionally influencing eye movement behavior. Consideration must be given to how the stimuli are presented. Notation style, including the use of abbreviations or other symbolic notation, must be carefully considered. Additional features to think about, including equation size, location of target functional groups, and white space availability, are also discussed below.


Style of the Notation

When choosing the style of notation, there are several things that need to be considered. First is which style of notation to choose. This should be guided by the research question. Once the style is chosen, the author recommends that the researcher compile a list of how the features will be displayed. This will help keep all the stimuli internally consistent. The researcher should ask several questions about how the notation will be written: Will alcohols be written as "-OH" or "-O-H"? Will terminal methyl groups be explicit (-CH3) or implied by a line ending? Will carboxylic acids be written in the skeletal form, as "-COOH", or as "-C(O)OH"? This list should be fairly detailed so as to clear up any ambiguity before data collection begins. Next, the researcher should determine if abbreviations will be used. Although abbreviations simplify the notation, for a novice reader, encountering one may be like encountering an infrequent word. As discussed earlier, fixation times will increase if the reader has had infrequent interactions with that particular abbreviation. There are several options to address this, including offering a legend or providing training at the start of the session to ensure the participants have encountered these features. However, these options may also produce a cueing effect that influences the reader to fixate on these specific features. This is something the researcher should keep in mind. Similarly, the use of color in the stimuli should also be considered. As discussed in Chapter 3 of this text, color can play a significant role in guiding attention, specifically when the colors are dissimilar (37). This is why many textbooks use a scheme of coloring functional groups to help guide the learner's attention to key features of the structure. However, depending on the research question, colored functional groups may have the unintended consequence of influencing attention; therefore, the decision to use monochromatic or colored symbolic notation must be driven by the research question and applied consistently throughout the study. Finally, when considering structural notation, line weights should be taken into consideration. Although there is no current research on attention and line weights in structural drawings, related research in reading does provide some guidance. Stroke, also known as weight, of a font is the thickness of a character relative to its height. Using a standard Courier font, Bernard et al. showed that for text rendered with either very thin strokes (27% of the normal stroke) or very thick strokes (304% of the normal stroke), reading times were significantly longer than for text rendered with standard or near-standard stroke (43). For very thick strokes, they point to the loss of specific letter features, such as intra-letter spacing, that leads to confusion between letters like "o" and "a". For letters with a thinner stroke, the contrast between the text and the background is lower. This decrease in contrast can lead to an increase in reading times, fixation durations, and fixation frequency (44). A similar pattern can be found for pseudotext (strings of letters and numbers of variable length) (45) and visual search times (46). Figure 6 illustrates the effect of line weight on organic notation. The three representations of benzene shown in Figure 6 illustrate the importance of selecting an appropriate line weight for a stimulus. Structure A uses an increased line width, which leads to a loss of intra-bond spacing in the double

bonds. Structure B uses the standard line width. Structure C uses a reduced line width, which leads to a significant decrease in the contrast between the bonds and the background. It is important to note that, once selected, line weights should be tested on the eye-tracker display to ensure proper sizing and resolution.

Figure 6. The skeletal formula of benzene illustrated using three different line widths.
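Researchers who prepare skeletal-structure stimuli programmatically can control line weight directly. The following minimal sketch assumes an RDKit build with Cairo drawing support; the three width values are arbitrary and, as noted above, should be piloted on the actual eye-tracker display.

```python
# Minimal sketch (assumption: RDKit with Cairo drawing support is available;
# the chosen widths are arbitrary and should be piloted on the actual display).
from rdkit import Chem
from rdkit.Chem.Draw import rdMolDraw2D

benzene = Chem.MolFromSmiles("c1ccccc1")

for label, width in [("A_thick", 6), ("B_standard", 2), ("C_thin", 1)]:
    drawer = rdMolDraw2D.MolDraw2DCairo(300, 300)
    drawer.drawOptions().bondLineWidth = width     # controls line weight of the skeleton
    rdMolDraw2D.PrepareAndDrawMolecule(drawer, benzene)
    drawer.FinishDrawing()
    with open(f"benzene_{label}.png", "wb") as f:
        f.write(drawer.GetDrawingText())
```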
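One common way to verify sizing on a particular display, not specific to this chapter, is to express stimulus dimensions in degrees of visual angle at the planned viewing distance; the dimensions below are hypothetical.

```python
# Visual-angle check for stimulus sizing (assumption: a fixed viewing distance,
# as required when participants cannot lean toward the screen).
import math

def visual_angle_deg(size_cm: float, distance_cm: float) -> float:
    """Angle subtended by a stimulus of a given physical size at the eye."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

# Hypothetical setup: a 0.5 cm tall atom label viewed from 65 cm.
print(f"{visual_angle_deg(0.5, 65):.2f} degrees")   # ~0.44 degrees
```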

Complexity of Notation

The complexity of a representation affects the ability of a participant to process the visual information. The complexity of notation falls into two categories: lexical complexity and diagrammatic complexity. In the previous section, the author hinted at lexical complexity. Words that are infrequently used tend to have longer fixation times and more frequent fixations. Since readers are unfamiliar with these infrequent words, processing takes longer and may require several regressions over the same words in order to determine the meaning. More frequently used words have shorter fixation times and fewer overall fixations (1). The same applies to chemical notation. If reaction conditions, abbreviations, or symbols used in the equation are infrequently encountered by participants, this will affect not only their eye movement behavior but also their reading comprehension. Diagrammatic representations also have complexity, as measured by the number of elements in the diagram (47). The more complex a scene, the greater the demands on working memory and the greater the attentional effects (48, 49). Increased complexity prompts longer scanpaths and a higher frequency of fixations, regardless of the content of the diagram or scene. More elements mean more spatial relationships and more distractors to view. As the number of distractors increases, so do reaction times and fixation durations. In previous research, the author showed that equations with molecules that had a high number of visual elements had significantly longer reading times, longer fixation durations, and higher fixation frequencies than those with molecules that had a low number of visual elements (15). Careful selection of molecules for the reactions is important. Be aware of the number of visual elements in the equation and select molecules of similar complexity across trials.
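When matching complexity across trials, a rough automated tally of structural features can serve as a first pass. The sketch below counts heavy atoms, bonds, heteroatom labels, and rings with RDKit; this is only one possible proxy for "visual elements" and is not the measure used in the author's cited study.

```python
# Rough complexity tally (assumption: counting heavy atoms, bonds, heteroatom
# labels, and rings is only one possible proxy for "visual elements").
from rdkit import Chem

def visual_element_tally(smiles: str) -> dict:
    mol = Chem.MolFromSmiles(smiles)
    return {
        "heavy_atoms": mol.GetNumAtoms(),
        "bonds": mol.GetNumBonds(),
        "heteroatom_labels": sum(1 for a in mol.GetAtoms() if a.GetSymbol() != "C"),
        "rings": mol.GetRingInfo().NumRings(),
    }

# Hypothetical candidate reactants to be matched across trials.
for smi in ["CCO", "CC(=O)OC1=CC=CC=C1C(=O)O", "c1ccc2ccccc2c1"]:
    print(smi, visual_element_tally(smi))
```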


Size

For over 100 years, the size of fonts has been studied, first in print and now on digital displays. In a comprehensive study of printed text, Tinker found that 10 pt fonts are optimal for reading when compared with 6 pt, 8 pt, 12 pt, and 14 pt (50). Later work by Bernard et al. found that 12 pt fonts produced faster reading times than 10 pt and 14 pt fonts (51). While Beymer, Russell, and Orton found that first-pass reading speeds do not significantly change with font size, they did find significant differences in fixation duration and return sweeps, where smaller fonts elicited longer fixations and faster return sweeps than larger fonts (52). Larger font sizes are also preferred by readers when viewing text on a computer screen (53). When smaller font sizes are used, viewers have a tendency to decrease the viewing distance by leaning in; however, this poses a problem since eye-tracking participants must keep a constant distance from the screen. It is recommended that a 12 pt font be used for greatest readability. In order to compare fixation patterns among stimuli, the font size should remain constant. As discussed above, the optimal font size is 12 pt; however, this may pose a problem when a stimulus contains a significant amount of visual information, like a mechanism. Large mechanisms or chemical equations involving complex species are usually scaled to fit the screen, which reduces the font size. This in turn increases the fixation durations on text. During stimuli design, the researcher should keep this in mind and ensure that the font size among the stimuli remains constant. This may mean that smaller mechanisms use smaller font sizes so that all stimuli are consistent.

Location of Target Functional Groups

An informal survey of several first-year organic chemistry textbooks for this chapter found two common patterns of display for chemical equations: 1) the site of the reaction center or the leaving group is usually positioned close to the tail end of the arrow symbol, or 2) species are positioned so that the reacting functionality is in proximity. The products usually match the orientation of a reactant, allowing the reader to compare reactant and product, as illustrated in Figure 7.

Figure 7. Reactions illustrating common orientations used to teach undergraduate students.

Research has shown that the location of features matters; when viewers regularly encounter features in specific locations, they form expectations. Viewers create mental models that help them understand phenomena like chemical reactions. This

has been shown in a variety of applications, including web design (54), scene perception (55), and advertising (56). In these studies, when targets were located in unexpected locations, there was an increase in search times and a difference in fixation patterns. Keeping this phenomenon in mind, researchers should consider the chosen orientation of reactants and products. If stimuli are to be compared, the orientation of reactants and products should be consistent among stimuli. Consider the following example of two Diels-Alder reactions (Figure 8).

Figure 8. Orientations of reactants that may affect reading behaviors.

In reaction I, the diene and the dienophile are oriented in such a way as to imply the cycloaddition that occurs. In reaction II, the viewer needs to mentally reorient the reactants in order to determine the products of the cycloaddition. Eye movements for reaction II should exhibit a decrease in fixation frequency and/or longer fixation times. This is supported by the work on mental rotation of stimuli, including blocks (26, 57–60), geometric figures (61), images during driving (62), and elements in a scene (63). If orientation is part of the research question, careful, systematic rotation needs to be part of the design. It has been demonstrated that the number of fixations is higher, and the frequency is lower, for large rotation angles than for smaller rotation angles (59). Therefore, the angle of rotation should be carefully controlled among stimuli so as to make the results comparable (a sketch of one way to apply controlled rotations follows the next subsection).

White Space Availability

There are several advantages to including significant white space in the design of stimuli. Gestalt grouping is the natural tendency of humans to group objects together based on proximity. White space separates different elements of a chemical equation or mechanism, allowing the reader to quickly and easily group elements into the different features of the chemical equation or mechanism (reactants, conditions, products, etc.). White space can direct a viewer's saliency-based visual attention, as in the case of paintings (64), and it can define the size of a chemical species in the same way that the white spaces in this text define each word.
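As a concrete illustration of the controlled rotation mentioned in the previous subsection, the following minimal sketch rotates the 2D depiction coordinates of a reactant by a fixed, known angle before rendering; it assumes RDKit is used for layout, and the molecule and angle are arbitrary examples rather than stimuli from this chapter.

```python
# Minimal sketch (assumptions: RDKit handles 2D layout; the molecule and the
# 90-degree angle are arbitrary examples, not stimuli from this chapter).
import math
from rdkit import Chem
from rdkit.Chem import AllChem
from rdkit.Chem.Draw import rdMolDraw2D
from rdkit.Geometry import Point3D

def rotate_2d(mol, degrees):
    """Rotate the 2D depiction coordinates about the origin by a fixed angle."""
    conf = mol.GetConformer()
    t = math.radians(degrees)
    for i in range(mol.GetNumAtoms()):
        p = conf.GetAtomPosition(i)
        conf.SetAtomPosition(i, Point3D(p.x * math.cos(t) - p.y * math.sin(t),
                                        p.x * math.sin(t) + p.y * math.cos(t), 0.0))

diene = Chem.MolFromSmiles("C=CC=C")       # arbitrary example reactant
AllChem.Compute2DCoords(diene)
rotate_2d(diene, 90)                       # systematic, known rotation angle

drawer = rdMolDraw2D.MolDraw2DCairo(300, 300)
rdMolDraw2D.PrepareAndDrawMolecule(drawer, diene)
drawer.FinishDrawing()
with open("diene_rotated_90.png", "wb") as f:
    f.write(drawer.GetDrawingText())
```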

Task Dependency of Eye Tracking

It is well known that eye movements are task dependent and, as Just and Carpenter would argue, eye movements during a specific task reflect the underlying cognitive processes (24). Different tasks involve different cognitive processes and therefore different eye movements. This has been illustrated in a variety of fields, including reading engineering schematics (65), reading graphs (66), and memorizing or searching scenes (67). In each instance, the eye movements were different depending on the task given to the participant. Even when different tasks are performed on the same stimuli, the eye movements exhibit significant differences. Just as with other eye-tracking studies, those with organic notation need to have very specific tasks and directions to the participant that are defined by the research question. Since attention is driven by a combination of bottom-up and top-down processes, fixation patterns are defined by both the saliency of the features in the stimuli and the task. So, defining a detailed task is very important. There are a variety of tasks that can be defined, including:

• "Free viewing": the participant is able to view the stimulus without a specific purpose.
• Natural reading: the participant is asked to read the reaction for comprehension.
• Identification: the participant is asked to identify a feature of the stimuli (e.g., a functional group, the type of reaction, the name of the reaction).
• Predictive: the participant predicts something about the stimuli (e.g., the products of a reaction, the stereochemistry of a product, the next step in a mechanism).

These tasks are all very different and yield very different viewing patterns. It is also important to note that the "free viewing" option is the only task in the list that does not have a clearly defined goal and would be driven primarily by bottom-up cognitive processes.

Natural Reading versus Timed Viewing

The difference between natural reading and timed viewing is the duration of the stimulus. In natural reading, the presentation of the stimulus ends with input from the participant. This means that viewing times are variable from stimulus to stimulus for a given participant and among participants in the same study. Timed viewing is when a stimulus is shown to a participant for only a specific amount of time. The choice of which to select should be driven by the research question. While natural reading times are more authentic to learning environments, they are more difficult to analyze because of the time differences. Timed stimuli have a more straightforward analysis because all stimuli are viewed for the same amount of time; however, they are not authentic to how teaching and reading occur. While this chapter discusses many things that a researcher should keep in mind while planning and designing an experiment, this is only a beginning. This

chapter should be used as a springboard to start discussions in your research group. Eye tracking organic chemistry notation is challenging; however, careful control of variables such as notation style, complexity, cuing, and priming effects will lead to less frustrating analyses and better results.

References

1. Rayner, K. The 35th Sir Frederick Bartlett Lecture: Eye Movements and Attention in Reading, Scene Perception, and Visual Search. J. Exp. Psychol. 2009, 62, 1457–1506.
2. Crosland, M. P. Historical Studies in the Language of Chemistry; Dover Publications, Inc.: Mineola, NY, 2004.
3. Habraken, C. L. Integrating into Chemistry Teaching Today's Student's Visuospatial Talents and Skills, and the Teaching of Today's Chemistry's Graphical Language. J. Sci. Educ. Technol. 2004, 13, 89–94.
4. Laszlo, P. Towards Teaching Chemistry as a Language. Sci. Educ. 2013, 22, 1669–1706.
5. Carrasco, M. Visual Attention: The Past 25 Years. Vision Res. 2011, 51, 1484–1525.
6. Daw, N. How Vision Works: The Physiological Mechanisms Behind What We See; Oxford University Press: Oxford, UK, 2012.
7. Parkhurst, D.; Law, K.; Niebur, E. Modeling the Role of Salience in the Allocation of Overt Visual Attention. Vision Res. 2002, 42, 107–123.
8. Itti, L.; Koch, C. A Saliency-based Search Mechanism for Overt and Covert Shifts of Visual Attention. Vision Res. 2000, 40, 1489–1506.
9. Koch, C.; Ullman, S. Shifts in Selective Visual Attention: Towards the Underlying Neural Circuitry. In Matters of Intelligence; Synthese Library: Studies in Epistemology, Logic, Methodology, and Philosophy of Science; Vaina, L. M., Ed.; Springer: Dordrecht, Netherlands, 1987; Vol. 188, pp 115–141.
10. Gibson, J. J. A Theory of Direct Visual Perception. In The Psychology of Knowing; Royce, S. R.; Rozenboom, W. W., Eds.; Gordon & Breach: New York, 1972; pp 77–89.
11. Larkin, J. H.; Simon, H. A. Why a Diagram is (Sometimes) Worth Ten Thousand Words. Cogn. Sci. 1987, 11, 65–100.
12. Zhang, J. The Nature of External Representations in Problem Solving. Cogn. Sci. 1997, 21, 179–217.
13. Baldwin, C. L. Auditory Cognition and Human Performance: Research and Applications; CRC Press: Boca Raton, FL, 2012.
14. Cheng, P. H. C.; Lowe, R. K.; Scaife, M. Cognitive Science Approaches to Understanding Diagrammatic Representations. Artif. Intell. Rev. 2001, 15, 79–94.
15. Havanki, K. A Process Model for the Comprehension of Organic Chemistry Notation. Ph.D. Dissertation, The Catholic University of America, Washington, DC, 2012.
16. Johnson-Laird, P. N. Mental Models and Human Reasoning. Proc. Natl. Acad. Sci. U. S. A. 2010, 107, 18243–18250.
17. Ma, W. J.; Husain, M.; Bays, P. M. Changing Concepts of Working Memory. Nat. Neurosci. 2014, 17, 347–356.
18. Smith, M. L.; Gosselin, F.; Schyns, P. G. Measuring Internal Representations from Behavioral and Brain Data. Curr. Biol. 2012, 22, 191–196.
19. Nestor, A.; Vettel, J. M.; Tarr, M. J. Internal Representations for Face Detection: An Application of Noise-based Image Classification to BOLD Responses. Hum. Brain Mapp. 2013, 34, 3101–3115.
20. Rayner, K. Eye Movements in Reading and Information Processing: 20 Years of Research. Psychol. Bull. 1998, 124, 372–422.
21. Morrison, R. E.; Inhoff, A. W. Visual Factors and Eye Movements in Reading. Visible Language 1981, 15, 129–146.
22. Reichle, E. D.; Pollatsek, A.; Rayner, K. Using E-Z Reader to Simulate Eye Movements in Nonreading Tasks: A Unified Framework for Understanding the Eye–mind Link. Psychol. Rev. 2012, 119, 155.
23. Reichle, E. D.; Rayner, K.; Pollatsek, A. The E-Z Reader Model of Eye-movement Control in Reading: Comparisons to Other Models. Behav. Brain Sci. 2003, 26, 445–476.
24. Just, M. A.; Carpenter, P. A. A Theory of Reading: From Eye Fixations to Comprehension. Psychol. Rev. 1980, 87, 329–354.
25. Just, M. A.; Carpenter, P. A. Using Eye Fixations to Study Reading Comprehension. In New Methods in Reading Comprehension Research; Kieras, D. E.; Just, M. A., Eds.; Erlbaum: Hillsdale, NJ, 1984; pp 151–182.
26. Just, M. A.; Carpenter, P. A. Cognitive Coordinate Systems: Accounts of Mental Rotation and Individual Differences in Spatial Ability. Psychol. Rev. 1985, 92, 137–172.
27. Koedinger, K. R.; Anderson, J. R. Abstract Planning and Perceptual Chunks: Elements of Expertise in Geometry. Cogn. Sci. 1990, 14, 511–550.
28. Epelboim, J.; Suppes, P. A Model of Eye Movements and Visual Working Memory During Problem Solving in Geometry. Vision Res. 2001, 41, 1561–1574.
29. Underwood, G. Eye Fixations on Pictures of Natural Scenes: Getting the Gist and Identifying the Components. In Cognitive Processes in Eye Guidance; Underwood, G., Ed.; Oxford University Press: Oxford, UK; pp 163–188.
30. Rousselet, G. A.; Joubert, O. R.; Fabre-Thorpe, M. How Long to Get to the "Gist" of Real-world Natural Scenes? Visual Cogn. 2005, 12, 852–877.
31. Yarbus, A. L. In Eye Movements and Vision; Plenum Press: New York, 1967; pp 171–211.
32. DeAngelus, M.; Pelz, J. B. Top-down Control of Eye Movements: Yarbus Revisited. Visual Cogn. 2009, 17, 790–811.
33. Wolfe, J. M. Guided Search 2.0: A Revised Model of Visual Search. Psychon. Bull. Rev. 1994, 1, 202–238.
34. Chun, M. M. Contextual Cueing of Visual Attention. Trends Cognit. Sci. 2000, 4, 170–178.
35. Nakajima, J.; Kimura, A.; Sugimoto, A.; Kashino, K. Visual Attention Driven by Auditory Cues. In MultiMedia Modeling; MMM 2015; Lecture Notes in Computer Science; He, X.; Luo, S.; Tao, D.; Xu, C.; Yang, J.; Hasan, M. A., Eds.; Springer: Cham, Switzerland, 2015; Vol. 8936, pp 74–86.
36. Eimer, M. The Neural Basis of Attentional Control in Visual Search. Trends Cogn. Sci. 2014, 18, 526–535.
37. Wolfe, J. M.; Horowitz, T. S. Five Factors that Guide Attention in Visual Search. Nat. Hum. Behav. 2017, 1, 1–8.
38. Dogusoy-Taylan, B.; Cagiltay, K. Cognitive Analysis of Experts' and Novices' Concept Mapping Processes: An Eye Tracking Study. Comp. Human Behav. 2014, 36, 82–93.
39. Reingold, E. M.; Charness, N.; Pomplun, M.; Stampe, D. M. Visual Span in Expert Chess Players: Evidence From Eye Movements. Psychol. Sci. 2001, 12, 48–55.
40. Van Gog, T.; Paas, F.; Van Merrienboer, J. J. G. Uncovering Expertise-related Differences in Troubleshooting Performance: Combining Eye Movement and Concurrent Verbal Protocol Data. Appl. Cogn. Psychol. 2005, 19, 205–221.
41. Mayer, R. E. Unique Contributions of Eye-tracking Research to the Study of Learning with Graphics. Learn. Instr. 2010, 20, 167–171.
42. Taya, S.; Windridge, D.; Osman, M. Trained Eyes: Experience Promotes Adaptive Gaze Control in Dynamic and Uncertain Visual Environments. PLoS One 2013, 8. https://doi.org/10.1371/journal.pone.0071371 (accessed May 20, 2018).
43. Bernard, J. B.; Kumar, G.; Junge, J.; Chung, S. T. L. The Effect of Letter-stroke Boldness on Reading Speed in Central and Peripheral Vision. Vision Res. 2013, 84, 33–42.
44. Legge, G. E.; Rubin, G. S.; Luebker, A. Psychophysics of Reading—V. The Role of Contrast in Normal Vision. Vision Res. 1987, 27, 1165–1177.
45. Roufs, J. A. J.; Boschman, M. C. Text Quality Metrics for Visual Display Units: I. Methodological Aspects. Displays 1997, 18, 37–43.
46. Näsänen, R.; Ojanpää, H.; Kojo, I. Effect of Stimulus Contrast on Performance and Eye Movements in Visual Search. Vision Res. 2001, 41, 1817–1824.
47. Halford, G. S.; Wilson, W. H.; Phillips, S. Processing Capacity Defined by Relational Complexity: Implications for Comparative, Developmental, and Cognitive Psychology. Behav. Brain Sci. 1998, 21, 803–831.
48. Vlaskamp, B. N.; Hooge, I. T. Crowding Degrades Saccadic Search Performance. Vision Res. 2006, 46, 417–425.
49. Bradley, M. M.; Houbova, P.; Miccoli, L.; Costa, V. D.; Lang, P. J. Scan Patterns When Viewing Natural Scenes: Emotion, Complexity, and Repetition. Psychophysiology 2011, 48, 1544–1553.
50. Tinker, M. Legibility of Print; Iowa State University Press: Ames, IA, 1963.
51. Bernard, M.; Lida, B.; Riley, S.; Hackler, T.; Janzen, K. Determining the Best Online Font for Older Adults. Usability News 2001, 3, 50–60.
52. Beymer, D.; Russell, D.; Orton, P. An Eye Tracking Study of How Font Size and Type Influence Online Reading. In Proceedings of the 22nd British HCI Group Annual Conference on People and Computers: Culture, Creativity, Interaction − Volume 2; BCS Learning & Development Ltd: Swindon, UK, 2007; Vol. 2, pp 15–18.
53. Lang, C.; Nguyen, T. V.; Katti, H.; Yadati, K.; Kankanhalli, M.; Yan, S. Depth Matters: Influence of Depth Cues on Visual Saliency. In Computer Vision – ECCV 2012; Lecture Notes in Computer Science; Springer: Berlin, Germany, 2012; Vol. 7573, pp 101–115.
54. Bernard, M. L. Developing Schemas for the Location of Common Web Objects. In Proceedings of the Human Factors and Ergonomics Society 45th Annual Meeting, October 2001; SAGE Publications: Thousand Oaks, CA, 2001; Vol. 45, pp 1161–1165.
55. Biederman, I.; Mezzanotte, R. J.; Rabinowitz, J. C. Scene Perception: Detecting and Judging Objects Undergoing Relational Violations. Cogn. Psychol. 1982, 14, 143–177.
56. Resnick, M.; Albert, W. The Impact of Advertising Location and User Task on the Emergence of Banner Ad Blindness: An Eye-Tracking Study. Int. J. Hum. Comput. Interact. 2014, 30, 206–219.
57. Just, M. A.; Carpenter, P. A. Eye Fixations and Cognitive Processes. Cogn. Psychol. 1976, 8, 441–480.
58. Carpenter, P. A.; Just, M. A. Eye Fixations During Mental Rotation. In Eye Movements and the Higher Psychological Functions; Senders, J. W.; Fisher, D. F.; Monty, R. A., Eds.; Erlbaum: Hillsdale, NJ, 1978; pp 115–133.
59. Paschke, K.; Jordan, K.; Wüstenberg, T.; Baudewig, J.; Leo Müller, J. Mirrored or Identical — Is the Role of Visual Perception Underestimated in the Mental Rotation Process of 3D-objects? A Combined fMRI-Eye Tracking Study. Neuropsychologia 2012, 50, 1844–1851.
60. Bałaj, B. The Influence of Object Complexity and Rotation Angle on Eye Movements During Mental Rotation. Roczniki Psychologiczne/Annals of Psychology 2015, 18, 485–503.
61. Lin, J. J. H.; Lin, S. S. J. Tracking Eye Movements When Solving Geometry Problems with Handwriting Devices. J. Eye Mov. Res. 2014, 7, 1–15.
62. Recarte, M. A.; Nunes, L. M. Effects of Verbal and Spatial-imagery Tasks on Eye Fixations While Driving. J. Exp. Psychol. Appl. 2000, 6, 31–43.
63. Nakatani, C.; Pollatsek, A. An Eye Movement Analysis of "Mental Rotation" of Simple Scenes. Percept. Psychophys. 2004, 66, 1227–1245.
64. Fan, Z.; Zheng, X. S.; Zhang, K. Computational Analysis and Eye Movement Experiments of White Space in Chinese Paintings. In 2015 IEEE International Conference on Progress in Informatics and Computing (PIC), Nanjing, China, December 18−20, 2015; IEEE: New York, 2015.
65. Lohmeyer, Q.; Matthiesen, S.; Meboldt, M. Task-dependent Visual Behaviour of Engineering Designers – An Eye Tracking Experiment. In DS 77: Proceedings of the DESIGN 2014 13th International Design Conference, Dubrovnik, Croatia, May 19−22, 2014; The Design Society: Scotland, 2014.
66. Goldberg, J.; Helfman, J. Eye Tracking for Visualization Evaluation: Reading Values on Linear Versus Radial Graphs. Inf. Vis. 2011, 10, 182–195.
67. Castelhano, M. S.; Mack, M. L.; Henderson, J. M. Viewing Task Influences Eye Movement Control During Active Scene Perception. J. Vis. 2009, 9, 1–15.