Chapter 4

Graphs: Working with Models at the Crossroad between Chemistry and Mathematics

It’s Just Math: Research on Students’ Understanding of Chemistry and Mathematics. © 2019 American Chemical Society.

Felix M. Ho,1,* Maja Elmgren,1 Jon-Marc G. Rodriguez,2 Kinsey R. Bain,3 and Marcy H. Towns2

1Department of Chemistry, Ångström Laboratory, Uppsala University, 751 20 Uppsala, Sweden
2Department of Chemistry, Purdue University, West Lafayette, Indiana 47907, United States
3Department of Chemistry, Michigan State University, East Lansing, Michigan 48824, United States

*E-mail: [email protected]

The use and interpretation of graphs pose significant challenges to the learner but also open up opportunities for developing skills in combining chemical and mathematical knowledge in problem solving. The analysis of a task in chemical kinetics serves in this chapter as the basis for discussing the design and use of open-ended problems through the lens of a number of frameworks, with the aim of providing the practitioner with practical examples, as well as tools and insights for further investigation and for helping to improve student learning.

Introduction

The meaning attributed to the word model is often context- and discipline-specific. Depending on its use, it may encompass many complex ideas or technical considerations, and the construction and use of models that represent systems play a critical role in scientific thinking. It is therefore increasingly important to engage students in the process of modeling during instruction (1–3). According to Izsak, modeling can be tersely defined as the “coordination of quantities with other types of knowledge” (4). We build on this definition to encompass making connections across different representations, as well as having an understanding of the nature of models and their limitations (2). In the context of chemistry, modeling often involves translating and making connections across the levels of representation used to describe chemical phenomena: particulate, symbolic, and macroscopic. These levels of representation are commonly known as the “chemistry triplet” or “Johnstone’s triangle,” named after A. H. Johnstone, who asserted that one of the reasons chemistry is challenging is that students must be able to engage in a “series of mental gymnastics” as they move from one representation type to another (5). Since then, the chemistry triplet has also

been extended and discussed in the literature to bring in further factors that influence student learning (6–10).

Graphs are particularly useful for representing, analyzing, and communicating data in a way that provides insights into chemical processes. Carefully constructed and used, graphs reveal trends and patterns that could otherwise be obscured. Graphs lie at the interface between chemistry and mathematics, where their interpretation and use in the context of chemistry place demands on learners’ ability to integrate their mathematical and chemical understanding. Graphs also offer a valuable context for examining how students draw connections between a symbolic representation and the phenomena modeled (at both the particulate and macroscopic levels).

This chapter attempts to present and explore these interactions and interconnections, using chemical kinetics as the specific context, through the perspective of an assessment task involving the use of a graph. We will begin with a discussion of the nature of graphs and the possibilities and challenges they present for both scientific understanding and student learning. This will be followed by the specific example of a study of how students analyzed and interpreted a graphical assessment task. Here, we offer different frameworks and tools that can be helpful for gaining insights into student understanding. Finally, we broaden the horizons of this discussion and explore how a mathematical modeling cycle that has been used in mathematics education research could expand the scope of our reflections and understanding of how learners integrate their chemical and mathematical knowledge to model physical phenomena. We hope that this chapter will provide practitioners with practical examples, as well as tools and insights for supporting students’ conceptual learning across different types of representations, especially in reasoning using graphs.

Graphs: Possibilities and Challenges

Graphs are powerful tools. With a wide variety of possible types (xy-coordinate graphs, bar graphs, histograms, pie charts, etc.), graphs allow the visualization of data in different ways that can reveal trends and patterns, as well as provide a means to examine and compare different datasets. Graphs allow the conversion of potentially vast amounts of numerical data into a succinct and much more readily accessible visual representation. A now-classic example is the set of so-called rose diagrams made by Florence Nightingale in her 1858 report to the British Army, Notes on Matters Affecting the Health, Efficiency and Hospital Administration of the British Army (11). These graphs were a striking visual representation of statistical data on soldier mortality during the Crimean War and were pivotal in bringing about sanitation reforms that arguably saved the lives of many soldiers.

From a cognitive load perspective, a major advantage of graphs is their ability to reduce the cognitive capacity required to analyze a set of data, thereby improving the chances for understanding and learning (12, 13). By summarizing large strings of data and grouping or “chunking” them into one concise visual representation, the demands on a person’s cognitive capacity for processing the data are reduced. This frees up space for other mental processes involved in problem solving and analysis, making it easier to discern trends and patterns in the dataset, as well as to make comparisons between datasets. In the present age of big data, data visualization has become an ever more important tool in research and data analysis and is also increasingly popularized among the general public for informational and entertainment purposes (e.g., McCandless (14), GapMinder (15)). The ability to understand graphs is not only important for careers in science and technology; more than ever, it is an essential skill for everyday life.
However, this reductive nature presents its own challenges, and being able to critically assess graphs and understand their limitations is another vital skill. Choices have to be made during graph

construction for the sake of clarity and intelligibility. Individual points may be combined, averaged, or even excluded; categories may be merged; or readouts of exact data point values may not be possible. Thus, although graphs can reveal and make explicit trends and patterns that may not be easily discernible from a set of data, other information can at the same time be made implicit during graph construction. Furthermore, although graphs are descriptive in nature, presenting data in a more condensed format, they have in and of themselves little explanatory power (16) (cf. the principle that correlation does not prove causation). Graphs can summarize data but cannot explain why or how the data came to be. One given graphical shape can result from multiple distinct phenomena, processes, or mechanisms; the correspondence between processes and graphical representations is not one-to-one, as seen in Figure 1(a). Take the example of a simple concentration-vs-time curve in Figure 1(b) for the formation of a product during a reaction. Although it can be concluded from the flat region of the graph that net product formation stops after a certain amount of time, it is not possible to distinguish whether this is due to complete reactant consumption in an irreversible reaction or to a reversible reaction reaching dynamic equilibrium. Two inherently different processes can give rise to the same graphical shape, with no possibility of deciding between them based on the graph alone.
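This ambiguity can be made concrete with a short simulation. The sketch below (not from the chapter; the rate constants and integration settings are illustrative assumptions) integrates two different kinetic models with a simple Euler scheme and shows that both product curves plateau, even though one reaction runs to completion and the other only reaches dynamic equilibrium:

```python
# Illustrative sketch: two different processes, one graphical shape.
# Rate constants and integration settings are hypothetical.

def simulate(step, a0=1.0, dt=0.001, n=20000):
    """Euler integration; returns the product concentration [X] over time."""
    a, x = a0, 0.0
    xs = []
    for _ in range(n):
        da, dx = step(a, x)
        a, x = a + da * dt, x + dx * dt
        xs.append(x)
    return xs

def irreversible(a, x, k=1.0):
    # A -> X: product formation stops when the reactant is fully consumed.
    return -k * a, k * a

def reversible(a, x, kf=2.0, kr=1.0):
    # A <=> X: net product formation stops at dynamic equilibrium.
    net = kf * a - kr * x
    return -net, net

irr = simulate(irreversible)
rev = simulate(reversible)

# Both curves flatten out (near-zero slope over the final stretch) ...
print(abs(irr[-1] - irr[-1000]) < 1e-4, abs(rev[-1] - rev[-1000]) < 1e-4)
# ... but for different reasons: complete consumption ([X] -> [A]0) versus
# equilibrium ([X] -> kf/(kf+kr) * [A]0).
print(round(irr[-1], 2), round(rev[-1], 2))
```

Plotting both series would reproduce the qualitative shape of Figure 1(b) twice over; the graph alone cannot tell the two mechanisms apart.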

Figure 1. (a) Many-to-one mapping of processes to graphs; (b) a concentration-vs-time graph.

The utility of graphs is therefore dependent not only on the design choices made by the maker of the graph but also on the user’s ability to interpret it. Potgieter et al. have provided a good summary of the literature concerned with students’ construction and interpretation of graphs, including research on their possible use in improving student understanding, the necessity for students to learn the disciplinary conventions involved in graphical representations, and the challenges students face in interpreting graphs (17). We could even extend this concept of interpreting graphs further to consider a more exploratory idea of interrogating graphs, whereby the process of interrogation aims to obtain emergent and unexpected insights from the data, beyond the information that the graphical representation of the data may initially have been expected to provide. This would arguably place extra demands on the ability to integrate the mathematical skills involved in graph reading with relevant knowledge and skills in the disciplinary domain in question (such as chemistry), opening up the possibility of asking new and “right” questions that lead to new hypotheses, which can be tested by further data analysis or experimentation.

Analysis of a Problem in Chemical Kinetics

In the context of chemistry education, graphs are a powerful tool for facilitating learning and conceptual understanding, but they clearly also present significant challenges in their interpretation and interrogation, since knowledge and skills from both chemistry and mathematics are required. Better

understanding of how students approach and make use of graphs in the context of chemistry provides further insights into which strategies could be useful to aid student learning. In this section, we will present and discuss the design of an assessment task in chemical kinetics involving a concentration-vs-time graph (18). This task was designed to probe students’ ability to interpret a concentration-vs-time curve from a chemical perspective and provide possible explanations at a molecular level for the observed behavior (Figure 2). The characteristics and design principles involved in constructing this task will first be discussed to highlight the knowledge and skills required to solve such a problem and how a task can be formulated to probe different levels of understanding. Analyses of students’ responses to this task are then presented, revealing the wide range of ways in which students conceptualize and tackle such a task. These provide important insights regarding the difficulties students have and the instructional approaches that can help support them (19).

Figure 2. An assessment task in chemical kinetics consisting of three prompts involving the use of a concentration-vs-time graph. Reproduced with permission from reference (18). Copyright 2018 Royal Society of Chemistry.

Analysis of the Task

The task in Figure 2 was used as part of the assessment of the chemical kinetics component of a first-year undergraduate course in general chemistry. The precise wording of the prompts (questions) was developed through several rounds of discussion, as well as through initial piloting with PhD students and faculty members. This process was important to ensure that the prompts could elicit the reasoning and argumentation students should demonstrate while keeping the task open enough not to overly narrow the possible range of answers. A number of basic principles were used in its design. First, all three prompts in the task required the student to give chemical explanations, but formulated

at increasing levels of difficulty, both in terms of understanding the graph mathematically and in being able to combine this with chemical knowledge. Second, the shape of the curve was chosen not to resemble any typical textbook examples of zeroth-, first-, or second-order reactions that students were familiar with. Instead, an unfamiliar shape was used to force students to engage directly with the features of the graph, rather than potentially resorting to memorized facts about a reaction of a known order. Third, the final prompt is an open-ended question, in which a variety of different (though not equally plausible) explanations for the shape of the curve are possible (e.g., addition of an initially limiting reagent, change in temperature or pressure for an equilibrium reaction, presence of an underlying complex reaction mechanism). The intention was to require students to draw on their bank of chemical knowledge in order to give and evaluate hypotheses about the cause of the observed phenomenon. A useful framework for analyzing different kinds of problems in terms of their data, method, and outcome was proposed by Johnstone (20). According to this framework (Table 1), the data for solving the problem can be given or incomplete, the problem-solving method can be familiar or unfamiliar to the student, and the outcome of the problem can be given (well-defined) or open. The different combinations of these result in eight problem types that provide training in different sets of skills. Analyzed using this framework, prompts (a) and (b) would be classified as Type 1: all the data required to solve the problem are given in the graph; concentration-vs-time graphs and the use of gradients of tangents at different time points to determine relative instantaneous reaction rates are well known to students in this course; and there is a single correct answer for each part.
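The tangent-gradient method that prompts (a) and (b) rely on can be sketched numerically. The curve below is a hypothetical stand-in (an exponential approach to a plateau), not the actual task data; the point is only the procedure of estimating the instantaneous rate from the local slope:

```python
import math

def conc(t):
    # Hypothetical concentration-vs-time curve [X](t): rapid initial product
    # formation that levels off after a few minutes.
    return 1.0 - math.exp(-t)

def instantaneous_rate(f, t, h=1e-4):
    """Central-difference estimate of the gradient d[X]/dt at time t,
    i.e. the slope of the tangent to the curve at that point."""
    return (f(t + h) - f(t - h)) / (2 * h)

rates = {t: instantaneous_rate(conc, t) for t in (1, 3, 5)}
for t, r in rates.items():
    print(f"t = {t} min: rate = {r:.3f}")

# The tangent is steepest at t = 1 min and nearly flat at t = 5 min,
# so the relative instantaneous rates can be ranked directly:
assert rates[1] > rates[3] > rates[5]
```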
Focusing on the recall of algorithms (20), such problems are commonly used for practicing and testing fundamental knowledge and skills for the given topic, which was indeed the purpose here.

Table 1. Different Types of Problems as Proposed by Johnstone

Problem Type | Data       | Methods    | Outcomes
1            | Given      | Familiar   | Given
2            | Given      | Unfamiliar | Given
3            | Incomplete | Familiar   | Given
4            | Incomplete | Unfamiliar | Given
5            | Given      | Familiar   | Open
6            | Given      | Unfamiliar | Open
7            | Incomplete | Familiar   | Open
8            | Incomplete | Unfamiliar | Open

Prompt (c), by contrast, requires a more conceptual understanding of chemical kinetics. First, there is no single correct answer regarding the cause of the behavior of the reaction, so the outcome can be regarded as open. Second, although students could be expected to be aware of a number of factors that could affect reaction rate, there is no established method for evaluating and deciding between different possible causes of such changes in such an atypical example, and students need to rely more on conceptual reasoning about the reaction. Finally, there is insufficient information to decide definitively on the cause (but this was admittedly not actually part of the task). Therefore, prompt (c) belongs to Type 6 according to this framework, with given data, unfamiliar method, and open outcomes, with a focus on making decisions about goals of the task, choice of method, and

exploring the student’s network of knowledge and techniques (20). This part of the problem is clearly more challenging for the student and assesses different sets of skills than prompts (a) and (b). This framework is a very useful tool when designing problems for learning and assessment. It highlights different possible constructions of a problem, the different demands put on the learner in each case, and the aspects of problem-solving skills that the learner can practice or be assessed on. Note that it is not suggested that the “higher” problem types are always “better.” Rather, this example illustrates how the framework can be used as a tool for analyzing study and assessment material to ensure that the task is fit-for-purpose with respect to the desired objectives for knowledge and skills and that learners are exposed to an appropriate range of different problem types. Furthermore, one could additionally refer to frameworks such as the Next Generation Science Standards to see which competencies are covered by such tasks (e.g., analyzing and interpreting data, developing and using models, constructing explanations, and engaging in mathematical thinking and argument based on evidence) (21). The assessment problem can also be analyzed from the perspective of Johnstone’s triangle (22), which comprises, as mentioned earlier, different levels of representation: macroscopic (visible and observable phenomena), particulate (invisible level involving models at the molecular and particle level), and symbolic (chemical symbols and formulas, mathematical and graphical representations, pictures and icons, etc.) (see Figure 3, including examples of different representations that may be encountered in the context of chemical kinetics).

Figure 3. Johnstone’s triangle, with examples of each representation level from chemical kinetics: color change at the macroscopic level; a molecular (mental) model of the reaction between particles over time at the particulate level; and the various graphics, formulas, mathematical expressions, and technical terms used to describe, discuss, and model reaction kinetics at the symbolic level.

This triplet model has been widely discussed, used, and further developed as a simple yet powerful model for understanding the different levels at which chemistry is represented and discussed (7–9). The need to work across all three levels presents a significant challenge for novice learners, placing significant demands on their cognitive capacity in moving and connecting between these different levels when thinking about chemical phenomena. Research has also suggested that many students tend to be more comfortable with reasoning at the symbolic level, in ways that seem compartmentalized from reasoning at the particulate and macroscopic levels, thereby raising

questions of their actual conceptual understanding as opposed to engagement in purely algorithmic problem solving (23–28). Examining the problem here from the perspective of this triplet relationship, it can be seen that prompts (a) and (b) are the least demanding. In both cases, it was sufficient to connect two out of the three levels of representation: first at the symbolic level, to decipher and extract the necessary information from the graph (upward trend in concentration for [a], different gradients at different points in time for [b]), then directly relating this to either a macroscopic or particulate conception of the phenomenon in question (formation of a product and the rate at which product formation occurs, respectively, thought of either as an actual observable product and sequence of events in time or as a mental model of the reaction). By contrast, prompt (c) is much more demanding, requiring connections and internal consistency across all three levels. Again beginning at the symbolic level of the graph in question, the students are required to suggest events at the macroscopic level (changes in experimental conditions, addition of reactants, etc.) that would give rise to the changes at the molecular, particulate level that they are required to explain. They also need to ensure that their suggestion(s) could reasonably give rise to a concentration-vs-time graph consistent with the one given. As will be seen in the sections below discussing the student responses, not all students managed to achieve this. This example illustrates how Johnstone’s triangle can be a useful framework to keep in mind when constructing learning and assessment tasks, to help students develop the representational competence necessary for working across these levels fluently (7, 29).
Although there is some literature discussion highlighting the definitional ambiguities of the different levels (8, 9), it can nevertheless be said that it is important to be aware of the existence of such different levels and the demands they place on the learner. For example, whereas the actual chemistry concepts may be quite foundational, if a high level of representational competence is required to decipher the task (through the use of multiple or unfamiliar diagrams, symbols, formalisms, etc.), the overall difficulty may increase significantly. What may seem routine and trivial to an expert with many years of experience working with multiple representations in chemistry may not at all be obvious to the novice learner. Although training in dealing with multiple representations is clearly necessary and desirable for the learner, it is also vital to be aware of one’s own expert tacit knowledge when designing tasks for learning and assessment, to avoid overwhelming the learner or constructing tasks that are more difficult than appropriate or intended. In the context here of examining the cross-over between chemistry and mathematics, such considerations are particularly relevant.

Analysis of Student Responses

Detailed qualitative analysis of the student responses revealed a number of ways in which students, successfully or otherwise, combined their mathematical and chemical knowledge in answering these questions (18). In the following sections, we present an overview and further discussion of those findings, with a greater focus on insights that may be relevant for instructional design and teaching practice.

Making Sense of the Graph: The Role of Covariational Reasoning

Covariational reasoning involves coordinating two variables and considering how they change in relation to one another, a skill that has been identified as critical for modeling dynamic processes and reasoning about graphical representations (30–34).
When considering the prompts in Figure 2, the students were indeed required to appreciate that the graph showed the change in

concentration of substance X with respect to time. For example, at the simplest level, in response to item (a), it sufficed to recognize the general positive direction of the covariation to be able to connect this with the chemical interpretation of substance X being a product rather than a reactant in the reaction. For the other prompts, however, more subtle differences in the students’ mathematical reasoning could be discerned. Our analysis of the covariational reasoning used by students in response to the task is presented in more detail in Rodriguez et al. (18). Here we provide an abridged discussion, highlighting how we used Moore and Thompson’s shape thinking framework to characterize covariational reasoning (35). According to that framework, students’ graphical reasoning can be described as static (conceptualizing a graph as an object) or emergent (conceptualizing a graph as a process, with more explicit consideration of ideas of covariation). This process–object distinction parallels a body of work in the mathematics literature about how mathematical functions and graphs can be viewed (17, 36–39). To illustrate the static–emergent distinction, Table 2 provides some examples of student responses to prompt (b).

Table 2. Emergent versus Static Reasoning

Static reasoning:

John: At t = 1 min the reaction rate is highest since the curve has the steepest slope there. The slope at a particular point corresponds to the reaction rate at that point. At t = 5 min the reaction rate is the lowest since the curve does not even have a slope i.e. the derivative at that point is 0.

Zachary: The reaction rate was highest at t = 1 min since the gradient of the reaction was the steepest. The gradient is the rate constant and the steeper it is on the curve, the faster the reaction goes. The lowest was at t = 5 when it became horizontal with time so rate = 0.

Emergent reasoning:

Lyndon: The reaction rate was the highest at t = 1 min and the lowest at t = 5 min. We can see this on the slope of the graph at the different time points. The slope tells us how quickly the concentration changes with respect to time and the steeper the slope, the larger the change (larger reaction rate). Of the three given time points, the graph was the steepest at t = 1 min and “flattest” at t = 5 min.

Florence: Highest: at t = 1 min – biggest change of X during the shortest change in time. Lowest: at t = 5 min → constant (unchanged) amount of X → lowest reaction rate since rate is mol/s.

In the instances in which the responses were classified as involving static reasoning, the students simply recognized the steepness of the curve at the respective individual time points and equated this to the rate of the reaction, essentially treating each section of the graph as a static object. The responses classified as emergent, on the other hand, involved more explicit reasoning about the actual change in one variable (concentration) in relation to the other (time), therefore focusing more on the process of change. However, our data suggest that the static–emergent classifications represent more of a spectrum of covariation than two mutually exclusive extremes. For example, in Zachary’s response, even though his explanation for time point t = 1 min was essentially focused on the gradient as an object, his explanation for time point t = 5 min included a more explicit consideration of the passage of time, suggesting some covariational reasoning. Similarly, in the case of Lyndon, while the core of his response relied on the slope of the curve at the respective time points (a more static

view), he also explained the origins of the correspondence between slope and rate using more explicit references to the covariation of concentration and time. From an instructional practice point of view, it is also important to emphasize that static-versus-emergent reasoning is not an incorrect-versus-correct distinction. As the responses above show, depending on the context, both perspectives may be productive for solving a particular problem. Nevertheless, although static reasoning can be as productive as emergent reasoning in certain situations, it is not necessarily sufficient for all situations. As underlying processes become more complex, the ability to understand, interpret, and interrogate graphs as models for physical phenomena increasingly relies on the ability to see the covariation of variables in an emergent manner, as the further analysis of the responses to prompt (c) will illustrate.

Making Sense of the Phenomenon: Activation of Chemistry Resources

Analysis of the student responses to prompts (a) and (b) revealed that students were generally successful in arriving at the correct answer, with relatively little variation in how they approached the prompts chemically (18). This was expected given the Type 1 nature of the questions and showed that most students had at least a basic grasp of how to arrive at the expected answer. By contrast, prompt (c) elicited a much wider and richer range of responses, chemically plausible or otherwise, and these were much more informative of students’ degree of conceptual understanding of chemical kinetics and their ability to apply this understanding to a given problem. This was again expected, given the open nature of this prompt, and is a nice illustration of the earlier discussion of the many-to-one mapping between the possible processes that can give rise to a particular graph.
It is also a good example of how such open-ended questions with possible divergent outcomes can be used not only to diagnose what students understand but also to reveal how they apply their knowledge when faced with an unfamiliar situation. In this section, we present and discuss some analyses of the student responses to prompt (c) to exemplify how such open-ended problems can be used in teaching and assessment, and as a way for instructors to gain deeper insight into student thinking and reasoning in chemistry in order to help them improve their teaching practices (see Rodriguez et al. for more detailed discussion of the theoretical framework and data analysis (18)). The analysis of the student responses was based on the resources framework (40–42) and focused on identifying which resources the students used when tackling this problem and how such resources were applied. Within this framework, resources are regarded as distinct cognitive units that students activate or “call upon” in a particular situation (in this case, a problem in chemical kinetics), which can then be applied to solve the problem at hand, productively or otherwise. Such resources are generally rather basic units of ideas that students have formed, which can be activated in different contexts and situations. Here, most students recognized the difference in reaction rate between the time points in question, possibly partly because prompt (b) had primed their thinking, and proceeded to propose explanations for the observed phenomenon. A number of resources that were identified in the student responses to prompt (c) are shown in Table 3. Not all resources that students made use of were productively applied in terms of arriving at chemically plausible suggestions. It should be emphasized that “productivity” here refers to the student’s application of the resource in solving the problem, rather than to the resource itself.
For a resource to be classified as having been productively applied, it needs to be in alignment with scientific thinking, relevant to the given problem, and used in a productive way.

Table 3. Chemistry Resources Activated and Applied

Chemistry resource | Description | Application observed
Temperature change | Student discusses the effect of temperature on rate | Productive
More reactant, higher rate | Student discusses that more reactants result in a higher rate, or reasons that the reaction rate should be quicker at the beginning because there are more reactants | Productive
Adding catalyst | Student discusses adding a catalyst to the reaction depicted | Productive, unproductive
Equilibrium | Student reasons in terms of equilibrium and stress on the system | Productive, unproductive
Complex reaction mechanism | Student discusses the possibility of a complex or multistep reaction mechanism to explain the observed graph | Unproductive
Titration | Student discusses acid-base protolysis as giving rise to the observed graph | Extraneous and unproductive

Resources that many students could activate and productively apply to give chemically plausible explanations involved the effect of temperature, reactant concentration, catalysts, and reaching and perturbing equilibria (with a minority of unproductive applications of the last two). The answer by Richard below illustrates the ability to work across all three levels of representation and link them to provide a consistent and chemically plausible explanation (extracting differences in rate from the graph, suggesting what happened to the reaction macroscopically, and identifying the implications this had at the molecular level, consistent with the behavior in the graph).

Richard: The reaction is probably the fastest at the beginning because there are more reactants that can collide and therefore form products, the more reactants that have reacted, the fewer will be left and the reaction rate decreases, at t 5 min it has completely tailed off. Something happens here, either more reactants are added and therefore more products begins to form again or a catalyst could have been added to get the last of the reactants to collide more easily and form product and therefore the reaction rate increases again.

Note that from this relatively short response we also gain insight into a possible misunderstanding of how a catalyst works (that it makes collisions easier), though this could also simply reflect imprecise wording. It would nevertheless have opened up possible follow-up discussions in an instructional context. The application of a complex reaction mechanism as a resource was challenging for the students and, in general, led to unproductive answers that were not chemically plausible, as the following answers illustrate.

Florence: The reaction mechanism: reactants cannot go directly from A to B in one step but rather there can be intermediate steps, one of which will be rate limiting since it is the slow step and therefore the bottleneck in the total reaction.
The rate-limiting step in this reaction is the second step in the reaction mechanism, i.e. elementary reaction 2, that is the 56

mechanism that comes after t5 and continues at t10. t1 is the fast reaction that forms the reactants for part 2 of the reaction. Molecular level: perhaps it can be pointed out that the product of part 1 is an intermediate form, between the reactants and products of the total reaction. James: R = k[A]a[B]b – at the beginning there was a lot of reactants that could form products – high reaction rate. The fact that it later increased again could be due to another reaction. The products themselves became reactants and formed new product. Andrew: The reaction would be of second order, where the first step went very quickly in relation to the second step which instead goes very slowly. These students’ explanations all in their own way neglected the fact that the concentration of X as shown would decrease if it had itself become a reactant in a subsequent mechanistic step and would have led instead to a negative slope in the graph. The students seemed to either treat the yaxis as showing product concentration, irrespective of its actual identity, or slipped into interpreting the graph as showing the rate of reaction over time, rather the concentration of the specific substance X. In fact, although the presence of an underlying complex reaction mechanism could theoretically explain the variations in rates of product formation, as a resource, this is a very challenging one for students at this first-year level to apply productively. There is an infinite number of different possibilities for complex mechanisms, but only a limited number that could account for the given graph. Although these students had been exposed to the general possibility of multiple steps being involved in an overall mechanism, at this stage, they still lacked sufficient experience and background to be able to propose or fully evaluate among the highly divergent possibilities that this resource brings. 
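The point that an intermediate's concentration must eventually fall can be made concrete with a quick numerical sketch (ours, not from the original task; the rate constants are arbitrary assumptions). Integrating the two-step mechanism A → X → B shows that [X] rises and then decreases once X is consumed in the second step, i.e., the curve acquires a negative slope at later times, unlike the monotonically increasing curve the students were given.

```python
# Illustrative sketch (not from the chapter): integrate the two-step
# mechanism A -> X -> B with arbitrary first-order rate constants,
# to show that an intermediate's concentration must eventually decrease.

k1, k2 = 1.0, 0.5          # assumed rate constants (arbitrary choices)
A, X, B = 1.0, 0.0, 0.0    # initial concentrations
dt, steps = 0.001, 20000   # forward-Euler integration from t = 0 to t = 20

X_history = []
for _ in range(steps):
    dA = -k1 * A           # A is consumed in step 1
    dX = k1 * A - k2 * X   # X is formed in step 1, consumed in step 2
    dB = k2 * X            # B is formed in step 2
    A += dA * dt
    X += dX * dt
    B += dB * dt
    X_history.append(X)

peak = max(X_history)
print(f"peak [X] = {peak:.3f}, final [X] = {X_history[-1]:.3f}")
# [X] rises to a maximum and then falls toward zero: a negative slope at
# late times, contradicting a reading of the given (rising) curve as [X]
# with X consumed in a later mechanistic step.
```

For these parameters the analytical solution [X](t) = 2(e^{-0.5t} − e^{-t}) peaks near 0.5 and decays toward zero, which the numerical sketch reproduces.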
The complexity of Andrew's answer is a good reflection of a valiant attempt to apply the resource productively, but ultimately he failed to do so.

In contrast to the complex reaction mechanism resource being too difficult to apply productively, titration was a resource that, being extraneous and irrelevant to the context of this particular problem, was inherently unproductive, as the two examples below show.

Sarah: It is presumably a titration of a diprotic acid since it has 2 EP. It means that the acid releases 1 proton at t = 1 which gave an increased reaction rate, after a while at t = 10 the acid releases another proton and then the reaction rate increases somewhat. When t = 5 it is the half-equivalence point in other words the same amount needs to be added to get the next proton release.

Rachel: The curve looks like a titration curve, when a strong acid has been protonated.

Here, it can be clearly seen that an initial shape recognition led to an association with acid-base titration, followed by an attempt (at least by Sarah) to reconcile this with the graph, inserting references to reaction rate with no regard to the quantities represented by the axes or to why the proposed molecular events would influence the reaction rate as asserted. The responses mainly registered surface features of the shape of the curve alone.

One further factor that influenced whether students could arrive at a chemically plausible explanation was their ability to see the graph as more than simply a static plot of paired data points for an arbitrary mathematical function, and instead to see it as a model of an event that unfolds emergently in time. This is arguably a crucial aspect for learners to grasp in order to successfully combine mathematics and chemistry in problem solving. Two contrasting examples are illustrative.

Lucy: At t = 1 min there is a plenty of reactant molecules (conc. of the reactants is high), which means that [more] collisions occur per sec. which in turn increases the likelihood that collisions leads to enough energy for a reaction to occur. Many collisions → more reactions → higher reaction rate. The slope of the graph at t = 10 min is not as steep, this is because the conc. of the reactants has decreased (lower number of reactant particles). This leads to fewer collisions which leads to fewer reactions which leads to lower reaction rate.

Lucy treated t = 1 min and t = 10 min as disjointed events, rather than as two linked time points whose intervening events also needed to be taken into account. Even though her application of the "more reactant, higher rate" resource was reasonable in isolation, her failure to consider the entire time course of the reaction as an emergent whole meant that the explanation was ultimately incomplete. The response failed to integrate the application of chemical resources with the mathematical representation of the graph as a model of events. On the other hand, Nancy was much more successful in this regard.

Nancy: The reaction rate was high at first because there was a lot of reactants. A lot of reactants means that many molecules can collide and create products. The fewer reactant molecules, the more the reaction rate evens out. When the curve stays at the same reaction rate the reaction has probably reached equilibrium and products are created as fast as reactants are reformed. The increase in reaction rate can be due to some change for example compression, temperature change or addition of more reactant etc.

Here we see that Nancy considered the entire course of the reaction emergently, as if she were "walking" along the graph point by point.
She could also move easily between and integrate the particulate (collisions of molecules), the macroscopic (changes in experimental conditions), and the symbolic (shape of the curve) levels of representation to give a complete and internally consistent answer that could be characterized as a full mathematical narrative (18, 43). We would suggest that encouraging and helping students to develop such emergent mental models and coherent narratives that combine chemistry and mathematics across the different levels of representation is a challenging, but worthwhile and productive, goal for developing learners' conceptual understanding of chemistry and their problem-solving skills.

In summary, these responses show how the open-ended nature of prompt (c) led students to activate a range of chemistry resources, with a rich variety of subtle differences in details, approaches, and productivity in their application to solving the problem. The responses show both the challenges and opportunities that such open-ended problems can offer, for teaching and student learning as well as for assessing students' understanding. As some of the examples above show, a potential issue for the learner is not fully "thinking through" the consequences of an explanation, so that contradictory effects are not recognized or dealt with. Such problems therefore also offer students the opportunity to realize the need for, and to practice, self-checking, a key metacognitive skill both for problem solving and, more generally, for developing students' abilities in self-regulated learning (44, 45).


The Interaction between Mathematics and Chemistry in Modeling and Problem Solving

Mathematical Modeling Cycles

The analyses of the student responses above have been illuminating about the range of ways students see and make use of graphs, as well as how they apply their knowledge of chemistry to solve such an open-ended problem in chemical kinetics. Responses giving full and chemically plausible narratives showed that it is certainly possible for students to combine their mathematical and chemical knowledge in a highly productive manner. But how do they do this? What are the cognitive steps involved, at a more detailed level? A better understanding of the processes involved would help both researchers and practitioners investigate the challenges students face and develop ways to help them overcome those challenges.

In the field of mathematics education, there has been much interest over the past few decades in mathematical modeling: the process of modeling and solving real-world problems using mathematics (16, 46, 47). A much-discussed topic is the development of theoretical mathematical modeling cycles that attempt to map out the cognitive processes involved in converting a real-world situation into a mathematical model that can be used to obtain results, which can then be related back to the real world. This area of research is highly relevant and potentially productive in our context of understanding the interaction between mathematics and chemistry in chemistry education. A number of different modeling cycles have been developed, each conceptualizing the modeling process with different phases and factors that influence the process (16, 46, 48, 49). Nevertheless, they generally separate the cognitive activities of working with the real-world situation from those involving working with mathematical models.
Furthermore, all these cycles involve the following steps in various guises: simplification of a real-world situation and its conversion into a mathematical model (mathematization), mathematical work to produce mathematical results, and interpretation and validation of these results as real results that can be related back to the real-world situation. One of the most widely discussed and used cycles is that proposed by Blum and Leiß (50), shown in Figure 4, in which the adaptations by Borromeo Ferri (49) have also been included.

Figure 4. A mathematical modeling cycle. Reproduced with permission from reference (49). Copyright 2006 Springer Nature.


The potential of such a model to help pick apart the interaction between chemistry and mathematics in problem solving is clear. As is typical of many modeling cycles, the real-world and mathematical halves are linked through mathematization in one direction and interpretation of mathematical results in the other. A particularly relevant feature of this model is that the role of "extra-mathematical knowledge" (EMK) is specifically emphasized in the model-construction steps. In the context of solving problems in chemistry using mathematics, such EMK would correspond to the activation and application of chemistry resources, such as those identified above. Especially significant is that EMK is highlighted as playing an important role even in the mathematization step. For problem solving in chemistry, this reflects the need to combine and integrate both mathematical and chemical knowledge in constructing a reasonable and plausible mathematical model. Research and experience have indeed shown that this mathematization phase can be problematic, with many students simply matching variables in problems to those in known formulas ("plug-and-chug"), with potentially little or no understanding of their physical meaning or of the underlying mathematical model being used (51–53).

A study by Uhden et al. from physics education research has also pointed out that the mathematization process is not unique for a given problem (54). Different kinds and levels of mathematization steps are possible when constructing the mathematical model, involving more or less explicit conceptual links along the way to the physical phenomena. Furthermore, the authors incorporated the work of Pietrocola, who distinguished between the technical and structural roles of mathematics in physics: the former is a purely algorithmic use of mathematics, whereas the latter represents mathematics embedded as part of the conceptualization and structure of the discipline (55).
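As a concrete illustration of how EMK enters both mathematization and interpretation (our own worked example, not taken from the cited studies), consider a simple first-order reaction:

```latex
% Illustrative example (ours): modeling-cycle steps for a first-order reaction.
% Real situation: substance A is consumed; chemical knowledge (EMK) suggests
% the rate is proportional to the amount of A present.
%
% Mathematization (EMK needed): translate the chemical premise into a model.
\[
  \frac{d[\mathrm{A}]}{dt} = -k[\mathrm{A}], \qquad k > 0
\]
% Mathematical work: separate variables and integrate.
\[
  [\mathrm{A}](t) = [\mathrm{A}]_0 \, e^{-kt}
\]
% Interpretation (EMK needed again): a characteristic half-life follows,
% independent of the initial concentration.
\[
  t_{1/2} = \frac{\ln 2}{k}
\]
% Validation: constant half-lives across the measured time course support
% (but do not prove) first-order behavior.
```

Each arrow of the cycle appears here: the chemical premise cannot be turned into the differential equation, nor the exponential result turned back into a statement about half-lives, without chemical knowledge entering alongside the mathematics.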
Worthy of more attention are the interpretation and validation steps in the cycle above, in which EMK is, perhaps surprisingly, not incorporated as a factor. Much less focus seems to have been placed in the mathematics education research literature on the processes involved in interpreting and validating mathematical results. Although it may well be fairly unproblematic for students to engage in these steps when the problem at hand involves simple numerical answers with clear connections to tangible everyday situations (e.g., the height of a person, the distance from a lighthouse, the cost of gasoline for different journeys in Blum and Borromeo Ferri (56)), EMK would be much more important in the context of chemistry for connecting the mathematical results to actual phenomena, possibly requiring transitions between all three levels of representation in the chemistry triplet. The contexts and phenomena can be much less familiar or intuitive and can involve a high level of abstraction. The results from the mathematical work may be symbolic rather than numerical (e.g., thermodynamic relationships, quantum mechanical calculations, complex rate expressions), consisting of complex mathematical expressions for which knowledge of both mathematics and chemistry must be combined to arrive at a physically plausible interpretation. Such challenges have also been observed in physics education, with research showing that students experience working with mathematics within physics as different from just doing mathematics: students might be reasonably proficient at the mathematical manipulations but nevertheless fail to interpret the results from a physical point of view afterward (reviewed in Caballero et al. (57)).
Utility of Mathematical Modeling Cycles

In the context of chemistry education, the main utility and strength of such mathematical modeling cycles is arguably that they allow the researcher and practitioner to consider the steps involved in combining chemistry and mathematics in a more structured and detailed manner, so that the "right questions" can be asked when designing tasks and improving instruction. Whether any particular framework is objectively "correct" is perhaps not as vital in practice. Indeed, Doerr et al. have advocated using multiple ways of representing the modeling cycle in order to capture the processes involved more fully (16). Although this and other proposed normative cycles suggest a linear progression through the cycle, it is generally recognized in the literature that the actual process is often individual and that, in general, people "bounce around" (16) the different parts of the cycle during problem solving (56, 58).

Even the starting point in a given modeling cycle might vary, depending on the nature of the task. In our assessment task (Figure 2), the substantive information for the task is actually presented in a predetermined mathematical model, namely, the concentration-vs-time graph. In order to fully understand the problem itself, mathematical work and interpretation of results, including the application of chemical knowledge, are required from the start. It is additionally necessary to construct a mental representation in which the two time points are temporally and chemically connected by the intervening time (the "unfolding" of events, as mentioned earlier). Only then is the true nature of the task apparent: what chemically plausible reasons can be offered for the different reaction rates at the two time points in question, also taking into account the changes that took place in between? From this point, engagement in the cycle continues, during which chemical knowledge is required to arrive at possible explanations, from which a mathematical model in the form of a graph would ideally be constructed (at least mentally) to check each proposal's consistency with the actual given graph (a step at which not all students succeeded, as seen above in the use of complex reaction mechanisms as a resource).
Analogous to research showing that the modeling cycle is idealized and that actual progression through the cycle during problem solving is not necessarily linear, it can also be seen here that the problem formulation may not be a "pure" real-world situation but rather one mixed with predetermined mathematical models and representations. This could be conceived as a different starting point in the modeling cycle or, alternatively, as a "step 0." Such framing would not be uncommon for problems in chemistry and is again arguably a consequence of the multiple levels of representation used in working with the discipline. Engaging in more than a single "round" of the modeling cycle may also be necessary, as illustrated here by the fact that multiple plausible answers are possible. Further rounds of the cycle can likewise be necessary when checking the correctness and plausibility of an answer.

Regardless of the "nonnormative" nature of most real mathematical-modeling and problem-solving situations, the main utility of the modeling cycle framework in chemistry education, as mentioned earlier, is in facilitating a more careful analysis of the steps and challenges involved in solving chemistry problems that require mathematics. Such an approach increases our awareness and understanding of the modeling and problem-solving process, highlighting the need for more nuanced consideration of the process and challenges of "using mathematics" in solving chemistry problems, and thereby uncovering issues and questions for further research and progress in chemistry education.

Implications for Practice

Models are of crucial importance in chemistry research and education. Whereas instruction often focuses on explaining different models and the ways in which they correlate with well-behaved empirical results, models are more rarely discussed in relation to the nature and philosophy of science, expanding on the thinking behind them. Furthermore, empirical results that deviate significantly from what the models would predict are seldom shown or discussed. This might lead students to regard models as the objective "truth" instead of as idealized representations aimed at giving a reasonable description of a complex reality as well as a way to communicate scientific ideas. In addition, there is often not enough time assigned to learning how to interpret models so that students see their benefits and limitations, let alone to creating or interpreting models of their own.

Chemistry education thus tends to focus on convergent thinking, in which specific models are used to solve problems with typical data sets, rather than divergent thinking, in which different theories can be helpful when data are less well-behaved and multiple explanations are possible. This convergent approach might lead students to focus on superficial similarities and standard solutions, a striking example being the unproductive use of the titration resource discussed above. There is therefore a need to complement current practice with complex problem solving. It is, however, not enough simply to introduce atypical problems; they must be accompanied by well-thought-out instructional designs that guide student learning rather than confuse it. Appropriate forms of assessment are also needed, so that students are made aware of the importance of complex conceptual understanding and get feedback on their efforts.

The discussions above have provided a number of examples of frameworks that can be useful for analyzing instructional and assessment material, including problem types and the skills they aim to develop, as well as the level of representational competence that they demand. The analyses of the student responses have also illustrated ways in which answers can be assessed to gain a deeper understanding of students' knowledge and reasoning, and thereby to highlight where instructional support and further discussions with students are necessary. The modeling cycle (a model in itself) can be used as a tool when thinking about students' learning of scientific models.
It provides a language for analyzing and discussing the various difficulties novices encounter when dealing with new types of problems. To scaffold learning, it is vital to understand where the challenges lie. For example, whereas a particular intervention may be needed if a student cannot perform the necessary calculations, different interventions would be needed if the difficulty lies in other parts of the modeling cycle. In using the modeling cycle, as with any other scientific model, it is important not to confuse the model with reality. As discussed above, the modeling process is in fact often more complex, including many loops and steps back and forth. Nevertheless, instructors can use the modeling cycle, or similar models, in their practice to identify difficulties in discussions with students and colleagues. If problems are designed and teaching formats are chosen to specifically address different steps in the cycle, such problems can be applied in a well-conceived sequence to avoid cognitive overload. Steps that are otherwise often forgotten or overlooked can also be dealt with more explicitly. In addition, the cognitive resources framework can be used to discuss the importance of various inputs and the need to evaluate the resources, in relation both to whether they match the accepted scientific understanding of chemistry and to their usefulness in the situation in question. In this way, students are guided to see the problem and the problem-solving process in a more expert-like way.

Learning activities could be designed in relation to both the modeling cycle and the resources framework. Using atypical problems is one way to activate different resources. Standard textbook problems are helpful for laying the foundation for more complex examples and for understanding idealized models; such standard problems certainly have their place in teaching and learning.
However, to really engage students in discussions about chemical phenomena and to compare and evaluate different solutions, less common graphs or data sets may prove useful, whereas problems that are too open will be difficult to handle. A balance must be struck. The task discussed above is an example of a possible approach, with a teaching sequence that starts with convergent questions about unfamiliar graphs, followed by more divergent questions once students are more acquainted with the data.

For the divergent questions, instructors can guide students to first discuss the problem holistically (i.e., to clarify what the problem is about), a part of problem solving that is sometimes neglected among novices. After that, students can be encouraged to come up with various resources that may help them further and then to evaluate the usefulness of those resources in the particular case. In this evaluation, critical questions about the various resources can help explore the problem further. Once the most appropriate resources are identified, it is time to use them to solve the problem. This kind of problem-solving sequence could be accompanied by a discussion of where more effort needs to be directed during the solution process, with possible reference to the modeling cycle as a supporting framework. Such a meta-level discussion allows the complex problem-solving process to be verbalized and made explicit and is a way to show how experts think and develop professional knowledge. This contributes to students' thinking about their own learning and thus adds to their metacognitive awareness, which is highly important for learning and helps them take the first steps toward becoming reflective practitioners (59, 60).

Insights from both the resources framework and the modeling cycle can also inform assessment practice. As shown here, uncommon graphs and open questions can be used in examinations. If they are used more regularly, students will be encouraged to think through problems more carefully instead of following a standard "plug-and-chug" routine. Furthermore, the form of assessment can be developed to further emphasize the importance of emergent thinking, focusing on the steps in reasoning. For example, a series of written or oral assessment tasks could be developed in which students successively reflect on and justify their knowledge, thinking, and reasoning at increasingly deeper levels during a course.
Such a format can help students gradually develop their reasoning skills in a scaffolded manner when solving complex problems. Assessment can thereby become a learning experience during which students develop emergent mental models and coherent narratives that combine chemistry and mathematics.

Acknowledgements

We would like to thank the Towns research group for their support and helpful comments on this manuscript. F. M. H. and M. E. would additionally like to acknowledge the financial support of the Centre for Discipline-Based Education Research in Mathematics, Engineering, Science and Technology, Uppsala University.

References

1. Harrison, A. G.; Treagust, D. F. Learning about Atoms, Molecules, and Chemical Bonds: A Case Study of Multiple-Model Use in Grade 11 Chemistry. Sci. Educ. 2000, 84, 352–381.
2. Schwarz, C. V.; Reiser, B. J.; Davis, E. A.; Kenyon, L.; Acher, A.; Fortus, D.; Shwartz, Y.; Hug, B.; Krajcik, J. Developing a Learning Progression for Scientific Modeling: Making Scientific Modeling Accessible and Meaningful for Learners. J. Res. Sci. Teach. 2009, 46, 632–654.
3. Schwarz, C.; Reiser, B. J.; Acher, A.; Kenyon, L.; Fortus, D. MoDeLS: Challenges in Defining a Learning Progression for Scientific Modeling. In Learning Progressions in Science: Current Challenges and Future Directions; Alonzo, A. C., Gotwals, A. W., Eds.; SensePublishers: Rotterdam, 2012; pp 101–137.
4. Izsak, A. Students’ Coordination of Knowledge When Learning to Model Physical Situations. Cogn. Instr. 2004, 22, 81–128.
5. Johnstone, A. H. Macro- and Microchemistry [Notes and Correspondence]. Sch. Sci. Rev. 1982, 64, 377–379.
6. Mahaffy, P. Moving Chemistry Education into 3D: A Tetrahedral Metaphor for Understanding Chemistry - Union Carbide Award for Chemical Education. J. Chem. Educ. 2006, 83, 49–55.
7. Gilbert, J. K., Treagust, D. F., Eds. Multiple Representations in Chemical Education; Springer Science+Business Media B.V.: 2009.
8. Talanquer, V. Macro, Submicro, and Symbolic: The Many Faces of the Chemistry “Triplet”. Int. J. Sci. Educ. 2011, 33, 179–195.
9. Taber, K. S. Revisiting the Chemistry Triplet: Drawing upon the Nature of Chemical Knowledge and the Psychology of Learning to Inform Chemistry Education. Chem. Educ. Res. Pract. 2013, 14, 156–168.
10. Sjöström, J.; Talanquer, V. Humanizing Chemistry Education: From Simple Contextualization to Multifaceted Problematization. J. Chem. Educ. 2014, 91, 1125–1131.
11. Brasseur, L. Florence Nightingale’s Visual Rhetoric in the Rose Diagrams. Tech. Commun. Q. 2005, 14, 161–182.
12. Plass, J. L., Moreno, R., Brünken, R., Eds. Cognitive Load Theory; Cambridge University Press: Cambridge, 2012.
13. de Jong, T. Cognitive Load Theory, Educational Research, and Instructional Design: Some Food for Thought. Instr. Sci. 2010, 38, 105–134.
14. McCandless, D. Knowledge is Beautiful; Collins: London, 2014.
15. GapMinder. https://www.gapminder.org/ (accessed Sept. 15, 2018).
16. Doerr, H. M.; Ärlebäck, J. B.; Misfeldt, M. Representations of Modelling in Mathematics Education. In Mathematical Modelling and Applications: Crossing and Researching Boundaries in Mathematics Education; Stillman, G. A., Blum, W., Eds.; Springer International Publishing AG: 2017; pp 71–82.
17. Potgieter, M.; Harding, A.; Engelbrecht, J. Transfer of Algebraic and Graphical Thinking between Mathematics and Chemistry. J. Res. Sci. Teach. 2008, 45, 197–218.
18. Rodriguez, J.-M. G.; Bain, K.; Towns, M. H.; Elmgren, M.; Ho, F. M. Covariational Reasoning and Mathematical Narratives: Investigating Students’ Understanding of Graphs in Chemical Kinetics. Chem. Educ. Res. Pract. 2019, 20, 107–119.
19. This assessment task and the student responses obtained were first presented in Rodriguez et al. (2018) (reference 18). The sections below extend the analyses and discussions in that paper, applying further theoretical perspectives and with additional focus on implications for instruction and practice.
20. Johnstone, A. H. Introduction. In Creative Problem Solving in Chemistry; Wood, C., Sleet, R., Eds.; The Royal Society of Chemistry: London, 1993.
21. Next Generation Science Standards. https://www.nextgenscience.org/ (accessed Nov. 15, 2018).
22. Johnstone, A. H. Why Is Science Difficult to Learn? Things Are Seldom What They Seem. J. Comput. Assist. Learn. 1991, 7, 75–83.
23. Bain, K.; Rodriguez, J.-M. G.; Moon, A.; Towns, M. H. The Characterization of Cognitive Processes Involved in Chemical Kinetics Using a Blended Processing Framework. Chem. Educ. Res. Pract. 2018, 19, 617–628.

24. Cracolice, M. S.; Deming, J. C.; Ehlert, B. Concept Learning Versus Problem Solving: A Cognitive Difference. J. Chem. Educ. 2008, 85, 873–878.
25. Nakhleh, M. B.; Lowrey, K. A.; Mitchell, R. C. Narrowing the Gap between Concepts and Algorithms in Freshman Chemistry. J. Chem. Educ. 1996, 73, 758–762.
26. Sawrey, B. A. Concept-Learning versus Problem-Solving - Revisited. J. Chem. Educ. 1990, 67, 253–254.
27. Stamovlasis, D.; Tsaparlis, G.; Kamilatos, C.; Papaoikonomou, D.; Zarotiadou, E. Conceptual Understanding versus Algorithmic Problem Solving: Further Evidence from a National Chemistry Examination. Chem. Educ. Res. Pract. 2005, 6, 104–118.
28. Rodriguez, J.-M. G.; Santos-Diaz, S.; Bain, K.; Towns, M. H. Using Symbolic and Graphical Forms to Analyze Students’ Mathematical Reasoning in Chemical Kinetics. J. Chem. Educ. 2018, 95, 2114–2125.
29. Kozma, R. B.; Russell, J. Multimedia and Understanding: Expert and Novice Responses to Different Representations of Chemical Phenomena. J. Res. Sci. Teach. 1997, 34, 949–968.
30. Thompson, P. W. Images of Rate and Operational Understanding of the Fundamental Theorem of Calculus. Educ. Stud. Math. 1994, 26, 229–274.
31. Confrey, J.; Smith, E. Splitting, Covariation, and Their Role in the Development of Exponential Functions. J. Res. Math. Educ. 1995, 26, 66–86.
32. Carlson, M.; Jacobs, S.; Coe, E.; Larsen, S.; Hsu, E. Applying Covariational Reasoning While Modeling Dynamic Events: A Framework and a Study. J. Res. Math. Educ. 2002, 33, 352–378.
33. Habre, S. Students’ Challenges with Polar Functions: Covariational Reasoning and Plotting in the Polar Coordinate System. Int. J. Math. Educ. Sci. Technol. 2017, 48, 48–66.
34. Ellis, A. B.; Ozgur, Z.; Kulow, T.; Dogan, M. F.; Amidon, J. An Exponential Growth Learning Trajectory: Students’ Emerging Understanding of Exponential Growth Through Covariation. Math. Think. Learn. 2016, 18, 151–181.
35. Moore, K. C.; Thompson, P. W. Shape Thinking and Students’ Graphing Activity. In Proceedings of the 18th Annual Conference on Research in Undergraduate Mathematics Education, Pittsburgh, Pennsylvania, Feb 19–21, 2015; Fukawa-Connelly, T., Infante, N., Keene, K., Zandieh, M., Eds.; Pittsburgh, Pennsylvania, 2015; pp 782–789.
36. Even, R. Subject Matter Knowledge for Teaching and the Case of Functions. Educ. Stud. Math. 1990, 21, 521–544.
37. Schwartz, J.; Yerushalmy, M. Getting Students to Function in and with Algebra. In The Concept of Function: Aspects of Epistemology and Pedagogy [MAA Notes, Volume 25]; Harel, G., Dubinsky, E., Eds.; Mathematical Association of America: Washington, DC, 1992; pp 261–289.
38. Sfard, A. Operational Origins of Mathematical Objects and the Quandary of Reification – the Case of Function. In The Concept of Function: Aspects of Epistemology and Pedagogy [MAA Notes, Volume 25]; Harel, G., Dubinsky, E., Eds.; Mathematical Association of America: Washington, DC, 1992; pp 59–84.
39. Moschkovich, J.; Schoenfeld, A. H.; Arcavi, A. Aspects of Understanding: On Multiple Perspectives and Representations of Linear Relations and Connections Among Them. In Integrating Research on the Graphical Representation of Functions; Romberg, T. A., Fennema, E., Carpenter, T. P., Eds.; Erlbaum: New York, 1993; pp 69–100.


40. Hammer, D.; Elby, A.; Scherr, R. E.; Redish, E. F. Resources, Framing, and Transfer. In Transfer of Learning from a Modern Multidisciplinary Perspective; Mestre, J. P., Ed.; Information Age Publishing: Greenwich, 2005; pp 89–119.
41. Hammer, D.; Elby, A. Tapping Epistemological Resources for Learning Physics. J. Learn. Sci. 2003, 12, 53–90.
42. Hammer, D.; Elby, A. On the Form of a Personal Epistemology. In Personal Epistemology: The Psychology of Beliefs about Knowledge and Knowing; Hofer, B. K., Pintrich, P. R., Eds.; L. Erlbaum Associates: Mahwah, NJ, 2002; pp 169–190.
43. Nemirovsky, R. Mathematical Narratives, Modeling, and Algebra. In Approaches to Algebra: Perspectives for Research and Teaching; Bernarz, N., Kieran, C., Lee, L., Eds.; Springer Netherlands: Dordrecht, 1996; pp 197–220.
44. Schraw, G.; Crippen, K. J.; Hartley, K. Promoting Self-Regulation in Science Education: Metacognition as Part of a Broader Perspective on Learning. Res. Sci. Educ. 2006, 36, 111–139.
45. Rickey, D.; Stacy, A. M. The Role of Metacognition in Learning Chemistry. J. Chem. Educ. 2000, 77, 915.
46. Kaiser, G.; Brand, S. Modelling Competencies: Past Development and Further Perspectives. In Mathematical Modelling in Education Research and Practice: Cultural, Social and Cognitive Influences; Stillman, G. A., Blum, W., Eds.; Springer International Publishing AG: 2015.
47. Schukajlow, S.; Kaiser, G.; Stillman, G. Empirical Research on Teaching and Learning of Mathematical Modelling: A Survey on the Current State-of-the-Art. ZDM 2018, 50, 5–18.
48. Blum, W. Quality Teaching of Mathematical Modelling: What Do We Know, What Can We Do?; Springer International Publishing: Cham, 2015; pp 73–96.
49. Borromeo Ferri, R. Theoretical and Empirical Differentiations of Phases in the Modelling Process. ZDM 2006, 38, 86–95.
50. Blum, W.; Leiß, D. How Do Students and Teachers Deal with Modelling Problems? In Mathematical Modelling: Education, Engineering and Economics; Haines, C., Galbraith, P., Blum, W., Khan, S., Eds.; Horwood Publishing: Chichester, 2007; pp 222–231.
51. Camacho, M.; Good, R. Problem Solving and Chemical Equilibrium: Successful Versus Unsuccessful Performance. J. Res. Sci. Teach. 1989, 26, 251–272.
52. Chandrasegaran, A. L.; Treagust, D. F.; Waldrip, B. G.; Chandrasegaran, A. Students’ Dilemmas in Reaction Stoichiometry Problem Solving: Deducing the Limiting Reagent in Chemical Reactions. Chem. Educ. Res. Pract. 2009, 10, 14–23.
53. Becker, N.; Towns, M. Students’ Understanding of Mathematical Expressions in Physical Chemistry Contexts: An Analysis Using Sherin’s Symbolic Forms. Chem. Educ. Res. Pract. 2012, 13, 209–220.
54. Uhden, O.; Karam, R.; Pietrocola, M.; Pospiech, G. Modelling Mathematical Reasoning in Physics Education. Sci. Educ. 2012, 21, 485–506.
55. Pietrocola, M. Mathematics as Structural Language of Physical Thought. In Connecting Research in Physics Education with Teacher Education; Vicentini, M., Sassi, E., Eds.; International Commission on Physics Education: 2008; Vol. 2.
56. Blum, W.; Borromeo Ferri, R. Mathematical Modelling: Can It Be Taught and Learnt? J. Math. Model. Appl. 2009, 1, 45–58.


57. Caballero, M. D.; Wilcox, B. R.; Doughty, L.; Pollock, S. J. Unpacking Students’ Use of Mathematics in Upper-Division Physics: Where Do We Go from Here? Eur. J. Phys. 2015, 36, 065004.
58. Prediger, S. “Aber Wie Sag Ich es Mathematisch?” Empirische Befunde und Konsequenzen zum Lernen von Mathematik als Mittel zur Beschreibung von Welt. In Entwicklung naturwissenschaftlichen Denkens zwischen Phänomen und Systematik. Jahrestagung der Gesellschaft für Didaktik der Chemie und Physik in Dresden 2009; Höttecke, D., Ed.; LIT-Verlag: Münster, 2010; pp 6–20.
59. Schön, D. A. The Reflective Practitioner: How Professionals Think in Action; Basic Books: New York, 1983.
60. Schön, D. A. Educating the Reflective Practitioner: Toward a New Design for Teaching and Learning in the Professions; Jossey-Bass: San Francisco, 1987.
