Research: Science and Education
edited by Diane M. Bunce, The Catholic University of America, Washington, D.C. 20064, and Michael J. Sanger, Middle Tennessee State University, Murfreesboro, Tennessee 37132

Factors Influencing Student Prerequisite Preparation for and Subsequent Performance in College Chemistry Two: A Statistical Investigation

David C. Easter, Department of Chemistry and Biochemistry, Texas State University-San Marcos, San Marcos, Texas 78666; [email protected]

Satisfactory completion of Chemistry One (Chem One) is a prerequisite for Chemistry Two (Chem Two), yet many students who enroll in second-semester college chemistry have a poor grasp of the prerequisite concepts and skills. In this report, Chem One and Chem Two refer, respectively, to the first- and second-semester introductory courses for undergraduate science majors. This study explores ten demographic factors and evaluates their relative contributions to a student's level of prerequisite Chem One content mastery, as measured by an assessment exam given at the beginning of the Chem Two semester. The demographic variables are then analyzed to develop a multivariable model that estimates a student's final course performance in Chem Two.

A number of studies have been published that identify assessments and other factors influencing student success in college chemistry courses. Factors that have been considered include mathematics skills (1, 2), logical thinking skills (3), chemistry diagnostic testing (2, 4), previous high school curriculum content and instructional practices (5, 6), student demographics (e.g., age and year in school) (7), and personality type (8). Although most such studies have focused on Chem One, reports by Leopold and Edgar and by Bunce and Hutchinson center on Chem Two (1, 3), and separate reports by Hahn (9), Derrick (10), and Nicoll (11) focus on physical chemistry (p chem). A common conclusion is that mathematics skills and chemistry background both play important roles in a student's success. Leopold and Edgar report that ∼17% of the variation in Chem Two course grades can be attributed to differences in mathematics assessment test scores (1). Russell reports that between 10% and 25% of an entering student's success in Chem One can be predicted on the basis of the score earned on a chemistry diagnostic test (4). McFate and Olmsted report a 9-36% predictive value for a combination mathematics-chemistry placement test in Chem One (2). The importance of mathematics skills to a student's success in p chem has also been reported (10), although the number of mathematics courses completed was not found to be significant (11). In addition, student performance in prior organic chemistry courses was found to be a significant predictor of p chem success (10).

In addition to mathematics skills and chemistry background, several other factors have been reported as significant. Logical thinking skills were found to be important for nonscience majors taking general chemistry (but less so for science majors) (3); similar findings regarding logical thinking skills have been reported for p chem (11).
Age (but not year in school) was cited as a factor in general chemistry by Wagner et al. (7). Clark and Riley found that a student's personality type is relevant: the highest average course grades were achieved by students who are productive when studying by themselves; recognize the class as beneficial to their goals; are comfortable handling abstract ideas; prefer to reach conclusions based on mathematical and logical deductions; and are both organized and punctual in completing academic tasks (8). For p chem, Hahn and Polik also emphasize the importance of study skills and student motivation (9).

Purpose of This Study

This investigation differs from previous studies in several important ways. Our initial goal was to assess demographic factors (based primarily on easily collected, self-reported student data) that affect a student's preparation for Chem Two. The assessment focuses on prerequisite Chem One concepts and skills and is administered at the beginning of the Chem Two semester to a student population characterized by diverse backgrounds. Our follow-up goal was to evaluate how the assessment exam outcomes can be combined with the aforementioned demographic factors to estimate subsequent student achievement in Chem Two.

By design, the demographic variables considered in this study can be collected in the classroom via a five-minute student survey early in the semester. Because our approach was designed to make data collection facile for the classroom teacher, it has some limitations. For example, entrance exam scores (e.g., SAT) and other data that are not routinely provided to classroom instructors were not considered; neither did we administer any specialized exams.

Methodology

Precourse Assessment and Survey

A total of 339 students enrolled in six sections of Chem Two (all taught by the same instructor with identical grading standards) took the ACS Brief (50-question) General Chemistry Exam (form GC98B) on or before the fifth class day. The data in this study were collected over two years, from Spring 2006 through Fall 2007. Exam administration followed the guidelines published by the ACS Exams Institute: the 55-min time limit was enforced, and students were instructed to do their best to answer every question.

Because a number of question topics covered on the exam are not included in our Chem One curriculum, the assessment score was based solely on the number of correct responses to the 31 exam items that are directly related to topics in the Chem One curriculum; this information was not communicated to students prior to the exam. For purposes of motivation, students were guaranteed the opportunity to replace their lowest regular exam score with the assessment exam score whenever the substitution was advantageous to their course grade.

The cumulative score distribution, test statistics, and question item statistics are provided in the online supporting material as Figure S1 and Tables S1 and S2. The assessment score distribution is nearly normal (i.e., Gaussian or bell shaped), with an extended high-end tail. The test reliability is 0.71, and more than two-thirds of the item discrimination indices (DIs) are larger than 0.30. Results of norm-referenced (standardized) exams with reliabilities of 0.70 or better can be used with confidence; for comparison, the nationally normed reliability of the complete ACS exam (when all 50 question items are included) is only slightly higher, at 0.79. A DI of 0.30 or higher indicates that a question item is strong in its ability to distinguish students achieving higher overall test scores from those earning lower scores. In combination, these exam statistics provide confidence that the assessment score is a meaningful metric for distinguishing relative differences in student mastery of the Chem One prerequisite material.

Before the assessment exam began, students were asked to respond to a survey consisting of nine question items; the self-reported demographics derived from student responses provide the foundational data for this investigation. The self-reported demographic data include grade-point average (GPA); major; classification (first-year, second-year, junior, senior, or graduate standing); when Chem One was completed; where Chem One was completed; who the student's Chem One instructor was; the Chem One final grade; enrollment status in the Chem Two lab; and the number of times the student had previously taken Chem Two. The original survey questions can be found in the online supporting material in Table S3. The semester of enrollment was incorporated as an additional variable in subsequent analysis.

Statistical Methodology

The null hypothesis, that variations within a given demographic variable are unrelated to assessment scores, was rejected with 95% confidence whenever the statistical significance, p, was less than or equal to 0.05. Preliminary one-way analysis of variance (ANOVA) determined that at least one group is statistically distinct from at least one other group within each of the demographic variables surveyed. A Levene test for homogeneity of variances was carried out for each variable: variances were considered homogeneous when p > 0.05 in the Levene test and were otherwise treated as heterogeneous. Multiple-category analysis was then applied to each variable, using the Bonferroni method for variables with homogeneous variances and the Tamhane T2 method for all remaining variables. Two-way ANOVAs were also carried out to test for interactions between demographic variables, but no such interactions were identified. For quantitative assessment purposes, numerical values were assigned to the categories within each demographic variable, and Pearson correlations of each variable to the assessment score were calculated, with missing data eliminated pairwise.
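For readers who want to trace the screening numerically, the sketch below reproduces the sequence (Levene test, one-way ANOVA, multiple-category follow-up, and a pairwise-deletion Pearson correlation) on synthetic data. The published analysis was run in SPSS; the Python code, data, and category names here are illustrative assumptions only, and the Bonferroni step is approximated by pairwise t tests at a corrected alpha.

```python
# A hypothetical re-creation of the per-variable screening: Levene test,
# one-way ANOVA, pairwise follow-up comparisons, and a pairwise-deletion
# Pearson correlation. The data and category names are invented.
import itertools
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
groups = {                          # assessment scores (out of 31) by category
    "cat_A": rng.normal(18, 4, 120),
    "cat_B": rng.normal(16, 4, 150),
    "cat_C": rng.normal(14, 5, 60),
}
samples = list(groups.values())

_, p_levene = stats.levene(*samples)        # homogeneous variances if p > 0.05
f_stat, p_anova = stats.f_oneway(*samples)  # is any group mean distinct?
print(f"Levene p = {p_levene:.3f}; ANOVA F = {f_stat:.2f}, p = {p_anova:.4f}")

# Multiple-category follow-up: Bonferroni approximated here by pairwise
# t tests at a corrected alpha; heterogeneous variances would call for
# Tamhane T2 instead (available in, e.g., the scikit-posthocs package).
pairs = list(itertools.combinations(groups, 2))
alpha = 0.05 / len(pairs)                   # Bonferroni-corrected threshold
for a, b in pairs:
    _, p = stats.ttest_ind(groups[a], groups[b], equal_var=(p_levene > 0.05))
    print(f"{a} vs {b}: p = {p:.4f} {'(distinct)' if p < alpha else '(n.s.)'}")

# Pearson correlation with pairwise deletion of missing data.
gpa = rng.normal(3.0, 0.5, 200)             # hypothetical coded variable
score = 5 * gpa + rng.normal(0, 4, 200)     # hypothetical assessment score
gpa[rng.choice(200, 10, replace=False)] = np.nan
mask = ~np.isnan(gpa) & ~np.isnan(score)
r, p_r = stats.pearsonr(gpa[mask], score[mask])
print(f"Pearson r = {r:.3f} (p = {p_r:.4f}); r^2 = {r * r:.3f}")
```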
Correlations are considered statistically significant with 95% confidence when p ≤ 0.05. In general, the square of the correlation coefficient (r²) measures the proportion of variability in the dependent variable (the assessment score) that can be “explained” by differences in the independent variable(s) being tested.

Following two-dimensional analysis of each demographic variable treated as a stand-alone predictor of the assessment score, multivariable regression models were constructed via a backward approach. The algorithm treats all variables simultaneously; it retains a variable in the model when p ≤ 0.05 and removes the variable when p > 0.10; if two or more variables simultaneously have p > 0.10, the variable with the largest p value is removed first. At every step of the procedure, each variable is reevaluated, and the process continues until all remaining variables are significant within a 90% confidence limit (p < 0.10). Statistical results were calculated using the SPSS 11.0 software package, and documentation for each of the procedures is in print (12, 13). In all multivariable linear modeling procedures, missing values were substituted by the corresponding mean value of the data set.
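A compact way to see these rules in action is the sketch below, which mean-substitutes missing values and then iteratively drops the least significant variable. It uses statsmodels on invented data rather than the SPSS procedure actually used in the study, and all variable names are hypothetical.

```python
# A hypothetical re-creation of the backward-elimination fit described
# above: mean-substitute missing values, fit, and repeatedly drop the
# least significant variable until all survivors have p < 0.10.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "gpa": rng.uniform(0.4, 1.0, n),                  # coded on a 0-1 scale
    "chem1_grade": rng.choice([0.25, 0.50, 0.75, 1.00], n),
    "semester": rng.integers(0, 2, n).astype(float),
    "redundant": rng.normal(0.0, 1.0, n),             # carries no signal
})
outcome = (10 + 12 * df["gpa"] + 9 * df["chem1_grade"]
           + 1.5 * df["semester"] + rng.normal(0, 3, n))

df.loc[rng.choice(n, 15, replace=False), "gpa"] = np.nan
df = df.fillna(df.mean())              # mean substitution of missing values

kept = list(df.columns)
while True:
    model = sm.OLS(outcome, sm.add_constant(df[kept])).fit()
    pvals = model.pvalues.drop("const")
    worst = pvals.idxmax()
    if pvals[worst] <= 0.10:           # every remaining variable significant
        break                          # within the 90% confidence limit
    kept.remove(worst)                 # drop the least significant variable

print("retained:", kept, f"; adjusted r^2 = {model.rsquared_adj:.3f}")
```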
Postcourse Summative Evaluation

Following analysis of the precourse assessment exam results, the procedures outlined above were applied to analyze the demographic variables and the precourse assessment scores as predictors of subsequent student course achievement. The course grade, assigned by the same instructor under a consistent grading policy, was chosen as the metric of student achievement and was evaluated only for students who remained enrolled throughout the semester. For quantitative purposes, each course grade was assigned a numerical value: A = 1.00; B = 0.75; C = 0.50; D = 0.25; F = 0.00.

Results and Analysis

Initial ANOVA and multiple-category analysis verified that at least one category within each demographic variable is characterized by an average assessment exam performance that is statistically distinct from at least one other category. Table S4 in the online supporting material documents the descriptive statistics and identifies all differences between variable categories that were found to be significant. To develop quantitative models, it was necessary to assign a numerical value to each variable category. The assigned numerical values are summarized in Table 1; for the sake of consistency, all values range from zero to one. A detailed discussion and justification of the methodology and rationale supporting these assignments is included in the online supporting material. Based on our data, one of the demographic variables (the student's major) did not appear to be quantifiable on any statistical basis; consequently, the student's major was not included as an independent factor in our modeling.

Direct correlations between the individual variables and both the assessment exam score and the final course grade were evaluated; the results are collected in the leftmost columns of Table 1. Each cell contains two values: the significance, p, and the adjusted r² value, expressed as a percent. The adjusted r² value is a quantitative estimate of how effective the variable will be for predicting outcomes in similar student populations. Stated differently, adjusted r² identifies the percent of variation in the outcome (assessment exam score or course grade) that can be directly attributed to variations within the variable.

Inspection of Table 1 confirms that, treated as isolated, stand-alone predictors, all nine variables are significantly correlated with both outcomes; however, their predictive efficacy (adjusted r²) values range from a low of 1.8% to a high of 18.3%.

Table 1. Comparison of Linear Regression Results by Variable^a

| Variable | Values Assigned to the Variable | Assessment Exam: p; adj r², % | Course Grade: p; adj r², % | Assessment Model (29.9%): β (p) | Grade Model 1 (27.1%): β (p) | Grade Model 2 (28.4%): β (p) |
|---|---|---|---|---|---|---|
| Students' self-reported grade point average (GPA) | GPA (two decimal places) divided by 4.00 | 0.000; 12.2 | 0.000; 17.2 | 0.164 (0.002) | 0.253 (0.000) | 0.227 (0.000) |
| Classification group | First-year/grad = 1; second-year/junior/senior = 0 | 0.000; 4.5 | 0.000; 3.8 | 0.154 (0.001) | 0.121 (0.011) | 0.099 (0.037) |
| Delay between taking Chem One and Chem Two | No delay = 1; one or more semesters' delay = 0 | 0.000; 8.3 | 0.000; 5.9 | [0.071] ([0.168]) | [0.005] ([0.923]) | [0.000] ([0.985]) |
| Institution type at which Chem One was completed | Four-year university = 1; two-year college = 0 | 0.003; 2.3 | 0.011; 1.8 | 0.137 (0.003) | 0.117 (0.012) | 0.095 (0.042) |
| Chem One instructor's student evaluation (ISE) average | Where available, ISE average in Chem One, divided by 5.00 | 0.000; 7.1 | 0.006; 3.2 | 0.172 (0.000) | 0.080 (0.096) | [0.054] ([0.265]) |
| Enrollment in Chem Two lab | Concurrent enrollment = 1; never enrolled = 0.5; previously completed = 0 | 0.002; 2.6 | 0.004; 2.4 | [0.010] ([0.836]) | [-0.019] ([0.713]) | [-0.020] ([0.689]) |
| Repeats of Chem Two | One or more = 0; none = 1 | 0.004; 2.1 | 0.001; 3.0 | [-0.007] ([0.890]) | [0.003] ([0.946]) | [0.005] ([0.924]) |
| Semester of course | Spring/summer = 1; fall = 0 | 0.002; 2.7 | 0.000; 4.2 | 0.090 (0.057) | 0.138 (0.004) | 0.134 (0.005) |
| Chem One grade | A = 1.00; B = 0.75; C = 0.50; D = 0.25 | 0.000; 18.3 | 0.000; 17.9 | 0.332 (0.000) | 0.263 (0.000) | 0.210 (0.000) |
| Assessment exam score | Raw assessment exam score divided by 31, the maximum possible score | N/A | 0.000; 16.5 | N/A | N/A | 0.163 (0.003) |

^a The leftmost data columns contain correlation results when each variable is treated as an isolated, stand-alone predictor of the assessment exam score or of the final course grade; each entry lists the significance, p, followed by the adjusted r² value in percent. The rightmost columns contain the outcomes of the three multivariable models; each entry lists the β coefficient followed by its p value, and the percentage in each model heading is that model's overall adjusted r². Values in brackets were not found to be significant at p < 0.10 (in the published table, values significant at p < 0.05 are set in bold type). The tabulated variables are identified and described in the text.
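For concreteness, the value assignments in Table 1 amount to a small coding function; the sketch below applies them to one hypothetical student record. The field names are our own shorthand, not the survey's wording.

```python
# A sketch of the Table 1 coding scheme applied to one (hypothetical)
# student record; every coded value falls between zero and one.
GRADE_POINTS = {"A": 1.00, "B": 0.75, "C": 0.50, "D": 0.25}

def encode(s: dict) -> dict:
    return {
        "gpa": s["gpa"] / 4.00,
        "classification": 1.0 if s["class"] in ("first-year", "grad") else 0.0,
        "no_delay": 1.0 if s["semesters_delay"] == 0 else 0.0,
        "four_year": 1.0 if s["chem1_institution"] == "4-year" else 0.0,
        # ISE unavailable (e.g., transfer students) -> None; such missing
        # values are mean-substituted before modeling.
        "ise": s["chem1_ise"] / 5.00 if s.get("chem1_ise") is not None else None,
        "lab": {"concurrent": 1.0, "never": 0.5, "completed": 0.0}[s["lab_status"]],
        "first_attempt": 1.0 if s["chem2_repeats"] == 0 else 0.0,
        "on_sequence": 1.0 if s["semester"] in ("spring", "summer") else 0.0,
        "chem1_grade": GRADE_POINTS[s["chem1_grade"]],
        "assessment": s["assessment_raw"] / 31,   # 31 = maximum possible score
    }

print(encode({"gpa": 3.10, "class": "second-year", "semesters_delay": 1,
              "chem1_institution": "4-year", "chem1_ise": 4.2,
              "lab_status": "concurrent", "chem2_repeats": 0,
              "semester": "fall", "chem1_grade": "B", "assessment_raw": 22}))
```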

Not surprisingly, for both the precourse assessment and the final course grade, the two most powerful single-variable predictors are the Chem One grade and the GPA, both having adjusted r² values of ∼12-18%. Each of the remaining isolated variables has a lower predictive value (adjusted r² = ∼2-8%). As the sole predictor of subsequent course grades, the assessment exam falls in the same range (adjusted r² = 16.5%) as the Chem One grade and the GPA; this is comparable to the previously reported predictive efficacies of other individual variables (1, 2, 4).

Three multivariable linear models were developed as described above: the first modeled assessment exam scores using all nine variables in Table 1; the second modeled the final course grade using the same nine variables; and the third modeled the final course grade using the assessment exam scores in addition to the nine variables. In essence, the modeling procedure fits the outcome data to a linear equation of the form

S = C + Σ_i (B_i V_i)    (1)

where S is the outcome value (e.g., the assessment exam score), C is a constant, V_i is the assigned numerical value of variable i (Table 1), and B_i is its corresponding multiplicative constant; the C and B_i values in eq 1 are optimized by the regression procedure. The final calculated sum, S, includes only variables that are determined to be significant in the model with a minimum confidence of 90% (p < 0.10).

Complete details of all modeling results are documented in the online supporting material, and key results are collected in the rightmost columns of Table 1. Each cell in the three rightmost columns of Table 1 contains two values. The first value is a β coefficient, determined in the same way as B except that all variable values are first converted to standard scores (z scores); the governing multivariable fit equation is

z_S = Σ_i (β_i z_i)    (2)

The standard score is defined as z = (x - x̄)/s: the student's value, x, minus the class mean, x̄, divided by the standard deviation, s, of the class values. Because standard (z) scores are independent of the absolute scale chosen to assign the raw variable values, the resulting β coefficients can be compared directly in order to evaluate the relative contributions of variables, both within and across models. Values for the β coefficients and their corresponding significance levels, p, are tabulated for all variables in Table 1; values presented in brackets were not found to be significant within the 90% confidence limit (p > 0.10), and all remaining values are significant with at least 90% confidence (in the published table, those significant at p < 0.05 are set in bold type).
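Equation 2 is straightforward to reproduce: standardize each variable and the outcome, then refit. A minimal sketch with invented data follows; because every standardized column has zero mean, no constant term is needed.

```python
# Minimal illustration of eq 2: convert every variable (and the outcome)
# to standard scores, then fit; the coefficients are the beta values.
# The data and variable names are invented for the example.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 300
X = pd.DataFrame({"gpa": rng.uniform(0.4, 1.0, n),
                  "chem1_grade": rng.choice([0.25, 0.50, 0.75, 1.00], n)})
y = 12 * X["gpa"] + 9 * X["chem1_grade"] + rng.normal(0, 3, n)

def z(v):                     # standard score: (value - mean) / std. dev.
    return (v - v.mean()) / v.std()

betas = sm.OLS(z(y), z(X)).fit().params   # no constant: all means are zero
print(betas)  # betas are directly comparable across variables and models
```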

All three multivariable models have a significance of p = 0.000 and an overall adjusted r² statistic between 27% and 30%; the latter values are tabulated in the three rightmost column headings of Table 1. Based on our data, multivariable modeling suggests that using a subset of the original variables can be as efficient as using the complete set for the purpose of projecting outcomes; furthermore, we ran separate calculations confirming that inclusion of all variables in the modeling does not improve the adjusted r² statistic for any of the three multivariable linear models in this study.

In order to assess the models' efficiencies for projecting course preparation and final course grades within our data set, we calculated projected scores by substituting the C and B_i values from Tables A7 and A8 (provided in the appendix of the online supporting material) into eq 1. Projected outcomes were then separated into ten percentile bins, ranging from the 0s (the bottom 10% of projected scores) to the 90s (the top 10% of projected scores). The projected percentile represents the percentage of classmates the student is expected to outperform based on the model calculation. For example, a percentile of 60 indicates that the student's performance is projected to exceed that of 60% of his or her classmates.

Figure 1 summarizes the relationships between actual outcomes and the projected percentile groups. Assessment scores in Figure 1 are represented as percentages, and course grades are based on the standard four-point scale (A = 4, etc.). The error bars in Figure 1 represent standard deviations of the outcomes for each respective percentile group: statistically, ∼68% of student outcomes fall within one standard deviation of the mean value. In our population, >68% of students in the top percentile group earned an A in the course, with a group average grade of 3.7. At the opposite end of the distribution, ∼68% of students in the projected bottom percentile group earned a C or a D, with a group average between 1.3 and 1.4. It is important to note that the 32% of grades not represented within the error bars are broadly spread: the top two percentile groups had minimum student grades of B and D, respectively, but the entire grade range (A-F) was observed in the lower eight percentile groups.

Analysis of our own data set suggests that the top 20% of students in our course grade projections were at minimum risk of failing the course; the relative risk increased as the percentile group decreased from the 90th down to the 50th percentile. Outcomes were virtually indistinguishable among the four intermediate (20th to 50th) percentile groups, but students in the two lowest projected percentile groups tended to be at higher risk than their classmates.
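The construction behind Figure 1 can be sketched in a few lines: compute eq 1 projections, rank them into ten percentile groups, and summarize the actual outcomes within each group. The data below are simulated stand-ins, not the study's records.

```python
# Sketch of the Figure 1 construction: bin students into ten percentile
# groups by their eq 1 projections, then summarize actual outcomes per
# group. Projections and grades here are simulated for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 300
projected = rng.normal(0.60, 0.12, n)        # stand-in for eq 1 projections
grade = 4 * np.clip(projected + rng.normal(0, 0.15, n), 0, 1)  # 4-pt scale

df = pd.DataFrame({"projected": projected, "grade": grade})
# Percentile group: 0s = bottom 10% of projections, ..., 90s = top 10%.
df["pct_group"] = (df["projected"].rank(pct=True) * 10).astype(int).clip(upper=9) * 10

summary = df.groupby("pct_group")["grade"].agg(["mean", "std"])
print(summary)   # mean actual grade and its s.d. for each projected decile
```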

Figure 1. Relationship between model projections and actual outcomes. For each of the three models, average actual outcomes and their standard deviations are shown for each model's predicted percentile groups.

Discussion and Conclusions

Advantages of the Multivariable Linear Modeling Approach

Ideally, each demographic variable would be independent of the others and would have no logical connections or overlaps with any of the other variables. Realistically, however, some overlap exists between variables in this investigation. A specific example involves students who were repeating Chem Two: such students were nearly always non-first-year students, a majority of them had completed the Chem Two lab previously, and none of them were enrolling in the course in the semester immediately following their completion of Chem One. Because of such interdependencies, efficient modeling must account for the redundant information contained within the set of variables. The effectiveness of the multivariable modeling approach lies partially in its capability of identifying the minimum set of variables, along with each variable's relative contribution, that is sufficient to optimize predictions of the desired outcome; in the same process, variables that are redundant (because including them does not improve the model's predictive reliability) are identified and removed from the model. Compared to the predictive value of any single-variable approach, the predictive value of our multivariable models is improved by a minimum of 50%. Furthermore, only six variables were identified as significant in the three optimized models, confirming redundancy and overlap within the original variable set.

Variables Identified as Significant in This Study

Not surprisingly, the student's GPA and Chem One grade are among the most significant variables in all three multivariable models. Both variables are direct indicators of a student's academic strength and historical performance. In general, the intuitive expectation is supported: stronger students tend to perform at a higher level than weaker students do.

In addition to GPA and Chem One grade, four demographic variables were identified in our study as significant for estimating a student's assessment score, final course grade, or both.

Where the student completed the Chem One prerequisite (institution type: four-year university or two-year college) was found to be a relevant consideration. As a group, students who completed the Chem One prerequisite at a two-year college were more poorly prepared, and they subsequently performed more poorly in the course, compared to students who completed the prerequisite at a four-year university. The underlying reason is very likely that Texas State University-San Marcos enforces higher entrance standards for new first-year students than for transfer students from other Texas state colleges and universities. This result highlights the fact that evaluation of a student's GPA, Chem One record, or both will be more informative when carried out with consideration of the strengths and weaknesses of the institution at which the student's grades were awarded.

The student's Chem One instructor was found to have a definite influence on subject preparation (assessment exam, p = 0.000), both as a stand-alone variable and in the multivariable model. This influence carried over, but to a lesser extent (p = 0.096), to the student's eventual performance in Chem Two when the assessment exam was not included in the model; however, the influence was not statistically significant (p = 0.265) in the multivariable model when the assessment exam was included as a variable (Table 1, rightmost columns). The most straightforward interpretation of this result is that the student's level of preparation for Chem Two is directly influenced by the Chem One instructor's effectiveness, and that it is the student's prerequisite preparation that carries forward to affect subsequent achievement in Chem Two.

Our quantitative analysis of the Chem One instructor's influence was based on instructor student evaluation (ISE) averages for instructors who had taught Chem One at Texas State during the period of this study. The apparent correlation between ISE averages and assessment exam scores raises a question: do student evaluation rankings reflect teaching effectiveness, and how should such rankings be weighted in personnel (e.g., tenure, promotion, and merit) decisions? Nearly three decades ago, Martin reported in this Journal that he found no meaningful relationship between instructor student evaluation averages and average class performance on a common final exam (14). Unfortunately, our data are insufficient to address this question, for three reasons: (i) because the data were limited to instructors who had taught Chem One at Texas State within a limited time frame, approximately one-third of students (including all transfers) were not represented in the statistics; (ii) only five Texas State Chem One instructors were represented by twenty or more students; and (iii) approximately half of the included students had one of two instructors, the first of whom is highly esteemed by peers for superior teaching and is also extremely popular among both students and peers, while the second is regarded by peers as a weaker teacher and is also significantly less popular among students, faculty, and staff. Because the results are so heavily tied to two Chem One instructors, it is not possible to separate the influence of effective teaching from that of personal likeability in the ISE averages.
That there is a relationship between excellent Chem One classroom instruction and higher assessment exam scores is unambiguous. Our data set, however, is simply too limited to establish a statistically meaningful relationship between the instructor's ISE ratings and teaching effectiveness.

Student classification and semester of enrollment are two additional factors that contribute to all three multivariable models. Both variables are related to whether a student is “on track” with respect to the recommended course sequence: first-year and graduate students are typically enrolling in the course on schedule, while non-first-year undergraduates are not. (Graduate students typically enroll in Chem Two at their first opportunity in order to meet graduate program requirements that were not required for the undergraduate degree.) Similarly, enrollment in the spring is the norm, while fall enrollment is off sequence. As a group, students who are on sequence perform at a higher level than those who are not. The factors underlying these differences are hypothesized to include, in part, student attitudes and motivations toward the course; this conjecture is consistent with extensive anecdotal evidence from instructors who have taught the course in both fall and spring semesters. Students who enroll in the condensed, five-week summer session typically work as diligently as, and perform at levels comparable to, their spring counterparts; as a result, summer students were grouped with the “on sequence” students in our analysis. These results appear to confirm that student attitudes and motivations toward the course are relevant factors influencing ultimate achievement.

In all three of our multivariable models, three variables (delay between Chem One and Chem Two, number of repeats of Chem Two, and enrollment status in the Chem Two lab) were eventually removed because their statistical significance within the model did not meet the minimum (p < 0.10) threshold (Table 1, rightmost columns). Recall that, as stand-alone predictors, each of these three excluded variables is significantly correlated both with the assessment exam and with the course grade outcomes, with predictive efficacies in the range of 2-8% (Table 1, leftmost columns). To analyze this further, we reran all three multivariable model calculations, this time retaining all variables regardless of their significance levels: all three recalculated models had adjusted r² values equal to those of the models that exclude variables on the basis of statistical significance (Table 1). The implication is that including the three “redundant” variables in the models does not affect (either positively or negatively) the models' predictive values. This result is expected when the variables in question provide redundant information. That said, it is conceivable that in a completely different classroom, having demographic dynamics that differ from our own, one or more of the three excluded variables might be evaluated as statistically significant within a similar multivariable model and could conceivably replace one or more of the variables identified as significant in our models.

Limitations and Transferability

Because this study was based on ∼300 students taking Chem Two from a single instructor, it is reasonable to wonder whether the results are applicable to other instructors at other institutions. The adjusted r² statistic calculated as part of each model (Table 1) estimates the percentage of variance in outcomes that the same model is expected to predict when applied to a different yet similar population of students. For all three models, the adjusted r² is 27-30%, supporting general applicability to similar student populations.
Because statistical reliability tends to increase with sample size, it is possible that conducting the study over a longer period of time, with a larger student population, could improve confidence in the results.

More problematic, though, is the notion of similar populations. Our student population was arbitrarily limited because only those students who enrolled in Chem Two under the same instructor were included; the participant selection was therefore not a random sample. Within our population, large subsets of students were non-first-year students (87%), transfer students (16%), or enrolled during an “off-sequence” semester (34%). To the extent that demographics in other classrooms differ from our own, those student rosters may not be similar enough to warrant quantitative application of our model results.

Nevertheless, the general principles observed in this study appear to be transferable, the most important of which is that effective assessment of student potential and risk requires a multivariable approach. Modeling of our data suggests the importance of four general considerations: historical academic achievement (GPA and Chem One grade); quality of prerequisite preparation (Chem One instructor or assessment exam score) (2, 4); academic background (institution type at which Chem One was completed); and student motivation and attitude (classification and semester of enrollment) (7-9). In our models, the relative importance (i.e., weighting) of historical academic achievement was ∼50%, that of student motivation and attitude was ∼25%, and the remaining two factors were weighted at ∼12.5% each. Although approximate, these weightings establish a framework from which many classroom instructors will be able to devise a useful assessment strategy.

A final comment is in order. Although adjusted r² values near 30% are good for this kind of modeling, actual individual outcomes differ from model projections. Models such as these are helpful for illuminating the nature and relative influence of factors that affect outcomes. However, a student's ultimate success will always be strongly influenced by personal motivation, hard work, and effective classroom guidance.

Ongoing Work

At least three questions remain. By design, this investigation did not consider any data that are not easily accessible to the normal classroom instructor (e.g., entrance exam scores, problem-solving skills, or personality profiles); as a result, the outcomes of this study do not tell the “complete” story. They complement, but do not supersede, the findings of previous Chem Two studies focused on mathematics skills (1) and logical thinking skills (3). Would including such data improve the effectiveness of the multivariable models? In our analysis, we have hypothesized that two demographic factors (both found to be significant: classification and semester) are linked to student attitudes and motivations; is it possible to establish this connection more solidly?

Nearly one-third of the students in this study were excluded from the Chem One instructor analysis because the relevant Chem One instructor data were unavailable; is it possible to establish a relationship between student perceptions of teaching effectiveness and their actual prerequisite preparation (assessment exam scores) in the absence of independent external data related to specific Chem One instructors? In order to address these and other questions, we have modified both the preassessment survey and the assessment exam itself, and we have recently launched a new two-year study that will encompass all Chem Two sections taught within the department. For the sake of uniformity, semester achievement will be measured via scores on a common ACS final exam, and entrance exam scores will be incorporated in the analysis when available. Each student who takes the exam will be provided with a report identifying the topic covered by each question and indicating whether the student answered that item correctly. In addition, we intend to investigate the extent to which providing such information, whether alone or in combination with additional review materials, is efficacious in improving ultimate student success in the course.

Literature Cited

1. Leopold, D. G.; Edgar, B. J. Chem. Educ. 2008, 85, 724–731.
2. McFate, C.; Olmsted, J., III. J. Chem. Educ. 1999, 76, 562–565.
3. Bunce, D. M.; Hutchinson, K. D. J. Chem. Educ. 1993, 70, 183–187.
4. Russell, A. A. J. Chem. Educ. 1994, 71, 314–317.
5. Tai, R. H.; Sadler, P. M. J. Chem. Educ. 2007, 84, 1040–1046.
6. Tai, R. H.; Ward, R. B.; Sadler, P. M. J. Chem. Educ. 2006, 83, 1703–1711.
7. Wagner, E. P.; Sasser, H. J. Chem. Educ. 2002, 79, 749–755.
8. Clark, G. J.; Riley, W. D. J. Chem. Educ. 2001, 78, 1406–1411.
9. Hahn, K. E.; Polik, W. F. J. Chem. Educ. 2004, 81, 567–572.
10. Derrick, M. E.; Derrick, F. W. J. Chem. Educ. 2002, 79, 1013–1016.
11. Nicoll, G.; Francisco, J. S. J. Chem. Educ. 2001, 78, 99–102.
12. SPSS for Windows 11.0; SPSS: 2001.
13. Norusis, M. J. SPSS 11.0: Guide to Data Analysis; Prentice Hall: Upper Saddle River, NJ, 2002.
14. Martin, R. R. J. Chem. Educ. 1979, 56, 461–462.

Supporting Information Available

Detailed documentation and justification of the statistical methods used, accompanied by numerous tables and figures. This material is available via the Internet at http://pubs.acs.org.
