Chapter 8

Downloaded by UNIV OF FLORIDA on December 11, 2017 | http://pubs.acs.org Publication Date (Web): November 20, 2017 | doi: 10.1021/bk-2017-1260.ch008

Assessment of the Effectiveness of Instructional Interventions Using a Comprehensive Meta-Analysis Package

Alexey Leontyev,*,1 Anthony Chase,2 Steven Pulos,3 and Pratibha Varma-Nelson2,4

1Department of Chemistry, Computer Science and Mathematics, Adams State University, Alamosa, Colorado 81101, United States
2STEM Education Innovation and Research Institute (SEIRI), Indiana University-Purdue University Indianapolis, Indianapolis, Indiana 46202, United States
3School of Psychological Sciences, University of Northern Colorado, Greeley, Colorado 80639, United States
4Department of Chemistry and Chemical Biology, Indiana University-Purdue University Indianapolis, Indianapolis, Indiana 46202, United States
*E-mail: [email protected]

The purpose of this manuscript is to introduce a software solution for a meta-analysis and to provide information on some steps of conducting a meta-analysis. We have used a set of papers that investigate the effectiveness of peer-led team learning strategies as an example for conducting a meta-analysis. Comprehensive Meta-Analysis software was used to calculate effect sizes, produce a forest plot, conduct moderator analysis, and assess the presence of publication bias.

Introduction

When multiple studies addressing the same topic exist, a meta-analysis can be conducted to summarize, integrate, and interpret their results. One can view a meta-analysis as conducting research on research. In a meta-analysis, instead of surveying people, which is a common way of conducting research in the social sciences, research reports are surveyed, essential information is extracted

© 2017 American Chemical Society. Gupta; Computer-Aided Data Analysis in Chemical Education Research (CADACER): Advances and Avenues; ACS Symposium Series; American Chemical Society: Washington, DC, 2017.


from them, and the resulting information is analyzed using adapted statistical techniques. A meta-analysis is conducted to investigate the dispersion of effects found across multiple studies and to assess what factors influence the direction and magnitude of these effects. Another advantage of a meta-analysis is that instead of looking at p-values, we work directly with effect sizes and interpret them in the given context. An effect size is a measure of the difference between two groups that emphasizes the size of that difference rather than confounding it with sample size.

The purpose of this manuscript is to demonstrate how some steps of a meta-analysis can be performed in the Comprehensive Meta-Analysis (CMA) package. In our meta-analysis, we included studies that were reported in the review by Wilson and Varma-Nelson (1) and satisfied the following criteria: i) the study design included two or more groups; ii) at least one of the groups used peer-led team learning (PLTL), and at least one was a comparison group that did not use PLTL; iii) enough statistical information about achievement outcomes was reported for both groups, either in the form of mean scores or success rates.

The purpose of this chapter is not to conduct a comprehensive meta-analysis of the PLTL literature but rather to show the functionality of the software. The published papers included in this meta-analysis most likely represent only a small sample of all possible studies. For example, it is recommended to include gray literature (conference proceedings, dissertations, and grant reports) in meta-analyses, but it was not included in this analysis because it was not included in the original review article by Wilson and Varma-Nelson (1). One advantage of using this real data set is that it allows readers to see such obstacles of meta-analysis as multiple outcomes and missing data.
From the review by Wilson and Varma-Nelson (1), we found 16 studies to include in the following meta-analysis (2–17). All of these studies investigate the effectiveness of PLTL by comparing it with alternative strategies. We had to exclude several studies; for example, one study (18) compares traditional PLTL with PLTL in cyberspace (cPLTL). The included studies are summarized in Table 1. The analysis was performed in the Comprehensive Meta-Analysis package (version 3.3.070) developed by Biostat, Inc. This software was chosen for its rich functionality, ease of use, and high-quality graphics. The first author of the manuscript (AL) received a one-year evaluation copy of the software from Biostat, Inc. A trial version can be downloaded from https://www.meta-analysis.com; note that the trial version will only work for 10 days or 10 runs. While it is possible to complete a meta-analysis in one run with all data ready to enter into the trial version, we felt obliged to inform our readers about alternatives for conducting a meta-analysis. A section describing alternatives can be found at the end of the chapter.


Table 1. Included Studies, Discipline, and Reported Outcomes

Study | Discipline | Outcome
Akinyele, 2010 (2) | General, Organic, and Biochemistry (GOB) | Achievement
Báez-Galib et al., 2005 (3) | General Chemistry | Passing
Chan & Bauer, 2015 (4) | General Chemistry | Achievement
Hockings et al., 2008 (5) | General Chemistry | Passing
Lewis & Lewis, 2005 (6) | General Chemistry | Achievement
Lewis & Lewis, 2008 (7) | General Chemistry | Achievement
Lewis, 2011 (8) | General Chemistry | Achievement and Passing
Lyon & Lagowski, 2008 (9) | General Chemistry | Achievement
McCreary et al., 2006 (10) | General Chemistry Labs | Achievement
Mitchell et al., 2012 (11) | General Chemistry | Passing
Rein & Brookes, 2015 (12) | Organic Chemistry | Achievement
Shields et al., 2012 (13) | General Chemistry | Achievement
Steward et al., 2007 (14) | General Chemistry | Passing
Tenney & Houck, 2003 (15) | General Chemistry | Passing
Tien et al., 2002 (16) | Organic Chemistry | Achievement
Wamser, 2006 (17) | Organic Chemistry | Passing

Figure 1. Logic of Comprehensive Meta-Analysis.


Entering Data into CMA


The CMA interface has two distinct parts. When you start the program, you see a table where you enter all studies and their statistical information. When you click the Run Analysis button, you see a window with the analysis of the entered studies and all of CMA's computational functions. The logic of the program is shown in Figure 1. In this manuscript, we will refer to these parts of the program as the STUDIES TAB and the ANALYSIS TAB. When you open CMA, it automatically starts in the STUDIES TAB, where you enter information about the studies, the type of outcome, their statistical characteristics, and moderator variables. The spreadsheet CMA displays at startup is depicted in Figure 2.

Figure 2. CMA at start.

At this point, the program does not recognize the type of information you enter in the columns. You need to insert columns for study names, outcomes, and moderators. To do this, go to Insert → Column for → Study names. If the data include two or more outcomes, repeat the same procedure for Outcome names. For our dataset, we treat discipline as a moderator variable. The PLTL studies included in our analysis were done in various classes, such as general chemistry or organic chemistry, so this moderator is a categorical variable. To add a column for a moderator variable, click Insert → Column for → Moderator variable. When you add the column, a dialog box appears (Figure 3), where you enter the name of the moderator and specify its data type.


Figure 3. The dialog box for entering a moderator in CMA.

After you have entered the data, you can select an appropriate measure from an array of effect size indices. Since the studies included in our dataset use experimental or quasi-experimental designs based on means, the appropriate choice of effect size is Hedges's g. Hedges's g is similar to the well-known Cohen's d statistic, which is commonly used as a measure of effect size for studies that involve a comparison between groups.

However, Cohen's d overestimates effect sizes for small samples. This bias can be eliminated by using Hedges's g, which includes a correction factor (19).
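The correction can be sketched in a few lines. The following Python snippet (with hypothetical summary statistics, not data from the studies above) computes Cohen's d from the pooled within-groups standard deviation and applies the small-sample correction factor J = 1 − 3/(4df − 1) described by Borenstein et al. (19):

```python
import math

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    # Pooled within-groups standard deviation.
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / sp

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    # The correction factor J shrinks d slightly toward zero.
    df = n1 + n2 - 2
    j = 1 - 3 / (4 * df - 1)
    return j * cohens_d(m1, sd1, n1, m2, sd2, n2)
```

For two groups of 30 with means 75 and 70 and a common SD of 10, d is 0.500 and g is about 0.494; the correction is small here but grows as samples shrink.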

CMA allows data entry in more than 100 different formats, but for this dataset, we used only three. Included studies mainly use two types of outcomes: achievement scores and passing rates. These studies should be entered with corresponding information necessary to compute effect sizes. Several studies (2,4,9) include multiple measures of the same outcomes for the same or different


groups. In these cases, we averaged the effect sizes and entered them directly into the program. CMA also offers the option to merge data from multiple outcomes or groups automatically. To enter the column for the studies that used direct comparisons of two groups, click Insert → Column for → Effect size data. In the dialog box, select Show common formats only → Comparison of two groups, time-points, or exposures (includes correlations) → Two groups or correlations → Continuous (means) → Unmatched groups, post data only → Mean, SD and sample size in each group. When you click the last option, a dialog box appears where you can specify the names for each group; it is convenient to accept the options offered by the program and name Group-A as Treated and Group-B as Control. To enter the columns for studies that report a dichotomous outcome, click the same options as in the previous paragraph, but select Dichotomous (number of events) under Two groups or correlations. Then select Unmatched groups, prospective (e.g., controlled trials, cohort studies) → Events and sample size in each group. To add the last format for effect sizes, click Two groups or correlations → Continuous (means) → Computed effect sizes → Hedges's g (standardized by pooled within-groups SD) and variance. After all formats for entering effect size data are added, you will see tabs at the bottom of the screen (Figure 4); the highlighted tab indicates that you can enter data in the corresponding format. For our meta-analysis, we also selected Hedges's g as our primary index (located in the yellow column at the very end of the data entry table).
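For the dichotomous (passing-rate) studies, CMA converts the counts to a d-type index internally. As a rough sketch of the standard conversion via the log odds ratio (19), with hypothetical pass/fail counts rather than values from the included studies:

```python
import math

def passing_to_d(pass_t, n_t, pass_c, n_c):
    # 2x2 counts: passed and failed in the treatment and control groups.
    a, b = pass_t, n_t - pass_t
    c, d = pass_c, n_c - pass_c
    lor = math.log((a * d) / (b * c))        # log odds ratio
    v_lor = 1 / a + 1 / b + 1 / c + 1 / d    # its variance
    # Standard conversion from a log odds ratio to a d-type effect size.
    d_eff = lor * math.sqrt(3) / math.pi
    v_eff = v_lor * 3 / math.pi ** 2
    return d_eff, v_eff
```

For example, 80 of 100 passing under treatment versus 60 of 100 in the control converts to an effect of roughly 0.54; the small-sample correction to Hedges's g would then be applied on top of this value.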

Figure 4. One of the templates for entering the data in CMA.

Now we can begin entering studies. After a study is entered and the effect direction is specified as Auto, you should see effect sizes for each study in the

yellow column. To switch between data entry formats, click the tabs at the very bottom of the screen. After all studies are entered, the tab should look as in Figure 5. It is also possible to prepare all data in an Excel spreadsheet and then paste it into CMA, where you can specify the columns. This approach works well with existing datasets in which all studies report data in the same format; often, this is not the case with educational research studies.


Analysis of Data in CMA

After all data are entered, click the Run Analysis button at the top of the screen; that will bring you to the ANALYSIS TAB with the outcomes of the analysis. You will see a table with all effect size information and the so-called forest plot, a graphical representation of effect sizes explained later (Figure 6). From the ANALYSIS TAB, you have several computational options that allow you to see different facets of a meta-analysis. Some of them are described in the sections below.

Figure 5. STUDIES TAB after all studies are entered into CMA.

Fixed versus Random Effects

At the bottom of the screen, you can see the tabs Fixed, Random, and Both Models. These tabs indicate which model was used to produce the overall effect size. The fixed-effect model is based on the assumption that the true effect is the same for all studies and that observed effects vary only due to sampling error. The random-effects model assumes that the true effect varies from study to study. For our data, since participants come from different populations, the random-effects model is the most appropriate. However, with a small number of studies, the estimates that are


used to calculate random effects are not stable, so it might be beneficial to look at estimates and confidence intervals for both models.
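The difference between the two models is easiest to see in the weighting. As a minimal sketch in Python, with hypothetical effect sizes and variances (and a between-studies variance tau² assumed to have been estimated separately), the fixed-effect estimate weights each study by 1/v, while the random-effects estimate weights by 1/(v + tau²), which gives small studies relatively more influence:

```python
gs = [0.10, 0.35, 0.60, 0.80]   # hypothetical Hedges's g values
vs = [0.04, 0.02, 0.05, 0.03]   # their within-study variances
tau2 = 0.055                    # assumed between-studies variance

def weighted_mean(effects, weights):
    return sum(w * g for w, g in zip(weights, effects)) / sum(weights)

# Fixed effect: each study weighted by 1 / v_i.
m_fixed = weighted_mean(gs, [1 / v for v in vs])

# Random effects: weights 1 / (v_i + tau^2) flatten the weighting.
m_random = weighted_mean(gs, [1 / (v + tau2) for v in vs])
```

With these numbers the two pooled estimates differ only slightly (about 0.457 versus 0.462), but the gap widens as tau² grows relative to the within-study variances.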

Figure 6. ANALYSIS TAB of CMA.

Figure 7. Sensitivity analysis.

Sensitivity Analysis


If you click the One study removed tab at the very bottom of the screen, you will see the results of the so-called sensitivity analysis (Figure 7). In the sensitivity analysis, the overall effect is recomputed with one study removed at a time. The effect shown on each row is therefore not the effect for that row's study, but the summary effect for all studies with that study removed. Generally, this does not change the results for meta-analyses that contain more than 10 studies. As we can see from the results of the sensitivity analysis, removing any single study does not change the effect in a way that would be of substantive import.
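The one-study-removed computation is simple to sketch: pool all studies, then repeat the pooling with each study left out in turn. A minimal Python illustration with hypothetical effect sizes and variances (fixed-effect pooling for brevity):

```python
def pooled(effects, variances):
    # Inverse-variance (fixed-effect) pooled estimate.
    ws = [1 / v for v in variances]
    return sum(w * g for w, g in zip(ws, effects)) / sum(ws)

gs = [0.10, 0.35, 0.60, 0.80]   # hypothetical effect sizes
vs = [0.04, 0.02, 0.05, 0.03]   # their variances

overall = pooled(gs, vs)
leave_one_out = [
    pooled(gs[:i] + gs[i + 1:], vs[:i] + vs[i + 1:])
    for i in range(len(gs))
]
```

Comparing each leave-one-out estimate to the overall estimate shows how much any single study pulls the summary effect; removing the largest effect lowers the pooled value, and removing the smallest raises it.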

Analysis of Heterogeneity

Once you click the Next table button, the display toggles to another computational outcome of the meta-analysis (Figure 8). You will see the numerical overall effect sizes with their standard errors, variances, and lower and upper confidence limits. The test of the null hypothesis tells us whether the overall effect is significantly different from zero. You will also see the Q-value (along with the corresponding df and p values) and I-squared, both measures of heterogeneity. The Q-test tells us whether there is dispersion across effect sizes; the I-squared statistic attempts to quantify how much of the study-to-study dispersion is due to real differences in the true effects. Tau-squared is an estimate of the between-studies variance, and tau is an estimate of the between-studies standard deviation. Note that the values for the Q statistic, I-squared, and the tau estimates are reported on the line for the fixed-effect model, because these values are computed using weights obtained under fixed-effect assumptions; however, these statistics apply to both statistical models.
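These heterogeneity statistics can be reproduced from the per-study effects and variances. A sketch in Python with hypothetical data, following the standard formulas (Q as the weighted sum of squared deviations from the fixed-effect mean, I² = (Q − df)/Q, and the DerSimonian-Laird estimate of tau²):

```python
import math

def heterogeneity(effects, variances):
    ws = [1 / v for v in variances]
    m = sum(w * g for w, g in zip(ws, effects)) / sum(ws)
    # Q: weighted sum of squared deviations from the fixed-effect mean.
    q = sum(w * (g - m) ** 2 for w, g in zip(ws, effects))
    df = len(effects) - 1
    # I-squared: share of dispersion beyond what sampling error explains.
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    # DerSimonian-Laird estimate of tau-squared.
    c = sum(ws) - sum(w ** 2 for w in ws) / sum(ws)
    tau2 = max(0.0, (q - df) / c)
    return q, i2, tau2, math.sqrt(tau2)
```

Note that when Q is no larger than its degrees of freedom, both I² and tau² are truncated at zero, which is why observed heterogeneity estimates are never negative.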

Figure 8. Numerical outcomes of meta-analysis: overall effect size, test of null hypothesis, heterogeneity analysis, and tau-squared estimates.


Forest Plot

Once you click the High resolution plot button, the program takes you to the graphical interface, where you can work with the forest plot. The menu allows you to customize graphic parameters according to aesthetic preferences. A customized forest plot for the meta-analysis of PLTL is presented in Figure 9. Note that we removed some duplicate information to declutter the plot. The forest plot can be viewed as the end product of a meta-analysis, so it is worth spending time and effort to better convey information about the studies and results. A forest plot is a graphical presentation of the results of a meta-analysis; it gets its name because it allows one to see the "forest through the trees." Each square is centered on the value of the effect size for that study, so its location represents the direction and magnitude of the effect of the intervention. The line crossing each square indicates the 95% confidence interval. In CMA, the size of the square is proportional to the sample size of the study. If the line crosses the g = 0 line, the study's effect is not statistically significant. In general, confidence intervals are wider for studies with smaller numbers of participants. The diamond at the very bottom represents the overall effect size. The forest plot can be exported as a Word or PowerPoint file; to do this, select the corresponding option from the File menu.

Figure 9. Forest plot of meta-analysis.

Publication Bias Analysis

Publication bias analysis is based on the same premise as a sensitivity analysis. While multiple approaches exist for testing the presence of publication bias, most are based on the idea that studies with statistically non-significant p-values are more likely to be missing from the analysis. This phenomenon is called the


file drawer problem: studies that failed to find statistical significance are less likely to be published and instead end up in file drawers. To perform this analysis, click on Analysis → Publication Bias. This will bring you to a funnel plot, in which effect sizes are plotted against either standard error or precision; to switch between the two, click the Plot precision or Plot standard error button at the top of the screen. Visual examination of the funnel plot for asymmetry may give some idea about the presence or absence of small non-significant studies. Small studies have a larger standard error, so they are located in the lower part of the funnel plot; non-significant studies are located in the left part because their effect sizes are very small or negative. A funnel plot that appears symmetrical suggests the absence of publication bias. The funnel plot for our analysis is shown in Figure 10.
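The visual judgment of asymmetry can be backed by a numerical check. As a minimal sketch of the idea behind Egger's regression, one of the tests CMA reports, the standardized effect (g/SE) is regressed on precision (1/SE), and an intercept far from zero suggests asymmetry. The data below are hypothetical:

```python
def egger_intercept(effects, std_errors):
    # Regress standardized effect (g / SE) on precision (1 / SE) by OLS.
    ys = [g / s for g, s in zip(effects, std_errors)]
    xs = [1 / s for s in std_errors]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx   # intercept far from zero suggests asymmetry
```

In a symmetric dataset (every study estimating the same effect) the intercept is zero; when small studies show systematically larger effects, the intercept moves away from zero. The full test also reports a standard error and p-value for the intercept, which this sketch omits.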

Figure 10. Funnel plot for publication bias analysis.

If you click the Next table button at the top of the screen, you will be shown a numerical analysis of publication bias. CMA offers the following analyses: classic fail-safe N, Orwin's fail-safe N, the Begg and Mazumdar rank correlation, Egger's regression, and Duval and Tweedie's trim and fill procedure. Fail-safe numbers represent how many non-significant studies would be needed to nullify the observed effect of the intervention; Egger's regression and the Begg and Mazumdar test investigate the relationship between study size and effect size; and Duval and Tweedie's trim and fill procedure estimates the number of missing studies and corrects for the funnel plot asymmetry arising from their omission. Of these, Duval and Tweedie's method gives the most reliable estimate of the presence of publication bias, although it does not work well with highly heterogeneous datasets. For our dataset, this method shows two imputed studies


to the left of the mean. As we specified earlier, our analysis does not contain gray literature. Since gray literature is more likely to include non-significant results than published studies, it is quite likely that a careful search of conference proceedings would yield several small, non-significant studies similar to the imputed studies from Duval and Tweedie's trim and fill procedure. Figure 11 shows the funnel plot with the imputed studies and the corrected overall effect. The theory underlying the method is quite simple: publication bias leads to an asymmetric funnel plot, with small studies that failed to find significance located to the left of the overall effect size. Duval and Tweedie's is an iterative procedure that trims the most extreme studies and then fills in computed estimates to restore the symmetry of the funnel plot (19). The imputed studies shown on the plot are those needed to keep the funnel plot symmetrical around the unbiased estimate of the overall effect.
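Of the numerical tests named earlier, the classic fail-safe N is the simplest to reproduce. A sketch of Rosenthal's formula in Python, using hypothetical per-study z-scores and a one-tailed alpha of .05 (z = 1.645):

```python
import math

def classic_failsafe_n(z_scores, z_alpha=1.645):
    # Rosenthal's fail-safe N: how many zero-effect (file-drawer)
    # studies would push the combined z below the alpha threshold.
    k = len(z_scores)
    return max(0, math.ceil(sum(z_scores) ** 2 / z_alpha ** 2 - k))
```

A large fail-safe N relative to the number of included studies suggests the overall result is robust to unpublished null findings, although the method is widely criticized for assuming the missing studies average exactly zero effect.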

Figure 11. Funnel plot with imputed studies.

Moderator Analysis

The relationship between the independent variable (mode of instruction) and the dependent variable (achievement) can be affected by a moderator variable, which can be either continuous or categorical. Examples of categorical moderators include the type of chemistry class or the geographical region where a study was conducted; examples of continuous moderators include the duration of a study or attrition rates. To perform an analysis that investigates the impact of a moderator variable (discipline), go to Computational options → Group by, then select Discipline in the dialog box as the moderator variable and check both boxes, Also run analysis across levels of discipline and Compare effects at different levels of discipline, as shown in Figure 12. Click Ok.


Figure 12. The dialog box for moderator analysis.

Now we can see studies grouped by their moderator variables in the ANALYSIS TAB. To see only the overall effects for each level of a moderator, click the Show individual studies icon in the toolbar (Figure 13).

Figure 13. Effect sizes by moderator variables.

Analyses for the GOB and General Lab categories do not produce any meaningful data because there is only one study at each of these levels of the discipline moderator. Comparison of the PLTL studies done in general and organic chemistry settings reveals a slightly higher effectiveness of PLTL in organic classes. Click Next table for an analysis that reports a p-value for the between-group differences.
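The between-group comparison behind that p-value can be sketched as a Q-test on the subgroup means. With hypothetical (effect, variance) data for two levels of the discipline moderator and fixed-effect weighting, Q-between is the weighted sum of squared deviations of each subgroup's pooled effect from the grand pooled effect, referred to a chi-square distribution with (number of groups − 1) degrees of freedom:

```python
def pooled(effects, variances):
    # Fixed-effect pooled mean and its total weight.
    ws = [1 / v for v in variances]
    return sum(w * g for w, g in zip(ws, effects)) / sum(ws), sum(ws)

# Hypothetical (g, v) data for two levels of the discipline moderator.
general = ([0.25, 0.35, 0.40], [0.02, 0.03, 0.04])
organic = ([0.45, 0.55], [0.03, 0.05])

all_g = general[0] + organic[0]
all_v = general[1] + organic[1]
grand_mean, _ = pooled(all_g, all_v)

# Q-between: weighted squared deviations of subgroup means from the
# grand mean; compare to chi-square with (groups - 1) df.
q_between = 0.0
for g_list, v_list in (general, organic):
    m, w = pooled(g_list, v_list)
    q_between += w * (m - grand_mean) ** 2
```

A Q-between exceeding the chi-square critical value would indicate that the subgroup effects differ by more than sampling error alone would produce.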

Conclusion

We provided an overview of entering data and performing meta-analyses in the CMA software. While we presented the core procedures, this manuscript is not a comprehensive guide to meta-analysis. For example, we did not cover meta-regression, because we did not have a continuous moderator variable; we also did not show cumulative analysis, due to the nature of our data.


However, the results that we showed illustrate the effectiveness of PLTL. Although it may not be possible to infer the effectiveness of PLTL from a single study with non-significant findings, aggregating the results of the several studies described above increases the power of the analysis. From the results of our meta-analysis, we can see that the overall effect size for the studies that use PLTL is 0.364. Disaggregating effect sizes by discipline, studies done in organic chemistry classes showed higher effectiveness (0.400) than studies done in general chemistry (0.331). There is also substantial variation between studies: the I-squared coefficient (84%) tells us that most of the variation in observed effects reflects variation in true effects rather than sampling error, and the standard deviation of the true effects (T) is 0.198. There are variations in the implementation of the PLTL model, such as peer leader training and the length of the PLTL workshop; both factors can affect the effectiveness of PLTL, and future work can expand on the differences identified between studies. The analysis did not reveal any substantial publication bias; however, this inference is not particularly strong, given the high variation and the small number of studies.

Table 2. Selected Effect Sizes from Hattie's Work (20) and Their Categories

Category | Examples of factors and their influence
Zone of desired effects (d > 0.40) | Providing formative evaluation (0.90), Feedback (0.73), Meta-cognitive strategies (0.69), Problem solving teaching (0.61), Mastery learning (0.58), Concept mapping (0.57), Cooperative learning (0.41)
Teacher effects (0.15 < d < 0.40) | Computer assisted instruction (0.37), Simulations (0.33), Inquiry based teaching (0.31), Homework (0.29), Individualized instruction (0.23), Teaching test taking (0.22)
Developmental effects (d < 0.15) | Gender (0.12), Distance education (0.09)
Reverse effects (d < 0.0) | Summer vacation (−0.09), Television (−0.18)

One of the most important steps of a meta-analysis is putting its results into the context of other studies. Here is one possible interpretation of effect sizes, suggested by Hattie (20): instead of labeling effect sizes as small, medium, or large, Hattie suggested using 0.40 as the hinge point, because it corresponds to the level at which the effects of interventions are noticeable and are above the average


of all possible factors. Hattie also suggested the categorization of effects presented in Table 2. The overall impact of PLTL instruction is generally positive and appears to be greater for upper-division classes such as organic chemistry. PLTL interventions for organic chemistry, as found in this meta-analysis, have an effect size similar to Hattie's aggregate score for cooperative learning.


Other Software Solutions for Meta-Analysis

The Comprehensive Meta-Analysis package is not the only software solution for meta-analysis. Technically, a meta-analysis can even be performed with hand calculations and standard graphing tools. Table 3 lists various software products that can be used for performing meta-analyses.

Table 3. Possible Software Solutions for Meta-Analysis

Software | Description
R | More than 20 packages on CRAN for various stages of meta-analysis; the most common are rmeta, meta, and metafor.
MIX | Add-in for performing meta-analysis with Excel 2007
RevMan | Software package used for preparing Cochrane Reviews
OpenMeta[Analyst] | Open-source software for performing meta-analyses
Stata | Software package for running multilevel models that can be used for meta-analyses; several macros are available for meta-analysis.

References

Note: * indicates those references that were included in the meta-analysis.

1. Wilson, S. B.; Varma-Nelson, P. J. Chem. Educ. 2016, 93, 1686–1702.
2. Akinyele, A. F. Chem. Educ. 2010, 15, 353–360*.
3. Báez-Galib, R.; Colón-Cruz, H.; Wilfredo, R.; Rubin, M. R. J. Chem. Educ. 2005, 82, 1859–1863*.
4. Chan, J. Y. K.; Bauer, C. F. J. Res. Sci. Teach. 2015, 52, 319–346*.
5. Hockings, S. J. Chem. Educ. 2008, 85, 990–996*.
6. Lewis, S. E.; Lewis, J. E. J. Chem. Educ. 2005, 82, 135*.
7. Lewis, S. E.; Lewis, J. E. J. Res. Sci. Teach. 2008, 45, 794–811*.
8. Lewis, S. E. J. Chem. Educ. 2011, 88, 703–707*.
9. Lyon, D. C.; Lagowski, J. J. J. Chem. Educ. 2008, 85, 1571–1576*.
10. McCreary, C. L.; Golde, M. F.; Koeske, R. J. Chem. Educ. 2006, 83, 804–810*.


Downloaded by UNIV OF FLORIDA on December 11, 2017 | http://pubs.acs.org Publication Date (Web): November 20, 2017 | doi: 10.1021/bk-2017-1260.ch008

11. Mitchell, Y. D.; Ippolito, J.; Lewis, S. E. Chem. Educ. Res. Pract. 2012, 13, 378–383*.
12. Rein, K. S.; Brookes, D. T. J. Chem. Educ. 2015, 92, 797–802*.
13. Shields, S. P.; Hogrebe, M. C.; Spees, W. M.; Handlin, L. B.; Noelken, G. P.; Riley, J. M.; Frey, R. F. J. Chem. Educ. 2012, 89, 995–1000*.
14. Steward, B. N.; Amar, F. G.; Bruce, M. R. M. Aust. J. Educ. Chem. 2007, 67, 31–36*.
15. Tenney, A.; Houck, B. J. Math. Sci. Collab. Explor. 2003, 6, 11–20*.
16. Tien, L. T.; Roth, V.; Kampmeier, J. A. J. Res. Sci. Teach. 2002, 39, 606–632*.
17. Wamser, C. C. J. Chem. Educ. 2006, 83, 1562–1566*.
18. Smith, J.; Wilson, S. B.; Banks, J.; Zhu, L.; Varma-Nelson, P. J. Res. Sci. Teach. 2014, 51, 714–740.
19. Borenstein, M.; Hedges, L. V.; Higgins, J. P. T.; Rothstein, H. R. Introduction to Meta-Analysis; Wiley, 2013.
20. Hattie, J. Visible Learning: A Synthesis of over 800 Meta-Analyses Relating to Achievement; Routledge, 2009.
