In the Classroom

Project Longhorn: A Pilot Project in the Use of Batch Computing in High School Chemistry Teaching

Sally Young Busboom
Anderson High School, Austin, TX 78759

The ambiance of the high school should make it an ideal setting for teaching chemistry, or any science: small classes, a potentially flexible schedule for the interaction of didactics and practical work, highly motivated students (chemistry often is not a required subject), and eager young learners who are characteristically interested in almost everything. Yet the reality of high school chemistry teaching is markedly different, basically because few planners recognize the impact that modern technology can make in keeping the tried-and-true elements of instruction viable in logistically difficult educational environments. A number of projects outlining successful applications of technology have appeared in this Journal (1, 2).

We address here the issue of reinvigorating the time-honored use of "homework" in chemistry instruction. For a number of understandable logistic reasons this useful and time-proven pedagogic tool has fallen on hard times. Basically, chemistry teachers deal with ever larger classes, and often with more of them, because chemistry is perceived as the fundamental basis for other areas of study that interest young people, for example, biotechnology-oriented subjects. Described here is the use of batch-oriented computing as the basis for reintroducing homework problems as a teaching tool in beginning chemistry instruction. The batch system described below was created by the Department of Chemistry and Biochemistry at The University of Texas at Austin and has been in use in that department for nearly 15 years.

The University of Texas at Austin Delivery System

The UT system produces, delivers, grades, and keeps records for homework assignments, quizzes, and examinations. It grades these instructional elements using mark-sense technology, and it keeps detailed records of students' work. The system can produce any number of equivalent homework assignments, quizzes, or examinations on demand, a useful pedagogical adjunct for teachers who want to make certain that students do their own work for subsequent evaluation.

The key element in the system is the catalog of questions developed by UT faculty over a 15-year period. The questions are of several types, including the usual multiple-choice format as well as numeric questions. Below is an example of a numeric-type question:

    What volume of chlorine, Cl₂(g), is necessary to prepare 45.09 liters of hydrogen chloride, HCl, if an excess of hydrogen, H₂, is used? All gases are measured at 100 °C and 1 atmosphere pressure.

There are several interesting and useful aspects of numeric questions. First, the question as stored in the computer contains variables (indicated by bold type in the printed example); these variables, which may be numbers or symbols, are automatically randomized each time the question is used.
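For concreteness, the answer to the question as printed follows directly from Avogadro's principle: at the same temperature and pressure, gas volumes stand in the same ratio as moles, and the synthesis reaction yields two volumes of HCl for each volume of Cl₂. The system's stored solution algorithm is not shown in the article; the following is simply a worked check of the printed values.

```latex
% Worked check of the printed values (the stored algorithm itself is not shown):
\begin{align*}
  \mathrm{H_2(g)} + \mathrm{Cl_2(g)} &\longrightarrow 2\,\mathrm{HCl(g)} \\
  V_{\mathrm{Cl_2}} &= 45.09\ \mathrm{L\ HCl} \times
     \frac{1\ \mathrm{volume\ Cl_2}}{2\ \mathrm{volumes\ HCl}}
     \approx 22.55\ \mathrm{L}
\end{align*}
```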

The numeric question files also contain the algorithm for the solution to each question. Thus, the catalog of numeric questions represents not only the questions that reside within the system but also all possible answers to those questions. The questions in the catalog are organized according to the subjects commonly found in the typical entry-level chemistry textbook.

The chemistry teacher selects the questions desired for the homework, quiz, or examination; he or she (or a surrogate) then interacts with the computer system, which produces the requisite number of student items (homework, quizzes, or examinations), all different if required. These instructional elements are printed out, each individual student paper carrying a unique identification number. This unique number provides the key to reconstructing the student assignment when the assignment is graded; in other words, the program does not keep records of student assignments for grading purposes. The items on the individual student assignments can be scrambled in a number of ways: the placement of the individual items can be scrambled, the possible responses for multiple-choice questions can be scrambled, and the variables within numeric questions can be randomized. The "scrambling process" is also generated by the system and is encoded by the unique identification number.

The instructor passes out the student assignments (in no particular order) together with a specially designed mark-sense sheet (see Fig. 1). One side of the sheet (Fig. 1, top) is used for multiple-choice questions; the other side (Fig. 1, bottom), which is unique, allows the student to encode a numeric answer. The answer sheet also incorporates an area (Fig. 1, top) where the computer-generated unique identifier printed on the student assignment can be encoded. Thus, the mark-sense sheet carries all the information necessary to link the identity of the student (and his or her responses) to the computer-generated student assignment.

The last link in the system, but an important one, is the method the teacher uses to communicate with the system. (Recall that the catalog is a representation of the pedagogically useful, chemistry-oriented material.) After the teacher selects the questions to be printed in a student assignment, someone (the teacher or a surrogate) interacts with the system to produce as many individual assignments as required.
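The article does not describe how the UT system was implemented; the sketch below is purely illustrative, showing how a unique paper number can serve as the seed that both scrambles an assignment and regenerates it (with its answer key) at grading time, so that no per-student record of the assignment needs to be stored. The catalog layout, function names, and both sample questions are invented for this sketch.

```python
import random

# Illustrative sketch only: the unique paper number seeds a pseudo-random
# generator, so question order, multiple-choice response order, and numeric
# variable values can all be regenerated on demand from that one number.

CATALOG = {
    101: {"type": "numeric",
          "text": "What volume of Cl2(g) is needed to prepare {v} L of HCl "
                  "with excess H2 (both gases at the same T and P)?",
          "range": (20.0, 60.0),                 # randomization range for v
          "solve": lambda v: v / 2.0},           # stored solution algorithm
    102: {"type": "choice",
          "text": "Which of these is a noble gas?",
          "choices": ["Ar", "Na", "Cl2", "Fe"]},  # first entry is correct
}

def generate(paper_id, question_ids):
    """Regenerate one student's paper and its answer key from the paper's
    unique identification number."""
    rng = random.Random(paper_id)     # the unique number is the only "record"
    order = list(question_ids)
    rng.shuffle(order)                # scramble placement of the items
    paper, key = [], []
    for qid in order:
        q = CATALOG[qid]
        if q["type"] == "numeric":
            v = round(rng.uniform(*q["range"]), 2)  # randomize the variable
            paper.append(q["text"].format(v=v))
            key.append(q["solve"](v))
        else:
            choices = q["choices"][:]
            rng.shuffle(choices)      # scramble the possible responses
            paper.append(q["text"] + "  " + " / ".join(choices))
            key.append(choices.index(q["choices"][0]))  # where the answer landed
    return paper, key

# Grading re-runs generate() with the paper number bubbled on the mark-sense
# sheet and compares the regenerated key with the student's responses.
paper, key = generate(paper_id=57123, question_ids=[101, 102])
```

Because the same seed always reproduces the same scramble, an answer key arranged by the unique number of each assignment (as described later) is obtained by simply re-running the generator over the issued paper numbers.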


The Anderson High School Experience: Project Longhorn

It became obvious that the logistic problems in the Department of Chemistry and Biochemistry at The University of Texas at Austin were the same as those I faced at Anderson High School; instead of one large class (~300 students) at UT, I had several smaller classes with a relatively large total number of students. Discussions with the general chemistry coordinator at UT (Denis Kohl, hereafter DAK) suggested that it might be possible to use the system with my chemistry classes. Thus, we created Project Longhorn. Of course, not all the questions in the UT database were appropriate, but that did not matter, because I could pick and choose those that were; also, the delivery system is sufficiently flexible to allow new questions to be added easily and rapidly.

Since Project Longhorn was designed to be a pilot program, we decided that the communication between the exercise generator (DAK at UT) and the classroom (SYB at Anderson) would be as simple as possible. Instead of attempting to set up an electronic communication link, we used the "sneaker network": DAK shuttled between UT and Anderson High School at the beginning or end of the workday with the appropriate material, either the identity of the database items to be used in producing the individual student assignments or the pile(s) of individual student assignments themselves.

Typically, I selected three sets of questions from the hard copy of the database that was made available to me, choosing all of the questions that I thought were appropriate for the level of chemistry being taught.

The selected questions were randomly divided (by me) into two homework sets and one test set. The selected and distributed items were provided to DAK, who then produced the individualized student activities (tests or homework) together with an answer key arranged by the unique number of each assignment.

The chemistry concepts associated with the questions were introduced in a variety of ways, including lectures, demonstrations, labs, questioning techniques, and reading assignments. One set of homework questions was distributed and was due the next day. The following day the scan sheets were collected (and given to DAK), the questions were discussed, and reteaching was done as needed. Another set of questions was then given to the students, due the next day; sometimes an extra day was allowed if there were a large number of problems or if the students were on a tight schedule at school. By the time the second set of homework was collected, the scan sheets for the first set had been returned (by DAK), and more discussion, reteaching, extension of topics, and clarification was done. The test was given on the next available test day. (There are two days during the week on which chemistry tests can be given.) When tests were administered, the students turned in both the copy of the test and the mark-sense sheet. Tests were kept until all students had been tested on that concept.

Students were able to come by before school, after school, or at lunch to check their answers against the posted answer key for the homework sets and tests. Students were always eager to check their answers after turning in a homework set. This was an easy operation, since the students kept their homework papers and turned in only the corresponding mark-sense sheet. They could verify the answers they had marked by finding the unique number of their paper on the posted answer key. This process provided immediate feedback to students.

The two semesters during which Project Longhorn operated followed this pattern of two homework sets and then a test within a 5- to 7-day time frame for most of the year. Each homework set and each test was unique for each student. Some papers had the same questions, but the questions were randomly arranged and the responses were randomized for each question. Numeric questions were used as well as multiple-choice questions; the numeric questions were unique for each student, with a different set of values for each question. Students had to bubble in numeric answers in scientific notation.

Some of my students commented that it was hardly worth the time to call a friend to compare answers on homework, because it was very time consuming to find the same question on both papers (if it existed) and it was impossible to compare answers for the numeric questions. (That observation is an interesting commentary on the value students put on their time; for some, it took less time to do the work than to figure out how to copy someone else's.) Since each paper was unique, the random arrangement of questions and answers eliminated cheating during a test as well as on make-up tests.
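The article does not specify the field layout of the numeric side of the mark-sense sheet (Fig. 1, bottom), so the following sketch assumes a sign/three-digit-mantissa/exponent format purely for illustration of what "bubbling in a numeric answer in scientific notation" involves.

```python
# Hypothetical sketch: split a numeric answer into the sign, mantissa
# digits, and exponent a student might bubble in scientific notation.
# The actual layout of the UT sheet is not described in the article.
def to_bubbles(value: float, digits: int = 3) -> tuple[str, str, int]:
    sign = "-" if value < 0 else "+"
    mantissa, _, exponent = f"{abs(value):.{digits - 1}e}".partition("e")
    return sign, mantissa.replace(".", ""), int(exponent)

print(to_bubbles(22.5))   # ('+', '225', 1), i.e. +2.25 x 10^1
```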

Figure 1. Mark-sense answer sheet: top, multiple choice; bottom, numeric answers.


Results and Perceived Benefits of Project Longhorn

For this pilot project it was difficult to establish a rigorous evaluation of the system, for example, by setting up experimental and control groups. However, interesting and useful anecdotal observations are available.

There were many benefits for me as a teacher with Project Longhorn. I selected the questions, which was probably the most time-consuming task in the whole project! I submitted the number of each selected question, obtained from the hard-copy catalog, and the next day I received the requested number of homework assignments or tests. Each set was printed on legal-sized paper, so there was ample room for students to do their calculations at the side; the questions were printed "sideways", so that the 8½-inch side was the left margin of the paper. Each set was stapled; each set was unique. I also received an answer key that contained the unique number of each paper and the unique set of answers for that paper. The answer key could even have been posted before the test, since the students did not know which unique number they would receive.

The papers were given to the students in no particular order, along with a blank mark-sense sheet. When a student keyed his or her name and the unique number of the assignment into the mark-sense sheet, that assignment and its answers were linked to that student. The mark-sense sheets were collected after the assignment and submitted to The University for grading by means of a scanner at the Department of Chemistry and Biochemistry. The sheets were returned the next day along with a printout of the students' scores and an analysis of how many students missed a particular question and which answers were chosen most frequently for each question. For numeric questions, the student answer and the correct answer were both recorded. This information gave me insight into which concepts or questions needed to be clarified or retaught.
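A minimal sketch of the kind of per-question report described above, assuming each sheet's responses have already been unscrambled back to catalog order; the data layout here is invented, not UT's actual format.

```python
from collections import Counter

# Item analysis: per question, how many students missed it and which
# responses were chosen most often.
responses = {                       # student -> {question id: bubbled answer}
    "s01": {101: "B", 102: "A"},
    "s02": {101: "C", 102: "A"},
    "s03": {101: "B", 102: "D"},
}
key = {101: "B", 102: "A"}          # correct answer per question

for qid, correct in key.items():
    chosen = Counter(sheet[qid] for sheet in responses.values())
    missed = sum(n for ans, n in chosen.items() if ans != correct)
    print(f"Q{qid}: missed by {missed}; most frequent: {chosen.most_common(2)}")
```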
Since the questions in the database were submitted by many college professors, my students had the benefit of being challenged by experts. Students learn to anticipate questions from a teacher, but having this level of question available added an extra dimension to the homework and tests. Students could no longer simply anticipate my questions; now they had to truly know the concepts about which they were questioned. In effect, I had an external examination board available for my use, on demand. It would have been impossible for me to generate continuously questions of that level, or in that number, for my students, not only because of my own lack of depth in some of the concepts, but mostly because of the time involved in making up three sets of questions every week and grading and returning them in a timely manner.

When the students were first introduced to the idea of taking tests and doing homework from the UT chemistry database, I was cautious, so as not to scare them. I assured them that I knew they could do well on the homework and tests. The students were very grade conscious and certainly did not want to jeopardize their grades by participating in some project that would be impossibly difficult. In the end, students found that they could do the work.

Occasionally I chose problems or questions that took more insight than I had originally thought. This became a wonderful opportunity to expand on a topic; also, I had the option of eliminating any question that I might later deem inappropriate for testing. In that event, the answer sheets were regraded with the "offending" question removed, and a new scoring printout was generated.

Students sometimes wanted to challenge the computer grading system; for this, "complaint sheets" were provided to help them focus on the nature of the complaint. This process allowed me to pull the student's mark-sense sheet, call the student in for a conference if necessary, and work out the problem. Invariably it was a problem the student had with a concept, and not a "computer error".

Interestingly, one obstacle to overcome was the inability of students to bubble in their social security number or school number consistently, though it did not take long for them to realize that they had a responsibility to do so. Thanks to DAK, the grading program was modified to take some kinds of student errors into account; the computer produced a "best guess" as to whose paper was missing or incorrectly marked, which I could then verify by hand.
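The article does not say how the grading program's "best guess" worked; one plausible sketch (names and data hypothetical) is to match a misbubbled identification number against the class roster by counting digit mismatches.

```python
# Hypothetical sketch of a "best guess" for a misbubbled ID: pick the roster
# entry with the fewest digit mismatches. The actual modification made to
# the UT grading program is not described in the article.
def best_guess(bubbled: str, roster: list[str]) -> str:
    def mismatches(a: str, b: str) -> int:
        return sum(x != y for x, y in zip(a, b)) + abs(len(a) - len(b))
    return min(roster, key=lambda sid: mismatches(bubbled, sid))

# A student bubbled one digit wrong; the nearest roster entry is returned
# for the teacher to verify by hand.
print(best_guess("457812390", ["123456789", "457812399", "998877665"]))
# -> 457812399
```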
When it came time for the final exam, I decided to give 50 of my traditional final exam questions along with 50 from the UT database. Scores for the two groups of questions showed that some students performed better on my part of the exam and some performed better on the UT questions. I modified my plans and gave them the better of the two scores, not the average, as I had originally intended.

Some students really liked Project Longhorn and some did not. I did not receive any complaints from parents about the system, so I concluded that the students did not find it any more difficult than my traditional way of teaching and testing. The project was funded for only a year, and I had subsequent students asking when we were going to take the tests from UT. I took that to be a good sign, because these students had heard from their peers that Project Longhorn was a good, although challenging, adventure.

The system, for me, was very easy. My task was to select the questions; the capability of submitting my own questions into the UT system was also available. The tests and homework papers were generated, answer sheets were scanned for grading, and statistics for each student and for each question were provided without any effort on my part. All I had to do was think about and plan what I wanted to accomplish; the system did all the "busy work". At the end of the semester, a computer grade book was available for each class. Tests were weighted differently from the homework, and I was also able to submit lab grades into the grade book, which were weighted and included in each student's final average.

Teaching is just great and I love doing it; however, I do not like to get bogged down with producing materials and grading them. Project Longhorn eliminated the burden of grading homework and tests, and the time saved from that onerous task was reinvested in class preparation. I was able to spend more time preparing demonstrations, lectures, lessons, and activities, and even more time talking and interacting with students before and after school. In this age of technology, this type of database and grading process should be available to all teachers.

Project Longhorn, the computer-based operation, gave me more time in which to prepare for my classes; but more importantly, it immeasurably enriched the quality of education my students received. The project not only provided students with enrichment in the quality of questions used to challenge them, but also allowed them to develop confidence that they could be successful in chemistry at the university level. The system was used in both Chemistry I and Chemistry II.

Readers interested in further details may contact D. A. Kohl at: Department of Chemistry and Biochemistry, The University of Texas at Austin, Austin, TX 78712.


Acknowledgments

The considerable assistance of D. A. Kohl is gratefully acknowledged. His diligence in running the "sneaker network", scanning the mark-sense sheets, and producing the requisite output was a major factor in the success of Project Longhorn. Dr. Kohl has continued to mentor both teachers and students at Anderson High School long past the termination point of Project Longhorn. His willingness to answer teacher and student queries, frequently via email, is greatly appreciated.


Literature Cited

1. Castleberry, S. J.; Lagowski, J. J. J. Chem. Educ. 1970, 47, 91; Rodewald, L. B.; Culp, G. H.; Lagowski, J. J. J. Chem. Educ. 1970, 47, 134; Castleberry, S. J.; Culp, G. H.; Lagowski, J. J. J. Chem. Educ. 1973, 50, 469.
2. Smith, S. G. J. Chem. Educ. 1970, 47, 608; Smith, S. G. J. Chem. Educ. 1971, 48, 727; Smith, S. G.; Ghesquiere, J. R.; Avner, R. A. J. Chem. Educ. 1974, 51, 243.
