A Report of the Committee on Teaching: The Use of Student Evaluations

At the 1973 Spring meeting of the American Chemical Society, this Committee sponsored a symposium on the evaluation of instructors and courses by means of student questionnaires. Subsequently, the Journal of Chemical Education published some of the papers from this symposium. The subject has aroused a great many queries about how the data collected from student questionnaires are used. This report is written in an attempt to answer these queries.

We have polled some of our colleagues about practices at their respective institutions. Some of the information was gathered by telephone, but most of it was reported on a mail questionnaire. Of the 100 questionnaires mailed, 77 were returned with usable information. Information from an additional 23 institutions was collected by telephone. The departments were generally selected on the basis that student evaluation questionnaires were being used. Efforts were made to provide a reasonable distribution among types of institutions and geographic location. The distribution by type was as follows:

[Distribution table not recoverable from the source.]

No two-year colleges were included.

In considering the responses to the questions, one should take into account the limitations of data collected in this manner. For example, the answer to the question, "What part do student evaluations play in merit raise allocations?" depends on the experience and interpretation of the individual answering it. If he has served on an evaluation committee, he himself may have put heavy emphasis on student evaluations, whereas one of his colleagues serving in the same capacity may have discounted student evaluations completely. Some respondents answered this question, and other questions, "unknown" or "information not available." In addition, an element of judgment necessarily played a part in compiling the results. A systematic classification of the responses called for a limited number of categories, and placing a response in one category or another called for judgment as to where it belonged.

Ascertaining or measuring almost any aspect of student evaluation is an elusive and frustrating endeavor, and trying to determine the use to which student evaluations are put proved no exception. Institutions vary widely in the degree of seriousness attached to student evaluations. There are evidently some institutions where faculty members are virtually ranked on the basis of student evaluations and their professional futures depend on this ranking; the number of institutions in this category is relatively low. At the other end of the spectrum are institutions where student evaluations are voluntary and the results are used only as the professor sees fit. This wide divergence in the uses of student evaluations probably results from wide differences of opinion as to their validity. In spite of the limitations, the writer feels that these data provide a reasonable indication of the use of student evaluations across the country.

[Table fragments only: "Chairman," "Administration," 70, 14; the full table could not be reconstructed.]
Although the questi...