Organizer: Murray Saffran, PhD

Reporter: Judith Saffran, PhD
Authors of this Report: Dev Sangvai and Murray Saffran

What do medical students think of the Student-to-Faculty feedback in their schools?

The most frequently used indicator of course and teaching quality is student-to-faculty feedback. Six students from five medical schools presented their opinions on the process and value of the feedback systems used at their respective schools. The student participants were:

Mary Jo Lechowicz and Barbara Sherman, SUNY Health Science Center at Syracuse NY; James Parker, Mayo Medical School, Rochester MN; Chris Raispis, Medical University of South Carolina, Charleston SC; Walter Rush, Dartmouth University School of Medicine, Hanover NH; and Devdutta Sangvai, Medical College of Ohio, Toledo OH.

In most schools, student feedback takes the form of end-of-course questionnaires designed and distributed by faculty. The rate of return of the questionnaires is seldom good, although measures to improve returns, such as distributing the questionnaires with the final examinations, are frequently used. The questionnaires usually request ratings on a 5-point scale, from strongly disagree (1) to strongly agree (5), of statements about the course and its teachers. There is also room on the form for free-form comments.

At Dartmouth, unlike most other medical schools, the students design the questionnaire and organize its distribution and analysis. The results are then given to the professors. The response rate was poor but improved when the questionnaires were distributed immediately after the final examination. A standardized questionnaire is used so that courses can be compared with one another. As an experiment, an evaluation form with only three questions was distributed to the class very early in the course, and students were asked to complete the short form after every lecture. The evaluations also had space for free comments. The completed evaluations were given to the faculty at the end of the course.

In a course on Ethics and Medicine in Society at SUNY Syracuse, students not only rated the course but also participated in planning the course and in designing the evaluation instrument. The course took the form of whole-class sessions and small-group meetings of 12 students with two faculty members, one a basic scientist and one a clinician. The curriculum committee met with the course director every two weeks to evaluate progress and to modify the course if necessary, and changes in the course have been made based on feedback from the students. Other courses at SUNY Syracuse are contemplating similar evaluation programs, but the two students stressed that the system is valuable only if the course is responsive to student feedback.

At the Mayo Medical School, a new course in Pathophysiology is evaluated by questionnaires distributed to the entire class at the time of the final examination. A committee of six students gathers the information and writes a consensus statement on the course to give to the faculty. The exercise contributes to student satisfaction with the course because the faculty take the student feedback seriously.

At the Medical University of South Carolina, every course has its own end-of-course questionnaire. Rating scales are used, along with room for free comments, although few free comments are written. Again, the students want evidence that the faculty are responsive to student feedback.

At the Medical College of Ohio at Toledo, evaluation forms are not used. Instead, groups of 8-12 students are assigned to write narrative reports on the courses and teachers they encounter in a week of the basic science curriculum. The reports are composed by the entire group and are handed in at the end of the week for distribution to the course directors. The frequent feedback makes mid-course corrections possible, which cannot occur with end-of-course evaluations. Moreover, the MCO reports are entirely narrative, thereby providing more detailed feedback than a number on a 5-point scale.

All student participants in the symposium agreed that faculty response to student suggestions is essential for the maintenance of confidence in the value of the reporting system. Some evidence of change in response to student suggestions provides the best incentive for student participation in the reporting process.

The audience of 46 faculty and education administrators participated in a lively exchange with the presenting students and seemed to profit from the information the students provided about the value of feedback. Despite the perceived imperfections of the various forms of feedback, the participants and audience agreed that feedback is essential to maintaining quality teaching in medical schools. The audience voiced its approval of the opportunity to hear student perceptions of what is generally a faculty-run evaluation system.