A Review from Medical Science Educator by Dr. John Szarek

This month's IAMSE Publications Committee review is taken from the article titled Factors That Determine the Perceived Effectiveness of Peer Feedback in Collaborative Learning: A Mixed Methods Design, published online in Medical Science Educator (19 May 2020) by Daou, D., Sabra, R. & Zgheib, N.K.

Bill Gates said, “We all need people who will give us feedback. That’s how we improve.” Yet many times when our learners hear the word “feedback,” their first thought is, “Oh no, here it comes!” What drives this disposition of learners toward feedback? In their Medical Science Educator article, Dayane Daou, Ramzi Sabra, and Nathalie Zgheib used a mixed-methods approach to identify factors important to the quality and effectiveness of peer feedback among medical students.

In their study, the authors explored the volume and quality of the written feedback given to students as part of the periodic peer assessment in Team-Based Learning (TBL), and whether these changed with time; the factors that determine the volume and quality; and the learners’ perceptions of the benefits of peer feedback. Preclinical medical students were randomly assigned to 5-6 member TBL teams three times over two years. At the end of each session, students completed an anonymous peer evaluation form which included two open-ended questions asking about (1) the single most valuable contribution the person makes to the team and (2) the single most important way the person could more effectively help the team. Students had two opportunities to learn about the value of peer evaluation and how to give effective feedback. The quantitative component included the number of comments each student received and scores on the quality of the comments (0-3) based on a rubric. A thematic analysis of comments obtained during focus group sessions with students constituted the qualitative component of the study.

In general, the number and quality of comments were low. Between 49% and 96% of students received comments, with the number of comments per student ranging from 1.27 to 1.90. The top three areas of focus for students’ comments were personality traits, participation, and cognitive abilities, which together accounted for about 70% of the comments. The mean quality rating ranged between 1.24 and 1.86. The focus groups revealed several reasons for the low number and quality of comments, but two stood out. First, students perceived the comments not as formative feedback but as evaluative, to be used for judgmental purposes. Second, students were disinclined to provide constructive feedback to their peers so as not to disrupt their social relationships. Although students had a negative disposition toward feedback, they still perceived the process as very beneficial for personal development. The authors conclude that the success of peer evaluation lies in establishing a safe environment in which students feel they can be forthright, along with extensive training and periodic reinforcement.

Many of us utilize peer evaluation as a requirement in the curriculum, positioning it as an episodic event to be completed to satisfy a course or program requirement. This article causes us to rethink how we can encourage learners to relish feedback as a means of becoming the best healthcare professionals they can be.

John L. Szarek, BPharm, PhD, CHSE
Professor and Director of Clinical Pharmacology
Education Director for Simulation
Geisinger Commonwealth School of Medicine
Member IAMSE Publications Committee