Using Problem-based Learning Evaluations to Improve Facilitator Performance and Student Learning

Scott A. Cottrell, Ed.D., Mary Wimmer, Ph.D., Barry T. Linger, Ed.D., James M. Shumway, Ph.D. and Elizabeth A. Jones, Ph.D.

West Virginia University
Morgantown, WV 26505 U.S.A.

(+)1- 304-293-0410
(+)1-304-293-4973

ABSTRACT

This paper will report the relationship between course and faculty evaluations for a problem-based learning (PBL) experience in a medical school curriculum. Identifying relationships between students' reflections about the problem-based learning experience and how well facilitators guided the group (e.g., helped identify key learning issues) can answer fundamental questions about the potential of PBL to advance essential skills and knowledge. In 45 PBL groups across the 2001 and 2003 academic years, students completed a facilitator and a PBL course evaluation. The facilitator evaluation included nine questions. Each question used a five-point scale ranging from Poor (1), Fair (2), Somewhat good (3), and Good (4) to Excellent (5). The PBL course evaluation included nine questions on a standard five-point scale ranging from Not at All (1), Slightly (2), Somewhat (3), and Mostly (4) to Completely (5). Two statistical analyses were conducted to address the research questions. First, a factor analysis was used to explore the organization of underlying factors in the facilitator and course evaluations. Factor analysis can provide evidence of construct validity for both instructional and learning dimensions. Using each factor as a variable, factor scores (the mean of the items in each factor) for the facilitator evaluations were used to predict factor scores yielded from the course evaluation. A regression analysis explored the potential for facilitator performance scores (independent) to predict student observations about their own learning (dependent). An analysis of the questions reveals reasonable interpretations of the two factors (Collaboration and Independent Learning Skills). The results revealed significant relationships between the facilitator scale score and both scale scores for the course evaluations. Overall, these results suggest that the facilitator evaluation reveals a global indication of facilitator performance. Targeting the quality of specific skills, then, may require additional assessment strategies, such as having trained raters evaluate facilitator performance. An analysis of the course evaluation also reveals that students distinguish self-directed learning skills from collaboration skills. The connection between these factors suggests that facilitator performance, although limited, does impact the extent of students' learning and development. Failing to recognize the importance of appropriate facilitation skills may ultimately compromise the learning environment.


INTRODUCTION

Some researchers observe that teaching evaluations can be valid and reliable instruments.1 Analyses of the variability of evaluations, however, suggest that they are not all created equal.2 Despite some concerns with evaluations, educators rely on them to explore the potential of problem-based learning (PBL) to impact student learning and to judge facilitator performance. The purpose of this paper is to report how course and facilitator evaluations for a PBL course were developed, and to investigate the components of the evaluations. An analysis of the evaluation data will also address the extent to which facilitator performance influences student learning. Exploring these issues will help identify both the limitations and benefits of using evaluation data to make decisions about PBL courses.

PBL was first developed at McMaster University Medical School in Canada. Several other medical schools have since adopted PBL into their curricula, such as Maastricht (the Netherlands), Newcastle (Australia), and the University of Hawaii. PBL can be designed using several different strategies, which may be peculiar to an institution's requirements, objectives, and resources.3 While it may be implemented differently, PBL in medical education generally is designed to challenge a student with a complex problem, which, like healthcare delivery, may not lend itself to a clear course of action or immediate answers.4 PBL drives students to work cooperatively to evaluate information and solve problems, activities that aim to develop critical thinking, communication, and team-building skills. As faculty help guide and facilitate the PBL group, each student must explore and coalesce new information, bridging the gap between the parameters of the classroom and clinical practice.

The responsibilities for both students and faculty in the PBL process are different from traditional learning methods.5 Students must do more than sit, listen, and take notes. They must assume a more active role to explore information to understand a problem, develop potential diagnoses, and create tenable treatment options. These activities help integrate clinical and basic science material. Students also discuss concepts, question ambiguity, and forge their own opinions, activities that further a sense of commitment to learning. Faculty must avoid lecturing content material or dispensing critical information, which stymies students' self-directed learning and development. Assuming the role of facilitator and tutor, PBL faculty are to refrain from answering questions. Students must find and reflect on information to target the problem, requiring little or no formal instruction from faculty.

Researchers have examined whether PBL helps students develop knowledge and essential life-skills, such as self-directed learning. Some studies, for example, have contrasted students’ acquisition of content knowledge in traditional programs with PBL learning approaches. Albanese and Mitchell6 reported that in some PBL courses, students did not acquire as much content knowledge when compared with students engaged with lectures, as evidenced by performance on multiple-choice exams. In a similar study, Vernon and Blake7 found that students’ performance on factual knowledge assessments did not favor traditional instruction.

While there may be some disparity between the overall retention of students' knowledge in a PBL curriculum and in traditional curricula, requiring students to spend time acquiring, integrating, and evaluating information has several advantages that traditional learning strategies do not capture.8 As Barrows3 noted, "The irony is that few formal assessment procedures can distinguish problem-based learning from conventional curriculum students because such procedures are generally insensitive to the cognitive and behavioral differences that are observed in PBL."

Some research has used assessment methods other than traditional tests to examine whether PBL has an impact on students' higher-order skills. For example, Blumberg and Deveau9 used surveys that reported significant differences in students' attitudes and behaviors. After completing a PBL course, students reported that PBL helped them to develop communication skills, examine issues that were not specifically addressed, and foster self-directed learning skills. In another study, students in McMaster's program rated themselves as better prepared than students taught through traditional methods at implementing independent learning, problem solving, self-evaluation, and data-gathering skills.10

MATERIALS AND METHODS

At the West Virginia University School of Medicine, multidisciplinary faculty designed a PBL learning experience to augment an interdisciplinary basic-science course, Human Function. It is a yearlong course that combines the disciplines of biochemistry, human genetics, and human physiology. The PBL course includes one facilitator and eight students per group. The course is divided into two 15-week semesters. After the first 15 weeks, students are placed into a new PBL learning group with a different facilitator and different students. All students, then, have the benefit of two facilitators each year and of working with different peers in each component. The aims of the PBL sessions are:
• to integrate information across the various disciplines of basic and clinical sciences
• to narrow the gap between basic and clinical sciences by using clinical cases to illustrate basic science principles
• to enhance students' acquisition, retention and use of knowledge
• to enhance students' self-directed learning skills
• to develop students' communication and interpersonal skills
• to increase students' level of intrinsic interest in the subject matter
Each 15-week component includes five cases. Each case is divided into three parts. First, students are confronted with a complex problem. A packet of information describes the patient's chief complaints, psychosocial history, physical symptoms, and particular lab results. Students are asked to share and explore hypotheses about the patient's condition. Students also identify key learning issues, or questions about the material. The learning issues, which drive students' self-directed learning skills, are researched before the next PBL meeting.

Second, students discuss the collected information that addresses the learning issues. As students cooperatively share information, PBL aims to develop critical thinking abilities, communication, and team-building skills. Additional information about the patient is given, yielding more learning issues for the third, and final, PBL component. The last component begins with the presentation and discussion of the learning issues. Addressing the learning issues helps students refine their hypotheses about the patient's presenting problem, eventually leading to a course of action and a full discussion about the implications of the medical condition.

Several authors have offered recommendations about how tutors should conduct a PBL group.11 Using these recommendations as a guide, the West Virginia Medical School outlined the facilitator's responsibilities for the PBL learning experience, which were addressed in a one-day training session for all facilitators. The problem-based learning facilitators will:
1. Avoid lecturing and offering information that students could retrieve for themselves.
2. Assist the group to work cooperatively.
3. Guide the group by asking questions.
4. Aid students with the identification of appropriate learning issues or questions.
5. Aid students with identifying gaps in knowledge that need to be addressed.
6. Help the group develop learning issues that integrate the basic and clinical sciences.

In order to ascertain whether the PBL learning experience was meeting its aims and whether the faculty were displaying appropriate facilitation skills, course and facilitator evaluations were implemented after each 15-week PBL course between the 2001 and 2003 academic years. Questions were phrased to address specific learning outcomes and facilitator behaviors. The focus of these evaluations, then, was to answer fundamental questions about what students were expected to learn, how they were taught, and what skills they advanced.

This research study was guided by two research questions:
1. What is the underlying structure of the facilitator and course evaluations?
2. Is there a relationship between PBL facilitator evaluations and student reflections about learning?

At the end of each semester, students were asked to complete two evaluations: a PBL course evaluation and a facilitator evaluation. Student responses were anonymous, and the evaluation results were not given to faculty until the semester was completed. Approximately 12 PBL groups completed evaluations each semester. The evaluations were collected from each group and anonymously labeled PBL group one, PBL group two, and so on. Faculty names were not revealed with the analyzed data. Participants included PBL learning groups between the academic years of 2001 and 2003, yielding a total of 45 groups. A total of 28 facilitators facilitated the 45 groups. Approximately six to eight students completed each PBL evaluation and facilitator evaluation after each 15-week component of PBL.

In 45 PBL groups across the 2001 and 2003 academic years, students completed the facilitator and the PBL course evaluation. The facilitator evaluation included nine questions. Each question used a five-point scale ranging from Poor (1), Fair (2), Somewhat good (3), and Good (4) to Excellent (5). The PBL course evaluation included nine questions on a standard five-point scale ranging from Not at All (1), Slightly (2), Somewhat (3), and Mostly (4) to Completely (5).

Two statistical analyses were conducted to address the research questions. First, a factor analysis was used to explore the organization of underlying factors in the facilitator and course evaluations. Factor analysis can provide evidence of construct validity for both instructional and learning dimensions. Gall, Borg, and Gall12 defined construct validity as: "The extent to which inferences from the test's scores adequately represent the content or conceptual domain that the test is claimed to measure" (p. 756). One criticism of using factor analysis to explore the dimensions of an instrument is that the choice of method may determine the factor solution. For example, analyzing data without rotation does not minimize the number of variables that load highly on any given factor, whereas varimax rotation is likely to do so. Therefore, analyzing the data with factor analysis should be supported with a theoretical foundation.

The theoretical foundation of facilitator and course evaluations, however, is mixed. Some research, for example, suggests that evaluations of faculty performance tend to load heavily on one factor that is indicative of a global construct. Other research suggests that multiple factors indicate that students can distinguish between dimensions, such as instructional skills and course organization skills.2 For the purposes of this study, an exploratory approach was implemented, which can be characterized as a theory-generating approach. While care was taken to craft questions that target specific facilitation skills and learning outcomes, there is no existing evidence to suggest that particular questions should load onto distinct factors. This is particularly true for the facilitator evaluation developed here, which did not include some domains identified in the literature, such as course design issues.

A principal component analysis with varimax rotation was used to maximize the potential that questions would align with a particular factor; varimax is a common rotation option for exploratory analysis. The factors revealed, as well as the pattern of factor loadings, will suggest hypothetical or explanatory constructs. An analysis of the latent variables will ideally yield plausible labels that distinguish one factor from another.
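For readers who wish to reproduce this kind of analysis, the following is a minimal Python sketch of a principal component extraction with varimax rotation. It is not the original analysis code; the DataFrame of responses, the simulated ratings, and the choice of two components are illustrative assumptions only.

import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    # Kaiser varimax rotation of a loading matrix (standard SVD formulation).
    p, k = loadings.shape
    rotation = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3 - (gamma / p) * rotated
                          @ np.diag((rotated ** 2).sum(axis=0))))
        rotation = u @ vt
        d_new = s.sum()
        if d_new < d * (1 + tol):
            break
        d = d_new
    return loadings @ rotation

# Illustrative stand-in for the course-evaluation responses:
# one row per completed evaluation, one column per question (rated 1-5).
rng = np.random.default_rng(0)
responses = pd.DataFrame(rng.integers(1, 6, size=(300, 9)),
                         columns=[f"q{i}" for i in range(1, 10)])

# Standardize the items, extract two principal components, and rotate.
z = StandardScaler().fit_transform(responses)
pca = PCA(n_components=2).fit(z)
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
rotated_loadings = varimax(loadings)

print("Proportion of variance explained:", pca.explained_variance_ratio_)
print(pd.DataFrame(rotated_loadings, index=responses.columns,
                   columns=["Factor 1", "Factor 2"]).round(3))

Items loading above a chosen cutoff (e.g., .40) on a rotated factor would then be grouped and interpreted conceptually, in the manner reported in Tables 1 and 2.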

If a reasonable relationship between questions in each factor can be defined, a linear regression will be conducted. Using each factor as a variable, factor scores (the mean of the items in each factor) for the facilitator evaluations will be used to predict factor scores yielded from the course evaluation. A regression analysis will explore the potential for facilitator performance scores (independent) to predict student observations about their own learning (dependent).
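As a companion sketch, the regression step might look like the following in Python. The scale scores are simulated purely for illustration; only the procedure (a simple linear regression of each course-evaluation factor score on the facilitator scale score) mirrors the analysis described above.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 300  # illustrative number of completed evaluation pairs

# Hypothetical scale scores on the 1-5 rating metric
# (each score is the mean of the items in a factor).
facilitator = rng.uniform(3.0, 5.0, size=n)
collaboration = np.clip(0.3 * facilitator + rng.normal(3.0, 0.5, size=n), 1, 5)
independent_learning = np.clip(0.2 * facilitator + rng.normal(3.2, 0.5, size=n), 1, 5)

# Regress each course-evaluation scale score on the facilitator scale score.
for label, outcome in [("Collaboration Skills", collaboration),
                       ("Independent Learning Skills", independent_learning)]:
    fit = stats.linregress(facilitator, outcome)
    print(f"{label}: slope={fit.slope:.2f}, p={fit.pvalue:.4f}, "
          f"R^2={fit.rvalue ** 2:.3f}")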

RESULTS

The results for the facilitator evaluation revealed a single factor that explained approximately 55% of the variance. All questions loaded at least .537 on the factor; a single-factor solution allows for no rotation (Table 1).

These results suggest all nine items represent a global construct: general facilitation skills. A one-factor solution may indicate a halo effect, which suggests that students cannot distinguish between facilitator skills. That is, if a student rates a facilitator highly on one particular skill, then the other skills are probably also rated highly. This result is also consistent with information-processing models of performance ratings.2 This model posits that students have general impressions of facilitator performance. Semantically similar questions cue the students to retrieve these general impressions to make judgments about specific skills, which reveals little or no difference between skills.

These results also suggest that the questions targeted a singular dimension: facilitator skills. Questions that addressed issues such as course organization, preparation and appropriateness of material, and facilitator’s knowledge of the material were not posed. The one factor solution, then, is a reasonable alignment with the original intention of the evaluation.

The results for the PBL course evaluation revealed a two-factor solution. The principal-components solution revealed that three of the nine items grouped onto the first factor (Collaboration Skills), which explained 31% of the variance. The second factor (Independent Learning Skills) included three items that explained 23% of the variance. Together, the two factors explained a cumulative 54% of the variance (Table 2).

An analysis of the questions reveals reasonable interpretations of the two factors. The first factor, Collaboration Skills, captures several skills that characterize interaction and cooperation between students. The three questions focus on developing communication skills, teamwork skills, and problem-solving skills, which emphasize group interaction to address the problem and explore solutions.

The second factor, Independent Learning Skills, includes questions that focus on the attributes of active and independent learning. Students are expected to be engaged in the learning process, use multiple sources of information, and assume a self-directed role in considering all aspects of a case.

The next step of this analysis is to investigate whether facilitator performance is related to student learning. Questions for each factor were collapsed into a mean, revealing three scale scores. Considering that the factor analysis revealed reasonable interpretations of the latent variables, this analysis explored whether the facilitator scale score predicts the two scale scores for the course evaluation. The purpose of the linear regression is to treat the factors as variables and ascertain a possible relationship between the factor scores. The results revealed significant relationships between the facilitator scale score and both scale scores for the course evaluations (Table 3).

These findings suggest that facilitator performance, as a global factor, is related to factors distinguishing student learning. The relationship between facilitator performance and the collaboration/independent learning skills is an intuitive one. For example, facilitators are responsible for engaging students, aiding with the development of learning issues, and encouraging students to use multiple sources of information. Students' general impression of facilitator performance, then, is appropriately aligned with the learned skills.

Still, the results suggest that the relationship between facilitator performance and student learning is limited. The R2 value represents the total amount of dependent score (Collaboration and Independent Learning Skills) variance that can be explained by the independent or predictor variable (Facilitation Skills). In this analysis, only 3% of the variance in Collaboration Skills and 2% of the variance in Independent Learning Skills can be explained by facilitator performance. Therefore, while facilitator performance is a significant predictor, roughly 97% to 98% of the variance remains unexplained or attributable to other factors.
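To make this interpretation explicit, the argument rests on the standard definition of the coefficient of determination; the identity below is a textbook formula rather than anything specific to this study:

R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}

so an R2 of .03 for Collaboration Skills implies that 1 - R2 = .97, or 97%, of the variance in that scale score is left unexplained by the facilitator rating.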

DISCUSSION

The results of this study reveal several implications. First, the interpretation of the factor analysis for the facilitator evaluations suggests evidence for construct validity. The factor analysis technique is a strategy to indicate the extent to which variables relate to an underlying factor. It is up to the researcher to define the factors conceptually.12 Construct validity, as a function of the scores, suggests that the test is measuring what it purports to measure. The single factor for the facilitator evaluation is consistent with the original design, which focused on observable performance that students could reliably judge. Issues such as course organization skills and quality of learning materials were not addressed. Defined in this way, the construct of general facilitation skills is a reasonable interpretation of the data.

The results, however, suggest there are limitations with using facilitator evaluations to inform facilitator training. Because the factor analysis reveals little distinction between facilitator skills, it is difficult to use the results to extrapolate suggestions for facilitator development. For example, a facilitator may acknowledge a need to improve her ability to set appropriate learning issues. However, if the facilitator receives a high score on any item, then the facilitator is also likely to receive high scores on all the other items, including setting appropriate learning issues. The evaluation data is therefore unlikely to confirm or refute whether a facilitator should address specific skills.

Second, the distinction between the course evaluation factors also presents a tenable argument for construct validity. An analysis of the questions reveals an appropriate inference that the factors are conceptually different and definable. That is, Collaboration Skills are conceptually distinct from Independent Learning Skills. This evidence also suggests that students are able to distinguish these skills, and reflect on gains in learning relative to the two factors.

Third, the linear regression results suggest that facilitator performance can predict some of the variance in student learning. In addition, this relationship is distinct for learning related to issues such as self-directed learning and the interaction of students. This evidence can be used to reinforce the importance of facilitator skills, such as posing questions, exhibiting enthusiasm, and defining quality learning issues. As reflective educators, facilitators can be reminded that these skills are necessary to achieve the aims of PBL.

Still, the regression analyses highlight the need to move beyond facilitator performance and explore other variables that may impact student learning in PBL. Researchers may examine, for example, how learning preferences, PBL materials, and grading strategies may inhibit or excite learning in PBL. Understanding these influences will help maximize student learning and development, and ultimately answer fundamental questions about PBL and its potential to further essential skills.

CONCLUSIONS

Overall, these results suggest that the facilitator evaluation reveals a global indication of facilitator performance. Targeting the quality of specific skills, then, may require additional assessment strategies, such as having trained raters evaluate facilitator performance. An analysis of the course evaluation also reveals that students distinguish self-directed learning skills from collaboration skills. The connection between these factors suggests that facilitator performance, although limited, does impact the extent of students' learning and development. Failing to recognize the importance of appropriate facilitation skills may ultimately compromise the learning environment.

REFERENCES

  1. Cashin, W.E. Student ratings of teaching: the research revisited. Center for Faculty Evaluation and Development, Division of Continuing Education, Kansas State University. 1995; IDEA Paper No. 32.
  2. D’Appollonia, S. and Abrami, P.C. Navigating student ratings of instruction. American Psychologist. 1997; 52(11):1198-1208.
  3. Barrows, H.S. Problem-based learning in medicine and beyond: a brief overview. New Directions for Teaching and Learning. 1996;68:3-11.
  4. Lloyd-Jones, G., Margetson, M., and Bligh, J.B. Problem-based learning: a coat of many colours. Medical Education. 1998;32:492-494.
  5. Mierson, S. and Parikh, A.A. Stories from the field: problem-based learning from a teacher’s and a student’s perspective. Change. 2000;32(1):21-27.
  6. Albanese, M.A. and Mitchell, S. Problem-based learning: a review of literature on its outcomes and implementation issues. Academic Medicine. 1993;62(2):52-81.
  7. Vernon, D.T. and Blake, R.L. Does Problem-based learning work? a meta-analysis of evaluative research. Academic Medicine. 1993;68(7):550-63.
  8. Norman, G.R. and Schmidt, H.G. Effects of conventional and problem-based medical curricula on problem solving. Academic Medicine. 1986;66(7):380-89.
  9. Blumberg, P. and Deveau, E.J. Using a practical program evaluation model to chart the outcomes of an educational initiative: problem-based learning. Medical Teacher. 1995;17:205-214.
  10. Woodward, C.A. and Ferrier, B.M. The content of medical curriculum at McMaster University: graduates' evaluations of their preparation for post-graduate training. Medical Education. 1983;17:54-60.
  11. Rothman, A. and Page, G. Problem-based learning. In: Norman, G.R., van der Vleuten, C.P.M., and Newble, D.I., eds. International Handbook of Research in Medical Education. Dordrecht: Kluwer Academic Publishers; 2003. p. 613-644.
  12. Gall, M.D., Borg, W.R., and Gall, J.P. Educational Research, 6th ed. White Plains, New York: Longman; 1996. p. 447-450.
