The First Experience of a Global Clinical Examination at the National University of Cuyo Medical School

Ana María Reta de De Rosas, M.Ed., Celia Bordín, M.D., Ph.D., Norma Carrasco, M.D., Francisco Eduardo Gago, M.D., Ph.D., Carlos Alberto López Vernengo, M.D., Bernardo Odoriz, M.D., Eduardo Reta, M.D., Ana Lía Vargas, M.D. and Maria Jose López, M.Ed.

Faculty of Medicine

National University of Cuyo
Mendoza 5500 ARGENTINA

(+) 054-0261-439-0834
(+) 054-0261-449-4047

ABSTRACT

The new medical program of the National University of Cuyo, started in 1997, requires that students pass a Global Clinical Exam before graduation, in order to determine whether they are able to approach and solve health problems in each of the major medical specialties. This report describes the first experience with this type of exam, held at the Medical School by the Global Clinical Exam Committee.

The exam consists of two parts: a written test, including multiple choice questions (MCQ) and case-based open-ended questions, and an oral exam to assess clinical skills using simulated patients. Students must pass both parts of the exam. The examination was first administered in 2003 to 59 students, all of whom passed the written test, while only two failed the oral exam.

In the written exam, the minimum passing grade was 60 %, set using a traditional approach. The scores on the written exam ranged from 68 to 87, with a mean of 79.1 %. There was significant variation among the means when the scores for each specialty were considered separately. The standard for the OSCE was set using the Modified Angoff method. Students were divided into two groups for the oral exam, which was held on two consecutive days. The average score was 54 % for the first group and 61 % for the second, and the overall scores ranged from 44 % to 69 %. The correlation between scores at equivalent stations was high.

This experience demonstrated that the medical school is able to implement a global clinical exam, despite the complexity involved. Most of the students showed that they had achieved the knowledge, skills and attitudes required for graduation. The statistical analysis of the results guided the Committee on how to increase the validity and reliability of the assessment tools.


INTRODUCTION

In 1997, the National University of Cuyo Medical School introduced a new curriculum, based on problem based learning (PBL) and courses with a high degree of content integration. Since then, upon finishing the three-year basic cycle, students have been required to pass a global knowledge test before advancing to clerkships and the obligatory internship. After a two-year clerkship and one year of internship they must also pass a Global Clinical Exam, as a “sine qua non” condition to graduate and obtain their diplomas as medical doctors.

The global exams at the end of each cycle of the program are new in Argentina and are intended to guarantee the quality of the graduate. They are offered twice a year, and those who fail may take them again on the next scheduled date. These exams are drafted and administered by two “ad hoc” committees -the Global Basic Sciences Exam Committee and the Global Clinical Exam Committee- which meet weekly at the Office of Medical Education throughout the academic year. After four years of experience with the Global Basic Sciences Exam, the Global Clinical Exam was administered for the first time in March 2003.

The overall goal for the Global Clinical Exam is to determine if students are able to approach and solve health problems related to Internal Medicine, Surgery, Pediatrics, Gyneco-Obstetrics and Psychiatry, before graduation.

The specific objectives for this exam are:
To assess -through written clinical problems- the students’ declarative and procedural knowledge primarily related to clinical reasoning (diagnosis, differential diagnosis, request and interpretation of appropriate clinical tests, and therapeutics).

To assess -through structured clinical situations with simulated patients- the students’ clinical skills, such as interviewing, physical examination, clinical reasoning throughout the interview, and communication skills.

The Faculty faced the challenge of preparing and administering this exam, which included setting the standards and cut-off points for pass/fail decisions.

MATERIAL AND METHODS

Fifty-nine students were evaluated for the first time in March 2003. Thirteen of them had started their studies in the old curriculum and switched to the new one. The remaining forty-six had completed all courses in the new curriculum.

To accomplish the specific objectives planned, the Global Clinical Exam consisted of two parts:
A Written Knowledge Exam, which assessed mainly the students’ fund of knowledge and their ability to retrieve and apply that knowledge to clinical cases.
An Objective Structured Clinical Exam (OSCE), which assessed skills and attitudes that cannot be evaluated in a written test (interviewing, physical examination, communication skills, clinical reasoning and patient management).

The students had to pass both exams in order to graduate from medical school.

The written knowledge exam included 100 questions of different types. Seventy-two of them were MCQ -written following Gronlund’s1 instructions- with four options related to a short case scenario, covering the request and interpretation of appropriate clinical tests, diagnosis and therapeutic management. The remaining 28 questions were open-ended, allowing students to elaborate on their answers concerning eight different clinical cases (these 28 questions belonged to Internal Medicine, Pediatrics and Psychiatry).

This written exam was composed as follows: 40 questions for Internal Medicine, 15 for Surgery, 20 for Pediatrics, 20 for Gyneco-Obstetrics and 5 for Psychiatry. Students scored one point for every correct answer; the passing score was 60. Before the exam was administered, the Written Exam Review Committee reported to the Global Clinical Exam Committee on the quality of each question. The reviewers -clerkship professors who knew the students and the concepts involved- considered the Internal Medicine questions too difficult for undergraduates, which raised some concern among the members of the Global Clinical Exam Committee about the expected results. The exam was administered to all students simultaneously.

Mean, standard deviation and Cronbach α (reliability) of the results were calculated before informing students of their scores.
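
For reference, the reliability coefficient used throughout this report is Cronbach’s α, conventionally computed from the item and total-score variances (the report does not state which software was used):

    \alpha = \frac{k}{k-1} \left( 1 - \frac{\sum_{i=1}^{k} \sigma_i^2}{\sigma_X^2} \right)

where k is the number of questions, σ_i² is the variance of the scores on question i, and σ_X² is the variance of the total scores. Questions answered correctly by every student have zero variance and contribute nothing to the sum, a point taken up in the Results.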

The Objective Structured Clinical Exam (OSCE) consisted of a number of stations in which different clinical situations took place. Each station presented a scenario prepared for its case, with a standardized patient to be interviewed or examined according to the case objectives, as described and suggested by Harden & Gleeson,2 Harden,3 Ladyshewsky4 and Troncon et al.5

The main objectives to be assessed were: interviewing skills (two stations); physical examination (two stations); ability to request the appropriate laboratory tests (one station); capability for diagnosis (one station); and decision making concerning patient management (one station). Communication skills were assessed in all seven stations. Each case included a checklist describing the expected behaviors. Two observers at each station/room watched the student’s performance and marked the checklist as the student progressed through the exam. In addition, the observers gave an overall score for the student’s performance. The last two minutes at each station were left for the student to answer two written questions related to the tests, diagnosis or treatment for the case. Equivalent cases were prepared for each day to prevent the flow of information among students from the first to the second day of the exam (they were equivalent in terms of the skill assessed and the level of difficulty, although the number of checklist items, and the number required to pass, differed according to the pathology of each case).
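
As a rough illustration of how the elements just described could be combined into a station score, the sketch below averages the two observers’ checklist marks and adds their overall ratings and the points from the two end-of-station written questions. The checklist length, rating scale and point values are illustrative assumptions, not the Committee’s actual scoring sheet.

    # Illustrative sketch only: the checklist length, the observers' rating
    # scale and the written-question points below are assumptions, not the
    # actual scoring sheet used by the Global Clinical Exam Committee.

    def station_score(obs1_checks, obs2_checks, overall_ratings, written_points):
        """Combine two observers' checklist marks, their overall ratings and
        the points from the two end-of-station written questions."""
        # Average each checklist item across the two observers
        # (1 = behavior observed, 0 = not observed).
        checklist = sum((a + b) / 2 for a, b in zip(obs1_checks, obs2_checks))
        # Average the two observers' overall ratings of the performance.
        overall = sum(overall_ratings) / len(overall_ratings)
        return checklist + overall + written_points

    # Example: a 10-item checklist, overall ratings on a 0-5 scale and
    # 2 points available for the two written questions.
    score = station_score(
        obs1_checks=[1, 1, 0, 1, 1, 1, 0, 1, 1, 1],
        obs2_checks=[1, 1, 1, 1, 1, 1, 0, 1, 0, 1],
        overall_ratings=[4, 3],
        written_points=1.5,
    )
    print(round(score, 2))  # 13.0 with the example values above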

The standard for the OSCE was set following what Cusimano6 called a “combination method” (within the “continuum models”). More precisely, the standards for all skills except communication were set using the Modified Angoff approach recommended by Friedman Ben-David7, a mix of the Angoff and borderline procedures that Kaufman et al.8 found reasonably fair, valid, accurate and defensible. The OSCE Review Committee had determined the minimum essential standards required to successfully pass each station. The pass/fail point for each day was the sum of the minimum essential scores for each station (stated by the Committee), plus the communication skills score set as a compensatory standard7, plus the score required on the two questions at the end of every station. The cut-off score for the pass/fail decision was 33 out of 69 for the first day of the exam and 32 out of 64 for the second day.
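
A minimal sketch of how such a daily cut-off could be assembled is shown below. The per-station minima, the communication standard and the written-question requirement are placeholder numbers chosen only so that they total the reported day-one cut-off of 33; the actual breakdown set by the OSCE Review Committee is not published in this report.

    # Placeholder figures only: the true per-station minima, communication
    # standard and written-question requirement set by the OSCE Review
    # Committee are not given in the report; these values merely total 33,
    # the reported day-one cut-off.

    # Minimum essential checklist score judged necessary to pass each of the
    # seven stations (Modified Angoff judgments), day one.
    station_minima = [4, 3, 4, 3, 4, 3, 2]

    # Compensatory standard for communication skills, assessed in all stations.
    communication_standard = 5

    # Score required on the two written questions asked at the end of stations.
    written_question_standard = 5

    # Daily pass/fail cut-off = station minima + compensatory communication
    # standard + written-question requirement (cf. 33/69 on day one, 32/64 on
    # day two).
    cutoff = sum(station_minima) + communication_standard + written_question_standard
    print(cutoff)  # 33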

Mean, standard deviation and Cronbach α (reliability) of the results were calculated before students received their scores.

RESULTS

As shown in Table 1, 100 % of the students passed the written test. The mean was quite acceptable (79.1 %) and there was a relatively narrow range of scores. However, the reliability of the test was low (Cronbach α = 0.44) and nine questions had a variance of 0 (they were answered correctly by all students).

Means and standard deviations varied among specialties; they are shown in Table 2, along with the differences between the two kinds of questions in Internal Medicine and Pediatrics, which were not significant.

Results of the Objective Structured Clinical Exam are shown in Table 3. Two students did not pass this exam and, therefore, did not pass the Global Clinical Exam as a whole. The means for both days were low. There was a narrow distribution of scores, since the ranges from maximum to minimum on the two days were 20.65 % and 18.92 %.

DISCUSSION

For the written exam, the mean score was 79.1 %, higher than expected by both the Review Committee and the Global Clinical Exam Committee. When the means for each specialty are considered (Table 2), the second lowest was that for the Internal Medicine questions, but it was still high despite their evaluation as “too difficult” by the Review Committee. This could lead to the conclusion that both the Global Clinical Exam Committee and the Review Committee undervalued the students’ knowledge and their capability to apply it, especially in Surgery. Future qualitative research on the reasons for the lower achievement in Psychiatry should be carried out, although the small number of questions -compared to the other disciplines- might itself explain the lower score, since each missed question had a proportionally greater impact.

The absence of significant differences between the multiple choice scores and the open-ended question scores was surprising, since the Committee had considered the MCQ easier to answer because the correct answer appears among the options. This finding will probably lead the Committee to take a practical decision and make all the questions multiple choice, since they are easier to score.

The range of 19 points between maximum and minimum scores (Table 1) shows homogeneous achievement among students, which suggests a positive evaluation of the new curriculum. However, it is important to remember that the group of students assessed by this exam is not the entire cohort that entered the medical program in 1997 (120 students), but rather the first 46 to complete the program.

Some of the data call attention to issues related to the exam’s design: a) Nine questions had a variance of 0, since they were answered correctly by all the students. Should this kind of question be avoided in the future, or should such questions be kept in the exam, since they probably address the most important, most emphasized or best taught concepts? This is a question the Global Clinical Exam Committee has not yet answered. b) A Cronbach α (reliability coefficient) of 0.44 is too low to speak of a reliable instrument. On this topic, the Global Clinical Exam Committee consulted an expert in educational assessment. His opinion was that, because this is a global exam on knowledge required for graduation, the data obtained should be treated as a “criterion-referenced assessment” rather than as a “norm-referenced assessment”, for which statistical analyses such as Cronbach α are intended. His conclusion was that, in this case, it is desirable that all the students who have completed an adequate learning process achieve the goals and pass the exam that assesses them. However, the Office of Medical Education remains undecided on this issue.

Regarding the OSCE’s results (Table 3), the means for both days were low -54 % and 61 %, respectively- and much lower than the written exam mean. These results serve as feedback on the curriculum, suggesting the need for more frequent exposure to patient care and better feedback on students’ performance. This global exam has already demonstrated to the Faculty that the medical students’ clinical skills must be developed during the program, under the strict supervision and formative assessment of physician educators.

As in the written examination, the ranges from maximum to minimum for the two days of the oral exam -20.65 % and 18.92 %, respectively- showed a narrow distribution of scores, indicating similar achievement by all students. This is also a positive indicator for the new curriculum, since the clinical performance of the weakest students was not far from that of the best.

Though knowledge is very important to the physician approaching a patient’s problem, knowledge by itself does not ensure the development of adequate clinical skills. Much work remains to improve the effectiveness of the program in providing quality medical education and to ensure acceptable student performance in clinical skills upon graduation. Moreover, this Medical School will have to define which professional competences -beyond knowledge and clinical skills- are goals to be reached during the program and assessed in the Global Clinical Exam.

The correlation between cases in series A and B (Table 4) indicates that the equivalent cases were well constructed, keeping the variable “clinical situation” constant. The correlation between the morning and afternoon scores was close to 1, which indicates that the long period of time and the weariness of observers and standardized patients did not significantly alter the assessment conditions.

CONCLUSIONS

This experience allows the Faculty to state that, though it is a very complex process, it is able to develop and implement a final Global Clinical Examination. In this first experience, the great majority of the students demonstrated the appropriate knowledge, skills and attitudes required for graduation.

From now on it will be necessary to maintain a consistent level of difficulty across the knowledge areas of the written exam. It will also be necessary to increase the number of cases used in the OSCE, to better represent the variety of situations, knowledge and skills that students must demonstrate for graduation. The stations that assess diagnosis and treatment must also be analyzed; those skills might be better assessed by written questions at the end of the stations evaluating anamnesis, physical examination and interpretation of laboratory tests. Additionally, it will be necessary to improve the standard-setting process by increasing the number of judges on the OSCE Review Committee.

It would be useful to have an explicit description of the competences required at the end of the medical program to refer to when designing the evaluation system of the entire program, not just the Global Clinical Exam.

REFERENCES

  1. Gronlund, N.E. Assessment of student achievement. Boston, Allyn and Bacon. 1998; 230.
  2. Harden, R.M., and Gleeson, F.A. Assessment of Medical Competence using an Objective Structured Clinical Examination (OSCE). Edinburgh, ASME Medical Education. 1979; Booklet No 8.
  3. Harden, R. Twelve tips for organizing an Objective Structured Clinical Examination (OSCE). Medical Teacher. 1990; 12(3-4):259-264.
  4. Ladyshewsky, R. Simulated patients and assessment. Medical Teacher. 1999; 21(3):266-269.
  5. Troncon, L.E.A., Dantas, R.O., Figueiredo, J.F.C., Ferriolli, E., Moriguti, J.C., Martinelli, A.L.C., and Voltarelli, J.C. A standardized, structured long-case examination of clinical competence of senior medical students. Medical Teacher. 2000; 22(4):380-385.
  6. Cusimano, M.D. Standard setting in Medical Education. Academic Medicine. 1996; 71(10 supplement):S112-S120.
  7. Friedman Ben-David, M. Standard setting in student assessment. Guide No 18. Medical Teacher. 2000; 22(2): 120-130.
  8. Kaufman, D.M., Mann, K.V., Muijtjens, A.M.M., and van der Vleuten, C.P.M. A comparison of standard-setting procedures for an OSCE in undergraduate medical education. Academic Medicine. 2000; 75(3): 267-271.

NOTE: Please refer to the complete PDF file for the referenced tables and figures.