101 – Assessment of Non-Cognitive Skills in Medical Students During Collaborative Group Testing
Adrienne Z. Ables, Alexis M. Stoner, Heidi Lujan, Stephen E. diCarlo, and Robert A. Augustyniak
Edward Via College of Osteopathic Medicine – Carolinas Campus and Michigan State University
PURPOSE: The primary form of assessment used by most American medical schools is the multiple-choice format: one question, five answers, and only one correct answer. However, assessing in this manner creates an illusion of right and wrong, ignoring the fluid nature of information and the fact that knowledge is complex, evolving, and can be improved upon over time. Furthermore, many students believe that the test defines them, such that their intelligence is determined by their grade; this erodes motivation and sets up a competitive environment that disconnects students from one another. Finally, this testing format assesses only content knowledge, while neglecting many crucial non-cognitive skills such as critical thinking, communication, compassion, and teamwork. Collaborative group testing (CGT) is a form of assessment in which students work together and facilitate each other’s learning by asking questions and guiding peers toward the development of their own understanding of the question and answers. We hypothesized that CGT could also be used to effectively assess non-cognitive skills, increase motivation, and decrease competition.
METHODS: The VCOM-Carolinas Class of 2020 (n=159 students) was invited to participate in this study. There are three components to the design: 1) video recording of CGT sessions, 2) faculty viewing and assessment of non-cognitive skills during CGT, and 3) evaluation of student perceptions of motivation and competition both at the beginning and end of the course.
RESULTS: To date, all data have been collected, analysis has begun, and results will be presented at the annual IAMSE meeting upon abstract acceptance.
CONCLUSIONS: Early results indicate that students viewed CGT quite favorably, suggesting that it may increase motivation. Assessment of non-cognitive skills during CGT appears feasible and important but time consuming.
Poster Award Nominee:
102 – Identifying High Performing Students in Inpatient Settings
Anne Zinski, L. N. Herrera, R. B. Khodadadi, E. O. Schmit, and Carlos Estrada
University of Alabama at Birmingham, University of Alabama School of Medicine, Baylor College of Medicine, and Mayo Clinic
PURPOSE: In contrast to the pre-clinical phase, which is assessed largely through examinations of knowledge, the clinical training phase of medical school includes summative assessments with examinations and numerous performance components. Despite the array of available norm- and criterion-based tools to guide the process of medical student assessment, standards for identifying the highest-performing (“honors”) students in inpatient settings during clinical training remain imprecise. For this reason, our goal was to investigate behaviors that internal medicine and pediatrics faculty consider when assigning grades to top-performing students on inpatient ward services.
METHODS: We utilized written comments from a survey of internal medicine and pediatrics attending physicians to examine behaviors they consider for determining honors grades in the inpatient setting at a large Southern United States teaching hospital. A team of four trained researchers independently reviewed free-text comments and identified codes and subsequent themes using conventional content analysis.
RESULTS: Overall, 79 of the 141 attending physicians surveyed (56% response rate) provided a total of 90 text comments. Most respondents were internal medicine physicians (n=54), while one third were pediatrics attending physicians (n=25). Main findings included the identification of four major themes of student behavior that faculty considered when assigning grades of honors: taking ownership of patient care, applying knowledge via clinical reasoning, working as a team member, and demonstrating self-awareness and willingness to grow as a learner.
CONCLUSIONS: We found several common themes that internal medicine and pediatrics attending physicians use to identify top-performing medical students during clinical rotations. It is not surprising that displays of individual knowledge, clinical reasoning, and teamwork were common. However, additional investigation is warranted, as this cohort also reported that medical student behaviors exhibiting reflective awareness of growth and progress were a key scoring consideration for top performers in inpatient settings.
103 – Correlation between faculty and fellow assessment of internal medicine subspecialty milestones in an OSCE
Arundathi Jayatilleke, Alfred Denio, Chris Derk, and Irene Tan
Drexel University College of Medicine, Geisinger Medical Center, University of Pennsylvania School of Medicine, and Temple University School of Medicine
PURPOSE: The Pennsylvania Rheumatology Objective Structured Clinical Examination (PROSCE) is an annual assessment of communication skills in first- and second-year rheumatology fellows. We revised our assessment tools to align with Accreditation Council for Graduate Medical Education (ACGME) internal medicine subspecialty milestones. The objective of this study is to evaluate the utility of ACGME competency assessment by comparing it with trainee self-assessment as well as faculty narrative feedback and checklists.
METHODS: We evaluated 16 fellows from 8 rheumatology programs using 6 standardized patient (SP) scenarios. Fellows were observed by faculty raters during SP interaction and completed self-assessments afterwards. We recorded faculty and fellow feedback using previously developed 9-point Likert scale instruments. Items were mapped to milestones within the ACGME competencies of medical knowledge, interpersonal and communication skills, patient care, and professionalism. Faculty assessments included checklists and requests for narrative feedback.
RESULTS: Six faculty and six fellow feedback forms on six SP cases were analyzed, yielding over 250 data points across four competency domains. There was good correlation between mean faculty and fellow ratings in each competency domain: in medical knowledge, 6.3±1.9 out of 9 for faculty vs. 6.7±1.5 for fellows; in patient care, 6.8±0.4 for faculty vs. 6.2±1.7 for fellows; in interpersonal and communication skills, 7.0±1.0 for faculty vs. 6.6±1.6 for fellows; and in professionalism, 7.1±1.2 for faculty vs. 7.5±0.8 for fellows. These values correspond to a milestone level of 3–4, at the threshold of readiness for unsupervised practice. Checklists on knowledge and information gathering did not correlate with competency levels. Narrative feedback pertained to use of jargon, clarity of explanation, and hand-washing.
CONCLUSIONS: Faculty and fellow assessment of milestone achievement was similar in each competency domain evaluated but deviated from checklist evaluations. Future research on achievement of each rheumatology milestone would help determine which evaluation format is most appropriate in each domain.
Poster Award Nominee:
104 – THE AFFECTIVE AND COGNITIVE EMPATHY PROFILE OF AN OSTEOPATHIC GRADUATING CLASS OF 2017
Bruce W. Newton and Zachary T. Vaskalis
Campbell University School of Osteopathic Medicine
PURPOSE: To determine if significant changes occur in affective and cognitive empathy scores during the education of an osteopathic graduating class of 2017.
METHODS: The CUSOM class of 2017 voluntarily took the Balanced Emotional Empathy Scale (BEES) and the Jefferson Scale of Physician Empathy (JSPE) surveys five times (n = 122/157). Students indicated their sex and specialty choice. Specialties were broken into “Core” and “Non-Core” groups, with Family Medicine, Internal Medicine, Ob/Gyn, Pediatrics and Psychiatry representing Core specialties. The other 18 specialties were Non-Core, e.g., Radiology, Emergency Medicine.
RESULTS: From orientation until graduation, BEES scores for Core men and women dropped 18.0% and 3.6%, respectively; Non-Core men and women dropped 42.1% and 21.8%, respectively. The largest BEES score decrease occurred after finishing the first clinical year (M3), with Core men dropping 30.1% and Non-Core women 19.8%. JSPE scores dropped 1.3% for Core men but increased 2.4% for Core women; Non-Core men and women dropped 0.9% and 4.4%, respectively. The largest drops in JSPE scores occurred after finishing the first year for Non-Core women (3.9%) and after finishing the M3 year for Non-Core men (6.5%). Overall, there was no significant decrease in BEES or JSPE scores for Core or Non-Core women. The only significant decrease was for the Core men’s M3 BEES score compared with their senior BEES score.
CONCLUSIONS: The class of 2017’s empathy scores remained largely stable during their osteopathic education, although, except for Non-Core women’s JSPE scores, there was a trend for all scores to decline. These data differ from allopathic data showing dramatic declines in BEES scores after completion of the first and third years (Acad. Med. 83:244-249, 2008). Maintaining affective and cognitive empathy suggests these osteopathic students may be better able to establish an empathic bond of trust with patients than the previously studied allopathic students.
105 – TOWARDS COMPETENCY-BASED EDUCATION: A DEVELOPMENTALLY APPROPRIATE CLINICAL SKILLS ASSESSMENT AND FEEDBACK PROGRAM FOR THIRD-YEAR MEDICAL STUDENTS
Diana Callender, Mary Triano, Mario Cornacchione, Anthony Gillott, Thomas Martin, Kathryn Powell, Margrit Shoemaker, Brian Wilcox, and William Iobst
Geisinger Commonwealth School of Medicine
PURPOSE: In longitudinal integrated clerkships (LICs), the longitudinal relationship between learners and preceptors allows preceptors to evaluate learners’ progression towards entrustment. At Geisinger Commonwealth School of Medicine (GCSoM), students participate in a hybrid LIC/block curriculum in the third year. Because GCSoM has four regional campuses and varied learning venues, we instituted three Objective Structured Clinical Examinations (OSCEs), a formative session in February and two summative exams in April and June, to help standardize outcomes.
METHODS: In 2016, we changed our clinical skills assessment format with the goal of improving student outcomes and stimulating self-directed learning. Formative OSCEs were given in October and February, and a single summative OSCE was given in May. Students were assessed in the domains of communication skills, history taking, physical examination, and clinical reasoning. Additionally, a criterion-based assessment replaced the previous norm-based assessment. Students received a grade on the formative OSCEs to help them gauge their progress; they viewed their videos, met with faculty, shared their self-assessments, and received feedback. Students generated individualized learning plans (ILPs) and were encouraged to share them with preceptors.
RESULTS: Students were able to monitor their progress and, with deliberate practice, improve their performance across the year. Anecdotally, the relationship faculty developed with students as they discussed their self-assessments and gave feedback supported student growth towards entrustment. While only 2/100 students earned a cumulative failing score, clerkship directors were concerned about the performance of 14 other students who did not meet the criteria in one or more domains.
CONCLUSION: Despite the improvement that faculty saw in student performance, some students performed below the set criteria in several domains. As a result, for the 2017-18 academic year, clerkship directors required that students not only earn a cumulative score equal to or greater than 70% but also meet that criterion in all domains.
Poster Award Nominee:
106 – Assessment of Medical Students’ Critical Reasoning in Year One
Cathy B. Wilcox
Geisinger Commonwealth School of Medicine
PURPOSE: Case-Based Learning has been shown to increase student satisfaction with learning, and there is some evidence that it improves critical reasoning skills; however, research in this area is limited. We developed a critical reasoning exercise that increases in complexity and difficulty as students develop. To test gains in critical reasoning, the final and most complex exercise (post) was also administered as a pre-test. Pre- and post-test student responses were compared to determine changes in critical reasoning during the first year of medical school.
METHODS: A four-step, developmentally incremental critical reasoning exercise was developed for first-year medical students, based on work by Silvia Mamede et al. Students read a clinical vignette, identify key features, group features into categories, state two hypotheses about the underlying cause or problem, and generate follow-up questions to differentiate between the hypotheses. Pre-test and post-test responses were de-identified, randomized, and graded. After grading, pre-post pairs were identified and changes in critical reasoning ability were determined.
RESULTS: First-year medical students’ critical reasoning ability, as measured by their ability to successfully complete the critical reasoning exercise, improved during the academic year, illustrating the students’ acquisition of reasoning skills.
CONCLUSION: Most incoming students do not have the skills needed to identify key features in a clinical vignette, group them, state two hypotheses about the underlying cause and generate follow-up questions. However, over the first year, they can develop this ability. The increase in their competence in these areas demonstrates improved critical reasoning appropriate to their level of education.
107 – Factors Impacting Performance on High Stakes Exams
Elizabeth McClain, Rance McClain, Teresa Camp Rogers, Sherry Smith, Joshua Courtney, and Mike Lam
William Carey University College of Osteopathic Medicine and TrueLearn
PURPOSE: Medical education has driven multiple innovations in curriculum; however, high-stakes standardized board examinations have remained universally accepted measures of minimal competency. This study explored behaviors associated with at-risk academic performance, focusing on approaches to question-bank usage in relation to medical licensing examination performance.
METHODS: A non-experimental retrospective design was used to investigate multiple metrics (Questions, Mode, Accuracy, and Timing) of TrueLearn’s COMBANK Level I question bank, representing study behaviors across three medical student cohorts at a single osteopathic medical school. Academic performance outcome measures included GPA, quartile rank, and COMLEX Level I score. Descriptive statistics, correlations, ANOVA, and regression were completed to explore associations between question-bank behaviors and performance.
RESULTS: 282 students (n=93; n=98; n=91) in cohorts 2017-19 were assessed. Mean question-bank usage was 2461 questions (SD=1221), mean GPA was 84.38 (SD=3.5), and mean COMLEX score was 511.42 (SD=97.47). Bottom-quartile performers demonstrated lower means for Questions (Total-Q, Unique-Q), Mode (Timed, Tutor), and Time (average time per question). ANOVAs demonstrated significant differences in accuracy for Early Timed and Late Timed questions, Total Unique Score, and Total Score. Significant large positive correlations were identified between COMLEX, GPA, and these variables (Early Timed, Late Timed, Total Unique Score, and Total Score). A regression model using Early/Late Timed, Unique Score, and Total Score accounted for a significant proportion of variance in COMLEX scores (R²=.403, F(4, 273)=46.12, p<.001).
CONCLUSION: It is vital to develop interventions to increase academic performance, including medical board preparation behaviors and study approaches. Bottom-quartile students completed fewer timed questions, both early and late, and had lower accuracy scores for the total number of questions and the total number of unique questions compared with the other quartile groups. These behaviors were also associated with academic performance (GPA and COMLEX Level I). Early intervention may improve academic outcomes for at-risk students.
108 – Active Learning Assessment of Histology Knowledge
Jennifer A Fischer, Christina Jenkins, and Bruce Palmer
Rowan School of Osteopathic Medicine and West Virginia University School of Medicine
PURPOSE: Rowan School of Osteopathic Medicine has implemented an innovative active learning assessment strategy requiring medical students to independently evaluate a tissue sample and identify histological structures. This exercise is intended to allow students to participate in an assessment different than typical multiple choice exams. This abstract describes the assessment, its delivery and how the information is being used.
METHODS: Rowan School of Osteopathic Medicine has used a histology scavenger hunt as a unique assignment that requires students to interpret information and receive individual feedback. A student survey from three separate cohort years is being used to gain insight into student perceptions. Data analysis is also being performed on the online assessment to examine the average time to complete the assessment, whether it was completed in one sitting, answer order, and which structures were commonly identified correctly versus incorrectly.
RESULTS: Preliminary survey results show that students primarily used lecture notes, lab handouts, and the textbook to accomplish the activity; at least 40% of surveyed students went back to review the slide and material, and at least 19% used the information to study for the final cumulative exam. In addition, data are being reviewed to report on student behaviors. The presentation will describe how the analysis is used to inform content delivery, examining which structural elements the cohort learned well and which may require curricular changes to enhance learning on histological topics that proved difficult.
CONCLUSIONS: RowanSOM student feedback to date has been positive about the activity; the online submission version has provided more data to inform content delivery, and we have learned that the timing of the assessment can impact student perception. Overall, this type of activity was well received and allowed students to interact with content material in an active way.
109 – STUDENT PERFORMANCE ON MULTIPLE CHOICE EXAM QUESTIONS ASSOCIATED WITH DIFFERENT LEARNING ACTIVITIES IN UNDERGRADUATE MEDICAL CURRICULUM
Joseph Fontes, Shawnalee Criss, Anthony Paolo, and Emma Nguyen
University of Kansas School of Medicine
PURPOSE: A previous study conducted at our institution revealed that students were concerned that learning activities commonly labeled “active learning” would be less efficient than traditional lecture in imparting information. As a consequence, students expressed anxiety that their performance on exam questions related to non-lecture activities would suffer [Walling, A et al., Teach Learn Med. 2017 Apr-Jun;29(2):173-180]. We implemented a major curricular revision at the University of Kansas School of Medicine in AY2017 (the “ACE” curriculum) which extensively employed three types of non-lecture learning activities: Problem Based Learning (PBL), Flipped Classroom (FLIP), and Case-Based Cooperative Learning (CBCL). The purpose of the present study was to determine if student performance was lower on NBME-style, multiple choice summative exam questions related to each of these activities, in comparison to questions associated with material delivered by lecture.
METHODS: The percent of the class who answered summative exam questions correctly, for each type of learning activity, was compared using dependent t-tests and Cohen’s d effect sizes.
RESULTS: Our analysis revealed three pairwise comparisons indicating small differences in summative exam question performance. Students correctly answered questions associated with PBL (88.5%), CBCL (86.5%) and lecture (85.8%) at a slightly higher rate than those questions associated with FLIP (83.0%).
CONCLUSIONS: Despite student concerns that non-lecture, “active learning” activities would result in poorer performance on exams, we found little evidence of this in the first year of the ACE curriculum. While performance on exam questions associated with FLIP sessions was nominally lower than on those associated with other activities, factors beyond the effectiveness of this pedagogical approach (e.g., faculty inexperience, lack of student engagement) may be contributing.
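The comparison described in this abstract, dependent (paired) t-tests with Cohen’s d, can be sketched as follows. This is an illustrative sketch only, not the authors’ code: the percent-correct values below are invented, and the Cohen’s d variant shown (mean difference divided by the SD of the differences) is one common choice for paired designs.

```python
# Hedged sketch of a paired comparison of per-question percent-correct
# rates between two learning activities. All data are invented.
import math
from statistics import mean, stdev

def paired_t(a, b):
    """Paired t statistic: mean difference / standard error of differences."""
    diffs = [x - y for x, y in zip(a, b)]
    return mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))

def paired_cohens_d(a, b):
    """Cohen's d for paired samples: mean difference / SD of differences."""
    diffs = [x - y for x, y in zip(a, b)]
    return mean(diffs) / stdev(diffs)

# Hypothetical percent-correct values for matched question sets
lecture = [85.8, 84.0, 87.1, 86.2, 85.0]
flip    = [83.0, 82.5, 84.1, 83.8, 82.0]

t = paired_t(lecture, flip)
d = paired_cohens_d(lecture, flip)
```

In practice, `scipy.stats.ttest_rel` would also supply the p-value for the paired t statistic; the sketch above keeps to the standard library to show the arithmetic explicitly.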
Poster Award Nominee:
110 – Using Predictive Modeling to Better Identify Students in Need of Proactive Educational Support Prior to the USMLE Step 1 Exam
Laurah Lukin, Laura Malosh, Pamela Baker, Bruce Giffin, and Cijy E. Sunny
University of Cincinnati
PURPOSE: Poor performance on Step 1 of the US Medical Licensing Examination (USMLE) can negatively impact a student’s ability to match into residency programs. Identifying at-risk students early allows for proactive educational support prior to the dedicated study time for Step 1. Studies have shown independent correlations between Step 1 performance and variables such as National Board of Medical Examiners (NBME) Comprehensive Basic Science Examination (CBSE) and NBME Customized Assessment Service (CAS) examination scores. NBME CAS examinations are administered as End of Block Exams (EOBEs) for all organ system courses. This study aims to strengthen predictive Step 1 models for identifying at-risk students by combining several predictors independently shown to correlate with Step 1 performance.
METHODS: Using scores from eight CAS EOBEs from the M1 and M2 years and the CBSE, a stepwise regression model was tested to predict students at risk for failing or scoring below the Step 1 national mean. A second model without CBSE was tested to identify significant EOBE predictors of Step 1. Both models were tested on four years of data (n=498 students). The analysis was performed using SPSS.
RESULTS: The primary model which included all eight EOBE and CBSE as predictors explained 76% of the overall variance in Step 1 scores. CBSE was the strongest predictor (0.3 SD). The second model, which only included the eight EOBE, accounted for 73% of the variance in Step 1 scores. In this model, Blood/Cardiovascular system EOBE explained the most variance (0.2 SD).
CONCLUSION: These results are consistent with previous findings that NBME CAS exams correlate with performance on USMLE Step 1. Our model suggests that using the CBSE and NBME CAS scores from organ system courses in the M1 and M2 years can better predict Step 1 performance than CBSE scores alone.
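The stepwise approach used in this study can be illustrated with a minimal forward-selection sketch. This is not the authors’ SPSS model: the data are synthetic, the `min_gain` stopping rule is a simplification of SPSS’s entry/removal criteria, and variable names (`eobe`, `step1`) are hypothetical.

```python
# Hedged sketch of forward stepwise selection for predicting a Step 1-like
# score from eight synthetic block-exam scores. All data are invented.
import numpy as np

def r2(X, y):
    """R-squared of an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

def forward_select(X, y, min_gain=0.01):
    """Greedily add the predictor that most improves R² until gain < min_gain."""
    chosen, best = [], 0.0
    while True:
        gains = {j: r2(X[:, chosen + [j]], y)
                 for j in range(X.shape[1]) if j not in chosen}
        if not gains:
            break
        j = max(gains, key=gains.get)
        if gains[j] - best < min_gain:
            break
        chosen.append(j)
        best = gains[j]
    return chosen, best

rng = np.random.default_rng(0)
eobe = rng.normal(70, 8, size=(200, 8))                      # eight block exams
step1 = 100 + 2.0 * eobe[:, 0] + 1.0 * eobe[:, 3] + rng.normal(0, 5, 200)
selected, model_r2 = forward_select(eobe, step1)             # recovers exams 0 and 3
```

The same greedy loop, run once with the CBSE included and once without, mirrors the two-model comparison reported in the abstract.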
111 – Prediction of Medical Student Moral Virtues of Honesty, Helpfulness, and Humility from Personal Characteristics
Robert Treat, Diane Brown, William J. Hueston, Jeff Fritz, Kristina Kaljo, Craig Hanke, Koenraad De Roo, Amy Prunuske, and Dawn Bragg
Medical College of Wisconsin
PURPOSE: Medical student moral virtues are notoriously challenging to predict from personal characteristics,¹² but are critically important to examine during their development as future physicians. Self-transcendent virtues with a social focus are aligned closely with strong personal character and care for patients. Virtues with strong associations to self-reported student attributes³ can help academic medicine understand the personal qualities of future physicians.
The purpose of this study is to analyze the relationship of medical student moral virtues with their personal characteristics.
METHODS: In spring 2017, 115/500 M1/M2 medical students completed these self-reported surveys: Schwartz’s Value Inventory (scale: 0=not important/7=supreme importance) for measuring the virtues of honesty, helpfulness, and humility; Five Factor Personality Survey; Trait Emotional Intelligence; Orientation to Happiness and Life Satisfaction Instrument; and RS-25 Resilience Scale. Pearson correlations and stepwise multivariate linear regressions were used to assess associations. IBM® SPSS® 24.0 was used for statistical analysis. This study was IRB approved.
RESULTS: Overall, student scores on the scales for honesty (mean±SD = 5.3±1.2), helpfulness (5.2±1.1), and humility (4.8±1.5) all exceeded the instrument’s midline score of 3.5. When we looked at associations within the value inventory, honesty was significantly correlated with helpfulness (r=0.4, p<.001) and humility (r=0.7, p<.001).
Using regression models, honesty was independently associated with these factors: neuroticism (factor of personality), self-control and emotionality (emotional intelligence), life of meaning (life satisfaction), and purpose (resilience) (model R²=0.61, p<.001).
Helpfulness was independently associated with factors of conscientiousness, openness (personality), sociability (emotional intelligence), and perseverance (resilience) (R²=0.80, p<.001).
Humility was independently associated with factors of conscientiousness, agreeableness, openness (personality) and life of meaning, pleasure, engagement (life satisfaction) (R²=0.88, p<.001).
CONCLUSION: While it may be difficult for medical schools to assess the components of moral value in applicants and students, honesty, helpfulness, and humility scores were positive and were predicted by a wide range of common and unique personal characteristics.
112 – Analysis of Medical Student Motivation and its Association to Resilience, Trait Affect, and Gender
Robert Treat, William J. Hueston, Amy Prunuske, Diane Brown, Koenraad De Roo, Kristina Kaljo, Craig Hanke, and Dawn Bragg
Medical College of Wisconsin
PURPOSE: Two new three-year medical degree programs have recently been created at our institution. It is important to examine dimensions of student learning which will adapt to these curricular developments. These elements not only include cognition (what students learn) and metacognitive regulation (how they learn), but also affect and motivation (why they learn). The motivational element involves coping with emotions which will impact student well-being and the progression of their learning.
The purpose of this study is to analyze the relationship of medical student motivation, trait affect, and resilience as moderated by gender.
METHODS: In spring 2017, 115/500 M1/M2 medical students (55M/50F) completed these self-reported surveys: 30-item Trait Emotional Intelligence to measure the facet of motivation; 25-item RS-25 Resilience Scale; 60-item Positive and Negative Affect Schedule (PANAS-X); and 50-item Five Factor Personality Survey. Stepwise multivariate linear regressions were used to predict motivation from resilience, personality traits, and trait affect. Independent t-tests compared mean differences in motivation scores. IBM® SPSS® 24.0 was used for statistical analysis. This research was approved by the institution’s IRB.
RESULTS: Significant regression models of motivation for female students (R²=0.55/p<.001) included three predictors (in descending order): conscientiousness (personality), openness (personality), and steadiness (trait-anxiety).
Significant regression models for male students (R²=0.98/p<.001) included eleven predictors: neuroticism (personality), authenticity (resilience), worrying (trait-anxiety), extroversion (personality), satisfied with self (trait-anxiety), delighted (trait-affect), perseverance (resilience), agreeableness (personality), drowsy (trait-affect), jittery (trait-affect), and active (trait-affect).
There were no significant differences in motivation scores (p=.985) between females (mean(SD)=5.5(1.3)) and males (5.5(1.7)), but both were significantly (p<.001) above the midline score (4.0 on a 7-point scale).
CONCLUSIONS: Medical student motivation scores were nearly identical between genders and well above midline scores. Many personal characteristics predicted male motivation, including emotional elements of personality, trait anxiety, trait affect, and resilience. The characteristics that predicted female student motivation included cognitive elements of personality traits and low trait anxiety.
113 – ASSESSMENT OF MEDICAL STUDENT COMPETENCIES IN A LONGITUDINAL INTEGRATED CLERKSHIP CURRICULUM: DO THEY MEASURE A UNIQUE CONSTRUCT?
Shane Schellpfeffer, Edward Simanton, and Valeriy Kozmenko
University of South Dakota and University of Nevada-Las Vegas
PURPOSE: During the past five years, the University of South Dakota Sanford School of Medicine has undergone numerous curricular changes. One of these changes was a conversion from traditional block clerkships to a longitudinal integrated clerkship.
Because of this curriculum change, the development of new methods of assessment was required. An increasing emphasis was placed on the importance of formative assessment and feedback in order to provide students with timely and standardized feedback.
A system of centralized grading during the clinical year was developed. Competency-driven grading, where competencies are integrated into the disciplines, was a part of this new model. Numerous data sources are used to calculate competency grades. The original study sought to determine: Are the competency grades really measuring a different set of skills than the discipline grades?
METHODS: The original analysis used data from three cohorts of students (N=154). An additional two cohorts, with approximately 120 more subjects, will be added to this analysis. The follow-up study will exactly replicate the original study. For the original study, descriptive statistics and Pearson correlations were calculated, and a factor analysis was conducted using a Varimax (orthogonal) rotation.
RESULTS: The baseline data results from the initial study found that competencies showed more variation in means and standard deviations than did the disciplines. All of the disciplines were highly correlated to each other. Correlations between only the competencies were less strong than among only the disciplines, and the correlations between disciplines and competencies were relatively constant.
CONCLUSIONS: We found that the four components of the factor analysis measured a student’s exam-taking ability, interpersonal skills, professionalism, and interprofessional collaborative skills. These results suggest that medical student competencies do measure unique constructs. We expect that, with additional data, competencies will continue to measure unique constructs.
114 – Exploring Structural Validity of Competency Assessments
PURPOSE: This study focuses on developing a practical measurement framework that offers statistical evidence of the structural validity of an outcome-based educational model for an undergraduate medical program. The empirical evidence provides a framework for evaluating the validity of assessments at the program level and efficacy of the assessments.
METHODS: The validity inquiry is carried out by applying measurement principles that scrutinize the scoring aspects of assessment; the scoring evidence is then structurally modeled. The study utilizes existing data from the pre-clinical curriculum, mainly MCQs and practical exams. Data are explored using exploratory factor analysis (EFA), then modeled through confirmatory factor analysis (CFA) and structural equation modeling (SEM) techniques. The approach yields quantitative evidence for the reliability and validity of assessments.
RESULTS: Three constructs emerged from the assessments; these were named biomedical sciences (BMS), patient care (PC), and osteopathic philosophy (OPP). Composite reliability (CR), an index of reliability viewed as less biased than Cronbach’s alpha, was within the acceptable range for the latent variables PC (CR=.85) and BMS (CR=.85); however, OPP had the lowest value (CR=.56). A CR value of .70 and above is considered acceptable. These indices were relatively similar to the Cronbach’s alpha values generated during the EFA (BMS α=.84, PC α=.86, and OPP α=.53). Model fit indices were CMIN=5.70, TLI=.80, CFI=.83, RMSEA=.09, SRMR=.072, GFI=.86, AGFI=.81, and p-close=.000. The regression weights, which represent the factor loadings, were all statistically significant. The model fit indices are relatively within an acceptable range of model fit, but not the best.
CONCLUSION: The study was able to generate statistical evidence for the structural validity of the preclinical curriculum. Though the structural model was not a perfect fit, it indicates a relatively good fit to the data.
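[Editor’s note] For readers unfamiliar with the composite reliability index cited above: for standardized factor loadings, CR can be computed directly from the loadings themselves. The sketch below uses hypothetical loadings (the abstract does not report them) and the standard CR formula.

```python
def composite_reliability(loadings):
    """Composite reliability (CR) from standardized factor loadings.

    CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    where each error variance is 1 - loading^2 for standardized loadings.
    """
    s = sum(loadings)
    error_var = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error_var)

# Hypothetical standardized loadings, for illustration only
loadings = [0.7, 0.8, 0.6, 0.75]
print(round(composite_reliability(loadings), 2))  # → 0.81
```

A CR of .7 or above, as the abstract notes, is the conventional threshold for acceptable reliability.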
115 – Do academic interprofessional teaching practice sites reduce healthcare utilization?
Shelley Bhattacharya, Stephen Jernigan, Dory Sabata, Traci Foster, Laura Zahner, Nick Marchello, Myra Hyatt, Crystal Burkhardt, and Denise Zwahlen
University of Kansas Medical Center, School of Medicine, University of Kansas Medical Center, School of Health Professions, PT, and University of Kansas Medical Center, School of Health Professions, OT
PURPOSE: With the complex healthcare needs of geriatric patients, collaborative interprofessional (IP) care is essential. Teaching this model remains a gap in medical education. The Geriatric Interprofessional Teaching Clinic (GITC) is a six-profession, collaborative practice teaching clinic required during the 3rd year Geriatrics clerkship at the University of Kansas School of Medicine. Students form three-profession teams to see recent hospital discharges, new patients and consults, each as a two-hour visit. It is unclear if IP models reduce health care usage. GITC faculty are exploring if IP teaching clinics reduce healthcare system utilization.
METHODS: Health records of GITC patients were retrospectively analyzed using the HERON electronic information portal. For study inclusion, a GITC visit was required between 1/1/14-12/31/15. Extracted information included professions present, hospitalizations and Emergency Department (ED) visits one year prior to, and one year following, the GITC visit.
RESULTS: One hundred eleven records met the inclusion criteria. The professions engaged in patient visits included: medicine (111), occupational therapy (89), pharmacy (58), physical therapy (38), social work (33), and dietetics (6). Of these, 22 patients were hospitalized in the year prior to and 13 were hospitalized in the year after their GITC visit (41% reduction). Forty-six patients visited the ED in the year prior to the GITC visit, compared to 38 seen in the year after (17% reduction).
CONCLUSION: This academic interprofessional collaborative practice model correlates with substantially reduced healthcare utilization. The combined expertise of multiple professions seems to efficiently identify risk factors to contain healthcare costs. More research is needed to further validate this collaborative model and its effect on geriatric patient outcomes and healthcare utilization. Preliminary results support the importance of interprofessional teaching practice sites at academic medical institutions.
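[Editor’s note] The reductions reported above follow directly from the raw counts (22 → 13 hospitalizations, 46 → 38 ED visits). A minimal check:

```python
def percent_reduction(before, after):
    """Percent reduction from a pre-period count to a post-period count."""
    return 100 * (before - after) / before

# Counts reported in the abstract
hosp = percent_reduction(22, 13)  # hospitalizations before/after GITC visit
ed = percent_reduction(46, 38)    # ED visits before/after GITC visit
print(round(hosp), round(ed))     # → 41 17
```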
116 – Advancing Educational Research at the Inaugural Anatomy Education Research Institute (AERI)
Valerie Dean O’Loughlin, Polly R Husmann, and James J Brokaw
Indiana University School of Medicine
PURPOSE: While some medical institutions provide professional development programs for faculty interested in pursuing educational research, many faculty lack these resources and/or face-to-face mentoring to develop as medical education research scholars. To meet this need, the co-authors developed the inaugural Anatomy Education Research Institute (AERI). This institute was inspired by the American Physiological Society’s Institute for Teaching and Learning (APS-ITL). However, AERI focused on anatomy faculty and graduate students (whereas the APS-ITL was primarily focused on physiology faculty).
METHODS: AERI was funded by an American Association of Anatomists Innovations grant submitted by the co-authors, and the 5-day institute was held in Bloomington, IN in early July 2017. The co-authors were the institute co-organizers, and were responsible for planning the schedule, inviting speakers, reviewing applications, and developing assessment instruments for the institute. Invited speakers discussed educational research topics and mentored participants who wanted to develop their own educational research project.
RESULTS: A total of 62 registered participants attended AERI 2017. Participants were introduced to many aspects of educational research and had regular scheduled meetings with a mentor to develop their own rigorous educational research question and determine appropriate methods of assessment. Participants had strong positive impressions of the conference (per post-conference survey feedback and social media comments) and individuals requested future AERI conferences be planned.
CONCLUSIONS: AERI 2017 provided a helpful and intensive face-to-face environment for future medical education research scholars to develop their skills as education researchers. Follow-up analysis will determine if these immediate impacts of AERI persist for the long term.
117 – The cost-effective method of enhancing long-term knowledge retention for high fidelity simulation
Valeriy Kozmenko, Wesley James Henze, and Shane Schellpfeffer
Background: Studies have shown that knowledge degrades if it is not reinforced; repeated exposure to a concept or skill is required to increase its retention. However, when a person is re-exposed to the same content, the brain skips information that is perceived as familiar, which explains why repeatedly reading the same information fails to increase knowledge. High fidelity simulation (HFS) has been proven to produce gains in knowledge and skills; its effectiveness is explained by the learner’s active involvement. Attempts to improve long-term retention of acquired knowledge and skills via repeated HFS experiences face two challenges: (1) it is expensive, which makes the approach less feasible; and (2) repeating the same scenario multiple times suffers from the same subconscious skipping phenomenon as reading the same book again. The University of South Dakota Sanford School of Medicine has investigated a method of enhancing long-term retention called the “induced flashback.”
Method: To establish a knowledge baseline, students complete an MCQ quiz. After the scenario, a faculty member conducts a debriefing session, after which the students complete the same quiz. The difference between the pre- and post-activity quizzes measures the acute knowledge gain. Administering the same quiz six weeks later makes the information from the debriefing session operational.
Results: Six months after the high fidelity simulation training, students in the test group had 21% higher knowledge retention than their peers in the control group.
Discussion: Using alternative simulation modalities enhances long-term retention of knowledge acutely acquired through high fidelity simulation. This study postulates that the repeated quiz triggers recollection of the debriefing session, making the information operational and enhancing long-term retention.
Conclusion: The study conducted by the USD SSOM demonstrated that students who completed the spaced multiple-choice quiz had higher knowledge retention six months after training than their counterparts in the control group. Using a spaced quiz to enhance HFS is cost-saving.
118 – MEDICAL TRAINEES’ APPROACHES TO LEARNING MEDICINE AND THEIR CLINICAL PERFORMANCE
Yen-Yuan Chen, Kuan-Han Lin, and Tzong-Shinn Chu
National Taiwan University College of Medicine
PURPOSE: Workplace-based assessments have been emphasized as important measures for clinical performance such as clinical skills, clinical reasoning, and decision-making. Few studies have been conducted to examine the associations between medical trainees’ approaches to learning medicine and their clinical performance. The objectives were: (1) to examine the associations between medical trainees’ approaches to learning medicine and clinical skills; and (2) to examine the associations between medical trainees’ approaches to learning medicine, and clinical reasoning and decision-making ability.
METHODS: We used the Approaches to Learning Medicine questionnaire, a modification of the Revised Learning Process Questionnaire. We assessed medical trainees’ clinical skills using mini-CEX, and clinical reasoning and decision-making using CbD. We conducted Pearson’s correlation coefficients and Spearman’s Rank correlation coefficients for examining the linear relationships between a continuous independent variable and the dependent variable, and between a categorical independent variable and a dependent variable, respectively. Stepwise multivariate linear regression analysis was carried out to examine the association between approaches to learning medicine and clinical performance.
RESULTS: A medical trainee’s academic performance as indicated by the class rank upon medical school graduation (β=0.27, p=0.05), and the deep motive of “Intrinsic Interests” (β=0.09, p=0.04), were positively associated with the performance of clinical skills. In comparison, a medical trainee’s surface motives of “Aim for Qualification” (β=0.15, p=0.05) and “Fear of Failure” (β=-0.14, p=0.02) were positively and negatively associated with his/her clinical reasoning and decision-making, respectively.
CONCLUSION: Our study found that approaches to learning medicine predicted medical trainees’ clinical skills, clinical reasoning, and clinical decision-making ability. Medical trainees are encouraged to cultivate intrinsic interest in becoming capable physicians, to see each assessment as one of a series of qualifications, and to avoid fear of failure through strategies such as early preparation and good time management.
119 – APPLICATION OF COGNITIVELY DIAGNOSTIC ASSESSMENTS IN BIOCHEMISTRY: A PRIMER
Youn Seon Lim and Catherine Bangeranye
Zucker School of Medicine at Hofstra/Northwell
PURPOSE: In undergraduate medical education (UME), course assessments usually provide only pass/fail scores. This approach gives students insufficient diagnostic information to recognize their strengths and weaknesses and to plan their learning within the framework of self-directed learning. This study aims to introduce and describe how cognitive diagnosis assessment (CDA), which provides more diagnostic information about each student’s performance than a single test score, could be applied to course assessments in UME.
METHODS: For this study, we used the final exam responses of 200 students enrolled in a UME biochemistry course. The exam included 25 questions, each tethered to a case vignette so that answering required both clinical reasoning and basic science knowledge. First, we constructed the matrix of questions by latent knowledge skills (the skills required to answer each question correctly), both manually and statistically. Then we conducted skill diagnosis modeling (comparing model fit statistics) to provide feedback to students on their strengths and weaknesses.
RESULTS: Results showed a successful application of CDA to the biochemistry exam (DINA model with Fisher z-transformed correlation = 2.65). The exam measured four knowledge skills: understanding basic science concepts, tying together complex information, using basic science to solve problems, and using basic science in clinical reasoning. This application identified which skills each of the 25 items was assessing and, in turn, the skill mastery profile for each student. We also categorized students into 16 groups based on their skill profiles to give the instructor usable feedback.
CONCLUSION: Providing efficient feedback for improving learning behaviors can prove to be a challenge. CDA is another approach that can be useful in providing UME students with efficient feedback to recognize their strengths and weaknesses and to plan their learning, especially within the framework of self-directed learning.
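[Editor’s note] The DINA model named in the results above can be sketched in a few lines: a Q-matrix maps items to the skills they require, and an item is answered correctly with probability 1 − slip by students who have mastered all required skills, and with probability guess otherwise. The Q-matrix, skill profile, and slip/guess values below are hypothetical, for illustration only.

```python
def eta(profile, q_row):
    """1 if the student's skill profile covers every skill the item requires."""
    return int(all(p >= q for p, q in zip(profile, q_row)))

def p_correct(profile, q_row, slip, guess):
    """DINA item response probability: masters answer correctly unless they
    slip; non-masters answer correctly only by guessing."""
    return (1 - slip) if eta(profile, q_row) else guess

# Hypothetical Q-matrix: three items, two skills
# (e.g., basic-science concepts, clinical reasoning)
Q = [[1, 0], [0, 1], [1, 1]]
student = [1, 0]  # masters skill 1 only
slips, guesses = [0.1, 0.1, 0.1], [0.2, 0.2, 0.2]

probs = [p_correct(student, Q[j], slips[j], guesses[j]) for j in range(3)]
print(probs)  # → [0.9, 0.2, 0.2]
```

Fitting the model estimates slip and guess per item and infers each student’s most likely skill profile, which is what drives the per-skill feedback described in the abstract.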
120 – METACOGNITIVE AWARENESS IN FIRST-YEAR MEDICAL STUDENTS AND ITS RELATION TO ACADEMIC PERFORMANCE IN A FOUNDATIONAL SCIENCE COURSE
Youngjin Cho, Alfred Hamilton III, and Jess Cunnick
Geisinger Commonwealth School of Medicine
PURPOSE: This study sought to examine the relationship between self-awareness of metacognition skills and performance in the first foundational science course for medical students at Geisinger Commonwealth School of Medicine.
METHODS: At the beginning of the first year, one hundred and twelve medical students completed a 9-week foundational science course called Cellular Molecular Basis of Life (CMBL), encompassing biochemistry, genetics, molecular biology, cell biology, and introductory histology. Their academic performance in the course was measured by two exams. The same students completed the 52-item self-reported Metacognitive Awareness Inventory (Schraw and Dennison) as part of course assignments in Case Based Learning. Using Pearson’s correlation test, we examined the relationships between the exam average and the scores of the two major domains of metacognition, knowledge about cognition and regulation of cognition, as well as the sub-processes under each domain.
RESULTS: A statistically significant, positive correlation was found between knowledge about cognition and the exam average for CMBL (p<0.001, Pearson’s r = 0.48). The correlation between regulation of cognition and the exam average was weak (Pearson’s r < 0.25). Among the sub-processes under knowledge about cognition, the positive correlation between declarative knowledge and the exam average was the strongest. No statistically significant correlation with the exam average was found for debugging strategies or comprehension monitoring under the domain of regulation of cognition.
CONCLUSION: The correlation between self-awareness of declarative knowledge and performance in the foundational science course supports the hypothesis that selective metacognitive skills are important for first-year medical students who are acclimating to medical school. Strategies to screen for and improve all metacognitive skills in incoming medical students should be explored to aid their transition into medical school.
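[Editor’s note] The Pearson’s r used throughout these abstracts can be computed directly from raw scores. The sketch below uses hypothetical inventory scores and exam averages, not the study’s data.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical metacognition scores vs. exam averages, for illustration
meta = [3.1, 3.8, 2.9, 4.2, 3.5]
exam = [72, 88, 70, 85, 78]
print(round(pearson_r(meta, exam), 2))  # → 0.92
```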
121 – AN ACADEMIC COUNSELLING TEAM TO IMPROVE STUDENT CONFIDENCE AND ACADEMIC PERFORMANCE: A REVIEW OF OUTCOMES
Yuliya Modna and Frances Jack-Edwards
Trinity School of Medicine
PURPOSE: The Academic Counselling Team (ACT) is a counselling service that is committed to identifying strategies for students who need learning assistance. The goal of this review is to look at the effectiveness of the Academic Counselling Team on academic performance and confidence of students at Trinity School of Medicine.
METHODS: Seventy-two medical students who failed one or more midterm exams (32 males and 40 females; mean age = 24.7 years, SD = 2.3) were invited to attend an ACT session. The ACT comprised three faculty members, who held a structured discussion about the student’s study plan and made recommendations to improve their outcomes. Students were divided into four groups according to their academic term. Academic performance was analyzed before and after the ACT session for each student, along with the end-of-semester summary report submitted by the academic advisor.
RESULTS: Analysis of students’ exam scores after the ACT session showed that Term 1 and 2 students improved their academic performance by implementing changes in their study strategies; they found the sessions non-threatening and noted positive interaction with faculty. Term 3 and 4 students were not enthusiastic about the ACT sessions. They showed less evidence of improvement in academic performance and expressed a need only for guidance on managing their workload, not for a study plan.
CONCLUSIONS: The ACT session is an effective structured service for increasing the confidence and academic performance of Term 1 and 2 students transitioning from undergraduate study to the medical program. For Term 3 and 4 students, the session was productive only in helping them manage their academic workload. Overall, students felt the ACT sessions encouraged them to reach their educational goals.
122 – GOOD, BETTER, HOW (GBH): A SIMPLE METHOD FOR GIVING FEEDBACK IN MEDICAL SCHOOL
Neil Haycocks, Ellen Cosgrove, Edward Simanton, and Mark Guadagnoli
UNLV School of Medicine
PURPOSE: Feedback is globally acknowledged as a requirement for effective learning. It is therefore ubiquitous in medical education and practice. Despite its commonality and necessity, many modes of delivering feedback have significant flaws that contribute to unintended negative consequences. To address these shortcomings we have adapted a feedback strategy originally designed for motor skills into Good, Better, How (GBH). Here we describe the implementation of GBH in a new medical school.
METHODS: A structured, three-step framework for providing feedback is discussed. The first step is to recognize what was good or positive about the behavior being evaluated. The second is to make an objective assessment of what could be improved or made better. The third is to develop a concrete plan, with intrinsic mechanisms for accountability, to achieve the desired improvement. When properly administered, this approach fulfills several criteria for a responsible feedback framework: GBH (1) encourages behaviors that have been successful, (2) discourages behaviors that yield suboptimal performance, and (3) encourages new behaviors that will enhance performance. The GBH framework has been incorporated into all avenues of feedback given within the UNLV School of Medicine. These avenues include formative and summative feedback provided to students and faculty, course feedback, and annual faculty evaluations. Focus groups were queried about the perception of GBH and its impact on both individual performance and the culture within the institution.
RESULTS: GBH has contributed to the development of a positive learning and working environment. The basic strategy of GBH is broadly applicable to many settings, both inside and outside the educational realm.
CONCLUSION: Broad implementation of GBH as a systematic means of delivering feedback has been successful within our institution. Further expansion of GBH into clinical education and the gathering of quantitative data will permit a more thorough exploration of its impact.
123 – The National Emergency Medicine Fourth-Year Student (M4) Examinations: Findings from one Medical School
David Story, David Manthey, and Hong Gao
Wake Forest University School of Medicine
PURPOSE: At institutions where students can rotate in the emergency department during both 3rd and 4th years, different examinations are necessary to avoid grade inflation due to administering the same test. We investigated the correlation between two examinations utilized at our institution and whether performance on either of those exams was predictive of performance on the USMLE Step exams.
METHODS: Data for this study was obtained over a 3-year period from students rotating in a MS4 EM clerkship at a private medical school. Data was compiled for EM-M4, USMLE Steps 1 and 2 CK, and EM-ACE. In addition, practice exam utilization and student interest to pursue EM as a specialty was merged with other data. All analyses were conducted in SAS.
RESULTS: Correlation statistics show that EM-M4 has low (r=0.34), moderate (r=0.48), and low to moderate (r=0.44) correlations with Step 1, EM-ACE, and Step 2 CK, respectively. Among the independent variables, Step 1 has a moderate correlation (r=0.49) with EM-ACE and a moderate to high correlation (r=0.74) with Step 2 CK. EM-ACE performance has a moderate correlation (r=0.64) with Step 2 CK performance.
Regression analysis indicates that among four variables, the number of practice exams (F(1, 86)=5, p<0.05) and EM-ACE (F(1, 86)=8.18, p<0.01) are significantly related to the performance on EM-M4. Student interest in EM as a specialty was also found to significantly affect EM-M4 performance (F(1, 87)=6.95, p<0.01).
CONCLUSIONS: Although EM-ACE was significantly related to EM-M4 performance in both models, the moderate correlation between EM-ACE and EM-M4 was lower than our expectation of 0.7, as both tests were designed to assess 4th-year students’ knowledge in EM. Student interest in pursuing EM as a specialty was a significant factor in EM-M4 performance, as interested students tended to take more practice exams, indicating that motivation played an important role.
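[Editor’s note] As a rough consistency check between the correlation and regression results above: for a single predictor, the regression F statistic with df = (1, n−2) follows directly from the Pearson correlation r. The n below is only an inference (the reported df of (1, 87) suggest roughly 89 usable records), and the r value is illustrative, not taken from the study.

```python
def f_from_r(r, n):
    """F statistic (df = 1, n-2) for a simple linear regression with one
    predictor, derived from the Pearson correlation r: F = r^2 (n-2) / (1 - r^2)."""
    return r ** 2 * (n - 2) / (1 - r ** 2)

# Illustrative: a modest correlation in a cohort of n = 89
print(round(f_from_r(0.3, 89), 2))  # → 8.6
```

This conversion only applies to single-predictor models; the multivariable F values reported in the abstract cannot be reproduced from pairwise correlations alone.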