News

Join us for the IAMSE 2024 Medical Educator Fellowship Program!

The International Association of Medical Science Educators (IAMSE) is pleased to announce that applications for the 2024 Medical Educator Fellowship (MEF) Program are now being accepted! IAMSE is once again offering members and non-members the option of completing the MEF Program 100% virtually, from any location around the globe.

The primary goal of the MEF is to support the development of well-rounded healthcare education scholars through a program of targeted professional development and application of learned concepts to mentored research projects. The program is designed for healthcare educators from all backgrounds who wish to enhance their knowledge and productivity as educational scholars.

Please note that as a prerequisite, applicants are required to have completed the Essential Skills in Medical Education (ESME) Program. For more detailed information about the program, please visit our website at http://www.iamse.org/fellowship-program/.

Applicants for the next cohort will be accepted until December 1, 2023. To submit your application, fill out this application form, then email the completed form along with your ESME Completion Certificate and your CV to support@iamse.org.

If your application is accepted, you will be invited to an online kickoff meeting on December 13, 2023. For questions about the Fellowship or how to apply, please contact support@iamse.org. We thank you for your interest and look forward to supporting you in achieving your professional goals in educational scholarship.

#IAMSE24 Poster & Oral Abstracts Now Welcomed!

Deadline December 1, 2023

The International Association of Medical Science Educators (IAMSE) is pleased to announce the call for abstracts for oral and poster presentations for the 28th Annual IAMSE Conference, to be held at the Hilton Minneapolis Resort in Minneapolis, Minnesota, USA from June 15-18, 2024. The IAMSE conference offers opportunities for training, development, and mentoring to meet the needs of learners and professionals across the continuum of health professions education.

Students who would like feedback on a draft of their abstract prior to final submission should email it to the Student Professional Development Committee, care of Stefanie Attardi at support@iamse.org, by November 10, 2023.

A few things to note:

  • The first time you enter the site, you will be required to create a user profile. Even if you submitted in previous years, you will need to create a new account.
  • All abstracts for oral and poster presentations must be submitted in the format requested through the online abstract submission site.
  • You may list several authors, but you are limited to one presenter.
  • Once the submission deadline has passed, you may not edit your abstract. This includes adding authors.
  • Once the submission deadline has passed, authors will no longer have access to their abstract submissions.

There is no limit on the number of abstracts you may submit, but due to scheduling complexities it is unlikely that more than two presentations per presenter can be accepted. Abstract acceptance notifications will be sent in March 2024. Please contact support@iamse.org with any questions about your submission.

We hope to see you in Minneapolis next year!

Say hello to our featured member Ian Murray!

Our association is a robust and diverse set of educators, students, researchers, medical professionals, volunteers and academics who come from all walks of life and from around the globe. Each month we choose a member to highlight their academic and professional career and see how they are making the best of their membership in IAMSE. This month’s Featured Member is Ian Murray.

Ian V.J. Murray
IAMSE 2023 Virtual Forum Program Chair
Professor of Physiology
Alice Walton School of Medicine (AWSOM), USA

How long have you been a member of IAMSE? 
I was first introduced to IAMSE in 2012 when a colleague suggested we submit our research to the conference. I enjoyed the conference so much that I returned in 2015, where I presented a poster on student attention span during lectures. It was at this conference that I was introduced to educational theories, particularly the cognitive load lecture by Dr. Jimmie Leppink. I have attended all of the IAMSE conferences since!

You are currently Chair of the 2023 Virtual Forum Planning Committee. Tell me a little bit about what that process has been like. What are you most looking forward to during the event? 
It has been a very rewarding experience working with IAMSE, and it has facilitated interaction with colleagues, leadership, and administration. It is a pleasure to work with the team to make this conference a reality.

The theme of the Virtual Forum is “Should It Stay or Should It Go? Changing Health Education for Changing Times”. The process of ideation for the theme and subthemes was a collective effort using convergent and divergent thinking, and seeing the different ideas converge to a common one was satisfying. With health education evolving as the pandemic ends and artificial intelligence (AI) advances, we aimed to gather international insight on innovations from students and members around the world. We also aimed to discuss and share what does and does not work, and how medical educators have overcome and innovated as a result of these challenges. Additionally, since students are the future of this organization, the student registration fee was intentionally priced at $25.

I am most excited to be a part of this amazing forum and learn from learners and educators of all walks. We have three amazing 60-minute Ignite Talks, which are unique in allowing participants to interact directly with one another and with the speaker. We also accepted 60 amazing Lightning Talks, each 14 minutes long. Again, we planned these talks to allow for significant participant interaction with the speaker. Please take time to review the schedule, as we have content on AI, curriculum, teaching, and student-related topics.

Looking at your time with the Association, what have you most enjoyed doing? What are you looking forward to?
I have served on several IAMSE committees, with my first one being the Engage Committee, now renamed the Education and Advocacy Committee (EAC). I always enjoy my interactions and discussions with members in and out of the conference, as well as the new collaborations that have formed. These interactions have led to my connecting with mentors and interacting with members of communities of growth (COG). This has led to exciting and amazing friendships, opportunities, collaborations, and publications. 

I have always been excited to present my medical education research at these conferences. My pivot from wet lab research to medical education research occurred when I moved to the Caribbean and became director of student research. I feel that it is crucial to engage students in research and pair them with suitable mentors. This enables the students to not only experience the research process, but also increase their competitiveness for residency applications. IAMSE is an excellent place for students to make connections and present their research.

Looking back at your time during your graduate studies and early career, if you could give your younger self a piece of advice what would it be?
Reflecting on my career, my best advice applies to students and educators at all levels: identify an effective mentor early on, preferably one external to your own institution. Perspective is influenced by one’s lens, and the advantage of a mentor is that they bring their wisdom and personal relationship with the mentee into play to reframe perspectives. I experienced the power of mentoring during an AAMC grant workshop, where the expert mentor gave an example of grant feedback that the applicant perceived as failure but that was then reframed as steps for success. I was pleased to see IAMSE develop a mentoring program and a publication on this important topic.

I look forward to meeting you at IAMSE in the future, and please feel free to reach out to me to say “Hello”!


IAMSE Fall 2023 Webcast Audio Seminar Series – Week 4 Highlights

[The following notes were generated by Douglas McKell MS, MSc and Rebecca Rowe, PhD]

The Fall 2023 IAMSE WAS Seminar Series, “Brains, Bots, and Beyond: Exploring AI’s Impact on Medical Education,” began on September 7, 2023, and concluded on October 5, 2023. The five sessions cover topics ranging from the foundational principles of artificial intelligence (AI) and machine learning, through their applications in health science education, to their use in teaching and learning essential biomedical science content.

The fourth session in this series is titled Artificial Intelligence (AI) Tools for Medical Educators and is presented by Drs. Dina Kurzweil, Elizabeth Steinbach, Vincent Capaldi, Joshua Duncan, and Mr. Sean Baker from the Uniformed Services University of the Health Sciences (USUHS). Dr. Kurzweil is the Director of the Education & Technology Innovation (ETI) Support Office and an Associate Professor of Medicine. She is responsible for the strategic direction of the ETI, including instructional and educational technology support for the faculty. Dr. Steinbach is the Academic Writing Specialist in the newly established writing center at USUHS. She has 20 years of experience teaching and facilitating the learning of academic writing. LTC (P) Vincent F. Capaldi, II, MD is the Vice Chair of Psychiatry (Research) at USUHS and Senior Medical Scientist at the Center for Military Psychiatry and Neuroscience at the Walter Reed Army Institute of Research in Silver Spring, MD. Dr. Capaldi is also the program director of the National Capital Consortium combined Internal Medicine and Psychiatry residency training program and chair of the Biomedical Ethics Committee at Walter Reed National Military Medical Center. Dr. Joshua Duncan is the assistant dean for assessment. He earned his medical degree and MPH from USUHS and is board-certified in pediatrics, preventive medicine, and clinical informatics. Mr. Sean Baker is the chief technology officer and senior information security officer, where he leads a team of 80 technologists to support the IT needs of USUHS and the entire military health system.

Dr. Kurzweil reviewed the goals of this webinar presentation and the learning outcomes.

  • Understand AI terminology
  • Identify AI teaching opportunities
  • Review citation options for AI tool use
  • Explain course policies on the use of generative AI tools
  • Describe two accountability measures for using AI systems
  • List several impacts of using AI for assessment

Dr. Duncan briefly described AI as an intersection of big data, computer science, and statistics, and defined AI as a computer performing a task that would typically require human cognition. A subset of AI is machine learning (ML), in which machines are programmed with algorithms to perform some of these tasks; the learning can be supervised or unsupervised by human interaction. Supervised learning can include computer vision, natural language processing, and machine learning applications, in contrast to deep learning, which is unsupervised and mimics human cognition.
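
For readers who want a concrete anchor for the supervised/unsupervised distinction, here is a minimal Python sketch (our illustration, not part of the presentation) contrasting a classifier that learns from human-provided labels with a clustering algorithm that must find structure on its own. It assumes scikit-learn is installed and uses its built-in iris dataset.

```python
# Illustrative sketch (ours, not the presenters') of supervised vs.
# unsupervised learning using scikit-learn's toy iris dataset.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised: the algorithm learns from human-provided labels (y).
clf = DecisionTreeClassifier(random_state=0).fit(X, y)
print("supervised prediction:", clf.predict(X[:1]))

# Unsupervised: the algorithm sees only raw features and must discover
# structure (here, clusters) on its own.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("unsupervised cluster:", km.labels_[0])
```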

Dr. Duncan emphasized that understanding and using AI is becoming a required competency in health care, medical education, and research. He provided several examples, such as using AI for statistical analysis of large databases, keyword database searching, clinical algorithms in clinical decision support, and support of clinical thinking and dialogue. One specific example he discussed, with references, was using natural language processing in medical education assessment to evaluate three categories: processing trainee evaluations, assessing supervisory evaluation techniques, and assessing gender bias.

Dr. Duncan then presented a demonstration of ChatGPT to illustrate its many uses for medical educators, prompting it on the following six topics: curriculum development, assessment creation, teaching, teaching methodology, research ideas, and adaptive teaching.

Using the ChatGPT platform, he provided a prompt for each of the above areas. For curriculum development, he asked ChatGPT to create a 6-week course on medical ethics that included lecture topics, readings, and assessments. In a matter of seconds, the 6-week course was designed. He pointed out that while the course topics and sequence generated by ChatGPT may be only a partial version of the course, they give the user a great starting point for creating such a course from scratch. Dr. Duncan emphasized that it is essential to be cautious about all references ChatGPT provides because AI models, as text predictors, can hallucinate, meaning that if they do not have access to real answers, they will make some up. The AI user needs to verify all content and references to ensure they are valid and legitimate. He then demonstrated assessment creation using a detailed ChatGPT prompt to create five NBME-style multiple-choice questions, with answer explanations, on cardiovascular physiology, suitable for first-year medical student assessment. As in the first demonstration, the five questions were generated with five possible answers each; the correct answer was indicated, and an explanation was given for why that answer was the most accurate choice. Dr. Duncan stated that there is an art to asking good questions (or prompts) so that the output generated is close to what you were looking for or expecting. The prompts he used during his demo were one-sentence prompts and can be specific, for example, asking for effective teaching methodologies for imparting clinical skills to medical students. He concluded his presentation by prompting ChatGPT for three under-explored research topics in medical education and why they are important. Dr. Duncan stated that AI can be an important member of the medical education team by providing the user with a draft that is 80% complete in answer to their prompts.
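
As a rough illustration of the kind of one-sentence prompt described above, the sketch below sends an assessment-creation request through the OpenAI Python SDK (v1+). The model name and prompt wording are our assumptions, not those used in the session, and the output would still need the human verification Dr. Duncan emphasized.

```python
# Hypothetical sketch of prompting a chat model for assessment creation;
# the model name and prompt text are illustrative, not from the talk.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

prompt = (
    "Create five NBME-style multiple-choice questions on cardiovascular "
    "physiology suitable for first-year medical students. For each, mark "
    "the correct answer and explain why it is the most accurate choice."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any chat-capable model works here
    messages=[{"role": "user", "content": prompt}],
)

# Always verify generated questions and references before classroom use.
print(response.choices[0].message.content)
```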

Mr. Sean Baker, in charge of IT security at USUHS, discussed the need for careful compliance when using any AI tool. He stressed the importance of not entering information that has not been cleared for public release, such as personal data, controlled unclassified information, hiring, performance management, or contract data, student data, evaluations, and Personally Identifiable Information (PII). Mr. Baker then highlighted the need to be aware of the policies at the user’s institution and provided examples of how Generative AI is used at USUHS. He compared using AI to using social media: do not enter anything into an AI tool that you would not post on social media.

Dr. Kurzweil then presented higher education’s need to think critically about user agreements and how we present these agreements to our students and faculty. These policies must be discussed and decided at all levels: federal, state, university, college, and department, down to individual courses and classrooms. She emphasized that AI will be widely used, and its use will depend on individual institutions’ decisions, especially when it comes to student use in courses and faculty use in the classroom. She pointed out that it is important to clearly state where AI cannot be used, such as requiring all course assignments to be exclusively the student’s work and specifying that the student cannot use AI applications like Spinbot, DALL-E, or ChatGPT. She also provided examples of when AI use is permitted, such as when the assignment requires a topic or content search strategy or a reference for additional information.

Dr. Kurzweil discussed a 2023 EDUCAUSE article by McCormack1 describing use cases, clustered around four common work areas, for incorporating Generative AI in higher education. They are:

  • Dreaming:  Brainstorming, summarizing information, research, and asking questions.
  • Drudgery: Sending communications, filling out reports, deciding on materials, and gathering information to help develop syllabus reading.
  • Design: Using Large Language Models to create presentations, course materials, and exams.
  • Development: Creating detailed project plans, drafting institutional policies and strategic plans, or producing images and music.

Many AI tools are currently available, and you, as the user, need to decide how best to use them. It is essential to consider how these tools can be used in teaching and what we must do to prepare our learners and faculty to develop their digital fluency. She cautioned that these tools can hallucinate, i.e., make up sources, so you need to check your work. You need to check all citations to be sure they are real and that the information is correct. Dr. Kurzweil emphasized that nothing comes out of these tools that she would take at face value without first verifying the information source.

Dr. Kurzweil then described opportunities to use AI tools to help you teach, including:

  • Altered active real learning
  • Independent thinking and creativity
  • Review of data and articles quickly
  • Overcoming writer’s block
  • Research and Analysis skills
  • Real-time response to questions
  • Tutoring and Practice
  • Creation of Case Studies

She then described several ways to create Curriculum Integration Opportunities with AI in the classroom, including:

  • AI formalized curriculum
  • Introduction to AI concepts
  • Computer literacy and fluency
  • Data Science
  • Hands-on AI tool practice
  • Medical Decision-Making with AI
  • Professional Identity Formation
  • Ethical Decision-Making
  • Computer Science Theory

Dr. Kurzweil presented seven examples of applying AI to assessment, including:

  • Project-based learning
  • Expectations of draft completeness
  • Rubrics created and applied to student work
  • Annotated references
  • Reflections
  • Using pen and paper in class for initial (draft) work development
  • Testing centers

She then linked these examples to specific assessment practices impacted by AI, including:

  • Requiring students to work collaboratively
  • Scaffolding assignments
  • Becoming familiar with students’ writing style
  • Making assignments personal, timely, and specific
  • Creating assignments that require higher-level cognitive skills
  • Authentic assessments with Observation and Simulation experiences

Dr. Kurzweil then listed six ways that AI can be incorporated into the medical education curriculum:

  1. Provide medical students with a basic understanding of what AI is and how it works.
  2. Introduce medical students to the principles of Data Science.
  3. Introduce medical students to the use of AI in radiology and pathology.
  4. Teach medical students how AI can be used to analyze patient data and provide treatment recommendations.
  5. Introduce medical students to ethical considerations of AI, such as privacy, bias, and transparency.
  6. Provide medical students with an opportunity to apply their AI foundational knowledge in real-life clinical scenarios.

She then turned the session over to Dr. Steinbach to discuss plagiarism.

Dr. Steinbach focused on the need to be aware of plagiarism involving AI, especially when students use ChatGPT to complete assignments. Many AI detectors use a perplexity score, which measures the randomness of text, and a burstiness score, which measures the variation in perplexity, to differentiate between text composed by humans and text written by AI. She noted that in a paper published in 2023, the software GPTZero correctly classified 99% of human-written articles and 85% of AI-generated content. Educators may be concerned that students are using AI such as ChatGPT to generate text for writing assignments without correctly citing the source of the generated text, which could give them an advantage over students who are not using AI to complete their assignments. Dr. Steinbach stated that ChatGPT-generated text for assignments focused on students’ reflections or interpretations could pass without being identified by AI detectors. The same can be said for scientific papers and abstracts, where the software correctly identified only 68% of human-written examples. The way to help avoid these issues is to be very clear about the policies and expectations in your course syllabus.
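
To make the two scores concrete, here is a toy Python sketch of the idea. This is our simplification: real detectors score text with large neural language models, not a unigram word-count model. Perplexity is the exponentiated average negative log-probability of the tokens, and burstiness is the variation in perplexity across sentences.

```python
import math

# Toy "language model": unigram counts from a tiny corpus, with Laplace
# smoothing so unseen words get a small nonzero probability.
CORPUS = ("the patient was admitted to the hospital and the team "
          "reviewed the chart and adjusted the plan").split()
COUNTS = {w: CORPUS.count(w) for w in set(CORPUS)}
TOTAL = len(CORPUS)

def token_prob(token: str) -> float:
    return (COUNTS.get(token, 0) + 1) / (TOTAL + len(COUNTS) + 1)

def perplexity(sentence: str) -> float:
    # Lower perplexity = the text is more predictable to the model.
    tokens = sentence.lower().split()
    neg_log = -sum(math.log(token_prob(t)) for t in tokens) / len(tokens)
    return math.exp(neg_log)

def burstiness(sentences: list[str]) -> float:
    # Variance of per-sentence perplexity; human writing tends to vary more.
    scores = [perplexity(s) for s in sentences]
    mean = sum(scores) / len(scores)
    return sum((s - mean) ** 2 for s in scores) / len(scores)

doc = ["the patient was admitted to the hospital",
       "cardiology reviewed the chart and adjusted the plan"]
print([round(perplexity(s), 1) for s in doc], round(burstiness(doc), 1))
```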

If you allow your students to use Generative AI in your course assignments, you must be clear on how you want them to cite the AI-generated information. Dr. Steinbach focused on two main style guides, AMA and APA, and their guidance on citing text generated through AI. First, AI tools cannot be listed as authors because they are not human and cannot answer questions about the work produced. Under both style guides, you can describe in the methods section how AI was used; AMA also allows noting it in the acknowledgments section, and APA in the introduction. She stated that the APA style guide requires the author to include the prompt and identify the text generated by the AI tool. The AMA style guide is not yet clear in its guidance, nor does it provide advice on in-text citations.

The last speaker, Dr. Capaldi, emphasized that there isn’t a perfect AI detector: as large language models develop and become more sophisticated, AI detectors tend to lag behind these improvements. The best AI detectors can do is provide the user with a probability score of whether the text was AI-generated. When used as an AI detector, Watson was only able to identify as AI-generated about 60% of what ChatGPT produced. Dr. Capaldi stated it is harder to detect text that has been edited, combined, or paraphrased. He also noted that probability scores are not perfect, and there can be false positives in determining whether text was generated using AI tools. He asked the audience to be careful when using AI detectors in the academic setting, because they are not entirely accurate and probability scores are not absolute determinations of whether text is or is not AI-generated.
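
Dr. Capaldi's caution about false positives can be quantified with a short Bayes-rule calculation. Using the GPTZero figures quoted earlier (99% of human text classified correctly, i.e., roughly a 1% false-positive rate, and 85% of AI text detected) plus an assumed base rate of AI use, the sketch below shows how quickly a "flagged" result loses reliability when AI use is uncommon.

```python
# Worked example with an assumed base rate; the sensitivity and
# false-positive figures come from the GPTZero results quoted above.
base_rate = 0.10            # assumption: 10% of submissions are AI-generated
sensitivity = 0.85          # AI-generated text correctly flagged
false_positive_rate = 0.01  # human-written text incorrectly flagged

p_flagged = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
p_ai_given_flag = sensitivity * base_rate / p_flagged
print(f"P(actually AI | flagged) = {p_ai_given_flag:.2f}")  # ~0.90

# At a 1% base rate the same detector flags more innocent students than
# cheaters: 0.85*0.01 / (0.85*0.01 + 0.01*0.99) is roughly 0.46.
```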

Dr. Kurzweil ended the session by stating that AI in education has immense promise, but it also comes with responsibility. She asked that we commit to using AI to empower our learners, faculty, and educational institutions, treating AI as a tool and not as a replacement for us as educators. AI should be viewed as a partner working with educators to enhance our ability to make education efficient and effective. She stated we need to embrace innovation and digital fluency while upholding the values of equity, privacy, and ethics in education.

References

  1. McCormack M. EDUCAUSE QuickPoll Results: Adopting and Adapting to Generative AI in Higher Ed Tech. EDUCAUSE Research Notes, 2023. https://er.educause.edu/articles/2023/4/educause-quickpoll-results-adopting-and-adapting-to-generative-ai-in-higher-ed-tech



As always, IAMSE Student Members can register for the series for FREE!

IAMSE Fall 2023 Webcast Audio Seminar Series – Week 3 Highlights

[The following notes were generated by Douglas McKell MS, MSc and Rebecca Rowe, PhD]

The Fall 2023 IAMSE WAS Seminar Series, “Brains, Bots, and Beyond: Exploring AI’s Impact on Medical Education,” began on September 7, 2023, and concluded on October 5, 2023. The five sessions cover topics ranging from the basics of AI to its use in teaching and learning essential biomedical science content.

The co-presenters for the third session are Dr. Michael Paul Cary Jr. and Ms. Sophia Bessias. Dr. Cary is an Associate Professor and Elizabeth C. Clipp Term Chair of Nursing at the Duke University School of Nursing. Ms. Bessias is the evaluation lead for the Algorithm-Based Clinical Decision Support (ABCDS) Oversight program. She provides operational support and peer review for clinical decision support software proposed for use within the Duke University Health System (DUHS). Ms. Bessias holds master’s degrees in analytics and public health from NC State University and the University of Copenhagen.

Dr. Cary listed four objectives of the session:

  1. Establishing Context and Recognizing Challenges
  2. Operationalizing Bias Mitigation through AI Governance
  3. Navigating the Terrain of Large Language Models (LLMs)
  4. Equipping Educators for AI-Driven Healthcare Technologies

The session was divided into four sections, each discussing one of the above Session Objectives.

Objective 1: Establishing Context and Recognizing Challenges

Dr. Cary began by sharing the context of the promises and perils of AI in healthcare. AI can revolutionize healthcare through the promise of:

  • Improving patient care and the clinician experience
  • Reducing clinician burnout
  • Increasing operational efficiencies
  • Reducing costs

He then highlighted several potential perils that need to be taken into consideration, such as:

  • Non-adoption or over-reliance on AI
  • No impact on outcomes
  • Technical malfunction
  • Violation of government regulations
  • Non-actionable or biased recommendations that could exacerbate health disparities

Dr. Cary posed a fundamental question: “Why is identity bias in algorithms so important?” He discussed a 2019 study by Obermeyer et al.1 that demonstrated that a biased algorithm systematically assigned the same risk score to White patients and Black patients even though the Black patients had 26.3% more chronic disease than the White patients, systematically excluding Black patients from accessing needed care management services. The reason was that the algorithm assigned risk scores based on past healthcare spending, and Black patients tend to have lower spending than White patients for a given level of health. The error resulted from the developers using an incorrect label to predict a particular outcome, called label bias. Once the algorithm was corrected, the percentage of Black patients automatically enrolled in the care management program rose from 17.7% to 45.5%.1
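
The mechanism of label bias is easy to reproduce in a small simulation. The sketch below uses entirely synthetic numbers, loosely patterned on the study's logic rather than its data: it ranks patients for a care-management program first by a spending proxy and then by true disease burden, and shows how the proxy label under-enrolls the group whose spending understates its need.

```python
import random

random.seed(0)

# Synthetic patients: 'need' is true chronic-disease burden; 'spending'
# is a biased proxy because, at the same level of need, one group has
# historically had lower healthcare spending (an access gap).
def make_patient(group: str) -> dict:
    need = random.gauss(6.3 if group == "Black" else 5.0, 1.0)
    access_gap = 2.0 if group == "Black" else 0.0
    spending = need - access_gap + random.gauss(0, 0.5)
    return {"group": group, "need": need, "spending": spending}

patients = ([make_patient("White") for _ in range(5000)] +
            [make_patient("Black") for _ in range(5000)])

def top_quintile_black_share(score_key: str) -> float:
    # Enroll the top 20% by the chosen risk score, then report the share
    # of Black patients among those enrolled.
    ranked = sorted(patients, key=lambda p: p[score_key], reverse=True)
    top = ranked[: len(ranked) // 5]
    return sum(p["group"] == "Black" for p in top) / len(top)

print("label = spending:", round(top_quintile_black_share("spending"), 2))
print("label = need:    ", round(top_quintile_black_share("need"), 2))
```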

Dr. Cary reviewed four evolving elements of AI government regulation. These include the 2022 FDA final guidance on Software as a Medical Device, which will regulate software and medical devices, including AI-powered devices. There is also the AI Bill of Rights, which aims to protect individuals from the potential harms of AI, such as label bias and other biases and discrimination. There is also considerable AI regulation emerging at the state level, with state attorneys general beginning to regulate AI in their states. In 2022, the Attorney General of California sent a letter to the CEOs of all the hospitals in California asking for an account of the algorithms being used in their hospitals, their potential biases, and plans to mitigate those biases. Finally, the Department of Health and Human Services (DHHS) announced a proposed rule under Section 1557 of the Patient Protection and Affordable Care Act (PPACA) stating that covered entities (health care systems and providers) must not discriminate against any individual through the use of clinical algorithms in decision-making and must develop a plan to mitigate that possibility. Dr. Cary stated that while this is a huge step forward, the proposed rule needed to go further to specify what covered entities must do to reduce bias. Still, it did solicit comments on best practices and strategies that can be used to identify bias and minimize any discrimination resulting from the use of clinical algorithms.

Dr. Cary and his team determined that the covered entities referenced in Section 1557 of the PPACA would need to know how to examine their clinical algorithms to ensure they complied with the proposed rule. They conducted a scoping review of 109 articles to identify strategies that could be used to mitigate biases in clinical algorithms, with a focus on racial and ethnic biases. They summarized a large number of mitigation approaches to inform health systems about how to reduce bias arising from the use of algorithms in their decision-making. While Dr. Cary outlined the literature search, study selection, and data extraction, he could not show or discuss the results of the review before its official publication. He noted that the scoping review results would be published in the October 2023 issue of Health Affairs at www.healthaffairs.org.

Dr. Cary then discussed some of the most pressing challenges facing the use of AI in healthcare. These include the lack of an “equity lens,” which results when AI algorithms are trained on biased or unrepresentative data sets. This oversight exacerbates existing healthcare disparities, with AI decision-making systems failing to provide equitable care.

The second challenge is the need for AI education and training of healthcare professionals and health professions educators. Very few of us have the necessary AI training, which leaves a gap in the knowledge and skills required for the successful integration of AI in healthcare. As a result, healthcare professionals struggle to understand the capabilities and limitations of AI tools, leading to a lack of trust, lack of use, and improper use. Lastly, there is little to no governance in the design or use of data science and AI tools, which could lead to ethical and privacy concerns.

Objective 2: Operationalizing AI Governance Principles

Ms. Bessias began her presentation by sharing how Duke AI Health and the Duke Health System are attempting to overcome some of these challenges. In 2021, the Dean, Chancellor, and Board of Trustees charged the Duke Health System leadership to form a governance framework for any tool that could be used in patient care, specifically any algorithm that could affect patient care directly or indirectly. The outcome of this charge was the formation of the Algorithm-Based Clinical Decision Support (ABCDS) Oversight Committee. The ABCDS is a “people-process-technology” effort that provides governance, evaluation, and monitoring of all algorithms proposed for clinical care and operations at Duke Health. This committee comprises leaders from the health system, the school of medicine, clinical practitioners, regulatory affairs and ethics experts, equity experts, biostatisticians, and data scientists. It takes all of these perspectives working jointly to adequately assess the risks and benefits of using algorithms in health care.

The mission of the ABCDS Oversight Committee is to “guide algorithmic tools through their lifecycle by providing governance, evaluation, and monitoring.” There are two core functions of the ABCDS. The first is registering all electronic algorithms that could impact patient care at Duke Health. The second is evaluating these algorithms as high, medium, or low risk. High-risk algorithms involve data-derived decision-making tools, sometimes home-grown and sometimes from vendors; in either case, the process investigates how they were developed and how they are proposed to be used. Medium-risk algorithms are knowledge-based, clinical consensus-based algorithms built from clinicians sharing their expertise to create a rubric. Lastly, low-risk algorithms include medical standards of care that are well integrated into clinical practice and frequently endorsed by relevant clinical societies. The specific type of risk evaluation used varies depending on the details of any given use case.

Ms. Bessias then took us through a detailed review of the ABCDS Evaluation Framework, which consists of a series of stages whose requirements an algorithm must meet before proceeding to the next. It is based on a software development life cycle process. There are four stages in the evaluation process:

  • Stage 1: Model development
  • Stage 2: Silent evaluation
  • Stage 3: Effectiveness evaluation
  • Stage 4: General deployment

Each of these stages is separated by a formal gate review that evaluates the stage against a series of quality and ethical principles, including transparency and accountability, clinical value and safety, fairness and equity, usability, reliability and adoption, and regulatory compliance. The intention is to ensure that when AI algorithms are deployed, patients see the maximum benefit while any unintended harm is limited. Quality and ethical principles are translated at each gate into specific evaluation criteria and requirements.
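
As a structural illustration only, the staged, gated life cycle can be pictured as a small state machine in which an algorithm advances one stage at a time and only when every gate criterion is satisfied. The stage and criterion names below follow the talk; the class and function names are our own hypothetical sketch, not Duke's implementation.

```python
from dataclasses import dataclass
from enum import Enum

class Stage(Enum):
    MODEL_DEVELOPMENT = 1
    SILENT_EVALUATION = 2
    EFFECTIVENESS_EVALUATION = 3
    GENERAL_DEPLOYMENT = 4

# Gate criteria named in the session; the data structure is hypothetical.
GATE_CRITERIA = (
    "transparency and accountability",
    "clinical value and safety",
    "fairness and equity",
    "usability, reliability, and adoption",
    "regulatory compliance",
)

@dataclass
class AlgorithmReview:
    name: str
    stage: Stage = Stage.MODEL_DEVELOPMENT

    def pass_gate(self, checks: dict) -> bool:
        # Advance one stage only if every criterion is satisfied.
        if all(checks.get(c, False) for c in GATE_CRITERIA):
            if self.stage is not Stage.GENERAL_DEPLOYMENT:
                self.stage = Stage(self.stage.value + 1)
            return True
        return False

review = AlgorithmReview("sepsis-risk-model")
review.pass_gate({c: True for c in GATE_CRITERIA})
print(review.stage)  # Stage.SILENT_EVALUATION
```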

Duke’s AI Health goal is anticipating, preventing, and mitigating algorithmic harms. In 2023, they introduced a new bias mitigation tool to help development teams move from a more reactive mode to a more proactive and anticipatory mode of thinking about bias. One of the most critical aspects of their process was linking the algorithm with its implementation: ABCDS tool = Algorithm(s) + Implementation.

They found that bias can be introduced anywhere in the life cycle of an algorithm and needs to be considered during each stage. To better understand this, Duke AI Health drew on a 2021 publication by Suresh and Guttag,2 a framework for understanding the sources of harm throughout the machine learning life cycle that illustrates how bias can be introduced. The seven types of bias are societal (historical), label, aggregation, learning, representation, evaluation, and human use, and a template helps people identify and address each type. It is during data generation that historical, representation, and label biases are introduced. Ms. Bessias discussed three of these biases: societal (training data shaped by present and historical inequities and their fundamental causes), label (use of a biased proxy target variable in place of the ideal prediction target), and human use (inconsistent user response to algorithm outputs for different subgroups), giving an example of each as well as ways to address and mitigate them.

Objective 3: Navigating the Terrain of Large Language Models (LLMs)

Everyone is thinking about how to navigate the terrain of Generative AI in health care, especially large language models. Ms. Bessias addressed how some of these tools and frameworks can be applied to LLMs. There are a large number of proposed applications of Generative AI in healthcare that range from low risk to very high risk. These include generating billing information, drafting administrative communications, automating clinical notes, drafting EHR inbox responses, providing direct medical advice, mental health support, and more. There are limitations and ethical considerations as well. For example, LLMs are trained to generate plausible results, which may not necessarily be factual or accurate. Explainability (how the algorithm produces an output) and transparency (accessible communication about the sources of data creating outputs) are a second major consideration. This leads to an ethical question: what happens when an algorithm provides misleading or incorrect information? What options are available to address algorithmic harm, and who has this recourse? Another important question concerns access versus impact when considering equity: how are the risks and benefits of Generative AI distributed in the population? These considerations were discussed using automated clinical notes as the example AI application. Ms. Bessias stated there are many questions and few answers, but these are all things that need to be considered as healthcare moves toward deploying these Generative AI technologies. To end this session, Ms. Bessias shared a reflection from Dr. Michael Pencina, chief data scientist at Duke Health and Vice Dean for Data Sciences at Duke University School of Medicine, from an op-ed he wrote on how to handle generative AI:

“Ensure that AI technology serves humans rather than taking over their responsibilities or replacing them. No matter how good an AI is, at some level, humans must be in charge.”

Objective 4: Equipping Educators for AI-Driven Healthcare Technologies

Dr. Cary then discussed the fourth webinar objective, the competencies needed for healthcare professionals and healthcare educators, as published by Russell et al. in 2023.3 The data for this publication were collected by interviewing 15 experts across the country, who identified six competency domains:

  • Basic Knowledge of AI – factors that influence the quality of data
  • Social and Ethical Implications of AI – impact on Justice, Equity, and Ethics
  • Workflow Analysis for AI-based Tools – impact on Workflow
  • AI-enhanced Clinical Encounters – Safety, Accuracy of AI tools
  • Evidence-based Evaluation of AI-based Tools – Analyze and adapt to changing roles
  • Practice-Based Learning and Improvement Regarding AI-based Tools

By developing these competencies, healthcare professionals can ensure that AI Tools are used to improve the quality and safety of patient care.

For the past year, Dr. Cary and Duke University have partnered with North Carolina Central University, a historically Black university with a deep understanding of the challenges faced by underrepresented, underserved communities. Through this partnership, they developed a proposed set of competencies for identifying and mitigating bias in clinical algorithms:

  1. Trainees should be able to explain what AI/ML algorithms are in the context of healthcare.
  2. Trainees should be able to explain how AI governance and legal frameworks can impact equity.
  3. Trainees will learn ways of detecting and mitigating bias in AI algorithms across the life cycle of these algorithms.

Dr. Cary ended the session by presenting the audience with several training opportunities and resources offered by Duke University. These include short courses and workshops, formal programs, and virtual seminar series offered in the fall and spring semesters that are open to anyone worldwide. In March 2024, Dr. Cary will present at the first-ever Duke University Symposium on Algorithmic Equity and Fairness in Health.

Lastly, Dr. Cary invited all webinar members to join Duke University in their commitment to advancing health equity and promoting responsible AI through a Call to Action for Transforming Healthcare Together.

References:

  1. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019;366(6464):447-453.
  2. Suresh H, Guttag J. A framework for understanding sources of harm throughout the machine learning life cycle. In: Equity and Access in Algorithms, Mechanisms, and Optimization. 2021:1-9.
  3. Russell RG, et al. Competencies for the use of artificial intelligence-based tools by health care professionals. Acad Med. 2023;98(3):348-356.



As always, IAMSE Student Members can register for the series for FREE!

Free Workshop for Students: New Educator and Scholar Training (NEST)

The International Association of Medical Science Educators (IAMSE) invites your students to join a free Professional Development Workshop for Students. Hosted by IAMSE and sponsored by ScholarRx, this highly interactive workshop will provide student participants with an introductory, hands-on experience in applying Kern’s Six-Step model to design a complete education activity with appropriate pedagogic strategies. Students will also explore models of converting medical education design and development into scholarship.

Session: New Educator and Scholar Training
Facilitators: Colleen Croniger, Amber Heck, Tao Le and Elizabeth Schlegel
Date and Time: Saturday, October 7, 2023 from 9:00 AM – 12:00 PM (EDT)
Cost: FREE for students!

After participating in this session, student attendees should be able to:

  • Describe a framework for medical education professional development
  • Discuss and apply principles and best practices for curriculum design, pedagogic strategies, and educational scholarship
  • Identify and synthesize themes that integrate across major domains of medical education professional development.

All students are welcome to attend this free event. If you are not already an IAMSE member, we encourage you to join by clicking here. Membership is not required to register. Non-members will need to present proof of enrollment before the event date. If you have any questions, please reach out to support@iamse.org.

Don’t Miss the IAMSE #VirtualForum23 This October!

We hope you’ve made plans to join medical educators and students from around the world for the second annual IAMSE Virtual Forum! Join IAMSE October 18-20, 2023 as we host lightning talks, ignite talks, and more! This year’s theme is:

Should It Stay or Should It Go?
Changing Health Education for Changing Times

At this event, we will discuss what does and does not serve educators and students alike when it comes to curriculum reform - what should stay and what should go. Topics will cover how curriculum reform impacts students, curriculum, and teaching, as well as Artificial Intelligence in Health Sciences Education.

Meet the Ignite Speakers

[Image: IVF23 Ignite speakers. From left: Kimara Ellefson, Holly Gooding, and Neil Mehta]

Calibrating Our Compass: Flourishing as the North Star for Charting the Way Forward
Wednesday, October 18 from 10:15 AM – 11:15 AM EDT
Presented by Kimara Ellefson, Kern National Network

How Are We Going to Get to The Moon? Developing Operating Principles for Effective Curriculum Change
Thursday, October 19 from 10:15 AM – 11:15 AM EDT
Presented by Holly Gooding, Emory University School of Medicine

Teaching in the Age of Online Resources: Designing Lesson Plans to Enhance the Value of In-Person Classroom Learning
Friday, October 20 from 12:00 PM – 1:00 PM EDT
Presented by Neil Mehta, Cleveland Clinic Lerner College of Medicine of CWRU


View the Lightning Talk Abstracts


Wednesday, October 18
12:00pm – 1:00pm Eastern
Lightning Talks Room 1: Mental Health
Lightning Talks Room 2: Student Success
Lightning Talks Room 3: Student Success
Lightning Talks Room 4: Anatomy
Lightning Talks Room 5: Social Determinants of Health

Thursday, October 19
12:00pm – 1:00pm Eastern
Lightning Talks Room 1: AI
Lightning Talks Room 2: Assessment
Lightning Talks Room 3: Interprofessional Education
Lightning Talks Room 4: Anatomy
Lightning Talks Room 5: Courses

Friday, October 20
10:15am – 11:15am Eastern
Lightning Talks Room 1: AI
Lightning Talks Room 2: Professional Development
Lightning Talks Room 3: Social Determinants of Health
Lightning Talks Room 4: Research
Lightning Talks Room 5: Other


If you have any questions, comments, or concerns, please let us know at support@iamse.org. Additional forum details and registration can be found at www.iamseforum.org.

We’re looking forward to seeing you in October!

Last Call to Submit IAMSE 2025 Manual Proposals

Due October 1, 2023

Don’t miss your chance to submit proposals for contributions to the IAMSE Manuals book series!

The IAMSE Manuals series was established to disseminate current developments and best evidence-based practices in healthcare education, offering those who teach in healthcare the most current information to succeed in their educational roles. The Manuals offer practical “how-to” guides on a variety of topics relevant to teaching and learning in the healthcare profession. The aim is to improve the quality of educational activities that include, but are not limited to: teaching, assessment, mentoring, advising, coaching, curriculum development, leadership and administration, and scholarship in healthcare education, and to promote greater interest in health professions education. They are compact volumes of 50 to 175 pages that address any number of practical challenges or opportunities facing medical educators. The manuals are published by Springer; online versions are offered to IAMSE members at a reduced price.

We welcome proposal submissions on topics relevant to IAMSE’s mission and encourage multi-institutional, international, and interprofessional contributions. Previously published manuals can be found by clicking here. Currently, manuals on the topics of feedback, problem-based learning, professionalism, online education, and leveraging student thinking are already being developed for publication in 2023 and 2024.

To submit your proposal, please click here. The submission deadline is October 1, 2023.

Each proposal will be evaluated by the IAMSE Manuals Editorial Board using the criteria specified in the full call (linked below). The Editorial Board will then discuss the proposals and select two to three for publication, based on how well the proposals match those criteria. We expect publication decisions to be made by December 2023. We anticipate that selected manuals will be published during the second half of 2025.

Read here for the full call and submission guidelines.

If you have any questions about submission or the Manuals series please reach out to support@iamse.org.

We look forward to your submissions.

Hersh to present Artificial Intelligence: Implications for Health Professions Education

Are you curious how Artificial Intelligence (AI) is transforming medical education, especially its impact on faculty teaching and student learning? Join the upcoming IAMSE Fall webinar series entitled “Brains, Bots, and Beyond: Exploring AI’s Impact on Medical Education” to learn about the intersection of AI and medical education. Over five sessions, we will cover topics ranging from the basics of AI to its use in teaching and learning essential biomedical science content.

[Image: Bill Hersh]


The series begins on September 7 with a presentation by Homayoun Valafar to define AI and machine learning. The series will continue on September 14 with a discussion by Erkin Otles and Cornelius James on strategies to prepare our trainees to appropriately utilize AI in their future healthcare jobs. On September 21, Michael Paul Cary and Sophia Bessias will present on critical ethical issues, including the potential for unintended bias and disparities arising from AI. Finally, Dina Kurzweil and Bill Hersh will wrap up the series on September 28 and October 5, respectively, with practical tips for educators and learners alike to utilize AI to maximize teaching and learning. Don’t miss this exciting opportunity to join the conversation on the future of AI in medical education.

Artificial Intelligence: Implications for Health Professions Education

Presenters: Bill Hersh, MD, FACP, FACMI, FAMIA, FIAHSI
Session Date & Time: October 5, 2023 at 12pm Eastern
Session Description: This webinar will cover:

  • A brief introduction to AI tools, including ChatGPT, that medical students (health professions learners) can use
  • Applying ChatGPT and other AI tools to learning in medical education/health professions education
  • Academic misconduct and other risks of ChatGPT and AI tools
  • Future directions for ChatGPT and other AI tools


As always, IAMSE Student Members can register for the series for FREE!

We Hope to See You Exhibit at #IAMSE24 in Minneapolis, MN, USA!

We are pleased to extend your company an invitation to be an exhibitor at the International Association of Medical Science Educators (IAMSE) Annual Conference to be held on June 15-18, 2024, at the Minneapolis Hilton!

At the annual IAMSE conference, faculty, staff, and students from around the world who are interested in medical science education come together for faculty development and networking opportunities. Sessions on curriculum development, assessment, and simulation are among the common topics available at the annual conference.

BACK FOR #IAMSE24
Executive Level Sponsorships

Only our Executive sponsors will enjoy a 30-minute networking session with attendees, including a short platform presentation, scheduled during the conference program. This networking session is unavailable to other sponsorship levels. These exclusive spots are limited, so make sure you keep an eye on your email for registration in early January 2024. Any early registration applications will not be processed until registration is open. This includes processing any payment you may send.

We are currently finalizing the details of our exhibitor prospectus, so keep an eye on your email for more info. Registration will be open to both attendees and exhibitors in early January. 

Thank you for supporting IAMSE and we look forward to seeing you in Minneapolis!

IAMSE #VirtualForum23 Lightning Talk Schedule

The IAMSE 2023 Virtual Forum is right around the corner! As attendees are considering the schedule of live presentations throughout the three-day event, we encourage you to consider the 45 Lightning Talk presentations, to be held in groups throughout the event.

2023 Lightning Talk Schedule

Wednesday, October 18
12:00pm – 1:00pm Eastern
Lightning Talks Room 1: Mental Health
Lightning Talks Room 2: Student Success
Lightning Talks Room 3: Student Success
Lightning Talks Room 4: Anatomy
Lightning Talks Room 5: Social Determinants of Health

Thursday, October 19
12:00pm – 1:00pm Eastern
Lightning Talks Room 1: AI
Lightning Talks Room 2: Assessment
Lightning Talks Room 3: Interprofessional Education
Lightning Talks Room 4: Anatomy
Lightning Talks Room 5: Courses

Friday, October 20
10:15am – 11:15am Eastern
Lightning Talks Room 1: AI
Lightning Talks Room 2: Professional Development
Lightning Talks Room 3: Social Determinants of Health
Lightning Talks Room 4: Research
Lightning Talks Room 5: Other

View all the 2023 Lightning Talk abstracts

If you have any questions, comments, or concerns, please let us know at support@iamse.org. Additional forum details and registration can be found at www.iamseforum.org.

We’re looking forward to seeing you in October!

IAMSE Fall 2023 Webcast Audio Seminar Series – Week 2 Highlights

[The following notes were generated by Douglas McKell MS, MSc and Rebecca Rowe, PhD]

The Fall 2023 IAMSE WAS Seminar Series, “Brains, Bots, and Beyond: Exploring AI’s Impact on Medical Education,” began on September 7, 2023, and concluded on October 5, 2023. The five sessions cover topics ranging from the basics of AI to its use in teaching and learning essential biomedical science content.

Dr. Cornelius James and Mr. Erkin Otles of the University of Michigan presented the second session in this series. Dr. James is a Clinical Assistant Professor in the Departments of Internal Medicine, Pediatrics, and Learning Health Sciences at the University of Michigan (U-M). He is a primary care physician, practicing as a general internist and a general pediatrician. Dr. James has completed the American Medical Association (AMA) Health Systems Science Scholars program and was one of ten inaugural 2021 National Academy of Medicine (NAM) Scholars in Diagnostic Excellence. As a NAM scholar, he began working on the Data Augmented Technology Assisted Medical Decision Making (DATA-MD) curriculum. The DATA-MD curriculum is designed to teach healthcare professionals to use artificial intelligence (AI) and machine learning (ML) in their diagnostic decision-making. Dr. James and his colleagues are also using DATA-MD to develop a web-based AI curriculum for the AMA.

Mr. Erkin Otles is a seventh-year Medical Scientist Training Program Fellow (MD-PhD student) at the University of Michigan. His research interests span the continuum of clinical reasoning and include creating digital reasoning tools at the intersection of AI and medicine, drawing on informatics, clinical reasoning, and operations research. His doctoral research focused on creating AI tools for patients, physicians, and health systems. He has led work across the AI lifecycle, with projects advancing from development to validation, technical integration, and workflow implementation. He is also interested in incorporating AI into Undergraduate Medical Education.

Dr. James began the session by reviewing the webinar objectives and the attendees’ responses to questions about AI’s impact on healthcare and medical education. Mr. Otles defined artificial intelligence (AI) as a lot of math and programming, with nothing magical about its operation: intelligence demonstrated by systems constructed by humans. He defined machine learning (ML) as a subfield of AI concerned with developing methods that learn, using data to improve performance on a specific task. An example is a physician interested in analyzing pathology slides for evidence of cancer cells. An AI or ML system might use historical data (consisting of images and previous pathologist interpretations) to learn how to detect evidence of cancer in slides it has never seen before, demonstrating learning and recognition ability grounded in human programming.

As we learned in the first session of this series, AI and ML contain many subfields. Within AI is a technique known as natural language processing (NLP), which can process human-generated text or speech into data. There is also knowledge representation and reasoning (KRR), the ability to encode human knowledge in a form that can be automatically processed; input from sensors and related biometric data is of this type. Even though AI and ML are often used synonymously, many techniques are AI but not ML.
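
A minimal example of what "processing human-generated text into data" can look like (our toy illustration, not from the session) is a bag-of-words representation, which turns free-text notes into count vectors that downstream ML methods can consume.

```python
from collections import Counter

# Two toy clinical notes to vectorize.
notes = [
    "patient denies chest pain",
    "patient reports chest pain and shortness of breath",
]

# Vocabulary = every distinct word across the notes, in sorted order.
vocab = sorted({word for note in notes for word in note.split()})

def vectorize(note: str) -> list[int]:
    # Count how often each vocabulary word appears in this note.
    counts = Counter(note.split())
    return [counts[word] for word in vocab]

for note in notes:
    print(vectorize(note), "<-", note)
```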

There are also subsets within ML, such as deep learning, which is very effective because it can identify patterns in large amounts of data. ML draws heavily on optimization theory, operations research, statistical analysis, human factors engineering, and game theory, to name a few contributing disciplines.

We encounter AI in our everyday lives, such as asking Siri to play music, route planning, spam detection, and topic word searching by Google. Accomplishing these tasks often requires several AI tools working together to present a seamless final result to the human user. Mr. Otles emphasized that many industries depend heavily on AI embedded in their operations. For example, airlines and e-commerce businesses use AI to optimize their operational workflows, while tech companies use ML to sort through large databases and surface content that keeps users engaged, tailored to each person based on prior usage or purchasing habits.

Mr. Otles presented a brief review of ChatGPT that identified its two primary components: 1) a chatbot that serves as the human user interface, and 2) a large language model that predicts the best answer, refined with reinforcement learning to identify the preferable word(s) in a response. He pointed out an inherent problem with this process: the answers are entirely dependent on whatever data the model was trained on, and may be inaccurate, irrelevant, or unpleasant. Understanding where the data came from allows us to understand how the model can be used, including its limitations. The random sampling of the underlying data also limits the ability to ensure accuracy and reliability.
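
The "predict the next word" core of a large language model can be shown at toy scale. The sketch below is our simplification: real LLMs use neural networks over tokens and are further tuned with reinforcement learning from human feedback. It builds a bigram table from a tiny training text, so its predictions, like an LLM's, are entirely a product of whatever data it was trained on.

```python
from collections import Counter, defaultdict

# Tiny "training corpus"; the predictor can only ever echo its data.
training_text = (
    "the heart pumps blood the heart pumps oxygenated blood "
    "the heart has four chambers the lungs exchange gas"
).split()

# Count which word follows which: a bigram model.
bigrams = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    options = bigrams.get(word)
    return options.most_common(1)[0][0] if options else "<unknown>"

print(predict_next("heart"))   # 'pumps' (the most frequent continuation)
print(predict_next("kidney"))  # '<unknown>' -- never seen in training data
```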

Mr. Otles then switched to discussing how AI is used in health care. Not long ago, it was unusual to see AI used in health care. Demonstrating the rapid growth in AI’s impact on healthcare, Mr. Otles presented a chart showing that the number of FDA-cleared AI devices used in healthcare increased from one device in 1995 to 30 devices in 2015 and 521 devices in 2022. He emphasized that not all AI devices need to be approved by the FDA before being employed in healthcare, and most are not. He gave an example of a specific AI system at the University of Michigan, approved for use by the FDA in 2012, that removes bone images from a chest radiograph to provide the radiologist with a clearer view of the lungs. Other examples of AI in healthcare from the speaker’s research include prediction of prostate cancer outcomes, in-hospital infection and sepsis risk, and deterioration risk. Mr. Otles cautioned that a number of proprietary models in current use that have not been subject to FDA scrutiny have proven to be less accurate predictors than initially thought, because the actual clinical contexts in which they are deployed differ from the research-only environments in which they were trained (Singh, 2021; Wong, 2021). He also shared several feedback examples in medical education that identified the type of feedback, and possible gaps or biases in it, provided to medical and surgical trainees based on their level of training (Otles, 2021; Abbott, 2021; Solano, 2021).

Mr. Otles concluded his presentation by discussing why physicians should be trained in AI even though AI is not currently part of most medical schools' curricula. He emphasized that physicians should not just be users of AI in healthcare; they need to be actively involved in creating, evaluating, and improving it. Healthcare users of AI need to understand how it works and be willing to form partnerships with engineers, cognitive scientists, clinicians, and others, as AI tools are not simply applications to be implemented without serious consideration of their impact. He stressed that healthcare data cannot be analyzed outside of its context: a single lab value is meaningless without understanding the patient it came from, the reported value, the timing of the measurement, and so on. AI can benefit physicians by rapidly summarizing information, predicting outcomes, and learning over time, and Mr. Otles suggested that this way of working through complicated data and workflows to reach an answer and a subsequent action mirrors the decision processes physicians themselves use and understand. For all these reasons, he stressed that physicians need to be more than "users" of AI tools (Otles, 2022).
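To illustrate Mr. Otles' point about context, here is a minimal sketch (the field names are our own invention, not from the webinar) of how an AI pipeline might carry a lab value together with the metadata needed to interpret it, rather than the number alone.

```python
# Illustrative only: a lab value packaged with the context that gives it meaning.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class LabResult:
    patient_id: str
    test_name: str
    value: float
    units: str
    collected_at: datetime  # timing changes interpretation (e.g., fasting)
    patient_age: int        # reference ranges vary with the patient
    patient_sex: str

result = LabResult(
    patient_id="anon-001", test_name="creatinine", value=1.4,
    units="mg/dL", collected_at=datetime(2023, 10, 2, 7, 30),
    patient_age=78, patient_sex="F",
)
# The same 1.4 mg/dL can be alarming for one patient and near-baseline for
# another; a model can only judge because the context travels with the value.
print(result)
```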

Dr. James hosted the remainder of the webinar. He opened with two questions for the audience: "Are you currently using AI for teaching?" and "Are you currently teaching about AI in health care?" Dr. James then presented the latest recommendations from the National Academy of Medicine (NAM) on AI and health care, highlighting the recommendation that we develop and deploy appropriate AI-related training and education programs (Lomis et al., 2021). These programs are not just for physicians; they must include all healthcare professionals. He also discussed a recent poll published in the New England Journal of Medicine (NEJM) in which 34% of participants ranked data science (including AI and ML) as the second most important topic medical schools should focus on to prepare students to succeed in clinical practice (Mohta & Johnston, 2020).

Currently, AI has a minimal presence in most medical school curricula; where it is present, it is usually an elective, part of an online curriculum, a workshop, or a certificate program. Dr. James stated that this piecemeal approach is insufficient to train physicians and other healthcare leaders to use AI effectively. As more medical schools begin to include AI and ML in their curricula, he stressed that it is essential to set realistic goals for what AI instruction in medical education should look like. Just as not all practicing physicians and other healthcare workers are actively engaged in clinical research, not all physicians or clinicians should be expected to develop a machine-learning tool. That said, just as all physicians are expected to possess the skills necessary to practice evidence-based medicine (EBM), they should also be expected to evaluate and apply the outputs of AI and ML in their clinical practice. Medical schools and other health professions schools therefore need to begin training clinicians in the AI vocabulary necessary to serve as patient advocates: to ensure patient data is protected and that the algorithms used for analysis are safe and do not perpetuate existing biases.

Dr. James reviewed a paper by colleagues at Vanderbilt that offers a strong starting point for incorporating AI-related clinical competencies and for understanding how AI will affect current ACGME competencies, such as Practice-Based Learning and Improvement and Systems-Based Practice, within the Biomedical Model of education used to train physicians (McCoy et al., 2020). Although the Biomedical Model has historically been the predominant model for training physicians, he suggested that medical education begin thinking about transitioning to a Biotechnomedical Model of educating clinicians and healthcare providers (Duffy, 2011). This model would account for the role technology will play in preventing, diagnosing, and treating illness and disease. He was clear that he does not mean models like the bio-psycho-social-spiritual model should be ignored; rather, he suggested that the Biotechnomedical Model be treated as complementary to it, bringing us closer to the whole-person, holistic care we seek to provide. If medical education is to successfully prepare learners to be comfortable with these technologies, a paradigm shift will be necessary. He believes that within one to two years we will see AI and ML content overlapping with courses such as Health Systems Science, Clinical Reasoning, Clinical Skills, and Evidence-Based Medicine; this is already occurring at the University of Michigan, where he teaches.

Dr. James feels strongly that Evidence-Based Medicine (EBM) is, at minimum, the best initial home for AI and ML content. We have all heard that the basic and clinical sciences are the two pillars of medical education, and most of us have also heard of Health Systems Science, considered the third pillar. Dr. James anticipates that over the next five to ten years AI will become the foundation for these three pillars, as it will transform how we teach, assess, and apply knowledge in all three domains. He briefly reviewed this change in the University of Michigan undergraduate medical curriculum.

Dr. James then discussed in depth the Data Augmented Technology Assisted Medical Decision Making (DATA-MD) program at his medical school, which is working to create the foundation for designing AI/ML curricula for all healthcare learners (James, 2021). His team has focused on the diagnostic process using EBM, which will in turn shape the analysis process. Their work with the American Medical Association is supporting the creation of seven web-based modules on using AI more broadly in medicine.

Dr. James concluded his presentation by stressing four changes necessary to rapidly and effectively incorporate AI and ML into the training of healthcare professionals: 1. Review and Re-prioritize the Existing Curriculum, 2. Identify AI/ML Champions, 3. Support Interprofessional Collaboration and Education, and 4. Invest in AI/ML Faculty Development. His final "take-home" points were that AI/ML will continue to impact healthcare with or without physician involvement, that AI/ML is already transforming the way medicine is practiced, that AI/ML instruction is lacking in medical education, and that interprofessional collaboration is essential for healthcare professionals as key stakeholders.

As always, IAMSE Student Members can register for the series for FREE!