News

#IAMSE23 Registration is NOW OPEN

We are pleased to announce that registration for the 27th Annual Meeting of IAMSE, to be held June 10-13, 2023 in Cancun, Mexico, is now open. At this annual meeting of the International Association of Medical Science Educators (IAMSE), faculty, staff, and students from around the world who are interested in health science education join together for faculty development and networking opportunities. Sessions on curriculum development, assessment, and simulation are among the common topics available at the meeting.

Featured plenary speakers include Professor Kara Caruthers (Meharry Medical College, USA), Dr. Michelle Daniel (University of California San Diego School of Medicine, USA), Dr. Anique de Bruin (Maastricht University, The Netherlands), and Dr. Ricardo Leon-Borquez (World Federation for Medical Education).

Additional meeting details and registration can be found at www.iamseconference.org.

#IAMSECafe Archives & COVID-19 Resources for Medical Science Educators

IAMSE Cafe Virtual Sessions

10/03/2023 Professionalism with Chasity O’Malley
9/19/23 Accreditation Standards Around DEI with Rakhi Negi
9/5/23 Foundational Sciences vs. Basic Science Factoids with Wendy Lackey Cornelison and Neil Osheroff
8/15/23 How to Distinguish Students After Step 1 P/F with Dr. Dave Harris (chat description)
8/1/23 Tips for Getting Students to Buy in to Active Learning with Tameka Clemons
6/20/23 I23 Recap with Wendy Lackey Cornelison and Amber Heck
5/16/23 Overcommitment with Chasity O’Malley
5/2/23 KNN with Dr. Dave Harris, Sarah Williams, Joelle Worm
4/18/23 Imposter Syndrome with Wendy Lackey Cornelison
3/21/23 IAMSE 2023 Annual Conference Preview with Amber Heck
3/7/23 Curriculum Refresh with Jon Wisco
2/21/23 What Does Research Mean to Undergraduate Medical Education? with Rakhi Negi
2/7/23 What are You Reading This Winter? with Katie Huggett
1/17/2023 Foundational Competencies with Lisa Howley and Eric Holmboe
*If you would like to give feedback on this initiative please fill out the form here or email CBME@aamc.org
12/20/2022 End-of-Year Celebrations with Jon Wisco
11/15/2022 Best Practices for Preparing a Successful Grant Proposal for the IAMSE Educational Scholarship and Curriculum Innovation Grants with Amanda Chase, Amber Heck, and Algevis Wrench
11/1/2022 Should I Stay or Should I Go? When Your Next Career Move is Somewhere Else with Katie Huggett
10/18/2022 The Aquifer Sciences Curriculum Database: A Collaborative Development between the IAMSE community and Aquifer with David Harris and Dr. Tracy Fulton
10/4/2022 Sharable Open Education Resources (OER) With ScholarRx with David Harris
9/20/2022 Mitigating Implicit Bias in Medical School Curricula with Jacqueline Powell and Jayne Reuben
9/6/2022 Early Career Award Winner with Jaya Yoda
8/16/22 Toolkits for Medical Science Educators: A Resource for Professional Development and an Opportunity for Scholarship with Nicole Deming, Amber Heck and Jon Wisco
8/2/22 Moving Up in MedEd with Christina DeLucia and Diana Lautenberger
7/19/22 Summer Reading with Chasity O’Malley – Chat Log
6/21/22 IAMSE 2022 Meeting Recap with Kelly Quesnelle
5/3/22 Mentoring in Health Professions Education with Alice Fornari and Darshana Shah
4/19/22 How to Make the Most of Conference Attendance with Heather Christensen
4/5/22 #IAMSE22 Annual Meeting Preview with Maria Sheakley
3/15/22 Third Party Resources with Kelly Quesnelle
3/1/22 Assessment Styles with Jon Wisco
2/15/22 What is the Optimal Timing of Course Evals? with Jon Wisco
2/1/22 Maintaining Professionalism Through Fatigue with Wendy Lackey
1/18/22 Practical Tips for Connecting With Students with Jon Wisco
1/4/22 The Changing Landscape of MedEd with Adi Haramati, Giulia Bonaminio, Frazier Stevenson, Amy Wilson-Delfosse, and Neil Osheroff
12/7/21 IAMSE New Member Meet and Greet with Kelly Quesnelle
11/16/21 Technology in Health Sciences Education During COVID-19: Gains, Losses, and Transformations with Poh Sun Goh and Sol Roberts-Lieb
11/2/21 IAMSE Fellowship and ESME at IAMSE with Adi Haramati, Amber Heck and Amanda Chase
10/19/21 IAMSE Ambassador Program: Global perspectives on medical and science education with Claudio Cortes and Joseph Grannum
10/5/21 Teaching and Incorporating the Health Humanities with Alice Fornari
2/2/21 To teach, or not to teach (to the test), that is the question with Jon Wisco
9/21/21 Open Forum to Discuss Basic Science in the Clinical Years with Kelly Quesnelle
9/7/21 Meaningful, Sustainable Transdisciplinary Collaboration: What would it look like? with Atsusi Hirumi
8/17/21 Opportunities for Health Sciences Education in One Health with Margaret McNulty and Rebecca Lufler
8/3/21 Evaluations of Our Teaching with Wendy Lackey
7/20/21 Incorporating Telehealth Into Basic Science Education with Jon Wisco
7/6/21 Virtual Simulations with Jon Wisco
6/1/21 The State of Medical Educators in Developing Nations with Sylvia Olivares and Smart Mbagwu
5/18/21 Building bridges between health science educators from diverse programs with Jennifer Lamberts, Jayne Reuben, and Jonathan Wisco
5/4/21 Outreach Programs with Kelly Quesnelle
4/20/21 The New Horizon of (Medical) Education with Cafe Hosts
4/6/21 IAMSE 2021 Annual Conference Preview with Mark Hernandez
3/16/21 Career impacts of the COVID Year with Lisa Coplitt
3/2/21 Conducting and Disseminating Medical Education Scholarship
2/16/21 The paradigm shift implications on courses and curricula as a result of moving to pass/fail USMLE Step 1 with Doug Gould
1/19/21 The Basic Sciences and the Medical Humanities: An Integrative Approach with Hedy Wald. Suggested reading and faculty development opportunities discussed during the call.
1/5/21 Best Practices for Mentoring with an Eye and Ear Toward Diversity, Equity, Inclusion, and Justice with Heather Christensen
12/15/20 Unconventional Teaching Methods with Jon Wisco
12/1/20 Resiliency with the IAMSE Cafe hosts
11/17/20 Learning During and From a Crisis: The Student-Led Development of an online COVID-19 Curriculum with Abby Schiff and Katie Shaffer
Links from the conversation during this session can be found below.
11/3/20 Teaching Race and Medicine: Unlearning what we think we know with Staci Leisman
Links from the conversation during this session can be found here.
10/20/20 The future of education programs for residents and medical students with Lourdes Lopez
10/6/20 Technology and Education with Edgar Herrera Bastida
9/15/20 Team-Based Learning in the Virtual Environment with Drs. Raihan Jumat, Irene Lee and Peiyan Wong
9/1/20 Networking 102 – Networking Outside the Box with Kelly Quesnelle
8/27/20 Mentoring to Make a Difference with Katie Huggett – Literature references can be found here.
8/13/20 IAMSE Ambassadors – Pakistan, Australia, and Finland with Di Eley and Yawar Hyatt Khan
7/23/20 The Disappearing Pathology Instructor with Amy Lin and Regina Kreisle
7/9/20 Partnering with medical students to discover educational solutions for on-line learning with Emily Bird
6/25/20 The Future of Medical Education Conferences: What SHOULD it look like? with Bonny Dickinson
6/11/20 Communities of Practice in a Virtual World with James Pickering
5/28/2020 IAMSE Ambassadors – Mexico, China, Caribbean with Raul Barroso, Sateesh Arja and Zhimin Jia
5/21/2020 Faculty Development in the COVID-19 Era with Alice Fornari
5/14/2020 Evolving Anatomical Education during the COVID pandemic: What will this mean for the future of anatomy teaching? with Jon Wisco, Richard Gonzalez and Lane Fortney
5/7/2020 COVID-19 and the New Medical School with Amber Heck and Michael Lee
4/30/2020 MedEd Equity During COVID-19 with Heather Christensen
4/23/2020 IAMSECafe Welcomes Medical Science Educator EIC with Peter de Jong
4/16/2020 Q&A with the IAMSE President with Neil Osheroff
4/14/2020 MedEd Mailbag: Free Resources During COVID-19 with Kelly Quesnelle. Resources discussed and shared during this session can be found below.
4/9/2020 How Re-thinking and Re-designing Anatomy Instruction Into the Online Space Can Lead to Better Classroom and Cadaver Lab Learning Experiences with Jon Wisco
4/7/2020 MedEd Mailbag: Being Productive in Your Own Space with Kelly Quesnelle
4/2/2020 Leading by Example: Practicing Self-care in a Time of Crisis with Adi Haramati
3/31/2020 MedEd Mailbag: The Virtual Teacher with Kelly Quesnelle


Resources for Educators During COVID-19

Harvard Medical School Medical Student COVID-19 Curriculum
One of the greatest difficulties facing everyone nowadays is a lack of clarity about what is going on and what lies ahead. We students especially feel a need to deepen our knowledge of the situation, as we are often viewed as resources by our friends and family. However, it soon became clear how challenging it was to process the wealth of information coming our way. A team of us at Harvard Medical School set out to quickly collate and synthesize accurate information about the pandemic to share with those who do not have the time or resources to research it themselves.
Additional resources include Curriculum for Kids, an article written by the team discussing the curriculum, and an opportunity to give direct feedback to the developers.

AAMC COVID-19 Resource Hub
The AAMC continues to monitor guidance from federal, state, and local health agencies as it relates to the coronavirus (COVID-19). Find information and updates from AAMC on this emerging global health concern.

Acland Anatomy
Acland’s Video Atlas of Human Anatomy contains nearly 330 videos of real human anatomic specimens in their natural colors.

MedEd Portal Virtual Resources
This collection features peer-reviewed teaching resources that can be used for distance learning, including self-directed modules and learning activities that could be converted to virtual interactions. As always, the resources are free to download and free for adaptation to local settings. The collection will be reviewed and updated regularly.

BlueLink Anatomy
From the University of Michigan Medical School

Aquifer
Aquifer is offering free access to 146 Aquifer signature cases, WISE-MD (Surgery), and WISE-OnCall (Readiness for Practice) through June 30, 2020, to all current Aquifer institutional subscribers in response to the COVID-19 outbreak.

Kaplan iHuman
With i-Human Patients, students experience safe, repeatable, fully-graded clinical patient encounters on their devices anywhere, anytime.

Online MedEd
The unprecedented COVID-19 crisis has upended the medical and medical education landscape. Our aim during this difficult and confusing time is to support you with what we do best: concise, high-yield videos to help you get up to speed efficiently and effectively, so you can feel confident with however you’re being called on to adjust.

ScholarRX Bricks
In response to a request for assistance from a partner medical school impacted by COVID-19, ScholarRx has agreed to make its Rx Bricks program available at no cost to M2 students for the remainder of the 2019-20 academic year. This comprehensive, online resource can assist schools in implementing contingency plans necessitated by the COVID-19 outbreak.

Osmosis
You can raise the line by training healthcare workers who don’t have experience treating COVID-19. Encourage healthcare workers you know to complete this free CME course on COVID-19 so they’re prepared to fight the virus.

AnatomyZone
Top-quality anatomy videos, all for free.

Harvard Macy
Crowdsourced List of Online Teaching Resources Collated by the Harvard Macy Institute (@HarvardMacy)

Anatomy Connected

Chronicle of Higher Education

Dartmouth SOM Interactive Rad/Anatomy

Firecracker
We understand some of the unique challenges you are facing due to the COVID-19 pandemic and, as a company, are putting together resources to help you keep up with your courses as well as stay up to date with the latest research and evidence-based practices for addressing this new coronavirus.

LWW Health Library

Bates’ Visual Guide

5 Minute Consult
Primary health care is important to everyone, and now more than ever it’s important that you have access to evidence-based diagnostic and treatment content. To help you with caring for all of your patients, we are offering 30-day free access to 5MinuteConsult.com. Use code 5MC30DayAccess73173 to sign up.

Say Hello to Our Featured Member 2019 Annual Meeting Site Host Rick Vari

Our association is a robust and diverse set of educators, researchers, medical professionals, volunteers, and academics who come from all walks of life and from around the globe. Each month we choose a member and highlight their academic and professional career to see how they are making the best of their membership in IAMSE. This month’s Featured Member is our 2019 annual meeting site host, IAMSE President Rick Vari.



Rick Vari, PhD
Professor & Senior Dean for Academic Affairs
Virginia Tech Carilion School of Medicine
Roanoke, Virginia, USA

Why was the Virginia Tech Carilion School of Medicine the right choice for the 2019 IAMSE meeting?
We are the right choice for the meeting this year because we did a fabulous conference several years ago and we were already in the queue for a future IAMSE meeting. We had some scheduling issues with our original site for 2019 and we were able to step in and fill the void. We have a wonderful hotel site (at the Hotel Roanoke), and the people who came from all across North America for the Collaborating Across Borders V: An American-Canadian Dialogue on Interprofessional Healthcare and Practice, in 2015 really enjoyed it. As a relatively new medical school, we are excited about continuing our growing success in medical education; hosting the IAMSE meeting is a real honor for us.

What opportunities will attendees see in Roanoke that they’ve not seen in years past?
Roanoke is a beautiful city to have a conference. We’ve localized the venue, which is a major goal for IAMSE. Attendees and exhibitors will appreciate the layout of the conference site. We are adjacent to the Roanoke Market Square with restaurants, breweries, and shopping featuring local items. There are just lots of opportunities for networking and entertainment. The program is outstanding with presentations and sessions on current and future challenges facing health sciences educators. International abstract submission is up, so more colleagues from other parts of the world may be attending. Increased student participation will be another highlight. This year, IAMSE is also hosting a Taste of Roanoke Street Fair which will replace the annual gala dinner. IAMSE 2019 is going to be a very easy conference to attend. If you can stay for the Grand Extravaganza on Tuesday afternoon it is going to be very special with a hiking trip to a beautiful location on the Blue Ridge Parkway and a visit to the Ballast Point brewery (East Coast operation) for dinner. 

Can you tell me more about this new event?
We are blocking off the Market Square in downtown Roanoke. We will have tastes of local food, beverages, and music. This is a chance to interact in a casual fun setting with lots of local food and a live band! It’s going to be a lot of fun.

What session or speaker are you most looking forward to this year?
I’m looking forward to, of course, the Board of Directors and Committee Chairs meeting. I’ve enjoyed being president, and interacting with the Board and Committee Chairs in this planning session provides IAMSE with a sense of solid direction. The plenary sessions also look very strong. I’m interested in the Gen Z session (Generation Z: The New Kids on the Block) and How to Use Disruptive Technology to Make Education Better – Not Just Different.

It sounds like there is much to look forward to this year. Anything else you’d like to share?
The local response from the other medical schools in the area in support of the IAMSE meeting in Roanoke has been very strong.  As a new school, this is a tremendous opportunity for us and the other medical schools in the area to get better acquainted.

To learn more about the 2019 IAMSE Annual Meeting, including the plenary speakers, workshops and networking opportunities, or to register, please visit www.IAMSEconference.org.

Reserve your spot before March 15 to ensure the Early Bird Discount!

IAMSE at GRIPE 2019 in New Orleans

The IAMSE booth will be exhibiting at the annual winter meeting of the Group for Research in Pathology Education (GRIPE) in New Orleans, LA on January 24-26, 2019. IAMSE Association Manager Julie Hewett will also be delivering a pre-conference workshop titled, “Using Social Media to Disseminate Your Scholarly Work.” If you plan on attending this meeting, don’t miss this session and do not forget to swing by the IAMSE booth and say hello!

Information on the GRIPE Meeting can be found here. We look forward to seeing you there!

Registration for the 23rd Annual IAMSE Meeting is Now Open!

We are pleased to announce that registration for the 23rd Annual Meeting of IAMSE, to be held June 8-11, 2019 in Roanoke, VA, USA, is now open. At this annual meeting of the International Association of Medical Science Educators (IAMSE) faculty, staff and students from around the world who are interested in medical science education join together in faculty development and networking opportunities. Sessions on curriculum development, assessment and simulation are among the common topics available at the annual meetings.


Featured plenary speakers include Don Cleveland, Claudia Krebs, Craig Lenz and Geoff Talmon.


Additional meeting details and registration can be found at http://www.iamseconference.org.

IAMSE Fall 2023 Webcast Audio Seminar Series – Week 4 Highlights

[The following notes were generated by Douglas McKell MS, MSc and Rebecca Rowe, PhD]

The Fall 2023 IAMSE WAS Seminar Series, “Brains, Bots, and Beyond: Exploring AI’s Impact on Medical Education,” began on September 7, 2023, and concluded on October 5, 2023. Over these five sessions, the series covered topics including the foundational principles of Artificial Intelligence and Machine Learning, their multiple applications in health science education, and their use in teaching and learning essential biomedical science content.

The fourth session in this series is titled Artificial Intelligence (AI) Tools for Medical Educators and is presented by Drs. Dina Kurzweil, Elizabeth Steinbach, Vincent Capaldi, Joshua Duncan, and Mr. Sean Baker from the Uniformed Services University of the Health Sciences (USUHS). Dr. Kurzweil is the Director of the Education & Technology Innovation (ETI) Support Office and an Associate Professor of Medicine. She is responsible for the strategic direction of the ETI, including instructional and educational technology support for the faculty. Dr. Steinbach is the Academic Writing Specialist in the newly established writing center at USUHS. She has 20 years of experience teaching and facilitating the learning of academic writing. LTC (P) Vincent F. Capaldi, II, MD is the Vice Chair of Psychiatry (Research) at USUHS and Senior Medical Scientist at the Center for Military Psychiatry and Neuroscience at the Walter Reed Army Institute of Research in Silver Spring, MD. Dr. Capaldi is also the program director of the National Capital Consortium combined Internal Medicine and Psychiatry residency training program and chair of the Biomedical Ethics Committee at Walter Reed National Military Medical Center. Dr. Joshua Duncan is the Assistant Dean for Assessment. He earned his medical degree and MPH from USUHS and is board-certified in pediatrics, preventive medicine, and clinical informatics. Mr. Sean Baker is the Chief Technology and Senior Information Security Officer; he leads a team of 80 technologists supporting the IT needs of USUHS and the entire military health system.

Dr. Kurzweil reviewed the goals of this webinar presentation and the learning outcomes.

  • Understand AI terminology
  • Identify AI teaching opportunities
  • Review citation options for AI tool use
  • Explain course policies using AI-generative tool(s)
  • Describe two accountability measures for using AI systems
  • List several impacts of using AI for assessment

Dr. Duncan briefly described AI as the intersection of Big Data, Computer Science, and Statistics, and defined AI as a computer performing a task that would typically require human cognition. A subset of AI is Machine Learning (ML), in which machines are programmed with algorithms to perform some of these tasks; the learning can be supervised or unsupervised by human interaction. Supervised learning underlies applications such as Computer Vision and Natural Language Processing, in contrast to Deep Learning, which can be unsupervised and mimics human cognition.
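
The supervised/unsupervised distinction described above can be made concrete with a toy sketch. Everything below is invented for illustration (the data points, labels, and function names are not from the presentation): a nearest-neighbor classifier learns from labeled examples, while a simple 2-means clustering finds structure in unlabeled data on its own.

```python
# Toy illustration of supervised vs. unsupervised machine learning.
# All data and names here are invented for illustration.
import math

def nearest_label(point, labeled_points):
    """Supervised: predict the label of the closest labeled example (1-NN)."""
    _, label = min(labeled_points, key=lambda pl: math.dist(point, pl[0]))
    return label

def two_means(points, iters=10):
    """Unsupervised: split unlabeled points into two clusters (2-means)."""
    c0, c1 = points[0], points[-1]          # naive initial centroids
    for _ in range(iters):
        g0 = [p for p in points if math.dist(p, c0) <= math.dist(p, c1)]
        g1 = [p for p in points if math.dist(p, c0) > math.dist(p, c1)]
        c0 = tuple(sum(x) / len(g0) for x in zip(*g0))
        c1 = tuple(sum(x) / len(g1) for x in zip(*g1))
    return g0, g1

# Supervised: the "right answers" (labels) are given during training.
labeled = [((1.0, 1.0), "benign"), ((9.0, 9.0), "malignant")]
print(nearest_label((2.0, 1.5), labeled))   # prints "benign"

# Unsupervised: structure is discovered without any labels.
unlabeled = [(1.0, 1.0), (1.5, 2.0), (8.0, 9.0), (9.0, 8.5)]
clusters = two_means(unlabeled)
print([len(c) for c in clusters])           # prints [2, 2]
```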

Dr. Duncan emphasized that understanding and using AI is becoming a required competency in health care, medical education, and research. He provided several examples, such as using AI for large-database statistical analysis, keyword database searching, clinical algorithms in clinical decision support, and support of clinical thinking and dialogue. One specific example he discussed, with references, was using Natural Language Processing in medical education assessment across three categories: Processing Trainee Evaluations, Assessing Supervisory Evaluation Techniques, and Assessing Gender Bias.

Dr. Duncan then presented a demonstration of ChatGPT to illustrate its many uses for medical educators. He used ChatGPT to generate examples in the following six topic areas: curriculum development, assessment creation, teaching, teaching methodology, research ideas, and adaptive teaching.

Using the ChatGPT platform, he provided a prompt for each of the above areas. For curriculum development, he asked ChatGPT to create a 6-week course on medical ethics that included lecture topics, readings, and assessments. In a matter of seconds, the 6-week course was designed. He pointed out that while the course topics and sequence generated by ChatGPT may be only a partial version of the course, they provide the user with a great starting point for creating such a course from scratch. Dr. Duncan emphasized that it is essential to be cautious about all references ChatGPT provides because AI models, as text predictors, can hallucinate, meaning that if they do not have access to real answers, they will make some up. The AI user needs to verify all content and references to ensure they are valid and legitimate. He then demonstrated assessment creation using a detailed ChatGPT prompt to create five NBME-style multiple-choice questions, with answer explanations, on cardiovascular physiology, suitable for first-year medical student assessment. As in the first demonstration, the five questions were generated with five possible answers each; the correct answer was indicated, and an explanation was given for why that answer was the most accurate choice. Dr. Duncan stated that there is an art to asking good questions (or prompts) so that the generated output is close to what you were looking for or expecting. The prompts he used during his demo were one-sentence prompts and can be made specific, for example, asking for effective teaching methodologies for imparting clinical skills to medical students. He concluded his presentation by prompting ChatGPT for three under-explored research topics in medical education and why they are important. Dr. Duncan stated that AI can be an important member of the medical education team by providing the user with a draft that is 80% complete in answer to their prompts.
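
The "art" of asking good questions that Dr. Duncan described, giving the model a role, a task, a format, and an audience rather than a vague one-liner, can be sketched as a small prompt-building helper. The function name and fields below are hypothetical illustrations, not part of any ChatGPT API:

```python
# A minimal sketch of structured prompt-building: specific prompts
# (role, task, format, audience) yield more usable output than vague
# one-liners. The function and its fields are illustrative only.

def build_mcq_prompt(topic, n_questions, audience, style="NBME"):
    """Assemble a detailed prompt for generating multiple-choice questions."""
    return (
        f"You are an experienced medical educator.\n"
        f"Write {n_questions} {style}-style multiple-choice questions "
        f"on {topic}, suitable for {audience}.\n"
        "For each question: give five answer options, mark the correct "
        "answer, and explain why it is the most accurate choice.\n"
        "Do not cite references you cannot verify."
    )

# The kind of prompt demonstrated in the session:
prompt = build_mcq_prompt("cardiovascular physiology", 5,
                          "first-year medical students")
print(prompt)
```

The resulting text would then be pasted into (or sent to) a chat model; the structure makes the expected format explicit, so the output needs less rework.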

Mr. Sean Baker, in charge of IT security at USUHS, discussed the need for careful compliance when using all AI tools.  He stressed the importance of not entering information not already cleared for public release, such as personal data and information, controlled unclassified information, hiring, performance management or contract data, student data, evaluations, and Personal Identification Information (PII).  Mr. Baker then highlighted the need to be aware of the policies at the user’s institution and provided examples of how they use Generative AI at USUHS. He compared using AI to using Social Media in that you do not want to post anything on AI that you would not post on Social Media.

Dr. Kurzweil then presented on higher education’s need to think critically about user agreements and how we present these agreements to our students and faculty. These policies must be discussed and decided at every level: federal, state, university, college, and department, down to individual courses and classrooms. She emphasized that AI will be widely used, and that its use will depend on individual institutions’ decisions, especially when it comes to student use in courses and faculty use in the classroom. She pointed out that it is important to clearly state where AI cannot be used, for example by requiring all course assignments to be exclusively the student’s work and specifying that the student cannot use AI applications like Spinbot, DALL-E, or ChatGPT. She also provided examples of when AI use is permitted, such as when an assignment requires a topic or content search strategy or asks for references for additional information.

Dr. Kurzweil discussed a 2023 Educause article by McCormack1, describing use cases clustered around four common work areas to incorporate Generative AI in higher education. They are:

  • Dreaming:  Brainstorming, summarizing information, research, and asking questions.
  • Drudgery: Sending communications, filling out reports, deciding on materials, and gathering information to help develop syllabus reading.
  • Design: Using Large Language Models to create presentations, course materials, and exams.
  • Development: Creating detailed project plans, drafting institutional policies and strategic plans, or producing images and music.

Many AI tools are currently available, and you, as the user, need to decide how best to use them. It is essential to consider how these tools can be used in teaching and what we must do to prepare our learners and faculty to develop their digital fluency. She cautioned that these tools can hallucinate, i.e., make up sources, so you need to check your work. You need to check all citations to be sure they are real and that the information is correct. Dr. Kurzweil emphasized that nothing comes out of these tools that she would take at face value without first verifying the information source.

Dr. Kurzweil then described opportunities to use AI tools to help you teach, including:

  • Altered active real learning
  • Independent thinking and creativity
  • Review of data and articles quickly
  • Overcoming writer’s block
  • Research and Analysis skills
  • Real-time response to questions
  • Tutoring and Practice
  • Creation of Case Studies

She then described several ways to create Curriculum Integration Opportunities with AI in the classroom, including:

  • AI formalized curriculum
  • Introduction to AI concepts
  • Computer literacy and fluency
  • Data Science
  • Hands-on AI tool practice
  • Medical Decision-Making with AI
  • Professional Identity Formation
  • Ethical Decision-Making
  • Computer Science Theory

Dr. Kurzweil presented the application of assessment with AI using seven examples, including:

  • Project-based learning
  • Expectations of draft completeness
  • Rubrics created and applied to student work
  • Annotated references
  • Reflections
  • Using pen and paper in class for initial (draft) work development
  • Testing centers

She then linked these examples to specific assessment practices impacted by AI, including:

  • Requiring students to work collaboratively
  • Scaffolding assignments
  • Becoming familiar with students’ writing style
  • Making assignments personal, timely, and specific
  • Creating assignments that require higher-level cognitive skills
  • Authentic assessments with Observation and Simulation experiences

Dr. Kurzweil then listed six ways that AI can be incorporated into the medical education curriculum:

  1. Provide medical students with a basic understanding of what AI is and how it works.
  2. Introduce medical students to the principles of Data Science.
  3. Introduce medical students to the use of AI in radiology and pathology.
  4. Teach medical students how AI can be used to analyze patient data and provide treatment recommendations.
  5. Introduce medical students to ethical considerations of AI, such as privacy, bias, and transparency.
  6. Provide medical students with an opportunity to apply their AI foundational knowledge in real-life clinical scenarios.

She then turned the session over to Dr. Steinbach to discuss plagiarism.

Dr. Steinbach focused on the need to be aware of plagiarism involving AI, especially when students use ChatGPT to complete assignments. Many AI detectors use a perplexity score, which measures the randomness of text, and a burstiness score, which measures the variation in perplexity, to differentiate between text composed by humans and text written by AI. She noted that in a paper published in 2023, the software GPTZero correctly classified 99% of human-written articles and 85% of AI-generated content. Educators are concerned that students may use AI such as ChatGPT to generate text for writing assignments without correctly citing the source of the generated text, which could give them an advantage over students who are not using AI to complete their assignments. Dr. Steinbach stated that writing assignments focused on students’ reflections or interpretations, if generated by ChatGPT, could pass without being identified by AI detectors. The same can be said for scientific papers and abstracts, of which the software was able to identify only 68% as human-written. The way to help avoid these issues is to be very clear about the policies and expectations in your course syllabus.
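
The two detector signals described above, perplexity and burstiness, can be illustrated with a toy unigram language model. Real detectors such as GPTZero use large neural language models; the tiny corpus, the smoothing scheme, and all numbers below are invented purely for illustration.

```python
# Toy versions of two AI-detector signals: "perplexity" (how surprising
# the text is under a language model) and "burstiness" (how much
# perplexity varies across sentences). Illustrative only.
import math
from collections import Counter

def unigram_perplexity(text, counts, total):
    """Perplexity of `text` under a unigram model with add-one smoothing."""
    words = text.lower().split()
    vocab = len(counts) + 1
    log_p = sum(math.log((counts[w] + 1) / (total + vocab)) for w in words)
    return math.exp(-log_p / len(words))

def burstiness(sentences, counts, total):
    """Standard deviation of per-sentence perplexity."""
    ppls = [unigram_perplexity(s, counts, total) for s in sentences]
    mean = sum(ppls) / len(ppls)
    return math.sqrt(sum((p - mean) ** 2 for p in ppls) / len(ppls))

corpus = "the heart pumps blood the blood carries oxygen".split()
counts, total = Counter(corpus), len(corpus)

# Text made of common corpus words is less surprising (lower perplexity)
# than text full of unseen words.
low = unigram_perplexity("the blood", counts, total)
high = unigram_perplexity("ventricular tachycardia", counts, total)
print(low < high)   # prints True
```

A detector's intuition is that human writing tends to show higher and more variable perplexity ("burstier" text) than the smoothly predictable output of a language model.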

If you allow your students to use Generative AI in your course assignments, you must be clear on how you want them to cite AI-generated information. Dr. Steinbach focused on the two main style guides, AMA and APA, and how each handles citing text generated through AI. First, AI tools cannot be listed as authors because they are not human and cannot answer questions about the work that was produced. Under both style guides, you can describe in the methods section how AI was used; AMA also allows noting it in the acknowledgments section, and APA in the introduction. The APA style guide requires the author to include the prompt and identify the text generated by the AI tool. The AMA style guide is not yet clear in its guidance, nor does it provide any advice on in-text citations.

The last speaker, Dr. Capaldi, emphasized that there is no perfect AI detector: as large language models develop and become more sophisticated, AI detectors tend to lag behind these improvements. The best a detector can do is provide the user with a probability score that the text was AI-generated. When used as an AI detector, Watson was able to identify as AI-generated only about 60% of what ChatGPT produced. Dr. Capaldi stated that it is harder to detect text that has been edited, combined, or paraphrased. He also noted that probability scores are not perfect either; there can be false positives in determining whether or not text was generated using AI tools. He asked the audience to be careful when using AI detectors in the academic setting, because they are not entirely accurate or foolproof, and probability scores are not absolute determinations that text is or is not AI-generated.
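
Dr. Capaldi's caution can be made concrete with Bayes' theorem. Using the GPTZero figures quoted earlier (85% of AI-generated content detected; 99% of human writing correctly classified, i.e. a 1% false-positive rate) together with an assumed base rate of AI-written submissions, a "flag" is far from an absolute determination. The 10% base rate below is an invented assumption for illustration:

```python
# A worked Bayes example of why a detector's probability score is not an
# absolute determination. Detection (85%) and false-positive (1%) rates
# follow the GPTZero figures cited earlier; the 10% base rate of
# AI-written submissions is an invented assumption.

def posterior_ai(true_positive_rate, false_positive_rate, base_rate):
    """P(text is AI-written | detector flags it), via Bayes' theorem."""
    flagged_ai = true_positive_rate * base_rate
    flagged_human = false_positive_rate * (1 - base_rate)
    return flagged_ai / (flagged_ai + flagged_human)

p = posterior_ai(0.85, 0.01, 0.10)
print(round(p, 3))   # prints 0.904
```

Even at this roughly 90% posterior, about one in ten flagged submissions would be human-written under these assumptions, and the false-positive share grows as the base rate falls, which is exactly why a score should not be treated as proof.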

Dr. Kurzweil ended the session by stating that AI in education holds immense promise, but it also comes with responsibility. She asked that we commit to using AI to empower our learners, faculty, and educational institutions, as a tool and not as a replacement for educators. AI should be viewed as a partner that works with educators to make education more efficient and effective. She stated we need to embrace innovation and digital fluency while upholding the values of equity, privacy, and ethics in education.

References

  1. McCormack M. EDUCAUSE QuickPoll Results: Adopting and Adapting to Generative AI in Higher Ed Tech. EDUCAUSE Research Notes, 2023. https://er.educause.edu/articles/2023/4/educause-quickpoll-results-adopting-and-adapting-to-generative-ai-in-higher-ed-tech



As always, IAMSE Student Members can register for the series for FREE!

IAMSE Fall 2023 Webcast Audio Seminar Series – Week 3 Highlights

[The following notes were generated by Douglas McKell MS, MSc and Rebecca Rowe, PhD]

The Fall 2023 IAMSE WAS Seminar Series, “Brains, Bots, and Beyond: Exploring AI’s Impact on Medical Education,” began on September 7, 2023, and concluded on October 5, 2023. Over these five sessions, the series covered topics ranging from the basics of AI to its use in teaching and learning essential biomedical science content.

The co-presenters for the third session were Dr. Michael Paul Cary Jr. and Ms. Sophia Bessias. Dr. Cary is an Associate Professor and Elizabeth C. Clipp Term Chair of Nursing at the Duke University School of Nursing. Ms. Bessias is the evaluation lead for the Algorithm-Based Clinical Decision Support (ABCDS) Oversight program. She provides operational support and peer review for clinical decision support software proposed for use within the Duke University Health System (DUHS). Ms. Bessias holds master’s degrees in Analytics and Public Health from NC State University and the University of Copenhagen.

Dr. Cary listed four objectives of the session:

  1. Establishing Context and Recognizing Challenges
  2. Operationalizing Bias Mitigation through AI Governance
  3. Navigating the Terrain of Large Language Models (LLMs)
  4. Equipping Educators for AI-Driven Healthcare Technologies

The session was divided into four sections, each discussing one of the above Session Objectives.

Objective 1: Establishing Context and Recognizing Challenges

Dr. Cary began by sharing the context of the promises and perils of AI in healthcare. AI promises to revolutionize healthcare by:

  • Improving patient care and the clinician experience
  • Reducing clinician burnout
  • Improving operational efficiency
  • Reducing costs

He then highlighted several potential perils that need to be taken into consideration, such as:

  • Non-adoption or over-reliance on AI
  • No impact on outcomes
  • Technical malfunction
  • Violation of government regulations
  • Non-actionable or biased recommendations that could exacerbate health disparities

Dr. Cary posed a fundamental question: “Why is identity bias in algorithms so important?” He discussed a 2019 study by Obermeyer et al.1 demonstrating that a biased algorithm systematically assigned the same risk score to White and Black patients even though the Black patients had 26.3% more chronic disease, which systematically excluded Black patients from accessing needed care management services. The reason was that the algorithm assigned risk scores based on past healthcare spending, and Black patients tend to have lower spending than White patients for a given level of health. The error resulted from the developers using an incorrect label to predict the outcome, known as label bias. Once the algorithm was corrected, the percentage of Black patients automatically enrolled in the care management program rose from 17.7% to 45.5%.1
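The label-bias mechanism can be sketched with a toy example (the numbers below are hypothetical, not the study's data): if an algorithm ranks patients by predicted spending rather than by illness burden, a group that spends less at the same illness level is systematically under-enrolled.

```python
# Toy illustration of label bias (hypothetical data, not Obermeyer et al.'s).
# Each patient: (group, number of chronic conditions, past spending in $).
patients = [
    ("A", 4, 9000),  # group A spends more at a given illness level
    ("A", 2, 5000),
    ("B", 5, 4000),  # group B is sicker but spends less
    ("B", 3, 3000),
]

def enroll_top_half(patients, key):
    """Enroll the top half of patients ranked (descending) by the given key."""
    ranked = sorted(patients, key=key, reverse=True)
    return ranked[: len(ranked) // 2]

# Biased proxy label: rank by past spending.
by_spending = enroll_top_half(patients, key=lambda p: p[2])
# Corrected label: rank by illness burden (chronic conditions).
by_illness = enroll_top_half(patients, key=lambda p: p[1])

print([p[0] for p in by_spending])  # ['A', 'A']: the spending proxy enrolls only group A
print([p[0] for p in by_illness])   # ['B', 'A']: the illness label enrolls the sickest patients
```

Swapping the proxy label (spending) for the intended target (illness) changes who gets enrolled, which is exactly the correction the study describes.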

Dr. Cary reviewed four evolving elements of AI government regulation. These include the 2022 FDA final guidance on Software as a Medical Device, which will regulate software and medical devices, including AI-powered devices. There is also the AI Bill of Rights, which aims to protect individuals from the potential harms of AI, such as label bias and other forms of bias and discrimination. AI regulation is also emerging at the state level, with Attorneys General beginning to regulate AI in their states. In 2022, the Attorney General of California sent a letter to the CEOs of all hospitals in the state asking for an account of the algorithms used in their hospitals, the potential biases, and their plans to mitigate those biases. Finally, the Department of Health and Human Services (DHHS) announced a proposed rule under Section 1557 of the Patient Protection and Affordable Care Act (PPACA) stating that covered entities (health care systems and providers) must not discriminate against any individual through the use of clinical algorithms in decision-making and must develop a plan to mitigate that possibility. Dr. Cary stated that while this is a huge step forward, the proposed rule should go further in specifying what covered entities need to do to reduce bias. Still, it did solicit comments on best practices and strategies for identifying bias and minimizing any discrimination resulting from clinical algorithms.

Dr. Cary and his team determined that the covered entities referenced in Section 1557 of the PPACA would need to know how to examine their clinical algorithms to ensure compliance with the proposed rule. They conducted a scoping review of 109 articles to identify strategies for mitigating bias in clinical algorithms, with a focus on racial and ethnic biases. They summarized a large number of mitigation approaches to inform health systems how to reduce bias arising from the use of algorithms in decision-making. While Dr. Cary outlined the literature search, study selection, and data extraction, he could not show or discuss the results of the review before its official publication. He noted that the scoping review would be published in the October 2023 issue of Health Affairs at www.healthaffairs.org.

Dr. Cary then discussed some of the most pressing challenges facing the use of AI in healthcare. These include the lack of an “equity lens,” which results when AI algorithms are trained on biased or unrepresentative data sets. The result of this oversight is to exacerbate existing healthcare disparities, resulting in the AI decision-making system not providing equitable care.

The second challenge is the need for AI education and training of healthcare professionals and health professional educators. Very few of us have the necessary AI training, which leaves a gap in the knowledge and skills required to integrate AI successfully into healthcare. Healthcare professionals consequently struggle to understand the capabilities and limitations of AI tools, leading to a lack of trust, underuse, and improper use. Lastly, there is little to no governance of the design or use of data science and AI tools, which could lead to ethical and privacy concerns.

Objective 2: Operationalizing AI Governance Principles

Ms. Bessias began her presentation by sharing how Duke AI Health and the Duke Health System are attempting to overcome some of these challenges. In 2021, the Dean, Chancellor, and Board of Trustees charged the Duke Health Care System leadership to form a governance framework for any tool that could be used in patient care, specifically any algorithm that could affect patient care directly or indirectly. The outcome of this charge was the formation of the Algorithm-Based Clinical Decision Support (ABCDS) Oversight Committee. The ABCDS is a “people, process, technology” effort that provides governance, evaluation, and monitoring of all algorithms proposed for clinical care and operations at Duke Health. The committee comprises leaders from the health system, the school of medicine, clinical practitioners, regulatory affairs and ethics experts, equity experts, biostatisticians, and data scientists. It takes all of these perspectives working jointly to adequately assess the risks and benefits of using algorithms in health care.

The mission of the ABCDS Oversight Committee is to “guide algorithmic tools through their lifecycle by providing governance, evaluation, and monitoring.” The ABCDS has two core functions. The first is registering all electronic algorithms that could impact patient care at Duke Health. The second is evaluating these algorithms as high, medium, or low risk. High-risk algorithms include all data-derived decision-making tools, sometimes home-grown and sometimes from vendors; in either case, the process investigates how they were developed and how they are proposed to be used. Medium-risk algorithms are knowledge-based, clinical consensus-based algorithms built from clinicians sharing their expertise to create a rubric. Low-risk algorithms include medical standards of care that are well integrated into clinical practice and frequently endorsed by relevant clinical societies. The specific type of risk evaluation varies depending on the details of any given use case.

Ms. Bessias then took us through a detailed review of the ABCDS Evaluation Framework, which consists of stages an algorithm must pass through, each with criteria that must be met before proceeding to the next. It is based on a software development life cycle. There are four stages in the evaluation process:

  • Stage 1: Model development
  • Stage 2: Silent evaluation
  • Stage 3: Effectiveness evaluation
  • Stage 4: General deployment.

Each of these stages is separated by a formal gate review that evaluates the stage against a series of quality and ethical principles, including transparency and accountability, clinical value and safety, fairness and equity, usability, reliability and adoption, and regulatory compliance. The intention is to ensure that when AI algorithms are deployed, patients see the maximum benefit while any unintended harm is limited. At each gate, these quality and ethical principles are translated into specific evaluation criteria and requirements.

Duke AI Health’s goal is to anticipate, prevent, and mitigate algorithmic harms. In 2023, they introduced a new bias mitigation tool to help development teams move from a reactive mode to a more proactive, anticipatory way of thinking about bias. One of the most critical aspects of their process is linking the algorithm with its implementation: ABCDS tool = Algorithm(s) + Implementation.

Bias can be introduced anywhere in the life cycle of an algorithm and needs to be considered at each stage. To better understand this, Duke AI Health drew on a 2021 publication by Suresh and Guttag,2 a framework for understanding the sources of harm throughout the machine learning life cycle that illustrates how bias can be introduced. The seven types of bias are societal (historical), label, aggregation, learning, representation, evaluation, and human use, and Duke uses a template to help people identify and address each type. Historical, representation, and label biases are introduced during data generation. Ms. Bessias discussed three of these biases, societal (training data shaped by present and historical inequities and their fundamental causes), label (use of a biased proxy target variable in place of the ideal prediction target), and human use (inconsistent user response to algorithm outputs for different subgroups), and gave an example of each, along with ways to address and mitigate them.

Objective 3: Navigating the Terrain of Large Language Models (LLMs)

Everyone is thinking about how to navigate the terrain of generative AI in health care, especially large language models. Ms. Bessias addressed how some of these tools and frameworks can be applied to LLMs. Proposed applications of generative AI in healthcare range from low-risk to very high-risk: generating billing information, drafting administrative communications, automating clinical notes, drafting EHR inbox responses, providing direct medical advice, mental health support, and more. There are limitations and ethical considerations as well. For example, LLMs are trained to generate plausible results that may not be factual or accurate. Explainability (how the algorithm produces an output) and transparency (accessible communication about the sources of data creating outputs) are the second major consideration. This leads to an ethical question: what happens when an algorithm provides misleading or incorrect information? What options are available to address algorithmic harm, and who has recourse? Another important question concerns access versus impact when considering equity: how are the risks and benefits of generative AI distributed in the population? These considerations were discussed using automated clinical notes as the example application. Ms. Bessias stated there are many questions and few answers, but these are the things that must be considered as healthcare moves toward deploying generative AI technologies. To end this section, Ms. Bessias shared a reflection from Dr. Michael Pencina, chief data scientist at Duke Health and Vice Dean for Data Sciences at Duke University School of Medicine, from an op-ed he wrote on how to handle generative AI:

“Ensure that AI technology serves humans rather than taking over their responsibilities or replacing them. No matter how good an AI is, at some level, humans must be in charge.”

Objective 4: Equipping Educators for AI-Driven Healthcare Technologies

Dr. Cary then discussed the fourth webinar objective, the competencies needed by healthcare professionals and healthcare educators, as published by Russell et al. in 2023.3 The data for this publication were collected by interviewing 15 experts across the country, who identified six competency domains:

  • Basic Knowledge of AI – factors that influence the quality of data
  • Social and Ethical Implications of AI – impact on Justice, Equity, and Ethics
  • Workflow Analysis for AI-based Tools – impact on Workflow 
  • AI-enhanced Clinical Encounters – Safety, Accuracy of AI tools
  • Evidence-based Evaluation of AI-based Tools – Analyze and adapt to changing roles
  • Practice-Based Learning and Improvement Regarding AI-based Tools

By developing these competencies, healthcare professionals can ensure that AI Tools are used to improve the quality and safety of patient care.

For the past year, Dr. Cary and Duke University have partnered with North Carolina Central University, a historically Black university with a deep understanding of the challenges faced by underrepresented and underserved communities. Through this partnership, they developed a proposed set of competencies for identifying and mitigating bias in clinical algorithms.

  1. Trainees should be able to explain what AI/ML algorithms are in the context of healthcare.
  2. Trainees should be able to explain how AI governance and legal frameworks can impact equity.
  3. Trainees will learn ways of detecting and mitigating bias in AI algorithms across the life cycle of these algorithms.

Dr. Cary ended the session by presenting the audience with several training opportunities and resources offered by Duke University. These include short courses and workshops, formal programs, and virtual seminar series offered in the fall and spring semesters, open to anyone worldwide. In March 2024, Dr. Cary will present at the first-ever Duke University Symposium on Algorithmic Equity and Fairness in Health.

Lastly, Dr. Cary invited all webinar members to join Duke University in their commitment to advancing health equity and promoting responsible AI through a Call to Action for Transforming Healthcare Together.

References:

  1. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019;366(6464):447-453.
  2. Suresh H, Guttag J. A framework for understanding sources of harm throughout the machine learning life cycle. In: Equity and Access in Algorithms, Mechanisms, and Optimization. 2021:1-9.
  3. Russell RG, et al. Competencies for the use of artificial intelligence-based tools by health care professionals. Acad Med. 2023;98(3):348-356.



As always, IAMSE Student Members can register for the series for FREE!

Free Workshop for Students: New Educator and Scholar Training (NEST)

The International Association of Medical Science Educators (IAMSE) invites your students to join a free Professional Development Workshop for Students. Hosted by IAMSE and sponsored by ScholarRx, this highly interactive workshop will provide student participants with an introductory, hands-on experience in applying Kern’s Six-Step model to design a complete education activity with appropriate pedagogic strategies. Students will also explore models of converting medical education design and development into scholarship.

Session: New Educator and Scholar Training
Facilitators: Colleen Croniger, Amber Heck, Tao Le and Elizabeth Schlegel
Date and Time: Saturday, October 7, 2023 from 9:00 AM – 12:00 PM (EDT)
Cost: FREE for students!

After participating in this session, student attendees should be able to:

  • Describe a framework for medical education professional development
  • Discuss and apply principles and best practices for curriculum design, pedagogic strategies, and educational scholarship
  • Identify and synthesize themes that integrate across major domains of medical education professional development.

All students are welcome to attend this free event. If you are not already an IAMSE member, we encourage you to join by clicking here. Membership is not required to register. Non-members will need to present proof of enrollment before the event date. If you have any questions please reach out to support@iamse.org.

Don’t Miss the IAMSE #VirtualForum23 This October!

We hope you’ve made plans to join medical educators and students from around the world for the second annual IAMSE Virtual Forum! Join IAMSE October 18-20, 2023 as we host lightning talks, ignite talks, and more! This year’s theme is:

Should It Stay or Should It Go?
Changing Health Education for Changing Times

At this event, we will discuss what works and what does not serve educators and students alike when it comes to curriculum reform – what should stay and what should go. Topics will cover how curriculum reform impacts students, curriculum, and teaching as well as Artificial Intelligence in Health Sciences Education.

Meet the Ignite Speakers

IVF23 Ignite Group
From left: Kimara Ellefson, Holly Gooding and Neil Mehta

Calibrating Our Compass: Flourishing as the North Star for Charting the Way Forward
Wednesday, October 18 from 10:15 AM – 11:15 AM EDT
Presented by Kimara Ellefson, Kern National Network

How Are We Going to Get to The Moon? Developing Operating Principles for Effective Curriculum Change
Thursday, October 19 from 10:15 AM – 11:15 AM EDT
Presented by Holly Gooding, Emory University School of Medicine

Teaching in the Age of Online Resources: Designing Lesson Plans to Enhance the Value of In-Person Classroom Learning
Friday, October 20 from 12:00 PM – 1:00 PM EDT
Presented by Neil Mehta, Cleveland Clinic Lerner College of Medicine of CWRU


View the Lightning Talk Abstracts


Wednesday, October 18
12:00pm – 1:00pm Eastern
Lightning Talks Room 1: Mental Health
Lightning Talks Room 2: Student Success
Lightning Talks Room 3: Student Success
Lightning Talks Room 4: Anatomy
Lightning Talks Room 5: Social Determinants of Health

Thursday, October 19
12:00pm – 1:00pm Eastern
Lightning Talks Room 1: AI
Lightning Talks Room 2: Assessment
Lightning Talks Room 3: Interprofessional Education
Lightning Talks Room 4: Anatomy
Lightning Talks Room 5: Courses

Friday, October 20
10:15am – 11:15am Eastern
Lightning Talks Room 1: AI
Lightning Talks Room 2: Professional Development
Lightning Talks Room 3: Social Determinants of Health
Lightning Talks Room 4: Research
Lightning Talks Room 5: Other


If you have any questions, comments, or concerns, please let us know at support@iamse.org. Additional forum details and registration can be found at www.iamseforum.org.

We’re looking forward to seeing you in October!

Last Call to Submit IAMSE 2025 Manual Proposals

Due October 1, 2023

Don’t miss your chance to submit proposals for contributions to the IAMSE Manuals book series!

The IAMSE Manuals series was established to disseminate current developments and best evidence-based practices in healthcare education, offering those who teach in healthcare the most current information to succeed in their educational roles. The Manuals offer practical “how-to-guides” on a variety of topics relevant to teaching and learning in the healthcare profession. The aim is to improve the quality of educational activities that include, but are not limited to: teaching, assessment, mentoring, advising, coaching, curriculum development, leadership and administration, and scholarship in healthcare education, and to promote greater interest in health professions education. They are compact volumes of 50 to 175 pages that address any number of practical challenges or opportunities facing medical educators. The manuals are published by Springer; online versions are offered to IAMSE members at a reduced price.

We welcome proposal submissions on topics relevant to IAMSE’s mission and encourage multi-institutional, international, and interprofessional contributions. Previously published manuals can be found by clicking here. Currently, manuals on the topics of feedback, problem-based learning, professionalism, online education, and leveraging student thinking are already being developed for publication in 2023 and 2024.

To submit your proposal, please click here. The submission deadline is October 1, 2023.

Each proposal will be evaluated by the IAMSE Manuals Editorial Board using the criteria specified above. The Editorial Board will then discuss the proposals and select two to three for publication. Selections will be based on how well the proposals match the above criteria. We expect publication decisions to be made by December 2023 and anticipate that selected manuals will be published during the second half of 2025.

Read here for the full call and submission guidelines.

If you have any questions about submission or the Manuals series please reach out to support@iamse.org.

We look forward to your submissions.

We Hope to See You Exhibit at #IAMSE24 in Minneapolis, MN, USA!

We are pleased to extend your company an invitation to be an exhibitor at the International Association of Medical Science Educators (IAMSE) Annual Conference to be held on June 15-18, 2024, at the Minneapolis Hilton!

At the annual IAMSE conference, faculty, staff, and students from around the world who are interested in medical science education come together in faculty development and networking opportunities. Sessions on curriculum development, assessment and simulation are among the common topics available at the annual conference.

BACK FOR #IAMSE24
Executive Level Sponsorships

Only our Executive sponsors will enjoy a 30-minute networking session with attendees, including a short platform presentation, scheduled during the conference program. This networking session is unavailable at other sponsorship levels. These exclusive spots are limited, so keep an eye on your email for registration in early January 2024. Early registration applications, including any payment you may send, will not be processed until registration opens.

We are currently finalizing the details of our exhibitor prospectus, so keep an eye on your email for more info. Registration will be open to both attendees and exhibitors in early January. 

Thank you for supporting IAMSE and we look forward to seeing you in Minneapolis!

IAMSE #VirtualForum23 Lightning Talk Schedule

The IAMSE 2023 Virtual Forum is right around the corner! As attendees are considering the schedule of live presentations throughout the three-day event, we encourage you to consider the 45 Lightning Talk presentations, to be held in groups throughout the event.

2023 Lightning Talk Schedule

Wednesday, October 18
12:00pm – 1:00pm Eastern
Lightning Talks Room 1: Mental Health
Lightning Talks Room 2: Student Success
Lightning Talks Room 3: Student Success
Lightning Talks Room 4: Anatomy
Lightning Talks Room 5: Social Determinants of Health

Thursday, October 19
12:00pm – 1:00pm Eastern
Lightning Talks Room 1: AI
Lightning Talks Room 2: Assessment
Lightning Talks Room 3: Interprofessional Education
Lightning Talks Room 4: Anatomy
Lightning Talks Room 5: Courses

Friday, October 20
10:15am – 11:15am Eastern
Lightning Talks Room 1: AI
Lightning Talks Room 2: Professional Development
Lightning Talks Room 3: Social Determinants of Health
Lightning Talks Room 4: Research
Lightning Talks Room 5: Other

View all the 2023 Lightning Talk abstracts

If you have any questions, comments, or concerns, please let us know at support@iamse.org. Additional forum details and registration can be found at www.iamseforum.org.

We’re looking forward to seeing you in October!

IAMSE Fall 2023 Webcast Audio Seminar Series – Week 2 Highlights

[The following notes were generated by Douglas McKell MS, MSc and Rebecca Rowe, PhD]

The Fall 2023 IAMSE WAS Seminar Series, “Brains, Bots, and Beyond: Exploring AI’s Impact on Medical Education,” began on September 7, 2023, and concluded on October 5, 2023. Over these five sessions, the series covered topics ranging from the basics of AI to its use in teaching and learning essential biomedical science content.

Dr. Cornelius James and Mr. Erkin Otles of the University of Michigan presented the second session in this series. Dr. James is a Clinical Assistant Professor in the Departments of Internal Medicine, Pediatrics, and Learning Health Sciences at the University of Michigan (U-M). He is a primary care physician, practicing as a general internist and a general pediatrician. Dr. James has completed the American Medical Association (AMA) Health Systems Science Scholars program and was one of ten inaugural 2021 National Academy of Medicine (NAM) Scholars in Diagnostic Excellence. As a NAM scholar, he began working on the Data Augmented Technology Assisted Medical Decision Making (DATA-MD) curriculum, which is designed to teach healthcare professionals to use artificial intelligence (AI) and machine learning (ML) in their diagnostic decision-making. Dr. James and his colleagues are also using DATA-MD to develop a web-based AI curriculum for the AMA.

Mr. Erkin Otles is a seventh-year Medical Scientist Training Program Fellow (MD-PhD student) at the University of Michigan. His research interests are across the continuum of clinical reasoning and include creating digital reasoning tools at the intersection of AI and medicine, including informatics, clinical reasoning, and operation research. His doctoral research focused on creating AI tools for patients, physicians, and health systems. He has led work across the AI lifecycle with projects advancing from development to validation, technical integration, and workflow implementation. He is also interested in incorporating AI into Undergraduate Medical Education.

Dr. James began the session by reviewing the webinar objectives and the attendees’ responses to questions about AI’s impact on healthcare and medical education. Mr. Otles described Artificial Intelligence (AI) as a lot of math and programming, with nothing magical about its operation, and defined it as intelligence demonstrated by systems constructed by humans. He defined Machine Learning (ML) as a subfield of AI concerned with developing methods that learn from data to improve performance on a specific task. An example is a physician interested in analyzing pathology slides for evidence of cancer cells: an ML system might use historical data (images and previous pathologist interpretations) to learn how to detect evidence of cancer in slides it has never seen before.
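The pathology-slide example follows the standard supervised-learning pattern: past labeled examples teach the system to label new, unseen cases. The sketch below is a minimal illustration with invented features and a simple nearest-neighbor rule, not an actual diagnostic system.

```python
import math

# Minimal supervised-learning sketch: each "slide" is reduced to two
# hypothetical features (e.g., average cell size, stain density), and the
# label is a past pathologist's call: 1 = cancer, 0 = benign.
training = [
    ((2.0, 1.0), 0), ((2.5, 1.2), 0), ((2.2, 0.9), 0),  # benign examples
    ((6.0, 4.0), 1), ((5.5, 4.5), 1), ((6.2, 3.8), 1),  # cancer examples
]

def predict(features):
    """1-nearest-neighbor: label a new slide like its closest training slide."""
    nearest = min(training, key=lambda ex: math.dist(ex[0], features))
    return nearest[1]

print(predict((5.8, 4.1)))  # 1: resembles the cancer examples
print(predict((2.1, 1.1)))  # 0: resembles the benign examples
```

The point of the example is that the model's "knowledge" is entirely the historical labeled data; a real system differs mainly in the scale of the data and the sophistication of the learning method.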

As we learned in the first session of this series, AI and ML contain many subfields. Within AI is a technique known as natural language processing (NLP), which can process human-generated text or speech into data. There is also knowledge representation and reasoning (KRR), the ability to encode human knowledge in a way that can be automatically processed; input from sensors and related biometric data is of this type. Even though AI and ML are often used synonymously, many techniques are AI but not ML.

There are also subsets within ML, such as deep learning, which is very effective because it can identify patterns in large amounts of data. ML draws heavily on optimization theory, operations research, statistical analysis, human factors engineering, and game theory, to name a few contributing disciplines.

We encounter AI in our everyday lives: asking Siri to play music, route planning, spam detection, and topic searching with Google. Accomplishing these tasks often requires several AI tools working together to present a seamless result to the user. Mr. Otles emphasized that many industries depend heavily on AI embedded in their operations. For example, airlines and e-commerce businesses use AI to optimize their operational workflows, while tech companies use ML to sort through large databases and surface content that keeps users engaged, tailored to each person’s prior browsing or purchasing habits.

Mr. Otles presented a brief review of ChatGPT that identified its two primary components: (1) a chatbot serving as the human user interface, and (2) a large language model that predicts the best answer, using reinforcement learning during training to identify the preferable word(s) in a response. He pointed out an inherent problem with this process: the answers depend entirely on whatever data the model was trained on, which may be inaccurate, irrelevant, or unpleasant. Understanding where the data came from allows us to understand how the model can be used, including its limitations. The random sampling of the underlying data also limits our ability to ensure its accuracy and reliability.
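The point that a language model's answers depend entirely on its training data can be illustrated with a toy next-word predictor, a simple bigram frequency model that is vastly simpler than ChatGPT but shares the same dependence on its corpus:

```python
from collections import Counter, defaultdict

# Train a toy next-word model on a tiny corpus; whatever gaps or biases
# exist in this text are the only "knowledge" the model has.
corpus = "the patient has a fever the patient has a rash the doctor has a chart".split()

next_words = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_words[current][nxt] += 1

def most_likely_next(word):
    """Predict the most frequent follower of `word` in the training data."""
    return next_words[word].most_common(1)[0][0]

print(most_likely_next("patient"))  # "has": the only word ever seen after "patient"
print(most_likely_next("a"))        # one of "fever", "rash", "chart" (equally frequent)
```

A model trained on this corpus can never produce a word it has not seen, and when several continuations are equally frequent it simply picks one, which is the toy analogue of the accuracy and relevance limitations Mr. Otles described.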

Mr. Otles then turned to how AI is used in health care. Not long ago, it was unusual to see AI used in health care at all. Demonstrating the rapid growth in AI’s impact on healthcare, Mr. Otles presented a chart showing that the number of FDA-cleared AI devices used in healthcare increased from one device in 1995 to 30 devices in 2015 and 521 devices in 2022. He emphasized that not all AI devices need FDA approval before being employed in healthcare, and most are not approved. He gave an example of a specific AI system at the University of Michigan, approved by the FDA in 2012, that removes bone images from a chest radiograph to give the radiologist a clearer view of the lungs. Other examples of AI in healthcare from the speaker’s research include predicting prostate cancer outcomes, in-hospital infection and sepsis risk, and deterioration risk. Mr. Otles cautioned that a number of proprietary models in current use that have not been subject to FDA scrutiny have proven less accurate than initially thought, because the actual clinical contexts in which they are deployed differ from the research-only environments in which they were trained (Singh, 2021; Wong, 2021). He also shared several examples from medical education that identified the types of feedback, and possible gaps or biases in it, provided to medical and surgical trainees based on their level of training (Otles, 2021; Abbott, 2021; Solano, 2021).

Mr. Otles concluded his presentation by discussing why physicians should be trained in AI even though AI is not currently a part of most medical schools’ curricula. Healthcare users of AI need to understand how it works and be willing to form partnerships with engineers, cognitive scientists, clinicians, and others, as AI tools are not simply applications to be implemented without serious consideration of their impact. He stressed that healthcare data cannot be analyzed outside of its context; for example, a single lab value is meaningless without understanding the patient it came from, the reported value, the timing of the measurement, and so on. He noted that AI offers physicians significant benefits: it can rapidly summarize information, predict outcomes, and learn over time. He suggested that these AI characteristics, using complicated data and workflows to reach an answer and a subsequent action, mirror the decision processes that physicians themselves use and understand. As a result, Mr. Otles stressed that physicians need to be more than “users” of AI tools; they need to be actively involved in creating, evaluating, and improving them (Otles, 2022).

Dr. James hosted the remainder of the webinar. He opened with two questions for the audience: “Are you currently using AI for teaching?” and “Are you currently teaching about AI in health care?” He then presented the latest recommendations from the National Academy of Medicine (NAM) on AI and health care, highlighting the recommendation that we develop and deploy appropriate AI-related training and education programs (Lomis et al., 2021). These programs are not just for physicians but must include all healthcare professionals. He also discussed a recent poll published in the New England Journal of Medicine (NEJM), in which 34% of participants ranked data science (including AI and ML) as the second most important topic medical schools should focus on to prepare students to succeed in clinical practice (Mohta & Johnston, 2020).

Currently, AI has a minimal presence in most medical school curricula; where it is present, it is usually an elective, part of an online curriculum, a workshop, or a certificate program. Dr. James stated that this piecemeal approach is insufficient to train physicians and other healthcare leaders to use AI effectively. As more and more medical schools start to include AI and ML in their curricula, he stressed that it is essential to set realistic goals for what AI instruction in medical education should look like. Just as not all practicing physicians and other healthcare workers are actively engaged in clinical research, not every physician or clinician should be expected to develop a machine-learning tool. That said, Dr. James argued that just as all physicians are expected to possess the skills necessary to use EBM in their practice, they should also be expected to evaluate and apply the outputs of AI and ML in their clinical practice. Therefore, medical schools and other health professional schools need to begin equipping clinicians with the AI vocabulary necessary to serve as patient advocates, to ensure their patients’ data are protected, and to ensure the algorithms used for analysis are safe and are not perpetuating existing biases.

Dr. James reviewed a paper by colleagues at Vanderbilt that provides a great starting point for incorporating AI-related clinical competencies and for understanding how AI will affect current ACGME competencies, such as Practice-Based Learning and Improvement and Systems-Based Practice, which are part of the Biomedical Model of education used for the training of physicians (McCoy et al., 2020). Although the Biomedical Model has historically been the predominant model for training physicians, he suggested that medical education begin thinking about transitioning to a Biotechnomedical Model of educating clinicians and healthcare providers (Duffy, 2011). This model would account for the role that technology will play in preventing, diagnosing, and treating illness or disease. He was clear that he does not mean models like the bio-psycho-social-spiritual model should be ignored; rather, he was suggesting that the Biotechnomedical Model be considered complementary to the bio-psycho-social-spiritual model, to get closer to the whole-person, holistic care we seek to provide. If medical education is to successfully prepare our learners to be comfortable with these technologies, a paradigm shift will be necessary. He believes that within 1-2 years we will see AI and ML content overlapping with courses like Health Systems Science, Clinical Reasoning, Clinical Skills, and Evidence-Based Medicine. This is already occurring at the University of Michigan, where he teaches.

Dr. James feels strongly that Evidence-Based Medicine (EBM) is the best initial home for AI and ML content. We have all heard that the basic and clinical sciences are the two pillars of medical education, and most of us have also heard of Health Systems Science, which is considered the third pillar. Dr. James anticipates that over the next five to ten years AI will become foundational to all three pillars, as it will transform how we teach, assess, and apply knowledge in these three domains. He briefly reviewed this change in the University of Michigan undergraduate medical school curriculum.

Dr. James then discussed in depth the Data Augmented Technology Assisted Medical Decision Making (DATA-MD) program at his medical school, which is working to create the foundation for designing AI/ML curricula for all healthcare learners (James, 2021). His team has focused on the diagnosis process using EBM, which will in turn shape the analysis process. Their work with the American Medical Association is supporting the creation of seven web-based modules on using AI more broadly in medicine.

Dr. James concluded his presentation by stressing four changes necessary to rapidly and effectively incorporate AI and ML into the training of healthcare professionals: 1. review and re-prioritize the existing curriculum, 2. identify AI/ML champions, 3. support interprofessional collaboration and education, and 4. invest in AI/ML faculty development. His final take-home points were that AI/ML will continue to impact healthcare with or without physician involvement, that AI/ML is already transforming the way medicine is practiced, that AI/ML instruction is lacking in medical education, and that interprofessional collaboration is essential for healthcare professionals as key stakeholders.




As always, IAMSE Student Members can register for the series for FREE!

IAMSE Fall 2023 Webcast Audio Seminar Series – Week 1 Highlights

[The following notes were generated by Douglas McKell, MS, MSc and Rebecca Rowe, PhD]

The Fall 2023 IAMSE WAS Seminar Series, “Brains, Bots, and Beyond: Exploring AI’s Impact on Medical Education,” began on September 7, 2023, and concludes on October 5, 2023. Over these five sessions, we will cover topics including the foundational principles of Artificial Intelligence and Machine Learning, their multiple applications in health science education, and their use in teaching and learning essential biomedical science content.

The series began with the session “An Introduction to Artificial Intelligence and Machine Learning with Applications in Healthcare” by Dr. Homayoun Valafar. Dr. Valafar is Chair of the Department of Computer Science and Engineering, Director of the SC INBRE Bioinformatics Core, and Director of the Big Data Health Science Center Genomic Core at the University of South Carolina.

Dr. Valafar started the session by discussing the role of Artificial Intelligence (AI) and Machine Learning (ML) in scientific discovery, followed by defining the relationship between AI and ML. He then reviewed the process of training ML models for practical applications. He completed the session by providing two examples of AI and ML techniques being applied in different healthcare environments.

It is important to note at the outset that data science overlaps with AI and ML. He briefly explained data scientists’ traditional data management roles of acquisition, verification, storage, retrieval, and analysis, pointing out that today AI and ML play a central role in data management. ML and applied AI offer several advantages over traditional data management models, especially in data storage and retrieval, as well as pattern recognition.

Dr. Valafar explained that while the terms AI and ML tend to be used interchangeably, this is incorrect because they are distinctly different. ML is a subset of AI, but some types of AI do not involve ML. A technique is considered AI when machines act intelligently because humans have specifically programmed the intelligent actions. ML, conversely, is a sub-branch of AI in which machines learn on their own how to manage the data they are given. The machine learns the governing rules of its behavior to reach a solution on its own, so the machine itself “knows” what it is doing, but the human may or may not know what it is doing or how it is doing it. Dr. Valafar termed this difference “Black Box (ML) vs. White Box (AI),” or opaque vs. transparent. In the former, the ML model produces excellent output, but you have no idea what rules it uses to accomplish the task; in the latter, you have programmed the rules yourself. In both cases, you still need to assess the accuracy of the output.

An example of AI is when a human tells “Alexa” to turn the lights on at 7:00 p.m. Here you, the human, specify the rules the machine is to follow; the machine has been programmed to interpret your commands into the actions it needs to take. Other examples included moving your computer cursor to change a letter or a word, multi-keystroke shortcuts that “automatically” make changes in a document, and using voice recognition to specify locations, playlists, and songs.
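This rule-based (“white box”) behavior can be sketched in a few lines of Python; the function name and schedule below are illustrative, not from the talk:

```python
from datetime import time

# "White box" AI: a human wrote the rule explicitly, so the machine's
# behavior is fully transparent and inspectable.
def lights_should_be_on(now, on_at=time(19, 0)):
    """Return True once the clock reaches the scheduled time (7:00 p.m.)."""
    return now >= on_at

print(lights_should_be_on(time(18, 30)))  # before 7 p.m. -> False
```

A learned (ML) model, by contrast, would have to infer such a rule on its own from examples of when you turned the lights on.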

He also shared examples of AI that are also examples of ML. These involve providing data to an ML program and asking it to find patterns that a human could not readily detect. For example, when given the data:

  • find patterns that determine how a patient responds to a drug
  • identify the distinguishing attributes of smoking behavior
  • find common characteristics of patients with vascular disease.

ML can be divided into two branches: supervised learning and unsupervised learning. In supervised learning, the human is involved in training the machine learning model: a human expert has reviewed, categorized, and labeled the data to discriminate between the critical differences under investigation, e.g., drug responder vs. non-responder, or the presence or absence of calcification in the arterial system on a radiological image. This labeled information is used to train the ML model. Because humans are involved, this technique is more common and more reliable, but it tends to be more time- and resource-consuming, and it will also incorporate human error. In unsupervised learning, the machine figures out the data independently, such as by determining how the data cluster. This technique is less prevalent and is used mostly in academic settings.
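The distinction can be illustrated with a toy sketch in plain Python (the biomarker values and labels below are invented for illustration): the supervised model relies on human-provided labels, while the unsupervised step groups the same measurements without any labels at all.

```python
# Supervised: a human expert has already labeled each patient as a
# drug responder (1) or non-responder (0).
labeled = [(1.0, 0), (1.2, 0), (3.8, 1), (4.1, 1)]  # (biomarker, label)

def predict_nearest(x):
    """Classify a new patient by the label of the closest labeled example."""
    return min(labeled, key=lambda pair: abs(pair[0] - x))[1]

# Unsupervised: no labels -- group the raw measurements into two
# clusters by distance to the extreme values (one step of 2-means).
values = [v for v, _ in labeled]
lo, hi = min(values), max(values)
clusters = {v: 0 if abs(v - lo) < abs(v - hi) else 1 for v in values}
```

The supervised model can answer "responder or not?" because a human defined those categories; the unsupervised step can only report that the data fall into two groups, leaving the interpretation to the human.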

This session focused on supervised learning, which includes collecting and aggregating relevant data; scrutinizing the data for accuracy and for biases, such as sex-, age-, or race-based biases; removing those biases; and dividing the data into training and testing sets. Failing to remove biases results in reliable but not valid analysis by the ML model. The training set is used to train the model, while the testing set is never exposed to the ML network during training and is used to evaluate the trained model. Training occurs as the model’s internal parameters are adjusted until the correct input/output association is achieved, that is, until the accuracy of performance increases while errors decrease. The test set then validates the accuracy of the trained model. This process is central to the development of an Artificial Neural Network. Training involves multiple iterations in which you focus on increasing the accuracy of the output while minimizing information loss.
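The train/test workflow described above can be sketched in plain Python with a deliberately simple one-parameter “model” (a single threshold); the data are synthetic and purely illustrative:

```python
import random

random.seed(0)
# Synthetic labeled data: value above 2.5 -> label 1, otherwise 0.
data = [(x / 10, int(x / 10 > 2.5)) for x in range(50)]
random.shuffle(data)

# Split: the test set is held out and never seen during training.
train, test = data[:40], data[40:]

def fit_threshold(examples):
    """'Train' by picking the threshold with the best training accuracy."""
    candidates = sorted(v for v, _ in examples)
    return max(candidates,
               key=lambda t: sum((v > t) == bool(y) for v, y in examples))

threshold = fit_threshold(train)
# Evaluate only on the held-out test set.
accuracy = sum((v > threshold) == bool(y) for v, y in test) / len(test)
```

Real ML models adjust many internal parameters rather than one threshold, but the discipline is the same: fit on the training set, then judge performance on the untouched test set.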

Dr. Valafar emphasized that the ML process he outlined comes with an important caveat: depending on the form of ML you employ, you may be able to extract the learned information from the model, or it may be embedded in the model itself and inaccessible to you. In other words, you may be able neither to explain nor to replicate the process (rules) the ML model used to arrive at its output. ML techniques like this usually depend on applying well-known statistical methods such as Linear or Logistic Regression, Bayesian Classifiers, Decision Trees, Random Forests, and Artificial Neural Networks.

For the remainder of the webinar, Dr. Valafar presented several applications showing how ML, in particular biologically inspired Deep Neural Networks (a subset of Artificial Neural Networks trained on large databases), can perform remarkably accurate predictive identification of critical characteristics of interest in new data, for example, a single patient’s expected response to a new drug over time. He reminded us that while this approach generally outperforms all other AI approaches, as a “Black Box” ML model it still lacks explainability and reliability. Decision Tree models, on the other hand, perform less well on complicated tasks but are explainable, with transparent information. Examples included determining cardiovascular risk from a set of individual patient factors, such as age, weight, and smoking, to categorize the patient as high risk or low risk. Human experts must provide the initial data categorization to help train the ML model.
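The explainability contrast can be seen in a toy decision tree for the cardiovascular-risk example: every prediction can be justified by the path of human-readable rules it followed. The thresholds below are invented for illustration, not clinical guidance.

```python
# A "white box" decision tree: each branch is a readable rule, so the
# model can explain exactly why a patient was classified as it was.
def cardio_risk(age, weight_kg, smoker):
    if smoker:
        return "high"  # branch 1: smoking alone flags high risk
    if age > 60 and weight_kg > 100:
        return "high"  # branch 2: older and heavier
    return "low"       # default: no triggering rule
```

A deep neural network trained on the same factors might classify patients more accurately, but it could not show a clinician which branch fired and why.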

ML-based AI examples included:

  • applying AI and ML to facilitate patient management in the emergency room during a mass-casualty event, reducing the time-to-treatment for 300 simulated patients from 12 hours to 3 hours;
  • identifying the patients most likely to receive clinical benefit from Cardiac Resynchronization Therapy;
  • predicting an individualized hydroxyurea drug response for Sickle Cell Anemia at 3, 6, and 9 months, based on a blood sample with a target fetal hemoglobin above 12%, using a Digital Twin graphing process; and
  • using ML in vascular surgery to detect and quantify calcification or plaques and to identify and track the health of the aorta and the peripheral arteries.

The final application of AI and ML involves patients wearing activity-monitoring devices, such as watches, rings, necklaces, and wrist or ankle bands, that can monitor the number of steps taken, amount of sleep, sports activity, smoking cessation, and whether the patient is eating or taking their medications, for example. In each of these examples, the individual’s physical actions are translated (interpreted by the ML) into a pre-identified, highly probable behavior with remarkable but not infallible accuracy; smoking and medication-taking, for example, can be confused.

In conclusion, Dr. Valafar stated that AI and ML have the potential to revolutionize healthcare and the medical sciences. AI can be used to optimize patient outcomes, and ML can become an assistant to practitioners and a tutor to apprentices: a second set of eyes that identifies subtle changes in a patient’s physiology or predicts an individual’s future risk of disease progression or treatment response. He ended by highlighting two barriers to realizing the full potential of AI in healthcare: 1. data-sharing barriers need to be removed, and 2. better collaboration is needed between health practitioners and the data sciences.




As always, IAMSE Student Members can register for the series for FREE!

Reminder: IAMSE Manual Proposals Due October 1, 2023

The IAMSE Manuals Editorial Board is seeking proposals for contributions to the IAMSE Manuals book series to be published in 2025.

The IAMSE Manuals series was established to disseminate current developments and best evidence-based practices in healthcare education, offering those who teach in healthcare the most current information to succeed in their educational roles. The Manuals offer practical “how-to-guides” on a variety of topics relevant to teaching and learning in the healthcare profession. The aim is to improve the quality of educational activities that include, but are not limited to: teaching, assessment, mentoring, advising, coaching, curriculum development, leadership and administration, and scholarship in healthcare education, and to promote greater interest in health professions education. They are compact volumes of 50 to 175 pages that address any number of practical challenges or opportunities facing medical educators. The manuals are published by Springer; online versions are offered to IAMSE members at a reduced price.

We welcome proposal submissions on topics relevant to IAMSE’s mission and encourage multi-institutional, international, and interprofessional contributions. Previously published manuals can be found by clicking here. Currently, manuals on the topics of feedback, problem-based learning, professionalism, online education, and leveraging student thinking are already being developed for publication in 2023 and 2024.

To submit your proposal, please click here. The submission deadline is October 1, 2023.

Each proposal will be evaluated by the IAMSE Manuals Editorial Board using the criteria specified in the full call. The Editorial Board will then discuss the proposals and select two to three for publication. Selections will be based on how well the proposals match those criteria. We expect publication decisions to be made by December 2023, and we anticipate that the selected manuals will be published during the second half of 2025.

Read here for the full call and submission guidelines.

If you have any questions about submission or the Manuals series, please reach out to support@iamse.org.

We look forward to your submissions.

Bessias & Cary to present Transforming Healthcare Together

Are you curious how Artificial Intelligence (AI) is transforming medical education, especially its impact on faculty teaching and student learning? Join the upcoming IAMSE Fall webinar series entitled “Brains, Bots, and Beyond: Exploring AI’s Impact on Medical Education” to learn about the intersection of AI and medical education. Over five sessions, we will cover topics ranging from the basics of AI to its use in teaching and learning essential biomedical science content.

The series begins on September 7 with a presentation by Homayoun Valafar to define AI and machine learning. The series will continue on September 14 with a discussion by Erkin Otles and Cornelius James on strategies to prepare our trainees to appropriately utilize AI in their future healthcare jobs. On September 21, Michael Paul Cary and Sophia Bessias will present on critical ethical issues, including the potential for unintended bias and disparities arising from AI. Finally, Dina Kurzweil and Bill Hersh will wrap up the series on September 28 and October 5, respectively, with practical tips for educators and learners alike to utilize AI to maximize teaching and learning. Don’t miss this exciting opportunity to join the conversation on the future of AI in medical education.

Transforming Healthcare Together: Empowering Health Professionals to Address Bias in the Rapidly Evolving AI-Driven Landscape

Presenters: Sophia Bessias, MPH, MSA and Michael Paul Cary, Jr., PhD, RN, FAAN
Session Date & Time: September 21, 2023 at 12pm Eastern
Session Description: As the interest in utilizing AI/machine learning in healthcare continues to grow, healthcare systems are adopting algorithms to enhance patient care, alleviate clinician burnout, and improve operational efficiency. However, while these applications may appear promising, they also carry certain risks, including the potential to automate and reinforce existing health disparities.

During this seminar, we will introduce the ABCDS Oversight framework developed at Duke Health. This comprehensive framework focuses on the governance, evaluation, and monitoring of clinical algorithms, providing participants with practical guidance to ensure the responsible implementation of AI/ML. Specifically, we will highlight how high-level principles can be translated into actionable steps for developers, allowing them to maximize patient benefit while minimizing potential risks.

Read the full session description here.



As always, IAMSE Student Members can register for the series for FREE!

IAMSE Seeking 2025 Manual Proposals Due October 1, 2023

The IAMSE Manuals Editorial Board is seeking proposals for contributions to the IAMSE Manuals book series to be published in 2025.

The IAMSE Manuals series was established to disseminate current developments and best evidence-based practices in healthcare education, offering those who teach in healthcare the most current information to succeed in their educational roles. The Manuals offer practical “how-to-guides” on a variety of topics relevant to teaching and learning in the healthcare profession. The aim is to improve the quality of educational activities that include, but are not limited to: teaching, assessment, mentoring, advising, coaching, curriculum development, leadership and administration, and scholarship in healthcare education, and to promote greater interest in health professions education. They are compact volumes of 50 to 175 pages that address any number of practical challenges or opportunities facing medical educators. The manuals are published by Springer; online versions are offered to IAMSE members at a reduced price.

We welcome proposal submissions on topics relevant to IAMSE’s mission and encourage multi-institutional, international, and interprofessional contributions. Topics for the manuals may vary widely, including but not limited to the following:

  • Program Evaluation
  • CQI in Medical Education
  • Educational Models and Conceptual Frameworks
  • Teaching Using Learning Strategies
  • Approaches to Integration
  • Professionalism
  • Educational Technology

Previously published manuals can be found by clicking here. Currently, manuals on the topics of feedback, problem-based learning, professionalism, online education, and leveraging student thinking are already being developed for publication in 2023 and 2024.

The essential factors to consider in submitting a proposal are the proposed topic:

  1. informs medical education practice;
  2. provides practical instructions and tips to the reader;
  3. excites interest in the medical education community;
  4. demonstrates careful attention to sound research and theory.

We welcome proposals from medical educators, theorists, researchers, and administrators. The entire proposal should not exceed 2,500 words. The following criteria will be used to evaluate proposal submissions:

  • Objectives
    • The objectives should emphasize specific instructional practices that readers can implement in their instructional settings.
  • Description of the proposed manual
    • The description should clearly explain the primary topic of the manual, how—and to what extent—the topic is covered in existing publications, and how the proposed manual addresses gaps in the extant literature.
  • Manual title in conjunction with an expanded table of contents (TOC)
    • The expanded TOC should identify the major topics to be covered in each chapter, with short (two- to three-sentence) descriptions of what will be included in each chapter.
  • Description of the target audience.
    • The description should include the anticipated size of the readership (i.e., the size of the market). 
  • A statement of general interest that addresses the expertise, skills, and attributes the authors bring to the topic (no longer than 1-2 pages).
  • Listing of authors, including brief biographical sketches (no longer than 1-2 pages).

To submit your proposal, please click here. The submission deadline is October 1, 2023.

Each proposal will be evaluated by the IAMSE Manuals Editorial Board using the criteria specified above. The Editorial Board will then discuss the proposals and select two to three for publication. Selections will be based on how well the proposals match the above criteria. We expect publication decisions to be made by December 2023, and we anticipate that the selected manuals will be published during the second half of 2025.

Eligibility
Both IAMSE members and non-members are eligible to submit a proposal. IAMSE is a diverse community and strives to reflect that diversity in the composition of its authors. The Editorial Board welcomes applications from members of different countries, various health professions backgrounds, and members of minority groups.

If you have any questions about submission or the Manuals series, please reach out to support@iamse.org.

We look forward to your submissions.