IAMSE Fall 2023 Webcast Audio Seminar Series – Week 2 Highlights

[The following notes were generated by Douglas McKell MS, MSc and Rebecca Rowe, PhD]

The Fall 2023 IAMSE WAS Seminar Series, “Brains, Bots, and Beyond: Exploring AI’s Impact on Medical Education,” began on September 7th, 2023, and concluded on October 5th, 2023. Over these five sessions, topics ranged from the basics of AI to its use in teaching and learning essential biomedical science content.

Dr. Cornelius James and Erkin Otles presented the second session in this series from the University of Michigan. Dr. James is a Clinical Assistant Professor in the Departments of Internal Medicine, Pediatrics, and Learning Health Sciences at the University of Michigan (U-M). He is a primary care physician, practicing as a general internist and a general pediatrician. Dr. James has completed the American Medical Association (AMA) Health Systems Science Scholars program and was one of ten inaugural 2021 National Academy of Medicine (NAM) Scholars in Diagnostic Excellence. As a NAM scholar, he began working on the Data Augmented Technology Assisted Medical Decision Making (DATA-MD) curriculum. The DATA-MD curriculum is designed to teach healthcare professionals to use artificial intelligence (AI) and machine learning (ML) in their diagnostic decision-making. Dr. James and his colleagues are also using DATA-MD to develop a web-based AI curriculum for the AMA.

Mr. Erkin Otles is a seventh-year Medical Scientist Training Program Fellow (MD-PhD student) at the University of Michigan. His research interests span the continuum of clinical reasoning and include creating digital reasoning tools at the intersection of AI and medicine, drawing on informatics, clinical reasoning, and operations research. His doctoral research focused on creating AI tools for patients, physicians, and health systems. He has led work across the AI lifecycle, with projects advancing from development to validation, technical integration, and workflow implementation. He is also interested in incorporating AI into Undergraduate Medical Education.

Dr. James began the session by reviewing the webinar objectives and the attendees’ responses to questions about AI’s impact on healthcare and medical education. Mr. Otles demystified Artificial Intelligence (AI) as a lot of math and programming, with nothing magical about its operation, and defined it as intelligence demonstrated by systems constructed by humans. He defined Machine Learning (ML) as a subfield of AI concerned with developing methods that learn from data to improve performance on a specific task. An example is a physician interested in analyzing pathology slides for evidence of cancer cells: an ML system might use historical data (consisting of images and previous pathologist interpretations) to learn how to detect evidence of cancer in slides it has never seen before.
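The pathology example above can be sketched as a toy supervised learner. This is a minimal illustration only: the numeric features, labels, and nearest-neighbor rule are invented stand-ins, not the actual systems discussed in the session.

```python
# Toy illustration of machine learning: a 1-nearest-neighbor classifier
# "learns" from historical labeled examples and applies that experience
# to a case it has never seen. The features are hypothetical scores
# (e.g., cell density, nucleus irregularity), invented for illustration.

def predict(history, new_case):
    """Label a new case with the label of its closest historical example."""
    def distance(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(history, key=lambda ex: distance(ex[0], new_case))
    return label

# Historical data: (slide features, previous pathologist's interpretation)
history = [
    ([0.9, 0.8], "cancer"),
    ([0.8, 0.9], "cancer"),
    ([0.2, 0.1], "benign"),
    ([0.1, 0.3], "benign"),
]

# A previously unseen slide is labeled by analogy to past cases.
print(predict(history, [0.85, 0.7]))  # → cancer
```

Real diagnostic models use far richer features and algorithms, but the core idea is the same: performance on new cases comes entirely from patterns in previously labeled data.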

As we learned in the first session of this series, AI and ML contain many subfields. Within AI is a technique known as natural language processing (NLP), which can process human-generated text or speech into data. There is also knowledge representation and reasoning (KRR), the ability to encode human knowledge in a form that can be automatically processed; input from sensors and related biometric data is of this type. Even though AI and ML are often used synonymously, many techniques are AI but not ML.

There are also subsets within ML, such as Deep Learning, which is very effective because it can identify patterns in large amounts of data. ML draws heavily on optimization theory, operations research, statistical analysis, human factors engineering, and game theory, to name a few contributing disciplines.

We encounter AI in our everyday lives: asking Siri to play music, route planning, spam detection, and keyword searches on Google. These tasks often require several AI tools working together to present a seamless result to the human user. Mr. Otles emphasized that many industries depend heavily on AI embedded in their operations. For example, airlines and e-commerce businesses use AI to optimize their operational workflows, while tech companies use ML to sort through large databases and surface content that keeps each user engaged, based on prior browsing or purchasing habits.

Mr. Otles presented a brief review of ChatGPT that identified its two primary components: 1. a chatbot serving as the human user interface, and 2. a Large Language Model that predicts the best answer, using reinforcement learning during training to identify the preferable word(s) in a response. He pointed out an inherent problem with this process: the answers depend entirely on whatever data the model was trained on, which may be inaccurate, irrelevant, or unpleasant. Understanding where the data came from allows us to understand how it can be used, including its limitations. The random sampling of the underlying data also limits the accuracy and reliability that can be ensured.
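The point that a language model's answers depend entirely on its training data can be sketched with a toy next-word predictor. The tiny "corpus" below is invented for illustration; real LLMs are vastly more sophisticated, but the dependence on training data is the same.

```python
# Toy next-word predictor: it can only ever echo patterns it has seen.
from collections import Counter, defaultdict

def train(corpus):
    """Count which word follows which in the training text."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent continuation seen in training, if any."""
    if word not in counts:
        return None  # the model knows nothing outside its training data
    return counts[word].most_common(1)[0][0]

# Hypothetical training text, invented for illustration.
corpus = "the patient is stable the patient is improving the patient is stable"
model = train(corpus)

print(predict_next(model, "is"))      # "stable" — seen twice vs. once
print(predict_next(model, "sepsis"))  # None — never seen in training
```

If the training text were inaccurate or biased, the predictions would faithfully reproduce those flaws, which is exactly the limitation Mr. Otles described.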

Mr. Otles then switched to discussing how AI is used in health care. Not long ago, it was unusual to see AI used in health care at all. Demonstrating the rapid growth of AI’s impact, Mr. Otles presented a chart showing that the number of FDA-cleared AI devices used in healthcare increased from one device in 1995 to 30 devices in 2015 and 521 devices in 2022. He emphasized that not all AI devices need FDA approval before being employed in healthcare, and that most are not approved. He gave an example of a specific AI system at the University of Michigan, cleared by the FDA in 2012, that removes bone images from a chest radiograph to give the radiologist a clearer view of the lungs. Other examples of AI in healthcare from the speaker’s research include prostate cancer outcome prediction, in-hospital infection and sepsis risk, and deterioration risk. Mr. Otles cautioned that a number of proprietary models in current use that have not been subject to FDA scrutiny have proven less accurate than initially thought, because the actual clinical contexts in which they are deployed differ from the research-only environments in which they were trained (Singh, 2021; Wong, 2021). He also shared several feedback examples from medical education that identified the type of, and possible gaps or biases in, feedback provided to medical and surgical trainees based on their level of training (Otles, 2021; Abbott, 2021; Solano, 2021).

Mr. Otles concluded his presentation by discussing why physicians should be trained in AI, even though AI is not currently part of most medical schools’ curricula. He emphasized that physicians should not just be users of AI in healthcare; they need to be actively involved in creating, evaluating, and improving it. Healthcare users of AI need to understand how it works and be willing to form partnerships with engineers, cognitive scientists, clinicians, and others, as AI tools are not simply applications to be implemented without serious consideration of their impact. He stressed that healthcare data cannot be analyzed outside of its context: a single lab value, for example, is meaningless without understanding the patient it came from, the reported value, the timing of the measurement, and so on. Mr. Otles pointed out that AI can rapidly summarize information, predict outcomes, and learn over time, all of which can ultimately benefit physicians. He suggested that the way AI works through complicated data and workflows to reach an answer and a subsequent action mirrors the decision processes physicians themselves use and understand. As a result, he stressed again that physicians need to be more than “users” of AI tools; they need to be involved in creating, evaluating, and improving them (Otles, 2022).

Dr. James hosted the remainder of the webinar. He presented two questions to the audience: “Are you currently using AI for teaching?” and “Are you currently teaching about AI in health care?” Dr. James then presented the latest recommendations from the National Academy of Medicine (NAM) related to AI and health care, highlighting the recommendation that we develop and deploy appropriate training and education programs related to AI (Lomis et al., 2021). These programs are not just for physicians but must include all healthcare professionals. He also discussed a recent poll published in the New England Journal of Medicine (NEJM), in which 34% of participants ranked data science (including AI and ML) as the second most important topic medical schools should focus on to prepare students to succeed in clinical practice (Mohta & Johnston, 2020).

Currently, AI has a minimal presence in the curricula of most medical schools. Where it is present, it is usually an elective, part of an online curriculum, a workshop, or a certificate program. He stated that this piecemeal approach is insufficient to train physicians and other healthcare leaders to use AI effectively. As more medical schools include AI and ML in their curricula, Dr. James stressed that it is essential to set realistic goals for what AI instruction in medical education should look like. Just as not all practicing physicians and other healthcare workers are actively engaged in clinical research, it should not be expected that all physicians or clinicians will develop a machine-learning tool. That said, Dr. James stated that just as all physicians are required, or should be expected, to possess the skills necessary to use EBM in their practice, they should also be expected to evaluate and apply the outputs of AI and ML in their clinical practice. Therefore, medical schools and other health professional schools need to begin training clinicians in the AI vocabulary necessary to serve as patient advocates, ensuring that patient data is protected and that the algorithms used for analysis are safe and do not perpetuate existing biases.

Dr. James reviewed a paper by colleagues at Vanderbilt that provides a great starting place for incorporating AI-related clinical competencies and for understanding how AI will impact current ACGME competencies, such as Problem-Based Learning and Improvement and Systems-Based Practice, which are part of the Biomedical Model of education used to train physicians (McCoy et al., 2020). Even though the Biomedical Model has historically been the predominant model for training physicians, he suggested that medical education start thinking about transitioning to a Biotechnomedical Model of educating clinicians and healthcare providers (Duffy, 2011). This model considers the role technology will play in preventing, diagnosing, and treating illness or disease. He clearly stated that he does not mean that models like the bio-psycho-social-spiritual model should be ignored; rather, the Biotechnomedical Model should be considered complementary to it, bringing us closer to the whole-person, holistic care we seek to provide. If medical education is to prepare learners to be comfortable with these technologies, a paradigm shift will be necessary. He believes that within one to two years, we will see AI and ML content overlap with courses like Health Systems Science, Clinical Reasoning, Clinical Skills, and Evidence-Based Medicine. This is already occurring at the University of Michigan, where he teaches.

Dr. James feels strongly that Evidence-Based Medicine (EBM) is the best initial home for AI and ML content. We have all heard that the Basic and Clinical Sciences are the two pillars of medical education, and most of us have also heard of Health Systems Science, which is considered the third pillar. Dr. James anticipates that over the next five to ten years, AI will become the foundation for these three pillars, as it will transform how we teach, assess, and apply knowledge in these three domains. He briefly reviewed this change in the University of Michigan undergraduate medical school curriculum.

The final portion of Dr. James’ presentation was an in-depth discussion of the Data Augmented Technology Assisted Medical Decision Making (DATA-MD) program at his medical school, which is working on creating the foundation for designing AI/ML curricula for all healthcare learners (James, 2021). His team has focused on the diagnostic process using EBM, which will in turn impact the analysis process. Their work with the American Medical Association is supporting the creation of seven web-based modules on using AI more broadly in medicine.

Dr. James concluded his presentation by stressing four changes necessary to rapidly and effectively incorporate AI and ML into the training of healthcare professionals: 1. review and re-prioritize the existing curriculum, 2. identify AI/ML champions, 3. support interprofessional collaboration and education, and 4. invest in AI/ML faculty development. His final “Take Home” points were that AI/ML will continue to impact healthcare with or without physician involvement, AI/ML is already transforming the way medicine is practiced, AI/ML instruction is lacking in medical education, and interprofessional collaboration among healthcare professionals as key stakeholders is essential.

As always, IAMSE Student Members can register for the series for FREE!