Hongjian Zhou

Newsletter from The Neural Medwork: Issue 7

Updated: Mar 15


 

Abstract:


Welcome back to the 7th newsletter of The Neural Medwork! In this issue, we continue to dive deeper into AI algorithms with an introduction to the K-Nearest Neighbors (KNN) algorithm. Next, we share findings on using ambient scribing technology for medical records. Lastly, we cover a 'classic' technique for interacting with GPT models: few-shot in-context learning.

 

Core Concept: K-Nearest Neighbors (KNN)


Welcome back to The Neural Medwork, where we continue to delve into the fascinating world of AI algorithms in healthcare. Today, we're introducing another crucial concept in machine learning: K-Nearest Neighbors (KNN). Following our exploration of decision trees and random forests, KNN offers a distinct approach to supervised learning, emphasizing the power of proximity for classification or prediction tasks.


KNN operates on a simple yet effective principle: it classifies or predicts the group of a new data point based on its closeness to previously labelled data. The "K" in KNN represents the number of nearest neighbours the algorithm considers when making predictions or classifications. It's a parameter that the user sets to determine how many of the closest training examples to the new data point the algorithm will look at to make a decision. For example, if K is set to 3, the algorithm looks at the three nearest neighbours of a new data point to determine its classification. You may not know this, but you have likely reaped the benefits of a KNN algorithm if you watch Netflix. By analyzing the characteristics of shows you liked, Netflix uses a KNN-like algorithm to recommend new shows sharing similar traits—essentially, it finds your next favourite show by comparing it to your past preferences.
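The neighbour-voting idea above can be sketched in a few lines of plain Python. This is a minimal illustration with made-up two-dimensional toy data, not a production implementation: it ranks labelled training points by Euclidean distance to the new point and takes a majority vote among the K closest.

```python
import math
from collections import Counter

def knn_predict(train, new_point, k=3):
    """Classify new_point by majority vote of its k nearest labelled neighbours."""
    # train: list of ((feature1, feature2), label) pairs
    ranked = sorted(train, key=lambda item: math.dist(item[0], new_point))
    nearest_labels = [label for _, label in ranked[:k]]
    # Majority vote among the k closest training examples
    return Counter(nearest_labels).most_common(1)[0][0]

# Toy data: points near (0, 0) belong to class "A", points near (5, 5) to class "B"
train = [((0, 0), "A"), ((1, 0), "A"), ((0, 1), "A"),
         ((5, 5), "B"), ((6, 5), "B"), ((5, 6), "B")]

print(knn_predict(train, (0.5, 0.5), k=3))  # → A
print(knn_predict(train, (5.5, 5.5), k=3))  # → B
```

Note that the choice of K matters: a very small K makes the prediction sensitive to noisy individual points, while a very large K smooths everything toward the majority class.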


In healthcare, KNN's application can be beautifully illustrated through a prognosis example. Suppose you want to build a KNN model that predicts 3-month mortality for heart failure patients. For simplicity, let's say the two variables you believe are most important for this group of patients are ejection fraction (EF) and age. By plotting these two variables for hundreds of patients and labelling who survived or passed away within 3 months, KNN can predict a new patient's outcome by determining which group the new case sits closer to, based on the two variables you have chosen (i.e. EF and age). This methodology showcases KNN's capacity to provide valuable insights into patient prognoses by leveraging existing data patterns.
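The EF-and-age example can be sketched the same way. All patient values below are invented for illustration, not clinical data. One practical detail the sketch highlights: because EF is a percentage and age is in years, the features should be scaled before measuring distance, otherwise the feature with the larger numeric range dominates the neighbour ranking.

```python
import math
from collections import Counter

# Hypothetical patients: (ejection fraction %, age in years) -> 3-month outcome.
# These numbers are made up for the sketch.
patients = [
    ((20, 80), "died"),     ((25, 75), "died"),     ((22, 78), "died"),
    ((55, 60), "survived"), ((60, 55), "survived"), ((58, 62), "survived"),
]

def scale(point, lo=(20, 55), hi=(60, 80)):
    # Min-max scale each feature so EF (%) and age (years) contribute comparably
    return tuple((v - l) / (h - l) for v, l, h in zip(point, lo, hi))

def predict_outcome(new_patient, k=3):
    ranked = sorted(patients,
                    key=lambda p: math.dist(scale(p[0]), scale(new_patient)))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

print(predict_outcome((24, 77)))  # low EF, older: nearest neighbours are "died"
print(predict_outcome((57, 58)))  # higher EF, younger: nearest are "survived"
```

In a real clinical model you would use many more variables and patients, validate K on held-out data, and use an established library rather than hand-rolled code; the point here is only how proximity drives the prediction.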


KNN exemplifies the fusion of simplicity and efficacy in machine learning, offering a clear, understandable model for healthcare professionals to grasp and appreciate its potential applications in improving patient care and outcomes.





 

Relevant Research Paper


Title: "Enhancing Physician-Patient Interactions with Ambient AI Scribes: A Kaiser Permanente Experience"


Goal: Kaiser Permanente aimed to introduce Ambient AI Scribe technology to its network of more than 10,000 physicians to reduce the burden of documentation, enhance the quality of physician-patient interactions, and maintain the quality of medical records. A comprehensive training program, including a 1-hour virtual interactive webinar, was provided to facilitate this integration.





The Experience - By Numbers:

  • Adoption: Over 10 weeks, 3,442 physicians utilized the Ambient AI Scribe across 303,266 encounters, with 968 physicians emerging as 'super users' engaging in over 100 encounters each.

  • Impact on Work-Life Balance: Usage of the Ambient AI Scribe was associated with a decrease in after-hours EHR documentation, affectionately termed 'pajama time.'

  • Patient Experience: 81% of surveyed patients noticed a reduction in their physician's screen time, potentially enhancing the quality of face-to-face interaction.

  • Transcript Quality: In a sample evaluation of 35 transcripts, the quality was rated highly, scoring 48 out of a possible 50 points.

Hurdles:

  • Language Limitation: The service was available exclusively in English, posing a barrier to non-English speaking patients and physicians.

  • Barriers to Use: Common obstacles included the multi-step activation process, unfamiliarity with the technology, and lack of integration with other clinical workflow tools.

  • Quality and Safety Evaluation: The study highlights the absence of robust mechanisms for evaluating the quality and safety of AI tools in healthcare, underscoring the need for ongoing monitoring and evaluation.




Summary: This pilot project at Kaiser Permanente demonstrated the potential of Ambient AI Scribes to positively impact physician efficiency, patient experience, and documentation quality. Despite notable successes, the project also identified significant hurdles, including language barriers, technical and integration challenges, and the need for comprehensive quality and safety evaluations.





Tierney et al. Ambient Artificial Intelligence Scribes to Alleviate the Burden of Clinical Documentation. NEJM Catalyst. 2024. https://catalyst.nejm.org/doi/full/10.1056/CAT.23.0404 

 

Tips and Tricks: Mastering Few-Shot In-Context Learning


Few-shot in-context learning, introduced by Brown et al. 2020, is a pivotal technique for enhancing the utility of Large Language Models (LLMs) like ChatGPT, especially when dealing with specialized tasks such as those in healthcare. This method involves providing the LLM with a small number of examples (few shots) within the prompt itself, effectively teaching the model the desired task or reasoning style in context before posing the actual question. It's a way to quickly adapt LLMs to specific tasks without extensive retraining, leveraging their pre-existing knowledge base.


What is Few-Shot In-Context Learning? Few-shot in-context learning primes LLMs with a concise set of examples that illustrate how to approach a particular type of problem or question, effectively setting the stage for the model to apply this learned pattern to new, unseen queries. For healthcare professionals, this means that even with the vast and varied nature of medical inquiries, LLMs can be guided to provide relevant, contextually appropriate responses after seeing just a few examples of similar problems solved.


Practical Example:

Imagine using ChatGPT to identify potential drug interactions in a patient's medication list. A few-shot in-context learning prompt might include:


"Given the following drug combinations, identify potential interactions: [Example 1: Drug A and Drug B], [Example 2: Drug C and Drug D]. Now, consider a patient taking Drug E and Drug F. Identify any potential interactions based on the previous examples."


Rather than answering in its default style, the model will now mirror the structure and level of detail demonstrated in the examples when it responds about Drug E and Drug F.
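In practice, a few-shot prompt for a chat-style LLM is often assembled as alternating user/assistant turns that demonstrate the desired question-and-answer format before the real query. The sketch below shows one common way to do this; the example interactions are illustrative placeholders (the actual model call to an LLM API is assumed and not shown, and clinical content would need expert review).

```python
# Illustrative example pairs demonstrating the desired answer format.
few_shot_examples = [
    ("Drug combination: warfarin and aspirin",
     "Potential interaction: increased bleeding risk "
     "(additive anticoagulant/antiplatelet effect)."),
    ("Drug combination: lisinopril and spironolactone",
     "Potential interaction: hyperkalaemia (both can raise serum potassium)."),
]

def build_messages(query):
    """Interleave example question/answer pairs ahead of the real query."""
    messages = [{"role": "system",
                 "content": "You identify potential drug-drug interactions, "
                            "answering in the same format as the examples."}]
    for question, answer in few_shot_examples:
        messages.append({"role": "user", "content": question})
        messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": query})
    return messages

msgs = build_messages("Drug combination: Drug E and Drug F")
print(len(msgs))  # system turn + 2 example pairs (2 turns each) + final query = 6
```

The message list would then be passed to the chat completion endpoint of whichever LLM API you use; the model sees the worked examples as prior conversation and imitates their format for the new combination.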


In this scenario, the model uses the structure and outcomes of the provided examples to understand and perform the task of identifying drug interactions for a new combination. This approach not only tailors the LLM's output to closely match the specific needs of the task but also enhances its reliability and specificity in applications where precision is paramount, such as patient care and treatment planning. Few-shot in-context learning thus empowers healthcare professionals to harness the full potential of AI, ensuring tailored and accurate assistance in their daily decision-making processes.


Thanks for tuning in,


Sameer & Michael
