Hongjian Zhou

Newsletter from The Neural Medwork: Issue 4


 

Abstract:


Welcome back to the 4th newsletter of The Neural Medwork. We hope to enrich your AI learning in 2024!


This issue focuses on the diverse Machine Learning methods in AI for healthcare. We'll explore Supervised, Unsupervised, Semi-Supervised, and Reinforcement Learning, highlighting their applications in medical AI. Additionally, we feature a study on the AI system AMIE, comparing it with primary care physicians in diagnostic accuracy. Plus, quick tips on advanced AI interaction techniques. Dive in for your dose of AI advancements in healthcare!


 

AI Concept: Types of Machine Learning in Healthcare AI


As we delve deeper into the nuts and bolts of AI, it's essential to understand that most AI in healthcare is built on Machine Learning (ML), the subfield of AI in which systems learn from data. Over the past weeks, we've explored various types of neural networks and AI architectures, including transformers. Now, it's time to explore the foundational mechanisms by which these networks acquire their capabilities. Machine learning, true to its name, involves training networks on data, and four primary types of learning give these networks their abilities.


Supervised Learning: Specific Training for Precise Tasks


Supervised learning in AI is akin to the educational approach for a junior learner in healthcare. It's about training the network with specific 'labels' or data points. For instance, when teaching a junior learner to read an ECG, we guide them through a systematic approach, focusing on rate, rhythm, axis, and intervals. Similarly, in supervised learning, the AI is trained with labelled data (like ECG readings labelled with specific diagnoses), enabling it to recognize these patterns and make accurate predictions on new, unseen data. In healthcare, supervised learning is ideal for tasks where specific outcomes and clear data labels are available, such as diagnosing specific conditions from medical imaging.
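To make this concrete, here is a minimal, illustrative sketch of supervised learning: a 1-nearest-neighbour classifier trained on made-up (heart rate, QRS width) feature pairs with diagnosis labels. The feature values, labels, and the nearest-neighbour rule are invented for illustration, not clinical reference.

```python
# Supervised learning sketch: a 1-nearest-neighbour classifier trained on
# labelled ECG-style feature vectors (heart rate in bpm, QRS width in ms).
# All values are illustrative, not clinical guidance.

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Labelled training data: (features, diagnosis label)
training_data = [
    ((70, 90), "normal"),
    ((65, 85), "normal"),
    ((150, 90), "tachycardia"),
    ((160, 95), "tachycardia"),
    ((45, 88), "bradycardia"),
]

def classify(features):
    """Predict the label of the closest labelled training example."""
    return min(training_data, key=lambda ex: euclidean(ex[0], features))[1]

print(classify((155, 92)))  # a fast rate lands nearest the tachycardia examples
```

The model never "understands" ECGs; it simply generalizes from the labels it was given, which is why supervised learning needs clear, correctly labelled outcomes.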


Unsupervised Learning: Discovering Patterns Independently


By contrast, unsupervised learning is like handing a learner a set of ECGs and asking them to find patterns without any specific guidance or labels. The learner, or in this case the AI network, examines a vast array of ECGs and starts to classify them based on discovered patterns such as rate, rhythm, and axis. This type of learning can reveal insights that humans haven't recognized before, such as determining a patient's sex from ECG features – something not conventionally taught or recognized in medical training. In the realm of healthcare, unsupervised learning is beneficial for discovering new patterns or correlations in large datasets, potentially leading to novel diagnostic criteria or therapeutic targets.
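As an illustrative sketch of the same idea, the toy k-means loop below groups unlabelled heart-rate values into clusters without ever seeing a label; the algorithm discovers the "slow" vs "fast" grouping on its own. The values are invented for illustration.

```python
# Unsupervised learning sketch: tiny 1-D k-means clustering of unlabelled
# heart rates. No labels are provided; structure emerges from the data alone.

def kmeans_1d(values, k=2, iterations=10):
    # Initialise centroids at the smallest and largest observations
    centroids = [min(values), max(values)]
    clusters = [[] for _ in range(k)]
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        # Assignment step: each value joins its nearest centroid
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        # Update step: move each centroid to its cluster mean
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

heart_rates = [62, 68, 71, 74, 148, 152, 160]
centroids, clusters = kmeans_1d(heart_rates)
print(clusters)  # the slow and fast rates separate into two groups
```

A real system would cluster many ECG features at once, but the principle is the same: grouping without labels.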


Semi-Supervised Learning: Combining Guided and Independent Learning


Semi-supervised learning blends elements from both supervised and unsupervised learning. Here, some aspects of the ECGs, like rate and rhythm, are labelled, but the network is also allowed to independently identify and learn from other patterns in the data. This approach is particularly useful in situations where labelled data is scarce or incomplete. It allows the AI to leverage both the provided labels and its ability to uncover hidden patterns. In healthcare, semi-supervised learning is valuable for enhancing diagnostic accuracy and efficiency, especially in complex cases where not all information is clearly defined.
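One minimal sketch of this idea is self-training: start from a handful of labelled values and let the model pseudo-label the unlabelled rest, growing its own training set. The values and the nearest-centroid rule below are illustrative assumptions, not a clinical method.

```python
# Semi-supervised learning sketch (self-training): two labelled heart rates
# seed the model, which then pseudo-labels the unlabelled values by nearest
# class centroid, expanding the labelled set as it goes. Values illustrative.

labelled = [(60, "normal"), (150, "tachycardia")]   # scarce labels
unlabelled = [64, 70, 145, 158, 72]                 # abundant unlabelled data

def centroids(examples):
    """Mean value per class label."""
    groups = {}
    for value, label in examples:
        groups.setdefault(label, []).append(value)
    return {label: sum(vs) / len(vs) for label, vs in groups.items()}

def self_train(labelled, unlabelled):
    data = list(labelled)
    for value in unlabelled:
        cents = centroids(data)
        # Pseudo-label with the nearest class centroid, then add to the set
        label = min(cents, key=lambda l: abs(value - cents[l]))
        data.append((value, label))
    return data

model = self_train(labelled, unlabelled)
```

The handful of true labels anchors the classes, while the unlabelled data refines where the boundary between them sits.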


Reinforcement Learning: Learning from Feedback


Reinforcement learning in AI can be compared to the process of training a learner using feedback. When the network correctly interprets an ECG, it is 'rewarded,' and when it makes an error, it is 'penalized.' This method enables the AI to continually learn from its experiences, refine its decision-making, and strengthen its analytical capabilities. In healthcare, reinforcement learning is particularly useful for iterative tasks like treatment optimization or adaptive clinical decision support systems. Here, the AI system can learn to make more accurate predictions or recommendations over time, improving patient outcomes and care efficiency.
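As a toy illustration of the reward/penalty loop, the sketch below runs tabular Q-learning on an invented dose-selection task: the agent is rewarded only when it picks the (hidden) best dose and gradually learns to prefer it. The environment, actions, and rewards are all hypothetical, not medical guidance.

```python
import random

# Reinforcement learning sketch: tabular Q-learning on a hypothetical
# dose-adjustment task. The agent tries doses, observes rewards, and updates
# its value estimates until the best action dominates.

random.seed(0)                        # reproducible run
actions = ["low", "medium", "high"]
best_dose = "medium"                  # hidden target the agent must discover
q_values = {a: 0.0 for a in actions}  # learned value estimate per action
alpha, epsilon = 0.5, 0.2             # learning rate, exploration rate

for episode in range(200):
    # Epsilon-greedy: mostly exploit the best-known action, sometimes explore
    if random.random() < epsilon:
        action = random.choice(actions)
    else:
        action = max(q_values, key=q_values.get)
    reward = 1.0 if action == best_dose else 0.0   # feedback from environment
    # Nudge the estimate toward the observed reward
    q_values[action] += alpha * (reward - q_values[action])

print(max(q_values, key=q_values.get))
```

Occasional exploration is what lets the agent stumble onto the rewarded action in the first place; pure exploitation of the initial estimates would never find it.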


Each of these learning types plays a crucial role in the development and implementation of AI in healthcare. By understanding these mechanisms, healthcare professionals can better appreciate the potential applications and limitations of AI in various medical contexts. For example:

  • Supervised Learning is ideal for diagnostic applications where specific outcomes are known, such as identifying malignant tumours in radiology images.

  • Unsupervised Learning could be used to analyze patient data sets to uncover unknown correlations or new disease markers.

  • Semi-Supervised Learning can enhance patient monitoring systems by using labeled data to track vital signs while discovering new patterns indicating deteriorating conditions.

  • Reinforcement Learning is well-suited for dynamic treatment strategies, adjusting medical dosages or treatment plans based on patient responses.

As AI continues to evolve, these learning types will increasingly become integral to enhancing healthcare delivery, improving patient outcomes, and driving medical innovations.


 

Relevant Research Paper


Title: "Towards Conversational Diagnostic AI"


Purpose: The study aimed to develop and evaluate AMIE (Articulate Medical Intelligence Explorer), a large language model-based AI system optimized for medical diagnostic reasoning and conversations. AMIE's performance was compared to that of primary care physicians (PCPs) in a simulated clinical examination setup.


Methodology:


  • Design: Randomized, double-blind crossover study using text-based consultations.

  • Participants: Validated patient actors interacting with either board-certified PCPs or AMIE.

  • Setting: Simulated consultations were structured like Objective Structured Clinical Examinations (OSCEs), involving diverse medical scenarios.

  • Process: Participants engaged in synchronous text-chat consultations. The interactions were then evaluated by specialist physicians and patient actors.

  • Evaluation Criteria: The study focused on multiple dimensions including history-taking, diagnostic accuracy, clinical management, and clinical communication skills.




Key Findings:


  • Diagnostic Accuracy: AMIE demonstrated greater diagnostic accuracy compared to PCPs.

  • Quality of Interaction: AMIE's performance was rated superior on 28 of 32 axes by specialist physicians and on 24 of 26 axes by patient actors.

  • Communication and Empathy: AMIE scored higher in terms of communication quality and empathy.

  • Length of Responses: AMIE's responses were typically more detailed and longer than those of the PCPs.




Conclusion & Limitations: The study revealed the potential of AI systems like AMIE to improve the quality and accuracy of medical consultations, especially in a virtual setting. Two of the authors, Alan Karthikesalingam and Vivek Natarajan, have highlighted the study's limitations in a separate blog post (https://blog.research.google/2024/01/amie-research-ai-system-for-diagnostic_12.html?m=1). The text-chat interface is atypical and not representative of usual clinical interactions, possibly underestimating the value of human conversation. The work is also an initial, exploratory step that needs further research before real-world application, and questions of health equity, fairness, privacy, and robustness have yet to be addressed comprehensively. Additionally, the study's focus on unusual NEJM case reports might not reflect everyday clinical practice, limiting its scope for probing issues like equity or fairness. Nevertheless, this study highlights the promise of AI in aiding clinical diagnosis and potentially, one day, democratizing some aspects of healthcare delivery.


Natarajan, V., Karthikesalingam, A., et al. (2024). "Towards Conversational Diagnostic AI." arXiv. [Online]. Available: https://arxiv.org/abs/2401.05654. [Accessed: 13 Jan. 2024].


 

Tips and Tricks: Chain-of-Thought (CoT) Prompting


Introduced by Wei et al. (2022), chain-of-thought (CoT) prompting guides a Large Language Model (LLM) to process and generate information step by step, similar to human reasoning. This method is crucial in fields where complex reasoning and problem-solving are required, such as assisting clinicians with complex diagnoses. You can combine CoT with many other prompting techniques, such as few-shot prompting and self-consistency, to get significantly better results on complex tasks that require reasoning before responding.


What is Chain-of-Thought: Chain-of-thought (CoT) prompting was initially developed to enhance the reasoning capabilities of Large Language Models (LLMs), such as improving their ability to carry out complex mathematical calculations. The technique guides AI to approach problems through a step-by-step reasoning process, similar to how a human would logically break down a complex issue. For clinicians, this means that when you use LLMs such as ChatGPT to assist with tasks like interpreting a patient's symptoms, analyzing medical data, or formulating differential diagnoses, you should give CoT prompting a try!


Zero-Shot Chain of Thought: Introduced by Kojima et al. (2022), this approach asks the LLM to apply its reasoning abilities without prior specific examples in the given context. It's similar to asking a medical professional to assess a situation they haven't encountered before, relying on their foundational knowledge to reason through it. Using this popular technique is as simple as adding the sentence 'Let's think step by step' to your instructions for ChatGPT.
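In code, the technique amounts to nothing more than appending the trigger phrase to an ordinary instruction before sending it to whichever LLM interface you use; the API call itself is omitted in this sketch, and the clinical wording is just an example prompt.

```python
# Zero-shot chain-of-thought sketch: the only change to a plain prompt is
# the appended trigger phrase. Sending the prompt to an LLM is left out.

def zero_shot_cot(instruction):
    """Append the zero-shot CoT trigger phrase to an instruction."""
    return instruction.rstrip(".") + ". Let's think step by step."

prompt = zero_shot_cot(
    "A 45-year-old non-smoker has had a cough for three months. "
    "List the most likely causes."
)
print(prompt)
```

The resulting prompt nudges the model to lay out intermediate reasoning before committing to an answer, rather than jumping straight to a conclusion.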


Automatic Chain of Thought (Auto-CoT): If you want to teach ChatGPT to talk or think in certain ways, the traditional method requires crafting examples manually to guide the AI, a process that can be both time-consuming and prone to suboptimal results. Auto-CoT, proposed by Zhang et al. (2022), streamlines this by using LLMs to automatically generate the reasoning chains themselves. The process begins by prompting the AI to "think step by step" for a representative question, thereby constructing a reasoning chain for each type of problem it represents.


The Auto-CoT process unfolds in two main stages:

Question Clustering: Here, questions from a given dataset are grouped into several clusters. This helps in organizing the information and setting the stage for more focused reasoning.


Demonstration Sampling: In this stage, a representative question from each cluster is selected, and the AI generates the reasoning chain for it using Zero-Shot-CoT. The aim is to produce simple yet accurate demonstrations for subsequent responses to follow.


One of the key benefits of Auto-CoT is mitigating errors that may arise from manually generated chains. By diversifying the demonstrations, the model reduces the impact of potential mistakes, leading to more reliable outcomes. This approach is particularly valuable when AI is required to handle a wide range of complex and diverse questions, such as making quick and accurate suggestions to assist patient care.
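Under some simplifying assumptions, the two Auto-CoT stages can be sketched as follows: a crude keyword heuristic stands in for the sentence-embedding clustering used in the paper, and the LLM call that would expand each sampled question into a full demonstration is omitted.

```python
# Auto-CoT sketch. Stage 1 clusters questions (here by a toy keyword rule
# instead of embeddings); Stage 2 samples one representative per cluster and
# attaches the zero-shot CoT trigger to build a demonstration prompt.

questions = [
    "What causes a chronic cough in a non-smoker?",
    "What causes chest pain after exercise?",
    "Which tests evaluate a chronic cough?",
    "Which tests evaluate chest pain?",
]

# Stage 1: question clustering (keyword heuristic stands in for embeddings)
clusters = {}
for q in questions:
    topic = "cough" if "cough" in q else "chest pain"
    clusters.setdefault(topic, []).append(q)

# Stage 2: demonstration sampling — one representative question per cluster,
# prompted with the zero-shot CoT trigger (the LLM expansion is omitted)
demonstrations = [
    cluster_questions[0] + " Let's think step by step."
    for cluster_questions in clusters.values()
]
```

Because each cluster contributes one demonstration, the final prompt covers diverse question types, which is what dilutes the impact of any single flawed chain.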



Practical Example:

Let's consider a scenario in healthcare. You need to evaluate the potential causes of a patient's chronic cough. A CoT prompt to ChatGPT could be:


"Consider a 45-year-old patient with a chronic cough lasting for three months, non-smoker, with no significant medical history. List the possible causes step-by-step, starting from the most common to the least common, and explain your reasoning behind each cause."


In this example, ChatGPT is directed to follow a logical, step-by-step approach to differential diagnosis, much like a healthcare professional would. This technique not only aids in comprehensive analysis but also in educating healthcare professionals about AI's reasoning process, making it less like a 'black box' and more interpretable.


Thanks for tuning in,


Sameer & Michael
