  • Hongjian Zhou

Newsletter from The Neural Medwork: Issue 14



Welcome to the 14th edition of the Neural Medwork Newsletter. Today, we’re introducing an increasingly vital area of AI: Natural Language Processing, or NLP. You might already be familiar with NLP if you’ve ever used voice-to-text features, search engines, or digital assistants like Siri or Alexa. But what is NLP exactly, and why is it crucial in healthcare? Next, we're unpacking another impactful research paper, on an AI-enabled electrocardiography alert intervention. Lastly, we present an advanced prompting technique, Active-Prompt, which takes more effort to apply fully but delivers some of the best results we've seen in LLM applications.


Core Concept: The Essentials of Natural Language Processing (NLP) in Healthcare

What is Natural Language Processing (NLP)?

Natural Language Processing (NLP) is a technology that enables computers to understand, interpret, and respond to human language in a meaningful way. Think of it as teaching a machine to comprehend and interact with human language as naturally as possible, turning everyday language into a format that computers can grasp.

Why is NLP Important in Healthcare?

NLP has transformative potential in healthcare. It automates the extraction of crucial information from medical records, assists in patient communication, and helps in analyzing large volumes of patient feedback. These capabilities are vital for enhancing diagnostic accuracy, improving patient engagement, and streamlining documentation processes, allowing healthcare providers to devote more time to patient care.

How Does NLP Work? A Simple Breakdown

Let's explore how NLP works using a relatable analogy—a child learning to read and understand text:

  1. Listening and Reading (Input): Just like a child learns to read by first recognizing letters and words, NLP begins with input—either written text or spoken language. This input is the raw data that the system will process, much like how a child looks at a sentence on a page.

  2. Understanding (Processing and Analysis): As a child progresses from recognizing words to understanding sentences, NLP involves several steps to comprehend the input:

    1. Segmentation: Dividing text into manageable pieces, such as sentences or phrases.

    2. Tokenization: Splitting text into individual words or tokens, similar to how a child learns to identify words in a sentence.

    3. Removing Stop Words: Filtering out common words (like “and”, “the”, etc.) that add little semantic value, focusing on the keywords.

    4. Stemming and Lemmatization: Reducing words to their base or root form, helping the machine understand that words like “running” and “ran” are forms of “run.”

    5. Part-of-Speech Tagging: Identifying whether a word is a noun, verb, adjective, etc., which helps in understanding grammatical and logical relationships within the text.

    6. Named Entity Recognition (NER): Recognizing names of people, organizations, locations, medical codes, and other specific data.

    7. Vectorization: Transforming text into a numerical format that machines can understand, similar to a child learning to associate meanings with words.

  3. Responding (Output): Finally, just as a child learns to answer questions about a story they’ve read, an NLP system uses the information from the previous steps to generate responses. This could be summarizing information, answering queries, or even translating text into another language.
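To make the middle steps concrete, here is a deliberately naive Python sketch of the preprocessing stages above (tokenization, stop-word removal, stemming, and bag-of-words vectorization). The stop-word list, suffix-stripping stemmer, and sample clinical notes are simplified illustrations of our own; real systems use dedicated NLP libraries rather than hand-rolled rules like these.

```python
import re
from collections import Counter

# A tiny illustrative stop-word list (real lists contain hundreds of words).
STOP_WORDS = {"and", "the", "a", "an", "of", "to", "is", "was", "with"}

def tokenize(text):
    """Tokenization: split text into lowercase word tokens."""
    return re.findall(r"[a-z]+", text.lower())

def stem(token):
    """Very naive suffix-stripping stemmer (illustration only)."""
    for suffix in ("ing", "ed", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def preprocess(text):
    """Tokenize, remove stop words, then stem what remains."""
    return [stem(t) for t in tokenize(text) if t not in STOP_WORDS]

def vectorize(docs):
    """Bag-of-words vectorization over a shared vocabulary."""
    processed = [preprocess(d) for d in docs]
    vocab = sorted({t for doc in processed for t in doc})
    counts = [Counter(doc) for doc in processed]
    # Counter returns 0 for absent words, so every vector has the same length.
    return vocab, [[c[w] for w in vocab] for c in counts]

# Hypothetical clinical notes, for illustration only.
notes = [
    "Patient reports chest pain and shortness of breath.",
    "No chest pain reported; patient breathing normally.",
]
vocab, vectors = vectorize(notes)
print(vocab)
print(vectors)
```

Notice that "reports" and "reported" both collapse to the stem "report", so the two notes share that vocabulary entry even though the surface words differ; this is exactly the normalization that stemming buys us before vectorization.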

Over the next few weeks, we will delve deeper into each of these processes. We'll explore how machines handle complex language tasks and how these capabilities can be harnessed in healthcare to improve both operational efficiencies and patient care. Stay tuned as we unpack the nuts and bolts of NLP, making this powerful technology accessible and applicable to our daily medical practices.


Relevant Research Paper: AI-enabled electrocardiography alert intervention and all-cause mortality: a pragmatic randomized clinical trial


The primary objective of this trial was to evaluate whether AI-generated alerts for ECGs could reduce all-cause mortality among patients compared to conventional care practices. The study focused on a 90-day period following the initial ECG, providing a concise yet impactful observation window to assess the AI system's effectiveness. 


This RCT was conducted at 2 hospitals in Taiwan and involved nearly 16,000 patients. Patients were randomly assigned to receive either conventional ECG interpretations or AI-enhanced alerts that informed their physicians about potential high-risk findings. The design was single-blind: patients were unaware of their group assignment, while the treating physicians received the alerts. The AI system analyzed ECGs to identify patients at high risk of adverse events, using a combination of clinical data and ECG features to generate risk scores.


  • Overall Impact: Patients randomized to receive AI alerts had a 17% reduction in all-cause mortality compared with those receiving conventional care.

  • High-Risk Patients: The most significant impact was observed in the pre-specified high-risk group, where there was a 31% reduction in mortality and an absolute reduction of 7 deaths per 100 patients at high risk.

  • Subgroup Analysis: The benefit of AI alerts was consistent across all examined subgroups, underscoring the generalizability of the AI system's efficacy.


This study not only showcases the ability of AI to reduce mortality rates significantly but also highlights its impact on patient management, leading to more assertive care and increased ICU transfers for high-risk patients. However, it also acknowledges the limitation of not fully understanding the mechanisms through which AI alerts influence these outcomes. Despite this, the trial marks a pivotal moment in healthcare, setting a new standard for AI's role in improving patient outcomes and potentially guiding future clinical pathways and interventions.

Lin, CS., Liu, WT., Tsai, DJ. et al. AI-enabled electrocardiography alert intervention and all-cause mortality: a pragmatic randomized clinical trial. Nat Med (2024).


Tips and Tricks: Enhancing LLM Adaptability with Active-Prompt in Healthcare AI

Active-Prompt, as proposed by Diao et al. (2023), offers a novel approach to refining the application of Large Language Models (LLMs) in healthcare by dynamically adapting to task-specific prompts. This method addresses the limitations of traditional Chain-of-Thought (CoT) prompting, which relies on a static set of human-annotated exemplars that may not always align perfectly with the requirements of diverse medical tasks.

What is Active-Prompt: Active-Prompt enhances the flexibility of LLMs by generating multiple potential answers for a given set of training questions and then evaluating these responses based on an uncertainty metric (such as the degree of disagreement among answers). The most uncertain questions are identified and selected for further human annotation. This process ensures that the examples used for training are highly relevant and effective for the specific tasks at hand, thus improving the model's performance and applicability in real-world scenarios.

Practical Example: Imagine using an LLM equipped with Active-Prompt in a clinical setting to determine the most appropriate management strategy for a patient with complex comorbid conditions. The process would involve:

  1. Initial Query: The LLM is queried about different management strategies, using a few initial CoT examples related to comorbid patient care.

  2. Response Generation and Evaluation: The model generates multiple management options, and the responses are assessed to identify areas of high uncertainty or disagreement.

  3. Human Annotation: The most uncertain management strategies are then annotated by medical experts to provide clear, reasoned explanations.

  4. Refinement and Application: These newly annotated examples are integrated back into the model, refining its ability to provide precise and clinically relevant advice based on the most current and expert-validated reasoning.
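The uncertainty step above can be sketched in a few lines of Python. This is a minimal illustration of disagreement-based selection, assuming we have already sampled several answers per question from an LLM; the questions, answers, and function names below are hypothetical, not part of the Diao et al. implementation.

```python
from collections import Counter

def disagreement(answers):
    """Uncertainty score: 1 minus the frequency of the modal answer.
    0.0 means all sampled answers agree; higher values mean more disagreement."""
    modal_count = Counter(answers).most_common(1)[0][1]
    return 1 - modal_count / len(answers)

def select_for_annotation(sampled_answers, n=1):
    """Rank questions by disagreement and return the n most uncertain ones,
    which would then be sent to clinical experts for CoT annotation."""
    ranked = sorted(sampled_answers,
                    key=lambda q: disagreement(sampled_answers[q]),
                    reverse=True)
    return ranked[:n]

# Hypothetical answers, sampled k=5 times per question from an LLM.
sampled_answers = {
    "How should anticoagulation be adjusted in a patient with CKD?":
        ["reduce dose", "hold drug", "reduce dose", "switch agent", "reduce dose"],
    "What is first-line therapy for stable angina?":
        ["beta blocker"] * 5,
}

uncertain = select_for_annotation(sampled_answers, n=1)
print(uncertain)
```

The anticoagulation question is selected because its sampled answers disagree, while the angina question, where every sample agrees, needs no expert attention; this is how Active-Prompt concentrates scarce annotation effort where the model is least reliable.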

This iterative and responsive approach not only ensures that the AI's recommendations are grounded in expert knowledge but also allows the model to evolve continuously as new information becomes available or as patient scenarios change. Active-Prompt is particularly valuable in healthcare, where patient cases can vary widely and the stakes are high, requiring an AI system that can adapt quickly and accurately to new challenges.

Thanks for tuning in,

Sameer & Michael

