Hongjian Zhou

Newsletter from The Neural Medwork: Issue 15



Welcome to the 15th edition of the Neural Medwork Newsletter. This edition builds on our journey through NLP and focuses on two more areas that are crucial in healthcare: text classification and sentiment analysis. Next, we present Google's groundbreaking research, Capabilities of Gemini Models in Medicine. Lastly, we introduce a prompting technique that enhances large language models' summarization abilities, and show you how to use it in your daily clinical tasks.


Core Concept: Simplifying Text Classification and Sentiment Analysis in Healthcare

Structured vs. Unstructured Data 

  • Structured Data (10-20%): This includes information that is systematically organized and easy to search, often found in electronic health records (EHRs). Examples include patient age, diagnosis codes, medication lists, and laboratory values. 

  • Unstructured Data (80-90%): This comprises information that is not organized in a predefined manner, making it harder to collect and analyze. It includes doctors' notes, imaging reports, and transcripts from patient interactions.

The majority of healthcare data is unstructured, which is where NLP shines by helping to extract and analyze valuable information from texts that are not easily searchable.

BERT (Bidirectional Encoder Representations from Transformers) is a revolutionary NLP model designed to understand the context of words in a sentence more effectively than previous models. Here’s a simple way to understand how BERT works based on a healthcare example:

Imagine a doctor’s note that states, "The patient complained of severe abdominal pain but exhibited no signs of distress." Traditional NLP models might struggle to understand the contrast between "complained of severe abdominal pain" and "no signs of distress" because they read the sentence in order, like a human reading a book.

BERT, however, reads the entire sentence at once, forwards and backwards (that's the "Bidirectional" part), to grasp the full context before making sense of each part. This allows BERT to understand that the patient’s verbal report of pain contrasts with their calm appearance, a nuance that might inform diagnosis and treatment in ways that traditional analyses might miss.

How Does NLP Classify Sentiment?

Sentiment analysis in NLP is the process of determining the emotional tone behind a series of words. Here’s how it works in a step-by-step, easy-to-understand way:

  1. Input Text: Start with feedback from a patient: “I am unhappy with the long waiting times but happy with the medical care received.”

  2. Processing: The text is broken down into manageable parts, like sentences or phrases.

  3. Sentiment Analysis: The system identifies keywords or phrases that are indicative of positive or negative sentiments. For instance, "unhappy" signals a negative sentiment, while "happy" indicates a positive sentiment.

  4. Classification: Based on the words’ emotional indicators, the system classifies each segment of the text as positive, negative, or neutral.

In our example, NLP would help parse out that the patient is dissatisfied with the wait times but pleased with the quality of medical care, providing valuable dual feedback for service improvement.
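The four steps above can be sketched as a minimal, rule-based pass in Python. Note that this is an illustration only: the keyword lists are assumptions, and a real system would use a trained model such as BERT rather than keyword matching.

```python
# Minimal rule-based sentiment sketch mirroring the four steps above.
# Keyword lists are illustrative assumptions, not a clinical lexicon.

POSITIVE = {"happy", "pleased", "satisfied", "excellent"}
NEGATIVE = {"unhappy", "dissatisfied", "poor", "frustrated"}

def classify_segment(segment: str) -> str:
    """Steps 3-4: find sentiment keywords and label the segment."""
    words = {w.strip(".,").lower() for w in segment.split()}
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

def analyze(feedback: str) -> list:
    """Steps 1-2: split the input text into segments, then classify each."""
    segments = [s.strip() for s in feedback.replace(" but ", "|").split("|")]
    return [(s, classify_segment(s)) for s in segments]

feedback = ("I am unhappy with the long waiting times "
            "but happy with the medical care received.")
for segment, label in analyze(feedback):
    print(f"{label}: {segment}")
```

Running this on the example prints a negative label for the waiting-time segment and a positive label for the care segment, matching the dual feedback described above.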

Practical Application: Enhancing Healthcare with NLP

Let’s apply these concepts using the same patient feedback example:

  • Text Classification: BERT could help categorize the feedback into relevant sections like "service quality" and "clinical care."

  • Sentiment Analysis: An NLP tool would detect the negative sentiment about wait times and the positive sentiment about the clinical care.

This dual capability of NLP helps healthcare facilities understand not just what issues are being mentioned but also how patients feel about different aspects of their care, which is crucial for improving service and patient satisfaction.
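To make the dual capability concrete, here is a small sketch that tags each feedback segment with both a topic and a sentiment. The topic keyword sets are assumptions made up for this example, not a real taxonomy; a deployed system would use a fine-tuned classifier such as BERT for both tasks.

```python
# Illustrative dual pass: topic classification plus sentiment, per segment.
# All keyword sets are assumptions; production systems use trained models.

TOPICS = {
    "service quality": {"waiting", "wait", "appointment", "scheduling"},
    "clinical care": {"care", "treatment", "medication", "diagnosis"},
}
POSITIVE = {"happy", "pleased", "satisfied"}
NEGATIVE = {"unhappy", "dissatisfied", "poor"}

def tag(segment: str) -> tuple:
    """Return (topic, sentiment) for one feedback segment."""
    words = {w.strip(".,").lower() for w in segment.split()}
    topic = next((t for t, kws in TOPICS.items() if words & kws), "other")
    if words & NEGATIVE:
        sentiment = "negative"
    elif words & POSITIVE:
        sentiment = "positive"
    else:
        sentiment = "neutral"
    return topic, sentiment

for seg in ["I am unhappy with the long waiting times",
            "happy with the medical care received"]:
    print(seg, "->", tag(seg))
```

The first segment is tagged ("service quality", "negative") and the second ("clinical care", "positive"), giving a facility both the what and the how-they-feel in one pass.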


Relevant Research Paper: Capabilities of Gemini Models in Medicine

This recent paper from Google Research and Google DeepMind introduces "Med-Gemini," an advanced AI system designed specifically for the healthcare field. This system builds on the earlier Gemini models, enhancing them to better handle the complex and varied data used in medicine, such as written records, images, and long patient histories.

Overview of Med-Gemini: Med-Gemini improves upon traditional medical software by being able to understand and process medical information in ways that mimic human reasoning. It excels in combining different types of medical data and analyzing extensive medical histories or detailed patient information, which is vital for doctors making diagnoses or treatment plans.

Key Enhancements of Med-Gemini:

  1. Adaptation to Medical Needs: Med-Gemini is fine-tuned to better recognize and interpret the specific types of data encountered in healthcare, such as diagnostic images or complex patient notes.

  2. Handling Multiple Data Types: The model can effectively work with various forms of medical data simultaneously. For example, it can consider a patient’s written medical history alongside recent test images to provide more accurate assessments.

  3. Analyzing Extensive Patient Information: It can sift through long and detailed patient records to focus on relevant medical history, helping healthcare providers make informed decisions without getting overwhelmed by too much information.

Real-World Applications and Future Steps: Med-Gemini has shown promising results in initial tests, such as summarizing medical texts and creating detailed patient referral letters, performing these tasks with accuracy comparable to or better than human experts. However, despite its potential, further extensive testing and validation are crucial before it can be fully integrated into clinical practice. This is necessary to ensure the safety and reliability of the AI system when used in real-world medical settings.

Conclusion: Med-Gemini marks a significant step forward in using AI to support healthcare providers by handling complex data and providing insights that are immediately useful in clinical environments. This development indicates the growing role of AI in enhancing the efficiency and effectiveness of medical care, although careful implementation and continuous oversight are necessary to ensure these tools benefit patient care responsibly.

Saab, K., Tu, T., Weng, W. H., Tanno, R., Stutz, D., Wulczyn, E., ... & Natarajan, V. (2024). Capabilities of gemini models in medicine. arXiv preprint arXiv:2404.18416.


Tips and Tricks: Directional Stimulus Prompting

Directional Stimulus Prompting, introduced by Li et al. (2023), is a technique to help Large Language Models (LLMs) generate precise summaries. It employs a tunable policy language model (LM) trained to craft specific stimuli that guide the LLM to focus its responses on the desired outcomes. This makes it particularly useful for summarizing complex medical data concisely and relevantly.

What is Directional Stimulus Prompting? This method refines LLM output by training a policy LM to generate targeted prompts or hints, which then guide the main LLM in producing summaries that are directly aligned with clinical needs. This is particularly advantageous for healthcare professionals who require quick, accurate interpretations of extensive medical documentation or patient data.

Practical Example:

Imagine a clinical scenario where a doctor needs an updated summary of a patient's ongoing treatment for chronic heart disease amidst a plethora of medical records. The process would unfold as follows:

  1. Initial Prompt to Policy LM: "Generate a stimulus focusing on the patient's heart disease treatment progress over the last six months."

  2. Stimulus Generation: The policy LM outputs a specific prompt, such as, "Highlight any changes in medication, recent cardiac test results, and noted side effects."

  3. Summary Production: With the stimulus provided, the primary LLM now focuses on extracting and summarizing only the most relevant information from the patient's records, such as medication adjustments, the latest echocardiogram results, and any new symptoms or adverse reactions reported.
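The three-step flow above can be sketched in Python with both models stubbed out as plain functions. The prompt templates and stub outputs are assumptions for illustration only; in practice, the policy LM is a small trained model and the summarizer is a call to an actual LLM.

```python
# Directional Stimulus Prompting, stubbed end to end.
# policy_lm and main_llm are hypothetical stand-ins for real model calls.

def policy_lm(task: str) -> str:
    """Step 2: the policy LM turns the task into a directional stimulus (stubbed)."""
    return ("Highlight any changes in medication, recent cardiac "
            "test results, and noted side effects.")

def main_llm(prompt: str) -> str:
    """Step 3: the primary LLM summarizes, guided by the stimulus (stubbed)."""
    return f"[summary generated from prompt: {prompt[:60]}...]"

def summarize_with_stimulus(task: str, records: str) -> str:
    """Chain the two stages: stimulus generation, then guided summarization."""
    stimulus = policy_lm(task)
    prompt = (f"Summarize the patient records below.\n"
              f"Hint: {stimulus}\n\n{records}")
    return main_llm(prompt)

task = ("Generate a stimulus focusing on the patient's heart disease "
        "treatment progress over the last six months.")
print(summarize_with_stimulus(task, "<patient records here>"))
```

The key design point is that the hint is generated by a separate, tunable model rather than written by hand, so the guidance can be trained to match what clinicians actually need.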

This targeted summary aids the doctor in quickly understanding the critical aspects of the patient’s current treatment status without sifting through less pertinent details. Directional Stimulus Prompting thus enhances the efficiency and effectiveness of medical consultations, ensuring healthcare providers have immediate access to key information necessary for informed decision-making. This method exemplifies how advanced AI prompting techniques can significantly impact practical medical applications, improving care delivery and patient outcomes.

Thanks for tuning in,

Sameer & Michael

