
Newsletter from The Neural Medwork: Issue 22

mohammadkhan96

Updated: Mar 2

 

Abstract:

Welcome Back to The Neural Medwork!

Over the past few editions, we’ve been examining different generative AI models and their impact on the healthcare system. In this issue we examine reinforcement learning and then further explore how LLMs affect the diagnostic assessment of patients in clinical practice. We’re diving into a study from JAMA Network Open that challenges assumptions about the impact of artificial intelligence (AI) on clinical practice. The study explored whether AI-assisted tools could enhance diagnostic accuracy in primary care. While the results showed no significant improvement in objective outcomes, the subjective experiences of clinicians using AI highlight both its potential and its current limitations.


Let’s explore these nuanced findings and what they reveal about the human-AI partnership in healthcare, along with an example of how you might use AI in a clinical scenario.


 

The AI Concept: Understanding Reinforcement Learning for Healthcare

Reinforcement learning (RL) is a type of machine learning where an AI learns to make decisions by interacting with its environment. It works much like trial-and-error learning in humans. Here’s a simple analogy and its healthcare application:


How Reinforcement Learning Works

  1. The Agent: Think of RL as training a "learner," or agent, which could be an AI model designed to make clinical decisions.

  2. The Environment: The agent operates in an environment, which could be a simulated healthcare scenario or a real-world electronic health record (EHR) system.

  3. Actions and Rewards: The agent takes actions (e.g., suggesting a treatment or ordering a diagnostic test) and receives feedback in the form of rewards or penalties:

    • A reward signifies a successful action (e.g., choosing the correct treatment).

    • A penalty occurs when the action leads to poor outcomes (e.g., unnecessary tests or a delayed diagnosis).

  4. Learning from Experience: Over time, the agent learns which actions maximize rewards, refining its decision-making process.
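The four steps above can be sketched in a few lines of Python. This is a deliberately tiny, hypothetical setup: the two patient states, two actions, and all reward values are invented for illustration, and the update is a simplified one-step (bandit-style) version of the Q-learning rule.

```python
import random

# Toy reinforcement learning sketch: the "agent" learns which action to
# take in each patient state. States, actions, and rewards are invented.
STATES = ["stable", "deteriorating"]
ACTIONS = ["monitor", "treat"]

# Hypothetical reward table: treating a deteriorating patient is rewarded;
# treating a stable one, or merely monitoring a deteriorating one, is penalized.
REWARDS = {
    ("stable", "monitor"): +1,
    ("stable", "treat"): -2,
    ("deteriorating", "monitor"): -2,
    ("deteriorating", "treat"): +5,
}

def train(episodes=5000, alpha=0.1, epsilon=0.2):
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}  # learned action values
    for _ in range(episodes):
        state = random.choice(STATES)
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        reward = REWARDS[(state, action)]
        # One-step value update: nudge the estimate toward the observed reward.
        q[(state, action)] += alpha * (reward - q[(state, action)])
    return q

q = train()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(policy)  # the agent learns to monitor stable patients and treat deteriorating ones
```

After training, the agent's policy reflects the reward structure: it has learned from experience rather than from labeled examples, which is the defining feature of RL.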

A Healthcare Example

Imagine an RL-based AI system designed to optimize ICU care:

  • Goal: Improve patient outcomes while minimizing interventions.

  • Actions: The agent decides when to administer medications, adjust ventilator settings, or order lab tests.

  • Feedback: The system evaluates the long-term effects of its decisions, such as reduced complications or shorter hospital stays, and adjusts its strategy accordingly.
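One way to picture the "Feedback" step in the ICU example is as a reward function that scores the long-term effects of the agent's decisions. The weights below are entirely hypothetical and chosen only to show how such a function could balance outcomes against intervention burden.

```python
# Hypothetical reward shaping for the ICU example: good long-term outcomes
# are rewarded, complications and intervention burden are penalized.
# All weights are invented for illustration.
def icu_reward(complications: int, length_of_stay_days: int,
               interventions: int) -> float:
    outcome_term = -10.0 * complications      # penalize complications heavily
    stay_term = -1.0 * length_of_stay_days    # shorter stays score better
    burden_term = -0.5 * interventions        # discourage unnecessary interventions
    return outcome_term + stay_term + burden_term

# A short, complication-free stay outranks a long, complicated one.
print(icu_reward(0, 3, 4) > icu_reward(2, 10, 4))  # True
```

In a real system the reward design would be one of the hardest parts: the weights encode clinical priorities, and a poorly chosen trade-off can push the agent toward undesirable behavior.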


Practical Benefits of RL in Healthcare

Reinforcement learning has the potential to revolutionize healthcare by:

  • Personalizing Treatment: RL systems could tailor interventions to individual patients by learning from their unique responses.

  • Optimizing Resource Use: By focusing on actions that maximize patient outcomes, RL could help minimize unnecessary tests and treatments.

  • Improving Training Simulations: RL-powered simulations could train healthcare professionals by modeling complex scenarios, offering real-time feedback based on actions taken.


Challenges to Overcome

While RL holds great promise, its application in healthcare faces hurdles:

  • Ethical Concerns: Ensuring patient safety during RL training is critical, as learning through trial and error carries risks.

  • Data Quality: RL systems require vast amounts of high-quality data to function effectively, which may not always be available.

  • Interpretability: Understanding why an RL system made a particular decision remains a challenge, especially in high-stakes environments.

 

Relevant Research Spotlight

The study, titled Large Language Model Influence on Diagnostic Reasoning: A Randomized Clinical Trial (Goh et al., JAMA Network Open, 2024), randomized primary care clinicians to diagnose a variety of cases. Half used AI assistance, while the other half did not.


Objective Results:

  • Diagnostic accuracy rates were nearly identical: 71.7% in the AI-assisted group versus 71.4% in the non-assisted group.

  • Decision-making time and confidence levels were also comparable between the groups.

Subjective Findings:

  • Increased Confidence: Many clinicians reported feeling more assured in their decisions when AI confirmed their initial impressions, particularly in challenging cases.

  • Decision Validation: The AI tool was often described as a “second set of eyes,” providing validation for diagnoses clinicians were leaning toward.

  • Perceived Efficiency: While objective results didn’t show faster decision-making, some clinicians felt the tool helped streamline their thought process by narrowing down possibilities.


These insights underscore the complex role of AI in medicine: its value often lies as much in perception and support as in measurable outcomes.


The Human Side of AI Integration

This study highlights the importance of understanding the clinician experience when integrating AI into healthcare. Here are three key themes that emerged from the study’s qualitative findings:

  1. Confidence Building: For many clinicians, the AI served as a psychological safety net, particularly in high-stakes or ambiguous cases. Even when its suggestions mirrored their own thinking, the reassurance was valuable.

  2. Cognitive Load Management: By offering a structured list of possibilities, the AI tool helped clinicians organize their diagnostic process, even if it didn’t directly improve outcomes.

  3. Frustrations with Mismatched Suggestions: Some clinicians felt that the AI’s recommendations were too generic or irrelevant, highlighting the need for tools that adapt to specific clinical contexts.

These experiences remind us that while AI may not always transform outcomes, it can profoundly impact how clinicians feel about their work.

Reference: Goh E, Gallo R, Hom J, et al. Large Language Model Influence on Diagnostic Reasoning: A Randomized Clinical Trial. JAMA Netw Open. 2024;7(10):e2440969. doi:10.1001/jamanetworkopen.2024.40969

 

Tips and Tricks for Navigating AI in Practice

If you’re considering AI tools in your clinical practice, here are some tips inspired by this study:

  1. Use AI to Validate, Not Lead: Let AI suggestions complement your clinical judgment rather than dictate it. Trust your instincts while considering AI as a secondary perspective.

  2. Embrace the Confidence Boost: Even when AI suggestions match your thinking, the added validation can reinforce your decision-making process, especially in complex cases.

  3. Be Critical of Irrelevant Outputs: Don’t hesitate to disregard suggestions that don’t align with your patient’s presentation. Feedback to developers can help refine AI systems over time.

Practical Example: Using OpenEvidence to Determine When to Start Anticoagulation After tPA

 

Scenario:

You're working in the emergency department when a 67-year-old male arrives with acute ischemic stroke symptoms. His NIH Stroke Scale (NIHSS) score is 8, and a CT scan confirms no hemorrhage. Because symptom onset was within 2 hours, you administer IV tPA (alteplase) per stroke protocol. However, you now need to determine when to start anticoagulation, because the patient has new-onset atrial fibrillation (AF) with a high CHA₂DS₂-VASc score.
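For readers less familiar with the score mentioned above, CHA₂DS₂-VASc assigns standard point values to stroke risk factors in AF. The function below uses those standard weights; the comorbidities plugged in for this patient (hypertension, diabetes) are assumed for illustration, since the scenario only states his age, sex, and stroke.

```python
# Illustrative CHA2DS2-VASc calculator using the standard point values.
def cha2ds2_vasc(age: int, female: bool, chf: bool, htn: bool,
                 diabetes: bool, prior_stroke_tia: bool,
                 vascular_disease: bool) -> int:
    score = 0
    score += 2 if age >= 75 else (1 if 65 <= age <= 74 else 0)  # age bands
    score += 1 if female else 0            # sex category
    score += 1 if chf else 0               # congestive heart failure
    score += 1 if htn else 0               # hypertension
    score += 1 if diabetes else 0          # diabetes mellitus
    score += 2 if prior_stroke_tia else 0  # stroke/TIA/thromboembolism scores 2
    score += 1 if vascular_disease else 0  # vascular disease
    return score

# 67-year-old male, counting the index stroke plus assumed hypertension
# and diabetes (hypothetical details for this scenario):
print(cha2ds2_vasc(age=67, female=False, chf=False, htn=True,
                   diabetes=True, prior_stroke_tia=True,
                   vascular_disease=False))  # 5
```

A score this high supports long-term anticoagulation, which is exactly why the timing question after tPA matters.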

Step 1: Querying OpenEvidence

 

To get an evidence-based recommendation, you use OpenEvidence, a platform that synthesizes high-quality clinical research and guidelines. OpenEvidence is free for all clinicians and functions much like ChatGPT, except that it backs each answer with high-yield clinical resources so that you can verify the results yourself.

 

Search Query:

"When do you start anticoagulation after tPA for acute ischemic stroke?"

 

Step 2: Reviewing the Evidence

OpenEvidence provides current guidelines and clinical trial data relevant to your query. The response includes:

 

  1. AHA/ASA Guidelines (2021):

    • Recommend delaying the initiation of anticoagulation for at least 24 hours after tPA administration to reduce the risk of hemorrhagic transformation.

  2. Studies on Early vs. Delayed Anticoagulation:

    • Data from the RAF-NOAC and ELAN trials suggest that starting oral anticoagulation between 2 to 4 days after stroke may be safe in mild to moderate strokes (NIHSS ≤ 8).

    • However, in severe strokes (NIHSS ≥ 17), delaying anticoagulation to 7-14 days is generally recommended due to a higher risk of hemorrhagic conversion.

  3. Specific Guidance Based on Stroke Severity:

    • NIHSS 0-8 (Mild Stroke) → Start after 2-4 days

    • NIHSS 9-16 (Moderate Stroke) → Start after 4-7 days

    • NIHSS ≥ 17 (Severe Stroke) → Start after 7-14 days
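The severity-based timing rule above is simple enough to express as a lookup function. This is an illustrative sketch of the bands as summarized in this newsletter, not clinical guidance or a real decision-support tool.

```python
# Map an NIHSS score to the suggested window for starting anticoagulation
# after tPA, per the severity bands summarized above (illustrative only).
def anticoagulation_window(nihss: int) -> str:
    if nihss <= 8:
        return "start after 2-4 days"    # mild stroke
    elif nihss <= 16:
        return "start after 4-7 days"    # moderate stroke
    else:
        return "start after 7-14 days"   # severe stroke

print(anticoagulation_window(8))  # "start after 2-4 days"
```

For the patient in this scenario (NIHSS 8), the function lands in the mild-stroke band, matching the reasoning in Step 3 below.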

Step 3: Applying the Evidence to the Patient

 

Your patient has an NIHSS of 8 (mild stroke). Based on the OpenEvidence findings and current guidelines:

  • You should wait at least 24 hours after tPA before starting anticoagulation.

  • The optimal window for starting anticoagulation in mild strokes is 2-4 days post-stroke, per ELAN trial data.

 

Given this, you hold anticoagulation on Day 1, monitor the patient for hemorrhagic transformation, and consider starting a direct oral anticoagulant (DOAC) on Day 2 or 3 based on follow-up imaging. Most clinicians currently use resources like UpToDate, where clinical answers must be found by reading full articles; with OpenEvidence, you can simply ask specific questions and have refined, evidence-based answers right at your fingertips.

 

The Future of AI in Clinical Decision-Making

The subjective experiences of clinicians in this study emphasize a vital truth: AI tools must do more than provide accurate suggestions. They must integrate seamlessly into the cognitive workflows and emotional landscapes of clinicians. Future developments should prioritize:

  • Personalization: Tailoring AI suggestions to match the clinician's style and the specifics of the case at hand.

  • Feedback Loops: Allowing clinicians to provide real-time feedback to improve the relevance of AI outputs.

  • Enhanced Contextual Awareness: Designing AI systems that understand the nuances of each clinical scenario, offering more targeted and actionable insights.

By addressing these areas, AI can evolve from being a mere tool to becoming a trusted collaborator in patient care.

 

Closing Thoughts

The findings of this study and practical example highlight both the promise and the current limitations of AI in medicine. While objective outcomes may not yet show significant improvements, the subjective experiences of clinicians point to the potential for AI to build confidence, streamline workflows, and support decision-making in subtle but meaningful ways.

As we continue exploring AI’s role in healthcare, the focus must remain on refining these tools to align with the needs of clinicians and, ultimately, the patients they serve. Stay tuned for more insights into the evolving intersection of technology and healthcare!


Thanks for joining us,

Mohammad, Sameer & Michael

 
