
Newsletter from The Neural Medwork: Issue 10


 

Abstract:

Welcome to the 10th edition of the Neural Medwork Newsletter, where we continue to unravel the complexities of AI models, making them accessible and relevant to our healthcare professional readership. Building on our exploration of Gradient Boosting, this edition shines a spotlight on XGBoost, a powerhouse among machine learning models, particularly in its application to healthcare data analytics. Next, we introduce a groundbreaking paper on forecasting future medical events with generative AI. Finally, we leave you with another advanced trick, Tree of Thoughts, to help you fully leverage the power of generative AI models.


 

Core Concept: XGBoost


XGBoost and Its Innovations

XGBoost, or eXtreme Gradient Boosting, advances the principles of Gradient Boosting by incorporating techniques that improve speed, efficiency, and model performance. While retaining the core idea of sequentially correcting the errors of previous models, XGBoost introduces key features that distinguish it from traditional Gradient Boosting.





Parallel Processing and Its Advantages

Unlike the strictly sequential tree building of traditional Gradient Boosting, XGBoost adds a form of parallel processing. This does not mean building multiple trees at once; rather, the work inside each tree, such as evaluating candidate splits across features, is done in parallel across CPU cores. This optimization means XGBoost can swiftly handle larger datasets, a common characteristic in healthcare settings, improving the model's speed and scalability.
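As a rough illustration (not taken from the newsletter itself), the Python xgboost package exposes this behaviour through a couple of constructor arguments; the parameter values below are placeholders rather than tuned settings:

```python
from xgboost import XGBClassifier

# Histogram-based split search, evaluated across all available CPU cores.
model = XGBClassifier(
    tree_method="hist",  # fast, binned split finding
    n_jobs=-1,           # use every core for within-tree parallelism
    n_estimators=300,
    max_depth=4,
)
```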


Dealing with Missing Data: A Closer Look

XGBoost's approach to missing data is particularly innovative. Rather than requiring upfront imputation, XGBoost automatically learns the best way to handle missing values during training. For instance, if a patient's crucial bloodwork value is missing, the model tries routing such cases down each branch of a split and adopts the direction that optimizes the outcome. This allows XGBoost to manage missing data adaptively, ensuring robust performance even when information is incomplete.
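Here is a minimal sketch of that behaviour with the Python xgboost package, using a toy array in which one bloodwork value is missing (the data and labels are invented for illustration):

```python
import numpy as np
from xgboost import XGBClassifier

# Toy feature matrix: [age, lactate]; np.nan marks a missing bloodwork value.
X = np.array([
    [54.0, 2.1],
    [67.0, np.nan],  # lactate never measured for this patient
    [71.0, 4.8],
    [49.0, 1.2],
])
y = np.array([0, 1, 1, 0])  # invented labels: 1 = deteriorated

# No imputation step: XGBoost treats np.nan as "missing" by default and
# learns, per split, which branch missing values should follow.
model = XGBClassifier(n_estimators=50, max_depth=3)
model.fit(X, y)
print(model.predict_proba(X)[:, 1])
```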


Regularization Techniques: Simplifying Complexity

Regularization is a technique used to prevent overfitting, a scenario where a model performs well on training data but poorly on unseen data. XGBoost incorporates two main types of regularization:

  • L1 Regularization (Lasso): Simplifies the model by selectively including only the most important features, reducing the risk of overfitting.

  • L2 Regularization (Ridge): Penalizes the weight of less important features without eliminating them entirely, contributing to model simplicity and robustness.

These regularization techniques ensure that XGBoost models maintain high performance without becoming overly complex or overly sensitive to noise in the training data.
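In the Python xgboost package these two penalties map onto the reg_alpha (L1) and reg_lambda (L2) constructor arguments; the values below are illustrative starting points, not recommendations:

```python
from xgboost import XGBClassifier

# L1 (reg_alpha) pushes the weights of uninformative leaves towards zero,
# while L2 (reg_lambda) shrinks all leaf weights smoothly; together they
# discourage the trees from fitting noise in the training data.
model = XGBClassifier(
    reg_alpha=0.5,   # L1 / Lasso-style penalty
    reg_lambda=2.0,  # L2 / Ridge-style penalty
    max_depth=4,
    n_estimators=200,
)
```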


XGBoost in Action: Predicting Severe Sepsis

Imagine leveraging the vast amount of data in your Electronic Medical Record (EMR) system to predict severe sepsis in hospitalized patients. With variables ranging from lab results and microbiology to patient demographics and historical health records, the challenge lies in processing this data effectively to identify at-risk patients early.


The XGBoost Advantage

An XGBoost model can meticulously analyze this diverse dataset, learning from every piece of data, including admissions where important lab values are missing. By evaluating splits in parallel and using its built-in handling of missing information, XGBoost builds a comprehensive model capable of detecting subtle patterns and indicators of severe sepsis risk. This predictive capability means clinicians can be alerted to potential sepsis cases earlier, allowing for timely intervention and potentially saving lives.
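To make the workflow concrete, here is a hedged end-to-end sketch. The column names, the sepsis_label outcome, and the 0.3 alert threshold are all invented for illustration; a real EMR extract and clinically validated thresholds would look very different:

```python
import numpy as np
import pandas as pd
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical EMR extract: one row per admission, NaN where labs were not drawn.
emr = pd.DataFrame({
    "age":          [72, 45, 63, 81, 38, 57],
    "heart_rate":   [112, 88, 101, 95, 79, 120],
    "lactate":      [3.9, np.nan, 2.4, np.nan, 1.1, 4.6],
    "wbc_count":    [15.2, 7.8, np.nan, 11.4, 6.9, 18.3],
    "sepsis_label": [1, 0, 0, 1, 0, 1],  # invented outcomes
})

X = emr.drop(columns="sepsis_label")
y = emr["sepsis_label"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=0, stratify=y
)

model = XGBClassifier(
    n_estimators=300, max_depth=4, learning_rate=0.05,
    reg_alpha=0.5, reg_lambda=2.0,  # regularization, as above
    tree_method="hist", n_jobs=-1,  # parallel split finding
)
model.fit(X_train, y_train)         # NaNs are routed automatically at each split

risk = model.predict_proba(X_test)[:, 1]
print("AUROC on held-out admissions:", roc_auc_score(y_test, risk))
print("Flag for clinician review:", risk > 0.3)  # illustrative alert threshold
```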




 

Relevant Research Paper: Foresight—a generative pretrained transformer for modelling of patient timelines using electronic health records: a retrospective modelling study


The core mission of the Foresight study was to harness both the structured and unstructured data contained within EHRs. Traditional approaches often overlook the rich narrative found in unstructured text, such as physician notes, which can provide a granular view of a patient's medical history. Foresight aims to integrate these diverse data formats to create a comprehensive predictive model of patient health trajectories.


Crafting the Foresight Model

Foresight was built on a sophisticated framework comprising four integral components:

  1. CogStack for initial data retrieval and preprocessing.

  2. Medical Concept Annotation Toolkit (MedCAT), which structures the free-text information from EHRs into coded concepts.

  3. Foresight Core, the heart of the model: a deep-learning algorithm specialized in biomedical concept modelling.

  4. Foresight Web Application, an interactive platform that enables users to engage with the model's predictions.

This model was meticulously trained using data from over 800,000 patients across three hospital datasets, spanning both physical and mental health domains.


Results: Precision in Prediction

Foresight's accuracy in forecasting future medical events was impressive. The authors measured it with a metric called precision@10: the proportion of cases in which the true next medical concept appeared among the model's top 10 forecast candidates. The model achieved a precision@10 of 0.68, 0.76, and 0.88 on the KCH (King's College Hospital), SLaM (South London and Maudsley), and MIMIC-III (US Medical Information Mart for Intensive Care) datasets, respectively.
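As a quick illustration of how such a metric can be computed (our own sketch, not code from the paper), assuming each test case provides the true next concept and the model's ranked candidate list:

```python
def precision_at_k(true_next, ranked_candidates, k=10):
    """Fraction of cases where the true next concept appears in the top-k forecasts.

    true_next:         list of ground-truth next medical concepts, one per case
    ranked_candidates: list of candidate lists, best-first, one per case
    """
    hits = sum(
        truth in candidates[:k]
        for truth, candidates in zip(true_next, ranked_candidates)
    )
    return hits / len(true_next)

# Toy example with invented concept labels:
truth = ["sepsis", "acute kidney injury"]
forecasts = [["pneumonia", "sepsis", "uti"], ["ckd", "heart failure", "copd"]]
print(precision_at_k(truth, forecasts, k=10))  # 0.5
```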

Furthermore, when clinicians evaluated the model on 34 synthetic patient timelines, Foresight demonstrated a remarkable relevance rate of 97% for its top predicted disorder, showcasing its potential utility in practical clinical settings.


Conclusion and Implications: Towards a Healthier Tomorrow

The development and success of Foresight mark a significant milestone on the journey towards integrating AI into healthcare. The model opens new avenues for real-world risk forecasting, virtual trials, and the study of disorder progression. It has the potential to revolutionize how we understand and interact with patient data, leading to more informed and personalized care strategies.





Kraljevic, Z., Bean, D., Shek, A., Bendayan, R., Hemingway, H., Yeung, J. A., Deng, A., Baston, A., Ross, J., Idowu, E., Teo, J. T., & Dobson, R. J. B. (2024). Foresight—a generative pretrained transformer for modelling of patient timelines using electronic health records: A retrospective modelling study. The Lancet Digital Health, 6(4), e281–e290.

 

Tips and Tricks: Tree of Thoughts (ToT)


The introduction of Tree of Thoughts (ToT) presents a groundbreaking advance in leveraging Large Language Models (LLMs), addressing the limitations of simpler prompting techniques in complex problem-solving scenarios. Developed by Yao et al. (2023) and Long (2023), ToT is a sophisticated framework that extends beyond the linear progression of chain-of-thought (CoT) prompting, enabling LLMs to navigate through a structured tree of intermediate thoughts or reasoning steps.


What is ToT: ToT revolutionizes how LLMs tackle multifaceted tasks by maintaining a "tree" structure, where each branch represents a potential reasoning path or an intermediate step towards a solution. This method allows for strategic exploration and evaluation of different thought paths, akin to a clinician considering multiple differential diagnoses before concluding. By incorporating search algorithms like breadth-first and depth-first search, ToT systematically explores various reasoning avenues, enhancing the LLM's problem-solving capabilities with lookahead and backtracking features.





Practical Example: Consider a scenario where you're using an LLM to determine the best treatment plan for a patient with multiple chronic conditions and drug allergies. Applying the ToT approach might involve:


Initial Thought: Evaluating the patient's current medications and conditions.


Branching Thoughts: Exploring alternative treatments while considering drug interactions and patient allergies.


Lookahead and Backtracking: Assessing the potential outcomes of each treatment path and revisiting earlier thoughts based on new insights or contradictions.


This framework mimics the dynamic and iterative thought process healthcare professionals engage in when diagnosing and planning treatment, providing a more natural and effective approach to decision-making. ToT not only improves the depth and breadth of the AI's reasoning capabilities but also enhances its transparency and reliability, making it a valuable tool for healthcare professionals seeking AI assistance in complex clinical scenarios.
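For readers who like to see the mechanics, below is a minimal breadth-first ToT sketch in Python, following the steps above. The llm() function is a placeholder for whichever chat-completion API you use, and the prompts, search depth, and beam width are illustrative choices rather than settings from the ToT papers:

```python
def llm(prompt: str) -> str:
    """Placeholder: wire this to your preferred chat-completion API."""
    raise NotImplementedError

def propose_thoughts(problem: str, partial_plan: str, n: int = 3) -> list[str]:
    # Ask the model for n candidate next reasoning steps (branches of the tree).
    reply = llm(
        f"Problem: {problem}\nPlan so far: {partial_plan}\n"
        f"Suggest {n} distinct next steps, one per line."
    )
    return [line.strip() for line in reply.splitlines() if line.strip()][:n]

def score_thought(problem: str, plan: str) -> float:
    # Ask the model to rate how promising a partial plan is (the lookahead signal).
    reply = llm(
        f"Problem: {problem}\nPlan: {plan}\n"
        "Rate this plan's promise from 0 to 10. Reply with a number only."
    )
    try:
        return float(reply.strip())
    except ValueError:
        return 0.0

def tree_of_thoughts(problem: str, depth: int = 3, beam_width: int = 2) -> str:
    # Breadth-first search over partial plans: expand, score, keep the best few.
    frontier = [""]
    for _ in range(depth):
        candidates = [
            (plan + "\n" + thought).strip()
            for plan in frontier
            for thought in propose_thoughts(problem, plan)
        ]
        # Keep only the most promising partial plans; weaker branches are abandoned.
        frontier = sorted(
            candidates, key=lambda p: score_thought(problem, p), reverse=True
        )[:beam_width]
    return frontier[0]

# Usage (once llm() is connected to a real model):
# plan = tree_of_thoughts(
#     "Choose a treatment plan for a patient with CKD, AF, and a penicillin allergy."
# )
```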


Thanks for tuning in,


Sameer & Michael
