Sameer Shaikh

Newsletter from The Neural Medwork: Issue 2


 

Abstract:


Welcome back to the second newsletter of The Neural Medwork. We hope you enjoyed the first edition and would love your feedback on how we can make the content more relatable, understandable, and relevant.

This edition of our newsletter delves into the fascinating world of Large Language Models (LLMs), the driving force behind ChatGPT, Gemini, and LLaMA. We explore their development from pre-training, involving massive text data analysis, to fine-tuning with human feedback, and the critical stage of Reinforcement Learning from Human Feedback (RLHF). Using a medical school analogy, we liken LLMs to medical students evolving through theory and practical experience. While LLMs excel in linguistic tasks, most aren't specifically trained on medical knowledge, a gap being addressed by emerging models. A highlighted study demonstrates the potential of LLMs for improving medical documentation, particularly informed consent, where they outperformed traditional surgeon-generated forms. Finally, we share tips on mastering role-play interactions with ChatGPT, emphasizing the importance of context and role definition for effective AI communication. Stay tuned for our upcoming podcast on GPTs in clinical practice!


 

AI Concept: Large Language Models - The Brains Behind AI & Linguistic Mastery


In the realm of artificial intelligence, Large Language Models (LLMs) are massive neural networks trained on vast collections of language data, much of it sourced from the extensive textual information available on the internet. They are prime examples of Generative AI, whose primary purpose is to create textual outputs, often in response to user prompts. But how exactly do these behemoths of AI operate, and what makes them so adept at their task? Let's dive in.


The Genesis of Large Language Models

At their core, LLMs, including popular ones like ChatGPT, aim to predict the next word in a sequence. It sounds simple, but the process behind it is anything but. The development of these models begins with what's known as pre-training. During this phase, the model is fed a gargantuan amount of text data. It processes this data through the neural networks we discussed previously, gradually setting what we call 'parameters.' These parameters are learned variables that encode relationships between words, the contexts in which certain words are used, grammatical rules, and even nuances like idioms or colloquial expressions. The model "remembers" these patterns through the configuration of its parameters, often referred to as "weights". To put things into perspective, GPT-3, the model behind the original ChatGPT, has 175 billion parameters; GPT-4 is believed to be considerably larger, though OpenAI has not disclosed its size.
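The "predict the next word" idea can be sketched with a toy bigram counter. This is only an illustration of the objective, not how LLMs work internally: a real model learns billions of parameters rather than a lookup table, and predicts from the entire preceding context, not just the previous word.

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for the internet-scale text used in pre-training.
corpus = "the patient has chest pain the patient has a fever the nurse has a chart".split()

# Count how often each word follows each other word (bigram statistics).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("patient"))  # "has" appears most often after "patient"
```

An LLM does the same thing in spirit, but its "counts" are replaced by learned weights that generalize to word sequences it has never seen verbatim.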


From Pre-Training to Fine-Tuning

Once pre-trained (itself a resource-intensive phase involving significant compute power, time, and investment), the model undergoes fine-tuning. The model is now taught how to respond to queries and the various styles it can adopt in its responses. In most commercial LLMs, this fine-tuning involves human guidance, creating what is known as an assistant model. Here, the model is given example questions and answers, and it learns to respond based on the parameters set during pre-training, with human feedback guiding its accuracy and appropriateness.
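Concretely, a fine-tuning dataset is a collection of question-and-answer examples. The sketch below builds one such example using the role/content chat-message convention common to several LLM APIs; the exact schema varies by provider, so treat the field names as illustrative.

```python
import json

# One supervised fine-tuning example: a question paired with the answer we
# want the assistant model to learn to produce.
example = {
    "messages": [
        {"role": "user", "content": "What does RLHF stand for?"},
        {"role": "assistant", "content": "Reinforcement Learning from Human Feedback."},
    ]
}

# Fine-tuning datasets are often stored with one JSON example per line (JSON Lines).
line = json.dumps(example)
```

Thousands of such pairs, written or vetted by humans, are what turn a raw next-word predictor into an assistant that answers questions.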


Reinforcement Learning from Human Feedback (RLHF)

In the third stage of training, the model generates multiple answers to the same prompt, and a human expert decides which is best, a process known as Reinforcement Learning from Human Feedback (RLHF). It's a crucial step in ensuring the model's responses are not just based on data but also aligned with human judgment and practicality.
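In practice, the human rankings are used to train a separate "reward model" that scores answers automatically. The sketch below illustrates only the selection step, with a made-up heuristic standing in for a real learned reward model:

```python
# Hypothetical stand-in for a reward model trained on human rankings.
# A real reward model is itself a neural network, not a hand-written rule.
def reward(answer: str) -> float:
    score = 0.0
    if "aspirin" in answer.lower():
        score += 1.0          # rewards concrete, actionable content
    score -= 0.01 * len(answer)  # penalizes rambling
    return score

candidates = [
    "It depends on many factors and could be many things.",
    "Give aspirin early in suspected myocardial infarction.",
]

# RLHF-style selection: prefer the answer the reward model scores highest.
best = max(candidates, key=reward)
```

The model's weights are then nudged so that answers like `best` become more likely in the future.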


A Medical School Analogy

Imagine a new medical student with access to every significant medical textbook and journal. They absorb all this information, forming associations between concepts – this is the pre-training phase. However, theoretical knowledge alone isn't sufficient for medical practice. The fine-tuning phase is similar to clinical clerkship and residency, where the student applies their knowledge to real cases and is guided by experienced preceptors. A fine-tuned LLM is like a freshly graduated resident – rich in knowledge and practical experience, ready for the real world, though not without room for growth and improvement.


The Scope of LLMs in Medicine

It's crucial to note that most commercial LLMs are not specifically trained on medical knowledge. While there are emerging open-source models with a medical focus (e.g., Meditron), they often lack the extensive parameters and training data of larger, more generalized models. The performance of an LLM is usually tied to the volume of text it's trained on and its number of parameters: the larger the model, the more capable it tends to be. With the rapid progress of AI, however, we are now starting to see smaller novel models perform at surprisingly high levels (e.g., Microsoft's Phi-2 and Mistral AI's models). For those of you familiar with Daniel Kahneman's famous book, "Thinking, Fast and Slow," LLMs are analogous to System 1 thinking: a near-instantaneous process driven by the model's parameters and the goal of finding the next appropriate word.





Evolution of LLMs, image sourced from Momentum Works

 

Relevant Research Paper


Title: Large Language Model-Based Chatbot vs Surgeon-Generated Informed Consent Documentation for Common Procedures


Purpose: The study assesses the efficacy of large language model (LLM)-based chatbots in generating informed consent documentation for surgical procedures, compared with traditional surgeon-generated consent forms. This assessment is crucial for understanding whether LLMs could enhance patient comprehension and the overall process of informed consent in surgical care.


Methodology:

  • Design and Setting: A cross-sectional study conducted at an academic referral center in San Francisco.

  • Participants: Informed consent documents for 6 common surgical procedures (colectomy, coronary artery bypass graft, laparoscopic cholecystectomy, inguinal hernia repair, knee arthroplasty, spinal fusion) were analyzed.

  • Tools: LLM-based chatbot (ChatGPT-3.5, OpenAI) and surgeon-generated documents.

  • Measures: Readability (using scales like Flesch-Kincaid, Gunning Fog index), accuracy, and completeness of the risks, benefits, and alternatives (RBAs) in informed consent documents.
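For readers curious how a readability scale like Flesch-Kincaid is computed, here is a rough sketch. The grade-level formula is standard, but the syllable counter below is a naive vowel-group approximation; published tools use more refined rules.

```python
import re

def count_syllables(word):
    # Crude approximation: count runs of vowels as syllables.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    """Grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59

# Dense clinical prose scores a much higher (harder) grade than plain language.
grade = flesch_kincaid_grade(
    "Postoperative complications include anastomotic dehiscence and thromboembolism."
)
```

Lower grades mean easier reading, which is why the chatbot's simpler RBAs scored better on these scales.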


Key Findings:

  • The LLM-based chatbot generated more readable, complete, and accurate consent documentation than surgeons.

  • Mean readability scores indicated that the chatbot-generated RBAs were less complex than those written by surgeons.

  • In terms of completeness and accuracy, the LLM-based chatbot outperformed surgeons, especially in describing the benefits and alternatives of surgeries.


Conclusion: LLM-based chatbots show promise in improving the quality of informed consent documents, making them more readable and comprehensive. This could ease the documentation burden on physicians and provide clearer information to patients. However, the findings also suggest the need for careful integration of LLMs into clinical settings, particularly ensuring that physicians review and edit LLM-generated documents for accuracy and reliability.





Decker H, Trang K, Ramirez J, et al. Large Language Model-Based Chatbot vs Surgeon-Generated Informed Consent Documentation for Common Procedures. JAMA Netw Open. 2023;6(10):e2336997. doi:10.1001/jamanetworkopen.2023.36997



 

Tips and Tricks: Mastering Role Play with ChatGPT


When interacting with Large Language Models like ChatGPT, one of the most effective strategies to obtain precise and useful responses is through role playing. By clearly defining a role for ChatGPT, you can tailor its responses to fit specific contexts and needs. Here are some key aspects to consider when setting up a role for ChatGPT:

  1. Specify the Role: Clearly define the role you want ChatGPT to assume. It could be a specialist, an educator, a researcher, etc. This sets the tone and scope of the responses.

  2. Highlight Attributes: Include attributes relevant to the role. For example, if you choose an educator, mention the years of experience or specific expertise areas, like emergency medicine or pediatrics.

  3. State the Objective: Clarify the goal of the interaction. Is it to explain a complex concept, provide a differential diagnosis, or offer a step-by-step guide?

  4. Describe the Response Style: Indicate how you want ChatGPT to respond. Should it be in a simplified manner for students, detailed for professionals, or structured as a lesson?

  5. Contextualize the Scenario: If applicable, provide context or a scenario. For medical cases, this could include patient demographics or specific clinical settings.
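The five elements above can be assembled programmatically, which is handy when you reuse the same role across many prompts. A minimal sketch, with field names of our own invention (not an official API):

```python
# Assemble the five role-play elements into a single prompt string.
def build_role_prompt(role, attributes, objective, style, scenario=""):
    parts = [
        f"You are {role}, {attributes}.",
        f"Your task: {objective}.",
        f"Respond {style}.",
    ]
    if scenario:
        parts.append(f"Context: {scenario}")  # optional clinical scenario
    return " ".join(parts)

prompt = build_role_prompt(
    role="a master educator and ER physician",
    attributes="with over 20 years of experience",
    objective="teach medical students a systematic approach to life-threatening causes of chest pain",
    style="as a structured lesson, organized by organ system",
)
```

The resulting string can then be pasted into ChatGPT or sent as the system message of an API call.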


Example in Action: Educator Teaching Chest Pain Evaluation

Let’s apply these principles in a practical scenario. Suppose you’re an educator wanting to use ChatGPT to teach medical students about evaluating chest pain in the ER. Here’s how you could frame your prompt:


"ChatGPT, you are a master educator and ER physician with over 20 years of experience. Your task is to teach medical students how to systematically think about life-threatening causes of chest pain in the ER, ensuring no critical diagnosis is missed. Use a systems approach to guide the differential diagnosis – considering cardiac, pulmonary, and general causes. The aim is to provide a structured and comprehensive learning experience."


This prompt is far more effective than a simple "What is the differential diagnosis for chest pain?" It guides ChatGPT to respond in a way that’s tailored to the educational context, focusing on systematic learning and critical thinking – essential skills for medical students.

Thanks for reading, and we hope to publish our second podcast, on evaluating GPTs in clinical practice, before Christmas!


Thanks for tuning in.


Sameer & Michael


