Sameer Shaikh

Newsletter from The Neural Medwork: Issue 3



Welcome back to the third newsletter of The Neural Medwork. We hope you enjoyed the holiday season.

In this edition of The Neural Medwork, we delve into the transformative power of Transformers in AI, particularly their application in language understanding crucial for healthcare professionals. The featured core concept is the Transformer architecture, the backbone of Large Language Models (LLMs) like ChatGPT. We discuss a relevant research paper which evaluates ChatGPT against physician responses. Additionally, we offer practical tips on training ChatGPT to align with your communication style, ensuring that this powerful tool can be tailored to meet individual needs and preferences effectively.


AI Concept: Transformers in AI - Revolutionizing Language Understanding

After exploring neural networks and LLMs in our previous issues, we now turn to a critical component that powers these advanced systems: the Transformer architecture. As healthcare professionals, understanding Transformers can help us appreciate how AI tools like ChatGPT process and generate language, enhancing their utility in our field.

Transformers: The Powerhouse of Contextual Understanding

At its core, a Transformer is a neural network architecture designed to understand and interpret language with a remarkable sense of context. Unlike traditional models that process words one by one, Transformers handle entire sequences simultaneously - sentences, paragraphs, or even whole articles.

How Transformers Work:

  • Attention Mechanism: The Transformer uses what's called 'attention' to process each word in relation to every other word in a sequence, not just in isolation. By weighing the entire context of a sentence or document, it captures the nuances of language and generates more coherent, contextually appropriate responses.

  • Parallel Processing: Parallel processing allows the model to handle every part of a sequence simultaneously, unlike traditional methods that work through one word at a time. Older sequential models tended to lose track of earlier parts of a text as they moved along (a training problem known as the 'vanishing gradient'); Transformers sidestep this and also learn much faster. In healthcare, this could be likened to a team of doctors each focusing on a different task at the same time, leading to faster, better-coordinated care. Overall, this makes the model more efficient and effective at understanding and working with language.

  • Positional Encoding: This is a technique that gives the model a sense of the order of words in a sentence, something it would not otherwise track because of parallel processing. Imagine a patient's medical history laid out randomly; without knowing the order of events, it is hard to understand the progression of their health. Positional encoding ensures the model recognizes the sequence of words, much like a timeline with a clear sense of 'before and after', making its interpretations and predictions more accurate and meaningful.
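The attention and positional-encoding ideas above can be sketched in a few lines of NumPy. This is a toy illustration under simplifying assumptions (a single attention head, no learned weight matrices), not a production implementation; the sinusoidal encoding follows the standard Transformer formulation.

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal positional encoding: gives each position a unique signature."""
    pos = np.arange(seq_len)[:, None]                # (seq_len, 1)
    i = np.arange(d_model)[None, :]                  # (1, d_model)
    angle = pos / np.power(10000, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angle), np.cos(angle))

def self_attention(x):
    """Scaled dot-product self-attention (single head, no learned weights)."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                    # how strongly each token attends to every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ x                               # context-aware token representations

# Five toy "token embeddings" for a five-word sentence, with position info added
tokens = np.random.rand(5, 8)
contextual = self_attention(tokens + positional_encoding(5, 8))
print(contextual.shape)  # (5, 8): same shape, but each row now mixes in context
```

Note how the output has the same shape as the input: attention does not shorten the sequence, it re-expresses each token as a weighted blend of all the others.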

The Inner Workings of a Transformer: Tokenization to Contextual Understanding

In understanding how Transformers function, let's consider a typical clinical sentence: "The patient has a cough, fever, and shortness of breath for 1 week." When processed by a Transformer, each word in this sentence is initially converted into a "token" – the fundamental unit comprehensible to computers. This process is akin to breaking down a sentence into its constituent parts for analysis. The model then processes each token, examining how it relates to the surrounding words based on its vast training data. This step, known as turning the token into a vector or word embedding, is crucial. It's how the neural network discerns the relationships and nuances between words – essentially, the DNA of a word's meaning in context.
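That first tokenization-and-embedding step can be sketched as follows. This is a deliberately simplified illustration: we use a whitespace tokenizer and a random 16-dimensional embedding table, whereas real models like ChatGPT use subword tokenizers and learned embeddings with thousands of dimensions.

```python
import numpy as np

sentence = "The patient has a cough, fever, and shortness of breath for 1 week."

# Step 1: split the sentence into tokens (real models use subword tokenizers,
# so "shortness" might become "short" + "ness"; whitespace is just for illustration)
tokens = sentence.lower().replace(",", "").replace(".", "").split()

# Step 2: map each token to an integer ID via a toy vocabulary
vocab = {tok: idx for idx, tok in enumerate(sorted(set(tokens)))}
token_ids = [vocab[tok] for tok in tokens]

# Step 3: look up a vector ("word embedding") for each ID - the numeric
# representation the network actually processes
rng = np.random.default_rng(0)
embedding_table = rng.normal(size=(len(vocab), 16))   # 16-dim toy embeddings
embeddings = embedding_table[token_ids]

print(tokens[:4])        # ['the', 'patient', 'has', 'a']
print(embeddings.shape)  # (13, 16): one 16-dim vector per token
```

In a trained model, the embedding table is not random: it has been adjusted during training so that words used in similar contexts end up with similar vectors.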

What makes Transformers particularly remarkable is their ability to process each of these tokens simultaneously, capturing context and patterns efficiently. This simultaneous processing mirrors how a physician's brain functions during a patient consultation. As you gather a patient's history, conduct a physical exam, and interpret their lab results and diagnostics, you're assimilating each piece of information in context, much like fitting together pieces of a puzzle.

Drawing from the initial sentence, as a clinician, you would collate these symptoms to form a differential diagnosis. Adding the context of recent exposure to someone with COVID, for instance, might lead you to consider COVID-pneumonia. Conversely, if you learn the patient recently returned from India and has X-ray findings suggestive of tuberculosis, your diagnosis might shift accordingly. In your brain, you interpret all this information in the appropriate context to generate a differential diagnosis – precisely what Transformers do when they generate their output. They synthesize the context, much like a seasoned clinician, to produce a relevant and coherent interpretation or response.

Conclusion: Transformers represent a significant leap in AI's ability to parse and produce language. They are what give LLMs their sophisticated capabilities, making them highly effective tools in healthcare settings. Understanding the Transformer architecture helps us as healthcare professionals to better utilize these AI tools in our practice, ensuring we stay at the forefront of technology-driven healthcare. For those looking for an excellent resource that explains how Transformers work in more detail, we recommend reading this article from the Financial Times.


Relevant Research Paper

Title: Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum

Purpose: This study aimed to evaluate the ability of an AI chatbot (ChatGPT, released in November 2022) to provide quality and empathetic responses to patient questions, compared to responses from physicians.

Methods:
  • The study was a cross-sectional analysis using a database of questions from Reddit’s r/AskDocs.

  • A total of 195 exchanges (questions and responses) from October 2022, where verified physicians responded to public questions, were randomly selected.

  • Chatbot responses were generated by entering the original questions into ChatGPT.

  • These were evaluated in triplicate by a team of licensed healthcare professionals based on which response was better, the quality of information provided, and the empathy or bedside manner shown.

Key Findings:

  • Evaluators preferred ChatGPT responses over physician responses in 78.6% of the 585 evaluations.

  • Chatbot responses were significantly longer (mean 211 words) than physician responses (mean 52 words).

  • Chatbot responses were rated significantly higher in quality and empathy than physician responses.

  • The prevalence of good or very good quality responses was 3.6 times higher for the chatbot.

  • The prevalence of empathetic or very empathetic responses was 9.8 times higher for the chatbot.


Conclusions:

  • The study found that ChatGPT generated higher quality and more empathetic responses to patient questions on an online forum compared to physicians.

  • The results suggest that AI assistants could potentially aid in drafting responses to patient questions for review by clinicians.

  • Further exploration of this technology in clinical settings is warranted, along with randomized trials to assess its impact on clinician burnout and patient outcomes.

This study highlights the potential utility of AI chatbots like ChatGPT in enhancing communication with patients, though it also underscores the need for careful implementation and further research in clinical settings. The ability of AI to draft empathetic and high-quality responses could be a valuable tool for clinicians, potentially reducing workload and improving patient interaction quality. However, the limitations of the study, particularly the use of an online forum for data and the inherent subjectivity in evaluating responses, suggest the need for cautious and well-considered integration of AI in healthcare communication.

Ayers JW, Poliak A, Dredze M, et al. Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum. JAMA Intern Med. 2023;183(6):589–596. doi:10.1001/jamainternmed.2023.1838


Tips and Tricks: Training ChatGPT on your style

One of the most powerful aspects of LLMs like ChatGPT is that they can tailor their responses to your specific needs. Just as you guide medical students and residents to approach medical tasks and patient presentations in certain ways, you can similarly "educate" ChatGPT to respond in a manner that best suits your needs and preferences.

The Power of Framing and Training ChatGPT

  1. Understanding Your Style: First and foremost, recognize your unique style of communication and information processing. Do you prefer concise, bullet-pointed answers, or more detailed, narrative-style responses? Knowing your preference will help you guide ChatGPT more effectively.

  2. Effective Framing: When posing a question to ChatGPT, don't just ask the question; frame it within the context of your desired response style. For instance, specify the length of the answer you are looking for and what aspects of the answer should be highlighted.

  3. Providing Examples: One of the most effective ways to "train" ChatGPT is by providing examples of responses that align with your style. For instance, if you have a preferred way of explaining a certain condition to patients, show ChatGPT an example of this explanation. This helps the AI understand the tone, level of detail, and structure you prefer.

  4. Trial and Error: Just as with any trainee, there’s a learning curve. Experiment with different prompts and styles until you find what works best for you. Pay attention to what types of responses resonate most with your needs and refine your prompts accordingly. Iteration is key to harnessing the power of ChatGPT.

  5. Specifying Your Needs: Each time you ask ChatGPT a question, be clear about the style in which you want the information presented. If you’re dealing with a complex case, you might say, "ChatGPT, present a comprehensive differential diagnosis for these symptoms, including reasoning for each possibility."
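The five tips above can be bundled into a small, reusable prompt template. The helper below is a hypothetical sketch (the function name and fields are our own, not part of any ChatGPT API); you would paste the resulting string into ChatGPT or send it through whichever interface you use:

```python
def build_prompt(role, task, style, must_include, example=None):
    """Assemble a prompt that frames role, task, desired style, and key content."""
    parts = [
        f"Act as {role}.",
        f"Task: {task}",
        f"Style: {style}",
        "Be sure to cover: " + "; ".join(must_include) + ".",
    ]
    if example:
        # Providing a sample response is one of the most effective ways
        # to steer the model toward your preferred tone and structure.
        parts.append(f"Match the tone and structure of this example:\n{example}")
    return "\n".join(parts)

prompt = build_prompt(
    role="a seasoned physician focused on patient education",
    task="prepare a one-page guide on managing Type 2 diabetes",
    style="plain language, sub-headings with bullet points under each",
    must_include=["dietary advice", "physical activity", "medication adherence"],
)
print(prompt)
```

Keeping a template like this means each of the five tips (style, framing, examples, iteration, specificity) only has to be thought through once; after that, you tweak the fields rather than rewriting the whole prompt.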

Practical Example:

Let’s put these principles to work. Imagine you’re preparing for a patient education session on diabetes management. Your approach is to give clear, actionable advice in simple terms. Your prompt to ChatGPT might look like this:

"ChatGPT, as a seasoned physician with a focus on patient education, I need to prepare a short guide on managing Type 2 diabetes. It should be straightforward, easy for patients to understand, and include dietary advice, physical activity recommendations, and medication adherence tips. Please format it as a single page with the above sub-headings and bullet points under each section.” (If you have a sample of previous education material on diabetes you like you can also attach it as a PDF to provide an example.)

When you explicitly state your needs and preferred style, ChatGPT can generate a response that closely mirrors your unique approach to patient care.

Thanks for tuning in,
Sameer & Michael

