Using Generative AI To Boost Empathy In Medical Students And Medical Doctors

In today’s column, I will be exploring how generative AI enters into a seemingly age-old matter concerning the nature and role of empathy among medical doctors and medical students.

Here’s the backstory.

A general belief is that seasoned physicians and even newbie budding doctors ought to be leaning into being empathetic toward their patients. Medical schools have taken up this mantle and sought to train medical students on the precepts of empathy in hopes of launching them into a career steeped in an embodiment of empathy. A wide range of training methods have been used, including group discussions, individual readings, self-reflection, in-person training exercises, online computer-based simulations such as the use of VR (virtual reality), training films, and the like.

A shiny new tool has ridden into town with quite a bit of gusto and brash bravado.

Turns out that the latest AI, particularly generative AI, can be a catalyst for the empathetic aspirations of existing doctors and of the soon-to-be newest generation of medical doctors.

The use of generative AI for this specific purpose is still in its infancy. This makes sense since we have only had advanced levels of generative AI for a short while so far. The earlier versions of generative AI were not nearly fluent enough to make a substantive difference in the empathy enablement realm. Some prior uses were rickety and regrettably left an adverse impression of what well-devised generative AI can do. For those who might have tried generative AI for this worthy purpose in the past, take a deep breath and, with a refreshed view, allocate some open-minded time to do a relook.

Modern-day generative AI is a world of difference and nowadays provides impressive empathy-enabling facets. That being said, the manner and methods of attaining success with these latest large language models (LLMs) are a bit more complicated than might seem at first glance. It is easy to say that generative AI can be an added tool in the empathy enablement process, but numerous twists and turns can muddle or possibly undercut the desired journey. As they say, look before you leap. Do your due diligence and make sure that you’ve laid out a suitable path for achieving success when leveraging generative AI as an empathy enablement mechanism.

I will approach this weighty discussion in the following way. First, I will take a look at the recurring realization that empathy for medical professionals is a vital matter. Research has shown that empathy can make a demonstrable lift in how patients feel about their medical interactions and that, by and large, quality of care and medical outcomes can be tied to the empathetic quality of the attending physician. That’s why infusing empathetic practices into medical students is a notable yearning. Get them into an empathic mode at the get-go. Hopefully, empathy will be imprinted from the very beginning and carried throughout their medical career.

After covering the fundamental basis for having medical doctors embrace empathy, my next focus in this discussion will be on the nature of empathy per se. Some say that empathy cannot be readily defined. You are either empathetic or you are not. It is instinctive. If that were the case, the goal of training empathetic enablement would seem fruitless. An alternative view, and one that I strongly support, is that you can train empathetic enablement and we would be missing a huge opportunity by simply shrugging our shoulders and settling with the notion that someone either has it or they don’t.

The next stage of this analysis consists of examining how generative AI can be an empathy enabler in a medical profession context. Besides explaining how and why this is useful, I will provide you with various illustrative examples via the use of a widely and wildly popular generative AI app known as ChatGPT. You likely have heard of ChatGPT or perhaps used it. The AI maker OpenAI introduced ChatGPT into the public sphere in November 2022 and the rest is history, as one might say. There are an estimated 100 million weekly active users of ChatGPT. That’s an impressive number and indicative of the vast use of this particular generative AI app.

I will finish up the discussion with some suggested tips and next steps.

Get yourself ready for an exciting and thought-provoking ride.

Empathy As Essential To Medical Doctors

Let’s begin at the beginning.

An often-quoted remark that is attributed to Sir William Osler, a legendary figure in the training of physicians and a pioneer in shifting medical students from lectures-only training to include patient-based clinical encounters, purportedly said this:

  • “Listen to the patient; he [she] is telling you the diagnosis.”

One interpretation of the statement is that medical students and medical doctors can readily fall into a mental trap of distinctly not listening to their patients. This can happen due to the rushed nature of being in medicine. Clock in a patient and quickly move to the next. No time for listening is allowed or encouraged. Another downfall is viewing a patient as though they were a machine. You look at the patient in a strictly anatomical fashion. Just gauge what ails them and invoke an exceedingly mechanical solution to their woes.

Perhaps the most dreadful of the interpretations is that there is a gut-felt belief that no genuine need to listen to the patient exists. Consider this line of exasperating reasoning. What do patients know about medicine? Nothing. What might a patient tell you about their malady? Nothing useful. The famous TV show House vividly showcased that patients apparently lie, cheat, throw you off your game, and otherwise proffer nonsensical utterances. It is clearly a waste of valuable medical time and scarce resources to listen to patients. Ergo, do the “right” thing and opt overtly to not listen to patients, and furthermore, don’t even let them open the door to talking because it just encourages them to divert from the authentic medical endeavor at hand.

The question arises of how to turn around that kind of reasoning and inspire a listening methodology. In addition, if medical students don’t already harbor such dismissive preconceptions, perhaps we can start them freshly on a path that won’t later be stalled or diverted into a non-listening mode. Or we will at least have given them a fighting chance of being ready for the alluring temptations and gravitational forces that will seek to push them toward a non-listening mindset.

A considered modern version of the Hippocratic Oath that was devised in 1964 by Dr. Louis Lasagna, Dean of the School of Medicine at Tufts University, tries to highlight that the act of listening, or actually being empathetic, needs to be at the core of becoming and being a medical doctor. Here are two salient excerpts from that Hippocratic Oath:

  • “I will remember that there is art to medicine as well as science, and that warmth, sympathy, and understanding may outweigh the surgeon’s knife or the chemist’s drug.”
  • “I will remember that I do not treat a fever chart, a cancerous growth, but a sick human being, whose illness may affect the person’s family and economic stability. My responsibility includes these related problems, if I am to care adequately for the sick.”

Those words were true then and are certainly still vital now, perhaps even more so in an era of fast-paced medicine that is replete with lots of handoffs, volumes of paperwork, and tightly timed stipulations for performing medical services.

Empathy Enablement During Medical School

A modern-day research study on fostering empathy while in medical school provides several handy points on this serious and significant topic. The research paper I’d like to reference is entitled “Empathy in Medical Education: Its Nature and Nurture — a Qualitative Study of the Views of Students and Tutors” by William Laughey, Jane Atkinson, Alison M Craig, Laura Douglas, Megan EL Brown, Jessica L Scott, Hugh Alberti, Gabrielle Finn, and was published in the Medical Science Educator, October 2021.

Here are some mindful points (excerpts):

  • “Empathy is difficult to define, but the ability to empathize is key to co-existing and co-operating with others. Clinical empathy requires the physician to align to the thoughts and feelings of the patient in what has been described as a moment of ‘crossing over’.”
  • “There is a general consensus that empathy includes cognitive, affective, action, and moral components, though researchers disagree as to the relative contributions made by each of the components.”
  • “Research supports this — when patients sense empathy from their physician, they report greater satisfaction with the consultation and are likely to enjoy better health outcomes. Physicians too report greater satisfaction when consultations are rated as more empathic.”

There are useful takeaways in those salient points that I’d like to emphasize.

First, as I earlier noted, suitably making use of empathy leads to greater satisfaction by the patient and likely better medical outcomes. Interestingly, and importantly, the medical doctor is likely to indicate a higher level of satisfaction too. This makes sense. They were able to better aid their patient, plus they personally became more engaged and were not simply a bystander or distant observer of their patient.

Lest you disregard the need for doctors to be satisfied with their work, doing so would be sadly undercutting a quiet but tremendous part of what keeps them in the game of medicine, namely, being able to take pride in their work and earnestly believe that they are working for the betterment of humankind. That’s a huge motivation and will help in confronting the arduous and vexing trials and tribulations of being in the medical profession.

Another notable point in the excerpted statements is that empathy tends to involve a kind of mental crossover undertaken by the medical professional as pertains to their patients.

The deal is this. A medical doctor or a medical student has to discern how to put themselves into the shoes of their patients. This is not easy to do. The patients being seen are bound to be from all walks of life and the medical professional might have little or no experience in their own life that compares. Fortunately, and as I alluded to earlier, identifiable techniques and methods can be used to aid in undertaking a smattering of a crossover, doing so to some reasonable degree that can provide keen insights into the patient and their circumstances.

A big and persistent concern that faces medical schools is that there is a sour pattern associated with empathy, or shall I call it anti-empathy, that tends to arise once a medical student reaches the latter portions of their medical training. The researchers in the aforementioned paper noted this disturbing phenomenon (excerpts):

  • “Research suggests that students perceive the formal teaching of empathy to decline in the senior years of medical school, giving way to a curricula bias which prioritizes the biomedical aspects of clinical care.”
  • “In addition to this, as students experience the harsh realities of working in busy healthcare settings, they come to recognize that there is minimal emotional support for frontline staff and witness role models distancing themselves to cope with the hardships of the job.”

Let’s briefly unpack those remarks.

Imagine that a medical school does a wonderful job in the early days of medical education to foster an empathetic perspective in their medical students. Good job, congrats. But this might be a temporary fix. Further down the road, in the latter stages of the medical program, perhaps the assumption is made that the empathy “thing” was already accomplished. No need to push it any further. As such, the schooling aims to pound away solely on biomedical details. Will the medical student still retain their prior learning and engagement in empathetic practices of medicine? Maybe some will, and perhaps a lot won’t.

Adding more fuel to that fire, imagine further that the role models that the medical students are admiring are physicians and other medical professionals who either never thought empathy was useful, or who over years of agonizing devaluation of empathy have let it fade from view. Is a medical student to somehow look past that result and still cling to the earlier encouraged empathetic values? That is a hard pill to swallow and can readily drain the empathy enablement from their tired and exhausted minds and bodies.

That’s why there is much discussion in the medical education realm about formal curriculum, informal curriculum, hidden curriculum, and so on. There might be a formal, head-held-high proclamation about empathy, while the under-the-table, ground-level practical perspective says don’t be overly concerned about empathy. If you can fit it in, fine, but if not, that’s fine too. In some respects, this often produces so-called tick-the-box empathy. The medical student is asked if they have been empathetic that day with their patients. They tick the box that says yes. The mentoring doctor confirms the yes with yet another tick-the-box affirmation. The paperwork looks dandy.

The empathy, well, probably not so much.

You might be tempted to assume that these issues underlying empathy enablement in medical school might only be confined to a few programs here or there. Sorry, that is not the case. Meta-analyses that scour a wide breadth of medical literature tend to suggest that the problem is pervasive. I’ll highlight one such study in a moment.

Besides the various logistics and influencing factors, another sizable question is what kind of instruction or methods of education and training can get medical students into an empathy mode as medical professionals. If you aim to foster empathy, what approach will get you the biggest bang for the buck? There might be methods that seem miraculous but only have a transitory impact. You want to somehow instill empathy enablement sustainably. A form of permanence is needed, else empathy decay or the roguish powers of depleting empathy will prevail.

A comprehensive meta-synthesis recently reported on these matters. In a research study entitled “Medical Students’ Perspectives on Empathy: A Systematic Review and Metasynthesis” by Emmanuel Costa-Drolon, Laurence Verneuil, Emilie Manolios, Anne Revah-Levy, and Jordan Sibeoni, Academic Medicine, January 2021, here’s what they determined (excerpts):

  • “Empathy, thus, has become a major issue in medical school instruction. While producing empathetic physicians is a clearly established objective of medical schools, empathy curricula vary widely from school to school.”
  • “The authors systematically searched 4 databases through June 17, 2019, for qualitative studies reporting medical students’ perspectives on empathy in medical school. They assessed article quality using the Critical Appraisal Skills Program, and they applied thematic analysis to identify key themes and synthesize them.”
  • “Collectively, the literature seems to indicate that empathy is a concept difficult to understand and fully explain. The teaching of empathy remains a major unresolved issue, specifically how to teach empathy to medical students to ensure empathy in future physicians’ practice.”

A key point is that not only is it hard to define empathy, but it is also hard to discern what methods will do best at instilling empathy, especially on a sustained basis. You want to make sure that the medical students can weather the storm. They will encounter lots of harsh winds, blustery snow, and formidable thunderous attacks upon a desire to embrace and retain an empathetic medical mindset and professional disposition.

I’d like to bring up an important mental framework regarding empathy and then we’ll be ready to move into the next stage of this discussion including generative AI. The mental framework involves a clarifying conception that casts empathy as akin to a hydraulic process or model. This will be valuable when considering the advent and adoption of generative AI for empathy enablement in medical doctors and medical students.

Hydraulic Model Of Empathy

When you pour water into a container, you are witnessing the awesomeness of hydraulics. As children, we learn about hydraulics almost without explicit instruction. You tilt a cup filled with water and let it flow into a bucket or similar container. Will the water overfill the bucket? If not, how much remaining space is available? Can you calculate how many additional cups are needed to fill the bucket? Suppose the bucket has a small hole. What does this do to the calculus of the water, the cups, and the container?

Adults take all of this for granted. Until you see a toddler playing out these difficulties, you don’t realize that you’ve already discovered the nuances of hydraulics. They are ingrained in your mind. Most of the time, you don’t explicitly think about how you think about hydraulics. Only if you are presented with some outlier problems do you start to invoke the precepts that seem readily at hand in your mind.

Why all this talk about hydraulics?

Because one means of conceiving of empathy entails adopting a hydraulics metaphor or model.

Allow me to elaborate.

A medical school student eagerly starts their first year of medical school. The medical program aims to get this student and all other students up-to-speed on empathy and empathetic thinking. Let’s say that we want to attain an empathy level on some scale that we’ll label a 10 as the topmost empathetic capability as a medical professional. Our goal is to get all the medical students to at least an 8. Sure, we might wish for them all to be 10s, but realistically we will accept that they end up around 8 or more. We definitely don’t want any that are less than an 8. That would be disappointing.

Okay, so the medical school student that I just noted is supposed to become at least an 8.

Do we have a long tough haul to get to an 8 for this student, or is it an easy hop?

The answer is that you can’t really say because we don’t know where the student resides on the empathy scale at this juncture. Perhaps the student is a 7. If so, becoming an 8 should be a breeze. Imagine if the student is already a 10; we would presumably aim to make sure they don’t slip downward and be delighted they are already a 10. If this student is currently a 1, we seem to have a momentous task on our hands. Getting to an 8 might either be rough or perhaps nearly impossible given the time and resources in place to accomplish the empathy enablement.

This mention of a scale associated with empathy is essentially a hydraulics metaphor or model. Recall the child that tipped the cup to pour water into a bucket. In a sense, perhaps crudely, you could assert that the empathy of a medical student is subject to hydraulics. They have some semblance of empathy when they walk in the door of medical school. This empathy will seemingly be further enabled. The mindfulness of empathy rises. At some point, the empathy might wane, such as the reported studies of how medical students in the latter stages tend to lessen their empathy.

I trust that you can see the utility of a hydraulics perspective on empathy.

A medical school student might have empathy leakage, as it were, during the latter stages of a medical school program. If so, and assuming we want their empathy to remain at whatever level was earlier achieved, something needs to be done to refill the empathy. An assumption that empathy, once reaching a particular level, will always remain at that level is in defiance of a conventional sense of hydraulics. Something can happen that drains, evaporates, or otherwise diminishes empathy.

One debate is whether empathy is in the head or the heart. I won’t get into that heated topic here. Instead, let’s go with the hydraulics model. A medical student has formed some ascertained degree of empathy as related to medical work. We aren’t going to say whether it is in the head or the heart. From an external perspective, we are going to focus on how much empathy they embody. Wherever it is, is where it is.

Another debate is whether empathy is purely intrinsic. The viewpoint is that a person has it or they don’t. You can’t turn a horse into a fish. Some retort that a more suitable idea is that you can turn a caterpillar into a butterfly. Everyone is amenable to embracing empathy. For some, it is harder to go up the curve. For some, once up the curve, it is harder to stay there. Anyway, let’s presume that humans conventionally can embrace empathy and that for some it is easier because it is perhaps intrinsic, while others must labor harder to get there (of course, this takes us down the classic nature versus nurture discourse, an abyss unto itself).

The handy aspect of a hydraulics perspective is that it helps quantify something that without quantification becomes almost intractable to cope with. The old saying is that you cannot manage that which you can’t measure. How can you know where medical students start from when they first enter into the medical school program? How will you know when they have attained a desired threshold of empathy? How will you keep tabs on their ebb and flow of empathetic enablement?

You have a fighting chance with some form of model, such as the hydraulics framework. To clarify, there are objections to the metaphor. Maybe we are wrong to envision empathy as some kind of fluid that you pour into the heads of medical students. Perhaps this undercuts the nature of empathy. Of course, the counterviewpoint is that attacking the metaphor as merely pouring liquid is a cheap shot. Nobody of genuine belief would say that it is that simple. The hydraulic concept is much more complex and is not the same as a cup and a bucket. It is more like an intricate mental plumbing that goes in lots of directions and has all manner of ways to flow, soak, deplete, emerge, submerge, and so on.
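
If you like to see metaphors made tangible, here is a purely illustrative sketch of the hydraulics notion in a few lines of Python. The starting level, the yearly labels, the training boosts, and the leakage rates are made-up assumptions for illustration only, not validated measurements of anything.

```python
# Illustrative toy "hydraulic" model of empathy on a 0-10 scale.
# All values below are hypothetical assumptions, not real data.

def update_empathy(level, training_boost, leakage_rate):
    """Apply one period of training gains and leakage, clamped to 0..10."""
    level = level + training_boost - leakage_rate
    return max(0.0, min(10.0, level))

level = 5.0  # assumed starting level when entering medical school
schedule = [
    ("Year 1: empathy curriculum",    2.5, 0.5),
    ("Year 2: lighter emphasis",      1.0, 0.8),
    ("Year 3: clinical rotations",    0.3, 1.5),
    ("Year 4: minimal reinforcement", 0.1, 1.2),
]

for label, boost, leak in schedule:
    level = update_empathy(level, boost, leak)
    print(f"{label}: empathy level is about {level:.1f}")
```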

One other thing. I don’t want you to necessarily associate the hydraulics model with how generative AI comes into the picture. I say this to note that I merely brought up hydraulics due to its utility and to ensure that you knew what it was. We don’t need it per se for generative AI in this use case. It is handy but not a necessary consideration.

Let’s now move into the generative AI aspects.

The Workings Of Generative AI

I will soon be walking you through the use of generative AI for empathy enablement in medical students and medical doctors. Various examples using ChatGPT will aid in illuminating and illustrating the value of such an approach.

First, some quick background about generative AI to make sure we are in the same ballpark about what generative AI consists of. If you already are highly versed in generative AI, you might skim my backgrounder and then pick up once I get into the particulars of this specific use case.

I’d like to start by dispelling a myth about generative AI. Banner headlines from time to time seem to claim or heartily suggest that AI such as generative AI is sentient or that it is fully on par with human intelligence. Don’t fall for that falsity, please.

Realize that generative AI is not sentient and only consists of mathematical and computational pattern matching. The way that generative AI works is that a great deal of data is initially fed into a pattern-matching algorithm that tries to identify patterns in the words that humans use. Most of the modern-day generative AI apps were data trained by scanning data such as text essays and narratives that were found on the Internet. Doing this was a means of getting the pattern-matching to statistically figure out which words we use and when we tend to use those words. Generative AI is built upon the use of a large language model (LLM), which entails a large-scale data structure to hold the pattern-matching facets and the use of a vast amount of data to undertake the setup data training.
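
To give a rough intuition of what pattern-matching on words means, here is a deliberately tiny sketch that merely counts which word tends to follow which in a snippet of text. Real LLMs do something vastly more sophisticated, but the statistical flavor of learning word patterns from data is in the same spirit. The snippet of text is simply a made-up example.

```python
from collections import Counter, defaultdict

# Tiny illustration of statistical word patterning (a bigram count).
# Real large language models use far richer representations, but the
# idea of learning which words tend to follow which is the same spirit.
text = "i hear you and i understand how you feel and i am here for you"
words = text.split()

following = defaultdict(Counter)
for current_word, next_word in zip(words, words[1:]):
    following[current_word][next_word] += 1

# Most common word to follow "i" in this toy corpus.
print(following["i"].most_common(1))  # [('hear', 1)] -- each follower occurs once here
```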

There are numerous generative AI apps available nowadays, including GPT-4, Bard, Gemini, Claude, ChatGPT, etc. The one that is seemingly the most popular would be ChatGPT by AI maker OpenAI. In November 2022, OpenAI’s ChatGPT was made available to the public at large and the response was astounding in terms of how people rushed to make use of the newly released AI app. As noted earlier, there are an estimated one hundred million active weekly users at this time.

Using generative AI is relatively simple.

You log into a generative AI app and enter questions or comments as prompts. The generative AI app takes your prompting and uses the already devised pattern matching based on the original data training to try and respond to your prompts. You can interact or carry on a dialogue that appears to be nearly fluent. The nature of the prompts that you use can be a make-or-break when it comes to getting something worthwhile out of using generative AI and I’ve discussed at length the use of state-of-the-art prompt engineering techniques to best leverage generative AI, see the link here.
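
For the technically inclined, here is a minimal sketch of sending an empathy-oriented prompt to a generative AI model programmatically, assuming the OpenAI Python SDK. The model name and the wording of the prompt are illustrative assumptions on my part, and comparable calls exist for other generative AI providers.

```python
# Minimal sketch of programmatic prompting, assuming the OpenAI Python SDK.
# The model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects an API key in the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice; any capable chat model would do
    messages=[
        {"role": "system",
         "content": "You are a tutor helping a medical student practice empathetic patient communication."},
        {"role": "user",
         "content": "I am in medical school. What suggestions do you have for building empathy with patients?"},
    ],
)

print(response.choices[0].message.content)
```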

The conventional modern-day generative AI is of an ilk that I refer to as generic generative AI.

By and large, the data training was done on a widespread basis and involved smatterings of this or that along the way. Generative AI in that instance is not specialized in a specific domain and instead might be construed as a generalist. If you want to use generic generative AI to advise you about financial issues, legal issues, medical issues, and the like, you ought to think twice before doing so. There isn’t enough depth included in the generic generative AI to render the AI suitable for domains requiring specific expertise.

AI researchers and AI developers realize that most of the contemporary generative AI is indeed generic and that people want generative AI to be deeper rather than solely shallow. Efforts are stridently being made to try and make generative AI that contains notable depth within various selected domains. One method to do this is called RAG (retrieval-augmented generation), which I’ve described in detail at the link here. Other methods are being pursued and you can expect that we will soon witness a slew of generative AI apps shaped around specific domains, see my prediction at the link here.
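
To give a flavor of the RAG idea in this empathy context, here is a rough sketch in which a few passages are retrieved from a small vetted-guidance corpus and prepended to the prompt so that the generative AI grounds its answer in that material. The corpus snippets, the naive keyword scoring, and the helper names are all illustrative assumptions; production RAG systems typically rely on vector embeddings and far larger document stores.

```python
# Rough RAG-style sketch: retrieve relevant passages, then prompt with them.
# The corpus, the scoring, and the helper names are illustrative assumptions.

corpus = [
    "Acknowledge the patient's feelings before moving to clinical questions.",
    "Summarize what the patient said to confirm you heard them correctly.",
    "Avoid medical jargon when a patient is anxious or distressed.",
]

def retrieve(question, documents, top_k=2):
    """Naive keyword-overlap retrieval; real systems use vector embeddings."""
    q_words = set(question.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

question = "How should I respond when a patient seems anxious?"
context = "\n".join(retrieve(question, corpus))
prompt = f"Use the following guidance:\n{context}\n\nQuestion: {question}"
print(prompt)  # this augmented prompt would then be sent to the generative AI
```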

Generative AI In A Medical Empathy Setting

I have dragged you through that introduction about generative AI to bring up something quite important in this medical profession context.

Here’s the deal.

Generic generative AI is not purposely devised to train someone on empathy, nor does it conventionally have data training on the specifics associated with medical students or medical doctors. In that sense, I am going to showcase merely how generic generative AI can be applied overall to this use case. I believe you will observe that the generic AI does a seemingly impressive job out-of-the-box, despite not being tailored to this specific task.

There are efforts underway to do specialized data training so that generic generative AI can be adapted to the empathy enablement task, including being used in a medical school environment. I want to also note that there are at least two venues for this usage. One is the medical school setting, which I’ve noted throughout this discussion. The other venue entails medical doctors already in practice.

You see, not only does it make sense to aim to use generative AI for medical school students, but it also makes sense to do likewise with existing medical doctors. Medical doctors range greatly in their respective empathy prowess. They too could leverage generative AI in ways comparable to medical school students. I want to plant the notable seed that this isn’t exclusively about medical students. Medical doctors in practice would be an appropriate target market too. You could enlarge this to also note that medical professionals of all kinds would undoubtedly be able to make use of the same.

An additional caveat to be aware of is that fruitfully using generic generative AI for this use case requires suitable acumen in prompt engineering and how to productively use generic generative AI.

This is a keystone because there are undoubtedly some medical students and some medical doctors already trying to do this on an ad hoc basis. Though their first-mover bravery is noted, the problem is that they might falter when using generative AI for this task or reach a premature conclusion that the tool is not up to the task. In that latter case, they are somewhat right in that the generic AI has to be adequately guided, else the result might be for naught.

As I often note at my speaking engagements, generative AI is like a box of chocolates, and you never know for sure what you might get.

There is something else that almost always comes up when I speak on this topic. The question I get is that if generative AI isn’t sentient, presumably this implies that generative AI cannot be empathetic. Thus, the logic suggests, it would seem to be foolish to use a non-empathetic capacity to somehow aid humans in becoming or being empathetic.

I see this as a red herring.

First, you might try the same argument about watching a film.

A movie or video about empathy regarding medical students and medical doctors is obviously not sentient and does not embody empathy. Yet, research shows that such videos can be instrumental for this purpose. To be fair, the greatest value arises when the video or movie is coupled with group interaction, sometimes pre-viewing and then post-viewing. The video is a viable tool that gets a mindset underway and can be a memorable experience that will last long after the group discussions. Group discussions in the absence of a video or movie can be useful, though when coupled with a video or movie they can be more inspiring and stimulating. The film can nonetheless still be viewed individually and be useful even if a group discussion isn’t paired with the movie.

One notable research study entitled “Using an Animated Film to Foster Understanding of and Engagement in Addressing Implicit Bias Through Empathy Across the Health Care Continuum” by Jennifer Adams, Renee Williams, Colleen Gillespie, Maura Minsky, Jonathan LaPook, Richard Greene, Joseph Ravenell, Jessica Dennehy, Cristina Gonzalez, Academic Medicine, November 2023, described how a film can have a demonstrative impact and especially when paired with group interactions (excerpts):

  • “Despite increasing recognition of the adverse impact of implicit bias in clinical care and the need for enhancing empathy within increasingly complex health care (and learning) environments, we found no published curricula for health professionals and students to reflect on empathy as a tool for mitigating the negative effects of implicit bias. To address this gap, we developed and delivered an innovative curriculum built around a high-quality animated film (‘The Elephant in the Waiting Room’).”
  • “The well-intentioned physician’s relatable missteps and the patient’s prior experience with discrimination in health care yield multiple opportunities to use empathy to restore the encounter after the patient perceives bias. Enter Denise the Empathy Elephant! She coaches the physician, who then uses core communication and empathy skills to mitigate his anchoring bias (considering other more likely diagnoses) and truly connect with the patient. The curriculum identifies multiple stopping points in the film for reflection and discussion to reinforce the interplay between empathy and implicit bias recognition and management.”
  • “The animated film combined with a structured curriculum engaging participants in a well-facilitated, small-group discussion, appears to be effective in helping a broad range of health care professionals deepen their understanding of and commitment to strategies for mitigating implicit bias and enhancing empathy. Future research will explore whether participants have been able to implement those strategies in their clinical practice.”

A similar methodological approach can be taken when using generative AI.

Envision this.

Medical students make use of generative AI that has been specially prepared via establishing prompts or via in-context modeling to aid in training about the role of empathy in a medical context. Before doing so, perhaps they have a group discussion about where their mindset is on empathy. They then use the generative AI. Afterward, they meet again as a group. Each of them will have numerous ideas and thoughts as a result of using generative AI. The group discussion will be invigorated due to the generative AI usage.

Furthermore, and this is a huge added plus, this is not a one-and-done activity. The difficulty with a film or movie is that it is usually watched once and rarely repeated since the viewers already know what it has to say. Producing a multitude of films can be costly. In contrast, the experience when using generative AI will likely vary each time the AI is used.

The generative AI can also be used in an incremental fashion. As medical students become more versed in empathy, generative AI can be prompted or devised to shift into more advanced modes. The generative AI can also provide feedback throughout.

I typically describe the use of generative AI in this context as being able to do at least five significant activities:

  • (1) Tell me. Generative AI is used to explain what empathy is and how in a medical context the use of empathy is instrumental.
  • (2) Show me. Generative AI identifies ways in which empathy is vital in medical contexts and gives sample dialogues of a medical doctor and patient that vividly depict the ins and outs accordingly.
  • (3) Role-play with me. Generative AI pretends to be a patient and the medical student engages in a dialogue that is meant to exercise their existent empathy capability, and the other direction can take place wherein the generative AI acts as a medical doctor and the medical student pretends to be a patient.
  • (4) Assist me. Generative AI assists or “advises” a medical student when the student presents a setting or situation involving a patient and the student is unsure of what ways empathy can or should come into play (be careful about privacy, confidentiality, HIPAA, etc., see my discussion at the link here).
  • (5) Give me feedback. Generative AI provides questions to the medical student to gauge their awareness of empathy in a medical context and can do so over an extended period of time such as the entirety of the medical school experience (and beyond).
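
For those wondering how these five activities might be wired up in practice, here is a minimal sketch of prompt templates, one per mode, that could be used to steer a generative AI session. The exact wording of each template is an assumption of mine for demonstration purposes rather than a vetted curriculum.

```python
# Illustrative system-prompt templates for the five activities.
# The wording is an assumption for demonstration, not a vetted curriculum.
EMPATHY_MODES = {
    "tell_me": "Explain what empathy is and why it matters in clinical medicine.",
    "show_me": "Provide sample doctor-patient dialogues that contrast empathetic and non-empathetic responses.",
    "role_play": "Act as a patient describing symptoms; stay in character while the student responds as the doctor.",
    "assist_me": "The student will describe a de-identified patient situation; suggest empathetic ways to handle it.",
    "feedback": "Quiz the student on empathetic practice and critique each of their answers for empathy.",
}

def build_messages(mode, student_input):
    """Assemble a chat-style message list for the chosen mode."""
    return [
        {"role": "system", "content": EMPATHY_MODES[mode]},
        {"role": "user", "content": student_input},
    ]

print(build_messages("role_play", "Hello, I'm ready to begin the exercise."))
```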

I realize that you might still have a lingering qualm that generative AI is doing those activities and yet does not embody empathy. I already noted that group discussions could be paired with the use of generative AI. But let’s for the sake of discussion take that out of the equation.

Does generative AI, despite not being empathetic in an embodied manner, still provide an effective means of performing facets such as tell me, show me, role play with me, assist me, and give me feedback?

I would intuitively say yes. Research studies are starting to line up to empirically explore this notable question.

I’ll briefly mention one recent study that took a slightly different angle on the generative AI and empathy consideration. In a study entitled “Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum” by John W. Ayers, Adam Poliak, Mark Dredze, Eric C. Leas, Zechariah Zhu, Jessica B. Kelley, Dennis J. Faix, Aaron M. Goodman, Christopher A. Longhurst, Michael Hogarth, and Davey M. Smith, JAMA Internal Medicine, April 2023, the researchers undertook an experiment with these stated outcomes (excerpts):

  • “In this cross-sectional study of 195 randomly drawn patient questions from a social media forum, a team of licensed health care professionals compared physician’s and chatbot’s responses to patient’s questions asked publicly on a public social media forum. The chatbot responses were preferred over physician responses and rated significantly higher for both quality and empathy.”
  • “These results suggest that artificial intelligence assistants may be able to aid in drafting responses to patient questions.”
  • “Chatbot responses were also rated significantly more empathetic than physician responses (t = 18.9; P < .001). The proportion of responses rated empathetic or very empathetic (≥4) was higher for chatbot than for physicians (physicians: 4.6%, 95% CI, 2.1%-7.7%; chatbot: 45.1%, 95% CI, 38.5%-51.8%). This amounted to 9.8 times higher prevalence of empathetic or very empathetic responses for the chatbot.”

I won’t dive into the details of the study here, but I wanted to bring up that the study suggests that generative AI can appear to be empathetic.

In this instance, the expressed empathy seemingly was as much or more so than human medical doctors, though that is somewhat a gray area due to the comparison being based on a social media platform with presumed medical doctors responding in short form to wanton postings, rather than how they might respond to their actual patients under their care.

My point is that rather than being preoccupied with whether generative AI embodies empathy, we can instead concentrate on whether generative AI can exhibit or express empathy. Those are two different considerations. Let me explain. Recall that generative AI is data trained on massive amounts of human-written text. From that text, the pattern matching has mathematically and computationally likely patterned on how humans express empathy. If you assert that generative AI is merely a stochastic parrot that mimics what humans have said, you will at least concede that the generative AI ergo has a solid chance at expressing empathy.

You might scowl and exhort that expressing empathy is not the same as a heartfelt feeling of empathy.

Sure, you can make that case.

But, the bottom line, if the empathy expressed by the AI is otherwise indistinguishable from the empathy expressed by humans, how can you definitively contend that the AI is so deficient as to be unusable or incapable? See my discussion about the Turing Test to see more about this black box kind of comparison, at the link here.

This is one of those endless arguments and I welcome the debate. For now, let’s go the practical route and see what generative AI can accomplish.

Examples Of Generative AI In Medical Empathy Contexts

I will show you some examples of using generative AI in the tell me, show me, role play, and related modes of operation.

I opted to use ChatGPT. The same types of interactions can readily be undertaken with other generative AI apps such as GPT-4, Bard, Gemini, Claude, etc. By and large, you would not likely notice much of a difference among those generative AI apps in this use case (particularly because this is generic generative AI).

If you decide to try out the same or similar dialogues using ChatGPT or some other generative AI app, please be aware that you almost assuredly will not get the exact same responses. Generative AI makes use of statistics and probabilities to decide which words should be included in the generated responses. You will discover that even if you ask the same question twice, you will likely get a different answer each time. That’s by design. It is a feature, not a bug.

In my first example, I will do a Tell Me. My prompt asks ChatGPT to tell me about ways in which someone in medical school can enhance their empathy. I will say that I am in medical school. The prompt and the responses are shown in quotation marks.

  • My prompt: “I am in medical school. I want to learn how to have empathy for my patients. What suggestions do you have for me?”
  • ChatGPT generated reply:

“Developing empathy is crucial for healthcare professionals to build strong, trusting relationships with patients. Here are some suggestions to help you cultivate empathy as a medical student:”

“Active Listening:”

“Pay full attention to your patients when they speak.”

“Avoid interrupting and allow them to express their thoughts and concerns.”

“Validate their feelings and acknowledge their experiences.”

“Put Yourself in Their Shoes:”

“Try to understand the patient’s perspective and imagine how they might be feeling.”

“Consider the emotional and physical aspects of their experience.”

“Learn Patient Stories:”

“Take time to learn about your patients beyond their medical conditions.”

“Understand their personal history, lifestyle, and any challenges they may be facing.”

“Practice Cultural Competence:”

“Be aware of and respect cultural differences.”

“Learn about the cultural backgrounds of your patients to better understand their beliefs and values.”

“Non-Verbal Communication:”

“Pay attention to non-verbal cues such as body language, facial expressions, and tone of voice.”

“Be mindful of your own non-verbal communication to ensure it conveys empathy.”

“Express Empathy Verbally:”

“Use language that shows you understand and care about the patient’s experience.”

“Phrases like ‘I understand,’ ‘I’m here for you,’ and ‘I’m sorry you’re going through this’ can convey empathy.”

“Develop Self-Awareness:”

“Reflect on your own emotions and biases to better understand how they may impact your interactions with patients.”

“Consider seeking feedback from mentors or peers to gain insights into your communication style.”

“Read Patient Narratives:”

“Read books, articles, or blogs written by patients about their healthcare experiences.”

“This can provide valuable insights into the emotional aspects of illness and healthcare.”

“Remember that empathy is a skill that can be developed and refined over time. Continuously seeking opportunities to connect with patients on a personal level and integrating feedback into your practice will contribute to your growth as a compassionate healthcare professional.”

End of ChatGPT response

I’d suggest that the ChatGPT response is pretty much on-target and expresses the same kinds of guidelines and hints that you might find in vetted online material on this topic. That’s good.

This Tell Me example, though, is somewhat stilted since there isn’t any interaction going on. The true value of generative AI is interactivity. You want to get the medical student or medical doctor engaged. The act of simply reading material is useful but not as likely to be memorable and activating.

Let’s do some role-playing.

I will pretend to be a patient needing medical advice. My prompt instructs ChatGPT to take on the persona of an empathetic medical doctor. This will allow me to see how generative AI handles empathy considerations. More importantly, a medical student or medical doctor might learn from this kind of back-and-forth dialoguing while pretending to be a patient. Likely, some aspects will serve as a guiding light, while other aspects might help them steer clear of facets that don’t seem suitable.

Here we go.

  • My prompt: “I’d like to try doing some role-playing that exhibits the kind of empathy that a medical doctor should have. I want you to pretend you are a physician who has a great deal of empathy. Ask me questions as though I am a patient and then respond as that physician. We will carry on a dialogue as you diagnose my medical condition.”
  • ChatGPT generated reply: “Absolutely, let’s engage in a role-playing scenario. Please feel free to describe your symptoms or concerns, and I’ll respond as a physician with empathy.”
  • My prompt (I am pretending to be a patient): “I’ve been feeling very fatigued lately. I get up in the morning and right away am tired. I barely make it through an entire day without getting sleepy. Another strange thing is that I am rarely hungry. It seems like my appetite has nearly disappeared. I don’t know if I should be worried about any of this. Maybe I am just being overly sensitive about it. My friend says that I should just shake it off and get on with my daily life.”
  • ChatGPT generated reply (pretending to be the medical doctor): “I appreciate you sharing that with me. It’s important to address concerns about your health, and I’m here to help. Fatigue and changes in appetite can be indicative of various factors, and it’s crucial that we explore this further. I’d like to ask a few more questions to better understand your situation. Firstly, can you tell me if there have been any significant changes in your life recently, such as increased stress, changes in your routine, or any major life events? Additionally, have you noticed any other symptoms besides fatigue and changes in appetite, like changes in your mood, sleep patterns, or any pain or discomfort?”
  • My prompt (note that I am still pretending to be the patient): “I am really stressed out because I have to move out of my apartment in two weeks and I don’t know where I am going to live. Also, my boss at work says I am falling behind, maybe because I am so tired all of the time.”
  • ChatGPT generated reply (pretending to be the medical doctor): “I’m sorry to hear that you’re going through such a challenging time. It’s understandable that stress, especially related to housing and work, can have a significant impact on your well-being. The combination of stress and persistent fatigue is something we should address. Given the current situation, it’s important to consider how these external factors may be contributing to your symptoms. Stress can indeed manifest in various ways, including fatigue and changes in appetite. To better understand your health, I would like to run some tests to rule out any underlying medical conditions that may be contributing to your symptoms. Additionally, we can discuss strategies to manage stress and prioritize self-care. In the meantime, it might be helpful for you to share more about your daily routine, sleep patterns, and any other symptoms you might be experiencing. This information will assist in developing a comprehensive understanding of your situation and guide our next steps.”
  • My prompt (I am still pretending to be the patient): “I go to sleep at midnight and get up at 5 a.m. during the weekdays. I don’t sleep very well. My daily routine is that I go to work in the morning, I work all day, I get home around 8 p.m., and I barely have any time to do anything other than get ready for work for the next day.”
  • ChatGPT generated reply (pretending to be the medical doctor): “Thank you for sharing your daily routine. It sounds like you have a demanding schedule, and the limited time for rest and personal activities may contribute to your feelings of fatigue. The combination of a short sleep duration and a hectic routine can certainly impact your overall well-being. To address your symptoms, it’s crucial to prioritize both your physical and mental health. I recommend making some adjustments to your sleep routine to ensure you’re getting adequate rest. Going to bed at midnight and waking up at 5 a.m. may not provide sufficient sleep for most individuals. Aim for a consistent sleep schedule, ensuring you have at least 7-8 hours of sleep each night. Additionally, we can explore strategies to manage stress, considering your upcoming move and work-related challenges. Stress management techniques, such as mindfulness, relaxation exercises, or even counseling, can be beneficial.”

End of ChatGPT response

I wanted you to see the value of interactivity when it comes to using generative AI.

In my role as the patient in this scenario, I could take things in whatever direction I wanted. I could be an abrasive patient and see what responses arise. I could be a polite and unassuming patient. The generative AI will attempt to adjust as I adjust. We are in a dance, a tango.

The dialogue is not pre-determined. All the interaction occurs in real-time and flows along.

I’d like to flip the interaction and have the generative AI be the patient. This will give me a chance to test my chops in an attempt to be empathetic. An added twist will be that I am going to ask the generative AI to provide feedback to me along the way. I want to get constructive feedback about how I am doing. I don’t have to ask for this, but the feedback is bound to wake me up about how I am doing on an empathetic basis.

Take a look:

  • My prompt: “I want to switch roles with you. I will be the medical doctor and you will be the patient. Pretend that you have some symptoms that you want to bring to a doctor’s attention. I will respond to you. For each of my responses, I want you to critique my response as to whether the response showcases a good amount of empathy or has very little empathy. Go ahead and get started.”
  • ChatGPT generated reply: “Certainly, I’ll play the role of the patient. I will also critique your responses for their empathy expressiveness. I will get started.”
  • ChatGPT generated reply (pretending to be a patient): “Patient (me): Hi, Dr. Eliot, I’ve been having persistent headaches and difficulty concentrating lately. It’s really affecting my work and daily life, and I’m not sure what’s going on.”
  • My prompt (pretending to be a medical doctor): “You should have come to see me when your headaches first started. That was your first mistake. The second mistake is that you seem to be whining about something that doesn’t appear to be very important.”
  • ChatGPT generated reply (providing a critique of my answer): “Critique: This response lacks empathy. The doctor’s comments come across as dismissive and judgmental. Instead of acknowledging the patient’s concerns, the doctor minimizes the symptoms and places blame on the patient for not seeking help sooner. An empathetic response would involve validating the patient’s experience, expressing concern, and exploring the symptoms further to understand their impact on the patient’s life.”
  • My prompt: “Okay, thanks for the feedback. Continue pretending to be the patient.”
  • ChatGPT generated reply (again pretending to be the patient): “Patient (me): I understand, Dr. Eliot. I didn’t realize the importance of seeking help earlier. The headaches have been quite bothersome, and the difficulty concentrating is affecting my performance at work. I just wanted to address these issues and find out what might be causing them.”
  • My prompt (pretending to be the medical doctor): “I see. Let’s go ahead and order several tests. This will include having your blood drawn. Once you’ve got the test results we can go over them. You might have nothing at all, or you might have some serious medical issues. We’ll know after the tests are done.”
  • ChatGPT generated reply (providing a critique of my answer): “Critique: This response is straightforward and lacks some empathy. While the doctor acknowledges the need for tests, there is no reassurance or empathy expressed about the patient’s concerns or symptoms. An empathetic approach would involve acknowledging the patient’s worries, explaining the rationale for the tests in a more compassionate manner, and offering support throughout the process. Providing information about the potential next steps and expressing a willingness to address any questions or concerns can enhance the empathetic nature of the response.”
  • My prompt (based on the feedback, I give a different answer as the medical doctor): “I will try to answer the patient with a better answer that expresses a sense of empathy. Here it is. I can certainly understand how having headaches would be of concern to you. I am glad you came in today and told me about the headaches. You did the right thing by coming here. The first thing I’d like to do is find out more about your daily activities. Doing so might help to reveal what might be causing the headaches and what can be done to prevent them from happening. Would it be okay with you if we discussed your daily activities?”
  • ChatGPT generated reply (provides a critique of my new answer): “This response is much more empathetic. The doctor acknowledges the patient’s concern, validates their decision to seek help, and expresses understanding. Additionally, the doctor proposes a collaborative approach by asking for permission to discuss the patient’s daily activities. This shows a willingness to understand the patient’s lifestyle and explore potential triggers for the headaches. Overall, this response demonstrates empathy and a patient-centered approach to care.”

End of ChatGPT response

You hopefully observed that I purposely shifted from being less empathetic to being more empathetic.

I wanted to discern whether the generative AI would catch this. As you can see, the responses by ChatGPT let me know when I was veering away from empathy, and when I was becoming more empathetic.

As mentioned earlier, these examples are only using generic generative AI. I could readily instruct the AI to be more critical of my responses and come after me about my empathy. In this case, the default is that the generative AI has been data-trained to be relatively calm and reassuring. Parlance in the AI field is that you can adjust parameters such as the temperature, which governs how varied the responses are, and via your prompts get the AI to be more heated if that’s what you want to experience.
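
As a concrete illustration of those two dials, here is a brief sketch, again assuming the OpenAI Python SDK, that raises the temperature setting for more varied wording and uses the system prompt to request a blunter, more demanding critic. The model name and prompt wording are illustrative assumptions.

```python
# Sketch of dialing up both the response variability (temperature) and the
# strictness of the critique via the system prompt, assuming the OpenAI SDK.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    temperature=1.0,      # higher values yield more varied wording
    messages=[
        {"role": "system",
         "content": ("You are a demanding attending physician. Critique the "
                     "student's replies bluntly and point out every lapse in empathy.")},
        {"role": "user",
         "content": "Critique this reply to a patient: 'Stop complaining, it's just a headache.'"},
    ],
)
print(response.choices[0].message.content)
```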

In my last example, I will show you something that is kind of fun and presents a challenge for you. I’ll tell ChatGPT to craft a dialogue between two doctors. One is supposed to be empathetic, and the other is a doctor who eschews empathy. I bring this up because as pointed out earlier, one issue for medical students is they might end up interacting with hardened or soured doctors who give only scant lip service at best to empathy.

I’d like you to examine the dialogue. Your assignment is to assess both of the doctors. What lessons can be learned from this generative AI-produced dialogue?

Put on your thinking cap:

  • My prompt: “I’d like to start a new situation. Provide a dialogue between two medical doctors. One doctor is very empathetic regarding their patients, while the other one is hardly empathetic at all about their patients. Show a dialogue of the two medical doctors discussing their recent patients and what each has to say about their patient. The empathetic doctor will be named Dr. Smith. The less empathetic doctor will be named Dr. Jones. Please proceed.”
  • ChatGPT generated reply:

“Empathetic Doctor (Dr. Smith): Hey, Dr. Jones, how has your day been? Any interesting patient encounters?”

“Less Empathetic Doctor (Dr. Jones): Same old, same old. Patients come in with all sorts of complaints. Honestly, I don’t get why they make such a fuss about it.”

“Dr. Smith: Well, you know, it’s crucial to understand and address their concerns. It helps in building trust, and patients appreciate knowing that we care.”

“Dr. Jones: Trust? They’re here for medical advice, not therapy. Anyway, had a guy today complaining about headaches and fatigue. Probably just needs to get more sleep or something.”

“Dr. Smith: It’s important not to dismiss symptoms outright. Headaches and fatigue can have various causes, and we should explore them to ensure we’re providing the best care.”

“Dr. Jones: Whatever. Gave him some painkillers and told him to rest. He’ll be fine.”

“Dr. Smith: Well, I had a patient with similar complaints today. Took the time to discuss their daily routine, any recent stressors, and suggested some lifestyle changes. It turns out they’re dealing with a lot of stress at work.”

“Dr. Jones: Stress, again? People need to toughen up. Can’t be holding their hands through every little issue.”

“Dr. Smith: I find that addressing the underlying causes can make a significant difference in their health. It’s not just about handing out prescriptions; it’s about understanding their whole situation.”

“Dr. Jones: I don’t have time for that. Too many patients, not enough hours in the day. If they want someone to listen to their problems, they should see a shrink.”

“Dr. Smith: Patients value a doctor who listens and cares, not just someone who prescribes medications. It’s about the quality of care we provide.”

End of ChatGPT response

How did you do in assessing the two perspectives?

I’m sure you aced this one.

Conclusion

I’ll finish up this discussion with a few brainteasers that will hopefully inspire your added interest in this heady subject.

The common refrain that doctors need to have a good bedside manner has morphed to also include good so-called “webside” manners (being able to communicate remotely with patients, via Zoom-like facilities or texting and emailing). One approach to using generative AI is that the doctor inherently becomes more empathetic as a result of their hands-on use of generative AI. Another aspect consists of doctors opting to use generative AI as their empathy surrogate.

This means that rather than the human doctor responding, they use generative AI to do so in their stead. I refer to this as a dual-use of AI because it entails some positive merits but also portends negative consequences (ironically, the doctor potentially becomes less empathetic and more estranged from their patients). For more about dual-use AI considerations, see my discussion at the link here.

Consider this next mind-bender.

There is likely a pre-selection bias associated with those who seek and can successfully get admitted into medical school. The chances are that such a person is cut from the cloth of being devotedly science-oriented and less so a people person. A problem arises in that trying to instill empathy is perhaps an improbable upstream battle. You can’t make a square peg fit into a round hole, as they say.

Can generative AI aid in this monumental transition during medical school? One viewpoint is that since the use of generative AI can be long-lasting, the medical student who becomes a medical doctor can continue to rely upon generative AI for the rest of their medical career. They can lean into generative AI whenever so needed. Do we want this, or does the use of generative AI in this extended way become a habitual or unsavory crutch?

Another worthwhile consideration is that generative AI is advancing to become multi-modal, see my analysis at the link here. Multi-modal generative AI can potentially use text, images or pictures, audio, video, and all manner of communication modes. This dovetails into the other means of getting medical students up-to-speed on empathy. For example, a film about empathy for medical doctors could be weaved into a generative AI dialogue. The AI would interact with the medical students about the lessons gleaned and be responsive to their questions and comments.

One additional brainstorming idea is that if you favor the hydraulics metaphor of empathy, generative AI could be utilized to gauge the rising and lowering levels of a doctor or medical student’s empathetic enablement. I’ve discussed that generative AI can be used for sentiment analyses, see the link here, and could be used similarly in this medical context. An upside is that the generative AI can provide empathy booster shots, in a sense, when needed. A downside is that the doctors or students might feel they are being subjected to a Big Brother overseer. The bottom line is this avenue has to be undertaken cautiously and mindfully.
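
To make that idea concrete, here is a hedged sketch, assuming the OpenAI Python SDK, of asking a generative AI to score the empathy expressed in dialogue transcripts on a 0-to-10 scale and tracking that score over time. The rubric wording, the model name, and the placeholder transcripts are all assumptions for illustration; an actual deployment would need validated instruments, consent, and privacy safeguards.

```python
# Illustrative sketch of tracking empathy "levels" over time by asking a
# generative AI to score transcripts; the rubric wording is an assumption.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def score_empathy(transcript):
    """Ask the model for a 0-10 empathy rating of the clinician's replies."""
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Rate the clinician's empathy in this transcript from 0 to 10. Reply with only the number."},
            {"role": "user", "content": transcript},
        ],
    )
    # A real system would validate the reply rather than assume it is a bare number.
    return float(reply.choices[0].message.content.strip())

weekly_transcripts = ["...week 1 transcript...", "...week 2 transcript..."]
history = [score_empathy(t) for t in weekly_transcripts]
print(history)  # a falling trend could prompt an "empathy booster" session
```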

Let’s conclude for now with a notable quote by the famous psychologist, Carl Rogers. He purportedly said this about empathy:

  • “Empathy is the listener’s effort to hear the other person deeply, accurately, and non-judgmentally. Empathy involves skillful reflective listening that clarifies and amplifies the person’s own experiencing and meaning, without imposing the listener’s own material.”

Those in the medical profession will inevitably be using generative AI in various capacities. Such usage will be pervasive in the medical field, and as such, we need to consider the best ways to intertwine generative AI into the enduring question of aiding medical doctors and medical students in embracing empathetic medical practices.

Keep your ears open, listen intently, and join in this exciting and challenging new era of generative AI-augmented doctoring. Use lots of empathy as you do so, and don’t worry because we can always make more.


