Generative AI and Patient Care: It’s All About the Content

AI, and particularly generative AI, remains a provocative topic in healthcare, with many speculating on the potential for this technology to transform everything from medical billing to clinical drug trials. A McKinsey report from earlier this year hails generative AI as “a meaningful new tool that can help unlock a piece of the unrealized $1 trillion of improvement potential present in the [healthcare] industry.”

One area that has attracted attention is the potential of bringing generative AI to clinical decision support (CDS) – software that provides information and recommendations to give clinicians immediate access to the most up-to-date evidence-based knowledge. An essential tool at the point of care, CDS is designed to help reduce care variability and improve patient outcomes. Yet according to PitchBook, the industry is “exceedingly cautious” when it comes to deploying generative AI for CDS. What makes generative AI exciting – its ability to create new content – also makes it fraught with the risk of unintended consequences.

Generative AI has potential to augment CDS, but first we need to understand the risks and rewards

A new study from Mass General Brigham demonstrates both the future potential of generative AI to augment clinical decision making and some of the challenges still to overcome. Feeding a series of clinical vignettes to ChatGPT, the researchers found its accuracy in clinical decision making ranged from 60% to 77%, and that it struggled most with differential diagnosis, “where physicians are truly experts and adding the most value.” Overall, the authors likened ChatGPT’s accuracy to that of an intern – certainly not an acceptable level for patient care.

Importantly, the Mass General Brigham research team calls for additional research before generative AI tools can be considered for integration into clinical care. Many of the issues raised by their research point back to the data used to train AI models. For example, until recently ChatGPT’s training data extended only through 2021. We’ve heard from our customers – clinicians around the world – and there is a clear consensus: for CDS and generative AI to come together responsibly and meet the complex realities of frontline care, the content must be timely AND trustworthy.

The voices of caution do not mean that healthcare insiders are closing the door on the potential for generative AI in healthcare. Far from it. The industry urgently needs tools to help manage an explosion of data: estimates put the compound annual growth rate of healthcare data at 36% through 2025 – far faster than in other sectors such as manufacturing and financial services. Any database supporting the healthcare industry must keep up, and generative AI offers a potential way to help clinicians use that data more effectively and efficiently.

The future of CDS is safe, responsible application of generative AI using trusted, high-quality content

What was once loaded floppy by floppy onto a PC to give healthcare providers access to the most up-to-date medical content is now available and used by providers on a mobile phone. Over the last 30-plus years, CDS content presentation and delivery have continuously improved to best support accurate decision-making at the point of care. Now, the future of CDS includes harnessing the power of generative AI to retrieve and deliver trusted content that helps clinicians make better-informed decisions.

Given the power of generative AI, we need to take every precaution to ensure its safe application in medicine, where people’s lives are at stake. The key is returning information to clinicians only when it is the result of highly vetted evidence from human authors and editors trained in evidence-based medicine, who pore over every word and constantly update that evidence. Searching this evidence is different from searching the internet, and that’s a crucial point for the future of AI in medicine – the quality of the source content and the medical context matter.

For clinicians to trust generative AI-derived search results, they need to know that what they see reflects real-world answers grounded in current clinical evidence, and that experts and medical peers are at the center of all the data. AI can play a role in presenting information or shortening time to result, but no technological shortcut currently exists for the rigor of human curation in this vital clinical decision support application.

Ultimately, to get generative AI in healthcare right, we need input from the clinicians who will use these tools – and who understand the ramifications of applying AI without first rigorously testing applications to ensure they are safe and provide clear clinical value. This means complementing and augmenting clinician workflows, not replacing them, and demonstrating progress in reducing diagnostic errors and improving outcomes. That is, after all, why CDS was developed. Now, with AI, there is an opportunity to do it even better.

Photo: steved_np3, Getty Images