Munjal Shah is a 5-foot-something man with a mega vision that involves, yes, you guessed it — generative AI in healthcare. His ability to tell a story while being supremely convincing is a skill that would likely make most entrepreneurs green with envy. There is both bravado and sincerity in his delivery.
No doubt those skills played a big role in winning $50 million in seed funding from General Catalyst and Andreessen Horowitz to build what he bills as a safe, large language model specifically for healthcare. That announcement came in May when the Palo Alto, California startup — Hippocratic AI — emerged from stealth. The idea was apparently rooted in a conversation between Shah and Hemant Taneja, the prominent venture capitalist in General Catalyst’s healthcare team. The two talked about how ChatGPT had consumed the world and brainstormed about whether there were applications of LLMs in healthcare.
Turns out that there are. In my roughly 40-minute interview with him at the HLTH conference on Tuesday, Shah engaged in a bewitching narrative of how LLMs can be a force for immense good. Imagine every chronic disease patient in the United States getting the same level of high-quality care as if a chronic disease nurse specialist were devoted solely to them. Imagine health systems achieving this level of care without hiring an army of humans — which is impossible anyway given prohibitive costs and a general nursing shortage.
That is the future Shah is creating, or more accurately attempting to create. His first healthcare venture crashed and burned, and he is currently battling a lawsuit, but more on that later.
Solving the staffing crisis
Shah is very clear about what his LLMs are not going to do.
“In fact, we explicitly are not even going to allow people to try to use it to do diagnoses,” Shah insists. “I don’t think it’s safe, honestly.”
Nor is he interested in leveraging AI to make more sense of the electronic medical record and relieve administrative burden.
“Everybody’s out there saying, ‘Let’s do EMR summarization, or let’s write drafts in Epic’s In Basket, or let’s write pre-authorization letters to the insurance companies,’” he said of the conventional thinking about the best ways to apply generative AI in healthcare. “I think … maybe that’ll make doctors 10% more efficient.”
And that is just too low a bar for Hippocratic AI, Shah implies. What he is interested in is solving the staffing crisis in healthcare.
“We’re focused on low risk activities that can still give huge leverage in healthcare,” he declared. “They drive huge costs in healthcare and can improve outcomes a lot.”
So what are those applications? A voice-enabled AI nurse trained to take care of very specific healthcare conditions and able to manage a conversation with a human battling those conditions.
“Our idea says, ‘Why don’t you build an actual nurse for them?’ Don’t even ship an API and actually most people don’t know this because we haven’t announced this yet, but don’t ship an API, don’t ship even a nurse. Literally don’t even ship a preoperative nurse. [Instead] ship a colonoscopy preoperative nurse, then ship a total knee replacement preoperative nurse, then ship a congestive heart failure chronic care nurse. We’re going very deep, deep, we can test it.”
The reason for discarding the notion of an API is that safety may be compromised, according to Shah. APIs are built for broad usage. It’s hard to create an API for an AI nurse trained on diabetes management and then have that same API spit out a new LLM for an AI nurse that can also manage total knee replacement. That might be unsafe.
“We can’t test a broad thing,” he said.
Fair enough. But how are these modules being trained to talk to a person on the phone? It begins with data. Shah claims that he has been able to lay his hands on data “that’s not currently in the language models in GPT-4 and ChatGPT. We got every single healthcare plan’s 200-page PDF describing every benefit. We got every single malpractice lawsuit in the country.”
Fine, but how does the AI train on dialogue? ChatGPT answers in paragraphs. That can’t be the model for a nurse AI calling on the phone, Shah explained.
“We are hiring actors to act the patient’s role, but the nurses are actual nurses in real life and certified and those exact type of nurses,” he declared.
In other words, a nurse who is licensed and trained in taking care of total knee replacement patients, or those with congestive heart failure. Shah posits a future in which the AI nurse caring for a CHF patient can also tackle questions on diet. If the AI nurse module is built into the health system, then presumably the patient can call to get an answer on what’s a safe food option.
“We put in there every menu in the country. So you can just ask it… if it was tied into your health system, you’d be like, ‘I’m at this restaurant, what should I not order?’ Okay, we’ll be like, ‘Order that, but tell ’em to hold that salt,’” he explained of a hypothetical future query.
Besting GPT-4 in medical exams
Shah claims that the module has been wowing physicians in testing. For instance, a physician at a health system recently asked Shah to query the AI module on hospital discharge. The scenario was that the patient didn’t have anyone who could be with them during discharge. The model responded that it would connect the patient to resources in the community. Then, Shah said, he asked the AI what would happen if the patient put an ad on Craigslist to have someone accompany him for a couple of hours. The model’s response? Craigslist is not a place to find people you can trust.
“We never told it anything about Craigslist,” marveled Shah. “That wouldn’t have been in our training set.”
In other words, it can understand context and arrive at reasonable conclusions. Shah rattled off many more examples all designed to amaze, but none of this will hit health system shelves any time soon.
“So there’s no timeline because I kind of said, I mean, the company’s name is ‘Hippocratic.’ It’s like the tagline says: Do no harm. I can’t say I care about safety and then say, ‘Here’s the timeline,’ because the timeline kind of depends on when they think it’s ready,” he declares. “We raised so much money, honestly, that you have a long runway.”
A few health system and tech company partners will work collaboratively with Hippocratic AI to develop the technology. They are: HonorHealth, Cincinnati Children’s, Universal Health Services (UHS), SonderMind, Vital Software, Capsule, and Canada’s ELNA Medical Group. More are expected to be announced in the coming weeks.
All are likely equally motivated by Shah’s vision and the sheer potential of solving the staffing and burnout problem in healthcare.
“Everybody’s fussing around about, oh, we’re missing 10% more nurses or 30% more nurses, and we have 3 million nurses. We need another 900,000. I’m like, ‘Great. I’ll give you the 900,000.’”
Where is the trust?
Trust is at the core of building any business, but a fledgling one that is pushing AI as a fundamentally transformative tool has to be bathed in it. And that is where Shah has a problem. In an excellent article that followed Hippocratic AI’s splashy launch, Forbes’ Katie Jennings enumerated how Shah’s previous company, Health IQ, is “facing allegations of millions in unpaid invoices, tens of millions in debt,” with one lawsuit “alleging fraud.”
Health IQ was a Medicare brokerage startup that Shah co-founded and was CEO of. It is currently in Chapter 7 bankruptcy proceedings.
When asked how folks can trust Shah’s new venture given this history, there is a noticeable change in his demeanor. He is surprised that the issue was raised at all but, to his credit, doesn’t shy away from addressing it. Gone is the bravado, the self-assuredness of knowing that he likely has the tools to solve a really intractable problem in healthcare. In its place, there’s almost a look of sheepishness.
“I would say that, look, I built that business and really focused on trying to ensure we built a good business serving seniors. In building that business we took on a lot of debt to grow it,” he explained. “And the debt was cheap when debt was cheap. And then as the interest rates went up, we really struggled to make the debt payments.”
He added that he spent his entire Christmas of 2022 trying to raise fresh capital from 50 investors. Some creditors, he said, didn’t understand that certain senior lenders were secured and would be paid first when the company folded through a Chapter 7, which meant others wouldn’t be paid at all.
“As the waterfall trickles down, there’s less and less to share,” he said. “I lost huge amounts of money personally in it. But in the end, when a company runs out of money, companies run out of money and not everybody can get paid.”
Shah had built three companies before; the two before Health IQ weren’t in healthcare. Like so many tech entrepreneurs who have rushed in hoping to improve healthcare delivery and efficiency while reaping millions, he encountered inhospitable terrain.
“I did not realize how difficult some parts of healthcare were to be,” he conceded. “I’ll try to get my redemption through Hippocratic and try to do something good.”
Meanwhile, many of the vendor lawsuits filed against Health IQ are on administrative stay pending the resolution of its bankruptcy filing. A class action lawsuit filed by employees is also pending.
Photo: Sylverarts, Getty Images