
Ep 6. AI in healthcare: payer/provider battleground or force for good?
January 9, 2025
32 min read


We are back after the holiday break with Episode 6, which covers the rapidly evolving landscape of AI in healthcare. Is AI just another weapon for payers and providers to bludgeon each other, or will it become a force for immeasurable public good? In this episode, Dr. Anthony Paravati and Dr. Amar Rewari discuss the transformative impact of AI in healthcare, exploring its potential to improve patient care and streamline insurance claims, and the ways it is used in the clash between providers and insurers over the almighty dollar. They highlight success stories in cancer detection and stroke care, while also addressing the challenges and legal implications of AI in claims processing and denials. The conversation concludes with insights into future trends and the importance of ethical considerations in AI deployment.
The Financial Stakes of AI in Healthcare
Welcome back to Value Health Voices, where we bring health policy and healthcare finance to life in a way that's accessible, engaging, and every once in a while, even entertaining. This is episode six. I'm Dr. Anthony Paravati, and today we're diving into what might be the biggest technological revolution and power struggle in modern healthcare.
And I'm Dr. Amar Rewari. You know, Anthony, it's fascinating how AI has become both a source of hope and controversy in healthcare. We're seeing projections that AI will become an $188 billion industry in healthcare by 2030. But behind those impressive numbers lies a complex battle between healthcare providers and insurers that's reshaping how decisions are made and paid for.
The stakes couldn't be higher. Amar, think about this. Hospitals, and I think we've covered this on a previous episode—we'll have to go back and look—hospitals are spending nearly $20 billion annually just on appealing insurance claim denials. That's billions. 20 billion.
The financial pressure is driving both sides of the healthcare transaction to embrace AI. From the insurer perspective, it is to process and often deny claims more efficiently. And providers are similarly using AI to fight back against those denials. Before we dive into the battle, let's help our listeners understand just how transformative AI has already been in improving patient care. Right after the intro music, let's get into it.
Clinical Success Stories and Early Detection
All right, so let's start with some remarkable success stories first. I think that shows why healthcare providers are so excited about AI's potential. First, let's talk about breast cancer detection, because we're both oncologists, so it's near and dear to our hearts. A major UK study found that when AI systems were used to interpret mammograms, they reduced false positives by 5.7% and false negatives by 9.4%.
Now, think what that means. That's fewer women getting unnecessary biopsies and, more importantly, fewer missed cancers. Even more impressive, a South Korean study found that AI systems achieved 90% sensitivity in detecting breast cancer, compared to 78% for radiologists. And here's what's particularly exciting: that AI was even better at detecting early-stage breast cancer, with a 91% detection rate compared to 74% for human radiologists.
So these aren't just statistics, Anthony. These are real lives that are being saved through earlier detection.
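A quick aside on the terminology, since those numbers carry real weight: sensitivity is the fraction of true cancers a reader actually catches, so every point of sensitivity gained is a missed cancer avoided. In standard notation, with TP the true positives and FN the false negatives:

```latex
\text{sensitivity} = \frac{TP}{TP + FN},
\qquad
\text{miss rate} = 1 - \text{sensitivity}
```

By that arithmetic, moving from 78% to 90% sensitivity cuts the miss rate from 22 to 10 per 100 true cancers, eliminating more than half of the misses.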
Now that is awesome data, truly awesome, from both the UK and South Korea. You opened with a great oncology example, but the benefits really extend to many areas. Another area that doesn't have anything to do with our practice, but certainly is one where early detection and early action on the order of minutes really matter, is stroke care.
In that context, an AI startup called Viz.ai has developed technology that works like this: the moment a patient gets a brain scan, their technology instantly analyzes that scan for signs of stroke. According to their studies, it can often detect problems much faster than a human could. If it detects early signs of a stroke, like an ischemic stroke, it triggers a stroke alert. And as I said, every minute millions of brain cells are on the line, so speed truly matters.
Another quick example is in diabetes care. AI algorithms have been shown to detect diabetic retinopathy, which, for our non-physician listeners, is a leading cause of blindness. This technology can exceed human ability to detect retinopathy, so it definitely matters in that context as well.
Speaking of not just startups but well-known health systems, they are also making significant leaps forward in their use of AI. For example, NYU Langone Medical Center has built out an IT team focused on AI, kind of an IT command center. They're using this AI team for everything from claims denials (so, back-office work) to enhancing communications between doctors and patients. They've even built their own proprietary large language model based on years and years of their clinical documentation.
Stanford, meanwhile, has built their team around sophisticated cloud-based infrastructure that allows for rapid progress on research collaborations across institutions. So pretty awesome.
Listener Poll: Current AI Usage
Before we go further, though, I want to start something new on the podcast. Obviously, we're not recording this live; it'll be a podcast episode like all the others. But what we're going to do on our various social channels is post some questions. We want our audience to see those questions and provide an answer by indicating A, B, C, or D.
The first one is going to be for healthcare professionals: How is AI currently being used in your practice? Respond through our social channels. A is for clinical diagnosis and support. B is administrative tasks. C is both clinical and administrative. Or D, which I think is going to be probably the least answered, is not currently using AI at all.
It'll be very interesting to hear back from our listeners on how they're using it. I personally am using it more around the administrative task side of things. How about you, Anthony?
For me, it's definitely C. There's an AI model put out by a joint venture between a startup and Mayo Clinic called OpenEvidence. I've found its answers to clinical questions to be very high quality. Very reliable. So a fair bit on the clinical side, but certainly also on the back office for administrative tasks as well.
The Rising Cost of Insurance Denials
Gotcha. But let's pivot a little bit here. Let's talk about what's one of the most contentious issues in healthcare when it comes to AI, and that's the processing of insurance claims and the whole prior authorization process. According to the Healthcare Financial Management Association—and we've talked about this in many other podcasts—denial rates have jumped from 10-ish percent in 2020 to about 12% by the end of 2023.
More concerning is how long claims are taking to process. Aged accounts receivable over 90 days has increased to about 36% from 27% just a few years ago. These are really troubling statistics. So what does this mean in practical terms? Right now, on average, $44 is being spent to appeal each denied claim. When you multiply that across all claims, it adds up to almost $20 billion spent annually just fighting denials. That money could be well spent on other resources.
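Taking those two figures at face value, the implied claim volume is worth making explicit as a rough back-of-the-envelope calculation:

```latex
\frac{\$20\ \text{billion per year}}{\$44\ \text{per appealed claim}}
\approx 4.5 \times 10^{8}\ \text{appealed claims per year}
```

That is on the order of 450 million denied claims being worked each year, which is why even small per-claim efficiency gains from AI translate into very large dollar amounts on both sides of the transaction.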
I think no matter what, now we're in this age of AI, which people are comparing to the 90s—like '96, '97 before the Internet went absolutely crazy. Whatever the technology is, it doesn't matter. It's going to be a battleground between providers, health systems, and insurers. It just sort of seems to be a law of the universe.
Sorry to interrupt, Anthony, but I was just thinking for our listeners, this is obviously very contemporary given everything that's been in the news in the last few months with Luigi Mangione and UnitedHealthcare. This is impacting Americans. It's people's coffee cooler... coffee cooler discussions that they're having.
Water cooler.
Yeah, water cooler. Thank you. Water cooler or over a cup of coffee. They're discussions that they're having. So this is impacting everyday Americans left and right.
Insurer Algorithms and Legal Controversies
Sure. You invoked the famous UnitedHealth Group, so let's just start with them. I was going to get into some examples that demonstrate the perils of AI in the tensions between providers and payers—and really patients, actually. All three obviously are part of this.
UnitedHealth Group has come up with a tool that they use internally that I believe they call nH Predict. It's been in the news a little bit lately and they've been sued over this. You can read in the court documents, in the filing—it's actually a class action lawsuit—that alleges that United is using their AI system to predict basically how much care a patient should need and then deny everything after that. So any intensity of care—apologize for the jargon—that's above what this program says it should be results in an automatic denial.
The interesting thing about the court filing is that they allege, and I'm not sure exactly how they know this, that United knows there's a 90% error rate with this program and is using it anyway. That's just one example.
If that's proven to be true, then I think they're obviously going to have successful litigation.
So another insurer that is active in this space is Cigna and theirs is called PxDx. ProPublica has reported on this. It's a system that they've used to deny hundreds, maybe thousands of claims really all at once. No human is touching them. There are some consistent features that this program identifies, and so there's no medical necessity review; it's just denied.
A couple of other quick examples: Florida Blue has developed their own large language model, something they use like a ChatGPT, for similar reasons. Oscar Health and CVS Aetna are doing much the same as the previously cited examples of United and Cigna. What I think is particularly troubling is that the AHA has also brought out data suggesting this AI trend has led to an increasing number of denials in the Medicare Advantage space as well as in commercial products. They're saying this coincides with the rapid spread and uptake of AI by insurers.
Right. And I think that's the American Hospital Association, correct?
Oh, sorry, yeah, American Hospital Association.
But the way I think about it, there are some positives. The negative, as we're discussing from an insurance perspective, is obviously maximizing claim denials, both on the prior authorization side and on the back end. But since we are talking about these aged accounts receivables, maybe if these insurers were to use AI to make their claims processing faster, that would help patients get prior authorization sooner. I don't know if that's going to be an ultimate goal, but it would be a positive that may help patients.
Provider Defense Strategies and Startups
Absolutely. In terms of what the providers are doing, they're not just taking this lying down. They're definitely developing their own AI capabilities to fight back, especially the larger ones. Obviously, some of the smaller providers, like where I'm at, we're still just getting a grasp of all this. But a lot of the larger health systems are taking steps.
For example, Stanford Medicine has split their infrastructure between cloud and physical servers with a 15-person data science team developing the AI tools specifically for clinical research and claims management. NYU Langone has built their own large language model trained on a decade of clinical notes, which they're using to improve documentation and fight denials. Cleveland Clinic has partnered with IBM in their discovery accelerator project to revolutionize how they handle claims and clinical data.
But what's really interesting is the emergence of a bunch of AI startups designed specifically to help providers fight back against these denials. SmarterDx, for example, developed a tool that can generate appeal letters in a fraction of the time it would take a human. They're currently working with three major health systems and seeing impressive results.
I think our listeners would be really interested in hearing about some of these newer players. And I'll just give a brief anecdote that for me personally, this is where I use AI. I find it very helpful in creating some of these appeal letters to the insurance companies on the denials.
Before I get into some of the startups, and you mentioned some of the health systems, I should note that we haven't developed this episode in collaboration with any of them. No one has sponsored this episode.
Yes, good point.
I want folks to know there are no affiliations here, and this is neither an endorsement nor a critique of what any of these entities are doing with AI. We're simply trying to highlight the current panorama of the use of AI in healthcare.
One of them, and when I read it quickly I actually thought it said Humana Health, is Humata Health. That's Humata: not an N but a T. It's a startup that has developed AI systems providers are using to elevate their approval rates on initial submissions. They claim you can see essentially a 10% or greater improvement in your approval rates for initial authorization submissions. Based on what we could find, they've grown their customer base pretty significantly; across all their customers, they already have 40,000 physicians using their product.
Another one that I thought was interesting is a firm called Claimable, a very new firm that launched not even a year and a half ago. Their approach is actually pretty accessible. They market either to physicians or to patients and they charge a flat fee of about 40 bucks. They'll generate an AI-powered appeal letter for you that includes, from the patient perspective, a personal story, supporting medical information, and citations. That's kind of interesting. Not necessarily B2B but also direct to the patient in this case.
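To illustrate the pattern these appeal tools follow, here is a minimal sketch in Python of assembling the inputs just described (a personal story, supporting medical information, and citations) into a single prompt for a large language model. To be clear, this is not Claimable's or any vendor's actual system; every name in it is hypothetical.

```python
# A minimal, hypothetical sketch of the appeal-letter pattern described
# above: combine the denial reason, the patient's story, the clinical
# facts, and supporting citations into one prompt for an LLM to draft
# a formal appeal. Illustrative only; not any vendor's actual system.

from dataclasses import dataclass


@dataclass
class AppealInputs:
    denial_reason: str     # the insurer's stated reason for the denial
    patient_story: str     # personal narrative, in the patient's words
    clinical_summary: str  # diagnosis, treatment history, medical necessity
    citations: list[str]   # guidelines or studies supporting the request


def build_appeal_prompt(inputs: AppealInputs) -> str:
    """Assemble a structured prompt an LLM could turn into an appeal letter."""
    cited = "\n".join(f"- {c}" for c in inputs.citations)
    return (
        "Draft a formal insurance appeal letter.\n\n"
        f"Denial reason given by the insurer:\n{inputs.denial_reason}\n\n"
        f"Patient's personal story:\n{inputs.patient_story}\n\n"
        f"Clinical summary and medical necessity:\n{inputs.clinical_summary}\n\n"
        f"Supporting citations:\n{cited}\n\n"
        "Tone: respectful, factual, and specific. Cite the supporting "
        "literature by name and request a timely re-review."
    )


if __name__ == "__main__":
    prompt = build_appeal_prompt(AppealInputs(
        denial_reason="Not medically necessary.",
        patient_story="I have been unable to work since my diagnosis...",
        clinical_summary="Stage II disease; guideline-concordant therapy denied.",
        citations=["Relevant clinical guideline", "Peer-reviewed trial"],
    ))
    print(prompt)
```

The design point is simply that a persuasive appeal has a consistent structure (denial reason, patient narrative, clinical facts, citations), which is exactly what makes it a natural target for automation.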
Listener Poll: Denial Rates
So let's do another poll question. Again, not a live episode, but you'll see these questions in our social media channels afterward. This one is more for the physicians: What percentage of your prior authorization requests result in an initial denial? A is less than 10%. B, 10 to 25%. C, 25 to 50%. Or D, more than 50%, which I hope isn't the case.
We had talked in other episodes about studies the different professional societies have done around this. In addition to physicians, I would love to also hear from the administrators, some of the billers and coders who may be listening in, to hear what they're also seeing.
Regulatory Landscape and Legislative Action
So one of the things I want to talk about is what's happening in the legal and regulatory landscape with all these issues related to AI in healthcare. It's definitely caught the attention of regulators and lawmakers. The National Association of Insurance Commissioners recently reported that 17 states have adopted their model AI bulletin and four states have implemented insurance-specific AI regulations. But what's concerning is many states haven't even begun to address these issues.
CMS, as we know, has taken action starting this year, requiring government-sponsored health plans to respond to prior authorization requests within 72 hours for urgent cases, provide decisions within seven calendar days for standard requests, make prior authorization metrics publicly available, and implement standardized data exchange systems.
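As a toy illustration of the two response windows in that rule (72 hours for urgent requests, seven calendar days for standard ones), here is a minimal sketch; real compliance logic inside payer systems obviously has many more edge cases than this.

```python
# A toy sketch of the two prior authorization response windows described
# above: 72 hours for urgent (expedited) requests, seven calendar days
# for standard requests. Illustrative only.

from datetime import datetime, timedelta


def response_deadline(received: datetime, urgent: bool) -> datetime:
    """Return the latest time a plan could respond under the rule as described."""
    window = timedelta(hours=72) if urgent else timedelta(days=7)
    return received + window


if __name__ == "__main__":
    received = datetime(2025, 1, 9, 14, 30)  # when the request came in
    print("Urgent deadline:  ", response_deadline(received, urgent=True))
    print("Standard deadline:", response_deadline(received, urgent=False))
```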
Another aspect of this that I had previously forgotten, and we were talking about it at another meeting, is that before Medicare issued this final rule, another problem we had with the Medicare Advantage contractors was that they would more or less ignore local coverage determinations. Another interesting thing in this rule is that it basically compels the Medicare Advantage insurance companies to follow those local coverage determinations. That's an important thing the rule did, and we obviously appreciate the government taking that action.
Since we were talking about the state-by-state stuff, I thought we might want to mention California's recent legislation that went into effect this year regarding insurers' use of AI as the primary tool for denying claims.
Yes. That law, as you said, went into force on January 1st. AI is allowed, but it can't be the sole tool; a physician or appropriate healthcare professional still has to evaluate the medical necessity of the case. That seems to be a very appropriate step. Obviously, the devil is in the details. I haven't studied that law closely, but hopefully it leads to a positive result for California. It may be a model for other states to follow as well.
We're getting a little bit into the legislative side and also the legal side. We talked about United and the current suit they're facing. I brought this up to somebody the other day and they were surprised: United is facing several legal challenges and Department of Justice investigations in a number of areas, tricky areas for them. Perhaps it's thought of as just the cost of doing business. I don't know. But I wanted to mention that.
I'm still so struck by what you mentioned about the 90% error rate that may exist with their AI tool, and UnitedHealth Group knowing that and still utilizing it. That'll be a very interesting case to follow.
These are things we're talking about here that have been gleaned from the court filings. Obviously, these allegations still have to be proven in court. I guess maybe that's not the exact legal standard, but they're compelling enough as filed. We will see. Are there any other lawsuits out there against insurers?
Cigna, again, for the system we talked about. I don't think I mentioned it earlier in our comments about Cigna and their PxDx system, but they too are facing a lawsuit based on what has been called improper claims denials, substantial enough to result in significant cost shifting to their members and possibly affecting as many as 2.1 million members in the state of California.
Moving away from lawsuits, just some other items of interest. The House Energy and Commerce Committee is also investigating Cigna, and we're speaking now about the last Congress, obviously, because the current one just started. The Senate Permanent Subcommittee on Investigations has demanded internal documents from United, Humana, and CVS, all similarly about their AI use in Medicare Advantage. Obviously, the Senate is interested because those are seniors, who have a right to Medicare coverage. They have the option of private Medicare coverage, but if, pursuant to that right, seniors are being taken advantage of, the federal government is obviously going to be interested.
And then there's been coverage, here in the closing days of the Biden administration, of his executive order on AI in healthcare, which puts into place some new standards and safeguards. We will see if those stand the test of time with the new administration coming in. So that's it for the review of the current legal and regulatory landscape.
Clinical Advances and Ethical Challenges
Let's talk about a couple of success stories, or at least highlight them again. We talked about stroke care and the products there to rapidly identify strokes. Talked about diabetic retinopathy. Talked about breast cancer screening. And while we don't have a specific startup to call out here, the same kind of approach is being applied to early detection of lung nodules for lung cancer screening. That's exciting as well.
The reason why, and I mean detecting more breast cancers, and earlier ones, is as important as anything, but with lung cancer mortality being as high as it is, the identification of stage 1 lung cancers has a chance to make a massive impact. These are definitively treatable, either with definitive surgery, even now sub-lobar surgeries, or with definitive SBRT (stereotactic body radiation therapy), which is non-invasive. Catching lung cancer at an early stage could meaningfully improve cancer mortality.
I was just thinking, though, with all these advances, there are definitely some challenges. One is data privacy. We talked about Stanford and how, in implementing AI systems, they split their data between cloud and physical servers to maintain security. But every additional piece of data collected to improve AI performance increases privacy risk.
There are issues related to bias, particularly around race. There was a study published in Science that found that AI algorithms showed racial bias. That's particularly concerning as these tools are being utilized to deny care and manage resource allocation.
One of the other things I found interesting concerns the messages patients send us in the electronic medical record. Currently, as physicians, we respond to them. But there are definitely programs now that let AI respond to these patient questions as a first pass, at least to cut down some of the time associated with answering. What hospital systems are struggling with is whether to add a disclaimer saying the answer was generated by AI.
You would think adding one would be a no-brainer. But the downside is that it may absolve providers from fact-checking the AI responses; they'll think there's a disclaimer attached, so it's not coming from me. So some health systems are choosing not to add those disclaimers, precisely so that providers will check the responses more carefully.
It reminds me of what you sometimes see in people's emails, where they'll say, "This email may contain errors."
Right, on your phone.
Exactly. And I even see some physicians put it in their reports. I see it in voice-generated radiology reports, where they'll note there may be inaccuracies from the voice recognition software. Occasionally these are massive inaccuracies. Even when they're not massive clinical mistakes, they might be highly ridiculous words that you wouldn't want in the report: close to some medical word, but not a medical word at all.
When I do see those, I don't know if I'm being that guy, but I'll often, as a courtesy, alert the radiologist: "Hey, there's this word in there." And then they addend it and clean it up.
Yeah. And sometimes it's a big thing, like the report says left instead of right, which is huge. A big problem.
Exactly.
One of the other things I thought was interesting with AI, and I don't remember where I read about it, is that there were surveys showing that when patients got responses to their questions from AI versus physicians, they felt the AI sometimes appeared to have more empathy. The reason, obviously, is that if a physician has a busy practice all day long and then spends two hours at the end of the day answering patient questions, there's definitely burnout there. People want to go home to their families. So you oftentimes may not see that empathy come through.
On the plus side, AI could potentially alleviate some physician burnout and give satisfaction to both patients and providers, as long as there are no issues related to trust, which is one of my other concerns: whether patients might lose trust in their healthcare providers if they feel those providers are relying on AI too much, as opposed to having that doctor-patient relationship.
Out of all the things AI is being developed to handle, from autonomous agents on up to a high degree of complexity, there's actually something I have on my list to watch. The CEO of Nvidia just gave a keynote at CES, a major tech conference, where he went into these coming uses of AI. I only bring it up to say that I'm pretty confident these applications, AI writing high-quality responses to patients that direct them to take further action in a way that's highly human and empathetic, are probably pretty solvable very soon, even if there are some lumps to work out at the moment.
Right. And then there's the idea of built-in biases. It's all based on what's fed into the language model, so these systems are prone to the same biases as humans are.
Absolutely. Other challenges: training requirements, since healthcare professionals would have to be trained on AI; continuous monitoring and validation of these systems; liability, meaning what happens when AI tools make mistakes; and the cost of implementing and maintaining these systems. These are all real challenges.
They are. And it's unavoidable: every wave of innovation brings up this dynamic of the haves versus the have-nots in healthcare. We mentioned the names of some health systems here that are obviously large, have significant revenues, and are in a position to make these investments and consolidate their advantages in this dimension, as well as several others.
It's a problem, and it's a problem that neither you nor I are going to solve. And I don't think the government is either. But it is just something that does bother me because a person who lives in Atherton, California, one of the richest towns in Silicon Valley, is no more or less deserving of a highly sophisticated healthcare system to take care of them than somebody in a small town in Georgia, for example. It's just the way it is.
Future Trends: Preventative AI and Personalization
So where do you see things going in the future, Anthony?
I guess that's a good question to bring us toward tying up this episode. We're probably going to see several key trends shape the future of AI in healthcare. The most significant is probably going to be the use of AI to predict patient complications before they occur. Everybody has seen, in programs like Epic, which is a major electronic medical record, so-called BPAs, or best practice advisories: alerts that say, "Hey, this patient could be septic, do this or that."
These are going to be more widespread, they're going to be more accurate. They're going to even be used, I think, in the outpatient setting to intervene more upstream and keep patients out of trouble.
Sort of a preventative AI.
That's right. Preventative AI is a great term for it, and it's actually being used fairly widely in this space: identify high-risk patients who need intervention, and optimize resource allocation in real or close to real time. For years and years, we've been talking about personalized medicine without really having the infrastructure to do it. I think AI may finally be that infrastructure, allowing for a highly informed degree of personalization that's not just on a whim but actually supported by the data.
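To make the advisory idea concrete, here is a toy sketch of the kind of rule-based logic a classic sepsis-style BPA might encode. To be clear, this is not Epic's actual implementation; the thresholds are the familiar SIRS-style cutoffs, used purely as an example, and the predictive tools discussed above would replace fixed rules like these with learned risk scores.

```python
# A toy illustration of a rule-based "best practice advisory": count how
# many SIRS-style criteria a patient's vitals meet and fire an alert at
# two or more. Not any EMR's actual logic; thresholds shown for example only.

from dataclasses import dataclass


@dataclass
class Vitals:
    temp_c: float          # body temperature, degrees Celsius
    heart_rate: int        # beats per minute
    resp_rate: int         # breaths per minute
    wbc_k_per_ul: float    # white blood cell count, thousands per microliter


def sirs_criteria_met(v: Vitals) -> int:
    """Count how many SIRS-style criteria the vitals meet."""
    return sum([
        v.temp_c > 38.0 or v.temp_c < 36.0,
        v.heart_rate > 90,
        v.resp_rate > 20,
        v.wbc_k_per_ul > 12.0 or v.wbc_k_per_ul < 4.0,
    ])


def sepsis_advisory(v: Vitals) -> str | None:
    """Fire an alert when two or more criteria are met, as a classic BPA might."""
    if sirs_criteria_met(v) >= 2:
        return "ALERT: possible sepsis. Consider lactate, cultures, and clinician review."
    return None


if __name__ == "__main__":
    patient = Vitals(temp_c=38.6, heart_rate=104, resp_rate=22, wbc_k_per_ul=13.5)
    print(sepsis_advisory(patient))  # meets all four criteria, so the alert fires
```

The shift the hosts anticipate is from static thresholds like these to models that estimate risk continuously and can intervene further upstream, including in the outpatient setting.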
And it's happening now. Cleveland Clinic is one example of all this coming together.
The Discovery Accelerator with IBM.
Right, exactly. And we're also seeing significant changes in how AI is being regulated, which I think is another issue going forward. The National Association of Insurance Commissioners is planning a comprehensive survey of health insurers across the states to better understand how AI is being used: utilization management practices, patient impact assessments, transparency in decision making, and fair treatment standards.
Like I said, many states haven't even begun to address these issues in a meaningful way yet. There was a quote in one of the studies we saw during our research in which the consumer representative for the National Association of Insurance Commissioners pretty much said that the first step is admitting there's a problem, and many states haven't even gotten to that first step.
That's right. There's a wide range of preparedness, or a complete absence of it, on a state-by-state basis.
Best Practices for Healthcare Professionals
One thing we can do in bringing this episode to a conclusion: one of the things that motivated this podcast is that you and I have a fair amount of experience dealing with insurance companies, in what we call payer relations, and with the regulatory aspects of clinical documentation and best practices in documentation. So we can offer a few best practices or recommendations, broken down into two parts. There's overlap here, of course, because healthcare providers, whether physicians, advanced practice providers, or nurses, often overlap with administrative roles too. So there isn't an exact line of "this is for the clinicians and this is for the administrators," but to give it some structure, we'll break it down that way.
So, AI denials. AI denials are automatic in many cases. Sure, you're denied initially, but these are often easily overturned on appeal. So document meticulously what you're doing and the medical necessity behind it. I think that's number one: document, document, document.
Yes. And I think at a certain point, both the haves and the have-nots are going to have to spend some money to train their staff on AI systems. Otherwise, you're just going to fall behind. You may in fact have to write some checks and formally partner with AI startups in the denials space and some of these other avenues we talked about in this episode. I think it's going to become more and more of an imperative.
And I think for the healthcare administrators, they also really should be evaluating which AI vendors they're using, carefully looking for transparency in their algorithms and proven track records so that they're not just investing in a company that's making empty promises. They need to monitor these denial patterns and build robust appeals processes. Consider, if they can pool their resources and they're a large enough system, investing in their own AI infrastructure for claims management.
I know it's easier said than done. Most of our hospital systems are struggling to even maintain margins, let alone invest in this. But it's going to benefit the bottom line, because it's obviously a form of maximizing revenue.
Sure, yeah. Think about best practices in finance, best practices in clinical operations, and in several other areas. AI is now becoming a key functional area that bridges pretty much every other aspect of how we operate. So you're seeing health systems build these departments, these AI working groups, these C-suite positions that direct the AI shops in hospital systems, and certainly in the payers as well.
I'm finding that the Chief Digital Officer or Chief Information Officer, whichever title your health system uses (sometimes they use both), is becoming one of the most influential, powerful people in your health system, apart from the CEO and the Chief Legal Counsel. Right?
Community Feedback and Conclusion
Absolutely. So I think our third poll question could be for listeners from, let's say, a smaller practice: what do you recommend for competing with the AI capabilities of larger insurers? What are you doing, and what documentation practices have you found most effective in fighting some of these AI-driven denials?
Yes, and these are free-form questions. So we're not going to give you A, B, C, or D here.
It would just be great to hear some feedback because we can incorporate some of that into our future episodes.
That's right. How are you responding to the AI capabilities of larger insurers, and what documentation practices have you found that help fight some of these AI-driven denials?
And we understand in making this episode that this is a hot topic. Especially with this audience engagement, there will probably be specific requests for more episodes breaking down given topics. To stay tuned with what we're doing and the content we're developing, please visit our website, ValueHealthVoices.com, and our social channels on YouTube, LinkedIn, and several others, where you can find the show notes from this episode as well as others.
We haven't developed it yet, but I think we're going to put together a downloadable guide, a sort of practical guide for providers on AI in healthcare. We also have our LinkedIn group, Value Health Voices, where you'll find all the content we put out: short-form videos, long-form videos. Follow us on X, where we post daily, and on TikTok, Instagram, all those locations.
And please subscribe to our podcast, especially if you found today's discussion valuable, and leave a review and a star rating; that always helps us. Your feedback gives us ideas for new content, so we would love to hear from you so we know what future episodes we can do on healthcare delivery and finance. Expect another episode from us in a couple of weeks, and see you later.
We'll talk then. All right.







