Care Delivery & Innovation


AI in Oncology: The Reality & Future of Cancer Care with Dr. Sanjay Juneja

November 20, 2025

50 min read

Dr. Sanjay Juneja, Value Health Voices Podcast Guest

Value Health Voices


Is the rapid rise of artificial intelligence a threat to medicine or its greatest hope? In this episode, we tackle the massive hype and complex reality of AI in oncology with one of the leading voices in the field, Dr. Sanjay Juneja, also known as TheOncDoc. We break down what this technological revolution truly means for cancer patients, doctors, and the healthcare system at large. From uncovering hidden patterns in cancer data that defy human intuition to the practical challenges of implementation, we explore how AI is set to transform everything we thought we knew about medicine.

Join us as we separate fact from fiction in the world of medical AI. Dr. Sanjay Juneja, a medical oncologist and VP of Clinical AI Operations at Tempest, shares his journey from social media educator to a trailblazer in health technology. We dive deep into how AI can address the "unwarranted variation in care" that leads to inconsistent patient outcomes across the country. Dr. Juneja explains how machine learning models can analyze vast datasets to find novel insights, much like Google's AlphaGo made a move in the game of Go that was inconceivable to human grandmasters. This episode explores the incredible potential of the future of AI in healthcare, from AI scribes developed to combat physician burnout to new diagnostic tools that can predict hyperglycemic events from the sound of your voice or determine a tumor's molecular features from a simple pathology slide.

However, the conversation doesn't shy away from the serious challenges ahead. We confront the "garbage in, garbage out" problem, discussing how biases in training data can lead to flawed or inequitable conclusions. A core part of our discussion focuses on the critical need for validating AI models in medicine before they are widely deployed, ensuring that these powerful tools are both safe and effective. We also explore the nuanced impact of AI on the doctor-patient relationship, debating whether an algorithm can truly be more empathetic than a human physician and what happens to trust when patients suspect their doctor's messages are AI-generated. Finally, we unpack one of the biggest hurdles to adoption: the issue of liability for AI in healthcare. When an AI model makes a mistake, who is responsible—the developer, the hospital, or the clinician who acts on its recommendation? This is a must-listen for any clinician, patient, or technologist seeking to understand the real-world implications of AI in oncology today and in the near future.


Dr. Anthony Paravati: Amar, it's great to be back with you. We have another episode here today. This is episode 22 of the Value Health Voices podcast.

Dr. Amar Rewari: Yeah, it's exciting because we're going to be talking about AI and healthcare, and specifically a lot of this episode will be about AI and oncology.

Dr. Anthony Paravati: Yeah, that's right. It's not our podcast, we tell our listeners this all the time. We're a couple of oncologists, but it's not typical that we have also an oncologist as a guest. And this is Dr. Sanjay Juneja, who is a medical oncologist by training, but he has become very rapidly a leading voice in AI and healthcare.

Dr. Amar Rewari: He is widely recognized as the OncDoc, is a trailblazer in healthcare innovation and a rising authority on the transformative role of AI in medicine. He has over 750,000 followers and 50 million views on social media. He has a unique ability to simplify complex topics that's gained him a lot of global acclaim.

He has a podcast focused on AI and healthcare that's amassed over 4 million downloads in 120 countries, has founded [Could not verify with context], an educational platform specializing in AI applications in oncology, and he serves as the vice president of clinical AI operations at Tempest and as a contributing writer and technology council member for Forbes. He's credentialed by Harvard Medical School's inaugural executive education program, AI in Healthcare: From Strategies to Implementation, and is an editorial board member for the peer-reviewed journal AI in Precision Oncology.

Dr. Anthony Paravati: Yeah, I don't know how the guy finds all the time to do all this stuff. He's a real dynamo doing great work. And we have a wide-ranging conversation with him today about AI in healthcare, AI in oncology, where things are going over the next two to five years, lots of practical tips as well. Really exciting conversation we're going to get into with him on the episode.

Dr. Amar Rewari: Particularly, we are really going to focus on separating reality from hype. If you had to take anything away from this episode, it's where we are now in AI, where we see things going in AI, and some of the pitfalls that we may encounter along the way.

Dr. Anthony Paravati: That's right. And one of the comments that I've made before and that I want to make to him is that there's a lot of hype. But so much has been said that we are going into this period of rapid innovation, another industrial revolution. I think I have an idea that he's going to say that that is true, that you can't hype it enough, that the promises for patients and for physicians to simplify their lives are so many. And I really am excited to talk with him.

Dr. Amar Rewari: Let's get into it.

Dr. Anthony Paravati: We are with Dr. Sanjay Juneja.

The Shift to AI: Tackling Unwarranted Variation in Cancer Care

Dr. Amar Rewari: Yeah, we're very excited to have you on, Sanjay. And as we mentioned earlier, you used to do a lot of content creation and had a very successful business doing that: 750,000 followers, 50 million views. But now you've shifted to doing a lot more in the AI realm. And I was wondering, what made you feel that AI was urgent enough for you to shift your focus?

Dr. Sanjay Juneja: Yeah, it's funny you say that, because for the exact reason I was doing social media and all of this education, it became clear to me that AI is the solution for what I really considered a band-aid with social media. It wasn't long into doing social media that I realized how discordant and non-uniform care delivery is in this country. You can have community centers, you can have over-saturated places, you can have not enough oncologists. And I just had too many messages.

I'm talking hundreds of messages at the beginning: "I didn't get referred for neoadjuvant treatment," which means shrinking a tumor before you take it out, "before my breast cancer surgery, and I was 35, and it was standard of care to do that," or "I never got molecular testing," just these different things, and it was wild to me. I still am an enthusiastic social media poster, but it definitely felt more like work because it was almost a necessity. When you see hundreds of thousands of these, they aren't feel-good stories; when you hear that somebody's course changed with something as scary as cancer, it was more like, man, this is a problem. And I just felt very compelled to put out a lot of content until, interestingly enough, I was doing a keynote.

Dr. Sanjay Juneja: At a [Could not verify with context] conference, right after Google Cloud's then global director of applied AI. And he decided to stay for my lecture. His lecture had been six or seven dimensions beyond my frontal cortex, all this techie stuff; by comparison, I was on Windows Word circa '92 to '94. And afterward he said, "A lot of these problems that you talked about, they sound algorithmic. AI can do this." And I was like, tell me more. And that's what started the journey.

Dr. Amar Rewari: So is that, it was after that time that you went to Harvard to study it more seriously?

Dr. Sanjay Juneja: Yeah, I took the first course that Harvard offered on AI in healthcare, From Strategies to Implementation. And then Scott Penberthy and I, with Google, started this education platform that was more for just AI in general: what is it? I went way overboard. So if anyone's listening, they might be like, gosh, I really need to get around to that AI thing, but it sounds intimidating.

You really don't need to go down to the basic sciences the way we do and are used to as physicians. I was doing perceptrons and all these other things. That's not necessary; for clinical AI applications, some basic knowledge can help you vet models and just understand the nature of these tools. And it does not take long. I really think you could be very well-suited with a Celsius and a weekend of just shutting yourself off. You can accomplish that in two and a half days.

Beyond Human Intuition: AI's Power to Uncover Hidden Patterns

Dr. Anthony Paravati: Yeah. Well, as we went through in our intro of you, and we'll say it again for the listeners, you're a medical oncologist by training and had a successful part of your career there. And as many of our listeners know, both Amar and I are practicing oncologists, radiation oncologists. And what you described is what the people who study this call unwarranted variation in care. Unwarranted variation that basically makes outcomes unpredictable and degrades quality.

One of the things I'm excited about in AI is the compute, and the democratization of data, hopefully, maybe I'm thinking about this in an overly positive way, that can allow us to really see with a great deal of granularity what we're going to get from certain clinical decision-making, and what, for example, patient compliance or non-compliance with therapy costs in terms of outcomes. So is that something that you're interested in applying as part of AI for medical purposes?

Dr. Sanjay Juneja: Yeah. The economic implications and unpacking all that is crazy. I'm sure that you can just do so much. The one aspect I'll tell you that I get excited about: one thing that I tried to do early on in social media, when there were comments, I don't want to call them hater comments, they're not, but when there were things that were blunt and doubtful about traditional oncology, was to get an idea of where that came from and what the source of truth is.

And one recurrent thing that I saw was a lack of faith, because they were like, there's so much cancer, it's been around for so long, people are dying all the time, and we're just barely moving the needle, so it has to be by intention. A year, a year and a half with the first-line therapy and then no disease, and then all of a sudden it's back. And so I was like, okay, let's distill that down. And one of the sentiments I take from it, which makes sense, is: dude, there's an aggregate

Dr. Sanjay Juneja: Of human tragedy, that is, cancer outcomes, unfavorable and arguably unsuccessful. And the question has always been, how come we can't do better with all of the tech and science that we have? And that's such a good point. And that's what I like to highlight.

Dr. Sanjay Juneja: This aggregate of cases and individual journeys is never studied as a whole, because it cannot be. Because of interoperability: your record somewhere in Pennsylvania is going to be different than in Louisiana, different than in Texas. There are all kinds of legal and commercial incentives to keep it siloed. So it was important for me for people to understand: it's not a federated model; everything is not pulled together and dissected.

However, now with these concepts of AI harmonization and AI normalizing data, which means, hey, can we bring a Rosetta Stone and take all of these different dialects and languages and make them one interpretable universal language to start really getting some scientific insights? The answer is yes. And that's what's so exciting, because now you have machine learning models, a fancy term for exactly what it says.

With machine learning and deep learning, you don't have to tell me the why. Just tell me the what. I don't know why, but somebody with a mutation at 76941 just seems to not respond to this first-line therapy nearly as much, or the response is much shorter. So it can uncover things that can, at the very least, prompt further scientific study and exploration in the lab. One of the things that I get excited about is

Dr. Sanjay Juneja: As an example, Move 37. I believe it was in the 20-teens, don't quote me. It was when Google's AlphaGo was playing the game of Go against the best grandmaster. And as I understand it, I don't know how to play Go, there are more outcomes in a game than there are stars in the galaxy, which makes you really not want to ever play Go or learn it. But it's taught generationally. And all of these grandmasters were in the room.

And on Move 37, AlphaGo made a move where you heard an audible gasp on the TV. It's like, ooh, wow, AlphaGo messed up, that makes no effing sense. Turns out AlphaGo went on to win, because it did something that we as humans, with everything we're taught traditionally and generationally, just couldn't conceive of. And that is exactly an example of how we may be able to get insights that we just don't have, because there is something to be said about convention and the way things are handed down in how we learn and have always learned. It's really hard to unsee things, if that makes sense.
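An editorial aside: the "just tell me the what" pattern-finding described a moment earlier can be sketched in a few lines. The cohort below is entirely invented for illustration, not a real dataset or model; the point is only the shape of the insight, a response gap surfaced by tallying, worth escalating to formal study.

```python
# Toy illustration (invented numbers, no real dataset): surface a response
# gap by mutation status without knowing *why* the gap exists.
from statistics import mean

# (has_mutation, months_on_first_line_therapy)
cohort = [
    (True, 3), (True, 4), (True, 2), (True, 5),
    (False, 11), (False, 9), (False, 14), (False, 10),
]

def response_gap(records):
    """Mean months on therapy for mutated vs. wild-type patients."""
    mutated = mean(m for has_mut, m in records if has_mut)
    wild_type = mean(m for has_mut, m in records if not has_mut)
    return mutated, wild_type

mutated, wild_type = response_gap(cohort)
# A large gap is the "what" that prompts lab follow-up on the "why".
print(f"mutated: {mutated:.1f} mo, wild-type: {wild_type:.1f} mo")
```

Real models do this over thousands of harmonized records and far richer features, but the output is the same kind of hypothesis generator.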

Demystifying AI for Clinicians: How Language Models Really Work

Dr. Amar Rewari: So it's interesting, because you guys are both so positive about this. And obviously, Sanjay, it makes sense for you to be positive; you're literally living in this space full time. I'm going to be a little bit of a devil's advocate on this podcast, I think, because there's this concept of garbage in, garbage out. So yes, on the one hand, there's this amazing potential for AI. But it all depends on what these language models are being trained on. There are inherent biases. Some of the studies don't look at ethnic minorities and only examine certain patient populations. And so the conclusions that get drawn are not necessarily universal if applied incorrectly.

So there's definitely caution, but I do think it's exciting. And I'm curious, if we can step back to the very beginning, just for our listeners: I obviously use ChatGPT and some other AI tools, and I know Anthony does as well. But for the clinicians out there who are really just trying to first learn about this and maybe are just messing around with ChatGPT, what do you feel clinicians don't really understand about how AI actually works, and what are some of their biggest fears? Maybe break it down for them.

Dr. Sanjay Juneja: Yeah. Let me preface by saying, I don't know that I'm fully bullish on AI. And I'm being candid here for a second. I would argue that outside of medicine, I worry more about AI than I get excited about it. I mean socially and everything like that. I think we're ill-prepared for the amount of change and transformation it's going to bring in so many aspects of human life. But with that said, it is what it is. And that's where I'm like, okay, what can we use it for, for good, for some period of time, and ideally forever?

But with that said, the biggest thing to start with is understanding that AI is not a pre-programmed thing. So if I go into ChatGPT and I ask it the same question 10 times, or go to 10 different models, I'm going to get different answers every time. These neural networks that you hear about are a real-time cascade of events, the same way that I could swear I'm dropping a Plinko chip into the board exactly the same way, I don't know if some of our listeners are too young to know what that is, but the chip lands differently every time. It's similar. It goes back to the LLM, the large language model behind ChatGPT and Bard and Claude and all these other things.

Dr. Sanjay Juneja: It just uses pattern recognition, which is so wild, which means it's just seeing the pattern of letters and words and making sense and reason of what that means and going back to all of the things that is known and learned well and been trained on finding those same patterns and putting it contextually relevant to me that sounds one vulnerable and fallible but at the same time it also makes you think like wow that's pretty insane like the fact that it can be so accurate. so is it every now and then could it not perform as well sure are there things that you can do to make it not perform as well or better for that matter absolutely because that's what call that's this whole concept of a prompt engineering like I want to say, exclude anything 2019 and before if I'm asking for something in the last four or five years, etc..

Dr. Sanjay Juneja: That, as a whole, is AI: it's able to simply crunch so many things so much faster than any one of us can click, hit Control-T, open a tab, and all of these other things. And it can do the work for us. And I think that's what people need to realize. It's going to take me a good bit of work to find the literature on how rad onc disagrees with med onc, which happens in a certain setting, oligometastatic disease. Is it three mets that you can treat in prostate cancer, or is it fewer? And we have conflicting studies. But I can have it say: yo, take the contrarian view as a med onc, take the view as a rad onc, go ahead and debate, and give me the 10 biggest things that we're going to argue and pick apart in each other's studies. And it happens in front of you. And that's wild.
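The Plinko analogy maps onto how a language model picks each next token: it samples from a probability distribution rather than always taking the single top choice, which is why the same prompt yields different answers run to run. A minimal sketch with made-up token scores, not any real model's API:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Softmax-sample one token. Sampling, rather than a fixed lookup,
    is what makes output vary run to run (the Plinko chip)."""
    scaled = [score / temperature for score in logits.values()]
    peak = max(scaled)
    weights = [math.exp(s - peak) for s in scaled]  # stable softmax numerators
    return rng.choices(list(logits), weights=weights, k=1)[0]

# Invented scores for three candidate next words:
logits = {"chemotherapy": 2.1, "immunotherapy": 1.9, "surgery": 0.3}

# Ten draws from the same distribution rarely agree ten times:
print([sample_next_token(logits) for _ in range(10)])

# Very low temperature collapses toward the single most likely token:
print([sample_next_token(logits, temperature=0.05) for _ in range(5)])
```

Hosted models layer system prompts, retrieval, and ongoing updates on top of this, but the sampling step alone already explains the nondeterminism clinicians notice.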

Dr. Anthony Paravati: So you're getting into one of the criticisms of AI in medicine: that you could enter the exact same clinical history and get different guidance from the AI. So it is essentially not deterministic, or I suppose not deterministic enough; well, it's either deterministic or not. But I wonder if over time that's actually going to be a strength of AI in medicine, because let's face it, aside from alive or dead, or pregnant or not, or, I don't know, you have COVID or not, most conditions in medicine are not one or zero, right? They are gradations of different states.

And so is it just, by way of the brute force of these capabilities, as you were intimating, that we can analyze incredible amounts of data? Will that fact by its very nature, as well as, obviously, the skill with which models are designed, lead us directionally to the right place over time? Maybe not as fast as we want, but we'll get there.

Dr. Sanjay Juneja: Yeah, I think so. I think, especially with this whole concept of agentic workflows, or having agents, you can really get creative. I think this guy, I believe his name is [Could not verify with context], and forgive me if I'm missing the first name or saying it incorrectly, basically built this pretty wild setup where, with intention, he has

Dr. Sanjay Juneja: Five different specialists in oncology: med onc, surg onc, rad onc, and a generalist, and he has them debate the pros and cons and everything. That's the same concept of agents. He's trying to take one layer of an actual debate that happens in the background, not necessarily in front of you, and he wants it in front of you. So he surfaces exactly what they're trained on comprehensively, general internal medicine, the rad onc literature, med onc, surg onc, and he limits them, or guardrails them, to knowing just those really well.

And then he has them all debate, and then he has another agent on top of it that gives you a summary as a patient advocate. So you can get very creative in these ways. And then on top of that, closed models generally hallucinate far less than open, continuous-learning models. Continuous-learning models are going to lose it a little bit more, because they're being fed back and the process is a lot more complex. But if you have something that's trained and is quote-unquote closed, meaning you've shut off its need to reconfigure, metaphorically, itself and its pathways and what it has access to, then those can and will hallucinate less.
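The agent stack described here, guardrailed specialists plus a patient-advocate summarizer, can be sketched structurally. Every function below is a stub standing in for a specialty-trained model call; the names and opinions are invented, and nothing calls a real LLM:

```python
# Structural sketch of the multi-agent "surfaced debate" pattern.
# Each "agent" is a plain function standing in for a guardrailed model call.

def make_specialist(specialty, stance):
    """Build a stub agent limited to one specialty's point of view."""
    def agent(case):
        return f"[{specialty}] For {case}: {stance}"
    return agent

specialists = [
    make_specialist("med onc", "systemic therapy first"),
    make_specialist("rad onc", "consider definitive radiation"),
    make_specialist("surg onc", "resection if operable"),
    make_specialist("generalist", "weigh comorbidities and goals of care"),
]

def advocate_summary(case):
    """Top-level agent: collect every specialist's take, then summarize
    the debate for the patient."""
    opinions = [agent(case) for agent in specialists]
    summary = "Patient summary: " + "; ".join(o.split("] ", 1)[1] for o in opinions)
    return opinions, summary

opinions, summary = advocate_summary("oligometastatic prostate cancer")
print(summary)
```

In a real system each stub would be a separate model call with its own restricted training or retrieval scope; the orchestration shape is the point here.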

And to your point, Amar, data is a problem even in regular studies, in formal traditional studies. We're like, man, if only that cohort had more Indians, for example, or more females, or whatever.

Dr. Sanjay Juneja: The beauty of being able to really have all of your data elements tagged, so to speak, or earmarked, is that you can now manipulate, and not manipulate in a bad way, but filter out those things and apply these learning models to crunch it. Okay, but what happened with the African patients? Just like a subgroup analysis, or these analyses that happen afterward.

Dr. Sanjay Juneja: And then what you can do is piecemeal data. And that's where I get very excited. Take real-world evidence, or real-world data, across 10 or 20 different institutions; take all the Indian patients from those, and take all the EGFR-positive lung cancer Indian patients from those, and just start to really see how they are performing compared to the usual norm, if I can be redundant, compared to the average cohort of generally Caucasian patients, or whoever was studied. So I think it also offers some latitude where otherwise we're taught to say, we don't have data to support that, we don't know, we didn't look at it. We had to say that because we didn't have anyone crunching the other things. We can actually have some assessment now, in more tailored fashions.
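Once records are harmonized into one schema, the subgroup slicing described here is ordinary filtering. A toy sketch with invented pooled records (sites, fields, and outcomes are all made up for illustration):

```python
# Invented, already-harmonized records pooled across sites.
records = [
    {"site": "A", "ethnicity": "Indian", "egfr_positive": True, "responded": True},
    {"site": "B", "ethnicity": "Indian", "egfr_positive": True, "responded": True},
    {"site": "B", "ethnicity": "Indian", "egfr_positive": False, "responded": False},
    {"site": "C", "ethnicity": "Caucasian", "egfr_positive": True, "responded": True},
    {"site": "A", "ethnicity": "Caucasian", "egfr_positive": True, "responded": False},
    {"site": "C", "ethnicity": "Caucasian", "egfr_positive": False, "responded": False},
]

def response_rate(rows):
    """Fraction of rows marked as responders."""
    return sum(r["responded"] for r in rows) / len(rows)

# The subgroup question: EGFR-positive Indian patients vs. the full cohort.
subgroup = [r for r in records
            if r["ethnicity"] == "Indian" and r["egfr_positive"]]
print(f"subgroup: {response_rate(subgroup):.0%} vs overall: {response_rate(records):.0%}")
```

The hard part in practice is the harmonization that makes the fields comparable across institutions, not the filtering itself.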

Dr. Amar Rewari: No, that's a good point. Because like you said, the studies themselves, the traditional way of looking at things, also had bias baked in and didn't look at large populations. So that's very true. And what you brought up about that person, it reminded me, it's almost like a virtual tumor board happening in real time with AI. A tumor board, for our listeners, is where the medical oncologist, radiologist, pathologist, and radiation oncologist all get together and debate the literature about specific cases. So it's kind of like watching a tumor board happen live and having AI trained on that, which is an area where, I'd say, AI is not very good right now, but that is the new frontier.

From AI Scribes to Voice-Based Diagnostics: What's Real in AI Today

And I think one of the things we want to accomplish with this episode is to talk a little bit about where AI is now: what's real versus what's hype in medicine, and particularly in oncology, since we're all oncologists, but in medicine in general.

Dr. Amar Rewari: Personally, I feel one of the areas where it's doing very well is documentation. When we see a patient for a consultation, we can have AI scribes capturing all of that, so we don't have to waste our time on documentation, which obviously affects physician burnout and every aspect of not being able to see enough patients with quality time. So hospital systems are investing in this. So that's where we're at now. But I'm curious, where else do you think we're already successful with AI?

Dr. Sanjay Juneja: It's happening so fast that depending on when you air this episode, what I might call hype may actually be real. That's how crazy this is. I'll tell you, one of the things that really blew my mind was a few weeks, maybe a couple of months, ago now at NextMed Health, Daniel Kraft's conference. There was a gentleman who wanted to democratize screening for diabetes in India. He said it's just impossible to do preventative care in India; there are just too many patients for the clinics, so it's reactive and you take the sickest people. So he asked, how can I make that happen? And he realized: everyone has a phone. So what he did was apply a bunch of machine learning models to the frequency of your voice at a normal glucose.

Dr. Sanjay Juneja: And when your glucose goes over 200 or if it's below 120 and then between 120 and 200, apparently because of what happens with hyperglycemia and this metabolic disruption in your acidotic-alkalotic drive, which goes straight to your brain and changes your respiratory drive, the conduction of that respiration and the change in it affects your vocal cords. And in a subaudible manner that you and I can't appreciate, but it can tell and detect a difference in frequency called shrills or thrills, which means higher or lower, and it's different for men and women. And it's already hitting like 90 plus percent at knowing if you're hyperglycing it or not.

Dr. Sanjay Juneja: Process that for a second that this thing is on par with an oral glucose test in pregnancy and even poc gluconators. I thought they were 100 or not, so this thing is rivaling those by your voice. So that's one example of it's not hype, it's reality and it's just wild because that's going to think about how it disrupts the industry then on even just on the economics of glucometer machines.

I'll tell you, with pathology: we would get, and we still do, these pathology slides, we stain them, that's what histopathology is, and we see what receptors the tumor has and doesn't, and so on. But it's very conventional thinking right now to say you need molecular sequencing, you need what's called PCR, a lab technique, and it's expensive. These models that are coming out are able to predict

Dr. Sanjay Juneja: The otherwise molecular stuff. So EGFR-positive lung cancer, HER2-positive breast cancer, they're able to identify just from the slide itself. In small cell, a model was able to tell a certain molecular feature of small cell just by noticing that in 89% of the cases, the disease was contralateral, which means on opposite sides, or touching the pleura, which is the lining around the lungs, along with the gradient of the avidity on the PET scan, meaning the uptake that happens with sugar metabolism, where it gets brighter. It was starting to appreciate where the drop-off was and where the brightness was greatest, and from that to say what the molecular features were.

Dr. Sanjay Juneja: It's just absolutely bananas how it's transforming all of these tools that are relatively new to us. I know we started using sequencing only in the last 10 years regularly, I would say. When the first indication for immune, well, sorry, it's been longer than that, but immune therapy for 10 years. And it can predict immune therapy responses. They're models looking at mammograms and not able to tell you if this cancer, that's child's play. It's able to predict whether or not you're going to respond to neoadjuvant therapy. Again, the treatment before cancer based on glorified x-rays.

And how? Because you just took all of these mammograms, you took all of the people that got the same regimens, you took all the responses and non-responses, and you asked: can you figure out anything about this mammogram that suggests a response? And think about it, it can look at such a high level, beyond the naked eye, that it's able to do so. These things are, week to week, blowing our minds with their capacities, and that's why I prefaced with the hype versus real, because it's just happening so fast.

Systemic Hurdles: Data Privacy, Reimbursement, and AI Adoption

Dr. Anthony Paravati: I wish we had a way to look into Amar's brain and see the needle, whether it's on the negative side or the positive side, or shifting as you're talking. Like the election-night probability-of-victory needle: it's over here, we're good, you go to bed, and then...

Dr. Anthony Paravati: So I just wanted to mention, Amar and I spend a lot of time on this show, it's one of the main points of this show, explaining how the U.S. health care system works, how it's financed, how the whole apparatus works or doesn't work, depending on where you exist as a stakeholder in the system. One of the things that is really going to be perhaps the bottleneck is shifting the way we develop codes, and value those codes, to represent the value of all these new innovations. Because the traditional methodologies that we have, and I'm not such an expert on this, though we've had many guests on to talk about it, are really ill-suited for a lot of what these technologies are going to do for patients and for providers.

And I just wanted to say one other thing just about the data, and I was talking about data democratization before, is I really think we want to think long and hard about going almost to an extreme in terms of ensuring data protections for patients.

Dr. Anthony Paravati: Because if we do that, and as a country, I'm talking about the United States right now, if we go to that extreme, I think we're going to see a degree of buy-in from patients to give up their data, to participate in these studies, that I think it sounds counterintuitive, but correct me, Sanjay, if you think I'm wrong here, but I do think there's a real reluctance on the part of patients because they're like, well, who's going to use my data? And so if we can get that right. Even though it seems like an obstacle to get through, it may be an unlock to really allow us to actually then accelerate innovation more. Just a thought I've been wondering about, and I actually wanted to ask you about this during our recording.

Dr. Amar Rewari: And Sanjay, if you don't mind my interjecting before you answer that question, since there's so much to unpack and I don't want to lose the train of thought here. On the codes, to answer your question, Anthony: the way medical procedures are currently paid, as people understand it, is based on physician work, which includes the time and intensity of the service, as well as practice expense, which also includes time and cost. And the great thing about AI is that it reduces time. But if your valuations are based on time, then inherently there's a conflict there. So there has to be a new way to value things. So that's one thing.

And then, Sanjay, to your point about some of these innovations in pathology and radiology driving even treatment decisions: one of the tests I order nowadays for patients with prostate cancer helps us decide whether we're going to give them hormone treatment or not, based on some of these new AI-driven tests. But the catch with it, and I think another bottleneck, is that this all still has to be validated, right? So you take these tests that are using AI, or looking at mammograms, and they still have to be validated against data sets with known outcomes. And that takes time. So that's something that could slow things down as well. So back to you, Sanjay. I didn't mean to jump in there.
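To make the time-based valuation conflict concrete: Medicare payment for a service is roughly total relative value units (RVUs) times a conversion factor, with the work RVU built partly from physician time. The RVU figures below are illustrative, not actual code values, and geographic adjustments are omitted.

```python
def fee_schedule_payment(work_rvu, pe_rvu, mp_rvu, conversion_factor):
    """Simplified fee-schedule payment: total RVUs times the conversion
    factor (geographic practice cost indices omitted for clarity)."""
    return (work_rvu + pe_rvu + mp_rvu) * conversion_factor

# Illustrative service whose work RVU partly reflects physician time:
before = fee_schedule_payment(work_rvu=3.0, pe_rvu=2.0, mp_rvu=0.2,
                              conversion_factor=33.0)

# If an AI scribe cuts the time component and the work RVU shrinks with it,
# payment falls even though the care delivered is the same or better:
after = fee_schedule_payment(work_rvu=1.8, pe_rvu=2.0, mp_rvu=0.2,
                             conversion_factor=33.0)
print(f"before: ${before:.2f}, after: ${after:.2f}")
```

That arithmetic is the conflict in one line: under a time-built valuation, efficiency gains mechanically lower payment, which is why new valuation methods come up in the discussion.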

Dr. Sanjay Juneja: No, I fully agree. And I think it's going to really challenge the way we think about doing things. And I don't know what the right answer is, but it's like, do you validate retrospectively, on a bunch of the population data that you have, and just see if it's hitting its metrics, or do you need a more formal prospective study? And I think it's an interesting conversation.

My good friend Debra Patt, she's the EVP of Texas Oncology and co-president, often says, on this same point about needing to be very careful and regulated, especially where patient health is concerned: we also need to keep in mind not letting perfect be the enemy of good. What she means by that, as I take it, is that if we're underperforming at 40 to 50 percent and something is at 85 percent, it's very hard for us as physicians to be happy with 85 percent. We want 95, 99, these confidence intervals, these P values; we're just used to knowing and wanting what we see. But the question is, do we just let it continue at 45, 50 percent if there's a true gap in the delivery of care, or in the accuracy for that matter?

I mean, think about the response rates for first-line therapies that aren't targeted, or in adjuvant settings, meaning to reduce the chance of a cancer coming back after the definitive surgery is done.

Dr. Sanjay Juneja: They're not great percentages. When I say you're going to lose your hair in breast cancer and you're going to have all this stuff happen, and it's like eight or nine months. Oh, so how much do I reduce my chance? I'm like, it's not 90 percent. It's not even anywhere near there. So that's where I think it's this kind of weighing of risk and benefit against what the status quo is today.

And to that same point, and to your point, I was at HLTH a week or so ago, and there was a creator that I actually had on my podcast, she's a geneticist and she's pregnant, and there was someone else with her who was in the early stages of pre-menopause, and they were like, I can't wait for AI to hopefully surface things in pregnancy and menopause, because we all know that there's a paucity of data, because of the way times have been forever, to be able to hopefully elaborate on things.

Dr. Sanjay Juneja: And one had an Apple Watch and an Oura ring, and one didn't at all. And I just needed to make the point, I'm like, look, yeah, you could be excited about these algorithms and all this studying that's happening from the data on an Oura ring, for example, if it's able to tell what your heart rate does overnight at different stages of REM, blah, blah, blah, and whether those can suggest that there's a premenopause thing happening due to the temperature of your body going up, et cetera, et cetera. I'm like, one of you has data, one of you doesn't. And she's like, what do you mean? I'm like, because when those algorithms happen, if you're starting your data collection today, you don't have this very long, longstanding, heterogeneous amount of data to improve the accuracy and applicability of the algorithms that are coming. And she just totally flips. She's like, oh my gosh, I've got to get mine on now. I'm like, do what you want to do, but it's hard for any of us to benefit from the insights of the aggregate data without having our own. And I think that concept is different and new, but I think if...

Dr. Sanjay Juneja: Having your data assessed against AI-analyzed cohorts of data can possibly give you a better prognostic idea of response, of tolerance, of drug toxicity, of all the things that we fear. I do think there's going to be this cultural shift, a relaxation, because I hope it will demonstrate value in a personal and helpful way that people will be eager to find, which only happens with sharing, and also lead to a better scientific understanding.

I have three boys. I have some bad genes from a crazy Plinko-chip of history, of events that happened the last couple of years, and I would love for them to also have guidance when they grow up, to have these things that are more atypical, because my mom's from far east India, my dad's from the west, and I have this weird mix, and we won't have those included unless I'm sharing, for their benefit one day, and for other people that are the hybrid that I am. So these are ways to think about it that I think will soften a lot compared to how we've thought about it in the past.

Dr. Anthony Paravati: Absolutely. All excellent points. And by the way, going back for a moment to code development, I don't want to give the listeners the idea that organized medicine or the AMA is asleep at the switch on all this. They are not. They have developed a digital medicine coding committee that is developing a tentative new category of codes for exactly the kind of things we're talking about. It's been called, I think, clinically meaningful algorithmic analysis, for a lot of the things Amar was talking about, even a test that he's ordering already for patients. So they're working on it. And over time, I think they'll dial that in. But the whole march forward of innovation may move more quickly than they do.

Dr. Sanjay Juneja: Yeah, the CEO of the AMA is a good friend of mine, [Could not verify with context], and I think he's putting together this kind of digital AI exploration effort, I don't want to call it a system, I don't know what it is, but it has four different pillars of, how do you actually apply this to the institutions? How do you vet it? And yes, I think there is serious pressure, just knowing the value these things can bring, to have some kind of system in place, because even time lost from figuring that process out is still time lost on people dealing with illness today. And I think more than ever, there's this pressure of, I want to get it right. I know how good I'm being told it is, and I want to bring that into the world. And it just kind of puts a match under things, so to speak. I'll just leave it at that.

AI and the Doctor-Patient Relationship: Can a Machine Be Empathetic?

Dr. Amar Rewari: I think one of the things that is important to bring up is the concept of the doctor-patient relationship, and how patients really trust their doctors. And when they do polling, doctors are one of the few professions that still poll very highly with high levels of trust. And the worry, one of the thoughts is if patients feel that the messages and communications they're getting from their doctors, whether it's through communicating in MyChart or their electronic medical record, are all being generated by AI and not necessarily even being reviewed by a doctor, do we lose some of that trust?

The counter to that argument, I think, is this real concept of physician burnout. How effective a conversation can you have at the end of a busy day, after making tons of calls and doing a lot of other things? Can AI be more effective and potentially more empathetic? And I know there's been some analysis of this, where they've looked and seen that AI can actually be more empathetic than physicians in communicating with patients, particularly because of burnout. So there's this dichotomy here, and it's really hard to resolve, but it all comes down to trust and truth. So I'd love to hear your thoughts on that.

Dr. Sanjay Juneja: It's tricky. And we actually talked about this a good bit at HLTH and at [Could not verify with context]. To me, some of the issue in the dialogue is really defining what empathy is and what trust is, and how they differ, or are they the same thing, et cetera. But to your point, there is emerging evidence suggesting that people find these AI models more empathetic than their physicians.

I will argue on the other side that LinkedIn is sometimes intolerable, as well as one of my favorite platforms, because half of the stuff is AI coming from people and half isn't. And I personally do not write my LinkedIn posts with AI. It doesn't mean that the AI posts aren't valuable, but it's just not my company. So I think, number one, there's going to be this kind of delineation between those that do and don't, and to what degree. I think that's just part of that originality of what it is to be human beings. But when you really dissect this empathy thing, one example comes to mind, and we've all been there.

Dr. Sanjay Juneja: Patients tell you that they do not like Dr. So-and-so. Are they good? Are you sure? Whatever. And I still remember, six years into my career, I would be shocked sometimes. I'd be like, what? They're the nicest, most caring physician. And so it showed me that I know, and I have the people that I choose to refer to, especially with rad onc stuff, the ones that I know are just good, solid. I don't know how they're appearing, or what stressful circumstances or short time or what kind of guard they need for their own emotional health in the room. But in the moment, I can tell my patient, dude, they're solid. They're worrying about you. They're texting me at 7 p.m. about you and changing the script. And I'm sure y'all are the same kind of radiation oncologists. And all of a sudden, they're placated.

So now the question is, I know that is an empathetic position. I don't know how it's necessarily being shown in the room. But I think some of that same stuff sprinkles into this conversation.

Dr. Sanjay Juneja: One gentleman at HLTH, maybe he was being contrarian by intention for the sake of the panel, but he was saying, I do not believe trust can be earned on social media. He's like, trust has to be a back and forth. It has to be bi-directional. In which case, you could see why AI may have a better shot. Because if I have 10 to 15 minutes, and they have 45 to 90 minutes, unlimited, at any time of the day, if you believe that dialogue and exchange and being heard is the sole necessity for trust, sure.

But I can tell you that I educate a ton on social media, and I had a very long list of second-opinion requests, people traveling from different states and everything. And I try to tell them, look, if they're at an MD Anderson or one of these other places, I'm like, I don't know, it's not worth the time. I don't necessarily have a superior treatment or superior protocol than a major academic center. But they're like, but please, I just trust your opinion.

Dr. Sanjay Juneja: And that was not bi-directional, because a lot of my stuff is educational, but it is relatable and understandable. And I think I'm an empathetic person, so maybe some of that's communicated. But I think a lot of it has to do with the fact that I can speak in car metaphors. All of us have our favorite things, fishing metaphors, I say that because I'm in the South. I can't speak in cooking metaphors. But guess what can? AI. If I said, talk to me at a seventh-grade reading level and only use cooking analogies for how chemotherapy and radiation work, it will nail it.

And it will do it with an avatar from something like Sora 2 or [Could not verify with context], which looks so realistic, like the Sam Altman stealing-stuff-in-the-grocery-store video. It's going to be that realistic of an individual, with that same degree of cooking analogy. So one could see how that would be challenging to outperform, so to speak.

Dr. Anthony Paravati: Sanjay, you're blowing my mind a little bit here, especially with the nuances of things. Because on one hand, when you were starting out, you mentioned communication, and how, as you said about LinkedIn, AI has commoditized writing, and you can tell. The majority of people are using it, and for those who can still write well, in a very poignant, straightforward way, with that authentic human touch, you can just see it. It's so much better.

And this has implications for all sorts of things. For example, in the travel industry, with curation of where you should go to eat or experiences you should have in a city, the AI stuff is going to be rubbish, without really taking you there, and the stuff written by people is going to take you there. But then, on the other hand, there's what you were just talking about, the expression of compassion and empathy, of having unlimited time. It's ultimately a human's responsibility, isn't it? To think thoughtfully about where it has to be us and only us, and where the machines can help us.

Dr. Sanjay Juneja: Exactly.

Dr. Sanjay Juneja: And Reid Hoffman and, oh my gosh, I forgot his co-author's name, in their recent book, he has a chapter where he basically says, we always ask what could go wrong, or how these things could come to be at odds. But then he also says, we need to remember to ask the question, what can go right? And I think that's important too, because certainly, at least by today's standards in the system, it would be a real challenge to out-compassion or out-empathize within the time constraints. But at the same time, I think those things will actually soften and loosen as well.

And I think that patients are more informed than ever, and more educated. So a lot of that explanation stuff that can narrowly miss the mark can actually be understood, so that when you're having your interaction, the element that makes us different than AI is that we can put a value, a quotient, on the degree of suffering. What degree of suffering is entailed in this? And that could mean missing things with your children. It could mean coming in once a week for myeloma shots rather than once a month. But there's this appreciation for what, human to human, is worth the squeeze or not worth the squeeze. What are you looking for? And when I say suffering, I don't mean suffering in a horrific way, but that evaluation, be it small or large, is where we can give a lot of guidance, based on a lot of improved scientific understanding and options.

The Reality of AI in Daily Practice: Chart Summarization and Workflow Efficiency

Dr. Anthony Paravati: Absolutely. Absolutely. Hey, Amar, one thing I wanted to get into, and I wanted to ask your experience with this too, it's really for all three of us who are practicing and using AI all the time. You spoke a little bit about ambient AI before. AI for the clinician and the clinician's workflow is now also getting into summarizing the chart. You have, for example, a patient who maybe has had an eight-year history of breast cancer. First it wasn't metastatic, then it is, and they've been on multiple therapies. The ability to summarize all that and put it in front of the physician, powerful. I haven't seen it work really well yet. I'm curious if you guys have. And then I want to hear your opinion on workflow efficiencies, what AI is doing there. Because I think oncologists who might listen to us, or other physicians, are going to be interested in that practical piece. Where are we with these skills, or lack of skills, on the AI standpoint?

Dr. Amar Rewari: Yeah, for me, I don't find it to be that helpful, because some of it can just be a lot of noise. It's like everyone's used to seeing these notes where people just copy and paste everything from previous notes. You get somebody's note, and it's just 50 things copied and pasted. You don't know if they're being checked or edited properly, and trying to sift through that is so difficult. And some of this stuff that comes out almost feels similar, in that it can take just as much time to decipher what's in it. I'd have to take that and put it into ChatGPT again or something and be like, what's relevant here?

The other thing is that health systems, at least in my opinion, have been very slow to make change. And a lot of it is because of fear around HIPAA and data privacy. There are some great products out there that probably do this better, but the issue is, if they're not attached to any of the legacy big EMR systems, a lot of hospital systems will refuse to let their providers use them. And so you're stuck using something that may not be the best, because it's been vetted by your system. That's one of the things I feel like we're stuck in, that cycle. Sanjay?

Dr. Sanjay Juneja: Yeah, it's a challenge. Fortunately, with TEFCA and QHINs, basically the ability to improve the health information exchange system, I think people are looking into accessing those in a manner that is productive and resourceful. And I hope that it would ease the IT lift, because there's a big lift just from the IT department, financially as well as time-wise, labor-wise, and security-wise, to even implement something that may sit outside one of your legacy EHR models. But I do believe there's a lot of work on AI agents, believe it or not, trying to figure out how to make that process less of a lift and more affordable, cost- and time-wise.

Dr. Anthony Paravati: Yeah. There are two things to pick up on from what you guys just said. One limit for hospitals is who they employ internally on their information systems staff, the people who implement and bolt on these new solutions. That's a real bottleneck. A lot of those people have been recruited away by the payers, for example, or by multi-state, very large health systems. This is nobody's fault. I'm just naming places I know that recruit aggressively, like Kaiser, like the HCAs. Those are places that can put out money and recruit this talent away from your $5 billion health system in a pretty straightforward way. And then the smaller and medium-sized players just don't have that talent.

The other thing I wanted to mention too about a lot of health systems, again, small and medium-sized players, is that they can't afford to make multi-million dollar mistakes again and again. Hey, let's take a shot on this thing. The operating margins are really thin. So that breeds a kind of conservative mindset. Just my experience.

The New Digital Divide: Will AI Create Further Healthcare Inequity?

Dr. Sanjay Juneja: I mean, it does worry me. I got into AI with the hope of addressing inequities and gaps, and it worries me that there's going to be this whole new concept of inequity. Now it's not even color of skin that matters, or socioeconomic status, or academic medical center versus community setting, but this whole new question of exactly this: who can have these prognostic AI models that may or may not be able to guide you better on your cancer choices and treatments, and who can't? Is it because the institution can't afford it? Is it because of the size of the institution? Is it because of regulation and policy, if it's state to state, that some do and some don't? Is it because your insurance carrier doesn't cover the hospital system in your city that is using it like crazy? That's scary.

If these models get really good and they offer things that are significantly supplemental to the standard of care, the traditional stuff, that's a whole other type of inequity we're talking about. And you have to go to your insurance provider and say, you probably don't want me to sustain the cost of not having this AI model, because it reduces toxicity. And they'll say, okay, well, you'll get a pass on that.

Dr. Sanjay Juneja: There are going to be all these kinds of tectonic shifts, I think, that you all can speak to far better than I can, on how this adoption occurs and how we can try to keep this a democratized thing like anything else.

Who's Responsible? Unpacking the Liability of AI in Healthcare

Dr. Amar Rewari: I think one of the things in terms of adoption is also liability. And I don't mean to sound so pessimistic this episode. I really do like AI. But one of the other things I'm worried about is that we do need a lot of regulatory reform, because if hospital systems feel that they're liable for what's coming out of AI, then once again, there's always going to have to be oversight. And then, are you really creating any efficiencies if everything being generated by AI has to be double-checked by a physician because of fears of lawsuits? So there has to be some regulatory reform around this. When you've been at HLTH or some of these other meetings, has that come up, and what are people saying?

Dr. Sanjay Juneja: People love to talk about this, the responsibility in it. It's a good one. The way I thought of it a few weeks ago when I was asked this question, bear with me, is your laboratory machine. We all have these machines in our clinics that can approximate the chemistry, the potassium and renal function and glucose, as well as the CBC. At these bigger centers, and in our center, we do it in-house. So it's in a machine.

Now, we all know, and it's on our boards, the trick question where the glucose comes back from the lab at 600, and you treat it, you give insulin. We also know that that lab can be wrong. So there are all kinds of things in place. There's calibration to make sure the glucose is accurate, and the potassium, make sure that's accurate. Make sure the test tubes collecting the CBC aren't hemolyzing and making the potassium falsely high, and that the stuff is kept at the right temperature. So we kind of forget how many things are entailed in ensuring that even the data points we're using today are accurate.

Dr. Sanjay Juneja: Making somebody's potassium come down because we freak out because it's 6.5 and the lab is calling us. Turns out, forget it, that's not the right patient, which also happens. But just the machine itself. When you think about that, there are people who are responsible. At least in my clinic, physicians are not responsible for what that machine gives you. In no way am I responsible if the glucose or potassium is erroneous. Now, if I treat it, that's a different thing. And that's why we get tested on the boards. If I look at the history and notice that my three patients that morning are usually hypokalemic, or on that side, and all three are just mildly high-normal, that's when I call my lab and say, yo, is this calibrated? Because these patients are never hyperkalemic. They're not on anything that would cause it. What's going on?

So I try to keep that in mind, but somebody else has that responsibility. Somebody else has the responsibility to vet the machine, to ensure that it's technically correct, early on. These are the same things that happen with AI models. First, you have the AI company.

Dr. Sanjay Juneja: They have varying degrees of pride and discipline in how good their model is and what it has been validated on. Generalizability, meaning you can generalize it to your population from what it was trained on. Fitness, meaning a fit model that will continue to perform well and not drift. So they have to do all of these different things. That's one. They can bear the fault if something goes wrong technically. Then you have the person that's vetting it, your innovation officer or your CTO or whoever, who asks, okay, what was it tested against? What's the fitness?

Dr. Sanjay Juneja: And that's the difference, by the way, with MedGemma. Google came out with MedGemma, which is not a fully open model, but they've made it open in the sense that you can deploy MedGemma in your institution and start calibrating it on your scans and your data, to try to democratize elite AI for radiology. But the criticism is that it's not open in the sense that you can't see what it's trained on, you can't see what the population was, anything like that. So that's another fail-stop that usually occurs at an institution before adoption.

Then it's adopted. And now somebody is responsible to ensure there's no drift, to ensure that it's abstracting the stuff correctly, blah, blah, blah. Usually not a physician, but you need to have a team for that. And then lastly, the output is on the physician, I think, at the very least, if they act on it. It's the same question as with the calibration machine. So I don't think it's all that different from what's been suggested the last couple of weeks or months. I think it's extremely important, and I do think there are a lot of healthy discussions at conferences happening on exactly the points I mentioned, trying to address this. And it's one of those things where we've just got to figure it out as we go. There is no alternative. We've got to figure it out.

The Next Frontier: Predicting the 2- to 5-Year Future of AI in Oncology

Dr. Anthony Paravati: And Sanjay, we've covered so many things. I was just taking stock and looking at our notes of all the things we've talked about. We had this idea that we would hit you with a bunch of rapid-fire questions at the end, but honestly, you've answered a lot of them. What I really want to know personally, and Amar can chime in too, is if you could be so bold as to make a prediction: two years, five years, what are going to be the capabilities at those time points that we don't have now, that are going to move the needle? Let's keep it centered around oncology. We don't usually do this, but if you want to go somewhere else, go ahead. At two and five years, where is this road going to lead us?

Dr. Sanjay Juneja: So I say this a little bit tongue in cheek, but I think I mean it too. I don't know. I would have a little bit of doubt about the AI proficiency of someone who tells you what's going to happen at the five- or ten-year mark. And I say that humbly. Just look backwards at what's happened in the last three months. Here's an example: Sora, with Sam Altman stealing stuff in the grocery store, and all this Stephen Hawking stuff, it's just crazy where people's creativity goes. It is unimaginable what that is doing in video form compared to the AI of six months ago, where we were like, yeah, the video is good but not great. Remember the whole thing about six, seven fingers?

Dr. Sanjay Juneja: That seemed like it was years ago, but it wasn't. That's how quickly these things are happening. So it would be very challenging to predict what will happen in year one, year two, and then year three, year four. And I'm saying it so protractedly to give you an idea of, you're right, how would you know, given how far things have come. Now, with that said, in two years, oh my gosh, I think...

Dr. Sanjay Juneja: I think at the very least, let's start here. Our drugs are better because you can test against what's essentially a simulated mouse model. They're saying mice may go away, because the understanding of a mouse's biology is getting so good that you could actually run the simulation and not need the mouse at all. It's crazy. And the computation and permutations of potential targets and antibodies and proteins and all these things, it doesn't have to be cancer, it can be vaccines, it can be longevity, is so insane that the numbers are something like seven hours for what would take someone at a major academic institution two to three weeks to surface good things to pursue.

Then you put that in the hands of drug developers and scientific discovery stakeholders, and recruitment now takes dramatically less time, because you're able to identify patients based on those earmarked things I talked about so much faster than a person in a brick-and-mortar office, who often didn't have a medical subspecialty and had to figure this stuff out from a chart. It was insane. It was seven, eight years.

Dr. Sanjay Juneja: Two years ago, bench to bedside, meaning from when the drug was first under investigation to when it was given, was eight years. Dude, that's crazy. And now that's going to be shortened. And then, in clinic, I think there are going to be far shorter wait times than there used to be, because the optimization is crazy. GE Healthcare just shared at HLTH how they reduced the inpatient stay by a day, based on all kinds of complicated metrics using AI, which immediately saved The Queen's Medical Center in Hawaii $20 million and Duke $40 million, just by understanding the many complex things, like, okay, the specialist that got consulted inpatient doesn't come on average until the next day, and then you're waiting on the dispo, you're rounding once a day. It's just, do this. And it's fast. So that's a big saving.

And the third thing, and I think this is the biggest one, whatever number I'm on, is this concept of wearables. Right now, it's kind of insane that you get a snapshot once or twice a year. It's still a flash, one picture of your overall health that you're snapping and using to say how healthy this person is, a CBC, a CMP, a little physical. That's insane, because you have 365 days of data.

Dr. Sanjay Juneja: You're looking at stuff now where a wearable, like your watch, knows what your heart rate goes to from your car to inside the building, up the flight of stairs, to when you sit at your desk. It knows what you do on average, just like Google Maps knows to the minute how long it takes you to get somewhere. It can tell that something is up.

Dr. Sanjay Juneja: Far sooner than when you're in respiratory failure and needing oxygen. Far sooner than I can as a doctor, when I'm like, hey, you usually get this many words out in a breath, now you're only getting out this many before having to take another. And they're like, oh, funny, I didn't notice that. But I can tell, I have the brain to know that you're getting fewer words per breath.

Dr. Sanjay Juneja: What do you think a wearable can do? It can be so far ahead of flu causing pneumonitis and COVID causing pneumonitis. Instead of us pummeling these expensive antibodies because somebody's sat is 82 percent, dude, you're going to be able to gather that on that same walk, with a flag, with an alert that just says something's up. And now you're prompted to go into the building and get this testing.

And they're so good. There's something called TemShield that tracks your temperature. It's able to ascertain, based on the elevations in your temperature and when they occur during REM sleep and activity, blah, blah, whether it's a virus versus a bacteria versus something else, just from the fluctuations of your temperature and the spike patterns. So anything that you could imagine that involves patterns, or relationships, which, by the way, most of science and medicine is, if there are relationships or certain truths, otherwise known as a science, there's probably something that can unpack it. And when we say that, we mean there are patterns or truths, and those relationships can finally be studied to a degree that they've never been able to be before. That is what we're looking at in the next two to five years.

Dr. Amar Rewari: Wow. Sanjay, yeah, so I'm going to summarize that. I agree with everything you said about the research, the aspects of maybe replacing mouse models in drug development, and the stuff around logistics with waiting rooms and decreasing wait times and inpatient stays. On the wearables, obviously there's a big push in this administration toward wearables under RFK Jr. One of my friends actually has a startup, and he's already looking at doing this. It might not even be two years from now. Let's say patients who have PTSD or ADHD wear their wearable at home, and if it notices that they're getting a panic attack, like if they were watching Saving Private Ryan on TV, it would lower the volume, decrease the temperature, change the ambient light, everything like that, to allow them to function normally in society. So this is not even two years from now. These things are happening now. I think this is all so exciting.

Dr. Anthony Paravati: Listen, guys, whenever I'm down in the dumps about whatever's going on, I'm going to call the both of you, get on the phone, and just talk about this stuff, because there's so much positivity here about the future. It's become a cliché that we're in a kind of second or third industrial revolution that's really changing the world. And, to add another cliché to the conversation, the sky's the limit.

Dr. Sanjay Juneja: Yeah, but it's true. It's an exciting time to live in, et cetera. But again, some of it's anxiety-driven. You just can't keep up with what's happening month to month, and you just wish you could close your eyes for a second and get a glimpse, because it's almost inconceivable.

I actually had Bertalan Meskó on, he's the medical futurist, he's got like 350,000 followers on LinkedIn. I think Canada just contracted him to help with their system and implementing these things. And he has a good book, and I'm not just giving him a pitch, it's like your mind map to the future or something. And the whole point is, how can you best prepare for an unknown future? His main point is, you don't try to predict the future. He's like, what you do is try to conceive every possible...

Dr. Sanjay Juneja: Iteration of the future what could happen what couldn't happen what will go what won't go what is imaginable and then have a plan for all of it and that is this whole futurist thinking and it's an interesting exercise. And this is what these entrepreneurs like your friend, this innovator that that found this way to reduce the surrounding area and be it anxiety and they're talking about things now that to your point about add it's like somehow they're able to tell with the aperture of your eye and these other things of when you're spacing out because abson seizures sometimes are dangerous for kids and they're out.

It's just crazy. You can hear, you can communicate with someone in the room with an earpiece and have something to your jaw and just be mouthing or orating exactly what you're not phonetically actually putting audibly. And just when the mastication, it knows what words you're saying and then says it in the earpiece so that you don't even have to say things out loud to communicate. I'm sure the military already has this. Help is not an issue in front of me saying that, but it's just wild.

Dr. Amar Rewari: Well, thank you, Sanjay, for coming on. And for our listeners, please check out his podcast, [Could not verify with context]. And we'd be excited to have you on again to talk more, because this conversation could go on forever.

Dr. Anthony Paravati: It'll be a whole different conversation. It sounds like you're putting out great stuff on LinkedIn too. So give them a follow over there as well. But thank you again, Sanjay.

Dr. Sanjay Juneja: Thank you all so much, and thank you to everyone listening.

Credits

Value Health Voices is produced by Podcast Studio X.
