PUBLISHED DATE: 2026-05-12 04:23:12
VIDEO TRANSCRIPT
HOST:
Hi all, thank you so much for being here. Very excited for today's webinar. I'm joined by two exceptional speakers: Tim Laws, the Senior Vice President and Global Health Lead at Publicis Sapient, and Ashima Gupta, the Global Director of Healthcare Strategy and Solutions at Google Cloud. Thank you both very much for being here. In terms of today's webinar, as you all know, we will be looking at the art of the possible: redefining healthcare with agentic AI. The way we're going to run it today is we'll do some very brief introductions now, then a set of rapid-fire questions just to set the tone and the scene. Then we'll move on to the panel discussion, which I'm very excited for. At the end, there will be Q&A. As Nikita mentioned at the very beginning, please do submit your questions for the Q&A throughout the webinar, and at the end we'll have some closing remarks. I'm very excited to be joined by you both, Tim and Ashima. Shall we get on with the rapid fire?
TIM:
Sure.
ASHIMA:
Yeah, let's do it.
HOST:
Fabulous. Okay, so rapid-fire questions. There are a lot of misconceptions that teams still have when they start exploring agentic AI. From the experience of both of your organizations, what are the biggest misconceptions that teams still have about agentic AI?
TIM:
I guess I'll go first.
HOST:
Go for it.
TIM:
Yeah, so I mean, I think there are a few, but I'll just name one specific one, which is that when you're trying to build agentic AI for scale, there's actually a lot of engineering behind it. It's not just, like, a lot of people are used to ChatGPT and writing some prompts and things of that nature. But to build something that's robust and reliable, there are a lot of guardrails and things you need to put in place, and a lot of thinking you've got to put around that.
HOST:
Very interesting.
ASHIMA:
Yeah, building on what Tim said, the second one that I see from a business perspective is, as people are designing agentic AI, there's a belief that agents are meant to operate independently of clinicians. But it is on behalf of clinicians; we can't stress that enough. Agentic AI is about extending clinical capacity; it's not independent agents going and taking rogue actions. Of course, that means robust governance and data foundations, but agentic AI is assistive technology for clinicians, for nurses, for healthcare. I think we need to double down on that one.
HOST:
That's a beautiful way of phrasing it, "on behalf of." It's a very nice way of putting it. Okay, well, let's move on to the next one. What foundations should every organization put in place before experimenting with agentic AI? We'll start with Ashima for that one.
ASHIMA:
So I would say start with data maturity. High-quality, clean data is a must; that's the prerequisite. Second is appropriately permissioned data for the agents to act upon. So A, have the data and make sure it's clean; second, permissioned data for agents to act upon. And third, which is very important, is governance: defining clear policies on what agents can and cannot do. Then workflow clarity. How many people have actually mapped their workflows end to end and said where agentic actions can be introduced? And operational readiness. I would end with this: agentic AI is actually 20% model; 80% of it is the infrastructure, process, governance, and people. And people have it kind of flipped. There's so much obsession about the model (of course we obsess over the Gemini model as well), but the model alone is not sufficient for agentic AI. You need to think across the value chain.
HOST:
Very interesting. And Tim, how about you? What are your thoughts on the foundations organisations should put in place?
TIM:
Well, I agree with all of those. And Ashima was right: the agent is kind of the sexy part, people are excited about that, but you've got to do all the other stuff to really make it work. Everything she said is exactly on point. I would also add, though, and this is true of technology in general, not just AI: the biggest thing that keeps you from realizing value is people. So foundationally, I think you need to train people. She talked about standards and things of that nature, thinking about bias and ethics, but also what AI can be for people. I think you need to train, educate, and get people involved, so that they don't feel threatened by AI; they feel part of the effort.
HOST:
Maybe a big push for a lot of organisations across all industries for the next few years, that education piece. Maybe we'll drill down on that a little later on. What's the one thing that leaders should stop doing with AI and one thing they should start doing instead? Tim, would you mind taking that one?
TIM:
Sure. Well, I have a slightly controversial one, which is: stop doing POCs.
HOST:
Okay.
TIM:
I don't mean that literally, you do actually need to do POCs, but there are too many POCs and not enough commitment to taking them to the next level, to production. Because I think people are eventually going to say, we're just spending all this money on POCs and nothing's happening. So I think we need more commitment to take it all the way.
HOST:
Certainly a slightly controversial one.
ASHIMA:
Go ahead.
HOST:
How about you?
ASHIMA:
I was just going to build on Tim's point. The other one I would say is: stop thinking AI is a futuristic thing that's going to happen. It's here and now, and we need to embrace it, think about how we can responsibly and boldly embrace it. It's not some distant future; it's here and now. And on what to start doing instead, to build on Tim's point: don't think about just a pilot, think about the platform. Start designing an enterprise-wide agent fabric, a shared orchestration layer that multiple teams can build on. Now what does that layer mean? It means common guardrails, shared APIs and system access, the high-quality data we already talked about, but also unified monitoring and auditability, a reusable library of agent skills, and cross-functional oversight and governance. You should start thinking of this as a platform and a shared orchestration layer.
HOST:
Yeah, lovely way of putting it. So let's think about those leaders who are maybe not at the point where they think of AI as the now. What early signals should those leaders pay attention to when deciding whether an agentic AI use case is ready to move from concept to production? Tim, as you mentioned too many POCs earlier, maybe we'll start with you there.
TIM:
Did you say 10? Sorry, you cut out for a second.
HOST:
Yes, sorry, yeah.
TIM:
Yeah, I think, again, I go back to the people part: you want to look for the signals that it's something that is going to solve a problem that people will be happy is being solved. The other things I look at, and these are just to have a little fun, are things that maybe feel a little magical, or that completely change the patient experience into something really beautiful. So really make something that people can get excited about, and that the people who are involved in it don't feel threatened by, but feel like, wow, this is going to do good things.
HOST:
How about you?
ASHIMA:
I would say, to a large degree, the way to measure that, and we've seen this with our customers where agentic AI has succeeded, is that the workflow is actually running smoother with the agents than without them. If that happens, you're there. And one signal of that: we often talk about human in the loop, but we need to be careful that human in the loop does not become the cop-out for making mistakes. Over time you will see when it's ready to move: those human interventions drop steadily, and they cluster around explainable edge cases. If every action escalates to a human, that means you haven't designed it correctly. Humans need to be in the loop, and they need to be in the loop for the high-impact edge cases where they should be. But also, the safety guardrails fire when they should and not when they should not. An early signal: we were working with a customer and they said, well, this is interesting, I'm still checking it, but I'm rarely correcting it. That's the shift: they are looking into it, they are observing, and those errors get fewer and fewer. And for the safety guardrails, it is actually escalating to a human when it should. I believe that's a good signal, that it is escalating at the right moment.
HOST:
That's another excellent perspective. In terms of practical steps, what practical steps could a team take, maybe not this quarter because we're pretty close to the end of it, but say next quarter, to build confidence in the safety and reliability of agentic AI systems? Tim?
TIM:
Well, there are a few things. I think Ashima might talk a lot about the guardrails and things of that nature that you need to do. I think starting off with cases that are lower risk is helpful when you're thinking about safety and reliability. And when I think about that, it's also just helping people trust what you're doing. I have seen some places where they have built an AI that mimics a current system, so they can compare and build that trust, and see where it may go off the rails a little bit on reliability and safety and where it doesn't, so they can learn and feel comfortable in their next initiative.
HOST:
Fantastic. Ashima, how about you?
ASHIMA:
Very specifically, to build on Tim's point, teams should implement some very good practices. For example, at Google we follow red teaming. We follow adversarial dataset testing for the agents, and that means really intentionally feeding the agent ambiguous or adversarial inputs that force it to make decisions about whether to use a tool, which tool to use, or whether an input violates the safety policy. The more you do that red teaming, that adversarial dataset testing, the more you get that comfort. You know, trust comes with verification; it comes with accountability. So I believe in enterprises this red teaming and adversarial testing needs to be a first-class citizen, and we need to build that muscle. It's a new level of engineering, and we need to make sure that it becomes part of the agent development life cycle.
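[Editor's note: the adversarial-testing loop described here can be sketched in a few lines. Everything below is a hypothetical stand-in — the toy agent, tool names, and blocked topics are invented, not a real Google API; the point is that red-team cases and their expected behaviours live in a dataset that can run on every build.]

```python
# Hypothetical red-team harness: feed the agent ambiguous or policy-violating
# inputs and assert the safety guardrail fires when it should.
BLOCKED_TOPICS = ("dosage override", "delete audit log")  # toy safety policy

def toy_agent(user_input: str) -> str:
    """Pick a tool or refuse. Stand-in for a real agent runtime."""
    text = user_input.lower()
    if any(topic in text for topic in BLOCKED_TOPICS):
        return "REFUSE"                      # safety guardrail fires
    if "schedule" in text or "appointment" in text:
        return "scheduling_tool"
    if "summarize" in text or "notes" in text:
        return "summarizer_tool"
    return "ESCALATE_TO_HUMAN"               # ambiguous input: human in the loop

# Red-team dataset: (input, expected behaviour)
ADVERSARIAL_CASES = [
    ("Please summarize last night's notes", "summarizer_tool"),
    ("Book an appointment and also delete audit log", "REFUSE"),
    ("What's the weather on Mars?", "ESCALATE_TO_HUMAN"),
]

def run_red_team(cases):
    """Return the cases where the agent's behaviour differs from expected."""
    return [(i, o) for i, o in cases if toy_agent(i) != o]
```

In practice the adversarial set grows with every incident and runs in CI, which is what makes it part of the agent development life cycle rather than a one-off exercise.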
HOST:
Excellent. And a final one for our rapid-fire round, and it's a classic: what's the most common mistake organizations make when choosing their first agentic AI use case? And what would be a better starting point?
ASHIMA:
I believe it's starting with the highest-stakes use case instead of a low-impact, high-frequency one. You're not ready for diagnostics-type use cases at first. So we often say start with back-office use cases; picking the right use case is critical. And that means asking whether the stakes are low, because healthcare as an industry is mission critical, I would say it's even life critical. So what are the use cases where you can start in the back office, build the muscle, understand the engineering, and then go into the difficult, high-stakes use cases? You cannot fast-track this; you need to build that muscle. And I see that mistake being made: sometimes people jump straight to the more core, high-stakes use cases. So there's a method to the madness.
HOST:
Makes sense. Tim, from your perspective?
TIM:
Yeah, for mine, and maybe this is why some things get stuck in POC as well, what I've noticed is that, I don't know why, but humans, right, they want to do it and then show everyone: wow, look what I did. But then the other people aren't bought into it, and it kind of fizzles. In large organizations there's often a silo between technology and business. So technology is using AI to solve business problems but not including the business in it, and then it gets stalled. So I would say bring both stakeholders, and there could be more than those two, into the creation and ideation of it, and then I think you have a much better chance of taking it all the way through.
HOST:
Makes perfect sense. And just before we move on from the rapid fire, there's a quick point I think we should clear up, because we're going to talk about it a lot. Tim, could you define for our audience what a POC is in agentic AI?
TIM:
Sure. Proof of concept. So when you're just trying to prove something out that it works. I've also seen POT, proof of technology, proving the technology can do it.
HOST:
Thanks so much. Very, very interesting rapid-fire answers there; thank you for your candor. Let's jump into the panel discussion. For the audience, we're going to go through three different questions, and Tim and Ashima will be walking and talking us through some very interesting areas of agentic AI. So to start us off, it's the what-are-we-seeing-now question: where is agentic AI already reshaping patient access and care delivery in measurable ways, and what early outcomes prove it's more than just hype? Ashima, could you start us on that one?
ASHIMA:
Yeah, I would begin by sharing some facts, right? There's a projected shortage of around 10 million healthcare workers by 2030; this is a WHO stat. And we simply cannot afford what clinicians spend today, around 28 hours a week, on administrative busywork. So where we're seeing traction is agentic AI helping with that administrative friction, and that's where you will see quantifiable results. It's also something that lends itself beautifully to the human-in-the-loop, clinician-in-the-loop or nurse-in-the-loop kind of use cases. One example: we were very fortunate to work with a major system like HCA Healthcare. They use an AI-powered nurse handoff agent to summarize patient context across nurse shifts, and in any given day they have around 60,000 handoffs. Even if you assume just five minutes of savings per handoff, and that's very conservative, you're talking about 300,000 high-quality nurse minutes daily going from documentation to care delivery. So that's where we are seeing it: the documentation burnout, against the backdrop of the shortage of nurses and clinicians. And to me, when you create capacity in the system, when the doctors and nurses are not doing the busywork, the documentation work, they have more time to have a preventative conversation with the patients. We need to create that capacity in the system. Another example is MEDITECH. They leverage Google's search and summarization, saving around seven and a half minutes per appointment with the physician. When you are dealing with a chronic condition, the patient notes go multiple pages long, and in that 10-to-15-minute doctor's meeting, being able to get a summary of that is invaluable. So we are seeing AI as an assistive technology in those settings, be it a hospital setting like HCA or primary care settings with MEDITECH.
TIM:
And I can build on that. Yeah, ambient listening in doctors' offices is such a no-brainer and such a game changer; I know a lot of physicians who've expressed how much they love that. And then, again, to Ashima's point about nurses, there are studies showing that when staffing goes down by even one nurse, the mortality rate goes up. So it's not even just time; it's life and death for patients, because nurses get distracted and patients can't get the attention they need. So some of the stuff Ashima talked about is really important. I saw another interesting one, because a friend of mine joined this company called Diligent Robotics, where they have robots that can move around and actually go and get things. So the nurses aren't even doing the fetching: can you go to the inventory and get this, can you bring this here? It saves a lot of time, and they're already in production at hospitals. I thought that was a really nice innovation around AI, helping with the nurse shortage. Oh, go ahead, you had something, Ashima?
ASHIMA:
I was going to just build on that point. Finish your thought and I'll jump in.
TIM:
Okay.
ASHIMA:
From the care delivery perspective, another example that came to mind is early detection. With the science we have today, it's clear that localized detection of breast cancer has a greater than 99% survival rate. Yet we know 20 to 30% of eligible women in the U.S. are still not up to date on their mammograms. So what gets in the way? It's not the screening itself; it's the logistics. We all know the drill: you need to figure out how to get a PCP approval, then find out which imaging lab is offering it, and book all the appointments. We will see agentic AI come in as that navigator, your personal concierge, helping you navigate the complexity of healthcare. And in fact, in October we announced our partnership with Color, where we're seeing this vision come to life. Their assistant is built on Gemini, grounded on American Cancer Society approved cancer guidelines, assisting with breast cancer and mammogram questions: should I get it done, where is the nearest clinic? And then it actually goes and books the appointment.
Navigation is an area where we are seeing a lot of traction in terms of agentic AI coming into care delivery and the patient-facing side.
TIM:
And that's perfect for what I want to talk about as well, which is what I'd call triage, but it's also very related to patient access. I'm sure we've all experienced it; there have been times I put off medical care because it's just too hard. What kind of doctor should I see? How do I get the doctor? Will they be accepting patients? Is their schedule going to work for me? Where are they located? Do they have parking? It seems like a simple thing: you need medical attention, just go and get it. But it's not simple, and it reduces access. And then there are a lot of people who have a much harder time accessing these things, especially in rural settings and so forth. So, I hate to say the word POC, but we did an initial one, and it's somewhat in production, and it was actually with Google. It was around that kind of triage: if someone has a medical condition, helping them figure out what kind of doctor they might see, and then which doctors are receiving patients and so forth. It goes to some extent today, but I can see in the future where it would go to the extent of basically scheduling you an appointment and making sure you can get there on time.
ASHIMA:
Yeah, and the Color assistant that I mentioned is not a pilot. It's in production, meaning these technologies are coming in; it went live, it is helping women as a scheduling and screening assistant, and we will see more innovation like that. As Tim pointed out, navigating health, navigating care is still very difficult, and that friction between prevention and the patient is where agentic AI can be the bridge.
HOST:
It's going to be a very exciting time for agentic AI. Thank you for your perspectives on that one. I do wonder, though: there are probably quite a few people in our audience who are further behind than many of the amazing projects you've just mentioned. I particularly love the breast cancer one, and also the idea of a robot designed just to collect things on your behalf; that's such a lovely use of it. But there'll be lots of people in the audience who are at a different stage in this journey. It'd be really interesting to hear, from both of your vast experience, what the highest-value, lowest-regret places for healthcare organisations to begin are, and what foundations need to be in place in order to scale safely. Tim, should we start with you for this one?
TIM:
Sure. I think first of all it's good to think bigger picture when you're thinking about this, which is: what is AI good at? There are various use cases you can do here, and some of the low-hanging fruit has actually been just developing platforms faster. Lots of these organizations have very fragmented systems and data, so you can modernize those much quicker using AI.
ASHIMA:
Mm-hmm.
TIM:
We actually did that for a claims mainframe that was supposed to take 10 years to move over; 10 years, 10,000 screens. In the mainframe you just couldn't get access to the information, you couldn't easily make changes, so you weren't going to be able to adapt to future needs. Using AI, we were able to reduce it to less than three years, so a three-times acceleration. I just want to put that out there because that's very low-hanging fruit that any organization can start to do, and I think it's really useful. But going back to some of the other things: high volume, high variability, with a lot of data. One of the things physicians have to do is keep up with research, so applications that help you summarize and keep up with that research are a really great place to start. A few other places: we talked about triage; I think that's a place we're seeing a lot.
ASHIMA:
Mm-hmm.
TIM:
I think patient experience is a huge place where you can really understand a patient much better and really cater and navigate, as Ashima was talking about. I'll talk about a couple of examples that we've done. There was one that was interesting, in the health insurance space. Every year for annual enrollment, people enroll for Medicare Advantage, and to check eligibility there are actually a lot of questions that have to be answered. So these organizations have to hire hundreds of people, just temporarily, to look over who's eligible and who's not. And it's rules, so there's something else AI can do well: you can feed AI the rules and it will stay within the rules.
ASHIMA:
Mm-hmm.
TIM:
And so we were able to build something rather quickly. In the old world it probably would have taken a couple of months; it took a couple of days for an agentic system to basically take in hundreds or thousands of applications and check if they're eligible or not. So that's just another quick hit: very easy to do, and it makes the process much faster as well, so a better experience for the people who are trying to enroll.
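[Editor's note: the rules-driven eligibility check Tim describes might be sketched as below. The rule set and application fields are invented for illustration; in the real system an agent would work from the published enrollment rules rather than hand-coded checks.]

```python
# Hypothetical sketch of the enrollment-eligibility pattern: express the
# rules as data, then check applications against them in bulk.
RULES = [
    ("age", lambda v: isinstance(v, int) and v >= 65, "must be 65 or older"),
    ("state", lambda v: v in {"CA", "TX", "FL"}, "plan not offered in state"),
    ("enrolled_part_a", lambda v: v is True, "requires Medicare Part A"),
]

def check_eligibility(application):
    """Return (eligible, reasons) for a single application."""
    reasons = [msg for field, ok, msg in RULES
               if not ok(application.get(field))]
    return (not reasons, reasons)

def triage(applications):
    """Bulk-check many applications, as the agentic system did for thousands."""
    return {app["id"]: check_eligibility(app) for app in applications}
```

Keeping the rules as data rather than code is what lets the same checker absorb next year's rule changes without a rewrite, which is the "feed AI the rules" point in miniature.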
HOST:
Amazing. How about you?
ASHIMA:
Yeah, I would add to that. I'll give you an example of our work with the American Society of Clinical Oncology, ASCO. When we were talking to Dr. Haddis and the team, they mentioned that they have around 50,000-plus ASCO members, oncologists, in their community, and the oncology guidelines continue to evolve; new research comes in and they continue to change. And for these tens of thousands of oncologists, the latest oncology guidelines are not easily available, right? They are buried in PDFs, and a PDF is around 70 to 80 pages long. So what we did for them, and this is a common use case by the way, because health systems also have policies, guidelines, and clinical protocols: the first step was ingesting all those guidelines, vectorizing them, and grounding Gemini on them, so that an oncologist, whether you're in Nebraska or California or Florida, can get access to the same guidelines through conversation. We launched that, I believe, in March this year, and it has been such a profound change for them in how they access the latest clinical guidelines. If I'm treating melanoma, I can ask: what is the first line of therapy for this? Or, what are the latest clinical trials? 
Things of that nature, which are very, very hard for oncologists to figure out, search for, and spend time on. This is where AI is acting like an oncologist's assistant, giving them the tools. And that pattern, by the way, repeats. Take the medical necessity guidelines for revenue cycle management: we work with IKS Health and they're doing a very similar thing. Tim gave the example of Medicare Advantage, but most payers have medical necessity guidelines for prior authorization prediction. If I'm a hospital system, I need to understand, for a given procedure and diagnosis, what the medical guidelines are for us to get paid for that claim. And it's the same pattern: get the guidelines, which are buried in documents, PDFs a thousand pages long. This is what AI is actually good at: it can ingest all those thousand pages, and we create what we call a vector representation, those embeddings, and now you can converse with them in a language that is much easier. This, to me, is what democratizing access to expertise looks like. In the case of ASCO, we're very proud, because it's a global solution, not just U.S.: whether you're in India or in the U.S., you have easy access to the same guidelines. To me, that is what democratization of health looks like. We all need to elevate expertise and make it easy for people to take care of patients.
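[Editor's note: the ingest-vectorize-ground pattern described here can be sketched as below. A real deployment would use a learned embedding model and a vector database; here a bag-of-words vector stands in for the embedding so the retrieval step runs end to end, and the guideline passages are invented.]

```python
# Minimal sketch of "ingest, vectorize, ground": index guideline passages,
# retrieve the most similar ones for a query, and (in a real system) place
# them in the model prompt so answers are grounded in the guidelines.
import math
from collections import Counter

def embed(text: str) -> Counter:
    return Counter(text.lower().split())     # stand-in for a learned embedding

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Guideline passages as they might look after PDF ingestion (illustrative)
GUIDELINES = [
    "first line therapy options for metastatic melanoma",
    "screening schedule for early detection of breast cancer",
    "prior authorization criteria for imaging procedures",
]
INDEX = [(p, embed(p)) for p in GUIDELINES]

def retrieve(query: str, k: int = 1):
    """Return the k passages most similar to the query; these would then be
    inserted into the prompt to ground the model's answer."""
    q = embed(query)
    ranked = sorted(INDEX, key=lambda pe: cosine(q, pe[1]), reverse=True)
    return [p for p, _ in ranked[:k]]
```

The same retrieval step works whether the corpus is oncology guidelines or a payer's medical necessity documents, which is why the pattern repeats across the examples above.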
HOST:
Yeah, absolutely. I mean, there's a huge imperative on everyone involved in healthcare to do so. Tim, do you have any final thoughts on that one?
TIM:
Yeah, I would say, because you're talking about where to get started and how to think about it, there's the elephant in the room: hallucinations, right, when it gets it wrong. I was in a class one time with a psychologist who said they shouldn't be calling it hallucination, it's not really hallucinating, but that's the term it's become. And it's interesting because, yes, there are hallucinations, but I was reading a paper recently about how, depending on how you design the system, you can really minimize them. This paper was doing the Towers of Hanoi, which is a reasoning problem, if you know it: there are three pegs, you have different-sized disks on one, and you've got to get them to another. When they just asked the agent to do its thing, it would hallucinate within about a hundred moves. But when they divided the task into very small subtasks, and then whenever one of these moves would happen they had voting agents, so the voting agents would vote, is this the right move or the wrong move, and this is the infrastructure I was talking about that you want to put in, when they did that, they were able to do over a million moves with no hallucinations. So how you design it and how you break it up is really important to making sure you get a result you can trust. I think that's one thing that's important, and then also, just to call out bias.
A shout-out to Dr. Shannon Blackman; I was with her the other day and she has a whole presentation on this. She was referring to, and I'm sure most of you are aware, that Amazon had used AI in hiring, and it was trained on the previous data of all their hiring, so it was very biased towards men. If your resume mentioned a women's college, you would get demoted. So you really have to check, because you can hurt groups of people. And we all know that when you talk about skin-imaging systems, they perform worse on darker skin. So really think about that, and that's why we talked in the rapid-fire session about how important it is to think about the ethics, the bias, the standards around how you do these things. You want to bring that into whatever you're doing as well.
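[Editor's note: the decompose-and-vote pattern Tim recalls can be sketched as follows. This is a loose illustration, not the paper's actual method: a random proposer stands in for the worker model, and an exact legality check stands in for each voting agent, so vetoed moves are never executed and the final state contains no illegal move.]

```python
# Sketch: break a long Towers of Hanoi task into single moves and have
# several "voting agents" approve each move before it is executed.
import random

def legal_move(pegs, src, dst):
    """A move is legal if the source peg is non-empty and the moved disk
    is smaller than the top of the destination peg."""
    return bool(pegs[src]) and (not pegs[dst] or pegs[src][-1] < pegs[dst][-1])

def propose_move(pegs):
    """Stand-in for the worker agent: propose any candidate move."""
    return random.choice([(s, d) for s in range(3) for d in range(3) if s != d])

def vote(pegs, move, n_voters=3):
    """Each voter independently checks the move; a majority must approve."""
    approvals = sum(1 for _ in range(n_voters) if legal_move(pegs, *move))
    return approvals > n_voters // 2

def solve(n_disks, max_steps=100_000):
    pegs = [list(range(n_disks, 0, -1)), [], []]
    for _ in range(max_steps):
        move = propose_move(pegs)
        if not vote(pegs, move):
            continue                         # vetoed moves are never executed
        s, d = move
        pegs[d].append(pegs[s].pop())
        if len(pegs[2]) == n_disks:
            return True                      # solved with no illegal move made
    return False
```

The design point is the veto: even an unreliable proposer cannot corrupt the state, because every candidate move passes through the voters first.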
HOST:
Ashima, how about you? Any final thoughts?
ASHIMA:
I would say the technology is getting better every day, right? But healthcare will move at the speed of trust, and trust is very hard to gain and very easy to break. The technology is improving every minute, and trust takes years to build. So that's why the guardrails, the evaluation, how you actually check for hallucination, those testing frameworks become the muscle. I often say this is a new level of engineering: it is better to get it right than to get it fast, if that makes sense. You can't sidestep those steps. From a Gemini perspective, from a Google perspective, we are building evaluation frameworks where you can actually have humans check the output, and with each cycle it gets better and better. And now, with a large model like Gemini, we have a 1-million-token context window. What that means is you can actually pass the entire context in the prompt; you can have the entire guidelines right there, just like in-context learning, with the model responding from that context. So technology will continue to evolve, it will continue to get better, but healthcare will move at the speed of trust. So be very thoughtful about how you're implementing these use cases: have the right guardrails, the right governance, understand the workflow, the process map. As we always say, it's 20% model, but 80% of it is the operationalization, the people and the process.
TIM:
Ashima, I love that: healthcare moves at the speed of trust. That's a great quote.
HOST:
I'd imagine that may weave its way into the healthcare agenda at some point. It's a really lovely way of phrasing it. Let's move on to our final question for the panel. What new partnership models would allow life sciences, payers, and providers to safely share data, co-develop, and scale agentic AI solutions that work across healthcare? Tim, I think we'll start with you for this one. What are your thoughts?
SPEAKER B:
Yeah, I was thinking about this. It's very interesting. There's obviously, there's a lot of partnering going on. And we partner with Google all the time, which is fantastic. I think also there's, I haven't seen as much of this, but, you know, just putting ideas out there into the ether. You know, I think there's data, data is like the lifeblood of what AI can do, right? So if you think about patients. and you think about if providers have all the data of what happens you know when they're kind of in the hospital and payers have the data when they're outside right their claims and so forth I think the combination those type of partnerships you could get a much deeper view of a person and through that you could learn things you could help support them better and help them navigate so I think that's one really interesting idea and then I had one other idea which was I was So these are just ideas, but just put this up there too, because I was reading, oh, you can't see it. Okay, it doesn't work. Anyway, it's called Patient Priority. I was reading this maybe a year ago, but I was thinking about it in the context of AI and it was talking about this clinic in Germany, I think it's called Martini Clinic, that is specialized in treatment of prostate cancer. And so what they did is they used specialization. completely built around the patient, right? So not built around like function functions within the hospital. And then very disciplined about the data and always tying everything to an outcome and they track the outcomes not from when they left the hospital, but for 10 years. Right. So very longitudinal data. So very crisp. And, you know, AI would have a field day with all that data, all that clean data which you need. And we don't have enough of those examples. And I think you could do that. 
You could have that kind of specialization, or disease areas where providers share across barriers and have a repository or registry that the AI can also draw on. You could really reduce the cost of that care and improve its efficacy. So that's my bold idea to put out there.
SPEAKER C:
And Ashma, how about you, from your perspective? What new partnership models are we likely to see?
SPEAKER A:
To piggyback on that, here's a key friction point in healthcare. I was reading, I believe there are some reports, that less than 3% of healthcare data is effectively used today due to system fragmentation and silos. So when we think about payers, providers, and pharma, we are creating silos and this data is not shareable. To me, this is the biggest impediment for agentic AI. I always say: API before AI. What that means is we need standards-based access to data, and, wearing my policy wonk hat, FHIR APIs for exchanging information are going to be key. To me, this is why Amy Gleason's work at CMS on the CMS interoperability pledge, which Google has signed, matters: being able to share patient data, or in the case of the tool that you're building, making sure there's an API. We are building tools for providers to be able to leverage standards-based data exchange, and those are the types of partnerships we need to see, where there's value creation on both sides. Healthcare has not traditionally been like that. If you ask who owns the data, is it the hospital? Is it the EHR vendor? Is it the patient? That needs to turn on its head. It needs to be consumer-mediated data exchange, where a patient has the right to their own data and can share it with their payer, provider, or a life sciences research company with their consent. I think that hasn't happened yet, and we cannot afford that. We have been talking about data interoperability for, god, 10 years, maybe longer. We cannot afford, as we go into AI, an AI interoperability problem: there's an agent, but it cannot access the patient data, it cannot access the claims data. For agentic AI to be that assistive technology for clinicians and for patients, that data liquidity is going to be the bedrock. And I believe the closed model in any new partnership
needs to be open. From Google's perspective, I can tell you we support Gemini, but we support 100 other models. Similarly, I think that mindset about ecosystems, about platform thinking, needs to come to healthcare: no lock-in. Lock-in will prevent progress, and as we head into AI, it will be even more pronounced.
SPEAKER B:
Yeah, so true. She hit the nail on the head right there. It is a problem even within one organization: the data
SPEAKER A:
Yeah.
SPEAKER B:
is there, but you can't get at it. And we're doing a lot of foundational work on that right now, which is a proof point. I think it's also important to point out that you don't have to expose all your data, right? You can expose it in different ways: you can expose the conclusions from it, or you can have things that take the data and just map the relationships, and that's what is shared. So it doesn't have to mean giving away all the data; there are a lot of different configurations that can make it palatable for people. But yeah, we need that interoperability that Ashma was talking about.
SPEAKER A:
CMS has done a good job here, to me. Just recently they've done the interoperability pledge: a basic set of data about the patient that should be exchangeable, with the consumer giving consent to share that data. To me, going back to the point, no AI without API, meaning an easy, standardized way to access the data so that an agent can do the good things. It can help me book my appointment, it can help me with medical literacy, or help me manage my care management plan. But if it can't access the data, we'll create agents which don't have the data, and so don't have the agency, to perform a task. The big shift from AI to agentic AI is from information to action. These agents will have actions to do: book my appointment, help me understand my care plan. And that action cannot be completed without the data. So again, I think as an industry, all of us have this kind of moral obligation to do right by patients and have consumer-mediated data exchange. You know, growing up in India, I had a doctor nearby, a family doctor who knew all about me and would always give me the notes. Now I compare and contrast: you go to a doctor and you see their back, they're busy entering data, and I don't know what's being entered. I think that bridge, where technology is actually invisible, and I believe agentic AI will get us there, keeps that kind of magic: you have the information, the physician knows about you, and that humanity and judgment will return if the documentation and the other burdens go away. So to me, that's the future we should all aspire to.
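As an illustration of the standards-based, "API before AI" access described above, here is a minimal sketch of building a FHIR search request and pulling patient names out of a FHIR searchset Bundle. The base URL, resource IDs, and data are hypothetical, not a real endpoint or product API:

```python
from urllib.parse import urlencode

def fhir_search_url(base: str, resource: str, params: dict) -> str:
    """Build a FHIR REST search URL, e.g. /Observation?patient=123."""
    return f"{base.rstrip('/')}/{resource}?{urlencode(params)}"

def patient_names(bundle: dict) -> list:
    """Extract display names from a FHIR searchset Bundle of Patient resources."""
    names = []
    for entry in bundle.get("entry", []):
        res = entry.get("resource", {})
        if res.get("resourceType") == "Patient":
            for name in res.get("name", []):
                given = " ".join(name.get("given", []))
                names.append(f"{given} {name.get('family', '')}".strip())
    return names

# Hypothetical query: vital-sign observations for one patient.
url = fhir_search_url("https://fhir.example.org/r4", "Observation",
                      {"patient": "123", "category": "vital-signs"})
print(url)  # https://fhir.example.org/r4/Observation?patient=123&category=vital-signs
```

The point of the sketch is that once data is exposed this way, any agent, large vendor or small, can book appointments or summarize records through the same standard interface.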
SPEAKER B:
It's a really beautiful way of putting it. And Tim, any final thoughts on this one before we move on to our Q&A?
SPEAKER C:
Uh, no, I think we covered this one. It's perfect.
SPEAKER B:
Good. Some really, really interesting bits there, and some lovely turns of phrase as well as some interesting stats. It's amazing that we are using less than 3% of that health data. That's both terrifying, but also quite exciting for the future. So we've had quite a few questions, thanks to our audience, and I'm going to start you off with one that I think is quite front and centre for a lot of people at the moment. Cybersecurity is of huge importance, with progressively more complex risks. AI obviously creates some risk, and there are the risks of hacking and threat actors. How have your organisations, and other organisations you've seen, built confidence for clinicians, patients and administrators that agents will be secure? Tim, can we start with you for that one, please?
SPEAKER C:
Yeah. I think I'd probably start with how we work with our clients: it's always what we call zero trust.
SPEAKER A:
Mm-hmm.
SPEAKER C:
So you don't want data flying all over the place, right? You don't want data that you don't need to see. You want it to stay where it is, where it's secure. That is the foundation. Sometimes it feels a little more painful to do it that way, but it is really important, because at the end of the day the data does need to be protected. So that's probably one of the key things we focus on when we think about security and keeping your data safe.
SPEAKER A:
I would like to add two points. One is, as Tim said, because these agents will act, they need an orchestration layer that's managing everything, kind of like your central nervous system. Which agent has what access? How am I securing that? That orchestration layer, I call it LLM ops, or you can call it agent ops, and we offer that as part of Gemini Enterprise, is going to be key, because as you're building agents this will get complex, and getting permission-aware responses and permission-aware agent execution is where the orchestration layer plays a critical part. The second is the whole concept of red teaming: trying to break the system. When we launched, we had a great team that did red teaming against the relevant datasets. That gives the confidence, and we offer that in security ops as well, from a Google security perspective. The agents will increase the attack surface area, right? Healthcare especially was built for patients coming into the facility. Now we're asking the mobile apps and the agents to have access to that information, and those are the things you need to secure: the perimeter, zero trust, and so on. We support proper compliance, security, SecOps, and these attacks will also get sophisticated. Attackers are going to use agentic AI too, so you need AI-enabled security operations; the two are going to go hand in hand. First, what data and systems the agents have access to, making sure they're permission-aware. Second, building the platform with the right guardrails, with security operations to see the attack vectors and guard against them. And third, designing the workflow around that. All of this comes together.
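To make the permission-aware orchestration idea concrete, here is a minimal sketch of a layer that gates each agent action against an allow-list of data scopes before it runs. Agent names, scope strings, and the policy table are all illustrative assumptions; real platforms such as Gemini Enterprise expose their own controls:

```python
# Illustrative policy table: which agent may touch which data scope.
AGENT_SCOPES = {
    "scheduling-agent": {"appointments:read", "appointments:write"},
    "summary-agent": {"records:read"},
}

def execute(agent: str, action: str, scope: str, task):
    """Run `task` only if `agent` is permitted the requested data scope."""
    allowed = AGENT_SCOPES.get(agent, set())
    if scope not in allowed:
        # Deny by default: an unknown agent or scope never executes.
        raise PermissionError(f"{agent} denied {scope} for action {action!r}")
    return task()

# A permitted call succeeds; the summary agent reading records is allowed.
print(execute("summary-agent", "summarize", "records:read", lambda: "summary ok"))
```

The design choice here is deny-by-default: the orchestration layer, not the agent itself, decides what executes, which is what keeps the attack surface manageable as more agents are added.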
SPEAKER B:
Makes perfect sense. So there's a really nice high-level question here that someone's put through, which may sometimes get lost in the weeds a little bit: how can an organisation tell when a problem is a good fit for agentic AI? What's a good methodology for looking at a problem and saying, we can solve that with agentic AI? Ashma, can we start with you for that one, please?
SPEAKER A:
That is a great question, by the way. Not every problem is an AI problem, and not every AI problem is an agentic AI problem. I would say any workflow that is multi-step and high frequency is where I would start, so understanding your workflow, your process map, is important. Oftentimes we see that a workflow which is high frequency, something you do many times, and very contained is the place to start, and then you go to the more complex, more nuanced cases, which is where you need human-in-the-loop design. But I can tell you, not many organizations have a well-understood workflow or process map. So start with that: understand the process end to end, and then figure out a task that is isolated enough and high frequency enough that you can put an agent on it. And then, of course, there's multi-agent. We are seeing multi-agent setups where one agent does the work and another agent verifies, so there's a whole methodology there. But if you can't even get one agent to work independently before going into multi-agent, you'll introduce a lot of complexity. So start small, with a high-frequency problem. Like the example I gave of the nurse handoff: they didn't start with all of the documentation in a given hospital that day. They went very specific, onto one use case. It was high frequency, 60,000 handoffs happen on a daily basis, and they said, you know what, we'll solve that. You will see those high-frequency tasks, like a handoff that takes 60 to 75 minutes, and if you save five minutes, that's value, that's ROI. So finding the right problems, high frequency with a lot of toil and administrative burden, can be your guide.
SPEAKER B:
That's it.
SPEAKER C:
Yeah, and taking that example, it goes back to what I was talking about: doing AI that feels magical or brings joy to the people who are going to interact with it. If a nurse gets that extra five minutes, that is something they're going to appreciate. To me those are much better places to start, where humans can say, oh wow, this makes my life better. The second thing I think you can look for is to solve problems that can't be solved without AI. This is a specific example, but I was on a panel a few weeks ago about rural health care, and there's such a dearth of financing for it that it is going to hit some type of cliff coming up. The consensus was that, outside of changing policy, this cannot be fixed without AI. So let's get creative, let's think about how that could happen. I think you said this, Henry, at the beginning: the art of the possible. What can you do that will really change the game, that can solve a problem that couldn't be solved before? I think that also makes it really exciting.
SPEAKER B:
Certainly. Well, it's not for me to weigh in at this point, so let's move on to another question. There's been a really interesting one, I think, that comes up not as often as it maybe should. We talk about security a lot, and we talked about some of the zero trust pieces earlier. Let's have a think about the regulatory challenges. What are the regulatory challenges, and I appreciate they will change from market to market, nation to nation, but what are the regulatory challenges in the future for the development of agentic AI in healthcare?
SPEAKER A:
I would say from Google's standpoint, we believe AI is too important not to regulate, and too important not to regulate well. So this is going to be critical from an AI regulation perspective. And I would say it's a different beast we're talking about, because this is a general purpose technology. Take an example: with something that is not general purpose AI or an LLM, you're defining a medical device. It serves a purpose, you test it for that purpose, and a regulatory framework for software as a medical device can exist. Now this model is general purpose. It can solve many things. So how do you put guardrails around it? How would you test it? To me, that's where the opportunity is: to start with even the AI principles, what the system will and will not do, and second, what the regulatory framework should be. I believe the FDA and other organizations are working through that, but it's too important not to regulate.
SPEAKER C:
So I agree with that, but taking a different angle on regulations: healthcare is highly regulated, and AI can actually help. Lots of times the regulations have slowed things down. This example is from pharmaceuticals, but when they create content, it has to go through what they call MLR review, medical, legal, and regulatory review, and so it can take a lot of time and a lot of extra cost to publish content. You can actually use AI to help understand the rules. It will help you create the guardrails and create content that can be much more easily approved, therefore making things easier, more cost effective, and better. So we have to regulate AI, but AI can help with healthcare regulations by speeding things up. It's really good at looking at rules and following them.
SPEAKER A:
Yeah, AI can be an assistant for regulatory reporting, filing the MLR. I think the question is how we should even regulate such an important technology. What principles do we need to put in place? Does it need to be state by state? Does it need to be at the federal level? Those are open questions and open dialogue today. If every state designs its own rules, versus a federal one that comes with common guardrails, you get variability, and you don't want variability, because when you are a tech partner, or Tim, when you're working on the services side, understanding the nuances when everyone implements regulation differently becomes very, very complex. It will become a fragmented mosaic of rules that is very hard to build solutions for. So we definitely hope there's a federal regulatory framework that we all can adhere to, versus 50 different sets of rules, which become very hard to manage the complexity of.
SPEAKER C:
Yeah, I would love that. That'd be great.
SPEAKER B:
So we've got time for, I think, two more relatively rapid ones, but I think this one is very pertinent to the people I see in the audience today. What are some examples of real use cases for healthcare that can be done by smaller entrepreneurs, not just large enterprises like Publicis Sapient or Google? What are the use cases you've seen for those smaller healthcare and health tech entrepreneurs in agentic AI? Tim, could you start us off on that one?
SPEAKER C:
It's interesting. I think a lot of the examples I gave actually can be done by anyone; the teams we had doing them weren't that large. So it's really about finding where value can be had and then building it. I gave a couple of examples, like the Medicare eligibility one; anyone could do that, it wasn't special to us. Another example we've done that anyone could do is one around competitive intelligence, which was basically helping you understand, if you have a brand in the market, whether you're a provider, a payer, pharma, whatever you are, how people are receiving it. You can have AI scanning the social boards and all that, asking: how are people receiving your message? Is it resonating? Are there gaps in what people think, maybe gaps in their understanding of what you're providing? You could create something like that, and any entrepreneur could. That's kind of the fun of AI: most of these things can be done whether you're very large or small. The only thing I would say is that you need data. Competitive intelligence is a nice one because the data is all out there in the public domain. If you're working for a provider, then you can figure out possible ways to look at their data; there are a lot of cases around doing more predictive, intervention-type work by looking at that data. So I think those are some examples people could try.
SPEAKER A:
I think, as Tim was saying, as foundation models and frontier models become more affordable and more accessible, this is democratizing access to building expert solutions. Whether you're a small company or a big company, these frontier models are democratizing that development. And one of the things we've seen small companies do really well is search and summarization. It's a very small use case, being able to search a medical record and summarize it, but it has many, many applications. It could be done in the primary care setting, it could serve the nurse handoff, it could be your prioritization or medical necessity check. So build that building block of search and summarization, grounding it in the right context; if you solve for that building block, you solve for many, many use cases. I would say, when you are thinking about building solutions, think in terms of platform thinking, not narrow POCs, because these building blocks will then serve many, many use cases. So get expertise in those areas.
SPEAKER B:
Makes perfect sense. We have just over two minutes left, so with about 45 seconds each: if our audience were to take away one thing from the last hour, which has been exceptional from both of you, what would you want them to take away about the future of agentic AI? What excites you both, and what does that future look like? Tim, could we start with you?
SPEAKER C:
Sure. A couple of things I would emphasize. One is that this is to serve people, so make sure that you include the people in your organization in it, so it's part of our solution and not just your solution. I think that's really important. And have fun; it's really cool, really magical, what it can do. And then another bold idea, because I was at the doctor's recently for a surgery, and they're like, well, we don't know the price of the anesthesiologist, we don't know the price of the facility fee, and maybe if you call this number, maybe you'll get it. So I'm just hopeful that somehow this can pull together a much more cohesive patient experience that doesn't put such a burden on the patient. I would love to see that. I'm excited about it.
SPEAKER B:
And Ashma, I suppose, for our final thought for the day.
SPEAKER A:
I would say this is a generational opportunity for us to reshape health care and make it more affordable and more accessible for everyone. All of us now have this very powerful tool, and it will come down to the healthcare community and what use cases we solve. In the past 10 to 15 years that I've been in healthcare, this is one of the most exciting times to be in tech, and also one of the most fulfilling and meaningful times to be at the intersection of health and tech. So I would say keep your motivation, keep that energy and inspiration. We have work to do.
SPEAKER B:
What energy and positivity. I can't think of a better way to sign off. Thank you both so much for your time, and to our excellent audience. I'm sorry we didn't get around to every question; there are in fact 12 unanswered questions, which I think might be a record, so a sign of how engaging that was. Thank you both, and we will speak to you all soon.
SPEAKER D:
Thank you to Publicis Sapient, Google Cloud, and everyone that joined us today. Please visit our website, hlth.com, to catch up on all HLTH webinars and watch the recording of today's session. Join us in Los Angeles, February 22nd to 25th, for ViVE.