Thank you very much and good morning. Pleased to have you all here today. My name is Jan Willem. I'm a client partner at Publicis Sapient. I work very closely with Tim, and I'm responsible for our go-to-market on cloud, data and AI. Today we'll be talking about agentic AI, a big topic, but first let me introduce you to our guests. Tim Mason, Managing Director at Deutsche Bank, Head of Innovation and for many years now the driving force behind AI at Deutsche. James, CIO at Nationwide, and actually the one who works with innovation to bring it back to the business and to implement, as Ron just described, the changes that AI can have on business processes and organization. Let me start by talking about agentic AI before we kick off the questions. Agentic AI is the topic we all have right now; it's the one term everybody uses whenever AI comes up. And of course we know that agentic AI is not the only flavor out there. AI is a much broader set of topics, and one of the recent incarnations of how to deploy it is with agents, hence agentic AI. Today we're going to talk about how that impacts the organization, the opportunities you have with AI, how it may even be changing the business case, as Ron mentioned, around legacy modernization, and then talk about the ethics and how to implement it. But let me kick off first. Tim, what's your definition of an agent, of agentic AI?
Thanks, Jan. If you listen to the press, agents solve world peace and hunger, I think is the latest thing. They seem to be everywhere and do everything. But if you really think about what we're talking about with an AI agent, they range from information agents: getting information, synthesizing it and putting it together. We used to think of that as a simple chat, but behind the scenes it's now a lot more complicated in the way agents will break down a question and get information. Examples like Deep Research, which has been released by OpenAI; Google has its own version, and so on. These things now start to reason. They come back and say, this is how I'm going to answer your question, and then go get the information. We call that an information-type agent. But the real opportunity starts where these things start to make decisions for you, where they start to recommend what should be done in certain situations, or even take the action. And if you think about what that could do for banking, where you're dealing with huge amounts of information and trying to synthesize it into something, anywhere from making a decision about opening an account, to KYC, to fraud, AI agents have an opportunity to actually make those decisions and move the human to a layer above that. But that's where the risk starts coming in. So we think about AI agents on a spectrum from augment to automate. When we talk about AI agents, we're really asking: can they replicate what a person could do?
Thank you. James, a question for you. Where do you see the opportunities of AI, in particular agentic AI?
So at Nationwide, we see the opportunity mainly as the next step in process optimization, very much with a human in the loop. So probably not going to the extent of full automation, but using it as a co-pilot based on the personas within our operations. That can range, within technology, from technology delivery and how we optimize that work, right through to case management and how we enable our colleagues to serve our customers better.
Excellent. And Tim, anything from your side where you'd say: two years ago when we started looking at AI, we had a certain use case in mind, and now, actually, you have more in mind that you can do?
We have, I think like every organization, huge amounts of opportunity around AI and AI agents. Certainly it's the big topic for us this year. The opportunities range anywhere from, as I said, operations and all the information that comes in the door, complaints management, the case management you mentioned, to technology and how we deal with a lot of the technology decisions that have to be made. So we don't see it as one opportunity; we see it as many opportunities. The question is how you then build for scale.
Excellent. And just to paraphrase what I hear in many of these discussions: agentic AI opens up more complex tasks for us to tackle, but it's also more complex to deploy and to build. So it's really a question of whether it's the right tool for what you're trying to solve; it's not always the requirement. There's one particular area that Ron mentioned: the cost of legacy in the banks. It's almost the stone that keeps everything down, and it's hard from a financial perspective to make the business case, even though technology-wise and operationally you need to change it. From your perspective, is legacy modernization a topic for agentic AI?
Definitely. When we started looking at legacy modernization, it wasn't a natural conversation about AI; you may wonder how those two things come together. But it's a huge problem for most banks moving things to cloud. It starts with the first conversation: what does the old system do? How is it actually built? Does anybody understand what the COBOL mainframe used to do anymore? If we look at one of the tasks we've got, a BA has to write a document saying what this system does, and then actually refactor that into something. AI is now very good at understanding that, making sense of it, and turning it into something. But where the agentic behavior comes in is the chain of events. You can start to say: understand the old system for me, step number one. Now you've done that, turn it into an architecture diagram. Now you've done that, give me the entity diagram. Now you've done that, put it into a new piece of code. Still with a human in the loop, but you're starting to chain together a whole set of activities to actually do legacy modernization. So it's something we're looking at, and I think there's huge opportunity in it if we can deploy it in the right way.
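The chain Tim describes, where each agent consumes the previous agent's output and a human reviews each artifact before the chain continues, could be sketched roughly like this. This is a minimal illustration, not anyone's actual implementation: `call_llm` and `human_approves` are hypothetical placeholders for a real model call and a real review step, and the step names are assumed for the example.

```python
def call_llm(task: str, context: str) -> str:
    """Placeholder for a model call; a real system would invoke an LLM API."""
    return f"[{task}] derived from: {context[:40]}"

def human_approves(artifact: str) -> bool:
    """Human-in-the-loop checkpoint; auto-approved here for the sketch."""
    return True

def modernization_chain(legacy_source: str) -> dict:
    # The chain of events Tim describes, as an ordered list of agent tasks.
    steps = [
        "describe_old_system",   # step 1: what does the COBOL actually do?
        "architecture_diagram",  # step 2: turn that into an architecture view
        "entity_diagram",        # step 3: extract the data entities
        "generate_new_code",     # step 4: refactor into the target stack
    ]
    artifacts, context = {}, legacy_source
    for step in steps:
        output = call_llm(step, context)
        if not human_approves(output):  # a failed review stops the chain
            break
        artifacts[step] = output
        context = output                # each agent consumes the prior output
    return artifacts

result = modernization_chain("IDENTIFICATION DIVISION. PROGRAM-ID. LEDGER.")
```

The design point is that each hop is reviewable: because every intermediate artifact (system description, diagrams, code) is stored and gated, the human stays in the loop without having to read the original COBOL.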
Thank you. James, from your perspective as a CIO, with the real systems?
Yeah, so there's a lot of focus on that discovery phase. When we come into modernization, the capability or the knowledge of what has been built, and how it works at the level of detail we'd like, isn't necessarily there. So we find ourselves focusing on that end of the life cycle, where we might be spending 80% of our people's time just trying to fully understand how things work today. If we can flip that round and shift the focus from understanding how it is today to how we want it to be tomorrow, we think there's a huge opportunity there. One of the other things we're looking really carefully at is the models that underpin each of those agents. For example, GitHub is among the tools we have, and that's very good at building new code. But that requires a different model from what we might need for an agent at the beginning of the process, where we're really identifying how it works today.
Excellent. That very much mirrors our experience: with legacy modernization, what AI can do is transformational. It's less about the AI SDLC, about building the code at the end; it's actually about understanding what you have today, the code-to-spec journey, and then making sure you don't replicate the sins of the past. You understand the past, but you build for the future. That's where the real acceleration comes in, and we see something like a three-times impact in terms of speeding up those processes with clients. Of course, with AI and agentic AI, the question is what ethical considerations we need to take into account. Anything from your side?
I think the ethics side of this is huge, and probably underrated in many ways. When you're going to have agentic AI start making decisions, the whole question is: is it a good decision? Is it biased? So often when we look at implementing systems, one of the first questions we ask is: do we understand what good looks like? Do we really know what a good decision from a human would be, and can we measure that? Because if you can't, how do you know the AI system is going to give you what you need? So from an ethics perspective, the first thing we worry about is bias. To be slightly controversial, I recently asked three popular models some questions. There was a Chinese one and two American ones, and the question was about Donald Trump: how many of his previous executive orders were implemented? They're all very good at giving you an answer, but test it. What's super interesting is that you immediately see the bias in the models. One of them, an American one, refused to answer. Another kept giving the same answer. The Chinese one gave me a slightly different answer. If you're going to base your agents on those, and you're going to start to use that LLM to help you make decisions, how do you know the level of bias that's got into it? So you need to be very, very clear about how you're going to test it. How do you understand what a good decision looks like? How do you measure a good decision? We trust our humans every day to make decisions because we can sit there and talk to them. But when that LLM is built by somebody else, how do you actually know it's right? That's the fundamental thing we worry about with AI agents: when you're moving from something that gives you a recommendation into taking action, into doing something, how do you know it's right?
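The cross-model test Tim describes could be sketched as a small harness that asks several models the same factual question and flags refusals and disagreement. Everything here is illustrative and assumed: `ask_model` stands in for real provider API calls, and the model names and canned answers are hypothetical, not the models or results from Tim's experiment.

```python
def ask_model(model: str, question: str) -> str:
    """Placeholder; a real harness would call each provider's API here."""
    canned = {
        "model_a": "REFUSED",  # one model declines to answer
        "model_b": "42",
        "model_c": "39",       # a different figure: possible bias or staleness
    }
    return canned[model]

def consistency_report(models: list, question: str) -> dict:
    # Ask every model the same question and compare the answers.
    answers = {m: ask_model(m, question) for m in models}
    refusals = [m for m, a in answers.items() if a == "REFUSED"]
    substantive = [a for a in answers.values() if a != "REFUSED"]
    # The models "agree" only if all non-refusing answers are identical.
    agree = len(substantive) > 0 and len(set(substantive)) == 1
    return {"answers": answers, "refusals": refusals, "agree": agree}

report = consistency_report(
    ["model_a", "model_b", "model_c"],
    "How many of the previous executive orders were implemented?",
)
```

Run regularly against a question set where the ground truth is known, a harness like this gives a crude but repeatable signal of the bias and refusal behavior Tim warns about before any of these models is wired into a decision-making agent.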