PUBLISHED DATE: 2025-08-12 00:02:36

VIDEO TRANSCRIPT:

SPEAKER: Jan Willem

Thank you very much and good morning. Pleased to have you all here today. My name is Jan Willem. I'm a client partner at Publicis Sapient. I work very closely with Tim, and I'm responsible for our go-to-market on cloud, data and AI. Today we'll be talking about agentic AI, a big topic, but first let me introduce our guests. Tim Mason, managing director at Deutsche Bank, head of innovation, and for many years now the driving force behind AI at Deutsche. James, CIO at Nationwide, and actually the one who works with innovation to bring it back to the business and to implement, as Ron just described, the changes that AI can have on business processes and the organization. Let me start by talking about agentic AI before we kick off the questions. Agentic AI is the topic we all have right now; it's the one word everybody uses whenever AI comes up. And of course we know that agentic AI is not the only flavor out there. AI is a much broader field, and one of the recent incarnations of how to deploy it is with agents, hence agentic AI. Today we're going to talk about how that impacts the organization and the opportunities you have with AI, how it is maybe even changing the business case, as Ron mentioned, around legacy modernization, and then talk about the ethics and how to implement it. But let me kick off first. Tim, what's your definition of an agent, of agentic AI?

SPEAKER: Tim Mason

Thanks, Jan. I think if you listen to the press, agents solve world peace and hunger, I think, is the latest thing. They seem to be everywhere and do everything. But if you really think about what we're talking about with an AI agent, they range from information agents, getting information, synthesizing it and putting it together. We used to think of that as just a simple chat, but behind the scenes it's now a lot more complicated in the way agents will break a question down and get information. Examples like Deep Research, which OpenAI has released; Google has got its own version, and so on. These things now start to reason. They come back and say, this is how I'm going to answer your question, and then they go and get the information. We call that an information-type agent. But the real opportunity starts where these things start to make decisions for you, where they start to recommend what should be done in certain situations or even take the action. And if you think about what that could be doing for banking, where you're dealing with huge amounts of information and trying to synthesize it into something, anywhere from making a decision about opening an account, to KYC, to fraud, to all kinds of things, AI agents have an opportunity to actually make those decisions and move the human to a layer above. But that's also where the risk starts coming in. So we think about AI agents on a spectrum from the augment side to the automate side: can they actually do the job of a person, can they replicate what a person could do? When we're talking about AI agents, that's really what we're talking about.
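As an illustrative sketch of that spectrum, the toy pipeline below moves from gathering and synthesizing information, to recommending an action, to acting only once a human approves. Every function, model call and case name here is a hypothetical placeholder, not anything Deutsche Bank actually runs.

    # Illustrative sketch only: one "agent" moving along the spectrum Tim describes,
    # from gathering information, to recommending, to acting with human approval.
    from dataclasses import dataclass

    @dataclass
    class Recommendation:
        summary: str          # synthesized information (the "information agent" part)
        proposed_action: str  # what the agent thinks should be done (the "decision" part)

    def gather_information(case_id: str) -> str:
        # Placeholder for retrieval and synthesis, e.g. pulling documents for a KYC case.
        return f"Synthesized facts for case {case_id}"

    def recommend(summary: str) -> Recommendation:
        # Placeholder for a model call that turns the summary into a recommendation.
        return Recommendation(summary=summary, proposed_action="open account")

    def execute(action: str) -> None:
        # The "automate" end of the spectrum: actually taking the action.
        print(f"Executing: {action}")

    def run_agent(case_id: str, human_approves) -> None:
        summary = gather_information(case_id)   # augment: inform the human
        rec = recommend(summary)                # decide: propose an action
        if human_approves(rec):                 # the human stays a layer above
            execute(rec.proposed_action)        # automate: act only on approval
        else:
            print("Escalated back to a human case handler")

    # The human-in-the-loop check is just a callback in this sketch.
    run_agent("KYC-001", human_approves=lambda rec: rec.proposed_action == "open account")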

SPEAKER: Jan Willem

Thank you. James, a question for you. Where do you see the opportunities of AI, and in particular agentic AI?

SPEAKER: James

So at Nationwide, we see the opportunity mainly as the next step in process optimization, very much with a human in the loop. So we're probably not going to the extent of full automation, but using it as a co-pilot based on the personas within our operations. That can range, within technology, from technology delivery and how we optimize that work, right through to case management and how we enable our colleagues to serve our customers better.

SPEAKER: Jan Willem

Excellent. And Tim, anything from your side where you'd say, well, two years ago when we started looking at AI, we had certain use cases in mind, and right now, actually, you have more in mind that you can do?

SPEAKER: Tim Mason

We have, I think like every organization, a huge amount of opportunity around AI and AI agents. Certainly it's the big topic for us this year. I see opportunities anywhere from, as I said, operations and all the information that comes in the door, complaints management, case management, through to technology, which you mentioned, and how we deal with a lot of the technology decisions that have to be made as well. So we don't see it as one opportunity; we see it as many opportunities. The question is how you then build for scale.

SPEAKER: Jan Willem

Excellent. And just to paraphrase what I hear in many of these discussions as well: agentic AI opens up more complex tasks to be done, but it's also more complex to deploy and to build. So it's really a question of whether it's the right tool for what you're trying to solve; it's not always the requirement. There's one particular area that Ron mentioned, the cost of legacy in the banks. It's the stone that holds everything down, and it's hard from a financial perspective to make the business case, even though technology-wise and operationally you need to change it. From your perspective, is legacy modernization a topic for agentic AI?

SPEAKER: Tim Mason

Definitely. I think when we started looking at legacy modernization, it's not always a natural conversation to have about AI; you might wonder how those two things actually come together. But it's a huge problem for most banks, moving things to cloud. It starts with the first conversation that asks: what does the old system do? How is it actually built? Does anybody still understand what the COBOL mainframe used to do? If we look at one of the tasks we've got, a BA has to write a document saying what the system does, and then actually refactor that into something new. AI is now very good at understanding that, making sense of it and turning it into something. But where the agentic behavior comes in is the chain of events. You can start to say: understand the old system for me, step number one. Now you've done that, turn it into an architecture diagram. Now you've done that, give me the entity diagram. Now you've done that, put it into a new piece of code. Still with a human in the loop. You're chaining together a whole set of activities to actually start doing legacy modernization. So it's something we're looking at. I think there's huge opportunity in that if we can deploy it in the right way.
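A minimal sketch of the kind of chain Tim describes, with each step's output feeding the next and a human checkpoint between steps. The step functions and the review hook are hypothetical placeholders; a real pipeline would call code-analysis and code-generation models at each stage.

    # Sketch of a chained legacy-modernization pipeline with a human in the loop.
    # All functions below are placeholders standing in for model-backed agents.

    def understand_legacy(source: str) -> str:
        return f"Plain-language description of: {source[:40]}..."

    def to_architecture_diagram(description: str) -> str:
        return f"Architecture diagram derived from: {description}"

    def to_entity_diagram(description: str) -> str:
        return f"Entity diagram derived from: {description}"

    def generate_new_code(description: str) -> str:
        return f"// refactored implementation based on: {description}"

    def human_review(artifact: str) -> str:
        # Human in the loop: a real pipeline would pause here for sign-off.
        print(f"Reviewing: {artifact}")
        return artifact

    def modernize(cobol_source: str) -> str:
        description = human_review(understand_legacy(cobol_source))  # step 1: understand
        human_review(to_architecture_diagram(description))           # step 2: architecture
        human_review(to_entity_diagram(description))                 # step 3: entities
        return human_review(generate_new_code(description))          # step 4: new code

    print(modernize("IDENTIFICATION DIVISION. PROGRAM-ID. LEDGER. ..."))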

SPEAKER: Jan Willem

Thank you. James, from your perspective as a CIO, with these real systems?

SPEAKER: James

Yeah, so there's a lot of focus around that discovery phase. When we come into modernization, there isn't necessarily the capability or the knowledge of what has been built and how it works in the detail that we'd like. So we find we're focusing on that end of the life cycle, where we might be spending 80% of our people's time just trying to fully understand how things work today. If we can flip that round and shift the focus from understanding how it is today to how we want it to be tomorrow, we think there's a huge opportunity there. One of the other things we're looking at really carefully is the models that underpin each of those agents. So where we use GitHub, as one of the tools we have, that's very good at building new code, but that requires a different model to what we might need for an agent at the beginning of the process, where we're really identifying how it works today.
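As a sketch of that point about matching models to agents, the configuration below routes a discovery agent and a code-generation agent to different underlying models. The model names, roles and the run function are hypothetical placeholders, not Nationwide's actual tooling or any vendor's product.

    # Sketch: different agents in the lifecycle backed by different models.
    AGENT_MODELS = {
        # Discovery end of the lifecycle: understanding how things work today.
        "discovery": {"model": "long-context-analysis-model", "temperature": 0.0},
        # Build end of the lifecycle: generating new code from a target design.
        "code_generation": {"model": "code-generation-model", "temperature": 0.2},
    }

    def run_agent(role: str, task: str) -> str:
        config = AGENT_MODELS[role]
        # Placeholder for the actual model call; returns what would be dispatched.
        return f"[{config['model']}] handling {role} task: {task}"

    print(run_agent("discovery", "explain the payments batch job"))
    print(run_agent("code_generation", "reimplement the batch job as a service"))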

SPEAKER: Jan Willem

Excellent. I think what we also see is that it's always a trade-off between the benefits of the AI and the risks that come into play. You want it to be more autonomous, more self-learning, able to decide and add value, but at the same time you want to monitor it. You want to know that what it did was right. You want to make sure it doesn't go off in directions it shouldn't be considering, and you may even want to limit what goes into the models. So it's really that trade-off that's important. Tim, from your experience of scaling this at Deutsche Bank, what are some of the considerations to really implement AI and move forward with it?

SPEAKER: Tim Mason

I think a lot of people talk about making sure that the data is right. That's still important, even when it's unstructured data: where does it come from? When you look at tools like Deep Research, where an organization like ours produces lots of analyst reports, for example, those tools are fantastic at doing that, but what are its information sources? Where does it get them from? You need to be really, really clear about that. I think the biggest thing we're thinking about is the control framework for AI. Banks are super at control; it's the big thing that slows everything down. You try to get anything done, the number of functions involved, what was it, 93 decision points to get a data product in at Lloyds? I don't know what the number is at Deutsche Bank, but it won't be that far different. Those are things that slow a lot down, but we need to replicate that for the AI world. We still need the human in the loop, because the regulator is still going to ask us, but we need to flip from just saying deploy AI to asking, how do I put the control framework in place around it so I can deploy AI very, very fast? It's still incredibly hard to get good AI use cases into production. I think that's a challenge; I've certainly talked to the other banks, and everybody has the same challenge. It's still a challenge to make sure that the decisions are absolutely right. I suspect people are probably using AI when they don't need to. If you're looking for a deterministic answer, where the answer is yes or no, don't use AI. If it's a probabilistic answer, what color is the sky today? Is it blue? Blue-ish? If you can deal with that level of uncertainty, yes, use AI. But the big thing for us