Hi, and welcome to our third episode of Impact TV. Impact TV, a Publicis Sapient and Constellation Research joint initiative, is a six-episode limited series focused on tech as a force for good and its positive impact on business and people. In today's episode, we'll dig into the topic of unleashing human potential in a data-driven AI world. We'll explore how data and AI are powerful tools for augmenting human creativity, offering new insights, inspiration, and boundless possibilities. Our distinguished panel of industry experts and thought leaders will share their experiences and perspectives on leveraging data and AI to nurture human potential, sharing real-world examples of how data-driven AI has revolutionized industries and empowered people to push the boundaries of innovation. Joining me in this illuminating discussion are three of our industry guests and my co-host, Ray Wang. Hi, Ray.
Hey, thanks a lot. And I'm here with Teresa Barreira. She is the chief marketing and communications officer at the leading global digital consultancy, Publicis Sapient. Her creative thinking, coupled with her 25 years of global B2B marketing expertise, has enabled her to break the mold of traditional marketing for technology and service companies. But we also have three amazing guests, and I'm going to ask them to quickly introduce themselves as we go along. So, Ray Velez, go ahead. Tell us a little about yourself.
Hey, everybody. Yeah, sure. Great. Thanks for having us today. Really excited to chat. I'm Ray Velez, chief technology officer at Publicis Sapient. I focus on bringing together a lot of the different parts of Publicis Groupe to help create the solutions that operate at that intersection of data and technology.
Very cool. JoAnn Stonier, you've got a brand new title. So, welcome. Welcome to the show.
Thanks, Ray. And nice to meet you, Teresa. And thanks for having me. Yes, I have a newly minted title. I am the new Mastercard Fellow of Data and AI, formerly their chief data officer, and before that, the chief privacy officer. But in my new role, I get to really focus on the dialogue that we're having, both on this topic and others, as the world rapidly changes thanks to the introduction of generative AI, quantum computing, and other emerging technologies. So, happy to be here today.
We're so happy to have you here. And then, of course, Rishad Tobaccowala, welcome to the show. Tell us a little about yourself.
Thank you. Thank you for having me. I'm Rishad Tobaccowala. Today, I am a speaker, an advisor, and an author. I've written a book called Restoring the Soul of Business: Staying Human in the Age of Data, which is probably why I've been invited to this. And I'm also writing another book called Rethinking Work. Prior to this second career, I was the chief strategist and growth officer at Publicis, where I spent 37 years, including working with Ray all over the world.
That's amazing. Great to have you guys back together. Well, good. Well, we're going to start with the interview with Ray, and then we're going to go on to Joanne, and then, of course, we'll go to Rishad. So, let's start the show. Welcome, Ray.
Great name. Same here, Ray. So, we're excited to have you. As a passionate technologist, one of the interesting things that we talk about is really about where the future is, right? And for you, technology and data have been changing and transforming all these different industries. What opportunities are you seeing now differently through that lens for innovation and growth?
Right. Yeah. Yeah, absolutely. And as I'm sure we'll talk a lot about today is just over the last 6 to 12 months, there have been tremendous changes in how we can help our teams and our clients deliver on new experiences and business growth. And so, clearly, there's this new trend, and I think an easy way to think about it is we're now able to empower our teams with new superpowers. So, as a technologist bringing together data and technology to help our teams and our clients, we're starting to see an everyday activity where people are turning to technologies like generative AI to augment the way they deliver. And so, that could be leveraging a technology like Copilot to improve productivity, efficiency, quality, security of the code I'm writing. It could be a technology like what OpenAI and ChatGPT bring to help drive synthesis and understanding for complex topics. I know I use that all the time. Let me upload a white paper and give me the summary, and I'll go back and read it later, hopefully, or at least that's the theory. But all of that is incredibly powerful. So, you think about that intersection. That's super exciting. And I think I've been fortunate enough to work in such a fascinating industry. I think that's amazing. So, that's one big pillar. And then the other big pillar is how we can take these experiences that have been so rooted in browsers and apps and really break out of that. And so, what you're starting to see every day is making experiences much more conversational. And that applies to everything. That applies to a conversational experience that could be commerce-enabled, a conversational experience that could be contact center-enabled. All of these interactions are going to be much more natural for us as humans, given we're so strongly rooted in conversations.
Right. You were talking about generative AI, and there's a lot of excitement right now in the market, but there's also fear. Does it excite you or does it scare you? And I'm curious, with all the work you've been doing with clients, where do you really see the value? We've talked a lot about the applications, and everybody's experimenting, but where do you see the real value? Darn hallucinations. How do we end up there? But hey, real quick, we've got about 30 seconds. It seems clean room technologies have taken off over the last six months. Talk a little about them, because most people don't know what a clean room is, and then talk about how they're improving outcomes and collaboration.
Yeah, yeah. So, two things there, and I'll talk about a responsible AI approach. One is that the technology Gen AI is providing to our teams and our experiences is powerful enough that it's going to make us reassess the way we were approaching responsible AI, or responsible machine learning, as I'd have called it in the past. And I think that's going to unlock opportunities where, in the past, organizations within highly regulated industries have stayed away from leveraging data for better experiences or better enterprise outputs. So that's a positive thing. The technology is just so powerful and so obviously useful that you're going to have to build in a deeper sense of responsible AI. And I think there are two key principles when you anchor that in machine learning. One is: what are the protected attributes? One of the easiest ways to think about machine learning, if you think about an old database structure, is that you select data from a source system and you say, where this value equals this, right? And among those values, you've got attributes that you may want to protect because they align with your brand values, or that, regulatory-wise, you have to protect. So that's super important. I'll just use the zip code example. Maybe zip code is a protected attribute. I want to make sure my generative AI models, my machine learning models, my customer lifetime value models, everything that helps make a prediction or a decision for my business, don't use zip code. That's harder in Gen AI, but it's not even common in legacy machine learning-based approaches. So I think that's really critical. And then the other thing is that machine learning is really tricky, right? If you train a model on an outcome, it may find a proxy for zip code.
And so, it could be in certain geographies, these 15 attributes are good enough to tell you somebody lives in a zip code, right? So, you also have to ensure that your models aren't finding a way around your protected attributes. So, I think those are really critical to responsible AI. And you need to step back. There's a little bit of philosophy here, too. What are the attributes I want to protect? I'm not just going to wait for the government to tell me I can't use zip code. I'm making that deliberate decision because it aligns with my brand values.
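The two checks Ray describes, excluding a protected attribute from a model's inputs and then testing whether the remaining features act as a proxy for it, can be sketched in a few lines. This is a minimal illustration on synthetic data; the feature names, the synthetic distributions, and the 0.8 flag threshold are assumptions for the example, not anything from an actual production pipeline.

```python
# Sketch of responsible-ML checks: (1) drop the protected attribute
# before training, (2) test whether the remaining features can still
# predict it (a proxy). All data and names here are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000

# Synthetic customer records: zip_code is the protected attribute.
zip_code = rng.integers(0, 2, n)                  # two zip codes, encoded 0/1
income = rng.normal(50 + 20 * zip_code, 5, n)     # correlated with zip: a proxy
tenure = rng.normal(5, 2, n)                      # independent of zip

features = {"zip_code": zip_code, "income": income, "tenure": tenure}
protected = {"zip_code"}

# Check 1: exclude protected attributes from the model's input matrix.
X = np.column_stack([v for k, v in features.items() if k not in protected])

# Check 2: can the remaining features predict the protected attribute?
# Cross-validated accuracy well above the ~0.5 chance level here flags a proxy.
proxy_acc = cross_val_score(
    LogisticRegression(max_iter=1000), X, zip_code, cv=5
).mean()
print(f"proxy accuracy: {proxy_acc:.2f}")
if proxy_acc > 0.8:
    print("warning: remaining features act as a proxy for zip_code")
```

In this toy setup the income feature gives zip code away, so the proxy check fires; in practice the same idea is applied with whatever fairness tooling and thresholds an organization has chosen.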
Wait, there's zip code bias. How are we going to handle that?
Right. Exactly, right? Cathy O'Neil wrote an amazing book called Weapons of Math, M-A-T-H, Destruction. I highly recommend it. It's a fantastic read. She came out with it a few years ago, and I think it only grows in importance. Even though she's a mathematician, she helps you look at the full impact of data, and how you can treat it responsibly and use it as a power for good. But you have to think about your approach, your principles, and how you apply them, even beyond just zip code, especially with all the changes in our legal environment and all of those impacts. Maybe the government hasn't told me this is protected, but I want to make sure I'm building it into my principles.
And what is the consumer role in that decision?
Right. Well, and I think if you go back to, I think it was 2017, 2018 when the GDPR went into effect, the general theme was the right to be forgotten, right? And so, ensuring that the enterprise gives you that capability. And that's the legal—
Yeah, I think, you know, it's always been important to us that our data strategy aligned with our business strategy, right? And I think we understood very early on that data was just one element of executing our business strategy. If you think of your people strategy, your financial strategy, your technology strategy, data is just another element of how you achieve the results that you want. And when you think about data in that way, it means you have to be responsible with it, and you have to think about data as outcomes-based. So then, when you get to the ethical piece and the responsible piece, the way that we've always designed is by thinking about our customers, right? And even though we're a B2B company, we're really a B2C brand. So we think all the time about the individuals who are going to use our products and solutions. While we've always had a commitment to security and then privacy, I think we also began to recognize that we needed a commitment to data itself and how we were going to use it. Back at the end of 2018, we announced responsible data principles that were going to help guide our organization on how to design with data, so that we made sure we were all on the same page. So, yes: privacy, security, accountability. And we started talking very early about things like integrity in our data analytics practices, and what those were going to look like. We included things like bias testing and A/B testing and all the things that we talk about now. But I think it was really important that we were doing that early, so that later on we could add a principle around inclusion, which I think has really helped us focus on what's important when you talk about outcomes that are going to be equitable when you look across the globe at products and solutions. And trust me, all those conversations have only been heated up further by generative AI.
Yeah, aligning data to outcomes and aligning data to business strategy is really important, and I think sometimes companies forget that. But you have also consistently emphasized the importance of data in relation to people and its profound impact on their lives. And I feel we have probably seen both the good and the bad. So could you elaborate on that, JoAnn? What does that mean, and what implications does it hold, especially now that data is the new oil?
So remember, I started out as the privacy officer, right? I was charged with protecting the rights of the individual. And if you think about it that way, data represents us. It represents people. All the data that we collect, and we collect more and more of it every day, represents us. Even in its most anonymized form, it represents activities. It represents how people interact with each other, how ecosystems operate, how entities interact with each other. But boiling all of that down, what we care about is each other, right? Organizations serve their consumers, serve different types of individuals, serve different types of entities. But, Teresa, you mentioned it at the top: we're really all about people. Our world is very human-centered. And if you can remember that ultimately every product, every solution is going to be used in the service of a person, and that they're going to be perceiving that product, handling it, touching it, using it in some way, whether that's virtual or physical, well, then, if you design that way and you think about the data in that way, it makes you answer all of the questions that regulators care about, that your customers care about, that individuals care about, in a way that brings everything to life. So it just makes more sense to us to think about the individual at the very beginning and incorporate that into our design process. I was at a conference just last week, and somebody asked about this. They asked us if we had a group dedicated to human-centered design, and I think they were surprised when I said no, because everybody thinks this way at Mastercard.
That's amazing. And you're—go ahead, Teresa. Sorry, you're saying?
No, I think that's an amazing way of thinking, because I fundamentally believe that we hear a lot about the negativity of technology, but when we actually use technology and data to design solutions that are focused on people, that are people-centric, as JoAnn is saying, everybody can benefit. So it's really great to see that that's the way they think about it.
No, I definitely agree. And, JoAnn, this has been your ethos for quite some time, right? You've always taken an approach that's human-centric and responsible in terms of responsible AI. I should bring you to SHACK15 with me in the next few days; there's a big conversation about this. But this is huge, right? In this era of data-driven AI, we're going to keep wrestling with questions like: what's the relationship between data and AI? What does it mean in terms of that role and impact going forward? What are some of your thoughts there? Where do you see that future?
I think we're at a really interesting point in time right now. We're seeing all these large language models that have been released right into the public mind. And what we're seeing is that we don't know exactly how those models were built. However, we do know that they were built on public data. Okay, great. Except that the public data also shows our society with some of the flat sides that it has, because those data sets were not curated for all the purposes we're now putting them to. Plus, they were largely built on North American data, in the English language, which is notoriously imprecise, right? And we're seeing some of the results in those foundational models that are maybe not so good. We talk about hallucinations. The model is generative, right? So it's trying to put in those footnotes and references that it knows should be there. And if they're not there, well, it'll just make something up, because it knows something should be there. It makes perfect sense to the model that it's generating what it thinks should be there, but it may not be right for the outcomes. And that's especially true for organizations who are going to build and use their own operational models, where hopefully they understand their data, they've curated their data, and they understand its flat sides. The challenge is how these things are going to operate together, right? How are we going to derive, in this next generation, the benefits of large language models and generative AI in tandem with operational models, and really understand the data? And so we're coming to understand that we have to keep more distance the more sensitive the outcome is, the more impactful it is on an individual, whether that's a fraud algorithm or credit models. All of those things are super important.
If you're just looking for the train schedule, well, maybe the inaccuracy is going to be an inconvenience, right? But maybe it's not as long lasting as something else, right? Health information, et cetera. So I think we're in for a time when as a society we have to be very thoughtful now that we're using unfettered data to inform an awful lot of decisions. We, as those who understand the science and are building the science, we need to be very thoughtful now in the governance, the curation, and in how we're actually designing this methodology. Because now these connected ecosystems and how we're thinking it through is more and more important. That's, I think, the next generation of responsibility for all of us.
You know, talking about science, I have a question about talent. I have two young sons. One is entering college, the other one is in college. So thinking about talent and the work in the future interests me. And when you think about the crucial skills and expertise required for individuals and teams to work with data and AI, we often think about the science. But I'd love to get your perspective. How are those skills evolving? Meaning, do we need for the future not only science, but do we also need, like, liberal arts?
I love the question. I absolutely love the question, because I get it all the time. And I wish I had a crystal ball; I've always wished I had a crystal ball in almost all my jobs, right? So I could predict the future, Teresa. But here's what I think. I think science is still going to remain super important. One of my nieces is studying for her doctorate, and she's studying quantum dots. She's amazing. It's in physics, right? And it's the material that will actually enable even more supercomputing at some point in the future. So science, I think, is still extraordinarily important to our world. But I also think liberal arts, and even design thinking, are going to be equally important, as is philosophy and how we look through some of these challenges. You know, we hear about prompt engineering right now. And if you think about it, those folks are cleaning up our imprecise language, right? They're trying to make our queries precise so that we get the results we want. Now, I don't think prompt engineering is going to be a forever career. But I do think those baseline skills of designing, thinking through a design lifecycle, being willing to try, try again, and not treating a negative outcome as a failure but as a learning moment, are going to be super important. Plus, I think everybody is going to need to understand tech and data and iteration. Take the graphic designer: Adobe did not destroy graphic designers. It just said, if you're a graphic designer, here are more tools and more skills. So I basically believe that every one of our jobs will change, but not that our jobs will disappear.
You know, that's a great point. Oh yeah, go ahead.
Good to know, because I am in the creative profession.
We all are, we all are. We human beings are creative by our very nature because we make stories up all the time.
Well, adding to that point, right? I mean, we are changing the way we work and you have an upcoming book. So maybe we can get a sneak peek on rethinking work. Is there any advice you would offer to companies trying to define the work of the future or what that means, what those shifts are?
Yes, there are four shifts, basically, and I've already been talking to a lot of companies. One is a demographic shift. When we have senior management, like I used to be senior management, you tend to be more seasoned, which is a way of saying we tend to be older. I'm not saying all of senior management; maybe in the tech industry, but otherwise, we don't tend to be as Gen Z, right? The second is a mindset shift. When you look at Gen Z, only 24% of them believe in capitalism, 76% want to work for themselves, and 66% have a side hustle or side gig while they have a full-time job, right? So don't tell them to come back to the office, because they've actually got two other jobs, friends. Okay, so figure that one out. The third is that there are new marketplaces, whether they're marketplaces like a Shopify, or Amazon Web Services, or Etsy; there are lots of ways for us to be far more creative and to think about jobs differently. And then, at the same time, there's obviously globalization and a whole bunch of other things. So basically, what I say is the following. Number one: recognize that in the future, everybody is thinking about, no longer how you can fit life into work, but how to fit work into life. And that is a mindset shift that people don't realize. It's no longer work-life balance, where I try to fit life into work. The basic question is, how do I fit work into my life? Number two: all of us, whether we like it or not, and by the way, I spent 37 years at a 100,000-person company, which I continue to admire and be involved with, so what I'm about to say is not anti-large-company under any circumstance, but in order to be successful, you need to also know how to operate like a company of one. Okay?
And what I mean by a company of one is you've got to be plug-and-play, have a discrete set of skills, have a very good API so you can work and collaborate well with other people, right? Continuously learn and keep yourself updated. Even today, in Publicis Groupe, we have this platform called Marcel, which allows you to bring together different people on different projects. And that's the way it's been in Hollywood. That's the way it's been in consulting companies. But in order to do that, you have to be a company of one, which means you've got to be well-known, you've got to continue to hone your skills, and you've got to keep good relationships with everybody. Now, I operated as a company of one when I was at a company of 100,000. Today, I literally am a company of one. But the skill sets are the same. And so my whole basic belief is that every individual should be responsible for their own career. Do not outsource it to HR.
Yeah. Yeah. That is so true. I tell that to people all the time. You're the boss of your own career.
You are the boss of your own career. I'm obviously a big fan of HR and talent who can help us be bosses of our career. But eventually, it is our own, it's us. It's we are the bosses.
Yeah, absolutely. That's such great advice. We look forward to seeing your book. But I was reading an article yesterday that you wrote.
Yes.
And I love the article. The title was The Five Keys to Ensure Professional Relevance. In your article, you suggested that companies should develop a team that optimizes for the future with specific goals, while focusing the rest of the organization on maximizing today with a different set of goals. I was reading that paragraph and thought it was brilliant, and fantastic advice. My question for you is: how do companies or departments balance both without creating an environment that separates the innovators from the doers?
The way to do it is in two ways. One is to have both groups be aware of each other and of each other's goals and objectives. Their goals, objectives, incentives, and operating styles might be very different. I've done this three times inside Publicis, where I started new units and reported up to the overall CEO, just as the CEOs of the big units did. But we both knew that their job was to keep themselves growing and profitable, and mine was to create new skill sets that kept us credible and world-class against the new competitors, not our existing competitors. So everybody knew why we were behaving differently and what we were doing differently; it wasn't just because we wanted to be different. That's number one. Number two is to allow and encourage people to move from one to the other. So, hey, if you happen to be in what you consider to be today and you want to work in what might be tomorrow, welcome along. But it's also important to recognize that eventually tomorrow and today will fuse. When I launched a company called Starcom IP, it eventually fused into Starcom, right? When I launched some of our digital units, they eventually fused into what became Digitas and Razorfish, and then obviously everything else. So as long as people can say, hey, I can be part of the future; I understand you're building this; the reason you're doing this strange stuff is not because something's gone wrong with you, but because you have this goal; in the end, you and I are working on the same team. We just happen to be two different streams that will come together to make the Mississippi.
It's amazing. We're here with author, speaker, teacher, and advisor Rishad Tobaccowala. Thank you so much for being here and sharing your insights and wisdom.
Thank you for having me. Thank you. This was a pleasure.
Thank you.
Wow. Wow, what a great conversation and different perspectives.
Yeah, I mean, we've seen many different points of view here. And I think it's really important for everyone, as they get into their AI journey and their data journey, to really consider not just the responsibility and the ethics, and the need to balance different teams, right brain and left brain, with the ability to take data, spreadsheets, and a story, as one of our guests put it, but also to really think about what the business value is and align back with the business. This has definitely been a very interesting group of speakers with some wonderful insights. What do you think, Teresa?
Yeah, and the other part that really spoke to me is the ability to think about people, to put people at the center. That speaks to me a lot, because we often forget why we're doing what we do. Why are we building these solutions? Why are we doing these things? At the end of the day, it's all about people, whether that's the customer, the employee, the citizen, or the client. It's always about people. And I do believe that having that people-centric mindset, especially in the age of AI, will serve all of us well and will serve society well.
Yeah, we're definitely humanizing AI. This has been episode number three, Unleashing Human Potential in a Data-Driven AI World. Thank you so much, everyone.
Thank you.