Good morning, good afternoon, and good evening to everyone who's joining us here today. I'm Kira Barrett. I'm our AI marketing content lead at Publicis Sapient, and I'm joined today by our AI ethics task force. That's Francesca Sorrentino, Todd Cherkasky, and Sucharita Venkatesh. So Sucharita, Francesca, and Todd, welcome to the show.
Thanks, Kira.
Good to be here.
Excited to be here.
So I'm going to start with a tough question. What is ethical AI?
That's a good place to start, Kira. So I think ethical AI is AI that operates from a first do-no-harm principle. It's built and used in a way that's fair, inclusive, transparent, safe and secure, and sustainable. And sustainable, not just in an environmental sense, but in a social and economic sense as well.
Yeah, agreed. Gen AI is moving so fast, and we need to think differently than we did about other tools and technologies. With great power comes great responsibility, that kind of idea. There are high risks for our clients, for their customers, and for our people. It's a very practical thing. So at Publicis Sapient, we've taken a stance on what responsible and ethical use of Gen AI is, we've developed a set of principles, and we ask that all of our people adhere to those principles in their daily work.
Gotcha. And so to follow up, do you think AI ethics are the new ESG for organizations?
It's not quite the same thing. AI ethics and responsible use are themselves part of ESG, or, as I'd prefer to call it, sustainability, because that has a much broader scope. Sustainability requires long-term thinking, as the payoffs or impacts of actions we take today may not come until much later, possibly decades or even longer. With AI, by contrast, you can see the impact really quickly.
That's right, Suchi. And I think the term ESG itself is also very tied to reporting and evaluating the performance of a company with respect to specific environmental, social and governance measures.
Right.
So to get into the nitty-gritty, let's discuss these various implications in terms of the E, the S and the G. We know businesses have massively increased investment in AI, and particularly generative AI. And so we're all wondering: are we doing AI ethically? Part of that is, are we doing it sustainably, like you mentioned? Are we being mindful of the environmental and the social impacts? So given that a factor of ethical AI is the environment, the E in ESG, is AI, and particularly generative AI, sustainable? I know that's a big question, but I've seen a statistic that generating a thousand images with a powerful AI model is responsible for roughly as much carbon dioxide as driving about 4.1 miles in an average gasoline-powered car. Given that statistic, what does this mean for businesses?
I'm really glad you asked this question, Kira, because it's a topic really close to my heart. Generative AI has a significant environmental impact, and it's not just about carbon dioxide emissions; it has a really high water footprint as well, and it's far more power- and water-hungry than other applications. So it's important for businesses to closely monitor the impact of their AI usage, take steps to mitigate it, and look at the benefits and costs of AI across the entire life cycle, not just a certain part of it. Mitigation could include things like training and upskilling people to use Gen AI in a responsible way, using energy-efficient hardware, using data centers that run on clean energy, and being very intentional about AI solution design and architecture, with sustainability as one of your co-design principles. That could take different forms, for example, using smaller models, or analytical or even non-AI solutions where appropriate. For instance, for one of our clients we've built a Gen AI chatbot, and we're constantly monitoring its impact and taking steps to reduce it. We're optimizing the number of API queries so that we reduce our energy usage, and we're shifting to a smaller model, since we found that the capabilities of the larger model weren't really required and the smaller model was enough for us.
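To make that concrete, here's a minimal sketch, in Python, of the kind of API-query optimization Francesca describes: caching identical prompts so repeated questions don't trigger redundant model calls. The query_llm function is a hypothetical stand-in for whatever client a team actually uses, not a real API.

```python
import hashlib

# In-memory cache keyed by a hash of the prompt. A production chatbot would
# likely use a shared cache (e.g. Redis) with an expiry policy.
_cache: dict[str, str] = {}

def query_llm(prompt: str) -> str:
    # Hypothetical placeholder for a real model call.
    return f"response to: {prompt}"

def cached_query(prompt: str) -> str:
    key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    if key not in _cache:
        _cache[key] = query_llm(prompt)  # pay the API/energy cost only once
    return _cache[key]

print(cached_query("What are your store hours?"))
print(cached_query("What are your store hours?"))  # served from the cache
```

For FAQ-style traffic, where many users ask near-identical questions, a cache like this can cut a meaningful share of model invocations, and therefore energy use.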
Yeah, if I could add to that, I think it means always making sure that you're using a model that's appropriate for the purpose. Don't use a mountain where a molehill will suffice. We need to make sure that we evaluate the tradeoff between environmental impact and performance. For example, an MIT experiment found that training a computer vision model on 70% of its data reduced accuracy by less than 1% but cut energy consumption by 47%. In some cases, a small language model may even offer better performance than a large language model. Mistral AI's large language model Mistral Large 2 is comparable in performance to flagship models like OpenAI's GPT-4o, even though it's only about one-fourteenth of the size.
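As a rough illustration of Todd's "smallest model that clears the bar" idea, here's a short sketch that picks the cheapest candidate meeting an accuracy requirement. The model names and the accuracy and energy numbers are purely illustrative assumptions, not real benchmarks.

```python
# Illustrative figures only; measure accuracy and energy on your own workload.
candidates = [
    # (name, accuracy on an internal eval set, relative energy cost per query)
    ("small-model", 0.91, 1.0),
    ("medium-model", 0.93, 4.0),
    ("large-model", 0.94, 14.0),
]

def pick_model(required_accuracy: float) -> str:
    # Walk candidates from cheapest to most expensive and return the first
    # one that clears the accuracy bar.
    for name, accuracy, _cost in sorted(candidates, key=lambda c: c[2]):
        if accuracy >= required_accuracy:
            return name
    raise ValueError("no candidate meets the accuracy requirement")

print(pick_model(0.90))  # -> "small-model": good enough and far cheaper
```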
Wow, that's really interesting and kind of unexpected. I think on the flip side, we've also seen people say that AI and Gen AI can actually help companies reduce their carbon emissions. So even with all these concerns about increasing carbon emissions, it's possible that certain applications can really help us with sustainability. Google's chief sustainability officer said in an interview that one really powerful tool they're excited about, and expect to see a lot of focus on in the years to come, is the role of AI as a climate solution. But how realistic do you think that proposition is, given the carbon emissions impact of generative AI?
I think it's very realistic. I think there's real potential for AI to have a positive impact on climate. And there are a lot of examples out there from a lot of innovative people. There's real-time monitoring and analysis that's happening on climate. Stream Ocean uses AI to monitor marine biodiversity. Pano AI detects wildfires in real time. NatureDot uses AI to remotely monitor aqua farms, ensuring the health of fisheries. So Gen AI can have a huge impact. It can analyze complex data sets to optimize crop rotations and resource use, improving sustainable agricultural practices. And I think this is probably just the beginning.
Wow, that's a lot of really interesting use cases. So we've touched on the E in ESG, but moving on to the S, social impact, I want to talk about the social impact of generative AI for organizations' stakeholders, meaning both consumers and employees. Let's start with consumers. We know that AI models require a lot of data, and in the future, a lot of that will actually be consumer data. So there's a host of privacy concerns about this, and even bias concerns with some of these models. The question is, how do you think companies can ensure the reliability, fairness and privacy of the data used in AI development? And do you think there should be industry-wide standards for data quality and bias testing?
Well, first of all, companies need to make sure that they've got consent, consent to use the data for the purpose of AI development, and ensure that the data is as diverse and representative as possible. It would be best to avoid using personal data to train a model at all. But if that's required for the tool, companies need to ensure that they have user consent and the right legal basis to use the data in this way. In cases like this, techniques like anonymization should be used to remove identifying information if possible, or at least masking should be carried out to reduce the risk of linking the data back to a person.
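Here's a minimal sketch of the masking step Fran mentions: stripping obvious identifiers from free text before it enters a training corpus. The regex patterns are deliberately simple examples; a real pipeline would use a dedicated PII-detection tool rather than hand-rolled expressions.

```python
import re

# Simplified patterns for two common identifier types.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def mask_pii(text: str) -> str:
    # Replace matches with placeholder tokens so the text stays usable
    # for training while the identifiers are removed.
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

print(mask_pii("Reach Jane at jane.doe@example.com or 555-123-4567."))
# -> "Reach Jane at [EMAIL] or [PHONE]."
```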
Yeah, consent for sure. Also copyright, and we're hearing a lot about this in the news. Companies need to make sure they're obtaining permission to use copyrighted content from the copyright owner, or work with an LLM provider who has trained on copyrighted content with permission. In the case that a public or pre-trained LLM is being used and permission might not have been obtained, it's still the company's responsibility to validate and check for copyright violations, with a reverse search or something like that. Ultimately, though, I think adhering to a responsible use model for content and copyright, and preventing violations before they happen, can enhance customer trust.
I totally agree with everything Todd and Fran have said. And the other thing that you've got to remember is that ethical design for AI is not just about protecting privacy or security; it's also a key determinant of product quality and value. Bias in AI algorithms, and we have seen this in many examples over the years, can result in inaccurate predictions and unfair results, and that damages not only the user experience but also the company's reputation. Just like we've seen with sustainability and accessibility, when you plan for environmental impact or an accessible product right from the beginning, it leads to a much higher quality product. Similarly, when you build ethical considerations into AI development and design right from the start, you'll get a much better solution for everyone, one that's more aligned with human needs and goals.
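For a concrete flavor of what the bias testing Kira asked about can look like, here's a small sketch of one common check, demographic parity, which compares a model's positive-decision rate across groups. The data is made up for illustration; real tests would run over a held-out evaluation set with known group labels.

```python
from collections import defaultdict

# Toy data: (group, model's positive decision).
predictions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, positive in predictions:
    totals[group] += 1
    positives[group] += positive  # True counts as 1

rates = {g: positives[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())
print(rates, f"parity gap: {gap:.2f}")  # a large gap flags potential bias
```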
Wow. So you're hitting responsible use, you're gaining consumer trust, and you're also developing a better product if you build these ethical AI models from the start. I think that's really important. And moving away from consumers and onto employees, we know that some use cases for AI and generative AI can increase efficiencies by actually reducing headcount. So do you think there is a way that companies can approach this ethically? And are there best practices for this?
Yeah, I'm really glad we're talking about this. Gen AI will definitely affect jobs. The work that people do is going to change, but it's not a foregone conclusion that headcount will be reduced. People are going to work differently, with different and expanded tool sets. But the way we see it is that people can do more, faster, powered by Gen AI, ideally with increased excellence. We're seeing a lot of traction with