PUBLISHED DATE: 2025-08-11 23:19:15

VIDEO TRANSCRIPT

SPEAKER: Host

Hey, Melissa. Welcome.

SPEAKER: Melissa

So hello. So nice to be here.

SPEAKER: Host

We're excited to have you. You are a design leader at Google. And more importantly, we're happy to have you here. I think it's really, really important, when we think about that future, figuring out how we inspire and maintain trust. One of the big conversations this year at the World Economic Forum was trust in the age of AI, and how that is going to happen. And of course, it's going to get more interesting as we get into this age of AI. Can you talk about what that means, particularly in the context of navigating a fast-paced tech landscape? And more importantly, how do you ensure that user trust is maintained across all these rapid technological advancements?

SPEAKER: Melissa

Absolutely. Broadly speaking, trust is defined as the willingness to open oneself up to risk by engaging in a relationship with another party, whether that's an individual, a group, or even a company or entity. And so for us, this means continually asking the question, what do our products need to do in order for users to be willing to engage with them consistently over time? And so if we look at trust through a social science lens, trust comes from believing that another person or entity will do what is expected and will act in ways that benefit me. And so creating experiences that foster that belief really brings us back to some of the basics of what makes really great human-centered design. And so that means designing products and services that are, one, stateful, which means that we're meeting users where they are in different contexts. They're transparent, meaning that we're clearly communicating the implications of a particular action that a person might take within the system, but also that we're communicating how we're using their data and how we won't be using their data. Also accessible, so designing for kind of multiple modalities and also intersectional identities is huge. Designing so that the system feels credible, so that means the product does what it proposes to do. And if it fails, it's giving clear pathways to resolve those issues. And then finally, empowering. And that means like building mechanisms into the digital experiences that we're designing that provide control and a sense of agency at the right moments. And so, you know, with the rise of AI, I think the big differentiator here in terms of additional things that design teams need to start thinking about is that with AI, these experiences are going to become more conversational and more responsive, making them feel much more akin to social interactions. And what this means is that the tendency for us to project social-like expectations onto these digital experiences is going to increase. 
And so that means product teams will need to carefully consider how the experience adheres to or breaks expected social norms. For example, concepts of consent and justification are going to become really paramount.

SPEAKER: Host

Wow, that's a lot to unpack there. But trust is, well, that's all we talk about, because it's such a big thing and such a huge goal. A lot of the conversation today has been about that, especially when we think of generative AI. We look at it and we think it's a black box. How it was trained, what's behind it, we don't know. So how do we make it a white box? And for you, Melissa, my question is, with your focus on ethical design, why do you believe organizations need to prioritize and invest in ethical considerations, now more than ever, specifically as they deploy more AI-based technologies and solutions?

SPEAKER: Melissa

Absolutely. You know, I think it's very tied to what Abby mentioned a little bit ago: with the introduction of AI into these interactive systems, it's going to be super important that technology is working on behalf of us and not the other way around. We shouldn't be trying to adapt to how the technology works just because it's becoming so much more sophisticated. Right. And so it's about flipping the power dynamic there and thinking through how to do that. And then, aside from being the right thing to do, in terms of focusing on design ethics and making it a core part of the processes that product development teams deploy when they're designing these products and services, trust is one of the founding pillars of any business. Because if your customers don't trust you or your products, you don't have a business, at least not a sustainable one. It also increases the resilience of a company, because let's face it, no business is immune to mistakes or missteps. But if you've invested in building trust through ethical practices from the beginning, when those mistakes do occur, that foundation is going to serve the company really well in terms of its ability to bounce back and be resilient.

SPEAKER: Host

Yeah. The Tylenol story from years ago.

SPEAKER: Melissa

Yes.

SPEAKER: Host

Wow. It's true. These things don't change and sometimes they're timeless. And as you're doing this, how do you create this collaborative partnership between tech firms and organizations in terms of shaping responsible development and deployment, especially with Gen AI? There's some concern these days, and people want to make sure they're doing the right thing.

SPEAKER: Melissa

Yeah. I believe one of the key benefits of partnering with technology firms or other outside agencies is that it brings a diversity of perspectives into the conversation. Working in-house at companies like Meta and Google, when you're working day in and day out on something, teams can develop a kind of blindness to the problems within the business. And so, by bringing in outside organizations or agencies, and I've done this on my teams, our work can really benefit, because it brings a wider aperture of perspectives to the problem we're trying to solve. Bringing fresh perspectives to old problems not only helps us uncover new, previously unexplored UX solutions and interaction paradigms, but it also exposes us to new processes that really push our creativity.

SPEAKER: Host

Human-centered design is something that we've been talking about. I've heard that term for many years now, and I think it's a fundamental principle for many organizations and their teams as they think about their experiences. What do companies need to do to make it a core part of their design process and make sure that their experiences are aligned to the needs of the end user?

SPEAKER: Melissa

Yeah, I think the introduction of AI makes two things even more imperative than ever before. One is UX research, which Abby mentioned a bit in her interview, but also content design. So on the first part, if companies aren't doing UX research today, they need to start. And it can't just be upfront research to build that empirical understanding; it has to be iterative, rolling research throughout the product development process. Product design is never a done thing. You design something, you ship it out into the world, and people are going to use it in very unexpected ways. They're going to have perceptions that you didn't anticipate, and that's what makes rolling UX research, testing, concept validation, and monitoring the performance of the product so very important, and even more so with AI-powered products and services, because they are so dynamic, right, and they are more conversational. The second piece is content design. As mentioned before, AI experiences tend to feel more akin to interacting with another human, and therefore social norms and expectations play a heavier role in the quality of the user experience that people are having. Language becomes a really important part of the whole experience someone is having, and so teams need to increasingly think about their products as conversations, an ongoing dialogue between two people, or between one and many people. And this makes content design a critical discipline to have involved from the very beginning of the work, bringing intentional, principled content strategy to everything from the tone and voice of the AI, its personality, to where the AI's vernacular, the language it uses, needs to change from moment to moment or in different parts of the user journey.
It's going to be super, super important, and this is something that we have seen and have really started to anchor into at Google in our work, because it shows up even in some of the latest features that have been released within Google Search. I don't know if you've used Gemini, but it is very conversational, right? I'm not just interacting with a user interface and a bunch of UI. It feels like I'm having a conversation with another person, so the tone and the vernacular that Gemini uses to interact with me and respond to me are super important.

SPEAKER: Host

That's a great point, right? And that third wave of research that people have been talking about, it's important that the ability to capture the qualitative is there. And, of course, the shift from deterministic models to probabilistic models means it's also a very, very different interaction point. So related to that, though, as this technology evolves, how do design teams stay inspired and, in turn, inspire others? A lot of this is really about bringing that passion to their work, making sure the work fulfills functional requirements and aligns with human values, with ethical codes there to ensure that solutions have a positive impact on individuals and society as a whole. That makes it really hard from a design standpoint, right? Those are a lot of constraints on the design process.

SPEAKER: Melissa

Yeah, I love this question. I have four key points here. One, cyclical iteration based on rolling UX research and product performance monitoring, as I mentioned before. Two is keeping that open line of communication and feedback from users, and Abby mentioned this as well. It's more important than ever to stay close to customers and continually understand how they are using your products and services in the real world, what the quality of those interactions is, monitoring that over time, and iterating on the product based on what you're finding. Third, continually evangelizing ethical best practices and updating them regularly based on the latest research, and also based on what's working well when you have your product in the field, right? Again, that monitoring piece. And then lastly, and this isn't something I hear talked about a lot, but it's something I think about and have personally been inspired by throughout my career, is designers taking the time to engage with how artists are thinking about some of these questions and ethical concerns. They say life often imitates art, and the dreams and stories of artists can serve as a real wealth of inspiration for how to think about the interaction between humans and technology, particularly within the world of science fiction. In the stories of science fiction there's a lot of subtle commentary around what we really want the interaction between human and technology to look like and what feels beneficial, but they also serve as examples of what could be harmful to humanity.

SPEAKER: Host

Wow. Super deep. Yes, it's deep. And we see that science fiction coming to life a lot, don't we? Some of the movies that we watch, and then years later it's reality. And I actually read somewhere that some of the tech companies bring together Hollywood creators to really help them ideate.

SPEAKER: Melissa

A hundred percent. I'm a big Trekkie, and I have gotten countless inspiration from Star Trek episodes. So there's a wealth of insight to be had there. Absolutely.

SPEAKER: Host

So looking ahead, say over the next three years or so, what strategies do you recommend for companies and design teams to stay ahead of innovation and really maintain that relevancy?

SPEAKER: Melissa

Yeah. I'll answer this from the standpoint of what I tell my design teams. First, always stay curious. Always question your own knowledge and your understanding of the context, the situation, and the users, because it's constantly evolving. You may do a UXR study and come to some particular insight that then informs your design decision, and you put that out into the world, and a year later people may surprise you. It may be completely different. It's changing so fast. So always build in time to learn new tools and skills, read up on the latest research, and just continually build that knowledge bank for yourself. Second is to really be open to experimenting and testing often, and to feel comfortable with failure, because we will fail. We will design things that literally just do not work. And sometimes we end up designing things that result in unintentional harm, right? So we need to be open to learning from those failures and then course correcting very fast. And then lastly, connected to what I was talking about earlier, designers really need to invest in understanding the role of language in crafting digital experiences. One thing that does concern me, though, is that it might lead to more mental health issues, or a lack of social interaction, feeling alone. All those things are already pretty much exacerbated by technology in the physical world. I just think it might get accelerated. And so that part does concern me greatly.

SPEAKER: Host

Yeah, no, great point. Let's dive deeper into that, because that means the way we design human and machine interaction, or human and AI interaction in this case, is going to change. And so how do we integrate human-centric design principles into these experiences, ensuring that technology enhances rather than replaces that human touch in customer interactions? One of the things we talk about all the time in the world of AI, where do you insert the human in the process? And that is friction. That is like the biggest friction you can think about, inserting a human in a process.

SPEAKER: Melissa

Yeah, I think just as a general principle, we shouldn't fear AI. Don't be afraid of it. Embrace it. At the same time, don't view it as the be-all and end-all. Definitely view it as one more tool in your arsenal. And if you can have that frame of reference and that mindset, then I think it will be very healthy. And as long as you have other humans that you interact with and trust equally as much as the AI, then I think it's great. You'll have a discourse. You'll have conflict. You'll have debates and people who don't agree. That's fantastic. And at the end of the day, you will make the call, right? So I think the key is not to rely solely on the AI. It will become very convenient to do that, I think, in the future. And the key is to make sure that you balance it with human interaction along with the AI interaction. As it relates to human-centered design, I heard Melissa talk a lot about it, and I think she brings up a lot of great points. And you mentioned this too, Ray, in terms of how we keep our designers inspired. I actually think two things. One, for good designers, constraints actually inspire. They create a sense of challenge. I always think of great constraints as a great design challenge, and I think great designers will view it in the same way. The other thing is I would really treat the AI as another partner. If you can view it that way, I think it's also very healthy. And lastly, there's this idea that if you can design for shared experiences, and think of part of that shared experience as just having an AI component in it, no matter what you're doing, I think that's also a very healthy perspective.

SPEAKER: Host

Yeah. You know, I also agree with that view that we should think of AI as a partner, as your companion. I use ChatGPT throughout my day, all the time. I ask a lot of questions. It helps me ideate, helps me write things, helps me edit things. That's the way I look at it. And I think as we move more into using these AI tools, they are just going to become, in some ways, commoditized. Everybody's going to be using them. It's kind of like the Internet. It's not a differentiator. And I think humanity, especially in design, is what's going to be the differentiator, in my opinion. But I'm wondering and curious to see if you have any examples of brands that have effectively leveraged design and innovation and have integrated AI into their customer experiences.

SPEAKER: Melissa

That's such a good question. I think you guys sent me the list of questions beforehand, and I honestly drew a blank when I looked at that one. I just stared at it for a while. And, you know, I went on the Internet and searched for great companies or great brands that have used AI, and the typical list of Fortune 500 companies comes up, right? I don't need to name them. They're all the companies that are begging Ray to talk positively about them when he's on Fox News or CNBC or whatever he's doing, right? So ignore the obvious choices. Of course these big companies have huge budgets, and they're smart. They're investing in machine learning, analytics, data, all of that, to improve their existing business processes and decision-making, and even improving what I call customer experiences. But then I went to ChatGPT this morning, and I typed in the same question. And the first round was garbage. It was just rehashing the articles that we all could look up. But I kept pushing it and pushing it, and I think that's the key with working with ChatGPT: you just have to keep pushing it and keep diving, having it look at corners of the world that we are not looking at. And it came up with a couple of examples I thought were kind of cool. There's a company called Vicarious Surgical that uses AI and virtual reality to mimic the dexterity and decision-making of surgeons, so that it can feed all of that information back into robotics that can perform minimally invasive surgery cost-effectively in corners of the world that don't have access to the same type of health care that we do. I thought that was such a great example of taking AI, VR, and robotics and doing something good with it. So that's a great example where a new company is actually melding design, innovation, and AI.
Another one I experienced this weekend, around the Super Bowl: I have a friend who works at a food innovation company called Maxon, and they've connected AI to an incredible database that not a lot of people have, of recipes, how recipes impact taste, and how the biology of the human body reacts to different ingredients, all married together. They have this AI called Leo, and through working with Leo, I created a really cool combination of galbi, which Ray and I really love to eat, which is Korean short ribs, and cupcakes, and it came up with this incredible recipe for galbi cupcakes that was both sweet and savory. I just thought that was really cool, and it's something I could do at home. I don't necessarily need a food manufacturer to do it for me. So stuff like that is both fun and interesting, and I think it will improve the human experience on many levels. Another great example, sorry, Ray, is Khan Academy, the online school. I don't know exactly what grades they teach, but they use generative AI, ChatGPT, to build Khanmigo, which is basically a tutor to help the kids. Not to cheat on their tests, but actually to help them. If they have a question on math or a question on English, it's like a personal tutor, instead of having to pay $100 an hour for one. It's a great use of it.

SPEAKER: Host

I love that. You know, when you think about it, the best professors aren't the ones that do the best research, or have the most achievements or accolades or recognition. It's the ones that can take very complex, sometimes very hard-to-understand things and simplify them in a way where many people can get it. And I think what you just described is exactly that type of AI. Actually, going deeper on your other point, you reminded us that user experiences can be smell. It could be taste, right? We've totally ignored that conversation as we've gone completely digital, but there are those human experiences. Related to that, technology is going to continue to advance, including AI, and AI is going to hit all our senses, right? So taste and smell and touch are all going to be part of this. How do you think design teams are going to stay ahead by figuring out what users are going to expect, and more importantly, make this both intuitive and futuristic?

SPEAKER: Melissa

Yeah, I think you hit on it, Ray. We've talked a lot about large language models, right? But now there are large vision models, and I think there are going to be large auditory models and taste models, and all of those need to be combined together. And here's why I think it's important. Speaking of human-centered design, there are the five steps that we talked about, right? You build empathy, which hopefully lets you identify a need, and then you define the problem you can solve. Those are the first two steps. Then you ideate, then you prototype, and then you test, right? And I think AI is actually helping on many of those levels. But on those first two steps, building empathy and defining the problem, sometimes the best insights come from seeing a disconnect between what people say and what they do. And if you have large vision models married with large language models, the system can look and say, well, they're saying this, but in reality they're actually doing that. Some compensatory behavior is happening. Aha! There might be an insight there, right? But you need a large vision model married to a large language model in order to identify that gap. And so that's where some of the really exciting stuff is going to happen: when you bring in all five senses. That's where design could really benefit, having all these large databases available for all five senses, along with potentially an emotion library, an emotive interpretation library. I think all of that's going to be great. There's a lot of GPTs out there now.