PUBLISHED DATE: 2025-08-11 22:36:29

VIDEO TRANSCRIPT

SPEAKER: Eli

Good question, Ray. The easy, obvious answer is: dramatically. Catastrophic is not the right word, but significantly, and in a way that's a little overwhelming to a lot of us, I think, including me. At its core, there's a lot of talk about this technology, especially in our world, the content creation world, asking: is this the new creativity? Are machines becoming creative? Will they destroy creativity? I think the answer there is a qualified no. These machines are not creative. At its core, what this technology delivers is a dramatic boost in productivity and a significant reduction in the cost of production, which enables and unlocks a lot of creativity in humans, but I don't generally think of the technology itself as creative.
What it means is that for any given creative work, whether you're a professional or, like me, a hobbyist or somebody just trying to get some work done, you have to divide your time between a certain amount of creative thinking and a certain amount of production. If you're lucky, the balance of those allows you to iterate, because iteration is the key to success here: working, exploring, and refining until you hit on that special piece of content that serves your need. So what generative AI is going to do is shrink the amount of time needed on the production side and, for a lot of people like me, lower a lot of the barriers to entry on the production side as well. It will be a huge time saving. For a lot of people, that will drive up their ability to iterate and free up time for the creative side, giving them the ability, to steal a cliche, to take a hundred shots on goal: to push, work, try, share, get feedback, and iterate until they get something of real quality.
What we will absolutely also see is people who simply take the opportunity to shrink the time and budget put toward projects, or to churn out more content, without putting more creativity in. They may even take some of the creativity out, because as we know, these technologies are very good at faking creativity and making us think they're creative. So you'll see a massive increase in productivity. You'll see a disruption and a shift in how people go about content creation and how they split their time and budget between production and creativity. And then I think we'll see a bifurcation into two worlds: more content being churned out that isn't necessarily higher quality, and people who invest more in the storytelling and the message they're trying to convey. The interesting thing we may see come out of this a little ways down the line is the public, the consumers of this content, starting to differentiate between those two and developing a better sense of whether there's true creativity, whether a true story is being told, or whether it's a canned story, because we'll see more and more canned stories coming out.
So I think down the line one of the biggest impacts here may be a bigger focus on authenticity and storytelling, and on understanding when that's present and when content is machine generated. That will have a big impact on the industry of content creators, I think for the better, but it's going to be an interesting ride.

SPEAKER: Teresa

Yeah, and I think that is the big challenge, right? To distinguish. And I actually love what you say about iterating, because as a CMO, I always feel that what we put out in the world should never feel finished; it should always continue to iterate. Sometimes the creatives, especially, always want something perfect. But I think this allows us to put something out in the world and, like you said, continue to iterate. I think a lot about this as a CMO, but also as a mother of teenagers. I actually got introduced to ChatGPT through my youngest son, who came home back in December and told me that his classmate got a 98 in his drama class for creating this amazing play, all done with ChatGPT. So one of the things I think a lot about, especially with students in education, is what are the concerns? How do you distinguish? How do you know when it's actually original work and when it's not? I know Adobe has unveiled some new tools specifically to help designers avoid plagiarism and offensive material. What steps do you think can be taken to encourage responsible AI today? And how do we identify what's real and original and what's not?

SPEAKER: Eli

Yep, great question, Teresa. I have teenagers myself, so I have the same concerns. And obviously we spend a lot of time talking to creatives who have a lot of these concerns themselves: where is this content coming from? How do they make sure they maintain ownership and are fairly compensated for their own original creativity, their work, their style? So it's a really big and difficult topic. First and foremost, I think the answer for everybody across all those domains is education and information. The biggest thing we can do, and we're making a big investment in this at Adobe, is make sure we surface all the information people need to understand the provenance of content: where it came from, how it was made, and whatever assertions are made about it, so they can make their own informed decisions. We can't tell you whether something is good or bad, but we can make sure that we, as a community, and Adobe specifically, as tool and technology providers, are transparent about how content is created, and encourage our customers, users, and creators to be the same. As a community, as a society as a whole, that is our job and that's how we respond to this: we focus on transparency, on information, and on education for our teenagers and our adults, so they understand the content they consume and make informed decisions about it. So we've invested for the past few years in the Content Authenticity Initiative, an open-source, open-standard project with about a thousand other members working with us. The whole goal there is exactly this: to make sure content carries with it assertions about how it was made and who made it, in the places where people want access to that information and want to make informed decisions based on it. That is a societal effort. We can't do it alone at Adobe, and we are not doing it alone; it is a big effort. And I think as a society we need to start demanding answers to those questions. When we look at content, we have to be able to ask: where is this coming from? How can I assess it properly?

SPEAKER: Teresa

Absolutely, that transparency is really important. And ethical considerations keep popping up when we talk about generative AI; it's really about what we do when developing generative AI systems. What have you done at the system level? Because it sounds like you've given this a lot of deep thought.

SPEAKER: Eli

Yeah, great question, Teresa. There's obviously a difference between encouraging responsible use of the technology and responsible development of the technology. As some of the people out there developing cutting-edge generative AI, we think that we, along with all of our colleagues in the industry, have a massive responsibility here to take that into account. The good news is that at Adobe, we've been doing AI for over a decade now, so we've had a lot of time to develop a strong set of standards for how we develop and deploy it. We have, essentially, a set of AI standards, and every feature we develop, every foundational model we build, goes through an ethical review before we put it out there. We look at transparency, accountability, and responsibility; those are our three principles. What that means is that every time, we ask: how are we communicating about this technology? How are we surfacing information about it? What harm and bias can be done with the technology, intentional or unintentional? And are we doing everything we can as a provider to mitigate, especially, the unintentional harm and bias people can create? So we look at things like: does the technology reinforce, represent, or combat stereotypes? Does it promote misinformation, or does it help people promote accurate information? There's a big investment for us to make there. As creators of the technology, we take that responsibility very seriously.

SPEAKER: Ray

No, it's a great point. There's a large digital giant that, by accident, ingested an LLM from someone else and has been spending about a billion dollars to reverse it. I won't name them.

SPEAKER: Eli

I've heard that story. You know, this stuff is moving so fast, and there's an incredible urgency to develop the technology and get it into people's hands so that everyone has access to it. When some of these image generation technologies first came out, the creative world, our customers, our community, started to freak out, because they thought: okay, now I don't have access to this, but everybody else does. So we said, okay, we've got to get this into the hands of creators in a way they can use in their workflows. And it's very easy to make mistakes as you go: very easy to accidentally ingest the wrong data, to accidentally miss some issues. The thing I didn't mention before is that, as creators of the technology, in addition to being thoughtful about everything we're doing, one of our responsibilities is to keep iterating, just like you said earlier for content creators: to make sure that as mistakes are made, we invest to correct them, and to keep investing, as the technology evolves, in both what it can do and how it does it.

SPEAKER: Teresa

Eli, at Adobe you have embedded and adopted generative AI across all your product development, correct? I'm curious: as a company, are you also embedding and adopting it in the way you do your work, not just in your products, but in how you do your own work internally?

SPEAKER: Eli

We definitely are, yeah. Like everybody, we are learning as we go. Whatever I tell you about how we've adopted it will be out of date by the time this interview is over; we'll have adopted it in some other way. That's how fast we're moving. But yes, across every one of our functions, everybody inside Adobe is essentially asking the same questions everybody out there watching is asking: how does this transform my work? How do I take advantage of it, get ahead of it, and figure out how to use it in ways that benefit the work, benefit the company, and benefit me? Our developers are looking at how we engage with it in our coding practices, our creatives are looking at how we adopt the creative technology in our content creation processes, and our marketers, even our financial people, in the right places, are looking at how we adopt this everywhere. That said, it's not, if you will, untrammeled access and experimentation, because it can go bad very quickly. Or you can end up with results that look good superficially but don't hold up. I can't believe the number of conversations I've been in over the last few weeks where executives have told me: hey, we tried this, we initially thought it was going to work, but we just applied the LLM and then realized it's just not practical, because most of the information the models are giving us is not actionable or not real. We need those warning labels: please do not try this at home, professional driver. Right? Please do not try this at home, professional prompt engineer.

SPEAKER: Teresa

And sometimes I also feel that people, in all this excitement, which I think is great, are using it, but it really hasn't changed the way they work. They're still doing the same thing. The technology really hasn't done much: it's a great tool, you're playing with it, but you're still doing the same thing the same old way.

SPEAKER: Ray

Yeah, absolutely. We really need to think about the workflows and the change management. The business side of this doesn't change in the enterprise world; that's still the same. It's outcome-driven, value-driven, and managers still need to manage. It's just a new tool for them to unlock significant productivity gains and also get to better effectiveness, to better content itself. So, Nikhil, there's that open letter on AI that all these people signed, saying we're going to halt AI. I don't know, does that work? Does that not work? What is your stance and position on that?

SPEAKER: Nikhil

Oh, I think that's just the wrong approach. It's fundamentally not going to work, and it's frankly impractical. There are thousands, tens of thousands, of people around the world working furiously on this technology, and the technology is out there right now, in the wild. There's going to be innovation; there's going to be development. We can't halt it through a public letter like this. So first of all, it's impractical. And then I also think it's unreasonable. There's so much development, and so much of it is actually very practical, very useful. I don't think halting the development of AI is the right call at this point in time. Now, I'm not saying there's no need for regulation, particularly when you look at the consumer space and some of these large platform providers, with concerns around privacy, information security, or disinformation. Those are very legitimate concerns, and there's certainly a reason to regulate, or to think about how that would work, around consumer platforms. But in the enterprise, a lot of the core regulation already exists around the ethical use of these technologies and the ethical use of datasets, and managers are accountable for their actions. So I actually think it's wrong to just halt the development of AI. I also don't think we're at any risk of AI taking over the world anytime soon; it's not like we have a Terminator situation on our hands.

SPEAKER: Ray

So the hallucinations will not be hallucinating us, is what you're saying. But the genie is definitely out of the bottle.

SPEAKER: Nikhil

Absolutely. The genie is definitely out of the bottle.

SPEAKER: Ray

We've been here with Nikhil Krishnan, Chief Technology Officer at C3.ai. Thank you so much for being on the show.

SPEAKER: Teresa

Thank you, Nikhil.

SPEAKER: Ray

Thank you, Teresa. So Teresa, amazing group of people here, amazing conversations. What we've been seeing is a bunch of use cases, a bunch of ethical considerations, this notion that with generative AI the genie is already out of the bottle, as you said, and a lot of advancement. When we think about what Sheldon was talking about, organizations need to get ready, right? They need to be prepared, and there's a process to this transformation. I see there's going to be a lot of methodology, a lot of approaches, so that we get everybody up and able to be successful in their organizations. What we saw in the conversation with Eli is really this unleashing of creativity. Creativity is not going to disappear; in fact, we're going to see a lot more of it, and tools are going to help us do that. That's going to create a whole new set of industries and new jobs. Getting an idea out there is going to be easy, but then the question is: which one do I choose? It won't be like struggling to get that first drawing or initial idea onto paper; there will be tons of those. And in fact, we can deliver a global capability across the board. You want to do a 50-country launch? Instead of doing that in six months, we could probably do it in six weeks or less. That's the kind of impact. And of course, as Nikhil was saying on the enterprise side, this is going to change the way we work, the way we look at forecasts, the way we think about decisions. That data-to-decisions pipeline is going to be accelerated. We're just seeing this amazing change happen. Teresa, your thoughts?

SPEAKER: Teresa

Wow, I thought it was a fantastic conversation. I was really excited to hear everybody's different perspectives; they each come at this from a different point of view based on their roles. I was really excited to talk about creativity, and to hear that creativity is going to be enhanced. Frankly, coming out of this conversation, I feel relieved: Ray, your job and my job, we still have them, at least for the near future. I think there are going to be more jobs created. So I'm an optimist here, and I think there are going to be more exciting jobs instead of the mundane ones that get automated over time. This has been a wonderful episode, the inaugural episode of Impact TV. I'm here with my awesome co-host, Ray. And of course, thank you for watching. Bye, everybody.

SPEAKER: Ray

Bye-bye.

SPEAKER: Teresa

Bye-bye.
