There's a major evolution happening in AI right now: the shift from generative AI to agentic AI—two fundamentally different approaches to making machines smarter. Generative AI is what powers tools like ChatGPT, which writes text, or DALL-E, which creates images. Agentic AI, on the other hand, refers to systems that can act autonomously, performing tasks on their own without constant human instruction—almost like a digital co-worker or assistant.
While agentic AI offers greater potential power, it also brings complexity and integration challenges that can slow down value creation and scalability. In contrast, generative AI’s lower deployment barriers make it more immediately valuable, ensuring faster adoption in the global market for the foreseeable future.
How should businesses approach short-term and long-term investments in these two different categories of AI—both for internal enterprise and external, customer-facing use cases?
Generative AI refers to a class of machine learning (ML) models designed to produce new content—text, images, audio, code—by identifying and replicating patterns from extensive training data. These models, typically based on deep learning architectures like transformers or generative adversarial networks (GANs), generate outputs that align with the statistical properties of their training data rather than retrieving or modifying pre-existing content.
One of the most prominent applications of generative AI is automated content generation. Tools like OpenAI's GPT-4o can generate human-like text, enabling applications from drafting and summarization to conversational assistants.
Beyond text, generative AI extends to other forms of content creation, including images, audio, and code.
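As a concrete illustration of the text side, here is a minimal Python sketch using the open-source Hugging Face transformers library; the small gpt2 model and the prompt are illustrative stand-ins, not the production-grade models named above.

```python
# Minimal sketch of generative text output (illustrative only).
# The gpt2 model and prompt are stand-ins for larger commercial models.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # downloads a small pre-trained model
result = generator(
    "Write a one-sentence product update announcing a new feature:",
    max_new_tokens=40,        # cap the length of the generated continuation
    num_return_sequences=1,
)
print(result[0]["generated_text"])  # text sampled from patterns learned during training
```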
Agentic AI refers to systems designed to autonomously pursue complex goals with minimal human intervention. Built on generative AI and applied differently, these AI agents exhibit autonomous decision-making, planning, and adaptive execution to complete multi-step processes. Agentic AI can interact with other computer systems and solve problems directly, unlike generative AI, which generates step-by-step instructions for humans to follow.
Agentic AI is not a specific AI technology in the way generative AI is; rather, it describes an application of various AI technologies, including generative AI. Most AI agents are built on generative AI (alongside other types of AI, like natural language processing) so that they can communicate in natural language, generate images or text, or help brainstorm.
For an AI agent to act on our behalf and make decisions, it will likely use a variety of technologies, such as machine learning, natural language processing, systems integration, generative AI, deterministic AI, and others.
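To make the distinction concrete, here is a deliberately simplified Python sketch of how an agentic loop differs from a single generative call: the model plans each step, and the surrounding code executes it against integrated systems. The call_llm stub, the tool registry, and the CALL/FINISH convention are illustrative assumptions, not a specific product or framework.

```python
# Minimal sketch of an agentic loop: a generative model plans the next step,
# and the surrounding code executes it against integrated systems.
# call_llm(), the tool names, and the CALL/FINISH protocol are illustrative.

def call_llm(history: str) -> str:
    """Placeholder for a real LLM call; returns the next planned action."""
    if "check_calendar" not in history:
        return "CALL check_calendar 2025-06-03"
    return "FINISH appointment booked for 2025-06-03 at 10:00"

TOOLS = {
    # Stand-in for a real system integration (e.g. a scheduling API).
    "check_calendar": lambda date: f"{date}: 10:00 is free",
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    history = [f"GOAL: {goal}"]
    for _ in range(max_steps):
        action = call_llm("\n".join(history))   # the model plans the next step
        if action.startswith("FINISH"):
            return action                       # the model judges the goal met
        _, tool, arg = action.split(" ", 2)
        history.append(f"{action} -> {TOOLS[tool](arg)}")  # execute and record the result
    return "stopped: step limit reached"

print(run_agent("Book a doctor's appointment next week"))
```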
Agentic AI or AI agents generally take longer to build, train, and deploy successfully because each one is unique to its use case. For example, an AI agent that can make doctors’ appointments will function very differently from an AI agent that analyzes supply chain forecasting data to make real-time decisions to re-route goods.
In each scenario, the AI agent or agentic platform will need to integrate with different data sources and systems and have different guardrails and privacy rules set up to make decisions. Generative AI applications need guardrails and privacy rules too, but they don’t necessarily require the same system-level integrations: because generative AI cannot take action on its own, it doesn’t need to connect directly to platforms and accounts.
AI agents will theoretically have a higher output and impact on society because they are able to do the work for us. However, this also means agentic AI will be much more difficult to build and scale in the coming years.
Experts predict the global market value of agentic AI will jump from $5.1 billion USD in 2024 to $47.1 billion USD in 2030, a more than ninefold increase. In contrast, generative AI has an estimated global market value of $36.06 billion USD in 2024, projected to reach $356.05 billion USD in 2030, nearly a tenfold increase. Generative AI has a much higher global market value because it is faster and easier to scale for more generic applications, especially chatbots.
Both agentic AI and generative AI are projected to grow at roughly the same rate, each increasing roughly ninefold between 2024 and 2030. However, agentic AI applications will be limited to companies with flexible, composable technology architecture and higher data maturity. Widespread acceptance of agentic AI will take more time to develop.
If we compare agentic AI to self-driving cars, the pattern is similar: despite advances in technology, most people still don’t fully trust them, and their success depends on working seamlessly with other systems, from public transportation and ride-sharing services to partnerships with car manufacturers.
Because of this, many businesses are choosing to invest in hyper-specific, internal use cases for agentic AI that don’t rely on external integration or customer buy-in.
For example, in healthcare, AI agents can manage administrative tasks so that medical professionals can focus on patient care. Such an agent combines natural language processing (NLP), machine learning (ML), robotic process automation (RPA), and rule-based decision engines, and integrates with hospital electronic health records (EHRs), using GPT-like transformer models to create concise, structured summaries for progress notes, discharge summaries, and referrals.
In contrast, a generative AI-only healthcare solution would require more human intervention and medical expertise, and may save less time for medical professionals, but would be faster to implement. A purely generative AI solution relies on pre-trained language models that can be fine-tuned on medical data, rather than requiring complex rule-based systems and deterministic AI pipelines. It eliminates the need for structured data integration, hardcoded compliance checks, and real-time cross-referencing with external medical databases, reducing development time. Generative AI can be deployed as a conversational interface with minimal backend changes, while a fully agentic solution requires deep integration with EHRs, pharmacy networks, and regulatory systems.
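To illustrate how light the footprint of a generative-only assistant can be, here is a minimal Python sketch of a drafting aid that wraps a single hosted-model call and leaves every draft for clinician review. The OpenAI client usage, model choice, prompt, and sample note are illustrative assumptions, not a clinical product.

```python
# Minimal sketch of a generative-only drafting aid: one prompt around a hosted
# model, with a clinician reviewing every draft. Nothing is written back to the EHR.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_discharge_summary(raw_notes: str) -> str:
    """Return a draft summary for clinician review; illustrative prompt only."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Summarize clinical notes into a structured discharge summary. "
                        "Flag anything uncertain for clinician review."},
            {"role": "user", "content": raw_notes},
        ],
    )
    return response.choices[0].message.content

print(draft_discharge_summary("Pt admitted with chest pain; troponin negative; discharged on day 2."))
```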
Developing a fully independent AI agent or workflow of AI agents will be more complex and time-consuming than developing a generative AI solution for a variety of reasons, from technology integrations and data privacy to deterministic or hardcoded elements.
For especially complex, time-intensive, and costly workflows, investing in a proprietary AI agent will bring significant value.
Most companies won’t have the money or time to invest in their own proprietary AI agent, at least not right now. Third-party tools can be a practical alternative for automating non-core tasks. These platforms offer ready-made AI agents (like Agentforce) which can quickly handle customer service chats, document processing, or internal knowledge management with only minor customization. While they may lack deep system integration, they still provide efficiency gains without long development cycles. For critical functions, a custom-built agent might be worth the investment, but for standardized, repeatable workflows, a third-party tool could get the job done faster and at a lower cost.
Publicis Sapient’s AI product Sapient Slingshot is an example of a case where building a proprietary AI agent (actually an ecosystem of AI agents) was worth the investment. This platform accelerates enterprise system integration and software development by using AI agents to automate code generation, testing, and deployment, reducing project timelines from months to weeks.
Sapient Slingshot draws on 20+ years of internal code developed by thousands of employees to quickly process millions of lines of code. Software development, application development, and legacy application modernization, all of which follow the software development lifecycle (SDLC), are essential to our business model and the service and value we provide to our clients. These projects typically take many years, and this solution can cut that time roughly in half.
Generative AI alone was not effective for Sapient Slingshot because system integration requires precise execution of APIs, data transformations, and compliance with enterprise IT architectures, which LLMs cannot reliably enforce. Sapient Slingshot needs structured automation for tasks like code generation, testing, and deployment, ensuring accuracy, security, and performance—something generative AI struggles with due to its probabilistic nature and inability to validate outputs against system constraints.
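To illustrate the kind of deterministic guardrail an agentic pipeline can add, here is a simplified Python sketch that accepts generated code only if the project's test suite passes. The generate_patch stub and the pytest command are illustrative assumptions; Slingshot's actual pipeline is proprietary and not shown here.

```python
# Illustrative sketch of a structured check wrapped around generated code:
# run the project's tests and accept the change only if they pass.
import subprocess

def generate_patch(task: str) -> str:
    """Placeholder for an LLM call that proposes a code change for the task."""
    return "def add(a, b):\n    return a + b\n"

def validate_and_apply(task: str, target_file: str) -> bool:
    patch = generate_patch(task)
    with open(target_file, "w") as f:
        f.write(patch)                          # stage the generated code
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    if result.returncode != 0:                  # deterministic gate: tests must pass
        print("rejected:\n", result.stdout)
        return False
    print("accepted: tests passed")
    return True

validate_and_apply("implement add()", "generated_module.py")
```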
A third-party AI development tool also wasn’t a fit because many off-the-shelf code assistant solutions lacked the customization, security, and integration needed for enterprise-scale system orchestration. Legacy modernization and software development are core to Publicis Sapient’s business, and our approach is uniquely tailored to complex enterprise environments. Third-party tools can’t adapt to our proprietary workflows or handle the precision required for code generation, testing, and deployment. Building Slingshot in-house ensures we maintain full control, optimize for our specific needs, and deliver the reliability and scalability that generic solutions can’t provide.
While we are still in the early stages of agentic AI exploration and development, we are already seeing use cases across industries where there are significant opportunities for companies to begin developing AI agents, or utilizing third-party applications of AI agents that can integrate with proprietary data.
At the same time, there are valuable opportunities for generative AI applications across industries that can provide immense value on a faster timeline. These are products where we are already seeing success: clients have generated ROI, and customers and employees alike are gaining value and saving time by using them.
Despite the “independent” nature of agentic AI, both generative and agentic AI (and especially agentic AI) require a “human in the loop.” Human intervention is essential during model development, training, usage, and review.
AI can be better and faster than humans at certain tasks or use cases, but when things go wrong (a generative AI model hallucinates, or an AI agent makes a bad decision), the humans behind the system are accountable, not the technology.
Only when we fully understand our AI solutions, including their benefits and risks as well as the plans and policies in place for a variety of unexpected situations, will we be able to gain their true value. Before we build, we should ask ourselves, our employees, and our customers: What are the pros and cons of this solution? How will this help you? What are your thoughts? What are the risks? What do you need?
If we were to put a child behind the wheel of a car, and they hit a tree, we would not arrest the car. We would talk to the child, and his or her parents. But that does not mean we go so far as to say “No one should ever drive a car.” The same is true of AI technology. If an AI tool tells a person to file their taxes incorrectly and they face a penalty, it is up to the business that created the tool to fairly compensate this person, take responsibility, and fix the situation quickly.
As these technologies evolve, mistakes in their implementation and use are inevitable. However, keeping humans in the loop at every stage helps mitigate negative consequences and allows us to adapt and improve. At the same time, choosing to ignore agentic and generative AI carries its own risks—just with different trade-offs.