Building a Secure, Private Generative AI Assistant for the Enterprise: Lessons from PSChat
As generative AI rapidly transforms the way organizations operate, many enterprises are eager to harness its potential for internal productivity, knowledge sharing, and innovation. Yet, concerns around data privacy, security, and governance often stand in the way of adoption. At Publicis Sapient, we faced these same challenges—and addressed them head-on with the development and deployment of PSChat, our proprietary generative AI assistant built specifically for internal use.
This page offers a deep dive into the architectural decisions, integration strategies, and best practices that shaped PSChat, providing practical guidance for IT leaders, CISOs, and digital transformation officers seeking to implement secure, private generative AI solutions within their own organizations.
Why Build an Internal Generative AI Assistant?
The generative AI landscape is dominated by public tools like ChatGPT and Google Bard, which offer impressive capabilities but raise significant concerns for enterprise use. When employees paste sensitive code, client information, or proprietary data into public AI tools, there is no guarantee of how that data will be stored, used, or incorporated into future model training. High-profile incidents have shown that such data can inadvertently become accessible to others, posing risks to intellectual property and regulatory compliance.
Recognizing these risks, Publicis Sapient set out to build PSChat—a secure, private generative AI assistant designed to:
- Protect company and client data from exposure to third parties
- Accelerate day-to-day work across all critical business functions
- Enable rapid integration of best-of-breed AI tools while maintaining full control over data flows
Architectural Foundations: Security and Privacy by Design
PSChat is built on a foundation of security and privacy, leveraging leading large language models (LLMs) such as GPT-4, but within a custom architecture that ensures data never leaves our control. Key architectural decisions included:
- Customizable LLM Integration: While PSChat uses engines from leading providers like OpenAI, all surrounding components (interfaces, plug-ins, and data pipelines) are either open source or custom-built. This modular approach lets us swap out LLMs as needed, keeping the platform flexible and free of vendor lock-in; the pattern, together with plug-in dispatch and role-based prompting, is sketched in code after this list.
- Data Residency and Control: To address concerns about data storage, we worked closely with our cloud partners to ensure that no sensitive data is stored on external servers. All queries and interactions are processed within our secure environment, with strict controls over data retention and access.
- Plug-in Ecosystem for Accuracy and Compliance: PSChat supports custom plug-ins that enforce business rules and ensure factual accuracy. For example, rather than relying on the LLM to guess answers to technical queries, plug-ins can invoke trusted internal tools or databases, reducing the risk of hallucinations and ensuring compliance with internal standards.
- Role-Based Access and Contextual Responses: The assistant features an "act as" capability, allowing users to specify their role (e.g., software developer, marketing content creator). This ensures that responses are tailored to the user’s context and adhere to relevant data access policies.
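To make these patterns concrete, below is a minimal Python sketch of how a provider-agnostic core, a plug-in dispatch layer, and role-based prompting might fit together. Every name here (LLMProvider, PluginRegistry, ROLE_PROMPTS, the stub engine) is an illustrative assumption, not the actual PSChat codebase.

```python
from abc import ABC, abstractmethod
from typing import Callable, List, Optional, Tuple


class LLMProvider(ABC):
    """One engine (OpenAI, an open-source model, ...) behind a common
    interface, so engines can be swapped without touching the rest."""
    @abstractmethod
    def complete(self, system_prompt: str, user_prompt: str) -> str: ...


class StubProvider(LLMProvider):
    """Placeholder engine so the sketch runs without network access."""
    def __init__(self, name: str):
        self.name = name

    def complete(self, system_prompt: str, user_prompt: str) -> str:
        return f"[{self.name}] response to: {user_prompt!r}"


class PluginRegistry:
    """Queries matching a registered rule are answered by a trusted
    internal tool instead of the raw model, curbing hallucinations."""
    def __init__(self) -> None:
        self._plugins: List[Tuple[Callable[[str], bool],
                                  Callable[[str], str]]] = []

    def register(self, matcher: Callable[[str], bool],
                 handler: Callable[[str], str]) -> None:
        self._plugins.append((matcher, handler))

    def dispatch(self, query: str) -> Optional[str]:
        for matcher, handler in self._plugins:
            if matcher(query):
                return handler(query)
        return None  # no plug-in claimed the query; fall through to the LLM


# "Act as" roles shape the system prompt so answers respect the
# data-access policy tied to the user's declared role.
ROLE_PROMPTS = {
    "software_developer": "You assist a software developer. "
                          "Never echo client names or credentials.",
    "marketing_creator": "You assist a marketing content creator. "
                         "Use approved brand voice only.",
}


def answer(query: str, role: str, provider: LLMProvider,
           plugins: PluginRegistry) -> str:
    plugin_answer = plugins.dispatch(query)  # trusted tools win over the model
    if plugin_answer is not None:
        return plugin_answer
    system_prompt = ROLE_PROMPTS.get(role, "You are a helpful assistant.")
    return provider.complete(system_prompt, query)


if __name__ == "__main__":
    plugins = PluginRegistry()
    # Version questions go to an internal inventory service (stubbed here)
    # rather than letting the model guess.
    plugins.register(lambda q: "which version" in q.lower(),
                     lambda q: "Per the internal inventory service: v2.3.1")
    engine = StubProvider("engine-a")
    print(answer("Which version of the billing API is live?",
                 "software_developer", engine, plugins))
    print(answer("Draft a tagline for the spring campaign",
                 "marketing_creator", engine, plugins))
```

The ordering is the important design choice here: the plug-in registry is consulted before any model call, so a query that a trusted internal tool can answer never reaches the LLM at all.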
Integration with Enterprise Systems
A generative AI assistant is only as valuable as its ability to connect with the systems and workflows employees use every day. PSChat was designed for seamless integration, including:
- Support for Multiple LLMs: Users can select from different language models and compare outputs, enabling teams to leverage the strengths of various providers while maintaining a consistent security posture (a simple comparison harness is sketched after this list).
- Custom Interface and Sharing Features: The PSChat interface is tailored for Publicis Sapient employees, with features that allow users to share useful prompts and interactions across the organization—without exposing sensitive data externally.
- Continuous Learning and Feedback Loops: Usage analytics and feedback mechanisms help us understand how employees are using the tool, identify new use cases, and prioritize the development of additional plug-ins and integrations.
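As referenced above, here is one way a side-by-side model comparison and a privacy-aware feedback event might look. The model labels, stub completion functions, and event fields are assumptions for illustration; real integrations would call each provider's API from inside the secure environment.

```python
import json
import time
from typing import Callable, Dict

# Each entry maps a model label to a completion function. In production
# these would call the respective provider APIs; stubs keep the sketch
# standalone. Labels and fields are illustrative assumptions.
MODELS: Dict[str, Callable[[str], str]] = {
    "model-a": lambda p: f"model-a answer to {p!r}",
    "model-b": lambda p: f"model-b answer to {p!r}",
}


def compare(prompt: str) -> Dict[str, str]:
    """Send one prompt to every configured model and collect the outputs
    side by side, so teams can judge which engine suits the task."""
    return {label: complete(prompt) for label, complete in MODELS.items()}


def record_feedback(prompt: str, chosen_model: str, rating: int) -> str:
    """Emit an anonymized usage event for the analytics pipeline.
    Only metadata is kept; the raw prompt never enters the event."""
    event = {
        "ts": time.time(),
        "prompt_chars": len(prompt),   # size only, not content
        "chosen_model": chosen_model,
        "rating": rating,              # e.g. thumbs up/down as 1/0
    }
    return json.dumps(event)


if __name__ == "__main__":
    outputs = compare("Summarize our QA checklist")
    for label, text in outputs.items():
        print(f"{label}: {text}")
    print(record_feedback("Summarize our QA checklist", "model-b", 1))
```

Keeping only metadata (prompt length, chosen model, rating) in the analytics event is one way to square usage analysis with the data-handling policies described in the next section.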
Data Governance and Risk Mitigation
Security and privacy are not just technical challenges—they require robust governance and ongoing vigilance. Our approach to data governance with PSChat includes:
- Strict Data Handling Policies: All data entered into PSChat is governed by internal policies that define what can be shared, how long data is retained, and who has access to logs and analytics (a retention-and-access sketch follows this list).
- No Model Retraining on Sensitive Data: Unlike public AI tools, PSChat does not use internal queries to retrain the underlying LLMs, eliminating the risk of sensitive information leaking into future model versions.
- Collaboration with IT and Security Teams: The development of PSChat was a cross-functional effort, involving close collaboration between engineering, security, and compliance teams to ensure alignment with enterprise risk management frameworks.
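Governance rules like these lend themselves to small, auditable enforcement jobs. The sketch below shows a hypothetical retention purge and role-gated log access; the 30-day window, role names, and field layout are assumptions, not PSChat's actual policy values.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import List, Optional

RETENTION = timedelta(days=30)        # assumed retention window, not actual policy
LOG_READERS = {"security_analyst"}    # roles permitted to read interaction logs


@dataclass
class LogEntry:
    timestamp: datetime
    user_id: str
    redacted_query: str  # stored only after upstream PII redaction


def purge_expired(entries: List[LogEntry],
                  now: Optional[datetime] = None) -> List[LogEntry]:
    """Drop entries older than the retention window."""
    now = now or datetime.now(timezone.utc)
    return [e for e in entries if now - e.timestamp <= RETENTION]


def read_logs(entries: List[LogEntry], role: str) -> List[LogEntry]:
    """Gate log access by role, per the data-handling policy."""
    if role not in LOG_READERS:
        raise PermissionError(f"role {role!r} may not read interaction logs")
    return entries


if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    logs = [
        LogEntry(now - timedelta(days=5), "u1", "[REDACTED] deployment question"),
        LogEntry(now - timedelta(days=45), "u2", "[REDACTED] stale query"),
    ]
    logs = purge_expired(logs, now)
    print(len(logs), "entries within retention")   # -> 1
    print(read_logs(logs, "security_analyst"))     # permitted role
```

Because rules like these live in plain code, security and compliance teams can review and version them alongside the rest of the platform.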
Real-World Impact and Lessons Learned
Since its launch, PSChat has seen rapid adoption across Publicis Sapient, with tens of thousands of queries submitted in the first weeks alone. Employees across strategy, product, engineering, experience, and data & AI functions have leveraged the assistant to accelerate research, automate routine tasks, and break down data silos.
Key lessons from our journey include:
- Security and privacy must be embedded from day one. Retrofitting controls after deployment is far more difficult and less effective.
- Modularity enables agility. By designing PSChat as a composable platform, we can rapidly integrate new AI capabilities and adapt to evolving security requirements.
- User feedback drives value. Continuous analysis of usage patterns and proactive engagement with employees help us identify new opportunities and address emerging risks.
Best Practices for Secure Enterprise AI Assistants
For organizations considering their own internal generative AI solutions, our experience with PSChat highlights several best practices:
- Start with a clear vision and risk assessment. Define your goals, identify sensitive data flows, and engage security stakeholders early.
- Leverage modular, open architectures. Avoid vendor lock-in and enable rapid adaptation to new technologies and threats.
- Implement strict data governance. Control where data is stored, who can access it, and how it is used—especially for model retraining.
- Integrate with existing systems. Ensure your AI assistant can connect to internal tools and databases securely, maximizing its utility while minimizing risk.
- Foster a culture of responsible AI use. Educate employees on best practices, monitor usage, and continuously refine policies and controls.
The Road Ahead
The development of PSChat is an ongoing journey. As generative AI continues to evolve, so too will our approach to security, privacy, and data governance. By sharing our learnings, we hope to empower other enterprises to unlock the value of generative AI—securely, responsibly, and at scale.
Ready to explore how a secure, private generative AI assistant can transform your organization? Connect with our experts to start your journey.