
Generative AI is scaling across organizations at a pace few anticipated, and far faster than the last major enterprise technology shift. The rise of SaaS unfolded over nearly two decades: emerging around 2000, accelerating between 2011 and 2016, and becoming the dominant software model by the late 2010s. That extended timeline gave organizations room to develop governance frameworks, security controls, and operational standards as adoption matured.
AI has followed a very different trajectory. Generative AI moved from early experimentation to widespread enterprise use in under two years. The Big Three generative AI (GenAI) apps (ChatGPT, Microsoft Copilot, and Google Gemini) now run on personal smartphones, home and BYOD laptops and tablets, browser-native assistants, and AI-enabled notetaking and task applications. Employees adopted these tools long before policies, training, or approved platforms could be established.
Where SaaS changed how work was stored and accessed, AI now changes how work is created and accomplished. That shift makes today’s governance gap far more consequential, touching strategy, operations, and decision-making in ways the SaaS era never approached.
1. Adoption Is Rapid, Governance Is Not
Industry research consistently shows that a majority of organizations now report using AI, and regular use of generative AI has increased sharply in the past year. Yet fewer than half have implemented formal AI governance policies, and even fewer have operational controls to enforce them.
The dynamic is familiar: tools spread quickly, employees integrate them into daily workflows, and leadership attempts to retrofit governance after usage patterns are already established. This mirrors the early SaaS era, but the acceleration and operational impact of AI make the stakes significantly higher.
2. Shadow AI: The Successor to Shadow IT
Shadow IT emerged as employees adopted cloud applications outside traditional approval processes. Today’s equivalent is shadow AI, and its growth is far faster and far more difficult to detect.
Employees routinely use:
- Consumer AI applications
- Browser extensions and plug-ins
- Productivity platforms with embedded AI features
- Mobile AI-enabled apps
- Unsupervised writing, coding, analysis, and research assistants
Surveys consistently show that many employees use AI tools not approved by their employer. Many also report entering internal content such as strategy drafts, operational narratives, presentations, or code into external AI systems.
Shadow AI is no longer an edge case. It is a systemic pattern across all industry verticals.
3. AI Use on Personal Devices: A Governance Blind Spot
The governance challenge becomes far more complex once AI activity leaves corporate-managed endpoints. The Big Three GenAI apps operate across personal smartphones, tablets, home laptops, browser-based assistants, and a variety of AI-enabled productivity tools. Employees can reach these systems from virtually any personal device, placing much of their AI usage outside formal oversight mechanisms.
A significant portion of this activity occurs entirely outside the organization’s technology estate, leaving traditional governance and monitoring approaches ineffective. In hybrid and remote environments, employees often move internal content through personal AI apps, generate drafts, and paste the results back into enterprise systems with no traceable record of prompts, decisions, or underlying data.
Identity and endpoint controls provide only partial protection. In most organizations, AI usage on personal devices remains largely unmonitored and unmanaged. This is now the most significant governance gap since the emergence of BYOD, and the challenge is amplified by the fact that AI creates new business content rather than simply handling existing files.
4. Why Governance Is Struggling to Keep Pace
- Policies arrive after behaviors are entrenched
Employees adopt AI tools to solve immediate workflow needs, and governance follows only after usage is already widespread.
- Policies tend to be high-level and difficult to enforce
Guidance such as “do not input sensitive data” is necessary, but it cannot stand alone without supporting controls and training.
- AI is embedded across the enterprise technology stack
AI capabilities now span the broader technology estate, covering core business systems, productivity platforms, collaboration tools, and departmental applications. CRM systems, office suites, HR platforms, and collaboration tools increasingly include generative AI by default. Even organizations attempting to limit AI adoption receive it through standard updates.
- The visibility layer is still emerging
During the SaaS era, Cloud Access Security Brokers eventually provided unified oversight. Equivalent monitoring and governance tools for AI remain early-stage, fragmented, and inconsistently deployed.
5. Why the Stakes Are Higher Than in the SaaS Era
A. Data exposure is faster and less visible
SaaS risks centered on document storage and sharing. AI exposure often originates from prompts that contain strategic details, financial logic, or code fragments, none of which exist as traditional files.
B. AI-generated output becomes enterprise work product
Generative AI now produces emails, analysis, board materials, planning documents, and code. This output requires clear standards for:
- Retention
- Versioning
- Auditability
- Discovery and regulatory compliance
Most organizations have not yet integrated AI-generated content into their records-management frameworks.
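One way to make these standards concrete is to attach provenance metadata whenever AI output enters the enterprise record. The sketch below is illustrative only: the field names (`tool`, `prompt_hash`, `retention_class`) and the schema itself are assumptions, not an established records-management standard.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AIGeneratedRecord:
    """Hypothetical metadata envelope for one piece of AI-generated work product."""
    content_id: str       # internal record identifier
    tool: str             # e.g. "ChatGPT", "Microsoft Copilot"
    author: str           # employee who ran the prompt
    created_at: str       # ISO 8601 timestamp
    prompt_hash: str      # hash of the prompt, so the record is auditable
                          # without storing sensitive prompt text
    retention_class: str  # maps into the existing records schedule
    version: int = 1
    audit_trail: list = field(default_factory=list)

    def new_version(self, note: str) -> None:
        """Bump the version and log the change so later edits stay auditable."""
        self.version += 1
        self.audit_trail.append(
            {"at": datetime.now(timezone.utc).isoformat(), "note": note}
        )

rec = AIGeneratedRecord(
    content_id="doc-0001",
    tool="ChatGPT",
    author="jdoe",
    created_at=datetime.now(timezone.utc).isoformat(),
    prompt_hash="sha256:…",
    retention_class="business-correspondence-7y",
)
rec.new_version("edited before distribution")
print(asdict(rec)["version"])  # 2
```

A record like this gives retention, versioning, and discovery processes something concrete to operate on, rather than treating AI output as untracked free text.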
C. Regulators across jurisdictions are moving quickly, not just in Europe
Regulation is accelerating globally. The European Union’s AI Act is the first comprehensive horizontal legal framework for AI. The United States is advancing its own approach through the White House Executive Order on AI safety, NIST’s AI Risk Management Framework, sector-specific directives, and an active policy dialogue shaped in part by private-sector leaders, including David Sacks.
Regulatory expectations are forming through parallel developments rather than converging on a single model, and global organizations must be prepared for governance frameworks that vary by jurisdiction.
6. What Organizations Need To Do Now
1. Shift from policy creation to operational governance
Governance must be embedded into workflows, training, approvals, and accountability structures.
2. Treat AI output as electronically stored information
Organizations must know where AI-generated content lives, how long it is retained, and which retention policies apply.
3. Consolidate and standardize AI tools
A smaller set of sanctioned tools increases visibility, reduces risk, and improves compliance.
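A sanctioned-tool list only helps if observed usage is checked against it. The minimal sketch below shows the idea; the tool names and log format are assumptions for illustration, not the API of any real monitoring product.

```python
# Hypothetical allowlist check: flag AI tool usage that falls outside the
# organization's sanctioned set. All names and the log shape are illustrative.
SANCTIONED = {"microsoft copilot", "google gemini"}

usage_log = [
    {"user": "jdoe", "tool": "Microsoft Copilot"},
    {"user": "asmith", "tool": "RandomAIWriter"},
]

def flag_unsanctioned(log):
    """Return log entries whose tool is not on the approved list."""
    return [e for e in log if e["tool"].lower() not in SANCTIONED]

for entry in flag_unsanctioned(usage_log):
    print(f"unsanctioned: {entry['user']} -> {entry['tool']}")
```

Even a simple check like this turns a policy statement ("use approved tools") into something visibility tooling can enforce and report on.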
The Path Forward
AI adoption resembles the early surge of cloud-based applications, but with greater speed and far higher stakes. Employees are using AI across personal devices, unmonitored environments, and embedded capabilities that did not exist on enterprise roadmaps a year ago. Governance has not kept pace.
Future leaders in this space will not stand out because they adopted AI first, but because they put the proper guardrails around it. AI is no longer experimental, and it now sits at the center of how work gets done. Treating it with the same discipline as any other core operational capability is no longer optional.



