
Your AI Agents Are a Legal Liability. Here’s What to Do About It.

By Camilo Artiga-Purcell, General Counsel, Kiteworks

Every enterprise is rushing to deploy AI agents. Customer service bots that resolve tickets autonomously. Coding assistants that access internal repositories. Research tools that pull from databases full of proprietary data. The productivity gains are real. But so is the legal exposure, and most organisations haven’t come close to reckoning with it. 

AI agents are fundamentally different from the chatbots and search tools that preceded them. They don’t just generate text. They act. They browse systems, retrieve files, execute transactions, and make decisions. Often with the same credentials and access privileges as the employees who deployed them. That autonomy is precisely what makes them useful. It is also what makes them dangerous from a legal and regulatory standpoint. 

Expanding attack surface 

Consider the mechanics of a typical AI agent deployment. An organisation connects an agent to its internal file systems, email, CRM, or cloud storage. The agent inherits existing permissions, often far broader than necessary, and begins scanning, indexing, and surfacing information at machine speed. Research from IBM found that 97% of organisations that experienced AI-related breaches lacked proper access controls, and that breaches involving unauthorised AI tools cost an average of $4.63 million. 

The Samsung incident in 2023 was an early warning. Engineers at Samsung’s semiconductor division pasted proprietary source code and internal meeting notes into ChatGPT to help debug problems and generate minutes. That data became part of OpenAI’s training pipeline, permanently outside Samsung’s control. The company eventually banned external AI tools entirely. But that was a simpler era. Today’s AI agents don’t wait for employees to copy and paste. They go looking for data on their own. 

Where the legal risks concentrate 

The legal landscape is not theoretical. Enforcement is accelerating across multiple fronts. In December 2024, the Italian Data Protection Authority fined OpenAI €15 million for processing personal data without a clear legal basis, failing to notify the authority of a breach, violating transparency obligations toward users, and lacking adequate age verification mechanisms to protect minors. The EU AI Act’s prohibitions on unacceptable-risk AI systems became fully enforceable in 2025, with penalties for prohibited practices reaching up to €35 million or 7% of global annual turnover. In the U.S., NIST has issued a formal request for information on AI agent security and published a draft Cybersecurity Framework Profile for AI. These are clear signals that regulatory expectations are crystallising fast. 

The exposure breaks down across several categories. First, data protection laws like GDPR and the CCPA require a legal basis for processing personal data, strict data minimisation, and defined retention periods. An AI agent that crawls an organisation’s file systems and accesses personal data beyond what’s needed for a specific task violates these principles. Second, trade secret law demands “reasonable measures” to protect proprietary information. An AI agent that uploads confidential formulas, source code, or customer lists to an external AI service may destroy trade secret protection permanently. There’s no getting it back once it enters a third-party model’s training data. 

Third, sector-specific regulations compound the risk. HIPAA requires minimum necessary access to protected health information. CMMC 2.0 mandates specific controls for handling Controlled Unclassified Information in defence contracting. SOX demands reliable internal controls over financial data. In each case, an AI agent with broad, unmonitored access can create violations that trigger substantial penalties and, in some cases, personal liability for executives. 

And then there’s the negligence exposure. Courts are increasingly willing to treat inadequate AI security as a breach of the duty of care. When an organisation deploys AI agents without proper controls and a data breach follows, the absence of safeguards becomes evidence of negligence. As the IAPP has noted, AI agents don’t recognise the boundaries of their competence; they execute high-stakes decisions with the same confidence as routine tasks. 

Control is the answer 

The instinct in many organisations is to ban AI agents outright or to rely on employee training and acceptable use policies. Neither approach works at scale. Bans sacrifice competitive advantage. Policies depend on human compliance, which, as Samsung demonstrated, fails quickly under real-world pressure. 

The answer is control. Granular, enforceable, technical control over what data AI agents can access, how they can use it, and where it goes. This is not about slowing down innovation. It’s about building the infrastructure that makes innovation sustainable and legally defensible. 

What does that look like in practice? Start with the principle of least privilege applied specifically to AI agents. Every agent should have access only to the data it needs for a defined purpose, and nothing more. This satisfies GDPR’s data minimisation requirement, HIPAA’s minimum necessary standard, and CMMC’s access control mandates simultaneously. Purpose-based permissions ensure that an AI agent deployed for customer service cannot wander into R&D files or financial records. 
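
In practice, purpose-based permissions amount to a deny-by-default rule evaluated before every agent data access. The sketch below is purely illustrative: the agent names, purposes, and classification labels are assumptions for the example, not a reference to any particular product or standard API.

```python
# Illustrative sketch of purpose-based, least-privilege access checks for AI agents.
# All names (agents, purposes, data classes) are hypothetical examples.

from dataclasses import dataclass

@dataclass(frozen=True)
class AccessRequest:
    agent_id: str       # e.g. "customer-service-agent"
    purpose: str        # the task the agent was deployed for
    data_class: str     # classification label on the requested resource

# Deny-by-default policy: each agent may touch only the data classes
# needed for its declared purpose, and nothing else.
AGENT_POLICIES = {
    "customer-service-agent": {
        "purpose": "resolve_support_tickets",
        "allowed_data": {"support_tickets", "product_docs"},
    },
    "coding-assistant": {
        "purpose": "assist_internal_development",
        "allowed_data": {"approved_repositories"},
    },
}

def is_allowed(req: AccessRequest) -> bool:
    policy = AGENT_POLICIES.get(req.agent_id)
    if policy is None:
        return False                        # unknown agent: deny
    if req.purpose != policy["purpose"]:
        return False                        # purpose drift: deny
    return req.data_class in policy["allowed_data"]

# A customer-service agent asking for R&D files is refused outright.
print(is_allowed(AccessRequest("customer-service-agent",
                               "resolve_support_tickets", "rnd_files")))        # False
print(is_allowed(AccessRequest("customer-service-agent",
                               "resolve_support_tickets", "support_tickets")))  # True
```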

Next, establish hard boundaries around sensitive data. Data loss prevention capabilities should prevent AI agents from transmitting trade secrets, pre-patent inventions, protected health information, or Controlled Unclassified Information to external AI services. This is especially critical because, as the Future of Privacy Forum has observed, AI agents are most valuable when they have access to the most sensitive data – the very data that carries the highest legal risk. 
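
One way to picture such a boundary is an outbound gate that refuses to forward anything carrying a restricted classification to an external AI service. The sketch below is a simplified assumption of how such a gate might behave; the labels are hypothetical, and real DLP deployments rely on automated content inspection and classification rather than a hand-maintained list.

```python
# Illustrative outbound gate: block restricted classifications from leaving
# the organisation for an external AI service. Labels and structure are
# hypothetical; real DLP relies on automated content classification.

BLOCKED_CLASSIFICATIONS = {
    "trade_secret",
    "pre_patent_invention",
    "protected_health_information",
    "controlled_unclassified_information",
}

class OutboundBlocked(Exception):
    """Raised when an AI agent tries to send restricted data externally."""

def send_to_external_ai(payload: str, classifications: set[str]) -> str:
    blocked = classifications & BLOCKED_CLASSIFICATIONS
    if blocked:
        raise OutboundBlocked(f"transmission refused, restricted labels: {sorted(blocked)}")
    # Only unrestricted content ever reaches the external service.
    return f"sent {len(payload)} characters to external AI service"

print(send_to_external_ai("How do I reset a customer password?", {"public"}))
try:
    send_to_external_ai("proprietary source code ...", {"trade_secret"})
except OutboundBlocked as err:
    print(err)
```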

Comprehensive audit trails are equally important. When regulators investigate, when litigation arises, or when breach notification obligations are triggered, the organisation needs a complete, immutable record of exactly what data each AI agent accessed, when, and for what purpose. Under GDPR’s accountability principle, the ability to demonstrate compliance is itself a legal requirement. Under trade secret law, the ability to prove “reasonable measures” can determine whether intellectual property retains its legal protection. 
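
A tamper-evident audit trail can be as simple as a hash-chained, append-only log in which each entry records the agent, the resource touched, the time, and the purpose. The following sketch illustrates that idea under assumed field names; it is not a compliance-certified implementation.

```python
# Illustrative hash-chained audit log for AI agent data access.
# Field names and structure are hypothetical.

import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64            # genesis hash

    def record(self, agent_id: str, resource: str, purpose: str) -> dict:
        entry = {
            "agent_id": agent_id,
            "resource": resource,
            "purpose": purpose,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prev_hash": self._last_hash,
        }
        serialized = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(serialized).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; altering any past entry breaks every later hash."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.record("customer-service-agent", "ticket-4821", "resolve_support_tickets")
print(log.verify())   # True; editing any recorded entry would make this False
```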

Finally, organisations need the ability to intervene in real time. Continuous monitoring of AI agent behaviour, anomaly detection, and automated suspension of agents acting outside their defined scope are not optional features but the operational expression of the human oversight requirement increasingly expected by regulators worldwide. 
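
The operational version of that oversight can be a watchdog that tracks each agent against its declared scope and suspends it automatically when it drifts. The sketch below assumes a deliberately simple threshold rule (repeated out-of-scope access attempts) purely for illustration; production systems would draw on far richer behavioural signals.

```python
# Illustrative runtime watchdog: suspend an AI agent that repeatedly
# attempts access outside its declared scope. The threshold and scope
# model are hypothetical simplifications.

class AgentWatchdog:
    def __init__(self, allowed_resources: set[str], max_violations: int = 3):
        self.allowed_resources = allowed_resources
        self.max_violations = max_violations
        self.violations = 0
        self.suspended = False

    def observe(self, resource: str) -> str:
        if self.suspended:
            return "suspended: access refused"
        if resource not in self.allowed_resources:
            self.violations += 1
            if self.violations >= self.max_violations:
                self.suspended = True      # automated suspension pending human review
                return "anomaly threshold reached: agent suspended"
            return "out-of-scope attempt logged"
        return "access within scope"

watchdog = AgentWatchdog(allowed_resources={"support_tickets", "product_docs"})
for resource in ["support_tickets", "finance_ledger", "rnd_files", "hr_records"]:
    print(resource, "->", watchdog.observe(resource))
```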

The window is closing 

The regulatory trajectory is clear. NIST is actively building a threat and mitigation taxonomy specifically for AI agents. The EU AI Act is being enforced. State privacy laws in the U.S. are multiplying. Courts are entertaining novel theories of liability tied to AI data practices. Organisations that deploy AI agents without robust data controls are not just accepting business risk; they are building a litigation record. 

The organisations that will thrive in this environment are not the ones that move the fastest. They are the ones that move with clear visibility into what their AI agents are doing, enforceable limits on what those agents can access, and defensible evidence that they took their legal obligations seriously. The technology to do this exists today. The question is whether your organisation will implement it before the first enforcement action, the first breach, or the first lawsuit forces the issue. 
