
- Autonomous agents pose new risks that are harder to manage because these systems operate with less human oversight and lack adequate behavioral monitoring
- AI-related incidents rose 21% in the past year, underscoring that these risks are not theoretical
- BCG presents a four-part framework to manage the new era of autonomous system risks
BOSTON, Dec. 17, 2025 /PRNewswire/ — As autonomous AI agents begin to operate across critical business functions, organizations face a sharp inflection point. These systems are no longer passive tools. They are active decision-makers, capable of observing, planning, acting, and learning at scale. With these autonomous capabilities come heightened and unfamiliar risks.
These are among the findings of a new publication from Boston Consulting Group (BCG), What Happens When AI Stops Asking Permission?, released today. Drawing on real-world experience, global executive data, and recent incident analysis, the article makes the case for urgent action: autonomous AI agents require a fundamentally different approach to risk and quality management.
“Agentic AI changes the game for AI risk and quality management,” said Anne Kleppe, a BCG managing director and partner, global responsible AI lead, and coauthor of the article. “Autonomous agents are powerful, but they can drift from the intended business outcomes. The challenge is keeping them aligned to strategy and values while still letting them operate with speed and autonomy.”
Why This Matters Now
According to the AI Incidents Database, reported AI-related incidents rose by 21% from 2024 to 2025. This demonstrates that AI risks are not theoretical; they are already manifesting in the real world, creating financial, regulatory, and reputational risks for companies. One example cited in the article involves an expense report AI agent that, when unable to interpret expense receipts, fabricated plausible entries, including fake restaurant names, to meet its goal.
Across industries, agents create new risks that could cause significant failures for companies:
- Healthcare: Agents may favor simpler patient cases to boost throughput, jeopardizing urgent care
- Banking: Automated service agents struggle with complex exceptions, leading to stalled resolution
- Insurance: Synchronized reactions to market signals may result in pricing swings and regulatory scrutiny
- Manufacturing: Conflicting optimizations across agents can cascade into systemic production delays
These failures are not bugs; they are inherent features of systems that autonomously observe, plan, execute, and learn. Issues can compound quickly because agents operate with little to no direct human oversight, necessitating new approaches to real-time behavior monitoring.
Managing these risks demands a shift in approach. A recent BCG-MIT SMR survey found that while only 10% of companies currently allow AI agents to make decisions, that number is expected to rise to 35% within three years, and 69% of executives agree that agentic AI requires fundamentally new management approaches.
A Risk Framework Built for Agentic AI
The first line of defense is to ask: do we need an AI agent? In some cases, most of the benefits can be captured with other AI technologies whose risks are more easily managed. When agents are the right choice, BCG proposes a four-part model to guide CROs, CTOs, COOs, and other executive leaders:
- Construct an agent-specific risk taxonomy: Mapping risks across technical, operational, and user dimensions
- Simulate real-world conditions before deployment: Using agent testbeds that simulate the real world to surface failure modes early
- Implement real-time behavior monitoring: Shifting from internal logic review to external performance tracking
- Design in resilience and escalation protocols: Ensuring systems fail safely, with layered human oversight and processes that preserve business continuity
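To make the third and fourth points concrete, the kind of external behavior monitoring and escalation the framework describes can be sketched in code. The sketch below is a hypothetical illustration, not part of BCG's publication: the `BehaviorMonitor` class, its thresholds, and the entity check (which mirrors the fabricated-expense example above) are all assumptions. It tracks an agent's observable actions against predefined bounds and escalates to a human rather than inspecting the agent's internal logic.

```python
from dataclasses import dataclass, field

@dataclass
class BehaviorMonitor:
    """Hypothetical external monitor: reviews observable agent behavior per task."""
    max_actions_per_task: int = 10   # bound on agent activity for a single task
    escalations: list = field(default_factory=list)

    def review(self, task_id: str, actions: list[str],
               known_entities: set[str], referenced_entities: set[str]) -> str:
        """Return 'ok' or 'escalate' for one completed task."""
        # Excessive activity can signal an agent drifting from its goal
        if len(actions) > self.max_actions_per_task:
            self.escalations.append((task_id, "excessive activity"))
            return "escalate"
        # Entities the agent referenced but that appear in no source data,
        # e.g., restaurant names invented to satisfy an expense-report goal
        novel = referenced_entities - known_entities
        if novel:
            self.escalations.append((task_id, f"unrecognized entities: {sorted(novel)}"))
            return "escalate"
        return "ok"

monitor = BehaviorMonitor()
# Normal task: every merchant the agent referenced appears in a receipt
print(monitor.review("T1", ["parse", "submit"], {"Acme Cafe"}, {"Acme Cafe"}))
# Suspicious task: the agent referenced a merchant found in no receipt
print(monitor.review("T2", ["parse", "submit"], {"Acme Cafe"}, {"Blue Fork Bistro"}))
```

The design choice worth noting is that the monitor judges only external outputs (actions taken, entities referenced), which matches the framework's shift from internal logic review to external performance tracking.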
“This is not just an AI problem, it’s a business continuity challenge,” said Steven Mills, a BCG managing director and partner, chief AI ethics officer, and coauthor of the article. “Agents must be deployed in a way that is aligned with your risk appetite, with controls embedded from day one. This is how we unlock the full value of agents while managing the new types of risks.”
Download the publication here:
https://www.bcg.com/publications/2025/what-happens-ai-stops-asking-permission
Media Contact:
Eric Gregoire
+1 617 850 3783
[email protected]
About Boston Consulting Group
Boston Consulting Group partners with leaders in business and society to tackle their most important challenges and capture their greatest opportunities. BCG was the pioneer in business strategy when it was founded in 1963. Today, we work closely with clients to embrace a transformational approach aimed at benefiting all stakeholders—empowering organizations to grow, build sustainable competitive advantage, and drive positive societal impact.
Our diverse, global teams bring deep industry and functional expertise and a range of perspectives that question the status quo and spark change. BCG delivers solutions through leading-edge management consulting, technology and design, and corporate and digital ventures. We work in a uniquely collaborative model across the firm and throughout all levels of the client organization, fueled by the goal of helping our clients thrive and enabling them to make the world a better place.
View original content to download multimedia: https://www.prnewswire.com/news-releases/when-ai-acts-alone-what-organizations-must-know-about-managing-the-next-era-of-risk-302644064.html
SOURCE Boston Consulting Group (BCG)


