
Like Apple’s infamous decision to ‘gift’ U2’s Songs of Innocence album to all iTunes users, Microsoft’s move toward automatically deploying Copilot across eligible environments was a disaster waiting to happen.
By introducing automatic installation on certain devices, the company risked extending that deployment without organisations fully understanding or controlling how Copilot would interact with their Microsoft 365 data and permissions.
Even before Copilot was automatically deployed, most businesses didn’t have visibility into who could access what across their tenant. In many cases, that access had built up over time, with permissions becoming difficult to track or audit at scale.
Copilot doesn’t change those permissions. But it does change what can be done with them. Organisations must now consider what Copilot can access and surface across the tenant based on existing permissions, and put guardrails in place to manage that risk.
Who let that AI in here?
AI rogue access is no small matter. Our research shows that 51% of organisations have already had to reverse AI-driven changes in their Microsoft 365 tenant because of security or governance concerns. That reflects how quickly these tools can introduce changes across environments that teams don’t fully see, understand, or control. The idea that businesses should have a say in how AI is introduced into their Microsoft 365 environment is hardly revolutionary, and many admins felt that automatic deployment took that control away.
The backlash was as predictable as it was forceful, and it came not just from businesses but from consumers too. Windows 11 users recoiled at Copilot’s increasingly visible presence across Microsoft apps and the operating system itself. As a result, the company has indicated it will scale back aspects of its AI’s presence across Microsoft 365 and Windows 11, although exactly what that means, and how much say businesses will have in its reintegration into their systems, remain to be seen.
Danger zone
Copilot hasn’t created the risks around poor visibility, over-permissioned access, and weak governance, but it has exposed them. And in doing so, it has revealed just how unprepared most Microsoft 365 environments really are. Microsoft’s decision to hit pause on aspects of its Copilot rollout highlights a broader issue.
Providers are rapidly embedding AI agents into their offerings, often before organisations are ready. Rather than simplifying Microsoft 365, these tools are accelerating complexity, making environments faster to operate, but harder to control. In an agent-driven model, organisations are no longer just managing users, they are managing non-human actors that operate continuously, without pause or context.
Copilot is a clear example of this shift. It has its hands in emails, files and data across the entire tenant, so where permissions and oversharing are poorly governed, its ability to surface confidential content that users are technically permitted to access, including Outlook email and documents, is a real concern.
That risk is worsened by the generally poor state of tenant security on Microsoft 365. Nearly half of large organisations (45%) have experienced a security or compliance incident caused by a Microsoft 365 misconfiguration in the past 12 months. That’s not surprising given that the same proportion of organisations globally say they don’t have full visibility and control over their Microsoft 365 environment.
In fact, Microsoft 365 has expanded so rapidly in scope and complexity that one in five organisations now say it is almost impossible to manage and secure at an enterprise scale. Likewise, a whopping eight in 10 (82%) IT leaders describe managing Microsoft 365 as a severe operational burden.
As a result, the basics are being left by the wayside. Incredibly, 90% of organisations struggle to enforce even basic security controls such as password policies and failed-login monitoring. Almost nine in ten (87%) organisations have at least some administrators operating without multi-factor authentication, and across tenants MFA is not enabled for 28% of administrators – meaning more than a quarter of the most powerful accounts on a given tenant are one slip away from handing over the keys to the kingdom.
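The kind of audit this implies is straightforward in principle. Here is a minimal sketch; the account records and field names are hypothetical stand-ins for data that would, in practice, come from an identity report such as Microsoft Graph’s authentication-methods registration details:

```python
# Minimal sketch: flag privileged accounts operating without MFA.
# The record shape below is illustrative, not a real tenant export.

def admins_without_mfa(accounts):
    """Return privileged accounts with no MFA method registered."""
    return [
        a["user"]
        for a in accounts
        if a.get("is_admin") and not a.get("mfa_registered")
    ]

accounts = [
    {"user": "global-admin@contoso.example", "is_admin": True,  "mfa_registered": False},
    {"user": "helpdesk@contoso.example",     "is_admin": True,  "mfa_registered": True},
    {"user": "analyst@contoso.example",      "is_admin": False, "mfa_registered": False},
]

print(admins_without_mfa(accounts))  # only the unprotected admin surfaces
```

Running a check like this on a schedule, rather than once, is what turns a one-off audit into an enforced control.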
Harden that tenant
Businesses whose critical systems are built on Microsoft 365 must take steps to harden their tenant: they need to be able to prepare for, withstand, and quickly recover from tenant-level incidents.
That starts with reducing the impact of compromised privileged accounts by limiting excessive administrative access. Rather than allowing admins to manage the whole tenant, it’s critical to be able to partition the user base into teams or regions and assign admin privileges on a need-to-know basis – so that if the US finance admin is compromised, the EMEA sales team doesn’t get taken down with them.
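The principle behind that partitioning can be sketched in a few lines. The names and scopes below are illustrative; in Microsoft 365 this idea is realised through mechanisms such as Entra ID administrative units and scoped role assignments:

```python
# Sketch of need-to-know admin scoping: each admin is granted rights
# over one partition (region/team) rather than the whole tenant.
# Scope names are hypothetical examples.

ADMIN_SCOPES = {
    "us-finance-admin": {"US/Finance"},
    "emea-sales-admin": {"EMEA/Sales"},
}

def can_manage(admin, target_unit):
    """An admin may act only on units inside their assigned scope."""
    return target_unit in ADMIN_SCOPES.get(admin, set())

# A compromised US finance admin cannot touch EMEA sales:
print(can_manage("us-finance-admin", "EMEA/Sales"))  # False
print(can_manage("us-finance-admin", "US/Finance"))  # True
```

The design choice is that the default answer is “no”: an account absent from the scope map, or asking about a unit outside its partition, gets nothing, which caps the blast radius of any single compromised credential.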
It’s also important to be able to detect unauthorised or high-risk configuration changes that can undermine security controls. Automating configuration review and remediation can speed up this process. If it does all go wrong and malicious misconfigurations make it through the net, businesses need to be able to restore their systems to ‘golden configurations’ – supporting quick recovery and minimal downtime for critical systems.
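A drift check against a golden configuration reduces, at its core, to comparing live settings with a trusted baseline and resetting any deviations. The settings and values below are hypothetical; a real tool would pull live tenant settings via an admin API before comparing:

```python
# Sketch of configuration drift detection against a 'golden' baseline.
# Setting names and values are illustrative only.

GOLDEN = {
    "external_sharing": "disabled",
    "legacy_auth": "blocked",
    "audit_logging": "enabled",
}

def detect_drift(live):
    """Return {setting: (expected, actual)} for every deviation."""
    return {
        key: (expected, live.get(key))
        for key, expected in GOLDEN.items()
        if live.get(key) != expected
    }

def remediate(live):
    """Restore drifted settings to their golden values."""
    live.update({k: v for k, (v, _) in detect_drift(live).items()})
    return live

live = {"external_sharing": "enabled", "legacy_auth": "blocked", "audit_logging": "enabled"}
print(detect_drift(live))                    # {'external_sharing': ('disabled', 'enabled')}
print(remediate(live)["external_sharing"])   # disabled
```

The same comparison, run in reverse at restore time, is what makes ‘golden configuration’ recovery fast: the delta between baseline and live state is exactly the work to be done.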
With these core principles in place, organisations will be more likely to preserve their Microsoft 365 systems from malicious incidents, and maintain operational continuity during high-pressure scenarios such as attacks, audits, or large-scale change. Implementing these approaches doesn’t have to be the sole responsibility of already-stretched IT teams, either. With the right tools and support, it’s possible to improve Microsoft tenant security rapidly and efficiently.


