
Three years on from the frenzy that surrounded ChatGPT’s initial release, the AI revolution hasn’t slowed. It’s continued accelerating.
Nowhere is this more evident than in the Microsoft Copilot ecosystem. Having started out as a standalone tool, Copilot is now the central hub for Microsoft 365’s extensive AI experience.
At the core of this are AI agents. Indeed, Microsoft now enables users to leverage, and even build, their own AI tools for specific business processes, helping companies improve the way they operate.
Creating your own AI agent
Crucially, creating these agents doesn’t require extensive technical expertise. You don’t have to be an AI engineer to create AI tools. In the case of Microsoft Copilot Studio, users can build agents by simply describing them in natural language, before testing, refining and deploying them throughout the wider business.
It’s a testament to the exponential improvement in generative AI’s capabilities. In just a couple of years, we’ve moved from models producing lines of code that required manual tweaking to tools that can create entire solutions from just a few prompts.
That technology offers companies and business functions of all shapes and sizes incredible opportunities, from optimising data triaging to improving customer interactions. And firms are already experimenting in a variety of ways.
According to Microsoft, Fujitsu and NTT DATA are using Azure AI Foundry to “build and manage AI apps and agents that help prioritise sales leads, speed proposal creation and surface client insights”. Meanwhile, it also reports that Stanford Health Care is using Microsoft’s healthcare agent orchestrator to build and test AI agents that can help alleviate administrative burdens.
The potential is clear. However, this agentic ecosystem also presents risks. If the development and deployment of AI agents isn’t carefully managed, companies could see agents inadvertently leaking sensitive information, misrouting confidential communications, or acting on spoofed instructions.
How AI agents can quickly become problematic
To understand how these issues can arise, it’s important to consider both how and why AI agents are generally created. Often, they are developed by individuals as personal productivity tools. And at present, individuals usually feel free to create agents because their organisation doesn’t govern or restrict their ability to do so. Simply put, there typically aren’t policies in place addressing this.
If these productivity tools prove useful, they may in turn be shared among co-workers. In principle, there’s nothing wrong with that. However, organisations can quite quickly find themselves in situations where potentially hundreds of AI agents are running unchecked, being used for all sorts of critical tasks and handling all sorts of critical data.
It is when nobody has oversight of these tools, or what they are doing, that challenges and risks arise. Without proper governance, unassessed agents may bypass security reviews, operate with excessive permissions, and lack any real audit trail. That lack of oversight – of how AI agents work, how they use and share data, and how they interact with other systems – may lead to business-critical information being exposed.
Consider how handling vast amounts of data raises the privacy and confidentiality concerns associated with AI. We’ve already seen big tech firms like Samsung unwittingly leak sensitive data through AI platforms, with workers inputting confidential information, source code and internal meeting notes.
Equally, without the right policies, checks and balances in place, AI hallucinations can cause a host of problems. In the legal sector, this has seen misinformation creep into court filings, with the potential to cause miscarriages of justice. Earlier this year, in an £89 million damages case against the Qatar National Bank, 18 out of 45 case-law citations submitted by the claimants were found to be fictitious.
Security and governance are the key to success
The challenge of AI agents isn’t just a technical one, but a cultural one. With AI tools now common in many workplaces, individuals both desire and expect the autonomy to create tools that will help them in their roles.
So, how can firms balance this with prioritising the governance, validation and verification of AI toolsets? Crucially, it all comes down to establishing the right policies.
Firms need strict processes for the development, testing and approval of AI agents. That doesn’t mean preventing employees from experimenting with these technologies. Rather, it’s a case of ensuring they have a safe, controlled environment in which to do so, and that any solutions are assessed by a technical expert before being implemented at the team, department or business level.
Fortunately, Microsoft facilitates this. Having recognised the need for companies to securely deploy and manage AI agents, the firm has introduced Entra Agent ID – a feature that extends identity and access governance to AI agents. In essence, organisations can use this to ensure that each agent resides within a specific environment, and that each of those environments only has access to a specific subset of data.
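To make the idea concrete, here’s a minimal sketch of that scoping model. It is not the Entra Agent ID API – the agent names, scopes and function below are hypothetical – but it illustrates the deny-by-default principle of tying each agent to an environment that only grants access to an explicit subset of data.

```typescript
// Hypothetical illustration only – not the Entra Agent ID API.
// The idea: each agent has an identity tied to an environment, and
// each environment grants access to a narrow, explicit set of data.
// Anything not listed is denied by default.

interface AgentIdentity {
  agentId: string;
  environment: string;         // e.g. "finance-playground"
  allowedDataScopes: string[];  // explicit allow-list of data sets
}

// Deny-by-default check: an agent may only touch data that its
// environment explicitly grants.
function canAccess(agent: AgentIdentity, requestedScope: string): boolean {
  return agent.allowedDataScopes.includes(requestedScope);
}

// Example: an invoice-triage agent scoped to a single environment.
const invoiceAgent: AgentIdentity = {
  agentId: "agent-invoice-triage-01",
  environment: "finance-playground",
  allowedDataScopes: ["invoices-2024", "supplier-master-data"],
};

console.log(canAccess(invoiceAgent, "invoices-2024"));  // true
console.log(canAccess(invoiceAgent, "hr-salary-data")); // false – outside the agent's environment
```

However an organisation implements it, the design choice is the same: access is granted per environment, not per employee's curiosity, so a compromised or misbehaving agent can only reach the data it was explicitly given.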
With these guardrails in place, firms can provide their employees with AI playgrounds – safe spaces in which they can experiment and create tools that may be of great benefit to them and the wider business, but without the threat of compromising data or security.
Deployment is a similar story. If managed in the right way, agentic AI poses no more of a threat than other business resources – from Excel spreadsheets to Word document folders. If applications are isolated to subsets of data, then threat actors looking to use AI agents to harvest business information are unlikely to get very far.
Why Microsoft is the place to build AI playgrounds
This isn’t something that’s around the corner. It’s happening now, with Microsoft reporting that more than 230,000 organisations – including 90% of the Fortune 500 – have already used Copilot Studio to build AI agents and automations.
We’re on the cusp of a new era of AI agents. Indeed, Microsoft doesn’t see these tools merely as productivity enhancers, but as a new form of foundational IT infrastructure. With that in mind, it’s vital that firms put the right foundations in place now rather than later, avoiding potential issues down the line as agentic AI continues to evolve.
By giving employees the space to explore AI capabilities safely, firms may begin to benefit from solutions that can dramatically improve operational efficiency, productivity, revenue generation and cost savings, while avoiding the potential pitfalls.
With that said, it is vital that all testing and deployment takes place within the Microsoft ecosystem. This isn’t just because Microsoft is continually expanding its suite of pre-built agents and AI models to assist developers. More importantly, Microsoft can provide the secure environments for this experimentation.
There’s a reason banks, hospitals and highly regulated organisations use Microsoft. It is a company with highly advanced, customisable and effective security protocols; a company that processes 84 trillion threat signals every day.
By building and running your AI agents within the Microsoft ecosystem, you can leverage these same robust safeguards – protections that external platforms and open environments won’t necessarily guarantee. The moment business-critical applications leave the boundaries of Microsoft 365, all bets are off. Those protections may vanish, and the data those agents are processing may be exposed irreversibly.
To avoid this, it’s vital to prioritise governance. Develop, test, deploy and manage AI agents in the right way, and you’ll be well placed to reap the benefits. Fail to do so, however, and you may quickly find that the risks far outweigh the rewards.


