Agentic

How to safely handle the rise of AI agents

By Fraser Dear, Head of AI and Innovation, BCN

Three years on from the frenzy that surrounded ChatGPT's initial release, the AI revolution hasn't slowed. It's continued accelerating.

Nowhere is this more evident than in the Microsoft Copilot ecosystem. Having started out as a standalone tool, Copilot is now the central hub for Microsoft 365's extensive AI experience.

At the core of this are AI agents. Indeed, Microsoft now enables users to leverage and even create their own AI tools for specific business processes, helping companies further enhance the way they do business.

Creating your own AI agent

Crucially, creating these agents doesn't require extensive technical expertise. You don't have to be an AI engineer to create AI tools. In the case of Microsoft Copilot Studio, users can build agents by simply describing them in natural language, before testing, refining and deploying them throughout the wider business.

It's testament to the exponential improvement in generative AI's capabilities. In just a couple of years, we've moved from models producing lines of code that required tweaking to tools able to create entire solutions from just a few prompts.

That technology offers companies and business functions of all shapes and sizes incredible opportunities, from optimising data triaging to improving customer interactions. And firms are already experimenting in a variety of ways.

According to Microsoft, Fujitsu and NTT DATA are using Azure AI Foundry to "build and manage AI apps and agents that help prioritise sales leads, speed proposal creation and surface client insights". Meanwhile, it also reports that Stanford Health Care is using Microsoft's healthcare agent orchestrator to build and test AI agents that can help alleviate administrative burdens.

The potential is clear. However, this agentic ecosystem also presents some risks. If the development and deployment of AI tools isn't carefully managed, companies could see agents inadvertently leak sensitive information, misroute confidential communications, or act on spoofed instructions.

How AI agents can quickly become problematic

To understand how these issues can arise, it's important to consider both how and why AI agents are generally created. Often, they are developed by individuals as personal productivity tools. And at present, individuals usually feel they have the autonomy to create agents because their organisation doesn't govern or restrict their ability to do so. Simply put, there typically aren't policies in place addressing this.

If these productivity tools prove useful, they may in turn be shared between co-workers. In principle, there's nothing wrong with that. However, quite quickly, organisations can find themselves in situations where potentially hundreds of AI agents are running wild, being used for all sorts of critical tasks and drawing on all sorts of critical data.

It is when nobody has oversight of these tools, or what they are doing, that challenges and risks arise. Without proper governance, unassessed automation tools may bypass security reviews, operate with excessive permissions, and lack any real audit trail. That lack of oversight – of how AI agents work, how they use and share data, and how they interact with other systems – may lead to business-critical information being exposed.

Consider how the handling of vast amounts of data raises the privacy and confidentiality concerns associated with AI. We've seen big tech firms like Samsung unwittingly leak top-secret data through the use of AI platforms, with workers having input confidential data, source code and internal meeting notes.

Equally, without the right policies, checks and balances in place, AI hallucinations can cause a host of problems. In the legal sector, this has led to misinformation creeping into legal cases, potentially resulting in miscarriages of justice. Earlier this year, in an £89 million damages case against the Qatar National Bank, 18 out of 45 case-law citations submitted by the claimants were found to be fictitious.

Security and governance are the key to success

The challenge of AI agents isn't just a technical one, but a cultural one. With AI tools now common in many workplaces, individuals both desire and expect the autonomy to create tools that will help them in their roles.

So, how can firms balance this with prioritising the governance, validation and verification of AI toolsets? Crucially, it all comes down to instilling the right policies.

Firms need strict processes for the development, testing and approval of AI agents. That doesn't mean restricting employees from experimenting with these technologies. Rather, it's a case of ensuring they have a safe, controlled environment in which to do so, and that any solutions are assessed by a technical expert before being implemented at the team, department or business level.

Fortunately, Microsoft facilitates this dynamic. Having actively considered the need for companies to securely deploy and manage AI agents, the firm has introduced Entra Agent ID – a feature that extends identity and access governance to AI agents. In essence, organisations can use this to ensure that each agent resides within a specific environment, and that each environment only has access to a specific subset of data.
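The environment-scoping model described above can be sketched in a few lines. To be clear, this is a hypothetical illustration of the least-privilege pattern, not the actual Entra Agent ID API; the class, environment and scope names are all invented for the example:

```python
# Illustrative sketch only: each agent is registered to a named environment,
# and that environment defines the only data scopes the agent may touch.
# Anything outside the allow-list is denied by default.

class AgentEnvironment:
    def __init__(self, name, allowed_scopes):
        self.name = name
        self.allowed_scopes = set(allowed_scopes)

    def authorise(self, agent_id, scope):
        """Return True only if the requested data scope is in this
        environment's allow-list; deny everything else by default."""
        return scope in self.allowed_scopes


# A sales agent scoped to CRM data cannot reach, say, payroll records.
sales_env = AgentEnvironment("sales-playground", {"crm.leads", "crm.accounts"})

print(sales_env.authorise("lead-triage-agent", "crm.leads"))   # allowed
print(sales_env.authorise("lead-triage-agent", "hr.payroll"))  # denied
```

The design point is deny-by-default: an agent's reach is defined by its environment's allow-list, so even a compromised or misbehaving agent is limited to the data subset it was explicitly granted.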

With these guardrails in place, firms can provide their employees with AI playgrounds – safe spaces in which they can experiment and create tools that may be of great benefit to them and the wider business, but without the threat of compromising data or security.

Deployment is a similar story. If managed in the right way, agentic AI poses no more of a threat than other business resources – from Excel spreadsheets to Word document folders. If applications are isolated to subsets of data, then threat actors looking to use AI agents to harvest business information are unlikely to get very far.

Why Microsoft is the place to build AI playgrounds

This isn't something that's around the corner. It's happening now, with Microsoft reporting that more than 230,000 organisations – including 90% of the Fortune 500 – have already used Copilot Studio to build AI agents and automations.

We're on the cusp of a new AI agent era. Indeed, Microsoft doesn't see these tools merely as productivity enhancers, but as a new form of foundational IT infrastructure. With that in mind, it's vital that firms take the time to put the right foundations in place now rather than later, and so avoid potential issues arising down the line as agentic AI continues to evolve.

By giving employees the space to explore AI capabilities safely, firms may begin to benefit from solutions that can dramatically improve operational efficiencies, productivity, revenue generation and/or cost savings, while avoiding the potential pitfalls.

With that said, it is vital that all testing and deployment is done within the Microsoft ecosystem. This isn't just because Microsoft is continually expanding its suite of pre-built agents and new AI models to assist developers. More importantly, Microsoft can provide the secure environments for this experimentation.

There's a reason banks, hospitals and highly regulated organisations use Microsoft. It is a company with highly advanced, customisable and effective security protocols; a company that processes 84 trillion threat signals every day.

By building and running your AI agents within the Microsoft ecosystem, you can leverage these same robust safeguards – protections that external platforms and open environments won't necessarily guarantee. The moment these business-critical applications leave the boundaries of Microsoft 365, all bets are off. Those protections may vanish, and the business-critical data that these agents are processing may be exposed irreversibly.

To avoid this, it's vital to prioritise governance. Develop, test, deploy and manage AI agents in the right way, and you'll be well placed to reap the benefits. Fail to do so, however, and you may quickly find that the risks far outweigh the rewards.
