Enterprise AI

With AI Agents Becoming the Next Enterprise Challenge, Governing AI Output Will Become Even More of a Priority

By Paul Walker, Global Solutions Director, iManage

What happens when AI agents “join the workforce”? There will be innovative new workflows and productivity enhancements – but there will also be new challenges for organisations to wrap their arms around, particularly when it comes to governing agentic AI knowledge, reasoning, and decisions. This will be the new challenge for organisations that want to move ahead with autonomous or semi-autonomous AI securely and confidently – and it’s an area they ignore at their peril.

From smarter models to smarter workflows

It’s worth zooming out for a moment to see how we got here. Over the past two years, the AI industry has been consumed by a race to build increasingly sophisticated large language models (LLMs). As models have reached a mature, refined state, the technology is evolving from text generation to action, and the focus has slowly started shifting to the agentic build platform: the foundation on which organisations design, deploy, and orchestrate AI agents that can reliably execute complex tasks.

The primary focus for agents thus far has been on leveraging “knowledge vaults” – essentially, repositories of important documents and other content – that an AI agent can analyse and pull information from, allowing it to return answers to a professional.

For the most part, these tasks can be characterised as basic search and retrieval, but “basic” doesn’t mean “trivial” or “lacking utility”. In fact, there are quite a few scenarios where the ability to quickly and efficiently perform this type of narrow task is incredibly useful and can cut the time involved from hours to minutes – for instance, any task that involves reviewing heaps of documents.

With the introduction of the Model Context Protocol (MCP) and Agent2Agent (A2A) – open-source standards that enable different AI systems and systems of record to communicate seamlessly with one another – it’s becoming feasible for AI agents to go beyond basic search and retrieval and “hand off” the next part of the workflow to another agent. And that’s where the complexity begins to multiply from a governance standpoint.
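The handoff pattern described above can be sketched in plain Python. Everything here is hypothetical – the agent names, the `handle` methods, and the audit trail are illustrative stand-ins for the idea, not a real MCP or A2A implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """A unit of work handed between agents, carrying an audit trail."""
    query: str
    result: str = ""
    trail: list = field(default_factory=list)  # which agents touched the task

class RetrievalAgent:
    """First agent: basic search and retrieval against a knowledge vault (stubbed)."""
    def handle(self, task: Task) -> Task:
        task.result = f"answer for: {task.query}"
        task.trail.append("RetrievalAgent")
        return task

class NotifyAgent:
    """Second agent: would post the answer to Slack or Teams for review (stubbed)."""
    def handle(self, task: Task) -> Task:
        task.trail.append("NotifyAgent")
        return task

def run_workflow(task: Task, agents: list) -> Task:
    # Each agent does its part, then hands the task off to the next one.
    for agent in agents:
        task = agent.handle(task)
    return task

task = run_workflow(Task("What is the vacation policy?"),
                    [RetrievalAgent(), NotifyAgent()])
```

The governance point is the `trail` field: every handoff leaves a record, so the chain of agents that acted on a task can later be audited.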

First steps towards agentic operations

To get their feet wet with agentic workflows, organisations might ramp up with “low-hanging fruit” use cases, such as having AI agents take some form of content or “answer” they’ve retrieved and then – as the next part of the workflow – email it to a specific individual or group, or post it to a Slack channel or Teams chat for review and release to the original requestor.

It’s not hard to imagine other early use cases around answering employee questions. Organisations typically have channels – whether it’s a messaging app, an online form, or a dedicated email address – for employees to submit everything from legal and HR questions to sales and marketing enquiries. Typically, these are answered by staff.

Introducing an AI chatbot streamlines this process. The AI uses official internal policies and contracts to provide responses, which are then reviewed by a human before being released – human-in-the-loop automation, in essence. This multi-step approach creates an agentic workflow, combining AI knowledge sourcing with human validation.
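A minimal sketch of that human-in-the-loop pattern, assuming invented function names and an invented policy store – nothing here is a specific product's API:

```python
def draft_answer(question: str, policies: dict) -> str:
    # AI side: ground the draft in official internal policy text (stubbed
    # here; a real system would retrieve and cite the relevant document).
    source = policies.get("hr", "no policy found")
    return f"Based on policy ({source}): see details above."

def human_review(draft: str, approved: bool) -> str:
    # Human side: nothing reaches the requestor without explicit sign-off.
    return "released" if approved else "held for revision"

policies = {"hr": "Annual leave is 25 days."}
draft = draft_answer("How much annual leave do I get?", policies)
status = human_review(draft, approved=True)
```

The design choice worth noting is that release is a separate, human-controlled step rather than a property of the AI's answer.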

As AI agents develop, however, overseeing the communications of this new “labour force” will become a business imperative, particularly across all the different communication channels. And as AI agents assume greater decision-making authority, organisations will need robust safeguards to track, manage, and ensure accountability for the actions and outputs of AI in their business.

The governance framework behind safe AI adoption

So, how best to put guardrails in place without hampering AI’s potential? A prudent risk mitigation approach involves clearly delineating which components of your workflow are appropriate for AI assistance, while designating certain tasks as “AI agent-free” zones.

This process requires evaluating workflows on a spectrum from “low stakes” to “high stakes” to assess the potential risks of involving AI agents. It’s also worth designating which workflows require a human in the loop – so that there’s at least one set of human eyes reviewing outputs – and which workflows the agents can handle on their own.
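One way to make that spectrum concrete is a simple policy table. The workflow names and risk tiers below are invented for illustration – each organisation would populate its own:

```python
# Hypothetical governance policy: each workflow gets a risk tier and a
# human-in-the-loop requirement. High-stakes work is an "AI agent-free" zone.
POLICY = {
    "document_search":  {"risk": "low",    "human_in_loop": False},
    "employee_faq":     {"risk": "medium", "human_in_loop": True},
    "contract_signing": {"risk": "high",   "human_in_loop": None},  # agent-free
}

def agent_allowed(workflow: str) -> bool:
    rule = POLICY.get(workflow)
    # Unknown or high-stakes workflows are disallowed: safe by default.
    return rule is not None and rule["risk"] != "high"

def needs_human(workflow: str) -> bool:
    return bool(POLICY.get(workflow, {}).get("human_in_loop"))
```

Defaulting unknown workflows to "disallowed" mirrors the article's point: the burden of proof sits with expanding agent autonomy, not restricting it.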

As another safety and governance measure, organisations must ensure they have high-quality data and a solid information architecture within the organisation for AI to draw upon. That’s the only way for AI to take action based upon accurate, relevant, and up-to-date information.

To help cultivate this information architecture, organisations should start by designating authoritative datasets and using a managed repository as a single source of truth, with staff curating content to keep data quality high. Simple practices like marking final contract versions help organise key documents for effective AI use and prevent important records from being missed – ensuring AI has the best possible data to leverage. Ensuring an active lifecycle on the dataset is also a vital component of this strategy, guaranteeing that responses are derived from current, validated information that remains relevant and reliable.
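The curation rules above can be sketched as a filter over repository metadata. The field names (`final`, `authoritative`, `reviewed`) and the staleness window are assumptions for illustration, not a real repository schema:

```python
from datetime import date

# Hypothetical document metadata from a managed repository.
documents = [
    {"title": "MSA v3 (final)", "final": True,  "authoritative": True,  "reviewed": date(2025, 1, 10)},
    {"title": "MSA v2 draft",   "final": False, "authoritative": True,  "reviewed": date(2024, 6, 1)},
    {"title": "Retired policy", "final": True,  "authoritative": False, "reviewed": date(2020, 3, 1)},
]

def ai_ready(doc: dict, today: date, stale_after_days: int = 730) -> bool:
    # Only final, authoritative, recently reviewed records feed the AI --
    # the "active lifecycle" check keeps stale content out of the corpus.
    current = (today - doc["reviewed"]).days <= stale_after_days
    return doc["final"] and doc["authoritative"] and current

corpus = [d["title"] for d in documents if ai_ready(d, today=date(2025, 6, 1))]
```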

Ready or not, agents are coming

As organisations prepare for agents that don’t just assist but act, a key requirement will be the ability to effectively oversee and audit their activity, put robust governance frameworks in place, and ensure accountability for the actions they take and the outputs they produce.

After all, if AI agents are slowly becoming part of the labour force, the logical next step is to turn them into accountable workers. For organisations with an eye on avoiding security or governance missteps, it’s the only way to ensure safe, scalable AI adoption as the technology continues to evolve.
