Future of AI

Before Blaming the AI, Focus on Human Error

By Brian Jones, Senior Director, Customer Adoption, iManage

AI is quickly becoming a centerpiece of modern business strategy – but when the outputs disappoint, the blame often lands squarely on the tech.  

The truth? Most failures around AI adoption have a distinctly human fingerprint. Fortunately, a few best practices can help turn things around, providing a useful approach for successfully engaging with AI. 

Define the problem before deploying the solution 

There are many areas where AI already excels: it’s a whiz at automatically generating short replies to email threads, transcribing and summarizing video meetings, or analyzing images and tagging them appropriately.

However, what if you’re introducing some form of generative AI – maybe a user-prompted desktop AI assistant – and there isn’t much of a plan beyond “other organizations are using this stuff, so we probably should too”? 

In the rush to get going, many organizations often skip the most basic question: What exactly are we trying to solve with this technology? 

For example, are we trying to increase revenue? Reduce customer churn? Streamline the amount of time required to generate reports? Improve customer satisfaction? Tackle some other business problem? 

The AI won’t magically figure this out for you: if you don’t know what you’re trying to achieve, neither will the technology, so set aside some time up front to decide. Clear alignment between business objectives and AI capabilities helps avoid wasted effort and gives the technology a better shot at delivering meaningful outcomes.

A narrow approach gets the best results 

After defining the business problem, organizations need to figure out where it makes sense to introduce AI to their end users.  

What workflows can AI assist them with? Where can the technology be introduced within the specific context in which they get work done every day? How can it deliver benefits that are aligned with the larger business goal that has been identified? 

In coming up with good answers to these questions, it helps to keep in mind that AI is great for tackling tasks that are very time consuming and somewhat tedious in nature – and the more you can home in on these activities, the better. 

For instance, consider the act of comparing a proposed legal contract to a collection of similar contracts to identify key differences and risks. While such a task may take a lawyer several hours or days, AI can complete it in minutes, resulting in substantial time savings. The value here comes from applying AI to an easily measurable and quantifiable workflow.

Another good usage might be extracting insights from large datasets. An operational manager in a large organization may need to aggregate data from multiple sources, generate reports, and present findings to senior management. This process often requires extensive manual effort and may take several days, depending on the volume of data involved.  

Happily for all parties, AI is particularly effective at automating this kind of narrowly defined task. It can quickly analyze numerous data repositories and deliver insights – some of which might never have been identified had the task been done manually.
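To make the "narrowly defined task" point concrete: the article names no specific tools, but the multi-source aggregation it describes can be sketched in a few lines. The data sources, field names, and `merge_by_region` helper below are all invented for illustration.

```python
# Hypothetical sketch: folding rows from multiple sources into one
# summary report -- the kind of narrowly defined, tedious task the
# article describes. All names and figures are invented.

from collections import defaultdict

# Pretend these rows came from two different systems (e.g. CRM and billing).
crm_rows = [
    {"region": "East", "deals": 12},
    {"region": "West", "deals": 7},
]
billing_rows = [
    {"region": "East", "revenue": 240_000},
    {"region": "West", "revenue": 130_000},
]

def merge_by_region(*sources):
    """Merge rows from every source into one record per region."""
    merged = defaultdict(dict)
    for source in sources:
        for row in source:
            merged[row["region"]].update(row)
    return dict(merged)

report = merge_by_region(crm_rows, billing_rows)
for region, stats in sorted(report.items()):
    print(f"{region}: {stats['deals']} deals, ${stats['revenue']:,} revenue")
```

The scripting itself is trivial; the operational value the article points to comes from letting AI do the tedious gathering and first-pass analysis across many such repositories, with a human reviewing the merged result.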

As these scenarios demonstrate, success with AI begins when businesses apply it to clear, defined problems. And while technology can significantly reduce the time required for elements of these workflows, human oversight remains essential to review and interpret AI-generated results. In other words, the humans aren’t going anywhere – they’re just getting an “AI assist”. 

Structure beats experimentation 

Assuming every employee will become an AI power user is unrealistic. Most aren’t interested in exploring dozens of prompts or debugging frustrating outputs. They need structure and guidance, not wide-open experimentation. 

For instance, a marketing analyst may know what they need – a customer segmentation matrix – but struggle to phrase the right request. Without guidance, their early attempts might produce incoherent or incomplete results, leading to skepticism or disengagement. 

Organizations must bridge this gap with structured support. Pre-built prompt libraries, role-specific templates, and embedded AI actions within existing workflows can dramatically improve usability. Imagine a dashboard that offers “Generate Summary Report” or “Draft Email Based on Customer Feedback” at the click of a button.  
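A pre-built prompt library of the kind described above can be as simple as named, role-specific templates that users trigger instead of writing prompts from scratch. A minimal sketch, assuming hypothetical template names and fields (none of these come from iManage or the article):

```python
# Hypothetical sketch of a pre-built prompt library: users pick a named
# action ("Generate Summary Report", "Draft Email..."), and the template
# supplies the carefully worded prompt. All names here are invented.

PROMPT_LIBRARY = {
    "summary_report": (
        "Summarize the following {doc_type} for a {audience} audience, "
        "highlighting the three most important points:\n\n{text}"
    ),
    "customer_email": (
        "Draft a polite reply to this customer feedback, acknowledging "
        "the concern and proposing a concrete next step:\n\n{text}"
    ),
}

def build_prompt(action: str, **fields) -> str:
    """Fill a named template; the user chooses an action, not raw prompt text."""
    return PROMPT_LIBRARY[action].format(**fields)

prompt = build_prompt(
    "summary_report",
    doc_type="meeting transcript",
    audience="executive",
    text="(transcript text here)",
)
print(prompt)
```

The design point is that the prompt engineering is done once, by someone who knows what works, and every user after that gets a dependable result from a single click or function call.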

These contextual tools make it far easier for new business users to get their feet wet with AI. Perhaps more importantly, these small, targeted wins build momentum. Teams are more willing to engage with AI once they see clear, practical benefits – paving the way for broader adoption.

Avoid the hype cycle trap 

Every new technology follows a hype cycle: excitement, disappointment, then either widespread adoption or quiet abandonment. AI is no different – and its success depends less on what it can do and more on how it’s implemented. By addressing the “human error” factor early on, organizations can safely steer around the common pitfalls that might derail their AI efforts and achieve the greater innovation and efficiency that led them to AI in the first place. 
