How AI Will Balance Usability and Explainability in 2022

The buzz around the potential of AI to assist knowledge workers across a variety of industries has been steadily growing over the past decade. With each passing year, the buzz has only gotten louder as the technology is increasingly deployed for a growing number of use cases, from contract review to identification of expertise within an organisation.

As AI becomes more mainstream and adopted across more scenarios, the technology itself continues to evolve in interesting ways. These changes can be witnessed at a surface level in how users consume and interact with AI, as well as at a deeper level, in the logic that underpins its decision-making capabilities. Together, these changes represent two of the biggest trends around AI for the coming year.

Trend 1: AI’s Great Disappearing Act

It might seem like a bit of a paradox, but as AI becomes more popular, it might actually move out of the limelight to more of an understated, “behind the scenes” role. This is a good thing, and a sign of AI’s growing maturity.

It used to be the case that AI was a seemingly magical and wondrous technology that companies would purchase in the hopes that it would make them more productive, more efficient, and more secure. The unspoken formula around many AI deployments seemed to be: Buy the technology, and then we’ll figure out a way to eke some benefits out of it.

That approach is no longer the case. People don’t want to purchase AI as a standalone technology anymore – they want to consume AI as something that is already plugged into the back of an application.

When AI is productised like this, customers don’t have to worry about how to use it or how to wire all the different pieces together; the AI simply becomes part of a product that they can easily take advantage of.
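
To picture what that looks like in practice, here is a minimal sketch in Python (all names are hypothetical, standing in for whatever engine a vendor actually wires up) of a product feature that quietly delegates to an AI engine behind the scenes:

```python
# A minimal sketch of AI consumed "behind the scenes" (all names are
# hypothetical). The end user calls an ordinary product feature; the AI
# engine sitting behind it is wired up by the vendor, not by the user.

class _StubEngine:
    """Stand-in for a hosted AI engine client so the sketch runs standalone."""

    def summarise(self, text: str) -> str:
        return text[:60].rstrip() + "..." if len(text) > 60 else text

_ai_engine = _StubEngine()  # the vendor's choice of engine, invisible to users

def summarise_contract(text: str) -> str:
    """The product feature the user actually sees and clicks."""
    return _ai_engine.summarise(text)  # the AI call is an implementation detail

print(summarise_contract("This agreement is made between the undersigned parties, effective..."))
```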

Because of this productisation trend, the industry will start to see increasing consolidation among AI vendors for the simple reason that it doesn’t make a lot of sense for tech vendors to each build their own AI engine for their application to sit on top of. Instead, there will be a handful of powerful, well-trained AI engines for vendors to leverage. This is good news for everyone who hopes to see more products on the market that easily enable end users to take advantage of AI.

Trend 2: “Thanks, AI – But How Exactly Did You Reach That Decision?”

While companies increasingly want AI productised, they also want greater explainability from their AI. In other words, users need to be able to explain how and why the AI arrived at a particular decision, rather than writing AI off as a ‘black box’ whose reasoning is totally opaque.

Because AI is being used to assist with both process automation (for example, extracting clauses from mountains of contracts) and decision-making (applying rules and logic to the information that’s been extracted), there needs to be more explainability at the decision-making level.
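
As a rough illustration of where that explainability needs to live, here is a minimal sketch (the clause names, rules, and thresholds are invented for the example) of a decision-making step that records a human-readable reason for every rule it triggers:

```python
# Minimal sketch of an explainable decision layer (all clause names, rules,
# and thresholds are invented). Stage 1, process automation, has already
# extracted clause data from a contract; stage 2, decision-making, applies
# rules and records a human-readable reason for each one it triggers.

def review_contract(clauses: dict) -> dict:
    """Apply review rules to extracted clause data and explain the outcome."""
    reasons = []

    if clauses.get("liability_cap") is None:
        reasons.append("No liability cap clause was found.")
    if clauses.get("termination_notice_days", 0) < 30:
        reasons.append("Termination notice period is shorter than 30 days.")

    return {
        "flag_for_review": bool(reasons),
        "reasons": reasons or ["All checked clauses met the review criteria."],
    }

result = review_contract({"liability_cap": None, "termination_notice_days": 14})
print(result["flag_for_review"])  # True
for reason in result["reasons"]:  # every decision carries its rationale
    print("-", reason)
```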

Making sure that the rules and the logic behind a decision are not just fully explainable to a human being but also defensible is key to ensuring that AI isn’t inadvertently making bad decisions based on poorly trained models, unintentional bias, or other faulty logic.

It’s easy to see a variety of situations where AI-powered decisions could potentially get a firm into hot water – whether that’s deciding which mortgage loan applications are approved or denied, or determining who is suitable to be insured and at what rates.

Even areas like recruitment can be problematic if you’re using AI to assess applications. An enterprise that’s making a yes/no decision on hiring somebody needs to know why the machine made the decision it did, so that it can protect itself against the possibility that the technology it’s using is driving bias of some kind.

Key to eliminating this bias is understanding what kind of data the AI has been trained on and what kind of examples it has been fed. It also matters at what level of detail the model pulls information out. The particulars of how the AI makes its decisions need to be transparent, not hidden away in manuals and documentation. That’s the only way users can have full confidence in the decisions they are increasingly relying on AI to help them make.
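
As one hedged illustration of this kind of transparency, the sketch below trains a toy decision tree with scikit-learn (the features and training data are invented) and surfaces both which inputs carried weight and the exact decision logic in human-readable form:

```python
# Illustrative sketch only (not any vendor's actual tooling): the features
# and training data below are invented. A shallow decision tree makes it
# easy to surface both feature-level importances and the full decision logic.
from sklearn.tree import DecisionTreeClassifier, export_text

features = ["income", "debt_ratio", "years_employed"]  # hypothetical inputs
X = [[40, 0.5, 1.0], [85, 0.2, 6.0], [60, 0.4, 3.0], [30, 0.7, 0.5]]
y = [0, 1, 1, 0]  # 1 = approve, 0 = deny (toy labels)

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Which inputs mattered, and by how much?
for name, weight in zip(features, model.feature_importances_):
    print(f"{name}: {weight:.2f}")

# The decision logic itself, readable by a human rather than buried in docs:
print(export_text(model, feature_names=features))
```

A production system would of course involve richer models and dedicated explainability tooling, but the principle is the same: the basis for each decision should be inspectable, not buried.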

One Does Not Negate the Other

These two trends are largely of a piece: just because end users want AI to “fade into the background” as far as how they interact with it doesn’t mean they don’t want other aspects of AI to be more transparent – namely, knowing how AI does what it does.

In other words, the need for usability goes right along with the need for greater explainability. In responding to these two trends and finding a comfortable balance, AI is sure to continue its exciting evolution in 2022.

Author

Nick Thomson is General Manager of AI at iManage, the company dedicated to Making Knowledge Work™. In his role, Nick delivers new and enhanced AI solutions that empower professionals to increase efficiency, improve productivity, and mitigate risk.
