Future of AI

The next chapter for AI in ethics and compliance

By Tim Morss, CEO at SpeakUp

Compliance leaders should treat generative AI as a force-multiplier, but they should still keep both hands on the wheel 

The perennial resource gap 

In every company I have worked with, the compliance team has said the same thing: "We don't have enough resources." They are asked to cover more regulations, review more disclosures, and respond to more issues with the same or even smaller teams. I have never met an ethics and compliance function that felt fully staffed or fully funded.

That reality and my own real-world experiences have shaped my view of artificial intelligence. For me, AI is not about hype or chasing the latest buzzword. It is about closing the resource gap.  

Compliance teams don’t need more dashboards or another piece of software to learn. They need the time to use their judgment to handle issues. AI gives compliance professionals a chance to remove the friction that eats away at their capacity.

Tackling the bottlenecks first 

When I think about where AI belongs in ethics and compliance, I don’t start with the most ambitious vision. I start with bottlenecks. Where are people spending hours on manual work? Which hand-offs cause the most delay? 

Triage is one of the clearest use cases.  

Teams spend countless hours reviewing reports, summarizing details, and deciding what needs attention. AI can take on much of this front-end work by generating summaries, categorizing issues, and prioritizing severity. Investigators start with structured information instead of unfiltered narratives, which means less time sorting and more time addressing real risk. Over time, this also builds a reliable data set that helps spot patterns across locations or functions. 
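The front-end triage step described above can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation: in practice a language model would do the classification, so simple keyword rules stand in for it here, and the categories and severity levels are assumptions made up for the example.

```python
# Sketch of AI-assisted triage: turn a free-text report into structured data
# (category, severity, excerpt) so investigators start from something sortable.
# Keyword rules stand in for a language-model call; labels are illustrative.
SEVERITY = {"safety": 3, "fraud": 3, "harassment": 2, "policy": 1}

KEYWORDS = {
    "safety": ["injury", "unsafe", "accident"],
    "fraud": ["invoice", "kickback", "embezzle"],
    "harassment": ["harass", "bully", "discriminat"],
}

def triage(report: str) -> dict:
    """Return a structured summary of one free-text report."""
    text = report.lower()
    category = next(
        (cat for cat, words in KEYWORDS.items() if any(w in text for w in words)),
        "policy",  # default bucket for anything unmatched
    )
    return {
        "category": category,
        "severity": SEVERITY[category],
        "excerpt": report[:80],  # short preview for the case queue
    }
```

Sorting a queue of such records by `severity` is what lets the most serious matters surface first, which is the point of the exercise.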

Voice intake is another area where AI removes friction.  

Traditional hotlines come with hold times, language barriers, and multiple hand-offs before a case is even logged. A multilingual AI voice agent changes that. The caller speaks in their own language, the system transcribes and translates instantly, and the case enters the system without delay. This lowers costs while delivering a more consistent experience for employees, who know their concerns will be captured accurately and anonymized. 

Conflict-of-interest reviews also stand to gain.  

AI can’t make the final decision, but it can flag if a conflict has already been disclosed or if it connects to an ongoing report. Linking those dots across cases is critical in large organizations where disclosures often sit in silos. It gives reviewers context from the start and reduces duplication. 
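The cross-referencing step is essentially a join across silos. The sketch below shows the idea under assumed record shapes (the `employee`, `counterparty`, and `parties` fields are illustrative, not any product's schema): flag a new disclosure if it repeats an earlier one or touches a party named in an open report.

```python
# Sketch of conflict-of-interest cross-referencing: given a new disclosure,
# flag prior disclosures by the same employee about the same counterparty,
# and open reports that mention that counterparty. Field names are assumed.
def link_disclosure(new, prior_disclosures, open_reports):
    key = (new["employee"], new["counterparty"])
    flags = []
    if any((d["employee"], d["counterparty"]) == key for d in prior_disclosures):
        flags.append("previously disclosed")
    related = [r["id"] for r in open_reports if new["counterparty"] in r["parties"]]
    if related:
        flags.append(f"linked to open reports: {related}")
    return flags  # context for the reviewer, who still makes the final call
```

The output is deliberately just a list of flags: the system supplies context, and the human decides.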

These are not futuristic applications. They are practical, low-risk automations that free up hours for higher-value work. And they exist right now in leading ethics and compliance solutions. 

Entering the agentic era 

I believe we are on the edge of a fundamental shift in how compliance technology is designed. The old model is a dashboard with pie charts and filters. The near future is going to be agentic: small, task-oriented services that respond to natural language.

Instead of spending hours building charts and slicing data across multiple systems, a compliance leader will simply type in natural language, “Show me a report on discrimination cases by region over the past year.” The agent will pull the data, generate the analysis, create visuals, and prepare a report ready to share with the board. The interface disappears; the insight remains.
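To make the agentic pattern concrete, here is a toy version of that report request. In a real agent a language model would parse the request and plan the query; a trivial keyword match stands in for it here, and the case data is fabricated for the example.

```python
from collections import Counter

# Toy agentic report: parse a natural-language request and aggregate cases.
# A language model would do the parsing in practice; the data is made up.
CASES = [
    {"type": "discrimination", "region": "EMEA"},
    {"type": "discrimination", "region": "APAC"},
    {"type": "fraud", "region": "EMEA"},
    {"type": "discrimination", "region": "EMEA"},
]

def run_report(request: str, cases=CASES) -> dict:
    """Answer 'Show me <type> cases by region' with counts per region."""
    known_types = {c["type"] for c in cases}
    case_type = next(t for t in known_types if t in request.lower())
    return dict(Counter(c["region"] for c in cases if c["type"] == case_type))
```

The user never sees the grouping logic; they ask a question and get the breakdown, which is the "interface disappears" point above.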

This change in interface design will matter as much as the algorithms themselves. Compliance officers will spend less time typing into forms and more time framing problems. AI will not take judgment away from us; it will strip away the layers that keep us from exercising it.

Building from the problem outward 

Another lesson I’ve learned is to never start with the technology. Start with the problem. 

When we develop new features, we begin with a specific constraint, such as language access in a hotline, a low check-back rate, lengthy case cycle times, or the cost per disclosure review. We always start from qualitative data. What problems have we heard from our customers? What gaps do we see in competitors’ solutions that we, as practitioners, know could be done better?

This problem-first approach stops us from chasing novelty. AI is not the right solution for every issue, and admitting that early saves wasted investment.

What success looks like 

When AI works, the benefits are easy to describe. Critical matters can be routed to investigators in minutes rather than days, ensuring faster triage and more timely interventions. Patterns and trends begin to emerge across business units, suppliers, or regions that would otherwise remain hidden, giving compliance richer insight into the organization’s health.  

Employees also face lower barriers when speaking up: they can report in their own language, at a time that suits them, with greater confidence that their concerns will be understood. At the same time, compliance professionals spend less energy and fewer resources on transcription, searching, or formatting, and more on evaluating risk and advising leadership.

The point is not efficiency for its own sake. It is the efficiency of acting sooner, with better information, to protect people and organizations. 

Risks we cannot outsource 

AI will also introduce new risks that compliance leaders cannot hand off to IT.  

Exporting data out of systems and inputting it into consumer AI products raises serious questions about data privacy. Over-reliance on machine outputs can dull professional judgment and create a false sense of certainty. And if teams lean too heavily on automation, there is a danger of skills decay, as investigative and analytical expertise erodes even while routine work gets faster.

The solution is vigilance. Compliance must own AI as much as Legal or IT does. If we abdicate responsibility, shadow adoption will creep in anyway and the risks will multiply. 

Preparing for the future 

AI will not replace ethical decision-making, but it will change who makes the first move. Instead of waiting for reports to pile up, compliance can identify trends early, intervene quickly, and measure cultural health with more precision. 

The next few years will bring new capabilities we can barely imagine today. Six months ago, I would not have predicted the progress we see in multilingual voice agents. Six months from now, we may see things like fully agentic triage, investigation support, and agents speaking to agents. 

The key is to prepare now by adopting governance frameworks, running pilots on real pain points, and creating feedback loops that let AI learn from resolved cases. That way, when the technology crosses from experimental to reliable, we will be ready to put it to work responsibly. 

Final thoughts 

AI is not a silver bullet. It’s a force multiplier that gives us the ability to do more of the work that matters: protecting people, preventing misconduct, and embedding integrity across organizations.

When I look ahead, I do not see a future where compliance is run by machines. I see a future where compliance teams finally have the tools to match the scale of their mandate.

And the best part? That future is not decades away. It’s here right now.

 
