
Amazon CEO Andy Jassy has called on employees to embrace AI, describing it as a “game-changer” and encouraging teams to leverage it themselves. His message echoes the sentiment spreading across boardrooms: AI is no longer optional, it’s inevitable. But despite this enthusiasm, nearly half of executives still admit they don’t fully trust handing over tasks to AI agents, citing concerns around oversight and accountability.
This wariness is warranted – the concern isn’t whether AI can deliver results, but how much visibility leaders will have into their organisations’ data and how confident they can be that information won’t be compromised. Using AI responsibly should also be a key priority: keeping human judgment in the loop and ensuring ethical safeguards are in place.
As AI adoption accelerates, the C-suite faces mounting, and at times conflicting, pressures. On the one hand, there’s urgency to keep teams at the forefront of AI so they have the tools they need to outperform competitors. On the other, security leaders cannot afford to rush decisions; they need to take the time to properly assess the risks. Knowing where to focus when bringing new innovations to teams is paramount, so where can business leaders start?
Prioritising the top risks for the top use cases
Firstly, it’s critical to zoom out and prioritise what matters most for your business when identifying new AI tools, as the risks you need to address will depend entirely on how AI is being used. For example, if you’re drafting a LinkedIn post with a generative tool, you’ll likely care more about the accuracy of the output’s grammar and facts than about the architecture of the underlying model.
However, if AI is integrated into a company’s CRM to support sales, protecting sensitive customer data will be a top priority, and security leaders can’t commit until they have the right assurances. What’s important to understand here is that there is no single checklist – each use case carries its own risks, and leadership needs to be aligned on how the benefits weigh up against them. The goal is to enable innovation while keeping human oversight front and centre, ensuring that AI augments rather than replaces judgment.
So, one of the first steps is to assess the areas of need and break down the specific risks associated with those use cases. Then, gather information on the types of protections needed by the organisation before engaging with a vendor or exploring a new tool.
Demand transparency
Transparency is a critical part of the process. Once you’re confident in what you’ll be using AI for, you need to be clear on all aspects of the model: how it works, how data is handled, where the training data comes from, and so on. This is where a strong relationship with the vendor is critical, so both sides can engage in an open and honest discussion.
For example, our platform uses AI to extract insights from customer conversations to help sales representatives close deals faster. As a result, we have to invest heavily in safeguarding customer data, and that’s something we get asked about frequently. If a vendor can’t offer the right assurances about their product, you need to ask yourself if what’s being sold is worth the uncertainty.
Assess risk vs reward
Not all AI tools are created equal. Some don’t retain customer data, while others use it to continuously train their underlying models, and some even make that data available in the public domain. Knowing which bucket a vendor falls into is crucial: the 2024-2025 period has seen an unprecedented surge in AI-related security incidents, with 73% of enterprises experiencing at least one AI-related breach, at an average cost of $4.8 million per incident.
This isn’t about technical capability; it’s about governance. You need to understand whether a vendor’s approach aligns with your organisation’s risk tolerance and compliance obligations. For example, our AI platform relies on active consent and ensures you retain control over your data, which is never shared in the public domain. Our product development is guided by a dedicated governance team focused on ethical use, human validation, and clear model oversight.

Strong governance isn’t just a compliance checkbox; it ensures AI is used responsibly and builds trust with customers and employees alike. Ultimately, responsible AI adoption is as much about people and processes as it is about the technology itself. So you need to understand what safeguards are in place, and if a vendor can’t offer that level of clarity, consider alternatives that can.
Looking ahead
Momentum matters, but scrutiny matters more. Security leaders should accelerate AI adoption while insisting on vendor transparency, robust governance, and sound risk assessments.
As AI becomes a workplace norm, leaders must understand the risks as well as the rewards, ask the hard questions, and, if the answers aren’t clear, keep asking and researching until they are – all in service of delivering results for the organisation in a way that is both impactful and safe.