
In today’s corporate merry-go-round, every company seems to be rushing to embed artificial intelligence (AI) – automating systems, streamlining processes and, ideally, cutting costs.
Yet in the stampede towards an AI-powered ‘new world’, too many businesses are making an expensive mistake: adopting powerful technologies without truly understanding their capabilities and how to manage them.
This lack of understanding isn’t just a technical oversight; it’s a cost and reputational catastrophe waiting to happen. When things go wrong with ‘black-box’ systems whose decisions are neither explained nor understood, tracing the cause and reversing the damage is difficult, if not practically impossible.
Using governance as a competitive advantage
The smartest organisations are placing AI governance in the boardroom alongside corporate strategy. They know it’s a recipe for lasting success, offering protection against the risks of harm and financial loss (Singla et al., 2025).
In fact, proactively building ethics and oversight into AI system design and deployment from the outset doesn’t stifle innovation; it creates clarity and certainty. Systems become transparent and explainable, and the company stays in control with clear lines of accountability.
Luckily for businesses, building in effective AI governance at the outset doesn’t have to mean starting from scratch. Several well-established frameworks already exist that businesses can adopt as the foundation of their AI strategy.
Here at Anekanta®, we’ve been pioneering in this space since 2020 when we created our 12-principle AI Governance Framework (Anekanta®, 2020).
Recognised by the UK Government and the OECD (Anekanta®, 2024), and adopted by the Institute of Directors in its AI Governance in the Boardroom business paper (IoD, 2025), it provides a strategic blueprint for accountability at the highest level when embedding responsible AI into organisational culture.
Our principles are timeless and adaptable – designed to evolve alongside emerging global standards rather than chase them.
For businesses looking to operationalise AI governance across their functions as a mark of trust, the new AI Management System Standard, ISO/IEC 42001 (ISO, 2023), offers a certifiable route.
It ensures that AI policy is established at board level and that systems are managed in a measurable, auditable way, with controls that flag performance drift through monitoring and review.
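To make that concrete, a drift control can be as simple as comparing a model’s live performance against the baseline agreed at deployment sign-off, and escalating to an accountable owner when it degrades beyond tolerance. The sketch below is a minimal illustration of that kind of control, not anything prescribed by ISO/IEC 42001 itself; the metric, threshold, and names are our own assumptions.

```python
# Illustrative only: a minimal performance-drift control of the kind an
# AI management system might require. The metric, threshold, and names
# are assumptions for this sketch, not taken from ISO/IEC 42001.
from dataclasses import dataclass
from statistics import mean

@dataclass
class DriftControl:
    baseline_accuracy: float  # accuracy agreed at deployment sign-off
    tolerance: float = 0.05   # permitted degradation before escalation

    def check(self, recent_outcomes: list[bool]) -> bool:
        """Return True (and alert) if performance has drifted beyond tolerance.

        recent_outcomes: True where the system's decision was later
        confirmed correct by human review, False otherwise.
        """
        live_accuracy = mean(recent_outcomes)
        drifted = live_accuracy < self.baseline_accuracy - self.tolerance
        if drifted:
            # In a real management system this would raise an incident
            # and route the case to the accountable owner for review.
            print(f"ALERT: live accuracy {live_accuracy:.0%} vs "
                  f"baseline {self.baseline_accuracy:.0%} – escalate")
        return drifted

# Example: a 92% baseline, but only 8 of the last 10 sampled decisions
# were confirmed correct – the control flags the drift for human review.
control = DriftControl(baseline_accuracy=0.92)
control.check([True] * 8 + [False] * 2)  # prints an alert, returns True
```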
However, adherence to ISO 42001 alone does not guarantee compliance with emerging regulations such as the EU AI Act, which introduces additional legal obligations based on risk.
In the EU, AI systems are categorised according to the level of risk they pose to individuals’ health, safety, and fundamental rights (EU Law, 2024).
Systems which present unacceptable risks, such as those employing manipulative techniques, are prohibited, while high-risk systems – such as those which determine access to services, education, and employment, or which rely on biometric and other sensitive data – are permitted but subject to strict controls.
For high-risk AI, companies must implement robust data governance, human oversight, security, and accountability – and be able to prove it. A declaration of conformity, whether based on internal assessment or independent certification, is not optional.
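To see how this tiered logic might shape an early design review, consider the toy triage below. It is a drastic simplification for discussion purposes only – not a legal assessment under the Regulation – and the flags and category names are our own assumptions.

```python
# A toy triage of the EU AI Act's risk tiers, for design discussions only.
# The flags and tier names are simplifying assumptions, not legal criteria.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"            # e.g. manipulative techniques
    HIGH = "high"                        # permitted, but strictly controlled
    LIMITED_OR_MINIMAL = "limited_or_minimal"

def triage(uses_manipulative_techniques: bool,
           affects_access_to_services: bool,
           processes_sensitive_biometrics: bool) -> RiskTier:
    if uses_manipulative_techniques:
        return RiskTier.PROHIBITED       # banned outright under the Act
    if affects_access_to_services or processes_sensitive_biometrics:
        return RiskTier.HIGH             # data governance, oversight, conformity
    return RiskTier.LIMITED_OR_MINIMAL

# Example: a screening tool that scores job applicants lands in the
# high-risk tier, triggering the obligations described above.
print(triage(False, True, False))  # RiskTier.HIGH
```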
The lesson is clear – businesses need to understand the AI Act and similar frameworks before they design their systems, not after. Retrofitting compliance is always more expensive, not to mention more complicated, than embedding it from the start.
The Dutch Government – a warning to us all
If any organisation still believes that governance is optional, the Dutch childcare benefits scandal should serve as a sobering reminder (Amnesty International, 2021).
Back in 2013, the Dutch tax authority rolled out a self-learning AI system designed to detect false claims for childcare benefits. The plan sounded promising: let the automated system flag fraudulent cases and speed up processing.
However, what began as an efficiency drive ended in disaster. The algorithm, riddled with bias, disproportionately targeted families with dual nationalities and low incomes, flagging them as likely fraudsters.
With no meaningful human oversight to challenge this bias, and with officials lacking the AI literacy to understand how the system made decisions, its judgments could not be effectively questioned.
Over a period of more than five years, thousands of innocent parents were accused of fraud and ordered to repay benefits they had rightfully received. Many fell into financial hardship; there were suicides and some lost custody of their children.
For years, these families fought an uphill battle to prove their innocence. The process was confusing, appeals often fell on deaf ears, and the government largely turned a blind eye – until investigative journalists took up the parents’ complaints and the scandal finally broke.
This tragedy illustrates exactly what can happen when organisations deploy AI without governance from the outset. Governance means asking the right questions to understand what AI models are doing and ensuring human accountability.
Machines may process data at superhuman speed, but they cannot understand fairness, context, or compassion. Only humans can do that, and only through proper governance.
Compliance is not the enemy of innovation
Too often, businesses treat compliance as a blocker to creativity, but governance is what allows innovation to thrive. When teams design AI systems with ethical principles, transparency, and accountability built in from inception, they create technology that is not only compliant but also trusted – by customers, regulators, and the public alike.
One practical step is to establish an AI risk committee within the organisation. Such a body can guide development teams on responsible deployment, assess potential harms, and ensure innovation doesn’t come at the expense of safety or fairness.
If we want AI to deliver lasting value, not just short-term gains, we must build it on a foundation of ethics and understanding. After all, governance is not red tape. It’s the scaffolding that keeps progress standing upright.
References
Singla, A. et al. (2025) ‘The State of AI: How organizations are rewiring to capture value’. QuantumBlack by McKinsey. Available at: https://www.mckinsey.com/~/media/mckinsey/business%20functions/quantumblack/our%20insights/the%20state%20of%20ai/2025/the-state-of-ai-how-organizations-are-rewiring-to-capture-value_final.pd (Accessed: 26 October 2025)
Anekanta® (2020) ‘AI Governance Framework for Boards’. Anekanta Ltd. Available at: https://anekanta.co.uk/ai-governance-and-compliance/anekanta-responsible-ai-governance-framework-for-boards/ (Accessed: 26 October 2025)
Anekanta® (2024) ‘AI Governance Framework for Boards’. OECD.AI Catalogue of Tools and Metrics for Trustworthy AI. Available at: https://oecd.ai/en/catalogue/tools/responsible-ai-governance-framework-for-boards (Accessed: 26 October 2025)
Institute of Directors (IoD) (2025) ‘AI Governance in the Boardroom’. IoD. Available at: https://www.iod.com/resources/business-advice/ai-governance-in-the-boardroom/ (Accessed: 26 October 2025)
ISO (2023) ISO/IEC 42001:2023 Information technology — Artificial intelligence — Management system. Geneva: International Organization for Standardization. Available at: https://www.iso.org/standard/42001 (Accessed: 26 October 2025)
EU Law (2024) ‘Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence and amending certain Union legislative acts’ OJ L, 2024/1689. Available at: https://eur-lex.europa.eu/eli/reg/2024/1689/oj (Accessed: 27 October 2025)
Amnesty International (2021) ‘Xenophobic machines: Discrimination through unregulated use of algorithms in the Dutch childcare benefits scandal’. EUR 35/4686/2021. London: Amnesty International. Available at: https://www.amnesty.org/en/documents/eur35/4686/2021/en/ (Accessed: 26 October 2025)



